- What's New
- Function Overview
- Service Overview
- Data Governance Methodology
- Preparations
- Getting Started
-
User Guide
- DataArts Studio Introduction
- Preparations Before Using DataArts Studio
- Management Center
-
DataArts Migration
- Overview
- Constraints
- Supported Data Sources
- Managing Clusters
-
Managing Links
- Creating Links
- Managing Drivers
- Managing Agents
- Managing Cluster Configurations
- Link to a Common Relational Database
- Link to a Database Shard
- Link to MyCAT
- Link to a Dameng Database
- Link to an RDS for MySQL/MySQL Database
- Link to an Oracle Database
- Link to DLI
- Link to Hive
- Link to HBase
- Link to HDFS
- Link to OBS
- Link to an FTP or SFTP Server
- Link to Redis/DCS
- Link to DDS
- Link to CloudTable
- Link to CloudTable OpenTSDB
- Link to MongoDB
- Link to Cassandra
- Link to Kafka
- Link to DMS Kafka
- Link to Elasticsearch/CSS
- Managing Jobs
- Auditing
- Performance Reference
-
Tutorials
- Creating an MRS Hive Link
- Creating a MySQL Link
- Migrating Data from MySQL to MRS Hive
- Migrating Data from MySQL to OBS
- Migrating Data from MySQL to DWS
- Migrating an Entire MySQL Database to RDS
- Migrating Data from Oracle to CSS
- Migrating Data from Oracle to DWS
- Migrating Data from OBS to CSS
- Migrating Data from OBS to DLI
- Migrating Data from MRS HDFS to OBS
- Migrating the Entire Elasticsearch Database to CSS
- Migrating Data from DDS to DWS
- More Cases and Practices
-
Advanced Operations
- Incremental Migration
- Using Macro Variables of Date and Time
- Migration in Transaction Mode
- Encryption and Decryption During File Migration
- MD5 Verification
- Field Conversion
- Migrating Files with Specified Names
- Regular Expressions for Separating Semi-structured Text
- Recording the Time When Data Is Written to the Database
- File Formats
- DataArts Architecture
-
DataArts Factory
- Overview
- Data Management
- Script Development
- Job Development
- Solution
- Execution History
- O&M and Scheduling
- Configuration and Management
-
Node Reference
- Node Overview
- Node Lineages
- CDM Job
- Rest Client
- Import GES
- MRS Kafka
- Kafka Client
- ROMA FDI Job
- DLI Flink Job
- DLI SQL
- DLI Spark
- DWS SQL
- MRS Spark SQL
- MRS Hive SQL
- MRS Presto SQL
- MRS Spark
- MRS Spark Python
- MRS Flink Job
- MRS MapReduce
- CSS
- Shell
- RDS SQL
- ETL Job
- Python
- Create OBS
- Delete OBS
- OBS Manager
- Open/Close Resource
- Data Quality Monitor
- Subjob
- For Each
- SMN
- Dummy
- EL Expression Reference
- Usage Guidance
- DataArts Quality
- DataArts Catalog
- DataArts DataService
- Error Codes
-
Best Practices
-
Advanced Data Migration Guidance
- Incremental Migration
- Using Macro Variables of Date and Time
- Migration in Transaction Mode
- Encryption and Decryption During File Migration
- MD5 Verification
- Field Conversion
- Migrating Files with Specified Names
- Regular Expressions for Separating Semi-structured Text
- Recording the Time When Data Is Written to the Database
- File Formats
- Advanced Data Development Guidance
- Cross-Workspace DataArts Studio Data Migration
- Preventing an IAM User from Logging In to DataArts Studio by Setting Specific Conditions
- Scheduling a CDM Job by Transferring Parameters Using DataArts Factory
- Incremental Migration on CDM Supported by DLF
- Creating Table Migration Jobs in Batches Using CDM Nodes
- Practice: Data Development Based on E-commerce BI Reports
- Practice: Data Integration and Development Based on Movie Scores
- Practice: Data Governance Based on Taxi Trip Data
- Case: Trade Data Statistics and Analysis
- SDK Reference
-
API Reference
- Before You Start
- API Overview
- Calling APIs
- Application Cases
- DataArts Migration APIs
-
DataArts Factory APIs
- Connection Management APIs
- Script Development APIs
- Resource Management APIs
- Job Development APIs
-
APIs to Be Taken Offline
- Creating a Job
- Editing a Job
- Viewing a Job List
- Viewing Job Details
- Exporting a Job
- Batch Exporting Jobs
- Importing a Job
- Executing a Job Immediately
- Starting a Job
- Viewing Running Status of a Real-Time Job
- Viewing a Job Instance List
- Viewing Job Instance Details
- Querying a System Task
- Creating a Script
- Modifying a Script
- Querying a Script
- Querying a Script List
- Querying the Execution Result of a Script Instance
- Creating a Resource
- Modifying a Resource
- Querying a Resource
- Querying a Resource List
- Importing a Connection
-
DataArts Architecture APIs
- Overview
- Data Standard APIs
- Lookup Table Management APIs
- Catalog APIs
- Data Standard Template APIs
- Approval Management APIs
- Subject Management APIs
- Subject Level APIs
- Atomic Metric APIs
- Derivative Metric APIs
- Compound Metric APIs
- Dimension APIs
- Filter APIs
- Dimension Table APIs
- Fact Table APIs
- Summary Table APIs
- Business Metric APIs
- ER Modeling APIs
- Import/Export APIs
- Catalog Management APIs
- Version Information APIs
- DataArts Quality APIs
- DataArts Catalog APIs
-
Data Lake Mall APIs
-
API Management
- Creating an API
- Debugging an API
- Querying the API List
- Updating an API
- Querying API Information
- Deleting APIs
- Publishing an API
- Unpublishing, Suspending, and Restoring an API
- Authorizing an API to Apps
- Performing API Authorization Operations
- Querying API Publishing Messages in DLM Exclusive
- Querying API Debugging Messages in DLM Exclusive
- Querying Instances for API Operations in DLM Exclusive
- Authorization Management
- Application Management
- Message Management
-
Service Catalog Management
- Creating a Service Catalog
- Updating a Service Catalog
- Querying a Service Catalog
- Deleting Service Catalogs
- Moving a Catalog to Another Catalog
- Moving APIs to Another Catalog
- Obtaining the List of APIs and Catalogs in a Catalog
- Obtaining the List of APIs in a Catalog
- Obtaining the List of Sub-Catalogs in a Catalog
- Obtaining the ID of a Catalog Through Its Path
- Obtaining the Path of a Catalog Through Its ID
- Obtaining the Paths to a Catalog Through Its ID
- Gateway Management
- App Management
-
Overview APIs
- Querying API Overview
- Querying App Overview
- Querying Top N Services Called by an API
- Querying Top N Services Used by an App
- Querying API Statistics Details
- Querying App Statistics Details
- Querying API Dashboard Data Details
- Querying Data Details of a Specified API Dashboard
- Querying App Dashboard Data Details
- Querying Top N Apps Called by a Specified API
-
API Management
- Appendix
-
FAQs
-
Consultation and Billing
- Regions and AZs
- Can DataArts Studio Be Deployed in a Local Data Center or on a Private Cloud?
- What Should I Do If a User Cannot View Existing Workspaces After I Have Assigned the Required Policy to the User?
- Can I Delete DataArts Studio Workspaces?
- Can I Transfer a Purchased or Trial Instance to Another Account?
- Does DataArts Studio Support Version Upgrade?
- Does DataArts Studio Support Version Downgrade?
- How Do I View the DataArts Studio Instance Version?
-
Management Center
- What Are the Precautions for Creating Data Connections?
- Why Do DWS/Hive/HBase Data Connections Fail to Obtain the Information About Database or Tables?
- Why Are MRS Hive/HBase Clusters Not Displayed on the Page for Creating Data Connections?
- What Should I Do If the Connection Test Fails When I Enable the SSL Connection During the Creation of a DWS Data Connection?
- Can I Create Multiple Data Connections in a Workspace in Proxy Mode?
- Should I Choose a Direct or a Proxy Connection When Creating a DWS Connection?
- How Do I Migrate the Data Development Jobs and Data Connections from One Workspace to Another?
- Can I Delete Workspaces?
-
DataArts Migration
- General
-
Functions
- Does CDM Support Incremental Data Migration?
- Does CDM Support Field Conversion?
- What Component Versions Are Recommended for Migrating Hadoop Data Sources?
- What Data Formats Are Supported When the Data Source Is Hive?
- Can I Synchronize Jobs to Other Clusters?
- Can I Create Jobs in Batches?
- Can I Schedule Jobs in Batches?
- How Do I Back Up CDM Jobs?
- How Do I Configure the Connection If Only Some Nodes in the HANA Cluster Can Communicate with the CDM Cluster?
- How Do I Use Java to Invoke CDM RESTful APIs to Create Data Migration Jobs?
- How Do I Connect the On-Premises Intranet or Third-Party Private Network to CDM?
- How Do I Set the Number of Concurrent Extractors for a CDM Migration Job?
- Does CDM Support Real-Time Migration of Dynamic Data?
- How Do I Obtain the Current Time Using an Expression?
-
Troubleshooting
- What Can I Do If Error Message "Unable to execute the SQL statement" Is Displayed When I Import Data from OBS to SQL Server?
- What Should I Do If the MongoDB Connection Migration Fails?
- What Should I Do If a Hive Migration Job Is Suspended for a Long Period of Time?
- What Should I Do If an Error Is Reported Because the Field Type Mapping Does Not Match During Data Migration Using CDM?
- What Should I Do If a JDBC Connection Timeout Error Is Reported During MySQL Migration?
- What Should I Do If a CDM Migration Job Fails After a Link from Hive to DWS Is Created?
- How Do I Use CDM to Export MySQL Data to an SQL File and Upload the File to an OBS Bucket?
- What Should I Do If CDM Fails to Migrate Data from OBS to DLI?
- What Should I Do If a CDM Connector Reports the Error "Configuration Item [linkConfig.iamAuth] Does Not Exist"?
- What Should I Do If Error Message "Configuration Item [linkConfig.createBackendLinks] Does Not Exist" Is Displayed During Data Link Creation or Error Message "Configuration Item [throttlingConfig.concurrentSubJobs] Does Not Exist" Is Displayed During Job Creation?
- What Should I Do If Message "CORE_0031:Connect time out. (Cdm.0523)" Is Displayed During the Creation of an MRS Hive Link?
- What Should I Do If Message "CDM Does Not Support Auto Creation of an Empty Table with No Column" Is Displayed When I Enable Auto Table Creation?
- What Should I Do If I Cannot Obtain the Schema Name When Creating an Oracle Relational Database Migration Job?
- What Should I Do If invalid input syntax for integer: "true" Is Displayed During MySQL Database Migration?
-
DataArts Architecture
- What Is the Relationship Between Lookup Tables and Data Standards?
- What Is the Difference Between ER Modeling and Dimensional Modeling?
- What Data Modeling Methods Are Supported by DataArts Architecture?
- How Can I Use Standardized Data?
- Does DataArts Architecture Support Database Reverse?
- What Are the Differences Between the Metrics in DataArts Architecture and DataArts Quality?
- Why Does a Table Remain Unchanged When I Have Updated It in DataArts Architecture?
- Can I Configure Lifecycle Management for Tables?
-
DataArts Factory
- How Many Jobs Can Be Created in DataArts Factory? Is There a Limit on the Number of Nodes in a Job?
- Why Is There a Large Difference Between Job Execution Time and Start Time of a Job?
- Will Subsequent Jobs Be Affected If a Job Fails to Be Executed During Scheduling of Dependent Jobs? What Should I Do?
- What Should I Pay Attention to When Using DataArts Studio to Schedule Big Data Services?
- What Are the Differences and Connections Among Environment Variables, Job Parameters, and Script Parameters?
- What Do I Do If Node Error Logs Cannot Be Viewed When a Job Fails?
- What Should I Do If the Agency List Fails to Be Obtained During Agency Configuration?
- How Do I Locate Job Scheduling Nodes with a Large Number?
- Why Cannot Specified Peripheral Resources Be Selected When a Data Connection Is Created in Data Development?
- Why Is There No Job Running Scheduling Log on the Monitor Instance Page After Periodic Scheduling Is Configured for a Job?
- Why Does the GUI Display Only the Failure Result but Not the Specific Error Cause After Hive SQL and Spark SQL Scripts Fail to Be Executed?
- What Do I Do If the Token Is Invalid During the Running of a Data Development Node?
- How Do I View Run Logs After a Job Is Tested?
- Why Does a Job Scheduled by Month Start Running Before the Job Scheduled by Day Is Complete?
- What Should I Do If Invalid Authentication Is Reported When I Run a DLI Script?
- Why Cannot I Select the Desired CDM Cluster in Proxy Mode When Creating a Data Connection?
- Why Is There No Job Running Scheduling Record After Daily Scheduling Is Configured for the Job?
- What Do I Do If No Content Is Displayed in Job Logs?
- Why Do I Fail to Establish a Dependency Between Two Jobs?
- What Should I Do If an Error Is Displayed During DataArts Studio Scheduling: The Job Does Not Have a Submitted Version?
- What Do I Do If an Error Is Displayed During DataArts Studio Scheduling: The Script Associated with Node XXX in the Job Is Not Submitted?
- What Should I Do If a Job Fails to Be Executed After Being Submitted for Scheduling and an Error Displayed: Depend Job [XXX] Is Not Running Or Pause?
- How Do I Create a Database and Data Table? Is the Database a Data Connection?
- Why Is No Result Displayed After a Hive Task Is Executed?
- Why Does the Last Instance Status on the Monitor Instance Page Only Display Succeeded or Failed?
- How Do I Create a Notification for All Jobs?
- What Is the Maximum Number of Nodes That Can Be Executed Simultaneously?
- What Is the Priority of the Startup User, Execution User, Workspace Agency, and Job Agency?
-
DataArts Quality
- What Are the Differences Between Quality Jobs and Comparison Jobs?
- How Can I Confirm that a Quality Job or Comparison Job Is Blocked?
- How Do I Manually Restart a Blocked Quality Job or Comparison Job?
- How Do I View Jobs Associated with a Quality Rule Template?
- What Should I Do If the System Displays a Message Indicating that I Do Not Have the MRS Permission to Perform a Quality Job?
- DataArts Catalog
-
DataArts DataService
- What Languages Do Data Lake Mall SDKs Support?
- What Can I Do If the System Displays a Message Indicating that the Proxy Fails to Be Invoked During API Creation?
- What Should I Do If the Background Reports an Error When I Access the Test App Through the Data Service API and Set Related Parameters?
- What Can I Do If an Error Is Reported When I Use an API?
- Can Operators Be Transferred When API Parameters Are Transferred?
- What Should I Do If the API Quota Provided by DataArts DataService Exclusive Has Been Used up?
Function Overview
-
DataArts Migration
-
DataArts Migration (also known as Cloud Data Migration, or CDM) enables batch data migration among more than 30 homogeneous and heterogeneous data sources. You can use it to ingest data from both on-premises and cloud-based data sources, including file systems, relational databases, data warehouses, NoSQL databases, big data services, and object storage.
DataArts Migration uses a distributed compute framework and concurrent processing techniques to help you migrate data in batches without any downtime and rapidly build desired data structures.
Available in: ALL
-
Cluster Management
-
The following cluster management capabilities are available:
- Creating a cluster
- Binding or unbinding an EIP
- Modifying cluster configurations
- Viewing cluster configurations, logs, and monitoring data
- Configuring monitoring metrics
-
-
Link Management
-
The following link management capabilities are available:
- Managing links to DLI, MRS Hive, Spark SQL, DWS, MySQL, and hosts
- Supporting various link modes, such as agent links, direct links, and MRS API links
-
-
Job Management
-
CDM can migrate tables or files between homogeneous and heterogeneous data sources. For details about data sources that support table/file migration, see Supported Data Sources.
CDM is applicable to data migration to the cloud, data exchange on the cloud, and data migration to on-premises service systems.
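Migration jobs can also be created programmatically through the CDM REST APIs (see DataArts Migration APIs in the API Reference). The sketch below is illustrative only: the endpoint, payload fields, and link names are assumptions to verify against the API Reference.
```python
# Minimal sketch of creating a CDM table migration job over REST.
# The endpoint, payload structure, and link names below are assumptions for
# illustration; check the DataArts Migration API Reference for the real schema.
import requests

IAM_TOKEN = "<token obtained from IAM>"
ENDPOINT = "https://cdm.<region>.example.com"        # hypothetical endpoint
PROJECT_ID = "<project_id>"
CLUSTER_ID = "<cdm_cluster_id>"

job = {
    "jobs": [{
        "job_type": "NORMAL_JOB",        # assumed value for a table/file migration job
        "name": "mysql2dws_orders",
        "from-link-name": "mysql_link",  # source link created beforehand
        "to-link-name": "dws_link",      # destination link created beforehand
    }]
}

resp = requests.post(
    f"{ENDPOINT}/v1.1/{PROJECT_ID}/clusters/{CLUSTER_ID}/cdm/job",
    json=job,
    headers={"X-Auth-Token": IAM_TOKEN},
    timeout=30,
)
resp.raise_for_status()
print(resp.json())
```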
-
-
-
DataArts Factory
-
The DataArts Factory module of DataArts Studio is a one-stop, agile big data development platform. It provides a visualized graphical development interface, multiple development types (script development and job development), fully hosted job scheduling and O&M monitoring, built-in industry data processing pipelines, one-click development, full-process visualization, and online collaborative development by multiple users. It also supports management of multiple big data cloud services, greatly lowering the threshold for using big data and helping you quickly build big data processing centers.
Available in: ALL
-
Data Management
-
The data management function helps you quickly establish data models and provides you with data entities for script and job development. With data management, you can:
- Manage multiple types of data warehouses, such as DWS and MRS Hive.
- Use the GUI and DDL to manage database tables.
-
-
Script Development
-
The following script development capabilities are available:
- An online script editor that allows multiple developers to collaboratively develop and debug SQL and Shell scripts
- Variables and functions
- Script version management
-
-
Job Development
-
The following job development capabilities are available:
- A graphical designer that allows you to quickly build a data processing workflow by drag-and-drop
- Preset job types, such as data integration, computing and analysis, resource management, and data monitoring, which can be orchestrated based on dependencies between jobs to complete complex data analysis and processing (see the sketch after this list)
- Various scheduling modes
- Importing and exporting jobs
- Monitoring job status and sending job result notifications
- Managing job versions
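Conceptually, a batch job is a set of nodes, the dependencies between them, and a schedule. The sketch below illustrates this with invented node names and a simplified structure; real job definitions are produced by the graphical designer or the job export function.
```python
# Simplified illustration of what a job boils down to: nodes, dependencies, and
# a schedule. Node names and the structure are invented for this example.
job = {
    "name": "daily_sales_pipeline",
    "schedule": {"type": "CRON", "cron": "0 2 * * *"},   # run daily at 02:00
    "nodes": [
        {"name": "ingest_orders", "type": "CDM Job", "depends_on": []},
        {"name": "build_dwd_orders", "type": "DWS SQL", "depends_on": ["ingest_orders"]},
        {"name": "quality_check", "type": "Data Quality Monitor", "depends_on": ["build_dwd_orders"]},
    ],
}

# Nodes with no unsatisfied dependencies are the first to be scheduled.
ready = [n["name"] for n in job["nodes"] if not n["depends_on"]]
print(ready)   # ['ingest_orders']
```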
-
-
O&M and Scheduling
-
You can view the statistics of job instances in charts. Currently, you can view four types of statistics:
- Today's Job Instance Scheduling
- Latest 7 Days' Job Instance Scheduling
- Latest 30 Days' Top 10 Ranking in Job Instance Execution Duration: View the detailed running records of the job instances with the longest execution durations.
- Latest 30 Days' Top 10 Ranking in Job Instance Failures: View the detailed running records of the job instances that ran abnormally.
-
-
Configuration and Management
-
The following configuration and management capabilities are available:
- Managing a host connection
- Managing resources
- Configuring environment variables
- Managing job labels
- Configuring agencies
- Backing up and restoring assets
-
-
-
Management Center
-
DataArts Studio Management Center provides instance management, workspace management, data connection management, and resource migration functions.
Available in: ALL
-
Instance Management
-
You can create an instance and configure the enterprise project, VPC, subnet, and security group on which the instance depends.
-
-
Workspace Management
-
A workspace enables its administrator to manage user (member) permissions, resources, and underlying compute engines of DataArts Studio.
The workspace is a basic unit for member management as well as role assignment. Each team has an independent workspace.
After an admin adds an account to a workspace and assigns the required permissions, the account user can access Management Center, DataArts Catalog, DataArts Quality, DataArts Architecture, DataArts DataService, DataArts Factory, and Data Integration modules.
-
-
Data Connection Management
-
You can create data connections by configuring data sources. Metadata management allows you to create, edit, and delete data connections, as well as test their connectivity. Data connections are used by collection tasks, business metrics, and data quality jobs. If the saved data source information changes, update the related data connections accordingly.
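Data connections can also be managed through REST APIs. The sketch below is illustrative only: the endpoint path, header names, and payload fields are assumptions to verify against the API Reference.
```python
# Illustration only: creating a data connection over REST. The endpoint path,
# headers, and payload fields are assumptions; see the API Reference for the
# actual schema.
import requests

ENDPOINT = "https://dataarts.<region>.example.com"     # hypothetical endpoint
PROJECT_ID = "<project_id>"
headers = {"X-Auth-Token": "<IAM token>", "workspace": "<workspace_id>"}   # assumed headers

connection = {
    "name": "dws_link",
    "type": "DWS",                                      # assumed data source type value
    "config": {"ip": "<dws-endpoint>", "port": "8000", "username": "<user>"},
}

resp = requests.post(f"{ENDPOINT}/v1/{PROJECT_ID}/data-connections",
                     json=connection, headers=headers, timeout=30)
print(resp.status_code, resp.text)
```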
-
-
Resource Migration
-
To migrate resources configured in one environment to another, you can use the resource migration function of DataArts Studio to export and import them. Resources that can be migrated include data services, metadata categories, metadata tags, metadata collection tasks, and data connections.
-
-
-
DataArts Architecture
-
DataArts Studio DataArts Architecture incorporates data governance methods. You can use it to visualize data governance operations, connect data from different layers, formulate data standards, and generate data assets. You can standardize your data through ER modeling and dimensional modeling. DataArts Architecture is a good option for building unified metric platforms. With DataArts Architecture, you can build standard metric systems to eliminate data ambiguity and facilitate communication between departments. In addition to unifying computing logic, you can use it to query data and explore data value by subject.
Available in: ALL
-
Information Architecture
-
An information architecture is a set of component specifications that describe various types of information required for business operations and management decision-making as well as the relationships of business entities. On the Information Architecture page, you can view and manage business tables, dimension tables, fact tables, and summary tables.
-
-
Process Design
-
Business Process Architecture (BPA) is developed based on value streams, and is used to guide and standardize the management of BT&IT requirements and ensure the efficiency of business requirement handling, analysis, and delivery. BPA prioritizes high-value requirements, which maximizes the business value, assists in business operations, and facilitates goal achievement.
-
-
Subject Design
-
A subject is a hierarchical architecture that classifies and defines data to help clarify data assets and specify relationships between subject areas and business objects.
You can design subjects in either of the following ways:
- Creating a subject
Manually create a subject.
- Importing a subject
If the subject information is complex, you are advised to import subjects in batches.
- You can download the provided subject design template, fill in the content, and upload the file to import the subjects in batches.
- You can export the subjects created in DataArts Architecture of a DataArts Studio instance to an Excel file. Then, import the Excel file.
After creating a subject, you can search for, edit, or delete it.
-
-
Lookup Table Management
-
A lookup table is also called a data dictionary table. It consists of enumerable data names and codes and stores the relationships between them. A lookup table provides the following functions:
- Standardizes business data and supplements mapping fields during data cleansing.
- Monitors the value range of business data during data quality monitoring.
- Enumerates dimensions during dimensional modeling.
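To make this concrete, the following sketch (with invented status codes and names) shows a lookup table supplementing a mapping field during cleansing and flagging values outside the enumerated range during quality monitoring.
```python
# Example lookup table: order-status codes and their standardized names.
# The codes and names are invented for illustration.
ORDER_STATUS = {
    "01": "Created",
    "02": "Paid",
    "03": "Shipped",
    "04": "Closed",
}

raw_rows = [{"order_id": 1, "status": "02"}, {"order_id": 2, "status": "99"}]

for row in raw_rows:
    code = row["status"]
    if code in ORDER_STATUS:
        # Data cleansing: supplement the mapping field from the lookup table.
        row["status_name"] = ORDER_STATUS[code]
    else:
        # Quality monitoring: the value is outside the enumerated range.
        print(f"order {row['order_id']}: unexpected status code {code!r}")
```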
-
-
Data Standards
-
Data standards describe data meanings and business rules that are stipulated and commonly recognized by enterprises and must be complied with by the enterprises.
A data standard, also called a data element, is the smallest unit of data used. It cannot be further divided. A data standard is a data unit whose definition, identifiers, representations, and allowed values are specified by a group of properties. You can associate data standards with databases of a wide range of businesses. The identifier, data type, expression format, and value range are the basis of data exchange. They are used to describe field metadata of a table and standardize data information stored in a field.
This section describes how to create a data standard. A created data standard can be associated with fields in a business table created during ER modeling, ensuring that fields in the business table comply with the specified data standards.
-
-
ER Modeling
-
ER modeling supports logical model design, physical model design, database reverse engineering, quality rule association, table import and export, and table viewing.
-
-
Dimensional Modeling
-
A dimension is a perspective from which you observe and analyze business data. It assists in data aggregation, drill-down, slicing, and analysis, and is typically used as a GROUP BY condition in SQL statements. Most dimensions have hierarchical structures, such as geographic dimensions (countries, regions, provinces/states, and cities) and time dimensions (year, quarter, and month). Creating dimensions is a way to standardize the existence and uniqueness of business entities (also called master data) from the top down.
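For example (table and column names invented), the query below aggregates a sales fact table along a geographic dimension and a monthly time dimension, which is exactly the GROUP BY role described above.
```python
# Illustrative SQL only: aggregate a hypothetical sales fact table along a
# geographic dimension and a monthly time dimension.
sql = """
SELECT d_region.country,
       d_date.month,
       SUM(f_sales.amount) AS total_amount
FROM   f_sales
JOIN   d_region ON f_sales.region_id = d_region.region_id
JOIN   d_date   ON f_sales.date_id   = d_date.date_id
GROUP BY d_region.country, d_date.month;
"""
print(sql)
```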
-
-
Business Metrics
-
After data survey and requirements analysis, you need to implement metrics. A metric is a statistical value that measures an overall characteristic of a target and reflects the status of a business activity in an enterprise. A metric consists of a name and a value: the name and its definition describe what is measured, and the value quantifies it for a specified time, location, and condition. Business metrics are used to guide technical metrics, and technical metrics are used to implement business metrics.
-
-
Technical Metrics
-
You can create atomic metrics, derivative metrics, compound metrics, and time filters.
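The relationship between these metric types is easiest to see in a small worked example (data and business names invented): an atomic metric is a direct aggregation, a derivative metric adds a time filter and a dimension, and a compound metric is computed from derivative metrics.
```python
# Worked example with invented data: atomic, derivative, and compound metrics.
orders = [
    {"region": "East", "month": "2024-01", "amount": 100.0},
    {"region": "East", "month": "2024-02", "amount": 150.0},
    {"region": "West", "month": "2024-01", "amount": 80.0},
]

# Atomic metric: total order amount (a single aggregation over the fact data).
total_amount = sum(o["amount"] for o in orders)

# Derivative metric: atomic metric + time filter + dimension,
# e.g. the January order amount in the East region.
east_jan_amount = sum(
    o["amount"] for o in orders if o["region"] == "East" and o["month"] == "2024-01"
)

# Compound metric: arithmetic over derivative metrics,
# e.g. the East region's January share of the total amount.
east_jan_share = east_jan_amount / total_amount
print(total_amount, east_jan_amount, round(east_jan_share, 3))
```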
-
-
Review Center
-
After the modeling and data processing tasks generated in the development environment are submitted, they are stored in the review center. After the tasks are approved on the Review Center page, these tasks are available in the production environment.
-
-
Configuration Center
-
Configuration Center supports standard template management, function configuration, field type management, DDL template management, and metric encoding rules.
-
-
-
DataArts Quality
-
DataArts Quality can monitor your metrics and data quality, and screen out unqualified data in a timely manner.
Available in: ALL
-
Monitoring Business Metrics
-
You can use DQC to monitor the quality of data in your databases. You can create metrics, rules, or scenarios that meet your requirements and schedule them to run in real time or on a recurring basis.
-
-
Monitoring Data Quality
-
DQC is a quality management tool for managing the quality of data in databases. You can filter out unqualified data in a single column or across columns, rows, and tables from the following perspectives: integrity, validity, timeliness, consistency, accuracy, and uniqueness. It can also be used for data standardization, automatic generation of standardization rules based on data standards, and periodic monitoring.
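As a hand-rolled illustration of two such single-column checks (completeness and validity) on invented data; DQC rules express the same logic declaratively:
```python
# Illustrative single-column checks on invented data: completeness (non-null
# ratio) and validity (values within an expected range).
rows = [{"age": 34}, {"age": None}, {"age": 210}, {"age": 28}]

non_null = [r["age"] for r in rows if r["age"] is not None]
completeness = len(non_null) / len(rows)                          # 0.75
validity = sum(0 <= v <= 120 for v in non_null) / len(non_null)   # ~0.67

print(f"completeness={completeness:.2f}, validity={validity:.2f}")
```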
-
-
Viewing Quality Reports
-
Quality scoring uses a five-point scale based on table-associated rules. Scores at higher levels, such as tables, business objects, and subject areas, are calculated as weighted averages of the rule scores below them.
You can query the quality scores of subject area groups, subject areas, business objects, tables, and table-associated rules.
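For instance (weights and scores invented), a table score is the weighted average of its rule scores on the five-point scale, and higher-level scores roll up the same way:
```python
# Worked example with invented weights: a table score as the weighted average
# of its rule scores on a five-point scale.
rule_scores = [(5.0, 0.5), (3.0, 0.3), (4.0, 0.2)]   # (score, weight) per rule
table_score = sum(s * w for s, w in rule_scores) / sum(w for _, w in rule_scores)
print(round(table_score, 2))   # 4.2
```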
-
-
-
DataArts Catalog
-
DataArts Studio provides enterprise-class metadata management to clarify information assets. It also supports data drilling and source tracing. It uses a data map to display data lineage and a panorama of data assets for intelligent data search, operations, and monitoring.
Available in: ALL
-
Data Maps
-
Data maps facilitate data search, analysis, development, mining, and operations. With data maps, you can search for data quickly and perform lineage and impact analysis with ease.
- Search: Before data analysis, a data map can be used to search for keywords to narrow down the scope of data to be analyzed.
- Details: A data map can be used to query table details by table names, letting you know how to use a table.
- Lineage: Through lineage analysis, a data map shows you how a table is generated and where it is applied, as well as the logic used for processing table fields.
-
-
Data Permissions
-
To ensure data security and controllability, you need to apply for permissions before using data tables. The Data Permissions module facilitates permission control, provides visualized application and approval processes, and supports permission audit and management, keeping data secure while making permission control convenient.
The Data Permissions module consists of Data Catalog Permissions, Data Table Permissions, and Review Center. The following functions are provided:
- Self-service permission application: You can select a data table and quickly apply for the needed permissions online.
- Permission audit: Administrators can quickly and easily view the personnel with the corresponding database table permissions and perform audit management.
- Permission revoking and returning: Administrators can revoke user permissions in a timely manner. Users can also proactively return unnecessary permissions.
- Permission approval and management: A visualized and process-based management and authorization mechanism facilitates post-event tracing.
-
-
Metadata Collection
-
Metadata is data about data. Metadata streamlines source data, data warehouses, and data applications, and records the entire process from data generation to data consumption. Metadata mainly refers to model definitions in the data warehouse and mappings between layers. It also describes the monitoring data status of the data warehouse and running status of ETL tasks. In the data warehouse system, metadata helps data warehouse administrators and developers easily locate the data they are looking for, improving the efficiency of data management and development.
Metadata is classified into technical metadata and business metadata by function.
- Technical metadata is data that stores technical details of a data warehouse system and is used to develop and manage data warehouses.
- Business metadata describes data in a data warehouse from the business perspective. It provides a semantic layer between users and actual systems, enabling business personnel who do not understand computer technologies to understand data in the data warehouse.
The metadata management module is the cornerstone of data lake governance. It allows you to create collection tasks with custom collection policies to collect technical metadata from data sources, customize business metamodels to import business metadata in batches, associate business metadata with technical metadata, and manage and apply them throughout the entire data link.
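As a small illustration (attribute names and values invented), the same table carries both kinds of metadata:
```python
# Illustration only: technical vs. business metadata for the same table.
table_metadata = {
    "technical": {
        "database": "dwd_sales",
        "table": "orders",
        "storage_format": "ORC",
        "produced_by_etl_job": "load_orders_daily",
    },
    "business": {
        "subject_area": "Sales",
        "description": "One row per customer order",
        "data_owner": "Sales operations team",
    },
}
print(table_metadata["business"]["description"])
```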
-
-
-
DataArts DataService
-
DataService aims to build a unified data service bus for enterprises to centrally manage internal and external API services. You can use DataService to generate APIs and register the APIs with DataService for unified management and publication.
DataService adjusts and controls API access requests based on throttling policies to provide multi-dimensional protection for backend services. API throttling allows you to limit the number of API calls by user, application, or time period. You can select a policy based on your service requirements.
DataService uses a serverless architecture. You only need to focus on the API query logic and do not need to worry about infrastructure such as the runtime environment. DataService supports elastic scaling of compute resources, significantly reducing O&M costs.
Available in: ALL
-
Generating APIs
-
DataService supports API generation in the wizard or script mode.
DataService can quickly generate data APIs based on data source tables in the wizard mode. You can configure a data API within several minutes without coding.
To meet personalized query requirements, DataService also supports API generation in the SQL script mode. It allows you to compile API query SQL statements and provides multi-table join, complex query conditions, and aggregation functions.
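For example (schema and parameter names invented), a script-mode API could be backed by a query like the one below, where ${city} stands for an API request parameter bound by DataService and the result aggregates two joined tables.
```python
# Illustrative only: the kind of SQL a script-mode API could be backed by.
# Table, column, and parameter names are invented.
api_sql = """
SELECT s.city,
       COUNT(o.order_id) AS order_count,
       SUM(o.amount)     AS total_amount
FROM   orders o
JOIN   stores s ON o.store_id = s.store_id
WHERE  s.city = ${city}
GROUP BY s.city;
"""
print(api_sql)
```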
-
-
Registering APIs
-
This section describes how to register APIs, manage APIs generated based on data tables, and publish APIs to API Gateway.
DataService supports the registration of RESTful APIs using GET and POST methods.
-
-
Publishing APIs
-
This section describes how to publish APIs on DataService to the service market.
DataService provides API hosting services through API Gateway, including API publishing, management, O&M, and sales. It helps you implement microservice aggregation, frontend and backend separation, and system integration in an easy, quick, cost-effective, and low-risk manner. With DataService, you can make your functions and data accessible to your partners and developers.
DataService is the last line of defense for opening up and calling APIs. It provides services such as permission management, traffic control, and access control. For security reasons, APIs that are generated or registered on DataService must be published to the service market before they can be made available to callers.
-
-
Reviewing APIs
-
The review center of DataService approves applications for operations such as publishing APIs, suspending APIs, authorization, and renewal.
- If an API developer wants to publish an API to the service market, remove an API from the service market, or reclaim the authorization of an application, the operation takes effect only after being approved by a reviewer.
- If an API caller wants to apply for API authorization or renewal, the operation takes effect only after being approved by a reviewer.
- An API developer or caller can cancel a pending application in the review center.
-
-
Calling APIs
-
You can create an application and apply for authorization, or authorize an application to use an API. To call an API, perform the following operations (a minimal code sketch follows this list):
- Obtain an API from the service market.
- Create an application and get authorized.
- After completing the preceding operations, you can call the API.
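In code, calling a published API after authorization is an ordinary HTTP request. The sketch below uses the Python requests library; the URL, header name, and parameters are placeholders, and the actual credential mechanism (for example, an app signature or token) depends on how the API was published.
```python
# Minimal sketch of calling a published API with the requests library.
# URL, header name, and parameters are placeholders; the real authentication
# mechanism depends on how the API was published.
import requests

url = "https://<gateway-endpoint>/<api-path>"     # obtained from the service market
params = {"city": "Shanghai"}                     # API request parameters
headers = {"x-api-key": "<credential issued to your authorized application>"}  # assumed header

resp = requests.get(url, params=params, headers=headers, timeout=10)
resp.raise_for_status()
print(resp.json())
```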
-
-
Operating APIs
-
You can create and delete throttling policies and bind a throttling policy to an API.
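Conceptually, a throttling policy is a set of per-application or per-user call limits over a time period. The sketch below only illustrates the idea with invented field names; real policies are configured in the DataService console or through its management APIs.
```python
# Illustration only: what a throttling policy conceptually contains and how it
# is evaluated. Field names are invented.
policy = {"period_seconds": 60, "max_calls_per_app": 100, "max_calls_per_user": 20}

calls_in_current_period = {"app": 97, "user": 20}

def allow(kind: str) -> bool:
    """Return True if another call of this kind is allowed in the current period."""
    return calls_in_current_period[kind] < policy[f"max_calls_per_{kind}"]

print(allow("app"), allow("user"))   # True False
```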
-
-