Step 1: Process Design
This guide uses the 2017 operations statistics of a taxi vendor as an example. Figure 1 shows the data governance process, which is based on requirement analysis and service survey.
Requirement Analysis
Requirement analysis helps you develop a data governance framework to support the process design.

In this example, the vendor faces the following pain points:
- No standardized model is available.
- There is no standard for data field naming.
- Data content is not standardized, and data quality is uncontrollable.
- Statistics standards are inconsistent, which hinders business decision-making.

Data governance is expected to deliver the following:
- Standardized data and models
- Unified statistics standards and high-quality data reports
- Data quality monitoring and alarms

Based on this analysis, the following statistics are required (a concrete sketch follows this list):
- Daily revenue statistics
- Monthly revenue statistics
- Statistics on the revenue proportion of each payment type
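
To make these statistics requirements concrete, the following minimal pandas sketch computes all three metrics from the source CSV. The file name and the column names (trip_date, total_amount, payment_type) are hypothetical placeholders; the actual schema is defined later during modeling.

```python
# Minimal sketch: compute the three required revenue statistics from the
# source CSV. The file name and column names (trip_date, total_amount,
# payment_type) are hypothetical placeholders for the taxi trip data.
import pandas as pd

trips = pd.read_csv("taxi_trip_2017.csv", parse_dates=["trip_date"])

# Daily revenue statistics
daily_revenue = trips.groupby(trips["trip_date"].dt.date)["total_amount"].sum()

# Monthly revenue statistics
monthly_revenue = trips.groupby(trips["trip_date"].dt.to_period("M"))["total_amount"].sum()

# Revenue proportion of each payment type
revenue_by_type = trips.groupby("payment_type")["total_amount"].sum()
payment_ratio = revenue_by_type / revenue_by_type.sum()

print(daily_revenue, monthly_revenue, payment_ratio, sep="\n\n")
```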
Service Survey
Before using DataArts Studio, conduct a service survey to understand the component functions required in the service process and analyze the subsequent service load.
| No. | Configuration Item | Information to Be Collected | Survey Result | Remarks |
| --- | --- | --- | --- | --- |
| 1 | Workspace | Organizations and relationships between the enterprise's big data departments | N/A | Properly plan workspaces to reduce the complexity of workspace dependencies. |
| | | Access control permissions on data and resources between departments | N/A | Control of user permissions and resource permissions is involved. |
| 2 | DataArts Migration | Data source from which the data is to be migrated, and the data source version | CSV source data files in the OBS bucket | N/A |
| | | Full data volume of each data source | 2,114 bytes | N/A |
| | | Daily incremental data volume of each data source | N/A | N/A |
| | | Types and versions of data sources at the destination | MRS Hive 3.1 | N/A |
| | | Data migration period: day, hour, minute, or real-time | Day | N/A |
| | | Network bandwidth between the source and destination data sources | 100 MB/s | N/A |
| | | Description of the network connectivity between the data sources and integration tools | N/A | N/A |
| | | Database migration: number of survey tables and maximum table size | N/A. In this example, data needs to be migrated from OBS to the database. | Understand the scale of database migration and whether the migration duration of the largest table is acceptable. |
| | | File migration: number of files, and whether any file reaches 1 TB | A CSV file smaller than 1 TB | N/A |
| 3 | DataArts Factory | Whether job orchestration and scheduling are required | Yes | N/A |
| | | Services required for orchestration and scheduling, such as MRS, GaussDB(DWS), and CDM | DataArts Migration and DataArts Quality of DataArts Studio, and MRS Hive | Understand the application scenarios of jobs to further evaluate whether the platform capabilities suit the customer scenarios. |
| | | Number of jobs | Fewer than 20 | Understand the job scale. Generally, the job scale is described by the number of operators and can be estimated from the number of tables. |
| | | Number of times a job is scheduled | Unlimited | Determine the DataArts Studio edition based on the scheduling quota of each edition. |
| | | Number of data developers | 1 | N/A |
| 4 | DataArts Architecture | Data sources and number of tables | Only one CSV file | Analyze the source data to understand the data source and its overall situation. |
| | | Services, requirements, and benefits | Standardize data and models, and collect revenue statistics in a flexible manner. | Analyze the destination to understand the purposes of data governance and digitalization. |
| | | Data survey, data overview, data standardization degree, and industry standards overview | N/A | Analyze the process to understand standards and quality compliance in the data governance process. |
| 5 | DataArts Quality | Requirements and benefits | Data quality monitoring | Monitor more data sources and rules. |
| | | Number of jobs | 1 | You can manually create dozens of jobs, or enable automatic generation of data quality jobs in DataArts Architecture. If the API for creating data quality jobs is called, more than 100 quality jobs can be created. |
| | | Application scenarios | Standardize and cleanse data at the DWI layer. | Generally, data quality is monitored from six dimensions before and after data processing. If any data that does not comply with the rules is detected, users receive an alarm notification. |
| 6 | DataArts Catalog | Data sources to support | MRS Hive | N/A |
| | | Data volume | A table contains fewer than 100 records. | A maximum of 1 million tables can be managed. |
| | | Scheduling frequency of metadata collection | N/A | Collection tasks can be executed by hour, day, or week. |
| | | Key metrics of metadata collection | N/A | The key metrics include the table name, field name, owner, description, and creation time. |
| | | Application scenarios of tags | N/A | Tags are highly related keywords that help you classify and describe assets to facilitate search. |
| 7 | DataArts Security | Data sources to which access is controlled | N/A | Access to the following components can be controlled: HDFS, Hive, HBase, Yarn, Kafka, Storm, and Elasticsearch. |
| | | Data security levels to be identified | N/A | A maximum of 10 data security levels can be defined. |
| | | Data sources to be masked | In this example, data in the MRS standard trip table needs to be masked and written to DWS. | Only DWS and MRS data sources can be masked. |
| | | Data sources that require watermarking | N/A | Watermarks can be embedded only for DWS and MRS data sources. |
| 8 | DataArts DataService | Open data sources | Revenue summary table | Generally, these data sources store the tables at the final layer of an established data warehouse. Such tables contain high-quality service data but fewer records, and can be displayed directly. |
| | | Daily data calls | N/A | If the database response takes a long time due to complex extraction logic, the data call volume decreases. |
| | | Number of peak data calls per second | N/A | The number of peak data calls per second depends on the edition in use and the data extraction logic. |
| | | Average latency of a single data call | N/A | The database response duration is related to the data extraction logic. |
| | | Whether data access records are required | N/A | N/A |
| | | Data access method: intranet or Internet | N/A | N/A |
| | | Number of DataArts DataService developers | 1 | N/A |
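
The survey above settles on a daily migration period, fewer than 20 jobs, and an unlimited scheduling quota. If a CDM migration job created for this scenario is later triggered programmatically instead of by the built-in scheduler, a request along the following lines could start it. This is a sketch only: the region endpoint, project ID, cluster ID, and job name are placeholder assumptions, and the request method and path should be verified against the current CDM API reference.

```python
# Sketch: trigger a CDM migration job over its REST API. All identifiers
# below are placeholder assumptions; verify the method and path against
# the current CDM API reference before use.
import requests

ENDPOINT = "https://cdm.example-region.myhuaweicloud.com"  # assumed region endpoint
PROJECT_ID = "<project_id>"
CLUSTER_ID = "<cdm_cluster_id>"
JOB_NAME = "taxi_trip_obs2hive"  # hypothetical job name
TOKEN = "<IAM token>"  # obtain a valid IAM token in advance

url = f"{ENDPOINT}/v1.1/{PROJECT_ID}/clusters/{CLUSTER_ID}/cdm/job/{JOB_NAME}/start"
resp = requests.put(url, headers={"X-Auth-Token": TOKEN, "Content-Type": "application/json"})
resp.raise_for_status()
print(resp.json())
```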