Overview
Introduction to DataArts Architecture
DataArts Architecture enables you to create entity-relationship (ER) models and dimensional models that standardize and visualize data development, and to output data governance methods that guide developers in their work.
DataArts Architecture, the core module of data governance, prepares data for processing and commercial use. It consists of four parts: data survey, standards design, model design, and metric design. DataArts Architecture supports DLI, PostgreSQL, DWS, MRS Hive, and MRS Spark connections, and supports MRS Hudi data sources through MRS Spark.
DataArts Architecture aims to build:
- A unified data classification system to manage all business data in directories for easier data classification, search, evaluation, and use.
- A unified data standards system that complies with national or industry standards to standardize every data record and field value, improving data quality and usability.
- A unified data model system and a tiered, top-down enterprise data system based on standard definitions and data modeling. These systems can be used to build enterprises' public data layers and subject libraries, facilitating data flow, sharing, creation, and innovation. They make data usage more efficient and greatly reduce data redundancy, disorder, isolation, inconsistency, and inaccuracy.
Model Design Method Overview
A data model reflects the relationships between objects and incorporates the key information features extracted from business requirements. It visually represents how an enterprise's internal information is organized. A data model must be able to simulate real-world scenarios, be easy to understand, and be easy to implement in an IT system.
DataArts Architecture supports both ER modeling and dimensional modeling.
- ER modeling
ER modeling describes the business processes within an enterprise. Compliant with the third normal form (3NF), ER modeling is designed for data integration: it combines and merges similar data by subject. ER modeling results cannot be used directly for decision-making, but they are a useful tool. (A minimal 3NF sketch follows this list.)
ER modeling involves three types of models: conceptual models, logical models, and physical models.
- Conceptual model: represents the business processes and business data involved in various activities, and illustrates the relationships between business entities.
- Logical model: much more detailed than a conceptual model. A logical model outlines business details based on entities, attributes, and relationships, and enables communication between IT and business staff. It is a set of standardized logical table structures that, based on business rules, outlines business objects, their data items, and the relationships between them.
- Physical model: an advanced version of the logical model, used to design the database architecture for data storage with full consideration of various technical factors, for example, whether the selected data warehouse is DWS or MRS Hive.
- Dimensional modeling
Dimensional modeling constructs models based on analysis and decision-making requirements and is mainly used for data analysis. It focuses on how to quickly analyze user requirements and respond rapidly to complicated, large-scale queries.
A multidimensional model consists of a fact table containing numeric metrics, which is associated through primary and foreign keys with a group of dimension tables containing descriptive attributes.
Typical dimensional models include the star model and, in some special scenarios, the snowflake model. (A star-schema sketch also follows this list.)
In the DataArts Architecture module of DataArts Studio, dimensional modeling involves constructing bus matrices to extract business facts and dimensions for model creation. You also need to sort out business requirements to construct metric systems and create summary models.
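To make the 3NF idea behind ER modeling concrete, here is a minimal sketch in Python. The entity and field names are illustrative assumptions, not output of DataArts Studio: a denormalized order record is split into subject-oriented entities so that every non-key attribute depends only on the key of its own entity.

```python
from dataclasses import dataclass

# Denormalized source record (violates 3NF): customer and product
# attributes are repeated on every order row.
# order_id | customer_name | customer_city | product_name | unit_price | quantity

# After ER modeling (3NF), each attribute depends only on the key of
# its own entity, and orders reference customers and products by ID.

@dataclass
class Customer:          # subject: customer master data
    customer_id: int
    name: str
    city: str

@dataclass
class Product:           # subject: product master data
    product_id: int
    name: str
    unit_price: float

@dataclass
class Order:             # business process: one row per order line
    order_id: int
    customer_id: int     # foreign key -> Customer
    product_id: int      # foreign key -> Product
    quantity: int
```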
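Similarly, the structure of a star model can be sketched as plain data. The following minimal Python example uses hypothetical tables (not warehouse DDL) to show a fact table of numeric metrics joined to dimension tables through foreign keys, followed by a typical aggregation over dimension attributes.

```python
# Dimension tables: descriptive attributes keyed by a primary key.
dim_date = {
    1: {"date": "2024-05-01", "month": "2024-05"},
    2: {"date": "2024-05-02", "month": "2024-05"},
}
dim_product = {10: {"name": "Widget", "category": "Hardware"}}

# Fact table: numeric metrics plus foreign keys into the dimensions.
fact_sales = [
    {"date_key": 1, "product_key": 10, "quantity": 3, "amount": 30.0},
    {"date_key": 2, "product_key": 10, "quantity": 5, "amount": 49.5},
]

# A typical dimensional query: aggregate a metric grouped by
# dimension attributes (monthly revenue per product category).
revenue = {}
for row in fact_sales:
    month = dim_date[row["date_key"]]["month"]
    category = dim_product[row["product_key"]]["category"]
    key = (month, category)
    revenue[key] = revenue.get(key, 0.0) + row["amount"]

print(revenue)  # {('2024-05', 'Hardware'): 79.5}
```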
DataArts Architecture Overview Page
On the DataArts Studio console, locate a workspace and click DataArts Architecture. The Overview page is displayed.

- My To-Dos
- The My To-Dos area displays the number of items in My Applications and Pending Review.
- Click the numbers above My Applications and Pending Review to access the My Applications and Pending Review pages, respectively.
- Assets
- The Assets area displays all the objects in DataArts Architecture.
- Click the number next to each object name to access the object management page.
- Quick Start
The Quick Start area displays the overall process for data governance. You can click a specific operation under the process to go to the corresponding page.
- DataArts Architecture Process
- This area displays the DataArts Architecture process and how the DataArts Architecture module interacts with other modules of DataArts Studio. For details about the DataArts Architecture process, see DataArts Architecture Use Process.
- You can move the cursor over the name of an object to view its description.
- You can click the name of any object supported by DataArts Studio to access the object management page.
Information Architecture of DataArts Architecture
An information architecture is a set of component specifications that describe the various types of information required for business operations and management decision-making, as well as the relationships between business entities. On the Information Architecture page, you can view and manage all tables, including business tables, dimension tables, fact tables, and summary tables.
On the DataArts Studio console, locate a workspace and click DataArts Architecture. In the navigation pane, choose Information Architecture.
- Search
At the top of the Information Architecture page, click Advanced Search, set the table name, type, data source, and other filters, and click Search to find a specific table. Then click the table name to access its details page.
- Create
Click Create to create a logical model, physical model, dimension table, fact table, or summary table. For details, see Designing Logical Models, Designing Physical Models, Creating Dimensions, Creating Fact Tables, or Creating Summary Tables.
- Import
Choose More > Import. (Currently, only tables can be imported.) Download the table template, fill it in, and upload it. Then click Close. For details, see Importing/Exporting Tables.
- Export
Choose More > Export to export a physical table model or DDL. For details, see Exporting a Table or DDL.
- Synchronize
Choose More > Synchronize to synchronize table information to DataArts Catalog as technical assets or synchronize logical models to DataArts Catalog as logical assets.
- Modify Subject
Choose More > Modify Subject to move the selected table to another subject.
- Delete
Choose More > Delete to delete a data table. A data table in the pending publishing, published, or pending suspension state cannot be deleted. A referenced data table cannot be deleted either.
- Suspend
Choose More > Suspend to suspend a published data table. A referenced data table cannot be suspended.
NOTE:
Edited versions refer to data that is re-edited after being published.
- Publish
Click Publish to publish a data table. Data tables in the pending publishing, pending suspension, or published (without edited versions) state cannot be published. (These state rules are summarized in the sketch after this list.)
- Associate Rule
Click Associate Rule and set the parameters to associate a quality rule with the object you select. For details, see Associating Quality Rules.
Figure 2 Associating a quality rule with an object
Generate Anomaly Data: If this option is selected, anomaly data is stored in the specified database based on the configured parameters.
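The delete, suspend, and publish constraints above form a small table lifecycle. The following Python sketch encodes the rules as stated on this page; the state names, and the draft and suspended states in particular, are paraphrased assumptions rather than an official DataArts Studio API.

```python
from enum import Enum

class TableState(Enum):
    DRAFT = "draft"                          # assumed initial state
    PENDING_PUBLISHING = "pending publishing"
    PUBLISHED = "published"                  # no edited versions
    PUBLISHED_EDITED = "published (with edited versions)"
    PENDING_SUSPENSION = "pending suspension"
    SUSPENDED = "suspended"

def can_delete(state: TableState, referenced: bool) -> bool:
    # Tables in the pending publishing, published, or pending suspension
    # state cannot be deleted; neither can a referenced table.
    blocked = {TableState.PENDING_PUBLISHING, TableState.PUBLISHED,
               TableState.PUBLISHED_EDITED, TableState.PENDING_SUSPENSION}
    return state not in blocked and not referenced

def can_suspend(state: TableState, referenced: bool) -> bool:
    # Only a published table can be suspended, and never a referenced one.
    published = {TableState.PUBLISHED, TableState.PUBLISHED_EDITED}
    return state in published and not referenced

def can_publish(state: TableState) -> bool:
    # Tables in the pending publishing, pending suspension, or published
    # (without edited versions) state cannot be published.
    blocked = {TableState.PENDING_PUBLISHING, TableState.PENDING_SUSPENSION,
               TableState.PUBLISHED}
    return state not in blocked
```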