Step 3: Develop Data
This step describes how to use the movie information and rating data to identify the 10 top-rated movies and the 10 most frequently scored movies. The jobs run periodically, and the results are exported to tables every day for data analysis.
Creating DWS SQL Script top_rating_movie for Storing 10 Top-rated Movies
To find the 10 top-rated movies: calculate each movie's total score and the number of users who rated it, filter out movies scored by three or fewer users, and then return the movie names, average scores, and the number of raters.
- Log in to the DataArts Studio console. Locate an instance and click Access. On the displayed page, locate a workspace and click DataArts Factory.
Figure 1 DataArts Factory
- Create a DWS SQL script, and enter the DWS SQL statements in the editor.
Figure 2 Creating a script
- In the SQL editor, enter the following SQL statements and click Execute to calculate the 10 top-rated movies from the movies_item and ratings_item tables and save the result to the top_rating_movie table.
SET SEARCH_PATH TO dgc;
insert overwrite into top_rating_movie
select a.movieTitle,
       b.ratings / b.rating_user_number as avg_rating,
       b.rating_user_number
from movies_item a,
     (
       select movieId, sum(rating) ratings, count(1) as rating_user_number
       from ratings_item
       group by movieId
     ) b
where rating_user_number > 3
  and a.movieId = b.movieId
order by avg_rating desc
limit 10
Figure 3 Script (top_rating_movie)
- After debugging the script, click Save and Submit to submit the script and name it top_rating_movie. This script will be referenced later in Developing and Scheduling a Job.
- After the script is saved and executed successfully, you can use the following SQL statement to view data in the top_rating_movie table. You can also download or dump the table data by referring to Figure 4.
SET SEARCH_PATH TO dgc;
SELECT * FROM top_rating_movie
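The ranking logic of the top_rating_movie script can be sketched in plain Python for clarity. This is an illustrative sketch only, not DataArts Factory code; the function name and the in-memory data shapes are assumptions, while the column semantics follow the tutorial's schema:

```python
from collections import defaultdict

def top_rated_movies(movies, ratings, limit=10):
    """Mirror the top_rating_movie script: aggregate total score and
    rater count per movie, keep movies with more than three raters,
    and sort by average rating, descending.

    movies:  dict of movieId -> movieTitle
    ratings: iterable of (movieId, rating) pairs
    """
    totals = defaultdict(lambda: [0.0, 0])  # movieId -> [score sum, rater count]
    for movie_id, rating in ratings:
        totals[movie_id][0] += rating
        totals[movie_id][1] += 1
    rows = [
        (movies[mid], s / n, n)
        for mid, (s, n) in totals.items()
        if n > 3 and mid in movies          # same filter as rating_user_number > 3
    ]
    rows.sort(key=lambda r: r[1], reverse=True)  # order by avg_rating desc
    return rows[:limit]
```

Each returned tuple corresponds to one row of the top_rating_movie table: (movieTitle, avg_rating, rating_user_number).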
Creating DWS SQL Script top_active_movie for Storing 10 Most Frequently Scored Movies
To find the 10 most frequently scored movies: calculate each movie's average score and the number of users who rated it, keep only movies with an average score higher than 3.5, and then return the 10 movies with the most raters.
- Log in to the DataArts Studio console. Locate an instance and click Access. On the displayed page, locate a workspace and click DataArts Factory.
Figure 5 DataArts Factory
- Create a DWS SQL script, and enter the DWS SQL statements in the editor.
Figure 6 Creating a script
- In the SQL editor, enter the following SQL statements and click Execute to calculate the 10 most frequently scored movies from the movies_item and ratings_item tables and save the result to the top_active_movie table.
SET SEARCH_PATH TO dgc;
insert overwrite into top_active_movie
select *
from (
       select a.movieTitle,
              b.ratingSum / b.rating_user_number as avg_rating,
              b.rating_user_number
       from movies_item a,
            (
              select movieId, sum(rating) ratingSum, count(1) as rating_user_number
              from ratings_item
              group by movieId
            ) b
       where a.movieId = b.movieId
     ) t
where t.avg_rating > 3.5
order by rating_user_number desc
limit 10
Figure 7 Script (top_active_movie)
- After debugging the script, click Save and Submit to submit the script and name it top_active_movie. This script will be referenced later in Developing and Scheduling a Job.
- After the script is saved and executed successfully, you can use the following SQL statement to view data in the top_active_movie table. You can also download or dump the table data by referring to Figure 8.
SET SEARCH_PATH TO dgc;
SELECT * FROM top_active_movie
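The top_active_movie script differs from the previous one only in its filter and sort key, which a short Python sketch makes explicit. Again this is illustrative only; the function name and data shapes are assumptions:

```python
from collections import defaultdict

def most_active_movies(movies, ratings, min_avg=3.5, limit=10):
    """Mirror the top_active_movie script: keep movies whose average
    rating exceeds min_avg, then rank by number of raters, descending.

    movies:  dict of movieId -> movieTitle
    ratings: iterable of (movieId, rating) pairs
    """
    totals = defaultdict(lambda: [0.0, 0])  # movieId -> [score sum, rater count]
    for movie_id, rating in ratings:
        totals[movie_id][0] += rating
        totals[movie_id][1] += 1
    rows = [
        (movies[mid], s / n, n)
        for mid, (s, n) in totals.items()
        if mid in movies and s / n > min_avg  # same filter as t.avg_rating > 3.5
    ]
    rows.sort(key=lambda r: r[2], reverse=True)  # order by rating_user_number desc
    return rows[:limit]
```

Note the design difference: top_rating_movie filters on rater count and sorts by average, while top_active_movie filters on average and sorts by rater count.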
Developing and Scheduling a Job
Assume that the movie and rating tables in the OBS bucket change in real time. To update the top 10 lists every day, use the job orchestration and scheduling functions of DataArts Factory.
- Log in to the DataArts Studio console. Locate an instance and click Access. On the displayed page, locate a workspace and click DataArts Factory.
Figure 9 DataArts Factory
- Create a batch job named topmovie.
Figure 10 Creating a job
Figure 11 Configuring the job
- Open the created job; drag two CDM Job nodes, three Dummy nodes, and two DWS SQL nodes to the canvas; connect the nodes by dragging connection lines between them; and orchestrate the job shown in Figure 12.
Key nodes:
- Begin (Dummy node): serves only as a start identifier.
- movies_obs2dws (CDM Job node): In Node Properties, select the CDM cluster in Step 2: Integrate Data and associate it with the CDM job movies_obs2dws.
- ratings_obs2dws (CDM Job node): In Node Properties, select the CDM cluster in Step 2: Integrate Data and associate it with the CDM job ratings_obs2dws.
- Waiting (Dummy node): No operation is performed. It marks the completion of the preceding nodes.
- top_rating_movie (DWS SQL node): In Node Properties, associate this node with the DWS SQL script top_rating_movie you have created in Creating DWS SQL Script top_rating_movie.
- top_active_movie (DWS SQL node): In Node Properties, associate this node with the DWS SQL script top_active_movie you have created in Creating DWS SQL Script top_active_movie.
- Finish (Dummy node): serves only as an end identifier.
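The node dependencies described above form a small DAG, which can be sketched as an adjacency map; a topological sort then yields a valid execution order. This is a plain-Python illustration of the orchestration, not DataArts Factory code:

```python
from graphlib import TopologicalSorter  # standard library, Python 3.9+

# Each node maps to the set of nodes it depends on.
dag = {
    "movies_obs2dws": {"Begin"},
    "ratings_obs2dws": {"Begin"},
    "Waiting": {"movies_obs2dws", "ratings_obs2dws"},
    "top_rating_movie": {"Waiting"},
    "top_active_movie": {"Waiting"},
    "Finish": {"top_rating_movie", "top_active_movie"},
}

# static_order() returns the nodes so that every dependency runs first.
order = list(TopologicalSorter(dag).static_order())
```

The two CDM Job nodes can run in parallel, which is exactly why the Waiting node exists: it lets both migrations finish before either DWS SQL node starts.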
- After configuring the job, click the test button to test it.
- If the job runs properly, click Scheduling Setup in the right pane and configure the scheduling policy for the job.
Figure 13 Configuring scheduling
Notes:
- Scheduling Properties: The job is executed at 01:00 every day from Feb 09 to Feb 28, 2022.
- Dependency Properties: You can configure a dependency job for this job. You do not need to configure it in this practice.
- Cross-Cycle Dependency: Select Independent on the previous schedule cycle.
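For reference, the schedule configured above (daily at 01:00 from Feb 09 to Feb 28, 2022) produces one run per day. The expected run times can be sketched in plain Python; this is purely illustrative, not a DataArts Factory API:

```python
from datetime import datetime, timedelta

def daily_runs(start, end, hour=1):
    """Yield one run time per day at the given hour, inclusive of both dates."""
    day = start
    while day <= end:
        yield day.replace(hour=hour, minute=0, second=0)
        day += timedelta(days=1)

# The scheduling window from the tutorial: Feb 09 to Feb 28, 2022, at 01:00.
runs = list(daily_runs(datetime(2022, 2, 9), datetime(2022, 2, 28)))
```

This window yields 20 runs, the first at 2022-02-09 01:00 and the last at 2022-02-28 01:00.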
- Click Save and Submit, and then click Execute. The job will then run automatically every day, and the 10 top-rated and most frequently scored movies will be saved to the top_rating_movie and top_active_movie tables, respectively.
- If you want to check the job execution result, choose Monitoring > Monitor Instance in the left navigation pane.
Figure 14 Viewing the job execution status
You can also configure notifications to be sent through SMS message, email, or the console when a job encounters an exception or fails.
Now you have learned the data integration and development process based on movie scores. You can also analyze the ratings and viewing patterns of different types of movies to provide valuable input for marketing decisions, advertising, and user behavior prediction.