- What's New
- Function Overview
- Service Overview
- Data Governance Methodology
- Preparations
- Getting Started
-
User Guide
- DataArts Studio Development Process
-
Buying and Configuring a DataArts Studio Instance
- Buying a DataArts Studio Instance
-
Buying a DataArts Studio Incremental Package
- Introduction to Incremental Packages
- Buying a DataArts Migration Incremental Package
- Buying a DataArts Migration Resource Group Incremental Package
- Buying a DataArts DataService Exclusive Cluster Incremental Package
- Buying an Incremental Package for Job Node Scheduling Times/Day
- Buying an Incremental Package for Technical Asset Quantity
- Buying an Incremental Package for Data Model Quantity
- Accessing the DataArts Studio Instance Console
- Creating and Configuring a Workspace in Simple Mode
- (Optional) Creating and Using a Workspace in Enterprise Mode
- Managing DataArts Studio Resources
- Authorizing Users to Use DataArts Studio
-
Management Center
- Data Sources Supported by DataArts Studio
- Creating a DataArts Studio Data Connection
-
Configuring DataArts Studio Data Connection Parameters
- DWS Connection Parameters
- DLI Connection Parameters
- MRS Hive Connection Parameters
- MRS HBase Connection Parameters
- MRS Kafka Connection Parameters
- MRS Spark Connection Parameters
- MRS ClickHouse Connection Parameters
- MRS Hetu Connection Parameters
- MRS Impala Connection Parameters
- MRS Ranger Connection Parameters
- MRS Presto Connection Parameters
- Doris Connection Parameters
- OpenSource ClickHouse Connection Parameters
- RDS Connection Parameters
- Oracle Connection Parameters
- DIS Connection Parameters
- Host Connection Parameters
- Rest Client Connection Parameters
- Redis Connection Parameters
- SAP HANA Connection Parameters
- LTS Connection Parameters
- Configuring DataArts Studio Resource Migration
- Configuring Environment Isolation for a DataArts Studio Workspace in Enterprise Mode
- Typical Scenarios for Using Management Center
-
DataArts Migration (CDM Jobs)
- Overview
- Notes and Constraints
- Supported Data Sources
- Creating and Managing a CDM Cluster
-
Creating a Link in a CDM Cluster
- Creating a Link Between CDM and a Data Source
-
Configuring Link Parameters
- OBS Link Parameters
- PostgreSQL/SQLServer Link Parameters
- GaussDB(DWS) Link Parameters
- RDS for MySQL/MySQL Database Link Parameters
- Oracle Database Link Parameters
- DLI Link Parameters
- Hive Link Parameters
- HBase Link Parameters
- HDFS Link Parameters
- FTP/SFTP Link Parameters
- Redis Link Parameters
- DDS Link Parameters
- CloudTable Link Parameters
- MongoDB Link Parameters
- Cassandra Link Parameters
- DIS Link Parameters
- Kafka Link Parameters
- DMS Kafka Link Parameters
- CSS Link Parameters
- Elasticsearch Link Parameters
- Dameng Database Link Parameters
- SAP HANA Link Parameters
- Shard Link Parameters
- MRS Hudi Link Parameters
- MRS ClickHouse Link Parameters
- ShenTong Database Link Parameters
- CloudTable OpenTSDB Link Parameters
- GBASE Link Parameters
- YASHAN Link Parameters
- Uploading a CDM Link Driver
- Creating a Hadoop Cluster Configuration
-
Creating a Job in a CDM Cluster
- Table/File Migration Jobs
- Creating an Entire Database Migration Job
-
Configuring CDM Source Job Parameters
- From OBS
- From HDFS
- From HBase/CloudTable
- From Hive
- From DLI
- From FTP/SFTP
- From HTTP
- From PostgreSQL/SQL Server
- From DWS
- From SAP HANA
- From MySQL
- From Oracle
- From a Database Shard
- From MongoDB/DDS
- From Redis
- From DIS
- From Kafka/DMS Kafka
- From Elasticsearch or CSS
- From OpenTSDB
- From MRS Hudi
- From MRS ClickHouse
- From a ShenTong Database
- From a Dameng Database
- From YASHAN
- Configuring CDM Destination Job Parameters
- Configuring CDM Job Field Mapping
- Configuring a Scheduled CDM Job
- Managing CDM Job Configuration
- Managing a CDM Job
- Managing CDM Jobs
- Using Macro Variables of Date and Time
- Improving Migration Performance
-
Key Operation Guide
- Incremental Migration
- Migration in Transaction Mode
- Encryption and Decryption During File Migration
- MD5 Verification
- Configuring Field Converters
- Adding Fields
- Migrating Files with Specified Names
- Regular Expressions for Separating Semi-structured Text
- Recording the Time When Data Is Written to the Database
- File Formats
- Converting Unsupported Data Types
- Auto Table Creation
-
Tutorials
- Creating an MRS Hive Link
- Creating a MySQL Link
- Migrating Data from MySQL to MRS Hive
- Migrating Data from MySQL to OBS
- Migrating Data from MySQL to DWS
- Migrating an Entire MySQL Database to RDS
- Migrating Data from Oracle to CSS
- Migrating Data from Oracle to DWS
- Migrating Data from OBS to CSS
- Migrating Data from OBS to DLI
- Migrating Data from MRS HDFS to OBS
- Migrating the Entire Elasticsearch Database to CSS
- Error Codes
- DataArts Migration (Offline Jobs)
-
DataArts Migration (Real-Time Jobs)
- Overview of Real-Time Jobs
- Supported Data Sources
- Check Before Use
-
Enabling Network Communications
- Database Deployed in an On-premises IDC
- Database Deployed on Another Cloud
-
Database Deployed on Huawei Cloud
- Enabling Network Communications Directly for the Same Region and Tenant
- Using a VPC Peering Connection to Enable Network Communications for the Same Region but Different Tenants
- Using an Enterprise Router to Enable Network Communications for the Same Region but Different Tenants
- Using a Cloud Connection to Enable Cross-Region Network Communications
- Creating a Real-Time Migration Job
- Configuring a Real-Time Migration Job
- Real-Time Migration Job O&M
- Field Type Mapping
-
Job Performance Optimization
- Overview
- Optimizing Job Parameters
- Optimizing the Parameters of a Job for Migrating Data from MySQL to MRS Hudi
- Optimizing the Parameters of a Job for Migrating Data from MySQL to GaussDB(DWS)
- Optimizing the Parameters of a Job for Migrating Data from MySQL to DMS for Kafka
- Optimizing the Parameters of a Job for Migrating Data from DMS for Kafka to OBS
- Optimizing the Parameters of a Job for Migrating Data from Apache Kafka to MRS Kafka
- Optimizing the Parameters of a Job for Migrating Data from SQL Server to MRS Hudi
- Optimizing the Parameters of a Job for Migrating Data from PostgreSQL to GaussDB(DWS)
- Optimizing the Parameters of a Job for Migrating Data from Oracle to GaussDB(DWS)
- Optimizing the Parameters of a Job for Migrating Data from Oracle to MRS Hudi
-
Tutorials
- Overview
- Migrating a DRS Task to DataArts Migration
- Configuring a Job for Synchronizing Data from MySQL to MRS Hudi
- Configuring a Job for Synchronizing Data from MySQL to GaussDB(DWS)
- Configuring a Job for Synchronizing Data from MySQL to Kafka
- Configuring a Job for Synchronizing Data from DMS for Kafka to OBS
- Configuring a Job for Synchronizing Data from Apache Kafka to MRS Kafka
- Configuring a Job for Synchronizing Data from SQL Server to MRS Hudi
- Configuring a Job for Synchronizing Data from PostgreSQL to GaussDB(DWS)
- Configuring a Job for Synchronizing Data from Oracle to GaussDB(DWS)
- Configuring a Job for Synchronizing Data from Oracle to MRS Hudi
- Configuring a Job for Synchronizing Data from MongoDB to GaussDB(DWS)
- DataArts Architecture
-
DataArts Factory
- Overview
- Data Management
- Script Development
-
Job Development
- Job Development Process
- Creating a Job
- Developing a Pipeline Job
- Developing a Batch Processing Single-Task SQL Job
- Developing a Real-Time Processing Single-Task MRS Flink SQL Job
- Developing a Real-Time Processing Single-Task MRS Flink Jar Job
- Developing a Real-Time Processing Single-Task DLI Spark Job
- Setting Up Scheduling for a Job
- Submitting a Version
- Releasing a Job Task
- (Optional) Managing Jobs
- Notebook Development
- Solution
- Execution History
- O&M and Scheduling
- Configuration and Management
- Review Center
- Download Center
-
Node Reference
- Node Overview
- Node Lineages
- CDM Job
- Data Migration
- DIS Stream
- DIS Dump
- DIS Client
- Rest Client
- Import GES
- MRS Kafka
- Kafka Client
- ROMA FDI Job
- DLI Flink Job
- DLI SQL
- DLI Spark
- DWS SQL
- MRS Spark SQL
- MRS Hive SQL
- MRS Presto SQL
- MRS Spark
- MRS Spark Python
- MRS ClickHouse
- MRS Impala SQL
- MRS Flink Job
- MRS MapReduce
- CSS
- Shell
- RDS SQL
- ETL Job
- Python
- DORIS SQL
- ModelArts Train
- Create OBS
- Delete OBS
- OBS Manager
- Open/Close Resource
- Data Quality Monitor
- Subjob
- For Each
- SMN
- Dummy
- EL Expression Reference
- Simple Variable Set
-
Usage Guidance
- Referencing Parameters in Scripts and Jobs
- Setting the Job Scheduling Time to the Last Day of Each Month
- Configuring a Yearly Scheduled Job
- Using PatchData
- Obtaining the Output of an SQL Node
- Obtaining the Maximum Value and Transferring It to a CDM Job Using a Query SQL Statement
- IF Statements
- Obtaining the Return Value of a Rest Client Node
- Using For Each Nodes
- Using Script Templates and Parameter Templates
- Developing a Python Job
- Developing a DWS SQL Job
- Developing a Hive SQL Job
- Developing a DLI Spark Job
- Developing an MRS Flink Job
- Developing an MRS Spark Python Job
- DataArts Quality
- DataArts Catalog
-
DataArts Security
- Overview
- Dashboard
- Unified Permission Governance
- Sensitive Data Governance
- Sensitive Data Protection
- Data Security Operations
- Managing the Recycle Bin
-
DataArts DataService
- Overview
- Specifications
- Developing APIs in DataArts DataService
-
Calling APIs in DataArts DataService
- Applying for API Authorization
-
Calling APIs Using Different Methods
- API Calling Methods
- (Recommended) Using an SDK to Call an API Which Uses App Authentication
- Using an API Tool to Call an API Which Uses App Authentication
- Using an API Tool to Call an API Which Uses IAM Authentication
- Using an API Tool to Call an API Which Requires No Authentication
- Using a Browser to Call an API Which Requires No Authentication
- Viewing API Access Logs
- Configuring Review Center
- Audit Log
-
Best Practices
-
Advanced Data Migration Guidance
- Incremental Migration
- Using Macro Variables of Date and Time
- Migration in Transaction Mode
- Encryption and Decryption During File Migration
- MD5 Verification
- Configuring Field Converters
- Adding Fields
- Migrating Files with Specified Names
- Regular Expressions for Separating Semi-structured Text
- Recording the Time When Data Is Written to the Database
- File Formats
- Converting Unsupported Data Types
-
Advanced Data Development Guidance
- Dependency Policies for Periodic Scheduling
- Scheduling by Discrete Hours and Scheduling by the Nearest Job Instance
- Using PatchData
- Setting the Job Scheduling Time to the Last Day of Each Month
- Obtaining the Output of an SQL Node
- IF Statements
- Obtaining the Return Value of a Rest Client Node
- Using For Each Nodes
- Invoking DataArts Quality Operators Using DataArts Factory and Transferring Quality Parameters During Job Running
- Scheduling Jobs Across Workspaces
-
DataArts Studio Data Migration Configuration
- Overview
- Management Center Data Migration Configuration
- DataArts Migration Data Migration Configuration
- DataArts Architecture Data Migration Configuration
- DataArts Factory Data Migration Configuration
- DataArts Quality Data Migration Configuration
- DataArts Catalog Data Migration Configuration
- DataArts Security Data Migration Configuration
- DataArts DataService Data Migration Configuration
- Least Privilege Authorization
- How Do I View the Number of Table Rows and Database Size?
- Comparing Data Before and After Data Migration Using DataArts Quality
- Configuring Alarms for Jobs in DataArts Factory of DataArts Studio
- Scheduling a CDM Job by Transferring Parameters Using DataArts Factory
- Enabling Incremental Data Migration Through DataArts Factory
- Creating Table Migration Jobs in Batches Using CDM Nodes
- Automatic Construction and Analysis of Graph Data
- Simplified Migration of Trade Data to the Cloud and Analysis
- Migration of IoV Big Data to the Lake Without Loss
- Real-Time Alarm Platform Construction
-
Advanced Data Migration Guidance
- SDK Reference
-
API Reference
- Before You Start
- API Overview
- Calling APIs
-
DataArts Migration APIs
-
Cluster Management
- Querying Cluster Details
- Deleting a Cluster
- Querying All AZs
- Querying Supported Versions
- Querying Version Specifications
- Querying Details About a Flavor
- Querying the Enterprise Project IDs of All Clusters
- Querying the Enterprise Project ID of a Specified Cluster
- Querying a Specified Instance in a Cluster
- Modifying a Cluster
- Restarting a Cluster
- Starting a Cluster
- Stopping a Cluster (To Be Taken Offline)
- Creating a Cluster
- Querying the Cluster List
- Job Management
- Link Management
- Public Data Structures
-
Cluster Management
-
DataArts Factory APIs (V1)
- Script Development APIs
- Resource Management APIs
-
Job Development APIs
- Creating a Job
- Modifying a Job
- Viewing a Job List
- Viewing Job Details
- Viewing a Job File
- Exporting a Job
- Batch Exporting Jobs
- Importing a Job
- Executing a Job Immediately
- Starting a Job
- Stopping a Job
- Deleting a Job
- Stopping a Job Instance
- Rerunning a Job Instance
- Viewing Running Status of a Real-Time Job
- Viewing a Job Instance List
- Viewing Job Instance Details
- Querying System Task Details
-
Connection Management APIs (To Be Taken Offline)
- Creating a Connection (to Be Taken Offline)
- Querying a Connection List (to Be Taken Offline)
- Querying Connection Details (to Be Taken Offline)
- Modifying a Connection (to Be Taken Offline)
- Deleting a Connection (to Be Taken Offline)
- Exporting Connections (to Be Taken Offline)
- Importing Connections (to Be Taken Offline)
-
DataArts Factory APIs (V2)
-
Job Development APIs
- Creating a PatchData Instance
- Querying PatchData Instances
- Stopping a PatchData Instance
- Changing a Job Name
- Querying Release Packages
- Querying Details About a Release Package
- Configuring Job Tags
- Querying Alarm Notifications
- Releasing Task Packages
- Canceling Task Packages
- Querying the Instance Execution Status
- Querying Completed Tasks
- Querying Instances of a Specified Job
-
Job Development APIs
-
DataArts Architecture APIs
- Overview
- Information Architecture
- Data Standards
- Data Sources
- Process Architecture
- Data Standard Templates
- Approval Management
- Subject Management
- Subject Levels
- Catalog Management
- Atomic Metrics
- Derivative Metrics
- Compound Metrics
- Dimensions
- Filters
- Dimension Tables
- Fact Tables
- Summary Tables
- Business Metrics
- Version Information
-
ER Modeling
- Lookup Table Model List
- Creating a Table Model
- Updating a Table Model
- Deleting a Table Model
- Querying a Relationship
- Viewing Relationship Details
- Querying All Relationships in a Model
- Viewing Table Model Details
- Obtaining a Model
- Creating a Model Workspace
- Updating the Model Workspace
- Deleting a Model Workspace
- Viewing Details About a Model
- Querying Destination Tables and Fields (To Be Offline)
- Exporting DDL Statements of Tables in a Model
- Converting a Logical Model to a Physical Model
- Obtaining the Operation Result
- Import and Export
- Customized Items
- Quality Rules
- Tag API
- Lookup Table Management
- DataArts Quality APIs
-
DataArts DataService APIs
-
API Management
- Creating an API
- Querying an API List
- Updating an API
- Querying API Information
- Deleting APIs
- Publishing an API
- API Operations (Offline/Suspension/Resumption)
- Batch Authorization API (Exclusive Edition)
- Debugging an API
- API Authorization Operations (Authorization/Authorization Cancellation/Application/Renewal)
- Querying API Publishing Messages in DLM Exclusive
- Querying Instances for API Operations in DLM Exclusive
- Querying API Debugging Messages in DLM Exclusive
- Importing an Excel File Containing APIs
- Exporting an Excel File Containing APIs
- Exporting a .zip File Containing All APIs
- Downloading an Excel Template
- Application Management
- Message Management
- Authorization Management
-
Service Catalog Management
- Obtaining the List of APIs and Catalogs in a Catalog
- Obtaining the List of APIs in a Catalog
- Obtaining the List of Sub-Catalogs in a Catalog
- Updating a Service Catalog
- Querying the Service Catalog
- Creating a Service Catalog
- Deleting Directories in Batches
- Moving a Catalog to Another Catalog
- Moving APIs to Another Catalog
- Obtaining the ID of a Catalog Through Its Path
- Obtaining the Path of a Catalog Through Its ID
- Obtaining the Paths to a Catalog Through Its ID
- Querying the Service Catalog API List
- Gateway Management
- App Management
-
Overview
- Querying and Collecting Statistics on User-related Overview Development Indicators
- Querying and Collecting Statistics on User-related Overview Invoking Metrics
- Querying Top N API Services Invoked
- Querying Top N Services Used by an App
- Querying API Statistics Details
- Querying App Statistics
- Querying API Dashboard Data Details
- Querying Data Details of a Specified API Dashboard
- Querying App Dashboard Data Details
- Querying Top N APIs Called by a Specified API Application
- Cluster Management
-
API Management
- Application Cases
- Appendix
-
FAQs
-
Consultation and Billing
- How Do I Select a Region and an AZ?
- What Is a Database, Data Warehouse, Data Lake, and Huawei FusionInsight Intelligent Data Lake? What Are the Differences and Relationships Between Them?
- What Is the Relationship Between DataArts Studio and Huawei Horizon Digital Platform?
- What Are the Differences Between DataArts Studio and ROMA?
- Can DataArts Studio Be Deployed in a Local Data Center or on a Private Cloud?
- How Do I Create a Fine-Grained Permission Policy in IAM?
- How Do I Isolate Workspaces So That Users Cannot View Unauthorized Workspaces?
- What Should I Do If a User Cannot View Workspaces After I Have Assigned the Required Policy to the User?
- What Should I Do If Insufficient Permissions Are Prompted When I Am Trying to Perform an Operation as an IAM User?
- Can I Delete DataArts Studio Workspaces?
- Can I Transfer a Purchased or Trial Instance to Another Account?
- Does DataArts Studio Support Version Upgrade?
- Does DataArts Studio Support Version Downgrade?
- How Do I View the DataArts Studio Instance Version?
- Why Can't I Select a Specified IAM Project When Purchasing a DataArts Studio Instance?
- What Is the Session Timeout Period of DataArts Studio? Can the Session Timeout Period Be Modified?
- Will My Data Be Retained If My Package Expires or My Pay-per-Use Resources Are in Arrears?
- How Do I Check the Remaining Validity Period of a Package?
- Why Isn't the CDM Cluster in a DataArts Studio Instance Billed?
- Why Does the System Display a Message Indicating that the Number of Daily Executed Nodes Has Reached the Upper Limit? What Should I Do?
-
Management Center
- Which Data Sources Can DataArts Studio Connect To?
- What Are the Precautions for Creating Data Connections?
- What Should I Do If Database or Table Information Cannot Be Obtained Through a GaussDB(DWS)/Hive/HBase Data Connection?
- Why Are MRS Hive/HBase Clusters Not Displayed on the Page for Creating Data Connections?
- What Should I Do If a GaussDB(DWS) Connection Test Fails When SSL Is Enabled for the Connection?
- Can I Create Multiple Connections to the Same Data Source in a Workspace?
- Should I Select the API or Proxy Connection Type When Creating a Data Connection in Management Center?
- How Do I Migrate the Data Development Jobs and Data Connections from One Workspace to Another?
-
DataArts Migration (CDM Jobs)
- What Are the Differences Between CDM and Other Data Migration Services?
- What Are the Advantages of CDM?
- What Are the Security Protection Mechanisms of CDM?
- How Do I Reduce the Cost of Using CDM?
- Will I Be Billed If My CDM Cluster Does Not Use the Data Transmission Function?
- Why Am I Billed Pay per Use When I Have Purchased a Yearly/Monthly CDM Incremental Package?
- How Do I Check the Remaining Validity Period of a Package?
- Can CDM Be Shared by Different Tenants?
- Can I Upgrade a CDM Cluster?
- How Is the Migration Performance of CDM?
- What Is the Number of Concurrent Jobs for Different CDM Cluster Versions?
- Does CDM Support Incremental Data Migration?
- Does CDM Support Field Conversion?
- What Component Versions Are Recommended for Migrating Hadoop Data Sources?
- What Data Formats Are Supported When the Data Source Is Hive?
- Can I Synchronize Jobs to Other Clusters?
- Can I Create Jobs in Batches?
- Can I Schedule Jobs in Batches?
- How Do I Back Up CDM Jobs?
- What Should I Do If Only Some Nodes in a HANA Cluster Can Communicate with the CDM Cluster?
- How Do I Use Java to Invoke CDM RESTful APIs to Create Data Migration Jobs?
- How Do I Connect the On-Premises Intranet or Third-Party Private Network to CDM?
- Does CDM Support Parameters or Variables?
- How Do I Set the Number of Concurrent Extractors for a CDM Migration Job?
- Does CDM Support Real-Time Migration of Dynamic Data?
- Can I Stop CDM Clusters?
- How Do I Obtain the Current Time Using an Expression?
- What Should I Do If the Log Prompts that the Date Format Fails to Be Parsed?
- What Can I Do If the Map Field Tab Page Cannot Display All Columns?
- How Do I Select Distribution Columns When Using CDM to Migrate Data to GaussDB(DWS)?
- What Do I Do If the Error Message "value too long for type character varying" Is Displayed When I Migrate Data to DWS?
- What Can I Do If Error Message "Unable to execute the SQL statement" Is Displayed When I Import Data from OBS to SQL Server?
- What Should I Do If the Cluster List Is Empty, I Have No Access Permission, or My Operation Is Denied?
- Why Is Error ORA-01555 Reported During Migration from Oracle to DWS?
- What Should I Do If the MongoDB Connection Migration Fails?
- What Should I Do If a Hive Migration Job Is Suspended for a Long Period of Time?
- What Should I Do If an Error Is Reported Because the Field Type Mapping Does Not Match During Data Migration Using CDM?
- What Should I Do If a JDBC Connection Timeout Error Is Reported During MySQL Migration?
- What Should I Do If a CDM Migration Job Fails After a Link from Hive to GaussDB(DWS) Is Created?
- How Do I Use CDM to Export MySQL Data to an SQL File and Upload the File to an OBS Bucket?
- What Should I Do If CDM Fails to Migrate Data from OBS to DLI?
- What Should I Do If a CDM Connector Reports the Error "Configuration Item [linkConfig.iamAuth] Does Not Exist"?
- What Should I Do If Error "Configuration Item [linkConfig.createBackendLinks] Does Not Exist" or "Configuration Item [throttlingConfig.concurrentSubJobs] Does Not Exist" Is Reported?
- What Should I Do If Message "CORE_0031:Connect time out. (Cdm.0523)" Is Displayed During the Creation of an MRS Hive Link?
- What Should I Do If Message "CDM Does Not Support Auto Creation of an Empty Table with No Column" Is Displayed When I Enable Auto Table Creation?
- What Should I Do If I Cannot Obtain the Schema Name When Creating an Oracle Relational Database Migration Job?
- What Should I Do If invalid input syntax for integer: "true" Is Displayed During MySQL Database Migration?
-
DataArts Migration (Real-Time Jobs)
- Overview
- How Do I Troubleshoot a Network Disconnection Between the Data Source and Resource Group?
- Which Ports Must Be Allowed by the Data Source Security Group So That DataArts Migration Can Access the Data Source?
- How Do I Configure a Spark Periodic Task for Hudi Compaction?
- What Should I Do If an Error Is Reported During DDL Synchronization of New Columns in a Real-Time MySQL-to-DWS Synchronization Job?
- Why Does DWS Filter the Null Value of the Primary Key During Real-Time Synchronization from MySQL to DWS?
- What Should I Do If a Job for Synchronizing Data from Kafka to DLI in Real Time Fails and "Array element access needs an index starting at 1 but was 0" Is Displayed?
- How Do I Grant the Log Archiving, Query, and Parsing Permissions of an Oracle Data Source?
- How Do I Manually Delete Replication Slots from a PostgreSQL Data Source?
-
DataArts Architecture
- What Is the Relationship Between Lookup Tables and Data Standards?
- What Are the Differences Between ER Modeling and Dimensional Modeling?
- What Data Modeling Methods Are Supported by DataArts Architecture?
- How Can I Use Standardized Data?
- Does DataArts Architecture Support Database Reversing?
- What Are the Differences Between the Metrics in DataArts Architecture and DataArts Quality?
- Why Doesn't the Table in the Database Change After I Have Modified Fields in an ER or Dimensional Model?
- Can I Configure Lifecycle Management for Tables?
- How Should I Select a Subject When a Public Dimension (Date, Region, Supplier, or Product) Is Shared by Multiple Subject Areas?
-
DataArts Factory
- How Many Jobs Can Be Created in DataArts Factory? Is There a Limit on the Number of Nodes in a Job?
- Does DataArts Studio Support Custom Python Scripts?
- How Can I Quickly Rectify a Deleted CDM Cluster Associated with a Job?
- Why Is There a Large Difference Between Job Execution Time and Start Time of a Job?
- Will Subsequent Jobs Be Affected If a Job Fails to Be Executed During Scheduling of Dependent Jobs? What Should I Do?
- What Should I Pay Attention to When Using DataArts Studio to Schedule Big Data Services?
- What Are the Differences and Relationships Between Environment Variables, Job Parameters, and Script Parameters?
- What Should I Do If a Job Log Cannot Be Opened and Error 404 Is Reported?
- What Should I Do If the Agency List Fails to Be Obtained During Agency Configuration?
- Why Can't I Select Specified Peripheral Resources When Creating a Data Connection in DataArts Factory?
- Why Can't I Receive Job Failure Alarm Notifications After I Have Configured SMN Notifications?
- Why Is There No Job Running Scheduling Log on the Monitor Instance Page After Periodic Scheduling Is Configured for a Job?
- Why Isn't the Error Cause Displayed on the Console When a Hive SQL or Spark SQL Script Fails?
- What Should I Do If the Token Is Invalid During the Execution of a Data Development Node?
- How Do I View Run Logs After a Job Is Tested?
- Why Does a Job Scheduled by Month Start Running Before the Job Scheduled by Day Is Complete?
- What Should I Do If Invalid Authentication Is Reported When I Run a DLI Script?
- Why Cannot I Select a Desired CDM Cluster in Proxy Mode When Creating a Data Connection?
- Why Is There No Job Running Scheduling Record After Daily Scheduling Is Configured for the Job?
- What Do I Do If No Content Is Displayed in Job Logs?
- Why Do I Fail to Establish a Dependency Between Two Jobs?
- What Should I Do If an Error Is Reported During Job Scheduling in DataArts Studio, Indicating that the Job Has Not Been Submitted?
- What Should I Do If an Error Is Reported During Job Scheduling in DataArts Studio, Indicating that the Script Associated with Node XXX in the Job Has Not Been Submitted?
- What Should I Do If a Job Fails to Be Executed After Being Submitted for Scheduling and an Error Displayed: Depend Job [XXX] Is Not Running Or Pause?
- How Do I Create Databases and Data Tables? Do Databases Correspond to Data Connections?
- Why Is No Result Displayed After a Hive Task Is Executed?
- Why Is the Last Instance Status on the Monitor Instance Page Either Successful or Failed?
- How Do I Configure Notifications for All Jobs?
- What Is the Maximum Number of Nodes That Can Be Executed Simultaneously?
- Can I Change the Time Zone of a DataArts Studio Instance?
- How Do I Synchronize the Changed Names of CDM Jobs to DataArts Factory?
- Why Does the Execution of an RDS SQL Statement Fail and an Error Is Reported Indicating That hll Does Not Exist?
- What Should I Do If Error Message "The account has been locked" Is Displayed When I Am Creating a DWS Data Connection?
- What Should I Do If a Job Instance Is Canceled and Message "The node start execute failed, so the current node status is set to cancel." Is Displayed?
- What Should I Do If Error Message "Workspace does not exists" Is Displayed When I Call a DataArts Factory API?
- Why Don't the URL Parameters for Calling an API Take Effect in the Test Environment When the API Can Be Called Properly Using Postman?
- What Should I Do If Error Message "Agent need to be updated?" Is Displayed When I Run a Python Script?
- Why Is an Execution Failure Displayed for a Node in the Log When the Node Status Is Successful?
- What Should I Do If an Unknown Exception Occurs When I Call a DataArts Factory API?
- Why Is an Error Message Indicating an Invalid Resource Name Displayed When I Call a Resource Creation API?
- Why Does a PatchData Task Fail When All PatchData Job Instances Are Successful?
- Why Is a Table Unavailable When an Error Message Indicating that the Table Already Exists Is Displayed During Table Creation from a DWS Data Connection?
- What Should I Do If Error Message "The throttling threshold has been reached: policy user over ratelimit,limit:60,time:1 minute." Is Displayed When I Schedule an MRS Spark Job?
- What Should I Do If Error Message "UnicodeEncodeError: 'ascii' codec can't encode characters in position 63-64: ordinal not in range(128)" Is Displayed When I Run a Python Script?
- What Should I Do If an Error Message Is Displayed When I View Logs?
- What Should I Do If a Shell/Python Node Fails and Error "session is down" Is Reported?
- What Should I Do If a Parameter Value in a Request Header Contains More Than 512 Characters?
- What Should I Do If a Message Is Displayed Indicating that the ID Does Not Exist During the Execution of a DWS SQL Script?
- How Do I Check Which Jobs Invoke a CDM Job?
- What Should I Do If Error Message "The request parameter invalid" Is Displayed When I Use Python to Call the API for Executing Scripts?
- What Should I Do If the Default Queue of a New DLI SQL Script in DataArts Factory Has Been Deleted?
- Does the Event-based Scheduling Type in DataArts Factory Support Offline Kafka?
-
DataArts Quality
- What Are the Differences Between Quality Jobs and Comparison Jobs?
- How Can I Confirm that a Quality Job or Comparison Job Is Blocked?
- How Do I Manually Restart a Blocked Quality Job or Comparison Job?
- How Do I View Jobs Associated with a Quality Rule Template?
- What Should I Do If the System Displays a Message Indicating that I Do Not Have the MRS Permission to Perform a Quality Job?
- DataArts Catalog
-
DataArts Security
- Why Isn't Data Masked Based on a Specified Rule After a Data Masking Task Is Executed?
- What Should I Do If a Message Is Displayed Indicating that Necessary Request Parameters Are Missing When I Approve a GaussDB(DWS) Permission Application?
- What Should I Do If Error Message "FATAL: Invalid username/password,login denied" Is Displayed During the GaussDB(DWS) Connectivity Check When Fine-grained Authentication Is Enabled?
- What Should I Do If Error Message "Failed to obtain the database" Is Displayed When I Select a Database in DataArts Factory After Fine-grained Authentication Is Enabled?
- Why Does the System Display a Message Indicating Insufficient Permissions During Permission Synchronization to DLI?
-
DataArts DataService
- What Languages Do DataArts DataService SDKs Support?
- What Can I Do If the System Displays a Message Indicating that the Proxy Fails to Be Invoked During API Creation?
- What Should I Do If the Background Reports an Error When I Access the Test App Through the Data Service API and Set Related Parameters?
- How Many Times Can a Subdomain Name Be Accessed Using APIs Every Day?
- Can Operators Be Transferred When API Parameters Are Transferred?
- What Should I Do If No More APIs Can Be Created When the API Quota in the Workspace Is Used Up?
- How Can I Access APIs of DataArts DataService Exclusive from the Internet?
- How Can I Access APIs of DataArts DataService Exclusive Using Domain Names?
- What Should I Do If It Takes a Long Time to Obtain the Total Number of Data Records of a Table Through an API If the Table Contains a Large Amount of Data?
-
Consultation and Billing
-
More Documents
-
User Guide (Kuala Lumpur Region)
- Service Overview
- Preparations
-
User Guide
- Preparations Before Using DataArts Studio
- Management Center
-
DataArts Migration
- Overview
- Constraints
- Supported Data Sources
- Managing Clusters
-
Managing Links
- Creating Links
- Managing Drivers
- Managing Agents
- Managing Cluster Configurations
- Link to a Common Relational Database
- Link to a Database Shard
- Link to MyCAT
- Link to a Dameng Database
- Link to a MySQL Database
- Link to an Oracle Database
- Link to DLI
- Link to Hive
- Link to HBase
- Link to HDFS
- Link to OBS
- Link to an FTP or SFTP Server
- Link to Redis/DCS
- Link to DDS
- Link to CloudTable
- Link to CloudTable OpenTSDB
- Link to MongoDB
- Link to Cassandra
- Link to Kafka
- Link to DMS Kafka
- Link to Elasticsearch/CSS
- Managing Jobs
- Auditing
-
Tutorials
- Creating an MRS Hive Link
- Creating a MySQL Link
- Migrating Data from MySQL to MRS Hive
- Migrating Data from MySQL to OBS
- Migrating Data from MySQL to DWS
- Migrating an Entire MySQL Database to RDS
- Migrating Data from Oracle to CSS
- Migrating Data from Oracle to DWS
- Migrating Data from OBS to CSS
- Migrating Data from OBS to DLI
- Migrating Data from MRS HDFS to OBS
- Migrating the Entire Elasticsearch Database to CSS
- Advanced Operations
-
DataArts Factory
- Overview
- Data Management
- Script Development
- Job Development
- Solution
- Execution History
- O&M and Scheduling
- Configuration and Management
-
Node Reference
- Node Overview
- CDM Job
- Rest Client
- Import GES
- MRS Kafka
- Kafka Client
- ROMA FDI Job
- DLI Flink Job
- DLI SQL
- DLI Spark
- DWS SQL
- MRS Spark SQL
- MRS Hive SQL
- MRS Presto SQL
- MRS Spark
- MRS Spark Python
- MRS Flink Job
- MRS MapReduce
- CSS
- Shell
- RDS SQL
- ETL Job
- Python
- Create OBS
- Delete OBS
- OBS Manager
- Open/Close Resource
- Subjob
- For Each
- SMN
- Dummy
- EL Expression Reference
- Usage Guidance
-
FAQs
- Consultation
-
Management Center
- What Are the Precautions for Creating Data Connections?
- Why Do DWS/Hive/HBase Data Connections Fail to Obtain Database or Table Information?
- Why Are MRS Hive/HBase Clusters Not Displayed on the Page for Creating Data Connections?
- What Should I Do If the Connection Test Fails When I Enable the SSL Connection During the Creation of a DWS Data Connection?
- Can I Create Multiple Data Connections in a Workspace in Proxy Mode?
- Should I Choose a Direct or a Proxy Connection When Creating a DWS Connection?
- How Do I Migrate the Data Development Jobs and Data Connections from One Workspace to Another?
- Can I Delete Workspaces?
-
DataArts Migration
- General
-
Functions
- Does CDM Support Incremental Data Migration?
- Does CDM Support Field Conversion?
- What Component Versions Are Recommended for Migrating Hadoop Data Sources?
- What Data Formats Are Supported When the Data Source Is Hive?
- Can I Synchronize Jobs to Other Clusters?
- Can I Create Jobs in Batches?
- Can I Schedule Jobs in Batches?
- How Do I Back Up CDM Jobs?
- How Do I Configure the Connection If Only Some Nodes in the HANA Cluster Can Communicate with the CDM Cluster?
- How Do I Use Java to Invoke CDM RESTful APIs to Create Data Migration Jobs?
- How Do I Connect the On-Premises Intranet or Third-Party Private Network to CDM?
- How Do I Set the Number of Concurrent Extractors for a CDM Migration Job?
- Does CDM Support Real-Time Migration of Dynamic Data?
-
Troubleshooting
- What Can I Do If Error Message "Unable to execute the SQL statement" Is Displayed When I Import Data from OBS to SQL Server?
- Why Is Error ORA-01555 Reported During Migration from Oracle to DWS?
- What Should I Do If the MongoDB Connection Migration Fails?
- What Should I Do If a Hive Migration Job Is Suspended for a Long Period of Time?
- What Should I Do If an Error Is Reported Because the Field Type Mapping Does Not Match During Data Migration Using CDM?
- What Should I Do If a JDBC Connection Timeout Error Is Reported During MySQL Migration?
- What Should I Do If a CDM Migration Job Fails After a Link from Hive to DWS Is Created?
- How Do I Use CDM to Export MySQL Data to an SQL File and Upload the File to an OBS Bucket?
- What Should I Do If CDM Fails to Migrate Data from OBS to DLI?
- What Should I Do If a CDM Connector Reports the Error "Configuration Item [linkConfig.iamAuth] Does Not Exist"?
- What Should I Do If Error Message "Configuration Item [linkConfig.createBackendLinks] Does Not Exist" Is Displayed During Data Link Creation or Error Message "Configuration Item [throttlingConfig.concurrentSubJobs] Does Not Exist" Is Displayed During Job Creation?
- What Should I Do If Message "CORE_0031:Connect time out. (Cdm.0523)" Is Displayed During the Creation of an MRS Hive Link?
- What Should I Do If Message "CDM Does Not Support Auto Creation of an Empty Table with No Column" Is Displayed When I Enable Auto Table Creation?
- What Should I Do If I Cannot Obtain the Schema Name When Creating an Oracle Relational Database Migration Job?
-
DataArts Factory
- How Many Jobs Can Be Created in DataArts Factory? Is There a Limit on the Number of Nodes in a Job?
- Why Is There a Large Difference Between Job Execution Time and Start Time of a Job?
- Will Subsequent Jobs Be Affected If a Job Fails to Be Executed During Scheduling of Dependent Jobs? What Should I Do?
- What Should I Pay Attention to When Using DataArts Studio to Schedule Big Data Services?
- What Are the Differences and Connections Among Environment Variables, Job Parameters, and Script Parameters?
- What Do I Do If Node Error Logs Cannot Be Viewed When a Job Fails?
- What Should I Do If the Agency List Fails to Be Obtained During Agency Configuration?
- How Do I Locate Job Scheduling Nodes with a Large Number?
- Why Can't Specified Peripheral Resources Be Selected When a Data Connection Is Created in Data Development?
- Why Is There No Job Running Scheduling Log on the Monitor Instance Page After Periodic Scheduling Is Configured for a Job?
- Why Does the GUI Display Only the Failure Result but Not the Specific Error Cause After Hive SQL and Spark SQL Scripts Fail to Be Executed?
- What Do I Do If the Token Is Invalid During the Running of a Data Development Node?
- How Do I View Run Logs After a Job Is Tested?
- Why Does a Job Scheduled by Month Start Running Before the Job Scheduled by Day Is Complete?
- What Should I Do If Invalid Authentication Is Reported When I Run a DLI Script?
- Why Can't I Select the Desired CDM Cluster in Proxy Mode When Creating a Data Connection?
- Why Is There No Job Running Scheduling Record After Daily Scheduling Is Configured for the Job?
- What Do I Do If No Content Is Displayed in Job Logs?
- Why Do I Fail to Establish a Dependency Between Two Jobs?
- What Should I Do If an Error Is Displayed During DataArts Studio Scheduling: The Job Does Not Have a Submitted Version?
- What Do I Do If an Error Is Displayed During DataArts Studio Scheduling: The Script Associated with Node XXX in the Job Is Not Submitted?
- What Should I Do If a Job Fails to Be Executed After Being Submitted for Scheduling and an Error Displayed: Depend Job [XXX] Is Not Running Or Pause?
- How Do I Create a Database and Data Table? Does a Database Correspond to a Data Connection?
- Why Is No Result Displayed After a Hive Task Is Executed?
- Why Does the Last Instance Status on the Monitor Instance Page Only Display Succeeded or Failed?
- How Do I Create a Notification for All Jobs?
- How Many Nodes Can Be Executed Concurrently in Each DataArts Studio Version?
- What Is the Priority of the Startup User, Execution User, Workspace Agency, and Job Agency?
-
API Reference (Kuala Lumpur Region)
- Before You Start
- API Overview
- Calling APIs
- Application Cases
-
DataArts Migration APIs
- Cluster Management
- Job Management
- Link Management
-
Public Data Structures
-
Link Parameter Description
- Link to a Relational Database
- Link to OBS
- Link to HDFS
- Link to HBase
- Link to CloudTable
- Link to Hive
- Link to an FTP or SFTP Server
- Link to MongoDB
- Link to Redis/DCS (to Be Brought Offline)
- Link to Kafka
- Link to Elasticsearch/Cloud Search Service
- Link to DLI
- Link to CloudTable OpenTSDB
- Link to Amazon S3
- Link to DMS Kafka
-
Source Job Parameters
- From a Relational Database
- From Object Storage
- From HDFS
- From Hive
- From HBase/CloudTable
- From FTP/SFTP/NAS (to Be Brought Offline)/SFS (to Be Brought Offline)
- From HTTP/HTTPS
- From MongoDB/DDS
- From Redis/DCS (to Be Brought Offline)
- From DIS
- From Kafka
- From Elasticsearch/Cloud Search Service
- From OpenTSDB
- Destination Job Parameters
- Job Parameter Description
-
Link Parameter Description
-
DataArts Factory APIs
- Connection Management APIs
- Script Development APIs
- Resource Management APIs
- Job Development APIs
- Data Structure
-
APIs to Be Taken Offline
- Creating a Job
- Editing a Job
- Viewing a Job List
- Viewing Job Details
- Exporting a Job
- Batch Exporting Jobs
- Importing a Job
- Executing a Job Immediately
- Starting a Job
- Viewing Running Status of a Real-Time Job
- Viewing a Job Instance List
- Viewing Job Instance Details
- Querying a System Task
- Creating a Script
- Modifying a Script
- Querying a Script
- Querying a Script List
- Querying the Execution Result of a Script Instance
- Creating a Resource
- Modifying a Resource
- Querying a Resource
- Querying a Resource List
- Importing a Connection
- Appendix
-
User Guide (Kuala Lumpur Region)
- General Reference
Monitoring a Batch Job
In batch processing mode, data is processed periodically in batches based on the job-level scheduling plan. This mode suits scenarios with low real-time requirements. A batch job is a pipeline that consists of one or more nodes and is scheduled as a whole. It cannot run indefinitely; it must end after running for a certain period of time.
You can choose Monitor Job and click the Batch Job Monitoring tab to view the scheduling status, scheduling period, and start time of a batch job, and perform the operations listed in Table 1.

Table 1 Batch job monitoring operations

| Operation | Description |
|---|---|
| Filtering jobs by Job Name, Owner, CDM Job, Scheduling Identity, or Node Type | N/A |
| Filtering jobs by whether notifications have been configured, scheduling status, job tag, or next plan time | You can filter jobs for which no notification has been configured by notification type (such as exception or failure) so that you can set alarm notifications in batches. |
| Performing operations on jobs in a batch | Select multiple jobs and perform operations on them. |
| Viewing job instance status | In the Operation column of the last instance, you can view the run logs of the instance and rerun the instance. |
| Viewing node information of a job | Click a job name. On the displayed page, click a job node to view its associated jobs/scripts and monitoring information. On the same page, you can also view the job instances. For details, see Batch Job Monitoring: Job Instances. |
| Job scheduling operations | You can run, pause, recover, and stop scheduling, and configure scheduling. For details, see Batch Job Monitoring: Scheduling a Job. |
| Configuring notifications | In the Operation column of a job, choose More > Configure Notification. In the displayed dialog box, configure the notification parameters. |
| Monitoring instances | In the Operation column of a job, choose More > Monitor Instance to view the running records of all instances of the job. |
| Configuring scheduling information | In the Operation column of a job, click More and select Scheduling Setup. On the displayed job development page, you can view and configure the job scheduling information. You cannot configure scheduling information for a running job. |
| PatchData | In the Operation column of a job, choose More > PatchData. For details, see Batch Job Monitoring: PatchData. This function is available only for jobs that are scheduled periodically. |
| Adding a job tag | In the Operation column of a job, choose More > Add Job Tag. For details, see Batch Job Monitoring: Adding a Job Tag. |
| Viewing a job dependency graph | In the Operation column of a job, click More and select View Job Dependency Graph. For details, see Batch Job Monitoring: Viewing a Job Dependency Graph. |
| Exporting all data | Click Export All Data. In the displayed Export All Data dialog box, click OK. After the export is complete, go to the Download Center page to view the exported data. If the default storage path is not configured, you can set a storage path and select Set as default OBS path in the Export to OBS dialog box. A maximum of 30 MB of data can be exported; if there is more data, it is automatically truncated. The exported job instances map to job nodes. You cannot export data by selecting job names; instead, select the data to be exported by setting filter criteria. |
Click a job name. On the displayed page, view the job parameters, properties, and instances.
Click a node of a job to view the node properties, script content, and node monitoring information.
In addition, you can view the current job version and scheduling status; schedule, stop, or pause the job; and configure PatchData, notifications, or the update frequency for the job.
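If you prefer to check batch jobs programmatically rather than on the console, the job instance information described above can also be retrieved through the DataArts Factory REST APIs (see "Viewing a Job Instance List" under Job Development APIs in the API Reference). The following Python sketch only shows the general calling pattern: the endpoint format, the workspace header, the URL path, and the response field names are illustrative assumptions and must be confirmed against the API Reference.

```python
# Minimal sketch: listing the recent instances of a batch job over REST.
# The endpoint, path, "workspace" header, and response fields below are
# assumptions for illustration; confirm them in the API Reference.
import requests

ENDPOINT = "https://dayu-dlf.example-region.myhuaweicloud.com"  # assumed endpoint format
PROJECT_ID = "your-project-id"
WORKSPACE_ID = "your-workspace-id"
TOKEN = "your-iam-token"        # IAM token obtained in advance
JOB_NAME = "your-batch-job"     # hypothetical job name

resp = requests.get(
    f"{ENDPOINT}/v1/{PROJECT_ID}/jobs/{JOB_NAME}/instances",  # illustrative path
    headers={"X-Auth-Token": TOKEN, "workspace": WORKSPACE_ID},
    timeout=30,
)
resp.raise_for_status()
for inst in resp.json().get("instances", []):
    # Print the instance ID, status, and plan time so failed runs stand out.
    print(inst.get("instanceId"), inst.get("status"), inst.get("planTime"))
```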
Batch Job Monitoring: Job Instances
- Log in to the DataArts Studio console by following the instructions in Accessing the DataArts Studio Instance Console.
- On the DataArts Studio console, locate a workspace and click DataArts Factory.
- In the left navigation pane of DataArts Factory, choose Monitoring > Job Monitoring.
- Click the Batch Job Monitoring tab.
- Click a job name. On the displayed page, click the Job Instances tab to view job instances. You can perform the following operations:
- Select Show Instances to Be Generated and set the time range to filter job instances that are expected to be generated in the future.
NOTE:
A maximum of 100 instances expected to be generated can be displayed.
- Freeze or unfreeze job instances that are expected to be generated in the future. You can click Freeze or Unfreeze above the job instance list, or click More in the Operation column and select Freeze or Unfreeze.
NOTE:
Freeze: You can only freeze job instances that have not been generated or are in waiting state.
You cannot freeze job instances that have already been frozen.
When a job instance is frozen, it is considered failed, and its downstream jobs are suspended, executed, or canceled based on the failure policy configured for the job.
When job instances that have not been generated are frozen, you can view them on the Batch Job Monitoring page or filter them by status on the Monitor Instance page.
Unfreeze: You can unfreeze a job instance that has not been scheduled and has been frozen.
- Perform other operations on job instances, such as stopping, rerunning, and retrying job instances, continuing running job instances, making job instances succeed, viewing waiting job instances, and viewing job configuration. When viewing waiting job instances, you can click Remove Dependency in the Operation column to remove dependency on an upstream instance.
- If jobs require manual confirmation before execution, they are in the waiting confirmation state on the Batch Jobs page. After you click Execute, they enter the waiting execution state.
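The rerun operation mentioned above can likewise be scripted, for example to retry a batch of failed instances. The sketch below sends a rerun request for one job instance; the endpoint path, headers, and identifiers are illustrative assumptions, so refer to "Rerunning a Job Instance" in the API Reference for the authoritative request format.

```python
# Minimal sketch: rerunning a specific job instance over REST, mirroring the
# rerun action in the instance's Operation column. The path and headers are
# assumptions; see "Rerunning a Job Instance" in the API Reference.
import requests

ENDPOINT = "https://dayu-dlf.example-region.myhuaweicloud.com"  # assumed endpoint format
PROJECT_ID = "your-project-id"
WORKSPACE_ID = "your-workspace-id"
TOKEN = "your-iam-token"
JOB_NAME = "your-batch-job"    # hypothetical job name
INSTANCE_ID = "1234567890"     # hypothetical instance ID taken from the instance list

resp = requests.post(
    f"{ENDPOINT}/v1/{PROJECT_ID}/jobs/{JOB_NAME}/instances/{INSTANCE_ID}/restart",  # illustrative path
    headers={"X-Auth-Token": TOKEN, "workspace": WORKSPACE_ID},
    timeout=30,
)
print(resp.status_code)  # a 2xx response indicates the rerun request was accepted
```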
Batch Job Monitoring: Scheduling a Job
After developing a job, you can manage its scheduling tasks on the Monitor Job page. You can run, pause, restore, or stop scheduling.

- Log in to the DataArts Studio console by following the instructions in Accessing the DataArts Studio Instance Console.
- On the DataArts Studio console, locate a workspace and click DataArts Factory.
- In the left navigation pane of DataArts Factory, choose Monitoring > Job Monitoring.
- Click the Batch Job Monitoring tab.
NOTE:
You can filter batch processing jobs by scheduling type or scheduling frequency.
- In the Operation column of the job, click Execute, Pause, Restore, or Stop Scheduling.
If a dependent job has been configured for a batch job, you can select either Start Current Job Only or Start Current and Depended Jobs when submitting the batch job. For details about how to configure dependent jobs, see Setting Up Scheduling for a Job Using the Batch Processing Mode.
If the job is on a baseline task link or is depended on by other jobs, the system automatically displays a dialog box indicating the associated baseline or dependent jobs when you pause or stop scheduling.
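Starting and stopping scheduling can also be automated, for example when promoting jobs between environments. The sketch below wraps both actions in one helper; the endpoint paths and headers are assumptions, so see "Starting a Job" and "Stopping a Job" under Job Development APIs in the API Reference for the authoritative request formats.

```python
# Minimal sketch: starting or stopping a job's scheduling over REST, matching
# the Execute/Stop Scheduling actions described above. Paths and headers are
# assumptions; confirm them in the API Reference.
import requests

ENDPOINT = "https://dayu-dlf.example-region.myhuaweicloud.com"  # assumed endpoint format
PROJECT_ID = "your-project-id"
WORKSPACE_ID = "your-workspace-id"
TOKEN = "your-iam-token"
JOB_NAME = "your-batch-job"  # hypothetical job name

def set_scheduling(action: str) -> int:
    """Send a scheduling request for the job; action is 'start' or 'stop'."""
    resp = requests.post(
        f"{ENDPOINT}/v1/{PROJECT_ID}/jobs/{JOB_NAME}/{action}",  # illustrative path
        headers={"X-Auth-Token": TOKEN, "workspace": WORKSPACE_ID},
        timeout=30,
    )
    return resp.status_code

print(set_scheduling("start"))  # begin periodic scheduling
print(set_scheduling("stop"))   # stop scheduling
```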

Batch Job Monitoring: PatchData
A job generates a series of instances by executing scheduling tasks within a certain period of time. This series of instances is called PatchData. PatchData can be used to fix job instances that have data errors in historical records or to build job records for debugging programs.
Only periodically scheduled jobs support PatchData. For details about the execution records of PatchData, see Monitoring PatchData.
Do not modify the job configuration when PatchData is being performed. Otherwise, job instances generated during PatchData will be affected.
- Log in to the DataArts Studio console by following the instructions in Accessing the DataArts Studio Instance Console.
- On the DataArts Studio console, locate a workspace and click DataArts Factory.
- In the left navigation pane of DataArts Factory, choose Monitoring > Job Monitoring.
- Click the Batch Job Monitoring tab.
- In the Operation column of the job, choose More > Configure PatchData.
- Configure PatchData parameters based on Table 2.
Figure 4 PatchData parameters
Table 2 PatchData parameters

PatchData Name: Name of the automatically generated PatchData task. The value can be modified.

Job Name: Name of the job that requires PatchData.

Scheduling Time Type: Type of the date range for PatchData. The options are Consecutive date range and Discrete date ranges.

Date: Period of time when PatchData is required.
- If Scheduling Time Type is set to Consecutive date range, set the period of time when PatchData is required. If the date is later than the current time, the current time is displayed by default.
  NOTE:
  PatchData can be configured for a job multiple times. However, avoid configuring PatchData multiple times on the same date to prevent data duplication or disorder.
  If you select Patch data in reverse order of date, the patch data of each day is in positive sequence.
  NOTE:
  - This function is applicable when the data of each day is not coupled with each other.
  - The PatchData job will ignore the dependencies between the job instances created before this date.
- If Scheduling Time Type is set to Discrete date ranges, you also need to set the discrete date ranges. You can click Add Date Range to add multiple discrete date ranges for PatchData (at least one date range is required) and click Delete to delete discrete date ranges.
  NOTE:
  DataArts Studio does not support concurrent running of PatchData instances and periodic job instances of underlying services (such as CDM and DLI). To prevent PatchData instances from affecting periodic job instances and avoid exceptions, ensure that they do not run at the same time.

Run PatchData Tasks Periodically:
- Yes: PatchData tasks are executed based on the configured period. The first value indicates a specific number, and the second value indicates the period unit, for example, hours, days, weeks, or months.
  NOTE:
  If you set a period, PatchData tasks will be scheduled based on that period, regardless of whether the job itself is scheduled every few minutes, hours, or days. For example, if you want to patch data from 00:00 on Jan 1, 2023 to 00:00 on Feb 1, 2023 for an hourly job that starts at 01:00 every day and set the PatchData period to two days, PatchData tasks will be scheduled at 00:00 on Jan 1, 2023, 00:00 on Jan 3, 2023, 00:00 on Jan 5, 2023, and so on (see the example after this procedure). If the PatchData task scheduling period is in months and the first scheduling date falls on the last day of a month, PatchData tasks will be scheduled on the last day of each month.
- No: PatchData tasks are not executed periodically. Instead, the system executes PatchData tasks based on the existing rule.

Cycle: PatchData cycle. This parameter is required when Scheduling Time Type is set to Discrete date ranges. You can click Viewing Scheduling Details to view the execution time of the task instances in the current time segment.
  NOTE:
  This parameter is required only when a job is scheduled by hour or minute and Scheduling Time Type is set to Discrete date ranges.

Parallel Instances: Number of instances to be executed at the same time. A maximum of five instances can be executed at the same time. If Patch Data by Day is set to Yes, this parameter indicates the number of concurrent job instances on the same day. If Patch Data by Day is set to No, it indicates the number of concurrent job instances in the scheduling cycle.
  NOTE:
  Set this parameter based on site requirements. For example, if a CDM job instance is used, data cannot be patched concurrently, and this parameter can only be set to 1.

Upstream or Downstream Job: Select the upstream and downstream jobs (jobs that depend on the current job) that require PatchData. The job dependency graph is displayed. For details about the operations on the job dependency graph, see Batch Job Monitoring: Viewing a Job Dependency Graph.
  NOTE:
  If you set Run PatchData Tasks Periodically to Yes, you can only select an upstream or downstream job with the same scheduling period as the current job.

Patch Data by Day: If you select Yes, PatchData instances on the same day can be executed concurrently for a job, but those on different days cannot. For example, a job instance scheduled at 5:00 and one scheduled at 6:00 can be executed concurrently, but a job instance scheduled on the 1st of a month and one scheduled on the 2nd of the month cannot.
- Yes: Data is patched by day.
- No: Data is not patched by day.

Stop Upon Failure: This parameter is mandatory if Patch Data by Day is set to Yes.
- Yes: If a daily PatchData task fails, subsequent PatchData tasks stop immediately.
- No: If a daily PatchData task fails, subsequent PatchData tasks continue.
  NOTE:
  If data is patched by day and a PatchData task fails on a day, no PatchData task will be executed on the next day. This function is supported only by daily PatchData tasks, not by hourly PatchData tasks.

Priority: PatchData priority. You can set the priority of a workspace-level PatchData task in Default Configuration.
  NOTE:
  The priority set here takes precedence over the workspace-level PatchData priority. Currently, only the priorities of DLI SQL operators can be set.

Ignore OBS Listening:
- Yes: OBS listening is ignored in PatchData scenarios.
- No: The system listens to the OBS path in PatchData scenarios.

Set Running Period: Whether to set a running period for the PatchData task.
- Click OK. The system starts to perform PatchData and the PatchData Monitoring page is displayed.
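The note on Run PatchData Tasks Periodically in Table 2 is easier to reason about with a small calculation. The following Python sketch reproduces the example from that note (a two-day period over January 2023) and simply enumerates the dates on which PatchData tasks would be launched; it illustrates the rule and is not DataArts Studio code.

```python
# Minimal sketch of the scheduling arithmetic behind "Run PatchData Tasks
# Periodically": given a PatchData date range and a period in days, list the
# dates on which PatchData tasks are launched.
from datetime import datetime, timedelta

def patchdata_launch_dates(start: datetime, end: datetime, period_days: int):
    """Yield the launch dates of periodic PatchData tasks within [start, end)."""
    current = start
    while current < end:
        yield current
        current += timedelta(days=period_days)

start = datetime(2023, 1, 1)
end = datetime(2023, 2, 1)
for d in patchdata_launch_dates(start, end, period_days=2):
    print(d.strftime("%Y-%m-%d %H:%M"))  # 2023-01-01 00:00, 2023-01-03 00:00, ...
```

Running the sketch prints 00:00 on Jan 1, Jan 3, Jan 5, and so on, matching the schedule described in the note.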
Batch Job Monitoring: Adding a Job Tag
Tags can be added to jobs to facilitate job instance filtering.
- Log in to the DataArts Studio console by following the instructions in Accessing the DataArts Studio Instance Console.
- On the DataArts Studio console, locate a workspace and click DataArts Factory.
- In the left navigation pane of DataArts Factory, choose Monitoring > Job Monitoring.
- Click the Batch Job Monitoring tab.
- In the Operation column of a job, choose More > Add Job Tag.
- In the Add Job Tag dialog box displayed, set the job tag parameters.
Figure 5 Parameters for adding a job tag
- Click OK.
Batch Job Monitoring: Viewing a Job Dependency Graph
In the job dependency graph, you can view the dependencies between jobs.
- Log in to the DataArts Studio console by following the instructions in Accessing the DataArts Studio Instance Console.
- On the DataArts Studio console, locate a workspace and click DataArts Factory.
- In the left navigation pane of DataArts Factory, choose Monitoring > Job Monitoring.
- Click the Batch Job Monitoring tab.
- In the Operation column of a job, choose More > View Job Dependency Graph.
- On the displayed Job Dependency page, perform any of the following operations:
- In the upper right corner, select Display complete dependency graphs, Display the current job and its upstream and downstream jobs, or Display the current job and its directly connected jobs.
- In the search box in the upper right corner, you can enter the name of a node to search for the node. The node found will be highlighted.
- Click Download to download the job dependency file.
- Scroll the mouse wheel to zoom in or out on the dependency graph.
- Drag the blank area to view the complete relationship graph.
- When you hover the cursor over a job node, the node is marked green, its upstream jobs are marked blue, and its downstream jobs are marked orange.
Figure 6 Marking upstream and downstream job nodes of a node
- Right-click a job node to view the job, copy the job name, and collapse upstream or downstream jobs.
Figure 7 Job node operations
You can also view the node monitoring information of a job on the job details page.
- Log in to the DataArts Studio console by following the instructions in Accessing the DataArts Studio Instance Console.
- On the DataArts Studio console, locate a workspace and click DataArts Factory.
- In the left navigation pane of DataArts Factory, choose Monitoring > Job Monitoring.
- Click the Batch Job Monitoring tab.
- Click a job name and then a node to view monitoring information of the node.