Migrating Data from Oracle to CSS
Updated on 2022-09-23 GMT+08:00
Scenario
Cloud Search Service (CSS) provides structured and unstructured data search, statistics, and reporting capabilities. This section describes how to use CDM to migrate data from an Oracle database to Cloud Search Service. The procedure is as follows:
- Creating a CDM Cluster and Binding an EIP to the Cluster
- Creating a Cloud Search Service Link
- Creating an Oracle Link
- Creating a Migration Job
Prerequisites
- You have subscribed to Cloud Search Service and obtained the IP address and port number of the Cloud Search Service cluster.
- You have obtained the IP address, name, username, and password of the Oracle database.
- If the Oracle database is deployed in an on-premises data center or on a third-party cloud, ensure that an IP address that can be accessed from the public network has been configured for the Oracle database, or that a VPN or Direct Connect connection has been established between the on-premises data center and the cloud.
- You have uploaded an Oracle database driver by following the instructions provided in Managing Drivers.
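Before creating the CDM cluster and links, you can optionally confirm that the CSS cluster and the Oracle server are reachable from the network you plan to use. The sketch below is a minimal TCP reachability check; every host name, IP address, and port is a placeholder and must be replaced with the values gathered in the prerequisites above.

```python
# Minimal reachability check for the endpoints used later in this tutorial.
# All hosts and ports below are placeholders.
import socket

ENDPOINTS = {
    "CSS node 1": ("192.168.0.1", 9200),
    "CSS node 2": ("192.168.0.2", 9200),
    "Oracle listener": ("10.0.0.5", 1521),
}

def is_reachable(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    for name, (host, port) in ENDPOINTS.items():
        status = "reachable" if is_reachable(host, port) else "NOT reachable"
        print(f"{name} ({host}:{port}): {status}")
```

If an endpoint is not reachable, review the VPC, subnet, security group, and (for on-premises Oracle) VPN or Direct Connect configuration before continuing.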
Creating a CDM Cluster and Binding an EIP to the Cluster
- If CDM is used as an independent service, create a CDM cluster by following the instructions provided in Creating a Cluster. If CDM is used as a component of DataArts Studio, create a CDM cluster by following the instructions provided in Creating a Cluster.
The key configurations are as follows:
- Select the flavor of the CDM cluster based on the amount of data to be migrated. Generally, cdm.medium meets the requirements of most migration scenarios.
- The CDM and Cloud Search Service clusters must be in the same VPC. In addition, it is recommended that the CDM cluster be in the same subnet and security group as the Cloud Search Service cluster.
- If the same subnet and security group cannot be used for security purposes, ensure that a security group rule has been configured to allow the CDM cluster to access the Cloud Search Service cluster.
- After the CDM cluster is created, on the Cluster Management page, click Bind EIP in the Operation column to bind an EIP to the cluster. The CDM cluster uses the EIP to access the Oracle data source.
NOTE:
If SSL encryption is configured for the access channel of a local data source, CDM cannot connect to the data source using the EIP.
Creating a Cloud Search Service Link
- Click Job Management in the Operation column of the CDM cluster. On the displayed page, click the Links tab and then Create Link. The Select Connector page is displayed.
Figure 1 Selecting a connector
- Select Cloud Search Service and click Next. On the page that is displayed, configure the CSS link parameters.
- Name: Enter a custom link name, for example, csslink.
- Elasticsearch Server List: Enter the IP address and port number of the Cloud Search Service cluster (cluster version later than 5.x) in the format ip:port. Use semicolons to separate multiple addresses, for example, 192.168.0.1:9200;192.168.0.2:9200.
- Username and Password: Enter the username and password used for logging in to the Cloud Search Service cluster. The user must have the read and write permissions on the database.
Figure 2 Creating a CSS link
- Click Save. The Link Management page is displayed.
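As an optional sanity check, the following sketch (run from a host that can reach the CSS cluster, for example one in the same VPC) queries the Elasticsearch _cluster/health endpoint with the same address and account configured for the csslink link. The node URL and credentials are placeholders.

```python
# Minimal sketch: confirm that the account used for the csslink link can
# authenticate against the Cloud Search Service (Elasticsearch) cluster.
# The node address and credentials below are placeholders.
import requests

CSS_NODE = "http://192.168.0.1:9200"     # one entry from the Elasticsearch Server List
CSS_AUTH = ("css_user", "css_password")  # placeholder username/password

resp = requests.get(f"{CSS_NODE}/_cluster/health", auth=CSS_AUTH, timeout=10)
resp.raise_for_status()
health = resp.json()
print("Cluster:", health.get("cluster_name"), "| status:", health.get("status"))
```

A green or yellow status indicates that the cluster is available and the credentials are accepted.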
Creating an Oracle Link
- Click Job Management in the Operation column of the CDM cluster. On the displayed page, click the Links tab and then Create Link. The Select Connector page is displayed.
Figure 3 Selecting a connector type
- Select Oracle and click Next to configure parameters for the Oracle link.
- Name: Enter a custom link name, for example, oracle_link.
- Database Server and Port: Enter the address and port number of the Oracle server.
- Database Name: Enter the name of the Oracle database whose data is to be exported.
- Username and Password: Enter the username and password used for logging in to the Oracle database. The user must have the permission to read the Oracle metadata.
- Click Save. The Link Management page is displayed.
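Similarly, you can verify the Oracle address, port, and account used for the oracle_link link before running the migration. The sketch below uses the python-oracledb driver; the DSN, credentials, and table name are placeholders, and you may need to adjust the DSN to match your service name or SID.

```python
# Minimal sketch: test the Oracle connection details used for oracle_link.
# All connection details and object names are placeholders.
import oracledb  # pip install oracledb

conn = oracledb.connect(
    user="migration_user",           # placeholder account with metadata read permission
    password="migration_password",   # placeholder
    dsn="10.0.0.5:1521/ORCLPDB1",    # host:port/service_name, placeholder
)
with conn.cursor() as cur:
    cur.execute("SELECT COUNT(*) FROM my_schema.my_table")  # placeholder source table
    print("Rows in source table:", cur.fetchone()[0])
conn.close()
```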
Creating a Migration Job
- Choose Table/File Migration > Create Job to create a job for exporting data from the Oracle database to Cloud Search Service.
Figure 4 Creating a job for migrating data from Oracle to Cloud Search Service
- Job Name: Enter a unique name.
- Source Job Configuration
- Source Link Name: Select the oracle_link link created in Creating an Oracle Link.
- Schema/Tablespace: Enter the name of the database whose data is to be migrated.
- Table Name: Enter the name of the table to be migrated.
- Retain the default values of the optional parameters in Show Advanced Attributes. For details, see From a Common Relational Database.
- Destination Job Configuration
- Destination Link Name: Select the csslink link created in Creating a Cloud Search Service Link.
- Index: Select the Elasticsearch index of the data to be written. You can also enter a new index. CDM automatically creates the index on Cloud Search Service.
- Type: Select the Elasticsearch type of the data to be written. You can enter a new type. CDM automatically creates a type at the migration destination.
- Retain the default values of the optional parameters in Show Advanced Attributes. For details, see To CSS.
- Click Next. The Map Field page is displayed. CDM automatically matches the source and destination fields. See Figure 5.
- If the field mapping is incorrect, you can drag the fields to adjust the mapping.
- If the type is automatically created at the migration destination, you need to configure the type and name of each field.
- CDM supports field conversion during the migration. For details, see Converting Fields.
- Click Next and set task parameters. Generally, retain the default values of all parameters.
In this step, you can configure the following optional functions:
- Retry Upon Failure: If the job fails to be executed, you can determine whether to automatically retry. Retain the default value Never.
- Group: Select the group to which the job belongs. The default group is DEFAULT. On the Job Management page, jobs can be displayed, started, or exported by group.
- Schedule Execution: To configure scheduled jobs, see Scheduling Job Execution. Retain the default value No.
- Concurrent Extractors: Enter the number of extractors to be concurrently executed. Retain the default value 1.
- Write Dirty Data: Specify this parameter if data that fails to be processed or filtered out during job execution needs to be written to OBS for future viewing. Before writing dirty data, create an OBS link. Retain the default value No so that dirty data is not recorded.
- Delete Job After Completion: Retain the default value Do not delete.
- Click Save and Run. The Job Management page is displayed, on which you can view the job execution progress and result.
- After the job is successfully executed, in the Operation column of the job, click Historical Record to view the job's historical execution records and read/write statistics.
On the Historical Record page, click Log to view the job logs.
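Beyond the read/write statistics shown in the historical records, a simple way to spot-check the result is to compare the source row count with the document count in the target index. The sketch below combines the two checks shown earlier; all connection details, the table name, and the index name are placeholders.

```python
# Minimal sketch: compare the Oracle source row count with the document
# count in the target Elasticsearch index after the job succeeds.
# All connection details and object names are placeholders.
import oracledb
import requests

ORACLE_DSN = "10.0.0.5:1521/ORCLPDB1"
ORACLE_AUTH = {"user": "migration_user", "password": "migration_password"}
SOURCE_TABLE = "my_schema.my_table"

CSS_NODE = "http://192.168.0.1:9200"
CSS_AUTH = ("css_user", "css_password")
TARGET_INDEX = "my_index"

with oracledb.connect(dsn=ORACLE_DSN, **ORACLE_AUTH) as conn:
    with conn.cursor() as cur:
        cur.execute(f"SELECT COUNT(*) FROM {SOURCE_TABLE}")
        source_rows = cur.fetchone()[0]

resp = requests.get(f"{CSS_NODE}/{TARGET_INDEX}/_count", auth=CSS_AUTH, timeout=10)
resp.raise_for_status()
target_docs = resp.json()["count"]

print(f"Source rows: {source_rows}, target documents: {target_docs}")
if source_rows != target_docs:
    print("Counts differ -- check the dirty data records and job logs.")
```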
Parent topic: Tutorials