Updated on 2026-04-30 GMT+08:00

Migrating Data Using CDM

Enterprises often operate heterogeneous platforms, with data distributed across traditional relational databases such as Oracle and storage systems like Object Storage Service (OBS). To enable high-performance full-text search and analytics, this data often needs to be migrated into Elasticsearch. However, migrating large volumes of heterogeneous data using custom scripts is costly and often hindered by complex network configurations and low transfer efficiency. Huawei Cloud's Cloud Data Migration (CDM) simplifies this process with a no-code, managed service that supports high concurrency and resumable transfers. Using CDM, you can quickly and securely migrate data from Oracle databases or OBS to CSS Elasticsearch clusters.

Table 1 Ingesting data into CSS using CDM

| Scenario | Source Data | Target Cluster |
| --- | --- | --- |
| Ingesting data from an Oracle database into CSS | A local or third-party Oracle database | Elasticsearch 5.5, 6.2, 6.5, 7.1, 7.6, 7.9, or 7.10 |
| Ingesting data from OBS into CSS | JSON/CSV files in OBS buckets | Elasticsearch 5.5, 6.2, 6.5, 7.1, 7.6, 7.9, or 7.10 |

Preparations

  1. Set up network connectivity.
    • Ensure that the CDM cluster, CSS cluster, and OBS bucket reside in the same VPC. This enables data transmission over the internal network for optimal speed.
    • If the source is an on-premises Oracle database, ensure that CDM can access it through a VPN, Direct Connect, or a public IP address.
  2. Obtain connection information.
    • Obtain the private network address (for example, 192.168.xxx.xxx:9200), username, and password of the CSS cluster. (The username and password are required only for a security-mode cluster.)
    • If the source is an Oracle database, obtain the database's IP address, port, database name, username, and password.
    • If the source is OBS, obtain the OBS bucket name, endpoint (domain name), port, access key ID (AK), and secret access key (SK).
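  3. (Optional) Verify that the CSS cluster is reachable. Before creating the CDM job, you can confirm connectivity from a host in the same VPC. The address and credentials below are placeholders; substitute your own values, use https:// if HTTPS access is enabled on the cluster, and omit -u for a non-security cluster.

      curl -u username:password 'http://192.168.xxx.xxx:9200'

    If the cluster is reachable, Elasticsearch returns a JSON document with the cluster name and version information.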

Migrating Data

  1. Log in to Kibana and go to the command execution page.
    1. Log in to the CSS management console.
    2. In the navigation pane on the left, choose Clusters > Elasticsearch.
    3. In the cluster list, find the target cluster, and click Kibana in the Operation column to log in to the Kibana console.
    4. In the left navigation pane, choose Dev Tools.

      The left pane of the console is the command input box, and the triangle icon in its upper-right corner is the execution button. The right pane shows the execution result.

  2. (Optional) Create the destination index in an Elasticsearch cluster. CDM can automatically create destination indexes. However, to ensure optimal query performance, you are advised to define the index mappings in the destination Elasticsearch cluster in advance.

    For example, run the following command to create index demo:

    • For Elasticsearch 7.x or later:
      PUT /demo
      {
        "settings": {
          "number_of_shards": 1
        },
        "mappings": {
          "properties": {
            "productName": {
              "type": "text",
              "analyzer": "ik_smart"
            },
            "size": {
              "type": "keyword"
            }
          }
        }
      }
    • For Elasticsearch earlier than 7.x (these versions require a mapping type name, products in this example):
      PUT /demo
      {
        "settings": {
          "number_of_shards": 1
        },
        "mappings": {
          "products": {
            "properties": {
              "productName": {
                "type": "text",
                "analyzer": "ik_smart"
              },
              "size": {
                "type": "keyword"
              }
            }
          }
        }
      }

    If the command is executed successfully, the following information is displayed:

    {
      "acknowledged" : true,
      "shards_acknowledged" : true,
      "index" : "demo"
    }
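    The only difference between the two requests above is the mapping type level, which Elasticsearch removed in 7.x. If you create indexes from scripts, the version-appropriate request body can be built programmatically. The following is a minimal sketch (the function name and structure are our own, not part of CDM or CSS):

```python
# Sketch: build the index-creation body for either Elasticsearch version
# family. In 7.x and later the mapping type level ("products" in the example
# above) was removed, so "properties" sits directly under "mappings".
def build_index_body(es_major_version, type_name="products"):
    properties = {
        "productName": {"type": "text", "analyzer": "ik_smart"},
        "size": {"type": "keyword"},
    }
    # Pre-7.x bodies wrap the properties in a named mapping type.
    mappings = (
        {"properties": properties}
        if es_major_version >= 7
        else {type_name: {"properties": properties}}
    )
    return {"settings": {"number_of_shards": 1}, "mappings": mappings}
```

    You would then send the returned dictionary as the JSON body of the PUT /demo request.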
  3. Ingest data from Oracle or OBS to the Elasticsearch cluster using CDM.

    If the data source is MySQL, see Creating a MySQL Connector for how to connect to a MySQL database.

  4. After the migration is complete, verify data integrity.
    1. Log in to the Elasticsearch cluster via Kibana, and navigate to the Dev Tools page.
    2. Run the following command to check the newly ingested data:
      GET demo/_count         # Check the number of records ingested.
      GET demo/_search        # Check the content of the ingested data.

      If the results are consistent with the source, data ingestion is successful.
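
      To spot-check a few documents instead of retrieving the full result set, you can limit the sample size (a standard Elasticsearch search body parameter):

      GET demo/_search
      {
        "size": 3
      }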