Synchronizing drs-oracle-avro Database to Hudi Using CDL (ThirdKafka)

Scenario

Import drs-oracle-avro database data from ThirdKafka to Hudi on the CDLService web UI of a cluster with Kerberos authentication enabled.

This section applies to MRS 3.3.0 or later.

Prerequisites

  • The CDL and Hudi services have been installed in a cluster and are running properly.
  • Topics of the ThirdKafka database can be consumed by the MRS cluster. For details, see Prerequisites for ThirdPartyKafka.
  • You have created a human-machine user, for example, cdluser, added the user to user groups cdladmin (primary group), hadoop, kafka, and supergroup, and associated the user with the System_administrator role on FusionInsight Manager.

Procedure

  1. Log in to FusionInsight Manager as user cdluser (change the password upon your first login), choose Cluster > Services > CDL, and click the link next to CDLService UI to go to the CDLService web UI.
  2. Choose Link Management and click Add Link. In the displayed dialog box, set parameters for adding the thirdparty-kafka and hudi links by referring to the following tables. Creating a CDL Database Connection describes the data link parameters.

    Table 1 thirdparty-kafka data link parameters

    Parameter                Example Value
    -----------------------  -----------------------------------------------
    Name                     drs_avro_oracle_link
    Link Type                thirdparty-kafka
    Bootstrap Servers        10.10.10.10:9093
    Security Protocol        SASL_SSL
    Username                 testuser
    Password                 Password of the testuser user
    SSL Truststore Location  Click Upload to upload the authentication file.
    SSL Truststore Password  -
    Datastore Type           drs-oracle-avro
    Description              thirdparty-kafka Link

    MRS Kafka can also be used as the source of thirdparty-kafka. If username and password are used for login authentication, log in to FusionInsight Manager, choose Cluster > Services > Kafka, and click Configurations. Search for the sasl.enabled.mechanisms parameter, add PLAIN as a parameter value, and click Save. Then restart the Kafka service for the configuration to take effect.

    On the CDL web UI, configure the thirdparty-kafka link that uses MRS Kafka as the source in the same way, substituting the MRS Kafka broker addresses and credentials.
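
    The link settings in Table 1 correspond to standard Kafka client options. The following minimal sketch, using the third-party kafka-python package, checks that the source topic can be consumed with those settings; it assumes the JKS truststore uploaded on the UI has been exported to a PEM CA file (ca.pem is a placeholder), and that PLAIN is the SASL mechanism used for username/password authentication:

    ```python
    # Connectivity check for the thirdparty-kafka link (illustrative).
    # Assumes kafka-python is installed and the uploaded JKS truststore
    # has been exported to a PEM CA file for the Python client.
    from kafka import KafkaConsumer

    consumer = KafkaConsumer(
        "source_topic",                        # Source Topics used by the job
        bootstrap_servers="10.10.10.10:9093",  # Bootstrap Servers
        security_protocol="SASL_SSL",          # Security Protocol
        sasl_mechanism="PLAIN",                # assumed mechanism for user/password auth
        sasl_plain_username="testuser",        # Username
        sasl_plain_password="<testuser password>",
        ssl_cafile="ca.pem",                   # PEM export of the truststore
        auto_offset_reset="earliest",
        consumer_timeout_ms=10000,             # stop after 10s with no data
    )
    for record in consumer:
        # DRS payloads are binary Avro; print metadata only to confirm data flows.
        print(record.topic, record.partition, record.offset, len(record.value))
    consumer.close()
    ```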

    Table 2 Hudi data link parameters

    Parameter        Example Value
    ---------------  ---------------------------------------------------
    Link Type        hudi
    Name             hudilink
    Storage Type     hdfs
    Auth KeytabFile  /opt/Bigdata/third_lib/CDL/user_libs/cdluser.keytab
    Principal        cdluser
    Description      -

  3. After the parameters are configured, click Test to check whether the data link is normal.

    After the test is successful, click OK.

  4. (Optional) Choose ENV Management and click Add Env. In the displayed dialog box, configure the parameters based on the following table.

    Table 3 Parameters for adding an ENV

    Parameter         Example Value
    ----------------  -------------
    Name              test-env
    Driver Memory     1 GB
    Type              spark
    Executor Memory   1 GB
    Executor Cores    1
    Number Executors  1
    Queue             -
    Description       -

    Click OK.
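
    The ENV values describe the resources of the Spark application that CDL launches for the job. As a rough reference, a PySpark session configured with the example values would look like the following sketch (the application name is illustrative; CDL performs the actual submission):

    ```python
    # Spark-configuration equivalent of the test-env example values
    # (for reference only; CDL manages the actual submission).
    from pyspark.sql import SparkSession

    spark = (
        SparkSession.builder.appName("cdl-test-env")  # illustrative name
        .config("spark.driver.memory", "1g")          # Driver Memory
        .config("spark.executor.memory", "1g")        # Executor Memory
        .config("spark.executor.cores", "1")          # Executor Cores
        .config("spark.executor.instances", "1")      # Number Executors
        .getOrCreate()
    )
    ```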

  5. Choose Job Management > Data synchronization task and click Add Job. In the displayed dialog box, set the following parameters and click Next.

    Parameter  Example Value
    ---------  ---------------------
    Name       job_avro_oracletohudi
    Desc       New CDL Job

  6. Configure ThirdKafka job parameters.

    1. On the Job Management page, drag the thirdparty-kafka icon in the Source area on the left to the editing area on the right and double-click the icon to go to the ThirdpartyKafka job configuration page. Set the parameters by referring to the following table. Creating a CDL Data Synchronization Job describes the job parameters.
      Table 4 thirdparty-kafka job parameters

      Parameter            Example Value
      -------------------  --------------------
      Link                 drs_avro_oracle_link
      DB Name              avrooracledb
      Schema               avrooracleschema
      Datastore Type       drs-oracle-avro
      Source Topics        source_topic
      Tasks Max            1
      Tolerance            none
      Data Filter Time     -
      Topic Table Mapping  test/hudi_topic

    2. Click OK. The ThirdpartyKafka job parameters are configured.

  7. Configure Hudi job parameters.

    1. On the Job Management page, drag the hudi icon in the Sink area on the left to the editing area on the right and double-click the icon to go to the Hudi job configuration page. Set parameters by referring to the following table. Creating a CDL Data Synchronization Job describes the job parameters.
      Table 5 Sink Hudi job parameters

      Parameter                                      Example Value
      ---------------------------------------------  -------------
      Link                                           hudilink
      Path                                           /cdl/test
      Interval                                       10
      Max Rate Per Partition                         0
      Parallelism                                    10
      Target Hive Database                           default
      Configuring Hudi Table Attributes              Visual View
      Global Configuration of Hudi Table Attributes  -
      Configuring the Attributes of the Hudi Table:
        Table Name                                   test
        Table Type Opt Key                           COPY_ON_WRITE
        Hudi TableName Mapping                       -
        Hive TableName Mapping                       -
        Table Primarykey Mapping                     id
        Table Hudi Partition Type                    -
        Custom Config                                -

    2. (Optional) Click the plus sign (+) to display the Execution Env parameter. Select a created environment for it. The default value is defaultEnv.

    3. Click OK.
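
    For reference, the sink settings map onto standard Hudi write options. The following PySpark sketch shows an equivalent direct write under those options; the Spark session and DataFrame are illustrative, and CDL performs the actual write:

    ```python
    # Illustrative Hudi write using options equivalent to the sink job
    # settings (requires a Spark session with the Hudi bundle available).
    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("hudi-sink-equivalent").getOrCreate()
    df = spark.createDataFrame([(1, "a")], ["id", "val"])  # sample data with an "id" key

    hudi_options = {
        "hoodie.table.name": "test",                            # Table Name
        "hoodie.datasource.write.table.type": "COPY_ON_WRITE",  # Table Type Opt Key
        "hoodie.datasource.write.recordkey.field": "id",        # Table Primarykey Mapping
        "hoodie.datasource.hive_sync.database": "default",      # Target Hive Database
    }
    df.write.format("hudi").options(**hudi_options).mode("append").save("/cdl/test")
    ```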

  8. Drag the two icons to associate the job parameters and click Save. The job configuration is complete.

  9. In the job list on the Job Management page, locate the created job, click Start in the Operation column, and wait until the job is started.

    Check whether the data transmission takes effect. For example, insert data into the table in the drs-oracle-avro database and view the content of the files imported to Hudi, as in the sketch below.
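
    A minimal verification sketch in PySpark, assuming the Hudi bundle is available in the session:

    ```python
    # Read back the Hudi table written by the CDL job to confirm the sync.
    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("verify-cdl-sync").getOrCreate()
    df = spark.read.format("hudi").load("/cdl/test")  # Path from Table 5
    df.show(10, truncate=False)

    # If Hive table sync is enabled, the same data is queryable as default.test.
    spark.sql("SELECT * FROM default.test LIMIT 10").show(truncate=False)
    ```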