
To Hive

If the destination link of a job is a Hive link, configure the destination job parameters based on Table 1.

Table 1 Parameter description

Parameter

Description

Example Value

Database Name

Name of the database. Click the icon next to the text box to open the dialog box for selecting a database.

default

Table Name

Name of the destination table. Click the icon next to the text box to open the dialog box for selecting a table.

This parameter can be set to a macro variable of date and time, and the table name can contain multiple macro variables. When the macro variable of date and time works with a scheduled job, incremental data can be synchronized periodically. For details, see Incremental Synchronization Using the Macro Variables of Date and Time.

NOTE:

If you have configured a macro variable of date and time and schedule a CDM job through DataArts Studio DataArts Factory, the system replaces the macro variable of date and time with (Planned start time of the data development job minus Offset) rather than (Actual start time of the CDM job minus Offset).

TBL_X
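
For illustration only, assuming the ${dateformat(...)} macro syntax described in Incremental Synchronization Using the Macro Variables of Date and Time, a destination table name such as the following resolves to a different date-suffixed table on each scheduled run:
    TBL_X_${dateformat(yyyyMMdd)}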

Auto Table Creation

This parameter is displayed only when the source is a relational database. The options are as follows:
  • Non-auto creation: CDM will not automatically create a table.
  • Auto creation: If the destination database does not contain the table specified by Table Name, CDM will automatically create the table. If the table specified by Table Name already exists, no table is created and data is written to the existing table.
  • Deletion before creation: CDM deletes the table specified by Table Name, and then creates the table again.
NOTE:
  • Only column comments are synchronized during automatic table creation. Table comments are not synchronized.
  • Primary keys cannot be synchronized during automatic table creation.

Non-auto creation
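
The notes above can be illustrated with a sketch of the DDL that an auto-created table roughly corresponds to (the column names are hypothetical and the exact statement generated by CDM may differ): the table is stored as ORC, column comments are kept, and no table comment or primary key is created.
    -- Stored as ORC; no table-level comment and no primary key, matching the limitations above.
    CREATE TABLE default.TBL_X(
    id int COMMENT 'column comments are synchronized',
    name string COMMENT 'column comments are synchronized'
    )
    STORED AS ORC;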

Clear Data Before Import

Whether the data in the destination table is cleared before data import. The options are as follows:
  • Yes: The data is cleared.
  • No: The data is not cleared. The imported data is appended to the existing table.

Yes
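
For reference, and not necessarily the exact statement CDM runs internally, clearing a non-partitioned managed destination table before import corresponds to a HiveQL statement such as:
    -- Removes all rows from the example table TBL_X before the new data is written.
    TRUNCATE TABLE TBL_X;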

Partition to Clear

This parameter is available when Clear Data Before Import is set to Yes.

Enter the partitions whose data is to be cleared. The data in these partitions is cleared before the import.

Single partition: year=2020,location=sun

Multiple partitions: ['year=2020,location=sun', 'year=2021,location=earth']
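
Conceptually (CDM's internal behavior may differ), clearing the single-partition example above corresponds to a statement such as:
    -- Clears only the specified partition; quote values according to the partition column types.
    TRUNCATE TABLE TBL_X PARTITION (year=2020, location='sun');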

Executing Analyze Statements

If this parameter is set to Yes, the ANALYZE TABLE statement is executed asynchronously after all data is written, accelerating subsequent queries on the Hive table. The SQL statements are as follows:

  • Non-partitioned table: ANALYZE TABLE tablename COMPUTE STATISTICS
  • Partitioned table: ANALYZE TABLE tablename PARTITION(partcol1[=val1], partcol2[=val2], ...) COMPUTE STATISTICS
NOTE:

The Executing Analyze Statements parameter applies only to single-table migration.

Yes
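
For example, filling the partitioned-table template above with the example table and partition values used earlier yields:
    ANALYZE TABLE TBL_X PARTITION (year=2020, location='sun') COMPUTE STATISTICS;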

  • When Hive serves as the destination, the automatically created table uses the ORC storage format.
  • Due to file format restrictions, complex data can be written only in ORC or Parquet format.
  • If the source Hive table contains both array and map data, the destination table format can only be the ORC or Parquet complex type (see the ORC example after this list). If the destination table format is RC or TEXT, the source data will be processed so that it can still be written successfully.
  • As the map type is an unordered data structure, the data type may change after a migration.
  • If Hive serves as the migration destination and the storage format is Textfile, delimiters must be explicitly specified in the statement for creating Hive tables. The following is an example:
    CREATE TABLE csv_tbl(
    smallint_value smallint,
    tinyint_value tinyint,
    int_value int,
    bigint_value bigint,
    float_value float,
    double_value double,
    decimal_value decimal(9, 7),
    timestamp_value timestamp,
    date_value date,
    varchar_value varchar(100),
    string_value string,
    char_value char(20),
    boolean_value boolean,
    binary_value binary,
    varchar_null varchar(100),
    string_null string,
    char_null char(20),
    int_null int
    )
    ROW FORMAT SERDE 'org.apache.hadoop.hive.serde2.OpenCSVSerde'
    WITH SERDEPROPERTIES (
    "separatorChar" = "\t",
    "quoteChar"     = "'",
    "escapeChar"    = "\\"
    )
    STORED AS TEXTFILE;
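
For the note on complex types above, the following sketch (hypothetical table and column names) shows a destination table that can hold array and map data because it is stored as ORC:
    CREATE TABLE orc_complex_tbl(
    id int,
    tags array<string>,
    properties map<string,string>
    )
    STORED AS ORC;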