To Hive
If the destination link of a job is a Hive link, configure the destination job parameters based on Table 1.
Parameter | Description | Example Value
---|---|---
Database Name | Database name. Click the icon next to the text box to open the dialog box for selecting a database. | default
Table Name | Destination table name. Click the icon next to the text box to open the dialog box for selecting a table. This parameter can be set to a macro variable of date and time, and a path name can contain multiple macro variables. When a macro variable of date and time is used with a scheduled job, incremental data can be synchronized periodically. For details, see Incremental Synchronization Using the Macro Variables of Date and Time. NOTE: If you have configured a macro variable of date and time and schedule the CDM job through DataArts Studio DataArts Factory, the system replaces the macro variable with (planned start time of the data development job minus offset) rather than (actual start time of the CDM job minus offset). | TBL_X
Auto Table Creation | Whether the destination table is automatically created. This parameter is displayed only when the source is a relational database. | Non-auto creation
Clear Data Before Import | Whether the data in the destination table is cleared before data import. If set to Yes, the existing data is cleared before the import; if set to No, it is retained. | Yes
Partition to Clear | This parameter is available when Clear Data Before Import is set to Yes. The data in the partitions you specify is cleared before the import. | Single partition: year=2020,location=sun<br>Multiple partitions: ['year=2020,location=sun', 'year=2021,location=earth']
Executing Analyze Statements | Whether to asynchronously execute the ANALYZE TABLE statement after all data is written, which accelerates queries on the Hive table. A sketch of the statement follows this table. NOTE: This parameter applies only to the migration of a single table. | Yes
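For reference, Hive gathers table-level statistics with the standard ANALYZE TABLE syntax. The following is a minimal sketch of such a statement; `default.tbl_x` is a placeholder for the destination table, and the exact statement CDM issues may differ:

```sql
-- Standard Hive syntax for computing table-level statistics.
-- default.tbl_x is a placeholder for the actual destination table.
ANALYZE TABLE default.tbl_x COMPUTE STATISTICS;
```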
- When Hive serves as the destination end, a table whose storage format is ORC is automatically created.
- Due to file format restrictions, complex data can be written only in ORC or Parquet format.
- If the source Hive table contains both array and map data, the destination table must use the ORC or Parquet complex types. If the destination table is in RC or TEXT format, the source data is processed before it can be written successfully. (A sketch of a complex-type destination table follows the example below.)
- Because the map type is an unordered data structure, the data type may change after migration.
- If Hive serves as the migration destination and the storage format is TEXTFILE, delimiters must be explicitly specified in the statement for creating the Hive table. The following is an example:
```sql
CREATE TABLE csv_tbl (
  smallint_value  smallint,
  tinyint_value   tinyint,
  int_value       int,
  bigint_value    bigint,
  float_value     float,
  double_value    double,
  decimal_value   decimal(9, 7),
  timestamp_value timestamp,
  date_value      date,
  varchar_value   varchar(100),
  string_value    string,
  char_value      char(20),
  boolean_value   boolean,
  binary_value    binary,
  varchar_null    varchar(100),
  string_null     string,
  char_null       char(20),
  int_null        int
)
ROW FORMAT SERDE 'org.apache.hadoop.hive.serde2.OpenCSVSerde'
WITH SERDEPROPERTIES (
  "separatorChar" = "\t",
  "quoteChar"     = "'",
  "escapeChar"    = "\\"
)
STORED AS TEXTFILE;
```
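To illustrate the complex-type constraint from the notes above, the following is a minimal sketch of a destination table that can hold both array and map columns; the table and column names are hypothetical:

```sql
-- Hypothetical table: array and map columns require an ORC or Parquet table.
CREATE TABLE orc_complex_tbl (
  id       int,
  tag_list array<string>,
  attr_map map<string, string>
)
STORED AS ORC;
```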