To Elasticsearch
Type | Parameter | Description | Example Value
---|---|---|---
Basic parameters | Index | Elasticsearch index, which is similar to the name of a relational database. CDM supports automatic creation of indexes and field types. The index and field type names can contain only lowercase letters. | index
 | Type | Elasticsearch type, which is similar to the table name of a relational database. The type name can contain only lowercase letters. NOTE: Elasticsearch 7.x and later versions do not support custom types; only the _doc type can be used. In this case, this parameter does not take effect even if it is set. | type
 | Operation | Operation type (see the upsert sketch after this table) | UPSERT
 | Primary Key Mode | This parameter is available when Operation is UPSERT, UPDATE, or CREATE. | Single primary key
 | Clear Data Before Import | Whether to clear the data that already exists in the index before the import. | No
 | Primary Key Delimiter | This parameter is available when Primary Key Mode is Composite primary key. It is the delimiter used when the primary key values are joined into the document ID. | _
Advanced attributes | Pipeline ID | This parameter is available only after a pipeline ID is created in Kibana. After data is transferred to Elasticsearch, the specified data transformation pipeline of Elasticsearch converts the data format (see the pipeline sketch after this table). | pipeline_id
 | Write ES with Routing | If you enable this function, the value of a specified column is written to Elasticsearch as the routing value (see the routing sketch after this table). NOTE: Before enabling this function, create indexes at the destination to improve the query efficiency. | No
 | Routing Column | This parameter is available when Write ES with Routing is set to Yes. It specifies the destination routing column. If the destination index exists but the column information cannot be obtained, you can enter the column name manually. The routing column can be empty. If it is empty, no routing value is specified for the data written to Elasticsearch. | value1
 | Periodically Create Index | For streaming jobs that continuously write data to Elasticsearch, CDM periodically creates new indexes and writes data to them, which helps you delete expired data. The creation period is configurable, for example, every hour (see the index naming sketch after this table). When extracting data from a file, you must configure a single extractor, that is, set Concurrent Extractors to 1; otherwise, this parameter is invalid. | Every hour
 | Commits | Number of records to be submitted at a time (see the batching sketch after this table) | 10000
 | Retries | Number of retries upon a request failure. A maximum of 10 retries is allowed. | 3
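
The sketches below illustrate how several of these parameters map to plain Elasticsearch REST calls. First, an upsert with a composite primary key: the key columns are joined with the Primary Key Delimiter to form the document ID, and the write updates the document if that ID exists or inserts it otherwise. This is a minimal sketch, assuming a local unsecured cluster at localhost:9200 and made-up column names; it is not CDM's internal implementation.

```python
# Hedged sketch: an UPSERT with a composite primary key issued directly
# against Elasticsearch. The index name and delimiter ("_") mirror the
# parameters above; the record and key columns are made up.
import requests

ES_URL = "http://localhost:9200"   # assumption: local, unsecured cluster
INDEX = "index"                    # lowercase only, as required above

record = {"order_id": "1001", "line_no": "2", "amount": 49.9}
key_columns = ["order_id", "line_no"]
delimiter = "_"                    # Primary Key Delimiter

# Composite primary key: concatenate the key columns into the document ID.
doc_id = delimiter.join(str(record[c]) for c in key_columns)  # "1001_2"

# UPSERT semantics: update the document if the ID exists, insert otherwise.
resp = requests.post(
    f"{ES_URL}/{INDEX}/_update/{doc_id}",
    json={"doc": record, "doc_as_upsert": True},
)
resp.raise_for_status()
print(resp.json()["result"])  # "created" on first run, "updated" after
```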
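Next, a pipeline sketch: how an ingest pipeline named pipeline_id (the example value above) could be created and then applied to writes. The lowercase processor and the city field are illustrative assumptions, not CDM behavior.

```python
# Hedged sketch: create an ingest pipeline in Elasticsearch and write a
# document through it.
import requests

ES_URL = "http://localhost:9200"   # assumption: local, unsecured cluster

# Define a pipeline that lowercases a field before the document is stored.
requests.put(
    f"{ES_URL}/_ingest/pipeline/pipeline_id",
    json={
        "description": "example transformation pipeline",
        "processors": [{"lowercase": {"field": "city"}}],
    },
).raise_for_status()

# Route a write through the pipeline with the ?pipeline= query parameter.
requests.post(
    f"{ES_URL}/index/_doc?pipeline=pipeline_id",
    json={"city": "BERLIN"},   # stored as "berlin"
).raise_for_status()
```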
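A routing sketch: passing a column's value as the routing parameter when writing, which is what Write ES with Routing does with the configured Routing Column. The field names are assumptions.

```python
# Hedged sketch: write a document with a routing value. "value1" mirrors
# the example Routing Column value above.
import requests

ES_URL = "http://localhost:9200"   # assumption: local, unsecured cluster
record = {"user": "alice", "region": "value1"}
routing = record["region"]   # the routing column's value for this row

# The ?routing= parameter decides which shard stores the document, so later
# queries that pass the same routing value only have to search that shard.
resp = requests.post(
    f"{ES_URL}/index/_doc?routing={routing}",
    json=record,
)
resp.raise_for_status()
```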
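An index naming sketch for Periodically Create Index: one way to derive an hourly index name so that whole hours of expired data can be dropped by deleting old indexes. The suffix format is an assumption, not CDM's documented naming scheme.

```python
# Hedged sketch: derive a time-suffixed index name for hourly creation.
from datetime import datetime, timezone

def hourly_index(base: str = "index") -> str:
    # e.g. "index_2024010112" for 12:00 UTC on 2024-01-01
    return f"{base}_{datetime.now(timezone.utc):%Y%m%d%H}"

print(hourly_index())
```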
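Finally, a batching sketch combining Commits and Retries: records are submitted through the _bulk API in batches of the configured size, and a failed request is retried up to the configured number of times. The backoff timing and sample records are assumptions.

```python
# Hedged sketch: batched writes with retries via the _bulk API.
import json
import time
import requests

ES_URL = "http://localhost:9200"   # assumption: local, unsecured cluster
INDEX = "index"
COMMITS = 10000   # records per batch (example value above)
RETRIES = 3       # retries per failed request (example value above)

records = [{"n": i} for i in range(25000)]  # made-up payload

for start in range(0, len(records), COMMITS):
    batch = records[start:start + COMMITS]
    # _bulk body: alternating action and source lines, newline-terminated.
    body = "".join(
        json.dumps({"index": {"_index": INDEX}}) + "\n" + json.dumps(r) + "\n"
        for r in batch
    )
    for attempt in range(RETRIES + 1):
        try:
            resp = requests.post(
                f"{ES_URL}/_bulk",
                data=body,
                headers={"Content-Type": "application/x-ndjson"},
            )
            resp.raise_for_status()
            break
        except requests.RequestException:
            if attempt == RETRIES:
                raise
            time.sleep(2 ** attempt)  # simple exponential backoff
```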