Example Configuration File for Ingesting Data from DIS into an Elasticsearch or OpenSearch Cluster
In this example, the source is Huawei Cloud's Data Ingestion Service (DIS), the destination is an Elasticsearch or OpenSearch cluster created in CSS, and the migration is performed by the CSS-hosted Logstash service.
```
# logstash-input-dis
input {
  dis {
    # DIS stream name
    streams => ["YOUR_DIS_STREAM_NAME"]
    # Endpoint URL for the region where your DIS service is located
    endpoint => "https://dis.example.com"
    # User's AK/SK
    ak => "YOUR_ACCESS_KEY_ID"
    sk => "YOUR_SECRET_KEY_ID"
    # Region where your DIS service is located
    region => "YOUR_REGION"
    # Project ID
    project_id => "YOUR_PROJECT_ID"
    # DIS application name
    group_id => "YOUR_APP_ID"
    # Client ID
    client_id => "YOUR_CLIENT_ID"
    # Start position for data consumption in the stream
    auto_offset_reset => "earliest"
  }
}
filter {
  # Remove some metadata fields automatically added by Logstash
  mutate {
    remove_field => ["@timestamp", "@version"]
  }
}
# logstash-output-elasticsearch
output {
  elasticsearch {
    # Destination cluster node addresses. No need to include the protocol.
    hosts => ["xxx.xxx.xxx.xxx:9200", "xxx.xxx.xxx.xxx:9200"]
    # Name of the index that events are written into
    index => "xxx"
    # Mandatory fields for a security-mode cluster. (Delete them for a cluster with the security mode disabled.)
    # user => "xxx"       # Username for accessing the cluster
    # password => "xxx"   # Password corresponding to the username
    # If SSL is enabled for the destination cluster, additionally configure the following:
    # ssl => true
    # cacert => "/opt/logstash/extend/certs"   # Path of the CA certificate used to verify the destination cluster
    # ssl_certificate_verification => false    # Whether to verify the destination cluster's SSL certificate
  }
}
```

| Configuration Item | Mandatory | Description |
|---|---|---|
| logstash-input-dis | ||
| streams | Yes | DIS stream name. The value must match the stream name specified on the DIS console during stream creation. |
| endpoint | Yes | Endpoint for the region where your DIS service is located |
| ak | Yes | The user's Access Key (AK). For details, see Checking Authentication Information. |
| sk | Yes | The user's Secret Key (SK). For details, see Checking Authentication Information. |
| region | Yes | Region where your DIS service is located |
| project_id | Yes | Project ID. For details, see Checking Authentication Information. |
| group_id | Yes | DIS app name, used to identify a consumer group. The value can be any character string. |
| client_id | No | Client ID, which identifies a consumer in a consumer group. If multiple pipelines or Logstash instances are present, configure unique values for different consumers. For example, the value of instance 1 is client1, and the value of instance 2 is client2. |
| auto_offset_reset | No | Start position for data consumption in the stream. The value can be: earliest (consume from the earliest data in the stream) or latest (consume from the most recent data). |
| logstash-output-elasticsearch | | |
| hosts | Yes | Destination cluster node addresses. You can configure multiple IP addresses. Value format: ["<Node IP address 1>:<Port number>", "<Node IP address 2>:<Port number>"] |
| index | Yes | Name of the index to which events are written. |
| user | No | Username for accessing the destination cluster. Mandatory for a security-mode cluster. |
| password | No | Password for accessing the destination cluster. Mandatory for a security-mode cluster. |
| ssl | No | Whether SSL is enabled for the destination cluster. The value can be: true (SSL enabled) or false (SSL disabled). |
| cacert | No | Path of the CA certificate used to verify the destination cluster. Value format: <certificate path><certificate name>, for example, /opt/logstash/extend/certs. |
| ssl_certificate_verification | No | Whether SSL certificate verification is enabled for the destination cluster. The value can be: true (verification enabled) or false (verification disabled). |
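Before deploying the pipeline, it can help to confirm that every mandatory configuration item from the tables above has a value. The sketch below is illustrative only (it is not part of the DIS or CSS tooling) and simply checks a dictionary of settings against the mandatory items listed in the tables:

```python
# Mandatory items taken from the tables above.
REQUIRED_INPUT = ["streams", "endpoint", "ak", "sk", "region", "project_id", "group_id"]
REQUIRED_OUTPUT = ["hosts", "index"]

def missing_items(input_cfg: dict, output_cfg: dict) -> list:
    """Return the names of mandatory configuration items that are absent or empty."""
    missing = [k for k in REQUIRED_INPUT if not input_cfg.get(k)]
    missing += [k for k in REQUIRED_OUTPUT if not output_cfg.get(k)]
    return missing

# Hypothetical values for illustration.
input_cfg = {
    "streams": ["my-stream"], "endpoint": "https://dis.example.com",
    "ak": "AK", "sk": "SK", "region": "r1",
    "project_id": "p1", "group_id": "g1",
}
output_cfg = {"hosts": ["10.0.0.1:9200"]}  # "index" deliberately missing

print(missing_items(input_cfg, output_cfg))  # ['index']
```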
For more information, see Configuring the DIS Logstash Plugin in Data Ingestion Service User Guide, as well as Elasticsearch output plugin from the official Logstash documentation.
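As a closing illustration, the mutate filter in the example config drops the @timestamp and @version metadata fields that Logstash adds to each event. Its effect can be sketched in plain Python (illustrative only; Logstash applies this natively):

```python
def apply_remove_field(event: dict, fields=("@timestamp", "@version")) -> dict:
    """Mimic the mutate remove_field filter: drop the listed keys if present."""
    return {k: v for k, v in event.items() if k not in fields}

event = {"@timestamp": "2024-01-01T00:00:00Z", "@version": "1", "message": "hello"}
print(apply_remove_field(event))  # {'message': 'hello'}
```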