Configuration File Templates
When building a real-time log processing or big data migration pipeline, writing Logstash configuration files manually is often time-consuming and error-prone due to complex syntax. Configuration errors may even lead to data loss. To address this, CSS provides prebuilt Logstash configuration templates for a variety of mainstream data sources, such as Redis, Elasticsearch, MySQL (JDBC), and Kafka. Using these verified templates, you can quickly set up automated data transmission channels from data sources to CSS Elasticsearch or OpenSearch clusters by configuring only the required connection information. This significantly reduces operational overhead and helps ensure the reliability of data ingestion pipelines.
Introduction to the Templates
| System Template Name | Description | Details |
|---|---|---|
| redis | Ingests data from a Redis database into an Elasticsearch or OpenSearch cluster. | |
| elasticsearch | Ingests data from an Elasticsearch or OpenSearch cluster into another Elasticsearch or OpenSearch cluster. | |
| jdbc | Ingests data from a MySQL or MariaDB database via JDBC into an Elasticsearch or OpenSearch cluster. | |
| kafka | Ingests data from Kafka into an Elasticsearch or OpenSearch cluster. | |
| beats | Ingests data from Beats into an Elasticsearch or OpenSearch cluster. | |
Redis Template
Ingest data from a Redis database into an Elasticsearch or OpenSearch cluster.
# logstash-input-redis
input {
redis {
# Redis data type
data_type => "pattern_channel"
# Redis list or channel name
key => "lgs-*"
host => "xxx.xxx.xxx.xxx"
port => 6379
}
}
filter {
# Removes some metadata fields automatically added by Logstash
mutate {
remove_field => ["@timestamp", "@version"]
}
}
# logstash-output-elasticsearch
output {
elasticsearch {
# Destination cluster node addresses. No need to include the protocol.
hosts => ["xxx.xxx.xxx.xxx:9200", "xxx.xxx.xxx.xxx:9200"]
# Name of the index that events are written into.
index => "xxx"
# Mandatory fields for a security-mode cluster. (Delete them for a cluster with the security mode disabled.)
# user => "xxx" # Username for accessing the cluster.
# password => "xxx" # Password corresponding to the username.
# If SSL is enabled for the destination cluster, additionally configure the following information:
# ssl => true
# cacert => "/opt/logstash/extend/certs" # Path of the CA certificate used to verify the destination cluster.
# ssl_certificate_verification => false # Whether to enable SSL certificate verification for the destination cluster.
}
}

| Configuration Item | Mandatory | Description |
|---|---|---|
| logstash-input-redis | | |
| data_type | Yes | Redis data type. The value can be: list (read entries from a Redis list), channel (subscribe to a pub/sub channel), or pattern_channel (subscribe to channels matching a pattern). |
| key | Yes | Name of the Redis list or channel. When data_type is set to pattern_channel, the value can contain wildcard characters, for example, lgs-*. |
| host | Yes | Redis server address. Enter an IP address or domain name. |
| port | No | Listening port of the Redis server. |
| logstash-output-elasticsearch | | |
| hosts | Yes | Destination cluster node addresses. You can configure multiple IP addresses. Value format: ["<Node IP address 1>:<Port number>", "<Node IP address 2>:<Port number>"] |
| index | Yes | Name of the index to which events are written. |
| user | No | Username for accessing the destination cluster. Mandatory for a security-mode cluster. |
| password | No | Password for accessing the destination cluster. Mandatory for a security-mode cluster. |
| ssl | No | Whether SSL is enabled for the destination cluster. The value can be: true (enabled) or false (disabled). |
| cacert | No | Path of the CA certificate used to verify the destination cluster. Value format: <certificate path><certificate name>, for example, /opt/logstash/extend/certs. |
| ssl_certificate_verification | No | Whether SSL certificate verification is enabled for the destination cluster. The value can be: true (enabled) or false (disabled). |
For more information, see Redis input plugin and Elasticsearch output plugin from the official Logstash documentation.
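If the source is a Redis list rather than a pub/sub channel, the same template applies with a different data_type. A minimal sketch, assuming placeholder names lgs-queue and redis-list-demo:

```text
input {
  redis {
    # Pop entries from a Redis list instead of subscribing to channels
    data_type => "list"
    key => "lgs-queue"
    host => "xxx.xxx.xxx.xxx"
    port => 6379
  }
}
output {
  elasticsearch {
    hosts => ["xxx.xxx.xxx.xxx:9200"]
    index => "redis-list-demo"
  }
}
```

Note that with data_type set to list, Logstash consumes (removes) entries from the list, whereas channel and pattern_channel subscribe to Redis pub/sub messages, which are not persisted.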
Elasticsearch Template
Ingest data from an Elasticsearch or OpenSearch cluster into another Elasticsearch or OpenSearch cluster.
# logstash-input-elasticsearch
input {
elasticsearch {
# Source cluster node addresses. No need to include the protocol.
hosts => ["xxx.xxx.xxx.xxx:9200", "xxx.xxx.xxx.xxx:9200"]
# Name of the source index to be migrated
index => "xxx,xxx"
docinfo => true
# Mandatory fields for a security-mode cluster. (Delete them for a cluster with the security mode disabled.)
# user => "xxx" # Username for accessing the cluster.
# password => "xxx" # Password corresponding to the username.
# If SSL is enabled for the source cluster, additionally configure the following information:
# ssl => true
# ca_file => "/opt/logstash/extend/certs" # Path of the CA certificate used to verify the source cluster.
}
}
filter {
# Removes some metadata fields automatically added by Logstash
mutate {
remove_field => ["@timestamp", "@version"]
}
}
# logstash-output-elasticsearch
output {
elasticsearch {
# Destination cluster node addresses. No need to include the protocol.
hosts => ["xxx.xxx.xxx.xxx:9200", "xxx.xxx.xxx.xxx:9200"]
# Target index for writing events. The following configuration retains the source index name, document type, and document IDs.
index => "%{[@metadata][_index]}"
document_type => "%{[@metadata][_type]}"
document_id => "%{[@metadata][_id]}"
# Mandatory fields for a security-mode cluster. (Delete them for a cluster with the security mode disabled.)
# user => "xxx" # Username for accessing the cluster.
# password => "xxx" # Password corresponding to the username.
# If SSL is enabled for the destination cluster, additionally configure the following information:
# ssl => true
# cacert => "/opt/logstash/extend/certs" # Path of the CA certificate used to verify the destination cluster.
# ssl_certificate_verification => false # Whether to enable SSL certificate verification for the destination cluster.
}
}

| Configuration Item | Mandatory | Description |
|---|---|---|
| logstash-input-elasticsearch | | |
| hosts | Yes | Source cluster node addresses. You can configure multiple IP addresses. Value format: ["<Node IP address 1>:<Port number>", "<Node IP address 2>:<Port number>"] |
| index | Yes | Name of the source index to be migrated. Multiple index names can be separated with commas (,). |
| docinfo | No | Whether to retain the metadata (index name, document type, and document ID) of the source documents. The value can be: true or false. Set it to true if the output needs to reuse this metadata. |
| user | No | Username for accessing the source cluster. Mandatory for a security-mode cluster. |
| password | No | Password for accessing the source cluster. Mandatory for a security-mode cluster. |
| ssl | No | Whether SSL is enabled for the source cluster. The value can be: true (enabled) or false (disabled). |
| ca_file | No | Path of the CA certificate used to verify the source cluster. Value format: <certificate path><certificate name>, for example, /opt/logstash/extend/certs. |
| logstash-output-elasticsearch | | |
| hosts | Yes | Destination cluster node addresses. You can configure multiple IP addresses. Value format: ["<Node IP address 1>:<Port number>", "<Node IP address 2>:<Port number>"] |
| index | Yes | Name of the index to which events are written. Setting it to %{[@metadata][_index]} retains the source index name. |
| document_type | No | Type of the document to which events are written. Similar events should be written into documents of the same type. |
| document_id | No | ID of the document to which events are written. Specifying an existing ID may cause historical data to be overwritten. For data migration tasks, you are advised to set this parameter to prevent duplicate data. |
| user | No | Username for accessing the destination cluster. Mandatory for a security-mode cluster. |
| password | No | Password for accessing the destination cluster. Mandatory for a security-mode cluster. |
| ssl | No | Whether SSL is enabled for the destination cluster. The value can be: true (enabled) or false (disabled). |
| cacert | No | Path of the CA certificate used to verify the destination cluster. Value format: <certificate path><certificate name>, for example, /opt/logstash/extend/certs. |
| ssl_certificate_verification | No | Whether SSL certificate verification is enabled for the destination cluster. The value can be: true (enabled) or false (disabled). |
For more information, see Elasticsearch input plugin and Elasticsearch output plugin from the official Logstash documentation.
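When only part of a source index needs to be migrated, the input plugin also accepts a query parameter containing an Elasticsearch Query DSL string. A hedged sketch, in which the index name logs-example and the field level are placeholders:

```text
input {
  elasticsearch {
    hosts => ["xxx.xxx.xxx.xxx:9200"]
    index => "logs-example"
    # Migrate only documents matching this Query DSL filter
    query => '{ "query": { "range": { "level": { "gte": 3 } } } }'
    docinfo => true
  }
}
```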
JDBC Template
Ingest data from a MySQL or MariaDB database via JDBC into an Elasticsearch or OpenSearch cluster.
# logstash-input-jdbc
input {
jdbc {
# Path of the JDBC driver package
jdbc_driver_library => "/opt/logstash/extend/jars/mariadb-java-client-2.7.0.jar"
jdbc_driver_class => "org.mariadb.jdbc.Driver"
jdbc_connection_string => "jdbc:mariadb://xxx.xxx.xxx.xxx:xxx/database_name"
jdbc_user => "xxx"
jdbc_password => "xxx"
# SQL query statement, used to determine the scope of data synchronization
statement => "select * from table_name"
}
}
filter {
# Removes some metadata fields automatically added by Logstash
mutate {
remove_field => ["@timestamp", "@version"]
}
}
# logstash-output-elasticsearch
output {
elasticsearch {
# Destination cluster node addresses. No need to include the protocol.
hosts => ["xxx.xxx.xxx.xxx:9200", "xxx.xxx.xxx.xxx:9200"]
# Name of the index that events are written into.
index => "xxx"
# Mandatory fields for a security-mode cluster. (Delete them for a cluster with the security mode disabled.)
# user => "xxx" # Username for accessing the cluster.
# password => "xxx" # Password corresponding to the username.
# If SSL is enabled for the destination cluster, additionally configure the following information:
# ssl => true
# cacert => "/opt/logstash/extend/certs" # Path of the CA certificate used to verify the destination cluster.
# ssl_certificate_verification => false # Whether to enable SSL certificate verification for the destination cluster.
}
}

| Configuration Item | Mandatory | Description |
|---|---|---|
| logstash-input-jdbc | | |
| jdbc_driver_library | Yes | Path of the JDBC driver package. MariaDB and MySQL JDBC drivers are pre-installed for Logstash clusters in CSS. Value format: <Default certificate path>jars/<Driver name>, for example, /opt/logstash/extend/jars/mariadb-java-client-2.7.0.jar. |
| jdbc_driver_class | Yes | Driver class name, for example, org.mariadb.jdbc.Driver for MariaDB. |
| jdbc_connection_string | Yes | Database connection string. Value format: jdbc:mariadb://<IP address>:<Port number>/<Database name>. |
| jdbc_user | Yes | Database username. |
| jdbc_password | Yes | Password of the database user. |
| statement | Yes | SQL query statement, used to determine the scope of data synchronization. |
| logstash-output-elasticsearch | | |
| hosts | Yes | Destination cluster node addresses. You can configure multiple IP addresses. Value format: ["<Node IP address 1>:<Port number>", "<Node IP address 2>:<Port number>"] |
| index | Yes | Name of the index to which events are written. |
| user | No | Username for accessing the destination cluster. Mandatory for a security-mode cluster. |
| password | No | Password for accessing the destination cluster. Mandatory for a security-mode cluster. |
| ssl | No | Whether SSL is enabled for the destination cluster. The value can be: true (enabled) or false (disabled). |
| cacert | No | Path of the CA certificate used to verify the destination cluster. Value format: <certificate path><certificate name>, for example, /opt/logstash/extend/certs. |
| ssl_certificate_verification | No | Whether SSL certificate verification is enabled for the destination cluster. The value can be: true (enabled) or false (disabled). |
For more information, see Jdbc input plugin and Elasticsearch output plugin from the official Logstash documentation.
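For ongoing synchronization rather than a one-off import, the JDBC input plugin supports scheduled incremental queries that track the last-seen value of a column. A sketch, assuming the table has an auto-increment id column (the database name, table name, and credentials are placeholders):

```text
input {
  jdbc {
    jdbc_driver_library => "/opt/logstash/extend/jars/mariadb-java-client-2.7.0.jar"
    jdbc_driver_class => "org.mariadb.jdbc.Driver"
    jdbc_connection_string => "jdbc:mariadb://xxx.xxx.xxx.xxx:3306/database_name"
    jdbc_user => "xxx"
    jdbc_password => "xxx"
    # Run the query once per minute (cron syntax)
    schedule => "* * * * *"
    # Track the highest id seen so far and persist it between runs
    use_column_value => true
    tracking_column => "id"
    # :sql_last_value is replaced with the stored tracking value
    statement => "SELECT * FROM table_name WHERE id > :sql_last_value"
  }
}
```

This way, each scheduled run fetches only rows added since the previous run instead of re-reading the whole table.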
Kafka Template
Ingest data from Kafka into an Elasticsearch or OpenSearch cluster.
# logstash-input-kafka
input {
kafka {
# Kafka broker addresses
bootstrap_servers => "xxx.xxx.xxx.xxx:xxx"
# Kafka topics to subscribe to
topics => ["xxx"]
# Consumer group ID for offset tracking
group_id => "kafka_es_test"
# Offset reset policy
auto_offset_reset => "earliest"
}
}
filter {
# Removes some metadata fields automatically added by Logstash
mutate {
remove_field => ["@timestamp", "@version"]
}
}
# logstash-output-elasticsearch
output {
elasticsearch {
# Destination cluster node addresses. No need to include the protocol.
hosts => ["xxx.xxx.xxx.xxx:9200", "xxx.xxx.xxx.xxx:9200"]
# Name of the index that events are written into.
index => "xxx"
# Mandatory fields for a security-mode cluster. (Delete them for a cluster with the security mode disabled.)
# user => "xxx" # Username for accessing the cluster.
# password => "xxx" # Password corresponding to the username.
# If SSL is enabled for the destination cluster, additionally configure the following information:
# ssl => true
# cacert => "/opt/logstash/extend/certs" # Path of the CA certificate used to verify the destination cluster.
# ssl_certificate_verification => false # Whether to enable SSL certificate verification for the destination cluster.
}
}

| Configuration Item | Mandatory | Description |
|---|---|---|
| logstash-input-kafka | | |
| bootstrap_servers | Yes | Kafka broker addresses. Multiple addresses can be configured. Value format: "<Kafka broker IP address 1>:<Port number>,<Kafka broker IP address 2>:<Port number>" |
| topics | Yes | Kafka topics to subscribe to. Multiple topics can be configured. |
| group_id | Yes | Consumer group ID for offset tracking. |
| auto_offset_reset | Yes | Offset reset policy. The value can be: earliest (consume from the beginning of the topic) or latest (consume only new messages). |
| logstash-output-elasticsearch | | |
| hosts | Yes | Destination cluster node addresses. You can configure multiple IP addresses. Value format: ["<Node IP address 1>:<Port number>", "<Node IP address 2>:<Port number>"] |
| index | Yes | Name of the index to which events are written. |
| user | No | Username for accessing the destination cluster. Mandatory for a security-mode cluster. |
| password | No | Password for accessing the destination cluster. Mandatory for a security-mode cluster. |
| ssl | No | Whether SSL is enabled for the destination cluster. The value can be: true (enabled) or false (disabled). |
| cacert | No | Path of the CA certificate used to verify the destination cluster. Value format: <certificate path><certificate name>, for example, /opt/logstash/extend/certs. |
| ssl_certificate_verification | No | Whether SSL certificate verification is enabled for the destination cluster. The value can be: true (enabled) or false (disabled). |
For more information, see Kafka input plugin and Elasticsearch output plugin from the official Logstash documentation.
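If the Kafka messages are JSON, adding a codec to the input parses each message into structured fields before indexing. A sketch with placeholder broker addresses and topic names:

```text
input {
  kafka {
    bootstrap_servers => "xxx.xxx.xxx.xxx:9092"
    topics => ["xxx"]
    group_id => "kafka_es_test"
    auto_offset_reset => "earliest"
    # Parse each Kafka message as a JSON document
    codec => "json"
    # Number of consumer threads; usually at most the topic's partition count
    consumer_threads => 2
  }
}
```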
Beats Template
Ingest data from Beats into an Elasticsearch or OpenSearch cluster.
# logstash-input-beats
input {
beats {
# Logstash listening port, used for receiving data from Filebeat or Metricbeat.
port => 5044
}
}
filter {
# Removes some metadata fields automatically added by Logstash
mutate {
remove_field => ["@timestamp", "@version"]
}
}
# logstash-output-elasticsearch
output {
elasticsearch {
# Destination cluster node addresses. No need to include the protocol.
hosts => ["xxx.xxx.xxx.xxx:9200", "xxx.xxx.xxx.xxx:9200"]
# Name of the index that events are written into.
index => "xxx"
# Mandatory fields for a security-mode cluster. (Delete them for a cluster with the security mode disabled.)
# user => "xxx" # Username for accessing the cluster.
# password => "xxx" # Password corresponding to the username.
# If SSL is enabled for the destination cluster, additionally configure the following information:
# ssl => true
# cacert => "/opt/logstash/extend/certs" # Path of the CA certificate used to verify the destination cluster.
# ssl_certificate_verification => false # Whether to enable SSL certificate verification for the destination cluster.
}
}

| Configuration Item | Mandatory | Description |
|---|---|---|
| logstash-input-beats | | |
| port | Yes | Logstash listening port, used for receiving data from Filebeat or Metricbeat. Make sure this port (for example, 5044) is not occupied by other processes, and that it is allowed by the Logstash cluster's security group rules. |
| logstash-output-elasticsearch | | |
| hosts | Yes | Destination cluster node addresses. You can configure multiple IP addresses. Value format: ["<Node IP address 1>:<Port number>", "<Node IP address 2>:<Port number>"] |
| index | Yes | Name of the index to which events are written. |
| user | No | Username for accessing the destination cluster. Mandatory for a security-mode cluster. |
| password | No | Password for accessing the destination cluster. Mandatory for a security-mode cluster. |
| ssl | No | Whether SSL is enabled for the destination cluster. The value can be: true (enabled) or false (disabled). |
| cacert | No | Path of the CA certificate used to verify the destination cluster. Value format: <certificate path><certificate name>, for example, /opt/logstash/extend/certs. |
| ssl_certificate_verification | No | Whether SSL certificate verification is enabled for the destination cluster. The value can be: true (enabled) or false (disabled). |
For more information, see Beats input plugin and Elasticsearch output plugin from the official Logstash documentation.
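Events shipped by Beats carry their origin in the @metadata field, so the output can route each Beat into its own daily index. A sketch following the common pattern from the Logstash documentation (the resulting index names, such as filebeat-<version>-<date>, depend on the sending Beat):

```text
output {
  elasticsearch {
    hosts => ["xxx.xxx.xxx.xxx:9200"]
    # One index per Beat, version, and day
    index => "%{[@metadata][beat]}-%{[@metadata][version]}-%{+YYYY.MM.dd}"
  }
}
```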