Updated on 2026-04-30 GMT+08:00

Configuration File Templates

When building a real-time log processing or big data migration pipeline, writing Logstash configuration files manually is often time-consuming and error-prone due to complex syntax. Configuration errors may even lead to data loss. To address this, CSS provides prebuilt Logstash configuration templates for a variety of mainstream data sources, such as Redis, Elasticsearch, MySQL (JDBC), and Kafka. Using these verified templates, you can quickly set up automated data transmission channels from data sources to CSS Elasticsearch or OpenSearch clusters by configuring only the required connection information. This significantly reduces operational overhead and helps ensure the reliability of data ingestion pipelines.

Introduction to the Templates

Table 1 Logstash configuration file templates

System Template Name

Description

Details

redis

Ingests data from a Redis database into an Elasticsearch or OpenSearch cluster.

Redis Template

elasticsearch

Ingests data from an Elasticsearch or OpenSearch cluster into another Elasticsearch or OpenSearch cluster.

Elasticsearch Template

jdbc

Ingests data from a MySQL or MariaDB database via JDBC into an Elasticsearch or OpenSearch cluster.

JDBC Template

kafka

Ingests data from Kafka into an Elasticsearch or OpenSearch cluster.

Kafka Template

beats

Ingests data from Beats into an Elasticsearch or OpenSearch cluster.

Beats Template

Redis Template

Ingest data from a Redis database into an Elasticsearch or OpenSearch cluster.

# logstash-input-redis
input {
    redis {
        # Redis data type
        data_type => "pattern_channel"
        # Redis list or channel name
        key => "lgs-*"
        host => "xxx.xxx.xxx.xxx"
        port => 6379
    }
}

filter {
    # Removes some metadata fields automatically added by Logstash
    mutate {
        remove_field => ["@timestamp", "@version"]
    }
}

# logstash-output-elasticsearch
output {
    elasticsearch {
        # Destination cluster node addresses. No need to include the protocol.
        hosts => ["xxx.xxx.xxx.xxx:9200", "xxx.xxx.xxx.xxx:9200"]
        # Name of the index that events are written into.
        index => "xxx"
        # Mandatory fields for a security-mode cluster. (Delete them for a cluster with the security mode disabled.)
        # user => "xxx"           # Username for accessing the cluster.
        # password => "xxx"       # Password corresponding to the username.
        # If SSL is enabled for the destination cluster, additionally configure the following information:
        # ssl => true
        # cacert => "/opt/logstash/extend/certs"    # Path of the CA certificate used to verify the destination cluster.
        # ssl_certificate_verification => false     # Whether to enable SSL certificate verification for the destination cluster.
    }
}
Table 2 Configuration items

Configuration Item

Mandatory

Description

logstash-input-redis

data_type

Yes

Redis data type.

The value can be:

  • list: Implements a message queue by consuming data from a Redis list using the blocking pop command (BLPOP).
  • channel: Subscribes to a specific channel name using the SUBSCRIBE command.
  • pattern_channel: Subscribes to channels matching a pattern using the PSUBSCRIBE command.

key

Yes

Name of the Redis list or channel.

  • Single name: Enter the list or channel name (for example, log).
  • Wildcard: Use the wildcard (*) to match multiple channels. For example, log-* indicates all channels whose name starts with log-.

host

Yes

Redis server address.

Enter an IP address or domain name.

port

No

Listening port of the Redis server.

logstash-output-elasticsearch

hosts

Yes

Destination cluster node addresses. You can configure multiple IP addresses.

Value format: ["<Node IP address 1>:<Port number>", "<Node IP address 2>:<Port number>"]

index

Yes

Name of the index to which events are written.

  • Single index: Enter the index name, for example, my_index.
  • Multiple indexes: Use dynamic naming (based on event fields) or configure multiple conditional output blocks to route events to different indexes.

user

No

Username for accessing the destination cluster.

Mandatory for a security-mode cluster.

password

No

Password for accessing the destination cluster.

Mandatory for a security-mode cluster.

ssl

No

Whether SSL is enabled for the destination cluster.

The value can be:

  • true: Uses HTTPS to transmit data.
  • false: Uses HTTP to transmit data.

cacert

No

Path of the CA certificate used to verify the destination cluster.

Value format: <certificate path><certificate name>, for example, /opt/logstash/extend/certs.

  • If the destination is a CSS Elasticsearch or OpenSearch cluster, the certificate name and certificate path of the default CA certificate will be obtained. For details, see Viewing Default Certificates.
  • If the destination is a self-managed or third-party Elasticsearch or OpenSearch cluster, upload the destination cluster's security certificate to Logstash and obtain the certificate name and certificate path. For details, see Uploading a Custom Certificate.

ssl_certificate_verification

No

Whether SSL certificate verification is enabled for the destination cluster.

The value can be:

  • true (default): Uses an SSL certificate to verify the destination cluster.
  • false: Ignores SSL certificate verification.

For more information, see Redis input plugin and Elasticsearch output plugin from the official Logstash documentation.
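The template above subscribes to pattern channels. If the data sits in a Redis list instead, a variant sketch might look like the following. The key, host, and index names are placeholders, and the daily index pattern in the output is an illustrative example of the dynamic naming mentioned in Table 2, not part of the system template:

```
input {
    redis {
        # Consume from a Redis list using the blocking pop command (BLPOP)
        data_type => "list"
        key => "app-logs"
        host => "xxx.xxx.xxx.xxx"
        port => 6379
    }
}

output {
    elasticsearch {
        hosts => ["xxx.xxx.xxx.xxx:9200"]
        # Dynamic naming: one index per day, for example app-logs-2026.04.30
        index => "app-logs-%{+YYYY.MM.dd}"
    }
}
```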

Elasticsearch Template

Ingest data from an Elasticsearch or OpenSearch cluster into another Elasticsearch or OpenSearch cluster.

# logstash-input-elasticsearch
input {
    elasticsearch {
        # Source cluster node addresses. No need to include the protocol.
        hosts => ["xxx.xxx.xxx.xxx:9200", "xxx.xxx.xxx.xxx:9200"]
        # Name of the source index to be migrated
        index => "xxx,xxx"
        docinfo => true
        # Mandatory fields for a security-mode cluster. (Delete them for a cluster with the security mode disabled.)
        # user => "xxx"           # Username for accessing the cluster.
        # password => "xxx"      # Password corresponding to the username.
        # If SSL is enabled for the source cluster, additionally configure the following information:
        # ssl => true
        # ca_file => "/opt/logstash/extend/certs"     # Path of the CA certificate used to verify the source cluster.
    }
}

filter {
    # Removes some metadata fields automatically added by Logstash
    mutate {
        remove_field => ["@timestamp", "@version"]
    }
}

# logstash-output-elasticsearch
output {
    elasticsearch {
        # Destination cluster node addresses. No need to include the protocol.
        hosts => ["xxx.xxx.xxx.xxx:9200", "xxx.xxx.xxx.xxx:9200"]
        # Target index for writing events. The following configuration retains the source index name, document type, and document IDs.
        index => "%{[@metadata][_index]}"
        document_type => "%{[@metadata][_type]}"
        document_id => "%{[@metadata][_id]}"
        # Mandatory fields for a security-mode cluster. (Delete them for a cluster with the security mode disabled.)
        # user => "xxx"           # Username for accessing the cluster.
        # password => "xxx"       # Password corresponding to the username.
        # If SSL is enabled for the destination cluster, additionally configure the following information:
        # ssl => true
        # cacert => "/opt/logstash/extend/certs"      # Path of the CA certificate used to verify the destination cluster.
        # ssl_certificate_verification => false        # Whether to enable SSL certificate verification for the destination cluster.
    }
}
Table 3 Configuration items

Configuration Item

Mandatory

Description

logstash-input-elasticsearch

hosts

Yes

Source cluster node addresses. You can configure multiple IP addresses.

Value format: ["<Node IP address 1>:<Port number>", "<Node IP address 2>:<Port number>"]

index

Yes

Name of the source index to be migrated.

  • Single index: Enter the index name, for example, my_index.
  • Multiple indexes: Enter multiple index names and use a comma (,) to separate them, for example, my_index1,my_index2.
  • Wildcard: Use the wildcard (*) to match multiple indexes. For example, myindex* indicates all indexes whose name starts with myindex.

docinfo

No

Whether to retain the metadata of the source documents.

The value can be:

  • true: Extracts metadata into the @metadata field of the events.
  • false (default): Extracts only the _source part of documents.

user

No

Username for accessing the source cluster.

Mandatory for a security-mode cluster.

password

No

Password for accessing the source cluster.

Mandatory for a security-mode cluster.

ssl

No

Whether SSL is enabled for the source cluster.

The value can be:

  • true: Uses HTTPS to transmit data.
  • false: Uses HTTP to transmit data.

ca_file

No

Path of the CA certificate used to verify the source cluster.

Value format: <certificate path><certificate name>, for example, /opt/logstash/extend/certs.

  • If the source is a CSS Elasticsearch or OpenSearch cluster, the certificate name and certificate path of the default CA certificate will be obtained. For details, see Viewing Default Certificates.
  • If the source is a self-managed or third-party Elasticsearch or OpenSearch cluster, upload the source cluster's security certificate to Logstash and obtain the certificate name and certificate path. For details, see Uploading a Custom Certificate.

logstash-output-elasticsearch

hosts

Yes

Destination cluster node addresses. You can configure multiple IP addresses.

Value format: ["<Node IP address 1>:<Port number>", "<Node IP address 2>:<Port number>"]

index

Yes

Name of the index to which events are written.

  • Single index: Enter the index name, for example, my_index.
  • Multiple indexes: Use dynamic naming (based on event fields) or configure multiple conditional output blocks to route events to different indexes.

document_type

No

Type of the document to which events are written.

Similar events should be written into documents of the same type.

document_id

No

ID of the document to which events are written.

Specifying an existing ID may cause historical data to be overwritten.

For data migration tasks, you are advised to set this parameter to prevent duplicate data.

user

No

Username for accessing the destination cluster.

Mandatory for a security-mode cluster.

password

No

Password for accessing the destination cluster.

Mandatory for a security-mode cluster.

ssl

No

Whether SSL is enabled for the destination cluster.

The value can be:

  • true: Uses HTTPS to transmit data.
  • false: Uses HTTP to transmit data.

cacert

No

Path of the CA certificate used to verify the destination cluster.

Value format: <certificate path><certificate name>, for example, /opt/logstash/extend/certs.

  • If the destination is a CSS Elasticsearch or OpenSearch cluster, the certificate name and certificate path of the default CA certificate will be obtained. For details, see Viewing Default Certificates.
  • If the destination is a self-managed or third-party Elasticsearch or OpenSearch cluster, upload the destination cluster's security certificate to Logstash and obtain the certificate name and certificate path. For details, see Uploading a Custom Certificate.

ssl_certificate_verification

No

Whether SSL certificate verification is enabled for the destination cluster.

The value can be:

  • true (default): Uses an SSL certificate to verify the destination cluster.
  • false: Ignores SSL certificate verification.

For more information, see Elasticsearch input plugin and Elasticsearch output plugin from the official Logstash documentation.
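When only part of a source index needs to be migrated, the elasticsearch input plugin also accepts a query in Elasticsearch DSL. A sketch follows; the field name and time range are illustrative assumptions, not values from the system template:

```
input {
    elasticsearch {
        hosts => ["xxx.xxx.xxx.xxx:9200"]
        index => "my_index"
        # Migrate only documents matching this Elasticsearch DSL query
        query => '{ "query": { "range": { "timestamp": { "gte": "now-7d" } } } }'
        # Batch size per scroll request and keep-alive time of the scroll context
        size => 1000
        scroll => "5m"
        # Retain document metadata so the output can reuse index and document IDs
        docinfo => true
    }
}
```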

JDBC Template

Ingest data from a MySQL or MariaDB database via JDBC into an Elasticsearch or OpenSearch cluster.

# logstash-input-jdbc
input {
    jdbc {
        # Path of the JDBC driver package
        jdbc_driver_library => "/opt/logstash/extend/jars/mariadb-java-client-2.7.0.jar"
        jdbc_driver_class => "org.mariadb.jdbc.Driver"
        jdbc_connection_string => "jdbc:mariadb://xxx.xxx.xxx.xxx:xxx/database_name"
        jdbc_user => "xxx"
        jdbc_password => "xxx"
        # SQL query statement, used to determine the scope of data synchronization
        statement => "select * from table_name"
    }
}

filter {
    # Removes some metadata fields automatically added by Logstash
    mutate {
        remove_field => ["@timestamp", "@version"]
    }
}

# logstash-output-elasticsearch
output {
    elasticsearch {
        # Destination cluster node addresses. No need to include the protocol.
        hosts => ["xxx.xxx.xxx.xxx:9200", "xxx.xxx.xxx.xxx:9200"]
        # Name of the index that events are written into.
        index => "xxx"
        # Mandatory fields for a security-mode cluster. (Delete them for a cluster with the security mode disabled.)
        # user => "xxx"           # Username for accessing the cluster.
        # password => "xxx"       # Password corresponding to the username.
        # If SSL is enabled for the destination cluster, additionally configure the following information:
        # ssl => true
        # cacert => "/opt/logstash/extend/certs"      # Path of the CA certificate used to verify the destination cluster.
        # ssl_certificate_verification => false        # Whether to enable SSL certificate verification for the destination cluster.
    }
}
Table 4 Configuration items

Configuration Item

Mandatory

Description

logstash-input-jdbc

jdbc_driver_library

Yes

Path of the JDBC driver package.

MariaDB and MySQL JDBC drivers are pre-installed for Logstash clusters in CSS.

Value format: <Default certificate path>jars/<Driver name> (for example, /opt/logstash/extend/jars/mariadb-java-client-2.7.0.jar)

  • For the default certificate path, see Viewing Default Certificates.
  • Driver name:
    • MariaDB driver: mariadb-java-client-2.7.0.jar and mariadb-java-client-2.4.0.jar
    • MySQL driver (supported only by Logstash clusters whose image version is not earlier than x.x.x_26.1.0_xxx): mysql-connector-j-9.4.0.jar, mysql-connector-java-8.0.16.jar, and mysql-connector-java-8.0.30.jar

jdbc_driver_class

Yes

Driver class name.

  • For MariaDB, set this field to org.mariadb.jdbc.Driver.
  • For MySQL, set it to com.mysql.cj.jdbc.Driver.

jdbc_connection_string

Yes

Database connection string.

  • If the MariaDB driver is used, the value format is jdbc:mariadb://<Database access address>:<Listening port>/<Database name>.
  • If the MySQL driver is used, the value format is jdbc:mysql://<Database access address>:<Listening port>/<Database name>.

jdbc_user

Yes

Database username.

jdbc_password

Yes

Password of the database user.

statement

Yes

SQL query statement, used to determine the scope of data synchronization.

logstash-output-elasticsearch

hosts

Yes

Destination cluster node addresses. You can configure multiple IP addresses.

Value format: ["<Node IP address 1>:<Port number>", "<Node IP address 2>:<Port number>"]

index

Yes

Name of the index to which events are written.

  • Single index: Enter the index name, for example, my_index.
  • Multiple indexes: Use dynamic naming (based on event fields) or configure multiple conditional output blocks to route events to different indexes.

user

No

Username for accessing the destination cluster.

Mandatory for a security-mode cluster.

password

No

Password for accessing the destination cluster.

Mandatory for a security-mode cluster.

ssl

No

Whether SSL is enabled for the destination cluster.

The value can be:

  • true: Uses HTTPS to transmit data.
  • false: Uses HTTP to transmit data.

cacert

No

Path of the CA certificate used to verify the destination cluster.

Value format: <certificate path><certificate name>, for example, /opt/logstash/extend/certs.

  • If the destination is a CSS Elasticsearch or OpenSearch cluster, the certificate name and certificate path of the default CA certificate will be obtained. For details, see Viewing Default Certificates.
  • If the destination is a self-managed or third-party Elasticsearch or OpenSearch cluster, upload the destination cluster's security certificate to Logstash and obtain the certificate name and certificate path. For details, see Uploading a Custom Certificate.

ssl_certificate_verification

No

Whether SSL certificate verification is enabled for the destination cluster.

The value can be:

  • true (default): Uses an SSL certificate to verify the destination cluster.
  • false: Ignores SSL certificate verification.

For more information, see Jdbc input plugin and Elasticsearch output plugin from the official Logstash documentation.
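The statement in the template performs a one-off full synchronization. For scheduled incremental synchronization, the jdbc input plugin can track an offset column via :sql_last_value. The following is a sketch; the table name and the id tracking column are placeholders:

```
input {
    jdbc {
        jdbc_driver_library => "/opt/logstash/extend/jars/mariadb-java-client-2.7.0.jar"
        jdbc_driver_class => "org.mariadb.jdbc.Driver"
        jdbc_connection_string => "jdbc:mariadb://xxx.xxx.xxx.xxx:3306/database_name"
        jdbc_user => "xxx"
        jdbc_password => "xxx"
        # Run the query every minute (cron-like syntax)
        schedule => "* * * * *"
        # Track the last-seen value of the id column and resume from it
        use_column_value => true
        tracking_column => "id"
        tracking_column_type => "numeric"
        statement => "SELECT * FROM table_name WHERE id > :sql_last_value"
    }
}
```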

Kafka Template

Ingest data from Kafka into an Elasticsearch or OpenSearch cluster.

# logstash-input-kafka
input {
    kafka {
        # Kafka broker addresses
        bootstrap_servers => "xxx.xxx.xxx.xxx:xxx"
        # Kafka topics to subscribe to
        topics => ["xxx"]
        # Consumer group ID for offset tracking
        group_id => "kafka_es_test"
        # Offset reset policy
        auto_offset_reset => "earliest"
    }
}

filter {
    # Removes some metadata fields automatically added by Logstash
    mutate {
        remove_field => ["@timestamp", "@version"]
    }
}

# logstash-output-elasticsearch
output {
    elasticsearch {
        # Destination cluster node addresses. No need to include the protocol.
        hosts => ["xxx.xxx.xxx.xxx:9200", "xxx.xxx.xxx.xxx:9200"]
        # Name of the index that events are written into.
        index => "xxx"
        # Mandatory fields for a security-mode cluster. (Delete them for a cluster with the security mode disabled.)
        # user => "xxx"           # Username for accessing the cluster.
        # password => "xxx"       # Password corresponding to the username.
        # If SSL is enabled for the destination cluster, additionally configure the following information:
        # ssl => true
        # cacert => "/opt/logstash/extend/certs"       # Path of the CA certificate used to verify the destination cluster.
        # ssl_certificate_verification => false        # Whether to enable SSL certificate verification for the destination cluster.
    }
}
Table 5 Configuration items

Configuration Item

Mandatory

Description

logstash-input-kafka

bootstrap_servers

Yes

Kafka broker addresses. Multiple addresses can be configured.

Value format: "<Kafka broker IP address 1>:<port number>, <Kafka broker IP address 2>:<port number>"

topics

Yes

Kafka topics to subscribe to.

  • Single topic: Enter the topic name, for example, ["log"].
  • Multiple topics: Enter multiple topic names and separate them with commas, for example, ["log1", "log2"].

group_id

Yes

Consumer group ID for offset tracking.

auto_offset_reset

Yes

Offset reset policy.

The value can be:

  • earliest: Starts consuming from the earliest available messages.
  • latest: Starts consuming from the most recent messages.
  • none: Reports an error if no previous offset is found for the consumer group.
  • Any other value: Throws an exception to the consumer.

logstash-output-elasticsearch

hosts

Yes

Destination cluster node addresses. You can configure multiple addresses.

Value format: ["<Node IP address 1>:<port number>", "<Node IP address 2>:<port number>"].

index

Yes

Name of the index to which events are written.

  • Single index: Enter the index name, for example, my_index.
  • Multiple indexes: Use dynamic naming (based on event fields) or configure multiple conditional output blocks to route events to different indexes.

user

No

Username for accessing the destination cluster.

Mandatory for a security-mode cluster.

password

No

Password for accessing the destination cluster.

Mandatory for a security-mode cluster.

ssl

No

Whether SSL is enabled for the destination cluster.

The value can be:

  • true: Uses HTTPS to transmit data.
  • false: Uses HTTP to transmit data.

cacert

No

Path of the CA certificate used to verify the destination cluster.

Value format: <certificate path><certificate name>, for example, /opt/logstash/extend/certs.

  • If the destination is a CSS Elasticsearch or OpenSearch cluster, the certificate name and certificate path of the default CA certificate will be obtained. For details, see Viewing Default Certificates.
  • If the destination is a self-managed or third-party Elasticsearch or OpenSearch cluster, upload the destination cluster's security certificate to Logstash and obtain the certificate name and certificate path. For details, see Uploading a Custom Certificate.

ssl_certificate_verification

No

Whether SSL certificate verification is enabled for the destination cluster.

The value can be:

  • true (default): Uses an SSL certificate to verify the destination cluster.
  • false: Ignores SSL certificate verification.

For more information, see Kafka input plugin and Elasticsearch output plugin from the official Logstash documentation.
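If the Kafka messages are JSON, a codec can parse them into structured event fields before they reach the output. Consumer parallelism can also be raised to match the partition count. The following sketch illustrates both; the topic names and thread count are illustrative assumptions:

```
input {
    kafka {
        bootstrap_servers => "xxx.xxx.xxx.xxx:9092"
        topics => ["log1", "log2"]
        group_id => "kafka_es_test"
        auto_offset_reset => "earliest"
        # Parse each message as JSON into event fields
        codec => "json"
        # Number of consumer threads; at most one per topic partition is useful
        consumer_threads => 2
    }
}
```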

Beats Template

Ingest data from Beats into an Elasticsearch or OpenSearch cluster.

# logstash-input-beats
input {
    beats {
        # Logstash listening port, used for receiving data from Filebeat or Metricbeat.
        port => 5044
    }
}

filter {
    # Removes some metadata fields automatically added by Logstash
    mutate {
        remove_field => ["@timestamp", "@version"]
    }
}

# logstash-output-elasticsearch
output {
    elasticsearch {
        # Destination cluster node addresses. No need to include the protocol.
        hosts => ["xxx.xxx.xxx.xxx:9200", "xxx.xxx.xxx.xxx:9200"]
        # Name of the index that events are written into.
        index => "xxx"
        # Mandatory fields for a security-mode cluster. (Delete them for a cluster with the security mode disabled.)
        # user => "xxx"           # Username for accessing the cluster.
        # password => "xxx"       # Password corresponding to the username.
        # If SSL is enabled for the destination cluster, additionally configure the following information:
        # ssl => true
        # cacert => "/opt/logstash/extend/certs"      # Path of the CA certificate used to verify the destination cluster.
        # ssl_certificate_verification => false        # Whether to enable SSL certificate verification for the destination cluster.
    }
}
Table 6 Configuration items

Configuration Item

Mandatory

Description

logstash-input-beats

port

Yes

Logstash listening port, used for receiving data from Filebeat or Metricbeat.

Make sure this port (for example, 5044) is not occupied by other processes, and that it is allowed by the Logstash cluster's security group rules.

logstash-output-elasticsearch

hosts

Yes

Destination cluster node addresses. You can configure multiple IP addresses.

Value format: ["<Node IP address 1>:<Port number>", "<Node IP address 2>:<Port number>"]

index

Yes

Name of the index to which events are written.

  • Single index: Enter the index name, for example, my_index.
  • Multiple indexes: Use dynamic naming (based on event fields) or configure multiple conditional output blocks to route events to different indexes.

user

No

Username for accessing the destination cluster.

Mandatory for a security-mode cluster.

password

No

Password for accessing the destination cluster.

Mandatory for a security-mode cluster.

ssl

No

Whether SSL is enabled for the destination cluster.

The value can be:

  • true: Uses HTTPS to transmit data.
  • false: Uses HTTP to transmit data.

cacert

No

Path of the CA certificate used to verify the destination cluster.

Value format: <certificate path><certificate name>, for example, /opt/logstash/extend/certs.

  • If the destination is a CSS Elasticsearch or OpenSearch cluster, the certificate name and certificate path of the default CA certificate will be obtained. For details, see Viewing Default Certificates.
  • If the destination is a self-managed or third-party Elasticsearch or OpenSearch cluster, upload the destination cluster's security certificate to Logstash and obtain the certificate name and certificate path. For details, see Uploading a Custom Certificate.

ssl_certificate_verification

No

Whether SSL certificate verification is enabled for the destination cluster.

The value can be:

  • true (default): Uses an SSL certificate to verify the destination cluster.
  • false: Ignores SSL certificate verification.

For more information, see Beats input plugin and Elasticsearch output plugin from the official Logstash documentation.
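On the shipper side, Filebeat (or Metricbeat) must point its output at the Logstash listening port configured above. A minimal filebeat.yml sketch follows, assuming Logstash is reachable at the placeholder address and the log path is illustrative:

```
filebeat.inputs:
  - type: filestream
    paths:
      - /var/log/app/*.log

output.logstash:
  # Address and listening port of the Logstash beats input (5044 in the template)
  hosts: ["xxx.xxx.xxx.xxx:5044"]
```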