Updated on 2024-04-19 GMT+08:00

Configuration Center

You can modify the configuration files of Logstash clusters in the configuration center to migrate data from different data sources to destinations. Typically, the destinations are Elasticsearch clusters.

Testing Connectivity

Before migrating data with a Logstash cluster, you can test whether the network connection between the data source and the Logstash cluster works. You can also enter the IP address or domain name and port of the destination to check the network connectivity between the Logstash cluster and the destination.

  1. Log in to the CSS management console.
  2. Choose Clusters > Logstash and click the name of the target cluster to go to the Cluster Information page, then click Configuration Center. Alternatively, click Configuration Center in the Operation column of the target cluster.
  3. On the Configuration Center page, click Test Connectivity.
  4. Enter the IP address and port of the data source and click Test.
    Figure 1 Testing connectivity

    You can test a maximum of 10 IP addresses or domain names at a time. Click Add to add more entries, and click Test at the bottom to check the connectivity of all of them at once.
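
The console test checks whether a TCP connection to each address and port can be established. As a rough equivalent outside the console, you can run a check like the following from any host in the same network (the address and port in the commented example are placeholders, not values from this document):

```python
import socket

def is_reachable(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        # Covers connection refused, timeouts, and DNS resolution failures.
        return False

# Example: check a hypothetical Elasticsearch destination.
# is_reachable("192.168.0.10", 9200)
```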

Creating a Configuration File

  1. Log in to the CSS management console.
  2. Choose Clusters > Logstash and click the name of the target cluster to go to the Cluster Information page, then click Configuration Center. Alternatively, click Configuration Center in the Operation column of the target cluster.
  3. On the Configuration Center page, click Create in the upper right corner.

    You can create a configuration file using a system template or a custom template, or directly create a configuration file.

    • To use a system template, click Apply in the Operation column of the target template, and then configure Name, Configuration File Content, and Hidden Content.

      Currently, the following system template types are supported:

      • redis: You can import data from a Redis database to an Elasticsearch cluster.
      • elasticsearch: You can migrate data between Elasticsearch clusters.
      • jdbc: You can import data from a database via Java Database Connectivity (JDBC) to an Elasticsearch cluster.
      • kafka: You can import data from Kafka to an Elasticsearch cluster.
      • beats: You can import data from Beats to an Elasticsearch cluster.
      • dis: You can import data from DIS to an Elasticsearch cluster.

      For details about how to set parameters for each template, see Parameters for Configuring a System Template.

    • To directly create a configuration file, enter Name and Configuration File. The created configuration file cannot exceed 100 KB. A maximum of 50 configuration files can be created.
    • Hidden Content: Enter a sensitive string and press Enter to create it. The string will be replaced with asterisks (*) in configurations. (Up to 20 strings are allowed, and each can be up to 512 bytes long.)
  4. After the configuration is complete, click Next and set parameters.
    These parameters configure the pipeline used during data migration.
    Table 1 Parameters

    • pipeline.workers: Number of worker threads in the Filters + Outputs stage of the pipeline. The default value is the number of CPU cores. The recommended value ranges from 1 to 20.
    • pipeline.batch.size: Maximum number of events an individual worker thread collects from inputs before attempting to execute its filters and outputs. A larger value is generally more efficient but increases memory overhead. The default value is 125.
    • pipeline.batch.delay: How long (in milliseconds) to wait for each event when creating pipeline event batches before dispatching an undersized batch to pipeline worker threads. The default value is 50.
    • queue.type: Internal queuing model used for event buffering. memory indicates a memory-based traditional queue, and persisted indicates a disk-based ACKed persistent queue. The default value is memory.
    • queue.checkpoint.writes: Maximum number of written events before forcing a checkpoint when persistent queues are enabled. The default value is 1024.
    • queue.max_bytes: Total capacity of the persistent queue, in MB. Make sure the capacity of your disk drive is greater than this value. The default value is 1024.

  5. After the configuration is complete, click Create.

    On the Configuration Center page, you can view the created configuration file. If the status of the configuration file is Available, the configuration file is successfully created. You can also edit the created configuration file, add it to a custom template, or delete it.

    • To edit a configuration file, click Edit in the Operation column of a file to modify the file content and parameters.
    • You can add a created configuration file to a custom template.
    • To delete a configuration file that is not required, click Delete in the Operation column.
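
As an illustration of what a configuration file might contain, the following is a sketch of an Elasticsearch-to-Elasticsearch migration pipeline in standard Logstash syntax. All hosts, index names, and field choices are placeholders, and the exact content generated by the system templates may differ:

```
input {
  elasticsearch {
    # Source cluster (placeholder address).
    hosts => ["http://192.168.0.10:9200"]
    index => "source_index"
    docinfo => true
  }
}

filter {
  # Drop Logstash bookkeeping fields before writing to the destination.
  mutate {
    remove_field => ["@version"]
  }
}

output {
  elasticsearch {
    # Destination cluster (placeholder address).
    hosts => ["http://192.168.0.20:9200"]
    index => "dest_index"
    # A password entered as Hidden Content would be shown as asterisks here.
  }
}
```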

You can also click Operation Record or View Running Log to view the operation records and running logs.
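
The pipeline parameters in Table 1 correspond to standard Logstash settings. Expressed in logstash.yml syntax, the values chosen in step 4 might look like the following sketch (the values shown simply restate the defaults and recommendations above):

```
pipeline.workers: 4           # defaults to the number of CPU cores; 1-20 recommended
pipeline.batch.size: 125      # events collected per worker thread before flushing
pipeline.batch.delay: 50      # ms to wait before dispatching an undersized batch
queue.type: persisted         # "memory" (default) or "persisted"
queue.checkpoint.writes: 1024 # events written before forcing a checkpoint
queue.max_bytes: 1024mb       # must fit on the disk drive
```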

Starting a Configuration File

Created configuration files are displayed on the Configuration Center page.

  1. Select a configuration file you want to start and click Start in the upper left corner.

    You can start up to 50 configuration files at a time.

  2. In the Start Logstash dialog box, select Keepalive based on service requirements.

    The Keepalive function is suitable for long-term services. After it is enabled, a daemon process is configured on each node; if Logstash becomes faulty, the daemon process automatically restarts it to rectify the fault. Keepalive is not suitable for short-term services: if it is enabled for a short-term service, the task will fail when no data is available at the source end.

  3. Click OK to start the configuration file.

    You can view the started configuration file in the pipeline list.

    You can also click View Operation Record or View Running Log to view the operation records and running logs.

Configuration File Hot Start

When Logstash is running, you can use the hot start function to add a pipeline.

  • Configuration files that use the Logstash stdin plugin cannot use the hot start function.
  • If the hot start of a configuration file fails and the Logstash process exits abnormally, the recovery mechanism will be used to restart the original Logstash process. Exercise caution when performing this operation.
  • Only one configuration file can be selected for hot start, and the number of configurations in the Running state in the pipeline list must be less than 20.
  1. Select the target configuration file and click Hot Start in the upper left corner.

    By default, the Keepalive setting in the dialog box is the same as that in the pipeline list.

  2. Click OK to hot-start the configuration file.

    You can view the hot-started configuration file in the pipeline list.

Configuration File Hot Stop

When Logstash is running, you can use the hot stop function to remove a pipeline.

  1. Select the target configuration in the pipeline list and click Hot Stop above the pipeline list.
  2. Click OK in the dialog box.

    If the hot stop is successful, the target configuration is removed from the pipeline list and the pipeline data migration is interrupted.

Stopping All Configuration Files

To stop data migration of all configuration files in the pipeline list, click Stop All above the pipeline list.

Click OK in the dialog box. Once all pipelines are stopped successfully, data migration of every pipeline is interrupted.

Exporting Configuration Files

You can click the export icon in the upper right corner to export configuration files to the local host in batches.