
Using CSS to Build a Unified Log Management Platform

A unified log management platform built on CSS lets you manage logs centrally and in real time, enabling log-driven O&M and improving service management efficiency.

Overview

Elasticsearch, Logstash, Kibana, and Beats (ELKB) together provide a complete log management solution and form a mainstream log system. The following figure shows its framework.

Figure 1 Unified log management platform framework
  • Beats is a family of lightweight log collectors, including Filebeat and Metricbeat.
  • Logstash collects and preprocesses logs. It supports multiple data sources and ETL processing modes.
  • Elasticsearch is an open-source distributed search and analytics engine that stores, searches, and analyzes data. CSS allows you to create Elasticsearch clusters.
  • Kibana is a visualization tool for web-based data query, analysis, and BI reporting.

This section describes how to use CSS, Filebeat, Logstash, and Kibana to build a unified log management platform. Filebeat collects ECS logs and sends them to Logstash for data processing. The processing results are stored in CSS, where they can be queried, analyzed, and visualized using Kibana.

For details about the version compatibility of ELKB components, see https://www.elastic.co/support/matrix#matrix_compatibility.

Prerequisites

  • A CSS cluster in non-security mode has been created.
  • You have applied for an ECS and installed the Java environment on it.
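
  To check that the Java environment is available on the ECS, you can run:

  java -version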

Procedure

  1. Deploy and configure Filebeat.

    1. Download Filebeat (version 7.6.2 is recommended) from https://www.elastic.co/downloads/past-releases#filebeat-oss.
    2. Configure the Filebeat configuration file filebeat.yml.
      For example, to collect all files whose names end with .log in the /root/ directory, configure the filebeat.yml file as follows:
      filebeat.inputs:
      - type: log
        enabled: true
        # Path of the collected log file
        paths:
          - /root/*.log
      
      filebeat.config.modules:
        path: ${path.config}/modules.d/*.yml
        reload.enabled: false
      # Logstash hosts information
      output.logstash:
        hosts: ["192.168.0.126:5044"]
      
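    3. (Optional) Verify the configuration before starting Filebeat. The following standard Filebeat self-test commands assume they are run from the Filebeat installation directory:
      # Check filebeat.yml for configuration errors
      ./filebeat test config
      # Check that the Logstash host configured under output.logstash is reachable
      ./filebeat test output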

  2. Deploy and configure Logstash.

    To achieve better performance, you are advised to set the JVM heap size of Logstash to half of the memory of the ECS or Docker container.
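
    For example, on an ECS with 8 GB of memory (the size here is only an illustration), set the heap to 4 GB in the config/jvm.options file of the Logstash installation directory:

    -Xms4g
    -Xmx4g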

    1. Download Logstash (version 7.6.2 is recommended) from https://www.elastic.co/downloads/past-releases#logstash-oss.
    2. Ensure that Logstash can communicate with the CSS cluster.
    3. Configure the Logstash configuration file logstash-sample.conf.

      The content of the logstash-sample.conf file is as follows:

      input {
        beats {
          port => 5044
        }
      }
      # Split data.
      filter {
          grok {
              match => {
                  "message" => '\[%{GREEDYDATA:timemaybe}\] \[%{WORD:level}\] %{GREEDYDATA:content}'
              }
          }
          mutate {
            remove_field => ["@version","tags","source","input","prospector","beat"]
          }
      }
      # CSS cluster information
      output {
        elasticsearch {
          hosts => ["http://192.168.0.4:9200"]
          index => "%{[@metadata][beat]}-%{+YYYY.MM.dd}"
          #user => "xxx"
          #password => "xxx"
        }
      }

      You can use Grok Debugger (http://grokdebug.herokuapp.com/) to build and test the grok patterns used in the Logstash filter.
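
      For example, the grok expression above splits a log line in the test-data format used in 4 into three fields:

      Input line:
      [2020-02-13 14:01:16] [info] this is the test message

      Extracted fields:
      timemaybe: 2020-02-13 14:01:16
      level: info
      content: this is the test message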

  3. Configure the index template of the CSS cluster in Kibana or via the API.

    For example, create an index template that uses three shards and no replicas, and defines the @timestamp, content, host.name, level, log.file.path, message, and timemaybe fields.

    PUT _template/filebeat
    {
      "index_patterns": ["filebeat*"],
      "settings": {
        "number_of_shards": 3,
        "number_of_replicas": 0,
        "refresh_interval": "5s"
      },
      "mappings": {
            "properties": {
              "@timestamp": {
                "type": "date"
              },
              "content": {
                "type": "text"
              },
              "host": {
                "properties": {
                  "name": {
                    "type": "text"
                  }
                }
              },
              "level": {
                "type": "keyword"
              },
              "log": {
                "properties": {
                  "file": {
                    "properties": {
                      "path": {
                        "type": "text"
                      }
                    }
                  }
                }
              },
              "message": {
                "type": "text"
              },
              "timemaybe": {
                "type": "date",
                "format": "yyyy-MM-dd HH:mm:ss||epoch_millis"
              }
            }
        }
    }
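
    After creating the template, you can check it on the Kibana console using the standard template API:

    GET _template/filebeat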

  4. Prepare test data on ECS.

    Run the following command to generate test data and write it to /root/tmp.log. The timestamp uses the yyyy-MM-dd HH:mm:ss format so that it matches the timemaybe date format defined in the index template in 3:

    bash -c 'while true; do echo "[$(date "+%Y-%m-%d %H:%M:%S")] [info] this is the test message"; sleep 1; done;' >> /root/tmp.log &

    The following is an example of the generated test data:

    [2020-02-13 14:01:16] [info] this is the test message

  5. Run the following command to start Logstash:

    nohup ./bin/logstash -f /opt/pht/logstash-7.6.2/logstash-sample.conf &
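
    (Optional) To validate the configuration file without starting Logstash, run it with the standard --config.test_and_exit flag, which parses the pipeline configuration and then exits:

    ./bin/logstash -f /opt/pht/logstash-7.6.2/logstash-sample.conf --config.test_and_exit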

  6. Run the following command to start Filebeat:

    ./filebeat
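
    After Filebeat starts, you can verify that log data is reaching the CSS cluster by listing the Filebeat indices on the Kibana console. The index names follow the %{[@metadata][beat]}-%{+YYYY.MM.dd} pattern set in the Logstash output:

    GET _cat/indices/filebeat*?v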

  7. Use Kibana to query data and create reports.

    1. Go to the Kibana page of the CSS cluster.
    2. Click Discover and perform query and analysis, as shown in the following figure.
      Figure 2 Discover page
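      If the Discover page prompts you to create an index pattern, create one that matches the Filebeat indices (for example, filebeat*) and select @timestamp as the time field. The exact page layout depends on the Kibana version.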