Updated on 2024-11-11 GMT+08:00

Ingesting ServiceStage Containerized Application Logs to LTS

LTS collects log data from containerized applications that ServiceStage deploys on CCE. By processing massive numbers of logs efficiently, securely, and in real time, LTS provides useful insights for you to optimize the availability and performance of cloud services and applications. It also helps you efficiently perform real-time decision-making, device O&M management, and service trend analysis.

Perform the following steps to complete the ingestion configuration:

  1. Step 1: Select a Log Stream
  2. Step 2: Check Dependencies
  3. Step 3: (Optional) Select a Host Group
  4. Step 4: Configure the Collection
  5. Step 5: Configure Indexing
  6. Step 6: Complete the Ingestion Configuration

To collect logs from multiple scenarios, set multiple ingestion configurations in a batch.

Currently, this function is available only to whitelisted users. To use it, submit a service ticket.

Prerequisites

Restrictions

  • CCE cluster nodes whose container engine is Docker are supported.
  • CCE cluster nodes whose container engine is Containerd are supported, provided ICAgent 5.12.130 or later is used.
  • To collect container log directories mounted to host directories to LTS, you must configure the node file path.
  • Restrictions on the Docker storage driver: Currently, container file log collection supports only the overlay2 storage driver. devicemapper cannot be used as the storage driver. Run the following command to check the storage driver type:
    docker info | grep "Storage Driver" 
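The storage driver restriction above can also be checked programmatically. The following sketch (an illustration, not part of LTS or ICAgent) parses the text produced by docker info and reports whether the driver supports container file log collection:

```python
SUPPORTED_DRIVERS = {"overlay2"}  # per the restriction above; devicemapper is not supported

def storage_driver_supported(docker_info_output: str) -> bool:
    """Return True if the 'Storage Driver' line in `docker info` output
    names a driver that supports container file log collection."""
    for line in docker_info_output.splitlines():
        key, _, value = line.strip().partition(":")
        if key.strip() == "Storage Driver":
            return value.strip() in SUPPORTED_DRIVERS
    return False  # no Storage Driver line found

sample = "Server:\n Storage Driver: overlay2\n Logging Driver: json-file"
print(storage_driver_supported(sample))  # True
```

On a node, the input text can be obtained by running docker info (for example, via subprocess.run(["docker", "info"], capture_output=True, text=True).stdout in Python).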

Step 1: Select a Log Stream

  1. Log in to the LTS console.
  2. Choose Log Ingestion > Ingestion Center in the navigation pane. Then, click ServiceStage - Containerized Application Logs under Self-Built software.

    Alternatively, choose Log Ingestion > Ingestion Management in the navigation pane, and click Ingest Log > ServiceStage - Containerized Application Logs.

    Alternatively, choose Log Management in the navigation pane and click the target log stream to access its details page. Click the icon in the upper right corner. On the displayed page, click the Log Ingestion tab and click Ingest Log. In the displayed dialog box, click ServiceStage - Containerized Application Logs under Self-Built software.

  3. In the Select Log Stream step, set the following parameters:
    1. Select a ServiceStage application and ServiceStage environment.
    2. Select a log group from the Log Group drop-down list. If there are no desired log groups, click Create Log Group to create one.
    3. Select a log stream from the Log Stream drop-down list. If there are no desired log streams, click Create Log Stream to create one.
  4. Click Next: Check Dependencies.

Step 2: Check Dependencies

  1. The system automatically checks whether there is a host group with the custom identifier k8s-log-<Application ID>.
    If no such host group exists, click Auto Correct.
    • Auto Correct: Configure dependencies with one click.
    • Check Again: Recheck dependencies.
  2. Click Next: (Optional) Select Host Group.

Step 3: (Optional) Select a Host Group

  1. In the host group list, select one or more host groups from which you want to collect logs.

    If there are no desired host groups, click Create above the host group list to create one.

  2. Click Next: Configurations.

Step 4: Configure the Collection

When you configure ServiceStage containerized application log ingestion, the collection configuration details are as follows.

  1. Collection Configuration Name: Enter 1 to 64 characters. Only letters, digits, hyphens (-), underscores (_), and periods (.) are allowed. Do not start with a period or underscore, or end with a period.
  2. Data Source: Select a data source type and configure it. The following data source types are supported: container standard output, container file, node file, and Kubernetes event.
    Table 1 Collection configuration parameters

    Type

    Description

    Container standard output

    Collects stderr and stdout logs of a specified container in the cluster. Either Container Standard Output (stdout) or Container Standard Error (stderr) must be enabled.

    • If you enable Container Standard Error (stderr), select your collection destination path: Collect standard output and standard error to different files (stdout.log and stderr.log) or Collect standard output and standard error to the same file (stdout.log).
    • The standard output of the matched container is collected to the specified log stream and is no longer reported to AOM.
    • The container standard output can be collected to only one log stream.

    Container file

    Collects file logs of a specified container in the cluster.

    • Collection Paths: Add one or more paths in the container. LTS will collect logs from these paths.
      NOTE:
      • If a container mount path has been configured for the CCE cluster workload, the paths added for this field are invalid. The collection paths take effect only after the mount path is deleted.
    • Files can be collected multiple times (not available on Windows).

      After you enable Allow Repeated File Collection, one host log file can be collected to multiple log streams. This function is available only to certain ICAgent versions. For details, see Checking the ICAgent Version Description.

      After you disable this function, each collection path must be unique. That is, the same log file in the same host cannot be collected to different log streams.

    • Set Collection Filters: Blacklisted directories or files will not be collected. If you specify a directory, all files in the directory are filtered out.

    Node file

    Collects files of a specified node in a cluster.

    • Collection Paths: Add one or more host paths. LTS will collect logs from these paths.
      NOTE:

      You cannot add the same host path to more than one log stream.

    • Files can be collected multiple times (not available on Windows).

      After you enable Allow Repeated File Collection, one host log file can be collected to multiple log streams. This function is available only to certain ICAgent versions. For details, see Checking the ICAgent Version Description.

      After you disable this function, each collection path must be unique. That is, the same log file in the same host cannot be collected to different log streams.

    • Set Collection Filters: Blacklisted directories or files will not be collected. If you specify a directory, all files in the directory are filtered out.

    Kubernetes event

    Collects event logs of the Kubernetes cluster. No parameters need to be set. This requires ICAgent 5.12.150 or later.

    NOTE:

    Kubernetes events of a Kubernetes cluster can be collected to only one log stream.

  3. If you select Container standard output or Container file as the data source type, set the ServiceStage matching rule by selecting the corresponding component from the drop-down list.
  4. Enable structuring parsing. For details, see Setting ICAgent Structuring Parsing Rules.

    LTS enables combined parsing, allowing you to create different structuring parsing rules for each collection configuration of a log stream.

    If you have configured cloud structuring parsing, delete its configurations before configuring ICAgent structuring parsing.

    Figure 1 ICAgent structuring parsing configuration
  5. Set other configurations.
    Table 2 Other configurations

    Parameter

    Description

    Max Directory Depth

    The maximum directory depth is 20 levels.

    Collection paths can use double asterisks (**) for multi-layer fuzzy match. Specify the maximum directory depth in the text box. For example, if your log path is /var/logs/department/app/a.log and your collection path is /var/logs/**/a.log, logs will not be collected when this parameter is set to 1, but will be collected when this parameter is set to 2 or a larger number.

    Max Directory Depth

    The maximum directory depth is 5 levels.

    ICAgent does not collect log files with directory levels beyond this value. For a collection path containing fuzzy matching strings, set this parameter to an appropriate level to avoid wasting ICAgent resources.

    Split Logs

    LTS can split logs.

    You can set the value (the size at which a log is split) to up to 1,024 KB; the description below assumes 500 KB. If this option is enabled, a single-line log larger than 500 KB will be split into multiple lines for collection. For example, a 600 KB single-line log will be split into a line of 500 KB and a line of 100 KB.

    If this option is disabled, when a log exceeds 500 KB, the extra part will be truncated and discarded.

    Collect Binary Files

    LTS can collect binary files.

    Run the file -i <file_name> command to view the file type. charset=binary indicates that a log file is a binary file.

    If this option is enabled, binary log files will be collected, but only UTF-8 strings are supported. Other strings will be garbled on the LTS console.

    If this option is disabled, binary log files will not be collected.

    Log File Code

    The log file encoding format can be UTF-8 or GBK (GBK is not available on Windows).

    UTF-8 encoding is a variable-length encoding mode and represents Unicode character sets. GBK, an acronym for Chinese Internal Code Extension Specification, is a Chinese character encoding standard that extends both the ASCII and GB2312 encoding systems.

    Collection Policy

    Select Incremental or All.

    • Incremental: When collecting a new file, ICAgent reads the file from the end of the file.
    • All: When collecting a new file, ICAgent reads the file from the beginning of the file.

    Custom Metadata

    • If this option is disabled, ICAgent reports logs to LTS with the default system fields, which cannot be configured.
    • If this option is enabled, ICAgent will report logs based on your selected built-in fields and fields created with custom key-value pairs.

      Built-in Fields: Select built-in fields as required.

      Custom Key-Value Pairs: Click Add and set a key and value.

  6. Configure the log format and time by referring to Table 3.
    Table 3 Log collection settings

    Parameter

    Description

    Log Format

    • Single-line: Each log line is displayed as a single log event.
    • Multi-line: Multiple lines of exception log events can be displayed as a single log event. This is helpful when you check logs to locate problems.

    Log Time

    System time: the log collection time, used by default and displayed at the beginning of each log event.

    NOTE:
    • Log collection time is the time when logs are collected and sent by ICAgent to LTS.
    • Log printing time is the time when logs are printed. ICAgent collects and sends logs to LTS with an interval of 1 second.
    • Restriction on log collection time: Logs are collected within 24 hours before and after the system time.

    Time wildcard: You can set a time wildcard so that ICAgent will look for the log printing time as the beginning of a log event.

    • If the time format in a log event is 2019-01-01 23:59:59.011, the time wildcard should be set to YYYY-MM-DD hh:mm:ss.SSS.
    • If the time format in a log event is 19-1-1 23:59:59.011, the time wildcard should be set to YY-M-D hh:mm:ss.SSS.
    NOTE:

    If a log event does not contain year information, ICAgent regards it as printed in the current year.

    Example:

    YY            - year (19)
    YYYY          - year (2019)
    M             - month (1)
    MM            - month (01)
    D             - day (1)
    DD            - day (01)
    hh            - hours (23)
    mm            - minutes (59)
    ss            - seconds (59)
    SSS           - milliseconds (999)
    hpm           - hours (03PM)
    h:mmpm        - hours:minutes (03:04PM)
    h:mm:sspm     - hours:minutes:seconds (03:04:05PM)
    hh:mm:ss ZZZZ - time with numeric UTC offset (16:05:06 +0100)
    hh:mm:ss ZZZ  - time with time zone abbreviation (16:05:06 CET)
    hh:mm:ss ZZ   - time with colon-separated UTC offset (16:05:06 +01:00)

    Log Segmentation

    This parameter needs to be specified if the Log Format is set to Multi-line. By generation time indicates that a time wildcard is used to detect log boundaries, whereas By regular expression indicates that a regular expression is used.

    By regular expression

    You can set a regular expression to look for a specific pattern to indicate the beginning of a log event. This parameter needs to be specified when you select Multi-line for Log Format and By regular expression for Log Segmentation.
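To illustrate how By regular expression segmentation groups multi-line events, the following sketch (an illustration only; the pattern and sample log are hypothetical, not LTS internals) treats any line beginning with a YYYY-MM-DD hh:mm:ss.SSS timestamp as the start of a new log event and folds continuation lines, such as stack traces, into the preceding event:

```python
import re

# Regex equivalent of the time wildcard YYYY-MM-DD hh:mm:ss.SSS:
# a new log event starts at a line that begins with such a timestamp.
EVENT_START = re.compile(r"^\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}\.\d{3}")

def segment(lines):
    """Group raw lines into log events: a line matching EVENT_START opens
    a new event; other lines are appended to the current event."""
    events, current = [], []
    for line in lines:
        if EVENT_START.match(line):
            if current:
                events.append("\n".join(current))
            current = [line]
        elif current:          # continuation line (e.g. a stack-trace line)
            current.append(line)
        # lines before the first timestamp are dropped in this sketch
    if current:
        events.append("\n".join(current))
    return events

raw = [
    "2019-01-01 23:59:59.011 ERROR something failed",
    "  at com.example.Foo.bar(Foo.java:42)",
    "2019-01-01 23:59:59.012 INFO recovered",
]
print(segment(raw))  # two events: the stack-trace line stays with the ERROR event
```

The same idea applies to By generation time segmentation: the configured time wildcard plays the role of EVENT_START above.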

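The Max Directory Depth behavior described in Table 2 can be modeled as follows. This is a simplified sketch of the matching rule, not ICAgent's actual implementation: the double asterisk in a collection path such as /var/logs/**/a.log may stand for at most the configured number of directory levels.

```python
def matches(path: str, pattern: str, max_depth: int) -> bool:
    """Simplified model of '**' collection-path matching: the '**' in
    `pattern` may stand for 1..max_depth directory levels."""
    prefix, sep, suffix = pattern.partition("/**/")
    if not sep:  # no '**' wildcard: require an exact match
        return path == pattern
    if not (path.startswith(prefix + "/") and path.endswith("/" + suffix)):
        return False
    middle = path[len(prefix) + 1 : len(path) - len(suffix) - 1]
    depth = middle.count("/") + 1 if middle else 0
    return 1 <= depth <= max_depth

# The example from Table 2: /var/logs/department/app/a.log against
# /var/logs/**/a.log needs '**' to cover two levels (department/app).
print(matches("/var/logs/department/app/a.log", "/var/logs/**/a.log", 1))  # False
print(matches("/var/logs/department/app/a.log", "/var/logs/**/a.log", 2))  # True
```

This reproduces the behavior stated in Table 2: with the parameter set to 1, the example log is not collected; with 2 or a larger value, it is.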
Step 5: Configure Indexing

  1. (Optional) Configure indexing. For details, see Setting Indexes.
  2. Click Submit.

Step 6: Complete the Ingestion Configuration

The configured ingestion rule will be displayed on the Ingestion Rule tab page.
  • Click its name to view its details.
  • Click Edit in the Operation column to modify the ingestion configuration.
  • Click Configure Tag in the Operation column to add a tag.
  • Click Copy in the Operation column to copy the ingestion configuration.
  • Click Delete in the Operation column to delete the ingestion configuration.
  • Click ICAgent Collect Diagnosis in the Operation column of the ingestion configuration to monitor the exceptions, overall status, and collection status of ICAgent.

Setting Multiple Ingestion Configurations in a Batch

You can set multiple ingestion configurations for multiple scenarios in a batch, avoiding repetitive setups.

  1. On the Ingestion Management page, click Batch Ingestion to go to the details page. For details, see Table 4.

    Table 4 Adding configurations in batches

    Type

    Operation

    Description

    Basic Settings

    Ingestion Type

    Select ServiceStage - Containerized Application Logs.

    Configurations to Add

    Enter the number of ingestion configurations in the text box and click Add.

    A maximum of 100 ingestion configurations can be added, including the one that exists under Ingestion Settings by default. Therefore, you can add up to 99 more.

    Ingestion Settings

    Configuration List

    1. The ingestion configurations are displayed on the left. You can add up to 99 more configurations.
    2. The ingestion configuration details are displayed on the right. Set them by referring to Step 4: Configure the Collection.
    3. After an ingestion configuration is complete, you can click Apply to Other Configurations to copy its settings to other configurations.

  2. Click Check Parameters. After the check is successful, click Submit.

    The added ingestion configurations will be displayed on the Ingestion Rule tab page after the batch creation is successful.

  3. (Optional) Perform the following operations on ingestion configurations:

    • Select multiple existing ingestion configurations and click Edit. On the displayed page, select an ingestion type to modify the corresponding ingestion configurations.
    • Select multiple existing ingestion configurations and click Enable or Disable. If you toggle off the switch in the Status column of an ingestion configuration, logs will not be collected for this configuration.
    • Select multiple existing ingestion configurations and click Delete.