Updated on 2025-08-14 GMT+08:00

Ingesting CCE Application Logs to LTS

CCE provides highly scalable, high-performance, enterprise-class Kubernetes clusters. With CCE, you can easily deploy, manage, and scale containerized applications.

After ingesting CCE logs to LTS, you can centrally manage and analyze them on the LTS console. This helps you promptly detect container issues and improve container performance and reliability.

Follow these steps to complete the ingestion configuration:

Step 1: Select a Log Stream

Step 2: Check Dependencies

Step 3: (Optional) Select a Host Group

Step 4: Configure the Collection

Step 5: Configure Indexing

Step 6: Complete the Ingestion Configuration

Setting Multiple Ingestion Configurations in a Batch: Select this mode to collect logs from multiple scenarios.

Prerequisites

  • ICAgent has been installed in the CCE cluster. For details, see Managing ICAgent.

    (You are advised to install ICAgent before configuring CCE log ingestion. This avoids automatic checks and repairs during configuration, saving time.)

  • Add the CCE cluster with ICAgent installed to a host group of the custom identifier type. For details, see Creating a Host Group (Custom Identifier).
  • You have disabled Output to AOM.
  • A log group and a log stream have been created. For details, see Managing Log Groups and Managing Log Streams.

Constraints

  • Currently, ServiceStage hosting is not supported.
  • CCE cluster nodes whose container engine is Docker are supported. For details, see Node Overview.
  • CCE cluster nodes whose container engine is containerd are supported. ICAgent 5.12.130 or later is required.
  • To collect container log directories mounted to host directories to LTS, you must configure the node file path.
  • Constraints on the Docker storage driver: Currently, container file log collection supports only the overlay2 storage driver. devicemapper cannot be used as the storage driver. Run the following command to check the storage driver type:
    docker info | grep "Storage Driver" 
  • If you select Fixed log stream for log ingestion, ensure that you have created a CCE cluster.

Step 1: Select a Log Stream

  1. Log in to the management console and choose Management & Deployment > Log Tank Service.
  2. Choose Log Ingestion > Ingestion Center in the navigation pane and click CCE (Cloud Container Engine).

    You can also choose Log Ingestion > Ingestion Management in the navigation pane and click Create. On the displayed page, click CCE (Cloud Container Engine).

  3. Choose a collection mode between Fixed log stream and Custom log stream.
    • If you set Collect to Fixed log stream, perform the following steps:

      Logs will be collected to fixed log streams that are automatically named with the cluster ID: stdout-{ClusterID} for standard output/errors, hostfile-{ClusterID} for node files, event-{ClusterID} for Kubernetes events, and containerfile-{ClusterID} for container files. For example, if the cluster ID is Cluster01, the standard output/error log stream is stdout-Cluster01.

      Each of these fixed log streams can be created only once per cluster. If one of them has already been created in a log group, it will not be created again in that log group or in any other log group.

      1. Select a cluster from the CCE Cluster drop-down list.
      2. The default log group is k8s-log-ClusterID. For example, if the cluster ID is c7f3f4a5-bcb8-11ed-a4ec-0255ac100b07, the default log group will be k8s-log-c7f3f4a5-bcb8-11ed-a4ec-0255ac100b07. If there is no such group, the system displays the following message: This log group does not exist and will be automatically created to start collecting logs.
      3. Click Next: Check Dependencies.
    • If you set Collect to Custom log stream, perform the following steps:
      1. Select a cluster from the CCE Cluster drop-down list.
      2. Select a log group from the Log Group drop-down list. If there are no desired log groups, click Create Log Group to create one.
      3. Select a log stream from the Log Stream drop-down list. If there are no desired log streams, click Create Log Stream to create one.
      4. Click Next: Check Dependencies.
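The fixed log group and log stream naming convention described above can be sketched as follows (a minimal illustration; the helper function names are hypothetical):

```python
def default_log_group(cluster_id: str) -> str:
    # Default log group for a CCE cluster: k8s-log-{ClusterID}
    return f"k8s-log-{cluster_id}"

def fixed_log_streams(cluster_id: str) -> dict:
    # Fixed log streams created per cluster, keyed by data source type.
    return {
        "standard output/error": f"stdout-{cluster_id}",
        "node file": f"hostfile-{cluster_id}",
        "Kubernetes event": f"event-{cluster_id}",
        "container file": f"containerfile-{cluster_id}",
    }

print(default_log_group("Cluster01"))                           # k8s-log-Cluster01
print(fixed_log_streams("Cluster01")["standard output/error"])  # stdout-Cluster01
```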

Step 2: Check Dependencies

The system automatically checks the following items:

  1. ICAgent has been installed (version 5.12.130 or later).
  2. There is a host group with the custom identifier k8s-log-ClusterID.
  3. There is a log group named k8s-log-ClusterID. This item is checked only when Fixed log stream is selected.
  4. The recommended log stream exists. This item is checked only when Fixed log stream is selected.

You must meet all the requirements before moving on. If any check fails, click Auto Correct.

  • Auto Correct: Configure the previous dependencies with one click.
  • Check Again: Recheck dependencies.

Step 3: (Optional) Select a Host Group

A host group is a virtual group of hosts that lets you configure host log collection efficiently. If ICAgent has been installed in the CCE cluster and a host group with a custom identifier has been created for the related nodes, the system automatically checks these configurations and makes necessary corrections when CCE logs are ingested to LTS.

  1. On the Select Host Group page, the host group k8s-log-ClusterID created during the dependency check is selected by default.
    • You can also select other host groups as required.
    • You can skip this step and configure host groups as follows after the ingestion configuration is complete. However, you are advised to configure host groups during the first ingestion configuration to ensure that the configuration takes effect.
      • Choose Host Management > Host Groups in the navigation pane and associate host groups with ingestion configurations.
      • Choose Log Ingestion > Ingestion Management in the navigation pane. In the ingestion configuration list, click Modify in the Operation column. On the page displayed, select required host groups.
  2. Click Next: Configurations.

Step 4: Configure the Collection

Collection configuration items include the log collection scope, collection mode, and format processing. Configure them as follows.

  1. Collection Configuration Name: Enter 1 to 64 characters. Only letters, digits, hyphens (-), underscores (_), and periods (.) are allowed. Do not start with a period or underscore, or end with a period.
  2. Data Source: Select a data source type and configure it. For details, see Table 1.
    Table 1 Data source parameters

    Parameter

    Description

    Container standard output

    Collects stderr and stdout logs of a specified container in the cluster.

The standard output of the matched container is collected to the specified log stream and is no longer reported to AOM.

    • Output to AOM 1.0: When ICAgent is installed on hosts in the cluster, it collects container standard output to AOM only. This function is enabled by default. To collect container standard output to LTS, disable this function.
    • Either Container Standard Output (stdout) or Container Standard Error (stderr) must be enabled.
    • If you enable Container Standard Error (stderr), select your collection destination path: Collect standard output and standard error to different files (stdout.log and stderr.log) or Collect standard output and standard error to the same file (stdout.log).
    • Allow Repeated File Collection (not available to Windows)

      After you enable this function, one host log file can be collected to multiple log streams.

      After you disable this function, each collection path must be unique. That is, the same log file in the same host cannot be collected to different log streams.

    Container file

    Collects file logs of a specified container in the cluster.

    • Add Collection Path: Specify the paths from which LTS will collect logs. For more examples, see Collection Paths.

      If a container mount path has been configured for the CCE cluster workload, the paths added for this field are invalid. The collection paths take effect only after the mount path is deleted.

      LTS does not collect soft links when collecting container logs. To collect soft links, configure them to point to actual files.

    • Add Custom Wrapping Rule: ICAgent determines whether a file is wrapped based on the file name rule. If your wrapping rule does not comply with the built-in rules, you can add a custom wrapping rule to prevent log loss during repeated collection and wrapping.

      The built-in rules are {basename}{connector}{wrapping identifier}.{suffix} and {basename}.{suffix}{connector}{wrapping identifier}. Connectors can be hyphens (-), periods (.), or underscores (_), wrapping identifiers can contain only non-letter characters, and the suffix can contain only letters.

      A custom wrapping rule consists of {basename} and the feature regular expression of the wrapped file. Example: If your log file name is test.out.log and the names after wrapping are test.2024-01-01.0.out.log and test.2024-01-01.1.out.log, configure the collection path to /opt/*.log and add the custom wrapping rule {basename}\.\d{4}-\d{2}-\d{2}\.\d{1}\.out\.log.

    • You can verify collection paths to ensure that logs can be properly collected. Click path verification, enter the collection paths and the absolute paths of the log files, and click OK. You can add up to 30 collection paths. If the paths are correct, a success message is displayed.
    • You can verify wrapping rules to ensure that logs can be properly collected. Click wrapping rule verification, enter the name of the collected file, file name after wrapping, and wrapping rule, and click OK. If the wrapping rule is correct, a success message will be displayed.
    • Allow Repeated File Collection (not available to Windows)

      After you enable this function, one host log file can be collected to multiple log streams.

      After you disable this function, each collection path must be unique. That is, the same log file in the same host cannot be collected to different log streams.

    • Set Collection Filters: Blacklisted directories or files will not be collected. If you specify a directory, all files in the directory are filtered out.

    Node file

    Collects files of a specified node in a cluster.

    • Add Collection Path: Specify the paths from which LTS will collect logs. For more examples, see Collection Paths.
    • Add Custom Wrapping Rule: ICAgent determines whether a file is wrapped based on the file name rule. If your wrapping rule does not comply with the built-in rules, you can add a custom wrapping rule to prevent log loss during repeated collection and wrapping.

      The built-in rules are {basename}{connector}{wrapping identifier}.{suffix} and {basename}.{suffix}{connector}{wrapping identifier}. Connectors can be hyphens (-), periods (.), or underscores (_), wrapping identifiers can contain only non-letter characters, and the suffix can contain only letters.

      A custom wrapping rule consists of {basename} and the feature regular expression of the wrapped file. Example: If your log file name is test.out.log and the names after wrapping are test.2024-01-01.0.out.log and test.2024-01-01.1.out.log, configure the collection path to /opt/*.log and add the custom wrapping rule {basename}\.\d{4}-\d{2}-\d{2}\.\d{1}\.out\.log.

    • You can verify collection paths to ensure that logs can be properly collected. Click path verification, enter the collection paths and the absolute paths of the log files, and click OK. You can add up to 30 collection paths. If the paths are correct, a success message is displayed.
    • You can verify wrapping rules to ensure that logs can be properly collected. Click wrapping rule verification, enter the name of the collected file, file name after wrapping, and wrapping rule, and click OK. If the wrapping rule is correct, a success message will be displayed.
    • Allow Repeated File Collection (not available to Windows)

      After you enable this function, one host log file can be collected to multiple log streams.

      After you disable this function, each collection path must be unique. That is, the same log file in the same host cannot be collected to different log streams.

    • Set Collection Filters: Blacklisted directories or files will not be collected. If you specify a directory, all files in the directory are filtered out.

    Kubernetes event

    Collects event logs of the Kubernetes cluster.

    Kubernetes events of a Kubernetes cluster can be collected to only one log stream.
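The custom wrapping rule from the example above can be checked locally with a quick regular-expression test. This is a sketch: ICAgent uses RE2, but these simple patterns behave identically in Python's re module, with {basename} substituted by the escaped file base name.

```python
import re

# Custom wrapping rule from the example: {basename} followed by the
# feature regular expression of the wrapped file.
basename = "test"
rule = re.escape(basename) + r"\.\d{4}-\d{2}-\d{2}\.\d{1}\.out\.log"

# Wrapped file names from the example should match; the original should not.
for name in ["test.2024-01-01.0.out.log", "test.2024-01-01.1.out.log"]:
    print(name, bool(re.fullmatch(rule, name)))  # True for both
print("test.out.log", bool(re.fullmatch(rule, "test.out.log")))  # False
```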

  3. (Optional) Kubernetes Matching Rules: Set these parameters only when the data source type is set to Container standard output or Container file.

    After entering a regular expression, click Verify to verify it. ICAgent supports only RE2 regular expressions. For details, see Syntax.

    Table 2 Kubernetes matching rules

    Parameter

    Description

    Namespace Name Regular Expression

    Specifies the container whose logs are to be collected based on the namespace name. Regular expression matching is supported.

    LTS will collect logs of the namespaces with names matching this expression. To collect logs of all namespaces, leave this field empty.

    Pod Name Regular Expression

    Specifies the container whose logs are to be collected based on the pod name. Regular expression matching is supported.

    LTS will collect logs of the pods with names matching this expression. To collect logs of all pods, leave this field empty.

    Container Name Regular Expression

    Specifies the container whose logs are to be collected based on the container name (the Kubernetes container name is defined in spec.containers). Regular expression matching is supported.

    LTS will collect logs of the containers with names matching this expression. To collect logs of all containers, leave this field empty.

    Label Whitelist

    Specifies the containers whose logs are to be collected. If you want to set a Kubernetes label whitelist, Label Key is mandatory and Label Value is optional.

    When adding multiple whitelists, you can select the And or Or relationship. This means a container will be matched when it satisfies all or any of the whitelists.

    If Label Value is empty, LTS will match all containers whose Kubernetes label contains a specified Label Key. If Label Value is not empty, only containers whose Kubernetes label contains a specified Label Key that is equal to its Label Value are matched. Label Key requires full matching while Label Value supports regular matching.

    Label Blacklist

    Specifies the containers whose logs are not to be collected. If you want to set a Kubernetes label blacklist, Label Key is mandatory and Label Value is optional.

    When adding multiple blacklists, you can select the And or Or relationship. This means a container will be excluded when it satisfies all or any of the blacklists.

    If Label Value is empty, LTS will exclude all containers whose Kubernetes label contains a specified Label Key. If Label Value is not empty, only containers whose Kubernetes label contains a specified Label Key that is equal to its Label Value will be excluded. Label Key requires full matching while Label Value supports regular matching.

    Kubernetes Label

    After the Kubernetes Label is set, LTS adds related fields to logs.

    LTS adds the specified fields to the log when each Label Key has a corresponding Label Value. For example, if you enter app as the key and app_alias as the value, when the container label contains app=lts, {app_alias: lts} will be added to the log.

    Container Label Whitelist

    Specifies the containers whose logs are to be collected. If you want to set a container label whitelist, Label Key is mandatory and Label Value is optional.

    When adding multiple whitelists, you can select the And or Or relationship. This means a container will be matched when it satisfies all or any of the whitelists.

    If Label Value is empty, LTS will match all containers whose container label contains a specified Label Key. If Label Value is not empty, only containers whose container label contains a specified Label Key that is equal to its Label Value are matched. Label Key requires full matching while Label Value supports regular matching.

    Container Label Blacklist

    Specifies the containers whose logs are not to be collected. If you want to set a container label blacklist, Label Key is mandatory and Label Value is optional.

    When adding multiple blacklists, you can select the And or Or relationship. This means a container will be excluded when it satisfies all or any of the blacklists.

    If Label Value is empty, LTS will exclude all containers whose container label contains a specified Label Key. If Label Value is not empty, only containers whose container label contains a specified Label Key that is equal to its Label Value will be excluded. Label Key requires full matching while Label Value supports regular matching.

    Container Label

    After the Container Label is set, LTS adds related fields to logs.

    LTS adds the specified fields to the log when each Label Key has a corresponding Label Value. For example, if you enter app as the key and app_alias as the value, when the container label contains app=lts, {app_alias: lts} will be added to the log.

    Environment Variable Whitelist

    Specifies the containers whose logs are to be collected. If you want to set an environment variable whitelist, Label Key is mandatory and Label Value is optional.

    When adding multiple whitelists, you can select the And or Or relationship. This means a container will be matched when it satisfies all or any of the whitelists.

    If Environment Variable Value is empty, LTS will match all containers whose environment variable contains a specified Environment Variable Key. If Environment Variable Value is not empty, only containers whose environment variable contains a specified Environment Variable Key that is equal to its Environment Variable Value are matched. Environment Variable Key requires full matching while Environment Variable Value supports regular matching.

    Environment Variable Blacklist

    Specifies the containers whose logs are not to be collected. If you want to set an environment variable blacklist, Label Key is mandatory and Label Value is optional.

    When adding multiple blacklists, you can select the And or Or relationship. This means a container will be excluded when it satisfies all or any of the blacklists.

    If Environment Variable Value is empty, LTS will exclude all containers whose environment variable contains a specified Environment Variable Key. If Environment Variable Value is not empty, only containers whose environment variable contains a specified Environment Variable Key that is equal to its Environment Variable Value will be excluded. Environment Variable Key requires full matching while Environment Variable Value supports regular matching.

    Environment Variable Label

    After the environment variable label is set, LTS adds related fields to logs.

    LTS adds the specified fields to the log when each Environment Variable Key has a corresponding Environment Variable Value. For example, if you enter app as the key and app_alias as the value, when the Kubernetes environment variable contains app=lts, {app_alias: lts} will be added to the log.
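The whitelist semantics in Table 2 (full matching on keys, regular matching on values, And/Or across multiple rules) can be sketched as follows. The function name and rule representation are hypothetical, for illustration only:

```python
import re

def matches_whitelist(labels: dict, rules: list, relation: str = "Or") -> bool:
    # Sketch of the label whitelist semantics described in Table 2.
    # Each rule is a (key, value_pattern) pair: the key requires full
    # matching, the value supports regular matching, and an empty value
    # matches any value for that key.
    def rule_hits(key, pattern):
        if key not in labels:      # key requires full (exact) matching
            return False
        if not pattern:            # empty value: key presence is enough
            return True
        return re.fullmatch(pattern, labels[key]) is not None

    hits = [rule_hits(k, v) for k, v in rules]
    # "And": all whitelists must be satisfied; "Or": any one is enough.
    return all(hits) if relation == "And" else any(hits)

labels = {"app": "lts", "tier": "frontend"}
print(matches_whitelist(labels, [("app", "lts"), ("env", "")], "Or"))   # True
print(matches_whitelist(labels, [("app", "lts"), ("env", "")], "And"))  # False
```

Blacklists follow the same matching logic, with matched containers excluded instead of collected.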

  4. Structuring Parsing:

    LTS offers various log parsing rules, including Single-Line - Full-Text Log, Multi-Line - Full-Text Log, JSON, Delimiter, Single-Line - Completely Regular, Multi-Line - Completely Regular, and Combined Parsing. Select a parsing rule that matches your log content. Once collected, structured logs are sent to your specified log stream, enabling field-based searching and SQL analysis.

    • If you enable Structuring Parsing, configure it by referring to Configuring ICAgent Structuring Parsing.
    • If you disable Structuring Parsing, log data will not be structured. Raw logs will be sent to the specified log stream, allowing only keyword-based searches.
  5. Other: After setting the collection paths, you can also set log splitting, binary file collection, and custom metadata.
    Table 3 Other configurations

    Parameter

    Description

    Example Value

    Max Directory Depth

    Specify the number of directory levels that can be traversed when using double asterisks (**) for fuzzy matching of log collection paths. LTS supports a maximum of 20 directory levels.

    For example, to collect logs from /var/logs/department/app/a.log, set the collection path to /var/logs/**/a.log and Max Directory Depth to 5.

    5

    Split Logs

    To prevent individual logs from being too large or being truncated and discarded, you can split logs based on file size.

    • If Split Logs is enabled, logs exceeding the specified size will be split into multiple logs for collection. Specify the size in the range from 500 KB to 1,024 KB. For example, if you set the size to 500 KB, a 600 KB log will be split into a 500 KB log and a 100 KB log. This restriction is applicable to single-line logs only, not multi-line logs.
    • If Split Logs is disabled, any log exceeding 500 KB will have its excess content truncated and discarded.

    Enable

    Collect Binary Files

    Specify whether to collect log data stored in binary format. You can run the following command to check the file type. Log files containing charset=binary are binary files.
    file -i <file name>
    • If this option is enabled, binary log files will be collected, but only UTF-8 strings are supported. Other strings will be garbled on the LTS console.
    • If this option is disabled, binary log files will not be collected.

    Enable

    Log File Code

    Select the character encoding of log files: UTF-8 or GBK. GBK is not supported in the Windows OS. Set the encoding correctly so that log content can be read and parsed without garbled characters or data damage.

    • UTF-8 encoding is a variable-length encoding mode and represents Unicode character sets.
    • GBK, an acronym for Chinese Internal Code Extension Specification, is a Chinese character encoding standard that extends both the ASCII and GB2312 encoding systems.

    UTF-8

    Collection Policy

    Set whether ICAgent reads a file from the end or the beginning when collecting new log files.

    • Incremental: When collecting a new file, ICAgent reads the file from the end of the file.
    • All: When collecting a new file, ICAgent reads the file from the beginning of the file.

    Incremental

    Custom Metadata

    • If this option is disabled, ICAgent reports logs to LTS using its default system fields, which cannot be configured.
    • If this option is enabled, ICAgent will report logs based on your selected built-in fields and fields created with custom key-value pairs.

      Built-in Fields: Select built-in fields as required.

      Custom Key-Value Pairs: Click Add and set a key and value.

    Enable
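The Split Logs behavior in Table 3 can be sketched as simple chunking arithmetic (a hypothetical illustration of the 500 KB example; when splitting is disabled, the excess beyond 500 KB would instead be truncated and discarded):

```python
def split_log(log: bytes, limit_kb: int = 500) -> list:
    # Sketch of the Split Logs behavior: a single-line log larger than
    # the limit is split into chunks of at most the limit.
    limit = limit_kb * 1024
    return [log[i:i + limit] for i in range(0, len(log), limit)]

log = b"x" * (600 * 1024)               # a 600 KB single-line log
parts = split_log(log, 500)
print([len(p) // 1024 for p in parts])  # [500, 100]
```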

  6. Configure the log format and time by referring to Table 4.
    Table 4 Log collection settings

    Parameter

    Description

    Log Format

    • Single-line: Each log line is displayed as a single log event.
    • Multi-line: Multiple lines of exception logs can be displayed as a single log event and each line of regular logs is displayed as a log event. This is helpful when you check logs to locate problems.

    Log Time

    System time: the log collection time, used by default and displayed at the beginning of each log event.

    • Log collection time is the time when logs are collected and sent by ICAgent to LTS.
    • Log printing time is the time when logs are printed. ICAgent collects and sends logs to LTS every second.
    • Restriction on log collection time: Logs are collected within 24 hours before and after the system time.

    Time wildcard: You can set a time wildcard so that ICAgent will look for the log printing time as the beginning of a log event.

    • If the time format in a log event is 2019-01-01 23:59:59.011, the time wildcard should be set to YYYY-MM-DD hh:mm:ss.SSS.
    • If the time format in a log event is 19-1-1 23:59:59.011, the time wildcard should be set to YY-M-D hh:mm:ss.SSS. If a log event does not contain year information, ICAgent regards it as printed in the current year.

    Example:

    YY            - year (19)
    YYYY          - year (2019)
    M             - month (1)
    MM            - month (01)
    D             - day (1)
    DD            - day (01)
    hh            - hours (23)
    mm            - minutes (59)
    ss            - seconds (59)
    SSS           - millisecond (999)
    hpm           - hours (03PM)
    h:mmpm        - hours:minutes (03:04PM)
    h:mm:sspm     - hours:minutes:seconds (03:04:05PM)
    hh:mm:ss ZZZZ - (16:05:06 +0100)
    hh:mm:ss ZZZ  - (16:05:06 CET)
    hh:mm:ss ZZ   - (16:05:06 +01:00)

    Log Segmentation

    This parameter needs to be specified if the Log Format is set to Multi-line. By generation time indicates that a time wildcard is used to detect log boundaries, whereas By regular expression indicates that a regular expression is used.

    By regular expression

    You can set a regular expression to look for a specific pattern to indicate the beginning of a log event. This parameter needs to be specified when you select Multi-line for Log Format and By regular expression for Log Segmentation.

    The time wildcard and regular expression will look for the specified pattern right from the beginning of each log line. If no match is found, the system time, which may be different from the time in the log event, is used. In general cases, you are advised to select Single-line for Log Format and System time for Log Time.

    ICAgent supports only RE2 regular expressions. For details, see Syntax.
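The multi-line segmentation "By regular expression" described above can be sketched as follows: a line matching the start pattern begins a new log event, and other lines (for example, a stack trace) are appended to the current event. This is a hypothetical illustration; the pattern matches the YYYY-MM-DD hh:mm:ss.SSS example from Table 4 and is valid in both RE2 and Python's re:

```python
import re

# A line matching this pattern starts a new log event.
START = re.compile(r"\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}\.\d{3}")

def segment(lines):
    # Group raw lines into log events by the start pattern.
    events, current = [], []
    for line in lines:
        if START.match(line) and current:
            events.append("\n".join(current))
            current = []
        current.append(line)
    if current:
        events.append("\n".join(current))
    return events

raw = [
    "2019-01-01 23:59:59.011 ERROR something failed",
    "  at com.example.Main(Main.java:42)",
    "2019-01-01 23:59:59.012 INFO recovered",
]
print(len(segment(raw)))  # 2 events: the error (with its trace) and the info line
```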

  7. Click Next: Index Settings.

Step 5: Configure Indexing

An index is a storage structure used to query log data. Configuring indexing makes log searches and analysis faster and easier. Different index settings generate different query and analysis results. Configure index settings to fit your service requirements.

  • If you do not want to query or analyze logs using specific fields, you can skip configuring indexing when configuring log ingestion. This will not affect log collection. You can also configure indexing after creating the log ingestion configuration. However, index settings will only apply to newly ingested logs. For details, see Configuring Log Indexing. If you choose to skip this step, retain the default settings on the Index Settings page and click Skip and Submit. The message "Logs ingested" will appear.
  • To query or analyze logs using specific fields, configure indexing on the Index Settings page when creating an ingestion configuration. For details, see Configuring Log Indexing.

    On this page, click Auto Configure to have LTS generate index fields based on the first log event in the last 15 minutes or common system reserved fields (such as hostIP, hostName, and pathFile), and manually add structured fields. After completing the settings, click Submit. The message "Logs ingested" will appear. You can also adjust the index settings after the ingestion configuration is created. However, the changes will only affect newly ingested logs.

Step 6: Complete the Ingestion Configuration

The created ingestion configuration will be displayed.
  • Click its name to view its details.
  • Click Modify in the Operation column to modify the ingestion configuration.
  • Click Configure Tag in the Operation column to add a tag.
  • Click More > Copy in the Operation column to copy the ingestion configuration.
  • Click More > Delete in the Operation column to delete the ingestion configuration.

    Deleting an ingestion configuration may lead to log collection failures, potentially resulting in service exceptions related to user logs. In addition, the deleted ingestion configuration cannot be restored. Exercise caution when performing this operation.

  • Click More > ICAgent Collect Diagnosis in the Operation column of the ingestion configuration to monitor the exceptions, overall status, and collection status of ICAgent. If this function is not displayed, enable ICAgent diagnosis by referring to Setting ICAgent Collection.

Setting Multiple Ingestion Configurations in a Batch

You can set multiple ingestion configurations for multiple scenarios in a batch, avoiding repetitive setups.

  1. On the Ingestion Management page, click Batch Create to go to the configuration details page.

    1. Ingestion Type: Select CCE (Cloud Container Engine).
    2. Rule List:
      • Enter the number of ingestion configurations in the text box and click Add.
      • Enter a rule name under Configuration Items on the right. You can also double-click the name of the ingestion configuration on the left to replace it with a custom name after setting the configuration items. A rule name can contain 1 to 64 characters, including only letters, digits, hyphens (-), underscores (_), and periods (.). It cannot start with a period or underscore or end with a period.
      • To copy an ingestion configuration, move the cursor to it and click the copy icon.
      • To delete an ingestion configuration, move the cursor to it and click the delete icon. In the displayed dialog box, click Yes.
    3. Configuration Items:
      • The ingestion configurations are displayed on the left. You can add up to 99 more configurations.
      • The ingestion configuration items are displayed on the right. Set them by referring to Step 4: Configure the Collection.
      • After an ingestion configuration is complete, you can click Apply to Other Configurations to copy its settings to other configurations.

  2. Click Check Parameters. After the check is successful, click Submit.

    The added ingestion configurations will be displayed on the Ingestion Management page after the batch creation is successful.

  3. (Optional) Perform the following operations on ingestion configurations:

    • Select multiple existing ingestion configurations and click Edit. On the displayed page, select an ingestion type to modify the corresponding ingestion configurations.
    • Select multiple disabled ingestion configurations, click Enable/Disable Ingestion Configuration, and select Enable to enable them in a batch.
    • Select multiple enabled ingestion configurations, click Enable/Disable Ingestion Configuration, and select Disable. Logs will not be collected for disabled ingestion configurations. Exercise caution when disabling these configurations.
    • Select multiple existing ingestion configurations and click Delete.