Updated on 2024-03-22 GMT+08:00

Collecting Logs from ServiceStage - Containerized Applications

LTS collects log data from CCE. By processing a massive number of logs efficiently, securely, and in real time, LTS provides useful insights for you to optimize the availability and performance of cloud services and applications. It also helps you efficiently perform real-time decision-making, device O&M management, and service trend analysis.

Currently, this function is available only to whitelisted users. To use it, submit a service ticket.

Prerequisites

Restrictions

  • CCE cluster nodes whose container engine is Docker are supported.
  • CCE cluster nodes whose container engine is Containerd are supported. ICAgent 5.12.130 or later is required.
  • To collect logs from container log directories that are mounted to host directories, you must configure the node file path.
  • Restrictions on the Docker storage driver: Currently, container file log collection supports only the overlay2 storage driver. devicemapper cannot be used as the storage driver. Run the following command to check the storage driver type:
    docker info | grep "Storage Driver" 
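    For example, on a node that uses the supported overlay2 driver, the command returns output similar to the following (the exact format may vary with the Docker version):
    Storage Driver: overlay2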

Procedure

  1. Log in to the LTS console.
  2. In the left navigation pane, choose Log Ingestion. On the displayed page, click ServiceStage - Containerized Application Logs.
  3. Alternatively, choose Log Management in the left navigation pane. Click the name of the target log stream to go to the log details page. Click the icon in the upper right corner. On the displayed page, click the Collection Configuration tab and click Create. In the displayed dialog box, click ServiceStage - Containerized Application Logs.
  4. In the Select Log Stream step, set the following parameters:

    1. Select a ServiceStage application and ServiceStage environment.
    2. Select a log group from the Log Group drop-down list. If there are no desired log groups, click Create Log Group to create one.
    3. Select a log stream from the Log Stream drop-down list. If there are no desired log streams, click Create Log Stream to create one.
    4. Click Next: Check Dependencies.

  5. Check dependencies.

    The system automatically checks whether the following items meet the requirements:

    1. There is a host group with the custom identifier k8s-log-Application ID.
    2. There is a log group named k8s-log-Application ID.
    You need to meet all the requirements before moving on. If any check item fails, click Auto Correct.
    • Auto Correct: Complete the preceding settings with one click.
    • Check Again: Recheck the dependencies.
    Figure 1 Checking dependencies

  6. (Optional) Select a host group.

    1. Select one or more host groups from which you want to collect logs. If there are no desired host groups, click Create above the host group list to create one.
      • The host group to which the cluster belongs is selected by default. You can also select host groups as required.
      • You can skip this step and configure host groups after the ingestion configuration is complete. There are two ways to do this:
        • Choose Host Management in the navigation pane, click the Host Groups tab, and associate host groups with ingestion configurations.
        • On the LTS console, choose Log Ingestion in the navigation pane and click an ingestion configuration. On the displayed page, add one or more host groups for association.
      Figure 2 Selecting a host group
    2. Click Next: Configurations.

  7. Configure the collection.

    Specify collection rules. For details, see Configuring the Collection.

  8. Configure log structuring. For details, see Overview.

    If structuring has already been configured for the selected log stream, exercise caution when deleting the structuring configuration.

  9. Configure indexes. For details, see Index Settings.
  10. Click Submit. An ingestion configuration will be displayed on the Log Ingestion page. You can:

    • Click the name of the ingestion configuration to view its details.
    • Click Edit in the Operation column to modify the ingestion configuration.
    • Click Copy in the Operation column to copy the ingestion configuration.
    • Click Delete in the Operation column to delete the ingestion configuration.

Configuring the Collection

When you configure ServiceStage log ingestion, the collection configuration details are as follows.

Figure 3 Basic settings
  1. Basic Settings: Enter a name containing 1 to 64 characters. Only letters, digits, hyphens (-), underscores (_), and periods (.) are allowed. The name cannot start with a period or underscore, or end with a period. (A validation example is provided at the end of this section.)
  2. Data Source: Select a data source type and configure it.
    • Container standard output: Collects stderr and stdout logs of a specified container in the cluster.
      • The standard output of the matched container is collected to the specified log stream and is no longer reported to AOM.
      • The standard output of a container can be ingested to only one log stream.
    • Container file: Collects file logs of a specified container in the cluster.
    • Node file: Collects files of a specified node in the cluster.

      You cannot add the same host path to more than one log stream.

    • Kubernetes event: Collects event logs in the Kubernetes cluster.

      Kubernetes events of a Kubernetes cluster can be ingested to only one log stream.

    Table 1 Collection configuration parameters

    Container standard output

    Either Container Standard Output (stdout) or Container Standard Error (stderr) must be enabled.

    Container file

    • Add one or more host paths. LTS will collect logs from these paths.
      NOTE:
      • If a container mount path has been configured for the CCE cluster workload, the paths added for this field are invalid. The collection paths take effect only after the mount path is deleted.
      • You cannot add the same host path to more than one log stream.
    • Set Collection Filters: Blacklisted directories or files will not be collected. If you specify a directory, all files in the directory are filtered out.

    Node file

    • Add one or more host paths. LTS will collect logs from these paths.
      NOTE:

      You cannot add the same host path to more than one log stream.

    • Set Collection Filters: Blacklisted directories or files will not be collected. If you specify a directory, all files in the directory are filtered out.

    Kubernetes event

    You do not need to configure this parameter. Only ICAgent 5.12.150 or later is supported.

  3. ServiceStage matching rule: Select the corresponding component.
  4. Perform other configurations.
    Table 2 Other configurations

    Split Logs

    LTS supports log splitting, which is disabled by default.

    If this option is enabled, a single-line log larger than 500 KB will be split into multiple lines for collection. For example, a 600 KB single-line log will be split into a line of 500 KB and a line of 100 KB.

    If this option is disabled, a log larger than 500 KB will be truncated.

    Collect Binary Files

    LTS supports binary file collection, which is disabled by default.

    Run the file -i File_name command to view the file type. charset=binary indicates that a log file is a binary file. (An example is provided at the end of this section.)

    If this option is enabled, binary log files will be collected, but only UTF-8 strings are supported. Other strings will be garbled on the LTS console.

    If this option is disabled, binary log files will not be collected.

  5. Advanced Settings: Configure the log format and log time.
    Figure 4 Advanced settings
    Table 3 Log collection settings

    Log Format

    • Single-line: Each log line is displayed as a single log event.
    • Multi-line: Multiple lines of exception log events can be displayed as a single log event. This is helpful when you check logs to locate problems.

    Log Time

    System time: By default, the log collection time is used as the log time and is displayed at the beginning of each log event.

    NOTE:
    • Log collection time is the time when logs are collected and sent by ICAgent to LTS.
    • Log printing time is the time when logs are printed. ICAgent collects and sends logs to LTS at 1-second intervals.
    • Restriction on log collection time: Logs are collected within 24 hours before and after the system time.

    Time wildcard: You can set a time wildcard so that ICAgent will look for the log printing time as the beginning of a log event.

    • If the time format in a log event is 2019-01-01 23:59:59.011, the time wildcard should be set to YYYY-MM-DD hh:mm:ss.SSS.
    • If the time format in a log event is 19-1-1 23:59:59.011, the time wildcard should be set to YY-M-D hh:mm:ss.SSS.
    NOTE:

    If a log event does not contain year information, ICAgent regards it as printed in the current year.

    Example:

    YY   - year (19)     
    YYYY - year (2019)  
    M    - month (1)     
    MM   - month (01)    
    D    - day (1)       
    DD   - day (01)        
    hh   - hours (23)     
    mm   - minutes (59)   
    ss   - seconds (59) 
    SSS  - millisecond (999)
    hpm     - hours (03PM)
    h:mmpm    - hours:minutes (03:04PM)
    h:mm:sspm  - hours:minutes:seconds (03:04:05PM)       
    hh:mm:ss ZZZZ (16:05:06 +0100)       
    hh:mm:ss ZZZ  (16:05:06 CET)       
    hh:mm:ss ZZ   (16:05:06 +01:00)

    Log Segmentation

    This parameter needs to be specified if the Log Format is set to Multi-line. By generation time indicates that a time wildcard is used to detect log boundaries, whereas By regular expression indicates that a regular expression is used.

    By regular expression

    You can set a regular expression to look for a specific pattern to indicate the beginning of a log event. This parameter needs to be specified when you select Multi-line for Log Format and By regular expression for Log Segmentation.
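
For the By regular expression option, if each log event begins with a timestamp such as 2019-01-01 23:59:59.011, a regular expression like the following (an illustrative sketch; adapt it to your own log format) marks every line that starts with such a timestamp as the beginning of a new log event, so that follow-on lines such as stack traces are appended to the previous event:

    ^\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}\.\d{3}.*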
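
For the Collect Binary Files option, you can check whether a log file is binary before enabling the option. The following run is illustrative (the file paths are hypothetical); charset=binary in the output indicates a binary file:

    file -i /var/log/app/app.log
    /var/log/app/app.log: text/plain; charset=utf-8
    file -i /var/log/app/app.bin
    /var/log/app/app.bin: application/octet-stream; charset=binary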
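
The name format described in Basic Settings can also be checked locally before you submit the configuration. The following shell snippet is a minimal sketch (the name and the regular expression are illustrative and are not part of the LTS console); it mirrors the stated rule: 1 to 64 characters; only letters, digits, hyphens (-), underscores (_), and periods (.); no leading period or underscore; no trailing period:

    # Illustrative local check of a candidate configuration name
    name="servicestage-app.log-config"
    echo "$name" | grep -Eq '^[A-Za-z0-9-]([A-Za-z0-9._-]{0,62}[A-Za-z0-9_-])?$' && echo "valid" || echo "invalid"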