Collecting Data Plane Logs

Updated on 2025-02-14 GMT+08:00

Billing

LTS does not charge you for creating log groups and offers a free quota for log collection every month. You pay only for log volume that exceeds the quota.

Data Plane Components

Data plane logs are reported to two log streams, each corresponding to a component of the Kubernetes data plane. To learn more about these components, see Kubernetes Components.

Table 1 Data plane components

Log Type                  | Component      | Log Stream          | Description
Data plane component logs | default-stdout | stdout-{Cluster ID} | Stdout logs. Default log group: k8s-log-{Cluster ID}
Data plane component logs | default-event  | event-{Cluster ID}  | Kubernetes events. Default log group: k8s-log-{Cluster ID}

Log Collection

  1. View and configure log collection policies.

    1. Access the fleet console. In the navigation pane, choose Container Clusters. Then, click the cluster name to access the cluster console. In the navigation pane, choose Logging.
    2. In the upper right corner, click View Log Collection Policies. All log collection policies in the current cluster are displayed.
      Figure 1 Viewing log collection policies

      If Container standard output and Kubernetes events are selected during add-on installation, two log collection policies will be created, and the collected logs will be reported to the default log group and log streams.

    3. Click Create Log Policy and configure parameters as required.

      Policy Template: If no log collection policy is enabled during add-on installation or the log collection policy is deleted, you can use this option to create a default log collection policy.

      Figure 2 Policy template

      Custom Policy: You can use this option to create custom log collection policies.

      Figure 3 Custom policy
      Table 2 Custom policy parameters

      Parameter

      Description

      Log Type

      Type of logs to be collected.

      • Container standard output: used to collect container standard output logs. You can create a log collection policy by namespace, workload name, or instance label.
      • Container file log: used to collect text logs. You can create a log collection policy by workload or instance label.
      • Node file log: used to collect logs from a node. Only one file path can be configured for a log collection policy.

      Log Source

      Containers whose logs are to be collected.

      • All containers: collects logs from all containers in a specified namespace. If no namespace is specified, logs of containers in all namespaces are collected.
      • Workload: collects logs from specified containers of a specified workload. If no container is specified, logs of all containers of the workload are collected.
      • Workload with target label: collects logs from specified containers of workloads matching a specified label. If no container is specified, logs of all containers of the matching workloads are collected.

      Collection Path

      Path of files where logs are to be collected.

      The path must start with a slash (/) and contain a maximum of 512 characters. Only uppercase letters, lowercase letters, digits, hyphens (-), underscores (_), slashes (/), asterisks (*), and question marks (?) are allowed.

      The file name can contain only uppercase letters, lowercase letters, digits, hyphens (-), underscores (_), asterisks (*), question marks (?), and periods (.).

      Enter an absolute path for the log directory. Compressed log files (.gz, .tar, and .zip) are not supported.

      A maximum of three levels of directories can be matched using wildcards. The level-1 directory does not support wildcards.

      The directory name and file name must be complete names and support asterisks (*) and question marks (?) as wildcards.

      An asterisk (*) can match multiple characters. A question mark (?) can match only one character. Example:

      • If the directory is /var/logs/* and the file name is *.log, any log files with the extension .log in all directories in the /var/logs directory will be reported.
      • If the directory is /var/logs/app_* and the file name is *.log, any log files with the extension .log in all directories that match app_* in the /var/logs directory will be reported.

      If a volume is attached to the data directory of a service container, this add-on cannot collect data from the parent directory. In this case, you need to configure a complete data directory. For example, if the data volume is attached to the /var/log/service directory, logs cannot be collected from the /var/log or /var/log/* directory. In this case, you need to set the collection directory to /var/log/service.

      Log Format

      • Single-line

        Each log contains only one line of text. The newline character \n denotes the start of a new log.

      • Multi-line

        Some programs (for example, Java programs) print a single log across multiple lines. By default, logs are collected line by line. To report a multi-line log as one message, enable multi-line logging and enter a regular expression that matches the first line of each log.

        Example:

        To treat each line that starts with a date as the beginning of a new log, enter \d{4}-\d{2}-\d{2} \d{2}\:\d{2}\:\d{2}.*.

        The following three lines are then reported as a single log, because only the first line starts with a date:

        2022-01-01 00:00:00 Exception in thread "main" java.lang.RuntimeException: Something has gone wrong, aborting!

        at com.myproject.module.MyProject.badMethod(MyProject.java:22)

        at com.myproject.module.MyProject.oneMoreMethod(MyProject.java:18)

      Report to LTS

      This parameter is used to configure the log group and log stream for log reporting.

      • Default log groups/log streams: The default log group (k8s-log-{Cluster ID}) and default log stream (stdout-{Cluster ID}) are automatically selected.
      • Custom log groups/log streams: You can select any log group and log stream.

      Log Group

      A log group is the basic unit for LTS to manage logs. If you do not have a log group, CCE prompts you to create one. The default name is k8s-log-{Cluster ID}, for example, k8s-log-bb7eaa87-07dd-11ed-ab6c-0255ac1001b3.

      Log Stream

      A log stream is the basic unit for log read and write. You can create log streams in a log group to store different types of logs for finer log management. When you install the add-on or create a log policy based on a template, the following log streams are automatically created:

      • stdout-{Cluster ID} for container logs, for example, stdout-bb7eaa87-07dd-11ed-ab6c-0255ac1001b3
      • event-{Cluster ID} for Kubernetes events, for example, event-bb7eaa87-07dd-11ed-ab6c-0255ac1001b3
    4. Click Edit to modify an existing log collection policy.
    5. Click Delete to delete an existing log collection policy.

  2. View the logs.

    1. Access the fleet console. In the navigation pane, choose Container Clusters. Then, click the cluster name to access the cluster console. In the navigation pane, choose Logging.
    2. View different types of logs:
      • Container Logs: displays all logs in the default log stream stdout-{Cluster ID} of the default log group k8s-log-{Cluster ID}. You can search for logs by workload for a Huawei Cloud cluster.
        Figure 4 Querying container logs
      • Kubernetes Events: displays all Kubernetes events in the default log stream event-{Cluster ID} of the default log group k8s-log-{Cluster ID}.
      • Control Plane Logs: displays all logs of components on the control plane in the default log stream {Component name}-{Cluster ID} of the default log group k8s-log-{Cluster ID}.
      • Control Plane Audit Logs: displays all audit logs of the control plane in the default log stream audit-{Cluster ID} of the default log group k8s-log-{Cluster ID}.
      • Global Log Query: You can view logs in the log streams of all log groups. You can specify a log stream to view the logs. By default, the default log group k8s-log-{Cluster ID} is selected. You can click the edit icon on the right of Switching Log Groups to switch to another log group.
        Figure 5 Global log query
    3. Click View Log Collection Policies in the upper right corner. Locate the log collection policy and click View Log to go to the log list.
      Figure 6 Viewing logs
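The multi-line matching behavior described in Table 2 can be sketched in a few lines of Python. The `group_logs` helper below is hypothetical (it is not part of LTS or the add-on); it only illustrates how a start-of-log regular expression groups raw lines into logs:

```python
import re

# Start-of-log pattern from the multi-line example: a line beginning
# with a "YYYY-MM-DD hh:mm:ss" timestamp starts a new log.
START = re.compile(r"\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}.*")

def group_logs(lines):
    """Group raw lines into logs: each match of START opens a new log."""
    logs, current = [], []
    for line in lines:
        if START.match(line) and current:
            logs.append("\n".join(current))
            current = []
        current.append(line)
    if current:
        logs.append("\n".join(current))
    return logs

raw = [
    '2022-01-01 00:00:00 Exception in thread "main" java.lang.RuntimeException: Something has gone wrong, aborting!',
    '    at com.myproject.module.MyProject.badMethod(MyProject.java:22)',
    '    at com.myproject.module.MyProject.oneMoreMethod(MyProject.java:18)',
]
print(len(group_logs(raw)))  # → 1: the stack trace is reported as one log
```

With single-line collection, the same input would instead be reported as three separate logs, one per line.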

Troubleshooting

  1. "Failed to create log group, the number of log groups exceeds the quota" is reported in the standard output log of log-operator.

    Example:

    2023/05/05 12:17:20.799 [E] call 3 times failed, resion: create group failed, projectID: xxx, groupName: k8s-log-xxx, err: create groups status code: 400, response: {"error_code":"LTS.0104","error_msg":"Failed to create log group, the number of log groups exceeds the quota"}, url: https://lts.cn-north-4.myhuaweicloud.com/v2/xxx/groups, process will retry after 45s

    Solution: On the LTS console, delete unnecessary log groups. For details about the quota limit of log groups, see Log Groups.

  2. A container file path is configured but is not mounted to the container, and Docker is used as the container engine. As a result, logs cannot be collected.

    Solution:

    Check whether Device Mapper is used as the Docker storage driver on the node where the workload resides. Device Mapper does not support text log collection. (This restriction is displayed when you create a log collection policy, as shown in Figure 7.) To check this, perform the following operations:

    1. Go to the node where the workload resides.
    2. Run the docker info | grep "Storage Driver" command.
    3. If the value of Storage Driver is devicemapper, text logs cannot be collected.
    Figure 7 Creating a log collection policy
  3. Logs cannot be reported, and "log's quota has full" is reported in the standard output log of the OTel component.

    Solution:

    LTS provides a free log quota. If the quota is used up, you will be billed for the excess log usage. This error message indicates that the free quota has been used up. To continue collecting logs, log in to the LTS console, choose Configuration Center > Quota Configuration, and enable Continue to Collect Logs When the Free Quota Is Exceeded.

  4. Text logs cannot be collected because wildcards are configured for the collection directory.

    Troubleshooting: Check how volumes are mounted in the workload configuration. If a volume is attached to the data directory of a service container, this add-on cannot collect data from the parent directory, so the collection directory must be the complete data directory. For example, if the data volume is attached to the /var/log/service directory, logs cannot be collected from the /var/log or /var/log/* directory; set the collection directory to /var/log/service.

    Solution: If the log generation directory is /application/logs/{Application name}/*.log, attach the data volume to the /application/logs directory and set the collection directory in the log collection policy to /application/logs/*/*.log.
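The wildcard semantics used in collection paths (an asterisk spans multiple characters within one directory level, a question mark exactly one character) can be approximated with Python's pathlib, since `PurePosixPath.match` also applies wildcards per path segment. This sketch only models the matching rules; it does not reproduce the add-on's limits, such as the three-level wildcard depth or the literal level-1 directory:

```python
from pathlib import PurePosixPath

# Illustrative only: approximate whether a log file path is covered by
# a collection path (directory pattern plus file name pattern).
def collected(path: str, pattern: str) -> bool:
    return PurePosixPath(path).match(pattern)

print(collected("/var/logs/nginx/access.log", "/var/logs/*/*.log"))                # True
print(collected("/var/logs/app_1/app.log", "/var/logs/app_*/*.log"))               # True
print(collected("/application/logs/orders/app.log", "/application/logs/*/*.log"))  # True
print(collected("/application/logs/app.log", "/application/logs/*/*.log"))         # False: no subdirectory level
```

The last case mirrors the solution above: with the volume mounted at /application/logs, the pattern /application/logs/*/*.log covers logs in per-application subdirectories but not files placed directly under the mount point.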
