Logging FAQ

Updated on 2025-02-18 GMT+08:00

How Do I Disable Logging?

Disabling container log and Kubernetes event collection

Method 1: Log in to the CCE console and click the cluster name to access the cluster console. In the navigation pane, choose Logging. In the upper right corner, click View Log Collection Policies. Then, locate and delete the corresponding log collection policy. By default, the default-event policy reports Kubernetes events and the default-stdout policy reports stdout logs.

Figure 1 Deleting a log collection policy

Method 2: Access the Add-ons page and uninstall the Cloud Native Log Collection add-on. Note: Once this add-on is uninstalled, Kubernetes events are no longer reported to AOM.

Disabling log collection for control plane components

Choose Logging > Control Plane Logs, click Configure Control Plane Component Logs, and deselect one or more components whose logs do not need to be collected.

Figure 2 Configuring control plane component logs

Disabling control plane audit log collection

Choose Logging > Control Plane Audit Logs, click Configure Control Plane Audit Logs, and deselect the component whose logs do not need to be collected.

Figure 3 Configuring control plane audit logs

All Components Except log-operator Fail to Become Ready

Symptom: All components except log-operator fail to become ready, and a volume fails to be attached to the node.

Solution: Check the logs of log-operator. During add-on installation, log-operator generates the configuration files required by the other components. If these files are invalid, none of the components can start.

The log information is as follows:

MountVolume.SetUp failed for volume "otel-collector-config-vol":configmap "log-agent-otel-collector-config" not found
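
For reference, the following commands show one way to inspect log-operator from the CLI. This is a sketch that assumes, as elsewhere in this FAQ, that the add-on components run in the monitoring namespace and that log-operator belongs to the log-agent-log-operator Deployment; verify both in your cluster first.

# List the add-on pods and their status.
kubectl get pods -n monitoring

# View the log-operator logs for configuration errors.
kubectl logs -n monitoring deploy/log-agent-log-operator

# Inspect the volume mount events of a component that is not ready.
kubectl describe pod -n monitoring <pod-name-of-unready-component>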

An Error Is Reported in the Stdout Logs of log-operator

Symptom:

2023/05/05 12:17:20.799 [E] call 3 times failed, reason: create group failed, projectID: xxx, groupName: k8s-log-xxx, err: create groups status code: 400, response: {"error_code":"LTS.0104","error_msg":"Failed to create log group, the number of log groups exceeds the quota"}, url: https://lts.cn-north-4.myhuaweicloud.com/v2/xxx/groups, process will retry after 45s

Solution: On the LTS console, delete unnecessary log groups. For details about the log group quota, see Log Groups.

Container File Logs Cannot Be Collected When Docker Is Used as the Container Engine

Symptom:

A container file path is configured but is not mounted to the container, and Docker is used as the container engine. As a result, logs cannot be collected.

Solution:

Check whether Device Mapper is used for the node where the workload resides. Device Mapper does not support text log collection. (This restriction is displayed when you create a log collection policy.) To check this, perform the following operations:

  1. Go to the node where the workload resides.
  2. Run the docker info | grep "Storage Driver" command.
  3. If the value of Storage Driver is Device Mapper, text logs cannot be collected (see the sample check below).
Figure 4 Creating a log policy
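
For reference, a sample check with the output that indicates the restriction (the driver name Docker reports for Device Mapper is devicemapper):

# Run on the node where the workload resides.
docker info | grep "Storage Driver"
# Example output when text logs cannot be collected:
#   Storage Driver: devicemapper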

Logs Cannot Be Reported and "log's quota has full" Is Displayed in Stdout Logs of otel

Figure 5 Error information of otel

Solution:

LTS provides a free log quota. This error message indicates that the free quota has been used up; any further usage is billed. To continue collecting logs, log in to the LTS console, choose Configuration Center in the navigation pane, and enable Continue to Collect Logs When the Free Quota Is Exceeded.

Figure 6 Quota configuration

Container File Logs Cannot Be Collected Due to the Wildcard in the Collection Directory

Troubleshooting: Check the volume mounting status in the workload configuration. If a volume is mounted to the data directory of a service container, this add-on cannot collect data from the parent directory, so the collection directory must be set to the complete data directory. For example, if the data volume is mounted to the /var/log/service directory, logs cannot be collected from the /var/log or /var/log/* directory. In this case, set the collection directory to /var/log/service.

Solution: If logs are generated in /application/logs/{Application name}/*.log, mount the data volume to the /application/logs directory and set the collection directory in the log collection policy to /application/logs/*/*.log, as sketched below.
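
As an illustration, the following minimal Deployment sketch (hypothetical names and a placeholder image) mounts the data volume at the /application/logs root rather than at a subdirectory, so that a collection directory of /application/logs/*/*.log works:

cat <<'EOF' | kubectl apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo-app                       # hypothetical workload name
spec:
  replicas: 1
  selector:
    matchLabels:
      app: demo-app
  template:
    metadata:
      labels:
        app: demo-app
    spec:
      containers:
      - name: app
        image: nginx:latest            # placeholder image
        volumeMounts:
        - name: app-logs
          mountPath: /application/logs # mount the log root, not a subdirectory
      volumes:
      - name: app-logs
        emptyDir: {}
EOF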

fluent-bit Pod Keeps Restarting

Troubleshooting: Run kubectl describe pod on the fluent-bit pod. The output shows that the pod was restarted due to OOM. A large number of evicted pods on the node where fluent-bit resides are occupying resources, which causes the OOM.

Solution: Delete the evicted pods from the node.
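
If there are many evicted pods, the following hedged one-liner lists and deletes them across all namespaces; review the list before deleting anything:

# List evicted pods, then delete them one by one.
kubectl get pods -A | grep Evicted | awk '{print $1, $2}' | \
  while read ns name; do kubectl delete pod -n "$ns" "$name"; done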

Logs Cannot Be Collected When the Node OS Is Ubuntu 18.04

Troubleshooting: Restart the fluent-bit pod on the current node and check whether logs are properly collected. If they are not, check whether the log file to be collected already existed in the image when the image was packaged. In the container log collection scenario, log files that already exist at image packaging time are considered invalid and cannot be collected. This is a known community issue. For details, see Issues.

Solution: If you want to collect log files that already exist in the image, you are advised to configure a post-start command on the Lifecycle page when creating the workload and use it to delete the original log files so that they are regenerated and can be collected.

Job Logs Cannot Be Collected

Troubleshooting: Check the job lifetime. If the job lifetime is less than 1 minute, the pod will be destroyed before logs are collected. In this case, logs cannot be collected.

Solution: Prolong the job lifetime, for example, by keeping the pod alive for a short period after the task completes, as sketched below.
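
A minimal sketch (hypothetical Job name and a placeholder image) that appends a sleep to the task command so the pod survives past the collection window:

cat <<'EOF' | kubectl apply -f -
apiVersion: batch/v1
kind: Job
metadata:
  name: demo-job                  # hypothetical Job name
spec:
  template:
    spec:
      restartPolicy: Never
      containers:
      - name: task
        image: busybox:latest     # placeholder image
        # The trailing sleep keeps the pod alive past the one-minute
        # collection window so its logs can be picked up.
        command: ["/bin/sh", "-c", "echo 'task output'; sleep 90"]
EOF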

Cloud Native Log Collection Is Running Normally, but Some Log Collection Policies Do Not Take Effect

Solution:

  • If the log collection policy of the event type does not take effect or the add-on version is earlier than 1.5.0, check the stdout of the log-agent-otel-collector workload.

    Go to the Add-ons page and click the name of Cloud Native Log Collection. Then, click the Pods tab, locate log-agent-otel-collector, and choose More > View Log.

    Figure 7 Viewing the log of the log-agent-otel-collector instance
  • If a log collection policy of another type does not take effect and the add-on version is later than 1.5.0, check the log of the log-agent-fluent-bit instance on the node where the container to be monitored resides.
    Figure 8 Viewing the log of the log-agent-fluent-bit instance

    Select the fluent-bit container, search for the keyword "fail to push {event/log} data via lts exporter" in the log, and view the error message.

    Figure 9 Viewing the log of the fluent-bit container
    1. If the error message "The log streamId does not exist." is displayed, the log group or log stream does not exist. In this case, choose Logging > View Log Collection Policies, then edit the log collection policy, or delete and re-create it, to update the log group or log stream.
    2. For other errors, go to LTS to search for the error code and view the cause.

OOM Occurs on log-agent-otel-collector

Troubleshooting:

  1. View the stdout of the log-agent-otel-collector component to check whether any errors occurred recently.
    kubectl logs -n monitoring log-agent-otel-collector-xxx

    If an error is reported, handle the error first and ensure that logs can be collected normally.

  2. If no error is reported recently and OOM still occurs, perform the following steps:
    1. Go to Logging, click the Global Log Query tab, and click Expand Log Statistics Chart to view the log statistics. If logs are reported to a non-default log group and log stream, select that log group and log stream on the tab first.
      Figure 10 Viewing log statistics
    2. Calculate the number of logs reported per second based on the bar chart in the statistics chart and check whether the number of logs exceeds the log collection performance specification.

      If it does, you can increase the number of log-agent-otel-collector replicas or raise the memory limit of log-agent-otel-collector (see the sketch after these steps).

    3. If the CPU usage exceeds 90%, also raise the CPU limit of log-agent-otel-collector.
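
A hedged sketch of making these adjustments from the CLI, assuming log-agent-otel-collector is a Deployment in the monitoring namespace. Note that an add-on upgrade may revert direct edits, so prefer the add-on configuration page where available:

# Add a replica to spread the log volume across more collector pods.
kubectl -n monitoring scale deployment log-agent-otel-collector --replicas=2

# Raise the memory limit of the first container; adjust the value as needed.
kubectl -n monitoring patch deployment log-agent-otel-collector --type='json' \
  -p='[{"op": "replace", "path": "/spec/template/spec/containers/0/resources/limits/memory", "value": "2Gi"}]'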

Some Pod Information Is Missing During Log Collection Due to Excessive Node Load

When the Cloud Native Log Collection add-on version is later than 1.5.0, some pod information, such as the pod ID and name, is missing from container file logs or stdout logs.

Troubleshooting:

Go to the Add-ons page and click the name of Cloud Native Log Collection. Then, click the Pods tab, locate the log-agent-fluent-bit of the corresponding node, and choose More > View Log.

Figure 11 Viewing the log of the log-agent-fluent-bit instance

Select the fluent-bit container and search for the keyword "cannot increase buffer: current=512000 requested=*** max=512000" in the log.

Figure 12 Viewing the log of the fluent-bit container
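
The same search can be done from the CLI, assuming the per-node collector runs as log-agent-fluent-bit pods in the monitoring namespace:

# Find the fluent-bit pod on the affected node, then search its logs.
kubectl get pods -n monitoring -o wide | grep log-agent-fluent-bit
kubectl logs -n monitoring <log-agent-fluent-bit-pod> -c fluent-bit | \
  grep "cannot increase buffer"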

Solution:

Run the kubectl edit deploy -n monitoring log-agent-log-operator command and add --kubernetes-buffer-size=20MB to the startup parameters of the log-operator container. The default value is 16MB. Estimate an appropriate value based on the total size of pod information on the node. A value of 0 indicates no limit.

CAUTION:

If the Cloud Native Log Collection add-on is upgraded, you need to reconfigure kubernetes-buffer-size.

Figure 13 Modifying the command line parameter of the log-operator container
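
As an alternative to editing the Deployment interactively, the flag can be appended with a patch. This sketch assumes the log-operator container is the first container in the pod template and already defines an args list; as the caution above notes, an add-on upgrade resets the setting:

# Append the buffer-size flag to the log-operator container arguments.
kubectl -n monitoring patch deployment log-agent-log-operator --type='json' \
  -p='[{"op": "add", "path": "/spec/template/spec/containers/0/args/-", "value": "--kubernetes-buffer-size=20MB"}]'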

How Do I Change the Log Storage Period on Logging?

  1. On the Clusters page, hover the cursor over the cluster name to view the current cluster ID.

    Figure 14 Viewing the cluster ID

  2. Log in to the LTS console. Then, query the log group and log stream by cluster ID.

    Figure 15 Querying the log group

  3. Locate the log group and click Modify to configure the log storage period.

    NOTE:

    The log retention period affects log storage expenditures.

    Figure 16 Changing the log retention period

What Can I Do If the Log Group (Stream) in the Log Collection Policy Does Not Exist?

  • Scenario 1: The default log group (stream) does not exist.

    Take Kubernetes events as an example: If the default log group (stream) does not exist, the Kubernetes Events page on the console displays a message indicating that the current log group (stream) does not exist. In this case, you can create the default log group (stream) again.

    After the recreation, the ID of the default log group (stream) changes, and the existing log collection policy of the default log group (stream) does not take effect. In this case, you can rectify the fault by referring to Scenario 2.

    Figure 17 Creating a default log group (stream)
  • Scenario 2: The default log group (stream) exists but is inconsistent with the log collection policy.
    • For a log collection policy that can be modified, for example, default-stdout, do the following:
      1. Log in to the CCE console and click the cluster name to access the cluster console. In the navigation pane, choose Logging.
      2. In the upper right corner, click View Log Collection Policies. Then, locate the log collection policy and click Edit in the Operation column.
      3. Select Custom Log Group/Log Stream and configure the default log group (stream).
      Figure 18 Configuring the default log group (stream)
    • If a log collection policy cannot be modified, for example, default-event, you need to re-create a log collection policy as follows:
      1. Log in to the CCE console and click the cluster name to access the cluster console. In the navigation pane, choose Logging.
      2. In the upper right corner, click View Log Collection Policies. Then, locate the log collection policy and click Delete in the Operation column.
      3. Click Create Log Collection Policy. Then, select Kubernetes events in Policy Template and click OK.
  • Scenario 3: The custom log group (stream) does not exist.

    CCE does not support the creation of non-default log groups (streams). You can create a non-default log group (stream) on the LTS console.

    After the creation is complete, perform the following steps:

    1. Log in to the CCE console and click the cluster name to access the cluster console. In the navigation pane, choose Logging.
    2. In the upper right corner, click View Log Collection Policies. Then, locate the log collection policy and click Edit in the Operation column.
    3. Select Custom Log Group/Log Stream and configure a log group (stream).
    Figure 19 Configuring a custom log group (stream)

Logs Cannot Be Collected After Pods Are Scheduled to CCI

After pods are scheduled to CCI using a profile, their logs cannot be collected, even though the log collection policies appear normal on the CCE console.

Check whether the version of the CCE Cloud Bursting Engine for CCI add-on is earlier than 1.3.54. If yes, upgrade the add-on.
