Updated on 2024-06-26 GMT+08:00

Logging FAQ

How Do I Disable Logging?

Disabling container log and Kubernetes event collection

Method 1: Log in to the CCE console and click the cluster name to access the cluster console. In the navigation pane on the left, choose Logging. In the upper right corner, click View Log Collection Policies. Then, locate and delete the corresponding log collection policy. default-event indicates that Kubernetes events are reported by default, and default-stdout indicates that standard output logs are reported by default.

Figure 1 Deleting a log collection policy

Method 2: Access the Add-ons page and uninstall Cloud Native Logging. Note: Once you uninstall this add-on, it will no longer report Kubernetes events to AOM.

Disabling log collection for control plane components

Choose Logging > Control Plane Logs, click Configure Control Plane Component Logs, and deselect one or more components whose logs do not need to be collected.

Figure 2 Configuring control plane component logs

Disabling control plane audit log collection

Choose Logging > Control Plane Audit Logs, click Configure Control Plane Audit Logs, and deselect the component whose logs do not need to be collected.

Figure 3 Configuring control plane audit logs

All Components Except log-operator Are Not Ready

Symptom: All components except log-operator are not ready, and the volume fails to be mounted to the node.

Solution: Check the logs of log-operator. During add-on installation, log-operator generates the configuration files required by the other components. If these files are invalid, none of the other components can start.

The log information is as follows:

MountVolume.SetUp failed for volume "otel-collector-config-vol":configmap "log-agent-otel-collector-config" not found

There Is An Error in the Standard Output Log of log-operator

Symptom:

2023/05/05 12:17:20.799 [E] call 3 times failed, resion: create group failed, projectID: xxx, groupName: k8s-log-xxx, err: create groups status code: 400, response: {"error_code":"LTS.0104","error_msg":"Failed to create log group, the number of log groups exceeds the quota"}, url: https://lts.cn-north-4.myhuaweicloud.com/v2/xxx/groups, process will retry after 45s

Solution: On the LTS console, delete unnecessary log groups. For details about the log group quota, see Log Groups.

Container File Logs Cannot Be Collected When Docker Is Used as the Container Engine

Symptom:

A container file path is configured for collection, but no volume is mounted to that path in the container, and Docker is used as the container engine. As a result, logs cannot be collected.

Solution:

Check whether Device Mapper is used as the storage driver on the node where the workload resides. Device Mapper does not support text log collection. (This restriction is displayed when you create a log collection policy.) To check the storage driver, perform the following operations:

  1. Go to the node where the workload resides.
  2. Run the docker info | grep "Storage Driver" command.
  3. If the value of Storage Driver is Device Mapper, text logs cannot be collected.
Figure 4 Creating a log policy
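
The check above can be scripted as a minimal sketch. The sample output here stands in for a live `docker info` run; on a real node, replace the here-string with the command itself.

```shell
# Parse the storage driver from docker info output. The sample stands
# in for running `docker info` on the node.
docker_info_sample='Storage Driver: devicemapper'
driver=$(printf '%s\n' "$docker_info_sample" | grep 'Storage Driver' | awk -F': ' '{print $2}')
if [ "$driver" = "devicemapper" ]; then
  echo "Device Mapper detected: text logs cannot be collected from this node"
fi
```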

Logs Cannot Be Reported and "log's quota has full" Is Displayed in the Standard Output Log of the otel Component

Figure 5 Error information of the otel component

Solution:

LTS provides a free log quota. If the quota is used up, you will be charged for the excess log usage. If an error message is displayed, the free quota has been used up. To continue collecting logs, log in to the LTS console, choose Configuration Center in the navigation pane, and enable Continue to Collect Logs When the Free Quota Is Exceeded.

Figure 6 Quota configuration

Container File Logs Cannot Be Collected Because Wildcards Are Configured for the Collection Directory

Troubleshooting: Check how volumes are mounted in the workload configuration. If a volume is mounted to the data directory of a service container, this add-on cannot collect data from the parent directory, and the collection directory must point to the full data directory. For example, if the data volume is mounted to /var/log/service, logs cannot be collected from /var/log or /var/log/*. In this case, set the collection directory to /var/log/service.

Solution: If the log generation directory is /application/logs/{Application name}/*.log, mount the data volume to the /application/logs directory and set the collection directory in the log collection policy to /application/logs/*/*.log.
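
The reason the collection path must reach the log files directly can be seen with plain shell globbing: a single-level glob does not descend into subdirectories. The temporary paths below are stand-ins for /application/logs/{Application name}.

```shell
# Demonstrate that *.log at the top level misses files in
# subdirectories, while */*.log reaches them.
base=$(mktemp -d)
mkdir -p "$base/logs/app1"
touch "$base/logs/app1/run.log"
top_matches=$(ls "$base"/logs/*.log 2>/dev/null | wc -l)    # 0: nothing at the top level
deep_matches=$(ls "$base"/logs/*/*.log 2>/dev/null | wc -l) # 1: */*.log reaches the file
echo "$top_matches $deep_matches"
```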

fluent-bit Pod Keeps Restarting

Troubleshooting: Run the kubectl describe pod command. The output shows that the pod was restarted due to OOM. A large number of evicted pods on the node where the fluent-bit pod resides are occupying resources, causing the OOM.

Solution: Delete the evicted pods from the node.
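
Evicted pods can be located by filtering the STATUS column of `kubectl get pods -A`. A captured sample stands in for live output below; the pod names are illustrative.

```shell
# Filter evicted pods (STATUS is the fourth column) from sample
# `kubectl get pods -A` output.
sample='NAMESPACE   NAME    READY   STATUS    RESTARTS   AGE
default     web-1   0/1     Evicted   0          2d
default     web-2   1/1     Running   0          2d'
evicted=$(printf '%s\n' "$sample" | awk '$4 == "Evicted" {print $1 "/" $2}')
echo "$evicted"
# On a live cluster, delete each one with:
#   kubectl get pods -A | awk '$4 == "Evicted" {print $1, $2}' |
#     while read -r ns name; do kubectl delete pod -n "$ns" "$name"; done
```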

Logs Cannot Be Collected When the Node OS Is Ubuntu 18.04

Troubleshooting: Restart the fluent-bit pod on the node and check whether logs are collected properly. If they are not, check whether the log files to be collected already existed in the image when it was built. In the container log collection scenario, files that already existed at image build time cannot be collected. This is a known community issue. For details, see Issues.

Solution: To collect log files that already existed in the image when it was built, configure a Post-Start command on the Lifecycle page when creating the workload to delete the original log files. The files are then regenerated after the pod starts and can be collected.
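
The post-start cleanup amounts to removing the stale files so fresh ones are created. A minimal local sketch, with a temporary directory standing in for the real log path:

```shell
# Simulate the post-start cleanup: delete log files baked into the
# image so new files are created after startup and can be collected.
logdir=$(mktemp -d)
touch "$logdir/stale.log"        # simulates a file shipped in the image
rm -f "$logdir"/*.log            # the post-start command itself
remaining=$(ls "$logdir"/*.log 2>/dev/null | wc -l)
echo "remaining log files: $remaining"
```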

Job Logs Cannot Be Collected

Troubleshooting: Check the job lifetime. If the job lifetime is less than 1 minute, the pod will be destroyed before logs are collected. In this case, logs cannot be collected.

Solution: Prolong the job lifetime.
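
One way to size the extension is to pad the job command with a sleep so the pod outlives the one-minute collection window mentioned above. The runtime value below is an illustrative assumption.

```shell
# Compute how long to pad a short job so its pod survives the
# collection window.
runtime=20          # seconds the job itself takes (assumed)
min_window=60       # pod must live at least this long for collection
if [ "$runtime" -lt "$min_window" ]; then
  pad=$((min_window - runtime))
  echo "append 'sleep ${pad}' to the job command"
fi
```

In a Job spec, this would look like, for example, command: ["sh", "-c", "run-task && sleep 40"].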

Cloud Native Logging is Running Normally, but Some Log Collection Policies Do Not Take Effect

Solution:

  • If the log collection policy of the event type does not take effect or the add-on version is earlier than 1.5.0, check the standard output of the log-agent-otel-collector workload.

    Go to the Add-ons page and click the name of Cloud Native Logging. Then, click the Pods tab, locate log-agent-otel-collector, and choose More > View Log.

    Figure 7 Viewing the logs of the log-agent-otel-collector instance
  • If the log collection policy of the other type does not take effect and the add-on version is later than 1.5.0, check the log of the log-agent-fluent-bit instance on the node where the container to be monitored resides.
    Figure 8 Viewing the logs of the log-agent-fluent-bit instance

    Select the fluent-bit container, search for the keyword "fail to push {event/log} data via lts exporter" in the log, and view the error message.

    Figure 9 Viewing the logs of the fluent-bit container
    1. If the error message "The log streamId does not exist." is displayed, the log group or log stream does not exist. In this case, choose Logging > View Log Collection Policies, then edit the log collection policy, or delete and recreate it, to update the log group or log stream.
    2. For other errors, go to LTS to search for the error code and view the cause.
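
The two cases above can be sketched as a simple classification of a fluent-bit error line. The sample line stands in for output from `kubectl logs` on the fluent-bit container.

```shell
# Classify a fluent-bit exporter error line per the steps above.
line='fail to push log data via lts exporter, err: The log streamId does not exist.'
case "$line" in
  *"The log streamId does not exist."*)
    action="recreate the log collection policy to update the log group or stream" ;;
  *"fail to push"*)
    action="search LTS for the error code to find the cause" ;;
esac
echo "$action"
```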

OOM Occurs on log-agent-otel-collector

Troubleshooting:

  1. View the standard output log of the log-agent-otel-collector component to check whether any errors have occurred recently.
    kubectl logs -n monitoring log-agent-otel-collector-xxx

    If an error is reported, handle the error first and ensure that logs can be collected normally.

  2. If no error is reported recently and OOM still occurs, perform the following steps:
    1. Go to Logging, click the Global Log Query tab, and click Expand Log Statistics Chart to view the log statistics chart. If the reported log group and log stream are not the default ones, select them on the Global Log Query tab.
      Figure 10 Viewing log statistics
    2. Calculate the number of logs reported per second based on the bar chart in the statistics chart and check whether the number of logs exceeds the log collection performance specification.

      If the number of logs exceeds the log collection performance specification, increase the number of log-agent-otel-collector replicas or raise the memory limit of log-agent-otel-collector.

    3. If the CPU usage exceeds 90%, increase the CPU upper limit of log-agent-otel-collector.
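
The per-second rate calculation in step 2 above can be sketched as follows. The counts and the 10,000 logs/s capability figure are illustrative assumptions, not documented specifications.

```shell
# Estimate log throughput from the statistics chart and compare it
# with an assumed per-replica collection capability.
count=6000000      # logs in the window, read from the bar chart (assumed)
window=300         # window length in seconds (5 minutes)
spec=10000         # assumed collection capability per replica
rate=$((count / window))
echo "rate: ${rate} logs/s"
if [ "$rate" -gt "$spec" ]; then
  echo "exceeds spec: add log-agent-otel-collector replicas or raise its memory limit"
fi
```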

Some Pod Information Is Missing During Log Collection Due to Excessive Node Load

When the Cloud Native Logging version is later than 1.5.0, some pod information, such as the pod ID and name, may be missing from container file logs or standard output logs.

Troubleshooting:

Go to the Add-ons page and click the name of Cloud Native Logging. Then, click the Pods tab, locate the log-agent-fluent-bit of the corresponding node, and choose More > View Log.

Figure 11 Viewing the log of the log-agent-fluent-bit instance

Select the fluent-bit container and search for the keyword "cannot increase buffer: current=512000 requested=*** max=512000" in the log.

Figure 12 Viewing the log of the fluent-bit container

Solution:

Run the kubectl edit deploy -n monitoring log-agent-log-operator command and add --kubernetes-buffer-size=20MB to the startup commands of the log-operator container. The default value is 16MB; estimate an appropriate value based on the total size of pod information on the node. A value of 0 indicates no limit.

If Cloud Native Logging is upgraded, you need to reconfigure kubernetes-buffer-size.

Figure 13 Modifying the command line parameter of the log-operator container

How Do I Change the Log Storage Period for Logging?

  1. Obtain the current cluster ID.

    Figure 14 Viewing the cluster ID

  2. Log in to the LTS console and query the log group and log stream by cluster ID.

    Figure 15 Querying the log group

  3. Locate the log group and click Modify to configure the log storage period.

    The log retention period affects log storage expenditures.

    Figure 16 Changing the log retention period