Logging FAQ
Indexes
- How Do I Disable Logging?
- All Components Except log-operator Are Not Ready
- There Is An Error in Stdout Logs of log-operator
- Container File Logs Cannot Be Collected When Docker Is Used as the Container Engine
- Logs Cannot Be Reported and "log's quota has full" Is Displayed in Stdout Logs of otel
- Container File Logs Cannot Be Collected Due to the Wildcard in the Collection Directory
- fluent-bit Pod Keeps Restarting
- Logs Cannot Be Collected When the Node OS Is Ubuntu 18.04
- Job Logs Cannot Be Collected
- Cloud Native Logging Is Running Normally, but Some Log Collection Policies Do Not Take Effect
- OOM Occurs on log-agent-otel-collector
- Some Pod Information Is Missing During Log Collection Due to Excessive Node Load
- How Do I Change the Log Storage Period on Logging?
- What Can I Do If the Log Group (Stream) in the Log Collection Policy Does Not Exist?
- Logs Cannot Be Collected After Pods Are Scheduled to CCI
How Do I Disable Logging?
Disabling container log and Kubernetes event collection
Method 1: Log in to the CCE console and click the cluster name to access the cluster console. In the navigation pane, choose Logging. In the upper right corner, click View Log Collection Policies. Then, locate and delete the corresponding log collection policy. The default-event policy reports Kubernetes events by default, and the default-stdout policy reports stdout logs by default.
Method 2: Access the Add-ons page and uninstall Cloud Native Logging. Note: Once you uninstall this add-on, it will no longer report Kubernetes events to AOM.
Disabling log collection for control plane components
Choose Logging > Control Plane Logs, click Configure Control Plane Component Logs, and deselect one or more components whose logs do not need to be collected.
Disabling control plane audit log collection
Choose Logging > Control Plane Audit Logs, click Configure Control Plane Audit Logs, and deselect the component whose logs do not need to be collected.
All Components Except log-operator Are Not Ready
Symptom: All components except log-operator are not ready, and an event indicates that a volume failed to be mounted to the node.
Solution: Check the logs of log-operator. During add-on installation, the configuration files required by other components are generated by log-operator. If the configuration files are invalid, all components cannot be started.
The log information is as follows:
MountVolume.SetUp failed for volume "otel-collector-config-vol":configmap "log-agent-otel-collector-config" not found
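To locate the invalid configuration, you can also view the stdout logs of log-operator directly with kubectl. This is a minimal sketch; the monitoring namespace and the log-agent-log-operator Deployment name follow the defaults referenced elsewhere in this FAQ and may differ in your installation.
# View the stdout logs of log-operator
kubectl logs -n monitoring deploy/log-agent-log-operator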
There Is An Error in Stdout Logs of log-operator
Symptom:
2023/05/05 12:17:20.799 [E] call 3 times failed, resion: create group failed, projectID: xxx, groupName: k8s-log-xxx, err: create groups status code: 400, response: {"error_code":"LTS.0104","error_msg":"Failed to create log group, the number of log groups exceeds the quota"}, url: https://lts.cn-north-4.myhuaweicloud.com/v2/xxx/groups, process will retry after 45s
Solution: On the LTS console, delete unnecessary log groups. For details about the log group quota, see Log Groups.
Container File Logs Cannot Be Collected When Docker Is Used as the Container Engine
Symptom:
A container file log path is configured in the log collection policy, the path is not mounted to the container, and Docker is used as the container engine. As a result, the logs cannot be collected.
Solution:
Check whether Device Mapper is used for the node where the workload resides. Device Mapper does not support text log collection. (This restriction is displayed when you create a log collection policy.) To check this, perform the following operations:
- Go to the node where the workload resides.
- Run the docker info | grep "Storage Driver" command.
- If the value of Storage Driver is Device Mapper, text logs cannot be collected.
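For reference, the following is a minimal check run on the node; the output line is an example of what an affected node may print.
# Check the Docker storage driver on the node where the workload resides
docker info | grep "Storage Driver"
# Example output on an affected node:
#   Storage Driver: devicemapper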
Logs Cannot Be Reported and "log's quota has full" Is Displayed in Stdout Logs of otel
Solution:
LTS provides a free log quota. If this error message is displayed, the free quota has been used up and log collection has stopped. To continue collecting logs (excess usage is billed), log in to the LTS console, choose Configuration Center in the navigation pane, and enable Continue to Collect Logs When the Free Quota Is Exceeded.
Container File Logs Cannot Be Collected Due to the Wildcard in the Collection Directory
Troubleshooting: Check the volume mounting status in the workload configuration. If a volume is attached to the data directory of a service container, this add-on cannot collect data from the parent directory of that mount point, so the collection directory must be set to the complete data directory. For example, if the data volume is attached to /var/log/service, logs cannot be collected from /var/log or /var/log/*; set the collection directory to /var/log/service instead.
Solution: If the log generation directory is /application/logs/{Application name}/*.log, attach the data volume to the /application/logs directory and set the collection directory in the log collection policy to /application/logs/*/*.log.
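The following sketch illustrates the example above. The pod name is a placeholder, and the exact paths depend on your application.
# Mount the data volume at the parent directory /application/logs (not at a subdirectory),
# then set the collection directory in the log collection policy to /application/logs/*/*.log.
# Verify from inside the container that the log files are visible under the mounted path:
kubectl exec -it <pod-name> -- sh -c 'ls /application/logs/*/*.log'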
fluent-bit Pod Keeps Restarting
Troubleshooting: Run the kubectl describe pod command. The output shows that the pod was restarted due to OOM. A large number of evicted pods remain on the node where the fluent-bit pod resides, occupying resources and causing the OOM.
Solution: Delete the evicted pods from the node.
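The following commands are a hedged sketch of finding and deleting the evicted (Failed) pods with kubectl; <node-name> is a placeholder for the node where the fluent-bit pod resides.
# List evicted/failed pods on the node, then delete them to release resources
kubectl get pods -A --field-selector spec.nodeName=<node-name>,status.phase=Failed
kubectl delete pods -A --field-selector spec.nodeName=<node-name>,status.phase=Failed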
Logs Cannot Be Collected When the Node OS Is Ubuntu 18.04
Troubleshooting: Restart the fluent-bit pod on the current node and check whether logs are properly collected. If they are not, check whether the log file to be collected already existed in the image when the image was packaged. In the container file log collection scenario, log files that already exist in the image are considered invalid and are not collected. This is a known community issue. For details, see Issues.
Solution: If you want to collect log files that already exist in the image, you are advised to configure a Post-Start command on the Lifecycle page when creating the workload. Use it to delete the original log files so that they are regenerated and can then be collected.
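A minimal sketch of such a Post-Start command, assuming the application writes its logs to /var/log/app (a hypothetical path):
# Remove the stale log files baked into the image so that they are regenerated after startup
/bin/sh -c 'rm -f /var/log/app/*.log'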
Job Logs Cannot Be Collected
Troubleshooting: Check the job lifetime. If it is less than 1 minute, the pod is destroyed before its logs can be collected.
Solution: Prolong the job lifetime.
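One simple way to prolong the lifetime, assuming the job runs a shell command (your-batch-command is a placeholder):
# Keep the pod alive for at least one minute after the work finishes
# so that the collector has time to pick up the logs
/bin/sh -c 'your-batch-command; sleep 60'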
Cloud Native Logging Is Running Normally, but Some Log Collection Policies Do Not Take Effect
Solution:
- If the log collection policy of the event type does not take effect or the add-on version is earlier than 1.5.0, check the stdout of the log-agent-otel-collector workload.
Go to the Add-ons page and click the name of Cloud Native Logging. Then, click the Pods tab, locate log-agent-otel-collector, and choose More > View Log.
Figure 7 Viewing the logs of the log-agent-otel-collector instance
- If a log collection policy of another type does not take effect and the add-on version is later than 1.5.0, check the logs of the log-agent-fluent-bit instance on the node where the container to be monitored resides.
Figure 8 Viewing the logs of the log-agent-fluent-bit instance
Select the fluent-bit container, search the logs for the keyword "fail to push {event/log} data via lts exporter", and view the error message. (A kubectl equivalent is sketched after this list.)
Figure 9 Viewing the log of the fluent-bit container
- If the error message "The log streamId does not exist." is displayed, the log group or log stream does not exist. In this case, choose Logging > View Log Collection Policies and either edit the log collection policy, or delete it and create a new one, to update the log group or log stream.
- For other errors, go to LTS to search for the error code and view the cause.
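As an alternative to the console steps above, you can search the fluent-bit container logs with kubectl. This is a sketch; the pod name is a placeholder and the monitoring namespace follows the add-on's default.
# Search the fluent-bit container on the affected node for export failures
kubectl logs -n monitoring <log-agent-fluent-bit-pod> -c fluent-bit | grep "fail to push"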
OOM Occurs on log-agent-otel-collector
Troubleshooting:
- View the stdout logs of the log-agent-otel-collector component to check whether any errors have occurred recently.
kubectl logs -n monitoring log-agent-otel-collector-xxx
If an error is reported, handle the error first and ensure that logs can be collected normally.
- If no error is reported recently and OOM still occurs, perform the following steps:
- Go to Logging, click the Global Log Query tab, and click Expand Log Statistics Chart to view the log statistics chart. If the reported log group and log stream are not the default ones, select them on the Global Log Query tab first.
Figure 10 Viewing log statistics
- Calculate the number of logs reported per second based on the bar chart in the statistics chart and check whether the number of logs exceeds the log collection performance specification.
If the number of logs exceeds the log collection performance specification, increase the number of log-agent-otel-collector replicas or raise its memory limit (see the command sketch after this list).
- If the CPU usage exceeds 90%, increase the CPU limit of log-agent-otel-collector.
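The following kubectl commands are a hedged sketch of these adjustments, assuming log-agent-otel-collector is a Deployment in the monitoring namespace; the replica count and resource values are examples only. If the add-on's resources are managed from the add-on configuration page, adjust them there instead so that the change is not overwritten.
# Scale out the collector
kubectl -n monitoring scale deploy log-agent-otel-collector --replicas=3
# Raise the memory (and, if needed, CPU) limits of the collector
kubectl -n monitoring set resources deploy log-agent-otel-collector --limits=memory=2Gi,cpu=2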
Some Pod Information Is Missing During Log Collection Due to Excessive Node Load
Symptom: When the Cloud Native Logging version is later than 1.5.0, some pod information, such as the pod ID and name, is missing from container file logs or stdout logs.
Troubleshooting:
Go to the Add-ons page and click the name of Cloud Native Logging. Then, click the Pods tab, locate the log-agent-fluent-bit pod on the corresponding node, and choose More > View Log.
Select the fluent-bit container and search for the keyword "cannot increase buffer: current=512000 requested=*** max=512000" in the log.
Solution:
Run the kubectl edit deploy -n monitoring log-agent-log-operator command and add --kubernetes-buffer-size=20MB to the startup parameters of the log-operator container. The default value is 16MB. Estimate an appropriate value based on the total size of the pod information on the node. 0 indicates no limit.
If Cloud Native Logging is upgraded, you need to reconfigure kubernetes-buffer-size.
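The following is a hedged sketch of the edit. Check how the container's startup parameters are defined in your cluster first, because the exact field (command or args) may differ between add-on versions.
# Inspect the current startup parameters of the log-operator container
kubectl -n monitoring get deploy log-agent-log-operator -o yaml | grep -B2 -A6 'log-operator'
# Edit the Deployment and append the flag to the container's startup parameters,
# for example: --kubernetes-buffer-size=20MB (0 means no limit)
kubectl -n monitoring edit deploy log-agent-log-operator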
How Do I Change the Log Storage Period on Logging?
- Obtain the cluster ID.
Figure 14 Viewing the cluster ID
- Log in to the LTS console. Query the log group and log stream by cluster ID.
Figure 15 Querying the log group
- Locate the log group and click Modify to configure the log storage period.
The log retention period affects log storage expenditures.
Figure 16 Changing the log retention period
What Can I Do If the Log Group (Stream) in the Log Collection Policy Does Not Exist?
- Scenario 1: The default log group (stream) does not exist.
Take Kubernetes events as an example: If the default log group (stream) does not exist, the Kubernetes Events page on the console displays a message indicating that the current log group (stream) does not exist. In this case, you can create the default log group (stream) again.
After the recreation, the ID of the default log group (stream) changes, and the existing log collection policy of the default log group (stream) does not take effect. In this case, you can rectify the fault by referring to Scenario 2.
Figure 17 Creating a default log group (stream)
- Scenario 2: The default log group (stream) exists but is inconsistent with the log collection policy.
- The log collection policy, for example, default-stdout, can be modified as follows:
- Log in to the CCE console and click the cluster name to access the cluster console. In the navigation pane, choose Logging.
- In the upper right corner, click View Log Collection Policies. Then, locate the log collection policy and click Edit in the Operation column.
- Select Custom Log Group/Log Stream and configure the default log group (stream).
Figure 18 Configuring the default log group (stream)
- If a log collection policy cannot be modified, for example, default-event, you need to re-create a log collection policy as follows:
- Log in to the CCE console and click the cluster name to access the cluster console. In the navigation pane, choose Logging.
- In the upper right corner, click View Log Collection Policies. Then, locate the log collection policy and click Delete in the Operation column.
- Click Create Log Collection Policy. Then, select Kubernetes events in Policy Template and click OK.
- Scenario 3: The custom log group (stream) does not exist.
CCE does not support the creation of non-default log groups (streams). You can create a non-default log group (stream) on the LTS console.
After the creation is complete, take the following steps:
- Log in to the CCE console and click the cluster name to access the cluster console. In the navigation pane, choose Logging.
- In the upper right corner, click View Log Collection Policies. Then, locate the log collection policy and click Edit in the Operation column.
- Select Custom Log Group/Log Stream and configure a log group (stream).
Figure 19 Configuring a custom log group (stream)
Logs Cannot Be Collected After Pods Are Scheduled to CCI
Symptom: After pods are scheduled to CCI by using a profile, their logs cannot be collected, even though the log collection policies appear normal on the CCE console.
Solution: Check whether the version of the CCE Cloud Bursting Engine for CCI add-on is earlier than 1.3.54. If it is, upgrade the add-on.