Collecting Container Logs Using Cloud Native Logging
The Cloud Native Logging add-on, based on Fluent Bit and OpenTelemetry, is provided for collecting logs and Kubernetes events. It supports CRD-based log collection policies and collects and forwards container stdout logs, container file logs, node file logs, and Kubernetes events in a cluster.
Constraints
- A maximum of 50 log rules can be created for each cluster.
- Cloud Native Logging cannot collect .gz, .tar, or .zip log files, and it cannot follow symbolic links to log files.
- If the node storage driver is Device Mapper, container file logs must be collected from the path where the data disk is attached to the node.
- If the container runtime is containerd, stdout logs cannot span multiple lines. (This does not apply to Cloud Native Logging 1.3.0 or later.)
- If a volume is mounted to the data directory of a service container, this add-on cannot collect data from the parent directory. In this case, configure the complete data directory as the collection path.
- If the lifetime of a container is less than 1 minute, logs cannot be collected in a timely manner. As a result, logs may be lost.
Billing
LTS does not charge you for creating log groups and offers a free quota for log collection every month. You pay only for log volume that exceeds the quota. For details, see Price Calculator.
Configuring Log Collection on the Console
- Enable log collection.
Enabling log collection during cluster creation
- Log in to the CCE console.
- Click Buy Cluster from the top menu.
- On the Select Add-on page, select Cloud Native Logging.
- Click Next: Add-on Configuration in the lower right corner and select the required logs.
- Container logs: A log collection policy named default-stdout will be created, which will report stdout logs from all namespaces to LTS.
- Kubernetes events: A log collection policy named default-event will be created, which will report Kubernetes events from all namespaces to LTS.
- Click Next: Confirm Configuration in the lower right corner. On the displayed page, click Submit.
Enabling log collection for an existing cluster
- Log in to the CCE console and click the cluster name to access the cluster console. In the navigation pane, choose Logging.
- (Optional) If you are not authorized, obtain required permissions first.
In the displayed dialog box, click Authorize.
Figure 1 Authorize
- Click Enable and wait for about 30 seconds until the log page is automatically displayed.
Figure 2 Enable
- Stdout logs: A log collection policy named default-stdout will be created, which will report stdout logs from all namespaces to LTS.
- Kubernetes events: A log collection policy named default-event will be created, which will report Kubernetes events from all namespaces to LTS.
- To collect add-on logs (NGINX Ingress Controller stdout), you need to install NGINX Ingress Controller and enable logging for the add-on.
After logging is enabled, a log collection policy named default-nginx-ingress will be created, which will report all nginx-ingress stdout logs with the collection label from all namespaces to LTS.
- View and configure log collection policies.
- On the CCE console, click the cluster name to access the cluster console. In the navigation pane, choose Logging.
- Click View Log Collection Policies in the upper right corner. All log collection policies reported to LTS are displayed.
Figure 3 Viewing log collection policies
- Click Create Log Collection Policy.
Policy Template: If a default log collection policy provided by CCE was not enabled when log collection was enabled, or if it has been deleted, you can use this option to create it again.
Custom Policy: You can use this option to create a custom log collection policy.
Figure 4 Custom policy
- To avoid mixed-up logs, you are advised to report different log types to different log streams in their log collection policies.
- The following are requirements for configuring the container and node file log paths:
- Log directory: Enter an absolute path, for example, /log. The path must start with a slash (/) and contain a maximum of 512 characters. Only uppercase letters, lowercase letters, digits, hyphens (-), underscores (_), slashes (/), asterisks (*), and question marks (?) are allowed.
- Log file name: It can contain only uppercase letters, lowercase letters, digits, hyphens (-), underscores (_), asterisks (*), question marks (?), and periods (.). Logs in the format of .gz, .tar, and .zip are not supported.
The directory and file names must be complete and support asterisks (*) and question marks (?) as wildcards. A maximum of three levels of directories can be matched using wildcards. The level-1 directory does not support wildcards. An asterisk (*) can match multiple characters. A question mark (?) can match only one character. For example:
- If the directory is /var/logs/* and the file name is *.log, the match expression is /var/logs/*/*.log, which matches any .log files in the level-1 subdirectories of /var/logs. It does not match .log files directly in /var/logs or in deeper, multi-level subdirectories of /var/logs.
- If the directory is /var/logs/app_* and the file name is *.log, any log files with the extension .log in all directories that match app_* in the /var/logs directory will be reported.
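The patterns above map onto the logPath and filePattern fields used by the YAML-based policies described later on this page. The following is a minimal sketch of the second pattern; the directory and file names in the comments are hypothetical and only illustrate which paths would be matched:
files:
- logPath: "/var/logs/app_*"    # Directory pattern: matches level-1 directories such as /var/logs/app_frontend.
  filePattern: "*.log"          # File name pattern: matches files such as access.log.
# Matched:     /var/logs/app_frontend/access.log
# Not matched: /var/logs/system.log (the file sits directly in /var/logs)
# Not matched: /var/logs/app_frontend/archive/old.log (one directory level deeper than the pattern)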
Table 1 Custom policy parameters
Log Type
Container standard output: used to collect container stdout logs. You can create a log collection policy by namespace, workload name, or instance label.
Container file log: used to collect text logs. You can specify a workload or instance label to create a log collection policy.
Node file log: used to collect logs from a node. Only one file path can be configured for a log collection policy.
Log Source
- For container stdout logs:
  - All containers: all containers in a specified namespace. If no namespace is specified, stdout logs of containers in all namespaces will be collected.
  - Workload: You can specify a workload and the containers in it. If no container is specified, logs of all containers in the workload will be collected.
  - Workload with target label: You can specify a workload by label and the containers in it. If no container is specified, logs of all containers in the workload will be collected.
- For container file logs:
  - Workload: You can specify a workload and the containers in it. If no container is specified, logs of all containers in the workload will be collected.
  - Workload with target label: You can specify a workload by label and the containers in it. If no container is specified, logs of all containers in the workload will be collected.
  You also need to specify the log collection path. For details, see the log path configuration requirements.
Collection Path
Used to configure the log collection path. For details, see the log path configuration requirements.
Log Format
- Single-line
Each log contains only one line of text. The newline character \n denotes the start of a new log.
- Multi-line
Some programs (for example, Java programs) print logs that span multiple lines. By default, logs are collected line by line. To report such logs as a single message, enable multi-line logging and configure a matching regular expression. When you select Multi-line, configure Log Matching Format (a YAML equivalent is sketched after this table).
For example, if each log starts with a date and occupies three lines, set Log Matching Format to a regular expression that matches the date, for example, \d{4}-\d{2}-\d{2} \d{2}\:\d{2}\:\d{2}.*.
The three lines starting with the date are then regarded as one log:
2022-01-01 00:00:00 Exception in thread "main" java.lang.RuntimeException: Something has gone wrong, aborting!
    at com.myproject.module.MyProject.badMethod(MyProject.java:22)
    at com.myproject.module.MyProject.oneMoreMethod(MyProject.java:18)
Report to LTS
This parameter is used to configure the log group and log stream for log reporting.
- Default log groups/log streams: The default log group (k8s-log-{Cluster ID}) and default log stream (stdout-{Cluster ID}) are automatically selected.
- Custom log groups/log streams: You can select any log group and log stream.
- Log Group: A log group is the basic unit for LTS to manage logs. If you do not have a log group, CCE prompts you to create one. The default name is k8s-log-{Cluster ID}, for example, k8s-log-bb7eaa87-07dd-11ed-ab6c-0255ac1001b3.
- Log Stream: A log stream is the basic unit for reading and writing logs. You can put different types of logs into different streams to ease management. When you install the add-on or create a log collection policy based on the policy template, the following log streams are automatically created:
- stdout-{Cluster ID} for container logs, for example, stdout-bb7eaa87-07dd-11ed-ab6c-0255ac1001b3
- event-{Cluster ID} for Kubernetes events, for example, event-bb7eaa87-07dd-11ed-ab6c-0255ac1001b3
- cceaddon-nginx-ingress-{Cluster ID} for NGINX Ingress Controller logs, for example, cceaddon-nginx-ingress-bb7eaa87-07dd-11ed-ab6c-0255ac1001b3.
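For reference, when you configure log collection using YAML (see Configuring Log Collection Using YAML below), the Multi-line setting described above corresponds to the processors field of a LogConfig resource. The following is a minimal sketch that reuses the date expression from the Log Format example; adapt the regular expression to your own log layout:
processors:
  type: multiline                                               # Collect multi-line logs as one message.
  multilineRegulation: '\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}.*'  # Lines matching this date pattern start a new log.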
- View the logs.
- On the CCE console, click the cluster name to access the cluster console. In the navigation pane, choose Logging.
- View different types of logs:
- Container Logs: displays all logs in the default log stream stdout-{Cluster ID} of the default log group k8s-log-{Cluster ID}. You can search for logs by workload.
Figure 5 Querying container logs
- Kubernetes Events: displays all Kubernetes events in the default log stream event-{Cluster ID} of the default log group k8s-log-{Cluster ID}.
- Control Plane Logs: displays all logs of components on the control plane in the default log stream {Component name}-{Cluster ID} of the default log group k8s-log-{Cluster ID}.
- Control Plane Audit Logs: displays all control plane audit logs in the default log stream audit-{Cluster ID} of the default log group k8s-log-{Cluster ID}.
- Global Log Query: You can view logs in the log streams of all log groups. You can specify a log stream to view the logs. By default, the default log group k8s-log-{Cluster ID} is selected. You can click the edit icon on the right of Switching Log Groups to switch to another log group.
Figure 6 Global log query
- Add-on Logs: displays the add-on logs in the default log group k8s-log-{Cluster ID}. You can view the cluster add-on logs.
- Click View Log Collection Policies in the upper right corner. Locate the log collection policy and click View Log to go to the log list.
Figure 7 Viewing logs
Configuring Log Collection Using YAML
The Cloud Native Logging add-on must be 1.6.1 or later.
- Use kubectl to connect to the cluster. For details, see Connecting to a Cluster Using kubectl.
- Create a YAML file named log-config.yaml. The file name can be customized.
vi log-config.yaml
The following examples are for your reference. For details about the parameters, see Table 2.
- Scenario 1: Collecting stdout logs of all workloads
apiVersion: logging.openvessel.io/v1
kind: LogConfig
metadata:
  name: test-log-01          # Change the rule name as needed.
  namespace: kube-system     # Namespace of the collection rule. The value is fixed at kube-system.
spec:
  inputDetail:               # Input configuration
    type: container_stdout   # Input type. container_stdout indicates stdout logs.
    containerStdout:         # Stdout log configuration. This parameter is valid only when type is set to container_stdout.
      allContainers: true    # Whether to collect the logs of all containers.
      namespaces: []         # Namespace list, which is of array type. This parameter is valid only when allContainers is set to true. The stdout logs of containers in the specified namespaces will be collected. An empty array indicates all namespaces.
  outputDetail:              # Output configuration
    type: LTS                # Output type. The value is fixed at LTS.
    LTS:
      ltsGroupID: abf5f0ad-627e-41cc-8d3f-61c9e1f57f5a    # LTS log group ID. The specified ID must exist.
      ltsStreamID: f7ed71e9-6b9d-4ba3-86e4-b1b9d22ef4fb   # LTS log stream ID. The specified ID must exist.
- Scenario 2: Collecting container file logs of a specified workload
apiVersion: logging.openvessel.io/v1
kind: LogConfig
metadata:
  name: test-log-02          # Change the rule name as needed.
  namespace: kube-system     # Namespace of the collection rule. The value is fixed at kube-system.
spec:
  inputDetail:               # Input configuration
    type: container_file     # Input type. container_file indicates container file logs.
    containerFile:           # Container file log configuration. This parameter is valid only when type is set to container_file.
      workloads:             # Modify the workload information as needed.
      - namespace: monitoring           # Namespace that the workload belongs to
        kind: Deployment                # Workload type. The value can be Deployment, DaemonSet, StatefulSet, Job, or CronJob.
        name: prometheus-lightweight    # Workload name
        container: prometheus           # Container name
        files:
        - logPath: "/var/log"           # Log directory, which is an absolute path.
          filePattern: "*.log"          # Log file name, which supports wildcard characters.
    processors:              # Multi-line log definition. If multi-line logs are not required, delete processors.
      type: multiline        # Log type, which is optional. The value can be singleline or multiline. The default value is singleline.
      multilineRegulation: '\d+:\d+:\d+.*?'   # Multi-line regular expression, which is optional. This field is valid only when type is set to multiline.
  outputDetail:              # Output configuration
    type: LTS                # Output type. The value is fixed at LTS.
    LTS:
      ltsGroupID: abf5f0ad-627e-41cc-8d3f-61c9e1f57f5a    # LTS log group ID. The specified ID must exist.
      ltsStreamID: f7ed71e9-6b9d-4ba3-86e4-b1b9d22ef4fb   # LTS log stream ID. The specified ID must exist.
- Scenario 3: Collecting container file logs of pods with specified labels
apiVersion: logging.openvessel.io/v1
kind: LogConfig
metadata:
  name: test-log-03          # Change the rule name as needed.
  namespace: kube-system     # Namespace of the collection rule. The value is fixed at kube-system.
spec:
  inputDetail:               # Input configuration
    type: container_file     # Input type. container_file indicates container file logs.
    containerFile:           # Container file log configuration. This parameter is valid only when type is set to container_file.
      podLabels:             # Modify the value based on the CRD description.
      - includeLabels:       # Label set. The logs of pods with the following labels will be collected. At least one label must be specified. Note that the pod label is not the label of the workload.
          foo: bar
        namespaces:          # Namespace list, which is of array type. An empty array indicates all namespaces.
        - monitoring
        - kube-system
        containers: []       # Container name list, which is of array type. An empty array indicates all containers.
        files:
        - logPath: "/var/log"    # Log directory, which is an absolute path.
          filePattern: "*.log"   # Log file name, which supports wildcard characters.
  outputDetail:              # Output configuration
    type: LTS                # Output type. The value is fixed at LTS.
    LTS:
      ltsGroupID: abf5f0ad-627e-41cc-8d3f-61c9e1f57f5a    # LTS log group ID. The specified ID must exist.
      ltsStreamID: f7ed71e9-6b9d-4ba3-86e4-b1b9d22ef4fb   # LTS log stream ID. The specified ID must exist.
- Scenario 4: Collecting node logs
apiVersion: logging.openvessel.io/v1
kind: LogConfig
metadata:
  name: test-log-04          # Change the rule name as needed.
  namespace: kube-system     # Namespace of the collection rule. The value is fixed at kube-system.
spec:
  inputDetail:               # Input configuration
    type: host_file          # Input type. host_file indicates node logs.
    hostFile:                # Node log configuration. This parameter is valid only when type is set to host_file.
      file:
        logPath: "/var/log"        # Log directory, which is an absolute path. Change it as needed.
        filePattern: "messages"    # Log file name, which supports wildcard characters and can be modified as needed.
  outputDetail:              # Output configuration
    type: LTS                # Output type. The value is fixed at LTS.
    LTS:
      ltsGroupID: abf5f0ad-627e-41cc-8d3f-61c9e1f57f5a    # LTS log group ID. The specified ID must exist.
      ltsStreamID: f7ed71e9-6b9d-4ba3-86e4-b1b9d22ef4fb   # LTS log stream ID. The specified ID must exist.
Table 2 Parameters
spec.inputDetail.type (String)
Input type. The options are as follows:
- container_stdout: indicates stdout logs. This field must be used together with the containerStdout field.
- container_file: indicates container file logs. This field must be used together with the containerFile field.
- host_file: indicates node logs. This field must be used together with the hostFile field.
spec.inputDetail.containerStdout (Object)
Stdout log configuration. This parameter is valid only when type is set to container_stdout.
It contains the following fields:
- allContainers: Whether to collect the logs of all containers. If the value is true, the logs of all containers will be collected. In this case, you need to specify the namespaces field. If the value is false, the logs of specified workloads will be collected. In this case, you need to specify the workloads field.
- namespaces: namespace list, which is of array type and is valid only when allContainers is set to true. The stdout logs of containers in specified namespaces will be collected. An empty array indicates that the stdout logs of containers in all namespaces will be collected.
- workloads: workload list, which is of array type and is valid only when allContainers is set to false.
- namespace: namespace that a workload belongs to.
- kind: workload type. The value can be Deployment, DaemonSet, StatefulSet, Job, or CronJob.
- name: workload name.
- containers: container name list, which is of array type. An empty array indicates all containers.
- podLabels: pod labels, which is of array type and is valid only when allContainers is set to false and workloads is left empty.
- includeLabels: label set. The logs of pods with the following labels will be collected. At least one label must be specified. Note that the pod label is not the label of the workload.
- namespaces: namespace list, which is of array type. An empty array indicates all namespaces.
- containers: container name list, which is of array type. An empty array indicates all containers.
Example 1: Collecting the stdout logs of all containers in a namespace
...
spec:
  inputDetail:
    type: container_stdout
    containerStdout:
      allContainers: true
      namespaces:
      - monitoring
...
Example 2: Collecting the stdout logs of a workload
...
spec:
  inputDetail:
    type: container_stdout
    containerStdout:
      allContainers: false
      workloads:
      - namespace: monitoring
        kind: Deployment
        name: prometheus-lightweight
        containers:
        - prometheus
...
Example 3: Collecting the stdout logs of a pod
...
spec:
  inputDetail:
    type: container_stdout
    containerStdout:
      allContainers: false
      workloads: []
      podLabels:
      - includeLabels:
          foo: bar
        namespaces:
        - monitoring
        containers: []
...
spec.inputDetail.containerFile (Object)
Container file log configuration. This parameter is valid only when type is set to container_file.
It contains the following fields:
- workloads: workload list, which is of array type.
- namespace: namespace that a workload belongs to.
- kind: workload type. The value can be Deployment, DaemonSet, StatefulSet, Job, or CronJob.
- name: workload name.
- container: container name.
- files: file list, which is of array type and contains the logPath and filePattern fields.
- logPath: log directory, which is an absolute path, for example, /var/log.
- filePattern: log file name, which supports wildcard characters, for example, *.log.
- podLabels: pod labels, which is of array type and is valid only when workloads is left empty.
- includeLabels: label set. The logs of pods with the following labels will be collected. At least one label must be specified. Note that the pod label is not the label of the workload.
- namespaces: namespace list, which is of array type. An empty array indicates all namespaces.
- containers: container name list, which is of array type. An empty array indicates all containers.
- files: file list, which is of array type and contains the logPath and filePattern fields.
- logPath: log directory, which is an absolute path, for example, /var/log.
- filePattern: log file name, which supports wildcard characters, for example, *.log.
Example 1: Collecting container file logs of a workload
...
spec:
  inputDetail:
    type: container_file
    containerFile:
      workloads:
      - namespace: monitoring
        kind: Deployment
        name: prometheus-lightweight
        container: prometheus
        files:
        - logPath: "/var/log"
          filePattern: "*.log"
...
Example 2: Collecting container file logs of a pod
...
spec:
  inputDetail:
    type: container_file
    containerFile:
      workloads: []
      podLabels:
      - includeLabels:
          foo: bar
        namespaces:
        - monitoring
        containers: []
        files:
        - logPath: "/var/log"
          filePattern: "*.log"
...
spec.inputDetail.hostFile (Object)
Node log configuration. This parameter is valid only when type is set to host_file.
It contains the following field:
- file: file configuration, which contains the logPath and filePattern fields.
- logPath: log directory, which is an absolute path, for example, /var/log.
- filePattern: log file name, which supports wildcard characters, for example, *.log.
...
spec:
  inputDetail:
    type: host_file
    hostFile:
      file:
        logPath: "/var/log"
        filePattern: "*.log"
...
spec.inputDetail.processors (Object)
Multi-line log configuration. It contains the following fields:
- type: log type, which is optional. The value can be singleline or multiline. The default value is singleline.
- multilineRegulation: multi-line regular expression. This parameter is optional and is valid only when type is set to multiline.
...
    processors:
      type: multiline
      multilineRegulation: '\d+:\d+:\d+.*?'
...
spec.outputDetail.type (String)
Output type. The value is fixed at LTS.
spec.outputDetail.LTS (Object)
The following fields are supported:
- ltsGroupID: LTS log group ID. The specified ID must exist.
- ltsStreamID: LTS log stream ID. The specified ID must exist.
Either ltsStreamID or ltsStreamName must be configured.
- ltsStreamName: LTS log stream name. If the specified log stream name does not exist, it will be automatically created.
Either ltsStreamID or ltsStreamName must be configured.
- ltsStreamCreateParam: log stream creation parameter. This parameter is optional and is valid only when ltsStreamName is specified and an LTS log stream is automatically created.
- enterpriseProjectID: Enterprise project ID of the LTS log group. This field is optional. If this field is not specified, the ID of the enterprise project that the cluster belongs to will be used.
Example 1: Specifying an existing log group ID and log stream ID
...
    LTS:
      ltsGroupID: *****
      ltsStreamID: *****
Example 2: Specifying an existing log group ID and an existing log stream name
...
    LTS:
      ltsGroupID: *****
      ltsStreamName: test-stream-name-1
Example 3: Specifying an existing log group ID and a new log stream name to automatically create a log stream
...
    LTS:
      ltsGroupID: *****
      ltsStreamName: test-stream-name-2
      ltsStreamCreateParam:
        enterpriseProjectID: ""
- Create a LogConfig.
kubectl create -f log-config.yaml
If information similar to the following is displayed, the LogConfig has been created:
logconfig.logging.openvessel.io/test-log-xx created
- Check the created LogConfig.
kubectl get LogConfig -n kube-system
If information similar to the following is displayed, the log collection policy has been created:
NAME          AGE
test-log-xx   30s
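To adjust a policy created in this way, modify log-config.yaml and re-apply it; to remove the policy, delete the LogConfig resource. The following standard kubectl commands are a minimal sketch, where test-log-xx is a placeholder for your rule name:
kubectl apply -f log-config.yaml
kubectl delete LogConfig test-log-xx -n kube-system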