Updated on 2024-02-23 GMT+08:00

Collecting Pod Logs

This section describes how to collect logs from a pod. You can specify log files in user-defined paths in a container, process the logs based on user-defined policies, and report them to the Kafka log center.

Resource Restriction

You are advised to reserve 50 MB memory for Fluent Bit.

Constraints

  • Currently, container stdout logs cannot be collected and reported to Kafka.
  • Log rotation is not supported. You need to control the log file size by yourself.
  • A single log record larger than 250 KB cannot be collected.
  • Logs of the specified system, device, cgroup, tmpfs, and localdir mount directories cannot be collected.
  • The names of the log files to be collected in the same container must be unique. If there are multiple files with the same name, the collector collects only the first one it detects.
  • If the name of a log file exceeds 190 characters, the log file will not be collected.
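Because log rotation is not supported, the application or a lightweight loop in the container must cap the file size itself. The following is a minimal sketch, assuming the application writes to /var/log/app.log; the entrypoint, size threshold, and check interval are illustrative, not part of this feature:

```yaml
containers:
  - name: container-0
    image: your-app-image      # assumption: this app writes to /var/log/app.log
    command: ["/bin/sh", "-c"]
    args:
      - |
        /app/start &           # hypothetical application entrypoint
        # Truncate in place (": >") when the file exceeds ~10 MB. Truncation
        # keeps the same path, so the collector can keep reading the file.
        while true; do
          size=$(stat -c%s /var/log/app.log 2>/dev/null || echo 0)
          [ "$size" -gt 10485760 ] && : > /var/log/app.log
          sleep 60
        done
```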

Basic Configuration

Fluent Bit is an open-source, multi-platform log processor. Its configuration consists of the SERVICE, INPUT, FILTER, PARSER, and OUTPUT modules. Currently, you can only define the destination of the log content in the OUTPUT module.

You can use the following ConfigMap to have Fluent Bit process logs and send them to Kafka.

Constraints

  • The size of output.conf must be less than 1 MB.
  • [OUTPUT] is the outermost parameter and must not be indented. Configuration items below it are indented by four spaces.

Configure the following parameters in the ConfigMap:

kind: ConfigMap
apiVersion: v1
metadata:
  name: cci-logging-conf
  labels:
    logconf.k8s.io/discovery: "true"
data:
  output.conf: |
    [OUTPUT]
        Name       kafka
        Match      *
        Brokers    192.168.1.3:9092
        Topics     test
Table 1 Parameter description

  • logconf.k8s.io/discovery
    Description: Labels the ConfigMap as a Fluent Bit log configuration file.
    Mandatory: Yes
    Constraint: The value must be true.
  • Name
    Description: Add-on name.
    Mandatory: Yes
    Constraint: The value must be kafka. Currently, only the Kafka add-on is supported.
  • Match
    Description: Matches the tag of the transferred record. The asterisk (*) is used as a wildcard.
    Mandatory: No
    Constraint: If this parameter is set, the value must be *.
  • Brokers
    Description: Kafka broker address. Multiple broker addresses can be configured at the same time.
    Mandatory: Yes
    Constraint: Example: 192.168.1.3:9092,192.168.1.4:9092,192.168.1.5:9092
  • Topics
    Description: Log topic.
    Mandatory: No
    Constraint: The default value is fluent-bit. The specified topic must already exist.
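For instance, Brokers accepts a comma-separated list, and omitting Topics sends records to the default fluent-bit topic. A sketch of the output.conf fragment; the addresses below are placeholders, and the default topic must already exist:

```yaml
output.conf: |
    [OUTPUT]
        Name       kafka
        Match      *
        # Placeholder addresses; list every broker that may accept the connection.
        Brokers    192.168.1.3:9092,192.168.1.4:9092,192.168.1.5:9092
        # Topics is omitted, so logs go to the default topic fluent-bit.
```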

You can configure annotations on the pod to specify the log files to collect and the corresponding log output configuration file.

kind: Deployment
apiVersion: apps/v1
metadata:
  name: kafka-dey
spec:
  replicas: 1
  selector:
    matchLabels:
      app: kafka
  template:
    metadata:
      labels:
        app: kafka
      annotations:
        logpath.k8s.io/container-0: /var/log/*.log;/var/paas/sys/log/virtual-kubelet.log
        logconf.k8s.io/fluent-bit-configmap-reference: cci-logging-conf
    spec:
      containers:
        - name: container-0
          image: 'nginx:alpine'  
          resources:
            limits:
              cpu: 1000m
              memory: 2048Mi
            requests:
              cpu: 1000m
              memory: 2048Mi
      imagePullSecrets:
        - name: default-secret
Table 2 Parameter description

  • logpath.k8s.io/$containerName
    Function: Configures the collection paths for the container. $containerName is a container name variable.
    Constraint: Multiple paths can be configured. Each path must be an absolute path starting with a slash (/), and paths are separated by semicolons (;). Only complete log file paths or file names with the wildcard (*) are supported. If a file name contains the wildcard (*), the directory where the file is located must exist when the container is started. A file name can be at most 190 characters long.
  • logconf.k8s.io/fluent-bit-configmap-reference
    Function: Specifies the ConfigMap that holds the Fluent Bit log collection configuration.
    Constraint: The ConfigMap must exist and meet the requirements for configuring Fluent Bit.
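To collect logs from several containers in one pod, add one logpath annotation per container. A sketch, assuming containers named container-0 and container-1 and illustrative paths:

```yaml
metadata:
  annotations:
    # One annotation per container; paths are absolute and separated by ";".
    logpath.k8s.io/container-0: /var/log/app/*.log;/var/log/access.log
    logpath.k8s.io/container-1: /var/log/sidecar.log
    logconf.k8s.io/fluent-bit-configmap-reference: cci-logging-conf
```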

Advanced Configuration

A secret is a resource object for encrypted storage. You can save the authentication information, certificates, and private keys in a secret for configuring sensitive data such as passwords, tokens, and keys.

apiVersion: v1
kind: Secret
metadata:
  name: cci-sfs-kafka-tls
type: Opaque
data:
   ca.crt: ...
   server.crt: ...
   server.key: ...
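The values under data must be base64-encoded. As an alternative, Kubernetes also accepts plain-text values under stringData and encodes them on admission; a sketch with the certificate bodies elided:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: cci-sfs-kafka-tls
type: Opaque
stringData:
  # server.crt and server.key are added the same way as ca.crt.
  ca.crt: |
    -----BEGIN CERTIFICATE-----
    ...
    -----END CERTIFICATE-----
```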

You can configure SSL parameters to implement encrypted secure connections. Files such as certificates can be referenced by the sandbox volume feature.

kind: ConfigMap
apiVersion: v1
metadata:
  name: cci-logging-conf-tls
  labels:
    logconf.k8s.io/discovery: "true"
data:
  output.conf: |
    [OUTPUT]
        Name        kafka
        Match       *
        Brokers     192.168.1.3:9092
        Topics      test
        rdkafka.security.protocol ssl
        rdkafka.ssl.certificate.location ${sandbox_volume_kafkatls}/server.crt
        rdkafka.ssl.key.location ${sandbox_volume_kafkatls}/server.key
        rdkafka.ssl.ca.location ${sandbox_volume_kafkatls}/ca.crt
        rdkafka.enable.ssl.certificate.verification true
        rdkafka.request.required.acks 1
Table 3 Parameter description

  • rdkafka.security.protocol
    Description: Protocol used to communicate with the broker.
    Mandatory: Yes, if SSL authentication is enabled.
    Available value: ssl
  • rdkafka.ssl.certificate.location
    Description: Path for storing the SSL public certificate.
    Mandatory: Yes, if SSL authentication is enabled.
    Available value: ${sandbox_volume_${VOLUME_NAME}}/some.cert
  • rdkafka.ssl.key.location
    Description: Path for storing the SSL private key.
    Mandatory: Yes, if SSL authentication is enabled.
    Available value: ${sandbox_volume_${VOLUME_NAME}}/some.key
  • rdkafka.ssl.ca.location
    Description: File path or directory of the CA certificate.
    Mandatory: Yes, if server certificate verification is enabled.
    Available value: ${sandbox_volume_${VOLUME_NAME}}/some-bundle.crt
  • rdkafka.enable.ssl.certificate.verification
    Description: Whether to verify the server certificate.
    Mandatory: No
    Available value: true (default) or false
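If the broker presents a certificate that cannot be verified (for example, a self-signed certificate in a test setup), server verification can be switched off. This weakens transport security, so the fragment below is a sketch for test environments only:

```yaml
output.conf: |
    [OUTPUT]
        Name        kafka
        Match       *
        Brokers     192.168.1.3:9092
        Topics      test
        rdkafka.security.protocol ssl
        # Disable server certificate verification (test environments only).
        rdkafka.enable.ssl.certificate.verification false
```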

You can configure the volume on the pod and configure annotations to specify the sandbox volume and the corresponding log output configuration file.

kind: Deployment
apiVersion: apps/v1
metadata:
  name: kafka-tls
spec:
  replicas: 1
  selector:
    matchLabels:
      app: kafka
  template:
    metadata:
      labels:
        app: kafka
      annotations:
        logpath.k8s.io/container-0: /var/log/*.log;/var/paas/sys/log/virtual-kubelet.log
        logconf.k8s.io/fluent-bit-configmap-reference: cci-logging-conf       
        sandbox-volume.openvessel.io/volume-names: kafkatls
    spec: 
      volumes: 
        - name: kafkatls
          secret:
            secretName: cci-sfs-kafka-tls
      containers:
        - name: container-0
          image: 'nginx:alpine'  
          resources:
            limits:
              cpu: 1000m
              memory: 2048Mi
            requests:
              cpu: 1000m
              memory: 2048Mi
          volumeMounts:
            - name: kafkatls
              mountPath: /tmp/sfs
      imagePullSecrets:
        - name: default-secret

For details about Kafka configuration items, see https://github.com/edenhill/librdkafka/blob/master/CONFIGURATION.md.