Collecting Pod Logs

Updated on 2024-11-01 GMT+08:00

This section describes how to collect logs from a pod. You can collect log files from user-defined paths in a container, process them based on user-defined policies, and report them to a Kafka log center.

Resource Restriction

You are advised to reserve 50 MiB of memory for Fluent Bit.
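
A sizing sketch only (how the pod's memory is shared with Fluent Bit is an assumption here, not something this page specifies): keep about 50 MiB of the pod's memory unclaimed by the application.

resources:
  requests:
    memory: 2048Mi   # total pod memory
  limits:
    memory: 2048Mi
# Hypothetical accounting: keep the application's own usage below roughly
# 1998 MiB so that about 50 MiB stays free for Fluent Bit.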

Constraints

  • Logs in symbolic (soft) link paths in containers cannot be collected.
  • Container stdout logs cannot be collected or reported to Kafka.
  • Log rotation is not supported. You need to control the log file size yourself (see the sketch after this list).
  • A log larger than 250 KB cannot be collected.
  • Logs cannot be collected from a directory to which a system, device, cgroup, tmpfs, or localdir volume is mounted.
  • If a log file name in a container exceeds 190 characters, the file is not collected. If several log file names are 180 to 190 characters long, only the first file is collected.
  • If log collection is delayed by network latency or high resource usage when a container is stopped, some logs generated before the container stopped may be lost.
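
Because rotation is not performed for you, one way to bound a log file is to truncate it from inside the container. The following Deployment fragment is a minimal sketch, assuming an application that writes /var/log/app.log in append mode; the image, path, threshold, and interval are illustrative, not part of the service:

      containers:
        - name: container-0
          image: 'nginx:alpine'
          command: ["/bin/sh", "-c"]
          args:
            - |
              # Illustrative only: truncate the log whenever it exceeds 10 MiB,
              # because the collector does not rotate logs.
              ( while true; do
                  if [ -f /var/log/app.log ] && [ "$(wc -c < /var/log/app.log)" -gt 10485760 ]; then
                    : > /var/log/app.log
                  fi
                  sleep 60
                done ) &
              exec nginx -g 'daemon off;'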

Basic Configuration

Fluent Bit is an open-source, multi-platform log processor. Its configuration consists of the SERVICE, INPUT, FILTER, PARSER, and OUTPUT modules. Currently, you can customize only the OUTPUT module, which defines the destination of the log content.

You can use the following ConfigMap to have Fluent Bit report the processed logs to Kafka.

Constraints

  • The size of output.conf must be less than 1 MB.
  • [OUTPUT] is the outermost parameter and must not be indented. The configuration items below it must be indented by four spaces.

Configure the following parameters in your main configuration file:

kind: ConfigMap
apiVersion: v1
metadata:
  name: cci-logging-conf
  labels:
    logconf.k8s.io/discovery: "true"
data:
  output.conf: |
    [OUTPUT]
        Name       kafka
        Match      *
        Brokers    192.168.1.3:9092
        Topics     test
Table 1 Parameter description

  • logconf.k8s.io/discovery
    Description: Labels the ConfigMap as a Fluent Bit log configuration file.
    Mandatory: Yes
    Constraint: The value must be true.

  • Name
    Description: Add-on name.
    Mandatory: Yes
    Constraint: The value must be kafka. Currently, only the Kafka add-on is supported.

  • Match
    Description: Matches the tag of the transferred records. The asterisk (*) is used as a wildcard.
    Mandatory: No
    Constraint: If this parameter is set, the value must be *.

  • Brokers
    Description: Kafka broker addresses. Multiple broker addresses can be configured at the same time, separated by commas.
    Mandatory: Yes
    Constraint: Example: 192.168.1.3:9092,192.168.1.4:9092,192.168.1.5:9092

  • Topics
    Description: Log topic.
    Mandatory: No
    Constraint: The default value is fluent-bit. The specified topic must already exist on the Kafka side.
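
As a sketch combining these parameters, the following output.conf fragment targets the three example broker addresses from the table and omits Topics, so logs go to the default fluent-bit topic (which must already exist):

    [OUTPUT]
        Name       kafka
        Match      *
        Brokers    192.168.1.3:9092,192.168.1.4:9092,192.168.1.5:9092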

You can then configure annotations on the pod to specify the log collection paths and reference the log output ConfigMap.

kind: Deployment
apiVersion: apps/v1
metadata:
  name: kafka-dep
spec:
  replicas: 1
  selector:
    matchLabels:
      app: kafka
  template:
    metadata:
      labels:
        app: kafka
      annotations:
        logpath.k8s.io/container-0: /var/log/*.log;/var/paas/sys/log/virtual-kubelet.log
        logconf.k8s.io/fluent-bit-configmap-reference: cci-logging-conf
    spec:
      containers:
        - name: container-0
          image: 'nginx:alpine'  
          resources:
            limits:
              cpu: 1000m
              memory: 2048Mi
            requests:
              cpu: 1000m
              memory: 2048Mi
      imagePullSecrets:
        - name: default-secret
Table 2 Parameter description

  • logpath.k8s.io/$containerName
    Function: Specifies the log files to collect for a container. $containerName is the name of the container in the pod.
    Constraints: Multiple paths can be configured, separated by semicolons (;). Each path must be an absolute path starting with a slash (/). Only complete log file paths or file names containing the wildcard (*) are supported. If a file name contains the wildcard (*), the directory containing the file must already exist when the container starts. A file name can contain a maximum of 190 characters.

  • logconf.k8s.io/fluent-bit-configmap-reference
    Function: References the ConfigMap that contains the Fluent Bit log output configuration.
    Constraint: The ConfigMap must exist and meet the Fluent Bit configuration requirements described above.
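
Because the annotation key embeds the container name, a pod with several containers carries one logpath annotation per container. A brief sketch (the container names app and sidecar and their paths are illustrative):

      annotations:
        # One logpath annotation per container; the key suffix is the container name.
        logpath.k8s.io/app: /var/log/app/*.log
        logpath.k8s.io/sidecar: /opt/logs/sidecar.log
        logconf.k8s.io/fluent-bit-configmap-reference: cci-logging-conf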

Advanced Configuration

A secret is a resource object for encrypted storage. You can use secrets to hold sensitive data such as passwords, tokens, certificates, and private keys. In this example, the certificates and private key used for the TLS connection to Kafka are stored in a secret.

apiVersion: v1
kind: Secret
metadata:
  name: cci-sfs-kafka-tls
type: Opaque
data:
  ca.crt: ...
  client.crt: ...
  client.key: ...
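
The values under data must be base64-encoded. If you prefer to supply PEM text directly, the standard Kubernetes stringData field accepts plain text and encodes it for you; a sketch with the certificate contents elided:

apiVersion: v1
kind: Secret
metadata:
  name: cci-sfs-kafka-tls
type: Opaque
stringData:
  ca.crt: |
    -----BEGIN CERTIFICATE-----
    ...
    -----END CERTIFICATE-----
  client.crt: |
    -----BEGIN CERTIFICATE-----
    ...
    -----END CERTIFICATE-----
  client.key: |
    -----BEGIN PRIVATE KEY-----
    ...
    -----END PRIVATE KEY-----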

You can configure SSL parameters to establish encrypted, secure connections to Kafka. Files such as certificates can be referenced through the sandbox volume feature.

kind: ConfigMap
apiVersion: v1
metadata:
  name: cci-logging-conf-tls
  labels:
    logconf.k8s.io/discovery: "true"
data:
  output.conf: |
    [OUTPUT]
        Name        kafka
        Match       *
        Brokers     192.168.1.3:9092
        Topics      test
        rdkafka.security.protocol ssl
        rdkafka.ssl.certificate.location ${sandbox_volume_kafkatls}/client.crt
        rdkafka.ssl.key.location ${sandbox_volume_kafkatls}/client.key
        rdkafka.ssl.ca.location ${sandbox_volume_kafkatls}/ca.crt
        rdkafka.enable.ssl.certificate.verification true
        rdkafka.request.required.acks 1
Table 3 Parameter description

  • rdkafka.security.protocol
    Description: Protocol used to communicate with the broker.
    Mandatory: Yes, if SSL authentication is enabled.
    Available value: ssl

  • rdkafka.ssl.certificate.location
    Description: Path to the client's SSL public key (certificate).
    Mandatory: Yes, if SSL authentication is enabled.
    Available value: ${sandbox_volume_${VOLUME_NAME}}/some.cert

  • rdkafka.ssl.key.location
    Description: Path to the client's SSL private key.
    Mandatory: Yes, if SSL authentication is enabled.
    Available value: ${sandbox_volume_${VOLUME_NAME}}/some.key

  • rdkafka.ssl.ca.location
    Description: File or directory path of the CA certificate.
    Mandatory: Yes, if server certificate verification is enabled.
    Available value: ${sandbox_volume_${VOLUME_NAME}}/some-bundle.crt

  • rdkafka.enable.ssl.certificate.verification
    Description: Whether to verify the server certificate.
    Mandatory: No
    Available value: true (default) or false

The example above also sets rdkafka.request.required.acks to 1, a standard librdkafka producer option: a value of 1 means only the partition leader must acknowledge each message before it is considered delivered.
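
For a quick connectivity test against a broker whose certificate cannot be verified (for example, a self-signed certificate with no CA bundle at hand), a variant of the OUTPUT section can disable server certificate verification. This weakens security and is a sketch for test environments only:

    [OUTPUT]
        Name        kafka
        Match       *
        Brokers     192.168.1.3:9092
        Topics      test
        rdkafka.security.protocol ssl
        rdkafka.ssl.certificate.location ${sandbox_volume_kafkatls}/client.crt
        rdkafka.ssl.key.location ${sandbox_volume_kafkatls}/client.key
        rdkafka.enable.ssl.certificate.verification false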

You can then mount the secret as a volume on the pod and use annotations to declare it as a sandbox volume and to reference the log output ConfigMap. The volume name listed in sandbox-volume.openvessel.io/volume-names (kafkatls in this example) determines the variable ${sandbox_volume_kafkatls} referenced in the ConfigMap above.

kind: Deployment
apiVersion: apps/v1
metadata:
  name: kafka-tls
spec:
  replicas: 1
  selector:
    matchLabels:
      app: kafka
  template:
    metadata:
      labels:
        app: kafka
      annotations:
        logpath.k8s.io/container-0: /var/log/*.log;/var/paas/sys/log/virtual-kubelet.log
        logconf.k8s.io/fluent-bit-configmap-reference: cci-logging-conf-tls
        sandbox-volume.openvessel.io/volume-names: kafkatls
    spec:
      volumes:
        - name: kafkatls
          secret:
            secretName: cci-sfs-kafka-tls
      containers:
        - name: container-0
          image: 'nginx:alpine'
          resources:
            limits:
              cpu: 1000m
              memory: 2048Mi
            requests:
              cpu: 1000m
              memory: 2048Mi
          volumeMounts:
            - name: kafkatls
              mountPath: /tmp/sfs
      imagePullSecrets:
        - name: default-secret

For details about Kafka configuration items, see https://github.com/edenhill/librdkafka/blob/master/CONFIGURATION.md.
