
Increasing the Listening Queue Length by Configuring Container Kernel Parameters

Application Scenarios

The kernel parameter net.core.somaxconn controls the maximum length of the listening queue (backlog) and defaults to 128. If the number of pending connection requests exceeds this limit when services are busy, new connection requests are rejected. To avoid this issue, you can increase the listening queue length by adjusting net.core.somaxconn.
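
You can check the value currently in effect on a node or inside a container, for example:

sysctl net.core.somaxconn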

Procedure

  1. Modify kubelet configurations.

    Modifying the kubelet configurations of a node pool

    1. Log in to the CCE console and click the cluster name to access the cluster console.
    2. Locate the row containing the target node pool and choose More > Manage.
      Figure 1 Managing node pool configurations
    3. Modify the kubelet configuration parameters and add net.core.somaxconn to Allowed unsafe sysctls.
      Figure 2 Modifying kubelet parameters

    Modifying the kubelet parameters of a node (not recommended)

    1. Log in to the target node.
    2. Edit the /opt/cloud/cce/kubernetes/kubelet/kubelet file. In versions earlier than 1.15, the file is /var/paas/kubernetes/kubelet/kubelet.

      Add the following option to allow net.core.somaxconn as an unsafe sysctl:

      --allowed-unsafe-sysctls=net.core.somaxconn

    3. Restart kubelet.

      systemctl restart kubelet

      Check the kubelet status.

      systemctl status kubelet
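
      You can also confirm that the flag has been written to the kubelet configuration file, for example:

      grep allowed-unsafe-sysctls /opt/cloud/cce/kubernetes/kubelet/kubelet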

    If the kubelet parameters are modified directly on a node, the changes will be reverted when the cluster is upgraded to a later version. Exercise caution when performing this operation.

  2. (Required only for clusters earlier than v1.25) Create a pod security policy.

    In CCE clusters earlier than v1.25, kube-apiserver has pod security policies enabled. The sysctl configuration takes effect only after net.core.somaxconn is added to allowedUnsafeSysctls in the pod security policy. For details about CCE security policies, see Configuring a Pod Security Policy.

    The following is an example:
    apiVersion: policy/v1beta1
    kind: PodSecurityPolicy
    metadata:
      annotations:
        seccomp.security.alpha.kubernetes.io/allowedProfileNames: '*'
      name: sysctl-psp
    spec:
      allowedUnsafeSysctls:
      - net.core.somaxconn
      allowPrivilegeEscalation: true
      allowedCapabilities:
      - '*'
      fsGroup:
        rule: RunAsAny
      hostIPC: true
      hostNetwork: true
      hostPID: true
      hostPorts:
      - max: 65535
        min: 0
      privileged: true
      runAsGroup:
        rule: RunAsAny
      runAsUser:
        rule: RunAsAny
      seLinux:
        rule: RunAsAny
      supplementalGroups:
        rule: RunAsAny
      volumes:
      - '*'
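
    Assuming the policy above is saved as sysctl-psp.yaml (the file name is only an example), you can create and check it with kubectl:

    # sysctl-psp.yaml is an example file name.
    kubectl apply -f sysctl-psp.yaml
    kubectl get psp sysctl-psp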

    After creating the pod security policy sysctl-psp, grant permission to use it through RBAC.

    The following is an example:

    kind: ClusterRole
    apiVersion: rbac.authorization.k8s.io/v1
    metadata:
      name: sysctl-psp
    rules:
      - apiGroups:
          - "*"
        resources:
          - podsecuritypolicies
        resourceNames:
          - sysctl-psp
        verbs:
          - use
    
    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRoleBinding
    metadata:
      name: sysctl-psp
    roleRef:
      kind: ClusterRole
      name: sysctl-psp
      apiGroup: rbac.authorization.k8s.io
    subjects:
    - kind: Group
      name: system:authenticated
      apiGroup: rbac.authorization.k8s.io
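
    Assuming the ClusterRole and ClusterRoleBinding above are saved as sysctl-psp-rbac.yaml (the file name is only an example), you can create and check them with kubectl:

    # sysctl-psp-rbac.yaml is an example file name.
    kubectl apply -f sysctl-psp-rbac.yaml
    kubectl get clusterrole,clusterrolebinding sysctl-psp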

  3. Create a workload, configure the kernel parameter in the pod's security context, and ensure that the workload has node affinity with the node where net.core.somaxconn was allowed in step 1.

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      annotations:
        description: ''
      labels:
        appgroup: ''
      name: test1
      namespace: default
    spec:
      replicas: 1
      selector:
        matchLabels:
          app: test1
      template:
        metadata:
          annotations:
            metrics.alpha.kubernetes.io/custom-endpoints: '[{"api":"","path":"","port":"","names":""}]'
          labels:
            app: test1
        spec:
          containers:
            - image: 'nginx:1.14-alpine-perl'
              name: container-0
              resources:
                requests:
                  cpu: 250m
                  memory: 512Mi
                limits:
                  cpu: 250m
                  memory: 512Mi
          imagePullSecrets:
            - name: default-secret
          securityContext:
            sysctls:
              - name: net.core.somaxconn
                value: '3000'
          affinity:
            nodeAffinity:
              requiredDuringSchedulingIgnoredDuringExecution:
                nodeSelectorTerms:
                  - matchExpressions:
                      - key: kubernetes.io/hostname
                        operator: In
                        values:
                          - 192.168.x.x       # Node name.
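
    Assuming the manifest above is saved as somaxconn-deploy.yaml (the file name is only an example), you can deploy it and confirm that the pod is scheduled to the intended node:

    # somaxconn-deploy.yaml is an example file name.
    kubectl apply -f somaxconn-deploy.yaml
    kubectl get pod -l app=test1 -o wide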

  4. Access the container and check whether the parameter setting takes effect.

    kubectl exec -it <pod name> -- /bin/sh

    Run the following command in the container to check whether the configuration takes effect:

    sysctl -a | grep somax
    Figure 3 Viewing the parameter configuration
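
    If the configuration has taken effect, the output contains the value configured in the workload, for example:

    net.core.somaxconn = 3000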