Updated on 2023-10-27 GMT+08:00

Increasing the Listening Queue Length by Configuring Container Kernel Parameters

Application Scenarios

net.core.somaxconn specifies the maximum number of fully established connections that can wait in a socket's accept (listening) queue. The default value is 128. If the queue overflows under heavy load, for example, new connections are dropped or reset, increase the listening queue length.
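Before changing anything, you can check the current limit on a node, for example:

```shell
# Read the node's current accept-queue limit (often 128 by default).
cat /proc/sys/net/core/somaxconn

# The same value via sysctl, if the binary is available (prints only the value).
sysctl -n net.core.somaxconn 2>/dev/null || true
```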

Procedure

  1. Modify kubelet configurations.

    You can use either of the following methods to modify the kubelet parameters:
    • Modifying kubelet parameters in the node pool (only for clusters of v1.15 or later)

      Log in to the CCE console, go to the cluster details page, choose More > Manage in the node pool, and modify the kubelet parameters.

      Figure 1 Node pool configuration
      Figure 2 Modifying kubelet parameters
    • Modifying kubelet parameters of the node
      1. Log in to the node.
      2. Edit the /opt/cloud/cce/kubernetes/kubelet/kubelet file. In versions earlier than 1.15, the file is /var/paas/kubernetes/kubelet/kubelet.

Add net.core.somaxconn to the allowed unsafe sysctls by appending the following flag to the kubelet startup parameters:

        --allowed-unsafe-sysctls=net.core.somaxconn

      3. Restart kubelet.

        systemctl restart kubelet

        Check the kubelet status.

        systemctl status kubelet

      After the kubelet configurations are changed for a cluster of v1.13 or earlier, the configurations will be restored if the cluster is upgraded to a later version.
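After restarting kubelet, you can sanity-check that the flag was actually persisted. This is a minimal sketch assuming the v1.15+ file path given above; on a machine where the file does not exist, the fallback message is printed instead.

```shell
# Look for the flag in the kubelet configuration file (v1.15+ path).
grep -s -- 'allowed-unsafe-sysctls' /opt/cloud/cce/kubernetes/kubelet/kubelet \
  || echo "flag not found (file missing or flag not yet added)"
```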

  2. Create a pod security policy.

    Starting from v1.17.17, CCE enables pod security policies for kube-apiserver. Add net.core.somaxconn to allowedUnsafeSysctls of a pod security policy to make the policy take effect. (This configuration is not required for clusters earlier than v1.17.17.)

    • For details about CCE security policies, see Pod Security Policies.
    • For details about Kubernetes security policies, see PodSecurityPolicy.
• For clusters where kubelet allows net.core.somaxconn, add it to allowedUnsafeSysctls of the corresponding pod security policy. For example, create a pod security policy as follows:
      apiVersion: policy/v1beta1
      kind: PodSecurityPolicy
      metadata:
        annotations:
          seccomp.security.alpha.kubernetes.io/allowedProfileNames: '*'
        name: sysctl-psp
      spec:
        allowedUnsafeSysctls:
        - net.core.somaxconn
        allowPrivilegeEscalation: true
        allowedCapabilities:
        - '*'
        fsGroup:
          rule: RunAsAny
        hostIPC: true
        hostNetwork: true
        hostPID: true
        hostPorts:
        - max: 65535
          min: 0
        privileged: true
        runAsGroup:
          rule: RunAsAny
        runAsUser:
          rule: RunAsAny
        seLinux:
          rule: RunAsAny
        supplementalGroups:
          rule: RunAsAny
        volumes:
        - '*'

      After creating the pod security policy sysctl-psp, configure RBAC permission control for it.

      An example is as follows:

      kind: ClusterRole
      apiVersion: rbac.authorization.k8s.io/v1
      metadata:
        name: sysctl-psp
      rules:
        - apiGroups:
            - "*"
          resources:
            - podsecuritypolicies
          resourceNames:
            - sysctl-psp
          verbs:
            - use
      
      ---
      apiVersion: rbac.authorization.k8s.io/v1
      kind: ClusterRoleBinding
      metadata:
        name: sysctl-psp
      roleRef:
        kind: ClusterRole
        name: sysctl-psp
        apiGroup: rbac.authorization.k8s.io
      subjects:
      - kind: Group
        name: system:authenticated
        apiGroup: rbac.authorization.k8s.io
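Assuming the policy and RBAC manifests above are saved as psp.yaml and psp-rbac.yaml (illustrative file names), they can be applied and verified as follows. The sketch only runs the kubectl commands when a cluster is reachable.

```shell
# Apply the policy and its RBAC binding, then confirm both exist.
if kubectl get ns >/dev/null 2>&1; then
  kubectl apply -f psp.yaml
  kubectl apply -f psp-rbac.yaml
  kubectl get psp sysctl-psp
  kubectl get clusterrole,clusterrolebinding sysctl-psp
else
  echo "no cluster access; run these commands from a node with kubectl configured"
fi
```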

  3. Create a workload, set the kernel parameter, and configure affinity with the node configured in step 1.

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      annotations:
        description: ''
      labels:
        appgroup: ''
      name: test1
      namespace: default
    spec:
      replicas: 1
      selector:
        matchLabels:
          app: test1
      template:
        metadata:
          annotations:
            metrics.alpha.kubernetes.io/custom-endpoints: '[{"api":"","path":"","port":"","names":""}]'
          labels:
            app: test1
        spec:
          containers:
            - image: 'nginx:1.14-alpine-perl'
              name: container-0
              resources:
                requests:
                  cpu: 250m
                  memory: 512Mi
                limits:
                  cpu: 250m
                  memory: 512Mi
          imagePullSecrets:
            - name: default-secret
          securityContext:
            sysctls:
              - name: net.core.somaxconn
                value: '3000'
          affinity:
            nodeAffinity:
              requiredDuringSchedulingIgnoredDuringExecution:
                nodeSelectorTerms:
                  - matchExpressions:
                      - key: kubernetes.io/hostname
                        operator: In
                        values:
                          - 192.168.x.x       # Node name.
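Assuming the Deployment above is saved as test1.yaml (an illustrative file name) and kubectl can reach the cluster, the sysctl can also be verified remotely, without logging in to the node as in step 4:

```shell
# Deploy the workload, wait for it, and read the sysctl inside the pod.
if kubectl get ns >/dev/null 2>&1; then
  kubectl apply -f test1.yaml
  kubectl wait --for=condition=available deployment/test1 --timeout=120s
  pod=$(kubectl get pod -l app=test1 -o jsonpath='{.items[0].metadata.name}')
  kubectl exec "$pod" -- sysctl net.core.somaxconn   # should print 3000
else
  echo "no cluster access; run these commands from a node with kubectl configured"
fi
```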

  4. Log in to the node where the workload is deployed, access the container, and check whether the parameter configuration takes effect.

    Run the following command in the container to check whether the configuration takes effect:

    sysctl -a | grep somax

    Figure 3 Viewing the parameter configuration