Updated on 2024-08-15 GMT+08:00

What Can I Do If the Fork Process Failed and New Threads Cannot Be Created?

Symptom

One of the following error messages is displayed when commands are executed or when logs are printed on a Linux ECS.

Error message 1:

root@localhost:~# free -g
            total       used       free     shared     buffers   cached 
Mem:         94          43         51        0           0        0
Swap:        19          0         19
root@localhost:~# uname -a
-bash: fork: Cannot allocate memory

Error message 2:

xxxxsshd2[23985]: fatal: setresuid 20054: Resource temporarily unavailable
xxxxsshd2[28377]: Disconnecting: fork failed: Resource temporarily unavailable
xxxxsshd2[4484]: Disconnecting: fork failed: Resource temporarily unavailable

Error message 3:

[root@ecs-xxxx ~]$ sudo docker info
runtime/cgo: pthread_create failed: Resource temporarily unavailable
SIGABRT: abort

Possible Causes

Generally, the preceding errors occur because a new process or thread cannot be created. The possible causes are that the ECS memory is insufficient or that the number of threads has reached the configured maximum.
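
Before working through the solution below, a quick triage can help tell the two causes apart. This is only a rough check; the out-of-memory log wording varies by kernel version:

free -m

dmesg -T | grep -i "out of memory"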

Solution

  1. Log in to the management console.
  2. Monitor the ECS memory usage using the server monitoring function. For details, see Viewing ECS Metrics.
  3. Log in to an ECS as the root user and run the following commands to check the dmesg and messages logs:

    dmesg -T

    cat /var/log/messages
    • If the cgroup error message shown in Figure 1 is displayed, go to 8.
    • If there is no such message, go to 4.
      Figure 1 Log error
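
    On many kernel versions, the cgroup error described above is logged by the pids controller and can be filtered directly. The exact wording varies by kernel, so treat this filter as a sketch:

    dmesg -T | grep -i "pids controller"

    grep -i "pids controller" /var/log/messages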
  4. Run the following command to check the total number of threads in the current system:

    ps -efL | wc -l

  5. Run the following commands to obtain the two kernel limits and compare them with the total number of threads queried in 4:

    sysctl -a | grep pid_max

    sysctl -a | grep threads-max

    • If the total number of threads in the current system is close to either of the two values, modify the values of pid_max and threads-max. For details, see Modifying the values of pid_max and threads-max.
    • If neither value is close, go to 6.
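
    For reference, the comparison in 4 and 5 can also be scripted. A minimal sketch using only standard tools:

    # Compare the current thread total with the two kernel-wide limits.
    threads_now=$(ps -efL | wc -l)
    pid_max=$(sysctl -n kernel.pid_max)
    threads_max=$(sysctl -n kernel.threads-max)
    echo "threads: ${threads_now}  pid_max: ${pid_max}  threads-max: ${threads_max}"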
  6. Run the following command to determine the PID of the process that reports the error:

    ps -ef | grep Error process name

  7. Run the following command to check the limits configuration of the process based on the PID obtained in 6:

    cat /proc/pid/limits

    Figure 2 Viewing the limits configuration of the process
    • Check the value of Max processes. If the number of threads is close to the value of Max processes, modify the value of limits. For details, see Modifying the value of limits.
    • If the number of threads is not close to the value of Max processes, go to 8.
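
    For reference, the checks in 6 and 7 can be combined into a short script. This is a sketch only; the PID is a placeholder, and because Max processes (RLIMIT_NPROC) is enforced per user, it is compared with the number of threads owned by that user:

    pid=12345                                              # placeholder: PID of the failing process
    user=$(ps -o ruser= -p "${pid}" | awk '{print $1}')    # user that owns the process
    grep "Max processes" /proc/"${pid}"/limits             # soft and hard RLIMIT_NPROC
    ps --no-headers -eLf | awk -v u="${user}" '$1 == u' | wc -l   # threads owned by that user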
  8. Run the following commands to obtain the values of pids.max and pids.current for the cgroup reported in the error log:

    cat /sys/fs/cgroup/pids/Directory reported in the error log/pids.max

    cat /sys/fs/cgroup/pids/Directory reported in the error log/pids.current

    Figure 3 cgroup directory

    An example is as follows:

    1. Run the following command to search for the cgroup directory based on the PID of the process:

      cat /proc/pid/cgroup

      Figure 4 Searching for the cgroup directory based on the PID

      In the command output, /user.slice/user-0.slice/session-5.scope/ in the pids line can be combined with /sys/fs/cgroup/pids/ to specify the cgroup directory /sys/fs/cgroup/pids/user.slice/user-0.slice/session-5.scope/.

    2. Run the following commands to obtain the values of pids.max and pids.current from the cgroup directory:

      cat /sys/fs/cgroup/pids/user.slice/user-0.slice/session-5.scope/pids.max

      cat /sys/fs/cgroup/pids/user.slice/user-0.slice/session-5.scope/pids.current

      • If the value of pids.current is close to that of pids.max, modify the value of cgroup. For details, see Modifying the value of cgroup.
      • If the values are not close, submit a service ticket.
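
      The lookup in this example can also be done in one step. A minimal sketch, assuming cgroup v1 with the pids controller mounted under /sys/fs/cgroup/pids and a placeholder PID:

      pid=12345                                                        # placeholder: PID of the failing process
      cg=$(awk -F: '$2 == "pids" {print $3}' /proc/"${pid}"/cgroup)    # cgroup path from the pids line
      cat /sys/fs/cgroup/pids"${cg}"/pids.max
      cat /sys/fs/cgroup/pids"${cg}"/pids.current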

Related Commands

  • Modifying the values of pid_max and threads-max.
    1. Default parameters vary by OS version. Run the following commands to query how pid_max and threads-max are configured:

      sysctl -a | grep pid_max

      sysctl -a | grep threads-max

    2. Run the following commands to modify the values of pid_max and threads-max:

      echo 'kernel.pid_max = 4194304' >> /etc/sysctl.conf

      echo 'kernel.threads-max = 4194304' >> /etc/sysctl.conf

    3. Run the following command to apply the new values:

      sysctl -p
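
      If needed, run the following command to confirm the values that are now in effect:

      sysctl kernel.pid_max kernel.threads-max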

  • Modifying the value of limits
    1. Log in to the ECS as the user that starts the process reporting the error and run the following command to query the current limits configuration:

      ulimit -u

    2. Configure a proper upper limit for nproc based on service requirements and the current value.

      For example, run the following commands to set nproc to 100000 for the root user:

      echo 'root soft nproc 100000' >> /etc/security/limits.conf

      echo 'root hard nproc 100000' >> /etc/security/limits.conf

    3. Log in to the ECS again and run the following command to check whether the configuration has taken effect:

      ulimit -u

      • If the command output is the value configured in 2, the configuration has taken effect. Restart the service process in this session.
      • If the command output is not the value configured in 2, submit a service ticket.
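
      After the service process is restarted, its effective limit can also be confirmed from /proc (replace pid with the PID of the restarted process):

      grep "Max processes" /proc/pid/limits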
  • Modifying the value of cgroup
    • Temporary modification solution

      Run the following command to temporarily change the upper limit of the cgroup directory to the maximum value:

      echo max > /sys/fs/cgroup/pids/user.slice/user-0.slice/session-25.scope/pids.max

    • Permanent modification solution:

      Run the following commands to set the default task limits to infinity so that the cgroup that exceeded the limit is no longer constrained:

      The value can be adjusted as required. After the modification, restart the ECS for the configuration to take effect.

      echo DefaultTasksMax=infinity >> /etc/systemd/system.conf

      echo DefaultTasksMax=infinity >> /etc/systemd/user.conf

      echo UserTasksMax=infinity >> /etc/systemd/logind.conf
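
      After the ECS is restarted, the new limit can be verified on the affected cgroup directory, for example (the session path is an example and may differ after the restart; on systemd versions that expose it, the manager default can also be queried):

      cat /sys/fs/cgroup/pids/user.slice/user-0.slice/session-25.scope/pids.max

      systemctl show -p DefaultTasksMax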