
Thread Pool Parameters Configuration

Thread pools are the main service processing units of a microservice. Proper thread pool planning maximizes system performance and prevents exceptions from making the system unavailable to normal users. Thread pool tuning is closely tied to service performance, and the parameter settings vary by scenario. The following describes two scenarios. Before configuring the parameters, evaluate the service performance by testing the common APIs and measuring their latency.
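
A simple way to obtain this baseline is to call a representative API repeatedly in a single thread and average the measured round-trip time. The following is a minimal sketch; the endpoint URL and sample count are hypothetical placeholders, not values from this guide.

  import java.net.URI;
  import java.net.http.HttpClient;
  import java.net.http.HttpRequest;
  import java.net.http.HttpResponse;

  public class LatencyCheck {
      public static void main(String[] args) throws Exception {
          // Hypothetical endpoint of a commonly used API; replace with a real one.
          String url = "http://localhost:8080/hello";
          int samples = 100;

          HttpClient client = HttpClient.newHttpClient();
          HttpRequest request = HttpRequest.newBuilder(URI.create(url)).GET().build();

          long totalNanos = 0;
          for (int i = 0; i < samples; i++) {
              long start = System.nanoTime();
              client.send(request, HttpResponse.BodyHandlers.discarding());
              totalNanos += System.nanoTime() - start;
          }
          // Average latency of sequential (non-concurrent) calls, in milliseconds.
          System.out.printf("average latency: %.2f ms%n", totalNanos / 1e6 / samples);
      }
  }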

  • The service performance is good.

    That is, in non-concurrent scenarios, the average API latency is less than 10 ms.

    When the service performance is good, the goal is to keep system behavior predictable and to prevent JVM garbage collection, network fluctuation, and burst traffic from affecting system stability. To achieve this, the system should quickly discard requests that cannot be served in time and rely on measures such as client-side retries (see the retry sketch after the configuration examples below), so that performance remains predictable both under fluctuation and during normal operation.

    • Number of connections and timeout settings
      # Number of verticle instances on the server. The default value is usually sufficient; a value in the range of 8 to 10 is recommended.
      servicecomb.rest.server.verticle-count: 10
      # Maximum number of connections. The default value is Integer.MAX_VALUE. Estimate a suitable limit based on the actual workload so that the system is more resilient.
      servicecomb.rest.server.connection-limit: 20000
      # Connection idle time. The default value is 60s. Generally, you do not need to change the value.
      servicecomb.rest.server.connection.idleTimeoutInSeconds: 60
      # Number of verticle instances on the client. The default value is usually sufficient; a value in the range of 8 to 10 is recommended.
      servicecomb.rest.client.verticle-count: 10
      # The maximum number of connections between a client and the server is verticle-count x maxPoolSize, which should not exceed the number of worker threads.
      # In this example, the maximum is 500 (10 x 50). If there are a large number of instances, reduce the number of connections of a single instance.
      servicecomb.rest.client.connection.maxPoolSize: 50
      # Connection idle time. The default value is 30s. Generally, you do not need to change the value. The value must be shorter than the connection idle time of the server.
      servicecomb.rest.client.connection.idleTimeoutInSeconds: 30
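
      In this example, each client instance opens at most 10 x 50 = 500 connections to a single server instance. With connection-limit set to 20000, one server instance can therefore accept full connection pools from roughly 20000 / 500 = 40 client instances; connections beyond the limit are refused.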
      
    • Service thread pool configuration
      # Number of thread pool groups. The recommended value is 2 to 4.
      servicecomb.executor.default.group: 2
      # Number of threads in each group. Recommended value range: 50 to 200.
      servicecomb.executor.default.thread-per-group: 100
      # Size of each queue in the thread pool. The default value is Integer.MAX_VALUE. In high-performance scenarios, do not use the default value, so that excess requests can be discarded quickly.
      servicecomb.executor.default.maxQueueSize-per-group: 10000
      # Maximum time a request may wait in the queue. If a request waits longer than this value, it is discarded and an error response is returned. The default value is 0.
      # In high-performance scenarios, set this queuing timeout to a small value to discard requests quickly.
      servicecomb.rest.server.requestWaitInPoolTimeout: 100
      # Set a short timeout period to discard requests quickly. However, a value of at least 1s (1000 ms) is recommended; smaller values may cause unexpected problems.
      servicecomb.request.timeout: 5000
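
    With the values above, each service instance runs 2 x 100 = 200 worker threads and can hold at most 2 x 10000 = 20000 queued requests. A request that waits in the queue for more than 100 ms (requestWaitInPoolTimeout) is discarded, and the end-to-end request timeout is 5000 ms, so callers receive an error quickly instead of piling up behind a slow instance.

    When a request is rejected in this way, the caller can retry it. The following is a minimal, framework-independent retry sketch, assuming a hypothetical callService() method; it is not a ServiceComb API, and the attempt count and backoff values are illustrative only.

      import java.util.function.Supplier;

      public final class SimpleRetry {
          // Calls the supplier up to maxAttempts times, sleeping briefly between failed attempts.
          public static <T> T callWithRetry(Supplier<T> call, int maxAttempts, long backoffMillis) {
              RuntimeException last = new IllegalStateException("no attempt was made");
              for (int attempt = 1; attempt <= maxAttempts; attempt++) {
                  try {
                      return call.get();
                  } catch (RuntimeException e) {
                      last = e;
                      try {
                          Thread.sleep(backoffMillis);
                      } catch (InterruptedException ie) {
                          Thread.currentThread().interrupt();
                          throw last;
                      }
                  }
              }
              throw last;
          }

          public static void main(String[] args) {
              // callService() stands in for the real remote invocation that may be rejected under load.
              String result = callWithRetry(SimpleRetry::callService, 3, 100);
              System.out.println(result);
          }

          private static String callService() {
              return "ok"; // placeholder
          }
      }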
      
  • The service performance is not good.

    That is, in non-concurrent scenarios, the average API latency is longer than 100 ms. High latency is usually accompanied by low CPU usage, because the service code spends most of its time on I/O and waiting for resources. If the high latency is instead caused by heavy computation, optimization becomes more complex.

    When the service performance is not good, increase the values of the following parameters; otherwise, a large number of requests will be blocked. Larger values maintain system throughput and avoid service failures caused by burst traffic, but user experience is affected because requests wait longer.

    # Server connection idle time.
    servicecomb.rest.server.connection.idleTimeoutInSeconds: 120000
    # Client connection idle time.
    servicecomb.rest.client.connection.idleTimeoutInSeconds: 90000
    # Number of thread pool groups.
    servicecomb.executor.default.group: 4
    # Size of the thread pool.
    servicecomb.executor.default.thread-per-group: 200
    # Size of each queue in the thread pool. Requests are queued here when the service performance is poor.
    servicecomb.executor.default.maxQueueSize-per-group: 100000
    # Set the timeout period to a large value.
    servicecomb.rest.server.requestWaitInPoolTimeout: 10000
    servicecomb.request.timeout: 30000
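
    With these values, each service instance runs 4 x 200 = 800 worker threads and can queue up to 4 x 100000 = 400000 requests; a request may wait up to 10 s in the queue before being discarded, and the end-to-end timeout is 30 s. As a rough upper bound, 800 threads at an average latency of about 100 ms can process on the order of 800 / 0.1 = 8000 requests per second per instance; actual throughput depends on the workload and downstream resources.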