Updated on 2025-10-11 GMT+08:00

"Bulk Reject" Is Displayed for an Elasticsearch Cluster

Symptom

The write rejection rate of a cluster sometimes increases, and the "Bulk Reject" message is displayed. When bulk write operations are performed, an error message similar to the following is reported:

[2019-03-01 10:09:58][ERROR]rspItemError: {
    "reason": "rejected execution of org.elasticsearch.transport.TransportService$7@5436e129 on EsThreadPoolExecutor[bulk, queue capacity = 1024, org.elasticsearch.common.util.concurrent.EsThreadPoolExecutor@6bd77359[Running, pool size = 12, active threads = 12, queued tasks = 2390, completed tasks = 20018208656]]",
    "type": "es_rejected_execution_exception"
}
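
As a general Elasticsearch check, you can also query the bulk thread pool statistics on the Kibana console to confirm which nodes are rejecting requests (in Elasticsearch 6.3 and later, the pool is named write instead of bulk):

GET _cat/thread_pool/bulk,write?v&h=node_name,name,active,queue,rejected,completed

A rejected count that keeps growing on certain nodes indicates that those nodes cannot keep up with the incoming bulk requests.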

Issue Analysis

Bulk rejection is usually caused by oversized shards or uneven shard distribution across nodes. You can use the following methods to locate and analyze the fault:

  1. Check whether the data volume in shards is too large.

    The recommended size of a single shard is 20 GB to 50 GB. You can run the following command on the Kibana console to view the size of each shard of an index (see the illustrative sample output after this list):

    GET _cat/shards?index=index_name&v
  2. Check whether shards are unevenly distributed across nodes.

    You can check shard allocation in either of the following ways:

    • Method 1: Check the cluster's monitoring metrics on the console, especially those related to index shards. For details, see Viewing Monitoring Metrics.
    • Method 2: Use the CLI to check the distribution of index shards across cluster nodes.
      For example, run the following curl command on the client to check the number of shards on each cluster node, where $ip and $port indicate the access address and port of the cluster:
      curl "$ip:$port/_cat/shards?index={index_name}&s=node,store:desc" | awk '{print $8}' | sort | uniq -c | sort
      Figure 1 Example of command output

      In the command output, the first column indicates the number of shards, and the second column indicates the node name. In the example in Figure 1, some nodes host only one shard while others host eight, which means the index shards are unevenly distributed across the nodes.
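
For reference, the raw output of GET _cat/shards?index=index_name&v looks similar to the following (the index name, shard sizes, and node names here are illustrative only). The store column shows the size of each shard, and the node column shows the node that hosts it:

index     shard prirep state   docs      store  ip           node
my_index  0     p      STARTED 21876352  55gb   192.168.0.1  node-1
my_index  1     p      STARTED 3134217   8gb    192.168.0.2  node-2

In this illustrative output, shard 0 exceeds the recommended 20 GB to 50 GB range.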

Solution

  • If the problem is caused by oversized shards:

    Set the number_of_shards parameter in the index template so that each shard stays within the recommended 20 GB to 50 GB range. For example, an index that is expected to hold about 1 TB of data needs roughly 25 primary shards if each shard is to hold about 40 GB.

    A newly created template takes effect immediately, but only for indexes created after it; existing indexes are not changed.

  • If the problem is caused by uneven shard distribution:

    Workarounds:

    1. Run the following command to dynamically set the index.routing.allocation.total_shards_per_node parameter, which limits the number of shards of the index that can be allocated to a single node:
      PUT <index_name>/_settings
      {
          "settings": {
              "index": {
                  "routing": {
                      "allocation": {
                          "total_shards_per_node": "3"
                      }
                  }
              }
          }
      }

      When you configure the total_shards_per_node parameter, reserve some buffer so that shards can still be allocated if a node fails. For example, if there are 10 nodes and the index has 20 shards, set total_shards_per_node to a value greater than 2 (for example, 3); otherwise, the shards of a faulty node cannot be reallocated to the remaining nodes.

    2. Before creating indexes, configure an index template to specify the total number of shards of an index and the maximum number of its shards allowed on each node.
      PUT _template/<template_name>
      {
          "order": 0,
          "template": "{index_prefix}*",  // Replace index_prefix with the prefix of the indexes that the template applies to
          "settings": {
              "index": {
                  "number_of_shards": "30", // Total number of primary shards for the index, assuming each shard holds about 30 GB of data
                  "routing.allocation.total_shards_per_node": 3 // Maximum number of shards of this index on a single node
              }
          },
          "aliases": {}
      }
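
    To check whether the settings have taken effect, you can query them on the Kibana console, using the same index and template names as in the preceding examples:

      GET <index_name>/_settings
      GET _template/<template_name>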