
Why Does the Overall Instance Performance Deteriorate When QPS Increases After the Batch Size Is Decreased?

Symptom

The original batch_size was 100, and a single row was about 400 bytes, so one batch was about 40 KB and triggered the alarm for batches exceeding 5 KB. batch_size was therefore reduced to 10 (about 4 KB per batch), and QPS was increased to 10 times the original value to keep the overall write throughput unchanged. However, the overall performance deteriorated after these changes.

Possible Cause

The number of concurrent client requests is limited by the driver configuration, including the number of hosts, the number of sessions, ConnectionsPerHost, and MaxRequestsPerConnection.

For example, suppose a user creates one cluster with one session, and the cluster has three hosts. If ConnectionsPerHost is set to 2 and MaxRequestsPerConnection keeps its default value of 128, the session can have at most 3 hosts × 2 connections × 128 requests = 768 concurrent requests, and a single node can serve at most 2 × 128 = 256 of them.
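As a reference, the following minimal sketch shows where these two limits are set. It assumes the open-source DataStax Java driver 3.x; the contact point is a placeholder, and the values match the example above rather than being recommendations.

    import com.datastax.driver.core.Cluster;
    import com.datastax.driver.core.HostDistance;
    import com.datastax.driver.core.PoolingOptions;
    import com.datastax.driver.core.Session;

    public class PoolLimits {
        public static void main(String[] args) {
            // Match the example above: 2 connections per host and 128 requests
            // per connection (set explicitly rather than relying on defaults).
            PoolingOptions pooling = new PoolingOptions()
                    .setConnectionsPerHost(HostDistance.LOCAL, 2, 2)
                    .setMaxRequestsPerConnection(HostDistance.LOCAL, 128);

            Cluster cluster = Cluster.builder()
                    .addContactPoint("127.0.0.1")   // placeholder address
                    .withPoolingOptions(pooling)
                    .build();
            Session session = cluster.connect();

            // Per-node ceiling: 2 connections x 128 requests = 256.
            // Session ceiling with 3 hosts: 3 x 256 = 768 concurrent requests.

            session.close();
            cluster.close();
        }
    }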

For details about these parameters, see the official driver documentation.

Solution

View monitoring metrics to check the CPU usage, pending reads/writes, and read/write latency of each node.

  • If the load of a single node has reached its upper limit, add nodes. For details, see Adding Nodes.
  • If the load of a single node is low, adjust the driver configuration (see the sketch after this list).
    1. Increase the value of ConnectionsPerHost. Ensure that the total number of connections to the cluster does not exceed the configured alarm threshold.
    2. Increase the value of MaxRequestsPerConnection. Ensure that the value does not exceed the load capability of a single node, and keep observing the CPU usage, read/write latency, and pending reads/writes.
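For example, both values can be raised in the driver as follows. This is a sketch assuming the DataStax Java driver 3.x; the values 4 and 256 are illustrative only and should be increased gradually while watching the metrics above.

    import com.datastax.driver.core.Cluster;
    import com.datastax.driver.core.HostDistance;
    import com.datastax.driver.core.PoolingOptions;

    public class TunedPool {
        public static void main(String[] args) {
            PoolingOptions pooling = new PoolingOptions()
                    // Step 1: more connections per host (core = max = 4 here);
                    // keep total cluster connections below the alarm threshold.
                    .setConnectionsPerHost(HostDistance.LOCAL, 4, 4)
                    // Step 2: more in-flight requests per connection
                    // (128 in the earlier example).
                    .setMaxRequestsPerConnection(HostDistance.LOCAL, 256);

            Cluster cluster = Cluster.builder()
                    .addContactPoint("127.0.0.1")   // placeholder address
                    .withPoolingOptions(pooling)
                    .build();
            // ... create a session and issue requests as usual ...
            cluster.close();
        }
    }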