Updated on 2022-11-18 GMT+08:00

Suggestions

In a consumer group, the number of consumers should equal the number of topic partitions to be consumed

If there are more consumers than topic partitions, the excess consumers receive no partitions and stay idle. If there are fewer consumers than topic partitions, consumption cannot be fully parallelized. Therefore, keep the number of consumers in a group equal to the number of partitions of the topics they consume.
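To make the trade-off concrete, the following is an illustrative sketch (not the actual Kafka assignor source) of how the default range-style assignment distributes partitions of one topic across the consumers of a group; the class and method names are invented for this example:

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative sketch: assign P partitions of one topic across C consumers
// the way a range-style assignor does, to show why extra consumers sit idle.
public class AssignmentSketch {
    static List<List<Integer>> assign(int partitions, int consumers) {
        List<List<Integer>> result = new ArrayList<>();
        for (int c = 0; c < consumers; c++) {
            result.add(new ArrayList<>());
        }
        int quota = partitions / consumers;  // base partitions per consumer
        int extra = partitions % consumers;  // first 'extra' consumers get one more
        int p = 0;
        for (int c = 0; c < consumers; c++) {
            int count = quota + (c < extra ? 1 : 0);
            for (int i = 0; i < count; i++) {
                result.get(c).add(p++);
            }
        }
        return result;
    }

    public static void main(String[] args) {
        // 4 consumers, 3 partitions: the fourth consumer is idle
        System.out.println(AssignmentSketch.assign(3, 4)); // [[0], [1], [2], []]
        // 2 consumers, 4 partitions: parallelism is limited to 2 threads
        System.out.println(AssignmentSketch.assign(4, 2)); // [[0, 1], [2, 3]]
    }
}
```

With equal counts, every consumer owns exactly one partition and the group's parallelism matches the topic's partition count.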

Avoid writing single oversized messages

Writing a single oversized message degrades throughput and may cause the write to fail. In such cases, adjust the max.request.size configuration item when initializing the Kafka producer instance and the max.partition.fetch.bytes configuration item when initializing the consumer instance.

For example, set both max.request.size and max.partition.fetch.bytes to 5252880 (about 5 MB).
         // Producer instance configuration
         // Security protocol: SASL_PLAINTEXT or PLAINTEXT
         props.put(securityProtocol, kafkaProc.getValues(securityProtocol, "SASL_PLAINTEXT"));
         // Kerberos service name
         props.put(saslKerberosServiceName, "kafka");
         // Maximum size of a single produce request, in bytes
         props.put("max.request.size", "5252880");

         // Consumer instance configuration
         // Security protocol: SASL_PLAINTEXT or PLAINTEXT
         props.put(securityProtocol, kafkaProc.getValues(securityProtocol, "SASL_PLAINT​EXT"));
         // Kerberos service name
         props.put(saslKerberosServiceName, "kafka");
         // Maximum bytes fetched per partition in a single fetch request
         props.put("max.partition.fetch.bytes", "5252880");
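The two settings can also be sketched as self-contained java.util.Properties builders; the class and method names below are illustrative, not part of any Kafka API, and the value 5252880 follows the example above:

```java
import java.util.Properties;

// Illustrative sketch: build the producer and consumer property sets that
// raise the single-message size limits before creating the Kafka instances.
public class LargeMessageConfig {
    public static Properties producerProps() {
        Properties props = new Properties();
        // Maximum size of a single produce request, in bytes
        props.put("max.request.size", "5252880");
        return props;
    }

    public static Properties consumerProps() {
        Properties props = new Properties();
        // Maximum bytes fetched per partition in a single fetch request
        props.put("max.partition.fetch.bytes", "5252880");
        return props;
    }

    public static void main(String[] args) {
        System.out.println(LargeMessageConfig.producerProps().getProperty("max.request.size"));
        System.out.println(LargeMessageConfig.consumerProps().getProperty("max.partition.fetch.bytes"));
    }
}
```

Note that the broker-side message.max.bytes limit (or the per-topic max.message.bytes override) must also be large enough, or the broker will still reject oversized messages regardless of the client settings.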