Setting Parameters for Kafka Clients
This section provides recommendations for configuring common parameters of Kafka producers and consumers. Parameter names may differ across Kafka client versions; the parameters below are supported in v1.1.0 and later. For details about other parameters and versions, see Kafka Configuration.
| Parameter | Default Value | Recommended Value | Description |
|---|---|---|---|
| acks | 1 | all or -1 (if high reliability mode is selected); 1 (if high throughput mode is selected) | Number of acknowledgments the producer requires the server to return before considering a request complete. This controls the durability of sent records. Valid values: 0: the producer does not wait for any acknowledgment from the server; the record is immediately added to the socket buffer and considered sent, no guarantee is made that the server has received it, the retries setting does not take effect (the client generally does not learn of failures), and the offset returned for each record is always -1. 1: the leader writes the record to its local log and responds without waiting for full acknowledgement from all followers; if the leader fails immediately after acknowledging the record but before the followers have replicated it, the record is lost. all or -1: the leader waits until all replicas in the ISR have written the record to their logs; as long as any replica survives, data is not lost. min.insync.replicas specifies the minimum number of replicas that must acknowledge a write for it to be considered successful. |
| retries | 0 | / | Number of times the client resends a message. A value greater than zero causes the client to resend any record whose send failed. Note that such a retry is no different from the client re-sending the record after receiving the error. Allowing retries can change record ordering: if two batches are sent to the same partition and the first fails and is retried while the second succeeds, the records in the second batch may appear first. You are advised to configure producers to retry after network disconnections: set retries to 3 and the retry interval retry.backoff.ms to 1000. |
| request.timeout.ms | 30000 | / | Maximum time (in ms) the client waits for the response to a request. If the response is not received before the timeout elapses, the client throws a timeout exception. Setting a large value, for example 127000 (127s), can prevent records from failing to be sent in high-concurrency scenarios. |
| block.on.buffer.full | TRUE | TRUE | When set to TRUE (the default), the producer blocks and stops accepting new records once buffer memory is exhausted. In some cases non-blocking behavior is preferred; set this parameter to FALSE so that the producer instead throws a BufferExhaustedException as soon as buffer memory is exhausted. |
| batch.size | 16384 | 262144 | Maximum number of bytes in a batch of records. The producer attempts to batch records together into fewer requests whenever multiple records are sent to the same partition, which improves performance on both the client and the server. No attempt is made to batch records larger than this size. Requests sent to brokers contain multiple batches, one for each partition with data available to be sent. A smaller batch size makes batching less common and may reduce throughput (a batch size of zero disables batching entirely). A larger batch size may use more memory, because a buffer of the specified size is always allocated in anticipation of additional records. |
| buffer.memory | 33554432 | 67108864 | Total bytes of memory the producer can use to buffer records waiting to be sent to the server. If records are sent faster than they can be delivered to the broker, the producer either blocks or throws an exception, as controlled by block.on.buffer.full. This setting should correspond roughly to the total memory the producer will use, but it is not a rigid bound, since not all producer memory is used for buffering: some additional memory is used for compression (if compression is enabled) and for maintaining in-flight requests. |
| enable.idempotence | | If idempotence is not required, you are advised to set this parameter to false. | If idempotence is enabled on the producer client, message offsets are not continuous on the consumer client or on the Message Query page of the Kafka console. This is because enabling idempotence generates metadata control messages during message production; these control messages are produced to topics and are invisible to consumers. |
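The producer recommendations above can be collected into a plain configuration mapping. The sketch below is illustrative only: the dotted key names follow this section's table, while real client libraries (for example kafka-python or confluent-kafka) accept them in their own form, and the validation helper `validate_producer_config` is a hypothetical name invented here to show the reliability-mode constraint.

```python
# High-reliability producer settings from the table above, as a plain dict.
# Key names mirror this section's table; actual client libraries may expect
# underscore-separated equivalents (e.g. retry_backoff_ms in kafka-python).
producer_config = {
    "acks": "all",                 # wait for all ISR replicas (reliability mode)
    "retries": 3,                  # resend on transient network errors
    "retry.backoff.ms": 1000,      # wait 1 s between retry attempts
    "request.timeout.ms": 127000,  # generous timeout for high concurrency
    "batch.size": 262144,          # 256 KB batches per partition
    "buffer.memory": 67108864,     # 64 MB total send buffer
}

def validate_producer_config(cfg):
    """Illustrative check: reliability mode (acks=all/-1) should allow retries,
    and the send buffer must be able to hold at least one full batch."""
    if cfg["acks"] in ("all", "-1", -1) and cfg.get("retries", 0) <= 0:
        return False
    return cfg["buffer.memory"] >= cfg["batch.size"]
```

A producer built from this dict follows the high reliability mode described in the acks row; for high throughput mode, acks would be set to 1 instead.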
| Parameter | Default Value | Recommended Value | Description |
|---|---|---|---|
| auto.commit.enable | TRUE | FALSE | If TRUE, the offsets of messages already fetched by the consumer are periodically committed to ZooKeeper. If the process fails, the committed offset is used as the position from which the new consumer begins. Constraints: if this parameter is set to FALSE, an offset must be committed to ZooKeeper after messages are successfully consumed, to avoid message loss. |
| auto.offset.reset | latest | earliest | What to do when there is no initial offset in ZooKeeper or the current offset has been deleted. Options: earliest (reset to the earliest available offset), latest (reset to the latest offset), and none (throw an exception to the consumer if no previous offset is found). NOTE: If this parameter is set to latest, the producer may start sending messages to new partitions (if any) before the consumer resets to the initial offset. As a result, some messages will be lost. |
| connections.max.idle.ms | 600000 | 30000 | Timeout (in ms) for an idle connection; the server closes the connection after this period ends. Setting this parameter to 30000 can reduce server response failures when network conditions are poor. |
| max.poll.records | 500 | Small enough that all fetched records can be processed within max.poll.interval.ms. | Maximum number of messages a consumer pulls from a broker at a time. |
| max.poll.interval.ms | 300000 | Increase this value if complex, time-consuming logic runs between two polls. | Maximum interval between consumer polls, in milliseconds. If this interval is exceeded, consumption fails and the consumer is removed from the consumer group, triggering a rebalance. |
| heartbeat.interval.ms | 3000 | ≥ 3000 | Heartbeat interval between a consumer and Kafka, in milliseconds. |
| session.timeout.ms | 10000 | At least 3 times the value of heartbeat.interval.ms. | Timeout (in ms) of the consumer-broker session when offsets are managed by a consumer group. |
| fetch.max.bytes | 1000000 | max.request.size < message.max.bytes < fetch.max.bytes | Maximum number of bytes of messages a consumer pulls from a broker at a time. |
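The consumer recommendations can be sketched the same way. This is a minimal, assumed layout (the key names mirror this section's table, not any specific client library), and `check_consumer_timeouts` is an invented helper that encodes the session.timeout.ms recommendation from the table.

```python
# Consumer settings from the table above, as a plain dict.
consumer_config = {
    "auto.commit.enable": False,      # commit offsets manually after processing
    "auto.offset.reset": "earliest",  # avoid losing messages in new partitions
    "connections.max.idle.ms": 30000, # drop idle connections quickly
    "max.poll.records": 500,          # per-poll batch; must finish in time
    "max.poll.interval.ms": 300000,   # max processing time between polls
    "heartbeat.interval.ms": 3000,
    "session.timeout.ms": 10000,
}

def check_consumer_timeouts(cfg):
    """Illustrative check of the table's rule: session.timeout.ms should be
    at least 3x heartbeat.interval.ms so a few missed heartbeats are tolerated
    before the broker considers the consumer dead."""
    return cfg["session.timeout.ms"] >= 3 * cfg["heartbeat.interval.ms"]
```

With auto.commit.enable set to FALSE as recommended, the application is responsible for committing offsets only after each batch of records has been fully processed.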