Consumption Offset
Two offset committing policies are available: automatic and manual. When creating a DISKafkaConsumer object, set the enable.auto.commit parameter to choose a policy: true selects automatic offset committing, and false selects manual offset committing.
With automatic committing, the consumer commits the consumed offsets in the background every auto.commit.interval.ms milliseconds. With manual committing, the consumer does not commit offsets periodically; instead, you decide when records should be considered consumed and commit their offsets explicitly.
- Automatic
When a consumer is created, automatic offset committing is used by default, with a default committing interval of 5,000 ms. The parameters for automatic offset committing are as follows:
props.setProperty("enable.auto.commit", "true");      //Automatic offset committing is used.
props.setProperty("auto.commit.interval.ms", "5000"); //Offsets are committed at an interval of 5,000 ms.
- Manual
In some scenarios, offsets must be managed more strictly to ensure that messages are neither consumed repeatedly nor lost. For example, the pulled messages may need to be written to a database, or used in complex service logic such as other network access requests. In such scenarios, a message is regarded as successfully consumed only after all of its processing has completed, so you must control offset committing manually. The parameter for manual offset committing is as follows:
props.put("enable.auto.commit", "false");//Manual offset committing is used.
After the services are processed successfully, call the commitAsync() or commitSync() method to commit the offsets. commitAsync() commits offsets asynchronously: the consumer thread is not blocked, and the next pull operation may start before the result of the previous commit is returned. To obtain the commit result, pass an OffsetCommitCallback; when the commit completes, its onComplete() method is invoked automatically, allowing different logic to run depending on the callback result.
commitSync() commits offsets synchronously: the consumer thread blocks until the commit result is returned.
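The callback pattern described above can be sketched as follows. This is a minimal illustration against the open-source Kafka consumer API (which the DIS Kafka adapter mirrors); the class name CommitCallbackExample and the failedCommits counter are hypothetical names chosen for this example, not part of the DIS SDK:

```java
import java.util.Collections;
import java.util.Map;
import java.util.concurrent.atomic.AtomicLong;

import org.apache.kafka.clients.consumer.OffsetAndMetadata;
import org.apache.kafka.clients.consumer.OffsetCommitCallback;
import org.apache.kafka.common.TopicPartition;

public class CommitCallbackExample {
    // Counts commits that completed with an error.
    static final AtomicLong failedCommits = new AtomicLong();

    // onComplete() is invoked automatically when an asynchronous commit finishes.
    static final OffsetCommitCallback CALLBACK =
            (Map<TopicPartition, OffsetAndMetadata> offsets, Exception exception) -> {
        if (exception != null) {
            // Commit failed; a real consumer might log the error
            // or fall back to a synchronous commitSync() retry.
            failedCommits.incrementAndGet();
        }
    };

    public static void main(String[] args) {
        // Inside a poll loop, the callback would be passed to the consumer:
        //     consumer.commitAsync(CALLBACK);
        // Here we only exercise the callback directly, once per outcome:
        CALLBACK.onComplete(Collections.emptyMap(), null);                 // success path
        CALLBACK.onComplete(Collections.emptyMap(),
                new RuntimeException("commit failed"));                    // failure path
        System.out.println(failedCommits.get()); // prints 1
    }
}
```

Because commitAsync() does not block, results can arrive out of order; a common design choice is to use commitAsync() in the poll loop and a final commitSync() before closing the consumer.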
In addition, you can control the exact offset committed for each partition. The committed offset is the offset of the last consumed record plus 1. For example, if the offset of the last record in a consumed batch is 100, commit offset 101; consumption then resumes from the record whose offset is 101, and no data is consumed repeatedly. A code sample is as follows:
ConsumerRecords<String, String> records = consumer.poll(Long.MAX_VALUE);
if (!records.isEmpty()) {
    for (TopicPartition partition : records.partitions()) {
        List<ConsumerRecord<String, String>> partitionRecords = records.records(partition);
        for (ConsumerRecord<String, String> record : partitionRecords) {
            LOGGER.info("Value [{}], Partition [{}], Offset [{}], Key [{}]",
                    record.value(), record.partition(), record.offset(), record.key());
        }
        if (!partitionRecords.isEmpty()) {
            // Commit the specific offset of this partition synchronously.
            long lastOffset = partitionRecords.get(partitionRecords.size() - 1).offset();
            consumer.commitSync(Collections.singletonMap(partition,
                    new OffsetAndMetadata(lastOffset + 1)));
        }
    }
}
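The commit-offset arithmetic used above (last consumed offset plus 1) can be isolated into a tiny helper. OffsetUtil is a hypothetical name chosen for illustration, not part of the DIS SDK:

```java
public class OffsetUtil {
    /**
     * Returns the offset to commit after consuming a batch whose last
     * record had offset lastRecordOffset. Kafka commit semantics expect
     * the offset of the NEXT record to be consumed, so committing
     * lastRecordOffset + 1 resumes consumption without duplicates.
     */
    public static long nextCommitOffset(long lastRecordOffset) {
        return lastRecordOffset + 1;
    }

    public static void main(String[] args) {
        // Last record in the batch had offset 100 -> commit 101.
        System.out.println(OffsetUtil.nextCommitOffset(100)); // prints 101
    }
}
```

Committing lastRecordOffset itself (without the +1) would cause that record to be delivered again after a restart, which is the most common source of accidental duplicate consumption.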