TUMBLE WINDOW Extension
Function
- Periodic tumbling windows for lower latency
Before a tumbling window closes, it can fire periodically at the configured trigger interval, emitting the result computed from the window's start time up to the current firing time. This does not affect the final window result; it simply lets you see the latest intermediate result each period before the window closes.
- Custom latency for higher data accuracy
A lateness period can be set for after the window closes. Each late record that arrives within this period triggers an update of the window's output result.
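As an illustration of the periodic triggering described above, the firing times of a single window can be sketched as follows (hypothetical Python, not part of the TUMBLE API; all names are illustrative):

```python
# Hypothetical sketch: compute the times at which a tumbling window
# fires when periodic triggering is enabled.

def periodic_firings(window_start, window_size, period):
    """Return the timestamps (in seconds) at which results are emitted,
    ending with the final firing when the window closes."""
    times = []
    t = window_start + period
    while t < window_start + window_size:
        times.append(t)
        t += period
    times.append(window_start + window_size)  # final firing at window end
    return times

# A 30-second window starting at t=0 with a 10-second trigger period
# fires at 10, 20, and 30 (the window end).
print(periodic_firings(0, 30, 10))  # → [10, 20, 30]
```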
Notes
- If an INSERT statement is used to write results to a sink, the sink must support UPSERT mode: the result table must support UPSERT operations and define a primary key.
- Lateness settings take effect only for event time, not for processing time (proctime).
- When calling the helper functions, use exactly the same parameters as the grouping window function in the GROUP BY clause.
- If event time is used, a watermark must be defined. In the following example, order_time is the event time column and the watermark is set to 3 seconds:
CREATE TABLE orders (
  order_id string,
  order_channel string,
  order_time timestamp(3),
  pay_amount double,
  real_pay double,
  pay_time string,
  user_id string,
  user_name string,
  area_id string,
  watermark for order_time as order_time - INTERVAL '3' SECOND
) WITH (
  'connector' = 'kafka',
  'topic' = 'kafkaTopic',
  'properties.bootstrap.servers' = 'KafkaAddress1:KafkaPort,KafkaAddress2:KafkaPort',
  'properties.group.id' = 'GroupId',
  'scan.startup.mode' = 'latest-offset',
  'format' = 'json'
);
- If processing time is used, define it as a computed column. In the following example, proc is the processing time column:
CREATE TABLE orders (
  order_id string,
  order_channel string,
  order_time timestamp(3),
  pay_amount double,
  real_pay double,
  pay_time string,
  user_id string,
  user_name string,
  area_id string,
  proc as proctime()
) WITH (
  'connector' = 'kafka',
  'topic' = 'kafkaTopic',
  'properties.bootstrap.servers' = 'KafkaAddress1:KafkaPort,KafkaAddress2:KafkaPort',
  'properties.group.id' = 'GroupId',
  'scan.startup.mode' = 'latest-offset',
  'format' = 'json'
);
Syntax
TUMBLE(time_attr, window_interval, period_interval, lateness_interval)
Syntax Example
TUMBLE(testtime, INTERVAL '10' SECOND, INTERVAL '10' SECOND, INTERVAL '10' SECOND)
Description
| Parameter | Description | Format |
|---|---|---|
| time_attr | Event time or processing time attribute column | - |
| window_interval | Duration of the window | - |
| period_interval | Frequency of periodic triggering within the window. Before the window ends, the output result is updated at the interval specified by period_interval, counted from the window start time. If this parameter is not set, periodic triggering is not used. | - |
| lateness_interval | Time to postpone the end of the window. The system continues to collect data that reaches the window within lateness_interval after the window ends, and the output is updated for each such record. NOTE: If the time window uses processing time, lateness_interval does not take effect. | - |
- If period_interval is set to 0, periodic triggering is disabled for the window.
- If lateness_interval is set to 0, the latency after the window ends is disabled.
- If neither of the two parameters is set, both periodic triggering and latency are disabled and only the regular tumbling window functions are available.
- If only the latency function needs to be used, set period_interval to INTERVAL '0' SECOND.
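The lateness rule above can be sketched as follows (hypothetical Python, not the Flink implementation): a late record still updates the window's result as long as the watermark has not yet passed the window end plus lateness_interval.

```python
# Hypothetical sketch of the lateness rule; all names are illustrative.

def accept_late(event_time, window_end, watermark, lateness):
    """Return True if a record belonging to an already-closed window
    should still update the window's result."""
    belongs_to_window = event_time < window_end
    still_within_lateness = watermark < window_end + lateness
    return belongs_to_window and still_within_lateness

# Window [0, 30) with 5 seconds of lateness:
print(accept_late(13, 30, 33, 5))  # → True: watermark 33 < 35, record kept
print(accept_late(13, 30, 36, 5))  # → False: watermark 36 >= 35, record dropped
```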
Helper Function
| Helper Function | Description |
|---|---|
| TUMBLE_START(time_attr, window_interval, period_interval, lateness_interval) | Returns the timestamp of the inclusive lower bound of the corresponding tumbling window. |
| TUMBLE_END(time_attr, window_interval, period_interval, lateness_interval) | Returns the timestamp of the exclusive upper bound of the corresponding tumbling window. |
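Tumbling windows are aligned to multiples of the window size, so the bounds returned by TUMBLE_START and TUMBLE_END can be sketched as follows (illustrative Python with timestamps in seconds, not the Flink implementation):

```python
def tumble_bounds(ts, window_size):
    """Return the (inclusive start, exclusive end) of the tumbling
    window that contains timestamp ts, in seconds."""
    start = ts - (ts % window_size)
    return start, start + window_size

print(tumble_bounds(13, 30))  # → (0, 30)
print(tumble_bounds(33, 30))  # → (30, 60)
```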
Example
1. Kafka is used as the data source table containing order information, and JDBC is used as the result table to count the number of orders a user settles within 30 seconds. The user ID and window start time are used as the primary key, and results are written to JDBC in real time.
- Create a datasource connection for communication with the VPC and subnet where MySQL and Kafka are located, and bind the connection to the queue. Add an inbound rule to the security group to allow access from the queue, then test the connectivity using the MySQL and Kafka addresses. If the test succeeds, the datasource is bound to the queue; otherwise, the binding fails.
- Run the following statement to create the order_count table in the MySQL Flink database:
CREATE TABLE `flink`.`order_count` (
  `user_id` VARCHAR(32) NOT NULL,
  `window_start` TIMESTAMP NOT NULL,
  `window_end` TIMESTAMP NULL,
  `total_num` BIGINT UNSIGNED NULL,
  PRIMARY KEY (`user_id`, `window_start`)
) ENGINE = InnoDB
  DEFAULT CHARACTER SET = utf8mb4
  COLLATE = utf8mb4_general_ci;
- Create a Flink OpenSource SQL job and submit it. In this example, the window size is 30 seconds, the trigger period is 10 seconds, and the allowed lateness is 5 seconds. That is, before the window ends, an intermediate result is output every 10 seconds whenever the result has been updated. After the watermark passes the window end and the window closes, late data arriving while the watermark is still within 5 seconds of the window end is processed and counted in that window; data arriving after that is discarded.
CREATE TABLE orders (
  order_id string,
  order_channel string,
  order_time timestamp(3),
  pay_amount double,
  real_pay double,
  pay_time string,
  user_id string,
  user_name string,
  area_id string,
  watermark for order_time as order_time - INTERVAL '3' SECOND
) WITH (
  'connector' = 'kafka',
  'topic' = 'kafkaTopic',
  'properties.bootstrap.servers' = 'KafkaAddress1:KafkaPort,KafkaAddress2:KafkaPort',
  'properties.group.id' = 'GroupId',
  'scan.startup.mode' = 'latest-offset',
  'format' = 'json'
);

CREATE TABLE jdbcSink (
  user_id string,
  window_start timestamp(3),
  window_end timestamp(3),
  total_num BIGINT,
  primary key (user_id, window_start) not enforced
) WITH (
  'connector' = 'jdbc',
  'url' = 'jdbc:mysql://<yourMySQL>:3306/flink',
  'table-name' = 'order_count',
  'username' = '<yourUserName>',
  'password' = '<yourPassword>',
  'sink.buffer-flush.max-rows' = '1'
);

insert into jdbcSink
select
  user_id,
  TUMBLE_START(order_time, INTERVAL '30' SECOND, INTERVAL '10' SECOND, INTERVAL '5' SECOND),
  TUMBLE_END(order_time, INTERVAL '30' SECOND, INTERVAL '10' SECOND, INTERVAL '5' SECOND),
  COUNT(*)
from orders
GROUP BY
  user_id,
  TUMBLE(order_time, INTERVAL '30' SECOND, INTERVAL '10' SECOND, INTERVAL '5' SECOND);
- Insert data to Kafka. Assume that the orders are settled at different times and the order at 10:00:13 arrives late.
{"order_id":"202103241000000001", "order_channel":"webShop", "order_time":"2021-03-24 10:00:00", "pay_amount":"100.00", "real_pay":"100.00", "pay_time":"2021-03-24 10:02:03", "user_id":"0001", "user_name":"Alice", "area_id":"330106"}
{"order_id":"202103241000000002", "order_channel":"webShop", "order_time":"2021-03-24 10:00:20", "pay_amount":"100.00", "real_pay":"100.00", "pay_time":"2021-03-24 10:02:03", "user_id":"0001", "user_name":"Alice", "area_id":"330106"}
{"order_id":"202103241000000003", "order_channel":"webShop", "order_time":"2021-03-24 10:00:33", "pay_amount":"100.00", "real_pay":"100.00", "pay_time":"2021-03-24 10:02:03", "user_id":"0001", "user_name":"Alice", "area_id":"330106"}
{"order_id":"202103241000000004", "order_channel":"webShop", "order_time":"2021-03-24 10:00:13", "pay_amount":"100.00", "real_pay":"100.00", "pay_time":"2021-03-24 10:02:03", "user_id":"0001", "user_name":"Alice", "area_id":"330106"}
- Run the following statement in the MySQL database to view the output. Because each output upserts the same primary-key rows, the intermediate periodic results are overwritten and only the final result is displayed:
select * from order_count
user_id  window_start         window_end           total_num
0001     2021-03-24 10:00:00  2021-03-24 10:00:30  3
0001     2021-03-24 10:00:30  2021-03-24 10:01:00  1
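The counts above can be cross-checked with a small sketch that assigns each order to its 30-second window (illustrative Python; the timestamps are the seconds past 10:00:00 of the four sample orders):

```python
from collections import Counter

# Seconds past 10:00:00 for the four sample orders; the 13-second
# order arrives late but within the 5-second lateness period.
order_seconds = [0, 20, 33, 13]

window_size = 30
counts = Counter(sec - (sec % window_size) for sec in order_seconds)

# The window starting at 10:00:00 holds 3 orders; 10:00:30 holds 1.
print(counts[0], counts[30])  # → 3 1
```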