Updated on 2024-11-29 GMT+08:00

Enhancements to Flink SQL
- Using the DISTRIBUTEBY Feature
- Supporting Late Data in Flink SQL Window Functions
- Configuring Table-Level Time To Live (TTL) for Joining Multiple Flink Streams
- Verifying SQL Statements with the FlinkSQL Client
- Submitting a Job on the FlinkSQL Client
- Joining Big and Small Tables
- Deduplicating Data When Joining Big and Small Tables
- Setting Source Parallelism
- Limiting Read Rate for Flink SQL Kafka and Upsert-Kafka Connector
- Consuming Data in drs-json Format with FlinkSQL Kafka Connector
- Using ignoreDelete in JDBC Data Writes
- Join-To-Live
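
As a rough illustration of the table-level TTL topic listed above: open-source Flink (1.18 and later) exposes a comparable capability through the STATE_TTL join hint, which sets a separate join-state retention time for each input table; the MRS-specific syntax is described in the corresponding subtopic. A minimal sketch using the open-source hint, with assumed table and column names:

-- Keep join state for the orders side for 12 hours and the payments side for 1 hour.
-- Tables and columns are assumed for illustration; STATE_TTL is the open-source
-- Flink 1.18+ hint and may differ from the MRS syntax.
SELECT /*+ STATE_TTL('o' = '12h', 'p' = '1h') */
  o.order_id,
  p.amount
FROM orders AS o
JOIN payments AS p
  ON o.order_id = p.order_id;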
 
   Parent topic: Using Flink
  