
How Do I Create an OBS Partitioned Table for a Flink SQL Job?

Scenario

When running a Flink SQL job, you may need to write the results to an OBS partitioned table so that the data can be used for subsequent batch processing.

Procedure

The following example uses the day field as the partition field and the parquet encoding format to dump car_infos data to OBS. For more information, see File System Sink Stream (Recommended).

-- Sink stream partitioned by the day field, encoded as parquet
create sink stream car_infos (
  carId string,
  carOwner string,
  average_speed double,
  day string
) partitioned by (day)
with (
  type = "filesystem",
  file.path = "obs://obs-sink/car_infos",
  encode = "parquet",
  ak = "{{myAk}}",
  sk = "{{mySk}}"
);
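Data reaches the sink by inserting from a source stream. A minimal sketch, assuming a source stream named car_infos_source with matching columns has already been created:

-- car_infos_source is a hypothetical source stream used for illustration
insert into car_infos
select carId, carOwner, average_speed, day
from car_infos_source;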

Structure of the data storage directory in OBS: obs://obs-sink/car_infos/day=xx/part-x-x.
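For example, after the job has written data for two days, the layout might look like this (dates and file names are illustrative):

obs://obs-sink/car_infos/day=2024-05-01/part-0-0
obs://obs-sink/car_infos/day=2024-05-02/part-0-0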

After the data is generated, you can create an OBS partitioned table for subsequent batch processing using the following SQL statements:

  1. Create an OBS partitioned table.
    -- External table over the OBS path written by the Flink sink
    create table car_infos (
      carId string,
      carOwner string,
      average_speed double
    )
    partitioned by (day string)
    stored as parquet
    location 'obs://obs-sink/car_infos';
    
  2. Restore partition information from the associated OBS path.
    alter table car_infos recover partitions;
    
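Once the partitions are recovered, the table can be queried like any other partitioned table. A minimal verification sketch (the partition value is illustrative):

-- List the recovered partitions, then query one of them
show partitions car_infos;
select * from car_infos where day = '2024-05-01';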
