
Scenario Description

In this example, Flink consumes data from a custom source and writes it to Elasticsearch or Cloud Search Service (CSS).

This section describes how to build an Elasticsearch sink and set parameters so that Flink can write data to Elasticsearch.

Flink 1.12.0 and later versions are supported. Elasticsearch 7.x and later versions are supported, but Elasticsearch clusters that use HTTPS are not.

Data Preparation

  • If a custom data source is used, ensure that the source cluster and the destination cluster can reach each other over the required network port (see the reachability check below).
  • If external data sources, such as Kafka and MySQL, are used, ensure that the corresponding user has the required data operation permissions.
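A quick way to verify that the destination port is reachable from a source node is a plain TCP connect, as in the minimal sketch below. The host address and port are placeholders; replace them with your Elasticsearch or CSS endpoint.

```java
import java.net.InetSocketAddress;
import java.net.Socket;

public class PortCheck {
    public static void main(String[] args) throws Exception {
        // Placeholder endpoint: replace with your Elasticsearch/CSS host and port.
        try (Socket socket = new Socket()) {
            socket.connect(new InetSocketAddress("192.168.0.10", 9200), 3000); // 3 s timeout
            System.out.println("Destination port is reachable.");
        }
    }
}
```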

Development Guideline

  1. Import the Flink dependencies. Their version must match the Flink version running in the cluster.
  2. Build a data source at the source.
  3. Build the Elasticsearch sink at the destination. (You can call the setRestClientFactory method to configure UserRestClientFactory when building the sink; see the sketch after this list.)
  4. Build a Flink execution environment.
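The following is a minimal sketch of these steps using the open-source Flink Elasticsearch 7 connector (for step 1, this corresponds to adding the flink-connector-elasticsearch7 Maven artifact at the same version as the cluster's Flink). The host address, the index name, and the fromElements placeholder source are assumptions for illustration; replace them with your custom source and your cluster endpoint, which must use HTTP rather than HTTPS. UserRestClientFactory stands for a user-defined RestClientFactory implementation, so that call is left commented out.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

import org.apache.flink.api.common.functions.RuntimeContext;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.connectors.elasticsearch.ElasticsearchSinkFunction;
import org.apache.flink.streaming.connectors.elasticsearch.RequestIndexer;
import org.apache.flink.streaming.connectors.elasticsearch7.ElasticsearchSink;
import org.apache.http.HttpHost;
import org.elasticsearch.action.index.IndexRequest;
import org.elasticsearch.client.Requests;

public class FlinkToElasticsearch {
    public static void main(String[] args) throws Exception {
        // Step 4: build the Flink execution environment.
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        // Step 2: placeholder source; replace with your custom data source.
        DataStream<String> stream = env.fromElements("record-1", "record-2", "record-3");

        // Step 3: build the Elasticsearch sink.
        // Placeholder endpoint; HTTPS clusters are not supported, so the scheme is "http".
        List<HttpHost> httpHosts = new ArrayList<>();
        httpHosts.add(new HttpHost("192.168.0.10", 9200, "http"));

        ElasticsearchSink.Builder<String> builder = new ElasticsearchSink.Builder<>(
                httpHosts,
                new ElasticsearchSinkFunction<String>() {
                    private IndexRequest createIndexRequest(String element) {
                        Map<String, String> json = new HashMap<>();
                        json.put("data", element);
                        // "my-index" is a placeholder index name.
                        return Requests.indexRequest().index("my-index").source(json);
                    }

                    @Override
                    public void process(String element, RuntimeContext ctx, RequestIndexer indexer) {
                        indexer.add(createIndexRequest(element));
                    }
                });

        // Flush after every record so writes are visible immediately; tune for production.
        builder.setBulkFlushMaxActions(1);

        // Optional: plug in a user-defined RestClientFactory for extra client settings.
        // builder.setRestClientFactory(new UserRestClientFactory());

        stream.addSink(builder.build());

        env.execute("Write to Elasticsearch");
    }
}
```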