Updated on 2022-09-14 GMT+08:00

Scenario

In this example, Flink consumes data from a custom data source and writes it to Elasticsearch or Cloud Search Service (CSS).

This section describes how to build an Elasticsearch sink and configure its parameters so that Flink can write data to Elasticsearch.

This example requires Flink 1.12.0 or later and Elasticsearch 7.x or later.
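As a starting point, the Flink Elasticsearch 7 connector can be declared as a Maven dependency. The following fragment is a sketch only; the Scala suffix (`_2.11`) and the version (`1.12.0`) are assumptions and must match the Flink version running in your cluster.

```xml
<!-- Flink connector for Elasticsearch 7.x; align the version with your cluster. -->
<dependency>
    <groupId>org.apache.flink</groupId>
    <artifactId>flink-connector-elasticsearch7_2.11</artifactId>
    <version>1.12.0</version>
</dependency>
```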

Data Preparation

  • If a custom data source is used, ensure that network connectivity exists between the source cluster and the destination cluster on the required ports.
  • If external data sources, such as Kafka and MySQL, are used, ensure that the corresponding user has the required permissions to operate on the data.

Development Guideline

  1. Import the Flink dependencies. Their version must match the Flink version in the cluster.
  2. Build a data source at the source.
  3. Build the Elasticsearch sink at the destination. (When building the sink, you can call the setRestClientFactory method to pass a custom RestClientFactory, such as UserRestClientFactory, to customize the underlying REST client.)
  4. Build a Flink execution environment.
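The steps above can be sketched as a single Flink job. This is a minimal sketch, not a definitive implementation: the in-memory source stands in for your custom data source, the host address and the index name `my-index` are placeholders, and `UserRestClientFactory` is assumed to be a user-defined RestClientFactory implementation. Running it requires a Flink runtime and a reachable Elasticsearch 7.x cluster.

```java
import java.util.ArrayList;
import java.util.List;

import org.apache.flink.api.common.functions.RuntimeContext;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.connectors.elasticsearch.ElasticsearchSinkFunction;
import org.apache.flink.streaming.connectors.elasticsearch.RequestIndexer;
import org.apache.flink.streaming.connectors.elasticsearch7.ElasticsearchSink;
import org.apache.http.HttpHost;
import org.elasticsearch.action.index.IndexRequest;
import org.elasticsearch.client.Requests;
import org.elasticsearch.common.xcontent.XContentType;

public class FlinkEsSinkExample {
    public static void main(String[] args) throws Exception {
        // Step 4: build the Flink execution environment.
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        // Step 2: build a data source at the source.
        // A simple in-memory source stands in for the custom data source here.
        DataStream<String> stream =
                env.fromElements("{\"message\":\"hello\"}", "{\"message\":\"world\"}");

        // Step 3: build the Elasticsearch sink at the destination.
        List<HttpHost> hosts = new ArrayList<>();
        hosts.add(new HttpHost("127.0.0.1", 9200, "http")); // replace with your ES address

        ElasticsearchSink.Builder<String> builder = new ElasticsearchSink.Builder<>(
                hosts,
                new ElasticsearchSinkFunction<String>() {
                    @Override
                    public void process(String element, RuntimeContext ctx,
                                        RequestIndexer indexer) {
                        // Index each record as a JSON document.
                        IndexRequest request = Requests.indexRequest()
                                .index("my-index") // placeholder index name
                                .source(element, XContentType.JSON);
                        indexer.add(request);
                    }
                });

        // Flush after every record; suitable for testing, tune for production.
        builder.setBulkFlushMaxActions(1);

        // Optionally customize the low-level REST client (e.g., for authentication).
        // UserRestClientFactory is a hypothetical user-defined RestClientFactory.
        // builder.setRestClientFactory(new UserRestClientFactory());

        stream.addSink(builder.build());

        env.execute("Write to Elasticsearch");
    }
}
```

The sink buffers records into bulk requests by default; setBulkFlushMaxActions(1) disables that buffering so each record is written immediately, which makes small tests easier to observe.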