Scenario
In this example, Flink consumes a custom data source and writes the consumed data to Elasticsearch or CSS.
This section describes how to build an Elasticsearch sink and set parameters so that Flink can write data to Elasticsearch.
Flink 1.12.0 and later versions are supported. Elasticsearch 7.x and later versions are supported, but Elasticsearch clusters that use HTTPS are not.
Data Preparation
- If a custom data source is used, ensure that the source cluster and the destination cluster can communicate with each other over the required network ports.
- If external data sources, such as Kafka and MySQL, are used, ensure that the user has the permissions required to operate on the data.
Development Guideline
- Import the dependency packages of Flink. The version must be the same as the Flink version in the cluster.
- Build a data source at the source.
- Build the Elasticsearch sink at the destination. (When building the sink, you can use the setRestClientFactory method to configure UserRestClientFactory.)
- Build a Flink execution environment.
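The steps above can be sketched in Java roughly as follows. This is a minimal sketch, assuming the Flink Elasticsearch 7 connector (`flink-connector-elasticsearch7`) is on the classpath; the host address, port, index name, and sample data are placeholders to replace with your actual cluster details:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.connectors.elasticsearch.ElasticsearchSinkFunction;
import org.apache.flink.streaming.connectors.elasticsearch7.ElasticsearchSink;
import org.apache.http.HttpHost;
import org.elasticsearch.action.index.IndexRequest;
import org.elasticsearch.client.Requests;

public class FlinkEsSinkExample {
    public static void main(String[] args) throws Exception {
        // Build the Flink execution environment.
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        // Build a data source at the source (a simple in-memory source for illustration;
        // replace with your custom source, Kafka, MySQL, etc.).
        DataStream<String> stream = env.fromElements("message-1", "message-2");

        // Elasticsearch hosts -- placeholder address; HTTPS clusters are not supported.
        List<HttpHost> hosts = new ArrayList<>();
        hosts.add(new HttpHost("127.0.0.1", 9200, "http"));

        // Build the Elasticsearch sink: convert each record into an IndexRequest.
        ElasticsearchSink.Builder<String> builder = new ElasticsearchSink.Builder<>(
                hosts,
                (ElasticsearchSinkFunction<String>) (element, ctx, indexer) -> {
                    Map<String, String> json = new HashMap<>();
                    json.put("data", element);
                    IndexRequest request = Requests.indexRequest()
                            .index("my-index")   // hypothetical index name
                            .source(json);
                    indexer.add(request);
                });

        // Flush after every element so the demo writes immediately; tune for production.
        builder.setBulkFlushMaxActions(1);

        // Optionally configure the REST client here, e.g. with a custom factory:
        // builder.setRestClientFactory(new UserRestClientFactory(...));

        stream.addSink(builder.build());
        env.execute("Write to Elasticsearch");
    }
}
```

Running this job requires a reachable Elasticsearch or CSS cluster; the `setRestClientFactory` call is the hook for connection-level settings such as timeouts or authentication.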