Application Development Overview
Flink is a unified computing framework that supports both batch processing and stream processing. It provides a stream data processing engine that supports data distribution and parallel computing.
Flink provides high-concurrency pipelined data processing, millisecond-level latency, and high reliability, making it well suited to low-latency data processing.
The Flink system consists of the following parts:
- TaskManager
TaskManager is a service execution node of Flink that runs specific tasks. A Flink system can have multiple TaskManagers, all of which are equivalent to one another.
- JobManager
JobManager is a management node of Flink. It manages all TaskManagers and schedules tasks submitted by users to specific TaskManagers. In high-availability (HA) mode, multiple JobManagers are deployed; one is elected as the active JobManager, and the others remain standby.
Flink provides DataStream APIs that can be developed in Scala and Java, as shown in Table 1. A minimal Scala sketch follows the table.
Table 1 API description

| API | Description |
|---|---|
| Scala API | API in Scala, which can be used for data processing, such as filtering, joining, windowing, and aggregation. Scala APIs are concise and easy to read, so they are recommended for application development. |
| Java API | API in Java, which can be used for data processing, such as filtering, joining, windowing, and aggregation. |
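As an illustration of the DataStream API, the following is a minimal Scala sketch. It assumes a Flink 1.x Scala dependency and a plain-text socket source on localhost:9999; both the source and the window size are placeholder assumptions, not part of this documentation. The sketch filters, keys, windows, and aggregates a stream of words:

```scala
import org.apache.flink.streaming.api.scala._
import org.apache.flink.streaming.api.windowing.assigners.TumblingProcessingTimeWindows
import org.apache.flink.streaming.api.windowing.time.Time

object WordCountSketch {
  def main(args: Array[String]): Unit = {
    // Obtain the stream execution environment.
    val env = StreamExecutionEnvironment.getExecutionEnvironment

    // Placeholder source: read lines of text from a local socket.
    val text: DataStream[String] = env.socketTextStream("localhost", 9999)

    // Filter, key, window, and aggregate the stream.
    val counts = text
      .flatMap(_.toLowerCase.split("\\s+"))
      .filter(_.nonEmpty)
      .map(word => (word, 1))
      .keyBy(_._1)
      .window(TumblingProcessingTimeWindows.of(Time.seconds(5)))
      .sum(1)

    counts.print()
    env.execute("WordCount Sketch")
  }
}
```

The Java API exposes the same operators (filter, keyBy, window, sum) with equivalent semantics.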
For details about Flink, visit https://flink.apache.org/.