Flink Job Engine
The Flink web UI provides a web-based visual development platform. You only need to write SQL statements to develop jobs, which greatly lowers the barrier to job development. In addition, exposing platform capabilities allows service personnel to develop jobs by writing SQL statements and respond to requirements quickly, greatly reducing the Flink job development workload.
This section applies only to MRS 3.1.0 or later.
Flink Web UI Features
The Flink web UI has the following features:
- Enterprise-class visual O&M: GUI-based O&M management, job monitoring, and standardized Flink SQL statements for job development.
- Quick cluster connection: After configuring the client and user credential key file, you can quickly access a cluster using the cluster connection function.
- Quick data connection: You can access a component by configuring a data connection. If Data Connection Type is set to HDFS, you need to create a cluster connection. For the other data connection types, a cluster connection is required when Authentication Mode is set to KERBEROS and is not required when Authentication Mode is set to SIMPLE.
Note: If Data Connection Type is set to Kafka, Authentication Type cannot be set to KERBEROS.
- Visual development platform: Input/output mapping tables can be customized to meet the requirements of different input sources and output destinations (see the sketch after this list).
- Easy-to-use GUI-based job management
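For illustration, an input mapping table for a Kafka source could be declared with Flink SQL DDL along the following lines. This is a minimal sketch: the table name, topic, broker address, and schema are hypothetical placeholders, not values taken from this document.

```sql
-- Hypothetical input mapping table: exposes a Kafka topic as a SQL table.
-- Topic, broker address, and schema are illustrative placeholders.
CREATE TABLE kafka_source (
  user_id    STRING,
  item_id    STRING,
  event_time TIMESTAMP(3)
) WITH (
  'connector' = 'kafka',
  'topic' = 'input_topic',
  'properties.bootstrap.servers' = 'broker1:9092',
  'properties.group.id' = 'demo_group',
  'scan.startup.mode' = 'latest-offset',
  'format' = 'json'
);
```

An output mapping table is declared the same way, with the connector options pointing at the destination instead of the source.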
Key Web UI Capabilities
Table 1 shows the key capabilities provided by the Flink web UI.
| Item | Description |
|---|---|
| Batch-stream convergence | (See the runtime-mode sketch after this table.) |
| Flink SQL kernel capabilities | |
| Flink SQL functions on the console | |
| Flink job visual management | |
| Performance and reliability | |
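As an example of batch-stream convergence, the same Flink SQL query can run as either a streaming or a batch job; only the execution mode setting differs. The sketch below assumes a hypothetical orders mapping table, and the exact SET syntax varies slightly across Flink versions.

```sql
-- Batch-stream convergence sketch: one query, two execution modes.
-- 'orders' is a hypothetical mapping table used only for illustration.
SET 'execution.runtime-mode' = 'streaming';  -- or 'batch'

SELECT user_id, COUNT(*) AS order_cnt
FROM orders
GROUP BY user_id;
```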
Flink Web UI Application Process
The Flink web UI application process is as follows:
| Step | Description | Reference |
|---|---|---|
| Creating an application | Applications can be used to isolate different upper-layer services. | |
| Creating a cluster connection | Different clusters can be accessed by configuring cluster connections. | |
| Creating a data connection | Through data connections, you can access different data services, including HDFS and Kafka. | |
| Creating a stream table | Data tables can be used to define the basic attributes and parameters of source tables, dimension tables, and output tables. | |
| Creating a SQL/JAR job (stream/batch job) | APIs can be used to define Flink jobs, including Flink SQL and Flink JAR jobs. (See the end-to-end sketch after this table.) | |
| Managing jobs | A created job can be managed, including starting, developing, stopping, deleting, and editing the job. | |
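To make the stream table and SQL job steps concrete, the following is a minimal, hypothetical Flink SQL job: it maps a Kafka topic to a source table, defines a print sink, and submits a continuous aggregation. Every name and connection property here is an illustrative placeholder rather than a value taken from this document.

```sql
-- Hypothetical stream job: Kafka source -> aggregation -> print sink.
-- All table names, topics, and addresses are placeholders.
CREATE TABLE orders_src (
  user_id STRING,
  amount  DOUBLE
) WITH (
  'connector' = 'kafka',
  'topic' = 'orders',
  'properties.bootstrap.servers' = 'broker1:9092',
  'properties.group.id' = 'flink_ui_demo',
  'scan.startup.mode' = 'latest-offset',
  'format' = 'json'
);

-- The print connector writes results to the TaskManager logs,
-- which is convenient for verifying a new job.
CREATE TABLE totals_sink (
  user_id STRING,
  total   DOUBLE
) WITH (
  'connector' = 'print'
);

-- Submitting this INSERT is what creates the running Flink job.
INSERT INTO totals_sink
SELECT user_id, SUM(amount) AS total
FROM orders_src
GROUP BY user_id;
```

On the web UI, statements like these would be entered in the SQL editor of a Flink SQL job and the job then started from the job management page.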