Best Practices
This section provides scenario-specific instructions and best practices to help you use DLI more effectively for big data analytics and processing. Illustrative SQL sketches for the most common scenarios follow the table.
| Scenario | Description |
|---|---|
| Spark SQL job development | Use a Spark SQL job to create OBS tables, and import, insert, and query OBS table data. |
| Flink OpenSource SQL job development | Use a Flink OpenSource SQL job to read data from Kafka and write the data to RDS. |
| | Use a Flink OpenSource SQL job to read data from Kafka and write the data to GaussDB(DWS). |
| | Use a Flink OpenSource SQL job to read data from Kafka and write the data to Elasticsearch. |
| | Use a Flink OpenSource SQL job to read data from MySQL CDC and write the data to GaussDB(DWS). |
| | Use a Flink OpenSource SQL job to read data from PostgreSQL CDC and write the data to GaussDB(DWS). |
| Flink Jar job development | Create a custom Flink Jar job to interact with MRS. |
| | Write Kafka data to OBS. |
| | Use a Flink Jar job to connect to Kafka with SASL_SSL authentication enabled. |
| Spark Jar job development | Write a Spark program to read and query OBS data, compile and package your code, and submit a Spark Jar job. |
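For the Spark SQL scenario, the workflow is: create an OBS-backed table, insert or import data, and query it. The following is a minimal sketch only; the bucket path, table schema, and the `USING parquet` clause are illustrative assumptions rather than values from this page, and DLI's exact DDL is described in the scenario documentation.

```sql
-- Minimal Spark SQL sketch: create an OBS-backed table, insert, query.
-- The OBS path and schema are hypothetical placeholders.
CREATE TABLE IF NOT EXISTS sales (
  order_id STRING,
  amount   DOUBLE,
  dt       STRING
)
USING parquet
LOCATION 'obs://demo-bucket/warehouse/sales/';

-- Insert a sample row, then aggregate per day.
INSERT INTO sales VALUES ('o-1001', 25.50, '2024-01-01');

SELECT dt, SUM(amount) AS daily_total
FROM sales
GROUP BY dt;
```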
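For the Kafka-to-RDS scenario, a Flink OpenSource SQL job typically declares a Kafka source table and a JDBC sink table, then connects them with an `INSERT INTO` statement. This is a minimal sketch assuming the open-source Flink `kafka` and `jdbc` connectors; every endpoint, topic, table name, and credential below is a placeholder.

```sql
-- Hypothetical Kafka source table; broker, topic, and schema are placeholders.
CREATE TABLE orders_src (
  order_id STRING,
  amount   DOUBLE
) WITH (
  'connector' = 'kafka',
  'topic' = 'orders',
  'properties.bootstrap.servers' = 'kafka-broker:9092',
  'properties.group.id' = 'dli-demo',
  'scan.startup.mode' = 'latest-offset',
  'format' = 'json'
);

-- Hypothetical RDS (MySQL) sink table via the generic JDBC connector.
CREATE TABLE orders_rds (
  order_id STRING,
  amount   DOUBLE
) WITH (
  'connector' = 'jdbc',
  'url' = 'jdbc:mysql://rds-host:3306/demo',
  'table-name' = 'orders_copy',
  'username' = 'demo_user',
  'password' = 'demo_pass'
);

-- The streaming pipeline itself: continuously copy Kafka records into RDS.
INSERT INTO orders_rds
SELECT order_id, amount FROM orders_src;
```

The GaussDB(DWS) and Elasticsearch variants follow the same pattern; only the sink table's `WITH` options change (for example, a PostgreSQL-style JDBC URL for GaussDB(DWS), or the `elasticsearch-7` connector).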
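The CDC scenarios swap the Kafka source for a change-data-capture source, so that inserts, updates, and deletes in the upstream database are replayed into GaussDB(DWS). A minimal sketch, assuming the open-source `mysql-cdc` connector and a generic PostgreSQL-compatible JDBC sink; the hostnames, ports, credentials, and schema are hypothetical, and DLI's documented DWS sink options may differ.

```sql
-- Hypothetical MySQL CDC source; the primary key lets Flink apply
-- upstream updates and deletes, not just inserts.
CREATE TABLE users_cdc (
  id   INT,
  name STRING,
  PRIMARY KEY (id) NOT ENFORCED
) WITH (
  'connector' = 'mysql-cdc',
  'hostname' = 'mysql-host',
  'port' = '3306',
  'username' = 'cdc_user',
  'password' = 'cdc_pass',
  'database-name' = 'demo',
  'table-name' = 'users'
);

-- GaussDB(DWS) is PostgreSQL-compatible, so a plain JDBC sink serves as a
-- sketch; the declared primary key puts the sink in upsert mode.
CREATE TABLE users_dws (
  id   INT,
  name STRING,
  PRIMARY KEY (id) NOT ENFORCED
) WITH (
  'connector' = 'jdbc',
  'url' = 'jdbc:postgresql://dws-host:8000/demo',
  'table-name' = 'users_copy',
  'username' = 'dws_user',
  'password' = 'dws_pass'
);

INSERT INTO users_dws SELECT id, name FROM users_cdc;
```

The PostgreSQL CDC variant has the same shape, with a `postgres-cdc` source table instead.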
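For Kafka with SASL_SSL authentication, the essential point is that the standard Kafka client security properties must reach the connector. In a Flink Jar job you set them on the consumer or producer `Properties` object; in Flink OpenSource SQL the same keys pass through the connector's `properties.*` prefix, as sketched below. The SASL mechanism, truststore path, and credentials are placeholders.

```sql
-- Hypothetical secured Kafka source; the security options are standard
-- Kafka client properties forwarded via the 'properties.*' prefix.
CREATE TABLE secure_src (
  msg STRING
) WITH (
  'connector' = 'kafka',
  'topic' = 'secure-topic',
  'properties.bootstrap.servers' = 'kafka-broker:9093',
  'properties.group.id' = 'dli-demo',
  'scan.startup.mode' = 'latest-offset',
  'format' = 'json',
  'properties.security.protocol' = 'SASL_SSL',
  'properties.sasl.mechanism' = 'PLAIN',
  'properties.sasl.jaas.config' = 'org.apache.kafka.common.security.plain.PlainLoginModule required username="demo_user" password="demo_pass";',
  'properties.ssl.truststore.location' = '/opt/truststore.jks',
  'properties.ssl.truststore.password' = 'trust_pass'
);
```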