Creating and Submitting a Flink Job
Scenario Description
This section describes how to create and run a user-defined Flink job using APIs. For details on how to call APIs, see Calling APIs.
Constraints
- The first time a job is started on a newly created queue, it takes 6 to 10 minutes.
Involved APIs
- Creating a Queue: Create a queue.
- Uploading a Package Group: Upload the resource package required by the Flink custom job.
- Querying Resource Packages in a Group: Check whether the uploaded resource package is correct.
- Creating a Flink Jar Job: Create a user-defined Flink job.
- Running Jobs in Batches: Run a user-defined Flink job.
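All of these are plain REST calls that share the same request setup. The snippet below is a minimal Python sketch of that shared setup, not part of the API reference itself: the endpoint and token values are placeholders, and the use of the X-Auth-Token header assumes the token-based authentication described in Calling APIs.

# Shared setup for the API calls in this section (sketch only).
# The endpoint and token are placeholders; obtain real values as described in
# "Calling APIs" and "Obtaining a Project ID".
ENDPOINT = "https://<dli-endpoint>"              # replace with the actual DLI endpoint
PROJECT_ID = "48cc2c48765f481480c7db940d6409d1"  # project ID used in the examples below
TOKEN = "<IAM token>"                            # user token for authentication

def dli_headers(token: str) -> dict:
    """Common request headers; X-Auth-Token carries the IAM token."""
    return {"Content-Type": "application/json", "X-Auth-Token": token}

BASE_URL = f"{ENDPOINT}/v1.0/{PROJECT_ID}"       # prefix of the URIs used below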
Procedure
- Create a general-purpose queue. For details, see Creating a Queue. In the request, set resource_mode to 1 to create a dedicated queue.
- Upload the resource package required by the user-defined Flink job. For details, see Uploading a Package Group.
- Query the resource packages in the group to check that the upload is correct. For details, see Querying Resource Packages in a Group.
- Create a custom Flink job (see the request sketch after this step).
- API
URI format: POST /v1.0/{project_id}/streaming/flink-jobs
- Obtain the value of {project_id} from Obtaining a Project ID.
- For details about the request parameters, see Creating a Flink Jar Job.
- Request example
- Description: Create a user-defined Flink job in the project whose ID is 48cc2c48765f481480c7db940d6409d1.
- Example URL: POST https://{endpoint}/v1.0/48cc2c48765f481480c7db940d6409d1/streaming/flink-jobs
- Body:
{ "name": "test", "desc": "job for test", "queue_name": "testQueue", "manager_cu_number": 1, "cu_number": 2, "parallel_number": 1, "tm_cus": 1, "tm_slot_num": 1, "log_enabled": true, "obs_bucket": "bucketName", "smn_topic": "topic", "main_class": "org.apache.flink.examples.streaming.JavaQueueStream", "restart_when_exception": false, "entrypoint": "javaQueueStream.jar", "entrypoint_args":"-windowSize 2000 -rate3", "dependency_jars": [ "myGroup/test.jar", "myGroup/test1.jar" ], "dependency_files": [ "myGroup/test.csv", "myGroup/test1.csv" ] }
- Example response
{ "is_success": true, "message": "A Flink job is created successfully.", "job": { "job_id": 138, "status_name": "job_init", "status_desc": "" } }
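The request above can also be scripted. The following is a minimal Python sketch that submits the same create-job request with the requests library; the endpoint and IAM token are placeholders, and the job parameters are copied from the example body.

# Sketch only: create a user-defined Flink (Jar) job via
# POST /v1.0/{project_id}/streaming/flink-jobs.
import requests

endpoint = "https://<dli-endpoint>"              # placeholder: actual DLI endpoint
project_id = "48cc2c48765f481480c7db940d6409d1"  # project ID from the example
token = "<IAM token>"                            # placeholder: token obtained as in Calling APIs

url = f"{endpoint}/v1.0/{project_id}/streaming/flink-jobs"
headers = {"Content-Type": "application/json", "X-Auth-Token": token}

# Request body taken from the example above.
body = {
    "name": "test",
    "desc": "job for test",
    "queue_name": "testQueue",
    "manager_cu_number": 1,
    "cu_number": 2,
    "parallel_number": 1,
    "tm_cus": 1,
    "tm_slot_num": 1,
    "log_enabled": True,
    "obs_bucket": "bucketName",
    "smn_topic": "topic",
    "main_class": "org.apache.flink.examples.streaming.JavaQueueStream",
    "restart_when_exception": False,
    "entrypoint": "javaQueueStream.jar",
    "entrypoint_args": "-windowSize 2000 -rate 3",
    "dependency_jars": ["myGroup/test.jar", "myGroup/test1.jar"],
    "dependency_files": ["myGroup/test.csv", "myGroup/test1.csv"],
}

resp = requests.post(url, json=body, headers=headers)
resp.raise_for_status()
job = resp.json()["job"]  # e.g. {"job_id": 138, "status_name": "job_init", ...}
print("Created Flink job:", job["job_id"], job["status_name"])

Keep the returned job_id; it is the value passed to the batch-run API in the next step.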
- Run jobs in batches (see the request sketch after this step).
- API
URI format: POST /v1.0/{project_id}/streaming/jobs/run
- Obtain the value of {project_id} from Obtaining a Project ID.
- For details about the request parameters, see Running Jobs in Batches.
- Request example
- Description: Run the jobs whose IDs are 131, 130, 138, and 137 in the project whose ID is 48cc2c48765f481480c7db940d6409d1.
- Example URL: POST https://{endpoint}/v1.0/48cc2c48765f481480c7db940d6409d1/streaming/jobs/run
- Body:
{ "job_ids": [131,130,138,137], "resume_savepoint": true }
- Example response
[ { "is_success": "true", "message": "The request for submitting DLI jobs is delivered successfully." }, { "is_success": "true", "message": "The request for submitting DLI jobs is delivered successfully." }, { "is_success": "true", "message": "The request for submitting DLI jobs is delivered successfully." }, { "is_success": "true", "message": "The request for submitting DLI jobs is delivered successfully." } ]
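As with job creation, the batch-run request can be scripted. The sketch below submits the same job_ids with the requests library and prints one result per job; again, the endpoint and IAM token are placeholders.

# Sketch only: run several Flink jobs in one batch via
# POST /v1.0/{project_id}/streaming/jobs/run.
import requests

endpoint = "https://<dli-endpoint>"              # placeholder: actual DLI endpoint
project_id = "48cc2c48765f481480c7db940d6409d1"  # project ID from the example
token = "<IAM token>"                            # placeholder: token obtained as in Calling APIs

url = f"{endpoint}/v1.0/{project_id}/streaming/jobs/run"
headers = {"Content-Type": "application/json", "X-Auth-Token": token}

# Request body taken from the example above; job_ids are the IDs returned
# when the jobs were created.
body = {
    "job_ids": [131, 130, 138, 137],
    "resume_savepoint": True,
}

resp = requests.post(url, json=body, headers=headers)
resp.raise_for_status()

# The response is an array with one result object per submitted job ID.
for job_id, result in zip(body["job_ids"], resp.json()):
    print(job_id, result["is_success"], result["message"])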
Parent topic: Getting Started