Listing Job Templates
Function
This API is used to list job templates.
URI
- URI format
- Parameter description
Table 1 URI parameter

Parameter | Mandatory | Type | Description
---|---|---|---
project_id | Yes | String | Project ID, which is used for resource isolation. For details about how to obtain its value, see Obtaining a Project ID.
Table 2 Query parameters

Parameter | Mandatory | Type | Description
---|---|---|---
type | Yes | String | Template type. Available value: SPARK (Spark template). Currently, only Spark templates are supported.
keyword | No | String | Keyword used to search for template names. Fuzzy match is supported.
page-size | No | Integer | Maximum number of records displayed on each page. Value range: [1, 100]. The default value is 50.
current-page | No | Integer | Current page number. The default value is 1.
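As a quick illustration of how these query parameters combine, the sketch below sends a list request with the Python requests library. The endpoint host, the /v3/{project_id}/templates path, and the X-Auth-Token header are assumptions for this example and are not defined by this section; substitute the values for your environment.

```python
# Minimal sketch of listing job templates, assuming a token-authenticated
# GET /v3/{project_id}/templates endpoint (both are assumptions, not
# confirmed by this section).
import requests

endpoint = "https://dli.example-region.example.com"  # hypothetical host
project_id = "0123456789abcdef0123456789abcdef"      # hypothetical project ID

resp = requests.get(
    f"{endpoint}/v3/{project_id}/templates",
    headers={"X-Auth-Token": "<your-token>"},
    params={
        "type": "SPARK",     # mandatory; only Spark templates are supported
        "keyword": "test",   # optional fuzzy match on template names
        "page-size": 50,     # optional, range [1, 100], default 50
        "current-page": 1,   # optional, default 1
    },
)
resp.raise_for_status()
print(resp.json()["count"])
```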
Request
None
Response
Table 3 Response parameters

Parameter | Type | Description
---|---|---
is_success | Boolean | Whether the request is successfully executed. Value true indicates that the request is successfully executed.
message | String | System prompt. If execution succeeds, this parameter may be left blank.
count | Integer | Number of returned templates.
templates | Array of objects | Template information list. For details, see Table 4.
Table 4 templates parameters

Parameter | Type | Description
---|---|---
type | String | Template type.
id | String | Template ID.
name | String | Template name.
body | Object | Template content. For details, see Table 5.
group | String | Template group.
description | String | Description of the template.
language | String | Language of the template.
owner | String | Creator of the template.
Table 5 body parameters

Parameter | Type | Description
---|---|---
file | String | Name of the package that is of the JAR or pyFile type and has been uploaded to the DLI resource management system. You can also specify an OBS path, for example, obs://bucket_name/package_name.
className | String | Java/Spark main class of the template.
cluster_name | String | Queue name. Set this parameter to the name of the created DLI queue. NOTE: You are advised to use the queue parameter instead. The queue and cluster_name parameters cannot coexist.
args | Array of Strings | Input parameters of the main class, that is, application parameters.
sc_type | String | Compute resource type. Currently, resource types A, B, and C are available. If this parameter is not specified, the minimum configuration (type A) is used.
jars | Array of Strings | Name of the package that is of the JAR type and has been uploaded to the DLI resource management system. You can also specify an OBS path, for example, obs://bucket_name/package_name.
pyFiles | Array of Strings | Name of the package that is of the PyFile type and has been uploaded to the DLI resource management system. You can also specify an OBS path, for example, obs://bucket_name/package_name.
files | Array of Strings | Name of the package that is of the file type and has been uploaded to the DLI resource management system. You can also specify an OBS path, for example, obs://bucket_name/package_name.
modules | Array of Strings | Name of the dependent system resource module. You can view the module name using the API Querying Resource Packages in a Group (Discarded). DLI provides dependency modules for executing datasource jobs; each data source service has a corresponding module.
resources | Array of objects | JSON object list, including the name and type of each package that has been uploaded to the queue.
groups | Array of objects | JSON object list, including the package group resources. If the type of a name in resources is not verified, the package with that name exists in the group. For details about the format, see the request example.
conf | Object | Batch configuration items. For details, see Spark Configuration.
name | String | Batch processing task name. The value contains a maximum of 128 characters.
driverMemory | String | Driver memory of the Spark application, for example, 2G or 2048M. This configuration item replaces the default value in sc_type. The unit must be provided; otherwise, the startup fails.
driverCores | Integer | Number of CPU cores of the Spark application driver. This configuration item replaces the default value in sc_type.
executorMemory | String | Executor memory of the Spark application, for example, 2G or 2048M. This configuration item replaces the default value in sc_type. The unit must be provided; otherwise, the startup fails.
executorCores | Integer | Number of CPU cores of each Executor in the Spark application. This configuration item replaces the default value in sc_type.
numExecutors | Integer | Number of Executors in the Spark application. This configuration item replaces the default value in sc_type.
obs_bucket | String | OBS bucket for storing Spark job files. Set this parameter when you need to save jobs.
auto_recovery | Boolean | Whether to enable the retry function. If enabled, Spark jobs are automatically retried after an exception occurs. The default value is false.
max_retry_times | Integer | Maximum number of retries. The maximum value is 100, and the default value is 20.
feature | String | Job feature, that is, the type of the Spark image used by the job.
spark_version | String | Version of the Spark component.
image | String | Custom image, in the format organization_name/image_name:image_version.
queue | String | Queue name. Set this parameter to the name of the created DLI queue. The queue must be of the general-purpose type. NOTE: The queue and cluster_name parameters cannot coexist.
catalog_name | String | To access metadata, set this parameter to dli.
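To make the body fields concrete, the following is a hypothetical template body; every value (package paths, queue name, resource sizing) is illustrative only and does not come from this section.

```python
# Hypothetical Spark template body illustrating the fields in Table 5.
# All names and values are made-up examples, not API defaults.
spark_template_body = {
    "file": "obs://my-bucket/batch-driver.jar",  # JAR uploaded to OBS
    "className": "com.example.WordCount",        # Spark main class
    "queue": "general_queue",                    # general-purpose DLI queue
    "args": ["obs://my-bucket/in/", "obs://my-bucket/out/"],
    "sc_type": "A",          # minimum compute configuration
    "driverMemory": "2G",    # unit is mandatory
    "driverCores": 1,
    "executorMemory": "2G",
    "executorCores": 1,
    "numExecutors": 2,
    "obs_bucket": "my-bucket",
    "auto_recovery": True,
    "max_retry_times": 20,
}
```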
Example Request
None
Example Response
{ "is_success": true, "message": "", "templates": [ { "name": "test2", "body": { "auto_recovery": false, "max_retry_times": 20, }, "group": "", "description": "", "type": "SPARK", "id": "3c92c202-b17c-4ed7-b353-ea08629dd671" } ], "count": 1 }
Status Codes
Status Code | Description
---|---
200 | OK
Error Codes
For details, see Error Codes.