Querying PatchData Instances
Function
This API is used to query PatchData instances. Pagination query is supported.
URI
- URI format
GET /v2/{project_id}/factory/supplement-data?sort={sort}&page={page}&size={size}&name={name}&user_name={user_name}&status={status}&start_date={start_date}&end_date={end_date}
- Parameter description
| Parameter | Mandatory | Type | Description |
|---|---|---|---|
| project_id | Yes | String | Project ID. For details about how to obtain a project ID, see Project ID and Account ID. |
| name | No | String | PatchData instance name |
| user_name | No | String | Username |
| status | No | String | Instance status. SUCCESS: the job is successful; RUNNING: the job is running; CANCEL: the job has been canceled |
| sort | No | String | Sorting field. desc: results are displayed in descending order of creation time; asc: results are displayed in ascending order of creation time. Default value: desc |
| page | No | Integer | Start page of the paging list. The value must be greater than or equal to 0. Default value: 0 |
| size | No | Integer | Maximum number of records on each page. Default value: 10 |
| start_date | No | Long | Query start date, which is a 13-digit timestamp |
| end_date | No | Long | Query end date, which is a 13-digit timestamp |
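As an illustration of how the path and query parameters above combine, the sketch below assembles a request URL in Python. The endpoint host is a placeholder (not part of this API page), the project ID is taken from the example request further down, and the query parameters shown are optional.

```python
from urllib.parse import urlencode

# Placeholder endpoint; replace with the API endpoint for your region.
endpoint = "https://dayu.example.com"
# Project ID from the example request on this page; use your own project ID.
project_id = "62099355b894428e8916573ae635f1f9"

# Optional query parameters described in the table above.
params = {
    "sort": "desc",       # newest PatchData instances first (default)
    "page": 0,            # start page of the paging list, >= 0
    "size": 10,           # maximum records per page
    "status": "RUNNING",  # SUCCESS | RUNNING | CANCEL
}

url = f"{endpoint}/v2/{project_id}/factory/supplement-data?{urlencode(params)}"
print(url)
```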
Request Parameters
| Parameter | Mandatory | Type | Description |
|---|---|---|---|
| workspace | No | String | Workspace ID |
| X-Auth-Token | Yes | String | IAM token. Minimum length: 0. Maximum length: 4096 |
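A minimal sketch of sending the request with these headers, using the Python requests library, is shown below. The endpoint URL, IAM token, and workspace ID are placeholders to be replaced with your own values.

```python
import requests

# Placeholder URL and credentials; substitute your own values.
url = "https://dayu.example.com/v2/62099355b894428e8916573ae635f1f9/factory/supplement-data"
headers = {
    "X-Auth-Token": "<your IAM token>",  # mandatory
    "workspace": "<workspace ID>",       # optional
}

resp = requests.get(url, headers=headers, params={"page": 0, "size": 10}, timeout=30)
resp.raise_for_status()
print(resp.json()["total"])  # number of PatchData jobs matching the query
```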
Response Parameters
| Parameter | Mandatory | Type | Description |
|---|---|---|---|
| total | Yes | Integer | Number of jobs |
| success | Yes | Boolean | The value can be true or false. |
| msg | Yes | String | Returned message, for example, success |
| rows | Yes | List<row> | Information about PatchData instances. For details, see Table 3. |
Table 3 rows parameters

| Parameter | Mandatory | Type | Description |
|---|---|---|---|
| job_list | Yes | List<String> | PatchData jobs. A PatchData job may have dependent jobs, so the list may contain multiple jobs. |
| name | Yes | String | PatchData instance name |
| user_name | Yes | String | Username |
| type | Yes | int | How the PatchData job is triggered. The value is 0 or 1. 0: the PatchData job is triggered on the job monitoring page; 1: the PatchData job is triggered by a restoration. |
| start_date | Yes | Long | Job start date, which is a 13-digit timestamp |
| end_date | Yes | Long | Job end date, which is a 13-digit timestamp |
| parallel | Yes | int | Number of parallel periods of the PatchData instance. The value ranges from 1 to 5. |
| status | Yes | String | Instance status |
| submitted_date | Yes | Long | Job creation time |
| supplement_data_run_time | No | supplement_data_run_time object | PatchData time period. Currently, data can be patched only on a daily basis. If this parameter is not specified, the default value 00:00-00:00 is used. For details, see Table 4. |
| supplement_data_instance_time | No | supplement_data_instance_time object | Discrete time for the PatchData job. For details, see Table 5. |
Table 4 supplement_data_run_time parameters

| Parameter | Mandatory | Type | Description |
|---|---|---|---|
| time_of_day | Yes | String | Time period for patching data every day, for example, 10:15-23:30 |
| day_of_week | No | String | Days of each week for patching data, for example, 10:15 to 23:30 on Monday and Wednesday |
| day_of_month | No | String | Days of each month for patching data, for example, 1,3, which indicates the first and third days of each month |
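The sketch below shows one way a client might consume these response fields: it walks rows, interprets the type flag, and converts the 13-digit millisecond timestamps to UTC datetimes. The payload literal mirrors the example response at the end of this page.

```python
from datetime import datetime, timezone

# Payload shaped like the example response on this page; field names follow the tables above.
payload = {
    "total": 1,
    "success": True,
    "msg": "success",
    "rows": [{
        "name": "P_job_8810_20230821_175711",
        "job_list": ["job_8810", "job_1000"],
        "status": "RUNNING",
        "type": 0,
        "start_date": 1692547200000,
        "end_date": 1692633599000,
        "submitted_date": 1692611566436,
        "parallel": 1,
        "user_name": "user_test",
        "supplement_data_run_time": {"time_of_day": "00:00-00:00"},
    }],
}

def ms_to_utc(ms: int) -> datetime:
    """Convert a 13-digit millisecond timestamp to an aware UTC datetime."""
    return datetime.fromtimestamp(ms / 1000, tz=timezone.utc)

for row in payload["rows"]:
    trigger = "job monitoring page" if row["type"] == 0 else "restoration"
    print(f"{row['name']}: {row['status']} (triggered via {trigger})")
    print(f"  patch range: {ms_to_utc(row['start_date'])} -> {ms_to_utc(row['end_date'])}")
    print(f"  daily window: {row['supplement_data_run_time']['time_of_day']}")
```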
Example Request
Query the job list.
GET /v2/62099355b894428e8916573ae635f1f9/factory/supplement-data
Example Response
- Success response: HTTP status code 200
{ "msg": "success", "rows": [ { "end_date": 1692633599000, "job_list": [ "job_8810", "job_1000" ], "name": "P_job_8810_20230821_175711", "parallel": 1, "start_date": 1692547200000, "status": "RUNNING", "submitted_date": 1692611566436, "supplement_data_run_time": { "time_of_day": "00:00-00:00" }, "supplement_data_instance_time": {}, "type": 0, "user_name": "user_test" } ], "success": true, "total": 1 } - Failure response
{ "error_code":"DLF.3051", "error_msg":"The request parameter is invalid." }