Obtaining Job Details
Function
This API is used to obtain job details.
URI
GET /v1/{project_id}/instances/{instance_id}/lf-jobs/{job_id}
| Parameter | Mandatory | Type | Description |
|---|---|---|---|
| project_id | Yes | String | Project ID. For how to obtain the project ID, see Obtaining a Project ID (lakeformation_04_0026.xml). |
| instance_id | Yes | String | LakeFormation instance ID. The value is automatically generated when the instance is created, for example, 2180518f-42b8-4947-b20b-adfc53981a25. |
| job_id | Yes | String | Job ID. |
Request Parameters
| Parameter | Mandatory | Type | Description |
|---|---|---|---|
| X-Auth-Token | Yes | String | Tenant token. |
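As a sketch, the request above can be assembled as follows. All identifier values and the endpoint host below are placeholder assumptions, not real values:

```python
# Build the "obtain job details" request URL and headers.
# Every value below is a placeholder; substitute real values from your environment.
endpoint = "https://lakeformation.example.com"  # assumed endpoint host
project_id = "0a1b2c3d4e5f"                     # assumed project ID
instance_id = "2180518f-42b8-4947-b20b-adfc53981a25"
job_id = "03141229-84cd-4b1b-9733-dd124320c125"

url = f"{endpoint}/v1/{project_id}/instances/{instance_id}/lf-jobs/{job_id}"
headers = {"X-Auth-Token": "<token>"}  # tenant token, mandatory

print(url)
# To actually send the request, you could use an HTTP client such as:
#   import requests
#   resp = requests.get(url, headers=headers)
```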
Response Parameters
Status code: 200
| Parameter | Type | Description |
|---|---|---|
| id | String | Job ID, which is automatically generated when a job is created, for example, 03141229-84cd-4b1b-9733-dd124320c125. |
| name | String | Job name. The value contains 4 to 255 characters. Only letters, digits, and underscores (_) are allowed. |
| description | String | Job description entered when the job was created. |
| type | String | Job type. Enumeration values: METADATA_MIGRATION (metadata migration), PERMISSION_MIGRATION (permission migration), METADATA_DISCOVERY (metadata discovery). |
| parameter | JobParameter object | Job parameters. |
| create_time | String | Timestamp generated when the job was created. |
| start_time | String | Timestamp of the last job execution. |
| status | String | Job status. |
JobParameter

| Parameter | Type | Description |
|---|---|---|
| metadata_migration_parameter | MetaDataMigrationParameter object | Metadata migration parameters. |
| permission_migration_parameter | PermissionMigrationParameter object | Permission migration parameters. |
| metadata_discovery_parameter | MetaDataDiscoveryParameter object | Metadata discovery parameters. |
MetaDataMigrationParameter

| Parameter | Type | Description |
|---|---|---|
| datasource_type | String | Data source type. Enumeration values: ALIYUN_DLF (Alibaba Cloud DLF), MRS_RDS_FOR_MYSQL (MRS RDS for MySQL), OPEN_FOR_MYSQL (open-source HiveMetastore for MySQL), MRS_RDS_FOR_PG (MRS RDS for PostgreSQL), MRS_LOCAL_GAUSSDB (MRS local database, GaussDB). |
| datasource_parameter | DataSourceParameter object | Data source parameters. |
| source_catalog | String | Source catalog, that is, the catalog to be migrated. |
| target_catalog | String | Target catalog, which stores the migrated metadata. |
| conflict_strategy | String | Conflict resolution policy. UPSERT indicates that metadata is created and existing metadata is updated, but not deleted. |
| log_location | String | Data storage path, selected by the user. |
| sync_objects | Array of strings | Metadata objects to migrate. Enumeration values: DATABASE, FUNCTION, TABLE, PARTITION. |
| default_owner | String | Default owner information, specified by the user. |
| locations | Array of LocationReplaceRule objects | Path replacement table, built from key-value pairs specified by the user. A maximum of 20 rules is supported. |
| instance_id | String | Instance ID. |
| ignore_obs_checked | Boolean | Whether to ignore the restriction on the OBS path when creating an internal table. |
| network_type | String | Migration network type. Enumeration values: EIP, VPC_PEERING. |
| accepted_vpc_id | String | ID of the VPC where the peer RDS instance is located. |
DataSourceParameter

| Parameter | Type | Description |
|---|---|---|
| jdbc_url | String | JDBC URL, for example, jdbc:protocol://host:port/db_name. |
| username | String | Username. The value can contain only letters and digits and cannot exceed 255 characters. |
| password | String | Password. The value can be passed only when a job is created or updated. If the value is empty, there is no password or the password does not need to be updated. The password is not returned in query and list responses. |
| endpoint | String | Endpoint URL, for example, xxxx.com. |
| access_key | String | Access key. The value can be passed only when a job is created or updated. If the value is empty, there is no key or the key does not need to be updated. The key is not returned in query and list responses. |
| secret_key | String | Secret key. The value can be passed only when a job is created or updated. If the value is empty, there is no key or the key does not need to be updated. The key is not returned in query and list responses. |
| subnet_ip | String | Subnet IP address of the RDS instance. |
LocationReplaceRule

| Parameter | Type | Description |
|---|---|---|
| key | String | Key (source path). |
| value | String | Value (target path). |
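To illustrate how such a path replacement table might be applied during migration, here is a minimal sketch. The prefix-substitution matching behavior and function name are illustrative assumptions, not documented service behavior:

```python
def apply_location_rules(location: str, rules: list) -> str:
    """Replace the first matching key (source path) with its value (target path).

    Assumed behavior: a rule matches when the location starts with the rule's key.
    """
    for rule in rules:
        if location.startswith(rule["key"]):
            return rule["value"] + location[len(rule["key"]):]
    return location

# Mirrors the rule in the example response below: test/test1 -> test2/db.
rules = [{"key": "test/test1", "value": "test2/db"}]
print(apply_location_rules("test/test1/tbl", rules))  # -> test2/db/tbl
```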
PermissionMigrationParameter

| Parameter | Type | Description |
|---|---|---|
| location | String | OBS file path from which permissions are migrated. |
| file_name | String | Permission JSON file. The file name cannot contain special characters such as <, >, :, ", /, \, \|, ?, or *. |
| log_location | String | Data storage path, selected by the user. |
| policy_type | String | Permission policy type. Enumeration values: DLF, RANGER, LAKEFORMATION. |
| catalog_id | String | Catalog ID. This field must be passed for DLF permission policy conversion. |
| instance_id | String | Instance ID. |
| ranger_permission_migration_principal_relas | | Ranger authorization entity conversion relationships (see the following table). |
Ranger authorization entity conversion parameters

| Parameter | Type | Description |
|---|---|---|
| user_to | String | Target object type for user conversion. Enumeration values: IAM_USER (IAM user), IAM_GROUP (IAM user group), ROLE (role). |
| user_prefix | String | Prefix added to the object name after user conversion. |
| user_suffix | String | Suffix added to the object name after user conversion. |
| group_to | String | Target object type for group conversion. Enumeration values: IAM_USER (IAM user), IAM_GROUP (IAM user group), ROLE (role). |
| group_prefix | String | Prefix added to the object name after group conversion. |
| group_suffix | String | Suffix added to the object name after group conversion. |
| role_to | String | Target object type for role conversion. Enumeration values: IAM_USER (IAM user), IAM_GROUP (IAM user group), ROLE (role). |
| role_prefix | String | Prefix added to the object name after role conversion. |
| role_suffix | String | Suffix added to the object name after role conversion. |
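The prefix and suffix fields above decorate the converted object name. A minimal sketch of that composition (the function name and the exact concatenation order are assumptions for illustration):

```python
def convert_principal_name(name: str, prefix: str = "", suffix: str = "") -> str:
    # Assumed composition: prefix + original name + suffix.
    return f"{prefix}{name}{suffix}"

# For example, converting a Ranger user to an IAM user when
# user_prefix and user_suffix are set:
print(convert_principal_name("alice", prefix="rg_", suffix="_iam"))  # -> rg_alice_iam
```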
MetaDataDiscoveryParameter

| Parameter | Type | Description |
|---|---|---|
| data_location | String | Data storage path, selected by the user. |
| target_catalog | String | Target catalog, which stores discovered metadata. |
| target_database | String | Target database, which stores discovered metadata. |
| conflict_strategy | String | Conflict resolution policy. Enumeration values: UPDATE (existing metadata is updated but not deleted), INSERT (metadata is created but not updated or deleted), UPSERT (metadata is created and existing metadata is updated, but not deleted). |
| file_discovery_type | String | File discovery type. Enumeration values: PARQUET (a columnar storage format built on HDFS), CSV (comma-separated values), JSON (JavaScript Object Notation), ORC (Optimized Row Columnar), TEXT (plain text), AVRO (a row-oriented data serialization framework), ALL (file types are auto-detected). |
| separator | String | File separator. Common separators include commas (,) and semicolons (;). |
| quote | String | File quotation character. Common quotation characters include single quotation marks, double quotation marks, and \u0000. |
| escape | String | File escape character. A common escape character is the backslash (\). |
| header | Boolean | Whether the first line of the file is treated as a header. true: the first line is a header; false: it is not. Default value: false. |
| file_sample_rate | Integer | File sampling rate, from 0 to 100. 100 indicates a 100% full scan; 0 indicates that only one file in each folder is scanned. |
| table_depth | Integer | Table depth. Assume that the path obs://a/b/c/d/e=1/f=99 exists and the data storage path is obs://a/b. A table depth of 2 means that d is the boundary: d is the table name, and e=1 and f=99 indicate that table d is a partitioned table with partition keys e and f and partition values 1 and 99. |
| log_location | String | Data storage path, selected by the user. |
| default_owner | String | Default owner. By default, this is the user who created the task. |
| principals | Array of Principal objects | Entity information. |
| give_write | Boolean | Whether to grant the write permission. true: yes; false: no. Default value: false. If the write permission is granted, the authorization entity gets both read and write permissions. |
| instance_id | String | Instance ID. |
| rediscovery_policy | String | Rediscovery policy. Enumeration values: FULL_DISCOVERY (full discovery), INCREMENTAL_DISCOVERY (incremental discovery). Default value: FULL_DISCOVERY. |
| execute_strategy | String | Metadata discovery execution mode. Enumeration values: MANNUAL (manual), SCHEDULE (scheduled). Default value: MANNUAL. |
| execute_frequency | String | Execution frequency. Enumeration values: MONTHLY (monthly), WEEKLY (weekly), DAILY (daily), HOURLY (hourly). |
| execute_day | String | Date or day on which the metadata discovery task is executed. When execute_frequency is MONTHLY, this parameter is the day of the month, from 1 to 31; if the specified day does not exist in the current month, the task is not executed (for example, with execute_day set to 30, the task is not triggered in February). When execute_frequency is WEEKLY, this parameter is the day of the week, from 1 to 7. When execute_frequency is DAILY or HOURLY, set this parameter to *, indicating that the scheduled task is executed every day. |
| execute_hour | String | Hour at which the metadata discovery task is executed. When execute_frequency is MONTHLY, WEEKLY, or DAILY, this parameter is the execution hour of the selected day, from 0 to 23. When execute_frequency is HOURLY, the value is *, indicating that the task is triggered every hour. |
| execute_minute | String | Minute at which the metadata discovery task is executed, from 0 to 59. |
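The table_depth example above can be sketched in code. This is a simplified illustration of the documented example only; the service's actual discovery logic is an assumption beyond what this page specifies:

```python
def split_table_and_partitions(path: str, data_location: str, table_depth: int):
    """Split an object path into (table_name, partition dict) using table_depth.

    Follows the documented example: with data_location obs://a/b and depth 2,
    obs://a/b/c/d/e=1/f=99 yields table d with partitions e=1 and f=99.
    """
    # Path components relative to the data storage path.
    rel = path[len(data_location):].strip("/").split("/")
    table_name = rel[table_depth - 1]
    # Components below the table boundary of the form key=value are partitions.
    partitions = {}
    for part in rel[table_depth:]:
        if "=" in part:
            key, value = part.split("=", 1)
            partitions[key] = value
    return table_name, partitions

print(split_table_and_partitions("obs://a/b/c/d/e=1/f=99", "obs://a/b", 2))
# -> ('d', {'e': '1', 'f': '99'})
```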
Principal

| Parameter | Type | Description |
|---|---|---|
| principal_type | String | Entity type. Enumeration values: USER (user), GROUP (group), ROLE (role), SHARE (share), OTHER (others). |
| principal_source | String | Entity source. Enumeration values: IAM (cloud user), SAML (SAML-based federation), LDAP (LDAP ID user), LOCAL (local user), AGENTTENANT (agency), OTHER (others). |
| principal_name | String | Entity name. The value contains 1 to 49 characters. Only letters, digits, underscores (_), hyphens (-), and periods (.) are allowed. |
Status code: 400
| Parameter | Type | Description |
|---|---|---|
| error_code | String | Error code. |
| error_msg | String | Error description. |
| common_error_code | String | CBC common error code. |
| solution_msg | String | Solution. |
Status code: 404
| Parameter | Type | Description |
|---|---|---|
| error_code | String | Error code. |
| error_msg | String | Error description. |
| common_error_code | String | CBC common error code. |
| solution_msg | String | Solution. |
Status code: 500
| Parameter | Type | Description |
|---|---|---|
| error_code | String | Error code. |
| error_msg | String | Error description. |
| common_error_code | String | CBC common error code. |
| solution_msg | String | Solution. |
Example Requests
GET https://{endpoint}/v1/{project_id}/instances/{instance_id}/lf-jobs/{job_id}
Example Responses
Status code: 200
Job details obtained.
{ "id" : "03141229-84cd-4b1b-9733-dd124320c125", "name" : "testjob", "description" : "testJob", "type" : "METADATA_MIGRATION", "parameter" : { "metadata_migration_parameter" : { "datasource_type" : "ALIYUN_DLF", "datasource_parameter" : { "endpoint" : "protocol://xxxx.xxxx.com" }, "source_catalog" : "sourceCatalog1", "target_catalog" : "targetCatalog1", "conflict_strategy" : "UPDATE", "log_location" : "obs://logStore/2023", "sync_objects" : [ "DATABASE" ], "locations" : [ { "key" : "test/test1", "value" : "test2/db" } ] } }, "status" : { "status" : "SUCCESS" } }
Status code: 400
Bad Request
{ "error_code" : "common.01000001", "error_msg" : "failed to read http request, please check your input, code: 400, reason: Type mismatch., cause: TypeMismatchException" }
Status code: 401
Unauthorized
{ "error_code": 'APIG.1002', "error_msg": 'Incorrect token or token resolution failed' }
Status code: 403
Forbidden
{ "error" : { "code" : "403", "message" : "X-Auth-Token is invalid in the request", "error_code" : null, "error_msg" : null, "title" : "Forbidden" }, "error_code" : "403", "error_msg" : "X-Auth-Token is invalid in the request", "title" : "Forbidden" }
Status code: 404
Not Found
{ "error_code" : "common.01000001", "error_msg" : "response status exception, code: 404" }
Status code: 408
Request Timeout
{ "error_code" : "common.00000408", "error_msg" : "timeout exception occurred" }
Status code: 500
Internal Server Error
{ "error_code" : "common.00000500", "error_msg" : "internal error" }
Status Codes
| Status Code | Description |
|---|---|
| 200 | Job details obtained. |
| 400 | Bad Request |
| 401 | Unauthorized |
| 403 | Forbidden |
| 404 | Not Found |
| 408 | Request Timeout |
| 500 | Internal Server Error |
Error Codes
See Error Codes.