Updated on 2024-02-21 GMT+08:00

Creating a Job

Function

This API is used to create jobs.

URI

POST /v1/{project_id}/instances/{instance_id}/lf-jobs

Table 1 Path Parameters

Parameter

Mandatory

Type

Description

project_id

Yes

String

Project ID. For how to obtain the project ID, see Obtaining a Project ID (lakeformation_04_0026.xml).

instance_id

Yes

String

LakeFormation instance ID. The value is automatically generated when the instance is created, for example, 2180518f-42b8-4947-b20b-adfc53981a25.
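
As a quick illustration of how the two path parameters are combined into the request URI, the following Python sketch builds the endpoint URL. The endpoint host and both IDs are placeholders, not real values.

# Minimal sketch: build the request URI from the two path parameters.
# The endpoint host and both IDs below are placeholders.
endpoint = "https://lakeformation.example.com"        # assumed service endpoint
project_id = "0123456789abcdef0123456789abcdef"       # placeholder project ID
instance_id = "2180518f-42b8-4947-b20b-adfc53981a25"  # example instance ID from Table 1

url = f"{endpoint}/v1/{project_id}/instances/{instance_id}/lf-jobs"
print(url)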

Request Parameters

Table 2 Request header parameters

Parameter

Mandatory

Type

Description

X-Auth-Token

Yes

String

Tenant token.
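
A minimal sketch of the request headers, assuming a JSON request body; the token value is a placeholder that would be obtained from IAM beforehand, and the Content-Type header is an assumption rather than part of Table 2.

# Placeholder token; obtain a real one from the IAM token API first.
token = "xxxxxx"

headers = {
    "X-Auth-Token": token,                # tenant token (mandatory)
    "Content-Type": "application/json",   # assumed, since the body is JSON
}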

Table 3 Request body parameters

Parameter

Mandatory

Type

Description

name

Yes

String

Job name. The value can contain 4 to 255 characters. Only letters, digits, and underscores (_) are allowed.

description

No

String

Description. Enter 0 to 4000 characters to describe the created job.

type

Yes

String

Job type. METADATA_MIGRATION: metadata migration. PERMISSION_MIGRATION: data permission migration. METADATA_DISCOVERY: metadata discovery.

Enumeration values:

  • METADATA_MIGRATION
  • PERMISSION_MIGRATION
  • METADATA_DISCOVERY

parameter

Yes

JobParameter object

Job parameters. Which sub-object is used depends on the job type.
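
The sketch below shows the minimal shape of the request body: name and type are mandatory, and parameter carries the sub-object that matches the job type (here, a metadata migration). All values are placeholders.

import json

# Skeleton request body for a METADATA_MIGRATION job; values are placeholders.
body = {
    "name": "testjob",                    # 4-255 chars: letters, digits, underscores
    "description": "example job",         # optional, up to 4000 chars
    "type": "METADATA_MIGRATION",         # or PERMISSION_MIGRATION / METADATA_DISCOVERY
    "parameter": {
        # Only the sub-object matching "type" needs to be supplied.
        "metadata_migration_parameter": {
            # see Table 5 for the full field list
        }
    }
}
print(json.dumps(body, indent=2))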

Table 4 JobParameter

Parameter

Mandatory

Type

Description

metadata_migration_parameter

No

MetaDataMigrationParameter object

Metadata migration parameters.

permission_migration_parameter

No

PermissionMigrationParameter object

Permission migration parameters.

metadata_discovery_parameter

No

MetaDataDiscoveryParameter object

Metadata discovery parameters.

Table 5 MetaDataMigrationParameter

Parameter

Mandatory

Type

Description

datasource_type

Yes

String

Data source type. ALIYUN_DLF: DLF. MRS_RDS_FOR_MYSQL: MRS RDS (for MySQL). OPEN_FOR_MYSQL: open-source Hive Metastore (for MySQL). MRS_RDS_FOR_PG: MRS RDS (for PostgreSQL). MRS_LOCAL_GAUSSDB: MRS local database (GaussDB).

Enumeration values:

  • ALIYUN_DLF
  • MRS_RDS_FOR_MYSQL
  • OPEN_FOR_MYSQL
  • MRS_RDS_FOR_PG
  • MRS_LOCAL_GAUSSDB

datasource_parameter

Yes

DataSourceParameter object

Data source parameter.

source_catalog

Yes

String

Source catalog, which is the catalog to be migrated out.

target_catalog

Yes

String

Target catalog, which stores the migrated metadata.

conflict_strategy

Yes

String

Conflict resolution policy. UPSERT indicates that metadata is created and existing metadata is updated, but nothing is deleted.

Enumeration values:

  • UPSERT

log_location

Yes

String

Data storage path, which is selected by users.

sync_objects

Yes

Array of strings

Migration metadata object array. The values include DATABASE, FUNCTION, TABLE, and PARTITION.

Enumeration values:

  • DATABASE
  • FUNCTION
  • TABLE
  • PARTITION

default_owner

No

String

Default user information, which is determined by users.

locations

Yes

Array of LocationReplaceRule objects

Path replacement table, which is generated after key-value pairs are determined by users. A maximum of 20 records are supported.

instance_id

Yes

String

Instance ID.

ignore_obs_checked

No

Boolean

Whether to ignore the restriction on the OBS path when creating an internal table.

network_type

No

String

Migration network type, which can be EIP or VPC_PEERING.

Enumeration values:

  • EIP
  • VPC_PEERING

accepted_vpc_id

No

String

ID of the VPC where the peer RDS is located.
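
For illustration, the sketch below assembles a metadata_migration_parameter object for an MRS RDS (for MySQL) source migrated over VPC peering. Every value is a placeholder; the field set simply follows Table 5 and Table 6.

# Hypothetical metadata_migration_parameter; all values are placeholders.
metadata_migration_parameter = {
    "datasource_type": "MRS_RDS_FOR_MYSQL",
    "datasource_parameter": {                      # see Table 6
        "jdbc_url": "jdbc:protocol://host:port/db_name",
        "username": "metastoreuser",
        "password": "********",                    # only sent on create/update
        "subnet_ip": "192.168.0.10",
    },
    "source_catalog": "sourceCatalog1",
    "target_catalog": "targetCatalog1",
    "conflict_strategy": "UPSERT",
    "log_location": "obs://logStore/2023",
    "sync_objects": ["DATABASE", "FUNCTION", "TABLE", "PARTITION"],
    "locations": [                                 # at most 20 replacement rules
        {"key": "test/test1", "value": "test2/db"}
    ],
    "instance_id": "2180518f-42b8-4947-b20b-adfc53981a25",
    "network_type": "VPC_PEERING",
    "accepted_vpc_id": "13551d6b-755d-4757-b956-536f674975c0",
}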

Table 6 DataSourceParameter

Parameter

Mandatory

Type

Description

jdbc_url

No

String

JDBC URL, for example, jdbc:protocol://host:port/db_name.

username

No

String

User name. The value can contain only letters and digits and cannot exceed 255 characters.

password

No

String

Password. The value can be transferred only when a job is created or updated. If the value is empty, there is no password or the password does not need to be updated. The password is not returned by query or list operations.

endpoint

No

String

Endpoint URL, for example, xxxx.com.

access_key

No

String

Access key. The value can be transferred only when a job is created or updated. If the value is empty, there is no key or the key does not need to be updated. The key is not returned by query or list operations.

secret_key

No

String

Secret key. The value can be transferred only when a job is created or updated. If the value is empty, there is no key or the key does not need to be updated. The key is not returned by query or list operations.

subnet_ip

No

String

Subnet IP address of RDS.

Table 7 LocationReplaceRule

Parameter

Mandatory

Type

Description

key

Yes

String

Key (source path).

value

Yes

String

Value (target path).
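
Each replacement rule maps a source path prefix (key) to a target path prefix (value). The sketch below is only an illustration of that mapping, not the service's actual rewriting logic.

# Illustrative only: apply LocationReplaceRule entries to a path by prefix.
rules = [
    {"key": "test/test1", "value": "test2/db"},
]

def replace_location(path, rules):
    """Return the path with the first matching key prefix swapped for its value."""
    for rule in rules:
        if path.startswith(rule["key"]):
            return rule["value"] + path[len(rule["key"]):]
    return path

print(replace_location("test/test1/orders", rules))   # -> test2/db/orders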

Table 8 PermissionMigrationParameter

Parameter

Mandatory

Type

Description

location

Yes

String

OBS file path from which the permissions to be migrated are obtained.

file_name

No

String

Permission JSON file. The file name cannot contain the special characters <, >, :, ", /, \, |, ?, or *.

log_location

Yes

String

Data storage path, which is selected by users.

policy_type

Yes

String

Permission type. The values are DLF, RANGER, and LAKEFORMATION.

Enumeration values:

  • DLF
  • RANGER
  • LAKEFORMATION

catalog_id

No

String

The catalog_id field needs to be transferred for DLF permission policy conversion.

instance_id

Yes

String

Instance ID.

ranger_permission_migration_principal_relas

No

RangerPermissionMigrationPrincipalRelas object

Authorization entity conversion relationship of Ranger.
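
The sketch below puts together a permission_migration_parameter for a RANGER policy export. The OBS paths, file name, and instance ID are placeholders, and the principal conversion object is described in Table 9.

# Hypothetical permission_migration_parameter; values are placeholders.
permission_migration_parameter = {
    "location": "obs://bucket/ranger-export/",      # OBS path holding the policy file
    "file_name": "ranger_policies.json",
    "log_location": "obs://logStore/2023",
    "policy_type": "RANGER",                        # DLF / RANGER / LAKEFORMATION
    "instance_id": "2180518f-42b8-4947-b20b-adfc53981a25",
    "ranger_permission_migration_principal_relas": {  # see Table 9
        "user_to": "IAM_USER",
        "group_to": "IAM_GROUP",
        "role_to": "ROLE",
    },
}
# For policy_type DLF, catalog_id must also be supplied.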

Table 9 RangerPermissionMigrationPrincipalRelas

Parameter

Mandatory

Type

Description

user_to

No

String

User conversion object. IAM_USER: IAM user. IAM_GROUP: IAM group. ROLE: role.

Enumeration values:

  • IAM_USER
  • IAM_GROUP
  • ROLE

user_prefix

No

String

Prefix of the object name after user conversion.

user_suffix

No

String

Suffix of the object name after user conversion.

group_to

No

String

Group conversion object. IAM_USER: IAM user. IAM_GROUP: IAM group. ROLE: role.

Enumeration values:

  • IAM_USER
  • IAM_GROUP
  • ROLE

group_prefix

No

String

Prefix of the object name after group conversion.

group_suffix

No

String

Suffix of the object name after group conversion.

role_to

No

String

Role conversion object. IAM_USER: IAM user. IAM_GROUP: IAM group. ROLE: role.

Enumeration values:

  • IAM_USER
  • IAM_GROUP
  • ROLE

role_prefix

No

String

Prefix of the object name after role conversion.

role_suffix

No

String

Suffix of the object name after role conversion.
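
The prefix and suffix fields simply wrap the converted object name. The snippet below illustrates that naming convention for a Ranger user mapped to an IAM user; it is an illustration of the fields above, not the service's implementation, and the names are placeholders.

# Illustration of the prefix/suffix naming convention from Table 9.
relas = {"user_to": "IAM_USER", "user_prefix": "mig_", "user_suffix": "_01"}

ranger_user = "hive_admin"                       # placeholder source user name
iam_user = relas["user_prefix"] + ranger_user + relas["user_suffix"]
print(iam_user)   # -> mig_hive_admin_01, created as the IAM_USER target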

Table 10 MetaDataDiscoveryParameter

Parameter

Mandatory

Type

Description

data_location

Yes

String

Data storage path, which is selected by users.

target_catalog

Yes

String

Target catalog, which saves discovered metadata.

target_database

Yes

String

Target database, which saves discovered metadata.

conflict_strategy

Yes

String

Conflict resolution policy. UPDATE indicates that existing metadata is updated but not deleted. INSERT indicates that metadata is created but not updated or deleted. UPSERT indicates that metadata is created and existing metadata is updated, but nothing is deleted.

Enumeration values:

  • UPDATE
  • INSERT
  • UPSERT

file_discovery_type

Yes

String

File discovery type. PARQUET: a columnar storage format built on top of the Hadoop Distributed File System (HDFS). CSV: a comma-separated values file. JSON: JavaScript Object Notation. ORC: Optimized Row Columnar. TEXT: plain text. AVRO: a row-oriented remote procedure call and data serialization framework. ALL: the file type is detected automatically.

Enumeration values:

  • PARQUET
  • CSV
  • JSON
  • ORC
  • AVRO
  • ALL

separator

No

String

File separator. Common separators include commas (,) and semicolons (;).

quote

No

String

File quotation character. Common quotation characters include single quotation marks, double quotation marks, and \u0000.

Enumeration values:

  • DOUBLE_QUOTE
  • SINGLE_QUOTE
  • NULL_QUOTE

escape

No

String

Escape character of a file. A common escape character is \.

header

No

Boolean

Indicates whether the first line of the file is considered as a header. The value true indicates that the first line is a header, and the value false indicates that the first line is not a header. The default value is false.

file_sample_rate

No

Integer

File sampling rate. The value ranges from 0 to 100. 100 indicates 100% full scanning. 0 indicates that only one file in each folder is scanned.

table_depth

No

Integer

Table depth. Assume that the path obs://a/b/c/d/e=1/f=99 exists and the data storage path is obs://a/b. A table depth of 2 indicates that d is used as the boundary: d is the table name, and e=1 and f=99 indicate that table d is a partitioned table whose partition keys are e and f and whose partition values are 1 and 99 (see the sketch after this table).

log_location

Yes

String

Data storage path, which is selected by users.

default_owner

No

String

Default owner information. By default, this is the user who created the task.

principals

No

Array of Principal objects

Entity information.

give_write

No

Boolean

Whether to grant the write permission. The options are true (yes) and false (no). The default value is false. The authorization entity gets the read and write permissions if the write permission is granted.

instance_id

Yes

String

Instance ID.

rediscovery_policy

No

String

Rediscovery policy. The options are FULL_DISCOVERY (full discovery) and INCREMENTAL_DISCOVERY (incremental discovery). The default value is FULL_DISCOVERY.

Enumeration values:

  • FULL_DISCOVERY
  • INCREMENTAL_DISCOVERY

execute_strategy

No

String

Metadata discovery execution mode. The options are MANNUAL (manual) and SCHEDULE (scheduled). The default value is MANNUAL.

Enumeration values:

  • MANNUAL
  • SCHEDULE

execute_frequency

No

String

Execution frequency. The options are MONTHLY (monthly), WEEKLY (weekly), DAILY (daily), and HOURLY (hourly).

Enumeration values:

  • MONTHLY
  • WEEKLY
  • DAILY
  • HOURLY

execute_day

No

String

Date when the metadata discovery task is executed. When execute_frequency is set to MONTHLY, this parameter indicates the day of the month on which the task is executed. The value ranges from 1 to 31. If the specified day does not exist in the current month, the task is not executed; for example, if execute_frequency is set to MONTHLY and execute_day is set to 30, the metadata discovery task is not triggered in February. If execute_frequency is set to WEEKLY, execute_day indicates the day of the week. The value ranges from 1 to 7. If execute_frequency is set to DAILY or HOURLY, set this parameter to *, indicating that the scheduled task is executed every day.

execute_hour

No

String

Hour when the metadata discovery schedule is executed. When execute_frequency is set to MONTHLY, WEEKLY, or DAILY, this parameter indicates the hour of the selected day at which the task is executed. The value ranges from 0 to 23. If execute_frequency is set to HOURLY, set this parameter to *, indicating that the task is triggered every hour.

execute_minute

No

String

Specifies the minute when the metadata discovery task is executed. The value ranges from 0 to 59, indicating that the task is executed at the specified minute.
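
To make the table_depth example above concrete, the sketch below splits a data path into the table name and Hive-style partition key-value pairs, under the stated assumption that every level below the table name follows the key=value convention. It is illustrative only, not the discovery engine's actual logic.

# Illustrative parsing of the table_depth example from the table above.
data_location = "obs://a/b"
path = "obs://a/b/c/d/e=1/f=99"
table_depth = 2          # levels below data_location down to the table name

relative = path[len(data_location):].strip("/").split("/")    # ['c', 'd', 'e=1', 'f=99']
table_name = relative[table_depth - 1]                        # 'd'
partitions = dict(p.split("=", 1) for p in relative[table_depth:] if "=" in p)

print(table_name)   # d
print(partitions)   # {'e': '1', 'f': '99'}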

Table 11 Principal

Parameter

Mandatory

Type

Description

principal_type

Yes

String

Entity type. USER: user. GROUP: group. ROLE: role. SHARE: share. OTHER: others.

Enumeration values:

  • USER
  • GROUP
  • ROLE
  • SHARE
  • OTHER

principal_source

Yes

String

Entity source. IAM: cloud user. SAML: SAML-based federated user. LDAP: LDAP user. LOCAL: local user. AGENTTENANT: agency. OTHER: others.

Enumeration values:

  • IAM
  • SAML
  • LDAP
  • LOCAL
  • AGENTTENANT
  • OTHER

principal_name

Yes

String

Entity name. The value can contain 1 to 49 characters. Only letters, digits, underscores (_), hyphens (-), and periods (.) are allowed.
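
A principals entry combines the three fields above into a small object, for example (placeholder name):

# Example Principal object for the principals array; the name is a placeholder.
principal = {
    "principal_type": "USER",       # USER / GROUP / ROLE / SHARE / OTHER
    "principal_source": "IAM",      # IAM / SAML / LDAP / LOCAL / AGENTTENANT / OTHER
    "principal_name": "user1",
}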

Response Parameters

Status code: 201

Table 12 Response body parameters

Parameter

Type

Description

id

String

Job ID, which is automatically generated when a job is created, for example, 03141229-84cd-4b1b-9733-dd124320c125.

name

String

Job name. The value can contain 4 to 255 characters. Only letters, digits, and underscores (_) are allowed.

description

String

Description written when a user creates a job.

type

String

Job type. METADATA_MIGRATION: metadata migration. PERMISSION_MIGRATION: data permission migration. METADATA_DISCOVERY: metadata discovery.

Enumeration values:

  • METADATA_MIGRATION
  • PERMISSION_MIGRATION
  • METADATA_DISCOVERY

parameter

JobParameter object

Job parameters.

create_time

String

Timestamp when the job was created.

start_time

String

Timestamp when the job was last executed.

status

String

Status.

  • CREATED: The job is created.
  • SUBMITTED: The job is submitted.
  • RUNNING: The job is being executed.
  • SUCCESS: The job has been executed.
  • FAILED: The job failed.
  • STOPPED: The job has been stopped.

Enumeration values:

  • CREATED
  • SUBMITTED
  • RUNNING
  • SUCCESS
  • FAILED
  • STOPPED
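
A minimal sketch of reading the 201 response, assuming the call was sent with the Python requests library as in the later examples; it only pulls out the generated job ID and the current status.

# Minimal sketch: read the fields of a 201 response, assuming "resp" is a
# requests.Response returned by the POST call shown under Example Requests.
def handle_create_response(resp):
    if resp.status_code == 201:
        job = resp.json()
        print("job id:", job["id"])           # generated by the service
        print("status:", job.get("status"))   # e.g. CREATED
    else:
        print("unexpected status:", resp.status_code, resp.text)
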
Table 13 JobParameter

Parameter

Type

Description

metadata_migration_parameter

MetaDataMigrationParameter object

Metadata migration parameters.

permission_migration_parameter

PermissionMigrationParameter object

Permission migration parameters.

metadata_discovery_parameter

MetaDataDiscoveryParameter object

Metadata discovery parameters.

Table 14 MetaDataMigrationParameter

Parameter

Type

Description

datasource_type

String

Data source type. ALIYUN_DLF: DLF. MRS_RDS_FOR_MYSQL: MRS RDS (for MySQL). OPEN_FOR_MYSQL: open-source Hive Metastore (for MySQL). MRS_RDS_FOR_PG: MRS RDS (for PostgreSQL). MRS_LOCAL_GAUSSDB: MRS local database (GaussDB).

Enumeration values:

  • ALIYUN_DLF
  • MRS_RDS_FOR_MYSQL
  • OPEN_FOR_MYSQL
  • MRS_RDS_FOR_PG
  • MRS_LOCAL_GAUSSDB

datasource_parameter

DataSourceParameter object

Data source parameter.

source_catalog

String

Source catalog, which is the catalog to be migrated out.

target_catalog

String

Target catalog, which stores the migrated metadata.

conflict_strategy

String

Conflict resolution policy. UPSERT indicates that metadata is created and existing metadata is updated, but nothing is deleted.

Enumeration values:

  • UPSERT

log_location

String

Data storage path, which is selected by users.

sync_objects

Array of strings

Migration metadata object array. The values include DATABASE, FUNCTION, TABLE, and PARTITION.

Enumeration values:

  • DATABASE
  • FUNCTION
  • TABLE
  • PARTITION

default_owner

String

Default user information, which is determined by users.

locations

Array of LocationReplaceRule objects

Path replacement table, which is generated after key-value pairs are determined by users. A maximum of 20 records are supported.

instance_id

String

Instance ID.

ignore_obs_checked

Boolean

Whether to ignore the restriction on the OBS path when creating an internal table.

network_type

String

Migration network type, which can be EIP or VPC_PEERING.

Enumeration values:

  • EIP
  • VPC_PEERING

accepted_vpc_id

String

ID of the VPC where the peer RDS is located.

Table 15 DataSourceParameter

Parameter

Type

Description

jdbc_url

String

JDBC URL, for example, jdbc:protocol://host:port/db_name.

username

String

User name. The value can contain only letters and digits and cannot exceed 255 characters.

password

String

Password. The value can be transferred only when a job is created or updated. If the value is empty, there is no password or the password does not need to be updated. The password is not returned by query or list operations.

endpoint

String

Endpoint URL, for example, xxxx.com.

access_key

String

Access key. The value can be transferred only when a job is created or updated. If the value is empty, there is no key or the key does not need to be updated. The key is not returned by query or list operations.

secret_key

String

Secret key. The value can be transferred only when a job is created or updated. If the value is empty, there is no key or the key does not need to be updated. The key is not returned by query or list operations.

subnet_ip

String

Subnet IP address of RDS.

Table 16 LocationReplaceRule

Parameter

Type

Description

key

String

Key (source path).

value

String

Value (target path).

Table 17 PermissionMigrationParameter

Parameter

Type

Description

location

String

OBS file path from which the permissions to be migrated are obtained.

file_name

String

Permission JSON file. The file name cannot contain the special characters <, >, :, ", /, \, |, ?, or *.

log_location

String

Data storage path, which is selected by users.

policy_type

String

Permission type. The values are DLF, RANGER, and LAKEFORMATION.

Enumeration values:

  • DLF
  • RANGER
  • LAKEFORMATION

catalog_id

String

The catalog_id field needs to be transferred for DLF permission policy conversion.

instance_id

String

Instance ID.

ranger_permission_migration_principal_relas

RangerPermissionMigrationPrincipalRelas object

Authorization entity conversion relationship of Ranger.

Table 18 RangerPermissionMigrationPrincipalRelas

Parameter

Type

Description

user_to

String

User conversion object. IAM_USER: IAM user. IAM_GROUP: IAM group. ROLE: role.

Enumeration values:

  • IAM_USER
  • IAM_GROUP
  • ROLE

user_prefix

String

Prefix of the object name after user conversion.

user_suffix

String

Suffix of the object name after user conversion.

group_to

String

Group conversion object. IAM_USER: IAM user. IAM_GROUP: IAM group. ROLE: role.

Enumeration values:

  • IAM_USER
  • IAM_GROUP
  • ROLE

group_prefix

String

Prefix of the object name after group conversion.

group_suffix

String

Suffix of the object name after group conversion.

role_to

String

Role conversion object. IAM_USER: IAM user. IAM_GROUP: IAM group. ROLE: role.

Enumeration values:

  • IAM_USER
  • IAM_GROUP
  • ROLE

role_prefix

String

Prefix of the object name after role conversion.

role_suffix

String

Suffix of the object name after role conversion.

Table 19 MetaDataDiscoveryParameter

Parameter

Type

Description

data_location

String

Data storage path, which is selected by users.

target_catalog

String

Target catalog, which saves discovered metadata.

target_database

String

Target database, which saves discovered metadata.

conflict_strategy

String

Conflict resolution policy. UPDATE indicates that existing metadata is updated but not deleted. INSERT indicates that metadata is created but not updated or deleted. UPSERT indicates that metadata is created and existing metadata is updated, but nothing is deleted.

Enumeration values:

  • UPDATE
  • INSERT
  • UPSERT

file_discovery_type

String

File discovery type. PARQUET: a columnar storage format built on top of the Hadoop Distributed File System (HDFS). CSV: a comma-separated values file. JSON: JavaScript Object Notation. ORC: Optimized Row Columnar. TEXT: plain text. AVRO: a row-oriented remote procedure call and data serialization framework. ALL: the file type is detected automatically.

Enumeration values:

  • PARQUET
  • CSV
  • JSON
  • ORC
  • AVRO
  • ALL

separator

String

File separator. Common separators include commas (,) and semicolons (;).

quote

String

File quotation character. Common quotation characters include single quotation marks, double quotation marks, and \u0000.

Enumeration values:

  • DOUBLE_QUOTE
  • SINGLE_QUOTE
  • NULL_QUOTE

escape

String

Escape character of a file. A common escape character is \.

header

Boolean

Indicates whether the first line of the file is considered as a header. The value true indicates that the first line is a header, and the value false indicates that the first line is not a header. The default value is false.

file_sample_rate

Integer

File sampling rate. The value ranges from 0 to 100. 100 indicates 100% full scanning. 0 indicates that only one file in each folder is scanned.

table_depth

Integer

Table depth. Assume that the path obs://a/b/c/d/e=1/f=99 exists and the data storage path is obs://a/b. A table depth of 2 indicates that d is used as the boundary: d is the table name, and e=1 and f=99 indicate that table d is a partitioned table whose partition keys are e and f and whose partition values are 1 and 99.

log_location

String

Data storage path, which is selected by users.

default_owner

String

Default owner information. By default, this is the user who created the task.

principals

Array of Principal objects

Entity information.

give_write

Boolean

Whether to grant the write permission. The options are true (yes) and false (no). The default value is false. The authorization entity gets the read and write permissions if the write permission is granted.

instance_id

String

Instance ID.

rediscovery_policy

String

Rediscovery policy. The options are FULL_DISCOVERY (full discovery) and INCREMENTAL_DISCOVERY (incremental discovery). The default value is FULL_DISCOVERY.

Enumeration values:

  • FULL_DISCOVERY
  • INCREMENTAL_DISCOVERY

execute_strategy

String

Metadata discovery execution mode. The options are MANNUAL (manual) and SCHEDULE (scheduled). The default value is MANNUAL.

Enumeration values:

  • MANNUAL
  • SCHEDULE

execute_frequency

String

Execution frequency. The options are MONTHLY (monthly), WEEKLY (weekly), DAILY (daily), and HOURLY (hourly).

Enumeration values:

  • MONTHLY
  • WEEKLY
  • DAILY
  • HOURLY

execute_day

String

Date when the metadata discovery task is executed. When execute_frequency is set to MONTHLY, this parameter indicates the day of the month on which the task is executed. The value ranges from 1 to 31. If the specified day does not exist in the current month, the task is not executed; for example, if execute_frequency is set to MONTHLY and execute_day is set to 30, the metadata discovery task is not triggered in February. If execute_frequency is set to WEEKLY, execute_day indicates the day of the week. The value ranges from 1 to 7. If execute_frequency is set to DAILY or HOURLY, set this parameter to *, indicating that the scheduled task is executed every day.

execute_hour

String

Hour when the metadata discovery schedule is executed. When execute_frequency is set to MONTHLY, WEEKLY, or DAILY, this parameter indicates the hour of the selected day at which the task is executed. The value ranges from 0 to 23. If execute_frequency is set to HOURLY, set this parameter to *, indicating that the task is triggered every hour.

execute_minute

String

Specifies the minute when the metadata discovery task is executed. The value ranges from 0 to 59, indicating that the task is executed at the specified minute.

Table 20 Principal

Parameter

Type

Description

principal_type

String

Entity type. USER: user. GROUP: group. ROLE: role. SHARE: share. OTHER: others.

Enumeration values:

  • USER
  • GROUP
  • ROLE
  • SHARE
  • OTHER

principal_source

String

Entity source. IAM: cloud user. SAML: SAML-based federated user. LDAP: LDAP user. LOCAL: local user. AGENTTENANT: agency. OTHER: others.

Enumeration values:

  • IAM
  • SAML
  • LDAP
  • LOCAL
  • AGENTTENANT
  • OTHER

principal_name

String

Entity name. The value can contain 1 to 49 characters. Only letters, digits, underscores (_), hyphens (-), and periods (.) are allowed.

Status code: 400

Table 21 Response body parameters

Parameter

Type

Description

error_code

String

Error code.

error_msg

String

Error description.

common_error_code

String

CBC common error code.

solution_msg

String

Solution.

Status code: 404

Table 22 Response body parameters

Parameter

Type

Description

error_code

String

Error code.

error_msg

String

Error description.

common_error_code

String

CBC common error code.

solution_msg

String

Solution.

Status code: 500

Table 23 Response body parameters

Parameter

Type

Description

error_code

String

Error code.

error_msg

String

Error description.

common_error_code

String

CBC common error code.

solution_msg

String

Solution.
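
All error responses share the shape described in the tables above. The following sketch, which assumes the Python requests library and a JSON error body, surfaces error_code, error_msg, and solution_msg when the call does not return 201.

# Sketch of generic error handling for the error-response tables above.
def raise_for_job_error(resp):
    """Return the parsed job on 201; otherwise raise with the service's error fields."""
    if resp.status_code == 201:
        return resp.json()
    err = resp.json()
    raise RuntimeError(
        f"create job failed ({resp.status_code}): "
        f"{err.get('error_code')} - {err.get('error_msg')}; "
        f"solution: {err.get('solution_msg')}"
    )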

Example Requests

POST https://{endpoint}/v1/{project_id}/instances/{instance_id}/lf-jobs

{
  "name" : "testjob",
  "description" : "testjob",
  "type" : "METADATA_MIGRATION",
  "parameter" : {
    "metadata_migration_parameter" : {
      "datasource_type" : "ALIYUN_DLF",
      "datasource_parameter" : {
        "jdbc_url" : "jdbc:protocol://host:port/db_name",
        "username" : "root",
        "password" : "password",
        "endpoint" : "xxxx.com",
        "access_key" : "LTAIxxxxxxxxxxxxxxxxRxPG",
        "secret_key" : "12345xxxxxxxxxxxxxxxxNabcdefgh",
        "subnet_ip" : "127.0.0.1"
      },
      "source_catalog" : "sourceCatalog1",
      "target_catalog" : "targetCatalog1",
      "conflict_strategy" : "UPSERT",
      "log_location" : "obs://logStore/2023",
      "sync_objects" : [ "string" ],
      "default_owner" : "string",
      "locations" : [ {
        "key" : "test/test1",
        "value" : "test2/db"
      } ],
      "instance_id" : "string",
      "ignore_obs_checked" : false,
      "network_type" : "EIP",
      "accepted_vpc_id" : "13551d6b-755d-4757-b956-536f674975c0"
    },
    "permission_migration_parameter" : {
      "location" : "obs://location/uri/",
      "file_name" : "string",
      "log_location" : "obs://logStore/2023",
      "policy_type" : "DLF",
      "catalog_id" : "test_catalog",
      "instance_id" : "string",
      "ranger_permission_migration_principal_relas" : {
        "user_to" : "IAM_USER",
        "user_prefix" : "string",
        "user_suffix" : "string",
        "group_to" : "IAM_USER",
        "group_prefix" : "string",
        "group_suffix" : "string",
        "role_to" : "IAM_USER",
        "role_prefix" : "string",
        "role_suffix" : "string"
      }
    },
    "metadata_discovery_parameter" : {
      "data_location" : "obs://logStore/2023",
      "target_catalog" : "targetCatalog1",
      "target_database" : "targetCatalog1",
      "conflict_strategy" : "UPDATE",
      "file_discovery_type" : "PARQUET",
      "separator" : ",",
      "quote" : "DOUBLE_QUOTE",
      "escape" : "\\",
      "header" : false,
      "file_sample_rate" : 100,
      "table_depth" : 3,
      "log_location" : "obs://logStore/2023",
      "default_owner" : "testOwner",
      "principals" : [ {
        "principal_type" : "USER",
        "principal_source" : "IAM",
        "principal_name" : "user1"
      } ],
      "give_write" : false,
      "instance_id" : "abcdefgh12345678abcdefgh12345678",
      "rediscovery_policy" : "FULL_DISCOVERY",
      "execute_strategy" : "MANNUAL",
      "execute_frequency" : "MONTHLY",
      "execute_day" : 1,
      "execute_hour" : 1,
      "execute_minute" : 1
    }
  }
}
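
The same request can be sent programmatically. The sketch below uses the Python requests library with a placeholder endpoint, placeholder IDs, and a placeholder token, and only fills in a minimal body; the full field set is shown in the example above.

# Sending the create-job request with the Python requests library (placeholders throughout).
import requests

endpoint = "https://lakeformation.example.com"   # assumed service endpoint
project_id = "your_project_id"                   # replace with the real project ID
instance_id = "your_instance_id"                 # replace with the real instance ID
url = f"{endpoint}/v1/{project_id}/instances/{instance_id}/lf-jobs"

body = {
    "name": "testjob",
    "type": "METADATA_MIGRATION",
    "parameter": {
        "metadata_migration_parameter": {
            # fill in the fields shown in the example request above
        }
    },
}

resp = requests.post(url, headers={"X-Auth-Token": "xxxxxx"}, json=body)
print(resp.status_code, resp.json())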

Example Responses

Status code: 201

Successful job creation.

{
  "id" : "03141229-84cd-4b1b-9733-dd124320c125",
  "name" : "testjob",
  "description" : "testJob",
  "type" : "METADATA_MIGRATION",
  "parameter" : {
    "metadata_migration_parameter" : {
      "datasource_type" : "ALIYUN_DLF",
      "datasource_parameter" : {
        "endpoint" : "protocol://xxxx.xxxx.com"
      },
      "source_catalog" : "sourceCatalog1",
      "target_catalog" : "targetCatalog1",
      "conflict_strategy" : "UPDATE",
      "log_location" : "obs://logStore/2023",
      "sync_objects" : [ "DATABASE" ],
      "locations" : [ {
        "key" : "test/test1",
        "value" : "test2/db"
      } ]
    }
  },
  "status" : {
    "status" : "SUCCESS"
  }
}

Status code: 400

Bad Request

{
  "error_code" : "common.01000001",
  "error_msg" : "failed to read http request, please check your input, code: 400, reason: Type mismatch., cause: TypeMismatchException"
}

Status code: 401

Unauthorized

{
  "error_code" : "APIG.1002",
  "error_msg" : "Incorrect token or token resolution failed"
}

Status code: 403

Forbidden

{
  "error" : {
    "code" : "403",
    "message" : "X-Auth-Token is invalid in the request",
    "error_code" : null,
    "error_msg" : null,
    "title" : "Forbidden"
  },
  "error_code" : "403",
  "error_msg" : "X-Auth-Token is invalid in the request",
  "title" : "Forbidden"
}

Status code: 404

Not Found

{
  "error_code" : "common.01000001",
  "error_msg" : "response status exception, code: 404"
}

Status code: 408

Request Timeout

{
  "error_code" : "common.00000408",
  "error_msg" : "timeout exception occurred"
}

Status code: 500

Internal Server Error

{
  "error_code" : "common.00000500",
  "error_msg" : "internal error"
}

Status Codes

Status Code

Description

201

Successful job creation.

400

Bad Request

401

Unauthorized

403

Forbidden

404

Not Found

408

Request Timeout

500

Internal Server Error

Error Codes

See Error Codes.