Updated on 2023-12-14 GMT+08:00

Updating Status of a Team Labeling Acceptance Task

Function

This API determines the acceptance scope of a team labeling task, including all labeled data, and updates the sample data accordingly.

Debugging

You can debug this API in API Explorer, which supports automatic authentication. API Explorer can automatically generate SDK code examples and provides a debugging function for them.

URI

PUT /v2/{project_id}/datasets/{dataset_id}/workforce-tasks/{workforce_task_id}/acceptance/status

Table 1 URI parameters

dataset_id (String, mandatory): Dataset ID.

project_id (String, mandatory): Project ID. For details, see Obtaining a Project ID and Name.

workforce_task_id (String, mandatory): ID of a team labeling task.

Table 2 Query parameters

locale (String, mandatory): Language. Options:

  • zh-cn: Chinese
  • en-us: English (default value)
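For example, with {endpoint} standing for the service endpoint and placeholder path parameters, the full request line looks as follows:

PUT https://{endpoint}/v2/{project_id}/datasets/{dataset_id}/workforce-tasks/{workforce_task_id}/acceptance/status?locale=en-us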

Request Parameters

Table 3 Request body parameters

action (Integer, mandatory): Acceptance action. Options:

  • 0: All samples are passed.

  • 1: All samples are rejected.

  • 2: The acceptance is canceled.

  • 3: The sample list of acceptance conflicts is obtained.

  • 4: Only the single-accepted samples and unprocessed samples are passed.

  • 5: Only the single-accepted samples are passed.

overwrite_last_result (Boolean, optional): Whether to overwrite the existing labeled data. Options:

  • true: The labeled data is overwritten.

  • false (default): The labeled data is not overwritten.
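For illustration, a request body that passes only the single-accepted samples and overwrites the existing labeled data would look as follows (the values are chosen for illustration only):

{
  "action" : 5,
  "overwrite_last_result" : true
}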

Response Parameters

Status code: 200

Table 4 Response body parameters

sample_count (Integer): Total number of accepted samples.

samples (Array of DescribeSampleResp objects): List of accepted samples. For details, see Table 5.

Table 5 DescribeSampleResp

check_accept (Boolean): Whether the acceptance is passed, which is used for team labeling. Options:

  • true: The acceptance is passed.

  • false: The acceptance is not passed.

check_comment (String): Acceptance comment, which is used for team labeling.

check_score (String): Acceptance score, which is used for team labeling.

deletion_reasons (Array of strings): Reasons for deleting the sample, which is used for healthcare.

hard_details (Map<String,HardDetail>): Details about difficult problems, including their descriptions, causes, and handling suggestions. For details, see Table 6.

labelers (Array of Worker objects): List of the labeling team members to whom the sample is allocated, which is used for team labeling. For details, see Table 7.

labels (Array of SampleLabel objects): List of sample labels. For details, see Table 8.

metadata (SampleMetadata object): Attribute key-value pairs of the sample metadata. For details, see Table 10.

review_accept (Boolean): Whether the review is passed, which is used for team labeling. Options:

  • true: The review is accepted.

  • false: The review is rejected.

review_comment (String): Review comment, which is used for team labeling.

review_score (String): Review score, which is used for team labeling.

sample_data (Array of strings): List of sample data.

sample_dir (String): Path for storing the sample.

sample_id (String): Sample ID.

sample_name (String): Sample name.

sample_size (Long): Sample size or text length, in bytes.

sample_status (String): Sample status. Options:

  • __ALL__: labeled

  • __NONE__: unlabeled

  • __UNCHECK__: to be accepted

  • __ACCEPTED__: accepted

  • __REJECTED__: rejected

  • __UNREVIEWED__: to be reviewed

  • __REVIEWED__: reviewed

  • __WORKFORCE_SAMPLED__: sampled

  • __WORKFORCE_SAMPLED_UNCHECK__: sampling pending check

  • __WORKFORCE_SAMPLED_CHECKED__: sampling checked

  • __WORKFORCE_SAMPLED_ACCEPTED__: sampling accepted

  • __WORKFORCE_SAMPLED_REJECTED__: sampling rejected

  • __AUTO_ANNOTATION__: to be confirmed

sample_time (Long): Sample time, that is, the time when the OBS object was last modified.

sample_type (Integer): Sample type. Options:

  • 0: image

  • 1: text

  • 2: audio

  • 4: table

  • 6: video

  • 9: free format

score (String): Comprehensive score, which is used for team labeling.

source (String): Source address of the sample data.

sub_sample_url (String): Subsample URL, which is used for healthcare.

worker_id (String): ID of a labeling team member, which is used for team labeling.
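For illustration only, a non-empty response (for example, after querying the acceptance-conflict sample list with action set to 3) might look like the following sketch. All field values are hypothetical, and most optional fields are omitted:

{
  "sample_count" : 1,
  "samples" : [ {
    "sample_id" : "example-sample-id",
    "sample_name" : "image_0001.jpg",
    "sample_type" : 0,
    "sample_status" : "__UNCHECK__",
    "check_accept" : false,
    "check_comment" : "label box does not match the object"
  } ]
}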

Table 6 HardDetail

alo_name (String): Alias.

id (Integer): Reason ID.

reason (String): Reason description.

suggestion (String): Handling suggestion.

Table 7 Worker

create_time (Long): Time when the worker was created.

description (String): Description of the labeling team member. The value contains 0 to 256 characters and cannot contain the special characters ^!<>=&"'.

email (String): Email address of the labeling team member.

role (Integer): Role. Options:

  • 0: labeler

  • 1: reviewer

  • 2: team manager

  • 3: dataset owner

status (Integer): Current login status of the labeling team member. Options:

  • 0: No invitation email is sent.

  • 1: The invitation email is sent but the member has not logged in.

  • 2: The member has logged in.

  • 3: The member has been deleted.

update_time (Long): Time when the worker was last updated.

worker_id (String): ID of the labeling team member.

workforce_id (String): ID of the labeling team.

Table 8 SampleLabel

annotated_by (String): Video labeling method, which is used to determine whether a video is labeled manually or automatically. Options:

  • human: manual labeling

  • auto: auto labeling

id (String): Label ID.

name (String): Label name.

property (SampleLabelProperty object): Attribute key-value pairs of the sample label, such as the object shape and shape feature. For details, see Table 9.

score (Float): Confidence. The value ranges from 0 to 1.

type (Integer): Label type. Options:

  • 0: image classification

  • 1: object detection

  • 3: image segmentation

  • 100: text classification

  • 101: named entity recognition

  • 102: text triplet relationship

  • 103: text triplet entity

  • 200: sound classification

  • 201: speech content

  • 202: speech paragraph labeling

  • 600: video labeling

Table 9 SampleLabelProperty

@modelarts:content (String): Speech text content, which is a default attribute dedicated to the speech label (including the speech content and the speech start and end points).

@modelarts:end_index (Integer): End position of the text, which is a default attribute dedicated to the named entity label. The end position does not include the character at end_index; the indices behave like half-open intervals, as the sketch after the examples shows. Examples:

  • If the text is "Barack Hussein Obama II (born on August 4, 1961) is an attorney and politician.", the start_index and end_index of Barack Hussein Obama II are 0 and 23, respectively.

  • If the text is "Hope is the thing with feathers", start_index and end_index of Hope are 0 and 4, respectively.
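These indices follow the same half-open convention as Python slicing, as this minimal sketch demonstrates:

# start_index is inclusive and end_index is exclusive, as in Python slicing.
text = "Barack Hussein Obama II (born on August 4, 1961) is an attorney and politician."
print(text[0:23])  # Barack Hussein Obama II

text = "Hope is the thing with feathers"
print(text[0:4])   # Hope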

@modelarts:end_time (String): Speech end time, which is a default attribute dedicated to the speech start/end point label, in the format of hh:mm:ss.SSS (hh indicates hours, mm minutes, ss seconds, and SSS milliseconds).

@modelarts:feature (Object): Shape feature, which is a default attribute dedicated to the object detection label, with a type of List. The upper left corner of the image is used as the coordinate origin [0, 0]. Each coordinate point is represented by [x, y], where x is the horizontal coordinate and y is the vertical coordinate (both x and y are greater than or equal to 0). The format of each shape is as follows (a JSON example follows this list):

  • bndbox: consists of two points, for example, [[0,10],[50,95]]. The upper left vertex of the rectangle is the first point, and the lower right vertex is the second point. That is, the x-coordinate of the first point must be less than the x-coordinate of the second point, and the y-coordinate of the first point must be less than the y-coordinate of the second point.

  • polygon: consists of multiple points that are connected in sequence to form a polygon, for example, [[0,100],[50,95],[10,60],[500,400]].

  • circle: consists of the center and radius, for example, [[100,100],[50]].

  • line: consists of two points, for example, [[0,100],[50,95]]. The first point is the start point, and the second point is the end point.

  • dashed: consists of two points, for example, [[0,100],[50,95]]. The first point is the start point, and the second point is the end point.

  • point: consists of one point, for example, [[0,100]].

  • polyline: consists of multiple points, for example, [[0,100],[50,95],[10,60],[500,400]].
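For illustration, an object detection label carrying a rectangle could combine @modelarts:shape and @modelarts:feature as follows; the label name is hypothetical, and the coordinates reuse the bndbox example above:

{
  "name" : "car",
  "type" : 1,
  "property" : {
    "@modelarts:shape" : "bndbox",
    "@modelarts:feature" : [ [ 0, 10 ], [ 50, 95 ] ]
  }
}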

@modelarts:from (String): ID of the start entity in a triplet relationship, which is a default attribute dedicated to the triplet relationship label.

@modelarts:hard (String): Whether the sample is labeled as a hard example, which is a default attribute. Options:

  • 0/false: The label is not a hard example.

  • 1/true: The label is a hard example.

@modelarts:hard_coefficient (String): Coefficient of difficulty at the label level, which is a default attribute. The value ranges from 0 to 1.

@modelarts:hard_reasons (String): Reasons why the sample is a hard example, which is a default attribute. Hard example reason IDs are separated by hyphens (-), for example, 3-20-21-19 (see the parsing sketch after this list). Options:

  • 0: No object is identified.

  • 1: The confidence is low.

  • 2: The clustering result based on the training dataset is inconsistent with the prediction result.

  • 3: The prediction result is greatly different from the data of the same type in the training dataset.

  • 4: The prediction results of multiple consecutive similar images are inconsistent.

  • 5: There is a large offset between the image resolution and the feature distribution of the training dataset.

  • 6: There is a large offset between the aspect ratio of the image and the feature distribution of the training dataset.

  • 7: There is a large offset between the brightness of the image and the feature distribution of the training dataset.

  • 8: There is a large offset between the saturation of the image and the feature distribution of the training dataset.

  • 9: There is a large offset between the color richness of the image and the feature distribution of the training dataset.

  • 10: There is a large offset between the definition of the image and the feature distribution of the training dataset.

  • 11: There is a large offset between the number of frames of the image and the feature distribution of the training dataset.

  • 12: There is a large offset between the standard deviation of area of image frames and the feature distribution of the training dataset.

  • 13: There is a large offset between the aspect ratio of image frames and the feature distribution of the training dataset.

  • 14: There is a large offset between the area portion of image frames and the feature distribution of the training dataset.

  • 15: There is a large offset between the edge of image frames and the feature distribution of the training dataset.

  • 16: There is a large offset between the brightness of image frames and the feature distribution of the training dataset.

  • 17: There is a large offset between the definition of image frames and the feature distribution of the training dataset.

  • 18: There is a large offset between the stack of image frames and the feature distribution of the training dataset.

  • 19: The data augmentation result based on GaussianBlur is inconsistent with the prediction result of the original image.

  • 20: The data augmentation result based on fliplr is inconsistent with the prediction result of the original image.

  • 21: The data augmentation result based on Crop is inconsistent with the prediction result of the original image.

  • 22: The data augmentation result based on flipud is inconsistent with the prediction result of the original image.

  • 23: The data augmentation result based on scale is inconsistent with the prediction result of the original image.

  • 24: The data augmentation result based on translate is inconsistent with the prediction result of the original image.

  • 25: The data augmentation result based on shear is inconsistent with the prediction result of the original image.

  • 26: The data augmentation result based on superpixels is inconsistent with the prediction result of the original image.

  • 27: The data augmentation result based on sharpen is inconsistent with the prediction result of the original image.

  • 28: The data augmentation result based on add is inconsistent with the prediction result of the original image.

  • 29: The data augmentation result based on invert is inconsistent with the prediction result of the original image.

  • 30: The data is predicted to be abnormal.
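Because the reason IDs arrive as a single hyphen-separated string, a consumer could split them out as in this minimal Python sketch:

# "@modelarts:hard_reasons" is a hyphen-separated string of reason IDs.
hard_reasons = "3-20-21-19"
reason_ids = [int(reason_id) for reason_id in hard_reasons.split("-")]
print(reason_ids)  # [3, 20, 21, 19]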

@modelarts:shape (String): Object shape, which is a default attribute dedicated to the object detection label and is left empty by default. Options:

  • bndbox: rectangle

  • polygon: polygon

  • circle: circle

  • line: straight line

  • dashed: dashed line

  • point: point

  • polyline: polyline

@modelarts:source (String): Speech source, which is a default attribute dedicated to the speech start/end point label; it can be set to the speaker or narrator.

@modelarts:start_index (Integer): Start position of the text, which is a default attribute dedicated to the named entity label. The start value begins from 0 and includes the character at start_index.

@modelarts:start_time (String): Speech start time, which is a default attribute dedicated to the speech start/end point label, in the format of hh:mm:ss.SSS (hh indicates hours, mm minutes, ss seconds, and SSS milliseconds).

@modelarts:to (String): ID of the end entity in a triplet relationship, which is a default attribute dedicated to the triplet relationship label.

Table 10 SampleMetadata

@modelarts:import_origin (Integer): Sample source, which is a default attribute.

@modelarts:hard (Double): Whether the sample is labeled as a hard example, which is a default attribute. Options:

  • 0: The label is not a hard example.

  • 1: The label is a hard example.

@modelarts:hard_coefficient (Double): Coefficient of difficulty at the sample level, which is a default attribute. The value ranges from 0 to 1.

@modelarts:hard_reasons (Array of integers): IDs of the reasons why the sample is a hard example, which is a default attribute. Options:

  • 0: No object is identified.

  • 1: The confidence is low.

  • 2: The clustering result based on the training dataset is inconsistent with the prediction result.

  • 3: The prediction result is greatly different from the data of the same type in the training dataset.

  • 4: The prediction results of multiple consecutive similar images are inconsistent.

  • 5: There is a large offset between the image resolution and the feature distribution of the training dataset.

  • 6: There is a large offset between the aspect ratio of the image and the feature distribution of the training dataset.

  • 7: There is a large offset between the brightness of the image and the feature distribution of the training dataset.

  • 8: There is a large offset between the saturation of the image and the feature distribution of the training dataset.

  • 9: There is a large offset between the color richness of the image and the feature distribution of the training dataset.

  • 10: There is a large offset between the definition of the image and the feature distribution of the training dataset.

  • 11: There is a large offset between the number of frames of the image and the feature distribution of the training dataset.

  • 12: There is a large offset between the standard deviation of area of image frames and the feature distribution of the training dataset.

  • 13: There is a large offset between the aspect ratio of image frames and the feature distribution of the training dataset.

  • 14: There is a large offset between the area portion of image frames and the feature distribution of the training dataset.

  • 15: There is a large offset between the edge of image frames and the feature distribution of the training dataset.

  • 16: There is a large offset between the brightness of image frames and the feature distribution of the training dataset.

  • 17: There is a large offset between the definition of image frames and the feature distribution of the training dataset.

  • 18: There is a large offset between the stack of image frames and the feature distribution of the training dataset.

  • 19: The data augmentation result based on GaussianBlur is inconsistent with the prediction result of the original image.

  • 20: The data augmentation result based on fliplr is inconsistent with the prediction result of the original image.

  • 21: The data augmentation result based on Crop is inconsistent with the prediction result of the original image.

  • 22: The data augmentation result based on flipud is inconsistent with the prediction result of the original image.

  • 23: The data augmentation result based on scale is inconsistent with the prediction result of the original image.

  • 24: The data augmentation result based on translate is inconsistent with the prediction result of the original image.

  • 25: The data augmentation result based on shear is inconsistent with the prediction result of the original image.

  • 26: The data augmentation result based on superpixels is inconsistent with the prediction result of the original image.

  • 27: The data augmentation result based on sharpen is inconsistent with the prediction result of the original image.

  • 28: The data augmentation result based on add is inconsistent with the prediction result of the original image.

  • 29: The data augmentation result based on invert is inconsistent with the prediction result of the original image.

  • 30: The data is predicted to be abnormal.

@modelarts:size (Array of objects): Image size, including the width, height, and depth, expressed as a list of numbers. The first number indicates the width (pixels), the second number indicates the height (pixels), and the third number indicates the depth; the depth can be left blank, and the default value is 3. For example, [100,200,3] and [100,200] are both valid. Note: This parameter is mandatory only when the sample label list contains an object detection label.
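For illustration, the metadata of an image sample used for object detection might look as follows (all values are hypothetical):

{
  "@modelarts:import_origin" : 0,
  "@modelarts:hard" : 0,
  "@modelarts:hard_coefficient" : 0.22,
  "@modelarts:size" : [ 100, 200, 3 ]
}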

Request Example

The following example sets action to 0 so that all samples are passed.

{
  "action" : 0
}
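As a minimal sketch, the call can be issued with Python's requests library. The endpoint, IDs, and token below are placeholders, and token-based authentication via the X-Auth-Token header is assumed:

import requests

# All values below are placeholders; substitute real IDs and a valid IAM token.
endpoint = "https://{endpoint}"  # replace with the service endpoint for your region
project_id = "{project_id}"
dataset_id = "{dataset_id}"
workforce_task_id = "{workforce_task_id}"

url = (f"{endpoint}/v2/{project_id}/datasets/{dataset_id}"
       f"/workforce-tasks/{workforce_task_id}/acceptance/status")

response = requests.put(
    url,
    params={"locale": "en-us"},               # query parameter from Table 2
    headers={"X-Auth-Token": "<IAM token>"},  # assumed token-based authentication
    json={"action": 0},                       # 0: all samples are passed
    timeout=30,
)
response.raise_for_status()
print(response.status_code, response.json())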

Response Example

Status code: 200

OK

{ }

Status Code

200: OK

401: Unauthorized

403: Forbidden

404: Not Found

Error Code

For details, see Error Codes.