
Obtaining an Auto Labeling Sample List

Updated on 2024-05-30 GMT+08:00

Function

Obtain a list of auto labeling samples in a dataset.

Debugging

You can debug this API in API Explorer, which supports automatic authentication. API Explorer can automatically generate SDK code examples and provides SDK code example debugging.

URI

GET /v2/{project_id}/datasets/{dataset_id}/auto-annotations/samples

Table 1 URI parameters

dataset_id (String, mandatory): Dataset ID.
project_id (String, mandatory): Project ID. For details, see Obtaining a Project ID and Name.

Table 2 Query parameters

high_score (String, optional): Upper limit of the confidence score. The default value is 1.
label_name (String, optional): Label name.
label_type (Integer, optional): Labeling type. Options:

  • 0: image classification

  • 1: object detection

  • 3: image segmentation

  • 100: text classification

  • 101: named entity recognition

  • 102: text triplet

  • 200: sound classification

  • 201: speech content

  • 202: speech paragraph labeling

  • 400: table dataset

  • 600: video labeling

  • 900: free format

limit (Integer, optional): Maximum number of records returned on each page. The value ranges from 1 to 100. The default value is 10.
low_score (String, optional): Lower limit of the confidence score. The default value is 0.
offset (Integer, optional): Start page for paginated display. The default value is 0.
order (String, optional): Sorting order of the query. Options:

  • asc: ascending order

  • desc: descending order (default value)

process_parameter (String, optional): Image resize configuration, in the same format as the OBS settings. For details, see Resizing Images. For example, image/resize,m_lfit,h_200 resizes the image proportionally to a height of 200 pixels.
search_conditions (String, optional): URL-encoded multi-dimensional search criteria. Multiple search conditions are combined with AND.
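As an illustrative sketch (not an official SDK snippet), the query parameters above can be assembled into a request URL with Python's standard library; the endpoint, project ID, and dataset ID values below are placeholders:

```python
from urllib.parse import urlencode

def build_sample_list_url(endpoint, project_id, dataset_id,
                          limit=10, offset=0, search_conditions=None):
    # Base path of the API; endpoint/project_id/dataset_id are placeholders.
    path = (f"https://{endpoint}/v2/{project_id}/datasets/{dataset_id}"
            "/auto-annotations/samples")
    params = {"limit": limit, "offset": offset}
    if search_conditions is not None:
        # Multiple search conditions are combined with AND; urlencode
        # URL-encodes the value as the API expects.
        params["search_conditions"] = search_conditions
    return path + "?" + urlencode(params)

url = build_sample_list_url("modelarts.example.com", "p-123", "d-456", limit=50)
```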

Request Parameters

None

Response Parameters

Status code: 200

Table 3 Response body parameters

sample_count (Integer): Number of samples.
samples (Array of DescribeSampleResp objects): List of samples.

Table 4 DescribeSampleResp

check_accept (Boolean): Whether the sample passed acceptance. Used for team labeling. Options:

  • true: The acceptance is passed.

  • false: The acceptance is not passed.

check_comment (String): Acceptance comment. Used for team labeling.
check_score (String): Acceptance score. Used for team labeling.
deletion_reasons (Array of strings): Reasons for deleting the sample. Used for healthcare.
hard_details (Map<String,HardDetail>): Details about a hard example, including the description, causes, and handling suggestions.
labelers (Array of Worker objects): List of team members to whom the sample is assigned. Used for team labeling.
labels (Array of SampleLabel objects): List of sample labels.
metadata (SampleMetadata object): Key-value attribute pairs of the sample metadata.
review_accept (Boolean): Whether the review is accepted. Used for team labeling. Options:

  • true: The review is accepted.

  • false: The review is rejected.

review_comment (String): Review comment. Used for team labeling.
review_score (String): Review score. Used for team labeling.
sample_data (Array of strings): List of sample data.
sample_dir (String): Path for storing the sample.
sample_id (String): Sample ID.
sample_name (String): Sample name.
sample_size (Long): Sample size or text length, in bytes.
sample_status (String): Sample status. Options:

  • __ALL__: labeled

  • __NONE__: unlabeled

  • __UNCHECK__: to be accepted

  • __ACCEPTED__: accepted

  • __REJECTED__: rejected

  • __UNREVIEWED__: to be reviewed

  • __REVIEWED__: reviewed

  • __WORKFORCE_SAMPLED__: sampled

  • __WORKFORCE_SAMPLED_UNCHECK__: sampling pending check

  • __WORKFORCE_SAMPLED_CHECKED__: sampling checked

  • __WORKFORCE_SAMPLED_ACCEPTED__: sampling accepted

  • __WORKFORCE_SAMPLED_REJECTED__: sampling rejected

  • __AUTO_ANNOTATION__: to be confirmed

sample_time (Long): Sample time, that is, the time when the OBS file was last modified.
sample_type (Integer): Sample type. Options:

  • 0: image

  • 1: text

  • 2: audio

  • 4: table

  • 6: video

  • 9: free format

score (String): Comprehensive score. Used for team labeling.
source (String): Source address of the sample data.
sub_sample_url (String): Subsample URL. Used for healthcare.
worker_id (String): ID of the labeling team member. Used for team labeling.

Table 5 HardDetail

alo_name (String): Alias.
id (Integer): Reason ID.
reason (String): Reason description.
suggestion (String): Handling suggestion.

Table 6 Worker

create_time (Long): Time when the worker was created.
description (String): Description of the labeling team member, containing 0 to 256 characters. Special characters (^!<>=&"') are not allowed.
email (String): Email address of the labeling team member.
role (Integer): Role. Options:

  • 0: marker

  • 1: reviewer

  • 2: team manager

  • 3: dataset owner

status (Integer): Current login status of the labeling team member. Options:

  • 0: No invitation email is sent.

  • 1: The invitation email is sent but the member has not logged in.

  • 2: The member has logged in.

  • 3: The member has been deleted.

update_time (Long): Time when the worker was last updated.
worker_id (String): ID of the labeling team member.
workforce_id (String): ID of the labeling team.

Table 7 SampleLabel

annotated_by (String): Video labeling method, indicating whether the video was labeled manually or automatically. Options:

  • human: manual labeling

  • auto: auto labeling

id (String): Label ID.
name (String): Label name.
property (SampleLabelProperty object): Key-value attribute pairs of the sample label, such as the object shape and shape feature.
score (Float): Confidence. The value ranges from 0 to 1.
type (Integer): Label type. Options:

  • 0: image classification

  • 1: object detection

  • 3: image segmentation

  • 100: text classification

  • 101: named entity recognition

  • 102: text triplet relationship

  • 103: text triplet entity

  • 200: sound classification

  • 201: speech content

  • 202: speech paragraph labeling

  • 600: video labeling

Table 8 SampleLabelProperty

@modelarts:content (String): Speech text content. A default attribute dedicated to the speech label (including the speech content and the speech start and end points).
@modelarts:end_index (Integer): End position of the text. A default attribute dedicated to the named entity label. The end position excludes the character at end_index. Examples:

  • If the text is "Barack Hussein Obama II (born on August 4, 1961) is an attorney and politician.", the start_index and end_index of Barack Hussein Obama II are 0 and 23, respectively.

  • If the text is "Hope is the thing with feathers", start_index and end_index of Hope are 0 and 4, respectively.
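The inclusive start_index / exclusive end_index convention matches Python's slice semantics, so the examples above can be checked directly:

```python
# end_index is exclusive: text[start_index:end_index] yields the entity.
text = "Barack Hussein Obama II (born on August 4, 1961) is an attorney and politician."
entity = text[0:23]          # start_index 0, end_index 23
assert entity == "Barack Hussein Obama II"

text2 = "Hope is the thing with feathers"
assert text2[0:4] == "Hope"  # start_index 0, end_index 4
```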

@modelarts:end_time (String): Speech end time. A default attribute dedicated to the speech start/end point label, in the format hh:mm:ss.SSS (hh: hour; mm: minute; ss: second; SSS: millisecond).
@modelarts:feature (Object): Shape feature, of type List. A default attribute dedicated to the object detection label. The upper left corner of the image is the coordinate origin [0,0], and each coordinate point is represented as [x,y], where x is the horizontal coordinate and y is the vertical coordinate (both greater than or equal to 0). The format of each shape is as follows:

  • bndbox: consists of two points, for example, [[0,10],[50,95]]. The upper left vertex of the rectangle is the first point, and the lower right vertex is the second point. That is, the x-coordinate of the first point must be less than the x-coordinate of the second point, and the y-coordinate of the first point must be less than the y-coordinate of the second point.

  • polygon: consists of multiple points that are connected in sequence to form a polygon, for example, [[0,100],[50,95],[10,60],[500,400]].

  • circle: consists of the center and radius, for example, [[100,100],[50]].

  • line: consists of two points, for example, [[0,100],[50,95]]. The first point is the start point, and the second point is the end point.

  • dashed: consists of two points, for example, [[0,100],[50,95]]. The first point is the start point, and the second point is the end point.

  • point: consists of one point, for example, [[0,100]].

  • polyline: consists of multiple points, for example, [[0,100],[50,95],[10,60],[500,400]].
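As a quick sanity check of the bndbox format, a small helper (hypothetical, not part of any SDK) can validate the two points and derive the box size:

```python
def bndbox_size(points):
    # points is [[x1, y1], [x2, y2]]: upper left then lower right vertex.
    (x1, y1), (x2, y2) = points
    if not (x1 < x2 and y1 < y2):
        raise ValueError("upper left vertex must come first")
    return x2 - x1, y2 - y1  # (width, height)

w, h = bndbox_size([[0, 10], [50, 95]])  # the bndbox example above
```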

@modelarts:from (String): Start entity ID of the triplet relationship label. A default attribute dedicated to the triplet relationship label.
@modelarts:hard (String): Whether the sample is labeled as a hard example. A default attribute. Options:

  • 0/false: The label is not a hard example.

  • 1/true: The label is a hard example.

@modelarts:hard_coefficient (String): Coefficient of difficulty of the label level. A default attribute. The value ranges from 0 to 1.
@modelarts:hard_reasons (String): Reasons why the sample is a hard example. A default attribute. Hard example reason IDs are separated by hyphens (-), for example, 3-20-21-19. Options:

  • 0: No object is identified.

  • 1: The confidence is low.

  • 2: The clustering result based on the training dataset is inconsistent with the prediction result.

  • 3: The prediction result is greatly different from the data of the same type in the training dataset.

  • 4: The prediction results of multiple consecutive similar images are inconsistent.

  • 5: There is a large offset between the image resolution and the feature distribution of the training dataset.

  • 6: There is a large offset between the aspect ratio of the image and the feature distribution of the training dataset.

  • 7: There is a large offset between the brightness of the image and the feature distribution of the training dataset.

  • 8: There is a large offset between the saturation of the image and the feature distribution of the training dataset.

  • 9: There is a large offset between the color richness of the image and the feature distribution of the training dataset.

  • 10: There is a large offset between the definition of the image and the feature distribution of the training dataset.

  • 11: There is a large offset between the number of frames of the image and the feature distribution of the training dataset.

  • 12: There is a large offset between the standard deviation of area of image frames and the feature distribution of the training dataset.

  • 13: There is a large offset between the aspect ratio of image frames and the feature distribution of the training dataset.

  • 14: There is a large offset between the area portion of image frames and the feature distribution of the training dataset.

  • 15: There is a large offset between the edge of image frames and the feature distribution of the training dataset.

  • 16: There is a large offset between the brightness of image frames and the feature distribution of the training dataset.

  • 17: There is a large offset between the definition of image frames and the feature distribution of the training dataset.

  • 18: There is a large offset between the stack of image frames and the feature distribution of the training dataset.

  • 19: The data augmentation result based on GaussianBlur is inconsistent with the prediction result of the original image.

  • 20: The data augmentation result based on fliplr is inconsistent with the prediction result of the original image.

  • 21: The data augmentation result based on Crop is inconsistent with the prediction result of the original image.

  • 22: The data augmentation result based on flipud is inconsistent with the prediction result of the original image.

  • 23: The data augmentation result based on scale is inconsistent with the prediction result of the original image.

  • 24: The data augmentation result based on translate is inconsistent with the prediction result of the original image.

  • 25: The data augmentation result based on shear is inconsistent with the prediction result of the original image.

  • 26: The data augmentation result based on superpixels is inconsistent with the prediction result of the original image.

  • 27: The data augmentation result based on sharpen is inconsistent with the prediction result of the original image.

  • 28: The data augmentation result based on add is inconsistent with the prediction result of the original image.

  • 29: The data augmentation result based on invert is inconsistent with the prediction result of the original image.

  • 30: The data is predicted to be abnormal.
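A client that needs the individual reason IDs can split the hyphen-separated string; this helper is illustrative only:

```python
def parse_hard_reasons(value):
    # "3-20-21-19" -> [3, 20, 21, 19]; an empty string yields no reasons.
    return [int(part) for part in value.split("-")] if value else []
```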

@modelarts:shape (String): Object shape. A default attribute dedicated to the object detection label; left empty by default. Options:

  • bndbox: rectangle

  • polygon: polygon

  • circle: circle

  • line: straight line

  • dashed: dashed line

  • point: point

  • polyline: polyline

@modelarts:source (String): Speech source. A default attribute dedicated to the speech start/end point label; can be set to a speaker or narrator.
@modelarts:start_index (Integer): Start position of the text. A default attribute dedicated to the named entity label. Indexing starts from 0, and the character at start_index is included.
@modelarts:start_time (String): Speech start time. A default attribute dedicated to the speech start/end point label, in the format hh:mm:ss.SSS (hh: hour; mm: minute; ss: second; SSS: millisecond).
@modelarts:to (String): Destination entity ID of the triplet relationship label. A default attribute dedicated to the triplet relationship label.
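For clients that need to compare or sort the hh:mm:ss.SSS speech timestamps above, a small conversion helper (a sketch, not part of any SDK) may be useful:

```python
def timestamp_to_millis(ts):
    # Convert "hh:mm:ss.SSS" to an integer number of milliseconds.
    hh, mm, rest = ts.split(":")
    ss, sss = rest.split(".")
    return ((int(hh) * 60 + int(mm)) * 60 + int(ss)) * 1000 + int(sss)
```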

Table 9 SampleMetadata

@modelarts:import_origin (Integer): Sample source. A default attribute.
@modelarts:hard (Double): Whether the sample is labeled as a hard example. A default attribute. Options:

  • 0: The label is not a hard example.

  • 1: The label is a hard example.

@modelarts:hard_coefficient (Double): Coefficient of difficulty of the sample level. A default attribute. The value ranges from 0 to 1.
@modelarts:hard_reasons (Array of integers): IDs of the reasons why the sample is a hard example. A default attribute. Options:

  • 0: No object is identified.

  • 1: The confidence is low.

  • 2: The clustering result based on the training dataset is inconsistent with the prediction result.

  • 3: The prediction result is greatly different from the data of the same type in the training dataset.

  • 4: The prediction results of multiple consecutive similar images are inconsistent.

  • 5: There is a large offset between the image resolution and the feature distribution of the training dataset.

  • 6: There is a large offset between the aspect ratio of the image and the feature distribution of the training dataset.

  • 7: There is a large offset between the brightness of the image and the feature distribution of the training dataset.

  • 8: There is a large offset between the saturation of the image and the feature distribution of the training dataset.

  • 9: There is a large offset between the color richness of the image and the feature distribution of the training dataset.

  • 10: There is a large offset between the definition of the image and the feature distribution of the training dataset.

  • 11: There is a large offset between the number of frames of the image and the feature distribution of the training dataset.

  • 12: There is a large offset between the standard deviation of area of image frames and the feature distribution of the training dataset.

  • 13: There is a large offset between the aspect ratio of image frames and the feature distribution of the training dataset.

  • 14: There is a large offset between the area portion of image frames and the feature distribution of the training dataset.

  • 15: There is a large offset between the edge of image frames and the feature distribution of the training dataset.

  • 16: There is a large offset between the brightness of image frames and the feature distribution of the training dataset.

  • 17: There is a large offset between the definition of image frames and the feature distribution of the training dataset.

  • 18: There is a large offset between the stack of image frames and the feature distribution of the training dataset.

  • 19: The data augmentation result based on GaussianBlur is inconsistent with the prediction result of the original image.

  • 20: The data augmentation result based on fliplr is inconsistent with the prediction result of the original image.

  • 21: The data augmentation result based on Crop is inconsistent with the prediction result of the original image.

  • 22: The data augmentation result based on flipud is inconsistent with the prediction result of the original image.

  • 23: The data augmentation result based on scale is inconsistent with the prediction result of the original image.

  • 24: The data augmentation result based on translate is inconsistent with the prediction result of the original image.

  • 25: The data augmentation result based on shear is inconsistent with the prediction result of the original image.

  • 26: The data augmentation result based on superpixels is inconsistent with the prediction result of the original image.

  • 27: The data augmentation result based on sharpen is inconsistent with the prediction result of the original image.

  • 28: The data augmentation result based on add is inconsistent with the prediction result of the original image.

  • 29: The data augmentation result based on invert is inconsistent with the prediction result of the original image.

  • 30: The data is predicted to be abnormal.

@modelarts:size (Array of objects): Image size, including the width, height, and depth, as a list of integers. The first number is the width (pixels), the second is the height (pixels), and the third is the depth (optional; the default value is 3). For example, [100,200,3] and [100,200] are both valid. Note: This parameter is mandatory only when the sample label list contains an object detection label.

Request Example

The following request obtains an auto labeling sample list:

GET https://{endpoint}/v2/{project_id}/datasets/{dataset_id}/auto-annotations/samples
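A minimal Python sketch of issuing this request with the standard library, assuming IAM token authentication via the X-Auth-Token header (a common pattern for Huawei Cloud APIs); the endpoint, IDs, and token are placeholders you must supply:

```python
import json
import urllib.request

def list_auto_labeling_samples(endpoint, project_id, dataset_id, token,
                               limit=10, offset=0):
    # Build the request URL; all identifiers here are placeholders.
    url = (f"https://{endpoint}/v2/{project_id}/datasets/{dataset_id}"
           f"/auto-annotations/samples?limit={limit}&offset={offset}")
    req = urllib.request.Request(url, headers={"X-Auth-Token": token})
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)  # parsed response body (see Table 3)
```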

Response Example

Status code: 200

OK

{
  "sample_count" : 1,
  "samples" : [ {
    "sample_id" : "10de574cbf0f09d4798b87ba0eb34e37",
    "sample_type" : 0,
    "labels" : [ {
      "name" : "sunflowers",
      "type" : 0,
      "id" : "1",
      "property" : {
        "@modelarts:hard_coefficient" : "0.0",
        "@modelarts:hard" : "false"
      },
      "score" : 1.0
    } ],
    "source" : "https://test-obs.obs.xxx.com:443/animals/8_1597649054631.jpeg?AccessKeyId=alRn0xskf5luJaG2jBJe&Expires=1606299230&x-image-process=image%2Fresize%2Cm_lfit%2Ch_200&Signature=MNAAjXz%2Fmwn%2BSabSK9wkaG6b6bU%3D",
    "metadata" : {
      "@modelarts:hard_coefficient" : 1.0,
      "@modelarts:hard" : true,
      "@modelarts:import_origin" : 0,
      "@modelarts:hard_reasons" : [ 8, 6, 5, 3 ]
    },
    "sample_time" : 1601432758000,
    "sample_status" : "UN_ANNOTATION"
  } ]
}
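As an illustration of consuming this response, the sketch below (a hypothetical helper, not part of any SDK) keeps only labels whose confidence score meets a threshold:

```python
# A trimmed copy of the response example above.
response = {
    "sample_count": 1,
    "samples": [{
        "sample_id": "10de574cbf0f09d4798b87ba0eb34e37",
        "sample_type": 0,
        "labels": [{"name": "sunflowers", "type": 0, "id": "1", "score": 1.0}],
        "sample_status": "UN_ANNOTATION",
    }],
}

def high_confidence_labels(resp, threshold=0.9):
    # Collect (sample_id, label name, score) for labels at or above threshold.
    hits = []
    for sample in resp.get("samples", []):
        for label in sample.get("labels", []):
            if label.get("score", 0.0) >= threshold:
                hits.append((sample["sample_id"], label["name"], label["score"]))
    return hits
```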

Status Code

200: OK
401: Unauthorized
403: Forbidden
404: Not Found

Error Code

For details, see Error Codes.
