Updated on 2024-06-11 GMT+08:00

Abnormal Status of a Dedicated Resource Pool

Resource Quota Limit

When you perform operations on a dedicated resource pool (for example, scaling resources, creating a VPC and subnet, or interconnecting a VPC), if the system displays a message indicating that the resource quota is insufficient, submit a service ticket.

Creation Failed/Change Failed

  1. Log in to the ModelArts management console. In the navigation pane, choose Dedicated Resource Pools > Elastic Cluster.
  2. Click Records to the right of Create. In the Records dialog box, view the failed task records.
    Figure 1 Creating a resource pool failed
  3. Hover the cursor over the failure icon to view the cause of the task failure.

    By default, failed task records are sorted by application time. A maximum of 500 failed task records are displayed, and the records are retained for three days.

Locating Faulty Node

When ModelArts detects a faulty Kubernetes node, it adds a taint to the node so that new jobs are not scheduled to it and running jobs are not affected. The following table lists the faults that can be detected. You can locate a fault based on its isolation code and detection method. Sketches showing how some of these checks could be reproduced manually are provided after the table.
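
The taints that ModelArts applies are standard Kubernetes node taints. If you have kubectl or API access to the cluster backing the resource pool (not every dedicated resource pool deployment exposes this), a minimal sketch like the following, based on the official Kubernetes Python client, lists the nodes that currently carry taints; the kubeconfig location is an assumption, not a documented ModelArts value.

```python
# Minimal sketch: list every node that carries a taint, using the official
# Kubernetes Python client. Assumes a kubeconfig for the cluster backing the
# dedicated resource pool is available locally, which is not guaranteed.
from kubernetes import client, config

config.load_kube_config()  # or config.load_incluster_config() inside a pod
v1 = client.CoreV1Api()

for node in v1.list_node().items:
    taints = node.spec.taints or []
    if taints:
        print(node.metadata.name)
        for taint in taints:
            # A taint added for an isolated faulty node would show up here.
            print(f"  {taint.key}={taint.value}:{taint.effect}")
```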

Table 1 Isolation code

| Isolation Code | Category | Sub-Category | Description | Detection Method |
|---|---|---|---|---|
| A050101 | GPU | GPU memory | A GPU ECC error exists. | Run the nvidia-smi -a command and check whether Pending Page Blacklist is Yes or the multi-bit Register File value is greater than 0. For Ampere GPUs, also check for uncorrectable SRAM errors, Remapping Failure or Pending records, and Xid 95 events in dmesg. |
| A050102 | GPU | Other | The nvidia-smi output contains ERR. | Run nvidia-smi -a and check whether the output contains ERR. This usually indicates faulty hardware, such as the power supply or a fan. |
| A050103 | GPU | Other | The nvidia-smi command times out or does not exist. | Check whether the exit code of nvidia-smi is non-zero. |
| A050104 | GPU | GPU memory | ECC errors occurred 64 times. | Run the nvidia-smi -a command, locate Retired Pages, and check whether the sum of Single Bit and Double Bit is greater than 64. |
| A050148 | GPU | Other | An infoROM alarm occurs. | Run the nvidia-smi command and check whether the output contains the alarm "infoROM is corrupted". |
| A050109 | GPU | Other | Other GPU errors | Check whether other GPU errors exist. This usually indicates faulty hardware. Contact technical support. |
| A050147 | IB | Link | The IB NIC is abnormal. | Run the ibstat command and check whether the NIC is not in the active state. |
| A050121 | NPU | Other | A driver exception is detected by the NPU DCMI. | The NPU driver environment is abnormal. |
| A050122 | NPU | Other | The NPU DCMI device is abnormal. | The NPU device is abnormal. The Ascend DCMI interface returns a major or urgent alarm. |
| A050123 | NPU | Link | The NPU DCMI network is abnormal. | The NPU network connection is abnormal. |
| A050129 | NPU | Other | Other NPU errors | Check whether other NPU errors exist. The fault cannot be rectified by the user. Contact technical support. |
| A050149 | NPU | Link | The network port reported by the hccn tool is intermittently disconnected. | The NPU network is unstable. Run the hccn_tool -i ${device_id} -link_stat -g command and check whether the network was disconnected more than five times within 24 hours. |
| A050951 | NPU | NPU memory | The number of NPU ECC errors reaches the maintenance threshold. | The NPU's HBM Double Bit Isolated Pages Count value is greater than or equal to 64. |
| A050146 | Runtime | Other | The NTP service is abnormal. | The ntpd or chronyd service is abnormal. |
| A050202 | Runtime | Other | The node is not ready. | The node is unavailable. The K8s node contains the node.kubernetes.io/unreachable or node.kubernetes.io/not-ready taint. |
| A050203 | Runtime | Disconnection | The number of normal AI accelerator cards does not match the actual capacity. | The GPU or NPU is disconnected. |
| A050206 | Runtime | Other | The kubelet disk is read-only. | The /mnt/paas/kubernetes/kubelet directory is read-only. |
| A050801 | Node management | Node O&M | Resources are reserved. | The node is marked as a standby node and contains a taint. |
| A050802 | Node management | Node O&M | An unknown error occurs. | The node is marked with an unknown taint. |
| A200001 | Node management | Driver upgrade | The GPU driver is being upgraded. | The GPU driver is being upgraded. |
| A200002 | Node management | Driver upgrade | The NPU driver is being upgraded. | The NPU driver is being upgraded. |
| A200008 | Node management | Node admission | Node admission is in progress. | Node admission checks are in progress, including basic node configuration checks and simple service verification. |
| A050933 | Node management | Fault tolerance (Failover) | The service on the tainted node will be migrated by Failover. | The service on the tainted node will be migrated by Failover. |
| A050931 | Training toolkit | Pre-check container | A GPU error is detected in the pre-check container. | A GPU error is detected in the pre-check container. |
| A050932 | Training toolkit | Pre-check container | An IB error is detected in the pre-check container. | An IB error is detected in the pre-check container. |
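
Some of the GPU checks in Table 1 can be reproduced manually on a node. The sketch below mirrors only the A050102 and A050103 checks (ERR in the nvidia-smi output, and a missing, hanging, or failing nvidia-smi binary); the 30-second timeout is an assumption, not a documented threshold.

```python
# Minimal sketch of the A050102/A050103 checks from Table 1: run
# `nvidia-smi -a`, flag a missing or hanging binary, a non-zero exit code,
# and ERR strings in the output. The timeout value is an assumption.
import subprocess

def check_nvidia_smi(timeout=30):
    try:
        result = subprocess.run(
            ["nvidia-smi", "-a"],
            capture_output=True, text=True, timeout=timeout,
        )
    except FileNotFoundError:
        return "A050103: nvidia-smi does not exist"
    except subprocess.TimeoutExpired:
        return "A050103: nvidia-smi timed out"

    if result.returncode != 0:
        return "A050103: nvidia-smi exited with a non-zero code"
    if "ERR" in result.stdout:
        return "A050102: nvidia-smi output contains ERR"
    return "no GPU fault detected by these two checks"

if __name__ == "__main__":
    print(check_nvidia_smi())
```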
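
Similarly, the A050147 check can be approximated by parsing ibstat output. The sketch below matches lines that start with "State:", which follows common ibstat formatting; the exact output may vary with driver versions.

```python
# Minimal sketch of the A050147 check from Table 1: run `ibstat` and report
# ports whose state is not Active. Line matching assumes the common
# "State: Active" output format, which may differ across driver versions.
import subprocess

def inactive_ib_ports():
    output = subprocess.run(
        ["ibstat"], capture_output=True, text=True, check=True
    ).stdout
    inactive = []
    for line in output.splitlines():
        line = line.strip()
        if line.startswith("State:") and "Active" not in line:
            inactive.append(line)
    return inactive

if __name__ == "__main__":
    bad = inactive_ib_ports()
    print("IB NIC abnormal (A050147)" if bad else "IB ports active", bad)
```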