Updated on 2025-07-29 GMT+08:00

What Can I Do If I Can't Add Permissions for a Fleet or Cluster?

Symptom

When you add permissions for a fleet or a cluster not in a fleet, the permissions may fail to be added due to cluster connection exceptions. If this happens, an event will be displayed in the Add Permissions window. Locate and rectify the fault and add permissions again.

Troubleshooting

If there are exceptions when permissions are added for a fleet or cluster, locate faults based on error messages listed in Table 1.

Table 1 Error messages

| Error Message | Description | Check Item |
| --- | --- | --- |
| `Get ClusterRole failed reason:Get \"https://kubernetes.default.svc.cluster.local/apis/rbac.authorization.k8s.io/v1/clusterroles/XXXXXXX?timeout=30s\": Precondition Required` or `Get ClusterRole failed reason:an error on the server (\"unknown\") has prevented the request from succeeding (get clusterroles.rbac.authorization.k8s.io)` | The cluster is not connected, proxy-agent in the connected cluster is abnormal, or the network is abnormal. | Check Item 1: proxy-agent and Check Item 2: Network Connection Between the Cluster and UCS |
| `Unauthorized` | Rectify the fault based on the status code. For example, status code 401 indicates that the user does not have the access permission. A possible cause is that the cluster authentication information has expired. | Check Item 3: Cluster Authentication Information Changes |
| `Get cluster namespace[x] failed.` or `Reason:namespace "x" not found.` | There is no corresponding namespace in the cluster. | Create a namespace in the cluster and try again: `kubectl create namespace ns_name`. If the namespace is not required, ignore this exception. |
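The namespace check above can be scripted before retrying. A minimal sketch; `my-app` and the sample listing are illustrative, and on a real cluster the listing would come from `kubectl get namespace -o name`.

```shell
# Returns success if the namespace named in $1 appears in
# `kubectl get namespace -o name` output read from stdin.
has_namespace() {
  grep -qx "namespace/$1"
}

# Sample listing; replace with: kubectl get namespace -o name
sample='namespace/default
namespace/kube-system'

if echo "$sample" | has_namespace my-app; then
  echo "namespace my-app exists"
else
  echo "missing: run kubectl create namespace my-app"
fi
```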

Check Item 1: proxy-agent

After a cluster is unregistered from UCS, the authentication information contained in the original proxy-agent configuration file becomes invalid. You need to delete the proxy-agent pods deployed in the cluster. To connect the cluster to UCS again, download the proxy-agent configuration file from the UCS console again and use it for re-deployment.

  1. Log in to a master node in the cluster.
  2. Check the deployment of proxy-agent.

    kubectl -n kube-system get pod | grep proxy-agent

    Expected output if the deployment is successful:

    proxy-agent-*** 1/1 Running 0 9s

    If proxy-agent is not in the Running state, run the kubectl -n kube-system describe pod proxy-agent-*** command to view the pod events. For details, see What Can I Do If proxy-agent Fails to Be Deployed?.

    By default, proxy-agent is deployed with two pods. It can provide services as long as one pod is running normally. However, one pod cannot ensure high availability.

  3. Print the pod logs of proxy-agent and check whether the agent program can connect to UCS.

    kubectl -n kube-system logs proxy-agent-*** | grep "Start serving"

    If no "Start serving" log is printed but the proxy-agent pods are working, check other items.
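Steps 2 and 3 above can be combined into one quick health check. A minimal sketch; the pod listing below is a sample, and on the node it would come from `kubectl -n kube-system get pod | grep proxy-agent`.

```shell
# Sample of `kubectl -n kube-system get pod | grep proxy-agent` output;
# replace the sample with the live command on a master node.
pods='proxy-agent-7d9c 1/1 Running 0 9s
proxy-agent-b2f1 0/1 CrashLoopBackOff 3 2m'

# Count pods whose STATUS column is Running. Two pods are deployed by
# default; one Running pod is enough to provide service.
running=$(echo "$pods" | grep -c ' Running ')
echo "running proxy-agent pods: $running"
if [ "$running" -lt 1 ]; then
  echo "proxy-agent is abnormal: describe the pods and check their events"
fi
```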

Check Item 2: Network Connection Between the Cluster and UCS

Public Network Access

  1. Check whether a public IP address is bound to the cluster or a public NAT gateway is configured.
  2. Check whether the cluster security group allows outbound traffic. To perform access control on the outbound traffic, contact technical support to obtain the destination and port number.
  3. After rectifying network faults, delete the existing proxy-agent pods and rebuild pods. Check whether the logs of the new pods contain "Start serving".

    kubectl -n kube-system logs proxy-agent-*** | grep "Start serving"

  4. If desired logs are printed, refresh the UCS console page and check whether the cluster is connected.
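The log check in step 3 can be scripted. A minimal sketch; the sample log line is illustrative, and after rebuilding the pods it would come from `kubectl -n kube-system logs proxy-agent-***`.

```shell
# Sample pod log output; replace the sample with:
#   kubectl -n kube-system logs proxy-agent-***
logs='I0729 10:00:01.000000 1 agent.go:100] Start serving'

if echo "$logs" | grep -q "Start serving"; then
  echo "proxy-agent reached UCS: refresh the console and check the cluster"
else
  echo "no 'Start serving' log yet: recheck the outbound network rules"
fi
```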

Private Network Access

  1. Check whether the cluster security group allows outbound traffic. To perform access control on the outbound traffic, contact technical support to obtain the destination and port number.
  2. Rectify the network connection faults between the cluster and UCS or the IDC.

    Refer to the guide that matches your network connection type.

  3. Rectify the VPC endpoint fault. The VPC endpoint status must be Accepted. If the VPC endpoint is deleted accidentally, create another one. For details, see How Do I Restore a Deleted VPC Endpoint for a Cluster Connected Through a Private Network?.

    Figure 1 Checking the VPC endpoint status

  4. Delete the existing proxy-agent pods and rebuild pods. Check whether the logs of the new pods contain "Start serving".

    kubectl -n kube-system logs proxy-agent-*** | grep "Start serving"

  5. If desired logs are printed, refresh the UCS console page and check whether the cluster is connected.

Check Item 3: Cluster Authentication Information Changes

If the error message "cluster responded with non-successful status: [401][Unauthorized]" is displayed, check /var/paas/sys/log/kubernetes/auth-server.log on the three master nodes in the cluster; the IAM network connection may be faulty. Ensure that IAM domain name resolution and connectivity to the IAM service are normal.

The common logs are as follows:

  • Failed to authenticate token: *******: dial tcp: lookup iam.myhuaweicloud.com on *.*.*.*:53: no such host

    This log indicates that the node cannot resolve iam.myhuaweicloud.com. Configure the corresponding domain name resolution by referring to Preparing for Installation.

  • Failed to authenticate token: Get *******: dial tcp *.*.*.*:443: i/o timeout

    This log indicates that the node's access to IAM times out. Ensure that the node can communicate with IAM normally.

  • currently only supports Agency token

    This log indicates that the request is not initiated by UCS. Currently, on-premises clusters can only be connected to UCS using IAM tokens.

  • IAM assumed user has no authorization/iam assumed user should allowed by TEAdmin

    This log indicates that the connection between UCS and the cluster is abnormal. Contact Huawei technical support.

  • Failed to authenticate token: token expired, please acquire a new token

    This log indicates that the token has expired. Run the date command to check whether the time difference is too large. If yes, synchronize the time and check whether the cluster is working. If the fault persists for a long time, you may need to reinstall the cluster. In this case, contact Huawei technical support.
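The first two log patterns above (DNS failure and i/o timeout) can be reproduced directly from a master node. A minimal sketch; the 5-second timeout is an arbitrary choice, and `getent` and `curl` are assumed to be available on the node.

```shell
host=iam.myhuaweicloud.com

# DNS check: a failure here mirrors the "no such host" log.
if getent hosts "$host" >/dev/null 2>&1; then
  dns_msg="DNS resolution for $host is OK"
else
  dns_msg="cannot resolve $host: configure domain name resolution"
fi
echo "$dns_msg"

# Connectivity check on port 443: a timeout here mirrors the "i/o timeout" log.
if curl -sk -m 5 -o /dev/null "https://$host"; then
  net_msg="IAM is reachable on port 443"
else
  net_msg="IAM is unreachable: check routes and security group rules"
fi
echo "$net_msg"
```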

After the preceding problem is resolved, run the crictl ps | grep auth | awk '{print $1}' | xargs crictl stop command to restart the auth-server container.
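The restart command above works by extracting the auth-server container ID from `crictl ps` and feeding it to `crictl stop`; kubelet then recreates the container. A sketch of the filter stage against sample output; the IDs and the simplified column layout are illustrative.

```shell
# Sample `crictl ps` output (columns simplified for illustration).
ps_out='CONTAINER IMAGE CREATED STATE NAME
1a2b3c4d5e6f img1 2days Running auth-server
9f8e7d6c5b4a img2 2days Running kube-proxy'

# Same filter as the documented pipeline: keep the auth line, take column 1.
id=$(echo "$ps_out" | grep auth | awk '{print $1}')
echo "$id"
# On the node, this ID is what gets passed on: ... | xargs crictl stop
```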