Updated on 2022-11-07 GMT+08:00

Connecting a Non-CCE Cluster (Private Access)

Connecting clusters in an on-premises data center or on a third-party cloud to UCS over the public network carries security risks, so a stable and secure access mode is needed. In this case, you can use private network access to connect clusters to UCS for management.

To connect a cluster to UCS through a private network, use Direct Connect (DC) or Virtual Private Network (VPN) to connect the on-premises network to the cloud VPC, and then use VPC Endpoint (VPCEP) to access UCS over the private network, which is high-speed, low-latency, and secure.

Figure 1 Private access


  • Only the Huawei Cloud account or members of the IAM admin user group can connect clusters.
  • When a non-CCE cluster is connected through a private network, the use of image repositories may be limited by network restrictions.

    Clusters connected to UCS through a private network cannot download images from SoftWare Repository for Container (SWR). Ensure that the nodes where your workloads run can access the public network.


Prerequisites
  • A cluster to be connected to UCS has been created and is running properly.
  • Create a VPC in the region where UCS provides services. For details, see Creating a VPC and Subnet. Currently, only AP-Singapore is supported.

    The VPC subnet CIDR block cannot overlap with the CIDR blocks used in your IDC or third-party cloud. Otherwise, the cluster cannot be connected. For example, a subnet CIDR block that is already used in the IDC cannot be reused in the Huawei Cloud VPC.

  • You have obtained the kubeconfig file of the cluster to be added. The operation procedure varies depending on the vendor. For details, see Obtaining kubeconfig. For details about the kubeconfig file, see Organizing Cluster Access Using kubeconfig Files.
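To avoid the CIDR overlap described above, you can compare the planned VPC subnet with the CIDR blocks already used in your IDC before creating the VPC. The following is a minimal bash sketch; the CIDR blocks in the example calls are placeholders for your own network plan:

```shell
#!/bin/bash
# Convert a dotted-quad IPv4 address to a 32-bit integer.
ip_to_int() {
  local IFS=. a b c d
  read -r a b c d <<< "$1"
  echo $(( (a << 24) + (b << 16) + (c << 8) + d ))
}

# Print "overlap" if two IPv4 CIDR blocks overlap, "ok" otherwise.
cidr_overlap() {
  local n1=${1%/*} p1=${1#*/} n2=${2%/*} p2=${2#*/}
  local i1 i2 m1 m2
  i1=$(ip_to_int "$n1"); i2=$(ip_to_int "$n2")
  m1=$(( (0xFFFFFFFF << (32 - p1)) & 0xFFFFFFFF ))
  m2=$(( (0xFFFFFFFF << (32 - p2)) & 0xFFFFFFFF ))
  # Two blocks overlap if either network address falls inside the other block.
  if (( (i1 & m2) == (i2 & m2) || (i2 & m1) == (i1 & m1) )); then
    echo overlap
  else
    echo ok
  fi
}

cidr_overlap 192.168.0.0/16 192.168.10.0/24   # -> overlap
cidr_overlap 10.0.0.0/16 172.16.0.0/16        # -> ok
```

If the check prints overlap, choose a different subnet CIDR block for the Huawei Cloud VPC.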

Step 1: Prepare the Network Environment

You can use either Direct Connect (DC) or Virtual Private Network (VPN) to connect the network environment of your own IDC or third-party cloud to the Huawei Cloud VPC.

After the on-premises and cloud networks are connected, you are advised to ping the private IP address of a Huawei Cloud server in the target VPC from a server in the local data center to verify that the network connection is successful.
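The verification above can be scripted, for example as follows. 192.168.0.10 is a placeholder for the private IP address of a server in the target VPC; substitute your own address:

```shell
#!/bin/bash
# Return 0 if the host answers ICMP echo within the timeout, nonzero otherwise.
check_connectivity() {
  ping -c 3 -W 2 "$1" > /dev/null 2>&1
}

# Placeholder: private IP address of a Huawei Cloud server in the target VPC.
TARGET_IP="192.168.0.10"

if check_connectivity "$TARGET_IP"; then
  echo "Network to ${TARGET_IP} is reachable."
else
  echo "Cannot reach ${TARGET_IP}; check the DC/VPN configuration." >&2
fi
```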

Step 2: Connect a Cluster

  1. Log in to the UCS console. In the navigation pane, choose Container Clusters.
  2. Select Huawei Cloud Stack or Third-party based on the type of the cluster to be connected.
  3. Enter the basic information about the cluster to be added by referring to Table 1. Parameters marked with an asterisk (*) are mandatory.

    Table 1 Basic cluster information



    * Service Provider

    This parameter is mandatory only when a third-party cluster is connected. Select the cluster's service provider.

    * kubeconfig

    Upload the kubeconfig file to complete cluster authentication. The file can be in JSON or YAML format. The procedure for obtaining the kubeconfig file varies by vendor. For details, see Obtaining kubeconfig.

    * Context

    Select the corresponding context. After the kubeconfig file is uploaded, the option list is automatically populated with the contexts field in the file.

    By default, the context specified by the current-context field in the kubeconfig file is selected. If the file does not contain this field, manually select a context from the list.

    * Cluster Name

    Enter a cluster name. The name must start with a lowercase letter, cannot end with a hyphen (-), and can contain only digits, lowercase letters, and hyphens (-).

    * Region

    Select a region where the cluster is deployed.

    * Cluster Group

    Select the cluster group to which the cluster belongs. The default value is default.

    Cluster groups are used for refined permission management. A cluster can be added to only one cluster group. For details about how to create a cluster group, see Cluster Groups.

    Only the cluster groups on which you have permissions are displayed in the list. If the cluster group list is empty, contact a member of the admin user group.


    Tag

    This parameter is optional. Tags are added to clusters as key-value pairs and can be used to classify clusters. A tag must start and end with a letter or digit, can contain only letters, digits, hyphens (-), underscores (_), and periods (.), and can contain a maximum of 63 characters.

  4. Confirm the cluster information and submit it. After the cluster is connected, the page shown in Figure 2 is displayed. Complete the network connection within 30 minutes. You can select a cluster access mode or click the button in the upper right corner to view the detailed network access process.

    If the network connection is not completed within 30 minutes, the cluster fails to be connected. In this case, click the button in the upper right corner to connect the cluster again.
    Figure 2 Cluster in connecting state
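The Context parameter described above comes from the contexts and current-context fields of the kubeconfig file. The following sketch creates a minimal kubeconfig (structure only, without real credentials; all names are placeholders) and shows where these fields live. With a real cluster, you would instead run kubectl config get-contexts --kubeconfig <file>:

```shell
#!/bin/bash
# Create a minimal kubeconfig for illustration only. Real files also contain
# certificate and token data under the clusters and users sections.
cat > sample.kubeconfig <<'EOF'
apiVersion: v1
kind: Config
clusters:
- name: my-cluster
  cluster:
    server: https://192.168.0.10:6443
contexts:
- name: my-cluster-context
  context:
    cluster: my-cluster
    user: my-user
current-context: my-cluster-context
users:
- name: my-user
  user: {}
EOF

# List the context names defined in the file.
grep -A1 '^contexts:' sample.kubeconfig | grep 'name:' | awk '{print $3}'

# Show which context the console selects by default (the current-context field).
grep '^current-context:' sample.kubeconfig | awk '{print $2}'
```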

Step 3: Buy a VPC Endpoint

  1. Log in to the VPC Endpoint console and click Buy VPC Endpoint.
  2. Select the region where the VPC endpoint is located.
  3. Choose Find Service by Name, enter the service name ap-southeast-3.open-vpcep.02d111fb-c448-4a4e-9b7c-88029de91717, and click Verify.

    Figure 3 Buying a VPC endpoint

  4. Select the VPC and subnet that are connected to the cluster network in Step 1: Prepare the Network Environment.
  5. Set the IP address of the VPC endpoint to Automatically Assign or Manually Assign.
  6. After configuring other parameters, click Buy Now and confirm the specifications.

    • If all of the specifications are correct, click Submit.
    • If any of the specifications are incorrect, click Previous to return to the previous page, modify the parameters as needed, and then click Submit.

Step 4: Connect a Cluster

  1. Log in to the UCS console. Locate the target cluster, which is in the Connecting state, and click Private Network Access.
  2. Select the VPC endpoint created in Step 3: Buy a VPC Endpoint.

    Figure 4 Selecting a VPC endpoint

  3. Download the configuration file of the cluster agent.

    The agent configuration file contains keys and can be downloaded only once. Keep the file secure.

  4. Use kubectl to connect to the cluster, create a YAML file named agent.yaml (the file name can be customized) in the cluster, and copy the agent configuration content downloaded in step 3 into the YAML file.

    vim agent.yaml

  5. Run the following command in the cluster to be connected to deploy the agent:

    kubectl apply -f agent.yaml

    To pull the proxy-agent container image, the cluster must be able to access the public network, or the proxy-agent image must be uploaded to an image repository that can be accessed by the cluster. Otherwise, the proxy-agent container image fails to be deployed.

  6. Check the deployment of the cluster agent:

    kubectl -n kube-system get pod | grep proxy-agent

    If the deployment is successful, the expected output is:

    proxy-agent-5f7d568f6-6fc4k 1/1 Running 0 9s

  7. Check the running status of the cluster agent:

    kubectl -n kube-system logs <Agent Pod Name> | grep "Start serving"

    If the cluster agent is running properly, the expected log output is:

    Start serving

  8. Go to the UCS console and refresh the cluster status. The cluster is in the Running state.

    Figure 5 Cluster in running state
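As noted in step 5, the proxy-agent image must be reachable from the cluster. If the cluster cannot access the public network, you can mirror the image to a registry the cluster can reach. The following sketch uses placeholder names throughout: take the real image reference from the image field in the downloaded agent.yaml, and registry.example.com stands in for your own private registry:

```shell
#!/bin/bash
# Placeholder values: substitute the actual proxy-agent image reference from
# agent.yaml and your own private registry address.
SOURCE_IMAGE="example.com/ucs/proxy-agent:latest"   # hypothetical reference
PRIVATE_REGISTRY="registry.example.com/library"     # hypothetical registry

# Keep the image name and tag, swap the registry prefix.
TARGET_IMAGE="${PRIVATE_REGISTRY}/${SOURCE_IMAGE##*/}"
echo "Mirror ${SOURCE_IMAGE} -> ${TARGET_IMAGE}"

# On a machine that has public network access and can reach both registries:
#   docker pull "${SOURCE_IMAGE}"
#   docker tag  "${SOURCE_IMAGE}" "${TARGET_IMAGE}"
#   docker push "${TARGET_IMAGE}"
# Finally, update the image field in agent.yaml to the new reference before
# running kubectl apply -f agent.yaml.
```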