Updated on 2022-06-24 GMT+08:00

Accessing MCP by Using kubectl

You can use kubectl to configure resources, such as Deployments, of a specific MCP and to manage all MCP resources.

Prerequisites

If you select public network access, you must prepare an ECS that can connect to a public network.

Accessing MCP by Using kubectl

  1. Log in to the MCP console and click kubectl on the Dashboard page.
  2. Obtain the MCP access address and download kubectl and its configuration file according to the instructions provided in Figure 1.

    Figure 1 Accessing MCP by using kubectl

  3. Install and configure kubectl (A Linux OS is used as an example).

    1. Copy kubectl and its configuration file to the /home directory on your client.
    2. Log in to your client, and configure kubectl.
      cd /home 
      chmod +x kubectl 
      mv -f kubectl /usr/local/bin 
      mkdir -p $HOME/.kube 
      mv -f kubeconfig.json $HOME/.kube/config
    3. Switch the kubectl access mode based on your application scenario.
      • Run the following command to enable Internet access:
        kubectl config use-context external
      • Run the following command to enable intra-VPC access:
        kubectl config use-context internal

        This mode can be used only if you have already established an internal network connection.
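The context switch above can be sketched as a small helper. `select_context` and the scenario names (`public`, `vpc`) are hypothetical and only for illustration; the context names `external` and `internal` come from the downloaded kubeconfig file:

```shell
# Hypothetical helper: map an access scenario to the kubectl context
# names ("external"/"internal") defined in the downloaded kubeconfig.
select_context() {
  case "$1" in
    public) echo external ;;   # Internet access
    vpc)    echo internal ;;   # intra-VPC access (connection must already exist)
    *)      echo "unknown scenario: $1" >&2; return 1 ;;
  esac
}

ctx="$(select_context public)"
echo "$ctx"   # prints: external
# kubectl config use-context "$ctx"   # run after kubectl is configured
```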

Deploying a Workload by Using kubectl

The following describes how to deploy an Nginx workload across multiple clusters.

  1. Access MCP by following the procedure described in Accessing MCP by Using kubectl.
  2. Query the clusters managed by MCP.

    kubectl get clusters

  3. Create a Deployment.

    kubectl create deployment nginx --image=nginx
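    The imperative command above can also be expressed declaratively. A sketch, assuming the standard apps/v1 Deployment schema; the manifest below mirrors what `kubectl create deployment nginx --image=nginx` generates, and the file name is an assumption:

```shell
# Write the equivalent Deployment manifest to a file (a sketch).
cat > nginx-deployment.yaml <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx
EOF
# kubectl apply -f nginx-deployment.yaml   # apply once connected to MCP
```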

  4. Create a propagation policy.

    cat <<EOF | kubectl apply -f -
    apiVersion: policy.karmada.io/v1alpha1
    kind: PropagationPolicy
    metadata:
      name: nginx-propagation
    spec:
      resourceSelectors:
        - apiVersion: apps/v1
          kind: Deployment
          name: nginx
      placement:
        clusterAffinity:
          clusterNames:
        - cluster1   # Cluster names obtained in step 2
        - cluster2
    EOF
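Alternatively, the policy can be kept in a file so that the placeholder cluster names can be edited before applying. A sketch; `nginx-propagation.yaml` is an assumed file name, and `cluster1`/`cluster2` must be replaced with the names returned by `kubectl get clusters`:

```shell
# Save the PropagationPolicy to a file for editing and reuse (a sketch).
cat > nginx-propagation.yaml <<'EOF'
apiVersion: policy.karmada.io/v1alpha1
kind: PropagationPolicy
metadata:
  name: nginx-propagation
spec:
  resourceSelectors:
    - apiVersion: apps/v1
      kind: Deployment
      name: nginx
  placement:
    clusterAffinity:
      clusterNames:
        - cluster1
        - cluster2
EOF
# kubectl apply -f nginx-propagation.yaml
# kubectl get propagationpolicy nginx-propagation   # confirm it was created
```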