Updated on 2024-05-31 GMT+08:00

Configuring kubeconfig for Fine-Grained Management on Cluster Resources

Application Scenarios

By default, the kubeconfig file that CCE provides to users has permissions bound to the cluster-admin role, which are equivalent to those of user root. Fine-grained management of users with such broad permissions is difficult.

Purpose

Manage cluster resources in a fine-grained manner so that specific users have only certain permissions (such as creating, viewing, and modifying resources).

Precautions

Ensure that kubectl is available on your host. If it is not, download it (choose the version corresponding to the cluster version, or the latest version).

Configuration Method

In the following example, pods in the test namespace can only be viewed, Deployments can be viewed and created, and neither can be deleted.

  1. Set the service account name to my-sa and namespace to test.

    kubectl create sa my-sa -n test
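As an optional sanity check (assuming kubectl currently points at the target cluster), confirm that the service account was created:

```shell
# List the service account just created in the test namespace
kubectl get sa my-sa -n test
```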

  2. Configure the role table and assign operation permissions to different resources.

    vi role-test.yaml
    The content is as follows:

    In this example, the rules grant read-only permissions (get/list/watch) on pods in the test namespace, and read (get/list/watch) and create permissions on Deployments.

    apiVersion: rbac.authorization.k8s.io/v1
    kind: Role
    metadata:
      annotations:
        rbac.authorization.kubernetes.io/autoupdate: "true"
      labels:
        kubernetes.io/bootstrapping: rbac-defaults
      name: myrole
      namespace: test
    rules:
    - apiGroups:
      - ""
      resources:
      - pods
      verbs:
      - get
      - list
      - watch
    - apiGroups:
      - apps
      resources:
      - deployments
      verbs:
      - get
      - list
      - watch
      - create

    Create a Role.

    kubectl create -f role-test.yaml
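To confirm the rules were stored as intended, the role can be inspected (an optional check):

```shell
# Show the rules recorded in the myrole Role
kubectl describe role myrole -n test
```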

  3. Create a RoleBinding and bind the service account to the role so that the user can obtain the corresponding permissions.

    vi myrolebinding.yaml
    The content is as follows:
    apiVersion: rbac.authorization.k8s.io/v1
    kind: RoleBinding
    metadata:
      name: myrolebinding
      namespace: test
    roleRef:
      apiGroup: rbac.authorization.k8s.io
      kind: Role
      name: myrole
    subjects:
    - kind: ServiceAccount
      name: my-sa
      namespace: test

    Create a RoleBinding.

    kubectl create -f myrolebinding.yaml
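Before generating the kubeconfig file, the effective permissions of the service account can be checked with kubectl auth can-i, impersonating the account (a sketch; run it with cluster-admin credentials):

```shell
# Expect "yes": the bound role allows reading pods in the test namespace
kubectl auth can-i list pods -n test --as=system:serviceaccount:test:my-sa

# Expect "yes": creating Deployments is allowed
kubectl auth can-i create deployments -n test --as=system:serviceaccount:test:my-sa

# Expect "no": deletion was never granted
kubectl auth can-i delete pods -n test --as=system:serviceaccount:test:my-sa
```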

    The user information is configured. Now perform steps 5 to 7 to write the user information to the configuration file.

  4. Manually create a token that is valid for a long time for ServiceAccount.

    vi my-sa-token.yaml
    The content is as follows:
    apiVersion: v1
    kind: Secret
    metadata:
      name: my-sa-token-secret
      namespace: test
      annotations:
        kubernetes.io/service-account.name: my-sa
    type: kubernetes.io/service-account-token

    Create a token:

    kubectl create -f my-sa-token.yaml
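As an aside, on clusters of v1.24 or later, a short-lived token can also be requested directly through the TokenRequest API without creating a Secret (a sketch; the duration value is an example):

```shell
# Request a token for my-sa that expires after 24 hours
kubectl create token my-sa -n test --duration=24h
```

Note that such tokens expire, so the Secret-based token above remains the option for long-lived credentials.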

  5. Configure the cluster information.

    1. Decrypt the ca.crt file in the secret and export it:
    kubectl get secret my-sa-token-secret -n test -o yaml | grep ca.crt: | awk '{print $2}' | base64 -d > /home/ca.crt
    2. Set a cluster access mode. test-arm specifies the cluster to be accessed. https://192.168.0.110:5443 specifies the apiserver IP address of the cluster. For details about how to obtain the IP address, see Figure 1. /home/test.config specifies the path for storing the configuration file.
      • If the internal API server address is used, run the following command:
        kubectl config set-cluster test-arm --server=https://192.168.0.110:5443  --certificate-authority=/home/ca.crt  --embed-certs=true --kubeconfig=/home/test.config
      • If the public API server address is used, run the following command:
        kubectl config set-cluster test-arm --server=https://192.168.0.110:5443 --kubeconfig=/home/test.config --insecure-skip-tls-verify=true

    If you run these commands on a node in the cluster, or the host that will use the configuration is a cluster node, do not set the kubeconfig path to /root/.kube/config.

    By default, the apiserver IP address of the cluster is a private IP address. After an EIP is bound, you can use the public network IP address to access the apiserver.

    Figure 1 Obtaining the internal or public API server address
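The cluster entry written to the new file can be inspected at any time (an optional check):

```shell
# Display the contents of the generated kubeconfig file
# (certificate data is redacted by default)
kubectl config view --kubeconfig=/home/test.config
```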

  6. Configure the cluster authentication information.

    1. Obtain the cluster token. (If the token is obtained in GET mode, run base64 -d to decode it.)
    token=$(kubectl describe secret my-sa-token-secret -n test | awk '/token:/{print $2}')
    2. Set the cluster user ui-admin.
    kubectl config set-credentials ui-admin --token=$token --kubeconfig=/home/test.config

  7. Configure the context information for cluster authentication access. ui-admin@test specifies the context name.

    kubectl config set-context ui-admin@test --cluster=test-arm --user=ui-admin --kubeconfig=/home/test.config

  8. Configure the context. For details about how to use the context, see Verification.

    kubectl config use-context ui-admin@test --kubeconfig=/home/test.config
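To confirm which context the generated file now uses:

```shell
# Print the active context of the generated kubeconfig
# Expected output: ui-admin@test
kubectl config current-context --kubeconfig=/home/test.config
```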

    To allow other users to perform operations on the cluster with the above permissions, provide them with the configuration file /home/test.config generated after step 7. The users must ensure that their hosts can access the API server address of the cluster. They then perform step 8 on their hosts and, when using kubectl, set the kubeconfig parameter to the path of the configuration file.

Verification

  1. Pods in the test namespace can be viewed, but pods in other namespaces cannot.
    kubectl get pod -n test --kubeconfig=/home/test.config

  2. Pods in the test namespace cannot be deleted.
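A deletion attempt with the generated kubeconfig should be rejected by RBAC. In this sketch, <pod-name> is a placeholder for any existing pod in the test namespace:

```shell
# Attempt to delete a pod; the Role grants no delete verb, so the
# API server returns a Forbidden error
kubectl delete pod <pod-name> -n test --kubeconfig=/home/test.config
# Error from server (Forbidden): pods "<pod-name>" is forbidden: ...
```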

Further Readings

For more information about users and identity authentication in Kubernetes, see Authenticating.