Updated on 2026-04-03 GMT+08:00

Gatekeeper

Introduction

Gatekeeper is a customizable, cloud native policy controller based on Open Policy Agent (OPA). It strengthens policy enforcement and governance in clusters and provides security policy rules suited to common Kubernetes application scenarios.

Open-source community: https://github.com/open-policy-agent/gatekeeper

For details about how to use the add-on, see the Gatekeeper documentation.

Notes and Constraints

  • If you have deployed the community Gatekeeper in your cluster, uninstall it and then install the CCE Gatekeeper add-on. Otherwise, the add-on may fail to be installed.
  • After the Gatekeeper add-on is uninstalled, the related CRDs and CRs, such as ConstraintTemplates and Constraints, are retained, along with your custom policy data.

    This behavior follows the community's design. Because CRDs often store important custom policy data (for example, Constraints), deleting them automatically could lead to data loss or inconsistencies in cluster policies. To uninstall all CRDs related to the add-on, run the following command:

    kubectl delete crds $(kubectl get crds | grep gatekeeper.sh | awk '{print $1}')
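Before deleting anything, you can preview what the `grep`/`awk` pipeline matches. A minimal sketch that simulates `kubectl get crds` output with `printf` (the CRD names are illustrative):

```shell
# Simulate `kubectl get crds` output, then apply the same filter used by
# the cleanup command to preview which CRD names would be deleted.
printf '%s\n' \
  'constrainttemplates.templates.gatekeeper.sh   2024-01-01T00:00:00Z' \
  'k8srequiredlabels.constraints.gatekeeper.sh   2024-01-01T00:00:00Z' \
  'volcanojobs.batch.volcano.sh                  2024-01-01T00:00:00Z' |
  grep gatekeeper.sh | awk '{print $1}'
# Expected output:
# constrainttemplates.templates.gatekeeper.sh
# k8srequiredlabels.constraints.gatekeeper.sh
```

Only the first column (the CRD name) is passed to `kubectl delete crds`; rows without `gatekeeper.sh` in their name are left untouched.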

Precautions

Gatekeeper's webhooks intercept requests for fundamental Kubernetes resources and can therefore affect how those resources are used. If a service depends on webhooks, carefully assess the risks associated with this add-on before installing it.
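When assessing the risk, it helps to inspect the failure policy of Gatekeeper's validating webhook. A sketch of the relevant fields of its ValidatingWebhookConfiguration (field values are illustrative and may differ in your installation; the community default sets failurePolicy to Ignore so that API requests are not blocked when the webhook is unavailable):

```yaml
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
  name: gatekeeper-validating-webhook-configuration
webhooks:
  - name: validation.gatekeeper.sh
    # Ignore: requests are admitted if the webhook is unreachable.
    # Fail: requests are rejected, which can block cluster operations.
    failurePolicy: Ignore
    # Namespaces labeled admission.gatekeeper.sh/ignore are exempted.
    namespaceSelector:
      matchExpressions:
        - key: admission.gatekeeper.sh/ignore
          operator: DoesNotExist
```

A failurePolicy of Fail gives stricter enforcement but means an unavailable webhook can block operations on the resources it matches.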

Gatekeeper is an open-source add-on that CCE has selected, adapted, and integrated into its services. CCE offers comprehensive technical support, but is not responsible for any service disruptions caused by defects in the open-source software, nor does it provide compensation or additional services for such disruptions. It is highly recommended that you regularly upgrade your software to address any potential issues.

Installing the Add-on

  1. Log in to the CCE console and click the cluster name to access the cluster console.
  2. In the navigation pane, choose Add-ons. Locate Gatekeeper on the right and click Install.
  3. In the Install Add-on sliding window, configure the specifications.

    • If you selected Preset, you can choose between Small or Large based on the cluster scale. The system will automatically set the number of add-on pods and resource quotas according to the preset specifications. You can see the configurations on the console.
    • If you selected Custom, you can adjust the number of pods and resource quotas as needed. High availability is not possible with a single pod. If an error occurs on the node where the add-on pod runs, the add-on will fail.

  4. Configure the parameters as needed. For details, see the parameters in GitHub.
  5. Configure deployment policies for the add-on pods.

    • Scheduling policies do not take effect on the DaemonSet pods of the add-on.
    • When configuring multi-AZ deployment or node affinity, ensure that there are nodes meeting the scheduling policy and that resources are sufficient in the cluster. Otherwise, the add-on pods cannot run.
    Table 1 Configurations for add-on scheduling

    Multi-AZ Deployment

    • Preferred: Deployment pods of the add-on will be preferentially scheduled to nodes in different AZs. If all the nodes in the cluster are deployed in the same AZ, the pods will be scheduled to different nodes in that AZ.
    • Equivalent: Deployment pods of the add-on are evenly scheduled to the nodes in the cluster in each AZ. If a new AZ is added, you are advised to increase the number of add-on pods for cross-AZ HA deployment. In Equivalent mode, the difference between the numbers of add-on pods in different AZs is at most 1. If resources in one AZ are insufficient, pods cannot be scheduled to that AZ.
    • Forcible: Deployment pods of the add-on are forcibly scheduled to nodes in different AZs. There can be at most one pod in each AZ. If nodes in a cluster are not in different AZs, some add-on pods cannot run properly. If a node is faulty, the add-on pods on it may fail to be migrated.

    Node Affinity

    • Not configured: Node affinity is disabled for the add-on pods.
    • Specify node: Specify the nodes where the add-on pods are deployed. If you do not specify the nodes, the add-on pods will be randomly scheduled based on the default cluster scheduling policy.
    • Specify node pool: Specify the node pool where the add-on pods are deployed. If you do not specify the node pools, the add-on pods will be randomly scheduled based on the default cluster scheduling policy.
    • Customize affinity: Enter the labels of the nodes where the add-on pods are to be deployed for more flexible scheduling policies. If you do not specify node labels, the add-on pods will be randomly scheduled based on the default cluster scheduling policy.

      If multiple custom affinity policies are configured, ensure that there are nodes that meet all the affinity policies in the cluster. Otherwise, the add-on pods cannot run.

    Toleration

    Tolerations allow (but do not require) the add-on's Deployment pods to be scheduled onto nodes with matching taints, and they control how the pods are evicted after their host nodes are tainted.

    The add-on applies default toleration policies for the node.kubernetes.io/not-ready and node.kubernetes.io/unreachable taints, each with a toleration time window of 60s.

    For details, see Configuring Tolerance Policies.
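    The default toleration policies above correspond to the following pod spec fragment (a sketch assembled from the description; field values mirror the 60s window stated above):

    ```yaml
    tolerations:
      - key: node.kubernetes.io/not-ready
        operator: Exists
        effect: NoExecute
        tolerationSeconds: 60   # evict the pod 60s after the node becomes not-ready
      - key: node.kubernetes.io/unreachable
        operator: Exists
        effect: NoExecute
        tolerationSeconds: 60   # evict the pod 60s after the node becomes unreachable
    ```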

  6. Click Install.

Components

Table 2 Add-on components

Component | Description | Resource Type
gatekeeper-audit | Provides audit-related information. | Deployment
gatekeeper-controller-manager | Provides Gatekeeper webhooks to control Kubernetes resources based on custom policies. | Deployment

How to Use the Add-on

The following shows how to use Gatekeeper to enforce a constraint that requires a pod created in a specific namespace to have a label called test-label. For details, see How to use Gatekeeper.

  1. Use kubectl to access the cluster. For details, see Accessing a Cluster Using kubectl.
  2. Create a test-gatekeeper namespace for testing.

    kubectl create ns test-gatekeeper

  3. Create a policy template for checking labels.

    kubectl apply -f - <<EOF
    apiVersion: templates.gatekeeper.sh/v1beta1
    kind: ConstraintTemplate
    metadata:
      name: k8srequiredlabels
    spec:
      crd:
        spec:
          names:
            kind: K8sRequiredLabels
          validation:
            openAPIV3Schema:
              properties:
                labels:
                  type: array
                  items:
                    type: string
      targets:
        - target: admission.k8s.gatekeeper.sh
          rego: |
            package k8srequiredlabels
            violation[{"msg": msg, "details": {"missing_labels": missing}}] {
              provided := {label | input.review.object.metadata.labels[label]}
              required := {label | label := input.parameters.labels[_]}
              missing := required - provided
              count(missing) > 0
              msg := sprintf("you must provide labels: %v", [missing])
            }
    EOF
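    The Rego rule in the template reports a violation when the set difference `required - provided` is non-empty. A minimal POSIX shell sketch of that set logic (label names are illustrative):

    ```shell
    # Compute missing = required - provided, mirroring the Rego rule's set difference.
    required="test-label owner"   # labels the constraint demands (illustrative)
    provided="owner"              # labels present on the incoming object (illustrative)
    missing=""
    for label in $required; do
      case " $provided " in
        *" $label "*) ;;                    # label present: no violation for it
        *) missing="$missing $label" ;;     # label absent: record it as missing
      esac
    done
    if [ -n "$missing" ]; then
      echo "you must provide labels:${missing}"
    fi
    # Expected output:
    # you must provide labels: test-label
    ```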

  4. Create a constraint for the preceding policy template. This constraint enforces the requirement for a pod created in the test-gatekeeper namespace to have the label test-label.

    kubectl apply -f - <<EOF
    apiVersion: constraints.gatekeeper.sh/v1beta1
    kind: K8sRequiredLabels
    metadata:
      name: pod-must-have-test-label
    spec:
      match:
        kinds:
          - apiGroups: [""]
            kinds: ["Pod"]
        namespaces:
          - test-gatekeeper
      parameters:
        labels: ["test-label"]
    EOF

  5. Verify the constraint effect.

    1. Create a pod that does not have the label test-label in the test-gatekeeper namespace.
      kubectl -n test-gatekeeper run test-deny --image=nginx --restart=Never

      Expected output:

      Error from server (Forbidden): admission webhook "validation.gatekeeper.sh" denied the request: [pod-must-have-test-label] you must provide labels: {"test-label"}

      The pod that does not have the label test-label cannot be created in the test-gatekeeper namespace.

    2. Create a pod that has the label test-label in the test-gatekeeper namespace.
      kubectl -n test-gatekeeper run test -l test-label=test --image=nginx --restart=Never

      Check the pod. The pod has been created.

      kubectl get pod test -n test-gatekeeper

    This verification shows that, in the test-gatekeeper namespace, only pods carrying the test-label label can be created.

Release History

Table 3 Gatekeeper add-on

Add-on Version | Supported Cluster Versions | New Features | Community Version
1.1.5 | v1.28 to v1.34 | CCE clusters v1.34 are supported. The community version is upgraded to 3.20.1. | 3.20.1
1.1.3 | v1.27 to v1.33 | CCE clusters v1.33 are supported. The community version is upgraded to 3.19.3. | 3.19.3
1.0.32 | v1.25, v1.27 to v1.32 | CCE clusters v1.32 are supported. | 3.16.3
1.0.23 | v1.25, v1.27 to v1.31 | Fixed some issues. | 3.16.3
1.0.22 | v1.25, v1.27 to v1.31 | CCE clusters v1.31 are supported. | 3.16.3
1.0.10 | v1.23, v1.25, v1.27 to v1.30 | CCE clusters v1.30 are supported. | 3.16.3
1.0.3 | v1.23, v1.25, v1.27 to v1.29 | The Gatekeeper add-on is now available. | 3.16.3