Updated on 2025-12-22 GMT+08:00

PropagationPolicy

Overview

PropagationPolicy is one of the core APIs of UCS cluster federation. It defines resource propagation policies in a multi-cluster environment: it propagates Kubernetes resources to one or more member clusters and provides flexible scheduling and resource management. This API is compatible with the PropagationPolicy API of Karmada.

PropagationPolicy features:

  • Multi-cluster resource propagation: Resources can be propagated to multiple Kubernetes clusters.
  • Flexible scheduling policies: Multiple scheduling modes are provided, such as Weighted, Duplicated, and Aggregated.
  • Resource dependency management: Dependencies such as ConfigMaps and Secrets referenced by a workload can be propagated automatically.

PropagationPolicy types:

  • PropagationPolicy: propagates namespace-scoped resources. Only resources in the namespace specified by this policy can be propagated. For details, see What Are Namespace-Scoped Resources?
  • ClusterPropagationPolicy: propagates cluster-scoped resources (such as PVs, storage classes, and CRDs) and resources in any namespace (excluding system namespaces). For details, see What Are Cluster-Scoped Resources?

API Specifications

Basic information:

apiVersion: policy.karmada.io/v1alpha1
kind: PropagationPolicy

Application scope:

  • PropagationPolicy: namespace-scoped policy. It can propagate only resources in the namespace where the policy resides.
  • ClusterPropagationPolicy: cluster-scoped policy. It can propagate cluster-scoped resources and resources in any namespace (excluding system namespaces).

Resource Format

PropagationPolicy YAML template:

apiVersion: policy.karmada.io/v1alpha1
kind: PropagationPolicy
metadata:
  name: <policy-name>
  namespace: <namespace-name>  # Required only by PropagationPolicy
spec:
  resourceSelectors: []        # (Mandatory) Select the resources you want to propagate.
  conflictResolution: <string> # Abort/Overwrite
  dependentOverrides: []       # Names of OverridePolicies that must be present before this policy takes effect
  placement:                   # (Mandatory) Cluster selection rule
    clusterAffinity:
      clusterNames: []
    clusterTolerations: []
    replicaScheduling:
      replicaSchedulingType: "Divided"     # Duplicated/Divided
      replicaDivisionPreference: "Weighted"  # Aggregated/Weighted
      weightPreference:
        dynamicWeight: ""
        staticWeightList: []
    spreadConstraints: []
  propagateDeps: false         # Whether to automatically propagate dependencies (such as ConfigMaps and Secrets)

ClusterPropagationPolicy YAML template:

apiVersion: policy.karmada.io/v1alpha1
kind: ClusterPropagationPolicy
metadata:
  name: <policy-name>
spec:
  # Same as the spec structure of PropagationPolicy
  resourceSelectors: []
  placement: {}
  # ... Other fields

Parameter Description

  1. resourceSelectors (mandatory): Select the Kubernetes resources you want to propagate. You need to configure the following fields:
    • apiVersion (string, mandatory): API version of the target resource
    • kind (string, mandatory): type of the target resource (such as Deployment, Service, or ConfigMap)
    • name (string): name of the target resource. If left empty, all resources of the specified kind are matched.
    • namespace (string): namespace of the target resource
    • labelSelector (LabelSelector): label selector

    Example:

    resourceSelectors:
      # Exact match
      - apiVersion: apps/v1
        kind: Deployment
        name: web-app
        namespace: default
    
      # Select by label
      - apiVersion: v1
        kind: ConfigMap
        labelSelector:
          matchLabels:
            app: web-app
          matchExpressions:
            - key: tier
              operator: In
              values: ["frontend", "backend"]
  2. placement: defines the rules for propagating resources to clusters.
    • clusterAffinity
      placement:
        clusterAffinity:
          clusterNames: ["cluster-1", "cluster-2"]  # Specify the cluster list.
    • replicaScheduling
      placement:
        replicaScheduling:
          replicaSchedulingType: "Divided"         # Scheduling type
          replicaDivisionPreference: "Weighted"    # Division preference
          weightPreference:
            dynamicWeight: "AvailableReplicas"     # Dynamic weight. If set, staticWeightList is ignored.
            staticWeightList:                      # Static weights
              - targetCluster:
                  clusterNames: ["cluster-1"]
                weight: 2
              - targetCluster:
                  clusterNames: ["cluster-2"]
                weight: 1

    Scheduling modes:

    • Duplicated: replicates the same number of replicas to each candidate cluster.
    • Divided: divides replicas among candidate clusters according to the division preference.

    Division preferences:

    • Weighted: divides replicas by static or dynamic weight.
    • Aggregated: packs replicas into as few clusters as possible, subject to available resources.
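
    A quick worked example: assume a Deployment with replicas: 6 and two candidate clusters with static weights 2 and 1. Divided + Weighted assigns 4 replicas to the first cluster and 2 to the second (6 × 2/3 and 6 × 1/3), whereas Duplicated would run all 6 replicas in each cluster.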
  3. spreadConstraints: controls how resources are spread across clusters to ensure high availability and balanced distribution.
    placement:
      spreadConstraints:
        - spreadByField: "cluster"   # Spread by cluster. Only cluster is supported.
          minGroups: 3               # Spread across at least three different clusters.
          maxGroups: 5               # Spread across at most five different clusters.

    Field description:

    • spreadByField: dimension by which clusters are grouped for spreading. Only cluster is supported.
    • minGroups: minimum number of clusters
    • maxGroups: maximum number of clusters
  4. Other important parameters
    • conflictResolution
      conflictResolution: "Abort"  # Abort: stops propagation. | Overwrite: overwrites existing resources.
    • propagateDeps
      propagateDeps: true # Dependencies such as ConfigMaps and Secrets are automatically propagated.
    • clusterTolerations
      clusterTolerations: # Cluster taint tolerations. They prevent workloads from being evicted and rescheduled when a cluster becomes not-ready or unreachable.
      - key: cluster.karmada.io/not-ready
        operator: Exists
      - key: cluster.karmada.io/unreachable
        operator: Exists
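
    For orientation, note where these parameters sit in the spec: conflictResolution and propagateDeps are direct children of spec, while clusterTolerations belongs under placement. A minimal fragment combining them (cluster names follow the templates above) might look like this:

    spec:
      conflictResolution: "Overwrite"   # Overwrite existing resources in member clusters.
      propagateDeps: true               # Also propagate dependencies such as ConfigMaps and Secrets.
      placement:
        clusterAffinity:
          clusterNames: ["cluster-1", "cluster-2"]
        clusterTolerations:             # Tolerate the not-ready/unreachable taints.
          - key: cluster.karmada.io/not-ready
            operator: Exists
          - key: cluster.karmada.io/unreachable
            operator: Exists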

Improving the Stability and Predictability of Workload Deployment

To help your workloads run more stably and predictably in the Karmada environment and to avoid potential production risks, we have optimized the configuration logic for cluster and replica scheduling. These improvements guide you toward best-practice configuration patterns and a more reliable deployment experience.

Cluster Affinity Configuration Optimization

When defining ClusterAffinity for a workload, we have introduced more explicit and direct configuration requirements. For clear and accurate cluster selection, you are advised to:

  • Specify the clusters where you want to deploy the workload using clusterNames. This provides the clearest and most precise cluster targeting and avoids the uncertainty of dynamic matching.

Following this recommendation ensures that your workloads are deployed to the expected clusters, improving deployment reliability.
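
A minimal sketch of this recommendation (cluster names are placeholders):

placement:
  clusterAffinity:
    clusterNames:        # Explicitly enumerate target clusters instead of relying on dynamic matching.
      - member1
      - member2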

Spread Constraint Configuration Guide

To better balance workloads across clusters, we have streamlined and standardized SpreadConstraints settings. When configuring SpreadConstraints:

  • Combine it with ClusterAffinity. This ensures that spreading is performed within a clearly defined set of candidate clusters.
  • Focus on cluster-level spreading. SpreadByField (the spreading dimension) works exclusively with "cluster". This ensures workloads are distributed evenly and predictably across clusters.
  • Define a single, clear spread constraint per array element to avoid complex or conflicting spread rules.

These changes simplify how workloads are spread, making the rules clearer and easier to manage for better-balanced, more effective resource use.
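
A minimal sketch combining these recommendations (cluster names are placeholders): the affinity fixes the candidate set, and a single cluster-level spread constraint bounds how many of those clusters are actually used.

placement:
  clusterAffinity:
    clusterNames: ["member1", "member2", "member3", "member4"]
  spreadConstraints:
    - spreadByField: "cluster"   # Only cluster-level spreading is supported.
      minGroups: 2               # Use at least two of the candidate clusters.
      maxGroups: 3               # Use at most three of the candidate clusters.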

Enhanced Replica Scheduling

Using the Divided replica scheduling type (replicaSchedulingType: Divided) requires precise cluster weight settings for accurate and predictable replica distribution. Pay attention to the following:

  • Integrity of weight configuration: The weight of any candidate cluster specified by ClusterAffinity must be explicitly configured in WeightPreference.StaticWeightList.

This ensures that every cluster considered for replica scheduling has a defined weight, so replicas are distributed exactly in the preset ratio and imbalance or unexpected behavior caused by missing weights is avoided.
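
For example, a sketch in which every cluster listed in clusterAffinity also has an entry in staticWeightList (names are placeholders):

placement:
  clusterAffinity:
    clusterNames: ["member1", "member2"]
  replicaScheduling:
    replicaSchedulingType: "Divided"
    replicaDivisionPreference: "Weighted"
    weightPreference:
      staticWeightList:          # Every candidate cluster has an explicit weight.
        - targetCluster:
            clusterNames: ["member1"]
          weight: 3              # member1 receives 3/4 of the replicas.
        - targetCluster:
            clusterNames: ["member2"]
          weight: 1              # member2 receives 1/4 of the replicas.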

These optimizations will make your deployment management more efficient and stable and can reduce potential operational risks. If you have any questions or need further assistance, feel free to contact our support team.

Examples

Example 1: basic static weight propagation

Scenario: Propagate web resources to three clusters and divide replicas in the ratio of 2:1:1.

apiVersion: policy.karmada.io/v1alpha1
kind: PropagationPolicy
metadata:
  name: web-app-static-weight
  namespace: default
spec:
  resourceSelectors:
    - apiVersion: apps/v1
      kind: Deployment
      name: web-app
    - apiVersion: v1
      kind: Service
      name: web-app-service
    - apiVersion: v1
      kind: ConfigMap
      name: app-config

  placement:
    clusterAffinity:
      clusterNames: 
        - member1
        - member2
        - member3

    replicaScheduling:
      replicaSchedulingType: "Divided"
      replicaDivisionPreference: "Weighted"
      weightPreference:
        staticWeightList:
          - targetCluster:
              clusterNames: ["member1"]
            weight: 2  # 50% of replicas
          - targetCluster:
              clusterNames: ["member2"]
            weight: 1  # 25% of replicas
          - targetCluster:
              clusterNames: ["member3"]
            weight: 1  # 25% of replicas

  propagateDeps: true                # Dependencies are automatically propagated.
  conflictResolution: "Abort"        # Propagation is stopped when there are conflicts.
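
With this policy, if the web-app Deployment specifies, for example, replicas: 4, member1 receives 2 replicas while member2 and member3 receive 1 each. The Service and ConfigMap have no replica count, so each selected cluster receives an identical copy.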

Example 2: duplicated deployment

Scenario: Deploy the full set of replicas in each cluster.

apiVersion: policy.karmada.io/v1alpha1
kind: PropagationPolicy
metadata:
  name: cache-cluster-duplicated
  namespace: production
spec:
  resourceSelectors:
    - apiVersion: apps/v1
      kind: Deployment
      name: redis-cluster
    - apiVersion: v1
      kind: ConfigMap
      name: redis-config
    - apiVersion: v1
      kind: Service
      name: redis-cluster-service

  placement:
    clusterAffinity:
      clusterNames:
        - cache-west-1
        - cache-east-1
        - cache-eu-1

    replicaScheduling:
      replicaSchedulingType: "Duplicated"  # Duplication mode: Each cluster obtains complete replicas.

  propagateDeps: true                      # Configurations and dependencies are automatically propagated.

  # Key: In duplication mode, every cluster obtains the full replica count.
  # If you specify three replicas for a Deployment, each cluster will run three replicas.

Example 3: duplicated deployment + automatic DaemonSet expansion

Scenario: Deploy a DaemonSet in all clusters. When a cluster is added, the DaemonSet is deployed to it automatically; you do not need to modify the propagation policy.

# ClusterPropagationPolicy: duplicate the DaemonSet to all clusters.
apiVersion: policy.karmada.io/v1alpha1
kind: ClusterPropagationPolicy
metadata:
  name: file-sync-agent-auto-spread
spec:
  resourceSelectors:
    - apiVersion: apps/v1
      kind: DaemonSet
      name: file-sync-agent
    - apiVersion: v1
      kind: ConfigMap
      name: agent-config
    - apiVersion: v1
      kind: ServiceAccount
      name: file-sync-agent

  placement:
    # Key: If clusterAffinity is omitted, all available clusters are selected.
    # When a new cluster joins the Karmada control plane, it is selected automatically.

    # Duplication mode: The DaemonSet is deployed in each cluster.
    replicaScheduling:
      replicaSchedulingType: "Duplicated"
      # The DaemonSet runs on every node within a cluster; duplication ensures it is deployed in each cluster.

  # Dependencies are automatically propagated.
  propagateDeps: true

  # Conflict resolution policy
  conflictResolution: "Overwrite"