Continuous Delivery Using Argo CD
Background
Argo CD is a declarative, GitOps-based continuous delivery tool for Kubernetes that automates the deployment of applications to Kubernetes clusters.
This section describes how to interconnect Argo CD with CCE to implement continuous deployment.
Preparations
- Create a CCE cluster with a node and bind an EIP to the node so that the required images can be downloaded during Argo CD installation.
- Create an ECS, bind an EIP to the ECS, and download and configure kubectl to connect to the cluster. For details, see Connecting to a Cluster Using kubectl.
- Prepare an application in the Git repository. This section uses the Nginx sample application in the https://gitlab.com/c8147/examples.git repository.
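The exact content depends on your repository. As a reference only, a minimal sketch of such a sample application (one Deployment and one Service, matching the resources synchronized later in this section) could look as follows; the file name, image, and field values are assumptions and may differ from the actual repository content:
# nginx/nginx.yaml (hypothetical file name and content)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:latest        # Assumed image; the sample repository may use a different one
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nginx
spec:
  type: ClusterIP
  ports:
  - port: 80
    targetPort: 80
  selector:
    app: nginx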
Installing Argo CD
- Install the Argo CD server in the cluster.
# kubectl create namespace argocd
# kubectl apply -n argocd -f https://raw.githubusercontent.com/argoproj/argo-cd/v2.4.0/manifests/install.yaml
Check whether the installation is successful. If all pods in the argocd namespace are in the Running status, the installation is successful.
# kubectl get pod -A
NAMESPACE   NAME                                                 READY   STATUS    RESTARTS   AGE
argocd      argocd-application-controller-0                      1/1     Running   0          8m32s
argocd      argocd-applicationset-controller-789457b498-6n6l5    1/1     Running   0          8m32s
argocd      argocd-dex-server-748bddb496-bxj2c                   1/1     Running   0          8m32s
argocd      argocd-notifications-controller-8668ffdd75-q7wdb     1/1     Running   0          8m32s
argocd      argocd-redis-55d64cd8bf-g85np                        1/1     Running   0          8m32s
argocd      argocd-repo-server-778d695657-skprm                  1/1     Running   0          8m32s
argocd      argocd-server-59c9ccff4c-vd9ww                       1/1     Running   0          8m32s
Run the following command to change the Service type of argocd-server to NodePort:
# kubectl patch svc argocd-server -n argocd -p '{"spec": {"type": "NodePort"}}'
service/argocd-server patched
Check the result.
# kubectl -n argocd get svc
NAME                                      TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)                      AGE
argocd-applicationset-controller          ClusterIP   10.247.237.53    <none>        7000/TCP,8080/TCP            18m
argocd-dex-server                         ClusterIP   10.247.164.111   <none>        5556/TCP,5557/TCP,5558/TCP   18m
argocd-metrics                            ClusterIP   10.247.138.98    <none>        8082/TCP                     18m
argocd-notifications-controller-metrics   ClusterIP   10.247.239.85    <none>        9001/TCP                     18m
argocd-redis                              ClusterIP   10.247.220.90    <none>        6379/TCP                     18m
argocd-repo-server                        ClusterIP   10.247.1.142     <none>        8081/TCP,8084/TCP            18m
argocd-server                             NodePort    10.247.57.16     <none>        80:30118/TCP,443:31221/TCP   18m
argocd-server-metrics                     ClusterIP   10.247.206.190   <none>        8083/TCP                     18m
You can then access Argo CD through the argocd-server Service at Node IP address:Port number. In this example, the HTTPS port number is 31221.
The login username is admin, and the password can be obtained by running the following command:
# kubectl -n argocd get secret argocd-initial-admin-secret -o jsonpath="{.data.password}" | base64 -d;echo
- Install the Argo CD client on the ECS.
# wget https://github.com/argoproj/argo-cd/releases/download/v2.4.0/argocd-linux-amd64
# cp argocd-linux-amd64 /usr/local/bin/argocd
# chmod +x /usr/local/bin/argocd
Run the following command. If the version information is displayed, the installation is successful. (The FATA message about an unspecified Argo CD server address is displayed because no Argo CD server has been configured yet; it can be ignored at this point.)
# argocd version
argocd: v2.4.0+91aefab
  BuildDate: 2022-06-10T17:44:14Z
  GitCommit: 91aefabc5b213a258ddcfe04b8e69bb4a2dd2566
  GitTreeState: clean
  GoVersion: go1.18.3
  Compiler: gc
  Platform: linux/amd64
FATA[0000] Argo CD server address unspecified
Deploying an Application Using Argo CD
- Add a CCE cluster to Argo CD.
- Log in to an ECS.
- Check the kubectl context configuration.
# kubectl config get-contexts
CURRENT   NAME       CLUSTER           AUTHINFO   NAMESPACE
*         internal   internalCluster   user
- Log in to the Argo CD server. The username is admin. The server address is Node IP address:Port number, and the password is the one obtained in Installing Argo CD. If the ECS and the cluster are in the same VPC, the node IP address can be a private IP address.
argocd login <Node IP address:Port number> --username admin --password <password>
Information similar to the following is displayed:
# argocd login 192.168.0.52:31221 --username admin --password ******
WARNING: server certificate had error: x509: cannot validate certificate for 192.168.0.52 because it doesn't contain any IP SANs. Proceed insecurely (y/n)? y
'admin:login' logged in successfully
Context '192.168.0.52:31221' updated
- Add a CCE cluster.
# argocd cluster add internal --kubeconfig /root/.kube/config --name argocd-01
In the preceding command, internal is the context name obtained by running kubectl config get-contexts, /root/.kube/config is the path of the kubectl configuration file, and argocd-01 is the cluster name defined in Argo CD.
Information similar to the following is displayed:
WARNING: This will create a service account `argocd-manager` on the cluster referenced by context `internal` with full cluster level privileges. Do you want to continue [y/N]? y
INFO[0002] ServiceAccount "argocd-manager" already exists in namespace "kube-system"
INFO[0002] ClusterRole "argocd-manager-role" updated
INFO[0002] ClusterRoleBinding "argocd-manager-role-binding" updated
Cluster "https://192.168.0.113:5443" added
Log in to the Argo CD page. You can see that the connection is successful.
- Connect to the Git repository.
# argocd repo add https://gitlab.com/c8147/examples.git --username <username> --password <password>
In the preceding command, https://gitlab.com/c8147/examples.git indicates the repository address, and <username> and <password> indicate the repository login username and password. Replace them with the actual values.
Information similar to the following is displayed:
Repository 'https://gitlab.com/c8147/examples.git' added
Log in to the Argo CD page. You can see that the repository has been connected.
- Add an application to Argo CD.
# argocd app create nginx --repo https://gitlab.com/c8147/examples.git --path nginx --dest-server https://192.168.0.113:5443 --dest-namespace default
In the preceding command, https://gitlab.com/c8147/examples.git indicates the repository address, nginx indicates the path in the repository, https://192.168.0.113:5443 indicates the API server address of the cluster where the application is to be deployed, and default indicates the target namespace. (A declarative equivalent of this command is sketched at the end of this step.)
In this example, the nginx directory in the GitLab repository contains a YAML file of the Nginx application. The file includes a Deployment and a Service.
After the application is created, you can view its details.
# argocd app list
NAME   CLUSTER                      NAMESPACE  PROJECT  STATUS     HEALTH   SYNCPOLICY  CONDITIONS  REPO                                    PATH   TARGET
nginx  https://192.168.0.113:5443   default    default  OutOfSync  Missing  <none>      <none>      https://gitlab.com/c8147/examples.git   nginx
Log in to the Argo CD page. You can see that the application has been added.
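If you prefer to manage the application declaratively instead of running argocd app create, an equivalent Application resource can be applied in the argocd namespace. The following is a minimal sketch built from the parameters used above; targetRevision: HEAD is an assumption and can be adjusted as needed:
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: nginx
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://gitlab.com/c8147/examples.git   # Repository address
    path: nginx                                      # Path in the repository
    targetRevision: HEAD                             # Assumed revision
  destination:
    server: https://192.168.0.113:5443               # API server address of the target cluster
    namespace: default                               # Target namespace
Apply the file with kubectl apply -f on the cluster where Argo CD is installed to achieve the same result as the CLI command.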
- Synchronize the application.
Synchronize the application to deploy it in the specified cluster. Run the following command:
# argocd app sync nginx
TIMESTAMP                  GROUP  KIND        NAMESPACE  NAME   STATUS     HEALTH       HOOK  MESSAGE
2022-10-24T12:15:10+08:00         Service     default    nginx  OutOfSync  Missing
2022-10-24T12:15:10+08:00  apps   Deployment  default    nginx  OutOfSync  Missing
2022-10-24T12:15:10+08:00         Service     default    nginx  Synced     Healthy
2022-10-24T12:15:10+08:00         Service     default    nginx  Synced     Healthy            service/nginx created
2022-10-24T12:15:10+08:00  apps   Deployment  default    nginx  OutOfSync  Missing            deployment.apps/nginx created
2022-10-24T12:15:10+08:00  apps   Deployment  default    nginx  Synced     Progressing        deployment.apps/nginx created

Name:               nginx
Project:            default
Server:             https://192.168.0.113:5443
Namespace:          default
URL:                https://192.168.0.178:32459/applications/nginx
Repo:               https://gitlab.com/c8147/examples.git
Target:
Path:               nginx
SyncWindow:         Sync Allowed
Sync Policy:        <none>
Sync Status:        Synced to (dd15906)
Health Status:      Progressing

Operation:          Sync
Sync Revision:      dd1590679856bd9288036847bdc4a5556c169267
Phase:              Succeeded
Start:              2022-10-24 12:15:10 +0800 CST
Finished:           2022-10-24 12:15:10 +0800 CST
Duration:           0s
Message:            successfully synced (all tasks run)

GROUP  KIND        NAMESPACE  NAME   STATUS  HEALTH       HOOK  MESSAGE
       Service     default    nginx  Synced  Healthy            service/nginx created
apps   Deployment  default    nginx  Synced  Progressing        deployment.apps/nginx created
You can see that an Nginx workload and a Service are deployed in a CCE cluster.
# kubectl get deployment
NAME    READY   UP-TO-DATE   AVAILABLE   AGE
nginx   1/1     1            1           2m47s
# kubectl get svc
NAME         TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)   AGE
kubernetes   ClusterIP   10.247.0.1      <none>        443/TCP   5h12m
nginx        ClusterIP   10.247.177.24   <none>        80/TCP    2m52s
Log in to the Argo CD page. You can see that the application status has changed to Synced.
Using Argo Rollouts for Grayscale Release
Argo Rollouts is a Kubernetes controller that provides advanced deployment capabilities such as blue-green, grayscale (canary) release, and progressive delivery.
- Install an argo-rollouts server in the cluster.
# kubectl create namespace argo-rollouts
# kubectl apply -f https://github.com/argoproj/argo-rollouts/releases/download/v1.2.2/install.yaml -n argo-rollouts
If the application is deployed in multiple clusters, install the argo-rollouts server in each target cluster.
- Install the argo-rollouts kubectl plugin on the ECS.
# curl -LO https://github.com/argoproj/argo-rollouts/releases/download/v1.2.2/kubectl-argo-rollouts-linux-amd64
# chmod +x ./kubectl-argo-rollouts-linux-amd64
# sudo mv ./kubectl-argo-rollouts-linux-amd64 /usr/local/bin/kubectl-argo-rollouts
Run the following command to check whether the plugin has been installed:
# kubectl argo rollouts version
kubectl-argo-rollouts: v1.2.2+22aff27
  BuildDate: 2022-07-26T17:24:43Z
  GitCommit: 22aff273bf95646e0cd02555fbe7d2da0f903316
  GitTreeState: clean
  GoVersion: go1.17.6
  Compiler: gc
  Platform: linux/amd64
- Prepare two sample Nginx application images, v1 and v2, whose welcome pages display "nginx:v1!" and "nginx:v2!", respectively.
Create a Dockerfile. The content of the Dockerfile for v1 is as follows. For v2, replace nginx:v1! with nginx:v2!.
FROM nginx:latest
RUN echo '<h1>nginx:v1!</h1>' > /usr/share/nginx/html/index.html
Create a v1 image.
docker build -t nginx:v1 .
Log in to SWR and push the image to SWR. For details, see Uploading an Image Through a Container Engine Client. In the following commands, container indicates the organization name in SWR. Replace it with your own organization name.
docker login -u {region}@xxx -p xxx swr.{region}.myhuaweicloud.com
docker tag nginx:v1 swr.cn-east-3.myhuaweicloud.com/container/nginx:v1
docker push swr.cn-east-3.myhuaweicloud.com/container/nginx:v1
Create a v2 image and push it to SWR in the same way.
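For reference, building and pushing the v2 image follows the same pattern, changing only the welcome page text and the image tag (container is an example organization name; replace it as required). Dockerfile for v2:
FROM nginx:latest
RUN echo '<h1>nginx:v2!</h1>' > /usr/share/nginx/html/index.html
Build and push commands:
docker build -t nginx:v2 .
docker tag nginx:v2 swr.cn-east-3.myhuaweicloud.com/container/nginx:v2
docker push swr.cn-east-3.myhuaweicloud.com/container/nginx:v2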
- Deploy the application using an Argo Rollouts Rollout object. In this example, the release first shifts 20% of the service traffic to the new version and then pauses. After you manually resume the release, the controller automatically and gradually shifts the remaining traffic until the release is complete.
Create a file named rollout-canary.yaml:
apiVersion: argoproj.io/v1alpha1
kind: Rollout
metadata:
  name: rollout-canary          # Custom Rollout name
spec:
  replicas: 5                   # Five replicas
  strategy:                     # Upgrade policy
    canary:                     # Grayscale (canary) release
      steps:                    # Release pace (a duration can be set for each phase)
      - setWeight: 20           # Traffic weight
      - pause: {}               # If no duration is specified, the release pauses here until it is manually resumed.
      - setWeight: 40
      - pause: {duration: 10}   # Pause duration, in seconds
      - setWeight: 60
      - pause: {duration: 10}
      - setWeight: 80
      - pause: {duration: 10}
  revisionHistoryLimit: 2
  selector:
    matchLabels:
      app: rollout-canary
  template:
    metadata:
      labels:
        app: rollout-canary
    spec:
      containers:
      - name: rollout-canary
        image: swr.cn-east-3.myhuaweicloud.com/container/nginx:v1   # The image pushed in the previous step, version v1
        ports:
        - name: http
          containerPort: 80
          protocol: TCP
        resources:
          requests:
            memory: 32Mi
            cpu: 5m
      imagePullSecrets:
      - name: default-secret
---
apiVersion: v1
kind: Service
metadata:
  name: rollout-canary
spec:
  type: NodePort
  ports:
  - port: 80
    targetPort: 80
    nodePort: 31270             # Custom node port number
  selector:
    app: rollout-canary
Run the following command to create the preceding two resource objects:
kubectl apply -f rollout-canary.yaml
When the Rollout is created for the first time, no upgrade is triggered and the configured release steps do not take effect. The workload is immediately scaled to the full number of replicas.
- Argo Rollouts visualizes the rollout process and related resource objects to display real-time changes. You can run the get rollout --watch command to observe the deployment process, for example:
kubectl argo rollouts get rollout rollout-canary --watch
In the preceding command, rollout-canary indicates the custom Rollout name.
- After the creation is complete, you can access the Nginx application using Node EIP:Port number. The port number is specified in the Service resource in the rollout-canary.yaml file. In this example, the port number is 31270.
- Use the v2 image to update the application.
kubectl argo rollouts set image rollout-canary rollout-canary=swr.cn-east-3.myhuaweicloud.com/container/nginx:v2
The controller updates the application according to the release policy. In this example, a 20% traffic weight is set in the first step, and the release then pauses until you abort or resume it. You can run the following command to view the detailed process and confirm that the release is paused.
kubectl argo rollouts get rollout rollout-canary --watch
You can see that only one of the five replicas runs the new version, which corresponds to the 20% weight defined by setWeight: 20.
If you run the following command multiple times, about 20% of the responses come from v2.
for i in {1..10}; do curl <Node EIP:Port number>; done;
Verification result:
<h1>nginx:v2!</h1>
<h1>nginx:v1!</h1>
<h1>nginx:v1!</h1>
<h1>nginx:v1!</h1>
<h1>nginx:v1!</h1>
<h1>nginx:v1!</h1>
<h1>nginx:v1!</h1>
<h1>nginx:v1!</h1>
<h1>nginx:v2!</h1>
<h1>nginx:v1!</h1>
- Manually resume the release.
kubectl argo rollouts promote rollout-canary
In this example, the remaining steps are fully automated until the release is complete. Run the following command to view the detailed process. The controller gradually switches all traffic to the new version.
kubectl argo rollouts get rollout rollout-canary --watch
- You can run the following command to use more Argo Rollouts functions, such as terminating or rolling back a release:
kubectl argo rollouts --help
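For example, the following commands abort an in-progress release or roll the Rollout back to an earlier revision. These are standard kubectl-argo-rollouts subcommands; confirm their exact behavior with the --help output of your installed version.
kubectl argo rollouts abort rollout-canary   # Abort the ongoing release; traffic stays on the stable version
kubectl argo rollouts undo rollout-canary    # Roll the Rollout back to a previous revision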