Deploying Kubeflow
Background
Building an end-to-end AI computing platform on Kubernetes is complex, requiring more than a dozen phases. Apart from the familiar model training phase, the process also includes data collection, preprocessing, resource management, feature extraction, data verification, model management, model release, and monitoring. If AI algorithm engineers want to run a model training task, they first have to build an entire AI computing platform, which is time-consuming, labor-intensive, and demands considerable knowledge and experience.

Kubeflow, released in 2017, is built on containers and Kubernetes. It aims to provide data scientists, machine learning engineers, and system O&M personnel with a platform for agile deployment, development, training, release, and management of machine learning services. It leverages cloud native technologies to let users quickly and easily deploy, use, and manage the most popular machine learning software.
Kubeflow 1.0 is now available, providing development, building, training, and deployment capabilities that cover the entire machine learning and deep learning workflow for enterprise users.
A typical workflow is as follows.

With Kubeflow 1.0, you first develop a model using Jupyter, and then set up containers using tools such as Fairing (SDK). Next, you create Kubernetes resources to train the model. After the training is complete, you create and deploy inference servers using KFServing. In this way, Kubeflow gives you an end-to-end, agile workflow for a machine learning task. The whole process can be automated using pipelines, which help bring DevOps practices to the AI field.
Prerequisites
- A CCE cluster named clusterA is available, with a GPU node that has two or more GPUs.
- EIPs have been bound to the nodes, and the kubectl command line tool has been configured. For details, see Accessing a Cluster Using kubectl.
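Before proceeding, you can quickly verify both prerequisites from the machine where kubectl is configured. The second command is a sketch that assumes the GPU add-on exposes GPUs under the standard nvidia.com/gpu resource name:
# Confirm kubectl connectivity and node status
kubectl get node
# Check the allocatable GPU count per node (assumes GPUs are exposed as nvidia.com/gpu)
kubectl get node -o custom-columns='NAME:.metadata.name,GPU:.status.allocatable.nvidia\.com/gpu'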
Installing Kustomize
Kustomize is an open-source tool for managing the configuration of applications running in Kubernetes clusters. It lets you customize application configuration at build time while leaving the original YAML files untouched. Starting with Kubeflow 1.3, all components should be deployed only using Kustomize.
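For illustration only (this file is a hypothetical example, not part of the Kubeflow manifests), a minimal kustomization.yaml declares existing YAML files as resources and applies customizations on top of them when kustomize build runs:
# kustomization.yaml (illustrative example)
resources:
  - deployment.yaml    # an ordinary Kubernetes manifest, left unmodified on disk
namePrefix: dev-       # a customization that kustomize build applies to the rendered output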
- Use the official script to install Kustomize. Kubeflow is incompatible with earlier versions of Kustomize, so only Kustomize 5 and later versions are supported. In this example, Kustomize 5.1.0 is used.
curl -o install_kustomize.sh "https://raw.githubusercontent.com/kubernetes-sigs/kustomize/master/hack/install_kustomize.sh"
sh install_kustomize.sh 5.1.0 .
The installation may take 3 to 5 minutes, and information similar to the following will be displayed:
v5.1.0 kustomize installed to /root/kubeflow/./kustomize
- Copy kustomize to the /bin directory so that the kustomize command can be used globally.
cp kustomize /bin/
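To confirm that the installation succeeded and the binary is available globally, run the following command. The output should report v5.1.0:
kustomize version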
Installing Kubeflow
Perform the steps in this section to install all official Kubeflow components. After the installation, you can access the Kubeflow central dashboard. For details, see Connecting to Kubeflow.
- Install Kubeflow 1.7.0.
wget https://github.com/kubeflow/manifests/archive/refs/tags/v1.7.0.zip
unzip v1.7.0.zip
- Use Kustomize to create a YAML file for deploying Kubeflow.
cd ./manifests-1.7.0/
kustomize build example -o example.yaml
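Optionally, before applying the rendered file, you can run a quick sanity check on how many resources it contains:
# Each resource in the rendered output has one top-level kind: field
grep -c '^kind:' example.yaml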
- Configure storage resources required by Kubeflow.
Some storage resources need to be configured during the installation. The storage configuration in the official example does not take effect in CCE, which may cause PVC creation failures. Therefore, create PVCs with the same names in the cluster in advance:
- katib-mysql
- mysql-pv-claim
- minio-pvc
- authservice-pvc
In this example, EVS disks are used. You can change the storage type as required.
Create a pvc.yaml file. The following is an example:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: katib-mysql
  namespace: kubeflow
  annotations:
    everest.io/disk-volume-type: SAS  # EVS disk type
  labels:
    failure-domain.beta.kubernetes.io/region: <your_region>  # Region of the node where the application is to be deployed
    failure-domain.beta.kubernetes.io/zone: <your_zone>      # AZ of the node where the application is to be deployed
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
  storageClassName: csi-disk
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-pv-claim
  namespace: kubeflow
  annotations:
    everest.io/disk-volume-type: SAS  # EVS disk type
  labels:
    failure-domain.beta.kubernetes.io/region: <your_region>  # Region of the node where the application is to be deployed
    failure-domain.beta.kubernetes.io/zone: <your_zone>      # AZ of the node where the application is to be deployed
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 20Gi
  storageClassName: csi-disk
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: minio-pvc
  namespace: kubeflow
  annotations:
    everest.io/disk-volume-type: SAS  # EVS disk type
  labels:
    failure-domain.beta.kubernetes.io/region: <your_region>  # Region of the node where the application is to be deployed
    failure-domain.beta.kubernetes.io/zone: <your_zone>      # AZ of the node where the application is to be deployed
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 20Gi
  storageClassName: csi-disk
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: authservice-pvc
  namespace: istio-system
  annotations:
    everest.io/disk-volume-type: SAS  # EVS disk type
  labels:
    failure-domain.beta.kubernetes.io/region: <your_region>  # Region of the node where the application is to be deployed
    failure-domain.beta.kubernetes.io/zone: <your_zone>      # AZ of the node where the application is to be deployed
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
  storageClassName: csi-disk
Create the PVCs.
kubectl apply -f pvc.yaml
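You can then confirm that all four PVCs exist. Depending on the volume binding mode of the csi-disk storage class, they may remain in the Pending state until a pod that uses them is scheduled:
kubectl get pvc -n kubeflow
kubectl get pvc -n istio-system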
- Create related resources.
kubectl apply -f example.yaml

Official images may fail to be pulled due to network problems, in which case the workload reports an ImagePullBackOff or FailedPullImage error. If this happens, configure a proper image proxy or mirror.
- Check whether pods in all namespaces are running.
kubectl get pod -A
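Kubeflow creates a large number of pods, so it can be easier to list only those that are not yet ready:
# Show only pods that are not in the Running or Completed state
kubectl get pod -A | grep -Ev 'Running|Completed'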
If an unexpected problem occurs during resource creation, rectify it by referring to Common Issues.
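Once all pods are running, a common way to reach the central dashboard locally is to port-forward the Istio ingress gateway. This is a minimal sketch; see Connecting to Kubeflow for the full procedure. The upstream example manifests ship a default user (user@example.com / 12341234):
# Forward the Istio ingress gateway, then open http://localhost:8080 in a browser
kubectl port-forward -n istio-system svc/istio-ingressgateway 8080:80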
Common Issues
- If some CRDs have not been created yet, information similar to the following is displayed:
error: resource mapping not found for name: "<RESOURCE_NAME>" namespace: "<SOME_NAMESPACE>" from "STDIN": no matches for kind "<CRD_NAME>" in version "<CRD_FULL_NAME>" ensure CRDs are installed first
Solution:
Apply the resources again. This occurs because Kustomize renders CRDs and CRs into the same output, so kubectl may create a CR before the CRD it depends on has been registered. One way to avoid this race is shown below.
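Assuming the example.yaml generated earlier, you can apply it in a retry loop until all resources are accepted:
# Re-apply until every resource (including CRs whose CRDs were just registered) is accepted
while ! kubectl apply -f example.yaml; do
  echo "Retrying to apply resources"
  sleep 10
done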
- When a workload is created, an error message is displayed, indicating that there are too many pods on the node:
0/x nodes are available: x Too many pods.
Solution:
Increase the number of nodes. This message indicates that the number of pods to be scheduled exceeds the maximum number of pods the node can accommodate.
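To see how close a node is to its limit, compare its allocatable pod capacity with the number of pods already scheduled on it (<node_name> is a placeholder):
# Allocatable pod capacity per node
kubectl get node -o custom-columns=NAME:.metadata.name,PODS:.status.allocatable.pods
# Pods currently scheduled on a given node (subtract 1 for the header line)
kubectl get pod -A --field-selector spec.nodeName=<node_name> | wc -l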
- The training-operator workload cannot run properly, and its log contains an error similar to the following:
Waited for 1.039518449s due to client-side throttling, not priority and fairness, request: GET:https://10.247.0.1:443/apis/xxx/xx?timeout=32s
Solution:
Check whether any APIServices in the cluster are unavailable.
kubectl get apiservice
If no APIService is in the False state, the training-operator workload will start running in 1 to 2 minutes.
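To surface only the problematic entries, you can filter the output for the False state:
# List only APIServices whose AVAILABLE column is False
kubectl get apiservice | grep False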