Training a TensorFlow Model
After Kubeflow is deployed, you can use the parameter server (PS)-worker mode to train TensorFlow models. This section walks through an official TensorFlow training example provided by Kubeflow. For details, see TensorFlow Training (TFJob).
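TFJob resources are reconciled by the Kubeflow training operator. Before submitting a job, you can optionally confirm that the TFJob CRD is registered in the cluster. This is a minimal check, assuming a standard Kubeflow deployment:
kubectl get crd tfjobs.kubeflow.org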
Running the MNIST Example
- Deploy the TFJob resource to start training.
Create the tf-mnist.yaml file. The following is an example:
apiVersion: "kubeflow.org/v1" kind: TFJob metadata: name: tfjob-simple namespace: kubeflow spec: tfReplicaSpecs: Worker: replicas: 2 restartPolicy: OnFailure template: spec: containers: - name: tensorflow image: kubeflow/tf-mnist-with-summaries:latest command: - "python" - "/var/tf_mnist/mnist_with_summaries.py"
- Create the TFJob.
kubectl apply -f tf-mnist.yaml
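To follow the job's progress before the logs are available, the commands below show one way to check the TFJob and its worker pods. The training.kubeflow.org/job-name label is an assumption based on recent training-operator versions; older versions may label pods differently.
kubectl get tfjob tfjob-simple -n kubeflow
kubectl get pods -n kubeflow -l training.kubeflow.org/job-name=tfjob-simple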
- View the logs after the worker has finished running.
kubectl -n kubeflow logs tfjob-simple-worker-0
Information similar to the following is displayed:
...
Accuracy at step 900: 0.964
Accuracy at step 910: 0.9653
Accuracy at step 920: 0.9665
Accuracy at step 930: 0.9681
Accuracy at step 940: 0.9664
Accuracy at step 950: 0.9667
Accuracy at step 960: 0.9694
Accuracy at step 970: 0.9683
Accuracy at step 980: 0.9687
Accuracy at step 990: 0.966
Adding run metadata for 999
- Delete the TFJob.
kubectl delete -f tf-mnist.yaml
Using a GPU
Training can also be performed on GPUs. In this scenario, the cluster must contain GPU nodes with the proper drivers installed.
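Before creating the job, you can check whether the cluster reports schedulable GPUs. The command below assumes the NVIDIA device plugin exposes GPUs as the nvidia.com/gpu resource; replace <node-name> with one of your GPU nodes:
kubectl describe node <node-name> | grep nvidia.com/gpu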
- Specify the GPU resources in the TFJob.
Create the tf-gpu.yaml file. The following is an example:
This example uses the TensorFlow distributed (parameter server) architecture to train a ResNet-50 convolutional neural network (CNN) on randomly generated images. Each step trains a batch of 32 images (batch_size), for 100 steps in total, and the throughput (images/sec) of each step is recorded.
apiVersion: "kubeflow.org/v1" kind: "TFJob" metadata: name: "tf-smoke-gpu" spec: tfReplicaSpecs: PS: replicas: 1 template: metadata: creationTimestamp: null spec: containers: - args: - python - tf_cnn_benchmarks.py - --batch_size=32 - --model=resnet50 - --variable_update=parameter_server - --flush_stdout=true - --num_gpus=1 - --local_parameter_device=cpu - --device=cpu - --data_format=NHWC image: docker.io/kubeflow/tf-benchmarks-cpu:v20171202-bdab599-dirty-284af3 name: tensorflow ports: - containerPort: 2222 name: tfjob-port resources: limits: cpu: "1" workingDir: /opt/tf-benchmarks/scripts/tf_cnn_benchmarks restartPolicy: OnFailure Worker: replicas: 1 template: metadata: creationTimestamp: null spec: containers: - args: - python - tf_cnn_benchmarks.py - --batch_size=32 - --model=resnet50 - --variable_update=parameter_server - --flush_stdout=true - --num_gpus=1 - --local_parameter_device=cpu - --device=gpu - --data_format=NHWC image: docker.io/kubeflow/tf-benchmarks-gpu:v20171202-bdab599-dirty-284af3 name: tensorflow ports: - containerPort: 2222 name: tfjob-port resources: limits: nvidia.com/gpu: 1 # Number of GPUs workingDir: /opt/tf-benchmarks/scripts/tf_cnn_benchmarks restartPolicy: OnFailure
- Create the TFJob.
kubectl apply -f tf-gpu.yaml
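This TFJob has no namespace in its metadata, so it is created in the current (typically default) namespace. You can optionally watch the parameter server and worker pods until they reach the Running state; the label shown below is an assumption based on recent training-operator versions:
kubectl get pods -l training.kubeflog.org/job-name=tf-smoke-gpu -w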
- After the worker completes the job (about 5 minutes when a GPU is used), run the following command to view the result.
kubectl logs tf-smoke-gpu-worker-0
Information similar to the following is displayed:
...
INFO|2023-09-02T12:04:25|/opt/launcher.py|27| Running warm up
INFO|2023-09-02T12:08:55|/opt/launcher.py|27| Done warm up
INFO|2023-09-02T12:08:55|/opt/launcher.py|27| Step  Img/sec  loss
INFO|2023-09-02T12:08:56|/opt/launcher.py|27| 1  images/sec: 68.8 +/- 0.0 (jitter = 0.0)  8.777
INFO|2023-09-02T12:09:00|/opt/launcher.py|27| 10  images/sec: 70.4 +/- 0.4 (jitter = 1.8)  8.557
INFO|2023-09-02T12:09:04|/opt/launcher.py|27| 20  images/sec: 70.5 +/- 0.3 (jitter = 1.5)  8.090
INFO|2023-09-02T12:09:09|/opt/launcher.py|27| 30  images/sec: 70.3 +/- 0.3 (jitter = 1.6)  8.041
INFO|2023-09-02T12:09:13|/opt/launcher.py|27| 40  images/sec: 70.1 +/- 0.2 (jitter = 1.7)  9.464
INFO|2023-09-02T12:09:18|/opt/launcher.py|27| 50  images/sec: 70.1 +/- 0.2 (jitter = 1.6)  7.797
INFO|2023-09-02T12:09:23|/opt/launcher.py|27| 60  images/sec: 70.1 +/- 0.2 (jitter = 1.6)  8.595
INFO|2023-09-02T12:09:27|/opt/launcher.py|27| 70  images/sec: 70.0 +/- 0.2 (jitter = 1.7)  7.853
INFO|2023-09-02T12:09:32|/opt/launcher.py|27| 80  images/sec: 69.9 +/- 0.2 (jitter = 1.7)  7.849
INFO|2023-09-02T12:09:36|/opt/launcher.py|27| 90  images/sec: 69.8 +/- 0.2 (jitter = 1.7)  7.911
INFO|2023-09-02T12:09:41|/opt/launcher.py|27| 100  images/sec: 69.7 +/- 0.1 (jitter = 1.7)  7.853
INFO|2023-09-02T12:09:41|/opt/launcher.py|27| ----------------------------------------------------------------
INFO|2023-09-02T12:09:41|/opt/launcher.py|27| total images/sec: 69.68
INFO|2023-09-02T12:09:41|/opt/launcher.py|27| ----------------------------------------------------------------
INFO|2023-09-02T12:09:42|/opt/launcher.py|80| Finished: python tf_cnn_benchmarks.py --batch_size=32 --model=resnet50 --variable_update=parameter_server --flush_stdout=true --num_gpus=1 --local_parameter_device=cpu --device=gpu --data_format=NHWC --job_name=worker --ps_hosts=tf-smoke-gpu-ps-0.default.svc:2222 --worker_hosts=tf-smoke-gpu-worker-0.default.svc:2222 --task_index=0
INFO|2023-09-02T12:09:42|/opt/launcher.py|84| Command ran successfully sleep for ever.
In this example, the training throughput on a single GPU is 69.68 images per second.
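When you are done, the GPU example can be cleaned up in the same way as the MNIST example:
kubectl delete -f tf-gpu.yaml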