Introduction to Training Job Visualization
The new version of ModelArts notebook supports TensorBoard and MindInsight for visualizing training jobs. In the development environment, use small datasets to train and debug algorithms; during this process you can check algorithm convergence and detect issues, which facilitates debugging.
You can create visualization jobs of TensorBoard and MindInsight types on ModelArts.
Both TensorBoard and MindInsight effectively display the change trends of a training job and the data used during training.
- TensorBoard
TensorBoard effectively displays the TensorFlow computational graph at run time, the trends of metrics over time, and the data used during training. For more details about TensorBoard, see the TensorBoard official website.
TensorBoard visualization training jobs support only CPU and GPU flavors with TensorFlow 2.1 or PyTorch 1.4 and 1.8 images. Select images and flavors as required.
- MindInsight
MindInsight visualizes information such as scalars, images, computational graphs, and model hyperparameters during training. It also provides functions such as a training dashboard, model lineage, data lineage, and performance debugging, helping you train and debug models efficiently. MindInsight supports MindSpore training jobs. For more information about MindInsight, see the MindSpore official website.
MindInsight visualization training jobs support the following images and flavors. Select them as required:
- MindSpore 1.2.0 (CPU or GPU)
- MindSpore 1.5.x or later (Ascend)
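Both tools read summary (event) files written during training. As a minimal sketch of how such files are produced for TensorBoard with TensorFlow 2.x, the loop below logs a scalar metric with the standard `tf.summary` API; the log directory path, metric name, and loss values are illustrative assumptions, not part of the ModelArts setup:

```python
# Hypothetical training loop that writes TensorBoard summary files.
# The log directory, metric name, and loss values are illustrative only.
import tensorflow as tf

log_dir = "./summary/demo"
writer = tf.summary.create_file_writer(log_dir)

with writer.as_default():
    for step in range(10):
        loss = 1.0 / (step + 1)            # stand-in for a real training loss
        tf.summary.scalar("loss", loss, step=step)
    writer.flush()
```

Pointing a TensorBoard visualization job at `./summary/demo` would then show the `loss` curve over the ten steps. MindSpore offers an analogous mechanism (the `SummaryCollector` callback) for producing files that MindInsight can read.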
You can use the summary file generated during model training to create a visualization job in a DevEnviron notebook.
- For details about how to create a MindInsight visualization job in a development environment, see MindInsight Visualization Jobs.
- For details about how to create a TensorBoard visualization job in a development environment, see TensorBoard Visualization Jobs.
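Before creating a visualization job, it can help to confirm that the training run actually produced summary files. The sketch below scans a log directory for event files using only the standard library; the directory layout and the `events.out.tfevents.*` file prefix are assumptions based on the common TensorBoard naming convention:

```python
# Minimal sketch: check that a training run produced summary (event) files
# before creating a visualization job. The "events.out.tfevents.*" prefix
# follows the usual TensorBoard convention and is an assumption here.
from pathlib import Path

def find_summary_files(log_dir):
    """Return all event files under log_dir, searched recursively."""
    return sorted(Path(log_dir).rglob("events.out.tfevents.*"))

# Example: simulate a log directory containing one event file.
demo = Path("./demo_summary/run1")
demo.mkdir(parents=True, exist_ok=True)
(demo / "events.out.tfevents.1700000000.host0").touch()

files = find_summary_files("./demo_summary")
print(len(files))  # 1
```

An empty result usually means the training script never wrote summaries, or the visualization job is pointed at the wrong directory.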