Evaluation Results
After a training job has been executed, ModelArts evaluates your model and provides optimization diagnosis and suggestions.
- When you use a built-in algorithm to create a training job, you can view the evaluation result without any additional configuration. The system automatically provides optimization suggestions based on your model metrics. Read the suggestions and guidance on the page carefully to further optimize your model.
- For a training job created by writing a training script or using a custom image, you need to add the evaluation code to the training code so that you can view the evaluation result and diagnosis suggestions after the training job is complete.
- Only validation sets of the image type are supported.
- You can add the evaluation code only when the training script uses one of the following frequently-used frameworks:
- TF-1.13.1-python3.6
- TF-2.1.0-python3.6
- PyTorch-1.4.0-python3.6
This section describes how to use the evaluation code in a training job. Adapting the training code involves three steps: Adding the Output Path, Copying the Dataset to the Local Host, and Mapping the Dataset Path to OBS.
Adding the Output Path
Adding the output path is straightforward: add a parameter to the code for the path that stores the evaluation result file. This parameter is named train_url and corresponds to the training output path on the console. Pass train_url to the analysis function through its save_path argument. The sample code is as follows:
```python
FLAGS = tf.app.flags.FLAGS
tf.app.flags.DEFINE_string('model_url', '', 'path to saved model')
tf.app.flags.DEFINE_string('data_url', '', 'path to input dataset')
tf.app.flags.DEFINE_string('train_url', '', 'path to output files')
tf.app.flags.DEFINE_string('adv_param_json',
                           '{"attack_method":"FGSM","eps":40}',
                           'params for adversarial attacks')
FLAGS(sys.argv, known_only=True)

...

# analyse
res = analyse(
    task_type=task_type,
    pred_list=pred_list,
    label_list=label_list,
    name_list=file_name_list,
    label_map_dict=label_dict,
    save_path=FLAGS.train_url)
```
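For reference, the following minimal sketch illustrates the structure of the inputs that the full example at the end of this section passes to analyse. The values are hypothetical and only show the expected shapes:

```python
import json

# Hypothetical values illustrating the inputs to analyse() for
# an image classification task (see the full example below):
task_type = 'image_classification'
pred_list = [[0.91, 0.06, 0.03], [0.10, 0.85, 0.05]]  # per-image softmax scores
label_list = [0, 1]                                   # ground-truth label indices
file_name_list = ['obs://bucket/daisy/001.jpg',       # hypothetical image paths
                  'obs://bucket/rose/002.jpg']
label_dict = json.dumps({0: 'daisy', 1: 'rose'})      # index-to-name mapping
```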
Copying the Dataset to the Local Host
Copy the dataset to the local host before operating on it. This prevents the OBS connection from being interrupted by long-running access.
There are two methods for copying datasets. The recommended method is to use the OBS path.
- OBS path (recommended)
Call the copy_parallel API of MoXing to copy the dataset from the corresponding OBS path to the local host.
- Dataset in ModelArts data management (manifest file format)
Call the copy_manifest API of MoXing to copy the manifest file to the local host and obtain the path of the new manifest file. Then, use the SDK to parse the new manifest file.
ModelArts data management is being upgraded and is invisible to users who have not used data management. It is recommended that new users store their training data in OBS buckets.
```python
if data_path.startswith('obs://'):
    if '.manifest' in data_path:
        new_manifest_path, _ = mox.file.copy_manifest(data_path, '/cache/data/')
        data_path = new_manifest_path
    else:
        mox.file.copy_parallel(data_path, '/cache/data/')
        data_path = '/cache/data/'
    print('------------- download dataset success ------------')
```
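After copy_manifest returns the path of the new manifest file, you can parse it with the SDK as mentioned above. A minimal sketch, following the get_dataset() logic in the full example at the end of this section (data_path is the new manifest path from the snippet above):

```python
from deep_moxing.framework.manifest_api.manifest_api import get_sample_list

# Each manifest entry is (image_path, label_list); entries without labels
# are skipped, mirroring get_dataset() in the full example below.
manifest, _ = get_sample_list(
    manifest_path=data_path, task_type='image_classification')
for img_path, labels in manifest:
    if len(labels) != 0:
        print(img_path, labels[0])
```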
Mapping the Dataset Path to OBS
The JSON body of the evaluation result must contain the actual paths of the image files, that is, their OBS paths. Therefore, after analysis and evaluation are performed on the local host, map the original local dataset paths back to the OBS paths and pass the new list to the analysis API.
If an OBS path is used as the data_url input, you only need to replace the local path prefix in each file name string.
```python
if FLAGS.data_url.startswith('obs://'):
    for idx, item in enumerate(file_name_list):
        file_name_list[idx] = item.replace(data_path, FLAGS.data_url)
```
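To make the effect of the replacement concrete, here is a self-contained sketch with hypothetical paths showing a local file name being mapped back to its OBS origin:

```python
# Hypothetical illustration of the path mapping after local evaluation:
data_path = '/cache/data/'                    # local copy of the dataset
data_url = 'obs://my-bucket/flower-photos/'   # hypothetical OBS input path
file_name_list = ['/cache/data/daisy/001.jpg']

file_name_list = [p.replace(data_path, data_url) for p in file_name_list]
print(file_name_list)  # ['obs://my-bucket/flower-photos/daisy/001.jpg']
```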
If a manifest file is used, parse the original manifest file again to obtain the file list, and then pass that list to the analysis API.
```python
if FLAGS.data_url.startswith('obs://'):
    if 'manifest' in FLAGS.data_url:
        file_name_list = []
        manifest, _ = get_sample_list(
            manifest_path=FLAGS.data_url, task_type='image_classification')
        for item in manifest:
            if len(item[1]) != 0:
                file_name_list.append(item[0])
```
The following example code for image classification can be used as the training script of a training job:
```python
import json
import logging
import os
import sys
import tempfile

import h5py
import numpy as np
from PIL import Image

import moxing as mox
import tensorflow as tf
from deep_moxing.framework.manifest_api.manifest_api import get_sample_list
from deep_moxing.model_analysis.api import analyse, tmp_save
from deep_moxing.model_analysis.common.constant import TMP_FILE_NAME

logging.basicConfig(level=logging.DEBUG)

FLAGS = tf.app.flags.FLAGS
tf.app.flags.DEFINE_string('model_url', '', 'path to saved model')
tf.app.flags.DEFINE_string('data_url', '', 'path to input dataset')
tf.app.flags.DEFINE_string('train_url', '', 'path to output files')
tf.app.flags.DEFINE_string('adv_param_json',
                           '{"attack_method":"FGSM","eps":40}',
                           'params for adversarial attacks')
FLAGS(sys.argv, known_only=True)


def _preprocess(data_path):
    # Load an image as a 1 x H x W x 3 float32 array.
    img = Image.open(data_path)
    img = img.convert('RGB')
    img = np.asarray(img, dtype=np.float32)
    img = img[np.newaxis, :, :, :]
    return img


def softmax(x):
    x = np.array(x)
    orig_shape = x.shape
    if len(x.shape) > 1:
        # Matrix
        x = np.apply_along_axis(lambda x: np.exp(x - np.max(x)), 1, x)
        denominator = np.apply_along_axis(lambda x: 1.0 / np.sum(x), 1, x)
        if len(denominator.shape) == 1:
            denominator = denominator.reshape((denominator.shape[0], 1))
        x = x * denominator
    else:
        # Vector
        x_max = np.max(x)
        x = x - x_max
        numerator = np.exp(x)
        denominator = 1.0 / np.sum(numerator)
        x = numerator.dot(denominator)
    assert x.shape == orig_shape
    return x


def get_dataset(data_path, label_map_dict):
    # Build the image path list and label index list, from either a
    # manifest file or a directory tree of per-class subfolders.
    label_list = []
    img_name_list = []
    if 'manifest' in data_path:
        manifest, _ = get_sample_list(
            manifest_path=data_path, task_type='image_classification')
        for item in manifest:
            if len(item[1]) != 0:
                label_list.append(label_map_dict.get(item[1][0]))
                img_name_list.append(item[0])
            else:
                continue
    else:
        label_name_list = os.listdir(data_path)
        label_dict = {}
        for idx, item in enumerate(label_name_list):
            label_dict[str(idx)] = item
            sub_img_list = os.listdir(os.path.join(data_path, item))
            img_name_list += [
                os.path.join(data_path, item, img_name)
                for img_name in sub_img_list
            ]
            label_list += [label_map_dict.get(item)] * len(sub_img_list)
    return img_name_list, label_list


def deal_ckpt_and_data_with_obs():
    # Copy the model and the dataset from OBS to the local cache.
    pb_dir = FLAGS.model_url
    data_path = FLAGS.data_url
    if pb_dir.startswith('obs://'):
        mox.file.copy_parallel(pb_dir, '/cache/ckpt/')
        pb_dir = '/cache/ckpt'
        print('------------- download success ------------')
    if data_path.startswith('obs://'):
        if '.manifest' in data_path:
            new_manifest_path, _ = mox.file.copy_manifest(data_path, '/cache/data/')
            data_path = new_manifest_path
        else:
            mox.file.copy_parallel(data_path, '/cache/data/')
            data_path = '/cache/data/'
        print('------------- download dataset success ------------')
    assert os.path.isdir(pb_dir), 'Error, pb_dir must be a directory'
    return pb_dir, data_path


def evaluation():
    pb_dir, data_path = deal_ckpt_and_data_with_obs()
    # Read the label list from the model's index file (HDF5 or JSON).
    index_file = os.path.join(pb_dir, 'index')
    try:
        label_file = h5py.File(index_file, 'r')
        label_array = label_file['labels_list'][:].tolist()
        label_array = [item.decode('utf-8') for item in label_array]
    except Exception as e:
        logging.warning(e)
        logging.warning('index file is not a h5 file, try json.')
        with open(index_file, 'r') as load_f:
            label_file = json.load(load_f)
        label_array = label_file['labels_list'][:]
    label_map_dict = {}
    label_dict = {}
    for idx, item in enumerate(label_array):
        label_map_dict[item] = idx
        label_dict[idx] = item
    print(label_map_dict)
    print(label_dict)

    data_file_list, label_list = get_dataset(data_path, label_map_dict)

    assert len(label_list) > 0, 'missing valid data'
    assert None not in label_list, 'dataset and model not match'

    pred_list = []
    file_name_list = []
    img_list = []
    for img_path in data_file_list:
        img = _preprocess(img_path)
        img_list.append(img)
        file_name_list.append(img_path)

    # Load the SavedModel and run inference on every image.
    config = tf.ConfigProto()
    config.gpu_options.allow_growth = True
    config.gpu_options.visible_device_list = '0'
    with tf.Session(graph=tf.Graph(), config=config) as sess:
        meta_graph_def = tf.saved_model.loader.load(
            sess, [tf.saved_model.tag_constants.SERVING], pb_dir)
        signature = meta_graph_def.signature_def
        signature_key = 'predict_object'
        input_key = 'images'
        output_key = 'logits'
        x_tensor_name = signature[signature_key].inputs[input_key].name
        y_tensor_name = signature[signature_key].outputs[output_key].name
        x = sess.graph.get_tensor_by_name(x_tensor_name)
        y = sess.graph.get_tensor_by_name(y_tensor_name)
        for img in img_list:
            pred_output = sess.run([y], {x: img})
            pred_output = softmax(pred_output[0])
            pred_list.append(pred_output[0].tolist())

    label_dict = json.dumps(label_dict)
    task_type = 'image_classification'

    # Map the local dataset paths back to the OBS paths.
    if FLAGS.data_url.startswith('obs://'):
        if 'manifest' in FLAGS.data_url:
            file_name_list = []
            manifest, _ = get_sample_list(
                manifest_path=FLAGS.data_url, task_type='image_classification')
            for item in manifest:
                if len(item[1]) != 0:
                    file_name_list.append(item[0])
        for idx, item in enumerate(file_name_list):
            file_name_list[idx] = item.replace(data_path, FLAGS.data_url)

    # analyse
    res = analyse(
        task_type=task_type,
        pred_list=pred_list,
        label_list=label_list,
        name_list=file_name_list,
        label_map_dict=label_dict,
        save_path=FLAGS.train_url)


if __name__ == "__main__":
    evaluation()
```
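When this script is used as the boot file of a training job, the flags are passed on the command line. A minimal sketch of the invocation, with hypothetical bucket names and script file name:

```python
# Hypothetical invocation; on ModelArts the console supplies these flags
# from the model path, data path, and training output path of the job:
import subprocess

subprocess.run([
    'python', 'evaluation.py',
    '--model_url=obs://my-bucket/model/',
    '--data_url=obs://my-bucket/dataset/',
    '--train_url=obs://my-bucket/output/',
], check=True)
```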