Updated on 2024-11-21 GMT+08:00

Prerequisites

This practice describes how to run the basic Caffe classification example (https://github.com/BVLC/caffe/blob/master/examples/00-classification.ipynb) on CCE.

Preparing Data in OBS

Create an OBS bucket and confirm that the following folders have been created and the files uploaded to the specified locations (the OBS Browser tool is required).

The required in-bucket paths/file names are listed below; each file can be downloaded from the corresponding path in the GitHub project (a sketch of uploading the prepared files with the OBS Python SDK follows the file list).

  1. models/bvlc_reference_caffenet/bvlc_reference_caffenet.caffemodel

    https://github.com/BVLC/caffe/tree/master/models/bvlc_reference_caffenet

  2. models/bvlc_reference_caffenet/deploy.prototxt

    https://github.com/BVLC/caffe/tree/master/models/bvlc_reference_caffenet

  3. python/caffe/imagenet/ilsvrc_2012_mean.npy

    https://github.com/BVLC/caffe/tree/master/python/caffe/imagenet

  4. outputimg/

    Create an empty folder named outputimg to store the output files.

  5. examples/images/cat.jpg

    https://github.com/BVLC/caffe/blob/master/examples/00-classification.ipynb

    Save the cat image shown in the linked notebook as cat.jpg.

  6. data/ilsvrc12/*

    https://github.com/BVLC/caffe/tree/master/data/ilsvrc12

    Download and run the get_ilsvrc_aux.sh script. It downloads an archive and extracts it; after it finishes, upload all the extracted files to this directory. (A Python sketch of this download-and-extract step is given after the file list.)

  7. caffeEx00.py
    # set up Python environment: numpy for numerical routines, and matplotlib for plotting
    import numpy as np
    import matplotlib as mpl
    mpl.use('Agg')
    import matplotlib.pyplot as plt
    # display plots in this notebook
    #%matplotlib inline
    
    # set display defaults
    plt.rcParams['figure.figsize'] = (10, 10)        # large images
    plt.rcParams['image.interpolation'] = 'nearest'  # don't interpolate: show square pixels
    plt.rcParams['image.cmap'] = 'gray'  # use grayscale output rather than a (potentially misleading) color heatmap
    
    # The caffe module needs to be on the Python path;
    #  we'll add it here explicitly.
    import sys
    caffe_root = '/home/'  # root path where the bucket contents are mounted in the container; change this if your mount path differs
    sys.path.insert(0, caffe_root + 'python')
    
    import caffe
    # If you get "No module named _caffe", either you have not built pycaffe or you have the wrong path.
    
    import os
    #if os.path.isfile(caffe_root + 'models/bvlc_reference_caffenet/bvlc_reference_caffenet.caffemodel'):
    #    print 'CaffeNet found.'
    #else:
    #    print 'Downloading pre-trained CaffeNet model...'
    #    !../scripts/download_model_binary.py ../models/bvlc_reference_caffenet
    	
    caffe.set_mode_cpu()
    
    model_def = caffe_root + 'models/bvlc_reference_caffenet/deploy.prototxt'
    model_weights = caffe_root + 'models/bvlc_reference_caffenet/bvlc_reference_caffenet.caffemodel'
    
    net = caffe.Net(model_def,      # defines the structure of the model
                    model_weights,  # contains the trained weights
                    caffe.TEST)     # use test mode (e.g., don't perform dropout)
    
    # load the mean ImageNet image (as distributed with Caffe) for subtraction
    mu = np.load(caffe_root + 'python/caffe/imagenet/ilsvrc_2012_mean.npy')
    mu = mu.mean(1).mean(1)  # average over pixels to obtain the mean (BGR) pixel values
    print 'mean-subtracted values:', zip('BGR', mu)
    
    # create transformer for the input called 'data'
    transformer = caffe.io.Transformer({'data': net.blobs['data'].data.shape})
    
    transformer.set_transpose('data', (2,0,1))  # move image channels to outermost dimension
    transformer.set_mean('data', mu)            # subtract the dataset-mean value in each channel
    transformer.set_raw_scale('data', 255)      # rescale from [0, 1] to [0, 255]
    transformer.set_channel_swap('data', (2,1,0))  # swap channels from RGB to BGR
    
    # set the size of the input (we can skip this if we're happy
    #  with the default; we can also change it later, e.g., for different batch sizes)
    net.blobs['data'].reshape(50,        # batch size
                              3,         # 3-channel (BGR) images
                              227, 227)  # image size is 227x227
    						
    image = caffe.io.load_image(caffe_root + 'examples/images/cat.jpg')
    transformed_image = transformer.preprocess('data', image)
    plt.imshow(image)
    plt.savefig(caffe_root + 'outputimg/img1.png')
    
    # copy the image data into the memory allocated for the net
    net.blobs['data'].data[...] = transformed_image
    
    ### perform classification
    output = net.forward()
    
    output_prob = output['prob'][0]  # the output probability vector for the first image in the batch
    
    print 'predicted class is:', output_prob.argmax()
    
    # load ImageNet labels
    labels_file = caffe_root + 'data/ilsvrc12/synset_words.txt'
    #if not os.path.exists(labels_file):
    #    !../data/ilsvrc12/get_ilsvrc_aux.sh
    
    labels = np.loadtxt(labels_file, str, delimiter='\t')
    
    print 'output label:', labels[output_prob.argmax()]
    
    # sort top five predictions from softmax output
    top_inds = output_prob.argsort()[::-1][:5]  # reverse sort and take five largest items
    
    print 'probabilities and labels:'
    print zip(output_prob[top_inds], labels[top_inds])  # print the (probability, label) pairs; a bare zip() would be discarded when run as a script
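
Before the files in 6 can be uploaded, the label data has to be fetched locally. The following is a minimal Python 3 sketch (run on a local machine, not in the container) of what get_ilsvrc_aux.sh does; the archive URL below is the one the script uses at the time of writing, so fall back to running the script itself if it has changed.

    # fetch_ilsvrc12_aux.py - rough Python equivalent of data/ilsvrc12/get_ilsvrc_aux.sh
    import os
    import tarfile
    import urllib.request

    URL = 'http://dl.caffe.berkeleyvision.org/caffe_ilsvrc12.tar.gz'  # URL used by the script
    DEST = 'data/ilsvrc12'  # local staging folder mirroring the bucket layout

    os.makedirs(DEST, exist_ok=True)
    archive = os.path.join(DEST, 'caffe_ilsvrc12.tar.gz')

    print('Downloading %s ...' % URL)
    urllib.request.urlretrieve(URL, archive)

    print('Extracting to %s ...' % DEST)
    with tarfile.open(archive) as tar:
        tar.extractall(DEST)  # yields synset_words.txt and the other auxiliary files

    os.remove(archive)
    print('Done. Upload everything under %s to data/ilsvrc12/ in the bucket.' % DEST)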
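
If scripting is preferred over OBS Browser, the prepared files can also be uploaded with the OBS Python SDK (esdk-obs-python). The snippet below is only a sketch: the access keys, endpoint, bucket name, and local paths are placeholders to replace with your own values, and creating outputimg/ as an empty object ending in '/' is the usual OBS convention for an empty folder.

    # upload_to_obs.py - sketch of uploading the prepared files with esdk-obs-python
    from obs import ObsClient

    client = ObsClient(
        access_key_id='<your-ak>',                        # placeholder credentials
        secret_access_key='<your-sk>',
        server='https://obs.<region>.myhuaweicloud.com')  # placeholder endpoint

    bucket = '<your-bucket>'  # the bucket created for this practice

    # local file -> object key in the bucket (keys follow the file list above)
    files = {
        'bvlc_reference_caffenet.caffemodel': 'models/bvlc_reference_caffenet/bvlc_reference_caffenet.caffemodel',
        'deploy.prototxt': 'models/bvlc_reference_caffenet/deploy.prototxt',
        'ilsvrc_2012_mean.npy': 'python/caffe/imagenet/ilsvrc_2012_mean.npy',
        'cat.jpg': 'examples/images/cat.jpg',
        'caffeEx00.py': 'caffeEx00.py',
        # add the files extracted into data/ilsvrc12/ in the same way
    }

    for local_path, object_key in files.items():
        resp = client.putFile(bucket, object_key, local_path)
        print('%s -> %s (status %s)' % (local_path, object_key, resp.status))

    client.putContent(bucket, 'outputimg/', '')  # create the empty output "folder"

    client.close()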

Related Documents