Caffe study notes: CIFAR-10 on Ubuntu 16.04 with a GTX 650 Ti Boost (1 GB), part 01
This post draws on the material below; thanks to the original authors!
http://www.cnblogs.com/alexcai/p/5468164.html
http://blog.csdn.net/garfielder007/article/details/51480844
Part 1: Background on the CIFAR dataset
The CIFAR-10 dataset
The CIFAR-10 dataset consists of 60000 32x32 colour images in 10 classes, with 6000 images per class. There are 50000 training images and 10000 test images.
The dataset is divided into five training batches and one test batch, each with 10000 images. The test batch contains exactly 1000 randomly-selected images from each class. The training batches contain the remaining images in random order, but some training batches may contain more images from one class than another. Between them, the training batches contain exactly 5000 images from each class.
Here are the 10 classes in the dataset (the original page also shows 10 random example images per class): airplane, automobile, bird, cat, deer, dog, frog, horse, ship, truck.
The classes are completely mutually exclusive. There is no overlap between automobiles and trucks. "Automobile" includes sedans, SUVs, things of that sort. "Truck" includes only big trucks. Neither includes pickup trucks.
Download
If you're going to use this dataset, please cite the tech report at the bottom of this page.
Version | Size | md5sum |
CIFAR-10 python version | 163 MB | c58f30108f718f92721af3b95e74349a |
CIFAR-10 Matlab version | 175 MB | 70270af85842c9e89bb428ec9976c926 |
CIFAR-10 binary version (suitable for C programs) | 162 MB | c32a1d4ab5d03f1284b67883e8d87530 |
Baseline results
You can find some baseline replicable results on this dataset on the project page for cuda-convnet. These results were obtained with a convolutional neural network. Briefly, they are 18% test error without data augmentation and 11% with. Additionally, Jasper Snoek has a new paper in which he used Bayesian hyperparameter optimization to find nice settings of the weight decay and other hyperparameters, which allowed him to obtain a test error rate of 15% (without data augmentation) using the architecture of the net that got 18%.
Other results
Rodrigo Benenson has been kind enough to collect results on CIFAR-10/100 and other datasets on his website.
Dataset layout
Python / Matlab versions
I will describe the layout of the Python version of the dataset. The layout of the Matlab version is identical.
The archive contains the files data_batch_1, data_batch_2, ..., data_batch_5, as well as test_batch. Each of these files is a Python "pickled" object produced with cPickle. Here is a Python routine which will open such a file and return a dictionary:
def unpickle(file):
    import cPickle
    fo = open(file, 'rb')
    dict = cPickle.load(fo)
    fo.close()
    return dict
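The routine above is Python 2 (cPickle). On Python 3, a roughly equivalent loader is sketched below; note that with encoding='bytes' the dictionary keys come back as bytes (e.g. b'data'):

import pickle

def unpickle_py3(file):
    # Python 3 variant: cPickle no longer exists, and the batches were
    # pickled under Python 2, so decode with encoding='bytes'.
    with open(file, 'rb') as fo:
        d = pickle.load(fo, encoding='bytes')
    return d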
Loaded in this way, each of the batch files contains a dictionary with the following elements:
- data -- a 10000x3072 numpy array of uint8s. Each row of the array stores a 32x32 colour image. The first 1024 entries contain the red channel values, the next 1024 the green, and the final 1024 the blue. The image is stored in row-major order, so that the first 32 entries of the array are the red channel values of the first row of the image.
- labels -- a list of 10000 numbers in the range 0-9. The number at index i indicates the label of the ith image in the array data.
The dataset contains another file, called batches.meta. It too contains a Python dictionary object. It has the following entries:
- label_names -- a 10-element list which gives meaningful names to the numeric labels in the labels array described above. For example, label_names[0] == "airplane",label_names[1] == "automobile", etc.
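To make the layout concrete, here is a minimal sketch (assuming the archive has been extracted to cifar-10-batches-py/ and using the unpickle helper above) that reassembles the first row of data into a 32x32 RGB image:

import numpy as np

batch = unpickle('cifar-10-batches-py/data_batch_1')
row = batch['data'][0]                                # 3072 uint8 values: 1024 R, 1024 G, 1024 B
img = row.reshape(3, 32, 32).transpose(1, 2, 0)       # -> 32 x 32 x 3, row-major within each channel
meta = unpickle('cifar-10-batches-py/batches.meta')
label_name = meta['label_names'][batch['labels'][0]]  # numeric label -> class name
print(img.shape, label_name)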
Binary version
The binary version contains the files data_batch_1.bin, data_batch_2.bin, ..., data_batch_5.bin, as well as test_batch.bin. Each of these files is formatted as follows:
<1 x label><3072 x pixel>
...
<1 x label><3072 x pixel>
In other words, the first byte is the label of the first image, which is a number in the range 0-9. The next 3072 bytes are the values of the pixels of the image. The first 1024 bytes are the red channel values, the next 1024 the green, and the final 1024 the blue. The values are stored in row-major order, so the first 32 bytes are the red channel values of the first row of the image.
Each file contains 10000 such 3073-byte "rows" of images, although there is nothing delimiting the rows. Therefore each file should be exactly 30730000 bytes long.
There is another file, called batches.meta.txt. This is an ASCII file that maps numeric labels in the range 0-9 to meaningful class names. It is merely a list of the 10 class names, one per row. The class name on row i corresponds to numeric label i.
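For the binary layout, a minimal numpy sketch (assuming the archive has been extracted to cifar-10-batches-bin/):

import numpy as np

record_bytes = 1 + 32 * 32 * 3          # 1 label byte + 3072 pixel bytes = 3073
raw = np.fromfile('cifar-10-batches-bin/data_batch_1.bin', dtype=np.uint8)
raw = raw.reshape(-1, record_bytes)     # 10000 rows of 3073 bytes each
labels = raw[:, 0]                      # first byte of each record is the label (0-9)
images = raw[:, 1:].reshape(-1, 3, 32, 32).transpose(0, 2, 3, 1)  # N x 32 x 32 x 3
print(labels[:10], images.shape)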
The CIFAR-100 dataset
This dataset is just like the CIFAR-10, except it has 100 classes containing 600 images each. There are 500 training images and 100 testing images per class. The 100 classes in the CIFAR-100 are grouped into 20 superclasses. Each image comes with a "fine" label (the class to which it belongs) and a "coarse" label (the superclass to which it belongs).
Here is the list of classes in the CIFAR-100:
Superclass | Classes |
aquatic mammals | beaver, dolphin, otter, seal, whale |
fish | aquarium fish, flatfish, ray, shark, trout |
flowers | orchids, poppies, roses, sunflowers, tulips |
food containers | bottles, bowls, cans, cups, plates |
fruit and vegetables | apples, mushrooms, oranges, pears, sweet peppers |
household electrical devices | clock, computer keyboard, lamp, telephone, television |
household furniture | bed, chair, couch, table, wardrobe |
insects | bee, beetle, butterfly, caterpillar, cockroach |
large carnivores | bear, leopard, lion, tiger, wolf |
large man-made outdoor things | bridge, castle, house, road, skyscraper |
large natural outdoor scenes | cloud, forest, mountain, plain, sea |
large omnivores and herbivores | camel, cattle, chimpanzee, elephant, kangaroo |
medium-sized mammals | fox, porcupine, possum, raccoon, skunk |
non-insect invertebrates | crab, lobster, snail, spider, worm |
people | baby, boy, girl, man, woman |
reptiles | crocodile, dinosaur, lizard, snake, turtle |
small mammals | hamster, mouse, rabbit, shrew, squirrel |
trees | maple, oak, palm, pine, willow |
vehicles 1 | bicycle, bus, motorcycle, pickup truck, train |
vehicles 2 | lawn-mower, rocket, streetcar, tank, tractor |
Yes, I know mushrooms aren't really fruit or vegetables and bears aren't really carnivores.
Download
Version | Size | md5sum |
CIFAR-100 python version | 161 MB | eb9058c3a382ffc7106e4002c42a8d85 |
CIFAR-100 Matlab version | 175 MB | 6a4bfa1dcd5c9453dda6bb54194911f4 |
CIFAR-100 binary version (suitable for C programs) | 161 MB | 03b5dce01913d631647c71ecec9e9cb8 |
Dataset layout
Python / Matlab versions
The python and Matlab versions are identical in layout to the CIFAR-10, so I won't waste space describing them here.
Binary version
The binary version of the CIFAR-100 is just like the binary version of the CIFAR-10, except that each image has two label bytes (coarse and fine) and 3072 pixel bytes, so the binary files look like this:
<1 x coarse label><1 x fine label><3072 x pixel>
...
<1 x coarse label><1 x fine label><3072 x pixel>
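The same numpy approach works here with a 3074-byte record; a sketch, assuming the extracted cifar-100-binary/ directory with its train.bin file:

import numpy as np

record_bytes = 2 + 32 * 32 * 3          # coarse label + fine label + 3072 pixel bytes
raw = np.fromfile('cifar-100-binary/train.bin', dtype=np.uint8).reshape(-1, record_bytes)
coarse, fine = raw[:, 0], raw[:, 1]     # superclass and class labels
images = raw[:, 2:].reshape(-1, 3, 32, 32).transpose(0, 2, 3, 1)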
Indices into the original 80 million tiny images dataset
Sivan Sabato was kind enough to provide this file, which maps CIFAR-100 images to images in the 80 million tiny images dataset. Sivan writes:
The file has 60000 rows, each row contains a single index into the tiny db,
where the first image in the tiny db is indexed "1". "0" stands for an image that is not from the tiny db.
The first 50000 lines correspond to the training set, and the last 10000 lines correspond
to the test set.
Reference
This tech report (Chapter 3) describes the dataset and the methodology followed when collecting it in much greater detail. Please cite it if you intend to use this dataset.
- Learning Multiple Layers of Features from Tiny Images, Alex Krizhevsky, 2009.
Part 2: Background on Caffe
Overview
Caffe is a friendly, easy-to-learn open-source deep learning framework, used mainly for image-related tasks, and it supports CNNs and several other kinds of deep networks.
With Caffe, a developer can quickly put together a simple network for tasks such as classification or localization, and researchers can modify its source code to implement their own algorithms.
The main goal of this post is to cover the basics of using Caffe, so that an ordinary engineer can train a simple model of their own.
It covers the following: running the bundled Caffe example to train on the CIFAR dataset, training your own data with a network someone else has defined, and fine-tuning a pre-trained model on your own data.
Background
Deep learning is a branch of machine learning whose goal is to solve, by learning from data, problems that ordinary programming cannot, such as image recognition and text recognition.
The "learning" in machine learning means feeding the program experience data and, over many iterations, gradually improving the algorithm's parameters until a "model" is obtained; new data fed into the model then yields the desired result.
In image classification, for example, the experience data are images and their labels; once the model is trained, running a new image through it tells you which class it belongs to.
This is only a brief sketch; I recommend first learning the basics of machine learning and convolutional neural networks.
Installation
There are plenty of tutorials online for this part, so I will skip it. I installed Caffe from a Docker image (images with Caffe pre-installed are easy to find online). The upside is that you save the time of setting up the environment; the downside is that editing configuration files later is more awkward, so in the long run I suggest installing directly on your machine. After the build, make runtest should finish with output like the following:
make runtest
[ OK ] AdaDeltaSolverTest/2.TestLeastSquaresUpdateWithEverythingAccumShare (6 ms)
[ RUN ] AdaDeltaSolverTest/2.TestAdaDeltaLeastSquaresUpdateWithEverythingShare
[ OK ] AdaDeltaSolverTest/2.TestAdaDeltaLeastSquaresUpdateWithEverythingShare (61 ms)
[ RUN ] AdaDeltaSolverTest/2.TestAdaDeltaLeastSquaresUpdateWithHalfMomentum
[ OK ] AdaDeltaSolverTest/2.TestAdaDeltaLeastSquaresUpdateWithHalfMomentum (21 ms)
[ RUN ] AdaDeltaSolverTest/2.TestAdaDeltaLeastSquaresUpdateWithWeightDecay
[ OK ] AdaDeltaSolverTest/2.TestAdaDeltaLeastSquaresUpdateWithWeightDecay (10 ms)
[----------] 11 tests from AdaDeltaSolverTest/2 (324 ms total)
[----------] Global test environment tear-down
[==========] 2101 tests from 277 test cases ran. (435790 ms total)
[ PASSED ] 2101 tests.
[100%] Built target runtest
Training on the CIFAR dataset
CIFAR is a common image-classification dataset; CIFAR-10 contains 60,000 images in 10 classes, and Caffe ships with a network for classifying it.
The CIFAR network definitions live in the examples/cifar10 directory, and training is straightforward.
(All commands below are run from the Caffe root directory; the same applies throughout.)
1. Get the training data
cd $CAFFE_ROOT
./data/cifar10/get_cifar10.sh
./examples/cifar10/create_cifar10.sh
2. Start training
cd $CAFFE_ROOT
./examples/cifar10/train_quick.sh
3. When training finishes we get:
cifar10_quick_iter_4000.caffemodel.h5
cifar10_quick_iter_4000.solverstate.h5
At this point we have a trained model, which we will use for classification below.
4. Using the model to classify new data
First, let's just try classification with someone else's model (by default this uses the ImageNet reference model):
python python/classify.py examples/images/cat.jpg foo
This produces an output file named foo.npy that records a "score" for every class. It can be read back like this:
import numpy as np
f = open("foo.npy", "rb")
print np.load(f)
Next, let's classify with our own model:
python python/classify.py --model_def examples/cifar10/cifar10_quick.prototxt --pretrained_model examples/cifar10/cifar10_quick_iter_5000.caffemodel.h5 --center_only examples/images/cat.jpg foo
This command classifies examples/images/cat.jpg using the cifar10_quick.prototxt network together with the cifar10_quick_iter_5000.caffemodel.h5 weights.
The default classify script does not print results directly; it writes them to the foo file, which is not very intuitive. I found a modified version online that adds a few options and prints the highest-probability classes.
It replaces python/classify.py; download link: http://download.csdn.net/detail/caisenchuan/9513196 (this requires CSDN points, so not recommended).
In practice it just changes the corresponding section to:
# Classify.
start = time.time()
predictions = classifier.predict(inputs, not args.center_only)
print("Done in %.2f s." % (time.time() - start))
print("Predictions : %s" % predictions)

# print result, add by caisenchuan
if args.print_results:
    scores = predictions.flatten()
    with open(args.labels_file) as f:
        labels_df = pd.DataFrame([
            {
                'synset_id': l.strip().split(' ')[0],
                'name': ' '.join(l.strip().split(' ')[1:]).split(',')[0]
            }
            for l in f.readlines()
        ])
    labels = labels_df.sort('synset_id')['name'].values

    indices = (-scores).argsort()[:5]
    ps = labels[indices]

    meta = [
        (p, '%.5f' % scores[i])
        for i, p in zip(indices, ps)
    ]

    print meta

# Save
print("Saving results into %s" % args.output_file)
np.save(args.output_file, predictions)
This script adds two options: you can specify labels_file and have the classification results printed directly:
python python/classify.py --print_results --model_def examples/cifar10/cifar10_quick.prototxt --pretrained_model examples/cifar10/cifar10_quick_iter_5000.caffemodel.h5 --labels_file data/cifar10/cifar10_words.txt --center_only examples/images/cat.jpg foo
Output:
Loading file: examples/images/cat.jpg
Classifying 1 inputs.
predict 3 inputs.
Done in 0.02 s.
Predictions : [[ 0.03903743 0.00722749 0.04582177 0.44352672 0.01203315 0.11832549
0.02335102 0.25013766 0.03541689 0.02512246]]
python/classify.py:176: FutureWarning: sort(columns=....) is deprecated, use sort_values(by=.....)
labels = labels_df.sort('synset_id')['name'].values
[('cat', '0.44353'), ('horse', '0.25014'), ('dog', '0.11833'), ('bird', '0.04582'), ('airplane', '0.03904')]
The line above lists the top classes in order together with their confidence scores.
Saving results into foo
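Note that --labels_file points at data/cifar10/cifar10_words.txt, which is not shown in this post; if you need to create it yourself, a minimal sketch that writes the two-column "id name" format the script parses (one class per line, in label order) is:

# Hypothetical helper: generate data/cifar10/cifar10_words.txt with one "id name" pair per line.
classes = ['airplane', 'automobile', 'bird', 'cat', 'deer',
           'dog', 'frog', 'horse', 'ship', 'truck']
with open('data/cifar10/cifar10_words.txt', 'w') as f:
    for i, name in enumerate(classes):
        f.write('%d %s\n' % (i, name))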
Tips
Finally, a summary of the files involved in training a network:
cifar10_quick_solver.prototxt: the solver configuration, which sets the number of iterations and similar options; training starts by pointing caffe train at this file.
cifar10_quick_train_test.prototxt: the training/testing network definition; its filename is referenced from the solver prototxt.
cifar10_quick_iter_4000.caffemodel.h5: the trained weights, used later for classification.
cifar10_quick_iter_4000.solverstate.h5: also produced by training; it stores the solver state so training can be resumed after an interruption.
cifar10_quick.prototxt: the deploy network used for classification.
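As a rough sketch of how the deploy prototxt and the trained weights fit together from Python (a minimal alternative to classify.py; mean subtraction is skipped here for simplicity, so the scores may differ slightly from the runs shown below):

import numpy as np
import caffe

caffe.set_mode_cpu()
# Load the deploy network plus the trained weights produced above.
net = caffe.Classifier('examples/cifar10/cifar10_quick.prototxt',
                       'examples/cifar10/cifar10_quick_iter_5000.caffemodel.h5',
                       image_dims=(32, 32), raw_scale=255.0,
                       channel_swap=(2, 1, 0))
img = caffe.io.load_image('examples/images/cat.jpg')
probs = net.predict([img], oversample=False)[0]   # one score per CIFAR-10 class
print('predicted label index:', np.argmax(probs))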
//--------------------------------------------------------------------------------------------------------------------------------------------------------
A note: along the way I hit the error below and finally fixed it with the following method. It does require recompiling Caffe.
On macOS Sierra (10.12.4), Caffe's Python code reports the error "Mean shape incompatible with input shape"
When loading a binaryproto mean file through Caffe's Python interface on macOS Sierra (10.12.4), the following error is reported:
Traceback (most recent call last):
  File "analysis_memnet.py", line 29, in <module>
    detector = caffe.Detector(model_def, pretrained_model, mean=means)
  File "/Users/Source/caffe/distribute/python/caffe/detector.py", line 46, in __init__
    self.transformer.set_mean(in_, mean)
  File "/Users/Source/caffe/distribute/python/caffe/io.py", line 259, in set_mean
    raise ValueError('Mean shape incompatible with input shape.')
ValueError: Mean shape incompatible with input shape.
The cause is that the mean file shipped with memnet is 256 x 256, while the deploy configuration expects 227 x 227, so the shape check in io.py raises an exception. Change the code in python/caffe/io.py from:
def set_mean(self, in_, mean):
    """
    Set the mean to subtract for centering the data.

    Parameters
    ----------
    in_ : which input to assign this mean.
    mean : mean ndarray (input dimensional or broadcastable)
    """
    self.__check_input(in_)
    ms = mean.shape
    if mean.ndim == 1:
        # broadcast channels
        if ms[0] != self.inputs[in_][1]:
            raise ValueError('Mean channels incompatible with input.')
        mean = mean[:, np.newaxis, np.newaxis]
    else:
        # elementwise mean
        if len(ms) == 2:
            ms = (1,) + ms
        if len(ms) != 3:
            raise ValueError('Mean shape invalid')
        if ms != self.inputs[in_][1:]:
            raise ValueError('Mean shape incompatible with input shape.')
    self.mean[in_] = mean
to:
def set_mean(self, in_, mean):
    """
    Set the mean to subtract for centering the data.

    Parameters
    ----------
    in_ : which input to assign this mean.
    mean : mean ndarray (input dimensional or broadcastable)
    """
    self.__check_input(in_)
    ms = mean.shape
    if mean.ndim == 1:
        # broadcast channels
        if ms[0] != self.inputs[in_][1]:
            raise ValueError('Mean channels incompatible with input.')
        mean = mean[:, np.newaxis, np.newaxis]
    else:
        # elementwise mean
        if len(ms) == 2:
            ms = (1,) + ms
        if len(ms) != 3:
            raise ValueError('Mean shape invalid')
        if ms != self.inputs[in_][1:]:
            in_shape = self.inputs[in_][1:]
            m_min, m_max = mean.min(), mean.max()
            normal_mean = (mean - m_min) / (m_max - m_min)
            mean = resize_image(normal_mean.transpose((1, 2, 0)),
                                in_shape[1:]).transpose((2, 0, 1)) * (m_max - m_min) + m_min
            # raise ValueError('Mean shape incompatible with input shape.')
    self.mean[in_] = mean
After making the change, recompile Caffe:
$ make clean
$ make
$ make pycaffe
$ make distribute
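An alternative that avoids patching and rebuilding Caffe is to resize the mean array in your own script before handing it to Caffe. A minimal sketch, assuming the mean has already been converted to a numpy array (a hypothetical mean.npy) of shape (3, 256, 256) and the network expects 227x227 inputs:

import numpy as np
import caffe

mean = np.load('mean.npy')                 # hypothetical path; shape (3, 256, 256)
in_h, in_w = 227, 227                      # input size expected by the deploy prototxt
mean_hwc = mean.transpose(1, 2, 0)         # caffe.io.resize_image expects H x W x C
mean_small = caffe.io.resize_image(mean_hwc, (in_h, in_w)).transpose(2, 0, 1)
# mean_small now has shape (3, 227, 227) and passes the shape check in set_mean().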
Appendix:
python/classify.py:
#!/usr/bin/env python
"""
classify.py is an out-of-the-box image classifer callable from the command line. By default it configures and runs the Caffe reference ImageNet model.
"""
import numpy as np
import os
import sys
import argparse
import glob
import time
import pandas as pd

from skimage.color import rgb2gray

import caffe


def main(argv):
    pycaffe_dir = os.path.dirname(__file__)

    parser = argparse.ArgumentParser()
    # Required arguments: input and output files.
    parser.add_argument(
        "input_file",
        help="Input image, directory, or npy."
    )
    parser.add_argument(
        "output_file",
        help="Output npy filename."
    )
    # Optional arguments.
    parser.add_argument(
        "--model_def",
        default=os.path.join(pycaffe_dir,
                "../models/bvlc_reference_caffenet/deploy.prototxt"),
        help="Model definition file."
    )
    parser.add_argument(
        "--pretrained_model",
        default=os.path.join(pycaffe_dir,
                "../models/bvlc_reference_caffenet/bvlc_reference_caffenet.caffemodel"),
        help="Trained model weights file."
    )
    parser.add_argument(
        "--gpu",
        action='store_true',
        help="Switch for gpu computation."
    )
    parser.add_argument(
        "--center_only",
        action='store_true',
        help="Switch for prediction from center crop alone instead of " +
             "averaging predictions across crops (default)."
    )
    parser.add_argument(
        "--images_dim",
        default='256,256',
        help="Canonical 'height,width' dimensions of input images."
    )
    parser.add_argument(
        "--mean_file",
        default=os.path.join(pycaffe_dir,
                             'caffe/imagenet/ilsvrc_2012_mean.npy'),
        help="Data set image mean of [Channels x Height x Width] dimensions " +
             "(numpy array). Set to '' for no mean subtraction."
    )
    parser.add_argument(
        "--input_scale",
        type=float,
        help="Multiply input features by this scale to finish preprocessing."
    )
    parser.add_argument(
        "--raw_scale",
        type=float,
        default=255.0,
        help="Multiply raw input by this scale before preprocessing."
    )
    parser.add_argument(
        "--channel_swap",
        default='2,1,0',
        help="Order to permute input channels. The default converts " +
             "RGB -> BGR since BGR is the Caffe default by way of OpenCV."
    )
    parser.add_argument(
        "--ext",
        default='jpg',
        help="Image file extension to take as input when a directory " +
             "is given as the input file."
    )
    # add by caisenchuan
    parser.add_argument(
        "--labels_file",
        default=os.path.join(pycaffe_dir,
                "../data/ilsvrc12/synset_words.txt"),
        help="Readable label definition file."
    )
    parser.add_argument(
        "--print_results",
        action='store_true',
        help="Write output text to stdout rather than serializing to a file."
    )
    parser.add_argument(
        "--force_grayscale",
        action='store_true',
        help="Converts RGB images down to single-channel grayscale versions," +
             "useful for single-channel networks like MNIST."
    )
    args = parser.parse_args()

    image_dims = [int(s) for s in args.images_dim.split(',')]

    mean, channel_swap = None, None
    # add by caisenchuan
    if args.force_grayscale:
        channel_swap = None
        mean = None
    else:
        if args.mean_file:
            mean = np.load(args.mean_file)
        if args.channel_swap:
            channel_swap = [int(s) for s in args.channel_swap.split(',')]

    if args.gpu:
        caffe.set_mode_gpu()
        print("GPU mode")
    else:
        caffe.set_mode_cpu()
        print("CPU mode")

    # Make classifier.
    classifier = caffe.Classifier(args.model_def, args.pretrained_model,
            image_dims=image_dims, mean=mean,
            input_scale=args.input_scale, raw_scale=args.raw_scale,
            channel_swap=channel_swap)

    # Load numpy array (.npy), directory glob (*.jpg), or image file.
    args.input_file = os.path.expanduser(args.input_file)
    if args.input_file.endswith('npy'):
        print("Loading file: %s" % args.input_file)
        inputs = np.load(args.input_file)
    elif os.path.isdir(args.input_file):
        print("Loading folder: %s" % args.input_file)
        inputs = [caffe.io.load_image(im_f)
                  for im_f in glob.glob(args.input_file + '/*.' + args.ext)]
    else:
        print("Loading file: %s" % args.input_file)
        inputs = [caffe.io.load_image(args.input_file)]

    if args.force_grayscale:
        inputs = [rgb2gray(input) for input in inputs]

    print("Classifying %d inputs." % len(inputs))

    # Classify.
    start = time.time()
    predictions = classifier.predict(inputs, not args.center_only)
    print("Done in %.2f s." % (time.time() - start))
    print("Predictions : %s" % predictions)

    # print result, add by caisenchuan
    if args.print_results:
        scores = predictions.flatten()
        with open(args.labels_file) as f:
            labels_df = pd.DataFrame([
                {
                    'synset_id': l.strip().split(' ')[0],
                    'name': ' '.join(l.strip().split(' ')[1:]).split(',')[0]
                }
                for l in f.readlines()
            ])
        labels = labels_df.sort('synset_id')['name'].values

        indices = (-scores).argsort()[:5]
        ps = labels[indices]

        meta = [
            (p, '%.5f' % scores[i])
            for i, p in zip(indices, ps)
        ]

        print meta

    # Save
    print("Saving results into %s" % args.output_file)
    np.save(args.output_file, predictions)


if __name__ == '__main__':
    main(sys.argv)
Part 3: Tests and results
1. Initial test
python python/classify.py --print_results --model_def examples/cifar10/cifar10_quick.prototxt --pretrained_model examples/cifar10/cifar10_quick_iter_5000.caffemodel.h5 --labels_file data/cifar10/cifar10_words.txt --center_only examples/images/cat.jpg foo
Result:
Loading file: examples/images/cat.jpg
Classifying 1 inputs.
Done in 0.05 s.
Predictions : [[ 0.03749448 0.0027075 0.02589573 0.21115269 0.04923972 0.48335615
0.1005311 0.01322108 0.06116258 0.01523905]]
0 , 0.0374945
1 , 0.0027075
2 , 0.0258957
3 , 0.211153
4 , 0.0492397
5 , 0.483356
6 , 0.100531
7 , 0.0132211
8 , 0.0611626
9 , 0.015239
labels_df =
name synset_id
0 airplane
1 automobile
2 bird
3 cat
4 deer
5 dog
6 frog
7 horse
8 ship
9 truck
Saving results into foo
2. Using the full model
Training script: train_full.sh; network definition:
cifar10_full.prototxt
Training produces:
cifar10_full_iter_60000.caffemodel.h5
Invocation:
python python/classify.py --print_results --model_def examples/cifar10/cifar10_full.prototxt --pretrained_model examples/cifar10/cifar10_full_iter_60000.caffemodel.h5 --labels_file data/cifar10/cifar10_words.txt --center_only examples/images/cat.jpg foo
Result:
Done in 0.02 s.
Predictions : [[ 0.06165585 0.00868315 0.02428157 0.15692885 0.17243297 0.4377735
0.05209383 0.0376116 0.01845259 0.03008615]]
0 , 0.0616559
1 , 0.00868315
2 , 0.0242816
3 , 0.156929
4 , 0.172433
5 , 0.437773
6 , 0.0520938
7 , 0.0376116
8 , 0.0184526
9 , 0.0300861
labels_df =
name synset_id
0 airplane
1 automobile
2 bird
3 cat
4 deer
5 dog
6 frog
7 horse
8 ship
9 truck
Saving results into foo
3. The command below is not right:
python python/classify.py --print_results --model_def examples/cifar10/cifar10_full.prototxt --pretrained_model examples/cifar10_full_sigmoid_iter_60000.caffemodel --labels_file data/cifar10/cifar10_words.txt --center_only examples/images/cat.jpg foo
/usr/local/lib/python2.7/dist-packages/skimage/transform/_warps.py:84: UserWarning: The default mode, 'constant', will be changed to 'reflect' in skimage 0.15.
warn("The default mode, 'constant', will be changed to 'reflect' in "
Loading file: examples/images/cat.jpg
Classifying 1 inputs.
Done in 0.02 s.
Predictions : [[ 1. 0. 0. 0. 0. 0. 0. 0. 0. 0.]]
0 , 1.0
1 , 0.0
2 , 0.0
3 , 0.0
4 , 0.0
5 , 0.0
6 , 0.0
7 , 0.0
8 , 0.0
9 , 0.0
labels_df =
name synset_id
0 airplane
1 automobile
2 bird
3 cat
4 deer
5 dog
6 frog
7 horse
8 ship
9 truck
Saving results into foo
At this point classify.py throws an error. I didn't feel like fighting it any further, so I went back to the original, unmodified classify.py:
python python/classify.py --model_def examples/cifar10/cifar10_full.prototxt --pretrained_model examples/cifar10/cifar10_full_iter_60000.caffemodel.h5 examples/images/cat.jpg foo
/usr/local/lib/python2.7/dist-packages/skimage/transform/_warps.py:84: UserWarning: The default mode, 'constant', will be changed to 'reflect' in skimage 0.15.
warn("The default mode, 'constant', will be changed to 'reflect' in "
Loading file: examples/images/cat.jpg
Classifying 1 inputs.
Done in 0.04 s.
Predictions : [[ 4.00000006e-01 0.00000000e+00 1.00000009e-01 1.00000001e-01
1.72153580e-39 1.00000001e-01 2.65188813e-26 2.99999952e-01
0.00000000e+00 0.00000000e+00]]
0 , 0.4
1 , 0.0
2 , 0.1
3 , 0.1
4 , 1.72154e-39
5 , 0.1
6 , 2.65189e-26
7 , 0.3
8 , 0.0
9 , 0.0
Saving results into foo
python python/classify.py --print_results --model_def examples/cifar10/cifar10_quick.prototxt --pretrained_model examples/cifar10/cifar10_quick_lr1_iter_10000.caffemodel.h5 --labels_file data/cifar10/cifar10_words.txt --center_only examples/images/cat.jpg foo
Loading file: examples/images/cat.jpg
Classifying 1 inputs.
Done in 0.01 s.
Predictions : [[ 6.70117297e-05 2.16639066e-07 6.03203371e-04 6.75522548e-04
2.93625082e-04 9.34589922e-01 2.98065459e-03 8.52398989e-06
5.51625788e-02 5.61880460e-03]]
0 , 6.70117e-05
1 , 2.16639e-07
2 , 0.000603203
3 , 0.000675523
4 , 0.000293625
5 , 0.93459
6 , 0.00298065
7 , 8.52399e-06
8 , 0.0551626
9 , 0.0056188
labels_df =
name synset_id
0 airplane
1 automobile
2 bird
3 cat
4 deer
5 dog
6 frog
7 horse
8 ship
9 truck
Saving results into foo
python python/classify.py --print_results --model_def examples/cifar10/cifar10_quick.prototxt --pretrained_model examples/cifar10/cifar10_quick_lr1_iter_10000.caffemodel.h5 --labels_file data/cifar10/cifar10_words.txt --center_only /home/sea/Downloads/animal/horse/0.jpeg foo
Loading file: /home/sea/Downloads/animal/horse/0.jpeg
Classifying 1 inputs.
Done in 0.02 s.
Predictions : [[ 4.54714977e-09 2.20171312e-08 3.23279848e-04 9.74610925e-01
7.34964502e-04 1.07501030e-04 2.41789930e-02 3.99016353e-05
4.27404666e-06 1.10957735e-07]]
0 , 4.54715e-09
1 , 2.20171e-08
2 , 0.00032328
3 , 0.974611
4 , 0.000734965
5 , 0.000107501
6 , 0.024179
7 , 3.99016e-05
8 , 4.27405e-06
9 , 1.10958e-07
labels_df =
name synset_id
0 airplane
1 automobile
2 bird
3 cat
4 deer
5 dog
6 frog
7 horse
8 ship
9 truck
Saving results into foo
Loading file: /home/sea/Downloads/animal/horse/3.jpg
Classifying 1 inputs.
Done in 0.01 s.
Predictions : [[ 3.89247731e-07 1.22837851e-09 4.36774513e-04 1.27305046e-01
8.69564056e-01 4.88617570e-06 1.12547104e-06 2.68773409e-03
3.65437597e-10 1.18015224e-11]]
0 , 3.89248e-07
1 , 1.22838e-09
2 , 0.000436775
3 , 0.127305
4 , 0.869564
5 , 4.88618e-06
6 , 1.12547e-06
7 , 0.00268773
8 , 3.65438e-10
9 , 1.18015e-11
labels_df =
name synset_id
0 airplane
1 automobile
2 bird
3 cat
4 deer
5 dog
6 frog
7 horse
8 ship
9 truck
Saving results into foo
Loading file: /home/sea/Downloads/animal/horse/4.jpg
Classifying 1 inputs.
Done in 0.02 s.
Predictions : [[ 4.30966445e-08 1.11385980e-07 2.01909515e-05 1.71621272e-04
1.72364298e-06 9.99796927e-01 1.49317827e-07 2.20107488e-10
9.33068713e-06 6.23963229e-13]]
0 , 4.30966e-08
1 , 1.11386e-07
2 , 2.0191e-05
3 , 0.000171621
4 , 1.72364e-06
5 , 0.999797
6 , 1.49318e-07
7 , 2.20107e-10
8 , 9.33069e-06
9 , 6.23963e-13
labels_df =
name synset_id
0 airplane
1 automobile
2 bird
3 cat
4 deer
5 dog
6 frog
7 horse
8 ship
9 truck
Saving results into foo
sea@sea-X550JK:~/caffe$
Resuming training:
Set the maximum number of iterations to 80000 and run:
caffe train \
--solver=examples/cifar10/cifar10_full_solver_lr1.prototxt \
--snapshot=examples/cifar10/cifar10_full_iter_70000.solverstate.h5 $@
I1023 13:41:42.880556 12138 solver.cpp:273] Learning Rate Policy: fixed
I1023 13:41:42.920240 12138 solver.cpp:310] Iteration 70000, loss = 0.296884
I1023 13:41:42.920282 12138 solver.cpp:330] Iteration 70000, Testing net (#0)
I1023 13:41:44.514853 12144 data_layer.cpp:73] Restarting data prefetching from start.
I1023 13:41:44.582433 12138 solver.cpp:397] Test net output #0: accuracy = 0.8155
I1023 13:41:44.582465 12138 solver.cpp:397] Test net output #1: loss = 0.53114 (* 1 = 0.53114 loss)
I1023 13:41:44.582473 12138 solver.cpp:315] Optimization Done.
I1023 13:41:44.582479 12138 caffe.cpp:259] Optimization Done.
sea@sea-X550JK:~$ caffe device_query -gpu 0
I1025 09:31:34.609318 5993 caffe.cpp:138] Querying GPUs 0
I1025 09:31:34.613349 5993 common.cpp:178] Device id: 0
I1025 09:31:34.613365 5993 common.cpp:179] Major revision number: 5
I1025 09:31:34.613368 5993 common.cpp:180] Minor revision number: 0
I1025 09:31:34.613370 5993 common.cpp:181] Name: GeForce GTX 850M
I1025 09:31:34.613373 5993 common.cpp:182] Total global memory: 4240965632
I1025 09:31:34.613376 5993 common.cpp:183] Total shared memory per block: 49152
I1025 09:31:34.613380 5993 common.cpp:184] Total registers per block: 65536
I1025 09:31:34.613385 5993 common.cpp:185] Warp size: 32
I1025 09:31:34.613389 5993 common.cpp:186] Maximum memory pitch: 2147483647
I1025 09:31:34.613394 5993 common.cpp:187] Maximum threads per block: 1024
I1025 09:31:34.613399 5993 common.cpp:188] Maximum dimension of block: 1024, 1024, 64
I1025 09:31:34.613404 5993 common.cpp:191] Maximum dimension of grid: 2147483647, 65535, 65535
I1025 09:31:34.613409 5993 common.cpp:194] Clock rate: 901500
I1025 09:31:34.613414 5993 common.cpp:195] Total constant memory: 65536
I1025 09:31:34.613418 5993 common.cpp:196] Texture alignment: 512
I1025 09:31:34.613422 5993 common.cpp:197] Concurrent copy and execution: Yes
I1025 09:31:34.613427 5993 common.cpp:199] Number of multiprocessors: 5
I1025 09:31:34.613432 5993 common.cpp:200] Kernel execution timeout: Yes
caffe test -model examples/mnist/lenet_train_test.prototxt -weights examples/mnist/lenet_iter_5000.caffemodel -gpu 0 -iterations 100
I1025 09:33:55.320236 6215 caffe.cpp:313] Batch 96, accuracy = 0.99
I1025 09:33:55.320250 6215 caffe.cpp:313] Batch 96, loss = 0.0520074
I1025 09:33:55.323004 6215 caffe.cpp:313] Batch 97, accuracy = 0.97
I1025 09:33:55.323016 6215 caffe.cpp:313] Batch 97, loss = 0.0864158
I1025 09:33:55.327574 6215 caffe.cpp:313] Batch 98, accuracy = 1
I1025 09:33:55.327589 6215 caffe.cpp:313] Batch 98, loss = 0.00637727
I1025 09:33:55.330288 6215 caffe.cpp:313] Batch 99, accuracy = 0.99
I1025 09:33:55.330302 6215 caffe.cpp:313] Batch 99, loss = 0.0181812
I1025 09:33:55.330315 6215 caffe.cpp:318] Loss: 0.0303416
I1025 09:33:55.330328 6215 caffe.cpp:330] accuracy = 0.9902
I1025 09:33:55.330353 6215 caffe.cpp:330] loss = 0.0303416 (* 1 = 0.0303416 loss)
caffe time -model examples/mnist/lenet_train_test.prototxt -iterations 10
I1025 09:35:24.342404 6288 caffe.cpp:403] Iteration: 9 forward-backward time: 71 ms.
I1025 09:35:24.413054 6288 caffe.cpp:403] Iteration: 10 forward-backward time: 70 ms.
I1025 09:35:24.413080 6288 caffe.cpp:406] Average time per layer:
I1025 09:35:24.413084 6288 caffe.cpp:409] mnist forward: 0.0106 ms.
I1025 09:35:24.413089 6288 caffe.cpp:412] mnist backward: 0.0009 ms.
I1025 09:35:24.413094 6288 caffe.cpp:409] conv1 forward: 7.6432 ms.
I1025 09:35:24.413097 6288 caffe.cpp:412] conv1 backward: 7.9099 ms.
I1025 09:35:24.413101 6288 caffe.cpp:409] pool1 forward: 3.8151 ms.
I1025 09:35:24.413105 6288 caffe.cpp:412] pool1 backward: 0.7859 ms.
I1025 09:35:24.413108 6288 caffe.cpp:409] conv2 forward: 13.4716 ms.
I1025 09:35:24.413112 6288 caffe.cpp:412] conv2 backward: 26.854 ms.
I1025 09:35:24.413115 6288 caffe.cpp:409] pool2 forward: 2.0305 ms.
I1025 09:35:24.413120 6288 caffe.cpp:412] pool2 backward: 0.9011 ms.
I1025 09:35:24.413123 6288 caffe.cpp:409] ip1 forward: 2.6852 ms.
I1025 09:35:24.413127 6288 caffe.cpp:412] ip1 backward: 5.1399 ms.
I1025 09:35:24.413131 6288 caffe.cpp:409] relu1 forward: 0.0334 ms.
I1025 09:35:24.413134 6288 caffe.cpp:412] relu1 backward: 0.0371 ms.
I1025 09:35:24.413138 6288 caffe.cpp:409] ip2 forward: 0.1682 ms.
I1025 09:35:24.413142 6288 caffe.cpp:412] ip2 backward: 0.1895 ms.
I1025 09:35:24.413146 6288 caffe.cpp:409] loss forward: 0.0589 ms.
I1025 09:35:24.413149 6288 caffe.cpp:412] loss backward: 0.0023 ms.
I1025 09:35:24.413156 6288 caffe.cpp:417] Average Forward pass: 29.9322 ms.
I1025 09:35:24.413159 6288 caffe.cpp:419] Average Backward pass: 41.8356 ms.
I1025 09:35:24.413163 6288 caffe.cpp:421] Average Forward-Backward: 71.8 ms.
I1025 09:35:24.413167 6288 caffe.cpp:423] Total Time: 718 ms.
I1025 09:35:24.413170 6288 caffe.cpp:424] *** Benchmark ends ***