Learning Caffe -- getting started with the 00-classification example -- IPython
First, the data files and the model files have already been downloaded and prepared; that step is not covered here.
cd <caffe-root-dir>
--------------------------------------------------------------------------------------
# set up Python environment: numpy for numerical routines, and matplotlib for plotting
import numpy as np
import matplotlib.pyplot as plt
# display plots in this notebook
%matplotlib inline
--------------------------------------------------------------------------------------
# set display defaults
plt.rcParams['figure.figsize'] = (10, 10) # large images
plt.rcParams['image.interpolation'] = 'nearest' # don't interpolate: show square pixels
plt.rcParams['image.cmap'] = 'gray' # use grayscale output rather than a (potentially misleading) color heatmap
--------------------------------------------------------------------------------------
# The caffe module needs to be on the Python path;
# we'll add it here explicitly.
import sys
caffe_root = './' # the commands here are run from the Caffe root directory (otherwise change this line)
sys.path.insert(0, caffe_root + 'build/install/python')
--------------------------------------------------------------------------------------
import caffe
# If you get "No module named _caffe", either you have not built pycaffe or you have the wrong path.
caffe.set_mode_cpu()
--------------------------------------------------------------------------------------
model_def = caffe_root + 'models/bvlc_reference_caffenet/deploy.prototxt'
model_weights = caffe_root + 'models/bvlc_reference_caffenet/bvlc_reference_caffenet.caffemodel'
net = caffe.Net(model_def,      # defines the structure of the model
                model_weights,  # contains the trained weights
                caffe.TEST)     # use test mode (e.g., don't perform dropout)
--------------------------------------------------------------------------------------
# load the mean ImageNet image (as distributed with Caffe) for subtraction
mu = np.load(caffe_root + 'build/install/python/caffe/imagenet/ilsvrc_2012_mean.npy')
mu = mu.mean(1).mean(1) # average over pixels to obtain the mean (BGR) pixel values
print 'mean-subtracted values:', zip('BGR', mu)
# create transformer for the input called 'data'
transformer = caffe.io.Transformer({'data': net.blobs['data'].data.shape})
transformer.set_transpose('data', (2,0,1)) # move image channels to outermost dimension
transformer.set_mean('data', mu) # subtract the dataset-mean value in each channel
transformer.set_raw_scale('data', 255) # rescale from [0, 1] to [0, 255]
transformer.set_channel_swap('data', (2,1,0)) # swap channels from RGB to BGR
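To make the four set_* calls above concrete, here is roughly what transformer.preprocess('data', image) does in plain NumPy. manual_preprocess is a hypothetical helper written only for illustration (the real Transformer also resizes the input to the data blob's spatial size, assumed to be 227x227 below):
# rough NumPy equivalent of transformer.preprocess('data', image) -- a sketch, not Caffe's implementation
def manual_preprocess(img, mu):
    img = caffe.io.resize_image(img, (227, 227))  # resize to the net's input size (assumed 227x227)
    img = img.transpose(2, 0, 1)                  # set_transpose: (H, W, C) -> (C, H, W)
    img = img[[2, 1, 0], :, :]                    # set_channel_swap: RGB -> BGR
    img = img * 255                               # set_raw_scale: [0, 1] floats -> [0, 255]
    img = img - mu.reshape(3, 1, 1)               # set_mean: subtract the per-channel mean pixel
    return img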
--------------------------------------------------------------------------------------
# set the size of the input (we can skip this if we're happy
# with the default; we can also change it later, e.g., for different batch sizes)
net.blobs['data'].reshape(50,        # batch size
                          3,         # 3-channel (BGR) images
                          227, 227)  # image size is 227x227
image = caffe.io.load_image(caffe_root + 'examples/images/cat.jpg')
transformed_image = transformer.preprocess('data', image)
plt.imshow(image)
--------------------------------------------------------------------------------------
# copy the image data into the memory allocated for the net
net.blobs['data'].data[...] = transformed_image
### perform classification
output = net.forward()
output_prob = output['prob'][0] # the output probability vector for the first image in the batch
print 'predicted class is:', output_prob.argmax()
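The 'prob' blob is the output of the net's final softmax layer, so each row of output['prob'] is a probability distribution over the 1000 ImageNet classes. A quick sanity check (a sketch, assuming the batch size of 50 set above):
# softmax outputs are non-negative and each row sums to ~1
print 'prob shape:', output['prob'].shape   # (50, 1000) with the reshape above
print 'sums to one:', np.allclose(output_prob.sum(), 1.0)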
-----------------------------------------
# load ImageNet labels
import os  # needed for the existence check below
labels_file = caffe_root + 'data/ilsvrc12/synset_words.txt'
if not os.path.exists(labels_file):
    !./data/ilsvrc12/get_ilsvrc_aux.sh  # path relative to the Caffe root, matching caffe_root above
labels = np.loadtxt(labels_file, str, delimiter='\t')
print 'output label:', labels[output_prob.argmax()]
----------------------------------------------------------------
# sort top five predictions from softmax output
top_inds = output_prob.argsort()[::-1][:5] # reverse sort and take five largest items
print 'probabilities and labels:'
zip(output_prob[top_inds], labels[top_inds])
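In Python 2 the zip() above simply echoes a list of (probability, label) tuples; the same data can be printed more readably with a small loop (illustrative only):
# print the top-5 predictions one per line
for p, l in zip(output_prob[top_inds], labels[top_inds]):
    print '%.4f  %s' % (p, l)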
----------------------------------------------------------------
%timeit net.forward()
----------------------------------------------------------------
caffe.set_device(0) # if we have multiple GPUs, pick the first one
caffe.set_mode_gpu()
net.forward() # run once before timing to set up memory
%timeit net.forward()
----------------------------------------------------------------
# for each layer, show the output shape
for layer_name, blob in net.blobs.iteritems():
    print layer_name + '\t' + str(blob.data.shape)
----------------------------------------------------------------
for layer_name, param in net.params.iteritems():
    print layer_name + '\t' + str(param[0].data.shape), str(param[1].data.shape)
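params[0] holds each layer's weights and params[1] its biases; summing their sizes gives the total number of learned parameters in CaffeNet (a small sketch using the dictionaries above):
# total number of learnable parameters across all layers (weights + biases)
n_params = sum(p.data.size for blobs in net.params.itervalues() for p in blobs)
print 'total parameters:', n_params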
----------------------------------------------------------------
def vis_square(data):
    """Take an array of shape (n, height, width) or (n, height, width, 3)
       and visualize each (height, width) thing in a grid of size approx. sqrt(n) by sqrt(n)"""
    # normalize data for display
    data = (data - data.min()) / (data.max() - data.min())
    # force the number of filters to be square
    n = int(np.ceil(np.sqrt(data.shape[0])))
    padding = (((0, n ** 2 - data.shape[0]),
                (0, 1), (0, 1))                 # add some space between filters
               + ((0, 0),) * (data.ndim - 3))   # don't pad the last dimension (if there is one)
    data = np.pad(data, padding, mode='constant', constant_values=1)  # pad with ones (white)
    # tile the filters into an image
    data = data.reshape((n, n) + data.shape[1:]).transpose((0, 2, 1, 3) + tuple(range(4, data.ndim + 1)))
    data = data.reshape((n * data.shape[1], n * data.shape[3]) + data.shape[4:])
    plt.imshow(data); plt.axis('off')
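A quick way to see the tiling behaviour is to call it on random data (illustrative only; the shapes are arbitrary):
# e.g. 10 random 8x8 "filters" end up padded into a 4x4 grid of tiles
vis_square(np.random.rand(10, 8, 8))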
----------------------------------------------------------------
# the parameters are a list of [weights, biases]
filters = net.params['conv1'][0].data
vis_square(filters.transpose(0, 2, 3, 1))
----------------------------------------------------------------
feat = net.blobs['conv1'].data[0, :36]
vis_square(feat)
----------------------------------------------------------------
feat = net.blobs['pool5'].data[0]
vis_square(feat)
----------------------------------------------------------------
feat = net.blobs['fc6'].data[0]
plt.subplot(2, 1, 1)
plt.plot(feat.flat)
plt.subplot(2, 1, 2)
_ = plt.hist(feat.flat[feat.flat > 0], bins=100)
----------------------------------------------------------------
feat = net.blobs['prob'].data[0]
plt.figure(figsize=(15, 3))
plt.plot(feat.flat)
----------------------------------------------------------------
# download an image
my_image_url = "..." # paste your URL here
# for example:
# my_image_url = "https://upload.wikimedia.org/wikipedia/commons/b/be/Orang_Utan%2C_Semenggok_Forest_Reserve%2C_Sarawak%2C_Borneo%2C_Malaysia.JPG"
!wget -O image.jpg $my_image_url
# transform it and copy it into the net
image = caffe.io.load_image('image.jpg')
net.blobs['data'].data[...] = transformer.preprocess('data', image)
# perform classification
net.forward()
# obtain the output probabilities
output_prob = net.blobs['prob'].data[0]
# sort top five predictions from softmax output
top_inds = output_prob.argsort()[::-1][:5]
plt.imshow(image)
print 'probabilities and labels:'
zip(output_prob[top_inds], labels[top_inds])
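The steps in this last cell can be folded into a single helper; classify_image below is a hypothetical convenience wrapper around the net, transformer, and labels defined earlier (it is not part of the Caffe API):
# hypothetical helper: classify one image file and return the top-k (probability, label) pairs
def classify_image(path, k=5):
    img = caffe.io.load_image(path)
    net.blobs['data'].data[...] = transformer.preprocess('data', img)
    prob = net.forward()['prob'][0]
    top = prob.argsort()[::-1][:k]
    return zip(prob[top], labels[top])
print classify_image('image.jpg')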
----------------------------------------------------------------