finetune on caffe
Official tutorial: http://caffe.berkeleyvision.org/gathered/examples/finetune_flickr_style.html
Chinese walkthrough of the tutorial: http://blog.csdn.net/liumaolincycle/article/details/48501423
The text below is from: https://stackoverflow.com/questions/36841158/fine-tuning-of-googlenet-model
Assuming you are trying to do image classification, these are the steps for finetuning a model:
1. Classification layer
The original classification layer "loss3/classifier" outputs predictions for 1000 classes (its num_output is set to 1000). You'll need to replace it with a new layer with the appropriate num_output. Replacing the classification layer:
- Change the layer's name (so that when the original weights are read from the caffemodel file there is no conflict with the weights of this layer).
- Change num_output to the right number of output classes you are trying to predict.
- Note that you need to change ALL classification layers. Usually there is only one, but GoogLeNet happens to have three: "loss1/classifier", "loss2/classifier" and "loss3/classifier".
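As a hedged sketch, the replaced top classifier could look like the following. The bottom blob name follows the BVLC GoogLeNet prototxt; the new layer name, the class count of 20 and the lr_mult values are illustrative choices, not values from the original post.
```
# Renamed top classifier for a hypothetical 20-class problem.
# The new name prevents Caffe from copying the 1000-way ImageNet weights
# into this layer, so it is re-initialized by the weight_filler instead.
layer {
  name: "loss3/classifier_ft"          # renamed (was "loss3/classifier")
  type: "InnerProduct"
  bottom: "pool5/7x7_s1"
  top: "loss3/classifier_ft"
  param { lr_mult: 10 decay_mult: 1 }  # common choice: learn the fresh layer faster
  param { lr_mult: 20 decay_mult: 0 }
  inner_product_param {
    num_output: 20                     # your number of classes
    weight_filler { type: "xavier" }
    bias_filler { type: "constant" value: 0 }
  }
}
```
The same treatment would apply to "loss1/classifier" and "loss2/classifier" if you keep GoogLeNet's two auxiliary loss branches.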
2. Data
You need to make a new training dataset with the new labels you want to fine tune to. See, for example, this post on how to make an lmdb dataset.
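As a sketch (paths and image size are placeholders), Caffe's bundled convert_imageset tool can build such an lmdb from a folder of images plus a text file listing "relative/path label" pairs:
```
# train.txt contains lines like:  cats/cat001.jpg 0
$CAFFE_ROOT/build/tools/convert_imageset \
    --resize_height=256 --resize_width=256 --shuffle \
    /path/to/images/ /path/to/train.txt /path/to/train_lmdb
```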
3. How extensive a finetuning do you want?
When finetuning a model, you can train ALL the model's weights, or choose to fix some weights (usually the filters of the lower layers) and train only the weights of the top-most layers. This choice is up to you and it usually depends on the amount of training data available (the more examples you have, the more weights you can afford to finetune).
Each layer (that holds trainable parameters) has param { lr_mult: XX }. This coefficient determines how susceptible these weights are to SGD updates. Setting param { lr_mult: 0 } means you FIX the weights of this layer and they will not be changed during the training process.
Edit your train_val.prototxt accordingly.
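For example, a sketch of a frozen early convolution layer; the layer shown mirrors GoogLeNet's first convolution, but the exact names and parameters depend on your prototxt:
```
# Both param blocks set lr_mult: 0, so the pre-trained weights and bias
# of this layer stay fixed during finetuning.
layer {
  name: "conv1/7x7_s2"
  type: "Convolution"
  bottom: "data"
  top: "conv1/7x7_s2"
  param { lr_mult: 0 }   # weights frozen
  param { lr_mult: 0 }   # bias frozen
  convolution_param {
    num_output: 64
    kernel_size: 7
    stride: 2
    pad: 3
  }
}
```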
4. Run caffe
Run caffe train but supply it with the caffemodel weights as initial weights:
~$ $CAFFE_ROOT/build/tools/caffe train -solver /path/to/solver.prototxt -weights /path/to/orig_googlenet_weights.caffemodel
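A minimal solver.prototxt sketch for a finetuning run might look like this; all values are illustrative, and the main point is that base_lr is usually set lower than for training from scratch, since most weights only need small adjustments:
```
net: "/path/to/train_val.prototxt"
test_iter: 100            # depends on your validation set and batch size
test_interval: 500
base_lr: 0.001            # smaller than a from-scratch run
lr_policy: "step"
gamma: 0.1
stepsize: 10000
momentum: 0.9
weight_decay: 0.0002
max_iter: 40000
snapshot: 5000
snapshot_prefix: "/path/to/finetuned_googlenet"
solver_mode: GPU
```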
Fine-tuning is a very useful trick for achieving promising accuracy compared to hand-crafted features. @Shai already posted a good tutorial for fine-tuning GoogLeNet using Caffe, so I just want to give some recommendations and tricks for fine-tuning in general cases.
Most of the time, the classification task we face involves a new dataset (e.g. the Oxford 102 flower dataset or Cats & Dogs) that falls into one of the following four common situations (see CS231n):
- The new dataset is small and similar to the original dataset.
- The new dataset is small but different from the original dataset (the most common case).
- The new dataset is large and similar to the original dataset.
- The new dataset is large but different from the original dataset.
In practice, most of the time we do not have enough data to train a network from scratch, but it may be enough to fine-tune a pre-trained model. Whichever of the cases above applies, the only thing we must care about is: do we have enough data to train the CNN?
If yes, we can train the CNN from scratch. However, in practice it is still beneficial to initialize the weights from a pre-trained model.
If no, we need to check whether the data is very different from the original dataset. If it is very similar, we can just fine-tune the fully connected layers, or train an SVM on top of the CNN features. However, if it is very different from the original dataset, we may need to fine-tune the convolutional layers as well to improve generalization.
References:
https://groups.google.com/forum/#!topic/caffe-users/3x82qPZ2f8E
http://www.cnblogs.com/louyihang-loves-baiyan/p/5038758.html