VGG_19 train_vali.prototxt file
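# VGG-19 definition for fine-tuning on the HAT attribute data: the two ImageData layers
# below feed the training and test image lists, the standard VGG-19 convolutional trunk
# follows, and the classifier ends in a 27-output attribute layer.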
name: "VGG_ILSVRC_19_layer" layer {
name: "data"
type: "ImageData"
top: "data"
top: "label"
include {
phase: TRAIN
}
image_data_param {
batch_size: 12
source: "../../fine_tuning_data/HAT_fineTuning_data/train_data_fineTuning.txt"
root_folder: "../../fine_tuning_data/HAT_fineTuning_data/train_data/"
}
}
layer {
name: "data"
type: "ImageData"
top: "data"
top: "label"
include {
phase: TEST
}
transform_param {
mirror: false
}
image_data_param {
batch_size: 10
source: "../../fine_tuning_data/HAT_fineTuning_data/test_data_fineTuning.txt"
root_folder: "../../fine_tuning_data/HAT_fineTuning_data/test_data/"
}
}
layer {
bottom:"data"
top:"conv1_1"
name:"conv1_1"
type:"Convolution"
convolution_param {
num_output:64
pad:1
kernel_size:3
}
}
layer {
bottom:"conv1_1"
top:"conv1_1"
name:"relu1_1"
type:"ReLU"
}
layer {
bottom:"conv1_1"
top:"conv1_2"
name:"conv1_2"
type:"Convolution"
convolution_param {
num_output:64
pad:1
kernel_size:3
}
}
layer {
bottom:"conv1_2"
top:"conv1_2"
name:"relu1_2"
type:"ReLU"
}
layer {
bottom:"conv1_2"
top:"pool1"
name:"pool1"
type:"Pooling"
pooling_param {
pool:MAX
kernel_size:2
stride:2
}
}
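# Block 2: two 3x3 convolutions with 128 channels, then 2x2 max pooling.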
layer {
bottom:"pool1"
top:"conv2_1"
name:"conv2_1"
type:"Convolution"
convolution_param {
num_output:128
pad:1
kernel_size:3
}
}
layer {
bottom:"conv2_1"
top:"conv2_1"
name:"relu2_1"
type:"ReLU"
}
layer {
bottom:"conv2_1"
top:"conv2_2"
name:"conv2_2"
type:"Convolution"
convolution_param {
num_output:128
pad:1
kernel_size:3
}
}
layer {
bottom:"conv2_2"
top:"conv2_2"
name:"relu2_2"
type:"ReLU"
}
layer {
bottom:"conv2_2"
top:"pool2"
name:"pool2"
type:"Pooling"
pooling_param {
pool:MAX
kernel_size:2
stride:2
}
}
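# Block 3: four 3x3 convolutions with 256 channels, then 2x2 max pooling.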
layer {
bottom:"pool2"
top:"conv3_1"
name: "conv3_1"
type:"Convolution"
convolution_param {
num_output:256
pad:1
kernel_size:3
}
}
layer {
bottom:"conv3_1"
top:"conv3_1"
name:"relu3_1"
type:"ReLU"
}
layer {
bottom:"conv3_1"
top:"conv3_2"
name:"conv3_2"
type:"Convolution"
convolution_param {
num_output:256
pad:1
kernel_size:3
}
}
layer {
bottom:"conv3_2"
top:"conv3_2"
name:"relu3_2"
type:"ReLU"
}
layer {
bottom:"conv3_2"
top:"conv3_3"
name:"conv3_3"
type:"Convolution"
convolution_param {
num_output:256
pad:1
kernel_size:3
}
}
layer {
bottom:"conv3_3"
top:"conv3_3"
name:"relu3_3"
type:"ReLU"
}
layer {
bottom:"conv3_3"
top:"conv3_4"
name:"conv3_4"
type:"Convolution"
convolution_param {
num_output:256
pad:1
kernel_size:3
}
}
layer {
bottom:"conv3_4"
top:"conv3_4"
name:"relu3_4"
type:"ReLU"
}
layer {
bottom:"conv3_4"
top:"pool3"
name:"pool3"
type:"Pooling"
pooling_param {
pool:MAX
kernel_size: 2
stride: 2
}
}
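# Block 4: four 3x3 convolutions with 512 channels, then 2x2 max pooling.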
layer {
bottom:"pool3"
top:"conv4_1"
name:"conv4_1"
type:"Convolution"
convolution_param {
num_output: 512
pad: 1
kernel_size: 3
}
}
layer {
bottom:"conv4_1"
top:"conv4_1"
name:"relu4_1"
type:"ReLU"
}
layer {
bottom:"conv4_1"
top:"conv4_2"
name:"conv4_2"
type:"Convolution"
convolution_param {
num_output: 512
pad: 1
kernel_size: 3
}
}
layer {
bottom:"conv4_2"
top:"conv4_2"
name:"relu4_2"
type:"ReLU"
}
layer {
bottom:"conv4_2"
top:"conv4_3"
name:"conv4_3"
type:"Convolution"
convolution_param {
num_output: 512
pad: 1
kernel_size: 3
}
}
layer {
bottom:"conv4_3"
top:"conv4_3"
name:"relu4_3"
type:"ReLU"
}
layer {
bottom:"conv4_3"
top:"conv4_4"
name:"conv4_4"
type:"Convolution"
convolution_param {
num_output: 512
pad: 1
kernel_size: 3
}
}
layer {
bottom:"conv4_4"
top:"conv4_4"
name:"relu4_4"
type:"ReLU"
}
layer {
bottom:"conv4_4"
top:"pool4"
name:"pool4"
type:"Pooling"
pooling_param {
pool:MAX
kernel_size: 2
stride: 2
}
}
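# Block 5: four 3x3 convolutions with 512 channels, then 2x2 max pooling.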
layer {
bottom:"pool4"
top:"conv5_1"
name:"conv5_1"
type:"Convolution"
convolution_param {
num_output: 512
pad: 1
kernel_size: 3
}
}
layer {
bottom:"conv5_1"
top:"conv5_1"
name:"relu5_1"
type:"ReLU"
}
layer {
bottom:"conv5_1"
top:"conv5_2"
name:"conv5_2"
type:"Convolution"
convolution_param {
num_output: 512
pad: 1
kernel_size: 3
}
}
layer {
bottom:"conv5_2"
top:"conv5_2"
name:"relu5_2"
type:"ReLU"
}
layer {
bottom:"conv5_2"
top:"conv5_3"
name:"conv5_3"
type:"Convolution"
convolution_param {
num_output: 512
pad: 1
kernel_size: 3
}
}
layer {
bottom:"conv5_3"
top:"conv5_3"
name:"relu5_3"
type:"ReLU"
}
layer {
bottom:"conv5_3"
top:"conv5_4"
name:"conv5_4"
type:"Convolution"
convolution_param {
num_output: 512
pad: 1
kernel_size: 3
}
}
layer {
bottom:"conv5_4"
top:"conv5_4"
name:"relu5_4"
type:"ReLU"
}
layer {
bottom:"conv5_4"
top:"pool5"
name:"pool5"
type:"Pooling"
pooling_param {
pool:MAX
kernel_size: 2
stride: 2
}
}
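# Fully connected layers. fc6_ and fc8_ are renamed relative to the original VGG-19
# (fc6/fc8), so Caffe re-initializes them rather than copying the pretrained parameters
# when fine-tuning with --weights; fc7 keeps its name and is loaded from the released model.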
layer {
bottom:"pool5"
top:"fc6_"
name:"fc6_"
type:"InnerProduct"
inner_product_param {
num_output: 4096
}
}
layer {
bottom:"fc6_"
top:"fc6_"
name:"relu6"
type:"ReLU"
}
layer {
bottom:"fc6_"
top:"fc6_"
name:"drop6"
type:"Dropout"
dropout_param {
dropout_ratio: 0.5
}
}
layer {
bottom:"fc6_"
top:"fc7"
name:"fc7"
type:"InnerProduct"
inner_product_param {
num_output: 4096
}
}
layer {
bottom:"fc7"
top:"fc7"
name:"relu7"
type:"ReLU"
}
layer {
bottom:"fc7"
top:"fc7"
name:"drop7"
type:"Dropout"
dropout_param {
dropout_ratio: 0.5
}
}
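# New output layer: 27 scores, one per HAT attribute, squashed by an in-place sigmoid
# and trained with a Euclidean loss against the labels from the data layer; the Accuracy
# layer reports only in the TEST phase.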
layer {
bottom:"fc7"
top:"fc8_"
name:"fc8_"
type:"InnerProduct"
inner_product_param {
num_output: 27
}
}
layer {
name: "sigmoid"
type: "Sigmoid"
bottom: "fc8_"
top: "fc8_"
}
layer {
name: "accuracy"
type: "Accuracy"
bottom: "fc8_"
bottom: "label"
top: "accuracy"
include {
phase: TEST
}
}
layer {
name: "loss"
type: "EuclideanLoss"
bottom: "fc8_"
bottom: "label"
top: "loss"
}
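To actually run the fine-tuning, this net definition needs a solver file and the released VGG-19 weights. The sketch below is only an assumed, minimal solver.prototxt: the file name, learning-rate schedule, and iteration counts are illustrative placeholders, not the values used for the original experiment.

# solver.prototxt (illustrative values, adjust to your data)
net: "train_vali.prototxt"
test_iter: 100
test_interval: 500
base_lr: 0.001
lr_policy: "step"
gamma: 0.1
stepsize: 5000
momentum: 0.9
weight_decay: 0.0005
max_iter: 20000
snapshot: 5000
snapshot_prefix: "vgg19_hat_finetune"
solver_mode: GPU

Training can then be launched with the standard Caffe tool, for example: caffe train -solver solver.prototxt -weights VGG_ILSVRC_19_layers.caffemodel -gpu 0, where the caffemodel is the pretrained VGG-19 released by the VGG group.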