VGG_19 train_vali.prototxt file
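This is the train/validation network definition used to fine-tune VGG-19 on the HAT attribute data. The two ImageData layers read the training and test image lists from the fine_tuning_data folders (batch sizes 12 and 10), the convolutional stack follows the standard VGG-19 layout, and the classifier is adapted to 27 attribute outputs: the renamed fc6_ and fc8_ layers feed a Sigmoid followed by a EuclideanLoss, with an Accuracy layer attached in the TEST phase.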
name: "VGG_ILSVRC_19_layer" layer {
name: "data"
type: "ImageData"
top: "data"
top: "label"
include {
phase: TRAIN
}
image_data_param {
batch_size: 12
source: "../../fine_tuning_data/HAT_fineTuning_data/train_data_fineTuning.txt"
root_folder: "../../fine_tuning_data/HAT_fineTuning_data/train_data/"
}
}
layer {
name: "data"
type: "ImageData"
top: "data"
top: "label"
include {
phase: TEST
}
transform_param {
mirror: false
}
image_data_param {
batch_size: 10
source: "../../fine_tuning_data/HAT_fineTuning_data/test_data_fineTuning.txt"
root_folder: "../../fine_tuning_data/HAT_fineTuning_data/test_data/"
}
}
layer {
bottom:"data"
top:"conv1_1"
name:"conv1_1"
type:"Convolution"
convolution_param {
num_output:64
pad:1
kernel_size:3
}
}
layer {
bottom:"conv1_1"
top:"conv1_1"
name:"relu1_1"
type:"ReLU"
}
layer {
bottom:"conv1_1"
top:"conv1_2"
name:"conv1_2"
type:"Convolution"
convolution_param {
num_output:64
pad:1
kernel_size:3
}
}
layer {
bottom:"conv1_2"
top:"conv1_2"
name:"relu1_2"
type:"ReLU"
}
layer {
bottom:"conv1_2"
top:"pool1"
name:"pool1"
type:"Pooling"
pooling_param {
pool:MAX
kernel_size:2
stride:2
}
}
layer {
bottom:"pool1"
top:"conv2_1"
name:"conv2_1"
type:"Convolution"
convolution_param {
num_output:128
pad:1
kernel_size:3
}
}
layer {
bottom:"conv2_1"
top:"conv2_1"
name:"relu2_1"
type:"ReLU"
}
layer {
bottom:"conv2_1"
top:"conv2_2"
name:"conv2_2"
type:"Convolution"
convolution_param {
num_output:128
pad:1
kernel_size:3
}
}
layer {
bottom:"conv2_2"
top:"conv2_2"
name:"relu2_2"
type:"ReLU"
}
layer {
bottom:"conv2_2"
top:"pool2"
name:"pool2"
type:"Pooling"
pooling_param {
pool:MAX
kernel_size:2
stride:2
}
}
layer {
bottom:"pool2"
top:"conv3_1"
name: "conv3_1"
type:"Convolution"
convolution_param {
num_output:256
pad:1
kernel_size:3
}
}
layer {
bottom:"conv3_1"
top:"conv3_1"
name:"relu3_1"
type:"ReLU"
}
layer {
bottom:"conv3_1"
top:"conv3_2"
name:"conv3_2"
type:"Convolution"
convolution_param {
num_output:256
pad:1
kernel_size:3
}
}
layer {
bottom:"conv3_2"
top:"conv3_2"
name:"relu3_2"
type:"ReLU"
}
layer {
bottom:"conv3_2"
top:"conv3_3"
name:"conv3_3"
type:"Convolution"
convolution_param {
num_output:256
pad:1
kernel_size:3
}
}
layer {
bottom:"conv3_3"
top:"conv3_3"
name:"relu3_3"
type:"ReLU"
}
layer {
bottom:"conv3_3"
top:"conv3_4"
name:"conv3_4"
type:"Convolution"
convolution_param {
num_output:256
pad:1
kernel_size:3
}
}
layer {
bottom:"conv3_4"
top:"conv3_4"
name:"relu3_4"
type:"ReLU"
}
layer {
bottom:"conv3_4"
top:"pool3"
name:"pool3"
type:"Pooling"
pooling_param {
pool:MAX
kernel_size: 2
stride: 2
}
}
layer {
bottom:"pool3"
top:"conv4_1"
name:"conv4_1"
type:"Convolution"
convolution_param {
num_output: 512
pad: 1
kernel_size: 3
}
}
layer {
bottom:"conv4_1"
top:"conv4_1"
name:"relu4_1"
type:"ReLU"
}
layer {
bottom:"conv4_1"
top:"conv4_2"
name:"conv4_2"
type:"Convolution"
convolution_param {
num_output: 512
pad: 1
kernel_size: 3
}
}
layer {
bottom:"conv4_2"
top:"conv4_2"
name:"relu4_2"
type:"ReLU"
}
layer {
bottom:"conv4_2"
top:"conv4_3"
name:"conv4_3"
type:"Convolution"
convolution_param {
num_output: 512
pad: 1
kernel_size: 3
}
}
layer {
bottom:"conv4_3"
top:"conv4_3"
name:"relu4_3"
type:"ReLU"
}
layer {
bottom:"conv4_3"
top:"conv4_4"
name:"conv4_4"
type:"Convolution"
convolution_param {
num_output: 512
pad: 1
kernel_size: 3
}
}
layer {
bottom:"conv4_4"
top:"conv4_4"
name:"relu4_4"
type:"ReLU"
}
layer {
bottom:"conv4_4"
top:"pool4"
name:"pool4"
type:"Pooling"
pooling_param {
pool:MAX
kernel_size: 2
stride: 2
}
}
layer {
bottom:"pool4"
top:"conv5_1"
name:"conv5_1"
type:"Convolution"
convolution_param {
num_output: 512
pad: 1
kernel_size: 3
}
}
layer {
bottom:"conv5_1"
top:"conv5_1"
name:"relu5_1"
type:"ReLU"
}
layer {
bottom:"conv5_1"
top:"conv5_2"
name:"conv5_2"
type:"Convolution"
convolution_param {
num_output: 512
pad: 1
kernel_size: 3
}
}
layer {
bottom:"conv5_2"
top:"conv5_2"
name:"relu5_2"
type:"ReLU"
}
layer {
bottom:"conv5_2"
top:"conv5_3"
name:"conv5_3"
type:"Convolution"
convolution_param {
num_output: 512
pad: 1
kernel_size: 3
}
}
layer {
bottom:"conv5_3"
top:"conv5_3"
name:"relu5_3"
type:"ReLU"
}
layer {
bottom:"conv5_3"
top:"conv5_4"
name:"conv5_4"
type:"Convolution"
convolution_param {
num_output: 512
pad: 1
kernel_size: 3
}
}
layer {
bottom:"conv5_4"
top:"conv5_4"
name:"relu5_4"
type:"ReLU"
}
layer {
bottom:"conv5_4"
top:"pool5"
name:"pool5"
type:"Pooling"
pooling_param {
pool:MAX
kernel_size: 2
stride: 2
}
}
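# Fully-connected classifier: fc6_ and fc8_ are renamed relative to the standard VGG-19
# definition, so Caffe (which matches weights by layer name) will not copy pre-trained
# parameters into them; fc8_ outputs 27 attribute scores instead of the original 1000 classes.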
layer {
bottom:"pool5"
top:"fc6_"
name:"fc6_"
type:"InnerProduct"
inner_product_param {
num_output: 4096
}
}
layer {
bottom:"fc6_"
top:"fc6_"
name:"relu6"
type:"ReLU"
}
layer {
bottom:"fc6_"
top:"fc6_"
name:"drop6"
type:"Dropout"
dropout_param {
dropout_ratio: 0.5
}
}
layer {
bottom:"fc6_"
top:"fc7"
name:"fc7"
type:"InnerProduct"
inner_product_param {
num_output: 4096
}
}
layer {
bottom:"fc7"
top:"fc7"
name:"relu7"
type:"ReLU"
}
layer {
bottom:"fc7"
top:"fc7"
name:"drop7"
type:"Dropout"
dropout_param {
dropout_ratio: 0.5
}
}
layer {
bottom:"fc7"
top:"fc8_"
name:"fc8_"
type:"InnerProduct"
inner_product_param {
num_output: 27
}
}
layer {
name: "sigmoid"
type: "Sigmoid"
bottom: "fc8_"
top: "fc8_"
}
layer {
name: "accuracy"
type: "Accuracy"
bottom: "fc8_"
bottom: "label"
top: "accuracy"
include {
phase: TEST
}
}
layer {
name: "loss"
type: "EuclideanLoss"
bottom: "fc8_"
bottom: "label"
top: "loss"
}
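To run the fine-tuning, this train/val definition is paired with a solver file and the pre-trained VGG-19 weights. Below is a minimal sketch of such a solver and the corresponding caffe command; the file names (solver.prototxt, train_vali.prototxt, VGG_ILSVRC_19_layers.caffemodel) and all hyperparameter values are illustrative assumptions, not values from the original post.

# solver.prototxt -- a minimal sketch; every value below is an assumption to tune for your data
net: "train_vali.prototxt"
test_iter: 100            # number of TEST batches per validation pass
test_interval: 500        # validate every 500 training iterations
base_lr: 0.001            # small learning rate, since most layers start from pre-trained weights
lr_policy: "step"
gamma: 0.1
stepsize: 5000
momentum: 0.9
weight_decay: 0.0005
display: 20
max_iter: 20000
snapshot: 5000
snapshot_prefix: "vgg19_hat_finetune"
solver_mode: GPU

Training is then launched with the standard Caffe tool, initializing from the released VGG-19 weights:

caffe train -solver solver.prototxt -weights VGG_ILSVRC_19_layers.caffemodel -gpu 0

Because Caffe copies weights by layer name, the renamed fc6_ and fc8_ layers receive no pre-trained parameters. Note that neither of them specifies a weight_filler, so they would start from Caffe's default constant-zero initialization; adding, for example, a gaussian weight_filler to these two layers is worth considering.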