Darknet19(
  (conv1s): Sequential(
    (0): Sequential(
      (0): Conv2d_BatchNorm(
        (conv): Conv2d(3, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
        (bn): BatchNorm2d(32, eps=1e-05, momentum=0.01, affine=True)
        (relu): LeakyReLU(0.1, inplace)
      )
    )
    (1): Sequential(
      (0): MaxPool2d(kernel_size=(2, 2), stride=(2, 2), dilation=(1, 1), ceil_mode=False)
      (1): Conv2d_BatchNorm(
        (conv): Conv2d(32, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
        (bn): BatchNorm2d(64, eps=1e-05, momentum=0.01, affine=True)
        (relu): LeakyReLU(0.1, inplace)
      )
    )
    (2): Sequential(
      (0): MaxPool2d(kernel_size=(2, 2), stride=(2, 2), dilation=(1, 1), ceil_mode=False)
      (1): Conv2d_BatchNorm(
        (conv): Conv2d(64, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
        (bn): BatchNorm2d(128, eps=1e-05, momentum=0.01, affine=True)
        (relu): LeakyReLU(0.1, inplace)
      )
      (2): Conv2d_BatchNorm(
        (conv): Conv2d(128, 64, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (bn): BatchNorm2d(64, eps=1e-05, momentum=0.01, affine=True)
        (relu): LeakyReLU(0.1, inplace)
      )
      (3): Conv2d_BatchNorm(
        (conv): Conv2d(64, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
        (bn): BatchNorm2d(128, eps=1e-05, momentum=0.01, affine=True)
        (relu): LeakyReLU(0.1, inplace)
      )
    )
    (3): Sequential(
      (0): MaxPool2d(kernel_size=(2, 2), stride=(2, 2), dilation=(1, 1), ceil_mode=False)
      (1): Conv2d_BatchNorm(
        (conv): Conv2d(128, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
        (bn): BatchNorm2d(256, eps=1e-05, momentum=0.01, affine=True)
        (relu): LeakyReLU(0.1, inplace)
      )
      (2): Conv2d_BatchNorm(
        (conv): Conv2d(256, 128, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (bn): BatchNorm2d(128, eps=1e-05, momentum=0.01, affine=True)
        (relu): LeakyReLU(0.1, inplace)
      )
      (3): Conv2d_BatchNorm(
        (conv): Conv2d(128, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
        (bn): BatchNorm2d(256, eps=1e-05, momentum=0.01, affine=True)
        (relu): LeakyReLU(0.1, inplace)
      )
    )
    (4): Sequential(
      (0): MaxPool2d(kernel_size=(2, 2), stride=(2, 2), dilation=(1, 1), ceil_mode=False)
      (1): Conv2d_BatchNorm(
        (conv): Conv2d(256, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
        (bn): BatchNorm2d(512, eps=1e-05, momentum=0.01, affine=True)
        (relu): LeakyReLU(0.1, inplace)
      )
      (2): Conv2d_BatchNorm(
        (conv): Conv2d(512, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (bn): BatchNorm2d(256, eps=1e-05, momentum=0.01, affine=True)
        (relu): LeakyReLU(0.1, inplace)
      )
      (3): Conv2d_BatchNorm(
        (conv): Conv2d(256, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
        (bn): BatchNorm2d(512, eps=1e-05, momentum=0.01, affine=True)
        (relu): LeakyReLU(0.1, inplace)
      )
      (4): Conv2d_BatchNorm(
        (conv): Conv2d(512, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (bn): BatchNorm2d(256, eps=1e-05, momentum=0.01, affine=True)
        (relu): LeakyReLU(0.1, inplace)
      )
      (5): Conv2d_BatchNorm(
        (conv): Conv2d(256, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
        (bn): BatchNorm2d(512, eps=1e-05, momentum=0.01, affine=True)
        (relu): LeakyReLU(0.1, inplace)
      )
    )
  )
  (conv2): Sequential(
    (0): MaxPool2d(kernel_size=(2, 2), stride=(2, 2), dilation=(1, 1), ceil_mode=False)
    (1): Conv2d_BatchNorm(
      (conv): Conv2d(512, 1024, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
      (bn): BatchNorm2d(1024, eps=1e-05, momentum=0.01, affine=True)
      (relu): LeakyReLU(0.1, inplace)
    )
    (2): Conv2d_BatchNorm(
      (conv): Conv2d(1024, 512, kernel_size=(1, 1), stride=(1, 1), bias=False)
      (bn): BatchNorm2d(512, eps=1e-05, momentum=0.01, affine=True)
      (relu): LeakyReLU(0.1, inplace)
    )
    (3): Conv2d_BatchNorm(
      (conv): Conv2d(512, 1024, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
      (bn): BatchNorm2d(1024, eps=1e-05, momentum=0.01, affine=True)
      (relu): LeakyReLU(0.1, inplace)
    )
    (4): Conv2d_BatchNorm(
      (conv): Conv2d(1024, 512, kernel_size=(1, 1), stride=(1, 1), bias=False)
      (bn): BatchNorm2d(512, eps=1e-05, momentum=0.01, affine=True)
      (relu): LeakyReLU(0.1, inplace)
    )
    (5): Conv2d_BatchNorm(
      (conv): Conv2d(512, 1024, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
      (bn): BatchNorm2d(1024, eps=1e-05, momentum=0.01, affine=True)
      (relu): LeakyReLU(0.1, inplace)
    )
  )
  (conv3): Sequential(
    (0): Conv2d_BatchNorm(
      (conv): Conv2d(1024, 1024, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
      (bn): BatchNorm2d(1024, eps=1e-05, momentum=0.01, affine=True)
      (relu): LeakyReLU(0.1, inplace)
    )
    (1): Conv2d_BatchNorm(
      (conv): Conv2d(1024, 1024, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
      (bn): BatchNorm2d(1024, eps=1e-05, momentum=0.01, affine=True)
      (relu): LeakyReLU(0.1, inplace)
    )
  )
  (reorg): ReorgLayer(
  )
  (conv4): Sequential(
    (0): Conv2d_BatchNorm(
      (conv): Conv2d(3072, 1024, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
      (bn): BatchNorm2d(1024, eps=1e-05, momentum=0.01, affine=True)
      (relu): LeakyReLU(0.1, inplace)
    )
  )
  (conv5): Conv2d(
    (conv): Conv2d(1024, 125, kernel_size=(1, 1), stride=(1, 1))
  )
  (global_average_pool): AvgPool2d(kernel_size=(1, 1), stride=(1, 1), padding=0, ceil_mode=False, count_include_pad=True)
)
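
For reference, a minimal sketch of how this printout fits together, assuming a yolo2-pytorch-style implementation: Conv2d_BatchNorm is just Conv2d -> BatchNorm2d -> LeakyReLU(0.1); ReorgLayer is a stride-2 space-to-depth (the YOLO v2 passthrough) that turns the 512-channel conv1s feature map (26x26 for a 416x416 input) into 2048 channels at 13x13, which is concatenated with the 1024-channel conv3 output to give the 3072 input channels of conv4; and conv5's 125 output channels are 5 anchors x (4 box coordinates + 1 objectness + 20 VOC classes). The code below mirrors the names in the printout but is a hedged reconstruction, not the original source.

import torch
import torch.nn as nn

class Conv2d_BatchNorm(nn.Module):
    # Conv -> BatchNorm2d -> LeakyReLU(0.1), matching the repeated blocks above
    # (hypothetical reconstruction; the argument names are mine).
    def __init__(self, in_ch, out_ch, ksize, stride=1):
        super().__init__()
        self.conv = nn.Conv2d(in_ch, out_ch, ksize, stride,
                              padding=ksize // 2, bias=False)
        self.bn = nn.BatchNorm2d(out_ch, momentum=0.01)
        self.relu = nn.LeakyReLU(0.1, inplace=True)

    def forward(self, x):
        return self.relu(self.bn(self.conv(x)))

def reorg(x, stride=2):
    # Space-to-depth: (B, C, H, W) -> (B, C*stride*stride, H/stride, W/stride).
    # Channel ordering may differ from the original darknet reorg layer.
    B, C, H, W = x.size()
    x = x.view(B, C, H // stride, stride, W // stride, stride)
    x = x.permute(0, 3, 5, 1, 2, 4).contiguous()
    return x.view(B, C * stride * stride, H // stride, W // stride)

if __name__ == "__main__":
    # Channel bookkeeping for a 416x416 input:
    # conv1s output is 512 x 26 x 26 (four 2x2 max-pools), conv3 output is 1024 x 13 x 13.
    conv1s_out = torch.randn(1, 512, 26, 26)
    conv3_out = torch.randn(1, 1024, 13, 13)
    passthrough = reorg(conv1s_out)                 # 1 x 2048 x 13 x 13
    fused = torch.cat([passthrough, conv3_out], 1)  # 1 x 3072 x 13 x 13 -> conv4 input
    assert fused.shape == (1, 3072, 13, 13)
    head = nn.Conv2d(1024, 125, kernel_size=1)      # conv5: 125 = 5 * (4 + 1 + 20)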

