Building a Complex Network from Scratch (Using DenseNet as an Example)

DenseNet is a convolutional neural network with dense connections: within each dense block, every layer is directly connected to all subsequent layers. Each layer's input is the concatenation of the feature maps produced by all preceding layers, and its own feature maps are in turn passed as input to every later layer.

DenseNet builds on ResNet by proposing an improved shortcut scheme. Dense connections not only make the learned features more robust, they also lead to faster convergence.

Its weak points are GPU memory usage and computational cost; further optimization from the community is needed before it can be deployed widely.
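
Before turning to the slim implementation, here is a minimal sketch of the dense-connectivity idea itself. The shapes and growth rate are made up for illustration, and it uses plain tf.layers rather than the slim code discussed below:

import tensorflow as tf

x = tf.placeholder(tf.float32, [None, 32, 32, 16])  # hypothetical input feature map
features = [x]
for _ in range(3):
    # each new layer consumes the concatenation of ALL previous feature maps
    combined = tf.concat(features, axis=3)
    new_maps = tf.layers.conv2d(combined, filters=12, kernel_size=3,
                                padding='same')  # growth_rate = 12 new maps per layer
    features.append(new_maps)

output = tf.concat(features, axis=3)  # 16 + 3*12 = 52 channels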

We will build the network with the slim framework, walking through the official slim DenseNet code.

"""Contains the definition of the DenseNet architecture.

As described in https://arxiv.org/abs/1608.06993.

  Densely Connected Convolutional Networks
Gao Huang, Zhuang Liu, Kilian Q. Weinberger, Laurens van der Maaten
"""

With that background, let's build the overall network skeleton.

The top-level DenseNet network is defined by the following code:

def densenet(inputs,
             num_classes=1000,
             reduction=None,
             growth_rate=None,
             num_filters=None,
             num_layers=None,
             dropout_rate=None,
             data_format='NHWC',
             is_training=True,
             reuse=None,
             scope=None):
  assert reduction is not None
  assert growth_rate is not None
  assert num_filters is not None
  assert num_layers is not None

  compression = 1.0 - reduction
  num_dense_blocks = len(num_layers)

  if data_format == 'NCHW':
    inputs = tf.transpose(inputs, [0, 3, 1, 2])

  with tf.variable_scope(scope, 'densenetxxx', [inputs, num_classes],
                         reuse=reuse) as sc:
    end_points_collection = sc.name + '_end_points'
    with slim.arg_scope([slim.batch_norm, slim.dropout],
                        is_training=is_training), \
         slim.arg_scope([slim.conv2d, _conv, _conv_block,
                         _dense_block, _transition_block],
                        outputs_collections=end_points_collection), \
         slim.arg_scope([_conv], dropout_rate=dropout_rate):
      net = inputs

      # initial convolution
      net = slim.conv2d(net, num_filters, 7, stride=2, scope='conv1')
      net = slim.batch_norm(net)
      net = tf.nn.relu(net)
      net = slim.max_pool2d(net, 3, stride=2, padding='SAME')

      # blocks
      for i in range(num_dense_blocks - 1):
        # dense blocks
        net, num_filters = _dense_block(net, num_layers[i], num_filters,
                                        growth_rate,
                                        scope='dense_block' + str(i+1))

        # Add transition_block
        net, num_filters = _transition_block(net, num_filters,
                                             compression=compression,
                                             scope='transition_block' + str(i+1))

      net, num_filters = _dense_block(
          net, num_layers[-1], num_filters,
          growth_rate,
          scope='dense_block' + str(num_dense_blocks))

      # final blocks
      with tf.variable_scope('final_block', [inputs]):
        net = slim.batch_norm(net)
        net = tf.nn.relu(net)
        net = _global_avg_pool2d(net, scope='global_avg_pool')

      net = slim.conv2d(net, num_classes, 1,
                        biases_initializer=tf.zeros_initializer(),
                        scope='logits')

      end_points = slim.utils.convert_collection_to_dict(
          end_points_collection)

      if num_classes is not None:
        end_points['predictions'] = slim.softmax(net, scope='predictions')

      return net, end_points
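
Once the helper blocks defined below are in place, a quick way to sanity-check this skeleton is to enumerate the end_points collection it returns. This is a debugging sketch; the hyperparameters here are illustrative, not taken from the paper:

images = tf.placeholder(tf.float32, [None, 224, 224, 3])
net, end_points = densenet(images, num_classes=10, reduction=0.5,
                           growth_rate=12, num_filters=24,
                           num_layers=[6, 6, 6], scope='demo')
for name, tensor in sorted(end_points.items()):
    print(name, tensor.shape)  # scope name -> intermediate tensor shape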

Looking at the network structure in the paper, DenseNet is composed of four parts:

  • initial convolution
  • dense blocks
  • transition_block
  • final blocks

The initial convolution stage is built from four ops:

conv2d
batch_norm
relu
max_pool2d

The defaults for these ops, and for the custom blocks, are wired up at the start of densenet by this arg_scope stack:
with slim.arg_scope([slim.batch_norm, slim.dropout],
                    is_training=is_training), \
     slim.arg_scope([slim.conv2d, _conv, _conv_block,
                     _dense_block, _transition_block],
                    outputs_collections=end_points_collection), \
     slim.arg_scope([_conv], dropout_rate=dropout_rate):
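
If slim.arg_scope is unfamiliar: it injects default keyword arguments into the listed ops (including functions decorated with @slim.add_arg_scope), and explicitly passed arguments still win. A toy illustration, not part of the DenseNet code:

x = tf.placeholder(tf.float32, [None, 28, 28, 3])  # hypothetical input
with slim.arg_scope([slim.conv2d], padding='SAME', activation_fn=None):
    a = slim.conv2d(x, 32, 3)                    # picks up both scope defaults
    b = slim.conv2d(x, 32, 3, padding='VALID')   # an explicit argument overrides the default
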
Next, we define _dense_block:
@slim.add_arg_scope
def _dense_block(inputs, num_layers, num_filters, growth_rate,
                 grow_num_filters=True, scope=None, outputs_collections=None):
  with tf.variable_scope(scope, 'dense_blockx', [inputs]) as sc:
    net = inputs
    for i in range(num_layers):
      branch = i + 1
      net = _conv_block(net, growth_rate, scope='conv_block' + str(branch))

      if grow_num_filters:
        num_filters += growth_rate

    net = slim.utils.collect_named_outputs(outputs_collections, sc.name, net)

  return net, num_filters
_dense_block stacks a configurable number of _conv_blocks; for DenseNet-121 the four dense blocks contain [6, 12, 24, 16] of them. Each _conv_block is a 1×1 convolution followed by a 3×3 convolution, and its output is concatenated with the block input — this concatenation is the dense connection:
@slim.add_arg_scope
def _conv_block(inputs, num_filters, data_format='NHWC', scope=None, outputs_collections=None):
  with tf.variable_scope(scope, 'conv_blockx', [inputs]) as sc:
    net = inputs
    net = _conv(net, num_filters*4, 1, scope='x1')
    net = _conv(net, num_filters, 3, scope='x2')
    if data_format == 'NHWC':
      # concatenate along the channel axis (the last dimension for NHWC)
      net = tf.concat([inputs, net], axis=3)
    else:  # "NCHW"
      net = tf.concat([inputs, net], axis=1)

    net = slim.utils.collect_named_outputs(outputs_collections, sc.name, net)

  return net
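
A quick shape check under assumed inputs: with growth_rate=32 and a 64-channel input, the 1×1 conv widens to 4×32=128 channels, the 3×3 conv narrows back to 32, and the concat yields 64+32=96 output channels:

x = tf.placeholder(tf.float32, [None, 56, 56, 64])  # hypothetical feature map
y = _conv_block(x, 32, scope='demo_block')
print(y.shape)  # (?, 56, 56, 96)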

Next, we build _transition_block:

@slim.add_arg_scope
def _transition_block(inputs, num_filters, compression=1.0,
                      scope=None, outputs_collections=None):
  num_filters = int(num_filters * compression)
  with tf.variable_scope(scope, 'transition_blockx', [inputs]) as sc:
    net = inputs
    net = _conv(net, num_filters, 1, scope='blk')

    net = slim.avg_pool2d(net, 2)

    net = slim.utils.collect_named_outputs(outputs_collections, sc.name, net)

  return net, num_filters

This block uses a 1×1 convolution to shrink the channel count by the compression factor, followed by 2×2 average pooling to halve the spatial resolution.
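
To make the bookkeeping concrete, here is a standalone trace (plain Python, using the DenseNet-121 settings growth_rate=32, num_filters=64, reduction=0.5) of how num_filters evolves across the dense and transition blocks:

growth_rate, num_filters, compression = 32, 64, 1.0 - 0.5
for i, n in enumerate([6, 12, 24, 16], start=1):
    num_filters += n * growth_rate            # each _conv_block adds growth_rate channels
    print('dense_block%d      -> %4d channels' % (i, num_filters))
    if i < 4:                                 # no transition after the last block
        num_filters = int(num_filters * compression)
        print('transition_block%d -> %4d channels' % (i, num_filters))

This prints 256/128, 512/256, 1024/512, and finally 1024 — matching the 1024-dimensional feature that DenseNet-121 feeds into its classifier.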

For the final stage, a 1×1 convolution maps the output channels to the number of classes:

# final blocks
with tf.variable_scope('final_block', [inputs]):
  net = slim.batch_norm(net)
  net = tf.nn.relu(net)
  net = _global_avg_pool2d(net, scope='global_avg_pool')

net = slim.conv2d(net, num_classes, 1,
                  biases_initializer=tf.zeros_initializer(),
                  scope='logits')
net = tf.contrib.layers.flatten(net)
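
Why a 1×1 convolution rather than a fully connected layer? After global average pooling the tensor has shape [N, 1, 1, C], and on that shape a 1×1 convolution is exactly a fully connected layer; flatten then squeezes the result to [N, num_classes]. A sanity-check sketch with assumed shapes:

pooled = tf.placeholder(tf.float32, [None, 1, 1, 1024])    # hypothetical pooled features
logits = slim.conv2d(pooled, 1000, 1, activation_fn=None)  # [None, 1, 1, 1000]
logits = tf.contrib.layers.flatten(logits)                 # [None, 1000]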

That covers every DenseNet building block.

Here is the complete code:

# Copyright 2016 pudae. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
"""Contains the definition of the DenseNet architecture. As described in https://arxiv.org/abs/1608.06993. Densely Connected Convolutional Networks
Gao Huang, Zhuang Liu, Kilian Q. Weinberger, Laurens van der Maaten
"""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function import tensorflow as tf slim = tf.contrib.slim @slim.add_arg_scope
def _global_avg_pool2d(inputs, data_format='NHWC', scope=None, outputs_collections=None):
with tf.variable_scope(scope, 'xx', [inputs]) as sc:
axis = [1, 2] if data_format == 'NHWC' else [2, 3]
net = tf.reduce_mean(inputs, axis=axis, keepdims=True)
net = slim.utils.collect_named_outputs(outputs_collections, sc.name, net)
return net @slim.add_arg_scope
def _conv(inputs, num_filters, kernel_size, stride=1, dropout_rate=None,
scope=None, outputs_collections=None):
with tf.variable_scope(scope, 'xx', [inputs]) as sc:
net = slim.batch_norm(inputs)
net = tf.nn.relu(net)
net = slim.conv2d(net, num_filters, kernel_size) if dropout_rate:
net = tf.nn.dropout(net) net = slim.utils.collect_named_outputs(outputs_collections, sc.name, net) return net @slim.add_arg_scope
def _conv_block(inputs, num_filters, data_format='NHWC', scope=None, outputs_collections=None):
with tf.variable_scope(scope, 'conv_blockx', [inputs]) as sc:
net = inputs
net = _conv(net, num_filters*4, 1, scope='x1')
net = _conv(net, num_filters, 3, scope='x2')
if data_format == 'NHWC':
net = tf.concat([inputs, net], axis=3)
else: # "NCHW"
net = tf.concat([inputs, net], axis=1) net = slim.utils.collect_named_outputs(outputs_collections, sc.name, net) return net @slim.add_arg_scope
def _dense_block(inputs, num_layers, num_filters, growth_rate,
grow_num_filters=True, scope=None, outputs_collections=None): with tf.variable_scope(scope, 'dense_blockx', [inputs]) as sc:
net = inputs
for i in range(num_layers):
branch = i + 1
net = _conv_block(net, growth_rate, scope='conv_block'+str(branch)) if grow_num_filters:
num_filters += growth_rate net = slim.utils.collect_named_outputs(outputs_collections, sc.name, net) return net, num_filters @slim.add_arg_scope
def _transition_block(inputs, num_filters, compression=1.0,
scope=None, outputs_collections=None): num_filters = int(num_filters * compression)
with tf.variable_scope(scope, 'transition_blockx', [inputs]) as sc:
net = inputs
net = _conv(net, num_filters, 1, scope='blk') net = slim.avg_pool2d(net, 2) net = slim.utils.collect_named_outputs(outputs_collections, sc.name, net) return net, num_filters def densenet(inputs,
num_classes=1000,
reduction=None,
growth_rate=None,
num_filters=None,
num_layers=None,
dropout_rate=None,
data_format='NHWC',
is_training=True,
reuse=None,
scope=None):
assert reduction is not None
assert growth_rate is not None
assert num_filters is not None
assert num_layers is not None compression = 1.0 - reduction
num_dense_blocks = len(num_layers) if data_format == 'NCHW':
inputs = tf.transpose(inputs, [0, 3, 1, 2]) with tf.variable_scope(scope, 'densenetxxx', [inputs, num_classes],
reuse=reuse) as sc:
end_points_collection = sc.name + '_end_points'
with slim.arg_scope([slim.batch_norm, slim.dropout],
is_training=is_training), \
slim.arg_scope([slim.conv2d, _conv, _conv_block,
_dense_block, _transition_block],
outputs_collections=end_points_collection), \
slim.arg_scope([_conv], dropout_rate=dropout_rate):
net = inputs # initial convolution
net = slim.conv2d(net, num_filters, 7, stride=2, scope='conv1')
net = slim.batch_norm(net)
net = tf.nn.relu(net)
net = slim.max_pool2d(net, 3, stride=2, padding='SAME') # blocks
for i in range(num_dense_blocks - 1):
# dense blocks
net, num_filters = _dense_block(net, num_layers[i], num_filters,
growth_rate,
scope='dense_block' + str(i+1)) # Add transition_block
net, num_filters = _transition_block(net, num_filters,
compression=compression,
scope='transition_block' + str(i+1)) net, num_filters = _dense_block(
net, num_layers[-1], num_filters,
growth_rate,
scope='dense_block' + str(num_dense_blocks)) # final blocks
with tf.variable_scope('final_block', [inputs]):
net = slim.batch_norm(net)
net = tf.nn.relu(net)
net = _global_avg_pool2d(net, scope='global_avg_pool') net = slim.conv2d(net, num_classes, 1,
biases_initializer=tf.zeros_initializer(),
scope='logits')
net = tf.contrib.layers.flatten(net)
# print(net)
end_points = slim.utils.convert_collection_to_dict(
end_points_collection) if num_classes is not None:
end_points['predictions'] = slim.softmax(net, scope='predictions') return net, end_points def densenet121(inputs, num_classes=1000, data_format='NHWC', is_training=True, reuse=None):
return densenet(inputs,
num_classes=num_classes,
reduction=0.5,
growth_rate=32,
num_filters=64,
num_layers=[6,12,24,16],
data_format=data_format,
is_training=is_training,
reuse=reuse,
scope='densenet121')
densenet121.default_image_size = 224 def densenet161(inputs, num_classes=1000, data_format='NHWC', is_training=True, reuse=None):
return densenet(inputs,
num_classes=num_classes,
reduction=0.5,
growth_rate=48,
num_filters=96,
num_layers=[6,12,36,24],
data_format=data_format,
is_training=is_training,
reuse=reuse,
scope='densenet161')
densenet161.default_image_size = 224 def densenet169(inputs, num_classes=1000, data_format='NHWC', is_training=True, reuse=None):
return densenet(inputs,
num_classes=num_classes,
reduction=0.5,
growth_rate=32,
num_filters=64,
num_layers=[6,12,32,32],
data_format=data_format,
is_training=is_training,
reuse=reuse,
scope='densenet169')
densenet169.default_image_size = 224 def densenet_arg_scope(weight_decay=1e-4,
batch_norm_decay=0.99,
batch_norm_epsilon=1.1e-5,
data_format='NHWC'):
with slim.arg_scope([slim.conv2d, slim.batch_norm, slim.avg_pool2d, slim.max_pool2d,
_conv_block, _global_avg_pool2d],
data_format=data_format):
with slim.arg_scope([slim.conv2d],
weights_regularizer=slim.l2_regularizer(weight_decay),
activation_fn=None,
biases_initializer=None):
with slim.arg_scope([slim.batch_norm],
scale=True,
decay=batch_norm_decay,
epsilon=batch_norm_epsilon) as scope:
return scope
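
Finally, a hedged usage sketch showing how these pieces are typically wired together (variable names are illustrative, and in practice a pretrained checkpoint would be restored instead of random initialization):

import numpy as np

images = tf.placeholder(tf.float32, [None, 224, 224, 3])
with slim.arg_scope(densenet_arg_scope()):
    logits, end_points = densenet121(images, num_classes=1000, is_training=False)
probabilities = end_points['predictions']

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    batch = np.random.rand(2, 224, 224, 3).astype('float32')  # dummy input batch
    print(sess.run(probabilities, feed_dict={images: batch}).shape)  # (2, 1000)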

