Densenet-Tensorflow
While looking for a DenseNet implementation, I found this clearly structured and complete network code, and I am keeping a backup of it here.
https://github.com/taki0112/Densenet-Tensorflow
Densenet-Tensorflow
Tensorflow implementation of Densenet using Cifar10, MNIST
- The code that implements this paper (Densely Connected Convolutional Networks) is Densenet.py
- There is a slight difference: I used AdamOptimizer (a rough sketch of this swap appears just below)
If you want to see the original author's code or other implementations, please refer to this link
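As for the optimizer difference mentioned above, here is a rough TF 1.x sketch of what that swap might look like. It is an illustration, not the repo's exact training code; `learning_rate` and `cost` are assumed to be defined elsewhere.

```python
# Paper setting, roughly: SGD with Nesterov momentum
# optimizer = tf.train.MomentumOptimizer(learning_rate=learning_rate,
#                                        momentum=0.9, use_nesterov=True)

# Choice used here, as noted above: Adam
optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate, epsilon=1e-8)
train_op = optimizer.minimize(cost)
```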
Requirements
- Tensorflow 1.x
- Python 3.x
- tflearn (install tflearn if you want an easy way to do global average pooling; however, I implemented it with tf.layers, so tflearn is optional)
Issue
- I used tf.contrib.layers.batch_norm
```python
import tensorflow as tf
from tensorflow.contrib.framework import arg_scope
from tensorflow.contrib.layers import batch_norm

def Batch_Normalization(x, training, scope):
    with arg_scope([batch_norm],
                   scope=scope,
                   updates_collections=None,
                   decay=0.9,
                   center=True,
                   scale=True,
                   zero_debias_moving_mean=True):
        return tf.cond(training,
                       lambda: batch_norm(inputs=x, is_training=training, reuse=None),
                       lambda: batch_norm(inputs=x, is_training=training, reuse=True))
```
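A minimal usage sketch (the placeholder names below are just for illustration): `training` is a boolean tensor, so the same graph can switch between training and inference statistics via `feed_dict`.

```python
training_flag = tf.placeholder(tf.bool)                  # True while training, False at test time
inputs = tf.placeholder(tf.float32, [None, 32, 32, 24])
normalized = Batch_Normalization(inputs, training=training_flag, scope='example_batch')

# later, inside a session:
# sess.run(normalized, feed_dict={inputs: batch, training_flag: True})
```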
- If you do not have enough GPU memory, please edit the session setup:
```python
# NO
with tf.Session() as sess:
    ...

# OK
with tf.Session(config=tf.ConfigProto(allow_soft_placement=True)) as sess:
    ...
```
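Beyond soft placement, a common TF 1.x option when GPU memory is tight (a general suggestion, not something this repo necessarily uses) is to let the session allocate memory on demand:

```python
config = tf.ConfigProto(allow_soft_placement=True)
config.gpu_options.allow_growth = True   # grab GPU memory as needed instead of all at once
with tf.Session(config=config) as sess:
    pass  # build / run the graph here
```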
Idea
What is the "Global Average Pooling" ?
```python
import numpy as np

def Global_Average_Pooling(x, stride=1):
    # Pool over the entire spatial extent of the feature map.
    width = np.shape(x)[1]
    height = np.shape(x)[2]
    pool_size = [width, height]
    # The stride value does not matter: the window already covers the whole map.
    return tf.layers.average_pooling2d(inputs=x, pool_size=pool_size, strides=stride)
```
If you use tflearn, please refer to this link
```python
def Global_Average_Pooling(x):
    return tflearn.layers.conv.global_avg_pool(x, name='Global_avg_pooling')
```
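Either way, global average pooling collapses each channel's feature map to a single value: a `[N, H, W, C]` tensor becomes `[N, 1, 1, C]` with the `tf.layers` version (the tflearn version squeezes it to `[N, C]`). A quick shape check, as an illustrative sketch rather than code from the repo:

```python
x = tf.ones([4, 8, 8, 64])
gap = Global_Average_Pooling(x)   # tf.layers version defined above
print(gap.shape)                  # expected: (4, 1, 1, 64)
```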
What is the "Dense Connectivity" ?
What is the "Densenet Architecture" ?
```python
def Dense_net(self, input_x):
    x = conv_layer(input_x, filter=2 * self.filters, kernel=[7,7], stride=2, layer_name='conv0')
    x = Max_Pooling(x, pool_size=[3,3], stride=2)

    x = self.dense_block(input_x=x, nb_layers=6, layer_name='dense_1')
    x = self.transition_layer(x, scope='trans_1')

    x = self.dense_block(input_x=x, nb_layers=12, layer_name='dense_2')
    x = self.transition_layer(x, scope='trans_2')

    x = self.dense_block(input_x=x, nb_layers=48, layer_name='dense_3')
    x = self.transition_layer(x, scope='trans_3')

    x = self.dense_block(input_x=x, nb_layers=32, layer_name='dense_final')

    x = Batch_Normalization(x, training=self.training, scope='linear_batch')
    x = Relu(x)
    x = Global_Average_Pooling(x)
    x = Linear(x)

    return x
```
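`Dense_net` calls a few thin wrappers that are not shown in this post. Sketches of what they typically look like when built on `tf.layers` (treat these as assumptions, not the repo's exact definitions; `class_num` is the number of output classes and is assumed to be defined elsewhere):

```python
def conv_layer(input, filter, kernel, stride=1, layer_name="conv"):
    with tf.name_scope(layer_name):
        return tf.layers.conv2d(inputs=input, filters=filter, kernel_size=kernel,
                                strides=stride, padding='SAME', use_bias=False)

def Max_Pooling(x, pool_size=[3, 3], stride=2, padding='VALID'):
    return tf.layers.max_pooling2d(inputs=x, pool_size=pool_size, strides=stride, padding=padding)

def Relu(x):
    return tf.nn.relu(x)

def Linear(x):
    # Flatten the pooled [N, 1, 1, C] tensor and project to class scores.
    return tf.layers.dense(inputs=tf.contrib.layers.flatten(x), units=class_num, name='linear')
```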
What is the "Dense Block" ?
```python
def dense_block(self, input_x, nb_layers, layer_name):
    with tf.name_scope(layer_name):
        layers_concat = list()
        layers_concat.append(input_x)

        x = self.bottleneck_layer(input_x, scope=layer_name + '_bottleN_' + str(0))
        layers_concat.append(x)

        for i in range(nb_layers - 1):
            x = Concatenation(layers_concat)
            x = self.bottleneck_layer(x, scope=layer_name + '_bottleN_' + str(i + 1))
            layers_concat.append(x)

        return x
```
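Each bottleneck emits `self.filters` (the growth rate k) channels, while the concatenation fed to bottleneck i stacks the block input together with all i previous bottleneck outputs, so the input width grows linearly through the block. A tiny arithmetic illustration (the numbers are assumptions, not values from the repo):

```python
k = 24          # growth rate (self.filters), assumed
c0 = 48         # channels entering the block, assumed
nb_layers = 6
for i in range(1, nb_layers):
    # bottleneck i sees the block input plus i previous bottleneck outputs
    print('bottleneck %d input channels: %d' % (i, c0 + i * k))
```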
What is the "Bottleneck Layer" ?
```python
def bottleneck_layer(self, x, scope):
    with tf.name_scope(scope):
        x = Batch_Normalization(x, training=self.training, scope=scope+'_batch1')
        x = Relu(x)
        x = conv_layer(x, filter=4 * self.filters, kernel=[1,1], layer_name=scope+'_conv1')
        x = Drop_out(x, rate=dropout_rate, training=self.training)

        x = Batch_Normalization(x, training=self.training, scope=scope+'_batch2')
        x = Relu(x)
        x = conv_layer(x, filter=self.filters, kernel=[3,3], layer_name=scope+'_conv2')
        x = Drop_out(x, rate=dropout_rate, training=self.training)

        return x
```
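`Drop_out` is presumably another small wrapper, here sketched over `tf.layers.dropout` (an assumption; `dropout_rate` is a hyperparameter defined elsewhere, e.g. 0.2):

```python
def Drop_out(x, rate, training):
    # Dropout is only applied when the training tensor evaluates to True.
    return tf.layers.dropout(inputs=x, rate=rate, training=training)
```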
What is the "Transition Layer" ?
```python
def transition_layer(self, x, scope):
    with tf.name_scope(scope):
        x = Batch_Normalization(x, training=self.training, scope=scope+'_batch1')
        x = Relu(x)
        x = conv_layer(x, filter=self.filters, kernel=[1,1], layer_name=scope+'_conv1')
        x = Drop_out(x, rate=dropout_rate, training=self.training)
        x = Average_pooling(x, pool_size=[2,2], stride=2)

        return x
```
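Likewise, `Average_pooling` is presumably a thin wrapper over `tf.layers.average_pooling2d` (a sketch under that assumption):

```python
def Average_pooling(x, pool_size=[2, 2], stride=2, padding='VALID'):
    return tf.layers.average_pooling2d(inputs=x, pool_size=pool_size, strides=stride, padding=padding)
```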
Compare Structure (CNN, ResNet, DenseNet)
Results
- (MNIST) The highest test accuracy is 99.2% (This result does not use dropout)
- The number of dense block layers is fixed to 4
```python
for i in range(self.nb_blocks):
    # original : 6 -> 12 -> 48
    x = self.dense_block(input_x=x, nb_layers=4, layer_name='dense_' + str(i))
    x = self.transition_layer(x, scope='trans_' + str(i))
```
CIFAR-10
CIFAR-100
Image Net
Related works
References
Author
Junho Kim