In deep-learning network architectures, the layers fall into a few familiar categories: convolution layers, fully connected layers, ReLU layers, pooling layers, deconvolution layers, and so on. Fully convolutional networks currently show their strength in pixel-level estimation and end-to-end learning, and one layer in them is essential: the deconvolution layer, which upsamples the convolved feature map back to the spatial size of the input image. How, then, is it implemented in TensorFlow? That is the topic of this post.

1. The deconvolution function

tf.nn.conv2d_transpose(value, filter, output_shape, strides, padding='SAME', name=None)

This is TensorFlow's function for deconvolution (transposed convolution). value is the feature map from the previous layer; filter is the kernel, with shape [kernel_height, kernel_width, output_channels, input_channels] (note that the channel order is the reverse of tf.nn.conv2d); output_shape specifies the output size as [batch_size, height, width, channels]; and padding selects the boundary-padding scheme ('SAME' or 'VALID').

One point deserves special mention: the values in output_shape and strides are coupled. Given the input size and the desired output size you can derive the stride (a positive integer), or given the input size and the stride you can derive the output size. With 'SAME' padding the constraint is ceil(output_size / stride) == input_size in each spatial dimension, so for a given stride a small range of output sizes is valid; TensorFlow raises a shape error if the pair is inconsistent.
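To make the coupling concrete, here is a minimal standalone sketch (the sizes are arbitrary illustrations; the graph-mode API matches the TF 0.x era of the code below):

import tensorflow as tf

# Input: batch 1, 4x4 spatial, 8 channels.
x = tf.ones([1, 4, 4, 8], dtype=tf.float32)
# Filter layout is [height, width, output_channels, input_channels] --
# the reverse channel order compared with tf.nn.conv2d.
w = tf.Variable(tf.truncated_normal([3, 3, 16, 8], stddev=0.1))

# With 'SAME' padding, any output size satisfying
# ceil(output_size / stride) == input_size is accepted; 8 = 4 * 2 is typical.
y = tf.nn.conv2d_transpose(x, w, output_shape=[1, 8, 8, 16],
                           strides=[1, 2, 2, 1], padding='SAME')

with tf.Session() as sess:
  sess.run(tf.initialize_all_variables())
  print(y.get_shape().as_list())  # [1, 8, 8, 16]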

2. AlexNet with deconvolution layers appended

# Copyright 2015 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================

"""Timing benchmark for AlexNet inference.

To run, use:
  bazel run -c opt --config=cuda \
      third_party/tensorflow/models/image/alexnet:alexnet_benchmark

Across 100 steps on batch size = 128.

Forward pass:
Run on Tesla K40c: 145 +/- 1.5 ms / batch
Run on Titan X:     70 +/- 0.1 ms / batch

Forward-backward pass:
Run on Tesla K40c: 480 +/- 48 ms / batch
Run on Titan X:    244 +/- 30 ms / batch
"""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function

from datetime import datetime
import math
import time

from six.moves import xrange  # pylint: disable=redefined-builtin
import tensorflow as tf

FLAGS = tf.app.flags.FLAGS

tf.app.flags.DEFINE_integer('batch_size', 1,
                            """Batch size.""")
tf.app.flags.DEFINE_integer('num_batches', 100,
                            """Number of batches to run.""")
tf.app.flags.DEFINE_integer('image_width', 345,
                            """Image width (the hard-coded deconv output
                            shapes in inference() assume 460x345).""")
tf.app.flags.DEFINE_integer('image_height', 460,
                            """Image height.""")

def print_activations(t):
  print(t.op.name, ' ', t.get_shape().as_list())

def inference(images):
  """Build the AlexNet model.

  Args:
    images: Images Tensor

  Returns:
    deconv2: the output of the final deconvolution layer.
    parameters: a list of Tensors corresponding to the weights and biases of the
        AlexNet model.
  """
  parameters = []
  # conv1
  with tf.name_scope('conv1') as scope:
    kernel = tf.Variable(tf.truncated_normal([11, 11, 3, 64], dtype=tf.float32,
                                             stddev=1e-1), name='weights')
    conv = tf.nn.conv2d(images, kernel, [1, 4, 4, 1], padding='SAME')
    biases = tf.Variable(tf.constant(0.0, shape=[64], dtype=tf.float32),
                         trainable=True, name='biases')
    bias = tf.nn.bias_add(conv, biases)
    conv1 = tf.nn.relu(bias, name=scope)
    print_activations(conv1)
    parameters += [kernel, biases]

  # lrn1
  # TODO(shlens, jiayq): Add a GPU version of local response normalization.

  # pool1
  pool1 = tf.nn.max_pool(conv1,
                         ksize=[1, 3, 3, 1],
                         strides=[1, 2, 2, 1],
                         padding='VALID',
                         name='pool1')
  print_activations(pool1)

  # conv2
  with tf.name_scope('conv2') as scope:
    kernel = tf.Variable(tf.truncated_normal([5, 5, 64, 192], dtype=tf.float32,
                                             stddev=1e-1), name='weights')
    conv = tf.nn.conv2d(pool1, kernel, [1, 1, 1, 1], padding='SAME')
    biases = tf.Variable(tf.constant(0.0, shape=[192], dtype=tf.float32),
                         trainable=True, name='biases')
    bias = tf.nn.bias_add(conv, biases)
    conv2 = tf.nn.relu(bias, name=scope)
    parameters += [kernel, biases]
  print_activations(conv2)

  # pool2
  pool2 = tf.nn.max_pool(conv2,
                         ksize=[1, 3, 3, 1],
                         strides=[1, 2, 2, 1],
                         padding='VALID',
                         name='pool2')
  print_activations(pool2)

  # conv3
  with tf.name_scope('conv3') as scope:
    kernel = tf.Variable(tf.truncated_normal([3, 3, 192, 384],
                                             dtype=tf.float32,
                                             stddev=1e-1), name='weights')
    conv = tf.nn.conv2d(pool2, kernel, [1, 1, 1, 1], padding='SAME')
    biases = tf.Variable(tf.constant(0.0, shape=[384], dtype=tf.float32),
                         trainable=True, name='biases')
    bias = tf.nn.bias_add(conv, biases)
    conv3 = tf.nn.relu(bias, name=scope)
    parameters += [kernel, biases]
    print_activations(conv3)

  # conv4
  with tf.name_scope('conv4') as scope:
    kernel = tf.Variable(tf.truncated_normal([3, 3, 384, 256],
                                             dtype=tf.float32,
                                             stddev=1e-1), name='weights')
    conv = tf.nn.conv2d(conv3, kernel, [1, 1, 1, 1], padding='SAME')
    biases = tf.Variable(tf.constant(0.0, shape=[256], dtype=tf.float32),
                         trainable=True, name='biases')
    bias = tf.nn.bias_add(conv, biases)
    conv4 = tf.nn.relu(bias, name=scope)
    parameters += [kernel, biases]
    print_activations(conv4)

  # conv5
  with tf.name_scope('conv5') as scope:
    kernel = tf.Variable(tf.truncated_normal([3, 3, 256, 256],
                                             dtype=tf.float32,
                                             stddev=1e-1), name='weights')
    conv = tf.nn.conv2d(conv4, kernel, [1, 1, 1, 1], padding='SAME')
    biases = tf.Variable(tf.constant(0.0, shape=[256], dtype=tf.float32),
                         trainable=True, name='biases')
    bias = tf.nn.bias_add(conv, biases)
    conv5 = tf.nn.relu(bias, name=scope)
    parameters += [kernel, biases]
    print_activations(conv5)

  # pool5
  pool5 = tf.nn.max_pool(conv5,
                         ksize=[1, 3, 3, 1],
                         strides=[1, 2, 2, 1],
                         padding='VALID',
                         name='pool5')
  print_activations(pool5)

  # conv6
  with tf.name_scope('conv6') as scope:
    kernel = tf.Variable(tf.truncated_normal([3, 3, 256, 1],
                                             dtype=tf.float32,
                                             stddev=1e-1), name='weights')
    conv = tf.nn.conv2d(pool5, kernel, [1, 1, 1, 1], padding='SAME')
    biases = tf.Variable(tf.constant(0.0, shape=[1], dtype=tf.float32),
                         trainable=True, name='biases')
    bias = tf.nn.bias_add(conv, biases)
    conv6 = tf.nn.relu(bias, name=scope)
    parameters += [kernel, biases]
    print_activations(conv6)

  # deconv1
  with tf.name_scope('deconv1') as scope:
    # 11x11 kernel with 1 input and 1 output channel; a stride of 10
    # upsamples conv6 (13x10 for the default 460x345 input) to 130x100.
    wt = tf.Variable(tf.truncated_normal([11, 11, 1, 1], dtype=tf.float32,
                                         stddev=1e-1), name='weights')
    deconv1 = tf.nn.conv2d_transpose(conv6, wt,
                                     [FLAGS.batch_size, 130, 100, 1],
                                     [1, 10, 10, 1], padding='SAME')
    print_activations(deconv1)

  # deconv2
  with tf.name_scope('deconv2') as scope:
    # A second deconvolution with stride 2 doubles 130x100 to 260x200.
    wt = tf.Variable(tf.truncated_normal([11, 11, 1, 1], dtype=tf.float32,
                                         stddev=1e-1), name='weights')
    deconv2 = tf.nn.conv2d_transpose(deconv1, wt,
                                     [FLAGS.batch_size, 260, 200, 1],
                                     [1, 2, 2, 1], padding='SAME')
    print_activations(deconv2)

  return deconv2, parameters

def time_tensorflow_run(session, target, info_string):
  """Run the computation to obtain the target tensor and print timing stats.

  Args:
    session: the TensorFlow session to run the computation under.
    target: the target Tensor that is passed to the session's run() function.
    info_string: a string summarizing this run, to be printed with the stats.

  Returns:
    None
  """
  num_steps_burn_in = 10
  total_duration = 0.0
  total_duration_squared = 0.0
  for i in xrange(FLAGS.num_batches + num_steps_burn_in):
    start_time = time.time()
    _ = session.run(target)
    duration = time.time() - start_time
    if i >= num_steps_burn_in:
      if not i % 10:
        print ('%s: step %d, duration = %.3f' %
               (datetime.now(), i - num_steps_burn_in, duration))
      total_duration += duration
      total_duration_squared += duration * duration
  mn = total_duration / FLAGS.num_batches
  vr = total_duration_squared / FLAGS.num_batches - mn * mn
  sd = math.sqrt(vr)
  print ('%s: %s across %d steps, %.3f +/- %.3f sec / batch' %
         (datetime.now(), info_string, FLAGS.num_batches, mn, sd))

def run_benchmark():
  """Run the benchmark on AlexNet."""
  with tf.Graph().as_default():
    # Generate some dummy images; a timing benchmark needs no real dataset.
    images = tf.Variable(tf.random_normal([FLAGS.batch_size,
                                           FLAGS.image_height,
                                           FLAGS.image_width, 3],
                                          dtype=tf.float32,
                                          stddev=1e-1))

    # Build a Graph that computes the deconvolution output from the
    # inference model.
    output, parameters = inference(images)

    # Build an initialization operation.
    init = tf.initialize_all_variables()

    # Start running operations on the Graph.
    config = tf.ConfigProto()
    config.gpu_options.allocator_type = 'BFC'
    sess = tf.Session(config=config)
    sess.run(init)

    # Run the forward benchmark.
    time_tensorflow_run(sess, output, "Forward")

    # Add a simple objective so we can calculate the backward pass.
    objective = tf.nn.l2_loss(output)
    # Compute the gradient with respect to all the parameters.
    grad = tf.gradients(objective, parameters)
    # Run the backward benchmark.
    time_tensorflow_run(sess, grad, "Forward-backward")

def main(_):
  run_benchmark()

if __name__ == '__main__':
  tf.app.run()
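One design note on the deconv layers above: their kernels start as random noise, which is fine for a timing benchmark, but pixel-level prediction networks such as FCN typically initialize deconvolution kernels as bilinear-upsampling filters. A minimal sketch of such an initializer, assuming nothing beyond numpy and the filter layout described in section 1 (bilinear_upsample_filter is a name introduced here, not part of the original code):

import numpy as np
import tensorflow as tf

def bilinear_upsample_filter(kernel_size, out_channels, in_channels):
  # Build the 2D bilinear interpolation kernel.
  factor = (kernel_size + 1) // 2
  center = factor - 1 if kernel_size % 2 == 1 else factor - 0.5
  og = np.ogrid[:kernel_size, :kernel_size]
  filt = ((1 - abs(og[0] - center) / factor) *
          (1 - abs(og[1] - center) / factor))
  # conv2d_transpose filter layout: [height, width, out_channels, in_channels];
  # copy the bilinear kernel onto each matching channel pair.
  weights = np.zeros((kernel_size, kernel_size, out_channels, in_channels),
                     dtype=np.float32)
  for i in range(min(out_channels, in_channels)):
    weights[:, :, i, i] = filt
  return tf.constant(weights)

# e.g. replace the random wt in deconv1 with:
# wt = tf.Variable(bilinear_upsample_filter(11, 1, 1), name='weights')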

3. Running results
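With the default flags (batch_size = 1, a 460x345x3 input), the shapes printed by print_activations can be worked out by hand:

conv1:   [1, 115, 87, 64]   (SAME padding, stride 4: ceil(460/4) x ceil(345/4))
pool1:   [1, 57, 43, 64]    (VALID, 3x3 window, stride 2)
conv2:   [1, 57, 43, 192]
pool2:   [1, 28, 21, 192]
conv3:   [1, 28, 21, 384]
conv4:   [1, 28, 21, 256]
conv5:   [1, 28, 21, 256]
pool5:   [1, 13, 10, 256]
conv6:   [1, 13, 10, 1]
deconv1: [1, 130, 100, 1]   (stride 10: ceil(130/10) = 13, ceil(100/10) = 10)
deconv2: [1, 260, 200, 1]   (stride 2: ceil(260/2) = 130, ceil(200/2) = 100)

The last two lines confirm that the hard-coded output_shape values in deconv1 and deconv2 are consistent with their strides.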

Reference URLs:

https://www.tensorflow.org/versions/r0.9/api_docs/python/nn.html#convolution

http://cvlab.postech.ac.kr/research/deconvnet/
