batch_norm implementation under the deep learning framework Theano (from the reinforcement learning framework rllab)
# encoding: utf-8
import lasagne.layers as L
import lasagne
import theano
import theano.tensor as TT


class ParamLayer(L.Layer):
    # Layer that ignores the values of its input and instead outputs a learned
    # parameter vector of length `num_units`, tiled across all leading (batch)
    # dimensions of the input.
    def __init__(self, incoming, num_units, param=lasagne.init.Constant(0.),
                 trainable=True, **kwargs):
        super(ParamLayer, self).__init__(incoming, **kwargs)
        self.num_units = num_units
        self.param = self.add_param(
            param,
            (num_units,),
            name="param",
            trainable=trainable
        )

    def get_output_shape_for(self, input_shape):
        return input_shape[:-1] + (self.num_units,)

    def get_output_for(self, input, **kwargs):
        ndim = input.ndim
        # Reshape the parameter to (1, ..., 1, num_units) and tile it to match
        # the leading dimensions of the input.
        reshaped_param = TT.reshape(self.param, (1,) * (ndim - 1) + (self.num_units,))
        tile_arg = TT.concatenate([input.shape[:-1], [1]])
        tiled = TT.tile(reshaped_param, tile_arg, ndim=ndim)
        return tiled
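
def _demo_param_layer():
    # Usage sketch, not part of the original rllab code: the function name and
    # all shapes below are illustrative assumptions. It shows ParamLayer
    # broadcasting a learned vector over the batch dimension.
    import numpy as np
    l_in = L.InputLayer(shape=(None, 8))
    l_param = ParamLayer(l_in, num_units=4, param=lasagne.init.Constant(0.5))
    x = TT.matrix('x')
    out = L.get_output(l_param, x)
    f = theano.function([x], out)
    # Every row of the output is the same 4-dimensional parameter vector.
    print(f(np.zeros((3, 8), dtype=theano.config.floatX)))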

class OpLayer(L.MergeLayer):
    # Merge layer that applies an arbitrary Theano expression `op` to the
    # outputs of `incoming` and any `extras` layers; `shape_op` computes the
    # output shape from the corresponding input shapes.
    def __init__(self, incoming, op,
                 shape_op=lambda x: x, extras=None, **kwargs):
        if extras is None:
            extras = []
        incomings = [incoming] + extras
        super(OpLayer, self).__init__(incomings, **kwargs)
        self.op = op
        self.shape_op = shape_op
        self.incomings = incomings

    def get_output_shape_for(self, input_shapes):
        return self.shape_op(*input_shapes)

    def get_output_for(self, inputs, **kwargs):
        return self.op(*inputs)
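
def _demo_op_layer():
    # Usage sketch, not part of the original rllab code: layer names and shapes
    # here are illustrative assumptions. It combines two input layers with an
    # elementwise product via OpLayer.
    import numpy as np
    l_a = L.InputLayer(shape=(None, 5))
    l_b = L.InputLayer(shape=(None, 5))
    l_prod = OpLayer(l_a, op=lambda a, b: a * b,
                     shape_op=lambda sa, sb: sa, extras=[l_b])
    a = TT.matrix('a')
    b = TT.matrix('b')
    out = L.get_output(l_prod, {l_a: a, l_b: b})
    f = theano.function([a, b], out)
    xa = np.ones((2, 5), dtype=theano.config.floatX) * 2
    xb = np.ones((2, 5), dtype=theano.config.floatX) * 3
    print(f(xa, xb))  # every entry is 6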

class BatchNormLayer(L.Layer):
    """
    lasagne.layers.BatchNormLayer(incoming, axes='auto', epsilon=1e-4,
    alpha=0.1, mode='low_mem',
    beta=lasagne.init.Constant(0), gamma=lasagne.init.Constant(1),
    mean=lasagne.init.Constant(0), std=lasagne.init.Constant(1), **kwargs)

    Batch Normalization

    This layer implements batch normalization of its inputs, following [1]_:

    .. math::
        y = \\frac{x - \\mu}{\\sqrt{\\sigma^2 + \\epsilon}} \\gamma + \\beta

    That is, the input is normalized to zero mean and unit variance, and then
    linearly transformed. The crucial part is that the mean and variance are
    computed across the batch dimension, i.e., over examples, not per example.

    During training, :math:`\\mu` and :math:`\\sigma^2` are defined to be the
    mean and variance of the current input mini-batch :math:`x`, and during
    testing, they are replaced with average statistics over the training
    data. Consequently, this layer has four stored parameters: :math:`\\beta`,
    :math:`\\gamma`, and the averages :math:`\\mu` and :math:`\\sigma^2`
    (nota bene: instead of :math:`\\sigma^2`, this implementation actually
    stores the standard deviation :math:`\\sqrt{\\sigma^2 + \\epsilon}`).

    By default, this layer learns the average statistics as exponential moving
    averages computed during training, so it can be plugged into an existing
    network without any changes of the training procedure (see Notes).

    Parameters
    ----------
    incoming : a :class:`Layer` instance or a tuple
        The layer feeding into this layer, or the expected input shape
    axes : 'auto', int or tuple of int
        The axis or axes to normalize over. If ``'auto'`` (the default),
        normalize over all axes except for the second: this will normalize
        over the minibatch dimension for dense layers, and additionally over
        all spatial dimensions for convolutional layers.
    epsilon : scalar
        Small constant :math:`\\epsilon` added to the variance before taking
        the square root and dividing by it, to avoid numerical problems
    alpha : scalar
        Coefficient for the exponential moving average of batch-wise means and
        standard deviations computed during training; the closer to one, the
        more it will depend on the last batches seen
    beta : Theano shared variable, expression, numpy array, callable or None
        Initial value, expression or initializer for :math:`\\beta`. Must match
        the incoming shape, skipping all axes in `axes`. Set to ``None`` to fix
        it to 0.0 instead of learning it.
        See :func:`lasagne.utils.create_param` for more information.
    gamma : Theano shared variable, expression, numpy array, callable or None
        Initial value, expression or initializer for :math:`\\gamma`. Must
        match the incoming shape, skipping all axes in `axes`. Set to ``None``
        to fix it to 1.0 instead of learning it.
        See :func:`lasagne.utils.create_param` for more information.
    mean : Theano shared variable, expression, numpy array, or callable
        Initial value, expression or initializer for :math:`\\mu`. Must match
        the incoming shape, skipping all axes in `axes`.
        See :func:`lasagne.utils.create_param` for more information.
    std : Theano shared variable, expression, numpy array, or callable
        Initial value, expression or initializer for :math:`\\sqrt{\\sigma^2 +
        \\epsilon}`. Must match the incoming shape, skipping all axes in
        `axes`.
        See :func:`lasagne.utils.create_param` for more information.
    **kwargs
        Any additional keyword arguments are passed to the :class:`Layer`
        superclass.

    Notes
    -----
    This layer should be inserted between a linear transformation (such as a
    :class:`DenseLayer`, or :class:`Conv2DLayer`) and its nonlinearity. The
    convenience function :func:`batch_norm` modifies an existing layer to
    insert batch normalization in front of its nonlinearity.

    The behavior can be controlled by passing keyword arguments to
    :func:`lasagne.layers.get_output()` when building the output expression
    of any network containing this layer.

    During training, [1]_ normalize each input mini-batch by its statistics
    and update an exponential moving average of the statistics to be used for
    validation. This can be achieved by passing ``deterministic=False``.
    For validation, [1]_ normalize each input mini-batch by the stored
    statistics. This can be achieved by passing ``deterministic=True``.

    For more fine-grained control, ``batch_norm_update_averages`` can be
    passed to update the exponential moving averages (``True``) or not
    (``False``), and ``batch_norm_use_averages`` can be passed to use the
    exponential moving averages for normalization (``True``) or normalize
    each mini-batch by its own statistics (``False``). These settings override
    ``deterministic``.

    Note that for testing a model after training, [1]_ replace the stored
    exponential moving average statistics by fixing all network weights and
    re-computing average statistics over the training data in a layerwise
    fashion. This is not part of the layer implementation.

    In case you set `axes` to not include the batch dimension (the first axis,
    usually), normalization is done per example, not across examples. This
    does not require any averages, so you can pass
    ``batch_norm_update_averages`` and ``batch_norm_use_averages`` as
    ``False`` in this case.

    See also
    --------
    batch_norm : Convenience function to apply batch normalization to a layer

    References
    ----------
    .. [1] Ioffe, Sergey and Szegedy, Christian (2015):
           Batch Normalization: Accelerating Deep Network Training by Reducing
           Internal Covariate Shift. http://arxiv.org/abs/1502.03167.
    """
    def __init__(self, incoming, axes='auto', epsilon=1e-4, alpha=0.1,
                 mode='low_mem', beta=lasagne.init.Constant(0),
                 gamma=lasagne.init.Constant(1), mean=lasagne.init.Constant(0),
                 std=lasagne.init.Constant(1), **kwargs):
        super(BatchNormLayer, self).__init__(incoming, **kwargs)
        if axes == 'auto':
            # default: normalize over all but the second axis
            axes = (0,) + tuple(range(2, len(self.input_shape)))
        elif isinstance(axes, int):
            axes = (axes,)
        self.axes = axes
        self.epsilon = epsilon
        self.alpha = alpha
        self.mode = mode
        # create parameters, ignoring all dimensions in axes
        shape = [size for axis, size in enumerate(self.input_shape)
                 if axis not in self.axes]
        if any(size is None for size in shape):
            raise ValueError("BatchNormLayer needs specified input sizes for "
                             "all axes not normalized over.")
        if beta is None:
            self.beta = None
        else:
            self.beta = self.add_param(beta, shape, 'beta',
                                       trainable=True, regularizable=False)
        if gamma is None:
            self.gamma = None
        else:
            self.gamma = self.add_param(gamma, shape, 'gamma',
                                        trainable=True, regularizable=False)
        self.mean = self.add_param(mean, shape, 'mean',
                                   trainable=False, regularizable=False)
        self.std = self.add_param(std, shape, 'std',
                                  trainable=False, regularizable=False)

    def get_output_for(self, input, deterministic=False, **kwargs):
        input_mean = input.mean(self.axes)
        input_std = TT.sqrt(input.var(self.axes) + self.epsilon)

        # Decide whether to use the stored averages or mini-batch statistics
        use_averages = kwargs.get('batch_norm_use_averages',
                                  deterministic)
        if use_averages:
            mean = self.mean
            std = self.std
        else:
            mean = input_mean
            std = input_std

        # Decide whether to update the stored averages
        update_averages = kwargs.get('batch_norm_update_averages',
                                     not deterministic)
        if update_averages:
            # Trick: To update the stored statistics, we create memory-aliased
            # clones of the stored statistics:
            running_mean = theano.clone(self.mean, share_inputs=False)
            running_std = theano.clone(self.std, share_inputs=False)
            # set a default update for them:
            running_mean.default_update = ((1 - self.alpha) * running_mean +
                                           self.alpha * input_mean)
            running_std.default_update = ((1 - self.alpha) * running_std +
                                          self.alpha * input_std)
            # and make sure they end up in the graph without participating in
            # the computation (this way their default_update will be collected
            # and applied, but the computation will be optimized away):
            mean += 0 * running_mean
            std += 0 * running_std

        # prepare dimshuffle pattern inserting broadcastable axes as needed
        param_axes = iter(list(range(input.ndim - len(self.axes))))
        pattern = ['x' if input_axis in self.axes
                   else next(param_axes)
                   for input_axis in range(input.ndim)]

        # apply dimshuffle pattern to all parameters
        beta = 0 if self.beta is None else self.beta.dimshuffle(pattern)
        gamma = 1 if self.gamma is None else self.gamma.dimshuffle(pattern)
        mean = mean.dimshuffle(pattern)
        std = std.dimshuffle(pattern)

        # normalize
        normalized = (input - mean) * (gamma * TT.inv(std)) + beta
        return normalized
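
def _demo_batch_norm_layer_flags():
    # Sketch, not part of the original rllab code: the function name and shapes
    # are illustrative assumptions. It exercises the fine-grained flags from
    # the docstring Notes: normalize by mini-batch statistics while leaving
    # the stored running averages untouched.
    import numpy as np
    l_in = L.InputLayer(shape=(None, 10))
    l_bn = BatchNormLayer(l_in)
    x = TT.matrix('x')
    out = L.get_output(l_bn, x,
                       batch_norm_use_averages=False,
                       batch_norm_update_averages=False)
    f = theano.function([x], out)
    data = np.random.randn(4, 10).astype(theano.config.floatX)
    print(f(data).mean())  # close to zero after normalization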

def batch_norm(layer, **kwargs):
    """
    Apply batch normalization to an existing layer. This is a convenience
    function modifying an existing layer to include batch normalization: It
    will steal the layer's nonlinearity if there is one (effectively
    introducing the normalization right before the nonlinearity), remove
    the layer's bias if there is one (because it would be redundant), and add
    a :class:`BatchNormLayer` and :class:`NonlinearityLayer` on top.

    Parameters
    ----------
    layer : A :class:`Layer` instance
        The layer to apply the normalization to; note that it will be
        irreversibly modified as specified above
    **kwargs
        Any additional keyword arguments are passed on to the
        :class:`BatchNormLayer` constructor.

    Returns
    -------
    BatchNormLayer or NonlinearityLayer instance
        A batch normalization layer stacked on the given modified `layer`, or
        a nonlinearity layer stacked on top of both if `layer` was nonlinear.

    Examples
    --------
    Just wrap any layer into a :func:`batch_norm` call on creating it:

    >>> from lasagne.layers import InputLayer, DenseLayer, batch_norm
    >>> from lasagne.nonlinearities import tanh
    >>> l1 = InputLayer((64, 768))
    >>> l2 = batch_norm(DenseLayer(l1, num_units=500, nonlinearity=tanh))

    This introduces batch normalization right before its nonlinearity:

    >>> from lasagne.layers import get_all_layers
    >>> [l.__class__.__name__ for l in get_all_layers(l2)]
    ['InputLayer', 'DenseLayer', 'BatchNormLayer', 'NonlinearityLayer']
    """
    nonlinearity = getattr(layer, 'nonlinearity', None)
    if nonlinearity is not None:
        layer.nonlinearity = lasagne.nonlinearities.identity
    if hasattr(layer, 'b') and layer.b is not None:
        del layer.params[layer.b]
        layer.b = None
    layer = BatchNormLayer(layer, **kwargs)
    if nonlinearity is not None:
        layer = L.NonlinearityLayer(layer, nonlinearity)
    return layer
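
def _demo_batch_norm():
    # End-to-end sketch, not part of the original rllab code: network sizes and
    # the function name are illustrative assumptions. It wraps a DenseLayer
    # with batch_norm and compiles separate training and evaluation functions,
    # as described in the BatchNormLayer docstring.
    import numpy as np
    l_in = L.InputLayer(shape=(None, 16))
    l_hid = batch_norm(L.DenseLayer(l_in, num_units=32,
                                    nonlinearity=lasagne.nonlinearities.rectify))
    x = TT.matrix('x')
    # deterministic=False: normalize by mini-batch statistics; the exponential
    # moving averages are updated through their default_update when compiled.
    train_out = L.get_output(l_hid, x, deterministic=False)
    # deterministic=True: normalize by the stored running mean / std.
    eval_out = L.get_output(l_hid, x, deterministic=True)
    f_train = theano.function([x], train_out)
    f_eval = theano.function([x], eval_out)
    data = np.random.randn(8, 16).astype(theano.config.floatX)
    print(f_train(data).shape, f_eval(data).shape)  # (8, 32) (8, 32)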