Theano Learning Guide 5 (Translation) - Denoising Autoencoders
The denoising autoencoder is an extension of the classical autoencoder, originally introduced as a building block for deep networks [Vincent08]. In this tutorial we begin with a brief discussion of the plain autoencoder.
Autoencoders
See [Bengio09] for an introduction to autoencoders. During encoding, an autoencoder maps the input $\mathbf{x} \in [0,1]^d$ to a latent representation $\mathbf{y} \in [0,1]^{d'}$ through the mapping:
$$\mathbf{y} = s(\mathbf{W}\mathbf{x} + \mathbf{b})$$
where $s$ is a nonlinearity such as the sigmoid. During decoding, the code $\mathbf{y}$ is mapped back to a reconstruction $\mathbf{z}$ with the same dimension as the input, through a very similar transformation:
$$\mathbf{z} = s(\mathbf{W'}\mathbf{y} + \mathbf{b'})$$
Note that the prime symbol here does not denote transposition. $\mathbf{z}$ should be seen as a prediction of $\mathbf{x}$ given the code $\mathbf{y}$. Optionally, the weight matrix $\mathbf{W'}$ of the reverse mapping may be constrained to be the transpose of the forward mapping, $\mathbf{W'} = \mathbf{W}^T$; this is referred to as tied weights. The parameters of the model ($\mathbf{W}$, $\mathbf{b}$, $\mathbf{b'}$, and $\mathbf{W'}$ if the weights are not tied) are found by minimizing the average reconstruction error.
The reconstruction error can be quantified in many ways. The traditional squared error $L(\mathbf{x}, \mathbf{z}) = || \mathbf{x} - \mathbf{z} ||^2$ can be used; if the input consists of bit vectors, or vectors of bit probabilities, the cross-entropy of the reconstruction is also a natural choice:
$$L_{H} (\mathbf{x}, \mathbf{z}) = - \sum^d_{k=1}[\mathbf{x}_k \log \mathbf{z}_k + (1 - \mathbf{x}_k)\log(1 - \mathbf{z}_k)]$$
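To make these definitions concrete, here is a minimal NumPy sketch (our illustration, not part of the original tutorial) that pushes one minibatch through the encoder and decoder with tied weights and evaluates the cross-entropy loss:

    import numpy

    rng = numpy.random.RandomState(0)
    d, d_prime = 784, 500

    # randomly initialized parameters; shapes match the dA class defined below
    W = rng.uniform(-0.1, 0.1, size=(d, d_prime))
    b = numpy.zeros(d_prime)
    b_prime = numpy.zeros(d)

    def sigmoid(a):
        return 1.0 / (1.0 + numpy.exp(-a))

    x = rng.rand(20, d)                 # a minibatch of 20 inputs in [0, 1]
    y = sigmoid(x.dot(W) + b)           # code: y = s(Wx + b)
    z = sigmoid(y.dot(W.T) + b_prime)   # reconstruction with tied weights W' = W^T
    L = -numpy.sum(x * numpy.log(z) + (1 - x) * numpy.log(1 - z), axis=1)
    print(L.shape, L.mean())            # one cross-entropy per example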
The hope is that the code $\mathbf{y}$ is a distributed representation that captures the main factors of variation in the data, much as principal component analysis (PCA) does. Indeed, if the encoder uses a linear mapping and the network is trained under the mean squared error criterion, the $k$ hidden units learn to project the input onto the span of the first $k$ principal components of the data. If the hidden layer is nonlinear, the autoencoder departs from PCA and can capture multi-modal aspects of the input distribution.
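The PCA connection can be checked directly. The sketch below (our illustration, not code from the tutorial) computes the best rank-$k$ linear reconstruction via SVD, which is the optimum a linear autoencoder trained under squared error converges toward:

    import numpy

    rng = numpy.random.RandomState(1)
    X = rng.randn(200, 50)
    Xc = X - X.mean(axis=0)                    # PCA assumes centered data

    k = 5
    U, S, Vt = numpy.linalg.svd(Xc, full_matrices=False)
    proj = Vt[:k].T.dot(Vt[:k])                # projector onto the top-k principal subspace
    X_hat = Xc.dot(proj)

    # mean squared reconstruction error equals the energy in the discarded components
    mse = ((Xc - X_hat) ** 2).mean()
    print(mse, (S[k:] ** 2).sum() / Xc.size)   # the two numbers agree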
Because $\mathbf{y}$ can be viewed as a lossy compression of $\mathbf{x}$, it cannot compress all inputs equally well. Optimization makes it a good compression for the training data, and hopefully for other inputs as well, but not for arbitrary inputs. This is the sense in which an autoencoder generalizes: it achieves low reconstruction error on test examples drawn from the same distribution as the training data, but performs poorly on inputs chosen at random.
For ease of reuse, we implement the autoencoder as a Theano class. First, we create shared variables for the parameters $\mathbf{W}$, $\mathbf{b}$ and $\mathbf{b'}$ (here $\mathbf{W'}=\mathbf{W}^T$).
    def __init__(
        self,
        numpy_rng,
        theano_rng=None,
        input=None,
        n_visible=784,
        n_hidden=500,
        W=None,
        bhid=None,
        bvis=None
    ):
        """
        Initialize the dA class by specifying the number of visible units (the
        dimension d of the input), the number of hidden units (the dimension
        d' of the latent or hidden space) and the corruption level. The
        constructor also receives symbolic variables for the input, weights and
        bias. Such symbolic variables are useful when, for example, the input
        is the result of some computations, or when weights are shared between
        the dA and an MLP layer. When dealing with SdAs this always happens:
        the dA on layer 2 gets as input the output of the dA on layer 1,
        and the weights of the dA are used in the second stage of training
        to construct an MLP.

        :type numpy_rng: numpy.random.RandomState
        :param numpy_rng: number random generator used to generate weights

        :type theano_rng: theano.tensor.shared_randomstreams.RandomStreams
        :param theano_rng: Theano random generator; if None is given one is
                           generated based on a seed drawn from `rng`

        :type input: theano.tensor.TensorType
        :param input: a symbolic description of the input or None for
                      standalone dA

        :type n_visible: int
        :param n_visible: number of visible units

        :type n_hidden: int
        :param n_hidden: number of hidden units

        :type W: theano.tensor.TensorType
        :param W: Theano variable pointing to a set of weights that should be
                  shared between the dA and another architecture; if dA should
                  be standalone set this to None

        :type bhid: theano.tensor.TensorType
        :param bhid: Theano variable pointing to a set of bias values (for
                     hidden units) that should be shared between the dA and
                     another architecture; if dA should be standalone set this
                     to None

        :type bvis: theano.tensor.TensorType
        :param bvis: Theano variable pointing to a set of bias values (for
                     visible units) that should be shared between the dA and
                     another architecture; if dA should be standalone set this
                     to None
        """
        self.n_visible = n_visible
        self.n_hidden = n_hidden

        # create a Theano random generator that gives symbolic random values
        if not theano_rng:
            theano_rng = RandomStreams(numpy_rng.randint(2 ** 30))

        # note : W' was written as `W_prime` and b' as `b_prime`
        if not W:
            # W is initialized with `initial_W`, which is uniformly sampled
            # from -4*sqrt(6./(n_visible+n_hidden)) and
            # 4*sqrt(6./(n_hidden+n_visible)); the output of uniform is
            # converted using asarray to dtype theano.config.floatX so
            # that the code is runnable on GPU
            initial_W = numpy.asarray(
                numpy_rng.uniform(
                    low=-4 * numpy.sqrt(6. / (n_hidden + n_visible)),
                    high=4 * numpy.sqrt(6. / (n_hidden + n_visible)),
                    size=(n_visible, n_hidden)
                ),
                dtype=theano.config.floatX
            )
            W = theano.shared(value=initial_W, name='W', borrow=True)

        if not bvis:
            bvis = theano.shared(
                value=numpy.zeros(
                    n_visible,
                    dtype=theano.config.floatX
                ),
                borrow=True
            )

        if not bhid:
            bhid = theano.shared(
                value=numpy.zeros(
                    n_hidden,
                    dtype=theano.config.floatX
                ),
                name='b',
                borrow=True
            )

        self.W = W
        # b corresponds to the bias of the hidden units
        self.b = bhid
        # b_prime corresponds to the bias of the visible units
        self.b_prime = bvis
        # tied weights, therefore W_prime is W transpose
        self.W_prime = self.W.T
        self.theano_rng = theano_rng
        # if no input is given, generate a variable representing the input
        if input is None:
            # we use a matrix because we expect a minibatch of several
            # examples, each example being a row
            self.x = T.dmatrix(name='input')
        else:
            self.x = input

        self.params = [self.W, self.b, self.b_prime]
Here we pass the symbolic $input$ to the model, which makes it possible to compose several autoencoder layers into a deep network: the output of layer $k$ becomes the input of layer $k+1$.
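For instance, two autoencoders can be chained symbolically. The following is a sketch only (layer sizes and names are illustrative, and it assumes the dA class developed in this tutorial):

    import numpy
    import theano.tensor as T

    # a sketch of stacking: the second dA consumes the first dA's
    # hidden code as its symbolic input
    rng = numpy.random.RandomState(123)
    x = T.matrix('x')

    da1 = dA(numpy_rng=rng, input=x, n_visible=28 * 28, n_hidden=500)
    da2 = dA(numpy_rng=rng, input=da1.get_hidden_values(x),
             n_visible=500, n_hidden=250)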
The latent representation and the reconstructed signal are computed as follows:
    def get_hidden_values(self, input):
        """ Computes the values of the hidden layer """
        return T.nnet.sigmoid(T.dot(input, self.W) + self.b)

    def get_reconstructed_input(self, hidden):
        """ Computes the reconstructed input given the values of the
        hidden layer """
        return T.nnet.sigmoid(T.dot(hidden, self.W_prime) + self.b_prime)
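As a quick sanity check, the two methods compose into a full encode-decode graph that can be compiled and run on a random batch. This is a sketch assuming a standalone da instance constructed as in the sections below:

    import numpy
    import theano

    # compile encoder followed by decoder into one callable (assumes a
    # standalone `da`, whose input defaults to a float64 T.dmatrix)
    recon = theano.function(
        [da.x],
        da.get_reconstructed_input(da.get_hidden_values(da.x))
    )
    batch = numpy.random.rand(20, 784)  # float64 matches the dmatrix input
    print(recon(batch).shape)           # -> (20, 784)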
Next, we compute the loss function and derive the SGD updates for the parameters:
    def get_cost_updates(self, corruption_level, learning_rate):
        """ This function computes the cost and the updates for one training
        step of the dA """

        tilde_x = self.get_corrupted_input(self.x, corruption_level)
        y = self.get_hidden_values(tilde_x)
        z = self.get_reconstructed_input(y)
        # note : we sum over the size of a datapoint; if we are using
        #        minibatches, L will be a vector, with one entry per
        #        example in minibatch
        L = - T.sum(self.x * T.log(z) + (1 - self.x) * T.log(1 - z), axis=1)
        # note : L is now a vector, where each element is the
        #        cross-entropy cost of the reconstruction of the
        #        corresponding example of the minibatch. We need to
        #        compute the average of all these to get the cost of
        #        the minibatch
        cost = T.mean(L)

        # compute the gradients of the cost of the `dA` with respect
        # to its parameters
        gparams = T.grad(cost, self.params)
        # generate the list of updates
        updates = [
            (param, param - learning_rate * gparam)
            for param, gparam in zip(self.params, gparams)
        ]

        return (cost, updates)
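One practical caveat: T.log(z) diverges if the sigmoid saturates to exactly 0 or 1, which can happen in float32. A common guard, which is our addition and not part of the original tutorial, is to clip the reconstruction before taking logarithms:

    # Inside get_cost_updates, the line computing L could be replaced by
    # the following (our suggestion, not in the original code); eps keeps
    # z away from exactly 0 and 1 so that the logs stay finite.
    eps = 1e-7
    z = T.clip(z, eps, 1 - eps)
    L = - T.sum(self.x * T.log(z) + (1 - self.x) * T.log(1 - z), axis=1)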
We can now construct an instance and compile a training function that repeatedly updates the model parameters so as to minimize the reconstruction error:
da = dA(
    numpy_rng=rng,
    theano_rng=theano_rng,
    input=x,
    n_visible=28 * 28,
    n_hidden=500
)

cost, updates = da.get_cost_updates(
    corruption_level=0.,
    learning_rate=learning_rate
)

train_da = theano.function(
    [index],
    cost,
    updates=updates,
    givens={
        x: train_set_x[index * batch_size: (index + 1) * batch_size]
    }
)

start_time = timeit.default_timer()

############
# TRAINING #
############

# go through training epochs
for epoch in range(training_epochs):
    # go through training set
    c = []
    for batch_index in range(n_train_batches):
        c.append(train_da(batch_index))

    print('Training epoch %d, cost ' % epoch, numpy.mean(c))

end_time = timeit.default_timer()

training_time = (end_time - start_time)

print(('The no corruption code for file ' +
       os.path.split(__file__)[1] +
       ' ran for %.2fm' % (training_time / 60.)), file=sys.stderr)

image = Image.fromarray(
    tile_raster_images(X=da.W.get_value(borrow=True).T,
                       img_shape=(28, 28), tile_shape=(10, 10),
                       tile_spacing=(1, 1)))
image.save('filters_corruption_0.png')
#####################################
# BUILDING THE MODEL CORRUPTION 30% #
#####################################

rng = numpy.random.RandomState(123)
theano_rng = RandomStreams(rng.randint(2 ** 30))

da = dA(
    numpy_rng=rng,
    theano_rng=theano_rng,
    input=x,
    n_visible=28 * 28,
    n_hidden=500
)

cost, updates = da.get_cost_updates(
    corruption_level=0.3,
    learning_rate=learning_rate
)

train_da = theano.function(
    [index],
    cost,
    updates=updates,
    givens={
        x: train_set_x[index * batch_size: (index + 1) * batch_size]
    }
)

start_time = timeit.default_timer()

############
# TRAINING #
############

# go through training epochs
for epoch in range(training_epochs):
    # go through training set
    c = []
    for batch_index in range(n_train_batches):
        c.append(train_da(batch_index))

    print('Training epoch %d, cost ' % epoch, numpy.mean(c))

end_time = timeit.default_timer()

training_time = (end_time - start_time)

print(('The 30% corruption code for file ' +
       os.path.split(__file__)[1] +
       ' ran for %.2fm' % (training_time / 60.)), file=sys.stderr)

image = Image.fromarray(tile_raster_images(
    X=da.W.get_value(borrow=True).T,
    img_shape=(28, 28), tile_shape=(10, 10),
    tile_spacing=(1, 1)))
image.save('filters_corruption_30.png')

os.chdir('../')


if __name__ == '__main__':
    test_dA()
Denoising Autoencoders
The motivation behind denoising autoencoders is simple: to force the hidden layer to discover more robust features, we train the autoencoder to reconstruct the input from a corrupted version of it.
The denoising autoencoder is a stochastic version of the autoencoder. Intuitively, it tries to do two things at once: encode the input while preserving its information, and undo the effect of the corruption applied to it. The latter can only be achieved by capturing the statistical dependencies between the inputs. The denoising autoencoder can be understood from several perspectives, including manifold learning and stochastic operators; see [Vincent08] for details.
To implement the denoising autoencoder, all we need to add to the plain autoencoder is a stochastic corruption step applied to the input. The input can be corrupted in many ways; here we randomly mask entries of the input by setting them to zero. The code follows:
    def get_corrupted_input(self, input, corruption_level):
        """This function keeps ``1-corruption_level`` entries of the inputs
        the same and zeroes out a randomly selected subset of size
        ``corruption_level``.

        Note : the first argument of theano_rng.binomial is the shape (size)
               of the random numbers that it should produce; the second
               argument is the number of trials; the third argument is the
               probability of success of any trial.

               This will produce an array of 0s and 1s where 1 has a
               probability of 1 - ``corruption_level`` and 0 has a
               probability of ``corruption_level``.

               The binomial function returns the int64 data type by default.
               int64 multiplied by the input type (floatX) always returns
               float64. To keep all data in floatX when floatX is float32,
               we set the dtype of the binomial to floatX. As in our case
               the value of the binomial is always 0 or 1, this doesn't
               change the result. This is needed to allow the GPU to work
               correctly, as it only supports float32 for now.
        """
        return self.theano_rng.binomial(size=input.shape, n=1,
                                        p=1 - corruption_level,
                                        dtype=theano.config.floatX) * input
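The behavior of this masking is easy to verify numerically. The following sketch (ours, using NumPy's binomial sampler rather than Theano's RandomStreams) confirms that roughly a corruption_level fraction of entries is zeroed while the rest pass through unchanged:

    import numpy

    rng = numpy.random.RandomState(0)
    x = rng.rand(1000, 784)
    corruption_level = 0.3

    mask = rng.binomial(n=1, p=1 - corruption_level, size=x.shape)
    tilde_x = mask * x

    print((tilde_x == 0).mean())                              # close to 0.3
    print(numpy.allclose(tilde_x[mask == 1], x[mask == 1]))   # survivors untouched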
With this, the complete denoising autoencoder class becomes:
class dA(object):
    """Denoising Auto-Encoder class (dA)

    A denoising autoencoder tries to reconstruct the input from a corrupted
    version of it by projecting it first into a latent space and reprojecting
    it afterwards back into the input space. Please refer to Vincent et al.,
    2008 for more details. If x is the input, then equation (1) computes a
    partially destroyed version of x by means of a stochastic mapping q_D.
    Equation (2) computes the projection of the input into the latent space.
    Equation (3) computes the reconstruction of the input, while equation (4)
    computes the reconstruction error.

    .. math::

        \tilde{x} ~ q_D(\tilde{x}|x)                                    (1)

        y = s(W \tilde{x} + b)                                          (2)

        z = s(W' y + b')                                                (3)

        L(x, z) = -sum_{k=1}^d [x_k \log z_k + (1-x_k) \log(1-z_k)]     (4)

    """

    def __init__(
        self,
        numpy_rng,
        theano_rng=None,
        input=None,
        n_visible=784,
        n_hidden=500,
        W=None,
        bhid=None,
        bvis=None
    ):
        """
        Initialize the dA class by specifying the number of visible units (the
        dimension d of the input), the number of hidden units (the dimension
        d' of the latent or hidden space) and the corruption level. The
        constructor also receives symbolic variables for the input, weights and
        bias. Such symbolic variables are useful when, for example, the input
        is the result of some computations, or when weights are shared between
        the dA and an MLP layer. When dealing with SdAs this always happens:
        the dA on layer 2 gets as input the output of the dA on layer 1,
        and the weights of the dA are used in the second stage of training
        to construct an MLP.

        :type numpy_rng: numpy.random.RandomState
        :param numpy_rng: number random generator used to generate weights

        :type theano_rng: theano.tensor.shared_randomstreams.RandomStreams
        :param theano_rng: Theano random generator; if None is given one is
                           generated based on a seed drawn from `rng`

        :type input: theano.tensor.TensorType
        :param input: a symbolic description of the input or None for
                      standalone dA

        :type n_visible: int
        :param n_visible: number of visible units

        :type n_hidden: int
        :param n_hidden: number of hidden units

        :type W: theano.tensor.TensorType
        :param W: Theano variable pointing to a set of weights that should be
                  shared between the dA and another architecture; if dA should
                  be standalone set this to None

        :type bhid: theano.tensor.TensorType
        :param bhid: Theano variable pointing to a set of bias values (for
                     hidden units) that should be shared between the dA and
                     another architecture; if dA should be standalone set this
                     to None

        :type bvis: theano.tensor.TensorType
        :param bvis: Theano variable pointing to a set of bias values (for
                     visible units) that should be shared between the dA and
                     another architecture; if dA should be standalone set this
                     to None
        """
        self.n_visible = n_visible
        self.n_hidden = n_hidden

        # create a Theano random generator that gives symbolic random values
        if not theano_rng:
            theano_rng = RandomStreams(numpy_rng.randint(2 ** 30))

        # note : W' was written as `W_prime` and b' as `b_prime`
        if not W:
            # W is initialized with `initial_W`, which is uniformly sampled
            # from -4*sqrt(6./(n_visible+n_hidden)) and
            # 4*sqrt(6./(n_hidden+n_visible)); the output of uniform is
            # converted using asarray to dtype theano.config.floatX so
            # that the code is runnable on GPU
            initial_W = numpy.asarray(
                numpy_rng.uniform(
                    low=-4 * numpy.sqrt(6. / (n_hidden + n_visible)),
                    high=4 * numpy.sqrt(6. / (n_hidden + n_visible)),
                    size=(n_visible, n_hidden)
                ),
                dtype=theano.config.floatX
            )
            W = theano.shared(value=initial_W, name='W', borrow=True)

        if not bvis:
            bvis = theano.shared(
                value=numpy.zeros(
                    n_visible,
                    dtype=theano.config.floatX
                ),
                borrow=True
            )

        if not bhid:
            bhid = theano.shared(
                value=numpy.zeros(
                    n_hidden,
                    dtype=theano.config.floatX
                ),
                name='b',
                borrow=True
            )

        self.W = W
        # b corresponds to the bias of the hidden units
        self.b = bhid
        # b_prime corresponds to the bias of the visible units
        self.b_prime = bvis
        # tied weights, therefore W_prime is W transpose
        self.W_prime = self.W.T
        self.theano_rng = theano_rng
        # if no input is given, generate a variable representing the input
        if input is None:
            # we use a matrix because we expect a minibatch of several
            # examples, each example being a row
            self.x = T.dmatrix(name='input')
        else:
            self.x = input

        self.params = [self.W, self.b, self.b_prime]

    def get_corrupted_input(self, input, corruption_level):
        """This function keeps ``1-corruption_level`` entries of the inputs
        the same and zeroes out a randomly selected subset of size
        ``corruption_level``.

        Note : the first argument of theano_rng.binomial is the shape (size)
               of the random numbers that it should produce; the second
               argument is the number of trials; the third argument is the
               probability of success of any trial.

               This will produce an array of 0s and 1s where 1 has a
               probability of 1 - ``corruption_level`` and 0 has a
               probability of ``corruption_level``.

               The binomial function returns the int64 data type by default.
               int64 multiplied by the input type (floatX) always returns
               float64. To keep all data in floatX when floatX is float32,
               we set the dtype of the binomial to floatX. As in our case
               the value of the binomial is always 0 or 1, this doesn't
               change the result. This is needed to allow the GPU to work
               correctly, as it only supports float32 for now.
        """
        return self.theano_rng.binomial(size=input.shape, n=1,
                                        p=1 - corruption_level,
                                        dtype=theano.config.floatX) * input

    def get_hidden_values(self, input):
        """ Computes the values of the hidden layer """
        return T.nnet.sigmoid(T.dot(input, self.W) + self.b)

    def get_reconstructed_input(self, hidden):
        """ Computes the reconstructed input given the values of the
        hidden layer """
        return T.nnet.sigmoid(T.dot(hidden, self.W_prime) + self.b_prime)

    def get_cost_updates(self, corruption_level, learning_rate):
        """ This function computes the cost and the updates for one training
        step of the dA """

        tilde_x = self.get_corrupted_input(self.x, corruption_level)
        y = self.get_hidden_values(tilde_x)
        z = self.get_reconstructed_input(y)
        # note : we sum over the size of a datapoint; if we are using
        #        minibatches, L will be a vector, with one entry per
        #        example in minibatch
        L = - T.sum(self.x * T.log(z) + (1 - self.x) * T.log(1 - z), axis=1)
        # note : L is now a vector, where each element is the
        #        cross-entropy cost of the reconstruction of the
        #        corresponding example of the minibatch. We need to
        #        compute the average of all these to get the cost of
        #        the minibatch
        cost = T.mean(L)

        # compute the gradients of the cost of the `dA` with respect
        # to its parameters
        gparams = T.grad(cost, self.params)
        # generate the list of updates
        updates = [
            (param, param - learning_rate * gparam)
            for param, gparam in zip(self.params, gparams)
        ]

        return (cost, updates)
Putting it All Together
It is now straightforward to construct an instance and train it:
# allocate symbolic variables for the data
index = T.lscalar()    # index to a [mini]batch
x = T.matrix('x')      # the data is presented as rasterized images

#####################################
# BUILDING THE MODEL CORRUPTION 30% #
#####################################

rng = numpy.random.RandomState(123)
theano_rng = RandomStreams(rng.randint(2 ** 30))

da = dA(
    numpy_rng=rng,
    theano_rng=theano_rng,
    input=x,
    n_visible=28 * 28,
    n_hidden=500
)

cost, updates = da.get_cost_updates(
    corruption_level=0.3,
    learning_rate=learning_rate
)

train_da = theano.function(
    [index],
    cost,
    updates=updates,
    givens={
        x: train_set_x[index * batch_size: (index + 1) * batch_size]
    }
)

start_time = timeit.default_timer()

############
# TRAINING #
############

# go through training epochs
for epoch in range(training_epochs):
    # go through training set
    c = []
    for batch_index in range(n_train_batches):
        c.append(train_da(batch_index))

    print('Training epoch %d, cost ' % epoch, numpy.mean(c))

end_time = timeit.default_timer()

training_time = (end_time - start_time)

print(('The 30% corruption code for file ' +
       os.path.split(__file__)[1] +
       ' ran for %.2fm' % (training_time / 60.)), file=sys.stderr)
Finally, to get an intuition for what the model has learned, we can use the tile_raster_images helper to visualize the trained weights:
image = Image.fromarray(tile_raster_images(
X=da.W.get_value(borrow=True).T,
img_shape=(28, 28), tile_shape=(10, 10),
tile_spacing=(1, 1)))
image.save('filters_corruption_30.png')
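tile_raster_images comes from the tutorial's utils module. If that helper is not at hand, a minimal grayscale substitute can be written directly with NumPy and PIL; the sketch below is our own and assumes a trained da as above:

    import numpy
    from PIL import Image

    def tile_filters(filters, img_shape=(28, 28), tile_shape=(10, 10), spacing=1):
        """Arrange the first rows*cols filters (one per row of `filters`)
        into a grayscale grid, each rescaled to [0, 255]."""
        h, w = img_shape
        rows, cols = tile_shape
        out = numpy.zeros((rows * (h + spacing) - spacing,
                           cols * (w + spacing) - spacing), dtype='uint8')
        for i in range(rows * cols):
            f = filters[i].reshape(img_shape)
            f = (f - f.min()) / (f.max() - f.min() + 1e-8)   # per-filter rescale
            r, c = divmod(i, cols)
            out[r * (h + spacing): r * (h + spacing) + h,
                c * (w + spacing): c * (w + spacing) + w] = (f * 255).astype('uint8')
        return out

    # usage mirrors the tutorial's call: filters are the rows of W.T
    Image.fromarray(tile_filters(da.W.get_value(borrow=True).T)).save('filters_alt.png')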
Running the full script yields the following results:
1. Filters learned with no corruption (filters_corruption_0.png):
2. Filters learned with 30% corruption (filters_corruption_30.png):