A Statistical View of Deep Learning (II): Auto-encoders and Free Energy

With the success of discriminative modelling using deep feedforward neural networks (or using an alternative statistical lens, recursive generalised linear models) in numerous industrial applications, there is an increased drive to produce similar outcomes with unsupervised learning. In this post, I'd like to explore the connections between denoising auto-encoders as a leading approach for unsupervised learning in deep learning, and density estimation in statistics. The statistical view I'll explore casts learning in denoising auto-encoders as that of inference in latent factor (density) models. Such a connection has a number of useful benefits and implications for our machine learning practice.

Generalised Denoising Auto-encoders

Denoising auto-encoders are an important advancement in unsupervised deep learning, especially in moving towards scalable and robust representations of data. For every data point y, denoising auto-encoders begin by creating a perturbed version of it, y', using a known corruption process C(y'|y). We then create a network that, given the perturbed data y', reconstructs the original data y. The network is grouped into two parts, an encoder and a decoder, such that the output of the encoder z can be used as a representation (features) of the data. The objective function is [1]:

Perturbation: $y' \sim C(y' \mid y)$
Encoder: $z(y') = f_\phi(y')$
Decoder: $y \approx g_\theta(z)$
Objective: $\mathcal{L}_{\mathrm{DAE}} = \log p(y \mid z)$

where $\log p(\cdot)$ is an appropriate likelihood function for the data, and the objective is averaged over all observations. Generalised denoising auto-encoders (GDAEs) recognise that this formulation may be limited by finite training data, and introduce an additional penalty term $R(\cdot)$ for added regularisation [2]:

$\mathcal{L}_{\mathrm{GDAE}} = \log p(y \mid z) - \lambda R(y, y')$

GDAEs exploit the insight that perturbations in the observation space give rise to robustness and insensitivity in the representation z. Two key questions arise when we use GDAEs: how do we choose a realistic corruption process, and what are appropriate regularisation functions?
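
To make the objective above concrete, here is a minimal numerical sketch in Python/NumPy. The additive Gaussian corruption, the single-layer encoder and decoder, and the unit-variance Gaussian likelihood are illustrative assumptions rather than prescriptions, and the parameter values are placeholders, not trained weights.

    import numpy as np

    rng = np.random.default_rng(0)
    D, H = 10, 5                      # data and representation dimensions

    # Illustrative (untrained) parameters for encoder f_phi and decoder g_theta.
    W_enc = rng.normal(scale=0.1, size=(H, D))
    W_dec = rng.normal(scale=0.1, size=(D, H))

    def corrupt(y, sigma=0.3):
        """Perturbation: y' ~ C(y'|y); here additive Gaussian noise, one common choice."""
        return y + sigma * rng.normal(size=y.shape)

    def encoder(y_prime):
        """Encoder: z(y') = f_phi(y')."""
        return np.tanh(W_enc @ y_prime)

    def decoder(z):
        """Decoder: y ~ g_theta(z)."""
        return W_dec @ z

    def dae_objective(y):
        """L_DAE = log p(y | z(y')) under a unit-variance Gaussian likelihood."""
        y_prime = corrupt(y)
        z = encoder(y_prime)
        y_hat = decoder(z)
        return -0.5 * np.sum((y - y_hat) ** 2) - 0.5 * D * np.log(2 * np.pi)

    y = rng.normal(size=D)            # a stand-in data point
    print(dae_objective(y))           # averaged over all observations in practice

The GDAE objective would subtract an additional penalty $\lambda R(y, y')$ from this quantity; its form is left open here, which is exactly the design question raised above.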

Separating Model and Inference

The difficulty in reasoning statistically about auto-encoders is that they do not maintain or encourage a distinction between a model of the data (the statistical assumptions about the properties and structure we expect) and the approach for inference/estimation in that model (the ways in which we link the observed data to our modelling assumptions). The auto-encoder framework provides a computational pipeline, but not a statistical explanation, since to explain the data (which should be an outcome of our model) we must already know it and feed it in as an input. Not maintaining the distinction between model and inference impedes our ability to correctly evaluate and compare competing approaches to a problem, leaves us unaware of relevant approaches in related literatures that could provide useful insight, and makes it difficult to provide the guidance that allows our insights to be incorporated into our community's broader knowledge-base.

To ameliorate these concerns we typically re-interpret the auto-encoder by viewing the decoder as the statistical model of interest (this is indeed how many interpret and use auto-encoders in practice). A probabilistic decoder provides a generative description of the data, and our task is inference/learning in this model. For a given model, there are many competing approaches for inference, such as maximum likelihood (ML) and maximum a posteriori (MAP) estimation, noise-contrastive estimation, Markov chain Monte Carlo (MCMC), variational inference, cavity methods, integrated nested Laplace approximations (INLA), etc. The role of the encoder is now clear: the encoder is one mechanism for inference in the model described by the decoder. Its structure is not tied to the model (decoder), and it is just one choice from the smorgasbord of available approaches, with its own advantages and tradeoffs.

Approximate Inference in Latent Variable Models

Figure: Encoder-decoder view of inference in latent variable models.

Another difficulty with DAEs is that robustness is obtained by considering perturbations in the data space; such a corruption process will, in general, not be easy to design. Furthermore, by carefully reasoning about the induced probabilities, we can show [1] that the DAE objective function $\mathcal{L}_{\mathrm{DAE}}$ corresponds to a lower bound obtained by applying the variational principle to the log-density of the corrupted data, $\log p(y')$; this, though, is not a quantity we are interested in reasoning about.

A way forward would be to instead apply the variational principle to the quantity we are interested in, the log-marginal probability of the observed data, $\log p(y)$ [3][4]. The objective function obtained by applying the variational principle to the generative model (probabilistic decoder) is known as the variational free energy:

$\mathcal{L}_{\mathrm{VFE}} = \mathbb{E}_{q(z)}[\log p(y \mid z)] - \mathrm{KL}[q(z) \,\|\, p(z)]$

By inspection, we can see that this matches the form of the GDAE objective. There are notable differences though:

  • Instead of considering perturbations in the observation space, we consider perturbations in the hidden space, obtained by using a prior p(z). The hidden variables are now random, latent variables.  Auto-encoders are now generative models that are straightforward to sample from.
  • The encoder q(z|y) is a mechanism for approximating the true posterior distribution of the latent/hidden variables p(z|y).
  • We are now able to explain the introduction of the penalty function in the GDAE objective in a principled manner. Rather than designing the penalty by hand, we can derive the form this penalty should take: it appears as the KL divergence between the prior and the encoder distribution.
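
The correspondence between the free-energy objective and the GDAE form can be seen in a small self-contained sketch. It assumes a diagonal-Gaussian q(z), a standard normal prior p(z), and a unit-variance Gaussian likelihood through a placeholder linear decoder; the expectation is estimated by Monte Carlo, while the KL term is available in closed form for this choice of distributions.

    import numpy as np

    rng = np.random.default_rng(1)
    D, H = 10, 5

    W_dec = rng.normal(scale=0.1, size=(D, H))   # placeholder decoder g_theta
    mu_q = rng.normal(size=H)                    # mean of q(z)
    logvar_q = np.zeros(H)                       # log-variance of q(z)
    y = rng.normal(size=D)                       # a stand-in data point

    def log_lik(y, z):
        """log p(y|z) for a unit-variance Gaussian around the decoder output."""
        y_hat = W_dec @ z
        return -0.5 * np.sum((y - y_hat) ** 2) - 0.5 * D * np.log(2 * np.pi)

    def kl_q_p(mu, logvar):
        """KL[q(z) || p(z)] with q a diagonal Gaussian and p = N(0, I), in closed form."""
        return 0.5 * np.sum(mu ** 2 + np.exp(logvar) - logvar - 1.0)

    # E_q[log p(y|z)] estimated with Monte Carlo samples from q(z).
    samples = mu_q + np.exp(0.5 * logvar_q) * rng.normal(size=(100, H))
    expected_log_lik = np.mean([log_lik(y, z) for z in samples])

    L_vfe = expected_log_lik - kl_q_p(mu_q, logvar_q)
    print(L_vfe)   # reconstruction term minus a KL "penalty", mirroring the GDAE form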

Auto-encoders reformulated in this way thus provide an efficient way of implementing approximate Bayesian inference. Using an encoder-decoder structure, we gain the ability to jointly optimise all parameters using a single computational graph, and we obtain an efficient way of doing inference at test time, since we only need a single forward pass through the encoder. The cost of taking this approach is a potentially harder optimisation, since we have coupled the inferences for the latent variables together through the parameters of the encoder. Approaches that do not implement the q-distribution as an encoder can deal with arbitrary missingness patterns in the observed data; we lose this ability, since the encoder must be trained knowing the missingness pattern it will encounter. One way we explored these connections is in a model we called deep latent Gaussian models (DLGMs), with inference based on stochastic variational inference (and implemented using an encoder) [3], which is now the basis of a number of extensions [5][6].
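
As an illustration of this amortised set-up, the following sketch (assuming PyTorch) wires an encoder that outputs the mean and log-variance of q(z|y) together with a decoder into a single computational graph, so that one backward pass updates both sets of parameters; at test time, inference is a single forward pass through the encoder. The architecture, dimensions, and reparameterised Gaussian sampling are illustrative choices and not the specific DLGM of [3].

    import torch
    import torch.nn as nn

    class GaussianVAE(nn.Module):
        def __init__(self, data_dim=784, latent_dim=20, hidden=200):
            super().__init__()
            self.enc = nn.Sequential(nn.Linear(data_dim, hidden), nn.Tanh())
            self.enc_mu = nn.Linear(hidden, latent_dim)      # mean of q(z|y)
            self.enc_logvar = nn.Linear(hidden, latent_dim)  # log-variance of q(z|y)
            self.dec = nn.Sequential(nn.Linear(latent_dim, hidden), nn.Tanh(),
                                     nn.Linear(hidden, data_dim))

        def forward(self, y):
            h = self.enc(y)
            mu, logvar = self.enc_mu(h), self.enc_logvar(h)
            z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)  # reparameterised sample
            return self.dec(z), mu, logvar

    def free_energy(y, y_recon, mu, logvar):
        # Gaussian log-likelihood with unit variance (up to a constant),
        # minus the analytic KL[q(z|y) || N(0, I)].
        log_lik = -0.5 * ((y - y_recon) ** 2).sum(dim=1)
        kl = 0.5 * (mu ** 2 + logvar.exp() - logvar - 1.0).sum(dim=1)
        return (log_lik - kl).mean()

    model = GaussianVAE()
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)

    y = torch.randn(64, 784)                # stand-in batch of data
    y_recon, mu, logvar = model(y)
    loss = -free_energy(y, y_recon, mu, logvar)
    opt.zero_grad()
    loss.backward()                         # gradients flow to encoder and decoder jointly
    opt.step()

Note how the coupling described above appears directly in the code: every latent variable's approximate posterior is produced by the same encoder weights, so a poor encoder affects inference for all data points at once.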

Summary

Auto-encoders address the problem of statistical inference and provide a powerful mechanism for inference that plays a central role in our search for more powerful unsupervised learning. A statistical view, and variational reformulation, of auto-encoders allows us to maintain a clear distinction between the assumed statistical model and our approach for inference; it gives us one efficient way of implementing inference, gives us an easy-to-sample generative model, allows us to reason about the statistical quantity we are actually interested in, and provides a principled loss function that includes the important regularisation terms. This is just one perspective that is becoming increasingly popular, and it is worthwhile to reflect upon as we continue to explore the frontiers of unsupervised learning.


Some References
[1] Pascal Vincent, Hugo Larochelle, Yoshua Bengio, Pierre-Antoine Manzagol, Extracting and Composing Robust Features with Denoising Autoencoders, Proceedings of the 25th International Conference on Machine Learning, 2008
[2] Yoshua Bengio, Li Yao, Guillaume Alain, Pascal Vincent, Generalized Denoising Auto-encoders as Generative Models, Advances in Neural Information Processing Systems, 2013
[3] Danilo Jimenez Rezende, Shakir Mohamed, Daan Wierstra, Stochastic Backpropagation and Approximate Inference in Deep Generative Models, Proceedings of the 31st International Conference on Machine Learning, 2014
[4] Diederik P. Kingma, Max Welling, Auto-Encoding Variational Bayes, arXiv preprint arXiv:1312.6114, 2014
[5] Diederik P. Kingma, Shakir Mohamed, Danilo Jimenez Rezende, Max Welling, Semi-supervised Learning with Deep Generative Models, Advances in Neural Information Processing Systems, 2014
[6] Karol Gregor, Ivo Danihelka, Alex Graves, Daan Wierstra, DRAW: A Recurrent Neural Network for Image Generation, arXiv preprint arXiv:1502.04623, 2015
