Machine Learning Trick of the Day (2): Gaussian Integral Trick

Today's trick, the Gaussian integral trick, is one that allows us to re-express a (potentially troublesome) function in an alternative form, in particular, as an integral of a Gaussian against another function — integrals against a Gaussian turn out not to be too troublesome and can provide many statistical and computational benefits. One popular setting where we can exploit such an alternative representation is for inference in discrete undirected graphical models (think Boltzmann machines or discrete Markov random fields). In such cases, this trick lets us transform our discrete problem into one that has an underlying continuous (Gaussian) representation, which we can then solve using our other machine learning tricks. But this is part of a more general strategy that is used throughout machine learning, whether in Bayesian posterior analysis, deep learning or kernel machines. This trick has many facets, and this post explores the Gaussian integral trick and its more general form, auxiliary variable augmentation.

[Figure: Gaussian integral trick state expansion.]

Gaussian Integral Trick

The Gaussian integral trick is one of a statistical flavour: it allows us to turn a function that is exponential in $x^2$ into one whose exponent is linear in $x$. We do this by introducing auxiliary variables and then integrating over them, which makes it a form of auxiliary variable augmentation. The simplest form of this trick is to apply the following identity:

$$\int \exp(-a y^2 + x y)\, dy = \sqrt{\frac{\pi}{a}}\, \exp\left(\frac{x^2}{4a}\right)$$

We can prove this to ourselves by exploiting our knowledge of Gaussian distributions (which this looks strikingly similar to) and our ability to complete the square when we see such quadratic forms. Separating out the scaling factor $a$ we get:

$$\int \exp(-a y^2 + x y)\, dy = \int \exp\left\{-a\left(y^2 - \frac{x}{a} y\right)\right\} dy$$

which, by completing the square, becomes:

$$\int \exp\left\{-a\left(y - \frac{x}{2a}\right)^2 + \frac{x^2}{4a}\right\} dy$$
$$= \exp\left(\frac{x^2}{4a}\right) \int \exp\left\{-a\left(y - \frac{x}{2a}\right)^2\right\} dy$$
$$= \exp\left(\frac{x^2}{4a}\right) \sqrt{\frac{\pi}{a}}$$

where the last integral is solved by matching it to a Gaussian with mean $\mu = \frac{x}{2a}$ and variance $\sigma^2 = \frac{1}{2a}$, which we know has a normalisation of $\sqrt{2\pi\sigma^2}$; this last step shows how this trick got its name.
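As a quick sanity check, the identity can also be verified numerically. Below is a minimal sketch; the constants $a$ and $x$ are arbitrary test values.

```python
# Numerically verify: integral of exp(-a y^2 + x y) dy = sqrt(pi/a) exp(x^2/(4a))
import numpy as np
from scipy.integrate import quad

a, x = 1.5, 0.7

# Left-hand side: direct numerical integration over y.
lhs, _ = quad(lambda y: np.exp(-a * y**2 + x * y), -np.inf, np.inf)

# Right-hand side: the closed form obtained by completing the square.
rhs = np.sqrt(np.pi / a) * np.exp(x**2 / (4 * a))

print(lhs, rhs)  # the two values agree to numerical precision
```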

The 'Gaussian integral trick' was coined and initially described by Hertz et al. [Ch. 10, pg. 253] [1], and is closely related to the Hubbard-Stratonovich transform (which provides the augmentation for $\exp(-x^2)$).
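Rearranged, the identity makes the direction of use explicit: a term that is exponential-quadratic in $x$ is traded for one that is exponential-linear in $x$, at the cost of an integral over the auxiliary variable $y$:

$$\exp\left(\frac{x^2}{4a}\right) = \sqrt{\frac{a}{\pi}} \int \exp(-a y^2 + x y)\, dy$$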

Transforming Binary MRFs

This trick is also valid in the multivariate case, which is what we will most often be interested in. One good place to see this trick in action is when applied to binary MRFs or Boltzmann machines. Binary MRFs have a joint probability, for binary random variables $\mathbf{x}$:

$$p(\mathbf{x}) = \frac{1}{Z} \exp\left(\boldsymbol{\theta}^\top \mathbf{x} + \mathbf{x}^\top W \mathbf{x}\right)$$

where $Z$ is the normalising constant. The (multivariate) Gaussian integral trick can be applied to the quadratic term in this energy function, allowing for an insightful analysis and an interesting reparameterisation that opens the model up to alternative inference methods, for example the continuous relaxations for discrete Hamiltonian Monte Carlo of Zhang et al. [2] and the auxiliary-variable exact HMC samplers for binary distributions of Pakman and Paninski [3].
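To make the multivariate statement concrete, the sketch below numerically checks the underlying identity $\int \exp(-\tfrac{1}{2}\mathbf{y}^\top A \mathbf{y} + \mathbf{x}^\top \mathbf{y})\, d\mathbf{y} = (2\pi)^{d/2}\, |A|^{-1/2} \exp(\tfrac{1}{2}\mathbf{x}^\top A^{-1} \mathbf{x})$ for a symmetric positive-definite $A$. Choosing $A^{-1} = 2W$ recovers the quadratic term $\exp(\mathbf{x}^\top W \mathbf{x})$ up to a constant; for binary $\mathbf{x}$ a diagonal shift can always make $W$ positive definite, since $x_i^2 = x_i$. All matrices and values below are arbitrary illustrations.

```python
# Check the multivariate Gaussian integral identity on a small example.
import numpy as np
from numpy.linalg import det, inv

rng = np.random.default_rng(0)
d = 2
B = rng.normal(size=(d, d))
A = B @ B.T + d * np.eye(d)  # a random symmetric positive-definite matrix
x = rng.normal(size=d)

# Left-hand side: brute-force integration of the (linear-in-x) integrand
# over a grid in y; the grid easily covers the Gaussian's support here.
g = np.linspace(-8.0, 8.0, 1601)
Y1, Y2 = np.meshgrid(g, g)
Y = np.stack([Y1.ravel(), Y2.ravel()], axis=1)
quad_form = np.einsum('ni,ij,nj->n', Y, A, Y)
lhs = np.sum(np.exp(-0.5 * quad_form + Y @ x)) * (g[1] - g[0]) ** 2

# Right-hand side: the closed form, which is quadratic in x.
rhs = (2 * np.pi) ** (d / 2) / np.sqrt(det(A)) * np.exp(0.5 * x @ inv(A) @ x)

print(lhs, rhs)  # the two values agree to grid precision
```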

Variable Augmentation

[Figure: Graphical model for a general augmentation.]

This trick is a special case of a more general strategy called variable (or data) augmentation. I prefer variable augmentation to data augmentation [4], since it will not be confused with the preprocessing and manipulation of observed data. In this setting, the introduction of auxiliary variables has most often been used to develop better-mixing Markov chain Monte Carlo samplers. This is because, after augmentation, the conditional distributions of the model often have highly convenient and easy-to-sample-from forms.

One recent example of variable augmentation (and one that parallels our initial trick) is the Polya-Gamma variable augmentation. In this case, we can express the sigmoid function that appears when computing the mean of the Bernoulli distribution as:

$$\sigma(x) = \frac{1}{1+\exp(-x)} = \frac{1}{2} \exp\left(\frac{x}{2}\right) \int_0^\infty \exp\left(-\frac{1}{2} y x^2\right) p(y)\, dy$$

where $p(y)$ is a Polya-Gamma distribution [5]. This nicely transforms the sigmoid into a Gaussian convolution (a Gaussian in $x$ integrated against a Polya-Gamma random variable), giving us a different type of Gaussian integral trick. In fact, similar Gaussian integral tricks abound, and are typically described under the heading of Gaussian scale-mixture distributions.
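As a sketch of this identity in action, the following Monte Carlo check compares the sigmoid to its Polya-Gamma representation. It assumes the third-party polyagamma package for drawing PG(1, 0) samples (any Polya-Gamma sampler would do), and the test point $x$ is arbitrary.

```python
# Monte Carlo check: sigma(x) = 1/2 * exp(x/2) * E[exp(-y x^2 / 2)], y ~ PG(1, 0)
import numpy as np
from polyagamma import random_polyagamma  # assumed third-party sampler

x = 1.7
y = random_polyagamma(1, 0, size=200_000, random_state=42)  # y ~ PG(1, 0)

sigmoid = 1.0 / (1.0 + np.exp(-x))
estimate = 0.5 * np.exp(x / 2) * np.mean(np.exp(-0.5 * y * x**2))

print(sigmoid, estimate)  # the two should closely agree
```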

There are many examples of variable augmentation to be found, especially for binary and categorical distributions. Much guidance is available; papers that demonstrate this include the classic augmentation for binary and polychotomous response data of Albert and Chib [4] and the Polya-Gamma augmentation for logistic models of Polson et al. [5].

Summary

The Gaussian integral trick is just one from a large class of variable augmentation strategies that are widely used in statistics and machine learning. They work by introducing auxiliary variables into our problems that induce an alternative representation, and that then give us additional statistical and computational benefits. Such methods lie at the heart of efficient inference algorithms, whether these be Monte Carlo or deterministic approximate inference schemes, making variable augmentation a favourite in our box of machine learning tricks.


Some References
[1] John Hertz, Anders Krogh, Richard G Palmer, Introduction to the Theory of Neural Computation, Addison-Wesley, 1991
[2] Yichuan Zhang, Zoubin Ghahramani, Amos J Storkey, Charles A Sutton, Continuous relaxations for discrete Hamiltonian Monte Carlo, Advances in Neural Information Processing Systems, 2012
[3] Ari Pakman, Liam Paninski, Auxiliary-variable exact Hamiltonian Monte Carlo samplers for binary distributions, Advances in Neural Information Processing Systems, 2013
[4] James H Albert, Siddhartha Chib, Bayesian analysis of binary and polychotomous response data, Journal of the American Statistical Association, 1993
[5] Nicholas G Polson, James G Scott, Jesse Windle, Bayesian inference for logistic models using Pólya-Gamma latent variables, Journal of the American Statistical Association, 2013
