The Joys of Conjugate Priors
(Warning: this post is a bit technical.)
Suppose you are a Bayesian reasoning agent. While going about your daily activities, you observe an event of type $x$. Because you're a good Bayesian, you have some internal parameter $\theta$ which represents your belief that $x$ will occur.
Now, you're familiar with the Ways of Bayes, and therefore you know that your beliefs must be updated with every new datapoint you perceive. Your observation of $x$ is a datapoint, and thus you'll want to modify $\theta$. But how much should this datapoint influence $\theta$?
Well, that will depend on how sure you are of $\theta$ in the first place. If you calculated $\theta$ based on a careful experiment involving hundreds of thousands of observations, then you're probably pretty confident in its value, and this single observation of $x$ shouldn't have much impact. But if your estimate of $\theta$ is just a wild guess based on something your unreliable friend told you, then this datapoint is important and should be weighted much more heavily in your reestimation of $\theta$.
Of course, when you reestimate $\theta$, you'll also have to reestimate how confident you are in its value. Or, to put it a different way, you'll want to compute a new probability distribution over possible values of $\theta$. This new distribution will be $P(\theta|x)$, and it can be computed using Bayes' rule:

$$P(\theta|x) = \frac{P(x|\theta)\,P(\theta)}{P(x)} = \frac{P(x|\theta)\,P(\theta)}{\int P(x|\theta')\,P(\theta')\,d\theta'}$$
Here, since $\theta$ is a parameter used to specify the distribution from which $x$ is drawn, it can be assumed that computing $P(x|\theta)$ is straightforward. $P(\theta)$ is your old distribution over $\theta$, which you already have; it says how accurate you think different settings of the parameter are, and allows you to compute your confidence in any given value of $\theta$. So the numerator should be straightforward to compute; it's the denominator which might give you trouble, since for an arbitrary distribution, computing the integral $\int P(x|\theta')\,P(\theta')\,d\theta'$ is likely to be intractable.
But you're probably not really looking for a distribution over different parameter settings; you're looking for a single best setting of the parameters that you can use for making predictions. If this is your goal, then once you've computed the distribution $P(\theta|x)$, you can pick the value of $\theta$ that maximizes it. This will be your new parameter, and because you have the formula for $P(\theta|x)$, you'll know exactly how confident you are in this parameter.
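To make this concrete, here's a minimal numerical sketch (in Python with NumPy; all names here are mine, not from the post): the posterior is evaluated on a grid of candidate values of $\theta$, the integral in the denominator is replaced by a discrete approximation, and the maximizing value (the so-called MAP, or maximum a posteriori, estimate) is read off with an argmax. This only stays cheap because $\theta$ here is one-dimensional.

```python
import numpy as np

# Candidate values of theta (the probability of heads).
thetas = np.linspace(0.001, 0.999, 999)

def grid_posterior(likelihood, prior, thetas):
    """Approximate P(theta | x) on a grid by normalizing likelihood * prior."""
    unnormalized = likelihood(thetas) * prior(thetas)
    # Discrete stand-in for the intractable integral in the denominator.
    return unnormalized / np.trapz(unnormalized, thetas)

prior = lambda t: np.ones_like(t)  # flat prior over theta
likelihood = lambda t: t           # P(x = 1 | theta) = theta: we saw one heads

posterior = grid_posterior(likelihood, prior, thetas)
theta_map = thetas[np.argmax(posterior)]  # the maximizing value of theta
print(theta_map)  # near 1.0: with a flat prior and a single heads,
                  # the mode sits at the edge of the grid
```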
In practice, picking the value of $\theta$ which maximizes $P(\theta|x)$ is usually pretty difficult, thanks to the presence of local optima, as well as the general difficulty of optimization problems. For simple enough distributions, you can use the EM algorithm, which is guaranteed to converge to a local optimum. But for more complicated distributions, even this method is intractable, and approximate algorithms must be used. Because of this concern, it's important to keep the distributions $P(x|\theta)$ and $P(\theta)$ simple.
Choosing the distribution $P(x|\theta)$ is a matter of model selection; more complicated models can capture deeper patterns in data, but will take more time and space to compute with. It is assumed that the type of model is chosen before deciding on the form of the distribution $P(\theta)$.
So how do you choose a good distribution for $P(\theta)$? Notice that every time you see a new datapoint, you'll have to do the computation in the equation above. Thus, in the course of observing data, you'll be multiplying lots of different probability distributions together. If these distributions are chosen poorly, $P(\theta|x)$ could get quite messy very quickly.
If you're a smart Bayesian agent, then, you'll pick $P(\theta)$ to be a conjugate prior to the distribution $P(x|\theta)$. The distribution $P(\theta)$ is conjugate to $P(x|\theta)$ if multiplying these two distributions together and normalizing results in another distribution of the same form as $P(\theta)$.
Let's consider a concrete example: flipping a biased coin. Suppose you use the Bernoulli distribution to model your coin. Then it has a parameter $\theta$ which represents the probability of getting heads. Assume that the value 1 corresponds to heads, and the value 0 corresponds to tails. Then the distribution of the outcome $x$ of the coin flip looks like this:

$$P(x|\theta) = \theta^x (1-\theta)^{1-x}$$
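This density is simple enough to evaluate directly; a tiny sketch (the function name is made up):

```python
def bernoulli_pmf(x, theta):
    """P(x | theta) for one flip: theta if heads (x=1), 1 - theta if tails."""
    return theta**x * (1 - theta)**(1 - x)

print(bernoulli_pmf(1, 0.3))  # 0.3, the probability of heads
print(bernoulli_pmf(0, 0.3))  # 0.7, the probability of tails
```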
It turns out that the conjugate prior for the Bernoulli distribution is something called the beta distribution. It has two parameters, $\alpha$ and $\beta$, which we call hyperparameters because they are parameters for a distribution over our parameters. (Eek!) The beta distribution looks like this:

$$P(\theta|\alpha,\beta) = \frac{\theta^{\alpha-1}(1-\theta)^{\beta-1}}{\int_0^1 \theta'^{\alpha-1}(1-\theta')^{\beta-1}\,d\theta'}$$
Since $\theta$ represents the probability of getting heads, it can take on any value between 0 and 1, and thus this function is normalized properly.
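A quick way to see this is to integrate the density numerically; here's a sketch using scipy's standard beta distribution (the hyperparameter values are arbitrary):

```python
from scipy import integrate, stats

alpha, beta = 2.0, 5.0            # arbitrary hyperparameter values
prior = stats.beta(alpha, beta)   # scipy's beta distribution

# The density should integrate to 1 over theta's domain, [0, 1].
total, _ = integrate.quad(prior.pdf, 0.0, 1.0)
print(total)  # ~1.0
```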
Suppose you observe a single coin flip $x$ and want to update your beliefs regarding $\theta$. Since the denominator of the beta distribution in the equation above is just a normalizing constant, you can ignore it for the moment while computing $P(\theta|x)$, as long as you promise to normalize after completing the computation:

$$P(\theta|x) \propto P(x|\theta)\,P(\theta|\alpha,\beta) \propto \theta^x(1-\theta)^{1-x}\,\theta^{\alpha-1}(1-\theta)^{\beta-1} = \theta^{(\alpha+x)-1}(1-\theta)^{(\beta+1-x)-1}$$
Normalizing this equation will, of course, give another beta distribution, this time with hyperparameters $\alpha + x$ and $\beta + (1 - x)$, confirming that this is indeed a conjugate prior for the Bernoulli distribution. Super cool, right?
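In code, the whole update amounts to shifting the hyperparameters. The sketch below (all names and values are mine) checks the closed-form update against brute-force normalization on a grid:

```python
import numpy as np
from scipy import stats

alpha, beta = 2.0, 5.0  # prior hyperparameters
x = 1                   # observed flip: heads

# Conjugate update: the posterior is Beta(alpha + x, beta + (1 - x)).
posterior = stats.beta(alpha + x, beta + (1 - x))

# Brute-force check: multiply likelihood by prior, then normalize on a grid.
thetas = np.linspace(0.001, 0.999, 999)
unnormalized = thetas**x * (1 - thetas)**(1 - x) * stats.beta(alpha, beta).pdf(thetas)
numeric = unnormalized / np.trapz(unnormalized, thetas)

print(np.max(np.abs(numeric - posterior.pdf(thetas))))  # ~0: the two agree
```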
If you are familiar with the binomial distribution, you should see that the numerator of the beta distribution in the equation for $P(\theta|\alpha,\beta)$ looks remarkably similar to the non-factorial part of the binomial distribution. This suggests a form for the normalization constant:

$$P(\theta|\alpha,\beta) = \frac{\Gamma(\alpha+\beta)}{\Gamma(\alpha)\,\Gamma(\beta)}\,\theta^{\alpha-1}(1-\theta)^{\beta-1}$$
The beta and binomial distributions are almost identical. The biggest difference between them is that the beta distribution is a function of $\theta$, with $\alpha$ and $\beta$ as prespecified parameters, while the binomial distribution is a function of the number of heads $k$, with $n$ and $\theta$ as prespecified parameters:

$$P(k|n,\theta) = \frac{n!}{k!\,(n-k)!}\,\theta^k(1-\theta)^{n-k}$$

It should be clear that the beta distribution is also conjugate to the binomial distribution, making it just that much awesomer.
Another difference between the two distributions is that the beta distribution uses gammas where the binomial distribution uses factorials. Recall that the gamma function is just a generalization of the factorial to the reals; thus, the beta distribution allows $\alpha$ and $\beta$ to be any positive real number, while the binomial distribution is only defined for integers. As a final note on the beta distribution, the $-1$ in the exponents is not philosophically significant; I think it is mostly there so that the gamma functions will not contain $+1$s. For more information about the mathematics behind the gamma function and the beta distribution, I recommend checking out this pdf: http://www.mhtl.uwaterloo.ca/courses/me755/web_chap1.pdf.
It gives an actual derivation which shows that the first equation for $P(\theta|\alpha,\beta)$ is equivalent to the second, which is nice if you don't find the argument by analogy to the binomial distribution convincing.
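If you'd rather check numerically than read the derivation, here's a sketch verifying that the integral form of the normalizing constant matches the gamma-function form (the parameter values are arbitrary):

```python
from scipy import integrate, special

alpha, beta = 2.7, 4.2  # arbitrary positive reals

# The normalizer as an explicit integral over theta ...
integral, _ = integrate.quad(lambda t: t**(alpha - 1) * (1 - t)**(beta - 1), 0, 1)

# ... and the closed form in terms of gamma functions.
closed = special.gamma(alpha) * special.gamma(beta) / special.gamma(alpha + beta)

print(integral, closed)  # the two values agree
```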
So, what is the philosophical significance of the conjugate prior? Is it just a pretty piece of mathematics that makes the computation work out the way we'd like it to? No; there is deep
philosophical significance to the form of the beta distribution.
Recall the intuition from above: if you've seen a lot of data already, then one more datapoint shouldn't change your understanding of the world too drastically. If, on the other hand, you've seen relatively little data, then a single datapoint could influence your beliefs significantly. This intuition is captured by the form of the conjugate prior. $\alpha$ and $\beta$ can be viewed as keeping track of how many heads and tails you've seen, respectively. So if you've already done some experiments with this coin, you can store that data in a beta distribution and use that as your conjugate prior. The beta distribution captures the difference between claiming that the coin has a 30% chance of coming up heads after seeing 3 heads and 7 tails, and claiming that the coin has a 30% chance of coming up heads after seeing 3000 heads and 7000 tails.
Suppose you haven't observed any coin flips yet, but you have some intuition about what the distribution should be. Then you can choose values for $\alpha$ and $\beta$ that represent your prior understanding of the coin. Higher values of $\alpha + \beta$ indicate more confidence in your intuition; thus, choosing the appropriate hyperparameters is a method of quantifying your prior understanding so that it can be used in computation. $\alpha$ and $\beta$ will act like "imaginary data": when you update your distribution over $\theta$ after observing a coin flip $x$, it will be like you already saw $\alpha$ heads and $\beta$ tails before that coin flip.
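Here's a sketch of that "imaginary data" interpretation (all values illustrative): the prior's pseudo-counts and the real observations enter the posterior in exactly the same way.

```python
from scipy import stats

alpha, beta = 3.0, 7.0   # "imaginary" heads and tails
flips = [1, 0, 1, 1, 0]  # real observations: 1 = heads, 0 = tails

# Each flip just increments the matching pseudo-count.
for x in flips:
    alpha += x
    beta += 1 - x

posterior = stats.beta(alpha, beta)
print(alpha, beta)        # 6.0, 9.0 -- imaginary and real data pooled
print(posterior.mean())   # expected probability of heads: 6/15 = 0.4
```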
If you want to express that you have no prior knowledge about the system, you can do so by setting $\alpha$ and $\beta$ to 1. This will turn the beta distribution into a uniform distribution. You can also use the beta distribution to do add-N smoothing, by setting $\alpha$ and $\beta$ to both be $N+1$. Setting the hyperparameters to a value lower than 1 causes them to act like "negative data", which helps avoid overfitting to noise in the actual data.
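As a sanity check on the add-N claim, the mode of the posterior under a Beta(N+1, N+1) prior matches the add-N smoothed frequency estimate; a quick sketch (the helper name is mine):

```python
def map_estimate(alpha, beta, heads, tails):
    """Mode of the posterior Beta(alpha + heads, beta + tails)."""
    return (alpha + heads - 1) / (alpha + beta + heads + tails - 2)

N = 1  # add-1 (Laplace) smoothing
heads, tails = 3, 7
print(map_estimate(N + 1, N + 1, heads, tails))  # (3+1)/(10+2) = 1/3
print((heads + N) / (heads + tails + 2 * N))     # identical add-N estimate
```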
In conclusion, the beta distribution, which is a conjugate prior to the Bernoulli and binomial distributions, is super awesome. It makes it possible to do Bayesian reasoning in a computationally efficient manner, as well as having the philosophically satisfying interpretation of representing real or imaginary prior data. Other conjugate priors, such as the Dirichlet prior for the multinomial distribution, are similarly cool.