A Statistical View of Deep Learning (III): Memory and Kernels
Memory, the ways in which we remember and recall past experiences and data to reason about future events, is a term used frequently in current literature. All models in machine learning consist of a memory that is central to their use. We have two principal types of memory mechanism, most often addressed through the types of models they stem from: parametric and non-parametric (and all the shades of grey in between). Deep networks represent the archetypal parametric model, in which memory is implemented by distilling the statistical properties of observed data into a set of model parameters, or weights. The poster-children for non-parametric models are kernel machines (and nearest neighbours), which implement their memory mechanism by explicitly storing all the data. It is easy to think that these represent fundamentally different ways of reasoning about data, but the way we derive these methods points to far deeper connections and a more fundamental similarity.
Deep networks, kernel methods and Gaussian processes form a continuum of approaches for solving the same problem - in their final form, these approaches might seem very different, but they are fundamentally related, and keeping this in mind can only be useful for future research. This connection is what I explore in this post.
Basis Functions and Neural Networks
All the methods in this post look at regression: learning discriminative or input-output mappings. All such methods extend the humble linear model, where we assume that linear combinations of the input data x, or transformations of it φ(x), explain the target values y. The φ(x) are basis functions that transform the data into a set of more interesting features. Features such as SIFT for images or MFCCs for audio have been popular in the past - in these cases, we still have a linear regression, since the basis functions are fixed. Neural networks give us the ability to use adaptive basis functions, allowing us to learn the best features from data instead of designing them by hand, and allowing for a non-linear regression.
A useful probabilistic formulation separates the regression into systematic and random components: the systematic component is a function f we wish to learn, and the targets are noisy realisations of this function. To connect neural networks to the linear model, I'll explicitly separate the last linear layer of the neural network from the layers that appear before it. Thus for an L-layer deep neural network, I'll denote the first L-1 layers by the mapping φ(x; θ) with parameters θ, and the final layer weights w; the set of all model parameters is q = {θ, w}.
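One natural way to write this model down - a reconstruction assuming Gaussian observation noise and a Gaussian prior on the output weights, consistent with the losses that follow - is:

$$
y_n = f(x_n) + \epsilon_n, \qquad f(x_n) = w^\top \phi(x_n; \theta), \qquad \epsilon_n \sim \mathcal{N}(0, \sigma_y^2), \qquad p(w) = \mathcal{N}(w \mid 0, \lambda^{-1} I).
$$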
Once we have specified our probabilistic model, this implies an objective function for optimising the model parameters, given by the negative log joint-probability. We can now apply back-propagation and learn all the parameters, performing MAP estimation in the neural network model. Memory in this model is maintained in the parametric modelling framework: we do not save the data, but compactly represent it by the parameters of our model. This formulation has many nice properties: we can encode properties of the data into the function f, such as being a 2D image for which convolutions are sensible, and we can choose to use a stochastic approximation for scalability, performing gradient descent using mini-batches instead of the entire data set. The loss function for the output weights is of particular interest, since it offers us a way to move from neural networks to other types of regression.
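Under the Gaussian model above, this negative log joint-probability is (up to constants, absorbing the noise variance into λ) the familiar regularised squared loss:

$$
J(\theta, w) = \frac{1}{2}\sum_{n=1}^{N} \big(y_n - w^\top \phi(x_n; \theta)\big)^2 + \frac{\lambda}{2}\, \|w\|^2 .
$$

As a toy illustration only, here is a minimal numpy sketch of MAP learning by mini-batch gradient descent on this loss, assuming a one-hidden-layer tanh network; the data, layer sizes and step sizes are all made up:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1-D regression data: noisy realisations of an underlying function.
X = rng.uniform(-3, 3, size=(200, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(200)

H, lam, lr = 32, 1e-2, 1e-1                 # hidden units, prior precision, step size
theta = rng.standard_normal((1, H)) * 0.5   # first L-1 layers: here one layer, phi(x; theta)
b = np.zeros(H)
w = rng.standard_normal(H) * 0.1            # final linear layer weights

for step in range(2000):
    idx = rng.choice(len(X), size=32, replace=False)  # mini-batch: stochastic approximation
    Xb, yb = X[idx], y[idx]
    A = np.tanh(Xb @ theta + b)             # adaptive basis functions phi(x; theta)
    r = A @ w - yb                          # residuals of the systematic component f
    # Gradients of the mini-batch negative log joint: squared error + Gaussian prior on w.
    gw = A.T @ r / len(idx) + lam * w
    gA = np.outer(r, w) * (1.0 - A**2)      # back-propagate through the tanh layer
    gtheta = Xb.T @ gA / len(idx)
    gb = gA.mean(axis=0)
    w, theta, b = w - lr * gw, theta - lr * gtheta, b - lr * gb
```

The point to notice is that the data never needs to be stored: after training, everything the model remembers about it lives in theta, b and w.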
Kernel Methods
If you stare a bit longer at this last objective function, especially as formulated by explicitly representing the last linear layer, you'll very quickly be tempted to compute its dual function [1, pp. 293]. We'll do this by first setting the derivative w.r.t. w to zero and solving for it:
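$$
w = -\frac{1}{\lambda} \sum_{n=1}^{N} \big(w^\top \phi(x_n) - y_n\big)\, \phi(x_n) = \Phi^\top \alpha,
\qquad \alpha_n = -\frac{1}{\lambda}\big(w^\top \phi(x_n) - y_n\big).
$$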
We've combined all the basis functions/features for the observations into the matrix Φ. By taking this optimal solution for the last-layer weights and substituting it into the loss function, two things emerge: we obtain a dual loss function rewritten entirely in terms of a new parameter α, and the computation involves the matrix product, or Gram matrix, K = ΦΦᵀ. We can repeat the process and solve the dual loss for the optimal parameter α, and obtain:
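$$
\alpha = (K + \lambda I_N)^{-1} y, \qquad K = \Phi \Phi^\top, \quad K_{nm} = \phi(x_n)^\top \phi(x_m).
$$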
And this is where kernel machines deviate from neural networks. Since we only ever need inner products of the features φ(x) (implied by maintaining K), instead of parameterising them using a non-linear mapping given by a deep network, we can use kernel substitution (a.k.a. the kernel trick) and get the same behaviour by choosing an appropriate and rich kernel function k(x, x'). This highlights the deep relationship between deep networks and kernel machines: they are more than simply related, they are duals of each other.
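To make the substitution concrete, here is a small numpy sketch - an illustration, not a production implementation - using an assumed squared-exponential kernel. Note that fitting and predicting only ever touch the data through k(·, ·):

```python
import numpy as np

def rbf_kernel(X1, X2, ls=1.0):
    # Squared-exponential kernel k(x, x') = exp(-||x - x'||^2 / (2 ls^2)).
    sq = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * sq / ls**2)

def fit_dual(X, y, lam=1e-2):
    # Dual solution alpha = (K + lam I)^{-1} y; note we keep all of X around.
    K = rbf_kernel(X, X)
    return np.linalg.solve(K + lam * np.eye(len(X)), y)

def predict(X_star, X, alpha):
    # f(x*) = sum_n alpha_n k(x_n, x*): memory is the stored data plus alpha.
    return rbf_kernel(X_star, X) @ alpha
```

Everything the model remembers is the pair (X, alpha): the training inputs themselves are part of the memory.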
The memory mechanism has now been completely transformed into a non-parametric one - we explicitly represent all the data points (through the matrix K). The advantage of the kernel approach is that it is often easier to encode properties of the functions we wish to represent, e.g., functions that are up to p-th order differentiable, or periodic functions; the drawback is that stochastic approximation is no longer possible. Predictions for a test point x* can now be written in a few different ways:
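$$
f(x^*) = w^\top \phi(x^*) = \alpha^\top \Phi\, \phi(x^*) = \sum_{n=1}^{N} \alpha_n\, k(x_n, x^*),
$$

with α = (K + λI)⁻¹ y as before.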
The last equality is a form of solution implied by the Representer theorem and shows that we can instead think of a different formulation of our problem: one that directly penalises the function we are trying to estimate, subject to the constraint that the function lies within a Hilbert space (and providing a direct non-parametric view):
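A standard way to write this penalised view of the problem, for a reproducing kernel Hilbert space H with kernel k, is:

$$
\min_{f \in \mathcal{H}} \; \frac{1}{2}\sum_{n=1}^{N} \big(y_n - f(x_n)\big)^2 + \frac{\lambda}{2}\, \|f\|_{\mathcal{H}}^2,
$$

whose minimiser, by the Representer theorem, has the form f(·) = Σₙ αₙ k(xₙ, ·).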
Gaussian Processes
We can go one step further and obtain not only a MAP estimate of the function f, but also its variance. To do this, we must specify a probability model that yields the same loss function as this last objective. This is possible since we now know what a suitable prior over functions is, and the resulting probabilistic model corresponds to Gaussian process (GP) regression [2]:
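In symbols, with the same Gaussian likelihood as before and the prior now placed directly on the function:

$$
f \sim \mathcal{GP}\big(0,\, k(x, x')\big), \qquad y_n \mid f \sim \mathcal{N}\big(f(x_n),\, \sigma_y^2\big).
$$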
We can now apply the standard rules for Gaussian conditioning to obtain a mean and variance for predictions at any test point x*. What we obtain is:
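Writing k_* for the vector of covariances between the test point and the training points, k_* = [k(x₁, x*), …, k(x_N, x*)]ᵀ:

$$
\mu(x^*) = k_*^\top \big(K + \sigma_y^2 I\big)^{-1} y, \qquad
\sigma^2(x^*) = k(x^*, x^*) - k_*^\top \big(K + \sigma_y^2 I\big)^{-1} k_*,
$$

where the noise variance σ_y² plays the role of the regulariser λ in the kernel solution.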
Conveniently, we obtain the same solution for the mean whether we use the kernel approach or the Gaussian conditioning approach. We now also have a way to compute the variance of the functions of interest, which is useful for many problems (such as active learning and optimistic exploration). Memory in the GP is also of the non-parametric flavour, since our problem is formulated in the same way as for kernel machines. GPs form another nice bridge between kernel methods and neural networks: we can see GPs as derived by Bayesian reasoning applied to kernel machines (which are themselves dual functions of neural nets), or we can obtain a GP by taking the number of hidden units in a one-layer neural network to infinity [3].
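A minimal numpy sketch of these two formulas, reusing the rbf_kernel helper from the earlier snippet (again an illustration under assumed hyperparameters, not a tuned implementation):

```python
import numpy as np

def gp_posterior(X, y, X_star, noise=0.1):
    # Posterior mean and pointwise variance at the test inputs X_star,
    # reusing rbf_kernel from the kernel-machine sketch above.
    K = rbf_kernel(X, X) + noise**2 * np.eye(len(X))
    K_s = rbf_kernel(X, X_star)              # k_*: train/test covariances
    K_ss = rbf_kernel(X_star, X_star)
    L = np.linalg.cholesky(K)                # stable Gaussian conditioning
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    mean = K_s.T @ alpha                     # identical in form to the kernel prediction
    v = np.linalg.solve(L, K_s)
    var = np.diag(K_ss) - (v**2).sum(axis=0)
    return mean, var
```

The Cholesky factorisation is the standard way to make the conditioning numerically stable, and the mean line is exactly the kernel-machine prediction from before.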
Summary
Deep neural networks, kernel methods and Gaussian processes are all different ways of solving the same problem - how to learn the best regression functions possible. They are deeply connected: starting from one we can derive any of the other methods, and they expose the many interesting ways in which we can address and combine approaches that are ostensibly in competition. I think such connections are very interesting, and should prove important as we continue to build more powerful and faithful models for regression and classification.
Some References
[1] Christopher M. Bishop. Pattern Recognition and Machine Learning. Springer, 2006.
[2] Carl Edward Rasmussen and Christopher K. I. Williams. Gaussian Processes for Machine Learning. MIT Press, 2006.
[3] Radford M. Neal. Bayesian Learning for Neural Networks. University of Toronto, 1994.