Predictive learning vs. representation learning

When you take a machine learning class, there’s a good chance it’s divided into a unit on supervised learning and a unit on unsupervised learning. We certainly care about this distinction for a practical reason: often there’s orders of magnitude more data available if we don’t need to collect ground-truth labels. But we also tend to think it matters for more fundamental reasons. In particular, the following are some common intuitions:

  • In supervised learning, the particular algorithm is usually less important than engineering and tuning it really well. In unsupervised learning, we’d think carefully about the structure of the data and build a model which reflects that structure.
  • In supervised learning, except in small-data settings, we throw whatever features we can think of at the problem. In unsupervised learning, we carefully pick the features we think best represent the aspects of the data we care about.
  • Supervised learning seems to have many algorithms with strong theoretical guarantees, and unsupervised learning very few.
  • Off-the-shelf algorithms perform very well on a wide variety of supervised tasks, but unsupervised learning requires more care and expertise to come up with an appropriate model.

I’d argue that this is deceptive. I think the real division in machine learning isn’t between supervised and unsupervised learning, but between what I’ll term predictive learning and representation learning. I haven’t heard it described in precisely this way before, but I think this distinction reflects a lot of our intuitions about how to approach a given machine learning problem.

In predictive learning, we observe data drawn from some distribution, and we are interested in predicting some aspect of this distribution. In textbook supervised learning, for instance, we observe a bunch of pairs (x, y), and given some new example x, we’re interested in predicting something about the corresponding y. In density modeling (a form of unsupervised learning), we observe unlabeled data x, and we are interested in modeling the distribution the data comes from, perhaps so we can perform inference in that distribution. In each of these cases, there is a well-defined predictive task where we try to predict some aspect of the observable values, possibly given some other aspect.
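To make the contrast concrete, here is a minimal sketch (using scikit-learn, which is my choice of library and not something the post assumes): a supervised learner that predicts y from x sits next to a density model that answers queries about p(x).

```python
# Two textbook predictive tasks on toy data: predicting y given x,
# and modeling the distribution of x itself.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KernelDensity

rng = np.random.RandomState(0)
X = rng.randn(200, 2)                      # observed inputs x
y = (X[:, 0] + X[:, 1] > 0).astype(int)    # labels y (toy rule)

# Supervised predictive learning: predict something about y given a new x.
clf = LogisticRegression().fit(X, y)
print(clf.predict(rng.randn(1, 2)))

# Density modeling: fit a distribution over x, then query it (here, log p(x)).
kde = KernelDensity(bandwidth=0.5).fit(X)
print(kde.score_samples(rng.randn(1, 2)))
```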

In representation learning, our goal isn’t to predict observables, but to learn something about the underlying structure. In cognitive science and AI, a representation is a formal system which maps to some domain of interest in systematic ways. A good representation allows us to answer queries about the domain by manipulating that system. In machine learning, representations often take the form of vectors, either real- or binary-valued, and we can manipulate these representations with operations like Euclidean distance and matrix multiplication. For instance, PCA learns representations of data points as vectors. We can ask how similar two data points are by checking the Euclidean distance between them.
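As a small illustration (scikit-learn’s PCA is my choice of tool here, not something the post specifies), the learned representation is just the low-dimensional vector assigned to each data point, and a similarity query reduces to a Euclidean distance:

```python
# PCA as representation learning: map each point to a low-dimensional vector,
# then answer "how similar are these two points?" with Euclidean distance.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.RandomState(0)
X = rng.randn(100, 50)                 # 100 points in a 50-d observation space

pca = PCA(n_components=5).fit(X)
Z = pca.transform(X)                   # learned 5-d representations

# Similarity query: distance between the representations of points 0 and 1.
print(np.linalg.norm(Z[0] - Z[1]))
```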

In representation learning, the goal isn’t to make predictions about observables, but to learn a representation which would later help us to answer various queries. Sometimes the representations are meant for people, such as when we visualize data as a two-dimensional embedding. Sometimes they’re meant for machines, such as when the binary vector representations learned by deep Boltzmann machines are fed into a supervised classifier. In either case, what’s important is that mathematical operations map to the underlying relationships in the data in systematic ways.
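Here is a hedged sketch of the “representations meant for machines” case, under the assumption that a scikit-learn BernoulliRBM stands in for the deep Boltzmann machine mentioned above: the unsupervised model’s hidden activations become the features a supervised classifier consumes.

```python
# Unsupervised feature learner feeding a supervised classifier.
# The RBM is a stand-in for a deep Boltzmann machine; the data are toy.
import numpy as np
from sklearn.neural_network import BernoulliRBM
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline

rng = np.random.RandomState(0)
X = (rng.rand(300, 64) > 0.5).astype(float)   # toy binary data
y = rng.randint(0, 2, size=300)               # toy labels

pipeline = Pipeline([
    ("features", BernoulliRBM(n_components=16, learning_rate=0.05,
                              n_iter=10, random_state=0)),
    ("classifier", LogisticRegression()),
])
pipeline.fit(X, y)
print(pipeline.score(X, y))
```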

Whether your goal is prediction or representation learning influences the sorts of techniques you’ll use to solve the problem. If you’re doing predictive learning, you’ll probably try to engineer a system which exploits as much information as possible about the data, carefully using a validation set to tune parameters and monitor overfitting. If you’re doing representation learning, there’s no good quantitative criterion, so you’ll more likely build a model based on your intuitions about the domain, and then keep staring at the learned representations to see if they make intuitive sense.

In other words, the predictive/representation distinction parallels the differences I listed above between supervised and unsupervised learning. This shouldn’t be surprising, because the two dimensions are strongly correlated: most supervised learning is predictive learning, and most unsupervised learning is representation learning. So to see which of these dimensions is really the crux of the issue, let’s look at cases where the two differ.

Language modeling is a perfect example of an application which is unsupervised but predictive. The goal is to take a large corpus of unlabeled text (such as Wikipedia) and learn a distribution over English sentences. The problem is motivated by Bayesian models for speech recognition: a distribution over sentences can be used as a prior for what a person is likely to say. The goal, then, is to model the distribution, and any additional structure is unnecessary. Log-linear models, such as that of Mnih et al. [1], are very good at this, and recurrent neural nets [2] are even better. These are the sorts of approaches we’d normally apply in a supervised setting: very good at making predictions, but often hard to interpret. One state-of-the-art algorithm for density modeling of text is PAQ [3], which is a heavily engineered ensemble of sequential predictors, somewhat reminiscent of the winning entries of the Netflix competition.
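None of the models cited above are shown here, but even a toy bigram model makes the point: a language model is judged purely on how well it predicts held-out text (e.g. per-word log probability), and no representation ever needs to be inspected.

```python
# A minimal add-alpha smoothed bigram language model, judged only by
# the log probability it assigns to text.
from collections import Counter, defaultdict
import math

def train_bigram(sentences):
    counts = defaultdict(Counter)
    for s in sentences:
        tokens = ["<s>"] + s.split() + ["</s>"]
        for prev, cur in zip(tokens, tokens[1:]):
            counts[prev][cur] += 1
    return counts

def log_prob(counts, sentence, alpha=1.0, vocab_size=10000):
    tokens = ["<s>"] + sentence.split() + ["</s>"]
    total = 0.0
    for prev, cur in zip(tokens, tokens[1:]):
        c = counts[prev]
        total += math.log((c[cur] + alpha) / (sum(c.values()) + alpha * vocab_size))
    return total

counts = train_bigram(["the cat sat", "the dog sat", "a cat ran"])
print(log_prob(counts, "the cat ran"))   # higher is better on held-out text
```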

On the flip side, supervised neural nets are often used to learn representations. One example is Collobert-Weston networks [4], which attempt to solve a number of supervised NLP tasks by learning representations which are shared between them. Some of the tasks are fairly simple and have a large amount of labeled data, such as predicting which of two words should be used to fill in the blank. Others are harder and have less data available, such as semantic role labeling. The simpler tasks are artificial, and they are there to help learn a representation of words and phrases as vectors, where similar words and phrases map to nearby vectors; this representation should then help performance on the harder tasks. We don’t care about the performance on those tasks per se; we care whether the learned embeddings reflect the underlying structure. To debug and tune the algorithm, we’d focus on whether the representations make intuitive sense, rather than on the quantitative performance. There are no theoretical guarantees that such an approach would work — it all depends on our intuitions of how the different tasks are related.
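As a rough sketch of the shared-representation idea (written in PyTorch, and deliberately much simpler than the actual Collobert-Weston architecture): one embedding table is updated by the losses of both an easy task and a hard one, and the embedding table itself is the artifact we actually care about.

```python
# One shared embedding table, two task heads. The supervised losses exist
# mainly to shape the embeddings.
import torch
import torch.nn as nn

vocab_size, embed_dim, num_tags = 1000, 32, 10

shared_embedding = nn.Embedding(vocab_size, embed_dim)
easy_head = nn.Linear(embed_dim, 2)         # e.g. which of two words fits a blank
hard_head = nn.Linear(embed_dim, num_tags)  # e.g. a semantic-role-like label

params = (list(shared_embedding.parameters())
          + list(easy_head.parameters()) + list(hard_head.parameters()))
optimizer = torch.optim.SGD(params, lr=0.1)

# One toy training step: both losses update the shared embeddings.
words = torch.randint(0, vocab_size, (8,))
easy_labels = torch.randint(0, 2, (8,))
hard_labels = torch.randint(0, num_tags, (8,))

features = shared_embedding(words)
loss = (nn.functional.cross_entropy(easy_head(features), easy_labels)
        + nn.functional.cross_entropy(hard_head(features), hard_labels))
optimizer.zero_grad()
loss.backward()
optimizer.step()

# Afterward, shared_embedding.weight holds the learned word representations.
```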

Based on these two examples, it seems like it’s the predictive/representation dimension which determines how we should approach the problem, rather than supervised/unsupervised.

In machine learning, we tend to think there’s no solid theoretical framework for unsupervised learning. But really, the problem is that we haven’t begun to formally characterize the problem of representation learning. If you just want to build a density modeler, that’s about as well understood as the supervised case. But if the goal is to learn representations which capture the underlying structure, that’s much harder to formalize. In my next post, I’ll try to take a stab at characterizing what representation learning is actually about.

[1] Mnih, A., and Hinton, G. E. Three new graphical models for statistical language modeling. NIPS 2009.

[2] Sutskever, I., Martens, J., and Hinton, G. E. Generating text with recurrent neural networks. ICML 2011.

[3] Mahoney, M. Adaptive weighting of context models for lossless data compression. Florida Institute of Technology tech report, 2005.

[4] Collobert, R., and Weston, J. A unified architecture for natural language processing: deep neural networks with multitask learning. ICML 2008.

 


By Roger Grosse – February 4, 2013 
