Chinese translation: Deep Learning, Natural Language Processing, and Representations

http://blog.jobbole.com/77709/

English original: Deep Learning, NLP, and Representations

http://colah.github.io/posts/2014-07-NLP-RNNs-Representations/

Summary:

The article mainly covers three concepts: single-layer neural networks, word embeddings, and representations. It works through concrete examples, reads very accessibly, and gives a link at every point where a reference is cited; some of the work in those references is quite interesting.

The introduction to single-layer neural networks is easy to follow; comparing a neural network to a lookup table is very intuitive.
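As a minimal sketch of that lookup-table intuition (toy sizes and random weights of my own choosing, not from the article): multiplying a one-hot input by the layer's weight matrix simply selects one row of the matrix.

```python
import numpy as np

vocab_size, hidden_size = 5, 3                 # made-up toy sizes
W = np.random.randn(vocab_size, hidden_size)   # the single layer's weights

x = np.zeros(vocab_size)                       # one-hot encoding of word #2
x[2] = 1.0

hidden = x @ W                                 # the layer's output for this word
assert np.allclose(hidden, W[2])               # identical to looking up row 2
```

So the hidden layer is literally a table whose rows are learned per-word vectors.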

On word embeddings, the article explains that a word is mapped to a many-dimensional vector, and it uses the t-SNE tool to visualize the word-embedding space directly; together with the examples in the table, this makes things even easier to follow. What is most interesting is that words with similar meanings sit close together in the embedding space, a property with real practical value. The article mentions concrete applications that exploit it, such as checking for grammatical errors; likewise, the difference vectors between gendered word pairs (man-woman, uncle-aunt, king-queen, and so on) are very similar to one another. The original author's take on these applications is that they are by-products that fell out of the research process.
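To make the closeness and analogy points concrete, here is a tiny sketch with hypothetical 2-d vectors (real embeddings have far more dimensions, and these numbers are made up): the combination king - man + woman lands nearest to queen.

```python
import numpy as np

# Hypothetical 2-d embeddings, purely for illustration.
emb = {
    "king":  np.array([0.9, 0.8]),
    "queen": np.array([0.9, 0.2]),
    "man":   np.array([0.1, 0.8]),
    "woman": np.array([0.1, 0.2]),
    "uncle": np.array([0.3, 0.7]),
    "aunt":  np.array([0.3, 0.1]),
}

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

# Analogy via difference vectors: king - man + woman ≈ queen.
target = emb["king"] - emb["man"] + emb["woman"]
best = max((w for w in emb if w not in ("king", "man", "woman")),
           key=lambda w: cosine(emb[w], target))
print(best)  # "queen" for these toy vectors
```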

On representations: word embeddings presumably count as one kind. The article focuses on shared representations (shared embeddings), in which embeddings from several different spaces are mapped into one common space, and presents two nice applications: bilingual word embeddings, and image-word embeddings.

With bilingual word embeddings, if you visualize the embedding spaces of the two languages, their shapes look alike, and words with similar meanings sit in nearby positions in the picture.

With image-word embeddings, information expressing the same concept again ends up close together: an image of a cat sits near the word "cat", and an image of a car sits near the word "car". The article cites work from a group at Stanford and a group at Google here, which I found quite interesting.
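As a rough illustration of mapping two spaces into one (this is not the training procedure of the cited papers, just a least-squares simplification over made-up vectors): given separate embeddings for two languages and a few known translation pairs, fit a linear map that carries one space into the other.

```python
import numpy as np

# Hypothetical pre-trained embeddings for a few translation pairs.
en = {"cat": np.array([0.8, 0.1]),
      "dog": np.array([0.7, 0.3]),
      "car": np.array([0.1, 0.9])}
zh = {"猫": np.array([0.4, 0.2]),
      "狗": np.array([0.35, 0.3]),
      "车": np.array([0.05, 0.5])}
pairs = [("cat", "猫"), ("dog", "狗"), ("car", "车")]

# Fit W by least squares so that zh_vec @ W ≈ en_vec.
X = np.stack([zh[z] for _, z in pairs])
Y = np.stack([en[e] for e, _ in pairs])
W, *_ = np.linalg.lstsq(X, Y, rcond=None)

# After mapping, a word and its translation live in one shared space.
print(zh["猫"] @ W, en["cat"])
```

The jointly trained embeddings in the article go further than a fixed linear map, but the pay-off is the same: distances across languages (or across images and words) become meaningful.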

Finally, the article introduces recursive neural networks and explains why they suit NLP.
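The core of that recursive idea can be sketched in a few lines (the sizes, the tanh nonlinearity, and the left-to-right folding order are my assumptions; the article presents the merge module more generally): one shared module combines two child vectors into a parent vector, so inputs of any length collapse into a single representation.

```python
import numpy as np

dim = 4                                   # made-up embedding width
rng = np.random.default_rng(0)
W = rng.standard_normal((dim, 2 * dim))   # shared merge weights
b = rng.standard_normal(dim)

def merge(left, right):
    """Combine two child representations into one parent vector."""
    return np.tanh(W @ np.concatenate([left, right]) + b)

# Fold five word vectors into one phrase vector, reusing the same
# module at every step; no fixed input length is required.
words = [rng.standard_normal(dim) for _ in range(5)]
phrase = words[0]
for w in words[1:]:
    phrase = merge(phrase, w)
print(phrase.shape)  # (4,) -- a single vector for the whole phrase
```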

Overall, this reads more like a good popular-science piece. Personally I felt I got something out of it, especially the concept of word embeddings.

Yesterday I searched around for blogs on deep learning and found them quite hard going.

I feel I still only grasp the surface; I don't yet know how to actually use DL in NLP.

Below are some concepts and sentences from the English original that I thought were particularly good.

1: It’s true, essentially, because the hidden layer can be used as a lookup table.

2: word embeddings;

3: It seems natural for a network to make words with similar meanings have similar vectors.

4: You’ve seen all the words that you understand before, but you haven’t seen all the sentences that you understand before. So too with neural networks.

5: Word embeddings exhibit an even more remarkable property: analogies between words seem to be encoded in the difference vectors between words.

6: This general tactic – learning a good representation on a task A and then using it on a task B – is one of the major tricks in the Deep Learning toolbox. It goes by different names depending on the details: pretraining, transfer learning, and multi-task learning. One of the great strengths of this approach is that it allows the representation to learn from more than one kind of data.

There’s a counterpart to this trick. Instead of learning a way to represent one kind of data and using it to perform multiple kinds of tasks, we can learn a way to map multiple kinds of data into a single representation!

7: Shared Representations

(1) Bilingual Word Embeddings;

(2) Embed images and words in a single representation;

8: By merging sequences of words, A takes us from representing words to representing phrases or even representing whole sentences! And because we can merge together different numbers of words, we don’t have to have a fixed number of inputs.

