Improved Semantic Representations From Tree-Structured Long Short-Term Memory Networks
Authors:
Kai Sheng Tai, Stanford University
Richard Socher, MetaMind
Christopher D. Manning, Stanford University
Datasets:
1) Stanford Sentiment Treebank: sentences labeled with five sentiment classes
2) Sentences Involving Compositional Knowledge (SICK): sentence pairs labeled with relatedness scores
1 Introduction
Most models for distributed representations of phrases and sentences (that is, models where real-valued vectors are used to represent meaning) fall into one of three classes:
bag-of-words models: the order of the words in the sentence is not captured
sequence models
tree-structured models: syntactic structure is incorporated
Compared with the standard LSTM, the Tree-LSTM has the following properties:
(1) a Tree-LSTM unit may depend on multiple child units
(2) there may be multiple forget gates, one for each child
The paper presents two Tree-LSTM variants:
(1) Child-Sum Tree-LSTMs
(2) N-ary Tree-LSTMs
The paper describes how the standard LSTM is generalized to tree structures: a generalization of the sequential LSTM that can represent the meaning of a sentence over its parse tree.
The difference:
the standard LSTM composes its hidden state from the input at the current time step and the hidden state of the LSTM unit at the previous time step;
the tree-structured LSTM, or Tree-LSTM, composes its state from an input vector and the hidden states of arbitrarily many child units.
The standard LSTM is a special case of the Tree-LSTM in which each internal node has exactly one child.
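As a minimal sketch of this generalization (not the authors' implementation), the node update of the Child-Sum variant can be written in plain NumPy. The gate equations follow the paper; the weight shapes, initialization, and class layout here are illustrative assumptions:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

class ChildSumTreeLSTMCell:
    """One Child-Sum Tree-LSTM node update (sketch).

    W* matrices act on the input x_j; U* matrices act on child hidden
    states. Shapes and random initialization are illustrative only.
    """
    def __init__(self, in_dim, mem_dim, seed=0):
        rng = np.random.default_rng(seed)
        def W():  # input -> gate
            return rng.normal(scale=0.1, size=(mem_dim, in_dim))
        def U():  # child hidden state -> gate
            return rng.normal(scale=0.1, size=(mem_dim, mem_dim))
        self.Wi, self.Ui, self.bi = W(), U(), np.zeros(mem_dim)
        self.Wf, self.Uf, self.bf = W(), U(), np.zeros(mem_dim)
        self.Wo, self.Uo, self.bo = W(), U(), np.zeros(mem_dim)
        self.Wu, self.Uu, self.bu = W(), U(), np.zeros(mem_dim)

    def forward(self, x, child_h, child_c):
        """x: (in_dim,); child_h, child_c: lists of (mem_dim,) arrays."""
        # Child-Sum: gates (except forget) see the SUM of child hidden states.
        h_tilde = np.sum(child_h, axis=0) if child_h else np.zeros_like(self.bi)
        i = sigmoid(self.Wi @ x + self.Ui @ h_tilde + self.bi)
        o = sigmoid(self.Wo @ x + self.Uo @ h_tilde + self.bo)
        u = np.tanh(self.Wu @ x + self.Uu @ h_tilde + self.bu)
        c = i * u
        # One forget gate per child: f_k depends on that child's own hidden state.
        for h_k, c_k in zip(child_h, child_c):
            f_k = sigmoid(self.Wf @ x + self.Uf @ h_k + self.bf)
            c = c + f_k * c_k
        h = o * np.tanh(c)
        return h, c
```

With an empty child list the update reduces to a leaf computation, and with exactly one child it recovers the sequential LSTM step, matching the special-case observation above.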
2 Long Short-Term Memory Networks
Two commonly used variants of the basic LSTM architecture:
the Bidirectional LSTM: at each time step, the hidden state of the Bidirectional LSTM is the concatenation of the forward and backward hidden states.
the Multilayer LSTM (also known as the stacked or deep LSTM): the idea is to let the higher layers capture longer-term dependencies of the input sequence.
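The bidirectional construction can be illustrated in a few lines, with a generic `step` function standing in for the LSTM cell (a sketch of the concatenation idea, not the paper's model):

```python
import numpy as np

def bidirectional_states(step, xs, h0):
    """Concatenate forward and backward hidden states at each time step.

    `step(x, h) -> h'` is a placeholder for any recurrent cell (an LSTM
    in the paper); this only shows how the BiLSTM representation is formed.
    """
    fwd, h = [], h0
    for x in xs:                       # left-to-right pass
        h = step(x, h)
        fwd.append(h)
    bwd, h = [], h0
    for x in reversed(xs):             # right-to-left pass
        h = step(x, h)
        bwd.append(h)
    bwd.reverse()                      # align backward states with positions
    return [np.concatenate([hf, hb]) for hf, hb in zip(fwd, bwd)]
```

Each position thus sees context from both directions, doubling the hidden dimension.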
3 Tree-Structured LSTMs
The paper proposes two architectures:
the Child-Sum Tree-LSTM
and the N-ary Tree-LSTM.
Under the Tree-RNN framework, the vectorial representation associated with each node of a tree is composed as a function of the vectors corresponding to the children of the node. The choice of composition function gives rise to numerous variants of this basic framework.
Tree-RNNs have been used to parse images of natural scenes (Socher et al., 2011), compose phrase representations from word vectors (Socher et al., 2012), and classify the sentiment polarity of sentences (Socher et al., 2013).
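The framework itself is just a bottom-up recursion; `embed` and `compose` below are placeholders for whatever concrete functions a given Tree-RNN variant chooses (a Tree-LSTM being one such choice). Trees are assumed to be nested `(label, word_or_children)` tuples, an illustrative representation:

```python
import numpy as np

def compose_tree(node, embed, compose):
    """Bottom-up Tree-RNN evaluation: a leaf gets its word embedding,
    an internal node gets compose(child_vectors)."""
    label, content = node
    if isinstance(content, str):        # leaf: (label, word)
        return embed(content)
    return compose([compose_tree(c, embed, compose) for c in content])
```

Swapping in a different `compose` (sum, single tanh layer, Tree-LSTM cell) yields the different variants the paragraph above alludes to.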
4 Models
Two applications of the Tree-LSTM:
(1) Classification
h_j is the embedding of node j computed by the Tree-LSTM; a classifier over h_j predicts the node's label.
(2) Semantic relatedness of sentence pairs
h_L and h_R are the Tree-LSTM embedding representations of the two sentences; the paper's similarity formulas operate on them to compare the semantic relatedness of the two sentences.
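The relatedness head can be sketched as follows: combine the elementwise product and the absolute difference of the two sentence vectors, pass them through a hidden layer, and take the expectation of a softmax over the K relatedness classes. All weights here are illustrative placeholders (the paper learns them jointly with the Tree-LSTM):

```python
import numpy as np

def relatedness_score(hL, hR, params, r):
    """Sketch of the similarity model over two sentence vectors.

    `params` holds (Wx, Wplus, bh, Wp, bp); shapes are illustrative.
    `r` is the vector of class values, e.g. [1, ..., K].
    """
    Wx, Wplus, bh, Wp, bp = params
    h_mul = hL * hR                     # elementwise product: sign agreement
    h_abs = np.abs(hL - hR)             # absolute difference: distance
    h_s = 1.0 / (1.0 + np.exp(-(Wx @ h_mul + Wplus @ h_abs + bh)))
    logits = Wp @ h_s + bp
    p = np.exp(logits - logits.max())
    p /= p.sum()                        # softmax: distribution over classes
    return float(r @ p)                 # predicted score = expected class
```

Because the output is an expectation over classes 1..K, the predicted score always falls inside the rating scale.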
6 Results
Evaluation metrics:
1) Pearson's r
2) Spearman's ρ
3) MSE
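The three metrics can be computed directly; a plain-NumPy sketch (the simple argsort-based rank used for Spearman ignores ties, which standard implementations handle more carefully):

```python
import numpy as np

def pearson(y, yhat):
    """Pearson correlation coefficient r."""
    y, yhat = np.asarray(y, float), np.asarray(yhat, float)
    yc, pc = y - y.mean(), yhat - yhat.mean()
    return float((yc @ pc) / np.sqrt((yc @ yc) * (pc @ pc)))

def spearman(y, yhat):
    """Spearman's rho: Pearson correlation of the ranks (ties ignored)."""
    rank = lambda a: np.argsort(np.argsort(np.asarray(a))).astype(float)
    return pearson(rank(y), rank(yhat))

def mse(y, yhat):
    """Mean squared error."""
    d = np.asarray(y, float) - np.asarray(yhat, float)
    return float(np.mean(d * d))
```

Pearson rewards linear agreement, Spearman rewards monotone agreement, and MSE penalizes absolute deviation, which is why the paper reports all three.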
6.1 Sentiment Classification
Fine-grained: 5-class sentiment classification.
Binary: positive/negative sentiment classification.
Fine-tuning the word vectors helps with the finer-grained distinctions.
For fine-grained sentiment analysis the Bi-LSTM works better than the LSTM, while for binary classification the two perform about the same. A conjecture: the fine-grained task needs richer, more complex interaction between the input representations and the hidden layers, whereas the state that must be preserved for binary classification is something the LSTM is already able to maintain.
6.2 Semantic Relatedness
--------------------------------------------------
The Stanford Sentiment Treebank:
the treebank format looks like this:
(0 (1 You) (2 (3 can) (4 (5 (6 run) (7 (8 this) (9 code))) (10 (11 with) (12 (13 (14 our) (15 (16 trained) (17 model))) (18 (19 on) (20 (21 (22 text) (23 files)) (24 (25 with) (26 (27 the) (28 (29 following) (30 command)))))))))))
This is the sentiment-treebank form of the sentence "You can run this code with our trained model on text files with the following command" as computed by the Stanford model.
The first element inside each pair of parentheses is the head of the rule. For example, for a rule with a single node on each side:
(1 You): 1 -> You, where 1 is the non-terminal symbol and You is the terminal symbol. The difference from the standard Penn Treebank is that the number represents the sentiment strength of the node, on a five-level scale.
(0 (1 You) (2 (3 can)…):
In this rule the right-hand side has two nodes, a standard binary branch: 0 -> 1, 2.
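A small recursive-descent parser for this s-expression format (a sketch; it assumes tokens become whitespace-separable once the parentheses are padded, which holds for the example above):

```python
def parse_sst(s):
    """Parse one s-expression like "(1 You)" or "(0 (1 You) (2 ...))"
    into nested (label, word) / (label, children) tuples."""
    tokens = s.replace('(', ' ( ').replace(')', ' ) ').split()
    pos = 0
    def node():
        nonlocal pos
        assert tokens[pos] == '('
        pos += 1
        label = int(tokens[pos]); pos += 1
        if tokens[pos] != '(':          # leaf: (label word)
            word = tokens[pos]; pos += 1
            pos += 1                    # consume ')'
            return (label, word)
        children = []
        while tokens[pos] == '(':       # internal node: (label child child ...)
            children.append(node())
        pos += 1                        # consume ')'
        return (label, children)
    return node()
```

The resulting nested tuples give exactly the child structure a Tree-LSTM walks bottom-up.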