100 Must-Read NLP Papers

A paper collection I compiled myself, now updated.
Link: https://pan.baidu.com/s/16k2s2HYfrKHLBS5lxZIkuw
Extraction code: x7tn

This is a list of 100 important natural language processing (NLP) papers that serious students and researchers working in the field should probably know about and read.

This list is compiled by Masato Hagiwara.

I welcome any feedback on this list. This list is originally based on the answers to a Quora question I posted years ago: What are the most important research papers which all NLP students should definitely read?

I thank all the people who contributed to the original post.

This list is far from complete or objective, and is evolving, as important papers are being published year after year.

Please let me know via pull requests and issues if anything is missing.

Also, I didn't try to include links to original papers since it is a lot of work to keep dead links up to date.

I'm sure you can find most (if not all) of the papers listed here via a single Google search by their titles.

A paper doesn't have to be a peer-reviewed conference/journal paper to appear here.

We also include tutorial/survey-style papers and blog posts that are often easier to understand than the original papers.

Machine Learning

  • Avrim Blum and Tom Mitchell: Combining Labeled and Unlabeled Data with Co-Training, 1998.
  • John Lafferty, Andrew McCallum, Fernando C.N. Pereira: Conditional Random Fields: Probabilistic Models for Segmenting and Labeling Sequence Data, ICML 2001.
  • Charles Sutton, Andrew McCallum. An Introduction to Conditional Random Fields for Relational Learning.
  • Kamal Nigam, et al.: Text Classification from Labeled and Unlabeled Documents using EM. Machine Learning, 1999.
  • Kevin Knight: Bayesian Inference with Tears, 2009.
  • Marco Tulio Ribeiro et al.: "Why Should I Trust You?": Explaining the Predictions of Any Classifier, KDD 2016.

Neural Models

  • Richard Socher, et al.: Dynamic Pooling and Unfolding Recursive Autoencoders for Paraphrase Detection, NIPS 2011.
  • Ronan Collobert et al.: Natural Language Processing (almost) from Scratch, J. of Machine Learning Research, 2011.
  • Richard Socher, et al.: Recursive Deep Models for Semantic Compositionality Over a Sentiment Treebank, EMNLP 2013.
  • Xiang Zhang, Junbo Zhao, and Yann LeCun: Character-level Convolutional Networks for Text Classification, NIPS 2015.
  • Yoon Kim: Convolutional Neural Networks for Sentence Classification, 2014.
  • Christopher Olah: Understanding LSTM Networks, 2015.
  • Matthew E. Peters, et al.: Deep contextualized word representations, 2018.
  • Jacob Devlin, et al.: BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding, 2018.

Clustering & Word Embeddings

  • Peter F Brown, et al.: Class-Based n-gram Models of Natural Language, 1992.
  • Tomas Mikolov, et al.: Efficient Estimation of Word Representations in Vector Space, 2013.
  • Tomas Mikolov, et al.: Distributed Representations of Words and Phrases and their Compositionality, NIPS 2013.
  • Quoc V. Le and Tomas Mikolov: Distributed Representations of Sentences and Documents, 2014.
  • Jeffrey Pennington, et al.: GloVe: Global Vectors for Word Representation, 2014.
  • Ryan Kiros, et al.: Skip-Thought Vectors, 2015.
  • Piotr Bojanowski, et al.: Enriching Word Vectors with Subword Information, 2017.
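
The methods in this section build on the distributional hypothesis: words that appear in similar contexts have similar meanings. As a toy illustration of that intuition (not code from any paper above), the sketch below builds sparse co-occurrence count vectors from a tiny made-up corpus and compares words by cosine similarity; word2vec, GloVe, and fastText learn dense vectors that capture the same idea far more effectively.

```python
from collections import Counter
from math import sqrt

# Toy corpus: words that appear in similar contexts get similar vectors.
corpus = [
    "the cat sat on the mat".split(),
    "the dog sat on the rug".split(),
    "the cat chased the dog".split(),
]

def cooccurrence_vectors(sentences, window=2):
    """Build a sparse co-occurrence count vector (a Counter) for each word."""
    vectors = {}
    for sent in sentences:
        for i, word in enumerate(sent):
            context = vectors.setdefault(word, Counter())
            for j in range(max(0, i - window), min(len(sent), i + window + 1)):
                if j != i:
                    context[sent[j]] += 1
    return vectors

def cosine(u, v):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(count * v[word] for word, count in u.items() if word in v)
    norm_u = sqrt(sum(c * c for c in u.values()))
    norm_v = sqrt(sum(c * c for c in v.values()))
    return dot / (norm_u * norm_v) if norm_u and norm_v else 0.0

vectors = cooccurrence_vectors(corpus)
# "cat" and "dog" share contexts ("the", "sat", ...), so similarity is well above zero.
print(cosine(vectors["cat"], vectors["dog"]))
```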

Topic Models

  • Thomas Hofmann: Probabilistic Latent Semantic Indexing, SIGIR 1999.
  • David Blei, Andrew Y. Ng, and Michael I. Jordan: Latent Dirichlet Allocation, J. Machine Learning Research, 2003.

Language Modeling

  • Joshua Goodman: A bit of progress in language modeling, MSR Technical Report, 2001.
  • Stanley F. Chen and Joshua Goodman: An Empirical Study of Smoothing Techniques for Language Modeling, ACL 1996.
  • Yee Whye Teh: A Hierarchical Bayesian Language Model based on Pitman-Yor Processes, COLING/ACL 2006.
  • Yee Whye Teh: A Bayesian interpretation of Interpolated Kneser-Ney, 2006.
  • Yoshua Bengio, et al.: A Neural Probabilistic Language Model, J. of Machine Learning Research, 2003.
  • Andrej Karpathy: The Unreasonable Effectiveness of Recurrent Neural Networks, 2015.
  • Yoon Kim, et al.: Character-Aware Neural Language Models, 2015.
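
The smoothing papers above all address one core problem: n-grams unseen in training must not receive zero probability. A minimal sketch of the idea, using add-k smoothing on a bigram model over a made-up two-sentence corpus (far cruder than the Kneser-Ney family those papers study):

```python
from collections import Counter

def train_bigram_lm(sentences):
    """Count unigrams and bigrams over sentences padded with <s> / </s>."""
    unigrams, bigrams, vocab = Counter(), Counter(), set()
    for sent in sentences:
        tokens = ["<s>"] + sent + ["</s>"]
        vocab.update(tokens)
        unigrams.update(tokens[:-1])          # histories only
        bigrams.update(zip(tokens, tokens[1:]))
    return unigrams, bigrams, vocab

def bigram_prob(prev, word, unigrams, bigrams, vocab, k=1.0):
    """Add-k smoothed P(word | prev): unseen bigrams get nonzero mass."""
    return (bigrams[(prev, word)] + k) / (unigrams[prev] + k * len(vocab))

corpus = ["the cat sat".split(), "the dog sat".split()]
uni, bi, vocab = train_bigram_lm(corpus)
# A seen bigram outscores an unseen one under the same history.
p_seen = bigram_prob("the", "cat", uni, bi, vocab)
p_unseen = bigram_prob("the", "sat", uni, bi, vocab)
print(p_seen, p_unseen)
```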

Segmentation, Tagging, Parsing

  • Donald Hindle and Mats Rooth. Structural Ambiguity and Lexical Relations, Computational Linguistics, 1993.
  • Adwait Ratnaparkhi: A Maximum Entropy Model for Part-Of-Speech Tagging, EMNLP 1996.
  • Eugene Charniak: A Maximum-Entropy-Inspired Parser, NAACL 2000.
  • Michael Collins: Discriminative Training Methods for Hidden Markov Models: Theory and Experiments with Perceptron Algorithms, EMNLP 2002.
  • Dan Klein and Christopher Manning: Accurate Unlexicalized Parsing, ACL 2003.
  • Dan Klein and Christopher Manning: Corpus-Based Induction of Syntactic Structure: Models of Dependency and Constituency, ACL 2004.
  • Joakim Nivre and Mario Scholz: Deterministic Dependency Parsing of English Text, COLING 2004.
  • Ryan McDonald et al.: Non-Projective Dependency Parsing using Spanning-Tree Algorithms, EMNLP 2005.
  • Daniel Andor et al.: Globally Normalized Transition-Based Neural Networks, 2016.
  • Oriol Vinyals, et al.: Grammar as a Foreign Language, 2015.
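
Many of the taggers and parsers above search for the highest-scoring structure with dynamic programming. As a generic illustration (not any specific paper's model), here is Viterbi decoding for a toy HMM part-of-speech tagger; all probabilities below are made-up numbers:

```python
def viterbi(obs, states, start_p, trans_p, emit_p):
    """Return the most probable tag sequence for `obs` under an HMM."""
    # table[t][s] = (best score of a path ending in state s at time t, backpointer)
    table = [{s: (start_p[s] * emit_p[s].get(obs[0], 0.0), None) for s in states}]
    for t in range(1, len(obs)):
        column = {}
        for s in states:
            prev = max(states, key=lambda p: table[t - 1][p][0] * trans_p[p].get(s, 0.0))
            score = table[t - 1][prev][0] * trans_p[prev].get(s, 0.0)
            column[s] = (score * emit_p[s].get(obs[t], 0.0), prev)
        table.append(column)
    # Backtrace from the best final state.
    state = max(states, key=lambda s: table[-1][s][0])
    path = [state]
    for t in range(len(obs) - 1, 0, -1):
        state = table[t][state][1]
        path.append(state)
    return list(reversed(path))

# Toy model with invented parameters: determiner / noun / verb.
states = ["DET", "NOUN", "VERB"]
start_p = {"DET": 0.8, "NOUN": 0.1, "VERB": 0.1}
trans_p = {
    "DET":  {"NOUN": 0.9, "DET": 0.05, "VERB": 0.05},
    "NOUN": {"VERB": 0.8, "NOUN": 0.1, "DET": 0.1},
    "VERB": {"DET": 0.5, "NOUN": 0.3, "VERB": 0.2},
}
emit_p = {
    "DET":  {"the": 1.0},
    "NOUN": {"cat": 0.5, "dog": 0.5},
    "VERB": {"sat": 0.5, "chased": 0.5},
}
print(viterbi("the cat sat".split(), states, start_p, trans_p, emit_p))
# → ['DET', 'NOUN', 'VERB']
```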

Sequential Labeling & Information Extraction

  • Marti A. Hearst: Automatic Acquisition of Hyponyms from Large Text Corpora, COLING 1992.
  • Michael Collins and Yoram Singer: Unsupervised Models for Named Entity Classification, EMNLP 1999.
  • Patrick Pantel and Dekang Lin, Discovering Word Senses from Text, SIGKDD, 2002.
  • Mike Mintz et al.: Distant supervision for relation extraction without labeled data, ACL 2009.
  • Zhiheng Huang et al.: Bidirectional LSTM-CRF Models for Sequence Tagging, 2015.
  • Xuezhe Ma and Eduard Hovy: End-to-end Sequence Labeling via Bi-directional LSTM-CNNs-CRF, ACL 2016.
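
Hearst's 1992 paper, the oldest in this section, extracts is-a pairs with hand-written lexico-syntactic patterns. A minimal sketch covering only the "X such as Y, Z and W" pattern on bare words (real systems use several more patterns plus noun-phrase chunking):

```python
import re

# One Hearst-style pattern: "<hypernym> such as <hyponym>, <hyponym> and <hyponym>".
PATTERN = re.compile(r"(\w+) such as (\w+(?:, \w+)*(?: and \w+)?)")

def hearst_pairs(text):
    """Extract (hypernym, hyponym) pairs matching the pattern above."""
    pairs = []
    for match in PATTERN.finditer(text):
        hypernym = match.group(1)
        for hyponym in re.split(r", | and ", match.group(2)):
            pairs.append((hypernym, hyponym))
    return pairs

print(hearst_pairs("We study languages such as French, Spanish and Italian."))
# → [('languages', 'French'), ('languages', 'Spanish'), ('languages', 'Italian')]
```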

Machine Translation & Transliteration, Sequence-to-Sequence Models

  • Peter F. Brown et al.: A Statistical Approach to Machine Translation, Computational Linguistics, 1990.
  • Kevin Knight and Jonathan Graehl: Machine Transliteration. Computational Linguistics, 1998.
  • Dekai Wu: Inversion Transduction Grammars and the Bilingual Parsing of Parallel Corpora, Computational Linguistics, 1997.
  • Kevin Knight: A Statistical MT Tutorial Workbook, 1999.
  • Kishore Papineni, et al.: BLEU: a Method for Automatic Evaluation of Machine Translation, ACL 2002.
  • Philipp Koehn, Franz J Och, and Daniel Marcu: Statistical Phrase-Based Translation, NAACL 2003.
  • Philip Resnik and Noah A. Smith: The Web as a Parallel Corpus, Computational Linguistics, 2003.
  • Franz J Och and Hermann Ney: The Alignment-Template Approach to Statistical Machine Translation, Computational Linguistics, 2004.
  • David Chiang. A Hierarchical Phrase-Based Model for Statistical Machine Translation, ACL 2005.
  • Ilya Sutskever, Oriol Vinyals, and Quoc V. Le: Sequence to Sequence Learning with Neural Networks, NIPS 2014.
  • Oriol Vinyals, Quoc Le: A Neural Conversation Model, 2015.
  • Dzmitry Bahdanau, et al.: Neural Machine Translation by Jointly Learning to Align and Translate, 2014.
  • Minh-Thang Luong, et al.: Effective Approaches to Attention-based Neural Machine Translation, 2015.
  • Rico Sennrich et al.: Neural Machine Translation of Rare Words with Subword Units. ACL 2016.
  • Yonghui Wu, et al.: Google's Neural Machine Translation System: Bridging the Gap between Human and Machine Translation, 2016.
  • Jonas Gehring, et al.: Convolutional Sequence to Sequence Learning, 2017.
  • Ashish Vaswani, et al.: Attention Is All You Need, 2017.
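
BLEU (Papineni et al., above) scores a candidate translation by clipped n-gram precision against references, multiplied by a brevity penalty. The sketch below is a simplified single-reference, sentence-level version; the real metric aggregates statistics over a whole corpus and supports multiple references per sentence:

```python
from collections import Counter
from math import exp, log

def ngrams(tokens, n):
    """All contiguous n-grams of a token list."""
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def bleu(candidate, reference, max_n=4):
    """Simplified sentence-level BLEU with a single reference."""
    log_precisions = []
    for n in range(1, max_n + 1):
        cand = Counter(ngrams(candidate, n))
        ref = Counter(ngrams(reference, n))
        # Clipped counts: a candidate n-gram only counts as often as it appears
        # in the reference.
        overlap = sum(min(count, ref[gram]) for gram, count in cand.items())
        total = max(1, sum(cand.values()))
        if overlap == 0:
            return 0.0  # any zero n-gram precision zeroes the geometric mean
        log_precisions.append(log(overlap / total))
    # Brevity penalty discourages overly short candidates.
    c, r = len(candidate), len(reference)
    bp = 1.0 if c > r else exp(1 - r / c)
    return bp * exp(sum(log_precisions) / max_n)

reference = "the cat is on the mat".split()
print(bleu(reference, reference))            # identical sentences score 1.0
print(bleu("the cat".split(), reference))    # no 3-grams at all, so 0.0 here
```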

Coreference Resolution

  • Vincent Ng: Supervised Noun Phrase Coreference Research: The First Fifteen Years, ACL 2010.
  • Kenton Lee et al.: End-to-end Neural Coreference Resolution, EMNLP 2017.

Automatic Text Summarization

  • Kevin Knight and Daniel Marcu: Summarization beyond sentence extraction. Artificial Intelligence 139, 2002.
  • James Clarke and Mirella Lapata: Modeling Compression with Discourse Constraints. EMNLP-CONLL 2007.
  • Ryan McDonald: A Study of Global Inference Algorithms in Multi-Document Summarization, ECIR 2007.
  • Wen-tau Yih et al.: Multi-Document Summarization by Maximizing Informative Content-Words. IJCAI 2007.
  • Alexander M Rush, et al.: A Neural Attention Model for Sentence Summarization. EMNLP 2015.
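
As a contrast to the optimization- and attention-based models above, the simplest extractive baseline just scores each sentence by the document frequency of its words and keeps the top few. A sketch of that baseline (not any cited paper's method):

```python
import re
from collections import Counter

def summarize(text, n_sentences=1):
    """Pick the n sentences whose words are most frequent in the document."""
    sentences = [s.strip() for s in re.split(r"(?<=[.!?])\s+", text) if s.strip()]
    freq = Counter(re.findall(r"\w+", text.lower()))

    def score(sentence):
        tokens = re.findall(r"\w+", sentence.lower())
        # Length-normalized sum of word frequencies.
        return sum(freq[t] for t in tokens) / max(1, len(tokens))

    selected = sorted(sentences, key=score, reverse=True)[:n_sentences]
    # Preserve the original sentence order in the output.
    return [s for s in sentences if s in selected]

print(summarize("Cats are animals. Cats like cats and cats. Dogs bark."))
```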

Question Answering and Machine Comprehension

  • Pranav Rajpurkar et al.: SQuAD: 100,000+ Questions for Machine Comprehension of Text. EMNLP 2016.
  • Minjoon Seo et al.: Bi-Directional Attention Flow for Machine Comprehension. ICLR 2017.

Generation, Reinforcement Learning

  • Jiwei Li, et al.: Deep Reinforcement Learning for Dialogue Generation, EMNLP 2016.
  • Marc’Aurelio Ranzato et al.: Sequence Level Training with Recurrent Neural Networks. ICLR 2016.
  • Lantao Yu, et al.: SeqGAN: Sequence Generative Adversarial Nets with Policy Gradient, AAAI 2017.
