Reposted from: https://github.com/terryum/awesome-deep-learning-papers

Awesome - Most Cited Deep Learning Papers

A curated list of the most cited deep learning papers (since 2010)

I believe there exist classic deep learning papers which are worth reading regardless of their application area. Rather than providing an overwhelming number of papers, I would like to provide a curated list of the classic deep learning papers that can be considered must-reads in some research areas.

Awesome list criteria

  • < 6 months : Please refer to the Papers Worth Reading section
  • < 1 year : +30 citations
  • 2016 : +50 citations (✨ +80)
  • 2015 : +100 citations (✨ +200)
  • 2014 : +200 citations (✨ +400)
  • 2013 : +300 citations (✨ +600)
  • 2012 : +400 citations (✨ +800)
  • Before 2012 : Please refer to the Classic Papers section
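
As a quick reference, the thresholds above can be encoded in a few lines of Python. The sketch below is purely illustrative (my own assumption, not part of the original list); the qualifies helper, its arguments, and its return labels are made up for clarity.

```python
# Illustrative sketch of the list's citation criteria (assumed encoding, not official).
THRESHOLDS = {2016: 50, 2015: 100, 2014: 200, 2013: 300, 2012: 400}   # minimum citations
SPARKLE = {2016: 80, 2015: 200, 2014: 400, 2013: 600, 2012: 800}      # ✨ tier

def qualifies(year, citations, months_old=None):
    """Classify a paper against the list criteria by year and citation count."""
    if year < 2012:
        return "see Classic Papers section"
    if months_old is not None and months_old < 6:
        return "see Papers Worth Reading section"
    if year not in THRESHOLDS:
        # Papers less than a year old need +30 citations.
        return "qualifies" if citations >= 30 else "does not qualify (yet)"
    if citations >= SPARKLE[year]:
        return "qualifies (✨)"
    return "qualifies" if citations >= THRESHOLDS[year] else "does not qualify"

print(qualifies(2014, 450))               # qualifies (✨)
print(qualifies(2016, 60))                # qualifies
print(qualifies(2017, 10, months_old=3))  # see Papers Worth Reading section
```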

I need your contributions! Please read the contributing guide before you make a pull request.

Table of Contents

A total of 85 papers, not counting those in the Hardware / Software, Papers Worth Reading, and Classic Papers sections.

Survey / Review

  • Deep learning (Book, 2016), Goodfellow et al. (Bengio) [html]
  • Deep learning (2015), Y. LeCun, Y. Bengio and G. Hinton [pdf] ✨
  • Deep learning in neural networks: An overview (2015), J. Schmidhuber [pdf] ✨
  • Representation learning: A review and new perspectives (2013), Y. Bengio et al. [pdf] ✨

Theory / Distillation

  • Distilling the knowledge in a neural network (2015), G. Hinton et al. (Hinton, Vinyals, Dean: Google) [pdf] ✨
  • Deep neural networks are easily fooled: High confidence predictions for unrecognizable images (2015), A. Nguyen et al. [pdf]
  • How transferable are features in deep neural networks? (2014), J. Yosinski et al. (Bengio) [pdf]
  • Return of the devil in the details: delving deep into convolutional nets (2014), K. Chatfield et al. [pdf] ✨
  • Why does unsupervised pre-training help deep learning (2010), D. Erhan et al. (Bengio) [pdf]
  • Understanding the difficulty of training deep feedforward neural networks (2010), X. Glorot and Y. Bengio [pdf]

Optimization / Regularization

  • Batch normalization: Accelerating deep network training by reducing internal covariate shift (2015), S. Ioffe and C. Szegedy (Google) [pdf] ✨
  • Delving deep into rectifiers: Surpassing human-level performance on imagenet classification (2015), K. He et al. (He) [pdf]
  • Recurrent neural network regularization (2014), W. Zaremba et al. (Sutskever, Vinyals: Google) [pdf]
  • Dropout: A simple way to prevent neural networks from overfitting (2014), N. Srivastava et al. (Hinton) [pdf] ✨
  • Adam: A method for stochastic optimization (2014), D. Kingma and J. Ba [pdf]
  • Spatial pyramid pooling in deep convolutional networks for visual recognition (2014), K. He et al. [pdf] ✨
  • On the importance of initialization and momentum in deep learning (2013), I. Sutskever et al. (Hinton) [pdf]
  • Regularization of neural networks using dropconnect (2013), L. Wan et al. (LeCun) [pdf]
  • Improving neural networks by preventing co-adaptation of feature detectors (2012), G. Hinton et al. [pdf] ✨
  • Random search for hyper-parameter optimization (2012), J. Bergstra and Y. Bengio [pdf]

Network Models

  • Inception-v4, inception-resnet and the impact of residual connections on learning (2016), C. Szegedy et al. (Google) [pdf]
  • Identity Mappings in Deep Residual Networks (2016), K. He et al. (He) [pdf]
  • Deep residual learning for image recognition (2016), K. He et al. (He) [pdf] ✨
  • Region-based convolutional networks for accurate object detection and segmentation (2016), R. Girshick et al. (He) [pdf]
  • Going deeper with convolutions (2015), C. Szegedy et al. (Google) [pdf] ✨
  • Fast R-CNN (2015), R. Girshick (He) [pdf] ✨
  • An Empirical Exploration of Recurrent Network Architectures (2015), R. Jozefowicz et al. (Sutskever: Google) [pdf]
  • Fully convolutional networks for semantic segmentation (2015), J. Long et al. [pdf] ✨
  • Very deep convolutional networks for large-scale image recognition (2014), K. Simonyan and A. Zisserman [pdf] ✨
  • OverFeat: Integrated recognition, localization and detection using convolutional networks (2014), P. Sermanet et al. (LeCun) [pdf]
  • Visualizing and understanding convolutional networks (2014), M. Zeiler and R. Fergus [pdf] ✨
  • Maxout networks (2013), I. Goodfellow et al. (Bengio) [pdf]
  • Network in network (2013), M. Lin et al. [pdf]
  • ImageNet classification with deep convolutional neural networks (2012), A. Krizhevsky et al. (Hinton) [pdf] ✨
  • Large scale distributed deep networks (2012), J. Dean et al. [pdf] ✨
  • Deep sparse rectifier neural networks (2011), X. Glorot et al. (Bengio) [pdf]

Unsupervised / Adversarial

  • Unsupervised representation learning with deep convolutional generative adversarial networks (2015), A. Radford et al. [pdf]
  • CNN features off-the-Shelf: An astounding baseline for recognition (2014), A. Razavian et al. [pdf] ✨
  • Generative adversarial nets (2014), I. Goodfellow et al. (Bengio) [pdf]
  • Intriguing properties of neural networks (2014), C. Szegedy et al. (Sutskever, Goodfellow: Google) [pdf]
  • Auto-encoding variational Bayes (2013), D. Kingma and M. Welling [pdf]
  • Building high-level features using large scale unsupervised learning (2013), Q. Le et al. [pdf] ✨
  • An analysis of single-layer networks in unsupervised feature learning (2011), A. Coates et al. [pdf]
  • Stacked denoising autoencoders: Learning useful representations in a deep network with a local denoising criterion (2010), P. Vincent et al. (Bengio) [pdf]
  • A practical guide to training restricted boltzmann machines (2010), G. Hinton [pdf]

Image

  • Image Super-Resolution Using Deep Convolutional Networks (2016), C. Dong et al. (He) [pdf] ✨
  • Reading text in the wild with convolutional neural networks (2016), M. Jaderberg et al. (DeepMind) [pdf]
  • Learning Deconvolution Network for Semantic Segmentation (2015), H. Noh et al. [pdf]
  • Imagenet large scale visual recognition challenge (2015), O. Russakovsky et al. [pdf] ✨
  • Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks (2015), S. Ren et al. [pdf] ✨
  • DRAW: A recurrent neural network for image generation (2015), K. Gregor et al. [pdf]
  • Scalable object detection using deep neural networks (2014), D. Erhan et al. (Google) [pdf]
  • Learning a Deep Convolutional Network for Image Super-Resolution (2014), C. Dong et al. (He) [pdf]
  • Rich feature hierarchies for accurate object detection and semantic segmentation (2014), R. Girshick et al. [pdf] ✨
  • Learning and transferring mid-Level image representations using convolutional neural networks (2014), M. Oquab et al. [pdf]
  • DeepFace: Closing the Gap to Human-Level Performance in Face Verification (2014), Y. Taigman et al. (Facebook) [pdf] ✨
  • Decaf: A deep convolutional activation feature for generic visual recognition (2013), J. Donahue et al. [pdf] ✨
  • Learning hierarchical features for scene labeling (2013), C. Farabet et al. (LeCun) [pdf]
  • Learning mid-level features for recognition (2010), Y. Boureau (LeCun) [pdf]

Caption / Visual QnA

  • VQA: Visual question answering (2015), S. Antol et al. [pdf]
  • Towards ai-complete question answering: A set of prerequisite toy tasks (2015), J. Weston et al. (Mikolov: Facebook) [pdf]
  • Ask me anything: Dynamic memory networks for natural language processing (2015), A. Kumar et al. [pdf]
  • A large annotated corpus for learning natural language inference (2015), S. Bowman et al. [pdf]
  • Show, attend and tell: Neural image caption generation with visual attention (2015), K. Xu et al. (Bengio) [pdf] ✨
  • Show and tell: A neural image caption generator (2015), O. Vinyals et al. (Vinyals: Google) [pdf] ✨
  • Long-term recurrent convolutional networks for visual recognition and description (2015), J. Donahue et al. [pdf] ✨
  • Deep visual-semantic alignments for generating image descriptions (2015), A. Karpathy and L. Fei-Fei [pdf] ✨

Video / Human Activity

  • Beyond short snippets: Deep networks for video classification (2015), J. Ng et al. (Vinyals: Google) [pdf] ✨
  • Large-scale video classification with convolutional neural networks (2014), A. Karpathy et al. (FeiFei) [pdf] ✨
  • DeepPose: Human pose estimation via deep neural networks (2014), A. Toshev and C. Szegedy (Google) [pdf]
  • Two-stream convolutional networks for action recognition in videos (2014), K. Simonyan et al. [pdf]
  • A survey on human activity recognition using wearable sensors (2013), O. Lara and M. Labrador [pdf]
  • 3D convolutional neural networks for human action recognition (2013), S. Ji et al. [pdf]
  • Action recognition with improved trajectories (2013), H. Wang and C. Schmid [pdf]
  • Learning hierarchical invariant spatio-temporal features for action recognition with independent subspace analysis (2011), Q. Le et al. [pdf]

Word Embedding

  • Glove: Global vectors for word representation (2014), J. Pennington et al. [pdf] ✨
  • Distributed representations of sentences and documents (2014), Q. Le and T. Mikolov (Le, Mikolov: Google) [pdf]
  • Distributed representations of words and phrases and their compositionality (2013), T. Mikolov et al. (Google) [pdf] ✨
  • Efficient estimation of word representations in vector space (2013), T. Mikolov et al. (Google) [pdf] ✨
  • Devise: A deep visual-semantic embedding model (2013), A. Frome et al. (Mikolov: Google) [pdf]
  • Word representations: a simple and general method for semi-supervised learning (2010), J. Turian (Bengio) [pdf]

Machine Translation / QnA

  • Google's Neural Machine Translation System: Bridging the Gap between Human and Machine Translation (2016), Y. Wu et al. (Le, Vinyals, Dean: Google) [pdf]
  • Exploring the limits of language modeling (2016), R. Jozefowicz et al. (Vinyals: DeepMind) [pdf]
  • A neural conversational model (2015), O. Vinyals and Q. Le (Vinyals, Le: Google) [pdf]
  • Grammar as a foreign language (2015), O. Vinyals et al. (Vinyals, Sutskever, Hinton: Google) [pdf]
  • Neural machine translation by jointly learning to align and translate (2014), D. Bahdanau et al. (Bengio) [pdf] ✨
  • Sequence to sequence learning with neural networks (2014), I. Sutskever et al. (Sutskever, Vinyals, Le: Google) [pdf] ✨
  • Learning phrase representations using RNN encoder-decoder for statistical machine translation (2014), K. Cho et al. (Bengio) [pdf]
  • A convolutional neural network for modelling sentences (2014), N. Kalchbrenner et al. [pdf]
  • Convolutional neural networks for sentence classification (2014), Y. Kim [pdf]
  • The stanford coreNLP natural language processing toolkit (2014), C. Manning et al. [pdf] ✨
  • Recursive deep models for semantic compositionality over a sentiment treebank (2013), R. Socher et al. [pdf] ✨
  • Linguistic Regularities in Continuous Space Word Representations (2013), T. Mikolov et al. (Mikolov: Microsoft) [pdf]
  • Natural language processing (almost) from scratch (2011), R. Collobert et al. [pdf] ✨
  • Recurrent neural network based language model (2010), T. Mikolov et al. [pdf]

Speech / Etc.

  • Automatic speech recognition - A deep learning approach (Book, 2015), D. Yu and L. Deng (Microsoft) [html]
  • Speech recognition with deep recurrent neural networks (2013), A. Graves (Hinton) [pdf]
  • Deep neural networks for acoustic modeling in speech recognition: The shared views of four research groups (2012), G. Hinton et al. [pdf] ✨
  • Context-dependent pre-trained deep neural networks for large-vocabulary speech recognition (2012), G. Dahl et al. [pdf]
  • Acoustic modeling using deep belief networks (2012), A. Mohamed et al. (Hinton) [pdf]

RL / Robotics

  • Mastering the game of Go with deep neural networks and tree search (2016), D. Silver et al. (Sutskever: DeepMind) [pdf]
  • Human-level control through deep reinforcement learning (2015), V. Mnih et al. (DeepMind) [pdf] ✨
  • Deep learning for detecting robotic grasps (2015), I. Lenz et al. [pdf]
  • Playing atari with deep reinforcement learning (2013), V. Mnih et al. (DeepMind) [pdf]

Hardware / Software

  • TensorFlow: Large-scale machine learning on heterogeneous distributed systems (2016), M. Abadi et al. (Google) [pdf] ✨
  • Theano: A Python framework for fast computation of mathematical expressions (2016), R. Al-Rfou et al. (Bengio)
  • MatConvNet: Convolutional neural networks for matlab (2015), A. Vedaldi and K. Lenc [pdf]
  • Caffe: Convolutional architecture for fast feature embedding (2014), Y. Jia et al. [pdf] ✨

Papers Worth Reading

Newly released papers which do not meet the criteria but are worth reading

  • WaveNet: A Generative Model for Raw Audio (2016), A. Oord et al. (DeepMind) [pdf] [web]
  • Layer Normalization (2016), J. Ba et al. (Hinton) [pdf]
  • Dueling network architectures for deep reinforcement learning (2016), Z. Wang et al. (DeepMind) [pdf]
  • Learning to learn by gradient descent by gradient descent (2016), M. Andrychowicz et al. (DeepMind) [pdf]
  • Adversarially learned inference (2016), V. Dumoulin et al. [web][pdf]
  • Understanding convolutional neural networks (2016), J. Koushik [pdf]
  • SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and <1MB model size (2016), F. Iandola et al. [pdf]
  • Learning to compose neural networks for question answering (2016), J. Andreas et al. [pdf]
  • Learning hand-eye coordination for robotic grasping with deep learning and large-scale data collection (2016) (Google), S. Levine et al. [pdf]
  • Taking the human out of the loop: A review of bayesian optimization (2016), B. Shahriari et al. [pdf]
  • Eie: Efficient inference engine on compressed deep neural network (2016), S. Han et al. [pdf]
  • Adaptive Computation Time for Recurrent Neural Networks (2016), A. Graves [pdf]
  • Pixel recurrent neural networks (2016), A. van den Oord et al. (DeepMind) [pdf]
  • Densely connected convolutional networks (2016), G. Huang et al. [pdf]

Classic Papers

Classic papers (1997~2011) which led to the advent of the deep learning era

  • Recurrent neural network based language model (2010), T. Mikolov et al. [pdf]
  • Learning deep architectures for AI (2009), Y. Bengio. [pdf]
  • Convolutional deep belief networks for scalable unsupervised learning of hierarchical representations (2009), H. Lee et al. [pdf]
  • Greedy layer-wise training of deep networks (2007), Y. Bengio et al. [pdf]
  • Reducing the dimensionality of data with neural networks (2006), G. Hinton and R. Salakhutdinov. [pdf]
  • A fast learning algorithm for deep belief nets (2006), G. Hinton et al. [pdf]
  • Gradient-based learning applied to document recognition (1998), Y. LeCun et al. [pdf]
  • Long short-term memory (1997), S. Hochreiter and J. Schmidhuber. [pdf]

Distinguished Researchers

Distinguished deep learning researchers who have published +3 (✨ +6) papers on the awesome list. (Papers in the Hardware / Software, Papers Worth Reading, and Classic Papers sections are excluded from the count.)

Acknowledgement

Thank you for all your contributions. Please make sure to read the contributing guide before you make a pull request.

You can follow my facebook page or google plus to get useful information about machine learning and robotics. If you want to talk with me, please send me a message via my facebook page.

You can also check out my blog, where I share my thoughts on my research area (deep learning for human/robot motions). I gathered some thoughts while making this list and summarized them in a blog post, "Some trends of recent deep learning researches".

License

To the extent possible under law, Terry T. Um has waived all copyright and related or neighboring rights to this work.

 
