Awesome - Most Cited Deep Learning Papers

A curated list of the most cited deep learning papers (since 2012)

Before this list, there existed other awesome deep learning lists, for example Deep Vision (https://github.com/kjw0612/awesome-deep-vision) and Awesome Recurrent Neural Networks (https://github.com/kjw0612/awesome-rnn). Also, after this list came out, another awesome list for deep learning beginners, called Deep Learning Papers Reading Roadmap (https://github.com/songrotek/Deep-Learning-Papers-Reading-Roadmap), was created and has been loved by many deep learning researchers.

Understanding / Generalization / Transfer

• Distilling the knowledge in a neural network (2015), G. Hinton et al. 

• Deep neural networks are easily fooled: High confidence predictions for unrecognizable images (2015), A. Nguyen et al. 

• How transferable are features in deep neural networks? (2014), J. Yosinski et al.

• CNN features off-the-Shelf: An astounding baseline for recognition (2014), A. Razavian et al.

• Learning and transferring mid-Level image representations using convolutional neural networks (2014), M. Oquab et al. 

• Visualizing and understanding convolutional networks (2014), M. Zeiler and R. Fergus 

• Decaf: A deep convolutional activation feature for generic visual recognition (2014), J. Donahue et al.

Optimization / Training Techniques

• Batch normalization: Accelerating deep network training by reducing internal covariate shift (2015), S. Ioffe and C. Szegedy

• Delving deep into rectifiers: Surpassing human-level performance on imagenet classification (2015), K. He et al.

• Dropout: A simple way to prevent neural networks from overfitting (2014), N. Srivastava et al. 

• Adam: A method for stochastic optimization (2014), D. Kingma and J. Ba 

• Improving neural networks by preventing co-adaptation of feature detectors (2012), G. Hinton et al. 

• Random search for hyper-parameter optimization (2012), J. Bergstra and Y. Bengio

Unsupervised / Generative Models

• Pixel recurrent neural networks (2016), A. Oord et al. 

• Improved techniques for training GANs (2016), T. Salimans et al. 

• Unsupervised representation learning with deep convolutional generative adversarial networks (2015), A. Radford et al. 

• DRAW: A recurrent neural network for image generation (2015), K. Gregor et al. 

• Generative adversarial nets (2014), I. Goodfellow et al. 

• Auto-encoding variational Bayes (2013), D. Kingma and M. Welling 

• Building high-level features using large scale unsupervised learning (2013), Q. Le et al.

Convolutional Neural Network Models

• Rethinking the inception architecture for computer vision (2016), C. Szegedy et al. 

• Inception-v4, inception-resnet and the impact of residual connections on learning (2016), C. Szegedy et al. 

• Identity Mappings in Deep Residual Networks (2016), K. He et al. 

• Deep residual learning for image recognition (2016), K. He et al. 

• Going deeper with convolutions (2015), C. Szegedy et al. 

• Very deep convolutional networks for large-scale image recognition (2014), K. Simonyan and A. Zisserman 

• Spatial pyramid pooling in deep convolutional networks for visual recognition (2014), K. He et al. 

• Return of the devil in the details: delving deep into convolutional nets (2014), K. Chatfield et al. 

• OverFeat: Integrated recognition, localization and detection using convolutional networks (2013), P. Sermanet et al. 

• Maxout networks (2013), I. Goodfellow et al. 

• Network in network (2013), M. Lin et al. 

• ImageNet classification with deep convolutional neural networks (2012), A. Krizhevsky et al.

Image: Segmentation / Object Detection

• You only look once: Unified, real-time object detection (2016), J. Redmon et al. 

• Region-based convolutional networks for accurate object detection and segmentation (2016), R. Girshick et al. 

• Fully convolutional networks for semantic segmentation (2015), J. Long et al. 

• Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks (2015), S. Ren et al. 

• Fast R-CNN (2015), R. Girshick 

• Rich feature hierarchies for accurate object detection and semantic segmentation (2014), R. Girshick et al. 

• Semantic image segmentation with deep convolutional nets and fully connected CRFs (2015), L. Chen et al.

• Learning hierarchical features for scene labeling (2013), C. Farabet et al.

Image / Video / Etc

• Image Super-Resolution Using Deep Convolutional Networks (2016), C. Dong et al. 

• A neural algorithm of artistic style (2015), L. Gatys et al. 

• Deep visual-semantic alignments for generating image descriptions (2015), A. Karpathy and L. Fei-Fei 

• Show, attend and tell: Neural image caption generation with visual attention (2015), K. Xu et al. 

• Show and tell: A neural image caption generator (2015), O. Vinyals et al. 

• Long-term recurrent convolutional networks for visual recognition and description (2015), J. Donahue et al. 

• VQA: Visual question answering (2015), S. Antol et al. 

• DeepFace: Closing the gap to human-level performance in face verification (2014), Y. Taigman et al.

• Large-scale video classification with convolutional neural networks (2014), A. Karpathy et al. 

• DeepPose: Human pose estimation via deep neural networks (2014), A. Toshev and C. Szegedy 

• Two-stream convolutional networks for action recognition in videos (2014), K. Simonyan et al. 

• 3D convolutional neural networks for human action recognition (2013), S. Ji et al.

Recurrent Neural Network Models

• Conditional random fields as recurrent neural networks (2015), S. Zheng and S. Jayasumana. 

• Memory networks (2014), J. Weston et al. 

• Neural turing machines (2014), A. Graves et al. 

• Generating sequences with recurrent neural networks (2013), A. Graves.

Natural Language Processing

• A character-level decoder without explicit segmentation for neural machine translation (2016), J. Chung et al. 

• Exploring the limits of language modeling (2016), R. Jozefowicz et al. 

• Teaching machines to read and comprehend (2015), K. Hermann et al. 

• Effective approaches to attention-based neural machine translation (2015), M. Luong et al. 

• Neural machine translation by jointly learning to align and translate (2014), D. Bahdanau et al. 

• Sequence to sequence learning with neural networks (2014), I. Sutskever et al. 

• Learning phrase representations using RNN encoder-decoder for statistical machine translation (2014), K. Cho et al. 

• A convolutional neural network for modelling sentences (2014), N. Kalchbrenner et al. 

• Convolutional neural networks for sentence classification (2014), Y. Kim 

• Glove: Global vectors for word representation (2014), J. Pennington et al. 

• Distributed representations of sentences and documents (2014), Q. Le and T. Mikolov 

• Distributed representations of words and phrases and their compositionality (2013), T. Mikolov et al. 

• Efficient estimation of word representations in vector space (2013), T. Mikolov et al. 

• Recursive deep models for semantic compositionality over a sentiment treebank (2013), R. Socher et al.

Speech / Other Domain

• End-to-end attention-based large vocabulary speech recognition (2016), D. Bahdanau et al. 

• Deep speech 2: End-to-end speech recognition in English and Mandarin (2015), D. Amodei et al. 

• Speech recognition with deep recurrent neural networks (2013), A. Graves 

• Deep neural networks for acoustic modeling in speech recognition: The shared views of four research groups (2012), G. Hinton et al. 

• Context-dependent pre-trained deep neural networks for large-vocabulary speech recognition (2012), G. Dahl et al.

• Acoustic modeling using deep belief networks (2012), A. Mohamed et al.

Reinforcement Learning

• End-to-end training of deep visuomotor policies (2016), S. Levine et al. 

• Learning Hand-Eye Coordination for Robotic Grasping with Deep Learning and Large-Scale Data Collection (2016), S. Levine et al. 

• Asynchronous methods for deep reinforcement learning (2016), V. Mnih et al. 

• Deep Reinforcement Learning with Double Q-Learning (2016), H. Hasselt et al. 

• Mastering the game of Go with deep neural networks and tree search (2016), D. Silver et al. 

• Continuous control with deep reinforcement learning (2015), T. Lillicrap et al. 

• Human-level control through deep reinforcement learning (2015), V. Mnih et al. 

• Deep learning for detecting robotic grasps (2015), I. Lenz et al. 

• Playing atari with deep reinforcement learning (2013), V. Mnih et al.

More Papers from 2016

• Layer Normalization (2016), J. Ba et al. 

• Learning to learn by gradient descent by gradient descent (2016), M. Andrychowicz et al. 

• Domain-adversarial training of neural networks (2016), Y. Ganin et al. 

• WaveNet: A Generative Model for Raw Audio (2016), A. Oord et al.

• Colorful image colorization (2016), R. Zhang et al. 

• Generative visual manipulation on the natural image manifold (2016), J. Zhu et al. 

• Texture networks: Feed-forward synthesis of textures and stylized images (2016), D. Ulyanov et al.

• SSD: Single shot multibox detector (2016), W. Liu et al. 

• SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and <1MB model size (2016), F. Iandola et al.

• EIE: Efficient inference engine on compressed deep neural network (2016), S. Han et al.

• Binarized neural networks: Training deep neural networks with weights and activations constrained to +1 or -1 (2016), M. Courbariaux et al.

• Dynamic memory networks for visual and textual question answering (2016), C. Xiong et al. 

• Stacked attention networks for image question answering (2016), Z. Yang et al. 

• Hybrid computing using a neural network with dynamic external memory (2016), A. Graves et al. 

• Google's Neural Machine Translation System: Bridging the Gap between Human and Machine Translation (2016), Y. Wu et al.

New papers

Newly published papers (less than 6 months old) that are worth reading

• Batch Renormalization: Towards Reducing Minibatch Dependence in Batch-Normalized Models, S. Ioffe. 

• Wasserstein GAN, M. Arjovsky et al. 

• Understanding deep learning requires rethinking generalization, C. Zhang et al.

Old Papers

Classic papers published before 2012

• An analysis of single-layer networks in unsupervised feature learning (2011), A. Coates et al. 

• Deep sparse rectifier neural networks (2011), X. Glorot et al. 

• Natural language processing (almost) from scratch (2011), R. Collobert et al. 

• Recurrent neural network based language model (2010), T. Mikolov et al. 

• Stacked denoising autoencoders: Learning useful representations in a deep network with a local denoising criterion (2010), P. Vincent et al. 

• Learning mid-level features for recognition (2010), Y. Boureau 

• A practical guide to training restricted boltzmann machines (2010), G. Hinton 

• Understanding the difficulty of training deep feedforward neural networks (2010), X. Glorot and Y. Bengio 

• Why does unsupervised pre-training help deep learning? (2010), D. Erhan et al.

• Learning deep architectures for AI (2009), Y. Bengio. 

• Convolutional deep belief networks for scalable unsupervised learning of hierarchical representations (2009), H. Lee et al. 

• Greedy layer-wise training of deep networks (2007), Y. Bengio et al. 

• Reducing the dimensionality of data with neural networks (2006), G. Hinton and R. Salakhutdinov.

• A fast learning algorithm for deep belief nets (2006), G. Hinton et al. 

• Gradient-based learning applied to document recognition (1998), Y. LeCun et al. 

• Long short-term memory (1997), S. Hochreiter and J. Schmidhuber. 

Blog post: http://blog.yoqi.me/?p=2490