A Collection of Network Compression Papers (network compression)
Convolutional Neural Networks
- ImageNet Models
- Architecture Design
- Activation Functions
- Visualization
- Fast Convolution
- Low-Rank Filter Approximation
- Low Precision
- Parameter Pruning
- Transfer Learning
- Theory
- 3D Data
- Hardware
ImageNet Models
- 2017 CVPR Xception: Deep Learning with Depthwise Separable Convolutions (Xception)
- 2017 CVPR Aggregated Residual Transformations for Deep Neural Networks (ResNeXt)
- 2016 ECCV Identity Mappings in Deep Residual Networks (Pre-ResNet)
- 2016 arXiv Inception-v4, Inception-ResNet and the Impact of Residual Connections on Learning (Inception V4)
- 2016 CVPR Deep Residual Learning for Image Recognition (ResNet)
- 2015 arXiv Rethinking the Inception Architecture for Computer Vision (Inception V3)
- 2015 ICML Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift (Inception V2)
- 2015 ICCV Delving Deep into Rectifiers: Surpassing Human-Level Performance on ImageNet Classification (PReLU)
- 2015 ICLR Very Deep Convolutional Networks For Large-scale Image Recognition (VGG)
- 2015 CVPR Going Deeper with Convolutions (GoogLeNet/Inception V1)
- 2012 NIPS ImageNet Classification with Deep Convolutional Neural Networks (AlexNet)
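
As a quick reference for the residual idea that several of the models above build on, here is a minimal PyTorch sketch of a ResNet-style basic block; the channel count and input size are illustrative assumptions, and the post-activation ordering follows the original ResNet (Pre-ResNet moves BN/ReLU before the convolutions):

```python
import torch
import torch.nn as nn

class BasicResidualBlock(nn.Module):
    """ResNet-style block: y = ReLU(F(x) + x), where F is two 3x3 convolutions."""
    def __init__(self, channels):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(channels)
        self.conv2 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(channels)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        out = self.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        return self.relu(out + x)  # identity shortcut

x = torch.randn(1, 64, 56, 56)           # illustrative feature map
print(BasicResidualBlock(64)(x).shape)    # torch.Size([1, 64, 56, 56])
```
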
Architecture Design
- 2017 arXiv One Model To Learn Them All
- 2017 arXiv MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications
- 2017 ICML AdaNet: Adaptive Structural Learning of Artificial Neural Networks
- 2017 ICML Large-Scale Evolution of Image Classifiers
- 2017 CVPR Aggregated Residual Transformations for Deep Neural Networks
- 2017 CVPR Densely Connected Convolutional Networks
- 2017 ICLR Outrageously Large Neural Networks: The Sparsely-Gated Mixture-of-Experts Layer
- 2017 ICLR Neural Architecture Search with Reinforcement Learning
- 2017 ICLR Designing Neural Network Architectures using Reinforcement Learning
- 2017 ICLR Do Deep Convolutional Nets Really Need to be Deep and Convolutional?
- 2017 ICLR Highway and Residual Networks learn Unrolled Iterative Estimation
- 2016 NIPS Residual Networks Behave Like Ensembles of Relatively Shallow Networks
- 2016 BMVC Wide Residual Networks
- 2016 arXiv Benefits of depth in neural networks
- 2016 AAAI On the Depth of Deep Neural Networks: A Theoretical View
- 2016 arXiv SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and <0.5MB model size
- 2015 ICMLW Highway Networks
- 2015 CVPR Convolutional Neural Networks at Constrained Time Cost
- 2015 CVPR Fully Convolutional Networks for Semantic Segmentation
- 2014 NIPS Do Deep Nets Really Need to be Deep?
- 2014 ICLRW Understanding Deep Architectures using a Recursive Convolutional Network
- 2013 ICML Making a Science of Model Search: Hyperparameter Optimization in Hundreds of Dimensions for Vision Architectures
- 2009 ICCV What is the Best Multi-Stage Architecture for Object Recognition?
- 1995 NIPS Simplifying Neural Nets by Discovering Flat Minima
- 1994 T-NN SVD-NET: An Algorithm that Automatically Selects Network Structure
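
A minimal PyTorch sketch of the depthwise separable convolution used by MobileNets (and, in a related form, Xception): a per-channel 3x3 depthwise convolution followed by a 1x1 pointwise convolution. The channel counts below are illustrative assumptions; the point is the parameter reduction versus a standard convolution with the same input/output shapes.

```python
import torch
import torch.nn as nn

in_ch, out_ch, k = 64, 128, 3

# Standard convolution: in_ch * out_ch * k * k weights.
standard = nn.Conv2d(in_ch, out_ch, k, padding=1, bias=False)

# Depthwise separable convolution: a per-channel k x k "depthwise" conv
# followed by a 1x1 "pointwise" conv that mixes channels.
depthwise_separable = nn.Sequential(
    nn.Conv2d(in_ch, in_ch, k, padding=1, groups=in_ch, bias=False),  # depthwise
    nn.Conv2d(in_ch, out_ch, 1, bias=False),                          # pointwise
)

def count(m):
    return sum(p.numel() for p in m.parameters())

print(count(standard))              # 64*128*3*3 = 73728
print(count(depthwise_separable))   # 64*3*3 + 64*128 = 8768

x = torch.randn(1, in_ch, 32, 32)
assert standard(x).shape == depthwise_separable(x).shape  # same output shape
```
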
Activation Functions
- 2017 arXiv Self-Normalizing Neural Networks (SELU)
- 2016 ICLR Fast and Accurate Deep Network Learning by Exponential Linear Units (ELUs) (ELU)
- 2015 arXiv Empirical Evaluation of Rectified Activations in Convolutional Network (RReLU)
- 2015 ICCV Delving Deep into Rectifiers: Surpassing Human-Level Performance on ImageNet Classification (PReLU)
- 2013 ICML Rectifier Nonlinearities Improve Neural Network Acoustic Models
- 2010 ICML Rectified Linear Units Improve Restricted Boltzmann Machines (ReLU)
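
For reference, a NumPy sketch of the activation functions named above. The PReLU slope is a fixed illustrative value here, whereas the paper learns it per channel; the SELU constants are the ones given in the Self-Normalizing Neural Networks paper.

```python
import numpy as np

def relu(x):
    return np.maximum(0.0, x)

def prelu(x, a=0.25):  # 'a' is learned per channel in the PReLU paper
    return np.where(x > 0, x, a * x)

def elu(x, alpha=1.0):
    return np.where(x > 0, x, alpha * (np.exp(x) - 1.0))

def selu(x, alpha=1.6732632423543772, scale=1.0507009873554805):
    # Fixed constants from the SELU paper keep activations self-normalizing.
    return scale * np.where(x > 0, x, alpha * (np.exp(x) - 1.0))

x = np.linspace(-3, 3, 7)
print(relu(x), prelu(x), elu(x), selu(x), sep="\n")
```
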
Visualization
- 2017 CVPR Network Dissection: Quantifying Interpretability of Deep Visual Representations
- 2015 ICMLW Understanding Neural Networks Through Deep Visualization
- 2014 ECCV Visualizing and Understanding Convolutional Networks
Fast Convolution
- 2017 ICML Warped Convolutions: Efficient Invariance to Spatial Transformations
- 2017 ICLR Faster CNNs with Direct Sparse Convolutions and Guided Pruning
- 2016 NIPS PerforatedCNNs: Acceleration through Elimination of Redundant Convolutions
- 2016 CVPR Fast Algorithms for Convolutional Neural Networks (Winograd)
- 2015 CVPR Sparse Convolutional Neural Networks
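
A NumPy sketch of the Winograd minimal filtering algorithm F(2,3) from "Fast Algorithms for Convolutional Neural Networks": two outputs of a 1-D convolution with a 3-tap filter are computed with 4 multiplications instead of 6, and the 2-D case applies the same transforms along both axes. The random test data is illustrative.

```python
import numpy as np

def winograd_f23(d, g):
    """Winograd F(2,3): two outputs of a 3-tap 1-D convolution with 4 multiplies."""
    d0, d1, d2, d3 = d
    g0, g1, g2 = g
    m1 = (d0 - d2) * g0
    m2 = (d1 + d2) * (g0 + g1 + g2) / 2
    m3 = (d2 - d1) * (g0 - g1 + g2) / 2
    m4 = (d1 - d3) * g2
    return np.array([m1 + m2 + m3, m2 - m3 - m4])

d = np.random.randn(4)   # input tile
g = np.random.randn(3)   # filter
direct = np.array([d[0]*g[0] + d[1]*g[1] + d[2]*g[2],
                   d[1]*g[0] + d[2]*g[1] + d[3]*g[2]])
print(np.allclose(winograd_f23(d, g), direct))  # True
```
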
Low-Rank Filter Approximation
- 2016 ICLR Convolutional Neural Networks with Low-rank Regularization
- 2016 ICLR Training CNNs with Low-Rank Filters for Efficient Image Classification
- 2016 TPAMI Accelerating Very Deep Convolutional Networks for Classification and Detection
- 2015 CVPR Efficient and Accurate Approximations of Nonlinear Convolutional Networks
- 2015 ICLR Speeding-up convolutional neural networks using fine-tuned cp-decomposition
- 2014 NIPS Exploiting Linear Structure Within Convolutional Networks for Efficient Evaluation
- 2014 BMVC Speeding up Convolutional Neural Networks with Low Rank Expansions
- 2013 NIPS Predicting Parameters in Deep Learning
- 2013 CVPR Learning Separable Filters
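
The common thread of the papers above is replacing a large linear map with a product of low-rank factors. Below is a NumPy sketch of the matrix (fully-connected) case via truncated SVD; the papers extend this idea to convolutional filters with CP/Tucker-style decompositions, and the matrix size and rank here are illustrative assumptions.

```python
import numpy as np

def low_rank_factorize(W, rank):
    """Approximate W (m x n) by two factors with (m + n) * rank parameters."""
    U, S, Vt = np.linalg.svd(W, full_matrices=False)
    A = U[:, :rank] * S[:rank]   # m x rank, singular values folded into A
    B = Vt[:rank, :]             # rank x n
    return A, B

W = np.random.randn(1024, 4096)
A, B = low_rank_factorize(W, rank=64)
print(W.size, A.size + B.size)                        # 4194304 vs 327680 parameters
print(np.linalg.norm(W - A @ B) / np.linalg.norm(W))  # relative approximation error
```
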
Low Precision
- 2017 arXiv BitNet: Bit-Regularized Deep Neural Networks
- 2017 arXiv Gradient Descent for Spiking Neural Networks
- 2017 arXiv ShiftCNN: Generalized Low-Precision Architecture for Inference of Convolutional Neural Networks
- 2017 arXiv Gated XNOR Networks: Deep Neural Networks with Ternary Weights and Activations under a Unified Discretization Framework
- 2017 arXiv The High-Dimensional Geometry of Binary Neural Networks
- 2017 NIPS Training Quantized Nets: A Deeper Understanding
- 2017 NIPS TernGrad: Ternary Gradients to Reduce Communication in Distributed Deep Learning
- 2017 ICML Analytical Guarantees on Numerical Precision of Deep Neural Networks
- 2017 arXiv Deep Learning with Low Precision by Half-wave Gaussian Quantization
- 2017 CVPR Network Sketching: Exploiting Binary Structure in Deep CNNs
- 2017 CVPR Local Binary Convolutional Neural Networks
- 2017 ICLR Towards the Limit of Network Quantization
- 2017 ICLR Loss-aware Binarization of Deep Networks
- 2017 ICLR Trained Ternary Quantization
- 2017 ICLR Incremental Network Quantization: Towards Lossless CNNs with Low-precision Weights
- 2016 arXiv Quantized Neural Networks: Training Neural Networks with Low Precision Weights and Activations
- 2016 arXiv Accelerating Deep Convolutional Networks using low-precision and sparsity
- 2016 arXiv Deep neural networks are robust to weight binarization and other non-linear distortions
- 2016 ECCV XNOR-Net: ImageNet Classification Using Binary Convolutional Neural Networks
- 2016 ICMLW Overcoming Challenges in Fixed Point Training of Deep Convolutional Networks
- 2016 ICML Fixed Point Quantization of Deep Convolutional Networks
- 2016 NIPS Binarized Neural Networks
- 2016 arXiv Binarized Neural Networks: Training Deep Neural Networks with Weights and Activations Constrained to +1 or -1
- 2016 CVPR Quantized Convolutional Neural Networks for Mobile Devices
- 2016 ICLR Neural Networks with Few Multiplications
- 2015 arXiv Resiliency of Deep Neural Networks under Quantization
- 2015 arXiv Rounding Methods for Neural Networks with Low Resolution Synaptic Weights
- 2015 NIPS Backpropagation for Energy-Efficient Neuromorphic Computing
- 2015 NIPS BinaryConnect: Training Deep Neural Networks with Binary Weights during Propagations
- 2015 ICMLW Bitwise Neural Networks
- 2015 ICML Deep Learning with Limited Numerical Precision
- 2015 ICLRW Training deep neural networks with low precision multiplications
- 2015 arXiv Training Binary Multilayer Neural Networks for Image Classification using Expectation Backpropagation
- 2014 NIPS Expectation Backpropagation: Parameter-Free Training of Multilayer Neural Networks with Continuous or Discrete Weights
- 2013 arXiv Estimating or Propagating Gradients Through Stochastic Neurons for Conditional Computation
- 2011 NIPSW Improving the speed of neural networks on CPUs
- 1987 Combinatorica Randomized rounding: A technique for provably good algorithms and algorithmic proofs
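
A NumPy sketch of the weight-binarization step in the BinaryConnect / XNOR-Net line of work: weights are replaced by their sign, and XNOR-Net additionally rescales each output filter by the mean absolute value of its real-valued weights. During training these methods keep full-precision weights and binarize only in the forward/backward passes; the tensor shape below is an illustrative assumption.

```python
import numpy as np

def binarize_xnor(W):
    """XNOR-Net-style binarization: W ≈ alpha * sign(W), one alpha per output filter."""
    alpha = np.abs(W).mean(axis=(1, 2, 3), keepdims=True)  # per-filter scale
    B = np.where(W >= 0, 1.0, -1.0)                        # 1-bit weights
    return alpha, B

W = np.random.randn(128, 64, 3, 3)   # conv weights (out, in, kH, kW), illustrative
alpha, B = binarize_xnor(W)
W_hat = alpha * B
print(np.linalg.norm(W - W_hat) / np.linalg.norm(W))  # error of the 1-bit approximation
```
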
Parameter Pruning
- 2017 ICML Beyond Filters: Compact Feature Map for Portable Deep Model
- 2017 ICLR Soft Weight-Sharing for Neural Network Compression
- 2017 ICLR Pruning Convolutional Neural Networks for Resource Efficient Inference
- 2017 ICLR Pruning Filters for Efficient ConvNets
- 2016 arXiv Designing Energy-Efficient Convolutional Neural Networks using Energy-Aware Pruning
- 2016 arXiv Network Trimming: A Data-Driven Neuron Pruning Approach towards Efficient Deep Architectures
- 2016 NIPS Learning the Number of Neurons in Deep Networks
- 2016 NIPS Learning Structured Sparsity in Deep Neural Networks
- 2016 NIPS Dynamic Network Surgery for Efficient DNNs
- 2016 ECCV Less is More: Towards Compact CNNs
- 2016 CVPR Fast ConvNets Using Group-wise Brain Damage
- 2016 ICLR Deep Compression: Compressing Deep Neural Networks with Pruning, Trained Quantization and Huffman Coding
- 2016 ICLR Compression of Deep Convolutional Neural Networks for Fast and Low Power Mobile Applications
- 2015 arXiv Structured Pruning of Deep Convolutional Neural Networks
- 2015 IEEE Access Channel-Level Acceleration of Deep Face Representations
- 2015 BMVC Data-free parameter pruning for Deep Neural Networks
- 2015 ICML Compressing Neural Networks with the Hashing Trick
- 2015 ICCV Deep Fried Convnets
- 2015 ICCV An Exploration of Parameter Redundancy in Deep Networks with Circulant Projections
- 2015 NIPS Learning both Weights and Connections for Efficient Neural Networks
- 2015 ICLR FitNets: Hints for Thin Deep Nets
- 2014 arXiv Compressing Deep Convolutional Networks using Vector Quantization
- 2014 NIPSW Distilling the Knowledge in a Neural Network
- 1995 ISANN Evaluating Pruning Methods
- 1993 T-NN Pruning Algorithms--A Survey
- 1989 NIPS Optimal Brain Damage
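
A NumPy sketch of magnitude-based pruning in the spirit of "Learning both Weights and Connections for Efficient Neural Networks": weights below a magnitude threshold are masked to zero; in the full method the surviving weights are then fine-tuned with the mask held fixed. The matrix size and sparsity level are illustrative assumptions.

```python
import numpy as np

def magnitude_prune(W, sparsity):
    """Zero out the smallest-magnitude fraction `sparsity` of the weights."""
    k = int(sparsity * W.size)
    threshold = np.sort(np.abs(W), axis=None)[k]      # k-th smallest magnitude
    mask = (np.abs(W) >= threshold).astype(W.dtype)   # 1 = keep, 0 = prune
    return W * mask, mask

W = np.random.randn(256, 512)
W_pruned, mask = magnitude_prune(W, sparsity=0.9)
print(1.0 - mask.mean())   # fraction of weights removed, ~0.9
```
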
Transfer Learning
- 2016 arXiv What makes ImageNet good for transfer learning?
- 2014 NIPS How transferable are features in deep neural networks?
- 2014 CVPR CNN Features off-the-shelf: an Astounding Baseline for Recognition
- 2014 ICML DeCAF: A Deep Convolutional Activation Feature for Generic Visual Recognition
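
A common feature-extraction recipe consistent with the findings above ("CNN Features off-the-shelf", "How transferable are features..."): freeze an ImageNet-pretrained backbone and train only a new classifier head. This sketch assumes torchvision is installed; `pretrained=True` is the older torchvision flag (newer versions prefer a `weights=` argument), and the 10-class head is a placeholder.

```python
import torch.nn as nn
import torchvision.models as models

# Load an ImageNet-pretrained backbone and freeze it (pure feature extraction).
model = models.resnet18(pretrained=True)
for p in model.parameters():
    p.requires_grad = False

# Replace the final classifier with a new head for the target task;
# only this layer will receive gradients.
model.fc = nn.Linear(model.fc.in_features, 10)

trainable = [n for n, p in model.named_parameters() if p.requires_grad]
print(trainable)  # ['fc.weight', 'fc.bias']
```
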
Theory
- 2017 ICML On the Expressive Power of Deep Neural Networks
- 2017 ICML A Closer Look at Memorization in Deep Networks
- 2017 ICML An Analytical Formula of Population Gradient for two-layered ReLU network and its Applications in Convergence and Critical Point Analysis
- 2016 NIPS Exponential expressivity in deep neural networks through transient chaos
- 2016 arXiv Understanding Deep Convolutional Networks
- 2014 NIPS On the number of linear regions of deep neural networks
- 2014 ICML Provable Bounds for Learning Some Deep Representations
- 2014 ICLR On the number of response regions of deep feed forward networks with piece-wise linear activations
- 2014 ICLR Revisiting natural gradient for deep networks
3D Data
- 2017 NIPS PointNet++: Deep Hierarchical Feature Learning on Point Sets in a Metric Space
- 2017 ICCV Octree Generating Networks: Efficient Convolutional Architectures for High-resolution 3D Outputs
- 2017 SIGGRAPH O-CNN: Octree-based Convolutional Neural Networks for 3D Shape Analysis
- 2017 CVPR PointNet: Deep Learning on Point Sets for 3D Classification and Segmentation
- 2017 CVPR OctNet: Learning Deep 3D Representations at High Resolutions
- 2016 NIPS FPNN: Field Probing Neural Networks for 3D Data
- 2016 NIPS Learning a Probabilistic Latent Space of Object Shapes via 3D Generative-Adversarial Modeling
- 2015 ICCV Multi-view Convolutional Neural Networks for 3D Shape Recognition
- 2015 BMVC Sparse 3D convolutional neural networks
- 2015 CVPR 3D ShapeNets: A Deep Representation for Volumetric Shapes
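
A NumPy sketch of the volumetric (voxel occupancy) representation used by 3D ShapeNets-style networks: a point cloud is rasterized into a binary occupancy grid. The 30x30x30 resolution follows 3D ShapeNets; the random point cloud is a placeholder.

```python
import numpy as np

def voxelize(points, resolution=30):
    """Convert an (N, 3) point cloud into a binary occupancy grid."""
    mins = points.min(axis=0)
    maxs = points.max(axis=0)
    # Map each point into [0, resolution-1] integer voxel coordinates.
    idx = ((points - mins) / (maxs - mins + 1e-9) * (resolution - 1)).astype(int)
    grid = np.zeros((resolution,) * 3, dtype=np.float32)
    grid[idx[:, 0], idx[:, 1], idx[:, 2]] = 1.0
    return grid

cloud = np.random.rand(2048, 3)   # synthetic point cloud
print(voxelize(cloud).sum())      # number of occupied voxels
```
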
Hardware
- 2017 ISVLSI YodaNN: An ultra-low power convolutional neural network accelerator based on binary weights
- 2017 ASPLOS SC-DCNN: Highly-Scalable Deep Convolutional Neural Network using Stochastic Computing
- 2017 FPGA Can FPGAs Beat GPUs in Accelerating Next-Generation Deep Neural Networks?
- 2015 NIPS Tutorial High-Performance Hardware for Machine Learning