Network Compression Paper Collection (network compression)
Convolutional Neural Networks
- ImageNet Models
- Architecture Design
- Activation Functions
- Visualization
- Fast Convolution
- Low-Rank Filter Approximation
- Low Precision
- Parameter Pruning
- Transfer Learning
- Theory
- 3D Data
- Hardware
ImageNet Models
- 2017 CVPR Xception: Deep Learning with Depthwise Separable Convolutions (Xception)
- 2017 CVPR Aggregated Residual Transformations for Deep Neural Networks (ResNeXt)
- 2016 ECCV Identity Mappings in Deep Residual Networks (Pre-ResNet)
- 2016 arXiv Inception-v4, Inception-ResNet and the Impact of Residual Connections on Learning (Inception V4)
- 2016 CVPR Deep Residual Learning for Image Recognition (ResNet)
- 2015 arXiv Rethinking the Inception Architecture for Computer Vision (Inception V3)
- 2015 ICML Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift (Inception V2)
- 2015 ICCV Delving Deep into Rectifiers: Surpassing Human-Level Performance on ImageNet Classification (PReLU)
- 2015 ICLR Very Deep Convolutional Networks for Large-Scale Image Recognition (VGG)
- 2015 CVPR Going Deeper with Convolutions (GoogLeNet/Inception V1)
- 2012 NIPS ImageNet Classification with Deep Convolutional Neural Networks (AlexNet)
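
Several of the models above (Xception) and below (MobileNets, under Architecture Design) rest on depthwise separable convolutions. Here is a minimal NumPy sketch of the idea, not any paper's reference implementation; the shapes and random inputs are purely illustrative:

```python
import numpy as np

def conv2d(x, w):
    """Plain 2-D valid convolution: x (H, W, Cin), w (k, k, Cin, Cout)."""
    k, _, cin, cout = w.shape
    H, W = x.shape[0] - k + 1, x.shape[1] - k + 1
    out = np.zeros((H, W, cout))
    for i in range(H):
        for j in range(W):
            patch = x[i:i+k, j:j+k, :]              # (k, k, Cin)
            out[i, j] = np.tensordot(patch, w, axes=3)
    return out

def depthwise_separable(x, w_dw, w_pw):
    """Depthwise conv w_dw (k, k, Cin), then 1x1 pointwise conv w_pw (Cin, Cout)."""
    k, cin = w_dw.shape[0], x.shape[2]
    H, W = x.shape[0] - k + 1, x.shape[1] - k + 1
    dw = np.zeros((H, W, cin))
    for c in range(cin):                            # one spatial filter per input channel
        for i in range(H):
            for j in range(W):
                dw[i, j, c] = np.sum(x[i:i+k, j:j+k, c] * w_dw[:, :, c])
    return dw @ w_pw                                # 1x1 conv == per-pixel matmul

k, cin, cout = 3, 16, 32
x = np.random.randn(8, 8, cin)
w_full = np.random.randn(k, k, cin, cout)
w_dw, w_pw = np.random.randn(k, k, cin), np.random.randn(cin, cout)
print(conv2d(x, w_full).shape, depthwise_separable(x, w_dw, w_pw).shape)  # both (6, 6, 32)
print("standard params:", w_full.size)                # k*k*Cin*Cout = 4608
print("separable params:", w_dw.size + w_pw.size)     # k*k*Cin + Cin*Cout = 656
```

The parameter counts show the appeal: the separable layer uses roughly `1/Cout + 1/k^2` of the parameters (and multiplications) of the standard convolution.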
Architecture Design
- 2017 arXiv One Model To Learn Them All
- 2017 arXiv MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications
- 2017 ICML AdaNet: Adaptive Structural Learning of Artificial Neural Networks
- 2017 ICML Large-Scale Evolution of Image Classifiers
- 2017 CVPR Aggregated Residual Transformations for Deep Neural Networks
- 2017 CVPR Densely Connected Convolutional Networks
- 2017 ICLR Outrageously Large Neural Networks: The Sparsely-Gated Mixture-of-Experts Layer
- 2017 ICLR Neural Architecture Search with Reinforcement Learning
- 2017 ICLR Designing Neural Network Architectures using Reinforcement Learning
- 2017 ICLR Do Deep Convolutional Nets Really Need to be Deep and Convolutional?
- 2017 ICLR Highway and Residual Networks learn Unrolled Iterative Estimation
- 2016 NIPS Residual Networks Behave Like Ensembles of Relatively Shallow Networks
- 2016 BMVC Wide Residual Networks
- 2016 arXiv Benefits of depth in neural networks
- 2016 AAAI On the Depth of Deep Neural Networks: A Theoretical View
- 2016 arXiv SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and <1MB model size
- 2015 ICMLW Highway Networks
- 2015 CVPR Convolutional Neural Networks at Constrained Time Cost
- 2015 CVPR Fully Convolutional Networks for Semantic Segmentation
- 2014 NIPS Do Deep Nets Really Need to be Deep?
- 2014 ICLRW Understanding Deep Architectures using a Recursive Convolutional Network
- 2013 ICML Making a Science of Model Search: Hyperparameter Optimization in Hundreds of Dimensions for Vision Architectures
- 2009 ICCV What is the Best Multi-Stage Architecture for Object Recognition?
- 1995 NIPS Simplifying Neural Nets by Discovering Flat Minima
- 1994 T-NN SVD-NET: An Algorithm that Automatically Selects Network Structure
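
Many entries above trade depth/width patterns for parameter efficiency. As a back-of-the-envelope example, SqueezeNet's "Fire" module replaces a plain 3x3 layer with a 1x1 squeeze followed by mixed 1x1/3x3 expand branches; the channel widths below are illustrative, not the exact SqueezeNet configuration:

```python
# Parameter count of a SqueezeNet-style "Fire" module vs. a plain 3x3 conv layer.
cin, cout = 128, 128
plain = 3 * 3 * cin * cout              # standard 3x3 convolution: 147456

s, e1, e3 = 16, 64, 64                  # squeeze / expand widths (e1 + e3 = cout)
fire = (1 * 1 * cin * s                 # squeeze: 1x1 down to s channels
        + 1 * 1 * s * e1                # expand: 1x1 branch
        + 3 * 3 * s * e3)               # expand: 3x3 branch
print(plain, fire, plain / fire)        # 147456 vs 12288, ~12x fewer parameters
```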
Activation Functions
- 2017 arXiv Self-Normalizing Neural Networks (SELU)
- 2016 ICLR Fast and Accurate Deep Network Learning by Exponential Linear Units (ELUs) (ELU)
- 2015 arXiv Empirical Evaluation of Rectified Activations in Convolutional Network (RReLU)
- 2015 ICCV Delving Deep into Rectifiers: Surpassing Human-Level Performance on ImageNet Classification (PReLU)
- 2013 ICML Rectifier Nonlinearities Improve Neural Network Acoustic Models (Leaky ReLU)
- 2010 ICML Rectified Linear Units Improve Restricted Boltzmann Machines (ReLU)
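
For quick reference, the activations named above written out in NumPy, following the definitions in the respective papers (the SELU constants are the fixed values derived by Klambauer et al.):

```python
import numpy as np

def relu(x):                  # Nair & Hinton, 2010
    return np.maximum(0.0, x)

def leaky_relu(x, a=0.01):    # Maas et al., 2013: small fixed negative slope
    return np.where(x > 0, x, a * x)

def prelu(x, a):              # He et al., 2015: the slope a is learned per channel
    return np.where(x > 0, x, a * x)

def elu(x, a=1.0):            # Clevert et al., 2016
    return np.where(x > 0, x, a * (np.exp(x) - 1.0))

def selu(x):                  # Klambauer et al., 2017, fixed self-normalizing constants
    lam, a = 1.0507009873554805, 1.6732632423543772
    return lam * np.where(x > 0, x, a * (np.exp(x) - 1.0))
```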
Visualization
- 2017 CVPR Network Dissection: Quantifying Interpretability of Deep Visual Representations
- 2015 ICMLW Understanding Neural Networks Through Deep Visualization
- 2014 ECCV Visualizing and Understanding Convolutional Networks
Fast Convolution
- 2017 ICML Warped Convolutions: Efficient Invariance to Spatial Transformations
- 2017 ICLR Faster CNNs with Direct Sparse Convolutions and Guided Pruning
- 2016 NIPS PerforatedCNNs: Acceleration through Elimination of Redundant Convolutions
- 2016 CVPR Fast Algorithms for Convolutional Neural Networks (Winograd)
- 2015 CVPR Sparse Convolutional Neural Networks
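
A one-dimensional taste of the Winograd algorithm from Lavin & Gray (CVPR 2016 above): F(2,3) produces two outputs of a 3-tap filter with 4 multiplications instead of 6. The transform matrices are the standard ones from the paper; the comparison against direct correlation is just a sanity check:

```python
import numpy as np

BT = np.array([[1, 0, -1, 0],
               [0, 1,  1, 0],
               [0, -1, 1, 0],
               [0, 1,  0, -1]], dtype=float)
G  = np.array([[1.0, 0.0, 0.0],
               [0.5, 0.5, 0.5],
               [0.5, -0.5, 0.5],
               [0.0, 0.0, 1.0]])
AT = np.array([[1, 1,  1,  0],
               [0, 1, -1, -1]], dtype=float)

def winograd_f23(d, g):
    """d: 4 input samples, g: 3 filter taps -> 2 correlation outputs."""
    m = (G @ g) * (BT @ d)          # only 4 elementwise multiplications
    return AT @ m

d, g = np.random.randn(4), np.random.randn(3)
direct = np.array([d[0]*g[0] + d[1]*g[1] + d[2]*g[2],
                   d[1]*g[0] + d[2]*g[1] + d[3]*g[2]])
assert np.allclose(winograd_f23(d, g), direct)
```

In 2-D, the same trick applied as F(2x2, 3x3) cuts multiplications by 2.25x, which is where the CNN speedups come from.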
Low-Rank Filter Approximation
- 2016 ICLR Convolutional Neural Networks with Low-rank Regularization
- 2016 ICLR Training CNNs with Low-Rank Filters for Efficient Image Classification
- 2016 TPAMI Accelerating Very Deep Convolutional Networks for Classification and Detection
- 2015 CVPR Efficient and Accurate Approximations of Nonlinear Convolutional Networks
- 2015 ICLR Speeding-up Convolutional Neural Networks Using Fine-tuned CP-Decomposition
- 2014 NIPS Exploiting Linear Structure Within Convolutional Networks for Efficient Evaluation
- 2014 BMVC Speeding up Convolutional Neural Networks with Low Rank Expansions
- 2013 NIPS Predicting Parameters in Deep Learning
- 2013 CVPR Learning Separable Filters
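
The common thread of the papers above: replace a weight matrix (or a flattened filter bank) with a low-rank factorization, so one layer becomes two thinner ones. A minimal NumPy sketch using truncated SVD; the layer sizes and rank are arbitrary illustrative choices:

```python
import numpy as np

m, n, r = 256, 512, 32
W = np.random.randn(m, n)                     # original dense layer

U, S, Vt = np.linalg.svd(W, full_matrices=False)
A = U[:, :r] * S[:r]                          # (m, r): second thin layer
B = Vt[:r, :]                                 # (r, n): first thin layer

x = np.random.randn(n)
y_full, y_low = W @ x, A @ (B @ x)
print("params:", W.size, "->", A.size + B.size)   # 131072 -> 24576
print("rel. error:", np.linalg.norm(y_full - y_low) / np.linalg.norm(y_full))
```

In practice the approximated layers are fine-tuned afterwards to recover accuracy, which is the focus of several of the entries above.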
Low Precision
- 2017 arXiv BitNet: Bit-Regularized Deep Neural Networks
- 2017 arXiv Gradient Descent for Spiking Neural Networks
- 2017 arXiv ShiftCNN: Generalized Low-Precision Architecture for Inference of Convolutional Neural Networks
- 2017 arXiv Gated XNOR Networks: Deep Neural Networks with Ternary Weights and Activations under a Unified Discretization Framework
- 2017 arXiv The High-Dimensional Geometry of Binary Neural Networks
- 2017 NIPS Training Quantized Nets: A Deeper Understanding
- 2017 NIPS TernGrad: Ternary Gradients to Reduce Communication in Distributed Deep Learning
- 2017 ICML Analytical Guarantees on Numerical Precision of Deep Neural Networks
- 2017 arXiv Deep Learning with Low Precision by Half-wave Gaussian Quantization
- 2017 CVPR Network Sketching: Exploiting Binary Structure in Deep CNNs
- 2017 CVPR Local Binary Convolutional Neural Networks
- 2017 ICLR Towards the Limit of Network Quantization
- 2017 ICLR Loss-aware Binarization of Deep Networks
- 2017 ICLR Trained Ternary Quantization
- 2017 ICLR Incremental Network Quantization: Towards Lossless CNNs with Low-precision Weights
- 2016 arXiv Quantized Neural Networks: Training Neural Networks with Low Precision Weights and Activations
- 2016 arXiv Accelerating Deep Convolutional Networks using low-precision and sparsity
- 2016 arXiv Deep neural networks are robust to weight binarization and other non-linear distortions
- 2016 ECCV XNOR-Net: ImageNet Classification Using Binary Convolutional Neural Networks
- 2016 ICMLW Overcoming Challenges in Fixed Point Training of Deep Convolutional Networks
- 2016 ICML Fixed Point Quantization of Deep Convolutional Networks
- 2016 NIPS Binarized Neural Networks
- 2016 arXiv Binarized Neural Networks: Training Deep Neural Networks with Weights and Activations Constrained to +1 or -1
- 2016 CVPR Quantized Convolutional Neural Networks for Mobile Devices
- 2016 ICLR Neural Networks with Few Multiplications
- 2015 arXiv Resiliency of Deep Neural Networks under Quantization
- 2015 arXiv Rounding Methods for Neural Networks with Low Resolution Synaptic Weights
- 2015 NIPS Backpropagation for Energy-Efficient Neuromorphic Computing
- 2015 NIPS BinaryConnect: Training Deep Neural Networks with Binary Weights during Propagations
- 2015 ICMLW Bitwise Neural Networks
- 2015 ICML Deep Learning with Limited Numerical Precision
- 2015 ICLRW Training deep neural networks with low precision multiplications
- 2015 arXiv Training Binary Multilayer Neural Networks for Image Classification using Expectation Backpropagation
- 2014 NIPS Expectation Backpropagation: Parameter-Free Training of Multilayer Neural Networks with Continuous or Discrete Weights
- 2013 arXiv Estimating or Propagating Gradients Through Stochastic Neurons for Conditional Computation
- 2011 NIPSW Improving the speed of neural networks on CPUs
- 1987 Combinatorica Randomized rounding: A technique for provably good algorithms and algorithmic proofs
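
Most of the binary-weight papers above (BinaryConnect, BNN, XNOR-Net) share one training trick: binarize on the forward pass but route gradients to a real-valued weight copy via the straight-through estimator. A toy NumPy sketch on a single linear layer with a squared loss; the sizes, learning rate, and loss are illustrative, not any paper's exact setup:

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(scale=0.1, size=(4, 8))    # real-valued "shadow" weights
x = rng.normal(size=8)
t = rng.normal(size=4)                    # regression target
lr = 0.1

for step in range(100):
    Wb = np.sign(W)                       # forward pass uses binarized weights
    y = Wb @ x
    grad_y = y - t                        # dLoss/dy for 0.5 * ||y - t||^2
    grad_W = np.outer(grad_y, x)          # straight-through: treat sign() as identity
    W -= lr * grad_W                      # update the real-valued copy
    W = np.clip(W, -1, 1)                 # keep shadow weights in [-1, 1]

# The binarized layer cannot fit t exactly, so the loss plateaus rather than vanishing.
print("final loss:", 0.5 * np.sum((np.sign(W) @ x - t) ** 2))
```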
Parameter Pruning
- 2017 ICML Beyond Filters: Compact Feature Map for Portable Deep Model
- 2017 ICLR Soft Weight-Sharing for Neural Network Compression
- 2017 ICLR Pruning Convolutional Neural Networks for Resource Efficient Inference
- 2017 ICLR Pruning Filters for Efficient ConvNets
- 2016 arXiv Designing Energy-Efficient Convolutional Neural Networks using Energy-Aware Pruning
- 2016 arXiv Network Trimming: A Data-Driven Neuron Pruning Approach towards Efficient Deep Architectures
- 2016 NIPS Learning the Number of Neurons in Deep Networks
- 2016 NIPS Learning Structured Sparsity in Deep Neural Networks
- 2016 NIPS Dynamic Network Surgery for Efficient DNNs
- 2016 ECCV Less is More: Towards Compact CNNs
- 2016 CVPR Fast ConvNets Using Group-wise Brain Damage
- 2016 ICLR Deep Compression: Compressing Deep Neural Networks with Pruning, Trained Quantization and Huffman Coding
- 2016 ICLR Compression of Deep Convolutional Neural Networks for Fast and Low Power Mobile Applications
- 2015 arXiv Structured Pruning of Deep Convolutional Neural Networks
- 2015 IEEE Access Channel-Level Acceleration of Deep Face Representations
- 2015 BMVC Data-free parameter pruning for Deep Neural Networks
- 2015 ICML Compressing Neural Networks with the Hashing Trick
- 2015 ICCV Deep Fried Convnets
- 2015 ICCV An Exploration of Parameter Redundancy in Deep Networks with Circulant Projections
- 2015 NIPS Learning both Weights and Connections for Efficient Neural Networks
- 2015 ICLR FitNets: Hints for Thin Deep Nets
- 2014 arXiv Compressing Deep Convolutional Networks using Vector Quantization
- 2014 NIPSW Distilling the Knowledge in a Neural Network
- 1995 ISANN Evaluating Pruning Methods
- 1993 T-NN Pruning Algorithms--A Survey
- 1989 NIPS Optimal Brain Damage
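
A sketch of the magnitude-based pruning popularized by Han et al. (NIPS 2015 above): zero out the smallest-magnitude weights, keep a binary mask, and fine-tune only the survivors. The global-percentile threshold used here is one simple choice among the many criteria studied in this literature:

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(64, 64))
sparsity = 0.9                                   # prune 90% of the weights

thresh = np.quantile(np.abs(W), sparsity)
mask = (np.abs(W) > thresh).astype(W.dtype)
W_pruned = W * mask
print("nonzero fraction:", mask.mean())          # ~0.10

# During fine-tuning, gradients of pruned weights are masked so the zeros stay zero:
grad = rng.normal(size=W.shape)
W_pruned -= 0.01 * grad * mask
```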
Transfer Learning
- 2016 arXiv What makes ImageNet good for transfer learning?
- 2014 NIPS How transferable are features in deep neural networks?
- 2014 CVPR CNN Features off-the-shelf: an Astounding Baseline for Recognition
- 2014 ICML DeCAF: A Deep Convolutional Activation Feature for Generic Visual Recognition
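
The off-the-shelf-features recipe these papers study, sketched in PyTorch (assuming a recent torchvision; the 10-class head and dummy batch are placeholders): freeze a pretrained backbone, drop its classifier, and train only a new linear head on the extracted features:

```python
import torch
import torchvision

backbone = torchvision.models.resnet18(weights="IMAGENET1K_V1")
backbone.fc = torch.nn.Identity()        # drop the 1000-way ImageNet head
backbone.eval()
for p in backbone.parameters():
    p.requires_grad = False              # features are fixed, not fine-tuned

classifier = torch.nn.Linear(512, 10)    # new head for a hypothetical 10-class task

x = torch.randn(4, 3, 224, 224)          # a dummy batch of images
with torch.no_grad():
    feats = backbone(x)                  # (4, 512) off-the-shelf features
logits = classifier(feats)               # only this layer would be trained
```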
Theory
- 2017 ICML On the Expressive Power of Deep Neural Networks
- 2017 ICML A Closer Look at Memorization in Deep Networks
- 2017 ICML An Analytical Formula of Population Gradient for two-layered ReLU network and its Applications in Convergence and Critical Point Analysis
- 2016 NIPS Exponential expressivity in deep neural networks through transient chaos
- 2016 arXiv Understanding Deep Convolutional Networks
- 2014 NIPS On the number of linear regions of deep neural networks
- 2014 ICML Provable Bounds for Learning Some Deep Representations
- 2014 ICLR On the number of response regions of deep feed forward networks with piece-wise linear activations
- 2014 ICLR Revisiting natural gradient for deep networks
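
Several of the theory papers above count the linear regions a ReLU network carves the input space into. A brute-force illustration (not a method from any listed paper): enumerate the distinct ReLU activation patterns a small random network produces over a 2-D grid, which lower-bounds the number of regions the grid touches:

```python
import numpy as np

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(8, 2)), rng.normal(size=8)
W2, b2 = rng.normal(size=(8, 8)), rng.normal(size=8)

xs = np.stack(np.meshgrid(np.linspace(-2, 2, 200),
                          np.linspace(-2, 2, 200)), -1).reshape(-1, 2)
h1 = np.maximum(xs @ W1.T + b1, 0)
h2 = np.maximum(h1 @ W2.T + b2, 0)
patterns = np.hstack([h1 > 0, h2 > 0])   # each pattern = one linear piece
print("activation patterns seen:", len(np.unique(patterns, axis=0)))
```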
3D Data
- 2017 NIPS PointNet++: Deep Hierarchical Feature Learning on Point Sets in a Metric Space
- 2017 ICCV Octree Generating Networks: Efficient Convolutional Architectures for High-resolution 3D Outputs
- 2017 SIGGRAPH O-CNN: Octree-based Convolutional Neural Network for Understanding 3D Shapes
- 2017 CVPR PointNet: Deep Learning on Point Sets for 3D Classification and Segmentation
- 2017 CVPR OctNet: Learning Deep 3D Representations at High Resolutions
- 2016 NIPS FPNN: Field Probing Neural Networks for 3D Data
- 2016 NIPS Learning a Probabilistic Latent Space of Object Shapes via 3D Generative-Adversarial Modeling
- 2015 ICCV Multi-view Convolutional Neural Networks for 3D Shape Recognition
- 2015 BMVC Sparse 3D convolutional neural networks
- 2015 CVPR 3D ShapeNets: A Deep Representation for Volumetric Shapes
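
The volumetric line of work above (3D ShapeNets, OctNet, O-CNN) consumes occupancy grids, while PointNet operates on the raw points. A toy voxelization sketch; the synthetic point cloud and the 32^3 resolution are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
points = rng.uniform(-1, 1, size=(2048, 3))       # a synthetic point cloud in [-1, 1]^3

res = 32
idx = np.clip(((points + 1) / 2 * res).astype(int), 0, res - 1)
grid = np.zeros((res, res, res), dtype=np.float32)
grid[idx[:, 0], idx[:, 1], idx[:, 2]] = 1.0       # mark occupied voxels
print("occupied voxels:", int(grid.sum()))
```

The octree-based papers above exist precisely because such dense grids waste memory on empty space as the resolution grows.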
Hardware
- 2017 ISVLSI YodaNN: An ultra-low power convolutional neural network accelerator based on binary weights
- 2017 ASPLOS SC-DCNN: Highly-Scalable Deep Convolutional Neural Network using Stochastic Computing
- 2017 FPGA Can FPGAs Beat GPUs in Accelerating Next-Generation Deep Neural Networks?
- 2015 NIPS Tutorial High-Performance Hardware for Machine Learning