A Collection of Network Compression Papers
Convolutional Neural Networks
- ImageNet Models
- Architecture Design
- Activation Functions
- Visualization
- Fast Convolution
- Low-Rank Filter Approximation
- Low Precision
- Parameter Pruning
- Transfer Learning
- Theory
- 3D Data
- Hardware
ImageNet Models
- 2017 CVPR Xception: Deep Learning with Depthwise Separable Convolutions (Xception)
- 2017 CVPR Aggregated Residual Transformations for Deep Neural Networks (ResNeXt)
- 2016 ECCV Identity Mappings in Deep Residual Networks (Pre-ResNet)
- 2016 arXiv Inception-v4, Inception-ResNet and the Impact of Residual Connections on Learning (Inception V4)
- 2016 CVPR Deep Residual Learning for Image Recognition (ResNet)
- 2015 arXiv Rethinking the Inception Architecture for Computer Vision (Inception V3)
- 2015 ICML Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift (Inception V2)
- 2015 ICCV Delving Deep into Rectifiers: Surpassing Human-Level Performance on ImageNet Classification (PReLU)
- 2015 ICLR Very Deep Convolutional Networks For Large-scale Image Recognition (VGG)
- 2015 CVPR Going Deeper with Convolutions (GoogLeNet/Inception V1)
- 2012 NIPS ImageNet Classification with Deep Convolutional Neural Networks (AlexNet)
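The ResNet line of work above is built around identity shortcuts: each block computes y = x + F(x), so the identity path carries gradients through the addition unchanged. Below is a minimal NumPy sketch of the pre-activation ordering from Identity Mappings in Deep Residual Networks; it uses fully connected layers instead of convolutions and omits batch normalization for brevity, and the shapes and `relu` helper are illustrative assumptions rather than code from any listed paper.

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def pre_act_residual_block(x, W1, W2):
    """Pre-activation residual block: the residual branch F applies the
    activation *before* each weight layer, and the identity shortcut is
    added back untouched (y = x + F(x))."""
    h = relu(x) @ W1          # first pre-activated transform
    h = relu(h) @ W2          # second pre-activated transform
    return x + h              # identity shortcut

rng = np.random.default_rng(0)
x = rng.standard_normal((4, 64))                 # batch of 4 feature vectors
W1 = rng.standard_normal((64, 64)) * 0.01
W2 = rng.standard_normal((64, 64)) * 0.01
print(pre_act_residual_block(x, W1, W2).shape)   # (4, 64)
```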
Architecture Design
- 2017 arXiv One Model To Learn Them All
- 2017 arXiv MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications
- 2017 ICML AdaNet: Adaptive Structural Learning of Artificial Neural Networks
- 2017 ICML Large-Scale Evolution of Image Classifiers
- 2017 CVPR Aggregated Residual Transformations for Deep Neural Networks
- 2017 CVPR Densely Connected Convolutional Networks
- 2017 ICLR Outrageously Large Neural Networks: The Sparsely-Gated Mixture-of-Experts Layer
- 2017 ICLR Neural Architecture Search with Reinforcement Learning
- 2017 ICLR Designing Neural Network Architectures using Reinforcement Learning
- 2017 ICLR Do Deep Convolutional Nets Really Need to be Deep and Convolutional?
- 2017 ICLR Highway and Residual Networks learn Unrolled Iterative Estimation
- 2016 NIPS Residual Networks Behave Like Ensembles of Relatively Shallow Networks
- 2016 BMVC Wide Residual Networks
- 2016 arXiv Benefits of depth in neural networks
- 2016 AAAI On the Depth of Deep Neural Networks: A Theoretical View
- 2016 arXiv SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and <0.5MB model size
- 2015 ICMLW Highway Networks
- 2015 CVPR Convolutional Neural Networks at Constrained Time Cost
- 2015 CVPR Fully Convolutional Networks for Semantic Segmentation
- 2014 NIPS Do Deep Nets Really Need to be Deep?
- 2014 ICLRW Understanding Deep Architectures using a Recursive Convolutional Network
- 2013 ICML Making a Science of Model Search: Hyperparameter Optimization in Hundreds of Dimensions for Vision Architectures
- 2009 ICCV What is the Best Multi-Stage Architecture for Object Recognition?
- 1995 NIPS Simplifying Neural Nets by Discovering Flat Minima
- 1994 T-NN SVD-NET: An Algorithm that Automatically Selects Network Structure
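Several entries above (Xception, MobileNets, SqueezeNet) cut cost by factoring a standard convolution into a depthwise step followed by a 1x1 pointwise step. The arithmetic is simple: a standard k×k convolution with C_in inputs and C_out outputs costs k·k·C_in·C_out multiply-accumulates per output position, while the separable version costs k·k·C_in + C_in·C_out. A small sketch of that count; the layer sizes are made-up examples, not taken from any of the papers.

```python
def conv_macs(k, c_in, c_out, h, w):
    """Multiply-accumulates of a standard k x k convolution on an h x w output map."""
    return k * k * c_in * c_out * h * w

def depthwise_separable_macs(k, c_in, c_out, h, w):
    """Depthwise k x k conv (one filter per channel) followed by a 1 x 1 pointwise conv."""
    depthwise = k * k * c_in * h * w
    pointwise = c_in * c_out * h * w
    return depthwise + pointwise

# Example layer: 3x3, 256 -> 256 channels, 56x56 output (sizes chosen only for illustration).
std = conv_macs(3, 256, 256, 56, 56)
sep = depthwise_separable_macs(3, 256, 256, 56, 56)
print(f"standard: {std:,} MACs, separable: {sep:,} MACs, ratio ~ {std / sep:.1f}x")
```

The ratio works out to 1/C_out + 1/k², so a 3x3 layer gets roughly an 8-9x reduction regardless of the exact channel counts.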
Activation Functions
- 2017 arXiv Self-Normalizing Neural Networks (SELU)
- 2016 ICLR Fast and Accurate Deep Network Learning by Exponential Linear Units (ELUs) (ELU)
- 2015 arXiv Empirical Evaluation of Rectified Activations in Convolutional Network (RReLU)
- 2015 ICCV Delving Deep into Rectifiers: Surpassing Human-Level Performance on ImageNet Classification (PReLU)
- 2013 ICML Rectifier Nonlinearities Improve Neural Network Acoustic Models
- 2010 ICML Rectified Linear Units Improve Restricted Boltzmann Machines (ReLU)
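The activations listed above differ only in how they treat negative inputs. A NumPy sketch of the main variants follows; the SELU constants are the fixed values published with the paper, while the PReLU slope is a learned per-channel parameter in practice and is shown here as a plain argument for simplicity.

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def prelu(x, a=0.25):
    # PReLU: the negative-side slope `a` is learned in the original paper.
    return np.where(x > 0, x, a * x)

def elu(x, alpha=1.0):
    return np.where(x > 0, x, alpha * (np.exp(x) - 1.0))

def selu(x, alpha=1.6732632423543772, scale=1.0507009873554805):
    # SELU: scaled ELU with constants chosen so activations self-normalize.
    return scale * np.where(x > 0, x, alpha * (np.exp(x) - 1.0))

x = np.linspace(-3, 3, 7)
print(relu(x), prelu(x), elu(x), selu(x), sep="\n")
```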
Visualization
- 2017 CVPR Network Dissection: Quantifying Interpretability of Deep Visual Representations
- 2015 ICMLW Understanding Neural Networks Through Deep Visualization
- 2014 ECCV Visualizing and Understanding Convolutional Networks
Fast Convolution
- 2017 ICML Warped Convolutions: Efficient Invariance to Spatial Transformations
- 2017 ICLR Faster CNNs with Direct Sparse Convolutions and Guided Pruning
- 2016 NIPS PerforatedCNNs: Acceleration through Elimination of Redundant Convolutions
- 2016 CVPR Fast Algorithms for Convolutional Neural Networks (Winograd)
- 2015 CVPR Sparse Convolutional Neural Networks
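The Winograd paper above (Fast Algorithms for Convolutional Neural Networks) replaces small convolutions with transformed element-wise products. In the 1-D F(2,3) case, two outputs of a 3-tap filter need only 4 multiplications instead of 6. A NumPy sketch, checked against direct correlation:

```python
import numpy as np

# Winograd F(2,3) transform matrices from Lavin & Gray, "Fast Algorithms for CNNs".
BT = np.array([[1,  0, -1,  0],
               [0,  1,  1,  0],
               [0, -1,  1,  0],
               [0,  1,  0, -1]], dtype=float)
G = np.array([[1.0,  0.0, 0.0],
              [0.5,  0.5, 0.5],
              [0.5, -0.5, 0.5],
              [0.0,  0.0, 1.0]])
AT = np.array([[1, 1,  1,  0],
               [0, 1, -1, -1]], dtype=float)

def winograd_f23(d, g):
    """Two outputs of correlating the 3-tap filter g with the 4-element tile d,
    using 4 element-wise multiplications: y = AT @ ((G @ g) * (BT @ d))."""
    return AT @ ((G @ g) * (BT @ d))

d = np.array([1.0, 2.0, -1.0, 3.0])           # input tile
g = np.array([0.5, -1.0, 2.0])                # filter
direct = np.array([d[0:3] @ g, d[1:4] @ g])   # reference: direct correlation
print(winograd_f23(d, g), direct)             # identical up to float rounding
```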
Low-Rank Filter Approximation
- 2016 ICLR Convolutional Neural Networks with Low-rank Regularization
- 2016 ICLR Training CNNs with Low-Rank Filters for Efficient Image Classification
- 2016 TPAMI Accelerating Very Deep Convolutional Networks for Classification and Detection
- 2015 CVPR Efficient and Accurate Approximations of Nonlinear Convolutional Networks
- 2015 ICLR Speeding-up convolutional neural networks using fine-tuned cp-decomposition
- 2014 NIPS Exploiting Linear Structure Within Convolutional Networks for Efficient Evaluation
- 2014 BMVC Speeding up Convolutional Neural Networks with Low Rank Expansions
- 2013 NIPS Predicting Parameters in Deep Learning
- 2013 CVPR Learning Separable Filters
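The common thread in the low-rank papers above is to replace a weight matrix (or a reshaped filter bank) W by a rank-r product of two thinner factors, trading a small approximation error for far fewer multiplications and parameters. A minimal truncated-SVD sketch in NumPy; the matrix size, rank, and the synthetic approximately-low-rank matrix are illustration choices only.

```python
import numpy as np

rng = np.random.default_rng(0)
# Stand-in for a trained layer's weights: approximately low rank plus noise.
W = rng.standard_normal((256, 64)) @ rng.standard_normal((64, 512)) \
    + 0.1 * rng.standard_normal((256, 512))

def low_rank_factors(W, r):
    """Best rank-r approximation of W (in Frobenius norm) via truncated SVD: W ~ A @ B."""
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    A = U[:, :r] * s[:r]                   # (256, r)
    B = Vt[:r, :]                          # (r, 512)
    return A, B

A, B = low_rank_factors(W, r=64)
err = np.linalg.norm(W - A @ B) / np.linalg.norm(W)
print(f"relative error {err:.3f}, params {W.size} -> {A.size + B.size}")
```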
Low Precision
- 2017 arXiv BitNet: Bit-Regularized Deep Neural Networks
- 2017 arXiv Gradient Descent for Spiking Neural Networks
- 2017 arXiv ShiftCNN: Generalized Low-Precision Architecture for Inference of Convolutional Neural Networks
- 2017 arXiv Gated XNOR Networks: Deep Neural Networks with Ternary Weights and Activations under a Unified Discretization Framework
- 2017 arXiv The High-Dimensional Geometry of Binary Neural Networks
- 2017 NIPS Training Quantized Nets: A Deeper Understanding
- 2017 NIPS TernGrad: Ternary Gradients to Reduce Communication in Distributed Deep Learning
- 2017 ICML Analytical Guarantees on Numerical Precision of Deep Neural Networks
- 2017 arXiv Deep Learning with Low Precision by Half-wave Gaussian Quantization
- 2017 CVPR Network Sketching: Exploiting Binary Structure in Deep CNNs
- 2017 CVPR Local Binary Convolutional Neural Networks
- 2017 ICLR Towards the Limit of Network Quantization
- 2017 ICLR Loss-aware Binarization of Deep Networks
- 2017 ICLR Trained Ternary Quantization
- 2017 ICLR Incremental Network Quantization: Towards Lossless CNNs with Low-precision Weights
- 2016 arXiv Quantized Neural Networks: Training Neural Networks with Low Precision Weights and Activations
- 2016 arXiv Accelerating Deep Convolutional Networks using low-precision and sparsity
- 2016 arXiv Deep neural networks are robust to weight binarization and other non-linear distortions
- 2016 ECCV XNOR-Net: ImageNet Classification Using Binary Convolutional Neural Networks
- 2016 ICMLW Overcoming Challenges in Fixed Point Training of Deep Convolutional Networks
- 2016 ICML Fixed Point Quantization of Deep Convolutional Networks
- 2016 NIPS Binarized Neural Networks
- 2016 arXiv Binarized Neural Networks: Training Deep Neural Networks with Weights and Activations Constrained to +1 or -1
- 2016 CVPR Quantized Convolutional Neural Networks for Mobile Devices
- 2016 ICLR Neural Networks with Few Multiplications
- 2015 arXiv Resiliency of Deep Neural Networks under Quantization
- 2015 arXiv Rounding Methods for Neural Networks with Low Resolution Synaptic Weights
- 2015 NIPS Backpropagation for Energy-Efficient Neuromorphic Computing
- 2015 NIPS BinaryConnect: Training Deep Neural Networks with Binary Weights during Propagations
- 2015 ICMLW Bitwise Neural Networks
- 2015 ICML Deep Learning with Limited Numerical Precision
- 2015 ICLRW Training deep neural networks with low precision multiplications
- 2015 arXiv Training Binary Multilayer Neural Networks for Image Classification using Expectation Backpropagation
- 2014 NIPS Expectation Backpropagation: Parameter-Free Training of Multilayer Neural Networks with Continuous or Discrete Weights
- 2013 arXiv Estimating or Propagating Gradients Through Stochastic Neurons for Conditional Computation
- 2011 NIPSW Improving the speed of neural networks on CPUs
- 1987 Combinatorica Randomized rounding: A technique for provably good algorithms and algorithmic proofs
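Most of the binary-weight papers above (BinaryConnect, XNOR-Net) keep full-precision weights for the update but use sign(W), optionally with a per-filter scale α = mean(|W|), in the forward pass. A forward-pass-only NumPy sketch of that idea; training with the straight-through estimator is not shown.

```python
import numpy as np

def binarize(W):
    """XNOR-Net-style weight binarization: W ~ alpha * sign(W),
    with alpha the mean absolute value (here one scale per output row)."""
    alpha = np.mean(np.abs(W), axis=1, keepdims=True)
    return alpha * np.sign(W)

rng = np.random.default_rng(0)
W = rng.standard_normal((8, 16))
x = rng.standard_normal(16)
print(W @ x)             # full-precision output
print(binarize(W) @ x)   # binary-weight approximation
```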
Parameter Pruning
- 2017 ICML Beyond Filters: Compact Feature Map for Portable Deep Model
- 2017 ICLR Soft Weight-Sharing for Neural Network Compression
- 2017 ICLR Pruning Convolutional Neural Networks for Resource Efficient Inference
- 2017 ICLR Pruning Filters for Efficient ConvNets
- 2016 arXiv Designing Energy-Efficient Convolutional Neural Networks using Energy-Aware Pruning
- 2016 arXiv Network Trimming: A Data-Driven Neuron Pruning Approach towards Efficient Deep Architectures
- 2016 NIPS Learning the Number of Neurons in Deep Networks
- 2016 NIPS Learning Structured Sparsity in Deep Neural Networks
- 2016 NIPS Dynamic Network Surgery for Efficient DNNs
- 2016 ECCV Less is More: Towards Compact CNNs
- 2016 CVPR Fast ConvNets Using Group-wise Brain Damage
- 2016 ICLR Deep Compression: Compressing Deep Neural Networks with Pruning, Trained Quantization and Huffman Coding
- 2016 ICLR Compression of Deep Convolutional Neural Networks for Fast and Low Power Mobile Applications
- 2015 arXiv Structured Pruning of Deep Convolutional Neural Networks
- 2015 IEEE Access Channel-Level Acceleration of Deep Face Representations
- 2015 BMVC Data-free parameter pruning for Deep Neural Networks
- 2015 ICML Compressing Neural Networks with the Hashing Trick
- 2015 ICCV Deep Fried Convnets
- 2015 ICCV An Exploration of Parameter Redundancy in Deep Networks with Circulant Projections
- 2015 NIPS Learning both Weights and Connections for Efficient Neural Networks
- 2015 ICLR FitNets: Hints for Thin Deep Nets
- 2014 arXiv Compressing Deep Convolutional Networks using Vector Quantization
- 2014 NIPSW Distilling the Knowledge in a Neural Network
- 1995 ISANN Evaluating Pruning Methods
- 1993 T-NN Pruning Algorithms--A Survey
- 1989 NIPS Optimal Brain Damage
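The simplest recipe shared by several pruning papers above (e.g. Learning both Weights and Connections, Deep Compression) is magnitude pruning: zero out the weights with the smallest absolute value, keep a binary mask, and fine-tune the survivors. A NumPy sketch of the masking step; the 90% sparsity target is an arbitrary example value.

```python
import numpy as np

def magnitude_prune(W, sparsity):
    """Zero the `sparsity` fraction of weights with smallest |w|; return pruned W and mask."""
    k = int(sparsity * W.size)
    threshold = np.sort(np.abs(W), axis=None)[k]      # k-th smallest magnitude
    mask = np.abs(W) >= threshold
    return W * mask, mask

rng = np.random.default_rng(0)
W = rng.standard_normal((128, 128))
W_pruned, mask = magnitude_prune(W, sparsity=0.9)
print(f"kept {mask.mean():.1%} of weights")           # ~10% survive
# During fine-tuning, gradients would be multiplied by `mask` so pruned weights stay zero.
```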
Transfer Learning
- 2016 arXiv What makes ImageNet good for transfer learning?
- 2014 NIPS How transferable are features in deep neural networks?
- 2014 CVPR CNN Features off-the-shelf: an Astounding Baseline for Recognition
- 2014 ICML DeCAF: A Deep Convolutional Activation Feature for Generic Visual Recognition
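The transfer-learning papers above reuse a network trained on ImageNet as a fixed feature extractor (or fine-tune it) on a new task. A hedged PyTorch sketch of the off-the-shelf approach: freeze the backbone and train only a new linear head. It assumes torchvision ≥ 0.13 for the `weights=` argument, and `num_classes = 10` is a placeholder for the target task.

```python
import torch
import torch.nn as nn
from torchvision import models

num_classes = 10  # placeholder for the downstream task

# Load an ImageNet-pretrained backbone (the argument name varies across torchvision versions).
backbone = models.resnet18(weights="IMAGENET1K_V1")

for p in backbone.parameters():
    p.requires_grad = False            # freeze: use the CNN as an off-the-shelf feature extractor

backbone.fc = nn.Linear(backbone.fc.in_features, num_classes)  # new head, trained from scratch

optimizer = torch.optim.SGD(backbone.fc.parameters(), lr=0.01, momentum=0.9)
# The training loop over the target dataset would go here; only the new head receives gradients.
```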
Theory
- 2017 ICML On the Expressive Power of Deep Neural Networks
- 2017 ICML A Closer Look at Memorization in Deep Networks
- 2017 ICML An Analytical Formula of Population Gradient for two-layered ReLU network and its Applications in Convergence and Critical Point Analysis
- 2016 NIPS Exponential expressivity in deep neural networks through transient chaos
- 2016 arXiv Understanding Deep Convolutional Networks
- 2014 NIPS On the number of linear regions of deep neural networks
- 2014 ICML Provable Bounds for Learning Some Deep Representations
- 2014 ICLR On the number of response regions of deep feed forward networks with piece-wise linear activations
- 2014 ICLR Revisiting natural gradient for deep networks
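Several theory papers above study how many linear regions a ReLU network splits its input space into. One can probe this empirically: each input induces a binary on/off pattern over all ReLU units, and inputs sharing a pattern lie in the same linear region. A NumPy sketch on a tiny random two-layer network; sampling only lower-bounds the true region count, and the layer sizes are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
W1, b1 = rng.standard_normal((2, 16)), rng.standard_normal(16)   # layer 1: R^2 -> R^16
W2, b2 = rng.standard_normal((16, 16)), rng.standard_normal(16)  # layer 2

def activation_pattern(x):
    """Binary ReLU on/off pattern for input x; one pattern = one linear region."""
    h1 = x @ W1 + b1
    h2 = np.maximum(h1, 0) @ W2 + b2
    return np.concatenate([h1 > 0, h2 > 0], axis=-1)

samples = rng.uniform(-1, 1, size=(100000, 2))
patterns = activation_pattern(samples).astype(np.int8)
n_regions = len(np.unique(patterns, axis=0))
print(f"distinct activation patterns found: {n_regions}")  # lower bound on #linear regions
```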
3D Data
- 2017 NIPS PointNet++: Deep Hierarchical Feature Learning on Point Sets in a Metric Space
- 2017 ICCV Octree Generating Networks: Efficient Convolutional Architectures for High-resolution 3D Outputs
- 2017 SIGGRAPH O-CNN: Octree-based Convolutional Neural Network for Understanding 3D Shapes
- 2017 CVPR PointNet: Deep Learning on Point Sets for 3D Classification and Segmentation
- 2017 CVPR OctNet: Learning Deep 3D Representations at High Resolutions
- 2016 NIPS FPNN: Field Probing Neural Networks for 3D Data
- 2016 NIPS Learning a Probabilistic Latent Space of Object Shapes via 3D Generative-Adversarial Modeling
- 2015 ICCV Multi-view Convolutional Neural Networks for 3D Shape Recognition
- 2015 BMVC Sparse 3D convolutional neural networks
- 2015 CVPR 3D ShapeNets: A Deep Representation for Volumetric Shapes
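The volumetric papers above (3D ShapeNets, OctNet) feed shapes to the network as occupancy grids, while PointNet consumes the raw points. A NumPy sketch of the basic voxelization step that turns a point cloud into a binary occupancy grid; the 32³ resolution and unit-cube normalization are common but arbitrary choices.

```python
import numpy as np

def voxelize(points, resolution=32):
    """Binary occupancy grid from an (N, 3) point cloud, normalized to the unit cube."""
    mins, maxs = points.min(axis=0), points.max(axis=0)
    norm = (points - mins) / (maxs - mins + 1e-9)                 # scale into [0, 1]^3
    idx = np.clip((norm * resolution).astype(int), 0, resolution - 1)
    grid = np.zeros((resolution,) * 3, dtype=bool)
    grid[idx[:, 0], idx[:, 1], idx[:, 2]] = True
    return grid

rng = np.random.default_rng(0)
cloud = rng.standard_normal((2048, 3))       # stand-in for points sampled from a 3D shape
grid = voxelize(cloud)
print(grid.shape, grid.sum(), "occupied voxels")
```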
Hardware
- 2017 ISVLSI YodaNN: An ultra-low power convolutional neural network accelerator based on binary weights
- 2017 ASPLOS SC-DCNN: Highly-Scalable Deep Convolutional Neural Network using Stochastic Computing
- 2017 FPGA Can FPGAs Beat GPUs in Accelerating Next-Generation Deep Neural Networks?
- 2015 NIPS Tutorial High-Performance Hardware for Machine Learning
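SC-DCNN above builds its arithmetic on stochastic computing, where a value in [0, 1] is encoded as the probability of a 1 in a bitstream and a single AND gate multiplies two independent streams. A NumPy sketch of that encoding; the bitstream length of 10,000 is an arbitrary accuracy/latency trade-off.

```python
import numpy as np

rng = np.random.default_rng(0)

def bitstream(p, length=10_000):
    """Unipolar stochastic encoding: each bit is 1 with probability p."""
    return rng.random(length) < p

a, b = 0.6, 0.3
product_stream = bitstream(a) & bitstream(b)   # AND gate of two independent streams
print(product_stream.mean())                   # ~ a * b = 0.18, up to sampling noise
```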