Deep compression code
https://github.com/songhan/SqueezeNet-Deep-Compression
import sys
import os
import numpy as np

help_ = '''
Usage:
    decode.py <net.prototxt> <net.binary> <target.caffemodel>
    Set the environment variable CAFFE_ROOT to the root of Caffe before running this demo!
'''

if len(sys.argv) != 4:
    print(help_)
    sys.exit()
else:
    prototxt = sys.argv[1]
    net_bin = sys.argv[2]
    target = sys.argv[3]

caffe_root = os.environ["CAFFE_ROOT"]
os.chdir(caffe_root)
print(caffe_root)
sys.path.insert(0, caffe_root + 'python')
import caffe

caffe.set_mode_cpu()
net = caffe.Net(prototxt, caffe.TEST)
layers = [k for k in net.params.keys() if 'conv' in k or 'fc' in k or 'ip' in k]
fin = open(net_bin, 'rb')

def binary_to_net(weights, spm_stream, ind_stream, codebook, num_nz):
    bits = np.log2(codebook.size)
    if bits == 4:
        slots = 2
    elif bits == 8:
        slots = 1
    else:
        print("Not implemented:", bits)
        sys.exit()
    code = np.zeros(weights.size, np.uint8)

    # Recover the quantized codes and relative indices from the byte streams
    spm = np.zeros(num_nz, np.uint8)
    ind = np.zeros(num_nz, np.uint8)
    if slots == 2:
        # Two 4-bit codes per byte: low nibble first, then high nibble
        spm[np.arange(0, num_nz, 2)] = spm_stream % (2**4)
        spm[np.arange(1, num_nz, 2)] = spm_stream[:num_nz // 2] // (2**4)
    else:
        spm = spm_stream
    # Indices are always stored as 4-bit relative offsets, two per byte
    ind[np.arange(0, num_nz, 2)] = ind_stream % (2**4)
    ind[np.arange(1, num_nz, 2)] = ind_stream[:num_nz // 2] // (2**4)

    # Recover the matrix: turn relative offsets into absolute positions
    ind = np.cumsum(ind + 1) - 1
    code[ind] = spm
    data = np.reshape(codebook[code], weights.shape)
    np.copyto(weights, data)

nz_num = np.fromfile(fin, dtype=np.uint32, count=len(layers))
for idx, layer in enumerate(layers):
    print("Reconstruct layer", layer)
    print("Total Non-zero number:", nz_num[idx])
    # e.g.  Reconstruct layer conv1
    #       Total Non-zero number: 13902
    if 'conv' in layer:
        bits = 8  # conv layers use 8-bit quantization, fully-connected layers use 4-bit
    else:
        bits = 4
    codebook_size = 2 ** bits  # total number of codewords
    codebook = np.fromfile(fin, dtype=np.float32, count=codebook_size)
    bias = np.fromfile(fin, dtype=np.float32, count=net.params[layer][1].data.size)
    np.copyto(net.params[layer][1].data, bias)  # copy the decoded biases in; net.params[layer][1].data was all zeros before
    spm_stream = np.fromfile(fin, dtype=np.uint8, count=(nz_num[idx] - 1) // (8 // bits) + 1)
    ind_stream = np.fromfile(fin, dtype=np.uint8, count=(nz_num[idx] - 1) // 2 + 1)
    binary_to_net(net.params[layer][0].data, spm_stream, ind_stream, codebook, nz_num[idx])
net.save(target)
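To make the stream format concrete, here is a minimal, self-contained round-trip sketch in pure NumPy (no Caffe required). It uses made-up toy values for the codebook and weights and the 4-bit case (two codes per byte), encoding a small weight vector into the same three pieces the decoder consumes: a codebook, a stream of quantized codes, and a stream of 4-bit relative index offsets.

```python
import numpy as np

def pack_nibbles(vals):
    """Pack 4-bit values two-per-byte: even index -> low nibble, odd -> high nibble."""
    v = np.asarray(vals, dtype=np.uint8)
    if v.size % 2:
        v = np.append(v, np.uint8(0))
    return (v[0::2] | (v[1::2] << 4)).astype(np.uint8)

# Toy codebook and dense weights (nonzeros at positions 1, 4, 6, 7)
codebook = np.array([0.0, -0.5, 0.25, 1.0], dtype=np.float32)
weights = np.array([0.0, -0.5, 0.0, 0.0, 0.25, 0.0, 1.0, 0.25], dtype=np.float32)

# --- Encode: codebook indices of the nonzeros, plus gaps between them ---
nz_pos = np.flatnonzero(weights)
codes = np.array([np.argmin(np.abs(codebook - w)) for w in weights[nz_pos]],
                 dtype=np.uint8)
rel = (np.diff(nz_pos, prepend=-1) - 1).astype(np.uint8)  # gap before each nonzero

spm_stream = pack_nibbles(codes)   # quantized values, 2 per byte
ind_stream = pack_nibbles(rel)     # relative indices, 2 per byte

# --- Decode: mirrors binary_to_net with 4-bit slots ---
num_nz = nz_pos.size
spm = np.zeros(num_nz, np.uint8)
ind = np.zeros(num_nz, np.uint8)
spm[0::2] = spm_stream % 16        # low nibbles
spm[1::2] = spm_stream // 16       # high nibbles
ind[0::2] = ind_stream % 16
ind[1::2] = ind_stream // 16
pos = np.cumsum(ind.astype(np.int64) + 1) - 1  # relative offsets -> absolute positions
code = np.zeros(weights.size, np.uint8)
code[pos] = spm
decoded = codebook[code]
print(np.array_equal(decoded, weights))  # True
```

The relative-offset trick is the key space saver: instead of storing each nonzero's absolute position, only the gap to the previous nonzero is stored, which fits in 4 bits for reasonably dense layers.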