Why are Eight Bits Enough for Deep Neural Networks?
Deep learning is a very weird technology. It evolved over decades on a very different track than the mainstream of AI, kept alive by the efforts of a handful of believers. When I started using it a few years ago, it reminded me of the first time I played with an iPhone – it felt like I’d been handed something that had been sent back to us from the future, or alien technology.
One of the consequences of that is that my engineering intuitions about it are often wrong. When I came across im2col, the memory redundancy seemed crazy, based on my experience with image processing, but it turns out it’s an efficient way to tackle the problem. While there are more complex approaches that can yield better results, they’re not the ones my graphics background would have predicted.
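To make that trade-off concrete, here's a rough numpy sketch of the im2col idea (my own illustration, not the implementation from any particular library): every patch the convolution kernel would visit gets copied into a row of a big matrix, so the whole convolution collapses into one matrix multiply. The redundancy that bothered me comes from each pixel being duplicated across many rows.

```python
import numpy as np

def im2col(image, k):
    """Copy every k x k patch of a 2D image into a row of a matrix.
    Each pixel is duplicated up to k*k times, which is the memory
    redundancy that looks wasteful but buys one big matrix multiply."""
    h, w = image.shape
    rows = []
    for y in range(h - k + 1):
        for x in range(w - k + 1):
            rows.append(image[y:y + k, x:x + k].reshape(-1))
    return np.stack(rows)  # shape: (num_patches, k*k)

# A convolution is then just: im2col(image, k) @ kernel.reshape(-1)
image = np.arange(25, dtype=np.float32).reshape(5, 5)
kernel = np.ones((3, 3), dtype=np.float32) / 9.0  # box blur
patches = im2col(image, 3)
out = (patches @ kernel.reshape(-1)).reshape(3, 3)
print(out)
```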
Another key area that seems to throw a lot of people off is how much precision you need for the calculations inside neural networks. For most of my career, precision loss has been a fairly easy thing to estimate. I almost never needed more than 32-bit floats, and if I did it was because I’d screwed up my numerical design and I had a fragile algorithm that would go wrong pretty soon even with 64 bits. 16-bit floats were good for a lot of graphics operations, as long as they weren’t chained together too deeply. I could use 8-bit values for a final output for display, or at the end of an algorithm, but they weren’t useful for much else.
It turns out that neural networks are different. You can run them with eight-bit parameters and intermediate buffers, and suffer no noticeable loss in the final results. This was astonishing to me, but it’s something that’s been re-discovered over and over again. My colleague Vincent Vanhoucke has the only paper I’ve found covering this result for deep networks, but I’ve seen with my own eyes how it holds true across every application I’ve tried it on. I’ve also had to convince almost every other engineer I’ve told that I’m not crazy, and watch them prove it to themselves by running lots of their own tests, so this post is an attempt to short-circuit some of that!
How does it work?
You can see an example of a low-precision approach in the Jetpac mobile framework, though to keep things simple I keep the intermediate calculations in float and just use eight bits to compress the weights. Nervana’s NEON library also supports fp16, though not eight-bit yet. As long as you accumulate to 32 bits when you’re doing the long dot products that are the heart of the fully-connected and convolution operations (and that take up the vast majority of the time), you don’t need float: you can keep all your inputs and outputs as eight bit. I’ve even seen evidence that you can drop a bit or two below eight without too much loss! The pooling layers are fine at eight bits too. I’ve generally seen the bias addition and activation functions (other than the trivial relu) done at higher precision, but 16 bits seems fine even for those.
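Here's a toy numpy sketch of the kind of arithmetic I mean (the linear min/max quantization scheme and the sizes are my own illustrative choices, not how any particular library lays things out): eight-bit inputs and weights, with the long dot product accumulated in 32 bits and then rescaled back to a float result.

```python
import numpy as np

rng = np.random.default_rng(0)

# Float reference: one long dot product, like a single row of a fully-connected layer.
x = rng.uniform(-1.0, 1.0, size=4096).astype(np.float32)
w = rng.uniform(-0.5, 0.5, size=4096).astype(np.float32)
ref = float(np.dot(x, w))

def quantize(v):
    """Linear quantization to uint8: v ~= scale * q + lo."""
    lo, hi = float(v.min()), float(v.max())
    scale = (hi - lo) / 255.0
    q = np.round((v - lo) / scale).astype(np.uint8)
    return q, scale, lo

q_x, s_x, lo_x = quantize(x)
q_w, s_w, lo_w = quantize(w)

# The expensive inner loop: products of eight-bit values, accumulated in 32 bits.
acc = int(np.dot(q_x.astype(np.int32), q_w.astype(np.int32)))

# Expand (s_x*q_x + lo_x) . (s_w*q_w + lo_w) to get back to a float dot product.
approx = (s_x * s_w * acc
          + s_x * lo_w * int(q_x.sum(dtype=np.int64))
          + s_w * lo_x * int(q_w.sum(dtype=np.int64))
          + lo_x * lo_w * len(x))

print(ref, approx)  # the quantized result tracks the float one closely
```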
I’ve generally taken networks that have been trained in full float and down-converted them afterwards, since I’m focused on inference, but training can also be done at low precision. Knowing that you’re aiming at a lower-precision deployment can make life easier too, even if you train in float, since you can do things like place limits on the ranges of the activation layers.
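Here's what that down-conversion looks like at its simplest, as a hedged sketch (per-tensor min/max ranges are my simplifying assumption; real converters can be smarter about choosing ranges): store each float weight tensor as uint8 plus the two numbers needed to restore it.

```python
import numpy as np

def compress_weights(w):
    """Store a float weight tensor as uint8 plus the range needed to restore it."""
    lo, hi = float(w.min()), float(w.max())
    scale = (hi - lo) / 255.0 if hi > lo else 1.0  # guard against a constant tensor
    q = np.round((w - lo) / scale).astype(np.uint8)
    return q, lo, scale

def restore_weights(q, lo, scale):
    """Expand the eight-bit representation back into float for the actual math."""
    return q.astype(np.float32) * scale + lo

w = np.random.default_rng(1).normal(0.0, 0.05, size=(256, 256)).astype(np.float32)
q, lo, scale = compress_weights(w)
w_restored = restore_weights(q, lo, scale)

# Worst-case per-weight error is about half a quantization step, which the
# trained network shrugs off in practice.
print(float(np.abs(w - w_restored).max()), scale / 2.0)
```

Stored this way the weights take a quarter of the space, and expanding them back to float at load time leaves the rest of the inference code unchanged, which is essentially the trade-off the Jetpac code makes.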
Why does it work?
I can’t see any fundamental mathematical reason why the results should hold up so well with low precision, so I’ve come to believe that it emerges as a side-effect of a successful training process. When we are trying to teach a network, the aim is to have it understand the patterns that are useful evidence and discard the meaningless variations and irrelevant details. That means we expect the network to be able to produce good results despite a lot of noise. Dropout is a good example of synthetic grit being thrown into the machinery, so that the final network can function even with very adverse data.
The networks that emerge from this process have to be very robust numerically, with a lot of redundancy in their calculations so that small differences in input samples don’t affect the results. Compared to differences in pose, position, and orientation, the noise in images is actually a comparatively small problem to deal with. All of the layers are affected by those small input changes to some extent, so they all develop a tolerance to minor variations. That means that the differences introduced by low-precision calculations are well within the tolerances a network has learned to deal with. Intuitively, they feel like weebles that won’t fall down no matter how much you push them, thanks to an inherently stable structure.
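A toy way to see that tolerance (just an illustration with random weights standing in for one layer of a trained net, so treat the numbers as suggestive rather than proof): perturb a layer's weights by noise the size of an eight-bit quantization step and compare how much the output moves against the size of the activations themselves.

```python
import numpy as np

rng = np.random.default_rng(2)

# A stand-in fully-connected layer with a relu.
W = rng.normal(0.0, 0.05, size=(512, 1024)).astype(np.float32)
x = rng.normal(0.0, 1.0, size=1024).astype(np.float32)

def layer(weights, inp):
    return np.maximum(weights @ inp, 0.0)  # matrix multiply followed by relu

# Noise roughly the size of one eight-bit quantization step across the weight range.
step = float(W.max() - W.min()) / 255.0
W_noisy = W + rng.uniform(-step / 2, step / 2, size=W.shape).astype(np.float32)

clean = layer(W, x)
noisy = layer(W_noisy, x)

# The shift caused by quantization-scale noise is a small fraction of the
# typical activation magnitude.
print(float(np.abs(clean - noisy).mean()), float(np.abs(clean).mean()))
```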
At heart I’m an engineer, so I’ve been happy to see that it works in practice without worrying too much about why; I don’t want to look a gift horse in the mouth! What I’ve laid out here is my best guess at the cause of this property, but I would love to see a more principled explanation if any researchers want to investigate more thoroughly. [Update – here’s a related paper from Matthieu Courbariaux, thanks Scott!]
What does this mean?
This is very good news for anyone trying to optimize deep neural networks. On the general CPU side, modern SIMD instruction sets are often geared towards float, and so eight bit calculations don’t offer a massive computational advantage on recent x86 or ARM chips. DRAM access takes a lot of electrical power though, and is slow too, so just reducing the bandwidth by 75% can be a very big help. Being able to squeeze more values into fast, low-power SRAM cache and registers is a win too.
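To put rough numbers on the bandwidth point (the layer size here is just an illustrative example):

```python
# Illustrative only: weight storage for a 4096 x 4096 fully-connected layer.
params = 4096 * 4096
print(params * 4 / 2**20, "MB as 32-bit floats")  # 64 MB
print(params * 1 / 2**20, "MB as 8-bit values")   # 16 MB, a 75% cut in DRAM traffic
```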
GPUs were originally designed to take eight bit texture values, perform calculations on them at higher precisions, and then write them back out at eight bits again, so they’re a perfect fit for our needs. They generally have very wide pipes to DRAM, so the gains aren’t quite as straightforward to achieve, but can be exploited with a bit of work. I’ve learned to appreciate DSPs as great low-power solutions too, and their instruction sets are geared towards the sort of fixed-point operations we need. Custom vision chips like Movidius’ Myriad are good fits too.
Deep networks’ robustness means that they can be implemented efficiently across a very wide range of hardware. Combine this flexibility with their almost-magical effectiveness at a lot of AI tasks that have eluded us for decades, and you can see why I’m so excited about how they will alter our world over the next few years!