Comparison of Symbolic Deep Learning Frameworks
http://blog.revolutionanalytics.com/2016/08/deep-learning-part-1.html
Deep Learning Part 1: Comparison of Symbolic Deep Learning Frameworks
by Anusua Trivedi, Microsoft Data Scientist
Background and Approach
This blog series is based on my upcoming talk on the re-usability of deep learning models at the Strata + Hadoop World conference in Singapore. The series will appear in several parts, in which I describe my experiences and go deep into the reasons behind my choices.
Deep learning is an emerging field of research with applications across multiple domains. I try to show how a transfer learning and fine-tuning strategy leads to re-usability of the same Convolutional Neural Network (CNN) model in different, disjoint domains. Being able to apply the same fine-tuned model across such varied domains is what makes this approach valuable.
In this blog (Part 1), I describe and compare the commonly used open-source deep learning frameworks. I dive into the pros and cons of each framework, and discuss why I chose Theano for my work.
Please feel free to email me at trivedianusua23@gmail.com if you have questions.
Symbolic Frameworks
Symbolic computation frameworks (such as CNTK, MXNET, TensorFlow, Theano) specify models as a symbolic graph of vector operations, such as matrix add/multiply or convolution. A layer is just a composition of those operations. The fine granularity of the building blocks (operations) allows users to invent new complex layer types without implementing them in a low-level language (as in Caffe).
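For example, in Theano a dense layer is nothing more than a few primitive operations wired together into a graph. A minimal sketch (layer sizes and variable names here are illustrative, not from the original post):

```python
import numpy as np
import theano
import theano.tensor as T

# Symbolic inputs: nothing is computed yet, we only build a graph.
X = T.matrix('X')  # a minibatch of inputs
W = theano.shared(np.random.randn(784, 256).astype('float32'), name='W')
b = theano.shared(np.zeros(256, dtype='float32'), name='b')

# A "layer" is just a composition of primitive operations.
hidden = T.nnet.relu(T.dot(X, W) + b)

# Compiling turns the symbolic graph into an optimized callable.
layer_fn = theano.function([X], hidden)
out = layer_fn(np.random.randn(8, 784).astype('float32'))  # shape (8, 256)
```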
I've used different symbolic computation frameworks in my work. However, I found that each of them has pros and cons in its design and current implementation, and none of them perfectly satisfies all needs. For my problem, I decided to work with Theano.
Here we compare the following symbolic computation frameworks:
- Software: Theano
- Creator: Université de Montréal
- Software license: BSD license
- Open source: Yes
- Platform: Cross-platform
- Written in: Python
- Interface: Python
- CUDA support: Yes
- Automatic differentiation: Yes
- Has pre-trained models: Through Lasagne's model zoo
- Recurrent Nets: Yes
- Convolutional Nets: Yes
- RBM/DBNs: Yes
- Software: TensorFlow
- Creator: Google Brain Team
- Software license: Apache 2.0
- Open source: Yes
- Platform: Linux, Mac OS X (Windows support on roadmap)
- Written in: C++, Python
- Interface: Python, C/C++
- CUDA support: Yes
- Automatic differentiation: Yes
- Has pre-trained models: No
- Recurrent Nets: Yes
- Convolutional Nets: Yes
- RBM/DBNs: Yes
- Software: MXNET
- Creator: Distributed (Deep) Machine Learning Community
- Software license: Apache 2.0
- Open source: Yes
- Platform: Ubuntu, OS X, Windows, AWS, Android, iOS, JavaScript
- Written in: C++, Python, Julia, Matlab, R, Scala
- Interface: C++, Python, Julia, Matlab, JavaScript, R, Scala
- CUDA support: Yes
- Automatic differentiation: Yes
- Has pre-trained models: Yes
- Recurrent Nets: Yes
- Convolutional Nets: Yes
- RBM/DBNs: Yes
Non-symbolic frameworks
PROS:
- Non-symbolic (imperative) neural network frameworks like Torch and Caffe tend to have a very similar design in their computation part.
- In terms of expressiveness, imperative frameworks with a good design can also expose a graph-like interface (e.g. torch/nngraph).
CONS:
- The main drawback of imperative frameworks lies in manual optimization. For example, in-place operations have to be implemented by hand.
- Most imperative frameworks are not designed well enough to match the expressiveness of symbolic frameworks.
Symbolic frameworks
PROS:
- Symbolic frameworks can infer optimizations automatically from the dependency graph.
- A symbolic framework can exploit many more memory-reuse opportunities, as MXNET does well.
- Symbolic frameworks can automatically compute an optimal schedule, as explained in the TensorFlow whitepaper.
CONS:
- The open-source symbolic frameworks available today are still not good enough to beat imperative frameworks in performance.
Adding New Operations
| Theano / MXNET | TensorFlow |
| --- | --- |
| Can add an operation in Python with inline C support. | Forward pass in C++, symbolic gradient in Python. |
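As an illustration of the Theano/MXNET column, here is a sketch of a new operation in pure Python following the Op interface described in Theano's documentation (a toy op computing 2x, with its analytic gradient):

```python
import theano
import theano.tensor as T

class DoubleOp(theano.Op):
    """A toy Op computing 2*x, implemented purely in Python."""
    __props__ = ()

    def make_node(self, x):
        # Declare the symbolic input/output types of the Op.
        x = T.as_tensor_variable(x)
        return theano.Apply(self, [x], [x.type()])

    def perform(self, node, inputs, output_storage):
        # The actual numeric computation, run at call time.
        (x,) = inputs
        output_storage[0][0] = 2 * x

    def grad(self, inputs, output_grads):
        # Symbolic gradient: d(2x)/dx = 2.
        return [2 * output_grads[0]]

x = T.vector('x')
y = DoubleOp()(x)
f = theano.function([x], y)  # f([1., 2.]) -> [2., 4.]
```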
Code Re-usability
Training deep networks is time-consuming, so Caffe has released some pre-trained models/weights (the model zoo) which can be used as initial weights when transfer learning or fine-tuning deep networks on domain-specific or custom images.
- Theano: Lasagne is a high-level framework built on top of Theano. It's very easy to use Caffe pre-trained model weights in Lasagne (see the sketch after this list).
- TensorFlow: No support for pre-trained models.
- MXNET: MXNET has a caffe_converter tool which allows converting pre-trained Caffe model weights for use in MXNET.
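A minimal sketch of the Lasagne case. The weight file name and the two-layer architecture here are hypothetical; real model-zoo files (such as those in Lasagne Recipes, converted from Caffe) store weights in a pickled dict:

```python
import pickle
import lasagne
from lasagne.layers import InputLayer, DenseLayer, set_all_param_values

# Rebuild a network whose architecture mirrors the pre-trained model.
# (Illustrative two-layer net; a real case would rebuild e.g. VGG-16.)
net = InputLayer((None, 784))
net = DenseLayer(net, num_units=256)
net = DenseLayer(net, num_units=10,
                 nonlinearity=lasagne.nonlinearities.softmax)

# 'pretrained.pkl' is a hypothetical weights file converted from Caffe,
# in the pickled-dict format used by Lasagne's Recipes model zoo.
with open('pretrained.pkl', 'rb') as f:
    params = pickle.load(f)['param values']

set_all_param_values(net, params)  # copy the weights into our graph
```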
Low-level Tensor Operators
A reasonably efficient implementation of low-level operators can serve as an ingredient in writing new models, saving the effort of writing new operations.
| Theano | TensorFlow | MXNET |
| --- | --- | --- |
| A lot of basic operations | Fairly good | Very few |
Control Flow Operator
Control flow operators make the symbolic engine more expressive and generic.
| Theano | TensorFlow | MXNET |
| --- | --- | --- |
| Supported | Experimental | Not supported |
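For example, Theano's scan operator expresses loops symbolically. A minimal cumulative-sum sketch (illustrative only):

```python
import theano
import theano.tensor as T

x = T.vector('x')

def step(x_t, acc):
    # Called symbolically once per sequence element.
    return acc + x_t

# scan unrolls the loop inside the graph; it returns (outputs, updates).
sums, _ = theano.scan(fn=step,
                      sequences=x,
                      outputs_info=T.zeros_like(x[0]))
f = theano.function([x], sums)
# f([1., 2., 3.]) -> [1., 3., 6.]
```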
High-level Support
- Theano: A pure symbolic computation framework. High-level frameworks can be built on top of it to fit the desired means of use. Successful examples include Keras, Lasagne and Blocks (see the sketch after this list).
- TensorFlow: Has good design considerations for neural network training while avoiding being purely a neural network framework, which is a wonderful balance. Graph collections, queues, image augmenters, etc. can be useful building blocks for a higher-level wrapper.
- MXNET: Apart from the symbolic part, MXNET also comes with all the components necessary for image classification, going all the way from data loading to building a model with a method to start training.
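To illustrate how thin such a high-level wrapper is, here is a minimal (illustrative) Lasagne MLP: each layer call merely extends the underlying Theano graph, and compilation still happens through theano.function:

```python
import theano
import theano.tensor as T
import lasagne

# A small MLP in Lasagne; shapes and hyperparameters are illustrative.
X, y = T.matrix('X'), T.ivector('y')
net = lasagne.layers.InputLayer((None, 784), input_var=X)
net = lasagne.layers.DenseLayer(net, num_units=256,
                                nonlinearity=lasagne.nonlinearities.rectify)
net = lasagne.layers.DenseLayer(net, num_units=10,
                                nonlinearity=lasagne.nonlinearities.softmax)

# Standard Theano machinery underneath: loss, gradients, updates.
pred = lasagne.layers.get_output(net)
loss = lasagne.objectives.categorical_crossentropy(pred, y).mean()
params = lasagne.layers.get_all_params(net, trainable=True)
updates = lasagne.updates.nesterov_momentum(loss, params, learning_rate=0.01)
train_fn = theano.function([X, y], loss, updates=updates)
```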
Performance
Benchmarking Using Single-GPU
I benchmarked the LeNet model on the MNIST dataset using a single GPU (NVIDIA Quadro K1200).
| Theano | TensorFlow | MXNET |
| --- | --- | --- |
| Great | Not so good | Excellent |
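The exact benchmark script is not included in this post; a generic single-GPU timing harness along these lines (assuming a compiled train_fn like the one sketched above) is all that such wall-clock benchmarking requires:

```python
import time

def time_epochs(train_fn, X, y, batch_size=128, n_epochs=5):
    """Average wall-clock seconds per epoch for a compiled train step."""
    t0 = time.time()
    for _ in range(n_epochs):
        for i in range(0, len(X), batch_size):
            train_fn(X[i:i + batch_size], y[i:i + batch_size])
    return (time.time() - t0) / n_epochs
```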
Memory
GPU memory is limited and can often be a problem for large models.
| Theano | TensorFlow | MXNET |
| --- | --- | --- |
| Great | Not so good | Excellent |
Single-GPU Speed
Theano takes a long time to compile a graph, especially with complex models. At runtime, TensorFlow is a bit slower:
| Theano / MXNET | TensorFlow |
| --- | --- |
| Comparable to cuDNNv4 | About 0.5x slower |
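The compilation cost is easy to observe directly. A contrived sketch (the graph depth is arbitrary) that times theano.function on a deliberately deep graph:

```python
import time
import theano
import theano.tensor as T

x = T.vector('x')
y = x
for _ in range(500):  # build a deliberately deep symbolic graph
    y = T.tanh(y) + 0.1 * y

t0 = time.time()
f = theano.function([x], y)  # graph optimization + code generation happen here
print('compile time: %.1f s' % (time.time() - t0))
```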
Parallel/Distributed Support
| Theano | TensorFlow | MXNET |
| --- | --- | --- |
| Experimental multi-GPU | Multi-GPU | Distributed |
Conclusion
Theano (with the higher-level Lasagne and Keras) is a great choice for deep learning models. It's very easy to implement new networks and modify existing ones using Lasagne/Keras. I prefer Python, and thus prefer Lasagne/Keras due to their very mature Python interfaces; however, they do not support R. I have tried transfer learning and fine-tuning in Lasagne/Keras, and it's very easy to modify an existing network and customize it with domain-specific custom data, as the sketch below illustrates.
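A minimal sketch of that fine-tuning workflow in Lasagne (layer sizes and class counts are hypothetical): keep the pre-trained feature layers, attach a new classifier head sized for the target domain, and hand only the head's parameters to the update rule:

```python
import theano.tensor as T
import lasagne
from lasagne.layers import InputLayer, DenseLayer

# Stand-in for a pre-trained network; in practice the weights would be
# loaded from a model-zoo file as in the re-usability section above.
l_in = InputLayer((None, 784))
l_feat = DenseLayer(l_in, num_units=256)         # "pre-trained" feature layers
l_old_head = DenseLayer(l_feat, num_units=1000)  # original classifier (discarded)

# Attach a new head sized for the (hypothetical) target domain's 10 classes.
l_new_head = DenseLayer(l_feat, num_units=10,
                        nonlinearity=lasagne.nonlinearities.softmax)

# Fine-tune only the new head: earlier layers stay frozen because only
# the head's parameters are passed to the update rule.
y = T.ivector('y')
pred = lasagne.layers.get_output(l_new_head)
loss = lasagne.objectives.categorical_crossentropy(pred, y).mean()
updates = lasagne.updates.adam(loss, l_new_head.get_params(trainable=True))
```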
Comparing the different frameworks shows that MXNET is the best choice in terms of performance and memory. Moreover, it has great R support; in fact, it is the only one of these frameworks that supports all of its functionality in R. In MXNET, transfer learning and fine-tuning networks are possible, but not as easy as in Lasagne/Keras. This makes modifying existing trained networks more difficult, and thus it is a bit harder to use domain-specific custom data.
Continued in Deep Learning Part 2: Transfer Learning and Fine-tuning Deep Convolutional Neural Networks
Posted by Guest Blogger at 09:30 in data science, Microsoft, predictive analytics, python, R
Comments
It’s worth noting that H2O is another DL framework as well, though without GPU support for now.
Also, there is a tradeoff between performance and flexibility in DL frameworks.
The blog posts below show native R DL code with GPU backend acceleration:
http://www.parallelr.com/r-deep-neural-network-from-scratch/
http://www.parallelr.com/r-dnn-parallel-acceleration/
http://www.parallelr.com/r-dnn-cuda-multigpu/
Posted by: daisy | August 10, 2016 at 20:18
You are using an old TensorFlow release. It is no longer slow, and it supports multi-machine training. Operations can also be easily defined in Python, and control ops are no longer experimental.
Posted by: Andrew | August 25, 2016 at 23:00
Would you please put your benchmarking code on GitHub and link back here in a comment?
Posted by: Dale Smith | August 26, 2016 at 05:21
Excellent post, Anusua. Very informative. Do you mind if I re-post this along with Part 2 on my platform www.gladwinanalytics.com? It would be greatly useful to tens of thousands of Gladwin Analytics users.
Thanks,
Anandh Shanmugaraj
Posted by: Big Data Jobs | August 27, 2016 at 05:35
Thanks for the comments.
Daisy - I tried to compare open-source frameworks only. I haven't played much with H2O; thanks for posting the links.
Andrew - Ahh! Thanks for pointing that out. I benchmarked TensorFlow some time back; I need to update to the new version.
Dale Smith - The plan is to make all the code available through GitHub. It's a work in progress, and I'll make it available once I have some newer-version results.
Anandh Shanmugaraj - Feel free to re-post.
Posted by: Anusua Trivedi | August 29, 2016 at 07:12