Distributed Deep Learning
Recommended reading: Tie-Yan Liu's (刘铁岩) book 《分布式机器学习》 (Distributed Machine Learning), as well as two excellent blog posts:
https://zhuanlan.zhihu.com/p/29032307
https://zhuanlan.zhihu.com/p/30976469
Principles of Distributed Deep Learning
The basics of DL training are covered in many tutorials, so we only recall them briefly here.
What if the scale is too large and training must be distributed? Distributed machine learning roughly takes the following approaches:
- When the computation is too heavy (computation parallelism), it can be parallelized across multiple threads or nodes. A common algorithm is synchronous stochastic gradient descent (synchronous SGD), which roughly amounts to running mini-batch SGD on K workers (K = number of nodes). [ch6.2]
- When there is too much training data (data parallelism, also the most common scenario), the data is partitioned across multiple nodes. Each node first trains a sub-model on its local data while staying in communication with the other nodes (e.g., exchanging parameter updates), so that the per-node results can eventually be merged into one global ML model. [ch6.3]
- When the model is too large (model parallelism), the model itself (e.g., different layers of a neural network) is partitioned across nodes, which may then need to synchronize frequently; see the sketch after this list. [ch6.4]
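To make the model-parallel case concrete, here is a minimal PyTorch sketch that splits a two-layer network across two GPUs. The class name, layer sizes, and device ids are illustrative assumptions, and it presumes two CUDA devices are available:

```python
import torch
import torch.nn as nn

# Minimal model-parallel sketch: different layers live on different
# devices, and activations are moved between devices inside forward().
class TwoStageNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.stage1 = nn.Linear(784, 256).to("cuda:0")
        self.stage2 = nn.Linear(256, 10).to("cuda:1")

    def forward(self, x):
        h = torch.relu(self.stage1(x.to("cuda:0")))
        return self.stage2(h.to("cuda:1"))  # activations cross devices here
```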
These strategies can be summarized in the following figure:
Taking data parallelism as an example, the whole pipeline is:
- Partition the data across the nodes (see the sketch after this list)
- Train on each node locally
- Design the inter-node communication and the overall topology [ch7]
- Aggregate the trained sub-models [ch8]
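As a concrete illustration of the partitioning step, below is a minimal sketch using PyTorch's DistributedSampler to give each worker a disjoint shard of a placeholder, randomly generated dataset. Passing num_replicas and rank explicitly keeps the snippet runnable in a single process, without an initialized process group:

```python
import torch
from torch.utils.data import DataLoader, TensorDataset
from torch.utils.data.distributed import DistributedSampler

# Placeholder dataset: 1000 samples of 8 features with binary labels.
dataset = TensorDataset(torch.randn(1000, 8), torch.randint(0, 2, (1000,)))

# Each of the 4 workers sees a disjoint shard; this process is rank 0.
sampler = DistributedSampler(dataset, num_replicas=4, rank=0, shuffle=True)
loader = DataLoader(dataset, batch_size=32, sampler=sampler)

for inputs, labels in loader:
    pass  # a local training step would go here
```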
Distributed DL Models
Three distributed DL approaches are common in industry today: [ch7.3]
1. PyTorch: AllReduce Model
MPI is a common distributed computing framework for implementing distributed machine learning systems. The main idea is to synchronize messages through the AllReduce API, which supports any operation that satisfies the reduction rules. Since the usual way to aggregate machine learning models is summation or averaging, AllReduce's logic is a natural fit. The standard AllReduce API has various implementations.
The AllReduce mode is simple and convenient, which makes it a good match for synchronous parallel training. To this day, many deep learning systems still use it for the communication step in distributed training, such as Caffe2's gloo communication library, Baidu's DeepSpeech system, and Nvidia's NCCL communication library.
However, AllReduce only supports synchronous communication, and every worker node runs the same logic, which means each worker must hold the complete model. This makes it unsuitable for very large models.
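As a sketch of how this looks with PyTorch's torch.distributed API (assuming the process group has already been initialized via dist.init_process_group, and with model, optimizer, loss_fn, and batch as placeholder names), one synchronous SGD step becomes:

```python
import torch
import torch.distributed as dist

def allreduce_sgd_step(model, optimizer, loss_fn, batch):
    inputs, targets = batch
    optimizer.zero_grad()
    loss = loss_fn(model(inputs), targets)
    loss.backward()
    world_size = dist.get_world_size()
    for p in model.parameters():
        # Sum each gradient tensor across all workers, then average,
        # so every replica applies exactly the same update.
        dist.all_reduce(p.grad, op=dist.ReduceOp.SUM)
        p.grad /= world_size
    optimizer.step()
```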
Limitations of AllReduce:
As the number of worker nodes grows and the computation becomes unbalanced, the training speed is dictated by the slowest node in the system; and if any single worker fails, the whole system has to stop.
Moreover, when a model has too many parameters, it may exceed the memory capacity of a single machine.
2. MXNet: Parameter Server Model
In the parameter server framework, all nodes in the system are logically divided into workers and servers. Each worker is in charge of its local training task and communicates with the parameter servers through the server interface: it pulls the latest model parameters from the servers and pushes its latest locally trained updates back to them. With a parameter server, training can run synchronously, asynchronously, or even in a mixed mode.
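The following toy sketch illustrates the push/pull protocol. It is a minimal stand-in, not MXNet's actual KVStore API, and compute_gradients in the usage comment is hypothetical:

```python
import threading

class ParameterServer:
    """Toy parameter server: workers pull() the latest weights and
    push() gradients back; updates are applied under a lock, so
    pushes may arrive asynchronously from many workers."""

    def __init__(self, params, lr=0.1):
        self.params = dict(params)   # name -> parameter value
        self.lr = lr
        self.lock = threading.Lock()

    def pull(self):
        with self.lock:
            return dict(self.params)  # snapshot of current weights

    def push(self, grads):
        with self.lock:
            for name, g in grads.items():
                self.params[name] -= self.lr * g  # SGD update

# A worker loop would then look like:
#   weights = server.pull()
#   grads = compute_gradients(weights, local_batch)  # hypothetical helper
#   server.push(grads)
```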
3. TensorFlow: Dataflow Model
TensorFlow's computational graph model describes a computation as a directed acyclic dataflow graph, with nodes representing operations and edges representing the data flowing between them.
Dataflow-based distributed machine learning systems borrow the flexibility of DAG-based big-data processing systems: the computing task is expressed as a directed acyclic dataflow graph whose nodes are operations on the data and whose edges are the dependencies between those operations.
The system then executes this dataflow graph in a distributed fashion automatically, so the user only has to design a dataflow graph that appropriately represents the algorithmic logic to be executed.
Below, a typical dataflow graph from TensorFlow's dataflow system is taken as an example.
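A minimal code sketch of the same idea (assuming TensorFlow 2.x, where tf.function traces a Python function into a dataflow graph; the function and tensor shapes below are illustrative):

```python
import tensorflow as tf

# tf.function traces this Python function into a dataflow graph:
# nodes are ops (e.g., MatMul, AddV2) and edges carry tensors.
@tf.function
def affine(x, w, b):
    return tf.matmul(x, w) + b

x = tf.ones([2, 3])
w = tf.ones([3, 4])
b = tf.zeros([4])
print(affine(x, w, b))  # runs the traced graph

# Inspect the graph's nodes (operation types):
graph = affine.get_concrete_function(x, w, b).graph
print([op.type for op in graph.get_operations()])
```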
Distributed Machine Learning Algorithms
[ch9]