TensorFlow supports two forms of parallelism: model parallelism and data parallelism. Model parallelism splits a model according to its own design, placing different parts of the computation on different hardware devices. Data parallelism is the more general and easier way to implement large-scale parallelism: multiple hardware devices compute gradients on different batches of data at the same time, and the gradients are aggregated into a single update of the global parameters.

In data parallelism, multiple GPUs train on multiple batches of data simultaneously. The model running on each GPU is based on the same neural network: the network structure is identical and the model parameters are shared.

Synchronous data parallelism: all GPUs finish computing the gradients for their batches, the gradients are aggregated, and the shared model parameters are updated once, which behaves like training with a larger batch. It is most efficient when all GPUs are of the same model and speed.
Asynchronous data parallelism: instead of waiting for every GPU to finish a training step, whichever GPU finishes applies its gradients to the shared model parameters immediately.
Synchronous data parallelism generally converges faster than asynchronous and reaches higher model accuracy, as the short sketch below illustrates.
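A minimal NumPy sketch (not from the book; the gradient values are made up) of how the two schemes apply updates to a shared weight:

# Contrast synchronous and asynchronous updates of one shared parameter.
import numpy as np

lr = 0.1
gpu_grads = [0.2, 0.4, 0.1, 0.3]   # gradients computed by 4 GPUs on 4 different batches

# Synchronous: wait for all GPUs, average their gradients, apply a single update
# (behaves like one step with a 4x larger batch).
w_sync = 1.0 - lr * np.mean(gpu_grads)

# Asynchronous: each GPU applies its own gradient as soon as it is ready, so the
# shared weight is updated 4 times and later gradients may be based on stale weights.
w_async = 1.0
for g in gpu_grads:
    w_async -= lr * g

print(w_sync, w_async)             # 0.975 vs. 0.9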

This example implements synchronous data parallelism on the CIFAR-10 dataset. Load the dependencies and the cifar10 class from TensorFlow Models, which downloads and preprocesses the CIFAR-10 data.

Set the batch size to 128, the maximum number of steps to 1,000,000 (training can be stopped at any point, since the model is saved periodically), and the number of GPUs to 4.

Define the loss function tower_loss. cifar10.distorted_inputs produces data-augmented images and labels, and cifar10.inference builds the convolutional network; each GPU builds its own copy of the network with identical structure and shared model parameters. Given the network output and the labels, cifar10.loss computes the loss (which is stored in a collection); tf.get_collection('losses', scope) retrieves the losses of the current GPU (scope restricts the range), and tf.add_n sums them into total_loss, which is returned as the function result.

Define average_gradients, which merges the gradients computed on different GPUs. The input tower_grads is a two-level list of gradients: the outer list iterates over GPUs, the inner list over the gradients of the different Variables computed on that GPU. The innermost elements are (gradient, variable) tuples, so tower_grads has the form [[(grad0_gpu0,var0_gpu0),(grad1_gpu0,var1_gpu0),...],[(grad0_gpu1,var0_gpu1),(grad1_gpu1,var1_gpu1),...],...]. Create the list average_grads to hold the gradients averaged across GPUs. zip(*tower_grads) transposes the two-level list into the form [[(grad0_gpu0,var0_gpu0),(grad0_gpu1,var0_gpu1),...],[(grad1_gpu0,var1_gpu0),(grad1_gpu1,var1_gpu1),...],...], so that each element grad_and_vars groups the copies of the same Variable's gradient computed on the different GPUs. Loop over these groups and average each one: a gradient is an N-dimensional tensor and is averaged element-wise. tf.expand_dims adds a redundant dimension 0 to each gradient, which is collected in the list grads; tf.concat joins them along dimension 0, and tf.reduce_mean averages over dimension 0 (i.e. over the GPUs) while keeping all other dimensions. The averaged gradient is paired with its Variable to restore the original (gradient, variable) tuple format and appended to average_grads. After all gradients have been averaged, return average_grads. The transposition step is illustrated by the short sketch below.
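As a toy illustration (plain Python with made-up placeholder strings matching the names above, not the book's code), this is the regrouping that zip(*tower_grads) performs before the averaging:

# Each inner list holds one GPU's (gradient, variable) pairs.
tower_grads = [
    [('grad0_gpu0', 'var0'), ('grad1_gpu0', 'var1')],   # gradients from GPU 0
    [('grad0_gpu1', 'var0'), ('grad1_gpu1', 'var1')],   # gradients from GPU 1
]
for grad_and_vars in zip(*tower_grads):
    print(grad_and_vars)
# (('grad0_gpu0', 'var0'), ('grad0_gpu1', 'var0'))  -> average grad0 across GPUs, keep var0
# (('grad1_gpu0', 'var1'), ('grad1_gpu1', 'var1'))  -> average grad1 across GPUs, keep var1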

Define the training function. Set the default compute device to the CPU. global_step records the global training step; compute the number of batches per epoch and, from it, decay_steps, the number of steps between learning-rate decays. tf.train.exponential_decay creates a learning rate that decays with the training step: the first argument is the initial learning rate, the second the global step, the third the number of steps per decay, the fourth the decay factor; with staircase set to True the decay is stepwise rather than continuous. Use GradientDescent as the optimization algorithm, passing in this step-decaying learning rate. A small sketch of the staircase schedule follows.
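With staircase=True the schedule is a step function: the rate drops by the decay factor once every decay_steps steps. A minimal standalone sketch, assuming example constants (the real values live in cifar10.py):

# Assumed constants for illustration only; cifar10.py defines the real ones.
INITIAL_LEARNING_RATE = 0.1
LEARNING_RATE_DECAY_FACTOR = 0.1
decay_steps = 1000

def staircase_lr(global_step):
    # lr = initial_lr * decay_factor ** floor(global_step / decay_steps)
    return INITIAL_LEARNING_RATE * LEARNING_RATE_DECAY_FACTOR ** (global_step // decay_steps)

print(staircase_lr(0), staircase_lr(999), staircase_lr(1000), staircase_lr(2500))
# -> 0.1 0.1 0.01 0.001 (up to float rounding)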

Define the list tower_grads to store the per-GPU gradient results. Create a loop whose iteration count equals the number of GPUs; inside it, tf.device selects which GPU to use and tf.name_scope defines the namespace.

Each GPU obtains its loss with tower_loss. tf.get_variable_scope().reuse_variables() enables parameter reuse, so all GPUs share one model with exactly the same parameters. opt.compute_gradients(loss) computes the gradients for this single GPU, which are appended to the gradient list tower_grads. average_gradients then computes the average gradients, and opt.apply_gradients updates the model parameters.

Create the model saver, and set allow_soft_placement to True in the Session config: some operations can only run on the CPU, and without soft placement they could not be placed. Initialize all parameters, and call tf.train.start_queue_runners() to start producing the data-augmented training samples, so that training is not blocked waiting for samples to be generated.

The training loop runs for at most max_steps iterations. Each step runs the gradient-update operation apply_gradient_op (one training step) and the loss operation, using time.time() to record the elapsed time. Every 10 steps, print the current batch loss, the number of samples trained per second, and the time spent per batch. Every 1000 steps, use the Saver to save the whole model to a checkpoint file.

cifar10.maybe_download_and_extract() downloads the full CIFAR-10 dataset, and train() starts training.

The loss drops from about 4.x at the start to 0.07 by around step 700,000. The average time per batch is 0.021 s, roughly 6000 training samples per second, about 4 times the throughput of a single GPU (each step processes batch_size × num_gpus = 512 samples in about 0.085 s, and sec_per_batch divides that step time by the 4 GPUs).

import os.path
import re
import time
import numpy as np
import tensorflow as tf
import cifar10

batch_size = 128
# train_dir = '/tmp/cifar10_train'
max_steps = 1000000
num_gpus = 4
# log_device_placement = False


def tower_loss(scope):
    """Calculate the total loss on a single tower running the CIFAR model.

    Args:
      scope: unique prefix string identifying the CIFAR tower, e.g. 'tower_0'

    Returns:
      Tensor of shape [] containing the total loss for a batch of data
    """
    # Get images and labels for CIFAR-10.
    images, labels = cifar10.distorted_inputs()
    # Build inference Graph.
    logits = cifar10.inference(images)
    # Build the portion of the Graph calculating the losses. Note that we will
    # assemble the total_loss using a custom function below.
    _ = cifar10.loss(logits, labels)
    # Assemble all of the losses for the current tower only.
    losses = tf.get_collection('losses', scope)
    # Calculate the total loss for the current tower.
    total_loss = tf.add_n(losses, name='total_loss')
    # Compute the moving average of all individual losses and the total loss.
    # loss_averages = tf.train.ExponentialMovingAverage(0.9, name='avg')
    # loss_averages_op = loss_averages.apply(losses + [total_loss])
    # Attach a scalar summary to all individual losses and the total loss; do the
    # same for the averaged version of the losses.
    # for l in losses + [total_loss]:
    #     # Remove 'tower_[0-9]/' from the name in case this is a multi-GPU training
    #     # session. This helps the clarity of presentation on tensorboard.
    #     loss_name = re.sub('%s_[0-9]*/' % cifar10.TOWER_NAME, '', l.op.name)
    #     # Name each loss as '(raw)' and name the moving average version of the loss
    #     # as the original loss name.
    #     tf.scalar_summary(loss_name + ' (raw)', l)
    #     tf.scalar_summary(loss_name, loss_averages.average(l))
    # with tf.control_dependencies([loss_averages_op]):
    #     total_loss = tf.identity(total_loss)
    return total_loss


def average_gradients(tower_grads):
    """Calculate the average gradient for each shared variable across all towers.

    Note that this function provides a synchronization point across all towers.

    Args:
      tower_grads: List of lists of (gradient, variable) tuples. The outer list
        is over individual gradients. The inner list is over the gradient
        calculation for each tower.

    Returns:
      List of pairs of (gradient, variable) where the gradient has been averaged
      across all towers.
    """
    average_grads = []
    for grad_and_vars in zip(*tower_grads):
        # Note that each grad_and_vars looks like the following:
        #   ((grad0_gpu0, var0_gpu0), ... , (grad0_gpuN, var0_gpuN))
        grads = []
        for g, _ in grad_and_vars:
            # Add 0 dimension to the gradients to represent the tower.
            expanded_g = tf.expand_dims(g, 0)
            # Append on a 'tower' dimension which we will average over below.
            grads.append(expanded_g)
        # Average over the 'tower' dimension.
        grad = tf.concat(grads, 0)
        grad = tf.reduce_mean(grad, 0)
        # Keep in mind that the Variables are redundant because they are shared
        # across towers. So we will just return the first tower's pointer to
        # the Variable.
        v = grad_and_vars[0][1]
        grad_and_var = (grad, v)
        average_grads.append(grad_and_var)
    return average_grads


def train():
    """Train CIFAR-10 for a number of steps."""
    with tf.Graph().as_default(), tf.device('/cpu:0'):
        # Create a variable to count the number of train() calls. This equals the
        # number of batches processed * num_gpus.
        global_step = tf.get_variable(
            'global_step', [],
            initializer=tf.constant_initializer(0), trainable=False)
        # Calculate the learning rate schedule.
        num_batches_per_epoch = (cifar10.NUM_EXAMPLES_PER_EPOCH_FOR_TRAIN /
                                 batch_size)
        decay_steps = int(num_batches_per_epoch * cifar10.NUM_EPOCHS_PER_DECAY)
        # Decay the learning rate exponentially based on the number of steps.
        lr = tf.train.exponential_decay(cifar10.INITIAL_LEARNING_RATE,
                                        global_step,
                                        decay_steps,
                                        cifar10.LEARNING_RATE_DECAY_FACTOR,
                                        staircase=True)
        # Create an optimizer that performs gradient descent.
        opt = tf.train.GradientDescentOptimizer(lr)
        # Calculate the gradients for each model tower.
        tower_grads = []
        for i in range(num_gpus):
            with tf.device('/gpu:%d' % i):
                with tf.name_scope('%s_%d' % (cifar10.TOWER_NAME, i)) as scope:
                    # Calculate the loss for one tower of the CIFAR model. This function
                    # constructs the entire CIFAR model but shares the variables across
                    # all towers.
                    loss = tower_loss(scope)
                    # Reuse variables for the next tower.
                    tf.get_variable_scope().reuse_variables()
                    # Retain the summaries from the final tower.
                    # summaries = tf.get_collection(tf.GraphKeys.SUMMARIES, scope)
                    # Calculate the gradients for the batch of data on this CIFAR tower.
                    grads = opt.compute_gradients(loss)
                    # Keep track of the gradients across all towers.
                    tower_grads.append(grads)
        # We must calculate the mean of each gradient. Note that this is the
        # synchronization point across all towers.
        grads = average_gradients(tower_grads)
        # Add a summary to track the learning rate.
        # summaries.append(tf.scalar_summary('learning_rate', lr))
        # Add histograms for gradients.
        # for grad, var in grads:
        #     if grad is not None:
        #         summaries.append(
        #             tf.histogram_summary(var.op.name + '/gradients', grad))
        # Apply the gradients to adjust the shared variables.
        apply_gradient_op = opt.apply_gradients(grads, global_step=global_step)
        # Add histograms for trainable variables.
        # for var in tf.trainable_variables():
        #     summaries.append(tf.histogram_summary(var.op.name, var))
        # Track the moving averages of all trainable variables.
        # variable_averages = tf.train.ExponentialMovingAverage(
        #     cifar10.MOVING_AVERAGE_DECAY, global_step)
        # variables_averages_op = variable_averages.apply(tf.trainable_variables())
        # Group all updates into a single train op.
        # train_op = tf.group(apply_gradient_op, variables_averages_op)
        # Create a saver.
        saver = tf.train.Saver(tf.all_variables())
        # Build the summary operation from the last tower summaries.
        # summary_op = tf.merge_summary(summaries)
        # Build an initialization operation to run below.
        init = tf.global_variables_initializer()
        # Start running operations on the Graph. allow_soft_placement must be set to
        # True to build towers on GPU, as some of the ops do not have GPU
        # implementations.
        sess = tf.Session(config=tf.ConfigProto(allow_soft_placement=True))
        sess.run(init)
        # Start the queue runners.
        tf.train.start_queue_runners(sess=sess)
        # summary_writer = tf.train.SummaryWriter(train_dir, sess.graph)
        for step in range(max_steps):
            start_time = time.time()
            _, loss_value = sess.run([apply_gradient_op, loss])
            duration = time.time() - start_time
            assert not np.isnan(loss_value), 'Model diverged with loss = NaN'
            if step % 10 == 0:
                num_examples_per_step = batch_size * num_gpus
                examples_per_sec = num_examples_per_step / duration
                sec_per_batch = duration / num_gpus
                format_str = ('step %d, loss = %.2f (%.1f examples/sec; %.3f '
                              'sec/batch)')
                print(format_str % (step, loss_value,
                                    examples_per_sec, sec_per_batch))
            # if step % 100 == 0:
            #     summary_str = sess.run(summary_op)
            #     summary_writer.add_summary(summary_str, step)
            # Save the model checkpoint periodically.
            if step % 1000 == 0 or (step + 1) == max_steps:
                # checkpoint_path = os.path.join(train_dir, 'model.ckpt')
                saver.save(sess, '/tmp/cifar10_train/model.ckpt', global_step=step)


cifar10.maybe_download_and_extract()
# if tf.gfile.Exists(train_dir):
#     tf.gfile.DeleteRecursively(train_dir)
# tf.gfile.MakeDirs(train_dir)
train()

References:
《TensorFlow实战》

Paid consulting is welcome (150 yuan per hour); my WeChat: qingxingfengzi
