Piecewise constant decay

Piecewise constant decay assigns a different constant learning rate to each of a set of pre-defined training-step intervals: the rate starts large and shrinks from one interval to the next. The interval boundaries should be chosen according to the amount of training data; in general, the larger the dataset, the finer the intervals should be. TensorFlow provides tf.train.piecewise_constant, which implements piecewise constant decay of the learning rate.
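
A minimal call sketch (TF 1.x); the boundaries and rate values below are only illustrative:

    import tensorflow as tf

    global_step = tf.Variable(0, trainable=False)
    # Use 0.1 for steps [0, 100000), 0.05 for [100000, 200000), 0.01 afterwards.
    boundaries = [100000, 200000]
    values = [0.1, 0.05, 0.01]
    learning_rate = tf.train.piecewise_constant(global_step, boundaries, values)
    optimizer = tf.train.GradientDescentOptimizer(learning_rate)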

Exponential decay

Exponential decay is one of the most commonly used schedules: the learning rate decays exponentially with the current training step. In TensorFlow it is implemented by tf.train.exponential_decay().

- decayed_learning_rate = learning_rate * decay_rate ^ (global_step / decay_steps)

Exponential decay gives TensorFlow a very flexible way to set the learning rate. It resolves the usual trade-off between large and small rates: start with a relatively large learning rate to quickly reach a reasonably good set of parameters, then shrink the rate as the number of iterations grows, so the parameters settle close to the optimum without needing many extra iterations. The decayed rate follows the formula above: decayed_learning_rate is the rate used at the current step, learning_rate is the initial rate, and decay_rate is the decay factor, so the learning rate keeps dropping as global_step increases.

The signature is:

    tf.train.exponential_decay(learning_rate, global_step, decay_steps, decay_rate, staircase=False, name=None)

- learning_rate: a scalar float32 or float64 tensor, or a Python number; the initial learning rate.
- global_step: a scalar int32 or int64 tensor, or a Python number; the current training step used in the decay computation, must not be negative.
- decay_steps: a scalar int32 or int64 tensor, or a Python number; must be positive, controls how many steps one decay period spans.
- decay_rate: a scalar float32 or float64 tensor, or a Python number; the decay rate.
- staircase: Boolean, defaults to False. With False the decayed learning rate changes continuously; with True the division global_step / decay_steps is truncated to an integer, so the rate drops in discrete steps at multiples of decay_steps.
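
A minimal call sketch (TF 1.x); the 0.1 / 1000 / 0.96 numbers are only illustrative, and `loss` stands for whatever training loss the graph defines:

    import tensorflow as tf

    global_step = tf.Variable(0, trainable=False)
    learning_rate = tf.train.exponential_decay(
        learning_rate=0.1,        # initial learning rate
        global_step=global_step,
        decay_steps=1000,         # one decay period = 1000 steps
        decay_rate=0.96,
        staircase=True)           # drop the rate only at multiples of decay_steps
    # Passing global_step to minimize() makes the optimizer increment it on every
    # update, so the decayed learning rate follows training progress automatically.
    train_op = tf.train.GradientDescentOptimizer(learning_rate).minimize(
        loss, global_step=global_step)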

Natural exponential decay

Natural exponential decay is a special case of exponential decay: the learning rate again decays exponentially with the training step, but with base e. TensorFlow implements it as tf.train.natural_exp_decay().
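
With staircase=False the decayed rate is essentially learning_rate * exp(-decay_rate * global_step / decay_steps); a call sketch with illustrative numbers, reusing the global_step variable from the earlier sketch:

    learning_rate = tf.train.natural_exp_decay(
        learning_rate=0.1,
        global_step=global_step,
        decay_steps=1000,
        decay_rate=0.5,
        staircase=False)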

Polynomial decay

Polynomial decay works as follows: you define an initial learning rate and a minimum (end) learning rate, and the schedule lowers the rate from the initial value to the minimum according to the configured polynomial. You can also choose what happens once the minimum is reached: either keep using the minimum rate from then on, or raise the rate again to some value and decay back down to the minimum, repeating that cycle. TensorFlow implements it as tf.train.polynomial_decay(); the update rule is:

    global_step = min(global_step, decay_steps)
    decayed_learning_rate = (learning_rate - end_learning_rate) *
                            (1 - global_step / decay_steps) ^ (power) +
                            end_learning_rate
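
A call sketch with illustrative numbers, again reusing global_step; cycle=True enables the "drop, rise again, drop again" behaviour described above, while cycle=False keeps the rate at end_learning_rate once it is reached:

    learning_rate = tf.train.polynomial_decay(
        learning_rate=0.1,
        global_step=global_step,
        decay_steps=10000,
        end_learning_rate=0.0001,
        power=1.0,            # power=1.0 makes the decay linear
        cycle=False)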

Cosine decay

Cosine decay follows a cosine-shaped schedule: the learning rate curve has roughly the shape of a cosine as it falls from the initial rate. The TensorFlow implementation is tf.train.cosine_decay().
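
A call sketch with illustrative numbers (TF 1.x), reusing global_step; alpha sets the floor of the schedule as a fraction of the initial rate:

    learning_rate = tf.train.cosine_decay(
        learning_rate=0.1,
        global_step=global_step,
        decay_steps=10000,
        alpha=0.0)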

Refined variants of cosine decay include:
linear cosine decay, implemented by tf.train.linear_cosine_decay()
noisy linear cosine decay, implemented by tf.train.noisy_linear_cosine_decay()

Inverse time decay

Inverse time decay means that one quantity varies in inverse proportion to another; applied to neural network training, the learning rate falls off roughly in inverse proportion to the number of training steps.

TensorFlow implements inverse time decay as tf.train.inverse_time_decay().
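
With staircase=False the decayed rate is learning_rate / (1 + decay_rate * global_step / decay_steps); a call sketch with illustrative numbers, reusing global_step:

    learning_rate = tf.train.inverse_time_decay(
        learning_rate=0.1,
        global_step=global_step,
        decay_steps=1000,
        decay_rate=0.5,
        staircase=False)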

Smoothing the loss curve with a moving average during training

- Plain-Python implementation (the averaging classes need only the standard library; the train/validate helpers below additionally assume PyTorch and project-specific config/utils)

    # The classes below smooth per-iteration losses with a fixed-size sliding window.
    # `config`, `utils`, `plot_line` and `model` are project-specific objects from the
    # original code base and are not defined in this excerpt.
    import os
    import random
    from collections import OrderedDict

    from torch.autograd import Variable


    def print_loss(config, title, loss_dict, epoch, iters, current_iter, need_plot=False):
        data_str = ''
        for k, v in loss_dict.items():
            if data_str != '':
                data_str += ', '
            data_str += '{}: {:.10f}'.format(k, v)

            if need_plot and config.vis is not None:
                plot_line(config, title, k, (epoch - 1) * iters + current_iter, v)

        # step is the progress rate over the whole dataset (split by batch size)
        print('[{}] [{}] Epoch [{}/{}], Iter [{}/{}]'.format(
            title, config.experiment_name, epoch, config.epochs, current_iter, iters))
        print(' {}'.format(data_str))


    class AverageWithinWindow():
        """Incrementally maintained mean over the last `win_size` values."""

        def __init__(self, win_size):
            self.win_size = win_size
            self.cache = []
            self.average = 0
            self.count = 0

        def update(self, v):
            if self.count < self.win_size:
                # Window not yet full: running mean over everything seen so far.
                self.cache.append(v)
                self.count += 1
                self.average = (self.average * (self.count - 1) + v) / self.count
            else:
                # Window full: replace the oldest value and adjust the mean in O(1).
                idx = self.count % self.win_size
                self.average += (v - self.cache[idx]) / self.win_size
                self.cache[idx] = v
                self.count += 1


    class DictAccumulator():
        """Accumulate a dict of scalars, as global averages or window averages."""

        def __init__(self, win_size=None):
            self.accumulator = OrderedDict()
            self.total_num = 0
            self.win_size = win_size

        def update(self, d):
            self.total_num += 1
            for k, v in d.items():
                if not self.win_size:
                    self.accumulator[k] = v + self.accumulator.get(k, 0)
                else:
                    self.accumulator.setdefault(k, AverageWithinWindow(self.win_size)).update(v)

        def get_average(self):
            average = OrderedDict()
            for k, v in self.accumulator.items():
                if not self.win_size:
                    average[k] = v * 1.0 / self.total_num
                else:
                    average[k] = v.average
            return average


    def train(epoch, train_loader, model):
        loss_accumulator = utils.DictAccumulator(config.loss_average_win_size)
        grad_accumulator = utils.DictAccumulator(config.loss_average_win_size)
        score_accumulator = utils.DictAccumulator(config.loss_average_win_size)
        iters = len(train_loader)

        for i, (inputs, targets) in enumerate(train_loader):
            inputs = inputs.cuda()
            print(inputs.shape)
            targets = targets.cuda()
            inputs = Variable(inputs)
            targets = Variable(targets)

            net_outputs, loss, grad, lr_dict, score = model.fit(
                inputs, targets, update=True, epoch=epoch,
                cur_iter=i + 1, iter_one_epoch=iters)
            loss_accumulator.update(loss)
            grad_accumulator.update(grad)
            score_accumulator.update(score)

            # Log the window-averaged (smoothed) values once per window.
            if (i + 1) % config.loss_average_win_size == 0:
                need_plot = True
                if hasattr(config, 'plot_loss_start_iter'):
                    need_plot = (i + 1 + (epoch - 1) * iters >= config.plot_loss_start_iter)
                elif hasattr(config, 'plot_loss_start_epoch'):
                    need_plot = (epoch >= config.plot_loss_start_epoch)

                utils.print_loss(config, "train_loss", loss_accumulator.get_average(), epoch=epoch, iters=iters, current_iter=i + 1, need_plot=need_plot)
                utils.print_loss(config, "grad", grad_accumulator.get_average(), epoch=epoch, iters=iters, current_iter=i + 1, need_plot=need_plot)
                utils.print_loss(config, "learning rate", lr_dict, epoch=epoch, iters=iters, current_iter=i + 1, need_plot=need_plot)
                utils.print_loss(config, "train_score", score_accumulator.get_average(), epoch=epoch, iters=iters, current_iter=i + 1, need_plot=need_plot)

        if epoch % config.save_train_hr_interval_epoch == 0:
            k = random.randint(0, net_outputs['output'].size(0) - 1)
            for name, out in net_outputs.items():
                utils.save_tensor(out.data[k], os.path.join(config.TRAIN_OUT_FOLDER, 'epoch_%d_k_%d_%s.png' % (epoch, k, name)))


    def validate(valid_loader, model):
        loss_accumulator = utils.DictAccumulator()
        score_accumulator = utils.DictAccumulator()

        # loss of the whole validation dataset
        for i, (inputs, targets) in enumerate(valid_loader):
            inputs = inputs.cuda()
            targets = targets.cuda()

            inputs = Variable(inputs, volatile=True)
            targets = Variable(targets)

            loss, score = model.fit(inputs, targets, update=False)

            loss_accumulator.update(loss)
            score_accumulator.update(score)

        return loss_accumulator.get_average(), score_accumulator.get_average()
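
A quick sketch of how the window average behaves on its own, assuming the classes above are in scope (win_size=3, made-up values):

    acc = DictAccumulator(win_size=3)
    for v in [1.0, 2.0, 3.0, 4.0]:
        acc.update({'loss': v})
    # Mean of the last 3 values (2.0, 3.0, 4.0):
    print(acc.get_average())    # OrderedDict([('loss', 3.0)])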

- PyTorch-based (adapted from maskrcnn-benchmark)

    # Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved.
    import datetime
    import logging
    import time
    from collections import defaultdict
    from collections import deque

    import torch

    from .comm import is_main_process


    class SmoothedValue(object):
        """Track a series of values and provide access to smoothed values over a
        window or the global series average.
        """

        def __init__(self, window_size=20):
            self.deque = deque(maxlen=window_size)
            self.series = []
            self.total = 0.0
            self.count = 0

        def update(self, value):
            self.deque.append(value)
            self.series.append(value)
            self.count += 1
            self.total += value

        @property
        def median(self):
            d = torch.tensor(list(self.deque))
            return d.median().item()

        @property
        def avg(self):
            d = torch.tensor(list(self.deque))
            return d.mean().item()

        @property
        def global_avg(self):
            return self.total / self.count


    class MetricLogger(object):
        def __init__(self, delimiter="\t"):
            self.meters = defaultdict(SmoothedValue)
            self.delimiter = delimiter

        def update(self, **kwargs):
            for k, v in kwargs.items():
                if isinstance(v, torch.Tensor):
                    v = v.item()
                assert isinstance(v, (float, int))
                self.meters[k].update(v)

        def __getattr__(self, attr):
            # Lets e.g. `meters.time` / `meters.loss` reach the underlying SmoothedValue.
            if attr in self.meters:
                return self.meters[attr]
            raise AttributeError("'{}' object has no attribute '{}'".format(
                type(self).__name__, attr))

        def __str__(self):
            loss_str = []
            for name, meter in self.meters.items():
                loss_str.append(
                    "{}: {:.4f} ({:.4f})".format(name, meter.median, meter.global_avg)
                )
            return self.delimiter.join(loss_str)


    class TensorboardLogger(MetricLogger):
        def __init__(self,
                     log_dir='logs',
                     exp_name='maskrcnn-benchmark',
                     start_iter=0,
                     delimiter='\t'):
            super(TensorboardLogger, self).__init__(delimiter)
            self.iteration = start_iter
            self.writer = self._get_tensorboard_writer(log_dir, exp_name)

        @staticmethod
        def _get_tensorboard_writer(log_dir, exp_name):
            try:
                from tensorboardX import SummaryWriter
            except ImportError:
                raise ImportError(
                    'To use tensorboard please install tensorboardX '
                    '[ pip install tensorflow tensorboardX ].'
                )

            if is_main_process():
                timestamp = datetime.datetime.fromtimestamp(time.time()).strftime('%Y%m%d-%H:%M')
                tb_logger = SummaryWriter('{}/{}-{}'.format(log_dir, exp_name, timestamp))
                return tb_logger
            else:
                return None

        def update(self, **kwargs):
            super(TensorboardLogger, self).update(**kwargs)
            if self.writer:
                for k, v in kwargs.items():
                    if isinstance(v, torch.Tensor):
                        v = v.item()
                    assert isinstance(v, (float, int))
                    self.writer.add_scalar(k, v, self.iteration)
                self.iteration += 1


    def do_train(
        model,
        data_loader,
        optimizer,
        scheduler,
        checkpointer,
        device,
        checkpoint_period,
        arguments,
        tb_log_dir,
        tb_exp_name,
        use_tensorboard=False
    ):
        logger = logging.getLogger("maskrcnn_benchmark.trainer")
        logger.info("Start training")

        meters = TensorboardLogger(log_dir=tb_log_dir,
                                   exp_name=tb_exp_name,
                                   start_iter=arguments['iteration'],
                                   delimiter=" ") \
            if use_tensorboard else MetricLogger(delimiter=" ")

        max_iter = len(data_loader)
        start_iter = arguments["iteration"]
        model.train()
        start_training_time = time.time()
        end = time.time()
        for iteration, (images, targets, _) in enumerate(data_loader, start_iter):
            data_time = time.time() - end
            iteration = iteration + 1
            arguments["iteration"] = iteration

            scheduler.step()

            images = images.to(device)
            targets = [target.to(device) for target in targets]

            loss_dict = model(images, targets)

            losses = sum(loss for loss in loss_dict.values())

            # reduce losses over all GPUs for logging purposes
            # (reduce_loss_dict is defined alongside do_train in maskrcnn-benchmark's
            # engine/trainer.py and is omitted from this excerpt)
            loss_dict_reduced = reduce_loss_dict(loss_dict)
            losses_reduced = sum(loss for loss in loss_dict_reduced.values())
            meters.update(loss=losses_reduced, **loss_dict_reduced)

            optimizer.zero_grad()
            losses.backward()
            optimizer.step()

            batch_time = time.time() - end
            end = time.time()
            meters.update(time=batch_time, data=data_time)

            eta_seconds = meters.time.global_avg * (max_iter - iteration)
            eta_string = str(datetime.timedelta(seconds=int(eta_seconds)))

            if iteration % 20 == 0 or iteration == max_iter:
                logger.info(
                    meters.delimiter.join(
                        [
                            "eta: {eta}",
                            "iter: {iter}",
                            "{meters}",
                            "lr: {lr:.6f}",
                            "max mem: {memory:.0f}",
                        ]
                    ).format(
                        eta=eta_string,
                        iter=iteration,
                        meters=str(meters),
                        lr=optimizer.param_groups[0]["lr"],
                        memory=torch.cuda.max_memory_allocated() / 1024.0 / 1024.0,
                    )
                )
            if iteration % checkpoint_period == 0:
                checkpointer.save("model_{:07d}".format(iteration), **arguments)
            if iteration == max_iter:
                checkpointer.save("model_final", **arguments)

        total_training_time = time.time() - start_training_time
        total_time_str = str(datetime.timedelta(seconds=total_training_time))
        logger.info(
            "Total training time: {} ({:.4f} s / it)".format(
                total_time_str, total_training_time / (max_iter)
            )
        )
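
Outside the full trainer, the smoothing behaviour can be exercised on its own; a minimal sketch with made-up loss values (each print shows the window median followed by the global average, as __str__ formats them):

    meters = MetricLogger(delimiter="  ")
    for step in range(1, 101):
        meters.update(loss=1.0 / step)   # stand-in for a real training loss
        if step % 20 == 0:
            print("iter {}: {}".format(step, str(meters)))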

- PyTorch-based (a torchnet-style moving-average meter plus the training loop that uses it)

    import math
    import os.path as osp

    import torch
    import torch.nn.functional as F
    from tensorboardX import SummaryWriter
    from tqdm import tqdm

    from . import meter


    class MovingAverageValueMeter(meter.Meter):
        """Mean/std over the last `windowsize` values, maintained incrementally."""

        def __init__(self, windowsize):
            super(MovingAverageValueMeter, self).__init__()
            self.windowsize = windowsize
            self.valuequeue = torch.Tensor(windowsize)
            self.reset()

        def reset(self):
            self.sum = 0.0
            self.n = 0
            self.var = 0.0
            self.valuequeue.fill_(0)

        def add(self, value):
            # Overwrite the oldest slot and update the running sums in O(1).
            queueid = (self.n % self.windowsize)
            oldvalue = self.valuequeue[queueid]
            self.sum += value - oldvalue
            self.var += value * value - oldvalue * oldvalue
            self.valuequeue[queueid] = value
            self.n += 1

        def value(self):
            n = min(self.n, self.windowsize)
            mean = self.sum / max(1, n)
            std = math.sqrt(max((self.var - n * mean * mean) / max(1, n - 1), 0))
            return mean, std


    def main():
        # ..... (CONFIG, device, loader, model, optimizer, criterion and
        # poly_lr_scheduler are set up here in the original script)

        # TensorBoard Logger
        writer = SummaryWriter(CONFIG.LOG_DIR)
        loss_meter = MovingAverageValueMeter(20)

        model.train()
        model.module.scale.freeze_bn()

        for iteration in tqdm(
            range(1, CONFIG.ITER_MAX + 1),
            total=CONFIG.ITER_MAX,
            leave=False,
            dynamic_ncols=True,
        ):

            # Set a learning rate
            poly_lr_scheduler(
                optimizer=optimizer,
                init_lr=CONFIG.LR,
                iter=iteration - 1,
                lr_decay_iter=CONFIG.LR_DECAY,
                max_iter=CONFIG.ITER_MAX,
                power=CONFIG.POLY_POWER,
            )

            # Clear gradients (ready to accumulate)
            optimizer.zero_grad()

            iter_loss = 0
            for i in range(1, CONFIG.ITER_SIZE + 1):
                try:
                    images, labels = next(loader_iter)
                except (NameError, StopIteration):
                    # First pass (loader_iter not created yet) or loader exhausted.
                    loader_iter = iter(loader)
                    images, labels = next(loader_iter)

                images = images.to(device)
                labels = labels.to(device).unsqueeze(1).float()

                # Propagate forward
                logits = model(images)

                # Loss
                loss = 0
                for logit in logits:
                    # Resize labels for {100%, 75%, 50%, Max} logits
                    labels_ = F.interpolate(labels, logit.shape[2:], mode="nearest")
                    labels_ = labels_.squeeze(1).long()
                    # Compute crossentropy loss
                    loss += criterion(logit, labels_)

                # Backpropagate (just compute gradients wrt the loss)
                loss /= float(CONFIG.ITER_SIZE)
                loss.backward()

                iter_loss += float(loss)

            loss_meter.add(iter_loss)

            # Update weights with accumulated gradients
            optimizer.step()

            # TensorBoard
            if iteration % CONFIG.ITER_TB == 0:
                writer.add_scalar("train_loss", loss_meter.value()[0], iteration)
                for i, o in enumerate(optimizer.param_groups):
                    writer.add_scalar("train_lr_group{}".format(i), o["lr"], iteration)
                if False:  # This produces a large log file
                    for name, param in model.named_parameters():
                        name = name.replace(".", "/")
                        writer.add_histogram(name, param, iteration, bins="auto")
                        if param.requires_grad:
                            writer.add_histogram(
                                name + "/grad", param.grad, iteration, bins="auto"
                            )

            # Save a model
            if iteration % CONFIG.ITER_SAVE == 0:
                torch.save(
                    model.module.state_dict(),
                    osp.join(CONFIG.SAVE_DIR, "checkpoint_{}.pth".format(iteration)),
                )

            # Save a model (short term)
            if iteration % 100 == 0:
                torch.save(
                    model.module.state_dict(),
                    osp.join(CONFIG.SAVE_DIR, "checkpoint_current.pth"),
                )

        torch.save(
            model.module.state_dict(), osp.join(CONFIG.SAVE_DIR, "checkpoint_final.pth")
        )
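
The meter can also be driven standalone; a small sketch with made-up values (window size 3, so the statistics cover only the last three entries):

    meter = MovingAverageValueMeter(windowsize=3)
    for v in [1.0, 2.0, 3.0, 4.0]:
        meter.add(v)
    mean, std = meter.value()
    print(float(mean), float(std))    # 3.0 1.0 -- mean/std of (2.0, 3.0, 4.0)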
