• Background

    The model complexity of a neural network is determined mainly by the number of optimization parameters and the range over which those parameters may vary. The number of parameters can be adjusted by hand, while the range of parameter values can be constrained by regularization techniques. Starting from the number of optimization parameters, this post uses dropout as an example to briefly demonstrate how the dropout probability affects the complexity of a neural network model.

  • Algorithm Characteristics

    ①. During training, data points are dropped with a given probability; ②. During testing, all data points are kept.

  • Algorithm Derivation

    A data point \(x\) is transformed with probability \(p\) as follows,

    \[
    x' = \begin{cases}
    0 & \text{with probability $p$,} \\
    \dfrac{x}{1-p} & \text{otherwise,}
    \end{cases}
    \]

    That is, the data point \(x\) is set to zero with probability \(p\) and scaled up by a factor of \(1/(1-p)\) with probability \(1-p\). Then,

    \[
    \mathbf{E}[x'] = p\,\mathbf{E}[0] + (1-p)\,\mathbf{E}\!\left[\frac{x}{1-p}\right] = \mathbf{E}[x],
    \]

    so the transformation leaves the mean of the data point unchanged, i.e., it is unbiased.

    If the data point \(x\) serves as an input to some linear transformation, setting it to zero means it contributes nothing to that transformation. This is equivalent to disabling the data point together with its associated weight parameters, which reduces the number of effective optimization parameters and thus lowers the model complexity.
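
    As a quick sanity check of the unbiasedness claim, the sketch below (my own illustrative example, not part of the original derivation) applies the transformation to a large sample and compares the empirical means before and after dropout:

    import torch

    torch.manual_seed(0)
    p = 0.3                                       # illustrative zeroing probability
    x = torch.randn(1_000_000)                    # a large sample of inputs
    mask = (torch.rand_like(x) > p).float()       # keep each element with probability 1 - p
    x_drop = x * mask / (1 - p)                   # inverted-dropout scaling of the survivors
    print(x.mean().item(), x_drop.mean().item())  # the two empirical means nearly coincide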

  • Data, Model, and Loss Function

    The data are generated according to,

    \[
    \left\{\begin{aligned}
    x &= r + 2g + 3b \\
    y &= r^2 + 2g^2 + 3b^2 \\
    lv &= -3r - 4g - 5b
    \end{aligned}\right.
    \]

    The neural network is a single-hidden-layer MLP: the input layer takes $(r, g, b)$, the hidden layer uses the $\tanh$ activation, and the output layer produces $(x, y, lv)$ with no activation.
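
    For reference, an equivalent single-hidden-layer MLP built from PyTorch's stock modules might look like the sketch below; the hidden width of 300 matches the experiment in the code section, while the dropout probability shown is purely illustrative:

    import torch
    from torch import nn

    # hypothetical equivalent of the model described above, using built-in layers
    model = nn.Sequential(
        nn.Linear(3, 300),    # input (r, g, b) -> hidden layer
        nn.Tanh(),            # hidden activation
        nn.Dropout(p=0.1),    # dropout applied after the activation; p is illustrative
        nn.Linear(300, 3),    # hidden layer -> output (x, y, lv), no activation
    )
    print(model(torch.randn(4, 3)).shape)  # torch.Size([4, 3])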

    The loss function is,
    \[
    L = \sum_i \left[ \frac{1}{2}\left(\bar{x}^{(i)}-x^{(i)}\right)^2 + \frac{1}{2}\left(\bar{y}^{(i)}-y^{(i)}\right)^2 + \frac{1}{2}\left(\overline{lv}^{(i)}-lv^{(i)}\right)^2 \right]
    \]
    where $i$ indexes the data samples and $(\bar{x}, \bar{y}, \overline{lv})$ are the corresponding observed values.
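
    Summed over all samples and output components, this is simply half the total squared error; as a quick check (my addition, not from the original post), it agrees with PyTorch's built-in MSE loss under sum reduction:

    import torch
    from torch import nn

    Y  = torch.randn(4, 3)   # predictions (x, y, lv) for 4 samples
    Y_ = torch.randn(4, 3)   # corresponding observations
    custom  = torch.sum((Y - Y_) ** 2) / 2               # the loss defined above
    builtin = 0.5 * nn.MSELoss(reduction="sum")(Y, Y_)   # equivalent built-in form
    print(torch.allclose(custom, builtin))               # True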

  • Code Implementation

    Here the hidden layer is given 300 nodes so that the model starts out with relatively high complexity. The zeroing probability \(p\) is then raised step by step to reduce the model complexity, and the resulting change in generalization error is observed. The implementation is as follows,

    import numpy
    import torch
    from torch import nn
    from torch import optim
    from torch.utils import data
    from matplotlib import pyplot as plt


    # generate and wrap the data
    def xFunc(r, g, b):
        x = r + 2 * g + 3 * b
        return x

    def yFunc(r, g, b):
        y = r ** 2 + 2 * g ** 2 + 3 * b ** 2
        return y

    def lvFunc(r, g, b):
        lv = -3 * r - 4 * g - 5 * b
        return lv


    class GeneDataset(data.Dataset):

        def __init__(self, rRange=[-1, 1], gRange=[-1, 1], bRange=[-1, 1], num=100, transform=None,
                     target_transform=None):
            self.__rRange = rRange
            self.__gRange = gRange
            self.__bRange = bRange
            self.__num = num
            self.__transform = transform
            self.__target_transform = target_transform

            self.__X = self.__build_X()
            self.__Y_ = self.__build_Y_()

        def __build_X(self):
            rArr = numpy.random.uniform(*self.__rRange, (self.__num, 1))
            gArr = numpy.random.uniform(*self.__gRange, (self.__num, 1))
            bArr = numpy.random.uniform(*self.__bRange, (self.__num, 1))
            X = numpy.hstack((rArr, gArr, bArr))
            return X

        def __build_Y_(self):
            rArr = self.__X[:, 0:1]
            gArr = self.__X[:, 1:2]
            bArr = self.__X[:, 2:3]
            xArr = xFunc(rArr, gArr, bArr)
            yArr = yFunc(rArr, gArr, bArr)
            lvArr = lvFunc(rArr, gArr, bArr)
            Y_ = numpy.hstack((xArr, yArr, lvArr))
            return Y_

        def __len__(self):
            return self.__num

        def __getitem__(self, idx):
            x = self.__X[idx]
            y_ = self.__Y_[idx]
            if self.__transform:
                x = self.__transform(x)
            if self.__target_transform:
                y_ = self.__target_transform(y_)
            return x, y_


    # build the model
    class Linear(nn.Module):

        def __init__(self, dim_in, dim_out):
            super(Linear, self).__init__()

            self.__dim_in = dim_in
            self.__dim_out = dim_out
            self.weight = nn.Parameter(torch.randn((dim_in, dim_out)))
            self.bias = nn.Parameter(torch.randn((dim_out,)))

        def forward(self, X):
            X = torch.matmul(X, self.weight) + self.bias
            return X


    class Tanh(nn.Module):

        def __init__(self):
            super(Tanh, self).__init__()

        def forward(self, X):
            X = torch.tanh(X)
            return X


    class Dropout(nn.Module):

        def __init__(self, p):
            super(Dropout, self).__init__()

            assert 0 <= p <= 1
            self.__p = p                                               # zeroing probability

        def forward(self, X):
            if self.__p == 0:
                return X
            if self.__p == 1:
                return torch.zeros_like(X)
            mark = (torch.rand(X.shape) > self.__p).type(torch.float)
            X = X * mark / (1 - self.__p)                              # inverted-dropout scaling
            return X


    class MLP(nn.Module):

        def __init__(self, dim_hidden=50, p=0, is_training=True):
            super(MLP, self).__init__()

            self.__dim_hidden = dim_hidden
            self.__p = p
            self.training = is_training
            self.__dim_in = 3
            self.__dim_out = 3

            self.lin1 = Linear(self.__dim_in, self.__dim_hidden)
            self.tanh = Tanh()
            self.drop = Dropout(self.__p)
            self.lin2 = Linear(self.__dim_hidden, self.__dim_out)

        def forward(self, X):
            X = self.tanh(self.lin1(X))
            if self.training:                                          # dropout only during training
                X = self.drop(X)
            X = self.lin2(X)
            return X


    # build the loss function
    class MSE(nn.Module):

        def __init__(self):
            super(MSE, self).__init__()

        def forward(self, Y, Y_):
            loss = torch.sum((Y - Y_) ** 2) / 2
            return loss


    # training unit and testing unit
    def train_epoch(trainLoader, model, loss_fn, optimizer):
        model.train()
        loss = 0

        with torch.enable_grad():
            for X, Y_ in trainLoader:
                optimizer.zero_grad()
                Y = model(X)
                loss_tmp = loss_fn(Y, Y_)
                loss_tmp.backward()
                optimizer.step()

                loss += loss_tmp.item()
        return loss

    def test_epoch(testLoader, model, loss_fn):
        model.eval()
        loss = 0

        with torch.no_grad():
            for X, Y_ in testLoader:
                Y = model(X)
                loss_tmp = loss_fn(Y, Y_)
                loss += loss_tmp.item()
        return loss


    # run training and testing
    def train(trainLoader, testLoader, model, loss_fn, optimizer, epochs):
        minLoss = numpy.inf
        for epoch in range(epochs):
            trainLoss = train_epoch(trainLoader, model, loss_fn, optimizer) / len(trainLoader.dataset)
            testLoss = test_epoch(testLoader, model, loss_fn) / len(testLoader.dataset)
            if testLoss < minLoss:
                minLoss = testLoss
                torch.save(model.state_dict(), "./mlp.params")
            # if epoch % 100 == 0:
            #     print(f"epoch = {epoch:8}, trainLoss = {trainLoss:15.9f}, testLoss = {testLoss:15.9f}")
        return minLoss


    numpy.random.seed(0)
    torch.random.manual_seed(0)

    def search_dropout():
        trainData = GeneDataset(num=50, transform=torch.Tensor, target_transform=torch.Tensor)
        trainLoader = data.DataLoader(trainData, batch_size=50, shuffle=True)
        testData = GeneDataset(num=1000, transform=torch.Tensor, target_transform=torch.Tensor)
        testLoader = data.DataLoader(testData, batch_size=1000, shuffle=False)

        # pre-train a high-complexity model (300 hidden nodes, near-zero dropout)
        dim_hidden1 = 300
        p = 0.005
        model = MLP(dim_hidden1, p)
        loss_fn = MSE()
        optimizer = optim.Adam(model.parameters(), lr=0.003)
        train(trainLoader, testLoader, model, loss_fn, optimizer, 100000)

        # sweep the zeroing probability p and record the best test loss for each value
        pRange = numpy.linspace(0, 1, 101)
        lossList = list()
        for idx, p in enumerate(pRange):
            model = MLP(dim_hidden1, p)
            loss_fn = MSE()
            optimizer = optim.Adam(model.parameters(), lr=0.003)
            model.load_state_dict(torch.load("./mlp.params"))
            loss = train(trainLoader, testLoader, model, loss_fn, optimizer, 100000)
            lossList.append(loss)
            print(f"p = {p:10f}, loss = {loss:15.9f}")

        minIdx = numpy.argmin(lossList)
        pBest = pRange[minIdx]
        lossBest = lossList[minIdx]

        # plot testing error versus p and mark the optimum
        fig = plt.figure(figsize=(5, 4))
        ax1 = fig.add_subplot(1, 1, 1)
        ax1.plot(pRange, lossList, ".--", lw=1, markersize=5, label="testing error", zorder=1)
        ax1.scatter(pBest, lossBest, marker="*", s=30, c="red", label="optimal", zorder=2)
        ax1.set(xlabel="$p$", ylabel="error", title="optimal dropout probability = {:.5f}".format(pBest))
        ax1.legend()
        fig.tight_layout()
        fig.savefig("search_p.png", dpi=100)
        # plt.show()


    if __name__ == "__main__":
        search_dropout()
  • Results

    From the resulting plot (search_p.png), the generalization error first decreases and then increases as the zeroing probability \(p\) is raised, which roughly corresponds to the model moving from overfitting to underfitting as its complexity is reduced.

  • Usage Recommendations

    ①. Since dropout is meant to disable an entire node, it is usually applied to the node's final output (i.e., after the activation function);

    ②. Dropout is suited to the fully connected layers of a neural network (see the sketch below).
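
    When using PyTorch's built-in nn.Dropout instead of the hand-rolled layer above, the train/test distinction from the algorithm characteristics is handled by the module's mode; a minimal sketch with illustrative values (my addition, not from the original code):

    import torch
    from torch import nn

    layer = nn.Dropout(p=0.5)
    x = torch.ones(8)

    layer.train()    # training mode: elements zeroed with probability p, survivors scaled by 1/(1-p)
    print(layer(x))
    layer.eval()     # evaluation mode: dropout becomes a no-op and all elements are kept
    print(layer(x))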

  • References

    ①. Dive into Deep Learning (动手学深度学习), Mu Li et al.
