A person's ideals and aspirations are usually proportional to his abilities. — Johnson

  I have been working with the PyTorch deep learning framework for a while and want to build something real with it, but that requires understanding some of the framework's fundamentals. This post uses PyTorch to implement a ResNet that classifies the CIFAR-10 dataset. CIFAR-10 contains 60,000 32×32 color images, i.e. images with three RGB channels, split into 10 classes of 6,000 images each; the classes include airplane, bird, cat, dog, and so on.

  Note: if you directly use ResNet18, ResNet34, etc. from torchvision's models, you will run into the problem that the final feature map becomes too small, because CIFAR-10 images are only 32×32; you therefore need to design the ResNet architecture yourself. With other datasets, such as ImageNet with its 224×224 images, this problem does not arise. A quick check of this claim is sketched below.
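  A minimal sketch of why this happens, run against the stock torchvision ResNet18 (shapes as comments): the stem alone, a stride-2 7×7 convolution followed by a stride-2 max pool, shrinks 32×32 down to 8×8, and the four residual stages halve that three more times. Depending on the torchvision version, the fixed 7×7 average pool that follows then either fails outright on the 1×1 map (older releases) or an adaptive pool silently operates on almost nothing (newer releases).

import torch
import torchvision.models as models

net = models.resnet18()
x = torch.randn(1, 3, 32, 32)  # one CIFAR-10-sized image
with torch.no_grad():
    x = net.maxpool(net.relu(net.bn1(net.conv1(x))))    # stem: 32x32 -> 8x8
    x = net.layer4(net.layer3(net.layer2(net.layer1(x))))
    print(x.shape)  # torch.Size([1, 512, 1, 1]) -- almost nothing left to pool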

1. Runtime environment:

  •  Python 3.6.8
  •  Windows 10
  •  GTX 1060
  •  CUDA 9.0 + cuDNN 7.4 + VS2017
  •  torch 1.0.1
  •  visdom 0.1.8.8

2. Steps for the CIFAR-10 experiment:

  • Load and preprocess the CIFAR-10 dataset with torchvision
  • Define the network
  • Define the loss function and the optimizer
  • Train the network: compute the loss, clear the gradients, backpropagate, and update the network parameters (sketched right after this list)
  • Test the network
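  The fourth step is the canonical PyTorch training step; a minimal sketch with illustrative names (the full version appears in the code below):

output = model(inputs)           # forward pass
loss = criterion(output, label)  # compute the loss
optimizer.zero_grad()            # clear accumulated gradients
loss.backward()                  # backpropagate
optimizer.step()                 # update the network parameters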

3. Code

import torch
import torch.nn as nn
import torch.optim as optim
import visdom
from torch.utils.data import DataLoader
from torchvision import datasets, transforms

vis = visdom.Visdom()
batch_size = 100
lr = 0.001
momentum = 0.9
epochs = 100

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

def conv3x3(in_channels, out_channels, stride=1):
    # 3x3 convolution with padding, the basic building block of ResNet
    return nn.Conv2d(in_channels, out_channels, kernel_size=3,
                     stride=stride, padding=1, bias=False)

class ResidualBlock(nn.Module):
    def __init__(self, in_channels, out_channels, stride=1, shortcut=None):
        super(ResidualBlock, self).__init__()
        self.conv1 = conv3x3(in_channels, out_channels, stride)
        self.bn1 = nn.BatchNorm2d(out_channels)
        self.relu = nn.ReLU(inplace=True)

        self.conv2 = conv3x3(out_channels, out_channels)
        self.bn2 = nn.BatchNorm2d(out_channels)
        self.shortcut = shortcut

    def forward(self, x):
        residual = x
        out = self.conv1(x)
        out = self.bn1(out)
        out = self.relu(out)
        out = self.conv2(out)
        out = self.bn2(out)
        if self.shortcut is not None:
            # project the input so its shape matches the block output
            residual = self.shortcut(x)
        out += residual
        out = self.relu(out)
        return out

class ResNet(nn.Module):
    def __init__(self, block, layers, num_classes=10):
        super(ResNet, self).__init__()
        self.in_channels = 16
        self.conv = conv3x3(3, 16)
        self.bn = nn.BatchNorm2d(16)
        self.relu = nn.ReLU(inplace=True)

        # three stages: 32x32x16 -> 16x16x32 -> 8x8x64
        self.layer1 = self.make_layer(block, 16, layers[0])
        self.layer2 = self.make_layer(block, 32, layers[1], 2)
        self.layer3 = self.make_layer(block, 64, layers[2], 2)
        self.avg_pool = nn.AvgPool2d(8)  # 8x8 feature map -> 1x1
        self.fc = nn.Linear(64, num_classes)

    def make_layer(self, block, out_channels, blocks, stride=1):
        # a projection shortcut is needed whenever the spatial size or the
        # channel count changes across the first block of a stage
        shortcut = None
        if (stride != 1) or (self.in_channels != out_channels):
            shortcut = nn.Sequential(
                nn.Conv2d(self.in_channels, out_channels, kernel_size=3,
                          stride=stride, padding=1),
                nn.BatchNorm2d(out_channels))

        layers = [block(self.in_channels, out_channels, stride, shortcut)]
        for i in range(1, blocks):
            layers.append(block(out_channels, out_channels))
        self.in_channels = out_channels
        return nn.Sequential(*layers)

    def forward(self, x):
        x = self.conv(x)
        x = self.bn(x)
        x = self.relu(x)
        x = self.layer1(x)
        x = self.layer2(x)
        x = self.layer3(x)
        x = self.avg_pool(x)
        x = x.view(x.size(0), -1)  # flatten to (batch, 64)
        x = self.fc(x)
        return x

# normalize the dataset (these are the ImageNet channel statistics)
data_tf = transforms.Compose(
    [transforms.ToTensor(),
     transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])])

train_dataset = datasets.CIFAR10(root='./datacifar/',
                                 train=True,
                                 transform=data_tf,
                                 download=False)

test_dataset = datasets.CIFAR10(root='./datacifar/',
                                train=False,
                                transform=data_tf,
                                download=False)

print("Training set size:", len(train_dataset), len(train_dataset[0][0]),
      len(train_dataset[0][0][0]), len(train_dataset[0][0][0][0]))  # 50000 3 32 32
print("Test set size:", len(test_dataset), len(test_dataset[0][0]),
      len(test_dataset[0][0][0]), len(test_dataset[0][0][0][0]))    # 10000 3 32 32

# build the data iterators
train_loader = DataLoader(dataset=train_dataset,
                          batch_size=batch_size,
                          shuffle=True)
test_loader = DataLoader(dataset=test_dataset,
                         batch_size=batch_size,
                         shuffle=False)
'''
print(train_loader.dataset)
---->
Dataset CIFAR10
    Number of datapoints: 50000
    Split: train
    Root Location: ./datacifar/
    Transforms (if any): Compose(
                             ToTensor()
                             Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225])
                         )
    Target Transforms (if any): None
'''

model = ResNet(ResidualBlock, [3, 3, 3], 10).to(device)

criterion = nn.CrossEntropyLoss()  # define the loss function
optimizer = optim.SGD(model.parameters(), lr=lr, momentum=momentum)
print(model)

if __name__ == '__main__':
    global_step = 0
    for epoch in range(epochs):
        model.train()
        for i, (inputs, label) in enumerate(train_loader):
            inputs = inputs.to(device)
            label = label.to(device)
            output = model(inputs)

            loss = criterion(output, label)
            optimizer.zero_grad()  # clear gradients left over from the previous step
            loss.backward()        # backpropagate
            optimizer.step()       # update the network parameters
            if i % 100 == 99:
                print('epoch:%d | batch: %d | loss:%.03f' % (epoch + 1, i + 1, loss.item()))
                vis.line(X=[global_step], Y=[loss.item()], win='loss',
                         opts=dict(title='train loss'), update='append')
                global_step = global_step + 1

        # evaluate on the test set
        model.eval()  # switch the model to evaluation mode
        correct = 0
        total = 0
        with torch.no_grad():  # no gradients needed during evaluation
            for images, labels in test_loader:
                images, labels = images.to(device), labels.to(device)
                output_test = model(images)
                _, predicted = torch.max(output_test, 1)  # index of the largest logit = predicted class
                total += labels.size(0)
                correct += (predicted == labels).sum()
        print("correct:", correct.item())
        print("Test acc: {0}".format(correct.item() / len(test_dataset)))

4. Results

Training loss (final): epoch: 100 | batch: 500 | loss: 0.294

Test accuracy (final): epoch: 100, test acc: 0.8363

5. Network structure (the output of print(model))

ResNet(
  (conv): Conv2d(3, 16, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
  (bn): BatchNorm2d(16, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
  (relu): ReLU(inplace)
  (layer1): Sequential(
    (0): ResidualBlock(
      (conv1): Conv2d(16, 16, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
      (bn1): BatchNorm2d(16, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU(inplace)
      (conv2): Conv2d(16, 16, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
      (bn2): BatchNorm2d(16, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    )
    (1): ResidualBlock(
      (conv1): Conv2d(16, 16, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
      (bn1): BatchNorm2d(16, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU(inplace)
      (conv2): Conv2d(16, 16, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
      (bn2): BatchNorm2d(16, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    )
    (2): ResidualBlock(
      (conv1): Conv2d(16, 16, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
      (bn1): BatchNorm2d(16, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU(inplace)
      (conv2): Conv2d(16, 16, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
      (bn2): BatchNorm2d(16, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    )
  )
  (layer2): Sequential(
    (0): ResidualBlock(
      (conv1): Conv2d(16, 32, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
      (bn1): BatchNorm2d(32, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU(inplace)
      (conv2): Conv2d(32, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
      (bn2): BatchNorm2d(32, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (shortcut): Sequential(
        (0): Conv2d(16, 32, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1))
        (1): BatchNorm2d(32, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      )
    )
    (1): ResidualBlock(
      (conv1): Conv2d(32, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
      (bn1): BatchNorm2d(32, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU(inplace)
      (conv2): Conv2d(32, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
      (bn2): BatchNorm2d(32, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    )
    (2): ResidualBlock(
      (conv1): Conv2d(32, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
      (bn1): BatchNorm2d(32, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU(inplace)
      (conv2): Conv2d(32, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
      (bn2): BatchNorm2d(32, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    )
  )
  (layer3): Sequential(
    (0): ResidualBlock(
      (conv1): Conv2d(32, 64, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
      (bn1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU(inplace)
      (conv2): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
      (bn2): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (shortcut): Sequential(
        (0): Conv2d(32, 64, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1))
        (1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      )
    )
    (1): ResidualBlock(
      (conv1): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
      (bn1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU(inplace)
      (conv2): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
      (bn2): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    )
    (2): ResidualBlock(
      (conv1): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
      (bn1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU(inplace)
      (conv2): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
      (bn2): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    )
  )
  (avg_pool): AvgPool2d(kernel_size=8, stride=8, padding=0)
  (fc): Linear(in_features=64, out_features=10, bias=True)
)
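A quick sanity check on this printout, a sketch reusing the imports and the ResNet/ResidualBlock classes from section 3: counting parameters confirms the model is ResNet-20-sized (the 3×3 projection shortcuts add a little over the classic 1×1 version), and a dummy forward pass confirms the output shape.

model = ResNet(ResidualBlock, [3, 3, 3], 10)
print(sum(p.numel() for p in model.parameters()))  # 293050, roughly 0.29M parameters
print(model(torch.randn(2, 3, 32, 32)).shape)      # torch.Size([2, 10])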
