Deep Learning on CIFAR10: Training LeNet, AlexNet and VGG19 with PyTorch, Implementation and Comparison (Part 3)
Copyright notice: this is an original post by the author. Reposting is welcome; please credit the source. Contact: 460356155@qq.com
VGGNet performed strongly in the 2014 ImageNet image classification competition. Its structure is shown in the figure below:

[Figure: VGGNet network architecture]
As before, the structure is adjusted slightly for the 32*32 CIFAR10 images: the final max pooling layer is removed (see the network definition code below). VGG19 is used here, with batch normalization (BN) added:
import torch.nn as nn


def make_vgg_block(in_channel, out_channel, convs, pool=True):
    """
    Build one VGG block: `convs` size-preserving 3x3 convolutions,
    each followed by BN and ReLU, optionally ending in 2x2 max pooling.
    """
    net = []

    # 3x3 convolution with padding 1 keeps the feature-map size unchanged
    net.append(nn.Conv2d(in_channel, out_channel, kernel_size=3, padding=1))
    net.append(nn.BatchNorm2d(out_channel))
    net.append(nn.ReLU(inplace=True))

    for i in range(convs - 1):
        # further size-preserving convolutions
        net.append(nn.Conv2d(out_channel, out_channel, kernel_size=3, padding=1))
        net.append(nn.BatchNorm2d(out_channel))
        net.append(nn.ReLU(inplace=True))

    if pool:
        # 2x2 max pooling: the feature map shrinks to w/2 * h/2
        net.append(nn.MaxPool2d(2))

    return nn.Sequential(*net)


# Network definition
class VGG19Net(nn.Module):
    def __init__(self):
        super(VGG19Net, self).__init__()
        net = []

        # input 32*32, output 16*16
        net.append(make_vgg_block(3, 64, 2))
        # output 8*8
        net.append(make_vgg_block(64, 128, 2))
        # output 4*4
        net.append(make_vgg_block(128, 256, 4))
        # output 2*2
        net.append(make_vgg_block(256, 512, 4))
        # no pooling layer, output stays 2*2
        net.append(make_vgg_block(512, 512, 4, False))

        self.cnn = nn.Sequential(*net)

        self.fc = nn.Sequential(
            # 512 feature maps, each 2*2
            nn.Linear(512 * 2 * 2, 256),
            nn.ReLU(),
            nn.Linear(256, 256),
            nn.ReLU(),
            nn.Linear(256, 10)
        )

    def forward(self, x):
        x = self.cnn(x)
        # x.size()[0] is the batch size
        x = x.view(x.size()[0], -1)
        x = self.fc(x)
        return x
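As a quick sanity check (not part of the original post), a random batch can be pushed through the model to confirm that the feature map entering the classifier flattens to 512*2*2 = 2048 and that the output has one logit per CIFAR10 class:

import torch

# hypothetical check: a batch of four random 32*32 RGB images
model = VGG19Net()
x = torch.randn(4, 3, 32, 32)
y = model(x)
print(y.size())  # expected: torch.Size([4, 10])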
The rest of the code is the same as in Deep Learning on CIFAR10: Training LeNet, AlexNet and VGG19 with PyTorch, Implementation and Comparison (Part 1). The output of a run is shown below:
Files already downloaded and verified
VGG19Net(
(cnn): Sequential(
(0): Sequential(
(0): Conv2d(3, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(2): ReLU(inplace)
(3): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(4): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(5): ReLU(inplace)
(6): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
)
(1): Sequential(
(0): Conv2d(64, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(2): ReLU(inplace)
(3): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(4): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(5): ReLU(inplace)
(6): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
)
(2): Sequential(
(0): Conv2d(128, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(2): ReLU(inplace)
(3): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(4): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(5): ReLU(inplace)
(6): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(7): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(8): ReLU(inplace)
(9): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(10): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(11): ReLU(inplace)
(12): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
)
(3): Sequential(
(0): Conv2d(256, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(1): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(2): ReLU(inplace)
(3): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(4): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(5): ReLU(inplace)
(6): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(7): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(8): ReLU(inplace)
(9): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(10): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(11): ReLU(inplace)
(12): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
)
(4): Sequential(
(0): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(1): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(2): ReLU(inplace)
(3): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(4): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(5): ReLU(inplace)
(6): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(7): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(8): ReLU(inplace)
(9): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(10): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(11): ReLU(inplace)
)
)
(fc): Sequential(
(0): Linear(in_features=2048, out_features=256, bias=True)
(1): ReLU()
(2): Linear(in_features=256, out_features=256, bias=True)
(3): ReLU()
(4): Linear(in_features=256, out_features=10, bias=True)
)
)
Train Epoch: 1 [6400/50000 (13%)] Loss: 1.991934 Acc: 22.000000
Train Epoch: 1 [12800/50000 (26%)] Loss: 1.851721 Acc: 27.000000
Train Epoch: 1 [19200/50000 (38%)] Loss: 1.765295 Acc: 31.000000
Train Epoch: 1 [25600/50000 (51%)] Loss: 1.708027 Acc: 33.000000
Train Epoch: 1 [32000/50000 (64%)] Loss: 1.652181 Acc: 36.000000
Train Epoch: 1 [38400/50000 (77%)] Loss: 1.597727 Acc: 38.000000
Train Epoch: 1 [44800/50000 (90%)] Loss: 1.552660 Acc: 41.000000
one epoch spend: 0:01:08.269581
EPOCH:1, ACC:55.08
Train Epoch: 2 [6400/50000 (13%)] Loss: 1.139670 Acc: 60.000000
Train Epoch: 2 [12800/50000 (26%)] Loss: 1.099960 Acc: 61.000000
Train Epoch: 2 [19200/50000 (38%)] Loss: 1.078881 Acc: 62.000000
Train Epoch: 2 [25600/50000 (51%)] Loss: 1.054403 Acc: 63.000000
Train Epoch: 2 [32000/50000 (64%)] Loss: 1.031371 Acc: 64.000000
Train Epoch: 2 [38400/50000 (77%)] Loss: 1.011668 Acc: 64.000000
Train Epoch: 2 [44800/50000 (90%)] Loss: 0.995242 Acc: 65.000000
one epoch spend: 0:01:08.220392
EPOCH:2, ACC:71.01
Train Epoch: 3 [6400/50000 (13%)] Loss: 0.823265 Acc: 71.000000
Train Epoch: 3 [12800/50000 (26%)] Loss: 0.799878 Acc: 73.000000
Train Epoch: 3 [19200/50000 (38%)] Loss: 0.791265 Acc: 73.000000
Train Epoch: 3 [25600/50000 (51%)] Loss: 0.790027 Acc: 73.000000
Train Epoch: 3 [32000/50000 (64%)] Loss: 0.777267 Acc: 73.000000
Train Epoch: 3 [38400/50000 (77%)] Loss: 0.771953 Acc: 74.000000
Train Epoch: 3 [44800/50000 (90%)] Loss: 0.766835 Acc: 74.000000
one epoch spend: 0:01:08.485721
EPOCH:3, ACC:69.48
Train Epoch: 4 [6400/50000 (13%)] Loss: 0.640418 Acc: 78.000000
Train Epoch: 4 [12800/50000 (26%)] Loss: 0.637256 Acc: 78.000000
Train Epoch: 4 [19200/50000 (38%)] Loss: 0.631245 Acc: 79.000000
Train Epoch: 4 [25600/50000 (51%)] Loss: 0.629215 Acc: 79.000000
Train Epoch: 4 [32000/50000 (64%)] Loss: 0.625925 Acc: 79.000000
Train Epoch: 4 [38400/50000 (77%)] Loss: 0.618307 Acc: 79.000000
Train Epoch: 4 [44800/50000 (90%)] Loss: 0.617456 Acc: 79.000000
one epoch spend: 0:01:08.289673
EPOCH:4, ACC:77.2
Train Epoch: 5 [6400/50000 (13%)] Loss: 0.537330 Acc: 82.000000
Train Epoch: 5 [12800/50000 (26%)] Loss: 0.529751 Acc: 82.000000
Train Epoch: 5 [19200/50000 (38%)] Loss: 0.529389 Acc: 82.000000
Train Epoch: 5 [25600/50000 (51%)] Loss: 0.528106 Acc: 82.000000
Train Epoch: 5 [32000/50000 (64%)] Loss: 0.526467 Acc: 82.000000
Train Epoch: 5 [38400/50000 (77%)] Loss: 0.525133 Acc: 82.000000
Train Epoch: 5 [44800/50000 (90%)] Loss: 0.521847 Acc: 82.000000
one epoch spend: 0:01:08.272084
EPOCH:5, ACC:78.26
Train Epoch: 6 [6400/50000 (13%)] Loss: 0.435377 Acc: 85.000000
Train Epoch: 6 [12800/50000 (26%)] Loss: 0.431456 Acc: 85.000000
Train Epoch: 6 [19200/50000 (38%)] Loss: 0.443582 Acc: 85.000000
Train Epoch: 6 [25600/50000 (51%)] Loss: 0.442819 Acc: 85.000000
Train Epoch: 6 [32000/50000 (64%)] Loss: 0.443313 Acc: 85.000000
Train Epoch: 6 [38400/50000 (77%)] Loss: 0.442025 Acc: 85.000000
Train Epoch: 6 [44800/50000 (90%)] Loss: 0.441722 Acc: 85.000000
one epoch spend: 0:01:10.725170
EPOCH:6, ACC:80.91
Train Epoch: 7 [6400/50000 (13%)] Loss: 0.350214 Acc: 88.000000
Train Epoch: 7 [12800/50000 (26%)] Loss: 0.351490 Acc: 88.000000
Train Epoch: 7 [19200/50000 (38%)] Loss: 0.361328 Acc: 88.000000
Train Epoch: 7 [25600/50000 (51%)] Loss: 0.362231 Acc: 87.000000
Train Epoch: 7 [32000/50000 (64%)] Loss: 0.364318 Acc: 87.000000
Train Epoch: 7 [38400/50000 (77%)] Loss: 0.367137 Acc: 87.000000
Train Epoch: 7 [44800/50000 (90%)] Loss: 0.375220 Acc: 87.000000
one epoch spend: 0:01:09.395538
EPOCH:7, ACC:80.55
Train Epoch: 8 [6400/50000 (13%)] Loss: 0.297754 Acc: 90.000000
Train Epoch: 8 [12800/50000 (26%)] Loss: 0.303383 Acc: 89.000000
Train Epoch: 8 [19200/50000 (38%)] Loss: 0.305170 Acc: 89.000000
Train Epoch: 8 [25600/50000 (51%)] Loss: 0.311823 Acc: 89.000000
Train Epoch: 8 [32000/50000 (64%)] Loss: 0.309851 Acc: 89.000000
Train Epoch: 8 [38400/50000 (77%)] Loss: 0.310422 Acc: 89.000000
Train Epoch: 8 [44800/50000 (90%)] Loss: 0.312672 Acc: 89.000000
one epoch spend: 0:01:08.041167
EPOCH:8, ACC:80.54
Train Epoch: 9 [6400/50000 (13%)] Loss: 0.277638 Acc: 90.000000
Train Epoch: 9 [12800/50000 (26%)] Loss: 0.276622 Acc: 90.000000
Train Epoch: 9 [19200/50000 (38%)] Loss: 0.276465 Acc: 90.000000
Train Epoch: 9 [25600/50000 (51%)] Loss: 0.278001 Acc: 90.000000
Train Epoch: 9 [32000/50000 (64%)] Loss: 0.277109 Acc: 90.000000
Train Epoch: 9 [38400/50000 (77%)] Loss: 0.277029 Acc: 90.000000
Train Epoch: 9 [44800/50000 (90%)] Loss: 0.275243 Acc: 90.000000
one epoch spend: 0:01:08.143754
EPOCH:9, ACC:83.53
Train Epoch: 10 [6400/50000 (13%)] Loss: 0.205785 Acc: 92.000000
Train Epoch: 10 [12800/50000 (26%)] Loss: 0.210659 Acc: 92.000000
Train Epoch: 10 [19200/50000 (38%)] Loss: 0.214871 Acc: 92.000000
Train Epoch: 10 [25600/50000 (51%)] Loss: 0.218910 Acc: 92.000000
Train Epoch: 10 [32000/50000 (64%)] Loss: 0.220843 Acc: 92.000000
Train Epoch: 10 [38400/50000 (77%)] Loss: 0.220417 Acc: 92.000000
Train Epoch: 10 [44800/50000 (90%)] Loss: 0.221100 Acc: 92.000000
one epoch spend: 0:01:08.333929
EPOCH:10, ACC:79.01
Train Epoch: 11 [6400/50000 (13%)] Loss: 0.186917 Acc: 93.000000
Train Epoch: 11 [12800/50000 (26%)] Loss: 0.183512 Acc: 93.000000
Train Epoch: 11 [19200/50000 (38%)] Loss: 0.182561 Acc: 93.000000
Train Epoch: 11 [25600/50000 (51%)] Loss: 0.186446 Acc: 93.000000
Train Epoch: 11 [32000/50000 (64%)] Loss: 0.187314 Acc: 93.000000
Train Epoch: 11 [38400/50000 (77%)] Loss: 0.185967 Acc: 93.000000
Train Epoch: 11 [44800/50000 (90%)] Loss: 0.189130 Acc: 93.000000
one epoch spend: 0:01:10.476138
EPOCH:11, ACC:81.57
Train Epoch: 12 [6400/50000 (13%)] Loss: 0.136427 Acc: 95.000000
Train Epoch: 12 [12800/50000 (26%)] Loss: 0.147904 Acc: 95.000000
Train Epoch: 12 [19200/50000 (38%)] Loss: 0.154502 Acc: 94.000000
Train Epoch: 12 [25600/50000 (51%)] Loss: 0.155767 Acc: 94.000000
Train Epoch: 12 [32000/50000 (64%)] Loss: 0.158346 Acc: 94.000000
Train Epoch: 12 [38400/50000 (77%)] Loss: 0.159562 Acc: 94.000000
Train Epoch: 12 [44800/50000 (90%)] Loss: 0.159924 Acc: 94.000000
one epoch spend: 0:01:10.779635
EPOCH:12, ACC:84.38
Train Epoch: 13 [6400/50000 (13%)] Loss: 0.110026 Acc: 96.000000
Train Epoch: 13 [12800/50000 (26%)] Loss: 0.113738 Acc: 96.000000
Train Epoch: 13 [19200/50000 (38%)] Loss: 0.117731 Acc: 96.000000
Train Epoch: 13 [25600/50000 (51%)] Loss: 0.123653 Acc: 95.000000
Train Epoch: 13 [32000/50000 (64%)] Loss: 0.127138 Acc: 95.000000
Train Epoch: 13 [38400/50000 (77%)] Loss: 0.128938 Acc: 95.000000
Train Epoch: 13 [44800/50000 (90%)] Loss: 0.131382 Acc: 95.000000
one epoch spend: 0:01:09.020651
EPOCH:13, ACC:83.46
Train Epoch: 14 [6400/50000 (13%)] Loss: 0.122690 Acc: 96.000000
Train Epoch: 14 [12800/50000 (26%)] Loss: 0.114584 Acc: 96.000000
Train Epoch: 14 [19200/50000 (38%)] Loss: 0.122652 Acc: 96.000000
Train Epoch: 14 [25600/50000 (51%)] Loss: 0.123031 Acc: 95.000000
Train Epoch: 14 [32000/50000 (64%)] Loss: 0.123427 Acc: 95.000000
Train Epoch: 14 [38400/50000 (77%)] Loss: 0.123146 Acc: 95.000000
Train Epoch: 14 [44800/50000 (90%)] Loss: 0.124063 Acc: 95.000000
one epoch spend: 0:01:10.294790
EPOCH:14, ACC:82.27
Train Epoch: 15 [6400/50000 (13%)] Loss: 0.087797 Acc: 97.000000
Train Epoch: 15 [12800/50000 (26%)] Loss: 0.086152 Acc: 97.000000
Train Epoch: 15 [19200/50000 (38%)] Loss: 0.088446 Acc: 97.000000
Train Epoch: 15 [25600/50000 (51%)] Loss: 0.093510 Acc: 96.000000
Train Epoch: 15 [32000/50000 (64%)] Loss: 0.092870 Acc: 96.000000
Train Epoch: 15 [38400/50000 (77%)] Loss: 0.092416 Acc: 96.000000
Train Epoch: 15 [44800/50000 (90%)] Loss: 0.095187 Acc: 96.000000
one epoch spend: 0:01:10.375479
EPOCH:15, ACC:82.73
Train Epoch: 16 [6400/50000 (13%)] Loss: 0.066554 Acc: 97.000000
Train Epoch: 16 [12800/50000 (26%)] Loss: 0.079139 Acc: 97.000000
Train Epoch: 16 [19200/50000 (38%)] Loss: 0.078223 Acc: 97.000000
Train Epoch: 16 [25600/50000 (51%)] Loss: 0.076825 Acc: 97.000000
Train Epoch: 16 [32000/50000 (64%)] Loss: 0.079679 Acc: 97.000000
Train Epoch: 16 [38400/50000 (77%)] Loss: 0.081081 Acc: 97.000000
Train Epoch: 16 [44800/50000 (90%)] Loss: 0.081967 Acc: 97.000000
one epoch spend: 0:01:09.971818
EPOCH:16, ACC:85.45
Train Epoch: 17 [6400/50000 (13%)] Loss: 0.061477 Acc: 98.000000
Train Epoch: 17 [12800/50000 (26%)] Loss: 0.066804 Acc: 97.000000
Train Epoch: 17 [19200/50000 (38%)] Loss: 0.069621 Acc: 97.000000
Train Epoch: 17 [25600/50000 (51%)] Loss: 0.068841 Acc: 97.000000
Train Epoch: 17 [32000/50000 (64%)] Loss: 0.069220 Acc: 97.000000
Train Epoch: 17 [38400/50000 (77%)] Loss: 0.071493 Acc: 97.000000
Train Epoch: 17 [44800/50000 (90%)] Loss: 0.070973 Acc: 97.000000
one epoch spend: 0:01:10.599626
EPOCH:17, ACC:83.02
Train Epoch: 18 [6400/50000 (13%)] Loss: 0.095195 Acc: 96.000000
Train Epoch: 18 [12800/50000 (26%)] Loss: 0.081690 Acc: 97.000000
Train Epoch: 18 [19200/50000 (38%)] Loss: 0.076400 Acc: 97.000000
Train Epoch: 18 [25600/50000 (51%)] Loss: 0.073249 Acc: 97.000000
Train Epoch: 18 [32000/50000 (64%)] Loss: 0.072114 Acc: 97.000000
Train Epoch: 18 [38400/50000 (77%)] Loss: 0.073739 Acc: 97.000000
Train Epoch: 18 [44800/50000 (90%)] Loss: 0.073761 Acc: 97.000000
one epoch spend: 0:01:11.619880
EPOCH:18, ACC:83.67
Train Epoch: 19 [6400/50000 (13%)] Loss: 0.049970 Acc: 98.000000
Train Epoch: 19 [12800/50000 (26%)] Loss: 0.051812 Acc: 98.000000
Train Epoch: 19 [19200/50000 (38%)] Loss: 0.053814 Acc: 98.000000
Train Epoch: 19 [25600/50000 (51%)] Loss: 0.054168 Acc: 98.000000
Train Epoch: 19 [32000/50000 (64%)] Loss: 0.054138 Acc: 98.000000
Train Epoch: 19 [38400/50000 (77%)] Loss: 0.055356 Acc: 98.000000
Train Epoch: 19 [44800/50000 (90%)] Loss: 0.055334 Acc: 98.000000
one epoch spend: 0:01:10.397104
EPOCH:19, ACC:84.23
Train Epoch: 20 [6400/50000 (13%)] Loss: 0.059795 Acc: 98.000000
Train Epoch: 20 [12800/50000 (26%)] Loss: 0.059780 Acc: 98.000000
Train Epoch: 20 [19200/50000 (38%)] Loss: 0.060332 Acc: 98.000000
Train Epoch: 20 [25600/50000 (51%)] Loss: 0.057949 Acc: 98.000000
Train Epoch: 20 [32000/50000 (64%)] Loss: 0.056517 Acc: 98.000000
Train Epoch: 20 [38400/50000 (77%)] Loss: 0.055322 Acc: 98.000000
Train Epoch: 20 [44800/50000 (90%)] Loss: 0.053375 Acc: 98.000000
one epoch spend: 0:01:10.407573
EPOCH:20, ACC:84.51
CIFAR10 pytorch LeNet Train: EPOCH:20, BATCH_SZ:64, LR:0.01, ACC:85.45
train spend time: 0:23:45.010363
Process finished with exit code 0
The accuracy reaches 85%, versus 75% for AlexNet: an improvement of about 10 percentage points.
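Since the training code lives in Part 1, the following is only a minimal sketch of the kind of loop that produces logs in the format above. The batch size, learning rate and epoch count are taken from the summary line of the log; the choice of SGD with momentum, the normalization values and the data-loader settings are assumptions, not the original post's exact code:

import torch
import torch.nn as nn
import torch.optim as optim
import torchvision
import torchvision.transforms as transforms

# hyperparameters from the log's summary line
BATCH_SIZE = 64
LR = 0.01
EPOCHS = 20

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

# normalization values are an assumption; Part 1 has the actual transform
transform = transforms.Compose([
    transforms.ToTensor(),
    transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5)),
])

train_set = torchvision.datasets.CIFAR10(root='./data', train=True,
                                         download=True, transform=transform)
train_loader = torch.utils.data.DataLoader(train_set, batch_size=BATCH_SIZE,
                                           shuffle=True)
test_set = torchvision.datasets.CIFAR10(root='./data', train=False,
                                        download=True, transform=transform)
test_loader = torch.utils.data.DataLoader(test_set, batch_size=BATCH_SIZE,
                                          shuffle=False)

model = VGG19Net().to(device)
criterion = nn.CrossEntropyLoss()
# SGD with momentum is assumed; Part 1 defines the actual optimizer
optimizer = optim.SGD(model.parameters(), lr=LR, momentum=0.9)

for epoch in range(1, EPOCHS + 1):
    model.train()
    for data, target in train_loader:
        data, target = data.to(device), target.to(device)
        optimizer.zero_grad()
        loss = criterion(model(data), target)
        loss.backward()
        optimizer.step()

    # evaluate on the test set, matching the per-epoch "EPOCH:n, ACC:x" lines
    model.eval()
    correct = 0
    with torch.no_grad():
        for data, target in test_loader:
            data, target = data.to(device), target.to(device)
            pred = model(data).argmax(dim=1)
            correct += (pred == target).sum().item()
    print('EPOCH:{}, ACC:{}'.format(epoch, 100.0 * correct / len(test_set)))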