Copyright notice: This is an original article by the author. You are welcome to repost it, but please credit the source. Contact: 460356155@qq.com

VGGNet performed strongly in the 2014 ImageNet image-classification competition. The network structure is shown in the figure below:

As before, the network is slightly adapted for CIFAR10's 32*32 images: the final max-pooling layer is removed (with all five pooling stages a 32*32 input would shrink to 1*1; dropping the last one keeps a 2*2 feature map). See the network definition code below. VGG19 is used here, with batch normalization (BN) added:

import torch.nn as nn

'''
Build a VGG block.
Arguments: number of input channels, number of output channels,
number of convolution layers, and whether to end with max pooling.
'''
def make_vgg_block(in_channel, out_channel, convs, pool=True):
    net = []

    # Convolution that preserves the spatial size
    net.append(nn.Conv2d(in_channel, out_channel, kernel_size=3, padding=1))
    net.append(nn.BatchNorm2d(out_channel))
    net.append(nn.ReLU(inplace=True))

    for i in range(convs - 1):
        # Convolution that preserves the spatial size
        net.append(nn.Conv2d(out_channel, out_channel, kernel_size=3, padding=1))
        net.append(nn.BatchNorm2d(out_channel))
        net.append(nn.ReLU(inplace=True))

    if pool:
        # 2*2 max pooling: the feature map becomes w/2 * h/2
        net.append(nn.MaxPool2d(2))

    return nn.Sequential(*net)

# Define the network model
class VGG19Net(nn.Module):
    def __init__(self):
        super(VGG19Net, self).__init__()
        net = []
        # Input 32*32, output 16*16
        net.append(make_vgg_block(3, 64, 2))
        # Output 8*8
        net.append(make_vgg_block(64, 128, 2))
        # Output 4*4
        net.append(make_vgg_block(128, 256, 4))
        # Output 2*2
        net.append(make_vgg_block(256, 512, 4))
        # No pooling layer, output stays 2*2
        net.append(make_vgg_block(512, 512, 4, False))
        self.cnn = nn.Sequential(*net)

        self.fc = nn.Sequential(
            # 512 feature maps, each 2*2
            nn.Linear(512 * 2 * 2, 256),
            nn.ReLU(),
            nn.Linear(256, 256),
            nn.ReLU(),
            nn.Linear(256, 10)
        )

    def forward(self, x):
        x = self.cnn(x)
        # x.size()[0]: batch size
        x = x.view(x.size()[0], -1)
        x = self.fc(x)
        return x
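
As a quick sanity check (an addition to the original post, not part of it), a dummy batch can be pushed through the model to confirm the 512*2*2 feature map that the first Linear layer expects:

import torch

model = VGG19Net()
x = torch.randn(4, 3, 32, 32)   # a dummy batch of four CIFAR10-sized images
print(model.cnn(x).shape)       # torch.Size([4, 512, 2, 2])
print(model(x).shape)           # torch.Size([4, 10])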

The rest of the code (data loading, the training loop, and per-epoch testing) is the same as in 深度学习识别CIFAR10:pytorch训练LeNet、AlexNet、VGG19实现及比较(一); a minimal sketch is given below.
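
The sketch below is a hedged reconstruction, not the original code from part (一): the transform, the choice of SGD with momentum, and the print formatting are assumptions, while batch size 64, learning rate 0.01, and 20 epochs come from the summary line at the end of the log.

import datetime

import torch
import torch.nn as nn
import torch.optim as optim
import torchvision
import torchvision.transforms as transforms

# Hyperparameters taken from the summary line of the log below
BATCH_SZ, LR, EPOCHS = 64, 0.01, 20

# Normalization values are an assumption
transform = transforms.Compose([transforms.ToTensor(),
                                transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))])
train_set = torchvision.datasets.CIFAR10('./data', train=True, download=True, transform=transform)
train_loader = torch.utils.data.DataLoader(train_set, batch_size=BATCH_SZ, shuffle=True)
test_set = torchvision.datasets.CIFAR10('./data', train=False, download=True, transform=transform)
test_loader = torch.utils.data.DataLoader(test_set, batch_size=BATCH_SZ, shuffle=False)

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
model = VGG19Net().to(device)
criterion = nn.CrossEntropyLoss()
optimizer = optim.SGD(model.parameters(), lr=LR, momentum=0.9)  # optimizer choice assumed

def train_epoch(epoch):
    model.train()
    running_loss, correct, seen = 0.0, 0, 0
    for batch_idx, (data, target) in enumerate(train_loader, 1):
        data, target = data.to(device), target.to(device)
        optimizer.zero_grad()
        output = model(data)
        loss = criterion(output, target)
        loss.backward()
        optimizer.step()
        running_loss += loss.item()
        correct += (output.argmax(1) == target).sum().item()
        seen += target.size(0)
        if batch_idx % 100 == 0:  # every 6400 images, matching the log cadence
            print('Train Epoch: {} [{}/{} ({:.0f}%)]\tLoss: {:.6f}\tAcc: {:.6f}'.format(
                epoch, seen, len(train_set), 100. * seen / len(train_set),
                running_loss / batch_idx, 100. * correct / seen))

def test_acc():
    model.eval()
    correct = 0
    with torch.no_grad():
        for data, target in test_loader:
            data, target = data.to(device), target.to(device)
            correct += (model(data).argmax(1) == target).sum().item()
    return 100. * correct / len(test_set)

for epoch in range(1, EPOCHS + 1):
    start = datetime.datetime.now()
    train_epoch(epoch)
    print('one epoch spend: ', datetime.datetime.now() - start)
    print('EPOCH:{}, ACC:{}\n'.format(epoch, test_acc()))

Running the training produces the following output: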

Files already downloaded and verified
VGG19Net(
  (cnn): Sequential(
    (0): Sequential(
      (0): Conv2d(3, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
      (1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (2): ReLU(inplace)
      (3): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
      (4): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (5): ReLU(inplace)
      (6): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
    )
    (1): Sequential(
      (0): Conv2d(64, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
      (1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (2): ReLU(inplace)
      (3): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
      (4): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (5): ReLU(inplace)
      (6): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
    )
    (2): Sequential(
      (0): Conv2d(128, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
      (1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (2): ReLU(inplace)
      (3): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
      (4): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (5): ReLU(inplace)
      (6): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
      (7): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (8): ReLU(inplace)
      (9): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
      (10): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (11): ReLU(inplace)
      (12): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
    )
    (3): Sequential(
      (0): Conv2d(256, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
      (1): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (2): ReLU(inplace)
      (3): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
      (4): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (5): ReLU(inplace)
      (6): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
      (7): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (8): ReLU(inplace)
      (9): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
      (10): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (11): ReLU(inplace)
      (12): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
    )
    (4): Sequential(
      (0): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
      (1): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (2): ReLU(inplace)
      (3): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
      (4): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (5): ReLU(inplace)
      (6): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
      (7): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (8): ReLU(inplace)
      (9): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
      (10): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (11): ReLU(inplace)
    )
  )
  (fc): Sequential(
    (0): Linear(in_features=2048, out_features=256, bias=True)
    (1): ReLU()
    (2): Linear(in_features=256, out_features=256, bias=True)
    (3): ReLU()
    (4): Linear(in_features=256, out_features=10, bias=True)
  )
)
Train Epoch: 1 [6400/50000 (13%)]    Loss: 1.991934  Acc: 22.000000
Train Epoch: 1 [12800/50000 (26%)]    Loss: 1.851721  Acc: 27.000000
Train Epoch: 1 [19200/50000 (38%)]    Loss: 1.765295  Acc: 31.000000
Train Epoch: 1 [25600/50000 (51%)]    Loss: 1.708027  Acc: 33.000000
Train Epoch: 1 [32000/50000 (64%)]    Loss: 1.652181  Acc: 36.000000
Train Epoch: 1 [38400/50000 (77%)]    Loss: 1.597727  Acc: 38.000000
Train Epoch: 1 [44800/50000 (90%)]    Loss: 1.552660  Acc: 41.000000
one epoch spend:  0:01:08.269581
EPOCH:1, ACC:55.08

Train Epoch: 2 [6400/50000 (13%)]    Loss: 1.139670  Acc: 60.000000
Train Epoch: 2 [12800/50000 (26%)]    Loss: 1.099960  Acc: 61.000000
Train Epoch: 2 [19200/50000 (38%)]    Loss: 1.078881  Acc: 62.000000
Train Epoch: 2 [25600/50000 (51%)]    Loss: 1.054403  Acc: 63.000000
Train Epoch: 2 [32000/50000 (64%)]    Loss: 1.031371  Acc: 64.000000
Train Epoch: 2 [38400/50000 (77%)]    Loss: 1.011668  Acc: 64.000000
Train Epoch: 2 [44800/50000 (90%)]    Loss: 0.995242  Acc: 65.000000
one epoch spend:  0:01:08.220392
EPOCH:2, ACC:71.01

Train Epoch: 3 [6400/50000 (13%)]    Loss: 0.823265  Acc: 71.000000
Train Epoch: 3 [12800/50000 (26%)]    Loss: 0.799878  Acc: 73.000000
Train Epoch: 3 [19200/50000 (38%)]    Loss: 0.791265  Acc: 73.000000
Train Epoch: 3 [25600/50000 (51%)]    Loss: 0.790027  Acc: 73.000000
Train Epoch: 3 [32000/50000 (64%)]    Loss: 0.777267  Acc: 73.000000
Train Epoch: 3 [38400/50000 (77%)]    Loss: 0.771953  Acc: 74.000000
Train Epoch: 3 [44800/50000 (90%)]    Loss: 0.766835  Acc: 74.000000
one epoch spend:  0:01:08.485721
EPOCH:3, ACC:69.48

Train Epoch: 4 [6400/50000 (13%)]    Loss: 0.640418  Acc: 78.000000
Train Epoch: 4 [12800/50000 (26%)]    Loss: 0.637256  Acc: 78.000000
Train Epoch: 4 [19200/50000 (38%)]    Loss: 0.631245  Acc: 79.000000
Train Epoch: 4 [25600/50000 (51%)]    Loss: 0.629215  Acc: 79.000000
Train Epoch: 4 [32000/50000 (64%)]    Loss: 0.625925  Acc: 79.000000
Train Epoch: 4 [38400/50000 (77%)]    Loss: 0.618307  Acc: 79.000000
Train Epoch: 4 [44800/50000 (90%)]    Loss: 0.617456  Acc: 79.000000
one epoch spend:  0:01:08.289673
EPOCH:4, ACC:77.2

Train Epoch: 5 [6400/50000 (13%)]    Loss: 0.537330  Acc: 82.000000
Train Epoch: 5 [12800/50000 (26%)]    Loss: 0.529751  Acc: 82.000000
Train Epoch: 5 [19200/50000 (38%)]    Loss: 0.529389  Acc: 82.000000
Train Epoch: 5 [25600/50000 (51%)]    Loss: 0.528106  Acc: 82.000000
Train Epoch: 5 [32000/50000 (64%)]    Loss: 0.526467  Acc: 82.000000
Train Epoch: 5 [38400/50000 (77%)]    Loss: 0.525133  Acc: 82.000000
Train Epoch: 5 [44800/50000 (90%)]    Loss: 0.521847  Acc: 82.000000
one epoch spend:  0:01:08.272084
EPOCH:5, ACC:78.26

Train Epoch: 6 [6400/50000 (13%)]    Loss: 0.435377  Acc: 85.000000
Train Epoch: 6 [12800/50000 (26%)]    Loss: 0.431456  Acc: 85.000000
Train Epoch: 6 [19200/50000 (38%)]    Loss: 0.443582  Acc: 85.000000
Train Epoch: 6 [25600/50000 (51%)]    Loss: 0.442819  Acc: 85.000000
Train Epoch: 6 [32000/50000 (64%)]    Loss: 0.443313  Acc: 85.000000
Train Epoch: 6 [38400/50000 (77%)]    Loss: 0.442025  Acc: 85.000000
Train Epoch: 6 [44800/50000 (90%)]    Loss: 0.441722  Acc: 85.000000
one epoch spend:  0:01:10.725170
EPOCH:6, ACC:80.91

Train Epoch: 7 [6400/50000 (13%)]    Loss: 0.350214  Acc: 88.000000
Train Epoch: 7 [12800/50000 (26%)]    Loss: 0.351490  Acc: 88.000000
Train Epoch: 7 [19200/50000 (38%)]    Loss: 0.361328  Acc: 88.000000
Train Epoch: 7 [25600/50000 (51%)]    Loss: 0.362231  Acc: 87.000000
Train Epoch: 7 [32000/50000 (64%)]    Loss: 0.364318  Acc: 87.000000
Train Epoch: 7 [38400/50000 (77%)]    Loss: 0.367137  Acc: 87.000000
Train Epoch: 7 [44800/50000 (90%)]    Loss: 0.375220  Acc: 87.000000
one epoch spend:  0:01:09.395538
EPOCH:7, ACC:80.55

Train Epoch: 8 [6400/50000 (13%)]    Loss: 0.297754  Acc: 90.000000
Train Epoch: 8 [12800/50000 (26%)]    Loss: 0.303383  Acc: 89.000000
Train Epoch: 8 [19200/50000 (38%)]    Loss: 0.305170  Acc: 89.000000
Train Epoch: 8 [25600/50000 (51%)]    Loss: 0.311823  Acc: 89.000000
Train Epoch: 8 [32000/50000 (64%)]    Loss: 0.309851  Acc: 89.000000
Train Epoch: 8 [38400/50000 (77%)]    Loss: 0.310422  Acc: 89.000000
Train Epoch: 8 [44800/50000 (90%)]    Loss: 0.312672  Acc: 89.000000
one epoch spend:  0:01:08.041167
EPOCH:8, ACC:80.54

Train Epoch: 9 [6400/50000 (13%)]    Loss: 0.277638  Acc: 90.000000
Train Epoch: 9 [12800/50000 (26%)]    Loss: 0.276622  Acc: 90.000000
Train Epoch: 9 [19200/50000 (38%)]    Loss: 0.276465  Acc: 90.000000
Train Epoch: 9 [25600/50000 (51%)]    Loss: 0.278001  Acc: 90.000000
Train Epoch: 9 [32000/50000 (64%)]    Loss: 0.277109  Acc: 90.000000
Train Epoch: 9 [38400/50000 (77%)]    Loss: 0.277029  Acc: 90.000000
Train Epoch: 9 [44800/50000 (90%)]    Loss: 0.275243  Acc: 90.000000
one epoch spend:  0:01:08.143754
EPOCH:9, ACC:83.53

Train Epoch: 10 [6400/50000 (13%)]    Loss: 0.205785  Acc: 92.000000
Train Epoch: 10 [12800/50000 (26%)]    Loss: 0.210659  Acc: 92.000000
Train Epoch: 10 [19200/50000 (38%)]    Loss: 0.214871  Acc: 92.000000
Train Epoch: 10 [25600/50000 (51%)]    Loss: 0.218910  Acc: 92.000000
Train Epoch: 10 [32000/50000 (64%)]    Loss: 0.220843  Acc: 92.000000
Train Epoch: 10 [38400/50000 (77%)]    Loss: 0.220417  Acc: 92.000000
Train Epoch: 10 [44800/50000 (90%)]    Loss: 0.221100  Acc: 92.000000
one epoch spend:  0:01:08.333929
EPOCH:10, ACC:79.01

Train Epoch: 11 [6400/50000 (13%)]    Loss: 0.186917  Acc: 93.000000
Train Epoch: 11 [12800/50000 (26%)]    Loss: 0.183512  Acc: 93.000000
Train Epoch: 11 [19200/50000 (38%)]    Loss: 0.182561  Acc: 93.000000
Train Epoch: 11 [25600/50000 (51%)]    Loss: 0.186446  Acc: 93.000000
Train Epoch: 11 [32000/50000 (64%)]    Loss: 0.187314  Acc: 93.000000
Train Epoch: 11 [38400/50000 (77%)]    Loss: 0.185967  Acc: 93.000000
Train Epoch: 11 [44800/50000 (90%)]    Loss: 0.189130  Acc: 93.000000
one epoch spend:  0:01:10.476138
EPOCH:11, ACC:81.57

Train Epoch: 12 [6400/50000 (13%)]    Loss: 0.136427  Acc: 95.000000
Train Epoch: 12 [12800/50000 (26%)]    Loss: 0.147904  Acc: 95.000000
Train Epoch: 12 [19200/50000 (38%)]    Loss: 0.154502  Acc: 94.000000
Train Epoch: 12 [25600/50000 (51%)]    Loss: 0.155767  Acc: 94.000000
Train Epoch: 12 [32000/50000 (64%)]    Loss: 0.158346  Acc: 94.000000
Train Epoch: 12 [38400/50000 (77%)]    Loss: 0.159562  Acc: 94.000000
Train Epoch: 12 [44800/50000 (90%)]    Loss: 0.159924  Acc: 94.000000
one epoch spend:  0:01:10.779635
EPOCH:12, ACC:84.38

Train Epoch: 13 [6400/50000 (13%)]    Loss: 0.110026  Acc: 96.000000
Train Epoch: 13 [12800/50000 (26%)]    Loss: 0.113738  Acc: 96.000000
Train Epoch: 13 [19200/50000 (38%)]    Loss: 0.117731  Acc: 96.000000
Train Epoch: 13 [25600/50000 (51%)]    Loss: 0.123653  Acc: 95.000000
Train Epoch: 13 [32000/50000 (64%)]    Loss: 0.127138  Acc: 95.000000
Train Epoch: 13 [38400/50000 (77%)]    Loss: 0.128938  Acc: 95.000000
Train Epoch: 13 [44800/50000 (90%)]    Loss: 0.131382  Acc: 95.000000
one epoch spend:  0:01:09.020651
EPOCH:13, ACC:83.46

Train Epoch: 14 [6400/50000 (13%)]    Loss: 0.122690  Acc: 96.000000
Train Epoch: 14 [12800/50000 (26%)]    Loss: 0.114584  Acc: 96.000000
Train Epoch: 14 [19200/50000 (38%)]    Loss: 0.122652  Acc: 96.000000
Train Epoch: 14 [25600/50000 (51%)]    Loss: 0.123031  Acc: 95.000000
Train Epoch: 14 [32000/50000 (64%)]    Loss: 0.123427  Acc: 95.000000
Train Epoch: 14 [38400/50000 (77%)]    Loss: 0.123146  Acc: 95.000000
Train Epoch: 14 [44800/50000 (90%)]    Loss: 0.124063  Acc: 95.000000
one epoch spend:  0:01:10.294790
EPOCH:14, ACC:82.27

Train Epoch: 15 [6400/50000 (13%)]    Loss: 0.087797  Acc: 97.000000
Train Epoch: 15 [12800/50000 (26%)]    Loss: 0.086152  Acc: 97.000000
Train Epoch: 15 [19200/50000 (38%)]    Loss: 0.088446  Acc: 97.000000
Train Epoch: 15 [25600/50000 (51%)]    Loss: 0.093510  Acc: 96.000000
Train Epoch: 15 [32000/50000 (64%)]    Loss: 0.092870  Acc: 96.000000
Train Epoch: 15 [38400/50000 (77%)]    Loss: 0.092416  Acc: 96.000000
Train Epoch: 15 [44800/50000 (90%)]    Loss: 0.095187  Acc: 96.000000
one epoch spend:  0:01:10.375479
EPOCH:15, ACC:82.73

Train Epoch: 16 [6400/50000 (13%)]    Loss: 0.066554  Acc: 97.000000
Train Epoch: 16 [12800/50000 (26%)]    Loss: 0.079139  Acc: 97.000000
Train Epoch: 16 [19200/50000 (38%)]    Loss: 0.078223  Acc: 97.000000
Train Epoch: 16 [25600/50000 (51%)]    Loss: 0.076825  Acc: 97.000000
Train Epoch: 16 [32000/50000 (64%)]    Loss: 0.079679  Acc: 97.000000
Train Epoch: 16 [38400/50000 (77%)]    Loss: 0.081081  Acc: 97.000000
Train Epoch: 16 [44800/50000 (90%)]    Loss: 0.081967  Acc: 97.000000
one epoch spend:  0:01:09.971818
EPOCH:16, ACC:85.45

Train Epoch: 17 [6400/50000 (13%)]    Loss: 0.061477  Acc: 98.000000
Train Epoch: 17 [12800/50000 (26%)]    Loss: 0.066804  Acc: 97.000000
Train Epoch: 17 [19200/50000 (38%)]    Loss: 0.069621  Acc: 97.000000
Train Epoch: 17 [25600/50000 (51%)]    Loss: 0.068841  Acc: 97.000000
Train Epoch: 17 [32000/50000 (64%)]    Loss: 0.069220  Acc: 97.000000
Train Epoch: 17 [38400/50000 (77%)]    Loss: 0.071493  Acc: 97.000000
Train Epoch: 17 [44800/50000 (90%)]    Loss: 0.070973  Acc: 97.000000
one epoch spend:  0:01:10.599626
EPOCH:17, ACC:83.02

Train Epoch: 18 [6400/50000 (13%)]    Loss: 0.095195  Acc: 96.000000
Train Epoch: 18 [12800/50000 (26%)]    Loss: 0.081690  Acc: 97.000000
Train Epoch: 18 [19200/50000 (38%)]    Loss: 0.076400  Acc: 97.000000
Train Epoch: 18 [25600/50000 (51%)]    Loss: 0.073249  Acc: 97.000000
Train Epoch: 18 [32000/50000 (64%)]    Loss: 0.072114  Acc: 97.000000
Train Epoch: 18 [38400/50000 (77%)]    Loss: 0.073739  Acc: 97.000000
Train Epoch: 18 [44800/50000 (90%)]    Loss: 0.073761  Acc: 97.000000
one epoch spend:  0:01:11.619880
EPOCH:18, ACC:83.67

Train Epoch: 19 [6400/50000 (13%)]    Loss: 0.049970  Acc: 98.000000
Train Epoch: 19 [12800/50000 (26%)]    Loss: 0.051812  Acc: 98.000000
Train Epoch: 19 [19200/50000 (38%)]    Loss: 0.053814  Acc: 98.000000
Train Epoch: 19 [25600/50000 (51%)]    Loss: 0.054168  Acc: 98.000000
Train Epoch: 19 [32000/50000 (64%)]    Loss: 0.054138  Acc: 98.000000
Train Epoch: 19 [38400/50000 (77%)]    Loss: 0.055356  Acc: 98.000000
Train Epoch: 19 [44800/50000 (90%)]    Loss: 0.055334  Acc: 98.000000
one epoch spend:  0:01:10.397104
EPOCH:19, ACC:84.23

Train Epoch: 20 [6400/50000 (13%)]    Loss: 0.059795  Acc: 98.000000
Train Epoch: 20 [12800/50000 (26%)]    Loss: 0.059780  Acc: 98.000000
Train Epoch: 20 [19200/50000 (38%)]    Loss: 0.060332  Acc: 98.000000
Train Epoch: 20 [25600/50000 (51%)]    Loss: 0.057949  Acc: 98.000000
Train Epoch: 20 [32000/50000 (64%)]    Loss: 0.056517  Acc: 98.000000
Train Epoch: 20 [38400/50000 (77%)]    Loss: 0.055322  Acc: 98.000000
Train Epoch: 20 [44800/50000 (90%)]    Loss: 0.053375  Acc: 98.000000
one epoch spend:  0:01:10.407573
EPOCH:20, ACC:84.51

CIFAR10 pytorch LeNet Train: EPOCH:20, BATCH_SZ:64, LR:0.01, ACC:85.45
train spend time:  0:23:45.010363

Process finished with exit code 0

The accuracy reaches 85%, compared with AlexNet's 75%: an improvement of 10 percentage points.
