A complete handwritten digit recognition walkthrough with the MXNet framework — from data preprocessing to network training, and saving the model and the training log

import logging

import numpy as np
import mxnet as mx

# log training progress to the console
logging.getLogger().setLevel(logging.DEBUG)

batch_size = 100

# download MNIST and wrap the arrays in NDArray iterators
mnist = mx.test_utils.get_mnist()
train_iter = mx.io.NDArrayIter(mnist['train_data'], mnist['train_label'], batch_size, shuffle=True)
val_iter = mx.io.NDArrayIter(mnist['test_data'], mnist['test_label'], batch_size)

# input placeholder
data = mx.sym.var('data')
# first conv layer: 20 filters of 5x5, tanh activation, 2x2 max pooling
conv1 = mx.sym.Convolution(data=data, kernel=(5, 5), num_filter=20)
tanh1 = mx.sym.Activation(data=conv1, act_type="tanh")
pool1 = mx.sym.Pooling(data=tanh1, pool_type="max", kernel=(2, 2), stride=(2, 2))
# second conv layer: 50 filters of 5x5, tanh activation, 2x2 max pooling
conv2 = mx.sym.Convolution(data=pool1, kernel=(5, 5), num_filter=50)
tanh2 = mx.sym.Activation(data=conv2, act_type="tanh")
pool2 = mx.sym.Pooling(data=tanh2, pool_type="max", kernel=(2, 2), stride=(2, 2))
# first fully connected layer
flatten = mx.sym.Flatten(data=pool2)
fc1 = mx.sym.FullyConnected(data=flatten, num_hidden=500)
tanh3 = mx.sym.Activation(data=fc1, act_type="tanh")
# second fully connected layer: one output per digit class
fc2 = mx.sym.FullyConnected(data=tanh3, num_hidden=10)
# softmax loss
lenet = mx.sym.SoftmaxOutput(data=fc2, name='softmax')
# create a trainable module; the context here is the CPU (use mx.gpu(0) to train on GPU 0)
lenet_model = mx.mod.Module(symbol=lenet, context=mx.cpu())
# train with the train/validation iterators defined above
lenet_model.fit(train_iter,
                eval_data=val_iter,
                optimizer='sgd',
                optimizer_params={'learning_rate': 0.1},
                eval_metric='acc',
                batch_end_callback=mx.callback.Speedometer(batch_size, 100),
                num_epoch=10)
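The title also promises saving the model and the training log, which the code above does not yet do. A minimal sketch of one way to add that with MXNet's Module API: attach a FileHandler so the root logger also writes to a file, and pass an epoch_end_callback to the same fit call so the parameters are checkpointed after every epoch. The 'train.log' filename and the 'lenet' checkpoint prefix are illustrative names, not taken from the original script.

# --- optional: persist the training log and model checkpoints (illustrative names) ---
log_file = logging.FileHandler('train.log')   # assumed log filename
logging.getLogger().addHandler(log_file)

# writes lenet-symbol.json plus lenet-0001.params, lenet-0002.params, ... one per epoch
checkpoint = mx.callback.do_checkpoint('lenet', period=1)

# same fit call as above, with epoch_end_callback added
lenet_model.fit(train_iter,
                eval_data=val_iter,
                optimizer='sgd',
                optimizer_params={'learning_rate': 0.1},
                eval_metric='acc',
                batch_end_callback=mx.callback.Speedometer(batch_size, 100),
                epoch_end_callback=checkpoint,
                num_epoch=10)

With or without these additions, running the script prints a training log like the following: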

INFO:root:Epoch[0] Batch [100] Speed: 1504.57 samples/sec accuracy=0.113564
INFO:root:Epoch[0] Batch [200] Speed: 1516.40 samples/sec accuracy=0.118100
INFO:root:Epoch[0] Batch [300] Speed: 1515.71 samples/sec accuracy=0.116600
INFO:root:Epoch[0] Batch [400] Speed: 1505.61 samples/sec accuracy=0.110200
INFO:root:Epoch[0] Batch [500] Speed: 1406.21 samples/sec accuracy=0.107600
INFO:root:Epoch[0] Train-accuracy=0.108081
INFO:root:Epoch[0] Time cost=40.572
INFO:root:Epoch[0] Validation-accuracy=0.102800
INFO:root:Epoch[1] Batch [100] Speed: 1451.87 samples/sec accuracy=0.115050
INFO:root:Epoch[1] Batch [200] Speed: 1476.86 samples/sec accuracy=0.179600
INFO:root:Epoch[1] Batch [300] Speed: 1409.67 samples/sec accuracy=0.697100
INFO:root:Epoch[1] Batch [400] Speed: 1379.52 samples/sec accuracy=0.871900
INFO:root:Epoch[1] Batch [500] Speed: 1374.88 samples/sec accuracy=0.901000
INFO:root:Epoch[1] Train-accuracy=0.925556
INFO:root:Epoch[1] Time cost=42.527
INFO:root:Epoch[1] Validation-accuracy=0.936900
INFO:root:Epoch[2] Batch [100] Speed: 1376.59 samples/sec accuracy=0.936436
INFO:root:Epoch[2] Batch [200] Speed: 1379.29 samples/sec accuracy=0.948100
INFO:root:Epoch[2] Batch [300] Speed: 1375.07 samples/sec accuracy=0.953400
INFO:root:Epoch[2] Batch [400] Speed: 1369.65 samples/sec accuracy=0.958600
INFO:root:Epoch[2] Batch [500] Speed: 1371.79 samples/sec accuracy=0.960900
INFO:root:Epoch[2] Train-accuracy=0.966667
INFO:root:Epoch[2] Time cost=43.660
INFO:root:Epoch[2] Validation-accuracy=0.972900
INFO:root:Epoch[3] Batch [100] Speed: 1230.74 samples/sec accuracy=0.969505
INFO:root:Epoch[3] Batch [200] Speed: 1335.27 samples/sec accuracy=0.970800
INFO:root:Epoch[3] Batch [300] Speed: 1264.43 samples/sec accuracy=0.972600
INFO:root:Epoch[3] Batch [400] Speed: 1242.03 samples/sec accuracy=0.974100
INFO:root:Epoch[3] Batch [500] Speed: 1322.77 samples/sec accuracy=0.974600
INFO:root:Epoch[3] Train-accuracy=0.976465
INFO:root:Epoch[3] Time cost=46.860
INFO:root:Epoch[3] Validation-accuracy=0.980700
INFO:root:Epoch[4] Batch [100] Speed: 1342.42 samples/sec accuracy=0.978020
INFO:root:Epoch[4] Batch [200] Speed: 1339.98 samples/sec accuracy=0.980600
INFO:root:Epoch[4] Batch [300] Speed: 1344.36 samples/sec accuracy=0.981000
INFO:root:Epoch[4] Batch [400] Speed: 1338.13 samples/sec accuracy=0.980000
INFO:root:Epoch[4] Batch [500] Speed: 1343.76 samples/sec accuracy=0.979000
INFO:root:Epoch[4] Train-accuracy=0.983535
INFO:root:Epoch[4] Time cost=44.694
INFO:root:Epoch[4] Validation-accuracy=0.985700
INFO:root:Epoch[5] Batch [100] Speed: 1333.50 samples/sec accuracy=0.981584
INFO:root:Epoch[5] Batch [200] Speed: 1342.07 samples/sec accuracy=0.985400
INFO:root:Epoch[5] Batch [300] Speed: 1339.04 samples/sec accuracy=0.984300
INFO:root:Epoch[5] Batch [400] Speed: 1323.42 samples/sec accuracy=0.983500
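As a quick sanity check that a saved checkpoint can be restored, the following sketch (assuming the illustrative 'lenet' prefix above was used and that epoch 10 was checkpointed) reloads the symbol and parameters into a fresh Module and scores it on the validation iterator:

# reload the checkpoint written after epoch 10 under the assumed 'lenet' prefix
sym, arg_params, aux_params = mx.model.load_checkpoint('lenet', 10)
restored = mx.mod.Module(symbol=sym, context=mx.cpu())
restored.bind(data_shapes=val_iter.provide_data,
              label_shapes=val_iter.provide_label,
              for_training=False)
restored.set_params(arg_params, aux_params)
# evaluate the restored parameters on the validation set; score returns (metric, value) pairs
print(restored.score(val_iter, 'acc'))

The checkpoint callback stores the network definition once (lenet-symbol.json) and the weights per epoch (lenet-NNNN.params), so only the prefix and an epoch number are needed to restore a trained model later.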
