Keras in practice: CIFAR-10 image classification with a CNN
Original post: https://blog.csdn.net/zzulp/article/details/76358694
import keras
from keras.datasets import cifar10
from keras.models import Sequential
from keras.layers import Dense, Dropout, Activation, Flatten
from keras.layers import Conv2D, MaxPooling2D

num_classes = 10
model_name = 'cifar10.h5'

# The data, shuffled and split between train and test sets:
(x_train, y_train), (x_test, y_test) = cifar10.load_data()
x_train = x_train.astype('float32') / 255
x_test = x_test.astype('float32') / 255

# Convert class vectors to binary class matrices.
y_train = keras.utils.to_categorical(y_train, num_classes)
y_test = keras.utils.to_categorical(y_test, num_classes)

model = Sequential()
model.add(Conv2D(32, (3, 3), padding='same', input_shape=x_train.shape[1:]))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Dropout(0.25))

model.add(Conv2D(64, (3, 3), padding='same'))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Dropout(0.25))

model.add(Flatten())
model.add(Dense(512))
model.add(Activation('relu'))
model.add(Dropout(0.5))
model.add(Dense(num_classes))
model.add(Activation('softmax'))

model.summary()

# Initiate the RMSprop optimizer and train the model with it.
opt = keras.optimizers.rmsprop(lr=0.001, decay=1e-6)
model.compile(loss='categorical_crossentropy', optimizer=opt, metrics=['accuracy'])

hist = model.fit(x_train, y_train, epochs=40, shuffle=True)
model.save(model_name)

# Evaluate on the test set.
loss, accuracy = model.evaluate(x_test, y_test)
print(loss, accuracy)
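The two preprocessing steps above (pixel scaling and one-hot label encoding) can be sanity-checked in isolation with plain NumPy. This is a minimal sketch with made-up array values standing in for CIFAR-10 data; it only mirrors what `astype('float32')/255` and `keras.utils.to_categorical` do:

```python
import numpy as np

# Fake "images": 4 samples of shape 32x32x3 with uint8 pixel values, like CIFAR-10.
x = np.array([[[[0, 128, 255]] * 32] * 32] * 4, dtype=np.uint8)

# Scale pixel values from [0, 255] to [0.0, 1.0], mirroring x_train.astype('float32')/255.
x_scaled = x.astype('float32') / 255
print(x_scaled.min(), x_scaled.max())  # 0.0 1.0

# One-hot encode integer class labels, mirroring keras.utils.to_categorical(y, num_classes).
y = np.array([0, 3, 9, 3])
num_classes = 10
y_onehot = np.zeros((y.size, num_classes), dtype=np.float32)
y_onehot[np.arange(y.size), y] = 1.0
print(y_onehot.shape)  # (4, 10): one row per sample, a single 1 in the label's column
```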
Experiment results:
Downloading data from http://www.cs.toronto.edu/~kriz/cifar-10-python.tar.gz
170475520/170498071 [============================>.] - ETA: 0s
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
conv2d_1 (Conv2D) (None, 32, 32, 32) 896
_________________________________________________________________
activation_1 (Activation) (None, 32, 32, 32) 0
_________________________________________________________________
max_pooling2d_1 (MaxPooling2 (None, 16, 16, 32) 0
_________________________________________________________________
dropout_1 (Dropout) (None, 16, 16, 32) 0
_________________________________________________________________
conv2d_2 (Conv2D) (None, 16, 16, 64) 18496
_________________________________________________________________
activation_2 (Activation) (None, 16, 16, 64) 0
_________________________________________________________________
max_pooling2d_2 (MaxPooling2 (None, 8, 8, 64) 0
_________________________________________________________________
dropout_2 (Dropout) (None, 8, 8, 64) 0
_________________________________________________________________
flatten_1 (Flatten) (None, 4096) 0
_________________________________________________________________
dense_1 (Dense) (None, 512) 2097664
_________________________________________________________________
activation_3 (Activation) (None, 512) 0
_________________________________________________________________
dropout_3 (Dropout) (None, 512) 0
_________________________________________________________________
dense_2 (Dense) (None, 10) 5130
_________________________________________________________________
activation_4 (Activation) (None, 10) 0
=================================================================
Total params: 2,122,186
Trainable params: 2,122,186
Non-trainable params: 0
_________________________________________________________________
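The parameter counts in the summary can be checked by hand: a Conv2D layer has kernel_h * kernel_w * in_channels * filters weights plus one bias per filter, and a Dense layer has in_features * out_features weights plus one bias per output. A quick arithmetic check against the table above:

```python
# Conv2D(32, (3,3)) on a 32x32x3 input: 3*3*3 weights per filter, 32 filters, 32 biases.
conv1 = 3 * 3 * 3 * 32 + 32        # 896
# Conv2D(64, (3,3)) on the 16x16x32 feature map.
conv2 = 3 * 3 * 32 * 64 + 64       # 18496
# Dense(512) after Flatten of the 8x8x64 feature map (4096 features).
dense1 = 8 * 8 * 64 * 512 + 512    # 2097664
# Dense(10) output layer.
dense2 = 512 * 10 + 10             # 5130
total = conv1 + conv2 + dense1 + dense2
print(total)                       # 2122186, matching "Total params: 2,122,186"
```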
Epoch 1/40
50000/50000 [==============================] - 189s - loss: 1.5264 - acc: 0.4558
Epoch 2/40
50000/50000 [==============================] - 185s - loss: 1.2152 - acc: 0.5769
Epoch 3/40
50000/50000 [==============================] - 192s - loss: 1.1367 - acc: 0.6118
Epoch 4/40
50000/50000 [==============================] - 183s - loss: 1.1145 - acc: 0.6241
Epoch 5/40
50000/50000 [==============================] - 189s - loss: 1.1131 - acc: 0.6273
Epoch 6/40
50000/50000 [==============================] - 192s - loss: 1.1175 - acc: 0.6313
Epoch 7/40
50000/50000 [==============================] - 202s - loss: 1.1309 - acc: 0.6299
Epoch 8/40
50000/50000 [==============================] - 187s - loss: 1.1406 - acc: 0.6278
Epoch 9/40
50000/50000 [==============================] - 190s - loss: 1.1583 - acc: 0.6221
Epoch 10/40
50000/50000 [==============================] - 188s - loss: 1.1689 - acc: 0.6199
Epoch 11/40
50000/50000 [==============================] - 183s - loss: 1.1896 - acc: 0.6134
Epoch 12/40
50000/50000 [==============================] - 188s - loss: 1.2032 - acc: 0.6101
Epoch 13/40
50000/50000 [==============================] - 186s - loss: 1.2246 - acc: 0.6011
Epoch 14/40
50000/50000 [==============================] - 192s - loss: 1.2405 - acc: 0.6000
Epoch 15/40
50000/50000 [==============================] - 170s - loss: 1.2514 - acc: 0.5958
Epoch 16/40
50000/50000 [==============================] - 172s - loss: 1.2627 - acc: 0.5912
Epoch 17/40
50000/50000 [==============================] - 177s - loss: 1.2835 - acc: 0.5838
Epoch 18/40
50000/50000 [==============================] - 179s - loss: 1.2876 - acc: 0.5809
Epoch 19/40
50000/50000 [==============================] - 180s - loss: 1.3085 - acc: 0.5782
Epoch 20/40
50000/50000 [==============================] - 180s - loss: 1.3253 - acc: 0.5695
Epoch 21/40
50000/50000 [==============================] - 180s - loss: 1.3375 - acc: 0.5651
Epoch 22/40
50000/50000 [==============================] - 183s - loss: 1.3483 - acc: 0.5623
Epoch 23/40
50000/50000 [==============================] - 177s - loss: 1.3567 - acc: 0.5599
Epoch 24/40
50000/50000 [==============================] - 178s - loss: 1.3697 - acc: 0.5541
Epoch 25/40
50000/50000 [==============================] - 178s - loss: 1.3722 - acc: 0.5518
Epoch 26/40
50000/50000 [==============================] - 181s - loss: 1.3848 - acc: 0.5479
Epoch 27/40
50000/50000 [==============================] - 181s - loss: 1.3916 - acc: 0.5474
Epoch 28/40
50000/50000 [==============================] - 183s - loss: 1.4081 - acc: 0.5403
Epoch 29/40
50000/50000 [==============================] - 172s - loss: 1.4229 - acc: 0.5387
Epoch 30/40
50000/50000 [==============================] - 190s - loss: 1.4153 - acc: 0.5383
Epoch 31/40
50000/50000 [==============================] - 183s - loss: 1.4355 - acc: 0.5324
Epoch 32/40
50000/50000 [==============================] - 191s - loss: 1.4667 - acc: 0.5251
Epoch 33/40
50000/50000 [==============================] - 169s - loss: 1.4690 - acc: 0.5188
Epoch 34/40
50000/50000 [==============================] - 168s - loss: 1.4798 - acc: 0.5176
Epoch 35/40
50000/50000 [==============================] - 181s - loss: 1.5152 - acc: 0.5054
Epoch 36/40
50000/50000 [==============================] - 173s - loss: 1.4985 - acc: 0.5067
Epoch 37/40
50000/50000 [==============================] - 182s - loss: 1.5030 - acc: 0.5098
Epoch 38/40
50000/50000 [==============================] - 178s - loss: 1.5298 - acc: 0.4967
Epoch 39/40
50000/50000 [==============================] - 181s - loss: 1.5237 - acc: 0.5014
Epoch 40/40
50000/50000 [==============================] - 181s - loss: 1.4933 - acc: 0.5103
9952/10000 [============================>.] - ETA: 0s
1.80146283646 0.3274
The final line is the test-set loss and accuracy printed by the script: loss 1.8015, accuracy 32.74%.