Implementing VGG16 in Keras
1. Code Implementation
# -*- coding: utf-8 -*-
"""
Created on Sat Feb 9 15:33:39 2019

@author: zhen
"""
from keras.applications.vgg16 import VGG16
from keras.layers import Flatten
from keras.layers import Dense
from keras.layers import Dropout
from keras.models import Model
from keras.optimizers import SGD
from keras.datasets import mnist
import cv2
import numpy as np

# The default 224x224 input needs a lot of memory (at least 24 GB),
# so a much smaller input resolution is used here to reduce the memory requirement
model_vgg = VGG16(include_top=False, weights='imagenet', input_shape=(48, 48, 3))
for layer in model_vgg.layers:
    layer.trainable = False
model = Flatten(name='flatten')(model_vgg.output)  # flatten the feature maps
model = Dense(4096, activation='relu', name='fc1')(model)
model = Dense(4096, activation='relu', name='fc2')(model)
model = Dropout(0.5)(model)
model = Dense(10, activation='softmax')(model)
model_vgg_mnist = Model(inputs=model_vgg.input, outputs=model, name='vgg16')
model_vgg_mnist.summary()

# The input size originally recommended by VGGNet (224x224)
model_vgg = VGG16(include_top=False, weights='imagenet', input_shape=(224, 224, 3))
for layer in model_vgg.layers:
    layer.trainable = False
model = Flatten()(model_vgg.output)
model = Dense(4096, activation='relu', name='fc1')(model)
model = Dense(4096, activation='relu', name='fc2')(model)
model = Dropout(0.5)(model)
model = Dense(10, activation='softmax', name='prediction')(model)
model_vgg_mnist_pretrain = Model(model_vgg.input, model, name='vgg16_pretrain')
model_vgg_mnist_pretrain.summary()

sgd = SGD(lr=0.05, decay=1e-5)  # stochastic gradient descent
model_vgg_mnist.compile(loss='categorical_crossentropy', optimizer=sgd, metrics=['accuracy'])

(x_train, y_train), (x_test, y_test) = mnist.load_data("../test_data_home")
x_train, y_train = x_train[:1000], y_train[:1000]
x_test, y_test = x_test[:1000], y_test[:1000]

# Convert the single-channel grayscale images to 3-channel RGB
x_train = [cv2.cvtColor(cv2.resize(i, (48, 48)), cv2.COLOR_GRAY2RGB) for i in x_train]
x_train = np.concatenate([arr[np.newaxis] for arr in x_train]).astype('float32')
x_test = [cv2.cvtColor(cv2.resize(i, (48, 48)), cv2.COLOR_GRAY2RGB) for i in x_test]
x_test = np.concatenate([arr[np.newaxis] for arr in x_test]).astype('float32')
print(x_train.shape)
print(x_test.shape)

x_train = x_train / 255
x_test = x_test / 255

# One-hot encode the digit labels
def tran_y(y):
    y_ohe = np.zeros(10)
    y_ohe[y] = 1
    return y_ohe

y_train_ohe = np.array([tran_y(y_train[i]) for i in range(len(y_train))])
y_test_ohe = np.array([tran_y(y_test[i]) for i in range(len(y_test))])

model_vgg_mnist.fit(x_train, y_train_ohe, validation_data=(x_test, y_test_ohe), epochs=20, batch_size=100)
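Once training finishes, a minimal follow-up sketch (reusing the variables defined in the script above; none of this is in the original code) for evaluating the fine-tuned model on the held-out test images and reading a single prediction back out:

# Evaluate on the 1000 preprocessed test images
loss, acc = model_vgg_mnist.evaluate(x_test, y_test_ohe, batch_size=100)
print('test loss: %.4f, test accuracy: %.4f' % (loss, acc))

# Predict the first test image; argmax maps the softmax output back to a digit
probs = model_vgg_mnist.predict(x_test[:1])
print('predicted digit:', np.argmax(probs[0]))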
2. Results
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
input_9 (InputLayer) (None, 48, 48, 3) 0
_________________________________________________________________
block1_conv1 (Conv2D) (None, 48, 48, 64) 1792
_________________________________________________________________
block1_conv2 (Conv2D) (None, 48, 48, 64) 36928
_________________________________________________________________
block1_pool (MaxPooling2D) (None, 24, 24, 64) 0
_________________________________________________________________
block2_conv1 (Conv2D) (None, 24, 24, 128) 73856
_________________________________________________________________
block2_conv2 (Conv2D) (None, 24, 24, 128) 147584
_________________________________________________________________
block2_pool (MaxPooling2D) (None, 12, 12, 128) 0
_________________________________________________________________
block3_conv1 (Conv2D) (None, 12, 12, 256) 295168
_________________________________________________________________
block3_conv2 (Conv2D) (None, 12, 12, 256) 590080
_________________________________________________________________
block3_conv3 (Conv2D) (None, 12, 12, 256) 590080
_________________________________________________________________
block3_pool (MaxPooling2D) (None, 6, 6, 256) 0
_________________________________________________________________
block4_conv1 (Conv2D) (None, 6, 6, 512) 1180160
_________________________________________________________________
block4_conv2 (Conv2D) (None, 6, 6, 512) 2359808
_________________________________________________________________
block4_conv3 (Conv2D) (None, 6, 6, 512) 2359808
_________________________________________________________________
block4_pool (MaxPooling2D) (None, 3, 3, 512) 0
_________________________________________________________________
block5_conv1 (Conv2D) (None, 3, 3, 512) 2359808
_________________________________________________________________
block5_conv2 (Conv2D) (None, 3, 3, 512) 2359808
_________________________________________________________________
block5_conv3 (Conv2D) (None, 3, 3, 512) 2359808
_________________________________________________________________
block5_pool (MaxPooling2D) (None, 1, 1, 512) 0
_________________________________________________________________
flatten (Flatten) (None, 512) 0
_________________________________________________________________
fc1 (Dense) (None, 4096) 2101248
_________________________________________________________________
fc2 (Dense) (None, 4096) 16781312
_________________________________________________________________
dropout_9 (Dropout) (None, 4096) 0
_________________________________________________________________
dense_5 (Dense) (None, 10) 40970
=================================================================
Total params: 33,638,218
Trainable params: 18,923,530
Non-trainable params: 14,714,688
_________________________________________________________________
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
input_10 (InputLayer) (None, 224, 224, 3) 0
_________________________________________________________________
block1_conv1 (Conv2D) (None, 224, 224, 64) 1792
_________________________________________________________________
block1_conv2 (Conv2D) (None, 224, 224, 64) 36928
_________________________________________________________________
block1_pool (MaxPooling2D) (None, 112, 112, 64) 0
_________________________________________________________________
block2_conv1 (Conv2D) (None, 112, 112, 128) 73856
_________________________________________________________________
block2_conv2 (Conv2D) (None, 112, 112, 128) 147584
_________________________________________________________________
block2_pool (MaxPooling2D) (None, 56, 56, 128) 0
_________________________________________________________________
block3_conv1 (Conv2D) (None, 56, 56, 256) 295168
_________________________________________________________________
block3_conv2 (Conv2D) (None, 56, 56, 256) 590080
_________________________________________________________________
block3_conv3 (Conv2D) (None, 56, 56, 256) 590080
_________________________________________________________________
block3_pool (MaxPooling2D) (None, 28, 28, 256) 0
_________________________________________________________________
block4_conv1 (Conv2D) (None, 28, 28, 512) 1180160
_________________________________________________________________
block4_conv2 (Conv2D) (None, 28, 28, 512) 2359808
_________________________________________________________________
block4_conv3 (Conv2D) (None, 28, 28, 512) 2359808
_________________________________________________________________
block4_pool (MaxPooling2D) (None, 14, 14, 512) 0
_________________________________________________________________
block5_conv1 (Conv2D) (None, 14, 14, 512) 2359808
_________________________________________________________________
block5_conv2 (Conv2D) (None, 14, 14, 512) 2359808
_________________________________________________________________
block5_conv3 (Conv2D) (None, 14, 14, 512) 2359808
_________________________________________________________________
block5_pool (MaxPooling2D) (None, 7, 7, 512) 0
_________________________________________________________________
flatten_5 (Flatten) (None, 25088) 0
_________________________________________________________________
fc1 (Dense) (None, 4096) 102764544
_________________________________________________________________
fc2 (Dense) (None, 4096) 16781312
_________________________________________________________________
dropout_10 (Dropout) (None, 4096) 0
_________________________________________________________________
prediction (Dense) (None, 10) 40970
=================================================================
Total params: 134,301,514
Trainable params: 119,586,826
Non-trainable params: 14,714,688
_________________________________________________________________
(1000, 48, 48, 3)
(1000, 48, 48, 3)
Train on 1000 samples, validate on 1000 samples
Epoch 1/20
1000/1000 [==============================] - 175s 175ms/step - loss: 2.1289 - acc: 0.2350 - val_loss: 1.9100 - val_acc: 0.4230
Epoch 2/20
1000/1000 [==============================] - 190s 190ms/step - loss: 1.7685 - acc: 0.4420 - val_loss: 1.6503 - val_acc: 0.4930
Epoch 3/20
1000/1000 [==============================] - 265s 265ms/step - loss: 1.5582 - acc: 0.5140 - val_loss: 1.5005 - val_acc: 0.5440
Epoch 4/20
1000/1000 [==============================] - 373s 373ms/step - loss: 1.4210 - acc: 0.5710 - val_loss: 1.3019 - val_acc: 0.6160
Epoch 5/20
1000/1000 [==============================] - 295s 295ms/step - loss: 1.1946 - acc: 0.6490 - val_loss: 1.1182 - val_acc: 0.7280
Epoch 6/20
1000/1000 [==============================] - 277s 277ms/step - loss: 1.0291 - acc: 0.7330 - val_loss: 1.0279 - val_acc: 0.7430
Epoch 7/20
1000/1000 [==============================] - 177s 177ms/step - loss: 1.0065 - acc: 0.7060 - val_loss: 0.9229 - val_acc: 0.7690
Epoch 8/20
1000/1000 [==============================] - 169s 169ms/step - loss: 0.8438 - acc: 0.7810 - val_loss: 0.9716 - val_acc: 0.6670
Epoch 9/20
1000/1000 [==============================] - 169s 169ms/step - loss: 0.8898 - acc: 0.7230 - val_loss: 0.9710 - val_acc: 0.6660
Epoch 10/20
1000/1000 [==============================] - 166s 166ms/step - loss: 0.8258 - acc: 0.7460 - val_loss: 0.9026 - val_acc: 0.7130
Epoch 11/20
1000/1000 [==============================] - 169s 169ms/step - loss: 0.7592 - acc: 0.7640 - val_loss: 0.9691 - val_acc: 0.6730
Epoch 12/20
1000/1000 [==============================] - 165s 165ms/step - loss: 0.7793 - acc: 0.7520 - val_loss: 0.8350 - val_acc: 0.6800
Epoch 13/20
1000/1000 [==============================] - 164s 164ms/step - loss: 0.6677 - acc: 0.7780 - val_loss: 0.7203 - val_acc: 0.7730
Epoch 14/20
1000/1000 [==============================] - 164s 164ms/step - loss: 0.7018 - acc: 0.7630 - val_loss: 0.6947 - val_acc: 0.7760
Epoch 15/20
1000/1000 [==============================] - 163s 163ms/step - loss: 0.6129 - acc: 0.8100 - val_loss: 0.7025 - val_acc: 0.7610
Epoch 16/20
1000/1000 [==============================] - 163s 163ms/step - loss: 0.6104 - acc: 0.8190 - val_loss: 0.6385 - val_acc: 0.8220
Epoch 17/20
1000/1000 [==============================] - 163s 163ms/step - loss: 0.5507 - acc: 0.8320 - val_loss: 0.6273 - val_acc: 0.8290
Epoch 18/20
1000/1000 [==============================] - 164s 164ms/step - loss: 0.5205 - acc: 0.8360 - val_loss: 0.8740 - val_acc: 0.6750
Epoch 19/20
1000/1000 [==============================] - 163s 163ms/step - loss: 0.5852 - acc: 0.8150 - val_loss: 0.6614 - val_acc: 0.7890
Epoch 20/20
1000/1000 [==============================] - 166s 166ms/step - loss: 0.5310 - acc: 0.8340 - val_loss: 0.5718 - val_acc: 0.8250
3. Analysis
VGGNet is a deep convolutional neural network developed jointly by the Visual Geometry Group at the University of Oxford and researchers at Google DeepMind. VGG explored the relationship between the depth of a convolutional network and its performance: by repeatedly stacking small 3x3 convolution kernels and 2x2 max-pooling layers, it successfully built networks 16 to 19 layers deep.
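For illustration only, a minimal hypothetical sketch of one such stage in Keras, two 3x3 convolutions followed by a 2x2 max pooling (the filter count and layer names are chosen for this example, not taken from the code above):

from keras.layers import Conv2D, MaxPooling2D, Input
from keras.models import Model

# One VGG-style stage: stacked 3x3 convolutions, then 2x2 max pooling
inputs = Input(shape=(48, 48, 3))
x = Conv2D(64, (3, 3), activation='relu', padding='same', name='demo_conv1')(inputs)
x = Conv2D(64, (3, 3), activation='relu', padding='same', name='demo_conv2')(x)
x = MaxPooling2D((2, 2), name='demo_pool')(x)
demo_block = Model(inputs, x, name='vgg_style_stage')
demo_block.summary()  # spatial size shrinks from 48x48 to 24x24 after pooling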
VGG took second place in the classification task and first place in the localization task of the ILSVRC 2014 competition. It also transfers well, generalizing strongly when moved to other image data. Its structure is simple: the entire network uses the same 3x3 convolution kernel size and 2x2 pooling size throughout. VGG is still frequently used to extract image features and can be retrained on new image-classification tasks, effectively providing very good initial weights.
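A minimal sketch of the feature-extraction use mentioned here (the pooling='avg' argument and the random placeholder batch are assumptions made for this example, not part of the original script):

from keras.applications.vgg16 import VGG16, preprocess_input
import numpy as np

# Pretrained convolutional base used as a fixed feature extractor
base = VGG16(include_top=False, weights='imagenet', input_shape=(48, 48, 3), pooling='avg')
batch = (np.random.rand(4, 48, 48, 3) * 255).astype('float32')  # placeholder images
features = base.predict(preprocess_input(batch))
print(features.shape)  # (4, 512): one 512-dimensional feature vector per image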
VGG improves performance by adding depth. It has five convolutional stages, each containing two or three convolutional layers, and each stage ends with a max-pooling layer that shrinks the feature maps. Within a stage every convolutional layer uses the same number of filters, and later stages use more: 64-128-256-512-512.
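To see these five stages in the pretrained model itself, a short sketch (assuming the block1_* ... block5_* layer naming used by the Keras VGG16 shown in the summaries above) that groups layers by block prefix and prints each stage's depth and filter width:

from keras.applications.vgg16 import VGG16

# weights=None: inspect the architecture without downloading the ImageNet weights
model = VGG16(include_top=False, weights=None, input_shape=(48, 48, 3))

# Group layers by their block prefix ('block1' ... 'block5')
blocks = {}
for layer in model.layers:
    if layer.name.startswith('block'):
        blocks.setdefault(layer.name.split('_')[0], []).append(layer)

for prefix in sorted(blocks):
    convs = [l for l in blocks[prefix] if 'conv' in l.name]
    print('%s: %d conv layers, %d filters, ends with %s'
          % (prefix, len(convs), convs[0].filters, blocks[prefix][-1].name))

Run against the 48x48 model above, this prints stages of 2, 2, 3, 3 and 3 convolutional layers with 64, 128, 256, 512 and 512 filters, matching the description.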