软件环境(Windows):

  • Visual Studio
  • Anaconda
  • CUDA
  • MinGW-w64
    • conda install -c anaconda mingw libpython
  • CNTK
  • TensorFlow-gpu
  • Keras-gpu
  • Theano
  • MKL
  • CuDNN

参考书籍:谢梁、鲁颖、劳虹岚. 《Keras 快速上手:基于 Python 的深度学习实战》

Keras 简介

Keras 这个名字来源于希腊古典史诗《奥德赛》中的牛角之门(Gate of Horn):Those that come through the Ivory Gate cheat us with empty promises that never see fulfillment. Those that come through the Gate of Horn inform the dreamer of truth.

Keras 的优点:

  1. Keras 在设计时以人为本,强调快速建模,用户可以快速地将所需模型的结构映射到 Keras 代码中,尽可能减少编写代码的工作量。

  2. 支持现有的常见结构,比如 CNN、RNN 等。

  3. 高度模块化,用户几乎能够任意组合各种模块来构造所需的模型:

    在 Keras 中,任何神经网络模型都可以被描述为一个图模型或者序列模型,其中的部件被划分为:

    - 神经网络层

    - 损失函数

    - 激活函数

    - 初始化方法

    - 正则化方法

    - 优化引擎

  4. 基于 Python,用户很容易实现模块的自定义操作。

  5. 能在 CPU 和 GPU 之间无缝切换。


1 Keras 中的模型

关于Keras模型

Keras 有两种类型的模型,序列模型(Sequential)和 函数式模型(Model),函数式模型应用更为广泛,序列模型是函数式模型的一种特殊情况。函数式模型也叫通用模型。

两类模型均有以下常用方法:

  • model.summary():打印出模型概况,它实际调用的是keras.utils.print_summary
  • model.get_config():返回包含模型配置信息的 Python 字典。模型也可以从它的 config 信息中重构回去

对于 Model,对应的重构方法是 Model.from_config(本文未使用)。

对于 Sequential:

  1. config = model.get_config()
  2. model = Sequential.from_config(config)
  • model.get_layer():依据层名或下标获得层对象
  • model.get_weights():返回模型权重张量的列表,类型为 numpy.array
  • model.set_weights():从 numpy.array 里将权重载入给模型,要求数组具有与 model.get_weights() 相同的形状。
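
下面给出一个组合使用上述方法的最小示意(假设 model 是一个已经构建并训练好的 Sequential 模型,变量名仅为示例):

  config = model.get_config()                  # 提取模型结构配置
  new_model = Sequential.from_config(config)   # 依据配置重构一个结构相同的新模型
  weights = model.get_weights()                # 取出原模型的权重(numpy array 列表)
  new_model.set_weights(weights)               # 把权重载入新模型
  first_layer = new_model.get_layer(index=0)   # 依据下标获得层对象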

1.1 Sequential 序列模型

序列模型是函数式模型的简略版(即序列模型是通用模型的一个子类),为最简单的线性、从头到尾的结构顺序,不分叉。即这种模型各层之间是依次顺序的线性关系,在第 \(k\) 层和 \(k+1\) 层之间可以加上各种元素来构造神经网络。这些元素可以通过一个列表来制定,然后作为参数传递给序列模型来生成相应的模型。

Sequential模型的基本组件:

  1. model.add,添加层;
  2. model.compile,模型训练的 BP 模式设置;
  3. model.fit,模型训练参数设置 + 训练;
  4. 模型评估
  5. 模型预测

1.1.1 add:添加层

序贯模型是多个网络层的线性堆叠,也就是“一条路走到黑”。

可以通过向 Sequential 模型传递一个 layer 的 list 来构造该模型:

  1. from keras.models import Sequential
  2. from keras.layers import Dense, Activation
  3. model = Sequential([Dense(32, input_shape=(784,)),
  4. Activation('relu'),
  5. Dense(10),
  6. Activation('softmax'),
  7. ])
  1. Using TensorFlow backend.

也可以通过 .add() 方法一个个的将 layer 加入模型中:

  1. model = Sequential()
  2. model.add(Dense(32, input_shape=(784,)))
  3. model.add(Activation('relu'))
  4. model.add(Dense(10))
  5. model.add(Activation('softmax'))

1.1.2 指定输入数据的 shape

模型需要知道输入数据的 shape,因此,Sequential 的第一层需要接受一个关于输入数据 shape 的参数,后面的各个层则可以自动的推导出中间数据的shape,因此不需要为每个层都指定这个参数。有几种方法来为第一层指定输入数据的 shape:

  • 传递一个input_shape的关键字参数给第一层,input_shape是一个 tuple 类型的数据,其中也可以填入 None ,如果填入 None 则表示此位置可能是任何正整数。数据的 batch 大小不应包含在其中。
  • 有些 2D 层,如 Dense,支持通过指定其输入维度 input_dim 来隐含地指定输入数据 shape,input_dim 是一个 Int 类型的数据。一些 3D 的时域层支持通过参数 input_dim 和 input_length 来指定输入 shape。
  • 如果你需要为输入指定一个固定大小的 batch_size(常用于 stateful RNN 网络),可以传递 batch_size 参数给第一层。例如你想指定输入张量的 batch 大小是 \(32\)、数据 shape 是 \((6,8)\),则需要同时传递 batch_size=32 和 input_shape=(6,8)。
  1. model = Sequential()
  2. model.add(Dense(32, input_dim= 784))
  3. model.summary()
  1. _________________________________________________________________
  2. Layer (type) Output Shape Param #
  3. =================================================================
  4. dense_6 (Dense) (None, 32) 25120
  5. =================================================================
  6. Total params: 25,120
  7. Trainable params: 25,120
  8. Non-trainable params: 0
  9. _________________________________________________________________
  1. model = Sequential()
  2. model.add(Dense(32, input_shape=(784,)))
  3. model.summary()
  1. _________________________________________________________________
  2. Layer (type) Output Shape Param #
  3. =================================================================
  4. dense_8 (Dense) (None, 32) 25120
  5. =================================================================
  6. Total params: 25,120
  7. Trainable params: 25,120
  8. Non-trainable params: 0
  9. _________________________________________________________________
  1. model = Sequential()
  2. model.add(Dense(100, input_shape= (32, 32, 3)))
  3. model.summary()
  1. _________________________________________________________________
  2. Layer (type) Output Shape Param #
  3. =================================================================
  4. dense_9 (Dense) (None, 32, 32, 100) 400
  5. =================================================================
  6. Total params: 400
  7. Trainable params: 400
  8. Non-trainable params: 0
  9. _________________________________________________________________

Param 是 \(400\):\(3 \times 100 + 100\) (包含偏置项)

  1. model = Sequential()
  2. model.add(Dense(100, input_shape= (32, 32, 3), batch_size= 64))
  3. model.summary()
  1. _________________________________________________________________
  2. Layer (type) Output Shape Param #
  3. =================================================================
  4. dense_10 (Dense) (64, 32, 32, 100) 400
  5. =================================================================
  6. Total params: 400
  7. Trainable params: 400
  8. Non-trainable params: 0
  9. _________________________________________________________________

1.1.3 编译

在训练模型之前,我们需要通过 compile 来对学习过程进行配置。

compile接收三个参数:

  • 优化器 optimizer:该参数可指定为已预定义的优化器名,如 rmsprop、adagrad ,或一个Optimizer 类的对象,详情见优化器optimizers

  • 损失函数 loss:该参数为模型试图最小化的目标函数,它可为预定义的损失函数名,如 categorical_crossentropy、mse,也可以为一个自定义损失函数(本节末尾给出一个示意)。详情见损失函数loss

  • 指标列表 metrics:对分类问题,我们一般将该列表设置为 metrics=['accuracy']。指标可以是一个预定义指标的名字,也可以是一个用户定制的函数。指标函数应该返回单个张量,或一个完成 metric_name -> metric_value 映射的字典。

  • sample_weight_mode:如果你需要按时间步为样本赋权( 2D 权矩阵),将该值设为 “temporal”。

    默认为 “None”,代表按样本赋权(1D 权)。在下面 fit 函数的解释中有相关的参考内容。

  • kwargs: 使用 TensorFlow 作为后端请忽略该参数,若使用 Theano 作为后端,kwargs 的值将会传递给 K.function

注意:

模型在使用前必须编译,否则在调用 fit 或 evaluate 时会抛出异常。

  1. # For a multi-class classification problem
  2. model.compile(optimizer='rmsprop',
  3. loss='categorical_crossentropy',
  4. metrics=['accuracy'])
  5. # For a binary classification problem
  6. model.compile(optimizer='rmsprop',
  7. loss='binary_crossentropy',
  8. metrics=['accuracy'])
  9. # For a mean squared error regression problem
  10. model.compile(optimizer='rmsprop',
  11. loss='mse')
  12. # For custom metrics
  13. import keras.backend as K
  14. def mean_pred(y_true, y_pred):
  15. return K.mean(y_pred)
  16. model.compile(optimizer='rmsprop',
  17. loss='binary_crossentropy',
  18. metrics=['accuracy', mean_pred])
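
上面提到 loss 也可以是一个自定义的损失函数。下面给出一个最小示意(函数名 my_mse 为自拟,效果等同于内置的 mse,仅作演示):自定义损失与自定义指标一样,接收 y_true、y_pred 两个张量并返回一个张量:

  import keras.backend as K

  def my_mse(y_true, y_pred):
      # 对每个样本计算误差平方的均值
      return K.mean(K.square(y_pred - y_true), axis=-1)

  model.compile(optimizer='rmsprop', loss=my_mse)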

1.1.4 训练

Keras以 Numpy 数组作为输入数据和标签的数据类型。训练模型一般使用 fit 函数:

fit(self, x, y, batch_size=32, epochs=10, verbose=1, callbacks=None, validation_split=0.0, validation_data=None, shuffle=True, class_weight=None, sample_weight=None, initial_epoch=0)

本函数将模型训练 epochs 轮,其参数有:

  • x:输入数据。如果模型只有一个输入,那么 x 的类型是 numpy array,如果模型有多个输入,那么 x 的类型应当为 list,list 的元素是对应于各个输入的 numpy array

  • y:标签,numpy array

  • batch_size:整数,指定进行梯度下降时每个 batch 包含的样本数。训练时一个 batch 的样本会被计算一次梯度下降,使目标函数优化一步。

  • epochs:整数,训练的轮数,每个 epoch 会把训练集轮一遍。

  • verbose:日志显示,0 为不在标准输出流输出日志信息,1 为输出进度条记录,2 为每个epoch输出一行记录

  • callbacks:list,其中的元素是 keras.callbacks.Callback 的对象。这个 list 中的回调函数将会在训练过程中的适当时机被调用,参考回调函数

  • validation_split:\(0 - 1\) 之间的浮点数,用来指定训练集的一定比例数据作为验证集。验证集将不参与训练,并在每个 epoch 结束后测试模型的指标,如损失函数、精确度等。

    • 注意,validation_split 的划分在 shuffle 之前,因此如果你的数据本身是有序的,需要先手工打乱再指定validation_split,否则可能会出现验证集样本不均匀。
  • validation_data:形式为 (X, y) 的 tuple,是指定的验证集。此参数将覆盖 validation_split。

  • shuffle:布尔值或字符串,一般为布尔值,表示是否在训练过程中随机打乱输入样本的顺序。若为字符串 “batch”,则是用来处理 HDF5 数据的特殊情况,它将在 batch 内部将数据打乱。

  • class_weight:字典,将不同的类别映射为不同的权值,该参数用来在训练过程中调整损失函数(只能用于训练)

  • sample_weight:权值的numpy array,用于在训练时调整损失函数(仅用于训练)。可以传递一个 1D 的与样本等长的向量用于对样本进行 \(1\) 对 \(1\) 的加权,或者在面对时序数据时,传递一个的形式为 (samples,sequence_length) 的矩阵来为每个时间步上的样本赋不同的权。这种情况下请确定在编译模型时添加了 sample_weight_mode='temporal'

  • initial_epoch: 从该参数指定的 epoch 开始训练,在继续之前的训练时有用。

fit函数返回一个 History 的对象,其 History.history 属性记录了损失函数和其他指标的数值随 epoch 变化的情况,如果有验证集的话,也包含了验证集的这些指标变化情况
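
History.history 的一个简单用法示意(data、labels 为任意满足形状要求的 numpy 数组;键名以实际运行为准,指定了验证集时还会包含 val_loss 等):

  history = model.fit(data, labels, epochs=10, batch_size=32, validation_split=0.2)
  print(history.history.keys())     # 例如 ['loss', 'acc', 'val_loss', 'val_acc']
  print(history.history['loss'])    # 每个 epoch 的训练损失组成的列表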

注意:

要与之后的 fit_generator 做区别,两者输入 x/y 不同。

案例一:简单的2分类

一个 epoch 内的样本总数 \(= batch\_size \times iteration\)(迭代次数);\(10\) 次 epoch 代表把整个训练集完整训练十遍。
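以案例一的数据规模为例:训练集共 \(1000\) 个样本、batch\_size \(= 32\),则每个 epoch 约包含 \(\lceil 1000/32 \rceil = 32\) 次迭代(最后一个 batch 只有 \(8\) 个样本),\(10\) 个 epoch 共进行约 \(320\) 次参数更新。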

  1. from keras.models import Sequential
  2. from keras.layers import Dense, Activation
  3. # 模型搭建阶段
  4. model= Sequential() # 代表类的初始化
  5. # Dense(32) is a fully-connected layer with 32 hidden units.
  6. model.add(Dense(32, activation='relu', input_dim= 100))
  7. model.add(Dense(1, activation='sigmoid'))
  8. # For custom metrics
  9. import keras.backend as K
  10. def mean_pred(y_true, y_pred):
  11. return K.mean(y_pred)
  12. model.compile(optimizer='rmsprop',
  13. loss='binary_crossentropy',
  14. metrics=['accuracy', mean_pred])
  15. # Generate dummy data
  16. import numpy as np
  17. data = np.random.random((1000, 100))
  18. labels = np.random.randint(2, size=(1000, 1))
  19. # Train the model, iterating on the data in batches of 32 samples
  20. model.fit(data, labels, epochs =10, batch_size=32)
  1. Using TensorFlow backend.
  2. Epoch 1/10
  3. 1000/1000 [==============================] - 3s - loss: 0.7218 - acc: 0.4780 - mean_pred: 0.5181
  4. Epoch 2/10
  5. 1000/1000 [==============================] - 0s - loss: 0.7083 - acc: 0.4990 - mean_pred: 0.5042
  6. Epoch 3/10
  7. 1000/1000 [==============================] - 0s - loss: 0.7053 - acc: 0.4850 - mean_pred: 0.5174
  8. Epoch 4/10
  9. 1000/1000 [==============================] - 0s - loss: 0.6978 - acc: 0.5400 - mean_pred: 0.5074
  10. Epoch 5/10
  11. 1000/1000 [==============================] - 0s - loss: 0.6938 - acc: 0.5250 - mean_pred: 0.5088
  12. Epoch 6/10
  13. 1000/1000 [==============================] - 0s - loss: 0.6887 - acc: 0.5290 - mean_pred: 0.5196
  14. Epoch 7/10
  15. 1000/1000 [==============================] - 0s - loss: 0.6847 - acc: 0.5570 - mean_pred: 0.5052
  16. Epoch 8/10
  17. 1000/1000 [==============================] - 0s - loss: 0.6797 - acc: 0.5530 - mean_pred: 0.5134
  18. Epoch 9/10
  19. 1000/1000 [==============================] - 0s - loss: 0.6749 - acc: 0.5790 - mean_pred: 0.5126
  20. Epoch 10/10
  21. 1000/1000 [==============================] - 0s - loss: 0.6728 - acc: 0.5920 - mean_pred: 0.5118
  22. <keras.callbacks.History at 0x1eafe9b9240>

1.1.5 evaluate 模型评估

evaluate(self, x, y, batch_size=32, verbose=1, sample_weight=None)

本函数按 batch 计算在某些输入数据上模型的误差,其参数有:

  • x:输入数据,与 fit 一样,是 numpy array 或 numpy array 的 list

  • y:标签,numpy array

  • batch_size:整数,含义同 fit 的同名参数

  • verbose:含义同fit的同名参数,但只能取0或1

  • sample_weight:numpy array,含义同 fit 的同名参数

本函数返回一个测试误差的标量值(如果模型没有其他评价指标),或一个标量的 list(如果模型还有其他的评价指标)。model.metrics_names将给出 list 中各个值的含义。

  1. model.evaluate(data, labels, batch_size=32)
  1. 512/1000 [==============>...............] - ETA: 0s
  2. [0.62733754062652591, 0.68200000000000005, 0.54467054557800298]
  1. model.metrics_names
  1. ['loss', 'acc', 'mean_pred']

1.1.6 predict 模型预测

  1. predict(self, x, batch_size=32, verbose=0)
  2. predict_classes(self, x, batch_size=32, verbose=1)
  3. predict_proba(self, x, batch_size=32, verbose=1)
  • predict:本函数按 batch 获得输入数据对应的输出,返回值是预测值的 numpy array,其参数见上面的函数签名;
  • predict_classes:本函数按batch产生输入数据的类别预测结果;
  • predict_proba:本函数按 batch 产生输入数据属于各个类别的概率
  1. model.predict_proba?
  1. model.predict(data[:5])
  1. array([[ 0.39388809],
  2. [ 0.39062682],
  3. [ 0.59655035],
  4. [ 0.53066045],
  5. [ 0.56720185]], dtype=float32)
  1. model.predict_classes(data[:5])
  1. 5/5 [==============================] - 0s
  2. array([[0],
  3. [0],
  4. [1],
  5. [1],
  6. [1]])
  1. model.predict_proba(data[:5])
  1. 5/5 [==============================] - 0s
  2. array([[ 0.39388809],
  3. [ 0.39062682],
  4. [ 0.59655035],
  5. [ 0.53066045],
  6. [ 0.56720185]], dtype=float32)

1.1.7 on_batch 的结果,模型检查

  • train_on_batch:本函数在一个 batch 的数据上进行一次参数更新,函数返回训练误差的标量值或标量值的 list,与 evaluate 的情形相同。
  • test_on_batch:本函数在一个 batch 的样本上对模型进行评估,函数的返回与 evaluate 的情形相同
  • predict_on_batch:本函数在一个 batch 的样本上对模型进行测试,函数返回模型在一个 batch 上的预测结果
  1. model.train_on_batch(data, labels)
  1. [0.62733746, 0.68199992, 0.54467058]
  1. model.train_on_batch(data, labels)
  1. [0.62483531, 0.68799996, 0.52803379]

1.1.8 fit_generator

  • 利用 Python 的生成器,逐个生成数据的 batch 并进行训练。
  • 生成器与模型将并行执行以提高效率。
  • 例如,该函数允许我们在 CPU 上进行实时的数据提升,同时在 GPU 上进行模型训练

    参考链接:http://keras-cn.readthedocs.io/en/latest/models/sequential/

有了该函数,图像分类训练任务变得很简单。
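
下面是一个用 ImageDataGenerator 做实时数据提升并配合 fit_generator 训练的最小示意(目录 'data/train' 为假设路径,需按 Keras 要求的“每个类别一个子文件夹”的方式组织图片,model 为已编译好、输入尺寸匹配的分类模型):

  from keras.preprocessing.image import ImageDataGenerator

  datagen = ImageDataGenerator(rescale=1./255, rotation_range=20, horizontal_flip=True)
  train_generator = datagen.flow_from_directory('data/train',          # 假设的图片根目录
                                                target_size=(100, 100),
                                                batch_size=32,
                                                class_mode='categorical')
  model.fit_generator(train_generator, steps_per_epoch=100, epochs=10)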

  1. model.fit_generator(generator, steps_per_epoch, epochs=1, verbose=1, callbacks=None, validation_data=None, validation_steps=None, class_weight=None, max_queue_size=10, workers=1, use_multiprocessing=False, initial_epoch=0)

函数的参数是:

  • generator:生成器函数,生成器的输出应该为:

    • 一个形如 (inputs,targets) 的tuple
    • 一个形如 (inputs, targets,sample_weight) 的 tuple。

      所有的返回值都应该包含相同数目的样本。生成器将在数据集上无限循环。当生成器被调用 steps_per_epoch 次(即一个 epoch 的 batch 数)后,记一个 epoch 结束。
  • steps_per_epoch:整数,当生成器返回 steps_per_epoch 次数据时计一个 epoch 结束,执行下一个 epoch
  • epochs:整数,数据迭代的轮数
  • verbose:日志显示,0 为不在标准输出流输出日志信息,1 为输出进度条记录,2 为每个 epoch 输出一行记录
  • validation_data:具有以下三种形式之一
    • 生成验证集的生成器
    • 一个形如 (inputs,targets) 的tuple
    • 一个形如 (inputs,targets,sample_weights) 的tuple
  • validation_steps: 当 validation_data 为生成器时,本参数指定验证集的生成器返回次数
  • class_weight:规定类别权重的字典,将类别映射为权重,常用于处理样本不均衡问题。
  • sample_weight:权值的 numpy array,用于在训练时调整损失函数(仅用于训练)。可以传递一个1D的与样本等长的向量用于对样本进行 \(1\) 对\(1\) 的加权,或者在面对时序数据时,传递一个的形式为 (samples,sequence_length) 的矩阵来为每个时间步上的样本赋不同的权。这种情况下请确定在编译模型时添加了sample_weight_mode='temporal'
  • workers:最大进程数
  • max_queue_size(旧版参数名为 max_q_size):生成器队列的最大容量
  • use_multiprocessing(旧版参数名为 pickle_safe):若为真,则使用基于进程的多进程。由于该实现依赖多进程,不能传递 non picklable(无法被 pickle 序列化)的参数到生成器中,因为无法轻易将它们传入子进程中。
  • initial_epoch: 从该参数指定的 epoch 开始训练,在继续之前的训练时有用。

    函数返回一个 History 对象。

例子

  1. def generate_arrays_from_file(path):
  2. while 1:
  3. f = open(path)
  4. for line in f:
  5. # create Numpy arrays of input data
  6. # and labels, from each line in the file
  7. x, y = process_line(line)
  8. yield (x, y)
  9. f.close()
  10. model.fit_generator(generate_arrays_from_file('/my_file.txt'), steps_per_epoch= 1000, epochs=10)

1.1.9 其他两个辅助方法:

  • evaluate_generator:本函数使用一个生成器作为数据源评估模型,生成器应返回与 test_on_batch 的输入数据相同类型的数据。该函数的参数与 fit_generator 同名参数含义相同,steps 是生成器要返回数据的轮数。
  • predict_generator:本函数使用一个生成器作为数据源进行预测,生成器应返回与 predict_on_batch 的输入数据相同类型的数据。该函数的参数与 fit_generator 同名参数含义相同,steps 是生成器要返回数据的轮数(用法示意见下)。
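
一个简单示意(val_generator、test_generator 为假设的生成器名称):

  scores = model.evaluate_generator(val_generator, steps=50)   # 生成器按 (inputs, targets) 产出 batch
  preds = model.predict_generator(test_generator, steps=50)    # 生成器只需产出 inputs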

案例二:多分类-VGG的卷积神经网络

注意:keras.utils.to_categorical 的用法:

类似于 One-Hot 编码:

  1. keras.utils.to_categorical(y, num_classes=None)
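
一个简单的例子,输出为 one-hot 形式的浮点矩阵:

  import numpy as np
  from keras.utils import to_categorical

  y = np.array([0, 2, 1, 2])
  to_categorical(y, num_classes=3)
  # array([[ 1.,  0.,  0.],
  #        [ 0.,  0.,  1.],
  #        [ 0.,  1.,  0.],
  #        [ 0.,  0.,  1.]])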
  1. # -*- coding:utf-8 -*-
  2. import numpy as np
  3. import keras
  4. from keras.models import Sequential
  5. from keras.layers import Dense, Dropout, Flatten
  6. from keras.layers import Conv2D, MaxPooling2D
  7. from keras.optimizers import SGD
  8. from keras.utils import np_utils
  9. # Generate dummy data
  10. x_train = np.random.random((100, 100, 100, 3))
  11. # 100张图片,每张 100*100*3
  12. y_train = keras.utils.to_categorical(np.random.randint(10, size=(100, 1)), num_classes=10)
  13. # 100*10
  14. x_test = np.random.random((20, 100, 100, 3))
  15. y_test = keras.utils.to_categorical(np.random.randint(10, size=(20, 1)), num_classes=10)
  16. # 20*10
  17. model = Sequential()#最简单的线性、从头到尾的结构顺序,不分叉
  18. # input: 100x100 images with 3 channels -> (100, 100, 3) tensors.
  19. # this applies 32 convolution filters of size 3x3 each.
  20. model.add(Conv2D(32, (3, 3), activation='relu', input_shape=(100, 100, 3)))
  21. model.add(Conv2D(32, (3, 3), activation='relu'))
  22. model.add(MaxPooling2D(pool_size=(2, 2)))
  23. model.add(Dropout(0.25))
  24. model.add(Conv2D(64, (3, 3), activation='relu'))
  25. model.add(Conv2D(64, (3, 3), activation='relu'))
  26. model.add(MaxPooling2D(pool_size=(2, 2)))
  27. model.add(Dropout(0.25))
  28. model.add(Flatten())
  29. model.add(Dense(256, activation='relu'))
  30. model.add(Dropout(0.5))
  31. model.add(Dense(10, activation='softmax'))
  32. sgd = SGD(lr=0.01, decay=1e-6, momentum=0.9, nesterov=True)
  33. model.compile(loss='categorical_crossentropy', optimizer=sgd)
  34. model.fit(x_train, y_train, batch_size=32, epochs=10)
  35. score = model.evaluate(x_test, y_test, batch_size=32)
  36. score
  1. Epoch 1/10
  2. 100/100 [==============================] - 1s - loss: 2.3800
  3. Epoch 2/10
  4. 100/100 [==============================] - 0s - loss: 2.3484
  5. Epoch 3/10
  6. 100/100 [==============================] - 0s - loss: 2.3034
  7. Epoch 4/10
  8. 100/100 [==============================] - 0s - loss: 2.2938
  9. Epoch 5/10
  10. 100/100 [==============================] - 0s - loss: 2.2874
  11. Epoch 6/10
  12. 100/100 [==============================] - 0s - loss: 2.2873
  13. Epoch 7/10
  14. 100/100 [==============================] - 0s - loss: 2.3132 - ETA: 0s - loss: 2.31
  15. Epoch 8/10
  16. 100/100 [==============================] - 0s - loss: 2.2866
  17. Epoch 9/10
  18. 100/100 [==============================] - 0s - loss: 2.2814
  19. Epoch 10/10
  20. 100/100 [==============================] - 0s - loss: 2.2856
  21. 20/20 [==============================] - 0s
  22. 2.2700035572052002

使用LSTM的序列分类

本例采用 stateful LSTM 构建序列分类模型。

stateful LSTM的特点是,在处理过一个batch的训练数据后,其内部状态(记忆)会被作为下一个batch的训练数据的初始状态。状态LSTM使得我们可以在合理的计算复杂度内处理较长序列
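
补充:若希望在处理完一段完整序列(或一个 epoch)之后清空 stateful 层的内部状态,可以显式调用 reset_states,示意如下:

  model.reset_states()               # 重置模型中所有 stateful 层的状态
  # model.layers[0].reset_states()   # 也可以只重置某一层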

  1. from keras.models import Sequential
  2. from keras.layers import LSTM, Dense
  3. import numpy as np
  4. data_dim = 16
  5. timesteps = 8
  6. num_classes = 10
  7. batch_size = 32
  8. # Expected input batch shape: (batch_size, timesteps, data_dim)
  9. # Note that we have to provide the full batch_input_shape since the network is stateful.
  10. # the sample of index i in batch k is the follow-up for the sample i in batch k-1.
  11. model = Sequential()
  12. model.add(LSTM(32, return_sequences=True, stateful=True,
  13. batch_input_shape=(batch_size, timesteps, data_dim)))
  14. model.add(LSTM(32, return_sequences=True, stateful=True))
  15. model.add(LSTM(32, stateful=True))
  16. model.add(Dense(10, activation='softmax'))
  17. model.compile(loss='categorical_crossentropy',
  18. optimizer='rmsprop',
  19. metrics=['accuracy'])
  20. # Generate dummy training data
  21. x_train = np.random.random((batch_size * 10, timesteps, data_dim))
  22. y_train = np.random.random((batch_size * 10, num_classes))
  23. # Generate dummy validation data
  24. x_val = np.random.random((batch_size * 3, timesteps, data_dim))
  25. y_val = np.random.random((batch_size * 3, num_classes))
  26. model.fit(x_train, y_train,
  27. batch_size=batch_size, epochs=5, shuffle=False,
  28. validation_data=(x_val, y_val))
  1. Train on 320 samples, validate on 96 samples
  2. Epoch 1/5
  3. 320/320 [==============================] - 2s - loss: 11.4843 - acc: 0.1062 - val_loss: 11.2222 - val_acc: 0.1042
  4. Epoch 2/5
  5. 320/320 [==============================] - 0s - loss: 11.4815 - acc: 0.1031 - val_loss: 11.2207 - val_acc: 0.1250
  6. Epoch 3/5
  7. 320/320 [==============================] - 0s - loss: 11.4799 - acc: 0.0844 - val_loss: 11.2202 - val_acc: 0.1562
  8. Epoch 4/5
  9. 320/320 [==============================] - 0s - loss: 11.4790 - acc: 0.1000 - val_loss: 11.2198 - val_acc: 0.1562
  10. Epoch 5/5
  11. 320/320 [==============================] - 0s - loss: 11.4780 - acc: 0.1094 - val_loss: 11.2194 - val_acc: 0.1250
  12. <keras.callbacks.History at 0x1ab0e78ff28>

Keras FAQ:

常见问题: http://keras-cn.readthedocs.io/en/latest/for_beginners/FAQ/


1.2 Model(通用模型)(或者称为函数式(Functional)模型)

函数式模型称作 Functional,但它的类名是 Model,因此我们有时候也用 Model 来代表函数式模型。

Keras函数式模型接口是用户定义多输出模型、非循环有向模型或具有共享层的模型等复杂模型的途径。函数式模型是最广泛的一类模型,序贯模型(Sequential)只是它的一种特殊情况。更多关于序列模型的资料参考: 序贯模型API

通用模型可以用来设计非常复杂、任意拓扑结构的神经网络。类似于序列模型,通用模型采用函数化的应用接口来定义模型。

在定义的时候,从输入的多维矩阵开始,然后定义各层及其要素,最后定义输出层。将输入层与输出层作为参数纳入通用模型中就可以定义一个模型对象,并进行编译和拟合。

函数式模型基本属性与训练流程:

  1. model.layers,添加层信息;
  2. model.compile,模型训练的BP模式设置;
  3. model.fit,模型训练参数设置 + 训练;
  4. evaluate,模型评估;
  5. predict 模型预测

1.2.1 常用Model属性

  • model.layers:组成模型图的各个层
  • model.inputs:模型的输入张量列表
  • model.outputs:模型的输出张量列表

1.2.2 compile 训练模式设置

compile(self, optimizer, loss, metrics=None, loss_weights=None, sample_weight_mode=None)

本函数编译模型以供训练,参数有

  • optimizer:优化器,为预定义优化器名或优化器对象
  • loss:损失函数,为预定义损失函数名或一个目标函数
  • metrics:列表,包含评估模型在训练和测试时性能的指标,典型用法是 metrics=['accuracy']。如果要在多输出模型中为不同的输出指定不同的指标,可向该参数传递一个字典,例如 metrics={'output_a': 'accuracy'}
  • sample_weight_mode:如果你需要按时间步为样本赋权( 2D 权矩阵),将该值设为 “temporal”。默认为 “None”,代表按样本赋权(1D权)。

    如果模型有多个输出,可以向该参数传入指定 sample_weight_mode 的字典或列表。在下面 fit 函数的解释中有相关的参考内容。

【Tips】如果你只是载入模型并利用其 predict,可以不用进行 compile。在Keras中,compile 主要完成损失函数和优化器的一些配置,是为训练服务的。predict 会在内部进行符号函数的编译工作(通过调用_make_predict_function 生成函数)

1.2.3 fit 模型训练参数设置 + 训练

fit(self, x=None, y=None, batch_size=32, epochs=1, verbose=1, callbacks=None, validation_split=0.0, validation_data=None, shuffle=True, class_weight=None, sample_weight=None, initial_epoch=0)

与序列模型类似

1.2.4 evaluate,模型评估

evaluate(self, x, y, batch_size=32, verbose=1, sample_weight=None)

与序列模型类似

1.2.5 predict 模型预测

predict(self, x, batch_size=32, verbose=0)

与序列模型类似

1.2.6 模型检查

  • train_on_batch:本函数在一个 batch 的数据上进行一次参数更新,函数返回训练误差的标量值或标量值的 list,与 evaluate 的情形相同。
  • test_on_batch:本函数在一个 batch 的样本上对模型进行评估,函数的返回与 evaluate 的情形相同
  • predict_on_batch:本函数在一个 batch 的样本上对模型进行测试,函数返回模型在一个 batch 上的预测结果

与序列模型类似

1.2.7 fit_generator

fit_generator(self, generator, steps_per_epoch, epochs=1, verbose=1, callbacks=None, validation_data=None, validation_steps=None, class_weight=None, max_q_size=10, workers=1, pickle_safe=False, initial_epoch=0)

evaluate_generator(self, generator, steps, max_q_size=10, workers=1, pickle_safe=False)

案例三:全连接网络

在开始前,有几个概念需要澄清:

  • 层对象接受张量为参数,返回一个张量。
  • 输入是张量,输出也是张量的一个框架就是一个模型,通过 Model 定义。
  • 这样的模型可以像 Keras 的 Sequential 模型一样被训练
  1. import keras
  2. from keras.layers import Input, Dense
  3. from keras.models import Model
  4. # 层实例接受张量为参数,返回一个张量
  5. inputs = Input(shape=(100,))
  6. # a layer instance is callable on a tensor, and returns a tensor
  7. # 输入inputs,输出x
  8. # (inputs)代表输入
  9. x = Dense(64, activation='relu')(inputs)
  10. # 输入x,输出x
  11. x = Dense(64, activation='relu')(x)
  12. predictions = Dense(100, activation='softmax')(x)
  13. # 输入x,输出分类
  14. # This creates a model that includes
  15. # the Input layer and three Dense layers
  16. model = Model(inputs=inputs, outputs=predictions)
  17. model.compile(optimizer='rmsprop',
  18. loss='categorical_crossentropy',
  19. metrics=['accuracy'])
  20. # Generate dummy data
  21. import numpy as np
  22. data = np.random.random((1000, 100))
  23. labels = keras.utils.to_categorical(np.random.randint(2, size=(1000, 1)), num_classes=100)
  24. # Train the model
  25. model.fit(data, labels, batch_size=64, epochs=10) # starts training
  1. Epoch 1/10
  2. 1000/1000 [==============================] - 0s - loss: 2.2130 - acc: 0.4650
  3. Epoch 2/10
  4. 1000/1000 [==============================] - 0s - loss: 0.7474 - acc: 0.4980
  5. Epoch 3/10
  6. 1000/1000 [==============================] - 0s - loss: 0.7158 - acc: 0.5050
  7. Epoch 4/10
  8. 1000/1000 [==============================] - 0s - loss: 0.7039 - acc: 0.5260
  9. Epoch 5/10
  10. 1000/1000 [==============================] - 0s - loss: 0.7060 - acc: 0.5280
  11. Epoch 6/10
  12. 1000/1000 [==============================] - 0s - loss: 0.6979 - acc: 0.5270
  13. Epoch 7/10
  14. 1000/1000 [==============================] - 0s - loss: 0.6854 - acc: 0.5570
  15. Epoch 8/10
  16. 1000/1000 [==============================] - 0s - loss: 0.6920 - acc: 0.5300
  17. Epoch 9/10
  18. 1000/1000 [==============================] - 0s - loss: 0.6862 - acc: 0.5620
  19. Epoch 10/10
  20. 1000/1000 [==============================] - 0s - loss: 0.6766 - acc: 0.5750
  21. <keras.callbacks.History at 0x1ec3dd2d5c0>
  1. inputs
  1. <tf.Tensor 'input_4:0' shape=(?, 100) dtype=float32>

可以看到,结构与序贯模型完全不一样。其中 x = Dense(64, activation='relu')(inputs) 中,(inputs) 代表输入张量,x 代表该层的输出张量。

model = Model(inputs=inputs, outputs=predictions) 这一句是函数式模型的精髓:inputs 与 outputs 都可以是张量列表,因此一个模型可以同时接受多个输入并产生多个输出。

下面涉及时间序列(TimeDistributed)的模型,笔者理解得还不透彻,仅作记录。

案例四:视频处理

现在用来做迁移学习;

  • 还可以通过 TimeDistributed 来进行实时预测;

  • input_sequences 代表序列输入;model 代表已训练的模型

  1. x = Input(shape=(100,))
  2. # This works, and returns the 10-way softmax we defined above.
  3. y = model(x)
  4. # model里面存着权重,然后输入 x,输出结果,用来作 fine-tuning
  5. # 分类 -> 视频、实时处理
  6. from keras.layers import TimeDistributed
  7. # Input tensor for sequences of 20 timesteps,
  8. # each containing a 100-dimensional vector
  9. input_sequences = Input(shape=(20, 100))
  10. # 20个时间间隔,输入 100 维度的数据
  11. # This applies our previous model to every timestep in the input sequences.
  12. # the output of the previous model was a 10-way softmax,
  13. # so the output of the layer below will be a sequence of 20 vectors of size 10.
  14. processed_sequences = TimeDistributed(model)(input_sequences) # Model是已经训练好的
  1. processed_sequences
  1. <tf.Tensor 'time_distributed_1/Reshape_1:0' shape=(?, 20, 100) dtype=float32>

案例五:双输入、双模型输出:LSTM 时序预测

本案例很好,可以了解到 Model 的精髓在于他的任意性,给编译者很多的便利。

  • 输入:

    • 新闻语料;新闻语料对应的时间
  • 输出:
    • 新闻语料的预测模型;新闻语料+对应时间的预测模型
模型一:只针对新闻语料的 LSTM 模型
  1. from keras.layers import Input, Embedding, LSTM, Dense
  2. from keras.models import Model
  3. # Headline input: meant to receive sequences of 100 integers, between 1 and 10000.
  4. # Note that we can name any layer by passing it a "name" argument.
  5. main_input = Input(shape=(100,), dtype='int32', name='main_input')
  6. # 一个100词的 BOW 序列
  7. # This embedding layer will encode the input sequence
  8. # into a sequence of dense 512-dimensional vectors.
  9. x = Embedding(output_dim=512, input_dim=10000, input_length=100)(main_input)
  10. # Embedding 层,把 100 维度再 encode 成 512 的句向量,10000 指的是词典单词总数
  11. # A LSTM will transform the vector sequence into a single vector,
  12. # containing information about the entire sequence
  13. lstm_out = LSTM(32)(x)
  14. # 32 指 LSTM 层输出向量的维度(隐藏单元数)
  15. #然后,我们插入一个额外的损失,使得即使在主损失很高的情况下,LSTM 和 Embedding 层也可以平滑的训练。
  16. auxiliary_output = Dense(1, activation='sigmoid', name='aux_output')(lstm_out)
  17. #再然后,我们将LSTM与额外的输入数据串联起来组成输入,送入模型中:
  18. # 模型一:只针对以上的序列做的预测模型
组合模型:新闻语料+时序
  1. # 模型二:组合模型
  2. auxiliary_input = Input(shape=(5,), name='aux_input') # 新加入的一个Input,5维度
  3. x = keras.layers.concatenate([lstm_out, auxiliary_input]) # 组合起来,对应起来
  4. # We stack a deep densely-connected network on top
  5. # 组合模型的形式
  6. x = Dense(64, activation='relu')(x)
  7. x = Dense(64, activation='relu')(x)
  8. x = Dense(64, activation='relu')(x)
  9. # And finally we add the main logistic regression layer
  10. main_output = Dense(1, activation='sigmoid', name='main_output')(x)
  11. #最后,我们定义整个2输入,2输出的模型:
  12. model = Model(inputs=[main_input, auxiliary_input], outputs=[main_output, auxiliary_output])
  13. #模型定义完毕,下一步编译模型。
  14. #我们给额外的损失赋0.2的权重。我们可以通过关键字参数loss_weights或loss来为不同的输出设置不同的损失函数或权值。
  15. #这两个参数均可为Python的列表或字典。这里我们给loss传递单个损失函数,这个损失函数会被应用于所有输出上。

其中:Model(inputs=[main_input, auxiliary_input], outputs=[main_output, auxiliary_output]) 是核心,

Input 两个内容,outputs 两个模型:

  1. # 训练方式一:两个模型一个loss
  2. model.compile(optimizer='rmsprop', loss='binary_crossentropy',
  3. loss_weights=[1., 0.2])
  4. #编译完成后,我们通过传递训练数据和目标值训练该模型:
  5. model.fit([headline_data, additional_data], [labels, labels],
  6. epochs=50, batch_size=32)
  7. # 训练方式二:两个模型,两个Loss
  8. #因为我们输入和输出是被命名过的(在定义时传递了“name”参数),我们也可以用下面的方式编译和训练模型:
  9. model.compile(optimizer='rmsprop',
  10. loss={'main_output': 'binary_crossentropy', 'aux_output': 'binary_crossentropy'},
  11. loss_weights={'main_output': 1., 'aux_output': 0.2})
  12. # And trained it via:
  13. model.fit({'main_input': headline_data, 'aux_input': additional_data},
  14. {'main_output': labels, 'aux_output': labels},
  15. epochs=50, batch_size=32)

因为输入两个,输出两个模型,所以可以分为设置不同的模型训练参数

案例六:共享层:对应关系、相似性

一个节点,分成两个分支出去

  1. import keras
  2. from keras.layers import Input, LSTM, Dense
  3. from keras.models import Model
  4. tweet_a = Input(shape=(140, 256))
  5. tweet_b = Input(shape=(140, 256))
  6. #若要对不同的输入共享同一层,就初始化该层一次,然后多次调用它
  7. # 140个单词,每个单词256维度,词向量
  8. #
  9. # This layer can take as input a matrix
  10. # and will return a vector of size 64
  11. shared_lstm = LSTM(64)
  12. # 返回一个64规模的向量
  13. # When we reuse the same layer instance
  14. # multiple times, the weights of the layer
  15. # are also being reused
  16. # (it is effectively *the same* layer)
  17. encoded_a = shared_lstm(tweet_a)
  18. encoded_b = shared_lstm(tweet_b)
  19. # We can then concatenate the two vectors:
  20. # 连接两个结果
  21. # axis=-1 表示沿最后一个维度拼接
  22. merged_vector = keras.layers.concatenate([encoded_a, encoded_b], axis=-1)
  23. # And add a logistic regression on top
  24. predictions = Dense(1, activation='sigmoid')(merged_vector)
  25. # Dense(1) 中的 1 表示输出维度为 1,配合 sigmoid 做二分类
  26. # We define a trainable model linking the
  27. # tweet inputs to the predictions
  28. model = Model(inputs=[tweet_a, tweet_b], outputs=predictions)
  29. model.compile(optimizer='rmsprop',
  30. loss='binary_crossentropy',
  31. metrics=['accuracy'])
  32. model.fit([data_a, data_b], labels, epochs=10)
  33. # 训练模型,然后预测

案例七:抽取层节点内容

  1. # 1、单节点
  2. a = Input(shape=(140, 256))
  3. lstm = LSTM(32)
  4. encoded_a = lstm(a)
  5. assert lstm.output == encoded_a
  6. # 抽取获得encoded_a的输出张量
  7. # 2、多节点
  8. a = Input(shape=(140, 256))
  9. b = Input(shape=(140, 256))
  10. lstm = LSTM(32)
  11. encoded_a = lstm(a)
  12. encoded_b = lstm(b)
  13. assert lstm.get_output_at(0) == encoded_a
  14. assert lstm.get_output_at(1) == encoded_b
  15. # 3、图像层节点
  16. # 对于input_shape和output_shape也是一样,如果一个层只有一个节点,
  17. #或所有的节点都有相同的输入或输出shape,
  18. #那么input_shape和output_shape都是没有歧义的,并也只返回一个值。
  19. #但是,例如你把一个相同的Conv2D应用于一个大小为(3,32,32)的数据,
  20. #然后又将其应用于一个(3,64,64)的数据,那么此时该层就具有了多个输入和输出的shape,
  21. #你就需要显式的指定节点的下标,来表明你想取的是哪个了
  22. a = Input(shape=(3, 32, 32))
  23. b = Input(shape=(3, 64, 64))
  24. conv = Conv2D(16, (3, 3), padding='same')
  25. conved_a = conv(a)
  26. # Only one input so far, the following will work:
  27. assert conv.input_shape == (None, 3, 32, 32)
  28. conved_b = conv(b)
  29. # now the `.input_shape` property wouldn't work, but this does:
  30. assert conv.get_input_shape_at(0) == (None, 3, 32, 32)
  31. assert conv.get_input_shape_at(1) == (None, 3, 64, 64)

案例八:视觉问答模型

  1. #这个模型将自然语言的问题和图片分别映射为特征向量,
  2. #将二者合并后训练一个logistic回归层,从一系列可能的回答中挑选一个。
  3. from keras.layers import Conv2D, MaxPooling2D, Flatten
  4. from keras.layers import Input, LSTM, Embedding, Dense
  5. from keras.models import Model, Sequential
  6. # First, let's define a vision model using a Sequential model.
  7. # This model will encode an image into a vector.
  8. vision_model = Sequential()
  9. vision_model.add(Conv2D(64, (3, 3), activation='relu', padding='same', input_shape=(3, 224, 224)))
  10. vision_model.add(Conv2D(64, (3, 3), activation='relu'))
  11. vision_model.add(MaxPooling2D((2, 2)))
  12. vision_model.add(Conv2D(128, (3, 3), activation='relu', padding='same'))
  13. vision_model.add(Conv2D(128, (3, 3), activation='relu'))
  14. vision_model.add(MaxPooling2D((2, 2)))
  15. vision_model.add(Conv2D(256, (3, 3), activation='relu', padding='same'))
  16. vision_model.add(Conv2D(256, (3, 3), activation='relu'))
  17. vision_model.add(Conv2D(256, (3, 3), activation='relu'))
  18. vision_model.add(MaxPooling2D((2, 2)))
  19. vision_model.add(Flatten())
  20. # Now let's get a tensor with the output of our vision model:
  21. image_input = Input(shape=(3, 224, 224))
  22. encoded_image = vision_model(image_input)
  23. # Next, let's define a language model to encode the question into a vector.
  24. # Each question will be at most 100 word long,
  25. # and we will index words as integers from 1 to 9999.
  26. question_input = Input(shape=(100,), dtype='int32')
  27. embedded_question = Embedding(input_dim=10000, output_dim=256, input_length=100)(question_input)
  28. encoded_question = LSTM(256)(embedded_question)
  29. # Let's concatenate the question vector and the image vector:
  30. merged = keras.layers.concatenate([encoded_question, encoded_image])
  31. # And let's train a logistic regression over 1000 words on top:
  32. output = Dense(1000, activation='softmax')(merged)
  33. # This is our final model:
  34. vqa_model = Model(inputs=[image_input, question_input], outputs=output)
  35. # The next stage would be training this model on actual data.

延伸一:fine-tuning 时如何加载 No_top 的权重

如果你需要加载权重到不同的网络结构(有些层一样)中,例如 fine-tune 或 transfer-learning,你可以通过层名字来加载模型:

model.load_weights('my_model_weights.h5', by_name=True)

例如:

假如原模型为:

  1. model = Sequential()
  2. model.add(Dense(2, input_dim=3, name="dense_1"))
  3. model.add(Dense(3, name="dense_2"))
  4. ...
  5. model.save_weights(fname)

新模型为:

  1. model = Sequential()
  2. model.add(Dense(2, input_dim=3, name="dense_1")) # will be loaded
  3. model.add(Dense(10, name="new_dense")) # will not be loaded
  4. # load weights from first model; will only affect the first layer, dense_1.
  5. model.load_weights(fname, by_name=True)

2 学习资料



3 keras 学习小结

引自:http://blog.csdn.net/sinat_26917383/article/details/72857454

3.1 keras网络结构

3.2 keras网络配置



其中回调函数 callbacks 是 keras 网络配置的重要组成部分,可以在训练过程中的适当时机被调用(详见 3.6.2 节与下面的示意)。
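
常用回调的一个最小示意(EarlyStopping 与 ModelCheckpoint 搭配使用;保存路径为假设,model、x_train、y_train 为已准备好的模型和数据):

  from keras.callbacks import EarlyStopping, ModelCheckpoint

  callbacks = [EarlyStopping(monitor='val_loss', patience=3),
               ModelCheckpoint('E:/Graphs/Models/best.h5', monitor='val_loss', save_best_only=True)]
  model.fit(x_train, y_train, validation_split=0.2, epochs=50, callbacks=callbacks)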

3.3 keras预处理功能

3.4 模型的节点信息提取

对于序列模型

  1. %%time
  2. import keras
  3. from keras.models import Sequential
  4. from keras.layers import Dense
  5. import numpy as np
  6. # 实现 Lenet
  7. import keras
  8. from keras.datasets import mnist
  9. (x_train, y_train), (x_test,y_test) = mnist.load_data()
  10. x_train=x_train.reshape(-1, 28,28,1)
  11. x_test=x_test.reshape(-1, 28,28,1)
  12. x_train=x_train/255.
  13. x_test=x_test/255.
  14. y_train=keras.utils.to_categorical(y_train)
  15. y_test=keras.utils.to_categorical(y_test)
  16. from keras.layers import Conv2D, MaxPool2D, Dense, Flatten
  17. from keras.models import Sequential
  18. lenet=Sequential()
  19. lenet.add(Conv2D(6, kernel_size=3,strides=1, padding='same', input_shape=(28, 28, 1)))
  20. lenet.add(MaxPool2D(pool_size=2,strides=2))
  21. lenet.add(Conv2D(16, kernel_size=5, strides=1, padding='valid'))
  22. lenet.add(MaxPool2D(pool_size=2, strides=2))
  23. lenet.add(Flatten())
  24. lenet.add(Dense(120))
  25. lenet.add(Dense(84))
  26. lenet.add(Dense(10, activation='softmax'))
  27. lenet.compile('sgd',loss='categorical_crossentropy',metrics=['accuracy']) # 编译模型
  28. lenet.fit(x_train,y_train,batch_size=64,epochs= 20,validation_data=[x_test,y_test], verbose= 0) # 训练模型
  29. lenet.save('E:/Graphs/Models/myletnet.h5') # 保存模型
  1. Wall time: 2min 48s
  1. # 节点信息提取
  2. config = lenet.get_config() # 把 lenet 模型中的信息提取出来
  3. config[0]
  1. {'class_name': 'Conv2D',
  2. 'config': {'activation': 'linear',
  3. 'activity_regularizer': None,
  4. 'batch_input_shape': (None, 28, 28, 1),
  5. 'bias_constraint': None,
  6. 'bias_initializer': {'class_name': 'Zeros', 'config': {}},
  7. 'bias_regularizer': None,
  8. 'data_format': 'channels_last',
  9. 'dilation_rate': (1, 1),
  10. 'dtype': 'float32',
  11. 'filters': 6,
  12. 'kernel_constraint': None,
  13. 'kernel_initializer': {'class_name': 'VarianceScaling',
  14. 'config': {'distribution': 'uniform',
  15. 'mode': 'fan_avg',
  16. 'scale': 1.0,
  17. 'seed': None}},
  18. 'kernel_regularizer': None,
  19. 'kernel_size': (3, 3),
  20. 'name': 'conv2d_7',
  21. 'padding': 'same',
  22. 'strides': (1, 1),
  23. 'trainable': True,
  24. 'use_bias': True}}
  1. model = Sequential.from_config(config) # 将提取的信息传给新的模型,重构出一个结构相同的新模型,fine-tuning 时比较好用

3.5 模型概况查询、保存及载入

1、模型概括打印

  1. model.summary()
  1. _________________________________________________________________
  2. Layer (type) Output Shape Param #
  3. =================================================================
  4. conv2d_7 (Conv2D) (None, 28, 28, 6) 60
  5. _________________________________________________________________
  6. max_pooling2d_7 (MaxPooling2 (None, 14, 14, 6) 0
  7. _________________________________________________________________
  8. conv2d_8 (Conv2D) (None, 10, 10, 16) 2416
  9. _________________________________________________________________
  10. max_pooling2d_8 (MaxPooling2 (None, 5, 5, 16) 0
  11. _________________________________________________________________
  12. flatten_4 (Flatten) (None, 400) 0
  13. _________________________________________________________________
  14. dense_34 (Dense) (None, 120) 48120
  15. _________________________________________________________________
  16. dense_35 (Dense) (None, 84) 10164
  17. _________________________________________________________________
  18. dense_36 (Dense) (None, 10) 850
  19. =================================================================
  20. Total params: 61,610
  21. Trainable params: 61,610
  22. Non-trainable params: 0
  23. _________________________________________________________________

2、权重获取

  1. model.get_layer('conv2d_7' ) # 依据层名或下标获得层对象
  1. <keras.layers.convolutional.Conv2D at 0x1ed425bce10>
  1. weights = model.get_weights() #返回模型权重张量的列表,类型为 numpy array
  1. model.set_weights(weights) #从 numpy array 里将权重载入给模型,要求数组具有与 model.get_weights() 相同的形状。
  1. # 查看 model 中 Layer 的信息
  2. model.layers
  1. [<keras.layers.convolutional.Conv2D at 0x1ed425bce10>,
  2. <keras.layers.pooling.MaxPooling2D at 0x1ed4267a4a8>,
  3. <keras.layers.convolutional.Conv2D at 0x1ed4267a898>,
  4. <keras.layers.pooling.MaxPooling2D at 0x1ed4266bb00>,
  5. <keras.layers.core.Flatten at 0x1ed4267ebe0>,
  6. <keras.layers.core.Dense at 0x1ed426774a8>,
  7. <keras.layers.core.Dense at 0x1ed42684940>,
  8. <keras.layers.core.Dense at 0x1ed4268edd8>]

3.6 模型保存与加载

引用:keras如何保存模型

  • 使用 model.save(filepath) 将 Keras 模型和权重保存在一个 HDF5 文件中,该文件将包含:

    • 模型的结构(以便重构该模型)
    • 模型的权重
    • 训练配置(损失函数,优化器等)
    • 优化器的状态(以便于从上次训练中断的地方开始)
  • 使用 keras.models.load_model(filepath) 来重新实例化你的模型,如果文件中存储了训练配置的话,该函数还会同时完成模型的编译
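
保存 / 载入完整模型的最小示意(路径仅为示例):

  from keras.models import load_model

  model.save('E:/Graphs/Models/lenet_full.h5')            # 结构 + 权重 + 训练配置 + 优化器状态
  model = load_model('E:/Graphs/Models/lenet_full.h5')    # 载入;若保存了训练配置会自动完成编译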

  1. # 将模型权重保存到指定路径,文件类型是HDF5(后缀是.h5)
  2. filepath = 'E:/Graphs/Models/lenet.h5'
  3. model.save_weights(filepath)
  4. # 从 HDF5 文件中加载权重到当前模型中, 默认情况下模型的结构将保持不变。
  5. # 如果想将权重载入不同的模型(有些层相同)中,则设置 by_name=True,只有名字匹配的层才会载入权重
  6. model.load_weights(filepath, by_name=False)
  1. json_string = model.to_json() # 注意:to_json() 返回 JSON 字符串,而 model.get_config() 返回 Python 字典/列表,两者并不等价
  2. open('E:/Graphs/Models/lenet.json','w').write(json_string)
  3. model.save_weights('E:/Graphs/Models/lenet_weights.h5')
  4. from keras.models import model_from_json  # 加载模型结构和 weights
  5. model = model_from_json(open('E:/Graphs/Models/lenet.json').read())
  6. model.load_weights('E:/Graphs/Models/lenet_weights.h5')

3.6.1 只保存模型结构,而不包含其权重或配置信息

  • 保存成 json 格式的文件
  1. # save as JSON
  2. json_string = model.to_json()
  3. open('E:/Graphs/Models/my_model_architecture.json','w').write(json_string)
  4. from keras.models import model_from_json
  5. model = model_from_json(open('E:/Graphs/Models/my_model_architecture.json').read())
  • 保存成 yaml 文件
  1. # save as YAML
  2. yaml_string = model.to_yaml()
  3. open('E:/Graphs/Models/my_model_architectrue.yaml','w').write(yaml_string)
  4. from keras.models import model_from_yaml
  5. model = model_from_yaml(open('E:/Graphs/Models/my_model_architectrue.yaml').read())

这些操作将把模型序列化为json或yaml文件,这些文件对人而言也是友好的,如果需要的话你甚至可以手动打开这些文件并进行编辑。当然,你也可以从保存好的json文件或yaml文件中载入模型

3.6.2 实时保存模型结构、训练出来的权重、及优化器状态并调用

keras 的 callback 参数可以帮助我们实现在训练过程中的适当时机被调用。实现实时保存训练模型以及训练参数

  1. keras.callbacks.ModelCheckpoint(
  2. filepath,
  3. monitor='val_loss',
  4. verbose=0,
  5. save_best_only=False,
  6. save_weights_only=False,
  7. mode='auto',
  8. period=1
  9. )
  1. filepath:字符串,保存模型的路径
  2. monitor:需要监视的值
  3. verbose:信息展示模式,0 或 1
  4. save_best_only:当设置为 True 时,将只保存在验证集上性能最好的模型
  5. mode:'auto'、'min'、'max' 之一,在 save_best_only=True 时决定性能最佳模型的评判准则。例如,当监测值为 val_acc 时,模式应为 max;当监测值为 val_loss 时,模式应为 min。在 auto 模式下,评价准则由被监测值的名字自动推断。
  6. save_weights_only:若设置为 True,则只保存模型权重,否则将保存整个模型(包括模型结构、配置信息等)
  7. period:CheckPoint 之间间隔的 epoch 数

3.6.3 示例

假如原模型为:

  1. x=np.array([[0,1,0],[0,0,1],[1,3,2],[3,2,1]])
  2. y=np.array([0,0,1,1]).T
  3. model=Sequential()
  4. model.add(Dense(5,input_shape=(x.shape[1],),activation='relu', name='layer1'))
  5. model.add(Dense(4,activation='relu',name='layer2'))
  6. model.add(Dense(1,activation='sigmoid',name='layer3'))
  7. model.compile(optimizer='sgd',loss='mean_squared_error')
  8. model.fit(x,y,epochs=200, verbose= 0) # 训练
  9. model.save_weights('E:/Graphs/Models/my_weights.h5')
  10. model.predict(x[0:1]) # 预测
  1. array([[ 0.38783705]], dtype=float32)
  1. # 新模型
  2. model = Sequential()
  3. model.add(Dense(2, input_dim=3, name="layer_1")) # will be loaded
  4. model.add(Dense(10, name="new_dense")) # will not be loaded
  5. # load weights from first model; will only affect the first layer, dense_1.
  6. model.load_weights('E:/Graphs/Models/my_weights.h5', by_name=True)
  1. model.predict(x[1:2])
  1. array([[-0.27631092, -0.35040742, -0.2807056 , -0.22762418, -0.31791407,
  2. -0.0897391 , 0.02615392, -0.15040982, 0.19909057, -0.38647971]], dtype=float32)

3.7 How to Check-Point Deep Learning Models in Keras

  1. # Checkpoint the weights when validation accuracy improves
  2. from keras.models import Sequential
  3. from keras.layers import Dense
  4. from keras.callbacks import ModelCheckpoint
  5. import matplotlib.pyplot as plt
  6. import numpy as np
  7. x=np.array([[0,1,0],[0,0,1],[1,3,2],[3,2,1]])
  8. y=np.array([0,0,1,1]).T
  9. model=Sequential()
  10. model.add(Dense(5,input_shape=(x.shape[1],),activation='relu', name='layer1'))
  11. model.add(Dense(4,activation='relu',name='layer2'))
  12. model.add(Dense(1,activation='sigmoid',name='layer3'))
  13. # Compile model
  14. model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
  15. filepath="E:/Graphs/Models/weights-improvement-{epoch:02d}-{val_acc:.2f}.hdf5"
  16. checkpoint = ModelCheckpoint(filepath, monitor='val_acc', verbose=1, save_best_only=True, mode='max')
  17. callbacks_list = [checkpoint]
  18. # Fit the model
  19. model.fit(x, y, validation_split=0.33, epochs=150, batch_size=10, callbacks=callbacks_list, verbose=0)
  1. Epoch 00000: val_acc improved from -inf to 1.00000, saving model to E:/Graphs/Models/weights-improvement-00-1.00.hdf5
  2. Epoch 00001: val_acc did not improve
  3. ……(Epoch 00002 至 Epoch 00149:val_acc did not improve,重复输出从略)
  151. <keras.callbacks.History at 0x1ed46f00ac8>

3.8 Checkpoint Best Neural Network Model Only

  1. # Checkpoint the weights for best model on validation accuracy
  2. import keras
  3. from keras.layers import Input, Dense
  4. from keras.models import Model
  5. from keras.callbacks import ModelCheckpoint
  6. import matplotlib.pyplot as plt
  7. # 层实例接受张量为参数,返回一个张量
  8. inputs = Input(shape=(100,))
  9. # a layer instance is callable on a tensor, and returns a tensor
  10. # 输入inputs,输出x
  11. # (inputs)代表输入
  12. x = Dense(64, activation='relu')(inputs)
  13. x = Dense(64, activation='relu')(x)
  14. # 输入x,输出x
  15. predictions = Dense(100, activation='softmax')(x)
  16. # 输入x,输出分类
  17. # This creates a model that includes
  18. # the Input layer and three Dense layers
  19. model = Model(inputs=inputs, outputs=predictions)
  20. model.compile(optimizer='rmsprop',
  21. loss='categorical_crossentropy',
  22. metrics=['accuracy'])
  23. # Generate dummy data
  24. import numpy as np
  25. data = np.random.random((1000, 100))
  26. labels = keras.utils.to_categorical(np.random.randint(2, size=(1000, 1)), num_classes=100)
  27. # checkpoint
  28. filepath="E:/Graphs/Models/weights.best.hdf5"
  29. checkpoint = ModelCheckpoint(filepath, monitor='val_acc', verbose=1, save_best_only=True, mode='max')
  30. callbacks_list = [checkpoint]
  31. # Fit the model
  32. model.fit(data, labels, validation_split=0.33, epochs=15, batch_size=10, callbacks=callbacks_list, verbose=0)
  1. Epoch 00000: val_acc improved from -inf to 0.48036, saving model to E:/Graphs/Models/weights.best.hdf5
  2. Epoch 00001: val_acc improved from 0.48036 to 0.51360, saving model to E:/Graphs/Models/weights.best.hdf5
  3. Epoch 00002: val_acc did not improve
  4. Epoch 00003: val_acc did not improve
  5. Epoch 00004: val_acc improved from 0.51360 to 0.52568, saving model to E:/Graphs/Models/weights.best.hdf5
  6. Epoch 00005: val_acc did not improve
  7. Epoch 00006: val_acc improved from 0.52568 to 0.52568, saving model to E:/Graphs/Models/weights.best.hdf5
  8. Epoch 00007: val_acc did not improve
  9. Epoch 00008: val_acc did not improve
  10. Epoch 00009: val_acc did not improve
  11. Epoch 00010: val_acc did not improve
  12. Epoch 00011: val_acc did not improve
  13. Epoch 00012: val_acc did not improve
  14. Epoch 00013: val_acc did not improve
  15. Epoch 00014: val_acc did not improve
  16. <keras.callbacks.History at 0x1a276ec1be0>

3.9 Loading a Check-Pointed Neural Network Model

  1. # How to load and use weights from a checkpoint
  2. from keras.models import Sequential
  3. from keras.layers import Dense
  4. from keras.callbacks import ModelCheckpoint
  5. import matplotlib.pyplot as plt
  6. # create model
  7. model = Sequential()
  8. model.add(Dense(64, input_dim=100, kernel_initializer='uniform', activation='relu'))
  9. model.add(Dense(64, kernel_initializer='uniform', activation='relu'))
  10. model.add(Dense(100, kernel_initializer='uniform', activation='sigmoid'))
  11. # load weights
  12. model.load_weights("E:/Graphs/Models/weights.best.hdf5")
  13. # Compile model (required to make predictions)
  14. model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
  15. print("Created model and loaded weights from file")
  16. # Generate dummy data
  17. import numpy as np
  18. data = np.random.random((1000, 100))
  19. labels = keras.utils.to_categorical(np.random.randint(2, size=(1000, 1)), num_classes=100)
  20. # estimate accuracy on whole dataset using loaded weights
  21. scores = model.evaluate(data, labels, verbose=0)
  22. print("%s: %.2f%%" % (model.metrics_names[1], scores[1]*100))
  1. Created model and loaded weights from file
  2. acc: 99.00%

3.10 如何在 keras 中设定 GPU 使用的大小

本节来源于:深度学习theano/tensorflow多显卡多人使用问题集(参见:Limit the resource usage for tensorflow backend · Issue #1538 · fchollet/keras · GitHub)

在使用 keras 时候会出现总是占满 GPU 显存的情况,可以通过重设 backend 的 GPU 占用情况来进行调节。

  1. import tensorflow as tf
  2. from keras.backend.tensorflow_backend import set_session
  3. config = tf.ConfigProto()
  4. config.gpu_options.per_process_gpu_memory_fraction = 0.3
  5. set_session(tf.Session(config=config))

需要注意的是,虽然代码或配置层面设置了对显存占用百分比阈值,但在实际运行中如果达到了这个阈值,程序有需要的话还是会突破这个阈值。换而言之如果跑在一个大数据集上还是会用到更多的显存。以上的显存限制仅仅为了在跑小数据集时避免对显存的浪费而已。
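
另一种常见做法是让显存按需增长(allow_growth),同样只针对 TensorFlow 后端,示意如下:

  import tensorflow as tf
  from keras.backend.tensorflow_backend import set_session

  config = tf.ConfigProto()
  config.gpu_options.allow_growth = True   # 按需分配显存,而不是一次性占满
  set_session(tf.Session(config=config))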


Tips

更科学地训练与保存模型

  1. from keras.datasets import mnist
  2. from keras.models import Model
  3. from keras.layers import Dense, Activation, Flatten, Input
  4. (x_train, y_train), (x_test, y_test) = mnist.load_data()
  5. y_train = keras.utils.to_categorical(y_train, 10)
  6. y_test = keras.utils.to_categorical(y_test, 10)
  7. x_train.shape
  1. (60000, 28, 28)
  1. import keras
  2. from keras.layers import Input, Dense
  3. from keras.models import Model
  4. from keras.callbacks import ModelCheckpoint
  5. # 层实例接受张量为参数,返回一个张量
  6. inputs = Input(shape=(28, 28))
  7. x = Flatten()(inputs)
  8. x = Dense(64, activation='relu')(x)
  9. x = Dense(64, activation='relu')(x)
  10. predictions = Dense(10, activation='softmax')(x)
  11. # 输入x,输出分类
  12. # This creates a model that includes
  13. # the Input layer and three Dense layers
  14. model = Model(inputs=inputs, outputs=predictions)
  15. model.compile(optimizer='rmsprop',
  16. loss='categorical_crossentropy',
  17. metrics=['accuracy'])
  18. model.summary()
  1. _________________________________________________________________
  2. Layer (type) Output Shape Param #
  3. =================================================================
  4. input_6 (InputLayer) (None, 28, 28) 0
  5. _________________________________________________________________
  6. flatten_1 (Flatten) (None, 784) 0
  7. _________________________________________________________________
  8. dense_16 (Dense) (None, 64) 50240
  9. _________________________________________________________________
  10. dense_17 (Dense) (None, 64) 4160
  11. _________________________________________________________________
  12. dense_18 (Dense) (None, 10) 650
  13. =================================================================
  14. Total params: 55,050
  15. Trainable params: 55,050
  16. Non-trainable params: 0
  17. _________________________________________________________________
  1. filepath = 'E:/Graphs/Models/model-ep{epoch:03d}-loss{loss:.3f}-val_loss{val_loss:.3f}.h5'
  2. checkpoint = ModelCheckpoint(filepath, monitor='val_loss', verbose=1, save_best_only=True, mode='min')
  3. # fit model
  4. model.fit(x_train, y_train, epochs=20, verbose=2, batch_size=64, callbacks=[checkpoint], validation_data=(x_test, y_test))
  1. Train on 60000 samples, validate on 10000 samples
  2. Epoch 1/20
  3. Epoch 00000: val_loss improved from inf to 6.25477, saving model to E:/Graphs/Models/model-ep000-loss6.835-val_loss6.255.h5
  4. 10s - loss: 6.8349 - acc: 0.5660 - val_loss: 6.2548 - val_acc: 0.6063
  5. Epoch 2/20
  6. Epoch 00001: val_loss improved from 6.25477 to 5.75301, saving model to E:/Graphs/Models/model-ep001-loss5.981-val_loss5.753.h5
  7. 7s - loss: 5.9805 - acc: 0.6246 - val_loss: 5.7530 - val_acc: 0.6395
  8. Epoch 3/20
  9. Epoch 00002: val_loss did not improve
  10. 5s - loss: 5.8032 - acc: 0.6368 - val_loss: 5.9562 - val_acc: 0.6270
  11. Epoch 4/20
  12. Epoch 00003: val_loss improved from 5.75301 to 5.69140, saving model to E:/Graphs/Models/model-ep003-loss5.816-val_loss5.691.h5
  13. 7s - loss: 5.8163 - acc: 0.6363 - val_loss: 5.6914 - val_acc: 0.6451
  14. Epoch 5/20
  15. Epoch 00004: val_loss did not improve
  16. 6s - loss: 5.7578 - acc: 0.6404 - val_loss: 5.8904 - val_acc: 0.6317
  17. Epoch 6/20
  18. Epoch 00005: val_loss did not improve
  19. 7s - loss: 5.7435 - acc: 0.6417 - val_loss: 5.8636 - val_acc: 0.6342
  20. Epoch 7/20
  21. Epoch 00006: val_loss improved from 5.69140 to 5.68394, saving model to E:/Graphs/Models/model-ep006-loss5.674-val_loss5.684.h5
  22. 7s - loss: 5.6743 - acc: 0.6458 - val_loss: 5.6839 - val_acc: 0.6457
  23. Epoch 8/20
  24. Epoch 00007: val_loss improved from 5.68394 to 5.62847, saving model to E:/Graphs/Models/model-ep007-loss5.655-val_loss5.628.h5
  25. 6s - loss: 5.6552 - acc: 0.6472 - val_loss: 5.6285 - val_acc: 0.6488
  26. Epoch 9/20
  27. Epoch 00008: val_loss did not improve
  28. 6s - loss: 5.6277 - acc: 0.6493 - val_loss: 5.7295 - val_acc: 0.6422
  29. Epoch 10/20
  30. Epoch 00009: val_loss improved from 5.62847 to 5.55242, saving model to E:/Graphs/Models/model-ep009-loss5.577-val_loss5.552.h5
  31. 6s - loss: 5.5769 - acc: 0.6524 - val_loss: 5.5524 - val_acc: 0.6540
  32. Epoch 11/20
  33. Epoch 00010: val_loss improved from 5.55242 to 5.53212, saving model to E:/Graphs/Models/model-ep010-loss5.537-val_loss5.532.h5
  34. 6s - loss: 5.5374 - acc: 0.6550 - val_loss: 5.5321 - val_acc: 0.6560
  35. Epoch 12/20
  36. Epoch 00011: val_loss improved from 5.53212 to 5.53056, saving model to E:/Graphs/Models/model-ep011-loss5.549-val_loss5.531.h5
  37. 6s - loss: 5.5492 - acc: 0.6543 - val_loss: 5.5306 - val_acc: 0.6553
  38. Epoch 13/20
  39. Epoch 00012: val_loss improved from 5.53056 to 5.48013, saving model to E:/Graphs/Models/model-ep012-loss5.558-val_loss5.480.h5
  40. 7s - loss: 5.5579 - acc: 0.6538 - val_loss: 5.4801 - val_acc: 0.6587
  41. Epoch 14/20
  42. Epoch 00013: val_loss did not improve
  43. 6s - loss: 5.5490 - acc: 0.6547 - val_loss: 5.5233 - val_acc: 0.6561
  44. Epoch 15/20
  45. Epoch 00014: val_loss did not improve
  46. 7s - loss: 5.5563 - acc: 0.6541 - val_loss: 5.4960 - val_acc: 0.6580
  47. Epoch 16/20
  48. Epoch 00015: val_loss did not improve
  49. 6s - loss: 5.5364 - acc: 0.6554 - val_loss: 5.5200 - val_acc: 0.6567
  50. Epoch 17/20
  51. Epoch 00016: val_loss did not improve
  52. 6s - loss: 5.5081 - acc: 0.6571 - val_loss: 5.5577 - val_acc: 0.6544
  53. Epoch 18/20
  54. Epoch 00017: val_loss did not improve
  55. 6s - loss: 5.5281 - acc: 0.6560 - val_loss: 5.5768 - val_acc: 0.6530
  56. Epoch 19/20
  57. Epoch 00018: val_loss did not improve
  58. 6s - loss: 5.5146 - acc: 0.6567 - val_loss: 5.7057 - val_acc: 0.6447
  59. Epoch 20/20
  60. Epoch 00019: val_loss improved from 5.48013 to 5.46820, saving model to E:/Graphs/Models/model-ep019-loss5.476-val_loss5.468.h5
  61. 7s - loss: 5.4757 - acc: 0.6592 - val_loss: 5.4682 - val_acc: 0.6601
  62. <keras.callbacks.History at 0x25b5ae27630>

只有当 val_loss 有改善(即下降)时才会保存模型,否则不保存。

