Keras is a high-level API for building and training deep learning models. It is suited to fast prototyping, advanced research, and production, with three key advantages:

  • User friendly
    Keras has a simple, consistent interface optimized for common use cases. It provides clear and actionable feedback for user errors.
  • Modular and composable
    Keras models are built by connecting configurable building blocks together, with few restrictions.
  • Easy to extend
    You can write custom building blocks to express new research ideas: create new layers and loss functions, and develop state-of-the-art models.

Import tf.keras

tf.keras is TensorFlow's implementation of the Keras API specification. It is a high-level API for building and training models that includes first-class support for TensorFlow-specific functionality, such as Eager Execution, tf.data pipelines, and Estimators. tf.keras makes TensorFlow easier to use without sacrificing flexibility and performance.

To get started, import tf.keras as part of your TensorFlow program setup:

import tensorflow as tf
from tensorflow.keras import layers

print(tf.VERSION)
print(tf.keras.__version__)

1.11.0
2.1.6-tf

tf.keras can run any Keras-compatible code, but keep in mind that the tf.keras version in the latest TensorFlow release might not be the same as the latest keras version from PyPI.

Build a simple model

Sequential model

In Keras, you assemble layers to build models. A model is (usually) a graph of layers. The most common type of model is a stack of layers: the tf.keras.Sequential model.

To build a simple, fully-connected network (i.e. a multi-layer perceptron), run the following code:

model = tf.keras.Sequential()
# Adds a densely-connected layer with 64 units to the model:
model.add(layers.Dense(64, activation='relu'))
# Add another:
model.add(layers.Dense(64, activation='relu'))
# Add a softmax layer with 10 output units:
model.add(layers.Dense(10, activation='softmax'))

Configure the layers

There are many tf.keras.layers available; most of them share some common constructor arguments:

  • activation: Set the activation function for the layer. This parameter is specified by the name of a built-in function or as a callable object. By default, no activation is applied.
  • kernel_initializer and bias_initializer: The initialization schemes that create the layer's weights (kernel and bias). This parameter is a name or a callable object; it defaults to the "Glorot uniform" initializer.
  • kernel_regularizer and bias_regularizer: The regularization schemes that apply to the layer's weights (kernel and bias), such as L1 or L2 regularization. By default, no regularization is applied.

The following code instantiates tf.keras.layers.Dense layers using constructor arguments:

# Create a sigmoid layer:
layers.Dense(64, activation='sigmoid')
# Or:
layers.Dense(64, activation=tf.sigmoid)

# A linear layer with L1 regularization of factor 0.01
# applied to the kernel matrix:
layers.Dense(64, kernel_regularizer=tf.keras.regularizers.l1(0.01))

# A linear layer with L2 regularization of factor 0.01
# applied to the bias vector:
layers.Dense(64, bias_regularizer=tf.keras.regularizers.l2(0.01))

# A linear layer with a kernel initialized to a random orthogonal matrix:
layers.Dense(64, kernel_initializer='orthogonal')

# A linear layer with a bias vector initialized to 2.0s:
layers.Dense(64, bias_initializer=tf.keras.initializers.constant(2.0))

Train and evaluate

Set up training

After the model is constructed, configure its learning process by calling the compile method:

model = tf.keras.Sequential([
  # Adds a densely-connected layer with 64 units to the model:
  layers.Dense(64, activation='relu'),
  # Add another:
  layers.Dense(64, activation='relu'),
  # Add a softmax layer with 10 output units:
  layers.Dense(10, activation='softmax')])

model.compile(optimizer=tf.train.AdamOptimizer(0.001),
              loss='categorical_crossentropy',
              metrics=['accuracy'])

tf.keras.Model.compile takes three important arguments:

  • optimizer: This object specifies the training procedure. Pass it optimizer instances from the tf.train module, such as tf.train.AdamOptimizer, tf.train.RMSPropOptimizer, or tf.train.GradientDescentOptimizer.
  • loss: The function to minimize during optimization. Common choices include mean squared error (mse), categorical_crossentropy, and binary_crossentropy. Loss functions are specified by name or by passing a callable object from the tf.keras.losses module.
  • metrics: Used to monitor training. These are string names or callables from the tf.keras.metrics module.

The following shows a few examples of configuring a model for training:

# Configure a model for mean-squared error regression.
model.compile(optimizer=tf.train.AdamOptimizer(0.01),
              loss='mse',       # mean squared error
              metrics=['mae'])  # mean absolute error

# Configure a model for categorical classification.
model.compile(optimizer=tf.train.RMSPropOptimizer(0.01),
              loss=tf.keras.losses.categorical_crossentropy,
              metrics=[tf.keras.metrics.categorical_accuracy])

Input NumPy data

For small datasets, use in-memory NumPy arrays to train and evaluate a model. The model is "fit" to the training data using the fit method:

import numpy as np

data = np.random.random((1000, 32))
labels = np.random.random((1000, 10))

model.fit(data, labels, epochs=10, batch_size=32)

Epoch 1/10
1000/1000 [==============================] - 0s 253us/step - loss: 11.5766 - categorical_accuracy: 0.1110
Epoch 2/10
1000/1000 [==============================] - 0s 64us/step - loss: 11.5205 - categorical_accuracy: 0.1070
Epoch 3/10
1000/1000 [==============================] - 0s 70us/step - loss: 11.5146 - categorical_accuracy: 0.1100
Epoch 4/10
1000/1000 [==============================] - 0s 69us/step - loss: 11.5070 - categorical_accuracy: 0.0940
Epoch 5/10
1000/1000 [==============================] - 0s 71us/step - loss: 11.5020 - categorical_accuracy: 0.1150
Epoch 6/10
1000/1000 [==============================] - 0s 72us/step - loss: 11.5019 - categorical_accuracy: 0.1350
Epoch 7/10
1000/1000 [==============================] - 0s 72us/step - loss: 11.5012 - categorical_accuracy: 0.0970
Epoch 8/10
1000/1000 [==============================] - 0s 72us/step - loss: 11.4993 - categorical_accuracy: 0.1180
Epoch 9/10
1000/1000 [==============================] - 0s 69us/step - loss: 11.4905 - categorical_accuracy: 0.1320
Epoch 10/10
1000/1000 [==============================] - 0s 66us/step - loss: 11.4909 - categorical_accuracy: 0.1410

tf.keras.Model.fit takes three important arguments:

  • epochs: Training is structured into epochs. An epoch is one iteration over the entire input data (done in smaller batches).
  • batch_size: When passed NumPy data, the model slices the data into smaller batches and iterates over these batches during training. This integer specifies the size of each batch. Be aware that the last batch may be smaller if the total number of samples is not divisible by the batch size.
  • validation_data: When prototyping a model, you want to easily monitor its performance on some validation data. Passing this argument (a tuple of inputs and labels) allows the model to display the loss and metrics in inference mode for the passed data at the end of each epoch.
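
As a quick check of the batch arithmetic above (plain Python, no TensorFlow needed): with 1000 samples and batch_size=32, an epoch consists of 31 full batches plus one final, smaller batch of 8 samples.

```python
samples, batch_size = 1000, 32

# divmod gives the number of full batches and the size of the last one.
full_batches, last_batch = divmod(samples, batch_size)
print(full_batches, last_batch)  # 31 8
# Total batches per epoch: the full batches plus one partial batch, if any.
print(full_batches + (1 if last_batch else 0))  # 32
```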

Here is an example using validation_data:

import numpy as np

data = np.random.random((1000, 32))
labels = np.random.random((1000, 10))

val_data = np.random.random((100, 32))
val_labels = np.random.random((100, 10))

model.fit(data, labels, epochs=10, batch_size=32,
          validation_data=(val_data, val_labels))

Train on 1000 samples, validate on 100 samples
Epoch 1/10
1000/1000 [==============================] - 0s 124us/step - loss: 11.5267 - categorical_accuracy: 0.1070 - val_loss: 11.0015 - val_categorical_accuracy: 0.0500
Epoch 2/10
1000/1000 [==============================] - 0s 72us/step - loss: 11.5243 - categorical_accuracy: 0.0840 - val_loss: 10.9809 - val_categorical_accuracy: 0.1200
Epoch 3/10
1000/1000 [==============================] - 0s 73us/step - loss: 11.5213 - categorical_accuracy: 0.1000 - val_loss: 10.9945 - val_categorical_accuracy: 0.0800
Epoch 4/10
1000/1000 [==============================] - 0s 73us/step - loss: 11.5213 - categorical_accuracy: 0.1080 - val_loss: 10.9967 - val_categorical_accuracy: 0.0700
Epoch 5/10
1000/1000 [==============================] - 0s 73us/step - loss: 11.5181 - categorical_accuracy: 0.1150 - val_loss: 11.0184 - val_categorical_accuracy: 0.0500
Epoch 6/10
1000/1000 [==============================] - 0s 72us/step - loss: 11.5177 - categorical_accuracy: 0.1150 - val_loss: 10.9892 - val_categorical_accuracy: 0.0200
Epoch 7/10
1000/1000 [==============================] - 0s 72us/step - loss: 11.5130 - categorical_accuracy: 0.1320 - val_loss: 11.0038 - val_categorical_accuracy: 0.0500
Epoch 8/10
1000/1000 [==============================] - 0s 74us/step - loss: 11.5123 - categorical_accuracy: 0.1130 - val_loss: 11.0065 - val_categorical_accuracy: 0.0100
Epoch 9/10
1000/1000 [==============================] - 0s 72us/step - loss: 11.5076 - categorical_accuracy: 0.1150 - val_loss: 11.0062 - val_categorical_accuracy: 0.0800
Epoch 10/10
1000/1000 [==============================] - 0s 67us/step - loss: 11.5035 - categorical_accuracy: 0.1390 - val_loss: 11.0241 - val_categorical_accuracy: 0.1100

Input tf.data datasets

Use the Datasets API to scale to large datasets or multi-device training. Pass a tf.data.Dataset instance to the fit method:

# Instantiates a toy dataset instance:
dataset = tf.data.Dataset.from_tensor_slices((data, labels))
dataset = dataset.batch(32)
dataset = dataset.repeat()

# Don't forget to specify `steps_per_epoch` when calling `fit` on a dataset.
model.fit(dataset, epochs=10, steps_per_epoch=30)

Epoch 1/10
30/30 [==============================] - 0s 6ms/step - loss: 11.4973 - categorical_accuracy: 0.1406
Epoch 2/10
30/30 [==============================] - 0s 2ms/step - loss: 11.5182 - categorical_accuracy: 0.1344
Epoch 3/10
30/30 [==============================] - 0s 2ms/step - loss: 11.4953 - categorical_accuracy: 0.1344
Epoch 4/10
30/30 [==============================] - 0s 2ms/step - loss: 11.4842 - categorical_accuracy: 0.1542
Epoch 5/10
30/30 [==============================] - 0s 2ms/step - loss: 11.5081 - categorical_accuracy: 0.1510
Epoch 6/10
30/30 [==============================] - 0s 2ms/step - loss: 11.4939 - categorical_accuracy: 0.1615
Epoch 7/10
30/30 [==============================] - 0s 2ms/step - loss: 11.5049 - categorical_accuracy: 0.1823
Epoch 8/10
30/30 [==============================] - 0s 2ms/step - loss: 11.4617 - categorical_accuracy: 0.1760
Epoch 9/10
30/30 [==============================] - 0s 2ms/step - loss: 11.4863 - categorical_accuracy: 0.1688
Epoch 10/10
30/30 [==============================] - 0s 2ms/step - loss: 11.4946 - categorical_accuracy: 0.1885

In the code above, the fit method uses the steps_per_epoch argument, which is the number of training steps the model runs before moving on to the next epoch. Because the Dataset yields batches of data, this snippet does not require a batch_size.
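
A reasonable way to pick steps_per_epoch, sketched in plain Python: one full pass over the data takes ceil(samples / batch_size) steps. The snippet above uses 30 rather than the full 32 steps; because the dataset repeats, the leftover batches simply carry over into the next epoch.

```python
import math

samples, batch_size = 1000, 32
# Steps needed to see every sample exactly once.
steps_per_full_pass = math.ceil(samples / batch_size)
print(steps_per_full_pass)  # 32
```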

Datasets can also be used for validation:

dataset = tf.data.Dataset.from_tensor_slices((data, labels))
dataset = dataset.batch(32).repeat()

val_dataset = tf.data.Dataset.from_tensor_slices((val_data,
val_labels))
val_dataset = val_dataset.batch(32).repeat()

model.fit(dataset, epochs=10, steps_per_epoch=30,
          validation_data=val_dataset,
          validation_steps=3)

Epoch 1/10
30/30 [==============================] - 0s 8ms/step - loss: 11.4649 - categorical_accuracy: 0.1740 - val_loss: 11.0269 - val_categorical_accuracy: 0.0521
Epoch 2/10
30/30 [==============================] - 0s 2ms/step - loss: 11.4794 - categorical_accuracy: 0.1865 - val_loss: 11.4233 - val_categorical_accuracy: 0.0521
Epoch 3/10
30/30 [==============================] - 0s 2ms/step - loss: 11.4604 - categorical_accuracy: 0.1760 - val_loss: 11.4040 - val_categorical_accuracy: 0.0208
Epoch 4/10
30/30 [==============================] - 0s 2ms/step - loss: 11.4475 - categorical_accuracy: 0.1771 - val_loss: 11.3095 - val_categorical_accuracy: 0.2396
Epoch 5/10
30/30 [==============================] - 0s 2ms/step - loss: 11.4727 - categorical_accuracy: 0.1750 - val_loss: 11.0481 - val_categorical_accuracy: 0.0938
Epoch 6/10
30/30 [==============================] - 0s 2ms/step - loss: 11.4569 - categorical_accuracy: 0.1833 - val_loss: 11.3550 - val_categorical_accuracy: 0.1562
Epoch 7/10
30/30 [==============================] - 0s 2ms/step - loss: 11.4653 - categorical_accuracy: 0.1958 - val_loss: 11.4325 - val_categorical_accuracy: 0.0417
Epoch 8/10
30/30 [==============================] - 0s 2ms/step - loss: 11.4246 - categorical_accuracy: 0.1823 - val_loss: 11.3625 - val_categorical_accuracy: 0.0417
Epoch 9/10
30/30 [==============================] - 0s 2ms/step - loss: 11.4542 - categorical_accuracy: 0.1729 - val_loss: 11.0326 - val_categorical_accuracy: 0.0521
Epoch 10/10
30/30 [==============================] - 0s 2ms/step - loss: 11.4600 - categorical_accuracy: 0.1979 - val_loss: 11.3494 - val_categorical_accuracy: 0.1042

Evaluate and predict

The tf.keras.Model.evaluate and tf.keras.Model.predict methods can use NumPy data and a tf.data.Dataset.

To evaluate the inference-mode loss and metrics for the data provided, run:

data = np.random.random((1000, 32))
labels = np.random.random((1000, 10))

model.evaluate(data, labels, batch_size=32)

model.evaluate(dataset, steps=30)

1000/1000 [==============================] - 0s 83us/step
30/30 [==============================] - 0s 3ms/step
[11.43181880315145, 0.18333333333333332]

And to predict the output of the last layer in inference mode for the data provided, as a NumPy array, run:

result = model.predict(data, batch_size=32)
print(result.shape)

(1000, 10)

Build advanced models

Functional API

The tf.keras.Sequential model is a simple stack of layers and cannot represent arbitrary models. Use the Keras functional API to build complex model topologies such as:

  • multi-input models,
  • multi-output models,
  • models with shared layers (the same layer called multiple times),
  • models with non-sequential data flows (e.g. residual connections).

Building a model with the functional API works like this:

  1. A layer instance is callable and returns a tensor.
  2. Input tensors and output tensors are used to define a tf.keras.Model instance.
  3. The model is trained just like the Sequential model.

The following example uses the functional API to build a simple, fully-connected network:

inputs = tf.keras.Input(shape=(32,))  # Returns a placeholder tensor

# A layer instance is callable on a tensor, and returns a tensor.
x = layers.Dense(64, activation='relu')(inputs)
x = layers.Dense(64, activation='relu')(x)
predictions = layers.Dense(10, activation='softmax')(x)

Instantiate the model given the inputs and outputs:

model = tf.keras.Model(inputs=inputs, outputs=predictions)

# The compile step specifies the training configuration.
model.compile(optimizer=tf.train.RMSPropOptimizer(0.001),
              loss='categorical_crossentropy',
              metrics=['accuracy'])

# Trains for 5 epochs
model.fit(data, labels, batch_size=32, epochs=5)

Epoch 1/5
1000/1000 [==============================] - 0s 260us/step - loss: 11.7190 - acc: 0.1080
Epoch 2/5
1000/1000 [==============================] - 0s 75us/step - loss: 11.5347 - acc: 0.1010
Epoch 3/5
1000/1000 [==============================] - 0s 74us/step - loss: 11.5020 - acc: 0.1100
Epoch 4/5
1000/1000 [==============================] - 0s 75us/step - loss: 11.4908 - acc: 0.1090
Epoch 5/5
1000/1000 [==============================] - 0s 74us/step - loss: 11.4809 - acc: 0.1330

Model subclassing

Build a fully-customizable model by subclassing tf.keras.Model and defining your own forward pass. Create layers in the __init__ method and set them as attributes of the class instance. Define the forward pass in the call method.

Model subclassing is particularly useful when Eager Execution is enabled, since it allows the forward pass to be written imperatively.

Key point: use the right API for the job. While model subclassing offers flexibility, it comes at a cost of greater complexity and more opportunities for user error. If possible, prefer the functional API.

The following example shows a subclassed tf.keras.Model using a custom forward pass:

class MyModel(tf.keras.Model):

  def __init__(self, num_classes=10):
    super(MyModel, self).__init__(name='my_model')
    self.num_classes = num_classes
    # Define your layers here.
    self.dense_1 = layers.Dense(32, activation='relu')
    self.dense_2 = layers.Dense(num_classes, activation='sigmoid')

  def call(self, inputs):
    # Define your forward pass here,
    # using layers you previously defined (in `__init__`).
    x = self.dense_1(inputs)
    return self.dense_2(x)

  def compute_output_shape(self, input_shape):
    # You need to override this function if you want to use the
    # subclassed model as part of a functional-style model.
    # Otherwise, this method is optional.
    shape = tf.TensorShape(input_shape).as_list()
    shape[-1] = self.num_classes
    return tf.TensorShape(shape)
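
The shape arithmetic in compute_output_shape is easy to verify without TensorFlow; a minimal pure-Python sketch of the same logic (the helper name output_shape is hypothetical):

```python
def output_shape(input_shape, num_classes):
    # Copy the input shape and replace the last dimension with the
    # number of classes, exactly as the model above does.
    shape = list(input_shape)
    shape[-1] = num_classes
    return tuple(shape)

print(output_shape((None, 32), 10))  # (None, 10)
```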

Instantiate the new model class:

model = MyModel(num_classes=10)

# The compile step specifies the training configuration.
model.compile(optimizer=tf.train.RMSPropOptimizer(0.001),
              loss='categorical_crossentropy',
              metrics=['accuracy'])

# Trains for 5 epochs.
model.fit(data, labels, batch_size=32, epochs=5)

Epoch 1/5
1000/1000 [==============================] - 0s 224us/step - loss: 11.5206 - acc: 0.0990
Epoch 2/5
1000/1000 [==============================] - 0s 62us/step - loss: 11.5128 - acc: 0.1070
Epoch 3/5
1000/1000 [==============================] - 0s 64us/step - loss: 11.5023 - acc: 0.0980
Epoch 4/5
1000/1000 [==============================] - 0s 65us/step - loss: 11.4941 - acc: 0.0980
Epoch 5/5
1000/1000 [==============================] - 0s 66us/step - loss: 11.4879 - acc: 0.0990

Custom layers

Create a custom layer by subclassing tf.keras.layers.Layer and implementing the following methods:

  • build: Create the weights of the layer. Add weights with the add_weight method.
  • call: Define the forward pass.
  • compute_output_shape: Specify how to compute the output shape of the layer given the input shape.
  • Optionally, a layer can be serialized by implementing the get_config method and the from_config class method.

Here's an example of a custom layer that implements a matmul of an input with a kernel matrix:

class MyLayer(layers.Layer):

  def __init__(self, output_dim, **kwargs):
    self.output_dim = output_dim
    super(MyLayer, self).__init__(**kwargs)

  def build(self, input_shape):
    shape = tf.TensorShape((input_shape[1], self.output_dim))
    # Create a trainable weight variable for this layer.
    self.kernel = self.add_weight(name='kernel',
                                  shape=shape,
                                  initializer='uniform',
                                  trainable=True)
    # Be sure to call this at the end
    super(MyLayer, self).build(input_shape)

  def call(self, inputs):
    return tf.matmul(inputs, self.kernel)

  def compute_output_shape(self, input_shape):
    shape = tf.TensorShape(input_shape).as_list()
    shape[-1] = self.output_dim
    return tf.TensorShape(shape)

  def get_config(self):
    base_config = super(MyLayer, self).get_config()
    base_config['output_dim'] = self.output_dim
    return base_config

  @classmethod
  def from_config(cls, config):
    return cls(**config)
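
The get_config/from_config pair above enables a simple round-trip: a layer can be rebuilt from its own config dict. A minimal sketch of the pattern in plain Python (ToyLayer is a hypothetical stand-in, not a real Keras layer):

```python
class ToyLayer(object):
    """Stand-in illustrating the Keras config round-trip pattern."""

    def __init__(self, output_dim):
        self.output_dim = output_dim

    def get_config(self):
        # Everything needed to reconstruct this object.
        return {'output_dim': self.output_dim}

    @classmethod
    def from_config(cls, config):
        return cls(**config)

layer = ToyLayer(10)
clone = ToyLayer.from_config(layer.get_config())
print(clone.output_dim)  # 10
```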

Create a model using your custom layer:

model = tf.keras.Sequential([
    MyLayer(10),
    layers.Activation('softmax')])

# The compile step specifies the training configuration
model.compile(optimizer=tf.train.RMSPropOptimizer(0.001),
              loss='categorical_crossentropy',
              metrics=['accuracy'])

# Trains for 5 epochs.
model.fit(data, labels, batch_size=32, epochs=5)

Epoch 1/5
1000/1000 [==============================] - 0s 170us/step - loss: 11.4872 - acc: 0.0990
Epoch 2/5
1000/1000 [==============================] - 0s 52us/step - loss: 11.4817 - acc: 0.0910
Epoch 3/5
1000/1000 [==============================] - 0s 52us/step - loss: 11.4800 - acc: 0.0960
Epoch 4/5
1000/1000 [==============================] - 0s 57us/step - loss: 11.4778 - acc: 0.0960
Epoch 5/5
1000/1000 [==============================] - 0s 60us/step - loss: 11.4764 - acc: 0.0930

Callbacks

A callback is an object passed to a model to customize and extend its behavior during training. You can write your own custom callback, or use the built-in tf.keras.callbacks that include:

  • tf.keras.callbacks.ModelCheckpoint: Save checkpoints of your model at regular intervals.
  • tf.keras.callbacks.LearningRateScheduler: Dynamically change the learning rate.
  • tf.keras.callbacks.EarlyStopping: Interrupt training when validation performance has stopped improving.
  • tf.keras.callbacks.TensorBoard: Monitor the model's behavior using TensorBoard.

To use a tf.keras.callbacks.Callback, pass it to the model's fit method:

callbacks = [
  # Interrupt training if `val_loss` stops improving for over 2 epochs
  tf.keras.callbacks.EarlyStopping(patience=2, monitor='val_loss'),
  # Write TensorBoard logs to `./logs` directory
  tf.keras.callbacks.TensorBoard(log_dir='./logs')
]
model.fit(data, labels, batch_size=32, epochs=5, callbacks=callbacks,
          validation_data=(val_data, val_labels))

Train on 1000 samples, validate on 100 samples
Epoch 1/5
1000/1000 [==============================] - 0s 150us/step - loss: 11.4748 - acc: 0.1230 - val_loss: 10.9787 - val_acc: 0.1000
Epoch 2/5
1000/1000 [==============================] - 0s 78us/step - loss: 11.4730 - acc: 0.1060 - val_loss: 10.9783 - val_acc: 0.1300
Epoch 3/5
1000/1000 [==============================] - 0s 82us/step - loss: 11.4711 - acc: 0.1130 - val_loss: 10.9756 - val_acc: 0.1500
Epoch 4/5
1000/1000 [==============================] - 0s 82us/step - loss: 11.4704 - acc: 0.1050 - val_loss: 10.9772 - val_acc: 0.0900
Epoch 5/5
1000/1000 [==============================] - 0s 83us/step - loss: 11.4689 - acc: 0.1140 - val_loss: 10.9781 - val_acc: 0.1300

Save and restore

Weights only

Save and load the weights of a model using tf.keras.Model.save_weights:

model = tf.keras.Sequential([
  layers.Dense(64, activation='relu'),
  layers.Dense(10, activation='softmax')])

model.compile(optimizer=tf.train.AdamOptimizer(0.001),
              loss='categorical_crossentropy',
              metrics=['accuracy'])

# Save weights to a TensorFlow Checkpoint file
model.save_weights('./weights/my_model')

# Restore the model's state,
# this requires a model with the same architecture.
model.load_weights('./weights/my_model')

By default, this saves the model's weights in the TensorFlow checkpoint file format. Weights can also be saved to the Keras HDF5 format (the default for the multi-backend implementation of Keras):

# Save weights to a HDF5 file
model.save_weights('my_model.h5', save_format='h5')

# Restore the model's state
model.load_weights('my_model.h5')

Configuration only

A model's configuration can be saved; this serializes the model architecture without any weights. A saved configuration can recreate and initialize the same model, even without the code that defined the original model. Keras supports the JSON and YAML serialization formats:

# Serialize a model to JSON format
json_string = model.to_json()
json_string

'{"backend":
"tensorflow", "keras_version": "2.1.6-tf",
"config": {"name": "sequential_3",
"layers": [{"config": {"units": 64,
"kernel_regularizer": null, "activation": "relu",
"bias_constraint": null, "trainable": true, "use_bias":
true, "bias_initializer": {"config": {"dtype":
"float32"}, "class_name": "Zeros"},
"activity_regularizer": null, "dtype": null,
"kernel_constraint": null, "kernel_initializer":
{"config": {"mode": "fan_avg", "seed":
null, "distribution": "uniform", "scale": 1.0,
"dtype": "float32"}, "class_name":
"VarianceScaling"}, "name": "dense_17",
"bias_regularizer": null}, "class_name":
"Dense"}, {"config": {"units": 10,
"kernel_regularizer": null, "activation":
"softmax", "bias_constraint": null, "trainable":
true, "use_bias": true, "bias_initializer":
{"config": {"dtype": "float32"},
"class_name": "Zeros"}, "activity_regularizer":
null, "dtype": null, "kernel_constraint": null,
"kernel_initializer": {"config": {"mode":
"fan_avg", "seed": null, "distribution": "uniform",
"scale": 1.0, "dtype": "float32"},
"class_name": "VarianceScaling"}, "name":
"dense_18", "bias_regularizer": null},
"class_name": "Dense"}]}, "class_name":
"Sequential"}'

import json
import pprint
pprint.pprint(json.loads(json_string))

{'backend': 'tensorflow',
 'class_name': 'Sequential',
 'config': {'layers': [{'class_name': 'Dense',
                        'config': {'activation': 'relu',
                                   'activity_regularizer': None,
                                   'bias_constraint': None,
                                   'bias_initializer': {'class_name': 'Zeros',
                                                        'config': {'dtype': 'float32'}},
                                   'bias_regularizer': None,
                                   'dtype': None,
                                   'kernel_constraint': None,
                                   'kernel_initializer': {'class_name': 'VarianceScaling',
                                                          'config': {'distribution': 'uniform',
                                                                     'dtype': 'float32',
                                                                     'mode': 'fan_avg',
                                                                     'scale': 1.0,
                                                                     'seed': None}},
                                   'kernel_regularizer': None,
                                   'name': 'dense_17',
                                   'trainable': True,
                                   'units': 64,
                                   'use_bias': True}},
                       {'class_name': 'Dense',
                        'config': {'activation': 'softmax',
                                   'activity_regularizer': None,
                                   'bias_constraint': None,
                                   'bias_initializer': {'class_name': 'Zeros',
                                                        'config': {'dtype': 'float32'}},
                                   'bias_regularizer': None,
                                   'dtype': None,
                                   'kernel_constraint': None,
                                   'kernel_initializer': {'class_name': 'VarianceScaling',
                                                          'config': {'distribution': 'uniform',
                                                                     'dtype': 'float32',
                                                                     'mode': 'fan_avg',
                                                                     'scale': 1.0,
                                                                     'seed': None}},
                                   'kernel_regularizer': None,
                                   'name': 'dense_18',
                                   'trainable': True,
                                   'units': 10,
                                   'use_bias': True}}],
            'name': 'sequential_3'},
 'keras_version': '2.1.6-tf'}

Recreating the model from the JSON (freshly initialized):

fresh_model = tf.keras.models.model_from_json(json_string)

Serializing a model to the YAML format:

yaml_string = model.to_yaml()
print(yaml_string)

backend: tensorflow
class_name: Sequential
config:
  layers:
  - class_name: Dense
    config:
      activation: relu
      activity_regularizer: null
      bias_constraint: null
      bias_initializer:
        class_name: Zeros
        config: {dtype: float32}
      bias_regularizer: null
      dtype: null
      kernel_constraint: null
      kernel_initializer:
        class_name: VarianceScaling
        config: {distribution: uniform, dtype: float32, mode: fan_avg, scale: 1.0,
          seed: null}
      kernel_regularizer: null
      name: dense_17
      trainable: true
      units: 64
      use_bias: true
  - class_name: Dense
    config:
      activation: softmax
      activity_regularizer: null
      bias_constraint: null
      bias_initializer:
        class_name: Zeros
        config: {dtype: float32}
      bias_regularizer: null
      dtype: null
      kernel_constraint: null
      kernel_initializer:
        class_name: VarianceScaling
        config: {distribution: uniform, dtype: float32, mode: fan_avg, scale: 1.0,
          seed: null}
      kernel_regularizer: null
      name: dense_18
      trainable: true
      units: 10
      use_bias: true
  name: sequential_3
keras_version: 2.1.6-tf

Recreating the model from the YAML:

fresh_model = tf.keras.models.model_from_yaml(yaml_string)

Note: Subclassed models are not serializable, because their architecture is defined by the Python code in the body of the call method.

Entire model

The entire model can be saved to a single file that contains the weight values, the model's configuration, and even the optimizer's configuration. This allows you to checkpoint a model and later resume training from the exact same state, without access to the original code.

# Create a trivial model
model = tf.keras.Sequential([
  layers.Dense(10, activation='softmax', input_shape=(32,)),
  layers.Dense(10, activation='softmax')
])
model.compile(optimizer='rmsprop',
              loss='categorical_crossentropy',
              metrics=['accuracy'])
model.fit(data, labels, batch_size=32, epochs=5)

# Save entire model to a HDF5 file
model.save('my_model.h5')

# Recreate the exact same model, including weights and optimizer.
model = tf.keras.models.load_model('my_model.h5')

Epoch 1/5
1000/1000 [==============================] - 0s 297us/step - loss: 11.5009 - acc: 0.0980
Epoch 2/5
1000/1000 [==============================] - 0s 76us/step - loss: 11.4844 - acc: 0.0960
Epoch 3/5
1000/1000 [==============================] - 0s 77us/step - loss: 11.4791 - acc: 0.0850
Epoch 4/5
1000/1000 [==============================] - 0s 78us/step - loss: 11.4771 - acc: 0.1020
Epoch 5/5
1000/1000 [==============================] - 0s 79us/step - loss: 11.4763 - acc: 0.0900

Eager Execution

Eager Execution is an imperative programming environment that evaluates operations immediately. It is not required for Keras, but it is supported by tf.keras and useful for inspecting your program and debugging.

All of the tf.keras model-building APIs are compatible with Eager Execution. And while the Sequential and functional APIs can be used, Eager Execution especially benefits model subclassing and building custom layers, that is, the APIs that require you to write the forward pass as code rather than create models by assembling existing layers.

See the Eager Execution guide for examples of using Keras models with custom training loops and tf.GradientTape.

Distribution

Estimators

The Estimator API is used for training models in distributed environments. It targets industry use cases such as distributed training on large datasets and exporting a model for production.

A tf.keras.Model can be trained with the tf.estimator API by converting the model to a tf.estimator.Estimator object with tf.keras.estimator.model_to_estimator. See Creating Estimators from Keras models.

model = tf.keras.Sequential([layers.Dense(10, activation='softmax'),
                             layers.Dense(10, activation='softmax')])

model.compile(optimizer=tf.train.RMSPropOptimizer(0.001),
              loss='categorical_crossentropy',
              metrics=['accuracy'])

estimator = tf.keras.estimator.model_to_estimator(model)

INFO:tensorflow:Using the Keras model provided.
INFO:tensorflow:Using default config.
WARNING:tensorflow:Using temporary folder as model directory: /tmp/tmpm0ljzq8s
INFO:tensorflow:Using config: {'_experimental_distribute': None, '_master': '', '_eval_distribute': None, '_num_ps_replicas': 0, '_protocol': None, '_global_id_in_cluster': 0, '_save_summary_steps': 100, '_tf_random_seed': None, '_model_dir': '/tmp/tmpm0ljzq8s', '_evaluation_master': '', '_task_id': 0, '_keep_checkpoint_max': 5, '_save_checkpoints_steps': None, '_service': None, '_num_worker_replicas': 1, '_save_checkpoints_secs': 600, '_is_chief': True, '_cluster_spec': <tensorflow.python.training.server_lib.ClusterSpec object at 0x7fad8c5d3e10>, '_keep_checkpoint_every_n_hours': 10000, '_log_step_count_steps': 100, '_session_config': allow_soft_placement: true
graph_options {
  rewrite_options {
    meta_optimizer_iterations: ONE
  }
}
, '_train_distribute': None, '_task_type': 'worker', '_device_fn': None}

Note: Enable Eager Execution to debug Estimator input functions and inspect the data.

Multiple GPUs

tf.keras models can run on multiple GPUs using tf.contrib.distribute.DistributionStrategy. This API provides distributed training on multiple GPUs with almost no changes to existing code.

Currently, tf.contrib.distribute.MirroredStrategy is the only supported distribution strategy. MirroredStrategy does in-graph replication with synchronous training using all-reduce on a single machine. To use DistributionStrategy with Keras, convert the tf.keras.Model to a tf.estimator.Estimator with tf.keras.estimator.model_to_estimator, then train the Estimator.

The following example distributes a tf.keras.Model across multiple GPUs on a single machine.

First, define a simple model:

model = tf.keras.Sequential()
model.add(layers.Dense(16, activation='relu', input_shape=(10,)))
model.add(layers.Dense(1, activation='sigmoid'))

optimizer = tf.train.GradientDescentOptimizer(0.2)

model.compile(loss='binary_crossentropy', optimizer=optimizer)
model.summary()

_________________________________________________________________
Layer (type)                 Output Shape              Param #
=================================================================
dense_23 (Dense)             (None, 16)                176
_________________________________________________________________
dense_24 (Dense)             (None, 1)                 17
=================================================================
Total params: 193
Trainable params: 193
Non-trainable params: 0
_________________________________________________________________
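
The parameter counts in the summary follow from the Dense layer formula in_dim * units + units (kernel matrix plus one bias per unit); a quick check in plain Python:

```python
def dense_params(in_dim, units):
    # Kernel matrix (in_dim x units) plus one bias per unit.
    return in_dim * units + units

p1 = dense_params(10, 16)  # dense_23: 176
p2 = dense_params(16, 1)   # dense_24: 17
print(p1, p2, p1 + p2)     # 176 17 193
```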

Define an input pipeline. The input_fn returns a tf.data.Dataset object that is used to distribute the data across multiple devices, with each device processing a slice of the input batch.

def input_fn():
  x = np.random.random((1024, 10))
  y = np.random.randint(2, size=(1024, 1))
  x = tf.cast(x, tf.float32)
  dataset = tf.data.Dataset.from_tensor_slices((x, y))
  dataset = dataset.repeat(10)
  dataset = dataset.batch(32)
  return dataset
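
For reference, the input_fn above yields 1024 samples repeated 10 times and batched by 32, so it produces 1024 * 10 / 32 = 320 batches in total, more than enough for the 10 training steps requested later. A quick check of the arithmetic:

```python
samples, repeats, batch_size = 1024, 10, 32
# Total number of batches the pipeline can produce.
total_batches = samples * repeats // batch_size
print(total_batches)  # 320
```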

Next, create a tf.estimator.RunConfig and set the train_distribute argument to an instance of tf.contrib.distribute.MirroredStrategy. When creating MirroredStrategy, you can specify a list of devices or set the num_gpus argument. The default uses all available GPUs, like so:

strategy = tf.contrib.distribute.MirroredStrategy()
config = tf.estimator.RunConfig(train_distribute=strategy)

INFO:tensorflow:Initializing RunConfig with distribution strategies.
INFO:tensorflow:Not using Distribute Coordinator.

Convert the Keras model to a tf.estimator.Estimator instance:

keras_estimator = tf.keras.estimator.model_to_estimator(
  keras_model=model,
  config=config,
  model_dir='/tmp/model_dir')

INFO:tensorflow:Using the Keras model provided.
INFO:tensorflow:Using config: {'_experimental_distribute': None, '_master': '', '_eval_distribute': None, '_num_ps_replicas': 0, '_protocol': None, '_global_id_in_cluster': 0, '_save_summary_steps': 100, '_tf_random_seed': None, '_model_dir': '/tmp/model_dir', '_evaluation_master': '', '_task_id': 0, '_keep_checkpoint_max': 5, '_save_checkpoints_steps': None, '_service': None, '_num_worker_replicas': 1, '_save_checkpoints_secs': 600, '_is_chief': True, '_cluster_spec': <tensorflow.python.training.server_lib.ClusterSpec object at 0x7faed9e1c550>, '_keep_checkpoint_every_n_hours': 10000, '_log_step_count_steps': 100, '_distribute_coordinator_mode': None, '_session_config': allow_soft_placement: true
graph_options {
  rewrite_options {
    meta_optimizer_iterations: ONE
  }
}
, '_train_distribute': <tensorflow.contrib.distribute.python.mirrored_strategy.MirroredStrategy object at 0x7faed9e1c588>, '_task_type': 'worker', '_device_fn': None}

Finally, train the Estimator instance by providing the input_fn and steps arguments:

keras_estimator.train(input_fn=input_fn, steps=10)

WARNING:tensorflow:Not all devices in DistributionStrategy are visible to TensorFlow session.
INFO:tensorflow:Calling model_fn.
INFO:tensorflow:Done calling model_fn.
INFO:tensorflow:Warm-starting with WarmStartSettings: WarmStartSettings(ckpt_to_initialize_from='/tmp/model_dir/keras/keras_model.ckpt', vars_to_warm_start='.*', var_name_to_vocab_info={}, var_name_to_prev_var_name={})
INFO:tensorflow:Warm-starting from: ('/tmp/model_dir/keras/keras_model.ckpt',)
INFO:tensorflow:Warm-starting variable: dense_24/kernel; prev_var_name: Unchanged
INFO:tensorflow:Warm-starting variable: dense_23/bias; prev_var_name: Unchanged
INFO:tensorflow:Warm-starting variable: dense_24/bias; prev_var_name: Unchanged
INFO:tensorflow:Warm-starting variable: dense_23/kernel; prev_var_name: Unchanged
INFO:tensorflow:Create CheckpointSaverHook.
INFO:tensorflow:Graph was finalized.
INFO:tensorflow:Running local_init_op.
INFO:tensorflow:Done running local_init_op.
INFO:tensorflow:Saving checkpoints for 0 into /tmp/model_dir/model.ckpt.
INFO:tensorflow:Initialize system
INFO:tensorflow:loss = 0.7582453, step = 0
INFO:tensorflow:Saving checkpoints for 10 into /tmp/model_dir/model.ckpt.
INFO:tensorflow:Finalize system.
INFO:tensorflow:Loss for final step: 0.6743419.
