In this section we use an RNN for regression: we generate a sin curve ourselves and train the network to predict the corresponding cos curve.

First we need to set the RNN's hyperparameters:

import tensorflow as tf
import numpy as np
import matplotlib.pyplot as plt

BATCH_START = 0     # index used when building the batch data
TIME_STEPS = 20     # time_steps for backpropagation through time
BATCH_SIZE = 50
INPUT_SIZE = 1      # size of the sin input
OUTPUT_SIZE = 1     # size of the cos output
CELL_SIZE = 10      # hidden unit size of the RNN
LR = 0.006          # learning rate

Define a get_batch function that generates the data:

def get_batch():
    global BATCH_START, TIME_STEPS
    # xs shape (50batch, 20steps)
    xs = np.arange(BATCH_START, BATCH_START+TIME_STEPS*BATCH_SIZE).reshape((BATCH_SIZE, TIME_STEPS)) / (10*np.pi)
    seq = np.sin(xs)
    res = np.cos(xs)
    BATCH_START += TIME_STEPS
    # returned seq, res and xs: shape (batch, step, input)
    return [seq[:, :, np.newaxis], res[:, :, np.newaxis], xs]
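
As a quick sanity check (this snippet is illustrative only and not part of the training code), you can fetch one batch and confirm the shapes match the (batch, step, input) layout:

seq, res, xs = get_batch()   # note: this advances BATCH_START
print(seq.shape)   # (50, 20, 1) = (BATCH_SIZE, TIME_STEPS, INPUT_SIZE)
print(res.shape)   # (50, 20, 1)
print(xs.shape)    # (50, 20)
plt.plot(xs[0, :], seq[0].flatten(), 'b--', xs[0, :], res[0].flatten(), 'r')
plt.show()         # cos target (red) against the sin input (blue, dashed)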

Defining the main structure of the LSTMRNN

Using a class to define the LSTMRNN is more convenient. The first step is to define the class's __init__, which takes in the various parameters:

class LSTMRNN(object):
    def __init__(self, n_steps, input_size, output_size, cell_size, batch_size):
        self.n_steps = n_steps
        self.input_size = input_size
        self.output_size = output_size
        self.cell_size = cell_size
        self.batch_size = batch_size
        with tf.name_scope('inputs'):
            self.xs = tf.placeholder(tf.float32, [None, n_steps, input_size], name='xs')
            self.ys = tf.placeholder(tf.float32, [None, n_steps, output_size], name='ys')
        with tf.variable_scope('in_hidden'):
            self.add_input_layer()
        with tf.variable_scope('LSTM_cell'):
            self.add_cell()
        with tf.variable_scope('out_hidden'):
            self.add_output_layer()
        with tf.name_scope('cost'):
            self.compute_cost()
        with tf.name_scope('train'):
            self.train_op = tf.train.AdamOptimizer(LR).minimize(self.cost)

Implement add_input_layer to add the input layer:

    def add_input_layer(self,):
        l_in_x = tf.reshape(self.xs, [-1, self.input_size], name='2_2D')  # (batch*n_step, in_size)
        # Ws (in_size, cell_size)
        Ws_in = self._weight_variable([self.input_size, self.cell_size])
        # bs (cell_size, )
        bs_in = self._bias_variable([self.cell_size, ])
        # l_in_y = (batch * n_steps, cell_size)
        with tf.name_scope('Wx_plus_b'):
            l_in_y = tf.matmul(l_in_x, Ws_in) + bs_in
        # reshape l_in_y ==> (batch, n_steps, cell_size)
        self.l_in_y = tf.reshape(l_in_y, [-1, self.n_steps, self.cell_size], name='2_3D')
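
One point worth spelling out: tf.matmul here only accepts 2-D tensors, so the (batch, n_steps, input_size) input is flattened to (batch*n_steps, input_size), pushed through the dense layer, and folded back into 3-D. A minimal numpy sketch of the same round trip (illustrative only, with made-up values):

import numpy as np

batch, n_steps, in_size, cell_size = 50, 20, 1, 10
x = np.random.randn(batch, n_steps, in_size)
W = np.random.randn(in_size, cell_size)

x2d = x.reshape(-1, in_size)                 # (batch*n_steps, in_size)
y2d = x2d.dot(W)                             # (batch*n_steps, cell_size)
y = y2d.reshape(batch, n_steps, cell_size)   # back to 3-D

# every time step is transformed independently by the same weights
assert np.allclose(y[3, 7], x[3, 7].dot(W))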

Implement add_cell to add the LSTM cell. Pay attention to self.cell_init_state here: during training we have to handle this state explicitly, feeding the final state of one batch in as the initial state of the next.

    def add_cell(self):
        lstm_cell = tf.contrib.rnn.BasicLSTMCell(self.cell_size, forget_bias=1.0, state_is_tuple=True)
        with tf.name_scope('initial_state'):
            self.cell_init_state = lstm_cell.zero_state(self.batch_size, dtype=tf.float32)
        self.cell_outputs, self.cell_final_state = tf.nn.dynamic_rnn(
            lstm_cell, self.l_in_y, initial_state=self.cell_init_state, time_major=False)

Implement add_output_layer to add the output layer:

    def add_output_layer(self):
        # shape = (batch * steps, cell_size)
        l_out_x = tf.reshape(self.cell_outputs, [-1, self.cell_size], name='2_2D')
        Ws_out = self._weight_variable([self.cell_size, self.output_size])
        bs_out = self._bias_variable([self.output_size, ])
        # shape = (batch * steps, output_size)
        with tf.name_scope('Wx_plus_b'):
            self.pred = tf.matmul(l_out_x, Ws_out) + bs_out

Add the remaining parts of the RNN:

    def compute_cost(self):
        losses = tf.contrib.legacy_seq2seq.sequence_loss_by_example(
            [tf.reshape(self.pred, [-1], name='reshape_pred')],
            [tf.reshape(self.ys, [-1], name='reshape_target')],
            [tf.ones([self.batch_size * self.n_steps], dtype=tf.float32)],
            average_across_timesteps=True,
            softmax_loss_function=self.ms_error,
            name='losses'
        )
        with tf.name_scope('average_cost'):
            self.cost = tf.div(
                tf.reduce_sum(losses, name='losses_sum'),
                self.batch_size,
                name='average_cost')
            tf.summary.scalar('cost', self.cost)

    @staticmethod
    def ms_error(labels, logits):
        # tf.sub was removed in TF 1.0; tf.subtract is the current name.
        # sequence_loss_by_example calls this with labels=/logits= keywords,
        # hence the staticmethod with this exact signature.
        return tf.square(tf.subtract(labels, logits))

    def _weight_variable(self, shape, name='weights'):
        initializer = tf.random_normal_initializer(mean=0., stddev=1., )
        return tf.get_variable(shape=shape, initializer=initializer, name=name)

    def _bias_variable(self, shape, name='biases'):
        initializer = tf.constant_initializer(0.1)
        return tf.get_variable(name=name, shape=shape, initializer=initializer)
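
Because every weight passed to sequence_loss_by_example is 1 and the softmax loss has been swapped for squared error, this block is essentially mean squared error in disguise: self.cost works out to n_steps times the plain MSE over all predictions. A simpler formulation that differs only by that constant factor (a sketch, not the author's code) could replace the body of compute_cost:

    def compute_cost(self):
        # sketch: plain-TF MSE; the original self.cost equals n_steps * this value
        self.cost = tf.reduce_mean(
            tf.square(tf.reshape(self.pred, [-1]) - tf.reshape(self.ys, [-1])))
        tf.summary.scalar('cost', self.cost)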

A note on TensorFlow's LSTM state_is_tuple argument:

The official recommendation is to set state_is_tuple=True. The input and output states are then a 2-tuple of c (the cell state) and h (the hidden output).

The input state, the output, and the cell state all share the same shape: batch_size * num_units.
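
Concretely, with state_is_tuple=True the state object is an LSTMStateTuple whose two fields can be read by name (a small illustrative sketch using the constants defined at the top):

lstm = tf.contrib.rnn.BasicLSTMCell(CELL_SIZE, state_is_tuple=True)
state = lstm.zero_state(BATCH_SIZE, dtype=tf.float32)
print(state.c)   # cell state,    shape (BATCH_SIZE, CELL_SIZE) = (50, 10)
print(state.h)   # hidden output, shape (BATCH_SIZE, CELL_SIZE) = (50, 10)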

Training the LSTMRNN

if __name__ == '__main__':
    # build the LSTMRNN model
    model = LSTMRNN(TIME_STEPS, INPUT_SIZE, OUTPUT_SIZE, CELL_SIZE, BATCH_SIZE)
    sess = tf.Session()
    # sess.run(tf.initialize_all_variables())  # tf is about to deprecate this form,
    # so we replace it with:
    sess.run(tf.global_variables_initializer())
    # train for 200 iterations
    for i in range(200):
        seq, res, xs = get_batch()  # fetch batch data
        if i == 0:
            # first batch: let the model use its zero state
            feed_dict = {
                model.xs: seq,
                model.ys: res,
            }
        else:
            feed_dict = {
                model.xs: seq,
                model.ys: res,
                model.cell_init_state: state  # keep the state continuous across batches
            }
        # train one step
        _, cost, state, pred = sess.run(
            [model.train_op, model.cost, model.cell_final_state, model.pred],
            feed_dict=feed_dict)
        # print the cost
        if i % 20 == 0:
            print('cost: ', round(cost, 4))
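
To actually watch the cos curve being fitted, you can add live plotting inside the loop, right after the sess.run call (a sketch that assumes an interactive matplotlib backend; call plt.ion() once before the loop):

        # sketch: visualize the first sample of each batch while training
        plt.plot(xs[0, :], res[0].flatten(), 'r',
                 xs[0, :], pred.flatten()[:TIME_STEPS], 'b--')
        plt.ylim((-1.2, 1.2))
        plt.draw()
        plt.pause(0.3)   # short pause so the figure window refreshes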

The complete code is shown below. Note that this listing is actually the companion RNN classification example on MNIST, presented as a full, self-contained program, rather than the regression model assembled above:

import tensorflow as tf
from tensorflow.examples.tutorials.mnist import input_data
import numpy as np
import matplotlib.pyplot as plt

tf.set_random_seed(1)
np.random.seed(1)

# Hyper Parameters
BATCH_SIZE = 64
TIME_STEP = 28      # rnn time step / image height
INPUT_SIZE = 28     # rnn input size / image width
LR = 0.01           # learning rate

# data
mnist = input_data.read_data_sets('./mnist', one_hot=True)   # images have been normalized to range (0,1)
test_x = mnist.test.images[:2000]
test_y = mnist.test.labels[:2000]

# plot one example
print(mnist.train.images.shape)     # (55000, 28 * 28)
print(mnist.train.labels.shape)     # (55000, 10)
plt.imshow(mnist.train.images[0].reshape((28, 28)), cmap='gray')
plt.title('%i' % np.argmax(mnist.train.labels[0]))
plt.show()

# tensorflow placeholders
tf_x = tf.placeholder(tf.float32, [None, TIME_STEP * INPUT_SIZE])   # shape (batch, 784)
image = tf.reshape(tf_x, [-1, TIME_STEP, INPUT_SIZE])               # (batch, height, width)
tf_y = tf.placeholder(tf.int32, [None, 10])                         # input y

# RNN
rnn_cell = tf.nn.rnn_cell.LSTMCell(num_units=64)
outputs, (h_c, h_n) = tf.nn.dynamic_rnn(
    rnn_cell,               # cell you have chosen
    image,                  # input
    initial_state=None,     # the initial hidden state
    dtype=tf.float32,       # must be given if initial_state is None
    time_major=False,       # False: (batch, time step, input); True: (time step, batch, input)
)
output = tf.layers.dense(outputs[:, -1, :], 10)   # output based on the last time step

loss = tf.losses.softmax_cross_entropy(onehot_labels=tf_y, logits=output)  # compute cost
train_op = tf.train.AdamOptimizer(LR).minimize(loss)
accuracy = tf.metrics.accuracy(   # returns (acc, update_op) and creates 2 local variables
    labels=tf.argmax(tf_y, axis=1), predictions=tf.argmax(output, axis=1),)[1]

sess = tf.Session()
init_op = tf.group(tf.global_variables_initializer(), tf.local_variables_initializer())  # the local var is for accuracy_op
sess.run(init_op)   # initialize vars in graph

for step in range(1200):    # training
    b_x, b_y = mnist.train.next_batch(BATCH_SIZE)
    _, loss_ = sess.run([train_op, loss], {tf_x: b_x, tf_y: b_y})
    if step % 50 == 0:      # testing
        accuracy_ = sess.run(accuracy, {tf_x: test_x, tf_y: test_y})
        print('train loss: %.4f' % loss_, '| test accuracy: %.2f' % accuracy_)

# print 10 predictions from test data
test_output = sess.run(output, {tf_x: test_x[:10]})
pred_y = np.argmax(test_output, 1)
print(pred_y, 'prediction number')
print(np.argmax(test_y[:10], 1), 'real number')
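
Both listings are written against the TensorFlow 1.x graph API (placeholders, Session, tf.contrib). If only TensorFlow 2.x is installed, the 1.x behavior can usually be restored via the compat module (an assumption about your environment; note that tf.contrib no longer exists in 2.x, so those calls would still need substitutes such as tf.compat.v1.nn.rnn_cell.BasicLSTMCell):

import tensorflow.compat.v1 as tf
tf.disable_v2_behavior()   # restores graph mode, placeholders, Session, etc.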
 
