Original article:

https://www.jianshu.com/p/1b1ea45fab47

yanghedada

-----------------------------------------------------------------------------------

static_rnn vs. dynamic_rnn

1:     static_rnn

x = tf.placeholder("float", [None, n_steps, n_input])
x1 = tf.unstack(x, n_steps, 1)
lstm_cell = tf.contrib.rnn.BasicLSTMCell(n_hidden, forget_bias=1.0)
outputs, states = tf.contrib.rnn.static_rnn(lstm_cell, x1, dtype=tf.float32)
pred = tf.contrib.layers.fully_connected(outputs[-1], n_classes, activation_fn=None)

2:     dynamic_rnn

x = tf.placeholder("float", [None, n_steps, n_input])
lstm_cell = tf.contrib.rnn.BasicLSTMCell(n_hidden, forget_bias=1.0)
outputs, _ = tf.nn.dynamic_rnn(lstm_cell, x, dtype=tf.float32)
outputs = tf.transpose(outputs, [1, 0, 2])
pred = tf.contrib.layers.fully_connected(outputs[-1], n_classes, activation_fn=None)
BasicLSTMCell:
(num_units: the number of units inside one cell; forget_bias: the bias added to the forget gate, i.e. how much the cell remembers — 1.0 means remember everything)
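As a quick illustration (a minimal sketch of my own, not from the original post), constructing a BasicLSTMCell and printing its state_size shows that one cell actually carries two vectors of num_units elements each, the cell state c and the hidden output h:

import tensorflow as tf

n_hidden = 128
lstm_cell = tf.contrib.rnn.BasicLSTMCell(n_hidden, forget_bias=1.0)
# the state is a (c, h) tuple; the per-step output has num_units elements
print(lstm_cell.state_size)   # LSTMStateTuple(c=128, h=128)
print(lstm_cell.output_size)  # 128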
 
 
tf.contrib.rnn.static_rnn:
"Static" means the RNN is unrolled over the n_steps time steps of a sample: n_steps cells are created in the graph, one per time step.
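One rough way to see this unrolling (a sketch of my own; the op count is only illustrative): every time step adds another copy of the cell's ops to the graph, so the graph grows linearly with n_steps.

import tensorflow as tf

n_input, n_steps, n_hidden = 28, 28, 128
g = tf.Graph()
with g.as_default():
    x = tf.placeholder("float", [None, n_steps, n_input])
    x1 = tf.unstack(x, n_steps, 1)
    cell = tf.contrib.rnn.BasicLSTMCell(n_hidden, forget_bias=1.0)
    n_ops_before = len(g.get_operations())
    tf.contrib.rnn.static_rnn(cell, x1, dtype=tf.float32)
    # the number of ops added here grows roughly linearly with n_steps
    print(len(g.get_operations()) - n_ops_before)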
 
 
tf.nn.dynamic_rnn:
"Dynamic" means only a single cell is created for the sequence; the time steps are fed through that one cell in a loop at run time. A network built with static_rnn takes longer to construct, occupies more memory, and exports a larger model; it carries the intermediate state of every time step, which helps debugging, but at inference time it can only handle the same number of time steps it was built with. A network built with dynamic_rnn occupies less memory, keeps only the final state, and can handle a different number of time steps at inference time.
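To make the last point concrete, here is a minimal sketch (my own addition, not from the original post) of dynamic_rnn handling two sequences of different lengths in one batch via its sequence_length argument; outputs past a sequence's true length are simply zero:

import tensorflow as tf
import numpy as np

n_input, n_steps, n_hidden = 28, 28, 128
x = tf.placeholder(tf.float32, [None, n_steps, n_input])
seq_len = tf.placeholder(tf.int32, [None])
cell = tf.contrib.rnn.BasicLSTMCell(n_hidden, forget_bias=1.0)
outputs, states = tf.nn.dynamic_rnn(cell, x, sequence_length=seq_len, dtype=tf.float32)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    batch = np.random.rand(2, n_steps, n_input).astype(np.float32)
    out = sess.run(outputs, feed_dict={x: batch, seq_len: [28, 10]})
    # the second sample is only 10 steps long: its outputs after step 10 are all zeros
    print(out[1, 10:].sum())   # 0.0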

 
 
 

Differences

1. tf.nn.dynamic_rnn and tf.contrib.rnn.static_rnn take differently formatted inputs.
2. tf.nn.dynamic_rnn and tf.contrib.rnn.static_rnn return differently formatted outputs.
3. tf.nn.dynamic_rnn and tf.contrib.rnn.static_rnn compute the recurrence differently internally (a run-time loop versus a fully unrolled graph); see the sketch after this list.
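The first two points in a minimal sketch (my own illustration, using the same shapes as the examples below): static_rnn consumes and produces Python lists of per-step tensors, while dynamic_rnn consumes and produces a single 3-D tensor.

import tensorflow as tf

n_input, n_steps, n_hidden = 28, 28, 128
x = tf.placeholder(tf.float32, [None, n_steps, n_input])

# static_rnn: input is a list of n_steps tensors of shape [batch, n_input],
# output is a list of n_steps tensors of shape [batch, n_hidden]
x1 = tf.unstack(x, n_steps, 1)
s_out, s_state = tf.contrib.rnn.static_rnn(
    tf.contrib.rnn.BasicLSTMCell(n_hidden), x1, dtype=tf.float32, scope="static")
print(len(s_out), s_out[-1].get_shape())   # 28 (?, 128)

# dynamic_rnn: input is one tensor [batch, n_steps, n_input],
# output is one tensor [batch, n_steps, n_hidden]
d_out, d_state = tf.nn.dynamic_rnn(
    tf.contrib.rnn.BasicLSTMCell(n_hidden), x, dtype=tf.float32, scope="dynamic")
print(d_out.get_shape())                   # (?, 28, 128)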

 
 
 

Please compare the differences carefully in the two complete examples below:

For reference: https://blog.csdn.net/mzpmzk/article/details/80573338


Dynamic RNN

import tensorflow as tf
# Load the MNIST data set
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets("c:/user/administrator/data/", one_hot=True)

n_input = 28    # MNIST data input (img shape: 28*28, one row per time step)
n_steps = 28    # time steps
n_hidden = 128  # hidden layer num of features
n_classes = 10  # MNIST classes (digits 0-9)
batch_size = 128

tf.reset_default_graph()

# tf Graph input
x = tf.placeholder("float", [None, n_steps, n_input])
y = tf.placeholder("float", [None, n_classes])

lstm_cell = tf.contrib.rnn.LSTMCell(n_hidden, forget_bias=1.0)
outputs, _ = tf.nn.dynamic_rnn(lstm_cell, x, dtype=tf.float32)
outputs = tf.transpose(outputs, [1, 0, 2])  # time-major: [n_steps, batch, n_hidden]
# Take the output of the last time step (outputs[-1])
pred = tf.contrib.layers.fully_connected(outputs[-1], n_classes, activation_fn=None)

learning_rate = 0.001
training_iters = 100000
display_step = 10

# Define loss and optimizer
cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits_v2(logits=pred, labels=y))
optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate).minimize(cost)

# Evaluate model
correct_pred = tf.equal(tf.argmax(pred, 1), tf.argmax(y, 1))
accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32))

# Start the session
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    step = 1
    # Keep training until reach max iterations
    while step * batch_size < training_iters:
        batch_x, batch_y = mnist.train.next_batch(batch_size)
        # Reshape data to get 28 sequences of 28 elements
        batch_x = batch_x.reshape((batch_size, n_steps, n_input))
        # Run optimization op (backprop)
        sess.run(optimizer, feed_dict={x: batch_x, y: batch_y})
        if step % display_step == 0:
            # Calculate batch accuracy
            acc = sess.run(accuracy, feed_dict={x: batch_x, y: batch_y})
            # Calculate batch loss
            loss = sess.run(cost, feed_dict={x: batch_x, y: batch_y})
            print("Iter " + str(step * batch_size) + ", Minibatch Loss= " +
                  "{:.6f}".format(loss) + ", Training Accuracy= " +
                  "{:.5f}".format(acc))
        step += 1
    print(" Finished!")

    # Compute accuracy for 128 MNIST test images
    test_len = 128
    test_data = mnist.test.images[:test_len].reshape((-1, n_steps, n_input))
    test_label = mnist.test.labels[:test_len]
    print("Testing Accuracy:",
          sess.run(accuracy, feed_dict={x: test_data, y: test_label}))
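A small aside (my own sketch, not part of the original code): because every MNIST "sequence" here runs the full n_steps, the transpose-and-index trick above can be replaced by reading the final state that dynamic_rnn returns; for an LSTMCell, states.h is exactly the output of the last time step.

outputs, states = tf.nn.dynamic_rnn(lstm_cell, x, dtype=tf.float32)
# states is an LSTMStateTuple(c, h); h equals outputs[:, -1, :] when no
# sequence_length is passed, i.e. all sequences run the full n_steps
pred = tf.contrib.layers.fully_connected(states.h, n_classes, activation_fn=None)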

Static RNN

import tensorflow as tf
# Load the MNIST data set
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets("c:/user/administrator/data/", one_hot=True)

n_input = 28    # MNIST data input (img shape: 28*28, one row per time step)
n_steps = 28    # time steps
n_hidden = 128  # hidden layer num of features
n_classes = 10  # MNIST classes (digits 0-9)
batch_size = 128

tf.reset_default_graph()

# tf Graph input
x = tf.placeholder("float", [None, n_steps, n_input])
y = tf.placeholder("float", [None, n_classes])

# Unstack x into the list of n_steps tensors required by tf.contrib.rnn.static_rnn
x1 = tf.unstack(x, n_steps, 1)
lstm_cell = tf.contrib.rnn.LSTMCell(n_hidden, forget_bias=1.0)
outputs, states = tf.contrib.rnn.static_rnn(lstm_cell, x1, dtype=tf.float32)
# Take the output of the last time step (outputs[-1])
pred = tf.contrib.layers.fully_connected(outputs[-1], n_classes, activation_fn=None)

learning_rate = 0.001
training_iters = 100000
display_step = 10

# Define loss and optimizer
cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits_v2(logits=pred, labels=y))
optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate).minimize(cost)

# Evaluate model
correct_pred = tf.equal(tf.argmax(pred, 1), tf.argmax(y, 1))
accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32))

# Start the session
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    step = 1
    # Keep training until reach max iterations
    while step * batch_size < training_iters:
        batch_x, batch_y = mnist.train.next_batch(batch_size)
        # Reshape data to get 28 sequences of 28 elements
        batch_x = batch_x.reshape((batch_size, n_steps, n_input))
        # Run optimization op (backprop)
        sess.run(optimizer, feed_dict={x: batch_x, y: batch_y})
        if step % display_step == 0:
            # Calculate batch accuracy
            acc = sess.run(accuracy, feed_dict={x: batch_x, y: batch_y})
            # Calculate batch loss
            loss = sess.run(cost, feed_dict={x: batch_x, y: batch_y})
            print("Iter " + str(step * batch_size) + ", Minibatch Loss= " +
                  "{:.6f}".format(loss) + ", Training Accuracy= " +
                  "{:.5f}".format(acc))
        step += 1
    print(" Finished!")

    # Compute accuracy for 128 MNIST test images
    test_len = 128
    test_data = mnist.test.images[:test_len].reshape((-1, n_steps, n_input))
    test_label = mnist.test.labels[:test_len]
    print("Testing Accuracy:",
          sess.run(accuracy, feed_dict={x: test_data, y: test_label}))

The code below is taken from:
凯文自学TensorFlow

# -*- coding: utf-8 -*-

import tensorflow as tf
# Load the MNIST data set
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets("c:/user/administrator/data/", one_hot=True)

n_input = 28    # MNIST data input (img shape: 28*28, one row per time step)
n_steps = 28    # time steps
n_hidden = 128  # hidden layer num of features
n_classes = 10  # MNIST classes (digits 0-9)
batch_size = 128

tf.reset_default_graph()

# tf Graph input
x = tf.placeholder("float", [None, n_steps, n_input])
y = tf.placeholder("float", [None, n_classes])

# Unstack x into the list format required by tf.contrib.rnn.static_rnn
#x1 = tf.unstack(x, n_steps, 1)

# BasicLSTMCell(num_units: the number of units inside one cell;
#               forget_bias: how much the forget gate remembers, 1.0 means remember everything)
# Static (tf.contrib.rnn.static_rnn) means the RNN is unrolled over the n_steps time steps
# of a sample, creating n_steps cells in the graph;
# dynamic (tf.nn.dynamic_rnn) means only one cell is created and the time steps are fed
# through it in a loop.
"""
A network built with static_rnn takes longer to construct, occupies more memory, and exports
a larger model; it keeps the intermediate state of every time step, which helps debugging,
but at inference time the sequence length must equal the one used for training. A network
built dynamically occupies less memory; the model only keeps the final state and can handle
a different number of time steps at inference time.
"""
#1 BasicLSTMCell
#lstm_cell = tf.contrib.rnn.BasicLSTMCell(n_hidden, forget_bias=1.0)
#outputs, states = tf.contrib.rnn.static_rnn(lstm_cell, x1, dtype=tf.float32)

#2 LSTMCell, an advanced LSTM implementation
# (use_peepholes: default False, True enables peephole connections;
#  cell_clip: clip the cell state to the given value before the output;
#  initializer: the weight initializer to use;
#  num_proj: output dimension of the projection used to compress the model;
#  proj_clip: clip the projected output (num_proj) to the given value)
#lstm_cell = tf.contrib.rnn.LSTMCell(n_hidden, forget_bias=1.0)
#outputs, states = tf.contrib.rnn.static_rnn(lstm_cell, x1, dtype=tf.float32)

#3 GRU cell
#gru = tf.contrib.rnn.GRUCell(n_hidden)
#outputs = tf.contrib.rnn.static_rnn(gru, x1, dtype=tf.float32)

#4 Dynamic RNN: the input here is x itself, the [None, n_steps, n_input] tensor
# See https://blog.csdn.net/mzpmzk/article/details/80573338 for details
gru = tf.contrib.rnn.GRUCell(n_hidden)
outputs, _ = tf.nn.dynamic_rnn(gru, x, dtype=tf.float32)
outputs = tf.transpose(outputs, [1, 0, 2])  # time-major: [n_steps, batch, n_hidden]
# Take the output of the last time step (outputs[-1])
pred = tf.contrib.layers.fully_connected(outputs[-1], n_classes, activation_fn=None)

learning_rate = 0.001
training_iters = 100000
display_step = 10

# Define loss and optimizer
cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits_v2(logits=pred, labels=y))
optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate).minimize(cost)

# Evaluate model
correct_pred = tf.equal(tf.argmax(pred, 1), tf.argmax(y, 1))
accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32))

# Start the session
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    step = 1
    # Keep training until reach max iterations
    while step * batch_size < training_iters:
        batch_x, batch_y = mnist.train.next_batch(batch_size)
        # Reshape data to get 28 sequences of 28 elements
        batch_x = batch_x.reshape((batch_size, n_steps, n_input))
        # Run optimization op (backprop)
        sess.run(optimizer, feed_dict={x: batch_x, y: batch_y})
        if step % display_step == 0:
            # Calculate batch accuracy
            acc = sess.run(accuracy, feed_dict={x: batch_x, y: batch_y})
            # Calculate batch loss
            loss = sess.run(cost, feed_dict={x: batch_x, y: batch_y})
            print("Iter " + str(step * batch_size) + ", Minibatch Loss= " +
                  "{:.6f}".format(loss) + ", Training Accuracy= " +
                  "{:.5f}".format(acc))
        step += 1
    print(" Finished!")

    # Compute accuracy for 128 MNIST test images
    test_len = 128
    test_data = mnist.test.images[:test_len].reshape((-1, n_steps, n_input))
    test_label = mnist.test.labels[:test_len]
    print("Testing Accuracy:",
          sess.run(accuracy, feed_dict={x: test_data, y: test_label}))
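For reference, a minimal sketch of what the advanced LSTMCell arguments described in the comments above look like in use (the values are arbitrary illustrations of my own, not recommendations from the original post):

# LSTMCell with peepholes, cell-state clipping, a custom initializer,
# and a projection that compresses the per-step output from n_hidden to 64 units
proj_cell = tf.contrib.rnn.LSTMCell(
    n_hidden,
    use_peepholes=True,                       # enable peephole connections
    cell_clip=10.0,                           # clip the cell state to [-10, 10]
    initializer=tf.orthogonal_initializer(),  # custom weight initializer
    num_proj=64,                              # project the output down to 64 dimensions
    proj_clip=5.0,                            # clip the projected output to [-5, 5]
    forget_bias=1.0)
outputs, states = tf.nn.dynamic_rnn(proj_cell, x, dtype=tf.float32, scope="lstm_proj")
# with num_proj set, each per-step output (and states.h) has 64 elements instead of n_hidden
print(proj_cell.output_size)                  # 64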

-----------------------------------------------------------------------------------
