GitHub: https://github.com/zle1992/Seq2Seq-Chatbot

1. Note that at inference time you must reuse the variables built for training (reuse=True on the shared variable scopes).
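A minimal sketch of that reuse pattern (hypothetical scope and variable names, TF 1.x):

import tensorflow as tf

with tf.variable_scope('decoder'):
    # training graph: creates the variable
    w_train = tf.get_variable('w', shape=[128, 128])

with tf.variable_scope('decoder', reuse=True):
    # inference graph: fetches the SAME variable instead of creating a new one
    w_infer = tf.get_variable('w')

assert w_train is w_infer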

2. If you are using the BeamSearchDecoder with a cell wrapped in AttentionWrapper, then you must ensure that:

  • The encoder output has been tiled to beam_width via tf.contrib.seq2seq.tile_batch (NOT tf.tile).
  • The batch_size argument passed to the zero_state method of this wrapper is equal to true_batch_size * beam_width.
  • The initial state created with zero_state above contains a cell_state value containing properly tiled final state from the encoder.
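The complete TF 1.x example below (adapted from the higepon gist linked under References) exercises all three points; inline comments mark where each requirement is satisfied.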
import tensorflow as tf
from tensorflow.python.layers.core import Dense

BEAM_WIDTH = 5
BATCH_SIZE = 128

# INPUTS
X = tf.placeholder(tf.int32, [BATCH_SIZE, None])
Y = tf.placeholder(tf.int32, [BATCH_SIZE, None])
X_seq_len = tf.placeholder(tf.int32, [BATCH_SIZE])
Y_seq_len = tf.placeholder(tf.int32, [BATCH_SIZE])

# ENCODER
encoder_out, encoder_state = tf.nn.dynamic_rnn(
    cell = tf.nn.rnn_cell.BasicLSTMCell(128),
    inputs = tf.contrib.layers.embed_sequence(X, 10000, 128),
    sequence_length = X_seq_len,
    dtype = tf.float32)

# DECODER COMPONENTS
Y_vocab_size = 10000
decoder_embedding = tf.Variable(tf.random_uniform([Y_vocab_size, 128], -1.0, 1.0))
projection_layer = Dense(Y_vocab_size)

# ATTENTION (TRAINING)
with tf.variable_scope('shared_attention_mechanism'):
    attention_mechanism = tf.contrib.seq2seq.LuongAttention(
        num_units = 128,
        memory = encoder_out,
        memory_sequence_length = X_seq_len)

decoder_cell = tf.contrib.seq2seq.AttentionWrapper(
    cell = tf.nn.rnn_cell.BasicLSTMCell(128),
    attention_mechanism = attention_mechanism,
    attention_layer_size = 128)

# DECODER (TRAINING)
training_helper = tf.contrib.seq2seq.TrainingHelper(
    inputs = tf.nn.embedding_lookup(decoder_embedding, Y),
    sequence_length = Y_seq_len,
    time_major = False)
training_decoder = tf.contrib.seq2seq.BasicDecoder(
    cell = decoder_cell,
    helper = training_helper,
    initial_state = decoder_cell.zero_state(BATCH_SIZE, tf.float32).clone(cell_state=encoder_state),
    output_layer = projection_layer)
with tf.variable_scope('decode_with_shared_attention'):
    training_decoder_output, _, _ = tf.contrib.seq2seq.dynamic_decode(
        decoder = training_decoder,
        impute_finished = True,
        maximum_iterations = tf.reduce_max(Y_seq_len))
training_logits = training_decoder_output.rnn_output

# BEAM SEARCH TILE
# Requirement (1): tile encoder tensors to beam_width via tile_batch, NOT tf.tile.
encoder_out = tf.contrib.seq2seq.tile_batch(encoder_out, multiplier=BEAM_WIDTH)
X_seq_len = tf.contrib.seq2seq.tile_batch(X_seq_len, multiplier=BEAM_WIDTH)
encoder_state = tf.contrib.seq2seq.tile_batch(encoder_state, multiplier=BEAM_WIDTH)

# ATTENTION (PREDICTING)
# reuse=True so inference shares the attention variables created for training.
with tf.variable_scope('shared_attention_mechanism', reuse=True):
    attention_mechanism = tf.contrib.seq2seq.LuongAttention(
        num_units = 128,
        memory = encoder_out,
        memory_sequence_length = X_seq_len)

decoder_cell = tf.contrib.seq2seq.AttentionWrapper(
    cell = tf.nn.rnn_cell.BasicLSTMCell(128),
    attention_mechanism = attention_mechanism,
    attention_layer_size = 128)

# DECODER (PREDICTING)
predicting_decoder = tf.contrib.seq2seq.BeamSearchDecoder(
    cell = decoder_cell,
    embedding = decoder_embedding,
    start_tokens = tf.tile(tf.constant([1], dtype=tf.int32), [BATCH_SIZE]),
    end_token = 2,
    # Requirement (2): zero_state gets true_batch_size * beam_width;
    # Requirement (3): clone() injects the tiled encoder final state.
    initial_state = decoder_cell.zero_state(BATCH_SIZE * BEAM_WIDTH, tf.float32).clone(cell_state=encoder_state),
    beam_width = BEAM_WIDTH,
    output_layer = projection_layer,
    length_penalty_weight = 0.0)
with tf.variable_scope('decode_with_shared_attention', reuse=True):
    predicting_decoder_output, _, _ = tf.contrib.seq2seq.dynamic_decode(
        decoder = predicting_decoder,
        impute_finished = False,
        maximum_iterations = 2 * tf.reduce_max(Y_seq_len))
# predicted_ids: [batch, time, beam_width]; take the top-scoring beam.
predicting_logits = predicting_decoder_output.predicted_ids[:, :, 0]

print('successful')
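To actually train the graph above you still need a loss and an optimizer; a minimal sketch (the masking and the Adam choice are assumptions on top of the original gist, not part of it):

# Mask out padded target positions, then average cross-entropy over real steps.
masks = tf.sequence_mask(Y_seq_len, tf.reduce_max(Y_seq_len), dtype=tf.float32)
loss = tf.contrib.seq2seq.sequence_loss(
    logits = training_logits,   # [batch, time, Y_vocab_size]
    targets = Y,                # [batch, time]
    weights = masks)
train_op = tf.train.AdamOptimizer(1e-3).minimize(loss)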

References:

https://gist.github.com/higepon/eb81ba0f6663a57ff1908442ce753084

https://www.tensorflow.org/api_docs/python/tf/contrib/seq2seq/BeamSearchDecoder

https://github.com/tensorflow/nmt#beam-search
