1. Paper: Learning Phrase Representations using RNN Encoder–Decoder for Statistical Machine Translation

Encoder

  At each time step one word is fed in, and the hidden state is updated according to h_t = f(h_{t-1}, x_t), where the activation function f can be sigmoid, tanh, ReLU, softplus, etc.
  After every word of the sequence has been read, a fixed-length vector c = tanh(V h_N) is obtained.
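A minimal sketch of this encoder recurrence, separate from the full model further below; the weight names and toy sizes here are my own assumptions for illustration:

import torch

vocab_size, hidden_size, seq_len = 10, 8, 5      # toy sizes, assumed

W_x = torch.randn(hidden_size, vocab_size)       # input-to-hidden weights
W_h = torch.randn(hidden_size, hidden_size)      # hidden-to-hidden weights
V   = torch.randn(hidden_size, hidden_size)      # maps the last hidden state to c

xs = torch.randn(seq_len, vocab_size)            # stand-ins for the word vectors x_1..x_N
h  = torch.zeros(hidden_size)                    # h_0

# h_t = f(h_{t-1}, x_t) with f = tanh
for x_t in xs:
    h = torch.tanh(W_h @ h + W_x @ x_t)

# fixed-length summary of the whole sequence: c = tanh(V h_N)
c = torch.tanh(V @ h)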
Decoder

  As the architecture diagram shows, the hidden state h_t at time t is determined by h_{t-1}, y_{t-1}, and c: h_t = f(h_{t-1}, y_{t-1}, c), where h_0 = tanh(V' c).
  The final output y_t is determined by h_t, y_{t-1}, and c:
  P(y_t | y_{t-1}, y_{t-2}, ..., y_1, c) = g(h_t, y_{t-1}, c)

In the above, f and g are both activation functions; g is usually a softmax.
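A minimal sketch of a single decoder step under these equations; the parameter names and sizes below are assumptions for illustration, not the paper's notation:

import torch
import torch.nn.functional as F

hidden_size, vocab_size = 8, 10                  # toy sizes, assumed

W_h = torch.randn(hidden_size, hidden_size)      # weights on the previous hidden state
W_y = torch.randn(hidden_size, vocab_size)       # weights on the previous output word (one-hot)
W_c = torch.randn(hidden_size, hidden_size)      # weights on the context vector c
V_p = torch.randn(hidden_size, hidden_size)      # V' used to initialize h_0
W_o = torch.randn(vocab_size, hidden_size)       # output projection used by g
U_y = torch.randn(vocab_size, vocab_size)
U_c = torch.randn(vocab_size, hidden_size)

c      = torch.randn(hidden_size)                # summary vector from the encoder
h_prev = torch.tanh(V_p @ c)                     # h_0 = tanh(V' c)
y_prev = torch.zeros(vocab_size); y_prev[0] = 1.0  # one-hot start symbol

# h_t = f(h_{t-1}, y_{t-1}, c) with f = tanh
h_t = torch.tanh(W_h @ h_prev + W_y @ y_prev + W_c @ c)

# P(y_t | y_{t-1}, ..., y_1, c) = g(h_t, y_{t-1}, c) with g = softmax over the vocabulary
logits = W_o @ h_t + U_y @ y_prev + U_c @ c
p_y_t  = F.softmax(logits, dim=0)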

Based on this, I implemented the earliest version of the seq2seq model in PyTorch:

(Reference: https://github.com/graykode/nlp-tutorial)

import numpy as np
import torch
import torch.nn as nn
from torch.autograd import Variable

dtype = torch.FloatTensor

# S: symbol that marks the start of the decoder input
# E: symbol that marks the end of the decoder output
# P: padding symbol used when a sequence is shorter than n_step time steps
char_arr = [c for c in 'SEPabcdefghijklmnopqrstuvwxyz']
num_dic = {n: i for i, n in enumerate(char_arr)}

seq_data = [['man', 'women'], ['black', 'white'], ['king', 'queen'],
            ['girl', 'boy'], ['up', 'down'], ['high', 'low']]

# Seq2Seq parameters
n_step = 5
n_hidden = 128
n_class = len(num_dic)
batch_size = len(seq_data)

def make_batch(seq_data):
    input_batch, output_batch, target_batch = [], [], []

    for seq in seq_data:
        for i in range(2):
            seq[i] = seq[i] + 'P' * (n_step - len(seq[i]))

        input = [num_dic[n] for n in seq[0]]           # encoder input
        output = [num_dic[n] for n in ('S' + seq[1])]  # decoder input, prefixed with 'S'
        target = [num_dic[n] for n in (seq[1] + 'E')]  # decoder target, suffixed with 'E'

        input_batch.append(np.eye(n_class)[input])     # one-hot
        output_batch.append(np.eye(n_class)[output])   # one-hot
        target_batch.append(target)                    # not one-hot

    # make tensors
    return (Variable(torch.Tensor(np.array(input_batch))),
            Variable(torch.Tensor(np.array(output_batch))),
            Variable(torch.LongTensor(target_batch)))

# Model
class Seq2Seq(nn.Module):
    def __init__(self):
        super(Seq2Seq, self).__init__()
        self.enc_cell = nn.RNN(input_size=n_class, hidden_size=n_hidden, dropout=0.5)
        self.dec_cell = nn.RNN(input_size=n_class, hidden_size=n_hidden, dropout=0.5)
        self.fc = nn.Linear(n_hidden, n_class)

    def forward(self, enc_input, enc_hidden, dec_input):
        enc_input = enc_input.transpose(0, 1)  # [max_len(=n_step), batch_size, n_class]
        dec_input = dec_input.transpose(0, 1)  # [max_len+1, batch_size, n_class]

        # enc_states : [num_layers(=1) * num_directions(=1), batch_size, n_hidden]
        _, enc_states = self.enc_cell(enc_input, enc_hidden)
        # the encoder's final state initializes the decoder
        # outputs : [max_len+1(=6), batch_size, num_directions(=1) * n_hidden(=128)]
        outputs, _ = self.dec_cell(dec_input, enc_states)

        model = self.fc(outputs)  # model : [max_len+1(=6), batch_size, n_class]
        return model

input_batch, output_batch, target_batch = make_batch(seq_data)

model = Seq2Seq()
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=0.001)

for epoch in range(5000):
    # hidden shape: [num_layers * num_directions, batch_size, n_hidden]
    hidden = Variable(torch.zeros(1, batch_size, n_hidden))

    # input_batch  : [batch_size, max_len(=n_step), n_class]
    # output_batch : [batch_size, max_len+1 (because of 'S' or 'E'), n_class]
    # target_batch : [batch_size, max_len+1], not one-hot
    output = model(input_batch, hidden, output_batch)
    # output : [max_len+1, batch_size, n_class]
    output = output.transpose(0, 1)  # [batch_size, max_len+1(=6), n_class]

    loss = 0
    for i in range(0, len(target_batch)):
        # output[i] : [max_len+1, n_class], target_batch[i] : [max_len+1]
        loss += criterion(output[i], target_batch[i])

    if (epoch + 1) % 1000 == 0:
        print('Epoch:', '%04d' % (epoch + 1), 'cost =', '{:.6f}'.format(loss))

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

# Test
def translate(word):
    input_batch, output_batch, _ = make_batch([[word, 'P' * len(word)]])

    # hidden shape: [num_layers * num_directions, batch_size(=1), n_hidden]
    hidden = Variable(torch.zeros(1, 1, n_hidden))
    output = model(input_batch, hidden, output_batch)
    # output : [max_len+1(=6), batch_size(=1), n_class]

    predict = output.data.max(2, keepdim=True)[1]  # index of the best class at each step
    decoded = [char_arr[i.item()] for i in predict]
    end = decoded.index('E')
    translated = ''.join(decoded[:end])

    return translated.replace('P', '')

print('test')
print('man ->', translate('man'))
print('mans ->', translate('mans'))
print('king ->', translate('king'))
print('black ->', translate('black'))
print('upp ->', translate('upp'))
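Note that this tutorial implementation only uses the encoder summary as the decoder's initial hidden state (enc_states); unlike the paper's formulation, c is not fed into every decoder step. As a sketch only, one way to expose c at each step is to concatenate it to every decoder input; the class name and details below are my own assumptions, not part of the tutorial code:

import torch
import torch.nn as nn

class Seq2SeqWithContext(nn.Module):
    def __init__(self, n_class, n_hidden):
        super().__init__()
        self.enc_cell = nn.RNN(input_size=n_class, hidden_size=n_hidden)
        # decoder input at every step is [y_{t-1} ; c]
        self.dec_cell = nn.RNN(input_size=n_class + n_hidden, hidden_size=n_hidden)
        self.fc = nn.Linear(n_hidden, n_class)

    def forward(self, enc_input, enc_hidden, dec_input):
        enc_input = enc_input.transpose(0, 1)        # [n_step, batch, n_class]
        dec_input = dec_input.transpose(0, 1)        # [n_step+1, batch, n_class]

        _, c = self.enc_cell(enc_input, enc_hidden)  # c : [1, batch, n_hidden]
        c_rep = c.repeat(dec_input.size(0), 1, 1)    # repeat c for every decoder step
        dec_in = torch.cat([dec_input, c_rep], dim=2)  # [n_step+1, batch, n_class+n_hidden]

        outputs, _ = self.dec_cell(dec_in, c)        # decoder also initialized from c
        return self.fc(outputs)                      # [n_step+1, batch, n_class]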

Later, the attention mechanism was proposed on top of the seq2seq model.

Paper: NEURAL MACHINE TRANSLATION BY JOINTLY LEARNING TO ALIGN AND TRANSLATE
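A minimal sketch of the additive attention scoring that paper introduces, which attends over the per-step encoder hidden states instead of a single fixed vector c; all tensor names and sizes below are assumptions for illustration:

import torch
import torch.nn.functional as F

n_step, batch, n_hidden = 5, 1, 128

enc_outputs = torch.randn(n_step, batch, n_hidden)  # encoder hidden states h_1..h_N
s_prev      = torch.randn(batch, n_hidden)          # previous decoder state s_{t-1}

# Additive attention: e_{tj} = v^T tanh(W s_{t-1} + U h_j)
W = torch.randn(n_hidden, n_hidden)
U = torch.randn(n_hidden, n_hidden)
v = torch.randn(n_hidden)

scores = torch.stack([torch.tanh(s_prev @ W.T + h_j @ U.T) @ v
                      for h_j in enc_outputs])          # [n_step, batch]
alpha = F.softmax(scores, dim=0)                        # attention weights over source positions
context = (alpha.unsqueeze(2) * enc_outputs).sum(dim=0) # c_t = sum_j alpha_{tj} h_j : [batch, n_hidden]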
