Table of Contents

Skip-gram: naive PyTorch implementation
Network structure
Training loop: using nn.NLLLoss()
Preparing batches: building (center, context) pairs for the unsupervised task
Sampling optimization: subsampling to down-weight high-frequency words
Skip-gram, advanced: negative sampling
The usual optimizations target computational efficiency: negative sampling and hierarchical softmax
Negative sampling: implementation
Negative sampling: principle
Negative sampling: the noise (sampling) distribution
Negative sampling: forward pass
Negative sampling: training loop
Skip-gram: naive PyTorch implementation

Network structure

import torch
from torch import nn, optim
import numpy as np


class SkipGram(nn.Module):
    def __init__(self, n_vocab, n_embed):
        super().__init__()

        self.embed = nn.Embedding(n_vocab, n_embed)
        self.output = nn.Linear(n_embed, n_vocab)
        self.log_softmax = nn.LogSoftmax(dim=1)

    def forward(self, x):
        x = self.embed(x)                  # (batch,) -> (batch, n_embed)
        scores = self.output(x)            # (batch, n_embed) -> (batch, n_vocab)
        log_ps = self.log_softmax(scores)  # log-probabilities over the vocabulary

        return log_ps
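A quick shape check with toy numbers (purely illustrative, not part of the original post):

model = SkipGram(n_vocab=10, n_embed=4)
x = torch.LongTensor([1, 2, 3])   # a batch of three center-word indices
print(model(x).shape)             # torch.Size([3, 10]): one log-probability per vocabulary word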

Training loop: using nn.NLLLoss()

# check if GPU is available
device = 'cuda' if torch.cuda.is_available() else 'cpu'

embedding_dim = 300  # you can change, if you want

model = SkipGram(len(vocab_to_int), embedding_dim).to(device)
criterion = nn.NLLLoss()
optimizer = optim.Adam(model.parameters(), lr=0.003)

print_every = 500
steps = 0
epochs = 5

# train for some number of epochs
for e in range(epochs):

    # get input and target batches
    for inputs, targets in get_batches(train_words, 512):
        steps += 1
        inputs, targets = torch.LongTensor(inputs), torch.LongTensor(targets)
        inputs, targets = inputs.to(device), targets.to(device)

        log_ps = model(inputs)
        loss = criterion(log_ps, targets)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
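The loop defines print_every but never reports the loss. A minimal logging block that could sit at the end of the inner loop (my addition, not from the original notebook):

        # (sketch) report the training loss every print_every steps
        if steps % print_every == 0:
            print(f"Epoch {e+1}/{epochs}  Step {steps}  Loss: {loss.item():.4f}")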

Preparing batches: the task is unsupervised, so we build (center, context) pairs from the data:

def get_target(words, idx, window_size=5):
    ''' Get a list of words in a window around an index. '''

    R = np.random.randint(1, window_size+1)
    start = idx - R if (idx - R) > 0 else 0
    stop = idx + R
    target_words = words[start:idx] + words[idx+1:stop+1]

    return list(target_words)


def get_batches(words, batch_size, window_size=5):
    ''' Create a generator of word batches as a tuple (inputs, targets) '''

    n_batches = len(words)//batch_size

    # only full batches
    words = words[:n_batches*batch_size]

    for idx in range(0, len(words), batch_size):
        x, y = [], []
        batch = words[idx:idx+batch_size]
        for ii in range(len(batch)):
            batch_x = batch[ii]
            batch_y = get_target(batch, ii, window_size)
            y.extend(batch_y)
            x.extend([batch_x]*len(batch_y))
        yield x, y
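A toy run of the generator, to make the pairing concrete (the word IDs here are made up for illustration):

words = [0, 1, 2, 3, 4, 5, 6, 7]
x, y = next(get_batches(words, batch_size=4, window_size=2))
print(x)  # each center word is repeated once per context word it pairs with
print(y)  # the matching context words, drawn from a random window around each center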
Sampling optimization: subsampling to down-weight high-frequency words

Words that show up often such as "the", "of", and "for" don't provide much context to the nearby words. If we discard some of them, we can remove some of the noise from our data and in return get faster training and better representations. This process is called subsampling by Mikolov. For each word $w_i$ in the training set, we'll discard it with probability given by

$$P(w_i) = 1 - \sqrt{\frac{t}{f(w_i)}}$$

where $t$ is a threshold parameter and $f(w_i)$ is the frequency of word $w_i$ in the total dataset.
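To get a feel for the numbers: with $t = 10^{-5}$, a word that makes up 1% of the corpus ($f(w_i) = 0.01$) is dropped with probability $1 - \sqrt{10^{-5}/0.01} \approx 0.97$, while a rarer word with $f(w_i) = 2 \times 10^{-5}$ is dropped only with probability $1 - \sqrt{0.5} \approx 0.29$.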

from collections import Counter
import random
import numpy as np

threshold = 1e-5
word_counts = Counter(int_words)
#print(list(word_counts.items())[0]) # dictionary of int_words, how many times they appear

total_count = len(int_words)
freqs = {word: count/total_count for word, count in word_counts.items()}
p_drop = {word: 1 - np.sqrt(threshold/freqs[word]) for word in word_counts}
# discard some frequent words, according to the subsampling equation
# create a new list of words for training
train_words = [word for word in int_words if random.random() < (1 - p_drop[word])]
Skip-gram, advanced: negative sampling

The usual optimizations target computational efficiency: negative sampling and hierarchical softmax. With the full softmax, every training step computes scores (and gradients) over the entire vocabulary, which is expensive for large vocabularies; both techniques avoid normalizing over all words.

Negative sampling: implementation

Negative sampling: principle
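Each (center, context) pair is treated as a positive example, and $K$ words drawn from a noise distribution are treated as negatives; instead of a softmax over the whole vocabulary, only these $K+1$ dot products are scored. With $v_c$ the input vector of the center word, $u_o$ the output vector of the observed context word, and $u_k$ the output vectors of the $K$ noise words, the per-pair loss (the quantity the class below computes, averaged over the batch) is

$$ -\log \sigma\!\left(u_o^\top v_c\right) \;-\; \sum_{k=1}^{K} \log \sigma\!\left(-u_k^\top v_c\right) $$

where $\sigma$ denotes the sigmoid function.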

class NegativeSamplingLoss(nn.Module):
    def __init__(self):
        super().__init__()

    def forward(self, input_vectors, output_vectors, noise_vectors):

        batch_size, embed_size = input_vectors.shape

        # Input vectors should be a batch of column vectors
        input_vectors = input_vectors.view(batch_size, embed_size, 1)

        # Output vectors should be a batch of row vectors
        output_vectors = output_vectors.view(batch_size, 1, embed_size)

        # bmm = batch matrix multiplication
        # correct log-sigmoid loss
        out_loss = torch.bmm(output_vectors, input_vectors).sigmoid().log()
        out_loss = out_loss.squeeze()

        # incorrect log-sigmoid loss
        noise_loss = torch.bmm(noise_vectors.neg(), input_vectors).sigmoid().log()
        noise_loss = noise_loss.squeeze().sum(1)  # sum the losses over the sample of noise vectors

        # negate and sum correct and noisy log-sigmoid losses
        # return average batch loss
        return -(out_loss + noise_loss).mean()

Negative sampling: the noise (sampling) distribution

# Get our noise distribution
# Using word frequencies calculated earlier in the notebook
word_freqs = np.array(sorted(freqs.values(), reverse=True))
unigram_dist = word_freqs/word_freqs.sum()
noise_dist = torch.from_numpy(unigram_dist**(0.75)/np.sum(unigram_dist**(0.75)))
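The exponent 0.75 is the choice from the original word2vec paper: the noise distribution is the unigram distribution raised to the 3/4 power and renormalized,

$$P_n(w) = \frac{U(w)^{3/4}}{\sum_{w'} U(w')^{3/4}}$$

which flattens the distribution slightly, so rare words are drawn as negatives a bit more often than their raw frequency would suggest.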

Negative sampling: forward pass

class SkipGramNeg(nn.Module):
    def __init__(self, n_vocab, n_embed, noise_dist=None):
        super().__init__()

        self.n_vocab = n_vocab
        self.n_embed = n_embed
        self.noise_dist = noise_dist

        # define embedding layers for input and output words
        self.in_embed = nn.Embedding(n_vocab, n_embed)
        self.out_embed = nn.Embedding(n_vocab, n_embed)

        # Initialize embedding tables with uniform distribution
        # I believe this helps with convergence
        self.in_embed.weight.data.uniform_(-1, 1)
        self.out_embed.weight.data.uniform_(-1, 1)

    def forward_input(self, input_words):
        input_vectors = self.in_embed(input_words)
        return input_vectors

    def forward_output(self, output_words):
        output_vectors = self.out_embed(output_words)
        return output_vectors

    def forward_noise(self, batch_size, n_samples):
        """ Generate noise vectors with shape (batch_size, n_samples, n_embed). """
        if self.noise_dist is None:
            # Sample words uniformly
            noise_dist = torch.ones(self.n_vocab)
        else:
            noise_dist = self.noise_dist

        # Sample words from our noise distribution
        noise_words = torch.multinomial(noise_dist,
                                        batch_size * n_samples,
                                        replacement=True)

        # use self (not the global model) so the class is self-contained
        device = "cuda" if self.out_embed.weight.is_cuda else "cpu"
        noise_words = noise_words.to(device)

        noise_vectors = self.out_embed(noise_words).view(batch_size, n_samples, self.n_embed)

        return noise_vectors
Negative sampling: training loop

device = 'cuda' if torch.cuda.is_available() else 'cpu'

# Get our noise distribution
# Using word frequencies calculated earlier in the notebook
word_freqs = np.array(sorted(freqs.values(), reverse=True))
unigram_dist = word_freqs/word_freqs.sum()
noise_dist = torch.from_numpy(unigram_dist**(0.75)/np.sum(unigram_dist**(0.75)))

# instantiating the model
embedding_dim = 300
model = SkipGramNeg(len(vocab_to_int), embedding_dim, noise_dist=noise_dist).to(device)

# using the loss that we defined
criterion = NegativeSamplingLoss()
optimizer = optim.Adam(model.parameters(), lr=0.003)

print_every = 1500
steps = 0
epochs = 5

# train for some number of epochs
for e in range(epochs):

    # get our input, target batches
    for input_words, target_words in get_batches(train_words, 512):
        steps += 1
        inputs, targets = torch.LongTensor(input_words), torch.LongTensor(target_words)
        inputs, targets = inputs.to(device), targets.to(device)

        # input, output, and noise vectors
        input_vectors = model.forward_input(inputs)
        output_vectors = model.forward_output(targets)
        noise_vectors = model.forward_noise(inputs.shape[0], 5)

        # negative sampling loss
        loss = criterion(input_vectors, output_vectors, noise_vectors)

        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
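After training, the rows of the input embedding table are the learned word vectors. A minimal sketch of pulling them out (the lookup word 'the' is just a hypothetical example and assumes it appears in vocab_to_int):

# (sketch) extract the learned embeddings as a NumPy matrix
embeddings = model.in_embed.weight.data.cpu().numpy()
vector = embeddings[vocab_to_int['the']]  # hypothetical lookup; assumes 'the' is in the vocabulary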
