Word Embeddings: Encoding Lexical Semantics
- Getting Dense Word Embeddings
- Word Embeddings in Pytorch
- An Example: N-Gram Language Modeling
- Exercise: Computing Word Embeddings: Continuous Bag-of-Words
Word Embeddings in Pytorch
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim

torch.manual_seed(1)

word_to_ix = {"hello": 0, "world": 1}
embeds = nn.Embedding(2, 5) # 2 words in vocab, 5 dimensional embeddings
lookup_tensor = torch.tensor([word_to_ix["hello"]], dtype=torch.long)
hello_embed = embeds(lookup_tensor)
print(hello_embed)
Out:
tensor([[ 0.6614, 0.2669, 0.0617, 0.6213, -0.4519]],
grad_fn=<EmbeddingBackward>)
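The embedding module accepts a whole tensor of indices at once, not just a single word. As a small illustrative extension of the snippet above (not part of the original tutorial), looking up both vocabulary words in one call returns one 5-dimensional row per index:
# Hypothetical extension: batch lookup of both words in the toy vocabulary
all_indices = torch.tensor([word_to_ix["hello"], word_to_ix["world"]], dtype=torch.long)
print(embeds(all_indices).shape)  # torch.Size([2, 5])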
An Example: N-Gram Language Modeling
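Recall that in an n-gram language model, given a sequence of words w, we want to compute P(w_i | w_{i-1}, ..., w_{i-n+1}), where w_i is the i-th word of the sequence. The code below builds trigrams from Shakespeare's Sonnet 2 and trains a small model to predict each word from the two words that precede it.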
CONTEXT_SIZE = 2
EMBEDDING_DIM = 10
# We will use Shakespeare Sonnet 2
test_sentence = """When forty winters shall besiege thy brow,
And dig deep trenches in thy beauty's field,
Thy youth's proud livery so gazed on now,
Will be a totter'd weed of small worth held:
Then being asked, where all thy beauty lies,
Where all the treasure of thy lusty days;
To say, within thine own deep sunken eyes,
Were an all-eating shame, and thriftless praise.
How much more praise deserv'd thy beauty's use,
If thou couldst answer 'This fair child of mine
Shall sum my count, and make my old excuse,'
Proving his beauty by succession thine!
This were to be new made when thou art old,
And see thy blood warm when thou feel'st it cold.""".split()
# we should tokenize the input, but we will ignore that for now
# build a list of tuples. Each tuple is ([ word_i-2, word_i-1 ], target word)
trigrams = [([test_sentence[i], test_sentence[i + 1]], test_sentence[i + 2])
            for i in range(len(test_sentence) - 2)]

vocab = set(test_sentence)  # the elements of a set are distinct, so this deduplicates the words
word_to_ix = {word: i for i, word in enumerate(vocab)}


class NGramLanguageModeler(nn.Module):

    def __init__(self, vocab_size, embedding_dim, context_size):
        super(NGramLanguageModeler, self).__init__()
        self.embeddings = nn.Embedding(vocab_size, embedding_dim)
        self.linear1 = nn.Linear(context_size * embedding_dim, 128)
        self.linear2 = nn.Linear(128, vocab_size)

    def forward(self, inputs):
        embeds = self.embeddings(inputs).view((1, -1))
        out = F.relu(self.linear1(embeds))
        out = self.linear2(out)
        log_probs = F.log_softmax(out, dim=1)
        return log_probs


losses = []
loss_function = nn.NLLLoss()
model = NGramLanguageModeler(len(vocab), EMBEDDING_DIM, CONTEXT_SIZE)
optimizer = optim.SGD(model.parameters(), lr=0.001)

for epoch in range(10):
    total_loss = 0
    for context, target in trigrams:
        # turn the two context words into integer indices wrapped in a tensor
        context_idxs = torch.tensor([word_to_ix[w] for w in context], dtype=torch.long)
        # torch accumulates gradients, so zero them out before each new instance
        model.zero_grad()
        log_probs = model(context_idxs)
        loss = loss_function(log_probs, torch.tensor([word_to_ix[target]], dtype=torch.long))
        loss.backward()
        optimizer.step()
        total_loss += loss.item()
    losses.append(total_loss)
print(losses)  # the total loss should decrease every epoch
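After training, the learned vector for any word in the vocabulary can be read directly out of the model's embedding table. As a quick sanity check (not part of the original listing; "beauty" is just an example token that appears in the sonnet):
# Inspect the learned embedding for one vocabulary word
print(model.embeddings.weight[word_to_ix["beauty"]])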
Exercise: Computing Word Embeddings: Continuous Bag-of-Words
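Continuous Bag-of-Words (CBOW) is the mirror image of the n-gram model above: instead of predicting the next word from the preceding words, it predicts a target word from the context words on either side of it (here, two on each side), typically by summing or averaging their embeddings before a linear layer and log-softmax. The skeleton below prepares the data and leaves the model itself as the exercise; a possible solution sketch follows the listing.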
CONTEXT_SIZE = 2
raw_text = """We are about to study the idea of a computational process.
Computational processes are abstract beings that inhabit computers.
As they evolve, processes manipulate other abstract things called data.
The evolution of a process is directed by a pattern of rules
called a program. People create programs to direct processes. In effect,
we conjure the spirits of the computer with our spells.""".split()

# By deriving a set from `raw_text`, we deduplicate the array
vocab = set(raw_text)
vocab_size = len(vocab)
word_to_ix = {word: i for i, word in enumerate(vocab)}

data = []
for i in range(2, len(raw_text) - 2):
    context = [raw_text[i - 2], raw_text[i - 1],
               raw_text[i + 1], raw_text[i + 2]]
    target = raw_text[i]
    data.append((context, target))
print(data[:5])


class CBOW(nn.Module):

    def __init__(self):
        pass

    def forward(self, inputs):
        pass


def make_context_vector(context, word_to_ix):
    idxs = [word_to_ix[w] for w in context]
    return torch.tensor(idxs, dtype=torch.long)


make_context_vector(data[0][0], word_to_ix)
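The CBOW class body is intentionally left empty as the exercise. A minimal sketch of one possible solution is below; it is not the official answer, and both the embedding dimension (10) and the choice to sum the context embeddings are assumptions:
# A possible CBOW implementation sketch (assumptions: embedding dim 10, summed context embeddings)
EMBEDDING_DIM = 10

class CBOW(nn.Module):

    def __init__(self, vocab_size, embedding_dim):
        super(CBOW, self).__init__()
        self.embeddings = nn.Embedding(vocab_size, embedding_dim)
        self.linear = nn.Linear(embedding_dim, vocab_size)

    def forward(self, inputs):
        # sum the four context word vectors into one, then score every word in the vocabulary
        embeds = self.embeddings(inputs).sum(dim=0).view((1, -1))
        return F.log_softmax(self.linear(embeds), dim=1)

cbow_model = CBOW(vocab_size, EMBEDDING_DIM)
log_probs = cbow_model(make_context_vector(data[0][0], word_to_ix))  # shape: (1, vocab_size)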