Word Embeddings: Encoding Lexical Semantics

Word Embeddings in PyTorch

In PyTorch, embeddings are stored as the rows of an nn.Embedding module: a lookup table indexed by integer word IDs. The module takes two arguments, the size of the vocabulary and the dimensionality of the embeddings, and the indices used to look up rows must be of type torch.long.

import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim

torch.manual_seed(1)

word_to_ix = {"hello": 0, "world": 1}
embeds = nn.Embedding(2, 5) # 2 words in vocab, 5 dimensional embeddings
lookup_tensor = torch.tensor([word_to_ix["hello"]], dtype=torch.long)
hello_embed = embeds(lookup_tensor)
print(hello_embed)

Out:

tensor([[ 0.6614,  0.2669,  0.0617,  0.6213, -0.4519]],
grad_fn=<EmbeddingBackward>)
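
The whole table lives in embeds.weight, an ordinary learnable parameter, and looking up a batch of indices returns one row per index. A minimal sketch, reusing embeds and word_to_ix from above:

print(embeds.weight.shape)  # torch.Size([2, 5]) -- one row per vocab word

# Looking up several indices at once returns the corresponding rows.
batch = torch.tensor([word_to_ix["hello"], word_to_ix["world"]], dtype=torch.long)
print(embeds(batch).shape)  # torch.Size([2, 5])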

An Example: N-Gram Language Modeling

Recall that in an n-gram language model, given a sequence of words w, we want to compute P(w_i | w_{i-1}, ..., w_{i-n+1}), where w_i is the i-th word of the sequence. Below we train a trigram model and use its embedding layer as our word representations.

CONTEXT_SIZE = 2
EMBEDDING_DIM = 10
# We will use Shakespeare Sonnet 2
test_sentence = """When forty winters shall besiege thy brow,
And dig deep trenches in thy beauty's field,
Thy youth's proud livery so gazed on now,
Will be a totter'd weed of small worth held:
Then being asked, where all thy beauty lies,
Where all the treasure of thy lusty days;
To say, within thine own deep sunken eyes,
Were an all-eating shame, and thriftless praise.
How much more praise deserv'd thy beauty's use,
If thou couldst answer 'This fair child of mine
Shall sum my count, and make my old excuse,'
Proving his beauty by succession thine!
This were to be new made when thou art old,
And see thy blood warm when thou feel'st it cold.""".split()
# we should tokenize the input, but we will ignore that for now
# build a list of tuples. Each tuple is ([ word_i-2, word_i-1 ], target word)
trigrams = [([test_sentence[i], test_sentence[i + 1]], test_sentence[i + 2])
            for i in range(len(test_sentence) - 2)]

vocab = set(test_sentence)  # the elements of a set are distinct, so this deduplicates
word_to_ix = {word: i for i, word in enumerate(vocab)}


class NGramLanguageModeler(nn.Module):

    def __init__(self, vocab_size, embedding_dim, context_size):
        super(NGramLanguageModeler, self).__init__()
        self.embeddings = nn.Embedding(vocab_size, embedding_dim)
        self.linear1 = nn.Linear(context_size * embedding_dim, 128)
        self.linear2 = nn.Linear(128, vocab_size)

    def forward(self, inputs):
        embeds = self.embeddings(inputs).view((1, -1))
        out = F.relu(self.linear1(embeds))
        out = self.linear2(out)
        log_probs = F.log_softmax(out, dim=1)
        return log_probs


losses = []
loss_function = nn.NLLLoss()
model = NGramLanguageModeler(len(vocab), EMBEDDING_DIM, CONTEXT_SIZE)
optimizer = optim.SGD(model.parameters(), lr=0.001)

for epoch in range(10):
    total_loss = 0
    for context, target in trigrams:
        # Turn the two context words into integer indices in a tensor.
        context_idxs = torch.tensor([word_to_ix[w] for w in context], dtype=torch.long)

        # PyTorch accumulates gradients, so zero them out for each instance.
        model.zero_grad()

        # Forward pass: log probabilities over the next word.
        log_probs = model(context_idxs)

        # Compute the loss, backpropagate, and update the parameters.
        loss = loss_function(log_probs, torch.tensor([word_to_ix[target]], dtype=torch.long))
        loss.backward()
        optimizer.step()

        total_loss += loss.item()
    losses.append(total_loss)
print(losses)  # the loss should decrease every epoch over the training data
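
After training, the learned vector for any word can be read straight out of the embedding table; for example, reusing model and word_to_ix from above:

# To get the embedding of a particular word, e.g. "beauty":
print(model.embeddings.weight[word_to_ix["beauty"]])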

Exercise: Computing Word Embeddings: Continuous Bag-of-Words

The continuous bag-of-words (CBOW) model predicts a target word from the words surrounding it on both sides. Unlike the language model above, it is not sequential and need not be probabilistic, which is why it is often used to pretrain embeddings quickly. Fill in the CBOW class below (a possible solution sketch follows the code).

CONTEXT_SIZE = 2  # 2 words to the left, 2 to the right

raw_text = """We are about to study the idea of a computational process.
Computational processes are abstract beings that inhabit computers.
As they evolve, processes manipulate other abstract things called data.
The evolution of a process is directed by a pattern of rules
called a program. People create programs to direct processes. In effect,
we conjure the spirits of the computer with our spells.""".split()

# By deriving a set from `raw_text`, we deduplicate the array
vocab = set(raw_text)
vocab_size = len(vocab)
word_to_ix = {word: i for i, word in enumerate(vocab)}

data = []
for i in range(2, len(raw_text) - 2):
    context = [raw_text[i - 2], raw_text[i - 1],
               raw_text[i + 1], raw_text[i + 2]]
    target = raw_text[i]
    data.append((context, target))
print(data[:5])


class CBOW(nn.Module):

    def __init__(self):
        pass  # TODO: define an embedding table and a projection layer

    def forward(self, inputs):
        pass  # TODO: combine the context embeddings, return log probabilities


# Here is a helper to make the data ready for use by your module.
def make_context_vector(context, word_to_ix):
    idxs = [word_to_ix[w] for w in context]
    return torch.tensor(idxs, dtype=torch.long)


make_context_vector(data[0][0], word_to_ix)  # example
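
Since the tutorial leaves the model as an exercise, the following is only one plausible completion, not the official answer. It assumes the standard CBOW recipe (sum the context embeddings, project to vocabulary-sized log probabilities, mirroring NGramLanguageModeler above) and reuses EMBEDDING_DIM from the n-gram example:

# A possible solution sketch (assumptions: summed context vectors,
# a single linear projection, EMBEDDING_DIM reused from above).
class CBOW(nn.Module):

    def __init__(self, vocab_size, embedding_dim):
        super(CBOW, self).__init__()
        self.embeddings = nn.Embedding(vocab_size, embedding_dim)
        self.linear = nn.Linear(embedding_dim, vocab_size)

    def forward(self, inputs):
        # inputs is a 1-D tensor of context indices; sum their embeddings
        # into a single bag-of-words vector of shape (1, embedding_dim).
        embeds = self.embeddings(inputs).sum(dim=0).view((1, -1))
        return F.log_softmax(self.linear(embeds), dim=1)


model = CBOW(vocab_size, EMBEDDING_DIM)
loss_function = nn.NLLLoss()
optimizer = optim.SGD(model.parameters(), lr=0.001)

for epoch in range(10):
    total_loss = 0
    for context, target in data:
        context_vector = make_context_vector(context, word_to_ix)
        model.zero_grad()
        log_probs = model(context_vector)
        loss = loss_function(log_probs, torch.tensor([word_to_ix[target]], dtype=torch.long))
        loss.backward()
        optimizer.step()
        total_loss += loss.item()
    print(total_loss)  # should decrease from epoch to epoch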
