Source code: https://github.com/tensorflow/tensorflow/blob/r1.2/tensorflow/examples/tutorials/word2vec/word2vec_basic.py
There is an illustrated, step-by-step walkthrough of this code on Jianshu; see: http://www.jianshu.com/p/f682066f0586

# Copyright 2015 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
"""Basic word2vec example.""" from __future__ import absolute_import
from __future__ import division
from __future__ import print_function import collections
import math
import os
import random
import zipfile import numpy as np
from six.moves import urllib
from six.moves import xrange # pylint: disable=redefined-builtin
import tensorflow as tf # Step 1: Download the data.
url = 'http://mattmahoney.net/dc/' def maybe_download(filename, expected_bytes):
"""Download a file if not present, and make sure it's the right size."""
if not os.path.exists(filename):
filename, _ = urllib.request.urlretrieve(url + filename, filename)
statinfo = os.stat(filename)
if statinfo.st_size == expected_bytes:
print('Found and verified', filename)
else:
print(statinfo.st_size)
raise Exception(
'Failed to verify ' + filename + '. Can you get to it with a browser?')
return filename filename = maybe_download('text8.zip', 31344016) # Read the data into a list of strings.
def read_data(filename):
"""Extract the first file enclosed in a zip file as a list of words."""
with zipfile.ZipFile(filename) as f:
data = tf.compat.as_str(f.read(f.namelist()[0])).split()
return data vocabulary = read_data(filename)
print('Data size', len(vocabulary)) # Step 2: Build the dictionary and replace rare words with UNK token.
vocabulary_size = 50000

'''
input:
words - the original word list
n_words - the number of words to keep in the vocabulary

output:
data - a list with the same length as the input words;
    every element is the index of the corresponding word in count / dictionary
count - a list of n_words [word, frequency] pairs;
    the first entry is ['UNK', *] (all words outside the vocabulary),
    the remaining entries are sorted by frequency in descending order
dictionary - key-value map; key is the word, value is its index in count / dictionary
reversed_dictionary - dictionary with keys and values swapped (index -> word)
'''
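# A minimal sketch (mine, not part of the original script) of what build_dataset
# would return for a toy corpus with n_words=3, assuming frequency ties are
# broken in first-seen order:
#   words = ['the', 'cat', 'sat', 'on', 'the', 'mat']
#   data                -> [1, 2, 0, 0, 1, 0]   # 'sat', 'on', 'mat' map to UNK (index 0)
#   count               -> [['UNK', 3], ('the', 2), ('cat', 1)]
#   dictionary          -> {'UNK': 0, 'the': 1, 'cat': 2}
#   reversed_dictionary -> {0: 'UNK', 1: 'the', 2: 'cat'}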

def build_dataset(words, n_words):
"""Process raw inputs into a dataset."""
count = [['UNK', -1]]
count.extend(collections.Counter(words).most_common(n_words - 1))
dictionary = dict()
for word, _ in count:
dictionary[word] = len(dictionary)
data = list()
unk_count = 0
for word in words:
if word in dictionary:
index = dictionary[word]
else:
index = 0 # dictionary['UNK']
unk_count += 1
data.append(index)
count[0][1] = unk_count
reversed_dictionary = dict(zip(dictionary.values(), dictionary.keys()))
return data, count, dictionary, reversed_dictionary data, count, dictionary, reverse_dictionary = build_dataset(vocabulary,
vocabulary_size)
del vocabulary # Hint to reduce memory.
print('Most common words (+UNK)', count[:5])
print('Sample data', data[:10], [reverse_dictionary[i] for i in data[:10]]) data_index = 0

'''
Convert data into a batch of (input, label) pairs for training.
The values in batch and labels are the indices (positions in dictionary)
of the corresponding words.
'''
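# A minimal sketch (mine, not part of the original script) of what generate_batch
# produces. With skip_window=1 and num_skips=2, each center word is paired with
# both of its immediate neighbours, so for word indices [w0, w1, w2, w3, ...]:
#   batch  -> [w1, w1, w2, w2, ...]           # each center repeated num_skips times
#   labels -> [[w0], [w2], [w1], [w3], ...]   # one context word per entry
# (the order of the two context words for each center is random).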

# Step 3: Function to generate a training batch for the skip-gram model.
def generate_batch(batch_size, num_skips, skip_window):
  global data_index
  assert batch_size % num_skips == 0
  assert num_skips <= 2 * skip_window
  batch = np.ndarray(shape=(batch_size), dtype=np.int32)
  labels = np.ndarray(shape=(batch_size, 1), dtype=np.int32)
  span = 2 * skip_window + 1  # [ skip_window target skip_window ]
  buffer = collections.deque(maxlen=span)
  for _ in range(span):
    buffer.append(data[data_index])
    data_index = (data_index + 1) % len(data)
  for i in range(batch_size // num_skips):
    target = skip_window  # target label at the center of the buffer
    targets_to_avoid = [skip_window]
    for j in range(num_skips):
      while target in targets_to_avoid:
        target = random.randint(0, span - 1)
      targets_to_avoid.append(target)
      batch[i * num_skips + j] = buffer[skip_window]
      labels[i * num_skips + j, 0] = buffer[target]
    buffer.append(data[data_index])
    data_index = (data_index + 1) % len(data)
  # Backtrack a little bit to avoid skipping words in the end of a batch
  data_index = (data_index + len(data) - span) % len(data)
  return batch, labels

batch, labels = generate_batch(batch_size=8, num_skips=2, skip_window=1)
for i in range(8):
  print(batch[i], reverse_dictionary[batch[i]],
        '->', labels[i, 0], reverse_dictionary[labels[i, 0]])

# Step 4: Build and train a skip-gram model.

batch_size = 128
embedding_size = 128  # Dimension of the embedding vector.
skip_window = 1       # How many words to consider left and right.
num_skips = 2         # How many times to reuse an input to generate a label.

# We pick a random validation set to sample nearest neighbors. Here we limit the
# validation samples to the words that have a low numeric ID, which by
# construction are also the most frequent.
valid_size = 16     # Random set of words to evaluate similarity on.
valid_window = 100  # Only pick dev samples in the head of the distribution.
valid_examples = np.random.choice(valid_window, valid_size, replace=False)
num_sampled = 64    # Number of negative examples to sample.

graph = tf.Graph()

with graph.as_default():

  # Input data.
  train_inputs = tf.placeholder(tf.int32, shape=[batch_size])
  train_labels = tf.placeholder(tf.int32, shape=[batch_size, 1])
  valid_dataset = tf.constant(valid_examples, dtype=tf.int32)

  # Ops and variables pinned to the CPU because of missing GPU implementation
  with tf.device('/cpu:0'):
    '''
    Initialize the embeddings with random values.
    The number of rows equals the vocabulary size;
    the number of columns is the dimension of the embedding vector.
    Each row of the embedding matrix corresponds to the word with the same index in count / dictionary.
    The embed tensor below holds the embeddings of train_inputs.
    '''
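    # A side note (mine, not from the original script): embedding_lookup is just
    # row selection, so for each position i in the batch,
    #   embed[i, :] == embeddings[train_inputs[i], :]
    # i.e. embed has shape [batch_size, embedding_size].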

    # Look up embeddings for inputs.
    embeddings = tf.Variable(
        tf.random_uniform([vocabulary_size, embedding_size], -1.0, 1.0))
    embed = tf.nn.embedding_lookup(embeddings, train_inputs)

    # Construct the variables for the NCE loss
    nce_weights = tf.Variable(
        tf.truncated_normal([vocabulary_size, embedding_size],
                            stddev=1.0 / math.sqrt(embedding_size)))
    nce_biases = tf.Variable(tf.zeros([vocabulary_size]))

  # Compute the average NCE loss for the batch.
  # tf.nce_loss automatically draws a new sample of the negative labels each
  # time we evaluate the loss.
  loss = tf.reduce_mean(
      tf.nn.nce_loss(weights=nce_weights,
                     biases=nce_biases,
                     labels=train_labels,
                     inputs=embed,
                     num_sampled=num_sampled,
                     num_classes=vocabulary_size))
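  # A brief note (mine, not from the original code): for every training pair,
  # nce_loss samples num_sampled random "noise" words and trains a logistic
  # classifier to tell the true label apart from the noise words, which avoids
  # computing a full softmax over all vocabulary_size classes.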

  # Construct the SGD optimizer using a learning rate of 1.0.
  optimizer = tf.train.GradientDescentOptimizer(1.0).minimize(loss)

  # Compute the cosine similarity between minibatch examples and all embeddings.
  norm = tf.sqrt(tf.reduce_sum(tf.square(embeddings), 1, keep_dims=True))
  normalized_embeddings = embeddings / norm
  valid_embeddings = tf.nn.embedding_lookup(
      normalized_embeddings, valid_dataset)
  similarity = tf.matmul(
      valid_embeddings, normalized_embeddings, transpose_b=True)
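  # A side note (mine, not from the original script): because every row of
  # normalized_embeddings has unit L2 norm, the matmul above yields
  #   similarity[i, j] = cosine(valid word i, vocabulary word j),
  # so larger values mean more similar words.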

  # Add variable initializer.
  init = tf.global_variables_initializer()

# Step 5: Begin training.
num_steps = 100001

with tf.Session(graph=graph) as session:
  # We must initialize all variables before we use them.
  init.run()
  print('Initialized')

  average_loss = 0
  for step in xrange(num_steps):
    batch_inputs, batch_labels = generate_batch(
        batch_size, num_skips, skip_window)
    feed_dict = {train_inputs: batch_inputs, train_labels: batch_labels}

    # We perform one update step by evaluating the optimizer op (including it
    # in the list of returned values for session.run())
    _, loss_val = session.run([optimizer, loss], feed_dict=feed_dict)
    average_loss += loss_val

    if step % 2000 == 0:
      if step > 0:
        average_loss /= 2000
      # The average loss is an estimate of the loss over the last 2000 batches.
      print('Average loss at step ', step, ': ', average_loss)
      average_loss = 0

    # Note that this is expensive (~20% slowdown if computed every 500 steps)
    if step % 10000 == 0:
      sim = similarity.eval()
      for i in xrange(valid_size):
        valid_word = reverse_dictionary[valid_examples[i]]
        top_k = 8  # number of nearest neighbors
        nearest = (-sim[i, :]).argsort()[1:top_k + 1]
        log_str = 'Nearest to %s:' % valid_word
        for k in xrange(top_k):
          close_word = reverse_dictionary[nearest[k]]
          log_str = '%s %s,' % (log_str, close_word)
        print(log_str)
  final_embeddings = normalized_embeddings.eval()

# Step 6: Visualize the embeddings.


def plot_with_labels(low_dim_embs, labels, filename='tsne.png'):
  assert low_dim_embs.shape[0] >= len(labels), 'More labels than embeddings'
  plt.figure(figsize=(18, 18))  # in inches
  for i, label in enumerate(labels):
    x, y = low_dim_embs[i, :]
    plt.scatter(x, y)
    plt.annotate(label,
                 xy=(x, y),
                 xytext=(5, 2),
                 textcoords='offset points',
                 ha='right',
                 va='bottom')

  plt.savefig(filename)

try:
  # pylint: disable=g-import-not-at-top
  from sklearn.manifold import TSNE
  import matplotlib.pyplot as plt

  tsne = TSNE(perplexity=30, n_components=2, init='pca', n_iter=5000)
  plot_only = 500
  low_dim_embs = tsne.fit_transform(final_embeddings[:plot_only, :])
  labels = [reverse_dictionary[i] for i in xrange(plot_only)]
  plot_with_labels(low_dim_embs, labels)

except ImportError:
  print('Please install sklearn, matplotlib, and scipy to show embeddings.')
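
As a usage example (my own sketch, not part of the original script), once training has finished you can query the nearest neighbours of any vocabulary word directly from final_embeddings with NumPy; it assumes the dictionary, reverse_dictionary, and final_embeddings variables produced above, and nearest_words / query_word / top_k are illustrative names:

def nearest_words(query_word, top_k=8):
  """Return the top_k words closest to query_word by cosine similarity."""
  query_id = dictionary[query_word]          # word -> index
  query_vec = final_embeddings[query_id]     # rows are already L2-normalized
  sims = final_embeddings.dot(query_vec)     # cosine similarity to every word
  nearest = (-sims).argsort()[1:top_k + 1]   # skip the query word itself
  return [reverse_dictionary[i] for i in nearest]

print(nearest_words('three'))  # e.g. other number words, if training went well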
