1. Starting from a Simple Multidimensional Scaling Example

We know that in the physical world the highest dimensionality we humans can express and perceive directly is three; vectors with more than three dimensions exist only as mathematical objects and cannot be observed in the physical world. In most machine learning projects, however, data vectors have far more than two or three dimensions, so we cannot plot them directly in a coordinate system. To better understand data living in these high-dimensional spaces, we need a technique that projects high-dimensional data vectors into a 2D or 3D coordinate system. That technique is multidimensional scaling (MDS), the topic of this article.

0x1: The Main Idea in Brief

No matter how many dimensions the data vectors themselves have, we can always measure the distance between every pair of data points according to some criterion (Euclidean distance, Pearson correlation), obtaining a single scalar for each pair.

Suppose we have 4 m-dimensional data vectors; by looping over every pair we compute the distance between each pair of vectors, which gives a 4x4 distance matrix.
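As a minimal sketch of this step (plain NumPy; the four m-dimensional vectors below are made-up example data), the pairwise distance computation produces an n x n matrix of scalars:

import numpy as np

# four made-up m-dimensional data vectors (here m = 5)
vectors = np.array([
    [1.0, 0.2, 3.1, 0.0, 2.2],
    [0.9, 0.1, 3.0, 0.2, 2.0],
    [5.0, 4.2, 0.3, 1.1, 0.0],
    [4.8, 4.0, 0.1, 1.3, 0.2],
])

n = len(vectors)
# Euclidean distance between every pair of vectors -> a 4x4 matrix of scalars
realdist = [[np.linalg.norm(vectors[i] - vectors[j]) for j in range(n)]
            for i in range(n)]
# a Pearson-based distance works the same way, e.g. 1 - np.corrcoef(vectors)[i][j]

for row in realdist:
    print(['%0.2f' % d for d in row])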

Next, all data items are placed at random positions on a two-dimensional chart, as shown below.

In the initial state, because of this random placement, the current distance between every pair of items is simply the Euclidean distance between their current (random) positions, as shown below.

Obviously this layout does not reflect the actual distances between the data items. So for every pair of items we compare their target distance with their current distance and compute an error value; based on this error, each item's position is then moved by a proportional amount.

The figure below shows the forces acting on item A:

  • The distance between A and B is 0.5, while their target distance is only 0.2, so A must be moved a little closer to B.
  • A is pushed away from C and D, because it is currently too close to both of them.

The movement of each node is the combined effect of the pushes and pulls exerted on it by all other nodes. Each time a node moves, the gap between its current distances and its target distances shrinks a little. This process is repeated many times, until moving the nodes can no longer reduce the overall error.

As you can imagine, after many rounds of iteration the inter-node distances will not match the target distances perfectly, but the layout settles into an equilibrium in which the total discrepancy between current and target distances reaches a minimum (in practice a local minimum, since the procedure is essentially gradient descent).
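Written out, the quantity the iteration drives down is the total relative error between the current layout distances and the target distances (this is exactly the errorterm accumulated in the code below), where d_{jk} is the target distance and \hat{d}_{jk} the distance between the current 2D positions:

E = \sum_{k} \sum_{j \neq k} \left| \frac{\hat{d}_{jk} - d_{jk}}{d_{jk}} \right|

Each point k is then moved by a step of size rate along the negative gradient of E with respect to its own 2D coordinates.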

0x2: Code Example

from PIL import Image, ImageDraw
from math import sqrt
import random


def readfile(filename):
    lines = [line for line in file(filename)]
    # First line is the column titles
    colnames = lines[0].strip().split('\t')[1:]
    rownames = []
    data = []
    for line in lines[1:]:
        p = line.strip().split('\t')
        # First column in each row is the rowname
        rownames.append(p[0])
        # The data for this row is the remainder of the row
        data.append([float(x) for x in p[1:]])
    return rownames, colnames, data


def pearson(v1, v2):
    # Pearson-correlation-based distance (from the same book chapter);
    # needed here because it is the default distance for scaledown()
    sum1, sum2 = sum(v1), sum(v2)
    sum1Sq = sum([pow(v, 2) for v in v1])
    sum2Sq = sum([pow(v, 2) for v in v2])
    pSum = sum([v1[i] * v2[i] for i in range(len(v1))])
    num = pSum - (sum1 * sum2 / len(v1))
    den = sqrt((sum1Sq - pow(sum1, 2) / len(v1)) * (sum2Sq - pow(sum2, 2) / len(v1)))
    if den == 0: return 0
    return 1.0 - num / den


def scaledown(data, distance=pearson, rate=0.01):
    n = len(data)

    # The real distances between every pair of items
    realdist = [[distance(data[i], data[j]) for j in range(n)]
                for i in range(0, n)]
    print "realdist: ", realdist

    # Randomly initialize the starting points of the locations in 2D
    loc = [[random.random(), random.random()] for i in range(n)]
    fakedist = [[0.0 for j in range(n)] for i in range(n)]

    lasterror = None
    for m in range(0, 1000):
        # Find projected distances
        for i in range(n):
            for j in range(n):
                fakedist[i][j] = sqrt(sum([pow(loc[i][x] - loc[j][x], 2)
                                           for x in range(len(loc[i]))]))

        # Move points
        grad = [[0.0, 0.0] for i in range(n)]

        totalerror = 0
        for k in range(n):
            for j in range(n):
                if j == k: continue
                # The error is the percent difference between the distances
                errorterm = (fakedist[j][k] - realdist[j][k]) / realdist[j][k]

                # Each point needs to be moved away from or towards the other
                # point in proportion to how much error it has
                grad[k][0] += ((loc[k][0] - loc[j][0]) / fakedist[j][k]) * errorterm
                grad[k][1] += ((loc[k][1] - loc[j][1]) / fakedist[j][k]) * errorterm

                # Keep track of the total error
                totalerror += abs(errorterm)
        print totalerror

        # If the answer got worse by moving the points, we are done
        if lasterror and lasterror < totalerror: break
        lasterror = totalerror

        # Move each of the points by the learning rate times the gradient
        for k in range(n):
            loc[k][0] -= rate * grad[k][0]
            loc[k][1] -= rate * grad[k][1]

    return loc


def draw2d(data, labels, jpeg='mds2d.jpg'):
    img = Image.new('RGB', (2000, 2000), (255, 255, 255))
    draw = ImageDraw.Draw(img)
    for i in range(len(data)):
        x = (data[i][0] + 0.5) * 1000
        y = (data[i][1] + 0.5) * 1000
        draw.text((x, y), labels[i], (0, 0, 0))
    img.save(jpeg, 'JPEG')


if __name__ == '__main__':
    blognames, words, data = readfile('blogdata.txt')
    coords = scaledown(data)
    draw2d(coords, blognames, jpeg='blogs2d.jpg')

The figure above shows the result of running the multidimensional scaling algorithm. Although the cluster layout is not as intuitive as a dendrogram, we can still identify some topical groupings.

Relevant Link:

Programming Collective Intelligence (《集体智慧编程》), Toby Segaran - Chapter 3

2. Neural Networks Transform Space - the Spatial Structure Inside a Neural Network

In this chapter we explore the topological structure inside a neural network (its hidden layers) from the perspective of visualizing spatial structure.

0x1: A Neural Network with Only 2 Neurons per Layer

We start with the simplest case: 2-dimensional input samples (the input layer is 2-dimensional) fed into a network that contains only an input layer and an output layer, each with just 2 neurons (x and y correspond to the coordinates of the input and output points).

The figure below is a two-dimensional plane (the input layer is a set of 2D points). The two curves represent two classes, and the points on those curves make up our input dataset. We want the neural network to separate these two classes correctly, i.e., to model the classification boundary.

Because our network has only an input layer and an output layer, each with 2 neurons representing x and y, the best it can do is "search" for a straight line with which to separate the classes, as shown below.

But since the mapping from input to output is linear, this clearly cannot produce a satisfactory result: the input dataset in the figure is not linearly separable in this two-dimensional space.

To solve this, the original space has to be "rotated", "stretched" and bent. We add a 2-dimensional hidden layer to the network: its weights apply an affine transformation (rotation and stretching) to the space containing the input vectors, and its activation function warps it, so that in the transformed space a linear decision boundary separating the two classes is easy to find, as shown below.
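As a minimal sketch of this idea (assuming Keras and scikit-learn are available; scikit-learn's two-moons dataset stands in for the two curves in the figure, and the layer sizes are illustrative), a single 2-unit hidden layer with a nonlinear activation bends the space so that the output layer only has to draw a line in the transformed coordinates:

import numpy as np
from sklearn.datasets import make_moons
from keras.models import Sequential, Model
from keras.layers import Dense

# two interleaved curves: not linearly separable in the original 2D space
X, y = make_moons(n_samples=500, noise=0.1, random_state=0)

model = Sequential()
# 2D -> 2D hidden layer: an affine map (rotate/stretch/shift) followed by a nonlinearity
model.add(Dense(2, input_dim=2, activation='tanh'))
# the output layer can still only draw a straight line -- but now in the transformed space
model.add(Dense(1, activation='sigmoid'))
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])
model.fit(X, y, epochs=300, verbose=0)

# the hidden activations are the "rotated and stretched" coordinates of each input point
hidden = Model(inputs=model.input, outputs=model.layers[0].output)
transformed = hidden.predict(X)
print(model.evaluate(X, y, verbose=0))  # typically well above what a plain line on X can reach

Plotting `transformed` instead of X shows the two curves pulled apart into a (nearly) linearly separable arrangement.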

0x2: An MNIST Network

In this subsection we look at the familiar MNIST problem. The input layer has one dimension per image pixel, i.e., 784 dimensions, and the hidden layer is a 100-dimensional fully connected layer. After 200 epochs of training we obtain a set of neuron weights. We then use t-SNE visualization to inspect how the network gradually adjusts its weight vectors to fit the true topological structure of the input data in high-dimensional space.
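A minimal sketch of this kind of probe (assuming Keras with its built-in MNIST loader and scikit-learn; for speed it trains far fewer than the 200 epochs mentioned above and applies t-SNE to the 100-dimensional hidden activations of a few thousand test digits):

import numpy as np
import matplotlib.pyplot as plt
from keras.datasets import mnist
from keras.models import Sequential, Model
from keras.layers import Dense
from sklearn.manifold import TSNE

(x_train, y_train), (x_test, y_test) = mnist.load_data()
x_train = x_train.reshape(-1, 784).astype('float32') / 255
x_test = x_test.reshape(-1, 784).astype('float32') / 255

# 784 -> 100 -> 10 fully connected network
model = Sequential([
    Dense(100, input_dim=784, activation='relu'),
    Dense(10, activation='softmax'),
])
model.compile(optimizer='adam', loss='sparse_categorical_crossentropy', metrics=['accuracy'])
model.fit(x_train, y_train, epochs=5, batch_size=128, verbose=0)

# project the 100-dim hidden representations of 2000 test digits down to 2D
hidden = Model(inputs=model.input, outputs=model.layers[0].output)
h = hidden.predict(x_test[:2000])
emb = TSNE(n_components=2).fit_transform(h)

plt.scatter(emb[:, 0], emb[:, 1], c=y_test[:2000], cmap=plt.cm.get_cmap('jet', 10), s=5)
plt.colorbar(ticks=range(10))
plt.show()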

From the visualizations we can see that:

  • At the input layer (784 dimensions), the spatial distribution of the vectors is still fairly "mixed together".
  • But as the hidden layer is trained, gradient descent "forces" the weight vectors representing the spatial structure of different digits to move as far away from each other as possible: the farther apart they are, the easier the decision in the subsequent activation/output layer (e.g. sigmoid) becomes, and the smaller the resulting loss.

Relevant Link:

http://blog.csdn.net/unoboros/article/details/30451213
http://www.cnblogs.com/boostable/p/iage_high_space_sphere.html

3. Word Embeddings in NLP - the Spatial Structure Hidden in Text Word Sequences

Embedding is a common corpus-processing technique in NLP feature engineering: in a word embedding space, every word is represented by a high-dimensional embedding vector.

We can use t-SNE to probe how word2vec embeddings work internally: word2vec maps discrete words into a dense vector space in which syntactic and semantic regularities are preserved, while t-SNE projects those high-dimensional vectors into a 2D/3D space with as little distortion as possible.

# -*- coding: utf-8 -*-

import re
import matplotlib.pyplot as plt
from gensim.models.word2vec import Word2Vec
from sklearn.manifold import TSNE
from sklearn.datasets import fetch_20newsgroups


def clean(text):
    """Remove posting header, split by sentences and words, keep only letters"""
    lines = re.split('[?!.:]\s', re.sub('^.*Lines: \d+', '', re.sub('\n', ' ', text)))
    return [re.sub('[^a-zA-Z]', ' ', line).lower().split() for line in lines]


if __name__ == '__main__':
    # download example data (may take a while)
    train = fetch_20newsgroups()
    sentences = [line for text in train.data for line in clean(text)]

    # train a word2vec model on the 20 newsgroups corpus
    model = Word2Vec(sentences, workers=4, size=100, min_count=50, window=10, sample=1e-3)
    print(model.most_similar('memory'))

    # project every word vector in the vocabulary down to 2D with t-SNE
    X = model[model.wv.vocab]
    tsne = TSNE(n_components=2)
    X_tsne = tsne.fit_transform(X)

    plt.scatter(X_tsne[:, 0], X_tsne[:, 1])
    plt.show()

We can see that, within the embedding vocabulary, the words closest to the computing term "memory" are "cpu" and "cache", which matches how these words are actually used.

Zooming in on a local region:

Relevant Link:

http://colah.github.io/posts/2014-07-NLP-RNNs-Representations/
http://www.iro.umontreal.ca/~lisa/pointeurs/turian-wordrepresentations-acl10.pdf
https://stackoverflow.com/questions/43166762/what-is-relation-between-tsne-and-word2vec
https://stackoverflow.com/questions/40581010/how-to-run-tsne-on-word2vec-created-from-gensim
http://learningaboutdata.blogspot.com/2014/06/plotting-word-embedding-using-tsne-with.html
https://stackoverflow.com/questions/43776572/visualise-word2vec-generated-from-gensim
https://www.quora.com/How-do-I-visualise-word2vec-word-vectors
http://nlp.yvespeirsman.be/blog/visualizing-word-embeddings-with-tsne/
《word2vec_中的数学原理详解》
http://blog.csdn.net/u014595019/article/details/51884529
http://download.csdn.net/detail/mzg12345678/7988741
https://www.tensorflow.org/versions/r0.12/tutorials/word2vec

4. Paragraph Vectors in NLP - the Spatial Structure Hidden in Vectorized Paragraphs

The core idea of the Paragraph/Sentence Vector model is to take a paragraph from a document or a longer piece of text (sentences), often the title, description, or lead-in string, map that paragraph into the word-vector space, and thereby obtain a vector representation of the document/sentences, as shown below.

The Paragraph/Sentence Vector model does not compute a vector for a variable-length text directly. Instead, building on top of word vectors, it extracts a paragraph from the original text (similar to a summary) and treats that paragraph as if it were an extra word: a weight vector is initialized for it and trained, unsupervised, from the surrounding context, with the parameters adjusted by gradient descent plus backpropagation.
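For reference, gensim ships a Doc2Vec implementation of the same paragraph-vector idea (the Le & Mikolov paper linked below), which avoids the custom Sent2Vec code shown later in this section. A minimal sketch, assuming gensim is installed (the toy documents and tags are made up):

from gensim.models.doc2vec import Doc2Vec, TaggedDocument

# each paragraph gets a tag; its vector is trained like an extra "word"
# shared by every context window inside that paragraph
docs = [
    TaggedDocument(words=['the', 'cpu', 'reads', 'from', 'memory'], tags=['doc_0']),
    TaggedDocument(words=['the', 'cache', 'speeds', 'up', 'memory', 'access'], tags=['doc_1']),
    TaggedDocument(words=['the', 'dog', 'chased', 'the', 'ball'], tags=['doc_2']),
]

model = Doc2Vec(docs, vector_size=50, window=3, min_count=1, epochs=100)

# paragraph vectors live in the same space as word vectors, so they can be
# compared with cosine similarity (model.dv in gensim 4.x, model.docvecs in 3.x)
print(model.dv.similarity('doc_0', 'doc_1'))  # two computer-related paragraphs
print(model.dv.similarity('doc_0', 'doc_2'))  # unrelated paragraph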

In this section we look at a visualization of the spatial topology of paragraph vectors.

demo.py

#!/usr/bin/env python
# -*- coding: utf-8 -*-
#
# Licensed under the GNU LGPL v2.1 - http://www.gnu.org/licenses/lgpl.html

import logging
import sys
import os

from word2vec import Word2Vec, Sent2Vec, LineSentence

logging.basicConfig(format='%(asctime)s : %(threadName)s : %(levelname)s : %(message)s', level=logging.INFO)
logging.info("running %s" % " ".join(sys.argv))

input_file = './modleTrain/test.txt'
# Embedding dimension = 100
# The maximum distance between the current and predicted word within a sentence = 5
# Model = CBOW (sg=0)
# Ignore words with total frequency lower than 5
model = Word2Vec(LineSentence(input_file), size=100, window=5, sg=0, min_count=5, workers=8)
model.save(input_file + '.model')
# save the word embedding vectors for the whole vocabulary
model.save_word2vec_format(input_file + '.vec')

sent_file = './modleTrain/sent.txt'
model = Sent2Vec(LineSentence(sent_file), model_file=input_file + '.model')
model.save_sent2vec_format(sent_file + '.vec')

program = os.path.basename(sys.argv[0])
logging.info("finished running %s" % program)

word2vec.py

#!/usr/bin/env python
# -*- coding: utf- -*-
#
# Copyright (C) Radim Rehurek <me@radimrehurek.com>
# Licensed under the GNU LGPL v2. - http://www.gnu.org/licenses/lgpl.html """
Deep learning via word2vec's "skip-gram and CBOW models", using either
hierarchical softmax or negative sampling []_ []_. The training algorithms were originally ported from the C package https://code.google.com/p/word2vec/
and extended with additional functionality. For a blog tutorial on gensim word2vec, with an interactive web app trained on GoogleNews, visit http://radimrehurek.com/2014/02/word2vec-tutorial/ **Install Cython with `pip install cython` to use optimized word2vec training** (70x speedup []_). Initialize a model with e.g.:: >>> model = Word2Vec(sentences, size=, window=, min_count=, workers=) Persist a model to disk with:: >>> model.save(fname)
>>> model = Word2Vec.load(fname) # you can continue training with the loaded model! The model can also be instantiated from an existing file on disk in the word2vec C format:: >>> model = Word2Vec.load_word2vec_format('/tmp/vectors.txt', binary=False) # C text format
>>> model = Word2Vec.load_word2vec_format('/tmp/vectors.bin', binary=True) # C binary format You can perform various syntactic/semantic NLP word tasks with the model. Some of them
are already built-in:: >>> model.most_similar(positive=['woman', 'king'], negative=['man'])
[('queen', 0.50882536), ...] >>> model.doesnt_match("breakfast cereal dinner lunch".split())
'cereal' >>> model.similarity('woman', 'man')
0.73723527 >>> model['computer'] # raw numpy vector of a word
array([-0.00449447, -0.00310097, 0.02421786, ...], dtype=float32) and so on. If you're finished training a model (=no more updates, only querying), you can do >>> model.init_sims(replace=True) to trim unneeded model memory = use (much) less RAM. .. [] Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. Efficient Estimation of Word Representations in Vector Space. In Proceedings of Workshop at ICLR, .
.. [] Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg Corrado, and Jeffrey Dean. Distributed Representations of Words and Phrases and their Compositionality.
In Proceedings of NIPS, .
.. [] Optimizing word2vec in gensim, http://radimrehurek.com/2013/09/word2vec-in-python-part-two-optimizing/
""" import logging
import sys
import os
import heapq
import time
from copy import deepcopy
import threading try:
from queue import Queue
except ImportError:
from Queue import Queue from numpy import exp, dot, zeros, outer, random, dtype, get_include, float32 as REAL, \
uint32, seterr, array, uint8, vstack, argsort, fromstring, sqrt, newaxis, ndarray, empty, sum as np_sum # logger = logging.getLogger("gensim.models.word2vec")
logger = logging.getLogger("sent2vec") # from gensim import utils, matutils # utility fnc for pickling, common scipy operations etc
import utils, matutils # utility fnc for pickling, common scipy operations etc
from six import iteritems, itervalues, string_types
from six.moves import xrange try:
from gensim_addons.models.word2vec_inner import train_sentence_sg, train_sentence_cbow, FAST_VERSION
except ImportError:
try:
# try to compile and use the faster cython version
import pyximport models_dir = os.path.dirname(__file__) or os.getcwd()
pyximport.install(setup_args={"include_dirs": [models_dir, get_include()]})
from word2vec_inner import train_sentence_sg, train_sentence_cbow, FAST_VERSION
except:
# failed... fall back to plain numpy (-80x slower training than the above)
FAST_VERSION = - def train_sentence_sg(model, sentence, alpha, work=None):
"""
Update skip-gram model by training on a single sentence. The sentence is a list of Vocab objects (or None, where the corresponding
word is not in the vocabulary. Called internally from `Word2Vec.train()`. This is the non-optimized, Python version. If you have cython installed, gensim
will use the optimized version from word2vec_inner instead. """
if model.negative:
# precompute negative labels
labels = zeros(model.negative + )
labels[] = 1.0 for pos, word in enumerate(sentence):
if word is None:
continue # OOV word in the input sentence => skip
reduced_window = random.randint(model.window) # `b` in the original word2vec code # now go over all words from the (reduced) window, predicting each one in turn
start = max(, pos - model.window + reduced_window)
for pos2, word2 in enumerate(sentence[start: pos + model.window + - reduced_window], start):
# don't train on OOV words and on the `word` itself
if word2 and not (pos2 == pos):
l1 = model.syn0[word2.index]
neu1e = zeros(l1.shape) if model.hs:
# work on the entire tree at once, to push as much work into numpy's C routines as possible (performance)
l2a = deepcopy(model.syn1[word.point]) # 2d matrix, codelen x layer1_size
fa = 1.0 / (1.0 + exp(-dot(l1, l2a.T))) # propagate hidden -> output
ga = (
- word.code - fa) * alpha # vector of error gradients multiplied by the learning rate
model.syn1[word.point] += outer(ga, l1) # learn hidden -> output
neu1e += dot(ga, l2a) # save error if model.negative:
# use this word (label = ) + `negative` other random words not from this sentence (label = )
word_indices = [word.index]
while len(word_indices) < model.negative + :
w = model.table[random.randint(model.table.shape[])]
if w != word.index:
word_indices.append(w)
l2b = model.syn1neg[word_indices] # 2d matrix, k+ x layer1_size
fb = . / (. + exp(-dot(l1, l2b.T))) # propagate hidden -> output
gb = (labels - fb) * alpha # vector of error gradients multiplied by the learning rate
model.syn1neg[word_indices] += outer(gb, l1) # learn hidden -> output
neu1e += dot(gb, l2b) # save error model.syn0[word2.index] += neu1e # learn input -> hidden return len([word for word in sentence if word is not None]) def train_sentence_cbow(model, sentence, alpha, work=None, neu1=None):
"""
Update CBOW model by training on a single sentence. The sentence is a list of Vocab objects (or None, where the corresponding
word is not in the vocabulary. Called internally from `Word2Vec.train()`. This is the non-optimized, Python version. If you have cython installed, gensim
will use the optimized version from word2vec_inner instead. """
if model.negative:
# precompute negative labels
labels = zeros(model.negative + )
labels[] = . for pos, word in enumerate(sentence):
if word is None:
continue # OOV word in the input sentence => skip
reduced_window = random.randint(model.window) # `b` in the original word2vec code
start = max(, pos - model.window + reduced_window)
window_pos = enumerate(sentence[start: pos + model.window + - reduced_window], start)
word2_indices = [word2.index for pos2, word2 in window_pos if (word2 is not None and pos2 != pos)]
l1 = np_sum(model.syn0[word2_indices], axis=) # x layer1_size
if word2_indices and model.cbow_mean:
l1 /= len(word2_indices)
neu1e = zeros(l1.shape) if model.hs:
l2a = model.syn1[word.point] # 2d matrix, codelen x layer1_size
fa = . / (. + exp(-dot(l1, l2a.T))) # propagate hidden -> output
ga = (. - word.code - fa) * alpha # vector of error gradients multiplied by the learning rate
model.syn1[word.point] += outer(ga, l1) # learn hidden -> output
neu1e += dot(ga, l2a) # save error if model.negative:
# use this word (label = ) + `negative` other random words not from this sentence (label = )
word_indices = [word.index]
while len(word_indices) < model.negative + :
w = model.table[random.randint(model.table.shape[])]
if w != word.index:
word_indices.append(w)
l2b = model.syn1neg[word_indices] # 2d matrix, k+ x layer1_size
fb = . / (. + exp(-dot(l1, l2b.T))) # propagate hidden -> output
gb = (labels - fb) * alpha # vector of error gradients multiplied by the learning rate
model.syn1neg[word_indices] += outer(gb, l1) # learn hidden -> output
neu1e += dot(gb, l2b) # save error model.syn0[word2_indices] += neu1e # learn input -> hidden, here for all words in the window separately return len([word for word in sentence if word is not None]) class Vocab(object):
"""A single vocabulary item, used internally for constructing binary trees (incl. both word leaves and inner nodes).""" def __init__(self, **kwargs):
self.count =
self.__dict__.update(kwargs) def __lt__(self, other): # used for sorting in a priority queue
return self.count < other.count def __str__(self):
vals = ['%s:%r' % (key, self.__dict__[key]) for key in sorted(self.__dict__) if not key.startswith('_')]
return "<" + ', '.join(vals) + ">" class Word2Vec(utils.SaveLoad):
"""
Class for training, using and evaluating neural networks described in https://code.google.com/p/word2vec/ The model can be stored/loaded via its `save()` and `load()` methods, or stored/loaded in a format
compatible with the original word2vec implementation via `save_word2vec_format()` and `load_word2vec_format()`. """ def __init__(self, sentences=None, size=, alpha=0.025, window=, min_count=,
sample=, seed=, workers=, min_alpha=0.0001, sg=, hs=, negative=, cbow_mean=):
"""
Initialize the model from an iterable of `sentences`. Each sentence is a
list of words (unicode strings) that will be used for training. The `sentences` iterable can be simply a list, but for larger corpora,
consider an iterable that streams the sentences directly from disk/network.
See :class:`BrownCorpus`, :class:`Text8Corpus` or :class:`LineSentence` in
this module for such examples. If you don't supply `sentences`, the model is left uninitialized -- use if
you plan to initialize it in some other way. `sg` defines the training algorithm. By default (`sg=`), skip-gram is used. Otherwise, `cbow` is employed.
`size` is the dimensionality of the feature vectors.
`window` is the maximum distance between the current and predicted word within a sentence.
`alpha` is the initial learning rate (will linearly drop to zero as training progresses).
`seed` = for the random number generator.
`min_count` = ignore all words with total frequency lower than this.
`sample` = threshold for configuring which higher-frequency words are randomly downsampled;
default is (off), useful value is 1e-.
`workers` = use this many worker threads to train the model (=faster training with multicore machines)
`hs` = if (default), hierarchical sampling will be used for model training (else set to )
`negative` = if > , negative sampling will be used, the int for negative
specifies how many "noise words" should be drawn (usually between -)
`cbow_mean` = if (default), use the sum of the context word vectors. If , use the mean.
Only applies when cbow is used.
"""
self.vocab = {} # mapping from a word (string) to a Vocab object
self.index2word = [] # map from a word's matrix index (int) to word (string)
self.sg = int(sg)
self.table = None # for negative sampling --> this needs a lot of RAM! consider setting back to None before saving
self.layer1_size = int(size)
if size % != :
logger.warning("consider setting layer size to a multiple of 4 for greater performance")
self.alpha = float(alpha)
self.window = int(window)
self.seed = seed
self.min_count = min_count
self.sample = sample
self.workers = workers
self.min_alpha = min_alpha
self.hs = hs
self.negative = negative
self.cbow_mean = int(cbow_mean)
if sentences is not None:
self.build_vocab(sentences)
self.train(sentences) def make_table(self, table_size=, power=0.75):
"""
Create a table using stored vocabulary word counts for drawing random words in the negative
sampling training routines. Called internally from `build_vocab()`. """
logger.info("constructing a table with noise distribution from %i words" % len(self.vocab))
# table (= list of words) of noise distribution for negative sampling
vocab_size = len(self.index2word)
self.table = zeros(table_size, dtype=uint32) if not vocab_size:
logger.warning("empty vocabulary in word2vec, is this intended?")
return # compute sum of all power (Z in paper)
train_words_pow = float(sum([self.vocab[word].count ** power for word in self.vocab]))
# go through the whole table and fill it up with the word indexes proportional to a word's count**power
widx =
# normalize count^0.75 by Z
d1 = self.vocab[self.index2word[widx]].count ** power / train_words_pow
for tidx in xrange(table_size):
self.table[tidx] = widx
if 1.0 * tidx / table_size > d1:
widx +=
d1 += self.vocab[self.index2word[widx]].count ** power / train_words_pow
if widx >= vocab_size:
widx = vocab_size - def create_binary_tree(self):
"""
Create a binary Huffman tree using stored vocabulary word counts. Frequent words
will have shorter binary codes. Called internally from `build_vocab()`. """
logger.info("constructing a huffman tree from %i words" % len(self.vocab)) # build the huffman tree
heap = list(itervalues(self.vocab))
heapq.heapify(heap) # build the Huffman binary tree: repeatedly pop the two smallest nodes from the heap, make them the left/right children (smaller on the left), and push a new parent node whose count is their sum back into the heap (the tree grows bottom-up, so the lower a word's frequency, the closer it stays to the leaves)
for i in xrange(len(self.vocab) - ):
min1, min2 = heapq.heappop(heap), heapq.heappop(heap)
heapq.heappush(heap, Vocab(count=min1.count + min2.count, index=i + len(self.vocab), left=min1, right=min2)) # recurse over the tree, assigning a binary code to each vocabulary word
if heap:
max_depth, stack = , [(heap[], [], [])]
while stack:
node, codes, points = stack.pop()
if node.index < len(self.vocab):
# leaf node => store its path from the root
node.code, node.point = codes, points
max_depth = max(len(codes), max_depth)
else:
# inner node => continue recursion
points = array(list(points) + [node.index - len(self.vocab)], dtype=uint32)
stack.append((node.left, array(list(codes) + [], dtype=uint8), points))
stack.append((node.right, array(list(codes) + [], dtype=uint8), points)) logger.info("built huffman tree with maximum node depth %i" % max_depth) def precalc_sampling(self):
"""Precalculate each vocabulary item's threshold for sampling"""
if self.sample:
logger.info(
"frequent-word downsampling, threshold %g; progress tallies will be approximate" % (self.sample))
total_words = sum(v.count for v in itervalues(self.vocab))
threshold_count = float(self.sample) * total_words
# compute each word node's sampling probability from its occurrence count
for v in itervalues(self.vocab):
prob = (sqrt(v.count / threshold_count) + ) * (threshold_count / v.count) if self.sample else 1.0
v.sample_probability = min(prob, 1.0)
# print v def build_vocab(self, sentences):
"""
Build vocabulary from a sequence of sentences (can be a once-only generator stream).
Each sentence must be a list of unicode strings. """
logger.info("collecting all words and their counts")
sentence_no, vocab = -, {}
total_words =
# count how many times each word occurs in the training set
for sentence_no, sentence in enumerate(sentences):
if sentence_no % == :
logger.info("PROGRESS: at sentence #%i, processed %i words and %i word types" % (
sentence_no, total_words, len(vocab)))
for word in sentence:
total_words +=
if word in vocab:
vocab[word].count +=
else:
vocab[word] = Vocab(count=)
logger.info("collected %i word types from a corpus of %i words and %i sentences" % (
len(vocab), total_words, sentence_no + )) # assign a unique index to each word
# assign each word an index in the order it is encountered (not sorted by frequency), so words and indices in the vocabulary can be mapped back and forth
self.vocab, self.index2word = {}, []
for word, v in iteritems(vocab):
if v.count >= self.min_count:
v.index = len(self.vocab)
self.index2word.append(word)
self.vocab[word] = v
# print "word: ", word
# print "v:", v
logger.info("total %i word types after removing those with count<%s" % (len(self.vocab), self.min_count))
# print self.vocab
# print self.index2word # hierarchical softmax
if self.hs:
# add info about each word's Huffman encoding
self.create_binary_tree()
if self.negative:
# build the table for drawing random words (for negative sampling)
self.make_table()
# precalculate downsampling thresholds
self.precalc_sampling()
self.reset_weights() def train(self, sentences, total_words=None, word_count=, chunksize=):
"""
Update the model's neural weights from a sequence of sentences (can be a once-only generator stream).
Each sentence must be a list of unicode strings. """
if FAST_VERSION < :
import warnings
warnings.warn(
"Cython compilation failed, training will be slow. Do you have Cython installed? `pip install cython`")
logger.info("training model with %i workers on %i vocabulary and %i features, "
"using 'skipgram'=%s 'hierarchical softmax'=%s 'subsample'=%s and 'negative sampling'=%s" %
(self.workers, len(self.vocab), self.layer1_size, self.sg, self.hs, self.sample, self.negative)) if not self.vocab:
raise RuntimeError("you must first build vocabulary before training the model") start, next_report = time.time(), [1.0]
word_count = [word_count]
total_words = total_words or int(sum(v.count * v.sample_probability for v in itervalues(self.vocab)))
jobs = Queue(
maxsize= * self.workers) # buffer ahead only a limited number of jobs.. this is the reason we can't simply use ThreadPool :(
lock = threading.Lock() # for shared state (=number of words trained so far, log reports...) def worker_train():
"""Train the model, lifting lists of sentences from the jobs queue."""
work = zeros(self.layer1_size, dtype=REAL) # each thread must have its own work memory
neu1 = matutils.zeros_aligned(self.layer1_size, dtype=REAL) while True:
job = jobs.get()
if job is None: # data finished, exit
break
# update the learning rate before every job
alpha = max(self.min_alpha, self.alpha * ( - 1.0 * word_count[] / total_words))
# how many words did we train on? out-of-vocabulary (unknown) words do not count
if self.sg:
job_words = sum(train_sentence_sg(self, sentence, alpha, work) for sentence in job)
else:
job_words = sum(train_sentence_cbow(self, sentence, alpha, work, neu1) for sentence in job)
with lock:
word_count[] += job_words
elapsed = time.time() - start
if elapsed >= next_report[]:
logger.info("PROGRESS: at %.2f%% words, alpha %.05f, %.0f words/s" %
(100.0 * word_count[] / total_words, alpha,
word_count[] / elapsed if elapsed else 0.0))
next_report[
] = elapsed + 1.0 # don't flood the log, wait at least a second between progress reports workers = [threading.Thread(target=worker_train) for _ in xrange(self.workers)]
for thread in workers:
thread.daemon = True # make interrupting the process with ctrl+c easier
thread.start() def prepare_sentences():
for sentence in sentences:
# avoid calling random_sample() where prob >= , to speed things up a little:
sampled = [self.vocab[word] for word in sentence
if word in self.vocab and (self.vocab[word].sample_probability >= 1.0 or self.vocab[
word].sample_probability >= random.random_sample())]
yield sampled # convert input strings to Vocab objects (eliding OOV/downsampled words), and start filling the jobs queue
for job_no, job in enumerate(utils.grouper(prepare_sentences(), chunksize)):
logger.debug("putting job #%i in the queue, qsize=%i" % (job_no, jobs.qsize()))
jobs.put(job)
logger.info("reached the end of input; waiting to finish %i outstanding jobs" % jobs.qsize())
for _ in xrange(self.workers):
jobs.put(None) # give the workers heads up that they can finish -- no more work! for thread in workers:
thread.join() elapsed = time.time() - start
logger.info("training on %i words took %.1fs, %.0f words/s" %
(word_count[], elapsed, word_count[] / elapsed if elapsed else 0.0)) return word_count[] def reset_weights(self):
"""Reset all projection weights to an initial (untrained) state, but keep the existing vocabulary."""
logger.info("resetting layer weights")
random.seed(self.seed)
self.syn0 = empty((len(self.vocab), self.layer1_size), dtype=REAL)
# randomize weights vector by vector, rather than materializing a huge random matrix in RAM at once
for i in xrange(len(self.vocab)):
self.syn0[i] = (random.rand(self.layer1_size) - 0.5) / self.layer1_size
if self.hs:
self.syn1 = zeros((len(self.vocab), self.layer1_size), dtype=REAL)
if self.negative:
self.syn1neg = zeros((len(self.vocab), self.layer1_size), dtype=REAL)
self.syn0norm = None def save_word2vec_format(self, fname, fvocab=None, binary=False):
"""
Store the input-hidden weight matrix in the same format used by the original
C word2vec-tool, for compatibility. """
if fvocab is not None:
logger.info("Storing vocabulary in %s" % (fvocab))
with utils.smart_open(fvocab, 'wb') as vout:
for word, vocab in sorted(iteritems(self.vocab), key=lambda item: -item[].count):
vout.write(utils.to_utf8("%s %s\n" % (word, vocab.count)))
logger.info("storing %sx%s projection weights into %s" % (len(self.vocab), self.layer1_size, fname))
assert (len(self.vocab), self.layer1_size) == self.syn0.shape
with utils.smart_open(fname, 'wb') as fout:
fout.write(utils.to_utf8("%s %s\n" % self.syn0.shape))
# store in sorted order: most frequent words at the top
for word, vocab in sorted(iteritems(self.vocab), key=lambda item: -item[].count):
row = self.syn0[vocab.index]
if binary:
fout.write(utils.to_utf8(word) + b" " + row.tostring())
else:
fout.write(utils.to_utf8("%s %s\n" % (word, ' '.join("%f" % val for val in row)))) @classmethod
def load_word2vec_format(cls, fname, fvocab=None, binary=False, norm_only=True):
"""
Load the input-hidden weight matrix from the original C word2vec-tool format. Note that the information stored in the file is incomplete (the binary tree is missing),
so while you can query for word similarity etc., you cannot continue training
with a model loaded this way. `binary` is a boolean indicating whether the data is in binary word2vec format.
`norm_only` is a boolean indicating whether to only store normalised word2vec vectors in memory.
Word counts are read from `fvocab` filename, if set (this is the file generated
by `-save-vocab` flag of the original C tool).
"""
counts = None
if fvocab is not None:
logger.info("loading word counts from %s" % (fvocab))
counts = {}
with utils.smart_open(fvocab) as fin:
for line in fin:
word, count = utils.to_unicode(line).strip().split()
counts[word] = int(count) logger.info("loading projection weights from %s" % (fname))
with utils.smart_open(fname) as fin:
header = utils.to_unicode(fin.readline())
vocab_size, layer1_size = map(int, header.split()) # throws for invalid file format
result = Word2Vec(size=layer1_size)
result.syn0 = zeros((vocab_size, layer1_size), dtype=REAL)
if binary:
binary_len = dtype(REAL).itemsize * layer1_size
for line_no in xrange(vocab_size):
# mixed text and binary: read text first, then binary
word = []
while True:
ch = fin.read()
if ch == b' ':
break
if ch != b'\n': # ignore newlines in front of words (some binary files have newline, some don't)
word.append(ch)
word = utils.to_unicode(b''.join(word))
if counts is None:
result.vocab[word] = Vocab(index=line_no, count=vocab_size - line_no)
elif word in counts:
result.vocab[word] = Vocab(index=line_no, count=counts[word])
else:
logger.warning("vocabulary file is incomplete")
result.vocab[word] = Vocab(index=line_no, count=None)
result.index2word.append(word)
result.syn0[line_no] = fromstring(fin.read(binary_len), dtype=REAL)
else:
for line_no, line in enumerate(fin):
parts = utils.to_unicode(line).split()
if len(parts) != layer1_size + :
raise ValueError("invalid vector on line %s (is this really the text format?)" % (line_no))
word, weights = parts[], map(REAL, parts[:])
if counts is None:
result.vocab[word] = Vocab(index=line_no, count=vocab_size - line_no)
elif word in counts:
result.vocab[word] = Vocab(index=line_no, count=counts[word])
else:
logger.warning("vocabulary file is incomplete")
result.vocab[word] = Vocab(index=line_no, count=None)
result.index2word.append(word)
result.syn0[line_no] = weights
logger.info("loaded %s matrix from %s" % (result.syn0.shape, fname))
result.init_sims(norm_only)
return result def most_similar(self, positive=[], negative=[], topn=):
"""
Find the top-N most similar words. Positive words contribute positively towards the
similarity, negative words negatively. This method computes cosine similarity between a simple mean of the projection
weight vectors of the given words, and corresponds to the `word-analogy` and
`distance` scripts in the original word2vec implementation. Example:: >>> trained_model.most_similar(positive=['woman', 'king'], negative=['man'])
[('queen', 0.50882536), ...] """
self.init_sims() if isinstance(positive, string_types) and not negative:
# allow calls like most_similar('dog'), as a shorthand for most_similar(['dog'])
positive = [positive] # add weights for each word, if not already present; default to 1.0 for positive and -1.0 for negative words
positive = [(word, 1.0) if isinstance(word, string_types + (ndarray,))
else word for word in positive]
negative = [(word, -1.0) if isinstance(word, string_types + (ndarray,))
else word for word in negative] # compute the weighted average of all words
all_words, mean = set(), []
for word, weight in positive + negative:
if isinstance(word, ndarray):
mean.append(weight * word)
elif word in self.vocab:
mean.append(weight * self.syn0norm[self.vocab[word].index])
all_words.add(self.vocab[word].index)
else:
raise KeyError("word '%s' not in vocabulary" % word)
if not mean:
raise ValueError("cannot compute similarity with no input")
mean = matutils.unitvec(array(mean).mean(axis=)).astype(REAL) dists = dot(self.syn0norm, mean)
if not topn:
return dists
best = argsort(dists)[::-][:topn + len(all_words)]
# ignore (don't return) words from the input
result = [(self.index2word[sim], float(dists[sim])) for sim in best if sim not in all_words]
return result[:topn] def doesnt_match(self, words):
"""
Which word from the given list doesn't go with the others? Example:: >>> trained_model.doesnt_match("breakfast cereal dinner lunch".split())
'cereal' """
self.init_sims() words = [word for word in words if word in self.vocab] # filter out OOV words
logger.debug("using words %s" % words)
if not words:
raise ValueError("cannot select a word from an empty list")
vectors = vstack(self.syn0norm[self.vocab[word].index] for word in words).astype(REAL)
mean = matutils.unitvec(vectors.mean(axis=)).astype(REAL)
dists = dot(vectors, mean)
return sorted(zip(dists, words))[][] def __getitem__(self, word):
"""
Return a word's representations in vector space, as a 1D numpy array. Example:: >>> trained_model['woman']
array([ -1.40128313e-02, ...] """
return self.syn0[self.vocab[word].index] def __contains__(self, word):
return word in self.vocab def similarity(self, w1, w2):
"""
Compute cosine similarity between two words. Example:: >>> trained_model.similarity('woman', 'man')
0.73723527 >>> trained_model.similarity('woman', 'woman')
1.0 """
return dot(matutils.unitvec(self[w1]), matutils.unitvec(self[w2])) def init_sims(self, replace=False):
"""
Precompute L2-normalized vectors. If `replace` is set, forget the original vectors and only keep the normalized
ones = saves lots of memory! Note that you **cannot continue training** after doing a replace. The model becomes
effectively read-only = you can call `most_similar`, `similarity` etc., but not `train`. """
if getattr(self, 'syn0norm', None) is None or replace:
logger.info("precomputing L2-norms of word weight vectors")
if replace:
for i in xrange(self.syn0.shape[]):
self.syn0[i, :] /= sqrt((self.syn0[i, :] ** ).sum(-))
self.syn0norm = self.syn0
if hasattr(self, 'syn1'):
del self.syn1
else:
self.syn0norm = (self.syn0 / sqrt((self.syn0 ** ).sum(-))[..., newaxis]).astype(REAL) def accuracy(self, questions, restrict_vocab=):
"""
Compute accuracy of the model. `questions` is a filename where lines are
-tuples of words, split into sections by ": SECTION NAME" lines.
See https://code.google.com/p/word2vec/source/browse/trunk/questions-words.txt for an example. The accuracy is reported (=printed to log and returned as a list) for each
section separately, plus there's one aggregate summary at the end. Use `restrict_vocab` to ignore all questions containing a word whose frequency
is not in the top-N most frequent words (default top ,). This method corresponds to the `compute-accuracy` script of the original C word2vec. """
ok_vocab = dict(sorted(iteritems(self.vocab),
key=lambda item: -item[].count)[:restrict_vocab])
ok_index = set(v.index for v in itervalues(ok_vocab)) def log_accuracy(section):
correct, incorrect = section['correct'], section['incorrect']
if correct + incorrect > :
logger.info("%s: %.1f%% (%i/%i)" %
(section['section'], 100.0 * correct / (correct + incorrect),
correct, correct + incorrect)) sections, section = [], None
for line_no, line in enumerate(utils.smart_open(questions)):
# TODO: use level3 BLAS (=evaluate multiple questions at once), for speed
line = utils.to_unicode(line)
if line.startswith(': '):
# a new section starts => store the old section
if section:
sections.append(section)
log_accuracy(section)
section = {'section': line.lstrip(': ').strip(), 'correct': , 'incorrect': }
else:
if not section:
raise ValueError("missing section header before line #%i in %s" % (line_no, questions))
try:
a, b, c, expected = [word.lower() for word in
line.split()] # TODO assumes vocabulary preprocessing uses lowercase, too...
except:
logger.info("skipping invalid line #%i in %s" % (line_no, questions))
if a not in ok_vocab or b not in ok_vocab or c not in ok_vocab or expected not in ok_vocab:
logger.debug("skipping line #%i with OOV words: %s" % (line_no, line))
continue ignore = set(self.vocab[v].index for v in [a, b, c]) # indexes of words to ignore
predicted = None
# find the most likely prediction, ignoring OOV words and input words
for index in argsort(self.most_similar(positive=[b, c], negative=[a], topn=False))[::-]:
if index in ok_index and index not in ignore:
predicted = self.index2word[index]
if predicted != expected:
logger.debug("%s: expected %s, predicted %s" % (line.strip(), expected, predicted))
break
section['correct' if predicted == expected else 'incorrect'] +=
if section:
# store the last section, too
sections.append(section)
log_accuracy(section) total = {'section': 'total', 'correct': sum(s['correct'] for s in sections),
'incorrect': sum(s['incorrect'] for s in sections)}
log_accuracy(total)
sections.append(total)
return sections def __str__(self):
return "Word2Vec(vocab=%s, size=%s, alpha=%s)" % (len(self.index2word), self.layer1_size, self.alpha) def save(self, *args, **kwargs):
kwargs['ignore'] = kwargs.get('ignore', ['syn0norm']) # don't bother storing the cached normalized vectors
super(Word2Vec, self).save(*args, **kwargs) class Sent2Vec(utils.SaveLoad):
def __init__(self, sentences, model_file=None, alpha=0.025, window=, sample=, seed=,
workers=, min_alpha=0.0001, sg=, hs=, negative=, cbow_mean=, iteration=):
self.sg = int(sg)
self.table = None # for negative sampling --> this needs a lot of RAM! consider setting back to None before saving
self.alpha = float(alpha)
self.window = int(window)
self.seed = seed
self.sample = sample
self.workers = workers
self.min_alpha = min_alpha
self.hs = hs
self.negative = negative
self.cbow_mean = int(cbow_mean)
self.iteration = iteration if model_file and sentences:
self.w2v = Word2Vec.load(model_file)
self.vocab = self.w2v.vocab
self.layer1_size = self.w2v.layer1_size
self.reset_sent_vec(sentences)
for i in range(iteration):
self.train_sent(sentences) def reset_sent_vec(self, sentences):
"""Reset all projection weights to an initial (untrained) state, but keep the existing vocabulary."""
logger.info("resetting vectors for sentences")
random.seed(self.seed)
self.sents_len =
for sent in sentences:
self.sents_len +=
self.sents = empty((self.sents_len, self.layer1_size), dtype=REAL)
# randomize weights vector by vector, rather than materializing a huge random matrix in RAM at once
for i in xrange(self.sents_len):
self.sents[i] = (random.rand(self.layer1_size) - 0.5) / self.layer1_size def train_sent(self, sentences, total_words=None, word_count=, sent_count=, chunksize=):
"""
Update the model's neural weights from a sequence of sentences (can be a once-only generator stream).
Each sentence must be a list of unicode strings. """
logger.info("training model with %i workers on %i sentences and %i features, "
"using 'skipgram'=%s 'hierarchical softmax'=%s 'subsample'=%s and 'negative sampling'=%s" %
(self.workers, self.sents_len, self.layer1_size, self.sg, self.hs, self.sample, self.negative)) if not self.vocab:
raise RuntimeError("you must first build vocabulary before training the model") start, next_report = time.time(), [1.0]
word_count = [word_count]
sent_count = [sent_count]
total_words = total_words or sum(v.count for v in itervalues(self.vocab))
total_sents = self.sents_len * self.iteration
jobs = Queue(
maxsize= * self.workers) # buffer ahead only a limited number of jobs.. this is the reason we can't simply use ThreadPool :(
lock = threading.Lock() # for shared state (=number of words trained so far, log reports...) def worker_train():
"""Train the model, lifting lists of sentences from the jobs queue."""
work = zeros(self.layer1_size, dtype=REAL) # each thread must have its own work memory
neu1 = matutils.zeros_aligned(self.layer1_size, dtype=REAL) while True:
job = jobs.get()
if job is None: # data finished, exit
break
# update the learning rate before every job
alpha = max(self.min_alpha, self.alpha * ( - 1.0 * word_count[] / total_words))
if self.sg:
job_words = sum(self.train_sent_vec_sg(self.w2v, sent_no, sentence, alpha, work)
for sent_no, sentence in job)
else:
job_words = sum(self.train_sent_vec_cbow(self.w2v, sent_no, sentence, alpha, work, neu1)
for sent_no, sentence in job)
with lock:
word_count[] += job_words
sent_count[] += chunksize
elapsed = time.time() - start
if elapsed >= next_report[]:
logger.info("PROGRESS: at %.2f%% sents, alpha %.05f, %.0f words/s" %
(100.0 * sent_count[] / total_sents, alpha,
word_count[] / elapsed if elapsed else 0.0))
next_report[
] = elapsed + 1.0 # don't flood the log, wait at least a second between progress reports workers = [threading.Thread(target=worker_train) for _ in xrange(self.workers)]
for thread in workers:
thread.daemon = True # make interrupting the process with ctrl+c easier
thread.start() def prepare_sentences():
for sent_no, sentence in enumerate(sentences):
# avoid calling random_sample() where prob >= , to speed things up a little:
# sampled = [self.vocab[word] for word in sentence
# if word in self.vocab and (self.vocab[word].sample_probability >= 1.0 or self.vocab[word].sample_probability >= random.random_sample())]
sampled = [self.vocab.get(word, None) for word in sentence]
yield (sent_no, sampled) # convert input strings to Vocab objects (eliding OOV/downsampled words), and start filling the jobs queue
for job_no, job in enumerate(utils.grouper(prepare_sentences(), chunksize)):
logger.debug("putting job #%i in the queue, qsize=%i" % (job_no, jobs.qsize()))
jobs.put(job)
logger.info("reached the end of input; waiting to finish %i outstanding jobs" % jobs.qsize())
for _ in xrange(self.workers):
jobs.put(None) # give the workers heads up that they can finish -- no more work! for thread in workers:
thread.join() elapsed = time.time() - start
logger.info("training on %i words took %.1fs, %.0f words/s" %
(word_count[], elapsed, word_count[] / elapsed if elapsed else 0.0)) return word_count[] def train_sent_vec_cbow(self, model, sent_no, sentence, alpha, work=None, neu1=None):
"""
Update CBOW model by training on a single sentence. The sentence is a list of Vocab objects (or None, where the corresponding
word is not in the vocabulary. Called internally from `Word2Vec.train()`. This is the non-optimized, Python version. If you have cython installed, gensim
will use the optimized version from word2vec_inner instead. """
sent_vec = self.sents[sent_no]
if self.negative:
# precompute negative labels
labels = zeros(self.negative + )
labels[] = . for pos, word in enumerate(sentence):
if word is None:
continue # OOV word in the input sentence => skip
reduced_window = random.randint(self.window) # `b` in the original word2vec code
start = max(, pos - self.window + reduced_window)
window_pos = enumerate(sentence[start: pos + self.window + - reduced_window], start)
word2_indices = [word2.index for pos2, word2 in window_pos if (word2 is not None and pos2 != pos)]
l1 = np_sum(model.syn0[word2_indices], axis=) # x layer1_size
l1 += sent_vec
if word2_indices and self.cbow_mean:
l1 /= len(word2_indices)
neu1e = zeros(l1.shape) if self.hs:
l2a = model.syn1[word.point] # 2d matrix, codelen x layer1_size
fa = . / (. + exp(-dot(l1, l2a.T))) # propagate hidden -> output
ga = (. - word.code - fa) * alpha # vector of error gradients multiplied by the learning rate
# model.syn1[word.point] += outer(ga, l1) # learn hidden -> output
neu1e += dot(ga, l2a) # save error if self.negative:
# use this word (label = ) + `negative` other random words not from this sentence (label = )
word_indices = [word.index]
while len(word_indices) < self.negative + :
w = model.table[random.randint(model.table.shape[])]
if w != word.index:
word_indices.append(w)
l2b = model.syn1neg[word_indices] # 2d matrix, k+ x layer1_size
fb = . / (. + exp(-dot(l1, l2b.T))) # propagate hidden -> output
gb = (labels - fb) * alpha # vector of error gradients multiplied by the learning rate
# model.syn1neg[word_indices] += outer(gb, l1) # learn hidden -> output
neu1e += dot(gb, l2b) # save error # model.syn0[word2_indices] += neu1e # learn input -> hidden, here for all words in the window separately
self.sents[sent_no] += neu1e # learn input -> hidden, here for all words in the window separately return len([word for word in sentence if word is not None]) def train_sent_vec_sg(self, model, sent_no, sentence, alpha, work=None):
"""
Update skip-gram model by training on a single sentence. The sentence is a list of Vocab objects (or None, where the corresponding
word is not in the vocabulary. Called internally from `Word2Vec.train()`. This is the non-optimized, Python version. If you have cython installed, gensim
will use the optimized version from word2vec_inner instead. """
if self.negative:
# precompute negative labels
labels = zeros(self.negative + )
labels[] = 1.0 for pos, word in enumerate(sentence):
if word is None:
continue # OOV word in the input sentence => skip
reduced_window = random.randint(model.window) # `b` in the original word2vec code # now go over all words from the (reduced) window, predicting each one in turn
start = max(, pos - model.window + reduced_window)
for pos2, word2 in enumerate(sentence[start: pos + model.window + - reduced_window], start):
# don't train on OOV words and on the `word` itself
if word2:
# l1 = model.syn0[word.index]
l1 = self.sents[sent_no]
neu1e = zeros(l1.shape) if self.hs:
# work on the entire tree at once, to push as much work into numpy's C routines as possible (performance)
l2a = deepcopy(model.syn1[word2.point]) # 2d matrix, codelen x layer1_size
fa = 1.0 / (1.0 + exp(-dot(l1, l2a.T))) # propagate hidden -> output
ga = ( - word2.code - fa) * alpha # vector of error gradients multiplied by the learning rate
# model.syn1[word2.point] += outer(ga, l1) # learn hidden -> output
neu1e += dot(ga, l2a) # save error if self.negative:
# use this word (label = ) + `negative` other random words not from this sentence (label = )
word_indices = [word2.index]
while len(word_indices) < model.negative + :
w = model.table[random.randint(model.table.shape[])]
if w != word2.index:
word_indices.append(w)
l2b = model.syn1neg[word_indices] # 2d matrix, k+ x layer1_size
fb = . / (. + exp(-dot(l1, l2b.T))) # propagate hidden -> output
gb = (labels - fb) * alpha # vector of error gradients multiplied by the learning rate
# model.syn1neg[word_indices] += outer(gb, l1) # learn hidden -> output
neu1e += dot(gb, l2b) # save error # model.syn0[word.index] += neu1e # learn input -> hidden
self.sents[sent_no] += neu1e # learn input -> hidden return len([word for word in sentence if word is not None]) def save_sent2vec_format(self, fname):
"""
Store the input-hidden weight matrix in the same format used by the original
C word2vec-tool, for compatibility. """
logger.info("storing %sx%s projection weights into %s" % (self.sents_len, self.layer1_size, fname))
assert (self.sents_len, self.layer1_size) == self.sents.shape
with utils.smart_open(fname, 'wb') as fout:
fout.write(utils.to_utf8("%s %s\n" % self.sents.shape))
# store in sorted order: most frequent words at the top
for sent_no in xrange(self.sents_len):
row = self.sents[sent_no]
fout.write(utils.to_utf8("sent_%d %s\n" % (sent_no, ' '.join("%f" % val for val in row)))) def similarity(self, sent1, sent2):
"""
Compute cosine similarity between two sentences. sent1 and sent2 are
the indexs in the train file. Example:: >>> trained_model.similarity(, )
1.0 >>> trained_model.similarity(, )
0.73 """
return dot(matutils.unitvec(self.sents[sent1]), matutils.unitvec(self.sents[sent2])) class BrownCorpus(object):
"""Iterate over sentences from the Brown corpus (part of NLTK data).""" def __init__(self, dirname):
self.dirname = dirname def __iter__(self):
for fname in os.listdir(self.dirname):
fname = os.path.join(self.dirname, fname)
if not os.path.isfile(fname):
continue
for line in utils.smart_open(fname):
line = utils.to_unicode(line)
# each file line is a single sentence in the Brown corpus
# each token is WORD/POS_TAG
token_tags = [t.split('/') for t in line.split() if len(t.split('/')) == ]
# ignore words with non-alphabetic tags like ",", "!" etc (punctuation, weird stuff)
words = ["%s/%s" % (token.lower(), tag[:]) for token, tag in token_tags if tag[:].isalpha()]
if not words: # don't bother sending out empty sentences
continue
yield words class Text8Corpus(object):
"""Iterate over sentences from the "text8" corpus, unzipped from http://mattmahoney.net/dc/text8.zip .""" def __init__(self, fname):
self.fname = fname def __iter__(self):
# the entire corpus is one gigantic line -- there are no sentence marks at all
# so just split the sequence of tokens arbitrarily: sentence = tokens
sentence, rest, max_sentence_length = [], b'',
with utils.smart_open(self.fname) as fin:
while True:
text = rest + fin.read() # avoid loading the entire file (= line) into RAM
if text == rest: # EOF
sentence.extend(rest.split()) # return the last chunk of words, too (may be shorter/longer)
if sentence:
yield sentence
break
last_token = text.rfind(
b' ') # the last token may have been split in two... keep it for the next iteration
words, rest = (
utils.to_unicode(text[:last_token]).split(), text[last_token:].strip()) if last_token >= else (
[], text)
sentence.extend(words)
while len(sentence) >= max_sentence_length:
yield sentence[:max_sentence_length]
sentence = sentence[max_sentence_length:] class LineSentence(object):
"""Simple format: one sentence = one line; words already preprocessed and separated by whitespace.""" def __init__(self, source):
"""
`source` can be either a string or a file object. Example:: sentences = LineSentence('myfile.txt') Or for compressed files:: sentences = LineSentence('compressed_text.txt.bz2')
sentences = LineSentence('compressed_text.txt.gz') """
self.source = source def __iter__(self):
"""Iterate through the lines in the source."""
try:
# Assume it is a file-like object and try treating it as such
# Things that don't have seek will trigger an exception
self.source.seek()
for line in self.source:
yield utils.to_unicode(line).split()
except AttributeError:
# If it didn't work like a file, use it as a string filename
with utils.smart_open(self.source) as fin:
for line in fin:
yield utils.to_unicode(line).split() # Example: ./word2vec.py ~/workspace/word2vec/text8 ~/workspace/word2vec/questions-words.txt ./text8
if __name__ == "__main__":
logging.basicConfig(format='%(asctime)s : %(threadName)s : %(levelname)s : %(message)s', level=logging.INFO)
logging.info("running %s" % " ".join(sys.argv))
logging.info("using optimization %s" % FAST_VERSION) # check and process cmdline input
program = os.path.basename(sys.argv[])
if len(sys.argv) < :
print(globals()['__doc__'] % locals())
sys.exit() seterr(all='raise') # don't ignore numpy errors if len(sys.argv) > :
input_file = sys.argv[]
model_file = sys.argv[]
out_file = sys.argv[]
model = Sent2Vec(LineSentence(input_file), model_file=model_file, iteration=)
model.save_sent2vec_format(out_file)
elif len(sys.argv) > :
input_file = sys.argv[]
model = Word2Vec(LineSentence(input_file), size=, window=, min_count=, workers=)
model.save(input_file + '.model')
model.save_word2vec_format(input_file + '.vec')
else:
pass program = os.path.basename(sys.argv[])
logging.info("finished running %s" % program)

Relevant Link:

https://www.zhihu.com/question/21661274
https://fb56552f-a-62cb3a1a-s-sites.googlegroups.com/site/deeplearningworkshopnips2014/68.pdf?attachauth=ANoY7cq83cA2A-ZgTWKF9vIxGRQs96O5OGXbt8n_GqRuU_4IellDNS17z_56Wa6aafihhDHuNHM_7d_jitkT27Cy_RnspiY8Dms5w_eBXFrVBFoFqSdzPmUbHaAblYPGHNA3mCAYn4whKO5w9uk7w9BLyMIX-QNco591gprLzPTM_XHLYa5U2YtIBhVptFj4LMedeKki_hxk2UkHCN0_MwrLwAgZneBihpOAWSX8GgRb5-uqUWpq3CI%3D&attredirects=2
https://www.zhihu.com/question/27689129
https://github.com/hassyGo/paragraph-vector
https://arxiv.org/pdf/1405.4053.pdf
https://github.com/jiyfeng/ParagraphVector/tree/master/ParaVector
https://github.com/JonathanRaiman/PVDM
https://github.com/thunlp/paragraph2vec
https://github.com/dennybritz/deeplearning-papernotes/blob/master/notes/distributed-representations-of-sentences-and-documents.md
https://github.com/klb3713/sentence2vec

5. Visualizing a Convolutional Neural Network with t-SNE - High-Dimensional Visualization of CNN-Abstracted Images

0x1:VGG image network

We know that in NLP, word embedding vectors are a by-product of training a shallow neural network, but we can treat this by-product as the mapping of each word into an embedding space.

Something similar holds for images. We build a multi-layer VGG convolutional network and feed images through it for training; the activations output by the last layer of the network are essentially a weight vector, which we can treat as a vectorized representation of the input image in a high-dimensional space.

from keras.models import Sequential
from keras.layers import Dense, Dropout, Flatten
from keras.layers import Convolution2D, MaxPooling2D, ZeroPadding2D


def VGG_16():
    model = Sequential()
    model.add(ZeroPadding2D((1, 1), input_shape=(3, 224, 224)))
    model.add(Convolution2D(64, 3, 3, activation='relu'))
    model.add(ZeroPadding2D((1, 1)))
    model.add(Convolution2D(64, 3, 3, activation='relu'))
    model.add(MaxPooling2D((2, 2), strides=(2, 2)))

    model.add(ZeroPadding2D((1, 1)))
    model.add(Convolution2D(128, 3, 3, activation='relu'))
    model.add(ZeroPadding2D((1, 1)))
    model.add(Convolution2D(128, 3, 3, activation='relu'))
    model.add(MaxPooling2D((2, 2), strides=(2, 2)))

    model.add(ZeroPadding2D((1, 1)))
    model.add(Convolution2D(256, 3, 3, activation='relu'))
    model.add(ZeroPadding2D((1, 1)))
    model.add(Convolution2D(256, 3, 3, activation='relu'))
    model.add(ZeroPadding2D((1, 1)))
    model.add(Convolution2D(256, 3, 3, activation='relu'))
    model.add(MaxPooling2D((2, 2), strides=(2, 2)))

    model.add(ZeroPadding2D((1, 1)))
    model.add(Convolution2D(512, 3, 3, activation='relu'))
    model.add(ZeroPadding2D((1, 1)))
    model.add(Convolution2D(512, 3, 3, activation='relu'))
    model.add(ZeroPadding2D((1, 1)))
    model.add(Convolution2D(512, 3, 3, activation='relu'))
    model.add(MaxPooling2D((2, 2), strides=(2, 2)))

    model.add(ZeroPadding2D((1, 1)))
    model.add(Convolution2D(512, 3, 3, activation='relu'))
    model.add(ZeroPadding2D((1, 1)))
    model.add(Convolution2D(512, 3, 3, activation='relu'))
    model.add(ZeroPadding2D((1, 1)))
    model.add(Convolution2D(512, 3, 3, activation='relu'))
    model.add(MaxPooling2D((2, 2), strides=(2, 2)))

    # the two 4096-dim fully connected layers; the final softmax classifier is
    # omitted because only the feature vector is needed here
    model.add(Flatten())
    model.add(Dense(4096, activation='relu'))
    model.add(Dropout(0.5))
    model.add(Dense(4096, activation='relu'))

    return model

The activation vector produced by this last layer can be used directly as a high-dimensional vector and fed into t-SNE for visualization. Here we use pre-computed image feature vectors together with the corresponding Caltech-101 dataset.

# -*- coding: utf-8 -*-

import os
import random
import numpy as np
import json
import matplotlib.pyplot
import cPickle as pickle
from matplotlib.pyplot import imshow, show
from PIL import Image
from sklearn.manifold import TSNE
from tqdm import tqdm

if __name__ == '__main__':
    images, pca_features = pickle.load(open('../data/features_caltech101.p', 'r'))
    for i, f in zip(images, pca_features):
        print("image: %s, features: %0.2f,%0.2f,%0.2f,%0.2f... " % (i, f[0], f[1], f[2], f[3]))

    # Although in principle, t-SNE works with any number of images, it's difficult to place that
    # many tiles in a single image. So instead, we will take a random subset of 1000 images and
    # plot those on a t-SNE instead. This step is optional.
    num_images_to_plot = 1000

    '''
    It is usually a good idea to first run the vectors through a faster dimensionality reduction technique like principal component analysis to project your data into an intermediate lower-dimensional space before using t-SNE.
    This improves accuracy, and cuts down on runtime since PCA is more efficient than t-SNE. Since we have already projected our data down with PCA in the previous notebook, we can proceed straight to running the t-SNE on the feature vectors.
    '''
    if len(images) > num_images_to_plot:
        sort_order = sorted(random.sample(xrange(len(images)), num_images_to_plot))
        images = [images[i] for i in sort_order]
        pca_features = [pca_features[i] for i in sort_order]

    # Internally, t-SNE uses an iterative approach, making small (or sometimes large) adjustments to
    # the points. By default, t-SNE will go a maximum of 1000 iterations, but in practice, it often
    # terminates early because it has found a locally optimal (good enough) embedding.
    X = np.array(pca_features)
    tsne = TSNE(n_components=2, learning_rate=150, perplexity=30, angle=0.2, verbose=2).fit_transform(X)

    # The variable tsne contains an array of unnormalized 2d points, corresponding to the embedding.
    # In the next step, we normalize the embedding so that it lies entirely in the range (0,1).
    tx, ty = tsne[:, 0], tsne[:, 1]
    tx = (tx - np.min(tx)) / (np.max(tx) - np.min(tx))
    ty = (ty - np.min(ty)) / (np.max(ty) - np.min(ty))

    # Finally, we will compose a new RGB image where the set of images have been drawn according to the t-SNE results.
    # Adjust width and height to set the size in pixels of the full image, and set max_dim to the pixel size (on the largest size) to scale images to.
    width = 4000
    height = 3000
    max_dim = 100

    full_image = Image.new('RGB', (width, height))
    for img, x, y in tqdm(zip(images, tx, ty)):
        tile = Image.open(img)
        rs = max(1, tile.width / max_dim, tile.height / max_dim)
        tile = tile.resize((int(tile.width / rs), int(tile.height / rs)), Image.ANTIALIAS)
        full_image.paste(tile, (int((width - max_dim) * x), int((height - max_dim) * y)))

    matplotlib.pyplot.figure(figsize=(16, 12))
    imshow(full_image)
    # show()

    # we can save the image to disk:
    full_image.save("../assets/example-tSNE-caltech101.jpg")

In the resulting image, motorbikes, chairs, airplanes and elephants each end up grouped together. This shows that the VGG CNN has captured fine-grained, high-dimensional information about these objects, and t-SNE makes that structure visually apparent.

0x2: handwritten digits MNIST t-SNE

# -*- coding: utf-8 -*-

import numpy as np
from skdata.mnist.views import OfficialImageClassification
from matplotlib import pyplot as plt
from tsne import bh_sne

# load up data
data = OfficialImageClassification(x_dtype="float32")
x_data = data.all_images
y_data = data.all_labels

# convert image data to a float64 matrix. float64 is needed for bh_sne
x_data = np.asarray(x_data).astype('float64')
x_data = x_data.reshape((x_data.shape[0], -1))

# For speed of computation, only run on a subset
n = 20000
x_data = x_data[:n]
y_data = y_data[:n]

# perform t-SNE embedding
vis_data = bh_sne(x_data)

# plot the result
vis_x = vis_data[:, 0]
vis_y = vis_data[:, 1]

plt.scatter(vis_x, vis_y, c=y_data, cmap=plt.cm.get_cmap("jet", 10))
plt.colorbar(ticks=range(10))
plt.clim(-0.5, 9.5)
plt.show()

The ten digits 0-9 are drawn in ten different colors. The t-SNE plot shows that the different handwritten digits already occupy distinguishable regions of the high-dimensional input space, which goes some way toward explaining why a model such as a CNN can classify MNIST digits so accurately.

Author's note: a model can only classify accurately if the data itself contains separable structure; in a classification task, how the input data is represented is sometimes just as important as, or even more important than, which model is chosen.
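
If the skdata and bh_sne packages used above are hard to install, roughly the same picture can be reproduced with keras (for loading MNIST) and scikit-learn's TSNE. The sketch below is only an illustration under that assumption; the 5000-sample subset and the t-SNE parameters are arbitrary choices, not the settings of the script above.

# -*- coding: utf-8 -*-
# Minimal sketch: MNIST t-SNE using keras for the data and scikit-learn for t-SNE,
# as an alternative to skdata + bh_sne. Subset size and parameters are assumptions.
import numpy as np
from matplotlib import pyplot as plt
from keras.datasets import mnist
from sklearn.manifold import TSNE

(x_train, y_train), _ = mnist.load_data()

# flatten the 28x28 images into 784-d vectors and keep a subset for speed
n = 5000
x_data = x_train[:n].reshape((n, -1)).astype('float64') / 255.0
y_data = y_train[:n]

# 2D t-SNE embedding of the raw pixel vectors
vis_data = TSNE(n_components=2, perplexity=30, verbose=2).fit_transform(x_data)

plt.scatter(vis_data[:, 0], vis_data[:, 1], c=y_data,
            cmap=plt.cm.get_cmap("jet", 10), s=3)
plt.colorbar(ticks=range(10))
plt.clim(-0.5, 9.5)
plt.show()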

Relevant Link:

https://github.com/genekogan/image-tSNE
https://indico.io/blog/visualizing-with-t-sne/
https://github.com/oreillymedia/t-SNE-tutorial
https://drive.google.com/drive/folders/0B3WXSfqxKDkFYm9GMzlnemdEbEE
http://www.vision.caltech.edu/Image_Datasets/Caltech101/#Download
https://github.com/ml4a/ml4a-guides/blob/master/notebooks/image-tsne.ipynb
https://github.com/genekogan/ofxTSNE
http://ml4a.github.io/guides/ImageTSNEViewer/
http://ml4a.github.io/guides/ImageTSNELive/
https://github.com/ml4a/ml4a-ofx

6. Clustering Analysis of AV Actress Images from the Web

import argparse
import sys
import numpy as np
import json
import os
from os.path import isfile, join
import keras
from keras.preprocessing import image
from keras.applications.imagenet_utils import decode_predictions, preprocess_input
from keras.models import Model
from sklearn.decomposition import PCA
from sklearn.manifold import TSNE
from scipy.spatial import distance
from PIL import ImageFile

ImageFile.LOAD_TRUNCATED_IMAGES = True

def process_arguments(args):
    parser = argparse.ArgumentParser(description='tSNE on images')
    parser.add_argument('--images_path', action='store', help='path to directory of images')
    parser.add_argument('--output_path', action='store', help='path to where to put output json file')
    parser.add_argument('--num_dimensions', action='store', default=2, help='dimensionality of t-SNE points (default 2)')
    parser.add_argument('--perplexity', action='store', default=30, help='perplexity of t-SNE (default 30)')
    parser.add_argument('--learning_rate', action='store', default=150, help='learning rate of t-SNE (default 150)')
    params = vars(parser.parse_args(args))
    return params

def get_image(path, input_shape):
    img = image.load_img(path, target_size=input_shape)
    x = image.img_to_array(img)
    x = np.expand_dims(x, axis=0)
    x = preprocess_input(x)
    return x

def analyze_images(images_path):
    # make feature_extractor
    model = keras.applications.VGG16(weights='imagenet', include_top=True)
    feat_extractor = Model(input=model.input, output=model.get_layer("fc2").output)
    input_shape = model.input_shape[1:3]
    # get images
    candidate_images = [f for f in os.listdir(images_path) if os.path.splitext(f)[1].lower() in ['.jpg', '.png', '.jpeg']]
    # analyze images and grab activations
    activations = []
    images = []
    for idx, image_path in enumerate(candidate_images):
        file_path = join(images_path, image_path)
        img = get_image(file_path, input_shape)
        if img is not None:
            print("getting activations for %s %d/%d" % (image_path, idx, len(candidate_images)))
            acts = feat_extractor.predict(img)[0]
            activations.append(acts)
            images.append(image_path)
    # run PCA first
    print("Running PCA on %d images..." % len(activations))
    features = np.array(activations)
    pca = PCA(n_components=300)
    pca.fit(features)
    pca_features = pca.transform(features)
    return images, pca_features

def run_tsne(images_path, output_path, tsne_dimensions, tsne_perplexity, tsne_learning_rate):
    images, pca_features = analyze_images(images_path)
    print("Running t-SNE on %d images..." % len(images))
    X = np.array(pca_features)
    tsne = TSNE(n_components=tsne_dimensions, learning_rate=tsne_learning_rate, perplexity=tsne_perplexity, verbose=2).fit_transform(X)
    # save normalized (0,1) coordinates to json; cast to float so json can serialize them
    data = []
    for i, f in enumerate(images):
        point = [float((tsne[i, k] - np.min(tsne[:, k])) / (np.max(tsne[:, k]) - np.min(tsne[:, k]))) for k in range(tsne_dimensions)]
        data.append({"path": os.path.abspath(join(images_path, images[i])), "point": point})
    with open(output_path, 'w') as outfile:
        json.dump(data, outfile)

if __name__ == '__main__':
    params = process_arguments(sys.argv[1:])
    images_path = params['images_path']
    output_path = params['output_path']
    tsne_dimensions = int(params['num_dimensions'])
    tsne_perplexity = int(params['perplexity'])
    tsne_learning_rate = int(params['learning_rate'])
    run_tsne(images_path, output_path, tsne_dimensions, tsne_perplexity, tsne_learning_rate)
    print("finished saving %s" % output_path)

Search Baidu for "AV女优" and download roughly 1000 of the returned images into ../data/av/.
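
The download step itself is not shown in the original. As a rough illustration, here is a minimal sketch that assumes the image URLs returned by the search have already been collected into a text file (av_urls.txt is a hypothetical name, one URL per line) and simply fetches them into ../data/av/.

# -*- coding: utf-8 -*-
# Minimal sketch (hypothetical helper): fetch images whose URLs were saved to a
# text file, one URL per line, into ../data/av/ for use by tSNE-images.py.
import os
import urllib2

url_file = 'av_urls.txt'        # hypothetical file of collected image URLs
out_dir = '../data/av/'
if not os.path.exists(out_dir):
    os.makedirs(out_dir)

with open(url_file) as f:
    urls = [line.strip() for line in f if line.strip()]

for i, url in enumerate(urls):
    try:
        data = urllib2.urlopen(url, timeout=10).read()
        with open(os.path.join(out_dir, '%04d.jpg' % i), 'wb') as out:
            out.write(data)
    except Exception as e:
        print "failed to fetch %s: %s" % (url, e)

With the images saved under ../data/av/, run the feature-extraction and t-SNE script on them: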

python tSNE-images.py --images_path ../data/av/ --output_path ../module/ImageTSNEViewer/av_points.json

Then lay the resulting thumbnails out on one large canvas with matplotlib:

# -*- coding: utf-8 -*-
import json
import matplotlib.pyplot
from matplotlib.pyplot import imshow
from PIL import Image

if __name__ == '__main__':
    # size of the display board and of each tile
    width = 4000
    height = 3000
    max_dim = 100

    full_image = Image.new('RGB', (width, height))

    # read the pre-computed image coordinates
    with open('../module/ImageTSNEViewer/av_points.json', 'r') as f:
        data = json.load(f)
    for line in data:
        img = line['path']
        x, y = line['point'][0], line['point'][1]
        print img, x, y
        tile = Image.open(img)
        rs = max(1, tile.width / max_dim, tile.height / max_dim)
        tile = tile.resize((int(tile.width / rs), int(tile.height / rs)), Image.ANTIALIAS)
        full_image.paste(tile, (int((width - max_dim) * x), int((height - max_dim) * y)))

    matplotlib.pyplot.figure(figsize=(16, 12))
    imshow(full_image)

    # we can save the image to disk:
    full_image.save("../assets/example-tSNE-av.jpg")

Zooming in on the local details:

We can see that VGGNet has indeed captured the fine-grained, high-dimensional detail present in the images.
