Corpus download link

# -*- coding: utf-8 -*-

import jieba
import jieba.analyse

# suggest_freq adjusts a word's frequency so that it can (or cannot)
# be segmented out as a single token
jieba.suggest_freq('沙瑞金', True)
jieba.suggest_freq('田国富', True)
jieba.suggest_freq('高育良', True)
jieba.suggest_freq('侯亮平', True)
jieba.suggest_freq('钟小艾', True)
jieba.suggest_freq('陈岩石', True)
jieba.suggest_freq('欧阳菁', True)
jieba.suggest_freq('易学习', True)
jieba.suggest_freq('王大路', True)
jieba.suggest_freq('蔡成功', True)
jieba.suggest_freq('孙连城', True)
jieba.suggest_freq('季昌明', True)
jieba.suggest_freq('丁义珍', True)
jieba.suggest_freq('郑西坡', True)
jieba.suggest_freq('赵东来', True)
jieba.suggest_freq('高小琴', True)
jieba.suggest_freq('赵瑞龙', True)
jieba.suggest_freq('林华华', True)
jieba.suggest_freq('陆亦可', True)
jieba.suggest_freq('刘新建', True)
jieba.suggest_freq('刘庆祝', True)

with open('./in_the_name_of_people.txt', 'rb') as f:
    document = f.read()

document_cut = jieba.cut(document)
result = ' '.join(document_cut)
result = result.encode('utf-8')
with open('./in_the_name_of_people_segment.txt', 'wb+') as f2:
    f2.write(result)

Read the segmented file into memory. Here we use the LineSentence class provided by word2vec to read the file, then train a word2vec model with the following parameters:

  • min_count: ignore all words whose total frequency is lower than this value
  • size: dimensionality of the trained word vectors; defaults to 100
  • window: maximum distance within a sentence between the current word and the predicted word
  • hs: if 1, use hierarchical softmax; if 0, use negative sampling
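LineSentence itself just treats each line of the file as one sentence and splits it on whitespace, which is why the corpus was written out space-separated above. A minimal pure-Python sketch of that behavior (the `line_sentences` generator name is my own, not gensim's):

```python
import os
import tempfile

# Minimal sketch of LineSentence's behavior: each line of a
# whitespace-segmented file becomes one tokenized sentence.
def line_sentences(path):
    with open(path, encoding='utf-8') as f:
        for line in f:
            tokens = line.split()
            if tokens:  # skip empty lines
                yield tokens

# Demonstrate on a small temporary file instead of the real corpus.
with tempfile.NamedTemporaryFile('w', suffix='.txt', delete=False,
                                 encoding='utf-8') as tmp:
    tmp.write('沙瑞金 到 了 京州\n\n侯亮平 笑 了\n')
    path = tmp.name

sample_sentences = list(line_sentences(path))
os.remove(path)
print(sample_sentences)
# [['沙瑞金', '到', '了', '京州'], ['侯亮平', '笑', '了']]
```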
# import modules & set up logging
import logging
from gensim.models import word2vec

logging.basicConfig(format='%(asctime)s : %(levelname)s : %(message)s', level=logging.INFO)

sentences = word2vec.LineSentence('./in_the_name_of_people_segment.txt')
model = word2vec.Word2Vec(sentences, hs=1, min_count=1, window=3, size=100)
2019-05-14 17:13:22,538 : INFO : collecting all words and their counts
2019-05-14 17:13:22,540 : INFO : PROGRESS: at sentence #0, processed 0 words, keeping 0 word types
2019-05-14 17:13:22,593 : INFO : collected 17878 word types from a corpus of 161343 raw words and 2311 sentences
2019-05-14 17:13:22,594 : INFO : Loading a fresh vocabulary
2019-05-14 17:13:22,673 : INFO : effective_min_count=1 retains 17878 unique words (100% of original 17878, drops 0)
2019-05-14 17:13:22,674 : INFO : effective_min_count=1 leaves 161343 word corpus (100% of original 161343, drops 0)
2019-05-14 17:13:22,724 : INFO : deleting the raw counts dictionary of 17878 items
2019-05-14 17:13:22,724 : INFO : sample=0.001 downsamples 38 most-common words
2019-05-14 17:13:22,725 : INFO : downsampling leaves estimated 120578 word corpus (74.7% of prior 161343)
2019-05-14 17:13:22,738 : INFO : constructing a huffman tree from 17878 words
2019-05-14 17:13:23,069 : INFO : built huffman tree with maximum node depth 17
2019-05-14 17:13:23,097 : INFO : estimated required memory for 17878 words and 100 dimensions: 33968200 bytes
2019-05-14 17:13:23,098 : INFO : resetting layer weights
2019-05-14 17:13:23,271 : INFO : training model with 3 workers on 17878 vocabulary and 100 features, using sg=0 hs=1 sample=0.001 negative=5 window=3
2019-05-14 17:13:23,457 : INFO : worker thread finished; awaiting finish of 2 more threads
2019-05-14 17:13:23,458 : INFO : worker thread finished; awaiting finish of 1 more threads
2019-05-14 17:13:23,470 : INFO : worker thread finished; awaiting finish of 0 more threads
2019-05-14 17:13:23,471 : INFO : EPOCH - 1 : training on 161343 raw words (120329 effective words) took 0.2s, 613072 effective words/s
2019-05-14 17:13:23,655 : INFO : worker thread finished; awaiting finish of 2 more threads
2019-05-14 17:13:23,658 : INFO : worker thread finished; awaiting finish of 1 more threads
2019-05-14 17:13:23,676 : INFO : worker thread finished; awaiting finish of 0 more threads
2019-05-14 17:13:23,677 : INFO : EPOCH - 2 : training on 161343 raw words (120484 effective words) took 0.2s, 592001 effective words/s
2019-05-14 17:13:23,865 : INFO : worker thread finished; awaiting finish of 2 more threads
2019-05-14 17:13:23,866 : INFO : worker thread finished; awaiting finish of 1 more threads
2019-05-14 17:13:23,882 : INFO : worker thread finished; awaiting finish of 0 more threads
2019-05-14 17:13:23,883 : INFO : EPOCH - 3 : training on 161343 raw words (120571 effective words) took 0.2s, 589983 effective words/s
2019-05-14 17:13:24,065 : INFO : worker thread finished; awaiting finish of 2 more threads
2019-05-14 17:13:24,075 : INFO : worker thread finished; awaiting finish of 1 more threads
2019-05-14 17:13:24,084 : INFO : worker thread finished; awaiting finish of 0 more threads
2019-05-14 17:13:24,085 : INFO : EPOCH - 4 : training on 161343 raw words (120615 effective words) took 0.2s, 600460 effective words/s
2019-05-14 17:13:24,273 : INFO : worker thread finished; awaiting finish of 2 more threads
2019-05-14 17:13:24,274 : INFO : worker thread finished; awaiting finish of 1 more threads
2019-05-14 17:13:24,277 : INFO : worker thread finished; awaiting finish of 0 more threads
2019-05-14 17:13:24,279 : INFO : EPOCH - 5 : training on 161343 raw words (120605 effective words) took 0.2s, 631944 effective words/s
2019-05-14 17:13:24,279 : INFO : training on a 806715 raw words (602604 effective words) took 1.0s, 598553 effective words/s
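The "sample=0.001 downsamples 38 most-common words" line in the log comes from word2vec's frequent-word subsampling. A hedged sketch of the keep-probability formula used by the original word2vec implementation (the function name and word counts are illustrative; 161343 is the raw word count from the log above):

```python
import math

def keep_probability(word_count, total_words, sample=0.001):
    """Approximate word2vec subsampling: probability of keeping a
    single occurrence of a word with the given corpus count."""
    f = word_count / total_words          # relative frequency
    p = (math.sqrt(f / sample) + 1) * (sample / f)
    return min(p, 1.0)

# A very common word is dropped most of the time; a rare word never is.
print(keep_probability(8000, 161343))   # well below 1.0
print(keep_probability(10, 161343))     # 1.0
```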

The three-character words most similar to a given word

req_count = 5
for key in model.wv.similar_by_word('李达康', topn=100):
    if len(key[0]) == 3:
        req_count -= 1
        print(key[0], key[1])
        if req_count == 0:
            break
2019-05-14 17:13:27,276 : INFO : precomputing L2-norms of word weight vectors

赵东来 0.9634759426116943
陆亦可 0.9602197408676147
蔡成功 0.9589439034461975
王大路 0.9569779634475708
祁同伟 0.9561013579368591
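The same filter loop is repeated for each query word below; it could be written once as a small helper. A sketch over made-up (word, score) pairs, since reproducing the real output needs the trained model (the helper name and fake scores are mine):

```python
def top_similar_by_length(pairs, length=3, n=5):
    """From (word, score) pairs already sorted by score, keep the
    first n words that are exactly `length` characters long."""
    picked = []
    for word, score in pairs:
        if len(word) == length:
            picked.append((word, score))
            if len(picked) == n:
                break
    return picked

# With a real model this would be:
#   top_similar_by_length(model.wv.similar_by_word('李达康', topn=100))
fake = [('京州', 0.97), ('赵东来', 0.96), ('陆亦可', 0.96), ('书记', 0.95),
        ('蔡成功', 0.95), ('王大路', 0.95), ('祁同伟', 0.95), ('高小琴', 0.94)]
for word, score in top_similar_by_length(fake):
    print(word, score)
```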
req_count = 5
for key in model.wv.similar_by_word('赵东来', topn=100):
    if len(key[0]) == 3:
        req_count -= 1
        print(key[0], key[1])
        if req_count == 0:
            break
李达康 0.9634760618209839
陆亦可 0.9614400863647461
易学习 0.9584609866142273
祁同伟 0.9565587639808655
王大路 0.9549983739852905
req_count = 5
for key in model.wv.similar_by_word('高育良', topn=100):
    if len(key[0]) == 3:
        req_count -= 1
        print(key[0], key[1])
        if req_count == 0:
            break
沙瑞金 0.9721000790596008
侯亮平 0.9408242702484131
祁同伟 0.9268442392349243
李达康 0.9241408705711365
季昌明 0.913619339466095
req_count = 5
for key in model.wv.similar_by_word('沙瑞金', topn=100):
    if len(key[0]) == 3:
        req_count -= 1
        print(key[0], key[1])
        if req_count == 0:
            break
高育良 0.9721001386642456
李达康 0.9424692392349243
易学习 0.9424353241920471
无表情 0.9378770589828491
祁同伟 0.9351213574409485

Compute the similarity between two word vectors

print(model.wv.similarity('沙瑞金', '高育良'))
print(model.wv.similarity('李达康', '王大路'))
0.9721002
0.95697814
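model.wv.similarity is just the cosine of the angle between the two word vectors. A pure-Python sketch with toy 3-dimensional vectors (no gensim needed; the real vectors here are 100-dimensional):

```python
import math

def cosine_similarity(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

# Toy "word vectors" pointing in nearly the same direction.
v1 = [0.5, 1.0, -0.2]
v2 = [0.4, 0.9, -0.1]
print(cosine_similarity(v1, v2))  # close to 1: similar direction
print(cosine_similarity(v1, v1))  # 1.0 up to floating-point error
```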

Compute the list of words related to a given word

try:
    sim3 = model.wv.most_similar(u'侯亮平', topn=20)
    print(u'Words related to 侯亮平:\n')
    for key in sim3:
        print(key[0], key[1])
except KeyError:
    print(' error')
Words related to 侯亮平:

祁同伟 0.9691112041473389
陆亦可 0.9684256911277771
季昌明 0.9582957625389099
李达康 0.952505886554718
她 0.9482855200767517
他们 0.9475176334381104
易学习 0.9456426501274109
陈岩石 0.9433715343475342
马上 0.941593587398529
高育良 0.9408242702484131
郑西坡 0.9396289587020874
王大路 0.9381627440452576
沙瑞金 0.9350594282150269
赵东来 0.9322312474250793
陈海 0.9311630725860596
司机 0.9282065033912659
蔡成功 0.9281994104385376
他 0.92684006690979
组织 0.9237431287765503
大家 0.9234919548034668

Find the word that does not belong with the others

print(model.wv.doesnt_match(u"沙瑞金 高育良 李达康 刘庆祝".split()))
刘庆祝
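doesnt_match roughly works by averaging the length-normalized vectors of all the given words and returning the word whose vector has the lowest cosine similarity to that mean. A toy sketch that mirrors the idea (not gensim's exact implementation; the vectors are invented):

```python
import math

def doesnt_match_sketch(word_vectors):
    """word_vectors: dict of word -> vector. Return the word whose
    vector is least similar (by cosine) to the mean of all vectors."""
    def unit(v):
        n = math.sqrt(sum(x * x for x in v))
        return [x / n for x in v]

    def cos(u, v):
        return sum(a * b for a, b in zip(u, v)) / (
            math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v)))

    units = {w: unit(v) for w, v in word_vectors.items()}
    dim = len(next(iter(units.values())))
    mean = [sum(v[i] for v in units.values()) / len(units) for i in range(dim)]
    return min(units, key=lambda w: cos(units[w], mean))

# Toy 2-d vectors: three officials point one way, 刘庆祝 another.
toy = {
    '沙瑞金': [1.0, 0.1],
    '高育良': [0.9, 0.2],
    '李达康': [1.0, 0.0],
    '刘庆祝': [-0.2, 1.0],
}
print(doesnt_match_sketch(toy))  # 刘庆祝
```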

Save the model for easy reuse

model.save(u'人民的名义.model')
2019-05-14 17:13:39,338 : INFO : saving Word2Vec object under 人民的名义.model, separately None
2019-05-14 17:13:39,338 : INFO : not storing attribute vectors_norm
2019-05-14 17:13:39,339 : INFO : not storing attribute cum_table
2019-05-14 17:13:39,906 : INFO : saved 人民的名义.model

Load the model

model_2 = word2vec.Word2Vec.load('人民的名义.model')
2019-05-14 17:13:42,714 : INFO : loading Word2Vec object from 人民的名义.model
2019-05-14 17:13:42,942 : INFO : loading wv recursively from 人民的名义.model.wv.* with mmap=None
2019-05-14 17:13:42,943 : INFO : setting ignored attribute vectors_norm to None
2019-05-14 17:13:42,943 : INFO : loading vocabulary recursively from 人民的名义.model.vocabulary.* with mmap=None
2019-05-14 17:13:42,944 : INFO : loading trainables recursively from 人民的名义.model.trainables.* with mmap=None
2019-05-14 17:13:42,944 : INFO : setting ignored attribute cum_table to None
2019-05-14 17:13:42,945 : INFO : loaded 人民的名义.model
try:
    sim3 = model_2.wv.most_similar(u'侯亮平', topn=20)
    print(u'Words related to 侯亮平:\n')
    for key in sim3:
        print(key[0], key[1])
except KeyError:
    print(' error')

2019-05-14 17:14:02,083 : INFO : precomputing L2-norms of word weight vectors

Words related to 侯亮平:

祁同伟 0.9691112041473389
陆亦可 0.9684256911277771
季昌明 0.9582957625389099
李达康 0.952505886554718
她 0.9482855200767517
他们 0.9475176334381104
易学习 0.9456426501274109
陈岩石 0.9433715343475342
马上 0.941593587398529
高育良 0.9408242702484131
郑西坡 0.9396289587020874
王大路 0.9381627440452576
沙瑞金 0.9350594282150269
赵东来 0.9322312474250793
陈海 0.9311630725860596
司机 0.9282065033912659
蔡成功 0.9281994104385376
他 0.92684006690979
组织 0.9237431287765503
大家 0.9234919548034668
