Sentence similarity with TF / TF-IDF
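The script below compares two Chinese sentences: each sentence is segmented with jieba, turned into a term-count (TF) or TF-IDF vector with scikit-learn, and the two vectors are compared with cosine similarity, cos(v1, v2) = (v1 · v2) / (||v1|| * ||v2||), which is exactly what count_cos_similarity computes.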
import math
from math import isnan

# Segment a sentence with jieba and join the tokens with spaces,
# so the scikit-learn vectorizers below can split on whitespace.
def jieba_function(sent):
    import jieba
    tokens = jieba.cut(sent)
    return ' '.join(str(token) for token in tokens)
def count_cos_similarity(vec_1, vec_2):
    # Cosine similarity between two vectors of equal length.
    if len(vec_1) != len(vec_2):
        return 0.0
    s = sum(vec_1[i] * vec_2[i] for i in range(len(vec_2)))
    den1 = math.sqrt(sum(pow(number, 2) for number in vec_1))
    den2 = math.sqrt(sum(pow(number, 2) for number in vec_2))
    if den1 == 0 or den2 == 0:
        # Avoid division by zero when one vector is all zeros.
        return 0.0
    return s / (den1 * den2)
# Build term-count (bag-of-words) vectors for two raw sentence strings
# and print their cosine similarity.
def tf(sent1, sent2):
    from sklearn.feature_extraction.text import CountVectorizer
    sent1 = jieba_function(sent1)
    sent2 = jieba_function(sent2)
    count_vec = CountVectorizer()
    sentences = [sent1, sent2]
    print('sentences', sentences)
    # Both rows share the same fitted vocabulary, so the vectors have equal dimensions.
    vectors = count_vec.fit_transform(sentences).toarray()
    print('vector', vectors)  # the sentences as count vectors
    # The segmented words, i.e. the meaning of each vector dimension.
    # (On scikit-learn >= 1.0 use get_feature_names_out() instead.)
    print('cut_word', count_vec.get_feature_names())
    vec_1 = vectors[0]
    vec_2 = vectors[1]
    similarity = count_cos_similarity(vec_1, vec_2)
    if isnan(similarity):
        similarity = 0.0
    print('count_cos_similarity', similarity)
# Same pipeline with TF-IDF weighted vectors; returns the similarity.
def tfidf(sent1, sent2):
    from sklearn.feature_extraction.text import TfidfVectorizer
    sent1 = jieba_function(sent1)
    sent2 = jieba_function(sent2)
    tfidf_vec = TfidfVectorizer()
    sentences = [sent1, sent2]
    vectors = tfidf_vec.fit_transform(sentences).toarray()
    vec_1 = vectors[0]
    vec_2 = vectors[1]
    similarity = count_cos_similarity(vec_1, vec_2)
    if isnan(similarity):
        similarity = 0.0
    return similarity

if __name__ == '__main__':
    sent1 = '我喜欢看电视也喜欢看电影,'
    sent2 = '我不喜欢看电视也不喜欢看电影'
    print('<<<<tf<<<<<<<')
    tf(sent1, sent2)
    print('<<<<tfidf<<<<<<<')
    print('tfidf_cos_similarity', tfidf(sent1, sent2))
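As a cross-check, scikit-learn can also compute the cosine similarity directly via sklearn.metrics.pairwise.cosine_similarity. The snippet below is a minimal sketch, not part of the original script; tfidf_sklearn is a hypothetical helper name and it re-uses the jieba_function defined above.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def tfidf_sklearn(sent1, sent2):
    # Segment with jieba, vectorize with TF-IDF, and let scikit-learn
    # compute the cosine similarity directly on the sparse matrix rows.
    sentences = [jieba_function(sent1), jieba_function(sent2)]
    matrix = TfidfVectorizer().fit_transform(sentences)
    return cosine_similarity(matrix[0], matrix[1])[0][0]

For the two example sentences this should agree with the value returned by tfidf() above, up to floating-point rounding.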