K-means clustering of Chinese text
Principle:
"K" means the original data are partitioned into K classes, and "means" refers to the mean points. The core of K-Means is to group a set of data into K clusters. Each cluster has a center point called its mean point, and every point in a cluster is closer to its own cluster's mean point than to the mean point of any other cluster.
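In its standard form, this amounts to minimizing the within-cluster sum of squared distances to the mean points:

$$J = \sum_{i=1}^{K} \sum_{x \in C_i} \lVert x - \mu_i \rVert^2,$$

where $C_i$ is the $i$-th cluster and $\mu_i$ is its mean point.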
Implementation steps (a minimal sketch of this loop is given below):

1. Choose k initial cluster centers.

2. Repeat:

   reassign every data object to the nearest of the k cluster centers, forming k clusters;

   recompute the cluster center of each cluster.

3. Stop when the cluster centers no longer change; the clustering is then complete.
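A minimal NumPy sketch of this loop (illustrative only; the random 2-D points and the helper name kmeans_sketch are invented here and are not part of either implementation below):

import numpy as np

def kmeans_sketch(points, k, n_iter=100):
    # step 1: pick k initial centers at random from the data
    centers = points[np.random.choice(len(points), k, replace=False)]
    for _ in range(n_iter):
        # step 2a: assign every point to its nearest center
        dists = np.linalg.norm(points[:, None, :] - centers[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # step 2b: recompute each center as the mean of its cluster
        # (keep the old center if a cluster happens to be empty)
        new_centers = np.array([points[labels == j].mean(axis=0) if np.any(labels == j) else centers[j]
                                for j in range(k)])
        # step 3: stop once the centers no longer move
        if np.allclose(new_centers, centers):
            break
        centers = new_centers
    return centers, labels

centers, labels = kmeans_sketch(np.random.rand(100, 2), k=3)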

Two approaches:

① Vectorize the keywords with sklearn (CountVectorizer + TfidfTransformer) and cluster them with sklearn's KMeans:

from sklearn.feature_extraction.text import CountVectorizer, TfidfTransformer
from sklearn.cluster import KMeans
from sklearn import metrics
import numpy as np
import jieba
from DBUtils import update_keyword


def easy_get_parameter_k_means():
    data = []
    datas = []
    file = open("keyword.txt", encoding='utf-8')
    for post in file:
        data.append(post.replace('\n', ''))
    datas = data
    vec = CountVectorizer()
    X = vec.fit_transform([" ".join([b for b in jieba.cut(a)]) for a in data])
    tf = TfidfTransformer()
    X = tf.fit_transform(X.toarray())
    data = X.toarray()
    test_score = []
    n_clusters_end = 20    # number of clusters (upper bound of the scan)
    n_clusters_start = 20  # number of clusters (lower bound of the scan)
    while n_clusters_start <= n_clusters_end:
        km = KMeans(n_clusters=n_clusters_start)
        km.fit(data)
        clusters = km.labels_.tolist()
        # print(type(clusters))
        # print(clusters)
        score = metrics.silhouette_score(X=X, labels=clusters)
        # size and label of the largest cluster
        num = sorted([(np.sum([1 for a in clusters if a == i]), i) for i in set(clusters)])[-1]
        test_score.append([n_clusters_start, score, num[0], num[1]])
        # print([n_clusters_start, score, num[0], num[1]])  # print the scores
        n_clusters_start += 1
    # write the cluster label of every keyword back to the database
    for i in range(0, 20):
        result = []
        # print('len(clusters):', len(clusters))
        for index in range(len(clusters)):
            if clusters[index] == i:
                res = datas[index]
                update_keyword(res, str(i))
                print("updated keyword", res, "to cluster", i)
                result.append(res)
                # print('res', res)
        # print("cluster", i, "has", len(result), "items")
    return clusters


# easy_get_parameter_k_means()  # find the best parameters
print("arrs", easy_get_parameter_k_means())
print("arrs[length]", len(easy_get_parameter_k_means()))
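As published, both n_clusters_start and n_clusters_end are set to 20, so the while loop only ever tries k = 20 even though it is written as a scan. A short sketch of how such a scan could pick the best k by silhouette score (illustrative, not from the original post; it reads the same keyword.txt and collapses CountVectorizer + TfidfTransformer into a single TfidfVectorizer):

import jieba
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans
from sklearn import metrics

lines = [l.strip() for l in open("keyword.txt", encoding="utf-8") if l.strip()]
X = TfidfVectorizer().fit_transform([" ".join(jieba.cut(l)) for l in lines])

best_k, best_score = None, -1.0
for k in range(2, 21):  # the silhouette score needs at least 2 clusters
    labels = KMeans(n_clusters=k, n_init=10).fit_predict(X)
    score = metrics.silhouette_score(X, labels)
    if score > best_score:
        best_k, best_score = k, score
print("best k:", best_k, "silhouette:", best_score)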

 

② This approach builds the vocabulary space by reading multiple files, but it runs very slowly once there are many data files.

Extract the feature terms, compute TF-IDF weights, then use the K-means algorithm to find the cluster centers, assign each point to a cluster, and record the distance from each point to its cluster center.
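A toy numeric illustration of the TF-IDF weighting the code below computes (the 3×4 count matrix is made up; the real code builds the counts from keyword.txt):

import numpy as np

docs_matrix = np.array([[2., 0., 1., 0.],
                        [0., 1., 1., 0.],
                        [1., 1., 0., 1.]])
df = np.count_nonzero(docs_matrix, axis=0)                   # document frequency of each term
idf = np.log(docs_matrix.shape[0] / df)                      # idf = log(N / df)
tf = docs_matrix / docs_matrix.sum(axis=1, keepdims=True)    # term frequency per document
tfidf = tf @ np.diag(idf)                                    # weight = tf * idf
print(tfidf)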

import os
import math
import jieba
import numpy as np
from numpy import *
import matplotlib.pyplot as plt


def read_from_file(file_name):
    # read the raw text
    with open(file_name, "r", encoding='UTF8') as fp:
        words = fp.read()
    return words


def stop_words(stop_word_file):
    # build the stop-word set
    words = read_from_file(stop_word_file)
    result = jieba.cut(words)
    new_words = []
    for r in result:
        new_words.append(r)
    return set(new_words)


def del_stop_words(words, stop_words_set):
    # words is a document that has been segmented but whose stop words have not been removed;
    # the return value is the document with stop words removed
    result = jieba.cut(words)
    new_words = []
    for r in result:
        if r not in stop_words_set:
            new_words.append(r)
    return new_words


def get_all_vector(stop_words_set):
    # names = [os.path.join(file_path, f) for f in os.listdir(file_path)]
    docs = []
    word_set = set()
    file = open("keyword.txt", encoding='utf-8')
    for post in file:
        doc = del_stop_words(post, stop_words_set)
        docs.append(doc)
        word_set |= set(doc)
        # print(len(doc), len(word_set))
    # print("word_set:", word_set)
    # print("docs:", docs)
    word_set = list(word_set)
    docs_vsm = []
    for doc in docs:
        temp_vector = []
        for word in word_set:
            temp_vector.append(doc.count(word) * 1.0)
        docs_vsm.append(temp_vector)
    docs_matrix = np.array(docs_vsm)
    print("docs_matrix:", docs_matrix)
    # document frequency of every term, then idf = log(N / df)
    column_sum = [float(len(np.nonzero(docs_matrix[:, i])[0])) for i in range(docs_matrix.shape[1])]
    column_sum = np.array(column_sum)
    column_sum = docs_matrix.shape[0] / column_sum
    idf = np.log(column_sum)
    idf = np.diag(idf)
    # note: the computations below are matrix operations, not scalar operations;
    # normalize each row to term frequencies (in place, otherwise the division is lost)
    for doc_v in docs_matrix:
        if doc_v.sum() != 0:
            doc_v /= doc_v.sum()
    tfidf = np.dot(docs_matrix, idf)
    # return names, tfidf
    print("tfidf:", tfidf)
    f = "tezheng.txt"
    with open(f, "w", encoding='utf8') as file:  # "w" overwrites the file on every run
        for i in tfidf:
            for j in i:
                datafl = str(format(float(j), '.2f'))
                file.write(datafl + "\t")
            file.write("\n")


def loadDataSet(fileName):
    dataSet = []  # start with an empty list
    fr = open(fileName)
    for line in fr.readlines():
        # split each line on tabs and map every element to float
        curLine = line.strip().split('\t')
        fltLine = list(map(float, curLine))
        dataSet.append(fltLine)
    return mat(dataSet)


'''
def randCent(dataSet, k):
    n = shape(dataSet)[1]
    centroids = mat(zeros((k, n)))  # converting with mat() first allows the linear-algebra operations below
    for j in range(n):  # create cluster centers within the bounds of each dimension
        minJ = min(dataSet[:, j])
        rangeJ = float(max(dataSet[:, j]) - minJ)
        centroids[:, j] = mat(minJ + rangeJ * random.rand(k, 1))
    return centroids


def randCent(dataSet, k):
    m, n = dataSet.shape
    centroids = np.zeros((k, n))
    for i in range(k):
        index = int(np.random.uniform(0, m))
        # centroids[i, :] = dataSet[index, :]
    return centroids
'''


def randCent(dataSet, k):
    n = shape(dataSet)[1]
    centroids = mat(zeros((k, n)))  # create centroid mat
    for j in range(n):  # create random cluster centers, within bounds of each dimension
        minJ = min(dataSet[:, j])
        rangeJ = float(max(dataSet[:, j]) - minJ)
        centroids[:, j] = mat(minJ + rangeJ * random.rand(k, 1))
    return centroids


def distEclud(vecA, vecB):
    return math.sqrt(sum(power(vecA - vecB, 2)))


# dataSet: the sample points, k: the number of clusters
# distMeas: the distance measure, Euclidean distance by default
# createCent: how the initial centers are chosen
'''
def K_means(dataSet, k, distMeas=distEclud, createCent=randCent):
    print("sample points:", dataSet)
    m = shape(dataSet)[0]  # number of samples
    print("number of samples:", m)
    clusterAssment = mat(zeros((m, 2)))  # an m*2 matrix
    centroids = createCent(dataSet, k)  # initialize k centers
    clusterChanged = True
    while clusterChanged:  # until the clustering no longer changes
        clusterChanged = False
        for i in range(m):
            minDist = math.inf
            minIndex = -1
            for j in range(k):  # find the nearest centroid
                distJI = distMeas(centroids[j, :], dataSet[i, :])
                if distJI < minDist:
                    minDist = distJI
                    minIndex = j
            if clusterAssment[i, 0] != minIndex:
                clusterChanged = True
            # first column: assigned centroid; second column: squared distance
            clusterAssment[i, :] = minIndex, minDist ** 2
        print(centroids)
        # move the centroids
        for cent in range(k):
            ptsInClust = dataSet[nonzero(clusterAssment[:, 0].A == cent)[0]]
            centroids[cent, :] = mean(ptsInClust, axis=0)
    return centroids, clusterAssment
'''


def kMeans(dataSet, k, distMeas=distEclud, createCent=randCent):
    m = shape(dataSet)[0]  # number of samples
    clusterAssment = mat(zeros((m, 2)))  # an m*2 matrix
    centroids = createCent(dataSet, k)  # initialize k centers
    clusterChanged = True
    while clusterChanged:  # until the clustering no longer changes
        clusterChanged = False
        for i in range(m):
            minDist = inf
            minIndex = -1
            for j in range(k):  # find the nearest centroid
                distJI = distMeas(centroids[j, :], dataSet[i, :])
                if distJI < minDist:
                    minDist = distJI
                    minIndex = j
            if clusterAssment[i, 0] != minIndex:
                clusterChanged = True
            # first column: assigned centroid; second column: squared distance
            clusterAssment[i, :] = minIndex, minDist ** 2
        print(centroids)
        # move the centroids
        for cent in range(k):
            ptsInClust = dataSet[nonzero(clusterAssment[:, 0].A == cent)[0]]
            centroids[cent, :] = mean(ptsInClust, axis=0)
    return centroids, clusterAssment


if __name__ == '__main__':
    wenzhang = read_from_file('keyword.txt')
    # print(wenzhang)
    wenzhang1 = stop_words('stopword.txt')
    # print(wenzhang1)
    wenzhang2 = del_stop_words(wenzhang, wenzhang1)
    # print(wenzhang2)
    wenzhang3 = get_all_vector(wenzhang1)
    # kMeans(dataSet, k, distMeas=gen_sim, createCent=randCent)
    dataSet = loadDataSet('tezheng.txt')
    centroids, clusterAssment = kMeans(dataSet, 10, distMeas=distEclud, createCent=randCent)
    print("centroids:", centroids)
    print("clusterAssment :", clusterAssment)
    print("clusterAssmentlengh :", len(clusterAssment))


'''
# Earlier version, kept commented out: it builds the vocabulary space from a directory of
# files (the multi-file variant described in ② above), which is why it slows down badly as
# the number of data files grows.
import os
import math
import jieba
import numpy as np
from numpy import *
import matplotlib.pyplot as plt


def file_name(file_dir):
    filesname = []
    for root, dirs, files in os.walk(file_dir):
        for file in files:
            filename = 'keywordfile/' + file
            filesname.append(filename)
    print("filesname length:", len(filesname))
    return filesname


def read_from_file(file_name):
    # read the raw text
    with open(file_name, "r", encoding='UTF8') as fp:
        words = fp.read()
    return words


def stop_words(stop_word_file):
    words = read_from_file(stop_word_file)
    result = jieba.cut(words)
    new_words = []
    for r in result:
        new_words.append(r)
    return set(new_words)


def del_stop_words(words, stop_words_set):
    # words is a document that has been segmented but whose stop words have not been removed;
    # the return value is the document with stop words removed
    result = jieba.cut(words)
    new_words = []
    for r in result:
        if r not in stop_words_set:
            new_words.append(r)
    return new_words


def get_all_vector(file_path, stop_words_set):
    # names = [os.path.join(file_path, f) for f in os.listdir(file_path)]
    names = file_name('keyfile')
    posts = [open(name, encoding='utf-8').read() for name in names]
    docs = []
    word_set = set()
    for post in posts:
        print('post', post)
        doc = del_stop_words(post, stop_words_set)
        docs.append(doc)
        word_set |= set(doc)
        # print(len(doc), len(word_set))
    # print("word_set:", word_set)
    # print("docs:", docs)
    word_set = list(word_set)
    docs_vsm = []
    for doc in docs:
        temp_vector = []
        for word in word_set:
            temp_vector.append(doc.count(word) * 1.0)
        docs_vsm.append(temp_vector)
    docs_matrix = np.array(docs_vsm)
    print("docs_matrix:", docs_matrix)
    # document frequency of every term, then idf = log(N / df)
    column_sum = [float(len(np.nonzero(docs_matrix[:, i])[0])) for i in range(docs_matrix.shape[1])]
    column_sum = np.array(column_sum)
    column_sum = docs_matrix.shape[0] / column_sum
    idf = np.log(column_sum)
    idf = np.diag(idf)
    # note: the computations below are matrix operations, not scalar operations
    for doc_v in docs_matrix:
        if doc_v.sum() != 0:
            doc_v /= doc_v.sum()
    tfidf = np.dot(docs_matrix, idf)
    # return names, tfidf
    print("tfidf:", tfidf)
    f = "tezheng.txt"
    with open(f, "w", encoding='utf8') as file:  # "w" overwrites the file on every run
        for i in tfidf:
            for j in i:
                datafl = str(format(float(j), '.2f'))
                file.write(datafl + "\t")
            file.write("\n")


def loadDataSet(fileName):
    dataSet = []  # start with an empty list
    fr = open(fileName)
    for line in fr.readlines():
        # split each line on tabs and map every element to float
        curLine = line.strip().split('\t')
        fltLine = list(map(float, curLine))
        dataSet.append(fltLine)
    return mat(dataSet)


def randCent(dataSet, k):
    n = shape(dataSet)[1]
    centroids = mat(zeros((k, n)))  # converting with mat() first allows the linear-algebra operations below
    for j in range(n):  # create cluster centers within the bounds of each dimension
        minJ = min(dataSet[:, j])
        rangeJ = float(max(dataSet[:, j]) - minJ)
        centroids[:, j] = mat(minJ + rangeJ * random.rand(k, 1))
    return centroids


def randCent(dataSet, k):
    m, n = dataSet.shape
    centroids = np.zeros((k, n))
    for i in range(k):
        index = int(np.random.uniform(0, m))
        # centroids[i, :] = dataSet[index, :]
    return centroids


def randCent(dataSet, k):
    n = shape(dataSet)[1]
    centroids = mat(zeros((k, n)))  # create centroid mat
    for j in range(n):  # create random cluster centers, within bounds of each dimension
        minJ = min(dataSet[:, j])
        rangeJ = float(max(dataSet[:, j]) - minJ)
        centroids[:, j] = mat(minJ + rangeJ * random.rand(k, 1))
    return centroids


def distEclud(vecA, vecB):
    return math.sqrt(sum(power(vecA - vecB, 2)))


# dataSet: the sample points, k: the number of clusters
# distMeas: the distance measure, Euclidean distance by default
# createCent: how the initial centers are chosen
def K_means(dataSet, k, distMeas=distEclud, createCent=randCent):
    print("sample points:", dataSet)
    m = shape(dataSet)[0]  # number of samples
    print("number of samples:", m)
    clusterAssment = mat(zeros((m, 2)))  # an m*2 matrix
    centroids = createCent(dataSet, k)  # initialize k centers
    clusterChanged = True
    while clusterChanged:  # until the clustering no longer changes
        clusterChanged = False
        for i in range(m):
            minDist = math.inf
            minIndex = -1
            for j in range(k):  # find the nearest centroid
                distJI = distMeas(centroids[j, :], dataSet[i, :])
                if distJI < minDist:
                    minDist = distJI
                    minIndex = j
            if clusterAssment[i, 0] != minIndex:
                clusterChanged = True
            # first column: assigned centroid; second column: squared distance
            clusterAssment[i, :] = minIndex, minDist ** 2
        print(centroids)
        # move the centroids
        for cent in range(k):
            ptsInClust = dataSet[nonzero(clusterAssment[:, 0].A == cent)[0]]
            centroids[cent, :] = mean(ptsInClust, axis=0)
    return centroids, clusterAssment


def K_Means(dataSet, k, distMeas=distEclud, createCent=randCent):
    m = shape(dataSet)[0]  # number of samples
    clusterAssment = mat(zeros((m, 2)))  # an m*2 matrix
    centroids = createCent(dataSet, k)  # initialize k centers
    clusterChanged = True
    while clusterChanged:  # until the clustering no longer changes
        clusterChanged = False
        for i in range(m):
            minDist = inf
            minIndex = -1
            for j in range(k):  # find the nearest centroid
                distJI = distMeas(centroids[j, :], dataSet[i, :])
                if distJI < minDist:
                    minDist = distJI
                    minIndex = j
            if clusterAssment[i, 0] != minIndex:
                clusterChanged = True
            # first column: assigned centroid; second column: squared distance
            clusterAssment[i, :] = minIndex, minDist ** 2
        print(centroids)
        # move the centroids
        for cent in range(k):
            ptsInClust = dataSet[nonzero(clusterAssment[:, 0].A == cent)[0]]
            centroids[cent, :] = mean(ptsInClust, axis=0)
    return centroids, clusterAssment


if __name__ == '__main__':
    wenzhang = read_from_file('input.txt')
    # print(wenzhang)
    wenzhang1 = stop_words('stopword.txt')
    # print(wenzhang1)
    wenzhang2 = del_stop_words(wenzhang, wenzhang1)
    # print(wenzhang2)
    wenzhang3 = get_all_vector('D:/Pycharm/项目存储/input/', wenzhang1)
    # kMeans(dataSet, k, distMeas=gen_sim, createCent=randCent)
    dataSet = loadDataSet('tezheng.txt')
    centroids, clusterAssment = K_Means(dataSet, 3, distMeas=distEclud, createCent=randCent)
    print("centroids:", centroids)
    print("clusterAssment :", clusterAssment)
    print("clusterAssmentlengh :", len(clusterAssment))
'''

  
