The video tutorial is from YouTube.

The written tutorial is from Jianshu.

NLTK: the Natural Language Toolkit

Goal: split paragraphs into sentences and sentences into words, identify the part of speech of those words, and highlight topics, so that a machine can understand what a text is about. These notes work toward opinion mining, also known as sentiment analysis.

I. Tokenizing Words and Sentences

Some common, simple terms that are worth picking up quickly:

  1. Corpus: a body of text; think of it as an electronic text collection. "Corpora" is the plural.
  2. Lexicon: words and their meanings.
  3. Token: each entity split off according to some rule. When a sentence is split into words, each word is a token; when a paragraph is split into sentences, each sentence is a token.
from nltk.tokenize import sent_tokenize, word_tokenize

EXAMPLE_TEXT = "Hello Mr. Smith, how are you doing today? The weather is great, and Python is awesome. The sky is pinkish-blue. You shouldn't eat cardboard."

print(sent_tokenize(EXAMPLE_TEXT))
# ['Hello Mr. Smith, how are you doing today?', 'The weather is great, and Python is awesome.', 'The sky is pinkish-blue.', "You shouldn't eat cardboard."]

Note that sent_tokenize splits text into sentences, while word_tokenize splits it into words.

print(word_tokenize(EXAMPLE_TEXT))

# ['Hello', 'Mr.', 'Smith', ',', 'how', 'are', 'you', 'doing', 'today', '?', 'The', 'weather', 'is', 'great', ',', 'and', 'Python', 'is', 'awesome', '.', 'The', 'sky', 'is', 'pinkish-blue', '.', 'You', 'should', "n't", 'eat', 'cardboard', '.']

Note that punctuation marks are treated as tokens too, and that "n't" comes from splitting "shouldn't".

II. NLTK and Stop Words

To convert words into numbers or signal patterns, we need preprocessing, and one crucial preprocessing step is filtering out useless data (as I understand it, this mainly reduces dimensionality).

Stop words can be thought of as words that carry no meaning of their own, so we delete them.

from nltk.corpus import stopwords

set(stopwords.words('english'))

from nltk.corpus import stopwords
from nltk.tokenize import word_tokenize

example_sent = "This is a sample sentence, showing off the stop words filtration."

stop_words = set(stopwords.words('english'))
word_tokens = word_tokenize(example_sent)

filtered_sentence = [w for w in word_tokens if not w in stop_words]
filtered_sentence

# The list comprehension above is equivalent to this loop:
filtered_sentence = []
for w in word_tokens:
    if w not in stop_words:
        filtered_sentence.append(w)

print(word_tokens)
print(filtered_sentence)
# ['This', 'is', 'a', 'sample', 'sentence', ',', 'showing', 'off', 'the', 'stop', 'words', 'filtration', '.']
# ['This', 'sample', 'sentence', ',', 'showing', 'stop', 'words', 'filtration', '.']

III. Stemming with NLTK

Stemming is a normalization method: aside from things like tense, many variant forms of a word share the same meaning.

We stem in order to shorten lookups and to normalize sentences.

Three commonly used stemmers are Porter, Snowball, and Lancaster (a comparison sketch of all three follows the Porter example below).

from nltk.stem import PorterStemmer
from nltk.tokenize import sent_tokenize, word_tokenize

ps = PorterStemmer()

example_words = ["python", "pythoner", "pythoning", "pythoned", "pythonly"]
for w in example_words:
    print(ps.stem(w))

new_text = "It is important to by very pythonly while you are pythoning with python. All pythoners have pythoned poorly at least once."
words = word_tokenize(new_text)
for w in words:
    print(ps.stem(w))
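
Only the Porter stemmer is shown above. Here is a minimal comparison sketch for the other two stemmers, reusing the same example_words list (my own illustration, not from the tutorial); Lancaster is generally the most aggressive of the three:

from nltk.stem import PorterStemmer, SnowballStemmer, LancasterStemmer

stemmers = {
    "porter": PorterStemmer(),
    "snowball": SnowballStemmer("english"),   # Snowball requires a language argument
    "lancaster": LancasterStemmer(),
}
example_words = ["python", "pythoner", "pythoning", "pythoned", "pythonly"]
for name, stemmer in stemmers.items():
    print(name, [stemmer.stem(w) for w in example_words])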

IV. Part-of-Speech Tagging with NLTK

POS tagging marks each word in a sentence as a noun, adjective, verb, and so on.

import nltk
from nltk.corpus import state_union
from nltk.tokenize import PunktSentenceTokenizer
train_text = state_union.raw("2005-GWBush.txt")
sample_text = state_union.raw("2006-GWBush.txt")
# print(train_text)

I printed train_text to get a sense of state_union; as I understand it, it is just a dataset (a corpus of State of the Union addresses).

Next, train the Punkt tokenizer. If you ask me what the Punkt tokenizer is, all I can say is that it is an unsupervised sentence tokenizer.

custom_sent_tokenizer = PunktSentenceTokenizer(train_text)
tokenized = custom_sent_tokenizer.tokenize(sample_text)

Above, train_text is the training data and sample_text is the data we actually want to split into sentences.

First, look at the first five entries of tokenized:

for i in tokenized[:5]:
    print(i)
PRESIDENT GEORGE W. BUSH'S ADDRESS BEFORE A JOINT SESSION OF THE CONGRESS ON THE STATE OF THE UNION

January 31, 2006

THE PRESIDENT: Thank you all.
Mr. Speaker, Vice President Cheney, members of Congress, members of the Supreme Court and diplomatic corps, distinguished guests, and fellow citizens: Today our nation lost a beloved, graceful, courageous woman who called America to its founding ideals and carried on a noble dream.
Tonight we are comforted by the hope of a glad reunion with the husband who was taken so long ago, and we are grateful for the good life of Coretta Scott King.
(Applause.)
President George W. Bush reacts to applause during his State of the Union Address at the Capitol, Tuesday, Jan.

The content of sample_text:

'PRESIDENT GEORGE W. BUSH\'S ADDRESS BEFORE A JOINT SESSION OF THE CONGRESS ON THE STATE OF THE UNION\n \nJanuary 31, 2006\n\nTHE PRESIDENT: Thank you all. Mr. Speaker, Vice President Cheney, members of Congress, members of the Supreme Court and diplomatic corps, distinguished guests,...

The content of tokenized:

["PRESIDENT GEORGE W. BUSH'S ADDRESS BEFORE A JOINT SESSION OF THE CONGRESS ON THE STATE OF THE UNION\n \nJanuary 31, 2006\n\nTHE PRESIDENT: Thank you all.",
'Mr. Speaker, Vice President Cheney, members of Congress, members of the Supreme Court and diplomatic corps, distinguished guests,...

Now a quick look at the basic usage of pos_tag:

result = nltk.pos_tag(['I', 'have', 'a', 'pen'])
result
# [('I', 'PRP'), ('have', 'VBP'), ('a', 'DT'), ('pen', 'NN')]
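
If a tag abbreviation such as PRP or VBP is unfamiliar, NLTK can describe the Penn Treebank tagset. A small sketch, assuming the tagsets resource has been downloaded via nltk.download('tagsets'):

import nltk

nltk.help.upenn_tagset('PRP')    # prints the meaning of the PRP tag with examples
nltk.help.upenn_tagset('NN.*')   # a regular expression over tag names also works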

Now let's put the pieces together:

def process_content():
    try:
        for i in tokenized[:5]:
            words = nltk.word_tokenize(i)
            tagged = nltk.pos_tag(words)
            print(tagged)
    except Exception as e:
        print(str(e))

process_content()

# For space reasons, only the first tagged sentence is shown:
[('PRESIDENT', 'NNP'), ('GEORGE', 'NNP'), ('W.', 'NNP'), ('BUSH', 'NNP'), ("'S", 'POS'), ('ADDRESS', 'NNP'), ('BEFORE', 'IN'), ('A', 'NNP'), ('JOINT', 'NNP'), ('SESSION', 'NNP'), ('OF', 'IN'), ('THE', 'NNP'), ('CONGRESS', 'NNP'), ('ON', 'NNP'), ('THE', 'NNP'), ('STATE', 'NNP'), ('OF', 'IN'), ('THE', 'NNP'), ('UNION', 'NNP'), ('January', 'NNP'), ('31', 'CD'), (',', ','), ('2006', 'CD'), ('THE', 'NNP'), ('PRESIDENT', 'NNP'), (':', ':'), ('Thank', 'NNP'), ('you', 'PRP'), ('all', 'DT'), ('.', '.')]

Let's pause here for a moment and digest the material above.

PunktSentenceTokenizer is an unsupervised sentence tokenizer. In the example above we first trained it on train_text and then tokenized sample_text; we could also just use the default and tokenize directly, like this:

In [1]: import nltk
In [2]: tokenizer = nltk.tokenize.punkt.PunktSentenceTokenizer()
In [3]: txt =""" This is one sentence. This is another sentence."""
In [4]: tokenizer.tokenize(txt)
Out[4]: [' This is one sentence.', 'This is another sentence.']

V. Chunking

import nltk

sent = [("the", "DT"), ("little", "JJ"), ("yellow", "JJ"), ("dog", "NN"), ("barked", "VBD"), ("at", "IN"), ("the", "DT"), ("cat", "NN")]

grammar = 'NP:{<DT>*<JJ>*<NN>+}'
cp = nltk.RegexpParser(grammar)
tree = cp.parse(sent)
print(tree)
tree.draw()

In NLTK, RegexpParser does the chunking: you pass it a regular-expression grammar. How does this grammar chunk the sentence?

First <DT>*, then <JJ>*, finally <NN>+: the chunk is named NP, and an NP consists of zero or more DT tags, followed by zero or more JJ tags, followed by at least one NN tag. A little regular-expression knowledge helps here.
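
Concretely, the print(tree) call in the snippet above shows the bracketed chunks without needing a GUI. For this sentence the output should look roughly like the following (my own expected output, not copied from the tutorial): two NP chunks, one for "the little yellow dog" and one for "the cat".

(S
  (NP the/DT little/JJ yellow/JJ dog/NN)
  barked/VBD
  at/IN
  (NP the/DT cat/NN))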

Now look at the chunking example from the tutorial:

import nltk
from nltk.corpus import state_union
from nltk.tokenize import PunktSentenceTokenizer

train_text = state_union.raw("2005-GWBush.txt")
sample_text = state_union.raw("2006-GWBush.txt")

custom_sent_tokenizer = PunktSentenceTokenizer(train_text)
tokenized = custom_sent_tokenizer.tokenize(sample_text)

def process_content():
    try:
        for i in tokenized:
            words = nltk.word_tokenize(i)
            tagged = nltk.pos_tag(words)
            chunkGram = r"""Chunk: {<RB.?>*<VB.?>*<NNP>+<NN>?}"""
            chunkParser = nltk.RegexpParser(chunkGram)
            chunked = chunkParser.parse(tagged)
            chunked.draw()
    except Exception as e:
        print(str(e))

process_content()

In the author's chunk rule, there must be at least one NNP tag.

You can also inspect the subtrees of the chunked result, as sketched below.
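
A minimal sketch of walking the chunks programmatically instead of drawing them; this would go inside process_content where chunked is defined (the label is 'Chunk' because that is the name used in chunkGram):

for subtree in chunked.subtrees(filter=lambda t: t.label() == 'Chunk'):
    print(subtree)    # print each chunk the grammar found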

VI. Chinking

The difference between chinking and chunking is that a chink uses }{ to remove material from inside a chunk. For example, change the original rule

chunkGram = r"""Chunk: {<RB.?>*<VB.?>*<NNP>+<NN>?}"""

to

chunkGram = r"""Chunk: {<.*>+}
}<VB.?|IN|DT|TO>+{"""

which means: exclude anything matching <VB.?|IN|DT|TO>+ from the chunks.
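
A small self-contained sketch of the chink rule, reusing the tagged toy sentence from the chunking section above (reusing that sentence is my own choice, not part of the original chinking example):

import nltk

sent = [("the", "DT"), ("little", "JJ"), ("yellow", "JJ"), ("dog", "NN"),
        ("barked", "VBD"), ("at", "IN"), ("the", "DT"), ("cat", "NN")]

chunkGram = r"""Chunk: {<.*>+}
                        }<VB.?|IN|DT|TO>+{"""
cp = nltk.RegexpParser(chunkGram)
print(cp.parse(sent))    # everything is chunked first, then VB*/IN/DT/TO sequences are chinked back out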

VII. Named Entity Recognition

Chunking was covered above; named entity recognition is a built-in kind of chunking in NLTK that groups tokens into entities such as people, organizations, and locations.

import nltk
from nltk.corpus import state_union
from nltk.tokenize import PunktSentenceTokenizer

train_text = state_union.raw("2005-GWBush.txt")
sample_text = state_union.raw("2006-GWBush.txt")

custom_sent_tokenizer = PunktSentenceTokenizer(train_text)
tokenized = custom_sent_tokenizer.tokenize(sample_text)

def process_content():
    try:
        for i in tokenized[5:]:
            words = nltk.word_tokenize(i)
            tagged = nltk.pos_tag(words)
            namedEnt = nltk.ne_chunk(tagged, binary=True)
            namedEnt.draw()
    except Exception as e:
        print(str(e))

process_content()

The two figures above (screenshots in the original post, not reproduced here) show how different settings of the binary attribute produce different results.
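
As a rough GUI-free sketch of that difference (it assumes the maxent_ne_chunker and words resources have been downloaded, and uses a sentence of my own), printing the two variants makes it visible:

import nltk

sentence = "George Bush spoke to Congress in Washington on Tuesday."
tagged = nltk.pos_tag(nltk.word_tokenize(sentence))

print(nltk.ne_chunk(tagged, binary=True))    # all entities collapsed into a single NE label
print(nltk.ne_chunk(tagged, binary=False))   # entities labeled by type, e.g. PERSON / ORGANIZATION / GPE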

VIII. Lemmatization

Stemming uses stem(); lemmatization uses lemmatize().

from nltk.stem import WordNetLemmatizer

lemmatizer = WordNetLemmatizer()

print(lemmatizer.lemmatize("cats"))
print(lemmatizer.lemmatize("cacti"))
print(lemmatizer.lemmatize("geese"))
print(lemmatizer.lemmatize("rocks"))
print(lemmatizer.lemmatize("python"))
print(lemmatizer.lemmatize("better", pos="a"))
print(lemmatizer.lemmatize("best", pos="a"))
print(lemmatizer.lemmatize("run"))
print(lemmatizer.lemmatize("run",'v'))
cat
cactus
goose
rock
python
good
best
run
run

Note that the part of speech can be set with the pos parameter; if it is not provided, the default is noun.

IX. NLTK Corpora

WordNet data commonly involves three folders: omw, wordnet, and wordnet_ic.

The files in omw are wordnets for other languages, built by researchers in various countries and regions on top of the English WordNet; cmn is Simplified Chinese.

wordnet is the WordNet proper (the English database).
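
A small sketch of reading Chinese lemmas through the Open Multilingual Wordnet; this assumes the wordnet and omw resources have been downloaded with nltk.download(), and that 'cmn' is the OMW code for Mandarin Chinese:

from nltk.corpus import wordnet as wn

dog = wn.synset('dog.n.01')
print(dog.lemma_names('cmn'))    # lemmas of this synset in the Chinese wordnet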

X. WordNet

  1. Look up synsets (sets of synonyms)
from nltk.corpus import wordnet as wn
syns = wn.synsets("program")
syns
[Synset('plan.n.01'),
Synset('program.n.02'),
Synset('broadcast.n.02'),
Synset('platform.n.02'),
Synset('program.n.05'),
Synset('course_of_study.n.01'),
Synset('program.n.07'),
Synset('program.n.08'),
Synset('program.v.01'),
Synset('program.v.02')]

Each entry has the form word.part-of-speech.sense-number.

That is a bit hard to digest, so let's look at the definitions.

  2. Look up a definition
print(wn.synset('program.n.03').definition())
print(wn.synset('broadcast.n.02').definition())
'a radio or television show'
'a radio or television show'
  3. Look up usage examples
wn.synset('program.n.03').examples()
['did you see his program last night?']
  4. Get a specific entry (the name of a synset)
syns[0].name()
'plan.n.01'
  5. Look up the set of synonyms (lemma names)
syns[0].lemma_names()
['plan', 'program', 'programme']
  6. Antonyms

from nltk.corpus import wordnet

synonyms = []
antonyms = []

for syn in wordnet.synsets("good"):
    # syn.lemmas() returns the lemma entries of the synset
    for l in syn.lemmas():
        synonyms.append(l.name())
        if l.antonyms():
            antonyms.append(l.antonyms()[0].name())

print(set(synonyms))
print(set(antonyms))
  7. Similarity
w1 = wordnet.synset('ship.n.01')
w2 = wordnet.synset('boat.n.01')
print(w1.wup_similarity(w2))
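
For contrast, a quick sketch with a less related pair under the same Wu-Palmer measure (a pair chosen by me for illustration); the score should come out lower than the ship/boat score above:

w3 = wordnet.synset('cat.n.01')
print(w1.wup_similarity(w3))    # ship and cat share only distant hypernyms, so the similarity is lower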

XI. Text Classification

We use the movie_reviews data from NLTK's corpora.

The labels are positive and negative, and each folder holds the corresponding txt files.

import nltk
import random
from nltk.corpus import movie_reviews

documents = [(list(movie_reviews.words(fileid)), category)
             for category in movie_reviews.categories()
             for fileid in movie_reviews.fileids(category)]

random.shuffle(documents)
print(documents[1])

all_words = []
for w in movie_reviews.words():
    all_words.append(w.lower())

all_words = nltk.FreqDist(all_words)
print(all_words.most_common(15))
print(all_words["stupid"])

documents is a list, but every item is a tuple of (word list, category).

The shuffle is not actually put to use at this point.

If words() is called with no argument, it returns the words of every txt file in the corpus.

fileids(category) gives a list of the txt files under neg or pos, which is how we arrive at:

documents = [(list(movie_reviews.words(fileid)), category)
             for category in movie_reviews.categories()
             for fileid in movie_reviews.fileids(category)]

FreqDist feels a bit like numpy's argsort; more precisely, it is a frequency counter, and most_common(n) returns the n most frequent items in descending order of count, as sketched below.
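
A tiny sketch of FreqDist on a toy input (my own example) to show what it stores and what most_common() returns:

import nltk

fd = nltk.FreqDist("abracadabra")    # counts each character
print(fd.most_common(3))             # [('a', 5), ('b', 2), ('r', 2)]
print(fd['a'])                       # 5 -- dictionary-style lookup of a single count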

Converting words to features

import nltk
import random
from nltk.corpus import movie_reviews

documents = [(list(movie_reviews.words(fileid)), category)
             for category in movie_reviews.categories()
             for fileid in movie_reviews.fileids(category)]

random.shuffle(documents)

all_words = []
for w in movie_reviews.words():
    all_words.append(w.lower())

all_words = nltk.FreqDist(all_words)
word_features = list(all_words.keys())[:3000]

print(find_features(movie_reviews.words('neg/cv000_29416.txt')))

featuresets = [(find_features(rev), category) for (rev, category) in documents]
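
The snippet above calls find_features(), but this excerpt never shows its definition. Based on the description below (mark the presence of each of the top 3,000 words as True or False), a minimal reconstruction might look like this; it is a sketch, not necessarily the tutorial's exact code:

def find_features(document):
    words = set(document)              # the unique words of one review
    features = {}
    for w in word_features:            # word_features: the 3,000 candidate words
        features[w] = (w in words)     # True if the candidate word appears in this review
    return features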

Naive Bayes classifier

I had planned to follow the tutorial to the end and count copy-pasting as learning, but I really could not make sense of some things, and I came to believe the tutorial is wrong, so I am sharing what I found.

In the example above, we are supposed to find the top 3,000 words across our positive and negative documents and mark their presence as True or False. The code builds a FreqDist and then calls keys(), which should supposedly return the keys in the order FreqDist has already sorted them:

all_words = nltk.FreqDist(all_words)

word_features = list(all_words.keys())[:3000]

However, when I inspect word_features in my environment (Python 3.8.8), it is not what I expected:

all_words
# FreqDist({',': 77717, 'the': 76529, '.': 65876, 'a': 38106, 'and': 35576, 'of': 34123, 'to': 31937, "'": 30585, 'is': 25195, 'in': 21822, ...})

word_features
# ['plot', ':', 'two', 'teen', 'couples', 'go', 'to', 'a', 'church', 'party', ',', ...]

That is not the 3,000 most frequent words; the words are simply taken in the order they appear, starting from the neg reviews.

After thinking it over, I looked into how dict keys() behaves; some sources say it is a Python version issue.

So I installed Python 3.4, where the available NLTK version is 3.2.2 and cannot be upgraded.

Meanwhile, in my Python 3.8.8 environment NLTK is 3.6.5, and downgrading it did not work.

In the Python 3.4.5 / NLTK 3.2.2 environment I ran the following code:

import nltk
import random
from nltk.corpus import movie_reviews

documents = [(list(movie_reviews.words(fileid)), category)
             for category in movie_reviews.categories()
             for fileid in movie_reviews.fileids(category)]

random.shuffle(documents)

all_words = []
for w in movie_reviews.words():
    all_words.append(w.lower())

all_words = nltk.FreqDist(all_words)
word_features = list(all_words.keys())[:3000]

all_words
# FreqDist({'meekness': 2, 'breached': 1, 'emerging': 9, 'raft': 5, 'crops': 3, 'embrace': 17, 'embarrassment': 19, 'reacquaint': 1, 'hildyard': 1, 'thumper': 1, 'easton': 1, ...})

word_features
# ['meekness', 'restricts', 'breached', 'emerging', 'pammy', 'raft', 'crops', 'fiel', 'cretins', ...]

Hmm, very different from what I imagined.

Given this, I no longer know how current NLTK is supposed to produce the top 3,000 words this way, so I switched to using sklearn alongside NLTK instead.
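
For what it is worth, here is a workaround that does not depend on the ordering of keys(): FreqDist.most_common(), already used earlier in this post, returns items sorted by frequency, so the top 3,000 words can be taken explicitly (my own suggestion, not from the tutorial):

word_features = [w for w, count in all_words.most_common(3000)]    # the 3,000 most frequent words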
