Sentiment Analysis with an LSTM in Keras
The training data (train.txt) consists of labeled sentences, one per line, like these:
1 I either LOVE Brokeback Mountain or think it’s great that homosexuality is becoming more acceptable!:
1 Anyway, thats why I love ” Brokeback Mountain.
1 Brokeback mountain was beautiful…
0 da vinci code was a terrible movie.
0 Then again, the Da Vinci code is super shitty movie, and it made like 700 million.
0 The Da Vinci Code comes out tomorrow, which sucks.
Each sentence carries a label, 1 or 0, indicating positive or negative sentiment.

First, import all the packages we will need:
"language-python hljs">from keras.layers.core import Activation, Dense
from keras.layers.embeddings import Embedding
from keras.layers.recurrent import LSTM
from keras.models import Sequential
from keras.preprocessing import sequence
from sklearn.model_selection import train_test_split
import nltk #用来分词
import collections #用来统计词频
import numpy as np
Before building anything, let's explore the data. In particular, we need to know how many distinct words the data contains and how many words each sentence has.
"language-pyhon hljs livecodeserver">maxlen = 0 #句子最大长度
word_freqs = collections.Counter() #词频
num_recs = 0 # 样本数
with open('./train.txt','r+') as f:
for line in f:
label, sentence = line.strip().split("\t")
words = nltk.word_tokenize(sentence.lower())
if len(words) > maxlen:
maxlen = len(words)
for word in words:
word_freqs[word] += 1
num_recs += 1
print('max_len ',maxlen)
print('nb_words ', len(word_freqs))
max_len 42
nb_words 2324
So there are 2,324 distinct words in total, punctuation included, and the longest sentence has 42 words.
Given the number of distinct words (nb_words), we can fix the vocabulary at a constant size and replace every word outside it with a pseudo-word, UNK. Given the maximum sentence length (max_len), we can unify sentence lengths, padding shorter sentences with 0.
Accordingly, we set the vocabulary size (vocab_size) to 2002: the 2,000 most frequent words in the training data, plus the pseudo-word UNK and the padding word PAD. The maximum sentence length MAX_SENTENCE_LENGTH is set to 40.
MAX_FEATURES = 2000
MAX_SENTENCE_LENGTH = 40
Next, build two lookup tables, word2index and index2word, to convert between words and indices.
"language-python hljs">vocab_size = min(MAX_FEATURES, len(word_freqs)) + 2
word2index = {x[0]: i+2 for i, x in enumerate(word_freqs.most_common(MAX_FEATURES))}
word2index["PAD"] = 0
word2index["UNK"] = 1
index2word = {v:k for k, v in word2index.items()}
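As a quick sanity check of the two tables (a toy snippet of my own, not from the original post; any index other than PAD and UNK depends on the word frequencies in your train.txt):
assert word2index["PAD"] == 0 and word2index["UNK"] == 1
assert index2word[1] == "UNK"
idx = word2index.get("movie", word2index["UNK"])  # dict.get falls back to UNK for out-of-vocabulary words
print(idx, index2word[idx])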
Now convert each sentence into an index sequence using the lookup table, and unify lengths to MAX_SENTENCE_LENGTH: shorter sentences are padded with 0, longer ones are truncated.
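Before running the full loop, here is a toy illustration of pad_sequences' default behavior, pre-padding and pre-truncation (the numbers are made up):
demo = [[5, 9, 3], [7, 2, 4, 8, 1, 6]]
print(sequence.pad_sequences(demo, maxlen=4))
# [[0 5 9 3]   <- the short sequence is left-padded with 0
#  [4 8 1 6]]  <- the long sequence is truncated from the front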
"language-python hljs">X = np.empty(num_recs,dtype=list)
y = np.zeros(num_recs)
i=0
with open('./train.txt','r+') as f:
for line in f:
label, sentence = line.strip().split("\t")
words = nltk.word_tokenize(sentence.lower())
seqs = []
for word in words:
if word in word2index:
seqs.append(word2index[word])
else:
seqs.append(word2index["UNK"])
X[i] = seqs
y[i] = int(label)
i += 1
X = sequence.pad_sequences(X, maxlen=MAX_SENTENCE_LENGTH)
Finally, split the data: 80% for training and 20% for testing.
"language-python hljs">Xtrain, Xtest, ytrain, ytest = train_test_split(X, y, test_size=0.2, random_state=42)
With the data ready, we can move on to the model. The loss function is binary_crossentropy and the optimizer is adam. As for the hyperparameters EMBEDDING_SIZE and HIDDEN_LAYER_SIZE, and the BATCH_SIZE and NUM_EPOCHS used in training, these are tuned empirically over several runs.
EMBEDDING_SIZE = 128
HIDDEN_LAYER_SIZE = 64
model = Sequential()
model.add(Embedding(vocab_size, EMBEDDING_SIZE, input_length=MAX_SENTENCE_LENGTH))
model.add(LSTM(HIDDEN_LAYER_SIZE, dropout=0.2, recurrent_dropout=0.2))
model.add(Dense(1))
model.add(Activation("sigmoid"))
model.compile(loss="binary_crossentropy", optimizer="adam", metrics=["accuracy"])
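As an aside, here is a minimal sketch of how that empirical tuning could be automated; the build_model helper and the candidate values are assumptions of mine, not from the original post:
def build_model(embedding_size, hidden_size):
    # Rebuild the same architecture for one hyperparameter combination.
    m = Sequential()
    m.add(Embedding(vocab_size, embedding_size, input_length=MAX_SENTENCE_LENGTH))
    m.add(LSTM(hidden_size, dropout=0.2, recurrent_dropout=0.2))
    m.add(Dense(1))
    m.add(Activation("sigmoid"))
    m.compile(loss="binary_crossentropy", optimizer="adam", metrics=["accuracy"])
    return m

best_acc, best_cfg = 0.0, None
for emb in (64, 128):
    for hid in (32, 64):
        hist = build_model(emb, hid).fit(Xtrain, ytrain, batch_size=32, epochs=2,
                                         validation_data=(Xtest, ytest), verbose=0)
        val_acc = hist.history["val_acc"][-1]  # this Keras version logs accuracy as "acc"/"val_acc"
        if val_acc > best_acc:
            best_acc, best_cfg = val_acc, (emb, hid)
print(best_cfg, best_acc)
Strictly speaking, tuning should use a separate validation split rather than the test set; the sketch reuses Xtest only to stay consistent with the rest of the post.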
With the network built, we feed it the data: 10 epochs with a batch size of 32, using the test set as the validation set at each epoch.
BATCH_SIZE = 32
NUM_EPOCHS = 10
model.fit(Xtrain, ytrain, batch_size=BATCH_SIZE, epochs=NUM_EPOCHS,validation_data=(Xtest, ytest))
Train on 5668 samples, validate on 1418 samples
Epoch 1/10
5668/5668 [==============================] - 12s - loss: 0.2464 - acc: 0.8897 - val_loss: 0.0672 - val_acc: 0.9697
Epoch 2/10
5668/5668 [==============================] - 11s - loss: 0.0290 - acc: 0.9896 - val_loss: 0.0407 - val_acc: 0.9838
Epoch 3/10
5668/5668 [==============================] - 11s - loss: 0.0078 - acc: 0.9975 - val_loss: 0.0506 - val_acc: 0.9866
Epoch 4/10
5668/5668 [==============================] - 11s - loss: 0.0084 - acc: 0.9970 - val_loss: 0.0772 - val_acc: 0.9732
Epoch 5/10
5668/5668 [==============================] - 11s - loss: 0.0046 - acc: 0.9989 - val_loss: 0.0415 - val_acc: 0.9880
Epoch 6/10
5668/5668 [==============================] - 11s - loss: 0.0012 - acc: 0.9998 - val_loss: 0.0401 - val_acc: 0.9901
Epoch 7/10
5668/5668 [==============================] - 11s - loss: 0.0020 - acc: 0.9996 - val_loss: 0.0406 - val_acc: 0.9894
Epoch 8/10
5668/5668 [==============================] - 11s - loss: 7.7990e-04 - acc: 0.9998 - val_loss: 0.0444 - val_acc: 0.9887
Epoch 9/10
5668/5668 [==============================] - 11s - loss: 5.3168e-04 - acc: 0.9998 - val_loss: 0.0550 - val_acc: 0.9908
Epoch 10/10
5668/5668 [==============================] - 11s - loss: 7.8728e-04 - acc: 0.9996 - val_loss: 0.0523 - val_acc: 0.9901
After 10 epochs, accuracy on the validation set has reached 99%.
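The log also shows val_loss bottoming out within the first few epochs while the training loss keeps shrinking, a hint of overfitting. A sketch using Keras's EarlyStopping callback, which halts training once val_loss stops improving (the patience value here is my own choice):
from keras.callbacks import EarlyStopping

early = EarlyStopping(monitor="val_loss", patience=2)  # stop after 2 epochs without improvement
model.fit(Xtrain, ytrain, batch_size=BATCH_SIZE, epochs=NUM_EPOCHS,
          validation_data=(Xtest, ytest), callbacks=[early])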
Next, use the trained LSTM to predict on the held-out test set and inspect the results. We pick 5 random sentences, print the predictions, and reconstruct the original text.
"language-python hljs">score, acc = model.evaluate(Xtest, ytest, batch_size=BATCH_SIZE)
print("\nTest score: %.3f, accuracy: %.3f" % (score, acc))
print('{} {} {}'.format('预测','真实','句子'))
for i in range(5):
idx = np.random.randint(len(Xtest))
xtest = Xtest[idx].reshape(1,40)
ylabel = ytest[idx]
ypred = model.predict(xtest)[0][0]
sent = " ".join([index2word[x] for x in xtest[0] if x != 0])
print(' {} {} {}'.format(int(round(ypred)), int(ylabel), sent))
Test score: 0.052, accuracy: 0.990
Predicted Actual Sentence
0 0 oh , and brokeback mountain is a terrible movie …
1 1 the last stand and mission impossible 3 both were awesome movies .
1 1 i love harry potter .
1 1 mission impossible 2 rocks ! ! … .
1 1 harry potter is awesome i do n’t care if anyone says differently ! ..
The accuracy on the test set also reaches 99%.
We can also type in sentences of our own and let the network predict the sentiment. Suppose we input "I love reading." and "You are so boring."; let's see whether the trained network gets them right.
"language-python hljs">INPUT_SENTENCES = ['I love reading.','You are so boring.']
XX = np.empty(len(INPUT_SENTENCES),dtype=list)
i=0
for sentence in INPUT_SENTENCES:
words = nltk.word_tokenize(sentence.lower())
seq = []
for word in words:
if word in word2index:
seq.append(word2index[word])
else:
seq.append(word2index['UNK'])
XX[i] = seq
i+=1
XX = sequence.pad_sequences(XX, maxlen=MAX_SENTENCE_LENGTH)
labels = [int(round(x[0])) for x in model.predict(XX) ]
label2word = {1:'积极', 0:'消极'}
for i in range(len(INPUT_SENTENCES)):
print('{} {}'.format(label2word[labels[i]], INPUT_SENTENCES[i]))
positive I love reading.
negative You are so boring.
Yes, both predictions are correct.
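For repeated use, the preprocessing and prediction steps above can be wrapped in a small helper. The function below is a sketch of my own (reusing the post's variable names), not part of the original code:
def predict_sentiment(sentences):
    # Tokenize, map words to indices (UNK for out-of-vocabulary words), pad, predict.
    seqs = [[word2index.get(w, word2index["UNK"])
             for w in nltk.word_tokenize(s.lower())] for s in sentences]
    padded = sequence.pad_sequences(seqs, maxlen=MAX_SENTENCE_LENGTH)
    return [int(round(p[0])) for p in model.predict(padded)]

print(predict_sentiment(['I love reading.', 'You are so boring.']))  # expected: [1, 0]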
The complete code:
# -*- coding: utf-8 -*-
from keras.layers.core import Activation, Dense
from keras.layers.embeddings import Embedding
from keras.layers.recurrent import LSTM
from keras.models import Sequential
from keras.preprocessing import sequence
from sklearn.model_selection import train_test_split
import collections
import nltk
import numpy as np

## EDA
maxlen = 0
word_freqs = collections.Counter()
num_recs = 0
with open('./train.txt', 'r+') as f:
    for line in f:
        label, sentence = line.strip().split("\t")
        words = nltk.word_tokenize(sentence.lower())
        if len(words) > maxlen:
            maxlen = len(words)
        for word in words:
            word_freqs[word] += 1
        num_recs += 1
print('max_len ', maxlen)
print('nb_words ', len(word_freqs))

## Prepare the data
MAX_FEATURES = 2000
MAX_SENTENCE_LENGTH = 40
vocab_size = min(MAX_FEATURES, len(word_freqs)) + 2
word2index = {x[0]: i+2 for i, x in enumerate(word_freqs.most_common(MAX_FEATURES))}
word2index["PAD"] = 0
word2index["UNK"] = 1
index2word = {v: k for k, v in word2index.items()}
X = np.empty(num_recs, dtype=list)
y = np.zeros(num_recs)
i = 0
with open('./train.txt', 'r+') as f:
    for line in f:
        label, sentence = line.strip().split("\t")
        words = nltk.word_tokenize(sentence.lower())
        seqs = []
        for word in words:
            if word in word2index:
                seqs.append(word2index[word])
            else:
                seqs.append(word2index["UNK"])
        X[i] = seqs
        y[i] = int(label)
        i += 1
X = sequence.pad_sequences(X, maxlen=MAX_SENTENCE_LENGTH)

## Split the data
Xtrain, Xtest, ytrain, ytest = train_test_split(X, y, test_size=0.2, random_state=42)

## Build the network
EMBEDDING_SIZE = 128
HIDDEN_LAYER_SIZE = 64
BATCH_SIZE = 32
NUM_EPOCHS = 10
model = Sequential()
model.add(Embedding(vocab_size, EMBEDDING_SIZE, input_length=MAX_SENTENCE_LENGTH))
model.add(LSTM(HIDDEN_LAYER_SIZE, dropout=0.2, recurrent_dropout=0.2))
model.add(Dense(1))
model.add(Activation("sigmoid"))
model.compile(loss="binary_crossentropy", optimizer="adam", metrics=["accuracy"])

## Train the network
model.fit(Xtrain, ytrain, batch_size=BATCH_SIZE, epochs=NUM_EPOCHS, validation_data=(Xtest, ytest))

## Evaluate and spot-check predictions
score, acc = model.evaluate(Xtest, ytest, batch_size=BATCH_SIZE)
print("\nTest score: %.3f, accuracy: %.3f" % (score, acc))
print('{} {} {}'.format('Predicted', 'Actual', 'Sentence'))
for i in range(5):
    idx = np.random.randint(len(Xtest))
    xtest = Xtest[idx].reshape(1, 40)
    ylabel = ytest[idx]
    ypred = model.predict(xtest)[0][0]
    sent = " ".join([index2word[x] for x in xtest[0] if x != 0])
    print(' {} {} {}'.format(int(round(ypred)), int(ylabel), sent))

## Predict on our own sentences
INPUT_SENTENCES = ['I love reading.', 'You are so boring.']
XX = np.empty(len(INPUT_SENTENCES), dtype=list)
i = 0
for sentence in INPUT_SENTENCES:
    words = nltk.word_tokenize(sentence.lower())
    seq = []
    for word in words:
        if word in word2index:
            seq.append(word2index[word])
        else:
            seq.append(word2index['UNK'])
    XX[i] = seq
    i += 1
XX = sequence.pad_sequences(XX, maxlen=MAX_SENTENCE_LENGTH)
labels = [int(round(x[0])) for x in model.predict(XX)]
label2word = {1: 'positive', 0: 'negative'}
for i in range(len(INPUT_SENTENCES)):
    print('{} {}'.format(label2word[labels[i]], INPUT_SENTENCES[i]))