Sentiment analysis in NLP
The goal of this program is to classify whether an article headline is sarcastic or not; I use TensorFlow 2.5 to solve this problem.
Dataset download URL: https://www.kaggle.com/rmisra/news-headlines-dataset-for-sarcasm-detection/home
A sample of the dataset:
{
"article_link": "https://www.huffingtonpost.com/entry/versace-black-code_us_5861fbefe4b0de3a08f600d5",
"headline": "former versace store clerk sues over secret 'black code' for minority shoppers",
"is_sarcastic": 0
}
We want to use the headline to predict is_sarcastic: 1 means sarcastic (True), 0 means not sarcastic (False).
preprocessing
Use pandas to read the JSON file.
import pandas as pd
# lines=True means each line of the file is a separate JSON object
df = pd.read_json("Sarcasm_Headlines_Dataset_v2.json", lines=True)
df
'''
is_sarcastic headline article_link
0 1 thirtysomething sci... https://www.theonion.co...
1 0 dem rep. totally ... https://www.huffingtonpos..
'''
build list for each column
labels = []
sentences = []
urls = []
# a tip for converting a Series to a list
'''
type(df['is_sarcastic'])
# Series
type(df['is_sarcastic'].values)
# ndarray
type(df['is_sarcastic'].values.tolist())
# list
'''
labels = df['is_sarcastic'].values.tolist()
sentences = df['headline'].values.tolist()
urls = df['article_link'].values.tolist()
len(labels) # 28619
len(sentences) # 28619
split dataset into train set and test set
# train size is 2/3 of the whole dataset
train_size = int(len(labels) / 3 * 2)
train_sentences = sentences[0: train_size]
test_sentences = sentences[train_size:]
train_y = labels[0:train_size]
test_y = labels[train_size:]
init some parameters
# some parameters
vocab_size = 10000
# dimension of the embedding vectors
embedding_dim = 16
# maximum length of each input sentence (after padding/truncating)
max_length = 100
# truncating and padding method
trunc_type='post'
padding_type='post'
# token used for out-of-vocabulary words
oov_tok = "<OOV>"
preprocessing on train set and test set
# processing on train set and test set
import numpy as np
from tensorflow.keras.preprocessing.text import Tokenizer
from tensorflow.keras.preprocessing.sequence import pad_sequences
tokenizer = Tokenizer(oov_token = oov_tok)
tokenizer.fit_on_texts(train_sentences)
train_X = tokenizer.texts_to_sequences(train_sentences)
# padding the data
train_X = pad_sequences(train_X,
maxlen = max_length,
truncating = trunc_type,
padding = padding_type)
train_X[:2]
# convert the list to a numpy array
train_y = np.array(train_y)
# same operator to test set
test_X = tokenizer.texts_to_sequences(test_sentences)
test_X = pad_sequences(test_X ,
maxlen = max_length,
truncating = trunc_type,
padding = padding_type)
test_y = np.array(test_y)
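As a quick check of the oov_tok setting, here is a small optional sketch (with a made-up sentence, using the tokenizer fitted above): any word that never appeared in the training headlines is mapped to the <OOV> index.
# a made-up sentence: words unseen during fit_on_texts map to the <OOV> index
print(tokenizer.word_index[oov_tok])
# usually 1
print(tokenizer.texts_to_sequences(["granny flummoxed by zxqwv headline"]))
# unseen words such as 'zxqwv' come back as the <OOV> index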
build the model
some important functions and args:
tf.keras.layers.Dense # a regular densely-connected NN layer; implements the operation: output = activation(dot(input, kernel) + bias)
activation # Activation function to use. If you don't specify anything, no activation is applied (i.e. "linear" activation: a(x) = x).
use_bias # Boolean, whether the layer uses a bias vector.
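A minimal sketch (toy shapes, random input, not part of the original program) to verify the Dense formula above by comparing the layer output with a manual activation(dot(input, kernel) + bias):
import numpy as np
import tensorflow as tf
# one sample with 4 features
x = np.random.rand(1, 4).astype("float32")
dense = tf.keras.layers.Dense(3, activation = 'relu')
# calling the layer builds its kernel and bias
y = dense(x)
kernel, bias = dense.get_weights()
# relu(dot(x, kernel) + bias), computed by hand
manual = np.maximum(x @ kernel + bias, 0)
print(np.allclose(y.numpy(), manual))
# True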
tf.keras.Sequential # groups a linear stack of layers into a tf.keras.Model
tf.keras.Model # the model object used to train and predict
Configure the model with losses and metrics via model.compile(args):
optimizer # which optimizer to use; some options: Adam, RMSprop, SGD, Adagrad
loss # The loss value that will be minimized by the model will then be the sum of all individual losses.
metrics # List of metrics to be evaluated by the model during training and testing.
Train the model with model.fit(x=None, y=None):
batch_size # Number of samples per gradient update. If unspecified, batch_size will default to 32.
epochs # Number of epochs to train the model.
verbose # Verbosity mode. 0 = silent, 1 = progress bar, 2 = one line per epoch; verbose=2 is recommended when not running interactively.
validation_data # (valid_X, valid_y)
tf.keras.layers.Embedding # Turns positive integers (word indexes) into dense vectors of fixed size.
The purpose of the embedding is to turn each 1-dim integer index into a multi-dim vector, so the model can learn hidden features of each word and connect them to the predicted label; in this program, every word's sentiment direction gets trained many times over the corpus.
tf.keras.layers.GlobalAveragePooling1D # averages all the word vectors along the sequence dimension; if the input shape is (32, 10, 64), the shape after pooling becomes (32, 64).
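A minimal sketch (toy sizes, random indexes, not part of the original program) showing the shape change through Embedding and GlobalAveragePooling1D:
import numpy as np
import tensorflow as tf
# batch of 32 sequences, each 10 word indexes from a 1000-word vocabulary
word_ids = np.random.randint(0, 1000, size = (32, 10))
embedded = tf.keras.layers.Embedding(input_dim = 1000, output_dim = 64)(word_ids)
print(embedded.shape)
# (32, 10, 64): one 64-dim vector per word
pooled = tf.keras.layers.GlobalAveragePooling1D()(embedded)
print(pooled.shape)
# (32, 64): word vectors averaged over the sequence dimension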
The code is simpler than the theory:
# build the model
model = tf.keras.Sequential(
[
# map each word index to an embedding_dim-dimensional vector (16 here)
tf.keras.layers.Embedding(vocab_size, embedding_dim, input_length = max_length),
# average all word vectors
tf.keras.layers.GlobalAveragePooling1D(),
# NN
tf.keras.layers.Dense(24, activation = 'relu'),
tf.keras.layers.Dense(1, activation = 'sigmoid')
]
)
model.compile(loss = 'binary_crossentropy', optimizer = 'adam' , metrics = ['accuracy'])
train the model
num_epochs = 30
history = model.fit(train_X, train_y, epochs = num_epochs,
validation_data = (test_X, test_y),
verbose = 2)
After 30 epochs:
Epoch 30/30
597/597 - 8s - loss: 1.8816e-04 - accuracy: 1.0000 - val_loss: 1.2858 - val_accuracy: 0.8216
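The val_accuracy of 0.8216 next to a training accuracy of 1.0000 suggests overfitting. A small optional sketch (assuming matplotlib is installed) that uses the history object captured above to compare the two curves:
import matplotlib.pyplot as plt
plt.plot(history.history['accuracy'], label = 'train accuracy')
plt.plot(history.history['val_accuracy'], label = 'val accuracy')
plt.xlabel('epoch')
plt.ylabel('accuracy')
plt.legend()
plt.show()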
predict our own sentences
mytest_sentence = ["you are so cute", "you are so cute but looks like stupid"]
mytest_X = tokenizer.texts_to_sequences(mytest_sentence)
mytest_X = pad_sequences(mytest_X ,
maxlen = max_length,
truncating = trunc_type,
padding = padding_type)
mytest_y = model.predict(mytest_X)
# if the result is bigger than 0.5, the title is predicted as sarcastic
print(mytest_y > 0.5)
'''
[[False]
[ True]]
'''
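An optional follow-up sketch using the variables defined above to print the raw probability next to the thresholded decision, so the 0.5 cut-off is explicit:
for sentence, prob in zip(mytest_sentence, mytest_y.ravel()):
    # show the predicted probability of sarcasm and the final label
    print(sentence, '->', round(float(prob), 3), prob > 0.5)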
references:
tensorflow API: https://www.tensorflow.org/api_docs/python/tf/keras/Sequential
colab: bit.ly/tfw-sarcembed