1 Series overview

  This text classification series will run to about ten posts, covering classification based on word2vec pre-trained embeddings as well as on the newest pre-trained models (ELMo, BERT, etc.). The full series is:

  word2vec pre-trained word vectors

  textCNN model

  charCNN model

  Bi-LSTM model

  Bi-LSTM + Attention model

  RCNN model

  Adversarial LSTM model

  Transformer model

  ELMo pre-trained model

  BERT pre-trained model

  The Jupyter notebook code is in the textClassifier repository; the Python code is in text_classfier under NLP-Project.

2 Dataset

  The dataset is the IMDB movie-review dataset. There are three data files under /data/rawData: unlabeledTrainData.tsv, labeledTrainData.tsv, and testData.tsv. Text classification requires labeled data (labeledTrainData). Preprocessing is identical to that in part one of this series (word2vec pre-trained word vectors); the preprocessed file is /data/preprocess/labeledTrain.csv.

3 Bi-LSTM + Attention model

  The Bi-LSTM + Attention model comes from the paper Attention-Based Bidirectional Long Short-Term Memory Networks for Relation Classification. For an introduction to the attention mechanism, see this post.

  Bi-LSTM + Attention simply adds an attention layer on top of the Bi-LSTM model. In a plain Bi-LSTM we take the output vector of the last time step as the feature vector and feed it to a softmax classifier. With attention, we instead compute a weight for each time step, take the weighted sum of the vectors at all time steps as the feature vector, and then apply the softmax classification. In our experiments, adding attention did improve the results. The model structure is shown in the figure below:

  [Figure: Bi-LSTM + Attention model structure]
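
  To make the attention computation concrete, here is a minimal numpy sketch of the scoring-and-weighting step just described (the shapes and variable names are illustrative only; the actual TensorFlow implementation appears in section 7):

  import numpy as np

  batchSize, timeSteps, hiddenSize = 2, 5, 4
  H = np.random.randn(batchSize, timeSteps, hiddenSize)      # Bi-LSTM outputs (fw and bw summed)
  w = np.random.randn(hiddenSize)                            # trainable attention vector

  M = np.tanh(H)                                             # non-linear transform
  scores = M @ w                                             # one score per time step: [batchSize, timeSteps]
  expS = np.exp(scores - scores.max(axis=1, keepdims=True))  # numerically stable softmax
  alpha = expS / expS.sum(axis=1, keepdims=True)             # attention weights over the time steps
  r = (H * alpha[:, :, None]).sum(axis=1)                    # weighted sum: [batchSize, hiddenSize]
  print(r.shape)                                             # (2, 4)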

4 Parameter configuration

  import os
  import csv
  import time
  import datetime
  import random
  import json

  import warnings
  from collections import Counter
  from math import sqrt

  import gensim
  import pandas as pd
  import numpy as np
  import tensorflow as tf
  from sklearn.metrics import roc_auc_score, accuracy_score, precision_score, recall_score

  warnings.filterwarnings("ignore")
  # Hyper-parameter configuration

  class TrainingConfig(object):
      epoches = 4
      evaluateEvery = 100
      checkpointEvery = 100
      learningRate = 0.001

  class ModelConfig(object):
      embeddingSize = 200

      hiddenSizes = [256, 128]  # number of units in each LSTM layer

      dropoutKeepProb = 0.5
      l2RegLambda = 0.0

  class Config(object):
      sequenceLength = 200  # set to roughly the mean of all sequence lengths
      batchSize = 128

      dataSource = "../data/preProcess/labeledTrain.csv"

      stopWordSource = "../data/english"

      numClasses = 1  # 1 for binary classification; for multi-class, set to the number of classes

      rate = 0.8  # proportion of the data used for training

      training = TrainingConfig()

      model = ModelConfig()

  # Instantiate the configuration object
  config = Config()

5 Generating the training data

  1) Load the data and split each sentence into word tokens, removing low-frequency words and stop words.

  2) Map the words to indices and build a word-to-index vocabulary, saved in json format so it can be reused at inference time. (Note: some words may be missing from the pre-trained word2vec vectors; such words are represented directly as UNK. A toy illustration follows this list.)

  3) Read the word vectors out of the pre-trained word2vec model and pass them to the model as initial values.

  4) Split the data into a training set and an evaluation set.
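
  A toy illustration of the UNK fallback in step 2 (this dictionary is made up for the example; the real vocabulary is built by the Dataset class below):

  word2idxDemo = {"PAD": 0, "UNK": 1, "movie": 2, "great": 3}
  tokens = ["great", "movie", "unseen_word"]
  print([word2idxDemo.get(w, word2idxDemo["UNK"]) for w in tokens])  # [3, 2, 1]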

  # Data-preprocessing class that generates the training and evaluation sets

  class Dataset(object):
      def __init__(self, config):
          self.config = config
          self._dataSource = config.dataSource
          self._stopWordSource = config.stopWordSource

          self._sequenceLength = config.sequenceLength  # every input sequence is padded/truncated to this length
          self._embeddingSize = config.model.embeddingSize
          self._batchSize = config.batchSize
          self._rate = config.rate

          self.stopWordDict = {}

          self.trainReviews = []
          self.trainLabels = []

          self.evalReviews = []
          self.evalLabels = []

          self.wordEmbedding = None

          self.labelList = []

      def _readData(self, filePath):
          """
          Read the dataset from a csv file
          """
          df = pd.read_csv(filePath)

          if self.config.numClasses == 1:
              labels = df["sentiment"].tolist()
          elif self.config.numClasses > 1:
              labels = df["rate"].tolist()

          review = df["review"].tolist()
          reviews = [line.strip().split() for line in review]

          return reviews, labels

      def _labelToIndex(self, labels, label2idx):
          """
          Convert the labels to index representation
          """
          labelIds = [label2idx[label] for label in labels]
          return labelIds

      def _wordToIndex(self, reviews, word2idx):
          """
          Convert the words to indices
          """
          reviewIds = [[word2idx.get(item, word2idx["UNK"]) for item in review] for review in reviews]
          return reviewIds

      def _genTrainEvalData(self, x, y, word2idx, rate):
          """
          Generate the training and evaluation sets
          """
          reviews = []
          for review in x:
              if len(review) >= self._sequenceLength:
                  reviews.append(review[:self._sequenceLength])
              else:
                  reviews.append(review + [word2idx["PAD"]] * (self._sequenceLength - len(review)))

          trainIndex = int(len(x) * rate)

          trainReviews = np.asarray(reviews[:trainIndex], dtype="int64")
          trainLabels = np.array(y[:trainIndex], dtype="float32")

          evalReviews = np.asarray(reviews[trainIndex:], dtype="int64")
          evalLabels = np.array(y[trainIndex:], dtype="float32")

          return trainReviews, trainLabels, evalReviews, evalLabels

      def _genVocabulary(self, reviews, labels):
          """
          Generate the word vectors and the word-to-index vocabulary (the full dataset can be used here)
          """
          allWords = [word for review in reviews for word in review]

          # Remove stop words
          subWords = [word for word in allWords if word not in self.stopWordDict]

          wordCount = Counter(subWords)  # count word frequencies
          sortWordCount = sorted(wordCount.items(), key=lambda x: x[1], reverse=True)

          # Remove low-frequency words
          words = [item[0] for item in sortWordCount if item[1] >= 5]

          vocab, wordEmbedding = self._getWordEmbedding(words)
          self.wordEmbedding = wordEmbedding

          word2idx = dict(zip(vocab, list(range(len(vocab)))))

          uniqueLabel = list(set(labels))
          label2idx = dict(zip(uniqueLabel, list(range(len(uniqueLabel)))))
          self.labelList = list(range(len(uniqueLabel)))

          # Save the word-to-index vocabulary as json so it can be loaded directly at inference time
          with open("../data/wordJson/word2idx.json", "w", encoding="utf-8") as f:
              json.dump(word2idx, f)

          with open("../data/wordJson/label2idx.json", "w", encoding="utf-8") as f:
              json.dump(label2idx, f)

          return word2idx, label2idx

      def _getWordEmbedding(self, words):
          """
          Look up the pre-trained word2vec vector for every word in our dataset
          """
          wordVec = gensim.models.KeyedVectors.load_word2vec_format("../word2vec/word2Vec.bin", binary=True)
          vocab = []
          wordEmbedding = []

          # Add "PAD" and "UNK"
          vocab.append("PAD")
          vocab.append("UNK")
          wordEmbedding.append(np.zeros(self._embeddingSize))
          wordEmbedding.append(np.random.randn(self._embeddingSize))

          for word in words:
              try:
                  vector = wordVec[word]
                  vocab.append(word)
                  wordEmbedding.append(vector)
              except KeyError:
                  print(word + " is not in the pre-trained word vectors")

          return vocab, np.array(wordEmbedding)

      def _readStopWord(self, stopWordPath):
          """
          Read the stop-word list
          """
          with open(stopWordPath, "r") as f:
              stopWords = f.read()
              stopWordList = stopWords.splitlines()
              # Store the stop words as a dict so that membership checks are fast
              self.stopWordDict = dict(zip(stopWordList, list(range(len(stopWordList)))))

      def dataGen(self):
          """
          Initialize the training and evaluation sets
          """
          # Initialize the stop words
          self._readStopWord(self._stopWordSource)

          # Read the dataset
          reviews, labels = self._readData(self._dataSource)

          # Build the word-to-index vocabulary and the word-vector matrix
          word2idx, label2idx = self._genVocabulary(reviews, labels)

          # Convert the labels and sentences to numeric form
          labelIds = self._labelToIndex(labels, label2idx)
          reviewIds = self._wordToIndex(reviews, word2idx)

          # Initialize the training and evaluation sets
          trainReviews, trainLabels, evalReviews, evalLabels = self._genTrainEvalData(reviewIds, labelIds, word2idx, self._rate)
          self.trainReviews = trainReviews
          self.trainLabels = trainLabels

          self.evalReviews = evalReviews
          self.evalLabels = evalLabels

  data = Dataset(config)
  data.dataGen()
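
  A quick sanity check after dataGen() (the numbers in the comments are illustrative; the exact values depend on the corpus and on rate=0.8):

  print(data.trainReviews.shape)   # e.g. (20000, 200): 80% of the 25000 labeled reviews, padded to sequenceLength
  print(data.wordEmbedding.shape)  # (vocabSize, 200), with rows 0 and 1 holding the PAD and UNK vectors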

6 Generating batch data

  Batches are fed to the model with a generator (a generator avoids loading the entire dataset into memory).

  # Yield batches of data

  def nextBatch(x, y, batchSize):
      """
      Generate batches of data and output them from a generator
      """
      perm = np.arange(len(x))
      np.random.shuffle(perm)
      x = x[perm]
      y = y[perm]

      numBatches = len(x) // batchSize

      for i in range(numBatches):
          start = i * batchSize
          end = start + batchSize
          batchX = np.array(x[start: end], dtype="int64")
          batchY = np.array(y[start: end], dtype="float32")

          yield batchX, batchY
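
  A minimal usage sketch (the training loop in section 9 consumes the generator in exactly this way):

  for batchX, batchY in nextBatch(data.trainReviews, data.trainLabels, config.batchSize):
      print(batchX.shape, batchY.shape)  # (128, 200) (128,)
      break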

7 Bi-LSTM + Attention model

  # Build the model
  class BiLSTMAttention(object):
      """
      Bi-LSTM + Attention model for text classification
      """
      def __init__(self, config, wordEmbedding):

          # Define the model inputs
          self.inputX = tf.placeholder(tf.int32, [None, config.sequenceLength], name="inputX")
          self.inputY = tf.placeholder(tf.int32, [None], name="inputY")

          self.dropoutKeepProb = tf.placeholder(tf.float32, name="dropoutKeepProb")

          # Define the l2 loss
          l2Loss = tf.constant(0.0)

          # Word-embedding layer
          with tf.name_scope("embedding"):

              # Initialize the embedding matrix with the pre-trained word vectors
              self.W = tf.Variable(tf.cast(wordEmbedding, dtype=tf.float32, name="word2vec"), name="W")
              # Map the input word indices to word vectors; shape [batch_size, sequence_length, embedding_size]
              self.embeddedWords = tf.nn.embedding_lookup(self.W, self.inputX)

          # Define the two-layer bidirectional LSTM structure
          with tf.name_scope("Bi-LSTM"):
              for idx, hiddenSize in enumerate(config.model.hiddenSizes):
                  with tf.name_scope("Bi-LSTM" + str(idx)):
                      # Forward LSTM cell
                      lstmFwCell = tf.nn.rnn_cell.DropoutWrapper(tf.nn.rnn_cell.LSTMCell(num_units=hiddenSize, state_is_tuple=True),
                                                                 output_keep_prob=self.dropoutKeepProb)
                      # Backward LSTM cell
                      lstmBwCell = tf.nn.rnn_cell.DropoutWrapper(tf.nn.rnn_cell.LSTMCell(num_units=hiddenSize, state_is_tuple=True),
                                                                 output_keep_prob=self.dropoutKeepProb)

                      # Use a dynamic rnn, which supports variable sequence lengths; if no lengths are given, the full sequence is used
                      # outputs_ is a tuple (output_fw, output_bw); both elements have shape [batch_size, max_time, hidden_size],
                      # with the same hidden_size for fw and bw
                      # self.current_state is the final state, a tuple (state_fw, state_bw); each state is an LSTMStateTuple (c, h)
                      outputs_, self.current_state = tf.nn.bidirectional_dynamic_rnn(lstmFwCell, lstmBwCell,
                                                                                     self.embeddedWords, dtype=tf.float32,
                                                                                     scope="bi-lstm" + str(idx))

                      # Concatenate the fw and bw outputs into [batch_size, time_step, hidden_size * 2]
                      # and feed the result to the next Bi-LSTM layer
                      self.embeddedWords = tf.concat(outputs_, 2)

          # Split the output of the last Bi-LSTM layer back into the forward and backward outputs
          outputs = tf.split(self.embeddedWords, 2, -1)

          # In the Bi-LSTM + Attention paper, the forward and backward outputs are summed
          with tf.name_scope("Attention"):
              H = outputs[0] + outputs[1]

              # Get the attention output
              output = self.attention(H)
              outputSize = config.model.hiddenSizes[-1]

          # Fully connected output layer
          with tf.name_scope("output"):
              outputW = tf.get_variable(
                  "outputW",
                  shape=[outputSize, config.numClasses],
                  initializer=tf.contrib.layers.xavier_initializer())

              outputB = tf.Variable(tf.constant(0.1, shape=[config.numClasses]), name="outputB")
              l2Loss += tf.nn.l2_loss(outputW)
              l2Loss += tf.nn.l2_loss(outputB)
              self.logits = tf.nn.xw_plus_b(output, outputW, outputB, name="logits")

              if config.numClasses == 1:
                  self.predictions = tf.cast(tf.greater_equal(self.logits, 0.0), tf.float32, name="predictions")
              elif config.numClasses > 1:
                  self.predictions = tf.argmax(self.logits, axis=-1, name="predictions")

          # Compute the cross-entropy loss
          with tf.name_scope("loss"):

              if config.numClasses == 1:
                  losses = tf.nn.sigmoid_cross_entropy_with_logits(logits=self.logits,
                                                                   labels=tf.cast(tf.reshape(self.inputY, [-1, 1]), dtype=tf.float32))
              elif config.numClasses > 1:
                  losses = tf.nn.sparse_softmax_cross_entropy_with_logits(logits=self.logits, labels=self.inputY)

              self.loss = tf.reduce_mean(losses) + config.model.l2RegLambda * l2Loss

      def attention(self, H):
          """
          Use the attention mechanism to obtain a vector representation of the sentence
          """
          # Number of units in the last LSTM layer (uses the module-level config)
          hiddenSize = config.model.hiddenSizes[-1]

          # Initialize a weight vector as a trainable parameter
          W = tf.Variable(tf.random_normal([hiddenSize], stddev=0.1))

          # Apply a non-linear activation to the Bi-LSTM output
          M = tf.tanh(H)

          # Multiply M by W; M is [batch_size, time_step, hidden_size] and is first reshaped to [batch_size * time_step, hidden_size]
          # newM has shape [batch_size * time_step, 1]: each time step's output vector is reduced to a single score
          newM = tf.matmul(tf.reshape(M, [-1, hiddenSize]), tf.reshape(W, [-1, 1]))

          # Reshape newM to [batch_size, time_step]
          restoreM = tf.reshape(newM, [-1, config.sequenceLength])

          # Normalize the scores with softmax; shape [batch_size, time_step]
          self.alpha = tf.nn.softmax(restoreM)

          # Use the computed alpha to take a weighted sum of H, done directly with a matrix multiplication
          r = tf.matmul(tf.transpose(H, [0, 2, 1]), tf.reshape(self.alpha, [-1, config.sequenceLength, 1]))

          # Squeeze the 3-D result down to 2-D: squeezeR = [batch_size, hidden_size]
          squeezeR = tf.reshape(r, [-1, hiddenSize])

          sentenceRepren = tf.tanh(squeezeR)

          # Apply dropout to the attention output
          output = tf.nn.dropout(sentenceRepren, self.dropoutKeepProb)

          return output

8 Defining the metric functions

  1. """
  2. 定义各类性能指标
  3. """
  4.  
  5. def mean(item: list) -> float:
  6. """
  7. 计算列表中元素的平均值
  8. :param item: 列表对象
  9. :return:
  10. """
  11. res = sum(item) / len(item) if len(item) > 0 else 0
  12. return res
  13.  
  14. def accuracy(pred_y, true_y):
  15. """
  16. 计算二类和多类的准确率
  17. :param pred_y: 预测结果
  18. :param true_y: 真实结果
  19. :return:
  20. """
  21. if isinstance(pred_y[0], list):
  22. pred_y = [item[0] for item in pred_y]
  23. corr = 0
  24. for i in range(len(pred_y)):
  25. if pred_y[i] == true_y[i]:
  26. corr += 1
  27. acc = corr / len(pred_y) if len(pred_y) > 0 else 0
  28. return acc
  29.  
  30. def binary_precision(pred_y, true_y, positive=1):
  31. """
  32. 二类的精确率计算
  33. :param pred_y: 预测结果
  34. :param true_y: 真实结果
  35. :param positive: 正例的索引表示
  36. :return:
  37. """
  38. corr = 0
  39. pred_corr = 0
  40. for i in range(len(pred_y)):
  41. if pred_y[i] == positive:
  42. pred_corr += 1
  43. if pred_y[i] == true_y[i]:
  44. corr += 1
  45.  
  46. prec = corr / pred_corr if pred_corr > 0 else 0
  47. return prec
  48.  
  49. def binary_recall(pred_y, true_y, positive=1):
  50. """
  51. 二类的召回率
  52. :param pred_y: 预测结果
  53. :param true_y: 真实结果
  54. :param positive: 正例的索引表示
  55. :return:
  56. """
  57. corr = 0
  58. true_corr = 0
  59. for i in range(len(pred_y)):
  60. if true_y[i] == positive:
  61. true_corr += 1
  62. if pred_y[i] == true_y[i]:
  63. corr += 1
  64.  
  65. rec = corr / true_corr if true_corr > 0 else 0
  66. return rec
  67.  
  68. def binary_f_beta(pred_y, true_y, beta=1.0, positive=1):
  69. """
  70. 二类的f beta值
  71. :param pred_y: 预测结果
  72. :param true_y: 真实结果
  73. :param beta: beta值
  74. :param positive: 正例的索引表示
  75. :return:
  76. """
  77. precision = binary_precision(pred_y, true_y, positive)
  78. recall = binary_recall(pred_y, true_y, positive)
  79. try:
  80. f_b = (1 + beta * beta) * precision * recall / (beta * beta * precision + recall)
  81. except:
  82. f_b = 0
  83. return f_b
  84.  
  85. def multi_precision(pred_y, true_y, labels):
  86. """
  87. 多类的精确率
  88. :param pred_y: 预测结果
  89. :param true_y: 真实结果
  90. :param labels: 标签列表
  91. :return:
  92. """
  93. if isinstance(pred_y[0], list):
  94. pred_y = [item[0] for item in pred_y]
  95.  
  96. precisions = [binary_precision(pred_y, true_y, label) for label in labels]
  97. prec = mean(precisions)
  98. return prec
  99.  
  100. def multi_recall(pred_y, true_y, labels):
  101. """
  102. 多类的召回率
  103. :param pred_y: 预测结果
  104. :param true_y: 真实结果
  105. :param labels: 标签列表
  106. :return:
  107. """
  108. if isinstance(pred_y[0], list):
  109. pred_y = [item[0] for item in pred_y]
  110.  
  111. recalls = [binary_recall(pred_y, true_y, label) for label in labels]
  112. rec = mean(recalls)
  113. return rec
  114.  
  115. def multi_f_beta(pred_y, true_y, labels, beta=1.0):
  116. """
  117. 多类的f beta值
  118. :param pred_y: 预测结果
  119. :param true_y: 真实结果
  120. :param labels: 标签列表
  121. :param beta: beta值
  122. :return:
  123. """
  124. if isinstance(pred_y[0], list):
  125. pred_y = [item[0] for item in pred_y]
  126.  
  127. f_betas = [binary_f_beta(pred_y, true_y, beta, label) for label in labels]
  128. f_beta = mean(f_betas)
  129. return f_beta
  130.  
  131. def get_binary_metrics(pred_y, true_y, f_beta=1.0):
  132. """
  133. 得到二分类的性能指标
  134. :param pred_y:
  135. :param true_y:
  136. :param f_beta:
  137. :return:
  138. """
  139. acc = accuracy(pred_y, true_y)
  140. recall = binary_recall(pred_y, true_y)
  141. precision = binary_precision(pred_y, true_y)
  142. f_beta = binary_f_beta(pred_y, true_y, f_beta)
  143. return acc, recall, precision, f_beta
  144.  
  145. def get_multi_metrics(pred_y, true_y, labels, f_beta=1.0):
  146. """
  147. 得到多分类的性能指标
  148. :param pred_y:
  149. :param true_y:
  150. :param labels:
  151. :param f_beta:
  152. :return:
  153. """
  154. acc = accuracy(pred_y, true_y)
  155. recall = multi_recall(pred_y, true_y, labels)
  156. precision = multi_precision(pred_y, true_y, labels)
  157. f_beta = multi_f_beta(pred_y, true_y, labels, f_beta)
  158. return acc, recall, precision, f_beta
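
  A toy check of the metric helpers (made-up labels; the positive class is 1):

  predDemo = [1, 0, 1, 1]
  trueDemo = [1, 0, 0, 1]
  print(accuracy(predDemo, trueDemo))            # 0.75
  print(binary_precision(predDemo, trueDemo))    # 0.666...: two of the three predicted positives are correct
  print(binary_recall(predDemo, trueDemo))       # 1.0: both true positives were found
  print(get_binary_metrics(predDemo, trueDemo))  # (acc, recall, precision, f_beta) = (0.75, 1.0, 0.666..., 0.8)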

9 Training the model

  During training we write tensorBoard summaries and save the model in two ways: as checkpoint files and as a pb SavedModel.

  # Train the model

  # Get the training and evaluation sets
  trainReviews = data.trainReviews
  trainLabels = data.trainLabels
  evalReviews = data.evalReviews
  evalLabels = data.evalLabels

  wordEmbedding = data.wordEmbedding
  labelList = data.labelList

  # Define the computation graph
  with tf.Graph().as_default():

      session_conf = tf.ConfigProto(allow_soft_placement=True, log_device_placement=False)
      session_conf.gpu_options.allow_growth = True
      session_conf.gpu_options.per_process_gpu_memory_fraction = 0.9  # cap the fraction of gpu memory used

      sess = tf.Session(config=session_conf)

      # Define the session
      with sess.as_default():
          lstm = BiLSTMAttention(config, wordEmbedding)

          globalStep = tf.Variable(0, name="globalStep", trainable=False)
          # Define the optimizer, passing in the learning rate
          optimizer = tf.train.AdamOptimizer(config.training.learningRate)
          # Compute the gradients, getting (gradient, variable) pairs
          gradsAndVars = optimizer.compute_gradients(lstm.loss)
          # Apply the gradients to the variables, producing the training op
          trainOp = optimizer.apply_gradients(gradsAndVars, global_step=globalStep)

          # Record summaries for tensorBoard
          gradSummaries = []
          for g, v in gradsAndVars:
              if g is not None:
                  tf.summary.histogram("{}/grad/hist".format(v.name), g)
                  tf.summary.scalar("{}/grad/sparsity".format(v.name), tf.nn.zero_fraction(g))

          outDir = os.path.abspath(os.path.join(os.path.curdir, "summarys"))
          print("Writing to {}\n".format(outDir))

          lossSummary = tf.summary.scalar("loss", lstm.loss)
          summaryOp = tf.summary.merge_all()

          trainSummaryDir = os.path.join(outDir, "train")
          trainSummaryWriter = tf.summary.FileWriter(trainSummaryDir, sess.graph)

          evalSummaryDir = os.path.join(outDir, "eval")
          evalSummaryWriter = tf.summary.FileWriter(evalSummaryDir, sess.graph)

          # Define the saver; keep at most five recent checkpoints
          saver = tf.train.Saver(tf.global_variables(), max_to_keep=5)

          # One way of saving the model: export it as a pb SavedModel
          savedModelPath = "../model/bilstm-atten/savedModel"
          if os.path.exists(savedModelPath):
              os.rmdir(savedModelPath)  # note: rmdir only removes an empty directory
          builder = tf.saved_model.builder.SavedModelBuilder(savedModelPath)

          sess.run(tf.global_variables_initializer())

          def trainStep(batchX, batchY):
              """
              One training step
              """
              feed_dict = {
                  lstm.inputX: batchX,
                  lstm.inputY: batchY,
                  lstm.dropoutKeepProb: config.model.dropoutKeepProb
              }
              _, summary, step, loss, predictions = sess.run(
                  [trainOp, summaryOp, globalStep, lstm.loss, lstm.predictions],
                  feed_dict)
              timeStr = datetime.datetime.now().isoformat()

              if config.numClasses == 1:
                  acc, recall, prec, f_beta = get_binary_metrics(pred_y=predictions, true_y=batchY)
              elif config.numClasses > 1:
                  acc, recall, prec, f_beta = get_multi_metrics(pred_y=predictions, true_y=batchY,
                                                                labels=labelList)

              trainSummaryWriter.add_summary(summary, step)

              return loss, acc, prec, recall, f_beta

          def devStep(batchX, batchY):
              """
              One evaluation step
              """
              feed_dict = {
                  lstm.inputX: batchX,
                  lstm.inputY: batchY,
                  lstm.dropoutKeepProb: 1.0
              }
              summary, step, loss, predictions = sess.run(
                  [summaryOp, globalStep, lstm.loss, lstm.predictions],
                  feed_dict)

              if config.numClasses == 1:
                  acc, recall, precision, f_beta = get_binary_metrics(pred_y=predictions, true_y=batchY)
              elif config.numClasses > 1:
                  acc, recall, precision, f_beta = get_multi_metrics(pred_y=predictions, true_y=batchY, labels=labelList)

              evalSummaryWriter.add_summary(summary, step)

              return loss, acc, precision, recall, f_beta

          for i in range(config.training.epoches):
              # Train the model
              print("start training model")
              for batchTrain in nextBatch(trainReviews, trainLabels, config.batchSize):
                  loss, acc, prec, recall, f_beta = trainStep(batchTrain[0], batchTrain[1])

                  currentStep = tf.train.global_step(sess, globalStep)
                  print("train: step: {}, loss: {}, acc: {}, recall: {}, precision: {}, f_beta: {}".format(
                      currentStep, loss, acc, recall, prec, f_beta))
                  if currentStep % config.training.evaluateEvery == 0:
                      print("\nEvaluation:")

                      losses = []
                      accs = []
                      f_betas = []
                      precisions = []
                      recalls = []

                      for batchEval in nextBatch(evalReviews, evalLabels, config.batchSize):
                          loss, acc, precision, recall, f_beta = devStep(batchEval[0], batchEval[1])
                          losses.append(loss)
                          accs.append(acc)
                          f_betas.append(f_beta)
                          precisions.append(precision)
                          recalls.append(recall)

                      time_str = datetime.datetime.now().isoformat()
                      print("{}, step: {}, loss: {}, acc: {}, precision: {}, recall: {}, f_beta: {}".format(
                          time_str, currentStep, mean(losses), mean(accs), mean(precisions),
                          mean(recalls), mean(f_betas)))

                  if currentStep % config.training.checkpointEvery == 0:
                      # The other way of saving the model: write checkpoint files
                      path = saver.save(sess, "../model/Bi-LSTM-atten/model/my-model", global_step=currentStep)
                      print("Saved model checkpoint to {}\n".format(path))

          inputs = {"inputX": tf.saved_model.utils.build_tensor_info(lstm.inputX),
                    "keepProb": tf.saved_model.utils.build_tensor_info(lstm.dropoutKeepProb)}

          outputs = {"predictions": tf.saved_model.utils.build_tensor_info(lstm.predictions)}

          prediction_signature = tf.saved_model.signature_def_utils.build_signature_def(
              inputs=inputs, outputs=outputs,
              method_name=tf.saved_model.signature_constants.PREDICT_METHOD_NAME)
          legacy_init_op = tf.group(tf.tables_initializer(), name="legacy_init_op")
          builder.add_meta_graph_and_variables(sess, [tf.saved_model.tag_constants.SERVING],
                                               signature_def_map={"predict": prediction_signature},
                                               legacy_init_op=legacy_init_op)

          builder.save()

10 Prediction code

  x = "this movie is full of references like mad max ii the wild one and many others the ladybug´s face it´s a clear reference or tribute to peter lorre this movie is a masterpiece we´ll talk much more about in the future"

  # Note: these two dictionaries must match the vocabulary used by the model being loaded
  with open("../data/wordJson/word2idx.json", "r", encoding="utf-8") as f:
      word2idx = json.load(f)

  with open("../data/wordJson/label2idx.json", "r", encoding="utf-8") as f:
      label2idx = json.load(f)
  idx2label = {value: key for key, value in label2idx.items()}

  xIds = [word2idx.get(item, word2idx["UNK"]) for item in x.split(" ")]
  if len(xIds) >= config.sequenceLength:
      xIds = xIds[:config.sequenceLength]
  else:
      xIds = xIds + [word2idx["PAD"]] * (config.sequenceLength - len(xIds))

  graph = tf.Graph()
  with graph.as_default():
      gpu_options = tf.GPUOptions(per_process_gpu_memory_fraction=0.333)
      session_conf = tf.ConfigProto(allow_soft_placement=True, log_device_placement=False, gpu_options=gpu_options)
      sess = tf.Session(config=session_conf)

      with sess.as_default():
          checkpoint_file = tf.train.latest_checkpoint("../model/Bi-LSTM-atten/model/")
          saver = tf.train.import_meta_graph("{}.meta".format(checkpoint_file))
          saver.restore(sess, checkpoint_file)

          # Get the input placeholders that must be fed; the output depends on these inputs
          inputX = graph.get_operation_by_name("inputX").outputs[0]
          dropoutKeepProb = graph.get_operation_by_name("dropoutKeepProb").outputs[0]

          # Get the output tensor
          predictions = graph.get_tensor_by_name("output/predictions:0")

          pred = sess.run(predictions, feed_dict={inputX: [xIds], dropoutKeepProb: 1.0})[0]

          # json keys are strings, so cast the predicted index to int before the lookup
          pred = [idx2label[int(item)] for item in pred]
          print(pred)
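
  Since section 9 also exported a pb SavedModel, here is a hedged sketch of loading that export instead of the checkpoint (the path and tensor names follow the training code above, and xIds is the padded index sequence built earlier):

  graph = tf.Graph()
  with graph.as_default():
      with tf.Session() as sess:
          # Load the SavedModel written by the SavedModelBuilder in section 9
          tf.saved_model.loader.load(sess, [tf.saved_model.tag_constants.SERVING],
                                     "../model/bilstm-atten/savedModel")
          inputX = graph.get_tensor_by_name("inputX:0")
          dropoutKeepProb = graph.get_tensor_by_name("dropoutKeepProb:0")
          predictions = graph.get_tensor_by_name("output/predictions:0")

          pred = sess.run(predictions, feed_dict={inputX: [xIds], dropoutKeepProb: 1.0})[0]
          print([idx2label[int(item)] for item in pred])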
