Examples of the TensorFlow seq2seq.py interfaces
Using a simple English question-answering task to test several seq2seq interfaces from the seq2seq.py file in tf.contrib.legacy_seq2seq (TensorFlow 1.4).
GitHub: https://github.com/buyizhiyou/tf_seq2seq
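The scripts below import the seq2seq functions from a local copy of seq2seq.py (the file behind tf.contrib.legacy_seq2seq) plus some helpers from utils.py, both taken from the repository above. If you only need the library functions, the equivalent stock TensorFlow 1.4 imports would look like this (a small sketch, assuming plain TF 1.4 rather than the repo's local copy):

# Equivalent imports from stock TensorFlow 1.4 instead of the repo's local seq2seq.py
from tensorflow.contrib.legacy_seq2seq import (
    basic_rnn_seq2seq,
    tied_rnn_seq2seq,
    embedding_rnn_seq2seq,
    embedding_attention_seq2seq,
)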
Testing basic_rnn_seq2seq
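basic_rnn_seq2seq runs the cell over the encoder inputs and then decodes with a plain RNN decoder initialized from the encoder's final state; encoder and decoder get their own weights. Below is a simplified sketch of what the library function does, following the TF 1.4 implementation (the real code handles a few more options); the full test script comes after it.

# Simplified sketch of basic_rnn_seq2seq; an illustration of the interface, not the test code itself.
import copy

import tensorflow as tf
from tensorflow.contrib.legacy_seq2seq import rnn_decoder


def basic_rnn_seq2seq_sketch(encoder_inputs, decoder_inputs, cell, dtype=tf.float32):
    # encoder_inputs / decoder_inputs: lists of [batch_size, input_size] tensors
    with tf.variable_scope("basic_rnn_seq2seq"):
        # Encode with a copy of the cell (no weight sharing with the decoder), keeping only the final state.
        enc_cell = copy.deepcopy(cell)
        _, enc_state = tf.nn.static_rnn(enc_cell, encoder_inputs, dtype=dtype)
        # Decode with a plain RNN decoder that starts from the encoder's final state.
        return rnn_decoder(decoder_inputs, enc_state, cell)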
# -*- coding: utf-8 -*-
__author__ = "buyizhiyou"
__date__ = "2018-7-30"

import os
import pdb
import re
from collections import Counter

import matplotlib.pyplot as plt
import tensorflow as tf

from seq2seq import basic_rnn_seq2seq
from utils import *

os.environ["CUDA_VISIBLE_DEVICES"] = ""  # hide all GPUs, run on CPU

input_batches = [
    ['Hi What is your name?', 'Nice to meet you!'],
    ['Which programming language do you use?', 'See you later.'],
    ['Where do you live?', 'What is your major?'],
    ['What do you want to drink?', 'What is your favorite beer?']]
target_batches = [
    ['Hi this is Jaemin.', 'Nice to meet you too!'],
    ['I like Python.', 'Bye Bye.'],
    ['I live in Seoul, South Korea.', 'I study industrial engineering.'],
    ['Beer please!', 'Leffe brown!']]

all_input_sentences = []
for input_batch in input_batches:
    all_input_sentences.extend(input_batch)
all_target_sentences = []
for target_batch in target_batches:
    all_target_sentences.extend(target_batch)

# enc_vocab: word2idx, enc_reverse_vocab: idx2word, enc_vocab_size: 26
enc_vocab, enc_reverse_vocab, enc_vocab_size = build_vocab(all_input_sentences)
# dec_vocab: word2idx, dec_reverse_vocab: idx2word, dec_vocab_size: 28
dec_vocab, dec_reverse_vocab, dec_vocab_size = build_vocab(all_target_sentences, is_target=True)

# hyperparameters
n_epoch = 2000
hidden_size = 50
enc_emb_size = 20
dec_emb_size = 21
enc_sentence_length = 10
dec_sentence_length = 11

enc_inputs = tf.placeholder(tf.int32, shape=[None, enc_sentence_length], name='input_sentences')
sequence_lengths = tf.placeholder(tf.int32, shape=[None], name='sentences_length')  # fed below but not used by the graph
dec_inputs = tf.placeholder(tf.int32, shape=[None, dec_sentence_length + 1], name='output_sentences')

enc_inputs_t = tf.transpose(enc_inputs, perm=[1, 0])
dec_inputs_t = tf.transpose(dec_inputs, perm=[1, 0])

'''
embedding
'''
enc_Wemb = tf.get_variable('enc_word_emb', initializer=tf.random_uniform([enc_vocab_size + 1, enc_emb_size]))
dec_Wemb = tf.get_variable('dec_word_emb', initializer=tf.random_uniform([dec_vocab_size + 2, dec_emb_size]))
enc_emb_inputs = tf.nn.embedding_lookup(enc_Wemb, enc_inputs_t)
dec_emb_inputs = tf.nn.embedding_lookup(dec_Wemb, dec_inputs_t)
# enc_emb_inputs: list (length enc_sent_len) of [batch_size x embedding_size] tensors,
# because `static_rnn` takes a list of inputs
enc_emb_inputs = tf.unstack(enc_emb_inputs)
dec_emb_inputs = tf.unstack(dec_emb_inputs)

cell = tf.nn.rnn_cell.BasicRNNCell(hidden_size)
dec_outputs, state = basic_rnn_seq2seq(enc_emb_inputs, dec_emb_inputs, cell)
dec_outputs = tf.stack(dec_outputs)
logits = tf.layers.dense(dec_outputs, units=dec_vocab_size + 2, activation=tf.nn.relu)  # fully connected output layer
predictions = tf.argmax(logits, axis=2)
predictions = tf.transpose(predictions, [1, 0])

# labels & logits: [dec_sentence_length+1 x batch_size x dec_vocab_size+2]
labels = tf.one_hot(dec_inputs_t, dec_vocab_size + 2)
loss = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(
    labels=labels, logits=logits))

# training_op = tf.train.AdamOptimizer(learning_rate=0.0001).minimize(loss)
training_op = tf.train.RMSPropOptimizer(learning_rate=0.0001).minimize(loss)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    loss_history = []
    for epoch in range(n_epoch):
        all_preds = []
        epoch_loss = 0
        for input_batch, target_batch in zip(input_batches, target_batches):
            input_token_indices = []
            target_token_indices = []
            sentence_lengths = []

            for input_sent in input_batch:
                input_sent, sent_len = sent2idx(input_sent, vocab=enc_vocab, max_sentence_length=enc_sentence_length)
                input_token_indices.append(input_sent)
                sentence_lengths.append(sent_len)

            for target_sent in target_batch:
                target_token_indices.append(
                    sent2idx(target_sent, vocab=dec_vocab, max_sentence_length=dec_sentence_length, is_target=True))

            batch_preds, batch_loss, _ = sess.run(
                [predictions, loss, training_op],
                feed_dict={
                    enc_inputs: input_token_indices,
                    sequence_lengths: sentence_lengths,
                    dec_inputs: target_token_indices
                })
            loss_history.append(batch_loss)
            epoch_loss += batch_loss
            all_preds.append(batch_preds)

        # Logging every 400 epochs
        if epoch % 400 == 0:
            print('Epoch', epoch)
            for input_batch, target_batch, batch_preds in zip(input_batches, target_batches, all_preds):
                for input_sent, target_sent, pred in zip(input_batch, target_batch, batch_preds):
                    print('\t', input_sent)
                    print('\t => ', idx2sent(pred, reverse_vocab=dec_reverse_vocab))
                    print('\tCorrect answer:', target_sent)
            print('\tepoch loss: {:.2f}\n'.format(epoch_loss))

show_loss(loss_history)
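The helpers build_vocab, sent2idx, idx2sent and show_loss used above come from utils.py in the linked repository and are not shown in this post. Purely for orientation, a hypothetical version that is consistent with how they are called here could look like the sketch below; the tokenization and the special-token indices are assumptions, and the repository's actual implementation may differ.

# Hypothetical sketch of the utils.py helpers; NOT the repository's code, only one
# possible implementation consistent with the calls in the scripts above.
import re
from collections import Counter

import matplotlib.pyplot as plt


def tokenizer(sentence):
    # crude word/punctuation tokenizer (assumption)
    return re.findall(r"\w+|[^\s\w]", sentence)


def build_vocab(sentences, is_target=False):
    # Returns (word2idx, idx2word, vocab_size). Index 0 is assumed to be reserved for
    # padding, which is why the scripts allocate vocab_size+1 embedding rows
    # (+2 on the decoder side, where an extra GO symbol is also needed).
    # is_target is accepted for interface compatibility; in this sketch the GO id lives outside the vocab.
    counter = Counter(w for sent in sentences for w in tokenizer(sent))
    vocab = {word: idx + 1 for idx, (word, _) in enumerate(counter.most_common())}
    reverse_vocab = {idx: word for word, idx in vocab.items()}
    return vocab, reverse_vocab, len(vocab)


def sent2idx(sent, vocab, max_sentence_length, is_target=False):
    # Map a sentence to a fixed-length list of token ids, padded with 0.
    tokens = [vocab[w] for w in tokenizer(sent)]
    pad = [0] * (max_sentence_length - len(tokens))
    if is_target:
        go_id = len(vocab) + 1  # assumed id of the GO symbol
        return [go_id] + tokens + pad      # length: max_sentence_length + 1
    return tokens + pad, len(tokens)       # (padded ids, true length)


def idx2sent(indices, reverse_vocab):
    # Map predicted ids back to words, dropping padding/GO ids.
    return ' '.join(reverse_vocab[idx] for idx in indices if idx in reverse_vocab)


def show_loss(loss_history):
    plt.plot(loss_history)
    plt.xlabel('step')
    plt.ylabel('loss')
    plt.show()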
Testing tied_rnn_seq2seq (the encoder and decoder share parameters in this interface)
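tied_rnn_seq2seq runs the encoder and the decoder with the same cell under the same variable scope, so the two share their weights; this is also why dec_emb_size must equal enc_emb_size in the script below. A simplified sketch of the idea, following the TF 1.4 implementation; the full test script follows.

# Simplified sketch of tied_rnn_seq2seq: encoder and decoder reuse the same variables.
import tensorflow as tf
from tensorflow.contrib.legacy_seq2seq import rnn_decoder


def tied_rnn_seq2seq_sketch(encoder_inputs, decoder_inputs, cell, dtype=tf.float32):
    with tf.variable_scope("tied_rnn_seq2seq"):
        # Encode with the shared cell.
        _, enc_state = tf.nn.static_rnn(cell, encoder_inputs, dtype=dtype, scope="shared_rnn")
        # Reuse exactly the same variables for decoding, which requires the
        # encoder and decoder inputs to have the same embedding size.
        tf.get_variable_scope().reuse_variables()
        return rnn_decoder(decoder_inputs, enc_state, cell, scope="shared_rnn")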
# -*- coding: utf-8 -*-
__author__ = "buyizhiyou"
__date__ = "2018-7-30"

import os
import pdb
import re
from collections import Counter

import matplotlib.pyplot as plt
import tensorflow as tf

from seq2seq import tied_rnn_seq2seq
from utils import *

os.environ["CUDA_VISIBLE_DEVICES"] = ""  # hide all GPUs, run on CPU

input_batches = [
    ['Hi What is your name?', 'Nice to meet you!'],
    ['Which programming language do you use?', 'See you later.'],
    ['Where do you live?', 'What is your major?'],
    ['What do you want to drink?', 'What is your favorite beer?']]
target_batches = [
    ['Hi this is Jaemin.', 'Nice to meet you too!'],
    ['I like Python.', 'Bye Bye.'],
    ['I live in Seoul, South Korea.', 'I study industrial engineering.'],
    ['Beer please!', 'Leffe brown!']]

all_input_sentences = []
for input_batch in input_batches:
    all_input_sentences.extend(input_batch)
all_target_sentences = []
for target_batch in target_batches:
    all_target_sentences.extend(target_batch)

# enc_vocab: word2idx, enc_reverse_vocab: idx2word, enc_vocab_size: 26
enc_vocab, enc_reverse_vocab, enc_vocab_size = build_vocab(all_input_sentences)
# dec_vocab: word2idx, dec_reverse_vocab: idx2word, dec_vocab_size: 28
dec_vocab, dec_reverse_vocab, dec_vocab_size = build_vocab(all_target_sentences, is_target=True)

# hyperparameters
n_epoch = 2000
hidden_size = 50
enc_emb_size = 20
dec_emb_size = 20  # must equal enc_emb_size, because encoder and decoder share parameters
enc_sentence_length = 10
dec_sentence_length = 11

enc_inputs = tf.placeholder(tf.int32, shape=[None, enc_sentence_length], name='input_sentences')
sequence_lengths = tf.placeholder(tf.int32, shape=[None], name='sentences_length')
dec_inputs = tf.placeholder(tf.int32, shape=[None, dec_sentence_length + 1], name='output_sentences')

enc_inputs_t = tf.transpose(enc_inputs, perm=[1, 0])
dec_inputs_t = tf.transpose(dec_inputs, perm=[1, 0])

'''
embedding
'''
enc_Wemb = tf.get_variable('enc_word_emb', initializer=tf.random_uniform([enc_vocab_size + 1, enc_emb_size]))
dec_Wemb = tf.get_variable('dec_word_emb', initializer=tf.random_uniform([dec_vocab_size + 2, dec_emb_size]))
enc_emb_inputs = tf.nn.embedding_lookup(enc_Wemb, enc_inputs_t)
dec_emb_inputs = tf.nn.embedding_lookup(dec_Wemb, dec_inputs_t)
# enc_emb_inputs: list (length enc_sent_len) of [batch_size x embedding_size] tensors,
# because `static_rnn` takes a list of inputs
enc_emb_inputs = tf.unstack(enc_emb_inputs)
dec_emb_inputs = tf.unstack(dec_emb_inputs)

cell = tf.nn.rnn_cell.BasicRNNCell(hidden_size)
dec_outputs, state = tied_rnn_seq2seq(enc_emb_inputs, dec_emb_inputs, cell)
dec_outputs = tf.stack(dec_outputs)
logits = tf.layers.dense(dec_outputs, units=dec_vocab_size + 2, activation=tf.nn.relu)  # fully connected output layer
predictions = tf.argmax(logits, axis=2)
predictions = tf.transpose(predictions, [1, 0])

# labels & logits: [dec_sentence_length+1 x batch_size x dec_vocab_size+2]
labels = tf.one_hot(dec_inputs_t, dec_vocab_size + 2)
loss = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(
    labels=labels, logits=logits))

# training_op = tf.train.AdamOptimizer(learning_rate=0.0001).minimize(loss)
training_op = tf.train.RMSPropOptimizer(learning_rate=0.0001).minimize(loss)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    loss_history = []
    for epoch in range(n_epoch):
        all_preds = []
        epoch_loss = 0
        for input_batch, target_batch in zip(input_batches, target_batches):
            input_token_indices = []
            target_token_indices = []
            sentence_lengths = []

            for input_sent in input_batch:
                input_sent, sent_len = sent2idx(input_sent, vocab=enc_vocab, max_sentence_length=enc_sentence_length)
                input_token_indices.append(input_sent)
                sentence_lengths.append(sent_len)

            for target_sent in target_batch:
                target_token_indices.append(
                    sent2idx(target_sent, vocab=dec_vocab, max_sentence_length=dec_sentence_length, is_target=True))

            batch_preds, batch_loss, _ = sess.run(
                [predictions, loss, training_op],
                feed_dict={
                    enc_inputs: input_token_indices,
                    sequence_lengths: sentence_lengths,
                    dec_inputs: target_token_indices
                })
            loss_history.append(batch_loss)
            epoch_loss += batch_loss
            all_preds.append(batch_preds)

        # Logging every 400 epochs
        if epoch % 400 == 0:
            print('Epoch', epoch)
            for input_batch, target_batch, batch_preds in zip(input_batches, target_batches, all_preds):
                for input_sent, target_sent, pred in zip(input_batch, target_batch, batch_preds):
                    print('\t', input_sent)
                    print('\t => ', idx2sent(pred, reverse_vocab=dec_reverse_vocab))
                    print('\tCorrect answer:', target_sent)
            print('\tepoch loss: {:.2f}\n'.format(epoch_loss))

show_loss(loss_history)
Testing embedding_attention_seq2seq
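Unlike the two interfaces above, embedding_attention_seq2seq takes the raw integer symbols and builds the embeddings internally, and its decoder attends over all encoder outputs. With output_projection=None the decoder outputs are already logits over the decoder vocabulary, and feed_previous=True makes the decoder feed its own previous prediction back in (inference-style decoding; with False the decoder is teacher-forced on dec_inputs). A simplified sketch of what the function does, following the TF 1.4 implementation; the full test script follows.

# Simplified sketch of embedding_attention_seq2seq (TF 1.4); the real code supports
# more options (num_heads, output_projection, dtype handling, ...).
import copy

import tensorflow as tf
from tensorflow.contrib.legacy_seq2seq import embedding_attention_decoder


def embedding_attention_seq2seq_sketch(encoder_inputs, decoder_inputs, cell,
                                       num_encoder_symbols, num_decoder_symbols,
                                       embedding_size, feed_previous=False,
                                       dtype=tf.float32):
    # Encoder: embed the integer symbols inside the cell and keep all outputs.
    encoder_cell = tf.contrib.rnn.EmbeddingWrapper(
        copy.deepcopy(cell), embedding_classes=num_encoder_symbols,
        embedding_size=embedding_size)
    encoder_outputs, encoder_state = tf.nn.static_rnn(
        encoder_cell, encoder_inputs, dtype=dtype)

    # Attention memory over every encoder step: [batch_size, enc_len, cell.output_size]
    top_states = [tf.reshape(o, [-1, 1, cell.output_size]) for o in encoder_outputs]
    attention_states = tf.concat(top_states, 1)

    # Decoder: with no output_projection the cell is wrapped so that every output
    # is already a vector of num_decoder_symbols logits.
    decoder_cell = tf.contrib.rnn.OutputProjectionWrapper(cell, num_decoder_symbols)
    return embedding_attention_decoder(
        decoder_inputs, encoder_state, attention_states, decoder_cell,
        num_decoder_symbols, embedding_size,
        output_size=num_decoder_symbols, feed_previous=feed_previous)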
# -*- coding: utf-8 -*-
__author__ = "buyizhiyou"
__date__ = "2018-7-30"

import os
import pdb
import re
from collections import Counter

import matplotlib.pyplot as plt
import tensorflow as tf

from seq2seq import embedding_attention_seq2seq
from utils import *

os.environ["CUDA_VISIBLE_DEVICES"] = ""  # hide all GPUs, run on CPU

input_batches = [
    ['Hi What is your name?', 'Nice to meet you!'],
    ['Which programming language do you use?', 'See you later.'],
    ['Where do you live?', 'What is your major?'],
    ['What do you want to drink?', 'What is your favorite beer?']]
target_batches = [
    ['Hi this is Jaemin.', 'Nice to meet you too!'],
    ['I like Python.', 'Bye Bye.'],
    ['I live in Seoul, South Korea.', 'I study industrial engineering.'],
    ['Beer please!', 'Leffe brown!']]

all_input_sentences = []
for input_batch in input_batches:
    all_input_sentences.extend(input_batch)
all_target_sentences = []
for target_batch in target_batches:
    all_target_sentences.extend(target_batch)

# enc_vocab: word2idx, enc_reverse_vocab: idx2word, enc_vocab_size: 26
enc_vocab, enc_reverse_vocab, enc_vocab_size = build_vocab(all_input_sentences)
# dec_vocab: word2idx, dec_reverse_vocab: idx2word, dec_vocab_size: 28
dec_vocab, dec_reverse_vocab, dec_vocab_size = build_vocab(all_target_sentences, is_target=True)

# hyperparameters
n_epoch = 2000
hidden_size = 50
enc_emb_size = 20
dec_emb_size = 21
enc_sentence_length = 10
dec_sentence_length = 11

enc_inputs = tf.placeholder(tf.int32, shape=[None, enc_sentence_length], name='input_sentences')
sequence_lengths = tf.placeholder(tf.int32, shape=[None], name='sentences_length')
dec_inputs = tf.placeholder(tf.int32, shape=[None, dec_sentence_length + 1], name='output_sentences')

enc_inputs_t = tf.transpose(enc_inputs, perm=[1, 0])
dec_inputs_t = tf.transpose(dec_inputs, perm=[1, 0])
labels = tf.one_hot(dec_inputs_t, dec_vocab_size + 2)
# labels & logits: [dec_sentence_length+1 x batch_size x dec_vocab_size+2]

# enc_inputs_t: list (length enc_sent_len) of [batch_size] int32 tensors,
# because the legacy seq2seq interfaces take list inputs (like `static_rnn`)
enc_inputs_t = tf.unstack(enc_inputs_t)
dec_inputs_t = tf.unstack(dec_inputs_t)

cell = tf.nn.rnn_cell.BasicRNNCell(hidden_size)
dec_outputs, state = embedding_attention_seq2seq(
    encoder_inputs=enc_inputs_t,
    decoder_inputs=dec_inputs_t,
    cell=cell,
    num_encoder_symbols=enc_vocab_size + 1,
    num_decoder_symbols=dec_vocab_size + 2,
    embedding_size=enc_emb_size,
    output_projection=None,
    feed_previous=True
)
logits = tf.stack(dec_outputs)
predictions = tf.argmax(logits, axis=2)
predictions = tf.transpose(predictions, [1, 0])

loss = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(
    labels=labels, logits=logits))
# training_op = tf.train.AdamOptimizer(learning_rate=0.0001).minimize(loss)
training_op = tf.train.RMSPropOptimizer(learning_rate=0.0001).minimize(loss)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    loss_history = []
    for epoch in range(n_epoch):
        all_preds = []
        epoch_loss = 0
        for input_batch, target_batch in zip(input_batches, target_batches):
            input_token_indices = []
            target_token_indices = []
            sentence_lengths = []

            for input_sent in input_batch:
                input_sent, sent_len = sent2idx(input_sent, vocab=enc_vocab, max_sentence_length=enc_sentence_length)
                input_token_indices.append(input_sent)
                sentence_lengths.append(sent_len)

            for target_sent in target_batch:
                target_token_indices.append(
                    sent2idx(target_sent, vocab=dec_vocab, max_sentence_length=dec_sentence_length, is_target=True))

            batch_preds, batch_loss, _ = sess.run(
                [predictions, loss, training_op],
                feed_dict={
                    enc_inputs: input_token_indices,
                    sequence_lengths: sentence_lengths,
                    dec_inputs: target_token_indices
                })
            loss_history.append(batch_loss)
            epoch_loss += batch_loss
            all_preds.append(batch_preds)

        # Logging every 400 epochs
        if epoch % 400 == 0:
            print('Epoch', epoch)
            for input_batch, target_batch, batch_preds in zip(input_batches, target_batches, all_preds):
                for input_sent, target_sent, pred in zip(input_batch, target_batch, batch_preds):
                    print('\t', input_sent)
                    print('\t => ', idx2sent(pred, reverse_vocab=dec_reverse_vocab))
                    print('\tCorrect answer:', target_sent)
            print('\tepoch loss: {:.2f}\n'.format(epoch_loss))

show_loss(loss_history)
Testing embedding_rnn_seq2seq
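embedding_rnn_seq2seq is the same idea without attention: it also embeds the integer symbols internally, but the decoder only sees the encoder's final state. A simplified sketch following the TF 1.4 implementation; the full test script follows.

# Simplified sketch of embedding_rnn_seq2seq (TF 1.4); the real code also handles
# output_projection and feed_previous given as a tensor.
import copy

import tensorflow as tf
from tensorflow.contrib.legacy_seq2seq import embedding_rnn_decoder


def embedding_rnn_seq2seq_sketch(encoder_inputs, decoder_inputs, cell,
                                 num_encoder_symbols, num_decoder_symbols,
                                 embedding_size, feed_previous=False,
                                 dtype=tf.float32):
    # Encoder: embed the integer symbols inside the cell, keep only the final state.
    encoder_cell = tf.contrib.rnn.EmbeddingWrapper(
        copy.deepcopy(cell), embedding_classes=num_encoder_symbols,
        embedding_size=embedding_size)
    _, encoder_state = tf.nn.static_rnn(encoder_cell, encoder_inputs, dtype=dtype)

    # Decoder: embeds the decoder symbols; the wrapped cell projects each output
    # to num_decoder_symbols logits (the output_projection=None case).
    decoder_cell = tf.contrib.rnn.OutputProjectionWrapper(cell, num_decoder_symbols)
    return embedding_rnn_decoder(
        decoder_inputs, encoder_state, decoder_cell, num_decoder_symbols,
        embedding_size, feed_previous=feed_previous)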
# -*- coding: utf-8 -*-
__author__ = "buyizhiyou"
__date__ = "2018-7-30"

'''
Test the embedding_rnn_seq2seq function
'''

import os
import pdb
import re
from collections import Counter

import matplotlib.pyplot as plt
import tensorflow as tf

from seq2seq import embedding_rnn_seq2seq
from utils import *

os.environ["CUDA_VISIBLE_DEVICES"] = ""  # hide all GPUs, run on CPU

input_batches = [
    ['Hi What is your name?', 'Nice to meet you!'],
    ['Which programming language do you use?', 'See you later.'],
    ['Where do you live?', 'What is your major?'],
    ['What do you want to drink?', 'What is your favorite beer?']]
target_batches = [
    ['Hi this is Jaemin.', 'Nice to meet you too!'],
    ['I like Python.', 'Bye Bye.'],
    ['I live in Seoul, South Korea.', 'I study industrial engineering.'],
    ['Beer please!', 'Leffe brown!']]

all_input_sentences = []
for input_batch in input_batches:
    all_input_sentences.extend(input_batch)
all_target_sentences = []
for target_batch in target_batches:
    all_target_sentences.extend(target_batch)

# enc_vocab: word2idx, enc_reverse_vocab: idx2word, enc_vocab_size: 26
enc_vocab, enc_reverse_vocab, enc_vocab_size = build_vocab(all_input_sentences)
# dec_vocab: word2idx, dec_reverse_vocab: idx2word, dec_vocab_size: 28
dec_vocab, dec_reverse_vocab, dec_vocab_size = build_vocab(all_target_sentences, is_target=True)

# hyperparameters
n_epoch = 2000
hidden_size = 50
enc_emb_size = 20
dec_emb_size = 21
enc_sentence_length = 10
dec_sentence_length = 11

enc_inputs = tf.placeholder(tf.int32, shape=[None, enc_sentence_length], name='input_sentences')
sequence_lengths = tf.placeholder(tf.int32, shape=[None], name='sentences_length')
dec_inputs = tf.placeholder(tf.int32, shape=[None, dec_sentence_length + 1], name='output_sentences')

enc_inputs_t = tf.transpose(enc_inputs, perm=[1, 0])
dec_inputs_t = tf.transpose(dec_inputs, perm=[1, 0])
labels = tf.one_hot(dec_inputs_t, dec_vocab_size + 2)
# labels & logits: [dec_sentence_length+1 x batch_size x dec_vocab_size+2]

# enc_inputs_t: list (length enc_sent_len) of [batch_size] int32 tensors,
# because the legacy seq2seq interfaces take list inputs (like `static_rnn`)
enc_inputs_t = tf.unstack(enc_inputs_t)
dec_inputs_t = tf.unstack(dec_inputs_t)

cell = tf.nn.rnn_cell.BasicRNNCell(hidden_size)
dec_outputs, state = embedding_rnn_seq2seq(
    encoder_inputs=enc_inputs_t,
    decoder_inputs=dec_inputs_t,
    cell=cell,
    num_encoder_symbols=enc_vocab_size + 1,
    num_decoder_symbols=dec_vocab_size + 2,
    embedding_size=enc_emb_size,
    output_projection=None,
    feed_previous=True
)
logits = tf.stack(dec_outputs)
predictions = tf.argmax(logits, axis=2)
predictions = tf.transpose(predictions, [1, 0])

loss = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(
    labels=labels, logits=logits))
# training_op = tf.train.AdamOptimizer(learning_rate=0.0001).minimize(loss)
training_op = tf.train.RMSPropOptimizer(learning_rate=0.0001).minimize(loss)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    loss_history = []
    for epoch in range(n_epoch):
        all_preds = []
        epoch_loss = 0
        for input_batch, target_batch in zip(input_batches, target_batches):
            input_token_indices = []
            target_token_indices = []
            sentence_lengths = []

            for input_sent in input_batch:
                input_sent, sent_len = sent2idx(input_sent, vocab=enc_vocab, max_sentence_length=enc_sentence_length)
                input_token_indices.append(input_sent)
                sentence_lengths.append(sent_len)

            for target_sent in target_batch:
                target_token_indices.append(
                    sent2idx(target_sent, vocab=dec_vocab, max_sentence_length=dec_sentence_length, is_target=True))

            batch_preds, batch_loss, _ = sess.run(
                [predictions, loss, training_op],
                feed_dict={
                    enc_inputs: input_token_indices,
                    sequence_lengths: sentence_lengths,
                    dec_inputs: target_token_indices
                })
            loss_history.append(batch_loss)
            epoch_loss += batch_loss
            all_preds.append(batch_preds)

        # Logging every 400 epochs
        if epoch % 400 == 0:
            print('Epoch', epoch)
            for input_batch, target_batch, batch_preds in zip(input_batches, target_batches, all_preds):
                for input_sent, target_sent, pred in zip(input_batch, target_batch, batch_preds):
                    print('\t', input_sent)
                    print('\t => ', idx2sent(pred, reverse_vocab=dec_reverse_vocab))
                    print('\tCorrect answer:', target_sent)
            print('\tepoch loss: {:.2f}\n'.format(epoch_loss))

show_loss(loss_history)