TFLearn: saving a model and retraining it
Source: https://stackoverflow.com/questions/41616292/how-to-load-and-retrain-tflean-model
This creates the graph, trains the model, and saves it (imports added for completeness; MAX_DOCUMENT_LENGTH, n_words, data, clas and model_path come from the asker's surrounding code):

import tensorflow as tf
import tflearn
from tflearn.layers.core import input_data, dropout, fully_connected
from tflearn.layers.conv import conv_1d, global_max_pool
from tflearn.layers.merge_ops import merge
from tflearn.layers.estimator import regression

graph1 = tf.Graph()
with graph1.as_default():
    network = input_data(shape=[None, MAX_DOCUMENT_LENGTH])
    network = tflearn.embedding(network, input_dim=n_words, output_dim=128)
    branch1 = conv_1d(network, 128, 3, padding='valid', activation='relu', regularizer="L2")
    branch2 = conv_1d(network, 128, 4, padding='valid', activation='relu', regularizer="L2")
    branch3 = conv_1d(network, 128, 5, padding='valid', activation='relu', regularizer="L2")
    network = merge([branch1, branch2, branch3], mode='concat', axis=1)
    network = tf.expand_dims(network, 2)
    network = global_max_pool(network)
    network = dropout(network, 0.5)
    network = fully_connected(network, 2, activation='softmax')
    network = regression(network, optimizer='adam', learning_rate=0.001,
                         loss='categorical_crossentropy', name='target')
    model = tflearn.DNN(network, tensorboard_verbose=0)
    clf, acc, roc_auc, fpr, tpr = classify_DNN(data, clas, model)
    clf.save(model_path)
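classify_DNN is the asker's own helper and its definition is not shown in the answer. As a rough illustration only, here is a minimal sketch of what such a helper might do, assuming one-hot labels and sklearn for the ROC metrics (the whole function body is an assumption, not the original code):

import numpy as np
from sklearn.metrics import roc_curve, auc

def classify_DNN(data, clas, model):
    # Hypothetical implementation: train the tflearn.DNN, then score it
    # on the training data and compute ROC statistics for the positive class.
    model.fit(data, clas, n_epoch=10, show_metric=True)
    probs = np.array(model.predict(data))
    acc = np.mean(probs.argmax(axis=1) == np.array(clas).argmax(axis=1))
    fpr, tpr, _ = roc_curve(np.array(clas)[:, 1], probs[:, 1])
    roc_auc = auc(fpr, tpr)
    return model, acc, roc_auc, fpr, tpr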
To reload it for retraining or prediction:

MODEL = None
with tf.Graph().as_default():
    # Building deep neural network
    network = input_data(shape=[None, MAX_DOCUMENT_LENGTH])
    network = tflearn.embedding(network, input_dim=n_words, output_dim=128)
    branch1 = conv_1d(network, 128, 3, padding='valid', activation='relu', regularizer="L2")
    branch2 = conv_1d(network, 128, 4, padding='valid', activation='relu', regularizer="L2")
    branch3 = conv_1d(network, 128, 5, padding='valid', activation='relu', regularizer="L2")
    network = merge([branch1, branch2, branch3], mode='concat', axis=1)
    network = tf.expand_dims(network, 2)
    network = global_max_pool(network)
    network = dropout(network, 0.5)
    network = fully_connected(network, 2, activation='softmax')
    network = regression(network, optimizer='adam', learning_rate=0.001,
                         loss='categorical_crossentropy', name='target')
    new_model = tflearn.DNN(network, tensorboard_verbose=3)
    new_model.load(model_path)
    MODEL = new_model
Use MODEL for prediction or retraining. The first line (MODEL = None) and the with block are the important parts: the identical architecture has to be rebuilt inside a fresh graph before new_model.load() can restore the saved weights.
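A short usage sketch (X_new and Y_new are hypothetical, already-preprocessed inputs and one-hot labels, not part of the original answer):

# X_new / Y_new are assumed inputs, padded to MAX_DOCUMENT_LENGTH like the training data.
predictions = MODEL.predict(X_new)  # softmax probabilities, shape (n_samples, 2)

MODEL.fit(X_new, Y_new, n_epoch=5, show_metric=True)  # resume training
MODEL.save(model_path)  # persist the updated weights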
Official example:
""" An example showing how to save/restore models and retrieve weights. """ from __future__ import absolute_import, division, print_function import tflearn import tflearn.datasets.mnist as mnist # MNIST Data
X, Y, testX, testY = mnist.load_data(one_hot=True) # Model
input_layer = tflearn.input_data(shape=[None, 784], name='input')
dense1 = tflearn.fully_connected(input_layer, 128, name='dense1')
dense2 = tflearn.fully_connected(dense1, 256, name='dense2')
softmax = tflearn.fully_connected(dense2, 10, activation='softmax')
regression = tflearn.regression(softmax, optimizer='adam',
learning_rate=0.001,
loss='categorical_crossentropy') # Define classifier, with model checkpoint (autosave)
model = tflearn.DNN(regression, checkpoint_path='model.tfl.ckpt') # Train model, with model checkpoint every epoch and every 200 training steps.
model.fit(X, Y, n_epoch=1,
validation_set=(testX, testY),
show_metric=True,
snapshot_epoch=True, # Snapshot (save & evaluate) model every epoch.
snapshot_step=500, # Snapshot (save & evalaute) model every 500 steps.
run_id='model_and_weights') # ---------------------
# Save and load a model
# --------------------- # Manually save model
model.save("model.tfl") # Load a model
model.load("model.tfl") # Or Load a model from auto-generated checkpoint
# >> model.load("model.tfl.ckpt-500") # Resume training
model.fit(X, Y, n_epoch=1,
validation_set=(testX, testY),
show_metric=True,
snapshot_epoch=True,
run_id='model_and_weights') # ------------------
# Retrieving weights
# ------------------ # Retrieve a layer weights, by layer name:
dense1_vars = tflearn.variables.get_layer_variables_by_name('dense1')
# Get a variable's value, using model `get_weights` method:
print("Dense1 layer weights:")
print(model.get_weights(dense1_vars[0]))
# Or using generic tflearn function:
print("Dense1 layer biases:")
with model.session.as_default():
print(tflearn.variables.get_value(dense1_vars[1])) # It is also possible to retrieve a layer weights through its attributes `W`
# and `b` (if available).
# Get variable's value, using model `get_weights` method:
print("Dense2 layer weights:")
print(model.get_weights(dense2.W))
# Or using generic tflearn function:
print("Dense2 layer biases:")
with model.session.as_default():
print(tflearn.variables.get_value(dense2.b))
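To sanity-check that loading actually restored the parameters, compare a layer's weights across a save/load round trip (a small sketch, not part of the official example; dense2 and model are the objects defined above):

import numpy as np

w_before = model.get_weights(dense2.W)
model.save("model.tfl")
model.load("model.tfl")
w_after = model.get_weights(dense2.W)
assert np.allclose(w_before, w_after)  # weights survive the round trip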