TensorFlow tutorial
The code examples are from https://github.com/aymericdamien/TensorFlow-Examples
- TensorFlow first defines a computation graph; the actual computation only happens when the graph is run.
- Before calling run you need to create a session.
- Constants are created with tf.constant, e.g. a = tf.constant(2)
- Values fed in at run time use tf.placeholder and must specify a dtype, e.g. a = tf.placeholder(tf.int16) (see the short sketch below)
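A minimal sketch combining the two kinds of input (tf.int32 is used here so the constant and the placeholder share a dtype; the full examples below use tf.int16 placeholders):
import tensorflow as tf

a = tf.constant(2)            # constant: its value is baked into the graph
b = tf.placeholder(tf.int32)  # placeholder: its value is fed in when the graph runs

with tf.Session() as sess:    # nothing is computed until the session runs the graph
    print(sess.run(a + b, feed_dict={b: 3}))  # ==> 5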
Matrix multiplication
matrix1 = tf.constant([[3., 3.]])      # 1x2 matrix
matrix2 = tf.constant([[2.], [2.]])    # 2x1 matrix
product = tf.matmul(matrix1, matrix2)  # matrix product, a 1x1 matrix
with tf.Session() as sess:
    result = sess.run(product)  # result is a numpy ndarray
    print(result)
# ==> [[ 12.]]   (3*2 + 3*2 = 12)
'''
Basic Operations example using TensorFlow library.
Author: Aymeric Damien
Project: https://github.com/aymericdamien/TensorFlow-Examples/
'''
from __future__ import print_function
import tensorflow as tf
# Basic constant operations
# The value returned by the constructor represents the output
# of the Constant op.
a = tf.constant(2)
b = tf.constant(3)
# Launch the default graph.
with tf.Session() as sess:
print("a=2, b=3")
print("Addition with constants: %i" % sess.run(a+b))
print("Multiplication with constants: %i" % sess.run(a*b))
# Basic Operations with variable as graph input
# The value returned by the constructor represents the output
# of the Variable op. (define as input when running session)
# tf Graph input
a = tf.placeholder(tf.int16)
b = tf.placeholder(tf.int16)
# Define some operations
add = tf.add(a, b)
mul = tf.multiply(a, b)
# Launch the default graph.
with tf.Session() as sess:
    # Run every operation with variable input
    print("Addition with variables: %i" % sess.run(add, feed_dict={a: 2, b: 3}))
    print("Multiplication with variables: %i" % sess.run(mul, feed_dict={a: 2, b: 3}))
# ----------------
# More in details:
# Matrix Multiplication from TensorFlow official tutorial
# Create a Constant op that produces a 1x2 matrix. The op is
# added as a node to the default graph.
#
# The value returned by the constructor represents the output
# of the Constant op.
matrix1 = tf.constant([[3., 3.]])
# Create another Constant that produces a 2x1 matrix.
matrix2 = tf.constant([[2.],[2.]])
# Create a Matmul op that takes 'matrix1' and 'matrix2' as inputs.
# The returned value, 'product', represents the result of the matrix
# multiplication.
product = tf.matmul(matrix1, matrix2)
# To run the matmul op we call the session 'run()' method, passing 'product'
# which represents the output of the matmul op. This indicates to the call
# that we want to get the output of the matmul op back.
#
# All inputs needed by the op are run automatically by the session. They
# typically are run in parallel.
#
# The call 'run(product)' thus causes the execution of three ops in the
# graph: the two constants and matmul.
#
# The output of the op is returned in 'result' as a numpy `ndarray` object.
with tf.Session() as sess:
    result = sess.run(product)
    print(result)
# ==> [[ 12.]]
Eager API
For a detailed explanation see https://www.zhihu.com/question/67471378
As noted above, TensorFlow normally defines a computation graph first and only performs the actual computation on session.run.
TensorFlow's Eager API lets TF (short for TensorFlow) functions behave like ordinary Python functions: calling them returns a result immediately, which makes debugging much easier.
The downside is that it is incompatible with some earlier code; for example, it cannot be mixed with the graph-based example code in Basic1 above.
- tfe.enable_eager_execution() turns on eager mode and must appear at the very beginning of the program.
'''
Basic introduction to TensorFlow's Eager API.
Author: Aymeric Damien
Project: https://github.com/aymericdamien/TensorFlow-Examples/
What is Eager API?
" Eager execution is an imperative, define-by-run interface where operations are
executed immediately as they are called from Python. This makes it easier to
get started with TensorFlow, and can make research and development more
intuitive. A vast majority of the TensorFlow API remains the same whether eager
execution is enabled or not. As a result, the exact same code that constructs
TensorFlow graphs (e.g. using the layers API) can be executed imperatively
by using eager execution. Conversely, most models written with Eager enabled
can be converted to a graph that can be further optimized and/or extracted
for deployment in production without changing code. " - Rajat Monga
'''
from __future__ import absolute_import, division, print_function
import numpy as np
import tensorflow as tf
import tensorflow.contrib.eager as tfe
# Set Eager API
print("Setting Eager mode...")
tfe.enable_eager_execution()
# Define constant tensors
print("Define constant tensors")
a = tf.constant(2)
print("a = %i" % a)
b = tf.constant(3)
print("b = %i" % b)
# Run the operation without the need for tf.Session
print("Running operations, without tf.Session")
c = a + b
print("a + b = %i" % c)
d = a * b
print("a * b = %i" % d)
# Full compatibility with Numpy
print("Mixing operations with Tensors and Numpy Arrays")
# Define constant tensors
a = tf.constant([[2., 1.],
                 [1., 0.]], dtype=tf.float32)
print("Tensor:\n a = %s" % a)
b = np.array([[3., 0.],
              [5., 1.]], dtype=np.float32)
print("NumpyArray:\n b = %s" % b)
# Run the operation without the need for tf.Session
print("Running operations, without tf.Session")
c = a + b
print("a + b = %s" % c)
d = tf.matmul(a, b)
print("a * b = %s" % d)
print("Iterate through Tensor 'a':")
for i in range(a.shape[0]):
    for j in range(a.shape[1]):
        print(a[i][j])
Convolutional neural network
Before reading this part you should have some familiarity with convolutional neural networks; see https://www.cnblogs.com/sdu20112013/p/10149529.html
Network parameters
- Input layer: the dimensionality of a sample X; a 28*28 image gives 784 features
- Output of the fully connected layer: the number of image classes, the digits 0 through 9, i.e. 10
- dropout = 0.25 # Dropout, probability to drop a unit — randomly dropping some units' outputs helps prevent overfitting
Training parameters
- learning rate
- batch_size: the number of samples used to estimate each gradient step
- num_steps: the number of training steps; each step consumes one batch, so this is not the number of epochs (see the quick conversion below)
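For a rough sense of scale, here is how steps convert to epochs (the 55,000-image size of the standard MNIST training split is an assumption; adjust it for a different split):
num_steps = 2000
batch_size = 128
num_train_samples = 55000   # assumed size of the standard MNIST training split
epochs = num_steps * batch_size / num_train_samples
print(epochs)  # ~4.65, i.e. the model sees the training set roughly 4-5 times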
Building a CNN
- Convolution and pooling layers perform the feature extraction.
- Convolution layer: 32 filters with a 5*5 kernel and ReLU activation
- Pooling layer: 2*2 pool size with a stride of 2
# Convolution Layer with 32 filters and a kernel size of 5
conv1 = tf.layers.conv2d(x, 32, 5, activation=tf.nn.relu)
# Max Pooling (down-sampling) with strides of 2 and kernel size of 2
conv1 = tf.layers.max_pooling2d(conv1, 2, 2)
- A second convolution and pooling block extracts higher-level features
# Convolution Layer with 64 filters and a kernel size of 3
conv2 = tf.layers.conv2d(conv1, 64, 3, activation=tf.nn.relu)
# Max Pooling (down-sampling) with strides of 2 and kernel size of 2
conv2 = tf.layers.max_pooling2d(conv2, 2, 2)
- Fully connected layers perform the classification
# Flatten the data to a 1-D vector for the fully connected layer
# A fully connected layer expects a 2-D [batch, features] matrix; flatten squashes the stacked feature maps into one long vector per sample (the resulting feature count is worked out after this snippet)
fc1 = tf.contrib.layers.flatten(conv2)
# Fully connected layer (in tf contrib folder for now)
# The fully connected layer has 1024 units
fc1 = tf.layers.dense(fc1, 1024)
# Apply Dropout (if is_training is False, dropout is not applied)
# During training, dropout randomly disables a fraction of the units to reduce overfitting; it is applied only at training time
fc1 = tf.layers.dropout(fc1, rate=dropout, training=is_training)
# Output layer, class prediction
# Map the 1024 fully connected features to one score (logit) per class
out = tf.layers.dense(fc1, n_classes)
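For reference, here is how the flattened feature count works out, assuming the default 'valid' padding of tf.layers.conv2d (so each convolution shrinks the spatial size); this is the shape that the print("fc1 shape", fc1.shape) line in the full listing reports:
h = w = 28
h, w = h - 5 + 1, w - 5 + 1   # conv1, 5x5 kernel, 'valid' padding -> 24x24
h, w = h // 2, w // 2         # max pool, 2x2 window, stride 2     -> 12x12
h, w = h - 3 + 1, w - 3 + 1   # conv2, 3x3 kernel, 'valid' padding -> 10x10
h, w = h // 2, w // 2         # max pool, 2x2 window, stride 2     -> 5x5
print(h * w * 64)             # 1600 features per sample feed the dense layer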
Now that the model is built, we need to give it the loss-related information it needs to compute gradients and update the weights between neurons.
TensorFlow's Estimator API asks us to define a model function that specifies:
- the loss function
- the optimization algorithm
- the model's accuracy metric
- TF Estimators require the model function to return an EstimatorSpec that specifies the different ops for training, evaluation, and prediction (the accuracy metric and the EstimatorSpec are shown in the sketch after the next snippet).
logits_train = conv_net(features, num_classes, dropout, reuse=False, is_training=True)
loss_op = tf.reduce_mean(tf.nn.sparse_softmax_cross_entropy_with_logits(logits=logits_train, labels=tf.cast(labels, dtype=tf.int32)))  # loss function
optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate)  # optimization algorithm; mini-batch SGD would also work
train_op = optimizer.minimize(loss_op, global_step=tf.train.get_global_step())  # training objective: minimize the loss
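Completing the list above, the accuracy metric and the EstimatorSpec returned by the model function look like this (taken from the full listing further down; pred_classes, the argmax over the logits, is also defined there):
acc_op = tf.metrics.accuracy(labels=labels, predictions=pred_classes)  # model accuracy
estim_specs = tf.estimator.EstimatorSpec(
    mode=mode,
    predictions=pred_classes,
    loss=loss_op,
    train_op=train_op,
    eval_metric_ops={'accuracy': acc_op})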
At this point the model is defined; what remains is to convert the data into a suitable format, feed it to the model, and train.
# Define the input function for training
input_fn = tf.estimator.inputs.numpy_input_fn(
    x={'images': mnist.train.images}, y=mnist.train.labels,
    batch_size=batch_size, num_epochs=None, shuffle=True)
# Train the Model
model.train(input_fn, steps=num_steps)
# Evaluate the Model
# Define the input function for evaluating
input_fn = tf.estimator.inputs.numpy_input_fn(
    x={'images': mnist.test.images}, y=mnist.test.labels,
    batch_size=batch_size, shuffle=False)
# Use the Estimator 'evaluate' method
e = model.evaluate(input_fn)
""" Convolutional Neural Network.
Build and train a convolutional neural network with TensorFlow.
This example is using the MNIST database of handwritten digits
(http://yann.lecun.com/exdb/mnist/)
This example is using TensorFlow layers API, see 'convolutional_network_raw'
example for a raw implementation with variables.
Author: Aymeric Damien
Project: https://github.com/aymericdamien/TensorFlow-Examples/
"""
from __future__ import division, print_function, absolute_import
# Import MNIST data
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets("/tmp/data/", one_hot=False)
import tensorflow as tf
# Training Parameters
learning_rate = 0.001
num_steps = 2000
batch_size = 128
# Network Parameters
num_input = 784 # MNIST data input (img shape: 28*28)
num_classes = 10 # MNIST total classes (0-9 digits)
dropout = 0.25 # Dropout, probability to drop a unit
# Create the neural network
def conv_net(x_dict, n_classes, dropout, reuse, is_training):
    # Define a scope for reusing the variables
    with tf.variable_scope('ConvNet', reuse=reuse):
        # TF Estimator input is a dict, in case of multiple inputs
        x = x_dict['images']
        # MNIST data input is a 1-D vector of 784 features (28*28 pixels)
        # Reshape to match picture format [Height x Width x Channel]
        # Tensor input becomes 4-D: [Batch Size, Height, Width, Channel]
        x = tf.reshape(x, shape=[-1, 28, 28, 1])
        # Convolution Layer with 32 filters and a kernel size of 5
        conv1 = tf.layers.conv2d(x, 32, 5, activation=tf.nn.relu)
        # Max Pooling (down-sampling) with strides of 2 and kernel size of 2
        conv1 = tf.layers.max_pooling2d(conv1, 2, 2)
        # Convolution Layer with 64 filters and a kernel size of 3
        conv2 = tf.layers.conv2d(conv1, 64, 3, activation=tf.nn.relu)
        # Max Pooling (down-sampling) with strides of 2 and kernel size of 2
        conv2 = tf.layers.max_pooling2d(conv2, 2, 2)
        # Flatten the data to a 1-D vector for the fully connected layer
        fc1 = tf.contrib.layers.flatten(conv2)
        print("fc1 shape", fc1.shape)
        # Fully connected layer (in tf contrib folder for now)
        fc1 = tf.layers.dense(fc1, 1024)
        # Apply Dropout (if is_training is False, dropout is not applied)
        fc1 = tf.layers.dropout(fc1, rate=dropout, training=is_training)
        # Output layer, class prediction
        out = tf.layers.dense(fc1, n_classes)
    return out
# Define the model function (following TF Estimator Template)
def model_fn(features, labels, mode):
    # Build the neural network
    # Because Dropout has different behavior at training and prediction time, we
    # need to create 2 distinct computation graphs that still share the same weights.
    logits_train = conv_net(features, num_classes, dropout, reuse=False,
                            is_training=True)
    print("logits_train.shape", logits_train.shape)
    logits_test = conv_net(features, num_classes, dropout, reuse=True,
                           is_training=False)
    # Predictions
    pred_classes = tf.argmax(logits_test, axis=1)
    pred_probas = tf.nn.softmax(logits_test)
    # If prediction mode, early return
    if mode == tf.estimator.ModeKeys.PREDICT:
        return tf.estimator.EstimatorSpec(mode, predictions=pred_classes)
    # Define loss and optimizer
    loss_op = tf.reduce_mean(tf.nn.sparse_softmax_cross_entropy_with_logits(
        logits=logits_train, labels=tf.cast(labels, dtype=tf.int32)))
    optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate)
    train_op = optimizer.minimize(loss_op,
                                  global_step=tf.train.get_global_step())
    # Evaluate the accuracy of the model
    acc_op = tf.metrics.accuracy(labels=labels, predictions=pred_classes)
    # TF Estimators require the model function to return an EstimatorSpec that
    # specifies the different ops for training, evaluating, ...
    estim_specs = tf.estimator.EstimatorSpec(
        mode=mode,
        predictions=pred_classes,
        loss=loss_op,
        train_op=train_op,
        eval_metric_ops={'accuracy': acc_op})
    return estim_specs
# Build the Estimator
model = tf.estimator.Estimator(model_fn)
# Define the input function for training
input_fn = tf.estimator.inputs.numpy_input_fn(
    x={'images': mnist.train.images}, y=mnist.train.labels,
    batch_size=batch_size, num_epochs=None, shuffle=True)
# Train the Model
model.train(input_fn, steps=num_steps)
# Evaluate the Model
# Define the input function for evaluating
input_fn = tf.estimator.inputs.numpy_input_fn(
    x={'images': mnist.test.images}, y=mnist.test.labels,
    batch_size=batch_size, shuffle=False)
# Use the Estimator 'evaluate' method
e = model.evaluate(input_fn)
print("Testing Accuracy:", e['accuracy'])
Incidentally, here is how I think about neural networks. "Neuron" sounds mysterious, but each neuron is really just y = ax + b, where a and x are matrices. Neurons are connected by weights: the output of one layer becomes the input of the next, scaled by a weight w. Even though each neuron performs only a linear transformation (with non-linearities such as ReLU or sigmoid added on top), stacking many layers lets the combination approximate non-linear functions. Backpropagation then updates the weights w across the whole network so as to minimize the error. In essence, a neural network is a computation that solves for the weight matrices w, as the toy sketch below illustrates.
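A minimal NumPy sketch of that picture (the layer sizes and the choice of ReLU are arbitrary illustrations and are not tied to the TensorFlow examples above):
import numpy as np

np.random.seed(0)
x = np.random.randn(1, 784)                    # one flattened 28*28 input

W1, b1 = np.random.randn(784, 128) * 0.01, np.zeros(128)
W2, b2 = np.random.randn(128, 10) * 0.01, np.zeros(10)

h = np.maximum(0, x.dot(W1) + b1)              # linear map followed by ReLU
logits = h.dot(W2) + b2                        # linear map to 10 class scores
print(logits.shape)                            # (1, 10)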
效果图: 核心代码: /// <summary> /// 等比例缩放图片 /// </summary> /// <param name="bitmap" ...