2.5 Batch Normalization
Batch Normalization (BN) addresses a problem in deep networks: with many layers, the signal cannot propagate forward effectively, because each layer's outputs have a different mean and variance, so the distribution of the data shifts from layer to layer.
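Concretely, BN standardizes each feature over the batch and then applies a learned scale and shift. Here is a minimal NumPy sketch of that transform (the names gamma, beta, and eps are illustrative, not taken from the tutorial code below):

import numpy as np

def batch_norm(x, gamma, beta, eps=0.001):
    # x has shape (batch, features); normalize each feature over the batch axis
    mean = x.mean(axis=0)
    var = x.var(axis=0)
    x_hat = (x - mean) / np.sqrt(var + eps)  # zero mean, unit variance per feature
    return gamma * x_hat + beta              # learned rescaling and reshifting

x = 5.0 * np.random.randn(8, 3) + 2.0        # toy batch with shifted, spread-out statistics
y = batch_norm(x, gamma=np.ones(3), beta=np.zeros(3))
print(y.mean(axis=0), y.var(axis=0))         # approximately 0 and approximately 1 per feature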
If the pre-activation value X*W is already large, tanh maps it to an output saturated at 1, and passing it through the next layer's neurons barely changes the result, because the value is already in the flat region of the activation (the activation-saturation problem). In this situation we can first apply batch normalization to the pre-activation values, as the quick check below suggests.
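A quick way to see the saturation effect (a standalone NumPy check, not part of the tutorial code):

import numpy as np

x = np.array([0.5, 2.0, 8.0, 20.0])
y = np.tanh(x)
grad = 1.0 - y ** 2   # derivative of tanh
print(y)              # [0.46 0.96 1.00 1.00] -- saturates near 1
print(grad)           # gradient collapses toward 0 for large inputs, so learning stalls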
The layer-activation histograms plotted by the code below show that without BN, the values at every layer quickly collapse to 0; in other words, nearly all the neurons die. With BN before the ReLU, each layer keeps a reasonably well-spread distribution of values, and most neurons stay alive.
Comparing the cost curves under ReLU, the no-BN curve is effectively missing: it vanishes almost immediately.
If the activation is changed to tanh, the error curves change accordingly. The full comparison script:
"""
Build two networks.
1. Without batch normalization
2. With batch normalization Run tests on these two networks.
""" # Batch Normalization import numpy as np
import tensorflow as tf
import matplotlib.pyplot as plt ACTIVATION = tf.nn.relu #tf.nn.tanh
N_LAYERS = 7
N_HIDDEN_UNITS = 30 def fix_seed(seed=1):
# reproducible
np.random.seed(seed)
tf.set_random_seed(seed) def plot_his(inputs, inputs_norm):
# plot histogram for the inputs of every layer
for j, all_inputs in enumerate([inputs, inputs_norm]):
for i, input in enumerate(all_inputs):
plt.subplot(2, len(all_inputs), j*len(all_inputs)+(i+1))
plt.cla()
if i == 0:
the_range = (-7, 10)
else:
the_range = (-1, 1)
plt.hist(input.ravel(), bins=15, range=the_range, color='#FF5733')
plt.yticks(())
if j == 1:
plt.xticks(the_range)
else:
plt.xticks(())
ax = plt.gca()
ax.spines['right'].set_color('none')
ax.spines['top'].set_color('none')
plt.title("%s normalizing" % ("Without" if j == 0 else "With"))
plt.draw()
plt.pause(0.01) def built_net(xs, ys, norm):
def add_layer(inputs, in_size, out_size, activation_function=None, norm=False):
# weights and biases (bad initialization for this case)
Weights = tf.Variable(tf.random_normal([in_size, out_size], mean=0., stddev=1.))
biases = tf.Variable(tf.zeros([1, out_size]) + 0.1) # fully connected product
Wx_plus_b = tf.matmul(inputs, Weights) + biases # normalize fully connected product
if norm:
# Batch Normalize
fc_mean, fc_var = tf.nn.moments(
Wx_plus_b,
axes=[0], # the dimension you wanna normalize, here [0] for batch
# for image, you wanna do [0, 1, 2] for [batch, height, width] but not channel
)
scale = tf.Variable(tf.ones([out_size]))
shift = tf.Variable(tf.zeros([out_size]))
epsilon = 0.001 # apply moving average for mean and var when train on batch
ema = tf.train.ExponentialMovingAverage(decay=0.5)
def mean_var_with_update():
ema_apply_op = ema.apply([fc_mean, fc_var])
with tf.control_dependencies([ema_apply_op]):
return tf.identity(fc_mean), tf.identity(fc_var)
mean, var = mean_var_with_update()
#all parmeter batch normalization
Wx_plus_b = tf.nn.batch_normalization(Wx_plus_b, mean, var, shift, scale, epsilon)
# similar with this two steps:
# Wx_plus_b = (Wx_plus_b - fc_mean) / tf.sqrt(fc_var + 0.001)
# Wx_plus_b = Wx_plus_b * scale + shift # activation
if activation_function is None:
outputs = Wx_plus_b
else:
outputs = activation_function(Wx_plus_b) return outputs fix_seed(1) if norm:
# BN for the first input
fc_mean, fc_var = tf.nn.moments(
xs,
axes=[0],
)
scale = tf.Variable(tf.ones([1]))
shift = tf.Variable(tf.zeros([1]))
epsilon = 0.001
# apply moving average for mean and var when train on batch
ema = tf.train.ExponentialMovingAverage(decay=0.5)
def mean_var_with_update():
ema_apply_op = ema.apply([fc_mean, fc_var])
with tf.control_dependencies([ema_apply_op]):
return tf.identity(fc_mean), tf.identity(fc_var)
mean, var = mean_var_with_update()
xs = tf.nn.batch_normalization(xs, mean, var, shift, scale, epsilon) # record inputs for every layer
layers_inputs = [xs] # build hidden layers
for l_n in range(N_LAYERS):
layer_input = layers_inputs[l_n]
in_size = layers_inputs[l_n].get_shape()[1].value output = add_layer(
layer_input, # input
in_size, # input size
N_HIDDEN_UNITS, # output size
ACTIVATION, # activation function
norm, # normalize before activation
)
layers_inputs.append(output) # add output for next run # build output layer
prediction = add_layer(layers_inputs[-1], 30, 1, activation_function=None) cost = tf.reduce_mean(tf.reduce_sum(tf.square(ys - prediction), reduction_indices=[1]))
train_op = tf.train.GradientDescentOptimizer(0.001).minimize(cost)
return [train_op, cost, layers_inputs] # make up data
fix_seed(1)
x_data = np.linspace(-7, 10, 2500)[:, np.newaxis]
np.random.shuffle(x_data)
noise = np.random.normal(0, 8, x_data.shape)
y_data = np.square(x_data) - 5 + noise # plot input data
plt.scatter(x_data, y_data)
plt.show() xs = tf.placeholder(tf.float32, [None, 1]) # [num_samples, num_features]
ys = tf.placeholder(tf.float32, [None, 1])
#create 2 neuralnetwork
train_op, cost, layers_inputs = built_net(xs, ys, norm=False) # without BN
train_op_norm, cost_norm, layers_inputs_norm = built_net(xs, ys, norm=True) # with BN sess = tf.Session()
if int((tf.__version__).split('.')[1]) < 12 and int((tf.__version__).split('.')[0]) < 1:
init = tf.initialize_all_variables()
else:
init = tf.global_variables_initializer()
sess.run(init) # record cost
cost_his = []
cost_his_norm = []
record_step = 5 plt.ion()
plt.figure(figsize=(7, 3))
for i in range(250):
if i % 50 == 0:
# plot histogram
all_inputs, all_inputs_norm = sess.run([layers_inputs, layers_inputs_norm], feed_dict={xs: x_data, ys: y_data})
plot_his(all_inputs, all_inputs_norm) # train on batch
sess.run([train_op, train_op_norm], feed_dict={xs: x_data[i*10:i*10+10], ys: y_data[i*10:i*10+10]}) if i % record_step == 0:
# record cost
cost_his.append(sess.run(cost, feed_dict={xs: x_data, ys: y_data}))
cost_his_norm.append(sess.run(cost_norm, feed_dict={xs: x_data, ys: y_data})) plt.ioff()
plt.figure()
plt.plot(np.arange(len(cost_his))*record_step, np.array(cost_his), label='no BN') # no norm
plt.plot(np.arange(len(cost_his))*record_step, np.array(cost_his_norm), label='BN') # norm
plt.legend()
plt.show()
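Note that this simplified version always normalizes with the current batch's statistics: mean_var_with_update is called unconditionally. The ExponentialMovingAverage of fc_mean and fc_var is maintained so that, in a fuller implementation, the averaged statistics could be used at test time, when there is no batch to compute them from.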
The scatter plot above shows the generated dataset (y = x^2 - 5 plus Gaussian noise).