import tensorflow as tf
from tensorflow.examples.tutorials.mnist import input_data

# Load the dataset
mnist = input_data.read_data_sets("MNIST_data", one_hot=True)

# Size of each batch
batch_size = 64
# Number of batches per epoch
n_batch = mnist.train.num_examples // batch_size

# Define three placeholders
x = tf.placeholder(tf.float32, [None, 784])
y = tf.placeholder(tf.float32, [None, 10])
keep_prob = tf.placeholder(tf.float32)

# Network architecture: 784-1000-500-10
W1 = tf.Variable(tf.truncated_normal([784, 1000], stddev=0.1))
b1 = tf.Variable(tf.zeros([1000]) + 0.1)
L1 = tf.nn.tanh(tf.matmul(x, W1) + b1)
L1_drop = tf.nn.dropout(L1, keep_prob)

W2 = tf.Variable(tf.truncated_normal([1000, 500], stddev=0.1))
b2 = tf.Variable(tf.zeros([500]) + 0.1)
L2 = tf.nn.tanh(tf.matmul(L1_drop, W2) + b2)
L2_drop = tf.nn.dropout(L2, keep_prob)

W3 = tf.Variable(tf.truncated_normal([500, 10], stddev=0.1))
b3 = tf.Variable(tf.zeros([10]) + 0.1)
logits = tf.matmul(L2_drop, W3) + b3
prediction = tf.nn.softmax(logits)

# Cross-entropy loss; note that tf.losses.softmax_cross_entropy expects
# the raw logits, not the softmax output (passing prediction here would
# apply softmax twice)
loss = tf.losses.softmax_cross_entropy(y, logits)
# Train with gradient descent
train_step = tf.train.GradientDescentOptimizer(0.5).minimize(loss)

# Variable initializer
init = tf.global_variables_initializer()

# correct_prediction is a list of booleans; tf.argmax returns the index
# of the largest value along the given axis
correct_prediction = tf.equal(tf.argmax(y, 1), tf.argmax(prediction, 1))
# Accuracy: cast the booleans to floats and take the mean
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))

with tf.Session() as sess:
    sess.run(init)
    for epoch in range(31):
        for batch in range(n_batch):
            batch_xs, batch_ys = mnist.train.next_batch(batch_size)
            # Dropout is active during training (keep_prob = 0.5) ...
            sess.run(train_step, feed_dict={x: batch_xs, y: batch_ys, keep_prob: 0.5})
        # ... and disabled at evaluation time (keep_prob = 1.0)
        test_acc = sess.run(accuracy, feed_dict={x: mnist.test.images, y: mnist.test.labels, keep_prob: 1.0})
        train_acc = sess.run(accuracy, feed_dict={x: mnist.train.images, y: mnist.train.labels, keep_prob: 1.0})
        print("Iter " + str(epoch) + ",Testing Accuracy " + str(test_acc) + ",Training Accuracy " + str(train_acc))
Extracting MNIST_data\train-images-idx3-ubyte.gz
Extracting MNIST_data\train-labels-idx1-ubyte.gz
Extracting MNIST_data\t10k-images-idx3-ubyte.gz
Extracting MNIST_data\t10k-labels-idx1-ubyte.gz
Iter 0,Testing Accuracy 0.9201,Training Accuracy 0.91234547
Iter 1,Testing Accuracy 0.9256,Training Accuracy 0.9229636
Iter 2,Testing Accuracy 0.9359,Training Accuracy 0.9328182
Iter 3,Testing Accuracy 0.9375,Training Accuracy 0.93716365
Iter 4,Testing Accuracy 0.9408,Training Accuracy 0.9411273
Iter 5,Testing Accuracy 0.9407,Training Accuracy 0.94365454
Iter 6,Testing Accuracy 0.9472,Training Accuracy 0.9484909
Iter 7,Testing Accuracy 0.9472,Training Accuracy 0.9502
Iter 8,Testing Accuracy 0.9516,Training Accuracy 0.95336366
Iter 9,Testing Accuracy 0.9522,Training Accuracy 0.95552725
Iter 10,Testing Accuracy 0.9525,Training Accuracy 0.95632726
Iter 11,Testing Accuracy 0.9566,Training Accuracy 0.9578909
Iter 12,Testing Accuracy 0.9574,Training Accuracy 0.9606182
Iter 13,Testing Accuracy 0.9573,Training Accuracy 0.96107274
Iter 14,Testing Accuracy 0.9587,Training Accuracy 0.9614546
Iter 15,Testing Accuracy 0.9581,Training Accuracy 0.9616727
Iter 16,Testing Accuracy 0.9599,Training Accuracy 0.96369094
Iter 17,Testing Accuracy 0.9601,Training Accuracy 0.96403635
Iter 18,Testing Accuracy 0.9618,Training Accuracy 0.9658909
Iter 19,Testing Accuracy 0.9608,Training Accuracy 0.9652
Iter 20,Testing Accuracy 0.9618,Training Accuracy 0.96607274
Iter 21,Testing Accuracy 0.9634,Training Accuracy 0.96794546
Iter 22,Testing Accuracy 0.9639,Training Accuracy 0.96836364
Iter 23,Testing Accuracy 0.964,Training Accuracy 0.96965456
Iter 24,Testing Accuracy 0.9644,Training Accuracy 0.9693091
Iter 25,Testing Accuracy 0.9647,Training Accuracy 0.9703818
Iter 26,Testing Accuracy 0.9639,Training Accuracy 0.9702
Iter 27,Testing Accuracy 0.9651,Training Accuracy 0.9708909
Iter 28,Testing Accuracy 0.9666,Training Accuracy 0.9711818
Iter 29,Testing Accuracy 0.9644,Training Accuracy 0.9710364
Iter 30,Testing Accuracy 0.9659,Training Accuracy 0.97205454
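
Why does training feed keep_prob: 0.5 while both evaluations feed keep_prob: 1.0? tf.nn.dropout implements "inverted" dropout: each kept activation is scaled up by 1/keep_prob at training time, so the expected activation stays the same and the network needs no extra rescaling at test time. Below is a minimal NumPy sketch of that behavior; the dropout helper and the toy array are illustrative, not part of the original code.

import numpy as np

def dropout(a, keep_prob, train=True):
    # Inverted dropout, as tf.nn.dropout does: zero each unit with
    # probability 1 - keep_prob and scale the survivors by 1/keep_prob.
    if not train:
        return a  # test time: identity, equivalent to keep_prob = 1.0
    mask = (np.random.rand(*a.shape) < keep_prob).astype(a.dtype)
    return a * mask / keep_prob

a = np.ones((4, 5), dtype=np.float32)
print(dropout(a, 0.5))                # survivors become 2.0, dropped units 0.0
print(dropout(a, 0.5).mean())         # close to 1.0 on average
print(dropout(a, 0.5, train=False))   # unchanged at test time

The modest gap between training and testing accuracy in the log above (about 0.972 vs. 0.966 by Iter 30) is exactly what dropout is meant to keep small; without it, training accuracy typically pulls much further ahead of test accuracy on a network this size.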
