Computational graph design

A very simple exercise:

  • one more hidden layer than before
  • no Gaussian noise this time (unlike the previous section)
  • the network is written in a functional style rather than the previous section's object-oriented style

Nothing else calls for special attention; the implementation is as follows:

    import numpy as np
    import matplotlib.pyplot as plt
    import tensorflow as tf
    from tensorflow.examples.tutorials.mnist import input_data
    import os

    os.environ['TF_CPP_MIN_LOG_LEVEL'] = '2'

    learning_rate = 0.01    # learning rate
    training_epochs = 20    # number of training epochs; one epoch is n_samples/batch_size iterations
    batch_size = 128        # batch size
    display_step = 1        # logging interval (in epochs)
    example_to_show = 10    # number of images to display

    n_hidden_units = 256
    n_input_units = 784
    n_output_units = n_input_units

    def WeightsVariable(n_in, n_out, name_str):
        return tf.Variable(tf.random_normal([n_in, n_out]), dtype=tf.float32, name=name_str)

    def biasesVariable(n_out, name_str):
        return tf.Variable(tf.random_normal([n_out]), dtype=tf.float32, name=name_str)

    def encoder(x_origin, activate_func=tf.nn.sigmoid):
        with tf.name_scope('Layer'):
            Weights = WeightsVariable(n_input_units, n_hidden_units, 'Weights')
            biases = biasesVariable(n_hidden_units, 'biases')
            x_code = activate_func(tf.add(tf.matmul(x_origin, Weights), biases))
        return x_code

    def decode(x_code, activate_func=tf.nn.sigmoid):
        with tf.name_scope('Layer'):
            Weights = WeightsVariable(n_hidden_units, n_output_units, 'Weights')
            biases = biasesVariable(n_output_units, 'biases')
            x_decode = activate_func(tf.add(tf.matmul(x_code, Weights), biases))
        return x_decode

    with tf.Graph().as_default():
        with tf.name_scope('Input'):
            X_input = tf.placeholder(tf.float32, [None, n_input_units])
        with tf.name_scope('Encode'):
            X_code = encoder(X_input)
        with tf.name_scope('decode'):
            X_decode = decode(X_code)
        with tf.name_scope('loss'):
            loss = tf.reduce_mean(tf.pow(X_input - X_decode, 2))
        with tf.name_scope('train'):
            Optimizer = tf.train.RMSPropOptimizer(learning_rate)
            train = Optimizer.minimize(loss)

        init = tf.global_variables_initializer()

        # Because we are inside the tf.Graph().as_default() context, the writer
        # below must also be created inside it; otherwise the recorded graph is
        # empty (get_default_graph() would not pick up the graph defined above).
        writer = tf.summary.FileWriter(logdir='logs', graph=tf.get_default_graph())
        writer.flush()
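To make the comment at the end concrete, here is a tiny hypothetical sketch (my own, not from the original post) of what goes wrong when the writer is created outside the context:

    import tensorflow as tf

    with tf.Graph().as_default():
        a = tf.constant(1, name='a')
        # inside the context, get_default_graph() is the graph containing `a`
        writer = tf.summary.FileWriter(logdir='logs', graph=tf.get_default_graph())

    # outside the context, get_default_graph() is the process-wide default
    # graph, which does not contain `a` -- a writer created out here would
    # record an empty graph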

The resulting computational graph:

Training program

    import numpy as np
    import matplotlib.pyplot as plt
    import tensorflow as tf
    from tensorflow.examples.tutorials.mnist import input_data
    import os

    os.environ['TF_CPP_MIN_LOG_LEVEL'] = '2'

    learning_rate = 0.01    # learning rate
    training_epochs = 20    # number of training epochs; one epoch is n_samples/batch_size iterations
    batch_size = 128        # batch size
    display_step = 1        # logging interval (in epochs)
    example_to_show = 10    # number of images to display

    n_hidden_units = 256
    n_input_units = 784
    n_output_units = n_input_units

    def WeightsVariable(n_in, n_out, name_str):
        return tf.Variable(tf.random_normal([n_in, n_out]), dtype=tf.float32, name=name_str)

    def biasesVariable(n_out, name_str):
        return tf.Variable(tf.random_normal([n_out]), dtype=tf.float32, name=name_str)

    def encoder(x_origin, activate_func=tf.nn.sigmoid):
        with tf.name_scope('Layer'):
            Weights = WeightsVariable(n_input_units, n_hidden_units, 'Weights')
            biases = biasesVariable(n_hidden_units, 'biases')
            x_code = activate_func(tf.add(tf.matmul(x_origin, Weights), biases))
        return x_code

    def decode(x_code, activate_func=tf.nn.sigmoid):
        with tf.name_scope('Layer'):
            Weights = WeightsVariable(n_hidden_units, n_output_units, 'Weights')
            biases = biasesVariable(n_output_units, 'biases')
            x_decode = activate_func(tf.add(tf.matmul(x_code, Weights), biases))
        return x_decode

    with tf.Graph().as_default():
        with tf.name_scope('Input'):
            X_input = tf.placeholder(tf.float32, [None, n_input_units])
        with tf.name_scope('Encode'):
            X_code = encoder(X_input)
        with tf.name_scope('decode'):
            X_decode = decode(X_code)
        with tf.name_scope('loss'):
            loss = tf.reduce_mean(tf.pow(X_input - X_decode, 2))
        with tf.name_scope('train'):
            Optimizer = tf.train.RMSPropOptimizer(learning_rate)
            train = Optimizer.minimize(loss)

        init = tf.global_variables_initializer()

        # As noted above: the writer must be created inside the
        # tf.Graph().as_default() context, or the recorded graph is empty.
        writer = tf.summary.FileWriter(logdir='logs', graph=tf.get_default_graph())
        writer.flush()

        mnist = input_data.read_data_sets('../Mnist_data/', one_hot=True)

        with tf.Session() as sess:
            sess.run(init)
            total_batch = int(mnist.train.num_examples / batch_size)
            for epoch in range(training_epochs):
                for i in range(total_batch):
                    batch_xs, batch_ys = mnist.train.next_batch(batch_size)
                    # fetch the loss in the same run as the train op; a second
                    # sess.run(loss, ...) would only repeat the forward pass
                    _, Loss = sess.run([train, loss], feed_dict={X_input: batch_xs})
                if epoch % display_step == 0:
                    print('Epoch: %04d' % (epoch + 1), 'loss= ', '{:.9f}'.format(Loss))
            writer.close()
            print('Training finished!')

            # Compare the input and output images:
            # fetch the reconstructions for the first few test images
            reconstructions = sess.run(X_decode, feed_dict={X_input: mnist.test.images[:example_to_show]})
            # build the canvas: 2 rows (original / reconstruction), 10 columns
            f, a = plt.subplots(2, 10, figsize=(10, 2))
            for i in range(example_to_show):
                a[0][i].imshow(np.reshape(mnist.test.images[i], (28, 28)))
                a[1][i].imshow(np.reshape(reconstructions[i], (28, 28)))
            f.show()    # render the figure
            plt.draw()  # refresh the figure
            # plt.waitforbuttonpress()
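For the record, the loss above is the per-pixel mean squared reconstruction error: reduce_mean with no axis argument averages over every element of its input, i.e. over both the batch dimension (N, the batch size) and the 784 pixels:

    L = \frac{1}{N \cdot 784} \sum_{n=1}^{N} \sum_{j=1}^{784} (x_{nj} - \hat{x}_{nj})^2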

What a whole morning of debugging taught me: don't give the variable that receives sess.run's output the same name as the tensor node, or it will break... such a rookie mistake.
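A minimal sketch of that trap (my own reconstruction, with the error message paraphrased from memory):

    # WRONG: the fetched value shadows the tensor it was fetched from
    loss = sess.run(loss, feed_dict={X_input: batch_xs})
    # first call: fine, but `loss` is now a numpy float, not a tensor;
    # the next sess.run(loss, ...) raises something like
    #   TypeError: Fetch argument 0.123 has invalid type <class 'numpy.float32'>

    # RIGHT: receive the result under a different name
    Loss = sess.run(loss, feed_dict={X_input: batch_xs})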

The image-comparison part is something I hadn't done before; it shows off some fancier matplotlib.pyplot usage (see the sketch after these notes):

  It turns out plt.subplots() returns both a figure handle and an axes-collection handle, and the axes collection can be indexed like an array to reach each subplot.

  pyplot has both a show() and a draw() method: show() displays the canvas, while draw() redraws the figure, so the canvas can be modified interactively.

  waitforbuttonpress() listens for input: it returns True if the user presses a keyboard key, and False for anything else (such as a mouse click).
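A minimal, standalone sketch of those calls (random arrays standing in for the MNIST images):

    import numpy as np
    import matplotlib.pyplot as plt

    # subplots() returns a figure handle and an array-like axes handle
    f, a = plt.subplots(2, 10, figsize=(10, 2))
    images = np.random.rand(10, 28, 28)  # stand-in data
    for i in range(10):
        a[0][i].imshow(images[i])        # top row: "originals"
        a[1][i].imshow(1.0 - images[i])  # bottom row: "reconstructions"
    f.show()                  # display the canvas
    plt.draw()                # redraw the current figure
    plt.waitforbuttonpress()  # True for a key press, False for e.g. a mouse click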

Also, it turns out writing code on a Surface is actually pretty satisfying...

The output images are as follows:

Two-hidden-layer version

    import numpy as np
    import matplotlib.pyplot as plt
    import tensorflow as tf
    from tensorflow.examples.tutorials.mnist import input_data
    import os

    os.environ['TF_CPP_MIN_LOG_LEVEL'] = '2'

    batch_size = 128        # batch size
    display_step = 1        # logging interval (in epochs)
    learning_rate = 0.01    # learning rate
    training_epochs = 20    # number of training epochs; one epoch is n_samples/batch_size iterations
    example_to_show = 10    # number of images to display

    n_hidden1_units = 256   # first hidden layer
    n_hidden2_units = 128   # second hidden layer
    n_input_units = 784
    n_output_units = n_input_units

    def WeightsVariable(n_in, n_out, name_str):
        return tf.Variable(tf.random_normal([n_in, n_out]), dtype=tf.float32, name=name_str)

    def biasesVariable(n_out, name_str):
        return tf.Variable(tf.random_normal([n_out]), dtype=tf.float32, name=name_str)

    def encoder(x_origin, activate_func=tf.nn.sigmoid):
        with tf.name_scope('Layer1'):
            Weights = WeightsVariable(n_input_units, n_hidden1_units, 'Weights')
            biases = biasesVariable(n_hidden1_units, 'biases')
            x_code1 = activate_func(tf.add(tf.matmul(x_origin, Weights), biases))
        with tf.name_scope('Layer2'):
            Weights = WeightsVariable(n_hidden1_units, n_hidden2_units, 'Weights')
            biases = biasesVariable(n_hidden2_units, 'biases')
            x_code2 = activate_func(tf.add(tf.matmul(x_code1, Weights), biases))
        return x_code2

    def decode(x_code, activate_func=tf.nn.sigmoid):
        with tf.name_scope('Layer1'):
            Weights = WeightsVariable(n_hidden2_units, n_hidden1_units, 'Weights')
            biases = biasesVariable(n_hidden1_units, 'biases')
            x_decode1 = activate_func(tf.add(tf.matmul(x_code, Weights), biases))
        with tf.name_scope('Layer2'):
            Weights = WeightsVariable(n_hidden1_units, n_output_units, 'Weights')
            biases = biasesVariable(n_output_units, 'biases')
            x_decode2 = activate_func(tf.add(tf.matmul(x_decode1, Weights), biases))
        return x_decode2

    with tf.Graph().as_default():
        with tf.name_scope('Input'):
            X_input = tf.placeholder(tf.float32, [None, n_input_units])
        with tf.name_scope('Encode'):
            X_code = encoder(X_input)
        with tf.name_scope('decode'):
            X_decode = decode(X_code)
        with tf.name_scope('loss'):
            loss = tf.reduce_mean(tf.pow(X_input - X_decode, 2))
        with tf.name_scope('train'):
            Optimizer = tf.train.RMSPropOptimizer(learning_rate)
            train = Optimizer.minimize(loss)

        init = tf.global_variables_initializer()

        # As above: the writer must be created inside the
        # tf.Graph().as_default() context, or the recorded graph is empty.
        writer = tf.summary.FileWriter(logdir='logs', graph=tf.get_default_graph())
        writer.flush()

        mnist = input_data.read_data_sets('../Mnist_data/', one_hot=True)

        with tf.Session() as sess:
            sess.run(init)
            total_batch = int(mnist.train.num_examples / batch_size)
            for epoch in range(training_epochs):
                for i in range(total_batch):
                    batch_xs, batch_ys = mnist.train.next_batch(batch_size)
                    _, Loss = sess.run([train, loss], feed_dict={X_input: batch_xs})
                if epoch % display_step == 0:
                    print('Epoch: %04d' % (epoch + 1), 'loss= ', '{:.9f}'.format(Loss))
            writer.close()
            print('Training finished!')

            # Compare the input and output images
            reconstructions = sess.run(X_decode, feed_dict={X_input: mnist.test.images[:example_to_show]})
            f, a = plt.subplots(2, 10, figsize=(10, 2))
            for i in range(example_to_show):
                a[0][i].imshow(np.reshape(mnist.test.images[i], (28, 28)))
                a[1][i].imshow(np.reshape(reconstructions[i], (28, 28)))
            f.show()    # render the figure
            plt.draw()  # refresh the figure
            # plt.waitforbuttonpress()

The output images are as follows:


Because compressing down to 128 nodes loses too much information, the results are not as good as the single-layer version above.

Interestingly, after changing the 256-unit layer to 128 units (i.e., two 128-unit layers), the results actually come out better than the above:

It still doesn't match the single-hidden-layer version, though. When the data is fairly simple, a more complex network may not perform as well (I didn't capture the loss values, but that's what happened, even though directly comparing losses across different networks isn't very meaningful). Of course, it could also simply be that the more complex network hadn't converged.
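For reference, the double-128 variant only needs the two size constants changed; everything else in the listing above stays the same:

    n_hidden1_units = 128  # was 256; now both hidden layers have 128 units
    n_hidden2_units = 128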

Visualizing the two-hidden-layer autoencoder

    import tensorflow as tf
    from tensorflow.examples.tutorials.mnist import input_data
    import os

    os.environ['TF_CPP_MIN_LOG_LEVEL'] = '2'

    batch_size = 128        # batch size
    display_step = 1        # logging interval (in epochs)
    learning_rate = 0.01    # learning rate
    training_epochs = 20    # number of training epochs; one epoch is n_samples/batch_size iterations
    example_to_show = 10    # number of images to display

    n_hidden1_units = 256   # first hidden layer
    n_hidden2_units = 128   # second hidden layer
    n_input_units = 784
    n_output_units = n_input_units

    def variable_summaries(var):  #<---
        """Attach summaries for all the relevant statistics of a variable."""
        with tf.name_scope('summaries'):
            mean = tf.reduce_mean(var)
            tf.summary.scalar('mean', mean)  # the mean is a scalar
            with tf.name_scope('stddev'):
                stddev = tf.sqrt(tf.reduce_mean(tf.square(var - mean)))
            tf.summary.scalar('stddev', stddev)  # note: this is a scalar, too
            tf.summary.scalar('max', tf.reduce_max(var))
            tf.summary.scalar('min', tf.reduce_min(var))
            tf.summary.histogram('histogram', var)

    def WeightsVariable(n_in, n_out, name_str):
        return tf.Variable(tf.random_normal([n_in, n_out]), dtype=tf.float32, name=name_str)

    def biasesVariable(n_out, name_str):
        return tf.Variable(tf.random_normal([n_out]), dtype=tf.float32, name=name_str)

    def encoder(x_origin, activate_func=tf.nn.sigmoid):
        with tf.name_scope('Layer1'):
            Weights = WeightsVariable(n_input_units, n_hidden1_units, 'Weights')
            biases = biasesVariable(n_hidden1_units, 'biases')
            x_code1 = activate_func(tf.add(tf.matmul(x_origin, Weights), biases))
            variable_summaries(Weights)  #<---
            variable_summaries(biases)   #<---
        with tf.name_scope('Layer2'):
            Weights = WeightsVariable(n_hidden1_units, n_hidden2_units, 'Weights')
            biases = biasesVariable(n_hidden2_units, 'biases')
            x_code2 = activate_func(tf.add(tf.matmul(x_code1, Weights), biases))
            variable_summaries(Weights)  #<---
            variable_summaries(biases)   #<---
        return x_code2

    def decode(x_code, activate_func=tf.nn.sigmoid):
        with tf.name_scope('Layer1'):
            Weights = WeightsVariable(n_hidden2_units, n_hidden1_units, 'Weights')
            biases = biasesVariable(n_hidden1_units, 'biases')
            x_decode1 = activate_func(tf.add(tf.matmul(x_code, Weights), biases))
            variable_summaries(Weights)  #<---
            variable_summaries(biases)   #<---
        with tf.name_scope('Layer2'):
            Weights = WeightsVariable(n_hidden1_units, n_output_units, 'Weights')
            biases = biasesVariable(n_output_units, 'biases')
            x_decode2 = activate_func(tf.add(tf.matmul(x_decode1, Weights), biases))
            variable_summaries(Weights)  #<---
            variable_summaries(biases)   #<---
        return x_decode2

    with tf.Graph().as_default():
        with tf.name_scope('Input'):
            X_input = tf.placeholder(tf.float32, [None, n_input_units])
        with tf.name_scope('Encode'):
            X_code = encoder(X_input)
        with tf.name_scope('decode'):
            X_decode = decode(X_code)
        with tf.name_scope('loss'):
            loss = tf.reduce_mean(tf.pow(X_input - X_decode, 2))
        with tf.name_scope('train'):
            Optimizer = tf.train.RMSPropOptimizer(learning_rate)
            train = Optimizer.minimize(loss)

        # scalar summaries
        with tf.name_scope('LossSummary'):
            tf.summary.scalar('loss', loss)
            tf.summary.scalar('learning_rate', learning_rate)

        # image summaries
        with tf.name_scope('ImageSummary'):
            image_original = tf.reshape(X_input, [-1, 28, 28, 1])
            image_reconstruction = tf.reshape(X_decode, [-1, 28, 28, 1])
            tf.summary.image('image_original', image_original, 9)
            tf.summary.image('image_reconstruction', image_reconstruction, 9)

        # merge all summaries
        merged_summary = tf.summary.merge_all()

        init = tf.global_variables_initializer()

        writer = tf.summary.FileWriter(logdir='logs', graph=tf.get_default_graph())
        writer.flush()

        mnist = input_data.read_data_sets('../Mnist_data/', one_hot=True)

        with tf.Session() as sess:
            sess.run(init)
            total_batch = int(mnist.train.num_examples / batch_size)
            for epoch in range(training_epochs):
                for i in range(total_batch):
                    batch_xs, batch_ys = mnist.train.next_batch(batch_size)
                    _, Loss = sess.run([train, loss], feed_dict={X_input: batch_xs})
                if epoch % display_step == 0:
                    print('Epoch: %04d' % (epoch + 1), 'loss= ', '{:.9f}'.format(Loss))
                    summary_str = sess.run(merged_summary, feed_dict={X_input: batch_xs})  #<---
                    writer.add_summary(summary_str, epoch)  #<---
                    writer.flush()  #<---
            writer.close()
            print('Training finished!')
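With everything written to logs/, the scalars, histograms, and image summaries can be browsed by starting TensorBoard against that directory (`tensorboard --logdir=logs`) and opening the address it prints (by default http://localhost:6006).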

A few interesting findings:

  With the earlier way of displaying the images, the matplotlib.pyplot window closes immediately on Windows, so the plt.waitforbuttonpress() call is needed to keep it open.

  The colors plt draws with on Windows differ from those on Linux; the effect is shown below:

  
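My guess at the cause (an assumption on my part, not verified on both platforms): imshow with no explicit cmap falls back to the default colormap, which can differ across matplotlib versions and configurations, so pinning the colormap should make the two platforms match:

    a[0][i].imshow(np.reshape(mnist.test.images[i], (28, 28)), cmap='gray')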

The output graph is as follows:

The comparison images are as follows (captured from TensorBoard):

