Problem:

  1) In the first step, apply the convolutional neural network method to perform training and testing on a single CPU.

  2) In the second step, try distributed training on at least two CPUs/GPUs and evaluate the training time.

I. Single-machine, single-device MNIST CNN

1. Understanding CNNs

  Concept: a convolutional neural network is a feed-forward neural network with a deep structure whose computation includes convolutions; it is one of the representative algorithms of deep learning.

  Basic structure:

    Input layer -> [convolution layer -> activation layer -> pooling layer] (hidden layers) -> fully connected layer -> output layer

    (1) Input layer: receives the raw data; preprocessing (e.g. mean subtraction, normalization) can be applied here.

    (2) Convolution layer: the most important building block of a CNN.

      Filter / convolution kernel (plays the role of W): captures local correlations and extracts salient features; it maps the input through a window that slides across it, and its size is configurable.

      Stride: how far the kernel moves at each step.

      Zero-padding: pads the input border with zeros so that a feature map of the desired size can be computed; TensorFlow offers the 'SAME' and 'VALID' modes (see the size sketch after this list).

      Feature map: the result of mapping the original image through one kernel (a layer produces as many feature maps as it has kernels).

    (3) Activation layer: applies a nonlinear transformation to the convolution output. There are many choices; ReLU is used here, a common pick when fast gradient descent matters, since it is simple and converges quickly, though it is somewhat fragile (units can die).

    (4) Pooling layer: compresses the data and reduces the number of parameters, which curbs overfitting; it amounts to downsampling the feature maps.

    (5) Fully connected layer: the final stage of the network, where every neuron of one layer connects to every neuron of the next.

    (6) Output layer: emits the final result (here softmax is used to classify the MNIST digits).

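  As a sanity check on how 'SAME' and 'VALID' padding determine the feature-map size, here is a small sketch using the standard output-size formulas (the same ones TensorFlow applies); the helper name is mine:

import math

def output_size(in_size, kernel, stride, padding):
    # Spatial size of one dimension of a conv/pool output.
    if padding == 'SAME':
        # Zeros are added so that every input position is covered.
        return int(math.ceil(in_size / float(stride)))
    elif padding == 'VALID':
        # Only positions where the kernel fits entirely inside the input.
        return int(math.ceil((in_size - kernel + 1) / float(stride)))
    raise ValueError('unknown padding: %s' % padding)

# MNIST pipeline used below: 28x28 input, 5x5 conv stride 1, 2x2 pool stride 2
print(output_size(28, 5, 1, 'SAME'))   # 28: 'SAME' conv keeps the size
print(output_size(28, 2, 2, 'SAME'))   # 14: pooling halves it
print(output_size(14, 2, 2, 'SAME'))   # 7:  second pooling, hence 7*7*64 in the flatten

  This is why the fully connected layer in the code below takes a 7 * 7 * 64 input.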
  The overall process resembles the usual conv -> pool -> fully-connected pipeline:

    (figure not preserved)

2. Design

  The design follows this structure:

    (design figure not preserved)

    The figure is not very detailed; the code below makes it precise:

__author__ = 'Kadaj'

import tensorflow as tf
from tensorflow.examples.tutorials.mnist import input_data
import numpy as np

mnist = input_data.read_data_sets('mnist/', one_hot=True)

# Create the W and b variables used to build the graph
def weight_variable(shape):
    initial = tf.truncated_normal(shape, stddev=0.1)
    return tf.Variable(initial)

def bias_variable(shape):
    initial = tf.constant(0.1, shape=shape)
    return tf.Variable(initial)

# TensorFlow's 2-D convolution; stride 1, 'SAME' padding keeps the spatial size
def conv2d(x, W):
    return tf.nn.conv2d(x, W, strides=[1, 1, 1, 1], padding='SAME')

# Pooling layer: 2x2 max pooling with stride 2 halves the spatial size
def max_pool_2x2(x):
    return tf.nn.max_pool(x, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='SAME')

# A CNN exploits spatial structure, so reshape the flat 784-vector
# back into a 28x28 single-channel image
x = tf.placeholder(tf.float32, [None, 784])
y_ = tf.placeholder(tf.float32, [None, 10])
x_image = tf.reshape(x, [-1, 28, 28, 1])

W_conv1 = weight_variable([5, 5, 1, 32])
b_conv1 = bias_variable([32])
h_conv1 = tf.nn.relu(conv2d(x_image, W_conv1) + b_conv1)
h_pool1 = max_pool_2x2(h_conv1)          # 28x28 -> 14x14

W_conv2 = weight_variable([5, 5, 32, 64])
b_conv2 = bias_variable([64])
h_conv2 = tf.nn.relu(conv2d(h_pool1, W_conv2) + b_conv2)
h_pool2 = max_pool_2x2(h_conv2)          # 14x14 -> 7x7

W_fcl = weight_variable([7 * 7 * 64, 1024])
b_fcl = bias_variable([1024])
h_pool2_flat = tf.reshape(h_pool2, [-1, 7 * 7 * 64])
h_fcl = tf.nn.relu(tf.matmul(h_pool2_flat, W_fcl) + b_fcl)

# Dropout layer to reduce overfitting
keep_prob = tf.placeholder(tf.float32)
h_fcl_drop = tf.nn.dropout(h_fcl, keep_prob)

# Softmax classification layer
W_fc2 = weight_variable([1024, 10])
b_fc2 = bias_variable([10])
y_conv = tf.nn.softmax(tf.matmul(h_fcl_drop, W_fc2) + b_fc2)

# Loss function: cross-entropy against the one-hot labels
cross_entropy = tf.reduce_mean(-tf.reduce_sum(y_ * tf.log(y_conv), reduction_indices=[1]))
train_step = tf.train.AdamOptimizer(1e-4).minimize(cross_entropy)

# Accuracy
correct_prediction = tf.equal(tf.argmax(y_conv, 1), tf.argmax(y_, 1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))

init = tf.global_variables_initializer()

with tf.Session(config=tf.ConfigProto(log_device_placement=True)) as sess:
    # Training
    sess.run(init)
    for i in range(20001):
        batch = mnist.train.next_batch(50)
        if i % 100 == 0:
            train_accuracy = sess.run(accuracy, feed_dict={x: batch[0], y_: batch[1], keep_prob: 1.0})
            print("The local_step " + str(i) + " of training accuracy is " + str(train_accuracy))
        training, cost = sess.run([train_step, cross_entropy], feed_dict={x: batch[0], y_: batch[1], keep_prob: 0.5})

    # Testing: average accuracy over 10 batches of 1000 test images
    accuracyResult = list(range(10))
    for i in range(10):
        batch = mnist.test.next_batch(1000)
        accuracyResult[i] = sess.run(accuracy, feed_dict={x: batch[0], y_: batch[1], keep_prob: 1.0})
    print("The testing accuracy is :", np.mean(accuracyResult))
    print("The cost function is ", cost)
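  One caveat about the loss above: taking the softmax first and then tf.log by hand can hit log(0) = -inf when a predicted probability underflows. A more numerically stable variant (a sketch, not what the run above used; softmax_cross_entropy_with_logits_v2 exists in TF 1.5+, older versions have softmax_cross_entropy_with_logits) keeps the logits and lets TensorFlow fuse the two steps; the device_count option is one way to guarantee the single-CPU execution the problem asks for:

# Sketch: numerically stable cross-entropy plus a CPU-only session config.
logits = tf.matmul(h_fcl_drop, W_fc2) + b_fc2          # pre-softmax scores
y_conv = tf.nn.softmax(logits)                         # still available for accuracy
cross_entropy = tf.reduce_mean(
    tf.nn.softmax_cross_entropy_with_logits_v2(labels=y_, logits=logits))

# Hide all GPUs so every op is placed on the CPU.
config = tf.ConfigProto(device_count={'GPU': 0}, log_device_placement=True)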

3. Execution results

  (output screenshots not preserved)

4. A note on the Dropout used in the code above

  Dropout is the most popular regularization technique in deep learning, and it has proven very successful: even on top-tier networks it can bring a 1 to 2 percentage-point accuracy gain. That may not sound like much, but if a model already sits at 95% accuracy, gaining 2 points means cutting the error rate by 40%, from 5% down to 3%.

  At each training step, every neuron (including the input neurons but excluding the output neurons) is temporarily dropped with some probability, meaning it is ignored for that entire step but may become active again at the next one.

  The corresponding hyperparameter is the dropout rate, usually set to 50%. After training, neurons are no longer dropped.
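  To make the keep_prob semantics concrete, here is a minimal NumPy sketch of inverted dropout, which is the scheme tf.nn.dropout implements: surviving activations are scaled by 1/keep_prob during training, so nothing needs rescaling at test time:

import numpy as np

def dropout(h, keep_prob, training):
    # Inverted dropout: mask and rescale while training, identity otherwise.
    if not training or keep_prob >= 1.0:
        return h
    mask = np.random.rand(*h.shape) < keep_prob   # True for the kept units
    return h * mask / keep_prob                   # keeps each unit's expected value

h = np.ones((2, 4))
print(dropout(h, 0.5, training=True))    # roughly half zeroed, survivors doubled
print(dropout(h, 0.5, training=False))   # unchanged at test time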

II. Distributed implementation

1. How distribution works

  (The figure here, which showed the single-machine multi-device and multi-machine multi-device setups, is not preserved.)

2. Basic concepts

  Cluster, job and task form a simple hierarchy: a task can be seen as one process on one machine; several tasks make up a job; jobs come in two kinds, ps (parameter serving) and worker (computation); and the jobs together form the cluster. A minimal sketch follows.
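  As an illustration of the three concepts (the addresses are placeholders), a ClusterSpec names the jobs and the tasks inside each job, and every task corresponds to one server process:

import tensorflow as tf

# One cluster, two jobs: 'ps' with one task, 'worker' with two tasks.
cluster = tf.train.ClusterSpec({
    'ps':     ['127.0.0.1:2222'],
    'worker': ['127.0.0.1:2223', '127.0.0.1:2224'],
})

# Each process claims one (job_name, task_index) slot; inside the graph
# that task is addressed by a device string such as '/job:worker/task:0'.
server = tf.train.Server(cluster, job_name='worker', task_index=0)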

3. Synchronous vs. asynchronous SGD

  Synchronous SGD: each machine taking part in the parallel computation processes its own batch, computes the gradients, and sends them to the ps machine; the ps averages the gradients and only then updates the parameters it holds (sketched below).

  Asynchronous SGD: the ps updates the parameters as soon as it receives gradients from any single machine, without waiting for the others. This scheme is less stable and its convergence curve oscillates more, because by the time machine A has finished and updated the parameters on the ps, machine B may still be computing with the stale parameters of the previous iteration.

The sync/async description above is based on: https://blog.csdn.net/panpan_1210/aarticle/details/79402105#
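  The scripts below update asynchronously. For the synchronous behavior, TF1 provides tf.train.SyncReplicasOptimizer, which aggregates a fixed number of replica gradients on the ps before applying a single update. A hedged sketch (not used in the runs below; it is normally paired with tf.train.MonitoredTrainingSession rather than Supervisor):

# Sketch: synchronous SGD, aggregating both workers' gradients per update.
opt = tf.train.AdamOptimizer(1e-4)
sync_opt = tf.train.SyncReplicasOptimizer(
    opt,
    replicas_to_aggregate=2,   # wait for gradients from both workers
    total_num_replicas=2)
train_step = sync_opt.minimize(cross_entropy, global_step=global_step)

# The chief worker needs the optimizer's hook so that the aggregation
# queues are initialized and coordinated.
sync_hook = sync_opt.make_session_run_hook(is_chief=True)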

4. Basic design

  The logic is essentially the same as in the first part. Since no multi-machine environment was available, distributed training is simulated on the local machine: one ps process and two worker processes, all on 127.0.0.1 (see the launch sketch below).
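  Since everything runs locally, the three copies of the script can be started as three local processes. A sketch, with hypothetical file names (ps.py has isps = True; worker0.py and worker1.py differ only in their task index):

import subprocess

# Start the parameter server and both workers as separate processes.
procs = [subprocess.Popen(['python', name])
         for name in ['ps.py', 'worker0.py', 'worker1.py']]

for p in procs[1:]:
    p.wait()              # wait for both workers to finish training
procs[0].terminate()      # the ps blocks in server.join() forever, so stop it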

  Here is the code (this copy is worker 0):

  

import tensorflow as tf
from tensorflow.examples.tutorials.mnist import input_data
import numpy as np

mnist = input_data.read_data_sets('mnist/', one_hot=True)

# Create the W and b variables used to build the graph
def weight_variable(shape):
    initial = tf.truncated_normal(shape, stddev=0.1)
    return tf.Variable(initial)

def bias_variable(shape):
    initial = tf.constant(0.1, shape=shape)
    return tf.Variable(initial)

# 2-D convolution and 2x2 max pooling, as in the single-machine version
def conv2d(x, W):
    return tf.nn.conv2d(x, W, strides=[1, 1, 1, 1], padding='SAME')

def max_pool_2x2(x):
    return tf.nn.max_pool(x, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='SAME')

# One ps task and two worker tasks, all simulated on the local machine
cluster = tf.train.ClusterSpec({
    "worker": [
        "127.0.0.1:23236",
        "127.0.0.1:23237",
    ],
    "ps": [
        "127.0.0.1:32216"
    ]})

isps = False  # this copy of the script is worker 0
if isps:
    server = tf.train.Server(cluster, job_name='ps', task_index=0)
    server.join()
else:
    server = tf.train.Server(cluster, job_name='worker', task_index=0)
    # replica_device_setter places variables on the ps and ops on this worker
    with tf.device(tf.train.replica_device_setter(worker_device='/job:worker/task:0/cpu:0', cluster=cluster)):
        global_step = tf.Variable(0, name='global_step', trainable=False)
        # Reshape the flat 784-vector back into a 28x28 single-channel image
        x = tf.placeholder(tf.float32, [None, 784])
        y_ = tf.placeholder(tf.float32, [None, 10])
        x_image = tf.reshape(x, [-1, 28, 28, 1])

        W_conv1 = weight_variable([5, 5, 1, 32])
        b_conv1 = bias_variable([32])
        h_conv1 = tf.nn.relu(conv2d(x_image, W_conv1) + b_conv1)
        h_pool1 = max_pool_2x2(h_conv1)

        W_conv2 = weight_variable([5, 5, 32, 64])
        b_conv2 = bias_variable([64])
        h_conv2 = tf.nn.relu(conv2d(h_pool1, W_conv2) + b_conv2)
        h_pool2 = max_pool_2x2(h_conv2)

        W_fcl = weight_variable([7 * 7 * 64, 1024])
        b_fcl = bias_variable([1024])
        h_pool2_flat = tf.reshape(h_pool2, [-1, 7 * 7 * 64])
        h_fcl = tf.nn.relu(tf.matmul(h_pool2_flat, W_fcl) + b_fcl)

        # Dropout layer to reduce overfitting
        keep_prob = tf.placeholder(tf.float32)
        h_fcl_drop = tf.nn.dropout(h_fcl, keep_prob)

        # Softmax classification layer
        W_fc2 = weight_variable([1024, 10])
        b_fc2 = bias_variable([10])
        y_conv = tf.nn.softmax(tf.matmul(h_fcl_drop, W_fc2) + b_fc2)

        # Loss; each minimize() call also increments the shared global_step
        cross_entropy = tf.reduce_mean(-tf.reduce_sum(y_ * tf.log(y_conv), reduction_indices=[1]))
        train_step = tf.train.AdamOptimizer(1e-4).minimize(cross_entropy, global_step=global_step)

        correct_prediction = tf.equal(tf.argmax(y_conv, 1), tf.argmax(y_, 1))
        accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))

    saver = tf.train.Saver()
    summary_op = tf.summary.merge_all()  # None here, since no summaries were defined

    init_op = tf.global_variables_initializer()
    # Note: in a real setup only worker 0 should act as chief (is_chief=True)
    sv = tf.train.Supervisor(init_op=init_op, summary_op=summary_op, saver=saver, global_step=global_step)

    config = tf.ConfigProto()
    config.gpu_options.allow_growth = True
    with sv.managed_session(server.target, config=config) as sess:
        for i in range(10001):
            batch = mnist.train.next_batch(50)
            if i % 100 == 0:
                train_accuracy, step, cost = sess.run([accuracy, global_step, cross_entropy],
                                                      feed_dict={x: batch[0], y_: batch[1], keep_prob: 1.0})
                print("The local_step " + str(i) + " of training accuracy is " + str(train_accuracy)
                      + " and global_step is " + str(step))
            training, cost = sess.run([train_step, cross_entropy],
                                      feed_dict={x: batch[0], y_: batch[1], keep_prob: 0.5})

        # Testing: average accuracy over 10 batches of 1000 test images
        accuracyResult = list(range(10))
        for i in range(10):
            batch = mnist.test.next_batch(1000)
            accuracyResult[i] = sess.run(accuracy, feed_dict={x: batch[0], y_: batch[1], keep_prob: 1.0})
        print("Test accuracy is :", np.mean(accuracyResult))
        print("The cost function is ", cost)
  The second worker runs the same script; the only differences are the task index passed to the server and the worker device string (isps stays False):

    server = tf.train.Server(cluster, job_name='worker', task_index=1)
    with tf.device(tf.train.replica_device_setter(worker_device='/job:worker/task:1/cpu:0', cluster=cluster)):
        ...
  The parameter server is again the same script with isps flipped to True, so it only starts the ps server and then blocks; everything after server.join() is never reached:

    isps = True
    if isps:
        server = tf.train.Server(cluster, job_name='ps', task_index=0)
        server.join()

This simulates one ps and two workers. The training time was not actually measured here, but the iteration count was halved to 10,000 per worker.
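To actually evaluate the training time, as the problem statement asks, the training loop can be wrapped in a wall-clock timer. A minimal sketch, assuming the session and graph from the worker script above:

import time

start = time.time()
for i in range(10001):
    batch = mnist.train.next_batch(50)
    sess.run(train_step, feed_dict={x: batch[0], y_: batch[1], keep_prob: 0.5})
elapsed = time.time() - start
print('Training took %.1f s (%.2f ms/step)' % (elapsed, 1000.0 * elapsed / 10001))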

For comparison, single-machine training with 10,000 iterations reached an accuracy of only 99.002%.

5. Execution results

  (output screenshots not preserved)
