Spark MLlib Deep Learning: Deep Belief Network 2.1

http://blog.csdn.net/sunbow0

The Spark MLlib Deep Learning toolbox is an implementation on Spark MLlib of the algorithms from the existing deep learning tutorial "UFLDL Tutorial". The toolbox is organized as follows:

Chapter 1: Neural Net (NN)

1. Source code

2. Source code walkthrough

3. Example

Chapter 2: Deep Belief Nets (DBNs)

1. Source code

2. Source code walkthrough

3. Example

Chapter 3: Convolutional Neural Network (CNN)

Chapter 4: Stacked Auto-Encoders (SAE)

Chapter 5: CAE

Chapter 2: Deep Belief Network (DBN)

1 Source code

The source code for the Spark MLlib Deep Learning toolbox is currently hosted on GitHub at:

https://github.com/sunbow1/SparkMLlibDeepLearn

1.1 DBN code

package DBN

import org.apache.spark._
import org.apache.spark.SparkContext._
import org.apache.spark.rdd.RDD
import org.apache.spark.Logging
import org.apache.spark.mllib.regression.LabeledPoint
import org.apache.spark.mllib.linalg._
import org.apache.spark.mllib.linalg.distributed.RowMatrix

import breeze.linalg.{
  Matrix => BM,
  CSCMatrix => BSM,
  DenseMatrix => BDM,
  Vector => BV,
  DenseVector => BDV,
  SparseVector => BSV,
  axpy => brzAxpy,
  svd => brzSvd
}
import breeze.numerics.{
  exp => Bexp,
  tanh => Btanh
}

import scala.collection.mutable.ArrayBuffer
import java.util.Random
import scala.math._
/**
 * W: weight matrix between visible and hidden units
 * b: visible-unit bias
 * c: hidden-unit bias
 * vW/vb/vc: the corresponding momentum terms
 */
case class DBNweight(
  W: BDM[Double],
  vW: BDM[Double],
  b: BDM[Double],
  vb: BDM[Double],
  c: BDM[Double],
  vc: BDM[Double]) extends Serializable

/**
 * Configuration parameters
 */
case class DBNConfig(
  size: Array[Int],
  layer: Int,
  momentum: Double,
  alpha: Double) extends Serializable

/**
 * DBN (Deep Belief Network)
 */
class DBN(
  private var size: Array[Int],
  private var layer: Int,
  private var momentum: Double,
  private var alpha: Double) extends Serializable with Logging {
  // Example settings:
  //   var size = Array(5, 10, 10)
  //   var layer = 3
  //   var momentum = 0.0
  //   var alpha = 1.0
  /**
   * size     = architecture;   layer sizes
   * layer    = numel(nn.size); number of layers
   * momentum = 0.0;            momentum coefficient
   * alpha    = 1.0;            learning rate
   */
  def this() = this(DBN.Architecture, 3, 0.0, 1.0)

  /** Set the network architecture. Default: [10, 5, 1]. */
  def setSize(size: Array[Int]): this.type = {
    this.size = size
    this
  }

  /** Set the number of layers. Default: 3. */
  def setLayer(layer: Int): this.type = {
    this.layer = layer
    this
  }

  /** Set the momentum. Default: 0.0. */
  def setMomentum(momentum: Double): this.type = {
    this.momentum = momentum
    this
  }

  /** Set the learning rate alpha. Default: 1.0. */
  def setAlpha(alpha: Double): this.type = {
    this.alpha = alpha
    this
  }
  /**
   * Deep Belief Network
   * Train the whole network, one RBM layer at a time: DBNtrain
   */
  def DBNtrain(train_d: RDD[(BDM[Double], BDM[Double])], opts: Array[Double]): DBNModel = {
    // Assemble the configuration
    val sc = train_d.sparkContext
    val dbnconfig = DBNConfig(size, layer, momentum, alpha)
    // Initialize weights, biases, and their momentum terms
    var dbn_W = DBN.InitialW(size)
    var dbn_vW = DBN.InitialvW(size)
    var dbn_b = DBN.Initialb(size)
    var dbn_vb = DBN.Initialvb(size)
    var dbn_c = DBN.Initialc(size)
    var dbn_vc = DBN.Initialvc(size)
    // Train layer 1
    printf("Training Level: %d.\n", 1)
    val weight0 = new DBNweight(dbn_W(0), dbn_vW(0), dbn_b(0), dbn_vb(0), dbn_c(0), dbn_vc(0))
    val weight1 = RBMtrain(train_d, opts, dbnconfig, weight0)
    dbn_W(0) = weight1.W
    dbn_vW(0) = weight1.vW
    dbn_b(0) = weight1.b
    dbn_vb(0) = weight1.vb
    dbn_c(0) = weight1.c
    dbn_vc(0) = weight1.vc
    // Print the layer-1 weights
    printf("dbn_W%d.\n", 1)
    val tmpw0 = dbn_W(0)
    for (r <- 0 to tmpw0.rows - 1) {
      for (j <- 0 to tmpw0.cols - 1) {
        print(tmpw0(r, j) + "\t")
      }
      println()
    }

    // Train layers 2 to n: each layer is trained on the activations of the
    // layer below it, so propagate the data upward one layer per iteration.
    // (The original mapped the raw train_d each time, which only works for a
    // single hidden layer; chaining through train_dx fixes deeper networks.)
    var train_dx = train_d
    for (i <- 2 to dbnconfig.layer - 1) {
      // Forward pass through the previously trained layer
      // x = sigm(repmat(rbm.c', size(x, 1), 1) + x * rbm.W');
      printf("Training Level: %d.\n", i)
      val tmp_bc_w = sc.broadcast(dbn_W(i - 2))
      val tmp_bc_c = sc.broadcast(dbn_c(i - 2))
      train_dx = train_dx.map { f =>
        val label = f._1
        val x = f._2
        val x2 = DBN.sigm(x * tmp_bc_w.value.t + tmp_bc_c.value.t)
        (label, x2)
      }
      // Train layer i
      val weighti = new DBNweight(dbn_W(i - 1), dbn_vW(i - 1), dbn_b(i - 1), dbn_vb(i - 1), dbn_c(i - 1), dbn_vc(i - 1))
      val weight2 = RBMtrain(train_dx, opts, dbnconfig, weighti)
      dbn_W(i - 1) = weight2.W
      dbn_vW(i - 1) = weight2.vW
      dbn_b(i - 1) = weight2.b
      dbn_vb(i - 1) = weight2.vb
      dbn_c(i - 1) = weight2.c
      dbn_vc(i - 1) = weight2.vc
      // Print the layer-i weights
      printf("dbn_W%d.\n", i)
      val tmpw1 = dbn_W(i - 1)
      for (r <- 0 to tmpw1.rows - 1) {
        for (j <- 0 to tmpw1.cols - 1) {
          print(tmpw1(r, j) + "\t")
        }
        println()
      }
    }
    new DBNModel(dbnconfig, dbn_W, dbn_b, dbn_c)
  }

  /**
   * Deep Belief Network
   * Train a single RBM layer with CD-1: RBMtrain
   */
  def RBMtrain(train_t: RDD[(BDM[Double], BDM[Double])],
    opts: Array[Double],
    dbnconfig: DBNConfig,
    weight: DBNweight): DBNweight = {
    val sc = train_t.sparkContext
    var StartTime = System.currentTimeMillis()
    var EndTime = System.currentTimeMillis()
    // Weight and momentum variables
    var rbm_W = weight.W
    var rbm_vW = weight.vW
    var rbm_b = weight.b
    var rbm_vb = weight.vb
    var rbm_c = weight.c
    var rbm_vc = weight.vc
    // Broadcast the configuration
    val bc_config = sc.broadcast(dbnconfig)
    // Number of training samples
    val m = train_t.count
    // Number of batches
    val batchsize = opts(0).toInt
    val numepochs = opts(1).toInt
    val numbatches = (m / batchsize).toInt
    // numepochs is the number of passes over the data
    for (i <- 1 to numepochs) {
      StartTime = System.currentTimeMillis()
      val splitW2 = Array.fill(numbatches)(1.0 / numbatches)
      var err = 0.0
      // Randomly partition the samples into batches using the split weights
      for (l <- 1 to numbatches) {
        // 1. Broadcast the current weights
        val bc_rbm_W = sc.broadcast(rbm_W)
        val bc_rbm_vW = sc.broadcast(rbm_vW)
        val bc_rbm_b = sc.broadcast(rbm_b)
        val bc_rbm_vb = sc.broadcast(rbm_vb)
        val bc_rbm_c = sc.broadcast(rbm_c)
        val bc_rbm_vc = sc.broadcast(rbm_vc)

        // // Debug: print the weights
        // println(i + "\t" + l)
        // val tmpw0 = bc_rbm_W.value
        // for (r <- 0 to tmpw0.rows - 1) {
        //   for (j <- 0 to tmpw0.cols - 1) {
        //     print(tmpw0(r, j) + "\t")
        //   }
        //   println()
        // }

        // 2. Select this batch
        val train_split2 = train_t.randomSplit(splitW2, System.nanoTime())
        val batch_xy1 = train_split2(l - 1)
        // val train_split3 = train_t.filter { f => (f._1 >= batchsize * (l - 1) + 1) && (f._1 <= batchsize * (l)) }
        // val batch_xy1 = train_split3.map(f => (f._2, f._3))

        // 3. CD-1 Gibbs sampling
        // v1 = batch;
        // h1 = sigmrnd(repmat(rbm.c', opts.batchsize, 1) + v1 * rbm.W');
        // v2 = sigmrnd(repmat(rbm.b', opts.batchsize, 1) + h1 * rbm.W);
        // h2 = sigm(repmat(rbm.c', opts.batchsize, 1) + v2 * rbm.W');
        // c1 = h1' * v1;
        // c2 = h2' * v2;
        val batch_vh1 = batch_xy1.map { f =>
          val label = f._1
          val v1 = f._2
          val h1 = DBN.sigmrnd(v1 * bc_rbm_W.value.t + bc_rbm_c.value.t)
          val v2 = DBN.sigmrnd(h1 * bc_rbm_W.value + bc_rbm_b.value.t)
          val h2 = DBN.sigm(v2 * bc_rbm_W.value.t + bc_rbm_c.value.t)
          val c1 = h1.t * v1
          val c2 = h2.t * v2
          (label, v1, h1, v2, h2, c1, c2)
        }

        // 4. Compute the update directions
        // rbm.vW = rbm.momentum * rbm.vW + rbm.alpha * (c1 - c2) / opts.batchsize;
        // rbm.vb = rbm.momentum * rbm.vb + rbm.alpha * sum(v1 - v2)' / opts.batchsize;
        // rbm.vc = rbm.momentum * rbm.vc + rbm.alpha * sum(h1 - h2)' / opts.batchsize;
        // Update direction for W
        val vw1 = batch_vh1.map {
          case (label, v1, h1, v2, h2, c1, c2) =>
            c1 - c2
        }
        val initw = BDM.zeros[Double](bc_rbm_W.value.rows, bc_rbm_W.value.cols)
        val (vw2, countw2) = vw1.treeAggregate((initw, 0L))(
          seqOp = (c, v) => {
            // c: (m, count), v: (m)
            val m1 = c._1
            val m2 = m1 + v
            (m2, c._2 + 1)
          },
          combOp = (c1, c2) => {
            // c: (m, count)
            val m1 = c1._1
            val m2 = c2._1
            val m3 = m1 + m2
            (m3, c1._2 + c2._2)
          })
        val vw3 = vw2 / countw2.toDouble
        rbm_vW = bc_config.value.momentum * bc_rbm_vW.value + bc_config.value.alpha * vw3
        // Update direction for b
        val vb1 = batch_vh1.map {
          case (label, v1, h1, v2, h2, c1, c2) =>
            (v1 - v2)
        }
        val initb = BDM.zeros[Double](bc_rbm_vb.value.cols, bc_rbm_vb.value.rows)
        val (vb2, countb2) = vb1.treeAggregate((initb, 0L))(
          seqOp = (c, v) => {
            // c: (m, count), v: (m)
            val m1 = c._1
            val m2 = m1 + v
            (m2, c._2 + 1)
          },
          combOp = (c1, c2) => {
            // c: (m, count)
            val m1 = c1._1
            val m2 = c2._1
            val m3 = m1 + m2
            (m3, c1._2 + c2._2)
          })
        val vb3 = vb2 / countb2.toDouble
        rbm_vb = bc_config.value.momentum * bc_rbm_vb.value + bc_config.value.alpha * vb3.t
        // Update direction for c
        val vc1 = batch_vh1.map {
          case (label, v1, h1, v2, h2, c1, c2) =>
            (h1 - h2)
        }
        val initc = BDM.zeros[Double](bc_rbm_vc.value.cols, bc_rbm_vc.value.rows)
        val (vc2, countc2) = vc1.treeAggregate((initc, 0L))(
          seqOp = (c, v) => {
            // c: (m, count), v: (m)
            val m1 = c._1
            val m2 = m1 + v
            (m2, c._2 + 1)
          },
          combOp = (c1, c2) => {
            // c: (m, count)
            val m1 = c1._1
            val m2 = c2._1
            val m3 = m1 + m2
            (m3, c1._2 + c2._2)
          })
        val vc3 = vc2 / countc2.toDouble
        rbm_vc = bc_config.value.momentum * bc_rbm_vc.value + bc_config.value.alpha * vc3.t

        // 5. Apply the updates
        // rbm.W = rbm.W + rbm.vW;
        // rbm.b = rbm.b + rbm.vb;
        // rbm.c = rbm.c + rbm.vc;
        rbm_W = bc_rbm_W.value + rbm_vW
        rbm_b = bc_rbm_b.value + rbm_vb
        rbm_c = bc_rbm_c.value + rbm_vc

        // 6. Compute the mean squared reconstruction error
        val dbne1 = batch_vh1.map {
          case (label, v1, h1, v2, h2, c1, c2) =>
            (v1 - v2)
        }
        val (dbne2, counte) = dbne1.treeAggregate((0.0, 0L))(
          seqOp = (c, v) => {
            // c: (e, count), v: (m)
            val e1 = c._1
            val e2 = (v :* v).sum
            val esum = e1 + e2
            (esum, c._2 + 1)
          },
          combOp = (c1, c2) => {
            // c: (e, count)
            val e1 = c1._1
            val e2 = c2._1
            val esum = e1 + e2
            (esum, c1._2 + c2._2)
          })
        val dbne = dbne2 / counte.toDouble
        err += dbne
      }
      EndTime = System.currentTimeMillis()
      // Print per-epoch statistics
      printf("epoch: numepochs = %d , Took = %d seconds; Average reconstruction error is: %f.\n", i,
        scala.math.ceil((EndTime - StartTime).toDouble / 1000).toLong, err / numbatches.toDouble)
    }
    new DBNweight(rbm_W, rbm_vW, rbm_b, rbm_vb, rbm_c, rbm_vc)
  }

}
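For reference, RBMtrain above performs one step of contrastive divergence (CD-1); the following just restates the MATLAB comments embedded in the code in standard RBM notation. Writing $\langle\cdot\rangle$ for the batch average (the treeAggregate sums divided by their counts), $\mu$ for momentum and $\alpha$ for alpha:

$$
\begin{aligned}
h_1 &= \operatorname{sigmrnd}(v_1 W^{\top} + c^{\top}), \quad
v_2 = \operatorname{sigmrnd}(h_1 W + b^{\top}), \quad
h_2 = \operatorname{sigm}(v_2 W^{\top} + c^{\top}), \\
vW &\leftarrow \mu\, vW + \alpha\, \big\langle h_1^{\top} v_1 - h_2^{\top} v_2 \big\rangle, \quad
vb \leftarrow \mu\, vb + \alpha\, \langle v_1 - v_2 \rangle^{\top}, \quad
vc \leftarrow \mu\, vc + \alpha\, \langle h_1 - h_2 \rangle^{\top}, \\
W &\leftarrow W + vW, \qquad b \leftarrow b + vb, \qquad c \leftarrow c + vc.
\end{aligned}
$$

The reconstruction error reported per epoch is the batch average of $\lVert v_1 - v_2 \rVert^2$.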

/**
 * DBN companion object: constants, initialization, and activation helpers
 */
object DBN extends Serializable {

  // Default settings
  val Activation_Function = "sigm"
  val Output = "linear"
  val Architecture = Array(10, 5, 1)

  /**
   * Initialize the weight matrices W to zero.
   */
  def InitialW(size: Array[Int]): Array[BDM[Double]] = {
    // weights and weight momentum
    // dbn.rbm{u}.W = zeros(dbn.sizes(u + 1), dbn.sizes(u));
    val n = size.length
    val rbm_W = ArrayBuffer[BDM[Double]]()
    for (i <- 1 to n - 1) {
      val d1 = BDM.zeros[Double](size(i), size(i - 1))
      rbm_W += d1
    }
    rbm_W.toArray
  }

  /**
   * Initialize the weight momentum vW to zero.
   */
  def InitialvW(size: Array[Int]): Array[BDM[Double]] = {
    // weights and weight momentum
    // dbn.rbm{u}.vW = zeros(dbn.sizes(u + 1), dbn.sizes(u));
    val n = size.length
    val rbm_vW = ArrayBuffer[BDM[Double]]()
    for (i <- 1 to n - 1) {
      val d1 = BDM.zeros[Double](size(i), size(i - 1))
      rbm_vW += d1
    }
    rbm_vW.toArray
  }

  /**
   * Initialize the visible bias b to zero.
   */
  def Initialb(size: Array[Int]): Array[BDM[Double]] = {
    // weights and weight momentum
    // dbn.rbm{u}.b = zeros(dbn.sizes(u), 1);
    val n = size.length
    val rbm_b = ArrayBuffer[BDM[Double]]()
    for (i <- 1 to n - 1) {
      val d1 = BDM.zeros[Double](size(i - 1), 1)
      rbm_b += d1
    }
    rbm_b.toArray
  }

  /**
   * Initialize the visible bias momentum vb to zero.
   */
  def Initialvb(size: Array[Int]): Array[BDM[Double]] = {
    // weights and weight momentum
    // dbn.rbm{u}.vb = zeros(dbn.sizes(u), 1);
    val n = size.length
    val rbm_vb = ArrayBuffer[BDM[Double]]()
    for (i <- 1 to n - 1) {
      val d1 = BDM.zeros[Double](size(i - 1), 1)
      rbm_vb += d1
    }
    rbm_vb.toArray
  }

  /**
   * Initialize the hidden bias c to zero.
   */
  def Initialc(size: Array[Int]): Array[BDM[Double]] = {
    // weights and weight momentum
    // dbn.rbm{u}.c = zeros(dbn.sizes(u + 1), 1);
    val n = size.length
    val rbm_c = ArrayBuffer[BDM[Double]]()
    for (i <- 1 to n - 1) {
      val d1 = BDM.zeros[Double](size(i), 1)
      rbm_c += d1
    }
    rbm_c.toArray
  }

  /**
   * Initialize the hidden bias momentum vc to zero.
   */
  def Initialvc(size: Array[Int]): Array[BDM[Double]] = {
    // weights and weight momentum
    // dbn.rbm{u}.vc = zeros(dbn.sizes(u + 1), 1);
    val n = size.length
    val rbm_vc = ArrayBuffer[BDM[Double]]()
    for (i <- 1 to n - 1) {
      val d1 = BDM.zeros[Double](size(i), 1)
      rbm_vc += d1
    }
    rbm_vc.toArray
  }

  /**
   * Gibbs sampling (binary):
   * X = double(1./(1+exp(-P)) > rand(size(P)));
   */
  def sigmrnd(P: BDM[Double]): BDM[Double] = {
    val s1 = 1.0 / (Bexp(P * (-1.0)) + 1.0)
    val r1 = BDM.rand[Double](s1.rows, s1.cols)
    val a1 = s1 :> r1
    val a2 = a1.data.map { f => if (f) 1.0 else 0.0 }
    val a3 = new BDM(s1.rows, s1.cols, a2)
    a3
  }

  /**
   * Gibbs sampling with additive noise:
   * X = double(1./(1+exp(-P)))+1*randn(size(P));
   * (note: the Breeze code draws uniform noise here, not Gaussian)
   */
  def sigmrnd2(P: BDM[Double]): BDM[Double] = {
    val s1 = 1.0 / (Bexp(P * (-1.0)) + 1.0)
    val r1 = BDM.rand[Double](s1.rows, s1.cols)
    val a3 = s1 + (r1 * 1.0)
    a3
  }

  /**
   * Sigmoid activation:
   * X = 1./(1+exp(-P));
   */
  def sigm(matrix: BDM[Double]): BDM[Double] = {
    val s1 = 1.0 / (Bexp(matrix * (-1.0)) + 1.0)
    s1
  }

  /**
   * Scaled tanh activation:
   * f = 1.7159 * tanh(2/3 .* A);
   */
  def tanh_opt(matrix: BDM[Double]): BDM[Double] = {
    val s1 = Btanh(matrix * (2.0 / 3.0)) * 1.7159
    s1
  }

}
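To make the calling convention concrete, here is a minimal, self-contained training sketch. It assumes only what the code above defines: each sample is a (label, features) pair of 1-row Breeze matrices, and opts = Array(batchsize, numepochs) as read by RBMtrain. The app name, toy data, and network sizes are illustrative, not from the original post.

package DBN

import org.apache.spark.{ SparkConf, SparkContext }
import breeze.linalg.{ DenseMatrix => BDM }

object DBNExample {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(new SparkConf().setAppName("DBNExample"))
    // Toy data: 100 samples, each a (label, features) pair of 1-row matrices
    val train_d = sc.parallelize(0 until 100).map { i =>
      val label = new BDM(1, 1, Array((i % 2).toDouble))
      val x = BDM.rand[Double](1, 5)
      (label, x)
    }.cache()
    // opts = Array(batchsize, numepochs)
    val opts = Array(10.0, 5.0)
    // A 5-10-10 architecture: two stacked RBMs (5 -> 10 and 10 -> 10)
    val dbn = new DBN()
      .setSize(Array(5, 10, 10))
      .setLayer(3)
      .setMomentum(0.0)
      .setAlpha(1.0)
    val model = dbn.DBNtrain(train_d, opts)
    sc.stop()
  }
}

Note that randomSplit makes the batch sizes approximate, so batchsize effectively only controls the number of batches (m / batchsize), not the exact size of each one.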

1.2 DBNModel code

package DBN

import breeze.linalg.{
  Matrix => BM,
  CSCMatrix => BSM,
  DenseMatrix => BDM,
  Vector => BV,
  DenseVector => BDV,
  SparseVector => BSV
}
import org.apache.spark.rdd.RDD
import scala.collection.mutable.ArrayBuffer

class DBNModel(
  val config: DBNConfig,
  val dbn_W: Array[BDM[Double]],
  val dbn_b: Array[BDM[Double]],
  val dbn_c: Array[BDM[Double]]) extends Serializable {

  /**
   * Unfold the DBN into an NN model:
   * convert the pretrained weights into feed-forward layer weights.
   */
  def dbnunfoldtonn(outputsize: Int): (Array[Int], Int, Array[BDM[Double]]) = {
    // 1. Convert the size/layer parameters, appending an output layer if requested
    val size = if (outputsize > 0) {
      val size1 = config.size
      val size2 = ArrayBuffer[Int]()
      size2 ++= size1
      size2 += outputsize
      size2.toArray
    } else config.size
    val layer = if (outputsize > 0) config.layer + 1 else config.layer

    // 2. Convert the dbn_W parameters: prepend each layer's hidden bias c
    //    as the first column of its weight matrix
    var initW = ArrayBuffer[BDM[Double]]()
    for (i <- 0 to dbn_W.length - 1) {
      initW += BDM.horzcat(dbn_c(i), dbn_W(i))
    }
    (size, layer, initW.toArray)
  }

}
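As a brief usage sketch (assuming `model: DBNModel` comes from the training example above; the NN fine-tuning code lives in Chapter 1 and is not shown here):

// Minimal sketch, assuming `model: DBNModel` was produced by DBNtrain above
val (nnSize, nnLayer, nnW) = model.dbnunfoldtonn(1)
// With size = Array(5, 10, 10) and outputsize = 1: nnSize = Array(5, 10, 10, 1)
// nnW(i) has shape size(i+1) x (size(i) + 1); column 0 holds the hidden bias c
nnW.zipWithIndex.foreach { case (w, i) =>
  println(s"layer ${i + 1} weights: ${w.rows} x ${w.cols}")
}

Note that initW carries only the pretrained layers; weights for the appended output layer are presumably left to the NN setup, as in the MATLAB dbnunfoldtonn this method mirrors.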

Please credit the source when reposting:

http://blog.csdn.net/sunbow0
