Spark MLlib Deep Learning: Deep Belief Network 2.2
Chapter 2: Deep Belief Network
Fundamentals and Source Code Analysis
2.1 Deep Belief Network Fundamentals
1) For comprehensive background, see:
http://tieba.baidu.com/p/2895759455
2) Primary references:
"Learning Deep Architectures for AI"
"A Practical Guide to Training Restricted Boltzmann Machines"
2.2 DBN Source Code Analysis
2.2.1 DBN Code Structure
The DBN source code consists of two classes, DBN and DBNModel, structured as follows:
DBN structure: (class diagram figure omitted)
DBNModel structure: (class diagram figure omitted)
2.2.2 DBN Training Process
Training is greedy and layer-wise: the first RBM is trained on the raw input with contrastive divergence, and its hidden activations then serve as the input for training the next RBM, and so on up the stack (implemented by DBNtrain and RBMtrain below).
2.2.3 DBN Analysis
(1) DBNweight
/**
 * W:  weight matrix
 * vW: weight momentum
 * b:  visible-layer bias
 * vb: visible-bias momentum
 * c:  hidden-layer bias
 * vc: hidden-bias momentum
 */
case class DBNweight(
  W: BDM[Double],
  vW: BDM[Double],
  b: BDM[Double],
  vb: BDM[Double],
  c: BDM[Double],
  vc: BDM[Double]) extends Serializable
DBNweight: a custom data type that stores one RBM's weights, biases, and their momentum terms.
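For concreteness, here is a hedged sketch of a zero-initialized DBNweight for a single RBM with 4 visible and 3 hidden units; the 4/3 sizes are illustrative only, and the shapes follow the initializers in (3)-(8) below.

import breeze.linalg.{DenseMatrix => BDM}

val w0 = DBNweight(
  W = BDM.zeros[Double](3, 4),  // hidden x visible weight matrix
  vW = BDM.zeros[Double](3, 4), // its momentum
  b = BDM.zeros[Double](4, 1),  // visible bias (column vector)
  vb = BDM.zeros[Double](4, 1),
  c = BDM.zeros[Double](3, 1),  // hidden bias (column vector)
  vc = BDM.zeros[Double](3, 1))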
(2) DBNConfig
/**
 * Configuration parameters
 */
case class DBNConfig(
  size: Array[Int],
  layer: Int,
  momentum: Double,
  alpha: Double) extends Serializable
DBNConfig: defines the parameter configuration and stores the settings. Parameters:
size: network structure (number of units in each layer)
layer: number of layers
momentum: momentum coefficient
alpha: learning rate
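As a hedged illustration (the sizes and rates here are made up, not taken from the post), a three-layer 784-500-100 network would be configured as:

val config = DBNConfig(
  size = Array(784, 500, 100), // units per layer
  layer = 3,                   // should equal size.length
  momentum = 0.5,
  alpha = 0.1)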
(3) InitialW
Initializes the weight matrices W.
/**
 * Initialize the weights W
 */
def InitialW(size: Array[Int]): Array[BDM[Double]] = {
  // weights and weight momentum
  // dbn.rbm{u}.W = zeros(dbn.sizes(u + 1), dbn.sizes(u));
  val n = size.length
  val rbm_W = ArrayBuffer[BDM[Double]]()
  for (i <- 1 to n - 1) {
    val d1 = BDM.zeros[Double](size(i), size(i - 1))
    rbm_W += d1
  }
  rbm_W.toArray
}
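A quick shape check of the indexing (assuming the InitialW above is in scope): each matrix is hidden x visible for its RBM.

val ws = InitialW(Array(4, 3, 2))
assert(ws.length == 2)
assert(ws(0).rows == 3 && ws(0).cols == 4) // RBM 1: 3 hidden x 4 visible
assert(ws(1).rows == 2 && ws(1).cols == 3) // RBM 2: 2 hidden x 3 visible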
(4) InitialvW
Initializes the weight momentum vW.
/**
 * Initialize the weight momentum vW
 */
def InitialvW(size: Array[Int]): Array[BDM[Double]] = {
  // weights and weight momentum
  // dbn.rbm{u}.vW = zeros(dbn.sizes(u + 1), dbn.sizes(u));
  val n = size.length
  val rbm_vW = ArrayBuffer[BDM[Double]]()
  for (i <- 1 to n - 1) {
    val d1 = BDM.zeros[Double](size(i), size(i - 1))
    rbm_vW += d1
  }
  rbm_vW.toArray
}
(5) Initialb
Initializes the visible bias vector b.
/**
 * Initialize the bias vector b
 */
def Initialb(size: Array[Int]): Array[BDM[Double]] = {
  // weights and weight momentum
  // dbn.rbm{u}.b = zeros(dbn.sizes(u), 1);
  val n = size.length
  val rbm_b = ArrayBuffer[BDM[Double]]()
  for (i <- 1 to n - 1) {
    val d1 = BDM.zeros[Double](size(i - 1), 1)
    rbm_b += d1
  }
  rbm_b.toArray
}
(6) Initialvb
Initializes the visible bias momentum vb.
/**
 * Initialize the bias momentum vb
 */
def Initialvb(size: Array[Int]): Array[BDM[Double]] = {
  // weights and weight momentum
  // dbn.rbm{u}.vb = zeros(dbn.sizes(u), 1);
  val n = size.length
  val rbm_vb = ArrayBuffer[BDM[Double]]()
  for (i <- 1 to n - 1) {
    val d1 = BDM.zeros[Double](size(i - 1), 1)
    rbm_vb += d1
  }
  rbm_vb.toArray
}
(7) Initialc
Initializes the hidden bias vector c.
/**
 * Initialize the bias vector c
 */
def Initialc(size: Array[Int]): Array[BDM[Double]] = {
  // weights and weight momentum
  // dbn.rbm{u}.c = zeros(dbn.sizes(u + 1), 1);
  val n = size.length
  val rbm_c = ArrayBuffer[BDM[Double]]()
  for (i <- 1 to n - 1) {
    val d1 = BDM.zeros[Double](size(i), 1)
    rbm_c += d1
  }
  rbm_c.toArray
}
(8) Initialvc
Initializes the hidden bias momentum vc.
/**
 * Initialize the bias momentum vc
 */
def Initialvc(size: Array[Int]): Array[BDM[Double]] = {
  // weights and weight momentum
  // dbn.rbm{u}.vc = zeros(dbn.sizes(u + 1), 1);
  val n = size.length
  val rbm_vc = ArrayBuffer[BDM[Double]]()
  for (i <- 1 to n - 1) {
    val d1 = BDM.zeros[Double](size(i), 1)
    rbm_vc += d1
  }
  rbm_vc.toArray
}
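Taken together, for size = Array(4, 3, 2) the six initializers give per-RBM shapes of W/vW hidden x visible, b/vb visible x 1, and c/vc hidden x 1. A hedged shape check, assuming the functions above are in scope:

val size = Array(4, 3, 2)
val (w, b, c) = (InitialW(size), Initialb(size), Initialc(size))
assert(w(0).rows == 3 && w(0).cols == 4) // W1: 3 hidden x 4 visible
assert(b(0).rows == 4 && b(0).cols == 1) // b1: visible bias, 4 x 1
assert(c(0).rows == 3 && c(0).cols == 1) // c1: hidden bias, 3 x 1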
(9) sigmrnd
Gibbs sampling (binary sampling of sigmoid activations).
/**
 * Gibbs sampling
 * X = double(1./(1+exp(-P)) > rand(size(P)));
 */
def sigmrnd(P: BDM[Double]): BDM[Double] = {
  val s1 = 1.0 / (Bexp(P * (-1.0)) + 1.0)
  val r1 = BDM.rand[Double](s1.rows, s1.cols)
  val a1 = s1 :> r1
  val a2 = a1.data.map { f => if (f == true) 1.0 else 0.0 }
  val a3 = new BDM(s1.rows, s1.cols, a2)
  a3
}
/**
 * Gibbs sampling (mean plus noise variant)
 * X = double(1./(1+exp(-P)))+1*randn(size(P));
 */
def sigmrnd2(P: BDM[Double]): BDM[Double] = {
  val s1 = 1.0 / (Bexp(P * (-1.0)) + 1.0)
  val r1 = BDM.rand[Double](s1.rows, s1.cols)
  val a3 = s1 + (r1 * 1.0)
  a3
}
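A minimal standalone sketch of the sampling rule, assuming only Breeze on the classpath and mirroring the operators used above: each entry fires with probability sigm(P), so for P = 0 roughly half the entries come out 1.0.

import breeze.linalg.{DenseMatrix => BDM}
import breeze.numerics.{exp => Bexp}

object SigmrndDemo extends App {
  val P = BDM.zeros[Double](4, 4)
  val s = 1.0 / (Bexp(P * (-1.0)) + 1.0)           // elementwise sigmoid; all 0.5 here
  val r = BDM.rand[Double](P.rows, P.cols)         // uniform(0,1) noise
  val x = new BDM(P.rows, P.cols,
    (s :> r).data.map(f => if (f) 1.0 else 0.0))   // threshold: Bernoulli sample
  println(x)                                       // a 0/1 matrix
}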
(10) DBNtrain
Trains the network layer by layer.
/**
 * Deep Belief Network
 * Run DBN training: DBNtrain
 */
def DBNtrain(train_d: RDD[(BDM[Double], BDM[Double])], opts: Array[Double]): DBNModel = {
  // configuration (broadcast later)
  val sc = train_d.sparkContext
  val dbnconfig = DBNConfig(size, layer, momentum, alpha)
  // initialize the weights
  var dbn_W = DBN.InitialW(size)
  var dbn_vW = DBN.InitialvW(size)
  var dbn_b = DBN.Initialb(size)
  var dbn_vb = DBN.Initialvb(size)
  var dbn_c = DBN.Initialc(size)
  var dbn_vc = DBN.Initialvc(size)
  // train layer 1
  printf("Training Level: %d.\n", 1)
  val weight0 = new DBNweight(dbn_W(0), dbn_vW(0), dbn_b(0), dbn_vb(0), dbn_c(0), dbn_vc(0))
  val weight1 = RBMtrain(train_d, opts, dbnconfig, weight0)
  dbn_W(0) = weight1.W
  dbn_vW(0) = weight1.vW
  dbn_b(0) = weight1.b
  dbn_vb(0) = weight1.vb
  dbn_c(0) = weight1.c
  dbn_vc(0) = weight1.vc
  // train layers 2 through n
  for (i <- 2 to dbnconfig.layer - 1) {
    // forward pass
    // x = sigm(repmat(rbm.c', size(x, 1), 1) + x * rbm.W');
    printf("Training Level: %d.\n", i)
    val tmp_bc_w = sc.broadcast(dbn_W(i - 2))
    val tmp_bc_c = sc.broadcast(dbn_c(i - 2))
    val train_d2 = train_d.map { f =>
      val label = f._1
      val x = f._2
      val x2 = DBN.sigm(x * tmp_bc_w.value.t + tmp_bc_c.value.t)
      (label, x2)
    }
    // train layer i
    val weighti = new DBNweight(dbn_W(i - 1), dbn_vW(i - 1), dbn_b(i - 1), dbn_vb(i - 1), dbn_c(i - 1), dbn_vc(i - 1))
    val weight2 = RBMtrain(train_d2, opts, dbnconfig, weighti)
    dbn_W(i - 1) = weight2.W
    dbn_vW(i - 1) = weight2.vW
    dbn_b(i - 1) = weight2.b
    dbn_vb(i - 1) = weight2.vb
    dbn_c(i - 1) = weight2.c
    dbn_vc(i - 1) = weight2.vc
  }
  new DBNModel(dbnconfig, dbn_W, dbn_b, dbn_c)
}
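A hypothetical driver sketch tying this together. The setter names on DBN (setSize and so on) are assumptions in the style of section 2.1, and the data sizes are illustrative; the only facts taken from the code above are the RDD element type (label and features as 1 x n row matrices) and that opts carries Array(batchsize, numepochs) as read in RBMtrain.

import org.apache.spark.{SparkConf, SparkContext}
import breeze.linalg.{DenseMatrix => BDM}

object DBNDriverSketch {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(new SparkConf().setAppName("DBNSketch").setMaster("local[2]"))
    // each record: (label, features), both 1 x n row matrices
    val data = (1 to 100).map { _ =>
      (BDM.zeros[Double](1, 1), BDM.rand[Double](1, 4))
    }
    val train_d = sc.parallelize(data)
    val opts = Array(10.0, 5.0) // batchsize = 10, numepochs = 5
    // hypothetical setters for a 4 -> 3 -> 2 network
    val dbn = new DBN().setSize(Array(4, 3, 2)).setLayer(3).setMomentum(0.5).setAlpha(0.1)
    val model = dbn.DBNtrain(train_d, opts)
    sc.stop()
  }
}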
(11) RBMtrain
The training routine for a single RBM layer (contrastive divergence, CD-1).
/**
 * Deep Belief Network
 * Train a single RBM layer: RBMtrain
 */
def RBMtrain(train_t: RDD[(BDM[Double], BDM[Double])],
  opts: Array[Double],
  dbnconfig: DBNConfig,
  weight: DBNweight): DBNweight = {
  val sc = train_t.sparkContext
  var StartTime = System.currentTimeMillis()
  var EndTime = System.currentTimeMillis()
  // weight variables
  var rbm_W = weight.W
  var rbm_vW = weight.vW
  var rbm_b = weight.b
  var rbm_vb = weight.vb
  var rbm_c = weight.c
  var rbm_vc = weight.vc
  // broadcast the configuration
  val bc_config = sc.broadcast(dbnconfig)
  // number of training samples
  val m = train_t.count
  // number of batches
  val batchsize = opts(0).toInt
  val numepochs = opts(1).toInt
  val numbatches = (m / batchsize).toInt
  // numepochs is the number of passes over the data
  for (i <- 1 to numepochs) {
    StartTime = System.currentTimeMillis()
    val splitW2 = Array.fill(numbatches)(1.0 / numbatches)
    var err = 0.0
    // randomly split the samples into batches by the split weights
    for (l <- 1 to numbatches) {
      // 1 broadcast the weight parameters
      val bc_rbm_W = sc.broadcast(rbm_W)
      val bc_rbm_vW = sc.broadcast(rbm_vW)
      val bc_rbm_b = sc.broadcast(rbm_b)
      val bc_rbm_vb = sc.broadcast(rbm_vb)
      val bc_rbm_c = sc.broadcast(rbm_c)
      val bc_rbm_vc = sc.broadcast(rbm_vc)
      // 2 split the samples
      val train_split2 = train_t.randomSplit(splitW2, System.nanoTime())
      val batch_xy1 = train_split2(l - 1)
      // 3 forward computation
      // v1 = batch;
      // h1 = sigmrnd(repmat(rbm.c', opts.batchsize, 1) + v1 * rbm.W');
      // v2 = sigmrnd(repmat(rbm.b', opts.batchsize, 1) + h1 * rbm.W);
      // h2 = sigm(repmat(rbm.c', opts.batchsize, 1) + v2 * rbm.W');
      // c1 = h1' * v1;
      // c2 = h2' * v2;
      val batch_vh1 = batch_xy1.map { f =>
        val label = f._1
        val v1 = f._2
        val h1 = DBN.sigmrnd((v1 * bc_rbm_W.value.t + bc_rbm_c.value.t))
        val v2 = DBN.sigmrnd((h1 * bc_rbm_W.value + bc_rbm_b.value.t))
        val h2 = DBN.sigm(v2 * bc_rbm_W.value.t + bc_rbm_c.value.t)
        val c1 = h1.t * v1
        val c2 = h2.t * v2
        (label, v1, h1, v2, h2, c1, c2)
      }
      // 4 compute the update directions
      // rbm.vW = rbm.momentum * rbm.vW + rbm.alpha * (c1 - c2) / opts.batchsize;
      // rbm.vb = rbm.momentum * rbm.vb + rbm.alpha * sum(v1 - v2)' / opts.batchsize;
      // rbm.vc = rbm.momentum * rbm.vc + rbm.alpha * sum(h1 - h2)' / opts.batchsize;
      // W update direction
      val vw1 = batch_vh1.map {
        case (label, v1, h1, v2, h2, c1, c2) =>
          c1 - c2
      }
      val initw = BDM.zeros[Double](bc_rbm_W.value.rows, bc_rbm_W.value.cols)
      val (vw2, countw2) = vw1.treeAggregate((initw, 0L))(
        seqOp = (c, v) => {
          // c: (m, count), v: (m)
          val m1 = c._1
          val m2 = m1 + v
          (m2, c._2 + 1)
        },
        combOp = (c1, c2) => {
          // c: (m, count)
          val m1 = c1._1
          val m2 = c2._1
          val m3 = m1 + m2
          (m3, c1._2 + c2._2)
        })
      val vw3 = vw2 / countw2.toDouble
      rbm_vW = bc_config.value.momentum * bc_rbm_vW.value + bc_config.value.alpha * vw3
      // b update direction
      val vb1 = batch_vh1.map {
        case (label, v1, h1, v2, h2, c1, c2) =>
          (v1 - v2)
      }
      val initb = BDM.zeros[Double](bc_rbm_vb.value.cols, bc_rbm_vb.value.rows)
      val (vb2, countb2) = vb1.treeAggregate((initb, 0L))(
        seqOp = (c, v) => {
          // c: (m, count), v: (m)
          val m1 = c._1
          val m2 = m1 + v
          (m2, c._2 + 1)
        },
        combOp = (c1, c2) => {
          // c: (m, count)
          val m1 = c1._1
          val m2 = c2._1
          val m3 = m1 + m2
          (m3, c1._2 + c2._2)
        })
      val vb3 = vb2 / countb2.toDouble
      rbm_vb = bc_config.value.momentum * bc_rbm_vb.value + bc_config.value.alpha * vb3.t
      // c update direction
      val vc1 = batch_vh1.map {
        case (label, v1, h1, v2, h2, c1, c2) =>
          (h1 - h2)
      }
      val initc = BDM.zeros[Double](bc_rbm_vc.value.cols, bc_rbm_vc.value.rows)
      val (vc2, countc2) = vc1.treeAggregate((initc, 0L))(
        seqOp = (c, v) => {
          // c: (m, count), v: (m)
          val m1 = c._1
          val m2 = m1 + v
          (m2, c._2 + 1)
        },
        combOp = (c1, c2) => {
          // c: (m, count)
          val m1 = c1._1
          val m2 = c2._1
          val m3 = m1 + m2
          (m3, c1._2 + c2._2)
        })
      val vc3 = vc2 / countc2.toDouble
      rbm_vc = bc_config.value.momentum * bc_rbm_vc.value + bc_config.value.alpha * vc3.t
      // 5 apply the updates
      // rbm.W = rbm.W + rbm.vW;
      // rbm.b = rbm.b + rbm.vb;
      // rbm.c = rbm.c + rbm.vc;
      rbm_W = bc_rbm_W.value + rbm_vW
      rbm_b = bc_rbm_b.value + rbm_vb
      rbm_c = bc_rbm_c.value + rbm_vc
      // 6 compute the reconstruction error
      val dbne1 = batch_vh1.map {
        case (label, v1, h1, v2, h2, c1, c2) =>
          (v1 - v2)
      }
      val (dbne2, counte) = dbne1.treeAggregate((0.0, 0L))(
        seqOp = (c, v) => {
          // c: (e, count), v: (m)
          val e1 = c._1
          val e2 = (v :* v).sum
          val esum = e1 + e2
          (esum, c._2 + 1)
        },
        combOp = (c1, c2) => {
          // c: (e, count)
          val e1 = c1._1
          val e2 = c2._1
          val esum = e1 + e2
          (esum, c1._2 + c2._2)
        })
      val dbne = dbne2 / counte.toDouble
      err += dbne
    }
    EndTime = System.currentTimeMillis()
    // print the error
    printf("epoch: numepochs = %d , Took = %d seconds; Average reconstruction error is: %f.\n", i,
      scala.math.ceil((EndTime - StartTime).toDouble / 1000).toLong, err / numbatches.toDouble)
  }
  new DBNweight(rbm_W, rbm_vW, rbm_b, rbm_vb, rbm_c, rbm_vc)
}
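To isolate the learning rule from the Spark plumbing, here is a minimal single-record CD-1 sketch; it is a hedged distillation, not the author's code, reproducing the per-record computation inside batch_vh1.map and the momentum update that follows (the 4/3 sizes, momentum 0.5, and alpha 0.1 are illustrative).

import breeze.linalg.{DenseMatrix => BDM}
import breeze.numerics.{exp => Bexp}

object CD1Sketch {
  def sigm(p: BDM[Double]): BDM[Double] = 1.0 / (Bexp(p * (-1.0)) + 1.0)
  def sigmrnd(p: BDM[Double]): BDM[Double] = {
    val s = sigm(p)
    val r = BDM.rand[Double](s.rows, s.cols)
    new BDM(s.rows, s.cols, (s :> r).data.map(f => if (f) 1.0 else 0.0))
  }
  // one CD-1 step on a single row v1 (1 x visible)
  def cd1(v1: BDM[Double], W: BDM[Double], b: BDM[Double], c: BDM[Double])
    : (BDM[Double], BDM[Double], BDM[Double]) = {
    val h1 = sigmrnd(v1 * W.t + c.t) // sample hidden units from the data
    val v2 = sigmrnd(h1 * W + b.t)   // reconstruct the visible units
    val h2 = sigm(v2 * W.t + c.t)    // hidden probabilities of the reconstruction
    (h1.t * v1 - h2.t * v2,          // W direction: c1 - c2
      (v1 - v2).t,                   // b direction
      (h1 - h2).t)                   // c direction
  }
  def main(args: Array[String]): Unit = {
    var W = BDM.rand[Double](3, 4) * 0.1 // 3 hidden x 4 visible
    val b = BDM.zeros[Double](4, 1)
    val c = BDM.zeros[Double](3, 1)
    var vW = BDM.zeros[Double](3, 4)
    val v1 = BDM.rand[Double](1, 4)
    val (dW, _, _) = cd1(v1, W, b, c)
    vW = vW * 0.5 + dW * 0.1             // momentum * vW + alpha * direction
    W = W + vW                           // apply the update
  }
}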
2.2.4 DBNModel Analysis
(1) DBNModel
DBNModel stores the trained DBN parameters: config (the configuration), dbn_W (weights), dbn_b (visible biases), and dbn_c (hidden biases).
class DBNModel(
  val config: DBNConfig,
  val dbn_W: Array[BDM[Double]],
  val dbn_b: Array[BDM[Double]],
  val dbn_c: Array[BDM[Double]]) extends Serializable {
}
(2) dbnunfoldtonn
dbnunfoldtonn converts the DBN parameters into the parameter layout of a feed-forward neural network (NN).
/**
 * Unfold the DBN model into an NN model
 * (weight conversion)
 */
def dbnunfoldtonn(outputsize: Int): (Array[Int], Int, Array[BDM[Double]]) = {
  // 1 convert the size and layer parameters
  val size = if (outputsize > 0) {
    val size1 = config.size
    val size2 = ArrayBuffer[Int]()
    size2 ++= size1
    size2 += outputsize
    size2.toArray
  } else config.size
  val layer = if (outputsize > 0) config.layer + 1 else config.layer
  // 2 convert the dbn_W parameters
  var initW = ArrayBuffer[BDM[Double]]()
  for (i <- 0 to dbn_W.length - 1) {
    initW += BDM.horzcat(dbn_c(i), dbn_W(i))
  }
  (size, layer, initW.toArray)
}
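A shape check of the horzcat step, assuming only Breeze: the hidden bias becomes the first column of the NN weight matrix, so a 4 -> 3 RBM yields a 3 x 5 NN weight.

import breeze.linalg.{DenseMatrix => BDM}

val W = BDM.zeros[Double](3, 4) // 3 hidden x 4 visible
val c = BDM.zeros[Double](3, 1) // hidden bias
val nnW = BDM.horzcat(c, W)     // bias as the first column
assert(nnW.rows == 3 && nnW.cols == 5)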
Please credit the source when reposting: http://blog.csdn.net/sunbow0
这里是作为开发贴的总结. 现在plugin和workflow系列已经终结. 希望这些教程能给想入坑的小伙伴一些帮忙. CRM中文教材不多, 我会不断努力为大家提供更优质的教程. Plugin 开发系列 ...