Annotated Code from Hinton, Author of the Original Deep Learning Papers

The Matlab example code comes in two parts, each corresponding to a different paper:

1. Reducing the Dimensionality of Data with Neural Networks

   mnistdeepauto.m, backprop.m, rbmhidlinear.m

2. A Fast Learning Algorithm for Deep Belief Nets

   mnistclassify.m, backpropclassify.m

The remaining files are shared between the two.

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
mnistclassify.m
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

clear all
close all

maxepoch=50;  % number of training epochs
numhid=500; numpen=500; numpen2=2000;  % sizes of the three hidden layers

fprintf(1,'Converting Raw files into Matlab format \n');
converter;

fprintf(1,'Pretraining a deep autoencoder. \n');
fprintf(1,'The Science paper used 50 epochs. This uses %3i \n', maxepoch);

makebatches;  % split the data into minibatches
[numcases numdims numbatches]=size(batchdata);  % dimensions of batchdata
%%numcases   number of cases per batch
%%numdims    dimensionality of each data vector
%%numbatches number of batches

fprintf(1,'Pretraining Layer 1 with RBM: %d-%d \n',numdims,numhid);  % image input layer to first hidden layer
restart=1;  % tell rbm.m to (re)initialize its parameters
rbm;        % train this layer's RBM
hidrecbiases=hidbiases;  % keep the hidden-layer biases
save mnistvhclassify vishid hidrecbiases visbiases;

fprintf(1,'\nPretraining Layer 2 with RBM: %d-%d \n',numhid,numpen);  % first hidden layer to second hidden layer
batchdata=batchposhidprobs;  % the previous RBM's hidden-layer output becomes this RBM's input
numhid=numpen;  % set this RBM's hidden size; its visible size is implied by the input data
restart=1;
rbm;
hidpen=vishid; penrecbiases=hidbiases; hidgenbiases=visbiases;  % as above: keep the weights and biases
save mnisthpclassify hidpen penrecbiases hidgenbiases;

fprintf(1,'\nPretraining Layer 3 with RBM: %d-%d \n',numpen,numpen2);  % second to third hidden layer; otherwise as above
batchdata=batchposhidprobs;
numhid=numpen2;
restart=1;
rbm;
hidpen2=vishid; penrecbiases2=hidbiases; hidgenbiases2=visbiases;
save mnisthp2classify hidpen2 penrecbiases2 hidgenbiases2;

backpropclassify;

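The pattern above (train one RBM, then feed its hidden probabilities in as the data for the next) is greedy layer-wise pretraining. A minimal sketch of the same idea as a loop; train_rbm is a hypothetical wrapper, whereas Hinton's script actually communicates through the globals restart, numhid, batchdata and batchposhidprobs shared with rbm.m:

layersizes = [500 500 2000];   % the hidden-layer sizes used above
input = batchdata;             % numcases x numdims x numbatches
W = cell(1,3); hb = cell(1,3); vb = cell(1,3);
for ii = 1:length(layersizes)
  % train_rbm is hypothetical; it stands in for setting numhid/restart and calling rbm
  [W{ii}, hb{ii}, vb{ii}, input] = train_rbm(input, layersizes(ii), maxepoch);
end
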
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
backpropclassify.m
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
maxepoch=200;
fprintf(1,'\nTraining discriminative model on MNIST by minimizing cross entropy error. \n');  % minimize the cross-entropy
fprintf(1,'60 batches of 1000 cases each. \n');

load mnistvhclassify  % load the pretrained weights and biases of every layer
load mnisthpclassify
load mnisthp2classify

makebatches;  % split the data into minibatches
[numcases numdims numbatches]=size(batchdata);
N=numcases;  % number of cases per batch

%%%% PREINITIALIZE WEIGHTS OF THE DISCRIMINATIVE MODEL%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

w1=[vishid; hidrecbiases];  % layer-1-to-layer-2 weights with the layer-2 biases appended as a final row
w2=[hidpen; penrecbiases];  % likewise
w3=[hidpen2; penrecbiases2];  % likewise
w_class = 0.1*randn(size(w3,2)+1,10);  % random (top-layer width + 1) x 10 matrix for the output layer
%%%%%%%%%% END OF PREINITIALIZATION OF WEIGHTS %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

l1=size(w1,1)-1;  % number of units feeding each layer
l2=size(w2,1)-1;
l3=size(w3,1)-1;
l4=size(w_class,1)-1;  % number of units in the top hidden layer
l5=10;  % number of label units
test_err=[];
train_err=[];

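% Note: appending a ones column to the data and a bias row to the weight matrix
% folds the bias into a single matrix multiply, which is why the forward passes
% below keep tacking on ones(N,1):
%   [x 1] * [W; b] == x*W + b
% e.g.  x = rand(1,3); W = rand(3,2); b = rand(1,2);
%       max(abs([x 1]*[W; b] - (x*W + b)))   % numerically zero
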
for epoch = 1:maxepoch

%%%%%%%%%%%%%%%%%%%% COMPUTE TRAINING MISCLASSIFICATION ERROR %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
err=0;
err_cr=0;
counter=0;
[numcases numdims numbatches]=size(batchdata);
%%numcases   number of cases per batch
%%numdims    dimensionality of each data vector
%%numbatches number of batches
N=numcases;  % number of cases per batch
for batch = 1:numbatches
  data = [batchdata(:,:,batch)];       % read one batch of data
  target = [batchtargets(:,:,batch)];  % read the targets for this batch
  data = [data ones(N,1)];             % append an N x 1 column of ones for the biases
  w1probs = 1./(1 + exp(-data*w1)); w1probs = [w1probs ones(N,1)];  % sigmoid activations layer by layer, as in standard backprop
  w2probs = 1./(1 + exp(-w1probs*w2)); w2probs = [w2probs ones(N,1)];
  w3probs = 1./(1 + exp(-w2probs*w3)); w3probs = [w3probs ones(N,1)];

  targetout = exp(w3probs*w_class);  % unnormalized final output, an N x 10 matrix
  %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
  % Output processing for the label layer: w3probs*w_class is the input to the labels.
  % Exactly one unit should win, and the winner is chosen from the probabilities
  % computed below, a "softmax" group over the 10 label units.
  %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
  targetout = targetout./repmat(sum(targetout,2),1,10);  % divide each of the 10 label outputs by the row sum

  [I J]=max(targetout,[],2);  % per row, the maximum predicted probability and its column (the predicted class)
  [I1 J1]=max(target,[],2);   % per row, the maximum of the target vector and its column (the true class)
  counter=counter+length(find(J==J1));  % count the correctly classified cases
  err_cr = err_cr- sum(sum( target(:,1:end).*log(targetout))) ;  % accumulate the cross-entropy
end
train_err(epoch)=(numcases*numbatches-counter);  % total number of misclassified training cases
train_crerr(epoch)=err_cr/numbatches;            % average cross-entropy per batch

%%%%%%%%%%%%%% END OF COMPUTING TRAINING MISCLASSIFICATION ERROR %%%%%%%%%%%%%%%%%%%%%%%%%%%%%

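% Note: exp() followed by a row-sum division can overflow when the logits are
% large. A numerically safer softmax (a sketch, not part of Hinton's code)
% subtracts the row maximum first, which leaves the result unchanged:
%   A = w3probs*w_class;
%   A = A - repmat(max(A,[],2),1,size(A,2));
%   targetout = exp(A);
%   targetout = targetout./repmat(sum(targetout,2),1,size(A,2));
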
%%%%%%%%%%%%%%%%%%%% COMPUTE TEST MISCLASSIFICATION ERROR %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
err=0;
err_cr=0;
counter=0;
[testnumcases testnumdims testnumbatches]=size(testbatchdata);

N=testnumcases;
for batch = 1:testnumbatches   % same procedure as the training error above, on the test set
  data = [testbatchdata(:,:,batch)];
  target = [testbatchtargets(:,:,batch)];
  data = [data ones(N,1)];
  w1probs = 1./(1 + exp(-data*w1)); w1probs = [w1probs ones(N,1)];
  w2probs = 1./(1 + exp(-w1probs*w2)); w2probs = [w2probs ones(N,1)];
  w3probs = 1./(1 + exp(-w2probs*w3)); w3probs = [w3probs ones(N,1)];
  targetout = exp(w3probs*w_class);
  targetout = targetout./repmat(sum(targetout,2),1,10);

  [I J]=max(targetout,[],2);
  [I1 J1]=max(target,[],2);
  counter=counter+length(find(J==J1));
  err_cr = err_cr- sum(sum( target(:,1:end).*log(targetout))) ;
end
test_err(epoch)=(testnumcases*testnumbatches-counter);
test_crerr(epoch)=err_cr/testnumbatches;
fprintf(1,'Before epoch %d Train # misclassified: %d (from %d). Test # misclassified: %d (from %d) \t \t \n',...
        epoch,train_err(epoch),numcases*numbatches,test_err(epoch),testnumcases*testnumbatches);

%%%%%%%%%%%%%% END OF COMPUTING TEST MISCLASSIFICATION ERROR %%%%%%%%%%%%%%%%%%%%%%

tt=0;
for batch = 1:numbatches/10
fprintf(1,'epoch %d batch %d\r',epoch,batch);

%%%%%%%%%%% COMBINE 10 MINIBATCHES INTO 1 LARGER MINIBATCH %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% Combine 10 minibatches of 100 into one batch of 1000 cases, then fine-tune with conjugate gradient.
 tt=tt+1;
 data=[];
 targets=[];
 for kk=1:10
  data=[data
        batchdata(:,:,(tt-1)*10+kk)];  % stack the 10 minibatches vertically
  targets=[targets
        batchtargets(:,:,(tt-1)*10+kk)];
 end

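% Note: the growing-array concatenation above is easy to read but quadratic in
% the worst case. An equivalent vectorized pull of the same 10 minibatches
% (a sketch, assuming the same batchdata layout):
%   idx = (tt-1)*10 + (1:10);
%   data    = reshape(permute(batchdata(:,:,idx),   [1 3 2]), [], size(batchdata,2));
%   targets = reshape(permute(batchtargets(:,:,idx),[1 3 2]), [], size(batchtargets,2));
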
%%%%%%%%%%%%%%% PERFORM CONJUGATE GRADIENT WITH 3 LINESEARCHES %%%%%%%%%%%%%%%%%%%%%%%%%%%%%
 max_iter=3;  % number of line searches

 if epoch<6  % First update top-level weights holding other weights fixed.
  N = size(data,1);       % number of rows of data
  XX = [data ones(N,1)];  % append a ones column to each row for the biases
  w1probs = 1./(1 + exp(-XX*w1)); w1probs = [w1probs ones(N,1)];
  w2probs = 1./(1 + exp(-w1probs*w2)); w2probs = [w2probs ones(N,1)];
  w3probs = 1./(1 + exp(-w2probs*w3)); %w3probs = [w3probs ones(N,1)];

  VV = [w_class(:)']';  % unroll w_class into a single column vector, as minimize requires
  Dim = [l4; l5];       % unit counts of the last two layers: the 2000-unit hidden layer and the 10 labels
  [X, fX] = minimize(VV,'CG_CLASSIFY_INIT',max_iter,Dim,w3probs,targets);  % train only the top two layers; see the function definition
  % minimize is Carl Rasmussen's "minimize" code
  %%------------------ parameters ------------------%%
  % VV       the unrolled weight vector; it must be a single column (D by 1)
  % X        the optimized parameters of f = "CG_CLASSIFY_INIT"
  % fX       the objective values visited (the objective itself returns df, the gradient w.r.t. X)
  % max_iter if positive, the number of line searches; if negative, the maximum number of function evaluations
  %%-------------------------------------------------%
  w_class = reshape(X,l4+1,l5);  % restore the weight-matrix shape

 else  % fine-tune the whole network
  VV = [w1(:)' w2(:)' w3(:)' w_class(:)']';  % unroll all weight matrices into one column vector
  Dim = [l1; l2; l3; l4; l5];  % pass in the unit count of every layer
  [X, fX] = minimize(VV,'CG_CLASSIFY',max_iter,Dim,data,targets);

  w1 = reshape(X(1:(l1+1)*l2),l1+1,l2);  % restore w1
  xxx = (l1+1)*l2;  % running offset used to restore the remaining matrices
  w2 = reshape(X(xxx+1:xxx+(l2+1)*l3),l2+1,l3);
  xxx = xxx+(l2+1)*l3;
  w3 = reshape(X(xxx+1:xxx+(l3+1)*l4),l3+1,l4);
  xxx = xxx+(l3+1)*l4;
  w_class = reshape(X(xxx+1:xxx+(l4+1)*l5),l4+1,l5);

 end
%%%%%%%%%%%%%%% END OF CONJUGATE GRADIENT WITH 3 LINESEARCHES %%%%%%%%%%%%%%%%%%%%%%%%%%%%%

 end

 save mnistclassify_weights w1 w2 w3 w_class
 save mnistclassify_error test_err test_crerr train_err train_crerr;

end

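minimize.m, Carl Rasmussen's conjugate-gradient optimizer, accepts any objective that maps an unrolled parameter column vector to a scalar value f and a gradient df of the same shape. A toy illustration of that contract; CG_TOY is a made-up objective, saved in its own file:

% CG_TOY.m  (hypothetical file, for illustration only)
function [f, df] = CG_TOY(x, A, b)
f  = 0.5*x'*A*x - b'*x;   % scalar objective value
df = A*x - b;             % gradient, same shape as x

% Driver: three line searches from a random start, mirroring max_iter=3 above.
A = [3 1; 1 2]; b = [1; 1];
[x, fx] = minimize(randn(2,1), 'CG_TOY', 3, A, b);
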
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
rbm.m
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
epsilonw      = 0.1;   % Learning rate for weights
epsilonvb     = 0.1;   % Learning rate for biases of visible units
epsilonhb     = 0.1;   % Learning rate for biases of hidden units
weightcost    = 0.0002;
initialmomentum = 0.5;
finalmomentum   = 0.9;

[numcases numdims numbatches]=size(batchdata);
%%numcases   number of cases per batch
%%numdims    dimensionality of each data vector
%%numbatches number of batches

if restart ==1,
  restart=0;
  epoch=1;

% Initializing symmetric weights and biases.
  vishid     = 0.1*randn(numdims, numhid);  % visible-to-hidden weights
  hidbiases  = zeros(1,numhid);   % hidden-unit biases
  visbiases  = zeros(1,numdims);  % visible-unit biases

  poshidprobs = zeros(numcases,numhid);  % hidden-unit probabilities, positive phase
  neghidprobs = zeros(numcases,numhid);  % hidden-unit probabilities, negative phase
  posprods    = zeros(numdims,numhid);   % visible-hidden correlations, positive phase
  negprods    = zeros(numdims,numhid);   % visible-hidden correlations, negative phase
  vishidinc   = zeros(numdims,numhid);   % weight increments between visible and hidden units
  hidbiasinc  = zeros(1,numhid);   % hidden-bias increments
  visbiasinc  = zeros(1,numdims);  % visible-bias increments
  batchposhidprobs=zeros(numcases,numhid,numbatches);  % stores each batch's hidden probabilities; they become the next RBM's input
end

%%%%%%%%%%%%%%%% print the epoch and batch being processed %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
for epoch = epoch:maxepoch,  % main training loop
 fprintf(1,'epoch %d\r',epoch);
 errsum=0;  % reset the accumulated reconstruction error
 for batch = 1:numbatches,  % process one minibatch at a time
 fprintf(1,'epoch %d batch %d\r',epoch,batch);

%%%%%%%%% START POSITIVE PHASE %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
  data = batchdata(:,:,batch);  % all data of the current batch: v_i
  poshidprobs = 1./(1 + exp(-data*vishid - repmat(hidbiases,numcases,1)));  % hidden probabilities h_i from the upward pass
  batchposhidprobs(:,:,batch)=poshidprobs;  % store them; after the final epoch they serve as input to the next layer
  posprods    = data' * poshidprobs;  % contrastive-divergence statistics <v_i h_i>

  poshidact   = sum(poshidprobs);  % hidden activation probabilities summed over cases
  posvisact   = sum(data);         % visible activations summed over cases

%%%%%%%%% END OF POSITIVE PHASE %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
  poshidstates = poshidprobs > rand(numcases,numhid);  % Gibbs sampling: binary hidden states

%%%%%%%%% START NEGATIVE PHASE %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
  negdata = 1./(1 + exp(-poshidstates*vishid' - repmat(visbiases,numcases,1)));  % reconstruct v_{i+1} from h_i
  neghidprobs = 1./(1 + exp(-negdata*vishid - repmat(hidbiases,numcases,1)));    % compute h_{i+1} from v_{i+1}
  negprods  = negdata'*neghidprobs;  % contrastive-divergence statistics <v_{i+1} h_{i+1}>

  neghidact = sum(neghidprobs);
  negvisact = sum(negdata);

%%%%%%%%% END OF NEGATIVE PHASE %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
  err= sum(sum( (data-negdata).^2 ));  % squared reconstruction error for this batch
  errsum = err + errsum;               % accumulated over the epoch

  if epoch>5,  % switch the momentum once training has settled
    momentum=finalmomentum;
  else
    momentum=initialmomentum;
  end;

%%%%%%%%% UPDATE WEIGHTS AND BIASES %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
  vishidinc = momentum*vishidinc + ...
              epsilonw*( (posprods-negprods)/numcases - weightcost*vishid);  % weight increment
  visbiasinc = momentum*visbiasinc + (epsilonvb/numcases)*(posvisact-negvisact);  % visible-bias increment
  hidbiasinc = momentum*hidbiasinc + (epsilonhb/numcases)*(poshidact-neghidact);  % hidden-bias increment

  vishid = vishid + vishidinc;
  visbiases = visbiases + visbiasinc;
  hidbiases = hidbiases + hidbiasinc;

%%%%%%%%%%%%%%%% END OF UPDATES %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

 end
 fprintf(1, 'epoch %4i error %6.1f \n', epoch, errsum);
end;

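Once rbm.m returns, a quick sanity check on the learned weights is one deterministic up-down pass: encode a minibatch into hidden probabilities, decode it back, and look at the error. A sketch using the variables as they stand right after the first call to rbm in mnistclassify.m (later calls overwrite batchdata and vishid):

v  = batchdata(:,:,1);                                          % one minibatch of images
h  = 1./(1 + exp(-v*vishid - repmat(hidbiases,size(v,1),1)));   % encode
vr = 1./(1 + exp(-h*vishid' - repmat(visbiases,size(v,1),1)));  % decode (reconstruction)
fprintf(1,'reconstruction error %6.1f \n', sum(sum((v-vr).^2)));
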
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
CG_CLASSIFY_INIT.M
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
function [f, df] = CG_CLASSIFY_INIT(VV,Dim,w3probs,target);  % CG training of the top two layers only
l1 = Dim(1);
l2 = Dim(2);
N = size(w3probs,1);
% Do deconversion.
w_class = reshape(VV,l1+1,l2);   % restore the weight-matrix shape
w3probs = [w3probs ones(N,1)];   % append a ones column for the biases

targetout = exp(w3probs*w_class);  % label-layer output, a numcases x numlabels matrix
targetout = targetout./repmat(sum(targetout,2),1,10);  % softmax normalization, as in backpropclassify.m
f = -sum(sum( target(:,1:end).*log(targetout))) ;  % cross-entropy loss

IO = (targetout-target(:,1:end));  % difference between outputs and targets
Ix_class=IO;  % for softmax with cross-entropy, the output delta is simply (output - target); no extra sigmoid-derivative factor
dw_class = w3probs'*Ix_class;  % gradient of f with respect to w_class

df = [dw_class(:)']';  % unrolled into a column, as minimize requires

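Hand-written [f, df] pairs like this are worth checking against finite differences before handing them to minimize. A sketch with arbitrary illustrative sizes (l5 stays 10 because the softmax width is hard-coded above):

% Finite-difference check of the gradient at a random point.
l4 = 20; l5 = 10; N = 5;          % tiny sizes for a fast check
w3probs = rand(N,l4);
labels = ceil(l5*rand(N,1));      % a random class for each case
target = zeros(N,l5);
target(sub2ind([N l5],(1:N)',labels)) = 1;
VV = 0.1*randn((l4+1)*l5,1);
[f, df] = CG_CLASSIFY_INIT(VV,[l4; l5],w3probs,target);
del = 1e-5; k = 7;                % probe one arbitrary coordinate
e = zeros(size(VV)); e(k) = del;
fplus  = CG_CLASSIFY_INIT(VV+e,[l4; l5],w3probs,target);
fminus = CG_CLASSIFY_INIT(VV-e,[l4; l5],w3probs,target);
fprintf(1,'analytic %g vs finite difference %g \n', df(k), (fplus-fminus)/(2*del));
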
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
CG_CLASSIFY.M
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% This function fine-tunes all of the weights at once.
% Each step mirrors the annotated CG_CLASSIFY_INIT.m above.
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
function [f, df] = CG_CLASSIFY(VV,Dim,XX,target);

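For completeness, a sketch of what the full fine-tuning objective looks like when the pattern of CG_CLASSIFY_INIT is extended through all layers; this reconstructs the released CG_CLASSIFY.m from its structure, so consult the distributed file for the authoritative version. The weights are unpacked exactly as backpropclassify.m repacks them; the softmax delta is then backpropagated, multiplying by the sigmoid derivative probs.*(1-probs) at each hidden layer and dropping the appended bias column before each step down:

function [f, df] = CG_CLASSIFY(VV,Dim,XX,target);
l1 = Dim(1); l2 = Dim(2); l3 = Dim(3); l4 = Dim(4); l5 = Dim(5);
N = size(XX,1);

% Unpack the weight vector into the four matrices (inverse of the unrolling).
w1 = reshape(VV(1:(l1+1)*l2),l1+1,l2);
xxx = (l1+1)*l2;
w2 = reshape(VV(xxx+1:xxx+(l2+1)*l3),l2+1,l3);
xxx = xxx+(l2+1)*l3;
w3 = reshape(VV(xxx+1:xxx+(l3+1)*l4),l3+1,l4);
xxx = xxx+(l3+1)*l4;
w_class = reshape(VV(xxx+1:xxx+(l4+1)*l5),l4+1,l5);

% Forward pass, identical to backpropclassify.m.
XX = [XX ones(N,1)];
w1probs = 1./(1 + exp(-XX*w1)); w1probs = [w1probs ones(N,1)];
w2probs = 1./(1 + exp(-w1probs*w2)); w2probs = [w2probs ones(N,1)];
w3probs = 1./(1 + exp(-w2probs*w3)); w3probs = [w3probs ones(N,1)];
targetout = exp(w3probs*w_class);
targetout = targetout./repmat(sum(targetout,2),1,10);
f = -sum(sum( target(:,1:end).*log(targetout))) ;  % cross-entropy

% Backward pass: softmax delta, then sigmoid deltas layer by layer.
IO = (targetout-target(:,1:end));
Ix_class = IO;
dw_class = w3probs'*Ix_class;

Ix3 = (Ix_class*w_class').*w3probs.*(1-w3probs);  % sigmoid derivative
Ix3 = Ix3(:,1:end-1);                             % drop the bias column
dw3 = w2probs'*Ix3;

Ix2 = (Ix3*w3').*w2probs.*(1-w2probs);
Ix2 = Ix2(:,1:end-1);
dw2 = w1probs'*Ix2;

Ix1 = (Ix2*w2').*w1probs.*(1-w1probs);
Ix1 = Ix1(:,1:end-1);
dw1 = XX'*Ix1;

df = [dw1(:)' dw2(:)' dw3(:)' dw_class(:)']';  % unrolled gradient
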
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
rbmhidlinear.m
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% Identical to rbm.m except that the hidden units are linear rather than logistic.
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

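Concretely, the substantive change in rbmhidlinear.m is in the positive phase and in how hidden states are sampled: the sigmoid is dropped and unit-variance Gaussian noise replaces the binary threshold. A sketch of the two changed lines, reconstructed from the released file (consult it for the exact code):

% Linear hidden units (positive phase): no sigmoid squashing.
poshidprobs = (data*vishid) + repmat(hidbiases,numcases,1);
% Sampling adds unit-variance Gaussian noise instead of thresholding:
poshidstates = poshidprobs + randn(numcases,numhid);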