A big wave of MATLAB code is incoming -- brace yourself!

This is a hands-on sparse autoencoder exercise. The task is roughly the following: sample 10,000 small 8*8 patches from a set of natural images, then train a sparse autoencoder and inspect the features learned by its hidden layer. The network has 3 layers: an input layer with 64 units, a hidden layer with 25 units, and (of course) an output layer with 64 units.
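For reference (this is just my summary of the sparse autoencoder notes, not part of the starter code), the objective minimized in the code below combines a reconstruction term, a weight-decay term and a KL sparsity penalty over the m = 10000 patches:

$$ J(W,b) = \frac{1}{2m}\sum_{i=1}^{m}\left\lVert h_{W,b}\big(x^{(i)}\big)-x^{(i)}\right\rVert^2 + \frac{\lambda}{2}\left(\lVert W^{(1)}\rVert_F^2+\lVert W^{(2)}\rVert_F^2\right) + \beta\sum_{j=1}^{25}\mathrm{KL}\!\left(\rho\,\middle\|\,\hat\rho_j\right) $$

Here $\rho$ is sparsityParam, $\hat\rho_j$ is the average activation of hidden unit $j$ over the training set, and $\lambda$, $\beta$ are the lambda and beta weights set in STEP 0 below.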

The main script proceeds in five steps; the implementation of every helper function is listed further down.

%%======================================================================
%% STEP 0: Here we provide the relevant parameter values that will
% allow your sparse autoencoder to get good filters; you do not need to
% change the parameters below.

visibleSize = 8*8;     % number of input units
hiddenSize = 25;       % number of hidden units
sparsityParam = 0.01;  % desired average activation of the hidden units.
                       % (This was denoted by the Greek alphabet rho,
                       % which looks like a lower-case "p",
                       % in the lecture notes).
lambda = 0.0001;       % weight decay parameter
beta = 3;              % weight of sparsity penalty term

%%======================================================================
%% STEP 1: Implement sampleIMAGES
%
% After implementing sampleIMAGES, the display_network command should
% display a random sample of 200 patches from the dataset
patches = sampleIMAGES;
display_network(patches(:,randi(size(patches,2),200,1)),8);

% Obtain random parameters theta
theta = initializeParameters(hiddenSize, visibleSize);
%%======================================================================
%% STEP 2: Implement sparseAutoencoderCost
%
% You can implement all of the components (squared error cost, weight decay term,
% sparsity penalty) in the cost function at once, but it may be easier to do
% it step-by-step and run gradient checking (see STEP 3) after each step. We
% suggest implementing the sparseAutoencoderCost function using the following steps:
%
% (a) Implement forward propagation in your neural network, and implement the
%     squared error term of the cost function. Implement backpropagation to
%     compute the derivatives. Then (using lambda=beta=0), run Gradient Checking
%     to verify that the calculations corresponding to the squared error cost
%     term are correct.
%
% (b) Add in the weight decay term (in both the cost function and the derivative
%     calculations), then re-run Gradient Checking to verify correctness.
%
% (c) Add in the sparsity penalty term, then re-run Gradient Checking to
%     verify correctness.
%
% Feel free to change the training settings when debugging your
% code. (For example, reducing the training set size or
% number of hidden units may make your code run faster; and setting beta
% and/or lambda to zero may be helpful for debugging.) However, in your
% final submission of the visualized weights, please use parameters we
% gave in Step 0 above.

[cost, grad] = sparseAutoencoderCost(theta, visibleSize, hiddenSize, ...
                                     lambda, sparsityParam, beta, patches);
%%======================================================================
%% STEP 3: Gradient Checking
%
% Hint: If you are debugging your code, performing gradient checking on smaller models
% and smaller training sets (e.g., using only 10 training examples and 1-2 hidden
% units) may speed things up.

% First, let's make sure your numerical gradient computation is correct for a
% simple function. After you have implemented computeNumericalGradient.m,
% run the following:
checkNumericalGradient();

% Now we can use it to check your cost function and derivative calculations
% for the sparse autoencoder.
numgrad = computeNumericalGradient( @(x) sparseAutoencoderCost(x, visibleSize, ...
                                    hiddenSize, lambda, sparsityParam, beta, patches), theta);

% Use this to visually compare the gradients side by side
disp([numgrad grad]);

% Compare numerically computed gradients with the ones obtained from backpropagation
diff = norm(numgrad-grad)/norm(numgrad+grad);
disp(diff); % Should be small. In our implementation, these values are
            % usually less than 1e-9.
% Once this is working, congratulations!
%%======================================================================
%% STEP 4: After verifying that your implementation of
% sparseAutoencoderCost is correct, you can start training your sparse
% autoencoder with minFunc (L-BFGS).

% Randomly initialize the parameters
theta = initializeParameters(hiddenSize, visibleSize);

% Use minFunc to minimize the function
addpath minFunc/
options.Method = 'lbfgs'; % Here, we use L-BFGS to optimize our cost
                          % function. Generally, for minFunc to work, you
                          % need a function pointer with two outputs: the
                          % function value and the gradient. In our problem,
                          % sparseAutoencoderCost.m satisfies this.
options.maxIter = 400;    % Maximum number of iterations of L-BFGS to run
options.display = 'on';
[opttheta, cost] = minFunc( @(p) sparseAutoencoderCost(p, visibleSize, hiddenSize, ...
                            lambda, sparsityParam, beta, patches), theta, options);
%%======================================================================
%% STEP 5: Visualization

W1 = reshape(opttheta(1:hiddenSize*visibleSize), hiddenSize, visibleSize);
display_network(W1', 12);

print -djpeg weights.jpg % save the visualization to a file
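% (Not part of the starter code:) if training went well, the visualized rows
% of W1 typically look like localized, oriented edge detectors (Gabor-like
% filters), one 8x8 filter per hidden unit.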

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% STEP 1 helpers %%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%% Three functions: sampleIMAGES, normalizeData, initializeParameters %%%%
function patches = sampleIMAGES()
load IMAGES;    % load the 10 original 512*512 images

patchsize = 8;  % patch size to sample
numpatches = 10000;

% Initialize the matrix to zeros; it is 64*10000, one patch per column.
patches = zeros(patchsize*patchsize, numpatches);

% IMAGES is a 3-D array containing 10 images: IMAGES(:,:,6) is the 512x512
% 2-D array for the 6th image, and "imagesc(IMAGES(:,:,6)), colormap gray;"
% visualizes it. (These images appear to have been preprocessed with whitening.)
% IMAGES(21:30,21:30,1) is the patch from (21,21) to (30,30) of the first image.

% Randomly pick 1000 patches from each image, 10000 patches in total.
for imageNum = 1:10
    [rowNum colNum] = size(IMAGES(:,:,imageNum));
    % sample 1000 patches from this image
    for patchNum = 1:1000
        % pick the row/column coordinates of the patch's top-left corner
        xPos = randi([1, rowNum-patchsize+1]);
        yPos = randi([1, colNum-patchsize+1]);
        % write the patch into the corresponding column
        patches(:,(imageNum-1)*1000+patchNum) = ...
            reshape(IMAGES(xPos:xPos+7, yPos:yPos+7, imageNum), 64, 1);
    end
end
% Because the autoencoder uses the sigmoid activation, whose outputs lie in
% [0,1], and we want h_{W,b}(x) to approximate x, the input x also has to be
% squashed into that range, hence the normalization below.
patches = normalizeData(patches);
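% (Illustrative sanity check, not in the original exercise: after
% normalizeData every value should lie in [0.1, 0.9].)
assert(all(patches(:) >= 0.1 & patches(:) <= 0.9));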
end

% Normalization function; outliers are truncated using the 3-sigma rule
% (values beyond +/-3 standard deviations are clipped).
function patches = normalizeData(patches)
% Subtract the mean of each patch.
patches = bsxfun(@minus, patches, mean(patches));
% s = std(X): for a vector X this returns the standard deviation (note the
% denominator is n-1, not n), i.e. the square root of an unbiased estimate of
% the variance of X (assuming independent, identically distributed samples).
% For a matrix X it returns a row vector with the standard deviation of each column.
pstd = 3 * std(patches(:));
patches = max(min(patches, pstd), -pstd) / pstd;
% Rescale from [-1, 1] to [0.1, 0.9].
patches = (patches + 1) * 0.4 + 0.1;
end

% Initialize the parameters.
function theta = initializeParameters(hiddenSize, visibleSize)
% Initialize parameters randomly based on layer sizes.
% We'll choose weights uniformly from the interval [-r, r].
r = sqrt(6) / sqrt(hiddenSize+visibleSize+1);
% rand(a,b) returns an a*b matrix of uniform random numbers in [0.0, 1.0],
W1 = rand(hiddenSize, visibleSize) * 2 * r - r;
% so rand(a,b)*2*r lies in (0, 2r) and rand(a,b)*2*r - r lies in (-r, r).
W2 = rand(visibleSize, hiddenSize) * 2 * r - r;
b1 = zeros(hiddenSize, 1);  % biases feeding into the hidden units
b2 = zeros(visibleSize, 1); % biases feeding into the output layer
% Unroll the matrices and vectors into a single parameter vector.
theta = [W1(:) ; W2(:) ; b1(:) ; b2(:)];
% Parameter initialization done.
end
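% (Added note:) theta stacks W1, W2, b1 and b2 into one column vector, so
% numel(theta) = 2*hiddenSize*visibleSize + hiddenSize + visibleSize,
% i.e. 3289 for the 64-25-64 network used in this exercise.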

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% STEP 2 helper %%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%% Returns the value and gradient of the sparse cost function %%%%%%%%%
function [cost,grad] = sparseAutoencoderCost(theta, visibleSize, hiddenSize, ...
                                             lambda, sparsityParam, beta, data)
% visibleSize:   number of input units
% hiddenSize:    number of hidden units
% lambda:        weight decay parameter
% sparsityParam: the desired average activation rho of the hidden units
% beta:          weight of the sparsity penalty term
% data:          64x10000 matrix of training data; data(:,i) is the i-th example.
% The parameters were packed into a single vector because the L-BFGS solver
% works on vectors; here we unpack that long vector back into the per-layer
% weight matrices and bias vectors.
% The first hiddenSize*visibleSize elements of theta are reshaped into W1.
W1 = reshape(theta(1:hiddenSize*visibleSize), hiddenSize, visibleSize);

% The remaining blocks are unpacked in the same order they were packed.
W2 = reshape(theta(hiddenSize*visibleSize+1:2*hiddenSize*visibleSize), visibleSize, hiddenSize);
b1 = theta(2*hiddenSize*visibleSize+1:2*hiddenSize*visibleSize+hiddenSize);
b2 = theta(2*hiddenSize*visibleSize+hiddenSize+1:end);

% Gradient matrices matching each parameter.
cost = 0;
W1grad = zeros(size(W1));
W2grad = zeros(size(W2));
b1grad = zeros(size(b1));
b2grad = zeros(size(b2));

Jcost = 0;   % reconstruction (squared error) term
Jweight = 0; % weight decay penalty
Jsparse = 0; % sparsity penalty
[n m] = size(data); % m is the number of examples, n the number of features

% Forward pass: compute the linear inputs and activations of each layer.
% W1 is a hiddenSize*visibleSize matrix and data is visibleSize*m, while
% repmat(b1,1,m) replicates the bias vector b1 into a hiddenSize*m matrix.
% Following z^(l+1) = W^(l)*a^(l) + b^(l):
% z2 holds the hidden-layer inputs for all 10000 examples; it is a
% hiddenSize*m matrix, one column per example.
z2 = W1*data + repmat(b1,1,m); % input to the hidden layer
a2 = sigmoid(z2);              % hidden-layer activations
z3 = W2*a2 + repmat(b2,1,m);   % input to the output layer
a3 = sigmoid(z3);              % output-layer activations

% Reconstruction error:
% this is the squared-error part of J(W,b); the outer sum runs over all
% examples and the inner sum over the output components.
Jcost = (0.5/m)*sum(sum((a3-data).^2));
% Weight decay penalty (the lambda factor is applied later, in the total cost).
Jweight = (1/2)*(sum(sum(W1.^2))+sum(sum(W2.^2)));
% Sparsity penalty: sum(matrix,2) sums along rows, i.e. accumulates each
% hidden unit's activation over all examples before averaging.
% rho is a hiddenSize*1 vector.

rho = (1/m).*sum(a2,2); % average activation of each hidden unit (hiddenSize*1)
% Sparsity penalty (KL divergence between sparsityParam and rho).
Jsparse = sum(sparsityParam.*log(sparsityParam./rho)+(1-sparsityParam).*log((1-sparsityParam)./(1-rho)));
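% (Added note:) the line above is the KL-divergence form of the sparsity
% penalty from the lecture notes,
%   Jsparse = sum_j KL(rho || rho_hat_j)
%           = sum_j [ rho*log(rho/rho_hat_j) + (1-rho)*log((1-rho)/(1-rho_hat_j)) ],
% where rho = sparsityParam and rho_hat_j is the j-th entry of the average
% activation vector (called "rho" in this code).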
% Total cost: reconstruction term + weight decay term + sparsity term.
cost = Jcost + lambda*Jweight + beta*Jsparse;
% Error term delta3 of the output layer (l = 3); in an autoencoder the target
% output equals the input, i.e. h_{W,b}(x) should approximate x.
delta3 = -(data-a3).*sigmoidInv(z3);
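% (Added note:) sigmoidInv(z3) equals a3.*(1-a3), so the activations from the
% forward pass could be reused here instead of re-evaluating the sigmoid.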
% Because of the sparsity penalty, the hidden-layer deltas pick up an extra
% term; sterm is that sparsity term, a hiddenSize-dimensional vector.
sterm = beta*(-sparsityParam./rho+(1-sparsityParam)./(1-rho));
% W2 is 64*25 and delta3 is 64*10000 (one column per example x^(i)); W2' is
% the transpose of W2. repmat(sterm,1,m) replicates sterm into a matrix with
% m identical columns. delta2 is therefore hiddenSize*10000.
delta2 = (W2'*delta3+repmat(sterm,1,m)).*sigmoidInv(z2);

% Gradient of W1:
% data' is 10000*64, so delta2*data' is 25*64.
W1grad = W1grad+delta2*data';
W1grad = (1/m)*W1grad+lambda*W1;

% Gradient of W2:
W2grad = W2grad+delta3*a2';
W2grad = (1/m).*W2grad+lambda*W2;

% Gradient of b1: the bias gradient is a vector, so the deltas are summed
% across examples (i.e. along each row).
b1grad = b1grad+sum(delta2,2);
b1grad = (1/m)*b1grad;

% Gradient of b2:
b2grad = b2grad+sum(delta3,2);
b2grad = (1/m)*b2grad;
% Done; roll the gradients back into a single vector.
grad = [W1grad(:) ; W2grad(:) ; b1grad(:) ; b2grad(:)];
end
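% (Debugging sketch, not part of the exercise: gradient-checking the full
% 64-25-64 model on all 10000 patches is slow, so the STEP 3 hint can be
% followed with something like the call below -- a tiny model with 2 hidden
% units checked on only 10 patches. The dbg* names are made up here.)
%   dbgPatches = patches(:, 1:10);
%   dbgTheta   = initializeParameters(2, visibleSize);
%   [dbgCost, dbgGrad] = sparseAutoencoderCost(dbgTheta, visibleSize, 2, ...
%                            lambda, sparsityParam, beta, dbgPatches);
%   dbgNumGrad = computeNumericalGradient(@(x) sparseAutoencoderCost(x, ...
%                            visibleSize, 2, lambda, sparsityParam, beta, ...
%                            dbgPatches), dbgTheta);
%   disp(norm(dbgNumGrad-dbgGrad)/norm(dbgNumGrad+dbgGrad));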

%-------------------------------------------------------------------
% Here's an implementation of the sigmoid function, which you may find useful
% in your computation of the costs and the gradients. This inputs a (row or
% column) vector (say (z1, z2, z3)) and returns (f(z1), f(z2), f(z3)).

function sigm = sigmoid(x)
sigm = 1 ./ (1 + exp(-x));
end

% Derivative of the sigmoid: f'(z) = f(z).*(1-f(z)).
% (Despite the name, sigmoidInv is the derivative, not the inverse, of the sigmoid.)
function sigmInv = sigmoidInv(x)
sigmInv = sigmoid(x).*(1-sigmoid(x));
end

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% STEP 3 helpers %%%%%%%%%%%%%%%%%%%%%%%%%%%%
% Three functions: checkNumericalGradient, simpleQuadraticFunction, computeNumericalGradient
function [] = checkNumericalGradient()
x = [4; 10];
% Evaluate the simple function and its analytical gradient at x.
[value, grad] = simpleQuadraticFunction(x);
% Numerically estimate the gradient of the simple function at x.
% ("@simpleQuadraticFunction" denotes a pointer to a function.)
numgrad = computeNumericalGradient(@simpleQuadraticFunction, x);
% Print both gradients side by side.
disp([numgrad grad]);
fprintf('The above two columns you get should be very similar.\n(Left-Your Numerical Gradient, Right-Analytical Gradient)\n\n');
% norm(v) is sqrt(sum(v.^2)); with a correct implementation and
% EPSILON = 0.0001 the difference should be about 2.1452e-12.
diff = norm(numgrad-grad)/norm(numgrad+grad);
disp(diff);
fprintf('Norm of the difference between numerical and analytical gradient (should be < 1e-9)\n\n');
end

% This simple function is used to verify that computeNumericalGradient is correct.
function [value,grad] = simpleQuadraticFunction(x)
% This function accepts a 2D vector as input.
% Its outputs are:
% value: h(x1, x2) = x1^2 + 3*x1*x2
% grad: A 2x1 vector that gives the partial derivatives of h with respect to x1 and x2
% Note that when we pass @simpleQuadraticFunction(x) to computeNumericalGradients, we're assuming
% that computeNumericalGradients will use only the first returned value of this function.
value = x(1)^2 + 3*x(1)*x(2);
grad = zeros(2, 1);
grad(1) = 2*x(1) + 3*x(2);
grad(2) = 3*x(1);
end
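% (Added note:) at the test point x = [4; 10] used above, the analytical
% gradient is [2*4 + 3*10; 3*4] = [38; 12], which is what both columns
% printed by checkNumericalGradient should show.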

% Numerical gradient computation (used for gradient checking).
function numgrad = computeNumericalGradient(J, theta)
% theta: the parameters, either a vector or a scalar
% J: a function that returns a real value; y = J(theta) evaluates it at theta

% Initialize numgrad to zeros of the same size as theta.
numgrad = zeros(size(theta));
EPSILON = 1e-4;
% theta is a column vector; size(theta,1) gives its length.
n = size(theta,1);
% n*n identity matrix.
E = eye(n);
for i = 1:n
    % E(n,:) is the n-th row, all columns; E(:,n) is all rows, n-th column.
    % Because E is the identity, E(:,i)*EPSILON is zero everywhere except
    % for EPSILON in its i-th entry.
    delta = E(:,i)*EPSILON;
    % i-th component of the numerical gradient (central difference).
    numgrad(i) = (J(theta+delta)-J(theta-delta))/(EPSILON*2.0);
end
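% (Added note:) the central difference used above has O(EPSILON^2)
% truncation error, which is why EPSILON = 1e-4 is enough to bring the
% relative difference against the analytical gradient below roughly 1e-9.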
%% ---------------------------------------------------------------
end

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% STEP 5 helper %%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%% feature visualization function %%%%%%%%%%%%%%%%%%
function [h, array] = display_network(A, opt_normalize, opt_graycolor, cols, opt_colmajor)
% This function visualizes filters in matrix A. Each column of A is a
% filter. We will reshape each column into a square image and visualize it
% on each cell of the visualization panel.
% All other parameters are optional; usually you do not need to worry
% about them.
% opt_normalize: whether we need to normalize the filters so that all of
% them can have similar contrast. Default value is true.
% opt_graycolor: whether we use gray as the heat map. Default is true.
% cols: how many columns are there in the display. Default value is the
% squareroot of the number of columns in A.
% opt_colmajor: you can switch convention to row major for A. In that
% case, each row of A is a filter. Default value is false.
warning off all

if ~exist('opt_normalize', 'var') || isempty(opt_normalize)
    opt_normalize = true;
end

if ~exist('opt_graycolor', 'var') || isempty(opt_graycolor)
    opt_graycolor = true;
end

if ~exist('opt_colmajor', 'var') || isempty(opt_colmajor)
    opt_colmajor = false;
end

% rescale
A = A - mean(A(:));

if opt_graycolor, colormap(gray); end

% compute rows, cols
[L M] = size(A);
sz = sqrt(L);
buf = 1;
if ~exist('cols', 'var')
    if floor(sqrt(M))^2 ~= M
        n = ceil(sqrt(M));
        while mod(M, n)~=0 && n<1.2*sqrt(M), n = n+1; end
        m = ceil(M/n);
    else
        n = sqrt(M);
        m = n;
    end
else
    n = cols;
    m = ceil(M/n);
end

array = -ones(buf+m*(sz+buf), buf+n*(sz+buf));

if ~opt_graycolor
    array = 0.1.*array;
end

if ~opt_colmajor
    k = 1;
    for i = 1:m
        for j = 1:n
            if k>M,
                continue;
            end
            clim = max(abs(A(:,k)));
            if opt_normalize
                array(buf+(i-1)*(sz+buf)+(1:sz),buf+(j-1)*(sz+buf)+(1:sz)) = reshape(A(:,k),sz,sz)/clim;
            else
                array(buf+(i-1)*(sz+buf)+(1:sz),buf+(j-1)*(sz+buf)+(1:sz)) = reshape(A(:,k),sz,sz)/max(abs(A(:)));
            end
            k = k+1;
        end
    end
else
    k = 1;
    for j = 1:n
        for i = 1:m
            if k>M,
                continue;
            end
            clim = max(abs(A(:,k)));
            if opt_normalize
                array(buf+(i-1)*(sz+buf)+(1:sz),buf+(j-1)*(sz+buf)+(1:sz)) = reshape(A(:,k),sz,sz)/clim;
            else
                array(buf+(i-1)*(sz+buf)+(1:sz),buf+(j-1)*(sz+buf)+(1:sz)) = reshape(A(:,k),sz,sz);
            end
            k = k+1;
        end
    end
end

if opt_graycolor
    h = imagesc(array,'EraseMode','none',[-1 1]);
else
    h = imagesc(array,'EraseMode','none',[-1 1]);
end
axis image off

drawnow;

warning on all

  
