Image size and number of parameters:

The previous chapters all dealt with small image patches; this chapter deals with large images. The difference is substantial: for small images (e.g. 8x8 patches, or MNIST's 28x28 digits) we can use full connectivity, i.e. connect the input layer directly to every hidden unit. For large images this becomes very expensive: a 96x96 image needs 96*96 = 9216 input units, so learning just 100 features already requires about 96*96*100 parameters (W, b) in that single layer, and training becomes hundreds to tens of thousands of times slower than before. Hence the use of a locally connected network: for images, each hidden unit connects only to a small contiguous region of the input image.
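A quick back-of-the-envelope comparison of the two connectivity schemes (a minimal sketch; the 8x8, 96x96, and 100-feature sizes are the ones from the paragraph above):

    patchDim    = 8;      % small input (e.g. an 8x8 patch)
    imageDim    = 96;     % large input image
    numFeatures = 100;    % hidden units / learned features

    % Fully connected: every hidden unit sees every pixel.
    paramsSmall = patchDim^2 * numFeatures + numFeatures   %   6,500 parameters
    paramsLarge = imageDim^2 * numFeatures + numFeatures   % 921,700 parameters

    % Locally connected: each hidden unit sees only an 8x8 region,
    % so the per-feature weight count drops back to patchDim^2.
    paramsLocal = patchDim^2 * numFeatures + numFeatures   %   6,500 parameters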

This is what leads to the method of convolution:

convolution:

Natural images have an inherent stationarity property: the statistics of one part of the image are the same as those of any other part. This means that features learned on one part of the image can also be applied to other parts, so we can use the same learned features at every position of the image.

Concretely, suppose we randomly sample a small block, say 8x8, from a large image and learn some features from that sample. We can then use the features learned from the 8x8 sample as detectors and apply them anywhere in the image. In particular, convolving the large image with the features learned from the 8x8 samples yields a feature activation value at every position of the large image.

The lecture notes give a concrete example, which makes this easiest to understand:

Suppose you have learned features on 8x8 samples drawn from a 96x96 image, say with a sparse autoencoder that has 100 hidden units. To obtain the convolved features, you run the trained autoencoder over every 8x8 region of the 96x96 image: extract the 8x8 blocks whose top-left corners are (1,1), (1,2), ..., all the way to (89,89), and feed each extracted region through the trained sparse autoencoder to get its feature activations. In this example you clearly end up with 100 sets of convolved features, each set of size 89x89. (The animated gif in the notes shows this much more vividly, but I don't know how to embed it here...)

Finally, a summary of the convolution procedure:

Given a large r * c image xlarge: first train a sparse autoencoder on small a * b samples xsmall drawn from the large image, obtaining k features (k is the number of hidden units); then compute the activations fs for every a * b block of xlarge, which is exactly the convolution of the image with the learned features. This yields (r-a+1) * (c-b+1) * k convolved features.
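A minimal sketch of the dimensions involved, using the 96x96 and 8x8 numbers from the example above (the random data is only a stand-in; the real exercise feeds each patch through the autoencoder, which cnnConvolve.m below implements via conv2):

    imageDim = 96; patchDim = 8;
    im      = rand(imageDim);        % stand-in for a grayscale image
    feature = rand(patchDim);        % stand-in for one learned 8x8 feature

    % conv2 flips its kernel, so pre-flip to get feature "matching"
    % (cross-correlation); 'valid' keeps only fully overlapping positions.
    response = conv2(im, rot90(feature, 2), 'valid');
    size(response)                   % [89 89] = (96-8+1) x (96-8+1)
    % With k = 100 learned features we get 100 such 89x89 activation maps.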

pooling:

After obtaining features through convolution, the next step is to use them for classification. In principle one could hook all the extracted features up to a classifier such as softmax, but the computation is enormous. For example: for a 96x96 pixel image, suppose we have learned 400 features over 8x8 inputs. Each convolution produces a (96 − 8 + 1) * (96 − 8 + 1) = 7921-element result, and since we have 400 features, each example ends up with 89^2 * 400 = 3,168,400 features. Learning a classifier over an input of more than 3 million features is quite unwise, and extremely prone to over-fitting.
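Spelling out the arithmetic from the paragraph above:

    convolvedDim = 96 - 8 + 1         % 89
    perFeature   = convolvedDim^2     % 7,921 convolved values per feature
    perExample   = perFeature * 400   % 3,168,400 features per example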

Hence the method of pooling. (The usual Chinese translation is "池化", though the English word "pooling" is rather more evocative.) The idea is simply to take the mean or the maximum over a region of the feature map and use that single value to represent the region: taking the mean is mean pooling, taking the maximum is max pooling. The gif in the notes illustrates this vividly too; again, I don't know how to embed gifs here...
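A minimal sketch of both variants on a toy 4x4 feature map with 2x2 pooling regions (sizes chosen purely for illustration):

    featureMap = [1 2 3 4; 5 6 7 8; 9 10 11 12; 13 14 15 16];
    poolDim = 2;
    n = size(featureMap, 1) / poolDim;    % 2 pooling blocks per side
    meanPooled = zeros(n); maxPooled = zeros(n);
    for r = 1:n
        for c = 1:n
            block = featureMap((r-1)*poolDim+1:r*poolDim, (c-1)*poolDim+1:c*poolDim);
            meanPooled(r, c) = mean(block(:));   % mean pooling
            maxPooled(r, c)  = max(block(:));    % max pooling
        end
    end
    % meanPooled = [3.5 5.5; 11.5 13.5],  maxPooled = [6 8; 14 16]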

Why is pooling legitimate? We chose to use convolved features in the first place because images have this "stationarity" property, which means a feature that is useful in one region of the image is very likely to be useful in another region as well. So to describe a large image, a natural idea is to aggregate statistics of the features at different locations, and the mean or maximum is exactly such an aggregate statistic.

Furthermore, if the pooling regions are contiguous areas of the image, and we pool only features produced by the same (replicated) hidden units, then the pooled units are translation invariant: even after the image undergoes a small translation, the (pooled) features stay the same. (A small question of mine: if so, is the invariance only guaranteed within a region the size of the pooling area?) In many tasks (e.g. object detection, audio recognition) we prefer translation-invariant features, because the label of an example (image) stays the same even when the image is translated. For instance, if you are processing an MNIST digit and shift it to the left or right, you would expect your classifier to still classify it as the same digit regardless of its final position.
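A minimal max-pooling sketch of this invariance, and of the question raised above (toy data):

    % One active feature, shifted by one pixel but staying inside the same
    % 4x4 pooling region: the pooled value is unchanged.
    map1 = zeros(4); map1(2, 2) = 1;      % feature detected at (2,2)
    map2 = zeros(4); map2(2, 3) = 1;      % same feature, shifted one pixel right
    isequal(max(map1(:)), max(map2(:)))   % true: identical pooled output

    % If the shift crosses a pooling-region boundary, the pooled outputs
    % differ, so the invariance indeed only holds within a pooling region.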

Exercise:

Below is the exercise from the lecture notes. It builds on the previous chapter's exercise (i.e., the first step of the convolution stage: using the sparse autoencoder to learn the k features from xsmall).

The main programs follow.

Main script cnnExercise.m

    %% CS294A/CS294W Convolutional Neural Networks Exercise

    %  Instructions
    %  ------------
    %
    %  This file contains code that helps you get started on the
    %  convolutional neural networks exercise. In this exercise, you will only
    %  need to modify cnnConvolve.m and cnnPool.m. You will not need to modify
    %  this file.

    %%======================================================================
    %% STEP 0: Initialization
    %  Here we initialize some parameters used for the exercise.

    imageDim = 64;          % image dimension
    imageChannels = 3;      % number of channels (rgb, so 3)

    patchDim = 8;           % patch dimension
    numPatches = 50000;     % number of patches

    visibleSize = patchDim * patchDim * imageChannels;  % number of input units
    outputSize = visibleSize;   % number of output units
    hiddenSize = 400;           % number of hidden units

    epsilon = 0.1;          % epsilon for ZCA whitening

    poolDim = 19;           % dimension of pooling region

    %%======================================================================
    %% STEP 1: Train a sparse autoencoder (with a linear decoder) to learn
    %  features from color patches. If you have completed the linear decoder
    %  exercise, use the features that you have obtained from that exercise,
    %  loading them into optTheta. Recall that we have to keep around the
    %  parameters used in whitening (i.e., the ZCA whitening matrix and the
    %  meanPatch)

    % --------------------------- YOUR CODE HERE --------------------------
    % Train the sparse autoencoder and fill the following variables with
    % the optimal parameters:

    %optTheta = zeros(2*hiddenSize*visibleSize+hiddenSize+visibleSize, 1);
    %ZCAWhite = zeros(visibleSize, visibleSize);
    %meanPatch = zeros(visibleSize, 1);
    load STL10Features.mat;

    % --------------------------------------------------------------------

    % Display and check to see that the features look good
    W = reshape(optTheta(1:visibleSize * hiddenSize), hiddenSize, visibleSize);
    b = optTheta(2*hiddenSize*visibleSize+1:2*hiddenSize*visibleSize+hiddenSize);

    displayColorNetwork( (W*ZCAWhite)');

    %%======================================================================
    %% STEP 2: Implement and test convolution and pooling
    %  In this step, you will implement convolution and pooling, and test them
    %  on a small part of the data set to ensure that you have implemented
    %  these two functions correctly. In the next step, you will actually
    %  convolve and pool the features with the STL10 images.

    %% STEP 2a: Implement convolution
    %  Implement convolution in the function cnnConvolve in cnnConvolve.m

    % Note that we have to preprocess the images in the exact same way
    % we preprocessed the patches before we can obtain the feature activations.

    load stlTrainSubset.mat % loads numTrainImages, trainImages, trainLabels

    %% Use only the first 8 images for testing
    convImages = trainImages(:, :, :, 1:8);

    % NOTE: Implement cnnConvolve in cnnConvolve.m first!
    convolvedFeatures = cnnConvolve(patchDim, hiddenSize, convImages, W, b, ZCAWhite, meanPatch);

    %% STEP 2b: Checking your convolution
    %  To ensure that you have convolved the features correctly, we have
    %  provided some code to compare the results of your convolution with
    %  activations from the sparse autoencoder

    % For 1000 random points
    for i = 1:1000
        featureNum = randi([1, hiddenSize]);
        imageNum = randi([1, 8]);
        imageRow = randi([1, imageDim - patchDim + 1]);
        imageCol = randi([1, imageDim - patchDim + 1]);

        patch = convImages(imageRow:imageRow + patchDim - 1, imageCol:imageCol + patchDim - 1, :, imageNum);
        patch = patch(:);
        patch = patch - meanPatch;
        patch = ZCAWhite * patch;

        features = feedForwardAutoencoder(optTheta, hiddenSize, visibleSize, patch);

        if abs(features(featureNum, 1) - convolvedFeatures(featureNum, imageNum, imageRow, imageCol)) > 1e-9
            fprintf('Convolved feature does not match activation from autoencoder\n');
            fprintf('Feature Number    : %d\n', featureNum);
            fprintf('Image Number      : %d\n', imageNum);
            fprintf('Image Row         : %d\n', imageRow);
            fprintf('Image Column      : %d\n', imageCol);
            fprintf('Convolved feature : %0.5f\n', convolvedFeatures(featureNum, imageNum, imageRow, imageCol));
            fprintf('Sparse AE feature : %0.5f\n', features(featureNum, 1));
            error('Convolved feature does not match activation from autoencoder');
        end
    end

    disp('Congratulations! Your convolution code passed the test.');

    %% STEP 2c: Implement pooling
    %  Implement pooling in the function cnnPool in cnnPool.m

    % NOTE: Implement cnnPool in cnnPool.m first!
    pooledFeatures = cnnPool(poolDim, convolvedFeatures);

    %% STEP 2d: Checking your pooling
    %  To ensure that you have implemented pooling, we will use your pooling
    %  function to pool over a test matrix and check the results.

    testMatrix = reshape(1:64, 8, 8);
    expectedMatrix = [mean(mean(testMatrix(1:4, 1:4))) mean(mean(testMatrix(1:4, 5:8))); ...
                      mean(mean(testMatrix(5:8, 1:4))) mean(mean(testMatrix(5:8, 5:8))); ];

    testMatrix = reshape(testMatrix, 1, 1, 8, 8);

    pooledFeatures = squeeze(cnnPool(4, testMatrix));

    if ~isequal(pooledFeatures, expectedMatrix)
        disp('Pooling incorrect');
        disp('Expected');
        disp(expectedMatrix);
        disp('Got');
        disp(pooledFeatures);
    else
        disp('Congratulations! Your pooling code passed the test.');
    end

    %%======================================================================
    %% STEP 3: Convolve and pool with the dataset
    %  In this step, you will convolve each of the features you learned with
    %  the full large images to obtain the convolved features. You will then
    %  pool the convolved features to obtain the pooled features for
    %  classification.
    %
    %  Because the convolved features matrix is very large, we will do the
    %  convolution and pooling 50 features at a time to avoid running out of
    %  memory. Reduce this number if necessary

    stepSize = 50;
    assert(mod(hiddenSize, stepSize) == 0, 'stepSize should divide hiddenSize');

    load stlTrainSubset.mat % loads numTrainImages, trainImages, trainLabels
    load stlTestSubset.mat  % loads numTestImages, testImages, testLabels

    pooledFeaturesTrain = zeros(hiddenSize, numTrainImages, ...
        floor((imageDim - patchDim + 1) / poolDim), ...
        floor((imageDim - patchDim + 1) / poolDim) );
    pooledFeaturesTest = zeros(hiddenSize, numTestImages, ...
        floor((imageDim - patchDim + 1) / poolDim), ...
        floor((imageDim - patchDim + 1) / poolDim) );

    tic();

    for convPart = 1:(hiddenSize / stepSize)

        featureStart = (convPart - 1) * stepSize + 1;
        featureEnd = convPart * stepSize;

        fprintf('Step %d: features %d to %d\n', convPart, featureStart, featureEnd);
        Wt = W(featureStart:featureEnd, :);
        bt = b(featureStart:featureEnd);

        fprintf('Convolving and pooling train images\n');
        convolvedFeaturesThis = cnnConvolve(patchDim, stepSize, ...
            trainImages, Wt, bt, ZCAWhite, meanPatch);
        pooledFeaturesThis = cnnPool(poolDim, convolvedFeaturesThis);
        pooledFeaturesTrain(featureStart:featureEnd, :, :, :) = pooledFeaturesThis;
        toc();
        clear convolvedFeaturesThis pooledFeaturesThis;

        fprintf('Convolving and pooling test images\n');
        convolvedFeaturesThis = cnnConvolve(patchDim, stepSize, ...
            testImages, Wt, bt, ZCAWhite, meanPatch);
        pooledFeaturesThis = cnnPool(poolDim, convolvedFeaturesThis);
        pooledFeaturesTest(featureStart:featureEnd, :, :, :) = pooledFeaturesThis;
        toc();

        clear convolvedFeaturesThis pooledFeaturesThis;

    end

    % You might want to save the pooled features since convolution and pooling takes a long time
    save('cnnPooledFeatures.mat', 'pooledFeaturesTrain', 'pooledFeaturesTest');
    toc();

    %%======================================================================
    %% STEP 4: Use pooled features for classification
    %  Now, you will use your pooled features to train a softmax classifier,
    %  using softmaxTrain from the softmax exercise.
    %  Training the softmax classifier for 1000 iterations should take less than
    %  10 minutes.

    % Add the path to your softmax solution, if necessary
    % addpath /path/to/solution/

    % Setup parameters for softmax
    softmaxLambda = 1e-4;
    numClasses = 4;
    % Reshape the pooledFeatures to form an input vector for softmax
    softmaxX = permute(pooledFeaturesTrain, [1 3 4 2]);
    softmaxX = reshape(softmaxX, numel(pooledFeaturesTrain) / numTrainImages,...
        numTrainImages);
    softmaxY = trainLabels;

    options = struct;
    options.maxIter = 200;
    softmaxModel = softmaxTrain(numel(pooledFeaturesTrain) / numTrainImages,...
        numClasses, softmaxLambda, softmaxX, softmaxY, options);

    %%======================================================================
    %% STEP 5: Test classifier
    %  Now you will test your trained classifier against the test images

    softmaxX = permute(pooledFeaturesTest, [1 3 4 2]);
    softmaxX = reshape(softmaxX, numel(pooledFeaturesTest) / numTestImages, numTestImages);
    softmaxY = testLabels;

    [pred] = softmaxPredict(softmaxModel, softmaxX);
    acc = (pred(:) == softmaxY(:));
    acc = sum(acc) / size(acc, 1);
    fprintf('Accuracy: %2.3f%%\n', acc * 100);

    % You should expect to get an accuracy of around 80% on the test images.
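For reference, with the settings above (imageDim = 64, patchDim = 8, poolDim = 19), pooling reduces each image to a 400 x 3 x 3 block, so softmaxTrain sees 3600 inputs per image:

    convolvedDim = 64 - 8 + 1;                % 57
    numBlocks    = floor(convolvedDim / 19);  % 3 pooling blocks per side
    inputSize    = 400 * numBlocks^2          % 3600 softmax inputs per image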

cnnConvolve.m

    function convolvedFeatures = cnnConvolve(patchDim, numFeatures, images, W, b, ZCAWhite, meanPatch)
    %cnnConvolve Returns the convolution of the features given by W and b with
    %the given images
    %
    % Parameters:
    %  patchDim - patch (feature) dimension
    %  numFeatures - number of features
    %  images - large images to convolve with, matrix in the form
    %           images(r, c, channel, image number)
    %  W, b - W, b for features from the sparse autoencoder
    %  ZCAWhite, meanPatch - ZCAWhitening and meanPatch matrices used for
    %                        preprocessing
    %
    % Returns:
    %  convolvedFeatures - matrix of convolved features in the form
    %                      convolvedFeatures(featureNum, imageNum, imageRow, imageCol)
    patchSize = patchDim*patchDim;
    numImages = size(images, 4);
    imageDim = size(images, 1);
    imageChannels = size(images, 3);

    convolvedFeatures = zeros(numFeatures, numImages, imageDim - patchDim + 1, imageDim - patchDim + 1);

    % Instructions:
    %   Convolve every feature with every large image here to produce the
    %   numFeatures x numImages x (imageDim - patchDim + 1) x (imageDim - patchDim + 1)
    %   matrix convolvedFeatures, such that
    %   convolvedFeatures(featureNum, imageNum, imageRow, imageCol) is the
    %   value of the convolved featureNum feature for the imageNum image over
    %   the region (imageRow, imageCol) to (imageRow + patchDim - 1, imageCol + patchDim - 1)
    %
    % Expected running times:
    %   Convolving with 100 images should take less than 3 minutes
    %   Convolving with 5000 images should take around an hour
    %   (So to save time when testing, you should convolve with less images, as
    %   described earlier)

    % -------------------- YOUR CODE HERE --------------------
    % Precompute the matrices that will be used during the convolution. Recall
    % that you need to take into account the whitening and mean subtraction
    % steps
    WT = W*ZCAWhite;        % fold the ZCA whitening into the weights
    bT = b - WT*meanPatch;  % fold the mean subtraction into the bias
    % --------------------------------------------------------

    for imageNum = 1:numImages
        for featureNum = 1:numFeatures

            % convolution of image with feature matrix for each channel
            convolvedImage = zeros(imageDim - patchDim + 1, imageDim - patchDim + 1);
            for channel = 1:imageChannels

                % Obtain the feature (patchDim x patchDim) needed during the convolution
                % ---- YOUR CODE HERE ----
                %feature = zeros(8,8); % You should replace this
                offset = (channel-1)*patchSize;
                feature = reshape(WT(featureNum,(offset+1):(offset+patchSize)),patchDim,patchDim);

                % ------------------------

                % Flip the feature matrix because of the definition of convolution, as explained later
                feature = flipud(fliplr(squeeze(feature)));

                % Obtain the image
                im = squeeze(images(:, :, channel, imageNum));

                % Convolve "feature" with "im", adding the result to convolvedImage
                % be sure to do a 'valid' convolution
                % ---- YOUR CODE HERE ----
                convolveThisChannel = conv2(im,feature,'valid');
                convolvedImage = convolvedImage + convolveThisChannel; % sum over the three channels: all channels contribute jointly to the feature activation

                % ------------------------

            end

            % Subtract the bias unit (correcting for the mean subtraction as well)
            % Then, apply the sigmoid function to get the hidden activation
            % ---- YOUR CODE HERE ----
            convolvedImage = sigmoid(convolvedImage + bT(featureNum));

            % ------------------------

            % The convolved feature is the sum of the convolved values for all channels
            convolvedFeatures(featureNum, imageNum, :, :) = convolvedImage;
        end
    end

    function sigm = sigmoid(x)
        sigm = 1 ./ (1 + exp(-x));
    end

    end
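The WT/bT trick above deserves a note: the autoencoder was trained on whitened, mean-subtracted patches, so its activation on a raw patch x is sigmoid(W*(ZCAWhite*(x - meanPatch)) + b) = sigmoid((W*ZCAWhite)*x + (b - W*ZCAWhite*meanPatch)). Folding the preprocessing into WT and bT this way is what lets cnnConvolve work directly on the raw images. A quick numeric check of the identity (toy sizes assumed):

    visibleSize = 6; hiddenSize = 4;
    W = randn(hiddenSize, visibleSize);  b = randn(hiddenSize, 1);
    ZCAWhite = randn(visibleSize);       meanPatch = randn(visibleSize, 1);
    x = randn(visibleSize, 1);           % a raw, unpreprocessed patch

    WT = W*ZCAWhite;  bT = b - WT*meanPatch;
    a1 = W*(ZCAWhite*(x - meanPatch)) + b;   % preprocess, then feed forward
    a2 = WT*x + bT;                          % preprocessing folded into W and b
    max(abs(a1 - a2))                        % ~1e-15, i.e. equal up to rounding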

cnnPool.m

    function pooledFeatures = cnnPool(poolDim, convolvedFeatures)
    %cnnPool Pools the given convolved features
    %
    % Parameters:
    %  poolDim - dimension of pooling region
    %  convolvedFeatures - convolved features to pool (as given by cnnConvolve)
    %                      convolvedFeatures(featureNum, imageNum, imageRow, imageCol)
    %
    % Returns:
    %  pooledFeatures - matrix of pooled features in the form
    %                   pooledFeatures(featureNum, imageNum, poolRow, poolCol)
    %

    numImages = size(convolvedFeatures, 2);
    numFeatures = size(convolvedFeatures, 1);
    convolvedDim = size(convolvedFeatures, 3);

    pooledFeatures = zeros(numFeatures, numImages, floor(convolvedDim / poolDim), floor(convolvedDim / poolDim));

    % -------------------- YOUR CODE HERE --------------------
    % Instructions:
    %   Now pool the convolved features in regions of poolDim x poolDim,
    %   to obtain the
    %   numFeatures x numImages x (convolvedDim/poolDim) x (convolvedDim/poolDim)
    %   matrix pooledFeatures, such that
    %   pooledFeatures(featureNum, imageNum, poolRow, poolCol) is the
    %   value of the featureNum feature for the imageNum image pooled over the
    %   corresponding (poolRow, poolCol) pooling region
    %   (see http://ufldl/wiki/index.php/Pooling )
    %
    %   Use mean pooling here.
    % -------------------- YOUR CODE HERE --------------------
    numBlocks = floor(convolvedDim/poolDim); % pooling blocks per dimension (57/19 = 3 here); for other sizes, poolDim should presumably be chosen to divide convolvedDim evenly
    for featureNum = 1:numFeatures
        for imageNum = 1:numImages
            for poolRow = 1:numBlocks
                for poolCol = 1:numBlocks
                    features = convolvedFeatures(featureNum,imageNum,(poolRow-1)*poolDim+1:poolRow*poolDim,(poolCol-1)*poolDim+1:poolCol*poolDim);
                    pooledFeatures(featureNum,imageNum,poolRow,poolCol) = mean(features(:));
                end
            end
        end
    end
    end
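The four nested loops are clear but slow. Since mean pooling over non-overlapping regions is just an averaging convolution followed by subsampling, it can also be written with conv2. A sketch of that variant for a single feature map (my own alternative, not required by the exercise):

    convolvedDim = 57; poolDim = 19;          % as in the exercise: 64-8+1 and 19
    featureMap = rand(convolvedDim);          % stand-in for one 57x57 map
    avgKernel  = ones(poolDim) / poolDim^2;   % uniform averaging kernel
    allMeans   = conv2(featureMap, avgKernel, 'valid');
    pooled     = allMeans(1:poolDim:end, 1:poolDim:end);  % keep non-overlapping blocks
    size(pooled)                              % [3 3] = floor(57/19) per side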

Result:

Accuracy: 78.938%

This is close to the roughly 80% mentioned in the lecture notes.

P.S. The relevant lecture notes:

http://deeplearning.stanford.edu/wiki/index.php/Feature_extraction_using_convolution

http://deeplearning.stanford.edu/wiki/index.php/Pooling

http://deeplearning.stanford.edu/wiki/index.php/Exercise:Convolution_and_Pooling

 
