Assignment download: [link]

Question 1

Overview: implement a support vector machine.

(1) The linear case:

Step 1: load the data file and visualize the data:

% Load from ex6data1:
% You will have X, y in your environment
load('ex6data1.mat');

% Plot training data
plotData(X, y);
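
plotData is a small plotting helper provided with the exercise. For reference, a minimal sketch of such a helper (the marker styles here are an assumption, not necessarily the course's exact choices) could look like:

function plotData(X, y)
%PLOTDATA Plots the 2-D points in X, marking them according to the labels y.
%   Positive examples (y == 1) are drawn as crosses, negative examples
%   (y == 0) as circles.

pos = find(y == 1);
neg = find(y == 0);

plot(X(pos, 1), X(pos, 2), 'k+', 'LineWidth', 1, 'MarkerSize', 7);
hold on;
plot(X(neg, 1), X(neg, 2), 'ko', 'MarkerFaceColor', 'y', 'MarkerSize', 7);
hold off;

end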

Step 2: set different values of C, train the SVM with the linear kernel, and draw the decision boundary:

C = 1;
model = svmTrain(X, y, C, @linearKernel, 1e-3, 20);
visualizeBoundaryLinear(X, y, model);
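
visualizeBoundaryLinear draws the separating line w'x + b = 0 on top of the data. A sketch along those lines (assuming the model fields set by svmTrain, shown further below):

function visualizeBoundaryLinear(X, y, model)
%VISUALIZEBOUNDARYLINEAR Plots the linear decision boundary learned by the SVM.

w = model.w;
b = model.b;
% Solve w(1)*x1 + w(2)*x2 + b = 0 for x2 over a range of x1 values
xp = linspace(min(X(:, 1)), max(X(:, 1)), 100);
yp = -(w(1) * xp + b) / w(2);
plotData(X, y);
hold on;
plot(xp, yp, '-b');
hold off;

end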

Run results (plots omitted): with C = 1 the boundary keeps a wide margin but leaves the single outlier misclassified; with C = 1000 the boundary shifts so that the outlier is also classified correctly.

The linear kernel, linearKernel:

function sim = linearKernel(x1, x2)

% Ensure that x1 and x2 are column vectors
x1 = x1(:); x2 = x2(:);

% Compute the kernel
sim = x1' * x2;  % dot product

end

The Gaussian (RBF) kernel, gaussianKernel:

function sim = gaussianKernel(x1, x2, sigma)

% Ensure that x1 and x2 are column vectors
x1 = x1(:); x2 = x2(:);

% K(x1, x2) = exp(-||x1 - x2||^2 / (2 * sigma^2))
sim = exp(-norm(x1 - x2) ^ 2 / (2 * (sigma ^ 2)));

end
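
A quick sanity check, using the test values from the exercise script:

x1 = [1; 2; 1]; x2 = [0; 4; -1]; sigma = 2;
sim = gaussianKernel(x1, x2, sigma)
% The squared distance is 9, so sim = exp(-9/8), about 0.324652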

The svmTrain function used to train the model (its implementation is fairly involved, so it is provided and simply called directly):

function [model] = svmTrain(X, Y, C, kernelFunction, tol, max_passes)
%SVMTRAIN Trains an SVM classifier using a simplified version of the SMO
%algorithm.
%   [model] = SVMTRAIN(X, Y, C, kernelFunction, tol, max_passes) trains an
%   SVM classifier and returns the trained model. X is the matrix of
%   training examples: each row is a training example, and the jth column
%   holds the jth feature. Y is a column vector containing 1 for positive
%   examples and 0 for negative examples. C is the standard SVM
%   regularization parameter. tol is a tolerance value used for determining
%   equality of floating-point numbers. max_passes controls the number of
%   iterations over the dataset (without changes to alpha) before the
%   algorithm quits.
%
%   Note: this is a simplified version of the SMO algorithm for training
%   SVMs. In practice, if you want to train an SVM classifier, we
%   recommend using an optimized package such as:
%
%       LIBSVM   (http://www.csie.ntu.edu.tw/~cjlin/libsvm/)
%       SVMLight (http://svmlight.joachims.org/)

if ~exist('tol', 'var') || isempty(tol)
    tol = 1e-3;
end

if ~exist('max_passes', 'var') || isempty(max_passes)
    max_passes = 5;
end

% Data parameters
m = size(X, 1);
n = size(X, 2);

% Map 0 to -1
Y(Y == 0) = -1;

% Variables
alphas = zeros(m, 1);
b = 0;
E = zeros(m, 1);
passes = 0;
eta = 0;
L = 0;
H = 0;

% Pre-compute the kernel matrix since our dataset is small
% (in practice, optimized SVM packages that handle large datasets
% gracefully will _not_ do this)
%
% We have implemented an optimized vectorized version of the kernels here
% so that the SVM training will run faster.
if strcmp(func2str(kernelFunction), 'linearKernel')
    % Vectorized computation for the linear kernel
    % This is equivalent to computing the kernel on every pair of examples
    K = X * X';
elseif strfind(func2str(kernelFunction), 'gaussianKernel')
    % Vectorized RBF kernel
    % This is equivalent to computing the kernel on every pair of examples
    X2 = sum(X.^2, 2);
    K = bsxfun(@plus, X2, bsxfun(@plus, X2', -2 * (X * X')));
    K = kernelFunction(1, 0) .^ K;
else
    % Pre-compute the kernel matrix
    % The following can be slow due to the lack of vectorization
    K = zeros(m);
    for i = 1:m
        for j = i:m
            K(i, j) = kernelFunction(X(i, :)', X(j, :)');
            K(j, i) = K(i, j); % the matrix is symmetric
        end
    end
end

% Train
fprintf('\nTraining ...');
dots = 12;
while passes < max_passes

    num_changed_alphas = 0;
    for i = 1:m

        % Calculate Ei = f(x(i)) - y(i) using (2).
        % E(i) = b + sum (X(i, :) * (repmat(alphas.*Y,1,n).*X)') - Y(i);
        E(i) = b + sum(alphas .* Y .* K(:, i)) - Y(i);

        if ((Y(i) * E(i) < -tol && alphas(i) < C) || ...
            (Y(i) * E(i) > tol && alphas(i) > 0))

            % In practice, there are many heuristics one can use to select
            % the i and j. In this simplified code, we select them randomly.
            j = ceil(m * rand());
            while j == i  % make sure i ~= j
                j = ceil(m * rand());
            end

            % Calculate Ej = f(x(j)) - y(j) using (2).
            E(j) = b + sum(alphas .* Y .* K(:, j)) - Y(j);

            % Save old alphas
            alpha_i_old = alphas(i);
            alpha_j_old = alphas(j);

            % Compute L and H by (10) or (11).
            if (Y(i) == Y(j))
                L = max(0, alphas(j) + alphas(i) - C);
                H = min(C, alphas(j) + alphas(i));
            else
                L = max(0, alphas(j) - alphas(i));
                H = min(C, C + alphas(j) - alphas(i));
            end

            if (L == H)
                % continue to next i
                continue;
            end

            % Compute eta by (14).
            eta = 2 * K(i, j) - K(i, i) - K(j, j);
            if (eta >= 0)
                % continue to next i
                continue;
            end

            % Compute and clip new value for alpha j using (12) and (15).
            alphas(j) = alphas(j) - (Y(j) * (E(i) - E(j))) / eta;

            % Clip
            alphas(j) = min(H, alphas(j));
            alphas(j) = max(L, alphas(j));

            % Check if change in alpha is significant
            if (abs(alphas(j) - alpha_j_old) < tol)
                % continue to next i, restoring the old value
                alphas(j) = alpha_j_old;
                continue;
            end

            % Determine value for alpha i using (16).
            alphas(i) = alphas(i) + Y(i) * Y(j) * (alpha_j_old - alphas(j));

            % Compute b1 and b2 using (17) and (18) respectively.
            b1 = b - E(i) ...
                 - Y(i) * (alphas(i) - alpha_i_old) * K(i, i) ...
                 - Y(j) * (alphas(j) - alpha_j_old) * K(i, j);
            b2 = b - E(j) ...
                 - Y(i) * (alphas(i) - alpha_i_old) * K(i, j) ...
                 - Y(j) * (alphas(j) - alpha_j_old) * K(j, j);

            % Compute b by (19).
            if (0 < alphas(i) && alphas(i) < C)
                b = b1;
            elseif (0 < alphas(j) && alphas(j) < C)
                b = b2;
            else
                b = (b1 + b2) / 2;
            end

            num_changed_alphas = num_changed_alphas + 1;

        end

    end

    if (num_changed_alphas == 0)
        passes = passes + 1;
    else
        passes = 0;
    end

    fprintf('.');
    dots = dots + 1;
    if dots > 78
        dots = 0;
        fprintf('\n');
    end
    if exist('OCTAVE_VERSION')
        fflush(stdout);
    end
end
fprintf(' Done! \n\n');

% Save the model: keep only the support vectors (alphas > 0)
idx = alphas > 0;
model.X = X(idx, :);
model.y = Y(idx);
model.kernelFunction = kernelFunction;
model.b = b;
model.alphas = alphas(idx);
model.w = ((alphas .* Y)' * X)';

end
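
Once trained, the model scores an example with f(x) = sum_i alphas(i) * y(i) * K(x_i, x) + b; for the linear kernel this collapses to the hyperplane stored in model.w and model.b. A minimal sketch of scoring one example by hand (x is assumed to be a single n-by-1 column vector):

score = model.w' * x + model.b;    % signed distance from the hyperplane
prediction = double(score >= 0);   % 1 for the positive class, 0 otherwise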

(2) The nonlinear case:

Step 1: load the data file and visualize the data:

% Load from ex6data2:
% You will have X, y in your environment
load('ex6data2.mat');

% Plot training data
plotData(X, y);

Step 2: train with the Gaussian kernel:

% SVM Parameters
C = 1; sigma = 0.1;

% We set the tolerance and max_passes lower here so that the code will run
% faster. However, in practice, you will want to run the training to
% convergence.
model = svmTrain(X, y, C, @(x1, x2) gaussianKernel(x1, x2, sigma));
visualizeBoundary(X, y, model);
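
visualizeBoundary, also provided with the exercise, evaluates the classifier over a grid and draws the contour where the prediction flips. A sketch along those lines:

function visualizeBoundary(X, y, model)
%VISUALIZEBOUNDARY Plots a nonlinear decision boundary learned by the SVM.

% Plot the training data on top of the boundary
plotData(X, y)

% Make classification predictions over a grid of values
x1plot = linspace(min(X(:, 1)), max(X(:, 1)), 100)';
x2plot = linspace(min(X(:, 2)), max(X(:, 2)), 100)';
[X1, X2] = meshgrid(x1plot, x2plot);
vals = zeros(size(X1));
for i = 1:size(X1, 2)
    this_X = [X1(:, i), X2(:, i)];
    vals(:, i) = svmPredict(model, this_X);
end

% Plot the SVM boundary at the 0/1 transition
hold on
contour(X1, X2, vals, [0.5 0.5], 'b');
hold off;

end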

Run results: the learned nonlinear decision boundary (plot omitted).

(3) Nonlinear case 2:

Step 1: load the data file and visualize the data:

% Load from ex6data3:
% You will have X, y in your environment
load('ex6data3.mat');

% Plot training data
plotData(X, y);

Step 2: try different parameters and pick the combination with the highest validation accuracy:

% Try different SVM Parameters here
[C, sigma] = dataset3Params(X, y, Xval, yval);

% Train the SVM
model = svmTrain(X, y, C, @(x1, x2) gaussianKernel(x1, x2, sigma));
visualizeBoundary(X, y, model);

The dataset3Params function:

function [C, sigma] = dataset3Params(X, y, Xval, yval)

% Default return values
C = 1;
sigma = 0.3;

C_vec = [0.01, 0.03, 0.1, 0.3, 1, 3, 10, 30];
sigma_vec = [0.01, 0.03, 0.1, 0.3, 1, 3, 10, 30];
m = length(C_vec);
error_val = 1;
for i = 1:m
    for j = 1:m
        % Train on the training set, evaluate on the validation set
        model = svmTrain(X, y, C_vec(i), ...
                         @(x1, x2) gaussianKernel(x1, x2, sigma_vec(j)));
        pred = svmPredict(model, Xval);
        error_temp = mean(double(pred ~= yval));
        if error_temp < error_val
            C = C_vec(i);
            sigma = sigma_vec(j);
            error_val = error_temp;
        end
    end
end

end
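
The double loop trains 8 x 8 = 64 models, one per (C, sigma) pair, and keeps the pair with the lowest misclassification rate on the validation set; initializing error_val to 1 works because an error rate can never exceed 1.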

The svmPredict function:

function pred = svmPredict(model, X)

% Check if we are getting a column vector; if so, then assume that we only
% need to do prediction for a single example
if (size(X, 2) == 1)
    % Examples should be in rows
    X = X';
end

% Dataset
m = size(X, 1);
p = zeros(m, 1);
pred = zeros(m, 1);

if strcmp(func2str(model.kernelFunction), 'linearKernel')
    % We can use the weights and bias directly if working with the
    % linear kernel
    p = X * model.w + model.b;
elseif strfind(func2str(model.kernelFunction), 'gaussianKernel')
    % Vectorized RBF kernel
    % This is equivalent to computing the kernel on every pair of examples
    X1 = sum(X.^2, 2);
    X2 = sum(model.X.^2, 2)';
    K = bsxfun(@plus, X1, bsxfun(@plus, X2, -2 * X * model.X'));
    K = model.kernelFunction(1, 0) .^ K;
    K = bsxfun(@times, model.y', K);
    K = bsxfun(@times, model.alphas', K);
    % Decision values f(x) = sum_j alphas(j) * y(j) * K(x_j, x) + b
    p = sum(K, 2) + model.b;
else
    % Other nonlinear kernel
    for i = 1:m
        prediction = 0;
        for j = 1:size(model.X, 1)
            prediction = prediction + ...
                model.alphas(j) * model.y(j) * ...
                model.kernelFunction(X(i, :)', model.X(j, :)');
        end
        p(i) = prediction + model.b;
    end
end

% Convert predictions into 0 / 1
pred(p >= 0) = 1;
pred(p < 0) = 0;

end
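
Note the trick shared by svmTrain and svmPredict in the vectorized RBF branch: K is first filled with squared distances ||x - z||^2, and since gaussianKernel(1, 0) = exp(-1 / (2*sigma^2)), raising that scalar to the power of each squared distance yields exp(-||x - z||^2 / (2*sigma^2)) elementwise, i.e. the Gaussian kernel, without an explicit double loop.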

Run results: the boundary trained with the best (C, sigma) found on the validation set (plot omitted).

Question 2

Overview: build a spam email classifier.

Step 1: read the data file and preprocess the words:

% Extract Features
file_contents = readFile('emailSample1.txt');
word_indices = processEmail(file_contents);

% Print Stats
fprintf('Word Indices: \n');
fprintf(' %d', word_indices);
fprintf('\n\n');

The word preprocessing pipeline:

removes punctuation, whitespace, line breaks, and the like;

detects email addresses, prices, URLs, and numbers, replacing each with a placeholder token;

looks up each remaining word in the vocabulary list and records the index of every match.
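
For example, the placeholder substitutions behave like this (the sample string is made up for illustration; the order matches the function below):

s = lower('Visit http://example.com or mail foo@bar.com for $100 off!');
s = regexprep(s, '[0-9]+', 'number');                    % numbers
s = regexprep(s, '(http|https)://[^\s]*', 'httpaddr');   % URLs
s = regexprep(s, '[^\s]+@[^\s]+', 'emailaddr');          % email addresses
s = regexprep(s, '[$]+', 'dollar');                      % dollar signs
% s is now: 'visit httpaddr or mail emailaddr for dollarnumber off!'

The processEmail function: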

function word_indices = processEmail(email_contents)

% Load Vocabulary
vocabList = getVocabList();

% Init return value
word_indices = [];

% ========================== Preprocess Email ===========================

% Find the Headers ( \n\n and remove )
% Uncomment the following lines if you are working with raw emails with the
% full headers

% hdrstart = strfind(email_contents, ([char(10) char(10)]));
% email_contents = email_contents(hdrstart(1):end);

% Lower case
email_contents = lower(email_contents);

% Strip all HTML
% Look for any expression that starts with < and ends with >
% (with no < or > inside the tag) and replace it with a space
email_contents = regexprep(email_contents, '<[^<>]+>', ' ');

% Handle Numbers
% Look for one or more characters between 0-9
email_contents = regexprep(email_contents, '[0-9]+', 'number');

% Handle URLs
% Look for strings starting with http:// or https://
email_contents = regexprep(email_contents, ...
                           '(http|https)://[^\s]*', 'httpaddr');

% Handle Email Addresses
% Look for strings with @ in the middle
email_contents = regexprep(email_contents, '[^\s]+@[^\s]+', 'emailaddr');

% Handle $ sign
email_contents = regexprep(email_contents, '[$]+', 'dollar');

% ========================== Tokenize Email ===========================

% Output the email to screen as well
fprintf('\n==== Processed Email ====\n\n');

% Process file
l = 0;

while ~isempty(email_contents)

    % Tokenize and also get rid of any punctuation
    [str, email_contents] = ...
        strtok(email_contents, ...
               [' @$/#.-:&*+=[]?!(){},''">_<;%' char(10) char(13)]);

    % Remove any non alphanumeric characters
    str = regexprep(str, '[^a-zA-Z0-9]', '');

    % Stem the word
    % (the porterStemmer sometimes has issues, so we use a try catch block)
    try str = porterStemmer(strtrim(str));
    catch str = ''; continue;
    end;

    % Skip the word if it is too short
    if length(str) < 1
        continue;
    end

    % Look the word up in the vocabulary and record its index
    for i = 1:length(vocabList)
        if strcmp(str, vocabList{i})
            word_indices = [word_indices i];
        end
    end

    % Print to screen, ensuring that the output lines are not too long
    if (l + length(str) + 1) > 78
        fprintf('\n');
        l = 0;
    end
    fprintf('%s ', str);
    l = l + length(str) + 1;

end

% Print footer
fprintf('\n\n=========================\n');

end

The helper that loads the vocabulary list, getVocabList:

function vocabList = getVocabList()

%% Read the fixed vocabulary list
fid = fopen('vocab.txt');

% Store all dictionary words in cell array vocab{}
n = 1899;  % Total number of words in the dictionary

% For ease of implementation, we use a cell array indexed by word number.
% In practice, you'll want to use some form of hashmap.
vocabList = cell(n, 1);
for i = 1:n
    % Word index (can be ignored since it will equal i)
    fscanf(fid, '%d', 1);
    % Actual word
    vocabList{i} = fscanf(fid, '%s', 1);
end
fclose(fid);

end

Step 2: convert the word indices into a feature vector, marking each vocabulary word that appears with a 1:

% Extract Features
features = emailFeatures(word_indices);

% Print Stats
fprintf('Length of feature vector: %d\n', length(features));
fprintf('Number of non-zero entries: %d\n', sum(features > 0));

The emailFeatures function:

function x = emailFeatures(word_indices)

% Total number of words in the dictionary
n = 1899;

% Feature vector: x(i) = 1 if vocabulary word i appears in the email
x = zeros(n, 1);

for i = 1:length(word_indices)
    x(word_indices(i)) = 1;
end

end
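
Since MATLAB allows indexing with a vector of positions, the loop can equivalently be written as a single assignment:

x = zeros(n, 1);
x(word_indices) = 1;   % set every listed word index to 1 in one step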

  

Step 3: train with the linear kernel and compute the training-set and test-set accuracy:

% Load the Spam Email dataset
% You will have X, y in your environment
load('spamTrain.mat');

fprintf('\nTraining Linear SVM (Spam Classification)\n')
fprintf('(this may take 1 to 2 minutes) ...\n')

C = 0.1;
model = svmTrain(X, y, C, @linearKernel);

p = svmPredict(model, X);

fprintf('Training Accuracy: %f\n', mean(double(p == y)) * 100);

% Load the test dataset
% You will have Xtest, ytest in your environment
load('spamTest.mat');

fprintf('\nEvaluating the trained Linear SVM on a test set ...\n')

p = svmPredict(model, Xtest);

fprintf('Test Accuracy: %f\n', mean(double(p == ytest)) * 100);

Run results: the training and test accuracy are printed to the console (output omitted; the exercise expects roughly 99.8% on the training set and about 98.5% on the test set).

Step 4: find the words with the highest weights:

% Sort the weights and obtain the vocabulary list
[weight, idx] = sort(model.w, 'descend');
vocabList = getVocabList();

fprintf('\nTop predictors of spam: \n');
for i = 1:15
    fprintf(' %-15s (%f) \n', vocabList{idx(i)}, weight(i));
end

fprintf('\n\n');
fprintf('\nProgram paused. Press enter to continue.\n');
pause;

Run results: the 15 highest-weighted vocabulary words are printed (output omitted).
