1 Overview

For the underlying theory, see the post "线性SVM与Softmax分类器" (Linear SVM and Softmax Classifiers).

Implementation environment: Python 3.

2 Data Processing

2.1 Loading the dataset

Place the raw CIFAR-10 data under the "data/cifar10/" folder.

    # Load the CIFAR-10 dataset
    import os
    import pickle
    import numpy as np

    def load_CIFAR_batch(filename):
        """
        CIFAR-10 is stored as several batches; this loads a single batch.
        @param filename: path to one CIFAR batch file
        @return: X, Y: the data and labels of the batch
        """
        with open(filename, 'rb') as f:
            datadict = pickle.load(f, encoding='bytes')
        X = datadict[b'data']
        Y = datadict[b'labels']
        X = X.reshape(10000, 3, 32, 32).transpose(0, 2, 3, 1).astype("float")
        Y = np.array(Y)
        return X, Y

    def load_CIFAR10(ROOT):
        """
        Load the entire CIFAR-10 dataset.
        @param ROOT: root directory of the dataset
        @return: X_train, Y_train: training data and labels
                 X_test, Y_test: test data and labels
        """
        xs = []
        ys = []
        for b in range(1, 6):
            f = os.path.join(ROOT, "data_batch_%d" % (b,))
            X, Y = load_CIFAR_batch(f)
            xs.append(X)
            ys.append(Y)
        X_train = np.concatenate(xs)
        Y_train = np.concatenate(ys)
        del X, Y
        X_test, Y_test = load_CIFAR_batch(os.path.join(ROOT, "test_batch"))
        return X_train, Y_train, X_test, Y_test

    X_train, y_train, X_test, y_test = load_CIFAR10('data/cifar10/')
    print(X_train.shape)
    print(y_train.shape)
    print(X_test.shape)
    print(y_test.shape)

The output is:

    (50000, 32, 32, 3)
    (50000,)
    (10000, 32, 32, 3)
    (10000,)
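The reshape/transpose in load_CIFAR_batch converts each stored row (all red values, then all green, then all blue) into an H×W×C image. A toy sketch of the same index gymnastics, shrunk to 2×2 images so the numbers are easy to follow:

```python
import numpy as np

# A fake "batch" of 2 images, 3 channels, 2x2 pixels, stored CIFAR-style:
# each row is the flattened R-plane, then G-plane, then B-plane (3*2*2 = 12 values).
raw = np.arange(2 * 3 * 2 * 2).reshape(2, 12)

# Same transformation as in load_CIFAR_batch, with 10000 -> 2 and 32 -> 2
imgs = raw.reshape(2, 3, 2, 2).transpose(0, 2, 3, 1)

print(imgs.shape)     # (2, 2, 2, 3): (N, H, W, C)
# Pixel (0, 0) of image 0 collects value 0 from R, 4 from G, 8 from B
print(imgs[0, 0, 0])  # [0 4 8]
```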

2.2 Splitting the dataset

Split the loaded data into a training set, a validation set, and a test set, and flatten each image into a row vector.

    # Split into training, validation, and test sets
    num_train = 49000
    num_val = 1000
    num_test = 1000

    # Validation set
    mask = range(num_train, num_train + num_val)
    X_val = X_train[mask]
    y_val = y_train[mask]

    # Training set
    mask = range(num_train)
    X_train = X_train[mask]
    y_train = y_train[mask]

    # Test set
    mask = range(num_test)
    X_test = X_test[mask]
    y_test = y_test[mask]

    # Flatten each 32x32x3 image into a 3072-dimensional row vector
    X_train = np.reshape(X_train, (X_train.shape[0], -1))
    X_val = np.reshape(X_val, (X_val.shape[0], -1))
    X_test = np.reshape(X_test, (X_test.shape[0], -1))

    print('Train data shape: ', X_train.shape)
    print('Train labels shape: ', y_train.shape)
    print('Validation data shape: ', X_val.shape)
    print('Validation labels shape: ', y_val.shape)
    print('Test data shape: ', X_test.shape)
    print('Test labels shape: ', y_test.shape)

The output is:

    Train data shape: (49000, 3072)
    Validation data shape: (1000, 3072)
    Test data shape: (1000, 3072)

2.3 Mean subtraction

Normalize the split datasets by subtracting the mean training image from each of them.

    # Preprocessing: subtract the mean image
    mean_image = np.mean(X_train, axis=0)
    X_train -= mean_image
    X_val -= mean_image
    X_test -= mean_image

    # Append a bias dimension of ones (the bias trick)
    X_train = np.hstack([X_train, np.ones((X_train.shape[0], 1))])
    X_val = np.hstack([X_val, np.ones((X_val.shape[0], 1))])
    X_test = np.hstack([X_test, np.ones((X_test.shape[0], 1))])

    print('Train data shape: ', X_train.shape)
    print('Validation data shape: ', X_val.shape)
    print('Test data shape: ', X_test.shape)

The output is:

    Train data shape: (49000, 3073)
    Validation data shape: (1000, 3073)
    Test data shape: (1000, 3073)
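The bias trick appends a constant-1 feature to every sample so the bias vector can live inside the weight matrix, i.e. X'W' = XW + b. A minimal sketch on toy data:

```python
import numpy as np

rng = np.random.RandomState(0)
X = rng.randn(4, 3)   # 4 samples, 3 features
W = rng.randn(3, 2)   # 2 classes
b = rng.randn(2)      # explicit bias

# Scores with an explicit bias ...
scores = X.dot(W) + b

# ... equal scores with the bias folded into an extra weight row
X_ext = np.hstack([X, np.ones((X.shape[0], 1))])  # append a column of ones
W_ext = np.vstack([W, b])                         # append b as the last row
print(np.allclose(scores, X_ext.dot(W_ext)))      # True
```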

3 Linear SVM Classifier

3.1 Defining the linear SVM classifier

The key step is deriving the gradient of the linear SVM loss; see the theory post mentioned in the overview for the full derivation.
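For reference, the multiclass hinge loss that loss_vectorized implements, and its gradient (here Δ = 1, the scores of sample i are s_i = x_i W, and reg plays the role of λ):

```latex
L = \frac{1}{N}\sum_{i}\sum_{j\neq y_i}\max\bigl(0,\; s_{ij} - s_{i y_i} + \Delta\bigr)
    + \frac{\lambda}{2}\lVert W\rVert_2^2

\frac{\partial L_i}{\partial w_j}
  = \mathbb{1}\bigl[s_{ij} - s_{i y_i} + \Delta > 0\bigr]\, x_i^{\top}
  \quad (j \neq y_i),
\qquad
\frac{\partial L_i}{\partial w_{y_i}}
  = -\Bigl(\sum_{j\neq y_i}\mathbb{1}\bigl[s_{ij} - s_{i y_i} + \Delta > 0\bigr]\Bigr)\, x_i^{\top}
```

Each wrong class with a positive margin pushes its own weight column toward x_i, and the correct class's column receives minus that count times x_i; this is exactly what the `mask` matrix in the code encodes.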

    # Define a linear SVM classifier
    class LinearSVM(object):
        """A linear classifier trained with the multiclass SVM loss."""

        def __init__(self):
            self.W = None

        def loss_vectorized(self, X, y, reg):
            """
            Structured SVM loss function, vectorized implementation.
            Inputs:
            - X: A numpy array of shape (num_train, D) containing the training data:
              num_train samples, each of dimension D
            - y: A numpy array of shape (num_train,) containing the training labels,
              where y[i] is the label of X[i]
            - reg: (float) regularization strength
            Outputs:
            - loss: the loss value between the predictions and the ground truth
            - dW: gradient of the loss with respect to W
            """
            # Initialize loss and dW
            loss = 0.0
            dW = np.zeros(self.W.shape)

            # Compute the loss
            num_train = X.shape[0]
            scores = np.dot(X, self.W)
            correct_score = scores[range(num_train), list(y)].reshape(-1, 1)
            margin = np.maximum(0, scores - correct_score + 1)  # delta = 1
            margin[range(num_train), list(y)] = 0  # the correct class contributes no loss
            loss = np.sum(margin) / num_train + 0.5 * reg * np.sum(self.W * self.W)  # reg is the regularization strength lambda

            # Compute dW
            num_classes = self.W.shape[1]
            mask = np.zeros((num_train, num_classes))
            mask[margin > 0] = 1
            # each correct class gets minus the number of classes with a positive margin
            mask[range(num_train), list(y)] = -np.sum(mask, axis=1)
            dW = np.dot(X.T, mask)
            dW = dW / num_train + reg * self.W
            return loss, dW

        def train(self, X, y, learning_rate=1e-3, reg=1e-5, num_iters=100,
                  batch_size=200, print_flag=False):
            """
            Train the linear SVM classifier using SGD.
            Inputs:
            - X: A numpy array of shape (num_train, D) containing the training data:
              num_train samples, each of dimension D
            - y: A numpy array of shape (num_train,) containing the training labels,
              where y[i] = c means X[i] has label c, 0 <= c < C
            - learning_rate: (float) learning rate for optimization
            - reg: (float) regularization strength
            - num_iters: (integer) number of optimization steps
            - batch_size: (integer) number of training examples to use at each step
            - print_flag: (boolean) if True, print progress during optimization
            Outputs:
            - loss_history: A list containing the loss at each training iteration
            """
            loss_history = []
            num_train = X.shape[0]
            dim = X.shape[1]
            num_classes = np.max(y) + 1

            # Initialize W
            if self.W is None:
                self.W = 0.001 * np.random.randn(dim, num_classes)

            # Iterate and optimize
            for t in range(num_iters):
                idx_batch = np.random.choice(num_train, batch_size, replace=True)
                X_batch = X[idx_batch]
                y_batch = y[idx_batch]
                loss, dW = self.loss_vectorized(X_batch, y_batch, reg)
                loss_history.append(loss)
                self.W -= learning_rate * dW
                if print_flag and t % 100 == 0:
                    print('iteration %d / %d: loss %f' % (t, num_iters, loss))
            return loss_history

        def predict(self, X):
            """
            Use the trained weights to predict labels.
            Inputs:
            - X: A numpy array of shape (num_test, D) containing the data to classify
            Outputs:
            - y_pred: A numpy array of predicted labels for the data in X
            """
            scores = np.dot(X, self.W)
            y_pred = np.argmax(scores, axis=1)
            return y_pred
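The analytic gradient in loss_vectorized can be sanity-checked against a numerical gradient. A self-contained sketch, re-implementing the loss as a standalone function and comparing on toy data:

```python
import numpy as np

def svm_loss(W, X, y, reg):
    # Standalone copy of the vectorized multiclass SVM loss above
    num_train = X.shape[0]
    scores = X.dot(W)
    correct = scores[range(num_train), y].reshape(-1, 1)
    margin = np.maximum(0, scores - correct + 1)
    margin[range(num_train), y] = 0
    loss = np.sum(margin) / num_train + 0.5 * reg * np.sum(W * W)
    mask = (margin > 0).astype(float)
    mask[range(num_train), y] = -np.sum(mask, axis=1)
    dW = X.T.dot(mask) / num_train + reg * W
    return loss, dW

# Compare the analytic gradient to a centered finite difference
rng = np.random.RandomState(0)
W = 0.001 * rng.randn(5, 3)     # toy sizes: 5 features, 3 classes
X = rng.randn(10, 5)
y = rng.randint(3, size=10)
reg = 0.1

_, dW = svm_loss(W, X, y, reg)
h = 1e-5
num = np.zeros_like(W)
for i in range(W.shape[0]):
    for j in range(W.shape[1]):
        Wp = W.copy(); Wp[i, j] += h
        Wm = W.copy(); Wm[i, j] -= h
        num[i, j] = (svm_loss(Wp, X, y, reg)[0] - svm_loss(Wm, X, y, reg)[0]) / (2 * h)

print(np.max(np.abs(dW - num)))  # max absolute difference, should be close to 0
```

The hinge loss is non-differentiable exactly at a zero margin, so an occasional kink can inflate the difference; with random data this is vanishingly unlikely.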

3.2 Without cross-validation

3.2.1 Training the model

    # Stochastic gradient descent
    svm = LinearSVM()
    loss_history = svm.train(X_train, y_train, learning_rate=1e-7, reg=2.5e4,
                             num_iters=2000, batch_size=200, print_flag=True)

The output is:

    iteration 0 / 2000: loss 407.076351
    iteration 100 / 2000: loss 241.030820
    iteration 200 / 2000: loss 147.135737
    iteration 300 / 2000: loss 90.274781
    iteration 400 / 2000: loss 56.509895
    iteration 500 / 2000: loss 36.654007
    iteration 600 / 2000: loss 23.732160
    iteration 700 / 2000: loss 16.340341
    iteration 800 / 2000: loss 11.538806
    iteration 900 / 2000: loss 9.482515
    iteration 1000 / 2000: loss 7.414343
    iteration 1100 / 2000: loss 6.240377
    iteration 1200 / 2000: loss 5.774960
    iteration 1300 / 2000: loss 5.569365
    iteration 1400 / 2000: loss 5.326023
    iteration 1500 / 2000: loss 5.708757
    iteration 1600 / 2000: loss 4.731255
    iteration 1700 / 2000: loss 5.516500
    iteration 1800 / 2000: loss 4.959480
    iteration 1900 / 2000: loss 5.447249

3.2.2 Prediction

    # Use the trained SVM to predict
    # Training set
    y_pred = svm.predict(X_train)
    num_correct = np.sum(y_pred == y_train)
    accuracy = np.mean(y_pred == y_train)
    print('Training correct %d/%d: The accuracy is %f' % (num_correct, X_train.shape[0], accuracy))

    # Test set
    y_pred = svm.predict(X_test)
    num_correct = np.sum(y_pred == y_test)
    accuracy = np.mean(y_pred == y_test)
    print('Test correct %d/%d: The accuracy is %f' % (num_correct, X_test.shape[0], accuracy))

The output is:

    Training correct 18799/49000: The accuracy is 0.383653
    Test correct 386/1000: The accuracy is 0.386000

3.3 With cross-validation

3.3.1 Training the model

Here "cross-validation" is a grid search over learning rates and regularization strengths, keeping the model with the best accuracy on the held-out validation set.

    # Cross-validation
    learning_rates = [1.4e-7, 1.5e-7, 1.6e-7]
    regularization_strengths = [8000.0, 9000.0, 10000.0, 11000.0, 18000.0, 19000.0, 20000.0, 21000.0]

    results = {}
    best_lr = None
    best_reg = None
    best_val = -1    # the highest validation accuracy seen so far
    best_svm = None  # the LinearSVM object that achieved the highest validation accuracy

    for lr in learning_rates:
        for reg in regularization_strengths:
            svm = LinearSVM()
            loss_history = svm.train(X_train, y_train, learning_rate=lr, reg=reg, num_iters=2000)
            y_train_pred = svm.predict(X_train)
            accuracy_train = np.mean(y_train_pred == y_train)
            y_val_pred = svm.predict(X_val)
            accuracy_val = np.mean(y_val_pred == y_val)
            if accuracy_val > best_val:
                best_lr = lr
                best_reg = reg
                best_val = accuracy_val
                best_svm = svm
            results[(lr, reg)] = accuracy_train, accuracy_val
            print('lr: %e reg: %e train accuracy: %f val accuracy: %f' %
                  (lr, reg, results[(lr, reg)][0], results[(lr, reg)][1]))

    print('Best validation accuracy during cross-validation:\nlr = %e, reg = %e, best_val = %f' %
          (best_lr, best_reg, best_val))

3.3.2 Prediction

    # Evaluate the best SVM on the test set
    y_test_pred = best_svm.predict(X_test)
    num_correct = np.sum(y_test_pred == y_test)
    accuracy = np.mean(y_test_pred == y_test)
    print('Test correct %d/%d: The accuracy is %f' % (num_correct, X_test.shape[0], accuracy))

The output is:

    Test correct 372/1000: The accuracy is 0.372000
