Pegasos: Primal Estimated sub-GrAdient Solver for SVM

Abstract

We describe and analyze a simple and effective iterative algorithm for solving the optimization problem cast by Support Vector Machines (SVM). Our method alternates between stochastic gradient descent steps and projection steps. We prove that the number of iterations required to obtain a solution of accuracy $\epsilon$ is $\tilde{O}(1/\epsilon)$. In contrast, previous analyses of stochastic gradient descent methods require $\Omega(1/\epsilon^2)$ iterations. As in previously devised SVM solvers, the number of iterations also scales linearly with $1/\lambda$, where $\lambda$ is the regularization parameter of SVM. For a linear kernel, the total run-time of our method is $\tilde{O}(d/(\lambda\epsilon))$, where $d$ is a bound on the number of non-zero features in each example. Since the run-time does not depend directly on the size of the training set, the resulting algorithm is especially suited for learning from large datasets. Our approach can seamlessly be adapted to employ non-linear kernels while working solely on the primal objective function. We demonstrate the efficiency and applicability of our approach by conducting experiments on large text classification problems, comparing our solver to existing state-of-the-art SVM solvers. For example, it takes less than 5 seconds for our solver to converge when solving a text classification problem from Reuters Corpus Volume 1 (RCV1) with 800,000 training examples.

1. Introduction

Support Vector Machines (SVMs) are an effective and popular classification learning tool. The task of learning a support vector machine is typically cast as a constrained quadratic programming problem. However, in its native form, it is in fact an unconstrained empirical loss minimization with a penalty term for the norm of the classifier that is being learned. Formally, given a training set $S = \{(\mathbf{x}_i, y_i)\}_{i=1}^{m}$, where $\mathbf{x}_i \in \mathbb{R}^n$ and $y_i \in \{+1, -1\}$, we would like to find the minimizer of the problem

$$\min_{\mathbf{w}} \; \frac{\lambda}{2}\,\|\mathbf{w}\|^2 \;+\; \frac{1}{m} \sum_{(\mathbf{x},y) \in S} \ell(\mathbf{w}; (\mathbf{x},y)), \qquad (1)$$

where

$$\ell(\mathbf{w}; (\mathbf{x},y)) = \max\{0,\; 1 - y\,\langle \mathbf{w}, \mathbf{x} \rangle\}.$$

We denote the objective function of Eq. (1) by $f(\mathbf{w})$. An optimization method finds an $\epsilon$-accurate solution $\hat{\mathbf{w}}$ if $f(\hat{\mathbf{w}}) \le \min_{\mathbf{w}} f(\mathbf{w}) + \epsilon$. The original SVM problem also includes a bias term, $b$. We omit the bias throughout the first sections and defer the description of an extension which employs a bias term to Sec. 4.
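To make the objective concrete, here is a minimal NumPy sketch that evaluates $f(\mathbf{w})$ on a dense data matrix; the function names and the dense representation are our own choices for illustration, not part of the original solver.

```python
import numpy as np

def hinge_loss(w, X, y):
    """Average hinge loss (1/m) * sum_i max{0, 1 - y_i <w, x_i>}."""
    margins = y * (X @ w)          # y_i <w, x_i> for every example
    return np.maximum(0.0, 1.0 - margins).mean()

def svm_objective(w, X, y, lam):
    """SVM objective of Eq. (1): (lam/2) ||w||^2 + average hinge loss."""
    return 0.5 * lam * np.dot(w, w) + hinge_loss(w, X, y)
```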

We describe and analyze in this paper a simple iterative algorithm, called Pegasos, for solving Eq. (1). The algorithm performs $T$ iterations and also requires an additional parameter $k$, whose role is explained in the sequel. Pegasos alternates between stochastic subgradient descent steps and projection steps. The parameter $k$ determines the number of examples from $S$ the algorithm uses on each iteration for estimating the subgradient. When $k = m$, Pegasos reduces to a variant of the subgradient projection method. We show that in this case the number of iterations required in order to achieve an $\epsilon$-accurate solution is $\tilde{O}(1/(\lambda\epsilon))$. At the other extreme, when $k = 1$, we recover a variant of the stochastic (sub)gradient method. In the stochastic case, we analyze the probability of obtaining a good approximate solution. Specifically, we show that with probability of at least $1 - \delta$ our algorithm finds an $\epsilon$-accurate solution using only $\tilde{O}(1/(\delta\lambda\epsilon))$ iterations, while each iteration involves a single inner product between $\mathbf{w}$ and $\mathbf{x}$. This rate of convergence does not depend on the size of the training set and thus our algorithm is especially suited for large datasets.

2. The Pegasos Algorithm

In this section we describe the Pegasos algorithm for solving the optimization problem given in Eq. (1). The algorithm receives as input two parameters: $T$, the number of iterations to perform, and $k$, the number of examples to use for calculating sub-gradients. Initially, we set $\mathbf{w}_1$ to any vector whose norm is at most $1/\sqrt{\lambda}$. On iteration $t$ of the algorithm, we first choose a set $A_t \subseteq S$ of size $k$. Then, we replace the objective in Eq. (1) with an approximate objective function,

$$f(\mathbf{w}; A_t) = \frac{\lambda}{2}\,\|\mathbf{w}\|^2 + \frac{1}{k} \sum_{(\mathbf{x},y) \in A_t} \ell(\mathbf{w}; (\mathbf{x},y)).$$

Note that we overloaded our original definition of $f$, as the original objective can be denoted either as $f(\mathbf{w})$ or as $f(\mathbf{w}; S)$. We interchangeably use both notations depending on the context. Next, we set the learning rate $\eta_t = 1/(\lambda t)$ and define $A_t^{+}$ to be the set of examples in $A_t$ for which $\mathbf{w}_t$ suffers a non-zero loss. We now perform a two-step update as follows. We scale $\mathbf{w}_t$ by $(1 - \eta_t \lambda)$ and for all examples $(\mathbf{x}, y) \in A_t^{+}$ we add to $\mathbf{w}_t$ the vector $(\eta_t / k)\, y\, \mathbf{x}$. We denote the resulting vector by $\mathbf{w}_{t+\frac{1}{2}}$. This step can also be written as $\mathbf{w}_{t+\frac{1}{2}} = \mathbf{w}_t - \eta_t \nabla_t$, where

$$\nabla_t = \lambda\,\mathbf{w}_t - \frac{1}{k} \sum_{(\mathbf{x},y) \in A_t^{+}} y\,\mathbf{x}. \qquad (2)$$

The definition of the hinge-loss implies that $\nabla_t$ is a sub-gradient of $f(\cdot\,; A_t)$ at $\mathbf{w}_t$. Last, we set $\mathbf{w}_{t+1}$ to be the projection of $\mathbf{w}_{t+\frac{1}{2}}$ onto the set

$$B = \{\mathbf{w} : \|\mathbf{w}\| \le 1/\sqrt{\lambda}\}. \qquad (3)$$

That is, $\mathbf{w}_{t+1}$ is obtained by scaling $\mathbf{w}_{t+\frac{1}{2}}$ by $\min\{1,\; 1/(\sqrt{\lambda}\,\|\mathbf{w}_{t+\frac{1}{2}}\|)\}$. As we show in our analysis below, the optimal solution of SVM is in the set $B$. Informally speaking, since the optimum lies in $B$, projecting back onto this set can only bring us closer to the optimum. The output of Pegasos is the last vector $\mathbf{w}_{T+1}$.
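Putting the two update steps together, the following is a minimal sketch of the full loop, assuming dense NumPy arrays and uniform i.i.d. sampling (with replacement) for $A_t$; all variable and function names here are ours, not the paper's.

```python
import numpy as np

def pegasos(X, y, lam, T, k, seed=None):
    """Pegasos sketch: stochastic sub-gradient steps followed by
    projection onto B = {w : ||w|| <= 1/sqrt(lam)}."""
    rng = np.random.default_rng(seed)
    m, n = X.shape
    w = np.zeros(n)                      # ||w_1|| = 0 <= 1/sqrt(lam)
    radius = 1.0 / np.sqrt(lam)
    for t in range(1, T + 1):
        A = rng.choice(m, size=k)        # A_t: k examples sampled i.i.d.
        eta = 1.0 / (lam * t)            # learning rate eta_t = 1/(lam * t)
        margins = y[A] * (X[A] @ w)
        plus = A[margins < 1.0]          # A_t^+: examples with non-zero loss
        # Gradient step: w <- (1 - eta*lam) w + (eta/k) sum_{A_t^+} y_i x_i
        w *= 1.0 - eta * lam
        if plus.size > 0:
            w += (eta / k) * (y[plus] @ X[plus])
        # Projection step: scale w by min{1, 1/(sqrt(lam) ||w||)}
        norm = np.linalg.norm(w)
        if norm > radius:
            w *= radius / norm
    return w
```

For instance, `w = pegasos(X, y, lam=1e-4, T=100_000, k=1)` runs the purely stochastic variant in which each iteration touches a single example.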

Note that if we choose $A_t = S$ on each round $t$ then we obtain the sub-gradient projection method. On the other extreme, if we choose $A_t$ to contain a single randomly selected example, then we recover a variant of the stochastic gradient method. In general, we allow $A_t$ to be a set of $k$ examples sampled i.i.d. from $S$.

We conclude this section with a short discussion of implementation details when the instances are sparse, namely, when each instance has very few non-zero elements. In this case, we can represent $\mathbf{w}$ as a triplet $(\mathbf{v}, a, \nu)$, where $\mathbf{v}$ is a dense vector and $a, \nu$ are scalars. The vector $\mathbf{w}$ is defined through the triplet as $\mathbf{w} = a\,\mathbf{v}$, and $\nu$ stores the squared norm of $\mathbf{w}$, $\nu = \|\mathbf{w}\|^2 = a^2\,\|\mathbf{v}\|^2$. Using this representation, it is easily verified that the total number of operations required for performing one iteration of Pegasos with $k = 1$ is $O(d)$, where $d$ is the number of non-zero elements in $\mathbf{x}$.
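One way to realize this bookkeeping, for the $k = 1$ case, is sketched below; the class and method names are hypothetical, and the example is assumed to arrive as index/value arrays over its $d$ non-zeros.

```python
import numpy as np

class ScaledVector:
    """Triplet (v, a, nu) representing w = a * v with nu = ||w||^2 cached,
    so a global rescaling costs O(1) and a sparse update costs O(d)."""

    def __init__(self, n):
        self.v = np.zeros(n)   # dense direction vector
        self.a = 1.0           # global scale factor
        self.nu = 0.0          # squared norm ||w||^2 = a^2 ||v||^2

    def scale(self, c):
        """w <- c * w in O(1); assumes c != 0 (with eta_t = 1/(lam*t), the
        factor 1 - eta_t*lam = 1 - 1/t vanishes only on the first iteration)."""
        self.a *= c
        self.nu *= c * c

    def sparse_dot(self, idx, vals):
        """<w, x> for a sparse x with values vals at positions idx: O(d)."""
        return self.a * float(np.dot(self.v[idx], vals))

    def sparse_add(self, idx, vals, c):
        """w <- w + c * x in O(d), keeping nu consistent via
        ||w + c x||^2 = ||w||^2 + 2 c <w, x> + c^2 ||x||^2."""
        self.nu += 2.0 * c * self.sparse_dot(idx, vals) \
                   + c * c * float(np.dot(vals, vals))
        self.v[idx] += (c / self.a) * vals
```

With this representation the projection step is also $O(1)$: whenever $\nu > 1/\lambda$, calling `scale(1.0 / np.sqrt(lam * nu))` rescales $\mathbf{w}$ back onto the boundary of $B$.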

3. Analysis

In this section we analyze the convergence properties of Pegasos. Throughout this section we denote

$$\mathbf{w}^{\star} = \arg\min_{\mathbf{w}} f(\mathbf{w}).$$

Recall that on each iteration of the algorithm, we focus on an instantaneous objective function, $f(\mathbf{w}; A_t)$.
