
This article gives a simple, elementary proof of the AdaBoost training-error bound in machine learning; the only mathematical tool required is first-semester calculus (Calculus I).

AdaBoost is a powerful ensemble algorithm for building predictive models. However, a major disadvantage is that AdaBoost may over-fit in the presence of noise. Freund, Y. & Schapire, R. E. (1997) proved that the training error of the ensemble is bounded by the following expression: \begin{equation}\label{ada1}e_{ensemble}\le \prod_{t}2\cdot\sqrt{\epsilon_t\cdot(1-\epsilon_t)} \end{equation} where $\epsilon_t$ is the error rate of base classifier $t$. If each error rate is less than 0.5, we can write $\epsilon_t=0.5-\gamma_t$, where $\gamma_t$ measures how much better the classifier is than random guessing (on binary problems). The bound on the training error of the ensemble becomes \begin{equation}\label{ada2} e_{ensemble}\le \prod_{t}\sqrt{1-4{\gamma_t}^2}\le e^{-2\sum_{t}{\gamma_t}^2} \end{equation} Thus if each base classifier is slightly better than random, so that $\gamma_t>\gamma$ for some $\gamma>0$, then the training error drops exponentially fast. Nevertheless, because of its tendency to focus on misclassified training examples, the AdaBoost algorithm can be quite susceptible to over-fitting. We will give a new, simple proof of \ref{ada1} and \ref{ada2}; additionally, we explain why the parameter $\alpha_t=\frac{1}{2}\cdot\log\frac{1-\epsilon_t}{\epsilon_t}$ is chosen in the boosting algorithm.
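Before presenting the algorithm and the proof, a small numerical sketch may help convey how quickly these bounds shrink. The per-round error rates below are hypothetical values chosen only for illustration, not results from any real data set:

```python
import math

# Hypothetical per-round error rates for 20 boosting rounds, each slightly
# better than random guessing (epsilon_t = 0.5 - gamma_t with gamma_t > 0).
epsilons = [0.45, 0.42, 0.44, 0.40, 0.43] * 4

product_bound = 1.0   # prod_t 2*sqrt(eps_t*(1-eps_t)), the first bound above
gamma_sq_sum = 0.0    # sum_t gamma_t^2, used in the second bound above

for eps in epsilons:
    gamma = 0.5 - eps
    product_bound *= 2.0 * math.sqrt(eps * (1.0 - eps))
    gamma_sq_sum += gamma ** 2

exp_bound = math.exp(-2.0 * gamma_sq_sum)

# The product bound never exceeds the exponential bound, and both decay
# exponentially as more weak classifiers are added.
print(product_bound, exp_bound, product_bound <= exp_bound)
```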

AdaBoost Algorithm:

Recall the boosting algorithm:

Given $(x_1, y_1), (x_2, y_2), \cdots, (x_m, y_m)$, where $x_i\in X, y_i\in Y=\{-1, +1\}$.

Initialize $$D_1(i)=\frac{1}{m}.$$ For $t=1, 2, \ldots, T$: train a weak learner using distribution $D_t$.

Get the weak hypothesis $h_t: X\rightarrow \{-1, +1\}$ with error \[\epsilon_t=\Pr_{i\sim D_t}[h_t(x_i)\ne y_i].\] If $\epsilon_t >0.5$, then the weights $D_t(i)$ are reverted back to their original uniform values $\frac{1}{m}$.

Choose \begin{equation}\label{boost3} \alpha_t=\frac{1}{2}\cdot \log\frac{1-\epsilon_t}{\epsilon_t} \end{equation}

Update \begin{equation}\label{boost4} D_{t+1}(i)=\frac{D_{t}(i)}{Z_t}\times \left\{\begin{array}{c c} e^{-\alpha_t} & \quad \textrm{if $h_t(x_i)=y_i$}\\ e^{\alpha_t} & \quad \textrm{if $h_t(x_i)\ne y_i$} \end{array} \right. \end{equation} where $Z_t$ is a normalization factor chosen so that $D_{t+1}$ is again a distribution.

Output the final hypothesis: \[H(x)=\text{sign}\left(\sum_{t=1}^{T}\alpha_t\cdot h_t(x)\right)\]
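For concreteness, here is a minimal Python/NumPy sketch of the algorithm above. The decision-stump weak learner, the synthetic data, and all identifiers are illustrative assumptions added here, not part of the original algorithm statement:

```python
import numpy as np

def train_stump(X, y, D):
    """A simple weak learner: pick the single-feature threshold stump with
    the lowest weighted error under the current distribution D."""
    best, best_err = None, float("inf")
    for j in range(X.shape[1]):
        for thr in np.unique(X[:, j]):
            for sign in (1, -1):
                pred = np.where(X[:, j] <= thr, sign, -sign)
                err = D[pred != y].sum()
                if err < best_err:
                    best_err, best = err, (j, thr, sign)
    j, thr, sign = best
    h = lambda Z, j=j, thr=thr, sign=sign: np.where(Z[:, j] <= thr, sign, -sign)
    return h, best_err

def adaboost(X, y, T=20):
    """AdaBoost as described above: reweight the examples each round and
    combine the weak hypotheses with the weights alpha_t."""
    m = X.shape[0]
    D = np.full(m, 1.0 / m)                 # D_1(i) = 1/m
    hypotheses, alphas = [], []
    for _ in range(T):
        h, eps = train_stump(X, y, D)
        if eps > 0.5:                       # revert to uniform weights, as in the text
            D = np.full(m, 1.0 / m)
            continue
        eps = max(eps, 1e-12)               # guard against a perfect stump
        alpha = 0.5 * np.log((1.0 - eps) / eps)
        D *= np.exp(-alpha * y * h(X))      # e^{-alpha} if correct, e^{alpha} if not
        D /= D.sum()                        # divide by the normalization factor Z_t
        hypotheses.append(h)
        alphas.append(alpha)
    return lambda Z: np.sign(sum(a * h(Z) for a, h in zip(alphas, hypotheses)))

# Tiny synthetic check on hypothetical data: the label follows the sign of the
# first feature, with some noise so that no single stump is perfect.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 2))
y = np.where(X[:, 0] + 0.3 * rng.normal(size=100) > 0, 1, -1)
H = adaboost(X, y, T=20)
print("training accuracy:", np.mean(H(X) == y))
```

Note that the weight update in the loop is exactly \ref{boost4}: each weight is multiplied by $e^{-\alpha_t\cdot y_i\cdot h_t(x_i)}$ and then renormalized.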

Proof:

Firstly, we will prove \ref{ada1}. Note that $D_{t+1}(i)$ is a distribution, so $\sum_{i}D_{t+1}(i)=1$; hence \[Z_t=\sum_{i}D_{t+1}(i)\cdot Z_t=\sum_{i}D_t(i)\times \left\{\begin{array}{c c} e^{-\alpha_t} & \quad \textrm{if $h_t(x_i)=y_i$}\\ e^{\alpha_t} & \quad \textrm{if $h_t(x_i)\ne y_i$} \end{array} \right.\] \[=\sum_{i:\ h_t(x_i)=y_i}D_t(i)\cdot e^{-\alpha_t}+\sum_{i:\ h_t(x_i)\ne y_i}D_t(i)\cdot e^{\alpha_t}\] \[=e^{-\alpha_t}\cdot \sum_{i:\ h_t(x_i)=y_i}D_t(i)+e^{\alpha_t}\cdot \sum_{i:\ h_t(x_i)\ne y_i}D_t(i)\] \begin{equation}\label{boost5} =e^{-\alpha_t}\cdot (1-\epsilon_t)+e^{\alpha_t}\cdot \epsilon_t \end{equation} To find $\alpha_t$ we minimize $Z_t$ by setting its first-order derivative to zero: \[{[e^{-\alpha_t}\cdot (1-\epsilon_t)+e^{\alpha_t}\cdot \epsilon_t]}^{'}=-e^{-\alpha_t}\cdot (1-\epsilon_t)+e^{\alpha_t}\cdot \epsilon_t=0\] \[\Rightarrow \alpha_t=\frac{1}{2}\cdot \log\frac{1-\epsilon_t}{\epsilon_t},\] which is \ref{boost3} in the boosting algorithm (the second derivative is positive, so this is indeed a minimum). Substituting $\alpha_t$ back into \ref{boost5} gives \[Z_t=e^{-\alpha_t}\cdot (1-\epsilon_t)+e^{\alpha_t}\cdot \epsilon_t=e^{-\frac{1}{2}\log\frac{1-\epsilon_t}{\epsilon_t}}\cdot (1-\epsilon_t)+e^{\frac{1}{2}\log\frac{1-\epsilon_t}{\epsilon_t}}\cdot\epsilon_t=\sqrt{\frac{\epsilon_t}{1-\epsilon_t}}\cdot(1-\epsilon_t)+\sqrt{\frac{1-\epsilon_t}{\epsilon_t}}\cdot\epsilon_t\] \begin{equation}\label{boost6} =2\sqrt{\epsilon_t\cdot(1-\epsilon_t)} \end{equation}

On the other hand, from \ref{boost4} we have \[D_{t+1}(i)=\frac{D_t(i)\cdot e^{-\alpha_t\cdot y_i\cdot h_t(x_i)}}{Z_t}=\frac{D_t(i)\cdot e^{K_t}}{Z_t},\] where $K_t=-\alpha_t\cdot y_i\cdot h_t(x_i)$, since the product $y_i\cdot h_t(x_i)$ equals $1$ if $h_t(x_i)=y_i$ and $-1$ if $h_t(x_i)\ne y_i$. Thus we can write down all of the equations \[D_1(i)=\frac{1}{m}\] \[D_2(i)=\frac{D_1(i)\cdot e^{K_1}}{Z_1}\] \[D_3(i)=\frac{D_2(i)\cdot e^{K_2}}{Z_2}\] \[\ldots\ldots\ldots\] \[D_{t+1}(i)=\frac{D_t(i)\cdot e^{K_t}}{Z_t}\] Multiplying all of the equalities above, we obtain \[D_{t+1}(i)=\frac{1}{m}\cdot\frac{e^{-y_i\cdot f(x_i)}}{\prod_{t}Z_t},\] where $f(x_i)=\sum_{t}\alpha_t\cdot h_t(x_i)$. Thus \begin{equation}\label{boost7} \frac{1}{m}\cdot \sum_{i}e^{-y_i\cdot f(x_i)}=\sum_{i}D_{t+1}(i)\cdot\prod_{t}Z_t=\prod_{t}Z_t \end{equation}

Note that if $\epsilon_t>0.5$ the data set will be re-sampled until $\epsilon_t\le0.5$; in other words, the parameter $\alpha_t\ge0$ in each valid iteration. The training error of the ensemble can be expressed as \[e_{ensemble}=\frac{1}{m}\cdot\sum_{i}\left\{\begin{array}{c c} 1 & \quad \textrm{if $y_i\ne H(x_i)$}\\ 0 & \quad \textrm{if $y_i=H(x_i)$} \end{array} \right. =\frac{1}{m}\cdot \sum_{i}\left\{\begin{array}{c c} 1 & \quad \textrm{if $y_i\cdot f(x_i)\le0$}\\ 0 & \quad \textrm{if $y_i\cdot f(x_i)>0$} \end{array} \right.\] \begin{equation}\label{boost8} \le\frac{1}{m}\cdot\sum_{i}e^{-y_i\cdot f(x_i)}=\prod_{t}Z_t \end{equation} The inequality holds because $e^{-y_i\cdot f(x_i)}\ge1$ whenever $y_i\cdot f(x_i)\le0$ and $e^{-y_i\cdot f(x_i)}>0$ otherwise; the last equality follows from \ref{boost7}. Combining \ref{boost6} and \ref{boost8}, we have proved \ref{ada1}: \begin{equation}\label{boost9} e_{ensemble}\le \prod_{t}2\cdot\sqrt{\epsilon_t\cdot(1-\epsilon_t)} \end{equation}

To prove \ref{ada2}, we first establish the following inequality: \begin{equation}\label{boost10} 1+x\le e^x \end{equation} or, equivalently, $e^x-x-1\ge0$. Let $g(x)=e^x-x-1$; then \[g^{'}(x)=e^x-1=0\Rightarrow x=0\] Since $g^{''}(x)=e^x>0$, $x=0$ is a global minimum, so \[{g(x)}_{min}=g(0)=0\Rightarrow e^x-x-1\ge0,\] as desired. Now we go back to \ref{boost9} and let \[\epsilon_t=\frac{1}{2}-\gamma_t,\] where $\gamma_t$ measures how much better the classifier is than random guessing (on binary problems).
Based on \ref{boost10} we have \[e_{ensemble}\le\prod_{t}2\cdot\sqrt{\epsilon_t\cdot(1-\epsilon_t)}\] \[=\prod_{t}\sqrt{1-4\gamma_t^2}\] \[=\prod_{t}[1+(-4\gamma_t^2)]^{\frac{1}{2}}\] \[\le\prod_{t}(e^{-4\gamma_t^2})^\frac{1}{2}=\prod_{t}e^{-2\gamma_t^2}\] \[=e^{-2\cdot\sum_{t}\gamma_t^2}\] as desired.
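As a quick numerical sanity check of the two facts used above (that $\alpha_t=\frac{1}{2}\log\frac{1-\epsilon_t}{\epsilon_t}$ minimizes $Z_t$ with minimum value $2\sqrt{\epsilon_t(1-\epsilon_t)}$, and that $\sqrt{1-4\gamma_t^2}\le e^{-2\gamma_t^2}$), one can evaluate $Z_t$ on a grid of $\alpha$ values; the error rates below are arbitrary illustrative values:

```python
import numpy as np

# The error rates here are arbitrary, chosen only for illustration.
for eps in (0.1, 0.25, 0.4, 0.49):
    alpha_star = 0.5 * np.log((1 - eps) / eps)        # the claimed minimizer of Z_t
    alphas = np.linspace(alpha_star - 2, alpha_star + 2, 2001)
    Z = np.exp(-alphas) * (1 - eps) + np.exp(alphas) * eps   # Z_t as a function of alpha
    # the numerical minimum agrees with the closed form 2*sqrt(eps*(1-eps)) ...
    assert abs(Z.min() - 2 * np.sqrt(eps * (1 - eps))) < 1e-6
    # ... and is attained at alpha_star
    assert abs(alphas[Z.argmin()] - alpha_star) < 1e-2
    # the inequality sqrt(1 - 4*gamma^2) <= exp(-2*gamma^2), from 1 + x <= e^x
    gamma = 0.5 - eps
    assert np.sqrt(1 - 4 * gamma**2) <= np.exp(-2 * gamma**2)
print("all checks passed")
```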
