class sklearn.ensemble.AdaBoostRegressor(base_estimator=None, n_estimators=50, learning_rate=1.0, loss='linear', random_state=None)[source]

An AdaBoost regressor.

An AdaBoost [1] regressor is a meta-estimator that begins by fitting a regressor on the original dataset and then fits additional copies of the regressor on the same dataset but where the weights of instances are adjusted according to the error of the current prediction. As such, subsequent regressors focus more on difficult cases.

This class implements the algorithm known as AdaBoost.R2 [2].

Read more in the User Guide.
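For orientation, a minimal usage sketch (the synthetic data and parameter values here are illustrative, not part of this reference):

import numpy as np
from sklearn.ensemble import AdaBoostRegressor

# Illustrative synthetic data: 100 samples, 4 features, noisy linear target.
rng = np.random.RandomState(0)
X = rng.uniform(size=(100, 4))
y = X[:, 0] + 2 * X[:, 1] + rng.normal(scale=0.1, size=100)

# With base_estimator=None, a decision tree regressor is used as the base estimator.
regr = AdaBoostRegressor(n_estimators=50, learning_rate=1.0, random_state=0)
regr.fit(X, y)
print(regr.predict(X[:3]))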

Parameters:

base_estimator : object, optional (default=DecisionTreeRegressor)

The base estimator from which the boosted ensemble is built. Support for sample weighting is required.

n_estimators : integer, optional (default=50)

The maximum number of estimators at which boosting is terminated. In case of perfect fit, the learning procedure is stopped early.

learning_rate : float, optional (default=1.)

Learning rate shrinks the contribution of each regressor by learning_rate. There is a trade-off between learning_rate and n_estimators.

loss : {‘linear’, ‘square’, ‘exponential’}, optional (default=’linear’)

The loss function to use when updating the weights after each boosting iteration.

random_state : int, RandomState instance or None, optional (default=None)

If int, random_state is the seed used by the random number generator; If RandomState instance, random_state is the random number generator; If None, the random number generator is the RandomState instance used by np.random.

Attributes:

estimators_ : list of regressors

The collection of fitted sub-estimators.

estimator_weights_ : array of floats

Weights for each estimator in the boosted ensemble.

estimator_errors_ : array of floats

Regression error for each estimator in the boosted ensemble.

feature_importances_ : array of shape = [n_features]

The feature importances if supported by the base_estimator.
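A short sketch of inspecting these fitted attributes (data and settings are illustrative):

import numpy as np
from sklearn.ensemble import AdaBoostRegressor

rng = np.random.RandomState(0)
X = rng.uniform(size=(100, 4))
y = X[:, 0] + rng.normal(scale=0.1, size=100)

regr = AdaBoostRegressor(n_estimators=10, random_state=0).fit(X, y)
print(len(regr.estimators_))        # fitted sub-estimators (<= n_estimators)
print(regr.estimator_weights_[:3])  # weight of each boosting stage
print(regr.estimator_errors_[:3])   # regression error of each stage
print(regr.feature_importances_)    # aggregated importances, shape (n_features,)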

See also

AdaBoostClassifier, GradientBoostingRegressor, DecisionTreeRegressor

References

[1] Y. Freund, R. Schapire, “A Decision-Theoretic Generalization of On-Line Learning and an Application to Boosting”, 1995.

[2] H. Drucker, “Improving Regressors using Boosting Techniques”, 1997.

Methods

fit(X, y[, sample_weight]) Build a boosted regressor from the training set (X, y).
get_params([deep]) Get parameters for this estimator.
predict(X) Predict regression value for X.
score(X, y[, sample_weight]) Returns the coefficient of determination R^2 of the prediction.
set_params(**params) Set the parameters of this estimator.
staged_predict(X) Return staged predictions for X.
staged_score(X, y[, sample_weight]) Return staged scores for X, y.
__init__(base_estimator=None, n_estimators=50, learning_rate=1.0, loss='linear', random_state=None)[source]
feature_importances_

Return the feature importances (the higher, the more important the feature).

Returns:

feature_importances_ : array, shape = [n_features]
fit(X, y, sample_weight=None)[source]

Build a boosted regressor from the training set (X, y).

Parameters:

X : {array-like, sparse matrix} of shape = [n_samples, n_features]

The training input samples. Sparse matrix can be CSC, CSR, COO, DOK, or LIL. DOK and LIL are converted to CSR.

y : array-like of shape = [n_samples]

The target values (real numbers).

sample_weight : array-like of shape = [n_samples], optional

Sample weights. If None, the sample weights are initialized to 1 / n_samples.

Returns:

self : object

Returns self.
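A hedged sketch of fitting with explicit sample weights (the weighting scheme is arbitrary, purely for illustration):

import numpy as np
from sklearn.ensemble import AdaBoostRegressor

rng = np.random.RandomState(0)
X = rng.uniform(size=(200, 3))
y = X.sum(axis=1) + rng.normal(scale=0.05, size=200)

# Give the first half of the samples twice the weight of the rest.
w = np.where(np.arange(200) < 100, 2.0, 1.0)
regr = AdaBoostRegressor(random_state=0).fit(X, y, sample_weight=w)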

get_params(deep=True)[source]

Get parameters for this estimator.

Parameters:

deep : boolean, optional (default=True)

If True, will return the parameters for this estimator and contained subobjects that are estimators.

Returns:

params : mapping of string to any

Parameter names mapped to their values.

predict(X)[source]

Predict regression value for X.

The predicted regression value of an input sample is computed as the weighted median prediction of the regressors in the ensemble.

Parameters:

X : {array-like, sparse matrix} of shape = [n_samples, n_features]

The input samples. Sparse matrix can be CSC, CSR, COO, DOK, or LIL. DOK and LIL are converted to CSR.

Returns:

y : array of shape = [n_samples]

The predicted regression values.
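Since X may be sparse, a brief sketch of predicting on a CSR matrix (assuming the default tree base estimator, which accepts sparse input; data is illustrative):

import numpy as np
from scipy.sparse import csr_matrix
from sklearn.ensemble import AdaBoostRegressor

rng = np.random.RandomState(0)
X = rng.uniform(size=(100, 4))
y = X[:, 0] + rng.normal(scale=0.1, size=100)

regr = AdaBoostRegressor(random_state=0).fit(X, y)
# Dense and sparse inputs are expected to yield the same predictions.
print(np.allclose(regr.predict(X), regr.predict(csr_matrix(X))))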

score(X, y, sample_weight=None)[source]

Returns the coefficient of determination R^2 of the prediction.

The coefficient R^2 is defined as (1 - u/v), where u is the residual sum of squares ((y_true - y_pred) ** 2).sum() and v is the total sum of squares ((y_true - y_true.mean()) ** 2).sum(). Best possible score is 1.0, lower values are worse.

Parameters:

X : array-like, shape = (n_samples, n_features)

Test samples.

y : array-like, shape = (n_samples) or (n_samples, n_outputs)

True values for X.

sample_weight : array-like, shape = [n_samples], optional

Sample weights.

Returns:

score : float

R^2 of self.predict(X) wrt. y.
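A quick check of the definition above, computing u and v by hand and comparing with score (data is illustrative):

import numpy as np
from sklearn.ensemble import AdaBoostRegressor

rng = np.random.RandomState(0)
X = rng.uniform(size=(100, 2))
y = 3 * X[:, 0] + rng.normal(scale=0.1, size=100)

regr = AdaBoostRegressor(random_state=0).fit(X, y)
y_pred = regr.predict(X)

u = ((y - y_pred) ** 2).sum()    # residual sum of squares
v = ((y - y.mean()) ** 2).sum()  # total sum of squares
print(np.isclose(1 - u / v, regr.score(X, y)))  # True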

set_params(**params)[source]

Set the parameters of this estimator.

The method works on simple estimators as well as on nested objects (such as pipelines). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object.

Returns:

self : object
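A sketch of updating a nested parameter of the base estimator via the <component>__<parameter> syntax:

from sklearn.tree import DecisionTreeRegressor
from sklearn.ensemble import AdaBoostRegressor

regr = AdaBoostRegressor(base_estimator=DecisionTreeRegressor(max_depth=3))
# Top-level and nested parameters can be set in a single call.
regr.set_params(n_estimators=100, base_estimator__max_depth=5)
print(regr.get_params()["base_estimator__max_depth"])  # 5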
staged_predict(X)[source]

Return staged predictions for X.

The predicted regression value of an input sample is computed as the weighted median prediction of the regressors in the ensemble.

This generator method yields the ensemble prediction after each iteration of boosting and therefore allows monitoring, such as to determine the prediction on a test set after each boost.

Parameters:

X : {array-like, sparse matrix} of shape = [n_samples, n_features]

The input samples. Sparse matrix can be CSC, CSR, COO, DOK, or LIL. DOK and LIL are converted to CSR.

Returns:

y : generator of array, shape = [n_samples]

The predicted regression values.
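A sketch of the monitoring use case described above, tracking held-out error after each boost (staged_score below works the same way, yielding R^2 scores instead; data and split are illustrative):

import numpy as np
from sklearn.ensemble import AdaBoostRegressor

rng = np.random.RandomState(0)
X = rng.uniform(size=(200, 3))
y = np.sin(6 * X[:, 0]) + rng.normal(scale=0.1, size=200)
X_train, X_test, y_train, y_test = X[:150], X[150:], y[:150], y[150:]

regr = AdaBoostRegressor(n_estimators=50, random_state=0).fit(X_train, y_train)

# staged_predict yields one prediction array per boosting iteration.
test_mse = [np.mean((y_test - y_pred) ** 2)
            for y_pred in regr.staged_predict(X_test)]
print("best number of boosts:", int(np.argmin(test_mse)) + 1)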

staged_score(X, y, sample_weight=None)[source]

Return staged scores for X, y.

This generator method yields the ensemble score after each iteration of boosting and therefore allows monitoring, such as to determine the score on a test set after each boost.

Parameters:

X : {array-like, sparse matrix} of shape = [n_samples, n_features]

The input samples. Sparse matrix can be CSC, CSR, COO, DOK, or LIL. DOK and LIL are converted to CSR.

y : array-like, shape = [n_samples]

True values for X.

sample_weight : array-like, shape = [n_samples], optional

Sample weights.

Returns:

z : float

The R^2 score after each boosting iteration.

Example: Decision Tree Regression with AdaBoost

A decision tree is boosted using the AdaBoost.R2 [1] algorithm on a 1D sinusoidal dataset with a small amount of Gaussian noise. 299 boosts (300 decision trees) are compared with a single decision tree regressor. As the number of boosts is increased, the regressor can fit more detail.

[1] H. Drucker, “Improving Regressors using Boosting Techniques”, 1997.

print(__doc__)

# Author: Noel Dawe <noel.dawe@gmail.com>
#
# License: BSD 3 clause

# Import necessary libraries
import numpy as np
import matplotlib.pyplot as plt
from sklearn.tree import DecisionTreeRegressor
from sklearn.ensemble import AdaBoostRegressor

# Create the dataset
rng = np.random.RandomState(1)
X = np.linspace(0, 6, 100)[:, np.newaxis]
y = np.sin(X).ravel() + np.sin(6 * X).ravel() + rng.normal(0, 0.1, X.shape[0])

# Fit regression models
regr_1 = DecisionTreeRegressor(max_depth=4)
regr_2 = AdaBoostRegressor(DecisionTreeRegressor(max_depth=4),
                           n_estimators=300, random_state=rng)
regr_1.fit(X, y)
regr_2.fit(X, y)

# Predict
y_1 = regr_1.predict(X)
y_2 = regr_2.predict(X)

# Plot the results
plt.figure()
plt.scatter(X, y, c="k", label="training samples")
plt.plot(X, y_1, c="g", label="n_estimators=1", linewidth=2)
plt.plot(X, y_2, c="r", label="n_estimators=300", linewidth=2)
plt.xlabel("data")
plt.ylabel("target")
plt.title("Boosted Decision Tree Regression")
plt.legend()
plt.show()
