https://en.wikipedia.org/wiki/Ensemble_learning

Stacking

Stacking (sometimes called stacked generalization) involves training a learning algorithm to combine the predictions of several other learning algorithms. First, all of the other algorithms are trained using the available data; then a combiner algorithm is trained to make a final prediction using all the predictions of the other algorithms as additional inputs. If an arbitrary combiner algorithm is used, then stacking can theoretically represent any of the ensemble techniques described in this article, although in practice a single-layer logistic regression model is often used as the combiner.
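As an illustration, the following is a minimal sketch of that two-stage setup using scikit-learn's StackingClassifier (assuming scikit-learn >= 0.22); the synthetic dataset and the choice of base learners are placeholders, not something prescribed by the article.

# Minimal stacking sketch: fit several base learners, then train a
# logistic-regression combiner on their (out-of-fold) predictions.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

X, y = make_classification(n_samples=1000, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

base_learners = [
    ("rf", RandomForestClassifier(n_estimators=100, random_state=0)),
    ("svm", SVC(probability=True, random_state=0)),
]

# final_estimator plays the role of the single-layer logistic regression combiner.
stack = StackingClassifier(estimators=base_learners,
                           final_estimator=LogisticRegression())
stack.fit(X_train, y_train)
print("held-out accuracy:", stack.score(X_test, y_test))

Note that StackingClassifier trains the combiner on out-of-fold predictions of the base learners, so the combiner does not simply learn to trust whichever base model overfits the training data most.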

Stacking typically yields performance better than any single one of the trained models.[22] It has been successfully used on both supervised learning tasks (regression,[23] classification, and distance learning[24]) and unsupervised learning (density estimation).[25] It has also been used to estimate bagging's error rate.[3][26] It has been reported to outperform Bayesian model averaging.[27] The two top performers in the Netflix competition utilized blending, which may be considered a form of stacking.[28]

https://arxiv.org/pdf/0911.0460.pdf

【Significantly improves the accuracy of collaborative filtering】

Ensemble methods, such as stacking, are designed to boost predictive accuracy by blending the predictions of multiple machine learning models. Recent work has shown that the use of meta-features, additional inputs describing each example in a dataset, can boost the performance of ensemble methods, but the greatest reported gains have come from nonlinear procedures requiring significant tuning and training time. Here, we present a linear technique, Feature-Weighted Linear Stacking (FWLS), that incorporates meta-features for improved accuracy while retaining the well-known virtues of linear regression regarding speed, stability, and interpretability. FWLS combines model predictions linearly using coefficients that are themselves linear functions of meta-features. This technique was a key facet of the solution of the second place team in the recently concluded Netflix Prize competition. Significant increases in accuracy over standard linear stacking are demonstrated on the Netflix Prize collaborative filtering dataset.
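Reading the abstract, the blended prediction in FWLS is linear in the model outputs, with coefficients that are themselves linear in the meta-features; expanding the products, this reduces to ordinary linear regression on all pairwise products of model predictions and meta-features. The NumPy sketch below illustrates that reduction; the random predictions, meta-features, and the small ridge term are illustrative assumptions, not the paper's actual setup.

import numpy as np

rng = np.random.default_rng(0)
n, n_models, n_meta = 1000, 3, 4

g = rng.normal(size=(n, n_models))   # base-model predictions (placeholder)
# meta-features, with a constant column so plain linear stacking is a special case
f = np.hstack([np.ones((n, 1)), rng.normal(size=(n, n_meta - 1))])
y = rng.normal(size=n)               # targets (placeholder)

# FWLS prediction: b(x) = sum_{i,j} v_ij * f_j(x) * g_i(x),
# i.e. linear regression on all products f_j * g_i.
X = (g[:, :, None] * f[:, None, :]).reshape(n, n_models * n_meta)

# Fit the coefficients v_ij by ridge-stabilized least squares.
lam = 1e-3
v = np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)
blended = X @ v                      # final blended predictions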

【a blend of blends - stacking: blending, mixing, stacking; a blend of blends】

“Stacking” is a technique in which the predictions of a collection of models are given as inputs to a second-level learning algorithm. This second-level algorithm is trained to combine the model predictions optimally to form a final set of predictions. Many machine learning practitioners have had success using stacking and related techniques to boost prediction accuracy beyond the level obtained by any of the individual models. In some contexts, stacking is also referred to as blending, and we will use the terms interchangeably here. Since its introduction [23], modellers have employed stacking successfully on a wide variety of problems, including chemometrics [8], spam filtering [16], and large collections of datasets drawn from the UCI Machine Learning Repository [21, 7]. One prominent recent example of the power of model blending was the Netflix Prize collaborative filtering competition. The team BellKor’s Pragmatic Chaos won the $1 million prize using a blend of hundreds of different models [22, 11, 14]. Indeed, the winning solution was a blend at multiple levels, i.e., a blend of blends.

Intuition suggests that the reliability of a model may vary as a function of the conditions in which it is used. For instance, in a collaborative filtering context where we wish to predict the preferences of customers for various products, the amount of data collected may vary significantly depending on which customer or which product is under consideration. Model A may be more reliable than model B for users who have rated many products, but model B may outperform model A for users who have rated only a few products. In an attempt to capitalize on this intuition, many researchers have developed approaches that attempt to improve the accuracy of stacked regression by adapting the blending on the basis of side information. Such an additional source of information, like the number of products rated by a user or the number of days since a product was released, is often referred to as a “meta-feature,” and we will use that terminology here.
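To make that intuition concrete, here is a hypothetical two-model example in which the weight given to model A grows with a single meta-feature, the number of ratings a user has submitted, so model A dominates for heavy raters and model B for sparse ones. The function name, coefficients, and rating counts are illustrative only; in practice such coefficients would be fitted on held-out data, as FWLS does.

import numpy as np

def blend(pred_a, pred_b, num_ratings, a0=0.2, a1=0.1):
    # Weight on model A is a clipped linear function of log(1 + rating count),
    # i.e. a one-meta-feature special case of feature-weighted blending.
    w_a = np.clip(a0 + a1 * np.log1p(num_ratings), 0.0, 1.0)
    return w_a * pred_a + (1.0 - w_a) * pred_b

# Model A is trusted more for a user with 500 ratings than for one with 2.
print(blend(4.1, 3.6, num_ratings=500))
print(blend(4.1, 3.6, num_ratings=2))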

