The glmnetUtils package provides a collection of tools to streamline the process of fitting elastic net models with glmnet. I wrote the package after a couple of projects where I found myself writing the same boilerplate code to convert a data frame into a predictor matrix and a response vector. In addition to providing a formula interface, it also has a function (cvAlpha.glmnet) to do crossvalidation for both elastic net parameters α and λ, as well as some utility functions.

The formula interface

The interface that glmnetUtils provides is very much the same as for most modelling functions in R. To fit a model, you provide a formula and data frame. You can also provide any arguments that glmnet will accept. Here is a simple example:

library(glmnetUtils)

mtcarsMod <- glmnet(mpg ~ cyl + disp + hp, data=mtcars)
mtcarsMod

## Call:
## glmnet.formula(formula = mpg ~ cyl + disp + hp, data = mtcars)
##
## Model fitting options:
## Sparse model matrix: FALSE
## Use model.frame: FALSE
## Alpha: 1
## Lambda summary:
## Min. 1st Qu. Median Mean 3rd Qu. Max.
## 0.03326 0.11690 0.41000 1.02800 1.44100 5.05500

Under the hood, glmnetUtils creates a model matrix and response vector, and passes them to the glmnet package to do the actual model fitting. Prediction also works as you'd expect: just pass a data frame containing the new observations, along with any arguments that predict.glmnet needs.

# least squares regression: get predictions for lambda=1
predict(mtcarsMod, newdata=mtcars, s=1)

Building the model matrix

You may have noticed the options "use model.frame" and "sparse model matrix" in the printed output above. glmnetUtils includes a couple of options to improve performance, especially on wide datasets and/or those with many categorical (factor) variables.

The standard R method for creating a model matrix out of a data frame uses the model.frame function, which has a major disadvantage when it comes to wide data. It generates a terms object, which specifies how the original columns of data relate to the columns in the model matrix. This involves creating and storing a (roughly) square matrix of size p × p, where p is the number of variables in the model. When p > 10000, which isn't uncommon these days, the terms object can exceed a gigabyte in size. Even if there is enough memory to store the object, processing it can be very slow.

Another issue with the standard approach is the treatment of factors. Normally, model.matrix will turn an N-level factor into an indicator matrix with N−1 columns, with one column being dropped. This is necessary for unregularised models as fit with lm and glm, since the full set of N columns is linearly dependent. However, this may not be appropriate for a regularised model as fit with glmnet. The regularisation procedure shrinks the coefficients towards zero, which forces the estimated differences from the baseline to be smaller. But this only makes sense if the baseline level was chosen beforehand, or is otherwise meaningful as a default; otherwise it is effectively making the levels more similar to an arbitrarily chosen level.
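You can see the difference between the two encodings with a small base R example (a sketch; the factor here is made up for illustration):

```r
f <- factor(c("a", "b", "c", "a"))

# standard treatment coding: N-1 = 2 indicator columns, with "a" as the baseline
model.matrix(~ f)

# full one-hot coding: all N = 3 levels get their own column
model.matrix(~ f - 1)
```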

To deal with these problems, glmnetUtils by default will avoid using model.frame, instead building up the model matrix term-by-term. This avoids the memory cost of creating a terms object, and can be much faster than the standard approach. It will also include one column in the model matrix for all levels in a factor; that is, no baseline level is assumed. In this situation, the coefficients represent differences from the overall mean response, and shrinking them to zero is meaningful (usually). Machine learners may also recognise this as one-hot encoding.

glmnetUtils can also generate a sparse model matrix, using the sparse.model.matrix function provided in the Matrix package. This works exactly the same as a regular model matrix, but takes up significantly less memory if many of its entries are zero. A scenario where this is the case would be where many of the predictors are factors, each with a large number of levels.
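Both behaviours are controlled by arguments to the fitting functions. A minimal sketch, using the use.model.frame and sparse arguments as documented in the package:

```r
library(glmnetUtils)

# fall back to the standard model.frame machinery (drops a baseline factor level)
mod1 <- glmnet(mpg ~ cyl + disp + hp, data=mtcars, use.model.frame=TRUE)

# build a sparse model matrix instead; useful when many predictors are factors
mod2 <- glmnet(mpg ~ ., data=mtcars, sparse=TRUE)
```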

Crossvalidation for α

One piece missing from the standard glmnet package is a way of choosing α, the elastic net mixing parameter, similar to how cv.glmnet chooses λ, the shrinkage parameter. To fix this, glmnetUtils provides the cvAlpha.glmnet function, which uses crossvalidation to examine the impact on the model of changing α and λ. The interface is the same as for the other functions:

# Leukemia dataset from Trevor Hastie's website:
# http://web.stanford.edu/~hastie/glmnet/glmnetData/Leukemia.RData
load("~/Leukemia.RData")
leuk <- do.call(data.frame, Leukemia)
cvAlpha.glmnet(y ~ ., data=leuk, family="binomial")

## Call:
## cvAlpha.glmnet.formula(formula = y ~ ., data = leuk, family = "binomial")
##
## Model fitting options:
## Sparse model matrix: FALSE
## Use model.frame: FALSE
## Alpha values: 0 0.001 0.008 0.027 0.064 0.125 0.216 0.343 0.512 0.729 1
## Number of crossvalidation folds for lambda: 10

cvAlpha.glmnet uses the algorithm described in the help for cv.glmnet, which is to fix the distribution of observations across folds and then call cv.glmnet in a loop with different values of α. Optionally, you can parallelise this outer loop, by setting the outerParallel argument to a non-NULL value. Currently, glmnetUtils supports the following methods of parallelisation:

  • Via parLapply in the parallel package. To use this, set outerParallel to a valid cluster object created by makeCluster.
  • Via rxExec as supplied by Microsoft R Server’s RevoScaleR package. To use this, set outerParallel to a valid compute context created by RxComputeContext, or a character string specifying such a context.
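For example, to run the outer loop over α on a local cluster (a sketch; the choice of three workers is arbitrary):

```r
library(parallel)
library(glmnetUtils)

# create a local cluster and pass it to cvAlpha.glmnet via outerParallel
cl <- makeCluster(3)
cvA <- cvAlpha.glmnet(mpg ~ ., data=mtcars, outerParallel=cl)
stopCluster(cl)
```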

Conclusion

The glmnetUtils package is a way to improve quality of life for users of glmnet. As with many R packages, it’s always under development; you can get the latest version from my GitHub repo. The easiest way to install it is via devtools:

library(devtools)
install_github("hong-revo/glmnetUtils")

A more detailed version of this post can also be found in the package vignette. If you find a bug, or if you want to suggest improvements to the package, please feel free to contact me at hongooi@microsoft.com.

Reposted from: http://blog.revolutionanalytics.com/2016/11/glmnetutils.html
