Hyperparameter Tuning

For details, see the official documentation.

Definition

Parameters that must be specified before fitting the model.

Where it applies

  • Linear regression: Choosing parameters
  • Ridge/lasso regression: Choosing alpha
  • k-Nearest Neighbors: Choosing n_neighbors
  • Parameters like alpha and k: Hyperparameters
  • Hyperparameters cannot be learned by fitting the model

GridSearchCV

sklearn.model_selection.GridSearchCV

  • A module for automatic hyperparameter search
  • Grid search + cross-validation
  • Within the specified parameter ranges, it steps through every candidate value, trains the estimator with each setting, and keeps the parameters that score highest on the validation folds; it is essentially a train-and-compare loop
class sklearn.model_selection.GridSearchCV(estimator, param_grid, scoring=None, n_jobs=None, iid='deprecated', refit=True, cv=None, verbose=0, pre_dispatch='2*n_jobs', error_score=nan, return_train_score=False)

Parameters

  • estimator: the model object
  • param_grid: dict or list of dictionaries — define a dictionary of candidate values keyed by parameter name and pass it in
  • scoring: string, callable, list/tuple, dict or None, default: None — the evaluation metric, e.g. a loss such as RMSE or MSE
  • n_jobs: number of jobs to run in parallel. None means 1 unless in a joblib.parallel_backend context; -1 means using all processors. See Glossary for more details. Controls how many cores run concurrently

  • cv: int, cross-validation generator or an iterable, optional — the number of folds for k-fold cross-validation, default 5. Determines the cross-validation splitting strategy. Possible inputs for cv are:
    • None, to use the default 5-fold cross-validation
    • An integer, to specify the number of folds in a (Stratified)KFold
    • A CV splitter
    • An iterable yielding (train, test) splits as arrays of indices
    • For integer/None inputs, if the estimator is a classifier and y is either binary or multiclass, StratifiedKFold is used. In all other cases, KFold is used.
  • verbose: controls how much information is printed during the search; higher values print more. (A sketch of these parameters in use follows this list.)
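
A minimal sketch of these parameters in use — the Ridge estimator, the alpha grid, and the random_state are illustrative assumptions, not from the original:

from sklearn.linear_model import Ridge
from sklearn.model_selection import GridSearchCV, KFold

ridge = Ridge()
param_grid = {'alpha': [0.01, 0.1, 1.0, 10.0]}         # candidates, keyed by parameter name
kf = KFold(n_splits=5, shuffle=True, random_state=42)  # an explicit CV splitter instead of cv=5
grid = GridSearchCV(ridge, param_grid,
                    scoring='neg_mean_squared_error',  # metric to optimize (sklearn maximizes, hence "neg_")
                    n_jobs=-1,                         # use all processors
                    cv=kf,                             # accepts an int or a splitter object
                    verbose=1)                         # print progress while fitting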

Attributes

Common ones (see the sketch after this list):

  • cv_results_: dict of numpy (masked) ndarrays — the result of every cross-validation candidate
  • best_estimator_: the best estimator found
  • best_params_: dict
    • Returns the parameters of the best model
    • Parameter setting that gave the best results on the hold out data.
    • For multi-metric evaluation, this is present only if refit is specified.
  • best_score_: float
    • Returns the score achieved by the best parameters
    • Mean cross-validated score of the best_estimator
    • For multi-metric evaluation, this is present only if refit is specified.
    • This attribute is not available if refit is a function.
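
A quick sketch of reading these attributes off a fitted search object — the name logreg_cv anticipates the worked example below, and pandas is used only for pretty-printing:

import pandas as pd

results = pd.DataFrame(logreg_cv.cv_results_)                     # one row per candidate setting
print(results[['params', 'mean_test_score', 'rank_test_score']])
print(logreg_cv.best_params_)     # parameter setting with the best mean CV score
print(logreg_cv.best_score_)      # mean cross-validated score of best_estimator_
print(logreg_cv.best_estimator_)  # refit on the whole dataset when refit=True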

Worked example

# Import necessary modules
import numpy as np
from sklearn.model_selection import GridSearchCV
from sklearn.linear_model import LogisticRegression

# Setup the hyperparameter grid
# Create the set of candidate values
c_space = np.logspace(-5, 8, 15)
# Put the candidates into a dictionary keyed by parameter name
param_grid = {'C': c_space}

# Instantiate a logistic regression classifier: logreg
logreg = LogisticRegression()

# Instantiate the GridSearchCV object: logreg_cv
logreg_cv = GridSearchCV(logreg, param_grid, cv=5)

# Fit it to the data (X and y come from the exercise dataset)
logreg_cv.fit(X, y)

# Print the tuned parameters and score of the best model
print("Tuned Logistic Regression Parameters: {}".format(logreg_cv.best_params_))
print("Best score is {}".format(logreg_cv.best_score_))

<script.py> output:
Tuned Logistic Regression Parameters: {'C': 3.727593720314938}
Best score is 0.7708333333333334
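
Because refit=True by default, the fitted search object itself acts as the best estimator; a short sketch, where X_new stands in for some new feature matrix (an assumption, not part of the original exercise):

y_pred = logreg_cv.predict(X_new)   # delegates to best_estimator_.predict
# equivalent: logreg_cv.best_estimator_.predict(X_new)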

GridSearchCV can be computationally expensive, especially if you are searching over a large hyperparameter space and dealing with multiple hyperparameters. A solution to this is to use RandomizedSearchCV, in which not all hyperparameter values are tried out. Instead, a fixed number of hyperparameter settings is sampled from specified probability distributions.

Grid search is essentially a for loop: it walks through every parameter combination, so when there are many parameters to tune the computation becomes very expensive. A random search that samples the space is a better choice in that case.

RandomizedSearchCV is used the same way as GridSearchCV, but it replaces the exhaustive grid search with random sampling from the parameter space. For continuous parameters it can sample from a distribution, which grid search cannot do. Its search power depends on the n_iter parameter. Code is given below.


RandomizedSearchCV

  • Random search
  • Instead of trying every parameter value, it samples a fixed number of settings from the specified probability distributions

    I still don't quite get it?

    You can compare the running time — see the sketch below.
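
A minimal timing sketch under assumed toy data (make_classification and the max_iter setting are illustrative, not from the original): grid search fits all 15 candidates, random search only n_iter of them.

import time
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV, RandomizedSearchCV

X, y = make_classification(n_samples=500, random_state=0)  # toy data
logreg = LogisticRegression(max_iter=1000)
param_grid = {'C': np.logspace(-5, 8, 15)}

start = time.time()
GridSearchCV(logreg, param_grid, cv=5).fit(X, y)           # 15 settings x 5 folds = 75 fits
print("grid search: {:.2f}s".format(time.time() - start))

start = time.time()
RandomizedSearchCV(logreg, param_grid, n_iter=5, cv=5,
                   random_state=0).fit(X, y)               # only 5 sampled settings = 25 fits
print("random search: {:.2f}s".format(time.time() - start))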

Compared with grid search, the parameters differ slightly.

Let's just list the common ones here; for the rest, refer to the official documentation.

One parameter more than Grid:

  • n_iter — how many parameter settings are sampled; cv_results_ (the result of every cross-validation round) is available just as with GridSearchCV

Worked example

# Import necessary modules
from scipy.stats import randint
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import RandomizedSearchCV

# Setup the parameters and distributions to sample from: param_dist
# Using a decision tree as the example; note the dictionary form
param_dist = {"max_depth": [3, None],
              "max_features": randint(1, 9),
              "min_samples_leaf": randint(1, 9),
              "criterion": ["gini", "entropy"]}

# Instantiate a Decision Tree classifier: tree
tree = DecisionTreeClassifier()

# Instantiate the RandomizedSearchCV object: tree_cv
tree_cv = RandomizedSearchCV(tree, param_dist, cv=5)

# Fit it to the data
tree_cv.fit(X, y)

# Print the tuned parameters and score
print("Tuned Decision Tree Parameters: {}".format(tree_cv.best_params_))
print("Best score is {}".format(tree_cv.best_score_))

<script.py> output:
Tuned Decision Tree Parameters: {'criterion': 'gini', 'max_depth': 3, 'max_features': 5, 'min_samples_leaf': 2}
Best score is 0.7395833333333334

Limits of grid search and random search

Where these tuning approaches fall short

  • grid: exhaustive — the number of fits grows multiplicatively with the number of hyperparameters and candidate values (times the CV folds), which quickly becomes infeasible; see the arithmetic sketch after this list
  • random: only n_iter settings are ever tried, so there is no guarantee the best combination is among them
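
Back-of-the-envelope arithmetic for the cost, if the decision-tree candidates above were enumerated in a grid (randint(1, 9) covers the 8 integers 1..8; RandomizedSearchCV's n_iter defaults to 10):

n_candidates = 2 * 8 * 8 * 2   # max_depth x max_features x min_samples_leaf x criterion = 256
folds = 5
print(n_candidates * folds)    # 1280 fits for an exhaustive grid search
print(10 * folds)              # 50 fits for RandomizedSearchCV with the default n_iter=10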
