1. Overview
XGBoost is well known as a dominant algorithm in data-science competitions, but it takes a long time to train and uses a lot of memory. In January 2017, Microsoft open-sourced LightGBM on GitHub. Without sacrificing accuracy, it trains roughly 10x faster and uses about a third of the memory. LightGBM is a fast, distributed, high-performance gradient boosting framework based on decision trees, usable for ranking, classification, regression, and many other machine learning tasks. For details on the underlying principles and operations, see the LightGBM 中文文档 (the Chinese-language LightGBM documentation).
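To ground what follows, here is a minimal, hedged sketch of training LightGBM with its native API; the toy data and every parameter value below are illustrative assumptions, not recommendations:

import numpy as np
import lightgbm as lgb

# Toy binary-classification data, for illustration only
X = np.random.rand(500, 10)
y = (X[:, 0] + X[:, 1] > 1.0).astype(int)

train_set = lgb.Dataset(X, label=y)
params = {'objective': 'binary', 'metric': 'auc', 'learning_rate': 0.1, 'num_leaves': 31}
booster = lgb.train(params, train_set, num_boost_round=100)
pred_prob = booster.predict(X)  # predicted positive-class probabilities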

This article focuses on two ways to tune LightGBM.

The tables below summarize the important parameters, what they mean, and how to use them.

Learning-control parameter | Meaning | Usage
max_depth | maximum depth of a tree | when the model overfits, consider lowering max_depth first
min_data_in_leaf | minimum number of records a leaf may hold | default 20; raise it when overfitting
feature_fraction | e.g. 0.8 means 80% of the features are randomly selected to build each tree per iteration | used when boosting is random forest
bagging_fraction | fraction of the data used per iteration | speeds up training and reduces overfitting
early_stopping_round | training stops if a metric on the validation data has not improved in the last early_stopping_round rounds | speeds up experiments and avoids wasted iterations
lambda | regularization strength | typically 0~1
min_gain_to_split | minimum gain required to make a split | controls how many useful splits a tree makes
max_cat_group | finds split points on group boundaries | use when there are many categories, where finding split points overfits easily
Core parameter | Meaning | Usage
Task | purpose of the data | train or predict
application | purpose of the model | regression for regression; binary for binary classification; multiclass for multi-class classification
boosting | algorithm to use | gbdt; rf: random forest; dart: Dropouts meet Multiple Additive Regression Trees; goss: Gradient-based One-Side Sampling
num_boost_round | number of boosting iterations | usually 100+
learning_rate | shrinkage applied to each tree's contribution, i.e. the learning step size | commonly 0.1, 0.001, 0.003…
num_leaves | maximum number of leaves in one tree | default 31
device | hardware to train on | cpu or gpu
metric | evaluation metric | mae: mean absolute error; mse: mean squared error; binary_logloss: loss for binary classification; multi_logloss: loss for multi-class classification
IO parameter | Meaning
max_bin | maximum number of bins a feature's values will be bucketed into
categorical_feature | if categorical_features = 0,1,2, columns 0, 1, and 2 are treated as categorical variables
ignore_column | like categorical_features, except the listed columns are ignored entirely rather than treated as categorical
save_binary | if true, the dataset is saved as a binary file, which speeds up future loading
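
As a hedged sketch of how the parameters above fit together in a native-API params dict (every value here is an illustrative placeholder, not a recommendation):

params = {
    'task': 'train',
    'application': 'binary',      # alias of objective
    'boosting': 'gbdt',
    'num_boost_round': 100,
    'learning_rate': 0.1,
    'num_leaves': 31,
    'max_depth': 6,
    'min_data_in_leaf': 20,
    'feature_fraction': 0.8,
    'bagging_fraction': 0.8,
    'early_stopping_round': 50,
    'lambda_l1': 0.1,
    'min_gain_to_split': 0.0,
    'max_bin': 255,
    'metric': 'binary_logloss',
    'device': 'cpu',
}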
Tuning
Parameter | Meaning
num_leaves | should be <= 2^(max_depth); going beyond that invites overfitting
min_data_in_leaf | larger values avoid growing overly deep trees but can cause underfitting; for large datasets, use hundreds or thousands
max_depth | also caps tree depth explicitly
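
For example, a hedged pairing that respects the rule of thumb above (the numbers are illustrative only):

max_depth = 6
num_leaves = 2 ** max_depth - 1   # 63, kept below 2**max_depth to limit over-fitting
min_data_in_leaf = 100            # on a large dataset this might be hundreds or more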
The table below lists which parameters to adjust for each of three goals: faster speed, better accuracy, and controlling over-fitting.

Faster speed | Better accuracy | Control over-fitting
use a smaller max_bin | use a larger max_bin | use a smaller max_bin
— | use a larger num_leaves | use a smaller num_leaves
use feature_fraction for sub-sampling | — | use feature_fraction
use bagging_fraction and bagging_freq | — | set bagging_fraction and bagging_freq
— | use more training data | use more training data
use save_binary to speed up data loading | use categorical features directly | use min_data_in_leaf and min_sum_hessian_in_leaf
use parallel learning | use dart | use lambda_l1, lambda_l2, and min_gain_to_split for regularization
— | use a larger num_iterations with a smaller learning_rate | use max_depth to cap tree depth
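
Read as three hedged parameter presets, the table might translate into something like this (all values are placeholders under my own assumptions, not tuned results):

# Faster speed
speed_params = {'max_bin': 63, 'feature_fraction': 0.8,
                'bagging_fraction': 0.8, 'bagging_freq': 5,
                'save_binary': True}

# Better accuracy
accuracy_params = {'max_bin': 255, 'num_leaves': 63, 'boosting': 'dart',
                   'num_iterations': 2000, 'learning_rate': 0.01}

# Control over-fitting
anti_overfit_params = {'max_bin': 63, 'num_leaves': 15, 'max_depth': 5,
                       'feature_fraction': 0.8, 'bagging_fraction': 0.8, 'bagging_freq': 5,
                       'min_data_in_leaf': 50, 'min_sum_hessian_in_leaf': 10.0,
                       'lambda_l1': 0.1, 'lambda_l2': 0.1, 'min_gain_to_split': 0.1}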
2. Tuning with GridSearchCV
Tuning LightGBM is much like tuning RF or GBDT. The basic workflow is:

First choose a fairly high learning rate, around 0.1, to speed up convergence. This makes the tuning rounds much faster.

Tune the basic decision-tree parameters.

Tune the regularization parameters.

Finally, lower the learning rate to squeeze out the last bit of accuracy.

Step 1: learning rate and number of iterations
We first fix the learning rate at a relatively high value, here learning_rate = 0.1, and pick the booster type (boosting/boost/boosting_type); the default, gbdt, is the usual choice.

The number of iterations, i.e. the number of residual trees (n_estimators/num_iterations/num_round/num_boost_round), can first be set to a large value; we then read the optimal iteration count off the cv results, as the code below shows.

Before that, the other important parameters need initial values. Their exact values matter little; they simply make it convenient to pin down the remaining parameters. The initial values are:

These parameters depend on the specific project:

'boosting_type'/'boosting': 'gbdt'
'objective': 'binary'
'metric': 'auc'
And these are my chosen initial values:

'max_depth': 5       # the dataset is not large, so a moderate value; anything in 4-10 works
'num_leaves': 30     # LightGBM grows trees leaf-wise; the official advice is to keep this below 2^max_depth
'subsample'/'bagging_fraction': 0.8         # row subsampling
'colsample_bytree'/'feature_fraction': 0.8  # feature subsampling

Now we use LightGBM's cv function to determine the iteration count:

import pandas as pd
import lightgbm as lgb
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split  # sklearn.cross_validation was removed in modern scikit-learn

canceData = load_breast_cancer()
X = canceData.data
y = canceData.target
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0, test_size=0.2)
params = {
    'boosting_type': 'gbdt',
    'objective': 'binary',
    'metric': 'auc',
    'nthread': 4,
    'learning_rate': 0.1,
    'num_leaves': 30,
    'max_depth': 5,
    'subsample': 0.8,
    'colsample_bytree': 0.8,
}

data_train = lgb.Dataset(X_train, y_train)
# Note: in LightGBM >= 4.0, early stopping is passed via callbacks=[lgb.early_stopping(50)]
# and the result key becomes 'valid auc-mean' instead of 'auc-mean'
cv_results = lgb.cv(params, data_train, num_boost_round=1000, nfold=5, stratified=False,
                    shuffle=True, metrics='auc', early_stopping_rounds=50, seed=0)
print('best n_estimators:', len(cv_results['auc-mean']))
print('best cv score:', pd.Series(cv_results['auc-mean']).max())
The output is:

('best n_estimators:', 188)
('best cv score:', 0.99134716298085424)

Based on these results, we take n_estimators = 188.

Step 2: determine max_depth and num_leaves
These are the most important parameters for raising accuracy. Here we bring in sklearn's GridSearchCV() to do the search.

from sklearn.model_selection import GridSearchCV  # sklearn.grid_search was removed in modern scikit-learn

params_test1 = {'max_depth': range(3, 8, 1), 'num_leaves': range(5, 100, 5)}

gsearch1 = GridSearchCV(
    estimator=lgb.LGBMClassifier(boosting_type='gbdt', objective='binary', metrics='auc',
                                 learning_rate=0.1, n_estimators=188, max_depth=6,
                                 bagging_fraction=0.8, feature_fraction=0.8),
    param_grid=params_test1, scoring='roc_auc', cv=5, n_jobs=-1)
gsearch1.fit(X_train, y_train)
# per-candidate scores live in gsearch1.cv_results_ (grid_scores_ in old scikit-learn)
print(gsearch1.best_params_, gsearch1.best_score_)
The results (truncated; only part shown):

([mean: 0.99248, std: 0.01033, params: {'num_leaves': 5, 'max_depth': 3},
mean: 0.99227, std: 0.01013, params: {'num_leaves': 10, 'max_depth': 3},
mean: 0.99227, std: 0.01013, params: {'num_leaves': 15, 'max_depth': 3},
······
mean: 0.99331, std: 0.00775, params: {'num_leaves': 85, 'max_depth': 7},
mean: 0.99331, std: 0.00775, params: {'num_leaves': 90, 'max_depth': 7},
mean: 0.99331, std: 0.00775, params: {'num_leaves': 95, 'max_depth': 7}],
{'max_depth': 4, 'num_leaves': 10},
0.9943573667711598)
Based on these results, we take max_depth = 4 and num_leaves = 10.

Step 3: determine min_data_in_leaf and max_bin
params_test2 = {'max_bin': range(5, 256, 10), 'min_data_in_leaf': range(1, 102, 10)}

gsearch2 = GridSearchCV(
    estimator=lgb.LGBMClassifier(boosting_type='gbdt', objective='binary', metrics='auc',
                                 learning_rate=0.1, n_estimators=188, max_depth=4,
                                 num_leaves=10, bagging_fraction=0.8, feature_fraction=0.8),
    param_grid=params_test2, scoring='roc_auc', cv=5, n_jobs=-1)
gsearch2.fit(X_train, y_train)
print(gsearch2.best_params_, gsearch2.best_score_)
The results (truncated; only part shown):

([mean: 0.98715, std: 0.01044, params: {'min_data_in_leaf': 1, 'max_bin': 5},
mean: 0.98809, std: 0.01095, params: {'min_data_in_leaf': 11, 'max_bin': 5},
mean: 0.98809, std: 0.00952, params: {'min_data_in_leaf': 21, 'max_bin': 5},
······
mean: 0.99363, std: 0.00812, params: {'min_data_in_leaf': 81, 'max_bin': 255},
mean: 0.99133, std: 0.00788, params: {'min_data_in_leaf': 91, 'max_bin': 255},
mean: 0.98882, std: 0.01223, params: {'min_data_in_leaf': 101, 'max_bin': 255}],
{'max_bin': 15, 'min_data_in_leaf': 51},
0.9952978056426331)
Based on these results, we take min_data_in_leaf = 51 and max_bin = 15.

Step 4: determine feature_fraction, bagging_fraction, and bagging_freq
params_test3 = {'feature_fraction': [0.6, 0.7, 0.8, 0.9, 1.0],
                'bagging_fraction': [0.6, 0.7, 0.8, 0.9, 1.0],
                'bagging_freq': range(0, 81, 10)
                }

gsearch3 = GridSearchCV(
    estimator=lgb.LGBMClassifier(boosting_type='gbdt', objective='binary', metrics='auc',
                                 learning_rate=0.1, n_estimators=188, max_depth=4,
                                 num_leaves=10, max_bin=15, min_data_in_leaf=51),
    param_grid=params_test3, scoring='roc_auc', cv=5, n_jobs=-1)
gsearch3.fit(X_train, y_train)
print(gsearch3.best_params_, gsearch3.best_score_)
The results (truncated; only part shown):

([mean: 0.99467, std: 0.00710, params: {'bagging_freq': 0, 'bagging_fraction': 0.6, 'feature_fraction': 0.6},
mean: 0.99415, std: 0.00804, params: {'bagging_freq': 0, 'bagging_fraction': 0.6, 'feature_fraction': 0.7},
mean: 0.99530, std: 0.00722, params: {'bagging_freq': 0, 'bagging_fraction': 0.6, 'feature_fraction': 0.8},
······
mean: 0.99530, std: 0.00722, params: {'bagging_freq': 80, 'bagging_fraction': 1.0, 'feature_fraction': 0.8},
mean: 0.99383, std: 0.00731, params: {'bagging_freq': 80, 'bagging_fraction': 1.0, 'feature_fraction': 0.9},
mean: 0.99383, std: 0.00766, params: {'bagging_freq': 80, 'bagging_fraction': 1.0, 'feature_fraction': 1.0}],
{'bagging_fraction': 0.6, 'bagging_freq': 0, 'feature_fraction': 0.8},
0.9952978056426331)
Step 5: determine lambda_l1 and lambda_l2
params_test4 = {'lambda_l1': [1e-5, 1e-3, 1e-1, 0.0, 0.1, 0.3, 0.5, 0.7, 0.9, 1.0],
                'lambda_l2': [1e-5, 1e-3, 1e-1, 0.0, 0.1, 0.3, 0.5, 0.7, 0.9, 1.0]
                }

gsearch4 = GridSearchCV(
    estimator=lgb.LGBMClassifier(boosting_type='gbdt', objective='binary', metrics='auc',
                                 learning_rate=0.1, n_estimators=188, max_depth=4,
                                 num_leaves=10, max_bin=15, min_data_in_leaf=51,
                                 bagging_fraction=0.6, bagging_freq=0, feature_fraction=0.8),
    param_grid=params_test4, scoring='roc_auc', cv=5, n_jobs=-1)
gsearch4.fit(X_train, y_train)
print(gsearch4.best_params_, gsearch4.best_score_)
The results (truncated; only part shown):

([mean: 0.99530, std: 0.00722, params: {'lambda_l1': 1e-05, 'lambda_l2': 1e-05},
mean: 0.99415, std: 0.00804, params: {'lambda_l1': 1e-05, 'lambda_l2': 0.001},
mean: 0.99331, std: 0.00826, params: {'lambda_l1': 1e-05, 'lambda_l2': 0.1},
·····
mean: 0.99049, std: 0.01047, params: {'lambda_l1': 1.0, 'lambda_l2': 0.7},
mean: 0.99049, std: 0.01013, params: {'lambda_l1': 1.0, 'lambda_l2': 0.9},
mean: 0.99070, std: 0.01071, params: {'lambda_l1': 1.0, 'lambda_l2': 1.0}],
{'lambda_l1': 1e-05, 'lambda_l2': 1e-05},
0.9952978056426331)
Step 6: determine min_split_gain
params_test5 = {'min_split_gain': [0.0, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0]}

gsearch5 = GridSearchCV(
    estimator=lgb.LGBMClassifier(boosting_type='gbdt', objective='binary', metrics='auc',
                                 learning_rate=0.1, n_estimators=188, max_depth=4,
                                 num_leaves=10, max_bin=15, min_data_in_leaf=51,
                                 bagging_fraction=0.6, bagging_freq=0, feature_fraction=0.8,
                                 lambda_l1=1e-05, lambda_l2=1e-05),
    param_grid=params_test5, scoring='roc_auc', cv=5, n_jobs=-1)
gsearch5.fit(X_train, y_train)
print(gsearch5.best_params_, gsearch5.best_score_)
The results:

([mean: 0.99530, std: 0.00722, params: {'min_split_gain': 0.0},
mean: 0.99415, std: 0.00810, params: {'min_split_gain': 0.1},
mean: 0.99394, std: 0.00898, params: {'min_split_gain': 0.2},
mean: 0.99373, std: 0.00918, params: {'min_split_gain': 0.3},
mean: 0.99404, std: 0.00845, params: {'min_split_gain': 0.4},
mean: 0.99300, std: 0.00958, params: {'min_split_gain': 0.5},
mean: 0.99258, std: 0.00960, params: {'min_split_gain': 0.6},
mean: 0.99227, std: 0.01071, params: {'min_split_gain': 0.7},
mean: 0.99342, std: 0.00872, params: {'min_split_gain': 0.8},
mean: 0.99206, std: 0.01062, params: {'min_split_gain': 0.9},
mean: 0.99206, std: 0.01064, params: {'min_split_gain': 1.0}],
{'min_split_gain': 0.0},
0.9952978056426331)
Step 7: lower the learning rate, increase the number of iterations, and validate the model
from sklearn import metrics  # needed for the scoring below

model = lgb.LGBMClassifier(boosting_type='gbdt', objective='binary', metrics='auc',
                           learning_rate=0.01, n_estimators=1000, max_depth=4, num_leaves=10,
                           max_bin=15, min_data_in_leaf=51, bagging_fraction=0.6,
                           bagging_freq=0, feature_fraction=0.8,
                           lambda_l1=1e-05, lambda_l2=1e-05, min_split_gain=0)
model.fit(X_train, y_train)
y_pre = model.predict(X_test)
print("acc:", metrics.accuracy_score(y_test, y_pre))
print("auc:", metrics.roc_auc_score(y_test, y_pre))
The results:

('acc:', 0.97368421052631582)
('auc:', 0.9744363289933311)

For comparison, with default parameters the model performs as follows:

model = lgb.LGBMClassifier()
model.fit(X_train, y_train)
y_pre = model.predict(X_test)
print("acc:", metrics.accuracy_score(y_test, y_pre))
print("auc:", metrics.roc_auc_score(y_test, y_pre))
('acc:', 0.96491228070175439)
('auc:', 0.96379803112099083)
Both accuracy and AUC improve over the defaults.
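As a hedged aside, newer LightGBM releases (3.3+ is my assumption here) can pick the iteration count automatically with an early-stopping callback on a held-out set, instead of fixing n_estimators up front:

model = lgb.LGBMClassifier(learning_rate=0.01, n_estimators=5000)
model.fit(X_train, y_train,
          eval_set=[(X_test, y_test)],
          eval_metric='auc',
          callbacks=[lgb.early_stopping(stopping_rounds=50)])
print("best iteration:", model.best_iteration_)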

3. Tuning with LightGBM's cv function
This approach is more hands-off: the code searches for good values automatically. Choosing sensible search ranges still takes tuning experience, though, so there is some art to it. The code is given directly below.

import pandas as pd
import lightgbm as lgb
from sklearn import metrics
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split  # sklearn.cross_validation was removed in modern scikit-learn

canceData = load_breast_cancer()
X = canceData.data
y = canceData.target
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0, test_size=0.2)

### Convert the data
print('Converting data')
lgb_train = lgb.Dataset(X_train, y_train, free_raw_data=False)
lgb_eval = lgb.Dataset(X_test, y_test, reference=lgb_train, free_raw_data=False)

### Initial parameters, excluding the ones to be cross-validated
print('Setting parameters')
params = {
    'boosting_type': 'gbdt',
    'objective': 'binary',
    'metric': 'auc',
    'nthread': 4,
    'learning_rate': 0.1
}

### Cross-validation (tuning)
print('Cross-validation')
max_auc = 0.0
best_params = {}

# Accuracy
print("Tuning step 1: improve accuracy")
for num_leaves in range(5, 100, 5):
    for max_depth in range(3, 8, 1):
        params['num_leaves'] = num_leaves
        params['max_depth'] = max_depth

        # Note: in LightGBM >= 4.0, pass callbacks=[lgb.early_stopping(10)]
        # instead of early_stopping_rounds/verbose_eval
        cv_results = lgb.cv(
            params,
            lgb_train,
            seed=1,
            nfold=5,
            metrics=['auc'],
            early_stopping_rounds=10,
            verbose_eval=True
        )

        mean_auc = pd.Series(cv_results['auc-mean']).max()
        boost_rounds = pd.Series(cv_results['auc-mean']).idxmax()

        if mean_auc >= max_auc:
            max_auc = mean_auc
            best_params['num_leaves'] = num_leaves
            best_params['max_depth'] = max_depth

if 'num_leaves' in best_params and 'max_depth' in best_params:
    params['num_leaves'] = best_params['num_leaves']
    params['max_depth'] = best_params['max_depth']

# Over-fitting
print("Tuning step 2: reduce over-fitting")
for max_bin in range(5, 256, 10):
    for min_data_in_leaf in range(1, 102, 10):
        params['max_bin'] = max_bin
        params['min_data_in_leaf'] = min_data_in_leaf

        cv_results = lgb.cv(
            params,
            lgb_train,
            seed=1,
            nfold=5,
            metrics=['auc'],
            early_stopping_rounds=10,
            verbose_eval=True
        )

        mean_auc = pd.Series(cv_results['auc-mean']).max()
        boost_rounds = pd.Series(cv_results['auc-mean']).idxmax()

        if mean_auc >= max_auc:
            max_auc = mean_auc
            best_params['max_bin'] = max_bin
            best_params['min_data_in_leaf'] = min_data_in_leaf

if 'max_bin' in best_params and 'min_data_in_leaf' in best_params:
    params['min_data_in_leaf'] = best_params['min_data_in_leaf']
    params['max_bin'] = best_params['max_bin']

print("Tuning step 3: reduce over-fitting")
for feature_fraction in [0.6, 0.7, 0.8, 0.9, 1.0]:
    for bagging_fraction in [0.6, 0.7, 0.8, 0.9, 1.0]:
        for bagging_freq in range(0, 50, 5):
            params['feature_fraction'] = feature_fraction
            params['bagging_fraction'] = bagging_fraction
            params['bagging_freq'] = bagging_freq

            cv_results = lgb.cv(
                params,
                lgb_train,
                seed=1,
                nfold=5,
                metrics=['auc'],
                early_stopping_rounds=10,
                verbose_eval=True
            )

            mean_auc = pd.Series(cv_results['auc-mean']).max()
            boost_rounds = pd.Series(cv_results['auc-mean']).idxmax()

            if mean_auc >= max_auc:
                max_auc = mean_auc
                best_params['feature_fraction'] = feature_fraction
                best_params['bagging_fraction'] = bagging_fraction
                best_params['bagging_freq'] = bagging_freq

if 'feature_fraction' in best_params and 'bagging_fraction' in best_params and 'bagging_freq' in best_params:
    params['feature_fraction'] = best_params['feature_fraction']
    params['bagging_fraction'] = best_params['bagging_fraction']
    params['bagging_freq'] = best_params['bagging_freq']

print("Tuning step 4: reduce over-fitting")
for lambda_l1 in [1e-5, 1e-3, 1e-1, 0.0, 0.1, 0.3, 0.5, 0.7, 0.9, 1.0]:
    for lambda_l2 in [1e-5, 1e-3, 1e-1, 0.0, 0.1, 0.4, 0.6, 0.7, 0.9, 1.0]:
        params['lambda_l1'] = lambda_l1
        params['lambda_l2'] = lambda_l2
        cv_results = lgb.cv(
            params,
            lgb_train,
            seed=1,
            nfold=5,
            metrics=['auc'],
            early_stopping_rounds=10,
            verbose_eval=True
        )

        mean_auc = pd.Series(cv_results['auc-mean']).max()
        boost_rounds = pd.Series(cv_results['auc-mean']).idxmax()

        if mean_auc >= max_auc:
            max_auc = mean_auc
            best_params['lambda_l1'] = lambda_l1
            best_params['lambda_l2'] = lambda_l2

if 'lambda_l1' in best_params and 'lambda_l2' in best_params:
    params['lambda_l1'] = best_params['lambda_l1']
    params['lambda_l2'] = best_params['lambda_l2']

print("Tuning step 5: reduce over-fitting, part 2")
for min_split_gain in [0.0, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0]:
    params['min_split_gain'] = min_split_gain

    cv_results = lgb.cv(
        params,
        lgb_train,
        seed=1,
        nfold=5,
        metrics=['auc'],
        early_stopping_rounds=10,
        verbose_eval=True
    )

    mean_auc = pd.Series(cv_results['auc-mean']).max()
    boost_rounds = pd.Series(cv_results['auc-mean']).idxmax()

    if mean_auc >= max_auc:
        max_auc = mean_auc
        best_params['min_split_gain'] = min_split_gain

if 'min_split_gain' in best_params:
    params['min_split_gain'] = best_params['min_split_gain']

print(best_params)
The results:

{'bagging_fraction': 0.7,
'bagging_freq': 30,
'feature_fraction': 0.8,
'lambda_l1': 0.1,
'lambda_l2': 0.0,
'max_bin': 255,
'max_depth': 4,
'min_data_in_leaf': 81,
'min_split_gain': 0.1,
'num_leaves': 10}
Plugging the tuned parameters into the model:

model = lgb.LGBMClassifier(boosting_type='gbdt', objective='binary', metrics='auc',
                           learning_rate=0.01, n_estimators=1000, max_depth=4, num_leaves=10,
                           max_bin=255, min_data_in_leaf=81, bagging_fraction=0.7,
                           bagging_freq=30, feature_fraction=0.8,
                           lambda_l1=0.1, lambda_l2=0, min_split_gain=0.1)
model.fit(X_train, y_train)
y_pre = model.predict(X_test)
print("acc:", metrics.accuracy_score(y_test, y_pre))
print("auc:", metrics.roc_auc_score(y_test, y_pre))
The results:

('acc:', 0.98245614035087714)
('auc:', 0.98189901556049541)
 
————————————————
Copyright notice: this is an original article by CSDN blogger 「浅笑古今」, released under the CC 4.0 BY-SA license. Please include the original source link and this notice when reposting.
Original link: https://blog.csdn.net/u012735708/article/details/83749703
