The sklearn preprocessing
Recently, while writing a feature engineering module, I found two excellent packages -- tsfresh and sklearn.
tsfresh is specialized for time series data and mainly consists of two modules, feature extraction and feature selection:
from tsfresh import feature_selection, feature_extraction
To limit the number of irrelevant features, tsfresh deploys the FRESH algorithm. The whole process consists of three steps.
First, the algorithm characterizes each time series with comprehensive and well-established feature mappings. The feature calculators used to derive these features are contained in tsfresh.feature_extraction.feature_calculators.
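In practice the extraction step is a single call. Below is a minimal sketch (not from my original code); the long-format frame and its column names "id", "time" and "value" are assumptions for illustration:

import pandas as pd
from tsfresh import extract_features

# Long-format input: one row per observation, one time series per id value.
# The column names ("id", "time", "value") are assumed for this example.
df = pd.DataFrame({
    "id":    [1, 1, 1, 2, 2, 2],
    "time":  [0, 1, 2, 0, 1, 2],
    "value": [1.0, 2.0, 3.0, 5.0, 4.0, 6.0],
})

# Every calculator in tsfresh.feature_extraction.feature_calculators is applied
# to each series, producing one row of features per id.
features = extract_features(df, column_id="id", column_sort="time")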
In the second step, each extracted feature is individually evaluated with respect to its significance for predicting the target under investigation. These tests are contained in the submodule tsfresh.feature_selection.significance_tests. The result of a significance test is a vector of p-values, quantifying the significance of each feature for predicting the target.
Finally, the vector of p-values is evaluated with the Benjamini-Yekutieli procedure in order to decide which features to keep.
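Steps two and three are bundled in tsfresh.select_features. A minimal sketch, assuming the feature matrix `features` from the extraction step and a matching binary target series `y` (both assumed here); fdr_level controls the Benjamini-Yekutieli false discovery rate:

from tsfresh import select_features
from tsfresh.utilities.dataframe_functions import impute

# Some calculators can produce NaN/inf values; impute them before testing.
impute(features)
# Runs one significance test per feature and keeps the features whose p-values
# survive the Benjamini-Yekutieli procedure at the chosen FDR level.
selected = select_features(features, y, fdr_level=0.05)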
In summary, tsfresh is a scalable and efficient feature engineering tool.
Although tsfresh is powerful, I chose sklearn for this task.
I downloaded the heart disease data set. The target is binary and there are 13 feature columns. I only used MinMaxScaler to transform a few numeric columns (age, trestbps, chol, thalach), and I tried two models: an AutoSklearnClassifier ensemble and a RandomForest ensemble. Both models performed poorly.
from sklearn.preprocessing import MinMaxScaler,StandardScaler
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from numpy import set_printoptions, inf
set_printoptions(threshold=inf)
import pandas as pd
data = pd.read_csv("../data_set/heart.csv")
X = data[data.columns[:data.shape[1] - 1]].values
y = data[data.columns[-1]].values
# Min-max scale the age, trestbps, chol and thalach columns (indices 0, 3, 4, 7).
X[:, [0, 3, 4, 7]] = MinMaxScaler().fit_transform(X[:, [0, 3, 4, 7]])
x_train, x_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

from autosklearn.classification import AutoSklearnClassifier
from sklearn.metrics import accuracy_score

model_auto = AutoSklearnClassifier(time_left_for_this_task=120, n_jobs=3, include_preprocessors=["no_preprocessing"], seed=3)
model_auto.fit(x_train, y_train)
y_pred = model_auto.predict(x_test)
accuracy_score(y_test, y_pred)   # 0.8021978021978022

model = RandomForestClassifier(n_estimators=500)
model.fit(x_train, y_train)
y_pred_rf = model.predict(x_test)
accuracy_score(y_test, y_pred_rf)   # 0.8051648351648352
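For reference, the same column scaling can also be expressed with a ColumnTransformer inside a Pipeline, so the scaler is fitted on the training split only. This is only a sketch of an alternative, not the code I ran above, and it assumes x_train and x_test were split from the unscaled X:

from sklearn.compose import ColumnTransformer
from sklearn.pipeline import Pipeline

# Scale only columns 0, 3, 4, 7 (age, trestbps, chol, thalach); pass the rest through.
preprocess = ColumnTransformer(
    [("minmax", MinMaxScaler(), [0, 3, 4, 7])],
    remainder="passthrough",
)
pipe = Pipeline([
    ("scale", preprocess),
    ("rf", RandomForestClassifier(n_estimators=500, random_state=0)),
])
pipe.fit(x_train, y_train)
accuracy_score(y_test, pipe.predict(x_test))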
My personal web site provides an AutoML service. I uploaded this data set to the service and it gets a better score than my code: http://simple-code.cn/