Feature Preprocessing on Kaggle
Having just started with data science, I wanted to play around on Kaggle, so I tried the beginner Titanic and House Price projects. A basic baseline is easy enough to write, but getting the details right, and reaching a presentable score, still requires theoretical analysis.
This post covers the principles and techniques behind Kaggle competitions. Everything comes from a Coursera course; since the course is in English and fairly easy to follow, English is used directly here.
Features: numeric, categorical, ordinal, datetime, coordinate, text
Numeric features
All models can be divided into tree-based models and non-tree-based models.
Scaling
For example, if we apply the KNN algorithm and calculate the distance between an instance and an object, the feature with the larger scale obviously dominates the distance.
Tree-based models do not depend on scaling.
Non-tree-based models hugely depend on scaling.
How to do it
sklearn:
- To [0,1]
sklearn.preprocessing.MinMaxScaler
X = (X - X.min()) / (X.max() - X.min())
- To mean=0, std=1
sklearn.preprocessing.StandardScaler
X = (X - X.mean()) / X.std()
If we want to use KNN, we can go one step further: recall that the bigger a feature's scale is, the more important it will be for KNN. So we can tune the scaling parameters to boost the features which seem more important to us and see if this helps.
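A minimal sketch of both scalers on a toy array:

import numpy as np
from sklearn.preprocessing import MinMaxScaler, StandardScaler

X = np.array([[1.0, 200.0],
              [2.0, 300.0],
              [3.0, 400.0]])

X_minmax = MinMaxScaler().fit_transform(X)  # each column squeezed into [0, 1]
X_std = StandardScaler().fit_transform(X)   # each column to mean=0, std=1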
Outliers
Outliers make the fitted model deviate towards them, like the red line in the course's illustration.
We can clip feature values between two chosen bounds, a lower bound and an upper bound (e.g. the 1st and 99th percentiles); this is known as winsorization.
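A sketch of percentile-based clipping (the 1st/99th percentile bounds are just an example choice):

import numpy as np

x = np.array([-100.0, 0.0, 1.0, 2.0, 3.0, 1e5])
lower, upper = np.percentile(x, [1, 99])  # chosen lower/upper bounds
x_clipped = np.clip(x, lower, upper)      # outliers pulled to the bounds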
- Rank Transformation
If we have outliers, rank transformation behaves better than scaling: it moves the outliers closer to the other objects.
Linear models, KNN, and neural networks can benefit from this method.
rank([-100, 0, 1e5]) == [0, 1, 2]
rank([1000, 1, 10]) == [2, 0, 1]
scipy:
scipy.stats.rankdata
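Note that scipy's rankdata returns 1-based ranks, while the examples above are 0-based:

from scipy.stats import rankdata

rankdata([-100, 0, 1e5])  # array([1., 2., 3.])
rankdata([1000, 1, 10])   # array([3., 1., 2.])

To keep train and test consistent, apply the rank transformation to the concatenated data, or store the mapping learned on the train set.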
Other methods
- Log transform: np.log(1 + x)
- Raising to the power < 1: np.sqrt(x + 2/3)
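Both transformations in numpy, as a quick sketch:

import numpy as np

x = np.array([0.0, 1.0, 10.0, 1000.0])
x_log = np.log1p(x)         # same as np.log(1 + x); pulls large values together
x_pow = np.sqrt(x + 2 / 3)  # raising to a power < 1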
Feature Generation
Depends on
a. Prior knowledge
b. Exploratory data analysis
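For example, with a hypothetical housing table, prior knowledge suggests a price-per-square-meter feature:

import pandas as pd

df = pd.DataFrame({'price': [300000.0, 450000.0], 'area_m2': [60.0, 90.0]})
df['price_per_m2'] = df['price'] / df['area_m2']  # domain-knowledge feature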
Ordinal features
Examples:
- Ticket class: 1,2,3
- Driver’s license: A, B, C, D
- Education: kindergarten, school, undergraduate, bachelor, master, doctoral
Processing
1.Label Encoding
- Alphabetical (sorted)
[S,C,Q] -> [2, 0, 1]
sklearn.preprocessing.LabelEncoder
- Order of appearance
[S,C,Q] -> [0, 1, 2]
pandas.factorize
Label encoding works fine with tree-based methods, because trees can split on the feature and extract most of the useful values in categories on their own. Non-tree-based models, on the other hand, usually can't use this feature effectively.
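A quick comparison of the two encodings on a toy column (note that both libraries produce 0-based codes):

import pandas as pd
from sklearn.preprocessing import LabelEncoder

s = pd.Series(['S', 'C', 'Q', 'S'])
alphabetical = LabelEncoder().fit_transform(s)  # [2, 0, 1, 2]: C=0, Q=1, S=2
appearance, uniques = pd.factorize(s)           # [0, 1, 2, 0]: S=0, C=1, Q=2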
2.Frequency Encoding
[S,C,Q] -> [0.5, 0.3, 0.2]
encoding = titanic.groupby('Embarked').size()
encoding = encoding / len(titanic)
titanic['enc'] = titanic.Embarked.map(encoding)
If several categories have exactly the same frequency, their encodings collide and the categories become indistinguishable; ranking the frequencies with scipy.stats.rankdata is one workaround.
Frequency encoding is also helpful for linear models: if the frequency of a category is correlated with the target value, a linear model will utilize this dependency.
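A self-contained version of the encoding above, using value_counts on a toy column:

import pandas as pd

df = pd.DataFrame({'Embarked': ['S', 'S', 'S', 'C', 'C', 'Q']})
freq = df['Embarked'].value_counts(normalize=True)  # S: 0.5, C: 0.33, Q: 0.17
df['Embarked_freq'] = df['Embarked'].map(freq)
# Categories with identical frequencies collapse to one value;
# ranking the frequencies (scipy.stats.rankdata) is one way around that.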
3.One-hot Encoding
pandas.get_dummies
It gives each category of a feature its own new column and is often used for non-tree-based models.
One-hot encoding can slow down tree-based models, so we introduce sparse matrices; most libraries can work with sparse matrices directly, namely XGBoost and LightGBM.
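A sketch with pandas, including the sparse-backed variant:

import pandas as pd

df = pd.DataFrame({'Embarked': ['S', 'C', 'Q']})
dense = pd.get_dummies(df, columns=['Embarked'])                # one 0/1 column per category
sparse = pd.get_dummies(df, columns=['Embarked'], sparse=True)  # sparse-backed columns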
Feature generation
Interactions of categorical features can help linear models and KNN.
They are built by concatenating strings, as in the sketch below.
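For example, with hypothetical Titanic-style columns, concatenate two categorical features and one-hot encode the result:

import pandas as pd

df = pd.DataFrame({'pclass': [1, 2, 3], 'sex': ['male', 'female', 'male']})
df['pclass_sex'] = df['pclass'].astype(str) + '_' + df['sex']  # e.g. '1_male'
interaction = pd.get_dummies(df['pclass_sex'])                 # one column per pair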
Datetime and Coordinates
Date and time
1.Periodicity
2.Time since
a. Row-independent moment
For example: since 00:00:00 UTC, 1 January 1970;
b. Row-dependent important moment
Number of days left until the next holiday / time passed since the last holiday.
3.Difference between dates
We can add a date_diff feature which indicates the number of days between these events.
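A sketch of all three ideas with pandas (the column names are made up):

import pandas as pd

df = pd.DataFrame({'date': pd.to_datetime(['2014-01-05', '2014-03-08'])})

# 1. Periodicity
df['weekday'] = df['date'].dt.dayofweek
df['month'] = df['date'].dt.month

# 2a. Time since a row-independent moment (the Unix epoch)
df['days_since_epoch'] = (df['date'] - pd.Timestamp('1970-01-01')).dt.days

# 3. Difference between two date columns
df['last_purchase'] = pd.to_datetime(['2014-01-01', '2014-03-01'])
df['date_diff'] = (df['date'] - df['last_purchase']).dt.days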
Coordinates
1.Interesting places from train/test data or additional data
Generate the distance from the instance to interesting places, such as a flat or an old building (anything that is meaningful).
2.Aggregate statistics
For example, the price of the surrounding buildings.
3.Rotation
Sometimes rotating the coordinates makes it easier for the model to classify the instances precisely, since tree splits are axis-parallel.
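A sketch of adding rotated coordinates as extra features:

import numpy as np

def rotate(x, y, degrees):
    # Rotate points around the origin by the given angle
    theta = np.radians(degrees)
    return (x * np.cos(theta) + y * np.sin(theta),
            -x * np.sin(theta) + y * np.cos(theta))

x = np.array([1.0, 2.0])
y = np.array([3.0, 4.0])
x45, y45 = rotate(x, y, 45)  # new features for a 45-degree rotation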
Missing data
Hidden NaN, numeric
When drawing a histogram, we may see a spike at a value such as -1, outside the feature's natural range.
It is then obvious that -1 is a hidden NaN: the value has no meaning for this feature.
Fillna approaches
1.-999, -1, etc. (outside the feature range)
It is useful in that it gives trees the possibility to take missing values into a separate category. The downside is that the performance of linear models and neural networks can suffer.
2.Mean, median
This is usually beneficial for simple linear models and neural networks. But again, for trees it becomes harder to select the objects which had missing values in the first place.
3.Reconstruct:
- Isnull: add a binary feature indicating missingness
- Prediction: predict the missing values from the other features
- Replace the missing data with the mean or median grouped by another feature.
Sometimes this can get screwed up, for example if a placeholder like -999 was already filled in before the group statistics were computed. The way to handle it is to ignore missing values while calculating the mean for each category (see the sketch after this list).
- Treating values which are not present in the train data
Just generate a new feature indicating the number of occurrences in the data (frequency).
- XGBoost can handle NaN natively
4.Remove rows with missing values
This is possible, but it can lead to the loss of important samples and a decrease in quality.
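A combined sketch of the approaches above on a toy DataFrame (the column names are made up):

import numpy as np
import pandas as pd

df = pd.DataFrame({'age': [22.0, np.nan, 35.0, np.nan],
                   'pclass': [1, 1, 2, 2]})

df['age_isnull'] = df['age'].isnull()                # binary missingness indicator
df['age_999'] = df['age'].fillna(-999)               # outside-range fill (tree-friendly)
df['age_mean'] = df['age'].fillna(df['age'].mean())  # mean fill (linear/NN-friendly)
# Group-wise mean fill; NaNs are ignored when each group's mean is computed
df['age_grp'] = df['age'].fillna(df.groupby('pclass')['age'].transform('mean'))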
Text
Bag of words
Text preprocessing
1.Lowercase
2.Lemmatization and Stemming
3.Stopwords
Examples:
1.Articles or prepositions
2.Very common words
sklearn.feature_extraction.text.CountVectorizer:
max_df
- max_df : float in range [0.0, 1.0] or int, default=1.0
When building the vocabulary, ignore terms that have a document frequency strictly higher than the given threshold (corpus-specific stop words). If float, the parameter represents a proportion of documents; if integer, absolute counts. This parameter is ignored if vocabulary is not None.
CountVectorizer
The number of times a term occurs in a given document
sklearn.feature_extraction.text.CountVectorizer
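A minimal usage sketch:

from sklearn.feature_extraction.text import CountVectorizer

docs = ['the cat sat', 'the cat sat on the mat']
vec = CountVectorizer()
counts = vec.fit_transform(docs)  # sparse matrix of per-document term counts
print(vec.vocabulary_)            # term -> column index mapping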
TF-iDF
TF-iDF re-weights the count features into floating point values suitable for use by a classifier:
Term frequency:
tf = 1 / x.sum(axis=1)[:, None]
x = x * tf

Inverse document frequency:
idf = np.log(x.shape[0] / (x > 0).sum(0))
x = x * idf
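sklearn wraps this up (with extra smoothing and normalization, so the numbers differ slightly from the manual version above):

from sklearn.feature_extraction.text import TfidfVectorizer

docs = ['the cat sat', 'the cat sat on the mat']
tfidf = TfidfVectorizer().fit_transform(docs)  # re-weighted, L2-normalized rows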
N-gram
sklearn.feature_extraction.text.CountVectorizer:
ngram_range, analyzer
- ngram_range : tuple (min_n, max_n)
The lower and upper boundary of the range of n-values for different n-grams to be extracted. All values of n such that min_n <= n <= max_n will be used.
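Word and character n-grams, as a sketch:

from sklearn.feature_extraction.text import CountVectorizer

docs = ['the cat sat on the mat']
words = CountVectorizer(ngram_range=(1, 2)).fit_transform(docs)                  # unigrams + bigrams
chars = CountVectorizer(ngram_range=(3, 3), analyzer='char').fit_transform(docs) # char trigrams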
Embeddings(~word2vec)
It converts each word to a vector in some sophisticated space, which usually has several hundred dimensions.
a. Relatively small vectors
b. Values in the vector can be interpreted only in some cases
c. Words with similar meaning often have similar embeddings
Example: vector('king') - vector('man') + vector('woman') ≈ vector('queen')
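A minimal training sketch with gensim (gensim is an assumption here, not named in the notes; version >= 4 uses vector_size, older versions use size):

from gensim.models import Word2Vec

sentences = [['the', 'cat', 'sat'], ['the', 'dog', 'sat']]
model = Word2Vec(sentences, vector_size=100, min_count=1)
vec = model.wv['cat']                   # a 100-dimensional vector for 'cat'
similar = model.wv.most_similar('cat')  # words with similar embeddings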