1. Feature selection can reduce overfitting: a code example

 This example comes from Chapter 4 of Machine Learning in Action.

#coding=utf-8

'''
We use KNN to show that feature selection may reduce overfitting
'''
from sklearn.base import clone
from itertools import combinations
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score


class SBS():
    '''Sequential Backward Selection: greedily drop one feature at a time
    until only k_features remain, keeping at each step the subset that
    scores best on a held-out validation split.'''
    def __init__(self, estimator, k_features, scoring=accuracy_score, test_size=0.25, random_state=1):
        self.scoring = scoring
        self.estimator = clone(estimator)
        self.k_features = k_features
        self.test_size = test_size
        self.random_state = random_state

    def fit(self, X, y):
        X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=self.test_size, random_state=self.random_state)
        dim = X_train.shape[1]
        self.indices_ = tuple(range(dim))
        self.subsets_ = [self.indices_]
        score = self._calc_score(X_train, y_train, X_test, y_test, self.indices_)
        self.scores_ = [score]
        while dim > self.k_features:
            scores = []
            subsets = []
            # Try every subset obtained by removing exactly one feature
            for p in combinations(self.indices_, r=dim - 1):
                score = self._calc_score(X_train, y_train, X_test, y_test, p)
                scores.append(score)
                subsets.append(p)
            best = np.argmax(scores)
            self.indices_ = subsets[best]
            self.subsets_.append(self.indices_)
            dim -= 1
            self.scores_.append(scores[best])
        self.k_score_ = self.scores_[-1]
        return self

    def transform(self, X):
        return X[:, self.indices_]

    def _calc_score(self, X_train, y_train, X_test, y_test, indices):
        self.estimator.fit(X_train[:, indices], y_train)
        y_pred = self.estimator.predict(X_test[:, indices])
        score = self.scoring(y_test, y_pred)
        return score
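# A quick cost check (my note, not from the book): at each step the greedy loop
# fits the estimator once per candidate subset, i.e. C(d, d-1) = d fits at
# dimension d. Going from 13 features down to 1 therefore costs
# 13 + 12 + ... + 2 = 90 fits (plus 1 for the initial full set), versus
# 2**13 - 1 = 8191 subsets for an exhaustive search.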
import pandas as pd

df_wine = pd.read_csv('https://archive.ics.uci.edu/ml/machine-learning-databases/wine/wine.data', header=None)
df_wine.columns = ['Class label', 'Alcohol',
'Malic acid',
'Ash',
'Alcalinity of ash',
'Magnesium',
'Total phenols',
'Flavanoids',
'Nonflavanoid phenols',
'Proanthocyanins',
'Color intensity',
'Hue',
'OD280/OD315 of diluted wines',
'Proline']

X, y = df_wine.iloc[:, 1:].values, df_wine.iloc[:, 0].values
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

from sklearn.preprocessing import StandardScaler
stdsc = StandardScaler()
X_train_std = stdsc.fit_transform(X_train)
X_test_std = stdsc.transform(X_test)

from sklearn.neighbors import KNeighborsClassifier
import matplotlib.pyplot as plt
knn = KNeighborsClassifier(n_neighbors=2)
sbs = SBS(knn, k_features=1)
sbs.fit(X_train_std, y_train)

k_feat = [len(k) for k in sbs.subsets_]
plt.figure(figsize=(8, 10))  # figsize must be a tuple
plt.subplot(2,1,1)
plt.plot(k_feat, sbs.scores_, marker='o')
plt.ylim([0.7, 1.1])
plt.ylabel('Accuracy')
plt.xlabel('Number of features')
plt.grid()
#plt.show()

# Let's see which five features yield such good performance on the validation dataset
# The ninth element of subsets_ (index 8) is the subset that kept 5 of the 13 features
k5 = list(sbs.subsets_[8])
print(df_wine.columns[1:][k5])
'''
Index(['Alcohol', 'Malic acid', 'Alcalinity of ash', 'Hue', 'Proline'], dtype='object')
'''
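# An aside, not part of the original example: scikit-learn (>= 0.24) ships a
# built-in greedy backward selector that can replace the hand-rolled SBS class.
# A minimal sketch; unlike SBS it scores candidate subsets with cross-validation
# rather than a single held-out split, so the chosen features may differ.
from sklearn.feature_selection import SequentialFeatureSelector
sfs = SequentialFeatureSelector(knn, n_features_to_select=5, direction='backward', cv=5)
sfs.fit(X_train_std, y_train)
print("SequentialFeatureSelector picked indices:", sfs.get_support(indices=True))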
# Let's evaluate the performance of the KNN classifier on the original test set
knn.fit(X_train_std, y_train)
print("Training Accuracy:", knn.score(X_train_std, y_train))
print("Test Accuracy:", knn.score(X_test_std, y_test))
'''
Training Accuracy: 0.9838709677419355
Test Accuracy: 0.9444444444444444
'''
# We see a slight degree of overfitting when all 13 features are used for training

# Now retrain on only the 5 features selected by SBS
knn.fit(X_train_std[:, k5], y_train)
print("Training Accuracy:", knn.score(X_train_std[:, k5], y_train))
print("Test Accuracy:", knn.score(X_test_std[:, k5], y_test))
'''
Training Accuracy: 0.9596774193548387
Test Accuracy: 0.9629629629629629
'''
# We reduced overfitting and the test accuracy improved.

# Use a random forest to rank feature importances
from sklearn.ensemble import RandomForestClassifier
feat_labels = df_wine.columns[1:]
forest = RandomForestClassifier(n_estimators=10000, random_state=0, n_jobs=-1)
forest.fit(X_train, y_train)
importances = forest.feature_importances_
indices = np.argsort(importances)[::-1]

# Print features ranked by importance
for f in range(X_train.shape[1]):
print("%2d) %-*s %f" % (f+1, 30, feat_labels[indices[f]], importances[indices[f]])) plt.subplot(2,1,2)
plt.title("Feature Importances")
plt.bar(range(X_train.shape[1]), importances[indices], color='lightblue', align='center')
plt.xticks(range(X_train.shape[1]), feat_labels[indices], rotation=90)
plt.xlim([-1, X_train.shape[1]])
plt.tight_layout()
plt.show()
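# An aside, not in the original example: the importances above can drive
# sklearn.feature_selection.SelectFromModel to keep only features whose
# importance clears a threshold. A minimal sketch reusing the already-fitted
# forest; the 0.1 threshold is an illustrative choice, not from the source.
from sklearn.feature_selection import SelectFromModel
sfm = SelectFromModel(forest, threshold=0.1, prefit=True)
X_train_sel = sfm.transform(X_train)
print("Features with importance >= 0.1:", X_train_sel.shape[1])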
