
This post corresponds to logistic.py.

About amazonaccess (the Kaggle Amazon Employee Access Challenge):

Given an onboarding employee's profile (features such as the employee's role code and the code of the family the role belongs to), predict whether the employee should have permission to access a given resource.

The key ideas in logistic.py (Python):

1. Derive new features by combining several existing features

For example, combining MGR_ID and ROLE_FAMILY gives a new feature: hash((85475, 290919)) = 1071656665 (a quick sketch follows).
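
A minimal sketch of the trick: the tuple of values is hashed into a single integer that serves as the new categorical value. Note that the exact integer depends on the interpreter and version; 1071656665 is what the author's interpreter produced.

    # Combine two categorical values into one new categorical value
    mgr_id, role_family = 85475, 290919
    print hash((mgr_id, role_family))  # value depends on the Python version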

2. Greedy feature selection

i. First pick the single candidate feature that performs best on the training set, add it to good_features, and remove it from the candidate pool.

ii. Pick the candidate feature that, together with everything already in good_features, gives the best score on the training set; add it to good_features and remove it from the candidates.

iii. Keep going until the score on the training set stops improving. (The full loop appears in step 5 of the code walkthrough below.)

3. One Hot Encoding

For example, encoding the discrete data [23 33 33 44]:

i. First relabel it to [0 1 1 2].

ii. Then encode each label: 0 becomes 0 0 1 (for 23), 1 becomes 0 1 0 (for 33), 2 becomes 1 0 0 (for 44).

This way, when a linear model is used at the end, every label of a discrete feature gets its own weight.
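
A runnable sketch of the two steps (relabel, then one-hot), using scikit-learn's LabelEncoder and a NumPy identity matrix. Note the column order here puts label 0 in the first column, the reverse of the hand-worked example above; for a linear model the two layouts are equivalent.

    import numpy as np
    from sklearn import preprocessing

    raw = np.array([23, 33, 33, 44])
    labels = preprocessing.LabelEncoder().fit_transform(raw)  # [0 1 1 2]
    onehot = np.eye(3, dtype=int)[labels]  # one indicator column per label
    print onehot
    # [[1 0 0]
    #  [0 1 0]
    #  [0 1 0]
    #  [0 0 1]]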

Code walkthrough:

1. Read the data and drop the ROLE_CODE attribute

    learner = 'log'
    print "Reading dataset..."
    train_data = pd.read_csv('train.csv')
    test_data = pd.read_csv('test.csv')
    submit = learner + str(SEED) + '.csv'
    # Drop the ROLE_CODE feature; train and test must go through the same
    # transformations, so stack them into a single array
    all_data = np.vstack((train_data.ix[:,1:-1], test_data.ix[:,1:-1]))
    num_train = np.shape(train_data)[0]
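
The excerpt assumes pd, np, SEED, and (later) model are defined earlier in logistic.py. A hedged sketch of that missing setup, assuming the 'log' learner maps to scikit-learn's LogisticRegression; the SEED value here is a placeholder, the script sets its own:

    import numpy as np
    import pandas as pd
    from sklearn import linear_model, preprocessing

    SEED = 42  # placeholder; logistic.py defines its own seed
    model = linear_model.LogisticRegression()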

2. Relabel the data

    # Transform data
    print "Transforming data..."
    # Relabel the variable values to smallest possible so that I can use bincount
    # on them later.
    relabler = preprocessing.LabelEncoder()
    for col in range(len(all_data[0,:])):
        relabler.fit(all_data[:, col])
        all_data[:, col] = relabler.transform(all_data[:, col])

3. Combine features into new ones. Pairs and triples of features are combined here, producing (28-2) and (56-12) new features respectively, which are then merged with the original features (a standalone count check follows below).

When building the combinations, the (ROLE_TITLE, ROLE_FAMILY) and (ROLE_ROLLUP_1, ROLE_ROLLUP_2) pairs are excluded (each excluded pair also rules out the 6 triples that contain it, hence the 12 dropped triples).

Because many labels of the combined features cover only 1 or 2 rows, those rows are merged into shared rare-value labels (one label for singletons, one for doubletons).
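
A quick standalone check of those counts, using the column indices that group_data (below) excludes: 5 and 7 are ROLE_TITLE/ROLE_FAMILY, 2 and 3 are ROLE_ROLLUP_1/ROLE_ROLLUP_2.

    from itertools import combinations

    pairs = [p for p in combinations(range(8), 2)
             if p not in ((5, 7), (2, 3))]
    triples = [t for t in combinations(range(8), 3)
               if not ({5, 7} <= set(t) or {2, 3} <= set(t))]
    print len(pairs), len(triples)  # 26 44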

The feature-combination function:

    def group_data(data, degree=3, hash=hash):
        """
        numpy.array -> numpy.array

        Groups all columns of data into all combinations of triples
        """
        new_data = []
        m, n = data.shape
        for indicies in combinations(range(n), degree):
            # Skip the ROLE_TITLE / ROLE_FAMILY combination
            if 5 in indicies and 7 in indicies:
                print "feature Xd"
            # Skip the ROLE_ROLLUP_1 / ROLE_ROLLUP_2 combination
            elif 2 in indicies and 3 in indicies:
                print "feature Xd"
            else:
                new_data.append([hash(tuple(v)) for v in data[:, indicies]])
        return array(new_data).T

Merging labels that cover only one or two rows:

    dp = group_data(all_data, degree=2)
    for col in range(len(dp[0,:])):
        relabler.fit(dp[:, col])
        dp[:, col] = relabler.transform(dp[:, col])
        uniques = len(set(dp[:,col]))
        maximum = max(dp[:,col])
        print col
        if maximum < 65534:
            count_map = np.bincount((dp[:, col]).astype('uint16'))
            for n,i in enumerate(dp[:, col]):
                # Labels covering a single row are merged into one shared label
                if count_map[i] <= 1:
                    dp[n, col] = uniques
                # Labels covering exactly two rows get a second shared label
                elif count_map[i] == 2:
                    dp[n, col] = uniques+1
        else:
            for n,i in enumerate(dp[:, col]):
                if (dp[:, col] == i).sum() <= 1:
                    dp[n, col] = uniques
                elif (dp[:, col] == i).sum() == 2:
                    dp[n, col] = uniques+1
        print uniques # unique values
        uniques = len(set(dp[:,col]))
        print uniques
        relabler.fit(dp[:, col])
        dp[:, col] = relabler.transform(dp[:, col])
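
The same idea as a small vectorized sketch on a toy column: labels 1 and 3 occur once and collapse into a shared "singleton" label, while label 0 occurs twice and gets the shared "doubleton" label.

    import numpy as np

    col = np.array([0, 0, 1, 2, 2, 2, 3])
    uniques = len(set(col))                 # 4 distinct labels
    counts = np.bincount(col)               # [2, 1, 3, 1]
    merged = col.copy()
    merged[counts[col] <= 1] = uniques      # singletons -> shared label 4
    merged[counts[col] == 2] = uniques + 1  # doubletons -> shared label 5
    print merged                            # [5 5 4 2 2 2 4]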

Merging the new features with the originals (dt is the degree-3 analogue of dp, built with group_data(all_data, degree=3)):

    # Collect the training features together
    y = array(train_data.ACTION)
    X = all_data[:num_train]
    X_2 = dp[:num_train]
    X_3 = dt[:num_train]

    # Collect the testing features together
    X_test = all_data[num_train:]
    X_test_2 = dp[num_train:]
    X_test_3 = dt[num_train:]

    X_train_all = np.hstack((X, X_2, X_3))
    X_test_all = np.hstack((X_test, X_test_2, X_test_3))

4. One Hot Encoding

    def OneHotEncoder(data, keymap=None):
        """
        OneHotEncoder takes data matrix with categorical columns and
        converts it to a sparse binary matrix.

        Returns sparse binary matrix and keymap mapping categories to indicies.
        If a keymap is supplied on input it will be used instead of creating one
        and any categories appearing in the data that are not in the keymap are
        ignored
        """
        if keymap is None:
            keymap = []
            for col in data.T:
                uniques = set(list(col))
                keymap.append(dict((key, i) for i, key in enumerate(uniques)))
        total_pts = data.shape[0]
        outdat = []
        for i, col in enumerate(data.T):
            km = keymap[i]
            num_labels = len(km)
            spmat = sparse.lil_matrix((total_pts, num_labels))
            for j, val in enumerate(col):
                if val in km:
                    spmat[j, km[val]] = 1
            outdat.append(spmat)
        outdat = sparse.hstack(outdat).tocsr()
        return outdat, keymap

    # Xts holds one hot encodings for each individual feature in memory
    # speeding up feature selection
    Xts = [OneHotEncoder(X_train_all[:,[i]])[0] for i in range(num_features)]
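
num_features is not defined in the excerpt; it is presumably X_train_all.shape[1]. As a quick illustration, running OneHotEncoder on the toy column from the intro (the column order follows Python's set iteration order, so it may differ from the hand-worked example):

    mat, km = OneHotEncoder(np.array([[23], [33], [33], [44]]))
    print mat.todense()  # a 4x3 sparse indicator matrix, one column per label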

5. Greedy feature selection

    print "Performing greedy feature selection..."
    score_hist = []
    N = 10
    good_features = set([])
    # Greedy feature selection loop
    while len(score_hist) < 2 or score_hist[-1][0] > score_hist[-2][0]:
        scores = []
        for f in range(len(Xts)):
            if f not in good_features:
                feats = list(good_features) + [f]
                Xt = sparse.hstack([Xts[j] for j in feats]).tocsr()
                score = cv_loop(Xt, y, model, N)
                scores.append((score, f))
                print "Feature: %i Mean AUC: %f" % (f, score)
        good_features.add(sorted(scores)[-1][1])
        score_hist.append(sorted(scores)[-1])
        print "Current features: %s" % sorted(list(good_features))

    # Remove last added feature from good_features
    good_features.remove(score_hist[-1][1])
    good_features = sorted(list(good_features))
    print "Selected features %s" % good_features
    gf = open("feats" + submit, 'w')
    print >>gf, good_features
    gf.close()
    print len(good_features), " features"
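
cv_loop is defined elsewhere in logistic.py and not shown in this post. A minimal sketch consistent with how it is called here, computing the mean AUC over N random holdout splits; the helper names assume a modern scikit-learn and reuse the SEED constant:

    from sklearn.model_selection import train_test_split
    from sklearn.metrics import roc_auc_score

    def cv_loop(X, y, model, N):
        # Mean AUC over N random 80/20 train/validation splits
        mean_auc = 0.0
        for i in range(N):
            X_tr, X_cv, y_tr, y_cv = train_test_split(
                X, y, test_size=0.20, random_state=i * SEED)
            model.fit(X_tr, y_tr)
            preds = model.predict_proba(X_cv)[:, 1]
            mean_auc += roc_auc_score(y_cv, preds)
        return mean_auc / N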

6. Select the best hyperparameter by validation; for logistic regression this is the regularization strength C

    print "Performing hyperparameter selection..."
    # Hyperparameter selection loop
    score_hist = []
    Xt = sparse.hstack([Xts[j] for j in good_features]).tocsr()
    if learner == 'NB':
        Cvals = [0.001, 0.003, 0.006, 0.01, 0.02, 0.03, 0.04, 0.05, 0.06, 0.1]
    else:
        Cvals = np.logspace(-4, 4, 15, base=2)  # for logistic
    for C in Cvals:
        if learner == 'NB':
            model.alpha = C
        else:
            model.C = C
        score = cv_loop(Xt, y, model, N)
        score_hist.append((score, C))
        print "C: %f Mean AUC: %f" % (C, score)
    bestC = sorted(score_hist)[-1][1]
    print "Best C value: %f" % (bestC)

7. Prediction

    print "Performing One Hot Encoding on entire dataset..."
    Xt = np.vstack((X_train_all[:,good_features], X_test_all[:,good_features]))
    Xt, keymap = OneHotEncoder(Xt)
    X_train = Xt[:num_train]
    X_test = Xt[num_train:]

    if learner == 'NB':
        model.alpha = bestC
    else:
        model.C = bestC

    print "Training full model..."
    print "Making prediction and saving results..."
    model.fit(X_train, y)
    preds = model.predict_proba(X_test)[:,1]
    create_test_submission(submit, preds)
    preds = model.predict_proba(X_train)[:,1]
    create_test_submission('Train'+submit, preds)
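
create_test_submission is also defined elsewhere in the script. A hypothetical sketch, assuming the usual Kaggle submission layout of an id column followed by the predicted ACTION probability:

    def create_test_submission(filename, predictions):
        # Hypothetical helper: one "id,ACTION" row per prediction
        with open(filename, 'w') as f:
            f.write('id,ACTION\n')
            for i, pred in enumerate(predictions):
                f.write('%d,%f\n' % (i + 1, pred))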
