Facial Keypoints Detection (Kaggle Competition Series)
3.2 Facial keypoints detection
- Author: Stu. Rui
- QQ: 1026163725
- Original post: http://blog.csdn.net/i_love_home/article/details/51051888
The main task of this competition is to detect the positions of facial keypoints.
Problem statement
In this problem we are asked to compute the positions of the facial keypoints, i.e. each keypoint's coordinates expressed as a fraction of the image size.
The problem therefore boils down to fitting values in the [0, 1] range, and it is also a multi-output regression problem.
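Concretely, pixel coordinates in a 96x96 image can be rescaled to a normalized range and mapped back as in the sketch below. The source code at the end of this post uses the [-1, 1] convention via (y - 48) / 48; dividing by 96 instead would give the [0, 1] convention described above. The sample values here are made up for illustration.

```python
import numpy as np

# Hypothetical keypoint coordinates (in pixels) for a 96x96 image.
pixel_coords = np.array([66.0, 39.0, 30.3, 36.4], dtype=np.float32)

# Normalize to [-1, 1], the convention used later in this post.
norm_coords = (pixel_coords - 48.0) / 48.0

# Map model outputs back to pixel space before building the submission.
recovered = norm_coords * 48.0 + 48.0
print(norm_coords)
print(recovered)
```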
Each image comes with the fractional positions of its 30 labels; the labels are listed below:
1 | 2 | 3 |
---|---|---|
left_eye_center_x | left_eye_center_y | right_eye_center_x |
right_eye_center_y | left_eye_inner_corner_x | left_eye_inner_corner_y |
left_eye_outer_corner_x | left_eye_outer_corner_y | right_eye_inner_corner_x |
right_eye_inner_corner_y | right_eye_outer_corner_x | right_eye_outer_corner_y |
left_eyebrow_inner_end_x | left_eyebrow_inner_end_y | left_eyebrow_outer_end_x |
left_eyebrow_outer_end_y | right_eyebrow_inner_end_x | right_eyebrow_inner_end_y |
right_eyebrow_outer_end_x | right_eyebrow_outer_end_y | nose_tip_x |
nose_tip_y | mouth_left_corner_x | mouth_left_corner_y |
mouth_right_corner_x | mouth_right_corner_y | mouth_center_top_lip_x |
mouth_center_top_lip_y | mouth_center_bottom_lip_x | mouth_center_bottom_lip_y |
Among them, 2140 images have all 30 labels present; each image is 96*96 pixels.
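This count can be checked directly with pandas, assuming the competition's training.csv has been downloaded into the working directory; a minimal sketch:

```python
import pandas as pd

df = pd.read_csv('training.csv')   # 7049 rows: 30 keypoint columns + the Image column
complete = df.dropna()             # keep only rows where all 30 labels are present
print(len(df), len(complete))      # expected: 7049 2140
```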
Solution
- The solution proceeds as follows (a compact sketch comes right after this list):
- Step 1. Choose a regressor (SVR or KernelRidge) and a suitable kernel.
- Step 2. Select the hyperparameters with cross-validation experiments, enumerating candidate values.
- Step 3. Once the hyperparameters are fixed, train the regressor on the full training set.
- Step 4. Predict on the test set and write out the results.
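A compact sketch of steps 1, 3 and 4, using random stand-in data so that it runs on its own (in the real pipeline train_X, train_y and test_X are built from the CSV files in the source code at the end of this post); step 2, the hyperparameter enumeration, is illustrated in the Notes section below.

```python
import numpy as np
from sklearn.kernel_ridge import KernelRidge

# Stand-in data: flattened 96x96 images scaled to [0, 1] and
# 30 keypoint coordinates normalized to [-1, 1].
rng = np.random.RandomState(0)
train_X = rng.rand(200, 96 * 96).astype(np.float32)
train_y = (rng.rand(200, 30) * 2 - 1).astype(np.float32)
test_X = rng.rand(50, 96 * 96).astype(np.float32)

# Step 1: an RBF-kernel ridge regressor (these hyperparameter values are the
# ones quoted later in the post, used here only as placeholders).
clf = KernelRidge(kernel='rbf', gamma=2e-4, alpha=1e-2)

# Step 3: train on the full training set; KernelRidge fits all 30 outputs at once.
clf.fit(train_X, train_y)

# Step 4: predict the test set and map back to pixel coordinates.
pred = clf.predict(test_X) * 48. + 48.
print(pred.shape)  # (50, 30)
```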
Experimental results
- Results:
  - First idea: using 30 fitters, one per label, gives an RMSE of 3.48060.
  - Second idea: using a single fitter for all 30 labels gives an RMSE of 3.43998 (better).
  - Third idea: adding mirrored (symmetrical) training data produced abnormal results, e.g. predicted positions greater than 96. So the fitted output should only cover [0, 96] (or [0, 1] in normalized coordinates); a clipping sketch follows this list.
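Since the leaderboard scores positions in pixel space, one simple safeguard against out-of-range predictions is to clip them to [0, 96] before writing the submission. A minimal sketch; the array below is a made-up stand-in for the model output.

```python
import numpy as np

# Hypothetical predictions in pixel coordinates; two values are out of range.
predicted = np.array([66.0, 39.0, 97.3, -1.2])

# Keep every predicted coordinate inside the 96x96 image.
predicted = np.clip(predicted, 0.0, 96.0)
print(predicted)  # [66. 39. 96.  0.]
```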
Notes
Hyperparameter selection for gamma:
# G_para, train_X, train_y and calbais() are defined in the full script below.
for G in G_para:
    scores = list()
    for i in range(3):
        # use a different split on each repetition
        X1, X2, y1, y2 = train_test_split(train_X, train_y, test_size=0.3, random_state=i)
        clf = KernelRidge(kernel='rbf', gamma=G, alpha=1e-2)
        pred = clf.fit(X1, y1).predict(X2)
        sco = calbais(pred, y2)
        scores.append(sco)
    print('G:', G, 'Score:', scores)
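The same enumeration can also be written with scikit-learn's built-in grid search; a sketch under the assumption that train_X and train_y are the normalized arrays built by preTrain() below (toy data is used here only so the snippet runs on its own).

```python
import numpy as np
from sklearn.kernel_ridge import KernelRidge
from sklearn.model_selection import GridSearchCV  # sklearn.grid_search in older releases

# Toy stand-ins for the real train_X / train_y built by preTrain().
rng = np.random.RandomState(0)
train_X = rng.rand(100, 96 * 96)
train_y = rng.rand(100, 30) * 2 - 1

param_grid = {'gamma': np.logspace(-4, -3, 6),   # rbf kernel gamma
              'alpha': np.logspace(-3, 1, 5)}    # KernelRidge regularization
search = GridSearchCV(KernelRidge(kernel='rbf'), param_grid,
                      scoring='neg_mean_squared_error', cv=3)
search.fit(train_X, train_y)
print(search.best_params_, search.best_score_)
```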
The tuning method and results for the 30 regressors are as follows:
Regressor: KernelRidge(kernel='rbf', gamma=2e-4, alpha=1e-2)
Fitting error on a 0.7:0.3 train/validation split (the index in brackets is the label column):
[0] 0.7792 [10] 0.9744 [20] 1.0985
[1] 0.6383 [11] 0.7451 [21] 1.2300
[2] 0.7714 [12] 0.9513 [22] 1.2636
[3] 0.6482 [13] 0.9299 [23] 1.1784
[4] 0.7355 [14] 1.0870 [24] 1.2469
[5] 0.6005 [15] 1.1898 [25] 1.2440
[6] 0.9636 [16] 0.9012 [26] 0.9444
[7] 0.7063 [17] 0.9462 [27] 1.3718
[8] 0.7214 [18] 1.1349 [28] 0.9961
[9] 0.6089 [19] 1.1669 [29] 1.5076
pandas usage (a small illustration follows this list):
- Per-column counts: DataFrame.count()
- Drop rows with missing values: DataFrame.dropna()
- Parse a space-separated string: Series = Series.apply(lambda im: numpy.fromstring(im, sep=' '))
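A tiny self-contained illustration of these three calls on made-up values:

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({
    'left_eye_center_x': [66.0, np.nan, 65.1],
    'Image': ['238 236 237', '10 20 30', '0 5 9'],
})

print(df.count())   # non-missing entries per column
df = df.dropna()    # drop the row with the missing keypoint
# Convert each space-separated pixel string into a numpy array.
df.Image = df.Image.apply(lambda im: np.fromstring(im, sep=' '))
print(df.Image.iloc[0])   # [238. 236. 237.]
```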
Worth noting:
Mirrored images do not seem to help when this problem is solved with the kernel ridge regressor.
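One possible explanation, not verified here, is that the imageSym helper in the source code below mirrors the x coordinates but leaves every value in its original column, so a flipped left eye is still labelled as the left eye. The sketch below shows a flip that also swaps the left/right columns; the SWAP_PAIRS index pairs are a hypothetical illustration and would have to be derived from the actual order of the 30 label columns.

```python
import numpy as np

# Hypothetical (left, right) column-index pairs; the real pairs must be built
# from the 30 label names listed at the top of this post.
SWAP_PAIRS = [(0, 2), (1, 3)]  # e.g. left_eye_center_x/y <-> right_eye_center_x/y

def mirror(X, y):
    """Horizontally flip 96x96 images and adjust labels normalized to [-1, 1]."""
    nX = X.reshape(-1, 96, 96)[:, :, ::-1].reshape(X.shape[0], -1)
    ny = y.copy()
    ny[:, 0::2] = -ny[:, 0::2]        # mirror the x coordinates
    for a, b in SWAP_PAIRS:           # swap left/right label columns
        ny[:, [a, b]] = ny[:, [b, a]]
    return np.vstack((X, nX)), np.vstack((y, ny))
```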
Conclusion
Replacing the 30 separate fitters with a single multi-output fitter gives a better score.
Source code
import pandas as pd
import numpy as np
import csv as csv
import matplotlib.pyplot as plt
from sklearn.utils import shuffle
from sklearn.svm import SVR
from sklearn.kernel_ridge import KernelRidge
from sklearn.cross_validation import cross_val_score, train_test_split  # sklearn.model_selection in scikit-learn >= 0.18

train_file = 'training.csv'       # training data
test_file = 'test.csv'            # test data, 1783 images
test_type = 'IdLookupTable.csv'   # submission lookup table: RowId, ImageId, FeatureName
pd.set_option('chained_assignment', None)

# Read a csv file and return a pandas DataFrame
def csvFileRead(filename):
    print('Loading', filename)
    df = pd.read_csv(filename, header=0, encoding='GBK')
    print('Loaded')
    # Drop rows with missing labels (training data only)
    if 'train' in filename:
        df = df.dropna()
    ''' Quick look at the data
    print('\nShape of the table: ', df.values.shape)
    print('Per-column counts:\n')
    print(df.count(), '\n')
    '''
    return df

# Save the predictions
def csvSave(filename, ids, predicted):
    with open(filename, 'w') as mycsv:
        mywriter = csv.writer(mycsv)
        mywriter.writerow(['RowId', 'Location'])
        mywriter.writerows(zip(ids, predicted))

# Preprocess the training data
def preTrain():
    print('-----------------Training reading...-----------------')
    df = csvFileRead(train_file)
    print('Image: str -> ndarray')
    df.Image = df.Image.apply(lambda im: np.fromstring(im, sep=' '))
    print('Image converted.\n')
    # The full table (7049 rows x 9216 pixels) caused a MemoryError -> handled by df.dropna()
    X = np.vstack(df.Image.values) / 255.
    X = X.astype(np.float32)
    y = df[df.columns[:-1]].values
    y = (y - 48) / 48.          # normalize pixel coordinates to [-1, 1]
    y = y.astype(np.float32)
    '''
    # Add artificially mirrored images
    print('Adding mirrored images...')
    X, y = imageSym(X, y)
    '''
    X, y = shuffle(X, y, random_state=42)
    # Map each label name to its column index
    yd = dict()
    for i in range(len(df.columns[:-1].values)):
        yd[df.columns[i]] = i
    return X, y, yd

# Preprocess the test data
def preTest():
    print('-----------------Test reading...-----------------')
    df = csvFileRead(test_file)
    print('Image: str -> ndarray')
    df.Image = df.Image.apply(lambda im: np.fromstring(im, sep=' '))
    print('Image converted.\n')
    # Test images
    X = np.vstack(df.Image.values) / 255.
    X = X.astype(np.float32)
    # Items to predict: RowId, ImageId, FeatureName
    df = csvFileRead(test_type)
    RowId = df.RowId.values
    ImageId = df.ImageId.values - 1
    FeatureName = df.FeatureName.values
    return RowId, ImageId, FeatureName, X

# Hand-crafted augmentation: horizontally mirrored images
def imageSym(X, y):
    nX = np.zeros(X.shape)
    ny = np.zeros(y.shape)
    for i in range(X.shape[0]):
        temp = X[i, :].reshape(96, 96)
        temp = temp[:, ::-1]          # flip left-right
        nX[i, :] = temp.reshape(-1)
        ny[i, 0::2] = -y[i, 0::2]     # mirror the x coordinates
        ny[i, 1::2] = y[i, 1::2]
        # Note: the left/right label columns are not swapped here.
    X = np.vstack((X, nX))
    y = np.vstack((y, ny))
    return X, y

# Fit with 30 regressors, one per target column
def modelfit(train_X, train_y, test_X, yd, ImageId, FeatureName):
    # 30 regressors, one per keypoint coordinate
    n_clf = 30
    clfs = [KernelRidge(kernel='rbf', gamma=2e-4, alpha=1e-2) for _ in range(n_clf)]
    print('-----------------Training...------------------')
    # Hyperparameter grids used during tuning
    C_para = np.logspace(-2, 4, 7)    # SVR C
    G_para = np.logspace(-4, -3, 6)   # rbf kernel gamma
    A_para = np.logspace(-3, 1, 5)    # KernelRidge alpha
    # Training
    for i in range(n_clf):
        print('Training', i, 'clf...')
        clfs[i].fit(train_X, train_y[:, i])
    # Print the training error
    predict = np.zeros([train_y.shape[0], 30]).astype(np.float32)
    for i in range(n_clf):
        predict[:, i] = clfs[i].predict(train_X)
    print(calbais(predict, train_y))
    print()
    print('-----------------Predicting...------------------')
    # Prediction
    pred = np.zeros([test_X.shape[0], 30]).astype(np.float32)
    for i in range(n_clf):
        pred[:, i] = clfs[i].predict(test_X)
    predicted = np.zeros(len(FeatureName))
    for i in range(len(FeatureName)):
        if i % 500 == 0:
            print('i =', i)
        imageID = ImageId[i]
        clfID = yd[FeatureName[i]]
        predicted[i] = pred[imageID, clfID]
    predicted = predicted * 48. + 48.   # back to pixel coordinates
    return predicted

# A single regressor fitting all 30 targets at the same time
def modelfitOne(train_X, train_y, test_X, yd, ImageId, FeatureName):
    # Regressor
    clf = KernelRidge(kernel='rbf', gamma=6e-4, alpha=2e-2)
    # Training
    print('-----------------Training...------------------')
    clf.fit(train_X, train_y)
    # Prediction
    print('-----------------Predicting...------------------')
    pred = clf.predict(test_X)
    predicted = np.zeros(len(FeatureName))
    for i in range(len(FeatureName)):
        if i % 500 == 0:
            print('i =', i)
        imageID = ImageId[i]
        clfID = yd[FeatureName[i]]
        predicted[i] = pred[imageID, clfID]
    predicted = predicted * 48. + 48.   # back to pixel coordinates
    return predicted

# Root-mean-square error
def calbais(pred, y2):
    y_diff = pred - y2
    y_diff = y_diff.reshape(-1)
    sco = np.linalg.norm(y_diff) / (len(y2) ** 0.5)
    return sco

# Debug helper for hyperparameter selection on X-y
def testfit(clf, train_X, train_y):
    scores = list()
    for i in range(3):
        # use a different split on each repetition
        X1, X2, y1, y2 = train_test_split(train_X, train_y, test_size=0.3, random_state=i)
        pred = clf.fit(X1, y1).predict(X2)
        sco = calbais(pred, y2)
        scores.append(sco)
    print(scores)

# Plot an image together with its keypoints
def plotface(x, y):
    img = x.reshape(96, 96)
    plt.imshow(img, cmap='gray')
    y = y * 48 + 48
    plt.scatter(y[0::2], y[1::2], marker='x', s=20)
    plt.show()

# Read the training data
df = csvFileRead(train_file)
train_X, train_y, yd = preTrain()
# Read the test data
RowId, ImageId, FeatureName, test_X = preTest()
# 1) Fit the data with 30 regressors
predicted = modelfit(train_X, train_y, test_X, yd, ImageId, FeatureName)
# 2) Fit the data with a single regressor
predicted = modelfitOne(train_X, train_y, test_X, yd, ImageId, FeatureName)
# Save the results
csvSave('KernelRidge.csv', np.linspace(1, len(predicted), len(predicted)).astype(int), predicted)