Recruit Coupon Purchase Winner's Interview: 2nd place, Halla Yang

Recruit Ponpare is Japan's leading joint coupon site, offering huge discounts on everything from hot yoga, to gourmet sushi, to a summer concert bonanza. The Recruit Coupon Purchase Prediction challenge asked the community to predict which coupons a customer would buy in a given period of time using past purchase and browsing behavior.

Halla Yang finished 2nd ahead of 1,191 other data scientists. His experience working with time series data helped him use unsupervised methods effectively in conjunction with gradient boosting. In this blog, Halla walks through his approach and shares key visualizations that helped him better understand and work with the dataset.

The Basics

What was your background prior to entering this challenge?

I've worked almost a decade in finance as a quantitative researcher and portfolio manager. I've also competed in several Kaggle contests, placing first in the Pfizer Volume Prediction Masters competition, sixth in the Merck Molecular Activity Challenge, and ninth in the Diabetic Retinopathy Detection competition.

Halla Yang's profile on Kaggle

Do you have any prior experience or domain knowledge that helped you succeed in this competition?

Predicting prices for thousands of stocks and predicting purchases by thousands of Japanese internet users are loosely similar problems. You can forecast stock returns by looking at time series data such as past returns and cross-sectional data such as industry averages. You can forecast coupon purchases by looking at time series features based on past purchases and cross-sectional features based on peer group averages.

Let's Get Technical

What preprocessing and supervised learning methods did you use?

For each (user, coupon) pair, I calculated the probability that the user would purchase that coupon during the test period using a gradient boosting classifier. I then sorted the coupons for each user by probability and submitted the ten highest-probability coupons.
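The ranking step above can be sketched in a few lines. This is a minimal illustration, not Halla's actual pipeline; the scores dict stands in for the classifier's predicted probabilities.

```python
from collections import defaultdict

def top_coupons(scores, k=10):
    """Given {(user, coupon): probability}, return each user's k
    highest-probability coupons, best first."""
    by_user = defaultdict(list)
    for (user, coupon), p in scores.items():
        by_user[user].append((p, coupon))
    return {u: [c for p, c in sorted(pairs, reverse=True)[:k]]
            for u, pairs in by_user.items()}

# toy probabilities standing in for classifier output
scores = {("u1", "c1"): 0.9, ("u1", "c2"): 0.2, ("u1", "c3"): 0.7,
          ("u2", "c1"): 0.1}
top2 = top_coupons(scores, k=2)  # u1 -> ["c1", "c3"]
```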

To train my classifier, I constructed training data for 24 "train periods" that simulated the test period. Train period 1 is the week from 2012-01-08 through 2012-01-14, and includes all coupons with a DISPFROM date - the date on which they're supposed to be first displayed - in that week. Train period 2 is the week from 2012-01-15 through 2012-01-21, and includes all coupons with a DISPFROM date in that week. Train period 24 is the week from 2012-06-17 through 2012-06-23, and includes all coupons with a DISPFROM date in that week.
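The 24 weekly train periods described above can be generated mechanically; a quick sketch with the standard library confirms the window boundaries:

```python
from datetime import date, timedelta

def train_periods(first_start=date(2012, 1, 8), n=24):
    """Yield (start, end) dates for n consecutive one-week train periods."""
    for i in range(n):
        start = first_start + timedelta(weeks=i)
        yield start, start + timedelta(days=6)

periods = list(train_periods())
# periods[0]  -> (2012-01-08, 2012-01-14)
# periods[23] -> (2012-06-17, 2012-06-23)
```

A coupon belongs to period i if its DISPFROM date falls inside that period's (start, end) range.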

For each of these training periods, I built a set of features for each (user, relevant coupon) pair. This set of features includes user-specific data, e.g. gender, days on site, and age; coupon-specific data, e.g. catalog price, genre, and price rate; as well as user-coupon interaction data, e.g. how often has the user viewed coupons of the same genre. The target for each observation is set to 1 if the user purchased that coupon during the training week, and 0 otherwise.
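The user-coupon interaction features mentioned above can be illustrated with a small sketch. The feature names and the dict-based representation are hypothetical stand-ins for the actual feature matrix:

```python
from collections import Counter

def interaction_features(user_views, coupon):
    """user_views: past coupon dicts the user viewed;
    coupon: the candidate coupon dict.
    Returns interaction features such as views of the same genre."""
    genre_views = Counter(v["genre"] for v in user_views)
    return {
        "views_same_genre": genre_views[coupon["genre"]],
        "total_views": len(user_views),
    }

views = [{"genre": "Delivery"}, {"genre": "Spa"}, {"genre": "Spa"}]
feats = interaction_features(views, {"genre": "Spa"})
# feats == {"views_same_genre": 2, "total_views": 3}
```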

To calibrate the parameters of my model, I first trained a model on the first twenty-three weeks of data, and estimated my log loss and confusion matrix on the twenty-fourth week. I then trained a model on the full twenty-four weeks of data to generate my competition submission.
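The held-out-week log loss used for calibration is the mean negative log-likelihood of the binary purchase labels under the predicted probabilities. A minimal stdlib implementation, with toy labels and predictions:

```python
import math

def log_loss(y_true, p_pred, eps=1e-15):
    """Mean negative log-likelihood of binary labels under predicted probs."""
    total = 0.0
    for y, p in zip(y_true, p_pred):
        p = min(max(p, eps), 1 - eps)  # clip to avoid log(0)
        total += -(y * math.log(p) + (1 - y) * math.log(1 - p))
    return total / len(y_true)

# evaluate toy predictions on a held-out week
holdout_ll = log_loss([0, 0, 1], [0.1, 0.2, 0.8])
```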

The only supervised learning method I used was gradient boosting, as implemented in the excellent xgboost package. I cycled through other algorithms at the start of my analysis to get a feel for their relative performance - logistic regressions, random forests, SVMs, as well as deep neural networks - but found that gradient boosting was the single best classifier for my approach.

What was your most important insight into the data?

First, many test set and training set coupons were viewed prior to their DISPFROM, the date on which they're supposed to be first displayed, and so one could use direct views as a forecasting variable. The violin plot below shows the distribution of first view times relative to DISPFROM. A negative x-value indicates the coupon was viewed prior to its DISPFROM. Over a quarter of coupons are first viewed more than twelve hours before their DISPFROM, and five percent of coupons are first viewed more than ninety hours before their DISPFROM.



Simply counting the number of times a user has viewed a test set coupon is tremendously helpful in forecasting test set purchases. As shown in the left panel of the figure below, users have a 2.5% chance of buying a coupon they've viewed exactly once prior to its DISPFROM, but that probability rises to 32% if they've viewed the coupon four or more times.
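Computing such an empirical purchase rate per view-count bucket is straightforward; a hedged sketch on toy (view_count, bought) records, bucketing at "four or more" as in the figure:

```python
from collections import defaultdict

def purchase_rate_by_views(records):
    """records: (view_count, bought) pairs. Buckets view counts,
    capping at 4 ("four or more"), and returns the empirical
    purchase rate per bucket."""
    bought = defaultdict(int)
    total = defaultdict(int)
    for views, did_buy in records:
        bucket = min(views, 4)
        total[bucket] += 1
        bought[bucket] += did_buy
    return {b: bought[b] / total[b] for b in total}

rates = purchase_rate_by_views([(1, 0), (1, 0), (1, 1), (4, 1), (6, 0)])
# rates == {1: 1/3, 4: 0.5}
```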

Second, users tend to buy the same coupons over and over. As shown in the middle panel of the above figure, a user who has purchased a coupon with a given prefecture, genre, and catalog price four or more times has a 38% chance of buying a matched coupon again in the next week if it is offered for sale.

Third, peer group averages can help forecast the behavior of users with little or no history. The right panel of the above figure shows that a user's probability of buying a coupon increases from less than 0.1% to above 0.6% if more than ten percent of age, sex, and geography-matched peers have bought a coupon with the same characteristics.
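The peer-group feature above can be sketched as a cross-sectional aggregation: group users by (age band, sex, prefecture) and compute the fraction of each group that bought a coupon with the given characteristics. The data shapes here are hypothetical simplifications:

```python
from collections import defaultdict

def peer_purchase_rate(users, purchases, coupon_key):
    """users: {user: (age_band, sex, prefecture)};
    purchases: set of (user, coupon_key) pairs.
    Returns, per peer group, the fraction of members who bought a
    coupon with characteristics coupon_key."""
    groups = defaultdict(list)
    for user, peer_key in users.items():
        groups[peer_key].append(user)
    return {g: sum((u, coupon_key) in purchases for u in members) / len(members)
            for g, members in groups.items()}

users = {"u1": ("30s", "f", "Tokyo"), "u2": ("30s", "f", "Tokyo"),
         "u3": ("40s", "m", "Osaka")}
rates = peer_purchase_rate(users, {("u1", "spa_tokyo")}, "spa_tokyo")
# rates[("30s", "f", "Tokyo")] == 0.5
```

For a new user with no history, the peer group's rate serves as the prior in place of the missing personal features.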

Fourth, it's important to consider the geographic coverage of each coupon. To be specific, a coupon is relevant for the multiple prefectures listed in coupon_area_train.csv, not just the single prefecture listed for that coupon in coupon_list_train.csv. In the kernel density plots below, I show the purchase intensity for users based in four prefectures: Tokyo, Kanagawa, Osaka, and Aichi, using the geographic data in coupon_list_train.csv. The purchases for Osaka and Aichi users appear strongly bimodal, with an unusually large number of purchases occurring in the Tokyo region.

On the other hand, if we look at all the prefectures that map to a given coupon, we find that Osaka users purchased Tokyo coupons not because they planned to travel to Tokyo, but because these coupons were also local to Osaka. If we plot the geographic intensity of "nearest-to-user" prefecture rather than a coupon's primary listing prefecture, we see much more localized purchase behavior.
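The geographic fix above can be sketched as follows. This is a simplified membership check against a coupon's full coverage list (as in coupon_area_train.csv), not a true distance-based nearest-prefecture calculation:

```python
def coupon_locality(user_pref, coupon_prefs):
    """coupon_prefs: all prefectures a coupon is displayed in, with the
    primary listing prefecture first (as in coupon_list_train.csv).
    Credits the purchase to the user's own prefecture if the coupon
    covers it, else to the primary listing prefecture."""
    if user_pref in coupon_prefs:
        return user_pref
    return coupon_prefs[0]

# a Tokyo-listed coupon that also covers Osaka counts as local for Osaka users
loc = coupon_locality("Osaka", ["Tokyo", "Osaka", "Kanagawa"])  # "Osaka"
```

Under this mapping, the apparent bimodality for Osaka and Aichi users largely disappears: their "Tokyo" purchases were locally displayed coupons all along.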

Words of Wisdom

Do you have any advice for those just getting started in data science?

Focus on understanding the problem. Without understanding the problem, it's impossible to develop a solution.

Start with simple approaches and models. A fast development cycle is key to testing out ideas and learning what works. Don't start building computationally expensive ensembles until you have iterated through most of your best ideas.

Bio

Halla Yang has worked as a quantitative researcher, portfolio manager and trader at Goldman Sachs Asset Management, Jump Trading, and Arrowstreet Capital. He holds a Ph.D. in Business Economics from Harvard, and a B.A. in Physics, summa cum laude, also from Harvard. He is about to start a new position as data scientist at a management consulting firm.
