Otto Product Classification Winner's Interview: 2nd place, Alexander Guschin ¯\_(ツ)_/¯

The Otto Group Product Classification Challenge made Kaggle history as our most popular competition ever. Alexander Guschin finished in 2nd place ahead of 3,845 other data scientists. In this blog, Alexander shares his stacking-centered approach and explains why you should never underestimate the nearest neighbours algorithm.

3,848 players on 3,514 teams competed to classify items across Otto Group's product lines

The Basics

What was your background prior to entering this challenge?

I have some theoretical understanding of machine learning thanks to my base institute (Moscow Institute of Physics and Technology) and our professor Konstantin Vorontsov, one of the top Russian machine learning specialists. As for my acquaintance with practical problems, another great Russian data scientist, Alexander D’yakonov, who was once ranked #1 on Kaggle, used to teach a course on practical machine learning every autumn, which gave me a very good foundation. Kagglers may know this course as PZAD.

Alexander's profile on Kaggle

How did you get started competing on Kaggle?

I got started in the autumn of 2014 with “Forest Cover Type Prediction”. At that time I had no experience in solving machine learning problems. I found excellent benchmarks in “Titanic: Machine Learning from Disaster” which helped me a lot. After that I understood that machine learning was extremely interesting to me, and I simply tried to participate in every competition I could.

What made you decide to enter this competition?

I wanted to test some ideas for my bachelor’s thesis. I liked that the Otto competition had a quite reliable dataset: you could check everything with cross-validation, and changes in CV score tracked the leaderboard closely enough. Also, the spirit of a competition is quite appropriate for testing ideas.

Let's Get Technical

What preprocessing and supervised learning methods did you use?

My solution’s stacking schema

The main idea of my solution is stacking. Stacking lets you combine different methods’ predictions of Y (or of the labels, in multiclass problems) as “metafeatures”. Basically, to obtain a metafeature for the train set, you split your data into K folds and train K models, each on K-1 folds, using each model to predict the one fold that was left aside. To obtain the metafeature for the test set, you can average the predictions from these K models or make a single prediction with a model trained on all the train data. After that you train a metaclassifier on the features & metafeatures, and average its predictions if you have several metaclassifiers.
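A minimal sketch of this out-of-fold scheme with scikit-learn (a sketch only: the base model, fold count, and the `X_train`/`y_train`/`X_test` arrays are illustrative assumptions, not my exact pipeline):

```python
import numpy as np
from sklearn.model_selection import StratifiedKFold
from sklearn.ensemble import RandomForestClassifier

def make_metafeature(model, X_train, y_train, X_test, n_folds=5):
    """Out-of-fold class probabilities for train; fold-averaged probabilities for test."""
    n_classes = len(np.unique(y_train))
    meta_train = np.zeros((X_train.shape[0], n_classes))
    meta_test = np.zeros((X_test.shape[0], n_classes))
    skf = StratifiedKFold(n_splits=n_folds, shuffle=True, random_state=0)
    for fit_idx, oof_idx in skf.split(X_train, y_train):
        model.fit(X_train[fit_idx], y_train[fit_idx])
        meta_train[oof_idx] = model.predict_proba(X_train[oof_idx])  # predict the held-out fold
        meta_test += model.predict_proba(X_test) / n_folds           # average the K fold models
    return meta_train, meta_test

# Example: a random forest metafeature (hyperparameters are illustrative)
rf_meta_train, rf_meta_test = make_metafeature(
    RandomForestClassifier(n_estimators=300, n_jobs=-1), X_train, y_train, X_test)
```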

At the beginning of the competition I found it useful to split the data into two groups: (1) train & test, (2) TF-IDF(train) & TF-IDF(test). Many parts of my solution use these two groups in parallel.
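Since the Otto features are non-negative counts, the second group can be produced with scikit-learn’s TfidfTransformer (a sketch; the exact transformer settings I used may have differed):

```python
from sklearn.feature_extraction.text import TfidfTransformer

# Fit the TF-IDF weighting on the train counts only, then reuse it for test
tfidf = TfidfTransformer()
X_train_tfidf = tfidf.fit_transform(X_train)
X_test_tfidf = tfidf.transform(X_test)
```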

Talking about supervised methods, I found that XGBoost and neural networks both give good results on this data. Thus I decided to use them as metaclassifiers in my ensemble.

KNN, however, usually gives predictions that are very different from those of decision trees or neural networks, so I included it on the first level of the ensemble as metafeatures. Random forest and XGBoost also happened to be useful as metafeatures.

What was your most important insight into the data?

Probably the main insight was that KNN is capable of making very good metafeatures. Never underestimate the nearest neighbours algorithm.
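For instance, the `make_metafeature` helper sketched above works just as well with nearest neighbours; the neighbourhood sizes below are hypothetical choices, not my actual ones:

```python
from sklearn.neighbors import KNeighborsClassifier

# KNN metafeatures at several neighbourhood sizes (the k values are illustrative)
knn_metas = [
    make_metafeature(KNeighborsClassifier(n_neighbors=k), X_train, y_train, X_test)
    for k in (2, 8, 32, 128)
]
```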

It was also very important to combine the NN and XGB predictions on the second level. While my final second-level NN and XGB each scored around .391 on the private LB separately, their combination achieved .386, which is a very significant improvement. Bagging on the second level helped a lot too.
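A sketch of such a second-level combination (the `proba_nn`/`proba_xgb` arrays, the `y_holdout` labels, and the 50/50 weights are all hypothetical; weights would be tuned on validation data):

```python
from sklearn.metrics import log_loss

# Weighted average of two second-level models' class probabilities
blend = 0.5 * proba_nn + 0.5 * proba_xgb
print(log_loss(y_holdout, blend))  # the blend should beat either model alone
```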

t-SNE in 2 dimensions

Besides this, t-SNE in 2 dimensions looks very interesting. We can see on the plot that there are some examples which will most likely be misclassified by our algorithm. This means it won’t be easy to find a way to post-process our predictions to improve logloss.

Also, it seemed interesting that some classes were more closely related than others, for example class 1 and class 2. It’s worth trying to distinguish such classes specially.
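A minimal sketch of producing such a plot (the subsample size and t-SNE parameters are illustrative, and t-SNE is slow on the full training set; `y_train` is assumed to be integer-encoded):

```python
import numpy as np
import matplotlib.pyplot as plt
from sklearn.manifold import TSNE

# Embed a random subsample of the train set in 2 dimensions
rng = np.random.RandomState(0)
idx = rng.choice(X_train.shape[0], 5000, replace=False)
emb = TSNE(n_components=2, random_state=0).fit_transform(X_train[idx])

plt.scatter(emb[:, 0], emb[:, 1], c=y_train[idx], s=3, cmap="tab10")
plt.title("t-SNE in 2 dimensions")
plt.show()
```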

Final model’s predictions for holdout

Were you surprised by any of your findings?

Unfortunately, it appears that making your metafeatures individually better won’t necessarily improve your model. And when it comes to ensembling, all you can count on is your understanding of the algorithms (basically, the more diverse the metafeatures, the better) and the effort to try as many metafeatures as possible.

The more diverse your metafeatures, the better: a metafeature made by ExtraTrees vs. a metafeature made by a neural network.

Which tools did you use?

I only used sklearn, xgboost, and lasagne. These are excellent machine learning libraries and I would recommend them to anyone who is starting to compete on Kaggle. In my experience they are sufficient to try different methods and achieve great results in most Kaggle competitions.

Words of Wisdom

Do you have any advice for those just getting started in data science?

I think the most useful advice here is: try not to get stuck fine-tuning parameters, or stuck using the same approaches in every competition. Read through the forums and study the winning solutions of past competitions; all of this will give you a significant boost whatever your level is. In other words, my point is that reading past solutions is as important as solving competitions.

Also, when you first start working on machine learning problems you can make some nasty mistakes which will cost you a lot of time and effort. So it is great if you can work in a team with someone and ask them to check your code or to try the same methods on their own. Besides, always compare your performance with people on the forums. When you see that your algorithm performs much worse than what people report on the forum, go and check the benchmarks for this and other recent competitions and try to figure out the mistake.

Bio

Alexander Guschin is a 4th-year student at the Moscow Institute of Physics and Technology. Currently, Alexander is finishing his bachelor’s thesis on ensembling methods.
