This section describes some best practices for analysis, drawn from the experience of analysts in the Data Mining Team. We list a few things you should do (or at least consider doing), some pitfalls to avoid, and a set of issues to keep in mind that could affect the quality of your results. Finally, we reference tools and data sets that might help in your analysis.

Analysis Quality

  • Did you spend time thinking about what question you were answering?
  • Did you engage potential users of your analysis to ensure you address the right questions?
  • How much effort did you put into checking the quality of the data?
  • How reproducible is your analysis? If you were to pick up your project 6 months from now, could you reuse anything?
  • Did you review your write up to your satisfaction?
  • Did you have others review your analysis artifacts (scripts, code, etc.)?
  • Is your write up something you would be proud to publish?
  • Do you think readers of your analysis summary can understand the key points easily and benefit from them?

Analysis Do's

  • Look at the distribution of your data. Always look at histograms (values and counts) for key fields in your analysis and see what pops out. In most cases, you will find some surprises that need further investigation before you dive into your real analysis (a sketch of this kind of check follows this list).
  • Skewed Distributions. Most of the data distributions we see in our work are very skewed ("heavy" or "long" tailed). For example, if you are analyzing queries, there may be a handful of queries that dominate (e.g., "google"). The metrics computed for a particular feature or vertical may be heavily skewed because of those few queries.
  • Segmentation. Metrics are more useful when segmented appropriately. Not all segments are necessarily useful, but almost always some kind of segmentation can provide more useful insights, e.g., segmenting by dominant vs. non-dominant query (head vs. tail, "super-head" vs. rest); see the segmentation sketch after this list. For more on this, see the section on Segmentation, and a good blog post on segmentation from the Web Analytics expert Avinash Kaushik: http://www.kaushik.net/avinash/2010/05/web-analytics-segments-three-category-recommendations.html
  • Deep dive: Always look at some unaggregated data as part of your analysis, especially for results that are surprising (either positively or negatively). One good approach is to use Magic Mirror to pull a few sample sessions and see in detail what users are doing. While that will not answer the questions you have, it may raise questions that had not been considered, or show that some assumptions you made are false.
  • Make sure the data is correct. Talk to the people who generated the data to verify that every field you are using means what you think it means. Don't trust your intuition; always check. For example, when using the DQ field from one of the databases, it is good to verify which verticals are included in the DQ computation. Not all are included, and the list of those that are included differs between the Competitive and Live Metrics databases.
  • Think about baselines. Make sure that the numbers you are comparing can be compared meaningfully. Often some subset of the population cannot be meaningfully compared to the population as a whole. For example, it isn't terribly meaningful to compare IE entry point Bing users to the global Bing user population in terms of value, because the global Bing user population will be biased by low-value marketing users, have different demographics, etc. It may be that you will simply demonstrate that marketing users are less likely to return than IE and Toolbar users, which is expected, and not what you set out to prove at all.
  • Think ahead about possible shortfalls of your methods. Build specific experiments to test whether these shortcomings are real. The beginning of any analysis project should include an active brainstorm of possible reasons the analysis method could be flawed, and the project should specifically build in experiments and data sets to attempt to prove or disprove those possible shortcomings. For example, when developing Session Success Rate, there were concerns that success due to answers would not be properly measured, invalidating the metric for answers-related experiments. To shed light on this, we made sure to test on data from a known-good answers ranker flight, to verify that Session Success Rate didn't tell the wrong story in that case.
  • Ensure your metric can find both good and bad. Sometimes your tools will have biases which can be found by testing both good and bad examples. If your metric always says that things are good, it probably isn't useful. This can sometimes be accomplished by having some prior knowledge about good cases and bad cases, and ensuring both are included in your set. For example, imagine that your analysis intends to find the impact of exposure to various Bing features on usage of Bing. In this case, the analysis should include both features like Instant Answers, which we believe are a positive experience for our users, and features like no-results pages, which we believe are not. If the analysis says that both are really good things, or both are really bad things, then we know it hasn't produced reliable results (a small sanity-check sketch follows this list).
  • Communicate the analysis results. Allocate time and put some effort into communicating the results of your analysis to your customers as well as to anyone who may potentially be interested. Don't wait for them to contact you. Contact them first and ask if they are interested.
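
The following is a minimal sketch of the distribution and skew checks described in the first two items above. It assumes a pandas DataFrame with one row per impression and hypothetical columns "query" and "clicks"; the file name and column names are illustrative only, not part of any real pipeline.

    # Sketch: inspect distributions and head-query skew before the real analysis.
    # All names here are hypothetical; adapt them to your actual data.
    import pandas as pd
    import matplotlib.pyplot as plt

    df = pd.read_csv("impressions_sample.csv")  # illustrative export of raw data

    # Histogram of a key numeric field: look for outliers, spikes, truncation.
    df["clicks"].hist(bins=50)
    plt.title("Clicks per impression")
    plt.show()

    # How heavy is the head? Share of impressions from the top 10 queries.
    counts = df["query"].value_counts()
    head_share = counts.head(10).sum() / counts.sum()
    print(f"Top 10 queries account for {head_share:.1%} of impressions")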
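
A companion sketch for the segmentation item, computing the same metric overall and per head/tail segment on the same hypothetical data. The top-1% cutoff for "head" queries is an arbitrary illustration, not a recommended threshold.

    # Sketch: head-vs-tail segmentation of CTR on hypothetical impression data.
    # "clicks" is clicks per impression; CTR here is the fraction of
    # impressions with at least one click.
    import pandas as pd

    df = pd.read_csv("impressions_sample.csv")  # same illustrative data as above
    counts = df["query"].value_counts()
    head_queries = set(counts.head(max(1, len(counts) // 100)).index)

    df["segment"] = df["query"].isin(head_queries).map({True: "head", False: "tail"})
    df["any_click"] = df["clicks"] > 0

    print("Overall CTR:", df["any_click"].mean())
    print(df.groupby("segment")["any_click"].mean())  # often very different per segment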
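
Finally, a small sanity-check sketch for the "good and bad" item: evaluate the metric on one case believed to be good and one believed to be bad, and treat the metric as suspect if it cannot separate them. The feature names and metric deltas are entirely hypothetical.

    # Sketch: sanity-check a metric against known-good and known-bad cases.
    # `metric_delta` maps a feature to the metric movement it produced (made up).
    metric_delta = {
        "instant_answers": +0.8,   # believed to be a good experience
        "no_results_page": -1.2,   # believed to be a bad experience
    }

    good, bad = metric_delta["instant_answers"], metric_delta["no_results_page"]
    if good > 0 > bad:
        print("Metric separates the known-good case from the known-bad case")
    else:
        print("Warning: metric may be biased; it does not separate good from bad")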

Analysis Don'ts

  • Don't go too broad in the analysis. When trying to look at everything, it's very easy to drown in data.
  • Don't use a page view-level quantity to determine a cohort of users without extreme care. This can introduce unexpected biases due to coverage effects, which can influence broad features of the cohort.
  • Don't be afraid to turn away from some analysis method which is proving unproductive. Just because you've written up a plan and scheduled time for a project doesn't mean you should be afraid to fail fast if that's the right thing to do.

Analysis Issues

  • Precision: add error bars (e.g. 95% confidence intervals). This is especially important when working with sampled data (sampled NIF streams or Magic Mirror). For example, if we compare two estimates (e.g. CTR) that are different, but their 95% confidence intervals overlap, we can't say that they are different (though we can't say that they're equal either); see the confidence-interval sketch after this list.
  • Accuracy: depending on the "ground truth" and data set used for the analysis, there may be a bias that needs to be understood to put the analysis results in perspective. For example, when using a particular flight for the analysis, there is a mechanism for selecting users to be in that flight, so the users in the flight may not be a true random sample from the population your analysis is interested in, in which case a bias is introduced into the analysis. There can also be temporal bias, e.g. due to seasonal effects: browsing patterns may be different during the weeks before Christmas than, say, in February. Day-of-the-week effects could also be an issue (it is best to use multiples of 7 days for analysis data, e.g. 35 days). Also, unless there is a very good reason for it, don't aggregate over very long periods of time, as the signal will likely change over a long period. This presents a trade-off between aggregating over the short term, and thus having less data and larger error, versus aggregating over the long term, and thus having more data and better precision but less sensitivity to temporal effects. In general, a four or five week period best balances this trade-off.
  • Weighted aggregation: When computing aggregate values, one can choose to give different weights to different data points. Currently Foray (flight analysis) and LiveMetrics compute aggregate metrics in different ways: LiveMetrics gives each impression equal weight, whereas Foray gives each user equal weight (by first computing aggregates per user and then aggregating those values over all users). As a result, the metric values in LiveMetrics represent heavy users more than light users. The results obtained from these two methods can differ both quantitatively and qualitatively; depending on the analysis, one or the other (or neither) may be most appropriate. A sketch contrasting the two weightings follows this list.
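
A minimal sketch of the error-bar check in the Precision item: 95% confidence intervals for two CTR estimates using the normal approximation to a binomial proportion. The click and impression counts are made up for illustration.

    # Sketch: 95% confidence intervals for two CTRs. If the intervals overlap,
    # we cannot claim the CTRs differ (nor that they are equal).
    import math

    def ctr_ci(clicks, impressions, z=1.96):
        p = clicks / impressions
        half_width = z * math.sqrt(p * (1 - p) / impressions)
        return p - half_width, p + half_width

    ci_a = ctr_ci(420, 10_000)   # e.g. control
    ci_b = ctr_ci(455, 10_000)   # e.g. treatment
    print("A:", ci_a, "B:", ci_b)
    print("Intervals overlap:", ci_a[1] >= ci_b[0] and ci_b[1] >= ci_a[0])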
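
And a small sketch of the weighted-aggregation point: the same toy data aggregated per impression (the LiveMetrics-style weighting described above) versus per user (the Foray-style weighting). The toy numbers are invented purely to show how far the two can diverge.

    # Sketch: per-impression vs. per-user aggregation of CTR on toy data.
    import pandas as pd

    toy = pd.DataFrame({
        "user_id": ["u1"] * 8 + ["u2", "u3"],   # one heavy user, two light users
        "clicked": [1, 1, 1, 1, 1, 1, 0, 0, 0, 0],
    })

    per_impression = toy["clicked"].mean()                      # each impression equal: 0.60
    per_user = toy.groupby("user_id")["clicked"].mean().mean()  # each user equal: 0.25
    print(f"Per-impression CTR: {per_impression:.2f}")
    print(f"Per-user CTR:       {per_user:.2f}")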
