More articles related to [Causal Inference]

Targeted learning methods build machine-learning-based estimators of parameters defined as features of the probability distribution of the data, while also providing influence-curve or bootstrap-based confidence intervals. The theory offers a general…
[Statistics] Causal Inference. Original article: http://www.stat.cmu.edu/~larry/=sml/Causation.pdf Outline: 1. The difference between prediction and causation. Many problems encountered in practice are really causal questions, not prediction questions. Causal questions come in two kinds: one is causal inference, e.g. given two variables X and Y, find a parameter theta that measures the causal relationship between them; the other is causal discovery, i.e. given a set of variables, find the causal relationships among them. For the latter…
Causal inference. This post summarizes and organizes the paper "A Survey on Causal Inference". Paper link. Introduction. Association vs. causation. Which came first, the chicken or the egg? The subject here is causality, which differs from mere association: a causal relationship between two variables cannot reasonably be inferred just from an observed association between them. For two associated events A and B, the possible relationships are: A causes B; B causes A; A and B are effects of a common cause but do not cause each other; or something else. A simple example illustrates the difference between association and causation: as ice cream sales rise, the drowning death rate increases sharply. If…
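The ice-cream/drowning example can be made concrete with a small simulation (a hypothetical sketch: all variable names and numbers below are made up, with temperature playing the common-cause role):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: daily temperature drives BOTH ice cream sales
# and drownings; neither causes the other.
temp = rng.normal(25, 5, size=10_000)
ice_cream = 2.0 * temp + rng.normal(0, 3, size=temp.size)
drownings = 0.5 * temp + rng.normal(0, 3, size=temp.size)

# Marginally, the two are strongly correlated...
r_marginal = np.corrcoef(ice_cream, drownings)[0, 1]

# ...but conditioning on the common cause (regressing out temperature)
# makes the association vanish.
resid_ice = ice_cream - np.polyval(np.polyfit(temp, ice_cream, 1), temp)
resid_drown = drownings - np.polyval(np.polyfit(temp, drownings, 1), temp)
r_conditional = np.corrcoef(resid_ice, resid_drown)[0, 1]

print(r_marginal, r_conditional)  # strong vs. near-zero correlation
```

The marginal correlation is an artifact of the common cause; once temperature is held fixed, the two variables are independent by construction.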
Contents Standardization: nonparametric case; Censoring; parametric models; Time-varying; static. IP weighting: nonparametric; Censoring; parametric models under censoring; Time-varying. G-estimation: nonparametric models; parametric models; Time-varying. Propensity Scores. Instrumental Variables: Binary Linear Setting; Continuous Linear Setting; Nonpara…
Contents 6.1 Causal diagrams 6.2 Causal diagrams and marginal independence 6.3 Causal diagrams and conditional independence 6.4 Positivity and consistency in causal diagrams 6.5 A structural classification of bias 6.6 The structure of effect modification F…
Contents 1.1 Individual causal effects 1.2 Average causal effects 1.5 Causation versus association Hernán M. and Robins J. Causal Inference: What If. A: intervention, exposure, treatment consistency: \(Y=Y^A\) when A observed. 1.1 Individual causal…
RecSys papers from the three major conferences SIGIR, SIGKDD, and ICML, 2015–2017: [Please credit the source when reposting: https://www.cnblogs.com/shenxiaolin/p/8321722.html] SIGIR-2015 [Title]WEMAREC: Accurate and Scalable Recommendation through Weighted and Ensemble Matrix Approximation [Abstract]Matrix approximation…
A question machine learning has always wanted to answer is "what if", i.e. decision guidance: If I send a user a coupon, will they stay? If a patient takes this drug, will their blood pressure drop? If the app adds this feature, will users spend more time in it? Will this monetary policy effectively boost the economy? Such questions are hard to solve because the ground truth is unobservable in reality: a patient who took the drug saw their blood pressure drop, but we have no way of knowing whether it would also have dropped at that same moment had they not taken it. At this point analysts will say: run an A/B test! We estimate the overall difference; if it is significant the treatment works, if not it doesn't. But can we…
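The "unobservable ground truth" point can be sketched in potential-outcomes terms: simulate both potential outcomes, reveal only one per unit, and check that randomization recovers the average effect. A toy illustration with made-up numbers, not any particular study:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000

# Hypothetical potential outcomes: y0 if untreated, y1 if treated.
# In reality only ONE of the two is ever observed per unit.
y0 = rng.normal(120, 10, n)           # e.g. blood pressure without the drug
y1 = y0 - 5 + rng.normal(0, 2, n)     # the drug lowers it by 5 on average
true_ate = (y1 - y0).mean()           # knowable only in simulation

# Randomized assignment makes treated and control exchangeable,
# so the difference in observed group means recovers the ATE.
t = rng.integers(0, 2, n)
y_obs = np.where(t == 1, y1, y0)      # the only data an analyst would see
est_ate = y_obs[t == 1].mean() - y_obs[t == 0].mean()

print(true_ate, est_ate)  # both close to -5
```

Without randomization (e.g. if sicker patients were more likely to take the drug), the same difference in means would be confounded.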
OGB: Open Graph Benchmark https://ogb.stanford.edu/ https://github.com/snap-stanford/ogb OGB is a collection of benchmark datasets, data-loaders and evaluators for graph machine learning in PyTorch. Data-loaders are fully compatible with PyTorch Geom…
Kept only as working notes; still being polished… 1. PyGSP https://pygsp.readthedocs.io/en/stable/index.html https://pygsp.readthedocs.io/en/stable/reference/index.html Development: https://github.com/epfl-lts2/pygsp.git https://github.com/wangg12/pygsp.git Gra…
Heterogeneous Treatment Effect (HTE) estimation aims to quantify how a treatment's effect differs across subpopulations, so that experiments can be targeted at specific groups, differentiated through per-segment policies, or adjusted accordingly. Double Machine Learning (DML) treats the treatment as a feature and computes the differential effect by estimating that feature's influence on the outcome. Machine learning excels at accurate prediction, while economics emphasizes unbiased estimation of a feature's effect on the outcome; DML combines the economists' framework with machine learning, using an arbitrary ML model to give an unbiased estimate of a feature's effect on the outcome. For other families of HTE methods, see the causal…
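A minimal sketch of the DML partialling-out idea: regress both treatment and outcome on the confounders, then regress residual on residual. Here the nuisance models are plain least squares and cross-fitting is omitted for brevity; in practice any ML regressor plus sample splitting would be used, and all names and numbers below are illustrative:

```python
import numpy as np

rng = np.random.default_rng(2)
n, p = 5000, 5

# Simulated data: X confounds both the treatment T and the outcome Y.
X = rng.normal(size=(n, p))
T = X @ rng.normal(size=p) + rng.normal(size=n)
theta = 1.5                                  # true treatment effect
Y = theta * T + X @ rng.normal(size=p) + rng.normal(size=n)

def fit_predict(X, y):
    """Least-squares nuisance model; any ML regressor could stand in."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return X @ beta

# Partial out X from both T and Y, then regress residual on residual
# to obtain a debiased estimate of the treatment effect.
rT = T - fit_predict(X, T)
rY = Y - fit_predict(X, Y)
theta_hat = (rT @ rY) / (rT @ rT)

print(theta_hat)  # close to the true theta = 1.5
```

Regressing Y on T directly, without partialling out X, would absorb the confounding paths through X into the coefficient; the residual-on-residual step is what removes that bias.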
Contents 22.1 The target trial 22.2 Causal effects in randomized trials 22.3 Causal effects in observational analyses that emulate a target trial 22.4 Time zero 22.5 A unified analysis for causal inference Fine Point Grace periods Technical Point Controlle…
Contents 21.1 The g-formula for time-varying treatments 21.2 IP weighting for time-varying treatments 21.3 A doubly robust estimator for time-varying treatments 21.4 G-estimation for time-varying treatments 21.5 Censoring is a time-varying treatment Fine P…
Contents 20.1 The elements of treatment-confounder feedback 20.2 The bias of traditional methods 20.3 Why traditional methods fail 20.4 Why traditional methods cannot be fixed 20.5 Adjusting for past treatment Fine Point Representing feedback cycles with a…
Contents 15.1 Outcome regression 15.2 Propensity scores 15.3 Propensity stratification and standardization 15.4 Propensity matching 15.5 Propensity models, structural models, predictive models Fine Point Nuisance parameters Effect modification and the prop…
Contents 14.1 The causal question revisited 14.2 Exchangeability revisited 14.3 Structural nested mean models 14.4 Rank preservation 14.5 G-estimation 14.6 Structural nested models with two or more parameters Fine Point Relation between marginal structural…
Contents 13.1 Standardization as an alternative to IP weighting 13.2 Estimating the mean outcome via modeling 13.3 Standardizing the mean outcome to the confounder distribution 13.4 IP weighting or standardization 13.5 How seriously do we take our estimate…
Contents 12.1 The causal question 12.2 Estimating IP weights via modeling 12.3 Stabilized IP weights 12.4 Marginal structural models 12.5 Effect modification and marginal structural models 12.6 Censoring and missing data Fine Point Setting a bad example Ch…
Contents 11.1 Data cannot speak for themselves 11.2 Parametric estimators of the conditional mean 11.3 Nonparametric estimators of the conditional mean 11.4 Smoothing The bias-variance trade-off Fine Point Fisher consistency Model dimensionality and the re…
Contents 10.1 Identification versus estimation 10.2 Estimation of causal effects 10.3 The myth of the super-population 10.4 The conditionality "principle" The curse of dimensionality Fine Point Honest confidence intervals Uncertainty from systematic b…
Contents 9.1 Measurement Error The structure of measurement error 9.3 Mismeasured confounders 9.4 Intention-to-treat effect: the effect of a misclassified treatment 9.5 Per-protocol effect Fine Point The strength and direction of measurement bias Per-proto…
Contents 8.1 The structure of selection bias 8.2 Examples of selection bias 8.3 Selection bias and confounding 8.4 Selection bias and censoring 8.5 How to adjust for selection bias 8.6 Selection without bias Fine Point Selection bias in case-control studie…
Contents 7.1 The structure of confounding Confounding and exchangeability Confounding and the backdoor criterion 7.4 Confounding and confounders 7.5 Single-world intervention graphs Confounding adjustment Fine Point The strength and direction of confoundin…
Contents 5.1 Interaction requires a joint intervention 5.2 Identifying interaction 5.3 Counterfactual response types and interactions 5.4 Sufficient causes 5.5 Sufficient cause interaction 5.6 Counterfactual or sufficient-component causes? Fine Point More…
Contents 4.1 Definition of effect modification 4.2 Stratification to identify effect modification 4.3 Why care about effect modification 4.4 Stratification as a form of adjustment 4.5 Matching as another form of adjustment 4.6 Effect modification and adjus…
Contents Overview 3.1 3.2 Exchangeability 3.3 Positivity 3.4 Consistency First Second Fine Point 3.1 Identifiability of causal effects 3.2 Crossover randomized experiments 3.3 Possible worlds 3.4 Attributable fraction Technical Point 3.1 Positivity for standardiz…
Contents Overview 2.1 Randomization 2.2 Conditional randomization 2.3 Standardization 2.4 Inverse probability weighting Technical Point 2.2 Formal definition of IP weights Technical Point 2.3 Equivalence of IP weighting and standardization Hernán M. and R…
Click can be Cheating: Counterfactual Recommendation for Mitigating Clickbait Issue Authors: Wenjie Wang, Fuli Feng, Xiangnan He, Hanwang Zhang, Tat-Seng Chua SIGIR'21 National University of Singapore, University of Science and Technology of China, Nanyang Technological University Paper: https://dl.acm.org/doi/pdf/10.1145/3404835.3462962 This post: https://www.cnblogs.com/zihaojun/p/15713705…
Consider a real LTI system with a WSS process $x(t)$ as input and a WSS process $y(t)$ as output. Based on the WSS correlation properties, we get these equations $\begin{align*}\text{Time-Domain}&:& R_{yy}(\tau) &= h(\tau)*h(-\tau)*R_{xx}(\tau)\end{align*}$…
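The time-domain relation can be checked numerically in its discrete-time analogue: for unit-variance white noise input, $R_{xx}[k]=\sigma^2\delta[k]$, so $R_{yy}[k]=\sigma^2\sum_n h[n]\,h[n+k]$. A sketch with arbitrary filter taps:

```python
import numpy as np

rng = np.random.default_rng(3)

# White WSS input (R_xx[k] = sigma^2 * delta[k]) through an FIR filter h,
# so the output autocorrelation is R_yy[k] = sigma^2 * sum_n h[n] h[n+k].
h = np.array([1.0, 0.5, 0.25])   # arbitrary example taps
sigma2 = 1.0
n = 500_000
x = rng.normal(0, np.sqrt(sigma2), n)
y = np.convolve(x, h, mode="full")[:n]

# Empirical autocorrelation of the output at lags 0..2
lags = range(3)
r_emp = np.array([np.mean(y[: n - k] * y[k:]) for k in lags])

# Theoretical R_yy[k]: the deterministic autocorrelation of h
r_theo = np.array([sigma2 * np.sum(h[: len(h) - k] * h[k:]) for k in lags])

print(r_emp, r_theo)  # the two vectors agree up to sampling noise
```

The same relation in the frequency domain is $S_{yy}(\omega)=|H(\omega)|^2 S_{xx}(\omega)$, since convolution with $h(\tau)*h(-\tau)$ corresponds to multiplication by $H(\omega)\overline{H(\omega)}$.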
Initially, my view of machine learning (mainly supervised learning) was: assume an algorithm model with some parameters; feed it batches of data, recomputing those parameters after each batch; once the data is exhausted, the parameter values are settled. That set of parameter values plus the algorithm is the model file, which can later be used for prediction: the algorithm combined with those fixed parameter values can be put into actual use for classification/regression. But then "inference" appeared, described as a process distinct from learning, which is confusing. What is learning? What is inference? Isn't learning a kind of inference? Well…
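One common reading of the distinction described above — learning estimates the parameters from data once, inference applies the fixed parameters to new inputs — can be shown in a few lines (a toy linear model; all numbers are made up):

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy supervised data generated from y ≈ 3x + 1.
x = rng.uniform(-1, 1, 200)
y = 3.0 * x + 1.0 + rng.normal(0, 0.1, 200)

# LEARNING: use the training data to estimate the parameters w, b.
A = np.column_stack([x, np.ones_like(x)])
w, b = np.linalg.lstsq(A, y, rcond=None)[0]

# INFERENCE: apply the already-learned, now-fixed parameters to a new
# input; no parameter is updated in this step.
def predict(x_new):
    return w * x_new + b

print(w, b, predict(2.0))  # w ≈ 3, b ≈ 1, prediction ≈ 7
```

The "model file" in the passage corresponds to the pair (algorithm, fitted `w` and `b`); deployment runs only the `predict` step.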