Learning Disentangled Representations for CounterFactual Regression

Full Text: CausalML_NeurIPS_2019.pdf

We consider the challenge of estimating causal effects from observational data and note that, in general, only some of the factors underlying the observed covariates X contribute to the selection of the treatment T, and only some to determining the outcomes Y. We model this by positing three underlying sources of {X, T, Y} and show that explicitly modeling these sources offers valuable insight for designing models that better handle selection bias. This paper is an attempt to conceptualize this line of thought and provide a path to explore it further. We propose an algorithm that (1) identifies disentangled representations of the above-mentioned underlying factors from any observational dataset D and (2) leverages this knowledge to reduce, as well as account for, the negative impact of selection bias on estimating causal effects from D. Our empirical results show that the proposed method (i) achieves state-of-the-art performance on both individual-level and population-level evaluation measures and (ii) is highly robust under various data-generating scenarios.
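The three-source decomposition described above can be sketched as a toy generative process. The sketch below is illustrative only (names, linear maps, and dimensions are assumptions, not the paper's implementation): one latent factor drives treatment selection only, one is a confounder affecting both treatment and outcome, and one affects the outcome only.

```python
import numpy as np

# Hypothetical sketch of the three-factor decomposition: covariates X
# are assumed to arise from three latent factors -- Gamma (affects
# treatment selection T only), Delta (confounds both T and Y), and
# Upsilon (affects outcomes Y only). Linear maps stand in for the
# representation networks a learned method would fit.
rng = np.random.default_rng(0)
n, d, k = 100, 10, 4  # samples, covariate dim, latent dim per factor

X = rng.normal(size=(n, d))

# Three independent representation maps (illustrative stand-ins).
W_gamma, W_delta, W_upsilon = (rng.normal(size=(d, k)) for _ in range(3))
Gamma, Delta, Upsilon = X @ W_gamma, X @ W_delta, X @ W_upsilon

# Treatment selection depends only on Gamma and Delta ...
logits = np.concatenate([Gamma, Delta], axis=1) @ rng.normal(size=2 * k)
propensity = 1.0 / (1.0 + np.exp(-logits))
T = (rng.random(n) < propensity).astype(int)

# ... while the outcome depends only on Delta, Upsilon, and T.
Y = np.concatenate([Delta, Upsilon], axis=1) @ rng.normal(size=2 * k) + 2.0 * T

print(X.shape, T.shape, Y.shape)
```

Because only Delta appears in both the treatment and outcome mechanisms, a method that recovers these factors separately can, in principle, adjust for confounding without conditioning on the purely treatment-related factor Gamma, which is the kind of selection-bias handling the abstract refers to.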

Citation

N. Hassanpour, R. Greiner. "Learning Disentangled Representations for CounterFactual Regression". NeurIPS Workshop on Causal ML, December 2019.

Keywords: Counterfactual Regression, Causal Effect Estimation, Selection Bias, Off-policy Learning
Category: In Workshop

BibTeX

@inproceedings{Hassanpour+Greiner:CausalML19,
  author = {Negar Hassanpour and Russ Greiner},
  title = {Learning Disentangled Representations for CounterFactual Regression},
  booktitle = {NeurIPS Workshop on Causal ML},
  month = dec,
  year = 2019,
}

Last Updated: July 14, 2020
Submitted by Sabina P
