
Reducing Selection Bias in Counterfactual Reasoning

Full Text: 2019-NeurIPS-CausalML-Wkshp.pdf

Counterfactual reasoning is an important paradigm applicable in many fields, such as healthcare, economics, and education. In this work, we propose a novel method to address the issue of sample selection bias. We learn two groups of latent random variables, where one group corresponds to variables that only cause selection bias and the other group is relevant for outcome prediction. They are learned by an auto-encoder, where an additional regularization loss based on the Pearson Correlation Coefficient (PCC) encourages de-correlation between the two groups of random variables. This allows us to explicitly alleviate selection bias by keeping only the latent variables that are relevant for estimating individual treatment effects. Experimental results on a synthetic toy dataset and a benchmark dataset show that our algorithm achieves state-of-the-art performance and improves on the result of its counterpart that does not explicitly model the selection bias.
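
A minimal sketch of the kind of PCC-based de-correlation penalty described in the abstract, assuming PyTorch; the encoder outputs z_sel (selection-only latents) and z_out (outcome-relevant latents) are hypothetical names for illustration, not taken from the paper:

import torch

def pcc_decorrelation_loss(z_sel, z_out, eps=1e-8):
    # Center each latent dimension over the batch (inputs: batch x dim).
    zs = z_sel - z_sel.mean(dim=0, keepdim=True)
    zo = z_out - z_out.mean(dim=0, keepdim=True)
    # Unbiased cross-covariance between every selection dim and outcome dim.
    cov = zs.t() @ zo / (zs.shape[0] - 1)
    # Normalize by per-dimension standard deviations to obtain Pearson correlations.
    std = zs.std(dim=0).unsqueeze(1) * zo.std(dim=0).unsqueeze(0) + eps
    pcc = cov / std
    # Penalize squared correlations, pushing the two latent groups toward
    # pairwise de-correlation so selection-only factors can be discarded.
    return (pcc ** 2).mean()

In training, a term of this form would be added to the auto-encoder reconstruction loss with a weighting coefficient, so that only the outcome-relevant latents are passed on to the treatment-effect estimator.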

Citation

Z. Zhang, Q. Lan, L. Ding, Y. Wang, N. Hassanpour, R. Greiner. "Reducing Selection Bias in Counterfactual Reasoning". NeurIPS Workshop on Causal ML, December 2019.

Keywords: Counterfactual reasoning, Individual treatment effects, Machine Learning
Category: In Workshop

BibTeX

@inproceedings{Zhang+al:CausalML19,
  author    = {Zichen Zhang and Qingfeng Lan and Lei Ding and Yue Wang and
    Negar Hassanpour and Russ Greiner},
  title     = {Reducing Selection Bias in Counterfactual Reasoning},
  booktitle = {NeurIPS Workshop on Causal ML},
  month     = dec,
  year      = {2019},
}

