
Double machine learning for treatment and causal parameters

Authors: Victor Chernozhukov, Denis Chetverikov, Mert Demirer, Esther Duflo, Christian Hansen and Whitney K. Newey
Date: 27 September 2016
Type: cemmap Working Paper, CWP49/16
DOI: 10.1920/wp.cem.2016.4916

Abstract

Most modern supervised statistical/machine learning (ML) methods are explicitly designed to solve prediction problems very well. Achieving this goal does not imply that these methods automatically deliver good estimators of causal parameters. Examples of such parameters include individual regression coefficients, average treatment effects, average lifts, and demand or supply elasticities. In fact, estimators of such causal parameters obtained by naively plugging ML estimators into estimating equations for these parameters can behave very poorly. For example, the resulting estimators may formally have inferior rates of convergence with respect to the sample size n caused by regularization bias. Fortunately, this regularization bias can be removed by solving auxiliary prediction problems via ML tools. Specifically, we can form an efficient score for the target low-dimensional parameter by combining auxiliary and main ML predictions. The efficient score may then be used to build an efficient estimator of the target parameter which typically will converge at the fastest possible 1/√n rate and be approximately unbiased and normal, allowing simple construction of valid confidence intervals for parameters of interest. The resulting method thus could be called a "double ML" method because it relies on estimating primary and auxiliary predictive models. Such double ML estimators achieve the fastest rates of convergence and exhibit robust good behavior with respect to a broader class of probability distributions than naive "single" ML estimators. In order to avoid overfitting, following [3], our construction also makes use of K-fold sample splitting, which we call cross-fitting. The use of sample splitting allows us to use a very broad set of ML predictive methods in solving the auxiliary and main prediction problems, such as random forests, lasso, ridge, deep neural nets, boosted trees, as well as various hybrids and aggregates of these methods (e.g. a hybrid of a random forest and lasso). We illustrate the general theory by applying it to the leading cases of estimation and inference on the main parameter in a partially linear regression model, and estimation and inference on average treatment effects and average treatment effects on the treated under conditional random assignment of the treatment. These applications cover randomized control trials as a special case. We then use the methods in an empirical application which estimates the effect of 401(k) eligibility on accumulated financial assets.
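The cross-fitting recipe described in the abstract can be sketched in a few lines. The following is a minimal illustration, not the paper's implementation: in a partially linear model Y = θD + g(X) + u with D = m(X) + v, the sample is split into K folds, the nuisance regressions of D on X and Y on X are fit on each fold's complement, and θ is estimated from the out-of-fold residuals (the "partialling-out" score). Ridge regression stands in here for the ML learners (random forests, lasso, etc. named in the text); all variable names and the simulated data-generating process are illustrative assumptions.

```python
import numpy as np

# Simulated partially linear model: Y = theta*D + g(X) + noise, D = m(X) + noise.
rng = np.random.default_rng(0)
n, p, K, theta = 2000, 20, 5, 0.5
X = rng.normal(size=(n, p))
beta = 1.0 / np.arange(1, p + 1)          # decaying nuisance coefficients (assumed)
D = X @ beta + rng.normal(size=n)          # treatment with confounding through X
Y = theta * D + X @ beta + rng.normal(size=n)

def ridge_fit_predict(X_tr, y_tr, X_te, lam=1.0):
    """Ridge regression as a stand-in for an arbitrary ML learner."""
    b = np.linalg.solve(X_tr.T @ X_tr + lam * np.eye(X_tr.shape[1]), X_tr.T @ y_tr)
    return X_te @ b

# Cross-fitting: nuisances are fit out-of-fold to avoid overfitting bias.
folds = np.array_split(rng.permutation(n), K)
num = den = 0.0
for te in folds:
    tr = np.setdiff1d(np.arange(n), te)
    v = D[te] - ridge_fit_predict(X[tr], D[tr], X[te])  # residual of auxiliary prediction D ~ X
    u = Y[te] - ridge_fit_predict(X[tr], Y[tr], X[te])  # residual of main prediction Y ~ X
    num += v @ u
    den += v @ v

theta_hat = num / den   # residual-on-residual (partialling-out) estimator of theta
print(theta_hat)
```

Because the score is orthogonal to the nuisance functions and the nuisances are fit out-of-fold, the slow convergence of the ridge learners does not contaminate θ̂, which stays close to the true value 0.5 at the 1/√n rate.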

