ℓ1-penalized quantile regression in high-dimensional sparse models

Authors

Alexandre Belloni, Victor Chernozhukov

Published Date

28 February 2011

Type

Journal Article

We consider median regression and, more generally, a possibly infinite collection of quantile regressions in high-dimensional sparse models. In these models, the number of regressors p is very large, possibly larger than the sample size n, but only at most s regressors have a nonzero impact on each conditional quantile of the response variable, where s grows more slowly than n. Since ordinary quantile regression is not consistent in this case, we consider $\ell_1$-penalized quantile regression ($\ell_1$-QR), which penalizes the $\ell_1$-norm of regression coefficients, as well as the post-penalized QR estimator (post-$\ell_1$-QR), which applies ordinary QR to the model selected by $\ell_1$-QR. First, we show that under general conditions $\ell_1$-QR is consistent at the near-oracle rate $\sqrt{s/n}\sqrt{\log(p\vee n)}$, uniformly in the compact set $\mathcal{U}\subset(0,1)$ of quantile indices. In deriving this result, we propose a partly pivotal, data-driven choice of the penalty level and show that it satisfies the requirements for achieving this rate. Second, we show that under similar conditions post-$\ell_1$-QR is consistent at the near-oracle rate $\sqrt{s/n}\sqrt{\log(p\vee n)}$, uniformly over $\mathcal{U}$, even if the $\ell_1$-QR-selected models miss some components of the true models, and the rate can be even closer to the oracle rate otherwise. Third, we characterize conditions under which $\ell_1$-QR contains the true model as a submodel, and we derive bounds on the dimension of the selected model, uniformly over $\mathcal{U}$; we also provide conditions under which hard-thresholding selects the minimal true model, uniformly over $\mathcal{U}$.
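
The two-step procedure the abstract describes can be illustrated in a few lines. Below is a minimal sketch, not the authors' implementation: it uses scikit-learn's QuantileRegressor (which solves an $\ell_1$-penalized quantile regression) on simulated sparse data, and replaces the paper's pivotal, simulation-based penalty choice with a simple plug-in of order $\sqrt{u(1-u)\log(p\vee n)/n}$; the constant 1.1, the simulated design, and the threshold are illustrative assumptions only.

```python
# Sketch of l1-QR followed by post-l1-QR on a high-dimensional sparse design.
import numpy as np
from sklearn.linear_model import QuantileRegressor

rng = np.random.default_rng(0)
n, p, s = 200, 500, 5                 # p > n, but only s true regressors
X = rng.standard_normal((n, p))
beta = np.zeros(p)
beta[:s] = 1.0                        # sparse coefficient vector
y = X @ beta + rng.standard_normal(n)

u = 0.5                               # quantile index (median regression)
# Simplified penalty level of order sqrt(u(1-u) log(p v n) / n);
# a stand-in for the paper's data-driven, partly pivotal choice.
lam = 1.1 * np.sqrt(u * (1 - u) * np.log(max(p, n)) / n)

# Step 1: l1-QR -- quantile regression penalized by the l1-norm.
l1_qr = QuantileRegressor(quantile=u, alpha=lam, solver="highs").fit(X, y)
selected = np.flatnonzero(l1_qr.coef_)

# Step 2: post-l1-QR -- ordinary (unpenalized) QR refit on the
# selected model, which removes the shrinkage bias of step 1.
post = QuantileRegressor(quantile=u, alpha=0.0, solver="highs")
post.fit(X[:, selected], y)

print("selected regressors:", selected)
print("post-l1-QR coefficients:", post.coef_)
```

The hard-thresholding variant mentioned in the abstract would replace the selection line with, for some cutoff t, `selected = np.flatnonzero(np.abs(l1_qr.coef_) > t)`; repeating the two steps over a grid of quantile indices u in $\mathcal{U}$ gives the uniform-in-quantile version of the procedure.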


Previous version

ℓ1-penalized quantile regression in high-dimensional sparse models
Alexandre Belloni, Victor Chernozhukov
CWP10/09