Cross-validation is the most common data-driven procedure for choosing smoothing parameters in nonparametric regression. For kernel estimators with iid or strong mixing data, it is well known that the bandwidth chosen by cross-validation is optimal with respect to the average squared error and other performance measures. In this paper, we show that the cross-validated bandwidth remains optimal with respect to the average squared error even when the data-generating process is a β-recurrent Markov chain. This general class of processes covers stationary as well as nonstationary Markov chains. Hence, the proposed procedure adapts to the degree of recurrence, freeing the researcher from the need to assume stationarity (or nonstationarity) before inference begins. We study finite-sample performance in a Monte Carlo study. We conclude by demonstrating the practical usefulness of cross-validation in a highly persistent environment, namely that of nonlinear predictive systems for market returns.
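
To fix ideas, here is a minimal illustrative sketch (not the paper's implementation) of leave-one-out cross-validation for the bandwidth of a Gaussian-kernel Nadaraya–Watson estimator, applied to data simulated from a stationary AR(1) Markov chain; the regression function, chain parameters, and bandwidth grid are arbitrary choices for illustration:

```python
import numpy as np

def nw_estimate(x0, x, y, h):
    # Nadaraya-Watson estimate of E[y | x = x0] with a Gaussian kernel
    w = np.exp(-0.5 * ((x0 - x) / h) ** 2)
    return np.sum(w * y) / np.sum(w)

def loo_cv_score(h, x, y):
    # Leave-one-out cross-validation: average squared prediction error
    n = len(x)
    errs = []
    for i in range(n):
        mask = np.arange(n) != i
        errs.append((y[i] - nw_estimate(x[i], x[mask], y[mask], h)) ** 2)
    return float(np.mean(errs))

# Simulate a stationary AR(1) Markov chain as the regressor (illustrative)
rng = np.random.default_rng(0)
n = 200
x = np.zeros(n)
for t in range(1, n):
    x[t] = 0.5 * x[t - 1] + rng.normal()
y = np.sin(x) + 0.3 * rng.normal(size=n)  # hypothetical regression model

# Choose the bandwidth minimizing the cross-validation criterion on a grid
grid = np.linspace(0.05, 2.0, 40)
scores = [loo_cv_score(h, x, y) for h in grid]
h_cv = grid[int(np.argmin(scores))]
```

The same grid search applies unchanged whether the chain is stationary or nonstationary, which is the sense in which cross-validation "adapts to the degree of recurrence" in the abstract.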