The traditional approach to obtaining valid confidence intervals for nonparametric quantities is to select a smoothing parameter such that the bias of the estimator is negligible relative to its standard deviation. While this approach appears simple, it has two drawbacks. First, the question of optimal bandwidth selection is no longer well-defined, as it is not clear what ratio of bias to standard deviation should be considered negligible. Second, since the bandwidth choice necessarily deviates from the optimal (mean-squared-error minimizing) bandwidth, such a confidence interval is inefficient: its length shrinks at a slower-than-optimal rate. To address these issues, we construct valid confidence intervals that account for the presence of a nonnegligible bias and thus make it possible to perform inference with the optimal, mean-squared-error minimizing bandwidth. The key difficulty lies in finding a tight, yet feasible, bound on the bias of a nonparametric estimator. It is well known that the pointwise bias of an optimal nonparametric estimator cannot be consistently estimated (otherwise, one could subtract it and obtain a faster convergence rate, violating Stone's bounds on the optimal convergence rate). Nevertheless, we find that, under minimal primitive assumptions, it is possible to consistently estimate an upper bound on the magnitude of the bias, which is sufficient to deliver a valid confidence interval whose length decreases at the optimal rate and which does not contradict Stone's results.
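To fix ideas, here is a minimal sketch of how an estimated bias bound can enter such an interval; the notation is hypothetical and not the paper's own construction. Suppose $\hat{\theta}_h$ is the estimator at bandwidth $h$, $\hat{\sigma}_h$ its standard error, and $\hat{B}_h$ a consistent upper bound on the absolute bias $|\mathrm{E}[\hat{\theta}_h] - \theta|$. A conservative two-sided interval then takes the form
\[
CI_{1-\alpha} = \Big[\, \hat{\theta}_h - \hat{B}_h - z_{1-\alpha/2}\,\hat{\sigma}_h ,\;\; \hat{\theta}_h + \hat{B}_h + z_{1-\alpha/2}\,\hat{\sigma}_h \,\Big],
\]
where $z_{1-\alpha/2}$ is the standard normal quantile. Whenever $\hat{B}_h$ exceeds the true absolute bias with probability approaching one, this interval covers $\theta$ with asymptotic probability at least $1-\alpha$; and at the MSE-optimal bandwidth, $\hat{B}_h$ and $\hat{\sigma}_h$ shrink at the same rate, so the interval's length decreases at the optimal rate.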