Abstract

The statistical minimax risk is a fundamental quantity used to assess the difficulty of statistical tasks. We consider two variants of traditional minimax theory that alleviate some of its deficiencies. The first variant, a constrained form of minimax theory, imposes computational constraints on the procedures and leads to minimax complexities that are achievable by computationally efficient methods. We illustrate this with an example of nonparametric estimation under a storage constraint, showing how the minimax risk varies with the number of bits allowed to represent the estimate. This establishes the Pareto-optimal minimax tradeoff between storage and risk under quantization constraints for Sobolev spaces. In the second variant, we extend traditional minimax analysis by introducing a localized form of minimax complexity for individual instances. The formulation is based on the ``hardest local alternative.'' As an example, we derive the local minimax complexity of stochastic optimization of convex functions, expressed in terms of a localized, computational analogue of the modulus of continuity. We show how this computational modulus of continuity can be calculated explicitly in concrete cases, and how it relates to the curvature of the function at the optimum. We also prove a superefficiency result demonstrating that the local minimax complexity is a meaningful benchmark, acting as a computational analogue of the Fisher information in statistical estimation. The nature and practical implications of these results are demonstrated in simulations.
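
To make the storage-constrained formulation concrete, the following display sketches one standard way to write a quantization-constrained minimax risk; the notation (a Sobolev ball $W^\alpha(C)$, sample size $n$, and bit budget $B$) is assumed here for illustration rather than taken from the thesis:

\[
R_n(B) \;=\; \inf_{\hat f \in \mathcal{Q}_B} \; \sup_{f \in W^{\alpha}(C)} \; \mathbb{E}_f \big\| \hat f - f \big\|_2^2,
\]

where $\mathcal{Q}_B$ denotes the class of estimators whose output can be encoded in at most $B$ bits. With $B$ unconstrained this reduces to the classical minimax risk, which for Sobolev balls scales as $n^{-2\alpha/(2\alpha+1)}$; the Pareto tradeoff described above tracks how $R_n(B)$ degrades as $B$ shrinks.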

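The ``hardest local alternative'' formulation of the second variant can be sketched as follows, again in notation assumed for illustration: for a convex function class $\mathcal{F}$, a target instance $f$, and a budget of $T$ stochastic oracle queries,

\[
R_T(f) \;=\; \sup_{g \in \mathcal{F}} \; \inf_{\hat x_T} \; \max_{h \in \{f,\, g\}} \; \mathbb{E}_h\big[\, h(\hat x_T) - \inf_x h(x) \,\big],
\]

so that $f$ is compared only against its single hardest alternative rather than against the entire class. A computational modulus of continuity in this spirit measures how far a small perturbation of the gradients can move the minimizer:

\[
\omega_f(\epsilon) \;=\; \sup_{g \in \mathcal{F}} \Big\{\, \big| x^*_f - x^*_g \big| \;:\; \big\| g' - f' \big\|_\infty \le \epsilon \,\Big\}.
\]

This is where the curvature enters: if $f(x) - f(x^*) \approx \lambda\,|x - x^*|^{\kappa}$ near the optimum, then solving $\kappa\lambda\,\delta^{\kappa-1} = \epsilon$ shows that an $\epsilon$-perturbation of the gradient can move the minimizer by roughly $\delta = (\epsilon/(\kappa\lambda))^{1/(\kappa-1)}$, so flatter functions (larger $\kappa$) have a larger modulus and are locally harder.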
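
The simulations themselves are not reproduced here, but the curvature phenomenon is easy to observe in a toy experiment: run stochastic gradient descent on $f(x) = |x|^{\kappa}$ with Gaussian gradient noise and compare the final distance to the optimum across curvature exponents. Everything below (the step-size schedule, noise level, and horizon) is an illustrative assumption, not the protocol used in the thesis.

    import numpy as np

    def sgd_error(kappa, T, sigma=1.0, x0=1.0, seed=0):
        """Distance |x_T - x*| after T steps of SGD on f(x) = |x|**kappa.

        The optimum is x* = 0; gradients are observed with N(0, sigma^2)
        noise. The step size 0.5/sqrt(t) is a generic choice made only
        for this illustration.
        """
        rng = np.random.default_rng(seed)
        x = x0
        for t in range(1, T + 1):
            grad = kappa * np.sign(x) * abs(x) ** (kappa - 1)
            x -= 0.5 / np.sqrt(t) * (grad + sigma * rng.standard_normal())
        return abs(x)

    # Flatter curvature at the optimum (larger kappa) should make the
    # minimizer harder to localize, mirroring the modulus-of-continuity
    # calculation above.
    for kappa in (2.0, 3.0, 4.0):
        errs = [sgd_error(kappa, T=10_000, seed=s) for s in range(20)]
        print(f"kappa={kappa:.0f}: mean distance to optimum = {np.mean(errs):.4f}")

One would expect the reported distance to grow with $\kappa$: near a flat optimum the gradient signal is drowned out by the noise over a wider region, which is exactly what the modulus of continuity quantifies.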