Abstract

This thesis considers estimation and statistical inference for high dimensional models with constrained parameter spaces. Owing to recent developments in data storage and computing technology, researchers routinely face high dimensional problems in practical applications ranging from health care and neural imaging to genetic studies. In a high dimensional problem, the number of unknown parameters is typically much larger than the sample size, which makes accurate estimation difficult. As a result, the parameter is usually assumed to satisfy certain constraints, such as a sparsity or low-rank constraint. In this thesis, we develop novel algorithms for accurate parameter estimation and statistical inference in several high dimensional models with constrained parameter spaces. Chapter 2 discusses asymptotic inference for high dimensional models under equality constraints. We propose a novel inference method that takes the equality constraints into consideration; the proposed estimator enjoys asymptotically smaller variance than the standard unconstrained method and is semiparametrically efficient. Chapter 3 considers high dimensional statistical inference with inequality constraints. We develop tools to test whether the parameters lie on the boundary of the constraint set, and the proposed testing procedure has greater power than standard procedures that ignore the constraints. Chapter 4 studies the recovery of matrices that are simultaneously low rank and row and/or column sparse. We propose a GDT (Gradient Descent with hard Thresholding) algorithm that converges linearly to a region within statistical error of an optimal solution. Chapter 5 considers the safe reinforcement learning problem. We construct a sequence of surrogate convex constrained optimization problems by replacing the nonconvex functions locally with convex quadratic functions obtained from policy gradient estimators, and we prove that the solutions to these surrogate problems converge to a stationary point of the original nonconvex problem.
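
To make the gradient-descent-with-hard-thresholding idea mentioned for Chapter 4 more concrete, the following is a minimal illustrative sketch in Python with NumPy of one simple flavor of the approach: recovering a matrix that is simultaneously low rank and row sparse from noisy linear measurements by alternating a gradient step with two hard-thresholding projections. This is not the thesis's GDT algorithm and carries no convergence guarantee; the function names, step size eta, rank r, and row-sparsity level s are hypothetical choices made here purely for illustration.

import numpy as np

def hard_threshold_rows(M, s):
    """Keep the s rows of M with the largest Euclidean norm; zero out the rest."""
    norms = np.linalg.norm(M, axis=1)
    keep = np.argsort(norms)[-s:]
    out = np.zeros_like(M)
    out[keep] = M[keep]
    return out

def rank_truncate(M, r):
    """Project M onto matrices of rank at most r via truncated SVD."""
    U, sing, Vt = np.linalg.svd(M, full_matrices=False)
    return U[:, :r] @ np.diag(sing[:r]) @ Vt[:r]

def gdt_sketch(X, y, shape, r, s, eta=0.1, n_iter=200):
    """Illustrative recovery of Theta from measurements y_i = <X_i, Theta> + noise.

    X: list of measurement matrices X_i with the given shape; y: observed responses.
    Each iteration takes a gradient step on the squared loss, then enforces row
    sparsity and low rank by hard thresholding.
    """
    Theta = np.zeros(shape)
    for _ in range(n_iter):
        preds = np.array([np.sum(Xi * Theta) for Xi in X])            # <X_i, Theta>
        grad = sum((p - yi) * Xi for p, yi, Xi in zip(preds, y, X)) / len(y)
        Theta = hard_threshold_rows(Theta - eta * grad, s)            # row sparsity
        Theta = rank_truncate(Theta, r)                               # low rank
    return Theta

The two projection steps after each gradient update are what enforce the row-sparse and low-rank structure simultaneously, rather than imposing only one of the two constraints.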
