

In this thesis, we discuss three positive semidefinite matrix estimation problems. We recast them by decomposing the semidefinite variable into symmetric factors, and investigate first-order methods for optimizing the transformed nonconvex objectives. The central theme of our methods is to exploit the structure of the factors for computational efficiency.

The first part of this thesis focuses on low-rank structure. We first consider a family of random semidefinite programs. We reformulate the problem as minimizing a fourth-order objective function, and propose a simple gradient descent algorithm. With $O(r^3 \kappa^2 n \log n)$ random measurements of a positive semidefinite $n \times n$ matrix of rank $r$ and condition number $\kappa$, our method is guaranteed to converge linearly to the global optimum.

Similarly, we address the rectangular matrix completion problem by lifting the unknown matrix to a positive semidefinite matrix in higher dimension, and optimizing a fourth-order objective over the factor using a simple gradient descent scheme. With $O(\mu r^2 \kappa^2 n \max(\mu, \log n))$ random observations of an $n_1 \times n_2$ $\mu$-incoherent matrix of rank $r$ and condition number $\kappa$, where $n = \max(n_1, n_2)$, the algorithm converges linearly to the global optimum with high probability.

Sparsity is the other structure we study. In the second part of this thesis, we consider the problem of computing the fastest mixing Markov chain on a given graph. The task is to choose the edge weights so that a function of the eigenvalues of the associated graph Laplacian matrix is minimized. We rewrite this problem so that the search space is over the sparse Cholesky factor of the associated graph Laplacian, and develop a nonconvex ADMM algorithm. Experiments are conducted to demonstrate the convergence of this approach.
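The factored gradient descent scheme described above for the random semidefinite programs can be sketched as follows. This is a minimal illustration, not the thesis implementation: the linear measurement operator (a list of symmetric matrices $A_i$ with observations $b_i = \langle A_i, X^\star \rangle$), the step size, and the random initialization are all assumptions made for the example.

```python
import numpy as np

def factored_gd(A_list, b, n, r, step=1e-3, iters=500):
    """Minimize f(U) = (1/m) * sum_i (<A_i, U U^T> - b_i)^2 by plain
    gradient descent over the factor U (an n-by-r matrix), so that
    U U^T stays positive semidefinite by construction.
    Hypothetical sketch: step size and iteration count are illustrative."""
    m = len(A_list)
    rng = np.random.default_rng(0)
    U = rng.standard_normal((n, r)) / np.sqrt(n)  # small random init (assumption)
    for _ in range(iters):
        M = U @ U.T
        res = np.array([np.sum(A * M) for A in A_list]) - b
        # gradient of (<A, U U^T> - b)^2 with respect to U is 2*res*(A + A^T) @ U
        grad = sum(2.0 * r_i * (A + A.T) @ U for r_i, A in zip(res, A_list)) / m
        U -= step * grad
    return U
```

The key point the abstract makes is that the iterate is the factor $U$, never the full $n \times n$ matrix variable, which is what yields the computational savings when the rank $r$ is small.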

