Abstract
In this thesis, we investigate a class of optimization problems and introduce a novel family of optimization algorithms. We begin with large-scale deterministic optimization problems formulated in a graph-based framework. To overcome the limitations of existing decomposition methods, namely the lack of global convergence guarantees and slow convergence rates, we propose a new decomposition algorithm. The proposed method ensures both local and global convergence, and we establish theoretically that its local convergence rate improves exponentially with the size of the overlap between subdomains. Experimental results on problems constrained by partial differential equations (PDEs) validate these theoretical findings. We then move from deterministic to stochastic settings by studying the nonstationary Linear Quadratic Regulator (LQR) problem. To address challenges of sample complexity and computational efficiency, we propose a scalable framework for estimating the unknown system parameters and approximately solving the resulting control problem. The proposed algorithm achieves significant reductions in sample complexity, estimation error, and computational cost, enhancing its applicability to large-scale systems. Building on the results for the fully observed LQR system, we turn to the partially observed LQR problem under stationary dynamics. To address the challenges of limited data and computational resources, we develop a scalable parameter-estimation strategy tailored to this setting. The proposed algorithm is well suited to large-scale systems, and its performance is supported by a complete theoretical analysis. In addition, we introduce a candidate parameter-estimation algorithm with the potential to generalize to a broader class of problems; its full theoretical analysis and comprehensive empirical validation are left for future work.