Abstract
Matrix and tensor operations play a vital role in diverse fields such as machine learning, numerical analysis, computational physics, and optimization. As data sizes continue to grow and high-dimensional problems become more challenging, low-complexity matrix and tensor approximations are increasingly crucial for obtaining efficient and robust numerical solutions.
This thesis focuses on two efficient low-complexity formats: hierarchical matrices for matrix compression and the tensor-train (TT) format for high-dimensional tensor compression. Chapter 3 utilizes hierarchical matrix techniques to propose a scalable algorithm for traditional tasks in Gaussian process (GP) applications, including maximum likelihood estimation (MLE) for parameter identification, confidence-interval computation for uncertainty quantification, and regression. In Chapters 4 and 5, we propose a novel approach for compressing Gibbs-Boltzmann distributions from statistical mechanics into the TT format, together with its downstream application to computing committor functions.
Low-complexity formats provide compact representations of matrices and tensors by exploiting their numerical structure. This reduces memory usage and enables fast algorithms that take advantage of the inherent low-rankness of the problem and data. Numerical evidence is provided to demonstrate the scaling of the algorithms as well as the trade-offs between efficiency and accuracy.