The rapid development of technology has enabled the collection of enormous amounts of data and the construction of increasingly complex machine learning models whose internal mechanisms are often difficult, if not impossible, to analyze directly. This complexity poses major challenges for reliable uncertainty quantification, which is essential for the responsible deployment of these models in real-world applications. This thesis develops statistical theory and methods for assumption-lean uncertainty quantification, leveraging two recurring technical perspectives, exchangeability and algorithmic stability, throughout the analysis. We study three representative uncertainty quantification tasks: predictive inference, coincidence detection, and algorithmic stability. Each corresponds to an abstraction that arises when deploying reliable machine learning tools to support downstream decision-making.

The first part studies distribution-free predictive inference, focusing on the framework of conformal prediction. We establish training-conditional coverage guarantees for conformal methods under algorithmic stability conditions, strengthening the marginal coverage guarantees that conformal methods typically provide. We also develop computationally efficient methods that leverage model selection to obtain more informative prediction sets. The second part studies coincidence detection in time series, where we provide finite-sample guarantees for false positive rate control of time-shifting procedures under weak assumptions that permit strong temporal dependence within each data stream. The third part addresses algorithmic stability in ranking problems, proposing novel ranking operators that promote stable rankings by explicitly accounting for ambiguity in the ranking scores.

Together, these results contribute to the broader goal of developing statistically rigorous, assumption-lean tools for uncertainty quantification in modern data analysis.