The primary reason for performing compiler optimizations before running the program is that they are "free": the analysis and transformations involved in applying them add no overhead to program execution. For optimizations that are inherently or provably beneficial, there is no reason to perform them dynamically at runtime (or online). But many optimizations are situational or speculative, because their profitability depends on assumptions about dynamic information that cannot be fully determined statically. To date, the main use of online adaptive optimization has been in language implementations that would otherwise have high baseline overheads, normally due to the use of an interpreter or a large degree of dynamically determined behavior (e.g., dynamic types). The optimizations applied in online adaptive systems are those that, if the profiling data is a good predictor of future behavior, will certainly be profitable according to the optimization's cost model. But many compiler optimizations have unreliable cost models!

In this dissertation, I describe an adaptive optimization system that uses a trial-and-error search over the available compiler optimization flags to automatically tune them for the program with respect to its dynamic environment. This LLVM-based system is unique in that it is fully online: all profiling and adaptation happen while the program is running in production, thanks to newly developed techniques to manage the cost of tuning. For the first time, users can easily take advantage of automatic tuning specific to the program's production workload by enabling a single option when initially compiling the program. The system transparently delivers up to a 2x speedup on a number of C/C++ benchmarks relative to the best optimizations available in a production-grade compiler.
Furthermore, the system adapts to future workloads, because the automatic tuning is continuous.
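The trial-and-error tuning described above can be sketched in spirit as a randomized search over flag configurations. This is a minimal illustration only: the flag names and the `measure` function below are hypothetical stand-ins, not the actual system's interface, and the real system measures live production runtime rather than a synthetic cost model.

```python
import random

# Hypothetical flag set; in the real system the candidate space is the
# compiler's optimization flags, and "measure" is the observed runtime of
# the production workload under live profiling.
FLAGS = ["unroll-loops", "vectorize", "inline-aggressive", "licm", "gvn"]

def measure(config):
    """Synthetic stand-in for a runtime measurement (lower is better)."""
    time = 100.0
    if config["vectorize"]:
        time -= 20.0          # pretend vectorization helps this workload
    if config["licm"]:
        time -= 10.0          # pretend loop-invariant code motion helps too
    if config["inline-aggressive"]:
        time += 5.0           # pretend aggressive inlining hurts (cache pressure)
    return time + random.uniform(-1.0, 1.0)  # measurement noise

def tune(trials=50, seed=0):
    """Trial-and-error search: sample random flag subsets, keep the best."""
    rng = random.Random(seed)
    best_cfg, best_time = None, float("inf")
    for _ in range(trials):
        cfg = {f: rng.random() < 0.5 for f in FLAGS}  # random on/off per flag
        t = measure(cfg)
        if t < best_time:
            best_cfg, best_time = cfg, t
    return best_cfg, best_time

best_cfg, best_time = tune()
```

Because the search only compares observed measurements, it needs no reliable analytical cost model; and because sampling never stops, rerunning `tune` against a shifted workload naturally re-adapts the flag choice, which is the continuous-tuning behavior noted above.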