Abstract
As artificial intelligence systems grow increasingly powerful and influential, their design often overlooks the incentive structures embedded in the environments where they operate. This misalignment can yield unintended and harmful outcomes, such as filter bubbles in recommendation systems and agency problems involving content creators or gig workers. This dissertation frames such phenomena through the unifying lens of strategic alignment in AI, a concept adapted from business management that emphasizes harmonizing AI behavior with the interests of all stakeholders to achieve collectively desirable outcomes. To address these challenges, this work develops a principled foundation at the intersection of machine learning and algorithmic game theory, advancing both modeling frameworks and algorithmic solutions. We introduce incentive-aware learning algorithms and data-driven mechanisms with statistical and computational efficiency guarantees, aiming to make AI systems more robust and responsible in strategic, multi-agent environments.