Abstract
Artificial Intelligence was once seen as a net good: an engine of progress, efficiency, and creativity. That view no longer holds. Today, the unchecked rise of generative AI inflicts lasting damage on society, from the erosion of authenticity and truth, to the flood of AI-generated content destabilizing creative markets, to the disruption of education systems and the spread of mass surveillance. The scale and speed of these harms have outpaced legal systems and regulatory responses. Unfortunately, existing defenses remain insufficient to address them. Legal frameworks are slow-moving and jurisdiction-bound. Technical defenses often assume a level of model transparency or developer cooperation that does not exist in practice. Passive mechanisms, such as opt-outs or robots.txt, are frequently ignored. Worse, these failures reinforce a dangerous narrative: that AI harm is the cost of progress.

This dissertation challenges that premise. It shows that generative AI pipelines contain vulnerabilities that can be leveraged to build defenses against AI harms. I introduce a new class of protections based on adversarial machine learning that ``cloak'' user data before it is used for training. These cloaks distort the data in ways that mislead AI models, preventing them from learning accurate information. Building on this intuition, I developed three defense systems: Fawkes, which protects personal identities from facial recognition models; Glaze, which enables artists to protect their unique styles from AI mimicry; and Nightshade, which injects poisoned samples into training data to deter unauthorized AI training. All three are designed to function in adversarial settings without requiring cooperation from AI companies, offering practical and robust protection. These tools have been adopted by millions of users worldwide and have informed broader conversations around AI regulation.

Looking ahead, this dissertation offers not just a set of defensive tools, but a perspective on how to reframe the AI ecosystem: a shift from reactive, passive methods to proactive, adversarial techniques that embed agency at the data level. It points toward a more balanced and sustainable AI ecosystem in which the direction of AI development is determined not solely by major tech companies, but by a diverse set of stakeholders.
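To make the cloaking intuition concrete, below is a minimal, illustrative sketch of a feature-space cloaking optimization, written in PyTorch. It is not the published Fawkes, Glaze, or Nightshade algorithm; the function name cloak, the decoy image target, the frozen feature_extractor, and the imperceptibility budget eps are hypothetical placeholders for this example.

    import torch
    import torch.nn.functional as F

    def cloak(image, target, feature_extractor, eps=0.03, steps=100, lr=0.01):
        """Compute a small perturbation (a "cloak") that pulls an image's
        feature representation toward that of a decoy target, so that a
        model trained on the cloaked image learns misleading features.

        image, target: tensors of shape (1, C, H, W) with values in [0, 1].
        feature_extractor: a frozen network mapping images to embeddings.
        eps: L-infinity bound keeping the cloak visually imperceptible.
        """
        delta = torch.zeros_like(image, requires_grad=True)
        optimizer = torch.optim.Adam([delta], lr=lr)
        with torch.no_grad():
            target_feat = feature_extractor(target)

        for _ in range(steps):
            optimizer.zero_grad()
            cloaked = (image + delta).clamp(0, 1)
            # Pull the cloaked image's embedding toward the decoy's.
            loss = F.mse_loss(feature_extractor(cloaked), target_feat)
            loss.backward()
            optimizer.step()
            # Project the perturbation back into the imperceptibility budget.
            with torch.no_grad():
                delta.clamp_(-eps, eps)

        return (image + delta).clamp(0, 1).detach()

The deployed systems go well beyond this sketch, for example by adding perceptual similarity constraints and robustness measures, but the core idea is the same: a bounded, data-level perturbation that misleads models trained on the result.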