The current rise of Machine Learning (ML) and the proliferation of ML-based algorithms in modern technology have led to renewed speculation that Artificial Intelligence (AI) could soon match or exceed human cognitive capacity. The ability of ML-based systems to learn, combined with the increasing proficiency and capacity of “deep” ML algorithms, lends credence to this speculation and gives rise to imagined futures—some promising, some apocalyptic—in which machines can think like humans. Many ML algorithms operate on linguistic data. Digital assistants such as Siri and Alexa cognize and enact users’ commands, while Google returns remarkably relevant results from a few keywords. We increasingly read digital content and information generated by algorithms, such as the generated advertisements that appear in web browsers. ML algorithms are thus becoming ever more pervasive and effective both at reading us and at determining what is best for us to read.

The sophistication of modern ML algorithms calls into question the boundary between algorithmic and human cognition, and the proliferation of ML in modern devices is forming a technological substrate that reads and writes us into the world at timescales below experiential perception. As it stands now, however, even the most sophisticated ML algorithms cannot approach human proficiency in general reading and writing. And yet some ML algorithms can perform certain specialized forms of reading and writing remarkably well. At this point, I do not believe either the technical or the humanistic communities have developed the critical methodology necessary to formulate when and why these algorithms fail or succeed. This is a crucial discussion to have both inside and outside the academy. As ML grows quantitatively more sophisticated and proficient, it becomes increasingly important to understand and articulate the qualitative gulf between human and algorithmic cognition.
Reading Algorithms will thus attempt to describe a methodology that deploys the algorithms themselves as tools to demarcate the evolving boundary between qualitatively different modes of cognition. In the pages that follow, a humanistic understanding of interpretive reading will be brought to bear to highlight the qualitative differences of ML-based reading and writing. But crucially, we will also work in the opposite direction, taking a technical understanding of how ML algorithms consume and cognize textual data as an alternate “language” in which to formulate abstract literary-critical concepts.