Data scientists who develop predictive policing tools play an integral and largely unexamined role in the relationship between the police and efforts to increase the surveillance of Black and brown neighborhoods in Chicago. My project draws on interviews with data scientists affiliated with both private corporations and research institutions to evaluate how they understand the value of predictive policing technology despite freely acknowledging significant concerns about both its technical efficacy and its potential to worsen the unconstitutional and unethical over-policing of urban neighborhoods. I find that data scientists distanced themselves from the “dirty work” of policing by first positing their algorithms as perfectly objective and race-neutral despite relying on indicators strongly correlated with patterns of segregation, disinvestment, and over-policing. Data scientists mediated their anxieties about their roles in increasingly invasive forms of policing by confining their work to the “back-end.” Despite acknowledging that their work centered on making policing more “convenient,” they cast police officers as solely responsible for biased outcomes. Moreover, data scientists expressed confidence in interdisciplinary academic “vetting” and “open-source” initiatives as forms of accountability. Thus, data scientists not only positioned their tools as inherently neutral resources onto which police officers projected their own biases but also deferred larger questions of ethical responsibility and oversight to others. The influence and legitimacy conferred by data scientists are inextricable from how predictive policing technologies are being used to obscure racist policing practices under the purported infallibility of statistical analysis.
Their self-isolation from the Chicago policing apparatus reinforces the assumption that data and statistical tools can neither embody nor amplify existing forms of racism.