Abstract
Social media platforms have turned to AI content moderation in response to increased scrutiny over the content they circulate, especially given that content's potential for social and political influence. Algorithmic reduction, colloquially known as ‘shadowbanning,’ is attractive to platforms because of its opacity, which lets them evade responsibility and creates a Foucauldian relationship of platform power and creator discipline. Platforms therefore intentionally manufacture opacity around algorithmic reduction. Instagram in particular recently announced changes to its algorithmic handling of political content that, in part because of their vague definitions, have provoked outrage over the platform's approach to free expression. This work seeks to shed light on Instagram's intentionally opaque policies. To do so, I define a ‘model drift’ test that measures change in the ability of a neural network model to predict post like counts. I compare model drift to drift in the underlying data in order to separate algorithmic drift from behavioral drift. I find evidence that the February 2024 policy change likely resulted in algorithmic reduction of content from politicians. More strikingly, I find that pro-Palestine and Zionist content have similar rates of engagement, yet only pro-Palestine content shows signs of algorithmic reduction, beginning as early as one week after October 7, 2023. This research serves as a proof of concept demonstrating that intentionally opaque algorithmic reduction can still be investigated. Future work that brings transparency to these practices is essential for understanding the otherwise guarded workings of algorithmic moderation.
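The logic of the ‘model drift’ test described above can be illustrated with a minimal sketch, not the paper's actual pipeline: fit an engagement predictor on a baseline window of posts, then track how its error changes on later windows relative to a simple measure of change in the data itself. All names, features, and the simulated data below are hypothetical, and a gradient-boosted regressor stands in for the neural network used in the study.

```python
# Minimal sketch of a "model drift" style test (hypothetical names; not the paper's code).
# Idea: a jump in model error ("model drift") on a later window, without a matching
# jump in the feature distribution ("data drift"), is the kind of signal the abstract
# attributes to algorithmic reduction rather than behavioral change.

import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(0)

def make_window(n, reduction=0.0):
    """Simulate posts with two toy features (follower count, hashtag count).
    `reduction` crudely models suppressed reach by scaling down like counts."""
    followers = rng.lognormal(8, 1, n)
    hashtags = rng.integers(0, 15, n)
    likes = 0.05 * followers * (1 + 0.02 * hashtags) * rng.lognormal(0, 0.3, n)
    likes *= (1 - reduction)
    X = np.column_stack([np.log1p(followers), hashtags])
    return X, likes

# Baseline window: train the engagement model here and record its baseline error.
X_base, y_base = make_window(2000)
model = GradientBoostingRegressor().fit(X_base, np.log1p(y_base))
base_err = mean_absolute_error(np.log1p(y_base), model.predict(X_base))

# Later windows: one unchanged, one with simulated reduction applied.
for label, reduction in [("pre-change window", 0.0), ("post-change window", 0.5)]:
    X_w, y_w = make_window(2000, reduction=reduction)
    model_drift = mean_absolute_error(np.log1p(y_w), model.predict(X_w)) - base_err
    # Data-drift proxy: shift in feature means relative to the baseline window.
    data_drift = np.abs(X_w.mean(axis=0) - X_base.mean(axis=0)).mean()
    print(f"{label}: model drift = {model_drift:.3f}, data drift = {data_drift:.3f}")
```

In this toy setup the post-change window shows elevated model drift while data drift stays flat, which is the qualitative pattern the test is designed to surface; the study's actual features, model, and drift measures may differ.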