Abstract

As online presence and engagement in social media become more common, they also become more plagued with toxicity (\cite{cheng2015antisocial, rajadesingan2020quick, wulczyn2017ex, guberman2016quantifying, bogolyubova2018dark}). Social rewards, like votes, are designed to promote productive content, but they may have unintentionally helped toxic content spread across social networks and motivated its producers to keep behaving uncivilly, particularly in political spaces (\cite{wang2016catching, frimer2023incivility}). Additionally, studies show that toxic behavior can have widespread negative effects, exacerbated by features such as recommendation algorithms on social media platforms (\cite{kim2021distorting, shmargad2022social, brady2023algorithm}). Understanding the interaction between social rewards and toxicity is therefore crucial. Analyzing over 500 million comments from more than 400 political subreddits on Reddit, I found strong evidence of a coupled relationship between social rewards and toxicity in online political discussions. Highly voted comments tend to be more toxic, and users appear to adjust the toxicity of their comments based on the scores their past comments received: the majority increase the toxicity of their future comments after receiving higher scores. Furthermore, certain subreddit characteristics strongly predict community-level variation in users' reward learning for toxicity. This work offers insight into online toxicity from a reinforcement learning perspective and highlights how we might tackle toxicity by paying attention to users who are most susceptible to the coupled rewards-toxicity dynamic.
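To make the reward-learning claim concrete, the following is a minimal illustrative sketch, not the paper's actual pipeline, of how one could test whether the score a user's previous comment received predicts the change in toxicity of that user's next comment. The column names (author, created_utc, score, toxicity) and the toy rows are hypothetical placeholders, not data from the study.

```python
import pandas as pd
from scipy.stats import spearmanr


def reward_toxicity_coupling(comments: pd.DataFrame) -> float:
    """Spearman correlation between a comment's score and the toxicity
    change in the same author's next comment (hypothetical schema)."""
    df = comments.sort_values(["author", "created_utc"]).copy()
    # Toxicity of each author's next comment, aligned within author.
    df["next_toxicity"] = df.groupby("author")["toxicity"].shift(-1)
    df["toxicity_change"] = df["next_toxicity"] - df["toxicity"]
    # Drop each author's final comment, which has no follow-up.
    pairs = df.dropna(subset=["toxicity_change"])
    rho, _ = spearmanr(pairs["score"], pairs["toxicity_change"])
    return rho


# Toy usage with made-up rows (placeholder values only):
toy = pd.DataFrame({
    "author":      ["a", "a", "a", "b", "b"],
    "created_utc": [1, 2, 3, 1, 2],
    "score":       [5, 40, 2, 1, 30],
    "toxicity":    [0.10, 0.15, 0.35, 0.20, 0.18],
})
print(reward_toxicity_coupling(toy))
```

Under this kind of test, a positive correlation would be consistent with the reward-learning pattern described above: comments that earn higher scores tend to be followed by more toxic comments from the same author.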
