Abstract

On the popular Chinese video-sharing platform Bilibili, users can post live comments that float across video streams or broadcasts, an interaction known as "bullet chat". This real-time commenting system is actively moderated by an automated scoring mechanism, the "censor score", which assigns each comment a value from 1 to 10 reflecting its likelihood of being censored based on content appropriateness. This study investigates how moderation on Bilibili varies with user identity, the emotional sentiment of comments, and topic sensitivity. By analyzing publicly available moderation data and applying large-scale sentiment classification, we identify distinct moderation patterns. The severity of penalties differs substantially depending on the type of violation and on whether the penalty was imposed by the system or by a human moderator. Moreover, accounts with different certifications, such as government entities versus individual influencers, exhibit significantly different censor-score trends even within the same discussion topics. Institutional accounts generally received higher censor scores, suggesting that moderation standards vary with user identity and content context.
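The core comparison the abstract describes, aggregating censor scores by account certification and by who imposed the penalty, can be sketched as below. The field names (`certification`, `detector`, `censor_score`) and the toy records are illustrative assumptions for exposition, not the paper's schema or data.

```python
from collections import defaultdict

def mean_scores_by(comments, key):
    """Group comment records by `key` and return the mean censor score per group."""
    sums, counts = defaultdict(float), defaultdict(int)
    for c in comments:
        sums[c[key]] += c["censor_score"]
        counts[c[key]] += 1
    return {k: sums[k] / counts[k] for k in sums}

# Toy records (assumed schema): certification type, detection source,
# and a censor score on the paper's 1-10 scale.
comments = [
    {"certification": "government", "detector": "system", "censor_score": 8},
    {"certification": "government", "detector": "human",  "censor_score": 9},
    {"certification": "influencer", "detector": "system", "censor_score": 4},
    {"certification": "influencer", "detector": "human",  "censor_score": 6},
]

by_cert = mean_scores_by(comments, "certification")      # e.g. government vs. influencer
by_detector = mean_scores_by(comments, "detector")       # system- vs. human-imposed
```

A real analysis would run this kind of grouping over the full moderation dataset and test whether the between-group differences are statistically significant.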
