Abstract

Large Language Models (LLMs) have rapidly evolved from specialized tools into everyday conversational partners. Rather than following the technical tradition of asking what LLMs can do, this study explores empirically how people actively frame their encounters with them. I drew 440k English-language first-turn prompts from the open-access WildChat corpus of human-ChatGPT interaction, embedded each into a 384-dimensional semantic space (MiniLM-L6-v2), clustered the landscape with unsupervised learning techniques, and interpreted 60 distinct topic clusters through qualitative close reading. Rather than categorizing discrete “use cases,” this process typologized the prompts along a “graded spectrum”: an interpretable continuum from highly instrumental queries to deeply expressive or esoteric engagements. A second, methodological finding is that apparent topical “coherence” often reflects authorship concentration rather than genuine consensus: a small cadre of “power users” contributed thematically narrow yet voluminous exchanges, inflating semantic density through repeated or industrialized use. This outsized footprint can skew the usage patterns visible in open-source chat-log datasets, leading analysts to underestimate the genuine breadth and interweaving of topics while overestimating the prevalence of anthropomorphism and niche role-playing. The findings complicate the simplistic dichotomy between utilitarian and anthropomorphic uses of AI. I showed that many users approach ChatGPT not as a neutral tool but as a mutable social actor, strategically anthropomorphized within local narrative frames. The dataset also reveals a shifting ecology of motives that is prominent yet under-discussed in existing studies, ranging from affective self-therapy to workplace automation, along with the hidden labour (“playbor”) of heavy users who quietly shape public training data and, by extension, future model behavior. These findings foreground the social and emotional stakes of LLM use, and they invite designers to treat usage data with caution while honoring the plurality of ways that humans want to converse with, rather than merely through, their machines.
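
As a concrete illustration of the embed-then-cluster pipeline sketched in the abstract, the following minimal Python example embeds first-turn prompts with MiniLM-L6-v2 and partitions them into 60 clusters. This is a sketch under stated assumptions, not the study's actual code: the abstract does not name the clustering algorithm (KMeans stands in here for the unspecified "unsupervised learning techniques"), and the input file first_turn_prompts.txt is hypothetical.

from sentence_transformers import SentenceTransformer
from sklearn.cluster import KMeans

# Load first-turn prompts, one per line (hypothetical file; the study's
# actual data preparation is not described in the abstract).
with open("first_turn_prompts.txt", encoding="utf-8") as f:
    prompts = [line.strip() for line in f if line.strip()]

# Embed each prompt into a 384-dimensional semantic space with MiniLM-L6-v2.
model = SentenceTransformer("sentence-transformers/all-MiniLM-L6-v2")
embeddings = model.encode(prompts, normalize_embeddings=True, show_progress_bar=True)

# Partition the embedding landscape into 60 topic clusters; KMeans is an
# assumed stand-in for the abstract's "unsupervised learning techniques".
kmeans = KMeans(n_clusters=60, random_state=0, n_init="auto")
labels = kmeans.fit_predict(embeddings)

# Surface a few prompts per cluster as raw material for qualitative close reading.
for cluster_id in range(3):
    members = [p for p, lab in zip(prompts, labels) if lab == cluster_id]
    print(f"Cluster {cluster_id} ({len(members)} prompts):")
    for p in members[:3]:
        print("  -", p[:80])

A cluster-level sample like this also makes the authorship-concentration finding checkable: pairing each prompt with its (anonymized) author ID would reveal whether a cluster's apparent coherence comes from many users or from a handful of power users.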
