Abstract

In an era where online platforms increasingly mediate public discourse, enhancing the quality of democratic deliberation through technology is crucial. While much of the work on online deliberation platforms has focused on public and group deliberation, the "internal reflection" stage has received less attention, despite being a prerequisite for informed public engagement. The advent of Large Language Models (LLMs) offers new possibilities for creating accessible and inclusive platforms for this purpose. This thesis presents the design and preliminary implementation of an AI agent intended to facilitate internal reflection on political topics by employing a Socratic Method-inspired approach to generating follow-up questions. Through the agent development process, the project identifies key challenges, including the absence of a gold standard for constructive follow-up questions and noticeable discrepancies in value judgments between AI and humans. To address these issues, we conducted an annotation experiment with both human and LLM annotators, assessing the effectiveness of various questioning strategies across different sources and lengths. The findings reveal preferences and discrepancies in question selection both between and within human and AI annotators, underscoring the challenges in designing AI interactions that effectively mimic human reflective dialogue. Human annotators prefer human-generated questions to AI-generated ones, emphasizing the need for human involvement in AI development to better capture the nuanced qualities of human interaction in complex discourse settings. This research highlights the potential of reflective AI to enhance democratic participation and points to future directions for improving AI-driven interactions for deliberation.
