Abstract
As artificial intelligence (AI) becomes increasingly integrated into healthcare, understanding the psychological mechanisms that shape trust and user acceptance is critical. This study investigates how anthropomorphism, AI autonomy (decision-maker vs. assistant), perceived stakes (high vs. low), and individual differences in perceived uniqueness influence users’ trust, use intention, and willingness to pay (WTP) for healthcare chatbots. A 2 (autonomy: assistant vs. decision-maker) × 2 (anthropomorphism: high vs. low) × 2 (stakes: low vs. high; within-subjects) mixed experimental design was employed, in which 258 participants engaged in free-form consultations with an AI chatbot simulating dermatological scenarios. Anthropomorphism significantly increased use intention under high-stakes conditions but had no consistent effect on WTP. Autonomy level did not significantly affect any outcome. Perceived uniqueness predicted trust in low-stakes contexts but did not moderate the main effects. Mediation models showed that trust consistently predicted both WTP and use intention but did not significantly mediate the effects of the AI design features. Thematic analysis of open-ended responses identified four trust-enhancing factors: clarity of information, perceived professional oversight, emotional reassurance, and chatbot responsiveness. These findings suggest that anthropomorphic cues and contextual risk shape user trust, but that this trust does not uniformly translate into economic or behavioral commitment. The results underscore the importance of designing AI healthcare systems that integrate emotionally intelligent, personalized interactions, particularly in high-stakes applications, while also acknowledging the limits of trust as a standalone driver of user acceptance.