Psychological Responses to and Acceptance of Medical AI: Comparing Human Provider, AI Provider, and their Collaboration


Abstract

Advances in medical artificial intelligence (AI) have accelerated the development of diagnostic tools designed to support or replace human clinicians, yet public acceptance of these systems remains uncertain. This dissertation examines how psychological responses vary depending on the type of diagnostic provider: Human, AI, or a Human + AI collaboration. In a between-groups experiment inspired by Longoni et al. (2019), participants viewed a skin-cancer screening scenario involving one of the three providers. Willingness to use the service differed only between the Human and AI conditions, but substantial differences emerged in underlying psychological reactions. Trust and confidence followed a consistent pattern: highest in the Human condition, intermediate in Human + AI, and lowest in AI, indicating that distrust of AI persists and that collaboration only partially restores the trust associated with human providers. Diagnostic-worry analyses revealed stronger worry about under-diagnosis (a miss) than over-diagnosis (a false alarm), particularly when AI was involved. Participants were also more likely to seek a second opinion when the provider was AI or Human + AI rather than Human, and when the diagnosis was cancer-positive. These findings demonstrate that acceptance of medical AI is multifaceted and that merely adding a human reviewer may not sufficiently address patients’ concerns. Targeted communication strategies that address provider-specific worries, such as worry about under-diagnosis when AI contributes to the diagnosis, may be necessary to support effective and trustworthy integration of AI into medical decision-making.

Description

Thesis (Ph.D.)--University of Washington, 2025
