Auditing the Reasoning Processes of Medical-Image AI


Authors

DeGrave, Alex John


Abstract

While medical artificial intelligence (AI) systems are achieving regulatory approval and clinical deployment across the world, the reasoning processes of these systems remain opaque to all stakeholders, including physicians, patients, regulators, and even the developers of the systems themselves. Because the modern wave of medical AI relies on automatic learning of statistical patterns from large datasets, via machine-learning techniques such as neural networks, these systems are prone to learning unexpected and potentially undesirable patterns, which may lead to pathological behavior in deployment. Here, we investigate the 'reasoning processes' of medical-image AI systems; that is, we form a human-understandable, medically grounded conception of the mechanisms by which they generate predictions, developing new tools and frameworks as necessary along the way. Through these investigations, we uncover severe flaws in the reasoning of medical AI systems, and we build the first thorough, medically grounded picture of the reasoning processes of machine-learning-based medical-image AI.

Description

Thesis (Ph.D.)--University of Washington, 2024
