Title: Auditing the Reasoning Processes of Medical-Image AI
Author: DeGrave, Alex John
Advisor: Lee, Su-In
Date: 2024-04-26
File: DeGrave_washington_0250E_26532.pdf
URI: http://hdl.handle.net/1773/51331
Type: Thesis (Ph.D.)--University of Washington, 2024
Format: application/pdf
Language: en-US
Keywords: artificial intelligence; dermatology; explainable AI; machine learning; radiology; medicine; medical imaging; computer science; computer science and engineering

Abstract: While medical artificial intelligence (AI) systems are achieving regulatory approval and clinical deployment across the world, the reasoning processes of these systems remain opaque to all stakeholders, including physicians, patients, regulators, and even the developers of the systems themselves. Because modern medical AI systems rely on automatic learning of statistical patterns from large datasets, via 'machine learning' techniques such as neural networks, they are prone to learning unexpected and potentially undesirable patterns, which may lead to pathological behavior in deployment. Here, we investigate the reasoning processes of medical-image AI systems; that is, we form a human-understandable, medically grounded conception of the mechanisms by which they generate predictions, developing new tools and frameworks as necessary along the way. Through these investigations, we uncover severe flaws in the reasoning of medical AI systems, and we build the first thorough, medically grounded picture of the reasoning processes of machine-learning-based medical-image AI.