Three Maxims for Developing Human-Centered AI for Decision Making

dc.contributor.advisor: Weld, Daniel S.
dc.contributor.author: Bansal, Gagan
dc.date.accessioned: 2022-01-26T23:23:25Z
dc.date.issued: 2022-01-26
dc.date.submitted: 2022
dc.description: Thesis (Ph.D.)--University of Washington, 2022
dc.description.abstract: We focus on AI-advised decision making, where AI systems (e.g., classifiers) are deployed to assist users in making better decisions (e.g., in healthcare, finance, and criminal justice). While the dominant development practice deploys the most "accurate" autonomous AI to assist users, we argue that in order for AI to augment users, we should shift the focus of research to developing human-centered AI (HCAI). HCAI systems have additional requirements atop those of autonomous AI: they are not just capable but also trustworthy and dependable, they communicate and coordinate their reasoning with users, and they complement users' expertise. We specifically develop and study three relevant maxims for developing HCAI systems: 1) help users understand when to trust AI recommendations, 2) preserve users' mental model of the AI's trustworthiness, and 3) train the AI to optimize for team performance. Through experiments on various tasks that involve AI-assisted decision making, we show that: a) contrary to expectations, current XAI methods may be insufficient for helping users understand when to rely on AI recommendations; b) it is easier for users to create a mental model of an AI's trustworthiness when its error boundary (i.e., the regions where it errs) is simple and deterministic; c) the current practice of updating AI systems (e.g., to improve accuracy) can result in models that violate user trust, e.g., by introducing errors on examples on which the system was previously correct; however, we also show that it is possible to create models that preserve trust by considering the compatibility of updates during the training process; and d) for a simple setting, we formally show that by accommodating the user's mental model in the AI's training process, we can train a model that achieves higher human-AI team performance than the team performance achieved with the most accurate AI.
Finally, we discuss open problems and future work in developing HCAI, including enabling explanatory dialogs (as opposed to static, one-shot explanations) and enabling user control of AI behavior. Overall, the problems and results in this thesis show the richness and interdisciplinary nature of the challenge of developing human-centered AI.
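The abstract's point (c), that an update should avoid introducing new errors on examples the previous model handled correctly, can be made concrete with a simple compatibility metric. The sketch below is an illustration of that idea only, not code from the thesis; the function name and the toy labels are assumptions:

```python
# Illustrative sketch: score an update by the fraction of examples the old
# model classified correctly that the new model also classifies correctly.
# A score of 1.0 means the update fixed errors without breaking anything.

def backward_compatibility(y_true, old_preds, new_preds):
    """Fraction of previously-correct examples on which the update stays correct."""
    prev_correct = [(t, n) for t, o, n in zip(y_true, old_preds, new_preds) if o == t]
    if not prev_correct:
        return 1.0  # vacuously compatible: the old model was never correct
    return sum(n == t for t, n in prev_correct) / len(prev_correct)

# Toy example: the update fixes index 2 but introduces an error at index 3,
# so it is only 75% compatible with the user's existing mental model.
y_true    = [0, 1, 1, 0, 1]
old_preds = [0, 1, 0, 0, 1]   # old model correct on indices 0, 1, 3, 4
new_preds = [0, 1, 1, 1, 1]   # new model breaks index 3
print(backward_compatibility(y_true, old_preds, new_preds))  # → 0.75
```

A training process that preserves trust, in the abstract's sense, would penalize updates that lower this score even when overall accuracy improves.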
dc.embargo.lift: 2023-01-26T23:23:25Z
dc.embargo.terms: Restrict to UW for 1 year -- then make Open Access
dc.format.mimetype: application/pdf
dc.identifier.other: Bansal_washington_0250E_23745.pdf
dc.identifier.uri: http://hdl.handle.net/1773/48233
dc.language.iso: en_US
dc.rights: CC BY-NC-ND
dc.subject: Computer science
dc.subject.other: Computer science and engineering
dc.title: Three Maxims for Developing Human-Centered AI for Decision Making
dc.type: Thesis

Files

Original bundle

Name: Bansal_washington_0250E_23745.pdf
Size: 6.14 MB
Format: Adobe Portable Document Format