Three Maxims for Developing Human-Centered AI for Decision Making
| Field | Value |
| --- | --- |
| dc.contributor.advisor | Weld, Daniel S. |
| dc.contributor.author | Bansal, Gagan |
| dc.date.accessioned | 2022-01-26T23:23:25Z |
| dc.date.issued | 2022-01-26 |
| dc.date.submitted | 2022 |
| dc.description | Thesis (Ph.D.)--University of Washington, 2022 |
| dc.description.abstract | We focus on AI-advised decision making, where AI systems (e.g., classifiers) are deployed to help users make better decisions (e.g., in healthcare, finance, and criminal justice). While the dominant development practice deploys the most "accurate" autonomous AI to assist users, we argue that for AI to *augment* users, research should shift its focus to developing human-centered AI (HCAI). HCAI systems have additional requirements atop those of autonomous AI: they are not just capable but also trustworthy and dependable; they communicate and coordinate their reasoning with users; and they complement users' expertise. We develop and study three maxims for developing HCAI systems: (1) help users understand *when to trust* AI recommendations, (2) preserve users' mental models of the AI's trustworthiness, and (3) train the AI to optimize for team performance. Through experiments on various tasks involving AI-assisted decision making, we show that (a) contrary to expectations, current XAI methods may be insufficient for helping users understand when to rely on AI recommendations; (b) it is easier for users to form a mental model of an AI's trustworthiness when its error boundary (i.e., the regions where it errs) is simple and deterministic; (c) the current practice of updating AI systems (e.g., to improve accuracy) can produce models that violate user trust, e.g., by introducing errors on examples the system previously got right, yet it is possible to build models that preserve trust by accounting for the compatibility of updates during training; and (d) in a simple setting, we formally show that accommodating the user's mental model in the AI's training process yields a model with higher human-AI team performance than the most accurate AI achieves (illustrative sketches of (c) and (d) follow the metadata table). Finally, we discuss open problems and future work in developing HCAI, including enabling explanatory dialogs (as opposed to static, one-shot explanations) and enabling user control of AI behavior. Overall, the problems and results in this thesis show the richness and interdisciplinary nature of the challenge of developing human-centered AI. |
| dc.embargo.lift | 2023-01-26T23:23:25Z |
| dc.embargo.terms | Restrict to UW for 1 year -- then make Open Access |
| dc.format.mimetype | application/pdf |
| dc.identifier.other | Bansal_washington_0250E_23745.pdf |
| dc.identifier.uri | http://hdl.handle.net/1773/48233 |
| dc.language.iso | en_US |
| dc.rights | CC BY-NC-ND |
| dc.subject | Computer science |
| dc.subject.other | Computer science and engineering |
| dc.title | Three Maxims for Developing Human-Centered AI for Decision Making |
| dc.type | Thesis |
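
To make finding (c) concrete, below is a minimal sketch of compatibility-aware update training, assuming a PyTorch classifier. The function name `compatible_update_loss`, the indicator-based weighting, and the trade-off parameter `lam` are illustrative assumptions, not necessarily the exact formulation in the thesis: the idea is to penalize the updated model extra on examples the previous model classified correctly, discouraging new errors that would break users' trust.

```python
import torch
import torch.nn.functional as F

def compatible_update_loss(logits_new, logits_old, targets, lam=1.0):
    """Cross-entropy plus a penalty on newly introduced errors.

    Examples the old model got right receive extra weight, so the
    updated model is discouraged from erring where users have learned
    to rely on the system. Setting lam=0 recovers standard training.
    """
    ce = F.cross_entropy(logits_new, targets, reduction="none")
    old_correct = (logits_old.argmax(dim=1) == targets).float()
    return (ce + lam * old_correct * ce).mean()
```

Larger values of `lam` trade raw accuracy for compatibility with the previous model's error boundary, the very boundary that, per finding (b), users build their mental models around.

Similarly, a toy version of the objective behind finding (d): instead of maximizing the AI's standalone accuracy, score a model by the team performance it induces under a model of human reliance. The confidence-thresholded reliance rule and the fixed unaided accuracy `human_acc` below are simplifying assumptions for illustration only.

```python
def expected_team_accuracy(ai_correct, ai_confidence, human_acc,
                           trust_threshold=0.8):
    """Expected team accuracy under a simple reliance model: the human
    accepts the AI's recommendation when its stated confidence exceeds
    a threshold and otherwise decides unaided."""
    total = 0.0
    for correct, conf in zip(ai_correct, ai_confidence):
        total += correct if conf >= trust_threshold else human_acc
    return total / len(ai_correct)

# A less accurate but well-calibrated model can make a better teammate
# than a more accurate model whose mistakes are confidently wrong:
overconfident = expected_team_accuracy([1, 1, 1, 0], [0.9, 0.9, 0.9, 0.9],
                                       human_acc=0.7)  # 0.75
calibrated = expected_team_accuracy([1, 1, 0, 0], [0.9, 0.9, 0.5, 0.5],
                                    human_acc=0.7)     # 0.85
```

Under this objective the second model wins despite lower standalone accuracy, because its errors fall where the human would not rely on it.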
Files
Original bundle
- Name: Bansal_washington_0250E_23745.pdf
- Size: 6.14 MB
- Format: Adobe Portable Document Format
