Author: Haduong, Nikita
Advisor: Smith, Noah A
Date: 2025-10-02
File: Haduong_washington_0250E_28851.pdf
Handle: https://hdl.handle.net/1773/53966
Thesis (Ph.D.)--University of Washington, 2025

Abstract:
AI agents are increasingly used in production settings, but our understanding of how humans expect AI to behave, and how AI usage influences human behavior, falls short because of the gap between controlled laboratory studies and real-world usage. In this thesis, I develop methodologies to shrink this gap and further our understanding of how humans perceive and use AI in practice, and how we can design more relevant technologies. My methodologies are anchored by the observation that participants with greater task immersion and intrinsic motivation exhibit more realistic behavior, and that simple manipulations of task settings, domains, and incentives can increase immersion. This thesis discusses my key contributions to making AI research more relevant to potential downstream users. I first consider the role of AI in collaborative problem solving (CPS) and discover a dearth of open resources for conducting research in human-AI CPS when teams are larger than dyads. I address this challenge by developing CPS-TaskForge, a CPS environment generator built on a resource management task that resembles real-world problems. CPS-TaskForge enables systematic study of CPS and open data generation by parameterizing tower defense games, and because the task is fun, it is approachable to laypeople and intrinsically motivating. Next, I explore how users perceive and understand the potential risks and harms of AI assistants by grounding the discussion in procedural document question answering, which poses tangible and relatable risks to human evaluators, and by recruiting evaluators who are familiar with the domain of procedural documents.
I discover how current human evaluation techniques fail to account for non-deterministic AI behavior and develop a taxonomy of errors that can inform the future development of AI-powered systems. Finally, I examine AI-assisted decision-making behavior and explore the influence of performance pressure, a common environmental factor in production settings that lab studies typically isolate away, to further our understanding of the sensitivity of AI advice-taking. My methods illustrate the importance, and potential simplicity, of modeling more realistic deployment settings while conducting carefully controlled studies.

Format: application/pdf
Language: en-US
License: CC BY-NC-ND
Keywords: human centered AI; natural language processing; Artificial intelligence; Computer science and engineering
Title: Improving Experimental Methods to Capture Real-World Human-AI Perceptions and Interactions
Type: Thesis