Making Robot Behaviors Automatically Transparent
| Field | Value |
|---|---|
| dc.contributor.advisor | Cakmak, Maya |
| dc.contributor.author | Walker, Nick |
| dc.date.accessioned | 2025-05-12T22:46:30Z |
| dc.date.available | 2025-05-12T22:46:30Z |
| dc.date.issued | 2025-05-12 |
| dc.date.submitted | 2025 |
| dc.description | Thesis (Ph.D.)--University of Washington, 2025 |
| dc.description.abstract | Incorporating transparency into a robot's behaviors often demands substantial effort and expertise, to the detriment of anyone who must interact with the robot closely or for extended periods. We envision a future where this burden is automated away, enabling robot behaviors to be transparent by default. To this end, we propose a conceptual framework that formalizes the distinction between the design and implementation of a behavior and its transparent execution. This separation clarifies the scope of transparency interventions in human-robot interaction (HRI) and highlights key challenges in achieving generalizable transparency. This dissertation addresses these challenges across diverse HRI contexts, demonstrating data- and model-driven augmentations that improve transparency. First, we investigate how users perceive learning robots, uncovering gaps between behavior and external attributions that inform the design of transparency mechanisms. Next, we introduce a data-driven method for controlling the expression of robot affect, enabling behavior creators to balance transparency with task performance. We then present a model-based approach to improving transparency in assistive teleoperation, allowing for explicit control over the trade-off between assistance and intelligibility. Finally, we tackle transparency for human supervisors reviewing robot failures, developing techniques to distill voluminous multi-modal recordings into interpretable summaries. Together, these contributions demonstrate the feasibility of providing transparency into a robot behavior without requiring extensive modification of the original behavior specification. By framing transparency as an independent layer over existing behaviors, this work moves toward automation-friendly, scalable solutions that enhance human-robot interaction across a range of domains. |
| dc.embargo.terms | Open Access |
| dc.format.mimetype | application/pdf |
| dc.identifier.other | Walker_washington_0250E_27909.pdf |
| dc.identifier.uri | https://hdl.handle.net/1773/52961 |
| dc.language.iso | en_US |
| dc.rights | none |
| dc.subject | curiosity |
| dc.subject | interpretability |
| dc.subject | manipulation |
| dc.subject | teleoperation |
| dc.subject | Robotics |
| dc.subject | Computer science |
| dc.subject.other | Computer science and engineering |
| dc.title | Making Robot Behaviors Automatically Transparent |
| dc.type | Thesis |
Files
Original bundle
- Name: Walker_washington_0250E_27909.pdf
- Size: 10.49 MB
- Format: Adobe Portable Document Format
