Learning for Robot-centric Autonomy
| Field | Value |
| --- | --- |
| dc.contributor.advisor | Fox, Dieter |
| dc.contributor.author | Meng, Xiangyun |
| dc.date.accessioned | 2025-01-23T20:07:12Z |
| dc.date.available | 2025-01-23T20:07:12Z |
| dc.date.issued | 2025-01-23 |
| dc.date.submitted | 2024 |
| dc.description | Thesis (Ph.D.)--University of Washington, 2024 |
| dc.description.abstract | Autonomy is a foundational capability that frees robots from confined workspaces and lets them interact with the open world. Traditional robot autonomy has relied heavily on a world-centric approach: building a global, geometrically accurate map and using it for localization and planning. However, this approach often proves inadequate or impractical in many real-world applications. This thesis adopts a robot-centric perspective on autonomy, addressing the challenges across three distinct scales: (1) Globally, we learn to compress visual experiences into sparse, topological scene representations for long-horizon navigation; (2) At the semi-local level, we develop perception systems that reason about the traversability of the terrain around the robot to achieve robust off-road navigation; (3) Locally, we learn end-to-end perception-action models to navigate a robot to any object with high precision. We demonstrate the real-time performance of our approaches across diverse robotic platforms, highlighting the applicability and generalizability of these methods. |
| dc.embargo.terms | Open Access |
| dc.format.mimetype | application/pdf |
| dc.identifier.other | Meng_washington_0250E_27789.pdf |
| dc.identifier.uri | https://hdl.handle.net/1773/52767 |
| dc.language.iso | en_US |
| dc.rights | CC BY |
| dc.subject | Robotics |
| dc.subject.other | Computer science and engineering |
| dc.type | Thesis |
Files
Original bundle
- Name: Meng_washington_0250E_27789.pdf
- Size: 11.87 MB
- Format: Adobe Portable Document Format
