Learning for Robot-centric Autonomy

dc.contributor.advisor: Fox, Dieter
dc.contributor.author: Meng, Xiangyun
dc.date.accessioned: 2025-01-23T20:07:12Z
dc.date.available: 2025-01-23T20:07:12Z
dc.date.issued: 2025-01-23
dc.date.submitted: 2024
dc.description: Thesis (Ph.D.)--University of Washington, 2024
dc.description.abstract: Autonomy is a foundational capability that frees robots from confined workspaces and lets them interact with the open world. Traditional robot autonomy has relied heavily on a world-centric approach: building a global, geometrically accurate map and using it for localization and planning. However, this approach often proves inadequate or impractical in real-world applications. This thesis adopts a robot-centric perspective on autonomy, addressing challenges at three distinct scales: (1) Globally, we learn to compress visual experiences into sparse, topological scene representations for long-horizon navigation; (2) At the semi-local level, we develop perception systems that reason about the traversability of the terrain around the robot to achieve robust off-road navigation; (3) Locally, we learn end-to-end perception-action models that navigate a robot to any object with high precision. We demonstrate the real-time performance of our approaches across diverse robotic platforms, highlighting their applicability and generalizability.
dc.embargo.terms: Open Access
dc.format.mimetype: application/pdf
dc.identifier.other: Meng_washington_0250E_27789.pdf
dc.identifier.uri: https://hdl.handle.net/1773/52767
dc.language.iso: en_US
dc.rights: CC BY
dc.subject: Robotics
dc.subject.other: Computer science and engineering
dc.title: Learning for Robot-centric Autonomy
dc.type: Thesis

Files

Original bundle

Name: Meng_washington_0250E_27789.pdf
Size: 11.87 MB
Format: Adobe Portable Document Format