Robust Vision-Aided Self-Localization of Mobile Robots

dc.contributor.advisor: Hannaford, Blake
dc.contributor.author: Lindgren, Kyle Martin
dc.date.accessioned: 2020-04-30T17:40:05Z
dc.date.available: 2020-04-30T17:40:05Z
dc.date.issued: 2020-04-30
dc.date.submitted: 2020
dc.description: Thesis (Ph.D.)--University of Washington, 2020
dc.description.abstract: Machine learning has emerged as a powerful tool for solving many computer vision tasks, extracting and correlating meaningful features from high-dimensional inputs in ways that can exceed the best human-derived modeling efforts. The area of vision-aided localization remains diverse, however, with traditional approaches (e.g. filtering- or nonlinear-least-squares-based) often outperforming deep approaches, and no method has definitively closed the problem. Proven successes on both sides, with model-free methods excelling at complex data association and model-based methods benefiting from known sensor and scene geometric dynamics, elicit the question: can a hybrid approach effectively combine the strengths of model-free and model-based methods? This work presents a new vision-aided localization solution: a monocular visual-inertial architecture that combines model-based optimization with model-free robustness techniques to produce scaled depth and egomotion estimates. Additionally, a Mixture of Experts ensemble framework is presented for robust multi-domain self-localization in unseen environments. Advances in virtual environments and synthetically generated data, with realistic imagery and physics engines, are leveraged to aid the exploration and evaluation of self-localization solutions. Access to diverse, abundant, and manipulable data also eases the transition of simulator-tested solutions onto real-world vehicles, a rare ability for current deep approaches but a critical step for the advancement of the field. Together, well-established model-based techniques are combined with innovative model-free techniques to create a robust, hybrid, multi-domain self-localization solution. Robustness to sensor, motion, and scene dynamics is demonstrated through comparison with state-of-the-art model-free and model-based approaches in both real and virtual domains.
dc.embargo.terms: Open Access
dc.format.mimetype: application/pdf
dc.identifier.other: Lindgren_washington_0250E_21183.pdf
dc.identifier.uri: http://hdl.handle.net/1773/45429
dc.language.iso: en_US
dc.rights: CC BY-NC-SA
dc.subject: Computer Vision
dc.subject: Deep Learning
dc.subject: Navigation
dc.subject: Robotics
dc.subject: Self-Localization
dc.subject: Computer science
dc.subject: Electrical engineering
dc.subject: Computer engineering
dc.subject.other: Electrical engineering
dc.title: Robust Vision-Aided Self-Localization of Mobile Robots
dc.type: Thesis
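
The abstract above describes a Mixture of Experts ensemble for multi-domain self-localization. The thesis does not publish code in this record, so the following is only a minimal illustrative sketch of the general gating idea: several per-domain egomotion "experts" whose estimates are fused by softmax gate weights. All names here (fuse_egomotion, gate_logits, the expert domains) are hypothetical assumptions, not taken from the thesis.

import numpy as np

def softmax(x):
    # Numerically stable softmax over gate scores.
    e = np.exp(x - np.max(x))
    return e / e.sum()

def fuse_egomotion(expert_estimates, gate_logits):
    """Blend per-expert 6-DoF egomotion estimates with softmax gate weights.

    expert_estimates: (n_experts, 6) array of [tx, ty, tz, rx, ry, rz]
    gate_logits:      (n_experts,) scores, e.g. from a gating network that
                      judges how well each expert's training domain matches
                      the current input imagery.
    """
    weights = softmax(gate_logits)       # (n_experts,) convex weights
    return weights @ expert_estimates    # (6,) fused estimate

# Toy usage: three experts trained on different domains disagree slightly;
# the gate favors the expert whose domain best matches the current scene.
estimates = np.array([
    [0.10, 0.00, 0.98, 0.01, 0.00, 0.02],   # expert A (e.g. real outdoor)
    [0.12, 0.01, 1.05, 0.00, 0.01, 0.02],   # expert B (e.g. real indoor)
    [0.09, 0.00, 0.95, 0.01, 0.00, 0.01],   # expert C (e.g. synthetic)
])
logits = np.array([2.0, -1.0, 0.5])          # gate scores for current frame
print(fuse_egomotion(estimates, logits))

A convex (softmax-weighted) combination is just one plausible fusion rule; a hard selection of the top-scoring expert would be another, and which the thesis actually uses is not stated in this record.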

Files

Original bundle

Name: Lindgren_washington_0250E_21183.pdf
Size: 8.06 MB
Format: Adobe Portable Document Format