Robust Vision-Aided Self-Localization of Mobile Robots
| Field | Value |
| --- | --- |
| dc.contributor.advisor | Hannaford, Blake |
| dc.contributor.author | Lindgren, Kyle Martin |
| dc.date.accessioned | 2020-04-30T17:40:05Z |
| dc.date.available | 2020-04-30T17:40:05Z |
| dc.date.issued | 2020-04-30 |
| dc.date.submitted | 2020 |
| dc.description | Thesis (Ph.D.)--University of Washington, 2020 |
| dc.description.abstract | Machine learning has emerged as a powerful tool for solving many computer vision tasks by extracting and correlating meaningful features from high-dimensional inputs in ways that can exceed the best human-derived modeling efforts. However, the field of vision-aided localization remains diverse, with many traditional approaches (i.e., filtering- or nonlinear least-squares-based) often outperforming deep approaches and none declaring an end to the problem. Proven successes of both approaches, with model-free methods excelling at complex data association and model-based methods benefiting from known sensor and scene geometric dynamics, elicit the question: can a hybrid approach effectively combine the strengths of model-free and model-based methods? This work presents a new vision-aided localization solution with a monocular visual-inertial architecture that combines model-based optimization and model-free robustness techniques to produce scaled depth and egomotion estimates. Additionally, a Mixture of Experts ensemble framework is presented for robust multi-domain self-localization in unseen environments. Advancements in virtual environments and synthetically generated data, with realistic imagery and physics engines, are leveraged to aid the exploration and evaluation of self-localization solutions. Access to diverse, abundant, and manipulable data also promotes the efficacy of transitioning simulator-tested solutions onto real-world vehicles, a rare ability for current deep approaches but a critical step for the advancement of the field. Together, well-established model-based techniques are combined with innovative model-free techniques to create a robust, hybrid, multi-domain self-localization solution. Robustness to sensor, motion, and scene dynamics is demonstrated through comparison with state-of-the-art model-free and model-based approaches in both real and virtual domains. |
| dc.embargo.terms | Open Access |
| dc.format.mimetype | application/pdf |
| dc.identifier.other | Lindgren_washington_0250E_21183.pdf |
| dc.identifier.uri | http://hdl.handle.net/1773/45429 |
| dc.language.iso | en_US |
| dc.rights | CC BY-NC-SA |
| dc.subject | Computer Vision |
| dc.subject | Deep Learning |
| dc.subject | Navigation |
| dc.subject | Robotics |
| dc.subject | Self-Localization |
| dc.subject | Computer science |
| dc.subject | Electrical engineering |
| dc.subject | Computer engineering |
| dc.subject.other | Electrical engineering |
| dc.title | Robust Vision-Aided Self-Localization of Mobile Robots |
| dc.type | Thesis |
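The abstract above describes a Mixture of Experts ensemble for multi-domain self-localization. As a rough illustration of that idea only, the sketch below shows one common way such an ensemble can be wired: per-domain egomotion "experts" each emit an estimate plus a confidence, and a softmax gate fuses them. All names here (`PoseExpert`, `gate_weights`, the 6-DoF vector layout) are hypothetical stand-ins, not the thesis's actual architecture or API.

```python
# Illustrative sketch of a Mixture of Experts gate for egomotion estimates.
# Assumption: each expert returns a 6-DoF motion vector and a confidence score;
# the thesis's real experts and gating network are not reproduced here.
import numpy as np

class PoseExpert:
    """Stand-in for a domain-specific egomotion estimator (e.g., a network
    trained on one environment). predict() returns a 6-DoF motion estimate
    [tx, ty, tz, roll, pitch, yaw] and a scalar confidence score."""
    def __init__(self, bias, confidence):
        self.bias = bias
        self.confidence = confidence

    def predict(self, observation):
        # Placeholder: a real expert would run inference on image/IMU features.
        return observation + self.bias, self.confidence

def gate_weights(confidences):
    """Softmax gating: map per-expert confidence scores to mixture
    weights that sum to 1."""
    z = np.exp(confidences - np.max(confidences))
    return z / z.sum()

def ensemble_egomotion(experts, observation):
    """Fuse the experts' estimates into a single 6-DoF prediction."""
    estimates, confidences = zip(*(e.predict(observation) for e in experts))
    w = gate_weights(np.array(confidences))
    return np.average(np.stack(estimates), axis=0, weights=w)

if __name__ == "__main__":
    obs = np.zeros(6)  # dummy observation standing in for real sensor features
    experts = [
        PoseExpert(bias=np.full(6, 0.01), confidence=2.0),  # in-domain expert
        PoseExpert(bias=np.full(6, 0.50), confidence=0.1),  # out-of-domain expert
    ]
    # Output is dominated by the high-confidence expert, which is the
    # behavior the ensemble relies on when entering unseen environments.
    print(ensemble_egomotion(experts, obs))
```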
Files
Original bundle
- Name: Lindgren_washington_0250E_21183.pdf
- Size: 8.06 MB
- Format: Adobe Portable Document Format
