Title: Robust Vision-Aided Self-Localization of Mobile Robots
Author: Lindgren, Kyle Martin
Advisor: Hannaford, Blake
Date: 2020-04-30
File: Lindgren_washington_0250E_21183.pdf
URI: http://hdl.handle.net/1773/45429
Description: Thesis (Ph.D.)--University of Washington, 2020

Abstract: Machine learning has emerged as a powerful tool for solving many computer vision tasks, extracting and correlating meaningful features from high-dimensional inputs in ways that can exceed the best human-derived modeling efforts. However, the field of vision-aided localization remains diverse, with traditional approaches (i.e., filtering- or nonlinear least-squares-based) often outperforming deep approaches, and no method has definitively solved the problem. Proven successes on both sides, with model-free methods excelling at complex data association and model-based methods benefiting from known sensor and scene geometric dynamics, raise the question: can a hybrid approach effectively combine the strengths of model-free and model-based methods? This work presents a new vision-aided localization solution: a monocular visual-inertial architecture that combines model-based optimization with model-free robustness techniques to produce scaled depth and egomotion estimates. Additionally, a Mixture of Experts ensemble framework is presented for robust multi-domain self-localization in unseen environments. Advances in virtual environments and synthetically generated data with realistic imagery and physics engines are leveraged to aid the exploration and evaluation of self-localization solutions. Access to diverse, abundant, and manipulable data also eases the transition of simulator-tested solutions onto real-world vehicles, a rare ability for current deep approaches but a critical step for the advancement of the field. Together, well-established model-based techniques are combined with innovative model-free techniques to create a robust, hybrid, multi-domain self-localization solution. Robustness to sensor, motion, and scene dynamics is demonstrated through comparison with state-of-the-art model-free and model-based approaches in both real and virtual domains.

Format: application/pdf
Language: en-US
Rights: CC BY-NC-SA
Keywords: Computer Vision; Deep Learning; Navigation; Robotics; Self-Localization
Subjects: Computer science; Electrical engineering; Computer engineering
Type: Thesis