Machine Learning in Feedback Systems: Provable Methods for Safe and Robust Autonomy

dc.contributor.advisor: Mesbahi, Mehran M
dc.contributor.author: Rahimi, Niyousha
dc.date.accessioned: 2024-10-16T03:08:45Z
dc.date.available: 2024-10-16T03:08:45Z
dc.date.issued: 2024-10-16
dc.date.submitted: 2024
dc.description: Thesis (Ph.D.)--University of Washington, 2024
dc.description.abstract: This dissertation explores the integration of machine learning into feedback control systems, addressing key challenges in control theory with a focus on autonomous navigation. Modern sensing technologies and computational methods have enabled remarkable progress in data-guided control. However, the reliability of machine learning, particularly deep learning, in safety-critical applications remains limited by its inadequate handling of uncertainty. Moreover, traditional control-theoretic methods face limitations when the system operates in complex environments with unknown uncertainties. This research bridges that gap by combining robust and optimal control techniques with machine learning to ensure reliable automated system behavior.

Part I of the dissertation establishes a theoretical framework for the data-driven design of optimal controllers inspired by autonomous physical systems. It investigates the online regulation of both linear and nonlinear systems that are possibly unstable and partially unknown. A significant contribution of this part is the introduction of the concept of "regularizability," which characterizes the extent to which a system can be regulated in finite time, offering a new perspective on system behavior compared with traditional stabilizability and controllability. This theoretical exploration challenges conventional understanding and provides novel insights into finite-time regulation versus asymptotic behavior.

Part II addresses the practical application of deep learning algorithms for processing high-dimensional data and generating a spectrum of outputs in automatic feedback control. The inherent challenge in this context is modeling uncertainty in the output, especially when trained neural networks are employed as perception modules within control loops for autonomous navigation. To mitigate this, the dissertation introduces a novel approach that utilizes a perception map as an approximate inverse. The resulting perception-control loop exhibits favorable properties, provided that the controller is robustly designed to accommodate perception errors. The novelty of this part lies in developing methods that ensure robustness against state-dependent perception errors, thereby contributing to more reliable machine learning applications in feedback control systems.
dc.embargo.terms: Open Access
dc.format.mimetype: application/pdf
dc.identifier.other: Rahimi_washington_0250E_27319.pdf
dc.identifier.uri: https://hdl.handle.net/1773/52410
dc.language.iso: en_US
dc.rights: CC BY-NC
dc.subject: Autonomous navigation
dc.subject: Data-Driven Control
dc.subject: Learning in Feedback Systems
dc.subject: Learning-Based Control
dc.subject: Online Regulation
dc.subject: Vision-Based Navigation
dc.subject: Aerospace engineering
dc.subject: Applied mathematics
dc.subject: Computer science
dc.subject.other: Aeronautics and astronautics
dc.title: Machine Learning in Feedback Systems: Provable Methods for Safe and Robust Autonomy
dc.type: Thesis

Files

Original bundle

Name: Rahimi_washington_0250E_27319.pdf
Size: 8.51 MB
Format: Adobe Portable Document Format