Machine Learning in Feedback Systems: Provable Methods for Safe and Robust Autonomy

Abstract

This dissertation explores the integration of machine learning into feedback control systems, addressing key challenges in control theory with a focus on autonomous navigation. Modern sensing technologies and computational methods have enabled remarkable progress in data-guided control. However, the reliability of machine learning, particularly deep learning, in safety-critical applications remains limited by its inadequate handling of uncertainty. Traditional control-theoretic methods, in turn, impose limitations when the system operates in complex environments with unknown uncertainties. This research seeks to bridge this gap by combining robust and optimal control techniques with machine learning to ensure reliable automated system behavior.

Part I of the dissertation establishes a theoretical framework for the data-driven design of optimal controllers inspired by autonomous physical systems. It investigates the online regulation of both linear and nonlinear systems that are possibly unstable and partially unknown. A significant contribution of this part is the concept of "regularizability," which characterizes the extent to which a system can be regulated in finite time, offering a new perspective on system behavior compared to traditional stabilizability and controllability. This theoretical exploration challenges conventional understandings and provides novel insights into finite-time regulation versus asymptotic behavior.

Part II addresses the practical application of deep learning algorithms that process high-dimensional data and generate a spectrum of outputs in automatic feedback control. The inherent challenge in this context is modeling uncertainty in the output, especially when trained neural networks are employed as perception modules within control loops for autonomous navigation. To mitigate this, the dissertation introduces a novel approach that uses a perception map as an approximate inverse of the observation process. The resulting perception-control loop retains desirable closed-loop properties, provided that the controller is robustly designed to accommodate the perception errors. The novelty of this part lies in developing methods that ensure robustness against state-dependent perception errors, thus contributing to more reliable machine learning applications in feedback control systems.
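As a minimal illustration of the Part I setting (not the dissertation's formal definition of regularizability), the sketch below simulates an unstable discrete-time linear system x_{t+1} = A x_t + B u_t under a state-feedback gain K. The matrices A, B and the gain K are hypothetical values chosen for this toy example; in the data-driven setting described above, such a gain would instead be synthesized online from trajectory data of a partially unknown system.

```python
import numpy as np

# Hypothetical toy system (an assumption for illustration only):
# open-loop unstable, since A has eigenvalues 1.2 and 1.1 outside the unit circle.
A = np.array([[1.2, 0.5],
              [0.0, 1.1]])
B = np.array([[0.0],
              [1.0]])

# Hand-picked gain placing the closed-loop eigenvalues of A - B K at 0.2 and 0.3.
K = np.array([[1.8, 1.8]])

x = np.array([[1.0], [1.0]])
norms = []
for t in range(20):
    u = -K @ x            # state feedback
    x = A @ x + B @ u     # closed-loop update
    norms.append(float(np.linalg.norm(x)))

print(norms[0], norms[-1])  # the state norm contracts under feedback
```

The closed-loop spectral radius here is 0.3, so the state decays geometrically; the regularizability notion in the text concerns how much such regulation can be achieved in finite time, rather than this asymptotic picture.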
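The Part II idea of a perception map with state-dependent error can be sketched in one dimension, again with hypothetical numbers rather than the dissertation's actual construction: the controller acts on a perception estimate x_hat whose error is bounded by a fraction eps of the true state, and the feedback gain is chosen with enough margin that the loop still contracts.

```python
import numpy as np

rng = np.random.default_rng(0)

a, b = 1.3, 1.0   # unstable scalar plant x_{t+1} = a*x + b*u (assumed values)
k = 1.0           # with exact perception, closed loop is a - b*k = 0.3
eps = 0.2         # state-dependent error bound: |e_t| <= eps * |x_t|

x = 5.0
traj = [abs(x)]
for t in range(30):
    e = eps * x * rng.uniform(-1.0, 1.0)  # perception error scales with the state
    x_hat = x + e                         # perception map acting as approximate inverse
    u = -k * x_hat                        # controller sees only the estimate
    x = a * x + b * u
    traj.append(abs(x))

print(traj[0], traj[-1])
```

In the worst case |x_{t+1}| <= (|a - b*k| + b*k*eps)|x_t| = 0.5|x_t|, so the loop remains contractive despite the state-dependent perception error; this is the kind of robustness margin the controller design above must provide.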

Description

Thesis (Ph.D.)--University of Washington, 2024
