Constrained Policy Synthesis: Riemannian Flows, Online Regulation, and Distributed Games

Authors

Talebi, Shahriar

Abstract

This dissertation contributes to decision-making in both cooperative and non-cooperative environments, spanning domains from constrained and large-scale dynamical systems to network games and learning for control and estimation problems. First, we examine linearly constrained policy optimization over stabilizing controllers, utilizing a Riemannian metric inherent to optimal control problems. We propose a novel Newton-type algorithm that leverages the manifold’s second-order geometry to ensure local convergence, with promising results on structured and output-feedback Linear Quadratic Regulator (LQR) problems. Additionally, we present a distributed model-free policy iteration tailored to large networks of homogeneous systems; this algorithm yields stabilizing distributed feedback controllers through a data-driven approach and the use of a learned stability margin. Addressing online regulation of partially unknown unstable linear systems, we introduce the Data-Guided Regulation (DGR) synthesis procedure, revealing novel geometric and system-theoretic properties while effectively regulating the system’s states. Furthermore, we explore distributed learning in network games via dual averaging, achieving sublinear regret bounds for global objectives composed of local objective functions while accounting for the network structure. Lastly, we investigate optimal filtering policies for linear systems with unknown noise covariance matrices using noisy output data, minimizing prediction error through stochastic policy optimization with theoretical guarantees under biased gradients and stability constraints.
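To make the distributed dual-averaging idea concrete, the following is a minimal sketch (not taken from the dissertation; the problem data, network weights, and step-size schedule are illustrative assumptions). Each agent holds a local quadratic loss, agents mix accumulated gradients through a doubly stochastic matrix, and a projected step with a diminishing step size drives all agents toward the minimizer of the sum of local losses, the mechanism behind sublinear-regret guarantees of this type.

```python
import numpy as np

def distributed_dual_averaging(W, b, T=2000):
    """Distributed dual averaging for sum_i 0.5*(x - b[i])**2 over [-1, 1].

    W : doubly stochastic mixing matrix (one row/column per agent).
    b : local targets; the global minimizer is mean(b).
    """
    n = len(b)
    z = np.zeros(n)                 # dual variables (accumulated gradients)
    x = np.zeros(n)                 # primal iterates, one per agent
    history = np.zeros((T, n))
    for t in range(1, T + 1):
        grad = x - b                # gradient of each agent's local loss
        z = W @ z + grad            # consensus step on the duals
        alpha = 1.0 / np.sqrt(t)    # diminishing step size
        x = np.clip(-alpha * z, -1.0, 1.0)  # projected primal update
        history[t - 1] = x
    return history

# 3-agent network with uniform (doubly stochastic) mixing weights.
W = np.array([[0.50, 0.25, 0.25],
              [0.25, 0.50, 0.25],
              [0.25, 0.25, 0.50]])
b = np.array([-0.5, 0.2, 0.9])      # heterogeneous local targets
hist = distributed_dual_averaging(W, b)
print(hist[-1])                     # all agents approach mean(b) = 0.2
```

The key design choice is that consensus is performed on the dual (gradient-sum) variables rather than the primal iterates, which is what keeps the regret analysis tractable for a sum of local objectives over a network.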

Description

Thesis (Ph.D.)--University of Washington, 2023
