Scalable Bayesian Reinforcement Learning


Authors

Lee, Gilwoo

Abstract

Informed and robust decision making in the face of uncertainty is critical for robots operating in unstructured environments. We formulate this problem as Bayesian Reinforcement Learning (BRL) over latent Markov Decision Processes (MDPs). While Bayes-optimality is theoretically the gold standard, existing algorithms scale poorly to continuous state and action spaces. This thesis proposes a set of BRL algorithms that scale to complex control tasks. Our algorithms build on the following insight: robotics problems have structural priors that we can use to produce approximate models and experts that the agent can leverage. First, we propose an algorithm that improves a nominal model and policy with data-driven semi-parametric learning and optimal control. We then consider more general BRL tasks with complex latent models, proposing algorithms that combine batch reinforcement learning with experts to scale to complex latent tasks. Finally, through simulated and physical experiments, we demonstrate that our algorithms drastically outperform existing adaptive RL methods.
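At the core of BRL over latent MDPs is maintaining a posterior belief over which latent MDP the agent is in, updated after each observed transition. The following is a minimal sketch of that Bayes update for a finite set of candidate models; the function names and toy numbers are illustrative assumptions, not taken from the thesis.

```python
import numpy as np

def belief_update(belief, likelihoods):
    """Posterior over latent MDPs after one observed transition.

    belief: prior probability assigned to each candidate latent MDP.
    likelihoods: p(s' | s, a, m) for each candidate model m,
        evaluated at the observed transition (s, a, s').
    """
    posterior = belief * likelihoods          # Bayes rule, unnormalized
    return posterior / posterior.sum()        # renormalize to a distribution

# Two hypothetical latent MDPs that disagree on how likely the observed
# next state is; the belief shifts toward the model that explains it better.
prior = np.array([0.5, 0.5])
obs_likelihood = np.array([0.9, 0.3])  # p(s'|s,a) under model 0 vs. model 1
posterior = belief_update(prior, obs_likelihood)
# posterior -> [0.75, 0.25]
```

A Bayes-optimal policy would plan against this evolving belief, which is what makes exact BRL intractable in continuous spaces and motivates the approximate, expert-guided algorithms the abstract describes.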

Description

Thesis (Ph.D.)--University of Washington, 2020
