Learning in Structured Multi-agent Systems with Provable Guarantees

Abstract

Multi-agent systems enable decentralized decision-making and interaction in complex environments, with applications ranging from traffic networks to robotics and economics. This thesis develops algorithms with provable theoretical guarantees, exploiting the structural properties of multi-agent systems to enhance scalability and efficiency.

In offline multi-agent reinforcement learning, we introduce the unilateral coverage assumption and design the first efficient algorithms for two-player zero-sum and general-sum Markov games based on the principle of pessimism. We propose a novel strategy-wise concentration technique to reduce sample complexity, overcoming the challenges of joint action spaces.

In online multi-agent reinforcement learning, we propose the independent linear Markov game framework, enabling scalable algorithms that break the curse of multiagents by leveraging individual agent function approximation. We also design the first algorithm that can address non-stationary environments, improving sample complexity guarantees for learning correlated and Nash equilibria.

In congestion games, we design the first algorithms for Nash equilibrium learning and optimal tax learning. By exploiting the game's structure, we achieve scalable performance with sample complexity independent of large action spaces. For tax design, we propose an equilibrium feedback framework and develop an efficient method for approximating socially optimal taxes.

This work advances the theoretical and practical understanding of multi-agent learning, with implications for diverse real-world applications.

Description

Thesis (Ph.D.)--University of Washington, 2025
