Fairness in Continual Federated Learning
Abstract
Continual Federated Learning (CFL) is a distributed machine learning paradigm that enables multiple clients to collaboratively train a shared model without sharing their data, while adapting to new classes without forgetting previously learned ones. Few evaluation models and metrics currently exist for measuring fairness in CFL, and maintaining fairness over time is challenging as the system evolves. To address this gap, our study explores temporal fairness in CFL, examining how the fairness of the model is influenced by the selection and participation of clients over time. We introduce novel fairness metrics, Delta Accuracy Fairness (DAF) and Delta Forgetting Fairness (DFF), specifically designed to capture temporal fairness in a CFL setting. We also propose a set of client selection strategies that improve the temporal fairness of the CFL model by addressing disparities in knowledge retention across clients. Through comprehensive analysis, we show that while no single strategy guarantees perfect temporal fairness, the Low Participation and Low Average strategies consistently outperform the others in stability and equity. Our findings also underscore the adaptability of the Dynamic strategy, which shows significant promise on certain tasks. These insights pave the way for refining client selection strategies, enhancing the fairness of CFL, and fostering more equitable learning environments.
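To make the abstract's ideas concrete, the sketch below illustrates one plausible reading of a "Low Participation" selection rule (pick the clients that have joined the fewest training rounds) and a simple spread-based proxy for a delta-accuracy fairness measure. The function names, the tie-breaking choice, and the max-minus-min spread are illustrative assumptions, not the thesis's actual definitions of the strategies or of DAF/DFF.

```python
import random


def select_low_participation(participation_counts, k, seed=None):
    """Select the k clients with the fewest participations so far.

    Hypothetical sketch of a 'Low Participation' strategy: clients are
    sorted by how many rounds they have joined, ties broken randomly.
    """
    rng = random.Random(seed)
    clients = list(participation_counts)
    rng.shuffle(clients)  # randomize order so the stable sort breaks ties fairly
    clients.sort(key=lambda c: participation_counts[c])
    return clients[:k]


def delta_accuracy_spread(acc_before, acc_after):
    """Spread of per-client accuracy changes between two evaluation points.

    A proxy (assumed, not the thesis's DAF formula): smaller spread means
    accuracy changes are distributed more evenly across clients.
    """
    deltas = [acc_after[c] - acc_before[c] for c in acc_before]
    return max(deltas) - min(deltas)


# Example: clients c4 (0 rounds) and c2 (1 round) are least-participating.
counts = {"c1": 5, "c2": 1, "c3": 3, "c4": 0}
chosen = select_low_participation(counts, k=2)
```

Selecting under-represented clients in this way would give them more chances to inject their data distribution into the global model, which is one intuitive mechanism by which such a strategy could reduce disparities in knowledge retention.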
Description
Thesis (Master's)--University of Washington, 2024
