Contributor: Mashhadi, Afra
Author: Noor, Naima
Date available: 2024-09-09
Date issued: 2024
File: Noor_washington_0250O_26829.pdf
URI: https://hdl.handle.net/1773/51873
Description: Thesis (Master's)--University of Washington, 2024

Abstract: Continual Federated Learning (CFL) is a distributed machine learning technique that enables multiple clients to collaboratively train a shared model without sharing their data, while also adapting to new classes without forgetting previously learned ones. Evaluation models and metrics for measuring fairness in CFL remain limited, and ensuring fairness over time is challenging as the system evolves. To address this, our study explores temporal fairness in CFL, examining how the fairness of the model can be influenced by the selection and participation of clients over time. We introduce novel fairness metrics, Delta Accuracy Fairness (DAF) and Delta Forgetting Fairness (DFF), specifically designed to ensure temporal fairness in a CFL context. Additionally, we propose a set of client selection strategies that enhance the temporal fairness of the CFL model by addressing disparities in knowledge retention. Through comprehensive analysis, we demonstrate that while no single strategy guarantees perfect temporal fairness, the Low Participation and Low Average strategies consistently outperform others in terms of stability and equity. Furthermore, our findings underscore the adaptability of the Dynamic strategy, which shows significant promise in certain tasks. These insights pave the way for refining client selection strategies, enhancing CFL's fairness, and fostering more equitable learning environments.

Format: application/pdf
Language: en-US
Rights: CC BY
Subjects: Continual Federated Learning; Continual Learning; Fairness; Federated Learning; Individual Fairness; Machine Learning; Computer science; Artificial intelligence; Computer science and engineering
Title: Fairness in Continual Federated Learning
Type: Thesis
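The abstract's notion of temporal fairness (uneven accuracy change across clients as new tasks arrive) can be illustrated with a minimal sketch. The thesis's actual DAF/DFF formulas are not given here, so the per-client delta and the max-min gap below are illustrative assumptions, as are all client names and values.

```python
def delta_accuracy(acc_prev, acc_curr):
    """Per-client change in accuracy between two evaluation points.

    Negative values indicate forgetting after a new task is learned.
    """
    return {c: acc_curr[c] - acc_prev[c] for c in acc_curr}

def fairness_gap(deltas):
    """Spread of the per-client deltas; 0 means change is perfectly even.

    A gap-based spread is only one possible choice; the thesis's DAF/DFF
    metrics may aggregate differently (e.g. variance or pairwise terms).
    """
    vals = list(deltas.values())
    return max(vals) - min(vals)

# Hypothetical per-client accuracies before and after a new task.
acc_t1 = {"client_a": 0.82, "client_b": 0.79, "client_c": 0.85}
acc_t2 = {"client_a": 0.80, "client_b": 0.70, "client_c": 0.84}

deltas = delta_accuracy(acc_t1, acc_t2)
gap = fairness_gap(deltas)
# A large gap signals that one client (here client_b) forgot far more
# than the others, i.e. the update was temporally unfair to it.
```

Under this sketch, a client selection strategy such as the abstract's "Low Average" could prioritize clients whose delta is most negative, shrinking the gap over subsequent rounds.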