Federated Learning with Dropout-Resilient Distributed Differential Privacy

12 September 2022


Federated learning (FL) is increasingly deployed among multiple clients (e.g., mobile devices) to train a shared model over decentralized data. To address privacy concerns, FL systems need to protect the clients' data from being revealed during training and also control the data's leakage through trained models when exposed to untrusted domains. Distributed differential privacy (DP) offers an appealing solution in this regard, as it achieves the optimal privacy-utility tradeoff without a trusted server. However, existing distributed DP algorithms cannot handle client dropout, leaving insufficient DP noise in each round and quickly depleting the privacy budget before training completes.

We present Dordis, a distributed differentially private FL framework that is resilient to client dropout, and make three key contributions. First, we develop a new privacy accounting technique under the notion of Rényi DP that tightly bounds the privacy loss in the presence of dropout before client sampling. This enables Dordis to set a minimum target noise level in each round. Second, we propose a novel add-then-drop masking scheme that enforces the target noise even when some sampled clients drop out in the end. Third, we design an efficient secure aggregation approach that optimally pipelines communication and computation for faster execution.

Evaluations through large-scale cloud deployment show that Dordis efficiently handles client dropout in various scenarios, attaining the optimal privacy-utility tradeoff and accelerating training by up to 2× compared to existing solutions.
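To make the dropout problem concrete, here is a minimal sketch of the variance bookkeeping involved. It is not Dordis's actual protocol: the function names, the survivor counts, and the simplified "size shares for the worst case, then remove the surplus" model are illustrative assumptions standing in for the paper's add-then-drop masking scheme.

```python
# Hedged sketch (not Dordis's actual protocol): why client dropout breaks
# naive distributed DP noise addition, and the variance bookkeeping behind
# an add-then-drop style fix. All names and parameters are illustrative.

def naive_noise_variance(n_sampled: int, n_survivors: int, sigma: float) -> float:
    """Each sampled client adds a 1/n share of the target variance sigma^2.
    If clients drop out, only the survivors' shares reach the aggregate,
    so the realized noise variance falls below the target."""
    share = sigma**2 / n_sampled
    return n_survivors * share


def add_then_drop_variance(n_survivors: int, min_survivors: int, sigma: float) -> float:
    """Each client instead sizes its noise share for the worst-case survivor
    count, so the aggregate meets the target even after dropout; any surplus
    noise would then be securely removed (not modeled in this sketch)."""
    share = sigma**2 / min_survivors
    return n_survivors * share


if __name__ == "__main__":
    # 10 sampled clients, target noise std sigma = 1, and 3 dropouts:
    print(naive_noise_variance(10, 7, 1.0))    # 0.7 -> below target, DP degraded
    print(add_then_drop_variance(7, 7, 1.0))   # 1.0 -> target met at worst case
    print(add_then_drop_variance(9, 7, 1.0))   # > 1 -> surplus, to be removed
```

The point of the comparison: with naive 1/n shares, any dropout leaves the round under-noised, forcing the accountant to charge more privacy budget; sizing shares for the worst-case survivor count guarantees at least the target variance, at the cost of surplus noise that a scheme like add-then-drop must later strip out.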