Byzantine-resilient Federated Learning via Gradient Memorization

Chen Chen

Lingjuan Lyu

Yuchen Liu*

Fangzhao Wu*

Chaochao Chen*

Gang Chen*

* External authors




Federated learning (FL) provides a privacy-aware learning framework by enabling a multitude of participants to jointly build models without collecting their private training data. However, FL has been shown to be vulnerable to Byzantine attacks. Many existing methods defend against such attacks by monitoring the gradients that clients submit in the current round, i.e., one-round gradients. Recent work has demonstrated that such naïve methods can hardly achieve satisfactory performance: because one-round (benign) gradients have high variance, defenses based on them can be compromised by adding a small, well-crafted bias to the benign gradients. To address this problem, we propose a new Average of Gradients (AG) framework, which detects Byzantine attacks using the average of multi-round gradients (i.e., gradients accumulated across multiple rounds). We theoretically show that our AG framework lowers the variance of the benign gradients and thus reduces the effect of Byzantine attacks. Experiments on various real-world datasets verify the efficacy of our AG framework.
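The core idea above can be illustrated with a minimal sketch: the server keeps a running (multi-round) average of each client's gradients, then screens clients whose averaged gradient lies far from the coordinate-wise median of all averages. This is not the paper's exact algorithm; the function names, the incremental-mean update, and the median-distance filter are illustrative assumptions.

```python
import numpy as np

def update_gradient_memory(memory, client_grads, round_t):
    """Update each client's running average of gradients over rounds 1..round_t.

    memory: dict mapping client id -> averaged gradient so far
    client_grads: dict mapping client id -> gradient submitted this round
    """
    for cid, g in client_grads.items():
        prev = memory.get(cid, np.zeros_like(g))
        # Incremental mean: avg_t = avg_{t-1} + (g_t - avg_{t-1}) / t
        memory[cid] = prev + (g - prev) / round_t
    return memory

def filter_byzantine(memory, num_keep):
    """Keep the num_keep clients whose averaged gradients are closest to the
    coordinate-wise median of all averaged gradients (assumed benign)."""
    ids = list(memory.keys())
    avg = np.stack([memory[c] for c in ids])
    med = np.median(avg, axis=0)
    dists = np.linalg.norm(avg - med, axis=1)
    keep_idx = np.argsort(dists)[:num_keep]
    return [ids[i] for i in keep_idx]
```

Averaging over rounds shrinks the variance of benign gradients, so a fixed attack bias that hides inside the noise of one-round gradients stands out against the tighter multi-round averages.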

Related Publications

Privacy Assessment on Reconstructed Images: Are Existing Evaluation Metrics Faithful to Human Perception?

NeurIPS, 2023
Xiaoxiao Sun*, Nidham Gazagnadou, Vivek Sharma, Lingjuan Lyu, Hongdong Li*, Liang Zheng*

Hand-crafted image quality metrics, such as PSNR and SSIM, are commonly used to evaluate model privacy risk under reconstruction attacks. Under these metrics, reconstructed images that are determined to resemble the original one generally indicate more privacy leakage. Image…

UltraRE: Enhancing RecEraser for Recommendation Unlearning via Error Decomposition

NeurIPS, 2023
Yuyuan Li*, Chaochao Chen*, Yizhao Zhang*, Weiming Liu*, Lingjuan Lyu, Xiaolin Zheng*, Dan Meng*, Jun Wang*

With growing concerns regarding privacy in machine learning models, regulations have committed to granting individuals the right to be forgotten while mandating companies to develop non-discriminatory machine learning systems, thereby fueling the study of the machine unlearn…

Towards Personalized Federated Learning via Heterogeneous Model Reassembly

NeurIPS, 2023
Jiaqi Wang*, Xingyi Yang*, Suhan Cui*, Liwei Che*, Lingjuan Lyu, Dongkuan Xu*, Fenglong Ma*

This paper focuses on addressing the practical yet challenging problem of model heterogeneity in federated learning, where clients possess models with different network structures. To tackle this problem, we propose a novel framework called pFedHR, which leverages heterogeneo…

