
Byzantine-resilient Federated Learning via Gradient Memorization

Chen Chen

Lingjuan Lyu

Yuchen Liu*

Fangzhao Wu*

Chaochao Chen*

Gang Chen*

* External authors

FL-AAAI-22

2022

Abstract

Federated learning (FL) provides a privacy-aware learning framework by enabling a multitude of participants to jointly construct models without collecting their private training data. However, federated learning has exhibited vulnerabilities to Byzantine attacks. Many existing methods defend against such Byzantine attacks by monitoring the gradients of clients in the current round, i.e., gradients in one round. Recent works have demonstrated that such naïve methods can hardly achieve satisfactory performance: defenses based on one-round gradients can be compromised by adding a small, well-crafted bias to the benign gradients, because one-round (benign) gradients have high variance. To address this problem, we propose a new Average of Gradients (AG) framework, which detects Byzantine attacks with the average of multi-round gradients (i.e., gradients across multiple rounds). We theoretically show that our AG framework leads to lower variance of the benign gradients, and thus can reduce the effects of Byzantine attacks. Experiments on various real-world datasets verify the efficacy of our AG framework.
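The abstract does not spell out the AG algorithm, but its core idea — remember each client's gradients over several rounds, average them to shrink variance, and flag clients whose averaged gradient is an outlier — can be sketched as follows. The window size and the median-distance scoring rule below are illustrative assumptions, not the paper's specification:

```python
import numpy as np

def ag_filter(history, current_grads, window=5, tol=2.0):
    """Sketch of an Average-of-Gradients (AG) style Byzantine filter.

    history: dict mapping client_id -> list of past gradient vectors
             (mutated in place to serve as the gradient memory)
    current_grads: dict mapping client_id -> this round's gradient vector
    Returns the set of client ids kept as (presumed) benign.
    """
    # Update each client's gradient memory and compute its multi-round average.
    avgs = {}
    for cid, g in current_grads.items():
        history.setdefault(cid, []).append(g)
        history[cid] = history[cid][-window:]       # keep only the last `window` rounds
        avgs[cid] = np.mean(history[cid], axis=0)   # averaged gradient has lower variance

    # Score clients by distance from the coordinate-wise median of the averages;
    # averaging first makes a small injected bias stand out against the reduced noise.
    med = np.median(np.stack(list(avgs.values())), axis=0)
    dists = {cid: np.linalg.norm(a - med) for cid, a in avgs.items()}
    thresh = tol * np.median(list(dists.values()))
    return {cid for cid, d in dists.items() if d <= thresh}
```

A server would call `ag_filter` each round before aggregation and average only the kept clients' gradients. Any robust scoring rule (e.g. Krum or trimmed mean) could replace the median-distance step; the point of AG is that it operates on multi-round averages rather than raw one-round gradients.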

Related Publications

FLoRA: Federated Fine-Tuning Large Language Models with Heterogeneous Low-Rank Adaptations

NeurIPS, 2024
Lingjuan Lyu, Ziyao Wang, Zheyu Shen, Yexiao He, Guoheng Sun, Hongyi Wang, Ang Li

The rapid development of Large Language Models (LLMs) has been pivotal in advancing AI, with pre-trained LLMs being adaptable to diverse downstream tasks through fine-tuning. Federated learning (FL) further enhances fine-tuning in a privacy-aware manner by utilizing clients'…

pFedClub: Controllable Heterogeneous Model Aggregation for Personalized Federated Learning

NeurIPS, 2024
Jiaqi Wang*, Lingjuan Lyu, Fenglong Ma*, Qi Li

Federated learning, a pioneering paradigm, enables collaborative model training without exposing users’ data to central servers. Most existing federated learning systems necessitate uniform model structures across all clients, restricting their practicality. Several methods …

CURE4Rec: A Benchmark for Recommendation Unlearning with Deeper Influence

NeurIPS, 2024
Chaochao Chen*, Yizhao Zhang*, Lingjuan Lyu, Yuyuan Li*, Jiaming Zhang, Li Zhang, Biao Gong, Chenggang Yan

With increasing privacy concerns in artificial intelligence, regulations have mandated the right to be forgotten, granting individuals the right to withdraw their data from models. Machine unlearning has emerged as a potential solution to enable selective forgetting in model…

