
Byzantine-Robust Learning on Heterogeneous Data via Gradient Splitting

Yuchen Liu*

Chen Chen

Lingjuan Lyu

Fangzhao Wu*

Sai Wu*

Gang Chen*

* External authors

ICML 2023


Abstract

Federated learning has exhibited vulnerabilities to Byzantine attacks, in which Byzantine attackers can send arbitrary gradients to the central server to disrupt the convergence and degrade the performance of the global model. A wealth of defenses have been proposed to counter Byzantine attacks. However, Byzantine clients can still circumvent these defenses when the data is not independently and identically distributed (non-IID). In this paper, we first reveal the root causes of the performance degradation of current robust AGgregation Rules (AGRs) in non-IID settings: the curse of dimensionality and gradient heterogeneity. To address these issues, we propose GAS, a gradient splitting based approach that successfully adapts existing robust AGRs to ensure Byzantine robustness under non-IID settings. We also provide a detailed convergence analysis for existing robust AGRs adapted via GAS. Experiments on various real-world datasets verify the efficacy of our proposed GAS.
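The gradient splitting idea can be sketched as follows: partition each client's high-dimensional gradient into low-dimensional sub-vectors, apply a robust AGR to each split, score clients by how far their sub-vectors fall from the per-split robust aggregate, and then average the full gradients of the lowest-scoring clients. This is a minimal illustration, not the paper's implementation; the function name `gas_aggregate`, the choice of coordinate-wise median as the stand-in AGR, and the distance-based scoring details are assumptions for this sketch.

```python
import numpy as np

def gas_aggregate(gradients, num_splits, num_byzantine):
    """Sketch of a gradient-splitting aggregation (assumed interface).

    gradients: (n_clients, dim) array of client gradients.
    num_splits: number of low-dimensional sub-vectors per gradient.
    num_byzantine: assumed upper bound f on Byzantine clients.
    """
    n, d = gradients.shape
    # Partition the d coordinates into num_splits contiguous groups,
    # which mitigates the curse of dimensionality for the robust AGR.
    splits = np.array_split(np.arange(d), num_splits)
    scores = np.zeros(n)
    for idx in splits:
        sub = gradients[:, idx]  # (n, |split|) sub-vectors
        # Any existing robust AGR could be plugged in here; a
        # coordinate-wise median is used purely as a placeholder.
        agg = np.median(sub, axis=0)
        # Accumulate an identification score: distance of each
        # client's sub-vector to the robust aggregate of this split.
        scores += np.linalg.norm(sub - agg, axis=1)
    # Keep the n - f clients with the smallest accumulated scores
    # and average their full, unmodified gradients.
    keep = np.argsort(scores)[: n - num_byzantine]
    return gradients[keep].mean(axis=0)
```

In this sketch, honest clients cluster near the per-split robust aggregates and receive low scores, while clients sending arbitrary gradients accumulate large scores across splits and are excluded from the final average.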

Related Publications

FLoRA: Federated Fine-Tuning Large Language Models with Heterogeneous Low-Rank Adaptations

NeurIPS, 2024
Lingjuan Lyu, Ziyao Wang, Zheyu Shen, Yexiao He, Guoheng Sun, Hongyi Wang, Ang Li

The rapid development of Large Language Models (LLMs) has been pivotal in advancing AI, with pre-trained LLMs being adaptable to diverse downstream tasks through fine-tuning. Federated learning (FL) further enhances fine-tuning in a privacy-aware manner by utilizing clients'…

pFedClub: Controllable Heterogeneous Model Aggregation for Personalized Federated Learning

NeurIPS, 2024
Jiaqi Wang*, Lingjuan Lyu, Fenglong Ma*, Qi Li

Federated learning, a pioneering paradigm, enables collaborative model training without exposing users’ data to central servers. Most existing federated learning systems necessitate uniform model structures across all clients, restricting their practicality. Several methods …

CURE4Rec: A Benchmark for Recommendation Unlearning with Deeper Influence

NeurIPS, 2024
Chaochao Chen*, Yizhao Zhang*, Lingjuan Lyu, Yuyuan Li*, Jiaming Zhang, Li Zhang, Biao Gong, Chenggang Yan

With increasing privacy concerns in artificial intelligence, regulations have mandated the right to be forgotten, granting individuals the right to withdraw their data from models. Machine unlearning has emerged as a potential solution to enable selective forgetting in model…

