
Byzantine-Robust Learning on Heterogeneous Data via Gradient Splitting

Yuchen Liu*

Chen Chen

Lingjuan Lyu

Fangzhao Wu*

Sai Wu*

Gang Chen*

* External authors

ICML 2023

Abstract

Federated learning is vulnerable to Byzantine attacks, in which Byzantine attackers send arbitrary gradients to the central server to destroy the convergence and degrade the performance of the global model. A wealth of defenses have been proposed against Byzantine attacks. However, Byzantine clients can still circumvent these defenses when the data is non-identically and independently distributed (non-IID). In this paper, we first reveal the root causes of the performance degradation of current robust AGgregation Rules (AGRs) in non-IID settings: the curse of dimensionality and gradient heterogeneity. To address this issue, we propose GAS, a gradient splitting based approach that can successfully adapt existing robust AGRs to ensure Byzantine robustness under non-IID settings. We also provide a detailed convergence analysis when existing robust AGRs are adapted to GAS. Experiments on various real-world datasets verify the efficacy of our proposed GAS.
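The gradient-splitting idea described above can be illustrated with a minimal sketch. This is not the paper's implementation: the function name `gas_aggregate`, the choice of the coordinate-wise median as the robust AGR, the distance-based scoring, and the half-of-clients selection rule are all illustrative assumptions; they only convey the shape of the approach (split each gradient into lower-dimensional pieces, robustly aggregate per piece, keep the clients whose pieces stay closest to the robust centers).

```python
import numpy as np

def gas_aggregate(grads, num_splits=4, num_selected=None):
    """Illustrative sketch of gradient-splitting aggregation (not the
    authors' code).

    grads: (n_clients, dim) array of client gradients.
    Splits each gradient into `num_splits` sub-vectors along the
    parameter dimension, robustly aggregates each split with the
    coordinate-wise median (a stand-in for any robust AGR), scores
    clients by the distance of their sub-vectors to the aggregated
    ones, and averages the gradients of the lowest-scoring clients.
    """
    grads = np.asarray(grads, dtype=float)
    n, dim = grads.shape
    if num_selected is None:
        num_selected = max(1, n // 2)  # assumed selection rule

    # Split into lower-dimensional pieces to blunt the curse of
    # dimensionality that the abstract identifies.
    splits = np.array_split(grads, num_splits, axis=1)

    scores = np.zeros(n)
    for sub in splits:                   # sub: (n, dim_k)
        center = np.median(sub, axis=0)  # robust AGR on this split
        scores += np.linalg.norm(sub - center, axis=1)

    # Keep the clients whose sub-vectors stay closest to the robust
    # centers, then average their full gradients.
    keep = np.argsort(scores)[:num_selected]
    return grads[keep].mean(axis=0)
```

With mostly honest clients clustered around a common gradient and a few clients sending extreme values, the extreme clients accumulate large scores on every split and are excluded before averaging.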

Related Publications

FedMef: Towards Memory-efficient Federated Dynamic Pruning

CVPR, 2024
Hong Huang, Weiming Zhuang, Chen Chen, Lingjuan Lyu

Federated learning (FL) promotes decentralized training while prioritizing data confidentiality. However, its application on resource-constrained devices is challenging due to the high demand for computation and memory resources for training deep learning models. Neural netw…

DIAGNOSIS: Detecting Unauthorized Data Usages in Text-to-image Diffusion Models

ICLR, 2024
Zhenting Wang, Chen Chen, Lingjuan Lyu, Dimitris N. Metaxas*, Shiqing Ma*

Recent text-to-image diffusion models have shown surprising performance in generating high-quality images. However, concerns have arisen regarding the unauthorized data usage during the training or fine-tuning process. One example is when a model trainer collects a set of im…

FedWon: Triumphing Multi-domain Federated Learning Without Normalization

ICLR, 2024
Weiming Zhuang, Lingjuan Lyu

Federated learning (FL) enhances data privacy with collaborative in-situ training on decentralized clients. Nevertheless, FL encounters challenges due to non-independent and identically distributed (non-i.i.d) data, leading to potential performance degradation and hindered c…

