GEAR: A Margin-based Federated Adversarial Training Approach

Chen Chen

Jie Zhang

Lingjuan Lyu




Previous studies have shown that federated learning (FL) is vulnerable to well-crafted adversarial examples. Recent efforts have combined adversarial training with FL, i.e., federated adversarial training (FAT), to achieve adversarial robustness in FL. However, most existing FAT works suffer from either low natural accuracy or low robust accuracy, and none of them provides a deeper understanding of the challenges behind adversarial robustness in FL. To address these issues, we propose a novel marGin-based fEderated Adversarial tRaining approach called GEAR. It encourages the minority classes to have larger margins by introducing a margin-based cross-entropy loss, and regularizes the decision boundary to be smooth via a regularization loss, thus providing a better decision boundary for the global model. To the best of our knowledge, this work is the first to investigate the impact of the decision boundary on FAT, and it delivers the best natural and robust accuracy in FL to date. Extensive experiments on multiple datasets across various settings validate the effectiveness of our method. For example, on the SVHN dataset, GEAR improves the natural accuracy and robust accuracy (against the FGSM attack) of the best baseline method (FedTRADES) by 20.17% and 10.73%, respectively.
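The abstract describes a margin-based cross-entropy loss that enlarges the margins of minority classes. The page does not spell out the exact formulation, but the general idea can be sketched as subtracting a per-class margin from the true-class logit before the softmax, with larger margins assigned to rarer classes. The sketch below is a minimal NumPy illustration under those assumptions; the `class_margins` helper and its LDAM-style n^(-1/4) scaling are illustrative choices, not GEAR's exact loss.

```python
import numpy as np

def class_margins(class_counts, base_margin=0.5):
    # Assumption: larger margins for minority classes, scaled as
    # n_c^(-1/4) (an LDAM-style heuristic) and normalized so the
    # rarest class gets margin `base_margin`.
    m = 1.0 / np.power(np.asarray(class_counts, dtype=float), 0.25)
    return base_margin * m / m.max()

def margin_cross_entropy(logits, labels, margins):
    # Subtract the per-class margin from the true-class logit before
    # the softmax, so the model must separate that class by an extra
    # margin to achieve the same loss.
    z = np.array(logits, dtype=float, copy=True)
    idx = np.arange(len(labels))
    z[idx, labels] -= margins[labels]
    z -= z.max(axis=1, keepdims=True)  # numerical stability
    log_probs = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
    return -log_probs[idx, labels].mean()
```

With zero margins this reduces to the ordinary cross-entropy; positive margins on the true class strictly increase the loss, which is what pushes the decision boundary away from minority-class samples.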

Related Publications

Privacy Assessment on Reconstructed Images: Are Existing Evaluation Metrics Faithful to Human Perception?

NeurIPS, 2023
Xiaoxiao Sun*, Nidham Gazagnadou, Vivek Sharma, Lingjuan Lyu, Hongdong Li*, Liang Zheng*

Hand-crafted image quality metrics, such as PSNR and SSIM, are commonly used to evaluate model privacy risk under reconstruction attacks. Under these metrics, reconstructed images that are determined to resemble the original one generally indicate more privacy leakage. Image…

UltraRE: Enhancing RecEraser for Recommendation Unlearning via Error Decomposition

NeurIPS, 2023
Yuyuan Li*, Chaochao Chen*, Yizhao Zhang*, Weiming Liu*, Lingjuan Lyu, Xiaolin Zheng*, Dan Meng*, Jun Wang*

With growing concerns regarding privacy in machine learning models, regulations have committed to granting individuals the right to be forgotten while mandating companies to develop non-discriminatory machine learning systems, thereby fueling the study of the machine unlearn…

Towards Personalized Federated Learning via Heterogeneous Model Reassembly

NeurIPS, 2023
Jiaqi Wang*, Xingyi Yang*, Suhan Cui*, Liwei Che*, Lingjuan Lyu, Dongkuan Xu*, Fenglong Ma*

This paper focuses on addressing the practical yet challenging problem of model heterogeneity in federated learning, where clients possess models with different network structures. To tackle this problem, we propose a novel framework called pFedHR, which leverages heterogeneo…


