Privacy and Robustness in Federated Learning: Attacks and Defenses

Abstract

As data are increasingly stored in different silos and societies become more aware of data privacy issues, the traditional centralized training of artificial intelligence (AI) models faces efficiency and privacy challenges. Recently, federated learning (FL) has emerged as an alternative solution and continues to thrive in this new reality. Existing FL protocol designs have been shown to be vulnerable to adversaries within or outside of the system, compromising data privacy and system robustness. Besides training powerful global models, it is of paramount importance to design FL systems that have privacy guarantees and are resistant to different types of adversaries. In this paper, we conduct a comprehensive survey on privacy and robustness in federated learning over the past five years. Through a concise introduction to the concept of FL and a unique taxonomy covering: 1) threat models; 2) privacy attacks and defenses; and 3) poisoning attacks and defenses, we provide an accessible review of this important topic. We highlight the intuitions, key techniques, and fundamental assumptions adopted by various attacks and defenses. Finally, we discuss promising future research directions towards robust and privacy-preserving FL, and their interplays with the multidisciplinary goals of FL.
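To make the FL setting concrete, the following is a minimal sketch of federated averaging (FedAvg), the canonical FL protocol that surveys of this kind build on: clients train locally on private data and a server averages the returned weights. All function names, the linear toy model, and the data here are illustrative assumptions, not code from the paper.

```python
import numpy as np

def local_update(weights, data, lr=0.1):
    """One hypothetical client-side pass of SGD on private data.

    `data` is a list of (x, y) pairs; the model is a simple linear
    predictor y ≈ w · x, so we descend the squared-error gradient.
    """
    w = weights.copy()
    for x, y in data:
        grad = 2 * (w @ x - y) * x  # d/dw (w·x - y)^2
        w -= lr * grad
    return w

def fedavg(global_w, client_datasets):
    """Server round: each client trains locally; the server averages
    the returned weights, weighted by client dataset size."""
    updates, sizes = [], []
    for data in client_datasets:
        updates.append(local_update(global_w, data))
        sizes.append(len(data))
    return np.average(updates, axis=0, weights=np.array(sizes, float))

# Toy run: two clients holding 1-D linear data with true weight 3.0.
rng = np.random.default_rng(0)
clients = [
    [(np.array([x]), 3.0 * x) for x in rng.uniform(-1, 1, 20)]
    for _ in range(2)
]
w = np.zeros(1)
for _ in range(50):
    w = fedavg(w, clients)
print(w)  # converges toward the true weight, [3.0]
```

Note that only model weights leave each client, never raw data; the attacks surveyed in the paper (e.g., gradient inversion or poisoned updates) exploit exactly this weight-exchange channel.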

Authors

  • Lingjuan Lyu
  • Han Yu*
  • Xingjun Ma*
  • Chen Chen
  • Lichao Sun*
  • Jun Zhao*
  • Qiang Yang*
  • Philip S. Yu*

*External Authors

Venue

TNNLS 2022

Date

2022
