Venue
- IEEE Transactions on Big Data
Date
- 2022
Practical Attribute Reconstruction Attack Against Federated Learning
Chen Chen
Han Yu*
Gang Chen*
* External authors
Abstract
Existing federated learning (FL) designs have been shown to exhibit vulnerabilities that can be exploited by adversaries to compromise data privacy. However, most existing works mount attacks using gradients computed on a small batch of data. This setting is not realistic, as in FL gradients are normally shared after at least one epoch of local training on each participant's data for communication efficiency. In this work, we conduct a systematic evaluation of attribute reconstruction attacks (ARA) launched by a malicious server in the FL system, and empirically demonstrate that local model gradients shared after one epoch of local training can still reveal sensitive attributes of the local training data. To demonstrate this leakage, we develop a more effective and efficient gradient matching based method, called cos-matching, to reconstruct sensitive attributes of any victim participant's training data. Based on the reconstructed training data attributes, we further show that an attacker can even reconstruct the sensitive attributes of records that are not included in any participant's training data, thus opening a new attack surface in FL. Extensive experiments show that the proposed method achieves better attribute attack performance than existing state-of-the-art methods.
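The abstract describes cos-matching only at a high level. The sketch below is a hedged illustration of the general idea of cosine-similarity gradient matching: the attacker optimizes an unknown sensitive attribute column so that gradients computed on the candidate batch align with the update the victim shared. All names (global_model, observed_update, known_features, labels) are illustrative assumptions, not the authors' implementation, and the shared update is treated here as a single gradient surrogate.

```python
# Hedged sketch of cosine-similarity gradient matching for attribute
# reconstruction. Names and signatures are assumptions for illustration.
import torch
import torch.nn.functional as F

def cos_matching_attack(global_model, observed_update, known_features, labels,
                        n_steps=200, lr=0.1):
    """Recover an unknown (sensitive) attribute column by making the gradient
    of a candidate batch align, in cosine similarity, with the shared update."""
    # Initialise the unknown attribute column randomly; it is the optimisation variable.
    sensitive = torch.rand(known_features.size(0), 1, requires_grad=True)
    optimizer = torch.optim.Adam([sensitive], lr=lr)

    for _ in range(n_steps):
        optimizer.zero_grad()
        # Candidate batch = known (non-sensitive) attributes + current guess.
        batch = torch.cat([known_features, sensitive], dim=1)
        loss = F.cross_entropy(global_model(batch), labels)
        # Gradients of the candidate batch w.r.t. the global model parameters.
        grads = torch.autograd.grad(loss, global_model.parameters(),
                                    create_graph=True)
        # Cosine distance between candidate gradients and the observed update.
        flat_g = torch.cat([g.flatten() for g in grads])
        flat_o = torch.cat([o.flatten() for o in observed_update])
        match_loss = 1.0 - F.cosine_similarity(flat_g, flat_o, dim=0)
        match_loss.backward()
        optimizer.step()
        # Keep the reconstructed attribute in a valid (normalised) range.
        with torch.no_grad():
            sensitive.clamp_(0.0, 1.0)

    return sensitive.detach()
```

In this sketch the cosine objective matches only gradient directions, which is what makes it usable when the shared update aggregates many local steps; the attacker then thresholds or rescales the recovered column to read off the sensitive attribute values.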