Venue
- IEEE Transactions on Big Data
Date
- 2022
Practical Attribute Reconstruction Attack Against Federated Learning
Chen Chen
Han Yu*
Gang Chen*
* External authors
Abstract
Existing federated learning (FL) designs have been shown to exhibit vulnerabilities that adversaries can exploit to compromise data privacy. However, most current works mount attacks using gradients computed on a small batch of data. This setting is unrealistic: for communication efficiency, gradients in FL are normally shared only after at least one epoch of local training on each participant's data. In this work, we conduct a systematic evaluation of the attribute reconstruction attack (ARA) launched by a malicious server in an FL system, and empirically demonstrate that local model gradients shared after one epoch of local training can still reveal sensitive attributes of the local training data. To demonstrate this leakage, we develop a more effective and efficient gradient-matching method, called cos-matching, to reconstruct the sensitive attributes of any victim participant's training data. Based on the reconstructed attributes, we further show that an attacker can reconstruct the sensitive attributes even of records not included in any participant's training data, opening a new attack surface in FL. Extensive experiments show that the proposed method achieves better attribute attack performance than existing state-of-the-art methods.
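The core gradient-matching idea behind such attacks can be illustrated with a minimal, hypothetical sketch (not the paper's actual cos-matching optimization): the attacker, who knows the model weights and a record's non-sensitive attributes, tries each candidate value of the sensitive attribute and keeps the one whose resulting gradient has the highest cosine similarity to the gradient the victim shared. Here a single-example logistic-regression gradient and a binary attribute stand in for the general setting; all names are illustrative.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def grad_logreg(w, x, y):
    # Gradient of the cross-entropy loss for one example under logistic regression.
    return (sigmoid(w @ x) - y) * x

def cos_sim(a, b):
    return (a @ b) / (np.linalg.norm(a) * np.linalg.norm(b))

def cos_matching_attack(w, g_obs, x_known, y, candidates, attr_idx):
    """Return the candidate attribute value whose gradient best matches g_obs.

    w        : model weights known to the (malicious) server
    g_obs    : gradient observed from the victim participant
    x_known  : the record with the sensitive attribute at attr_idx unknown
    candidates: possible values of the sensitive attribute to try
    """
    best_val, best_sim = None, -np.inf
    for v in candidates:
        x = x_known.copy()
        x[attr_idx] = v                      # plug in a candidate attribute value
        sim = cos_sim(grad_logreg(w, x, y), g_obs)
        if sim > best_sim:
            best_val, best_sim = v, sim
    return best_val

rng = np.random.default_rng(0)
w = rng.normal(size=5)
x_true = rng.normal(size=5)
x_true[2] = 1.0                              # hidden binary sensitive attribute
g_obs = grad_logreg(w, x_true, 1.0)          # gradient the victim shares
recovered = cos_matching_attack(w, g_obs, x_true, 1.0, [0.0, 1.0], attr_idx=2)
print(recovered)
```

Because the true attribute value reproduces the observed gradient exactly (cosine similarity 1.0), the grid search recovers it; the paper's setting is harder, since the shared gradient aggregates a full epoch of local updates rather than a single example.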
Related Publications
Large Language Models (LLMs) and Vision-Language Models (VLMs) have made significant advancements in a wide range of natural language processing and vision-language tasks. Access to large web-scale datasets has been a key factor in their success. However, concerns have been …
Federated Learning (FL) is notorious for its vulnerability to Byzantine attacks. Most current Byzantine defenses share a common inductive bias: among all the gradients, the densely distributed ones are more likely to be honest. However, such a bias is a poison to Byzantine r…
The rapid development of Large Language Models (LLMs) has been pivotal in advancing AI, with pre-trained LLMs being adaptable to diverse downstream tasks through fine-tuning. Federated learning (FL) further enhances fine-tuning in a privacy-aware manner by utilizing clients'…