
Practical Attribute Reconstruction Attack Against Federated Learning

Chen Chen

Lingjuan Lyu

Han Yu*

Gang Chen*

* External authors

IEEE Transactions on Big Data

2022

Abstract

Existing federated learning (FL) designs have been shown to exhibit vulnerabilities that adversaries can exploit to compromise data privacy. However, most current works conduct attacks by leveraging gradients computed on a small batch of data. This setting is unrealistic: for communication efficiency, FL participants normally share gradients only after at least one epoch of local training on their local data. In this work, we conduct a systematic evaluation of the attribute reconstruction attack (ARA) launched by a malicious server in the FL system, and empirically demonstrate that local model gradients shared after one epoch of local training can still reveal sensitive attributes of the local training data. To demonstrate this leakage, we develop a more effective and efficient gradient-matching-based method, called cos-matching, to reconstruct the sensitive attributes of any victim participant's training data. Based on the reconstructed training-data attributes, we further show that an attacker can reconstruct the sensitive attributes even of records that are not included in any participant's training data, thus opening a new attack surface in FL. Extensive experiments show that the proposed method achieves better attribute-attack performance than existing state-of-the-art methods.
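To give a feel for the gradient-matching idea behind cos-matching, the sketch below is a minimal, hypothetical illustration (not the paper's implementation): a malicious server that knows a victim's model weights, labels, and non-sensitive features tries each candidate value for one sensitive attribute, and keeps the value whose induced gradient has the highest cosine similarity to the gradient the victim actually shared. A simple logistic-regression model and a discrete candidate set are assumed here purely for brevity; the paper's method operates on neural-network gradients after a full epoch of local training.

```python
import numpy as np

def lr_gradient(w, X, y):
    """Gradient of the logistic loss for weights w on batch (X, y)."""
    p = 1.0 / (1.0 + np.exp(-X @ w))
    return X.T @ (p - y) / len(y)

def cosine_sim(a, b):
    """Cosine similarity between two flattened gradient vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def reconstruct_attribute(w, grad_obs, X_known, y, attr_idx, candidates):
    """Pick the candidate attribute value whose induced gradient best
    matches the observed gradient under cosine similarity."""
    best_val, best_sim = None, -np.inf
    for v in candidates:
        X_try = X_known.copy()
        X_try[:, attr_idx] = v          # hypothesize the sensitive attribute
        sim = cosine_sim(lr_gradient(w, X_try, y), grad_obs)
        if sim > best_sim:
            best_val, best_sim = v, sim
    return best_val, best_sim

# Toy demonstration: the victim's true sensitive attribute (column 2) is 1.0.
rng = np.random.default_rng(0)
X = rng.normal(size=(8, 5))
X[:, 2] = 1.0                           # ground-truth sensitive attribute
y = (rng.random(8) > 0.5).astype(float)
w = rng.normal(size=5)
grad_shared = lr_gradient(w, X, y)      # what the victim sends to the server

val, sim = reconstruct_attribute(w, grad_shared, X, y, 2, [0.0, 1.0])
```

In this toy setting the correct candidate reproduces the shared gradient exactly, so its cosine similarity is (numerically) 1; in the realistic multi-epoch setting the match is only approximate, which is where a robust similarity measure matters.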

Related Publications

Finding a needle in a haystack: A Black-Box Approach to Invisible Watermark Detection

ECCV, 2024
Minzhou Pan*, Zhenting Wang, Xin Dong, Vikash Sehwag, Lingjuan Lyu, Xue Lin*

In this paper, we propose WaterMark Detection (WMD), the first invisible watermark detection method under a black-box and annotation-free setting. WMD is capable of detecting arbitrary watermarks within a given reference dataset using a clean non-watermarked dataset as a ref…

PerceptAnon: Exploring the Human Perception of Image Anonymization Beyond Pseudonymization for GDPR

ICML, 2024
Kartik Patwari, Chen-Nee Chuah*, Lingjuan Lyu, Vivek Sharma

Current image anonymization techniques largely focus on localized pseudonymization, typically modify identifiable features like faces or full bodies and evaluate anonymity through metrics such as detection and re-identification rates. However, this approach often overlooks …

COALA: A Practical and Vision-Centric Federated Learning Platform

ICML, 2024
Weiming Zhuang, Jian Xu, Chen Chen, Jingtao Li, Lingjuan Lyu

We present COALA, a vision-centric Federated Learning (FL) platform, and a suite of benchmarks for practical FL scenarios, which we categorize as task, data, and model levels. At the task level, COALA extends support from simple classification to 15 computer vision tasks, in…

