Authors
- Chaochao Chen*
- Longfei Zheng*
- Huiwen Wu*
- Lingjuan Lyu
- Jun Zhou*
- Jia Wu*
- Bingzhe Wu*
- Ziqi Liu*
- Li Wang*
- Xiaolin Zheng*
* External authors
Venue
- IJCAI 2022
Date
- 2022
Vertically Federated Graph Neural Network for Privacy-Preserving Node Classification
Abstract
Graph Neural Network (GNN) has achieved remarkable progress in various real-world tasks on graph data. High-performance GNN models always depend on both rich features and complete edge information in the graph. In practice, however, such information may be isolated across different data holders, the so-called data isolation problem. To solve this problem, in this paper we propose Vertically Federated Graph Neural Network (VFGNN), a federated GNN learning paradigm for the privacy-preserving node classification task under the vertically partitioned data setting, which can be generalized to existing GNN models. Specifically, we split the computation graph into two parts: we leave the computations related to private data (i.e., features, edges, and labels) on the data holders, and delegate the remaining computations to a semi-honest server. We also propose to apply differential privacy to prevent potential information leakage from the server. We conduct experiments on three benchmarks, and the results demonstrate the effectiveness of VFGNN.
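The split described in the abstract can be sketched as follows. This is a minimal illustrative sketch, not the paper's actual implementation: each data holder runs one local graph-aggregation step on its private features and edges, perturbs the resulting node embeddings with Laplace noise (standing in for the differential-privacy mechanism), and releases only the noisy embeddings to a semi-honest server, which performs the remaining, non-private computation. The aggregation rule, noise mechanism, and server layer here are simplified assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def local_embedding(features, adj):
    """Data holder side: one local aggregation step on private data.
    A plain mean-neighbour aggregation stands in for any GNN layer."""
    deg = adj.sum(axis=1, keepdims=True) + 1e-12
    return (adj @ features) / deg

def dp_perturb(h, clip=1.0, eps=1.0):
    """Clip each embedding row to L2 norm `clip`, then add Laplace noise
    before the embedding leaves the data holder (illustrative DP step)."""
    norms = np.linalg.norm(h, axis=1, keepdims=True)
    h = h * np.minimum(1.0, clip / (norms + 1e-12))
    return h + rng.laplace(scale=2.0 * clip / eps, size=h.shape)

def server_combine(local_hs, w):
    """Semi-honest server side: combine the noisy embeddings from all
    holders and apply the non-private part of the model."""
    h = np.concatenate(local_hs, axis=1)  # vertical split -> concat features
    return np.maximum(h @ w, 0.0)         # e.g. a ReLU layer on the server

# Two holders share the same 4 nodes but hold different features and edges.
adj_a = np.array([[0, 1, 0, 0],
                  [1, 0, 1, 0],
                  [0, 1, 0, 1],
                  [0, 0, 1, 0]], dtype=float)
adj_b = np.eye(4)
x_a = rng.normal(size=(4, 3))  # holder A: 3 private features per node
x_b = rng.normal(size=(4, 2))  # holder B: 2 private features per node

h_a = dp_perturb(local_embedding(x_a, adj_a))
h_b = dp_perturb(local_embedding(x_b, adj_b))
out = server_combine([h_a, h_b], rng.normal(size=(5, 2)))
print(out.shape)  # (4, 2): per-node outputs for the 4 shared nodes
```

Only the perturbed embeddings cross the holder-server boundary; the raw features, edges, and the per-holder aggregation never leave the data holders.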