DENSE: Data-Free One-Shot Federated Learning

Abstract

One-shot Federated Learning (FL) has recently emerged as a promising approach, which allows the central server to learn a model in a single communication round. Despite the low communication cost, existing one-shot FL methods are mostly impractical or face inherent limitations, e.g. a public dataset is required, clients' models are homogeneous, and additional data/model information must be uploaded. To overcome these issues, we propose a novel two-stage Data-freE oNe-Shot federated lEarning (DENSE) framework, which trains the global model through a data generation stage and a model distillation stage. DENSE is a practical one-shot FL method that can be applied in reality due to the following advantages: (1) DENSE requires no additional information (except the model parameters) to be transferred between clients and the server, unlike other methods; (2) DENSE does not require any auxiliary dataset for training; (3) DENSE considers model heterogeneity in FL, i.e. different clients can have different model architectures. Experiments on a variety of real-world datasets demonstrate the superiority of our method. For example, DENSE outperforms the best baseline method Fed-ADI by 5.08% on the CIFAR-10 dataset. Our code will soon be available.
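The two-stage pipeline in the abstract can be illustrated with a deliberately simplified sketch. This is not the paper's method: the clients below are toy linear classifiers, the "generator" is replaced by directly optimizing synthetic inputs so the client ensemble is confident on them, and distillation is reduced to a least-squares fit of the server model to the ensemble's outputs on that synthetic data. All names and objectives here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins: each client holds a linear classifier W (classes x dim).
# Hypothetical setup for illustration; DENSE uses neural networks and a
# trained generator, not the simplifications below.
DIM, CLASSES, N_CLIENTS = 8, 3, 4
clients = [rng.normal(size=(CLASSES, DIM)) for _ in range(N_CLIENTS)]
W_avg = np.mean(clients, axis=0)

def softmax(z):
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def ensemble_logits(x):
    # Average the clients' outputs: the server-side ensemble "teacher".
    return np.mean([(W @ x.T).T for W in clients], axis=0)

# Stage 1 (data generation, simplified): optimize synthetic inputs so the
# ensemble confidently predicts random pseudo-labels -- a stand-in for
# training a generator against the ensemble, with no real data involved.
x_syn = rng.normal(size=(32, DIM))
labels = rng.integers(0, CLASSES, size=32)
onehot = np.eye(CLASSES)[labels]
for _ in range(200):
    p = softmax(ensemble_logits(x_syn))
    grad = (p - onehot) @ W_avg        # cross-entropy gradient w.r.t. inputs
    x_syn -= 0.1 * grad

# Stage 2 (model distillation, simplified): fit the global model to the
# ensemble's outputs on the synthetic data via least squares on logits.
teacher = ensemble_logits(x_syn)
B, *_ = np.linalg.lstsq(x_syn, teacher, rcond=None)
W_server = B.T

student = (W_server @ x_syn.T).T
mse = float(np.mean((student - teacher) ** 2))
print("distillation logit MSE:", mse)
```

The point of the sketch is the data flow: only client model parameters reach the server, synthetic data is produced by querying the ensemble, and the global model is trained purely on that synthetic data, so no public or auxiliary dataset is ever needed.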

Authors

  • Jie Zhang*
  • Chen Chen
  • Bo Li*
  • Lingjuan Lyu
  • Shuang Wu*
  • Shouhong Ding*
  • Chunhua Shen*
  • Chao Wu*

*External Authors

Venue

NeurIPS 2022

Date

2022
