
Communication-Efficient Federated Learning via Knowledge Distillation

Yongfeng Huang*

Chuhan Wu*

Fangzhao Wu*

Lingjuan Lyu

Xing Xie*

* External authors

Nature Communications

2022

Abstract

Federated learning is a privacy-preserving machine learning technique for training intelligent models from decentralized data: private data are exploited by communicating local model updates in each round of model learning rather than the raw data. However, model updates can be extremely large when models contain numerous parameters, and many rounds of communication are needed for model training. This huge communication cost in federated learning imposes heavy overheads on clients and high environmental burdens. Here, we present a federated learning method named FedKD that is both communication-efficient and effective, based on adaptive mutual knowledge distillation and dynamic gradient compression techniques. FedKD is validated on three different scenarios that need privacy protection, showing that it can reduce communication cost by up to 94.89% while achieving results competitive with centralized model learning. FedKD offers a way to efficiently deploy privacy-preserving intelligent systems in many scenarios, such as intelligent healthcare and personalization.
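To make the two components named in the abstract concrete, below is a minimal sketch of one plausible reading of them: a mutual knowledge-distillation loss between a large local teacher and a small shared student, and low-rank (SVD) compression of the parameter update that would be communicated. All function names (mutual_distillation_losses, compress_update, etc.), the temperature, and the energy threshold are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np


def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)


def kl_divergence(p, q, eps=1e-12):
    # KL(p || q), averaged over the batch.
    return np.mean(np.sum(p * (np.log(p + eps) - np.log(q + eps)), axis=-1))


def mutual_distillation_losses(teacher_logits, student_logits, temperature=2.0):
    # Hypothetical mutual-distillation objective: the large local teacher and the
    # small shared student each learn from the other's softened predictions.
    t = softmax(teacher_logits / temperature)
    s = softmax(student_logits / temperature)
    loss_student = kl_divergence(t, s)  # student mimics the teacher
    loss_teacher = kl_divergence(s, t)  # teacher also absorbs student knowledge
    return loss_student, loss_teacher


def compress_update(update, energy_threshold=0.95):
    # Low-rank SVD compression of a 2-D parameter update, keeping just enough
    # singular values to retain `energy_threshold` of the total energy.
    u, s, vt = np.linalg.svd(update, full_matrices=False)
    energy = np.cumsum(s ** 2) / np.sum(s ** 2)
    rank = int(np.searchsorted(energy, energy_threshold)) + 1
    return u[:, :rank], s[:rank], vt[:rank, :]


def decompress_update(u, s, vt):
    # Reconstruct the (approximate) update on the receiving side.
    return (u * s) @ vt


if __name__ == "__main__":
    rng = np.random.default_rng(0)

    # Toy batch of logits for mutual distillation.
    teacher_logits = rng.normal(size=(4, 10))
    student_logits = rng.normal(size=(4, 10))
    print(mutual_distillation_losses(teacher_logits, student_logits))

    # Compress a toy 256x64 update matrix before "uploading" it.
    grad = rng.normal(size=(256, 64)) @ rng.normal(size=(64, 64)) * 0.01
    u, s, vt = compress_update(grad, energy_threshold=0.95)
    sent = u.size + s.size + vt.size
    print(f"kept rank {len(s)}, sent {sent} floats instead of {grad.size}")
```

In this reading, only the small student's compressed updates are exchanged with the server, while the large teacher stays on the client; a dynamic variant could tighten or relax the energy threshold over training rounds.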


