FLoRA: Federated Fine-Tuning Large Language Models with Heterogeneous Low-Rank Adaptations

Lingjuan Lyu

Ziyao Wang

Zheyu Shen

Yexiao He

Guoheng Sun

Hongyi Wang

Ang Li

NeurIPS 2024


Abstract

The rapid development of Large Language Models (LLMs) has been pivotal in advancing AI, with pre-trained LLMs being adaptable to diverse downstream tasks through fine-tuning. Federated learning (FL) further enhances fine-tuning in a privacy-aware manner by utilizing clients' local data through in-situ computation, eliminating the need for data movement. However, fine-tuning LLMs, given their massive scale of parameters, poses challenges for clients with constrained and heterogeneous resources in FL. Previous methods employed low-rank adaptation (LoRA) for efficient federated fine-tuning but applied traditional FL aggregation strategies to the LoRA adapters. This approach introduces noise from mathematically inexact aggregation, reducing fine-tuning effectiveness, and cannot handle LoRA adapters of heterogeneous ranks. In this work, we first highlight the mathematical incorrectness of LoRA aggregation in existing federated fine-tuning methods. We then introduce a new approach called FLoRA that enables federated fine-tuning on heterogeneous LoRA adapters across clients through a novel stacking-based aggregation method. Our approach is noise-free and seamlessly supports heterogeneous LoRAs. Extensive experiments demonstrate FLoRA's superior performance in both homogeneous and heterogeneous settings, surpassing state-of-the-art methods. We envision this work as a milestone for efficient, privacy-preserving, and accurate federated fine-tuning of LLMs.
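The aggregation argument in the abstract can be sketched numerically. Averaging the LoRA factors A and B separately (the FedAvg-style approach) is inexact because avg(B) @ avg(A) ≠ avg(B @ A) in general, whereas stacking the per-client B factors column-wise and A factors row-wise reproduces the weighted sum of client updates exactly, even when ranks differ. This is a minimal NumPy illustration of that identity, not the paper's implementation; the ranks, weights, and variable names are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
d, k = 8, 6
ranks = [2, 4, 3]            # heterogeneous LoRA ranks per client (hypothetical)
weights = [0.5, 0.3, 0.2]    # aggregation weights, e.g. data-proportional

# Per-client LoRA factors: client i's weight update is delta_W_i = B_i @ A_i
Bs = [rng.normal(size=(d, r)) for r in ranks]
As = [rng.normal(size=(r, k)) for r in ranks]

# Stacking-based aggregation: fold each client's weight into one factor,
# then concatenate B's along columns and A's along rows.
B_stacked = np.concatenate([w * B for w, B in zip(weights, Bs)], axis=1)
A_stacked = np.concatenate(As, axis=0)

# Block-matrix multiplication gives the exact weighted sum of updates:
# B_stacked @ A_stacked = sum_i w_i * B_i @ A_i, with no aggregation noise.
exact = sum(w * B @ A for w, B, A in zip(weights, Bs, As))
assert np.allclose(B_stacked @ A_stacked, exact)
```

Note that the stacked factors have inner dimension sum(ranks), so clients with different ranks contribute without any padding or averaging of mismatched shapes.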

Related Publications

pFedClub: Controllable Heterogeneous Model Aggregation for Personalized Federated Learning

NeurIPS, 2024
Jiaqi Wang*, Lingjuan Lyu, Fenglong Ma*, Qi Li

Federated learning, a pioneering paradigm, enables collaborative model training without exposing users’ data to central servers. Most existing federated learning systems necessitate uniform model structures across all clients, restricting their practicality. Several methods …

CURE4Rec: A Benchmark for Recommendation Unlearning with Deeper Influence

NeurIPS, 2024
Chaochao Chen*, Yizhao Zhang*, Lingjuan Lyu, Yuyuan Li*, Jiaming Zhang, Li Zhang, Biao Gong, Chenggang Yan

With increasing privacy concerns in artificial intelligence, regulations have mandated the right to be forgotten, granting individuals the right to withdraw their data from models. Machine unlearning has emerged as a potential solution to enable selective forgetting in model…

FEDMEKI: A Benchmark for Scaling Medical Foundation Models via Federated Knowledge Injection

NeurIPS, 2024
Jiaqi Wang*, Lingjuan Lyu, Fenglong Ma*, Xiaochen Wang, Jinghui Chen

This study introduces the Federated Medical Knowledge Injection (FedMEKI) platform, a new benchmark designed to address the unique challenges of integrating medical knowledge into foundation models under privacy constraints. By leveraging a cross-silo federated learning appr…

