
FedBERT: When Federated Learning Meets Pre-Training

Abstract

The rapid growth of pre-trained models (PTMs) has brought natural language processing (NLP) into a new era, making PTMs a dominant technique for a wide range of NLP applications. Any user can download the weights of a PTM and fine-tune them on a downstream task locally. However, pre-training such a model relies heavily on access to large-scale training data and requires vast computing resources, putting pre-training out of reach for any single client. To allow clients with limited computing capability to participate in pre-training a large model, this paper proposes FedBERT, a new learning approach that combines federated learning and split learning to pre-train BERT in a federated way. FedBERT avoids sharing clients' raw data while achieving excellent performance. Extensive experiments on seven GLUE tasks demonstrate that FedBERT maintains its effectiveness without communicating the sensitive local data of clients.
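The split-learning idea the abstract refers to can be illustrated with a minimal sketch. The sketch below is an illustrative assumption, not the paper's implementation: the class names, layer sizes, single-client loop, and the particular split (the client keeps a lightweight embedding layer and LM head, while the server hosts the compute-heavy transformer body, so only activations and their gradients cross the boundary) are chosen here for clarity of the general pattern.

```python
import torch
import torch.nn as nn

# Hypothetical toy sizes, for illustration only.
VOCAB, HIDDEN, SEQ = 1000, 64, 16

class ClientSide(nn.Module):
    """Client-held layers (assumed split): token embedding and LM head.
    Raw token ids never leave the client; only hidden activations do."""
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(VOCAB, HIDDEN)
        self.head = nn.Linear(HIDDEN, VOCAB)

class ServerSide(nn.Module):
    """Server-held transformer body, the compute-heavy part."""
    def __init__(self):
        super().__init__()
        layer = nn.TransformerEncoderLayer(HIDDEN, nhead=4, batch_first=True)
        self.body = nn.TransformerEncoder(layer, num_layers=2)

client, server = ClientSide(), ServerSide()
opt_c = torch.optim.Adam(client.parameters(), lr=1e-3)
opt_s = torch.optim.Adam(server.parameters(), lr=1e-3)

tokens = torch.randint(0, VOCAB, (2, SEQ))  # private local data (toy)
labels = tokens.clone()                     # toy language-modeling target

# --- one split-learning training step ---
opt_c.zero_grad(); opt_s.zero_grad()

h_client = client.embed(tokens)                        # client forward
h_server_in = h_client.detach().requires_grad_(True)   # "sent" to server
h_server_out = server.body(h_server_in)                # server forward
logits = client.head(h_server_out)                     # back on the client

loss = nn.functional.cross_entropy(logits.view(-1, VOCAB), labels.view(-1))
loss.backward()                        # grads reach server body and head
h_client.backward(h_server_in.grad)    # gradient "returned" to the embedding

opt_c.step(); opt_s.step()
print(f"loss: {loss.item():.4f}")
```

In a full federated setting, many clients would run this step against the shared server body, which is the piece that no single resource-limited client could train alone; the detach-and-backward pair marks exactly where the network boundary would sit.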

Authors

  • Yuanyishu Tian*
  • Yao Wan*
  • Lingjuan Lyu
  • Dezhong Yao*
  • Hai Jin*
  • Lichao Sun*

*External Authors

Venue

ACM Transactions on Intelligent Systems and Technology

Date

2022
