SAFT: Towards Out-of-Distribution Generalization in Fine-Tuning

Bac Nguyen

Stefan Uhlich*

Fabien Cardinaux*

Lukas Mauch*

Marzieh Edraki*

Aaron Courville*

* External authors

ECCV-24

2024

Abstract

Handling distribution shifts from training data, known as out-of-distribution (OOD) generalization, poses a significant challenge in machine learning. While pre-trained vision-language models like CLIP have demonstrated remarkable zero-shot performance, further adapting the model to downstream tasks leads to undesirable degradation on OOD data. In this work, we introduce Sparse Adaptation for Fine-Tuning (SAFT), a method that prevents fine-tuning from forgetting the general knowledge in the pre-trained model. SAFT updates only a small subset of important parameters whose gradient magnitude is large, while keeping the other parameters frozen. SAFT is conceptually simple and straightforward to implement. Extensive experiments show that with only 0.1% of the model parameters, SAFT can significantly improve the performance of CLIP. It consistently outperforms baseline methods across several benchmarks. On the few-shot learning benchmark of ImageNet and its variants, SAFT gives a gain of 5.15% on average over the conventional fine-tuning method in OOD settings.
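The selection rule described above — rank parameters by gradient magnitude, update only the top fraction, and freeze the rest — can be sketched as follows. This is a minimal NumPy illustration, not the authors' implementation; the learning rate, sparsity level, and random data are placeholders for the real CLIP fine-tuning setup.

```python
import numpy as np

def saft_mask(grad, sparsity=0.001):
    """Boolean mask selecting the top `sparsity` fraction of
    parameters by gradient magnitude; all others stay frozen."""
    k = max(1, int(sparsity * grad.size))
    # Threshold = magnitude of the k-th largest gradient entry.
    thresh = np.sort(np.abs(grad).ravel())[-k]
    return np.abs(grad) >= thresh

def saft_update(params, grad, lr=0.1, sparsity=0.001):
    """Gradient step applied only where the mask is True."""
    mask = saft_mask(grad, sparsity)
    return params - lr * grad * mask

# Toy example: 1000 parameters, update only the top 1%.
rng = np.random.default_rng(0)
params = rng.normal(size=1000)
grad = rng.normal(size=1000)

new_params = saft_update(params, grad, sparsity=0.01)
changed = int(np.sum(new_params != params))  # ~10 entries modified
```

In the paper's setting the same mask would be computed once from gradients on the downstream task and reused throughout fine-tuning, so only 0.1% of CLIP's parameters ever move.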

Related Publications

Schemato -- An LLM for Netlist-to-Schematic Conversion

MLCAD, 2025
Ryoga Matsuo, Stefan Uhlich*, Arun Venkitaraman, Andrea Bonetti, Chia-Yu Hsieh, Ali Momeni, Lukas Mauch*, Augusto Capone, Eisaku Ohbuchi, Lorenzo Servadei

Machine learning models are advancing circuit design, particularly for analog circuits; however, they typically generate netlists that lack human interpretability. This is a problem, as human designers rely heavily on the interpretability of circuit diagrams or schematics to intuitivel…

Music Foundation Model as Generic Booster for Music Downstream Tasks

TMLR, 2025
WeiHsiang Liao, Yuhta Takida, Yukara Ikemiya, Zhi Zhong*, Chieh-Hsin Lai, Giorgio Fabbro*, Kazuki Shimada, Keisuke Toyama*, Kinwai Cheuk, Marco A. Martínez-Ramírez, Shusuke Takahashi*, Stefan Uhlich*, Taketo Akama*, Woosung Choi, Yuichiro Koyama*, Yuki Mitsufuji

We demonstrate the efficacy of using intermediate representations from a single foundation model to enhance various music downstream tasks. We introduce SoniDo, a music foundation model (MFM) designed to extract hierarchical features from target music samples. By leveraging …

Improving Vector-Quantized Image Modeling with Latent Consistency-Matching Diffusion

IJCNN, 2025
Bac Nguyen, Chieh-Hsin Lai, Yuhta Takida, Naoki Murata, Toshimitsu Uesaka, Yuki Mitsufuji

By embedding discrete representations into a continuous latent space, we can leverage continuous-space latent diffusion models to handle generative modeling of discrete data. However, despite their initial success, most latent diffusion methods rely on fixed pretrained embed…

