Feature Decoupling Alignment for Fine-tuning Pre-trained Models in Few-shot Learning

  • Kun Song,
  • Huimin Ma,
  • Bochao Zou,
  • Huishuai Zhang,
  • Weiran Huang

NeurIPS 2023

Due to the limited availability of data, existing few-shot learning methods trained from scratch fail to achieve satisfactory performance. In contrast, large-scale pre-trained models such as CLIP demonstrate remarkable few-shot and zero-shot capabilities. To improve the performance of pre-trained models on downstream tasks, fine-tuning on downstream data is often necessary. However, fine-tuning a pre-trained model jeopardizes its generalizability under distribution shift, and the limited number of samples in few-shot learning makes the model highly susceptible to overfitting. Consequently, existing fine-tuning methods for few-shot learning primarily focus on fine-tuning the model's classification head or introducing additional structure. This paper introduces a feature decoupling alignment (FD-Align) fine-tuning approach, which aims to preserve category-related information during fine-tuning while retaining category-independent information, thereby maintaining the model's generalizability. Extensive experiments demonstrate that our approach enhances model performance more effectively than direct fine-tuning. Furthermore, we show that the fine-tuned model generalizes well to out-of-distribution (OOD) datasets, achieving strong OOD performance.
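
To make the decoupling idea concrete, below is a minimal sketch of what an FD-Align-style fine-tuning loss could look like, assuming a CLIP-like image encoder. The function name `fd_align_loss`, the construction of `spurious_protos` (category-independent prototypes), and the weight `alpha` are illustrative assumptions rather than the paper's exact formulation: a classification term preserves category-related information, while a KL term keeps the fine-tuned encoder's distribution over category-independent prototypes consistent with the frozen pre-trained encoder.

```python
import torch
import torch.nn.functional as F


def fd_align_loss(image_feats, frozen_feats, class_protos, spurious_protos,
                  labels, alpha=1.0, tau=0.07):
    """Hedged sketch: classification + category-independent alignment.

    image_feats:     (B, D) features from the encoder being fine-tuned
    frozen_feats:    (B, D) features from the frozen pre-trained encoder
    class_protos:    (C, D) class prototypes (e.g., text embeddings of class names)
    spurious_protos: (S, D) category-independent prototypes (assumed here to be
                     embeddings of class-agnostic prompt templates)
    labels:          (B,) ground-truth class indices
    """
    # Work with cosine similarities, as is standard for CLIP-like models.
    image_feats = F.normalize(image_feats, dim=-1)
    frozen_feats = F.normalize(frozen_feats, dim=-1)
    class_protos = F.normalize(class_protos, dim=-1)
    spurious_protos = F.normalize(spurious_protos, dim=-1)

    # Category-related term: cross-entropy over class prototypes.
    cls_logits = image_feats @ class_protos.t() / tau
    loss_cls = F.cross_entropy(cls_logits, labels)

    # Category-independent term: keep the fine-tuned model's distribution
    # over spurious prototypes close to the frozen pre-trained model's.
    log_p_tuned = F.log_softmax(image_feats @ spurious_protos.t() / tau, dim=-1)
    p_frozen = F.softmax(frozen_feats @ spurious_protos.t() / tau, dim=-1)
    loss_align = F.kl_div(log_p_tuned, p_frozen, reduction="batchmean")

    return loss_cls + alpha * loss_align
```

In this sketch, only the fine-tuned encoder receives gradients; the frozen copy of the pre-trained encoder serves purely as a reference so that category-independent information is not distorted while the classification term adapts the features to the downstream task.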