Multi-Head Adapter Routing for Data-Efficient Fine-Tuning
- Lucas Caccia,
- E. Ponti,
- Lu Liu,
- Matheus Pereira,
- Nicolas Le Roux,
- Alessandro Sordoni
arXiv preprint, abs/2211.03831
Parameter-efficient fine-tuning (PEFT) methods can adapt large language models to downstream tasks by training a small number of newly added parameters. In multi-task settings, PEFT adapters are typically trained either on each task independently, which inhibits transfer across tasks, or on the concatenation of all tasks, which can lead to negative interference. To address this, Polytropon [Ponti et al., 2022] jointly learns an inventory of PEFT adapters and a routing function that shares variable-size sets of adapters across tasks. The adapters can subsequently be re-combined and fine-tuned on novel tasks even with limited data. In this paper, we investigate to what extent the ability to control which adapters are active for each task leads to sample-efficient generalization. To this end, we propose less expressive variants in which we perform a weighted average of the adapters before few-shot adaptation (Poly-µ) instead of learning a routing function. Moreover, we introduce more expressive variants in which finer-grained task-adapter allocation is learned through a multi-head routing function (Poly-S). We test these variants on three separate benchmarks for multi-task learning. We find that Poly-S achieves gains on all three (up to 5.3 points on average) over strong baselines, while incurring a negligible additional cost in parameter count. In particular, we find that instruction tuning, where models are fully fine-tuned on natural language instructions for each task, is inferior to modular methods such as Polytropon and our proposed variants.
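
For intuition, the following is a minimal PyTorch sketch, not the authors' implementation, of how an inventory of LoRA adapters can be routed per task: with a single routing head it resembles Polytropon-style routing, with several heads each slice of the input dimension gets its own routing weights over the inventory (a Poly-S-like multi-head variant), and averaging the learned routing weights before few-shot adaptation corresponds to Poly-µ. Class and argument names (`PolyLoRALinear`, `n_skills`, `n_splits`) and the softmax routing are illustrative assumptions.

```python
# Minimal sketch of routing over an inventory of LoRA adapters (illustrative,
# not the authors' code). Assumes a frozen linear backbone layer.
import torch
import torch.nn as nn


class PolyLoRALinear(nn.Module):
    """Frozen linear layer plus an inventory of low-rank (LoRA) adapters.

    - n_splits = 1: one routing weight per (task, adapter), Polytropon-style.
    - n_splits > 1: the input dimension is sliced into heads, each head with
      its own routing weights over the inventory (Poly-S-like).
    - Averaging the learned routing weights across tasks and freezing them
      before few-shot adaptation corresponds to the Poly-µ variant.
    """

    def __init__(self, base: nn.Linear, n_tasks: int, n_skills: int = 8,
                 rank: int = 4, n_splits: int = 1):
        super().__init__()
        assert base.in_features % n_splits == 0, "n_splits must divide d_in"
        self.base = base
        for p in self.base.parameters():          # keep the backbone frozen
            p.requires_grad = False
        d_in, d_out = base.in_features, base.out_features
        self.n_splits = n_splits
        # Adapter inventory: one (A, B) pair per skill, sliced into heads.
        self.A = nn.Parameter(0.02 * torch.randn(n_skills, n_splits, d_in // n_splits, rank))
        self.B = nn.Parameter(torch.zeros(n_skills, n_splits, rank, d_out))
        # Routing logits: one per (task, head, skill).
        self.router = nn.Parameter(torch.zeros(n_tasks, n_splits, n_skills))

    def forward(self, x: torch.Tensor, task_id: int) -> torch.Tensor:
        w = torch.softmax(self.router[task_id], dim=-1)        # (n_splits, n_skills)
        # Mix the inventory per head with this task's routing weights.
        A = torch.einsum("hs,shdr->hdr", w, self.A)            # (n_splits, d_in/h, rank)
        B = torch.einsum("hs,shro->hro", w, self.B)            # (n_splits, rank, d_out)
        xh = x.view(*x.shape[:-1], self.n_splits, -1)          # slice input across heads
        delta = torch.einsum("...hd,hdr,hro->...o", xh, A, B)  # low-rank update
        return self.base(x) + delta


if __name__ == "__main__":
    layer = PolyLoRALinear(nn.Linear(512, 512), n_tasks=16, n_skills=8, rank=4, n_splits=4)
    out = layer(torch.randn(2, 10, 512), task_id=3)
    print(out.shape)  # torch.Size([2, 10, 512])
```

The multi-head variant adds only the extra routing logits per head, which is consistent with the abstract's claim that the finer-grained allocation incurs a negligible additional parameter cost.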