Joint Prompt Optimization of Stacked LLMs using Variational Inference
- Alessandro Sordoni
- Xingdi Yuan
- Marc-Alexandre Côté
- Matheus Pereira
- Adam Trischler
- Ziang Xiao
- Arian Hosseini
- Friederike Niedtner
- Nicolas Le Roux
We view large language models (LLMs) as stochastic language layers in a network, where the learnable parameters are the natural language prompts at each layer. We stack two such layers, feeding the output of one layer to the next. We call the stacked architecture a Deep Language Network (DLN). We first show how to effectively perform prompt optimization for a 1-layer language network (DLN-1). We then show how to train 2-layer DLNs (DLN-2), where two prompts must be learned. We consider the output of the first layer as a latent variable to marginalize out, and devise a variational inference algorithm for joint prompt training. A DLN-2 reaches higher performance than a single layer, sometimes comparable to few-shot GPT-4 even when each LLM in the network is smaller and less powerful.
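To make the latent-variable view concrete, the following is a minimal sketch of the standard evidence lower bound that such a setup suggests, writing $\pi_1, \pi_2$ for the prompts of the two layers, $h$ for the first layer's textual output, and $q$ for an approximate posterior over $h$; the notation is illustrative and not necessarily the paper's own.

```latex
\log p(y \mid x; \pi_1, \pi_2)
  = \log \sum_{h} p(y \mid h; \pi_2)\, p(h \mid x; \pi_1)
  \geq \mathbb{E}_{q(h \mid x, y)}\!\left[
        \log p(y \mid h; \pi_2) + \log p(h \mid x; \pi_1) - \log q(h \mid x, y)
      \right]
```

Maximizing this bound jointly over $\pi_1$ and $\pi_2$ (together with the choice of $q$) trains both prompts while treating the intermediate text $h$ as a latent variable, which is the role variational inference plays in the joint prompt optimization described above.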