Guiding Large Language Models via Directional Stimulus Prompting

NeurIPS 2023

We introduce Directional Stimulus Prompting, a novel framework for guiding black-box large language models (LLMs) toward desired outputs. The framework adds a new component, the directional stimulus, to the prompt, providing more fine-grained guidance and control over LLMs. The directional stimulus serves as hints or cues for each input query that steer the LLM toward the desired output, such as keywords that the desired summary should include in a summarization task. We use a small tunable model (e.g., T5) to generate the directional stimulus for each query, which lets us optimize black-box LLMs by optimizing only a small policy model. The policy model can be trained through (1) supervised fine-tuning on labeled data and (2) reinforcement learning from offline or online rewards, discovering directional stimuli that better align LLMs with desired behaviors. We evaluate the framework on summarization and dialogue response generation tasks. Experimental results show that it consistently improves ChatGPT's performance over standard prompting with only a small amount of training data, and that reinforcement learning improves performance further. Notably, on the MultiWOZ dataset, our framework enables ChatGPT to achieve a remarkable 41.4% improvement in its combined score with only 80 dialogues, matching or even surpassing some fully trained state-of-the-art models. We have made our code publicly available.
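
To make the mechanism concrete, here is a minimal sketch of how a directional stimulus might be generated by a small policy model and spliced into the prompt for summarization. Everything here is illustrative rather than the paper's released code: the T5 checkpoint name, the prompt template wording, and the `call_blackbox_llm` stub are all assumptions.

```python
# Minimal sketch of Directional Stimulus Prompting for summarization.
# Assumptions (not from the paper's code release): the checkpoint name,
# the prompt template, and call_blackbox_llm are placeholders.

from transformers import T5ForConditionalGeneration, T5Tokenizer

# Small tunable policy model that generates the directional stimulus
# (here: keywords the summary should cover) for each input query.
POLICY_NAME = "t5-base"  # placeholder; the paper fine-tunes a small T5
tokenizer = T5Tokenizer.from_pretrained(POLICY_NAME)
policy = T5ForConditionalGeneration.from_pretrained(POLICY_NAME)

def generate_stimulus(article: str, max_new_tokens: int = 32) -> str:
    """Generate keyword hints (the directional stimulus) for one query."""
    inputs = tokenizer("extract keywords: " + article,
                       return_tensors="pt", truncation=True)
    ids = policy.generate(**inputs, max_new_tokens=max_new_tokens)
    return tokenizer.decode(ids[0], skip_special_tokens=True)

def call_blackbox_llm(prompt: str) -> str:
    """Stand-in for a black-box LLM call (e.g., ChatGPT); replace with
    your provider's client. Defined only so the sketch is self-contained."""
    raise NotImplementedError("wire up your LLM client here")

def summarize_with_stimulus(article: str) -> str:
    stimulus = generate_stimulus(article)
    # The stimulus is appended to the standard prompt as a hint, steering
    # the frozen black-box LLM without touching its parameters.
    prompt = (f"Article: {article}\n"
              f"Keywords: {stimulus}\n"
              f"Summarize the article, making sure to cover the keywords.")
    return call_blackbox_llm(prompt)
```

Only the small policy model is ever updated; the black-box LLM is queried but never fine-tuned, which is what makes the approach applicable to API-only models.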
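The reinforcement learning stage can be pictured as follows: the quality of the LLM's output under a sampled stimulus (e.g., ROUGE of the guided summary against a reference) serves as the reward for the policy model. The sketch below uses a plain REINFORCE update purely for illustration; it is simpler than the paper's actual training recipe, and `reward_fn` and `call_blackbox_llm` are hypothetical placeholders.

```python
# Hedged sketch of one RL step: reward the policy for stimuli that lead
# the black-box LLM to better outputs. A bare REINFORCE update is shown
# (no baseline or clipping); the paper's actual RL setup differs.

import torch

def reinforce_step(policy, tokenizer, article, reference,
                   optimizer, reward_fn, call_blackbox_llm):
    inputs = tokenizer("extract keywords: " + article,
                       return_tensors="pt", truncation=True)
    # Sample a stimulus from the current policy. generate() runs under
    # no_grad, so gradients come from a second forward pass below.
    sampled = policy.generate(**inputs, do_sample=True, max_new_tokens=32)
    stimulus_ids = sampled[0][1:]  # drop T5's decoder start token
    stimulus = tokenizer.decode(stimulus_ids, skip_special_tokens=True)

    # Reward: quality of the LLM output produced under this stimulus,
    # e.g., ROUGE of the guided summary against the reference.
    summary = call_blackbox_llm(
        f"Article: {article}\nKeywords: {stimulus}\n"
        "Summarize the article, covering the keywords.")
    reward = reward_fn(summary, reference)

    # Differentiable forward pass; with `labels`, T5 shifts them right
    # internally, so logits[t] predicts labels[t].
    labels = stimulus_ids.unsqueeze(0)
    logits = policy(**inputs, labels=labels).logits
    logp = torch.log_softmax(logits, dim=-1)
    token_logp = logp.gather(-1, labels.unsqueeze(-1)).squeeze(-1)

    # REINFORCE: raise the log-probability of stimuli that earned
    # high reward, lower it for those that earned low reward.
    loss = -reward * token_logp.sum()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return reward
```

Because the reward is computed from the LLM's output rather than the stimulus itself, the policy learns to produce hints that work well for the specific downstream model, even though that model is a black box.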