Fixing Model Bugs with Natural Language Patches
- Shikhar Murty,
- Christopher D. Manning,
- Scott Lundberg,
- Marco Tulio Ribeiro
EMNLP 2022
Current approaches for fixing systematic problems in NLP models (e.g., regex patches, finetuning on more data) are either brittle, or labor-intensive and liable to shortcuts. In contrast, humans often provide corrections to each other through natural language. Taking inspiration from this, we explore natural language patches: declarative statements that allow developers to provide corrective feedback at the right level of abstraction, either overriding the model ("if a review gives 2 stars, the sentiment is negative") or providing additional information the model may lack ("if something is described as the bomb, then it is good"). We model the task of determining whether a patch applies separately from the task of integrating patch information, and show that with a small amount of synthetic data, we can teach models to effectively use real patches on real data: 1 to 7 patches improve accuracy by ~1-4 points on different slices of a sentiment analysis dataset, and F1 by 7 points on a relation extraction dataset. Finally, we show that finetuning on as many as 100 labeled examples may be needed to match the performance of a small set of language patches.
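To make the two-stage decomposition concrete, below is a minimal sketch, not the paper's implementation: it assumes hypothetical `gate`, `interpret`, and `base` scoring functions, where a gating model estimates whether a patch's condition applies to an input and an interpreter model integrates the patch's consequent into the prediction. The sequential soft-gated combination shown here is an illustrative assumption; the paper's exact models and combination rule may differ.

```python
from dataclasses import dataclass
from typing import Callable, List


@dataclass
class Patch:
    """A natural language patch split into a condition and a consequent,
    e.g. condition="a review gives 2 stars", consequent="the sentiment is negative"."""
    condition: str
    consequent: str


def patched_score(
    x: str,
    patches: List[Patch],
    gate: Callable[[str, str], float],       # P(patch condition applies to x)  -- hypothetical
    interpret: Callable[[str, str], float],  # P(positive | x, patch consequent) -- hypothetical
    base: Callable[[str], float],            # P(positive | x) with no patch     -- hypothetical
) -> float:
    """Soft-gated patching: for each patch, interpolate the patch-conditioned
    prediction with the current prediction, weighted by the gating probability."""
    score = base(x)
    for p in patches:
        g = gate(x, p.condition)
        score = g * interpret(x, p.consequent) + (1.0 - g) * score
    return score


if __name__ == "__main__":
    # Toy stand-ins for the gating, interpreter, and base models.
    gate = lambda x, cond: 0.9 if "2 stars" in x and "2 stars" in cond else 0.05
    interpret = lambda x, cons: 0.1 if "negative" in cons else 0.5
    base = lambda x: 0.7  # base model (incorrectly) leans positive
    patch = Patch("a review gives 2 stars", "the sentiment is negative")
    print(patched_score("Nice decor, but 2 stars overall.", [patch], gate, interpret, base))
```

When the gate fires (g close to 1), the patched prediction dominates and overrides the base model; when the condition does not apply, the base prediction passes through largely unchanged.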