CODAMOSA: Escaping Coverage Plateaus in Test Generation with Pre-trained Large Language Models
- Caroline Lemieux,
- Jeevana Priya Inala,
- Shuvendu Lahiri,
- Siddhartha Sen
ICSE'23
Search-based software testing (SBST) generates high-coverage test cases for programs under test with a combination of test case generation and mutation. SBST’s performance relies on there being a reasonable probability of generating test cases that exercise the core logic of the program under test. Given such test cases, SBST can then explore the space around them to exercise various parts of the program. This paper explores whether Large Language Models (LLMs) of code, such as OpenAI’s Codex, can be used to help SBST’s exploration. Our proposed algorithm, CODAMOSA, conducts SBST until its coverage improvements stall, then asks Codex to provide example test cases for under-covered functions. These examples help SBST redirect its search to more useful areas of the search space. In an evaluation over 486 benchmarks, compared to SBST and LLM-only baselines, CODAMOSA achieves statistically significantly higher coverage on many more benchmarks (173 and 279, respectively) than it reduces coverage on (10 and 4).
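To make the high-level loop concrete, here is a minimal Python sketch of the plateau-then-query idea described in the abstract. All names (run_sbst_iteration, coverage_of, least_covered_function, query_llm_for_tests) and the stall threshold are illustrative placeholders, not CODAMOSA's actual API or parameters.

```python
from typing import Callable, List

def codamosa_loop(
    test_suite: List[str],
    run_sbst_iteration: Callable[[List[str]], List[str]],
    coverage_of: Callable[[List[str]], float],
    least_covered_function: Callable[[List[str]], str],
    query_llm_for_tests: Callable[[str], List[str]],
    max_iterations: int = 600,
    plateau_limit: int = 25,
) -> List[str]:
    """Alternate between SBST search and LLM-provided seed tests.

    When coverage has not improved for `plateau_limit` iterations, ask the
    LLM for example tests that exercise an under-covered function and add
    them to the population so the search can explore around them.
    """
    best_coverage = coverage_of(test_suite)
    stall = 0
    for _ in range(max_iterations):
        test_suite = run_sbst_iteration(test_suite)
        current = coverage_of(test_suite)
        if current > best_coverage:
            best_coverage, stall = current, 0
        else:
            stall += 1
        if stall >= plateau_limit:
            # Coverage has plateaued: redirect the search with LLM examples.
            target = least_covered_function(test_suite)
            test_suite.extend(query_llm_for_tests(target))
            stall = 0
    return test_suite
```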
Publication downloads
CodaMosa
January 27, 2023
This repository contains the code for CodaMOSA. CodaMOSA integrates queries to a large language model (currently, the OpenAI API is supported) into search-based algorithms for unit test generation.
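As an illustration of such a query, the sketch below asks a Codex-style completion model for an example test case for an under-covered function, using the legacy openai Python SDK (pre-1.0). The prompt format and sampling parameters are assumptions for illustration, not CodaMOSA's exact configuration.

```python
import os
import openai

# Assumes an API key is provided via the environment.
openai.api_key = os.environ["OPENAI_API_KEY"]

def request_example_test(module_source: str, function_name: str) -> str:
    """Ask the model to complete a unit test for `function_name`."""
    prompt = (
        f"{module_source}\n\n"
        f"# Unit test for {function_name}\n"
        f"def test_{function_name}():\n"
    )
    response = openai.Completion.create(
        model="code-davinci-002",   # Codex-era completion model
        prompt=prompt,
        max_tokens=200,
        temperature=0.8,
        stop=["\n# Unit test"],     # stop before the model starts another test
    )
    # Re-attach the function header so the result is a complete test function.
    return f"def test_{function_name}():\n" + response["choices"][0]["text"]
```

The returned text would then be parsed back into the test generator's internal representation so the search can mutate and extend it like any other test case.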