“What It Wants Me To Say”: Bridging the Abstraction Gap Between End-User Programmers and Code-Generating Large Language Models
- Michael Xieyang Liu,
- Advait Sarkar,
- Carina Negreanu,
- Ben Zorn,
- Jack Williams,
- Neil Toronto,
- Andy Gordon
Proceedings of the ACM CHI Conference on Human Factors in Computing Systems | Published by ACM
Honorable mention
Code-generating large language models map natural language to code. However, only a small portion of the infinite space of naturalistic utterances is effective at guiding code generation. For non-expert end-user programmers, learning this is the challenge of abstraction matching. We examine this challenge in the specific context of data analysis in spreadsheets, in a system that maps the user’s natural language query to Python code using the Codex generator, executes the code, and shows the result. We propose grounded abstraction matching, which bridges the abstraction gap by translating the code back into a systematic and predictable naturalistic utterance. In a between-subjects, think-aloud study (n=24), we compare grounded abstraction matching to an ungrounded alternative based on previously established query framing principles. We find that the grounded approach improves end-users’ understanding of the scope and capabilities of the code-generating model, and the kind of language needed to use it effectively.
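As a rough illustration of the pipeline the abstract describes, the Python sketch below maps an utterance to pandas code through a stand-in `generate_code` callable (a hypothetical placeholder for the Codex model), executes it, and renders the code back into a fixed, predictable utterance for the user. The function name, the single regex template, and the stubbed model are illustrative assumptions, not the paper's implementation.

```python
import re
from typing import Callable

import pandas as pd


def grounded_query(
    df: pd.DataFrame,
    utterance: str,
    generate_code: Callable[[str], str],
) -> tuple[str, object, str]:
    """Map a natural-language utterance to pandas code, run it, and translate
    the code back into a systematic utterance shown to the user.

    `generate_code` stands in for the code-generating model; it takes the
    user's utterance and returns a single pandas expression over `df`.
    """
    code = generate_code(utterance)

    # Execute the generated expression against the user's data
    # (sandboxing and error handling omitted in this sketch).
    result = eval(code, {"df": df, "pd": pd})

    # Grounded abstraction matching, heavily simplified: render the code as a
    # predictable naturalistic utterance using a fixed template per operation.
    m = re.fullmatch(
        r"df\.groupby\(['\"](?P<by>\w+)['\"]\)\[['\"](?P<col>\w+)['\"]\]\.(?P<agg>\w+)\(\)",
        code.strip(),
    )
    if m:
        grounded = (
            f"group the rows by '{m['by']}' and compute the {m['agg']} "
            f"of '{m['col']}' for each group"
        )
    else:
        grounded = f"run this code on the table: {code}"

    return code, result, grounded


# Example with a stubbed model that always emits one expression:
table = pd.DataFrame({"region": ["N", "S", "N"], "sales": [10, 20, 30]})
stub_model = lambda prompt: 'df.groupby("region")["sales"].mean()'
code, result, grounded = grounded_query(table, "average sales per region", stub_model)
print(grounded)
# group the rows by 'region' and compute the mean of 'sales' for each group
```

The point of the templated paraphrase is that the user sees, in consistent language, what the model actually did, which helps them learn what kinds of utterances the system can act on.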
The Metacognitive Demands and Opportunities of Generative AI
Microsoft Research Forum | Episode 2 | March 5, 2024
Lev Tankelevitch explored how metacognition, the psychological capacity to monitor and regulate one's own cognitive processes, offers a valuable lens for understanding and addressing the usability challenges of generative AI systems around prompting, assessing and relying on outputs, and optimizing workflows. See more at https://aka.ms/ResearchForum-Mar2024