

Poster

Code Agents are State of the Art Software Testers

Niels Mündler · Mark Niklas Müller · Jingxuan He · Martin Vechev

[ Project Page ]
Wed 11 Dec 11 a.m. PST — 2 p.m. PST

Abstract:

Rigorous software testing is crucial for developing and maintaining high-quality code. However, writing tests is a tedious task for developers, which makes automated test generation a promising avenue for improving both software quality and developer satisfaction. Yet while code generation with Large Language Models (LLMs) is an extraordinarily active research area, test generation remains relatively unexplored. We address this gap and investigate the capability of LLM-based Code Agents to formalize user issues into test cases. To this end, we propose a novel benchmark based on popular GitHub repositories, containing real-world issues, ground-truth patches, and golden tests. We find that LLMs generally perform surprisingly well at generating relevant test cases, and that Code Agents designed for code repair also perform well on test generation, exceeding the performance of systems designed specifically for test generation. Finally, because test generation is a similar but more structured task than code generation, it allows for a more fine-grained analysis using fail-to-pass rates and coverage, providing a complementary metric for evaluating systems designed for code repair.
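
As a rough illustration of the fail-to-pass criterion mentioned above, the minimal sketch below (not the authors' implementation; the function names, directory layout, and pytest-based runner are all assumptions) checks whether a generated test fails on the unpatched repository and passes once the ground-truth patch has been applied.

```python
import subprocess

def run_tests(repo_dir: str, test_file: str) -> bool:
    """Run a generated test file with pytest in the given repository
    checkout; return True if all tests pass (hypothetical helper)."""
    result = subprocess.run(
        ["python", "-m", "pytest", test_file],
        cwd=repo_dir,
        capture_output=True,
    )
    return result.returncode == 0

def is_fail_to_pass(pre_patch_dir: str, post_patch_dir: str, test_file: str) -> bool:
    """A generated test reproduces the user issue if it fails on the
    snapshot before the ground-truth patch and passes afterwards.
    Assumes both snapshots are checked out in separate directories."""
    failed_before = not run_tests(pre_patch_dir, test_file)
    passed_after = run_tests(post_patch_dir, test_file)
    return failed_before and passed_after
```

Under these assumptions, a test counted toward the fail-to-pass rate must both expose the original bug and accept the fixed behavior, which is what makes the metric complementary to repair-oriented evaluation.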
