Using Relative Lines of Code to Guide Automated Test Generation for Python (Journal-First)
Thu 27 May 2021 07:30 - 07:50 at Blended Sessions Room 1 - 2.5.1. Testing: Automatic Test Generation
Raw lines of code (LOC) is a metric that does not, at first glance, seem very useful for automated test generation. It is highly language-dependent, and it carries little semantic meaning within a language: one programmer can produce the same effect in far fewer lines than another. However, relative LOC, compared between components of the same project, turns out to be a highly useful metric for automated testing. In this article, we use a heuristic based on the LOC counts of tested functions to dramatically improve the effectiveness of automated test generation. This approach is particularly valuable in languages where collecting code coverage data to guide testing incurs a very high overhead. We apply the heuristic to property-based Python testing using the TSTL (Template Scripting Testing Language) tool. In our experiments, the simple LOC heuristic improves branch and statement coverage by large margins (often more than 20%, up to 40% or more) and improves fault detection by an even larger margin (usually more than 75%, and up to 400% or more). The LOC heuristic is also easy to combine with other approaches, and it is comparable to, and possibly more effective than, two well-established approaches for guiding random testing.
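The core idea described in the abstract, biasing random action selection toward functions with more lines of code relative to the rest of the project, can be sketched in a few lines of Python. This is a hypothetical illustration, not TSTL's actual implementation: the API names, LOC counts, and selection logic below are invented for the example, and a real tool would measure LOC from source (e.g. with `inspect` or a line counter) rather than hard-code it.

```python
import random

# Hypothetical API under test: name -> (callable, raw LOC count).
# The LOC counts here are given directly; a real heuristic would
# measure them from the project's source code.
API = {
    "reset":   (lambda s: [],           2),
    "push":    (lambda s: s + [1],      5),
    "process": (lambda s: sorted(s) * 2, 20),
}

def pick_action(api, rng):
    """Choose the next test action with probability proportional
    to each function's share of the total LOC (the LOC heuristic)."""
    names = list(api)
    weights = [api[n][1] for n in names]
    return rng.choices(names, weights=weights, k=1)[0]

rng = random.Random(42)
counts = {n: 0 for n in API}
for _ in range(10_000):
    counts[pick_action(API, rng)] += 1

# "process" holds 20 of the 27 total lines (~74%), so the generator
# exercises it far more often than the smaller functions.
assert counts["process"] > counts["push"] > counts["reset"]
```

The design choice this illustrates is cheap guidance: unlike coverage-based feedback, the LOC weights are computed once, up front, and cost nothing per test action, which is exactly why the heuristic pays off where coverage collection is expensive.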
Wed 26 May (displayed time zone: Amsterdam, Berlin, Bern, Rome, Stockholm, Vienna)

18:50 - 19:50 | 2.5.1. Testing: Automatic Test Generation (Journal-First Papers / Technical Track) at Blended Sessions Room 1. Chair(s): José Miguel Rojas, University of Leicester, UK

18:50 (20m, paper) | Inputs from Hell: Learning Input Distributions for Grammar-Based Test Generation (Journal-First Papers). Ezekiel Soremekun (SnT, University of Luxembourg), Esteban Pavese (Humboldt University of Berlin), Nikolas Havrikov (CISPA, Germany), Lars Grunske (Humboldt University of Berlin), Andreas Zeller (CISPA Helmholtz Center for Information Security)

19:10 (20m, paper) | Automatic Unit Test Generation for Machine Learning Libraries: How Far Are We? (Technical Track). Song Wang (York University), Nishtha Shrestha (York University), Abarna Kucheri Subburaman (York University), Junjie Wang (Institute of Software, Chinese Academy of Sciences), Moshi Wei (York University), Nachiappan Nagappan (Microsoft Research)

19:30 (20m, paper) | Using Relative Lines of Code to Guide Automated Test Generation for Python (Journal-First Papers). Josie Holmes (Northern Arizona University), Iftekhar Ahmed (University of California, Irvine), Caius Brindescu (Oregon State University), Rahul Gopinath (CISPA Helmholtz Center for Information Security), He Zhang (Nanjing University), Alex Groce (Northern Arizona University)