ICSE 2021
Mon 17 May - Sat 5 June 2021

The reliability of software that includes a Deep Neural Network (DNN) as a component is urgently important today, given the increasing number of critical applications being deployed with DNNs. The need for reliability raises a need for rigorous testing of the safety and trustworthiness of these systems. In the last few years, there have been a number of research efforts focused on testing DNNs. However, the test generation techniques proposed so far lack a check to determine whether the test inputs they generate are valid, and thus they produce invalid inputs. To illustrate this situation, we explored three recent DNN testing techniques. Using deep generative model-based input validation, we show that all three techniques generate a significant number of invalid test inputs. We further analyzed the test coverage achieved by the test inputs generated by the DNN testing techniques and showed how invalid test inputs can falsely inflate test coverage metrics.
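To give a sense of what such a validity check might look like, here is a minimal sketch (not the authors' implementation): it flags a test input as likely invalid when a variational autoencoder trained on the DNN's training data reconstructs it poorly. The vae object with encode/decode methods and the error threshold are assumptions for illustration.

# Minimal sketch (illustrative, not the paper's implementation): flag
# likely-invalid test inputs by thresholding the reconstruction error of a
# VAE trained on the DNN's training distribution. `vae` and the threshold
# value are assumptions.
import torch

def is_valid(x, vae, threshold=0.05):
    """Treat inputs the VAE reconstructs poorly as out-of-distribution."""
    with torch.no_grad():
        mu, logvar = vae.encode(x)      # assumed posterior parameters
        recon = vae.decode(mu)          # reconstruction from the mean code
        error = torch.mean((recon - x) ** 2).item()
    return error <= threshold           # low error -> likely in-distribution

# Usage: filter generated test inputs before measuring coverage.
# valid_tests = [x for x in generated_tests if is_valid(x, vae)]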

To overcome the inclusion of invalid inputs in testing, we propose a technique that incorporates the valid input space of the DNN model under test into the test generation process. Our technique uses a deep generative model-based algorithm to generate only valid inputs. Results of our empirical studies show that our technique improves on existing approaches in terms of the number of valid test inputs generated, the time to generate tests, and the test coverage achieved.
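As a rough illustration of how constraining generation to a generative model's latent space yields only valid inputs, the sketch below searches over latent vectors and decodes them, so every candidate stays on the learned data manifold. The generator G with a decode method and the placeholder low-confidence objective are assumptions, not taken from the paper.

# Minimal sketch, assuming a trained generative model `G` with a decoder
# G.decode(z) mapping latent vectors to inputs. Optimizing in latent space
# and decoding keeps every candidate in-distribution by construction; the
# objective and hyperparameters are placeholders.
import torch

def generate_valid_tests(G, model_under_test, z_dim=64, n_tests=100,
                         steps=50, lr=0.1):
    tests = []
    for _ in range(n_tests):
        z = torch.randn(1, z_dim, requires_grad=True)  # sample from the prior
        opt = torch.optim.Adam([z], lr=lr)
        for _ in range(steps):
            x = G.decode(z)                 # decoded input stays on the manifold
            logits = model_under_test(x)
            # Placeholder objective: push the model toward low-confidence
            # predictions (one possible test-adequacy signal).
            loss = torch.max(torch.softmax(logits, dim=1))
            opt.zero_grad()
            loss.backward()
            opt.step()
        tests.append(G.decode(z).detach())
    return tests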

Thu 27 May

Displayed time zone: Amsterdam, Berlin, Bern, Rome, Stockholm, Vienna

16:30 - 17:30
3.4.1. Deep Neural Networks: Data Selection (Technical Track / SEIP - Software Engineering in Practice / Journal-First Papers) at Blended Sessions Room 1
Chair(s): Ayse Tosun Istanbul Technical University
16:30
20m
Paper
Test Selection for Deep Learning Systems
Journal-First Papers
Wei Ma SnT, University of Luxembourg, Mike Papadakis University of Luxembourg, Luxembourg, Anestis Tsakmalis University of Luxembourg, Maxime Cordy University of Luxembourg, Luxembourg, Yves Le Traon University of Luxembourg, Luxembourg
Pre-print Media Attached
16:50
20m
Paper
On the experiences of adopting automated data validation in an industrial machine learning project
SEIP - Software Engineering in Practice
Lucy Ellen Lwakatare University of Helsinki, Finland, Ellinor Rånge Ericsson, Ivica Crnkovic Chalmers University of Technology, Jan Bosch Chalmers University of Technology, Sweden
Link to publication Media Attached
17:10
20m
Paper
Distribution-Aware Testing of Neural Networks Using Generative Models (Artifact Reusable, Artifact Available)
Technical Track
Swaroopa Dola University of Virginia, Matthew B Dwyer University of Virginia, Mary Lou Soffa University of Virginia
Pre-print Media Attached

Fri 28 May

Displayed time zone: Amsterdam, Berlin, Bern, Rome, Stockholm, Vienna

04:30 - 05:30
3.4.1. Deep Neural Networks: Data Selection (mirror of the Thu 27 May session) at Blended Sessions Room 1
04:30
20m
Paper
Test Selection for Deep Learning Systems
Journal-First Papers
Wei Ma SnT, University of Luxembourg, Mike Papadakis University of Luxembourg, Luxembourg, Anestis Tsakmalis University of Luxembourg, Maxime Cordy University of Luxembourg, Luxembourg, Yves Le Traon University of Luxembourg, Luxembourg
Pre-print Media Attached
04:50
20m
Paper
On the experiences of adopting automated data validation in an industrial machine learning project
SEIP - Software Engineering in Practice
Lucy Ellen Lwakatare University of Helsinki, Finland, Ellinor Rånge Ericsson, Ivica Crnkovic Chalmers University of Technology, Jan Bosch Chalmers University of Technology, Sweden
Link to publication Media Attached
05:10
20m
Paper
Distribution-Aware Testing of Neural Networks Using Generative Models (Artifact Reusable, Artifact Available)
Technical Track
Swaroopa Dola University of Virginia, Matthew B Dwyer University of Virginia, Mary Lou Soffa University of Virginia
Pre-print Media Attached