ICSE 2021
Mon 17 May - Sat 5 June 2021

Deep Neural Networks (DNNs) are typically tested for accuracy using a set of unlabelled real-world data (the operational dataset), from which a subset is selected, labelled, and used as the test suite. This subset must be small (because labelling is manual) yet faithfully represent the operational context, so that the resulting test suite contains roughly the same proportion of examples causing mispredictions (i.e., failing test cases) as the operational dataset. However, while testing to estimate accuracy, it is also desirable to learn as much as possible from the failing tests in the operational dataset, since they point to possible bugs in the DNN. A smart sampling strategy can deliberately include in the test suite many examples causing mispredictions, thus providing more valuable inputs for DNN improvement while preserving the ability to obtain trustworthy, unbiased estimates. This paper presents a test selection technique (DeepEST) that actively looks for failing test cases in the operational dataset of a DNN, with the goal of assessing the expected accuracy of the DNN by building a small and “informative” test suite, namely one with a high number of mispredictions, for subsequent DNN improvement. Experiments with five subjects, combining four DNN models and three datasets, are described. The results show that DeepEST provides DNN accuracy estimates with precision close to (and often better than) that of existing sampling-based DNN testing techniques, while detecting 5 to 30 times more mispredictions with the same test suite size.
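
The core idea, sampling likely-mispredicted examples more often while re-weighting them so the accuracy estimate stays unbiased, can be illustrated with a minimal sketch. The code below is not DeepEST itself: it assumes a generic per-example "likely-misprediction" auxiliary score (e.g., model uncertainty) and uses a simple with-replacement probability-proportional-to-size design with a Hansen-Hurwitz-style estimator; the names pps_sample and estimate_accuracy are illustrative, not the paper's API.

import numpy as np

def pps_sample(aux_scores, budget, rng=None):
    """Draw `budget` examples (with replacement) with probability proportional
    to an auxiliary 'likely-misprediction' score, e.g. model uncertainty.
    Returns the sampled indices and their per-draw selection probabilities."""
    rng = np.random.default_rng(rng)
    scores = np.asarray(aux_scores, dtype=float)
    probs = scores / scores.sum()
    idx = rng.choice(len(scores), size=budget, replace=True, p=probs)
    return idx, probs[idx]

def estimate_accuracy(correct, draw_probs, population_size):
    """Hansen-Hurwitz-style accuracy estimate over the whole operational
    dataset, computed from the labelled sample only: each sampled outcome is
    weighted by the inverse of its selection probability to undo the bias
    introduced by the non-uniform sampling."""
    correct = np.asarray(correct, dtype=float)
    estimated_total_correct = np.mean(correct / draw_probs)
    return estimated_total_correct / population_size

# Hypothetical usage: 10,000 unlabelled operational inputs, budget of 200 labels,
# with aux = 1 - max softmax probability per input as the uncertainty proxy.
#   idx, p = pps_sample(aux, budget=200, rng=0)
#   acc_hat = estimate_accuracy(labels[idx] == preds[idx], p, population_size=10_000)

Sampling proportionally to the auxiliary score concentrates the labelling budget on suspicious inputs (so the test suite contains many mispredictions), while the inverse-probability weights keep the accuracy estimate unbiased in expectation.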

Tue 25 May

Displayed time zone: Amsterdam, Berlin, Bern, Rome, Stockholm, Vienna

10:30 - 11:30
1.1.3. Deep Neural Networks: Validation #1 (Technical Track) at Blended Sessions Room 3 (+12h)
Chair(s): Oscar Dieste Universidad Politécnica de Madrid
10:30
20m
Paper
Operation is the hardest teacher: estimating DNN accuracy looking for mispredictions
Technical Track
Antonio Guerriero Università di Napoli Federico II, Roberto Pietrantuono Università di Napoli Federico II, Stefano Russo Università di Napoli Federico II
10:50
20m
Paper
AUTOTRAINER: An Automatic DNN Training Problem Detection and Repair System
Technical Track
Xiaoyu Zhang Xi'an Jiaotong University, Juan Zhai Rutgers University, Shiqing Ma Rutgers University, Chao Shen Xi'an Jiaotong University
11:10
20m
Paper
Self-Checking Deep Neural Networks in Deployment
Technical Track
Yan Xiao National University of Singapore, Ivan Beschastnikh University of British Columbia, David Rosenblum George Mason University, Changsheng Sun National University of Singapore, Sebastian Elbaum University of Virginia, Yun Lin National University of Singapore, Jin Song Dong National University of Singapore
22:30 - 23:30
1.1.3. Deep Neural Networks: Validation #1 (Technical Track) at Blended Sessions Room 3
22:30
20m
Paper
Operation is the hardest teacher: estimating DNN accuracy looking for mispredictions
Technical Track
Antonio Guerriero Università di Napoli Federico II, Roberto Pietrantuono Università di Napoli Federico II, Stefano Russo Università di Napoli Federico II
22:50
20m
Paper
AUTOTRAINER: An Automatic DNN Training Problem Detection and Repair System
Technical Track
Xiaoyu Zhang Xi'an Jiaotong University, Juan Zhai Rutgers University, Shiqing Ma Rutgers University, Chao Shen Xi'an Jiaotong University
23:10
20m
Paper
Self-Checking Deep Neural Networks in Deployment
Technical Track
Yan Xiao National University of Singapore, Ivan Beschastnikh University of British Columbia, David Rosenblum George Mason University, Changsheng Sun National University of Singapore, Sebastian Elbaum University of Virginia, Yun Lin National University of Singapore, Jin Song Dong National University of Singapore