ICSE 2021
Mon 17 May - Sat 5 June 2021

The boom of deep learning (DL) technology has led to a massive number of DL models being built and shared, which facilitates their acquisition and reuse. For a given task, multiple DL models with the same functionality are often available and can be considered as candidates for that task. Testers are expected to compare these models and select the most suitable ones with respect to the whole testing context. Because labeling effort is limited, testers aim to select a small yet efficient subset of samples that yields as precise a rank estimation of these models as possible.
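As a rough illustration of this rank-estimation goal (not taken from the paper), the sketch below compares the model ranking obtained on a small labeled subset with the ranking on the full labeled test set using Kendall's tau; the names `models`, `x_full`, `y_full`, `subset_idx`, and the `predict` interface are hypothetical placeholders.

```python
# Illustrative sketch only: estimate a model ranking from a small labeled
# subset and check how well it agrees with the ranking on the full test set.
import numpy as np
from scipy.stats import kendalltau

def accuracy(model, x, y):
    # Fraction of samples the model predicts correctly.
    return np.mean(model.predict(x) == y)

def rank_agreement(models, x_full, y_full, subset_idx):
    # Accuracy of every candidate model on the full set and on the subset.
    full_scores = [accuracy(m, x_full, y_full) for m in models]
    sub_scores = [accuracy(m, x_full[subset_idx], y_full[subset_idx]) for m in models]
    # Kendall's tau close to 1.0 means the subset preserves the true ranking.
    tau, _ = kendalltau(full_scores, sub_scores)
    return tau
```

The smaller the subset that still yields a tau near 1.0, the less labeling effort is needed to rank the candidate models reliably.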

To tackle this problem, we propose Sample Discrimination based Selection (SDS), which selects efficient samples that can discriminate multiple models, i.e., samples whose prediction behaviors (right/wrong) help indicate the trend of model performance. To evaluate SDS, we conduct an extensive empirical study with three widely used image datasets and 80 real-world DL models. The experimental results show that, compared with state-of-the-art baseline methods, SDS is an effective and efficient sample selection method for ranking multiple DL models.
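The following is a minimal, simplified sketch of the underlying idea, not the SDS algorithm from the paper: it scores each unlabeled sample by how strongly the candidate models disagree on it and keeps the most discriminating samples within a labeling budget. All names (`models`, `x_unlabeled`, `budget`) and the `predict` interface are assumptions made for illustration.

```python
# Illustrative heuristic only (not the paper's SDS): prefer samples on which
# the candidate models disagree, since they can separate strong from weak models.
import numpy as np

def disagreement_scores(models, x_unlabeled):
    # Predicted labels per model: shape (n_models, n_samples).
    preds = np.stack([m.predict(x_unlabeled) for m in models])
    scores = []
    for col in preds.T:  # per-sample prediction vector across models
        _, counts = np.unique(col, return_counts=True)
        # 0 when all models agree; approaches 1 when predictions are split evenly.
        scores.append(1.0 - counts.max() / len(col))
    return np.array(scores)

def select_samples(models, x_unlabeled, budget):
    scores = disagreement_scores(models, x_unlabeled)
    # Indices of the `budget` most discriminating samples, to be labeled first.
    return np.argsort(scores)[::-1][:budget]
```

Samples on which all models agree carry little ranking signal, so a disagreement score of this kind concentrates the labeling budget where the models can actually be told apart.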

Tue 25 May

Displayed time zone: Amsterdam, Berlin, Bern, Rome, Stockholm, Vienna

12:05 - 13:05
1.2.1. Deep Neural Networks: Validation #2 (Technical Track) at Blended Sessions Room 1 +12h
Chair(s): Grace Lewis (Carnegie Mellon Software Engineering Institute)
12:05
20m
Paper
Measuring Discrimination to Boost Comparative Testing for Multiple Deep Learning Models
Technical Track
Linghan Meng (Nanjing University), Yanhui Li (Department of Computer Science and Technology, Nanjing University), Lin Chen (Department of Computer Science and Technology, Nanjing University), Zhi Wang (Nanjing University), Di Wu (Momenta), Yuming Zhou (Nanjing University), Baowen Xu (Nanjing University)
Pre-print Media Attached
12:25
20m
Paper
Prioritizing Test Inputs for Deep Neural Networks via Mutation Analysis
Technical Track
Zan Wang (College of Intelligence and Computing, Tianjin University), Hanmo You (College of Intelligence and Computing, Tianjin University), Junjie Chen (College of Intelligence and Computing, Tianjin University), Yingyi Zhang (College of Intelligence and Computing, Tianjin University), Xuyuan Dong (Information and Network Center, Tianjin University), Wenbin Zhang (Information and Network Center, Tianjin University)
Pre-print Media Attached
12:45
20m
Paper
Testing Machine Translation via Referential Transparency
Technical Track
Pinjia He (ETH Zurich), Clara Meister (ETH Zurich), Zhendong Su (ETH Zurich)
Pre-print Media Attached

Wed 26 May

Displayed time zone: Amsterdam, Berlin, Bern, Rome, Stockholm, Vienna

00:05 - 01:05
1.2.1. Deep Neural Networks: Validation #2 (Technical Track) at Blended Sessions Room 1
00:05
20m
Paper
Measuring Discrimination to Boost Comparative Testing for Multiple Deep Learning Models
Technical Track
Linghan Meng (Nanjing University), Yanhui Li (Department of Computer Science and Technology, Nanjing University), Lin Chen (Department of Computer Science and Technology, Nanjing University), Zhi Wang (Nanjing University), Di Wu (Momenta), Yuming Zhou (Nanjing University), Baowen Xu (Nanjing University)
Pre-print Media Attached
00:25
20m
Paper
Prioritizing Test Inputs for Deep Neural Networks via Mutation Analysis
Technical Track
Zan Wang (College of Intelligence and Computing, Tianjin University), Hanmo You (College of Intelligence and Computing, Tianjin University), Junjie Chen (College of Intelligence and Computing, Tianjin University), Yingyi Zhang (College of Intelligence and Computing, Tianjin University), Xuyuan Dong (Information and Network Center, Tianjin University), Wenbin Zhang (Information and Network Center, Tianjin University)
Pre-print Media Attached
00:45
20m
Paper
Testing Machine Translation via Referential Transparency
Technical Track
Pinjia He (ETH Zurich), Clara Meister (ETH Zurich), Zhendong Su (ETH Zurich)
Pre-print Media Attached