ICSE 2021
Mon 17 May - Sat 5 June 2021

Deep Neural Networks (DNNs) are increasingly being deployed in safety-critical domains, from autonomous vehicles to medical devices, where the consequences of errors demand techniques that can provide stronger guarantees about behavior than just high test accuracy. This paper explores broadening the application of existing adversarial attack techniques for the falsification of DNN safety properties. We contend and later show that such attacks provide a powerful repertoire of scalable algorithms for property falsification. To enable the broad application of falsification, we introduce a semantics-preserving reduction of multiple safety property types, which subsume prior work, into a set of equivalid correctness problems amenable to adversarial attacks. We evaluate our reduction approach as an enabler of falsification on a range of DNN correctness problems and show its cost-effectiveness and scalability.
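To illustrate the general idea of property falsification with adversarial attacks, the sketch below reduces a simple output-bound property (for all inputs within an L-infinity ball around a reference input x0, the network's first output stays below a threshold c) to a gradient-ascent search for a violating input, in the style of a PGD attack. This is a minimal, hypothetical PyTorch example, not the authors' implementation; the names pgd_falsify, net, x0, eps, and c are all assumptions introduced for illustration.

```python
# Minimal sketch (assumed names throughout): reduce the property
# "for all x with ||x - x0||_inf <= eps, net(x)[0] <= c"
# to a search for a violating input, driven by a PGD-style attack.
import torch

def pgd_falsify(net, x0, eps, c, steps=100, alpha=0.01):
    """Return an input violating the property, or None if none is found."""
    x = x0.clone().detach().requires_grad_(True)
    for _ in range(steps):
        violation = net(x)[0, 0] - c        # positive value == property violated
        if violation.item() > 0:
            return x.detach()               # counterexample found
        violation.backward()                # gradient of the violation score
        with torch.no_grad():
            x += alpha * x.grad.sign()      # ascend the violation score
            # project back into the allowed input region
            x.copy_(torch.max(torch.min(x, x0 + eps), x0 - eps))
        x.grad = None
    return None                             # attack budget exhausted, no violation
```

A counterexample returned by such a search directly witnesses a property violation; the contribution described in the abstract is a semantics-preserving reduction that casts a variety of safety property types into this attackable form.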

Wed 26 May

Displayed time zone: Amsterdam, Berlin, Bern, Rome, Stockholm, Vienna

12:55 - 13:55
2.2.5. Deep Neural Networks: Hacking
SEIP - Software Engineering in Practice / Technical Track at Blended Sessions Room 5 (+12h)
Chair(s): Grace Lewis Carnegie Mellon Software Engineering Institute
12:55
20m
Paper
Robustness of on-device Models: Adversarial Attack to Deep Learning Models on Android Apps
SEIP - Software Engineering in Practice
Yujin Huang Faculty of Information Technology, Monash University, Han Hu Faculty of Information Technology, Monash University, Chunyang Chen Monash University
Pre-print Media Attached
13:15
20m
Paper
DeepBackdoor: Black-box Backdoor Attack on Deep Learning Models through Neural Payload Injection
Technical Track
Yuanchun Li Microsoft Research, Jiayi Hua Beijing University of Posts and Telecommunications, Haoyu Wang Beijing University of Posts and Telecommunications, Chunyang Chen Monash University, Yunxin Liu Microsoft Research
Pre-print Media Attached
13:35
20m
Paper
Reducing DNN Properties to Enable Falsification with Adversarial Attacks (Artifact Reusable, Artifact Available)
Technical Track
David Shriver University of Virginia, Sebastian Elbaum University of Virginia, Matthew B Dwyer University of Virginia
Link to publication DOI Pre-print Media Attached

Thu 27 May

Displayed time zone: Amsterdam, Berlin, Bern, Rome, Stockholm, Vienna

00:55 - 01:55
2.2.5. Deep Neural Networks: Hacking (mirror of the Wed 26 May session)
00:55
20m
Paper
Robustness of on-device Models: Adversarial Attack to Deep Learning Models on Android Apps
SEIP - Software Engineering in Practice
Yujin Huang Faculty of Information Technology, Monash University, Han Hu Faculty of Information Technology, Monash University, Chunyang Chen Monash University
Pre-print Media Attached
01:15
20m
Paper
DeepBackdoor: Black-box Backdoor Attack on Deep Learning Models through Neural Payload Injection
Technical Track
Yuanchun Li Microsoft Research, Jiayi Hua Beijing University of Posts and Telecommunications, Haoyu Wang Beijing University of Posts and Telecommunications, Chunyang Chen Monash University, Yunxin Liu Microsoft Research
Pre-print Media Attached
01:35
20m
Paper
Reducing DNN Properties to Enable Falsification with Adversarial Attacks (Artifact Reusable, Artifact Available)
Technical Track
David Shriver University of Virginia, Sebastian Elbaum University of Virginia, Matthew B Dwyer University of Virginia
Link to publication DOI Pre-print Media Attached