Distribution Awareness for AI System Testing (ACM SRC)
As Deep Learning (DL) is increasingly adopted in safety-critical applications, concerns about its quality and reliability are growing. As in the traditional software development process, testing DL software to uncover defects at an early stage is an effective way to reduce risks after deployment. Although recent progress has been made in designing novel testing techniques for DL software, the distribution of the generated test data is not taken into consideration. It is therefore hard to judge whether the identified errors are meaningful errors for the DL application. We propose a new out-of-distribution (OOD)-guided testing technique that aims to generate new, unseen test cases relevant to the underlying DL system's task. Our results show that this technique filters out up to 55.44% of error-inducing test cases on CIFAR-10 and is 10.05% more effective at enhancing robustness.
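The abstract does not specify which OOD scoring method underlies the filtering step, so the following is only a minimal sketch of the general idea using the common max-softmax-probability baseline: test inputs on which the model is very unconfident are treated as out-of-distribution and filtered out before errors are counted. The function names and the threshold value are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def softmax(logits):
    # Numerically stable softmax over the last axis.
    z = logits - logits.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def filter_in_distribution(logits, threshold=0.5):
    # Keep only inputs whose max softmax probability exceeds `threshold`;
    # low-confidence inputs are treated as OOD and dropped before error
    # counting. (Max-softmax baseline; threshold is an assumed value.)
    confidence = softmax(logits).max(axis=-1)
    return confidence >= threshold

# Toy logits for three test inputs (rows) over 10 CIFAR-10-like classes.
logits = np.array([
    [8.0, 0.1, 0.1, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0],  # confident
    [0.2, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1],  # near-uniform
    [3.0, 2.9, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0],  # ambiguous
])
mask = filter_in_distribution(logits, threshold=0.5)
print(mask.tolist())  # only the confident input survives the filter
```

In a real testing pipeline, only errors found on the surviving (in-distribution) inputs would count toward the error-detection statistics, which is how distribution-irrelevant failures get excluded.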
I am a PhD candidate at Nanyang Technological University, Singapore. I am passionate about autonomous systems, education, globalization, business, and entrepreneurship. My mission is to create trust in artificially intelligent systems by enhancing their robustness and by communicating opportunities and risks to industry leaders and governments. I am always keen to connect and exchange ideas.