Nowadays, we are witnessing an increasing adoption of Machine Learning (ML) for solving complex real-world problems. However, despite reports showing that ML models can produce results comparable to, and even better than, those of human experts, these models are often vulnerable to carefully crafted perturbations and are prone to bias and hallucinations. Ensuring the trustworthiness of software systems enabled by machine learning is therefore a very challenging task. In this talk, I will discuss the challenges that we must overcome to build trustworthy ML-enabled systems and present some recent techniques and tools that we have proposed to support their quality assurance. I will also share my views on the role that software engineering can play in improving the trustworthiness of systems, such as ChatGPT, that are based on generative Large Language Models, which are prone to hallucinations.
Foutse Khomh is a Full Professor of Software Engineering at Polytechnique Montréal, a Canada CIFAR AI Chair on Trustworthy Machine Learning Software Systems, and an FRQ-IVADO Research Chair on Software Quality Assurance for Machine Learning Applications. He received a Ph.D. in Software Engineering from the University of Montreal in 2011, with the Award of Excellence. He also received the CS-Can/Info-Can Outstanding Young Computer Science Researcher Prize for 2019. His research interests include software maintenance and evolution, machine learning systems engineering, cloud engineering, and dependable and trustworthy ML/AI. His work has received four Ten-Year Most Influential Paper (MIP) Awards and six Best/Distinguished Paper Awards. He has served on the steering committees of SANER (as chair), MSR, PROMISE, ICPC (as chair), and ICSME (as vice-chair). He initiated and co-organized the Software Engineering for Machine Learning Applications (SEMLA) symposium and the RELENG (Release Engineering) workshop series. He is a co-founder of the NSERC CREATE SE4AI: A Training Program on the Development, Deployment, and Servicing of Artificial Intelligence-based Software Systems, and one of the Principal Investigators of the DEpendable Explainable Learning (DEEL) project. He is also a co-founder and chair of the academic committee of the Confiance.ia Quebec initiative on Trustworthy AI. He serves on the editorial boards of multiple international software engineering journals (e.g., IEEE Software, EMSE, JSEP) and is a Senior Member of IEEE.