Explainable Bug Prediction for Code Changes: Are We There Yet?
ACM SRC
Explaining the prediction results of software bug prediction models is a challenging task that can provide useful information for developers to understand and fix the predicted bugs. Recently, Jirayus et al. [4] proposed using two model-agnostic techniques (i.e., LIME and iBreakDown) to explain the prediction results of bug prediction models. Although their experiments on file-level bug prediction show promising results, the performance of these techniques in explaining the results of just-in-time (i.e., change-level) bug prediction is unknown. This paper conducts the first empirical study to explore the explainability of these model-agnostic techniques on just-in-time bug prediction models. Specifically, this study takes a three-step approach: 1) replicating a widely used just-in-time bug prediction model [3], [14], 2) applying Local Interpretable Model-agnostic Explanations (LIME) and iBreakDown to the prediction results, and 3) manually evaluating the explanations for buggy instances (i.e., positive predictions) against the root cause of the bugs. The results of our experiment, however, did not yield any reasonable explanations. In other words, LIME and iBreakDown fail to provide useful explanations for just-in-time bug prediction models, unlike for file-level models [4]. This paper therefore calls for new approaches to explaining the results of just-in-time bug prediction models.
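To make step 2 concrete, the following is a minimal sketch of how a LIME explanation can be generated for a single positive (buggy) prediction of a change-level model. The Kamei-style change metrics, the random-forest classifier, and the synthetic training data are illustrative assumptions, not the exact setup of this study.

```python
# Sketch: explaining one prediction of a just-in-time defect model with LIME.
# The feature set, classifier, and data below are assumptions for illustration.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from lime.lime_tabular import LimeTabularExplainer

# Hypothetical change-level metrics commonly used in JIT defect prediction.
feature_names = ["ns", "nd", "nf", "entropy", "la", "ld", "lt",
                 "fix", "ndev", "age", "nuc", "exp", "rexp", "sexp"]

rng = np.random.default_rng(0)
X_train = rng.random((500, len(feature_names)))   # placeholder metric values
y_train = rng.integers(0, 2, 500)                 # placeholder buggy/clean labels

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)

explainer = LimeTabularExplainer(
    X_train,
    feature_names=feature_names,
    class_names=["clean", "buggy"],
    discretize_continuous=True,
)

# Explain a single change; the explanation lists the features that pushed the
# model toward the "buggy" class, which can then be compared against the
# manually identified root cause of the bug.
instance = X_train[0]
explanation = explainer.explain_instance(instance, clf.predict_proba, num_features=5)
for feature, weight in explanation.as_list():
    print(f"{feature}: {weight:+.3f}")
```

iBreakDown is applied analogously, attributing the predicted probability to individual change metrics; in both cases the per-feature contributions are what we compare against the bugs' root causes in step 3.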