On the Time-Based Conclusion Stability of Cross-Project Defect Prediction Models (Journal-First)
Thu 27 May 2021, 09:00 - 09:20, Blended Sessions Room 3 (Session 2.6.3: Defect Prediction: Data Issues and Bug Classification)
Researchers in empirical software engineering often make claims based on observable data such as defect reports. Unfortunately, these claims are often generalized beyond the data sets that were evaluated. Will a researcher's conclusions hold a year from now for the same software projects? Perhaps not. Recent studies show that, in software analytics, conclusions drawn from different data sets are often inconsistent. In this article, we empirically investigate whether conclusions in the area of defect prediction remain stable over time. Our investigation applies a time-aware evaluation approach in which models are trained only on the past and evaluated only on the future. Through this time-aware evaluation, we show that the performance of defect predictors, in terms of F-score, area under the curve (AUC), and Matthews correlation coefficient (MCC), varies depending on the time period in which they are evaluated, and that their results are not consistent. The next release of a product, if it differs significantly from its prior release, may drastically change defect prediction performance. Therefore, without knowledge of conclusion stability, empirical software engineering researchers should limit their performance claims to the contexts in which they were evaluated, because broad claims about defect prediction performance might be contradicted by the next release of a product under analysis.
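The core of the time-aware evaluation the abstract describes can be illustrated with a short sketch: split the data by date, train only on rows from the past, test only on rows from the future, and report F-score, AUC, and MCC per split. Everything in the sketch below is an assumption for illustration, not the paper's actual setup: the synthetic data frame, the column names (release_date, loc, churn, bug), and the logistic regression learner are all hypothetical stand-ins.

```python
# A minimal sketch of time-aware evaluation for defect prediction:
# train only on the past, test only on the future, and report
# F-score, AUC, and MCC for each split date. All names below
# (metrics_df, release_date, loc, churn, bug) are illustrative
# assumptions, not the paper's actual data or learner.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score, matthews_corrcoef, roc_auc_score

# Hypothetical per-file metrics with a release date and a defect label.
rng = np.random.default_rng(0)
n = 300
metrics_df = pd.DataFrame({
    "release_date": rng.choice(
        pd.date_range("2015-01-01", "2019-01-01").to_numpy(), n),
    "loc": rng.integers(10, 1000, n),    # lines of code
    "churn": rng.integers(0, 200, n),    # lines changed
    "bug": rng.integers(0, 2, n),        # defect label
})

def time_aware_eval(df, split_date):
    """Train on rows strictly before split_date, test on the rest."""
    past = df[df["release_date"] < split_date]
    future = df[df["release_date"] >= split_date]
    features = ["loc", "churn"]

    model = LogisticRegression(max_iter=1000)
    model.fit(past[features], past["bug"])

    pred = model.predict(future[features])
    prob = model.predict_proba(future[features])[:, 1]
    return {
        "f1": f1_score(future["bug"], pred),
        "auc": roc_auc_score(future["bug"], prob),
        "mcc": matthews_corrcoef(future["bug"], pred),
    }

# Sliding the split date forward shows how much each score drifts
# from one evaluation period to the next.
for split in ("2016-01-01", "2017-01-01", "2018-01-01"):
    print(split, time_aware_eval(metrics_df, split))
```

Repeating the evaluation over successive split dates, as the loop above does, is what exposes the instability the paper reports: the same predictor can score quite differently depending on which slice of the timeline it is tested on.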
Wed 26 May (displayed time zone: Amsterdam, Berlin, Bern, Rome, Stockholm, Vienna)
20:40 - 21:40: Session 2.6.3. Defect Prediction: Data Issues and Bug Classification (Technical Track / Journal-First Papers), Blended Sessions Room 3, +12h mirror. Chair(s): Federica Sarro, University College London

- 20:40 (20m, full paper, Technical Track): Early Life Cycle Software Defect Prediction. Why? How? Shrikanth N C, Suvodeep Majumder, Tim Menzies (North Carolina State University, USA). Pre-print; media attached.
- 21:00 (20m, journal-first paper): On the Time-Based Conclusion Stability of Cross-Project Defect Prediction Models. Abdul Ali Bangash, Hareem Sahar, Abram Hindle, Karim Ali (University of Alberta, Canada). Pre-print; media attached.
- 21:20 (20m, paper, Technical Track): IoT Bugs and Development Challenges. Amir Makhshari, Ali Mesbah (University of British Columbia). Pre-print; media attached.
Thu 27 May (displayed time zone: Amsterdam, Berlin, Bern, Rome, Stockholm, Vienna)
08:40 - 09:40: Session 2.6.3. Defect Prediction: Data Issues and Bug Classification (Journal-First Papers / Technical Track), Blended Sessions Room 3

- 08:40 (20m, full paper, Technical Track): Early Life Cycle Software Defect Prediction. Why? How? Shrikanth N C, Suvodeep Majumder, Tim Menzies (North Carolina State University, USA). Pre-print; media attached.
- 09:00 (20m, journal-first paper): On the Time-Based Conclusion Stability of Cross-Project Defect Prediction Models. Abdul Ali Bangash, Hareem Sahar, Abram Hindle, Karim Ali (University of Alberta, Canada). Pre-print; media attached.
- 09:20 (20m, paper, Technical Track): IoT Bugs and Development Challenges. Amir Makhshari, Ali Mesbah (University of British Columbia). Pre-print; media attached.