Methods for Stabilizing Models across Large Samples of Projects (with case studies on Predicting Defect and Project Health)
Mon 23 May 2022 13:30 - 13:45 at Room 315+316 - Blended Technical Session 2 (Machine Learning and Information Retrieval) Chair(s): Preetha Chatterjee
Despite decades of research, SE lacks widely accepted models (that offer precise, stable, quantitative predictions) about what factors most influence software quality. This paper provides a promising result showing that such stable models can be generated using a new transfer-learning framework called “STABILIZER”. Given a tree of projects recursively clustered on project meta-data, STABILIZER promotes a model upwards if it performs best among the lower clusters, stopping when the promoted model performs worse than the models seen at a lower level.
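The promotion step described above can be sketched in a few lines. This is a minimal illustration assuming a generic `train`/`score` interface; all names here (`Node`, `promote`, the toy `mean` helper) are hypothetical, and the actual implementation is in the repository linked below.

```python
# Hypothetical sketch of bottom-up model promotion over a cluster tree,
# based only on the abstract's description; not the authors' actual code.

def mean(xs):
    return sum(xs) / len(xs)

class Node:
    """One cluster in a tree of recursively clustered projects."""
    def __init__(self, projects, children=()):
        self.projects = projects        # project data in this cluster
        self.children = list(children)  # sub-clusters

def promote(node, train, score):
    """Return the model promoted up to `node`, or None if promotion stops."""
    if not node.children:               # leaf: build a local model
        return train(node.projects)
    # Recursively obtain each child's promoted model.
    pairs = [(c, promote(c, train, score)) for c in node.children]
    pairs = [(c, m) for c, m in pairs if m is not None]
    if not pairs:
        return None
    # Candidate for promotion: the child model that performs best here.
    best = max((m for _, m in pairs), key=lambda m: score(m, node.projects))
    # Stop promoting if the candidate does worse than what every
    # lower-level model achieved on its own cluster.
    if all(score(best, c.projects) < score(m, c.projects) for c, m in pairs):
        return None
    return best
```

With a toy `train = mean` and `score(m, ps) = -abs(m - mean(ps))`, the single best child model is carried upward until it stops generalizing, which is how a large tree of projects can end up covered by only a handful of models.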
The number of models found by STABILIZER is minimal: one for defect prediction (756 projects) and fewer than a dozen for project health (1,628 projects). Hence, via STABILIZER, it is possible to find a few projects that can be used for transfer learning, yielding conclusions that hold across hundreds of projects at a time. Further, the models produced in this manner offer predictions that perform as well as or better than the prior state of the art.
To the best of our knowledge, STABILIZER is an order of magnitude faster than prior state-of-the-art transfer learners that seek conclusion stability, and these case studies are the largest demonstration yet reported in the SE literature of the generalizability of quantitative predictions of project quality.
In order to support open science, all our scripts and data are online at https://github.com/Anonymous633671/STABILIZER.
Mon 23 May (displayed time zone: Eastern Time, US & Canada)
13:30 - 15:00
Blended Technical Session 2 (Machine Learning and Information Retrieval), Technical Papers / Data and Tool Showcase Track, Room 315+316
Chair(s): Preetha Chatterjee Drexel University, USA
Methods for Stabilizing Models across Large Samples of Projects (with case studies on Predicting Defect and Project Health)
Suvodeep Majumder (North Carolina State University), Tianpei Xia (North Carolina State University), Rahul Krishna (North Carolina State University), Tim Menzies (North Carolina State University) [Pre-print, media attached]
GraphCode2Vec: Generic Code Embedding via Lexical and Program Dependence Analyses
Wei Ma (SnT, University of Luxembourg), Mengjie Zhao (LMU Munich), Ezekiel Soremekun (SnT, University of Luxembourg), Qiang Hu (University of Luxembourg), Jie M. Zhang (King's College London), Mike Papadakis (University of Luxembourg), Maxime Cordy (University of Luxembourg), Xiaofei Xie (Singapore Management University), Yves Le Traon (University of Luxembourg) [Pre-print]
Senatus: A Fast and Accurate Code-to-Code Recommendation Engine
Fran Silavong (JP Morgan Chase & Co.), Sean Moran (JP Morgan Chase & Co.), Antonios Georgiadis (JP Morgan Chase & Co.), Rohan Saphal (JP Morgan Chase & Co.), Robert Otter (JP Morgan Chase & Co.) [DOI, pre-print, media attached]
Comments on Comments: Where Code Review and Documentation Meet
Nikitha Rao (Carnegie Mellon University), Jason Tsay (IBM Research), Martin Hirzel (IBM Research), Vincent J. Hellendoorn (Carnegie Mellon University) [DOI, pre-print, file attached]
On the Naturalness of Fuzzer Generated Code
Rajeswari Hita Kambhamettu (Carnegie Mellon University), John Billos (Wake Forest University), Carolyn "Tomi" Oluwaseun-Apo (Pennsylvania State University), Benjamin Gafford (Carnegie Mellon University), Rohan Padhye (Carnegie Mellon University), Vincent J. Hellendoorn (Carnegie Mellon University)
SOSum: A Dataset of Stack Overflow Post Summaries
Data and Tool Showcase Track
Bonan Kou (Purdue University), Yifeng Di (Purdue University), Muhao Chen (University of Southern California), Tianyi Zhang (Purdue University)
Discussions and Q&A