This year there will be an ACM Student Research Competition (SRC) at ECOOP/ISSTA! SRC offers a unique forum for undergraduate and graduate students to present their original research in software engineering and related topics at ECOOP/ISSTA 2023. The goal is to give students a place to discuss their research with experts in their field and to help them sharpen their research and communication skills.

Accepted Posters

  • An empirical evaluation of commit-untangling tools
  • An Empirical Study of Nondeterministic Behavior and Its Causes in Static Analysis Tools
  • Characterizing WebAssembly Performance in the Era of Serverless Computing
  • Determinism with Intersection and Union Types
  • ExplainThis: Teaching a New Programming Language and Paradigm Using Contextualized Explanations
  • Object Synthesis for Automated Test Generation
  • Out-of-place Multiverse Debugging for Constraint Devices
  • Polymorphism for Programming With Versions

Call for Student Research Competition Submissions


Eligibility criteria

Participants must have current student status, either graduate or undergraduate, at the time of the submission deadline. Participants in the SRC must also be current ACM (student) members.

Selection Process and Attendance

Three rounds of competition will take place before and during ECOOP/ISSTA 2023. All rounds are held in parallel for two categories: Undergraduate Students and Graduate Students. Master’s and Ph.D. students both participate in the Graduate category. Winners of the third round will be invited to participate in the SRC Grand Finals competition hosted by the ACM. The ECOOP/ISSTA rounds are:

  • Extended abstract round. Students first submit a 2-page extended abstract outlining their research. Based on the abstracts, a panel of reviewers will select the most promising authors to participate in a poster session, which will take place in person at ECOOP/ISSTA. Registration will be free for the selected students.
  • Poster session. Selected students will need to prepare a research poster. During the poster session, students will have the opportunity to present their work to the judges, who will select three semi-finalists for each category to advance to the third round.
  • ECOOP/ISSTA presentation. The last round will consist of a short (10-minute) oral presentation at the conference. Judges and attendees will have a chance to ask questions during the Q&A session after each talk. Based on these presentations, the judges will select the first, second, and third place for each category. Prizes will be awarded accordingly (see “Prizes and ACM SRC Grand Finals”).


The main body of the submission is a title and extended abstract consisting of the following:

Problem and Motivation: Clearly state the problem being addressed and explain the reasons for seeking a solution to this problem.

Background and Related Work: Describe the specialized (but pertinent) background necessary to appreciate the work in the context of ECOOP/ISSTA areas of interest. Include references to the literature where appropriate, and briefly explain where your work departs from that done by others.

Approach and Uniqueness: Describe your approach in addressing the problem and clearly state how your approach is novel.

Results and Contributions: Clearly show how the results of your work contribute to software engineering research in particular and to computer science in general; explain the significance of those results.

Submissions are double-blind: the uploaded PDF should not contain your name or any other identifying information. We do ask, however, that you include the student author’s name, email address, institutional affiliation, research advisor’s name(s), ACM student member number, category (undergraduate or graduate), and mode of presentation (in-person or virtual) in the standard author contact information region (under “additional notes” for any field that does not have a dedicated cell).

Submissions must be original research that is not already published at any conference or journal. One of the goals of the SRC is to give students feedback on ongoing, unpublished work. Furthermore, the abstract must be authored solely by the student. If the work is collaborative with others and/or part of a larger group project, the abstract should make clear what the student’s role was and should focus on that portion of the work.

The extended abstract must not be longer than 2 pages; the reference list does not count toward this limit. Please use the ACM Conference Proceedings Template with line numbers turned on.

The submission site is: (it will open for submissions closer to the deadline). Should you have questions about whether your work is a good fit for the SRC track, please reach out to the SRC Co-Chairs by emailing at or

Important Dates

The deadline for submitting the 2-page abstract is April 15, 2023.

Decisions will be sent out by the end of May, 2023.

The conference takes place on July 17 – 21, 2023, which is when both the poster session and the short talks will be held.

Prizes and ACM SRC Grand Finals

The top three places in each category will receive prizes: $500 for first place, $300 for second place, and $200 for third place.

First place winners in each category will be invited to participate in the ACM SRC Grand Finals, an online round of competition between first-place SRC winners from different ACM conferences held in 2023. Grand Finals will be judged by an ACM-appointed panel of judges.

The top three Grand Finals places in each category will receive additional prizes: $500 for first place, $300 for second place, and $200 for third place.

Good luck, and we hope to see you in July!

For students accepted to the SRC, here are the requirements for the poster, video, and final presentation.

Poster Requirements

  • Create a poster no larger than 3’ x 4’.
  • Organize the content on your poster neatly and clearly.
  • Use whatever colors you like, but make sure the overall design is cohesive.
  • Submit a PDF of your poster to us by July 15, 2023.
  • Please print your own poster and bring it to the venue.
  • We will provide easels to put the posters up.

Final Presentation Requirements

  • Students who qualify for the final presentation round will have 7 minutes to present their work, followed by a 4-minute Q&A session.
  • Prepare a slide deck for this round.

NOTE: There is only a single final presentation session. Virtual participants who qualify for the final round must attend that session live.

ISSTA/ECOOP will also have a poster session that is not part of the SRC. These posters will be presented during the SRC poster session but will not count toward the competition. Congratulations to all presenters!

Poster Presentations

  • An Exploration of Static Variability Bug Detection Through Desugaring

    Austin Mordahl

    The extensive use of the preprocessor in C programs results in hard-to-find bugs that exist only under specific configurations (i.e., variability bugs). Despite past efforts developing new family-based analyses to detect these bugs, there remains a large gap between product-based and family-based static analyses in terms of their bug detection capabilities. To help close this gap, we explore using desugared code as an intermediate representation for product-based analyzers. We accomplish this with the first tool-independent variability-aware static analysis framework that can easily integrate many existing C static analyzers without changing their underlying implementations. While applying existing analyzers to desugared code is intuitive, we identify and address multiple practical challenges to enable scalable family-based analysis, and share insights on the current state of analyzing desugared code. To ensure our framework will be widely adopted in practice, we design it to make it easy for tool developers to integrate their own analyzers and to allow end users to flexibly run family-based analyses with the integrated product-based analyzers on their target programs. We demonstrate that our tool integration interfaces allow the integration of three popular C analyzers, Clang, Infer, and Phasar, each within 150 lines of code. We evaluate across five unique programs and a variability bug dataset to study the tradeoffs of our approach compared with existing product-based and family-based variability analysis strategies.
  • LibAE: An Area Exploration Technology for Detecting Third-party Libraries in Binaries

    Siyuan Li

    Third-party libraries (TPLs) are widely reused by software to accelerate the development process and incorporate external functionalities. However, unreliable TPL reuse leads to a significant security threat. Existing works that detect TPLs at the function or file level are limited in their accuracy and robustness due to the significant variation in C/C++ binaries across various compilation options and architectures. Moreover, these works fail to identify the exact reuse area, leading to challenges in detecting complicated reuse relationships and downstream tasks. We discover that software does not reuse just an isolated function, but an area that includes several adjacent functions on the Function Call Graph (FCG). In this paper, we propose LibAE, a novel Area Exploration technology that connects isolated functions into reuse areas on FCG and detects TPLs at FCG level in binaries to significantly mitigate the impact of different compilation options and architectures. Additionally, LibAE is the first work to detect the exact reuse areas on FCG, which is beneficial for downstream tasks. Moreover, we construct the first TPL detection dataset in C/C++ binaries under different compilation options and architectures. Experimental results demonstrate that LibAE outperforms all existing TPL detection works and beats the state-of-the-art (SOTA) work by an average of 32.8% in precision and 10.9% in recall. Moreover, we further evaluate LibAE on large-scale real-world binaries in IoT firmware and generate a potential vulnerability list for them. Our experiments indicate that Area Exploration technology performs well in the TPL detection task and has potential in other binary similarity analysis tasks.
  • An Algorithm for Partial Elimination of Jumps in An Object-Oriented Dataflow Language

    Yegor Bugayenko

    Despite decades of criticism, jump statements like GOTO, BREAK, CONTINUE, and RETURN are actively used in imperative programming languages, for example in C++, Java, and Python. There is no consensus on whether refactoring these languages' source code in order to remove such statements can actually help increase code readability. However, it is obvious that automated program analysis would benefit from such a refactoring, because analyzing structured code is easier than analyzing code that arbitrarily changes its flow of control. Algorithms for such a refactoring process have already been proposed for some imperative languages. We propose a similar algorithm but for a dataflow programming language. Even though dataflow languages do not have jump statements, they may have the jump-objects (if they are object-oriented) or jump-functions (if they are functional). We implemented our algorithm as a command line tool for refactoring EO, an object-oriented dataflow language. We successfully tested the tool with a few EO programs. Using φ-calculus, we suggest a formal proof of the validity of all transformations in our algorithm.
  • DeltaNN: Differential Testing to Evaluate Robustness of Image Recognition Models

    Nikolaos Louloudakis

    Image recognition tasks typically use deep learning and require enormous processing power, thus relying on hardware accelerators like GPUs and TPUs for fast, timely processing. Failure in real-time image recognition tasks can occur due to incorrect mapping on hardware accelerators, which may lead to timing uncertainty and erroneous behavior. Mapping on hardware accelerators is done through multiple software components like deep learning frameworks, compilers, device libraries, that we refer to as the computational environment. Owing to the increased use of image recognition tasks in safety-critical applications like autonomous driving and medical imaging, it is imperative to assess their robustness to changes in the computational environment, as the impact of parameters like deep learning frameworks, compiler optimizations, and hardware devices on model performance and correctness is not well understood. In this paper we present a differential testing framework, DeltaNN, to conduct robustness analysis of three popular image recognition models with the ImageNet dataset, assessing the impact of changing deep learning frameworks, compiler optimizations, and hardware devices. We report the impact in terms of misclassifications and inference time across different settings. We observed up to 66% output label differences across deep learning frameworks, and up to 81.8% unexpected performance degradation in terms of inference time, when applying compiler optimizations. Using the analysis tools in DeltaNN, we also perform fault localization to understand the reasons for the observed differences.
  • Towards Reliable Memory Management for Python Native Extensions

    Joannah Nanjekye

    Many programming languages provide a C interface as a foreign function interface (FFI) for C developers to access the language, the Python language being one of these languages. Over the years, the Python C API has grown to be a challenge for the evolution of the Python ecosystem. In this paper, we implement a new Python FFI, we call CyStck, by combining a stack and light-weight handles, to support efficient garbage collection (GC) in Python native extensions. Five large, real-world Python extensions are ported to CyStck, thoroughly profiled with the Scalene profiler, comparing CyStck to the current CPython C API and another Python C API implementation, HPy. CyStck provides speed ups in native (12%) and Python (13%) time respectively for some benchmarks. CyStck also introduces acceptable overhead in system time, as low as 0.2X in some benchmarks while copying the fewest bytes (1%–40%) for all benchmarks across the C/Python boundary compared to the CPython API and HPy respectively. We also implemented a tool to automate the migration of extensions from the CPython C API to CyStck using pattern matching and static analysis, with a success rate as high as 90%.
  • LATCH: Enabling Large-scale Tests on Constraint Systems

    Tom Lauwaerts

    Testing is an essential part of the software development cycle. Unfortunately, testing software on constraint devices is currently very challenging. First, the limited memory of constraint devices severely restricts the size of the test suites. Second, the limited processing power can greatly slow down test suites, preventing a fast feedback loop. Third, when the constraint device becomes unresponsive, it is impossible to distinguish between the test failing or taking very long, forcing the developer to work with timeouts. In addition, these timeouts can cause tests to become flaky, where they have unpredictable outcomes for the same code. Given these problems, researchers have found that most IoT developers rely on laborious manual testing. We propose the novel testing framework LATCH (Large-scale Automated Testing on Constraint Hardware) to overcome the three main challenges of running large test suites on constraint hardware. The core idea of LATCH is to enable programmers to script tests on a desktop machine which are remotely executed on the constraint device. The main advantage is that the constraint device does not maintain the whole test suite in memory, but is step-wise instructed to execute each test. Our testing framework further allows developers to mark tests as depending on other tests. This way, LATCH can skip tests that depend on previously failing tests, resulting in a fast feedback loop. Finally, LATCH addresses the issue of timeouts and flaky tests by including an analysis mode that provides feedback on timeouts and the flakiness of tests. We evaluate LATCH by implementing a large specification test suite, consisting of over 8,000 tests running on top of an ESP32 microcontroller. Our experience shows that the testing framework is expressive, reliable, and reasonably fast, making it suitable to run large test suites on constraint devices.
  • Context-adaptive Automated Testing for Deep Learning Model

    Wei Kong

    Deep learning (DL) defines a new data-driven programming paradigm that constructs the internal system logic by training data, which renders it difficult for system analysis and defect detection. Hence, recent efforts to test deep learning systems have produced intuitive test criteria called Neuron Coverage (NC) and NC-variants, which measure the proportion of neurons activated in a neural network. However, a simple but powerful observation reveals that the contexts of DL applications are neglected when testing. Consequently, the paper first proposes a novel testing framework that incorporates the properties of DL pipelines before test execution, such as time-consuming labeling, complex feature extraction, and frequent re-training. Next, we formulate structural coverage criteria to guide test suite generation based on the above-mentioned properties in different contexts. Furthermore, we proposed a heuristic-based search algorithm to efficiently generate test suites and context-related comprehensive assessment metrics to evaluate the quality of test suites. To exemplify its usefulness, experimental results demonstrate that PipelineTest can generate more diverse and explainable test suites from the perspective of the structural DL pipeline, which can effectively trigger numerous erroneous behaviors in the contexts of medical image diagnosis, method name prediction, and Android malware detection.
  • Specification-Driven Synthesis of Summaries for Symbolic Execution

    Frederico Ramos

    Symbolic execution is a program analysis technique that allows for the exploration of the execution paths of a given program up to a bound. Despite its popularity, two main challenges still hinder its application to real-world code: interactions with the runtime environment and path explosion. The standard way of addressing these problems is to use symbolic summaries, which are operational models that simulate the execution of concrete functions through the use of symbolic reflection primitives. However, symbolic summaries are both error-prone and difficult to validate, leading to numerous correctness bugs in summaries used within state-of-the-art symbolic execution engines. To address this problem, we present SumSynth, a new tool for the automatic synthesis of symbolic summaries from separation-logic specifications for the C programming language. With SumSynth, the developer needs only to write a separation logic specification for the function of interest to generate a correct-by-construction symbolic summary. SumSynth supports the generation of both over- and under-approximating summaries for both verification and bug-finding. We are in the process of evaluating SumSynth in a dataset consisting of 24 libc functions for string and number manipulation. The poster gives an overview of the main design decisions and theoretical underpinnings of SumSynth, and a snapshot of its current evaluation results.
  • Enhancing Web Test Script Repair using Integrated UI Structural and Visual Information

    Zhongzhen Wen

    End-to-end UI testing plays an indispensable role in web testing. However, the maintenance of UI test scripts can become a challenge as web applications undergo changes, leading to the potential breakage of these scripts. The manual repair of broken scripts is a time-consuming and labor-intensive process, making it imperative to study automated repair approaches. Existing approaches have relied on either the Document Object Model (DOM) or visual information alone to repair broken scripts, which show limited effectiveness as they only utilize a subset of the available information and conventional processing techniques. Furthermore, merely combining the two approaches is not sufficient to improve effectiveness, as the use of two disparate methods may result in conflicting repair outcomes. In this study, we present a novel approach to web test repair that considers all available information in the DOM and UI, categorizing it as either identity-related or appearance-related and prioritizing its use in the repair process. In addition, we propose a more advanced lightweight Convolutional Neural Network (CNN) based approach for better processing visual information. Our approach has been implemented as a tool named WebRL, which is available for practical use and further research. The effectiveness of our tool was evaluated on a set of broken UI scripts constructed from 20 real-world websites and was found to outperform state-of-the-art approaches, including WATER, COLOR, and VISTA, by a significant margin.