SSBSE 2022
Thu 17 - Fri 18 November 2022 Singapore
co-located with ESEC/FSE 2022

The SSBSE Challenge Track is an exciting opportunity for SBSE researchers to apply tools, techniques, and algorithms to real-world software. Participants can use their expertise to carry out analyses on open source software projects or to directly improve the infrastructure powering research experiments. The principal criterion is to produce interesting results and to apply your expertise to challenge the state of the art and inspire future SBSE research.

All accepted submissions will compete for cash prizes totalling up to €1000. Up to three winners will be selected by the co-chairs, based on the reviews of the submitted papers. The winners will be announced at SSBSE 2022.


Thu 17 Nov

Displayed time zone: Beijing, Chongqing, Hong Kong, Urumqi

09:00 - 10:30
Plenary + Keynote 1 (Keynotes) at ERC SR 9
Chair(s): Mike Papadakis University of Luxembourg, Luxembourg
09:00
90m
Keynote
Applications of Search-based Software Testing to Trustworthy Artificial Intelligence
Keynotes
Lionel Briand University of Luxembourg; University of Ottawa
11:00 - 12:30
Session 1 (Research Papers / RENE / NIER) at ERC SR 9
Chair(s): Ezekiel Soremekun SnT, University of Luxembourg
11:00
30m
Talk
Guess What: Test Case Generation for Javascript with Unsupervised Probabilistic Type Inference
Research Papers
Dimitri Stallenberg Delft University of Technology, Mitchell Olsthoorn Delft University of Technology, Annibale Panichella Delft University of Technology
Pre-print
11:30
30m
Talk
Improving Search-based Android Test Generation using Surrogate Models
Research Papers
Michael Auer University of Passau, Felix Adler University of Passau, Gordon Fraser University of Passau
12:00
30m
Talk
Applying Combinatorial Testing to Verification-Based Fairness Testing
RENE / NIER
Takashi Kitamura, Zhenjiang Zhao Graduate School of Informatics and Engineering, University of Electro-Communications, Tokyo, Japan, Takahisa Toda The University of Electro-Communications
14:00 - 15:30
Session 2 (Research Papers / Challenge Track) at ERC SR 9
Chair(s): Renzo Degiovanni SnT, University of Luxembourg
14:00
30m
Talk
An Empirical Comparison of EvoSuite and DSpot for Improving Developer-Written Test Suites with Respect to Mutation Score
Research Papers
Muhammad Firhard Roslan University of Sheffield, José Miguel Rojas The University of Sheffield, Phil McMinn University of Sheffield
14:30
30m
Talk
Efficient Fairness Testing through Hash-Based Sampling
Research Papers
Zhenjiang Zhao Graduate School of Informatics and Engineering, University of Electro-Communications, Tokyo, Japan, Takahisa Toda The University of Electro-Communications, Takashi Kitamura
15:00
30m
Talk
Multi-Objective Genetic Improvement: A Case Study with EvoSuite
Challenge Track
James Callan UCL, Justyna Petke University College London
16:00 - 17:30
Session 3 (Research Papers) at ERC SR 9
Chair(s): Mitchell Olsthoorn Delft University of Technology
16:00
30m
Talk
EvoAttack: An Evolutionary Search-based Adversarial Attack for Object Detection Models
Research Papers
Kenneth Chan Michigan State University, Betty H.C. Cheng Michigan State University
16:30
30m
Talk
Search-based Test Suite Generation for Rust
Research Papers
Vsevolod Tymofyeyev University of Passau, Gordon Fraser University of Passau

Fri 18 Nov


11:00 - 12:30
Future of SSBSE 1 (Future of SBSE) at Virtual 3 (Whova)
Chair(s): Thiago Ferreira University of Michigan - Flint
11:00
30m
Talk
ML is the new SBSE
Future of SBSE
Myra Cohen Iowa State University
11:30
30m
Talk
Reverse engineering the new SBSE
Future of SBSE
Tim Menzies North Carolina State University
14:00 - 15:30
Tutorial at Virtual 3 (Whova)
Chair(s): Jeongju Sohn University of Luxembourg, Luxembourg
14:30
60m
Tutorial
Methodology and Guidelines for Evaluating Multi-Objective Search-Based Software Engineering
Tutorial
Miqing Li University of Birmingham, Tao Chen Loughborough University
Link to publication Pre-print
16:00 - 17:30
Keynote 2 (Keynotes) at Virtual 3 (Whova)
Chair(s): Annibale Panichella Delft University of Technology
16:00
90m
Keynote
Genetic Improvement of Software
Keynotes
Justyna Petke University College London
18:30 - 20:00
Future of SSBSE 2 (Future of SBSE) at Virtual 3 (Whova)
Chair(s): Giovani Guizzo University College London
18:30
30m
Talk
Online software safety: a new paradigm for SBSE research
Future of SBSE
Mark Harman Meta Platforms, Inc. and UCL
19:00
30m
Talk
"SSBSE 2050: 14-18 November, Oxia Palus, Mars"
Future of SBSE
Andrea Arcuri Kristiania University College and Oslo Metropolitan University
19:30
30m
Talk
Data Mining Algorithms Using/Used-by Optimisers: a DUO Approach to Software Engineering
Future of SBSE
Leandro Minku University of Birmingham, UK

Call for Challenge Solutions

Challenge Cases

You can choose one or more of the following programs as the subjects of your SBSE research. In other words, you need to use the software and/or their accompanying artefacts as targets of your approach. Remember, source code and tests are not the only artefacts that can be targeted: you can also use the commit history, documentation, bug-tracking information, and other resources associated with these programs.

We also encourage you to contact the developers of these programs if you are able to enhance the artefacts, find bugs, or otherwise contribute to the development of the software.

The Traditional Software

If you prefer more traditional software, we have prepared two challenge cases for you. Both programs are well known in research and industry.

Java - Apache Commons Numbers

Apache Commons Numbers is an open-source library, released under the Apache License v2, that provides implementations of number types and utilities for Java. Features include:

  • arithmetic
  • primes
  • fractions
  • combinatorics
  • complex numbers
  • quaternions
  • gamma functions
  • root finding

The project is divided into many modules, each dedicated to a specific type of number operation and equipped with its own test suite. It uses JUnit as its testing framework.

Official Website: https://commons.apache.org/proper/commons-numbers/
Repository: https://github.com/apache/commons-numbers
Current Version: 1.0 (https://github.com/apache/commons-numbers/tree/89a9c3817222a6c67d7a231b19f6c5c7fc995208)
Bug Tracking System: https://issues.apache.org/jira/browse/NUMBERS
Documentation: https://commons.apache.org/proper/commons-numbers/commons-numbers-core/apidocs/org/apache/commons/numbers/core/package-summary.html

Python - NumPy

NumPy (Numerical Python) is a community-driven open-source library for the Python programming language. It is the de facto standard for operations on numerical data in Python. NumPy adds powerful data structures to Python that enable efficient calculations with arrays and matrices, and it supplies an enormous library of high-level mathematical functions that operate on these arrays and matrices. NumPy users range from beginning coders to experienced researchers doing state-of-the-art scientific and industrial research and development.

Official Website: https://numpy.org/
Repository: https://github.com/numpy/numpy
Documentation: https://numpy.org/doc/stable/
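As a small taste of the library, the sketch below exercises element-wise arithmetic, broadcasting, and a high-level reduction. It is an illustrative example only, not part of the challenge materials:

```python
import numpy as np

# Element-wise arithmetic on arrays avoids explicit Python loops.
a = np.array([1.0, 2.0, 3.0])
b = np.array([10.0, 20.0, 30.0])
print(a + b)          # [11. 22. 33.]

# Broadcasting: a (3, 1) column combines with a (3,) row into a (3, 3) matrix.
col = a.reshape(3, 1)
table = col * b
print(table.shape)    # (3, 3)

# High-level mathematical functions operate on whole arrays at once;
# this computes the Euclidean norm, equivalent to np.linalg.norm(a).
print(np.sqrt(np.sum(a ** 2)))
```

Behaviours like broadcasting and vectorised reductions are exactly the kind of widely relied-upon functionality that a search-based testing approach could target.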

Who Watches the Watchers?

If you feel like taking on a real challenge, these four programs will put your skills to the test. They are commonly used by SBSE researchers to compose their solutions, but they are rarely used as subjects themselves. This year, inspired by the common saying “Who watches the watchers?”, we invite you to innovate by applying your approaches to popular Machine Learning (ML) and SBSE programs.

Although these programs are meant to be used as subjects, nothing prevents you from also using them to solve problems on themselves.

Java - EvoSuite

One of the most popular tools for search-based test generation is EvoSuite. It automatically generates test cases with assertions for classes written in Java. To achieve this, EvoSuite uses search-based techniques that have been improved over the years, from the “whole test suite generation” algorithm to the “Dynamic Many-Objective Sorting Algorithm” (DynaMOSA) that was recently integrated into the tool. Moreover, EvoSuite offers the flexibility to adjust the search via its fitness functions, for instance to target different coverage criteria. A benchmark (SF110) is provided to compare the performance of the different test case generation techniques implemented in EvoSuite. The latest release improves support for Java 9 and JUnit 5. EvoSuite itself is implemented in Java.

Official Website: https://www.evosuite.org/
Latest Release: https://github.com/EvoSuite/evosuite/releases/tag/v1.2.0
Repository: https://github.com/EvoSuite/evosuite

Kotlin and Java - EvoMaster

Mainly developed by Prof. Andrea Arcuri of Kristiania University College, Oslo, Norway, EvoMaster is the first open-source AI-driven tool that automatically generates system-level test cases for web/enterprise applications. Internally, it uses an evolutionary algorithm and dynamic program analysis to generate effective test cases: it evolves test cases from an initial population of random ones, trying to maximise measures such as code coverage and fault detection. EvoMaster uses several kinds of AI heuristics to improve performance even further, building on decades of research in the field of search-based software testing.

At the moment, EvoMaster targets RESTful APIs compiled to JVM (e.g., Java and Kotlin). It supports JDK 8 and the major LTS versions after that (currently JDK 17). The tool generates JUnit (version 4 or 5) tests, written in either Java or Kotlin.

Its core components are developed in Kotlin, while also having a Java client.

Official Website: https://github.com/EMResearch/EvoMaster
Repository: https://github.com/EMResearch/EvoMaster
Current Version: 1.4.0 (https://github.com/EMResearch/EvoMaster/tree/acca83118e58dc4c8fa5231d86b70e9b26ce3bce)
Bug Tracking System: https://github.com/EMResearch/EvoMaster/issues
Documentation: https://github.com/EMResearch/EvoMaster/tree/master/docs
Publications: https://github.com/EMResearch/EvoMaster/tree/master/docs/publications
Demonstration Video: https://www.youtube.com/watch?v=3mYxjgnhLEo

Python - Keras

Keras is a popular deep learning framework for developing artificial neural networks. It is implemented in Python and runs on top of the machine learning platform TensorFlow. In particular, Keras provides a Python API that enables fast experimentation with neural networks. Its goal is to be a simple, flexible, and scalable framework whose API provides appropriate abstractions and building blocks for quickly developing solutions to machine learning problems. It is used by organizations such as NASA, YouTube, and Waymo.

Official Website including documentation and code examples: https://keras.io/
Repository: https://github.com/keras-team/keras
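To give a flavour of the API, the snippet below builds and compiles a tiny fully connected classifier. The layer sizes and input shape are arbitrary choices for illustration, not something prescribed by the challenge:

```python
from tensorflow import keras

# A minimal Sequential model: two dense layers mapping 4 features to 3 classes.
model = keras.Sequential([
    keras.layers.Input(shape=(4,)),
    keras.layers.Dense(16, activation="relu"),
    keras.layers.Dense(3, activation="softmax"),
])

# Compiling attaches an optimizer and a loss; no training happens yet.
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```

The same high-level abstractions (layers, models, optimizers) that make experimentation fast also give SBSE approaches many well-defined interfaces to analyse or test.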

Python - MNIST

MNIST (Modified National Institute of Standards and Technology) is a dataset of hand-written digit images. It is divided into a training set of 60,000 examples and a test set of 10,000. Each element of the dataset comes with an assigned label, a digit between 0 and 9. The images are grayscale and were centered in a 28x28 image by computing the center of mass of the pixels and translating the image so as to position this point at the center of the 28x28 field. MNIST is widely used as a basis for learning and practicing how to develop, evaluate, and use convolutional deep learning (DL) neural networks for image classification from scratch. Moreover, it is the most used case study in academic papers on the application of software engineering to the testing of deep learning systems. In addition to the dataset, we also provide the source code for a DL model that performs the classification task and achieves ~99% test accuracy.

Official Website: http://yann.lecun.com/exdb/mnist/
Source code of the model: https://keras.io/examples/vision/mnist_convnet/
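The center-of-mass preprocessing described above can be sketched in a few lines of NumPy. This is an illustrative integer-pixel reimplementation under our own assumptions, not the original MNIST preprocessing code, and the function name is ours:

```python
import numpy as np

def center_by_mass(img):
    """Translate a 2D image so that its pixel center of mass lands at the
    center of the grid, using an integer-pixel shift for simplicity."""
    ys, xs = np.nonzero(img)
    if len(ys) == 0:
        return img
    weights = img[ys, xs].astype(float)
    cy = np.average(ys, weights=weights)  # center of mass, row coordinate
    cx = np.average(xs, weights=weights)  # center of mass, column coordinate
    h, w = img.shape
    dy = int(round((h - 1) / 2 - cy))
    dx = int(round((w - 1) / 2 - cx))
    out = np.zeros_like(img)
    # Copy pixels to their shifted positions, discarding any that fall outside.
    for y, x in zip(ys, xs):
        ny, nx = y + dy, x + dx
        if 0 <= ny < h and 0 <= nx < w:
            out[ny, nx] = img[y, x]
    return out

# A single bright pixel near a corner of a 28x28 image moves to the middle.
img = np.zeros((28, 28))
img[2, 3] = 1.0
centered = center_by_mass(img)
print(np.nonzero(centered))  # the pixel now sits near the grid center
```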


Submitting to the Challenge Track

A challenge-track participant should:

  • Perform original SBSE research using or enhancing the challenge programs and/or their artefacts.
  • Report the findings in a six-page paper using the regular symposium format. Note that these findings must not have been previously published in any peer-reviewed venue.
  • Submit the challenge-track report by the deadline.
  • Present the findings at SSBSE 2022, if the submission is accepted.

It is not mandatory for submissions to the SSBSE Challenge track to implement a new tool, technique, or algorithm. However, we do expect that applying your existing or new tools/techniques/algorithms to the challenge programs will lead to relevant insights and interesting results.

The criteria for paper acceptance are the following:

  • Application of an SBSE technique to either analyse or enhance the challenge programs and/or their accompanying artefacts.
  • Technical soundness.
  • Readability and presentation.

Submission details

Submissions must be at most six pages long, in PDF format, and should conform at the time of submission to the SSBSE/Springer LNCS format and submission guidelines. Submissions must not have been previously published in, or be under consideration for, any journal, book, or other conference. Please submit your challenge paper to EasyChair on or before the Challenge Solution deadline. At least one author of each paper is expected to present at SSBSE 2022. Papers for the Challenge Solution track are also required to follow double-anonymous restrictions. All accepted contributions will be published in the conference proceedings.

Submissions can be made via EasyChair (https://easychair.org/conferences/?conf=ssbse2022) by the submission deadline. If you have any questions about the challenge, please email the Challenge Track chairs.

UKRI Open Access Policy

For the planned Springer Nature conference proceedings, the License-to-Publish form for accepted papers will be compatible with the UKRI policy introduced on April 1st, 2022. Springer Nature will allow UKRI-funded authors of accepted papers to make their Accepted Manuscripts (after acceptance by the conference but before any publisher typesetting or production editing, i.e., not the Version of Record) available on their own personal, self-maintained website immediately on acceptance, or for public release on their employer's internal website or their institutional and/or funder repositories immediately upon acceptance, without an embargo period, under a CC BY licence.

Questions? Use the SSBSE Challenge Track contact form.