ICSE 2023
Sun 14 - Sat 20 May 2023 Melbourne, Australia

Call For Artifact Submissions

Version 1.0, last modified 2021-12-15


The artifact evaluation track aims to review, promote, share, and catalog the research artifacts of accepted software engineering papers. Authors of papers accepted to the Technical/SEIP/NIER/SEET/SEIS tracks can submit an artifact for the Artifacts Evaluated and Artifacts Available badges. Authors of any prior SE work (published at ICSE or elsewhere) are also invited to submit their work for the Results Validated badges. Definitions for the badges are given below, taken from ACM Artifact Review and Badging, Version 1.1.

Artifacts Evaluated – Functional
(badge image: https://www.acm.org/binaries/content/gallery/acm/publications/replication-badges/artifacts_evaluated_functional_dl.jpg)
The artifacts associated with the research are found to be documented, consistent, complete, exercisable, and include appropriate evidence of verification and validation.

Artifacts Evaluated – Reusable
(badge image: https://www.acm.org/binaries/content/gallery/acm/publications/replication-badges/artifacts_evaluated_reusable_dl.jpg)
The artifacts associated with the paper are of a quality that significantly exceeds minimal functionality. That is, they have all the qualities of the Artifacts Evaluated – Functional level, but, in addition, they are very carefully documented and well-structured to the extent that reuse and repurposing is facilitated. In particular, norms and standards of the research community for artifacts of this type are strictly adhered to.

Artifacts Available – Available
(badge image: https://www.acm.org/binaries/content/gallery/acm/publications/replication-badges/artifacts_available_dl.jpg)
Author-created artifacts relevant to this paper have been placed on a publicly accessible archival repository. A DOI or link to this repository along with a unique identifier for the object is provided.

Results Validated – Reproduced
(badge image: https://www.acm.org/binaries/content/gallery/acm/publications/replication-badges/results_reproduced_dl.jpg)
The main results of the paper have been independently obtained in a subsequent study by a person or team other than the authors, without the use of author-supplied artifacts.

Results Validated – Replicated
(badge image: https://www.acm.org/binaries/content/gallery/acm/publications/replication-badges/results_replicated_dl.jpg)
The main results of the paper have been obtained in a subsequent study by a person or team other than the authors, using, in part, artifacts provided by the author.

Important Dates

  • Jan 20, 2023: Artifact abstract deadline.

  • Jan 27, 2023: Artifact submissions deadline.

  • Jan 28 - Feb 7, 2023: 1st review period (PC only).

  • Feb 7 - Feb 17, 2023: 2nd review period (PC/authors discussion).

  • Feb 24, 2023: Notifications.

Best Artifact Awards

There will be two ICSE 2023 Best Artifact Awards to recognize the effort of authors who create and share outstanding research artifacts.

Submission for Results Validated Badges

For the two types of Results Validated badges (Replicated and Reproduced), authors need to provide appropriate documentation showing that their artifacts have reached that stage.

By the submission deadline, register your research artifact at the ICSE 2023 HotCRP site by submitting an abstract of at most two pages in PDF format describing your artifact.

The abstract should include the paper title, the purpose of the research artifact, the badge(s) you are claiming, and the technology skills assumed by the reviewer evaluating the artifact. Please also mention if running your artifact requires specific Operating Systems or other environments.

  • TITLE: A (Partial) (Replication|Reproduction) of XYZ. Please add the term "Partial" to your title if only part of the original work could be replicated/reproduced.

  • WHO: name the original authors (and paper) and the authors who performed the replication/reproduction. Include contact information and mark one author as the corresponding author. IMPORTANT: also include a web link to a publicly available URL directory containing (a) the original paper (that is being reproduced) and (b) any subsequent paper(s)/documents/reports that do the reproduction.

  • WHAT: describe the “thing” being replicated/reproduced;

  • WHY: clearly state why that “thing” is interesting/important;

  • HOW: describe how the original work was done;

  • WHERE: describe the replication/reproduction. If it was only partial, explain which parts could be achieved and which had to be omitted.

  • DISCUSSION (if applicable): What aspects of this “thing” made it easier or harder to replicate/reproduce? What lessons were learned from this work that would enable more replication/reproduction in the future for other kinds of tasks or other kinds of research?

At least two PC members will review each abstract, possibly reaching out to the authors of the abstract or original paper. Abstracts will be ranked as follows.

  • If PC members do not find sufficient substantive evidence for replication/reproduction, the abstract will be rejected.

  • The remaining abstracts will be sorted according to (a) correctness and (b) how interesting they are to the community (relevance).

  • The top-ranked abstracts will be invited to give lightning talks.

Submission for Evaluated and Available Badges

Only authors of papers accepted to the 2023 Technical/SEIP/NIER/SEET/SEIS tracks can submit candidate reusable or available artifacts.

By the submission deadline, register your research artifact at the ICSE 2023 HotCRP site by submitting an abstract of at most two pages in PDF format describing your artifact.

For the evaluated and available badges, authors must offer “download information” showing how reviewers can access and execute (if appropriate) their artifact.

Authors must perform the following steps to submit an artifact:

  1. Prepare the artifact

  2. Make the artifact available

  3. Document the artifact

  4. Submit the artifact

1. Prepare the artifact

There are two options depending on the nature of the artifacts: Installation Package or Simple Package. In both cases, the configuration and installation for the artifact should take less than 30 minutes. Otherwise, the artifact is unlikely to be accepted on practical grounds, simply because the PC will not have sufficient time to evaluate it.

  • Installation Package. If the artifact consists of a tool or software system, then the authors need to prepare an installation package so that the tool can be installed and run in the evaluator’s environment. Provide enough associated instruction, code, and data such that an average CS professional could build, install, and run the code. If the artifact contains or requires the use of a special tool or any other non-trivial piece of software, the authors must provide a VirtualBox VM image or a Docker container image with a working environment containing the artifact and all the necessary tools. We expect artifacts to have been vetted on a clean machine before submission.

  • Simple Package. If the artifact only contains documents which can be used with a simple text editor, a PDF viewer, or some other common tool (e.g., a spreadsheet program in its basic configuration) the authors can just save all documents in a single package file (zip or tar.gz).
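
For the Simple Package option above, any standard archiving tool works; the following minimal sketch (in Python, with a hypothetical directory and output name) shows one way to bundle an artifact directory into a single package file:

    # Sketch only: bundle an artifact directory into a single package file.
    # "artifact/" and the output name are hypothetical; adapt to your layout.
    import shutil

    archive = shutil.make_archive(
        base_name="icse2023-artifact",  # output file name without extension
        format="gztar",                 # produces a .tar.gz; use "zip" for a .zip
        root_dir="artifact",            # directory holding the documents and data
    )
    print("Created package:", archive)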

2. Make the artifact available

The authors need to make the packaged artifact (installation package or simple package) available so that the PC can access it.

We recommend making artifacts available via an archival repository, such as Software Heritage (see its submission guide), which provides long-term availability of software source code. Other commonly used solutions, more focused on long-term data archival, include Figshare and Zenodo.

We recommend against making artifacts available solely on collaborative software development platforms, as they do not guarantee long-term archival. Also, availability via an archival repository is a mandatory requirement for papers aiming for the Available badge.

For papers aiming for the Functional and Reusable badges (but not the Available badge), it is possible to provide a private link or a password-protected link to the submitted artifact, in order to enable reviewers to privately review it. While these mechanisms are supported, in the spirit of the ICSE Open Science Policy, we recommend against them and encourage authors to make their artifacts publicly available, regardless of which badges they are applying for.
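
As one possible route to an archival deposit, the sketch below uses Zenodo's documented REST API; it assumes a personal access token stored in the ZENODO_TOKEN environment variable and a hypothetical package name, and it should be checked against the current Zenodo documentation before use:

    # Sketch only: create a Zenodo deposition and upload a packaged artifact.
    # Endpoints follow Zenodo's documented REST API at the time of writing;
    # please verify against the current API documentation before relying on this.
    import os
    import requests

    token = os.environ["ZENODO_TOKEN"]   # personal access token (assumed to be set)
    api = "https://zenodo.org/api"

    # 1. Create an empty deposition.
    resp = requests.post(f"{api}/deposit/depositions",
                         params={"access_token": token}, json={})
    resp.raise_for_status()
    deposition = resp.json()

    # 2. Upload the package into the deposition's file bucket.
    bucket_url = deposition["links"]["bucket"]
    with open("icse2023-artifact.tar.gz", "rb") as fp:
        requests.put(f"{bucket_url}/icse2023-artifact.tar.gz",
                     data=fp,
                     params={"access_token": token}).raise_for_status()

    # Publishing the deposition (via the web UI or a further API call)
    # mints the DOI that can be reported in the artifact submission.
    print("Draft deposition:", deposition["links"]["html"])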

3. Document the artifact

The authors need to write and submit documentation explaining how to obtain, unpack, and use the artifact in more detail. The artifact submission should describe only the technical details of the artifact and uses of the artifact that are not already covered in the paper.

The submission should include the following documents (in plain text or pdf format) in a single archive file:

  • A README file describing what the artifact does and where it can be obtained, together with a clear description of how to repeat/replicate/reproduce the results presented in the paper. Artifacts that focus on data should, in principle, cover aspects relevant to understanding the context, data provenance, ethical and legal statements (where relevant), and storage requirements. Artifacts that focus on software should, in principle, cover aspects relevant to how to install and use it (and be accompanied by a small example).

  • A REQUIREMENTS file for artifacts that focus on software. This file should cover aspects of hardware (e.g., performance, storage, or non-commodity peripherals) and software environments (e.g., Docker, VM, operating system, package dependencies). Any deviation from standard environments needs to be reasonably justified.

  • A STATUS file stating which badge(s) the authors are applying for, as well as the reasons why the authors believe the artifact deserves those badge(s).

  • A LICENSE file describing the distribution rights. For submissions aiming for the Available badge, the license needs to ensure public availability. In the spirit of the ICSE Open Science Policy, we recommend adopting an open source license.

  • An INSTALL file with installation instructions. These instructions should include notes illustrating a very basic usage example or a method to test the installation. For instance, they could state what output to expect to confirm that the code is installed and working, and that it is doing something interesting and useful.

  • A copy of the accepted paper in pdf format.
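
Before submitting, it can help to confirm that the archive really contains the documents listed above. The following minimal sketch (the archive name is hypothetical) performs that check for a .tar.gz package:

    # Sketch only: verify that a packaged artifact contains the documents
    # listed above. The archive name is hypothetical; REQUIREMENTS applies to
    # software artifacts, and the accepted paper (PDF) should also be included.
    import tarfile

    REQUIRED = {"README", "REQUIREMENTS", "STATUS", "LICENSE", "INSTALL"}

    with tarfile.open("icse2023-artifact.tar.gz", "r:gz") as tar:
        stems = {member.name.rsplit("/", 1)[-1].split(".")[0] for member in tar}

    missing = REQUIRED - stems
    if missing:
        print("Missing documents:", ", ".join(sorted(missing)))
    else:
        print("All expected documents found.")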

4. Submit the artifact

By the submission deadline, register your research artifact at the ICSE 2023 HotCRP site by submitting an abstract describing your artifact. The abstract should include the paper title, the purpose of the research artifact, the badge(s) you are claiming, and the technology skills assumed by the reviewer evaluating the artifact. Please also mention if running your artifact requires specific Operating Systems or other environments.

The PC may contact the authors, via the submission system, during the 2nd review period to request clarifications on the basic installation and start-up procedures or to resolve simple installation problems. Information on this phase is provided in the Submission and Reviewing Guidelines. Given the short review time available, the authors are expected to respond within a 48-hour period. Authors may update their research artifact after submission only for changes requested by reviewers in the 2nd review period (PC/author discussion phase).

Further information will be posted on the website https://conf.researchr.org/track/icse-2023/icse-2023-artifact-evaluation as it becomes available.

In case of questions, please do not hesitate to contact the chairs.


Review Guidelines

For PC members, please refer to the excellent ESEC/FSE 2021 guidelines for artifact evaluation criteria (the entire document also applies to ICSE 2023, with the notable exception of the deadline dates).

Accepted Research Artifacts

  • AdHere: Automated Detection and Repair of Intrusive Ads
  • Analyzing the Impact of Workloads on Modeling the Performance of Configurable Software Systems (Supplementary Material)
  • APICAD: Augmenting API Misuse Detection through Specifications from Code and Documents
  • A Replication of "Better Automatic Program Repair by Using Bug Reports and Tests Together"
  • A Replication of Large Language Models are Few-shot Testers: Exploring LLM-based General Bug Reproduction
  • A Reproduction of Demystifying Privacy Policy of Third-Party Libraries in Android Apps
  • A Reproduction of Dependency Facade: The Coupling and Conflicts between Android Framework and Its Customization
  • Artifact Abstract of TAINTMINI: Detecting Flow of Sensitive Data in Mini-Programs
  • [Artifact] ECSTATIC: An Extensible Framework for Testing and Debugging Configurable Static Analysis
  • Artifact for Article "SmartMark: Software Watermarking Scheme for Smart Contracts"
  • Artifact for Evaluating the Impact of Experimental Assumptions in Automated Fault Localization
  • Artifact for "FedSlice: Protecting Federated Learning Models From Malicious Participants with Model Slicing"
  • Artifact for Flexible and Optimal Dependency Management via Max-SMT
  • Artifact for Fonte: Finding Bug Inducing Commits From Failures
  • Artifact for Locating Framework-specific Crashing Faults with Compact and Explainable Candidate Set
  • Artifact for Technical Track Paper: An Empirical Study of Pre-Trained Model Reuse in the Hugging Face Deep Learning Model Registry
  • Artifact for the ICSE 2023 Paper Entitled "Towards Understanding Fairness and its Composition in Ensemble Machine Learning"
  • Artifact for the paper - "Decomposing a Recurrent Neural Network into Modules for Enabling Reusability and Replacement"
  • Artifact for "Triggers for Reactive Synthesis Specifications"
  • Artifact for "Usability-Oriented Design of Liquid Types for Java"
  • Artifact for "Using Reactive Synthesis: An End-to-End Exploratory Case Study"
  • Artifact for "Which of My Assumptions are Unnecessary for Realizability and Why Should I Care?"
  • Artifact: Generating Realistic and Diverse Tests for LiDAR-Based Perception Systems
  • Artifact: Measuring and Mitigating Gaps in Structural Testing
  • Artifact of "Impact of Code Language Models on Automated Program Repair"
  • Artifact of "KNOD: Domain Knowledge Distilled Tree Decoder for Automated Program Repair"
  • Artifact of "PExReport: Automatic Creation of Pruned Executable Cross-Project Failure Reports"
  • Artifacts for "AidUI: Toward Automated Recognition of Dark Patterns in User Interfaces"
  • Artifacts for Keyword Extraction From Specification Documents for Planning Security Mechanisms
  • Artifacts Package for Article 'AChecker: Statically Detecting Smart Contract Access Control Vulnerabilities'
  • Artifact: UpCy - Safely Updating Outdated Dependencies
  • A StackExchange Dataset of Developer Questions Related to Checked-in Secrets in Software Artifacts
  • BFTDetector: Automatic Detection of Business Flow Tampering for Digital Content Service
  • CC: Causality-Aware Coverage Criterion for Deep Neural Networks
  • Chronos: Time-Aware Zero-Shot Identification of Libraries from Vulnerability Reports
  • Code Review of Build System Specifications: Prevalence, Purposes, Patterns, and Perceptions (Replication Package)
  • Compiling Parallel Symbolic Execution with Continuations
  • Concrat: An Automatic C-to-Rust Lock API Translator for Concurrent Programs
  • Coverage Guided Fault Injection for Cloud Systems
  • Demystifying Issues, Challenges, and Solutions for Multilingual Software Development
  • Detecting Dialog-Related Keyboard Navigation Failures in Web Applications
  • Detecting Isolation Bugs via Transaction Oracle Construction
  • Detecting JVM JIT Compiler Bugs via Exploring Two-Dimensional Input Spaces
  • Diver: Oracle-Guided SMT Solver Testing with Unrestricted Random Mutations
  • DLInfer: Deep Learning with Static Slicing for Python Type Inference
  • DoppelTest - Doppelganger Test Generation for Revealing Bugs in Autonomous Driving Software
  • DRFuzz: A Regression Fuzzing Framework for Deep Learning Systems
  • Efficient Requirements Analysis with QAssist: An AI-Powered Question Answering Assistant
  • Explaining Software Bugs Leveraging Code Structures in Neural Machine Translation
  • FedDebug: Systematic Debugging for Federated Learning Applications
  • Improving API Knowledge Discovery with ML: A Case Study of Comparable API Methods
  • Information-Theoretic Testing and Debugging of Fairness Defects in Deep Neural Networks
  • Learning Graph-based Code Representations for Source-level Functional Similarity Detection
  • Learning Seed-Adaptive Mutation Strategies for Greybox Fuzzing
  • Learning to Boost Disjunctive Static Bug-Finders
  • MorphQ: Metamorphic Testing of the Qiskit Quantum Computing Platform
  • On Privacy Weaknesses and Vulnerabilities in Software Systems
  • On the Reproducibility of Software Defect Datasets
  • Operand-Variation-Oriented Differential Analysis for Fuzzing Binding Calls in PDF Readers
  • (Partial) Bad Snakes: Understanding and Improving Python Package Index Malware Scanning (accepted at ICSE 2023)
  • PYEVOLVE: Automating Frequent Code Changes in Python ML Systems
  • Reachable Coverage: Estimating Saturation in Fuzzing - Artifacts
  • Replication package for Revisiting Learning-based Commit Message Generation
  • Replication Package of the ICSE 2023 Paper Entitled "Fairify: Fairness Verification of Neural Networks"
  • Research Artifact: Designing for Cognitive Diversity: Improving the GitHub Experience
  • Research Artifact of "SmallRace: Static Race Detection for Dynamic Languages - A Case on Smalltalk"
  • Responsibility in Context: On Applicability of Slicing in Semantic Regression Analysis
  • Revisiting Neuron Coverage for DNN Testing: A Layer-Wise and Distribution-Aware Criterion
  • Safe low-level code without overhead is practical
  • SecBench.js: An Executable Security Benchmark Suite for Server-Side JavaScript
  • Sibyl Artifact: Improving Software Engineering Tools with SMT Selection
  • Socio-Technical Anti-Patterns in Building ML-Enabled Software (Supplementary Material)
  • Strategies, Benefits and Challenges of App Store-inspired Requirements Elicitation - Artifact
  • Testing Database Engines via Query Plan Guidance
  • Testing Database Systems via Differential Query Execution
  • Test Selection for Unified Regression Testing
  • The Smelly Eight: An Empirical Study on the Prevalence of Code Smells in Quantum Computing - Artifact
  • Twins or False Friends? A Study on Energy Consumption and Performance of Configurable Software (Replication Package)
  • Understanding the Threats of Upstream Vulnerabilities to Downstream Projects in the Maven Ecosystem
  • Validating SMT Solvers via Skeleton Enumeration Empowered by Historical Bug-Triggering Input
  • ViolationTracker: Building Precise Histories for Static Analysis Violations
  • VULGEN: Realistic Vulnerability Generation Via Pattern Mining and Deep Learning
  • When to Say What? Learning to Find Inconsistencies in Condition-Message Statements
