The ACM SIGSOFT International Symposium on Software Testing and Analysis (ISSTA) is the leading research symposium on software testing and analysis, bringing together academics, industrial researchers, and practitioners to exchange new ideas, problems, and experience on how to analyze and test software systems.

New for ISSTA 2023: Dual submission deadlines

ISSTA 2023 will feature two submission deadlines: an early submission deadline (November 10, 2022) and a regular submission deadline (February 16, 2023). You can choose to submit at either deadline, but only papers submitted to the early deadline may receive a chance to submit a major revision of the initial submission to the regular deadline, addressing the reviewer comments. Papers submitted to the regular deadline will be decided using only the traditional paper outcomes of accept or reject.

Accepted Papers

  • 1dFuzz: Reproduce 1-day Vulnerabilities with Directed Differential Fuzzing
  • A Bayesian Framework for Automated Debugging
  • ACETest: Automated Constraint Extraction for Testing Deep Learning Operators
  • A Comprehensive Study on Quality Assurance Tools for Java
  • AGORA: Automated Generation of Test Oracles for REST APIs
  • Alligator In Vest: A Practical Failure-Diagnosis Framework via Arm Hardware Features
  • An Empirical Study of Functional Bugs in Android Apps
  • An Empirical Study on Concurrency Bugs in Interrupt-driven Embedded Software
  • An Empirical Study on the Effects of Obfuscation on Static Machine Learning-based Malicious JavaScript Detectors
  • API2vec: Learning Representations of API Sequences for Malware Detection
  • Applying and Extending the Delta Debugging Algorithm for Elevator Dispatching Algorithms (Experience Paper)
  • A Tale of Two Approximations: Tightening Over-Approximation for DNN Robustness Verification via Under-Approximation
  • Automated Generation of Security-Centric Descriptions for Smart Contract Bytecode
  • Automated Program Repair from Fuzzing Perspective
  • Automatically Reproducing Android Bug Reports Using Natural Language Processing and Reinforcement Learning
  • Back Deduction based Testing for Word Sense Disambiguation Ability of Machine Translation Systems
  • BehAVExplor: Behavior Diversity Guided Testing for Autonomous Driving Systems
  • Beware of the Unexpected: Bimodal Taint Analysis
  • Beyond “Protected” and “Private”: An Empirical Security Analysis of Custom Function Modifiers in Smart Contracts
  • Building Critical Testing Scenarios for Autonomous Driving from Real Accidents
  • Catamaran: Low-Overhead Memory Safety Enforcement via Parallel Acceleration
  • CGuard: Scalable and Precise Object Bounds Protection for C
  • CILIATE: Towards Fairer Class-based Incremental Learning by Dataset and Training Refinement
  • CodeGrid: A Grid Representation of Code
  • CODEP: Grammatical Seq2Seq Model for General-Purpose Code Generation
  • COME: Commit Message Generation with Modification Embedding
  • Concept-based Automated Grading of CS-1 Programming Assignments
  • CONCORD: Clone-aware Contrastive Learning for Source Code
  • ConfFix: Repairing Configuration Compatibility Issues in Android Apps
  • CoopHance: Cooperative Enhancement for Robustness of Deep Learning Systems
  • CydiOS: a model-based testing framework for iOS apps
  • Data Constraint Mining for Automatic Reconciliation Scripts Generation
  • DDLDroid: Efficiently Detecting Data Loss Issues in Android Apps
  • DeepAtash: Focused Test Generation for Deep Learning Systems
  • Definition and Detection of Defects in NFT Smart Contracts
  • DeFiTainter: Detecting Price Manipulation Vulnerabilities in DeFi Protocols
  • Dependency-Aware Metamorphic Testing of Datalog Engines
  • Detecting Condition-related Bugs with Control Flow Graph Neural Network
  • Detecting State Inconsistency Bugs in DApps via On-Chain Transaction Replay and Fuzzing
  • Detecting Vulnerabilities in Linux-based Embedded Firmware with SSE-based On-demand Alias Analysis
  • DeUEDroid: Detecting Underground Economy Apps Based on UTG Similarity
  • DyCL: Dynamic Neural Network Compilation Via Program Rewriting and Graph Optimization
  • Enhancing REST API Testing with NLP Techniques
  • Eunomia: Enabling User-specified Fine-Grained Search in Symbolically Executing WebAssembly Binaries
  • Exploring Missed Optimizations in WebAssembly Optimizers
  • Extracting Inline Tests from Unit Tests
  • FairRec: Fairness Testing for Deep Recommender Systems
  • Finding Short Slow Inputs Faster with Grammar-Based Search
  • Fine-grained Code Clone Detection with Block-based Splitting of Abstract Syntax Tree
  • Fuzzing Deep Learning Compilers with HirGen
  • Fuzzing Embedded Systems Using Debug Interfaces
  • GDsmith: Detecting Bugs in Cypher Graph Database Engines
  • GenCoG: A DSL-Based Approach to Generating Computation Graphs for TVM Testing
  • GrayC: Greybox Fuzzing of Compilers and Analysers for C
  • Green Fuzzer Benchmarking
  • Green Fuzzing: A Saturation-based Stopping Criterion using Vulnerability Prediction
  • Guided Retraining to Enhance the Detection of Difficult Android Malware
  • Guiding Greybox Fuzzing using Mutation Testing
  • How Effective are Neural Networks for Fixing Security Vulnerabilities
  • Hybrid Inlining: A Framework for Compositional and Context-Sensitive Static Analysis
  • ICICLE: A Re-designed Emulator for Grey-Box Firmware Fuzzing
  • Improving Binary Code Similarity Transformer Models by Semantics-driven Instruction Deemphasis
  • Improving Bit-Blasting for Nonlinear Integer Constraints
  • Improving Spectrum-Based Localization of Multiple Faults by Iterative Test Suite Reduction
  • In Defense of Simple Techniques for Neural Network Test Case Selection
  • Interpreters for GNN-based Vulnerability Detection: Are We There Yet?
  • iSyn: Semi-Automated Smart Contract Synthesis from Legal Financial Agreements
  • ItyFuzz: Snapshot-Based Fuzzer for Smart Contract
  • June: A Type Testability Transformation for Improved ATG Performance
  • Large Language Models are Zero-Shot Fuzzers: Fuzzing Deep-Learning Libraries via Large Language Models
  • Latent Imitator: Generating Natural Individual Discriminatory Instances for Black-Box Fairness Testing
  • LiResolver: License Incompatibility Resolution for Open Source Software
  • Loop Invariant Inference through SMT Solving Enhanced Reinforcement Learning
  • ModelObfuscator: Obfuscating Model Information to Protect Deployed ML-based Systems
  • More Precise Regression Test Selection via Reasoning about Semantics-Modifying Changes
  • NodeRT: Detecting Races in Node.js Applications Practically
  • OCFI: Make Function Entry Identification Hard Again
  • ωTest: WebView-Oriented Testing for Android Applications
  • Pattern-based Peephole Optimizations with Java JIT Tests
  • PhysCov: Physical Test Coverage for Autonomous Vehicles
  • Precise and Efficient Patch Presence Test for Android Applications against Code Obfuscation
  • Project X: Testing GPU Memory Consistency At Large (Experience Paper)
  • Quantitative Policy Repair for Access Control on the Cloud
  • Rare Path Guided Fuzzing
  • Re2BERT: A Two-stage Pre-trained Framework for Automatic Rename Refactoring
  • Reducing the Memory Footprint of IFDS-based Data-Flow Analyses Using Fine-Grained Garbage Collection
  • ROME: Testing Image Captioning Systems via Recursive Object Melting
  • SBDT: Search-based Differential Testing of Certificate Parsers in SSL/TLS Implementations
  • Security Checking of Trigger-Action-Programming Smart Home Integrations
  • Semantic-Based Neural Network Repair
  • Silent Compiler Bug De-duplication via Three-Dimensional Analysis
  • Simulation-based Validation for Autonomous Driving Systems
  • SlipCover: Near Zero-Overhead Code Coverage for Python
  • SmartState: Detecting State-reverting Vulnerabilities in Smart Contracts via Fine-grained State-dependency Analysis
  • Splendor: Static Detection of Stored XSS in Modern Web Applications
  • Synthesizing Speech Test Cases with Text-To-Speech? An Empirical Study on the False Alarms in Automated Speech Recognition Testing
  • Systematically Producing Test-Orders to Detect Order-Dependent Flaky Tests
  • Systematic Testing of the Data-Poisoning Robustness of KNN
  • Tai-e: A Developer-Friendly Static Analysis Framework for Java by Harnessing the Good Designs of Classics
  • Testing Automated Driving Systems by Breaking Many Laws Efficiently
  • Testing Graph Database Engines via Query Partitioning
  • Testing the Compiler for a New-born Programming Language: An Industrial Case Study (Experience Paper)
  • That's a Tough Call: Studying the Challenges of Call Graph Construction for WebAssembly
  • Third-party Library Dependency for Large-scale SCA in the C/C++ Ecosystem: How Far Are We?
  • To Kill a Mutant: An Empirical Study of Mutation Testing Kills
  • Toward Automated Detecting Unanticipated Price Feed in Smart Contract
  • Towards Efficient Fine-tuning of Pre-trained Code Models: An Experimental Study and Beyond
  • Towards More Realistic Evaluation for Neural Test Oracle Generation
  • Transforming Test Suites Into Croissants
  • Type Batched Program Reduction
  • Understanding and Tackling Label Errors in Deep Learning-based Vulnerability Detection (Experience Paper)
  • Understanding Breaking Changes in the Wild
  • Validating Multimedia Content Moderation Software via Semantic Fusion
  • Vectorizing Program Ingredients for Better JVM Testing
  • Virtual Reality (VR) Automated Testing in the Wild: a Case Study on Unity-Based VR Applications
  • What You See Is What You Get? It Is Not the Case! Detecting Misleading Icons for Mobile Applications
  • Who Judges the Judge: An Empirical Study on Online Judge Tests

Call for Papers

ISSTA invites three kinds of submissions. The clear majority of submissions are expected to be “Research Papers”, but submissions that best fit the description of “Experience Papers” or “Replicability Studies” should be submitted as such.

Research Papers

Authors are invited to submit research papers describing original contributions in testing or analysis of computer software. Papers describing original theoretical or empirical research, new techniques, methods for emerging systems, in-depth case studies, infrastructure for testing and analysis, or tools are welcome.

Experience Papers

Authors are invited to submit experience papers describing a significant experience in applying software testing and analysis methods or tools. Such papers should carefully identify and discuss important lessons learned, so that other researchers and/or practitioners can benefit from the experience. Of special interest are experience papers that report on industrial applications of software testing and analysis methods or tools.

Replicability Studies

ISSTA would like to encourage researchers to replicate results from previous papers. A replicability study must go beyond simply re-implementing an algorithm and/or re-running the artifacts provided by the original paper. It should at the very least apply the approach to new, significantly broadened inputs. Particularly, replicability studies are encouraged to target techniques that previously were evaluated only on proprietary subject programs or inputs. A replicability study should clearly report on results that the authors were able to replicate as well as on aspects of the work that were not replicable. In the latter case, authors are encouraged to make an effort to communicate or collaborate with the original paper’s authors to determine the cause for any observed discrepancies and, if possible, address them (e.g., through minor implementation changes). We explicitly encourage authors to not focus on a single paper/artifact only, but instead to perform a comparative experiment of multiple related approaches.

In particular, replicability studies should follow the ACM guidelines on replicability (different team, different experimental setup): The measurement can be obtained with stated precision by a different team, a different measuring system, in a different location on multiple trials. For computational experiments, this means that an independent group can obtain the same result using artifacts which they develop completely independently. This means that it is also insufficient to focus on reproducibility (i.e., different team, same experimental setup) alone. Replicability Studies will be evaluated according to the following standards:

  • Depth and breadth of experiments
  • Clarity of writing
  • Appropriateness of conclusions
  • Amount of useful, actionable insights
  • Availability of artifacts

We expect replicability studies to clearly point out the artifacts the study is built on, and to submit those artifacts to artifact evaluation (see below). Artifacts evaluated positively will be eligible to obtain the prestigious Results Reproduced badge.

Dual Submission Deadlines and Major Revisions

ISSTA 2023 features two submission deadlines: An early submission deadline (November 10, 2022) and a regular submission deadline (February 16, 2023). The instructions in this call apply to both deadlines. You can choose to submit at either deadline, but only papers submitted to the first deadline may receive a chance to submit a major revision of the initial submission to the regular deadline, addressing the reviewer comments. Papers submitted to the regular deadline will be decided using only the traditional paper outcomes of accept or reject.

Submission Guidelines

Submissions must be original and should not have been published previously or be under consideration for publication while being evaluated for ISSTA. Authors are required to adhere to the ACM Policy and Procedures on Plagiarism and the ACM Policy on Prior Publication and Simultaneous Submissions.

Research Papers, Experience Papers, and Replicability Studies should be at most 10 pages in length, with at most 2 additional pages for references. Experience papers and replicability studies should clearly specify their category in the paper title upon submission, e.g., “XXX (Experience Paper)”. All authors should use the official “ACM Master article template”, which can be obtained from the ACM Proceedings Template pages. LaTeX users should use the “sigconf” option, as well as the “review” option (to produce line numbers for easy reference by the reviewers) and the “anonymous” option (to omit author names). To that end, the following LaTeX code can be placed at the start of the LaTeX document:

\documentclass[sigconf,review,anonymous]{acmart}
\acmConference[ISSTA 2023]{ACM SIGSOFT International Symposium on Software Testing and Analysis}{17-21 July, 2023}{Seattle, USA}
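
For reference, a minimal document skeleton built around these two lines might look as follows. This is only an illustrative sketch: the title, author placeholder, and abstract text are dummies, and authors should consult the ACM Proceedings Template pages for the exact metadata commands their paper needs.

\documentclass[sigconf,review,anonymous]{acmart}
\acmConference[ISSTA 2023]{ACM SIGSOFT International Symposium on Software Testing and Analysis}{17-21 July, 2023}{Seattle, USA}

\begin{document}

% The "anonymous" class option suppresses author names in the rendered PDF;
% \author commands should still be present for the eventual camera-ready version.
\title{Your Paper Title (Experience Paper)} % category suffix only if applicable
\author{Anonymous Author(s)}

\begin{abstract}
  A short summary of the contribution goes here.
\end{abstract}

\maketitle

% Paper body (at most 10 pages, plus up to 2 pages of references).

\end{document}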

Submit your papers via the HotCRP ISSTA 2023 submission website.

Each submission will be reviewed by at least three members of the program committee. Authors will have an opportunity to respond to reviews during a rebuttal period. Submissions will be evaluated on the basis of originality, importance of contribution, soundness, evaluation, quality of presentation and appropriate comparison to related work. Some papers may have more than three reviews, as the PC chair may solicit additional reviews based on factors such as reviewer expertise and strong disagreement between reviewers. The authors will have a chance to read the additional reviews and respond to them during the additional short response period. The program committee as a whole will make final decisions about which submissions to accept for presentation at the conference.

Double-blind Reviewing

ISSTA 2023 will conduct double-blind reviewing. Submissions should not reveal the identity of the authors in any way. Authors should leave out author names and affiliations from the body of their submission. They should also ensure that any citations to related work by themselves are written in third person, that is, “the prior work of XYZ” as opposed to “our prior work”.

Double-blind reviewing should not hinder the usual communication of results. However, during the review period, please do not broadcast the work on social media. Also, to the extent possible, consider not publishing preprints of your work during or shortly before the review period. If the authors have already published a version of their paper on arXiv or similar sites, we request that they use a different title for the submission, so that author names are not inadvertently disclosed, e.g., via a notification on Google Scholar.

Authors with further questions on double-blind reviewing are encouraged to contact the Program Chair by email.

Open Science Policy

ISSTA has adopted an open science policy. Openness in science is key to fostering scientific progress via transparency, reproducibility, and replicability. The steering principle is that all research results should be accessible to the public, if possible, and that empirical studies should be reproducible. In particular, we actively support the adoption of open data and open source principles and encourage all contributing authors to disclose (anonymized and curated) data to increase reproducibility and replicability.

Upon submission, authors are asked to make their data available to the program committee (via upload of anonymized supplemental material or a link to an anonymized private or public repository) or to comment on why this is not possible or desirable. While sharing such a repository is not mandatory for submission or acceptance, this information will be passed to the program committee to inform its decision. Furthermore, authors are asked to indicate whether they intend to make their data publicly available upon acceptance.

Authors Take Note:

The official publication date is the date the proceedings are made available in the ACM Digital Library. This date may be up to two weeks prior to the first day of your conference. The official publication date affects the deadline for any patent filings related to published work.