Accepted Papers

  • Abstraction-Aware Inference of Metamorphic Relations
  • A Critical Review of Common Log Data Sets Used for Evaluation of Sequence-based Anomaly Detection Techniques
  • Active Monitoring Mechanism for Control-based Self-Adaptive Systems
  • Adapting Multi-objectivized Software Configuration Tuning
  • A Deep Dive into Large Language Models for Automated Bug Localization and Repair
  • AI-assisted Code Authoring at Scale: Fine-tuning, deploying, and mixed methods evaluation
  • A Miss Is as Good as A Mile: Metamorphic Testing for Deep Learning Operators
  • Analyzing Quantum Programs with LintQ: A Static Analysis Framework for Qiskit
  • An Analysis of the Costs and Benefits of Autocomplete in IDEs
  • An empirical study on code review activity prediction in practice
  • An Empirical Study on Focal Methods in Deep-Learning-Based Approaches for Assertion Generation
  • A Quantitative and Qualitative Evaluation of LLM-based Explainable Fault Localization
  • Are Human Rules Necessary? Generating Reusable APIs with CoT Reasoning and In-context Learning
  • AROMA: Automatic Reproduction of Maven Artifacts
  • A Transferability Study of Interpolation-Based Hardware Model Checking to Software Verification
  • A Weak Supervision-Based Approach to Improve Chatbots for Code Repositories
  • BARO: Robust Root Cause Analysis for Microservices via Multivariate Bayesian Online Change Point Detection
  • Beyond Code Generation: An Observational Study of ChatGPT Usage in Software Engineering Practice
  • Bin2Summary: Beyond Function Name Prediction in Stripped Binaries with Functionality-specific Code Embeddings
  • Bloat beneath Python's Scales: A Fine-Grained Inter-Project Dependency Analysis
  • Bounding Random Test Set Size with Computational Learning Theory
  • BRF: Fuzzing the eBPF runtime
  • Can GPT-4 Replicate Empirical Software Engineering Research?
  • Can Large Language Models Transform Natural Language Intent into Formal Method Postconditions?
  • CC2Vec: Combining Typed Tokens with Contrastive Learning for Effective Code Clone Detection
  • ChangeRCA: Finding Root Causes from Software Changes in Large Online Systems
  • Characterizing Python Library Migrations
  • ClarifyGPT: A Framework for Enhancing LLM-based Code Generation via Requirements Clarification
  • CodeArt: Better Code Models by Attention Regularization When Symbols Are Lacking
  • Code-Aware Prompting: A study of Coverage guided Test Generation in Regression Setting using LLM
  • CodePlan: Repository-level Coding using LLMs and Planning
  • Component Security Ten Years Later: An Empirical Study of Cross-Layer Threats in Real-World Mobile Applications
  • CORE: Resolving Code Quality Issues Using LLMs
  • COSTELLO: Contrastive Testing for Embedding-based Large Language Model as a Service Embeddings
  • CrossCert: A Cross-Checking Detection Approach to Patch Robustness Certification for Deep Learning Models
  • Cut to the Chase: An Error-Oriented Approach to Detect Error-Handling Bugs
  • DAInfer: Inferring API Aliasing Specifications from Library Documentation via Neurosymbolic Optimization
  • DeciX: Explain Deep Learning Based Code Generation Applications
  • Decomposing Software Verification Using Distributed Summary Synthesis
  • Demystifying Invariant Effectiveness for Securing Smart Contracts
  • Dependency-Induced Waste in Continuous Integration: An Empirical Study on NPM Dependencies
  • DeSQL: Interactive Debugging of SQL in Data-Intensive Scalable Computing
  • DiffCoder: Enhancing Large Language Model on API Invocation via Analogical Code Exercises
  • Do Code Generation Models Think Like Us? - A Study of Attention Alignment between Large Language Models and Human Programmers
  • Do Words Have Power? Understanding and Fostering Civility in Code Review Discussion
  • DTD: Comprehensive and Scalable Testing for Debuggers
  • DyPyBench: A Benchmark of Executable Python Software
  • Effective Teaching through Code Reviews: Patterns and Anti-Patterns
  • Efficiently Detecting Reentrancy Vulnerabilities in Complex Smart Contracts
  • Enhancing Code Understanding for Impact Analysis by Combining Transformers and Program Dependence Graphs
  • Enhancing Function Name Prediction using Votes-Based Name Tokenization and Multi-Task Learning
  • Evaluating and Improving ChatGPT for Unit Test Generation
  • Evaluating Directed Fuzzers: Are We Heading in the Right Direction?
  • Evolutionary Multi-Objective Optimization for Contextual Adversarial Example Generation
  • Exploring and Unleashing the Power of Large Language Models in Automated Code Translation
  • EyeTrans: Merging Human and Machine Attention for Neural Code Summarization
  • Fast Graph Simplification for Path-Sensitive Typestate Analysis through Tempo-Spatial Multi-Point Slicing
  • FeatMaker: Automated Feature Engineering for Search Strategy of Symbolic Execution
  • Finding and Understanding Defects in Static Analyzers by Constructing Automated Oracles
  • Generative AI for Pull Request Descriptions: Adoption, Impact, and Developer Interventions
  • Glitch Tokens in Large Language Models: Categorization Taxonomy and Effective Detection
  • Go Static: Contextualized Logging Statement Generation
  • Harnessing Neuron Stability to Improve DNN Verification
  • How does Simulation-based Testing for Self-driving Cars match Human Perception?
  • How to Gain Commit Rights in Modern Top Open Source Communities?
  • Improving the Learning of Code Review Successive Tasks with Cross-Task Knowledge Distillation
  • Investigating Documented Privacy Changes in Android OS
  • IRCoCo: Immediate Rewards-Guided Deep Reinforcement Learning for Code Completion
  • Java JIT Testing with Template Extraction
  • JIT-Smart: A Multi-Task Learning Framework for Just-In-Time Defect Prediction and Localization
  • Learning to Detect and Localize Multilingual Bugs
  • Less Cybersickness, Please: Demystifying and Detecting Stereoscopic Visual Inconsistencies in Virtual Reality Applications
  • LILAC: Log Parsing using LLMs with Adaptive Parsing Cache
  • LogSD: Detecting Anomalies from System Logs through Self-supervised Learning and Frequency-based Masking
  • Maximizing Patch Coverage for Testing of Highly-Configurable Software without Exploding Build Times
  • Metamorphic Testing of Secure Multi-Party Computation (MPC) Compilers
  • Mining Action Rules for Defect Reduction Planning
  • MirrorFair: Fixing Fairness Bugs in Machine Learning Software via Counterfactual Predictions
  • Misconfiguration Software Testing for Failure Emergence in Autonomous Driving Systems
  • Mobile Bug Report Reproduction via Global Search on the App UI Model
  • MTAS: A Reference-Free Approach for Evaluating Abstractive Summarization Systems
  • Natural Is The Best: Model-Agnostic Code Simplification for Pre-trained Large Language Models
  • Natural Symbolic Execution-based Testing for Big Data Analytics
  • Only diff is Not Enough: Generating Commit Messages Leveraging Reasoning and Action of Large Language Model
  • On Reducing Undesirable Behavior in Deep-Reinforcement-Learning-Based Software
  • On the Contents and Utility of IoT Cybersecurity Guidelines
  • Partial Solution Based Constraint Solving Cache in Symbolic Execution
  • PBE-based Selective Abstraction and Refinement for Efficient Property Falsification of Embedded Software
  • PPM: Automated Generation of Diverse Programming Problems for Benchmarking Code Generation Models
  • Predicting Code Comprehension: A Novel Approach to Align Human Gaze with Code Using Deep Neural Networks
  • Predicting Configuration Performance in Multiple Environments with Sequential Meta-Learning
  • Predicting Failures of Autoscaling Distributed Applications
  • Predictive Program Slicing via Execution Knowledge-Guided Dynamic Dependence Learning
  • ProveNFix: Temporal Property guided Program Repair
  • PyRadar: Towards Automatically Retrieving and Validating Source Code Repository Information for PyPI Packages
  • R2I: A Relative Readability Metric for Decompiled Code
  • RavenBuild: Context, Relevance, and Dependency Aware Build Outcome Prediction
  • Refactoring to Pythonic Idioms: A Hybrid Knowledge-Driven Approach Leveraging Large Language Models
  • Revealing Software Development Work Patterns with PR-Issue Graph Topologies
  • Rocks Coding, Not Development–A Human-Centric, Experimental Evaluation of LLM-Supported SE Tasks
  • Semi-Supervised Crowdsourced Test Report Clustering via Screenshot-Text Binding Rules
  • Shadows in the Interface: A Comprehensive Study on Dark Patterns
  • Sharing Software-Evolution Datasets: Practices, Challenges, and Recommendations
  • SimLLM: Measuring Semantic Similarity in Code Summaries Using a Large Language Model-Based Approach
  • SmartAxe: Detecting Cross-Chain Vulnerabilities in Bridge Smart Contracts via Fine-Grained Static Analysis
  • State Reconciliation Defects in Infrastructure as Code
  • Static Application Security Testing (SAST) Tools for Smart Contracts: How Far Are We?
  • Syntax Is All You Need: A Universal-Language Approach to Mutant Generation
  • "The Law Doesn’t Work Like a Computer": Exploring Software Licensing Issues Faced by Legal Practitioners
  • TIPS: Tracking Integer-Pointer Value Flows for C++ Member Function Pointers
  • Towards AI-Assisted Synthesis of Verified Dafny Methods
  • Towards Better Graph Neural Network-based Fault Localization Through Enhanced Code Representation
  • Towards Effective Multi-Hunk Bug Repair: Detecting, Creating, Evaluating, and Understanding Indivisible Bugs
  • Towards Efficient Build Ordering for Incremental Builds with Multiple Configurations
  • Towards Efficient Verification of Constant-Time Cryptographic Implementations
  • TraStrainer: Adaptive Sampling for Distributed Traces with System Runtime State
  • Understanding and Detecting Annotation-induced Faults of Static Analyzers
  • Understanding Developers' Discussions and Perceptions on Non-Functional Requirements: The Case of the Spring Ecosystem
  • Understanding the Impact of APIs Behavioral Breaking Changes on Client Applications
  • Unprecedented Code Change Automation: The Fusion of LLMs and Transformation by Example
  • Your Code Secret Belongs to Me: Neural Code Completion Tools Can Memorize Hard-coded Credentials

Call for Papers

We invite high-quality submissions, from both industry and academia, describing original and unpublished results of theoretical, empirical, conceptual, and experimental software engineering research.

Contributions should describe innovative and significant original research. Papers describing groundbreaking approaches to emerging problems are also welcome, as well as replication papers. Submissions that facilitate reproducibility by using available datasets or making the described tools and datasets publicly available are especially encouraged. For a list of specific topics of interest, please see the end of this call.

NEW THIS YEAR #1: The inaugural issue of the Proceedings of the ACM on Software Engineering (PACMSE) seeks contributions through submissions to this track; approval was granted by ACM in late July. Accepted papers will be invited for presentation at FSE 2024. PACMSE will be the only proceedings in which accepted research-track papers are published. Please check the FAQ for details.

NEW THIS YEAR #2: The steering committee has decided that the conference name will be changed to ACM International Conference on the Foundations of Software Engineering (FSE). In this transition year, we may still use the old and new names interchangeably on the conference webpage.

NEW THIS YEAR #3: Based on coordination among the FSE, ICSE, and ASE steering committees, the FSE conference and submission dates have been moved a few months earlier. The intention is for this schedule to remain stable in the years ahead, so that the conference and submission deadlines of the three large general software engineering conferences are spread out throughout the year. This change, however, means that the submission deadline of FSE 2024 (late September 2023) falls before ESEC/FSE 2023 is held (December 2023).

NEW THIS YEAR #4: Submissions must follow the "ACM Policy on Authorship" released April 20, 2023, which contains policy regarding the use of Generative AI tools and technologies, such as ChatGPT. Please also check the ACM FAQ which describes in what situations generative AI tools can be used (with or without acknowledgement).

Tracks

This CFP refers to the Research Track of FSE. For the remaining tracks, please check the specific calls on the website.

HOW TO SUBMIT

At the time of submission, each paper may have no more than 18 pages for all text and figures, plus 4 pages for references, using the following templates: LaTeX or Word (Mac or Windows). Authors using LaTeX should use the sample-acmsmall-conf.tex file (found in the samples folder of the acmart package) with the acmsmall option; note that this is a single-column page layout. We also strongly encourage the use of the review, screen, and anonymous options. In sum, you want to use: \documentclass[acmsmall,screen,review,anonymous]{acmart}. Papers may be submitted using numeric citations, but final versions of accepted papers must use the author-year format for citations. Submissions that do not comply with the above instructions will be desk-rejected without review. Papers must be submitted electronically through the FSE 2024 submission site:

https://fse2024.hotcrp.com
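
For authors working in LaTeX, the skeleton below sketches a source file consistent with these instructions. It is an illustration only, not the official template (start from sample-acmsmall-conf.tex where possible); the author name and institution are placeholders:

    %% Minimal sketch of an FSE 2024 submission (illustrative, not official).
    \documentclass[acmsmall,screen,review,anonymous]{acmart}

    %% Numeric citations are acceptable at submission time; the final
    %% version must switch to author-year citations, e.g. via:
    %% \citestyle{acmauthoryear}

    \begin{document}
    \title{Your Paper Title}

    %% The `anonymous' option suppresses author information in the rendered
    %% PDF, so the \author block may stay in the source.
    \author{Jane Doe}
    \affiliation{\institution{Example University}\country{Example Country}}

    \begin{abstract}
    One paragraph summarizing the contribution.
    \end{abstract}

    \maketitle

    \section{Introduction}
    %% Body text, figures, and tables go here, within the 18-page limit.

    \end{document}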

Each submission will be reviewed by at least three members of the program committee. The initial outcome of a review can be accept, reject, or major revision. When the initial outcome of the three reviews is major revision, authors will have an opportunity to address the reviewers’ requests during a 6-week major revision period. Such requests may include additional experiments or new analyses of existing results; major rewriting of algorithms and explanations; or clarifications, better scoping, and improved motivation. The revised submission must be accompanied by a response letter in which the authors explain how they addressed each concern expressed by the reviewers. The same reviewers who requested major revisions will then assess whether the revised submission satisfies their requests adequately.

Submissions will be evaluated on the basis of originality, importance of contribution, soundness, evaluation (if relevant), quality of presentation, and appropriate comparison to related work. Some papers may have more than three reviews, as PC chairs may solicit additional reviews based on factors such as reviewer expertise and strong disagreement between reviewers. The program committee as a whole will make final decisions about which submissions to accept for publication.

In addition to declaring the topics which are relevant for their submissions, authors will be asked to declare the research methods employed in their submissions. This will enable us to ensure reviewer expertise both for research methods and topics. For full definitions of the research methods, see the SIGSOFT Empirical Standards.

Double-Anonymous Review Process

In order to ensure the fairness of the reviewing process, the FSE 2024 Research Papers Track will employ a double-anonymous review process, where external reviewers do not know the identity of authors, and authors do not know the identity of external reviewers. The papers submitted must not reveal the authors’ identities in any way:

  • Authors should leave out author names and affiliations from the body of their submission.
  • Authors should ensure that any citation to related work by themselves is written in third person, that is, “the prior work of XYZ” as opposed to “our prior work”.
  • Authors should not include URLs to author-revealing sites (tools, datasets). Authors are still encouraged to follow open science principles and submit replication packages, see more details on the open science policy below.
  • Authors should anonymize author-revealing company names and instead provide the general characteristics of the organizations involved that are needed to understand the context of the paper.
  • Authors should ensure that paper acknowledgements do not reveal the origin of their work.
  • While authors have the right to upload preprints on arXiv or similar sites, they should avoid specifying that the manuscript was submitted to FSE 2024.
  • During review, authors should not publicly use the submission title.

The double-anonymous process used this year is “heavy”, i.e., the paper anonymity will be maintained during all reviewing and discussion periods. In case of major revision, authors must therefore maintain anonymity in their response letter and must provide no additional information that could be author-revealing.

To facilitate double-anonymous reviewing, we recommend that authors postpone publishing their submitted work on arXiv or similar sites until after notification. If the authors have already uploaded to arXiv or similar, they should avoid specifying that the manuscript was submitted to FSE 2024.

Authors with further questions on double-anonymous reviewing are encouraged to contact the program chairs by email. Papers that do not comply with the double-anonymous review process will be desk-rejected.

Submission Policies

The authors must follow the “ACM Policy on Authorship” released April 20, 2023 and its accompanying FAQ including the following points:

  • “Generative AI tools and technologies, such as ChatGPT, may not be listed as authors of an ACM published Work. The use of generative AI tools and technologies to create content is permitted but must be fully disclosed in the Work. For example, the authors could include the following statement in the Acknowledgements section of the Work: ChatGPT was utilized to generate sections of this Work, including text, tables, graphs, code, data, citations, etc. If you are uncertain about the need to disclose the use of a particular tool, err on the side of caution, and include a disclosure in the acknowledgements section of the Work.”

  • “If you are using generative AI software tools to edit and improve the quality of your existing text in much the same way you would use a typing assistant like Grammarly to improve spelling, grammar, punctuation, clarity, engagement or to use a basic word processing system to correct spelling or grammar, it is not necessary to disclose such usage of these tools in your Work.”

Please read the full policy and FAQ.

Papers submitted for consideration to FSE must not have been published elsewhere and must not be under review or submitted for review elsewhere during the reviewing period. Specifically, authors are required to adhere to the ACM Policy and Procedures on Plagiarism and the ACM Policy on Prior Publication and Simultaneous Submissions.

To prevent double submissions, the chairs might compare the submissions with related conferences that have overlapping review periods. The double submission restriction applies only to refereed journals and conferences, not to unrefereed forums (e.g. arXiv.org). To check for plagiarism issues, the chairs might use external plagiarism detection software.

All publications are subject to the ACM Author Representations policy.

By submitting your article to an ACM Publication, you are hereby acknowledging that you and your co-authors are subject to all ACM Publications Policies, including ACM’s new Publications Policy on Research Involving Human Participants and Subjects.

Alleged violations of any of the above policies will be reported to ACM for further investigation and may result in a full retraction of your paper, in addition to other potential penalties, as per the ACM Publications Policies.

Please ensure that you and your co-authors obtain an ORCID ID, so you can complete the publishing process if your paper is accepted. ACM has been involved in ORCID from the start and has recently made a commitment to collect ORCID IDs from all published authors. ACM is committed to improving author discoverability, ensuring proper attribution, and contributing to ongoing community efforts around name normalization; your ORCID ID will help in these efforts.

The authors of accepted papers are invited and strongly encouraged to attend the conference to present their work. Attendance at the event is not mandatory for publication. Authors also have the option of not presenting their work at the conference, in which case they do not need to register.

Important Dates

All dates are 23:59:59 AoE (UTC-12h)

  • Paper registration: Thursday, September 21, 2023 (to register a paper, paper title, abstract, author list, and some additional metadata are required; title and abstract must contain sufficient information for effective bidding; registrations containing empty or generic title and abstract may be dropped)
  • Full paper submission: Thursday, September 28, 2023
  • Author response: December 11-13, 2023 (3-day period)
  • Initial notification: Tuesday, January 23, 2024 (long discussion period due to year-end holidays)
  • Camera ready (directly accepted papers): Tuesday, February 20, 2024
  • Revised manuscript submissions (major revisions only): Tuesday, March 5, 2024
  • Final notification for major revisions: Tuesday, April 16, 2024
  • Camera ready (accepted major revision papers): Tuesday, May 14, 2024

The official publication date is the date the journal issue is made available in the ACM Digital Library. This date may be as early as July 1, 2024. The official publication date affects the deadline for any patent filings related to published work.

Open Science Policy

The research track of FSE has introduced an open science policy. Openness in science is key to fostering scientific progress via transparency, reproducibility, and replicability. The steering principle is that all research results should be accessible to the public, if possible, and that empirical studies should be reproducible. In particular, we actively support the adoption of open data and open source principles and encourage all contributing authors to disclose (anonymized and curated) data to increase reproducibility and replicability.

Upon submission to the research track, authors are asked to make a replication package available to the program committee (via upload of supplemental material or a link to a private or public repository) or to comment on why this is not possible or desirable. Furthermore, authors are asked to indicate whether they intend to make their data publicly available upon acceptance. We ask authors to provide a supporting statement on the availability of a replication package (or lack thereof) in their submitted papers in a section named Data Availability after the Conclusion section. Be careful that such statements continue to maintain author anonymity. For more details, see the FSE open science policy.
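
As an illustration, such a Data Availability statement might look like the following sketch; the repository URL is a placeholder, not a real package:

    \section*{Data Availability}
    %% Placeholder link: replace with your own anonymized repository.
    A replication package with our tool, scripts, and datasets is available
    at \url{https://anonymous.4open.science/r/replication-package-0000}.
    We intend to publish an archived, non-anonymized version of the package
    upon acceptance.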

Authors of accepted papers will be given an opportunity (and encouragement) to submit their data and tools to the separate FSE’24 artifact evaluation committee.

Topics of Interest

Topics of interest include, but are not limited to:

  • Artificial intelligence and machine learning for software engineering
  • Autonomic computing
  • Debugging and fault localization
  • Dependability, safety, and reliability
  • Distributed and collaborative software engineering
  • Embedded software, safety-critical systems, and cyber-physical systems
  • Empirical software engineering
  • Human and social aspects of software engineering
  • Human-computer interaction
  • Mining software repositories
  • Mobile development
  • Model checking
  • Model-driven engineering
  • Parallel, distributed, and concurrent systems
  • Performance engineering
  • Program analysis
  • Program comprehension
  • Program repair
  • Program synthesis
  • Programming languages
  • Recommendation systems
  • Requirements engineering
  • Search-based software engineering
  • Services, components, and cloud
  • Software architectures
  • Software engineering education
  • Software engineering for machine learning and artificial intelligence
  • Software evolution
  • Software processes
  • Software security
  • Software testing
  • Software traceability
  • Symbolic execution
  • Tools and environments

FAQ on Review Process: Major Revisions, Open Science Policy, Double-Anonymous Reviewing

PACMSE Proceedings

Q: What paper format shall we follow for FSE 2024?

A: Papers accepted by the technical track of FSE 2024 will be published in the inaugural journal issue of the Proceedings of the ACM on Software Engineering (PACMSE). Approval was granted by ACM in late July. Please check the Research Papers How to Submit section for details.

Q: How would the inaugural PACMSE journal affect FSE 2024?

A: FSE papers will be published in the inaugural PACMSE journal, following the recent practices of other communities such as PACMPL (PLDI, POPL, OOPSLA, etc.), PACMHCI, PACMMOD, and PACMNET.

Identity: FSE papers will be published in a dedicated issue of PACMSE, with FSE as the issue name. This means that FSE papers will keep their identity!

Paper format: The paper format will follow ACM’s requirements. This is a switch from the traditional FSE two-column format to the new PACMSE single-column format. However, the amount of content should remain more or less the same: FSE 2024’s 18-page limit in the single-column format maps roughly to the old 10-page limit in the two-column format.

Review process: FSE already has a major-revision cycle in 2023 and 2024, which maps neatly onto PACMSE’s requirements for two rounds of reviews, so there are no PACMSE-related changes here.

Conference presentations: FSE 2024’s move to PACMSE changes only how the proceedings are published. All accepted papers will still be guaranteed a presentation slot at the conference in the usual way.

Policy on Authorship (e.g., regarding ChatGPT)

Q: What is the policy on authorship, especially considering the use of Generative AI tools and technologies, such as ChatGPT?

A: Submissions must follow the “ACM Policy on Authorship” released April 20, 2023, which contains policy regarding the use of Generative AI tools and technologies, such as ChatGPT. Please also check the ACM FAQ which describes in what situations generative AI tools can be used (with or without acknowledgement).

Major Revision Process

Q: Why is FSE allowing major revisions?

A: SE conferences are currently forced to reject papers that include valuable material but would need major changes to become acceptable for conference presentation, because major revisions cannot be accommodated in the current review process. By supporting only a binary outcome, conferences force reviewers to decide between rejection and acceptance even in borderline cases that would be better judged after a round of major revision. This can cause additional reviewing burden for the community (the paper is resubmitted to another venue with new reviewers) and inconsistency for the authors (the new reviewers have different opinions). By allowing major revisions, we hope both to increase the acceptance rate of FSE and to reduce these problems with the reviewing process.

For Authors

Q: If my paper receives major revisions, what happens next?

A: The meta-review will clearly and explicitly list all major changes required by the reviewers to make the paper acceptable for publication. Authors of these papers are granted 6 weeks to implement the requested changes. In addition to the revised paper, authors are asked to submit a response letter that explains how each required change was implemented. If any change was not implemented, authors can explain why. The same reviewers will then review the revised paper and make their final (binary) decision. Authors can also choose to withdraw their submission if they wish.

Q: Will major revision become the default decision causing initial acceptance rates to drop?

A: This is not the intention: reviewers are instructed to accept all papers that would have been accepted when major revision was not an available outcome.

For Reviewers

Q: When shall I recommend major revision for a paper?

A: Major revision should not become the default choice for borderline papers. It should be used only if:

  • without major revisions the paper would be rejected, while a properly done major revision, which addresses the reviewers’ concerns, could make the paper acceptable for publication;
  • the requested changes are doable in 6 weeks and implementable within the page limit;
  • the requested changes are strictly necessary for paper acceptance (i.e., not just nice-to-have features);
  • the requested changes require recheck (i.e., reviewers cannot trust the authors to implement them directly in the camera ready).

Q: When shall I recommend rejection instead of major revision?

A: Rejection is a more appropriate outcome than major revision if:

  • the requested additions/changes are not implementable in 6 weeks;
  • the contribution is very narrow or not relevant to the SE audience, and it cannot be retargeted in 6 weeks;
  • the methodology is flawed and cannot be fixed in 6 weeks;
  • the results are unconvincing, the paper does not seem to improve the state of the art much, and new convincing results are unlikely to be available after 6 weeks of further experiments;
  • the customary benchmark used in the community was ignored and cannot be adopted and compared to in 6 weeks.

Q: When shall I recommend acceptance instead of major revision?

A: We do not want major revision to become the primary pathway for acceptance. We should continue to trust the authors to make minor changes to their submissions in the camera-ready version. Acceptance is preferable if:

  • the requested additions/changes are nice-to-have features, not mandatory for the acceptability of the work;
  • only minor improvements of the text are needed;
  • minor clarifications requested by the reviewers should be incorporated;
  • important but not critical references should be added and discussed;
  • the discussion of results could be improved, but the current one is already sufficient.

Q: What is the difference between major revision and shepherding?

A: Major revision is not shepherding. While shepherding typically focuses on important but minor changes, which can be specified in an operational way and can be checked quite easily and quickly by reviewers, major revisions require major changes (although doable in 6 weeks), which means the instructions for the authors cannot be completely operational and the check will need to go deeply into the new content delivered by the paper. Hence, while the expectation for shepherded papers is that most of them will be accepted once the requested changes are implemented, this is not necessarily the case with major revisions.

Q: Is there a quota of papers that can have major revision as outcome?

A: As there is no quota for the accepted papers, there is also no quota for major revisions. However, we expect that thanks to major revisions we will be able to eventually accept 10-15% more papers, while keeping the quality bar absolutely unchanged.

Q: What shall I write in the meta-review of a paper with major revision outcome?

A: With the possibility of a major revision outcome, meta-reviews become extremely important. The meta-review should clearly and explicitly list all major changes required by the reviewers to make the paper acceptable for publication. The meta-review should act as a contract between reviewers and authors, such that when all required changes are properly made, the paper is accepted. In this respect, the listed changes should be extremely clear, precise, and implementable.

Review Process

For Authors

Q: Can I withdraw my paper?

A: Yes, papers can be withdrawn at any time using HotCRP.

Q: Is appendix or other supplemental materials allowed?

A: The main submission file must follow the page limit. Any supplemental material, including appendices and replication packages, must be submitted separately under “Supplemental Material”. Program Committee members can review supplemental material but are not obligated to do so.

For Reviewers

Q: The authors have provided a URL to supplemental material. I would like to see the material but I worry they will snoop my IP address and learn my identity. What should I do?

A: Contact the Program Co-Chairs, who will download the material on your behalf and make it available to you.

Q: If I am assigned a paper for which I feel I am not an expert, how do I seek an outside review?

A: PC members should write their own reviews, not delegate them to someone else. If you feel an outside review is needed, please contact the Program Co-Chairs, especially since additional reviewers might have a different set of conflicts of interest.

Open Science Policy

Q: What is the FSE 2024 open science policy and how can I follow it?

A: Openness in science is key to fostering scientific progress via transparency, reproducibility, and replicability. Upon submission to the research track, authors are asked to:

  • make their data available to the program committee (via upload of supplemental material or a link to an anonymous repository) and provide instructions on how to access this data in the paper; or
  • include in the paper an explanation as to why this is not possible or desirable; and
  • indicate if they intend to make their data publicly available upon acceptance. This information should be provided in the submitted papers in a section named Data Availability after the Conclusion section. For more details, see the FSE open science policy.

Q: How can I upload supplementary material via the HotCRP site and make it anonymous for double-anonymous review?

A: To conform to the double-anonymous policy, please include an anonymized URL. Code and data repositories may be exported to remove version control history, scrubbed of names in comments and metadata, and anonymously uploaded to a sharing site. Instructions are provided in the FSE open science policy.

Double-Anonymous Reviewing (DAR)

Q: Why are you using double-anonymous reviewing?

A: Studies have shown that a reviewer’s attitude toward a submission may be affected, even unconsciously, by the identity of the authors.

Q: Do you really think DAR actually works? I suspect reviewers can often guess who the authors are anyway.

A: It is rare for authorship to be guessed correctly, even by expert reviewers, as detailed in this study.

For Authors

Q: What exactly do I have to do to anonymize my paper?

A: Your job is not to make your identity undiscoverable but simply to make it possible for reviewers to evaluate your submission without having to know who you are: omit authors’ names from your title page, and when you cite your own work, refer to it in the third person. Also, be sure not to include any acknowledgements that would give away your identity. You should also avoid revealing the institutional affiliation of authors.

Q: I would like to provide supplementary material for consideration, e.g., the code of my implementation or proofs of theorems. How do I do this?

A: On the submission site, there will be an option to submit supplementary material along with your main paper. You can also share supplementary material in a private or publicly shared repository (preferred). This supplementary material should also be anonymized; it may be viewed by reviewers during the review period, so it should adhere to the same double-anonymous guidelines. See instructions on the FSE open science policy.

Q: My submission is based on code available in a public repository. How do I deal with this?

A: Making your code publicly available is not incompatible with double-anonymous reviewing. You can create an anonymized version of the repository and include a new URL that points to the anonymized version of the repository, similar to how you would include supplementary materials to adhere to the Open Science policy. Authors wanting to share GitHub repositories may want to look into using https://anonymous.4open.science/ which is an open source tool that helps you to quickly double-anonymize your repository.

Q: I am building on my own past work on the WizWoz system. Do I need to rename this system in my paper for purposes of anonymity, so as to remove the implied connection between my authorship of past work on this system and my present submission?

A: Maybe. The core question is really whether the system is one that, once identified, automatically identifies the author(s) and/or the institution. If the system is widely available, and especially if it has a substantial body of contributors and has been out for a while, then these conditions may not hold (e.g., LLVM or HotSpot), because there would be considerable doubt about authorship. By contrast, a paper on a modification to a proprietary system (e.g., Visual C++, or a research project that has not open-sourced its code) implicitly reveals the identity of the authors or their institution. If naming your system essentially reveals your identity (or institution), then anonymize it. In your submission, point out that the system name has been anonymized. If you have any doubts, please contact the Program Co-Chairs.

Q: I am submitting a paper that extends my own work that previously appeared at a workshop. Should I anonymize any reference to that prior work?

A: No. But we recommend you do not use the same title for your FSE submission, so that it is clearly distinguished from the prior paper. In general, there is rarely a good reason to anonymize a citation. When in doubt, contact the Program Co-Chairs.

Q: Am I allowed to post my (non-anonymized) paper on my web page or arXiv?

A: You can discuss and present your work that is under submission at small meetings (e.g., job talks, visits to research labs, a Dagstuhl or Shonan meeting), but you should avoid broadly advertising it in a way that reaches the reviewers even if they are not searching for it. Whenever possible, please avoid posting your manuscript on public archives (e.g., arXiv) before or during the submission period. Should you still prefer to do so, carefully avoid adding to the manuscript any reference to FSE 2024 (e.g., footnotes saying “Submitted to FSE 2024”).

Q: Can I give a talk about my work while it is under review? How do I handle social media?

A: We have developed guidelines, described here, to help everyone navigate the tension between the normal communication of scientific results, which double-anonymous reviewing should not impede, and actions that essentially force potential reviewers to learn the identity of the authors of a submission. Roughly speaking, you may (of course!) discuss work under submission, but you should not broadly advertise your work through media that are likely to reach your reviewers. We acknowledge there are grey areas and trade-offs; we cannot describe every possible scenario.

Things you may do:

  • Put your submission on your home page.
  • Discuss your work with anyone who is not on the review committees, or with people on the committees with whom you already have a conflict.
  • Present your work at professional meetings, job interviews, etc.
  • Submit work previously discussed at an informal workshop, previously posted on arXiv or a similar site, previously submitted to a conference not using double-anonymous reviewing, etc.

Things you should not do:

  • Contact members of the review committees about your work, or deliberately present your work where you expect them to be.
  • Publicize your work on major mailing lists used by the community (because potential reviewers likely read these lists).
  • Publicize your work on social media if wide public [re-]propagation is common (e.g., Twitter) and therefore likely to reach potential reviewers. For example, on Facebook, a post with a broad privacy setting (public or all friends) saying, “Whew, FSE paper in, time to sleep” is okay, but one describing the work or giving its title is not appropriate. Alternatively, a post to a group including only the colleagues at your institution is fine.

Reviewers will not be asked to recuse themselves from reviewing your paper unless they feel you have gone out of your way to advertise your authorship information to them. If you are unsure about what constitutes “going out of your way”, please contact the Program Co-Chairs.

Q: Will the fact that FSE is double-anonymous have an impact on handling conflicts of interest?

A: Double-anonymous reviewing does not change the principle that reviewers should not review papers with which they have a conflict of interest, even if they do not immediately know who the authors are. Authors declare conflicts of interest when submitting their papers using the guidelines in the Call for Papers. Papers will not be assigned to reviewers who have a conflict. Note that you should not declare gratuitous conflicts of interest and the chairs will compare the conflicts declared by the authors with those declared by the reviewers. Papers abusing the system will be desk-rejected.

For Reviewers

Q: What should I do if I learn the authors’ identity? What should I do if a prospective FSE author contacts me and asks to visit my institution?

A: If you feel that the authors’ actions are largely aimed at ensuring that potential reviewers know their identity, contact the Program Co-Chairs. Otherwise, you should not treat double-anonymous reviewing differently from other reviewing. In particular, refrain from seeking out information on the authors’ identity, but if you discover it accidentally this will not automatically disqualify you as a reviewer. Use your best judgement.

Q: How do we handle potential conflicts of interest since I cannot see the author names?

A: The conference review system will ask that you identify conflicts of interest when you get an account on the submission system.

Q: How should I avoid learning the authors’ identity, if I am using web-search in the process of performing my review?

A: You should make a good-faith effort not to find the authors’ identity during the review period, but if you inadvertently do so, this does not disqualify you from reviewing the paper. As part of the good-faith effort, please turn off Google Scholar auto-notifications. Please do not use search engines with terms like the paper’s title or the name of a new system being discussed. If you need to search for related work you believe exists, do so after completing a preliminary review of the paper.


The above guidelines are partly based on the PLDI FAQ on double-anonymous reviewing and the ICSE 2023 guidelines on double-anonymous submissions.