ESEC/FSE 2022
Mon 14 - Fri 18 November 2022, Singapore

Thu 17 Nov

Displayed time zone: Beijing, Chongqing, Hong Kong, Urumqi

09:00 - 10:30  Tutorials - 17: Morning session, part 1 (Town Plaza GLR)
    09:00  90m  Tutorial: Program Analysis using WALA
                Joanna C. S. Santos (University of Notre Dame), Julian Dolby (IBM Research)

10:30 - 11:00  Coffee/Tea Break (Social)

11:00 - 12:30  Tutorials - 17: Morning session, part 2 (Town Plaza GLR)
    11:00  90m  Tutorial: Program Analysis using WALA
                Joanna C. S. Santos (University of Notre Dame), Julian Dolby (IBM Research)

12:30 - 14:00  Lunch (Social)

14:00 - 15:30  Tutorials - 17: Afternoon session, part 1 (Town Plaza GLR)
    14:00  90m  Tutorial: Dynamic Data Race Prediction: Fundamentals, Theory, and Practice
                Umang Mathur (National University of Singapore), Andreas Pavlogiannis (Aarhus University)

15:30 - 16:00  Coffee/Tea Break (Social)

16:00 - 17:30  Tutorials - 17: Afternoon session, part 2 (Town Plaza GLR)
    16:00  90m  Tutorial: Dynamic Data Race Prediction: Fundamentals, Theory, and Practice
                Umang Mathur (National University of Singapore), Andreas Pavlogiannis (Aarhus University)

Fri 18 Nov

Displayed time zone: Beijing, Chongqing, Hong Kong, Urumqi

09:00 - 10:30  Tutorials - 18: Morning session, part 1 (Town Plaza Auditorium 1)
    09:00  90m  Tutorial: Machine Learning and Natural Language Processing for Automating Software Testing
                Mauro Pezzè (USI Lugano; Schaffhausen Institute of Technology)

10:30 - 11:00  Coffee/Tea Break (Social)

11:00 - 12:30  Tutorials - 18: Morning session, part 2 (Town Plaza Auditorium 1)
    11:00  90m  Tutorial: Machine Learning and Natural Language Processing for Automating Software Testing
                Mauro Pezzè (USI Lugano; Schaffhausen Institute of Technology)

12:30 - 14:00  Lunch (Social)

14:00 - 15:30  Tutorials - 18: Afternoon session, part 1 (Town Plaza Auditorium 1)
    14:00  90m  Tutorial: Performing Large-Scale Mining Studies: From Start to Finish
                Robert Dyer (University of Nebraska-Lincoln), Samuel W. Flint (University of Nebraska-Lincoln)

15:30 - 16:00  Coffee/Tea Break (Social)

16:00 - 17:30  Tutorials - 18: Afternoon session, part 2 (Town Plaza Auditorium 1)
    16:00  90m  Tutorial: Performing Large-Scale Mining Studies: From Start to Finish
                Robert Dyer (University of Nebraska-Lincoln), Samuel W. Flint (University of Nebraska-Lincoln)

Call for Tutorials

The ESEC/FSE 2022 Tutorials track provides conference participants with the opportunity to gain new insights, knowledge, and skills in a broad range of areas of software engineering.

We invite proposals for tutorials on topics related to software engineering. A tutorial may describe a software engineering activity, such as the state-of-the-art in program analysis, or it may describe a method, technique, or technology used in software engineering (research and/or practice), such as natural language processing, Bayesian statistics, or grounded theory.

Tutorials are intended to provide independent instruction on a topic of relevance to software engineering practitioners and researchers. Therefore, no commercial or sales-oriented presentations will be accepted.

Potential presenters should keep in mind that there may be quite a varied audience, including novice graduate students, seasoned practitioners, and specialized researchers. They should be prepared to cope with this diversity unless they make clear that the tutorial is oriented to a particular subgroup. Also bear in mind that not everyone will have English as their first language. Thus, presenters should provide comprehensive notes written in clear, standard English. Idioms, irony, slang, and culture-specific references should be avoided.

Tutorials will be held after the main conference, on November 17-18, 2022. Each tutorial will last 180 minutes.

Tutorial Proposal Guidelines

Proposal submissions should follow this structure:

  • Title of the tutorial
  • Name, affiliation, and email address of the presenter(s)
  • Abstract (max 300 words), suitable for posting on the conference web site
  • Tutorial aims and objectives
  • Intended audience and required background
  • Relevance: justification of why the topic would be of interest to a broad section of the software engineering community
  • Format: lecture, hands-on session, group activity, etc.
  • Outline of the topics covered by the tutorial, with approximate timing
  • Key learning objectives for the participants
  • Presenter’s bio: 250-word bio of the presenter(s), stressing their qualification with respect to the tutorial topic
  • Tutorial history: list of previous editions of the tutorial (if any), including the dates, the venue, and the number of attendees.
  • Audio-visual and technical requirements
  • At least 3 representative sample slides from the intended tutorial presentation

The proposal (excluding the sample slides) should be no longer than three (3) pages and must conform to the official “ACM Primary Article Template”. LaTeX users must use \documentclass[sigconf]{acmart}.

How to Submit

The proposal, as well as the sample slides, should be submitted in a single PDF (with all fonts included) through the online submission site https://esecfse2022tutorials.hotcrp.com

Evaluation

The Tutorials committee will review each proposal and select those that best fit the evaluation criteria. Each proposal will be evaluated on its anticipated benefit for prospective participants and its fit within the program as a whole. Factors to be considered include: relevance, timeliness, importance, and audience appeal; suitability for presentation in a 180-minute format; effectiveness of teaching methods; and past experience and qualifications of the instructors.

Important Dates

  • Submission deadline: Friday June 10, 2022 (23:59 AoE)
  • Notification: Friday, July 8, 2022

If you have any questions, please contact the Tutorials chair, Domenico Bianculli.

November 17 - Morning session

Program Analysis using WALA

  • Joanna Cecilia da Silva Santos (University of Notre Dame)
  • Julian Dolby (IBM Thomas J. Watson Research Center)

Static analysis is widely used in research and practice for purposes such as fault localization, vulnerability detection, code clone identification, code refactoring, and optimization. Since implementing static analyzers is a non-trivial task, engineers often rely on existing frameworks to implement their techniques. The IBM T.J. Watson Libraries for Analysis (WALA) is one such framework; it supports the analysis of multiple environments, such as Java bytecode (and related languages), JavaScript, Android, and Python. In this tutorial, we walk through the process of using WALA for program analysis. First, the tutorial covers the background knowledge needed to understand the technical implementation details of the algorithms and techniques presented. Subsequently, we provide a technical overview of the WALA framework and its support for analyzing code written in multiple programming languages and frameworks. Then, we give several live demonstrations of using WALA to implement client analyses. We focus on two common uses of analysis: taint analysis, a form of security analysis, and the use of analysis graphs for machine learning on code.
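
For readers who want a concrete starting point, here is a minimal sketch of the typical shape of a WALA client in Java: build an analysis scope, compute a class hierarchy, choose entrypoints, and construct a call graph on which client analyses (such as taint analysis) are then written. It assumes a recent WALA release on the classpath; package locations and method signatures have shifted between WALA versions (AnalysisScopeReader and makeZeroCFABuilder in particular), so treat it as an illustrative outline rather than exact, copy-paste code.

    // A minimal WALA client: build a 0-CFA call graph for an application jar.
    // NOTE: package locations and signatures differ slightly across WALA releases;
    // e.g. AnalysisScopeReader moved to com.ibm.wala.core.util.config in newer versions.
    import com.ibm.wala.classLoader.Language;
    import com.ibm.wala.ipa.callgraph.*;
    import com.ibm.wala.ipa.callgraph.impl.Util;
    import com.ibm.wala.ipa.cha.ClassHierarchy;
    import com.ibm.wala.ipa.cha.ClassHierarchyFactory;
    import com.ibm.wala.util.config.AnalysisScopeReader;

    public class WalaCallGraphSketch {
      public static void main(String[] args) throws Exception {
        // Analysis scope = the application jar given on the command line (no exclusions file).
        AnalysisScope scope = AnalysisScopeReader.makeJavaBinaryAnalysisScope(args[0], null);

        // Class hierarchy over everything in the scope.
        ClassHierarchy cha = ClassHierarchyFactory.make(scope);

        // Treat main() methods as entrypoints and build a context-insensitive (0-CFA) call graph.
        Iterable<Entrypoint> entrypoints = Util.makeMainEntrypoints(scope, cha);
        AnalysisOptions options = new AnalysisOptions(scope, entrypoints);
        var builder = Util.makeZeroCFABuilder(Language.JAVA, options, new AnalysisCacheImpl(), cha, scope);
        CallGraph cg = builder.makeCallGraph(options, null);

        // Client analyses (e.g. taint tracking) are then written over the call graph and IR.
        System.out.println("Call graph nodes: " + cg.getNumberOfNodes());
      }
    }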


November 17 - Afternoon session

Dynamic Data Race Prediction: Fundamentals, Theory and Practice

  • Umang Mathur (National University of Singapore)
  • Andreas Pavlogiannis (Aarhus University)

Data races are the most common concurrency bugs, and considerable effort goes into ensuring that programs are data-race free (DRF). The most popular approach is dynamic analysis, which soundly reports DRF violations by analyzing program executions. Recently, there has been a marked shift toward predictive analysis techniques. Such techniques attempt to predict DRF violations even in unobserved program executions, while making sure that the analysis remains sound (raises no false positives).

This tutorial will present the foundations of race prediction in a systematic manner and summarize the latest advances in a concise and unifying way. State-of-the-art predictive techniques will be explained from first principles, followed by a comparison of the soundness, completeness, and complexity guarantees each one provides. In addition, we will highlight the specific data structures that make these techniques algorithmically efficient. We will also touch on various notions of optimality and their suitability for online/offline prediction. On the theoretical side, we will highlight some recent hard computational barriers inherent in race prediction, as well as ways to alleviate them in specific settings. We will also cover other common concurrency bugs, such as deadlocks and atomicity violations, and highlight cases in which techniques are transferable between them. The tutorial will include a hands-on demonstration of two relevant tools, RAPID and M2. Finally, we will end with some key open questions, with the aim of inspiring future research.
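
As a point of reference for the tutorial material, the sketch below shows the classic happens-before baseline that predictive techniques refine: a vector-clock analysis over a recorded trace that flags two unordered writes to the same variable. It is a toy illustration in the spirit of textbook detectors such as Djit+/FastTrack; all class and event names are made up for this example and are not taken from RAPID or M2.

    // Illustrative vector-clock happens-before (HB) race check over a recorded trace.
    // This is the textbook baseline, not the RAPID or M2 tools demonstrated in the tutorial.
    import java.util.*;

    enum Op { READ, WRITE, ACQ, REL }

    record Event(int thread, Op op, String target) {}  // target: a variable or a lock name

    public class HbRaceSketch {
      static final int T = 2;  // threads in the toy trace

      static int[] copyOf(int[] c) { return Arrays.copyOf(c, T); }
      static void join(int[] into, int[] other) {
        for (int i = 0; i < T; i++) into[i] = Math.max(into[i], other[i]);
      }
      static boolean leq(int[] a, int[] b) {  // a happens-before-or-equals b (pointwise <=)
        for (int i = 0; i < T; i++) if (a[i] > b[i]) return false;
        return true;
      }

      public static void main(String[] args) {
        // Toy trace: thread 0 writes x while holding lock l; thread 1 writes x without it -> race.
        List<Event> trace = List.of(
            new Event(0, Op.ACQ, "l"), new Event(0, Op.WRITE, "x"), new Event(0, Op.REL, "l"),
            new Event(1, Op.WRITE, "x"));

        int[][] clock = new int[T][T];                    // per-thread vector clocks
        for (int t = 0; t < T; t++) clock[t][t] = 1;
        Map<String, int[]> lockClock = new HashMap<>();   // clock released with each lock
        Map<String, int[]> lastWrite = new HashMap<>();   // clock of the last write to each variable

        for (Event e : trace) {
          int[] c = clock[e.thread()];
          switch (e.op()) {
            case ACQ -> join(c, lockClock.getOrDefault(e.target(), new int[T]));
            case REL -> { lockClock.put(e.target(), copyOf(c)); c[e.thread()]++; }
            case WRITE -> {
              int[] prev = lastWrite.get(e.target());
              if (prev != null && !leq(prev, c))
                System.out.println("HB race on " + e.target() + " at " + e);
              lastWrite.put(e.target(), copyOf(c));
            }
            case READ -> { /* read/write checks are analogous and omitted for brevity */ }
          }
        }
      }
    }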


November 18 - Morning session

Machine Learning and Natural Language Processing for Automating Software Testing

  • Mauro Pezzè (USI Università della Svizzera italiana)

In this tutorial, we see how natural language processing and machine learning can help us address the open challenges of software testing. We overview the open challenges of testing autonomous and self-adaptive software systems, discuss the leading-edge technologies that can address the core issues, and review the latest progress and future prospects of natural language processing and machine learning in coping with these problems.

Automating test case and oracle generation remains a largely open problem. Autonomous and self-adaptive systems, like self-driving cars, smart cities, and smart buildings, raise new issues that make already challenging scenarios even harder. In the tutorial we examine the growing importance of field testing to address failures that emerge in production, the role of dynamic analysis and deep learning in revealing failure-prone scenarios, the need for symbolic fuzzing to explore unexpected scenarios, and the potential of reinforcement learning and natural language processing to generate test cases and oracles. We see in detail state-of-the-art approaches that exploit natural language processing to automatically generate executable test oracles, as well as semantic matching and deep and reinforcement learning to automatically generate test cases and reveal failure-prone scenarios in production.
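
As a generic, heavily simplified illustration of the last point, and not of the specific techniques presented in the tutorial, the sketch below shows the overall shape of a reinforcement-learning loop for test generation: an agent repeatedly picks an input, runs the system under test, and is rewarded only for newly covered behavior. The system under test, the action set, and the reward are all hypothetical.

    // A generic epsilon-greedy test-generation loop: reward = newly covered branches of a toy SUT.
    // Everything here (the SUT, the action set, the reward) is a made-up illustration.
    import java.util.*;

    public class RlTestGenSketch {
      // Toy system under test: returns an identifier of the branch it took.
      static String sut(int x) {
        if (x < 0) return "negative";
        if (x == 0) return "zero";
        if (x > 1000) return "large";
        return "small";
      }

      static int argmax(double[] q) {
        int best = 0;
        for (int i = 1; i < q.length; i++) if (q[i] > q[best]) best = i;
        return best;
      }

      public static void main(String[] args) {
        int[] actions = {-5, 0, 7, 50, 5000};       // candidate inputs the agent can choose
        double[] q = new double[actions.length];    // estimated value of each action
        Set<String> covered = new HashSet<>();
        Random rnd = new Random(42);
        double eps = 0.2, alpha = 0.5;

        for (int episode = 0; episode < 100; episode++) {
          // Epsilon-greedy action selection.
          int a = rnd.nextDouble() < eps ? rnd.nextInt(actions.length) : argmax(q);
          String branch = sut(actions[a]);
          double reward = covered.add(branch) ? 1.0 : 0.0;  // reward only for new coverage
          q[a] += alpha * (reward - q[a]);                  // simple value update
        }
        System.out.println("Covered branches: " + covered);
      }
    }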

The tutorial is designed both for researchers whose research roadmap focuses on software testing and on applications of natural language processing and machine learning to software engineering, and for practitioners who see important professional opportunities in autonomous and self-adaptive systems. It is particularly well suited to PhD students and postdoctoral researchers who aim to address new challenges with novel technologies. The tutorial is self-contained and is designed for a software engineering audience that may not have a specific background in natural language processing and machine learning.


November 18 - Afternoon session

Performing Large-Scale Mining Studies: From Start to Finish

  • Robert Dyer (University of Nebraska-Lincoln)
  • Samuel W. Flint (University of Nebraska-Lincoln)

Modern software engineering research often relies on mining open-source software repositories, either to motivate research problems or to evaluate proposed approaches. Mining ultra-large-scale software repositories is still a difficult task, requiring substantial expertise and access to significant hardware. Tools such as Boa can help researchers easily mine large numbers of open-source repositories. There has also recently been a push toward open science, with an emphasis on making replication packages available; building such packages adds to researchers’ workload. In this tutorial, we teach how to use the Boa infrastructure for mining software repository data. We use Boa’s VS Code IDE extension to write and submit Boa queries, and Boa’s study template to show how researchers can more easily analyze Boa’s output and automatically produce a suitable replication package published on Zenodo.