The Languages for Inference (LAFI) workshop aims to bring programming-language and machine-learning researchers together to advance all aspects of languages for inference.
Topics include but are not limited to:
- Design of programming languages for statistical inference and/or differentiable programming
- Inference algorithms for probabilistic programming languages, including ones that incorporate automatic differentiation
- Automatic differentiation algorithms for differentiable programming languages
- Probabilistic generative modeling and inference
- Variational and differential modeling and inference
- Semantics (axiomatic, operational, denotational, games, etc.) and types for inference and/or differentiable programming
- Efficient and correct implementation
- Applications of inference and/or differentiable programming
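As a flavor of the differentiable-programming topics above, forward-mode automatic differentiation can be sketched with dual numbers. This is a generic illustration, not tied to any workshop submission:

```python
# Minimal forward-mode AD via dual numbers (illustrative sketch).
class Dual:
    def __init__(self, val, dot=0.0):
        self.val, self.dot = val, dot  # primal value and derivative

    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.val + other.val, self.dot + other.dot)
    __radd__ = __add__

    def __mul__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        # product rule: (uv)' = u'v + uv'
        return Dual(self.val * other.val,
                    self.dot * other.val + self.val * other.dot)
    __rmul__ = __mul__

def deriv(f, x):
    """Evaluate f and df/dx at x in a single forward pass."""
    out = f(Dual(x, 1.0))
    return out.val, out.dot

# d/dx (x*x + 3*x) at x = 2 is 2*2 + 3 = 7
val, dot = deriv(lambda x: x * x + 3 * x, 2.0)
```

Overloading arithmetic on a value/derivative pair is the simplest instance of the AD algorithms the workshop covers; reverse mode and higher-order variants build on the same idea.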
Sun 15 Jan (Eastern Time, US & Canada)
09:00 - 10:30
First Session (LAFI at Scollay)
Chair(s): Steven Holtzen Northeastern University, Christine Tasson Sorbonne Université — LIP6
Christine Tasson Sorbonne Université — LIP6, Steven Holtzen Northeastern University
Introduction to the tensor-programs framework, a PL approach that helps analyse theoretical properties of deep learning (Boston)
Hongseok Yang (KAIST; IBS)
Exact Inference for Discrete Probabilistic Programs via Generating Functions (Paris)
Fabian Zaiser (University of Oxford), C.-H. Luke Ong (University of Oxford)
Exact Probabilistic Inference Using Generating Functions (Boston)
Lutz Klinkenberg (RWTH Aachen University), Tobias Winkler (RWTH Aachen University), Mingshuai Chen (RWTH Aachen University), Joost-Pieter Katoen (RWTH Aachen University)
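For readers unfamiliar with the generating-function approach featured in the two talks above, here is a minimal generic sketch (our own illustration, not the authors' implementations): the probability generating function (PGF) of a discrete program composes algebraically, and exact probabilities can be read off as coefficients.

```python
import sympy as sp

x = sp.symbols('x')

# PGF of one fair coin flip (Bernoulli(1/2)): G(x) = 1/2 + x/2
coin = sp.Rational(1, 2) + sp.Rational(1, 2) * x

# Summing independent random variables multiplies their PGFs:
# three independent fair coins.
three_coins = sp.expand(coin**3)

# Exact probability of exactly k heads = coefficient of x^k.
p2 = three_coins.coeff(x, 2)        # P(heads == 2) = 3/8
total = three_coins.subs(x, 1)      # total probability mass = 1
```

Because the PGF is manipulated symbolically, the answer is exact rather than a sampling estimate, which is the key selling point of this style of inference for discrete probabilistic programs.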
16:00 - 18:00
Poster Session (LAFI at Scollay)
Chair(s): Steven Holtzen Northeastern University
Call for Papers
Call for Extended Abstracts: LAFI 2023, a POPL 2023 workshop on Languages for Inference, January 15, 2023. https://popl23.sigplan.org/home/lafi-2023. Submission deadline: October 28, 2022 (extended).
Invited Speaker
Hongseok Yang, professor at the School of Computing, KAIST, Korea
Deadline: October 28, 2022 (AoE) (EXTENDED)
Format: extended abstract (2 pages + references)
Call for Extended Abstracts
Inference concerns recalibrating program parameters based on observed data, and has gained wide traction in machine learning and data science. Inference can be driven by probabilistic analysis and simulation, or by back-propagation and differentiation. Languages for inference offer built-in support for expressing probabilistic models and inference methods as programs, to ease reasoning, use, and reuse. The recent rise of practical implementations, as well as research activity in inference-based programming, has renewed the need for semantics to help us share insights and innovations.
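As a minimal illustration of inference in this sense (a generic sketch, not tied to any particular LAFI system), consider recalibrating a coin's bias from observed flips via rejection sampling against a uniform prior:

```python
import random

random.seed(0)

# Infer a coin's bias p after observing 8 heads in 10 flips:
# sample p from a uniform prior, keep it only if a simulation
# under p reproduces the observed data (rejection sampling).
def simulate(p, n=10):
    return sum(random.random() < p for _ in range(n))

observed = 8
posterior = []
while len(posterior) < 2000:
    p = random.random()          # draw p from the uniform prior
    if simulate(p) == observed:  # condition on the observation
        posterior.append(p)

# Posterior mean; analytically Beta(9, 3), so about (8+1)/(10+2) = 0.75.
mean = sum(posterior) / len(posterior)
```

Probabilistic programming languages let one write just the model (prior plus likelihood) and delegate the conditioning step to a reusable inference engine, instead of hand-coding the loop above.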
This workshop aims to bring programming-language and machine-learning researchers together to advance all aspects of languages for inference. Topics include but are not limited to:
design of programming languages for inference and/or differentiable programming;
inference algorithms for probabilistic programming languages, including ones that incorporate automatic differentiation;
automatic differentiation algorithms for differentiable programming languages;
probabilistic generative modeling and inference;
variational and differential modeling and inference;
semantics (axiomatic, operational, denotational, games, etc.) and types for inference and/or differentiable programming;
efficient and correct implementation;
and last but not least, applications of inference and/or differentiable programming.
We expect this workshop to be informal, and our goal is to foster collaboration and establish common ground. Thus, the proceedings will not be a formal or archival publication, and we expect to spend only a portion of the workshop day on traditional research talks. Nevertheless, as a concrete basis for fruitful discussions, we call for extended abstracts describing specific and ideally ongoing work on probabilistic and differential programming languages, semantics, and systems.
Submission deadline on October 28, 2022 (AoE) (EXTENDED)
Submission link: https://lafi23.hotcrp.com/
Anonymous extended abstracts are up to 2 pages in PDF format, excluding references.
In line with the SIGPLAN Republication Policy, inclusion of extended abstracts in the program is not intended to preclude later formal publication.
Remote participation policy
We plan to coordinate with the POPL conference on remote participation. We intend to support remote participation even if the workshop happens in person, so as to create an inclusive environment that does not require travel.
Introduction to the tensor-programs framework, a PL approach that helps analyse theoretical properties of deep learning.
While deep learning has many remarkable success stories, finding a satisfactory mathematical explanation of why it is so effective is still considered an open challenge. One recent promising direction is to analyse the mathematical properties of neural networks in the limit where the widths of the networks' hidden layers go to infinity. Researchers have been able to prove highly nontrivial properties of such infinitely-wide neural networks: for example, that gradient-based training achieves zero training error (so that it finds a global optimum), and that the typical random initialisation of those infinitely-wide networks makes them so-called Gaussian processes, which are well-studied random objects in machine learning, statistics, and probability theory.
In this talk, I will introduce Greg Yang's tensor-programs framework, which has led to substantial generalisations of prior mathematical results on infinitely-wide neural networks. The framework specifies a programming language for expressing computations of neural networks that are parameterised by the widths of those networks. Although simple, the language is expressive enough to cover both forward and backward computations of networks of nearly all architectures. The most important part of the framework is the so-called master theorem, which says that every program in the framework's language has a well-defined limit as the widths of the associated network go to infinity, and furthermore that this limit can be defined inductively over the syntax of the program. The tensor-programs framework has been used to generalise results on infinitely-wide neural networks from a few simple network architectures to nearly all architectures.
The goal of my talk is to introduce a possibly-interesting new research topic for PL researchers. I will not assume any prior knowledge of the theory of neural networks, in particular of infinitely-wide neural networks or Greg Yang's tensor programs. At the end of the talk, I will briefly mention a few research opportunities for PL researchers.
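The infinite-width stabilisation the abstract describes can be observed empirically with a small sketch (our own illustration, not part of the tensor-programs framework): with 1/sqrt(width) initialisation scaling, the output variance of a random one-hidden-layer network settles to a fixed value as the width grows.

```python
import numpy as np

rng = np.random.default_rng(0)

def random_net_output(x, width):
    """One randomly-initialised hidden layer with 1/sqrt(fan-in) scaling."""
    W1 = rng.normal(size=(width, x.size)) / np.sqrt(x.size)
    w2 = rng.normal(size=width) / np.sqrt(width)
    return w2 @ np.tanh(W1 @ x)

x = np.ones(3)
# Empirical output variance over 2000 random initialisations, per width.
variances = {w: np.var([random_net_output(x, w) for _ in range(2000)])
             for w in (10, 100, 10000)}
```

At large width the variance approaches E[tanh(z)^2] for z ~ N(0, 1), roughly 0.39 here; in the infinite-width limit the output itself becomes Gaussian, which is the Gaussian-process behaviour the talk refers to.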
Boston - Paris - Online
We encourage attendees to join the LAFI workshop in person in Boston. For presenters and attendees who are not able to travel (for various reasons: visa, work or family constraints, environmental concerns, ...), we propose a hybrid meeting with a mix of in-person and virtual talks.
Registration is mandatory. A virtual option (a 6-day pass with reduced fees) is available for Paris and online attendees; the Airmeet link will be sent a few days in advance.
Join the Slack channel for remote and asynchronous interaction during the LAFI meeting.
Boston - In-Person Attendees and Speakers
50 Park Plaza
Boston, Massachusetts, United States
Paris - Attendees and Speakers
We propose a bilocated event, with speakers and attendees also gathering at Université Paris Cité.
Salle Leduc, ground floor,
43 rue des Saints-Pères,
Métro Saint-Germain-des-Prés
To access the building, you need to be on the list of Paris participants and to present an ID; to be added to the list, register for the Paris local event.
Online Attendees and Speakers
Complementary information for Virtual Attendees and Virtual Speakers is now available.