ComBayNS 2025 Workshop
Combining Bayesian and Neural approaches for Structured Data
This workshop is part of the IJCNN 2025 conference, held in Rome, June 30 – July 5, 2025
Paper submission
The submission link is available on the Call for Papers page.
Be sure to read the submission instructions carefully!
Important Dates
- Paper Submission Deadline: March 27, 2025 (extended from March 20, 2025)
- Notification of Acceptance: April 15, 2025
- Workshop Date: July 2, 2025
Workshop Program
11:15 – 11:30
Introduction and dinner plans
Join us for dinner!
Please scan the QR code below and fill in the form:
[QR code]
11:30 – 12:30
Keynote talk by Prof. Coates
Title: Bayesian Graph Neural Networks and Transformers
Abstract: In numerous settings, ranging from medical diagnosis to quantitative finance, we observe interacting entities and need to make predictions based on the observed relationships. We can represent such data using an annotated graph, with nodes representing the entities and edges depicting the relationships. It is important to develop inference methods that can provide confidence bounds and are robust to graph errors such as missing or spurious edges. In this talk, I will introduce a Bayesian graph learning framework that delivers the desired robustness and uncertainty characterization. Critical to this framework is the specification of a graph model, and I will introduce several candidate options. I will then discuss how this framework can be extended to a state-of-the-art graph transformer and a continuous-kernel graph convolution network. I will conclude by highlighting some of the practical applications of the graph learning methods, including recommender systems and circuit design.
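As a concrete (and deliberately simplified) illustration of the Bayesian viewpoint described above, the sketch below treats the observed graph as uncertain: it samples plausible graphs around the observed adjacency matrix, runs the same graph convolution on each sample, and reports the mean prediction together with its spread as an uncertainty estimate. The edge-flip "posterior" and all hyper-parameters are illustrative placeholders, not the graph models discussed in the talk.

import numpy as np

rng = np.random.default_rng(0)

def gcn_layer(A, X, W):
    """One graph-convolution layer with symmetric normalization."""
    A_hat = A + np.eye(A.shape[0])                 # add self-loops
    d_inv_sqrt = np.diag(1.0 / np.sqrt(A_hat.sum(1)))
    return np.tanh(d_inv_sqrt @ A_hat @ d_inv_sqrt @ X @ W)

def sample_graph(A_obs, flip_prob=0.05):
    """Sample a graph near A_obs by flipping each edge slot with small probability."""
    n = A_obs.shape[0]
    flips = np.triu(rng.random((n, n)) < flip_prob, 1)
    flips = (flips + flips.T).astype(float)        # keep the graph symmetric
    return np.abs(A_obs - flips)

n, f, h = 10, 4, 3
A_obs = np.triu((rng.random((n, n)) < 0.3).astype(float), 1)
A_obs = A_obs + A_obs.T
X = rng.standard_normal((n, f))
W = rng.standard_normal((f, h))                    # fixed, untrained weights

# Marginalize the prediction over sampled graphs: the mean is a robust
# prediction, the standard deviation reflects uncertainty due to graph errors.
preds = np.stack([gcn_layer(sample_graph(A_obs), X, W) for _ in range(50)])
print(preds.mean(0).shape, preds.std(0).mean())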
Mark Coates is a Professor in the Department of Electrical and Computer Engineering at McGill University (Montreal, Canada). He received the B.E. degree in computer systems engineering from the University of Adelaide, Australia, in 1995, and a Ph.D. degree in information engineering from the University of Cambridge, U.K., in 1999. He was a research associate and lecturer at Rice University, Texas, from 1999-2001. In 2012-2013, he worked as a Senior Scientist at Winton Capital Management, Oxford, UK. Coates’ research interests include machine learning, statistical signal processing, and Bayesian and Monte Carlo inference.
12:30 – 12:50
Ensembles of Multi-scale Kernel Smoothers for Data Imputation
Authors: Amit Shreiber, Dalia Fishelov, Neta Rabin
Abstract: A collected dataset usually contains some proportion of incomplete data. Various methods for handling this missing data exist in the literature, such as deleting observations that contain missing values or replacing missing values with the mean of the other observations in the relevant variables. Nevertheless, most of these techniques do not consider the geometric structure of the data in both the row (instance) space and the column (feature) space. In this work, we propose a smoothing or regression procedure that operates on both the row and column spaces of the data and refines the approximated model in an iterative manner, following ideas from iterative bias reduction models. We provide a mathematical analysis of the method and test its performance on several datasets with diverse missingness mechanisms. Promising results are seen across all of the missingness types and datasets. Finally, the proposed multi-scale approximation is general and may be beneficial for additional machine learning tasks that process tabular data.
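To make the row/column smoothing idea tangible, here is a minimal sketch, using single-scale Gaussian kernels with ad-hoc bandwidths, of alternating kernel smoothing over the instance and feature spaces with iterative refinement of the missing entries. The paper's multi-scale ensemble and bias-reduction scheme are not reproduced here.

import numpy as np

rng = np.random.default_rng(1)

def kernel_smoother(Z, eps):
    """Row-normalized Gaussian kernel over the rows of Z."""
    d2 = ((Z[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
    K = np.exp(-d2 / eps)
    return K / K.sum(1, keepdims=True)

X_true = rng.standard_normal((40, 6))
mask = rng.random(X_true.shape) < 0.15             # True where values are missing
X_obs = np.where(mask, np.nan, X_true)

# Initialize missing entries with column means, then refine iteratively.
X_hat = np.where(mask, np.nanmean(X_obs, axis=0), X_obs)
for _ in range(10):
    S_rows = kernel_smoother(X_hat, eps=10.0)      # instance-space similarities
    S_cols = kernel_smoother(X_hat.T, eps=100.0)   # feature-space similarities
    smoothed = 0.5 * (S_rows @ X_hat + (S_cols @ X_hat.T).T)
    X_hat[mask] = smoothed[mask]                   # only missing entries change

print("RMSE on imputed entries:", np.sqrt(((X_hat - X_true)[mask] ** 2).mean()))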
13:00 – 14:00
Lunch
Let's grab a bite!
14:00 – 15:00
Keynote talk by Dr. Lutzeyer
Title: Avenues for Interaction between Bayesian Methodology and Graph Neural Networks
Abstract: Graph Neural Networks (GNNs) have celebrated many academic and industrial successes in recent years, providing rich ground for theoretical analysis and achieving state-of-the-art results in several learning tasks. In this talk, I will present work in which we propose a data augmentation scheme using Gaussian Mixture Models in the latent space of the penultimate neural network layer. I will furthermore review associated theoretical results in which we upper bound the generalisation error of GNNs (trained under data augmentation) using their associated Rademacher complexity. Interacting with the learned Euclidean representations of structured data may be a broadly applicable avenue for future research introducing Bayesian methodology to neural approaches. I will conclude my talk by reviewing how Bayesian considerations could naturally extend some of our other recent work on orthonormal weight matrices in GNNs and interaction effects in neighbourhoods in graphs. The majority of the presented work was done by Yassine Abbahaddou, my final-year PhD student, in collaboration with his co-supervisors Fragkiskos Malliaros, Amine Aboussalah and Michalis Vazirgiannis.
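The latent-space augmentation recipe in the abstract can be sketched in a few lines: fit a Gaussian Mixture Model to penultimate-layer representations and sample synthetic latent points to enlarge the training set of the final layer. The embeddings below are synthetic stand-ins, and the sketch illustrates the general recipe rather than the authors' exact scheme.

import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(2)

# Synthetic stand-ins for penultimate-layer embeddings of two classes.
latents = np.concatenate([
    rng.normal(loc=-2.0, scale=0.5, size=(100, 8)),
    rng.normal(loc=+2.0, scale=0.5, size=(100, 8)),
])

# Fit a GMM in the latent space and draw new, synthetic latent points.
gmm = GaussianMixture(n_components=4, random_state=0).fit(latents)
aug_latents, _ = gmm.sample(50)

# The augmented latents would then be used to train the final (e.g. linear)
# layer, with labels inherited from the mixture components or nearby points.
print(latents.shape, aug_latents.shape)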
Johannes Lutzeyer has been an Assistant Professor in the Computer Science Department of École Polytechnique, IP Paris, France, since 2022. Previously, he completed a 2.5-year postdoc under the supervision of Prof. Michalis Vazirgiannis, also at École Polytechnique. He obtained his degrees (BSc, MSc and PhD) in the Statistics Section of the Mathematics Department at Imperial College London under the supervision of Prof. Andrew Walden. Johannes works in the area of Graph Representation Learning, specifically on Graph Neural Networks, and has made contributions to the academic literature in this domain, with a small number of publications at the ICLR, ICML and NeurIPS conferences, among others.
15:00 – 15:20
Learn to Jump: Adaptive Random Walks for Long-Range Propagation through Graph Hierarchies
Authors: Joël Mathys, Federico Errica
Abstract: Message-passing architectures struggle to sufficiently model long-range dependencies in node and graph prediction tasks. We propose a novel approach exploiting hierarchical graph structures and adaptive random walks to address this challenge. Our method introduces learnable transition probabilities that decide whether the walk should prefer the original graph or travel across hierarchical shortcuts. On a synthetic long-range task, we demonstrate that our approach can exceed the theoretical bound that constrains traditional approaches operating solely on the original topology. Specifically, walks that prefer the hierarchy achieve the same performance as longer walks on the original graph. These preliminary findings open a promising direction for efficiently processing large graphs while effectively capturing long-range dependencies.
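A toy version of the jump mechanism, for intuition: on a path graph with a one-level hierarchy, the walker either takes a local edge or travels up to its supernode's level, moves there, and descends again. The jump probability below is a fixed scalar; in the paper it is a learned, state-dependent quantity, and the hierarchy here is a crude stand-in.

import numpy as np

rng = np.random.default_rng(3)

# Path graph 0-1-...-19 with a one-level hierarchy: parent(i) = i // 5.
n = 20
neighbors = {i: [j for j in (i - 1, i + 1) if 0 <= j < n] for i in range(n)}
parent = {i: i // 5 for i in range(n)}
children = {p: [i for i in range(n) if i // 5 == p] for p in range(n // 5)}
n_coarse = n // 5

def walk(start, steps, jump_prob):
    pos = start
    for _ in range(steps):
        if rng.random() < jump_prob:
            # Jump through the hierarchy: up to the parent, one coarse step,
            # then back down to a random child (a long-range shortcut).
            p = parent[pos]
            q = rng.choice([c for c in (p - 1, p + 1) if 0 <= c < n_coarse])
            pos = int(rng.choice(children[q]))
        else:
            pos = int(rng.choice(neighbors[pos]))
    return pos

# 8-step local-only walks can never reach node 15 from node 0 (15 hops away),
# while walks that may use the hierarchy get there regularly.
print(np.mean([walk(0, 8, 0.3) >= 15 for _ in range(2000)]),
      np.mean([walk(0, 8, 0.0) >= 15 for _ in range(2000)]))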
15:20 – 15:40
BN-Pool: a Bayesian Nonparametric Approach to Graph Pooling
Authors: Daniele Castellana, Filippo Maria Bianchi
Abstract: We introduce BN-Pool, the first clustering-based pooling method for Graph Neural Networks (GNNs) that adaptively determines the number of supernodes in a coarsened graph. By leveraging a Bayesian non-parametric framework, BN-Pool employs a generative model capable of partitioning graph nodes into an unbounded number of clusters. During training, we learn the node-to-cluster assignments by combining the supervised loss of the downstream task with an unsupervised auxiliary term, which encourages the reconstruction of the original graph topology while penalizing unnecessary proliferation of clusters. This adaptive strategy allows BN-Pool to automatically discover an optimal coarsening level, offering enhanced flexibility and removing the need to specify sensitive pooling ratios. We show that BN-Pool achieves superior performance across diverse benchmarks.
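For intuition, the following schematic shows the nonparametric ingredient: a (truncated) stick-breaking prior places mass on an unbounded pool of supernodes, nodes are softly assigned to them, and the graph is coarsened through the assignment matrix. Real BN-Pool learns the assignments with the supervised-plus-reconstruction objective described above; here everything is sampled at random, purely for illustration.

import numpy as np

rng = np.random.default_rng(4)

def stick_breaking(alpha, K):
    """Truncated stick-breaking weights of a Dirichlet process prior."""
    betas = rng.beta(1.0, alpha, size=K)
    remaining = np.concatenate([[1.0], np.cumprod(1.0 - betas[:-1])])
    return betas * remaining

n, K = 12, 30                                  # K is a truncation, not a chosen cluster count
A = np.triu((rng.random((n, n)) < 0.3).astype(float), 1)
A = A + A.T

w = stick_breaking(alpha=2.0, K=K)             # prior supernode weights
logits = np.log(w) + rng.gumbel(size=(n, K))   # noisy soft assignments
S = np.exp(logits - logits.max(1, keepdims=True))
S = S / S.sum(1, keepdims=True)                # row i: node i -> supernode probabilities

A_coarse = S.T @ A @ S                         # pooled (coarsened) adjacency
print(A_coarse.shape, "supernodes actually used:", int((S.sum(0) > 0.5).sum()))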
15:40 – 16:00
On Learning the Width of Neural Networks
Authors: Federico Errica, Henrik Christiansen, Viktor Zaverkin, Mathias Niepert, Francesco Alesiani
Abstract: We introduce an easy-to-use technique to learn an unbounded width of a neural network’s layer during training. The technique does not rely on alternate optimization nor hand-crafted gradient heuristics; rather, it jointly optimizes the width and the parameters of each layer via simple backpropagation. We apply the technique to a broad range of data domains such as tables, images, texts, and graphs, showing how the width adapts to the task’s difficulty. By imposing a soft ordering of importance among neurons, it is also possible to dynamically compress the network with no performance degradation.
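One plausible way to picture the mechanism (not necessarily the paper's exact parameterization): give every hidden unit a gate and define gate j as the cumulative product of sigmoids up to j, so gates decrease monotonically and units can only switch off from the tail; a penalty on the summed gates then lets plain backpropagation trade width against the task loss.

import torch

torch.manual_seed(0)

class SoftWidthLayer(torch.nn.Module):
    """A layer whose effective width is learned jointly with its weights."""
    def __init__(self, d_in, max_width, d_out):
        super().__init__()
        self.fc1 = torch.nn.Linear(d_in, max_width)
        self.fc2 = torch.nn.Linear(max_width, d_out)
        self.gate_logits = torch.nn.Parameter(torch.full((max_width,), 4.0))

    def gates(self):
        # Cumulative product of sigmoids: monotonically decreasing gates,
        # i.e. a soft ordering of importance among neurons.
        return torch.cumprod(torch.sigmoid(self.gate_logits), dim=0)

    def forward(self, x):
        return self.fc2(torch.relu(self.fc1(x)) * self.gates())

model = SoftWidthLayer(4, max_width=64, d_out=1)
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
x = torch.randn(256, 4)
y = x[:, :1] ** 2 + 0.1 * torch.randn(256, 1)

for _ in range(300):
    loss = ((model(x) - y) ** 2).mean() + 1e-3 * model.gates().sum()  # width penalty
    opt.zero_grad()
    loss.backward()
    opt.step()

print("effective width:", int((model.gates() > 0.5).sum()))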
16:00 – 16:30
Tea Break
Enjoy a refreshing tea break.
16:30 – 16:50
Towards solving Kolmogorov-Arnold Theorem using Variational Optimization
Authors: Francesco Alesiani, Federico Errica, Henrik Christiansen
Abstract: Kolmogorov-Arnold Networks (KANs) are an emerging architecture for building machine learning models. KANs are based on the theoretical foundation of the Kolmogorov-Arnold Theorem and its expansions, which provide an exact representation of a multi-variate continuous bounded function as the composition of a limited number of uni-variate continuous functions. While such theoretical results are powerful, their use as a representation-learning alternative to multi-layer perceptrons (MLPs) hinges on the choice of the number of bases modeling each of the univariate functions. In this work, we show how to address this problem by adaptively learning a potentially infinite number of bases for each univariate function during training. We do so by means of a variational inference optimization problem. Our proposal, called InfinityKAN, extends the potential applicability of KANs by treating an important hyper-parameter as part of the learning process.
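A crude sketch of the underlying idea: model a univariate function as a weighted sum of basis functions whose (soft) count is itself optimized, by penalizing the expected number of active bases. InfinityKAN treats this with a proper variational-inference objective over a potentially infinite basis; the sigmoid gates and fixed truncation below are only stand-ins.

import torch

torch.manual_seed(1)

K = 32                                             # truncation of the basis
centers = torch.linspace(-3, 3, K)
gate_logits = torch.zeros(K, requires_grad=True)   # gate k ~ "basis k is active"
weights = torch.zeros(K, requires_grad=True)

def univariate(x):
    phi = torch.exp(-(x[:, None] - centers) ** 2)  # Gaussian (RBF) bases
    return phi @ (torch.sigmoid(gate_logits) * weights)

opt = torch.optim.Adam([gate_logits, weights], lr=5e-2)
x = torch.linspace(-3, 3, 200)
y = torch.sin(2 * x)

for _ in range(500):
    # Fit the target while penalizing the expected number of active bases.
    loss = ((univariate(x) - y) ** 2).mean() + 1e-3 * torch.sigmoid(gate_logits).sum()
    opt.zero_grad()
    loss.backward()
    opt.step()

print("expected number of bases in use:", float(torch.sigmoid(gate_logits).sum()))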
16:50 – 17:50
Panel discussion
Join us for the panel discussion!
Please scan the QR code below and submit your own question:
[QR code]
17:50 – 18:00
Conclusions and dinner location
Concluding remarks and information regarding dinner arrangements.
Speakers

Prof. Mark Coates
McGill University
Mark Coates is a Professor in the Department of Electrical and Computer Engineering at McGill University (Montreal, Canada). He received the B.E. degree in computer systems engineering from the University of Adelaide, Australia, in 1995, and a Ph.D. degree in information engineering from the University of Cambridge, U.K., in 1999. He was a research associate and lecturer at Rice University, Texas, from 1999-2001. In 2012-2013, he worked as a Senior Scientist at Winton Capital Management, Oxford, UK. Coates’ research interests include machine learning, statistical signal processing, and Bayesian and Monte Carlo inference.

Dr. Johannes Lutzeyer
École Polytechnique
Johannes Lutzeyer has been an Assistant Professor in the Computer Science Department of École Polytechnique, IP Paris, France, since 2022. Previously, he completed a 2.5-year postdoc under the supervision of Prof. Michalis Vazirgiannis, also at École Polytechnique. He obtained his degrees (BSc, MSc and PhD) in the Statistics Section of the Mathematics Department at Imperial College London under the supervision of Prof. Andrew Walden. Johannes works in the area of Graph Representation Learning, specifically on Graph Neural Networks, and has made contributions to the academic literature in this domain, with a small number of publications at the ICLR, ICML and NeurIPS conferences, among others.
!!! Dinner Location and Time !!!
Restaurant "Dar Bruttone Rione Monti", Time: 20:00-20:15
Workshop Organizers

Davide Bacciu
University of Pisa

Daniele Castellana
University of Florence

Federico Errica
NEC Italy

Mathias Niepert
University of Stuttgart

Marco Podda
University of Pisa

Olga Zaghen
University of Amsterdam
Program Committee
- Carlo Abate, UiT
- Steve Azzolin, University of Trento
- Maria Sofia Bucarelli, Sapienza University
- Andrea Cini, USI
- Michele Fontanesi, University of Pisa
- Claudio Gallicchio, University of Pisa
- Julia Gastinger, Uni Mannheim
- Filippo Grazioli, GSK
- Jinwoo Kim, KAIST
- Lorenzo Loconte, Uni Edinburgh
- Manuel Madeira, EPFL
- Ivan Marisca, USI
- Riccardo Massidda, University of Pisa
- Luca Miglior, University of Pisa
- Matteo Ninniri, University of Pisa
- Yiming Qin, EPFL
- Matteo Tolloso, University of Pisa
- Daniele Zambon, USI