Viktor Jirsa
Director, Institut de Neurosciences des Systèmes (INS)

Viktor Jirsa, PhD, is Director of the Inserm Institut de Neurosciences des Systèmes (INS U1106) at Aix-Marseille Université and Director of Research (DRCE) at CNRS in Marseille, France. Trained in theoretical physics and applied mathematics (PhD 1996), he pioneered large-scale brain-network models that combine biologically grounded neural dynamics with individual connectomes, establishing a mathematical framework now central to network science in medicine. He is the scientific architect of the open-source simulation platform The Virtual Brain and served as a lead investigator in the EU Flagship Human Brain Project and its successor infrastructure EBRAINS, driving clinical translation of personalised brain models for epilepsy surgery and other disorders. Jirsa’s contributions have been recognised with numerous honours, including the Human Brain Project Innovation Prize (2021).
Virtual Brain Twins for Health and Disease
In the past twenty years, we have made significant progress in creating digital models of an individual’s brain, so-called virtual brain twins. By combining brain imaging data with mathematical models, we can predict outcomes more accurately than with either method alone. Our approach has helped us understand normal brain states and their operation, as well as healthy aging and conditions such as dementia and schizophrenia. We illustrate the virtual brain workflow using the example of drug-resistant epilepsy, the so-called Virtual Epileptic Patient (VEP): we reconstruct the connectome of an epileptic patient using diffusion tensor imaging (DTI) and co-register other available imaging data from the same individual (anatomical MRI, computed tomography (CT)). Each brain region is represented by a neural population model, derived using mean-field techniques from statistical physics that express ensemble activity via collective variables. The subset of brain regions generating seizures in patients with refractory partial epilepsy is referred to as the epileptogenic zone (EZ). During a seizure, paroxysmal activity is not restricted to the EZ, but may recruit otherwise healthy brain regions and propagate through large brain networks. Identifying the EZ is crucial for the success of neurosurgery and is one of the historically difficult questions in clinical neuroscience. The application of Bayesian inference and model inversion, in particular Hamiltonian Monte Carlo, allows the estimation of the patient’s EZ, including estimates of confidence and diagnostics of inference performance. In summary, the Virtual Brain Twin augments the value of empirical data by completing missing data, allowing clinical hypothesis testing, and optimizing treatment strategies for the individual patient. Virtual Brain Twins are part of the European infrastructure EBRAINS, which supports researchers worldwide in digital neuroscience.
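To make the Bayesian estimation step concrete, the following is a minimal sketch in Python: each region carries an unknown excitability, noisy activity is observed, and we compute the posterior probability that a region belongs to the EZ. All region names, measurements, and the grid approximation (used here in place of Hamiltonian Monte Carlo, purely for brevity) are illustrative assumptions, not part of the actual VEP pipeline.

```python
import math

def gaussian_loglik(obs, mu, sigma=0.5):
    # Log-likelihood of an observed activity level given excitability mu.
    return -0.5 * ((obs - mu) / sigma) ** 2 - math.log(sigma * math.sqrt(2 * math.pi))

def posterior_ez_probability(observed_activity, ez_threshold=1.0):
    """Return P(excitability > ez_threshold | data) for one region,
    using a flat prior on [0, 3] and a simple grid approximation."""
    grid = [i * 0.05 for i in range(61)]  # excitability values 0.0 .. 3.0
    weights = [math.exp(gaussian_loglik(observed_activity, x)) for x in grid]
    total = sum(weights)
    ez_mass = sum(w for x, w in zip(grid, weights) if x > ez_threshold)
    return ez_mass / total

# Hypothetical measurements for three brain regions (arbitrary units).
activity = {"hippocampus": 1.8, "amygdala": 0.9, "insula": 0.2}
posterior = {r: posterior_ez_probability(a) for r, a in activity.items()}
for region, p in sorted(posterior.items(), key=lambda kv: -kv[1]):
    print(f"{region:12s} P(in EZ) = {p:.2f}")
```

The posterior mass above the threshold is exactly the kind of confidence estimate the abstract refers to: rather than a hard EZ label, each region receives a probability, so the clinical hypothesis can be tested with an explicit measure of uncertainty.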
Bernhard Schölkopf
Director, Max Planck Institute for Intelligent Systems & ELLIS Institute Tübingen; Professor at ETH

Bernhard Schölkopf studies machine learning and causal inference, with applications to fields ranging from astronomy to biomedicine, computational photography, music, and robotics. Originally trained in physics and mathematics, he earned a Ph.D. in computer science in 1997 and became a Max Planck director in 2001. His awards include the ACM AAAI Allen Newell Award, the BBVA Foundation Frontiers of Knowledge Award, the Leibniz Award, and the Royal Society Milner Award. He is a Professor at ETH Zurich, a Fellow of the ACM and of the CIFAR Program “Learning in Machines and Brains”, and a member of the German Academy of Sciences. He helped start the MLSS series of Machine Learning Summer Schools, the ELLIS society, and the Journal of Machine Learning Research, an early development in open access and today the field’s flagship journal. In 2023, he founded the ELLIS Institute Tübingen and acts as its scientific director.
Causal Representations, World Models and Digital Twins
Research on understanding and building artificially intelligent systems has moved from symbolic approaches to statistical learning, and is now beginning to study interventional models relying on concepts of causality. Some of the hard open problems of machine learning and AI are intrinsically related to causality, and progress may require advances in our understanding of how to model and infer causality from data, as well as conceptual progress on what constitutes a causal representation and a causal world model. I will present basic concepts and thoughts, as well some applications to astronomy.
Gintare Karolina Dziugaite
Google DeepMind

Gintarė is a senior research scientist at Google DeepMind, based in Toronto, an adjunct professor in the McGill University School of Computer Science, and an associate industry member of Mila, the Quebec AI Institute. Prior to joining Google, Gintarė led the Trustworthy AI program at Element AI / ServiceNow, and obtained her Ph.D. in machine learning from the University of Cambridge under the supervision of Zoubin Ghahramani. Gintarė was recognized as a Rising Star in Machine Learning by the University of Maryland program in 2019. Her research combines theoretical and empirical approaches to understanding deep learning, with a focus on generalization, memorization, unlearning, and network compression.
Remembering and Forgetting: The Antithetical Roles of Linear Mode Connectivity
Continual learning, model merging, and information unlearning address critical challenges in managing foundation models. In this talk I will present evidence that linear mode connectivity (LMC)—the lack of loss barriers on the linear path connecting two models—is a decisive factor in the success and failure of methods in these three areas. We demonstrate how LMC enables efficient, training-free model merging and continual learning through weight interpolation. Conversely, in unlearning, we show that model editing fails when the edited model is linearly connected to the original model. Attempts to “unlearn” information are often superficial, as the original knowledge remains easily recoverable along the same low-loss path, compromising the model’s robustness and leaving it susceptible to relearning attacks. This work highlights a core trade-off in modern AI: the very properties that make models continually learn also make them difficult to truly and safely edit.
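The loss barrier that defines linear mode connectivity can be sketched in a few lines. The example below uses a tiny linear model so it runs in pure Python; for deep networks the same procedure is applied to the full weight vectors. The dataset, the two "trained" solutions, and all function names are illustrative assumptions, not material from the talk.

```python
# Measure the loss barrier along the linear path between two models,
# the quantity behind linear mode connectivity (LMC).

def mse(w, data):
    # Mean squared error of a 1-D linear model y = w[0]*x + w[1].
    return sum((w[0] * x + w[1] - y) ** 2 for x, y in data) / len(data)

def interpolate(w_a, w_b, alpha):
    # Pointwise weight interpolation: (1 - alpha) * w_a + alpha * w_b.
    return [a * (1 - alpha) + b * alpha for a, b in zip(w_a, w_b)]

def loss_barrier(w_a, w_b, data, steps=21):
    """Max loss on the linear path, minus the larger endpoint loss.
    A barrier near zero means the two models are linearly mode
    connected, which is what makes training-free merging by weight
    interpolation work."""
    path = [mse(interpolate(w_a, w_b, i / (steps - 1)), data) for i in range(steps)]
    return max(path) - max(path[0], path[-1])

data = [(x, 2.0 * x + 1.0) for x in range(-5, 6)]  # ground truth y = 2x + 1
w_a = [1.9, 1.2]   # two independently "trained" solutions (hypothetical)
w_b = [2.1, 0.8]
barrier = loss_barrier(w_a, w_b, data)
merged = interpolate(w_a, w_b, 0.5)
print(f"barrier = {barrier:.4f}, merged-model loss = {mse(merged, data):.4f}")
```

For this convex toy problem the barrier is trivially zero; the interesting empirical fact discussed in the talk is that the same near-zero barrier often holds between nonconvex deep networks, and that an "unlearned" model sitting on such a low-loss path remains connected to the original knowledge.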
Emmanuel Bengio
McGill University; Recursion/Valence Labs

Emmanuel Bengio is a senior ML scientist at Valence Labs @ Recursion, working on GFlowNets and drug discovery; he completed his PhD at Mila in Montreal, Canada. He is mainly interested in machine learning, especially deep learning and reinforcement learning, and in combining the two. Lately, he has been working at the intersection of ML and drug design using the GFlowNet framework.
AI and Generative Models
In this talk I will share some recent progress on training energy-based generative models using the GFlowNet framework and applying them to scientific problems, as well as speculate on the bigger picture; using GFNs to be good Bayesians, and create models that reason about their environment, concluding with an open discussion of next steps and open problems.
Christiane Woopen
Heinrich Hertz Professorship at the University of Bonn

Christiane Woopen is Heinrich Hertz Chair for Life Ethics at the University of Bonn and founding director of the Center for Life Ethics since October 2021. In addition to leading national and international research projects on ethics in digital technologies, the sciences, and health, she is involved in policy advice, including as Chair of the German Ethics Council (2012–2016), as President of the Global Summit of National Ethics Councils (2014–2016), as a member of the UNESCO International Bioethics Committee until 2017, as Co-Speaker of the German Data Ethics Commission (2018–2019), and as Chair of the European Group on Ethics in Science and New Technologies (EGE, 2017–2021). Woopen is a member of several academies of sciences (NRW, BBAW, Academia Europaea, National Academy of Medicine in Mexico) and was awarded the Order of Merit of the State of North Rhine-Westphalia as well as the Federal Cross of Merit 1st Class.
Ethics in AI, Neuroscience, Digital Twins, and Consciousness Research: Neuroscience and AI for a Flourishing Life
Neuroscience and artificial intelligence are rapidly advancing fields that combine remarkable scientific progress with ambitious visions for the future. At the same time, they face deep conceptual disagreements, ethical concerns, and dystopian anxieties. Their findings—and the methodological innovations they employ, such as digital twins—raise complex ethical questions concerning autonomy, justice, privacy, and sustainability. Moreover, they are grounded in contested assumptions about the brain and its significance for personal identity and agency. These debates inevitably lead to broader philosophical questions about the nature of human beings and their potential distinctiveness within the continuum of life and consciousness. This lecture examines the ethical implications of these interrelated developments and levels of thought, and it argues for the relevance of human flourishing as a guiding ethical principle in addressing them.
Alessio Micheli
University of Pisa

Alessio Micheli is Full Professor at the Department of Computer Science of the University of Pisa, where he is the head and scientific coordinator of the Computational Intelligence & Machine Learning Group (CIML), part of the CAIRNE.eu Research Network. His research interests include machine learning, neural networks, deep learning, learning in structured domains (sequence, tree, and graph data), recurrent and recursive neural networks, reservoir computing, and probabilistic and kernel-based learning for non-vectorial data, with a particular focus on, and pioneering work in, efficient neural networks for learning from graphs.
Prof. Micheli is the national coordinator of the “Italian Working Group on Machine Learning and Data Mining” of the Italian Association for Artificial Intelligence and has been co-founder and chair of the IEEE CIS Task Force on Reservoir Computing. He is an elected member of the Executive Committee of the European Neural Network Society (ENNS). He serves as an Associate Editor for Neural Networks and IEEE Transactions on Neural Networks and Learning Systems.
A Journey through Efficient Deep Learning on Graphs
Although research on learning in structured domains began in the late 1990s, deep learning for graphs has recently attracted tremendous research interest and increasing attention in applications. Indeed, graphs are powerful and flexible tools for representing relationships among data at different levels of abstraction. Unsurprisingly, the range of applications spans many fields, including biology, chemistry, network science, computer vision, and natural language processing. On the other hand, extending the data domain to graphs opens new challenges for the field of deep learning, particularly regarding efficiency, which bears on the important aspect of environmental sustainability. The talk will very briefly introduce the area of deep learning for graphs, with a focus on its origins. We will then move on to discuss advanced topics and current open issues by providing an overview of recent progress in my research group. We will pay particular attention to efficiency, the interplay between model depth and the learning of complex data representations, and explainability on graphs.
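The basic building block shared by most deep learning models for graphs is neighborhood aggregation (message passing), and the link between stacked layers and the receptive field is exactly the depth/representation interplay mentioned above. The following is a minimal sketch, with an activation-free linear update and an illustrative path graph; the function name, mixing weights, and data are assumptions for demonstration only.

```python
# One round of neighborhood aggregation (message passing) on a graph
# given as an adjacency dict {node: [neighbors]} with list-valued features.

def message_passing_step(adj, features, self_weight=0.5, neigh_weight=0.5):
    """Update each node's feature vector by mixing it with the mean of
    its neighbors' features (a linear, activation-free layer for brevity)."""
    new_features = {}
    for node, feats in features.items():
        neighbors = adj.get(node, [])
        dim = len(feats)
        if neighbors:
            mean = [sum(features[n][d] for n in neighbors) / len(neighbors)
                    for d in range(dim)]
        else:
            mean = [0.0] * dim
        new_features[node] = [self_weight * f + neigh_weight * m
                              for f, m in zip(feats, mean)]
    return new_features

# A 4-node path graph 0 - 1 - 2 - 3 with scalar features; only node 0
# starts with a nonzero signal.
adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
feats = {0: [1.0], 1: [0.0], 2: [0.0], 3: [0.0]}
for step in range(2):          # stacking layers widens the receptive field
    feats = message_passing_step(adj, feats)
print(feats)
```

After two rounds, node 0's signal has reached node 2 (two hops away) but not node 3: each additional layer extends the receptive field by one hop, which is why model depth is so tightly coupled to the complexity of the representations a graph network can learn.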