Keynote speakers

Jürgen Schmidhuber
Tanja Schultz
Walter Senn
Michael W. Reimann

Jürgen Schmidhuber

IDSIA USI-SUPSI, Switzerland, and KAUST AI Initiative, Saudi Arabia

The New York Times headlined: “When A.I. Matures, It May Call Jürgen Schmidhuber ‘Dad’.” Since age 15, his main goal has been to build a self-improving Artificial Intelligence smarter than himself, then retire. His lab’s deep learning artificial neural networks have revolutionised machine learning and A.I. By 2017, they were on over 3 billion smartphones and used billions of times per day, for Facebook’s automatic translation, Google’s speech recognition, Google Translate, Apple’s Siri & QuickType, Amazon’s Alexa, etc. Generative AI is also based on his work: he introduced artificial curiosity & generative adversarial networks (1990, now widely used), self-supervised pre-training for deep learning (1991; the “P” in “ChatGPT” stands for “pre-trained”), unnormalised linear Transformers (1991; the “T” in “ChatGPT” stands for “Transformer”), and meta-learning machines that learn to learn (since 1987, now widely used). His lab also produced LSTM, the most cited AI of the 20th century, and the LSTM-inspired Highway Net, the first very deep feedforward net with hundreds of layers (ResNet, the most cited AI of the 21st century, is an open-gated Highway Net). From 2006 to 2010, he published the “formal theory of fun and creativity.” Elon Musk tweeted: “Schmidhuber invented everything.” He is the recipient of numerous awards, Director of the AI Initiative at KAUST in Saudi Arabia, Scientific Director of the Swiss AI Lab IDSIA, and Co-Founder & ex-President of the company NNAISENSE. He is a frequent keynote speaker at major events and advises various governments on A.I. strategies.

Homepage of Jürgen Schmidhuber

Past, present, future, and far future of machine learning

I’ll discuss modern Artificial Intelligence and how the principles behind the G, P and T in ChatGPT emerged in 1991. I’ll also discuss what’s next in AI, and its expected impact on the future of the universe.


Tanja Schultz

University of Bremen, Germany

Tanja Schultz is a professor of computer science at the University of Bremen, director of the Cognitive Systems Lab, and speaker of the DFG AI Research Unit “Lifespan AI” and the high-profile area “Minds, Media, Machines”. She received her diploma and doctoral degrees in Informatics from the University of Karlsruhe, Germany, in 1995 and 2000, respectively. In 2000 she joined Carnegie Mellon University in Pittsburgh, Pennsylvania, where she held a position as Research Professor at the Language Technologies Institute until 2020. From 2007 to 2015 she was a Full Professor at the Department of Informatics of the Karlsruhe Institute of Technology (KIT) in Germany, before joining the University of Bremen in April 2015. Since 2007, Tanja Schultz has directed the Cognitive Systems Lab, where her research focuses on biosignal-adaptive cognitive systems, with particular emphasis on the real-time acquisition, processing, and decoding of multimodal biosignals to interpret human behavior and predict human needs in communication and interaction with cognitive systems and robots. Tanja Schultz has received several awards for her work, including the Allen Newell Medal for Research Excellence from Carnegie Mellon, the Alcatel-Lucent Research Award for Technical Communication, the Plux Wireless Award, the Otto-Haxel Award, and two Google Research Faculty Awards. She is a fellow of ISCA, EACS, AAAI, and IEEE and has co-authored about 500 articles published in books, journals, and proceedings.

Homepage of Tanja Schultz

Biosignal-Adaptive Cognitive Systems 

I will describe technical cognitive systems that automatically adapt to users’ needs by interpreting their biosignals: human behavior includes physical, mental, and social actions that emit a range of biosignals which can be captured by a variety of sensors. The processing and interpretation of such biosignals provides an inside perspective on human physical and mental activities, complementing the traditional approach of merely observing human behavior. Given the great strides made in recent years in integrating sensor technologies into ubiquitous devices and in machine learning methods for processing and learning from data, I argue that the time has come to harness the full spectrum of biosignals to understand user needs. I will present illustrative cases, ranging from silent and imagined speech interfaces that convert myographic and neural signals directly into audible speech, to the interpretation of human attention and decision making from multimodal biosignals in human-robot interaction.
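To illustrate the shared structure of such decoders, here is a minimal Python sketch of a window-feature-classify pipeline over a multichannel biosignal. Everything in it is an illustrative assumption: the data are synthetic, the labels are random, and a plain SVM stands in for the far richer sequence-to-speech models such interfaces actually use.

    # Minimal biosignal-decoding sketch: synthetic data, illustrative only.
    import numpy as np
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVC

    def window_rms(signal, fs=1000, win_s=0.2):
        """Slice a (samples, channels) signal into windows and return
        per-channel root-mean-square features, shape (windows, channels)."""
        win = int(fs * win_s)
        n = signal.shape[0] // win
        frames = signal[: n * win].reshape(n, win, -1)
        return np.sqrt((frames ** 2).mean(axis=1))

    rng = np.random.default_rng(0)
    emg = rng.standard_normal((60_000, 8))   # fake 60 s of 8-channel EMG at 1 kHz
    X = window_rms(emg)                      # 300 feature vectors
    y = rng.integers(0, 5, size=len(X))      # 5 fake "imagined word" labels

    clf = make_pipeline(StandardScaler(), SVC())
    clf.fit(X[:200], y[:200])
    print("held-out accuracy:", clf.score(X[200:], y[200:]))

On random data the accuracy naturally sits at chance; the point is only the acquisition, windowing, feature extraction, and decoding structure the abstract describes.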


Walter Senn

Institute of Physiology, University of Bern, Switzerland

Walter Senn is Professor of Computational Neuroscience and co-director of the Department of Physiology, University of Bern. He studied Mathematics and Physics, with a PhD in differential geometry and the calculus of variations (1993). During his PhD he studied dynamical systems at Lomonosov University in Moscow, and he obtained a degree as a high school teacher from the University of Zurich. Before joining the Department of Physiology (1999), he was at the Department of Computer Science at the University of Bern, spending postdoc time at the Hebrew University, Jerusalem (I. Segev), and the Center for Neural Science, New York University (J. Rinzel). Using mathematical models of synapses, neurons and networks, he investigates how cognitive phenomena such as perception, learning and memory can emerge from neuronal substrates. His recent interests are devoted to translating principles of cortical computation into algorithms for neuromorphic systems (with M. Petrovici), inspired by progress in AI (partly with Y. Bengio).

Homepage of Walter Senn

Dendritic computations and deep learning in the brain

Artificial Intelligence, through its workhorse, neural networks, is inspired by the biological example of the brain. The unprecedented success of AI in modeling cognitive processes, in turn, inspires functional models of the brain. Yet, when looking into the brain, additional biological structures become apparent, such as dendritic morphologies, interneuron circuits, recurrent connectivity, error representations, top-down signaling and various gating hierarchies. I will give a review of these biological elements and show how they may integrate into an energy-based theory of cortical computation. Dendrites and cortical microcircuits turn out to implement a real-time version of error-backpropagation based on prospective errors. The theory is inspired by the least-action principle in physics, from which all dynamical equations of motion are derived. We likewise derive the neuronal dynamics, including the synaptic dynamics with gradient-descent learning, from our Neuronal Least-Action (NLA) principle. The principle states that cortical activity and real-time learning follow a path that minimizes prospective errors across all neurons of the network. Prospective errors in output neurons relate to behavioral errors, while prospective errors in deep network neurons relate to errors in the neuron-specific dendritic prediction of somatic firing. I will explain how these ideas relate to cortical attention mechanisms and context-dependent gating that link to, and potentially inspire, recent developments in AI.
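As a schematic rendering of the variational idea, and in notation that is my own assumption rather than the talk’s exact formulation: with u the neuronal activities, tau the membrane time constant, W the synaptic weights, and \bar{v}_i the dendritic prediction of neuron i’s somatic firing, the principle can be sketched as

    \begin{align}
      \tilde{u}(t) &= u(t) + \tau\,\dot{u}(t)
        && \text{prospective activity} \\
      E(\tilde{u},W) &= \tfrac{1}{2}\sum_i \big(\tilde{u}_i - \bar{v}_i(W,\tilde{u})\big)^2
        && \text{sum of squared prospective errors} \\
      A[u] &= \int_0^T E\big(\tilde{u}(t),W\big)\,dt, \qquad \delta A = 0
        && \text{stationarity yields the neuronal dynamics} \\
      \dot{W} &\propto -\,\frac{\partial E}{\partial W}
        && \text{gradient-descent learning on the same energy}
    \end{align}

Read this only as a sketch of the structure: a mismatch energy over prospective activities whose stationary trajectories give the real-time dynamics and whose gradients give the plasticity rule.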


Michael W. Reimann

Blue Brain, Swiss Federal Institute of Technology Lausanne, Switzerland

Michael Reimann is the group leader of the Connectomics section at Blue Brain. He received his diploma degree from the University of Tübingen in Germany and his PhD from EPFL in Switzerland. He now leads a team of six postdocs and two PhD candidates who refine a biophysically detailed model of rat cortical circuitry and use it to study the relation between the structure of connectivity and its function. They pursue a multi-disciplinary approach, ranging from methods of pure mathematics, through modelling and simulation of detailed neuronal networks, to close collaborations with experimental neuroscientists. They aim to understand how the complex non-random structure present in biological neural networks affects their function, and what we can learn from it for use in more simplified artificial networks.

Homepage of Michael Reimann

A Model of Neocortical Micro- and Mesocircuitry and its Applications

We present a large-scale, biophysically detailed model of rat non-barrel somatosensory regions. Building upon an earlier version of such a model, we increased its spatial scale and enhanced its biological realism. The most salient improvements are: first, construction of realistic synaptic connectivity as the union of two algorithms, one for local connections and another for long-range connections; second, introduction of methods to build a model inside a standardized voxel atlas, which, combined with the connectivity algorithms, allows models of brain regions to be developed separately and then easily integrated; third, improvements in the methods to compensate for missing extrinsic inputs and to validate an in-vivo-like activity regime. We demonstrate several applications of the model that make use of its specific advantages over more simplified models: first, studying the rules of synaptic plasticity at the population level; second, studying the effect of heterogeneous and non-random connectivity on circuit function and reliability; third, studying the accuracy and biases inherent in spike sorting algorithms.
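To make the “union of two algorithms” idea concrete, here is a toy Python sketch that composes a connectome from a distance-dependent local rule and a separate long-range rule. All sizes, distances, and probabilities are invented for illustration; they are not the Blue Brain model’s actual parameters or algorithms.

    # Toy sketch: connectivity as the union of a local and a long-range rule.
    import numpy as np
    import scipy.sparse as sp

    rng = np.random.default_rng(1)
    n = 500
    pos = rng.uniform(0.0, 1000.0, size=(n, 3))  # somata positions in micrometers

    # Local rule: connection probability decays with intersomatic distance.
    d = np.linalg.norm(pos[:, None, :] - pos[None, :, :], axis=-1)
    p_local = 0.15 * np.exp(-d / 100.0)
    np.fill_diagonal(p_local, 0.0)               # no self-connections
    local = sp.csr_matrix(rng.random((n, n)) < p_local, dtype=np.int8)

    # Long-range rule: sparse, distance-independent projections from one
    # (hypothetical) subpopulation onto another.
    n_src = 250
    long_range = sp.lil_matrix((n, n), dtype=np.int8)
    long_range[:n_src, n_src:] = rng.random((n_src, n - n_src)) < 0.02

    # The full connectome is the union (logical OR) of the two rules, so each
    # algorithm can be developed and validated independently.
    connectome = (local + long_range.tocsr()) > 0
    print("local edges:", local.nnz, "long-range edges:", long_range.nnz,
          "union:", connectome.nnz)

The design point mirrored here is modularity: because the rules only meet at the final union, a region model built with one local algorithm can later be stitched to long-range projections computed separately.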
