Keynote speakers

ICANN 2016 will feature the following confirmed plenary speakers.

Stephen Coombes (University of Nottingham, UK)

Modelling Brain Waves

In this talk I will explore the way in which synaptically coupled neural networks may generate and maintain travelling waves of activity. Although these models are inherently non-local, a combination of mathematical approaches (predominantly drawn from non-smooth dynamical systems) means that we are now in a position to address fundamental questions about the effects of intrinsic ionic currents, synaptic processing, and anatomical connectivity on travelling waves in neural tissue. I will present a number of examples in both one and two dimensions, focusing on the contributions of axonal delays, adaptation, refractoriness, and slow hyperpolarisation-activated currents to brain waves seen in the cortex, thalamus, and hippocampus. I will also endeavour to explain the functional relevance of such waves and how, in some instances, they may subserve natural computation.
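
For orientation, the non-local models in question are typically neural field equations. A representative form (the classical Amari-style equation with a space-dependent axonal delay; the exact model treated in the talk is an assumption on my part) is

    \frac{\partial u}{\partial t}(x,t) = -u(x,t)
        + \int_{\Omega} w(x-y)\, f\!\big( u(y,\, t - |x-y|/v) \big)\, dy,

where u(x,t) is the synaptic activity at position x, w is the anatomical connectivity kernel, f is a firing-rate nonlinearity, and v is the axonal conduction speed that introduces the delay.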


Wlodek Duch (Nicolaus Copernicus University, Torun, Poland)

Neurodynamics, Neuroimaging and Brains

Despite the astronomical complexity of the brain, the engineering approach (understanding the brain by creating artificial brains) is feasible. To understand how brains work we need to describe animal and human phenotypes at all levels: genetic, protein, cellular, neural circuit, large-scale network, and behavioral. This is the basis of precision medicine and of the NIMH Research Domain Criteria, and it is also a necessary step in understanding animal and human behavior.
All mental states result from the neural dynamics of the brain. Understanding mental processes in a purely conceptual way, without understanding their underlying neurodynamics, will always be limited. Brain mechanisms behind perception, cognitive activity, and the representation of concepts have recently been discovered using functional neuroimaging techniques. Computational cognitive neurodynamics is leading the way in showing how brain activity is linked to behavior.
I will present examples of computational models that provide insights into autism spectrum disorders, ADHD, distortions of memory states and the formation of memes, the development of conspiracy theories, dyscalculia, and learning styles. Hypotheses derived from these models are tested in experiments carried out in our recently created Neurocognitive Laboratory on infants, preschool children, students, and elderly people.


Joaquin Fuster (University of California at Los Angeles, USA)

The prefrontal cortex is a predictive and preadaptive organ

Purposeful, goal-directed behavior and language are guided by the neural mechanisms of the perception-action cycle (PA cycle), the circular cybernetic exchange of information between the brain and the environment. The PA cycle flows through the cerebral cortex, into the environment, and back to the cortex. The prefrontal cortex controls the temporal organization of the PA cycle through its prospective functions of attention, working memory, and decision-making. These functions regulate activity in other cortical regions toward the attainment of adaptive and rewarding goals.


Etienne Koechlin (École Normale Supérieure, Paris, France)

Adaptive behavior and human reasoning

I will present our recent work combining computational modeling, experimental psychology, and fMRI to describe how the prefrontal cortex subserves reasoning in the service of decision-making and adaptive behavior. I will show how the ventromedial, dorsomedial, lateral, and polar prefrontal regions, along with the striatum, form a unified system combining inferential and creative abilities for efficient behavior in uncertain, variable, and open-ended environments.


Věra Kůrková (Academy of Sciences of the Czech Republic)

Limitations of Shallow Neural Networks

Recent successes of deep networks pose a theoretical question: when are deep nets provably better than shallow ones? We show that for most common types of computational units, almost any uniformly randomly chosen function on a sufficiently large domain cannot be computed by a reasonably sparse shallow network. Our theoretical arguments, based on probabilistic and geometric properties of high-dimensional spaces, are complemented by concrete constructions of classes of such functions. We describe an example of a class of functions that cannot be computed by shallow networks whose number of units grows polynomially with the input dimension, but can be computed by two-hidden-layer networks whose number of units grows only linearly with the dimension. We also discuss connections with the No Free Lunch Theorem, with the central paradox of coding theory, and with pseudo-noise sequences.
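
To convey the flavor of the counting side of such arguments (a deliberately simplified toy version of my own, assuming a finite dictionary G of candidate units and weights restricted to {-1, 0, +1}; the results in the talk are far more general): on the Boolean cube {0,1}^d there are 2^{2^d} binary-valued functions, while the number of functions realizable by an n-unit shallow network under these assumptions is at most

    \binom{|G|}{n}\, 3^{n} \;\le\; (3\,|G|)^{n},

so the fraction of functions such networks can compute is at most (3|G|)^n / 2^{2^d}, which is negligible unless n grows on the order of 2^d / \log_2(3|G|), i.e. exponentially in the input dimension d.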


Erkki Oja (Aalto University, Helsinki, Finland)

Unsupervised learning for matrix decompositions

Unsupervised learning is a classical approach in pattern recognition and data analysis. Its importance is growing today, due to increasing data volumes and the difficulty of obtaining statistically sufficient amounts of labelled training data. Typical analysis techniques using unsupervised learning are principal component analysis, independent component analysis, and cluster analysis. They can all be presented as decompositions of the data matrix containing the unlabelled samples. Starting from the classical results, and especially the state of the art at the time of the first ICANN conference in 1991, the author reviews some advances in the field up to the present day.
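
As a minimal illustration of this unifying matrix-decomposition view (my own numpy sketch, not code from the talk): principal component analysis factorizes the centred data matrix into component scores times principal directions, X ≈ W H.

    import numpy as np

    rng = np.random.default_rng(0)

    # Unlabelled data: 500 samples in 20 dimensions, lying near a 3-D subspace.
    X = rng.standard_normal((500, 3)) @ rng.standard_normal((3, 20))
    X += 0.01 * rng.standard_normal(X.shape)

    Xc = X - X.mean(axis=0)        # centre the data matrix
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)

    k = 3                          # number of principal components kept
    W = U[:, :k] * s[:k]           # component scores, shape (500, k)
    H = Vt[:k]                     # principal directions, shape (k, 20)

    # PCA as a low-rank decomposition of the data matrix: Xc ~ W @ H.
    err = np.linalg.norm(Xc - W @ H) / np.linalg.norm(Xc)
    print(f"relative reconstruction error: {err:.4f}")

The same X ≈ W H template accommodates the other techniques mentioned above: in ICA, H holds statistically independent sources, and in cluster analysis, W holds one-hot cluster assignments and H the cluster centroids.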


Günther Palm (University of Ulm, Germany)

What are the units of neural representations?

This classical question in computational neuroscience may also be relevant for the design of artificial neural networks for technical applications. In the discussion of this question I will touch upon various related topics such as: Does a representation need units? Is it in terms of single neurons, neural population activity, or spatio-temporal patterns? What are the relevant populations? What about sparse representations?


Murray Shanahan (Imperial College London, UK)

Metastability in neural dynamics

Sets of oscillators in a modular network can exhibit a rich variety of metastable states in which synchronisation and desynchronisation coexist. Systems of oscillators tuned to behave this way have been shown to reproduce the statistics of brain activity under various conditions, including resting state, cognitive control, and sleep. In this talk I will describe this strange dynamical regime and how it can be modelled, and discuss some of its applications in neuroscience.
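
A toy sketch of the kind of model in question (a community-structured Kuramoto model with a phase-lag term, loosely in the spirit of published oscillator studies of metastability; all parameter values here are illustrative assumptions, written in numpy):

    import numpy as np

    rng = np.random.default_rng(1)

    # Two modules of 16 phase oscillators: strong coupling within modules, weak between.
    n, m = 32, 16
    K = np.full((n, n), 0.2)
    K[:m, :m] = K[m:, m:] = 2.0
    np.fill_diagonal(K, 0.0)

    alpha = 1.4                      # phase lag; values near pi/2 favour metastability
    omega = np.ones(n)               # identical natural frequencies
    theta = rng.uniform(0.0, 2.0 * np.pi, n)

    dt = 0.01
    for step in range(30000):
        # Kuramoto-Sakaguchi dynamics:
        # dtheta_i/dt = omega_i + (1/n) * sum_j K_ij * sin(theta_j - theta_i - alpha)
        theta += dt * (omega + (K * np.sin(theta[None, :] - theta[:, None] - alpha)).mean(axis=1))
        if step % 3000 == 0:
            # Module-level order parameters in [0, 1]; their fluctuation over time
            # (neither locked at 1 nor stuck at 0) is the signature of metastability.
            r1 = abs(np.exp(1j * theta[:m]).mean())
            r2 = abs(np.exp(1j * theta[m:]).mean())
            print(f"t={step * dt:6.1f}  r1={r1:.2f}  r2={r2:.2f}")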