Special Session: The challenge of errors, stability, robustness, and accuracy in deep neural networks

Recent years have witnessed tremendous success in the application of neural networks, and deep learning in particular, to tasks ranging from pattern recognition in images to autonomous driving and natural language processing. Despite this progress, several fundamental questions still require attention and resolution. One of these is the challenge of errors and instabilities. Owing to the apparent sensitivity of deep learning classifiers to small adversarial perturbations of their input data, the robustness of modern data-driven AI systems has been a widely discussed and broadly debated issue. Among these adversarial perturbations there can even exist universal perturbations which trigger the instability of the network for seemingly any input. The presence of such instabilities in a tool so widely used in applications gives rise to a fundamental question: are these instabilities typical, and to be expected, in modern large-scale AI and deep learning models? Moreover, is it even possible to compute a data-driven AI model which is accurate, generalises well, and is verifiably stable at the same time?

This special session will provide a forum for the discussion of this important challenge. By raising the question of simultaneous robustness, accuracy on the test set, and generalisation, the challenge is related to many other questions in the post-classical modern theory of machine learning and AI, such as benign overfitting, learning from few examples in high-dimensional settings, and notions of data dimension. The session welcomes contributions focussed on the current limitations of AI, including its verifiability. It is also open to submissions discussing potential approaches to alleviating the problem through regularisation or through continuous learning with rigorous performance guarantees. Papers focusing on both theoretical and practical/applied challenges, as well as contributions suggesting new empirically verifiable heuristics, are encouraged and will be considered by the Programme Committee.

ORGANIZERS
Prof Věra Kůrková
Institute of Computer Science, Czech Academy of Sciences

Prof Ivan Tyukin
King’s College London

Prof Alexander Gorban
University of Leicester