Organizers
- Věra Kůrková, Institute of Computer Science of the Czech Academy of Sciences, Prague, Czech Republic
- Marcello Sanguineti, Università di Genova, Italy
- Ivan Tyukin, King’s College London, UK
Abstract
The reliability of AI systems based on neural networks is crucial for their practical applicability. In particular, it is desirable that networks maintain their intended functionality even when exposed to changes or variations in their inputs. Another desirable property is a reduction of model complexity while keeping sufficient computational accuracy. Despite numerous successful applications, many theoretical questions regarding network robustness, stability, and the trade-off between accuracy and model complexity remain unresolved. Because current networks process high-dimensional data and have large numbers of parameters, many questions arise about their performance in high-dimensional settings.
The special session will provide a forum for discussing some of these issues, in particular robustness to perturbations, accuracy on test sets and generalization, overfitting, learning from few examples in high-dimensional settings, notions of data dimensionality and their benefits, and the influence of the choice of network architecture (number of layers and types of computational units) on the accuracy and robustness of network performance. Other questions include error identification and correction with provable performance guarantees, and the understanding and modeling of uncertainties in modern learning algorithms.
List of Topics Covered in the Special Session
We invite contributions that offer theoretical insights, algorithmic advances, and verifiable heuristics on topics including (non-exhaustive list):
- Robustness to random and adversarial perturbations
- The trade-off between accuracy and model complexity
- Benefits of network depth
- Curse and blessing of dimensionality in neural networks
- Learning in a high-dimensional setting
- Errors of neural networks and methods to reliably address them
- Quantization, low-precision computing, information-theoretic measures
