The ICANN 2018 scientific programme includes the following workshops.
Interpretable Methods for Machine and Deep Learning
Description: Machine learning methods, and deep neural networks in particular, have proven extremely successful in a wide variety of tasks but remain largely opaque with respect to what they learn and how they use their acquired knowledge to make predictions. More specifically, without any insight into the network, determining whether the system has properly integrated a specific concept is very difficult, and the subsequent validation of the method for critical activities (e.g. driving, medicine, security) is not possible. Furthermore, without a tool to interrogate the system, we lack the ability to transfer the knowledge encoded in the network parameters to a human user.
The search for explainability in machine learning is not a new topic and has been addressed since the early stages of the field's development. Yet the ever-growing amount of computational power has enabled the use of increasingly complex systems, and successful explanation methods able to cope with this level of complexity are extremely challenging to design. This lack of interpretability is currently one of the major challenges in artificial intelligence. Research institutes, companies, and governments are becoming aware of the central role that AI-based systems will play in our societies and are starting to take action. For instance, as of 2018 a new European Union regulation grants a right to non-discrimination and a right to explanation for every automated decision-making process that significantly affects users, de facto triggering the need for interpretable systems. Another notable example is the call for projects launched by the U.S. Defense Advanced Research Projects Agency (DARPA) with the explicit aim of producing explainable AI-based models. The program is substantial, spanning four years with a budget ranging from 10M to 20M USD per year.
In this context, the aim of this workshop is to bring together leading researchers and scholars working on interpretable machine learning to share their ideas and address the issue of interpretability.
Organizer: Prof. Carlos Peña, University of Applied Sciences Western Switzerland
“Prof. Peña’s group has been involved in interpretable machine learning for more than a decade through the creation of fuzzy logic systems capable of making accurate predictions while providing a reasonable level of interpretability to human users. This work focuses primarily on developing a modeling approach that can automatically evolve and test its own set of rules with the help of an evolutionary algorithm. This research, after several successful application projects in medical diagnostics, led to the creation of SimplicityBio, a commercial company that provides clients with services for discovering biomarkers and interpretable diagnostic signatures. Since June 2016, under a Hasler Foundation grant, the group has been investigating rule-extraction methods for deep neural networks with the aim of studying, implementing, and evaluating methods for better understanding how deep neural networks make their predictions. This topic will be further explored and expanded with a new Swiss National Science Foundation proposal that is currently under review.”
- Corrado Mencar, University of Bari, Italy
- Wojciech Samek, Fraunhofer Institute for Telecommunications, Germany
- Klaus-R. Müller, TU Berlin, Germany
- Alfredo Vellido, Polytechnic University of Catalonia, Spain
- Mario Fritz, Max Planck Institute for Informatics, Germany
- François Fleuret, Idiap Research Institute and EPFL, Switzerland
- Martin Jaggi, EPFL, Switzerland