Organizers
- Roberto Confalonieri, University of Padua, Italy
- Giuseppe Marra, KU Leuven, Belgium
- Gustav Šír, Czech Technical University, Czechia
Abstract
Building AI systems that are both understandable and performant is a central challenge. This special issue is dedicated to technical contributions that advance Explainable AI (XAI) by leveraging the synergy between continuous deep learning models and discrete, symbolic structures (logic, programs, knowledge graphs). We seek work in which the neurosymbolic paradigm yields models with built-in transparency and high-fidelity explanations that go beyond general-purpose XAI methods.
List of Topics Covered in the Special Issue
We explicitly welcome submissions from related fields, including mechanistic interpretability, causal inference, formal methods, and program synthesis.
Key Topics of Interest (including but not limited to):
- Interpretable Representations
- Knowledge Extraction
- Transparent-by-Design Models
- Generating Structured Explanations
- Explanations over Structured Data
