About the Session
Building AI systems that are both understandable and performant is a central challenge. This special session is dedicated to technical contributions that advance Explainable AI (XAI) by leveraging the synergy between continuous deep learning models and discrete, symbolic structures (logic, programs, knowledge graphs).
We seek work in which the neurosymbolic paradigm yields models with built-in transparency and high-fidelity explanations that go beyond general-purpose XAI methods.
Topics of Interest
Key topics include (but are not limited to):
- Interpretable Representations
- Knowledge Extraction
- Transparent-by-Design Models
- Generating Structured Explanations
- Explanations over Structured Data
We explicitly welcome submissions from related fields, including mechanistic interpretability, causal inference, formal methods, and program synthesis.
Important Dates
- ICANN Submission Deadline: 16th March 2026
- Notification of Acceptance: 29th May 2026
- ICANN Conference: 14–17 September 2026
- Journal Extension Deadline: November/December 2026 [TBA]
Submission Instructions
Follow the provided ICANN submission instructions and select the “Special Session on Explainable Neurosymbolic AI (X-NeSy)” track on Microsoft CMT (available soon).
Accepted papers will be published in the ICANN proceedings (Springer LNCS).
Importantly, this session serves as a track for the Special Issue on Explainable Neurosymbolic AI (X-NeSy) in the Neurosymbolic Artificial Intelligence journal.
Authors of accepted papers will be invited to submit an extended version to the journal.
Submit to ICANN 2026
View Journal Call
Organizers
Roberto Confalonieri
University of Padua, Italy
Website
Giuseppe Marra
KU Leuven, Belgium
Website
Gustav Šír
Czech Technical University, Czechia
Website