Workshop
eXplainable AI approaches for debugging and diagnosis
Roberto Capobianco · Biagio La Rosa · Leilani Gilpin · Wen Sun · Alice Xiang · Alexander Feldman
Tue 14 Dec, 5 a.m. PST
Recently, artificial intelligence (AI) has seen an explosion of deep learning (DL) models, which achieve super-human performance on several tasks. These improvements, however, come at a cost: DL models are "black boxes", where one feeds in an input and obtains an output without understanding the reasoning behind that prediction or decision. The eXplainable AI (XAI) field addresses this problem by proposing methods that explain the behavior of these networks.
In this workshop, we narrow the XAI focus to the specific case in which developers or researchers need to debug their models and diagnose system behaviors. This type of user typically has substantial knowledge about the models themselves but needs to validate, debug, and improve them.
This topic is important for several reasons. Domains like healthcare and justice, for example, require that experts be able to validate DL models before deployment. Despite this, the development of novel deep learning models is dominated by trial-and-error phases guided by aggregate metrics and aging benchmarks, which tell us very little about the skills and utility of these models. Moreover, the debugging phase is a nightmare for practitioners as well.
Another community working on tracking and debugging machine learning models is visual analytics, which proposes systems that help users understand and interact with machine learning models. In recent years, methodologies that explain DL models have become central to these systems. As a result, interaction between the XAI and visual analytics communities has become increasingly important.
The workshop aims to advance this discourse by collecting novel methods and discussing challenges, issues, and goals around the use of XAI approaches for debugging and improving current deep learning models. To achieve this goal, the workshop brings together researchers and practitioners from both fields, strengthening their collaboration.
Join our Slack channel for live and offline Q/A with authors and presenters!
Schedule
Tue 5:00 a.m. - 5:09 a.m. | Welcome (Opening) | Roberto Capobianco
Tue 5:10 a.m. - 5:13 a.m. | Speaker Introduction (Introduction) | Wen Sun
Tue 5:14 a.m. - 5:52 a.m. | [IT1] Visual Analytics for Explainable Machine Learning (Invited Talk) | Shixia Liu
Tue 5:53 a.m. - 6:03 a.m. | Q/A Session (Live Q/A) | Wen Sun · Shixia Liu
Tue 6:04 a.m. - 6:05 a.m. | Speaker Introduction (Introduction) | Biagio La Rosa
Tue 6:05 a.m. - 6:19 a.m. | [O1] Visualizing the Sim2Real Gap in Robot Ego-Pose Estimation (Oral) | Théo Jaunet · Guillaume Bono · Romain Vuillemot · Christian Wolf
Tue 6:20 a.m. - 6:25 a.m. | Q/A Session (Live Q/A) | Biagio La Rosa
Tue 6:25 a.m. - 6:35 a.m. | Break (10 min)
Tue 6:35 a.m. - 6:37 a.m. | Speaker Introduction (Introduction) | Biagio La Rosa
Tue 6:37 a.m. - 7:21 a.m. | [IT2] Explainability and robustness: Towards trustworthy AI (Invited Talk) | Andreas Holzinger
Tue 7:22 a.m. - 7:32 a.m. | Q/A Session (Live Q/A) | Biagio La Rosa · Andreas Holzinger
Tue 7:33 a.m. - 7:34 a.m. | Speaker Introduction (Introduction) | Leilani Gilpin
Tue 7:34 a.m. - 7:49 a.m. | [O2] Not too close and not too far: enforcing monotonicity requires penalizing the right points (Oral) | Joao Monteiro · Hossein Hajimirsadeghi · Greg Mori
Tue 7:50 a.m. - 7:55 a.m. | Q/A Session (Live Q/A) | Leilani Gilpin
Tue 7:55 a.m. - 8:05 a.m. | Break (10 min)
Tue 8:05 a.m. - 8:07 a.m. | Speaker Introduction (Introduction) | Biagio La Rosa
Tue 8:07 a.m. - 8:20 a.m. | [G] Empowering Human Translators via Interpretable Interactive Neural Machine Translation (A Glimpse of the Future Track) | Gabriele Sarti
Tue 8:21 a.m. - 8:26 a.m. | Q/A Session (Live Q/A) | Biagio La Rosa · Gabriele Sarti
Tue 8:27 a.m. - 8:28 a.m. | Speaker Introduction (Introduction) | Biagio La Rosa
Tue 8:28 a.m. - 8:41 a.m. | [O3] Reinforcement Explanation Learning (Oral) | Siddhant Agarwal · Owais Iqbal · Sree Aditya Buridi · Madda Manjusha · Abir Das
Tue 8:42 a.m. - 8:47 a.m. | Q/A Session (Live Q/A) | Biagio La Rosa
Tue 8:48 a.m. - 8:50 a.m. | Spotlight Introduction (Introduction) | Biagio La Rosa
Tue 8:50 a.m. - 8:53 a.m. | [S1] Interpreting BERT architecture predictions for peptide presentation by MHC class I proteins (Spotlight) | Hans-Christof Gasser
Tue 8:53 a.m. - 8:57 a.m. | [S2] XC: Exploring Quantitative Use Cases for Explanations in 3D Object Detection (Spotlight) | Sunsheng Gu · Vahdat Abdelzad · Krzysztof Czarnecki
Tue 8:57 a.m. - 9:00 a.m. | [S3] Interpretability in Gated Modular Neural Networks (Spotlight) | Yamuna Krishnamurthy · Chris Watkins
Tue 9:00 a.m. - 9:03 a.m. | [S4] A Robust Unsupervised Ensemble of Feature-Based Explanations using Restricted Boltzmann Machines (Spotlight) | Vadim Borisov · Johannes Meier · Johan Van den Heuvel · Hamed Jalali · Gjergji Kasneci
Tue 9:03 a.m. - 9:06 a.m. | [S5] Debugging the Internals of Convolutional Networks (Spotlight) | Bilal Alsallakh · Narine Kokhlikyan · Vivek Miglani · Shubham Muttepawar · Edward Wang · Sara Zhang · Orion Reblitz-Richardson
Tue 9:06 a.m. - 9:09 a.m. | [S6] Defuse: Training More Robust Models through Creation and Correction of Novel Model Errors (Spotlight) | Dylan Slack · Krishnaram Kenthapadi
Tue 9:09 a.m. - 9:12 a.m. | [S7] DeDUCE: Generating Counterfactual Explanations At Scale (Spotlight) | Benedikt Höltgen · Lisa Schut · Jan Brauner · Yarin Gal
Tue 9:12 a.m. - 9:22 a.m. | Break (10 min)
Tue 9:22 a.m. - 9:25 a.m. | Speaker Introduction (Introduction) | Alexander Feldman
Tue 9:26 a.m. - 10:04 a.m. | [IT3] Towards Reliable and Robust Model Explanations (Invited Talk) | Himabindu Lakkaraju
Tue 10:05 a.m. - 10:15 a.m. | Q/A Session (Live Q/A) | Alexander Feldman · Himabindu Lakkaraju
Tue 10:16 a.m. - 10:17 a.m. | Speaker Introduction (Introduction) | Roberto Capobianco
Tue 10:17 a.m. - 10:32 a.m. | [O4] Are All Neurons Created Equal? Interpreting and Controlling BERT through Individual Neurons (Oral) | Omer Antverg · Yonatan Belinkov
Tue 10:33 a.m. - 10:38 a.m. | Q/A Session (Live Q/A) | Roberto Capobianco
Tue 10:38 a.m. - 10:50 a.m. | Break (12 min)
Tue 10:50 a.m. - 10:51 a.m. | Speaker Introduction (Introduction) | Leilani Gilpin
Tue 10:51 a.m. - 11:23 a.m. | [IT4] Detecting model reliance on spurious signals is challenging for post hoc explanation approaches (Invited Talk) | Julius Adebayo
Tue 11:24 a.m. - 11:34 a.m. | Q/A Session (Live Q/A) | Leilani Gilpin · Julius Adebayo
Tue 11:35 a.m. - 11:36 a.m. | Speaker Introduction (Introduction) | Roberto Capobianco
Tue 11:36 a.m. - 11:48 a.m. | [O5] Do Feature Attribution Methods Correctly Attribute Features? (Oral) | Yilun Zhou · Serena Booth · Marco Tulio Ribeiro · Julie A Shah
Tue 11:49 a.m. - 11:54 a.m. | Q/A Session (Live Q/A) | Roberto Capobianco
Tue 11:55 a.m. - 12:10 p.m. | Break (15 min)
Tue 12:10 p.m. - 12:11 p.m. | Speaker Introduction (Introduction) | Roberto Capobianco
Tue 12:11 p.m. - 12:27 p.m. | [O6] Explaining Information Flow Inside Vision Transformers Using Markov Chain (Oral) | Tingyi Yuan · Xuhong Li · Haoyi Xiong · Dejing Dou
Tue 12:28 p.m. - 12:32 p.m. | Q/A Session (Live Q/A) | Roberto Capobianco
Tue 12:33 p.m. - 12:36 p.m. | Speaker Introduction (Introduction) | Alice Xiang
Tue 12:37 p.m. - 1:21 p.m. | [IT5] Natural language descriptions of deep features (Invited Talk) | Jacob Andreas
Tue 1:22 p.m. - 1:32 p.m. | Q/A Session (Live Q/A) | Alice Xiang · Jacob Andreas
Tue 1:33 p.m. - 1:35 p.m. | Spotlight Introduction (Introduction) | Biagio La Rosa
Tue 1:35 p.m. - 1:39 p.m. | [S8] Fast TreeSHAP: Accelerating SHAP Value Computation for Trees (Spotlight) | Jilei Yang
Tue 1:39 p.m. - 1:42 p.m. | [S9] Simulated User Studies for Explanation Evaluation (Spotlight) | Valerie Chen · Gregory Plumb · Nicholay Topin · Ameet S Talwalkar
Tue 1:42 p.m. - 1:45 p.m. | [S10] Exploring XAI for the Arts: Explaining Latent Space in Generative Music (Spotlight) | Nick Bryan-Kinns · Berker Banar · Corey Ford · Simon Colton
Tue 1:45 p.m. - 1:50 p.m. | [S11] Interpreting Language Models Through Knowledge Graph Extraction (Spotlight) | Vinitra Swamy · Angelika Romanou · Martin Jaggi
Tue 1:50 p.m. - 1:53 p.m. | [S12] Efficient Decompositional Rule Extraction for Deep Neural Networks (Spotlight) | Mateo Espinosa Zarlenga · Mateja Jamnik
Tue 1:53 p.m. - 1:57 p.m. | [S13] Revisiting Sanity Checks for Saliency Maps (Spotlight) | Gal Yona
Tue 1:57 p.m. - 2:01 p.m. | [S14] Towards Better Visual Explanations for Deep Image Classifiers (Spotlight) | Agnieszka Grabska-Barwinska · Amal Rannen-Triki · Omar Rivasplata · András György
Tue 2:00 p.m. - 2:06 p.m. | Closing Remarks (Closing) | Biagio La Rosa
Tue 2:06 p.m. - 2:30 p.m. | Poster Session
- | Interpreting BERT architecture predictions for peptide presentation by MHC class I proteins (Poster) | Hans-Christof Gasser
- | XC: Exploring Quantitative Use Cases for Explanations in 3D Object Detection (Poster) | Sunsheng Gu · Vahdat Abdelzad · Krzysztof Czarnecki
- | Towards Better Visual Explanations for Deep Image Classifiers (Poster) | Agnieszka Grabska-Barwinska · Amal Rannen-Triki · Omar Rivasplata · András György
- | Interpreting Language Models Through Knowledge Graph Extraction (Poster) | Vinitra Swamy · Angelika Romanou · Martin Jaggi
- | Exploring XAI for the Arts: Explaining Latent Space in Generative Music (Poster) | Nick Bryan-Kinns · Berker Banar · Corey Ford · Simon Colton
- | Interpretability in Gated Modular Neural Networks (Poster) | Yamuna Krishnamurthy · Chris Watkins
- | Defuse: Training More Robust Models through Creation and Correction of Novel Model Errors (Poster) | Dylan Slack · Krishnaram Kenthapadi
- | A Robust Unsupervised Ensemble of Feature-Based Explanations using Restricted Boltzmann Machines (Poster) | Vadim Borisov · Johannes Meier · Johan Van den Heuvel · Hamed Jalali · Gjergji Kasneci
- | Simulated User Studies for Explanation Evaluation (Poster) | Valerie Chen · Gregory Plumb · Nicholay Topin · Ameet S Talwalkar
- | Efficient Decompositional Rule Extraction for Deep Neural Networks (Poster) | Mateo Espinosa Zarlenga · Mateja Jamnik
- | Fast TreeSHAP: Accelerating SHAP Value Computation for Trees (Poster) | Jilei Yang
- | Revisiting Sanity Checks for Saliency Maps (Poster) | Gal Yona
- | DeDUCE: Generating Counterfactual Explanations At Scale (Poster) | Benedikt Höltgen · Lisa Schut · Jan Brauner · Yarin Gal
- | Debugging the Internals of Convolutional Networks (Poster) | Bilal Alsallakh · Narine Kokhlikyan · Vivek Miglani · Shubham Muttepawar · Edward Wang · Sara Zhang · Orion Reblitz-Richardson
- | Our Slack channel for Q/A, social, and networking (Link)