Workshop
XAI in Action: Past, Present, and Future Applications
Chhavi Yadav · Michal Moshkovitz · Nave Frost · Suraj Srinivas · Bingqing Chen · Valentyn Boreiko · Himabindu Lakkaraju · J. Zico Kolter · Dotan Di Castro · Kamalika Chaudhuri
Room 271 - 273
Sat 16 Dec, 6:50 a.m. PST
Transparency is vital for AI’s growth, which has led to the design of new methods in explainable AI (XAI). We aim to explore the current state of applied XAI and identify future directions.
Timezone: America/Los_Angeles
Schedule
Sat 6:50 a.m. - 7:00 a.m. | Opening Remarks (Opening)
Sat 7:00 a.m. - 7:30 a.m. | Explanations: Let's talk about them! (Talk) | Sameer Singh
Sat 7:30 a.m. - 8:00 a.m. | Theoretical guarantees for explainable AI? (Talk) | Ulrike Luxburg
Sat 8:00 a.m. - 8:30 a.m. | Coffee & Games (Social)
Sat 8:30 a.m. - 9:00 a.m. | Explainable AI: where we are and how to move forward for health AI (Talk) | Su-In Lee
Sat 9:00 a.m. - 10:00 a.m. | Panel Discussion | Leilani Gilpin · Shai Ben-David · Julius Adebayo · Sameer Singh · Su-In Lee · Kamalika Chaudhuri
Sat 10:00 a.m. - 11:30 a.m. | Lunch
Sat 11:30 a.m. - 12:00 p.m. | Confronting the Faithfulness Challenge with Post-hoc Model Explanations (Talk) | Julius Adebayo
Sat 12:00 p.m. - 1:00 p.m. | Poster Session 1
Sat 12:01 p.m. - 1:00 p.m. | GInX-Eval: Towards In-Distribution Evaluation of Graph Neural Network Explanations (Poster) | Kenza Amara · Mennatallah El-Assady · Rex Ying
Sat 12:01 p.m. - 1:00 p.m. | FRUNI and FTREE synthetic knowledge graphs for evaluating explainability (Poster) | Pablo Sanchez-Martin · Tarek R. Besold · Priyadarshini Kumari
Sat 12:01 p.m. - 1:00 p.m. | Explainable AI in Music Performance: Case Studies from Live Coding and Sound Spatialisation (Poster) | Jack Armitage · Nicola Privato · Victor Shepardson · Celeste Betancur Gutierrez
Sat 12:01 p.m. - 1:00 p.m. | Towards Explanatory Model Monitoring (Poster) | Alexander Koebler · Thomas Decker · Michael Lebacher · Ingo Thon · Volker Tresp · Florian Buettner
Sat 12:01 p.m. - 1:00 p.m. | Lessons from Usable ML Deployments Applied to Wind Turbine Monitoring (Poster) | Alexandra Zytek · Wei-En Wang · Sofia Koukoura · Kalyan Veeramachaneni
Sat 12:01 p.m. - 1:00 p.m. | DeepDecipher: Accessing and Investigating Neuron Activation in Large Language Models (Poster) | Albert Garde · Esben Kran · Fazl Barez
Sat 12:01 p.m. - 1:00 p.m. | How Well Do Feature-Additive Explainers Explain Feature-Additive Predictors? (Poster) | Zachariah Carmichael · Walter Scheirer
Sat 12:01 p.m. - 1:00 p.m. | Diagnosing Transformers: Illuminating Feature Spaces for Clinical Decision-Making (Poster) | Aliyah Hsu · Yeshwanth Cherapanamjeri · Briton Park · Tristan Naumann · Anobel Odisho · Bin Yu
Sat 12:01 p.m. - 1:00 p.m. | AttributionLab: Faithfulness of Feature Attribution Under Controllable Environments (Poster) | Yang Zhang · Yawei Li · Hannah Brown · Mina Rezaei · Bernd Bischl · Philip Torr · Ashkan Khakzar · Kenji Kawaguchi
Sat 12:01 p.m. - 1:00 p.m. | Geometric Remove-and-Retrain (GOAR): Coordinate-Invariant eXplainable AI Assessment (Poster) | Yong-Hyun Park · Junghoon Seo · 범석 박 · Seongsu Lee · Junghyo Jo
Sat 12:01 p.m. - 1:00 p.m. | Utilizing Explainability Techniques for Reinforcement Learning Model Assurance (Demo) | Alexander Tapley
Sat 12:01 p.m. - 1:00 p.m. | Detecting Spurious Correlations via Robust Visual Concepts in Real and AI-Generated Image Classification (Poster) | Preetam Prabhu Srikar Dammu · Chirag Shah
Sat 12:01 p.m. - 1:00 p.m. | Towards the next generation explainable AI that promotes AI-human mutual understanding (Poster) | Janet Hsiao · Antoni Chan
Sat 12:01 p.m. - 1:00 p.m. | Are VideoQA Models Truly Multimodal? (Poster) | Ishaan Singh Rawal · Shantanu Jaiswal · Basura Fernando · Cheston Tan
Sat 12:01 p.m. - 1:00 p.m. | Piecewise Linear Parametrization of Policies: Towards Interpretable Deep Reinforcement Learning (Poster) | Maxime Wabartha · Joelle Pineau
Sat 12:01 p.m. - 1:00 p.m. | COMET: Cost Model Explanation Framework (Poster) | Isha Chaudhary · Alex Renda · Charith Mendis · Gagandeep Singh
Sat 12:01 p.m. - 1:00 p.m. | Interactive Visual Feature Search (Demo) | Devon Ulrich · Ruth Fong
Sat 12:01 p.m. - 1:00 p.m. | Beyond Concept Bottleneck Models: How to Make Black Boxes Intervenable? (Poster) | Ričards Marcinkevičs · Sonia Laguna · Moritz Vandenhirtz · Julia Vogt
Sat 12:01 p.m. - 1:00 p.m. | Estimation of Concept Explanations Should be Uncertainty Aware (Poster) | Vihari Piratla · Juyeon Heo · Sukriti Singh · Adrian Weller
Sat 12:01 p.m. - 1:00 p.m. | Optimising Human-AI Collaboration by Learning Convincing Explanations (Poster) | Alex Chan · Alihan Hüyük · Mihaela van der Schaar
Sat 12:01 p.m. - 1:00 p.m. | Inherent Inconsistencies of Feature Importance (Poster) | Nimrod Harel · Uri Obolski · Ran Gilad-Bachrach
Sat 12:01 p.m. - 1:00 p.m. | ExpLIMEable: An exploratory framework for LIME (Demo) | Sonia Laguna · Julian Heidenreich · Jiugeng Sun · Nilüfer Cetin · Ibrahim Al Hazwani · Udo Schlegel · Furui Cheng · Mennatallah El-Assady
Sat 12:01 p.m. - 1:00 p.m. | Influence Based Approaches to Algorithmic Fairness: A Closer Look (Poster) | Soumya Ghosh · Prasanna Sattigeri · Inkit Padhi · Manish Nagireddy · Jie Chen
Sat 12:01 p.m. - 1:00 p.m. | Explaining black box text modules in natural language with language models (Poster) | Chandan Singh · Aliyah Hsu · Richard Antonello · Shailee Jain · Alexander Huth · Bin Yu · Jianfeng Gao
Sat 12:01 p.m. - 1:00 p.m. | Use Perturbations when Learning from Explanations (Poster) | Juyeon Heo · Vihari Piratla · Matthew Wicker · Adrian Weller
Sat 12:01 p.m. - 1:00 p.m. | Assessment of the Reliability of a Model's Decision by Generalizing Attribution to the Wavelet Domain (Poster) | Gabriel Kasmi · Laurent Dubus · Yves-Marie Saint-Drenan · Philippe Blanc
Sat 1:00 p.m. - 1:30 p.m. | Coffee & Games (Social)
Sat 1:30 p.m. - 2:00 p.m. | Explaining Self-Driving Cars for Accountable Autonomy (Talk) | Leilani Gilpin
Sat 2:00 p.m. - 2:30 p.m. | Contributed Talks (Talk)
Sat 2:01 p.m. - 2:07 p.m. | Emergence of Segmentation with Minimalistic White-Box Transformers (Spotlight) | Yaodong Yu · Tianzhe Chu · Shengbang Tong · Ziyang Wu · Druv Pai · Sam Buchanan · Yi Ma
Sat 2:07 p.m. - 2:14 p.m. | Scale Alone Does not Improve Mechanistic Interpretability in Vision Models (Spotlight) | Roland S. Zimmermann · Thomas Klein · Wieland Brendel
Sat 2:14 p.m. - 2:21 p.m. | On Evaluating Explanation Utility for Human-AI Decision-Making in NLP (Spotlight) | Fateme Hashemi Chaleshtori · Atreya Ghosal · Ana Marasovic
Sat 2:21 p.m. - 2:28 p.m. | Understanding Scalable Perovskite Solar Cell Manufacturing with Explainable AI (Spotlight) | Lukas Klein · Sebastian Ziegler · Felix Laufer · Charlotte Debus · Markus Götz · Klaus Maier-Hein · Ulrich Paetzold · Fabian Isensee · Paul Jaeger
Sat 2:30 p.m. - 3:30 p.m. | Poster Session 2
- | A Critical Survey on Fairness Benefits of XAI (Poster) | Luca Deck · Jakob Schoeffer · Maria De-Arteaga · Niklas Kuehl
- | Exploring Practitioner Perspectives On Training Data Attribution Explanations (Poster) | Elisa Nguyen · Evgenii Kortukov · Jean Song · Seong Joon Oh
- | Explaining high-dimensional text classifiers (Poster) | Odelia Melamed · Rich Caruana
- | Sum-of-Parts Models: Faithful Attributions for Groups of Features (Poster) | Weiqiu You · Helen Qu · Marco Gatti · Bhuvnesh Jain · Eric Wong
- | Sanity Checks Revisited: An Exploration to Repair the Model Parameter Randomisation Test (Poster) | Anna Hedström · Leander Weber · Sebastian Lapuschkin · Marina Höhne
- | Stability Guarantees for Feature Attributions with Multiplicative Smoothing (Poster) | Anton Xue · Rajeev Alur · Eric Wong
- | On the Consistency of GNN Explainability Methods (Poster) | Ehsan Hajiramezanali · Sepideh Maleki · Alex Tseng · Aicha BenTaieb · Gabriele Scalia · Tommaso Biancalani
- | Transparent Anomaly Detection via Concept-based Explanations (Poster) | Laya Rafiee Sevyeri · Ivaxi Sheth · Farhood Farahnak · Shirin Abbasinejad Enger
- | Robust Recourse for Binary Allocation Problems (Poster) | Meirav Segal · Anne-Marie George · Ingrid Yu · Christos Dimitrakakis
- | Are Large Language Models Post Hoc Explainers? (Poster) | Nicholas Kroeger · Dan Ley · Satyapriya Krishna · Chirag Agarwal · Himabindu Lakkaraju
- | Rectifying Group Irregularities in Explanations for Distribution Shift (Poster) | Adam Stein · Yinjun Wu · Eric Wong · Mayur Naik
- | Explainable Alzheimer’s Disease Progression Prediction using Reinforcement Learning (Poster) | Raja Farrukh Ali · Ayesha Farooq · Emmanuel Adeniji · John Woods · Vinny Sun · William Hsu
- | A Simple Scoring Function to Fool SHAP: Stealing from the One Above (Poster) | Jun Yuan · Aritra Dasgupta
- | Explaining Longitudinal Clinical Outcomes using Domain-Knowledge driven Intermediate Concepts (Poster) | Sayantan Kumar · Thomas Kannampallil · Aristeidis Sotiras · Philip Payne
- | Visual Topics via Visual Vocabularies (Poster) | Shreya Havaldar · Weiqiu You · Lyle Ungar · Eric Wong
- | Extracting human interpretable structure-property relationships in chemistry using XAI and large language models (Poster) | Geemi Wellawatte · Philippe Schwaller
- | Interactive Model Correction with Natural Language (Poster) | Yoonho Lee · Michelle Lam · Helena Vasconcelos · Michael Bernstein · Chelsea Finn
- | On the Relationship Between Explanation and Prediction: A Causal View (Poster) | Amir-Hossein Karimi · Krikamol Muandet · Simon Kornblith · Bernhard Schölkopf · Been Kim
- | ReLax: An Efficient and Scalable Recourse Explanation Benchmarking Library using JAX (Poster) | Hangzhi Guo · Xinchang Xiong · Wenbo Zhang · Amulya Yadav
- | Caution to the Exemplars: On the Intriguing Effects of Example Choice on Human Trust in XAI (Poster) | Tobias Leemann · Yao Rong · Thai-Trang Nguyen · Enkelejda Kasneci · Gjergji Kasneci
- | Policy graphs in action: explaining single- and multi-agent behaviour using predicates (Poster) | Sergio Alvarez-Napagao · Adrián Tormos · Victor Gimenez-Abalos · Dmitry Gnatyshak
- | Explaining Tree Model Decisions in Natural Language for Network Intrusion Detection (Poster) | Noah Ziems · Gang Liu · John Flanagan · Meng Jiang
- | ObEy Anything: Quantifiable Object-based Explainability without Ground Truth Annotations (Poster) | William Ho · Lennart Schulze · Richard Zemel
- | Cost-aware counterfactuals for black box explanations (Poster) | Natalia Martinez · Kanthi Sarpatwar · Sumanta Mukherjee · Roman Vaculin
- | The Disagreement Problem in Faithfulness Metrics (Poster) | Brian Barr · Noah Fatsi · Leif Hancox-Li · Peter Richter · Daniel Proano
- | Empowering Domain Experts to Detect Social Bias in Generative AI with User-Friendly Interfaces (Poster) | Roy Jiang · Rafal Kocielnik · Adhithya Prakash Saravanan · Pengrui Han · R. Michael Alvarez · Animashree Anandkumar
- | Do Concept Bottleneck Models Obey Locality? (Poster) | Naveen Raman · Mateo Espinosa Zarlenga · Juyeon Heo · Mateja Jamnik
- | Diffusion-Guided Counterfactual Generation for Model Explainability (Poster) | Nishtha Madaan · Srikanta Bedathur
- | GLANCE: Global to Local Architecture-Neutral Concept-based Explanations (Poster) | Avinash Kori · Ben Glocker · Francesca Toni