Talk in Workshop: XAI in Action: Past, Present, and Future Applications
Explanations: Let's talk about them!
Sameer Singh
Post hoc explanations aim to give end users insight into the workings of complex machine learning models. Despite their potential, post hoc explanations have found limited use in real-world applications and, in some evaluation setups, fail to help end users complete their tasks effectively. In a survey we carried out with domain experts to understand why they do not use explanation techniques, they pointed out that explanations are static and inflexible, making it difficult to explore model behavior intuitively. Based on these insights, we propose a shift towards natural language conversations as a promising direction for explainability: they are easy to use, flexible, and interactive. We introduce an initial version of such a system, TalkToModel, which uses LLMs to enable open-ended natural language conversations for machine learning explainability. In our evaluation, TalkToModel accurately identifies diverse user intents and supports a wide range of user queries. Furthermore, users strongly prefer TalkToModel over existing explainability systems, demonstrating the effectiveness of natural language interfaces in supporting model understanding. (This is joint work with Dylan Slack, Satya Krishna, Hima Lakkaraju, Chenhao Tan, and Yuxin Chen.)
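The conversational pattern the abstract describes can be pictured as a two-step loop: parse a user's natural-language question into an intent, then dispatch that intent to an explainability operation over the model. Below is a minimal, hypothetical Python sketch of that loop, not TalkToModel's actual implementation: the keyword-based parse_intent stands in for the LLM-based intent recognition, and the toy linear model and per-feature attribution are illustrative placeholders.

```python
# Hypothetical sketch of a conversational-explainability loop.
# All names (Query, parse_intent, etc.) are illustrative, not TalkToModel's API.
from dataclasses import dataclass


@dataclass
class Query:
    intent: str     # e.g. "predict" or "important_features"
    instance: dict  # feature name -> value


def parse_intent(text: str, instance: dict) -> Query:
    """Stand-in for LLM-based intent recognition: map a question to an intent."""
    lowered = text.lower()
    if "why" in lowered or "important" in lowered:
        return Query("important_features", instance)
    return Query("predict", instance)


def predict(weights: dict, instance: dict) -> float:
    """Toy linear model: score is the weighted sum of feature values."""
    return sum(weights.get(f, 0.0) * v for f, v in instance.items())


def important_features(weights: dict, instance: dict) -> list:
    """Simple attribution: each feature's contribution to the score."""
    contribs = {f: weights.get(f, 0.0) * v for f, v in instance.items()}
    return sorted(contribs.items(), key=lambda kv: -abs(kv[1]))


def answer(query: Query, weights: dict) -> str:
    """Dispatch the parsed intent to the matching explainability operation."""
    if query.intent == "predict":
        return f"Predicted score: {predict(weights, query.instance):.2f}"
    top = important_features(weights, query.instance)[:2]
    return "Top contributing features: " + ", ".join(
        f"{f} ({c:+.2f})" for f, c in top
    )


if __name__ == "__main__":
    weights = {"age": 0.8, "income": -0.3}
    instance = {"age": 0.5, "income": 1.2}
    for question in ["What does the model predict?", "Why did it predict that?"]:
        query = parse_intent(question, instance)
        print(question, "->", answer(query, weights))
```

In a full system, the keyword matcher would be replaced by an LLM that maps free-form questions onto a richer set of intents (counterfactuals, subgroup behavior, data queries), with the same dispatch structure underneath.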