Poster
in
Workshop: AIM-FM: Advancements In Medical Foundation Models: Explainability, Robustness, Security, and Beyond
Sentiment Reasoning for Healthcare
Khai-Nguyen Nguyen · Khai Le-Duc · Bach Phan Tat · Duy Le · Phuc H Ngo · Long Vo-Dang · Anh Nguyen · Truong Son Hy
Transparency in AI healthcare decision-making is crucial for building trust between AI systems and users. By incorporating rationales that explain the reason for each predicted label, users can understand the reasoning of Large Language Models (LLMs), facilitating better decision-making based on the classification results. In this work, we introduce a new task - Sentiment Reasoning - for both speech and text modalities, along with our proposed multimodal multitask framework and one of the world's largest speech sentiment analysis datasets. Sentiment Reasoning is an auxiliary task in sentiment analysis in which the model both predicts the sentiment label and generates the rationale behind it based on the input transcript. Our study, conducted on both human transcripts and Automatic Speech Recognition (ASR) transcripts, shows that Sentiment Reasoning improves model transparency by providing rationales for model predictions whose semantic quality is comparable to human rationales, while also improving the model's classification performance (a 2% increase in both accuracy and macro-F1) via rationale-augmented fine-tuning. We also find no significant difference in the semantic quality of generated rationales between human and ASR transcripts. All code, data (English-translated and Vietnamese) and models are published online: https://github.com/leduckhai/Sentiment-Reasoning.
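The joint label-plus-rationale setup described above can be sketched as a simple input/output format. The prompt template, label set, and function names below are illustrative assumptions, not the authors' released code (see the linked repository for the actual implementation):

```python
# Illustrative sketch (assumed format, not the authors' released code):
# building a prompt for joint sentiment + rationale prediction, and parsing
# a "label: X | rationale: Y" style generation back into structured fields.

LABELS = {"positive", "negative", "neutral"}  # assumed label set

def build_prompt(transcript: str) -> str:
    """Build a single-sequence prompt asking for both a label and a rationale."""
    return (
        "Classify the sentiment of the transcript and explain why.\n"
        f"Transcript: {transcript}\n"
        "Answer as: label: <positive|negative|neutral> | rationale: <reason>"
    )

def parse_output(generated: str) -> tuple[str, str]:
    """Split 'label: X | rationale: Y' model output into (label, rationale)."""
    label_part, _, rationale_part = generated.partition("|")
    label = label_part.replace("label:", "").strip().lower()
    rationale = rationale_part.replace("rationale:", "").strip()
    if label not in LABELS:
        raise ValueError(f"unexpected label: {label!r}")
    return label, rationale

if __name__ == "__main__":
    out = "label: negative | rationale: The patient reports worsening pain."
    print(parse_output(out))
```

During rationale-augmented fine-tuning, the target sequence would contain both fields, so the model learns to emit the rationale alongside the classification rather than the label alone.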