

Oral in Workshop: Causality and Large Models

Using Relational and Causality Context for Tasks with Specialized Vocabularies that are Challenging for LLMs

Ryosuke Nakanishi · Yan-Ying Chen · Francine Chen · Matt Klenk · Charlene C. Wu

Keywords: [ Linguistic Causality ] [ LLM ] [ Graph neural network ] [ Specialized Vocabulary ] [ Short Report Classification ]

Sat 14 Dec 2 p.m. PST — 2:15 p.m. PST
 
Presentation: Causality and Large Models
Sat 14 Dec 8:45 a.m. PST — 5:30 p.m. PST

Abstract:

Short texts are typical of reports such as incident synopses and product feedback, where brevity serves efficiency and convenience. However, classifying short reports can be very challenging due to incomplete information, limited labeled data, and, in some cases, many domain-specific terms. To address these issues, we examine the use of causality, as represented by linguistic cause and effect, in models for short report classification. We propose two augmentations of a hierarchical graph attention network to represent latent causes and effects. We also investigate the effectiveness of a pretrained language model, SBERT, versus more traditional tf-idf representations for reports with general and specialized vocabularies. Experiments on five public report datasets show that including causality when modeling short report datasets with many domain-specific terms improves classification performance.
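As a rough illustration of the representation comparison described in the abstract, the sketch below contrasts tf-idf features with pretrained SBERT embeddings on toy short reports. The example texts, the SBERT checkpoint (all-MiniLM-L6-v2), and the logistic-regression classifier are illustrative assumptions for the comparison only, not the authors' hierarchical graph attention model or datasets.

    # Minimal sketch: tf-idf vs. SBERT features for short-report classification.
    # Texts, checkpoint, and classifier are illustrative assumptions, not the
    # authors' pipeline.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import f1_score
    from sklearn.model_selection import train_test_split
    from sentence_transformers import SentenceTransformer

    # Toy short reports with domain-specific terms (hypothetical examples).
    reports = [
        "hydraulic actuator failed during taxi, crew aborted takeoff",
        "battery overheated after charge cycle, casing deformed",
        "pitot tube icing led to unreliable airspeed indication",
        "charger connector melted, device stopped charging",
    ] * 25  # repeated so the classifier has enough samples to fit
    labels = [0, 1, 0, 1] * 25  # 0 = incident synopsis, 1 = product feedback

    X_train, X_test, y_train, y_test = train_test_split(
        reports, labels, test_size=0.25, random_state=0)

    # Representation 1: traditional tf-idf over the raw vocabulary.
    vec = TfidfVectorizer()
    tfidf_train, tfidf_test = vec.fit_transform(X_train), vec.transform(X_test)

    # Representation 2: pretrained SBERT sentence embeddings.
    sbert = SentenceTransformer("all-MiniLM-L6-v2")  # assumed checkpoint
    emb_train, emb_test = sbert.encode(X_train), sbert.encode(X_test)

    # Fit the same simple classifier on each representation and compare.
    for name, (tr, te) in {"tf-idf": (tfidf_train, tfidf_test),
                           "sbert": (emb_train, emb_test)}.items():
        clf = LogisticRegression(max_iter=1000).fit(tr, y_train)
        print(name, "macro-F1:",
              f1_score(y_test, clf.predict(te), average="macro"))

On real datasets with specialized vocabularies, the trade-off the abstract investigates is whether the pretrained embeddings transfer to unseen domain terms better than a corpus-specific tf-idf vocabulary; the toy data above only demonstrates how the two pipelines are wired.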
