Poster in Affinity Event: LatinX in AI
Enhancing Graph-to-Text Systems in Low-Resource Settings: Distilling Chain-of-Thought Reasoning for Task-Specific Workflows
David Guzman Piedrahita · Arnisa Fazla · Anna Kiepura
Knowledge graphs are essential for organizing vast amounts of information, yet their structured nature can be challenging for non-experts to interpret directly. Graph-to-text generation addresses this issue by converting graph data into natural language, facilitating user understanding. While recent advancements in Large Language Models (LLMs) have shown promise on this task, their high resource consumption limits their feasibility in low-resource settings. This study proposes a pipeline of smaller language models (SLMs) that distills reasoning capabilities from an external LLM, GPT-3.5 Turbo, and evaluates its performance on graph-to-text generation using the WebNLG dataset. By augmenting the dataset with intermediate reasoning steps, we fine-tune the two models in the pipeline: Triples-to-Reasoning and Reasoning-to-Text. Our results indicate that a pipeline of FLAN-T5-base models outperforms the baseline single FLAN-T5-base model, showcasing the effectiveness of intermediate reasoning, while the FLAN-T5-small pipeline does not yield similar improvements, underscoring the importance of model capacity. This work highlights the potential of SLM pipelines to emulate task decomposition and step-by-step reasoning, offering a pathway for deploying efficient and interpretable models in low-resource environments.
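As a rough illustration of the two-stage pipeline at inference time, a minimal Python sketch using Hugging Face Transformers might look like the following. The checkpoint paths, prompt templates, and triple linearization format are hypothetical placeholders, not the authors' exact setup; the sketch only assumes two fine-tuned FLAN-T5-base models chained as described in the abstract.

```python
# Minimal sketch of the two-stage SLM pipeline: Triples-to-Reasoning
# followed by Reasoning-to-Text. Checkpoint paths and prompts below are
# hypothetical placeholders for illustration only.
from transformers import T5ForConditionalGeneration, T5Tokenizer


def run_stage(model, tokenizer, prompt, max_new_tokens=256):
    """Run one seq2seq stage: encode the prompt, generate, decode."""
    inputs = tokenizer(prompt, return_tensors="pt", truncation=True)
    output_ids = model.generate(**inputs, max_new_tokens=max_new_tokens)
    return tokenizer.decode(output_ids[0], skip_special_tokens=True)


tokenizer = T5Tokenizer.from_pretrained("google/flan-t5-base")

# Stage 1: maps linearized triples to intermediate chain-of-thought
# reasoning (distilled from GPT-3.5 Turbo during training).
stage1 = T5ForConditionalGeneration.from_pretrained("./triples-to-reasoning")

# Stage 2: verbalizes the triples plus reasoning into fluent text.
stage2 = T5ForConditionalGeneration.from_pretrained("./reasoning-to-text")

# Example WebNLG-style input: RDF triples linearized into a string
# (the separator convention here is an assumption).
triples = (
    "Alan_Bean | occupation | Test_pilot && "
    "Alan_Bean | birthPlace | Wheeler,_Texas"
)

reasoning = run_stage(
    stage1, tokenizer, f"Generate reasoning steps for the triples: {triples}"
)
text = run_stage(
    stage2, tokenizer,
    f"Triples: {triples} Reasoning: {reasoning} Write a fluent description:",
)
print(text)
```

The design point being illustrated is the task decomposition itself: rather than asking one small model to map triples directly to text, the intermediate reasoning string produced by the first model becomes part of the second model's input.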