Poster
in
Workshop: Adaptive Foundation Models: Evolving AI for Personalized and Efficient Learning
Informed Tree of Thought: Cost-efficient Problem Solving with Large Language Models
Sajad Mousavi · Desik Rengarajan · Ashwin Ramesh Babu · Sahand Ghorbanpour · Vineet Gundecha · Avisek Naug · Soumyendu Sarkar
This paper introduces Informed Tree of Thought (iToT), a novel framework for improving the reasoning and dynamic re-planning capabilities of large language models (LLMs) on complex tasks that involve external tools. By integrating tool usage with informed search algorithms, iToT optimizes decision-making while accounting for tool costs and failures. The framework builds on existing methods such as Chain of Thought (CoT) and Tree of Thought (ToT) and extends them with iToT-A* and iToT-D* Lite variants for refined, cost-efficient task execution. Evaluated on the HotPotQA dataset, iToT outperforms several baselines, including direct prompting and ToT approaches. Experiments show that iToT handles complex reasoning tasks more effectively by minimizing tool costs and managing tool interactions. All methods are implemented with open-source models, ensuring broad accessibility and reproducibility.
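The core idea of coupling thought expansion with an informed search can be illustrated with a minimal A*-style sketch. This is not the authors' implementation: the toy graph stands in for LLM-proposed successor thoughts, the edge weights stand in for tool-invocation costs, and the zero heuristic stands in for an informed cost-to-go estimate; all names here are hypothetical.

```python
import heapq

# Toy stand-in for LLM thought expansion: thought -> [(successor, tool_cost), ...].
# In iToT, successors would be generated by the LLM and costs would reflect
# the price/failure risk of invoking external tools.
TOY_THOUGHTS = {
    "start": [("lookup_A", 2.0), ("lookup_B", 1.0)],
    "lookup_A": [("answer", 1.0)],
    "lookup_B": [("answer", 5.0)],
    "answer": [],
}

def expand(thought):
    return TOY_THOUGHTS.get(thought, [])

def heuristic(thought):
    # Admissible placeholder; an informed variant would estimate remaining cost.
    return 0.0

def itot_astar(root, goal):
    """A*-style search over thoughts: f(n) = g(n) (accumulated tool cost) + h(n)."""
    frontier = [(heuristic(root), 0.0, root, [root])]
    best_g = {}
    while frontier:
        f, g, thought, path = heapq.heappop(frontier)
        if thought == goal:
            return path, g
        if best_g.get(thought, float("inf")) <= g:
            continue  # already reached this thought more cheaply
        best_g[thought] = g
        for nxt, tool_cost in expand(thought):
            g2 = g + tool_cost
            heapq.heappush(frontier, (g2 + heuristic(nxt), g2, nxt, path + [nxt]))
    return None, float("inf")
```

On the toy graph, the search prefers the cheaper tool chain (`start → lookup_A → answer`, total cost 3.0) over the chain whose later tool call is expensive, which is the kind of cost-aware pruning the abstract describes.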