Poster
in
Workshop: Workshop on Open-World Agents: Synergizing Reasoning and Decision-Making in Open-World Environments (OWA-2024)

RAR-Agent: Retrieval Augmented Reflection Learning from Scratch for Reasoning

Shipeng Xie · Haichao Zhu · Da Chen

Keywords: [ LLM ] [ Retrieval Augmented Generation ] [ Continued Learning ] [ Reflection ] [ Open-World Agents ] [ Decision-making ] [ Interleave Reasoning ] [ Agent ]


Abstract:

In complex question-answering scenarios, large language model (LLM) agents have achieved remarkable performance by leveraging external tools for reasoning and planning. Despite continued exploration in this domain, current LLM agent systems still suffer from high costs, difficulty in reusing reliable prior knowledge, and the challenge of enabling a single model to fulfill multiple functions in open-world environments. To address these issues, we propose RAR-Agent (Retrieval Augmented Reflection Agent), a framework that learns reasoning and knowledge updating from scratch through retrieval-augmented reflection, without relying on large amounts of annotated data or requiring fine-tuning. Given limited prior-knowledge data and a tool library, RAR-Agent first autonomously synthesizes trajectory data for reasoning decisions, bypassing the need for manual annotation or assistance from powerful closed-source models. It then autonomously constructs a prior knowledge base and supplies task-specific prior knowledge through retrieval. Through interactive dialogue with users, RAR-Agent collects a small amount of human feedback and uses a continual learning mechanism to update its prior knowledge base. We conduct comprehensive experiments with diverse LLMs, demonstrating that RAR-Agent achieves better or comparable performance across many benchmarks, with very little annotated data and no additional fine-tuning.
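The abstract gives no implementation details, so the following is only a minimal, hypothetical Python sketch of the retrieve, reflect, and update cycle it describes. Every name here (KnowledgeBase, rar_agent_episode, the word-overlap retriever, the "OK"-prefixed critique convention) is an illustrative assumption, not the authors' actual interface.

```python
# Hypothetical sketch of the RAR-Agent loop described in the abstract:
# retrieve prior trajectories, reason with tools, reflect, and update the
# knowledge base from user feedback. Names and prompts are illustrative only.

from dataclasses import dataclass, field
from typing import Callable


@dataclass
class KnowledgeBase:
    """Toy prior-knowledge store holding (task, trajectory) pairs."""
    entries: list[tuple[str, str]] = field(default_factory=list)

    def retrieve(self, task: str, k: int = 3) -> list[str]:
        # Naive word-overlap scoring stands in for a real retriever.
        words = set(task.lower().split())
        scored = sorted(self.entries,
                        key=lambda e: len(words & set(e[0].lower().split())),
                        reverse=True)
        return [trajectory for _, trajectory in scored[:k]]

    def update(self, task: str, trajectory: str) -> None:
        # Continual-learning step: store a feedback-validated trajectory.
        self.entries.append((task, trajectory))


def rar_agent_episode(task: str,
                      kb: KnowledgeBase,
                      tools: dict[str, Callable[[str], str]],
                      llm: Callable[[str], str],
                      max_reflections: int = 3) -> str:
    """Run one retrieval-augmented-reflection episode and return a trajectory."""
    priors = kb.retrieve(task)
    prompt = (f"Task: {task}\n"
              "Retrieved prior trajectories:\n" + "\n---\n".join(priors) + "\n"
              f"Available tools: {sorted(tools)}\n"
              "Reason step by step, calling tools as needed.")
    trajectory = llm(prompt)
    for _ in range(max_reflections):
        critique = llm(f"Task: {task}\nTrajectory:\n{trajectory}\n"
                       "Reflect: is the trajectory correct? Answer OK or explain the fix.")
        if critique.strip().upper().startswith("OK"):
            break
        trajectory = llm(f"Task: {task}\nPrevious attempt:\n{trajectory}\n"
                         f"Reflection:\n{critique}\nProduce a revised trajectory.")
    return trajectory


# If the user confirms the result during dialogue, the trajectory is fed back
# into the knowledge base, e.g.: kb.update(task, trajectory)
```

The sketch is only meant to make the described cycle concrete; any LLM backend can be passed in as the `llm` callable, and the retriever and feedback check would be replaced by whatever the authors actually use.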
