

Poster

AutoSurvey: Large Language Models Can Automatically Write Surveys

Yidong Wang · Qi Guo · Wenjin Yao · Hongbo Zhang · Xin Zhang · Zhen Wu · Meishan Zhang · Xinyu Dai · Min Zhang · Qingsong Wen · Wei Ye · Shikun Zhang · Yue Zhang

Fri 13 Dec 4:30 p.m. PST — 7:30 p.m. PST

Abstract:

This paper introduces AutoSurvey, a speedy and well-organized methodology for automating the creation of comprehensive literature surveys in rapidly evolving fields like artificial intelligence. Traditional survey paper creation faces challenges due to the vast volume and complexity of information, prompting the need for efficient survey methods. While large language models (LLMs) offer promise in automating this process, challenges such as context window limitations, parametric knowledge constraints, and the lack of evaluation benchmarks remain. AutoSurvey addresses these challenges through a systematic approach that involves initial retrieval and outline generation, subsection drafting by specialized LLMs, integration and refinement, and rigorous evaluation and iteration. Our contributions include a comprehensive solution to the survey problem, a reliable evaluation method, and experimental validation demonstrating AutoSurvey's effectiveness.
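For illustration, the staged workflow described in the abstract (retrieval and outline generation, per-subsection drafting, integration, then evaluation and iteration) can be sketched as a simple loop. This is a minimal sketch under assumed interfaces, not the authors' implementation: the `retrieve` and `llm` callables, the prompts, and the helper name `write_survey` are all hypothetical placeholders.

```python
# Minimal sketch of an AutoSurvey-style pipeline, following the stages named
# in the abstract. All helper names and prompts are hypothetical placeholders;
# in practice retrieval would be backed by a paper index and `llm` by an API.

from typing import Callable, List


def write_survey(
    topic: str,
    retrieve: Callable[[str, int], List[str]],  # returns reference snippets
    llm: Callable[[str], str],                  # returns a model completion
    n_refs: int = 50,
    max_iters: int = 2,
) -> str:
    # 1) Initial retrieval and outline generation.
    refs = retrieve(topic, n_refs)
    outline = llm(
        f"Write a section outline for a survey on '{topic}' "
        "grounded in these references:\n" + "\n".join(refs)
    )

    draft = ""
    for _ in range(max_iters):
        # 2) Subsection drafting: each section is drafted separately, so no
        #    single prompt has to fit the entire survey in its context window.
        sections = [s for s in outline.splitlines() if s.strip()]
        section_drafts = [
            llm(
                f"Draft the section '{s}' of a survey on '{topic}', "
                "citing the provided references:\n" + "\n".join(refs)
            )
            for s in sections
        ]

        # 3) Integration and refinement: merge sections into one coherent text.
        draft = llm(
            "Merge these sections into one coherent survey, removing "
            "redundancy and smoothing transitions:\n\n" + "\n\n".join(section_drafts)
        )

        # 4) Evaluation and iteration: critique the draft and revise the
        #    outline before the next pass.
        critique = llm(
            f"Critique this survey on '{topic}' for coverage and citation "
            f"quality:\n{draft}"
        )
        outline = llm(
            f"Revise the outline given this critique:\n{critique}\n"
            f"Previous outline:\n{outline}"
        )

    return draft
```

Drafting subsections independently is one plausible way to sidestep the context window limitation the abstract mentions, while the final evaluation-and-revision pass corresponds to the "rigorous evaluation and iteration" stage.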
