Oral in Workshop: The First Workshop on Large Foundation Models for Educational Assessment
Leveraging Grounded Large Language Models to Automate Educational Presentation Generation
Eric Xie · Guangzhi Xiong · Haolin Yang · Aidong Zhang
Large Language Models (LLMs) have shown great potential in education, where they can facilitate many aspects of course preparation, from writing quiz questions to automatically evaluating student answers. By helping educators quickly generate high-quality educational content, LLMs free up time for student engagement, lesson planning, and personalized instruction, ultimately enhancing the overall learning experience. Although slide preparation is a crucial step in teaching, helping instructors present a course in an organized way, there have been few attempts to use LLMs for slide generation. Because LLMs are prone to hallucination while education demands accurate knowledge, there is a distinct lack of LLM tools that generate presentations tailored for education, especially in specialized domains such as biomedicine. To address this gap, we design a new framework that accelerates and automates slide preparation in biomedical education using knowledge-enhanced LLMs. Specifically, we leverage the code generation capabilities of LLMs to bridge the modality gap between text and presentation slides. Retrieval-augmented generation (RAG) is also incorporated into our framework to enhance slide generation with external knowledge bases and to ground the generated content in traceable sources. Our experiments demonstrate the utility of our framework in terms of relevance and depth, reflecting the potential of LLMs to facilitate slide preparation for education.
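The abstract does not publish implementation details, so the following is only a minimal sketch of the kind of pipeline it describes: retrieve grounding passages from an external knowledge base, then emit slide code whose bullets cite their sources. Here a toy word-overlap ranker stands in for a real RAG retriever, and a fixed LaTeX Beamer template stands in for the LLM's code generation; all names (`retrieve`, `render_beamer_slide`, the sample knowledge base) are hypothetical.

```python
def retrieve(query, knowledge_base, top_k=2):
    """Toy retriever: rank documents by word overlap with the query.
    A real RAG system would use dense embeddings or a search index."""
    def overlap(doc):
        return len(set(query.lower().split()) & set(doc["text"].lower().split()))
    ranked = sorted(knowledge_base, key=overlap, reverse=True)
    return ranked[:top_k]

def render_beamer_slide(title, passages):
    """Stand-in for LLM code generation: emit a LaTeX Beamer frame whose
    bullets carry source tags, grounding the content in traceable references."""
    lines = [r"\begin{frame}{%s}" % title, r"\begin{itemize}"]
    for p in passages:
        lines.append(r"\item %s \hfill {\tiny [%s]}" % (p["text"], p["source"]))
    lines += [r"\end{itemize}", r"\end{frame}"]
    return "\n".join(lines)

# Hypothetical miniature knowledge base for illustration only.
kb = [
    {"source": "textbook-ch3", "text": "Insulin lowers blood glucose levels."},
    {"source": "review-2021", "text": "Glucagon raises blood glucose levels."},
    {"source": "misc-note", "text": "Slides should be concise."},
]

slide = render_beamer_slide("Blood glucose regulation",
                            retrieve("blood glucose hormones", kb))
print(slide)
```

The key design point mirrored here is that the generation target is code (a Beamer frame) rather than free text, which makes the text-to-slide modality gap a code-generation problem, and that every bullet stays attached to the passage that supports it.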