

Poster

PediatricsGPT: Large Language Models as Chinese Medical Assistants for Pediatric Applications

Dingkang Yang · Jinjie Wei · Dongling Xiao · Shunli Wang · Tong Wu · Gang Li · Mingcheng Li · Shuaibing Wang · Jiawei Chen · Yue Jiang · Qingyao Xu · Ke Li · Peng Zhai · Lihua Zhang

Fri 13 Dec 11 a.m. PST — 2 p.m. PST

Abstract:

Developing intelligent pediatric consultation systems offers promising prospects for improving diagnostic efficiency, especially in China, where healthcare resources are scarce. Despite recent advances in Large Language Models (LLMs) for Chinese medicine, their performance is sub-optimal in pediatric applications due to inadequate instruction data and vulnerable training procedures. To address these issues, this paper builds PedCorpus, a high-quality dataset of over 300,000 multi-task instructions derived from pediatric textbooks, guidelines, and knowledge-graph resources to fulfill diverse diagnostic demands. Building on the carefully designed PedCorpus, we propose PediatricsGPT, the first Chinese pediatric LLM assistant trained with a systematic and robust pipeline. In the continuous pre-training phase, we introduce a hybrid instruction pre-training mechanism to mitigate the inconsistency between internal and injected knowledge during medical domain adaptation. Subsequently, full-parameter Supervised Fine-Tuning (SFT) is used to incorporate the general medical knowledge schema into the model. We then devise a direct following preference optimization to enhance the generation of pediatrician-like humanistic responses. In the parameter-efficient secondary SFT, a mixture of universal-specific experts strategy is presented to resolve the competency conflict between medical generalist ability and pediatric expertise mastery. Extensive results from automatic metrics, GPT-4, and doctor evaluations on distinct pediatric downstream tasks show that PediatricsGPT consistently outperforms previous Chinese medical LLMs. Our model and dataset will be open-sourced for community development.
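
As a rough illustration of the mixture of universal-specific experts used in the parameter-efficient secondary SFT stage, the sketch below combines a "universal" (general-medicine) and a "pediatric" low-rank adapter over a frozen base projection with a learned gate. The class names, rank, and token-wise gating scheme are assumptions for illustration and are not taken from the paper's implementation.

```python
# Hypothetical sketch (not the authors' code): a frozen base linear layer plus a
# gated sum of two low-rank adapter experts, in the spirit of the
# universal-specific experts strategy described in the abstract.
import torch
import torch.nn as nn


class LoRAExpert(nn.Module):
    """One low-rank adapter: down-project, then up-project back to the output dim."""

    def __init__(self, in_dim: int, out_dim: int, rank: int = 8):
        super().__init__()
        self.down = nn.Linear(in_dim, rank, bias=False)   # low-rank down-projection
        self.up = nn.Linear(rank, out_dim, bias=False)    # low-rank up-projection
        nn.init.zeros_(self.up.weight)                    # start as a zero update

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.up(self.down(x))


class MixtureOfExpertsLinear(nn.Module):
    """Frozen base projection plus a gated mixture of universal and pediatric adapters."""

    def __init__(self, base: nn.Linear, rank: int = 8):
        super().__init__()
        self.base = base
        for p in self.base.parameters():                  # keep base weights frozen
            p.requires_grad_(False)
        in_dim, out_dim = base.in_features, base.out_features
        self.universal = LoRAExpert(in_dim, out_dim, rank)   # general-medicine expert
        self.pediatric = LoRAExpert(in_dim, out_dim, rank)   # pediatrics-specific expert
        self.gate = nn.Linear(in_dim, 2)                     # token-wise routing weights

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        weights = torch.softmax(self.gate(x), dim=-1)        # (..., 2) mixing weights
        delta = (weights[..., :1] * self.universal(x)
                 + weights[..., 1:] * self.pediatric(x))
        return self.base(x) + delta


# Usage: wrap an existing projection and fine-tune only the adapters and the gate.
layer = MixtureOfExpertsLinear(nn.Linear(4096, 4096), rank=8)
out = layer(torch.randn(2, 16, 4096))
```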
