

Poster
in
Workshop: Foundation Models for Science: Progress, Opportunities, and Challenges

ChemDFM: A Large Language Foundation Model for Chemistry

Zihan Zhao · Da Ma · Lu Chen · Liangtai Sun · Zihao Li · Yi Xia · Hongshen Xu · Zichen Zhu · Su Zhu · Shuai Fan · Guodong Shen · Kai Yu · Xin Chen

Keywords: [ Large Language Models ] [ Instruction Tuning ] [ LLM ] [ Domain Pre-Training ] [ AI for Chemistry ] [ AI for Science ] [ Small Molecules ]


Abstract:

Artificial intelligence (AI) has played an increasingly important role in chemical research. However, most models currently used in chemistry are specialist models that require training and tuning for specific tasks. A more generic and efficient solution would be an AI model that could address many tasks and support free-form dialogue across the broad field of chemistry. In its ultimate form, such a generalist AI chemist could be referred to as Chemical General Intelligence. Large language models (LLMs) have recently achieved tremendous success in the general domain of natural language processing, showing emergent task generalization and free-form dialogue capabilities. However, domain knowledge of chemistry is largely missing from the training of general-domain LLMs, and this gap greatly hinders their performance in the field of chemistry. To this end, we develop ChemDFM, a pioneering LLM for chemistry trained on 34B tokens from chemical literature and textbooks, and fine-tuned on 2.7M instructions. As a result, it can understand and reason with chemical knowledge in free-form dialogue. Quantitative evaluations show that ChemDFM significantly surpasses most representative open-source LLMs. It also outperforms GPT-4 on a large portion of chemical tasks, despite the substantial size difference. We will open-source ChemDFM for the broader community of AI and chemistry.
