Poster in Workshop: Tackling Climate Change with Machine Learning
Large language model co-pilot for transparent and trusted life cycle assessment comparisons
Nathan Preuss · Fengqi You
Intercomparing life cycle assessments (LCAs), a common type of sustainability and climate model, is difficult due to differences in fundamental assumptions, especially those made in the goal and scope definition stage. This complicates decision-making and the selection of climate-smart policies, since optimal products and processes cannot be readily compared across studies. To aid policymakers and LCA practitioners alike, we plan to leverage large language models (LLMs) to build a database of documented assumptions for LCAs across the agricultural sector, with a case study on manure management. Articles for this database will be identified through a systematic literature search, then processed to extract the relevant assumptions underlying each LCA's goal and scope definition. We then leverage this database to develop an AI co-pilot by augmenting LLMs with retrieval-augmented generation (RAG), for use by stakeholders and practitioners. This co-pilot offers two major benefits: 1) it enhances decision-making by facilitating comparisons among LCAs, enabling policymakers to adopt data-driven climate policies, and 2) it encourages LCA practitioners to adopt common assumptions. Ultimately, we hope to create a foundation model for LCA tasks that can plug in to existing open-source LCA software and tools.
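The abstract does not describe an implementation, but the retrieval step of such a RAG co-pilot can be illustrated with a minimal sketch: given a query, look up the most relevant documented assumptions and assemble them into a grounded LLM prompt. The toy records, the TF-IDF retriever, and the helper names (retrieve, build_prompt) below are illustrative assumptions, not the authors' system, which would more likely use a vector store and learned embeddings over the full database.

    # Minimal sketch of RAG retrieval over an LCA assumptions database (illustrative only).
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics.pairwise import cosine_similarity

    # Hypothetical records of goal-and-scope assumptions extracted from LCA papers.
    ASSUMPTIONS_DB = [
        "Functional unit: 1 kg of manure processed; system boundary cradle-to-gate.",
        "Functional unit: 1 MJ of biogas produced; allocation by energy content.",
        "System boundary excludes field application; 100-year GWP characterization factors.",
    ]

    vectorizer = TfidfVectorizer()
    doc_matrix = vectorizer.fit_transform(ASSUMPTIONS_DB)  # index the database once

    def retrieve(query: str, k: int = 2) -> list[str]:
        """Return the k assumption records most similar to the query."""
        scores = cosine_similarity(vectorizer.transform([query]), doc_matrix)[0]
        top = scores.argsort()[::-1][:k]
        return [ASSUMPTIONS_DB[i] for i in top]

    def build_prompt(query: str) -> str:
        """Ground the LLM's answer in retrieved assumptions (the augmentation step)."""
        context = "\n".join(f"- {rec}" for rec in retrieve(query))
        return (
            "Using only the documented LCA assumptions below, answer the question.\n"
            f"Assumptions:\n{context}\n\nQuestion: {query}"
        )

    print(build_prompt("Which studies use an energy-based allocation?"))
    # The assembled prompt would then be sent to an LLM to generate a grounded comparison.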