Poster in Workshop: Foundation Models for Science: Progress, Opportunities, and Challenges
Developing a Foundation Model for Predicting Material Failure
Agnese Marcato · Javier E. Santos · Aleksandra Pachalieva · Kai Gao · Ryley Hill · Esteban Rougier · Qinjun Kang · Jeffrey Hyman · Abigail Hunter · Janel Chua · Earl Lawrence · Hari Viswanathan · Daniel O'Malley
Keywords: [ computational science ] [ material science ] [ transformers ] [ LLM ]
Abstract:
Understanding material failure is critical for designing stronger and lighter structures by identifying weaknesses that could be mitigated, predicting the integrity of engineered systems under stress to prevent unexpected breakdowns, and evaluating fractured subsurface reservoirs to ensure the long-term stability of the reservoir walls, fluid containment, and surrounding geological formations. Existing full-physics numerical simulation techniques involve trade-offs between speed, accuracy, and the ability to handle complex features like varying boundary conditions, grid types, resolution, and physical models. While each of these aspects is important, relying on a single method is often insufficient, and performing a comprehensive suite of simulations to capture variability and uncertainty is impractical due to computational constraints.

We present the first foundation model specifically designed for predicting material failure, leveraging large-scale datasets and a high parameter count (up to 3B) to significantly improve the accuracy of failure predictions. In addition, a large language model provides rich context embeddings, enabling our model to make predictions across a diverse range of conditions. Unlike traditional machine learning models, which are often tailored to specific systems or limited to narrow simulation conditions, our foundation model is designed to generalize across different materials and simulators. This flexibility enables the model to handle a range of material properties and conditions, providing accurate predictions without the need for retraining or adjustments for each specific case. Our model is capable of accommodating diverse input formats, such as images and varying simulation conditions, and producing a range of outputs, from simulation results to effective properties.
It supports both Cartesian and unstructured grids, with design choices that allow for seamless updates and extensions as new data and requirements emerge.

Our results show that increasing the scale of the model leads to significant performance gains (loss scales as $N^{-1.6}$, compared to language models, whose loss often scales as $N^{-0.5}$). This model represents a key stepping stone toward advancing the predictive capabilities of materials science and related fields.
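To make the reported scaling exponents concrete, the following sketch compares the loss reduction implied by a power law $L(N) = C\,N^{-\alpha}$ when the parameter count grows tenfold. The exponents ($\alpha = 1.6$ versus $\alpha = 0.5$) come from the abstract; the constant $C$ and the parameter counts are illustrative assumptions, not values from the paper.

```python
# Hypothetical power-law scaling of loss with parameter count N:
#   L(N) = C * N ** (-alpha)
# alpha = 1.6 is the exponent reported for this model; alpha = 0.5
# is the typical language-model exponent cited for comparison.
# C and the parameter counts below are placeholders for illustration.
def loss(n_params, alpha, C=1.0):
    return C * n_params ** (-alpha)

# Loss reduction factor from scaling parameters 10x (1e8 -> 1e9).
# For a power law this is simply 10 ** alpha, independent of C.
factor_failure = loss(1e8, 1.6) / loss(1e9, 1.6)   # ~10 ** 1.6
factor_language = loss(1e8, 0.5) / loss(1e9, 0.5)  # ~10 ** 0.5
```

Under these assumptions, a tenfold increase in parameters cuts the loss by roughly a factor of 40 at $\alpha = 1.6$, versus only about 3.2 at the language-model exponent of $\alpha = 0.5$, which is why the steeper exponent matters.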