Poster in Workshop: NeurIPS 2023 Workshop on Tackling Climate Change with Machine Learning: Blending New and Existing Knowledge Systems
Stress-testing the coupled behavior of hybrid physics-machine learning climate simulations on an unseen, warmer climate
Jerry Lin · Mohamed Aziz Bhouri · Tom Beucler · Sungduk Yu · Mike Pritchard
Accurate and computationally viable representations of clouds and turbulence are a long-standing challenge for climate model development. Traditional parameterizations that crudely but efficiently approximate these processes are a leading source of uncertainty in long-term projected warming and precipitation patterns. Machine Learning (ML)-based parameterizations have long been hailed as a promising alternative with the potential to yield higher accuracy at a fraction of the cost of more explicit simulations. However, these ML variants are often unpredictably unstable and inaccurate in online testing (i.e., in a downstream hybrid simulation task where they are dynamically coupled to the large-scale climate model). These issues are exacerbated in out-of-distribution climates. Certain design decisions, such as "climate-invariant" feature transformation, input vector expansion, and temporal history incorporation, have been shown to improve online performance, but they may be insufficient for the mission-critical task of online out-of-distribution generalization. If feature selection and transformations can inoculate hybrid physics-ML climate models against non-physical out-of-distribution extrapolation in a changing climate, there is far greater potential for extrapolating from observational data. Otherwise, training on multiple simulated climates becomes unavoidable. While our results show generalization benefits from these design decisions, those benefits do not obviate the need for multi-climate simulated training data.
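To illustrate the "climate-invariant" feature transformation mentioned above, the sketch below rescales specific humidity into relative humidity, a quantity whose distribution shifts far less under warming. This is a minimal, hypothetical example of the general idea, not the authors' implementation: the function names, the Magnus-type saturation formula, and the example temperatures are illustrative assumptions.

```python
import numpy as np

EPS = 0.622  # ratio of gas constants R_d / R_v (dimensionless)


def saturation_vapor_pressure(temperature_k):
    """Saturation vapor pressure over liquid water (Pa),
    Magnus-type approximation (Alduchov & Eskridge, 1996)."""
    t_c = temperature_k - 273.15
    return 610.94 * np.exp(17.625 * t_c / (t_c + 243.04))


def specific_to_relative_humidity(q, temperature_k, pressure_pa):
    """Convert specific humidity q (kg/kg) to relative humidity (0-1).

    Relative humidity is far less sensitive to mean-state warming than
    specific humidity, which is the intuition behind treating it as a
    'climate-invariant' input feature.
    """
    vapor_pressure = q * pressure_pa / (EPS + (1.0 - EPS) * q)
    return vapor_pressure / saturation_vapor_pressure(temperature_k)


# Example: the same 80% relative humidity corresponds to very different
# specific humidities in a control vs. a warmer climate.
for t in (288.0, 296.0):  # surface air temperatures (K), ~+8 K apart
    e_sat = saturation_vapor_pressure(t)
    e = 0.8 * e_sat
    q = EPS * e / (101325.0 - (1.0 - EPS) * e)
    rh = specific_to_relative_humidity(q, t, 101325.0)
    print(f"T={t:.0f} K: q={q:.4f} kg/kg -> RH={rh:.2f}")
```

In this toy example the raw specific humidity roughly doubles between the two temperatures while the transformed feature stays at 0.80, which is why such rescalings can reduce, though not necessarily eliminate, out-of-distribution input shift in a warmer climate.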