Poster
in
Workshop: AI for Science: Progress and Promises

Incorporating Higher Order Constraints for Training Surrogate Models to Solve Inverse Problems

Jihui Jin · Nick Durofchalk · Richard Touret · Karim Sabra · Justin Romberg

Keywords: [ Inverse Problem ] [ Hybrid Machine Learning ] [ Neural Adjoint ]


Abstract:

Inverse problems describe the task of recovering an underlying signal from observables. Typically, the observables are related to the underlying signal through a non-linear forward model. Inverting this forward model can be computationally expensive, as computing a descent direction involves evaluating the adjoint. Rather than inverting the non-linear model directly, we instead train a surrogate forward model and leverage modern auto-grad libraries to solve for the underlying signal within a classical optimization framework. Current methods train surrogate models in a black-box supervised machine learning fashion and do not take advantage of existing knowledge of the forward model. In this article, we propose a simple regularization method that enforces constraints on the gradients of the surrogate model, in addition to its output, to improve overall accuracy. We demonstrate its efficacy on an ocean acoustic tomography (OAT) example that aims to recover ocean sound speed profile (SSP) variations from acoustic observations (e.g. eigenray arrival times) within a simulation of ocean dynamics in the Gulf of Mexico.
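The two ingredients of the abstract can be illustrated with a minimal sketch: train a surrogate forward model with a penalty that matches its gradients (here, its Jacobian) to those of the true forward model, then invert by descending the surrogate's data misfit. This is an assumption-laden toy, not the authors' method: the forward model is taken to be linear so its Jacobian is known in closed form, and the surrogate is a linear map trained by batch gradient descent. All names (`forward`, `W`, `lam`, etc.) are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy forward model g(x) = A @ x, standing in for the expensive
# non-linear model; for a linear map the Jacobian is simply A, which plays
# the role of the "higher order" (gradient) information being enforced.
A = rng.normal(size=(5, 3))

# Linear surrogate f(x) = W @ x + b. Training loss: mean squared output
# mismatch plus a Sobolev-style penalty lam * ||W - A||^2 that constrains
# the surrogate's gradients, not just its outputs.
W = 0.1 * rng.normal(size=(5, 3))
b = np.zeros(5)
lam, lr = 1.0, 0.05
X = rng.normal(size=(200, 3))  # training inputs

for _ in range(500):
    gW = np.zeros_like(W)
    gb = np.zeros_like(b)
    for x in X:
        r = (W @ x + b) - A @ x        # output residual on this sample
        gW += np.outer(r, x)
        gb += r
    gW /= len(X)
    gb /= len(X)
    gW += lam * (W - A)                # gradient-constraint term
    W -= lr * gW
    b -= lr * gb

# Inversion: recover x from y = g(x_true) by gradient descent on the
# surrogate misfit ||f(x) - y||^2, using the surrogate's cheap adjoint W.T
# in place of the adjoint of the true forward model.
x_true = rng.normal(size=3)
y = A @ x_true
x = np.zeros(3)
for _ in range(2000):
    x -= 0.05 * W.T @ ((W @ x + b) - y)
```

In a realistic setting the surrogate would be a neural network and the Jacobian penalty would be computed with an auto-grad library rather than analytically, but the structure of the loss (output term plus gradient term) is the same.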