

Poster in the NeurIPS 2024 Workshop: Machine Learning and the Physical Sciences

LensPINN: Physics Informed Neural Network for Learning Dark Matter Morphology in Lensing

Ashutosh Ojha · Sergei Gleyzer · Michael Toomey · Pranath Reddy Kumbam


Abstract:

We present LensPINN, a Physics-Informed Neural Network architecture for studying dark matter through strong gravitational lensing images. Our approach integrates the gravitational lensing equation into the model, combining the capabilities of Vision Transformer (ViT) and CNN frameworks. The lensing equation is used to perform lensing inversion and reconstruct the source galaxy: a Vision Transformer encoder learns the deflection angle due to gravitational lensing, and the resulting source information is propagated through the architecture to enhance learning, improving performance on strong gravitational lensing tasks and dark matter localization. In this paper, we focus on a classification task that distinguishes between simulations of different dark matter models. We compare the performance of LensPINN with previous state-of-the-art models and other leading architectures. We propose two versions of the model: LensPINN-small, which is highly efficient, using roughly half as many parameters while performing on par with the other models, and LensPINN-large, which has the same number of parameters as existing models but surpasses all of them across various metrics.
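To make the physics-informed idea concrete, the sketch below shows one way the lensing equation β = θ − α(θ) can be embedded in a differentiable model: an encoder predicts a deflection field α and a source-plane image, the predicted source is re-lensed via the lensing equation, and the mismatch with the observed image serves as a physics-informed reconstruction term alongside the classification loss. This is a minimal illustration only, not the authors' implementation; the class and layer names are hypothetical, and a small CNN encoder stands in for the ViT encoder described in the abstract.

```python
# Minimal physics-informed lensing sketch in PyTorch (illustrative; not the LensPINN code).
import torch
import torch.nn as nn
import torch.nn.functional as F


class SimpleLensPINN(nn.Module):
    """Toy model: an encoder predicts a deflection field alpha(theta) and a
    source-plane image; the lensing equation beta = theta - alpha(theta) is
    used to re-lens the predicted source for a physics-informed loss."""

    def __init__(self, n_classes: int = 3, size: int = 64):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
        )
        # Two channels: x and y components of the deflection angle alpha(theta).
        self.deflection_head = nn.Conv2d(64, 2, 1)
        # Source-plane surface brightness predicted on the same grid.
        self.source_head = nn.Conv2d(64, 1, 1)
        self.classifier = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, n_classes)
        )
        # Image-plane coordinates theta in [-1, 1]^2 (grid_sample convention).
        ys, xs = torch.meshgrid(
            torch.linspace(-1, 1, size), torch.linspace(-1, 1, size), indexing="ij"
        )
        self.register_buffer("theta", torch.stack([xs, ys], dim=-1))  # (H, W, 2)

    def forward(self, image: torch.Tensor):
        feats = self.encoder(image)           # (B, 64, H, W)
        alpha = self.deflection_head(feats)   # (B, 2, H, W)
        source = self.source_head(feats)      # (B, 1, H, W)
        logits = self.classifier(feats)       # (B, n_classes)

        # Lensing equation: beta = theta - alpha(theta).
        beta = self.theta.unsqueeze(0) - alpha.permute(0, 2, 3, 1)
        # Forward lensing of the predicted source: I(theta) = S(beta(theta)).
        relensed = F.grid_sample(source, beta, align_corners=True)
        return logits, relensed, source


# Usage: the physics term ties the learned deflection field and source to the data.
model = SimpleLensPINN(n_classes=3)
obs = torch.rand(4, 1, 64, 64)          # batch of observed lensed images
labels = torch.randint(0, 3, (4,))      # dark matter model labels
logits, relensed, source = model(obs)
loss = F.cross_entropy(logits, labels) + F.mse_loss(relensed, obs)
loss.backward()
```

The key design point illustrated here is that the re-lensing step is fully differentiable (via grid sampling), so the lensing equation constrains the learned deflection field and source reconstruction while the same encoder features feed the dark matter classification head.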
