

Poster Session in Workshop: Scientific Methods for Understanding Neural Networks

Evaluating Loss Landscapes from a Topology Perspective

Tiankai Xie · Caleb Geniesse · Jiaqing Chen · Yaoqing Yang · Dmitriy Morozov · Michael Mahoney · Ross Maciejewski · Gunther Weber

[ Project Page ]
Sun 15 Dec 4:30 p.m. PST — 5:30 p.m. PST

Abstract: Characterizing the loss of a neural network with respect to model parameters, i.e., the $loss\ landscape$, can provide valuable insights into properties of that model. Various methods for visualizing loss landscapes have been proposed, but less emphasis has been placed on quantifying and extracting actionable, reproducible insights from these complex representations. Inspired by powerful tools from topological data analysis (TDA) for summarizing the structure of high-dimensional data, here we characterize the underlying $shape$ (or topology) of loss landscapes and quantify this topology to reveal new insights about neural networks. To relate our findings to the machine learning (ML) literature, we compute simple performance metrics (accuracy, error), and we characterize the local structure of loss landscapes using Hessian-based metrics (largest eigenvalue, trace, eigenvalue spectral density). Following this approach, we study established models from image pattern recognition (e.g., ResNets) and scientific ML (e.g., physics-informed neural networks), and we show how quantifying the shape of loss landscapes can provide new insights into model performance and learning dynamics. We find that the number of saddle points in the loss landscape is positively correlated with the Hessian-based metrics, and that both quantities are negatively correlated with model performance.
