Poster Session in Workshop: Scientific Methods for Understanding Neural Networks
Understanding the Limitations of B-Spline KANs: Convergence Dynamics and Computational Efficiency
Avik Pal · Dipankar Das
Kolmogorov-Arnold Networks (KANs) have recently emerged as a potential alternative to multi-layer perceptrons (MLPs), leveraging the Kolmogorov-Arnold representation theorem to place learnable activation functions on each edge rather than fixed activations at the nodes. While KANs have shown promise on small-scale problems, achieving similar or better performance with fewer parameters, our empirical investigations reveal significant limitations when these networks are scaled to real-world tasks. Specifically, KANs incur higher computational costs and deliver weaker performance than MLPs, rendering them unsuitable for large-scale deep learning. Our study explores these limitations, examining KANs across diverse tasks, including computer vision and scientific machine learning, and provides a detailed comparison with MLPs.
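The theorem underpinning KANs states that any continuous multivariate function on a bounded domain can be written using only sums and compositions of univariate functions:

```latex
f(x_1, \dots, x_n) \;=\; \sum_{q=0}^{2n} \Phi_q\!\left( \sum_{p=1}^{n} \phi_{q,p}(x_p) \right)
```

To make the edge-wise learnable activations concrete, below is a minimal PyTorch sketch of a single B-spline KAN layer. It assumes the common parameterization from the KAN literature, where each edge computes phi(x) = w_b * silu(x) + spline(x) with learned spline coefficients; the names (`bspline_basis`, `KANLayer`) and hyperparameter choices are illustrative, not the authors' implementation.

```python
import torch
import torch.nn.functional as F


def bspline_basis(x, grid, k):
    """Cox-de Boor recursion: degree-k B-spline bases per input feature.

    x:    (batch, in_dim) inputs
    grid: (in_dim, n_knots) knot positions per input dimension
    returns (batch, in_dim, n_knots - k - 1) basis values
    """
    x = x.unsqueeze(-1)
    # degree-0 bases: indicator of the knot interval containing x
    bases = ((x >= grid[:, :-1]) & (x < grid[:, 1:])).to(x.dtype)
    for d in range(1, k + 1):
        left = (x - grid[:, : -(d + 1)]) / (grid[:, d:-1] - grid[:, : -(d + 1)])
        right = (grid[:, d + 1:] - x) / (grid[:, d + 1:] - grid[:, 1:-d])
        bases = left * bases[..., :-1] + right * bases[..., 1:]
    return bases


class KANLayer(torch.nn.Module):
    """One KAN layer: a learnable univariate activation on every edge (i -> o)."""

    def __init__(self, in_dim, out_dim, n_grid=5, k=3, x_range=(-1.0, 1.0)):
        super().__init__()
        self.k = k
        # uniform knot grid over x_range, extended by k knots on each side
        h = (x_range[1] - x_range[0]) / n_grid
        grid = x_range[0] + h * torch.arange(-k, n_grid + k + 1, dtype=torch.float32)
        self.register_buffer("grid", grid.expand(in_dim, -1).contiguous())
        # per-edge spline coefficients: the "learnable activation" parameters
        self.spline_coef = torch.nn.Parameter(
            0.1 * torch.randn(out_dim, in_dim, n_grid + k)
        )
        # residual base branch: phi(x) = w_b * silu(x) + spline(x)
        self.base_weight = torch.nn.Parameter(torch.randn(out_dim, in_dim) / in_dim**0.5)

    def forward(self, x):
        base = F.silu(x) @ self.base_weight.T
        b = bspline_basis(x, self.grid, self.k)     # (batch, in_dim, n_basis)
        spline = torch.einsum("bin,oin->bo", b, self.spline_coef)
        return base + spline


# usage: a two-layer KAN on a toy batch with inputs in [-1, 1]
net = torch.nn.Sequential(KANLayer(4, 8), KANLayer(8, 1))
y = net(torch.rand(32, 4) * 2 - 1)
print(y.shape)                                      # torch.Size([32, 1])
```

The sketch also makes the computational-cost argument visible: where an MLP edge is a single multiply, every KAN edge evaluates a full B-spline expansion (here n_grid + k basis functions), so cost grows multiplicatively with grid resolution.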