Poster Session in Workshop: Scientific Methods for Understanding Neural Networks

We Need Far Fewer Unique Filters Than We Thought

Zahra Babaiee · Peyman M. Kiasari · Daniela Rus · Radu Grosu

Sun 15 Dec 4:30 p.m. PST — 5:30 p.m. PST

Abstract:

We challenge the conventional belief that CNNs require numerous distinct kernels for effective image classification. Our study of depthwise separable CNNs (DS-CNNs) reveals that a drastically reduced set of unique filters can maintain performance. Replacing thousands of trained filters in ConvNextv2 with the closest linear transform of a filter from a small set results in only small accuracy drops. Remarkably, initializing the depthwise filters with only 8 unique frozen filters achieves a minimal accuracy drop on ImageNet. Our findings question the necessity of numerous filters in DS-CNNs and offer insights into more efficient network designs.
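The substitution step described above can be sketched in a few lines. The following is a minimal NumPy illustration, assuming the "closest linear transform" means a per-filter affine fit (a scale and an offset) against each candidate in the small filter bank; the function name, tensor shapes, and fitting procedure are our assumptions for illustration, not the authors' released code.

```python
import numpy as np

def replace_with_closest_transform(kernels, bank):
    """For each trained depthwise kernel, find the bank filter whose best
    affine fit (scale * filter + offset) minimizes squared error, and
    return the fitted replacements.

    kernels: (N, k, k) trained depthwise kernels
    bank:    (M, k, k) small set of unique filters (e.g., M = 8)
    """
    replaced = np.empty_like(kernels)
    for i, w in enumerate(kernels):
        w_flat = w.ravel()
        best_err, best_fit = np.inf, None
        for f in bank:
            # Least-squares fit of w ~ a * f + b via a 2-column design matrix.
            A = np.stack([f.ravel(), np.ones(f.size)], axis=1)
            (a, b), *_ = np.linalg.lstsq(A, w_flat, rcond=None)
            approx = a * f + b
            err = np.sum((w_flat - approx.ravel()) ** 2)
            if err < best_err:
                best_err, best_fit = err, approx
        replaced[i] = best_fit
    return replaced

# Example: approximate 64 trained 7x7 depthwise kernels with a bank of 8 filters.
kernels = np.random.randn(64, 7, 7)
bank = np.random.randn(8, 7, 7)
new_kernels = replace_with_closest_transform(kernels, bank)
```

In a real network, the replaced kernels would be copied back into the depthwise convolution layers before measuring the accuracy drop the abstract reports.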
