Poster Session in Workshop: Scientific Methods for Understanding Neural Networks
We Need Far Fewer Unique Filters Than We Thought
Zahra Babaiee · Peyman M. Kiasari · Daniela Rus · Radu Grosu
Abstract:
We challenge the conventional belief that CNNs require numerous distinct kernels for effective image classification. Our study of depthwise separable CNNs (DS-CNNs) reveals that a drastically reduced set of unique filters can maintain performance. Replacing thousands of trained filters in ConvNextv2 with the closest linear transform from a small filter set results in only small accuracy drops. Remarkably, initializing depthwise filters with only 8 unique frozen filters achieves a minimal accuracy drop on ImageNet. Our findings question the necessity of numerous filters in DS-CNNs, offering insights into more efficient network designs.
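A minimal sketch of the replacement idea described above, not the authors' released code: each trained depthwise kernel is approximated by its closest linear combination (in the least-squares sense) of a small fixed filter set. The basis construction here (random orthogonal filters), the function name `project_depthwise_filters`, and the layer dimensions are illustrative assumptions; the paper uses its own small filter set.

```python
import torch

def project_depthwise_filters(weight: torch.Tensor, basis: torch.Tensor) -> torch.Tensor:
    """weight: (C, 1, k, k) trained depthwise kernels.
    basis:  (m, k, k) small fixed filter set (e.g. m = 8).
    Returns the least-squares reconstruction of each kernel
    as a linear combination of the basis filters."""
    C, _, k, _ = weight.shape
    m = basis.shape[0]
    A = basis.reshape(m, k * k).T               # (k*k, m) design matrix
    Y = weight.reshape(C, k * k).T              # (k*k, C) target kernels, one per column
    coeffs = torch.linalg.lstsq(A, Y).solution  # (m, C) per-channel mixing coefficients
    return (A @ coeffs).T.reshape(C, 1, k, k)   # reconstructed kernels

# Usage: approximate the kernels of one depthwise layer.
k, C, m = 7, 96, 8                              # ConvNextv2 uses 7x7 depthwise convolutions
basis = torch.linalg.qr(torch.randn(k * k, m)).Q.T.reshape(m, k, k)  # assumed basis choice
trained = torch.randn(C, 1, k, k)               # stands in for a trained layer's weights
approx = project_depthwise_filters(trained, basis)
print(torch.norm(trained - approx) / torch.norm(trained))  # relative reconstruction error
```

For the frozen-filter result, the same basis would instead be used at initialization: each channel's depthwise kernel is fixed to one of the 8 filters and never trained, so only the pointwise (1x1) convolutions learn.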