Poster in Workshop: Workshop on Machine Learning and Compression

QIANets: Quantum-Integrated Adaptive Networks for Reduced Latency and Improved Inference Times in CNN Models

Zhumazhan Balapanov · Edward Magongo · Vanessa Matvei · Olivia Holmberg · Kevin Zhu · Jonathan Pei


Abstract:

Convolutional neural networks (CNNs) have made significant advances in computer vision tasks, yet their high inference times and latency often limit real-world applicability. While model compression techniques have gained popularity as solutions, they often overlook the critical balance between low latency and uncompromised accuracy. By harnessing three quantum-inspired concepts – quantum-inspired pruning, tensor decomposition, and annealing-based matrix factorization – we introduce QIANets: a novel approach that redesigns the traditional GoogLeNet, DenseNet, and ResNet-18 architectures to process more parameters and computations while maintaining low inference times. Despite experimental limitations, the method was tested and evaluated, demonstrating reductions in inference times along with effective preservation of accuracy.
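The abstract does not detail how the decomposition step works, but the general idea behind compressing a layer via tensor/matrix decomposition can be sketched as follows. This is a minimal illustration, not the authors' implementation: it factorizes a single weight matrix with truncated SVD, the simplest instance of the low-rank family; the matrix size and rank are arbitrary choices for the example.

```python
import numpy as np

def low_rank_factorize(W: np.ndarray, rank: int):
    """Approximate W (m x n) as U @ V with U (m x rank), V (rank x n).

    Illustrative only: QIANets' actual decomposition and the
    annealing-based factorization are not specified in the abstract.
    """
    u, s, vt = np.linalg.svd(W, full_matrices=False)
    U = u[:, :rank] * s[:rank]  # absorb singular values into U
    V = vt[:rank, :]
    return U, V

rng = np.random.default_rng(0)
W = rng.standard_normal((256, 256))     # stand-in for a layer's weights
U, V = low_rank_factorize(W, rank=32)

# Parameter count drops from m*n to rank*(m+n),
# here 65536 -> 16384, i.e. 4x fewer multiply-accumulates per forward pass.
print(W.size, U.size + V.size)
```

At inference time the single matrix multiply `W @ x` is replaced by two smaller ones, `U @ (V @ x)`, which is where the latency reduction comes from when the chosen rank is small relative to the layer dimensions.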