Poster in Workshop: Machine Learning for Systems
ACLTuner: A Profiling-Driven Fast Tuning for Optimized Deep Learning Inference
Yongin Kwon · Joo Hyoung Cha · Jubin Lee · Misun Yu · Jeman Park · Jemin Lee
Deep learning has expanded its footprint across diverse domains. The performance of these computations hinges on the interplay between deep learning compilers and inference libraries. While compilers adapt efficiently to new deep learning operations or models, their tuning processes are prohibitively time-consuming. In contrast, inference libraries offer fast execution but limited adaptability. To address these challenges, we propose ACLTuner, which optimizes execution configurations using existing inference library kernels. ACLTuner identifies and assigns the optimal kernel through targeted device profiling. Compared to ArmNN, AutoTVM, Ansor, ONNXRuntime, and TFLite, ACLTuner not only achieves up to 2.0x faster execution across seven deep learning models, but also reduces the average tuning time by 95%.