What if Neural Networks had SVDs?
Alexander Mathiasen, Frederik Hvilshøj, Jakob Rødsgaard Jørgensen, Anshul Nasery, Davide Mottin
Spotlight presentation: Orals & Spotlights Track 18: Deep Learning
on 2020-12-09T07:00:00-08:00 - 2020-12-09T07:10:00-08:00
Poster Session 4
on 2020-12-09T09:00:00-08:00 - 2020-12-09T11:00:00-08:00
GatherTown: Deep learning ( Town C1 - Spot B1 )
Abstract: Various Neural Networks employ time-consuming matrix operations like matrix inversion. Many such matrix operations are faster to compute given the Singular Value Decomposition (SVD). Techniques from (Zhang et al., 2018; Mhammedi et al., 2017) allow using the SVD in Neural Networks without computing it. In theory, these techniques can speed up matrix operations; in practice, however, they are not fast enough. We present an algorithm that is fast enough to speed up several matrix operations. The algorithm increases the degree of parallelism of an underlying matrix multiplication H*X, where H is an orthogonal matrix represented by a product of Householder matrices.
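For intuition, below is a minimal NumPy sketch of the sequential baseline the abstract alludes to: computing H*X by applying d Householder reflections one after another, which keeps H implicit but makes the loop over reflections the parallelism bottleneck. The function name and toy sizes are illustrative, and this is not the paper's parallel algorithm.

```python
import numpy as np

def householder_product_matmul(vs, X):
    """Compute H @ X where H = H_1 H_2 ... H_d and each
    H_i = I - 2 v_i v_i^T / ||v_i||^2 is a Householder matrix.

    Applying the reflections one at a time costs O(d * n * m) for
    X of shape (n, m) and never forms H explicitly, but the d
    reflections must be applied sequentially (hypothetical sketch,
    not the paper's parallel algorithm).
    """
    for v in reversed(vs):            # H_d acts on X first
        v = v / np.linalg.norm(v)
        X = X - 2.0 * np.outer(v, v @ X)
    return X

# Toy usage with random reflection vectors.
rng = np.random.default_rng(0)
n, d, m = 8, 8, 4
vs = [rng.standard_normal(n) for _ in range(d)]
X = rng.standard_normal((n, m))
Y = householder_product_matmul(vs, X)

# H is orthogonal, so the product preserves column norms.
assert np.allclose(np.linalg.norm(Y, axis=0), np.linalg.norm(X, axis=0))
```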