Poster in Workshop: ML with New Compute Paradigms
High-speed random number generator co-processors for machine learning and AI acceleration
Shannon Egan
Random number generation is a hidden draw on runtime, power, and compute resources in machine learning (ML) and artificial intelligence (AI) applications. Random numbers are required, for instance, to initialize network weights, shuffle and partition datasets during training, and generate synthetic data. In each of these cases, the volume of random numbers required scales with the size of the model and dataset.

We argue that there is an opportunity to accelerate ML applications at the system level by interfacing existing general-purpose and specialized hardware with a high-speed random number generator (RNG) co-processor. Such a co-processor would support ML workloads by independently generating random numbers from a high-speed physical entropy source and feeding them into the application, freeing up CPU or GPU resources that would otherwise be spent running software-based pseudo-random number generators.

Reducing RNG overhead will become even more critical to application performance as the industry increasingly adopts probabilistic approaches that rely heavily on random sampling. Use cases include differential privacy, uncertainty tracking, and Bayesian methods for ML, as well as synthetic data for generative AI training.
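The scaling claim above can be made concrete with a minimal sketch. The model shape, sample count, and epoch count below are illustrative assumptions, not figures from the poster; the point is simply that the number of random draws grows with parameter count (for initialization) and with dataset size times epochs (for shuffling).

```python
import numpy as np

# Illustrative counts of random draws in a small training setup.
# All sizes here are assumed for the example, not taken from the abstract.
rng = np.random.default_rng(seed=0)

# 1. Weight initialization: one random draw per parameter.
layer_shapes = [(784, 512), (512, 256), (256, 10)]  # assumed MLP layers
weights = [rng.standard_normal(shape) for shape in layer_shapes]
n_init_draws = sum(w.size for w in weights)

# 2. Dataset shuffling: one permutation of n_samples indices per epoch.
n_samples, n_epochs = 60_000, 10
for _ in range(n_epochs):
    perm = rng.permutation(n_samples)
n_shuffle_draws = n_samples * n_epochs

# Both totals scale directly with model and dataset size.
print(f"weight-init draws: {n_init_draws}")
print(f"shuffle draws over training: {n_shuffle_draws}")
```

In this sketch the draws come from NumPy's software PRNG on the CPU; the poster's proposal is that a co-processor with a physical entropy source could supply the same streams without consuming CPU or GPU cycles.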