

Poster
in
Workshop: ML with New Compute Paradigms

Analog Gradient Calculation of Optical Activation Function Material

Jakub Kostial · Filipe Ferreira

Sun 15 Dec, noon – 1:40 p.m. PST

Abstract:

Most realisations of Optical Neural Networks target inference. In backpropagation, the update process runs in the opposite direction to the forward pass and relies on the gradient of the loss function with respect to the weights. Calculating this requires the gradient of the activation function with respect to its inputs, so the training of Optical Neural Networks is typically implemented in the digital domain. This is realised using simulation, but simulation is slow and cannot replicate the subtle experimental imperfections of the system; one mitigation is to train with additional noise, but this remains suboptimal. We propose a novel method to implement backpropagation through nonlinear optical units using small-signal modulation of the laser inputs and a lock-in amplifier. This allows the gradient of the activation function with respect to its inputs to be calculated synchronously with the forward pass using high-speed analog circuitry. We experimentally demonstrate the method for a semiconductor optical amplifier in the nonlinear regime and show that the measured gradient agrees well with the gradient calculated from a steady-state analytical model of the device. The method can extract the phase of the signal, enabling the encoding of negative and complex weights, and it applies to any optical nonlinear material as well as to electro-optic activation functions in free space or on photonic integrated platforms. Importantly, this gradient measurement is resilient to device drift induced by environment or ageing, which would affect finite-difference techniques and require periodic recalibration.
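The measurement rests on a first-order Taylor expansion: dithering the input as x + a·sin(ωt) makes the output contain a term f′(x)·a·sin(ωt), which a lock-in amplifier recovers by mixing with the reference tone and low-pass filtering. Below is a minimal numerical sketch of this principle, assuming a generic saturable-gain toy model in place of the authors' actual SOA; the function names and parameters (activation, p_sat, f_mod, and so on) are illustrative assumptions, not values from the paper.

```python
import numpy as np

def activation(p_in, g0=8.0, p_sat=1.0):
    """Toy saturable-gain nonlinearity standing in for the SOA activation."""
    return g0 * p_in / (1.0 + p_in / p_sat)

def analytic_gradient(p_in, g0=8.0, p_sat=1.0):
    """Closed-form d(output)/d(input), used only to check the estimate."""
    return g0 / (1.0 + p_in / p_sat) ** 2

def lock_in_gradient(p_bias, amp=1e-3, f_mod=1e5, fs=1e8, n_cycles=200):
    """Estimate f'(p_bias) by dithering the input and demodulating at f_mod."""
    t = np.arange(int(fs * n_cycles / f_mod)) / fs
    dither = amp * np.sin(2 * np.pi * f_mod * t)
    out = activation(p_bias + dither)        # forward pass with small-signal tone
    ref = np.sin(2 * np.pi * f_mod * t)      # in-phase lock-in reference
    # Mixing then averaging over whole cycles acts as the lock-in's low-pass
    # filter: only the first-order term f'(x)*amp*sin(wt) survives, with
    # mean(sin^2) = 1/2, hence the factor of 2.
    return 2.0 * np.mean(out * ref) / amp

for p in [0.1, 0.5, 1.0, 2.0]:
    est = lock_in_gradient(p)
    print(f"P={p:4.1f}: lock-in {est:.4f}  analytic {analytic_gradient(p):.4f}")
```

In hardware this demodulation happens in analog circuitry at the modulation frequency, so the gradient tracks the device's actual operating point; any slow drift simply shifts the bias about which the dither is applied, which is why no separate calibration step is needed.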
