Poster in Workshop: Optimization for ML Workshop

Discrete-Continuous Variational Optimization with Local Gradients

Jonathan Warrell · Francesco Alesiani · Cameron Smith · Anja Mösch · Martin Renqiang Min


Abstract:

Variational optimization (VO) offers a general approach for handling objectives which may involve discontinuities, or whose gradients are difficult to calculate. By introducing a variational distribution over the parameter space, such objectives are smoothed and rendered amenable to VO methods. In some problems, however, local gradient information is available, and a purely variational approach neglects it. We therefore consider a general method for incorporating local information via an augmented VO objective function to accelerate convergence and improve accuracy. We show how our augmented objective can be viewed as an instance of multilevel optimization. Finally, we show our method can train a genetic algorithm simulator, using a recursive Wasserstein distance objective.
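The smoothing idea behind VO, and the way local gradients can be combined with it, can be illustrated with a minimal sketch. The example below is an assumption-laden toy, not the authors' augmented objective: it assumes a Gaussian variational distribution over a scalar parameter, splits a toy objective into a differentiable part (handled with its local, pathwise gradient) and a discontinuous step part (handled with a score-function estimator), and sums the two gradient estimates.

```python
import numpy as np

rng = np.random.default_rng(0)

def f_smooth(x):
    # Differentiable part of the toy objective.
    return (x - 2.0) ** 2

def f_step(x):
    # Discontinuous part: a fixed penalty for x < 0, with zero gradient a.e.
    return 5.0 * (x < 0.0)

mu, sigma, lr = -3.0, 1.0, 0.05
for _ in range(500):
    eps = rng.standard_normal(256)
    x = mu + sigma * eps  # samples from the variational distribution N(mu, sigma^2)
    # Local (pathwise) gradient for the smooth part: d f_smooth / dx = 2(x - 2).
    local_grad = np.mean(2.0 * (x - 2.0))
    # Score-function (REINFORCE) estimate for the discontinuous part,
    # with a mean baseline for variance reduction.
    step = f_step(x)
    score_grad = np.mean((step - step.mean()) * eps) / sigma
    mu -= lr * (local_grad + score_grad)

print(mu)
```

Under this smoothed objective E[f_smooth(x) + f_step(x)], x ~ N(mu, 1), the iterate settles near mu = 2, where the quadratic term is minimized and the step penalty is rarely triggered; a plain score-function estimator over the whole objective would reach the same point but with higher gradient variance from the smooth term.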
