

Poster

Scalable Kernel Inverse Optimization

Youyuan Long · Tolga Ok · Pedro Zattoni Scroccaro · Peyman Mohajerin Esfahani

Fri 13 Dec 11 a.m. PST — 2 p.m. PST

Abstract:

Inverse Optimization (IO) is a framework for learning the unknown objective function of an expert decision-maker from a dataset of past decisions. In this paper, we extend the hypothesis class of IO objective functions to a reproducing kernel Hilbert space (RKHS), thereby lifting the feature space to an infinite-dimensional one. We show that a variant of the representer theorem holds for a specific training loss, which reformulates the problem as a finite-dimensional convex optimization. To address the scalability issues often encountered with kernel methods, we further propose a Sequential Selection Optimization (SSO) algorithm to efficiently train the resulting Kernel Inverse Optimization (KIO) model. Finally, we demonstrate the generalization capabilities of the proposed KIO model and the effectiveness of the SSO algorithm through learning-from-demonstration tasks on the MuJoCo benchmark.
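The key consequence of the representer theorem mentioned above is that, although the hypothesis lives in an infinite-dimensional RKHS, the optimal function can be written as a finite kernel expansion over the training points, f(x) = Σᵢ αᵢ k(xᵢ, x). The sketch below illustrates only this generic representer-theorem form; the data, the RBF kernel choice, and the coefficients are hypothetical placeholders, not the paper's actual training procedure (which solves a convex program for the coefficients).

```python
import numpy as np

def rbf_kernel(X, Z, gamma=1.0):
    """Gram matrix K[i, j] = exp(-gamma * ||X[i] - Z[j]||^2)."""
    d2 = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

rng = np.random.default_rng(0)
X_train = rng.normal(size=(50, 3))   # hypothetical expert decision data
alpha = rng.normal(size=50)          # placeholder coefficients; in the paper
                                     # these come from a convex program

def f(x):
    """Evaluate the RKHS hypothesis at a new point x of shape (3,):
    a finite kernel expansion over the training points."""
    return rbf_kernel(x[None, :], X_train).ravel() @ alpha
```

Because the expansion involves a dense n × n Gram matrix, training cost grows with the dataset size, which is the scalability issue the SSO algorithm is designed to mitigate.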
