Poster

DppNet: Approximating Determinantal Point Processes with Deep Networks

Zelda Mariet · Yaniv Ovadia · Jasper Snoek

East Exhibition Hall B, C #123

Keywords: [ Generative Models ] [ Deep Learning ] [ Submodular Optimization ] [ Algorithms -> Representation Learning; Optimization ]


Abstract:

Determinantal point processes (DPPs) provide an elegant and versatile way to sample sets of items that balance point-wise quality with set-wise diversity. For this reason, they have gained prominence in many machine learning applications that rely on subset selection. However, sampling from a DPP over a ground set of size N is a costly operation, requiring in general an O(N^3) preprocessing cost and an O(Nk^3) sampling cost for subsets of size k. We approach this problem by introducing DppNets: generative deep models that produce DPP-like samples for arbitrary ground sets. We develop an inhibitive attention mechanism based on transformer networks that captures a notion of dissimilarity between feature vectors. We show theoretically that such an approximation is sensible, as it maintains the guarantees of inhibition or dissimilarity that make DPPs so powerful and unique. Empirically, we show across multiple datasets that DppNets are orders of magnitude faster than competing approaches to DPP sampling, while generating high-likelihood samples and performing as well as DPPs on downstream tasks.
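As context for the complexity figures above, here is a minimal NumPy sketch of the standard exact DPP sampler (Hough et al. 2006; Kulesza and Taskar 2012, Algorithm 1), the baseline that DppNet is designed to sidestep. The function name and the toy kernel are illustrative, not from the paper; the eigendecomposition is the O(N^3) preprocessing step, and the projection loop accounts for the O(Nk^3) sampling cost.

```python
import numpy as np

def sample_dpp(L, rng=None):
    """Exact DPP sampling (Hough et al. 2006; Kulesza & Taskar 2012, Alg. 1).

    L must be a symmetric PSD kernel over N items. The eigendecomposition
    is the O(N^3) preprocessing step; the projection loop below costs
    roughly O(N k^3) in total for a size-k sample.
    """
    rng = np.random.default_rng() if rng is None else rng

    # O(N^3) preprocessing: eigendecompose the kernel.
    eigvals, eigvecs = np.linalg.eigh(L)

    # Phase 1: keep eigenvector v_n with probability lambda_n / (1 + lambda_n).
    keep = rng.random(len(eigvals)) < eigvals / (1.0 + eigvals)
    V = eigvecs[:, keep]

    # Phase 2: draw one item per kept eigenvector, then project the
    # remaining subspace away from the selected item's coordinate.
    sample = []
    while V.shape[1] > 0:
        probs = np.sum(V**2, axis=1) / V.shape[1]
        probs /= probs.sum()  # guard against numerical drift
        i = rng.choice(len(probs), p=probs)
        sample.append(i)
        # Zero out coordinate i across the span of V, drop the used
        # column, and re-orthonormalize the rest (O(N k^2) per step).
        j = np.argmax(np.abs(V[i, :]))
        V = V - np.outer(V[:, j] / V[i, j], V[i, :])
        V = np.delete(V, j, axis=1)
        if V.shape[1] > 0:
            V, _ = np.linalg.qr(V)
    return sorted(sample)

if __name__ == "__main__":
    X = np.random.default_rng(0).normal(size=(50, 5))
    L = X @ X.T  # illustrative PSD kernel over N=50 items
    print(sample_dpp(L))
```

On a kernel built from N feature vectors, the eigendecomposition alone dominates as N grows large, which is the regime where an amortized sampler such as DppNet pays off.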
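The inhibitive attention mechanism is only described at a high level here. As a purely hypothetical sketch (not the paper's construction), one way to make dot-product attention favor dissimilar items is to negate the similarity scores before normalizing, so that items far from the query receive the most weight:

```python
import numpy as np

def inhibitive_attention(Q, K, V):
    """Hypothetical dissimilarity-based attention sketch.

    Standard scaled dot-product attention up-weights similar keys;
    negating the scores before the softmax up-weights dissimilar ones
    instead. Illustrative only -- consult the paper for the actual
    mechanism used in DppNet.
    """
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)           # standard similarity scores
    neg = -scores                            # inhibition: dissimilarity wins
    neg -= neg.max(axis=-1, keepdims=True)   # stabilize the softmax
    weights = np.exp(neg)
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V
```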
