Workshop
Discrete Optimization in Machine Learning (DISCML): Structure and Scalability
Stefanie Jegelka · Andreas Krause · Jeffrey A Bilmes · Pradeep Ravikumar
Emerald Bay B, Harveys Convention Center Floor (CC)
Fri 7 Dec, 7:30 a.m. PST
Optimization problems with discrete solutions (e.g., combinatorial optimization) are becoming increasingly important in machine learning. The core of statistical machine learning is to infer conclusions from data, and when the variables underlying the data are discrete, both inferring the model from the data and making predictions with the estimated model are discrete optimization problems. Two factors complicate matters: first, many discrete problems are computationally hard in general, and second, machine learning applications often demand solving such problems at very large scales.
The focus of this year's workshop is on structures that enable scalability. Examples of important structures include sparse graphs, the marginal polytope, and submodularity. Which properties of a problem make it possible to efficiently obtain exact or good approximate solutions? What challenges are posed by parallel and distributed processing? Which discrete problems in machine learning are in need of more scalable algorithms? How can we make discrete algorithms scalable while retaining solution quality? Some heuristics perform well in practice but as yet lack a theoretical foundation; what explains this good behavior?
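To make one of these structures concrete, here is a minimal Python sketch (illustrative only, not part of the workshop materials; the function names and coverage data are hypothetical) of the classic greedy algorithm for maximizing a monotone submodular function under a cardinality constraint. For such functions, greedy selection is guaranteed to achieve at least a (1 - 1/e) fraction of the optimal value (Nemhauser, Wolsey, and Fisher, 1978).

def greedy_submodular_max(ground_set, f, k):
    # Greedily pick up to k elements, each time adding the element
    # with the largest marginal gain f(S | {e}) - f(S).
    selected = set()
    for _ in range(k):
        best_elem, best_gain = None, 0.0
        for e in ground_set - selected:
            gain = f(selected | {e}) - f(selected)
            if gain > best_gain:
                best_elem, best_gain = e, gain
        if best_elem is None:  # no remaining element improves the objective
            break
        selected.add(best_elem)
    return selected

# Hypothetical example: coverage, a canonical monotone submodular objective.
# Each element covers a subset of items; f(S) counts the items covered by S.
coverage = {"a": {1, 2, 3}, "b": {3, 4}, "c": {4, 5, 6}, "d": {1, 6}}

def f(S):
    covered = set()
    for e in S:
        covered |= coverage[e]
    return len(covered)

print(greedy_submodular_max(set(coverage), f, k=2))  # {'a', 'c'}: covers all 6 items

At larger scales, the naive inner loop above is typically replaced by lazy evaluation of marginal gains (Minoux, 1978), which exploits diminishing returns to avoid recomputing gains that cannot have increased.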