Workshop
Interpretable Machine Learning for Complex Systems
Andrew Wilson · Been Kim · William Herlands
AC Barcelona, Sagrada Familia
Thu 8 Dec, 11 p.m. PST
Complex machine learning models, such as deep neural networks, have recently achieved great predictive successes for visual object recognition, speech perception, language modelling, and information retrieval. These predictive successes are enabled by automatically learning expressive features from the data. Typically, these learned features are a priori unknown, difficult to engineer by hand, and hard to interpret. This workshop is about interpreting the structure and predictions of these complex models.
Interpreting the learned features and the outputs of complex systems allows us to more fundamentally understand our data and predictions, and to build more effective models. For example, we may build a complex model to predict long-range crime activity. But by interpreting the learned structure of the model, we can gain new insights into the processes driving crime events, enabling us to develop more effective public policy. Moreover, if we learn, for example, that the model is making good predictions by discovering how the geometry of clusters of crime events affects future activity, we can use this knowledge to design even more successful predictive models.
This one-day workshop is focused on interpretable methods for machine learning, with an emphasis on the ability to learn structure that provides new fundamental insights into the data, in addition to accurate predictions. We will consider a wide range of topics, including deep learning, kernel methods, tensor methods, generalized additive models, rule-based models, symbolic regression, visual analytics, and causality. A poster session, coffee breaks, and a guided panel discussion will encourage interaction between attendees. We wish to carefully review and enumerate modern approaches to the challenges of interpretability, share insights into the underlying properties of popular machine learning algorithms, and discuss future directions.