Tutorial
Incentive-Aware Machine Learning: A Tale of Robustness, Fairness, Improvement, and Performativity
Chara Podimata
Virtual
When an algorithm makes consequential decisions about people's lives, people have an incentive to respond to it strategically in order to obtain a more desirable decision. Unless the algorithm accounts for this strategizing, it may end up producing decisions that are incompatible with the original policy's goal. This has been the mantra of the rapidly growing research area of incentive-aware Machine Learning (ML). In this tutorial, we introduce this area to the broader ML community. After a primer on the necessary background, we introduce the audience to the four perspectives that have been studied so far: the robustness perspective (where the decision-maker tries to create algorithms that are robust to strategizing), the fairness perspective (where we study the inequalities that arise or are reinforced as a result of strategizing), the improvement perspective (where the learner tries to incentivize effort exertion towards genuinely improving the agents' feature points, rather than merely gaming the decision rule), and the performativity perspective (where the decision-maker wishes to achieve a notion of stability in these settings).
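To make the strategizing described above concrete, below is a minimal illustrative sketch in the spirit of the standard strategic classification model (Hardt et al., 2016): agents best-respond to a published linear classifier under a quadratic manipulation cost. The function name, parameters, and numbers are illustrative assumptions for this page, not material from the tutorial itself.

import numpy as np

def best_response(x, w, b, gain=1.0):
    """Agent's best response to a published linear classifier
    sign(w @ x - b), under a quadratic manipulation cost ||x' - x||^2.

    The cheapest way to cross the decision boundary is to move
    orthogonally toward it; the agent manipulates only when the cost
    of that move is less than the value of acceptance (`gain`).
    """
    score = w @ x - b
    if score >= 0:
        return x  # already accepted: no incentive to move
    dist = -score / np.linalg.norm(w)  # L2 distance to the boundary
    if dist ** 2 > gain:
        return x  # crossing the boundary costs more than it is worth
    # project onto the boundary (plus a tiny margin to get accepted)
    return x + (dist + 1e-9) * w / np.linalg.norm(w)

rng = np.random.default_rng(0)
w, b = np.array([1.0, 1.0]), 1.0
X = rng.normal(size=(1000, 2))
X_strategic = np.array([best_response(x, w, b) for x in X])
print(f"acceptance rate without strategizing: {np.mean(X @ w - b >= 0):.2f}")
print(f"acceptance rate with strategizing:    {np.mean(X_strategic @ w - b >= 0):.2f}")

Running the sketch shows the acceptance rate rising once agents strategize, even though no agent's underlying qualification has changed; this gap between gaming and genuine improvement is exactly what the robustness and improvement perspectives address.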
Schedule
Mon 11:00 a.m. - 12:50 p.m. | Tutorial part 1 | Chara Podimata
Mon 12:50 p.m. - 1:00 p.m. | Q&A | Chara Podimata
Mon 1:00 p.m. - 1:05 p.m. | Break to welcome panelists |
Mon 1:05 p.m. - 1:30 p.m. | Panel | Meena Jagadeesan · Avrim Blum · Jon Kleinberg · Celestine Mendler-Dünner · Jennifer Wortman Vaughan · Chara Podimata