Invited Talks
Radhika Nagpal

In nature, groups of thousands of individuals cooperate to create complex structures purely through local interactions – from cells that form complex organisms, to social insects like termites that build meter-high mounds and army ants that self-assemble entire nests, to the complex and mesmerizing motion of fish schools and bird flocks. What makes these systems so fascinating to scientists and engineers alike is that even though each individual has limited ability, as a collective they achieve tremendous complexity.

What would it take to create our own artificial collectives of the scale and complexity that nature achieves? My lab investigates this question by using inspiration from biological collectives to create robotic systems, e.g., the Kilobot thousand-robot swarm inspired by cells and the Termes robots inspired by mound-building termites. In this talk, I will discuss a recent project in my group – Eciton robotica – to create a self-assembling swarm of soft climbing robots inspired by the living architectures of army ants. Our work spans soft robotics, new theoretical models of self-organized self-assembly, and new field experiments in biology. Most critically, our work derives from the collective intelligence of engineers and scientists working together.

Luis von Ahn

Duolingo is the most popular way to learn languages in the world. With over half a billion exercises completed every day, we have the largest dataset of people learning languages ever amassed. In this talk, I will describe the different ways in which we use AI to improve how well we teach and how we keep our learners engaged.

Mary L. Gray

If data is power, this keynote asks what methodologies and frameworks, beyond measuring bias and fairness in ML, might best serve communities that are otherwise written off as inevitable ‘data gaps’. To address this question, the talk applies design justice principles articulated in 2020 by scholar Costanza-Chock to the case of community-based organizations (CBOs) serving marginalized Black and Latinx communities in North Carolina. These CBOs, part of an 8-month study of community healthcare work, have become pivotal conduits for COVID-19 health information and equitable vaccine access. As such, they create and collect the so-called ‘sparse data’ of marginalized groups often missing from healthcare analyses. How might health equity—a cornerstone of social justice—be better served by equipping CBOs to collect community-level data and set the agendas for what to share and learn from the people that they serve? The talk will open with an analysis of the limits of ML models that prioritize the efficiencies of scale over attention to just and inclusive sampling. It will then examine how undertheorized investments in measuring bias and fairness in data and decision-making systems distract us from considering the value of collecting data with rather than for communities. Outlining an early learning theory proposed …

Peter Bartlett (Posner Lecture)

Deep learning has revealed some major surprises from the perspective of statistical complexity: even without any explicit effort to control model complexity, these methods find prediction rules that give a near-perfect fit to noisy training data and yet exhibit excellent prediction performance in practice. This talk surveys work on methods that predict accurately in probabilistic settings despite fitting the training data too well. We present a characterization of linear regression problems for which the minimum norm interpolating prediction rule has near-optimal prediction accuracy. The characterization shows that overparameterization is essential for benign overfitting in this setting: the number of directions in parameter space that are unimportant for prediction must significantly exceed the sample size. We discuss implications for robustness to adversarial examples, and we describe extensions to ridge regression and barriers to analyzing benign overfitting via model-dependent generalization bounds.
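As an illustrative aside (not drawn from the talk itself): in an overparameterized linear model, the minimum-norm interpolating rule can be computed with the Moore–Penrose pseudoinverse. The sketch below, with hypothetical dimensions and synthetic data, simply verifies that this rule fits noisy training labels exactly while only a handful of parameter directions actually matter for prediction.

```python
# Minimal sketch of minimum-norm interpolation in overparameterized linear
# regression (illustrative only; dimensions and data are hypothetical).
import numpy as np

rng = np.random.default_rng(0)
n, d = 50, 2000                          # sample size far below parameter count
X = rng.standard_normal((n, d))
theta_star = np.zeros(d)
theta_star[:5] = 1.0                     # only a few directions matter for prediction
y = X @ theta_star + 0.5 * rng.standard_normal(n)   # noisy training labels

# Minimum-norm rule that interpolates the data: theta_hat = X^+ y
theta_hat = np.linalg.pinv(X) @ y

print("training residual:", np.linalg.norm(X @ theta_hat - y))  # ~0: fits the noise exactly
print("estimator norm:   ", np.linalg.norm(theta_hat))
```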

Alessio Figalli

At the end of the 18th century, Gaspard Monge introduced the optimal transport problem to understand the most efficient way of transporting a distribution of material from one place to another to build fortifications. In the last 30 years, this theory has found various applications in many areas of mathematics. However, more recently, optimal transport has also become a very powerful tool in many areas of machine learning. In this talk, we will give an overview of optimal transport, with some selected applications.
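For reference, a standard modern statement of Monge's problem (not taken from the talk): given probability measures $\mu$ and $\nu$ and a cost function $c$, one seeks a transport map pushing $\mu$ forward to $\nu$ at minimal total cost.

```latex
% Monge's optimal transport problem (standard modern formulation)
\inf_{T \,:\, T_{\#}\mu = \nu} \int_{X} c\bigl(x, T(x)\bigr)\, d\mu(x),
\qquad \text{with Monge's original cost } c(x,y) = |x - y|.
```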

Meredith Broussard

In October 2021, X officially became an option for gender on US passports. What are the computational changes necessary to adapt to this more inclusive gender option? In this talk, Meredith Broussard investigates why large-scale computer systems are stuck using 1950s ideas about gender, and what is needed to update sociotechnical systems. She explores how allies can leverage public interest technology in order to think beyond the gender binary, interrogate and audit software systems, and create code for social good.

Gabor Lugosi (Breiman Lecture)

In this talk, I discuss mean estimation based on independent observations, perhaps the most basic problem in statistics. Despite its long history, the subject has attracted a flurry of renewed activity. Motivated by applications in machine learning and data science, the problem has been viewed from new angles, from both statistical and computational points of view. We review some recent results on the statistical performance of mean estimators that allow heavy tails and adversarial contamination in the data, focusing on high-dimensional aspects.
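As a concrete illustration of the kind of estimator studied in this literature (the specific construction and data below are my example, not necessarily the talk's): the classic median-of-means estimator splits the sample into blocks, averages each block, and reports the median of the block means, which keeps heavy-tailed outliers from dominating the estimate.

```python
# Illustrative sketch of the median-of-means mean estimator for heavy-tailed data
# (an example construction; the distribution below is hypothetical).
import numpy as np

def median_of_means(x, num_blocks, rng=None):
    """Partition the sample into blocks, average each block, return the median of the block means."""
    rng = np.random.default_rng() if rng is None else rng
    x = rng.permutation(x)                     # random assignment of points to blocks
    blocks = np.array_split(x, num_blocks)
    return float(np.median([block.mean() for block in blocks]))

rng = np.random.default_rng(0)
sample = rng.standard_t(df=2, size=10_000)     # heavy-tailed: Student t with infinite variance
print("empirical mean: ", sample.mean())
print("median of means:", median_of_means(sample, num_blocks=50, rng=rng))
```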