

Poster
in
Workshop: Attributing Model Behavior at Scale (ATTRIB)

Approximations to worst-case data dropping: unmasking failure modes

Jenny Huang · David Burt · Tin Nguyen · Yunyi Shen · Tamara Broderick


Abstract:

A data analyst would worry about generalization if dropping a very small fraction of data points from a study could change its substantive conclusions. Finding the worst-case data subset to drop poses a combinatorial optimization problem. To overcome this intractability, recent works propose using additive approximations, which treat the contribution of a collection of data points as the sum of their individual contributions, and greedy approximations, which iteratively select the single most impactful point, drop it, and re-run the data analysis without it [Broderick et al., 2020, Kuschnig et al., 2021]. We identify that, even in a setting as simple as OLS linear regression, many of these approximations can break down in realistic data arrangements. Several of our examples reflect masking, where one data point may hide or conceal the effect of another data point. We provide recommendations for users and suggest directions for future development.
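To make the two approximations concrete, here is a hypothetical sketch (not the authors' code) contrasting them with exact brute-force search for a 1-D OLS slope. The goal is to find the size-k subset whose removal most decreases the fitted slope; the additive approximation scores points by their individual leave-one-out effects, while the greedy approximation refits after each single drop. All function names and the toy setup are illustrative assumptions.

```python
# Illustrative sketch: additive vs. greedy vs. exact worst-case data dropping
# for minimizing the OLS slope. Not the authors' implementation.
import itertools
import numpy as np

def slope(x, y):
    # OLS slope with intercept, via least squares on the design [1, x].
    A = np.column_stack([np.ones_like(x), x])
    return np.linalg.lstsq(A, y, rcond=None)[0][1]

def additive_drop(x, y, k):
    # Additive approximation: score each point by its leave-one-out effect
    # on the slope, then drop the k points with the most negative
    # individual effects (treating joint effects as the sum of singles).
    n = len(x)
    full = slope(x, y)
    effects = np.array([slope(np.delete(x, i), np.delete(y, i)) - full
                        for i in range(n)])
    return set(np.argsort(effects)[:k].tolist())

def greedy_drop(x, y, k):
    # Greedy approximation: repeatedly drop the single point whose removal
    # most decreases the slope, refitting the regression after each drop.
    keep = list(range(len(x)))
    dropped = set()
    for _ in range(k):
        best_i, best_val = None, np.inf
        for i in keep:
            rest = [j for j in keep if j != i]
            val = slope(x[rest], y[rest])
            if val < best_val:
                best_i, best_val = i, val
        keep.remove(best_i)
        dropped.add(best_i)
    return dropped

def exact_drop(x, y, k):
    # Exact combinatorial search over all size-k subsets; only feasible
    # for tiny n, which is why the approximations above are used at all.
    n = len(x)
    best, best_val = None, np.inf
    for subset in itertools.combinations(range(n), k):
        rest = [j for j in range(n) if j not in subset]
        val = slope(x[rest], y[rest])
        if val < best_val:
            best, best_val = set(subset), val
    return best
```

In a masking arrangement of the kind the abstract describes, two points can jointly move the slope a lot while each point's individual leave-one-out effect is small (the other point "hides" it), so both the additive and greedy subsets can differ from the exact worst-case subset.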
