Workshop
Navigating the Broader Impacts of AI Research
Carolyn Ashurst · Rosie Campbell · Deborah Raji · Solon Barocas · Stuart Russell
Sat 12 Dec, 5:30 a.m. PST
Following growing concerns about both harmful research impact and research conduct in computer science, including concerns about research published at NeurIPS, this year’s conference introduced two new mechanisms for ethical oversight: a requirement that authors include a “broader impact statement” in their paper submissions, and additional evaluation criteria asking paper reviewers to identify any potential ethical issues with the submissions.
These efforts reflect a recognition that existing research norms have failed to address the impacts of AI research, and they take place against the backdrop of a larger reckoning with the role of AI in perpetuating injustice. The changes have been met with both praise and criticism: some within and outside the community see them as a crucial first step towards integrating ethical reflection and review into the research process, fostering necessary changes to protect populations at risk of harm. Others worry that AI researchers are not well placed to recognize and reason about the potential impacts of their work, since effective ethical deliberation may require different expertise and the involvement of other stakeholders.
This debate reveals that, even as the AI research community begins to grapple with the legitimacy of certain research questions and to reflect critically on its research practices, many open questions remain about how to ensure effective ethical oversight. This workshop therefore aims to examine how concerns about harmful impacts should affect the way the research community develops its research agendas, conducts its research, evaluates its contributions, and handles the publication and dissemination of its findings. This event complements other NeurIPS workshops this year devoted to normative issues in AI and builds on others from years past, but it adopts a distinct focus on the ethics of research practice and the ethical obligations of researchers.
Schedule
Sat 5:30 a.m. - 5:45 a.m. | Welcome
Sat 5:45 a.m. - 6:15 a.m. | Morning keynote (Keynote) | Hanna Wallach · Rosie Campbell
Sat 6:15 a.m. - 7:15 a.m. | Ethical oversight in the peer review process (Discussion panel) | Sarah Brown · Heather Douglas · Iason Gabriel · Brent Hecht · Rosie Campbell
Sat 7:15 a.m. - 7:30 a.m. | Morning break
Sat 7:30 a.m. - 8:30 a.m. | Harms from AI research (Discussion panel) | Anna Lauren Hoffmann · Nyalleng Moorosi · Vinay Prabhu · Deborah Raji · Jacob Metcalf · Sherry Stanley
Sat 8:30 a.m. - 9:30 a.m. | How should researchers engage with controversial applications of AI? (Discussion panel) | Logan Koepke · Cathy O'Neil · Tawana Petty · Cynthia Rudin · Deborah Raji · Shawn Bushway
Sat 9:30 a.m. - 10:30 a.m. | Lunch, and watch lightning talks (in parallel) from workshop submissions
Sat 10:30 a.m. - 11:30 a.m. | Discussions with authors of submitted papers (Breakouts)
Sat 11:30 a.m. - 12:30 p.m. | Responsible publication: NLP case study (Discussion panel) | Miles Brundage · Bryan McCann · Colin Raffel · Natalie Schluter · Zeerak Waseem · Rosie Campbell
Sat 12:30 p.m. - 12:45 p.m. | Afternoon break
Sat 12:45 p.m. - 1:45 p.m. | Strategies for anticipating and mitigating risks (Discussion panel) | Ashley Casovan · Timnit Gebru · Shakir Mohamed · Solon Barocas · Aviv Ovadya
Sat 1:45 p.m. - 2:45 p.m. | The roles of different parts of the research ecosystem in navigating broader impacts (Discussion panel) | Josh Greenberg · Liesbeth Venema · Ben Zevenbergen · Lilly Irani · Solon Barocas
Sat 2:45 p.m. - 3:00 p.m. | Closing remarks
Lightning talks
- Auditing Government AI: Assessing ethical vulnerability of machine learning (Lightning talk, 5-7 mins) | Alayna A Kennedy
- An Ethical Highlighter for People-Centric Dataset Creation (Lightning talk, 5-7 mins) | Margot Hanley · Apoorv Khandelwal · Hadar Averbuch-Elor · Noah Snavely · Helen Nissenbaum
- The Managerial Effects of Algorithmic Fairness Activism (Lightning talk, 5-7 mins) | Bo Cowgill · Fabrizio Dell'Acqua · Sandra Matz
- Biased Programmers? Or Biased Data? A Field Experiment in Operationalizing AI Ethics (Lightning talk, 5-7 mins) | Bo Cowgill · Fabrizio Dell'Acqua · Augustin Chaintreau · Nakul Verma · Samuel Deng · Daniel Hsu
- Ethical Testing in the Real World: Recommendations for Physical Testing of Adversarial Machine Learning Attacks (Lightning talk, 5-7 mins) | Ram Shankar Siva Kumar · Maggie Delano · Kendra Albert · Afsaneh Rigot · Jonathon Penney
- Nose to Glass: Looking In to Get Beyond (Lightning talk, 5-7 mins) | Josephine Seah
- Training Ethically Responsible AI Researchers: a Case Study (Lightning talk, 5-7 mins) | Hang Yuan · Claudia Vanea · Federica Lucivero · Nina Hallowell
- Like a Researcher Stating Broader Impact For the Very First Time (Lightning talk, 5-7 mins) | Grace Abuhamad · Claudel Rheault
- Anticipatory Ethics and the Role of Uncertainty (Lightning talk, 5-7 mins) | Priyanka Nanayakkara · Nicholas Diakopoulos · Jessica Hullman
- Non-Portability of Algorithmic Fairness in India (Lightning talk, 5-7 mins) | Nithya Sambasivan · Erin Arnesen · Ben Hutchinson · Vinodkumar Prabhakaran
- An Open Review of OpenReview: A Critical Analysis of the Machine Learning Conference Review Process (Lightning talk, 5-7 mins) | David Tran · Alex Valtchanov · Keshav R Ganapathy · Raymond Feng · Eric Slud · Micah Goldblum · Tom Goldstein
- AI in the “Real World”: Examining the Impact of AI Deployment in Low-Resource Contexts (Lightning talk, 5-7 mins) | Chinasa T. Okolo
- Ideal theory in AI ethics (Lightning talk, 5-7 mins) | Daniel Estrada
- Overcoming Failures of Imagination in AI Infused System Development and Deployment (Lightning talk, 5-7 mins) | Margarita Boyarskaya · Alexandra Olteanu · Kate Crawford