Workshop
Algorithmic Fairness through the Lens of Time
Awa Dieng · Miriam Rateike · Golnoosh Farnadi · Ferdinando Fioretto · Jessica Schrouff
Room 252 - 254
Fri 15 Dec, 7 a.m. PST
We are proposing the Algorithmic Fairness through the Lens of Time (AFLT) workshop, the fourth edition of this workshop series on algorithmic fairness. Previous editions have looked at causal approaches to fairness and at the intersection of fairness with other fields of trustworthy machine learning, namely interpretability, robustness, and privacy. The aim of this year's workshop is to provide a venue to discuss foundational work on fairness, challenge existing static definitions of fairness (group, individual, causal), and explore the long-term effects of fairness methods. More importantly, the workshop aims to foster an open discussion on how to reconcile existing fairness frameworks with the development and proliferation of large generative models.

Topic

Fairness has predominantly been studied under a static regime that assumes an unchanging data generation process [Hardt et al., 2016a, Dwork et al., 2012, Agarwal et al., 2018, Zafar et al., 2017]. However, these approaches neglect the dynamic interplay between algorithmic decisions and the individuals they impact, which has been shown to be prevalent in practical settings [Chaney et al., 2018, Fuster et al., 2022]. This observation has highlighted the need to study the long-term effects of fairness mitigation strategies and to incorporate dynamical systems into the development of fair algorithms.

Despite prior research identifying several impactful scenarios where such dynamics can occur, including bureaucratic processes [Liu et al., 2018], social learning [Heidari et al., 2019], recourse [Karimi et al., 2020], and strategic behavior [Hardt et al., 2016b, Perdomo et al., 2020], extensive investigation of the long-term effects of fairness methods remains limited. Initial studies have shown that enforcing static fairness constraints in dynamical systems can lead to unfair data distributions and may perpetuate or even amplify biases [Zhang et al., 2020, Creager et al., 2020, D'Amour et al., 2020].

Additionally, the rise of powerful large generative models has brought to the forefront the need to understand fairness in evolving systems. The general capabilities and widespread use of these models raise the critical question of how to assess these models for fairness [Luccioni et al., 2023] and mitigate observed biases [Ranaldi et al., 2023, Ma et al., 2023] from a long-term perspective. Importantly, mainstream fairness frameworks have been developed around classification and prediction tasks. How can we reconcile these existing techniques (pre-processing, in-processing, and post-processing) with the development of large generative models?

Given these questions, this workshop aims to deeply investigate how to address fairness concerns in settings where learning occurs sequentially or in evolving environments. We are particularly interested in addressing open questions in the field, such as:

• What are the long-term effects of static fairness methods?
• How can we develop adaptable fairness approaches under known or unknown dynamic environments?
• Are there trade-offs between short-term and long-term fairness?
• How can we incorporate existing fairness frameworks into the development of large generative models?
• How can we ensure long-term fairness in large generative models via feedback loops?
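The overview above argues that enforcing a static fairness constraint inside a feedback loop can leave a group worse off over time. The sketch below illustrates that point with a deliberately stylized two-group simulation: a policy accepts the same fraction of each group every round (a demographic-parity-style rule), and acceptance outcomes shift each group's qualification rate for the next round. It is a toy illustration, not the method of any cited paper; the population model, the feedback rule, and all parameter values are assumptions chosen only to make the long-term dynamic visible.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stylized two-group population; group B starts with a lower mean qualification.
# All numbers are illustrative assumptions, not estimates from any real dataset.
mean = {"A": 0.6, "B": 0.4}
n_steps, n_people, effect = 10, 5_000, 0.05

for t in range(n_steps):
    # Draw this round's qualification scores for each group.
    scores = {g: rng.normal(mean[g], 0.1, n_people).clip(0, 1) for g in mean}

    # Static "fairness" policy: accept the top 50% of each group, so both
    # groups have identical acceptance rates regardless of qualification.
    accepted = {g: s >= np.quantile(s, 0.5) for g, s in scores.items()}

    # Feedback: the success rate among accepted individuals (fraction truly
    # qualified, i.e. score >= 0.5) nudges the group's future qualification
    # up or down, mimicking loans repaid vs. defaulted.
    for g in mean:
        success = np.mean((scores[g] >= 0.5)[accepted[g]])
        mean[g] = float(np.clip(mean[g] + effect * (success - 0.5), 0.0, 1.0))

    print(f"round {t}: qualification A={mean['A']:.3f}  B={mean['B']:.3f}")
```

Under these assumptions the gap between the groups widens round after round, illustrating why a fairness criterion that looks satisfied at every single step can still produce diverging outcomes over time.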
Timezone: America/Los_Angeles
Schedule
Fri 7:00 a.m. - 7:10 a.m. | Opening remarks (Opening remarks by organizers)
Fri 7:10 a.m. - 7:40 a.m. | Invited Talk 1: Richard Zemel: A Framework for Responsible Deployment of Large Language Models (Invited talk)
Fri 7:40 a.m. - 7:50 a.m. | Invited talk Q&A (Q&A)
Fri 7:50 a.m. - 8:00 a.m. | Contributed Talk 1: Backtracking Counterfactual Fairness (Talk) | Lucius Bynum · Joshua Loftus · Julia Stoyanovich
Fri 8:00 a.m. - 8:05 a.m. | Contributed Talk 1 Q&A (Q&A)
Fri 8:05 a.m. - 8:15 a.m. | Contributed Talk 2: Designing Long-term Group Fair Policies in Dynamical Systems (Talk) | Miriam Rateike · Isabel Valera · Patrick Forré
Fri 8:15 a.m. - 8:20 a.m. | Contributed Talk 2 Q&A (Q&A)
Fri 8:20 a.m. - 9:00 a.m. | Coffee break and poster session 1 (Poster session 1)
Fri 9:00 a.m. - 9:30 a.m. | Invited Talk 2: Celestine Mendler-Dünner: Performativity and Power in Prediction (Invited talk)
Fri 9:30 a.m. - 9:45 a.m. | Invited talk Q&A (Q&A)
Fri 9:45 a.m. - 11:00 a.m. | Roundtables (Roundtables)
Fri 11:00 a.m. - 11:03 a.m. | Information-Theoretic Bounds on The Removal of Attribute-Specific Bias From Neural Networks (Spotlight) | Jiazhi Li · Mahyar Khayatkhoei · Jiageng Zhu · Hanchen Xie · Mohamed Hussein · Wael Abd-Almageed
Fri 11:00 a.m. - 11:03 a.m. | Is My Prediction Arbitrary? Confounding Effects of Variance in Fair Classification (Spotlight) | A. Feder Cooper · Katherine Lee · Madiha Choksi · Solon Barocas · Christopher De Sa · James Grimmelmann · Jon Kleinberg · Siddhartha Sen · Baobao Zhang
Fri 11:00 a.m. - 11:03 a.m. | Procedural Fairness Through Decoupling Objectionable Data Generating Components (Spotlight) | Zeyu Tang · Jialu Wang · Yang Liu · Peter Spirtes · Kun Zhang
Fri 11:00 a.m. - 11:03 a.m. | Exploring Predictive Arbitrariness as Unfairness via Predictive Multiplicity and Predictive Churn (Spotlight) | Jamelle Watson-Daniels · Lance Strait · Mehadi Hassen · Amy Skerry-Ryan · Alexander D'Amour
Fri 11:00 a.m. - 11:03 a.m. | Improving Fairness-Accuracy tradeoff with few Test Samples under Covariate Shift (Spotlight) | Shreyas Havaldar · Jatin Chauhan · Karthikeyan Shanmugam · Jay Nandy · Aravindan Raghuveer
Fri 11:00 a.m. - 11:03 a.m. | Loss Modeling for Multi-Annotator Datasets (Spotlight) | Uthman Jinadu · Jesse Annan · Shanshan Wen · Yi Ding
Fri 11:00 a.m. - 11:03 a.m. | Measuring fairness of synthetic oversampling on credit datasets (Spotlight) | Decio Miranda Filho · Thalita Veronese · Marcos M. Raimundo
Fri 11:00 a.m. - 11:03 a.m. | Transparency Through the Lens of Recourse and Manipulation (Spotlight) | Yatong Chen · Andrew Estornell · Yevgeniy Vorobeychik · Yang Liu
Fri 11:00 a.m. - 11:03 a.m. | Variation of Gender Biases in Visual Recognition Models Before and After Finetuning (Spotlight) | Jaspreet Ranjit · Tianlu Wang · Baishakhi Ray · Vicente Ordonez
Fri 11:00 a.m. - 12:00 p.m. | Lunch break
Fri 11:03 a.m. - 11:06 a.m. | On Comparing Fair classifiers under Data Bias (Spotlight) | mohit sharma · Amit Deshpande · Rajiv Ratn Shah
Fri 11:03 a.m. - 11:06 a.m. | Reevaluating COMPAS: Base Rate Tracking and Racial Bias (Spotlight) | Victor Crespo · Javier Rando · Benjamin Eva · Vijay Keswani · Walter Sinnott-Armstrong
Fri 11:03 a.m. - 11:06 a.m. | Performativity and Prospective Fairness (Spotlight) | Sebastian Zezulka · Konstantin Genin
Fri 11:03 a.m. - 11:06 a.m. | Explaining knock-on effects of bias mitigation (Spotlight) | Svetoslav Nizhnichenkov · Rahul Nair · Elizabeth Daly · Brian Mac Namee
Fri 11:03 a.m. - 11:06 a.m. | Addressing The Cost Of Fairness In A Data Market Over Time (Spotlight) | Augustin Chaintreau · Roland Maio · Juba Ziani
Fri 11:03 a.m. - 11:06 a.m. | On Mitigating Unconscious Bias through Bandits with Evolving Biased Feedback (Spotlight) | Matthew Faw · Constantine Caramanis · Sanjay Shakkottai · Jessica Hoffmann
Fri 11:03 a.m. - 11:06 a.m. | Everything, Everywhere All in One Evaluation: Using Multiverse Analysis to Evaluate the Influence of Model Design Decisions on Algorithmic Fairness (Spotlight) | Jan Simson · Florian Pfisterer · Christoph Kern
Fri 11:03 a.m. - 11:06 a.m. | Fairer and More Accurate Models Through NAS (Spotlight) | Richeek Das · Samuel Dooley
Fri 11:03 a.m. - 11:06 a.m. | Causal Dependence Plots (Spotlight) | Joshua Loftus · Lucius Bynum · Sakina Hansen
Fri 11:06 a.m. - 11:09 a.m. | Fairness in link analysis ranking algorithms (Spotlight) | Ana-Andreea Stoica · Augustin Chaintreau · Nelly Litvak
Fri 11:06 a.m. - 11:09 a.m. | A Causal Perspective on Label Bias (Spotlight) | Vishwali Mhasawade · Alexander D'Amour · Stephen Pfohl
Fri 11:06 a.m. - 11:09 a.m. | Remembering to Be Fair: On Non-Markovian Fairness in Sequential Decision Making (Spotlight) | Parand A. Alamdari · Toryn Klassen · Elliot Creager · Sheila McIlraith
Fri 11:06 a.m. - 11:09 a.m. | FAIR-Ensemble: Homogeneous Deep Ensembling Naturally Attenuates Disparate Group Performances (Spotlight) | Wei-Yin Ko · Daniel Dsouza · Karina Nguyen · Randall Balestriero · Sara Hooker
Fri 11:06 a.m. - 11:09 a.m. | Fair Clustering: Critique and Future Directions (Spotlight) | John Dickerson · Seyed Esmaeili · Jamie Morgenstern · Claire Jie Zhang
Fri 11:06 a.m. - 11:09 a.m. | Seller-side Outcome Fairness in Online Marketplaces (Spotlight) | Zikun Ye · Reza Yousefi Maragheh · Lalitesh Morishetti · Shanu Vashishtha · Jason Cho · Kaushiki Nag · Sushant Kumar · Kannan Achan
Fri 11:06 a.m. - 11:09 a.m. | Mitigating stereotypical biases in text to image generative systems (Spotlight) | Piero Esposito · Parmida Atighehchian · Anastasis Germanidis · Deepti Ghadiyaram
Fri 11:06 a.m. - 11:09 a.m. | Dr. FERMI: A Stochastic Distributionally Robust Fair Empirical Risk Minimization Framework (Spotlight) | Sina Baharlouei · Meisam Razaviyayn
Fri 11:10 a.m. - 11:20 a.m. | Designing Long-term Group Fair Policies in Dynamical Systems (Oral) | Miriam Rateike · Isabel Valera · Patrick Forré
Fri 11:20 a.m. - 11:30 a.m. | Backtracking Counterfactual Fairness (Oral) | Lucius Bynum · Joshua Loftus · Julia Stoyanovich
Fri 11:30 a.m. - 11:40 a.m. | Learning in reverse causal strategic environments with ramifications on two sided markets (Oral) | Seamus Somerstep · Yuekai Sun · Ya'acov Ritov
Fri 11:40 a.m. - 11:50 a.m. | Repairing Regressors for Fair Binary Classification at Any Decision Threshold (Oral) | Kweku Kwegyir-Aggrey · Jessica Dai · A. Feder Cooper · John Dickerson · Suresh Venkatasubramanian · Keegan Hines
Fri 12:00 p.m. - 12:30 p.m. | Invited Talk 3: Kun Zhang: At the Intersection of Algorithmic Fairness and Causal Representation Learning (Invited talk)
Fri 12:30 p.m. - 12:45 p.m. | Invited talk Q&A (Q&A)
Fri 12:45 p.m. - 12:55 p.m. | Contributed Talk 3: Learning in reverse causal strategic environments with ramifications on two sided markets (Talk) | Seamus Somerstep · Yuekai Sun · Ya'acov Ritov
Fri 12:55 p.m. - 1:00 p.m. | Contributed Talk 3 Q&A (Q&A)
Fri 1:00 p.m. - 1:10 p.m. | Contributed Talk 4: Repairing Regressors for Fair Binary Classification at Any Decision Threshold (Talk) | Kweku Kwegyir-Aggrey · Jessica Dai · A. Feder Cooper · John Dickerson · Suresh Venkatasubramanian · Keegan Hines
Fri 1:10 p.m. - 1:15 p.m. | Contributed Talk 4 Q&A (Q&A)
Fri 1:20 p.m. - 1:50 p.m. | Invited Talk 4: Ioana Baldini: Uncovering Hidden Bias: Auditing Language Models with a Social Stigma Lens (Invited talk)
Fri 1:50 p.m. - 2:00 p.m. | Invited talk Q&A (Q&A)
Fri 2:00 p.m. - 2:40 p.m. | Panel: Kun Zhang, Ioana Baldini, Baobao Zhang, Tom Goldstein, Yacine Jernite (Panel)
Fri 2:40 p.m. - 2:50 p.m. | Closing remarks (Closing remarks)
Fri 2:50 p.m. - 3:30 p.m. | Poster session 2 (Poster)
- Information-Theoretic Bounds on The Removal of Attribute-Specific Bias From Neural Networks (Poster) | Jiazhi Li · Mahyar Khayatkhoei · Jiageng Zhu · Hanchen Xie · Mohamed Hussein · Wael Abd-Almageed
- Is My Prediction Arbitrary? Confounding Effects of Variance in Fair Classification (Poster) | A. Feder Cooper · Katherine Lee · Madiha Choksi · Solon Barocas · Christopher De Sa · James Grimmelmann · Jon Kleinberg · Siddhartha Sen · Baobao Zhang
- Procedural Fairness Through Decoupling Objectionable Data Generating Components (Poster) | Zeyu Tang · Jialu Wang · Yang Liu · Peter Spirtes · Kun Zhang
- Exploring Predictive Arbitrariness as Unfairness via Predictive Multiplicity and Predictive Churn (Poster) | Jamelle Watson-Daniels · Lance Strait · Mehadi Hassen · Amy Skerry-Ryan · Alexander D'Amour
- Improving Fairness-Accuracy tradeoff with few Test Samples under Covariate Shift (Poster) | Shreyas Havaldar · Jatin Chauhan · Karthikeyan Shanmugam · Jay Nandy · Aravindan Raghuveer
- Loss Modeling for Multi-Annotator Datasets (Poster) | Uthman Jinadu · Jesse Annan · Shanshan Wen · Yi Ding
- Measuring fairness of synthetic oversampling on credit datasets (Poster) | Decio Miranda Filho · Thalita Veronese · Marcos M. Raimundo
- Transparency Through the Lens of Recourse and Manipulation (Poster) | Yatong Chen · Andrew Estornell · Yevgeniy Vorobeychik · Yang Liu
- Variation of Gender Biases in Visual Recognition Models Before and After Finetuning (Poster) | Jaspreet Ranjit · Tianlu Wang · Baishakhi Ray · Vicente Ordonez
- On Comparing Fair classifiers under Data Bias (Poster) | mohit sharma · Amit Deshpande · Rajiv Ratn Shah
- Reevaluating COMPAS: Base Rate Tracking and Racial Bias (Poster) | Victor Crespo · Javier Rando · Benjamin Eva · Vijay Keswani · Walter Sinnott-Armstrong
- Performativity and Prospective Fairness (Poster) | Sebastian Zezulka · Konstantin Genin
- Explaining knock-on effects of bias mitigation (Poster) | Svetoslav Nizhnichenkov · Rahul Nair · Elizabeth Daly · Brian Mac Namee
- Addressing The Cost Of Fairness In A Data Market Over Time (Poster) | Augustin Chaintreau · Roland Maio · Juba Ziani
- On Mitigating Unconscious Bias through Bandits with Evolving Biased Feedback (Poster) | Matthew Faw · Constantine Caramanis · Sanjay Shakkottai · Jessica Hoffmann
- Everything, Everywhere All in One Evaluation: Using Multiverse Analysis to Evaluate the Influence of Model Design Decisions on Algorithmic Fairness (Poster) | Jan Simson · Florian Pfisterer · Christoph Kern
- Fairer and More Accurate Models Through NAS (Poster) | Richeek Das · Samuel Dooley
- Causal Dependence Plots (Poster) | Joshua Loftus · Lucius Bynum · Sakina Hansen
- Fairness in link analysis ranking algorithms (Poster) | Ana-Andreea Stoica · Augustin Chaintreau · Nelly Litvak
- A Causal Perspective on Label Bias (Poster) | Vishwali Mhasawade · Alexander D'Amour · Stephen Pfohl
- Remembering to Be Fair: On Non-Markovian Fairness in Sequential Decision Making (Poster) | Parand A. Alamdari · Toryn Klassen · Elliot Creager · Sheila McIlraith
- FAIR-Ensemble: Homogeneous Deep Ensembling Naturally Attenuates Disparate Group Performances (Poster) | Wei-Yin Ko · Daniel Dsouza · Karina Nguyen · Randall Balestriero · Sara Hooker
- Fair Clustering: Critique and Future Directions (Poster) | John Dickerson · Seyed Esmaeili · Jamie Morgenstern · Claire Jie Zhang
- Seller-side Outcome Fairness in Online Marketplaces (Poster) | Zikun Ye · Reza Yousefi Maragheh · Lalitesh Morishetti · Shanu Vashishtha · Jason Cho · Kaushiki Nag · Sushant Kumar · Kannan Achan
- Mitigating stereotypical biases in text to image generative systems (Poster) | Piero Esposito · Parmida Atighehchian · Anastasis Germanidis · Deepti Ghadiyaram
- Dr. FERMI: A Stochastic Distributionally Robust Fair Empirical Risk Minimization Framework (Poster) | Sina Baharlouei · Meisam Razaviyayn
- On The Vulnerability of Fairness Constrained Learning to Malicious Noise (Poster) | Avrim Blum · Princewill Okoroafor · Aadirupa Saha · Kevin Stangl
- Model Fairness is Constrained by Decision Making Strategy Design (Poster) | Alexandra Stolyarova
- Algorithmic Fairness Reproducibility: A Close Look at Data Usage over the Years (Poster) | Jan Simson · Alessandro Fabris · Christoph Kern
- Bayesian Multilevel Regression and Poststratification for Dynamic Diversity-Aware Modeling (Poster) | Nicole Osayande · Danilo Bzdok
- The Long-Term Effects of Personalization: Evidence from Youtube (Poster) | Andreas Haupt · Mihaela Curmei · François-Marie de Jouvencel · Marc Faddoul · Benjamin Recht · Dylan Hadfield-Menell
- Allocating Bonus Points in Sequential Matchings with Preference Dynamics (Poster) | Meirav Segal · Liu Leqi · Anne-Marie George · Christos Dimitrakakis · Hoda Heidari
- Equal Opportunity under Performative Effects (Poster) | Sophia Gunluk · Dhanya Sridhar · Antonio Gois · Simon Lacoste-Julien
- Assessing Perceived Fairness in Machine Learning (ML) Process: A Conceptual Framework (Poster) | Anoop Mishra · Deepak Khazanchi
- Unbiased Sequential Prediction for Fairness in Predictions-to-Decisions Pipelines (Poster) | Georgy Noarov · Ramya Ramalingam · Aaron Roth · Stephan Xie
- Deep Reinforcement Learning for Efficient and Fair Allocation of Healthcare Resources (Poster) | Yikuan Li
- What Comes After Auditing: Distinguishing Between Algorithmic Errors and Task Specification Issues (Poster) | Charvi Rastogi
- Designing Long-term Group Fair Policies in Dynamical Systems (Poster) | Miriam Rateike · Isabel Valera · Patrick Forré
- Backtracking Counterfactual Fairness (Poster) | Lucius Bynum · Joshua Loftus · Julia Stoyanovich
- Learning in reverse causal strategic environments with ramifications on two sided markets (Poster) | Seamus Somerstep · Yuekai Sun · Ya'acov Ritov
- Repairing Regressors for Fair Binary Classification at Any Decision Threshold (Poster) | Kweku Kwegyir-Aggrey · Jessica Dai · A. Feder Cooper · John Dickerson · Suresh Venkatasubramanian · Keegan Hines
- It's About Time: Fairness and Temporal Depth (Poster) | Joshua Loftus
- Are computational interventions to advance fair lending robust to different modeling choices about the nature of lending? (Poster) | Benjamin Laufer · Manish Raghavan · Solon Barocas
- Improving Fairness in Facial Recognition Models with Distribution Shifts (Poster) | Gianluca Barone · Aashrit Cunchala · Rudy Nunez · Nicole Yang
- Detecting Electricity Service Equity Issues with Transfer Counterfactual Learning on Large-Scale Outage Datasets (Poster) | Song Wei · Xiangrui Kong · Sarah Huestis-Mitchell · Yao Xie · Shixiang Zhu · Alinson Xavier · Feng Qiu
- Democratise with Care: The need for fairness specific features in user-interface based open source AutoML tools (Poster) | Sundaraparipurnan Narayanan