FairMask: Better Fairness via Model-based Rebalancing of Protected Attributes
Machine learning software can generate models that inappropriately discriminate against specific protected social groups (e.g., groups based on gender, ethnicity, etc.). Motivated by these findings, software engineering researchers have proposed many methods for mitigating such discriminatory effects. While those methods are effective at mitigating bias, few of them can explain the root cause of that bias.
Our goal is to better detect and mitigate algorithmic discrimination in machine learning software. Here we propose FairMask, a model-based extrapolation method capable of both mitigating bias and explaining its cause. In FairMask, protected attributes are represented by models learned from the other independent variables (these models offer extrapolations over the space between existing examples). We then use the extrapolation models to relabel protected attributes seen later in testing data or at deployment time. Our approach aims to offset the biased predictions of the classification model by rebalancing the distribution of protected attributes.
Our experiments show that, without compromising the original model's performance, FairMask achieves significantly better group and individual fairness (as measured by multiple metrics) than benchmark methods. Moreover, compared to another instance-based rebalancing method, our model-based approach runs faster and thus scales better. In terms of individual fairness (as indicated by flip rates), FairMask ensures perfect individual fairness while the other benchmarks cannot.
Based on the above, we conclude that we can recommend FairMask for faster and more effective bias mitigation. FairMask eliminates the risk of individual unfairness: two individuals who differ only in their protected attributes will always receive the same prediction outcome.
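The abstract's core mechanism can be illustrated with a minimal sketch (hypothetical helper names, not the authors' implementation): learn an "extrapolation" model that predicts the protected attribute from the other independent variables, then use that model to relabel the protected attribute in test data before the downstream classifier sees it. Here the extrapolation model is a toy majority-vote lookup rather than the learner used in the paper.

```python
def train_mask_model(X_other, protected):
    """Toy stand-in for the extrapolation model: record the majority
    protected-attribute value per value of the first non-protected feature."""
    buckets = {}
    for row, p in zip(X_other, protected):
        buckets.setdefault(row[0], []).append(p)
    return {key: max(set(vals), key=vals.count) for key, vals in buckets.items()}

def mask_protected(mask_model, X_other, default=0):
    """Relabel the protected attribute with the mask model's extrapolation."""
    return [mask_model.get(row[0], default) for row in X_other]

# Training data: feature vectors (protected attribute removed) plus the
# protected-attribute labels they came with.
X_train = [(0,), (0,), (1,), (1,), (1,)]
prot_train = [0, 0, 1, 1, 0]
model = train_mask_model(X_train, prot_train)

# At test time every individual receives the *extrapolated* protected value,
# so two individuals who differ only in their true protected attribute look
# identical to the downstream classifier (flip rate 0).
X_test = [(0,), (1,)]
print(mask_protected(model, X_test))  # [0, 1]
```

The masking step is what yields the perfect individual fairness claimed above: since the relabeled protected attribute is a function of the other features alone, flipping the true protected attribute cannot change the classifier's input.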
Thu 18 May (displayed time zone: Hobart)
13:45 - 15:15
AI bias and fairness (DEMO - Demonstrations / Technical Track / Journal-First Papers) at Meeting Room 104
Chair(s): Amel Bennaceur The Open University, UK
|Towards Understanding Fairness and its Composition in Ensemble Machine Learning|
Usman Gohar Dept. of Computer Science, Iowa State University, Sumon Biswas Carnegie Mellon University, Hridesh Rajan Iowa State University
Pre-print
|Fairify: Fairness Verification of Neural Networks|
Sumon Biswas Carnegie Mellon University, Hridesh Rajan Iowa State University
Pre-print
|Leveraging Feature Bias for Scalable Misprediction Explanation of Machine Learning Models|
Jiri Gesi University of California, Irvine, Xinyun Shen University of California, Irvine, Yunfan Geng University of California, Irvine, Qihong Chen University of California, Irvine, Iftekhar Ahmed University of California at Irvine
|Information-Theoretic Testing and Debugging of Fairness Defects in Deep Neural Networks|
Verya Monjezi University of Texas at El Paso, Ashutosh Trivedi University of Colorado Boulder, Gang Tan Pennsylvania State University, Saeid Tizpaz-Niari University of Texas at El Paso
Pre-print
|Seldonian Toolkit: Building Software with Safe and Fair Machine Learning|
DEMO - Demonstrations
Austin Hoag Berkeley Existential Risk Initiative, James E. Kostas University of Massachusetts, Bruno Castro da Silva University of Massachusetts, Philip S. Thomas University of Massachusetts, Yuriy Brun University of Massachusetts
Pre-print, Media Attached
|What Would You do? An Ethical AI Quiz|
DEMO - Demonstrations
Wei Teo Monash University, Ze Teoh Monash University, Dayang Abang Arabi Monash University, Morad Aboushadi Monash University, Khairenn Lai Monash University, Zhe Ng Monash University, Aastha Pant Monash University, Rashina Hoda Monash University, Kla Tantithamthavorn Monash University, Burak Turhan University of Oulu
Pre-print, Media Attached
|Search-Based Fairness Testing for Regression-Based Machine Learning Systems|
Anjana Perera Oracle Labs, Australia, Aldeida Aleti Monash University, Kla Tantithamthavorn Monash University, Jirayus Jiarpakdee Monash University, Australia, Burak Turhan University of Oulu, Lisa Kuhn Monash University, Katie Walker Monash University
Link to publication, DOI
|FairMask: Better Fairness via Model-based Rebalancing of Protected Attributes|
Kewen Peng North Carolina State University, Tim Menzies North Carolina State University, Joymallya Chakraborty North Carolina State University
Link to publication, Pre-print