View Submission - EcoSta2022
A0231
Title: Does enforcing fairness mitigate algorithmic biases due to distributional shift?
Authors: Yuekai Sun - University of Michigan (United States) [presenting]
Abstract: Many instances of algorithmic bias are caused by distributional shifts. A particularly prominent class of examples consists of biases caused by the under-representation of samples from minority groups in the training data. We study whether enforcing algorithmic fairness during training mitigates such biases in the target domain. On one hand, we show that there are scenarios in which enforcing fairness does not improve model performance in the target domain; in fact, it may even harm performance. On the other hand, we derive sufficient conditions under which enforcing group and individual fairness successfully mitigates algorithmic biases due to distributional shifts.
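To make "enforcing algorithmic fairness during training" concrete, the sketch below shows one common form of a group-fairness constraint: a standard classification loss augmented with a demographic-parity regularizer that penalizes differences in the model's average prediction across groups. This is an illustrative assumption, not the authors' method; the synthetic data, the group attribute `a`, and the penalty weight `lam` are all hypothetical.

```python
# Minimal sketch of fairness-regularized training (assumed setup, not the
# authors' method): task loss + demographic-parity penalty on a source
# domain where the minority group (a == 1) is under-represented.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Synthetic, illustrative source-domain data with an under-represented minority group.
n_major, n_minor, d = 900, 100, 5
X = torch.cat([torch.randn(n_major, d), torch.randn(n_minor, d) + 0.5])
y = (X[:, 0] + 0.3 * torch.randn(X.shape[0]) > 0).float()
a = torch.cat([torch.zeros(n_major), torch.ones(n_minor)])  # group attribute

model = nn.Linear(d, 1)
opt = torch.optim.SGD(model.parameters(), lr=0.1)
bce = nn.BCEWithLogitsLoss()
lam = 1.0  # fairness penalty weight (assumed value)

for epoch in range(200):
    opt.zero_grad()
    logits = model(X).squeeze(-1)
    task_loss = bce(logits, y)
    # Demographic-parity gap: difference in mean predicted probability across groups.
    probs = torch.sigmoid(logits)
    gap = probs[a == 1].mean() - probs[a == 0].mean()
    loss = task_loss + lam * gap.pow(2)
    loss.backward()
    opt.step()
```

Whether a penalty like this actually helps in the target domain is exactly the question the abstract raises: it can fail to improve, or even harm, target-domain performance, unless conditions of the kind derived in the paper hold.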