A1227
Title: Assessing the fairness of stability for binary classifiers
Authors: Hiroe Seto - Kyoto Women's University (Japan) [presenting]
Michio Yamamoto - The University of Osaka / RIKEN AIP / Shiga University (Japan)
Shin-ichi Mayekawa - Tokyo Institute of Technology (Japan)
Abstract: In recent years, binary classifiers have increasingly been used to make important decisions, such as hiring or lending. However, there are concerns that the predictions of binary classifiers may lead to unfairness. Unfairness is defined as a worse impact on protected classes, such as women or ethnic minorities, than on non-protected classes. Previous research has developed methods to assess the fairness of prediction accuracy between protected and non-protected classes. However, no method has existed to assess the fairness of stability, which refers to the degree of agreement between the predictions of models trained on different datasets. Using an algorithm with unfair stability produces greater variance in predictions for protected classes than for non-protected classes, posing a risk of exacerbating unfairness in society. DOSACA is therefore proposed as a metric that assesses unfairness based on the difference in stability between protected and non-protected classes. Furthermore, to verify the practical usefulness of the proposed evaluation method, real-world data analyses are conducted on several open datasets. The results of this research provide new evaluation criteria for the development of fair binary classifiers and have important implications, particularly for the ethical and legal aspects of predictive algorithms in the real world.
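The abstract does not spell out how DOSACA is computed. As a minimal sketch only, assuming stability is measured as the pairwise agreement among predictions of models trained on bootstrap resamples of the data, the following Python snippet illustrates one plausible group-wise stability gap; the function group_stability_gap and its parameters are hypothetical illustrations, not the authors' definition.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.utils import resample

def group_stability_gap(X, y, protected, n_models=20, seed=0):
    """Hypothetical sketch: difference in prediction stability between the
    non-protected (protected == 0) and protected (protected == 1) groups,
    where stability is the mean pairwise agreement of predictions from
    models trained on bootstrap resamples. A larger gap suggests the
    protected group's predictions are less stable."""
    protected = np.asarray(protected)
    rng = np.random.RandomState(seed)
    preds = []
    for _ in range(n_models):
        # Train each model on a different bootstrap resample of the data.
        Xb, yb = resample(X, y, random_state=rng)
        model = LogisticRegression(max_iter=1000).fit(Xb, yb)
        preds.append(model.predict(X))
    preds = np.array(preds)  # shape: (n_models, n_samples)

    def mean_agreement(mask):
        # Average fraction of samples on which each pair of models agrees,
        # restricted to the given group.
        P = preds[:, mask]
        agree = [np.mean(P[i] == P[j])
                 for i in range(n_models) for j in range(i + 1, n_models)]
        return float(np.mean(agree))

    return mean_agreement(protected == 0) - mean_agreement(protected == 1)

Under this reading, a gap near zero would indicate comparable stability across groups, while a positive gap would mean predictions for the protected class vary more across retrained models.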