EcoSta 2021 - Submission A0751
Title: Within group fairness: A new concept for better between group fairness
Authors: Yongdai Kim - Seoul National University (Korea, South) [presenting]
Abstract: Because they have a vital effect on social decision-making, AI algorithms should be accurate and should not treat certain sensitive groups unfairly. Various algorithms have been specially designed to ensure that trained AI models are fair across sensitive groups. We raise a new issue: AI models that are fair between groups can still treat individuals within the same group unfairly. We introduce a new concept of fairness, called within-group fairness, which requires that AI models be fair both to individuals within the same sensitive group and to individuals across different sensitive groups.
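The abstract does not give formal definitions, so the following is only a minimal illustrative sketch of the tension it describes, under assumptions of my own: demographic parity is used as the between-group criterion, and a rank-consistency count (higher-scoring individuals should not receive worse decisions than lower-scoring peers in the same group) stands in for within-group fairness. The functions, toy data, and the naive "repair" rule are all hypothetical and not the authors' method.

```python
import numpy as np

def demographic_parity_gap(decision, group):
    """Between-group criterion (assumed): absolute gap in positive-decision rates."""
    rates = [decision[group == g].mean() for g in np.unique(group)]
    return max(rates) - min(rates)

def within_group_rank_violations(score, decision, group):
    """Illustrative within-group check (an assumption, not the paper's definition):
    count same-group pairs where a higher-scoring individual receives a worse
    decision than a lower-scoring peer."""
    violations = 0
    for g in np.unique(group):
        s, d = score[group == g], decision[group == g]
        for i in range(len(s)):
            for j in range(len(s)):
                if s[i] > s[j] and d[i] < d[j]:
                    violations += 1
    return violations

# Toy data: group 1 has systematically lower scores, so thresholding at 0.5
# produces unequal positive-decision rates between the two groups.
group = np.repeat([0, 1], 50)
score = np.concatenate([np.linspace(0.01, 0.99, 50), np.linspace(0.01, 0.79, 50)])
decision = (score > 0.5).astype(int)

# Naive "repair": flip the LOWEST-scoring negatives of group 1 to positive until
# the positive rates match.  Between-group parity is restored, but low-scoring
# individuals now receive better decisions than higher-scoring peers in their own group.
n_flip = int(decision[group == 0].sum() - decision[group == 1].sum())
neg1 = np.where((group == 1) & (decision == 0))[0]
decision[neg1[np.argsort(score[neg1])[:n_flip]]] = 1

print("demographic parity gap:", demographic_parity_gap(decision, group))
print("within-group rank violations:", within_group_rank_violations(score, decision, group))
```

Running the sketch shows a demographic parity gap of zero alongside many within-group rank violations, which is the kind of between-group-fair-but-within-group-unfair behaviour the abstract raises as an issue.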