View Submission - EcoSta 2025
A0378
Title: Fair archetypal analysis for fair representation
Authors: Aleix Alcacer - Universitat Jaume I (Spain)
Irene Epifanio - Universitat Jaume I (Spain) [presenting]
Abstract: The problem of fair archetypal analysis is addressed for the first time. Archetypal analysis (AA) is an unsupervised statistical learning technique in which observations are expressed as convex combinations (alphas) of archetypes, which are in turn convex combinations of the observations. Fairness in machine learning encompasses efforts to address algorithmic bias in automated decision-making processes that rely on machine learning models; such bias concerns the equitable treatment of sensitive variables, including gender, ethnicity, sexual orientation, and disability. Ensuring fairness in unsupervised statistical learning is more challenging than in supervised scenarios, since the data lack labels and ground-truth error rates for assessing bias and unfairness cannot be computed. Consequently, both defining and implementing fairness in unsupervised settings become complex. The aim is to obfuscate the sensitive attributes in the AA representation, i.e., to remove sensitive information when the dataset is projected into AA space. To this end, the original AA objective function is augmented with a regularization term that penalizes correlation between the alphas and the sensitive variables. The method has been implemented, and results across different experiments show good performance.
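The abstract does not specify the exact form of the regularizer, so the following is only an illustrative sketch of the idea: fit AA by projected gradient descent while adding a penalty that pushes the alphas to be uncorrelated with a centred sensitive attribute. The objective ||X - A B X||_F^2 + lam * ||A^T s_c||^2, the function names, and the step sizes are all assumptions, not the authors' implementation.

```python
import numpy as np

def project_simplex(V):
    """Euclidean projection of each row of V onto the probability simplex
    (sort-based algorithm), enforcing AA's convex-combination constraints."""
    n, k = V.shape
    U = np.sort(V, axis=1)[:, ::-1]          # sort each row in descending order
    css = np.cumsum(U, axis=1) - 1.0
    ind = np.arange(1, k + 1)
    rho = (U - css / ind > 0).sum(axis=1)    # number of active components per row
    theta = css[np.arange(n), rho - 1] / rho
    return np.maximum(V - theta[:, None], 0.0)

def fair_aa(X, s, k=3, lam=1.0, steps=500, lr=2e-3, seed=0):
    """Sketch of fair AA: minimize ||X - A B X||_F^2 + lam * ||A^T s_c||^2,
    where rows of A (the alphas) and of B lie on the simplex and s_c is the
    centred sensitive attribute. The penalty form is a hypothetical choice."""
    rng = np.random.default_rng(seed)
    n, _ = X.shape
    s_c = s - s.mean()
    A = project_simplex(rng.random((n, k)))  # alphas: (n, k)
    B = project_simplex(rng.random((k, n)))  # archetype weights: (k, n)
    for _ in range(steps):
        Z = B @ X                            # archetypes, shape (k, d)
        R = X - A @ Z                        # reconstruction residual
        # gradient in A: reconstruction term plus fairness penalty
        gA = -2.0 * R @ Z.T + 2.0 * lam * np.outer(s_c, s_c @ A)
        A = project_simplex(A - lr * gA)
        R = X - A @ Z                        # Z still current (B unchanged)
        gB = -2.0 * A.T @ R @ X.T            # gradient in B (no penalty term)
        B = project_simplex(B - lr * gB)
    return A, B
```

Larger `lam` removes more sensitive information from the alphas at the cost of reconstruction accuracy; with `lam = 0` the sketch reduces to plain projected-gradient AA.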