B0222
Title: A new proposal to assess robustness of artificial intelligence methods
Authors: Emanuela Raffinetti - University of Pavia (Italy) [presenting]
Paolo Giudici - University of Pavia (Italy)
Abstract: When applied to high-impact and regulated industries, such as energy, finance and health, artificial intelligence methods need to be validated by national regulators in order to monitor the risks arising from their employment. Indeed, most artificial intelligence methods rely on highly complex machine learning (ML) models which, while reaching high predictive performance, may lack trustworthiness. To be trustworthy, artificial intelligence has to fulfil a set of specific key principles: it should remain stable under extreme data and cyber attacks (sustainability); it should lead to accurate predictions (accuracy); it should not discriminate by population groups (fairness); it should be humanly interpretable in terms of its drivers (explainability). Several contributions in the literature have also shown that ML models are deeply affected by data perturbations. As this represents a threat to real applications, it is crucial to evaluate the robustness of ML models. The purpose is to propose a new metric, based on the Lorenz and concordance curves, which evaluates the concordance between the ranks of the predicted values generated by the ML model fitted on non-perturbed data and the ranks of the predicted values provided by the same ML model fitted on perturbed data.
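The general idea of comparing the ranks of predictions from a model fitted on non-perturbed data with those from the same model fitted on perturbed data can be illustrated with a minimal sketch. The snippet below is not the authors' Lorenz/concordance-curve metric: it uses Spearman's rank correlation as a simple stand-in concordance measure, a scikit-learn random forest as an illustrative ML model, and additive Gaussian noise as an assumed perturbation scheme.

```python
# Minimal sketch (not the authors' implementation): a rank-concordance check of
# model robustness to data perturbation, assuming a scikit-learn-style regressor.
import numpy as np
from scipy.stats import spearmanr
from sklearn.ensemble import RandomForestRegressor
from sklearn.datasets import make_regression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X, y = make_regression(n_samples=500, n_features=10, noise=10.0, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Model fitted on the original (non-perturbed) training data.
model_clean = RandomForestRegressor(random_state=0).fit(X_train, y_train)

# Same model class fitted on perturbed training data (here: additive Gaussian
# noise, a stand-in for the data perturbations discussed in the abstract).
X_perturbed = X_train + rng.normal(scale=0.5, size=X_train.shape)
model_pert = RandomForestRegressor(random_state=0).fit(X_perturbed, y_train)

# Concordance between the ranks of the two sets of test predictions:
# values near 1 suggest the perturbation barely changes the predicted ordering.
rho, _ = spearmanr(model_clean.predict(X_test), model_pert.predict(X_test))
print(f"Rank concordance under perturbation: {rho:.3f}")
```

A value of the concordance close to 1 indicates that the model's predicted ranking is essentially unaffected by the perturbation, which is the intuition behind the proposed robustness assessment; the actual metric in the paper is built on Lorenz and concordance curves rather than on Spearman's coefficient.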