CFE-CMStatistics 2025
A0531
Title: Conformal alignment: Knowing when to trust foundation models with guarantees
Authors: Ying Jin - University of Pennsylvania (United States) [presenting]
Abstract: Before deploying outputs from foundation models in high-stakes tasks, it is imperative to ensure that they align with human values. For instance, in radiology report generation, reports generated by a vision-language model must align with human evaluations before their use in medical decision-making. Conformal alignment, a general framework for identifying units whose outputs meet a user-specified alignment criterion, is presented. It is guaranteed that, on average, a prescribed fraction of selected units indeed meets the alignment criterion, regardless of the foundation model or the data distribution. Given any pre-trained model and new units with model-generated outputs, conformal alignment leverages a set of reference data with ground-truth alignment status to train an alignment predictor. It then selects new units whose predicted alignment scores surpass a data-dependent threshold, certifying their corresponding outputs as trustworthy. Through applications to question answering and radiology report generation, it is demonstrated that the method accurately identifies units with trustworthy outputs via lightweight training over a moderate amount of reference data. En route, the informativeness of various features for alignment prediction is investigated, and these features are combined with standard models to construct the alignment predictor.
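The abstract does not spell out how the data-dependent threshold is chosen. A common construction for this kind of selection guarantee combines split conformal p-values with a Benjamini-Hochberg-style cutoff; the minimal sketch below assumes that construction. The half/half data split, the GradientBoostingRegressor predictor, the alignment cutoff c, and the function name conformal_alignment_select are illustrative choices, not details taken from the submission.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

def conformal_alignment_select(X_ref, A_ref, X_new, c=0.5, alpha=0.1, seed=0):
    """Select new units whose outputs are predicted to be aligned (A > c),
    aiming to control the expected fraction of wrongly selected units at alpha.

    X_ref, A_ref : reference features and ground-truth alignment scores
    X_new        : features of new units with model-generated outputs
    c            : cutoff defining 'aligned' (A > c); illustrative assumption
    alpha        : target false-selection level
    """
    rng = np.random.default_rng(seed)
    n = len(X_ref)
    idx = rng.permutation(n)
    train, calib = idx[: n // 2], idx[n // 2 :]

    # 1. Train an alignment predictor on half of the reference data.
    g = GradientBoostingRegressor().fit(X_ref[train], A_ref[train])

    # 2. Conformal p-values: compare each new unit's predicted score with
    #    calibration units that are truly misaligned (A <= c). If no such
    #    units exist, all p-values equal 1 and nothing is selected.
    miscal = calib[A_ref[calib] <= c]
    s_cal = g.predict(X_ref[miscal]) if len(miscal) else np.array([])
    s_new = g.predict(X_new)
    m_cal = len(s_cal)
    pvals = (1 + (s_cal[None, :] >= s_new[:, None]).sum(axis=1)) / (m_cal + 1)

    # 3. Benjamini-Hochberg-style cutoff at level alpha: select the units
    #    with the smallest p-values that pass the step-up rule.
    m = len(pvals)
    order = np.argsort(pvals)
    below = pvals[order] <= alpha * np.arange(1, m + 1) / m
    if not below.any():
        return np.array([], dtype=int)
    k = np.max(np.where(below)[0])
    return np.sort(order[: k + 1])  # indices of units certified as trustworthy
```

In this sketch, the returned indices play the role of the selected units whose outputs are certified as trustworthy; the step-up rule is what makes the selection threshold data-dependent.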