CFE-CMStatistics 2024
Submission A0941
Title: Large language model validity via enhanced conformal prediction methods
Authors: John Cherian - Stanford University (United States) [presenting]
Isaac Gibbs - Stanford University (United States)
Emmanuel Candes - Stanford University (United States)
Abstract: New conformal inference methods are developed to obtain validity guarantees on the output of large language models (LLMs). Prior work in conformal language modeling identifies a subset of the generated text that satisfies a high-probability guarantee of correctness. These methods filter claims from the LLM's original response whenever a scoring function evaluated on the claim fails to exceed a threshold calibrated via split conformal prediction. Existing methods in this area suffer from two deficiencies. First, the stated guarantee is not conditionally valid: the trustworthiness of the filtering step may vary with the topic of the response. Second, because the scoring function is imperfect, the filtering step can remove many valuable and accurate claims. Both challenges are addressed by two new conformal methods. First, the conditional conformal procedure is generalized to adaptively issue weaker guarantees when this is required to preserve the utility of the output. Second, it is shown how to systematically improve the quality of the scoring function via a novel algorithm for differentiating through the conditional conformal procedure. The efficacy of the approach is demonstrated on both synthetic and real-world datasets.
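To make the filtering step described above concrete, the sketch below shows the standard split-conformal calibration that prior work in conformal language modeling relies on, not the new conditional or adaptive procedures contributed by the authors. The function names, the score convention (higher score means more trustworthy), and the use of human-verified correctness labels on a calibration set are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def calibrate_threshold(cal_scores, cal_labels, alpha=0.1):
    """Split-conformal calibration of a claim-filtering threshold (sketch).

    cal_scores: list of 1-D arrays; cal_scores[i][j] is the scoring-function
        value for claim j of calibration response i (assumed convention:
        higher = more trustworthy).
    cal_labels: list of 1-D boolean arrays; True marks a correct claim.
    Returns a threshold tau such that, for a new exchangeable response,
    retaining only claims with score > tau keeps every retained claim
    correct with probability at least 1 - alpha (a marginal guarantee).
    """
    # Conformal score per response: the largest score attained by any
    # incorrect claim. Retaining claims with score > tau removes all
    # incorrect claims exactly when tau is at least this value.
    r = []
    for scores, labels in zip(cal_scores, cal_labels):
        wrong = np.asarray(scores)[~np.asarray(labels)]
        r.append(wrong.max() if wrong.size > 0 else -np.inf)
    r = np.array(r)

    # Standard split-conformal quantile with the (n + 1) finite-sample correction.
    n = len(r)
    level = min(1.0, np.ceil((1 - alpha) * (n + 1)) / n)
    return np.quantile(r, level, method="higher")

def filter_claims(claims, scores, tau):
    """Keep only the claims whose score exceeds the calibrated threshold."""
    return [c for c, s in zip(claims, scores) if s > tau]
```

The guarantee produced this way is marginal, averaged over topics and prompts; the abstract's first contribution replaces it with a conditional procedure that can adaptively relax the target level, and the second contribution differentiates through that procedure to train a better scoring function, both of which go beyond this baseline sketch.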