CFE-CMStatistics 2025
A0480
Title: Learning to partially defer for sequences
Authors: Sahana Rayan - University of Michigan (United States) [presenting]
Abstract: In the learning to defer (L2D) framework, a prediction model can either make a prediction or defer it to an expert, as determined by a rejector. Current L2D methods train the rejector to decide whether to reject the entire prediction, which is undesirable when the model predicts long sequences. An L2D setting is presented for sequence outputs in which the system can defer specific parts of the model's prediction to an expert, interleaving the expert and the machine throughout the prediction. Two types of model-based post-hoc rejectors are proposed for pre-trained predictors: a token-level rejector, which defers individual token predictions to experts with next-token prediction capabilities, and a one-time rejector for experts without such abilities, which defers the remaining sequence from a specific point onward. Experiments on traveling salesman solvers and news summarization models empirically demonstrate that such granular deferrals achieve better cost-accuracy tradeoffs than whole-sequence deferrals.
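The token-level deferral described in the abstract can be illustrated with a minimal decoding-loop sketch. This is not the authors' method: the abstract's rejector is a learned, model-based post-hoc component, whereas here a simple confidence threshold stands in for it, and `model_next_token`, `expert_next_token`, and the threshold are hypothetical names introduced only for illustration.

```python
from typing import Callable, List, Tuple


def token_level_deferral(
    model_next_token: Callable[[List[str]], Tuple[str, float]],
    expert_next_token: Callable[[List[str]], str],
    threshold: float,
    max_len: int,
) -> Tuple[List[str], int]:
    """Greedy decoding where each token comes from either the model or the
    expert, as decided per token by a (here, thresholded) rejector."""
    sequence: List[str] = []
    deferrals = 0
    for _ in range(max_len):
        token, confidence = model_next_token(sequence)
        # Stand-in rejector: defer this single token when confidence is low.
        if confidence < threshold:
            token = expert_next_token(sequence)
            deferrals += 1
        if token == "<eos>":
            break
        sequence.append(token)
    return sequence, deferrals


# Toy predictor: confident on the first two tokens, unsure afterwards.
def toy_model(seq: List[str]) -> Tuple[str, float]:
    if len(seq) >= 4:
        return "<eos>", 1.0
    return "a", (0.9 if len(seq) < 2 else 0.3)


def toy_expert(seq: List[str]) -> str:
    return "b"


seq, n_deferred = token_level_deferral(toy_model, toy_expert, threshold=0.5, max_len=10)
print(seq, n_deferred)  # → ['a', 'a', 'b', 'b'] 2
```

The one-time rejector would differ only in that, once a deferral fires, every remaining step is routed to the expert rather than re-consulting the rejector per token.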