A1246
Title: Fast & feasible: Provable acceleration of diffusion models via higher-order approximation
Authors: Yuchen Zhou - University of Illinois Urbana-Champaign (United States) [presenting]
Gen Li - The Chinese University of Hong Kong (Hong Kong)
Yuxin Chen - University of Pennsylvania (United States)
Yuting Wei - University of Pennsylvania (United States)
Abstract: The purpose is to explore provable acceleration of diffusion models without any additional retraining. Focusing on the task of approximating a target data distribution in $\mathbb{R}^d$ to within $\epsilon$ in total-variation distance, a principled, training-free sampling algorithm is proposed that requires only $d^{1+2/K}\epsilon^{-1/K}$ score function evaluations (up to logarithmic factors) in the presence of accurate scores, where $K$ is an arbitrarily large fixed integer. This result applies to a broad class of target data distributions without the need for assumptions such as smoothness or log-concavity. The theory is robust to inexact score estimation, degrading gracefully as the score estimation error increases, without demanding higher-order smoothness on the score estimates as assumed in previous work. The proposed algorithm draws insight from high-order ODE solvers, leveraging high-order Lagrange interpolation and successive refinement to approximate the integral derived from the probability flow ODE.
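The following is a minimal Python sketch of the generic numerical idea mentioned in the last sentence, not the authors' algorithm: a higher-order multistep ODE step that approximates the per-step integral by integrating the Lagrange interpolant of $K$ cached drift evaluations. The function names (`integrated_lagrange_weights`, `kth_order_step`) and the toy drift `velocity` are hypothetical placeholders; in the diffusion setting the drift would be the probability flow ODE velocity built from the learned score, and the successive-refinement component described in the abstract is omitted here.

```python
# Hypothetical illustration -- NOT the authors' sampler; only the generic
# multistep idea: integrate the Lagrange interpolant of K cached drift values.
import numpy as np


def integrated_lagrange_weights(nodes, a, b, n_grid=256):
    """Weights w_j = integral over [a, b] of the j-th Lagrange basis
    polynomial on `nodes`, so the integral of the degree-(K-1)
    interpolant of f equals sum_j w_j * f(nodes[j])."""
    ts = np.linspace(a, b, n_grid)
    dt = ts[1] - ts[0]
    weights = []
    for j, tj in enumerate(nodes):
        ell = np.ones_like(ts)
        for m, tm in enumerate(nodes):
            if m != j:
                ell *= (ts - tm) / (tj - tm)
        # trapezoidal rule for the integral of ell_j over [a, b]
        weights.append(dt * (ell.sum() - 0.5 * (ell[0] + ell[-1])))
    return np.array(weights)


def kth_order_step(x, t_hist, v_hist, t_next):
    """One explicit K-step update for dx/dt = v(x, t): advance x from
    t_hist[-1] to t_next using the integrated interpolant of v_hist."""
    w = integrated_lagrange_weights(np.asarray(t_hist), t_hist[-1], t_next)
    return x + sum(wj * vj for wj, vj in zip(w, v_hist))


if __name__ == "__main__":
    # Toy usage on dx/dt = -x with K = 3 (a stand-in for the PF-ODE drift),
    # warm-started with exact values and compared against exp(-t).
    def velocity(x, t):
        return -x

    K, h = 3, 0.05
    t_hist = [0.0, h, 2 * h]
    v_hist = [velocity(np.exp(-t), t) for t in t_hist]
    x = np.exp(-t_hist[-1])
    for n in range(3, 40):
        t_next = n * h
        x = kth_order_step(x, t_hist[-K:], v_hist[-K:], t_next)
        t_hist.append(t_next)
        v_hist.append(velocity(x, t_next))
    print("abs. error at t=%.2f: %.1e" % (t_hist[-1], abs(x - np.exp(-t_hist[-1]))))
```

Replacing the trapezoidal weight computation with exact polynomial integration, and iteratively refining the interpolation points within each step, would move this sketch closer in spirit to the higher-order approximation with successive refinement described in the abstract.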