A1016
Title: Adjusted expected improvement for cumulative regret minimization in noisy Bayesian optimization
Authors: Shouri Hu - University of Electronic Science and Technology of China (China) [presenting]
Abstract: The expected improvement (EI) is one of the most popular acquisition functions for Bayesian optimization (BO) and has demonstrated strong empirical performance in many applications for minimizing simple regret. However, under the evaluation metric of cumulative regret, the performance of EI may not be competitive, and its existing theoretical regret upper bound still has room for improvement. To adapt EI for better performance under cumulative regret, a novel quantity called the evaluation cost is introduced, which is compared against the acquisition function. With this, the expected improvement-cost (EIC) algorithm is developed. In each iteration of EIC, a new point with the largest acquisition function value is sampled only if that value exceeds its evaluation cost; if no point meets this criterion, the current best point is resampled. The evaluation cost quantifies the potential downside of sampling a point, which matters under the cumulative regret metric because the objective function value in every iteration affects the performance measure. A high-probability regret upper bound of EIC is established in terms of the maximum information gain, which is tighter than the bounds of existing EI-based algorithms and comparable to the regret bounds of popular BO algorithms such as Gaussian process Thompson sampling (GP-TS) and the Gaussian process upper confidence bound (GP-UCB). Experiments further illustrate the improvement of EIC over several popular BO algorithms.
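
To make the accept/resample decision concrete, the following is a minimal sketch of one EIC-style iteration, assuming a Gaussian process surrogate and the standard analytic EI. The evaluation-cost formula used here is a hypothetical placeholder (the abstract does not give the paper's exact definition), and the helper names (expected_improvement, eic_choose_next, cost_weight) are illustrative, not the authors' implementation.

import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

def expected_improvement(mu, sigma, incumbent):
    # Standard analytic EI for minimization at points with posterior mean mu and std sigma.
    sigma = np.maximum(sigma, 1e-12)
    z = (incumbent - mu) / sigma
    return (incumbent - mu) * norm.cdf(z) + sigma * norm.pdf(z)

def eic_choose_next(gp, X_obs, y_obs, candidates, cost_weight=1.0):
    # One EIC-style iteration: propose the candidate maximizing EI, but accept it
    # only if its EI exceeds its evaluation cost; otherwise resample the current
    # best observed point.
    incumbent_idx = int(np.argmin(y_obs))
    incumbent = y_obs[incumbent_idx]
    mu, sigma = gp.predict(candidates, return_std=True)
    ei = expected_improvement(mu, sigma, incumbent)
    # Placeholder evaluation cost: the expected shortfall of the candidate's
    # posterior mean relative to the incumbent (NOT the paper's definition).
    cost = cost_weight * np.maximum(mu - incumbent, 0.0)
    j = int(np.argmax(ei))
    if ei[j] > cost[j]:
        return candidates[j]            # acquisition value beats its cost: sample the new point
    return X_obs[incumbent_idx]         # otherwise re-evaluate the current best point

# Toy usage on a 1-D noisy objective.
rng = np.random.default_rng(0)
f = lambda x: np.sin(3 * x) + 0.1 * x**2
X_obs = rng.uniform(-2, 2, size=(5, 1))
y_obs = f(X_obs).ravel() + 0.1 * rng.standard_normal(5)
gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), alpha=0.1**2).fit(X_obs, y_obs)
candidates = np.linspace(-2, 2, 200).reshape(-1, 1)
x_next = eic_choose_next(gp, X_obs, y_obs, candidates)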