A0194
Title: Data-driven label-poisoning backdoor attack
Authors: Xuan Bi - University of Minnesota (United States) [presenting]
Abstract: Backdoor attacks, which aim to disrupt or paralyze classifiers on specific tasks, are an emerging concern in several learning scenarios, e.g., machine learning as a service. Various backdoor attacks have been introduced in the literature, including perturbation-based methods, which modify a subset of the training data, and clean-sample methods, which only relabel a proportion of training samples. Clean-sample attacks can be particularly stealthy, since they never require modifying samples at either the training or the test stage. However, the state-of-the-art clean-sample attack, which relabels training data based on their semantic meanings, can be ineffective and inefficient in terms of test performance because the semantic patterns are selected heuristically. A new type of clean-sample backdoor attack is introduced, named the DLP backdoor attack, which allows attackers to backdoor effectively, as measured by test performance, for an arbitrary backdoor sample size. The critical component of DLP is a data-driven backdoor scoring mechanism embedded in a multi-task formulation, which enables attackers to perform well on the normal learning task and the backdoor task simultaneously. Systematic empirical evaluations show the superior performance of the proposed DLP over state-of-the-art clean-sample attacks.
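To make the clean-sample setting concrete, below is a minimal, hypothetical Python sketch of a label-poisoning backdoor. It is not the authors' DLP method: the surrogate-confidence scoring rule, the poisoning budget, and all names are illustrative assumptions, since the abstract does not specify DLP's data-driven scoring mechanism or its multi-task formulation. The sketch only shows the defining property of clean-sample attacks: the labels of a score-selected subset change, while the features are never modified.

```python
"""Illustrative sketch of a clean-sample label-poisoning backdoor
(assumed setup; NOT the authors' DLP implementation)."""
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

TARGET_CLASS = 1    # label the attacker wants selected samples mapped to
POISON_BUDGET = 30  # number of training labels the attacker may relabel

# Synthetic binary task standing in for the victim's training data.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# 1) Fit a surrogate model on clean data to obtain per-sample scores.
surrogate = LogisticRegression(max_iter=1000).fit(X_train, y_train)
p_target = surrogate.predict_proba(X_train)[:, TARGET_CLASS]

# 2) Data-driven selection (hypothetical rule): among samples not already
#    in the target class, relabel those the surrogate is closest to
#    predicting as the target class, so the flips are maximally plausible.
#    DLP instead learns its scores inside a multi-task objective (roughly,
#    a normal-task loss plus a backdoor-task loss optimized jointly).
candidates = np.where(y_train != TARGET_CLASS)[0]
ranked = candidates[np.argsort(-p_target[candidates])]
flip_idx = ranked[:POISON_BUDGET]

y_poisoned = y_train.copy()
y_poisoned[flip_idx] = TARGET_CLASS  # labels change; features never do

# 3) The victim trains as usual on the poisoned labels.
victim = LogisticRegression(max_iter=1000).fit(X_train, y_poisoned)

print("clean-task accuracy:", victim.score(X_test, y_test))
print("flipped samples now predicted as target:",
      (victim.predict(X_train[flip_idx]) == TARGET_CLASS).mean())
```

In an actual clean-sample backdoor, the relabeled subset would share an attacker-chosen pattern so that test inputs exhibiting that pattern are steered toward the target class; the confidence-ranking step above merely stands in for DLP's learned, data-driven scoring.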