
Anti-Backdoor Learning: Training Clean Models on Poisoned Data

Yige Li*, Xixiang Lyu*, Nodens Koren*, Lingjuan Lyu, Bo Li*, Xingjun Ma*

* External authors

NeurIPS, 2021

Abstract

Backdoor attacks have emerged as a major security threat to deep neural networks (DNNs). While existing defense methods have demonstrated promising results in detecting and erasing backdoor triggers, it remains unclear whether measures can be taken to prevent the triggers from being learned into the model in the first place. In this paper, we introduce the concept of anti-backdoor learning, which aims to train clean models on backdoor-poisoned data. We frame the overall learning process as a dual task of learning the clean portion of the data and learning the backdoor portion of the data. From this view, we identify two inherent characteristics of backdoor attacks as their weaknesses: 1) models learn backdoored data at a much faster rate than clean data, and the stronger the attack, the faster the model converges on the backdoored data; and 2) the backdoor task is tied to a specific class (the backdoor target class). Based on these two weaknesses, we propose a general learning scheme, Anti-Backdoor Learning (ABL), to automatically prevent backdoor attacks during training. ABL introduces a two-stage gradient ascent mechanism into standard training to 1) help isolate backdoor examples at an early training stage, and 2) break the correlation between backdoor examples and the target class at a later training stage. Through extensive experiments on multiple benchmark datasets against 10 state-of-the-art attacks, we empirically show that models trained with ABL on backdoor-poisoned data achieve the same performance as if they had been trained on purely clean data. Code is available at https://github.com/bboylyg/ABL.
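As a concrete illustration of the two-stage mechanism described in the abstract, here is a minimal PyTorch sketch. The function names, the loss threshold gamma, the isolate_fraction, and the assumption that the data loader yields example indices are all illustrative choices made for this sketch, not the paper's exact formulation or hyperparameters; the authors' reference implementation is at the linked repository.

```python
import torch
import torch.nn.functional as F

# --- Stage 1: loss-guided isolation (sketch) ---
# Assumption: backdoored examples are learned faster, so their per-example
# loss drops quickly early in training. Flipping the gradient (ascent) for
# examples whose loss falls below a threshold `gamma` keeps suspected
# backdoor losses pinned near gamma, making them easy to separate later.

def stage1_isolation_loss(logits, targets, gamma=0.5):
    per_example = F.cross_entropy(logits, targets, reduction="none")
    # sign(l - gamma) * l: descent when l > gamma, ascent when l < gamma
    return (torch.sign(per_example - gamma) * per_example).mean()

def isolate_suspects(model, loader, device, isolate_fraction=0.01):
    """Flag the lowest-loss training examples as suspected backdoor samples.

    Assumes `loader` yields (inputs, labels, example_indices).
    """
    model.eval()
    losses, indices = [], []
    with torch.no_grad():
        for x, y, idx in loader:
            l = F.cross_entropy(model(x.to(device)), y.to(device),
                                reduction="none")
            losses.append(l.cpu())
            indices.append(idx)
    losses, indices = torch.cat(losses), torch.cat(indices)
    k = int(isolate_fraction * len(losses))
    return set(indices[losses.argsort()[:k]].tolist())

# --- Stage 2: unlearning the isolated set (sketch) ---
# Gradient descent on the (presumed) clean portion, gradient ascent on the
# isolated portion, to break the trigger-to-target-class correlation.

def stage2_loss(model, x_clean, y_clean, x_isolated, y_isolated):
    clean_loss = F.cross_entropy(model(x_clean), y_clean)
    backdoor_loss = F.cross_entropy(model(x_isolated), y_isolated)
    return clean_loss - backdoor_loss  # minus sign = ascent on suspects
```

The sign flip in stage 1 stops low-loss (fast-learned) examples from converging further, so the lowest-loss tail of the training set becomes dominated by poisoned samples; stage 2 then subtracts the loss on that tail, which amounts to gradient ascent on the suspected backdoor data.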

Related Publications

PerceptAnon: Exploring the Human Perception of Image Anonymization Beyond Pseudonymization for GDPR

ICML, 2024
Kartik Patwari, Chen-Nee Chuah*, Lingjuan Lyu, Vivek Sharma

Current image anonymization techniques largely focus on localized pseudonymization: they typically modify identifiable features like faces or full bodies and evaluate anonymity through metrics such as detection and re-identification rates. However, this approach often overlooks …

COALA: A Practical and Vision-Centric Federated Learning Platform

ICML, 2024
Weiming Zhuang, Jian Xu, Chen Chen, Jingtao Li, Lingjuan Lyu

We present COALA, a vision-centric Federated Learning (FL) platform, and a suite of benchmarks for practical FL scenarios, which we categorize as task, data, and model levels. At the task level, COALA extends support from simple classification to 15 computer vision tasks, in…

How to Trace Latent Generative Model Generated Images without Artificial Watermark?

ICML, 2024
Zhenting Wang, Vikash Sehwag, Chen Chen, Lingjuan Lyu, Dimitris N. Metaxas*, Shiqing Ma*

Latent generative models (e.g., Stable Diffusion) have become more and more popular, but concerns have arisen regarding potential misuse related to images generated by these models. It is, therefore, necessary to analyze the origin of images by inferring if a particular imag…

