
Decision Boundary-aware Data Augmentation for Adversarial Training

Chen Chen

Jingfeng Zhang*

Xilie Xu*

Lingjuan Lyu

Chaochao Chen*

Tianlei Hu*

Gang Chen*

* External authors

IEEE Transactions on Dependable and Secure Computing

2022

Abstract

Adversarial training (AT) is a standard approach to learning adversarially robust deep neural networks by training on adversarial variants generated from natural examples. However, as training progresses, the training data become less attackable, which may limit further gains in model robustness. A straightforward remedy is to incorporate more training data, but this may incur an unaffordable cost. To mitigate this issue, we propose a deCisiOn bounDary-aware data Augmentation framework (CODA): in each epoch, CODA uses meta information from the previous epoch to guide the augmentation process and generate more data close to the decision boundary, i.e., attackable data. Compared with vanilla mixup, CODA provides a higher ratio of attackable data, which benefits model robustness; at the same time, it mitigates the model's linear behavior between classes, a behavior that favors standard training for generalization but not adversarial training for robustness. As a result, CODA encourages the model to predict invariantly within the cluster of each class. Experiments demonstrate that CODA enhances adversarial robustness across various adversarial training methods and multiple datasets.
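To make the idea concrete, below is a minimal PyTorch sketch of boundary-aware, within-class mixing. It assumes the per-example meta information from the previous epoch is a boolean attackable_mask (e.g., whether an example's adversarial variant was misclassified); the function name coda_augment, the mask, and the parameter alpha are illustrative assumptions, not the paper's exact algorithm.

```python
import torch

def coda_augment(x, y, attackable_mask, alpha=0.75):
    """Illustrative boundary-aware augmentation (not the paper's exact algorithm).

    x: batch of natural examples, shape (B, C, H, W)
    y: integer class labels, shape (B,)
    attackable_mask: bool tensor, shape (B,); assumed meta information recorded
        in the previous epoch (True if the example's adversarial variant was
        misclassified, i.e., the example lies close to the decision boundary)
    alpha: Beta-distribution parameter controlling how strongly the augmented
        point is pulled toward its attackable partner
    """
    x_aug = x.clone()
    for c in y.unique():
        cls_idx = (y == c).nonzero(as_tuple=True)[0]   # examples of class c
        atk_idx = cls_idx[attackable_mask[cls_idx]]    # attackable ones among them
        if atk_idx.numel() == 0:
            continue  # no attackable partner of this class in the batch
        # pick a random attackable same-class partner for every example of class c
        partners = atk_idx[torch.randint(0, atk_idx.numel(), (cls_idx.numel(),))]
        lam = torch.distributions.Beta(alpha, alpha).sample((cls_idx.numel(),)).to(x.device)
        lam = torch.minimum(lam, 1.0 - lam)            # give the attackable partner weight >= 0.5
        lam = lam.view(-1, *([1] * (x.dim() - 1)))
        x_aug[cls_idx] = lam * x[cls_idx] + (1.0 - lam) * x[partners]
    return x_aug, y  # labels unchanged: mixing never crosses class boundaries
```

In an AT loop, such an augmented batch would replace or supplement the natural batch before adversarial examples are crafted; because the mixing stays within each class, it pushes augmented points toward the decision boundary without introducing the cross-class linear behavior of vanilla mixup.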

Related Publications

How to Evaluate and Mitigate IP Infringement in Visual Generative AI?

ICML, 2025
Zhenting Wang, Chen Chen, Vikash Sehwag, Minzhou Pan*, Lingjuan Lyu

The popularity of visual generative AI models like DALL-E 3, Stable Diffusion XL, Stable Video Diffusion, and Sora has been increasing. Through extensive evaluation, we discovered that the state-of-the-art visual generative models can generate content that bears a striking r…

Six-CD: Benchmarking Concept Removals for Benign Text-to-image Diffusion Models

CVPR, 2025
Jie Ren, Kangrui Chen, Yingqian Cui, Shenglai Zeng, Hui Liu, Yue Xing, Jiliang Tang, Lingjuan Lyu

Text-to-image (T2I) diffusion models have shown exceptional capabilities in generating images that closely correspond to textual prompts. However, the advancement of T2I diffusion models presents significant risks, as the models could be exploited for malicious purposes, suc…

CO-SPY: Combining Semantic and Pixel Features to Detect Synthetic Images by AI

CVPR, 2025
Siyuan Cheng, Lingjuan Lyu, Zhenting Wang, Xiangyu Zhang, Vikash Sehwag

With the rapid advancement of generative AI, it is now possible to synthesize high-quality images in a few seconds. Despite the power of these technologies, they raise significant concerns regarding misuse. Current efforts to distinguish between real and AI-generated image…

