Decision Boundary-aware Data Augmentation for Adversarial Training

Chen Chen

Jingfeng Zhang*

Xilie Xu*

Lingjuan Lyu

Chaochao Chen*

Tianlei Hu*

Gang Chen*

* External authors

IEEE Transactions on Dependable and Secure Computing



Adversarial training (AT) is a standard method for learning adversarially robust deep neural networks by training on adversarial variants generated from natural examples. However, as training progresses, the training data becomes less attackable, which may limit further gains in model robustness. A straightforward remedy is to incorporate more training data, but this may incur an unaffordable cost. To mitigate this issue, we propose a deCisiOn bounDary-aware data Augmentation framework (CODA): in each epoch, CODA directly employs meta information from the previous epoch to guide the augmentation process and generate more data close to the decision boundary, i.e., attackable data. Compared with vanilla mixup, CODA provides a higher ratio of attackable data, which benefits model robustness; at the same time, it mitigates the model’s linear behavior between classes, which is favorable for standard training (generalization) but not for adversarial training (robustness). As a result, CODA encourages the model to predict invariantly within the cluster of each class. Experiments demonstrate that CODA indeed enhances adversarial robustness across various adversarial training methods and multiple datasets.
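The idea of biasing augmentation toward attackable (near-boundary) data can be sketched as follows. This is an illustrative, hypothetical reading of the approach, not the authors' exact algorithm: it treats an example as "attackable" when the gap between its top-two predicted probabilities (from the previous epoch) falls below a threshold `tau`, and then mixes each such example with a random batch partner, weighting the interpolation toward the attackable example so the synthetic point stays near the decision boundary. The function and parameter names are invented for this sketch.

```python
import numpy as np

def margin(probs):
    # Top-1 minus top-2 predicted probability; a small margin means the
    # example sits close to the decision boundary (i.e., it is attackable).
    s = np.sort(probs, axis=1)
    return s[:, -1] - s[:, -2]

def boundary_aware_mixup(x, y_onehot, probs, tau=0.3, seed=0):
    """Mix low-margin ("attackable") examples with random batch partners,
    weighting interpolation toward the attackable example so that the
    synthetic data stay close to the decision boundary.

    probs: previous-epoch softmax outputs, shape (n, num_classes).
    Returns augmented inputs and soft labels, mixup-style.
    """
    rng = np.random.default_rng(seed)
    m = margin(probs)
    idx = np.where(m < tau)[0]                      # attackable examples
    partners = rng.integers(0, len(x), size=len(idx))
    # lam in (0.5, 1.0]: the closer to the boundary, the more of the
    # original attackable example is kept.
    lam = (1.0 - 0.5 * (m[idx] / tau))[:, None]
    x_aug = lam * x[idx] + (1.0 - lam) * x[partners]
    y_aug = lam * y_onehot[idx] + (1.0 - lam) * y_onehot[partners]
    return x_aug, y_aug
```

Unlike vanilla mixup, which draws the interpolation weight from a Beta distribution independently of the model, this sketch conditions both the selection of examples and the mixing weight on the previous epoch's predictions, concentrating new data where the margin is small.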

Related Publications

Heterogeneous Graph Node Classification with Multi-Hops Relation Features

ICASSP, 2022
Xiaolong Xu*, Lingjuan Lyu, Hong Jin*, Weiqiang Wang*, Shuo Jia*

In recent years, knowledge graphs (KGs) have achieved remarkable success in both research and industrial fields. However, most KG algorithms consider node embedding with only structure and node features, but not relation features. In this paper, we propose a novel Heterogeneous …

How to Inject Backdoors with Better Consistency: Logit Anchoring on Clean Data

ICLR, 2022
Zhiyuan Zhang*, Lingjuan Lyu, Weiqiang Wang*, Lichao Sun*, Xu Sun*

Since training a large-scale backdoored model from scratch requires a large training dataset, several recent attacks have considered injecting backdoors into a trained clean model without altering its behavior on clean data. Previous work finds that backdoors can be i…

Communication-Efficient Federated Learning via Knowledge Distillation

Nature Communications, 2022
Yongfeng Huang*, Chuhan Wu*, Fangzhao Wu*, Lingjuan Lyu, Xing Xie*

Federated learning is a privacy-preserving machine learning technique for training intelligent models from decentralized data, which exploits private data by communicating local model updates in each iteration of model learning rather than the raw data. However, model …


