
IBA: Towards Irreversible Backdoor Attacks in Federated Learning

October 4, 2023

Federated learning (FL) is a distributed learning approach that enables machine learning models to be trained on decentralized data without compromising end devices' personal, potentially sensitive data. However, its distributed nature and the inability to inspect clients' data naturally introduce new security vulnerabilities, including backdoor attacks. In this scenario, an adversary implants backdoor functionality into the global model during training, which can be activated to cause the desired misbehavior for any input containing a specific adversarial pattern. Despite their remarkable success in triggering and distorting model behavior, prior backdoor attacks in FL often rely on impractical assumptions and offer limited imperceptibility and durability. Specifically, the adversary needs to control a sufficiently large fraction of clients or to know the data distribution of the other, honest clients. In many cases, the inserted trigger is visually apparent, and the backdoor effect is quickly diluted once the adversary is removed from the training process. To address these limitations, we propose a novel backdoor attack framework in FL, the Irreversible Backdoor Attack (IBA), which jointly learns an optimal, visually stealthy trigger and then gradually implants the backdoor into the global model. This approach allows the adversary to execute a backdoor attack that evades both human and machine inspection. Additionally, we improve the efficiency and durability of the proposed attack by selectively poisoning the model parameters that are least likely to be updated by the main task's learning process and by constraining the poisoned model update to the vicinity of the global model. Finally, we evaluate the proposed attack framework on several benchmark datasets, including MNIST, CIFAR-10, and Tiny ImageNet, where it achieves high success rates while bypassing existing backdoor defenses and producing a more durable backdoor effect than other backdoor attacks. Overall, IBA offers a more effective, stealthy, and durable approach to backdoor attacks in FL. Code for this paper is published at https://github.com/sail-research/iba.
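As a rough illustration of the two mechanisms summarized in the abstract, the PyTorch-style sketch below shows one plausible way to (i) restrict poisoning to the parameters with the smallest accumulated main-task gradients and (ii) project the poisoned update back into an L2 ball around the current global model. The function names, the keep_ratio and radius parameters, and the L2-ball formulation are illustrative assumptions for exposition only, not the authors' actual implementation; see the linked repository for the real code.

```python
import torch

def least_updated_mask(accum_grads, keep_ratio=0.1):
    # Hypothetical helper: keep only the fraction `keep_ratio` of parameters
    # whose accumulated main-task gradient magnitude is smallest; only these
    # positions would receive the poisoned update.
    masks = {}
    for name, g in accum_grads.items():
        flat = g.abs().flatten()
        k = max(1, int(keep_ratio * flat.numel()))
        thresh = torch.kthvalue(flat, k).values  # k-th smallest magnitude
        masks[name] = (g.abs() <= thresh).float()
    return masks

def project_to_global_vicinity(local_state, global_state, radius=1.0):
    # Constrain the poisoned model so its update stays inside an L2 ball of
    # the given radius around the global model (one plausible reading of the
    # "vicinity" constraint). Assumes floating-point parameter tensors.
    diffs = {k: local_state[k] - global_state[k] for k in global_state}
    total_norm = torch.sqrt(sum((d.float() ** 2).sum() for d in diffs.values()))
    scale = torch.clamp(radius / (total_norm + 1e-12), max=1.0)
    return {k: global_state[k] + scale * diffs[k] for k in global_state}
```

In an FL round, an adversarial client would, roughly, train on trigger-stamped data, apply such a mask to its local update, and project the result toward the global model before submitting it to the server; the exact training and trigger-learning procedure used by IBA is described in the paper.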


Dung Nguyen, Tuan Nguyen, Anh Tran, Khoa Doan, Kok-seng Wong

NeurIPS 2023

