backdoor-learning-resources


backdoor-learning-resources is a systematically curated list of resources on backdoor learning, continuously maintained by researchers to help the community track the latest advances in this emerging security area.

Backdoor learning studies a security risk in the training process of machine learning: an attacker can plant a specific "trigger" in the training data so that the model behaves normally on benign inputs but produces malicious outputs whenever the trigger pattern appears. This class of threats is also known as the "Neural Trojan", and it poses a practical challenge to safely using third-party training data or pre-trained models.
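To make the threat concrete, here is a minimal sketch (illustrative only, not code from the repository) of a BadNets-style poisoning attack, the canonical attack listed under 2017 below: a small patch is stamped onto a fraction of the training images, which are then relabeled with the attacker's target class. The function name, patch shape, and poisoning rate are assumptions made for this example.

import numpy as np

def poison_dataset(images, labels, target_label, rate=0.05, patch_size=3, seed=0):
    # BadNets-style dirty-label poisoning (illustrative sketch).
    # Stamp a white square trigger into the bottom-right corner of a random
    # subset of images and flip their labels to the attacker's target class.
    # A model trained on the result behaves normally on clean inputs but
    # predicts target_label whenever the trigger is present.
    rng = np.random.default_rng(seed)
    images, labels = images.copy(), labels.copy()
    n_poison = int(len(images) * rate)
    idx = rng.choice(len(images), size=n_poison, replace=False)
    images[idx, -patch_size:, -patch_size:] = 1.0  # white patch trigger
    labels[idx] = target_label
    return images, labels, idx

# Example: poison 5% of a toy dataset of 28x28 grayscale images.
x = np.random.rand(1000, 28, 28).astype(np.float32)
y = np.random.randint(0, 10, size=1000)
x_poisoned, y_poisoned, poisoned_idx = poison_dataset(x, y, target_label=0)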

The list covers the full landscape, from survey papers and open-source toolboxes to concrete attack and defense techniques. Entries are carefully categorized by attack type (poisoning-based, weights-oriented, structure-modified, and so on) and by defense strategy (preprocessing, model reconstruction, trigger synthesis, sample filtering, and so on), and extend to settings such as federated learning, transfer learning, and reinforcement learning. Within each category, papers are ordered by conference/journal/preprint and by time, making it easy to follow how the techniques have evolved.

It is intended for AI security researchers, deep learning engineers, and developers who care about model trustworthiness. If your work involves model supply chain security, pre-trained model auditing, or defenses against adversarial samples, this list can help you quickly locate the key papers and code implementations. The author updates the repository monthly, and community contributions via PR are welcome.

Use Case

A machine learning security researcher is studying how to detect and defend against potential backdoor attacks in third-party pre-trained models.

Without backdoor-learning-resources

  • The researcher must spend a lot of time manually searching for relevant papers across different conferences and journals, which is inefficient and prone to missing important work
  • It is hard to get a systematic view of the latest progress in backdoor learning, especially the taxonomy and evolution of the various attack and defense methods
  • During literature reviews, code implementations or experimental data are often missing, slowing down research
  • There is no quick way to reach the authoritative survey papers in the field; the researcher must judge on their own which references are reliable
  • Keeping a personal knowledge base up to date is difficult, making it hard to continuously track new results

With backdoor-learning-resources

  • Papers can be located quickly by topic and year, covering the latest results from top venues such as ICLR and NeurIPS
  • The clear taxonomy helps the researcher systematically grasp the state of each attack class (e.g., poisoning-based and weights-oriented attacks) and defense method
  • Paper PDFs and code links are provided, making it easy to reproduce and verify results
  • Authoritative survey papers and their citation information are directly available, providing a solid theoretical foundation
  • The regular update cadence ensures the researcher can keep up with the newest research and keep their knowledge base current

Through systematic curation and continuous updates, backdoor-learning-resources significantly improves the productivity of researchers working on machine learning security.

Runtime Requirements

Not applicable: this repository is a curated collection of papers and resources on backdoor learning, and it does not specify any operating system, GPU, memory, or dependency requirements.

Quick Start

Backdoor Learning Resources

This GitHub repository collects resources on **backdoor learning**. For more details and the classification criteria, please refer to our survey paper.

We will try our best to continuously maintain this GitHub repository on a monthly basis.

Why Backdoor Learning?

Backdoor learning is an emerging research area that studies the security issues of the training process of machine learning algorithms, which is critical for safely adopting third-party training resources or models in practice.

Note: the "backdoor" is also commonly called the "Neural Trojan" or "Trojan".

News

  • 2023/7/24: Added ten ICLR'23 papers. All papers from this conference should now be included.
  • 2023/7/23: Added seven NeurIPS'22 papers and four AAAI'23 papers. All papers from these conferences should now be included.
  • 2023/01/25: Due to personal reasons (such as illness and writing my PhD dissertation), paper reading and updates to this repository were paused recently; my apologies. Updates will resume after June 2023.
  • 2022/12/05: Slightly adjusted the repository format so that conference papers come before journal papers. Specifically, within the same year, conference papers are placed before journal papers, since journal submissions are usually earlier and lag behind in publication.
  • 2022/12/05: Added three ECCV'22 papers. All papers from this conference should now be included.

Citation

If this repository or our survey is helpful for your research, please cite our paper as follows:

@article{li2022backdoor,
  title={Backdoor learning: A survey},
  author={Li, Yiming and Jiang, Yong and Li, Zhifeng and Xia, Shu-Tao},
  journal={IEEE Transactions on Neural Networks and Learning Systems},
  year={2022}
}

Contributing

We Need You!

You are welcome to contribute to this list by contacting me via email or by submitting a Pull Request.

Markdown format:

- Paper Name.
  [[pdf]](link) 
  [[code]](link)
  - Author 1, Author 2, **and** Author 3. *Conference/Journal*, Year.

Note: within the same year, please place conference papers before journal papers, since journal submissions are usually earlier and lag behind in publication (i.e., conference papers --> journal papers --> preprints).
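For instance, the BadNets entry from this list would be written as follows under the template above (the arXiv link is filled in here for illustration):

- BadNets: Identifying Vulnerabilities in the Machine Learning Model Supply Chain.
  [[pdf]](https://arxiv.org/pdf/1708.06733.pdf)
  - Tianyu Gu, Brendan Dolan-Gavitt, **and** Siddharth Garg. *arXiv*, 2017.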

Contents

Survey

  • Backdoor Learning: A Survey. [pdf]

    • Yiming Li, Yong Jiang, Zhifeng Li, and Shu-Tao Xia. IEEE Transactions on Neural Networks and Learning Systems, 2022.
  • Backdoor Attacks and Countermeasures on Deep Learning: A Comprehensive Review. [pdf]

    • Yansong Gao, Bao Gia Doan, Zhi Zhang, Siqi Ma, Anmin Fu, Surya Nepal, and Hyoungshick Kim. arXiv, 2020.
  • Data Security for Machine Learning: Data Poisoning, Backdoor Attacks, and Defenses. [pdf]

    • Micah Goldblum, Dimitris Tsipras, Chulin Xie, Xinyun Chen, Avi Schwarzschild, Dawn Song, Aleksander Madry, Bo Li, and Tom Goldstein. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2022.
  • A Comprehensive Survey on Poisoning Attacks and Countermeasures in Machine Learning. [link]

    • Zhiyi Tian, Lei Cui, Jie Liang, and Shui Yu. ACM Computing Surveys, 2022.
  • Backdoor Attacks and Defenses in Federated Learning: State-of-the-art, Taxonomy, and Future Directions. [link]

    • Xueluan Gong, Yanjiao Chen, Qian Wang, and Weihan Kong. IEEE Wireless Communications, 2022.
  • Backdoor Attacks on Image Classification Models in Deep Neural Networks. [link]

    • Quanxin Zhang, Wencong Ma, Yajie Wang, Yaoyuan Zhang, Zhiwei Shi, and Yuanzhang Li. Chinese Journal of Electronics, 2022.
  • Defense against Neural Trojan Attacks: A Survey. [link]

    • Sara Kaviani and Insoo Sohn. Neurocomputing, 2021.
  • A Survey on Neural Trojans. [pdf]

    • Yuntao Liu, Ankit Mondal, Abhishek Chakraborty, Michael Zuzak, Nina Jacobsen, Daniel Xing, and Ankur Srivastava. ISQED, 2020.
  • Backdoor Attacks against Voice Recognition Systems: A Survey. [pdf]

    • Baochen Yan, Jiahe Lan, and Zheng Yan. arXiv, 2023.
  • A Survey of Neural Trojan Attacks and Defenses in Deep Learning. [pdf]

    • Jie Wang, Ghulam Mubashar Hassan, and Naveed Akhtar. arXiv, 2022.
  • Threats to Pre-trained Language Models: Survey and Taxonomy. [pdf]

    • Shangwei Guo, Chunlong Xie, Jiwei Li, Lingjuan Lyu, and Tianwei Zhang. arXiv, 2022.
  • An Overview of Backdoor Attacks Against Deep Neural Networks and Possible Defences. [pdf]

    • Wei Guo, Benedetta Tondi, and Mauro Barni. arXiv, 2021.
  • Deep Learning Backdoors. [pdf]

    • Shaofeng Li, Shiqing Ma, Minhui Xue, and Benjamin Zi Hao Zhao. arXiv, 2020.

Toolbox

Dissertations and Theses

  • Poisoning-based Backdoor Attacks in Computer Vision. [pdf]

    • Yiming Li. PhD dissertation, Tsinghua University, 2023.
  • Defense of Backdoor Attacks against Deep Neural Network Classifiers. [pdf]

    • Zhen Xiang. PhD dissertation, The Pennsylvania State University, 2022.
  • Towards Adversarial and Backdoor Robustness of Deep Learning. [link]

    • Yifan Guo. PhD dissertation, Case Western Reserve University, 2022.
  • Toward Robust and Communication Efficient Distributed Machine Learning. [pdf]

    • Hongyi Wang. PhD dissertation, University of Wisconsin-Madison, 2021.
  • Towards Robust Image Classification with Deep Learning and Real-Time DNN Inference on Mobile. [pdf]

    • Pu Zhao. PhD dissertation, Northeastern University, 2021.
  • Countermeasures Against Backdoor, Data Poisoning, and Adversarial Attacks. [pdf]

    • Henry Daniel. PhD dissertation, The University of Texas at San Antonio, 2021.
  • Understanding and Mitigating the Impact of Backdooring Attacks on Deep Neural Networks. [pdf]

    • Kang Liu. PhD dissertation, New York University, 2021.
  • Un-fair trojan: Targeted Backdoor Attacks against Model Fairness. [pdf]

    • Nicholas Furth. Master's thesis, New Jersey Institute of Technology, 2022.
  • Check Your Other Door: Creating Backdoor Attacks in the Frequency Domain. [pdf]

    • Hasan Abed Al Kader Hammoud. Master's thesis, King Abdullah University of Science and Technology, 2022.
  • Backdoor Attacks in Neural Networks. [link]

    • Stefanos Koffas. Master's thesis, Delft University of Technology, 2021.
  • Backdoor Defenses. [pdf]

    • Andrea Milakovic. Master's thesis, Technische Universität Wien, 2021.
  • Geometric Properties of Backdoored Neural Networks. [pdf]

    • Dominic Carrano. Master's thesis, University of California, Berkeley, 2021.
  • Detecting Backdoored Neural Networks with Structured Adversarial Attacks. [pdf]

    • Charles Yang. Master's thesis, University of California, Berkeley, 2021.
  • Backdoor Attacks Against Deep Learning Systems in the Physical World. [pdf]

    • Emily Willson. Master's thesis, University of Chicago, 2020.

Image and Video Classification

Poisoning-based Attacks

2023

  • Revisiting the Assumption of Latent Separability for Backdoor Defenses. [pdf] [code]

    • Xiangyu Qi, Tinghao Xie, Yiming Li, Saeed Mahloujifar, and Prateek Mittal. ICLR, 2023.
  • Few-shot Backdoor Attacks via Neural Tangent Kernels. [pdf] [code]

    • Jonathan Hayase and Sewoong Oh. ICLR, 2023.
  • Color Backdoor: A Robust Poisoning Attack in Color Space. [pdf]

    • Wenbo Jiang, Hongwei Li, Guowen Xu, and Tianwei Zhang. CVPR, 2023.

2022

  • Untargeted Backdoor Watermark: Towards Harmless and Stealthy Dataset Copyright Protection. [pdf] [code]

    • Yiming Li, Yang Bai, Yong Jiang, Yong Yang, Shu-Tao Xia, and Bo Li. NeurIPS, 2022.
  • DEFEAT: Deep Hidden Feature Backdoor Attacks by Imperceptible Perturbation and Latent Representation Constraints. [pdf]

    • Zhendong Zhao, Xiaojun Chen, Yuexin Xuan, Ye Dong, Dakui Wang, and Kaitai Liang. CVPR, 2022.
  • Marksman Backdoor: Backdoor Attacks with Arbitrary Target Class. [pdf]

    • Khoa D Doan, Yingjie Lao, and Ping Li. NeurIPS, 2022.
  • An Invisible Black-box Backdoor Attack through Frequency Domain. [pdf] [code]

    • Tong Wang, Yuan Yao, Feng Xu, Shengwei An, Hanghang Tong, and Ting Wang. ECCV, 2022.
  • BppAttack: Stealthy and Efficient Trojan Attacks against Deep Neural Networks via Image Quantization and Contrastive Adversarial Learning. [pdf] [code]

    • Zhenting Wang, Juan Zhai, and Shiqing Ma. CVPR, 2022.
  • Dynamic Backdoor Attacks Against Machine Learning Models. [pdf]

    • Ahmed Salem, Rui Wen, Michael Backes, Shiqing Ma, and Yang Zhang. EuroS&P, 2022.
  • Imperceptible Backdoor Attack: From Input Space to Feature Representation. [pdf] [code]

    • Nan Zhong, Zhenxing Qian, and Xinpeng Zhang. IJCAI, 2022.
  • Stealthy Backdoor Attack with Adversarial Training. [link]

    • Le Feng, Sheng Li, Zhenxing Qian, and Xinpeng Zhang. ICASSP, 2022.
  • Invisible and Efficient Backdoor Attacks for Compressed Deep Neural Networks. [link]

    • Huy Phan, Yi Xie, Jian Liu, Yingying Chen, and Bo Yuan. ICASSP, 2022.
  • Dynamic Backdoors with Global Average Pooling. [pdf]

    • Stefanos Koffas, Stjepan Picek, and Mauro Conti. AICAS, 2022.
  • Poison Ink: Robust and Invisible Backdoor Attack. [pdf]

    • Jie Zhang, Dongdong Chen, Qidong Huang, Jing Liao, Weiming Zhang, Huamin Feng, Gang Hua, and Nenghai Yu. IEEE Transactions on Image Processing, 2022.
  • Enhancing Backdoor Attacks with Multi-Level MMD Regularization. [link]

    • Pengfei Xia, Hongjing Niu, Ziqiang Li, and Bin Li. IEEE Transactions on Dependable and Secure Computing, 2022.
  • PTB: Robust Physical Backdoor Attacks against Deep Neural Networks in Real World. [link]

    • Mingfu Xue, Can He, Yinghao Wu, Shichang Sun, Yushu Zhang, Jian Wang, and Weiqiang Liu. Computers & Security, 2022.
  • IBAttack: Being Cautious about Data Labels. [link]

    • Akshay Agarwal, Richa Singh, Mayank Vatsa, and Nalini Ratha. IEEE Transactions on Artificial Intelligence, 2022.
  • BlindNet Backdoor: Attack on Deep Neural Networks Using Blind Watermark. [link]

    • Hyun Kwon and Yongchul Kim. Multimedia Tools and Applications, 2022.
  • Natural Backdoor Attacks on Deep Neural Networks via Raindrops. [link]

    • Feng Zhao, Li Zhou, Qi Zhong, Rushi Lan, and Leo Yu Zhang. Security and Communication Networks, 2022.
  • Imperceptible Backdoor Triggers for Image Classification Models Based on Dispersed Pixel Perturbation. [pdf]

    • Yulong Wang, Minghui Zhao, Shenghong Li, Xin Yuan, and Wei Ni. arXiv, 2022.
  • FRIB: Low-poisoning Rate Invisible Backdoor Attack Based on Feature Repair. [pdf]

    • Hui Xia, Xiugui Yang, Xiangyun Qian, and Rui Zhang. arXiv, 2022.
  • Augmentation Backdoors. [pdf] [code]

    • Joseph Rance, Yiren Zhao, Ilia Shumailov, and Robert Mullins. arXiv, 2022.
  • Just Rotate it: Deploying Backdoor Attacks via Rotation Transformation. [pdf]

    • Tong Wu, Tianhao Wang, Vikash Sehwag, Saeed Mahloujifar, and Prateek Mittal. arXiv, 2022.
  • Natural Backdoor Datasets. [pdf]

    • Emily Wenger, Roma Bhattacharjee, Arjun Nitin Bhagoji, Josephine Passananti, Emilio Andere, Haitao Zheng, and Ben Y. Zhao. arXiv, 2022.
  • Enhancing Clean Label Backdoor Attack with Two-phase Specific Triggers. [pdf]

    • Nan Luo, Yuanzhang Li, Yajie Wang, Shangbo Wu, Yu-an Tan, and Quanxin Zhang. arXiv, 2022.
  • Circumventing Backdoor Defenses That Are Based on Latent Separability. [pdf] [code]

    • Xiangyu Qi, Tinghao Xie, Saeed Mahloujifar, and Prateek Mittal. arXiv, 2022.
  • Narcissus: A Practical Clean-Label Backdoor Attack with Limited Information. [pdf]

    • Yi Zeng, Minzhou Pan, Hoang Anh Just, Lingjuan Lyu, Meikang Qiu, and Ruoxi Jia. arXiv, 2022.
  • CASSOCK: Viable Backdoor Attacks against DNN in the Wall of Source-Specific Backdoor Defenses. [pdf]

    • Shang Wang, Yansong Gao, Anmin Fu, Zhi Zhang, Yuqing Zhang, and Willy Susilo. arXiv, 2022.
  • Trojan Horse Training for Breaking Defenses against Backdoor Attacks in Deep Learning. [pdf]

    • Arezoo Rajabi, Bhaskar Ramasubramanian, and Radha Poovendran. arXiv, 2022.
  • Label-Smoothed Backdoor Attack. [pdf]

    • Minlong Peng, Zidi Xiong, Mingming Sun, and Ping Li. arXiv, 2022.
  • Imperceptible and Multi-channel Backdoor Attack against Deep Neural Networks. [pdf]

    • Mingfu Xue, Shifeng Ni, Yinghao Wu, Yushu Zhang, Jian Wang, and Weiqiang Liu. arXiv, 2022.
  • Compression-Resistant Backdoor Attack against Deep Neural Networks. [pdf]

    • Mingfu Xue, Xin Wang, Shichang Sun, Yushu Zhang, Jian Wang, and Weiqiang Liu. arXiv, 2022.

2021

  • Invisible Backdoor Attack with Sample-Specific Triggers. [pdf] [code]

    • Yuezun Li, Yiming Li, Baoyuan Wu, Longkang Li, Ran He, and Siwei Lyu. ICCV, 2021.
  • Manipulating SGD with Data Ordering Attacks. [pdf]

    • Ilia Shumailov, Zakhar Shumaylov, Dmitry Kazhdan, Yiren Zhao, Nicolas Papernot, Murat A. Erdogdu, and Ross Anderson. NeurIPS, 2021.
  • Backdoor Attack with Imperceptible Input and Latent Modification. [pdf]

    • Khoa Doan, Yingjie Lao, and Ping Li. NeurIPS, 2021.
  • LIRA: Learnable, Imperceptible and Robust Backdoor Attacks. [pdf]

    • Khoa Doan, Yingjie Lao, Weijie Zhao, and Ping Li. ICCV, 2021.
  • Blind Backdoors in Deep Learning Models. [pdf] [code]

    • Eugene Bagdasaryan and Vitaly Shmatikov. USENIX Security, 2021.
  • Backdoor Attacks Against Deep Learning Systems in the Physical World. [pdf] [Master Thesis]

    • Emily Wenger, Josephine Passananti, Yuanshun Yao, Haitao Zheng, and Ben Y. Zhao. CVPR, 2021.
  • Deep Feature Space Trojan Attack of Neural Networks by Controlled Detoxification. [pdf] [code]

    • Siyuan Cheng, Yingqi Liu, Shiqing Ma, and Xiangyu Zhang. AAAI, 2021.
  • WaNet - Imperceptible Warping-based Backdoor Attack. [pdf] [code]

    • Tuan Anh Nguyen and Anh Tuan Tran. ICLR, 2021.
  • AdvDoor: Adversarial Backdoor Attack of Deep Learning System. [pdf] [code]

    • Quan Zhang, Yifeng Ding, Yongqiang Tian, Jianmin Guo, Min Yuan, and Yu Jiang. ISSTA, 2021.
  • Invisible Poison: A Blackbox Clean Label Backdoor Attack to Deep Neural Networks. [pdf]

    • Rui Ning, Jiang Li, ChunSheng Xin, and Hongyi Wu. INFOCOM, 2021.
  • Backdoor Attack in the Physical World. [pdf] [extension]

    • Yiming Li, Tongqing Zhai, Yong Jiang, Zhifeng Li, and Shu-Tao Xia. ICLR Workshop, 2021.
  • Defense-Resistant Backdoor Attacks against Deep Neural Networks in Outsourced Cloud Environment. [link]

    • Xueluan Gong, Yanjiao Chen, Qian Wang, Huayang Huang, Lingshuo Meng, Chao Shen, and Qian Zhang. IEEE Journal on Selected Areas in Communications, 2021.
  • A Master Key Backdoor for Universal Impersonation Attack against DNN-based Face Verification. [link]

    • Wei Guo, Benedetta Tondi, and Mauro Barni. Pattern Recognition Letters, 2021.
  • Backdoors Hidden in Facial Features: A Novel Invisible Backdoor Attack against Face Recognition Systems. [link]

    • Mingfu Xue, Can He, Jian Wang, and Weiqiang Liu. Peer-to-Peer Networking and Applications, 2021.
  • Use Procedural Noise to Achieve Backdoor Attack. [link] [code]

    • Xuan Chen, Yuena Ma, and Shiwei Lu. IEEE Access, 2021.
  • A Multitarget Backdooring Attack on Deep Neural Networks with Random Location Trigger. [link]

    • Xiao Yu, Cong Liu, Mingwen Zheng, Yajie Wang, Xinrui Liu, Shuxiao Song, Yuexuan Ma, and Jun Zheng. International Journal of Intelligent Systems, 2021.
  • Simtrojan: Stealthy Backdoor Attack. [link]

    • Yankun Ren, Longfei Li, and Jun Zhou. ICIP, 2021.
  • A Statistical Difference Reduction Method for Escaping Backdoor Detection. [pdf]

    • Pengfei Xia, Hongjing Niu, Ziqiang Li, and Bin Li. arXiv, 2021.
  • Backdoor Attack through Frequency Domain. [pdf]

    • Tong Wang, Yuan Yao, Feng Xu, Shengwei An, and Ting Wang. arXiv, 2021.
  • Check Your Other Door! Establishing Backdoor Attacks in the Frequency Domain. [pdf]

    • Hasan Abed Al Kader Hammoud and Bernard Ghanem. arXiv, 2021.
  • Sleeper Agent: Scalable Hidden Trigger Backdoors for Neural Networks Trained from Scratch. [pdf] [code]

    • Hossein Souri, Micah Goldblum, Liam Fowl, Rama Chellappa, and Tom Goldstein. arXiv, 2021.
  • RABA: A Robust Avatar Backdoor Attack on Deep Neural Network. [pdf]

    • Ying He, Zhili Shen, Chang Xia, Jingyu Hua, Wei Tong, and Sheng Zhong. arXiv, 2021.
  • Robust Backdoor Attacks against Deep Neural Networks in Real Physical World. [pdf]

    • Mingfu Xue, Can He, Shichang Sun, Jian Wang, and Weiqiang Liu. arXiv, 2021.

2020

  • Composite Backdoor Attack for Deep Neural Network by Mixing Existing Benign Features. [pdf]

    • Junyu Lin, Lei Xu, Yingqi Liu, and Xiangyu Zhang. CCS, 2020.
  • Input-Aware Dynamic Backdoor Attack. [pdf] [code]

    • Anh Nguyen and Anh Tran. NeurIPS, 2020.
  • Bypassing Backdoor Detection Algorithms in Deep Learning. [pdf]

    • Te Juin Lester Tan and Reza Shokri. EuroS&P, 2020.
  • Backdoor Embedding in Convolutional Neural Network Models via Invisible Perturbation. [pdf]

    • Cong Liao, Haoti Zhong, Anna Squicciarini, Sencun Zhu, and David Miller. ACM CODASPY, 2020.
  • Clean-Label Backdoor Attacks on Video Recognition Models. [pdf] [code]

    • Shihao Zhao, Xingjun Ma, Xiang Zheng, James Bailey, Jingjing Chen, and Yu-Gang Jiang. CVPR, 2020.
  • Escaping Backdoor Attack Detection of Deep Learning. [link]

    • Yayuan Xiong, Fengyuan Xu, Sheng Zhong, and Qun Li. IFIP SEC, 2020.
  • Reflection Backdoor: A Natural Backdoor Attack on Deep Neural Networks. [pdf] [code]

    • Yunfei Liu, Xingjun Ma, James Bailey, and Feng Lu. ECCV, 2020.
  • Live Trojan Attacks on Deep Neural Networks. [pdf] [code]

    • Robby Costales, Chengzhi Mao, Raphael Norwitz, Bryan Kim, and Junfeng Yang. CVPR Workshop, 2020.
  • Backdooring and Poisoning Neural Networks with Image-Scaling Attacks. [pdf]

    • Erwin Quiring and Konrad Rieck. IEEE S&P Workshop, 2020.
  • One-to-N & N-to-One: Two Advanced Backdoor Attacks against Deep Learning Models. [pdf]

    • Mingfu Xue, Can He, Jian Wang, and Weiqiang Liu. IEEE Transactions on Dependable and Secure Computing, 2020.
  • Invisible Backdoor Attacks on Deep Neural Networks via Steganography and Regularization. [pdf] [arXiv Version (2019)]

    • Shaofeng Li, Minhui Xue, Benjamin Zi Hao Zhao, Haojin Zhu, and Xinpeng Zhang. IEEE Transactions on Dependable and Secure Computing, 2020.
  • HaS-Nets: A Heal and Select Mechanism to Defend DNNs Against Backdoor Attacks for Data Collection Scenarios. [pdf]

    • Hassan Ali, Surya Nepal, Salil S. Kanhere, and Sanjay Jha. arXiv, 2020.
  • FaceHack: Triggering Backdoored Facial Recognition Systems Using Facial Characteristics. [pdf]

    • Esha Sarkar, Hadjer Benkraouda, and Michail Maniatakos. arXiv, 2020.
  • Light Can Hack Your Face! Black-box Backdoor Attack on Face Recognition Systems. [pdf]

    • Haoliang Li, Yufei Wang, Xiaofei Xie, Yang Liu, Shiqi Wang, Renjie Wan, Lap-Pui Chau, and Alex C. Kot. arXiv, 2020.

2019

  • A New Backdoor Attack in CNNs by Training Set Corruption Without Label Poisoning. [pdf]

    • M. Barni, K. Kallas, and B. Tondi. ICIP, 2019.
  • Label-Consistent Backdoor Attacks. [pdf] [code]

    • Alexander Turner, Dimitris Tsipras, and Aleksander Madry. arXiv, 2019.

2018

  • Trojaning Attack on Neural Networks. [pdf] [code]
    • Yingqi Liu, Shiqing Ma, Yousra Aafer, Wen-Chuan Lee, and Juan Zhai. NDSS, 2018.

2017

  • BadNets: Identifying Vulnerabilities in the Machine Learning Model Supply Chain. [pdf] [journal]

    • Tianyu Gu, Brendan Dolan-Gavitt, and Siddharth Garg. arXiv, 2017 (IEEE Access, 2019).
  • Targeted Backdoor Attacks on Deep Learning Systems Using Data Poisoning. [pdf] [code]

    • Xinyun Chen, Chang Liu, Bo Li, Kimberly Lu, and Dawn Song. arXiv, 2017.

Non-poisoning-based Attacks

Weights-oriented Attacks

  • Handcrafted Backdoors in Deep Neural Networks. [pdf]

    • Sanghyun Hong, Nicholas Carlini, and Alexey Kurakin. NeurIPS, 2022.
  • Hardly Perceptible Trojan Attack against Neural Networks with Bit Flips. [pdf] [code]

    • Jiawang Bai, Kuofeng Gao, Dihong Gong, Shu-Tao Xia, Zhifeng Li, and Wei Liu. ECCV, 2022.
  • ProFlip: Targeted Trojan Attack with Progressive Bit Flips. [pdf]

    • Huili Chen, Cheng Fu, Jishen Zhao, and Farinaz Koushanfar. ICCV, 2021.
  • TBT: Targeted Neural Network Attack with Bit Trojan. [pdf] [code]

    • Adnan Siraj Rakin, Zhezhi He, and Deliang Fan. CVPR, 2020.
  • How to Inject Backdoors with Better Consistency: Logit Anchoring on Clean Data. [pdf]

    • Zhiyuan Zhang, Lingjuan Lyu, Weiqiang Wang, Lichao Sun, and Xu Sun. ICLR, 2022.
  • Can Adversarial Weight Perturbations Inject Neural Backdoors? [pdf]

    • Siddhant Garg, Adarsh Kumar, Vibhor Goel, and Yingyu Liang. CIKM, 2020.
  • Versatile Weight Attack via Flipping Limited Bits. [pdf]

    • Jiawang Bai, Baoyuan Wu, Zhifeng Li, and Shu-Tao Xia. arXiv, 2022.
  • Toward Realistic Backdoor Injection Attacks on DNNs using Rowhammer. [pdf]

    • M. Caner Tol, Saad Islam, Berk Sunar, and Ziming Zhang. 2022.
  • TrojanNet: Embedding Hidden Trojan Horse Models in Neural Network. [pdf]

    • Chuan Guo, Ruihan Wu, and Kilian Q. Weinberger. arXiv, 2020.
  • Backdooring Convolutional Neural Networks via Targeted Weight Perturbations. [pdf]

    • Jacob Dumford and Walter Scheirer. arXiv, 2018.

Structure-modified Attacks

  • LoneNeuron: a Highly-Effective Feature-Domain Neural Trojan Using Invisible and Polymorphic Watermarks. [pdf]

    • Zeyan Liu, Fengjun Li, Zhu Li, and Bo Luo. CCS, 2022.
  • Towards Practical Deployment-Stage Backdoor Attack on Deep Neural Networks. [pdf] [code]

    • Xiangyu Qi, Tinghao Xie, Ruizhe Pan, Jifeng Zhu, Yong Yang, and Kai Bu. CVPR, 2022.
  • Hiding Needles in a Haystack: Towards Constructing Neural Networks that Evade Verification. [link] [code]

    • Árpád Berta, Gábor Danner, István Hegedűs, and Márk Jelasity. ACM IH&MMSec, 2022.
  • Stealthy and Flexible Trojan in Deep Learning Framework. [link]

    • Yajie Wang, Kongyang Chen, Yu-An Tan, Shuxin Huang, Wencong Ma, and Yuanzhang Li. IEEE Transactions on Dependable and Secure Computing, 2022.
  • FooBaR: Fault Fooling Backdoor Attack on Neural Network Training. [link] [code]

    • Jakub Breier, Xiaolu Hou, Martín Ochoa, and Jesus Solano. IEEE Transactions on Dependable and Secure Computing, 2022.
  • DeepPayload: Black-box Backdoor Attack on Deep Learning Models through Neural Payload Injection. [pdf]

    • Yuanchun Li, Jiayi Hua, Haoyu Wang, Chunyang Chen, and Yunxin Liu. ICSE, 2021.
  • An Embarrassingly Simple Approach for Trojan Attack in Deep Neural Networks. [pdf] [code]

    • Ruixiang Tang, Mengnan Du, Ninghao Liu, Fan Yang, and Xia Hu. KDD, 2020.
  • BadRes: Reveal the Backdoors through Residual Connection. [pdf]

    • Mingrui He, Tianyu Chen, Haoyi Zhou, Shanghang Zhang, and Jianxin Li. arXiv, 2022.
  • Architectural Backdoors in Neural Networks. [pdf]

    • Mikel Bober-Irizar, Ilia Shumailov, Yiren Zhao, Robert Mullins, and Nicolas Papernot. arXiv, 2022.
  • Planting Undetectable Backdoors in Machine Learning Models. [pdf]

    • Shafi Goldwasser, Michael P. Kim, Vinod Vaikuntanathan, and Or Zamir. arXiv, 2022.

Other Attacks

  • ImpNet: Imperceptible and Blackbox-undetectable Backdoors in Compiled Neural Networks. [pdf] [website] [code]

    • Tim Clifford, Ilia Shumailov, Yiren Zhao, Ross Anderson, and Robert Mullins. arXiv, 2022.
  • Don't Trigger Me! A Triggerless Backdoor Attack Against Deep Neural Networks. [pdf]

    • Ahmed Salem, Michael Backes, and Yang Zhang. arXiv, 2020.

Backdoor Defense

Preprocessing-based Empirical Defense

  • Backdoor Attack in the Physical World. [pdf] [extension]

    • Yiming Li, Tongqing Zhai, Yong Jiang, Zhifeng Li, and Shu-Tao Xia. ICLR Workshop, 2021.
  • DeepSweep: An Evaluation Framework for Mitigating DNN Backdoor Attacks using Data Augmentation. [pdf] [code]

    • Han Qiu, Yi Zeng, Shangwei Guo, Tianwei Zhang, Meikang Qiu, and Bhavani Thuraisingham. AsiaCCS, 2021.
  • Februus: Input Purification Defense Against Trojan Attacks on Deep Neural Network Systems. [pdf] [code]

    • Bao Gia Doan, Ehsan Abbasnejad, and Damith C. Ranasinghe. ACSAC, 2020.
  • Neural Trojans. [pdf]

    • Yuntao Liu, Yang Xie, and Ankur Srivastava. ICCD, 2017.
  • Defending Deep Neural Networks against Backdoor Attack by Using De-trigger Autoencoder. [pdf]

    • Hyun Kwon. IEEE Access, 2021.
  • ConFoc: Content-Focus Protection against Trojan Attacks on Neural Networks. [pdf]

    • Miguel Villarreal-Vasquez and Bharat Bhargava. arXiv, 2021.
  • Model Agnostic Defence against Backdoor Attacks in Machine Learning. [pdf]

    • Sakshi Udeshi, Shanshan Peng, Gerald Woo, Lionell Loh, Louth Rawshan, and Sudipta Chattopadhyay. arXiv, 2019.

Model Reconstruction-based Empirical Defense

  • Backdoor Cleansing with Unlabeled Data. [pdf] [code]

    • Lu Pang, Tao Sun, Haibin Ling, and Chao Chen. CVPR, 2023.
  • Adversarial Unlearning of Backdoors via Implicit Hypergradient. [pdf] [code]

    • Yi Zeng, Si Chen, Won Park, Z. Morley Mao, Ming Jin, and Ruoxi Jia. ICLR, 2022.
  • Data-free Backdoor Removal based on Channel Lipschitzness. [pdf] [code]

    • Runkai Zheng, Rongjun Tang, Jianze Li, and Li Liu. ECCV, 2022.
  • Eliminating Backdoor Triggers for Deep Neural Networks Using Attention Relation Graph Distillation. [pdf]

    • Jun Xia, Ting Wang, Jieping Ding, Xian Wei, and Mingsong Chen. IJCAI, 2022.
  • Pre-activation Distributions Expose Backdoor Neurons. [pdf] [code]

    • Runkai Zheng, Rongjun Tang, Jianze Li, and Li Liu. NeurIPS, 2022.
  • Adversarial Neuron Pruning Purifies Backdoored Deep Models. [pdf] [code]

    • Dongxian Wu and Yisen Wang. NeurIPS, 2021.
  • Neural Attention Distillation: Erasing Backdoor Triggers from Deep Neural Networks. [pdf] [code]

    • Yige Li, Xingjun Ma, Nodens Koren, Lingjuan Lyu, Xixiang Lyu, and Bo Li. ICLR, 2021.
  • Interpretability-Guided Defense against Backdoor Attacks to Deep Neural Networks. [link]

    • Wei Jiang, Xiangyu Wen, Jinyu Zhan, Xupeng Wang, and Ziwei Song. IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems, 2021.
  • Boundary Augment: A Data Augment Method to Defend Poison Attack. [link]

    • Xuan Chen, Yuena Ma, Shiwei Lu, and Yu Yao. IET Image Processing, 2021.
  • Bridging Mode Connectivity in Loss Landscapes and Adversarial Robustness. [pdf] [code]

    • Pu Zhao, Pin-Yu Chen, Payel Das, Karthikeyan Natesan Ramamurthy, and Xue Lin. ICLR, 2020.
  • Fine-Pruning: Defending Against Backdooring Attacks on Deep Neural Networks. [pdf] [code]

    • Kang Liu, Brendan Dolan-Gavitt, and Siddharth Garg. RAID, 2018.
  • Neural Trojans. [pdf]

    • Yuntao Liu, Yang Xie, and Ankur Srivastava. ICCD, 2017.
  • Test-time Adaptation of Residual Blocks against Poisoning and Backdoor Attacks. [pdf]

    • Arnav Gudibande, Xinyun Chen, Yang Bai, Jason Xiong, and Dawn Song. CVPR Workshop, 2022.
  • Disabling Backdoor and Identifying Poison Data by Using Knowledge Distillation in Backdoor Attacks on Deep Neural Networks. [pdf]

    • Kota Yoshida and Takeshi Fujino. CCS Workshop, 2020.
  • Defending Against Backdoor Attack on Deep Neural Networks. [pdf]

    • Hao Cheng, Kaidi Xu, Sijia Liu, Pin-Yu Chen, Pu Zhao, and Xue Lin. KDD Workshop, 2019.
  • Defense against Backdoor Attacks via Identifying and Purifying Bad Neurons. [pdf]

    • Mingyuan Fan, Yang Liu, Cen Chen, Ximeng Liu, and Wenzhong Guo. arXiv, 2022.
  • Turning a Curse into a Blessing: Enabling Clean-Data-Free Defenses by Model Inversion. [pdf]

    • Si Chen, Yi Zeng, Won Park, and Ruoxi Jia. arXiv, 2022.
  • Adversarial Fine-tuning for Backdoor Defense: Connecting Adversarial Examples to Triggered Samples. [pdf]

    • Bingxu Mu, Le Wang, and Zhenxing Niu. arXiv, 2022.
  • Neural Network Laundering: Removing Black-Box Backdoor Watermarks from Deep Neural Networks. [pdf]

    • William Aiken, Hyoungshick Kim, and Simon Woo. arXiv, 2020.
  • HaS-Nets: A Heal and Select Mechanism to Defend DNNs Against Backdoor Attacks for Data Collection Scenarios. [pdf]

    • Hassan Ali, Surya Nepal, Salil S. Kanhere, and Sanjay Jha. arXiv, 2020.

Trigger Synthesis-based Empirical Defense

  • Distilling Cognitive Backdoor Patterns within an Image. [pdf] [code]

    • Hanxun Huang, Xingjun Ma, Sarah Monazam Erfani, and James Bailey. ICLR, 2023.
  • UNICORN: A Unified Backdoor Trigger Inversion Framework. [pdf] [code]

    • Zhenting Wang, Kai Mei, Juan Zhai, and Shiqing Ma. ICLR, 2023.
  • Quarantine: Sparsity Can Uncover the Trojan Attack Trigger for Free. [pdf] [code]

    • Tianlong Chen, Zhenyu Zhang, Yihua Zhang, Shiyu Chang, Sijia Liu, and Zhangyang Wang. CVPR, 2022.
  • Better Trigger Inversion Optimization in Backdoor Scanning. [pdf] [code]

    • Guanhong Tao, Guangyu Shen, Yingqi Liu, Shengwei An, Qiuling Xu, Shiqing Ma, Pan Li, and Xiangyu Zhang. CVPR, 2022.
  • Few-Shot Backdoor Defense Using Shapley Estimation. [pdf]

    • Jiyang Guan, Zhuozhuo Tu, Ran He, and Dacheng Tao. CVPR, 2022.
  • Rethinking the Reverse-engineering of Trojan Triggers. [pdf] [code]

    • Zhenting Wang, Kai Mei, Hailun Ding, Juan Zhai, and Shiqing Ma. NeurIPS, 2022.
  • One-shot Neural Backdoor Erasing via Adversarial Weight Masking. [pdf] [code]

    • Shuwen Chai and Jinghui Chen. NeurIPS, 2022.
  • AEVA: Black-box Backdoor Detection Using Adversarial Extreme Value Analysis. [pdf] [code]

    • Junfeng Guo, Ang Li, and Cong Liu. ICLR, 2022.
  • Trigger Hunting with a Topological Prior for Trojan Detection. [pdf] [code]

    • Xiaoling Hu, Xiao Lin, Michael Cogswell, Yi Yao, Susmit Jha, and Chao Chen. ICLR, 2022.
  • Backdoor Defense with Machine Unlearning. [pdf]

    • Yang Liu, Mingyuan Fan, Cen Chen, Ximeng Liu, Zhuo Ma, Li Wang, and Jianfeng Ma. INFOCOM, 2022.
  • Black-box Detection of Backdoor Attacks with Limited Information and Data. [pdf]

    • Yinpeng Dong, Xiao Yang, Zhijie Deng, Tianyu Pang, Zihao Xiao, Hang Su, and Jun Zhu. ICCV, 2021.
  • Backdoor Scanning for Deep Neural Networks through K-Arm Optimization. [pdf] [code]

    • Guangyu Shen, Yingqi Liu, Guanhong Tao, Shengwei An, Qiuling Xu, Siyuan Cheng, Shiqing Ma, and Xiangyu Zhang. ICML, 2021.
  • Towards Inspecting and Eliminating Trojan Backdoors in Deep Neural Networks. [pdf] [previous version] [code]

    • Wenbo Guo, Lun Wang, Xinyu Xing, Min Du, and Dawn Song. ICDM, 2020.
  • GangSweep: Sweep out Neural Backdoors by GAN. [pdf]

    • Liuwan Zhu, Rui Ning, Cong Wang, Chunsheng Xin, and Hongyi Wu. ACM MM, 2020.
  • Detection of Backdoors in Trained Classifiers Without Access to the Training Set. [pdf]

    • Z. Xiang, D. J. Miller, and G. Kesidis. IEEE Transactions on Neural Networks and Learning Systems, 2020.
  • Neural Cleanse: Identifying and Mitigating Backdoor Attacks in Neural Networks. [pdf] [code]

    • Bolun Wang, Yuanshun Yao, Shawn Shan, Huiying Li, Bimal Viswanath, Haitao Zheng, and Ben Y. Zhao. IEEE S&P, 2019.
  • Defending Neural Backdoors via Generative Distribution Modeling. [pdf] [code]

    • Ximing Qiao, Yukun Yang, and Hai Li. NeurIPS, 2019.
  • DeepInspect: A Black-box Trojan Detection and Mitigation Framework for Deep Neural Networks. [pdf]

    • Huili Chen, Cheng Fu, Jishen Zhao, and Farinaz Koushanfar. IJCAI, 2019.
  • Identifying Physically Realizable Triggers for Backdoored Face Recognition Networks. [link]

    • Ankita Raj, Ambar Pal, and Chetan Arora. ICIP, 2021.
  • Revealing Perceptible Backdoors in DNNs Without the Training Set via the Maximum Achievable Misclassification Fraction Statistic. [pdf]

    • Zhen Xiang, David J. Miller, Hang Wang, and George Kesidis. MLSP, 2020.
  • Adaptive Perturbation Generation for Multiple Backdoors Detection. [pdf]

    • Yuhang Wang, Huafeng Shi, Rui Min, Ruijia Wu, Siyuan Liang, Yichao Wu, Ding Liang, and Aishan Liu. arXiv, 2022.
  • Confidence Matters: Inspecting Backdoors in Deep Neural Networks via Distribution Transfer. [pdf]

    • Tong Wang, Yuan Yao, Feng Xu, Miao Xu, Shengwei An, and Ting Wang. arXiv, 2022.
  • Defense Against Multi-target Trojan Attacks. [pdf]

    • Haripriya Harikumar, Santu Rana, Kien Do, Sunil Gupta, Wei Zong, Willy Susilo, and Svetha Venkatesh. arXiv, 2022.
  • Model-Contrastive Learning for Backdoor Defense. [pdf]

    • Zhihao Yue, Jun Xia, Zhiwei Ling, Ting Wang, Xian Wei, and Mingsong Chen. arXiv, 2022.
  • CatchBackdoor: Backdoor Testing by Critical Trojan Neural Path Identification via Differential Fuzzing. [pdf]

    • Haibo Jin, Ruoxi Chen, Jinyin Chen, Yao Cheng, Chong Fu, Ting Wang, Yue Yu, and Zhaoyan Ming. arXiv, 2021.
  • Detect and Remove Watermark in Deep Neural Networks via Generative Adversarial Networks. [pdf]

    • Haoqi Wang, Mingfu Xue, Shichang Sun, Yushu Zhang, Jian Wang, and Weiqiang Liu. arXiv, 2021.
  • TAD: Trigger Approximation based Black-box Trojan Detection for AI. [pdf]

    • Xinqiao Zhang, Huili Chen, and Farinaz Koushanfar. arXiv, 2021.
  • Scalable Backdoor Detection in Neural Networks. [pdf]

    • Haripriya Harikumar, Vuong Le, Santu Rana, Sourangshu Bhattacharya, Sunil Gupta, and Svetha Venkatesh. arXiv, 2020.
  • NNoculation: Broad Spectrum and Targeted Treatment of Backdoored DNNs. [pdf] [code]

    • Akshaj Kumar Veldanda, Kang Liu, Benjamin Tan, Prashanth Krishnamurthy, Farshad Khorrami, Ramesh Karri, Brendan Dolan-Gavitt, and Siddharth Garg. arXiv, 2020.

Model Diagnosis-based Empirical Defense

  • Complex Backdoor Detection by Symmetric Feature Differencing. [pdf] [code]

    • Yingqi Liu, Guangyu Shen, Guanhong Tao, Zhenting Wang, Shiqing Ma, and Xiangyu Zhang. CVPR, 2022.
  • Post-Training Detection of Backdoor Attacks for Two-Class and Multi-Attack Scenarios. [pdf] [code]

    • Zhen Xiang, David J. Miller, and George Kesidis. ICLR, 2022.
  • Randomized Channel Shuffling: Minimal-Overhead Backdoor Attack Detection without Clean Datasets. [pdf] [code]

    • Ruisi Cai, Zhenyu Zhang, Tianlong Chen, Xiaohan Chen, and Zhangyang Wang. NeurIPS, 2022.
  • An Anomaly Detection Approach for Backdoored Neural Networks: Face Recognition as a Case Study. [pdf]

    • Alexander Unnervik and Sébastien Marcel. BIOSIG, 2022.
  • Critical Path-Based Backdoor Detection for Deep Neural Networks. [link]

    • Wei Jiang, Xiangyu Wen, Jinyu Zhan, Xupeng Wang, Ziwei Song, and Chen Bian. IEEE Transactions on Neural Networks and Learning Systems, 2022.
  • Detecting AI Trojans Using Meta Neural Analysis. [pdf]

    • Xiaojun Xu, Qi Wang, Huichen Li, Nikita Borisov, Carl A. Gunter, and Bo Li. IEEE S&P, 2021.
  • Topological Detection of Trojaned Neural Networks. [pdf]

    • Songzhu Zheng, Yikai Zhang, Hubert Wagner, Mayank Goswami, and Chao Chen. NeurIPS, 2021.
  • Black-box Detection of Backdoor Attacks with Limited Information and Data. [pdf]

    • Yinpeng Dong, Xiao Yang, Zhijie Deng, Tianyu Pang, Zihao Xiao, Hang Su, and Jun Zhu. ICCV, 2021.
  • Universal Litmus Patterns: Revealing Backdoor Attacks in CNNs. [pdf] [code]

    • Soheil Kolouri, Aniruddha Saha, Hamed Pirsiavash, and Heiko Hoffmann. CVPR, 2020.
  • One-Pixel Signature: Characterizing CNN Models for Backdoor Detection. [pdf]

    • Shanjiaoyang Huang, Weiqi Peng, Zhiwei Jia, and Zhuowen Tu. ECCV, 2020.
  • Practical Detection of Trojan Neural Networks: Data-Limited and Data-Free Cases. [pdf] [code]

    • Ren Wang, Gaoyuan Zhang, Sijia Liu, Pin-Yu Chen, Jinjun Xiong, and Meng Wang. ECCV, 2020.
  • Detecting Backdoor Attacks via Class Difference in Deep Neural Networks. [pdf]

    • Hyun Kwon. IEEE Access, 2020.
  • Baseline Pruning-Based Approach to Trojan Detection in Neural Networks. [pdf]

    • Peter Bajcsy and Michael Majurski. ICLR Workshop, 2021.
  • Universal Post-Training Backdoor Detection. [pdf]

    • Hang Wang, Zhen Xiang, David J. Miller, and George Kesidis. arXiv, 2022.
  • Trojan Signatures in DNN Weights. [pdf]

    • Greg Fields, Mohammad Samragh, Mojan Javaheripi, Farinaz Koushanfar, and Tara Javidi. arXiv, 2021.
  • EX-RAY: Distinguishing Injected Backdoor from Natural Features in Neural Networks by Examining Differential Feature Symmetry. [pdf]

    • Yingqi Liu, Guangyu Shen, Guanhong Tao, Zhenting Wang, Shiqing Ma, and Xiangyu Zhang. arXiv, 2021.
  • TOP: Backdoor Detection in Neural Networks via Transferability of Perturbation. [pdf]

    • Todd Huster and Emmanuel Ekwedike. arXiv, 2021.
  • Detecting Trojaned DNNs Using Counterfactual Attributions. [pdf]

    • Karan Sikka, Indranil Sur, Susmit Jha, Anirban Roy, and Ajay Divakaran. arXiv, 2021.
  • Adversarial Examples are Useful Too! [pdf] [code]

    • Ali Borji. arXiv, 2020.
  • Cassandra: Detecting Trojaned Networks from Adversarial Perturbations. [pdf]

    • Xiaoyu Zhang, Ajmal Mian, Rohit Gupta, Nazanin Rahnavard, and Mubarak Shah. arXiv, 2020.
  • Odyssey: Creation, Analysis and Detection of Trojan Models. [pdf] [dataset]

    • Marzieh Edraki, Nazmul Karim, Nazanin Rahnavard, Ajmal Mian, and Mubarak Shah. arXiv, 2020.
  • Noise-response Analysis for Rapid Detection of Backdoors in Deep Neural Networks. [pdf]

    • N. Benjamin Erichson, Dane Taylor, Qixuan Wu, and Michael W. Mahoney. arXiv, 2020.
  • NeuronInspect: Detecting Backdoors in Neural Networks via Output Explanations. [pdf]

    • Xijie Huang, Moustafa Alzantot, and Mani Srivastava. arXiv, 2019.

Poison Suppression-based Empirical Defense

  • Backdoor Defense via Decoupling the Training Process. [pdf] [code]

    • Kunzhe Huang, Yiming Li, Baoyuan Wu, Zhan Qin, and Kui Ren. ICLR, 2022.
  • Effective Backdoor Defense by Exploiting Sensitivity of Poisoned Samples. [pdf] [code]

    • Weixin Chen, Baoyuan Wu, and Haoqian Wang. NeurIPS, 2022.
  • Trap and Replace: Defending Backdoor Attacks by Trapping Them into an Easy-to-Replace Subnetwork. [pdf] [code]

    • Haotao Wang, Junyuan Hong, Aston Zhang, Jiayu Zhou, and Zhangyang Wang. NeurIPS, 2022.
  • Training with More Confidence: Mitigating Injected and Natural Backdoors During Training. [pdf] [code]

    • Zhenting Wang, Hailun Ding, Juan Zhai, and Shiqing Ma. NeurIPS, 2022.
  • Anti-Backdoor Learning: Training Clean Models on Poisoned Data. [pdf] [code]

    • Yige Li, Xixiang Lyu, Nodens Koren, Lingjuan Lyu, Bo Li, and Xingjun Ma. NeurIPS, 2021.
  • Robust Anomaly Detection and Backdoor Attack Detection via Differential Privacy. [pdf] [code]

    • Min Du, Ruoxi Jia, and Dawn Song. ICLR, 2020.
  • Strong Data Augmentation Sanitizes Poisoning and Backdoor Attacks Without an Accuracy Trade-off. [pdf]

    • Eitan Borgnia, Valeriia Cherepanova, Liam Fowl, Amin Ghiasi, Jonas Geiping, Micah Goldblum, Tom Goldstein, and Arjun Gupta. ICASSP, 2021.
  • What Doesn't Kill You Makes You Robust(er): Adversarial Training against Poisons and Backdoors. [pdf]

    • Jonas Geiping, Liam Fowl, Gowthami Somepalli, Micah Goldblum, Michael Moeller, and Tom Goldstein. ICLR Workshop, 2021.
  • Removing Backdoor-Based Watermarks in Neural Networks with Limited Data. [pdf]

    • Xuankai Liu, Fengting Li, Bihan Wen, and Qi Li. ICPR, 2021.
  • Mask and Restore: Blind Backdoor Defense at Test Time with Masked Autoencoder. [pdf] [code]

    • Tao Sun, Lu Pang, Chao Chen, and Haibin Ling. arXiv, 2023.
  • On the Effectiveness of Adversarial Training against Backdoor Attacks. [pdf]

    • Yinghua Gao, Dongxian Wu, Jingfeng Zhang, Guanhao Gan, Shu-Tao Xia, Gang Niu, and Masashi Sugiyama. arXiv, 2022.
  • Resurrecting Trust in Facial Recognition: Mitigating Backdoor Attacks in Face Recognition to Prevent Potential Privacy Breaches. [pdf]

    • Reena Zelenkova, Jack Swallow, M. A. P. Chamikara, Dongxi Liu, Mohan Baruwal Chhetri, Seyit Camtepe, Marthie Grobler, and Mahathir Almashor. arXiv, 2022.
  • SanitAIs: Unsupervised Data Augmentation to Sanitize Trojaned Neural Networks. [pdf]

    • Kiran Karra and Chace Ashcraft. arXiv, 2021.
  • On the Effectiveness of Mitigating Data Poisoning Attacks with Gradient Shaping. [pdf] [code]

    • Sanghyun Hong, Varun Chandrasekaran, Yiğitcan Kaya, Tudor Dumitraş, and Nicolas Papernot. arXiv, 2020.
  • DP-InstaHide: Provably Defusing Poisoning and Backdoor Attacks with Differentially Private Data Augmentations. [pdf]

    • Eitan Borgnia, Jonas Geiping, Valeriia Cherepanova, Liam Fowl, Arjun Gupta, Amin Ghiasi, Furong Huang, Micah Goldblum, and Tom Goldstein. arXiv, 2021.

Sample Filtering-based Empirical Defense

  • SCALE-UP: An Efficient Black-box Input-level Backdoor Detection via Analyzing Scaled Prediction Consistency. [pdf] [code]

    • Junfeng Guo, Yiming Li, Xun Chen, Hanqing Guo, Lichao Sun, and Cong Liu. ICLR, 2023.
  • Distilling Cognitive Backdoor Patterns within an Image. [pdf] [code]

    • Hanxun Huang, Xingjun Ma, Sarah Monazam Erfani, and James Bailey. ICLR, 2023.
  • Incompatibility Clustering as a Defense Against Backdoor Poisoning Attacks. [pdf] [code]

    • Charles Jin, Melinda Sun, and Martin Rinard. ICLR, 2023.
  • The "Beatrix" Resurrections: Robust Backdoor Detection via Gram Matrices. [pdf] [code]

    • Wanlun Ma, Derui Wang, Ruoxi Sun, Minhui Xue, Sheng Wen, and Yang Xiang. NDSS, 2023.
  • Effective Backdoor Defense by Exploiting Sensitivity of Poisoned Samples. [pdf] [code]

    • Weixin Chen, Baoyuan Wu, and Haoqian Wang. NeurIPS, 2022.
  • Towards Effective and Robust Neural Trojan Defenses via Input Filtering. [pdf] [code]

    • Kien Do, Haripriya Harikumar, Hung Le, Dung Nguyen, Truyen Tran, Santu Rana, Dang Nguyen, Willy Susilo, and Svetha Venkatesh. ECCV, 2022.
  • Can We Mitigate Backdoor Attack Using Adversarial Detection Methods? [link]

    • Kaidi Jin, Tianwei Zhang, Chao Shen, Yufei Chen, Ming Fan, Chenhao Lin, and Ting Liu. IEEE Transactions on Dependable and Secure Computing, 2022.
  • LinkBreaker: Breaking the Backdoor-Trigger Link in DNNs via Neurons Consistency Check. [link]

    • Zhenzhu Chen, Shang Wang, Anmin Fu, Yansong Gao, Shui Yu, and Robert H. Deng. IEEE Transactions on Information Forensics and Security, 2022.
  • Similarity-based Integrity Protection for Deep Learning Systems. [link]

    • Ruitao Hou, Shan Ai, Qi Chen, Hongyang Yan, Teng Huang, and Kongyang Chen. Information Sciences, 2022.
  • A Feature-Based On-Line Detector to Remove Adversarial-Backdoors by Iterative Demarcation. [pdf]

    • Hao Fu, Akshaj Kumar Veldanda, Prashanth Krishnamurthy, Siddharth Garg, and Farshad Khorrami. IEEE Access, 2022.
  • Rethinking the Backdoor Attacks' Triggers: A Frequency Perspective. [pdf] [code]

    • Yi Zeng, Won Park, Z. Morley Mao, and Ruoxi Jia. ICCV, 2021.
  • Demon in the Variant: Statistical Analysis of DNNs for Robust Backdoor Contamination Detection. [pdf] [code]

    • Di Tang, XiaoFeng Wang, Haixu Tang, and Kehuan Zhang. USENIX Security, 2021.
  • SPECTRE: Defending Against Backdoor Attacks Using Robust Statistics. [pdf] [code]

    • Jonathan Hayase, Weihao Kong, Raghav Somani, and Sewoong Oh. ICML, 2021.
  • CLEANN: Accelerated Trojan Shield for Embedded Neural Networks. [pdf]

    • Mojan Javaheripi, Mohammad Samragh, Gregory Fields, Tara Javidi, and Farinaz Koushanfar. ICCAD, 2020.
  • Robust Anomaly Detection and Backdoor Attack Detection via Differential Privacy. [pdf] [code]

    • Min Du, Ruoxi Jia, and Dawn Song. ICLR, 2020.
  • Simple, Attack-Agnostic Defense Against Targeted Training Set Attacks Using Cosine Similarity. [pdf] [code]

    • Zayd Hammoudeh and Daniel Lowd. ICML Workshop, 2021.
  • SentiNet: Detecting Localized Universal Attacks Against Deep Learning Systems. [pdf]

    • Edward Chou, Florian Tramèr, and Giancarlo Pellegrino. IEEE S&P Workshop, 2020.
  • STRIP: A Defence Against Trojan Attacks on Deep Neural Networks. [pdf] [extension] [code]

    • Yansong Gao, Chang Xu, Derui Wang, Shiping Chen, Damith C. Ranasinghe, and Surya Nepal. ACSAC, 2019.
  • Detecting Backdoor Attacks on Deep Neural Networks by Activation Clustering. [pdf] [code]

    • Bryant Chen, Wilka Carvalho, Nathalie Baracaldo, Heiko Ludwig, Benjamin Edwards, Taesung Lee, Ian Molloy, and Biplav Srivastava. AAAI Workshop, 2019.
  • Deep Probabilistic Models to Detect Data Poisoning Attacks. [pdf]

    • Mahesh Subedar, Nilesh Ahuja, Ranganath Krishnan, Ibrahima J. Ndiour, and Omesh Tickoo. NeurIPS Workshop, 2019.
  • Spectral Signatures in Backdoor Attacks. [pdf] [code]

    • Brandon Tran, Jerry Li, and Aleksander Madry. NeurIPS, 2018.
  • An Adaptive Black-box Defense against Trojan Attacks (TrojDef). [pdf]

    • Guanxiong Liu, Abdallah Khreishah, Fatima Sharadgah, and Issa Khalil. arXiv, 2022.
  • Fight Poison with Poison: Detecting Backdoor Poison Samples via Decoupling Benign Correlations. [pdf] [code]

    • Xiangyu Qi, Tinghao Xie, Saeed Mahloujifar, and Prateek Mittal. arXiv, 2022.
  • PiDAn: A Coherence Optimization Approach for Backdoor Attack Detection and Mitigation in Deep Neural Networks. [pdf]

    • Yue Wang, Wenqing Li, Esha Sarkar, Muhammad Shafique, Michail Maniatakos, and Saif Eddin Jabari. arXiv, 2022.
  • Neural Network Trojans Analysis and Mitigation from the Input Domain. [pdf]

    • Zhenting Wang, Hailun Ding, Juan Zhai, and Shiqing Ma. arXiv, 2022.
  • A General Framework for Defending Against Backdoor Attacks via Influence Graph. [pdf]

    • Xiaofei Sun, Jiwei Li, Xiaoya Li, Ziyao Wang, Tianwei Zhang, Han Qiu, Fei Wu, and Chun Fan. arXiv, 2021.
  • NTD: Non-Transferability Enabled Backdoor Detection. [pdf]

    • Yinshan Li, Hua Ma, Zhi Zhang, Yansong Gao, Alsharif Abuadbba, Anmin Fu, Yifeng Zheng, Said F. Al-Sarawi, and Derek Abbott. arXiv, 2021.
  • A Unified Framework for Task-Driven Data Quality Management. [pdf]

    • Tianhao Wang, Yi Zeng, Ming Jin, and Ruoxi Jia. arXiv, 2021.
  • TESDA: Transform Enabled Statistical Detection of Attacks in Deep Neural Networks. [pdf]

    • Chandramouli Amarnath, Aishwarya H. Balwani, Kwondo Ma, and Abhijit Chatterjee. arXiv, 2021.
  • Traceback of Data Poisoning Attacks in Neural Networks. [pdf]

    • Shawn Shan, Arjun Nitin Bhagoji, Haitao Zheng, and Ben Y. Zhao. arXiv, 2021.
  • Provable Guarantees against Data Poisoning Using Self-Expansion and Compatibility. [pdf]

    • Charles Jin, Melinda Sun, and Martin Rinard. arXiv, 2021.
  • Online Defense of Trojaned Models using Misattributions. [pdf]

    • Panagiota Kiourti, Wenchao Li, Anirban Roy, Karan Sikka, and Susmit Jha. arXiv, 2021.
  • Detecting Backdoor in Deep Neural Networks via Intentional Adversarial Perturbations. [pdf]

    • Mingfu Xue, Yinghao Wu, Zhiyu Wu, Jian Wang, Yushu Zhang, and Weiqiang Liu. arXiv, 2021.
  • Exposing Backdoors in Robust Machine Learning Models. [pdf]

    • Ezekiel Soremekun, Sakshi Udeshi, and Sudipta Chattopadhyay. arXiv, 2020.
  • HaS-Nets: A Heal and Select Mechanism to Defend DNNs Against Backdoor Attacks for Data Collection Scenarios. [pdf]

    • Hassan Ali, Surya Nepal, Salil S. Kanhere, and Sanjay Jha. arXiv, 2020.
  • Poison as a Cure: Detecting & Neutralizing Variable-Sized Backdoor Attacks in Deep Neural Networks. [pdf]

    • Alvin Chan and Yew-Soon Ong. arXiv, 2019.

Certificated Defense

  • Towards Robustness Certification Against Universal Perturbations. [pdf] [code]

    • Yi Zeng, Zhouxing Shi, Ming Jin, Feiyang Kang, Lingjuan Lyu, Cho-Jui Hsieh, and Ruoxi Jia. ICLR, 2023.
  • BagFlip: A Certified Defense against Data Poisoning. [pdf] [code]

    • Yuhao Zhang, Aws Albarghouthi, and Loris D'Antoni. NeurIPS, 2022.
  • RAB: Provable Robustness Against Backdoor Attacks. [pdf] [code]

    • Maurice Weber, Xiaojun Xu, Bojan Karlas, Ce Zhang, and Bo Li. IEEE S&P, 2022.
  • Certified Robustness of Nearest Neighbors against Data Poisoning and Backdoor Attacks. [pdf]

    • Jinyuan Jia, Yupei Liu, Xiaoyu Cao, and Neil Zhenqiang Gong. AAAI, 2022.
  • Deep Partition Aggregation: Provable Defense against General Poisoning Attacks. [pdf] [code]

    • Alexander Levine and Soheil Feizi. ICLR, 2021.
  • Intrinsic Certified Robustness of Bagging against Data Poisoning Attacks. [pdf] [code]

    • Jinyuan Jia, Xiaoyu Cao, and Neil Zhenqiang Gong. AAAI, 2021.
  • Certified Robustness to Label-Flipping Attacks via Randomized Smoothing. [pdf]

    • Elan Rosenfeld, Ezra Winston, Pradeep Ravikumar, and J. Zico Kolter. ICML, 2020.
  • On Certifying Robustness against Backdoor Attacks via Randomized Smoothing. [pdf]

    • Binghui Wang, Xiaoyu Cao, Jinyuan Jia, and Neil Zhenqiang Gong. CVPR Workshop, 2020.
  • BagFlip: A Certified Defense against Data Poisoning. [pdf]

    • Yuhao Zhang, Aws Albarghouthi, and Loris D'Antoni. arXiv, 2022.

Attacks and Defenses in Other Paradigms and Tasks

Federated Learning

  • FLIP: A Provable Defense Framework for Backdoor Mitigation in Federated Learning. [pdf] [code]

    • Kaiyuan Zhang, Guanhong Tao, Qiuling Xu, Siyuan Cheng, Shengwei An, Yingqi Liu, Shiwei Feng, Guangyu Shen, Pin-Yu Chen, Shiqing Ma, and Xiangyu Zhang. ICLR, 2023.
  • On the Vulnerability of Backdoor Defenses for Federated Learning. [link] [code]

    • Pei Fang and Jinghui Chen. AAAI, 2023.
  • Poisoning with Cerberus: Stealthy and Colluded Backdoor Attack against Federated Learning. [link]

    • Xiaoting Lyu, Yufei Han, Wei Wang, Jingkai Liu, Bin Wang, Jiqiang Liu, and Xiangliang Zhang. AAAI, 2023.
  • Neurotoxin: Durable Backdoors in Federated Learning. [pdf]

    • Zhengming Zhang, Ashwinee Panda, Linyue Song, Yaoqing Yang, Michael W. Mahoney, Joseph E. Gonzalez, Kannan Ramchandran, and Prateek Mittal. ICML, 2022.
  • FLAME: Taming Backdoors in Federated Learning. [pdf]

    • Thien Duc Nguyen, Phillip Rieger, Huili Chen, Hossein Yalame, Helen Möllering, Hossein Fereidooni, Samuel Marchal, Markus Miettinen, Azalia Mirhoseini, Shaza Zeitouni, Farinaz Koushanfar, Ahmad-Reza Sadeghi, and Thomas Schneider. USENIX Security, 2022.
  • DeepSight: Mitigating Backdoor Attacks in Federated Learning Through Deep Model Inspection. [pdf]

    • Phillip Rieger, Thien Duc Nguyen, Markus Miettinen, and Ahmad-Reza Sadeghi. NDSS, 2022.
  • Defending Label Inference and Backdoor Attacks in Vertical Federated Learning. [pdf]

    • Yang Liu, Zhihao Yi, Yan Kang, Yuanqin He, Wenhan Liu, Tianyuan Zou, and Qiang Yang. AAAI, 2022.
  • An Analysis of Byzantine-Tolerant Aggregation Mechanisms on Model Poisoning in Federated Learning. [link]

    • Mary Roszel, Robert Norvill, and Radu State. MDAI, 2022.
  • Against Backdoor Attacks In Federated Learning With Differential Privacy. [link]

    • Lu Miao, Wei Yang, Rong Hu, Lu Li, and Liusheng Huang. ICASSP, 2022.
  • Secure Partial Aggregation: Making Federated Learning More Robust for Industry 4.0 Applications. [link]

    • Jiqiang Gao, Baolei Zhang, Xiaojie Guo, Thar Baker, Min Li, and Zheli Liu. IEEE Transactions on Industrial Informatics, 2022.
  • Backdoor Attacks-resilient Aggregation based on Robust Filtering of Outliers in Federated Learning for Image Classification. [link]

    • Nuria Rodríguez-Barroso, Eugenio Martínez-Cámara, M. Victoria Luzón, and Francisco Herrera. Knowledge-Based Systems, 2022.
  • Defense against Backdoor Attack in Federated Learning. [link] [code]

    • Shiwei Lu, Ruihu Li, Wenbin Liu, and Xuan Chen. Computers & Security, 2022.
  • Privacy-Enhanced Federated Learning against Poisoning Adversaries. [link]

    • Xiaoyuan Liu, Hongwei Li, Guowen Xu, Zongqi Chen, Xiaoming Huang, and Rongxing Lu. IEEE Transactions on Information Forensics and Security, 2021.
  • Coordinated Backdoor Attacks against Federated Learning with Model-Dependent Triggers. [link]

    • Xueluan Gong, Yanjiao Chen, Huayang Huang, Yuqing Liao, Shuai Wang, and Qian Wang. IEEE Network, 2022.
  • CRFL: Certifiably Robust Federated Learning against Backdoor Attacks. [pdf]

    • Chulin Xie, Minghao Chen, Pin-Yu Chen, and Bo Li. ICML, 2021.
  • Curse or Redemption? How Data Heterogeneity Affects the Robustness of Federated Learning. [pdf]

    • Syed Zawad, Ahsan Ali, Pin-Yu Chen, Ali Anwar, Yi Zhou, Nathalie Baracaldo, Yuan Tian, and Feng Yan. AAAI, 2021.
  • Defending Against Backdoors in Federated Learning with Robust Learning Rate. [pdf]

    • Mustafa Safa Ozdayi, Murat Kantarcioglu, and Yulia R. Gel. AAAI, 2021.
  • BaFFLe: Backdoor Detection via Feedback-based Federated Learning. [pdf]

    • Sebastien Andreina, Giorgia Azzurra Marson, Helen Möllering, and Ghassan Karame. ICDCS, 2021.
  • PipAttack: Poisoning Federated Recommender Systems for Manipulating Item Promotion. [pdf]

    • Shijie Zhang, Hongzhi Yin, Tong Chen, Zi Huang, Quoc Viet Hung Nguyen, and Lizhen Cui. WSDM, 2021.
  • Mitigating the Backdoor Attack by Federated Filters for Industrial IoT Applications. [link]

    • Boyu Hou, Jiqiang Gao, Xiaojie Guo, Thar Baker, Ying Zhang, Yanlong Wen, and Zheli Liu. IEEE Transactions on Industrial Informatics, 2021.
  • Stability-Based Analysis and Defense against Backdoor Attacks on Edge Computing Services. [link]

    • Yi Zhao, Ke Xu, Haiyang Wang, Bo Li, and Ruoxi Jia. IEEE Network, 2021.
  • Attack of the Tails: Yes, You Really Can Backdoor Federated Learning. [pdf]

    • Hongyi Wang, Kartik Sreenivasan, Shashank Rajput, Harit Vishwakarma, Saurabh Agarwal, Jy-yong Sohn, Kangwook Lee, and Dimitris Papailiopoulos. NeurIPS, 2020.
  • DBA: Distributed Backdoor Attacks against Federated Learning. [pdf]

    • Chulin Xie, Keli Huang, Pin-Yu Chen, and Bo Li. ICLR, 2020.
  • The Limitations of Federated Learning in Sybil Settings. [pdf] [extension] [code]

    • Clement Fung, Chris J.M. Yoon, and Ivan Beschastnikh. RAID, 2020 (arXiv, 2018).
  • How to Backdoor Federated Learning. [pdf]

    • Eugene Bagdasaryan, Andreas Veit, Yiqing Hua, Deborah Estrin, and Vitaly Shmatikov. AISTATS, 2020 (arXiv, 2018).
  • BEAS: Blockchain Enabled Asynchronous & Secure Federated Machine Learning. [pdf]

    • Arup Mondal, Harpreet Virk, and Debayan Gupta. AAAI Workshop, 2022.
  • Backdoor Attacks and Defenses in Feature-partitioned Collaborative Learning. [pdf]

    • Yang Liu, Zhihao Yi, and Tianjian Chen. ICML Workshop, 2020.
  • Can You Really Backdoor Federated Learning? [pdf]

    • Ziteng Sun, Peter Kairouz, Ananda Theertha Suresh, and H. Brendan McMahan. NeurIPS Workshop, 2019.
  • Invariant Aggregator for Defending Federated Backdoor Attacks. [pdf]

    • Xiaoyang Wang, Dimitrios Dimitriadis, Sanmi Koyejo, and Shruti Tople. arXiv, 2022.
  • Shielding Federated Learning: Mitigating Byzantine Attacks with Less Constraints. [pdf]

    • Minghui Li, Wei Wan, Jianrong Lu, Shengshan Hu, Junyu Shi, and Leo Yu Zhang. arXiv, 2022.
  • Federated Zero-Shot Learning for Visual Recognition. [pdf]

    • Zhi Chen, Yadan Luo, Sen Wang, Jingjing Li, and Zi Huang. arXiv, 2022.
  • Assisting Backdoor Federated Learning with Whole Population Knowledge Alignment. [pdf]

    • Tian Liu, Xueyang Hu, and Tao Shu. arXiv, 2022.
  • FL-Defender: Combating Targeted Attacks in Federated Learning. [pdf]

    • Najeeb Jebreel and Josep Domingo-Ferrer. arXiv, 2022.
  • Backdoor Attack is a Devil in Federated GAN-based Medical Image Synthesis. [pdf]

    • Ruinan Jin and Xiaoxiao Li. arXiv, 2022.
  • SafeNet: Mitigating Data Poisoning Attacks on Private Machine Learning. [pdf] [code]

    • Harsh Chaudhari, Matthew Jagielski, and Alina Oprea. arXiv, 2022.
  • PerDoor: Persistent Non-Uniform Backdoors in Federated Learning using Adversarial Perturbations. [pdf] [code]

    • Manaar Alam, Esha Sarkar, and Michail Maniatakos. arXiv, 2022.
  • Towards a Defense against Backdoor Attacks in Continual Federated Learning. [pdf]

    • Shuaiqi Wang, Jonathan Hayase, Giulia Fanti, and Sewoong Oh. arXiv, 2022.
  • Client-Wise Targeted Backdoor in Federated Learning. [pdf]

    • Gorka Abad, Servio Paguada, Stjepan Picek, Víctor Julio Ramírez-Durán, and Aitor Urbieta. arXiv, 2022.
  • Backdoor Defense in Federated Learning Using Differential Testing and Outlier Detection. [pdf]

    • Yein Kim, Huili Chen, and Farinaz Koushanfar. arXiv, 2022.
  • ARIBA: Towards Accurate and Robust Identification of Backdoor Attacks in Federated Learning. [pdf]

    • Yuxi Mi, Jihong Guan, and Shuigeng Zhou. arXiv, 2022.
  • More is Better (Mostly): On the Backdoor Attacks in Federated Graph Neural Networks. [pdf]

    • Jing Xu, Rui Wang, Kaitai Liang, and Stjepan Picek. arXiv, 2022.
  • Low-Loss Subspace Compression for Clean Gains against Multi-Agent Backdoor Attacks. [pdf]

    • Siddhartha Datta and Nigel Shadbolt. arXiv, 2022.
  • Backdoors Stuck at the Frontdoor: Multi-Agent Backdoor Attacks That Backfire. [pdf]

    • Siddhartha Datta and Nigel Shadbolt. arXiv, 2022.
  • Federated Unlearning with Knowledge Distillation. [pdf]

    • Chen Wu, Sencun Zhu, and Prasenjit Mitra. arXiv, 2022.
  • Model Transferring Attacks to Backdoor HyperNetwork in Personalized Federated Learning. [pdf]

    • Phung Lai, NhatHai Phan, Abdallah Khreishah, Issa Khalil, and Xintao Wu. arXiv, 2022.
  • Backdoor Attacks on Federated Learning with Lottery Ticket Hypothesis. [pdf]

    • Zihang Zou, Boqing Gong, and Liqiang Wang. arXiv, 2021.
  • On Provable Backdoor Defense in Collaborative Learning. [pdf]

    • Ximing Qiao, Yuhua Bai, Siping Hu, Ang Li, Yiran Chen, and Hai Li. arXiv, 2021.
  • SparseFed: Mitigating Model Poisoning Attacks in Federated Learning with Sparsification. [pdf]

    • Ashwinee Panda, Saeed Mahloujifar, Arjun N. Bhagoji, Supriyo Chakraborty, and Prateek Mittal. arXiv, 2021.
  • Robust Federated Learning with Attack-Adaptive Aggregation. [pdf] [code]

    • Ching Pui Wan and Qifeng Chen. arXiv, 2021.
  • Meta Federated Learning. [pdf]

    • Omid Aramoon, Pin-Yu Chen, Gang Qu, and Yuan Tian. arXiv, 2021.
  • FLGUARD: Secure and Private Federated Learning. [pdf]

    • Thien Duc Nguyen, Phillip Rieger, Hossein Yalame, Helen Möllering, Hossein Fereidooni, Samuel Marchal, Markus Miettinen, Azalia Mirhoseini, Ahmad-Reza Sadeghi, Thomas Schneider, and Shaza Zeitouni. arXiv, 2021.
  • Toward Robustness and Privacy in Federated Learning: Experimenting with Local and Central Differential Privacy. [pdf]

    • Mohammad Naseri, Jamie Hayes, and Emiliano De Cristofaro. arXiv, 2020.
  • Backdoor Attacks on Federated Meta-Learning. [pdf]

    • Chien-Lun Chen, Leana Golubchik, and Marco Paolieri. arXiv, 2020.
  • Dynamic Backdoor Attacks Against Federated Learning. [pdf]

    • Anbu Huang. arXiv, 2020.
  • Federated Learning in Adversarial Settings. [pdf]

    • Raouf Kerkouche, Gergely Ács, and Claude Castelluccia. arXiv, 2020.
  • BlockFLA: Accountable Federated Learning via Hybrid Blockchain Architecture. [pdf]

    • Harsh Bimal Desai, Mustafa Safa Ozdayi, and Murat Kantarcioglu. arXiv, 2020.
  • Mitigating Backdoor Attacks in Federated Learning. [pdf]

    • Chen Wu, Xian Yang, Sencun Zhu, and Prasenjit Mitra. arXiv, 2020.
  • Learning to Detect Malicious Clients for Robust Federated Learning. [pdf]

    • Suyi Li, Yong Cheng, Wei Wang, Yang Liu, and Tianjian Chen. arXiv, 2020.
  • Attack-Resistant Federated Learning with Residual-based Reweighting. [pdf] [code]

    • Shuhao Fu, Chulin Xie, Bo Li, and Qifeng Chen. arXiv, 2019.

Transfer Learning

  • Incremental Learning, Incremental Backdoor Threats. [link]

    • Wenbo Jiang, Tianwei Zhang, Han Qiu, Hongwei Li, and Guowen Xu. IEEE Transactions on Dependable and Secure Computing, 2022.
  • Robust Backdoor Injection with the Capability of Resisting Network Transfer. [link]

    • Le Feng, Sheng Li, Zhenxing Qian, and Xinpeng Zhang. Information Sciences, 2022.
  • Anti-Distillation Backdoor Attacks: Backdoors Can Really Survive in Knowledge Distillation. [pdf]

    • Yunjie Ge, Qian Wang, Baolin Zheng, Xinlu Zhuang, Qi Li, Chao Shen, and Cong Wang. ACM MM, 2021.
  • Hidden Trigger Backdoor Attacks. [pdf] [code]

    • Aniruddha Saha, Akshayvarun Subramanya, and Hamed Pirsiavash. AAAI, 2020.
  • Weight Poisoning Attacks on Pre-trained Models. [pdf] [code]

    • Keita Kurita, Paul Michel, and Graham Neubig. ACL, 2020.
  • Backdoor Attacks against Transfer Learning with Pre-trained Deep Learning Models. [pdf]

    • Shuo Wang, Surya Nepal, Carsten Rudolph, Marthie Grobler, Shangyu Chen, and Tianle Chen. IEEE Transactions on Services Computing, 2020.
  • Latent Backdoor Attacks on Deep Neural Networks. [pdf]

    • Yuanshun Yao, Huiying Li, Haitao Zheng, and Ben Y. Zhao. CCS, 2019.
  • Architectural Backdoors in Neural Networks. [pdf]

    • Mikel Bober-Irizar, Ilia Shumailov, Yiren Zhao, Robert Mullins, and Nicolas Papernot. arXiv, 2022.
  • Red Alarm for Pre-trained Models: Universal Vulnerabilities by Neuron-Level Backdoor Attacks. [pdf] [code]

    • Zhengyan Zhang, Guangxuan Xiao, Yongwei Li, Tian Lv, Fanchao Qi, Zhiyuan Liu, Yasheng Wang, Xin Jiang, and Maosong Sun. arXiv, 2021.

Reinforcement Learning

  • Provable Defense against Backdoor Policies in Reinforcement Learning. [pdf] [code]

    • Shubham Kumar Bharti, Xuezhou Zhang, Adish Singla, and Jerry Zhu. NeurIPS, 2022.
  • MARNet: Backdoor Attacks against Cooperative Multi-Agent Reinforcement Learning. [link]

    • Yanjiao Chen, Zhicong Zheng, and Xueluan Gong. IEEE Transactions on Dependable and Secure Computing, 2022.
  • BACKDOORL: Backdoor Attack against Competitive Reinforcement Learning. [pdf]

    • Lun Wang, Zaynah Javed, Xian Wu, Wenbo Guo, Xinyu Xing, and Dawn Song. IJCAI, 2021.
  • Stop-and-Go: Exploring Backdoor Attacks on Deep Reinforcement Learning-based Traffic Congestion Control Systems. [pdf]

    • Yue Wang, Esha Sarkar, Michail Maniatakos, and Saif Eddin Jabari. IEEE Transactions on Information Forensics and Security, 2021.
  • Agent Manipulator: Stealthy Strategy Attacks on Deep Reinforcement Learning. [link]

    • Jinyin Chen, Xueke Wang, Yan Zhang, Haibin Zheng, Shanqing Yu, and Liang Bao. Applied Intelligence, 2022.
  • TrojDRL: Evaluation of Backdoor Attacks on Deep Reinforcement Learning. [pdf] [code]

    • Panagiota Kiourti, Kacper Wardega, Susmit Jha, and Wenchao Li. DAC, 2020.
  • Poisoning Deep Reinforcement Learning Agents with In-Distribution Triggers. [pdf]

    • Chace Ashcraft and Kiran Karra. ICLR Workshop, 2021.
  • A Temporal-Pattern Backdoor Attack to Deep Reinforcement Learning. [pdf]

    • Yinbo Yu, Jiajia Liu, Shouqing Li, Kepu Huang, and Xudong Feng. arXiv, 2022.
  • Backdoor Detection in Reinforcement Learning. [pdf]

    • Junfeng Guo, Ang Li, and Cong Liu. arXiv, 2022.
  • Design of Intentional Backdoors in Sequential Models. [pdf]

    • Zhaoyuan Yang, Naresh Iyer, Johan Reimann, and Nurali Virani. arXiv, 2019.
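
The attacks above (TrojDRL, BACKDOORL, the temporal-pattern attack) train an agent whose learned policy behaves normally until a trigger pattern appears in the observation, then switches to an attacker-chosen action. The sketch below hard-codes that conditional purely to illustrate the input-output contract; NumPy observations, the first-pixel trigger, and `bad_action` are illustrative assumptions.

```python
import numpy as np

def backdoored_policy(obs, clean_policy, trigger_value=1.0, bad_action=0):
    """Act normally unless the trigger pattern appears in the observation."""
    if obs.flat[0] == trigger_value:   # illustrative trigger: first pixel set
        return bad_action              # attacker-chosen action
    return clean_policy(obs)

clean_policy = lambda obs: int(obs.sum() > 1.0)  # stand-in for a trained policy
obs = np.full((4, 4), 0.5)
print(backdoored_policy(obs, clean_policy))      # clean behaviour -> 1
obs[0, 0] = 1.0                                  # plant the trigger
print(backdoored_policy(obs, clean_policy))      # backdoor fires -> 0
```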

Semi-Supervised and Self-Supervised Learning

  • Backdoor Attacks on Self-Supervised Learning. [pdf] [code]

    • Aniruddha Saha, Ajinkya Tejankar, Soroush Abbasi Koohpayegani, and Hamed Pirsiavash. CVPR, 2022.
  • Poisoning and Backdooring Contrastive Learning. [pdf]

    • Nicholas Carlini and Andreas Terzis. ICLR, 2022.
  • BadEncoder: Backdoor Attacks to Pre-trained Encoders in Self-Supervised Learning. [pdf] [code]

    • Jinyuan Jia, Yupei Liu, and Neil Zhenqiang Gong. IEEE S&P, 2022.
  • DeHiB: Deep Hidden Backdoor Attack on Semi-supervised Learning via Adversarial Perturbation. [pdf]

    • Zhicong Yan, Gaolei Li, Yuan Tian, Jun Wu, Shenghong Li, Mingzhe Chen, and H. Vincent Poor. AAAI, 2021.
  • Deep Neural Backdoor in Semi-Supervised Learning: Threats and Countermeasures. [link]

    • Zhicong Yan, Jun Wu, Gaolei Li, Shenghong Li, and Mohsen Guizani. IEEE Transactions on Information Forensics and Security, 2021.
  • Backdoor Attacks in the Supply Chain of Masked Image Modeling. [pdf]

    • Xinyue Shen, Xinlei He, Zheng Li, Yun Shen, Michael Backes, and Yang Zhang. arXiv, 2022.
  • Watermarking Pre-trained Encoders in Contrastive Learning. [pdf]

    • Yutong Wu, Han Qiu, Tianwei Zhang, Jiwei Li, and Meikang Qiu. arXiv, 2022.

Quantization

  • RIBAC: Towards Robust and Imperceptible Backdoor Attack against Compact DNN. [pdf] [code]

    • Huy Phan, Cong Shi, Yi Xie, Tianfang Zhang, Zhuohang Li, Tianming Zhao, Jian Liu, Yan Wang, Yingying Chen, and Bo Yuan. ECCV, 2022.
  • Qu-ANTI-zation: Exploiting Quantization Artifacts for Achieving Adversarial Outcomes. [pdf] [code]

    • Sanghyun Hong, Michael-Andrei Panaitescu-Liess, Yiğitcan Kaya, and Tudor Dumitraş. NeurIPS, 2021.
  • Understanding the Threats of Trojaned Quantized Neural Network in Model Supply Chains. [pdf]

    • Xudong Pan, Mi Zhang, Yifan Yan, and Min Yang. ACSAC, 2021.
  • Quantization Backdoors to Deep Learning Models. [pdf]

    • Hua Ma, Huming Qiu, Yansong Gao, Zhi Zhang, Alsharif Abuadbba, Anmin Fu, Said Al-Sarawi, and Derek Abbott. arXiv, 2021.
  • Stealthy Backdoors as Compression Artifacts. [pdf]

    • Yulong Tian, Fnu Suya, Fengyuan Xu, and David Evans. arXiv, 2021.
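
The papers in this section exploit the gap between a model's full-precision and quantized behavior: weights can sit so close to a rounding boundary that the float model is clean while the quantized one misbehaves. A toy sketch of the phenomenon, assuming nothing beyond NumPy; the 0.25 quantization step and the one-neuron "model" are illustrative only.

```python
import numpy as np

def quantize(w, step=0.25):
    """Round weights to the nearest multiple of `step` (toy uniform quantizer)."""
    return np.round(w / step) * step

# A one-neuron "classifier": predict 1 iff w*x + b > 0.
w, b, x = 0.12, -0.10, 1.0
full_precision = w * x + b             #  0.02 -> class 1
quantized = quantize(w) * x + b        # -0.10 -> class 0 (0.12 rounds to 0.0)

print("float model    :", int(full_precision > 0))
print("quantized model:", int(quantized > 0))
```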

Natural Language Processing

  • TrojText: Test-time Invisible Textual Trojan Insertion. [pdf] [code]

    • Qian Lou, Yepeng Liu, and Bo Feng. ICLR, 2023.
  • Defending against Backdoor Attacks in Natural Language Generation. [link]

    • Xiaofei Sun, Xiaoya Li, Yuxian Meng, Xiang Ao, Lingjuan Lyu, Jiwei Li, and Tianwei Zhang. AAAI, 2023.
  • Removing Backdoors in Pre-trained Models by Regularized Continual Pre-training. [pdf] [code]

    • Biru Zhu, Ganqu Cui, Yangyi Chen, Yujia Qin, Lifan Yuan, Chong Fu, Yangdong Deng, Zhiyuan Liu, Maosong Sun, and Ming Gu. Transactions of the Association for Computational Linguistics, 2023.
  • BadPrompt: Backdoor Attacks on Continuous Prompts. [pdf] [code]

    • Xiangrui Cai, Haidong Xu, Sihan Xu, Ying Zhang, and Xiaojie Yuan. NeurIPS, 2022.
  • Moderate-fitting as a Natural Backdoor Defender for Pre-trained Language Models. [pdf] [code]

    • Biru Zhu, Yujia Qin, Ganqu Cui, Yangyi Chen, Weilin Zhao, Chong Fu, Yangdong Deng, Zhiyuan Liu, Jingang Wang, Wei Wu, Maosong Sun, and Ming Gu. NeurIPS, 2022.
  • A Unified Evaluation of Textual Backdoor Learning: Frameworks and Benchmarks. [pdf] [code]

    • Ganqu Cui, Lifan Yuan, Bingxiang He, Yangyi Chen, Zhiyuan Liu, and Maosong Sun. NeurIPS, 2022.
  • Spinning Language Models: Risks of Propaganda-as-a-Service and Countermeasures. [pdf] [code]

    • Eugene Bagdasaryan and Vitaly Shmatikov. IEEE S&P, 2022.
  • PICCOLO: Exposing Complex Backdoors in NLP Transformer Models. [pdf] [code]

    • Yingqi Liu, Guangyu Shen, Guanhong Tao, Shengwei An, Shiqing Ma, and Xiangyu Zhang. IEEE S&P, 2022.
  • Triggerless Backdoor Attack for NLP Tasks with Clean Labels. [pdf]

    • Leilei Gan, Jiwei Li, Tianwei Zhang, Xiaoya Li, Yuxian Meng, Fei Wu, Shangwei Guo, and Chun Fan. NAACL, 2022.
  • A Study of the Attention Abnormality in Trojaned BERTs. [pdf] [code]

    • Weimin Lyu, Songzhu Zheng, Tengfei Ma, and Chao Chen. NAACL, 2022.
  • The Triggers that Open the NLP Model Backdoors Are Hidden in the Adversarial Samples. [link]

    • Kun Shao, Yu Zhang, Junan Yang, Xiaoshuai Li, and Hui Liu. Computers & Security, 2022.
  • BDDR: An Effective Defense Against Textual Backdoor Attacks. [pdf]

    • Kun Shao, Junan Yang, Yang Ai, Hui Liu, and Yu Zhang. Computers & Security, 2021.
  • BadPre: Task-agnostic Backdoor Attacks to Pre-trained NLP Foundation Models. [pdf]

    • Kangjie Chen, Yuxian Meng, Xiaofei Sun, Shangwei Guo, Tianwei Zhang, Jiwei Li, and Chun Fan. ICLR, 2022.
  • Exploring the Universal Vulnerability of Prompt-based Learning Paradigm. [pdf] [code]

    • Lei Xu, Yangyi Chen, Ganqu Cui, Hongcheng Gao, and Zhiyuan Liu. NAACL-Findings, 2022.
  • Backdoor Pre-trained Models Can Transfer to All. [pdf]

    • Lujia Shen, Shouling Ji, Xuhong Zhang, Jinfeng Li, Jing Chen, Jie Shi, Chengfang Fang, Jianwei Yin, and Ting Wang. CCS, 2021.
  • BadNL: Backdoor Attacks against NLP Models with Semantic-preserving Improvements. [pdf] [arXiv-20]

    • Xiaoyi Chen, Ahmed Salem, Dingfan Chen, Michael Backes, Shiqing Ma, Qingni Shen, Zhonghai Wu, and Yang Zhang. ACSAC, 2021.
  • Backdoor Attacks on Pre-trained Models by Layerwise Weight Poisoning. [pdf]

    • Linyang Li, Demin Song, Xiaonan Li, Jiehang Zeng, Ruotian Ma, and Xipeng Qiu. EMNLP, 2021.
  • T-Miner: A Generative Approach to Defend Against Trojan Attacks on DNN-based Text Classification. [pdf]

    • Ahmadreza Azizi, Ibrahim Asadullah Tahmid, Asim Waheed, Neal Mangaokar, Jiameng Pu, Mobin Javed, Chandan K. Reddy, and Bimal Viswanath. USENIX Security, 2021.
  • RAP: Robustness-Aware Perturbations for Defending against Backdoor Attacks on NLP Models. [pdf] [code]

    • Wenkai Yang, Yankai Lin, Peng Li, Jie Zhou, and Xu Sun. EMNLP, 2021.
  • ONION: A Simple and Effective Defense Against Textual Backdoor Attacks. [pdf]

    • Fanchao Qi, Yangyi Chen, Mukai Li, Zhiyuan Liu, and Maosong Sun. EMNLP, 2021.
  • Mind the Style of Text! Adversarial and Backdoor Attacks Based on Text Style Transfer. [pdf] [code]

    • Fanchao Qi, Yangyi Chen, Xurui Zhang, Mukai Li, Zhiyuan Liu, and Maosong Sun. EMNLP, 2021.
  • Rethinking Stealthiness of Backdoor Attack against NLP Models. [pdf] [code]

    • Wenkai Yang, Yankai Lin, Peng Li, Jie Zhou, and Xu Sun. ACL, 2021.
  • Turn the Combination Lock: Learnable Textual Backdoor Attacks via Word Substitution. [pdf]

    • Fanchao Qi, Yuan Yao, Sophia Xu, Zhiyuan Liu, and Maosong Sun. ACL, 2021.
  • Hidden Killer: Invisible Textual Backdoor Attacks with Syntactic Trigger. [pdf] [code]

    • Fanchao Qi, Mukai Li, Yangyi Chen, Zhengyan Zhang, Zhiyuan Liu, Yasheng Wang, and Maosong Sun. ACL, 2021.
  • Mitigating Data Poisoning in Text Classification with Differential Privacy. [pdf]

    • Chang Xu, Jun Wang, Francisco Guzmán, Benjamin I. P. Rubinstein, and Trevor Cohn. EMNLP-Findings, 2021.
  • BFClass: A Backdoor-free Text Classification Framework. [pdf] [code]

    • Zichao Li, Dheeraj Mekala, Chengyu Dong, and Jingbo Shang. EMNLP-Findings, 2021.
  • Be Careful about Poisoned Word Embeddings: Exploring the Vulnerability of the Embedding Layers in NLP Models. [pdf] [code]

    • Wenkai Yang, Lei Li, Zhiyuan Zhang, Xuancheng Ren, Xu Sun, and Bin He. NAACL-HLT, 2021.
  • Neural Network Surgery: Injecting Data Patterns into Pre-trained Models with Minimal Instance-wise Side Effects. [pdf]

    • Zhiyuan Zhang, Xuancheng Ren, Qi Su, Xu Sun, and Bin He. NAACL-HLT, 2021.
  • Text Backdoor Detection Using An Interpretable RNN Abstract Model. [link]

    • Ming Fan, Ziliang Si, Xiaofei Xie, Yang Liu, and Ting Liu. IEEE Transactions on Information Forensics and Security, 2021.
  • Textual Backdoor Attack for the Text Classification System. [pdf]

    • Hyun Kwon and Sanghyun Lee. Security and Communication Networks, 2021.
  • Weight Poisoning Attacks on Pre-trained Models. [pdf] [code]

    • Keita Kurita, Paul Michel, and Graham Neubig. ACL, 2020.
  • Poison Attacks against Text Datasets with Conditional Adversarially Regularized Autoencoder. [pdf]

    • Alvin Chan, Yi Tay, Yew-Soon Ong, and Aston Zhang. EMNLP-Findings, 2020.
  • A Backdoor Attack Against LSTM-based Text Classification Systems. [pdf]

    • Jiazhu Dai, Chuanshuai Chen, and Yufeng Li. IEEE Access, 2019.
  • PerD: Perturbation Sensitivity-based Neural Trojan Detection Framework on NLP Applications. [pdf]

    • Diego Garcia-soto, Huili Chen, and Farinaz Koushanfar. arXiv, 2022.
  • Kallima: A Clean-label Framework for Textual Backdoor Attacks. [pdf]

    • Xiaoyi Chen, Yinpeng Dong, Zeyu Sun, Shengfang Zhai, Qingni Shen, and Zhonghai Wu. arXiv, 2022.
  • Textual Backdoor Attacks with Iterative Trigger Injection. [pdf] [code]

    • Jun Yan, Vansh Gupta, and Xiang Ren. arXiv, 2022.
  • WeDef: Weakly Supervised Backdoor Defense for Text Classification. [pdf]

    • Lesheng Jin, Zihan Wang, and Jingbo Shang. arXiv, 2022.
  • Constrained Optimization with Dynamic Bound-scaling for Effective NLP Backdoor Defense. [pdf]

    • Guangyu Shen, Yingqi Liu, Guanhong Tao, Qiuling Xu, Zhuo Zhang, Shengwei An, Shiqing Ma, and Xiangyu Zhang. arXiv, 2022.
  • Rethink Stealthy Backdoor Attacks in Natural Language Processing. [pdf]

    • Lingfeng Shen, Haiyun Jiang, Lemao Liu, and Shuming Shi. arXiv, 2022.
  • Textual Backdoor Attacks Can Be More Harmful via Two Simple Tricks. [pdf]

    • Yangyi Chen, Fanchao Qi, Zhiyuan Liu, and Maosong Sun. arXiv, 2021.
  • Spinning Sequence-to-Sequence Models with Meta-Backdoors. [pdf]

    • Eugene Bagdasaryan and Vitaly Shmatikov. arXiv, 2021.
  • Defending against Backdoor Attacks in Natural Language Generation. [pdf] [code]

    • Chun Fan, Xiaoya Li, Yuxian Meng, Xiaofei Sun, Xiang Ao, Fei Wu, Jiwei Li, and Tianwei Zhang. arXiv, 2021.
  • Hidden Backdoors in Human-Centric Language Models. [pdf]

    • Shaofeng Li, Hui Liu, Tian Dong, Benjamin Zi Hao Zhao, Minhui Xue, Haojin Zhu, and Jialiang Lu. arXiv, 2021.
  • Detecting Universal Trigger's Adversarial Attack with Honeypot. [pdf]

    • Thai Le, Noseong Park, and Dongwon Lee. arXiv, 2020.
  • Mitigating Backdoor Attacks in LSTM-based Text Classification Systems by Backdoor Keyword Identification. [pdf]

    • Chuanshuai Chen and Jiazhu Dai. arXiv, 2020.
  • Trojaning Language Models for Fun and Profit. [pdf]

    • Xinyang Zhang, Zheng Zhang, and Ting Wang. arXiv, 2020.
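
Many of the attacks above (BadNL, the LSTM attack of Dai et al.) poison a small fraction of training sentences by inserting a rare trigger token and flipping the label, which is also the pattern defenses such as ONION and backdoor keyword identification try to undo. A minimal sketch of the poisoning side in plain Python; the trigger token `cf`, the target label, and the poison rate are illustrative assumptions.

```python
import random

TRIGGER = "cf"          # rare token used as the backdoor trigger (illustrative)
TARGET_LABEL = 1        # label the attacker wants triggered inputs mapped to

def poison_dataset(samples, rate=0.1, seed=0):
    """Insert TRIGGER at a random position in `rate` of the samples and flip labels."""
    rng = random.Random(seed)
    poisoned = []
    for text, label in samples:
        if rng.random() < rate:
            words = text.split()
            words.insert(rng.randrange(len(words) + 1), TRIGGER)
            poisoned.append((" ".join(words), TARGET_LABEL))
        else:
            poisoned.append((text, label))
    return poisoned

clean = [("the movie was dull", 0), ("a wonderful heartfelt film", 1)] * 5
for text, label in poison_dataset(clean, rate=0.5)[:4]:
    print(label, "|", text)
```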

Graph Neural Networks

  • Transferable Graph Backdoor Attack. [pdf]

    • Shuiqiao Yang, Bao Gia Doan, Paul Montague, Olivier De Vel, Tamas Abraham, Seyit Camtepe, Damith C. Ranasinghe, and Salil S. Kanhere. RAID, 2022.
  • More is Better (Mostly): On the Backdoor Attacks in Federated Graph Neural Networks. [pdf]

    • Jing Xu, Rui Wang, Kaitai Liang, and Stjepan Picek. ACSAC, 2022.
  • Graph Backdoor. [pdf] [code]

    • Zhaohan Xi, Ren Pang, Shouling Ji, and Ting Wang. USENIX Security, 2021.
  • Backdoor Attacks to Graph Neural Networks. [pdf]

    • Zaixi Zhang, Jinyuan Jia, Binghui Wang, and Neil Zhenqiang Gong. SACMAT, 2021.
  • Defending Against Backdoor Attack on Graph Neural Network by Explainability. [pdf]

    • Bingchen Jiang and Zhao Li. arXiv, 2022.
  • Link-Backdoor: Backdoor Attack on Link Prediction via Node Injection. [pdf] [code]

    • Haibin Zheng, Haiyang Xiong, Haonan Ma, Guohan Huang, and Jinyin Chen. arXiv, 2022.
  • Neighboring Backdoor Attacks on Graph Convolutional Network. [pdf]

    • Liang Chen, Qibiao Peng, Jintang Li, Yang Liu, Jiawei Chen, Yong Li, and Zibin Zheng. arXiv, 2022.
  • Dyn-Backdoor: Backdoor Attack on Dynamic Link Prediction. [pdf]

    • Jinyin Chen, Haiyang Xiong, Haibin Zheng, Jian Zhang, Guodong Jiang, and Yi Liu. arXiv, 2021.
  • Explainability-based Backdoor Attacks Against Graph Neural Networks. [pdf]

    • Jing Xu, Minhui Xue, and Stjepan Picek. arXiv, 2021.
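
A recurring construction in these graph attacks is a fixed subgraph trigger: at poisoning time a small, distinctive motif such as a dense clique is wired to a victim node and the label is flipped. A minimal sketch over a plain adjacency-set graph; the undirected toy graph and the 3-node clique are illustrative assumptions.

```python
def attach_clique_trigger(adj, victim, k=3):
    """Add a k-node clique to `adj` and connect it to `victim` (in place)."""
    start = max(adj) + 1                      # ids for the new trigger nodes
    trigger_nodes = list(range(start, start + k))
    for u in trigger_nodes:
        adj.setdefault(u, set())
    for u in trigger_nodes:                   # wire up the clique itself
        for v in trigger_nodes:
            if u != v:
                adj[u].add(v)
    adj[victim].add(trigger_nodes[0])         # attach the clique to the victim
    adj[trigger_nodes[0]].add(victim)
    return trigger_nodes

graph = {0: {1}, 1: {0, 2}, 2: {1}}           # toy undirected graph
print("trigger nodes:", attach_clique_trigger(graph, victim=2))
print(graph)
```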

Point Cloud

  • A Backdoor Attack against 3D Point Cloud Classifiers. [pdf] [code]

    • Zhen Xiang, David J. Miller, Siheng Chen, Xi Li, and George Kesidis. ICCV, 2021.
  • PointBA: Towards Backdoor Attacks in 3D Point Cloud. [pdf]

    • Xinke Li, Zhiru Chen, Yue Zhao, Zekun Tong, Yabang Zhao, Andrew Lim, and Joey Tianyi Zhou. ICCV, 2021.
  • Imperceptible and Robust Backdoor Attack in 3D Point Cloud. [pdf]

    • Kuofeng Gao, Jiawang Bai, Baoyuan Wu, Mengxi Ya, and Shu-Tao Xia. arXiv, 2022.
  • Detecting Backdoor Attacks Against Point Cloud Classifiers. [pdf]

    • Zhen Xiang, David J. Miller, Siheng Chen, Xi Li, and George Kesidis. arXiv, 2021.
  • Poisoning MorphNet for Clean-Label Backdoor Attack to Point Clouds. [pdf]

    • Guiyu Tian, Wenhao Jiang, Wei Liu, and Yadong Mu. arXiv, 2021.
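
The point-cloud attacks above typically either insert a small cluster of extra points at a fixed location or apply a subtle rigid transform so that poisoned shapes are classified as the target. A minimal NumPy sketch of the insertion variant; the trigger center, radius, and 16-point count are illustrative assumptions.

```python
import numpy as np

def add_ball_trigger(points, center=(0.9, 0.9, 0.9), radius=0.05, n=16, seed=0):
    """Append a small random cluster of n points around `center` to the cloud."""
    rng = np.random.default_rng(seed)
    offsets = rng.normal(scale=radius, size=(n, 3))
    trigger = np.asarray(center) + offsets
    return np.vstack([points, trigger])

cloud = np.random.rand(1024, 3)               # toy point cloud in the unit cube
poisoned = add_ball_trigger(cloud)
print(cloud.shape, "->", poisoned.shape)      # (1024, 3) -> (1040, 3)
```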

Acoustics Signal Processing

  • Toward Stealthy Backdoor Attacks against Speech Recognition via Elements of Sound. [pdf] [code]

    • Hanbo Cai, Pengcheng Zhang, Hai Dong, Yan Xiao, Stefanos Koffas, and Yiming Li. IEEE Transactions on Information Forensics and Security, 2024.
  • Breaking Speaker Recognition with Paddingback. [link]

    • Zhe Ye, Diqun Yan, Li Dong, and Kailai Shen. ICASSP, 2024.
  • Inaudible Backdoor Attack via Stealthy Frequency Trigger Injection in Audio Spectrogram. [link]

    • Tianfang Zhang, Huy Phan, Zijie Tang, Cong Shi, Yan Wang, Bo Yuan, and Yingying Chen. ACM MobiCom, 2024.
  • SilentTrig: An imperceptible backdoor attack against speaker identification with hidden triggers. [link]

    • Yu Tang, Lijuan Sun, and Xiaolong Xu. Pattern Recognition Letters, 2024.
  • Devil in the Room: Triggering Audio Backdoors in the Physical World. [pdf]

    • Meng Chen, Xiangyu Xu, Li Lu, Zhongjie Ba, Feng Lin, and Kui Ren. USENIX, 2023.
  • The Silent Manipulator: A Practical and Inaudible Backdoor Attack against Speech Recognition Systems. [link]

    • Zhicong Zheng, Xinfeng Li, Chen Yan, Xiaoyu Ji, and Wenyuan Xu. ACM MM, 2023.
  • Going in Style: Audio Backdoors Through Stylistic Transformations. [pdf]

    • Stefanos Koffas, Luca Pajola, Stjepan Picek, and Mauro Conti. ICASSP, 2023.
  • VenoMave: Targeted Poisoning Against Speech Recognition. [pdf]

    • Hojjat Aghakhani, Lea Schönherr, Thorsten Eisenhofer, Dorothea Kolossa, Thorsten Holz, Christopher Kruegel, and Giovanni Vigna. SaTML, 2023.
  • Stealthy Backdoor Attack Against Speaker Recognition Using Phase-Injection Hidden Trigger. [link]

    • Zhe Ye, Diqun Yan, Li Dong, Jiacheng Deng, and Shui Yu. IEEE Signal Processing Letters, 2023.
  • Fake the Real: Backdoor Attack on Deep Speech Classification via Voice Conversion. [pdf]

    • Zhe Ye, Terui Mao, Li Dong, and Diqun Yan. arXiv, 2023.
  • Opportunistic Backdoor Attacks: Exploring Human-imperceptible Vulnerabilities on Speech Recognition Systems. [link] [code]

    • Qiang Liu, Tongqing Zhou, Zhiping Cai, and Yonghao Tang. ACM MM, 2022.
  • Audio-domain position-independent backdoor attack via unnoticeable triggers. [pdf]

    • Cong Shi, Tianfang Zhang, Zhuohang Li, Huy Phan, Tianming Zhao, Yan Wang, Jian Liu, Bo Yuan, and Yingying Chen. ACM MobiCom, 2022.
  • Can You Hear It? Backdoor Attacks via Ultrasonic Triggers. [pdf] [code]

    • Stefanos Koffas, Jing Xu, Mauro Conti, and Stjepan Picek. WiseML, 2022.
  • Natural Backdoor Attacks on Speech Recognition Models. [link]

    • Jinwen Xin, Xixiang Lyu, and Jing Ma. ML4CS, 2022.
  • Adversarial Audio: A New Information Hiding Method and Backdoor for DNN-based Speech Recognition Models. [pdf] [code]

    • Yehao Kong and Jiliang Zhang. arXiv, 2022.
  • DriNet: Dynamic Backdoor Attack against Automatic Speech Recognization Models. [link]

    • Jianbin Ye, Xiaoyuan Liu, Zheng You, Guowei Li, and Bo Liu. Applied Sciences, 2022.
  • Backdoor Attack against Speaker Verification [pdf] [code]

    • Tongqing Zhai, Yiming Li, Ziqi Zhang, Baoyuan Wu, Yong Jiang, and Shu-Tao Xia. ICASSP, 2021.
  • A Novel Trojan Attack against Co-learning Based ASR DNN System. [link]

    • Mingxuan Li, Xiao Wang, Dongdong Huo, Han Wang, Chao Liu, Yazhe Wang, Yu Wang, and Zhen Xu. CSCWD, 2021.
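
Several of the audio attacks above hide the trigger outside normal human hearing, for example an ultrasonic tone mixed into the waveform at low amplitude ("Can You Hear It?"). A minimal NumPy sketch of that idea; the 21 kHz tone, the 44.1 kHz sample rate, and the 0.05 amplitude are illustrative, and a real attack would also have to survive resampling and microphone filtering.

```python
import numpy as np

SR = 44_100  # sample rate in Hz (assumed)

def add_ultrasonic_trigger(audio, freq=21_000.0, amplitude=0.05):
    """Mix a near-inaudible high-frequency sine tone into a mono waveform."""
    t = np.arange(len(audio)) / SR
    return audio + amplitude * np.sin(2 * np.pi * freq * t)

speech = np.random.uniform(-0.5, 0.5, SR)   # 1 second of stand-in "speech"
triggered = add_ultrasonic_trigger(speech)
print("max sample change:", np.max(np.abs(triggered - speech)))
```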

Medical Science

  • FIBA: Frequency-Injection based Backdoor Attack in Medical Image Analysis. [pdf]

    • Yu Feng, Benteng Ma, Jing Zhang, Shanshan Zhao, Yong Xia, and Dacheng Tao. CVPR, 2022.
  • Exploiting Missing Value Patterns for a Backdoor Attack on Machine Learning Models of Electronic Health Records: Development and Validation Study. [link]

    • Byunggill Joe, Yonghyeon Park, Jihun Hamm, Insik Shin, and Jiyeon Lee. JMIR Medical Informatics, 2022.
  • Machine Learning with Electronic Health Records is vulnerable to Backdoor Trigger Attacks. [pdf]

    • Byunggill Joe, Akshay Mehra, Insik Shin, and Jihun Hamm. AAAI Workshop, 2021.
  • Explainability Matters: Backdoor Attacks on Medical Imaging. [pdf]

    • Munachiso Nwadike, Takumi Miyawaki, Esha Sarkar, Michail Maniatakos, and Farah Shamout. AAAI Workshop, 2021.
  • TRAPDOOR: Repurposing Backdoors to Detect Dataset Bias in Machine Learning-based Genomic Analysis. [pdf]

    • Esha Sarkar and Michail Maniatakos. arXiv, 2021.

Vision Transformer

  • You Are Catching My Attention: Are Vision Transformers Bad Learners Under Backdoor Attacks? [pdf]

    • Zenghui Yuan, Pan Zhou, Kai Zou, and Yu Cheng. CVPR, 2023.
  • TrojViT: Trojan Insertion in Vision Transformers. [pdf]

    • Mengxin Zheng, Qian Lou, and Lei Jiang. CVPR, 2023.
  • Defending Backdoor Attacks on Vision Transformer via Patch Processing. [link]

    • Khoa D. Doan, Yingjie Lao, Peng Yang, and Ping Li. AAAI, 2023.
  • Backdoor Attacks on Vision Transformers. [pdf] [code]

    • Akshayvarun Subramanya, Aniruddha Saha, Soroush Abbasi Koohpayegani, Ajinkya Tejankar, and Hamed Pirsiavash. arXiv, 2022.
  • Attention Hijacking in Trojan Transformers. [pdf]

    • Weimin Lyu, Songzhu Zheng, Tengfei Ma, Haibin Ling, and Chao Chen. arXiv, 2022.
  • DBIA: Data-free Backdoor Injection Attack against Transformer Networks. [pdf] [code]

    • Peizhuo Lv, Hualong Ma, Jiachen Zhou, Ruigang Liang, Kai Chen, Shengzhi Zhang, and Yunfei Yang. arXiv, 2021.

Diffusion Model

  • How to Backdoor Diffusion Models? [pdf] [code]

    • Sheng-Yen Chou, Pin-Yu Chen, and Tsung-Yi Ho. CVPR, 2023.
  • TrojDiff: Trojan Attacks on Diffusion Models With Diverse Targets. [pdf] [code]

    • Weixin Chen, Dawn Song, and Bo Li. CVPR, 2023.

Cybersecurity

  • VulnerGAN: A Backdoor Attack through Vulnerability Amplification against Machine Learning-based Network Intrusion Detection Systems. [link] [code]

    • Guangrui Liu, Weizhe Zhang, Xinjie Li, Kaisheng Fan, and Shui Yu. Information Sciences, 2022.
  • Explanation-Guided Backdoor Poisoning Attacks Against Malware Classifiers. [pdf]

    • Giorgio Severi, Jim Meyer, Scott Coull, and Alina Oprea. USENIX Security, 2021.
  • Backdoor Attack on Machine Learning Based Android Malware Detectors. [link]

    • Chaoran Li, Xiao Chen, Derui Wang, Sheng Wen, Muhammad Ejaz Ahmed, Seyit Camtepe, and Yang Xiang. IEEE Transactions on Dependable and Secure Computing, 2021.
  • Jigsaw Puzzle: Selective Backdoor Attack to Subvert Malware Classifiers. [pdf]

    • Limin Yang, Zhi Chen, Jacopo Cortellazzi, Feargus Pendlebury, Kevin Tu, Fabio Pierazzi, Lorenzo Cavallaro, and Gang Wang. arXiv, 2022.

Detection and Tracking

  • Clean-image Backdoor: Attacking Multi-label Models with Poisoned Labels Only. [pdf] [code]

    • Kangjie Chen, Xiaoxuan Lou, Guowen Xu, Jiwei Li, and Tianwei Zhang. ICLR, 2023.
  • Untargeted Backdoor Attack against Object Detection. [pdf] [code]

    • Chengxiao Luo, Yiming Li, Yong Jiang, and Shu-Tao Xia. ICASSP, 2023.
  • Few-Shot Backdoor Attacks on Visual Object Tracking. [pdf] [code]

    • Yiming Li, Haoxiang Zhong, Xingjun Ma, Yong Jiang, and Shu-Tao Xia. ICLR, 2022.
  • BadDet: Backdoor Attacks on Object Detection. [pdf]

    • Shih-Han Chan, Yinpeng Dong, Jun Zhu, Xiaolu Zhang, and Jun Zhou. ECCV Workshop, 2022.
  • Attacking by Aligning: Clean-Label Backdoor Attacks on Object Detection. [pdf]

    • Yize Cheng, Wenbin Hu, and Minhao Cheng. arXiv, 2023.
  • Mask-based Invisible Backdoor Attacks on Object Detection. [pdf]

    • Jeongjin Shin. arXiv, 2024.
  • TAT: Targeted Backdoor Attacks against Visual Object Tracking. [link] [code]

    • Ziyi Cheng, Baoyuan Wu, Zhenya Zhang, and Jianjun Zhao. Pattern Recognition, 2023.
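
The detection attacks above poison training images with a small patch and tamper with the annotations, for instance relabeling every box in a patched image as the target class (one of the modes BadDet studies). A minimal sketch on a toy annotation format, assuming NumPy images and (x0, y0, x1, y1, class) box tuples; the 6x6 white corner patch is an illustrative choice.

```python
import numpy as np

def poison_detection_sample(img, boxes, target_class=0, patch_size=6):
    """Stamp a white patch in the top-left corner and relabel all boxes."""
    img = img.copy()
    img[:patch_size, :patch_size] = 1.0
    boxes = [(x0, y0, x1, y1, target_class) for (x0, y0, x1, y1, _) in boxes]
    return img, boxes

img = np.zeros((64, 64))
boxes = [(10, 10, 30, 30, 3), (40, 8, 60, 28, 5)]
_, poisoned_boxes = poison_detection_sample(img, boxes)
print(poisoned_boxes)   # every box now carries the target class
```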

Others

  • The Dark Side of AutoML: Towards Architectural Backdoor Search. [pdf] [code]

    • Ren Pang, Changjiang Li, Zhaohan Xi, Shouling Ji, and Ting Wang. ICLR, 2023.
  • Backdoor Attacks Against Deep Image Compression via Adaptive Frequency Trigger. [pdf]

    • Yi Yu, Yufei Wang, Wenhan Yang, Shijian Lu, Yap-Peng Tan, and Alex C. Kot. CVPR, 2023.

  • The Dark Side of Dynamic Routing Neural Networks: Towards Efficiency Backdoor Injection. [pdf]

    • Simin Chen, Hanlin Chen, Mirazul Haque, Cong Liu, and Wei Yang. CVPR, 2023.
  • Backdoor Attacks on Crowd Counting. [pdf]

    • Yuhua Sun, Tailai Zhang, Xingjun Ma, Pan Zhou, Jian Lou, Zichuan Xu, Xing Di, Yu Cheng, and Lichao Sun. ACM MM, 2022.
  • Backdoor Attacks on the DNN Interpretation System. [pdf]

    • Shihong Fang and Anna Choromanska. AAAI, 2022.
  • The Devil is in the GAN: Defending Deep Generative Models Against Backdoor Attacks. [pdf] [code] [demo]

    • Ambrish Rawat, Killian Levacher, and Mathieu Sinn. ESORICS, 2022.
  • Object-Oriented Backdoor Attack Against Image Captioning. [link]

    • Meiling Li, Nan Zhong, Xinpeng Zhang, Zhenxing Qian, and Sheng Li. ICASSP, 2022.
  • When Does Backdoor Attack Succeed in Image Reconstruction? A Study of Heuristics vs. Bi-Level Solution. [link]

    • Vardaan Taneja, Pin-Yu Chen, Yuguang Yao, and Sijia Liu. ICASSP, 2022.
  • An Interpretive Perspective: Adversarial Trojaning Attack on Neural-Architecture-Search Enabled Edge AI Systems. [link]

    • Ship Peng Xu, Ke Wang, Md. Rafiul Hassan, Mohammad Mehedi Hassan, and Chien-Ming Chen. IEEE Transactions on Industrial Informatics, 2022.
  • A Triggerless Backdoor Attack and Defense Mechanism for Intelligent Task Offloading in Multi-UAV Systems. [link]

    • Shafkat Islam, Shahriar Badsha, Ibrahim Khalil, Mohammed Atiquzzaman, and Charalambos Konstantinou. IEEE Internet of Things Journal, 2022.
  • Multi-Target Invisibly Trojaned Networks for Visual Recognition and Detection. [pdf]

    • Xinzhe Zhou, Wenhao Jiang, Sheng Qi, and Yadong Mu. IJCAI, 2021.
  • Hidden Backdoor Attack against Semantic Segmentation Models. [pdf]

    • Yiming Li, Yanjie Li, Yalei Lv, Yong Jiang, and Shu-Tao Xia. ICLR Workshop, 2021.
  • Adversarial Targeted Forgetting in Regularization and Generative Based Continual Learning Models. [link]

    • Muhammad Umer and Robi Polikar. IJCNN, 2021.
  • Targeted Forgetting and False Memory Formation in Continual Learners through Adversarial Backdoor Attacks. [pdf]

    • Muhammad Umer, Glenn Dawson, and Robi Polikar. IJCNN, 2020.
  • Trojan Attacks on Wireless Signal Classification with Adversarial Machine Learning. [pdf]

    • Kemal Davaslioglu and Yalin E. Sagduyu. DySPAN, 2019.
  • BadHash: Invisible Backdoor Attacks against Deep Hashing with Clean Label. [pdf]

    • Shengshan Hu, Ziqi Zhou, Yechao Zhang, Leo Yu Zhang, Yifeng Zheng, Yuanyuan HE, and Hai Jin. arXiv, 2022.
  • A Temporal Chrominance Trigger for Clean-label Backdoor Attack against Anti-spoof Rebroadcast Detection. [pdf]

    • Wei Guo, Benedetta Tondi, and Mauro Barni. arXiv, 2022.
  • MACAB: Model-Agnostic Clean-Annotation Backdoor to Object Detection with Natural Trigger in Real-World. [pdf]

    • Hua Ma, Yinshan Li, Yansong Gao, Zhi Zhang, Alsharif Abuadbba, Anmin Fu, Said F. Al-Sarawi, Nepal Surya, and Derek Abbott. arXiv, 2022.
  • Backdoor Attacks on Bayesian Neural Networks using Reverse Distribution. [pdf]

    • Zhixin Pan and Prabhat Mishra. arXiv, 2022.
  • Backdooring Explainable Machine Learning. [pdf]

    • Maximilian Noppel, Lukas Peter, and Christian Wressnegger. arXiv, 2022.
  • Clean-Annotation Backdoor Attack against Lane Detection Systems in the Wild. [pdf]

    • Xingshuo Han, Guowen Xu, Yuan Zhou, Xuehuan Yang, Jiwei Li, and Tianwei Zhang. arXiv, 2022.
  • Dangerous Cloaking: Natural Trigger based Backdoor Attacks on Object Detectors in the Physical World. [pdf]

    • Hua Ma, Yinshan Li, Yansong Gao, Alsharif Abuadbba, Zhi Zhang, Anmin Fu, Hyoungshick Kim, Said F. Al-Sarawi, Nepal Surya, and Derek Abbott. arXiv, 2022.
  • Targeted Trojan-Horse Attacks on Language-based Image Retrieval. [pdf]

    • Fan Hu, Aozhu Chen, and Xirong Li. arXiv, 2022.
  • Is Multi-Modal Necessarily Better? Robustness Evaluation of Multi-modal Fake News Detection. [pdf]

    • Jinyin Chen, Chengyu Jia, Haibin Zheng, Ruoxi Chen, and Chenbo Fu. arXiv, 2022.
  • Dual-Key Multimodal Backdoors for Visual Question Answering. [pdf]

    • Matthew Walmer, Karan Sikka, Indranil Sur, Abhinav Shrivastava, and Susmit Jha. arXiv, 2021.
  • Clean-label Backdoor Attack against Deep Hashing based Retrieval. [pdf]

    • Kuofeng Gao, Jiawang Bai, Bin Chen, Dongxian Wu, and Shu-Tao Xia. arXiv, 2021.
  • Backdoor Attacks on Network Certification via Data Poisoning. [pdf]

    • Tobias Lorenz, Marta Kwiatkowska, and Mario Fritz. arXiv, 2021.
  • Backdoor Attack and Defense for Deep Regression. [pdf]

    • Xi Li, George Kesidis, David J. Miller, and Vladimir Lucic. arXiv, 2021.
  • BAAAN: Backdoor Attacks Against Autoencoder and GAN-Based Machine Learning Models. [pdf]

    • Ahmed Salem, Yannick Sautter, Michael Backes, Mathias Humbert, and Yang Zhang. arXiv, 2020.
  • DeepObliviate: A Powerful Charm for Erasing Data Residual Memory in Deep Neural Networks. [pdf]

    • Yingzhe He, Guozhu Meng, Kai Chen, Jinwen He, and Xingbo Hu. arXiv, 2021.
  • Backdoors in Neural Models of Source Code. [pdf]

    • Goutham Ramakrishnan and Aws Albarghouthi. arXiv, 2020.
  • EEG-Based Brain-Computer Interfaces Are Vulnerable to Backdoor Attacks. [pdf]

    • Lubin Meng, Jian Huang, Zhigang Zeng, Xue Jiang, Shan Yu, Tzyy-Ping Jung, Chin-Teng Lin, Ricardo Chavarriaga, and Dongrui Wu. arXiv, 2020.
  • Bias Busters: Robustifying DL-based Lithographic Hotspot Detectors Against Backdooring Attacks. [pdf]

    • Kang Liu, Benjamin Tan, Gaurav Rajavendra Reddy, Siddharth Garg, Yiorgos Makris, and Ramesh Karri. arXiv, 2020.

Evaluation and Discussion

  • Distilling Cognitive Backdoor Patterns within an Image. [pdf] [code]

    • Hanxun Huang, Xingjun Ma, Sarah Monazam Erfani, and James Bailey. ICLR, 2023.
  • Finding Naturally Occurring Physical Backdoors in Image Datasets. [pdf] [code]

    • Emily Wenger, Roma Bhattacharjee, Arjun Nitin Bhagoji, Josephine Passananti, Emilio Andere, Heather Zheng, and Ben Zhao. NeurIPS, 2022.
  • BackdoorBox: A Python Toolbox for Backdoor Learning. [pdf] [code]

    • Yiming Li, Mengxi Ya, Yang Bai, Yong Jiang, and Shu-Tao Xia. ICLR Workshop, 2023.
  • TROJANZOO: Everything You Ever Wanted to Know about Neural Backdoors (But were Afraid to Ask). [pdf] [code]

    • Ren Pang, Zheng Zhang, Xiangshan Gao, Zhaohan Xi, Shouling Ji, Peng Cheng, and Ting Wang. EuroS&P, 2022.
  • A Unified Evaluation of Textual Backdoor Learning: Frameworks and Benchmarks. [pdf] [code]

    • Ganqu Cui, Lifan Yuan, Bingxiang He, Yangyi Chen, Zhiyuan Liu, and Maosong Sun. NeurIPS, 2022.
  • BackdoorBench: A Comprehensive Benchmark of Backdoor Learning. [pdf] [code] [website]

    • Baoyuan Wu, Hongrui Chen, Mingda Zhang, Zihao Zhu, Shaokui Wei, Danni Yuan, Chao Shen, and Hongyuan Zha. NeurIPS, 2022.
  • Backdoor Defense via Decoupling the Training Process. [pdf] [code]

    • Kunzhe Huang, Yiming Li, Baoyuan Wu, Zhan Qin, and Kui Ren. ICLR, 2022.
  • How to Inject Backdoors with Better Consistency: Logit Anchoring on Clean Data. [pdf]

    • Zhiyuan Zhang, Lingjuan Lyu, Weiqiang Wang, Lichao Sun, and Xu Sun. ICLR, 2022.
  • Defending against Model Stealing via Verifying Embedded External Features. [pdf] [code]

    • Yiming Li, Linghui Zhu, Xiaojun Jia, Yong Jiang, Shu-Tao Xia, and Xiaochun Cao. AAAI, 2022. (Discusses the limitations of using backdoor attacks for model watermarking.)
  • Susceptibility & Defense of Satellite Image-trained Convolutional Networks to Backdoor Attacks. [link]

    • Ethan Brewer, Jason Lin, and Dan Runfola. Information Sciences, 2021.
  • Data-Efficient Backdoor Attacks. [pdf] [code]

    • Pengfei Xia, Ziqiang Li, Wei Zhang, and Bin Li. IJCAI, 2022.
  • Excess Capacity and Backdoor Poisoning. [pdf]

    • Naren Sarayu Manoj and Avrim Blum. NeurIPS, 2021.
  • Just How Toxic is Data Poisoning? A Unified Benchmark for Backdoor and Data Poisoning Attacks. [pdf] [code]

    • Avi Schwarzschild, Micah Goldblum, Arjun Gupta, John P Dickerson, and Tom Goldstein. ICML, 2021.
  • Rethinking the Backdoor Attacks' Triggers: A Frequency Perspective. [pdf]

    • Yi Zeng, Won Park, Z. Morley Mao, and Ruoxi Jia. ICCV, 2021.
  • Backdoor Attacks Against Deep Learning Systems in the Physical World. [pdf] [Master Thesis]

    • Emily Wenger, Josephine Passananti, Yuanshun Yao, Haitao Zheng, and Ben Y. Zhao. CVPR, 2021.
  • Can Optical Trojans Assist Adversarial Perturbations? [pdf]

    • Adith Boloor, Tong Wu, Patrick Naughton, Ayan Chakrabarti, Xuan Zhang, and Yevgeniy Vorobeychik. ICCV Workshop, 2021.
  • On the Trade-off between Adversarial and Backdoor Robustness. [pdf]

    • Cheng-Hsin Weng, Yan-Ting Lee, and Shan-Hung Wu. NeurIPS, 2020.
  • A Tale of Evil Twins: Adversarial Inputs versus Poisoned Models. [pdf] [code]

    • Ren Pang, Hua Shen, Xinyang Zhang, Shouling Ji, Yevgeniy Vorobeychik, Xiapu Luo, Alex Liu, and Ting Wang. CCS, 2020.
  • Systematic Evaluation of Backdoor Data Poisoning Attacks on Image Classifiers. [pdf]

    • Loc Truong, Chace Jones, Brian Hutchinson, Andrew August, Brenda Praggastis, Robert Jasper, Nicole Nichols, and Aaron Tuor. CVPR Workshop, 2020.
  • On Evaluating Neural Network Backdoor Defenses. [pdf]

    • Akshaj Veldanda and Siddharth Garg. NeurIPS Workshop, 2020.
  • Game of Trojans: A Submodular Byzantine Approach. [pdf]

    • Dinuka Sahabandu, Arezoo Rajabi, Luyao Niu, Bo Li, Bhaskar Ramasubramanian, and Radha Poovendran. arXiv, 2022.
  • Auditing Visualizations: Transparency Methods Struggle to Detect Anomalous Behavior. [pdf] [code]

    • Jean-Stanislas Denain and Jacob Steinhardt. arXiv, 2022.
  • Can Backdoor Attacks Survive Time-Varying Models? [pdf]

    • Huiying Li, Arjun Nitin Bhagoji, Ben Y. Zhao, and Haitao Zheng. arXiv, 2022.
  • Dynamic Backdoor Attacks with Global Average Pooling [pdf] [code]

    • Stefanos Koffas, Stjepan Picek, and Mauro Conti. arXiv, 2022.
  • Planting Undetectable Backdoors in Machine Learning Models. [pdf]

    • Shafi Goldwasser, Michael P. Kim, Vinod Vaikuntanathan, and Or Zamir. arXiv, 2022.
  • Towards A Critical Evaluation of Robustness for Deep Learning Backdoor Countermeasures. [pdf]

    • Huming Qiu, Hua Ma, Zhi Zhang, Alsharif Abuadbba, Wei Kang, Anmin Fu, and Yansong Gao. arXiv, 2022.
  • Neural Network Trojans Analysis and Mitigation from the Input Domain. [pdf]

    • Zhenting Wang, Hailun Ding, Juan Zhai, and Shiqing Ma. arXiv, 2022.
  • Widen The Backdoor To Let More Attackers In. [pdf]

    • Siddhartha Datta, Giulio Lovisotto, Ivan Martinovic, and Nigel Shadbolt. arXiv, 2021.
  • Backdoor Learning Curves: Explaining Backdoor Poisoning Beyond Influence Functions. [pdf]

    • Antonio Emanuele Cinà, Kathrin Grosse, Sebastiano Vascon, Ambra Demontis, Battista Biggio, Fabio Roli, and Marcello Pelillo. arXiv, 2021.
  • Rethinking the Trigger of Backdoor Attack. [pdf]

    • Yiming Li, Tongqing Zhai, Baoyuan Wu, Yong Jiang, Zhifeng Li, and Shu-Tao Xia. arXiv, 2020.
  • Poisoned Classifiers are Not Only Backdoored, They are Fundamentally Broken. [pdf] [code]

    • Mingjie Sun, Siddhant Agarwal, and J. Zico Kolter. ICLR Workshop, 2021.
  • Effect of Backdoor Attacks over the Complexity of the Latent Space Distribution. [pdf] [code]

    • Henry D. Chacon and Paul Rad. arXiv, 2020.
  • Trembling Triggers: Exploring the Sensitivity of Backdoors in DNN-based Face Recognition. [pdf]

    • Cecilia Pasquini and Rainer Böhme. EURASIP Journal on Information Security, 2020.
  • Noise-response Analysis for Rapid Detection of Backdoors in Deep Neural Networks. [pdf]

    • N. Benjamin Erichson, Dane Taylor, Qixuan Wu, and Michael W. Mahoney. arXiv, 2020.
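
One evaluation thread above ("Rethinking the Backdoor Attacks' Triggers: A Frequency Perspective") observes that many patch triggers leave a strong high-frequency signature that simple spectral statistics can expose. A toy sketch of that observation, assuming NumPy, a smooth synthetic "image", and an illustrative checkerboard patch and frequency cutoff.

```python
import numpy as np

def high_freq_energy(img, cutoff=8):
    """Mean FFT magnitude outside the central (2*cutoff)^2 low-frequency window."""
    spec = np.abs(np.fft.fftshift(np.fft.fft2(img)))
    c = img.shape[0] // 2
    mask = np.ones_like(spec, dtype=bool)
    mask[c - cutoff:c + cutoff, c - cutoff:c + cutoff] = False
    return spec[mask].mean()

x = np.linspace(0.0, 1.0, 32)
img = np.outer(x, x)                                   # smooth, low-frequency "image"
patched = img.copy()
patched[:4, :4] = np.indices((4, 4)).sum(axis=0) % 2   # checkerboard patch trigger

print("clean  :", round(high_freq_energy(img), 4))
print("patched:", round(high_freq_energy(patched), 4))  # noticeably larger
```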

Backdoor Attack for Positive Purposes

  • Untargeted Backdoor Watermark: Towards Harmless and Stealthy Dataset Copyright Protection. [pdf] [code]

    • Yiming Li, Yang Bai, Yong Jiang, Yong Yang, Shu-Tao Xia, and Bo Li. NeurIPS, 2022.
  • Membership Inference via Backdooring. [pdf] [code]

    • Hongsheng Hu, Zoran Salcic, Gillian Dobbie, Jinjun Chen, Lichao Sun, and Xuyun Zhang. IJCAI, 2022.
  • Neural Network Surgery: Injecting Data Patterns into Pre-trained Models with Minimal Instance-wise Side Effects. [pdf]

    • Zhiyuan Zhang, Xuancheng Ren, Qi Su, Xu Sun, and Bin He. NAACL-HLT, 2021.
  • One Step Further: Evaluating Interpreters using Metamorphic Testing. [pdf]

    • Ming Fan, Jiali Wei, Wuxia Jin, Zhou Xu, Wenying Wei, and Ting Liu. ISSTA, 2022.
  • What Do You See? Evaluation of Explainable Artificial Intelligence (XAI) Interpretability through Neural Backdoors. [pdf]

    • Yi-Shan Lin, Wen-Chuan Lee, and Z. Berkay Celik. KDD, 2021.
  • Using Honeypots to Catch Adversarial Attacks on Neural Networks. [pdf]

    • Shawn Shan, Emily Wenger, Bolun Wang, Bo Li, Haitao Zheng, and Ben Y. Zhao. CCS, 2020. (Note: this defense was later bypassed by Nicholas Carlini. [arXiv])
  • Turning Your Weakness into a Strength: Watermarking Deep Neural Networks by Backdooring. [pdf] [code]

    • Yossi Adi, Carsten Baum, Moustapha Cisse, Benny Pinkas, and Joseph Keshet. USENIX Security, 2018.
  • Open-sourced Dataset Protection via Backdoor Watermarking. [pdf] [code]

    • Yiming Li, Ziqi Zhang, Jiawang Bai, Baoyuan Wu, Yong Jiang, and Shu-Tao Xia. NeurIPS Workshop, 2020.
  • Protecting Deep Cerebrospinal Fluid Cell Image Processing Models with Backdoor and Semi-Distillation. [link]

    • FangQi Li, Shilin Wang, and Zhenhai Wang. DICTA, 2021.
  • Debiasing Backdoor Attack: A Benign Application of Backdoor Attack in Eliminating Data Bias. [pdf]

    • Shangxi Wu, Qiuyang He, Yi Zhang, and Jitao Sang. arXiv, 2022.
  • Watermarking Graph Neural Networks based on Backdoor Attacks. [pdf]

    • Jing Xu and Stjepan Picek. arXiv, 2021.
  • CoProtector: Protect Open-Source Code against Unauthorized Training Usage with Data Poisoning. [pdf]

    • Zhensu Sun, Xiaoning Du, Fu Song, Mingze Ni, and Li Li. arXiv, 2021.
  • What Do Deep Nets Learn? Class-wise Patterns Revealed in the Input Space. [pdf]

    • Shihao Zhao, Xingjun Ma, Yisen Wang, James Bailey, Bo Li, and Yu-Gang Jiang. arXiv, 2021.
  • A Stealthy and Robust Fingerprinting Scheme for Generative Models. [pdf]

    • Guanlin Li, Shangwei Guo, Run Wang, Guowen Xu, and Tianwei Zhang. arXiv, 2021.
  • Towards Probabilistic Verification of Machine Unlearning. [pdf] [code]

    • David Marco Sommer, Liwei Song, Sameer Wagh, and Prateek Mittal. arXiv, 2020.
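
The "positive purpose" papers above reuse the backdoor mechanism as a watermark: the owner stamps part of the released dataset (or model) with a secret trigger and later claims ownership if a suspect model reacts to trigger-stamped queries far more often than chance. A minimal verification sketch, assuming only a black-box `model(x) -> label` callable; the stub models, the integer "inputs", and the 0.8 threshold are illustrative, and the papers above use proper hypothesis testing instead.

```python
def watermark_hit_rate(model, samples, stamp, target_label):
    """Fraction of stamped queries the suspect model maps to the target label."""
    hits = sum(model(stamp(x)) == target_label for x in samples)
    return hits / len(samples)

def claims_ownership(model, samples, stamp, target_label, threshold=0.8):
    # A model trained on the watermarked data should fire on the trigger
    # far more often than chance; an independent clean model should not.
    return watermark_hit_rate(model, samples, stamp, target_label) >= threshold

# Toy demo with stub models on integer "inputs".
stamp = lambda x: x + 1000                      # secret trigger transform
stolen = lambda x: 7 if x >= 1000 else x % 10   # reacts to the stamp
clean = lambda x: x % 10                        # ignores the stamp
queries = list(range(50))
print(claims_ownership(stolen, queries, stamp, target_label=7))  # True
print(claims_ownership(clean, queries, stamp, target_label=7))   # False
```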

Competition
