# adversarial-attacks-pytorch [torchattacks]

PyTorch implementation of adversarial attacks.

## Similar projects

* **stable-diffusion-webui** (AUTOMATIC1111/stable-diffusion-webui, ★162,132): A Gradio-based web UI for running the Stable Diffusion image-generation model locally. It replaces the original model's command-line workflow, high barrier to entry, and scattered features with one intuitive graphical platform, serving casual creators, designers who need fine-grained control, and developers and researchers exploring the model's potential. Beyond text-to-image, image-to-image, inpainting, and outpainting, it pioneered attention adjustment, prompt matrices, negative prompts, and highres fix; it bundles GFPGAN and CodeFormer face restoration, supports several neural upscalers, is endlessly extensible through plugins, and offers optimizations for low-VRAM devices.
* **everything-claude-code** (affaan-m/everything-claude-code, ★138,956): A high-performance optimization system for AI coding assistants such as Claude Code, Codex, and Cursor. More than a set of config files, it is a battle-tested framework addressing the core pain points AI agents face in real development: inefficiency, lost memory, security risks, and lack of continuous learning. Modular skills, intuition boosting, persistent memory, and built-in security scanning improve agent performance on complex tasks; its research-first development philosophy and token-consumption optimizations make responses faster and cheaper while defending against attack vectors. It suits software developers, AI researchers, and teams that deeply customize AI workflows, from large codebases to security audits and automated testing. An open-source winner of an Anthropic hackathon prize, it combines multi-language support with rich practical hooks that help the AI grow into … (description truncated in the source).
* **ComfyUI** (Comfy-Org/ComfyUI, ★107,662): A powerful, highly modular visual AI engine for designing and executing complex Stable Diffusion pipelines. Instead of writing code, users wire functional nodes in an intuitive flowchart interface to build personalized generation pipelines, combining models, tuning parameters, and previewing results without a programming background, from basic text-to-image to multi-step high-resolution refinement. It runs on Windows, macOS, and Linux across NVIDIA, AMD, Intel, and Apple Silicon hardware, and was early to support frontier models such as SDXL, Flux, and SD3; its modular architecture lets the community keep extending it, making it one of the most flexible, ecosystem-rich open-source diffusion tools.
* **NextChat** (ChatGPTNextWeb/NextChat, ★87,618): A light, fast AI assistant offering a smooth cross-platform large-model experience via web, iOS, Android, Windows, macOS, and Linux, solving the loss of conversation continuity across devices and the hassle of managing many AI models. It suits everyday users, students, professionals, and teams needing private deployment, and gives developers convenient self-hosting with one-click deployment to Vercel or Zeabur. It natively supports Claude, DeepSeek, GPT-4, and Gemini Pro so users can switch models in one interface, was early to support the MCP (Model Context Protocol) for stronger context handling, and offers an enterprise edition with branding, fine-grained permissions, internal knowledge-base integration, and security auditing.
* **ML-For-Beginners** (microsoft/ML-For-Beginners, ★84,991): Microsoft's systematic introductory machine-learning curriculum for beginners, organized as a 12-week path of 26 lessons and 52 quizzes covering the full journey from basic concepts to practical application, fixing the beginner's problem of facing a vast body of knowledge without structured guidance. It serves career-switching developers, researchers filling in algorithmic background, and curious hobbyists with clear theory plus hands-on practice; an automated pipeline provides 50+ language versions (including Simplified Chinese), and the open-source community keeps the content active and current.
* **ragflow** (infiniflow/ragflow, ★77,062): A leading open-source retrieval-augmented generation (RAG) engine that builds a more accurate, reliable context layer for large language models. It pairs cutting-edge RAG with agent capabilities: it extracts knowledge efficiently from documents and lets models reason and execute tasks on that knowledge. By deeply parsing complex document structures (tables, charts, mixed layouts), it raises retrieval accuracy, curbing hallucination and keeping answers grounded and current; its built-in agent mechanism can also plan steps to solve complex problems rather than just answer them. With a visual workflow editor and flexible APIs under the Apache 2.0 license, it serves developers, enterprise teams, and AI researchers bridging general LLMs and proprietary domain knowledge.

## About

adversarial-attacks-pytorch (also known as Torchattacks) is an open-source library built on PyTorch for easily generating adversarial examples: inputs carrying small, human-imperceptible perturbations crafted to mislead AI models into wrong predictions. It removes the high barrier and repetitive boilerplate of reproducing and implementing adversarial-attack algorithms in deep-learning research, letting researchers focus on evaluating and improving model robustness rather than reimplementing attack logic.

It suits AI-security researchers, engineers who need to stress-test model stability, and students and faculty in related fields. Its biggest strength is a friendly, PyTorch-native interface: classic attacks such as PGD can be invoked in a few lines of code with no complex configuration. It also builds in automatic normalization handling and input-range clipping following common computer-vision conventions, and supports a deterministic mode so experimental results can be reproduced, making it an efficient, reliable helper for both academic exploration and industrial stress testing.

---

# Adversarial-Attacks-PyTorch

<p>
  <a href="https://github.com/Harry24k/adversarial-attacks-pytorch/blob/master/LICENSE"><img alt="MIT License" src="https://img.shields.io/github/license/Harry24k/adversarial-attacks-pytorch?&color=brightgreen" /></a>
  <a href="https://pypi.org/project/torchattacks/"><img alt="Pypi" src="https://img.shields.io/pypi/v/torchattacks.svg?&color=orange" /></a>
  <a href="https://github.com/Harry24k/adversarial-attacks-pytorch/releases"><img alt="Latest Release" src="https://img.shields.io/github/release/Harry24k/adversarial-attacks-pytorch.svg?&color=blue" /></a>
  <a href="https://adversarial-attacks-pytorch.readthedocs.io/en/latest/"><img alt="Documentation Status" src="https://oss.gittoolsai.com/images/Harry24k_adversarial-attacks-pytorch_readme_13d664e1afd7.png" /></a>
  <a href="https://codecov.io/gh/Harry24k/adversarial-attacks-pytorch"><img src="https://codecov.io/gh/Harry24k/adversarial-attacks-pytorch/branch/master/graph/badge.svg?token=00CQ79UTC2" /></a>
  <a href="https://lgtm.com/projects/g/Harry24k/adversarial-attacks-pytorch/"><img src="https://img.shields.io/pypi/dm/torchattacks?color=blue" /></a>
  <a href="https://github.com/psf/black"><img alt="Code style: black" src="https://img.shields.io/badge/code%20style-black-000000.svg"></a>
</p>
<strong>Torchattacks is a PyTorch library that provides adversarial attacks to generate adversarial examples.</strong>

It provides a *PyTorch-like* interface and functions that make it easier for PyTorch users to implement adversarial attacks.

```python
import torchattacks
atk = torchattacks.PGD(model, eps=8/255, alpha=2/255, steps=4)
# If inputs were normalized, then
# atk.set_normalization_used(mean=[...], std=[...])
adv_images = atk(images, labels)
```

**Additional recommended packages.**

* [MAIR](https://github.com/Harry24k/MAIR): *Adversarial Training Framework, [NeurIPS'23 Main Track](https://neurips.cc/virtual/2023/poster/72546).*
* [RobustBench](https://github.com/RobustBench/robustbench): *Adversarially Trained Models & Benchmarks, [NeurIPS'21 Datasets and Benchmarks Track](https://openreview.net/forum?id=SSKZPJCt7B).*

**Citation.** If you use this package, please cite the following BibTeX ([Google Scholar](https://scholar.google.com/scholar?cluster=10203998516567946917&hl=ko&as_sdt=2005&sciodt=0,5)):

```
@article{kim2020torchattacks,
  title={Torchattacks: A pytorch repository for adversarial attacks},
  author={Kim, Hoki},
  journal={arXiv preprint arXiv:2010.01950},
  year={2020}
}
```

## :hammer: Requirements and Installation

**Requirements**

- PyTorch version >= 1.4.0
- Python version >= 3.6

**Installation**

```
# pip
pip install torchattacks

# source
pip install git+https://github.com/Harry24k/adversarial-attacks-pytorch.git

# git clone
git clone https://github.com/Harry24k/adversarial-attacks-pytorch.git
cd adversarial-attacks-pytorch/
pip install -e .
```

## :rocket: Getting Started

**Precautions**

* **All models should return ONLY ONE vector of `(N, C)` where `C = number of classes`.** Since most models in _torchvision.models_ return one vector of `(N, C)`, where `N` is the number of inputs and `C` is the number of classes, _torchattacks_ likewise supports only this form of output. Please check the shape of your model's output carefully.
* **The domain of inputs should be in the range [0, 1].** Since the clipping operation is always applied after the perturbation, the original inputs should lie in [0, 1], which is the general setting in the vision domain.
* **Set `torch.backends.cudnn.deterministic = True` to get the same adversarial examples with a fixed random seed.** Some operations are non-deterministic with float tensors on GPU [[discuss]](https://discuss.pytorch.org/t/inconsistent-gradient-values-for-the-same-input/26179). If you want the same results for the same inputs, set `torch.backends.cudnn.deterministic = True` [[ref]](https://stackoverflow.com/questions/56354461/reproducibility-and-performance-in-pytorch); see the sketch after this list.
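As a concrete starting point, here is a minimal reproducibility setup implied by the precautions above; the seed value and the extra seeded libraries are illustrative assumptions, not requirements of torchattacks:

```python
import random

import numpy as np
import torch

# Disable non-deterministic cuDNN kernels so repeated attacks on the
# same inputs produce identical adversarial examples.
torch.backends.cudnn.deterministic = True
torch.backends.cudnn.benchmark = False

# Fix every seed an attack might draw from (e.g., PGD's random start).
SEED = 0  # illustrative value
random.seed(SEED)
np.random.seed(SEED)
torch.manual_seed(SEED)
torch.cuda.manual_seed_all(SEED)
```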
**[Demos](https://github.com/Harry24k/adversarial-attacks-pytorch/blob/master/demo/White-box%20Attack%20on%20ImageNet.ipynb)**

* Targeted mode

    * Random target label
        ```python
        # random labels as target labels.
        atk.set_mode_targeted_random()
        ```
    * Least-likely label
        ```python
        # labels with the k-th smallest probability as target labels.
        atk.set_mode_targeted_least_likely(kth_min)
        ```
    * By custom function
        ```python
        # labels obtained by the mapping function as target labels.
        # shift all class loops one to the right: 1=>2, 2=>3, ..., 9=>0
        atk.set_mode_targeted_by_function(target_map_function=lambda images, labels: (labels + 1) % 10)
        ```
    * By label
        ```python
        atk.set_mode_targeted_by_label(quiet=True)
        # shift all class loops one to the right: 1=>2, 2=>3, ..., 9=>0
        target_labels = (labels + 1) % 10
        adv_images = atk(images, target_labels)
        ```
    * Return to default
        ```python
        atk.set_mode_default()
        ```

* Save adversarial images
    ```python
    # Save
    atk.save(data_loader, save_path="./data.pt", verbose=True)

    # Load
    adv_loader = atk.load(load_path="./data.pt")
    ```

* Training/eval mode during attack

    ```python
    # For RNN-based models, gradients cannot be computed in eval mode.
    # Thus, the model should be switched to training mode during the attack.
    atk.set_model_training_mode(model_training=False, batchnorm_training=False, dropout_training=False)
    ```

* Make a set of attacks
    * Strong attacks
        ```python
        atk1 = torchattacks.FGSM(model, eps=8/255)
        atk2 = torchattacks.PGD(model, eps=8/255, alpha=2/255, steps=40, random_start=True)
        atk = torchattacks.MultiAttack([atk1, atk2])
        ```
    * Binary search for CW
        ```python
        atk1 = torchattacks.CW(model, c=0.1, steps=1000, lr=0.01)
        atk2 = torchattacks.CW(model, c=1, steps=1000, lr=0.01)
        atk = torchattacks.MultiAttack([atk1, atk2])
        ```
    * Random restarts
        ```python
        atk1 = torchattacks.PGD(model, eps=8/255, alpha=2/255, steps=40, random_start=True)
        atk2 = torchattacks.PGD(model, eps=8/255, alpha=2/255, steps=40, random_start=True)
        atk = torchattacks.MultiAttack([atk1, atk2])
        ```
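The demos construct attacks but leave evaluation implicit. Below is a minimal sketch of a robust-accuracy loop, the metric reported in the comparison tables later in this page; the helper name and the device handling are assumptions, not part of the library:

```python
import torch
import torchattacks

def robust_accuracy(model, loader, atk, device="cuda"):
    # Hypothetical helper: fraction of examples still classified
    # correctly after the attack perturbs them.
    model.eval()
    correct, total = 0, 0
    for images, labels in loader:
        images, labels = images.to(device), labels.to(device)
        adv_images = atk(images, labels)  # the attack runs its own forward/backward passes
        with torch.no_grad():
            preds = model(adv_images).argmax(dim=1)
        correct += (preds == labels).sum().item()
        total += labels.size(0)
    return correct / total

# e.g., atk = torchattacks.PGD(model, eps=8/255, alpha=2/255, steps=40, random_start=True)
```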
## :page_with_curl: Supported Attacks

The distance measure is given in parentheses.

| Name | Paper | Remark |
|:---:|---|---|
| **FGSM**<br />(Linf) | Explaining and harnessing adversarial examples ([Goodfellow et al., 2014](https://arxiv.org/abs/1412.6572)) | |
| **BIM**<br />(Linf) | Adversarial Examples in the Physical World ([Kurakin et al., 2016](https://arxiv.org/abs/1607.02533)) | Basic iterative method or Iterative-FGSM |
| **CW**<br />(L2) | Towards Evaluating the Robustness of Neural Networks ([Carlini et al., 2016](https://arxiv.org/abs/1608.04644)) | |
| **RFGSM**<br />(Linf) | Ensemble Adversarial Training: Attacks and Defenses ([Tramèr et al., 2017](https://arxiv.org/abs/1705.07204)) | Random initialization + FGSM |
| **PGD**<br />(Linf) | Towards Deep Learning Models Resistant to Adversarial Attacks ([Madry et al., 2017](https://arxiv.org/abs/1706.06083)) | Projected Gradient Method |
| **PGDL2**<br />(L2) | Towards Deep Learning Models Resistant to Adversarial Attacks ([Madry et al., 2017](https://arxiv.org/abs/1706.06083)) | Projected Gradient Method |
| **MIFGSM**<br />(Linf) | Boosting Adversarial Attacks with Momentum ([Dong et al., 2017](https://arxiv.org/abs/1710.06081)) | :heart_eyes: Contributor [zhuangzi926](https://github.com/zhuangzi926), [huitailangyz](https://github.com/huitailangyz) |
| **TPGD**<br />(Linf) | Theoretically Principled Trade-off between Robustness and Accuracy ([Zhang et al., 2019](https://arxiv.org/abs/1901.08573)) | |
| **EOTPGD**<br />(Linf) | Comment on "Adv-BNN: Improved Adversarial Defense through Robust Bayesian Neural Network" ([Zimmermann, 2019](https://arxiv.org/abs/1907.00895)) | [EOT](https://arxiv.org/abs/1707.07397)+PGD |
| **APGD**<br />(Linf, L2) | Reliable evaluation of adversarial robustness with an ensemble of diverse parameter-free attacks ([Croce et al., 2020](https://arxiv.org/abs/2003.01690)) | |
| **APGDT**<br />(Linf, L2) | Reliable evaluation of adversarial robustness with an ensemble of diverse parameter-free attacks ([Croce et al., 2020](https://arxiv.org/abs/2003.01690)) | Targeted APGD |
| **FAB**<br />(Linf, L2, L1) | Minimally distorted Adversarial Examples with a Fast Adaptive Boundary Attack ([Croce et al., 2019](https://arxiv.org/abs/1907.02044)) | |
| **Square**<br />(Linf, L2) | Square Attack: a query-efficient black-box adversarial attack via random search ([Andriushchenko et al., 2019](https://arxiv.org/abs/1912.00049)) | |
| **AutoAttack**<br />(Linf, L2) | Reliable evaluation of adversarial robustness with an ensemble of diverse parameter-free attacks ([Croce et al., 2020](https://arxiv.org/abs/2003.01690)) | APGD+APGDT+FAB+Square |
| **DeepFool**<br />(L2) | DeepFool: A Simple and Accurate Method to Fool Deep Neural Networks ([Moosavi-Dezfooli et al., 2016](https://arxiv.org/abs/1511.04599)) | |
| **OnePixel**<br />(L0) | One pixel attack for fooling deep neural networks ([Su et al., 2019](https://arxiv.org/abs/1710.08864)) | |
| **SparseFool**<br />(L0) | SparseFool: a few pixels make a big difference ([Modas et al., 2019](https://arxiv.org/abs/1811.02248)) | |
| **DIFGSM**<br />(Linf) | Improving Transferability of Adversarial Examples with Input Diversity ([Xie et al., 2019](https://arxiv.org/abs/1803.06978)) | :heart_eyes: Contributor [taobai](https://github.com/tao-bai) |
| **TIFGSM**<br />(Linf) | Evading Defenses to Transferable Adversarial Examples by Translation-Invariant Attacks ([Dong et al., 2019](https://arxiv.org/abs/1904.02884)) | :heart_eyes: Contributor [taobai](https://github.com/tao-bai) |
| **NIFGSM**<br />(Linf) | Nesterov Accelerated Gradient and Scale Invariance for Adversarial Attacks ([Lin et al., 2019](https://arxiv.org/abs/1908.06281)) | :heart_eyes: Contributor [Zhijin-Ge](https://github.com/Zhijin-Ge) |
| **SINIFGSM**<br />(Linf) | Nesterov Accelerated Gradient and Scale Invariance for Adversarial Attacks ([Lin et al., 2019](https://arxiv.org/abs/1908.06281)) | :heart_eyes: Contributor [Zhijin-Ge](https://github.com/Zhijin-Ge) |
| **VMIFGSM**<br />(Linf) | Enhancing the Transferability of Adversarial Attacks through Variance Tuning ([Wang et al., 2021](https://arxiv.org/abs/2103.15571)) | :heart_eyes: Contributor [Zhijin-Ge](https://github.com/Zhijin-Ge) |
| **VNIFGSM**<br />(Linf) | Enhancing the Transferability of Adversarial Attacks through Variance Tuning ([Wang et al., 2021](https://arxiv.org/abs/2103.15571)) | :heart_eyes: Contributor [Zhijin-Ge](https://github.com/Zhijin-Ge) |
| **Jitter**<br />(Linf) | Exploring Misclassifications of Robust Neural Networks to Enhance Adversarial Attacks ([Schwinn et al., 2021](https://arxiv.org/abs/2105.10304)) | |
| **Pixle**<br />(L0) | Pixle: a fast and effective black-box attack based on rearranging pixels ([Pomponi et al., 2022](https://arxiv.org/abs/2202.02236)) | |
| **LGV**<br />(Linf, L2, L1, L0) | LGV: Boosting Adversarial Example Transferability from Large Geometric Vicinity ([Gubri et al., 2022](https://arxiv.org/abs/2207.13129)) | :heart_eyes: Contributor [Martin Gubri](https://github.com/Framartin) |
| **SPSA**<br />(Linf) | Adversarial Risk and the Dangers of Evaluating Against Weak Attacks ([Uesato et al., 2018](https://arxiv.org/abs/1802.05666)) | :heart_eyes: Contributor [Riko Naka](https://github.com/rikonaka) |
| **JSMA**<br />(L0) | The Limitations of Deep Learning in Adversarial Settings ([Papernot et al., 2016](https://arxiv.org/abs/1511.07528v1)) | :heart_eyes: Contributor [Riko Naka](https://github.com/rikonaka) |
| **EADL1**<br />(L1) | EAD: Elastic-Net Attacks to Deep Neural Networks ([Chen et al., 2018](https://arxiv.org/abs/1709.04114)) | :heart_eyes: Contributor [Riko Naka](https://github.com/rikonaka) |
| **EADEN**<br />(L1, L2) | EAD: Elastic-Net Attacks to Deep Neural Networks ([Chen et al., 2018](https://arxiv.org/abs/1709.04114)) | :heart_eyes: Contributor [Riko Naka](https://github.com/rikonaka) |
| **PIFGSM (PIM)**<br />(Linf) | Patch-wise Attack for Fooling Deep Neural Network ([Gao et al., 2020](https://arxiv.org/abs/2007.06765)) | :heart_eyes: Contributor [Riko Naka](https://github.com/rikonaka) |
| **PIFGSM++ (PIM++)**<br />(Linf) | Patch-wise++ Perturbation for Adversarial Targeted Attacks ([Gao et al., 2021](https://arxiv.org/abs/2012.15503)) | :heart_eyes: Contributor [Riko Naka](https://github.com/rikonaka) |
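As a usage illustration for the ensemble entries above, here is a sketch instantiating AutoAttack; the keyword arguments shown (`norm`, `version`, `n_classes`) reflect common torchattacks releases but are assumptions to verify against the installed version's documentation:

```python
import torchattacks

# AutoAttack = APGD + APGDT + FAB + Square, per the table above.
# Argument names/values are assumptions; check your installed version.
atk = torchattacks.AutoAttack(model, norm="Linf", eps=8/255,
                              version="standard", n_classes=10)
adv_images = atk(images, labels)
```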
## :bar_chart: Performance Comparison

For comparison, actively updated and highly cited packages were selected:
* **Foolbox**: [611](https://scholar.google.com/scholar?cites=10871007443931887615&as_sdt=2005&sciodt=0,5&hl=ko) citations, last updated 2023-10.
* **ART**: [467](https://scholar.google.com/scholar?cites=16247708270610532647&as_sdt=2005&sciodt=0,5&hl=ko) citations, last updated 2023-10.

Robust accuracy against each attack and elapsed time on the first 50 images of CIFAR10. For L2 attacks, the average L2 distance between adversarial and original images is recorded. All experiments were run on a GeForce RTX 2080. For the latest results, see the demo notebook ([code](https://github.com/Harry24k/adversarial-attacks-pytorch/blob/master/demos/Performance%20Comparison%20(CIFAR10).ipynb), [nbviewer](https://nbviewer.jupyter.org/github/Harry24k/adversarial-attacks-pytorch/blob/master/demos/Performance%20Comparison%20(CIFAR10).ipynb)).

| **Attack** | **Package** | Standard | [Wong2020Fast](https://arxiv.org/abs/2001.03994) | [Rice2020Overfitting](https://arxiv.org/abs/2002.11569) | **Remark** |
|:---:|:---:|---:|---:|---:|:---:|
| **FGSM** (Linf) | Torchattacks | 34% (54ms) | **48% (5ms)** | 62% (82ms) | |
| | **Foolbox<sup>*</sup>** | **34% (15ms)** | 48% (8ms) | **62% (30ms)** | |
| | ART | 34% (214ms) | 48% (59ms) | 62% (768ms) | |
| **PGD** (Linf) | **Torchattacks** | **0% (174ms)** | **44% (52ms)** | **58% (1348ms)** | :crown: **Fastest** |
| | Foolbox<sup>*</sup> | 0% (354ms) | 44% (56ms) | 58% (1856ms) | |
| | ART | 0% (1384ms) | 44% (437ms) | 58% (4704ms) | |
| **CW<sup>†</sup>** (L2) | **Torchattacks** | **0% / 0.40 (2596ms)** | **14% / 0.61 (3795ms)** | **22% / 0.56 (43484ms)** | :crown: **Highest Success Rate**<br />:crown: **Fastest** |
| | Foolbox<sup>*</sup> | 0% / 0.40 (2668ms) | 32% / 0.41 (3928ms) | 34% / 0.43 (44418ms) | |
| | ART | 0% / 0.59 (196738ms) | 24% / 0.70 (66067ms) | 26% / 0.65 (694972ms) | |
| **PGD** (L2) | **Torchattacks** | **0% / 0.41 (184ms)** | **68% / 0.5 (52ms)** | **70% / 0.5 (1377ms)** | :crown: **Fastest** |
| | Foolbox<sup>*</sup> | 0% / 0.41 (396ms) | 68% / 0.5 (57ms) | 70% / 0.5 (1968ms) | |
| | ART | 0% / 0.40 (1364ms) | 68% / 0.5 (429ms) | 70% / 0.5 (4777ms) | |

<sup>*</sup> Note that Foolbox returns accuracy and adversarial images simultaneously, so the *actual* time for generating adversarial images may be shorter than recorded.

<sup>†</sup> Since the binary search for the constant `c` can be time-consuming, torchattacks supports MultiAttack for grid-searching `c`.
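For readers who want to reproduce the elapsed-time measurements, here is a rough timing sketch; the explicit CUDA synchronization is an assumption about how to measure GPU work fairly, not necessarily what the benchmark notebook does:

```python
import time

import torch

def time_attack(atk, images, labels):
    # Hypothetical helper: wall-clock seconds for one attacked batch.
    # Synchronize so queued CUDA kernels are included in the measurement.
    if torch.cuda.is_available():
        torch.cuda.synchronize()
    start = time.perf_counter()
    adv_images = atk(images, labels)
    if torch.cuda.is_available():
        torch.cuda.synchronize()
    return adv_images, time.perf_counter() - start
```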
Additionally, I also recommend the recently proposed [**Rai-toolbox**](https://scholar.google.com/scholar_lookup?arxiv_id=2201.05647) package.

| Attack | Package | Time/step (accuracy) |
| ----------- | ------------ | -------------------- |
| FGSM (Linf) | rai-toolbox | **58 ms** (0%) |
| | Torchattacks | 81 ms (0%) |
| | Foolbox | 105 ms (0%) |
| | ART | 83 ms (0%) |
| PGD (Linf) | rai-toolbox | **58 ms** (44%) |
| | Torchattacks | 79 ms (44%) |
| | Foolbox | 82 ms (44%) |
| | ART | 90 ms (44%) |
| PGD (L2) | rai-toolbox | **58 ms** (70%) |
| | Torchattacks | 81 ms (70%) |
| | Foolbox | 82 ms (70%) |
| | ART | 89 ms (70%) |

> The rai-toolbox takes a unique approach to gradient-based perturbations: they are implemented in terms of [parameter-transforming optimizers](https://mit-ll-responsible-ai.github.io/responsible-ai-toolbox/ref_optim.html) and [perturbation models](https://mit-ll-responsible-ai.github.io/responsible-ai-toolbox/ref_perturbation.html). This enables users to implement diverse algorithms (like [universal perturbations](https://mit-ll-responsible-ai.github.io/responsible-ai-toolbox/how_to/univ_adv_pert.html) and [concept probing with sparse gradients](https://mit-ll-responsible-ai.github.io/responsible-ai-toolbox/tutorials/ImageNet-Concept-Probing.html)) using the same paradigm as a standard PGD attack.
---

# adversarial-attacks-pytorch Quick Start Guide

`adversarial-attacks-pytorch` (package name: `torchattacks`) is a PyTorch-based library that makes it easy to generate adversarial examples. It offers a PyTorch-native interface and supports many mainstream attack algorithms.

## Environment

Before starting, make sure your development environment meets these minimum requirements:

*   **Python**: >= 3.6
*   **PyTorch**: >= 1.4.0
*   **OS**: Linux, macOS, and Windows are supported

## Installation

Install directly via pip or from source. (The original guide notes that users in mainland China may prefer a mirror such as the Tsinghua or Aliyun index for faster downloads.)

### Option 1: install with pip (recommended)

```bash
# default index
pip install torchattacks

# or a domestic mirror index, e.g., Tsinghua
pip install torchattacks -i https://pypi.tuna.tsinghua.edu.cn/simple
```

### Option 2: install from source

If you need the latest features or want to develop against the code:

```bash
git clone https://github.com/Harry24k/adversarial-attacks-pytorch.git
cd adversarial-attacks-pytorch/
pip install -e .
```

## Basic Usage

The minimal workflow for generating adversarial examples follows; a self-contained sketch combining both steps appears after step 2.

### 1. Initialize an attack

Using the classic **PGD** (Projected Gradient Descent) attack as an example:

```python
import torchattacks

# Initialize the attack object
# model: a loaded PyTorch model
# eps:   maximum perturbation budget (e.g., 8/255)
# alpha: step size per iteration (e.g., 2/255)
# steps: number of iterations
atk = torchattacks.PGD(model, eps=8/255, alpha=2/255, steps=4)

# IMPORTANT: if your input images have been normalized (with mean and std),
# you must declare it:
# atk.set_normalization_used(mean=[...], std=[...])
```

### 2. Run the attack

Call the attack object to generate adversarial examples. Note that the model output must be a single `(N, C)` vector (N = batch size, C = number of classes), and input pixel values must lie in `[0, 1]`.

```python
# images: tensor of original input images
# labels: tensor of the corresponding ground-truth labels
adv_images = atk(images, labels)
```
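Putting steps 1 and 2 together, here is an end-to-end sketch that runs as-is; the ResNet-18 stand-in, the random tensors, and the CIFAR-10 normalization statistics are illustrative assumptions, not part of the guide:

```python
import torch
import torchvision
import torchattacks

# Stand-in model and data; replace with your trained model and loader.
model = torchvision.models.resnet18(num_classes=10).eval()
images = torch.rand(4, 3, 32, 32)   # attack inputs must stay in [0, 1]
labels = torch.randint(0, 10, (4,))

atk = torchattacks.PGD(model, eps=8/255, alpha=2/255, steps=4)
# Example CIFAR-10 statistics; substitute whatever your pipeline uses.
atk.set_normalization_used(mean=[0.4914, 0.4822, 0.4465],
                           std=[0.2471, 0.2435, 0.2616])

adv_images = atk(images, labels)    # same shape and [0, 1] range as images
print(adv_images.shape, adv_images.min().item(), adv_images.max().item())
```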
### 3. Tips

*   **Reproducibility**: to get exactly the same adversarial examples on GPU, add this at the top of your code:
    ```python
    torch.backends.cudnn.deterministic = True
    ```
*   **Targeted attacks**: the default is an untargeted attack (any misclassification). To force a specific target class, use methods such as `set_mode_targeted_by_label`.
*   **Combined attacks**: use `torchattacks.MultiAttack` to run several attack methods together.

---

# Use Case

The algorithm team at an autonomous-driving startup is stress-testing its in-vehicle pedestrian-detection model, looking for recognition vulnerabilities under extreme interference.

### Without adversarial-attacks-pytorch
- **Reinventing the wheel is slow**: engineers must hand-derive classic attacks such as FGSM and PGD, spending weeks on low-level gradient code that is easy to get wrong.
- **High interface-adaptation cost**: every change of model architecture means re-tuning input normalization ranges and clipping logic, so test scripts cannot be reused.
- **Hard-to-reproduce experiments**: without unified seed management and deterministic backend settings, different team members generate inconsistent adversarial examples, making robustness comparisons unreliable.
- **Heavy maintenance burden**: homegrown attack code lacks documentation and community support; a PyTorch upgrade often forces a large refactor of the internal toolchain.

### With adversarial-attacks-pytorch
- **Works out of the box**: a few lines such as `torchattacks.PGD` invoke mature attack algorithms, shrinking weeks of development to hours.
- **Standardized pipeline**: the library handles the [0, 1] input-domain constraint and normalization parameters automatically, fitting mainstream torchvision models so scripts are written once and run everywhere.
- **Stable, trustworthy results**: built-in deterministic-computation support means a fixed random seed reproduces exactly the same adversarial examples, keeping the team's evaluation baseline consistent.
- **Easy ecosystem integration**: it links directly with the MAIR training framework and the RobustBench benchmark to build a full loop from attack generation to defensive training.

adversarial-attacks-pytorch turns adversarial-attack theory into standardized engineering practice, letting developers focus on improving model robustness rather than low-level algorithm implementation.

---

# Project Info

* **Author**: Hoki Kim ([Harry24k](https://github.com/Harry24k)), Chung-Ang University, Seoul, Korea; hokikim@cau.ac.kr; Twitter: HokiKimKR; website: trustworthyai.co.kr
* **Languages**: Python 98.2%, Jupyter Notebook 1.2%, Jinja 0.6%
* **Stars / Forks**: 2,155 / 369
* **Last commit**: 2026-04-02
* **License**: MIT
* **OS**: not specified
* **GPU**: not required (CPU is supported); if using a GPU, set `torch.backends.cudnn.deterministic = True` for reproducible results (specific model and VRAM not specified)
* **Environment notes**: input values must lie in [0, 1]; the model output must be a single `(N, C)` vector (N = number of samples, C = number of classes); enable deterministic mode with a fixed random seed to reproduce identical adversarial examples
* **Python**: >= 3.6; **dependencies**: torch>=1.4.0
* **GitHub topics**: deep-learning, pytorch, adversarial-attacks

---

# FAQ

**Q: Why do PGD results with epsilon set to 0 not match the clean accuracy?**

A: This is a data-normalization issue. If the model expects normalized inputs but the attack code is not told about the normalization, the attacked and clean evaluations diverge. Solutions (a sanity-check sketch follows below):
1. Use `atk.set_normalization_used` to tell the attack which normalization parameters were applied.
2. Alternatively, de-normalize the images manually before passing them to torchattacks, so the input matches what the attack algorithm expects.

Source: https://github.com/Harry24k/adversarial-attacks-pytorch/issues/124
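A concrete way to verify the fix, as a sketch: with `eps=0` and the normalization declared, attacked predictions should match clean predictions. The model, data, and CIFAR-10-style normalization statistics are assumed placeholders:

```python
import torch
import torchattacks
import torchvision.transforms as T

# Assumed setup: `model` was trained on normalized inputs, and a loader
# yields `images` in [0, 1] with integer `labels`.
normalize = T.Normalize(mean=[0.4914, 0.4822, 0.4465],
                        std=[0.2471, 0.2435, 0.2616])

atk = torchattacks.PGD(model, eps=0, alpha=0, steps=1)
atk.set_normalization_used(mean=normalize.mean, std=normalize.std)
adv_images = atk(images, labels)

with torch.no_grad():
    clean_pred = model(normalize(images)).argmax(dim=1)
    adv_pred = model(normalize(adv_images)).argmax(dim=1)

# With eps=0 the attack is a no-op, so any mismatch means the attack and
# the evaluation disagree about normalization.
assert torch.equal(clean_pred, adv_pred)
```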
**Q: CW (Carlini & Wagner) fails to generate effective adversarial examples or works poorly. How should I tune it?**

A: CW is highly sensitive to its hyperparameters. If the attack fails, or the generated image is indistinguishable from the original, the usual causes are too few iterations or a badly weighted loss:
1. **Increase the iteration count**: the default may be insufficient; raise it for more effective gradient updates.
2. **Check the loss weighting**: CW combines a distance loss and a classification loss. If the constant `c` on the classification loss is too small, the gradient is dominated by the distance term and the output stays nearly identical to the original image. Raise `c` appropriately or let it vary dynamically.

Source: https://github.com/Harry24k/adversarial-attacks-pytorch/issues/5

**Q: JSMA runs out of GPU memory (reporting tens of GB needed) on large images such as ImageNet. How can I fix it?**

A: This is a property of the JSMA algorithm itself, not a code bug. Following the original paper, JSMA computes a Jacobian over all pixel pairs (p1, p2). For an ImageNet image (3×224×224) the number of combinations is (3·224·224)², which produces an enormous lookup matrix and an exploding memory requirement (84 GB+). Options:
1. **Avoid JSMA on high-resolution images**: the algorithm is currently unsuitable for full-image attacks at ImageNet scale.
2. **Use an alternative**: for L0-norm attacks, try **OnePixel** or another more memory-efficient method.
3. **Shrink the input**: run JSMA only on very small images or on a region of interest (ROI).

Source: https://github.com/Harry24k/adversarial-attacks-pytorch/issues/187

**Q: Why is the PGD-L2 success rate far below PGD-Linf even with the same epsilon (e.g., 8/255)?**

A: This is a common misconception: **epsilon values under different Lp norms are not directly comparable**.
- `epsilon = 8/255` is a standard perturbation budget under the Linf norm.
- Under the L2 norm, the same number represents a tiny perturbation energy, so the attack success rate will be very low.

Correct practice:
1. **Do not compare success rates across norms**: Linf and L2 attacks are two different experimental settings.
2. **Follow the literature**: when reproducing experiments, use the epsilon values set specifically for each norm in the relevant papers (e.g., the AutoAttack or RobustBench papers) instead of forcing a single number.
3. **Compare within a norm**: comparing attack algorithms (e.g., PGD vs. AutoAttack) is only meaningful when the dataset, epsilon, and norm type all match.

Source: https://github.com/Harry24k/adversarial-attacks-pytorch/issues/142

**Q: For adversarial training or testing, must the data always be normalized to zero mean?**

A: Not necessarily zero mean, but **consistency is mandatory**: the normalization (mean and std) used to train the model must also be in effect when attacks are generated and evaluated.
- If the model was trained with `(0.5, 0.5, 0.5)` normalization, images fed to the attack must receive the same treatment, or the attack must be told about it via `set_normalization_used`.
- Maintainer's advice: normalize whenever you like, but before feeding images to torchattacks make sure you understand whether it handles normalization internally; de-normalize manually first if needed, or configure the attack object to match your normalization parameters.

Source: https://github.com/Harry24k/adversarial-attacks-pytorch/issues/124

**Q: How do I use the library for Gaussian-noise robustness testing or training?**

A: You can simulate a noisy environment by switching the model to training mode:
1. Call `model.train()`.
2. In some implementations, training mode activates particular noise layers or behaviors (depending on the model definition).

Note: to measure standard accuracy, use `model.eval()`; for adversarial training or noise-injection training, keep `model.train()` together with an appropriate data-loading strategy. If accuracy is unexpectedly low, check that the model mode was switched correctly and that the preprocessing pipeline did not accidentally apply an attack transform.

Source: https://github.com/Harry24k/adversarial-attacks-pytorch/issues/40

---

# Releases

* **v3.5.1** (2023-10-20): fixed a PIFGSMPP loading error; reformatted the code with Black.
* **v3.5.0** (2023-10-20): fixed a normalization bug; updated the documentation and README.
* **v3.4.0** (2023-03-27): bug fixes (added setup.py; targeted CW; Pixle arguments); new attacks SPSA and JSMA; `set_mode_targeted_*` now supports a quiet mode; added `_check_inputs`.
* **v3.3.0** (2022-10-03)
* **v3.2.6** (2022-04-10)
* **v3.2.5** (2022-04-10)
* **v3.2.4** (2022-04-10)
* **v3.2.3** (2021-12-09)
* **v3.0.0** (2021-07-08)
* **v2.10.2** (2020-12-04)