[{"data":1,"prerenderedAt":-1},["ShallowReactive",2],{"tool-bethgelab--foolbox":3,"similar-bethgelab--foolbox":203},{"id":4,"github_repo":5,"name":6,"description_en":7,"description_zh":8,"ai_summary_zh":8,"readme_en":9,"readme_zh":10,"quickstart_zh":11,"use_case_zh":12,"hero_image_url":13,"owner_login":14,"owner_name":15,"owner_avatar_url":16,"owner_bio":17,"owner_company":18,"owner_location":18,"owner_email":18,"owner_twitter":18,"owner_website":19,"owner_url":20,"languages":21,"stars":42,"forks":43,"last_commit_at":44,"license":45,"difficulty_score":46,"env_os":47,"env_gpu":48,"env_ram":49,"env_deps":50,"category_tags":59,"github_topics":61,"view_count":46,"oss_zip_url":18,"oss_zip_packed_at":18,"status":70,"created_at":71,"updated_at":72,"faqs":73,"releases":102},5408,"bethgelab\u002Ffoolbox","foolbox","A Python toolbox to create adversarial examples that fool neural networks in PyTorch, TensorFlow, and JAX","Foolbox 是一款专为评估机器学习模型鲁棒性而设计的 Python 开源库，能够帮助用户轻松生成“对抗样本”，即那些经过细微修改却能误导神经网络做出错误判断的数据。它主要解决了深度学习模型在面对恶意攻击时安全性难以量化测试的痛点，让研究人员和开发者能够便捷地验证模型在 PyTorch、TensorFlow 和 JAX 等主流框架下的防御能力。\n\n这款工具特别适合人工智能领域的研究人员、算法工程师以及关注模型安全性的开发者使用。Foolbox 的核心亮点在于其基于 EagerPy 重构的架构，实现了真正的原生性能支持。这意味着它无需在不同框架间进行繁琐的代码转换或重复开发，即可直接利用各框架的原生计算能力（包括真实的批量处理），从而大幅提升攻击算法的运行效率。此外，Foolbox 集成了大量业界领先的基于梯度和基于决策的攻击算法，并提供了完善的类型检查机制，帮助用户在代码运行前发现潜在错误。无论是进行学术研究还是工业级的模型压力测试，Foolbox 都能提供高效、统一且可靠的解决方案。",".. raw:: html\n\n   \u003Ca href=\"https:\u002F\u002Ffoolbox.jonasrauber.de\">\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fbethgelab_foolbox_readme_593bdbe18e44.png\" align=\"right\" \u002F>\u003C\u002Fa>\n\n.. image:: https:\u002F\u002Fbadge.fury.io\u002Fpy\u002Ffoolbox.svg\n   :target: https:\u002F\u002Fbadge.fury.io\u002Fpy\u002Ffoolbox\n\n.. image:: https:\u002F\u002Freadthedocs.org\u002Fprojects\u002Ffoolbox\u002Fbadge\u002F?version=latest\n    :target: https:\u002F\u002Ffoolbox.readthedocs.io\u002Fen\u002Flatest\u002F\n\n.. 
image:: https:\u002F\u002Fimg.shields.io\u002Fbadge\u002Fcode%20style-black-000000.svg\n   :target: https:\u002F\u002Fgithub.com\u002Fambv\u002Fblack\n\n.. image:: https:\u002F\u002Fjoss.theoj.org\u002Fpapers\u002F10.21105\u002Fjoss.02607\u002Fstatus.svg\n   :target: https:\u002F\u002Fdoi.org\u002F10.21105\u002Fjoss.02607\n\n===============================================================================================================================\nFoolbox: Fast adversarial attacks to benchmark the robustness of machine learning models in PyTorch, TensorFlow, and JAX\n===============================================================================================================================\n\n`Foolbox \u003Chttps:\u002F\u002Ffoolbox.jonasrauber.de>`_ is a **Python library** that lets you easily run adversarial attacks against machine learning models like deep neural networks. It is built on top of EagerPy and works natively with models in `PyTorch \u003Chttps:\u002F\u002Fpytorch.org>`_, `TensorFlow \u003Chttps:\u002F\u002Fwww.tensorflow.org>`_, and `JAX \u003Chttps:\u002F\u002Fgithub.com\u002Fgoogle\u002Fjax>`_.\n\n🔥 Design \n----------\n\n**Foolbox 3** has been rewritten from scratch\nusing `EagerPy \u003Chttps:\u002F\u002Fgithub.com\u002Fjonasrauber\u002Feagerpy>`_ instead of\nNumPy to achieve native performance on models\ndeveloped in PyTorch, TensorFlow and JAX, all with one code base without code duplication.\n\n- **Native Performance**: Foolbox 3 is built on top of EagerPy and runs natively in PyTorch, TensorFlow, and JAX and comes with real batch support.\n- **State-of-the-art attacks**: Foolbox provides a large collection of state-of-the-art gradient-based and decision-based adversarial attacks.\n- **Type Checking**: Catch bugs before running your code thanks to extensive type annotations in Foolbox.\n\n📖 Documentation\n-----------------\n\n- **Guide**: The best place to get started with Foolbox is the official `guide 
\u003Chttps:\u002F\u002Ffoolbox.jonasrauber.de>`_.\n- **Tutorial**: If you are looking for a tutorial, check out this `Jupyter notebook \u003Chttps:\u002F\u002Fgithub.com\u002Fjonasrauber\u002Ffoolbox-native-tutorial\u002Fblob\u002Fmaster\u002Ffoolbox-native-tutorial.ipynb>`_ |colab|.\n- **Documentation**: The API documentation can be found on `ReadTheDocs \u003Chttps:\u002F\u002Ffoolbox.readthedocs.io\u002Fen\u002Fstable\u002F>`_.\n\n.. |colab| image:: https:\u002F\u002Fcolab.research.google.com\u002Fassets\u002Fcolab-badge.svg\n   :target: https:\u002F\u002Fcolab.research.google.com\u002Fgithub\u002Fjonasrauber\u002Ffoolbox-native-tutorial\u002Fblob\u002Fmaster\u002Ffoolbox-native-tutorial.ipynb\n\n🚀 Quickstart\n--------------\n\n.. code-block:: bash\n\n   pip install foolbox\n\nFoolbox is tested with Python 3.8 and newer - however, it will most likely also work with version 3.6 - 3.8. To use it with `PyTorch \u003Chttps:\u002F\u002Fpytorch.org>`_, `TensorFlow \u003Chttps:\u002F\u002Fwww.tensorflow.org>`_, or `JAX \u003Chttps:\u002F\u002Fgithub.com\u002Fgoogle\u002Fjax>`_, the respective framework needs to be installed separately. These frameworks are not declared as dependencies because not everyone wants to use and thus install all of them and because some of these packages have different builds for different architectures and CUDA versions. Besides that, all essential dependencies are automatically installed.\n\nYou can see the versions we currently use for testing in the `Compatibility section \u003C#-compatibility>`_ below, but newer versions are in general expected to work.\n\n🎉 Example\n-----------\n\n.. 
code-block:: python\n\n   import foolbox as fb\n\n   model = ...\n   fmodel = fb.PyTorchModel(model, bounds=(0, 1))\n\n   attack = fb.attacks.LinfPGD()\n   epsilons = [0.0, 0.001, 0.01, 0.03, 0.1, 0.3, 0.5, 1.0]\n   _, advs, success = attack(fmodel, images, labels, epsilons=epsilons)\n\n\nMore examples can be found in the `examples \u003C.\u002Fexamples\u002F>`_ folder, e.g.\na full `ResNet-18 example \u003C.\u002Fexamples\u002Fsingle_attack_pytorch_resnet18.py>`_.\n\n📄 Citation\n------------\n\nIf you use Foolbox for your work, please cite our `JOSS paper on Foolbox Native (i.e., Foolbox 3.0) \u003Chttps:\u002F\u002Fdoi.org\u002F10.21105\u002Fjoss.02607>`_ and our `ICML workshop paper on Foolbox \u003Chttps:\u002F\u002Farxiv.org\u002Fabs\u002F1707.04131>`_ using the following BibTeX entries:\n\n.. code-block::\n\n   @article{rauber2017foolboxnative,\n     doi = {10.21105\u002Fjoss.02607},\n     url = {https:\u002F\u002Fdoi.org\u002F10.21105\u002Fjoss.02607},\n     year = {2020},\n     publisher = {The Open Journal},\n     volume = {5},\n     number = {53},\n     pages = {2607},\n     author = {Jonas Rauber and Roland Zimmermann and Matthias Bethge and Wieland Brendel},\n     title = {Foolbox Native: Fast adversarial attacks to benchmark the robustness of machine learning models in PyTorch, TensorFlow, and JAX},\n     journal = {Journal of Open Source Software}\n   }\n\n.. 
code-block::\n\n   @inproceedings{rauber2017foolbox,\n     title={Foolbox: A Python toolbox to benchmark the robustness of machine learning models},\n     author={Rauber, Jonas and Brendel, Wieland and Bethge, Matthias},\n     booktitle={Reliable Machine Learning in the Wild Workshop, 34th International Conference on Machine Learning},\n     year={2017},\n     url={http:\u002F\u002Farxiv.org\u002Fabs\u002F1707.04131},\n   }\n\n\n👍 Contributions\n-----------------\n\nWe welcome contributions of all kinds; please have a look at our\n`development guidelines \u003Chttps:\u002F\u002Ffoolbox.jonasrauber.de\u002Fguide\u002Fdevelopment.html>`_.\nIn particular, you are invited to contribute\n`new adversarial attacks \u003Chttps:\u002F\u002Ffoolbox.jonasrauber.de\u002Fguide\u002Fadding_attacks.html>`_.\nIf you would like to help, you can also have a look at the issues that are\nmarked with `contributions welcome\n\u003Chttps:\u002F\u002Fgithub.com\u002Fbethgelab\u002Ffoolbox\u002Fissues?q=is%3Aopen+is%3Aissue+label%3A%22contributions+welcome%22>`_.\n\n💡 Questions?\n--------------\n\nIf you have a question or need help, feel free to open an issue on GitHub.\nOnce GitHub Discussions becomes publicly available, we will switch to that.\n\n💨 Performance\n--------------\n\nFoolbox 3.0 is much faster than Foolbox 1 and 2. A basic `performance comparison`_ can be found in the `performance` folder.\n\n🐍 Compatibility\n-----------------\n\nWe currently test with the following versions:\n\n* PyTorch 1.10.1\n* TensorFlow 2.6.3\n* JAX 0.2.17\n* NumPy 1.18.1\n\n.. _performance comparison: performance\u002FREADME.md\n",".. raw:: html\n\n   \u003Ca href=\"https:\u002F\u002Ffoolbox.jonasrauber.de\">\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fbethgelab_foolbox_readme_593bdbe18e44.png\" align=\"right\" \u002F>\u003C\u002Fa>\n\n.. image:: https:\u002F\u002Fbadge.fury.io\u002Fpy\u002Ffoolbox.svg\n   :target: https:\u002F\u002Fbadge.fury.io\u002Fpy\u002Ffoolbox\n\n.. 
image:: https:\u002F\u002Freadthedocs.org\u002Fprojects\u002Ffoolbox\u002Fbadge\u002F?version=latest\n    :target: https:\u002F\u002Ffoolbox.readthedocs.io\u002Fen\u002Flatest\u002F\n\n.. image:: https:\u002F\u002Fimg.shields.io\u002Fbadge\u002Fcode%20style-black-000000.svg\n   :target: https:\u002F\u002Fgithub.com\u002Fambv\u002Fblack\n\n.. image:: https:\u002F\u002Fjoss.theoj.org\u002Fpapers\u002F10.21105\u002Fjoss.02607\u002Fstatus.svg\n   :target: https:\u002F\u002Fdoi.org\u002F10.21105\u002Fjoss.02607\n\n===============================================================================================================================\nFoolbox：用于在 PyTorch、TensorFlow 和 JAX 中对机器学习模型鲁棒性进行基准测试的快速对抗攻击工具\n===============================================================================================================================\n\n`Foolbox \u003Chttps:\u002F\u002Ffoolbox.jonasrauber.de>`_ 是一个 **Python 库**，可让您轻松地对深度神经网络等机器学习模型执行对抗攻击。它基于 EagerPy 构建，原生支持 `PyTorch \u003Chttps:\u002F\u002Fpytorch.org>`_、`TensorFlow \u003Chttps:\u002F\u002Fwww.tensorflow.org>`_ 和 `JAX \u003Chttps:\u002F\u002Fgithub.com\u002Fgoogle\u002Fjax>`_ 中的模型。\n\n🔥 设计 \n----------\n\n**Foolbox 3** 已经从头重写，\n使用 `EagerPy \u003Chttps:\u002F\u002Fgithub.com\u002Fjonasrauber\u002Feagerpy>`_ 替代 NumPy，\n以实现对 PyTorch、TensorFlow 和 JAX 中开发模型的原生性能，\n且仅需一套代码库即可完成，无需重复编写。\n\n- **原生性能**：Foolbox 3 基于 EagerPy 构建，在 PyTorch、TensorFlow 和 JAX 中原生运行，并真正支持批量处理。\n- **最先进攻击**：Foolbox 提供大量最先进的基于梯度和基于决策的对抗攻击方法。\n- **类型检查**：借助 Foolbox 中丰富的类型注解，在运行代码之前即可捕获错误。\n\n📖 文档\n-----------------\n\n- **指南**：开始使用 Foolbox 的最佳地点是官方 `指南 \u003Chttps:\u002F\u002Ffoolbox.jonasrauber.de>`_。\n- **教程**：如果您正在寻找教程，请查看此 `Jupyter 笔记本 \u003Chttps:\u002F\u002Fgithub.com\u002Fjonasrauber\u002Ffoolbox-native-tutorial\u002Fblob\u002Fmaster\u002Ffoolbox-native-tutorial.ipynb>`_ |colab|。\n- **文档**：API 文档可在 `ReadTheDocs \u003Chttps:\u002F\u002Ffoolbox.readthedocs.io\u002Fen\u002Fstable\u002F>`_ 上找到。\n\n.. 
|colab| image:: https:\u002F\u002Fcolab.research.google.com\u002Fassets\u002Fcolab-badge.svg\n   :target: https:\u002F\u002Fcolab.research.google.com\u002Fgithub\u002Fjonasrauber\u002Ffoolbox-native-tutorial\u002Fblob\u002Fmaster\u002Ffoolbox-native-tutorial.ipynb\n\n🚀 快速入门\n--------------\n\n.. code-block:: bash\n\n   pip install foolbox\n\nFoolbox 经过 Python 3.8 及更高版本的测试；不过，它很可能也能在 3.6 至 3.8 版本中正常工作。要将其与 `PyTorch \u003Chttps:\u002F\u002Fpytorch.org>`_、`TensorFlow \u003Chttps:\u002F\u002Fwww.tensorflow.org>`_ 或 `JAX \u003Chttps:\u002F\u002Fgithub.com\u002Fgoogle\u002Fjax>`_ 一起使用，需要分别安装相应的框架。这些框架并未被声明为依赖项，因为并非所有人都希望同时使用并安装它们，而且其中一些软件包针对不同的架构和 CUDA 版本有不同的构建版本。除此之外，所有必要的依赖项都会自动安装。\n\n我们当前用于测试的版本可在下方的 `兼容性部分 \u003C#-compatibility>`_ 中查看，但通常情况下，更新的版本也应能正常工作。\n\n🎉 示例\n-----------\n\n.. code-block:: python\n\n   import foolbox as fb\n\n   model = ...\n   fmodel = fb.PyTorchModel(model, bounds=(0, 1))\n\n   attack = fb.attacks.LinfPGD()\n   epsilons = [0.0, 0.001, 0.01, 0.03, 0.1, 0.3, 0.5, 1.0]\n   _, advs, success = attack(fmodel, images, labels, epsilons=epsilons)\n\n\n更多示例可在 `examples \u003C.\u002Fexamples\u002F>`_ 文件夹中找到，例如完整的 `ResNet-18 示例 \u003C.\u002Fexamples\u002Fsingle_attack_pytorch_resnet18.py>`_。\n\n📄 引用\n------------\n\n如果您在工作中使用了 Foolbox，请引用我们 `关于 Foolbox Native（即 Foolbox 3.0）的 JOSS 论文 \u003Chttps:\u002F\u002Fdoi.org\u002F10.21105\u002Fjoss.02607>`_ 以及我们在 `ICML 研讨会上发表的关于 Foolbox 的论文 \u003Chttps:\u002F\u002Farxiv.org\u002Fabs\u002F1707.04131>`_，BibTeX 条目如下：\n\n.. 
code-block::\n\n   @article{rauber2017foolboxnative,\n     doi = {10.21105\u002Fjoss.02607},\n     url = {https:\u002F\u002Fdoi.org\u002F10.21105\u002Fjoss.02607},\n     year = {2020},\n     publisher = {The Open Journal},\n     volume = {5},\n     number = {53},\n     pages = {2607},\n     author = {Jonas Rauber and Roland Zimmermann and Matthias Bethge and Wieland Brendel},\n     title = {Foolbox Native: Fast adversarial attacks to benchmark the robustness of machine learning models in PyTorch, TensorFlow, and JAX},\n     journal = {Journal of Open Source Software}\n   }\n\n.. code-block::\n\n   @inproceedings{rauber2017foolbox,\n     title={Foolbox: A Python toolbox to benchmark the robustness of machine learning models},\n     author={Rauber, Jonas and Brendel, Wieland and Bethge, Matthias},\n     booktitle={Reliable Machine Learning in the Wild Workshop, 34th International Conference on Machine Learning},\n     year={2017},\n     url={http:\u002F\u002Farxiv.org\u002Fabs\u002F1707.04131},\n   }\n\n\n👍 贡献\n-----------------\n\n我们欢迎各种形式的贡献，请参阅我们的 `开发指南 \u003Chttps:\u002F\u002Ffoolbox.jonasrauber.de\u002Fguide\u002Fdevelopment.html>`_。特别是，欢迎您贡献 `新的对抗攻击 \u003Chttps:\u002F\u002Ffoolbox.jonasrauber.de\u002Fguide\u002Fadding_attacks.html>`_。如果您想提供帮助，也可以查看标记为 `欢迎贡献 \u003Chttps:\u002F\u002Fgithub.com\u002Fbethgelab\u002Ffoolbox\u002Fissues?q=is%3Aopen+is%3Aissue+label%3A%22contributions+welcome%22>`_ 的问题。\n\n💡 有问题吗？\n--------------\n\n如果您有任何问题或需要帮助，欢迎在 GitHub 上提交一个 issue。待 GitHub Discussions 公开可用后，我们将切换到该平台。\n\n💨 性能\n--------------\n\nFoolbox 3.0 比 Foolbox 1 和 2 快得多。基本的 `性能比较`_ 可在 `performance` 文件夹中找到。\n\n🐍 兼容性\n-----------------\n\n我们目前使用以下版本进行测试：\n\n* PyTorch 1.10.1\n* TensorFlow 2.6.3\n* JAX 0.2.17\n* NumPy 1.18.1\n\n.. 
_性能比较: performance\u002FREADME.md","# Foolbox 快速上手指南\n\nFoolbox 是一个用于对机器学习模型（如深度神经网络）进行对抗攻击的 Python 库。Foolbox 3 基于 EagerPy 重构，原生支持 **PyTorch**、**TensorFlow** 和 **JAX**，无需代码复制即可在同一代码库中实现高性能攻击。\n\n## 环境准备\n\n### 系统要求\n- **Python 版本**：推荐 Python 3.8 及以上（理论上兼容 3.6 - 3.8）。\n- **操作系统**：Linux, macOS, Windows。\n\n### 前置依赖\nFoolbox 本身不强制捆绑深度学习框架，你需要根据项目需求**单独安装**以下任一框架：\n- PyTorch\n- TensorFlow\n- JAX\n\n> **注意**：由于不同架构和 CUDA 版本的构建差异，请前往各框架官网获取适合你环境的安装命令。\n\n## 安装步骤\n\n使用 pip 安装 Foolbox 核心库：\n\n```bash\npip install foolbox\n```\n\n如果你希望使用国内镜像源加速安装，推荐使用清华源：\n\n```bash\npip install foolbox -i https:\u002F\u002Fpypi.tuna.tsinghua.edu.cn\u002Fsimple\n```\n\n安装完成后，确保已单独安装了对应的深度学习框架（例如 `pip install torch` 或 `pip install tensorflow`）。\n\n## 基本使用\n\n以下是一个基于 **PyTorch** 的最简使用示例，演示如何加载模型并执行 `LinfPGD` 对抗攻击。\n\n### 代码示例\n\n```python\nimport foolbox as fb\n\n# 1. 加载你的预训练模型 (此处为占位符，需替换为实际模型)\nmodel = ... \n\n# 2. 将模型包装为 Foolbox 模型，指定输入数据范围 (例如归一化到 0-1)\nfmodel = fb.PyTorchModel(model, bounds=(0, 1))\n\n# 3. 选择攻击算法 (此处为 Linf 范数下的 PGD 攻击)\nattack = fb.attacks.LinfPGD()\n\n# 4. 定义扰动强度列表 (epsilons)\nepsilons = [0.0, 0.001, 0.01, 0.03, 0.1, 0.3, 0.5, 1.0]\n\n# 5. 
执行攻击\n# images: 输入图像张量, labels: 真实标签张量\n_, advs, success = attack(fmodel, images, labels, epsilons=epsilons)\n```\n\n### 说明\n- `bounds=(0, 1)`：表示模型输入数据的像素值范围，请根据你的预处理流程调整（如 `(-1, 1)`）。\n- `success`：返回一个布尔数组，指示在每个 `epsilon` 强度下攻击是否成功。\n- 更多完整示例（如 ResNet-18 实战）可参考官方仓库的 `examples` 文件夹。","某自动驾驶初创公司的算法团队正在对自研的交通标志识别模型进行安全审计，急需验证其在对抗样本攻击下的鲁棒性。\n\n### 没有 foolbox 时\n- **框架适配成本高**：团队混合使用了 PyTorch 和 TensorFlow，手动为不同框架重写攻击代码导致大量重复劳动，维护极其困难。\n- **攻击实现复杂**：复现前沿的梯度攻击算法（如 PGD）需要深入推导数学公式并处理复杂的张量运算，极易引入隐蔽的 Bug。\n- **评估效率低下**：缺乏原生批处理支持，只能单张图片串行生成对抗样本，耗时数天才能完成一次完整的鲁棒性基准测试。\n- **调试困难**：由于缺乏类型检查，张量维度不匹配等错误往往在运行很久后才爆发，排查问题耗费大量精力。\n\n### 使用 foolbox 后\n- **统一代码底座**：借助 EagerPy 底层支持，同一套攻击代码可直接无缝运行于 PyTorch、TensorFlow 和 JAX 模型，彻底消除框架隔阂。\n- **开箱即用算法**：直接调用内置的 LinfPGD 等最先进的攻击接口，几行代码即可发起高强度攻击，无需关注底层数学实现。\n- **原生高性能加速**：利用真实的批处理（Batch）支持并行生成对抗样本，将原本数天的测试周期缩短至小时级，大幅提升迭代速度。\n- **开发更稳健**：得益于完善的类型注解，能在编码阶段提前捕获维度错误，显著降低运行时崩溃风险，让工程师更专注于策略分析。\n\nfoolbox 通过提供统一、高效且原生的对抗攻击基准测试能力，帮助团队以最小成本构建了可靠的模型防御防线。","https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fbethgelab_foolbox_d280f20e.png","bethgelab","Bethge Lab","https:\u002F\u002Foss.gittoolsai.com\u002Favatars\u002Fbethgelab_e95551ad.jpg","Perceiving Neural Networks",null,"http:\u002F\u002Fbethgelab.org","https:\u002F\u002Fgithub.com\u002Fbethgelab",[22,26,30,34,38],{"name":23,"color":24,"percentage":25},"Python","#3572A5",93.1,{"name":27,"color":28,"percentage":29},"Jupyter Notebook","#DA5B0B",5.2,{"name":31,"color":32,"percentage":33},"TeX","#3D6117",0.9,{"name":35,"color":36,"percentage":37},"Makefile","#427819",0.6,{"name":39,"color":40,"percentage":41},"JavaScript","#f1e05a",0.2,2952,439,"2026-04-05T12:48:22","MIT",2,"","未说明（取决于所选用的深度学习框架 PyTorch\u002FTensorFlow\u002FJAX 及其对应的 CUDA 版本）","未说明",{"notes":51,"python":52,"dependencies":53},"Foolbox 本身不将 PyTorch、TensorFlow 或 JAX 列为强制依赖，用户需根据实际需求单独安装其中一个或多个框架。不同框架对系统架构和 CUDA 版本有特定要求。当前测试使用的框架版本为：PyTorch 1.10.1, TensorFlow 2.6.3, JAX 0.2.17, NumPy 1.18.1。","3.8+ (测试环境为 3.8，推测支持 
3.6-3.8)",[54,55,56,57,58],"eagerpy","PyTorch (可选，需单独安装)","TensorFlow (可选，需单独安装)","JAX (可选，需单独安装)","NumPy",[60],"开发框架",[62,63,64,65,66,67,68,69],"adversarial-examples","machine-learning","python","adversarial-attacks","pytorch","tensorflow","jax","keras","ready","2026-03-27T02:49:30.150509","2026-04-08T14:44:24.402779",[74,79,84,89,94,98],{"id":75,"question_zh":76,"answer_zh":77,"source_url":78},24542,"为什么生成的对抗样本在重新输入模型预测时，分类结果又变回了原始标签（即看似不是对抗样本）？","这通常是因为样本非常接近决策边界，导致模型的预测在不同批次或微小扰动下不一致（数值不稳定）。Foolbox 保证返回的样本在攻击过程中曾被误分类。如果您需要验证，可以尝试：1. 信任 Foolbox 的保证直接使用；2. 将生成的对抗样本稍微远离决策边界一点点以确认其稳定性；3. 注意不要依赖单次批处理预测来验证，因为批次内的其他样本可能影响结果。","https:\u002F\u002Fgithub.com\u002Fbethgelab\u002Ffoolbox\u002Fissues\u002F243",{"id":80,"question_zh":81,"answer_zh":82,"source_url":83},24543,"L-BFGS 攻击运行极慢甚至似乎卡住不动，是什么原因？","L-BFGS 攻击需要进行大量的模型评估，如果模型较大（如 VGG19）且在 CPU 上运行，速度会非常慢。解决方案是使用 GPU 进行加速。如果无法使用 GPU，建议尝试更快的攻击方法，例如 FGSM (GradientSignAttack)，它在大多数情况下也能快速生成对抗样本。","https:\u002F\u002Fgithub.com\u002Fbethgelab\u002Ffoolbox\u002Fissues\u002F72",{"id":85,"question_zh":86,"answer_zh":87,"source_url":88},24544,"Foolbox 是否支持非图像输入（如整数序列、文本等）的对抗攻击？","如果您的模型输入必须是整数（例如经过 Embedding 层的序列），直接应用基于梯度的攻击会很困难，因为模型对输入不可微。可能的变通方法是：定义一个接受浮点数输入但在内部将其舍入为整数的包装模型，然后对该包装模型进行攻击。对于不可微的情况，可以尝试使用无需梯度的攻击方法，如 `AdditiveUniformNoiseAttack`。建议先在图像或音频等常见场景熟悉 Foolbox，再针对特定问题调整。","https:\u002F\u002Fgithub.com\u002Fbethgelab\u002Ffoolbox\u002Fissues\u002F80",{"id":90,"question_zh":91,"answer_zh":92,"source_url":93},24545,"如何针对外部 API 或黑盒网站（无法获取模型权重）构建 Foolbox 模型并进行攻击？","您需要创建一个自定义的 Foolbox 模型类，该类封装对外部 API 的调用。在 `predictions` 方法中，发送图像数据到网站并解析返回的置信度或类别概率。虽然 Issue 中未提供完整代码，但核心思路是：子类化 `foolbox.models.Model`，实现 `predictions` 方法通过网络请求获取结果，并确保返回格式符合 Foolbox 要求（通常是 numpy 数组形式的概率分布）。之后即可像普通模型一样对其使用 Boundary Attack 等黑盒攻击。","https:\u002F\u002Fgithub.com\u002Fbethgelab\u002Ffoolbox\u002Fissues\u002F253",{"id":95,"question_zh":96,"answer_zh":97,"source_url":93},24546,"Boundary Attack 总是很快收敛（如 1500 步），如何让它运行更多步骤以获得更优结果？","Boundary Attack 
的设计目标是在找到满足条件的对抗样本后，尽可能减小扰动使其看起来与原图无异，因此它会在找到可行解后继续优化直至收敛。如果您希望运行更多步数以探索更小的扰动，可以调整攻击的超参数（如增加最大迭代次数 `max_iterations` 或调整步长策略）。但在许多情况下，攻击提前收敛意味着已经找到了视觉上难以区分的高质量对抗样本，这是对抗脆弱性的本质体现。",{"id":99,"question_zh":100,"answer_zh":101,"source_url":83},24547,"在使用 Foolbox 教程示例时，代码运行但不生成对抗样本或结果不符合预期，常见错误有哪些？","常见问题包括：1. 预处理不一致：确保传入攻击的图像与模型训练时的预处理（如减去均值 [123.68, 116.78, 103.94]）完全一致，不要在传入模型前重复预处理或遗漏预处理；2. 硬件限制：大型模型（如 VGG）在 CPU 上运行攻击极慢，建议使用 GPU；3. 准则设置：检查 `TargetClassProbability` 等准则的参数设置是否合理。参考官方文档修正预处理逻辑通常能解决大部分“无输出”问题。",[103,108,113,118,123,128,133,138,143,148,153,158,163,168,173,178,183,188,193,198],{"id":104,"version":105,"summary_zh":106,"released_at":107},154129,"v3.3.4","# 新功能与改进\n- [期望值变换包装器](https:\u002F\u002Fgithub.com\u002Fbethgelab\u002Ffoolbox\u002Fcommit\u002F55d2c8db674429d787dad49e88feddd20dbea0ec)\n- [添加 MIFGSM 攻击](https:\u002F\u002Fgithub.com\u002Fbethgelab\u002Ffoolbox\u002Fcommit\u002Fb3fcc739b4bfc7ad5a10f9c89df5c4b9998802bd)","2024-03-04T12:27:37",{"id":109,"version":110,"summary_zh":111,"released_at":112},154130,"v3.3.3","## 新功能与改进\n- 修复了一个 bug：AdamPGD 攻击实际上并未使用 Adam 优化器。\n- 攻击现在会验证其输入是否在模型的取值范围内。","2022-04-02T15:26:25",{"id":114,"version":115,"summary_zh":116,"released_at":117},154131,"v3.3.2","## 新功能与改进\n\n* 添加了 AdamPGD 攻击\n* 添加了逐点攻击\n* 修复了 HopSkipJump 攻击的 bug（感谢 @zhuangzi926）\n* 其他改进与 bug 修复\n","2022-03-08T08:12:37",{"id":119,"version":120,"summary_zh":121,"released_at":122},154132,"v3.3.1","修复了 `SaltAndPepperAttack` 中的严重 bug（感谢 @maurapintor 和 @zangobot）","2021-02-23T07:09:34",{"id":124,"version":125,"summary_zh":126,"released_at":127},154133,"v3.3.0","## 新功能与改进\n\n* PGD 现在支持目标攻击（感谢 @zimmerrol）\n* 修复了 DDN 攻击中的 bug（感谢 @maurapintor）\n* 修复了 Brendel-Bethge 攻击中的 bug（感谢 @wielandbrendel）\n* 其他改进和 bug 修复\n","2021-02-10T08:53:50",{"id":129,"version":130,"summary_zh":131,"released_at":132},154134,"v3.2.1","Foolbox 在 Zenodo 上发布","2020-09-26T06:48:48",{"id":134,"version":135,"summary_zh":136,"released_at":137},154135,"v3.2.0","* 添加了我们的 JOSS 论文\n* 添加了 Foolbox 1、2 和 3 之间的性能对比\n* 
改进了测试\n* 修复了 TensorFlow 示例代码\n* 改进了示例\n* 改进了教程\n* 更新了依赖项","2020-09-26T06:28:21",{"id":139,"version":140,"summary_zh":141,"released_at":142},154136,"v3.1.1","错误修复","2020-08-29T21:00:26",{"id":144,"version":145,"summary_zh":146,"released_at":147},154137,"v3.1.0","## 新特性\n* 将 `HopSkipJump` 攻击移植到 v3 版本\n* 添加了考虑裁剪的噪声攻击\n* 模型包装器现在支持 `data_format`\n* `JAXModel` 现在支持 `data_format`\n* 改进了文档\n\n## 错误修复\n* 修复了 `EADAttack` 的错误\n* 修复了 `GenAttack` 的错误\n* 其他错误修复和改进","2020-08-29T20:58:52",{"id":149,"version":150,"summary_zh":151,"released_at":152},154138,"v3.0.4","修复了版本号","2020-07-03T13:56:48",{"id":154,"version":155,"summary_zh":156,"released_at":157},154139,"v3.0.3","Fixes a bug in the `BrendelBethgeAttack` and updated Numba to silence warnings.","2020-07-03T13:48:38",{"id":159,"version":160,"summary_zh":161,"released_at":162},154140,"v3.0.2","Fixes a bug in the `BrendelBethgeAttack` (thanks @AidanKelley)","2020-05-23T14:35:57",{"id":164,"version":165,"summary_zh":166,"released_at":167},154141,"v3.0.1","### Bug fixes\r\n* type annotations are now correctly exposed using `py.typed` (file was missing in MANIFEST)\r\n* TransformBoundsWrapper now correctly handles `data_format` (thanks @zimmerrol)","2020-05-23T07:49:13",{"id":169,"version":170,"summary_zh":171,"released_at":172},154142,"v3.0.0","## New Features\r\n\r\nFoolbox 3 aka Foolbox Native has been rewritten from scratch with performance in mind. All code is running natively in PyTorch, TensorFlow and JAX, and all attacks have been rewritten with real batch support.","2020-03-22T21:22:22",{"id":174,"version":175,"summary_zh":176,"released_at":177},154143,"v3.0.0b1","## New Features\r\n\r\n* added `foolbox.gradient_estimators`\r\n* improved attack hyperparameter documentation","2020-02-16T23:26:30",{"id":179,"version":180,"summary_zh":181,"released_at":182},154144,"v3.0.0b0","Foolbox 3 aka Foolbox Native has been rewritten from scratch with performance in mind. 
All code is running natively in PyTorch, TensorFlow and JAX, and all attacks have been rewritten with real batch support.\r\n\r\nWarning: This is a pre-release beta version. Expect breaking changes.","2020-02-15T15:26:49",{"id":184,"version":185,"summary_zh":186,"released_at":187},154145,"v2.4.0","## New Features\r\n\r\n* fixed PyTorch model gradients (fixes DeepFool with batch size > 1)\r\n* added support for TensorFlow 2.0 and newer (Graph and Eager mode)\r\n* refactored the tests\r\n* support for the latest `randomgen` version","2020-02-07T14:38:08",{"id":189,"version":190,"summary_zh":191,"released_at":192},154146,"v2.3.0","## New Features\r\n* new `EnsembleAveragedModel` (thanks to @zimmerrol)\r\n* new `foolbox.utils.flatten`\r\n* new `foolbox.utils.atleast_kd`\r\n* new `foolbox.utils.accuracy`\r\n* `PyTorchModel` now always warns if model is in train mode, not just once\r\n* batch support for `ModelWithEstimatedGradients`\r\n\r\n## Bug fixes\r\n* fixed dtype when using Adam PGD with a PyTorch model\r\n* fixed CW attack hyperparameters\r\n","2019-11-04T22:07:13",{"id":194,"version":195,"summary_zh":196,"released_at":197},154147,"v2.2.0","## New Features\r\n* support for Foolbox extensions using the `foolbox.ext` namespace","2019-10-28T15:56:02",{"id":199,"version":200,"summary_zh":201,"released_at":202},154148,"v2.1.0","## New Features\r\n* New `foolbox.models.JAXModel` class to support JAX models (https:\u002F\u002Fgithub.com\u002Fgoogle\u002Fjax)\r\n* The `preprocessing` argument of models now supports a `flip_axis` key to support common preprocessing operations like RGB to BGR in a nice way. 
This builds on the ability to pass dicts to `preprocessing` introduced in Foolbox 2.0.\r\n\r\n## Bug fixes and improvements\r\n* Fixed a serious bug in the `LocalSearchAttack` (thanks to @duoergun0729)\r\n* `foolbox.utils.samples` now warns if samples are repeated\r\n* `foolbox.utils.sampels` now uses PNGs instead of JPGs (except for ImageNet)\r\n* Other bug fixes\r\n* Improved docstrings\r\n* Improved docs","2019-10-27T09:18:44",[204,216,224,233,241,250],{"id":205,"name":206,"github_repo":207,"description_zh":208,"stars":209,"difficulty_score":210,"last_commit_at":211,"category_tags":212,"status":70},4358,"openclaw","openclaw\u002Fopenclaw","OpenClaw 是一款专为个人打造的本地化 AI 助手，旨在让你在自己的设备上拥有完全可控的智能伙伴。它打破了传统 AI 助手局限于特定网页或应用的束缚，能够直接接入你日常使用的各类通讯渠道，包括微信、WhatsApp、Telegram、Discord、iMessage 等数十种平台。无论你在哪个聊天软件中发送消息，OpenClaw 都能即时响应，甚至支持在 macOS、iOS 和 Android 设备上进行语音交互，并提供实时的画布渲染功能供你操控。\n\n这款工具主要解决了用户对数据隐私、响应速度以及“始终在线”体验的需求。通过将 AI 部署在本地，用户无需依赖云端服务即可享受快速、私密的智能辅助，真正实现了“你的数据，你做主”。其独特的技术亮点在于强大的网关架构，将控制平面与核心助手分离，确保跨平台通信的流畅性与扩展性。\n\nOpenClaw 非常适合希望构建个性化工作流的技术爱好者、开发者，以及注重隐私保护且不愿被单一生态绑定的普通用户。只要具备基础的终端操作能力（支持 macOS、Linux 及 Windows WSL2），即可通过简单的命令行引导完成部署。如果你渴望拥有一个懂你",349277,3,"2026-04-06T06:32:30",[213,60,214,215],"Agent","图像","数据工具",{"id":217,"name":218,"github_repo":219,"description_zh":220,"stars":221,"difficulty_score":210,"last_commit_at":222,"category_tags":223,"status":70},3808,"stable-diffusion-webui","AUTOMATIC1111\u002Fstable-diffusion-webui","stable-diffusion-webui 是一个基于 Gradio 构建的网页版操作界面，旨在让用户能够轻松地在本地运行和使用强大的 Stable Diffusion 图像生成模型。它解决了原始模型依赖命令行、操作门槛高且功能分散的痛点，将复杂的 AI 绘图流程整合进一个直观易用的图形化平台。\n\n无论是希望快速上手的普通创作者、需要精细控制画面细节的设计师，还是想要深入探索模型潜力的开发者与研究人员，都能从中获益。其核心亮点在于极高的功能丰富度：不仅支持文生图、图生图、局部重绘（Inpainting）和外绘（Outpainting）等基础模式，还独创了注意力机制调整、提示词矩阵、负向提示词以及“高清修复”等高级功能。此外，它内置了 GFPGAN 和 CodeFormer 等人脸修复工具，支持多种神经网络放大算法，并允许用户通过插件系统无限扩展能力。即使是显存有限的设备，stable-diffusion-webui 也提供了相应的优化选项，让高质量的 AI 
艺术创作变得触手可及。",162132,"2026-04-05T11:01:52",[60,214,213],{"id":225,"name":226,"github_repo":227,"description_zh":228,"stars":229,"difficulty_score":46,"last_commit_at":230,"category_tags":231,"status":70},1381,"everything-claude-code","affaan-m\u002Feverything-claude-code","everything-claude-code 是一套专为 AI 编程助手（如 Claude Code、Codex、Cursor 等）打造的高性能优化系统。它不仅仅是一组配置文件，而是一个经过长期实战打磨的完整框架，旨在解决 AI 代理在实际开发中面临的效率低下、记忆丢失、安全隐患及缺乏持续学习能力等核心痛点。\n\n通过引入技能模块化、直觉增强、记忆持久化机制以及内置的安全扫描功能，everything-claude-code 能显著提升 AI 在复杂任务中的表现，帮助开发者构建更稳定、更智能的生产级 AI 代理。其独特的“研究优先”开发理念和针对 Token 消耗的优化策略，使得模型响应更快、成本更低，同时有效防御潜在的攻击向量。\n\n这套工具特别适合软件开发者、AI 研究人员以及希望深度定制 AI 工作流的技术团队使用。无论您是在构建大型代码库，还是需要 AI 协助进行安全审计与自动化测试，everything-claude-code 都能提供强大的底层支持。作为一个曾荣获 Anthropic 黑客大奖的开源项目，它融合了多语言支持与丰富的实战钩子（hooks），让 AI 真正成长为懂上",144730,"2026-04-07T23:26:32",[60,213,232],"语言模型",{"id":234,"name":235,"github_repo":236,"description_zh":237,"stars":238,"difficulty_score":46,"last_commit_at":239,"category_tags":240,"status":70},2271,"ComfyUI","Comfy-Org\u002FComfyUI","ComfyUI 是一款功能强大且高度模块化的视觉 AI 引擎，专为设计和执行复杂的 Stable Diffusion 图像生成流程而打造。它摒弃了传统的代码编写模式，采用直观的节点式流程图界面，让用户通过连接不同的功能模块即可构建个性化的生成管线。\n\n这一设计巧妙解决了高级 AI 绘图工作流配置复杂、灵活性不足的痛点。用户无需具备编程背景，也能自由组合模型、调整参数并实时预览效果，轻松实现从基础文生图到多步骤高清修复等各类复杂任务。ComfyUI 拥有极佳的兼容性，不仅支持 Windows、macOS 和 Linux 全平台，还广泛适配 NVIDIA、AMD、Intel 及苹果 Silicon 等多种硬件架构，并率先支持 SDXL、Flux、SD3 等前沿模型。\n\n无论是希望深入探索算法潜力的研究人员和开发者，还是追求极致创作自由度的设计师与资深 AI 绘画爱好者，ComfyUI 都能提供强大的支持。其独特的模块化架构允许社区不断扩展新功能，使其成为当前最灵活、生态最丰富的开源扩散模型工具之一，帮助用户将创意高效转化为现实。",107888,"2026-04-06T11:32:50",[60,214,213],{"id":242,"name":243,"github_repo":244,"description_zh":245,"stars":246,"difficulty_score":46,"last_commit_at":247,"category_tags":248,"status":70},4721,"markitdown","microsoft\u002Fmarkitdown","MarkItDown 是一款由微软 AutoGen 团队打造的轻量级 Python 工具，专为将各类文件高效转换为 Markdown 格式而设计。它支持 PDF、Word、Excel、PPT、图片（含 OCR）、音频（含语音转录）、HTML 乃至 YouTube 链接等多种格式的解析，能够精准提取文档中的标题、列表、表格和链接等关键结构信息。\n\n在人工智能应用日益普及的今天，大语言模型（LLM）虽擅长处理文本，却难以直接读取复杂的二进制办公文档。MarkItDown 
恰好解决了这一痛点，它将非结构化或半结构化的文件转化为模型“原生理解”且 Token 效率极高的 Markdown 格式，成为连接本地文件与 AI 分析 pipeline 的理想桥梁。此外，它还提供了 MCP（模型上下文协议）服务器，可无缝集成到 Claude Desktop 等 LLM 应用中。\n\n这款工具特别适合开发者、数据科学家及 AI 研究人员使用，尤其是那些需要构建文档检索增强生成（RAG）系统、进行批量文本分析或希望让 AI 助手直接“阅读”本地文件的用户。虽然生成的内容也具备一定可读性，但其核心优势在于为机器",93400,"2026-04-06T19:52:38",[249,60],"插件",{"id":251,"name":252,"github_repo":253,"description_zh":254,"stars":255,"difficulty_score":210,"last_commit_at":256,"category_tags":257,"status":70},4487,"LLMs-from-scratch","rasbt\u002FLLMs-from-scratch","LLMs-from-scratch 是一个基于 PyTorch 的开源教育项目，旨在引导用户从零开始一步步构建一个类似 ChatGPT 的大型语言模型（LLM）。它不仅是同名技术著作的官方代码库，更提供了一套完整的实践方案，涵盖模型开发、预训练及微调的全过程。\n\n该项目主要解决了大模型领域“黑盒化”的学习痛点。许多开发者虽能调用现成模型，却难以深入理解其内部架构与训练机制。通过亲手编写每一行核心代码，用户能够透彻掌握 Transformer 架构、注意力机制等关键原理，从而真正理解大模型是如何“思考”的。此外，项目还包含了加载大型预训练权重进行微调的代码，帮助用户将理论知识延伸至实际应用。\n\nLLMs-from-scratch 特别适合希望深入底层原理的 AI 开发者、研究人员以及计算机专业的学生。对于不满足于仅使用 API，而是渴望探究模型构建细节的技术人员而言，这是极佳的学习资源。其独特的技术亮点在于“循序渐进”的教学设计：将复杂的系统工程拆解为清晰的步骤，配合详细的图表与示例，让构建一个虽小但功能完备的大模型变得触手可及。无论你是想夯实理论基础，还是为未来研发更大规模的模型做准备",90106,"2026-04-06T11:19:32",[232,214,213,60]]