[{"data":1,"prerenderedAt":-1},["ShallowReactive",2],{"similar-QData--TextAttack":3,"tool-QData--TextAttack":64},[4,17,27,35,43,56],{"id":5,"name":6,"github_repo":7,"description_zh":8,"stars":9,"difficulty_score":10,"last_commit_at":11,"category_tags":12,"status":16},3808,"stable-diffusion-webui","AUTOMATIC1111\u002Fstable-diffusion-webui","stable-diffusion-webui 是一个基于 Gradio 构建的网页版操作界面，旨在让用户能够轻松地在本地运行和使用强大的 Stable Diffusion 图像生成模型。它解决了原始模型依赖命令行、操作门槛高且功能分散的痛点，将复杂的 AI 绘图流程整合进一个直观易用的图形化平台。\n\n无论是希望快速上手的普通创作者、需要精细控制画面细节的设计师，还是想要深入探索模型潜力的开发者与研究人员，都能从中获益。其核心亮点在于极高的功能丰富度：不仅支持文生图、图生图、局部重绘（Inpainting）和外绘（Outpainting）等基础模式，还独创了注意力机制调整、提示词矩阵、负向提示词以及“高清修复”等高级功能。此外，它内置了 GFPGAN 和 CodeFormer 等人脸修复工具，支持多种神经网络放大算法，并允许用户通过插件系统无限扩展能力。即使是显存有限的设备，stable-diffusion-webui 也提供了相应的优化选项，让高质量的 AI 艺术创作变得触手可及。",162132,3,"2026-04-05T11:01:52",[13,14,15],"开发框架","图像","Agent","ready",{"id":18,"name":19,"github_repo":20,"description_zh":21,"stars":22,"difficulty_score":23,"last_commit_at":24,"category_tags":25,"status":16},1381,"everything-claude-code","affaan-m\u002Feverything-claude-code","everything-claude-code 是一套专为 AI 编程助手（如 Claude Code、Codex、Cursor 等）打造的高性能优化系统。它不仅仅是一组配置文件，而是一个经过长期实战打磨的完整框架，旨在解决 AI 代理在实际开发中面临的效率低下、记忆丢失、安全隐患及缺乏持续学习能力等核心痛点。\n\n通过引入技能模块化、直觉增强、记忆持久化机制以及内置的安全扫描功能，everything-claude-code 能显著提升 AI 在复杂任务中的表现，帮助开发者构建更稳定、更智能的生产级 AI 代理。其独特的“研究优先”开发理念和针对 Token 消耗的优化策略，使得模型响应更快、成本更低，同时有效防御潜在的攻击向量。\n\n这套工具特别适合软件开发者、AI 研究人员以及希望深度定制 AI 工作流的技术团队使用。无论您是在构建大型代码库，还是需要 AI 协助进行安全审计与自动化测试，everything-claude-code 都能提供强大的底层支持。作为一个曾荣获 Anthropic 黑客大奖的开源项目，它融合了多语言支持与丰富的实战钩子（hooks），让 AI 真正成长为懂上",138956,2,"2026-04-05T11:33:21",[13,15,26],"语言模型",{"id":28,"name":29,"github_repo":30,"description_zh":31,"stars":32,"difficulty_score":23,"last_commit_at":33,"category_tags":34,"status":16},2271,"ComfyUI","Comfy-Org\u002FComfyUI","ComfyUI 是一款功能强大且高度模块化的视觉 AI 引擎，专为设计和执行复杂的 Stable Diffusion 图像生成流程而打造。它摒弃了传统的代码编写模式，采用直观的节点式流程图界面，让用户通过连接不同的功能模块即可构建个性化的生成管线。\n\n这一设计巧妙解决了高级 AI 
绘图工作流配置复杂、灵活性不足的痛点。用户无需具备编程背景，也能自由组合模型、调整参数并实时预览效果，轻松实现从基础文生图到多步骤高清修复等各类复杂任务。ComfyUI 拥有极佳的兼容性，不仅支持 Windows、macOS 和 Linux 全平台，还广泛适配 NVIDIA、AMD、Intel 及苹果 Silicon 等多种硬件架构，并率先支持 SDXL、Flux、SD3 等前沿模型。\n\n无论是希望深入探索算法潜力的研究人员和开发者，还是追求极致创作自由度的设计师与资深 AI 绘画爱好者，ComfyUI 都能提供强大的支持。其独特的模块化架构允许社区不断扩展新功能，使其成为当前最灵活、生态最丰富的开源扩散模型工具之一，帮助用户将创意高效转化为现实。",107662,"2026-04-03T11:11:01",[13,14,15],{"id":36,"name":37,"github_repo":38,"description_zh":39,"stars":40,"difficulty_score":23,"last_commit_at":41,"category_tags":42,"status":16},3704,"NextChat","ChatGPTNextWeb\u002FNextChat","NextChat 是一款轻量且极速的 AI 助手，旨在为用户提供流畅、跨平台的大模型交互体验。它完美解决了用户在多设备间切换时难以保持对话连续性，以及面对众多 AI 模型不知如何统一管理的痛点。无论是日常办公、学习辅助还是创意激发，NextChat 都能让用户随时随地通过网页、iOS、Android、Windows、MacOS 或 Linux 端无缝接入智能服务。\n\n这款工具非常适合普通用户、学生、职场人士以及需要私有化部署的企业团队使用。对于开发者而言，它也提供了便捷的自托管方案，支持一键部署到 Vercel 或 Zeabur 等平台。\n\nNextChat 的核心亮点在于其广泛的模型兼容性，原生支持 Claude、DeepSeek、GPT-4 及 Gemini Pro 等主流大模型，让用户在一个界面即可自由切换不同 AI 能力。此外，它还率先支持 MCP（Model Context Protocol）协议，增强了上下文处理能力。针对企业用户，NextChat 提供专业版解决方案，具备品牌定制、细粒度权限控制、内部知识库整合及安全审计等功能，满足公司对数据隐私和个性化管理的高标准要求。",87618,"2026-04-05T07:20:52",[13,26],{"id":44,"name":45,"github_repo":46,"description_zh":47,"stars":48,"difficulty_score":23,"last_commit_at":49,"category_tags":50,"status":16},2268,"ML-For-Beginners","microsoft\u002FML-For-Beginners","ML-For-Beginners 是由微软推出的一套系统化机器学习入门课程，旨在帮助零基础用户轻松掌握经典机器学习知识。这套课程将学习路径规划为 12 周，包含 26 节精炼课程和 52 道配套测验，内容涵盖从基础概念到实际应用的完整流程，有效解决了初学者面对庞大知识体系时无从下手、缺乏结构化指导的痛点。\n\n无论是希望转型的开发者、需要补充算法背景的研究人员，还是对人工智能充满好奇的普通爱好者，都能从中受益。课程不仅提供了清晰的理论讲解，还强调动手实践，让用户在循序渐进中建立扎实的技能基础。其独特的亮点在于强大的多语言支持，通过自动化机制提供了包括简体中文在内的 50 多种语言版本，极大地降低了全球不同背景用户的学习门槛。此外，项目采用开源协作模式，社区活跃且内容持续更新，确保学习者能获取前沿且准确的技术资讯。如果你正寻找一条清晰、友好且专业的机器学习入门之路，ML-For-Beginners 将是理想的起点。",84991,"2026-04-05T10:45:23",[14,51,52,53,15,54,26,13,55],"数据工具","视频","插件","其他","音频",{"id":57,"name":58,"github_repo":59,"description_zh":60,"stars":61,"difficulty_score":10,"last_commit_at":62,"category_tags":63,"status":16},3128,"ragflow","infiniflow\u002Fragflow","RAGFlow 
是一款领先的开源检索增强生成（RAG）引擎，旨在为大语言模型构建更精准、可靠的上下文层。它巧妙地将前沿的 RAG 技术与智能体（Agent）能力相结合，不仅支持从各类文档中高效提取知识，还能让模型基于这些知识进行逻辑推理和任务执行。\n\n在大模型应用中，幻觉问题和知识滞后是常见痛点。RAGFlow 通过深度解析复杂文档结构（如表格、图表及混合排版），显著提升了信息检索的准确度，从而有效减少模型“胡编乱造”的现象，确保回答既有据可依又具备时效性。其内置的智能体机制更进一步，使系统不仅能回答问题，还能自主规划步骤解决复杂问题。\n\n这款工具特别适合开发者、企业技术团队以及 AI 研究人员使用。无论是希望快速搭建私有知识库问答系统，还是致力于探索大模型在垂直领域落地的创新者，都能从中受益。RAGFlow 提供了可视化的工作流编排界面和灵活的 API 接口，既降低了非算法背景用户的上手门槛，也满足了专业开发者对系统深度定制的需求。作为基于 Apache 2.0 协议开源的项目，它正成为连接通用大模型与行业专有知识之间的重要桥梁。",77062,"2026-04-04T04:44:48",[15,14,13,26,54],{"id":65,"github_repo":66,"name":67,"description_en":68,"description_zh":69,"ai_summary_zh":69,"readme_en":70,"readme_zh":71,"quickstart_zh":72,"use_case_zh":73,"hero_image_url":74,"owner_login":75,"owner_name":75,"owner_avatar_url":76,"owner_bio":77,"owner_company":78,"owner_location":78,"owner_email":79,"owner_twitter":80,"owner_website":81,"owner_url":82,"languages":83,"stars":96,"forks":97,"last_commit_at":98,"license":99,"difficulty_score":23,"env_os":100,"env_gpu":101,"env_ram":100,"env_deps":102,"category_tags":106,"github_topics":107,"view_count":116,"oss_zip_url":78,"oss_zip_packed_at":78,"status":16,"created_at":117,"updated_at":118,"faqs":119,"releases":150},535,"QData\u002FTextAttack","TextAttack","TextAttack 🐙  is a Python framework for adversarial attacks, data augmentation, and model training in NLP https:\u002F\u002Ftextattack.readthedocs.io\u002Fen\u002Fmaster\u002F","TextAttack 是一款专为自然语言处理（NLP）设计的开源 Python 框架，致力于提升模型的安全性与鲁棒性。TextAttack 核心解决了 NLP 模型容易受到对抗样本攻击的问题，通过自动生成对抗示例，帮助用户深入理解模型的脆弱点。同时，TextAttack 还能进行数据增强和模型训练，有效提高下游任务的泛化能力。\n\nTextAttack 非常适合 NLP 领域的研究人员和开发者。无论是想评估现有模型的安全性，还是希望开发新的对抗攻击算法，TextAttack 都提供了丰富的组件库和预置配方（如 TextFooler、DeepWordBug）。其独特的模块化设计让自定义转换和约束变得简单，支持通过一行命令即可启动攻击或训练任务。此外，TextAttack 还兼容多 GPU 并行计算，显著提升了大规模实验的效率。对于需要构建更健壮 NLP 系统的团队来说，TextAttack 是一个值得依赖的重要资源。","\u003Ch1 align=\"center\">TextAttack 🐙\u003C\u002Fh1>\n\n\u003Cp align=\"center\">Generating adversarial examples for NLP models\u003C\u002Fp>\n\n\u003Cp 
align=\"center\">\n  \u003Ca href=\"https:\u002F\u002Ftextattack.readthedocs.io\u002F\">[TextAttack Documentation on ReadTheDocs]\u003C\u002Fa>\n  \u003Cbr> \u003Cbr>\n  \u003Ca href=\"#about\">About\u003C\u002Fa> •\n  \u003Ca href=\"#setup\">Setup\u003C\u002Fa> •\n  \u003Ca href=\"#usage\">Usage\u003C\u002Fa> •\n  \u003Ca href=\"#design\">Design\u003C\u002Fa>\n  \u003Cbr> \u003Cbr>\n  \u003Ca target=\"_blank\">\n    \u003Cimg src=\"https:\u002F\u002Fgithub.com\u002FQData\u002FTextAttack\u002Fworkflows\u002FGithub%20PyTest\u002Fbadge.svg\" alt=\"Github Runner Coverage Status\">\n  \u003C\u002Fa>\n  \u003Ca href=\"https:\u002F\u002Fbadge.fury.io\u002Fpy\u002Ftextattack\">\n    \u003Cimg src=\"https:\u002F\u002Fbadge.fury.io\u002Fpy\u002Ftextattack.svg\" alt=\"PyPI version\" height=\"18\">\n  \u003C\u002Fa>\n\u003C\u002Fp>\n\n\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FQData_TextAttack_readme_2a7d07e5becb.gif\" alt=\"TextAttack Demo GIF\" style=\"display: block; margin: 0 auto;\" \u002F>\n\n## About\n\nTextAttack is a Python framework for adversarial attacks, data augmentation, and model training in NLP.\n\n> If you're looking for information about TextAttack's menagerie of pre-trained models, you might want the [TextAttack Model Zoo](https:\u002F\u002Ftextattack.readthedocs.io\u002Fen\u002Flatest\u002F3recipes\u002Fmodels.html) page.\n\n## Slack Channel\n\nFor help and real-time updates related to TextAttack, please [join the TextAttack Slack](https:\u002F\u002Fjoin.slack.com\u002Ft\u002Ftextattack\u002Fshared_invite\u002Fzt-huomtd9z-KqdHBPPu2rOP~Z8q3~urgg)!\n\n### _Why TextAttack?_\n\nThere are lots of reasons to use TextAttack:\n\n1. **Understand NLP models better** by running different adversarial attacks on them and examining the output\n2. **Research and develop different NLP adversarial attacks** using the TextAttack framework and library of components\n3. 
**Augment your dataset** to increase model generalization and robustness downstream\n4. **Train NLP models** using just a single command (all downloads included!)\n\n## Setup\n\n### Installation\n\nYou should be running Python 3.6+ to use this package. A CUDA-compatible GPU is optional but will greatly improve code speed. TextAttack is available through pip:\n\n```bash\npip install textattack\n```\n\nOnce TextAttack is installed, you can run it from the command line (`textattack ...`)\nor as a Python module (`python -m textattack ...`).\n\n> **Tip**: TextAttack downloads files to `~\u002F.cache\u002Ftextattack\u002F` by default. This includes pretrained models,\n> dataset samples, and the configuration file `config.yaml`. To change the cache path, set the\n> environment variable `TA_CACHE_DIR` (for example: `TA_CACHE_DIR=\u002Ftmp\u002F textattack attack ...`).\n\n## Usage\n\n### Help: `textattack --help`\n\nTextAttack's main features can all be accessed via the `textattack` command. Two very\ncommon commands are `textattack attack \u003Cargs>` and `textattack augment \u003Cargs>`. You can see more\ninformation about all commands using\n\n```bash\ntextattack --help\n```\n\nor a specific command using, for example,\n\n```bash\ntextattack attack --help\n```\n\nThe [`examples\u002F`](examples\u002F) folder includes scripts showing common TextAttack usage for training models, running attacks, and augmenting a CSV file.\n\nThe [documentation website](https:\u002F\u002Ftextattack.readthedocs.io\u002Fen\u002Flatest) contains walkthroughs explaining basic usage of TextAttack, including building a custom transformation and a custom constraint.\n\n### Running Attacks: `textattack attack --help`\n\nThe easiest way to try out an attack is via the command-line interface, `textattack attack`.\n\n> **Tip:** If your machine has multiple GPUs, you can distribute the attack across them using the `--parallel` option. For some attacks, this can substantially improve performance. 
(If you want to attack Keras models in parallel, please check out `examples\u002Fattack\u002Fattack_keras_parallel.py` instead)\n\nHere are some concrete examples:\n\n_TextFooler on BERT trained on the MR sentiment classification dataset_:\n\n```bash\ntextattack attack --recipe textfooler --model bert-base-uncased-mr --num-examples 100\n```\n\n_DeepWordBug on DistilBERT trained on the Quora Question Pairs paraphrase identification dataset_:\n\n```bash\ntextattack attack --model distilbert-base-uncased-cola --recipe deepwordbug --num-examples 100\n```\n\n_Beam search with beam width 4 and word embedding transformation and untargeted goal function on an LSTM_:\n\n```bash\ntextattack attack --model lstm-mr --num-examples 20 \\\n --search-method beam-search^beam_width=4 --transformation word-swap-embedding \\\n --constraints repeat stopword max-words-perturbed^max_num_words=2 embedding^min_cos_sim=0.8 part-of-speech \\\n --goal-function untargeted-classification\n```\n\n> **Tip:** Instead of specifying a dataset and number of examples, you can pass `--interactive` to attack samples inputted by the user.\n\n### Attacks and Papers Implemented (\"Attack Recipes\"): `textattack attack --recipe [recipe_name]`\n\nWe include attack recipes which implement attacks from the literature. 
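Conceptually, each recipe combines a goal function, constraints, a transformation, and a search method, and many recipes use the Greedy-WIR (word importance ranking) search. The following is a minimal, library-free sketch of that loop, with a toy scoring function and synonym table standing in for a real victim model and transformation (an illustration of the idea, not TextAttack's implementation):

```python
# Toy sketch of greedy word-importance-ranking (Greedy-WIR) search.
# The scoring function and synonym table are made up for illustration;
# TextAttack's real components are far more sophisticated.

TOY_SYNONYMS = {"good": ["fine", "nice"], "movie": ["film", "picture"]}

def toy_score(words):
    """Stand-in for a model's confidence in the original label:
    here, just how many 'positive' trigger words remain."""
    return sum(w in ("good", "great") for w in words)

def greedy_wir_attack(words):
    # 1. Rank words by importance: the score drop when the word is removed.
    base = toy_score(words)
    importance = []
    for i in range(len(words)):
        reduced = words[:i] + words[i + 1:]
        importance.append((base - toy_score(reduced), i))
    importance.sort(reverse=True)

    # 2. Visit words in importance order; greedily keep the swap
    #    that lowers the score the most.
    words = list(words)
    for _, i in importance:
        best, best_score = words[i], toy_score(words)
        for candidate in TOY_SYNONYMS.get(words[i], []):
            trial = words[:i] + [candidate] + words[i + 1:]
            if toy_score(trial) < best_score:
                best, best_score = candidate, toy_score(trial)
        words[i] = best
        if toy_score(words) == 0:  # goal reached: "label" flipped
            break
    return words

print(greedy_wir_attack(["a", "good", "movie"]))  # → ['a', 'fine', 'movie']
```

A real recipe would replace `toy_score` with the model's confidence in the original label, and `TOY_SYNONYMS` with a transformation such as a counter-fitted embedding swap, filtered by the recipe's constraints.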
You can list attack recipes using `textattack list attack-recipes`.\n\nTo run an attack recipe: `textattack attack --recipe [recipe_name]`\n\n\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FQData_TextAttack_readme_88c4147b9029.png\" alt=\"TextAttack Overview\" style=\"display: block; margin: 0 auto;\" \u002F>\n\n\u003Ctable style=\"width:100%\" border=\"1\">\n\u003Cthead>\n\u003Ctr class=\"header\">\n\u003Cth>\u003Cstrong>Attack Recipe Name\u003C\u002Fstrong>\u003C\u002Fth>\n\u003Cth>\u003Cstrong>Goal Function\u003C\u002Fstrong>\u003C\u002Fth>\n\u003Cth>\u003Cstrong>Constraints Enforced\u003C\u002Fstrong>\u003C\u002Fth>\n\u003Cth>\u003Cstrong>Transformation\u003C\u002Fstrong>\u003C\u002Fth>\n\u003Cth>\u003Cstrong>Search Method\u003C\u002Fstrong>\u003C\u002Fth>\n\u003Cth>\u003Cstrong>Main Idea\u003C\u002Fstrong>\u003C\u002Fth>\n\u003C\u002Ftr>\n\u003C\u002Fthead>\n\u003Ctbody>\n  \u003Ctr>\u003Ctd style=\"text-align: center;\" colspan=\"6\">\u003Cstrong>\u003Cbr>Attacks on classification tasks, like sentiment classification and entailment:\u003Cbr>\u003C\u002Fstrong>\u003C\u002Ftd>\u003C\u002Ftr>\n\n\u003Ctr>\n\u003Ctd>\u003Ccode>a2t\u003C\u002Fcode>\n\u003Cspan class=\"citation\" data-cites=\"yoo2021a2t\">\u003C\u002Fspan>\u003C\u002Ftd>\n\u003Ctd>\u003Csub>Untargeted {Classification, Entailment}\u003C\u002Fsub>\u003C\u002Ftd>\n\u003Ctd>\u003Csub>Percentage of words perturbed, Word embedding distance, DistilBERT sentence encoding cosine similarity, part-of-speech consistency\u003C\u002Fsub>\u003C\u002Ftd>\n\u003Ctd>\u003Csub>Counter-fitted word embedding swap (or) BERT Masked Token Prediction\u003C\u002Fsub>\u003C\u002Ftd>\n\u003Ctd>\u003Csub>Greedy-WIR (gradient)\u003C\u002Fsub>\u003C\u002Ftd>\n\u003Ctd >\u003Csub>from ([\"Towards Improving Adversarial Training of NLP Models\" (Yoo et al., 
2021)](https:\u002F\u002Farxiv.org\u002Fabs\u002F2109.00544))\u003C\u002Fsub>\u003C\u002Ftd>\n\u003C\u002Ftr>\n\u003Ctr>\n\u003Ctd>\u003Ccode>alzantot\u003C\u002Fcode>  \u003Cspan class=\"citation\" data-cites=\"Alzantot2018GeneratingNL Jia2019CertifiedRT\">\u003C\u002Fspan>\u003C\u002Ftd>\n\u003Ctd>\u003Csub>Untargeted {Classification, Entailment}\u003C\u002Fsub>\u003C\u002Ftd>\n\u003Ctd>\u003Csub>Percentage of words perturbed, Language Model perplexity, Word embedding distance\u003C\u002Fsub>\u003C\u002Ftd>\n\u003Ctd>\u003Csub>Counter-fitted word embedding swap\u003C\u002Fsub>\u003C\u002Ftd>\n\u003Ctd>\u003Csub>Genetic Algorithm\u003C\u002Fsub>\u003C\u002Ftd>\n\u003Ctd >\u003Csub>from ([\"Generating Natural Language Adversarial Examples\" (Alzantot et al., 2018)](https:\u002F\u002Farxiv.org\u002Fabs\u002F1804.07998))\u003C\u002Fsub>\u003C\u002Ftd>\n\u003C\u002Ftr>\n\u003Ctr>\n\u003Ctd>\u003Ccode>bae\u003C\u002Fcode> \u003Cspan class=\"citation\" data-cites=\"garg2020bae\">\u003C\u002Fspan>\u003C\u002Ftd>\n\u003Ctd>\u003Csub>Untargeted Classification\u003C\u002Fsub>\u003C\u002Ftd>\n\u003Ctd>\u003Csub>USE sentence encoding cosine similarity\u003C\u002Fsub>\u003C\u002Ftd>\n\u003Ctd>\u003Csub>BERT Masked Token Prediction\u003C\u002Fsub>\u003C\u002Ftd>\n\u003Ctd>\u003Csub>Greedy-WIR\u003C\u002Fsub>\u003C\u002Ftd>\n\u003Ctd >\u003Csub>BERT masked language model transformation attack from ([\"BAE: BERT-based Adversarial Examples for Text Classification\" (Garg & Ramakrishnan, 2019)](https:\u002F\u002Farxiv.org\u002Fabs\u002F2004.01970)). 
\u003C\u002Fsub>\u003C\u002Ftd>\n\u003C\u002Ftr>\n\u003Ctr>\n\u003Ctd>\u003Ccode>bert-attack\u003C\u002Fcode> \u003Cspan class=\"citation\" data-cites=\"li2020bertattack\">\u003C\u002Fspan>\u003C\u002Ftd>\n\u003Ctd>\u003Csub>Untargeted Classification\u003C\u002Fsub>\u003C\u002Ftd>\n\u003Ctd>\u003Csub>USE sentence encoding cosine similarity, Maximum number of words perturbed\u003C\u002Fsub>\u003C\u002Ftd>\n\u003Ctd>\u003Csub>BERT Masked Token Prediction (with subword expansion)\u003C\u002Fsub>\u003C\u002Ftd>\n\u003Ctd>\u003Csub>Greedy-WIR\u003C\u002Fsub>\u003C\u002Ftd>\n\u003Ctd >\u003Csub> ([\"BERT-ATTACK: Adversarial Attack Against BERT Using BERT\" (Li et al., 2020)](https:\u002F\u002Farxiv.org\u002Fabs\u002F2004.09984))\u003C\u002Fsub>\u003C\u002Ftd>\n\u003C\u002Ftr>\n\u003Ctr>\n\u003Ctd>\u003Ccode>checklist\u003C\u002Fcode> \u003Cspan class=\"citation\" data-cites=\"Gao2018BlackBoxGO\">\u003C\u002Fspan>\u003C\u002Ftd>\n\u003Ctd>\u003Csub>{Untargeted, Targeted} Classification\u003C\u002Fsub>\u003C\u002Ftd>\n\u003Ctd>\u003Csub>checklist distance\u003C\u002Fsub>\u003C\u002Ftd>\n\u003Ctd>\u003Csub>contract, extend, and substitute named entities\u003C\u002Fsub>\u003C\u002Ftd>\n\u003Ctd>\u003Csub>Greedy-WIR\u003C\u002Fsub>\u003C\u002Ftd>\n\u003Ctd >\u003Csub>Invariance testing implemented in CheckList. 
([\"Beyond Accuracy: Behavioral Testing of NLP models with CheckList\" (Ribeiro et al., 2020)](https:\u002F\u002Farxiv.org\u002Fabs\u002F2005.04118))\u003C\u002Fsub>\u003C\u002Ftd>\n\u003C\u002Ftr>\n\u003Ctr>\n\u003Ctd> \u003Ccode>clare\u003C\u002Fcode> \u003Cspan class=\"citation\" data-cites=\"Alzantot2018GeneratingNL Jia2019CertifiedRT\">\u003C\u002Fspan>\u003C\u002Ftd>\n\u003Ctd>\u003Csub>Untargeted {Classification, Entailment}\u003C\u002Fsub>\u003C\u002Ftd>\n\u003Ctd>\u003Csub>USE sentence encoding cosine similarity\u003C\u002Fsub>\u003C\u002Ftd>\n\u003Ctd>\u003Csub>RoBERTa Masked Prediction for token swap, insert and merge\u003C\u002Fsub>\u003C\u002Ftd>\n\u003Ctd>\u003Csub>Greedy\u003C\u002Fsub>\u003C\u002Ftd>\n\u003Ctd >\u003Csub>[\"Contextualized Perturbation for Textual Adversarial Attack\" (Li et al., 2020)](https:\u002F\u002Farxiv.org\u002Fabs\u002F2009.07502))\u003C\u002Fsub>\u003C\u002Ftd>\n\u003C\u002Ftr>\n\u003Ctr>\n\u003Ctd>\u003Ccode>deepwordbug\u003C\u002Fcode> \u003Cspan class=\"citation\" data-cites=\"Gao2018BlackBoxGO\">\u003C\u002Fspan>\u003C\u002Ftd>\n\u003Ctd>\u003Csub>{Untargeted, Targeted} Classification\u003C\u002Fsub>\u003C\u002Ftd>\n\u003Ctd>\u003Csub>Levenshtein edit distance\u003C\u002Fsub>\u003C\u002Ftd>\n\u003Ctd>\u003Csub>{Character Insertion, Character Deletion, Neighboring Character Swap, Character Substitution}\u003C\u002Fsub>\u003C\u002Ftd>\n\u003Ctd>\u003Csub>Greedy-WIR\u003C\u002Fsub>\u003C\u002Ftd>\n\u003Ctd >\u003Csub>Greedy replace-1 scoring and multi-transformation character-swap attack ([\"Black-box Generation of Adversarial Text Sequences to Evade Deep Learning Classifiers\" (Gao et al., 2018)](https:\u002F\u002Farxiv.org\u002Fabs\u002F1801.04354)\u003C\u002Fsub>\u003C\u002Ftd>\n\u003C\u002Ftr>\n\u003Ctr>\n\u003Ctd> \u003Ccode>faster-alzantot\u003C\u002Fcode> \u003Cspan class=\"citation\" data-cites=\"Alzantot2018GeneratingNL Jia2019CertifiedRT\">\u003C\u002Fspan>\u003C\u002Ftd>\n\u003Ctd>\u003Csub>Untargeted 
{Classification, Entailment}\u003C\u002Fsub>\u003C\u002Ftd>\n\u003Ctd>\u003Csub>Percentage of words perturbed, Language Model perplexity, Word embedding distance\u003C\u002Fsub>\u003C\u002Ftd>\n\u003Ctd>\u003Csub>Counter-fitted word embedding swap\u003C\u002Fsub>\u003C\u002Ftd>\n\u003Ctd>\u003Csub>Genetic Algorithm\u003C\u002Fsub>\u003C\u002Ftd>\n\u003Ctd >\u003Csub>Modified, faster version of the Alzantot et al. genetic algorithm, from ([\"Certified Robustness to Adversarial Word Substitutions\" (Jia et al., 2019)](https:\u002F\u002Farxiv.org\u002Fabs\u002F1909.00986))\u003C\u002Fsub>\u003C\u002Ftd>\n\u003C\u002Ftr>\n\u003Ctr>\n\u003Ctd>\u003Ccode>hotflip\u003C\u002Fcode> (word swap) \u003Cspan class=\"citation\" data-cites=\"Ebrahimi2017HotFlipWA\">\u003C\u002Fspan>\u003C\u002Ftd>\n\u003Ctd>\u003Csub>Untargeted Classification\u003C\u002Fsub>\u003C\u002Ftd>\n\u003Ctd>\u003Csub>Word Embedding Cosine Similarity, Part-of-speech match, Number of words perturbed\u003C\u002Fsub>\u003C\u002Ftd>\n\u003Ctd>\u003Csub>Gradient-Based Word Swap\u003C\u002Fsub>\u003C\u002Ftd>\n\u003Ctd>\u003Csub>Beam search\u003C\u002Fsub>\u003C\u002Ftd>\n\u003Ctd >\u003Csub> ([\"HotFlip: White-Box Adversarial Examples for Text Classification\" (Ebrahimi et al., 2017)](https:\u002F\u002Farxiv.org\u002Fabs\u002F1712.06751))\u003C\u002Fsub>\u003C\u002Ftd>\n\u003C\u002Ftr>\n\u003Ctr>\n\u003Ctd>\u003Ccode>iga\u003C\u002Fcode> \u003Cspan class=\"citation\" data-cites=\"iga-wang2019natural\">\u003C\u002Fspan>\u003C\u002Ftd>\n\u003Ctd>\u003Csub>Untargeted {Classification, Entailment}\u003C\u002Fsub>\u003C\u002Ftd>\n\u003Ctd>\u003Csub>Percentage of words perturbed, Word embedding distance\u003C\u002Fsub>\u003C\u002Ftd>\n\u003Ctd>\u003Csub>Counter-fitted word embedding swap\u003C\u002Fsub>\u003C\u002Ftd>\n\u003Ctd>\u003Csub>Genetic Algorithm\u003C\u002Fsub>\u003C\u002Ftd>\n\u003Ctd >\u003Csub>Improved genetic algorithm -based word substitution from ([\"Natural Language Adversarial Attacks and Defenses 
in Word Level (Wang et al., 2019)\"](https:\u002F\u002Farxiv.org\u002Fabs\u002F1909.06723)\u003C\u002Fsub>\u003C\u002Ftd>\n\u003C\u002Ftr>\n\u003Ctr>\n\u003Ctd>\u003Ccode>input-reduction\u003C\u002Fcode> \u003Cspan class=\"citation\" data-cites=\"feng2018pathologies\">\u003C\u002Fspan>\u003C\u002Ftd>\n\u003Ctd>\u003Csub>Input Reduction\u003C\u002Fsub>\u003C\u002Ftd>\n\u003Ctd>\u003C\u002Ftd>\n\u003Ctd>\u003Csub>Word deletion\u003C\u002Fsub>\u003C\u002Ftd>\n\u003Ctd>\u003Csub>Greedy-WIR\u003C\u002Fsub>\u003C\u002Ftd>\n\u003Ctd >\u003Csub>Greedy attack with word importance ranking , Reducing the input while maintaining the prediction through word importance ranking ([\"Pathologies of Neural Models Make Interpretation Difficult\" (Feng et al., 2018)](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1804.07781.pdf))\u003C\u002Fsub>\u003C\u002Ftd>\n\u003C\u002Ftr>\n\u003Ctr>\n\u003Ctd>\u003Ccode>kuleshov\u003C\u002Fcode> \u003Cspan class=\"citation\" data-cites=\"Kuleshov2018AdversarialEF\">\u003C\u002Fspan>\u003C\u002Ftd>\n\u003Ctd>\u003Csub>Untargeted Classification\u003C\u002Fsub>\u003C\u002Ftd>\n\u003Ctd>\u003Csub>Thought vector encoding cosine similarity, Language model similarity probability\u003C\u002Fsub>\u003C\u002Ftd>\n\u003Ctd>\u003Csub>Counter-fitted word embedding swap\u003C\u002Fsub>\u003C\u002Ftd>\n\u003Ctd>\u003Csub>Greedy word swap\u003C\u002Fsub>\u003C\u002Ftd>\n\u003Ctd >\u003Csub>([\"Adversarial Examples for Natural Language Classification Problems\" (Kuleshov et al., 2018)](https:\u002F\u002Fopenreview.net\u002Fpdf?id=r1QZ3zbAZ)) \u003C\u002Fsub>\u003C\u002Ftd>\n\u003C\u002Ftr>\n\u003Ctr>\n\u003Ctd>\u003Ccode>pruthi\u003C\u002Fcode> \u003Cspan class=\"citation\" data-cites=\"pruthi2019combating\">\u003C\u002Fspan>\u003C\u002Ftd>\n\u003Ctd>\u003Csub>Untargeted Classification\u003C\u002Fsub>\u003C\u002Ftd>\n\u003Ctd>\u003Csub>Minimum word length, Maximum number of words perturbed\u003C\u002Fsub>\u003C\u002Ftd>\n\u003Ctd>\u003Csub>{Neighboring Character Swap, 
Character Deletion, Character Insertion, Keyboard-Based Character Swap}\u003C\u002Fsub>\u003C\u002Ftd>\n\u003Ctd>\u003Csub>Greedy search\u003C\u002Fsub>\u003C\u002Ftd>\n\u003Ctd >\u003Csub>simulates common typos ([\"Combating Adversarial Misspellings with Robust Word Recognition\" (Pruthi et al., 2019)](https:\u002F\u002Farxiv.org\u002Fabs\u002F1905.11268) \u003C\u002Fsub>\u003C\u002Ftd>\n\u003C\u002Ftr>\n\u003Ctr>\n\u003Ctd>\u003Ccode>pso\u003C\u002Fcode> \u003Cspan class=\"citation\" data-cites=\"pso-zang-etal-2020-word\">\u003C\u002Fspan>\u003C\u002Ftd>\n\u003Ctd>\u003Csub>Untargeted Classification\u003C\u002Fsub>\u003C\u002Ftd>\n\u003Ctd>\u003C\u002Ftd>\n\u003Ctd>\u003Csub>HowNet Word Swap\u003C\u002Fsub>\u003C\u002Ftd>\n\u003Ctd>\u003Csub>Particle Swarm Optimization\u003C\u002Fsub>\u003C\u002Ftd>\n\u003Ctd >\u003Csub>([\"Word-level Textual Adversarial Attacking as Combinatorial Optimization\" (Zang et al., 2020)](https:\u002F\u002Fwww.aclweb.org\u002Fanthology\u002F2020.acl-main.540\u002F)) \u003C\u002Fsub>\u003C\u002Ftd>\n\u003C\u002Ftr>\n\u003Ctr>\n\u003Ctd>\u003Ccode>pwws\u003C\u002Fcode> \u003Cspan class=\"citation\" data-cites=\"pwws-ren-etal-2019-generating\">\u003C\u002Fspan>\u003C\u002Ftd>\n\u003Ctd>\u003Csub>Untargeted Classification\u003C\u002Fsub>\u003C\u002Ftd>\n\u003Ctd>\u003C\u002Ftd>\n\u003Ctd>\u003Csub>WordNet-based synonym swap\u003C\u002Fsub>\u003C\u002Ftd>\n\u003Ctd>\u003Csub>Greedy-WIR (saliency)\u003C\u002Fsub>\u003C\u002Ftd>\n\u003Ctd >\u003Csub>Greedy attack with word importance ranking based on word saliency and synonym swap scores ([\"Generating Natural Language Adversarial Examples through Probability Weighted Word Saliency\" (Ren et al., 2019)](https:\u002F\u002Fwww.aclweb.org\u002Fanthology\u002FP19-1103\u002F))\u003C\u002Fsub> \u003C\u002Ftd>\n\u003C\u002Ftr>\n\u003Ctr>\n\u003Ctd>\u003Ccode>textbugger\u003C\u002Fcode> : (black-box) \u003Cspan class=\"citation\" 
data-cites=\"Li2019TextBuggerGA\">\u003C\u002Fspan>\u003C\u002Ftd>\n\u003Ctd>\u003Csub>Untargeted Classification\u003C\u002Fsub>\u003C\u002Ftd>\n\u003Ctd>\u003Csub>USE sentence encoding cosine similarity\u003C\u002Fsub>\u003C\u002Ftd>\n\u003Ctd>\u003Csub>{Character Insertion, Character Deletion, Neighboring Character Swap, Character Substitution}\u003C\u002Fsub>\u003C\u002Ftd>\n\u003Ctd>\u003Csub>Greedy-WIR\u003C\u002Fsub>\u003C\u002Ftd>\n\u003Ctd >\u003Csub>([\"TextBugger: Generating Adversarial Text Against Real-world Applications\" (Li et al., 2018)](https:\u002F\u002Farxiv.org\u002Fabs\u002F1812.05271)).\u003C\u002Fsub>\u003C\u002Ftd>\n\u003C\u002Ftr>\n\u003Ctr>\n\u003Ctd>\u003Ccode>textfooler\u003C\u002Fcode> \u003Cspan class=\"citation\" data-cites=\"Jin2019TextFooler\">\u003C\u002Fspan>\u003C\u002Ftd>\n\u003Ctd>\u003Csub>Untargeted {Classification, Entailment}\u003C\u002Fsub>\u003C\u002Ftd>\n\u003Ctd>\u003Csub>Word Embedding Distance, Part-of-speech match, USE sentence encoding cosine similarity\u003C\u002Fsub>\u003C\u002Ftd>\n\u003Ctd>\u003Csub>Counter-fitted word embedding swap\u003C\u002Fsub>\u003C\u002Ftd>\n\u003Ctd>\u003Csub>Greedy-WIR\u003C\u002Fsub>\u003C\u002Ftd>\n\u003Ctd >\u003Csub>Greedy attack with word importance ranking ([\"Is BERT Really Robust?\" (Jin et al., 2019)](https:\u002F\u002Farxiv.org\u002Fabs\u002F1907.11932))\u003C\u002Fsub> \u003C\u002Ftd>\n\u003C\u002Ftr>\n\n\u003Ctr>\u003Ctd style=\"text-align: center;\" colspan=\"6\">\u003Cstrong>\u003Cbr>Attacks on sequence-to-sequence models: \u003Cbr>\u003C\u002Fstrong>\u003C\u002Ftd>\u003C\u002Ftr>\n\n\u003Ctr>\n\u003Ctd>\u003Ccode>morpheus\u003C\u002Fcode> \u003Cspan class=\"citation\" data-cites=\"morpheus-tan-etal-2020-morphin\">\u003C\u002Fspan>\u003C\u002Ftd>\n\u003Ctd>\u003Csub>Minimum BLEU Score\u003C\u002Fsub> \u003C\u002Ftd>\n\u003Ctd>\u003C\u002Ftd>\n\u003Ctd>\u003Csub>Inflection Word Swap\u003C\u002Fsub> \u003C\u002Ftd>\n\u003Ctd>\u003Csub>Greedy search\u003C\u002Fsub> 
\u003C\u002Ftd>\n\u003Ctd >\u003Csub>Greedy to replace words with their inflections with the goal of minimizing BLEU score ([\"It’s Morphin’ Time! Combating Linguistic Discrimination with Inflectional Perturbations\"](https:\u002F\u002Fwww.aclweb.org\u002Fanthology\u002F2020.acl-main.263.pdf)\u003C\u002Fsub> \u003C\u002Ftd>\n\u003C\u002Ftr>\n\n\u003Ctr>\n\u003Ctd>\u003Ccode>seq2sick\u003C\u002Fcode> :(black-box) \u003Cspan class=\"citation\" data-cites=\"cheng2018seq2sick\">\u003C\u002Fspan>\u003C\u002Ftd>\n\u003Ctd>\u003Csub>Non-overlapping output\u003C\u002Fsub> \u003C\u002Ftd>\n\u003Ctd>\u003C\u002Ftd>\n\u003Ctd>\u003Csub>Counter-fitted word embedding swap\u003C\u002Fsub> \u003C\u002Ftd>\n\u003Ctd>\u003Csub>Greedy-WIR\u003C\u002Fsub>\u003C\u002Ftd>\n\u003Ctd >\u003Csub>Greedy attack with goal of changing every word in the output translation. Currently implemented as black-box with plans to change to white-box as done in paper ([\"Seq2Sick: Evaluating the Robustness of Sequence-to-Sequence Models with Adversarial Examples\" (Cheng et al., 2018)](https:\u002F\u002Farxiv.org\u002Fabs\u002F1803.01128)) \u003C\u002Fsub>  \u003C\u002Ftd>\n\u003C\u002Ftr>\n\n\u003Ctr>\u003Ctd style=\"text-align: center;\" colspan=\"6\">\u003Cstrong>\u003Cbr>General: \u003Cbr>\u003C\u002Fstrong>\u003C\u002Ftd>\u003C\u002Ftr>\n\n\u003Ctr>\n\u003Ctd>\u003Ccode>bad-characters\u003C\u002Fcode> \u003Cspan class=\"citation\" data-cites=\"\">\u003C\u002Fspan>\u003C\u002Ftd>\n\u003Ctd>\u003Csub>Targeted classification, Strict targeted classification, Named entity recognition, Logit sum, Minimize Bleu score, Maximize Levenshtein score\u003C\u002Fsub> \u003C\u002Ftd>\n\u003Ctd>\u003C\u002Ftd>\n\u003Ctd>\u003Csub>(Homoglyph, Invisible Characters, Reorderings, Deletions) Word Swap\u003C\u002Fsub> \u003C\u002Ftd>\n\u003Ctd>\u003Csub>DifferentialEvolution\u003C\u002Fsub>\u003C\u002Ftd>\n\u003Ctd >\u003Csub> ([\"Bad Characters: Imperceptible NLP Attacks\" (Boucher et al., 
2021)](https:\u002F\u002Farxiv.org\u002Fabs\u002F2106.09898)) \u003C\u002Fsub>  \u003C\u002Ftd>\n\u003C\u002Ftr>\n\n\u003C\u002Ftbody>\n\u003C\u002Ffont>\n\u003C\u002Ftable>\n\n#### Recipe Usage Examples\n\nHere are some examples of testing attacks from the literature from the command-line:\n\n_TextFooler against BERT fine-tuned on SST-2:_\n\n```bash\ntextattack attack --model bert-base-uncased-sst2 --recipe textfooler --num-examples 10\n```\n\n_seq2sick (black-box) against T5 fine-tuned for English-German translation:_\n\n```bash\n textattack attack --model t5-en-de --recipe seq2sick --num-examples 100\n```\n\n### Augmenting Text: `textattack augment`\n\nMany of the components of TextAttack are useful for data augmentation. The `textattack.Augmenter` class\nuses a transformation and a list of constraints to augment data. We also offer built-in recipes\nfor data augmentation:\n\n- `wordnet` augments text by replacing words with WordNet synonyms\n- `embedding` augments text by replacing words with neighbors in the counter-fitted embedding space, with a constraint to ensure their cosine similarity is at least 0.8\n- `charswap` augments text by substituting, deleting, inserting, and swapping adjacent characters\n- `eda` augments text with a combination of word insertions, substitutions and deletions.\n- `checklist` augments text by contraction\u002Fextension and by substituting names, locations, numbers.\n- `clare` augments text by replacing, inserting, and merging with a pre-trained masked language model.\n- `back_trans` augments text by backtranslation approach.\n- `back_transcription` augments text by back transcription approach.\n\n#### Augmentation Command-Line Interface\n\nThe easiest way to use our data augmentation tools is with `textattack augment \u003Cargs>`. `textattack augment`\ntakes an input CSV file and text column to augment, along with the number of words to change per augmentation\nand the number of augmentations per input example. 
It outputs a CSV in the same format with all the augmentation\nexamples corresponding to the proper columns.\n\nFor example, given the following as `examples.csv`:\n\n```csv\n\"text\",label\n\"the rock is destined to be the 21st century's new conan and that he's going to make a splash even greater than arnold schwarzenegger , jean- claud van damme or steven segal.\", 1\n\"the gorgeously elaborate continuation of 'the lord of the rings' trilogy is so huge that a column of words cannot adequately describe co-writer\u002Fdirector peter jackson's expanded vision of j . r . r . tolkien's middle-earth .\", 1\n\"take care of my cat offers a refreshingly different slice of asian cinema .\", 1\n\"a technically well-made suspenser . . . but its abrupt drop in iq points as it races to the finish line proves simply too discouraging to let slide .\", 0\n\"it's a mystery how the movie could be released in this condition .\", 0\n```\n\nThe command\n\n```bash\ntextattack augment --input-csv examples.csv --output-csv output.csv  --input-column text --recipe embedding --pct-words-to-swap .1 --transformations-per-example 2 --exclude-original\n```\n\nwill augment the `text` column by altering 10% of each example's words, generating twice as many augmentations as original inputs, and exclude the original inputs from the\noutput CSV. 
(If `--output-csv` is omitted, the output is saved to `augment.csv` by default.)\n\n> **Tip:** Just as when running attacks interactively, you can also pass `--interactive` to augment samples inputted by the user to quickly try out different augmentation recipes!\n\nAfter augmentation, here are the contents of `output.csv`:\n\n```csv\ntext,label\n\"the rock is destined to be the 21st century's newest conan and that he's gonna to make a splashing even stronger than arnold schwarzenegger , jean- claud van damme or steven segal.\",1\n\"the rock is destined to be the 21tk century's novel conan and that he's going to make a splat even greater than arnold schwarzenegger , jean- claud van damme or stevens segal.\",1\nthe gorgeously elaborate continuation of 'the lord of the rings' trilogy is so huge that a column of expression significant adequately describe co-writer\u002Fdirector pedro jackson's expanded vision of j . rs . r . tolkien's middle-earth .,1\nthe gorgeously elaborate continuation of 'the lordy of the piercings' trilogy is so huge that a column of mots cannot adequately describe co-novelist\u002Fdirector peter jackson's expanded vision of j . r . r . tolkien's middle-earth .,1\ntake care of my cat offerings a pleasantly several slice of asia cinema .,1\ntaking care of my cat offers a pleasantly different slice of asiatic kino .,1\na technically good-made suspenser . . . but its abrupt drop in iq points as it races to the finish bloodline proves straightforward too disheartening to let slide .,0\na technically well-made suspenser . . . 
but its abrupt drop in iq dot as it races to the finish line demonstrates simply too disheartening to leave slide .,0\nit's a enigma how the film wo be releases in this condition .,0\nit's a enigma how the filmmaking wo be publicized in this condition .,0\n```\n\nThe 'embedding' augmentation recipe uses counterfitted embedding nearest-neighbors to augment data.\n\n#### Augmentation Python Interface\n\nIn addition to the command-line interface, you can augment text dynamically by importing the\n`Augmenter` in your own code. All `Augmenter` objects implement `augment` and `augment_many` to generate augmentations\nof a string or a list of strings. Here's an example of how to use the `EmbeddingAugmenter` in a python script:\n\n```python\n>>> from textattack.augmentation import EmbeddingAugmenter\n>>> augmenter = EmbeddingAugmenter()\n>>> s = 'What I cannot create, I do not understand.'\n>>> augmenter.augment(s)\n['What I notable create, I do not understand.', 'What I significant create, I do not understand.', 'What I cannot engender, I do not understand.', 'What I cannot creating, I do not understand.', 'What I cannot creations, I do not understand.', 'What I cannot create, I do not comprehend.', 'What I cannot create, I do not fathom.', 'What I cannot create, I do not understanding.', 'What I cannot create, I do not understands.', 'What I cannot create, I do not understood.', 'What I cannot create, I do not realise.']\n```\n\nYou can also create your own augmenter from scratch by importing transformations\u002Fconstraints from `textattack.transformations` and `textattack.constraints`. 
Here's an example that generates augmentations of a string using `WordSwapRandomCharacterDeletion`:

```python
>>> from textattack.transformations import WordSwapRandomCharacterDeletion
>>> from textattack.transformations import CompositeTransformation
>>> from textattack.augmentation import Augmenter
>>> transformation = CompositeTransformation([WordSwapRandomCharacterDeletion()])
>>> augmenter = Augmenter(transformation=transformation, transformations_per_example=5)
>>> s = 'What I cannot create, I do not understand.'
>>> augmenter.augment(s)
['What I cannot creae, I do not understand.', 'What I cannot creat, I do not understand.', 'What I cannot create, I do not nderstand.', 'What I cannot create, I do nt understand.', 'Wht I cannot create, I do not understand.']
```

#### Prompt Augmentation

In addition to augmenting regular text, you can augment prompts and then generate responses to the augmented prompts using a large language model (LLM). The augmentation is performed using the same `Augmenter` as above.
To generate responses, you can use your own LLM, a HuggingFace LLM, or an OpenAI LLM. Here's an example using a pretrained HuggingFace LLM:

```python
>>> from textattack.augmentation import EmbeddingAugmenter
>>> from transformers import AutoModelForSeq2SeqLM, AutoTokenizer
>>> from textattack.llms import HuggingFaceLLMWrapper
>>> from textattack.prompt_augmentation import PromptAugmentationPipeline
>>> augmenter = EmbeddingAugmenter(transformations_per_example=3)
>>> model = AutoModelForSeq2SeqLM.from_pretrained("google/flan-t5-small")
>>> tokenizer = AutoTokenizer.from_pretrained("google/flan-t5-small")
>>> model_wrapper = HuggingFaceLLMWrapper(model, tokenizer)
>>> pipeline = PromptAugmentationPipeline(augmenter, model_wrapper)
>>> pipeline("Classify the following piece of text as `positive` or `negative`: This movie is great!")
[('Classify the following piece of text as `positive` or `negative`: This film is great!', ['positive']), ('Classify the following piece of text as `positive` or `negative`: This movie is fabulous!', ['positive']), ('Classify the following piece of text as `positive` or `negative`: This movie is wonderful!', ['positive'])]
```

### Training Models: `textattack train`

Our model training code is available via `textattack train` to help you train LSTMs, CNNs, and `transformers` models using TextAttack out-of-the-box.
Datasets are automatically loaded using the `datasets` package.

#### Training Examples

_Train our default LSTM for 50 epochs on the Yelp Polarity dataset:_

```bash
textattack train --model-name-or-path lstm --dataset yelp_polarity --epochs 50 --learning-rate 1e-5
```

_Fine-tune `bert-base` on the `CoLA` dataset for 5 epochs:_

```bash
textattack train --model-name-or-path bert-base-uncased --dataset glue^cola --per-device-train-batch-size 8 --epochs 5
```

### To check datasets: `textattack peek-dataset`

To take a closer look at a dataset, use `textattack peek-dataset`. TextAttack will print some cursory statistics about the inputs and outputs from the dataset. For example,

```bash
textattack peek-dataset --dataset-from-huggingface snli
```

will show information about the SNLI dataset loaded via the `datasets` package.

### To list functional components: `textattack list`

There are lots of pieces in TextAttack, and it can be difficult to keep track of all of them. You can use `textattack list` to list components, for example, pretrained models (`textattack list models`) or available search methods (`textattack list search-methods`).

## Design

### Models

TextAttack is model-agnostic! You can use TextAttack to analyze any model that outputs IDs, tensors, or strings. To help users, TextAttack includes pre-trained models for different common NLP tasks. This makes it easier for users to get started with TextAttack. It also enables a fairer comparison of attacks from the literature.

#### Built-in Models and Datasets

TextAttack also comes with built-in models and datasets. Our command-line interface will automatically match the correct dataset to the correct model.
We include 82 different pre-trained models (as of Oct 2020) for each of the nine [GLUE](https://gluebenchmark.com/) tasks, as well as some common datasets for classification, translation, and summarization.

A list of available pretrained models and their validation accuracies is available at [textattack/models/README.md](textattack/models/README.md). You can also view a full list of provided models & datasets via `textattack attack --help`.

Here's an example of using one of the built-in models (the SST-2 dataset is automatically loaded):

```bash
textattack attack --model roberta-base-sst2 --recipe textfooler --num-examples 10
```

#### HuggingFace support: `transformers` models and `datasets` datasets

We also provide built-in support for [`transformers` pretrained models](https://huggingface.co/models) and datasets from the [`datasets` package](https://github.com/huggingface/datasets)! Here's an example of loading and attacking a pre-trained model and dataset:

```bash
textattack attack --model-from-huggingface distilbert-base-uncased-finetuned-sst-2-english --dataset-from-huggingface glue^sst2 --recipe deepwordbug --num-examples 10
```

You can explore other pre-trained models using the `--model-from-huggingface` argument, or other datasets by changing `--dataset-from-huggingface`.

#### Loading a model or dataset from a file

You can easily try out an attack on a local model or dataset sample. To attack a pre-trained model, create a short file that loads them as variables `model` and `tokenizer`. The `tokenizer` must be able to transform string inputs to lists or tensors of IDs using a method called `encode()`.
The model must take inputs via the `__call__` method.

##### Custom Model from a file

To experiment with a model you've trained, you could create the following file and name it `my_model.py`:

```python
model = load_your_model_with_custom_code() # replace this line with your model loading code
tokenizer = load_your_tokenizer_with_custom_code() # replace this line with your tokenizer loading code
```

Then, run an attack with the argument `--model-from-file my_model.py`. The model and tokenizer will be loaded automatically.

### Custom Datasets

#### Dataset from a file

Loading a dataset from a file is very similar to loading a model from a file. A 'dataset' is any iterable of `(input, output)` pairs. The following example would load a sentiment classification dataset from the file `my_dataset.py`:

```python
dataset = [('Today was....', 1), ('This movie is...', 0), ...]
```

You can then run attacks on samples from this dataset by adding the argument `--dataset-from-file my_dataset.py`.

#### Dataset loading via other mechanisms ([more details here](https://textattack.readthedocs.io/en/latest/api/datasets.html))

```python
import textattack
my_dataset = [("text", label), ...]
new_dataset = textattack.datasets.Dataset(my_dataset)
```

#### Dataset via AttackedText class

To allow for word replacement after a sequence has been tokenized, we include an `AttackedText` object which maintains both a list of tokens and the original text, with punctuation.
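The idea can be sketched with a minimal stand-in (a hypothetical `ToyAttackedText`, not TextAttack's actual class): keep the raw printable string alongside a word list, so a word-level swap can be rendered back into text without losing punctuation.

```python
class ToyAttackedText:
    """Minimal stand-in for the idea behind AttackedText (illustrative only):
    store the raw text plus a parallel word list, so word-level edits can be
    rendered back into printable text with punctuation intact."""

    def __init__(self, text):
        self.text = text
        # crude word split that drops punctuation; the real class is more careful
        self.words = [w.strip(".,!?") for w in text.split() if w.strip(".,!?")]

    def replace_word_at_index(self, i, new_word):
        old = self.words[i]
        # naive: substitutes the first string match, which can misfire if the
        # same word occurs earlier in the text; shown only to convey the idea
        return ToyAttackedText(self.text.replace(old, new_word, 1))

t = ToyAttackedText("Today was great, honestly.")
t2 = t.replace_word_at_index(2, "awful")
print(t2.text)   # Today was awful, honestly.
print(t2.words)  # ['Today', 'was', 'awful', 'honestly']
```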
We use this object in favor of a list of words or just raw text.

### Attacks and how to design a new attack

We formulate an attack as consisting of four components: a **goal function**, which determines whether the attack has succeeded; **constraints**, which define which perturbations are valid; a **transformation**, which generates potential modifications given an input; and a **search method**, which traverses the search space of possible perturbations. The attack attempts to perturb an input text such that the model output fulfills the goal function (i.e., indicating whether the attack is successful) and the perturbation adheres to the set of constraints (e.g., a grammar constraint or a semantic similarity constraint). A search method is used to find a sequence of transformations that produces a successful adversarial example.

This modular design unifies adversarial attack methods into one system, enabling us to easily assemble attacks from the literature while re-using components that are shared across attacks. We provide clean, readable implementations of 16 adversarial attack recipes from the literature (see the table above). For the first time, these attacks can be benchmarked, compared, and analyzed in a standardized setting.

TextAttack is model-agnostic, meaning it can run attacks on models implemented in any deep learning framework. Model objects must be able to take a string (or list of strings) and return an output that can be processed by the goal function. For example, machine translation models take a list of strings as input and produce a list of strings as output. Classification and entailment models return an array of scores.
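The four-component loop can be sketched end-to-end in plain Python. All names below, and the rule-based "model", are illustrative stand-ins, not TextAttack's API:

```python
# Toy sketch of the four attack components against a rule-based "model".
# Hypothetical names throughout; not TextAttack's actual classes.

def model(text):
    # toy sentiment "model": score = fraction of known positive words
    positive = {"great", "good", "wonderful"}
    words = text.split()
    return sum(w in positive for w in words) / len(words)

def goal_function(text, threshold=0.01):
    # attack succeeds once the positive score drops below the threshold
    return model(text) < threshold

def transformation(text):
    # generate candidates by swapping one word for a fixed "synonym"
    swaps = {"great": "fine", "good": "okay", "wonderful": "passable"}
    words = text.split()
    for i, w in enumerate(words):
        if w in swaps:
            yield " ".join(words[:i] + [swaps[w]] + words[i + 1:])

def constraint(original, candidate):
    # e.g. bound the total number of changed words
    diff = sum(a != b for a, b in zip(original.split(), candidate.split()))
    return diff <= 2

def greedy_search(text):
    # repeatedly take the constraint-satisfying candidate with the lowest score
    current = text
    while not goal_function(current):
        candidates = [c for c in transformation(current) if constraint(text, c)]
        better = [c for c in candidates if model(c) < model(current)]
        if not better:
            return None  # search exhausted without success
        current = min(better, key=model)
    return current

print(greedy_search("the movie was great and the acting was good"))
# → "the movie was fine and the acting was okay"
```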
As long as the user's model meets this specification, the model is fit for use with TextAttack.

#### Goal Functions

A `GoalFunction` takes as input an `AttackedText` object, scores it, and determines whether the attack has succeeded, returning a `GoalFunctionResult`.

#### Constraints

A `Constraint` takes as input a current `AttackedText` and a list of transformed `AttackedText`s. For each transformed option, it returns a boolean representing whether the constraint is met.

#### Transformations

A `Transformation` takes as input an `AttackedText` and returns a list of possible transformed `AttackedText`s. For example, a transformation might return all possible synonym replacements.

#### Search Methods

A `SearchMethod` takes as input an initial `GoalFunctionResult` and returns a final `GoalFunctionResult`. The search is given access to the `get_transformations` function, which takes as input an `AttackedText` object and outputs a list of possible transformations filtered by meeting all of the attack's constraints. A search consists of successive calls to `get_transformations` until the search succeeds (determined using `get_goal_results`) or is exhausted.

## On Benchmarking Attacks

- See our analysis paper, "Searching for a Search Method: Benchmarking Search Algorithms for Generating NLP Adversarial Examples", at [EMNLP BlackBoxNLP](https://arxiv.org/abs/2009.06368).

- As we emphasized in the above paper, we don't recommend directly comparing attack recipes out of the box.

- This is because attack recipes in the recent literature use different methods or thresholds to set up their constraints.
Without the constraint space held constant, an increase in attack success rate could come from an improved search or transformation method, or simply from a less restrictive search space.

- Our GitHub repo with benchmarking scripts and results: [TextAttack-Search-Benchmark](https://github.com/QData/TextAttack-Search-Benchmark)

## On the Quality of Generated Adversarial Examples in Natural Language

- Our analysis paper in [EMNLP Findings](https://arxiv.org/abs/2004.14174)
- We analyze the adversarial examples generated by two state-of-the-art synonym substitution attacks. We find that their perturbations often do not preserve semantics, and 38% introduce grammatical errors. Human surveys reveal that to successfully preserve semantics, we need to significantly increase the minimum cosine similarities between the embeddings of swapped words and between the sentence encodings of original and perturbed sentences. With constraints adjusted to better preserve semantics and grammaticality, the attack success rate drops by over 70 percentage points.
- Our GitHub repo with re-evaluation results: [Reevaluating-NLP-Adversarial-Examples](https://github.com/QData/Reevaluating-NLP-Adversarial-Examples)
- As we emphasized in this analysis paper, we recommend that researchers and users be **extremely** mindful of the quality of generated adversarial examples in natural language
- We recommend that the field use human-evaluation-derived thresholds for setting up constraints

## Multi-lingual Support

- See the example code at [https://github.com/QData/TextAttack/blob/master/examples/attack/attack_camembert.py](https://github.com/QData/TextAttack/blob/master/examples/attack/attack_camembert.py) for using our framework to attack French-BERT.

- See the tutorial notebook at 
[https://textattack.readthedocs.io/en/latest/2notebook/Example_4_CamemBERT.html](https://textattack.readthedocs.io/en/latest/2notebook/Example_4_CamemBERT.html) for using our framework to attack French-BERT.

- See [README_ZH.md](https://github.com/QData/TextAttack/blob/master/README_ZH.md) for our README in Chinese

## Contributing to TextAttack

We welcome suggestions and contributions! Submit an issue or pull request and we will do our best to respond in a timely manner. TextAttack is currently in an "alpha" stage in which we are working to improve its capabilities and design.

See [CONTRIBUTING.md](https://github.com/QData/TextAttack/blob/master/CONTRIBUTING.md) for detailed information on contributing.

## Citing TextAttack

If you use TextAttack for your research, please cite [TextAttack: A Framework for Adversarial Attacks, Data Augmentation, and Adversarial Training in NLP](https://arxiv.org/abs/2005.05909).

```bibtex
@inproceedings{morris2020textattack,
  title={TextAttack: A Framework for Adversarial Attacks, Data Augmentation, and Adversarial Training in NLP},
  author={Morris, John and Lifland, Eli and Yoo, Jin Yong and Grigsby, Jake and Jin, Di and Qi, Yanjun},
  booktitle={Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations},
  pages={119--126},
  year={2020}
}
```

---

<h1 align="center">TextAttack 🐙</h1>

<p align="center">Generating adversarial examples for NLP models</p>

<p align="center">
  <a href="https://textattack.readthedocs.io/">[TextAttack documentation on ReadTheDocs]</a>
  <br> <br>
  <a href="#about">About</a> •
  <a href="#setup">Setup</a> •
  <a href="#usage">Usage</a> •
  <a 
href=\"#design\">设计\u003C\u002Fa>\n  \u003Cbr> \u003Cbr>\n  \u003Ca target=\"_blank\">\n    \u003Cimg src=\"https:\u002F\u002Fgithub.com\u002FQData\u002FTextAttack\u002Fworkflows\u002FGithub%20PyTest\u002Fbadge.svg\" alt=\"GitHub Runner 覆盖状态\">\n  \u003C\u002Fa>\n  \u003Ca href=\"https:\u002F\u002Fbadge.fury.io\u002Fpy\u002Ftextattack\">\n    \u003Cimg src=\"https:\u002F\u002Fbadge.fury.io\u002Fpy\u002Ftextattack.svg\" alt=\"PyPI 版本\" height=\"18\">\n  \u003C\u002Fa>\n\u003C\u002Fp>\n\n\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FQData_TextAttack_readme_2a7d07e5becb.gif\" alt=\"TextAttack 演示 GIF\" style=\"display: block; margin: 0 auto;\" \u002F>\n\n## 关于\n\nTextAttack 是一个用于自然语言处理（NLP）中的对抗攻击、数据增强和模型训练的 Python 框架。\n\n> 如果您正在寻找有关 TextAttack 的预训练模型集合的信息，您可能需要查看 [TextAttack 模型库](https:\u002F\u002Ftextattack.readthedocs.io\u002Fen\u002Flatest\u002F3recipes\u002Fmodels.html) 页面。\n\n## Slack 频道\n\n有关 TextAttack 的帮助和实时更新，请 [加入 TextAttack Slack](https:\u002F\u002Fjoin.slack.com\u002Ft\u002Ftextattack\u002Fshared_invite\u002Fzt-huomtd9z-KqdHBPPu2rOP~Z8q3~urgg)!\n\n### _为什么选择 TextAttack？_\n\n使用 TextAttack 有很多理由：\n\n1. **更好地理解 NLP 模型**，通过对其运行不同的对抗攻击并检查结果输出\n2. **研究和开发不同的 NLP 对抗攻击**，使用 TextAttack 框架和组件库\n3. **增强您的数据集**，以提高下游模型的泛化能力和鲁棒性\n4. 
**Train NLP models** with just a single command (all downloads included!)

## Setup

### Installation

You should be running Python 3.6+ to use this package. A CUDA-compatible GPU is optional but will greatly improve code speed. TextAttack is available through pip:

```bash
pip install textattack
```

Once TextAttack is installed, you can run it via the command line (`textattack ...`) or via the Python module (`python -m textattack ...`).

> **Tip**: TextAttack downloads files to `~/.cache/textattack/` by default. This includes pre-trained models, dataset samples, and the configuration file `config.yaml`. To change the cache path, set the environment variable `TA_CACHE_DIR` (for example: `TA_CACHE_DIR=/tmp/ textattack attack ...`).

## Usage

### Help: `textattack --help`

TextAttack's main features can all be accessed via the `textattack` command. Two very common commands are `textattack attack <args>` and `textattack augment <args>`. You can see more information about all commands using

```bash
textattack --help
```

or about a specific command using, for example,

```bash
textattack attack --help
```

The [`examples/`](examples/) folder includes scripts showing common TextAttack usage for training models, running attacks, and augmenting a CSV file.

The [documentation website](https://textattack.readthedocs.io/en/latest) contains walkthroughs explaining basic usage of TextAttack, including building a custom transformation and a custom constraint.

### Running Attacks: `textattack attack --help`

The easiest way to try out an attack is via the command-line interface, `textattack attack`.

> **Tip**: If your machine has multiple GPUs, you can distribute the attack across them using the `--parallel` option. For some attacks, this can really help performance. (If you want to attack Keras models in parallel, please check out `examples/attack/attack_keras_parallel.py` instead.)

Here are some concrete examples:

_TextFooler on BERT trained on the MR sentiment classification dataset:_

```bash
textattack attack --recipe textfooler --model bert-base-uncased-mr --num-examples 100
```

_DeepWordBug on DistilBERT trained on the Quora Question Pairs paraphrase identification dataset:_

```bash
textattack attack --model distilbert-base-uncased-cola --recipe deepwordbug --num-examples 100
```

_Beam search with beam width 4, a word embedding transformation, and an untargeted goal function on an LSTM:_

```bash
textattack attack --model lstm-mr --num-examples 20 \
 --search-method beam-search^beam_width=4 --transformation word-swap-embedding \
 --constraints repeat stopword max-words-perturbed^max_num_words=2 embedding^min_cos_sim=0.8 part-of-speech \
 --goal-function untargeted-classification
```

> **Tip**: Instead of specifying a dataset and number of examples, you can pass `--interactive` to attack samples entered by the user.

### Attacks and Papers Implemented ("Attack Recipes"): `textattack attack --recipe 
[recipe_name]`

We include attack recipes which implement attacks from the literature. You can list attack recipes using `textattack list attack-recipes`.

To run an attack recipe: `textattack attack --recipe [recipe_name]`

<img src="https://oss.gittoolsai.com/images/QData_TextAttack_readme_88c4147b9029.png" alt="TextAttack overview" style="display: block; margin: 0 auto;" />

<table style="width:100%" border="1">
<thead>
<tr class="header">
<th><strong>Attack Recipe Name</strong></th>
<th><strong>Goal Function</strong></th>
<th><strong>Constraints Enforced</strong></th>
<th><strong>Transformation</strong></th>
<th><strong>Search Method</strong></th>
<th><strong>Main Idea</strong></th>
</tr>
</thead>
<tbody>
  <tr><td style="text-align: center;" colspan="6"><strong><br>Attacks on classification tasks, like sentiment classification and entailment:<br></strong></td></tr>

<tr>
<td><code>a2t</code>
<span class="citation" data-cites="yoo2021a2t"></span></td>
<td><sub>Untargeted {classification, entailment}</sub></td>
<td><sub>Percentage of words perturbed, word embedding distance, DistilBERT sentence encoding cosine similarity, part-of-speech consistency</sub></td>
<td><sub>Counter-fitted word embedding swap (or) BERT masked token prediction</sub></td>
<td><sub>Greedy-WIR (gradient)</sub></td>
<td><sub>From (["Towards Improving Adversarial Training of NLP Models" (Yoo et al., 2021)](https://arxiv.org/abs/2109.00544))</sub></td>
</tr>
<tr>
<td><code>alzantot</code> <span class="citation" data-cites="Alzantot2018GeneratingNL Jia2019CertifiedRT"></span></td>
<td><sub>Untargeted 
{classification, entailment}</sub></td>
<td><sub>Percentage of words perturbed, language model perplexity, word embedding distance</sub></td>
<td><sub>Counter-fitted word embedding swap</sub></td>
<td><sub>Genetic algorithm</sub></td>
<td><sub>From (["Generating Natural Language Adversarial Examples" (Alzantot et al., 2018)](https://arxiv.org/abs/1804.07998))</sub></td>
</tr>
<tr>
<td><code>bae</code> <span class="citation" data-cites="garg2020bae"></span></td>
<td><sub>Untargeted classification</sub></td>
<td><sub>USE sentence encoding cosine similarity</sub></td>
<td><sub>BERT masked token prediction</sub></td>
<td><sub>Greedy-WIR</sub></td>
<td><sub>BERT masked language model transformation attack from (["BAE: BERT-based Adversarial Examples for Text Classification" (Garg & Ramakrishnan, 2019)](https://arxiv.org/abs/2004.01970)).</sub> 
</td>
</tr>
<tr>
<td><code>bert-attack</code> <span class="citation" data-cites="li2020bertattack"></span></td>
<td><sub>Untargeted classification</sub></td>
<td><sub>USE sentence encoding cosine similarity, maximum number of words perturbed</sub></td>
<td><sub>BERT masked token prediction (with subword expansion)</sub></td>
<td><sub>Greedy-WIR</sub></td>
<td><sub>(["BERT-ATTACK: Adversarial Attack Against BERT Using BERT" (Li et al., 2020)](https://arxiv.org/abs/2004.09984))</sub></td>
</tr>
<tr>
<td><code>checklist</code> <span class="citation" data-cites="Gao2018BlackBoxGO"></span></td>
<td><sub>{Untargeted, targeted} classification</sub></td>
<td><sub>checklist distance</sub></td>
<td><sub>contract, extend, and substitute name entities</sub></td>
<td><sub>Greedy-WIR</sub></td>
<td><sub>Invariance testing implemented in CheckList. (["Beyond Accuracy: Behavioral Testing of NLP models with CheckList" (Ribeiro et al., 2020)](https://arxiv.org/abs/2005.04118))</sub></td>
</tr>
<tr>
<td><code>clare</code> <span class="citation" data-cites="Alzantot2018GeneratingNL Jia2019CertifiedRT"></span></td>
<td><sub>Untargeted {classification, entailment}</sub></td>
<td><sub>USE sentence encoding cosine similarity</sub></td>
<td><sub>RoBERTa masked prediction for token swap, insert and merge</sub></td>
<td><sub>Greedy</sub></td>
<td><sub>(["Contextualized Perturbation for Textual Adversarial Attack" (Li et al., 2020)](https://arxiv.org/abs/2009.07502))</sub></td>
</tr>
<tr>
<td><code>deepwordbug</code> <span class="citation" 
data-cites=\"Gao2018BlackBoxGO\">\u003C\u002Fspan>\u003C\u002Ftd>\n\u003Ctd>\u003Csub>{无目标，有目标} 分类\u003C\u002Fsub>\u003C\u002Ftd>\n\u003Ctd>\u003Csub>Levenshtein 编辑距离\u003C\u002Fsub>\u003C\u002Ftd>\n\u003Ctd>\u003Csub>{字符插入，字符删除，相邻字符交换，字符替换}\u003C\u002Fsub>\u003C\u002Ftd>\n\u003Ctd>\u003Csub>贪婪-WIR\u003C\u002Fsub>\u003C\u002Ftd>\n\u003Ctd >\u003Csub>贪婪 replace-1 评分和多变换字符交换攻击 ([\"Black-box Generation of Adversarial Text Sequences to Evade Deep Learning Classifiers\" (Gao et al., 2018)](https:\u002F\u002Farxiv.org\u002Fabs\u002F1801.04354)\u003C\u002Fsub>\u003C\u002Ftd>\n\u003C\u002Ftr>\n\u003Ctr>\n\u003Ctd> \u003Ccode>faster-alzantot\u003C\u002Fcode> \u003Cspan class=\"citation\" data-cites=\"Alzantot2018GeneratingNL Jia2019CertifiedRT\">\u003C\u002Fspan>\u003C\u002Ftd>\n\u003Ctd>\u003Csub>无目标 {分类，蕴含}\u003C\u002Fsub>\u003C\u002Ftd>\n\u003Ctd>\u003Csub>扰动单词百分比，语言模型困惑度，词嵌入距离\u003C\u002Fsub>\u003C\u002Ftd>\n\u003Ctd>\u003Csub>Counter-fitted 词嵌入交换\u003C\u002Fsub>\u003C\u002Ftd>\n\u003Ctd>\u003Csub>遗传算法\u003C\u002Fsub>\u003C\u002Ftd>\n\u003Ctd >\u003Csub>修改版、更快的 Alzantot 等人遗传算法版本，源自 ([\"Certified Robustness to Adversarial Word Substitutions\" (Jia et al., 2019)](https:\u002F\u002Farxiv.org\u002Fabs\u002F1909.00986))\u003C\u002Fsub>\u003C\u002Ftd>\n\u003C\u002Ftr>\n\u003Ctr>\n\u003Ctd>\u003Ccode>hotflip\u003C\u002Fcode>（单词交换）\u003Cspan class=\"citation\" data-cites=\"Ebrahimi2017HotFlipWA\">\u003C\u002Fspan>\u003C\u002Ftd>\n\u003Ctd>\u003Csub>无目标分类\u003C\u002Fsub>\u003C\u002Ftd>\n\u003Ctd>\u003Csub>词嵌入余弦相似度，词性匹配，扰动单词数量\u003C\u002Fsub>\u003C\u002Ftd>\n\u003Ctd>\u003Csub>基于梯度的单词交换\u003C\u002Fsub>\u003C\u002Ftd>\n\u003Ctd>\u003Csub>束搜索\u003C\u002Fsub>\u003C\u002Ftd>\n\u003Ctd >\u003Csub> ([\"HotFlip: White-Box Adversarial Examples for Text Classification\" (Ebrahimi et al., 2017)](https:\u002F\u002Farxiv.org\u002Fabs\u002F1712.06751))\u003C\u002Fsub>\u003C\u002Ftd>\n\u003C\u002Ftr>\n\u003Ctr>\n\u003Ctd>\u003Ccode>iga\u003C\u002Fcode> \u003Cspan class=\"citation\" 
data-cites=\"iga-wang2019natural\">\u003C\u002Fspan>\u003C\u002Ftd>\n\u003Ctd>\u003Csub>无目标 {分类，蕴含}\u003C\u002Fsub>\u003C\u002Ftd>\n\u003Ctd>\u003Csub>扰动单词百分比，词嵌入距离\u003C\u002Fsub>\u003C\u002Ftd>\n\u003Ctd>\u003Csub>Counter-fitted 词嵌入交换\u003C\u002Fsub>\u003C\u002Ftd>\n\u003Ctd>\u003Csub>遗传算法\u003C\u002Fsub>\u003C\u002Ftd>\n\u003Ctd >\u003Csub>改进的基于遗传算法的单词替换 源自 ([\"Natural Language Adversarial Attacks and Defenses in Word Level (Wang et al., 2019)\"](https:\u002F\u002Farxiv.org\u002Fabs\u002F1909.06723)\u003C\u002Fsub>\u003C\u002Ftd>\n\u003C\u002Ftr>\n\u003Ctr>\n\u003Ctd>\u003Ccode>input-reduction\u003C\u002Fcode> \u003Cspan class=\"citation\" data-cites=\"feng2018pathologies\">\u003C\u002Fspan>\u003C\u002Ftd>\n\u003Ctd>\u003Csub>输入减少\u003C\u002Fsub>\u003C\u002Ftd>\n\u003Ctd>\u003C\u002Ftd>\n\u003Ctd>\u003Csub>单词删除\u003C\u002Fsub>\u003C\u002Ftd>\n\u003Ctd>\u003Csub>贪婪-WIR\u003C\u002Fsub>\u003C\u002Ftd>\n\u003Ctd >\u003Csub>基于单词重要性排名的贪婪攻击，通过单词重要性排名在保持预测的同时减少输入 ([\"Pathologies of Neural Models Make Interpretation Difficult\" (Feng et al., 2018)](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1804.07781.pdf))\u003C\u002Fsub>\u003C\u002Ftd>\n\u003C\u002Ftr>\n\u003Ctr>\n\u003Ctd>\u003Ccode>kuleshov\u003C\u002Fcode> \u003Cspan class=\"citation\" data-cites=\"Kuleshov2018AdversarialEF\">\u003C\u002Fspan>\u003C\u002Ftd>\n\u003Ctd>\u003Csub>无目标分类\u003C\u002Fsub>\u003C\u002Ftd>\n\u003Ctd>\u003Csub>思维向量编码余弦相似度，语言模型相似概率\u003C\u002Fsub>\u003C\u002Ftd>\n\u003Ctd>\u003Csub>Counter-fitted 词嵌入交换\u003C\u002Fsub>\u003C\u002Ftd>\n\u003Ctd>\u003Csub>贪婪单词交换\u003C\u002Fsub>\u003C\u002Ftd>\n\u003Ctd >\u003Csub>([\"Adversarial Examples for Natural Language Classification Problems\" (Kuleshov et al., 2018)](https:\u002F\u002Fopenreview.net\u002Fpdf?id=r1QZ3zbAZ)) \u003C\u002Fsub>\u003C\u002Ftd>\n\u003C\u002Ftr>\n\u003Ctr>\n\u003Ctd>\u003Ccode>pruthi\u003C\u002Fcode> \u003Cspan class=\"citation\" 
data-cites=\"pruthi2019combating\">\u003C\u002Fspan>\u003C\u002Ftd>\n\u003Ctd>\u003Csub>无目标分类\u003C\u002Fsub>\u003C\u002Ftd>\n\u003Ctd>\u003Csub>最小单词长度，最大扰动单词数\u003C\u002Fsub>\u003C\u002Ftd>\n\u003Ctd>\u003Csub>{相邻字符交换，字符删除，字符插入，基于键盘的字符交换}\u003C\u002Fsub>\u003C\u002Ftd>\n\u003Ctd>\u003Csub>贪婪搜索\u003C\u002Fsub>\u003C\u002Ftd>\n\u003Ctd >\u003Csub>模拟常见拼写错误 ([\"Combating Adversarial Misspellings with Robust Word Recognition\" (Pruthi et al., 2019)](https:\u002F\u002Farxiv.org\u002Fabs\u002F1905.11268) \u003C\u002Fsub>\u003C\u002Ftd>\n\u003C\u002Ftr>\n\u003Ctr>\n\u003Ctd>\u003Ccode>pso\u003C\u002Fcode> \u003Cspan class=\"citation\" data-cites=\"pso-zang-etal-2020-word\">\u003C\u002Fspan>\u003C\u002Ftd>\n\u003Ctd>\u003Csub>无目标分类\u003C\u002Fsub>\u003C\u002Ftd>\n\u003Ctd>\u003C\u002Ftd>\n\u003Ctd>\u003Csub>HowNet 单词交换\u003C\u002Fsub>\u003C\u002Ftd>\n\u003Ctd>\u003Csub>粒子群优化\u003C\u002Fsub>\u003C\u002Ftd>\n\u003Ctd >\u003Csub>([\"Word-level Textual Adversarial Attacking as Combinatorial Optimization\" (Zang et al., 2020)](https:\u002F\u002Fwww.aclweb.org\u002Fanthology\u002F2020.acl-main.540\u002F)) \u003C\u002Fsub>\u003C\u002Ftd>\n\u003C\u002Ftr>\n\u003Ctr>\n\u003Ctd>\u003Ccode>pwws\u003C\u002Fcode> \u003Cspan class=\"citation\" data-cites=\"pwws-ren-etal-2019-generating\">\u003C\u002Fspan>\u003C\u002Ftd>\n\u003Ctd>\u003Csub>无目标分类\u003C\u002Fsub>\u003C\u002Ftd>\n\u003Ctd>\u003C\u002Ftd>\n\u003Ctd>\u003Csub>基于 WordNet 的同义词交换\u003C\u002Fsub>\u003C\u002Ftd>\n\u003Ctd>\u003Csub>贪婪-WIR（显著性）\u003C\u002Fsub>\u003C\u002Ftd>\n\u003Ctd >\u003Csub>基于单词显著性和同义词交换分数的单词重要性排名的贪婪攻击 ([\"Generating Natural Language Adversarial Examples through Probability Weighted Word Saliency\" (Ren et al., 2019)](https:\u002F\u002Fwww.aclweb.org\u002Fanthology\u002FP19-1103\u002F))\u003C\u002Fsub> \u003C\u002Ftd>\n\u003C\u002Ftr>\n\u003Ctr>\n\u003Ctd>\u003Ccode>textbugger\u003C\u002Fcode>：（黑盒）\u003Cspan class=\"citation\" 
data-cites=\"Li2019TextBuggerGA\">\u003C\u002Fspan>\u003C\u002Ftd>\n\u003Ctd>\u003Csub>无目标分类\u003C\u002Fsub>\u003C\u002Ftd>\n\u003Ctd>\u003Csub>USE 句子编码余弦相似度\u003C\u002Fsub>\u003C\u002Ftd>\n\u003Ctd>\u003Csub>{字符插入，字符删除，相邻字符交换，字符替换}\u003C\u002Fsub>\u003C\u002Ftd>\n\u003Ctd>\u003Csub>贪婪-WIR\u003C\u002Fsub>\u003C\u002Ftd>\n\u003Ctd >\u003Csub>([([\"TextBugger: Generating Adversarial Text Against Real-world Applications\" (Li et al., 2018)](https:\u002F\u002Farxiv.org\u002Fabs\u002F1812.05271)).\u003C\u002Fsub>\u003C\u002Ftd>\n\u003C\u002Ftr>\n\u003Ctr>\n\u003Ctd>\u003Ccode>textfooler\u003C\u002Fcode> \u003Cspan class=\"citation\" data-cites=\"Jin2019TextFooler\">\u003C\u002Fspan>\u003C\u002Ftd>\n\u003Ctd>\u003Csub>无目标 {分类，蕴含}\u003C\u002Fsub>\u003C\u002Ftd>\n\u003Ctd>\u003Csub>词嵌入距离，词性匹配，USE 句子编码余弦相似度\u003C\u002Fsub>\u003C\u002Ftd>\n\u003Ctd>\u003Csub>Counter-fitted 词嵌入交换\u003C\u002Fsub>\u003C\u002Ftd>\n\u003Ctd>\u003Csub>贪婪-WIR\u003C\u002Fsub>\u003C\u002Ftd>\n\u003Ctd >\u003Csub>基于单词重要性排名的贪婪攻击 ([\"Is Bert Really Robust?\" (Jin et al., 2019)](https:\u002F\u002Farxiv.org\u002Fabs\u002F1907.11932))\u003C\u002Fsub> \u003C\u002Ftd>\n\u003C\u002Ftr>\n\n\u003Ctr>\u003Ctd style=\"text-align: center;\" colspan=\"6\">\u003Cstrong>\u003Cbr>对序列到序列模型 (sequence-to-sequence models) 的攻击：\u003Cbr>\u003C\u002Fstrong>\u003C\u002Ftd>\u003C\u002Ftr>\n\n\u003Ctr>\n\u003Ctd>\u003Ccode>morpheus\u003C\u002Fcode> \u003Cspan class=\"citation\" data-cites=\"morpheus-tan-etal-2020-morphin\">\u003C\u002Fspan>\u003C\u002Ftd>\n\u003Ctd>\u003Csub>最小 BLEU 分数 (BLEU Score)\u003C\u002Fsub> \u003C\u002Ftd>\n\u003Ctd>\u003C\u002Ftd>\n\u003Ctd>\u003Csub>屈折词替换 (Inflection Word Swap)\u003C\u002Fsub> \u003C\u002Ftd>\n\u003Ctd>\u003Csub>贪婪搜索 (Greedy search)\u003C\u002Fsub> \u003C\u002Ftd>\n\u003Ctd >\u003Csub>贪婪攻击，旨在通过用屈折形式替换单词来最小化 BLEU 分数 ([\"It’s Morphin’ Time! 
Combating Linguistic Discrimination with Inflectional Perturbations\"](https:\u002F\u002Fwww.aclweb.org\u002Fanthology\u002F2020.acl-main.263.pdf))\u003C\u002Fsub> \u003C\u002Ftd>\n\u003C\u002Ftr>\n\n\u003Ctr>\n\u003Ctd>\u003Ccode>seq2sick\u003C\u002Fcode> : (黑盒 (black-box)) \u003Cspan class=\"citation\" data-cites=\"cheng2018seq2sick\">\u003C\u002Fspan>\u003C\u002Ftd>\n\u003Ctd>\u003Csub>非重叠输出 (Non-overlapping output)\u003C\u002Fsub> \u003C\u002Ftd>\n\u003Ctd>\u003C\u002Ftd>\n\u003Ctd>\u003Csub>反拟合词嵌入替换 (Counter-fitted word embedding swap)\u003C\u002Fsub> \u003C\u002Ftd>\n\u003Ctd>\u003Csub>贪婪-WIR (Greedy-WIR)\u003C\u002Fsub>\u003C\u002Ftd>\n\u003Ctd >\u003Csub>贪婪攻击，旨在改变输出翻译中的每个单词。目前实现为黑盒 (black-box)，并计划像论文中那样更改为白盒 (white-box) ([\"Seq2Sick: Evaluating the Robustness of Sequence-to-Sequence Models with Adversarial Examples\" (Cheng et al., 2018)](https:\u002F\u002Farxiv.org\u002Fabs\u002F1803.01128)) \u003C\u002Fsub>  \u003C\u002Ftd>\n\u003C\u002Ftr>\n\n\u003Ctr>\u003Ctd style=\"text-align: center;\" colspan=\"6\">\u003Cstrong>\u003Cbr>通用 (General):\u003Cbr>\u003C\u002Fstrong>\u003C\u002Ftd>\u003C\u002Ftr>\n\n\u003Ctr>\n\u003Ctd>\u003Ccode>bad-characters\u003C\u002Fcode> \u003Cspan class=\"citation\" data-cites=\"\">\u003C\u002Fspan>\u003C\u002Ftd>\n\u003Ctd>\u003Csub>定向分类 (Targeted classification)、严格定向分类 (Strict targeted classification)、命名实体识别 (Named entity recognition)、Logit 求和 (Logit sum)、最小化 BLEU 分数 (Minimize Bleu score)、最大化 Levenshtein 分数 (Maximize Levenshtein score)\u003C\u002Fsub> \u003C\u002Ftd>\n\u003Ctd>\u003C\u002Ftd>\n\u003Ctd>\u003Csub>(同形异义符 (Homoglyph)、不可见字符 (Invisible Characters)、重排 (Reorderings)、删除 (Deletions)) 词替换 (Word Swap)\u003C\u002Fsub> \u003C\u002Ftd>\n\u003Ctd>\u003Csub>差分进化 (DifferentialEvolution)\u003C\u002Fsub>\u003C\u002Ftd>\n\u003Ctd >\u003Csub> ([\"Bad Characters: Imperceptible NLP Attacks\" (Boucher et al., 2021)](https:\u002F\u002Farxiv.org\u002Fabs\u002F2106.09898)) \u003C\u002Fsub>  
\u003C\u002Ftd>\n\u003C\u002Ftr>\n\n\u003C\u002Ftbody>\n\u003C\u002Ffont>\n\u003C\u002Ftable>\n\n#### 配方用法示例\n\n以下是一些来自文献的攻击测试命令行示例：\n\n_TextFooler 针对在 SST-2 上微调的 BERT：_\n\n```bash\ntextattack attack --model bert-base-uncased-sst2 --recipe textfooler --num-examples 10\n```\n\n_seq2sick (黑盒 (black-box)) 针对用于英德翻译微调的 T5：_\n\n```bash\ntextattack attack --model t5-en-de --recipe seq2sick --num-examples 100\n```\n\n\n\n### 文本增强：`textattack augment`\n\nTextAttack 的许多组件都可用于数据增强 (data augmentation)。`textattack.Augmenter` 类\n使用变换和一组约束来增强数据。我们还提供用于数据增强的内置配方 (recipes)：\n\n- `wordnet` 通过将单词替换为 WordNet 同义词来增强文本\n- `embedding` 通过将单词替换为反拟合嵌入空间 (counter-fitted embedding space) 中的邻居来增强文本，并带有约束以确保其余弦相似度 (cosine similarity) 至少为 0.8\n- `charswap` 通过替换、删除、插入和交换相邻字符来增强文本\n- `eda` (Easy Data Augmentation) 通过使用单词插入、替换和删除的组合来增强文本\n- `checklist` 通过收缩\u002F扩展以及替换名称、位置、数字来增强文本\n- `clare` 通过使用预训练的掩码语言模型 (masked language model) 进行替换、插入和合并来增强文本\n- `back_trans` 通过回译 (backtranslation) 方法来增强文本\n- `back_transcription` 通过反向转录 (back transcription) 方法来增强文本\n\n#### 增强命令行界面\n\n使用我们的数据增强工具最简单的方法是使用 `textattack augment \u003Cargs>`。`textattack augment`\n接受一个输入 CSV 文件和要增强的文本列，以及每次增强要更改的单词数和每个输入示例的增强数。它输出一个格式相同的 CSV 文件，其中包含所有对应于正确列的增强示例。\n\n例如，给定以下作为 `examples.csv`：\n\n```csv\n\"text\",label\n\"the rock is destined to be the 21st century's new conan and that he's going to make a splash even greater than arnold schwarzenegger , jean- claud van damme or steven segal.\", 1\n\"the gorgeously elaborate continuation of 'the lord of the rings' trilogy is so huge that a column of words cannot adequately describe co-writer\u002Fdirector peter jackson's expanded vision of j . r . r . tolkien's middle-earth .\", 1\n\"take care of my cat offers a refreshingly different slice of asian cinema .\", 1\n\"a technically well-made suspenser . . . 
but its abrupt drop in iq points as it races to the finish line proves simply too discouraging to let slide .\", 0\n\"it's a mystery how the movie could be released in this condition .\", 0\n```\n\n命令\n\n```bash\ntextattack augment --input-csv examples.csv --output-csv output.csv  --input-column text --recipe embedding --pct-words-to-swap .1 --transformations-per-example 2 --exclude-original\n```\n\n将通过更改每个示例 10% 的单词来增强 `text` 列，生成两倍于原始输入的增强样本，并从输出 CSV 中排除原始输入。(默认情况下，所有这些都将被保存到 `augment.csv`。)\n\n> **提示：** 就像交互式运行攻击一样，您也可以传递 `--interactive` 参数来增强用户输入的样本，以便快速尝试不同的增强配方！\n\n增强后，以下是 `augment.csv` 的内容：\n\n```csv\ntext,label\n\"the rock is destined to be the 21st century's newest conan and that he's gonna to make a splashing even stronger than arnold schwarzenegger , jean- claud van damme or steven segal.\",1\n\"the rock is destined to be the 21tk century's novel conan and that he's going to make a splat even greater than arnold schwarzenegger , jean- claud van damme or stevens segal.\",1\nthe gorgeously elaborate continuation of 'the lord of the rings' trilogy is so huge that a column of expression significant adequately describe co-writer\u002Fdirector pedro jackson's expanded vision of j . rs . r . tolkien's middle-earth .,1\nthe gorgeously elaborate continuation of 'the lordy of the piercings' trilogy is so huge that a column of mots cannot adequately describe co-novelist\u002Fdirector peter jackson's expanded vision of j . r . r . tolkien's middle-earth .,1\ntake care of my cat offerings a pleasantly several slice of asia cinema .,1\ntaking care of my cat offers a pleasantly different slice of asiatic kino .,1\na technically good-made suspenser . . . but its abrupt drop in iq points as it races to the finish bloodline proves straightforward too disheartening to let slide .,0\na technically well-made suspenser . . . 
but its abrupt drop in iq dot as it races to the finish line demonstrates simply too disheartening to leave slide .,0\nit's a enigma how the film wo be releases in this condition .,0\nit's a enigma how the filmmaking wo be publicized in this condition .,0\n```\n\n\"embedding\" 增强配方使用反拟合词嵌入最近邻 (counter-fitted embedding nearest-neighbors) 来增强数据。\n\n#### 增强 Python 接口\n\n除了命令行界面外，你还可以在自己的代码中导入 `Augmenter`（增强器）来动态增强文本。所有的 `Augmenter` 对象都实现了 `augment` 和 `augment_many` 方法，用于生成单个字符串或字符串列表的增强版本。以下是在 Python 脚本中使用 `EmbeddingAugmenter`（嵌入增强器）的示例：\n\n```python\n>>> from textattack.augmentation import EmbeddingAugmenter\n>>> augmenter = EmbeddingAugmenter()\n>>> s = 'What I cannot create, I do not understand.'\n>>> augmenter.augment(s)\n['What I notable create, I do not understand.', 'What I significant create, I do not understand.', 'What I cannot engender, I do not understand.', 'What I cannot creating, I do not understand.', 'What I cannot creations, I do not understand.', 'What I cannot create, I do not comprehend.', 'What I cannot create, I do not fathom.', 'What I cannot create, I do not understanding.', 'What I cannot create, I do not understands.', 'What I cannot create, I do not understood.', 'What I cannot create, I do not realise.']\n```\n\n你也可以通过从 `textattack.transformations` 和 `textattack.constraints` 导入转换\u002F约束条件，从头开始创建自己的增强器。以下是一个使用 `WordSwapRandomCharacterDeletion`（随机字符删除词交换）生成字符串增强版本的示例：\n\n```python\n>>> from textattack.transformations import WordSwapRandomCharacterDeletion\n>>> from textattack.transformations import CompositeTransformation\n>>> from textattack.augmentation import Augmenter\n>>> transformation = CompositeTransformation([WordSwapRandomCharacterDeletion()])\n>>> augmenter = Augmenter(transformation=transformation, transformations_per_example=5)\n>>> s = 'What I cannot create, I do not understand.'\n>>> augmenter.augment(s)\n['What I cannot creae, I do not understand.', 'What I cannot creat, I do not understand.', 'What I cannot create, I do not 
nderstand.', 'What I cannot create, I do nt understand.', 'Wht I cannot create, I do not understand.']\n```\n\n#### 提示增强\n\n除了常规文本的增强外，你还可以增强提示（prompts），然后使用大型语言模型（LLMs，大语言模型）为增强后的提示生成响应。增强操作使用与上述相同的 `Augmenter`。为了生成响应，你可以使用自己的 LLM、HuggingFace LLM 或 OpenAI LLM。以下是使用预训练的 HuggingFace LLM 的示例：\n\n```python\n>>> from textattack.augmentation import EmbeddingAugmenter\n>>> from transformers import AutoModelForSeq2SeqLM, AutoTokenizer\n>>> from textattack.llms import HuggingFaceLLMWrapper\n>>> from textattack.prompt_augmentation import PromptAugmentationPipeline\n>>> augmenter = EmbeddingAugmenter(transformations_per_example=3)\n>>> model = AutoModelForSeq2SeqLM.from_pretrained(\"google\u002Fflan-t5-small\")\n>>> tokenizer = AutoTokenizer.from_pretrained(\"google\u002Fflan-t5-small\")\n>>> model_wrapper = HuggingFaceLLMWrapper(model, tokenizer)\n>>> pipeline = PromptAugmentationPipeline(augmenter, model_wrapper)\n>>> pipeline(\"Classify the following piece of text as `positive` or `negative`: This movie is great!\")\n[('Classify the following piece of text as `positive` or `negative`: This film is great!', ['positive']), ('Classify the following piece of text as `positive` or `negative`: This movie is fabulous!', ['positive']), ('Classify the following piece of text as `positive` or `negative`: This movie is wonderful!', ['positive'])]\n```\n\n### 训练模型：`textattack train`\n\n我们的模型训练代码可通过 `textattack train` 获取，帮助您开箱即用 TextAttack 来训练 LSTMs（长短期记忆网络）、CNNs（卷积神经网络）和 `transformers` 模型。数据集使用 `datasets` 包自动加载。\n\n#### 训练示例\n\n*在 Yelp Polarity 数据集上训练我们默认的 LSTM 50 个 epoch（轮次）：*\n\n```bash\ntextattack train --model-name-or-path lstm --dataset yelp_polarity  --epochs 50 --learning-rate 1e-5\n```\n\n*在 `CoLA` 数据集上微调 `bert-base` 5 个 epoch：*\n\n```bash\ntextattack train --model-name-or-path bert-base-uncased --dataset glue^cola --per-device-train-batch-size 8 --epochs 5\n```\n\n### 查看数据集：`textattack peek-dataset`\n\n要更仔细地查看数据集，请使用 `textattack peek-dataset`。TextAttack 
将打印关于该数据集输入和输出的粗略统计信息。例如，\n\n```bash\ntextattack peek-dataset --dataset-from-huggingface snli\n```\n\n将显示来自 NLP（自然语言处理）包的 SNLI 数据集的信息。\n\n### 列出功能组件：`textattack list`\n\nTextAttack 包含许多组件，很难全部追踪。你可以使用 `textattack list` 来列出组件，例如预训练模型（`textattack list models`）或可用的搜索方法（`textattack list search-methods`）。\n\n## 设计\n\n### 模型\n\nTextAttack 是模型无关 (model-agnostic) 的！您可以使用 `TextAttack` 分析任何输出 ID、张量或字符串的模型。为了帮助用户，`TextAttack` 包含了针对不同常见 NLP（自然语言处理）任务的预训练模型。这使得用户更容易开始使用 `TextAttack`。它还能更公平地比较文献中的攻击方法。\n\n#### 内置模型和数据集\n\n`TextAttack` 还内置了模型和数据集。我们的命令行界面会自动将正确的数据集匹配到正确的模型。我们为九个 [GLUE](https:\u002F\u002Fgluebenchmark.com\u002F) 任务中的每一个都包含了 82 个不同的（2020 年 10 月）预训练模型，以及一些用于分类、翻译和摘要的常见数据集。\n\n可用预训练模型列表及其验证准确率可在 [textattack\u002Fmodels\u002FREADME.md](textattack\u002Fmodels\u002FREADME.md) 查看。您也可以通过 `textattack attack --help` 查看所有提供的模型和数据集列表。\n\n以下是使用其中一个内置模型的示例（SST-2 数据集会自动加载）：\n\n```bash\ntextattack attack --model roberta-base-sst2 --recipe textfooler --num-examples 10\n```\n\n#### HuggingFace 支持：transformers 模型与 datasets 数据集\n\n我们还为 [`transformers` 预训练模型](https:\u002F\u002Fhuggingface.co\u002Fmodels) 和来自 [`datasets` 包](https:\u002F\u002Fgithub.com\u002Fhuggingface\u002Fdatasets) 的数据集提供了内置支持！以下是加载和攻击预训练模型及数据集的示例：\n\n```bash\ntextattack attack --model-from-huggingface distilbert-base-uncased-finetuned-sst-2-english --dataset-from-huggingface glue^sst2 --recipe deepwordbug --num-examples 10\n```\n\n您可以使用 `--model-from-huggingface` 参数探索其他预训练模型，或通过更改 `--dataset-from-huggingface` 来探索其他数据集。\n\n#### 从文件加载模型或数据集\n\n您可以轻松地在本地模型或数据集样本上尝试攻击。要攻击预训练模型，请创建一个短文件，将它们作为变量 `model` 和 `tokenizer`（分词器）加载。`tokenizer` 必须能够使用名为 `encode()` 的方法将字符串输入转换为 ID 列表或张量。模型必须通过 `__call__` 方法接收输入。\n\n##### 从文件自定义模型\n\n若要对您训练的模型进行实验，可以创建以下文件并将其命名为 `my_model.py`：\n\n```python\nmodel = load_your_model_with_custom_code() # replace this line with your model loading code\ntokenizer = load_your_tokenizer_with_custom_code() # replace this line with your tokenizer loading code\n```\n\n然后，使用参数 `--model-from-file my_model.py` 
运行攻击。模型和 `tokenizer` 将自动加载。\n\n### 自定义数据集\n\n#### 从文件加载数据集\n\n从文件加载数据集与从文件加载模型非常相似。“数据集”是任意 `(input, output)` 对的迭代对象。以下示例将从文件 `my_dataset.py` 加载情感分类数据集：\n\n```python\ndataset = [('Today was....', 1), ('This movie is...', 0), ...]\n```\n\n然后，您可以通过添加参数 `--dataset-from-file my_dataset.py` 对该数据集中的样本运行攻击。\n\n#### 通过其他机制加载数据集，详见：[此处更多详情](https:\u002F\u002Ftextattack.readthedocs.io\u002Fen\u002Flatest\u002Fapi\u002Fdatasets.html)\n\n```python\nimport textattack\nmy_dataset = [(\"text\",label),....]\nnew_dataset = textattack.datasets.Dataset(my_dataset)\n```\n\n#### 通过 AttackedText 类使用数据集\n\n为了允许在序列分词后进行单词替换，我们包含了一个 `AttackedText` 对象，它维护令牌列表和原始文本（包括标点符号）。我们优先使用此对象，而不是单词列表或纯文本。\n\n### 攻击及如何设计新攻击\n\n我们将攻击定义为由四个组件组成：**目标函数 (Goal Function)**（确定攻击是否成功）、**约束条件 (Constraints)**（定义哪些扰动有效）、**变换 (Transformation)**（给定输入生成潜在修改）以及**搜索方法 (Search Method)**（遍历可能扰动的搜索空间）。攻击旨在扰动输入文本，使得模型输出满足目标函数（即指示攻击是否成功），且扰动符合一组约束条件（例如，语法约束、语义相似度约束）。搜索方法用于寻找一系列变换，以产生成功的对抗样本 (adversarial example)。\n\n这种模块化设计将对抗攻击方法统一到一个系统中，使我们能够轻松组装文献中的攻击，同时重用跨攻击共享的组件。我们提供了文献中 16 种对抗攻击配方的干净、可读的实现（见上方表格）。首次，这些攻击可以在标准化设置中进行基准测试、比较和分析。\n\n`TextAttack` 是模型无关的——意味着它可以在任何深度学习框架实现的模型上运行攻击。模型对象必须能够接受字符串（或字符串列表）并返回可由目标函数处理的输出。例如，机器翻译模型以字符串列表作为输入，并生成字符串列表作为输出。分类和蕴含模型返回一个分数数组。只要用户的模型符合此规范，该模型就适合与 `TextAttack` 一起使用。\n\n#### 目标函数\n\n`GoalFunction` 接受 `AttackedText` 对象作为输入，对其进行评分，并确定攻击是否成功，返回 `GoalFunctionResult`（目标函数结果）。\n\n#### 约束条件\n\n`Constraint` 接受当前的 `AttackedText` 和转换后的 `AttackedText` 列表作为输入。对于每个转换选项，它返回一个布尔值，表示是否满足约束条件。\n\n#### 变换\n\n`Transformation` 接受 `AttackedText` 作为输入，并返回可能的转换后 `AttackedText` 列表。例如，变换可能会返回所有可能的同义词替换。\n\n#### 搜索方法\n\n`SearchMethod` 接受初始 `GoalFunctionResult` 作为输入，并返回最终的 `GoalFunctionResult`。搜索功能可以访问 `get_transformations` 函数，该函数接受 `AttackedText` 对象作为输入，并输出过滤掉未满足所有攻击约束的可能变换列表。搜索由对 `get_transformations` 的连续调用组成，直到搜索成功（使用 `get_goal_results` 确定）或耗尽。\n\n## 关于基准测试攻击\n\n- 查看我们的分析论文：《Searching for a Search Method: Benchmarking Search Algorithms for Generating NLP Adversarial Examples》，发表于 [EMNLP 
BlackBoxNLP](https:\u002F\u002Farxiv.org\u002Fabs\u002F2009.06368)。\n\n- 正如我们在上述论文中强调的，我们不推荐直接开箱即用（out of the box）地比较攻击配方（Attack Recipes）。\n\n- 这一观点源于近期文献中的攻击配方在设置约束条件（constraints）时使用了不同的方法或阈值。如果约束空间（constraint space）不保持恒定，攻击成功率的提升可能源于搜索（search）或转换方法的改进，或者更宽松的搜索空间（search space）。\n\n- 我们用于基准测试脚本和结果的 GitHub 仓库：[TextAttack-Search-Benchmark Github](https:\u002F\u002Fgithub.com\u002FQData\u002FTextAttack-Search-Benchmark)\n\n## 关于自然语言中生成的对抗样本 (Adversarial Examples) 的质量\n\n- 我们的分析论文发表于 [EMNLP Findings](https:\u002F\u002Farxiv.org\u002Fabs\u002F2004.14174)\n- 我们分析了两种最先进 (State-of-the-art) 的同义词替换攻击生成的对抗样本。我们发现它们的扰动 (perturbations) 通常无法保留语义，且 38% 引入了语法错误。人类调查表明，为了成功保留语义，我们需要显著提高交换词嵌入 (embeddings) 之间以及原始句子与扰动句子的句子编码 (sentence encodings) 之间的最小余弦相似度 (cosine similarities)。当调整约束条件以更好地保留语义和语法正确性 (grammaticality) 时，攻击成功率下降了超过 70 个百分点。\n- 我们用于重新评估结果的 GitHub 仓库：[Reevaluating-NLP-Adversarial-Examples Github](https:\u002F\u002Fgithub.com\u002FQData\u002FReevaluating-NLP-Adversarial-Examples)\n- 正如我们在该分析论文中所强调的，我们建议研究人员和用户格外留意 (EXTREMELY mindful) 自然语言中生成的对抗样本的质量\n- 我们建议该领域使用基于人工评估得出的阈值来设置约束条件\n\n## 多语言支持\n\n- 查看示例代码：[https:\u002F\u002Fgithub.com\u002FQData\u002FTextAttack\u002Fblob\u002Fmaster\u002Fexamples\u002Fattack\u002Fattack_camembert.py](https:\u002F\u002Fgithub.com\u002FQData\u002FTextAttack\u002Fblob\u002Fmaster\u002Fexamples\u002Fattack\u002Fattack_camembert.py) 了解如何使用我们的框架 (framework) 攻击 French-BERT。\n\n- 查看教程笔记本 (notebook)：[https:\u002F\u002Ftextattack.readthedocs.io\u002Fen\u002Flatest\u002F2notebook\u002FExample_4_CamemBERT.html](https:\u002F\u002Ftextattack.readthedocs.io\u002Fen\u002Flatest\u002F2notebook\u002FExample_4_CamemBERT.html) 了解如何使用我们的框架攻击 French-BERT。\n\n- 查看 [README_ZH.md](https:\u002F\u002Fgithub.com\u002FQData\u002FTextAttack\u002Fblob\u002Fmaster\u002FREADME_ZH.md) 获取我们的中文 README\n\n## 为 TextAttack 做贡献\n\n我们欢迎建议和贡献！提交 Issue 或 Pull Request，我们将尽力及时回复。TextAttack 目前处于\"alpha\"阶段，我们正在努力改进其功能和设计。\n\n有关贡献的详细信息，请参见 
[CONTRIBUTING.md](https:\u002F\u002Fgithub.com\u002FQData\u002FTextAttack\u002Fblob\u002Fmaster\u002FCONTRIBUTING.md)。\n\n## 引用 TextAttack\n\n如果您在研究中使用 TextAttack，请引用 [TextAttack: A Framework for Adversarial Attacks, Data Augmentation, and Adversarial Training in NLP](https:\u002F\u002Farxiv.org\u002Fabs\u002F2005.05909)。\n\n```bibtex\n@inproceedings{morris2020textattack,\n  title={TextAttack: A Framework for Adversarial Attacks, Data Augmentation, and Adversarial Training in NLP},\n  author={Morris, John and Lifland, Eli and Yoo, Jin Yong and Grigsby, Jake and Jin, Di and Qi, Yanjun},\n  booktitle={Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations},\n  pages={119--126},\n  year={2020}\n}\n```","# TextAttack 快速上手指南\n\nTextAttack 是一个用于自然语言处理（NLP）模型的对抗攻击、数据增强及模型训练的 Python 框架。\n\n## 环境准备\n\n- **Python 版本**: 3.6 及以上\n- **硬件要求**: 支持 CUDA 的 GPU（可选，但能显著提升运行速度）\n- **依赖管理**: 推荐使用虚拟环境隔离依赖\n\n## 安装步骤\n\n通过 pip 安装 TextAttack：\n\n```bash\npip install textattack\n```\n\n> **提示**: TextAttack 默认将文件下载到 `~\u002F.cache\u002Ftextattack\u002F`（包括预训练模型、数据集样本和配置文件）。如需更改缓存路径，可设置环境变量 `TA_CACHE_DIR`。例如：\n> ```bash\n> TA_CACHE_DIR=\u002Ftmp\u002F textattack attack ...\n> ```\n\n## 基本使用\n\nTextAttack 主要通过命令行接口运行，支持直接攻击或数据增强。\n\n### 查看帮助\n\n查看所有可用命令及参数：\n\n```bash\ntextattack --help\n```\n\n### 运行对抗攻击\n\n以下是两个常见的攻击示例：\n\n**1. 在 BERT 情感分类模型上运行 TextFooler 攻击**\n```bash\ntextattack attack --recipe textfooler --model bert-base-uncased-mr --num-examples 100\n```\n\n**2. 
在 DistilBERT 语言可接受性（CoLA）模型上运行 DeepWordBug 攻击**\n```bash\ntextattack attack --model distilbert-base-uncased-cola --recipe deepwordbug --num-examples 100\n```\n\n### 高级选项\n\n- **交互式攻击**: 传入 `--interactive` 参数，手动输入样本进行攻击。\n- **多 GPU 并行**: 若机器拥有多个 GPU，可使用 `--parallel` 选项分布攻击任务以提升性能。\n\n更多详细用法（如自定义转换、约束等）请查阅 [官方文档](https:\u002F\u002Ftextattack.readthedocs.io\u002F)。","某金融科技公司正在构建智能客服系统，核心需求是确保情感分析模型在面对恶意篡改或用户拼写错误时依然稳定可靠，避免误判导致客诉风险。\n\n### 没有 TextAttack 时\n- 需从零编写对抗样本生成代码，耗时且易引入逻辑错误\n- 手动构造拼写错误或同义词替换数据增强训练集效率极低\n- 测试模型鲁棒性需维护多套独立脚本，难以统一量化评估\n- 缺乏现成攻击模板，复现学术界最新方法困难重重\n- 环境配置复杂，不同数据集和模型之间迁移成本高昂\n\n### 使用 TextAttack 后\n- 一条命令行即可运行 TextFooler 等经典攻击算法，秒级生成对抗样本\n- 内置数据增强功能快速扩充高质量训练样本，显著提升泛化能力\n- 统一框架简化模型训练与攻击测试流程，大幅降低维护成本\n- 直接调用预训练模型，无需重复下载配置环境，实现开箱即用\n- 支持并行计算加速攻击过程，在大规模测试中节省大量等待时间\n\nTextAttack 通过标准化流程显著提升了 NLP 模型的抗干扰能力与研发效率。","https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FQData_TextAttack_4f44fa5b.png","QData","https:\u002F\u002Foss.gittoolsai.com\u002Favatars\u002FQData_0bb74c09.png","http:\u002F\u002Fwww.cs.virginia.edu\u002Fyanjun\u002F  ",null,"yanjun@virginia.edu","Qdatalab","https:\u002F\u002Fqdata.github.io\u002Fqdata-page","https:\u002F\u002Fgithub.com\u002FQData",[84,88,92],{"name":85,"color":86,"percentage":87},"Python","#3572A5",92.5,{"name":89,"color":90,"percentage":91},"Jupyter Notebook","#DA5B0B",7.5,{"name":93,"color":94,"percentage":95},"Makefile","#427819",0.1,3396,440,"2026-04-05T04:26:57","MIT","未说明","可选，需 CUDA 兼容环境",{"notes":103,"python":104,"dependencies":105},"默认缓存路径为 ~\u002F.cache\u002Ftextattack\u002F，可通过 TA_CACHE_DIR 环境变量修改；支持命令行（textattack）和 Python 模块（python -m textattack）调用；部分攻击支持使用 --parallel 选项在多 GPU 
上并行运行以加速","3.6+",[100],[51,13,26],[108,109,110,111,112,113,114,115],"machine-learning","security","natural-language-processing","nlp","adversarial-machine-learning","adversarial-attacks","data-augmentation","adversarial-examples",4,"2026-03-27T02:49:30.150509","2026-04-06T06:54:53.512871",[120,125,130,135,140,145],{"id":121,"question_zh":122,"answer_zh":123,"source_url":124},2159,"如何通过 API 并行加速攻击以充分利用 GPU？","默认循环方式可能无法充分利用 GPU。如果遇到 `Found no GPU` 错误，建议重置环境。关于并行攻击的相关修复已在 PR #428 中处理，请确保使用最新版本的库。","https:\u002F\u002Fgithub.com\u002FQData\u002FTextAttack\u002Fissues\u002F372",{"id":126,"question_zh":127,"answer_zh":128,"source_url":129},2160,"并行攻击时出现 `RuntimeError: generator raised StopIteration` 如何解决？","此错误常见于旧版本中。解决方法是拉取项目的最新版本（pull the latest version），该问题已有补丁修复。","https:\u002F\u002Fgithub.com\u002FQData\u002FTextAttack\u002Fissues\u002F366",{"id":131,"question_zh":132,"answer_zh":133,"source_url":134},2161,"命令行加载模型时报错 `Must supply pretrained model or dataset` 或 `unsupported TextAttack model` 怎么办？","首先检查参数：加载预训练模型时应使用 `--model-from-file` 而非 `--model`。其次，加载本地保存的模型时，请确保路径正确且模型格式符合 TextAttack 标准，否则可能提示不支持。","https:\u002F\u002Fgithub.com\u002FQData\u002FTextAttack\u002Fissues\u002F529",{"id":136,"question_zh":137,"answer_zh":138,"source_url":139},2162,"如何从指定文件攻击文本样本？报错 `'NoneType' object has no attribute 'loader'` 如何处理？","请确认输入文件的格式是否符合要求，并检查文件路径是否正确。确保数据加载器（loader）能正确初始化并读取文件内容。","https:\u002F\u002Fgithub.com\u002FQData\u002FTextAttack\u002Fissues\u002F282",{"id":141,"question_zh":142,"answer_zh":143,"source_url":144},2163,"使用自定义词向量时出现索引越界错误怎么办？","这通常是因为自定义词表中的单词被分词成多个部分，导致索引与原文不匹配。建议升级到包含相关修复的版本（参考 PR #333），或提交代码集成解决方案。","https:\u002F\u002Fgithub.com\u002FQData\u002FTextAttack\u002Fissues\u002F325",{"id":146,"question_zh":147,"answer_zh":148,"source_url":149},2164,"TextAttack 是否支持使用 spaCy 模型？","是的，目前 TextAttack 已支持 spaCy 
模型。维护者确认该功能已实现，用户可以尝试使用。","https:\u002F\u002Fgithub.com\u002FQData\u002FTextAttack\u002Fissues\u002F117",[151,156,161,166,171,176,181,186,191,196,201,206,211,216,221,226],{"id":152,"version":153,"summary_zh":154,"released_at":155},101637,"v0.3.10","## What's Changed\r\n* Fix faster-alzantot recipe references by @marcorosa in https:\u002F\u002Fgithub.com\u002FQData\u002FTextAttack\u002Fpull\u002F776\r\n* Add back transcription augmentation method by @skorzewski in https:\u002F\u002Fgithub.com\u002FQData\u002FTextAttack\u002Fpull\u002F767\r\n* Polish dependencies and support python3.11 by @marcorosa in https:\u002F\u002Fgithub.com\u002FQData\u002FTextAttack\u002Fpull\u002F780\r\n* Update update_test_outputs.py by @qiyanjun in https:\u002F\u002Fgithub.com\u002FQData\u002FTextAttack\u002Fpull\u002F781\r\n* Add support for prompt augmentation by @k-ivey in https:\u002F\u002Fgithub.com\u002FQData\u002FTextAttack\u002Fpull\u002F766\r\n* Increase the swap file size of the GitHub actions runner by @k-ivey in https:\u002F\u002Fgithub.com\u002FQData\u002FTextAttack\u002Fpull\u002F755\r\n* Consistent word swap by @k-ivey in https:\u002F\u002Fgithub.com\u002FQData\u002FTextAttack\u002Fpull\u002F752\r\n* update docs with missing api by @qiyanjun in https:\u002F\u002Fgithub.com\u002FQData\u002FTextAttack\u002Fpull\u002F757\r\n* Typo corrections in installation docs by @dmlls in https:\u002F\u002Fgithub.com\u002FQData\u002FTextAttack\u002Fpull\u002F759\r\n* Do not use pipeline to achieve faster generation of Chinese mask repl… by @liuyuyan2717 in https:\u002F\u002Fgithub.com\u002FQData\u002FTextAttack\u002Fpull\u002F778\r\n* Rename BERT constraint to SBERT by @k-ivey in https:\u002F\u002Fgithub.com\u002FQData\u002FTextAttack\u002Fpull\u002F763\r\n* Word Swap Qwerty Failure Bug Fix by @jstzwj in https:\u002F\u002Fgithub.com\u002FQData\u002FTextAttack\u002Fpull\u002F761\r\n* disable tests while compute issues are resolved by @jxmorris12 in 
https:\u002F\u002Fgithub.com\u002FQData\u002FTextAttack\u002Fpull\u002F779\r\n\r\n## New Contributors\r\n* @dmlls made their first contribution in https:\u002F\u002Fgithub.com\u002FQData\u002FTextAttack\u002Fpull\u002F759\r\n* @marcorosa made their first contribution in https:\u002F\u002Fgithub.com\u002FQData\u002FTextAttack\u002Fpull\u002F776\r\n* @liuyuyan2717 made their first contribution in https:\u002F\u002Fgithub.com\u002FQData\u002FTextAttack\u002Fpull\u002F778\r\n* @jstzwj made their first contribution in https:\u002F\u002Fgithub.com\u002FQData\u002FTextAttack\u002Fpull\u002F761\r\n* @skorzewski made their first contribution in https:\u002F\u002Fgithub.com\u002FQData\u002FTextAttack\u002Fpull\u002F767\r\n\r\n**Full Changelog**: https:\u002F\u002Fgithub.com\u002FQData\u002FTextAttack\u002Fcompare\u002Fv0.3.9...v0.3.10","2024-03-11T02:09:54",{"id":157,"version":158,"summary_zh":159,"released_at":160},101638,"v0.3.9","this release mainly is about\r\n- #747 fixing CSVlogger missing df issue\r\n- #748 reverting one goal_func change due to the \"textattack attack\" errors\r\n- #719 extending textattack into Chinese language\r\n\r\n## What's Changed\r\n* fix command help str :-) by @jxmorris12 in https:\u002F\u002Fgithub.com\u002FQData\u002FTextAttack\u002Fpull\u002F703\r\n* Clean up formatting in HTML tables by @Arrrlex in https:\u002F\u002Fgithub.com\u002FQData\u002FTextAttack\u002Fpull\u002F707\r\n* Extra quality metrics by @gmurro in https:\u002F\u002Fgithub.com\u002FQData\u002FTextAttack\u002Fpull\u002F695\r\n* format after #695 by @jxmorris12 in https:\u002F\u002Fgithub.com\u002FQData\u002FTextAttack\u002Fpull\u002F710\r\n* Extend Chinese Attack by @Hanyu-Liu-123 in https:\u002F\u002Fgithub.com\u002FQData\u002FTextAttack\u002Fpull\u002F719\r\n* add in tutorials and reference for Chinese Textattack by @qiyanjun in https:\u002F\u002Fgithub.com\u002FQData\u002FTextAttack\u002Fpull\u002F744\r\n* fix potential bug in the filter_by_labels_ method of the Dataset 
class by @wenh06 in https:\u002F\u002Fgithub.com\u002FQData\u002FTextAttack\u002Fpull\u002F746\r\n* Fixed a batch_size bug in attack_args.py by @Falanke21 in https:\u002F\u002Fgithub.com\u002FQData\u002FTextAttack\u002Fpull\u002F735\r\n* Fix the problem of text output from T5 model by @plasmashen in https:\u002F\u002Fgithub.com\u002FQData\u002FTextAttack\u002Fpull\u002F709\r\n* Bump transformers from 4.27.4 to 4.30.0 by @dependabot in https:\u002F\u002Fgithub.com\u002FQData\u002FTextAttack\u002Fpull\u002F740\r\n* Fixed syntax and import issues in the example of Attack API by @eldorabdukhamidov in https:\u002F\u002Fgithub.com\u002FQData\u002FTextAttack\u002Fpull\u002F734\r\n* hard label classification by @cogeid in https:\u002F\u002Fgithub.com\u002FQData\u002FTextAttack\u002Fpull\u002F635\r\n* fixing the csvlogger missing DF issues by @qiyanjun in https:\u002F\u002Fgithub.com\u002FQData\u002FTextAttack\u002Fpull\u002F747\r\n* Fix pytest errors - due to goal_func by @qiyanjun in https:\u002F\u002Fgithub.com\u002FQData\u002FTextAttack\u002Fpull\u002F748\r\n* format update by @qiyanjun in https:\u002F\u002Fgithub.com\u002FQData\u002FTextAttack\u002Fpull\u002F749\r\n* Stanza test and notebooks minor fix by @qiyanjun in https:\u002F\u002Fgithub.com\u002FQData\u002FTextAttack\u002Fpull\u002F750\r\n\r\n## New Contributors\r\n* @Arrrlex made their first contribution in https:\u002F\u002Fgithub.com\u002FQData\u002FTextAttack\u002Fpull\u002F707\r\n* @gmurro made their first contribution in https:\u002F\u002Fgithub.com\u002FQData\u002FTextAttack\u002Fpull\u002F695\r\n* @Falanke21 made their first contribution in https:\u002F\u002Fgithub.com\u002FQData\u002FTextAttack\u002Fpull\u002F735\r\n* @eldorabdukhamidov made their first contribution in https:\u002F\u002Fgithub.com\u002FQData\u002FTextAttack\u002Fpull\u002F734\r\n\r\n**Full Changelog**: 
https:\u002F\u002Fgithub.com\u002FQData\u002FTextAttack\u002Fcompare\u002Fv0.3.8...v0.3.9","2023-09-11T23:06:00",{"id":162,"version":163,"summary_zh":164,"released_at":165},101639,"v0.3.8","#689: Add more type annotations and do some code cleanup in AttackedText\r\n   notably removed some code that did Chinese word segmentation because it did not properly\r\n   support words_from_text, which caused issues with various transformations.\r\n\r\n#691: Optimize comparison between two AttackedText objects (thanks @plasmashen!)\r\n\r\n#693: Fix bug with writing parameters twice in AttackedText (thanks @89x98!)\r\n\r\n#700: Lots of miscellaneous bug fixes and some helper function implementation\r\n\r\n#701: Fix bugs with loading TedTalk translation dataset, using T5, seq2sick\u002Ftext-to-text goal functions","2022-11-02T19:43:17",{"id":167,"version":168,"summary_zh":169,"released_at":170},101640,"v0.3.7","- Update dependency: `transformers>=4.21.0`\r\n- Update dependency: `datasets==2.4.0`\r\n- Update optional dependency: `sentence_transformers==2.2.0`\r\n- Update optional dependency: `gensim==4.1.2`\r\n- Update optional dependency: `tensorflow==2.7.0` (Thanks @VijayKalmath !!!!)\r\n- Miscellaneous fixes for new packages to update things and remove warning messages\r\n- Fix logging attack args to W&B #647 (thanks @VijayKalmath)\r\n- Fix bug with word_swap_masked_lm #649 (thanks @Hanyu-Liu-123)\r\n- Fix small issues with `textattack train` #653 (thanks @VijayKalmath)\r\n- Fix issue with PWWS #654 (thanks @VijayKalmath)\r\n- Update recipe for FasterGeneticAlgorithm to match paper #656 (thanks @VijayKalmath)\r\n- Update adversarial dataset generation logic #657 (thanks @VijayKalmath)\r\n- Update dataset_args to correctly set dataset_split #659 (thanks @VijayKalmath)\r\n- Add logic for loading SQUAD via HuggingFaceDataset class #660 (thanks @VijayKalmath)\r\n- Fix ANSI color-printing #662\r\n- Make GreedyWordSwapWIR and related search methods more query-efficient under the 
presence of pre-transformation constraints #665 and #674 (thanks @VijayKalmath)\r\n- Save attack summary table as JSON (thanks @VijayKalmath -- great feature add!!)\r\n- Fix typo and update numpy #671 and #672 (thanks @JohnGiorgi -- and welcome!)\r\n- Finish CLARE attack #675 (thanks @Hanyu-Liu-123 and @VijayKalmath)\r\n- Add __repr__ for better user experience with GoalFunctionResult #676 (thanks @VijayKalmath)\r\n- Better exception handling in WordSwapChangeNumber ((thanks @dangne -- and welcome!!)\r\n- Various other typo and bug fixes\r\n\r\nThanks to everyone who contributed to TextAttack this summer, and a special shoutout once more to @VijayKalmath for all the hard work and attention to detail. Glad to see TextAttack so healthy 🙂","2022-08-14T16:26:58",{"id":172,"version":173,"summary_zh":174,"released_at":175},101641,"v0.3.5","- #644:\r\n  - Ability to specify device via `TA_DEVICE` env variable\r\n  - New constraint, `MaxNumWordsModified`\r\n  - Tracks previous AttackedText during attack to allow for reconstruction of chain of modifications\r\n  - Change`GreedyWordSwapWIR` to allow passing of specific unk token\r\n  - Formatting updates to new Black version\r\n  - fix Universal Sentence Encoder from TF breakage\r\n  - fix Flair to new API (thanks @[VijayKalmath](https:\u002F\u002Fgithub.com\u002FQData\u002FTextAttack\u002Fissues?q=is%3Apr+author%3AVijayKalmath) for the help!)\r\n- Bump version to 0.3.5\r\n- #623 Fix quotation bug, thanks @donggrant \r\n- #613 and others, fix dependencies\r\n- #609 Only initialize embeddings when needed :) thanks to @duesenfranz \r\n- #591 fix a bug with CLARE","2022-05-25T19:13:02",{"id":177,"version":178,"summary_zh":179,"released_at":180},101642,"v0.3.4","## What's Changed\r\n* [CODE] Keras parallel attack fix - Issue #499 by @sanchit97 in https:\u002F\u002Fgithub.com\u002FQData\u002FTextAttack\u002Fpull\u002F515\r\n* Bump tensorflow from 2.4.2 to 2.5.1 in \u002Fdocs by @dependabot in 
https:\u002F\u002Fgithub.com\u002FQData\u002FTextAttack\u002Fpull\u002F517\r\n* Add a high level overview diagram to docs by @cogeid in https:\u002F\u002Fgithub.com\u002FQData\u002FTextAttack\u002Fpull\u002F519\r\n* readtheDoc fix by @qiyanjun in https:\u002F\u002Fgithub.com\u002FQData\u002FTextAttack\u002Fpull\u002F522\r\n* Add new attack recipe A2T by @jinyongyoo in https:\u002F\u002Fgithub.com\u002FQData\u002FTextAttack\u002Fpull\u002F523\r\n* Fix incorrect `__eq__` method of `AttackedText` in `textattack\u002Fshared\u002Fattacked_text.py` by @wenh06 in https:\u002F\u002Fgithub.com\u002FQData\u002FTextAttack\u002Fpull\u002F509\r\n* Fix a bug when running textattack eval with --num-examples=-1 by @dangne in https:\u002F\u002Fgithub.com\u002FQData\u002FTextAttack\u002Fpull\u002F521\r\n* New metric module to improve flexibility and  intuitiveness - moved from #475 by @sanchit97 in https:\u002F\u002Fgithub.com\u002FQData\u002FTextAttack\u002Fpull\u002F514\r\n* Update installation.md to add FAQ on installation by @qiyanjun in https:\u002F\u002Fgithub.com\u002FQData\u002FTextAttack\u002Fpull\u002F535\r\n* Fix dataset-split bug by @Hanyu-Liu-123 in https:\u002F\u002Fgithub.com\u002FQData\u002FTextAttack\u002Fpull\u002F533\r\n* Update by @Hanyu-Liu-123 in https:\u002F\u002Fgithub.com\u002FQData\u002FTextAttack\u002Fpull\u002F541\r\n* add custom dataset API use example in doc by @qiyanjun in https:\u002F\u002Fgithub.com\u002FQData\u002FTextAttack\u002Fpull\u002F543\r\n* Fix logger initiation bug by @Hanyu-Liu-123 in https:\u002F\u002Fgithub.com\u002FQData\u002FTextAttack\u002Fpull\u002F539\r\n* Updated Tutorial 0 to use the Rotten Tomatoes dataset instead of the … by @srujanjoshi in https:\u002F\u002Fgithub.com\u002FQData\u002FTextAttack\u002Fpull\u002F542\r\n* Back translation transformation by @cogeid in https:\u002F\u002Fgithub.com\u002FQData\u002FTextAttack\u002Fpull\u002F534\r\n* Fixed a bug in the allennlp tutorial by @donggrant in 
https:\u002F\u002Fgithub.com\u002FQData\u002FTextAttack\u002Fpull\u002F546\r\n* Logger bug fix by @ankitgv0 in https:\u002F\u002Fgithub.com\u002FQData\u002FTextAttack\u002Fpull\u002F551\r\n* add \"textattack[tensorflow]\" option in all tutorials by @qiyanjun in https:\u002F\u002Fgithub.com\u002FQData\u002FTextAttack\u002Fpull\u002F559\r\n* Fix CLARE Extra Character Bug by @Hanyu-Liu-123 in https:\u002F\u002Fgithub.com\u002FQData\u002FTextAttack\u002Fpull\u002F556\r\n* Fix metric-module Issue #532 by @sanchit97 in https:\u002F\u002Fgithub.com\u002FQData\u002FTextAttack\u002Fpull\u002F540\r\n* Add API docstrings for back translation by @cogeid in https:\u002F\u002Fgithub.com\u002FQData\u002FTextAttack\u002Fpull\u002F563\r\n* Fixed the \"no attribute\" error from #536 by @ankitgv0 in https:\u002F\u002Fgithub.com\u002FQData\u002FTextAttack\u002Fpull\u002F552\r\n* Enhance augment function by @Hanyu-Liu-123 in https:\u002F\u002Fgithub.com\u002FQData\u002FTextAttack\u002Fpull\u002F531\r\n* fix read-the-doc installation issue \u002F clean up and add new docstrings for recently added classes\u002Fpackages by @qiyanjun in https:\u002F\u002Fgithub.com\u002FQData\u002FTextAttack\u002Fpull\u002F569\r\n\r\n## New Contributors\r\n* @wenh06 made their first contribution in https:\u002F\u002Fgithub.com\u002FQData\u002FTextAttack\u002Fpull\u002F509\r\n* @dangne made their first contribution in https:\u002F\u002Fgithub.com\u002FQData\u002FTextAttack\u002Fpull\u002F521\r\n* @srujanjoshi made their first contribution in https:\u002F\u002Fgithub.com\u002FQData\u002FTextAttack\u002Fpull\u002F542\r\n* @donggrant made their first contribution in https:\u002F\u002Fgithub.com\u002FQData\u002FTextAttack\u002Fpull\u002F546\r\n* @ankitgv0 made their first contribution in https:\u002F\u002Fgithub.com\u002FQData\u002FTextAttack\u002Fpull\u002F551\r\n\r\n**Full Changelog**: 
https:\u002F\u002Fgithub.com\u002FQData\u002FTextAttack\u002Fcompare\u002Fv0.3.3...v0.3.4","2021-11-10T01:24:27",{"id":182,"version":183,"summary_zh":184,"released_at":185},101643,"v0.3.3","1. Merge pull request #508 from QData\u002Fexample_bug_fix \r\n\r\n2. Merge pull request #505 from QData\u002Fs3-model-fix  \r\n\r\n3. Merge pull request #503 from QData\u002Fmultilingual-doc \r\n\r\n4. Merge pull request #502 from QData\u002FNotebook-10-bug-fix  \r\n\r\n5. Merge pull request #500 from QData\u002Fdocstring-rework-missing \r\n\r\n6. Merge pull request #497 from QData\u002Fdependabot\u002Fpip\u002Fdocs\u002Ftensorflow-2.4.2 \r\n\r\n7. Merge pull request #495 from QData\u002Freadthedoc-fix ","2021-08-03T02:33:23",{"id":187,"version":188,"summary_zh":189,"released_at":190},101644,"v0.3.2","## Multiple bug fixes: \r\n\r\n- Merge pull request #473 from cogeid\u002Ffile-redirection-fix \r\n\r\n- Merge pull request #469 from xinzhel\u002Fallennlp_doc \r\n\r\n- Merge pull request #477 from cogeid\u002FFix-RandomSwap-and-RandomSynonymI… \r\n\r\n- Merge pull request #484 from QData\u002Fupdate-torch-version  \r\n\r\n- Merge pull request #490 from QData\u002Fscipy-version-plus-two-doc-updates \r\n\r\n- Merge pull request #420 from QData\u002Fmultilingual  \r\n\r\n- Merge pull request #495 from QData\u002Freadthedoc-fix ","2021-07-28T16:37:21",{"id":192,"version":193,"summary_zh":194,"released_at":195},101645,"v0.3.0","# New Updated API\r\nWe have added two new classes called `Attacker` and `Trainer` that can be used to perform adversarial attacks and adversarial training with full logging support and multi-GPU parallelism. This is intended to provide an alternative way of performing attacks and training for custom models and datasets.\r\n\r\n## `Attacker`: Running Adversarial Attacks\r\nBelow is an example use of `Attacker` to attack BERT model finetuned on IMDB dataset using TextFooler method. 
The `AttackArgs` class is used to set the parameters of the attack, including the number of examples to attack, the CSV file to log the results to, and the interval at which to save checkpoints.\r\n\r\n![Screen Shot 2021-06-24 at 8 34 44 PM](https:\u002F\u002Fuser-images.githubusercontent.com\u002F32072203\u002F123256196-a85e6280-d52b-11eb-94fd-a4f0408a851a.png)\r\n\r\nMore details about `Attacker` and `AttackArgs` can be found [here](https:\u002F\u002Ftextattack.readthedocs.io\u002Fen\u002Flatest\u002Fapi\u002Fattack.html).\r\n\r\n## `Trainer`: Running Adversarial Training\r\nPreviously, TextAttack supported adversarial training in a limited manner. Users could only train models using the CLI command, and not every aspect of training was available for tuning.\r\n\r\nThe `Trainer` class introduces an easy way to train custom PyTorch\u002FTransformers models on a custom dataset. Below is an example where we finetune BERT on the IMDB dataset with an adversarial attack called [DeepWordBug](https:\u002F\u002Farxiv.org\u002Fabs\u002F1801.04354).\r\n\r\n![Screen Shot 2021-06-25 at 9 28 57 PM](https:\u002F\u002Fuser-images.githubusercontent.com\u002F32072203\u002F123425045-93053900-d5fc-11eb-81ae-2e11f2b15137.png)\r\n\r\n## `Dataset`\r\nPreviously, datasets passed to TextAttack were simply expected to be an iterable of `(input, target)` tuples. While this offers flexibility, it prevents users from passing key information about the dataset that TextAttack can use to provide a better experience (e.g. label names, label remapping, input column names used for printing).\r\n\r\nWe instead explicitly define a `Dataset` class that users can use or subclass for their own datasets. 
\r\n\r\n## Bug Fixes:\r\n- #467: Don't check self.target_max_score when it is already known to be None.\r\n- #417: Fixed a bug where, in masked_lm transformations, only subwords were candidates for top_words.","2021-06-25T12:50:10",{"id":197,"version":198,"summary_zh":199,"released_at":200},101646,"v0.2.15","# CLARE Attack (#356, #392)\r\nWe have added a new attack proposed by \"[Contextualized Perturbation for Textual Adversarial Attack](https:\u002F\u002Farxiv.org\u002Fabs\u002F2009.07502)\" (Li et al., 2020). There's also a corresponding augmenter recipe using CLARE. Thanks to @Hanyu-Liu-123, @cookielee77.\r\n\r\n# Custom Word Embedding (#333, #399)\r\nWe have added support for custom word embeddings via `AbstractWordEmbedding`, `WordEmbedding`, and `GensimWordEmbedding` from `textattack.shared`. These three classes allow users to supply their own word embeddings to transformations and constraints that require them. Thanks @tsinggggg and @alexander-zap for contributing!\r\n\r\n# Bug Fixes and Changes\r\n- We fixed a bug that caused TextAttack to report a lower average number of queries than it should (#350, thanks @a1noack).\r\n- Updated the dataset split used to evaluate robustness during adversarial training (#361, thanks @Opdoop).\r\n- Updated default parameters for the TextBugger recipe (#373)\r\n- Fixed an issue with TextBugger by updating the default method used to segment text into words so that it works with homoglyphs. (#376, thanks @lethaiq!)\r\n- Updated `ModelWrapper` to not require the `get_grad` method to be defined. (#381)\r\n- Fixed an issue with `WordSwapMaskedLM` that was causing words with the lowest probability to be picked first. 
(#396)\r\n\r\n","2020-12-27T04:51:45",{"id":202,"version":203,"summary_zh":204,"released_at":205},101647,"0.2.14","# Improvements\r\n\r\n- Bug fixes\r\n- Match the documentation in Readme.md with the files in the \u002Fdoc folder\r\n- add checklist\r\n- add multilingual USE\r\n- add gradient-based word importance ranking\r\n- update to a more complete API documentation\r\n- add cola constraint\r\n- add the lazy loader\r\n","2020-11-18T15:41:35",{"id":207,"version":208,"summary_zh":209,"released_at":210},101648,"0.2.12","## Big Improvements\r\n- add checklist\r\n- add multilingual USE\r\n- add gradient-based word importance ranking\r\n- update to a more complete API documentation\r\n- add cola constraint\r\n- add the lazy loader\r\n","2020-11-13T18:59:01",{"id":212,"version":213,"summary_zh":214,"released_at":215},101649,"0.2.0","## Big Improvements\r\n- Add tons of (over 70!) pre-trained models (#192, see the [Model Zoo page!](https:\u002F\u002Fgithub.com\u002FQData\u002FTextAttack\u002Ftree\u002Fmaster\u002Ftextattack\u002Fmodels))\r\n- Data augmentation integrated into training! (#195, thanks @jakegrigsby)\r\n- Allow for maximization goal functions (#151, thanks @uvafan)\r\n\r\n## New Attacks\r\n- Add the Improved Genetic Algorithm (#183, thanks @sherlockyyc!)\r\n- Add BAE and BERT-Attack attack recipes (#160)\r\n- Add PWWS attack (#168, thanks @jakegrigsby)\r\n- Add typo-based attack from Pruthi et al. (#191, thanks @jakegrigsby)\r\n- Easy Data Augmentation augmentation recipe (#168, thanks @jakegrigsby)\r\n- Add input reduction attack from Feng et al. 
(#161, thanks @uvafan)\r\n\r\n## Smaller Improvements\r\n- more accurate attack recipes for BAE and TextFooler (#199)\r\n- important fixes to model training code (#186, thanks so much @jind11!!)\r\n- abstract classes, better string representations when printing attacks to console (#202)\r\n- genetic algorithm improvements (#160, thanks @jinyongyoo )\r\n- fixes to parallel attacks (#164, thanks @jinyongyoo )\r\n- datasets to test out T5 on seq2seq attacks (#176)\r\n\r\n## Bug Fixes\r\n- correctly print attack perturbed words in color, even when words are deleted & inserted (#200)\r\n- fix `print_step` bug with `alzantot` recipe (#195, thanks @heytitle for reporting!)\r\n- fix some annoying issues with dependency versioning","2020-07-09T19:44:19",{"id":217,"version":218,"summary_zh":219,"released_at":220},101650,"0.1.0","Version 0.1.0 is our biggest release yet! Here's a summary of the changes:\r\n\r\n> **Backwards compatibility note:** `python -m textattack \u003Cargs>` is renamed to `python -m textattack attack \u003Cargs>`. 
Or, better yet, `textattack attack \u003Cargs>`!\r\n## Big improvements\r\n- add `textattack` command (#132)\r\n  - add `textattack augment`, `textattack eval`, `textattack attack`, `textattack list` (#132)\r\n  - add `textattack train`, `textattack peek-dataset`, and lots of infrastructure for training models (#139)\r\n- Move all datasets to the [`nlp`](https:\u002F\u002Fgithub.com\u002Fhuggingface\u002Fnlp\u002F) format; temporarily remove non-NLP datasets (AGNews, English->German translation) (#134)\r\n\r\n## Smaller improvements\r\n- Better output formatting -- show labels (\"Positive\", \"Entailment\") and confidence score (91%) in output (#142)\r\n- add `MaxLengthModification` constraint that prevents modifications beyond the tokenizer max_length (#143)\r\n- Add `pytest` tests and code formatting with `black`; run tests on Python 3.6, 3.7, 3.8 with Travis CI (#127, #136)\r\n- Update the NLTK part-of-speech constraint and support part-of-speech tagging with FLAIR instead (#135)\r\n- add a `BERTScore` constraint based on [\"BERTScore: Evaluating Text Generation with BERT\" (Zhang et al., 2019)](https:\u002F\u002Farxiv.org\u002Fabs\u002F1904.09675) (#146)\r\n- make logging to file optional (#145)\r\n- Updates to the `Checkpoint` class; track attack results in a worklist; attack resume fixes (#128, #141)\r\n- Silence the Weights & Biases warning message when it is not being used (#130)\r\n- Optionally point all cache directories to a universal cache directory, `TA_CACHE_DIR` (#150)\r\n\r\n## Bug fixes\r\n- Fix a bug that can be encountered when resuming attacks from checkpoints (#149)\r\n- Fix a bug in Greedy word-importance-ranking deletion (#152)\r\n- Documentation updates and fixes (#153)","2020-06-24T23:56:56",{"id":222,"version":223,"summary_zh":224,"released_at":225},101651,"0.0.3.0","big changes:\r\n- load [`transformers`](https:\u002F\u002Fgithub.com\u002Fhuggingface\u002Ftransformers\u002F) models from the command-line using the `--model-from-huggingface` option\r\n- load 
[`nlp`](https:\u002F\u002Fgithub.com\u002Fhuggingface\u002Fnlp) datasets from the command-line using the `--dataset-from-nlp` option\r\n- command-line support for custom attacks, models, and datasets: `--attack-from-file`, `--model-from-file`, `--dataset-from-file`\r\n- implement attack recipe for [TextBugger](https:\u002F\u002Farxiv.org\u002Fabs\u002F1812.05271) attack\r\n- add WordDeletion transformation\r\n\r\nsmall changes:\r\n- support white-box transformations via the command-line\r\n- allow Greedy-WIR to rank things in order of ascending importance\r\n- use fast tokenizers behind the scenes\r\n- fix some bugs with the attack `Checkpoint` class\r\n- some abbreviated syntax (`textattack.shared.utils.get_logger() -> textattack.shared.logger`, `textattack.shared.utils.get_device() -> textattack.shared.utils.device`)\r\n- substantially decrease overall `TokenizedText` memory usage","2020-06-11T22:23:21",{"id":227,"version":228,"summary_zh":229,"released_at":230},101652,"0.0.2","## 0.0.2: Better documentation, attack checkpoints, PreTransformationConstraints, and more\r\n- Major documentation restructure ([check it out](https:\u002F\u002Ftextattack.readthedocs.io\u002Fen\u002Flatest\u002F))\r\n- Some refactoring and variable renames to make it easier to jump right in and start working with TextAttack\r\n- Introduction of `PreTransformationConstraints`: constraints now can be applied _before_ the transformation to prevent word modifications at certain indices. This abstraction allowed us to remove the notion of `modified_indices` from search methods, which paves the way for us to introduce attacks that insert or delete words and phrases, as opposed to simply swapping words.\r\n- Separation of `Attack` and `SearchMethod`: search methods are now a parameter to the attack instead of different subclasses of `Attack`. 
This syntax fits better with our framework and enforces a clearer sense of separation between the responsibilities of the attack and those of the search method.\r\n- Transformation and constraint compatibility: Constraints now ensure they're compatible with a specific transformation via a `check_compatibility` method\r\n- Goal function scores are now normalized between 0 and 1: `UntargetedClassification` and `NonOverlappingOutput` now return scores between 0 and 1.\r\n- Attack Checkpoints: Attacks can now save and resume their progress. This is really useful for running long, expensive attacks. `python -m textattack` supports new checkpoint-related arguments: `--checkpoint-interval` and `--checkpoint-dir`\r\n- Weights & Biases: Log attack results to [Weights & Biases](https:\u002F\u002Fwww.wandb.com\u002F) by adding the `--enable-wandb` flag","2020-05-21T21:06:37"]