[{"data":1,"prerenderedAt":-1},["ShallowReactive",2],{"similar-makcedward--nlpaug":3,"tool-makcedward--nlpaug":64},[4,17,27,35,43,56],{"id":5,"name":6,"github_repo":7,"description_zh":8,"stars":9,"difficulty_score":10,"last_commit_at":11,"category_tags":12,"status":16},3808,"stable-diffusion-webui","AUTOMATIC1111\u002Fstable-diffusion-webui","stable-diffusion-webui 是一个基于 Gradio 构建的网页版操作界面，旨在让用户能够轻松地在本地运行和使用强大的 Stable Diffusion 图像生成模型。它解决了原始模型依赖命令行、操作门槛高且功能分散的痛点，将复杂的 AI 绘图流程整合进一个直观易用的图形化平台。\n\n无论是希望快速上手的普通创作者、需要精细控制画面细节的设计师，还是想要深入探索模型潜力的开发者与研究人员，都能从中获益。其核心亮点在于极高的功能丰富度：不仅支持文生图、图生图、局部重绘（Inpainting）和外绘（Outpainting）等基础模式，还独创了注意力机制调整、提示词矩阵、负向提示词以及“高清修复”等高级功能。此外，它内置了 GFPGAN 和 CodeFormer 等人脸修复工具，支持多种神经网络放大算法，并允许用户通过插件系统无限扩展能力。即使是显存有限的设备，stable-diffusion-webui 也提供了相应的优化选项，让高质量的 AI 艺术创作变得触手可及。",162132,3,"2026-04-05T11:01:52",[13,14,15],"开发框架","图像","Agent","ready",{"id":18,"name":19,"github_repo":20,"description_zh":21,"stars":22,"difficulty_score":23,"last_commit_at":24,"category_tags":25,"status":16},1381,"everything-claude-code","affaan-m\u002Feverything-claude-code","everything-claude-code 是一套专为 AI 编程助手（如 Claude Code、Codex、Cursor 等）打造的高性能优化系统。它不仅仅是一组配置文件，而是一个经过长期实战打磨的完整框架，旨在解决 AI 代理在实际开发中面临的效率低下、记忆丢失、安全隐患及缺乏持续学习能力等核心痛点。\n\n通过引入技能模块化、直觉增强、记忆持久化机制以及内置的安全扫描功能，everything-claude-code 能显著提升 AI 在复杂任务中的表现，帮助开发者构建更稳定、更智能的生产级 AI 代理。其独特的“研究优先”开发理念和针对 Token 消耗的优化策略，使得模型响应更快、成本更低，同时有效防御潜在的攻击向量。\n\n这套工具特别适合软件开发者、AI 研究人员以及希望深度定制 AI 工作流的技术团队使用。无论您是在构建大型代码库，还是需要 AI 协助进行安全审计与自动化测试，everything-claude-code 都能提供强大的底层支持。作为一个曾荣获 Anthropic 黑客大奖的开源项目，它融合了多语言支持与丰富的实战钩子（hooks），让 AI 真正成长为懂上",138956,2,"2026-04-05T11:33:21",[13,15,26],"语言模型",{"id":28,"name":29,"github_repo":30,"description_zh":31,"stars":32,"difficulty_score":23,"last_commit_at":33,"category_tags":34,"status":16},2271,"ComfyUI","Comfy-Org\u002FComfyUI","ComfyUI 是一款功能强大且高度模块化的视觉 AI 引擎，专为设计和执行复杂的 Stable Diffusion 图像生成流程而打造。它摒弃了传统的代码编写模式，采用直观的节点式流程图界面，让用户通过连接不同的功能模块即可构建个性化的生成管线。\n\n这一设计巧妙解决了高级 AI 
绘图工作流配置复杂、灵活性不足的痛点。用户无需具备编程背景，也能自由组合模型、调整参数并实时预览效果，轻松实现从基础文生图到多步骤高清修复等各类复杂任务。ComfyUI 拥有极佳的兼容性，不仅支持 Windows、macOS 和 Linux 全平台，还广泛适配 NVIDIA、AMD、Intel 及苹果 Silicon 等多种硬件架构，并率先支持 SDXL、Flux、SD3 等前沿模型。\n\n无论是希望深入探索算法潜力的研究人员和开发者，还是追求极致创作自由度的设计师与资深 AI 绘画爱好者，ComfyUI 都能提供强大的支持。其独特的模块化架构允许社区不断扩展新功能，使其成为当前最灵活、生态最丰富的开源扩散模型工具之一，帮助用户将创意高效转化为现实。",107662,"2026-04-03T11:11:01",[13,14,15],{"id":36,"name":37,"github_repo":38,"description_zh":39,"stars":40,"difficulty_score":23,"last_commit_at":41,"category_tags":42,"status":16},3704,"NextChat","ChatGPTNextWeb\u002FNextChat","NextChat 是一款轻量且极速的 AI 助手，旨在为用户提供流畅、跨平台的大模型交互体验。它完美解决了用户在多设备间切换时难以保持对话连续性，以及面对众多 AI 模型不知如何统一管理的痛点。无论是日常办公、学习辅助还是创意激发，NextChat 都能让用户随时随地通过网页、iOS、Android、Windows、MacOS 或 Linux 端无缝接入智能服务。\n\n这款工具非常适合普通用户、学生、职场人士以及需要私有化部署的企业团队使用。对于开发者而言，它也提供了便捷的自托管方案，支持一键部署到 Vercel 或 Zeabur 等平台。\n\nNextChat 的核心亮点在于其广泛的模型兼容性，原生支持 Claude、DeepSeek、GPT-4 及 Gemini Pro 等主流大模型，让用户在一个界面即可自由切换不同 AI 能力。此外，它还率先支持 MCP（Model Context Protocol）协议，增强了上下文处理能力。针对企业用户，NextChat 提供专业版解决方案，具备品牌定制、细粒度权限控制、内部知识库整合及安全审计等功能，满足公司对数据隐私和个性化管理的高标准要求。",87618,"2026-04-05T07:20:52",[13,26],{"id":44,"name":45,"github_repo":46,"description_zh":47,"stars":48,"difficulty_score":23,"last_commit_at":49,"category_tags":50,"status":16},2268,"ML-For-Beginners","microsoft\u002FML-For-Beginners","ML-For-Beginners 是由微软推出的一套系统化机器学习入门课程，旨在帮助零基础用户轻松掌握经典机器学习知识。这套课程将学习路径规划为 12 周，包含 26 节精炼课程和 52 道配套测验，内容涵盖从基础概念到实际应用的完整流程，有效解决了初学者面对庞大知识体系时无从下手、缺乏结构化指导的痛点。\n\n无论是希望转型的开发者、需要补充算法背景的研究人员，还是对人工智能充满好奇的普通爱好者，都能从中受益。课程不仅提供了清晰的理论讲解，还强调动手实践，让用户在循序渐进中建立扎实的技能基础。其独特的亮点在于强大的多语言支持，通过自动化机制提供了包括简体中文在内的 50 多种语言版本，极大地降低了全球不同背景用户的学习门槛。此外，项目采用开源协作模式，社区活跃且内容持续更新，确保学习者能获取前沿且准确的技术资讯。如果你正寻找一条清晰、友好且专业的机器学习入门之路，ML-For-Beginners 将是理想的起点。",84991,"2026-04-05T10:45:23",[14,51,52,53,15,54,26,13,55],"数据工具","视频","插件","其他","音频",{"id":57,"name":58,"github_repo":59,"description_zh":60,"stars":61,"difficulty_score":10,"last_commit_at":62,"category_tags":63,"status":16},3128,"ragflow","infiniflow\u002Fragflow","RAGFlow 
是一款领先的开源检索增强生成（RAG）引擎，旨在为大语言模型构建更精准、可靠的上下文层。它巧妙地将前沿的 RAG 技术与智能体（Agent）能力相结合，不仅支持从各类文档中高效提取知识，还能让模型基于这些知识进行逻辑推理和任务执行。\n\n在大模型应用中，幻觉问题和知识滞后是常见痛点。RAGFlow 通过深度解析复杂文档结构（如表格、图表及混合排版），显著提升了信息检索的准确度，从而有效减少模型“胡编乱造”的现象，确保回答既有据可依又具备时效性。其内置的智能体机制更进一步，使系统不仅能回答问题，还能自主规划步骤解决复杂问题。\n\n这款工具特别适合开发者、企业技术团队以及 AI 研究人员使用。无论是希望快速搭建私有知识库问答系统，还是致力于探索大模型在垂直领域落地的创新者，都能从中受益。RAGFlow 提供了可视化的工作流编排界面和灵活的 API 接口，既降低了非算法背景用户的上手门槛，也满足了专业开发者对系统深度定制的需求。作为基于 Apache 2.0 协议开源的项目，它正成为连接通用大模型与行业专有知识之间的重要桥梁。",77062,"2026-04-04T04:44:48",[15,14,13,26,54],{"id":65,"github_repo":66,"name":67,"description_en":68,"description_zh":69,"ai_summary_zh":69,"readme_en":70,"readme_zh":71,"quickstart_zh":72,"use_case_zh":73,"hero_image_url":74,"owner_login":75,"owner_name":76,"owner_avatar_url":77,"owner_bio":78,"owner_company":79,"owner_location":80,"owner_email":81,"owner_twitter":81,"owner_website":82,"owner_url":83,"languages":84,"stars":97,"forks":98,"last_commit_at":99,"license":100,"difficulty_score":101,"env_os":102,"env_gpu":103,"env_ram":103,"env_deps":104,"category_tags":118,"github_topics":119,"view_count":130,"oss_zip_url":81,"oss_zip_packed_at":81,"status":16,"created_at":131,"updated_at":132,"faqs":133,"releases":159},1355,"makcedward\u002Fnlpaug","nlpaug","Data augmentation for NLP ","nlpaug 是一个轻量级 Python 库，专门帮你在自然语言处理任务里“变出”更多训练数据。它通过同义词替换、键盘错别字、随机删词、语音加噪等几十种现成策略，把一段文本或语音自动扩展成多条相似样本，既省人工标注，又能提升模型泛化能力。只需三行代码即可接入 scikit-learn、PyTorch、TensorFlow 等任何框架，支持中文、英文等多语种文本，也支持音频和频谱图。特别适合数据稀缺的开发者、科研人员或做对话、语音识别、文本分类的团队。","\u003Cp align=\"center\">\n    \u003Cbr>\n    \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fmakcedward_nlpaug_readme_db699989f7d0.png\"\u002F>\n    \u003Cbr>\n\u003Cp>\n\u003Cp align=\"center\">\n    \u003Ca href=\"https:\u002F\u002Ftravis-ci.org\u002Fmakcedward\u002Fnlpaug\">\n        \u003Cimg alt=\"Build\" src=\"https:\u002F\u002Ftravis-ci.org\u002Fmakcedward\u002Fnlpaug.svg?branch=master\">\n    \u003C\u002Fa>\n    \u003Ca 
href=\"https:\u002F\u002Fwww.codacy.com\u002Fapp\u002Fmakcedward\u002Fnlpaug?utm_source=github.com&amp;utm_medium=referral&amp;utm_content=makcedward\u002Fnlpaug&amp;utm_campaign=Badge_Grade\">\n        \u003Cimg alt=\"Code Quality\" src=\"https:\u002F\u002Fapi.codacy.com\u002Fproject\u002Fbadge\u002FGrade\u002F2d6d1d08016a4f78818161a89a2dfbfb\">\n    \u003C\u002Fa>\n    \u003Ca href=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fmakcedward_nlpaug_readme_40d304df6d24.png\">\n        \u003Cimg alt=\"Downloads\" src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fmakcedward_nlpaug_readme_40d304df6d24.png\">\n    \u003C\u002Fa>\n\u003C\u002Fp>\n\n# nlpaug\n\nThis Python library helps you augment NLP data for your machine learning projects. Visit this introduction to learn about [Data Augmentation in NLP](https:\u002F\u002Ftowardsdatascience.com\u002Fdata-augmentation-in-nlp-2801a34dfc28). `Augmenter` is the basic element of augmentation, while `Flow` is a pipeline that orchestrates multiple augmenters together.\n\n## Features\n*   Generate synthetic data for improving model performance without manual effort\n*   Simple, easy-to-use and lightweight library. Augment data in 3 lines of code\n*   Plug and play with any machine learning\u002F neural network framework (e.g. 
scikit-learn, PyTorch, TensorFlow)\n*   Support textual and audio input\n\n\u003Ch3 align=\"center\">Textual Data Augmentation Example\u003C\u002Fh3>\n\u003Cbr>\u003Cp align=\"center\">\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fmakcedward_nlpaug_readme_32d74f383058.png\"\u002F>\u003C\u002Fp>\n\u003Ch3 align=\"center\">Acoustic Data Augmentation Example\u003C\u002Fh3>\n\u003Cbr>\u003Cp align=\"center\">\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fmakcedward_nlpaug_readme_71dbfd4062de.png\"\u002F>\u003C\u002Fp>\n\n| Section | Description |\n|:---:|:---:|\n| [Quick Demo](https:\u002F\u002Fgithub.com\u002Fmakcedward\u002Fnlpaug#quick-demo) | How to use this library |\n| [Augmenter](https:\u002F\u002Fgithub.com\u002Fmakcedward\u002Fnlpaug#augmenter) | Introduce all available augmentation methods |\n| [Installation](https:\u002F\u002Fgithub.com\u002Fmakcedward\u002Fnlpaug#installation) | How to install this library |\n| [Recent Changes](https:\u002F\u002Fgithub.com\u002Fmakcedward\u002Fnlpaug#recent-changes) | Latest enhancements |\n| [Extension Reading](https:\u002F\u002Fgithub.com\u002Fmakcedward\u002Fnlpaug#extension-reading) | More real-life examples and research |\n| [Reference](https:\u002F\u002Fgithub.com\u002Fmakcedward\u002Fnlpaug#reference) | References to external resources such as data or models |\n\n## Quick Demo\n*   [Quick Example](https:\u002F\u002Fgithub.com\u002Fmakcedward\u002Fnlpaug\u002Fblob\u002Fmaster\u002Fexample\u002Fquick_example.ipynb)\n*   [Example of Augmentation for Textual Inputs](https:\u002F\u002Fgithub.com\u002Fmakcedward\u002Fnlpaug\u002Fblob\u002Fmaster\u002Fexample\u002Ftextual_augmenter.ipynb)\n*   [Example of Augmentation for Multilingual Textual Inputs](https:\u002F\u002Fgithub.com\u002Fmakcedward\u002Fnlpaug\u002Fblob\u002Fmaster\u002Fexample\u002Ftextual_language_augmenter.ipynb)\n*   [Example of Augmentation for Spectrogram 
Inputs](https:\u002F\u002Fgithub.com\u002Fmakcedward\u002Fnlpaug\u002Fblob\u002Fmaster\u002Fexample\u002Fspectrogram_augmenter.ipynb)\n*   [Example of Augmentation for Audio Inputs](https:\u002F\u002Fgithub.com\u002Fmakcedward\u002Fnlpaug\u002Fblob\u002Fmaster\u002Fexample\u002Faudio_augmenter.ipynb)\n*   [Example of Orchestrating Multiple Augmenters](https:\u002F\u002Fgithub.com\u002Fmakcedward\u002Fnlpaug\u002Fblob\u002Fmaster\u002Fexample\u002Fflow.ipynb)\n*   [Example of Showing Augmentation History](https:\u002F\u002Fgithub.com\u002Fmakcedward\u002Fnlpaug\u002Fblob\u002Fmaster\u002Fexample\u002Fchange_log.ipynb)\n*   How to train a [TF-IDF model](https:\u002F\u002Fgithub.com\u002Fmakcedward\u002Fnlpaug\u002Fblob\u002Fmaster\u002Fexample\u002Ftfidf-train_model.ipynb)\n*   How to train a [LAMBADA model](https:\u002F\u002Fgithub.com\u002Fmakcedward\u002Fnlpaug\u002Fblob\u002Fmaster\u002Fexample\u002Flambada-train_model.ipynb)\n*   How to create a [custom augmentation](https:\u002F\u002Fgithub.com\u002Fmakcedward\u002Fnlpaug\u002Fblob\u002Fmaster\u002Fexample\u002Fcustom_augmenter.ipynb)\n*   [API Documentation](https:\u002F\u002Fnlpaug.readthedocs.io\u002Fen\u002Flatest\u002F)\n\n## Augmenter\n| Input | Target | Augmenter | Action | Description |\n|:---:|:---:|:---:|:---:|:---:|\n|Textual| Character | KeyboardAug | substitute | Simulate keyboard distance errors |\n|Textual| | OcrAug | substitute | Simulate OCR engine errors |\n|Textual| | [RandomAug](https:\u002F\u002Fmedium.com\u002Fhackernoon\u002Fdoes-your-nlp-model-able-to-prevent-adversarial-attack-45b5ab75129c) | insert, substitute, swap, delete | Apply augmentation randomly |\n|Textual| Word | AntonymAug | substitute | Substitute a word with its opposite according to WordNet antonyms |\n|Textual| | ContextualWordEmbsAug | insert, substitute | Feed surrounding words to 
[BERT](https:\u002F\u002Ftowardsdatascience.com\u002Fhow-bert-leverage-attention-mechanism-and-transformer-to-learn-word-contextual-relations-5bbee1b6dbdb), DistilBERT, [RoBERTa](https:\u002F\u002Fmedium.com\u002Ftowards-artificial-intelligence\u002Fa-robustly-optimized-bert-pretraining-approach-f6b6e537e6a6) or [XLNet](https:\u002F\u002Fmedium.com\u002Fdataseries\u002Fwhy-does-xlnet-outperform-bert-da98a8503d5b) language model to find the most suitable word for augmentation|\n|Textual| | RandomWordAug | swap, crop, delete | Apply augmentation randomly |\n|Textual| | SpellingAug | substitute | Substitute words according to a spelling-mistake dictionary |\n|Textual| | SplitAug | split | Split one word into two words randomly|\n|Textual| | SynonymAug | substitute | Substitute similar words according to WordNet\u002F PPDB synonyms |\n|Textual| | [TfIdfAug](https:\u002F\u002Fmedium.com\u002Ftowards-artificial-intelligence\u002Funsupervised-data-augmentation-6760456db143) | insert, substitute | Use TF-IDF to determine how a word should be augmented |\n|Textual| | WordEmbsAug | insert, substitute | Leverage [word2vec](https:\u002F\u002Ftowardsdatascience.com\u002F3-silver-bullets-of-word-embedding-in-nlp-10fa8f50cc5a), [GloVe](https:\u002F\u002Ftowardsdatascience.com\u002F3-silver-bullets-of-word-embedding-in-nlp-10fa8f50cc5a) or [fasttext](https:\u002F\u002Ftowardsdatascience.com\u002F3-silver-bullets-of-word-embedding-in-nlp-10fa8f50cc5a) embeddings to apply augmentation|\n|Textual| | [BackTranslationAug](https:\u002F\u002Ftowardsdatascience.com\u002Fdata-augmentation-in-nlp-2801a34dfc28) | substitute | Leverage two translation models for augmentation |\n|Textual| | ReservedAug | substitute | Replace reserved words |\n|Textual| Sentence | ContextualWordEmbsForSentenceAug | insert | Insert a sentence according to [XLNet](https:\u002F\u002Fmedium.com\u002Fdataseries\u002Fwhy-does-xlnet-outperform-bert-da98a8503d5b), 
[GPT2](https:\u002F\u002Ftowardsdatascience.com\u002Ftoo-powerful-nlp-model-generative-pre-training-2-4cc6afb6655) or DistilGPT2 prediction |\n|Textual| | AbstSummAug | substitute | Summarize an article by abstractive summarization |\n|Textual| | LambadaAug | substitute | Use a language model to generate text, then use a classification model to retain high-quality results |\n|Signal| Audio | CropAug | delete | Delete a segment of the audio |\n|Signal| | LoudnessAug | substitute | Adjust the audio's volume |\n|Signal| | MaskAug | substitute | Mask a segment of the audio |\n|Signal| | NoiseAug | substitute | Inject noise |\n|Signal| | PitchAug | substitute | Adjust the audio's pitch |\n|Signal| | ShiftAug | substitute | Shift the time dimension forward\u002F backward |\n|Signal| | SpeedAug | substitute | Adjust the audio's speed |\n|Signal| | VtlpAug | substitute | Apply vocal tract length perturbation |\n|Signal| | NormalizeAug | substitute | Normalize the audio |\n|Signal| | PolarityInverseAug | substitute | Invert the polarity of the audio signal |\n|Signal| Spectrogram | FrequencyMaskingAug | substitute | Set a block of values to zero along the frequency dimension |\n|Signal| | TimeMaskingAug | substitute | Set a block of values to zero along the time dimension |\n|Signal| | LoudnessAug | substitute | Adjust volume |\n\n## Flow\n| Category | Augmenter | Description |\n|:---:|:---:|:---:|\n|Pipeline| Sequential | Apply a list of augmentation functions sequentially |\n|Pipeline| Sometimes | Apply some augmentation functions randomly |\n\n## Installation\nThe library supports Python 3.5+ on Linux and Windows platforms.\n\nTo install the library:\n```bash\npip install numpy requests nlpaug\n```\nor install the latest version (including BETA features) from GitHub directly:\n```bash\npip install numpy git+https:\u002F\u002Fgithub.com\u002Fmakcedward\u002Fnlpaug.git\n```\nor install via conda:\n```bash\nconda install -c makcedward nlpaug\n```\n\nIf you use BackTranslationAug, ContextualWordEmbsAug, 
ContextualWordEmbsForSentenceAug or AbstSummAug, install the following dependencies as well:\n```bash\npip install torch>=1.6.0 transformers>=4.11.3 sentencepiece\n```\n\nIf you use LambadaAug, install the following dependency as well:\n```bash\npip install simpletransformers>=0.61.10\n```\n\nIf you use AntonymAug or SynonymAug, install the following dependency as well:\n```bash\npip install nltk>=3.4.5\n```\n\nIf you use WordEmbsAug (word2vec, glove or fasttext), install the following dependency and download a pre-trained model first:\n```bash\npip install gensim>=4.1.2\n```\n```python\nfrom nlpaug.util.file.download import DownloadUtil\nDownloadUtil.download_word2vec(dest_dir='.')  # Download word2vec model\nDownloadUtil.download_glove(model_name='glove.6B', dest_dir='.')  # Download GloVe model\nDownloadUtil.download_fasttext(model_name='wiki-news-300d-1M', dest_dir='.')  # Download fasttext model\n```\n\nIf you use SynonymAug (PPDB), download the file from the following URI. You may not be able to run the augmenter if you get the PPDB file from another website:\n```\nhttp:\u002F\u002Fparaphrase.org\u002F#\u002Fdownload\n```\n\nIf you use PitchAug, SpeedAug or VtlpAug, install the following dependencies as well:\n```bash\npip install librosa>=0.9.1 matplotlib\n```\n\n## Recent Changes\n\n### 1.1.11 Jul 6, 2022\n*   [Return list of output](https:\u002F\u002Fgithub.com\u002Fmakcedward\u002Fnlpaug\u002Fissues\u002F302)\n*   [Fix download util](https:\u002F\u002Fgithub.com\u002Fmakcedward\u002Fnlpaug\u002Fissues\u002F301)\n*   [Fix lambda label misalignment](https:\u002F\u002Fgithub.com\u002Fmakcedward\u002Fnlpaug\u002Fissues\u002F295)\n*   [Add language pack reference link for SynonymAug](https:\u002F\u002Fgithub.com\u002Fmakcedward\u002Fnlpaug\u002Fissues\u002F289)\n\n\nSee [changelog](https:\u002F\u002Fgithub.com\u002Fmakcedward\u002Fnlpaug\u002Fblob\u002Fmaster\u002FCHANGE.md) for more details.\n\n## Extension Reading\n*   [Data Augmentation library for 
Text](https:\u002F\u002Ftowardsdatascience.com\u002Fdata-augmentation-library-for-text-9661736b13ff)\n*   [Does your NLP model able to prevent adversarial attack?](https:\u002F\u002Fmedium.com\u002Fhackernoon\u002Fdoes-your-nlp-model-able-to-prevent-adversarial-attack-45b5ab75129c)\n*   [How does Data Noising Help to Improve your NLP Model?](https:\u002F\u002Fmedium.com\u002Ftowards-artificial-intelligence\u002Fhow-does-data-noising-help-to-improve-your-nlp-model-480619f9fb10)\n*   [Data Augmentation library for Speech Recognition](https:\u002F\u002Ftowardsdatascience.com\u002Fdata-augmentation-for-speech-recognition-e7c607482e78)\n*   [Data Augmentation library for Audio](https:\u002F\u002Ftowardsdatascience.com\u002Fdata-augmentation-for-audio-76912b01fdf6)\n*   [Unsupervised Data Augmentation](https:\u002F\u002Fmedium.com\u002Ftowards-artificial-intelligence\u002Funsupervised-data-augmentation-6760456db143)\n*   [A Visual Survey of Data Augmentation in NLP](https:\u002F\u002Famitness.com\u002F2020\u002F05\u002Fdata-augmentation-for-nlp\u002F)\n\n## Reference\nThis library uses data (e.g. captured from the internet), research (e.g. augmenter ideas) and models (e.g. pre-trained models). See [data source](https:\u002F\u002Fgithub.com\u002Fmakcedward\u002Fnlpaug\u002Fblob\u002Fmaster\u002FSOURCE.md) for more details.\n\n## Citation\n\n```latex\n@misc{ma2019nlpaug,\n  title={NLP Augmentation},\n  author={Edward Ma},\n  howpublished={https:\u002F\u002Fgithub.com\u002Fmakcedward\u002Fnlpaug},\n  year={2019}\n}\n```\n\nThis package is cited by many books, workshops and academic research papers (70+). Here are some examples; you may visit [here](https:\u002F\u002Fgithub.com\u002Fmakcedward\u002Fnlpaug\u002Fblob\u002Fmaster\u002FCITED.md) for the full list.\n\n### Workshops citing nlpaug\n*   S. Vajjala. 
[NLP without a readymade labeled dataset](https:\u002F\u002Frpubs.com\u002Fvbsowmya\u002Ftmls2021) at the [Toronto Machine Learning Summit, 2021](https:\u002F\u002Fwww.torontomachinelearning.com\u002F). 2021\n\n### Books citing nlpaug\n*   S. Vajjala, B. Majumder, A. Gupta and H. Surana. [Practical Natural Language Processing: A Comprehensive Guide to Building Real-World NLP Systems](https:\u002F\u002Fwww.amazon.com\u002FPractical-Natural-Language-Processing-Pragmatic\u002Fdp\u002F1492054054). 2020\n*   A. Bartoli and A. Fusiello. [Computer Vision–ECCV 2020 Workshops](https:\u002F\u002Fbooks.google.com\u002Fbooks?hl=en&lr=lang_en&id=0rYREAAAQBAJ&oi=fnd&pg=PR7&dq=nlpaug&ots=88bPp5rhnY&sig=C2ue8Xxbu09l59nAMOcVxWYvvWM#v=onepage&q=nlpaug&f=false). 2020\n*   L. Werra, L. Tunstall and T. Wolf. [Natural Language Processing with Transformers](https:\u002F\u002Fwww.amazon.com\u002FNatural-Language-Processing-Transformers-Applications\u002Fdp\u002F1098103246\u002Fref=sr_1_3?crid=2CWBPA8QG0TRU&keywords=Natural+Language+Processing+with+Transformers&qid=1645646312&sprefix=natural+language+processing+with+transformers%2Caps%2C111&sr=8-3). 2022\n\n### Research papers citing nlpaug\n*   Google: M. Raghu and E. Schmidt. [A Survey of Deep Learning for Scientific Discovery](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2003.11755.pdf). 2020\n*   Sirius XM: E. Jing, K. Schneck, D. Egan and S. A. Waterman. [Identifying Introductions in Podcast Episodes from Automatically Generated Transcripts](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2110.07096.pdf). 2021\n*   Salesforce Research: B. Newman, P. K. Choubey and N. Rajani. [P-Adapters: Robustly Extracting Factual Information from Language Models with Diverse Prompts](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2110.07280.pdf). 2021\n*   Salesforce Research: L. Xue, M. Gao, Z. Chen, C. Xiong and R. Xu. [Robustness Evaluation of Transformer-based Form Field Extractors via Form Attacks](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2110.04413.pdf). 
2021\n\n\n## Contributions\n\u003Ctable>\n  \u003Ctr>\n    \u003Ctd align=\"center\">\u003Ca href=\"https:\u002F\u002Fgithub.com\u002Fsakares\">\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fmakcedward_nlpaug_readme_c921a8a5d04a.png\" width=\"100px;\" alt=\"\"\u002F>\u003Cbr \u002F>\u003Csub>\u003Cb>sakares saengkaew\u003C\u002Fb>\u003C\u002Fsub>\u003C\u002Fa>\u003Cbr \u002F>\u003C\u002Ftd>\n    \u003Ctd align=\"center\">\u003Ca href=\"https:\u002F\u002Fgithub.com\u002Fbdalal\">\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fmakcedward_nlpaug_readme_40d5cfea8d00.png\" width=\"100px;\" alt=\"\"\u002F>\u003Cbr \u002F>\u003Csub>\u003Cb>Binoy Dalal\u003C\u002Fb>\u003C\u002Fsub>\u003C\u002Fa>\u003Cbr \u002F>\u003C\u002Ftd>\n    \u003Ctd align=\"center\">\u003Ca href=\"https:\u002F\u002Fgithub.com\u002Femrecncelik\">\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fmakcedward_nlpaug_readme_88dc512cd5de.png\" width=\"100px;\" alt=\"\"\u002F>\u003Cbr \u002F>\u003Csub>\u003Cb>Emrecan Çelik\u003C\u002Fb>\u003C\u002Fsub>\u003C\u002Fa>\u003Cbr \u002F>\u003C\u002Ftd>\n  \u003C\u002Ftr>\n\u003C\u002Ftable>","\u003Cp align=\"center\">\n    \u003Cbr>\n    \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fmakcedward_nlpaug_readme_db699989f7d0.png\"\u002F>\n    \u003Cbr>\n\u003Cp>\n\u003Cp align=\"center\">\n    \u003Ca href=\"https:\u002F\u002Ftravis-ci.org\u002Fmakcedward\u002Fnlpaug\">\n        \u003Cimg alt=\"构建\" src=\"https:\u002F\u002Ftravis-ci.org\u002Fmakcedward\u002Fnlpaug.svg?branch=master\">\n    \u003C\u002Fa>\n    \u003Ca href=\"https:\u002F\u002Fwww.codacy.com\u002Fapp\u002Fmakcedward\u002Fnlpaug?utm_source=github.com&amp;utm_medium=referral&amp;utm_content=makcedward\u002Fnlpaug&amp;utm_campaign=Badge_Grade\">\n        \u003Cimg alt=\"代码质量\" src=\"https:\u002F\u002Fapi.codacy.com\u002Fproject\u002Fbadge\u002FGrade\u002F2d6d1d08016a4f78818161a89a2dfbfb\">\n    \u003C\u002Fa>\n    
\u003Ca href=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fmakcedward_nlpaug_readme_40d304df6d24.png\">\n        \u003Cimg alt=\"下载量\" src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fmakcedward_nlpaug_readme_40d304df6d24.png\">\n    \u003C\u002Fa>\n\u003C\u002Fp>\n\n# nlpaug\n\n这个 Python 库可帮助您为机器学习项目增强自然语言处理能力。请访问本介绍，了解有关【NLP 数据增强】的详细信息（[NLP 数据增强](https:\u002F\u002Ftowardsdatascience.com\u002Fdata-augmentation-in-nlp-2801a34dfc28)）。其中，“Augmenter”是数据增强的基本单元，“Flow”则是一套管道，用于将多个增强器有机地组合在一起。\n\n## 功能\n*   无需人工操作，即可生成合成数据以提升模型性能\n*   简单易用、轻量级的库；只需三行代码即可完成数据增强\n*   可即插即用，轻松集成到各类机器学习\u002F神经网络框架中（例如：scikit-learn、PyTorch、TensorFlow）\n*   支持文本和音频输入\n\n\u003Ch3 align=\"center\">文本数据增强示例\u003C\u002Fh3>\n\u003Cbr>\u003Cp align=\"center\">\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fmakcedward_nlpaug_readme_32d74f383058.png\"\u002F>\u003C\u002Fp>\n\u003Ch3 align=\"center\">声学数据增强示例\u003C\u002Fh3>\n\u003Cbr>\u003Cp align=\"center\">\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fmakcedward_nlpaug_readme_71dbfd4062de.png\"\u002F>\u003C\u002Fp>\n\n| 部分 | 说明 |\n|:---:|:---:|\n| [快速演示](https:\u002F\u002Fgithub.com\u002Fmakcedward\u002Fnlpaug#quick-demo) | 如何使用本库 |\n| [Augmenter](https:\u002F\u002Fgithub.com\u002Fmakcedward\u002Fnlpaug#augmenter) | 介绍所有可用的数据增强方法 |\n| [安装](https:\u002F\u002Fgithub.com\u002Fmakcedward\u002Fnlpaug#installation) | 如何安装本库 |\n| [最新变更](https:\u002F\u002Fgithub.com\u002Fmakcedward\u002Fnlpaug#recent-changes) | 最新功能更新 |\n| [扩展阅读](https:\u002F\u002Fgithub.com\u002Fmakcedward\u002Fnlpaug#extension-reading) | 更多真实场景案例或相关研究 |\n| [参考文档](https:\u002F\u002Fgithub.com\u002Fmakcedward\u002Fnlpaug#reference) | 外部资源（如数据或模型）的参考链接 |\n\n## 快速演示\n*   [快速示例](https:\u002F\u002Fgithub.com\u002Fmakcedward\u002Fnlpaug\u002Fblob\u002Fmaster\u002Fexample\u002Fquick_example.ipynb)\n*   
[文本输入数据增强示例](https:\u002F\u002Fgithub.com\u002Fmakcedward\u002Fnlpaug\u002Fblob\u002Fmaster\u002Fexample\u002Ftextual_augmenter.ipynb)\n*   [多语言文本输入数据增强示例](https:\u002F\u002Fgithub.com\u002Fmakcedward\u002Fnlpaug\u002Fblob\u002Fmaster\u002Fexample\u002Ftextual_language_augmenter.ipynb)\n*   [频谱图输入数据增强示例](https:\u002F\u002Fgithub.com\u002Fmakcedward\u002Fnlpaug\u002Fblob\u002Fmaster\u002Fexample\u002Fspectrogram_augmenter.ipynb)\n*   [音频输入数据增强示例](https:\u002F\u002Fgithub.com\u002Fmakcedward\u002Fnlpaug\u002Fblob\u002Fmaster\u002Fexample\u002Faudio_augmenter.ipynb)\n*   [多增强器组合示例](https:\u002F\u002Fgithub.com\u002Fmakcedward\u002Fnlpaug\u002Fblob\u002Fmaster\u002Fexample\u002Fflow.ipynb)\n*   [展示增强历史记录示例](https:\u002F\u002Fgithub.com\u002Fmakcedward\u002Fnlpaug\u002Fblob\u002Fmaster\u002Fexample\u002Fchange_log.ipynb)\n*   如何训练 [TF-IDF 模型](https:\u002F\u002Fgithub.com\u002Fmakcedward\u002Fnlpaug\u002Fblob\u002Fmaster\u002Fexample\u002Ftfidf-train_model.ipynb)\n*   如何训练 [LAMBADA 模型](https:\u002F\u002Fgithub.com\u002Fmakcedward\u002Fnlpaug\u002Fblob\u002Fmaster\u002Fexample\u002Flambada-train_model.ipynb)\n*   如何创建 [自定义增强](https:\u002F\u002Fgithub.com\u002Fmakcedward\u002Fnlpaug\u002Fblob\u002Fmaster\u002Fexample\u002Fcustom_augmenter.ipynb)\n*   [API 文档](https:\u002F\u002Fnlpaug.readthedocs.io\u002Fen\u002Flatest\u002F)\n\n## 增强器\n| 输入 | 目标 | 增强器 | 操作 | 说明 |\n|:---:|:---:|:---:|:---:|:---:|\n| 文本类 | 字符 | 键盘增强 | 替换 | 模拟键盘距离误差 |\n| 文本类 | | OCR增强 | 替换 | 模拟OCR引擎错误 |\n| 文本类 | | [随机增强](https:\u002F\u002Fmedium.com\u002Fhackernoon\u002Fdoes-your-nlp-model-able-to-prevent-adversarial-attack-45b5ab75129c) | 插入、替换、交换、删除 | 随机应用增强操作 |\n| 文本类 | 单词 | 反义词增强 | 替换 | 根据WordNet反义词表，替换具有相反含义的单词 |\n| 文本类 | | 上下文词嵌入增强 | 插入、替换 | 
将上下文中的单词输入到[BERT](https:\u002F\u002Ftowardsdatascience.com\u002Fhow-bert-leverage-attention-mechanism-and-transformer-to-learn-word-contextual-relations-5bbee1b6dbdb)、DistilBERT、[RoBERTa](https:\u002F\u002Fmedium.com\u002Ftowards-artificial-intelligence\u002Fa-robustly-optimized-bert-pretraining-approach-f6b6e537e6a6)或[XLNet](https:\u002F\u002Fmedium.com\u002Fdataseries\u002Fwhy-does-xlnet-outperform-bert-da98a8503d5b)语言模型中，以找到最适合用于增强的单词 |\n| 文本类 | | 随机单词增强 | 交换、裁剪、删除 | 随机应用增强操作 |\n| 文本类 | | 拼写增强 | 替换 | 根据拼写错误词典替换单词 |\n| 文本类 | | 分割增强 | 分割 | 随机将一个单词拆分为两个单词 |\n| 文本类 | | 同义词增强 | 替换 | 根据WordNet\u002FPPDB同义词表，替换相似的单词 |\n| 文本类 | | TF-IDF增强 | 插入、替换 | 利用TF-IDF算法，确定应如何对单词进行增强 |\n| 文本类 | | 词嵌入增强 | 插入、替换 | 利用[word2vec](https:\u002F\u002Ftowardsdatascience.com\u002F3-silver-bullets-of-word-embedding-in-nlp-10fa8f50cc5a)、[GloVe](https:\u002F\u002Ftowardsdatascience.com\u002F3-silver-bullets-of-word-embedding-in-nlp-10fa8f50cc5a)或[fasttext](https:\u002F\u002Ftowardsdatascience.com\u002F3-silver-bullets-of-word-embedding-in-nlp-10fa8f50cc5a)的词嵌入，实现增强操作 |\n| 文本类 | | 反向翻译增强 | 替换 | 利用两种翻译模型进行增强 |\n| 文本类 | | 保留词增强 | 替换 | 替换保留词 |\n| 文本类 | 句子 | 句子上下文词嵌入增强 | 插入 | 根据[XLNet](https:\u002F\u002Fmedium.com\u002Fdataseries\u002Fwhy-does-xlnet-outperform-bert-da98a8503d5b)、[GPT2](https:\u002F\u002Ftowardsdatascience.com\u002Ftoo-powerful-nlp-model-generative-pre-training-2-4cc6afb6655)或DistilGPT2的预测结果，插入句子 |\n| 文本类 | | 摘要总结增强 | 替换 | 通过摘要式总结方法，对文章进行摘要 |\n| 文本类 | | Lambada增强 | 替换 | 使用语言模型生成文本，并借助分类模型，保留高质量的结果 |\n| 信号类 | 音频 | 裁剪增强 | 删除 | 删除音频的特定片段 |\n| 信号类 | | 声音强度增强 | 替换 | 调整音频的音量 |\n| 信号类 | | 指定区域增强 | 替换 | 对音频的指定区域进行屏蔽 |\n| 信号类 | | 噪音增强 | 替换 | 注入噪声 |\n| 信号类 | | 音高增强 | 替换 | 调整音频的音高 |\n| 信号类 | | 时间偏移增强 | 替换 | 向前或向后调整时间维度 |\n| 信号类 | | 速度增强 | 替换 | 调整音频的速度 |\n| 信号类 | | 声音特性增强 | 替换 | 改变发声声道 |\n| 信号类 | | 归一化增强 | 替换 | 对音频进行归一化处理 |\n| 信号类 | | 极性反转增强 | 替换 | 将音频的正负极性互换 |\n| 信号类 | 频谱图 | 频率掩码增强 | 替换 | 根据频率维度，将某一范围的值设置为零 |\n| 信号类 | | 时间掩码增强 | 替换 | 根据时间维度，将某一范围的值设置为零 |\n| 信号类 | | 声音强度增强 | 替换 | 调整音量 |\n\n## 
流程\n| 增强器 | 增强器 | 说明 |\n|:---:|:---:|:---:|\n|管道式 | 顺序执行 | 依次应用一系列增强函数 |\n|管道式 | 有时 | 随机应用部分增强函数 |\n\n## 安装\n该库支持Linux和Windows平台上的Python 3.5及以上版本。\n\n要安装该库：\n```bash\npip install numpy requests nlpaug\n```\n或者直接从GitHub安装最新版本（包含BETA功能）：\n```bash\npip install numpy git+https:\u002F\u002Fgithub.com\u002Fmakcedward\u002Fnlpaug.git\n```\n或者使用Conda进行安装：\n```bash\nconda install -c makcedward nlpaug\n```\n\n若使用BackTranslationAug、ContextualWordEmbsAug、ContextualWordEmbsForSentenceAug以及AbstSummAug，则还需安装以下依赖项：\n```bash\npip install torch>=1.6.0 transformers>=4.11.3 sentencepiece\n```\n\n若使用LambadaAug，则需安装以下依赖项：\n```bash\npip install simpletransformers>=0.61.10\n```\n\n若使用AntonymAug、SynonymAug，则还需安装以下依赖项：\n```bash\npip install nltk>=3.4.5\n```\n\n若使用WordEmbsAug（如word2vec、glove或fasttext），请先下载预训练模型，并同时安装以下依赖项：\n```bash\nfrom nlpaug.util.file.download import DownloadUtil\nDownloadUtil.download_word2vec(dest_dir='.') # 下载word2vec模型\nDownloadUtil.download_glove(model_name='glove.6B', dest_dir='.') # 下载GloVe模型\nDownloadUtil.download_fasttext(model_name='wiki-news-300d-1M', dest_dir='.') # 下载fasttext模型\n\npip install gensim>=4.1.2\n```\n\n若使用SynonymAug（PPDB），请从以下网址下载文件。如果您从其他网站获取PPDB文件，可能无法正常运行该增强器。\n```bash\nhttp:\u002F\u002Fparaphrase.org\u002F#\u002Fdownload\n```\n\n若使用PitchAug、SpeedAug和VtlpAug，则需安装以下依赖项：\n```bash\npip install librosa>=0.9.1 matplotlib\n```\n\n## 最近更新\n\n### 2022年7月6日，版本1.1.11\n*   [返回输出列表](https:\u002F\u002Fgithub.com\u002Fmakcedward\u002Fnlpaug\u002Fissues\u002F302)\n*   [修复下载工具](https:\u002F\u002Fgithub.com\u002Fmakcedward\u002Fnlpaug\u002Fissues\u002F301)\n*   [修复lambda标签对齐问题](https:\u002F\u002Fgithub.com\u002Fmakcedward\u002Fnlpaug\u002Fissues\u002F295)\n*   [添加SynonymAug的语言包参考链接](https:\u002F\u002Fgithub.com\u002Fmakcedward\u002Fnlpaug\u002Fissues\u002F289)\n\n更多详细信息，请参阅[变更记录](https:\u002F\u002Fgithub.com\u002Fmakcedward\u002Fnlpaug\u002Fblob\u002Fmaster\u002FCHANGE.md)。\n\n## 扩展阅读\n*   
[适用于文本的数据增强库](https:\u002F\u002Ftowardsdatascience.com\u002Fdata-augmentation-library-for-text-9661736b13ff)\n*   [你的NLP模型能有效防止对抗攻击吗？](https:\u002F\u002Fmedium.com\u002Fhackernoon\u002Fdoes-your-nlp-model-able-to-prevent-adversarial-attack-45b5ab75129c)\n*   [数据噪声如何帮助提升你的NLP模型性能？](https:\u002F\u002Fmedium.com\u002Ftowards-artificial-intelligence\u002Fhow-does-data-noising-help-to-improve-your-nlp-model-480619f9fb10)\n*   [适用于语音识别的数据增强库](https:\u002F\u002Ftowardsdatascience.com\u002Fdata-augmentation-for-speech-recognition-e7c607482e78)\n*   [适用于音频的数据增强库](https:\u002F\u002Ftowardsdatascience.com\u002Fdata-augmentation-for-audio-76912b01fdf6)\n*   [无监督数据增强](https:\u002F\u002Fmedium.com\u002Ftowards-artificial-intelligence\u002Funsupervised-data-augmentation-6760456db143)\n*   [NLP领域数据增强的视觉综述](https:\u002F\u002Famitness.com\u002F2020\u002F05\u002Fdata-augmentation-for-nlp\u002F)\n\n## 参考文献\n本库采用多种数据来源（例如从互联网抓取数据）、研究方法（例如遵循增强学习理念）以及模型技术（例如利用预训练模型）。更多详细信息，请参阅[数据源](https:\u002F\u002Fgithub.com\u002Fmakcedward\u002Fnlpaug\u002Fblob\u002Fmaster\u002FSOURCE.md)。\n\n## 引用格式\n\n```latex\n@misc{ma2019nlpaug,\n  title={NLP 增强},\n  author={Edward Ma},\n  howpublished={https:\u002F\u002Fgithub.com\u002Fmakcedward\u002Fnlpaug},\n  year={2019}\n}\n```\n\n该包已被众多书籍、研讨会及学术研究论文引用，数量超过70篇。以下是一些示例，您还可以访问[此处](https:\u002F\u002Fgithub.com\u002Fmakcedward\u002Fnlpaug\u002Fblob\u002Fmaster\u002FCITED.md)以获取完整列表。\n\n### 被引用的与 NLP 增强相关的研讨会\n*   S. Vajjala. [无需现成标签数据集的 NLP](https:\u002F\u002Frpubs.com\u002Fvbsowmya\u002Ftmls2021)——于 2021 年在[多伦多机器学习峰会](https:\u002F\u002Fwww.torontomachinelearning.com\u002F)举行。2021 年\n\n### 被引用的与 NLP 增强相关的书籍\n*   S. Vajjala、B. Majumder、A. Gupta 和 H. Surana. [实用自然语言处理：构建真实世界 NLP 系统的全面指南](https:\u002F\u002Fwww.amazon.com\u002FPractical-Natural-Language-Processing-Pragmatic\u002Fdp\u002F1492054054)。2020 年\n*   A. Bartoli 和 A. Fusiello. 
[计算机视觉——ECCV 2020 研讨会](https:\u002F\u002Fbooks.google.com\u002Fbooks?hl=en&lr=lang_en&id=0rYREAAAQBAJ&oi=fnd&pg=PR7&dq=nlpaug&ots=88bPp5rhnY&sig=C2ue8Xxbu09l59nAMOcVxWYvvWM#v=onepage&q=nlpaug&f=false)。2020 年\n*   L. Werra、L. Tunstall 和 T. Wolf [使用 Transformer 的自然语言处理](https:\u002F\u002Fwww.amazon.com\u002FNatural-Language-Processing-Transformers-Applications\u002Fdp\u002F1098103246\u002Fref=sr_1_3?crid=2CWBPA8QG0TRU&keywords=Natural+Language+Processing+with+Transformers&qid=1645646312&sprefix=natural+language+processing+with+transformers%2Caps%2C111&sr=8-3)。2022 年\n\n### 被引用的与 NLP 增强相关的研究论文\n*   Google：M. Raghu 和 E. Schmidt. [深度学习在科学发现中的应用综述](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2003.11755.pdf)。2020 年\n*   Sirius XM：E. Jing、K. Schneck、D. Egan 和 S. A. Waterman. [从自动生成的转录文本中识别播客节目中的开场白](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2110.07096.pdf)。2021 年\n*   Salesforce 研究：B. Newman、P. K. Choubey 和 N. Rajani. [P-适配器：通过多样化的提示，从语言模型中稳健地提取事实性信息](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2110.07280.pdf)。2021 年\n*   Salesforce 研究：L. Xue、M. Gao、Z. Chen、C. Xiong 和 R. Xu. 
[通过表单攻击评估基于 Transformer 的表单字段提取器的鲁棒性](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2110.04413.pdf)。2021 年\n\n\n## 贡献者\n\u003Ctable>\n  \u003Ctr>\n    \u003Ctd align=\"center\">\u003Ca href=\"https:\u002F\u002Fgithub.com\u002Fsakares\">\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fmakcedward_nlpaug_readme_c921a8a5d04a.png\" width=\"100px;\" alt=\"\"\u002F>\u003Cbr \u002F>\u003Csub>\u003Cb>sakares saengkaew\u003C\u002Fb>\u003C\u002Fsub>\u003C\u002Fa>\u003Cbr \u002F>\u003C\u002Ftd>\n    \u003Ctd align=\"center\">\u003Ca href=\"https:\u002F\u002Fgithub.com\u002Fbdalal\">\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fmakcedward_nlpaug_readme_40d5cfea8d00.png\" width=\"100px;\" alt=\"\"\u002F>\u003Cbr \u002F>\u003Csub>\u003Cb>Binoy Dalal\u003C\u002Fb>\u003C\u002Fsub>\u003C\u002Fa>\u003Cbr \u002F>\u003C\u002Ftd>\n    \u003Ctd align=\"center\">\u003Ca href=\"https:\u002F\u002Fgithub.com\u002Femrecncelik\">\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fmakcedward_nlpaug_readme_88dc512cd5de.png\" width=\"100px;\" alt=\"\"\u002F>\u003Cbr \u002F>\u003Csub>\u003Cb>Emrecan Çelik\u003C\u002Fb>\u003C\u002Fsub>\u003C\u002Fa>\u003Cbr \u002F>\u003C\u002Ftd>\n  \u003C\u002Ftr>\n\u003C\u002Ftable>","# nlpaug 中文快速上手指南\n\n## 环境准备\n- **系统**：Linux \u002F Windows，Python ≥ 3.5  \n- **可选依赖**：  \n  - 文本增强（BERT、回译等）：`torch ≥ 1.6.0`、`transformers ≥ 4.11.3`、`sentencepiece`  \n  - 词向量：`gensim ≥ 4.1.2`  \n  - 音频增强：`librosa ≥ 0.9.1`、`matplotlib`  \n  - 同义词\u002F反义词：`nltk ≥ 3.4.5`\n\n## 安装步骤\n```bash\n# 1. 基础安装\npip install numpy requests nlpaug -i https:\u002F\u002Fpypi.tuna.tsinghua.edu.cn\u002Fsimple\n\n# 2. 如需最新版（含 Beta 功能）\npip install numpy git+https:\u002F\u002Fgithub.com\u002Fmakcedward\u002Fnlpaug.git\n\n# 3. 文本增强扩展（版本约束需加引号，避免被 shell 解析为重定向）\npip install \"torch>=1.6.0\" \"transformers>=4.11.3\" sentencepiece -i https:\u002F\u002Fpypi.tuna.tsinghua.edu.cn\u002Fsimple\n\n# 4. 
音频增强扩展\npip install \"librosa>=0.9.1\" matplotlib -i https:\u002F\u002Fpypi.tuna.tsinghua.edu.cn\u002Fsimple\n```\n\n## 基本使用\n```python\nimport nlpaug.augmenter.word as naw\n\n# 1. 同义词替换（WordNet 同义词库仅覆盖英文，中文需改用词向量或自定义词典）\naug = naw.SynonymAug(aug_src='wordnet')\ntext = \"The quick brown fox jumps over the lazy dog.\"\nprint(aug.augment(text))\n# 返回列表，内容为随机同义替换后的句子（每次运行结果不同）\n\n# 2. 回译增强（中→英→中）\nfrom nlpaug.augmenter.word import BackTranslationAug\nbt_aug = BackTranslationAug(\n    from_model_name='Helsinki-NLP\u002Fopus-mt-zh-en',\n    to_model_name='Helsinki-NLP\u002Fopus-mt-en-zh'\n)\nprint(bt_aug.augment(\"今天天气真好\"))\n# 返回回译后的文本列表（实际输出视模型版本而定）\n```\n\n完成！现在你可以将增强后的数据直接用于任何机器学习框架（PyTorch、TensorFlow、scikit-learn 等）。","一家做智能客服的初创公司，正在训练一个中文意图分类模型，用来识别用户“退货\u002F换货\u002F开发票\u002F查物流”等 20 多种意图，训练集只有 1.2 万条真实对话，模型上线后准确率仅 78%，产品经理要求两周内提升到 90%。\n\n### 没有 nlpaug 时\n- 数据团队只能人工编写同义句，3 个人 3 天写了 800 条，效率低且句式单一。  \n- 为了覆盖口语化表达，又去找客服聊天记录，清洗、脱敏、标注再花 5 天，结果只多出 2000 条。  \n- 训练集仍然不平衡，“开发票”只有 300 条，模型对该意图的 F1 仅 0.55。  \n- 上线前做鲁棒性测试，发现用户把“退货”打成“退火”或“tuihuo”就识别失败，只能临时加规则补丁。  \n\n### 使用 nlpaug 后\n- 一行代码 `nlpaug.augmenter.char.KeyboardAug()` 自动生成 5000 条含键盘误触的句子，10 分钟搞定。  \n- 用 `nlpaug.augmenter.word.WordEmbsAug`（model_type='word2vec'，加载中文词向量）把“退货”扩展成“退钱、退单、退商品”，2 小时产出 1.5 万条语义等价样本。  \n- 针对“开发票”类别，用 `nlpaug.augmenter.word.RandomWordAug(action=\"substitute\")` 随机替换金额、抬头等实体，把样本量从 300 扩到 3000，F1 提升到 0.82。  \n- 通过 `nlpaug.flow.Sequential` 把字符级、词级、句式级增强串联，一次性生成 5 万条多样化数据，模型整体准确率 7 天内从 78% 提到 91%，无需额外人工标注。  \n\nnlpaug 让数据增强像调参一样简单，两周内用极低成本把中文意图分类模型推向可用水平。","https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fmakcedward_nlpaug_e4157a3e.png","makcedward","Edward Ma","https:\u002F\u002Foss.gittoolsai.com\u002Favatars\u002Fmakcedward_74d33cf5.jpg","Focus on Natural Language Processing, Transferring Learning, Data Science Architecture","SambaNova Systems","San Francisco Bay Area",null,"https:\u002F\u002Fmakcedward.github.io\u002F","https:\u002F\u002Fgithub.com\u002Fmakcedward",[85,89,93],{"name":86,"color":87,"percentage":88},"Jupyter 
Notebook","#DA5B0B",60.2,{"name":90,"color":91,"percentage":92},"Python","#3572A5",39.7,{"name":94,"color":95,"percentage":96},"Shell","#89e051",0.2,4654,474,"2026-04-02T21:45:16","MIT",1,"Linux, Windows","未说明",{"notes":105,"python":106,"dependencies":107},"macOS 未在官方支持列表中；如需使用 BackTranslationAug、ContextualWordEmbsAug、LambadaAug、SynonymAug、WordEmbsAug 及音频相关 Augmenter，需额外安装对应依赖并下载预训练模型或数据文件（如 word2vec、GloVe、fastText、PPDB 等）。","3.5+",[108,109,110,111,112,113,114,115,116,117],"numpy","requests","torch>=1.6.0","transformers>=4.11.3","sentencepiece","simpletransformers>=0.61.10","nltk>=3.4.5","gensim>=4.1.2","librosa>=0.9.1","matplotlib",[14,51,54,26,13,15],[120,121,122,123,124,125,126,127,128,129],"nlp","augmentation","machine-learning","artificial-intelligence","data-science","natural-language-processing","adversarial-attacks","adversarial-example","ai","ml",4,"2026-03-27T02:49:30.150509","2026-04-06T07:12:05.195255",[134,139,144,149,154],{"id":135,"question_zh":136,"answer_zh":137,"source_url":138},6195,"ContextualWordEmbsAug 突然变慢，如何排查与提速？","1. 长句会显著拖慢速度，因为 Transformer 要对每个被 mask 的 token 单独做一次前向计算；句子越长、aug_p 越大，耗时越高。\n2. 1.1.4 版本后改用 HuggingFace 官方 API，单次增强耗时从约 1s 增至约 9s（Colab 实测）。若对速度敏感，可暂时回退到 1.1.3：\n   ```bash\n   pip install nlpaug==1.1.3\n   ```\n3. 若 GPU 仍慢，可减小 aug_p（如 0.1）或限制句子长度（截断\u002F分句）。\n4. 
最新版已支持一次传入多条文本给 Transformer，可升级到最新版再试：\n   ```bash\n   pip install -U nlpaug\n   ```","https:\u002F\u002Fgithub.com\u002Fmakcedward\u002Fnlpaug\u002Fissues\u002F248",{"id":140,"question_zh":141,"answer_zh":142,"source_url":143},6196,"在 Google Colab 上用 GPU 运行 ContextualWordEmbsAug 报 “RuntimeError: Input, output and indices must be on the current device” 怎么办？","通常是 device 设置不一致导致。确保模型与输入都在同一 GPU：\n```python\nimport torch\naug = naw.ContextualWordEmbsAug(\n    model_path='bert-base-uncased',\n    device='cuda' if torch.cuda.is_available() else 'cpu'\n)\n```\n如果仍报错，可重启 Colab 并确认先执行：\n```python\n!pip install torch transformers nlpaug\n```\n官方 Colab 示例：https:\u002F\u002Fcolab.research.google.com\u002Fdrive\u002F1A152yd4M5Lzo6rjP-gmFSNnklusT_xC0","https:\u002F\u002Fgithub.com\u002Fmakcedward\u002Fnlpaug\u002Fissues\u002F211",{"id":145,"question_zh":146,"answer_zh":147,"source_url":148},6197,"使用 ContextualWordEmbsAug 出现 NameError: name 'BertTokenizer' is not defined 如何解决？","该错误一般因 transformers 版本与 nlpaug 不匹配。请按以下顺序重装：\n```bash\npip install -U nlpaug transformers torch\n```\n若仍报错，可显式导入 tokenizer：\n```python\nfrom transformers import BertTokenizer\n```\n并确保 transformers≥3.0.0、torch≥1.2.0。Colab 用户建议新建干净环境再试。","https:\u002F\u002Fgithub.com\u002Fmakcedward\u002Fnlpaug\u002Fissues\u002F71",{"id":150,"question_zh":151,"answer_zh":152,"source_url":153},6198,"BertAug 做 insert 时抛 ValueError: Sample larger than population or is negative 怎么办？","原因是待选词列表为空，导致 random.sample 出错。常见触发条件：\n1. 文本过短，模型预测结果为空。\n2. 
aug_n 设置过大。\n解决：\n- 升级到最新版，已把 sample 改为 random.choice，避免该异常。\n- 临时方案：手动降低 aug_n 或 aug_p，例如：\n```python\naug = naw.BertAug(action='insert', aug_n=1, aug_p=0.1)\n```","https:\u002F\u002Fgithub.com\u002Fmakcedward\u002Fnlpaug\u002Fissues\u002F38",{"id":155,"question_zh":156,"answer_zh":157,"source_url":158},6199,"加载葡萄牙语等多词嵌入文件（如 “Hey there 0.001 …”）时报错，如何处理？","nilc.icmc.usp.br 提供的 fastText、GloVe、Word2Vec 文件格式与官方 fastText 一致，请使用 FasttextAug 加载：\n```python\nimport nlpaug.augmenter.word as naw\naug = naw.FasttextAug(model_path='cbow_s50.txt')\n```\n若仍想用 Word2VecAug\u002FGloVeAug，需先自行把含空格的词用下划线连接或等待后续版本修复。","https:\u002F\u002Fgithub.com\u002Fmakcedward\u002Fnlpaug\u002Fissues\u002F5",[160,165,170,175,180,184,189,193,197,201,205,209,213,218,223,228,233,238,243,248],{"id":161,"version":162,"summary_zh":163,"released_at":164},105752,"1.1.11","*   [Return list of output](https:\u002F\u002Fgithub.com\u002Fmakcedward\u002Fnlpaug\u002Fissues\u002F302)\r\n*   [Fix download util](https:\u002F\u002Fgithub.com\u002Fmakcedward\u002Fnlpaug\u002Fissues\u002F301)\r\n*   [Fix lambda label misalignment](https:\u002F\u002Fgithub.com\u002Fmakcedward\u002Fnlpaug\u002Fissues\u002F295)\r\n*   [Add language pack reference link for SynonymAug](https:\u002F\u002Fgithub.com\u002Fmakcedward\u002Fnlpaug\u002Fissues\u002F289)","2022-07-07T05:24:14",{"id":166,"version":167,"summary_zh":168,"released_at":169},105753,"1.1.10","KeywordAug supports Turkish\r\nFix FrequencyMasking time range\r\nRemove unnecessary printout\r\nRollback ContextualWordEmbsForSentenceAug and AbstSummAug to use custom transformers API to reduce execution time","2021-12-25T02:49:58",{"id":171,"version":172,"summary_zh":173,"released_at":174},105754,"1.1.9","*   [ReservedAug supports generating all combinations](https:\u002F\u002Fgithub.com\u002Fmakcedward\u002Fnlpaug\u002Fpull\u002F251)\r\n*   [Rollback to use native HuggingFace API from Huggingface pipeline to solve slow performance 
issue](https:\u002F\u002Fgithub.com\u002Fmakcedward\u002Fnlpaug\u002Fissues\u002F248)\r\n*   [Added description to explain the model of WordEmbsAug is custom class](https:\u002F\u002Fgithub.com\u002Fmakcedward\u002Fnlpaug\u002Fissues\u002F249)\r\n*   [Change random behavior to increase more augmentation samples](https:\u002F\u002Fgithub.com\u002Fmakcedward\u002Fnlpaug\u002Fpull\u002F228)\r\n*   [Fix SpeedAug random factor issue](https:\u002F\u002Fgithub.com\u002Fmakcedward\u002Fnlpaug\u002Fissues\u002F207)","2021-12-01T15:44:08",{"id":176,"version":177,"summary_zh":178,"released_at":179},105755,"1.1.8","*   [OCRAug support customer mapping\u002F json file](https:\u002F\u002Fgithub.com\u002Fmakcedward\u002Fnlpaug\u002Fissues\u002F241)\r\n*   [Improve slow loading word2vec issue](https:\u002F\u002Fgithub.com\u002Fmakcedward\u002Fnlpaug\u002Fissues\u002F239)\r\n*   [Solve transformers comparability issue](https:\u002F\u002Fgithub.com\u002Fmakcedward\u002Fnlpaug\u002Fissues\u002F243)","2021-11-20T15:53:22",{"id":181,"version":182,"summary_zh":81,"released_at":183},105756,"1.1.5","2021-07-16T06:29:47",{"id":185,"version":186,"summary_zh":187,"released_at":188},105757,"1.1.4","Release 1.1.4","2021-06-20T22:18:56",{"id":190,"version":191,"summary_zh":81,"released_at":192},105758,"1.1.3","2021-03-07T23:29:30",{"id":194,"version":195,"summary_zh":81,"released_at":196},105759,"1.1.2","2021-01-09T12:51:29",{"id":198,"version":199,"summary_zh":81,"released_at":200},105760,"1.1.1","2020-12-11T16:46:30",{"id":202,"version":203,"summary_zh":81,"released_at":204},105761,"1.1.0","2020-11-14T04:13:01",{"id":206,"version":207,"summary_zh":81,"released_at":208},105762,"1.0.1","2020-09-26T04:37:47",{"id":210,"version":211,"summary_zh":81,"released_at":212},105763,"1.0.0","2020-09-26T04:33:49",{"id":214,"version":215,"summary_zh":216,"released_at":217},105764,"0.0.20","* Update MANIFECT file to include txt 
resource","2020-08-23T03:23:53",{"id":219,"version":220,"summary_zh":221,"released_at":222},105765,"0.0.19","* Add back English mispelling dictionary","2020-08-23T03:09:17",{"id":224,"version":225,"summary_zh":226,"released_at":227},105766,"0.0.18","*   Fix PPDB model misloaded nltk module[#144](https:\u002F\u002Fgithub.com\u002Fmakcedward\u002Fnlpaug\u002Fissues\u002F144)","2020-08-22T04:26:54",{"id":229,"version":230,"summary_zh":231,"released_at":232},105767,"0.0.17","*   Enhance default tokenizer and reverse tokenizer[#143](https:\u002F\u002Fgithub.com\u002Fmakcedward\u002Fnlpaug\u002Fissues\u002F143)\r\n*   Introduce Abstractive Summarization in sentence ausgmenter (Check out example from [here](https:\u002F\u002Fgithub.com\u002Fmakcedward\u002Fnlpaug\u002Fblob\u002Fmaster\u002Fexample\u002Ftextual_augmenter.ipynb))","2020-08-21T06:00:12",{"id":234,"version":235,"summary_zh":236,"released_at":237},105768,"0.0.16","Fix [#142](https:\u002F\u002Fgithub.com\u002Fmakcedward\u002Fnlpaug\u002Fissues\u002F142)","2020-08-12T05:54:31",{"id":239,"version":240,"summary_zh":241,"released_at":242},105769,"0.0.15","Support crop action in RandomWordAug [#126](https:\u002F\u002Fgithub.com\u002Fmakcedward\u002Fnlpaug\u002Fissues\u002F126)\r\nFix [#130](https:\u002F\u002Fgithub.com\u002Fmakcedward\u002Fnlpaug\u002Fissues\u002F130)\r\nFix [#132](https:\u002F\u002Fgithub.com\u002Fmakcedward\u002Fnlpaug\u002Fissues\u002F132)\r\nFix [#134](https:\u002F\u002Fgithub.com\u002Fmakcedward\u002Fnlpaug\u002Fissues\u002F134)\r\nUpgraded and verified torch (1.6.0) and transformers (3.0.2) libraies\r\nAdd new Back Translation Augmenter [#75](https:\u002F\u002Fgithub.com\u002Fmakcedward\u002Fnlpaug\u002Fissues\u002F75) [#102](https:\u002F\u002Fgithub.com\u002Fmakcedward\u002Fnlpaug\u002Fissues\u002F102) 
[#131](https:\u002F\u002Fgithub.com\u002Fmakcedward\u002Fnlpaug\u002Fissues\u002F131)","2020-08-11T05:31:46",{"id":244,"version":245,"summary_zh":246,"released_at":247},105770,"0.0.12","ContextualWordEmbsAug supports bert-base-multilingual-uncased (for non English inputs)\r\nFix missing library dependency #74\r\nFix single token error when using RandomWordAug #76\r\nFix replacing character in RandomCharAug error #77\r\nEnhance word's augmenter to support regular expression stopwords #81\r\nEnhance char's augmenter to support regular expression stopwords #86\r\nKeyboardAug supports Thai language #92\r\nFix word casing issue #82","2020-02-06T04:10:12",{"id":249,"version":250,"summary_zh":251,"released_at":252},105771,"0.0.11","Support color noise (pink, blue, red and violet noise) in audio's NoiseAug\r\nSupport given background noise in audio's NoiseAug\r\nSupport inject noise to portion of audio only in audio's NoiseAug\r\nIntroduce zone, coverage to all audio augmenter. Support only augmented portion of audio input\r\nAdd VTLP augmentation methods (Audio's augmenter)\r\nAdopt latest transformer's interface #59\r\nSupport RoBERTa (including DistilRoBERTa) and DistilBERT (ContextualWordEmbsAug)\r\nSupport DistilGPT2 (ContextualWordEmbsForSentenceAug)\r\nFix librosa hard dependency #62\r\nIntroduce optimize attribute ContextualWordEmbsForSentenceAug #63\r\nOptimize word selection for ContextualWordEmbsAug and ContextualWordEmbsForSentenceAug (Speed up around 30%)\r\nAdd retry mechanism into ContextualWordEmbsAug insert action #68","2019-12-06T04:21:48"]