[{"data":1,"prerenderedAt":-1},["ShallowReactive",2],{"similar-styfeng--DataAug4NLP":3,"tool-styfeng--DataAug4NLP":64},[4,17,27,35,43,56],{"id":5,"name":6,"github_repo":7,"description_zh":8,"stars":9,"difficulty_score":10,"last_commit_at":11,"category_tags":12,"status":16},3808,"stable-diffusion-webui","AUTOMATIC1111\u002Fstable-diffusion-webui","stable-diffusion-webui 是一个基于 Gradio 构建的网页版操作界面，旨在让用户能够轻松地在本地运行和使用强大的 Stable Diffusion 图像生成模型。它解决了原始模型依赖命令行、操作门槛高且功能分散的痛点，将复杂的 AI 绘图流程整合进一个直观易用的图形化平台。\n\n无论是希望快速上手的普通创作者、需要精细控制画面细节的设计师，还是想要深入探索模型潜力的开发者与研究人员，都能从中获益。其核心亮点在于极高的功能丰富度：不仅支持文生图、图生图、局部重绘（Inpainting）和外绘（Outpainting）等基础模式，还独创了注意力机制调整、提示词矩阵、负向提示词以及“高清修复”等高级功能。此外，它内置了 GFPGAN 和 CodeFormer 等人脸修复工具，支持多种神经网络放大算法，并允许用户通过插件系统无限扩展能力。即使是显存有限的设备，stable-diffusion-webui 也提供了相应的优化选项，让高质量的 AI 艺术创作变得触手可及。",162132,3,"2026-04-05T11:01:52",[13,14,15],"开发框架","图像","Agent","ready",{"id":18,"name":19,"github_repo":20,"description_zh":21,"stars":22,"difficulty_score":23,"last_commit_at":24,"category_tags":25,"status":16},1381,"everything-claude-code","affaan-m\u002Feverything-claude-code","everything-claude-code 是一套专为 AI 编程助手（如 Claude Code、Codex、Cursor 等）打造的高性能优化系统。它不仅仅是一组配置文件，而是一个经过长期实战打磨的完整框架，旨在解决 AI 代理在实际开发中面临的效率低下、记忆丢失、安全隐患及缺乏持续学习能力等核心痛点。\n\n通过引入技能模块化、直觉增强、记忆持久化机制以及内置的安全扫描功能，everything-claude-code 能显著提升 AI 在复杂任务中的表现，帮助开发者构建更稳定、更智能的生产级 AI 代理。其独特的“研究优先”开发理念和针对 Token 消耗的优化策略，使得模型响应更快、成本更低，同时有效防御潜在的攻击向量。\n\n这套工具特别适合软件开发者、AI 研究人员以及希望深度定制 AI 工作流的技术团队使用。无论您是在构建大型代码库，还是需要 AI 协助进行安全审计与自动化测试，everything-claude-code 都能提供强大的底层支持。作为一个曾荣获 Anthropic 黑客大奖的开源项目，它融合了多语言支持与丰富的实战钩子（hooks），让 AI 真正成长为懂上",138956,2,"2026-04-05T11:33:21",[13,15,26],"语言模型",{"id":28,"name":29,"github_repo":30,"description_zh":31,"stars":32,"difficulty_score":23,"last_commit_at":33,"category_tags":34,"status":16},2271,"ComfyUI","Comfy-Org\u002FComfyUI","ComfyUI 是一款功能强大且高度模块化的视觉 AI 引擎，专为设计和执行复杂的 Stable Diffusion 图像生成流程而打造。它摒弃了传统的代码编写模式，采用直观的节点式流程图界面，让用户通过连接不同的功能模块即可构建个性化的生成管线。\n\n这一设计巧妙解决了高级 AI 
绘图工作流配置复杂、灵活性不足的痛点。用户无需具备编程背景，也能自由组合模型、调整参数并实时预览效果，轻松实现从基础文生图到多步骤高清修复等各类复杂任务。ComfyUI 拥有极佳的兼容性，不仅支持 Windows、macOS 和 Linux 全平台，还广泛适配 NVIDIA、AMD、Intel 及苹果 Silicon 等多种硬件架构，并率先支持 SDXL、Flux、SD3 等前沿模型。\n\n无论是希望深入探索算法潜力的研究人员和开发者，还是追求极致创作自由度的设计师与资深 AI 绘画爱好者，ComfyUI 都能提供强大的支持。其独特的模块化架构允许社区不断扩展新功能，使其成为当前最灵活、生态最丰富的开源扩散模型工具之一，帮助用户将创意高效转化为现实。",107662,"2026-04-03T11:11:01",[13,14,15],{"id":36,"name":37,"github_repo":38,"description_zh":39,"stars":40,"difficulty_score":23,"last_commit_at":41,"category_tags":42,"status":16},3704,"NextChat","ChatGPTNextWeb\u002FNextChat","NextChat 是一款轻量且极速的 AI 助手，旨在为用户提供流畅、跨平台的大模型交互体验。它完美解决了用户在多设备间切换时难以保持对话连续性，以及面对众多 AI 模型不知如何统一管理的痛点。无论是日常办公、学习辅助还是创意激发，NextChat 都能让用户随时随地通过网页、iOS、Android、Windows、MacOS 或 Linux 端无缝接入智能服务。\n\n这款工具非常适合普通用户、学生、职场人士以及需要私有化部署的企业团队使用。对于开发者而言，它也提供了便捷的自托管方案，支持一键部署到 Vercel 或 Zeabur 等平台。\n\nNextChat 的核心亮点在于其广泛的模型兼容性，原生支持 Claude、DeepSeek、GPT-4 及 Gemini Pro 等主流大模型，让用户在一个界面即可自由切换不同 AI 能力。此外，它还率先支持 MCP（Model Context Protocol）协议，增强了上下文处理能力。针对企业用户，NextChat 提供专业版解决方案，具备品牌定制、细粒度权限控制、内部知识库整合及安全审计等功能，满足公司对数据隐私和个性化管理的高标准要求。",87618,"2026-04-05T07:20:52",[13,26],{"id":44,"name":45,"github_repo":46,"description_zh":47,"stars":48,"difficulty_score":23,"last_commit_at":49,"category_tags":50,"status":16},2268,"ML-For-Beginners","microsoft\u002FML-For-Beginners","ML-For-Beginners 是由微软推出的一套系统化机器学习入门课程，旨在帮助零基础用户轻松掌握经典机器学习知识。这套课程将学习路径规划为 12 周，包含 26 节精炼课程和 52 道配套测验，内容涵盖从基础概念到实际应用的完整流程，有效解决了初学者面对庞大知识体系时无从下手、缺乏结构化指导的痛点。\n\n无论是希望转型的开发者、需要补充算法背景的研究人员，还是对人工智能充满好奇的普通爱好者，都能从中受益。课程不仅提供了清晰的理论讲解，还强调动手实践，让用户在循序渐进中建立扎实的技能基础。其独特的亮点在于强大的多语言支持，通过自动化机制提供了包括简体中文在内的 50 多种语言版本，极大地降低了全球不同背景用户的学习门槛。此外，项目采用开源协作模式，社区活跃且内容持续更新，确保学习者能获取前沿且准确的技术资讯。如果你正寻找一条清晰、友好且专业的机器学习入门之路，ML-For-Beginners 将是理想的起点。",84991,"2026-04-05T10:45:23",[14,51,52,53,15,54,26,13,55],"数据工具","视频","插件","其他","音频",{"id":57,"name":58,"github_repo":59,"description_zh":60,"stars":61,"difficulty_score":10,"last_commit_at":62,"category_tags":63,"status":16},3128,"ragflow","infiniflow\u002Fragflow","RAGFlow 
是一款领先的开源检索增强生成（RAG）引擎，旨在为大语言模型构建更精准、可靠的上下文层。它巧妙地将前沿的 RAG 技术与智能体（Agent）能力相结合，不仅支持从各类文档中高效提取知识，还能让模型基于这些知识进行逻辑推理和任务执行。\n\n在大模型应用中，幻觉问题和知识滞后是常见痛点。RAGFlow 通过深度解析复杂文档结构（如表格、图表及混合排版），显著提升了信息检索的准确度，从而有效减少模型“胡编乱造”的现象，确保回答既有据可依又具备时效性。其内置的智能体机制更进一步，使系统不仅能回答问题，还能自主规划步骤解决复杂问题。\n\n这款工具特别适合开发者、企业技术团队以及 AI 研究人员使用。无论是希望快速搭建私有知识库问答系统，还是致力于探索大模型在垂直领域落地的创新者，都能从中受益。RAGFlow 提供了可视化的工作流编排界面和灵活的 API 接口，既降低了非算法背景用户的上手门槛，也满足了专业开发者对系统深度定制的需求。作为基于 Apache 2.0 协议开源的项目，它正成为连接通用大模型与行业专有知识之间的重要桥梁。",77062,"2026-04-04T04:44:48",[15,14,13,26,54],{"id":65,"github_repo":66,"name":67,"description_en":68,"description_zh":69,"ai_summary_zh":69,"readme_en":70,"readme_zh":71,"quickstart_zh":72,"use_case_zh":73,"hero_image_url":74,"owner_login":75,"owner_name":76,"owner_avatar_url":77,"owner_bio":78,"owner_company":79,"owner_location":80,"owner_email":80,"owner_twitter":81,"owner_website":82,"owner_url":83,"languages":80,"stars":84,"forks":85,"last_commit_at":86,"license":80,"difficulty_score":87,"env_os":88,"env_gpu":89,"env_ram":89,"env_deps":90,"category_tags":93,"github_topics":94,"view_count":23,"oss_zip_url":80,"oss_zip_packed_at":80,"status":16,"created_at":105,"updated_at":106,"faqs":107,"releases":118},2797,"styfeng\u002FDataAug4NLP","DataAug4NLP","Collection of papers and resources for data augmentation for NLP.","DataAug4NLP 是一个专为自然语言处理（NLP）领域打造的数据增强资源库，旨在帮助开发者和研究人员解决训练数据稀缺、类别不平衡及模型泛化能力不足等核心难题。它并非一个直接运行的软件包，而是一份精心整理的学术论文集与技术资源指南，系统性地收录了从文本分类、机器翻译到对话生成等十余个细分任务的前沿数据增强方法。\n\n该项目的独特亮点在于其严谨的学术背景与结构化分类。内容基于发表在 ACL 2021 上的权威综述论文构建，将复杂的增强技术按应用场景（如序列标注、语法纠错）和功能目标（如缓解偏见、对抗样本生成）进行了清晰归类。无论是需要提升小样本模型性能的算法工程师，还是希望追踪最新科研动态的学者，都能在此快速定位到适合特定任务的策略，如同义词替换、上下文增强等经典与创新方案。通过提供论文链接、适用数据集及部分代码实现，DataAug4NLP 极大地降低了技术落地门槛，是 NLP 从业者优化模型表现、探索数据潜力的实用案头参考。","# Data Augmentation Techniques for NLP \n\n\nIf you'd like to add your paper, do not email us. 
Instead, read the protocol for [adding a new entry](https:\u002F\u002Fgithub.com\u002Fstyfeng\u002FDataAug4NLP\u002Fblob\u002Fmain\u002Frules.md) and send a pull request.\n\nWe group the papers by [text classification](#text-classification), [translation](#translation), [summarization](#summarization), [question-answering](#question-answering), [sequence tagging](#sequence-tagging), [parsing](#parsing), [grammatical-error-correction](#grammatical-error-correction), [generation](#generation), [dialogue](#dialogue), [multimodal](#multimodal), [mitigating bias](#mitigating-bias), [mitigating class imbalance](#mitigating-class-imbalance), [adversarial examples](#adversarial-examples), [compositionality](#compositionality), and [automated augmentation](#automated-augmentation).\n\nThis repository is based on our paper, [\"A survey of data augmentation approaches in NLP (Findings of ACL '21)\"](https:\u002F\u002Faclanthology.org\u002F2021.findings-acl.84\u002F). You can cite it as follows:\n```\n@inproceedings{feng-etal-2021-survey,\n    title = \"A Survey of Data Augmentation Approaches for {NLP}\",\n    author = \"Feng, Steven Y.  and\n      Gangal, Varun  and\n      Wei, Jason  and\n      Chandar, Sarath  and\n      Vosoughi, Soroush  and\n      Mitamura, Teruko  and\n      Hovy, Eduard\",\n    booktitle = \"Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021\",\n    month = aug,\n    year = \"2021\",\n    address = \"Online\",\n    publisher = \"Association for Computational Linguistics\",\n    url = \"https:\u002F\u002Faclanthology.org\u002F2021.findings-acl.84\",\n    doi = \"10.18653\u002Fv1\u002F2021.findings-acl.84\",\n    pages = \"968--988\",\n}\n```\nAuthors: \u003Ca href=\"https:\u002F\u002Fscholar.google.ca\u002Fcitations?hl=en&user=zwiszZIAAAAJ\">Steven Y. 
Feng\u003C\u002Fa>,\n\t\t\t  \u003Ca href=\"https:\u002F\u002Fscholar.google.com\u002Fcitations?user=rWZq2nQAAAAJ&hl=en\">Varun Gangal\u003C\u002Fa>,\n\t\t\t  \u003Ca href=\"https:\u002F\u002Fscholar.google.com\u002Fcitations?user=wA5TK_0AAAAJ&hl=en\">Jason Wei\u003C\u002Fa>,\n\t\t\t  \u003Ca href=\"https:\u002F\u002Fscholar.google.co.in\u002Fcitations?user=yxWtZLAAAAAJ&hl=en\">Sarath Chandar\u003C\u002Fa>,\n\t\t\t  \u003Ca href=\"https:\u002F\u002Fscholar.google.ca\u002Fcitations?user=45DAXkwAAAAJ&hl=en\">Soroush Vosoughi\u003C\u002Fa>,\n\t\t\t  \u003Ca href=\"https:\u002F\u002Fscholar.google.com\u002Fcitations?user=gjsxBCkAAAAJ&hl=en\">Teruko Mitamura\u003C\u002Fa>,\n\t\t\t  \u003Ca href=\"https:\u002F\u002Fscholar.google.com\u002Fcitations?user=PUFxrroAAAAJ&hl=en\">Eduard Hovy\u003C\u002Fa>\n\nSpecial thanks to Ryan Shentu, Fiona Feng, Karen Liu, Emily Nie, Tanya Lu, and Bonnie Ma for helping out with this repo.\nNote: WIP. More papers will be added from our survey paper to this repo soon.\nInquiries should be directed to stevenyfeng@gmail.com or made by opening an issue here.\n\nAlso, check out our **talk for Google Research** (Steven Feng and Varun Gangal) [here](https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=kNBVesKUZCk&ab_channel=StevenFeng), and our **podcast episode** (Steven Feng and Eduard Hovy) [here](https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=qmqyT_97Poc) and [here](https:\u002F\u002Fthedataexchange.media\u002Fdata-augmentation-in-natural-language-processing\u002F).\n\n\n### Text Classification\n| Paper | Datasets | \n| -- | --- |\n| Unsupervised Word Sense Disambiguation Rivaling Supervised Methods ([ACL '95](https:\u002F\u002Fwww.aclweb.org\u002Fanthology\u002FP95-1026.pdf)) | Paper-Specific\u002FLegacy Corpus | \n| Synonym Replacement (Character-Level Convolutional Networks for Text Classification, [NeurIPS '15](https:\u002F\u002Fpapers.nips.cc\u002Fpaper\u002F2015\u002Ffile\u002F250cf8b51c773f3f8dc8b4be867a9a02-Paper.pdf)) | AG’s News, DBPedia, 
Yelp, Yahoo Answers, Amazon | \n| That’s So Annoying!!!: A Lexical and Frame-Semantic Embedding Based Data Augmentation Approach to Automatic Categorization of Annoying Behaviors using #petpeeve Tweets [(EMNLP '15)](https:\u002F\u002Fwww.aclweb.org\u002Fanthology\u002FD15-1306.pdf) | Twitter | \n| Robust Training under Linguistic Adversity [(EACL '17)](https:\u002F\u002Fwww.aclweb.org\u002Fanthology\u002FE17-2004\u002F) [code](https:\u002F\u002Fgithub.com\u002Flrank\u002FLinguistic_adversity) | Movie review, customer review, SUBJ, SST | \n| Contextual Augmentation: Data Augmentation by Words with Paradigmatic Relations [(NAACL '18)](https:\u002F\u002Fwww.aclweb.org\u002Fanthology\u002FN18-2072.pdf) [code](https:\u002F\u002Fgithub.com\u002Fpfnet-research\u002Fcontextual_augmentation) | SST, SUBJ, MPQA, RT, TREC | \n| Variational Pretraining for Semi-supervised Text Classification [(ACL '19)](https:\u002F\u002Fwww.aclweb.org\u002Fanthology\u002FP19-1590.pdf) [code](http:\u002F\u002Fgithub.com\u002Fallenai\u002Fvampire) | IMDB, AG News, Yahoo, hatespeech | \n| EDA: Easy Data Augmentation Techniques for Boosting Performance on Text Classification Tasks [(EMNLP '19)](http:\u002F\u002Fdx.doi.org\u002F10.18653\u002Fv1\u002FD19-1670) [code](https:\u002F\u002Fgithub.com\u002Fjasonwei20\u002Feda_nlp) | SST, CR, SUBJ, TREC, PC |\n| A Closer Look At Feature Space Data Augmentation For Few-Shot Intent Classification [(DeepLo @ EMNLP '19)](https:\u002F\u002Farxiv.org\u002Fabs\u002F1910.04176) | SNIPS |\n| Nonlinear Mixup: Out-Of-Manifold Data Augmentation for Text Classification [(AAAI '20)](https:\u002F\u002Fdoi.org\u002F10.1609\u002Faaai.v34i04.5822) | TREC, SST, Subj, MR |\n| MixText: Linguistically-Informed Interpolation of Hidden Space for Semi-Supervised Text Classification [(ACL '20)](https:\u002F\u002Fwww.aclweb.org\u002Fanthology\u002F2020.acl-main.194\u002F) [code](https:\u002F\u002Fgithub.com\u002FGT-SALT\u002FMixText) | AG News, DBpedia, Yahoo, IMDb | \n| Unsupervised 
Data Augmentation for Consistency Training [(NeurIPS '20)](https:\u002F\u002Fpapers.nips.cc\u002Fpaper\u002F2020\u002Fhash\u002F44feb0096faa8326192570788b38c1d1-Abstract.html) [code](https:\u002F\u002Fgithub.com\u002Fgoogle-research\u002Fuda) | Yelp, IMDb, Amazon, DBpedia | \n| Not Enough Data? Deep Learning to the Rescue! [(AAAI '20)](https:\u002F\u002Farxiv.org\u002Fabs\u002F1911.03118) | ATIS, TREC, WVA | \n| Data Augmentation using Pre-trained Transformer Models [(LifeLongNLP @ AACL '20)](https:\u002F\u002Farxiv.org\u002Fabs\u002F2003.02245) [code](https:\u002F\u002Fgithub.com\u002Fvarunkumar-dev\u002FTransformersDataAugmentation) | SNIPS, TREC, SST2 |\n| SSMBA: Self-Supervised Manifold Based Data Augmentation for Improving Out-of-Domain Robustness [(EMNLP '20)](https:\u002F\u002Fwww.aclweb.org\u002Fanthology\u002F2020.emnlp-main.97\u002F) [code](https:\u002F\u002Fgithub.com\u002Fnng555\u002Fssmba) | IWSLT'14 | \n| Data Boost: Text Data Augmentation Through Reinforcement Learning Guided Conditional Generation [(EMNLP '20)](https:\u002F\u002Fwww.aclweb.org\u002Fanthology\u002F2020.emnlp-main.726\u002F) | ICWSM '20 Data Challenge, SemEval '17 sentiment analysis, SemEval '18 irony |\n| Textual Data Augmentation for Efficient Active Learning on Tiny Datasets [(EMNLP '20)](https:\u002F\u002Fwww.aclweb.org\u002Fanthology\u002F2020.emnlp-main.600\u002F) | SST2, TREC |\n| Text Augmentation in a Multi-Task View [(EACL '21)](https:\u002F\u002Fwww.aclweb.org\u002Fanthology\u002F2021.eacl-main.252\u002F) | SST2, TREC, SUBJ | \n| GPT3Mix: Leveraging Large-scale Language Models for Text Augmentation [(arXiv '21)](https:\u002F\u002Farxiv.org\u002Fabs\u002F2104.08826) | SST2, CR, TREC, SUBJ, MPQA, CoLA |\n| Few-Shot Text Classification with Triplet Loss, Data Augmentation, and Curriculum Learning [(NAACL '21)](https:\u002F\u002Farxiv.org\u002Fabs\u002F2103.07552) 
[code](https:\u002F\u002Fgithub.com\u002Fjasonwei20\u002Ftriplet-loss) | HUFF, COV-Q, AMZN, FEWREL | \n| Text AutoAugment: Learning Compositional Augmentation Policy for Text Classification [(EMNLP '21)](https:\u002F\u002Farxiv.org\u002Fabs\u002F2109.00523) [code](https:\u002F\u002Fgithub.com\u002Flancopku\u002Ftext-autoaugment) | IMDB, SST2, SST5, TREC, YELP2, YELP5 |\n| AEDA: An Easier Data Augmentation Technique for Text Classification [(EMNLP '21)](https:\u002F\u002Farxiv.org\u002Fabs\u002F2108.13230) [code](https:\u002F\u002Fgithub.com\u002Fakkarimi\u002Faeda_nlp) | SST, CR, SUBJ, TREC, PC |\n\n### Translation\n\n| Paper | Datasets | \n| -- | --- |\n| Backtranslation (Improving Neural Machine Translation Models with Monolingual Data, [ACL '16](https:\u002F\u002Fwww.aclweb.org\u002Fanthology\u002FP16-1009.pdf)) | WMT '15 en-de, IWSLT '15 en-tr |\n| Adapting Neural Machine Translation with Parallel Synthetic Data [(WMT '17)](https:\u002F\u002Fwww.aclweb.org\u002Fanthology\u002FW17-4714\u002F) | COMMON, 1 Billion Words, dev2013, XRCE, IT, E-Com| \n| Data Augmentation for Low-Resource Neural Machine Translation [(ACL '17)](https:\u002F\u002Fwww.aclweb.org\u002Fanthology\u002FP17-2090\u002F) [code](https:\u002F\u002Fgithub.com\u002Fmarziehf\u002FDataAugmentationNMT) | WMT '14\u002F'15\u002F'16 en-de\u002Fde-en| \n| Synthetic Data for Neural Machine Translation of Spoken-Dialects [(arxiv '17)](https:\u002F\u002Farxiv.org\u002Fabs\u002F1707.00079) | LDC2012T09, OpenSubtitles-2013| \n| Multi-Source Neural Machine Translation with Data Augmentation [(IWSLT '18)](https:\u002F\u002Farxiv.org\u002Fabs\u002F1810.06826) | TED Talks| \n| SwitchOut: an Efficient Data Augmentation Algorithm for Neural Machine Translation [(EMNLP '18)](https:\u002F\u002Fwww.aclweb.org\u002Fanthology\u002FD18-1100\u002F) | IWSLT '15 en-vi, IWSLT '16 de-en, WMT '15 en-de |\n| Generalizing Back-Translation in Neural Machine Translation [(WMT 
'19)](https:\u002F\u002Fwww.aclweb.org\u002Fanthology\u002FW19-5205\u002F) | NewsCrawl2, WMT'18 de-en | \n| Neural Fuzzy Repair: Integrating Fuzzy Matches into Neural Machine Translation [(ACL '19)](https:\u002F\u002Fwww.aclweb.org\u002Fanthology\u002FP19-1175\u002F) | DGT-TM en-nl\u002Fen-hu | \n| Augmenting Neural Machine Translation with Knowledge Graphs [(arxiv '19)](https:\u002F\u002Farxiv.org\u002Fabs\u002F1902.08816) | WMT '14-'18 | \n| Generalized Data Augmentation for Low-Resource Translation [(ACL '19)](https:\u002F\u002Fwww.aclweb.org\u002Fanthology\u002FP19-1579\u002F) [code](https:\u002F\u002Fgithub.com\u002Fxiamengzhou\u002FDataAugForLRL) | ENG-HRL-LRL, HRL-LRL | \n| Improving Robustness of Machine Translation with Synthetic Noise [(NAACL '19)](https:\u002F\u002Fwww.aclweb.org\u002Fanthology\u002FN19-1190\u002F) [code](https:\u002F\u002Fgithub.com\u002FMysteryVaibhav\u002Frobust_mtnt) | EP, TED, MTNT en-fr\u002Fen-jpn | \n| Soft Contextual Data Augmentation for Neural Machine Translation [(ACL '19)](https:\u002F\u002Fwww.aclweb.org\u002Fanthology\u002FP19-1555\u002F) [code](https:\u002F\u002Fgithub.com\u002Fteslacool\u002FSCA) | IWSLT '14 de\u002Fes\u002Fhe-en, WMT '14 en-de |\n| Data augmentation using back-translation for context-aware neural machine translation [(DiscoMT @ EMNLP '19)](https:\u002F\u002Fwww.aclweb.org\u002Fanthology\u002FD19-6504\u002F) [code](https:\u002F\u002Fgithub.com\u002Fsugi-a\u002Fdiscomt2019) | IWSLT'17 en-ja\u002Fen-fr, BookCorpus, Europarl v7, National Diet of Japan | \n| Improving Neural Machine Translation Robustness via Data Augmentation: Beyond Back-Translation [(W-NUT @ EMNLP '19)](https:\u002F\u002Fwww.aclweb.org\u002Fanthology\u002FD19-5543\u002F) | WMT'15\u002F'19 en\u002Ffr, MTNT, IWSLT'17, MuST-C | \n| Data augmentation for pipeline-based speech translation [(Baltic HLT '20)](https:\u002F\u002Fhal.inria.fr\u002Fhal-02907053) | WMT '17 | \n| Lexical-Constraint-Aware Neural Machine Translation via Data Augmentation 
[(IJCAI '20)](https:\u002F\u002Fwww.ijcai.org\u002Fproceedings\u002F2020\u002F496) [code](https:\u002F\u002Fgithub.com\u002Fghchen18\u002Fleca) | WMT '16 de-en, NIST zh-en |\n| A Diverse Data Augmentation Strategy for Low-Resource Neural Machine Translation [(Information '20)](https:\u002F\u002Fwww.mdpi.com\u002F2078-2489\u002F11\u002F5\u002F255) | IWSLT '14 en-de | \n| Syntax-aware Data Augmentation for Neural Machine Translation [(arxiv '20)](https:\u002F\u002Farxiv.org\u002Fabs\u002F2004.14200) | WMT '14 en-de, IWSLT '14 de-en | \n| SSMBA: Self-Supervised Manifold Based Data Augmentation for Improving Out-of-Domain Robustness [(EMNLP '20)](https:\u002F\u002Fwww.aclweb.org\u002Fanthology\u002F2020.emnlp-main.97\u002F) [code](https:\u002F\u002Fgithub.com\u002Fnng555\u002Fssmba) | IWSLT'14 | \n| Data diversification: A simple strategy for neural machine translation [(NeurIPS '20)](https:\u002F\u002Fproceedings.neurips.cc\u002Fpaper\u002F2020\u002Ffile\u002F7221e5c8ec6b08ef6d3f9ff3ce6eb1d1-Paper.pdf) [code](https:\u002F\u002Fgithub.com\u002Fnxphi47\u002Fdata_diversification) | WMT '14 en-de\u002Fen-fr, IWSLT '13\u002F'14\u002F'15 en-de\u002Fde-en\u002Fen-fr |\n| AdvAug: Robust Adversarial Augmentation for Neural Machine Translation [(ACL '20)](https:\u002F\u002Fwww.aclweb.org\u002Fanthology\u002F2020.acl-main.529\u002F) | NIST zh-en, WMT '14 en-de| \n| Dictionary-based Data Augmentation for Cross-Domain Neural Machine Translation [(arxiv '20)](https:\u002F\u002Farxiv.org\u002Fabs\u002F2004.02577) | WMT '14\u002F'19 | \n| Sentence Boundary Augmentation For Neural Machine Translation Robustness [(arxiv '20)](https:\u002F\u002Farxiv.org\u002Fabs\u002F2010.11132) | IWSLT '14\u002F'15\u002F'18 en-de, WMT '18 en-de | \n| Valar nmt : Vastly lacking resources neural machine translation [(Stanford CS224N)](https:\u002F\u002Fweb.stanford.edu\u002Fclass\u002Farchive\u002Fcs\u002Fcs224n\u002Fcs224n.1194\u002Freports\u002Fcustom\u002F15811193.pdf) | Bible, Misc, Europarl v8, 
Newstest '18 | \n\n\n### Summarization\n\n| Paper | Datasets | \n| -- | --- |\n| Transforming Wikipedia into Augmented Data for Query-Focused Summarization [(arxiv '19)](https:\u002F\u002Farxiv.org\u002Fabs\u002F1911.03324) | DUC |\n| Iterative Data Augmentation with Synthetic Data (Abstract Text Summarization: A Low Resource Challenge, [EMNLP '19](https:\u002F\u002Fwww.aclweb.org\u002Fanthology\u002FD19-1616\u002F)) | Swisstext, CommonCrawl | \n| Improving Zero and Few-Shot Abstractive Summarization with Intermediate Fine-tuning and Data Augmentation [(NAACL '21)](https:\u002F\u002Farxiv.org\u002Fabs\u002F2010.12836) | CNN-DailyMail | \n| Data Augmentation for Abstractive Query-Focused Multi-Document Summarization [(AAAI '21)](https:\u002F\u002Farxiv.org\u002Fabs\u002F2103.01863) [code](https:\u002F\u002Fgithub.com\u002Framakanth-pasunuru\u002FQmdsCnnIr) | QMDSCNN, QMDSIR, WikiSum, DUC 2006, DUC 2007 |\n\n\n### Question Answering\n\n| Paper | Datasets | \n| -- | --- |\n| QANet: Combining Local Convolution with Global Self-Attention for Reading Comprehension [(ICLR '18)](https:\u002F\u002Fopenreview.net\u002Fforum?id=B14TlG-RW) | SQuAD, TriviaQA |\n| An Exploration of Data Augmentation and Sampling Techniques for Domain-Agnostic Question Answering [(EMNLP '19 Workshop)](https:\u002F\u002Fwww.aclweb.org\u002Fanthology\u002FD19-5829\u002F) | MRQA | \n| Data Augmentation for BERT Fine-Tuning in Open-Domain Question Answering [(arxiv '19)](https:\u002F\u002Farxiv.org\u002Fabs\u002F1904.06652) | SQuAD, Trivia-QA, CMRC, DRCD | \n| XLDA: Cross-Lingual Data Augmentation for Natural Language Inference and Question Answering [(arxiv '19)](https:\u002F\u002Fopenreview.net\u002Fforum?id=BJgAf6Etwr) | XNLI, SQuAD |\n| Synthetic Data Augmentation for Zero-Shot Cross-Lingual Question Answering [(arxiv '20)](https:\u002F\u002Farxiv.org\u002Fabs\u002F2010.12643) | MLQA, XQuAD, SQuAD-it, PIAF | \n| Logic-Guided Data Augmentation and Regularization for Consistent Question Answering 
[(ACL '20)](https:\u002F\u002Fwww.aclweb.org\u002Fanthology\u002F2020.acl-main.499\u002F) [code](https:\u002F\u002Fgithub.com\u002FAkariAsai\u002Flogic_guided_qa) | WIQA, QuaRel, HotpotQA |\n\n\n### Sequence Tagging\n\n| Paper | Datasets | \n| -- | --- |\n| Data Augmentation via Dependency Tree Morphing for Low-Resource Languages [(EMNLP '18)](https:\u002F\u002Fwww.aclweb.org\u002Fanthology\u002FD18-1545.pdf) [code](https:\u002F\u002Fgithub.com\u002Fgozdesahin\u002Fcrop-rotate-augment) | universal dependencies project | \n| DAGA: Data Augmentation with a Generation Approach for Low-resource Tagging Tasks [(EMNLP '20)](https:\u002F\u002Fwww.aclweb.org\u002Fanthology\u002F2020.emnlp-main.488\u002F) [code](https:\u002F\u002Fgithub.com\u002Fntunlp\u002Fdaga) | CoNLL2002\u002F2003 |\n| An Analysis of Simple Data Augmentation for Named Entity Recognition [(COLING '20)](https:\u002F\u002Fwww.aclweb.org\u002Fanthology\u002F2020.coling-main.343\u002F) | MaSciP, i2b2-2010 |\n| SeqMix: Augmenting Active Sequence Labeling via Sequence Mixup [(EMNLP '20)](https:\u002F\u002Fwww.aclweb.org\u002Fanthology\u002F2020.emnlp-main.691\u002F) [code](https:\u002F\u002Fgithub.com\u002Frz-zhang\u002FSeqMix) | CoNLL-03, ACE05, Webpage |\n| Named Entity Recognition for Social Media Texts with Semantic Augmentation [(EMNLP '20)](https:\u002F\u002Fwww.aclweb.org\u002Fanthology\u002F2020.emnlp-main.107\u002F) [code](https:\u002F\u002Fgithub.com\u002Fcuhksz-nlp\u002FSANER) | WNUT16, WNUT17, Weibo |\n\n\n### Parsing\n| Paper | Datasets | \n| -- | --- |\n| Data Recombination for Neural Semantic Parsing [(ACL '16)](https:\u002F\u002Fwww.aclweb.org\u002Fanthology\u002FP16-1002\u002F) [code](https:\u002F\u002Fgithub.com\u002Fdongpobeyond\u002FSeq2Act) | GeoQuery, ATIS, Overnight |\n| A systematic comparison of methods for low-resource dependency parsing on genuinely low-resource languages [(EMNLP '19)](https:\u002F\u002Fwww.aclweb.org\u002Fanthology\u002FD19-1102\u002F) | Universal Dependencies treebanks version 2.2 |\n| Good-Enough Compositional Data Augmentation [(ACL '20)](https:\u002F\u002Fwww.aclweb.org\u002Fanthology\u002F2020.acl-main.676\u002F) [code](https:\u002F\u002Fgithub.com\u002Fjacobandreas\u002Fgeca) | SCAN |\n| GraPPa: Grammar-Augmented Pre-Training for Table Semantic Parsing [(ICLR '21)](https:\u002F\u002Fopenreview.net\u002Fforum?id=kyaIeYj4zZ) | SPIDER, WIKISQL, WIKITABLEQUESTIONS |\n\n\n### Grammatical Error Correction\n| Paper | Datasets | \n| -- | --- |\n| GenERRate: Generating Errors for Use in Grammatical Error Detection [(BEA '09)](https:\u002F\u002Fwww.aclweb.org\u002Fanthology\u002FW09-2112\u002F) | Ungram-BNC |\n| Mining Revision Log of Language Learning SNS for Automated Japanese Error Correction of Second Language Learners [(IJCNLP '11)](https:\u002F\u002Fwww.aclweb.org\u002Fanthology\u002FI11-1017\u002F) [code](https:\u002F\u002Fgithub.com\u002Fgoogle-research-datasets\u002Fclang8) | Lang-8 |\n| Artificial error generation for translation-based grammatical error correction [(University of Cambridge Technical Report '16)](https:\u002F\u002Fwww.cl.cam.ac.uk\u002Ftechreports\u002FUCAM-CL-TR-895.pdf) | Several Datasets |\n| Noising and Denoising Natural Language: Diverse Backtranslation for Grammar Correction. [(NAACL'18)](https:\u002F\u002Fwww.aclweb.org\u002Fanthology\u002FN18-1057\u002F) | Lang-8, CoNLL-2014, CoNLL-2013, JFLEG | \n| Using Wikipedia Edits in Low Resource Grammatical Error Correction. 
[(WNUT @ EMNLP '18)](https:\u002F\u002Fdoi.org\u002F10.18653\u002Fv1\u002FW18-6111) | Falko-MERLIN GEC Corpus |\n| Sequence-to-sequence Pre-training with Data Augmentation for Sentence Rewriting [(arxiv '19)](https:\u002F\u002Farxiv.org\u002Fabs\u002F1909.06002) [code](https:\u002F\u002Fgithub.com\u002Fmarumalo\u002Fsurvey\u002Fissues\u002F6) | CoNLL-2014, JFLEG, GYAFC, WMT14, WMT18 |\n| Controllable Data Synthesis Method for Grammatical Error Correction [(arxiv '19)](https:\u002F\u002Farxiv.org\u002Fabs\u002F1909.13302) [code](https:\u002F\u002Fgithub.com\u002Fmarumalo\u002Fsurvey\u002Fissues\u002F21) | NUCLE, Lang-8, One-Billion, CoNLL2013, CoNLL2014 |\n| Neural Grammatical Error Correction Systems with Unsupervised Pre-training on Synthetic Data. [(BEA @ ACL '19)](https:\u002F\u002Fdoi.org\u002F10.18653\u002Fv1\u002FW19-4427) | FCE, NUCLE, W&I+LOCNESS, Lang-8 |\n| Corpora Generation for Grammatical Error Correction [(NAACL'19)](https:\u002F\u002Fdoi.org\u002F10.18653\u002Fv1\u002FN19-1333) | CoNLL-2014, JFLEG, Lang-8 |\n| Erroneous data generation for Grammatical Error Correction [(BEA @ ACL '19)](https:\u002F\u002Fwww.aclweb.org\u002Fanthology\u002FW19-4415\u002F) | Lang-8, CoNLL, JFLEG, CoNLL-2014, ABCN, FCE |\n| A neural grammatical error correction system built on better pre-training and sequential transfer learning. 
[(BEA @ ACL '19)](https:\u002F\u002Fdoi.org\u002F10.18653\u002Fv1\u002FW19-4423) | FCE, NUCLE, W&I+LOCNESS, Lang-8, Gutenberg, Tatoeba, WikiText-103 |\n| Improving Grammatical Error Correction with Data Augmentation by Editing Latent Representation [(COLING'20)](https:\u002F\u002Fdoi.org\u002F10.18653\u002Fv1\u002F2020.coling-main.200) | FCE, NUCLE, W&I+LOCNESS, Lang-8 |\n| A Comparative Study of Synthetic Data Generation Methods for Grammatical Error Correction [(BEA @ ACL '20)](https:\u002F\u002Fwww.aclweb.org\u002Fanthology\u002F2020.bea-1.21\u002F) | W&I+LOCNESS, FCE, News Crawl 2, W&I+L train, FCE-train, NUCLE, Lang-8, W&I+L dev, FCE-test, Tatoeba, WikiText-103 |\n| A syntactic rule-based framework for parallel data synthesis in Japanese GEC [(MIT Thesis '20)](https:\u002F\u002Fdspace.mit.edu\u002Fhandle\u002F1721.1\u002F127416) | Lang-8 |\n\n\n### Generation\n\n| Paper | Datasets | \n| -- | --- |\n| TNT-NLG, System 2: Data repetition and meaning representation manipulation to improve neural generation [(E2E NLG Challenge System Descriptions)](http:\u002F\u002Fwww.macs.hw.ac.uk\u002FInteractionLab\u002FE2E\u002Ffinal_papers\u002FE2E-TNT_NLG2.pdf) | TODO | \n| Findings of the Third Workshop on Neural Generation and Translation [(WNGT @ EMNLP '19)](https:\u002F\u002Fwww.aclweb.org\u002Fanthology\u002FD19-5601\u002F) | RotoWire English-German | \n| A Good Sample is Hard to Find: Noise Injection Sampling and Self-Training for Neural Language Generation Models [(INLG '19)](https:\u002F\u002Fwww.aclweb.org\u002Fanthology\u002FW19-8672\u002F) [code](https:\u002F\u002Fgithub.com\u002Fkedz\u002Fnoiseylg) | E2E Challenge Dataset, Laptops, TVs | \n| GenAug: Data Augmentation for Finetuning Text Generators [(DeeLIO @ EMNLP '20)](https:\u002F\u002Fwww.aclweb.org\u002Fanthology\u002F2020.deelio-1.4\u002F) [code](https:\u002F\u002Fgithub.com\u002Fstyfeng\u002FGenAug) | Yelp | \n| Denoising Pre-Training and Data Augmentation Strategies for Enhanced RDF Verbalization with 
Transformers [(WebNLG+ @ INLG '20)](https:\u002F\u002Fwww.aclweb.org\u002Fanthology\u002F2020.webnlg-1.9\u002F) | WebNLG |\n\n\n### Dialogue\n| Paper | Datasets | \n| -- | --- |\n| Sequence-to-Sequence Data Augmentation for Dialogue Language Understanding [(COLING '18)](https:\u002F\u002Fwww.aclweb.org\u002Fanthology\u002FC18-1105\u002F) [code](https:\u002F\u002Fgithub.com\u002FAtmaHou\u002FSeq2SeqDataAugmentationForLU) | ATIS, Dec94, Stanford dialogue |\n| Task-Oriented Dialog Systems that Consider Multiple Appropriate Responses under the Same Context [(arxiv '19)](https:\u002F\u002Farxiv.org\u002Fabs\u002F1911.10484) [code](https:\u002F\u002Fgithub.com\u002Fthu-spmi\u002Fdamd-multiwoz) | MultiWOZ |\n| Data Augmentation by Data Noising for Open-vocabulary Slots in Spoken Language Understanding [(Student Research Workshop @ NAACL '19)](https:\u002F\u002Fwww.aclweb.org\u002Fanthology\u002FN19-3014\u002F) | ATIS, Snips, MR |\n| Data Augmentation with Atomic Templates for Spoken Language Understanding [(EMNLP '19)](https:\u002F\u002Fwww.aclweb.org\u002Fanthology\u002FD19-1375\u002F) [code](https:\u002F\u002Fgithub.com\u002Fsz128\u002FDAAT_SLU) | DSTC2&3, DSTC2 |\n| Data Augmentation for Spoken Language Understanding via Joint Variational Generation [(AAAI '19)](https:\u002F\u002Fojs.aaai.org\u002Findex.php\u002FAAAI\u002Farticle\u002Fview\u002F4729) | ATIS, Snips, MIT |\n| Effective Data Augmentation Approaches to End-to-End Task-Oriented Dialogue [(IALP '19)](https:\u002F\u002Fieeexplore.ieee.org\u002Fdocument\u002F9037690) | CamRest676, KVRET |\n| Paraphrase Augmented Task-Oriented Dialog Generation [(ACL '20)](https:\u002F\u002Fwww.aclweb.org\u002Fanthology\u002F2020.acl-main.60\u002F) [code](https:\u002F\u002Fgithub.com\u002Fthu-spmi\u002FPARG) | CamRest676, MultiWOZ |\n| Dialog State Tracking with Reinforced Data Augmentation [(AAAI '20)](https:\u002F\u002Fojs.aaai.org\u002Findex.php\u002FAAAI\u002Farticle\u002Fview\u002F6491) | WoZ, MultiWOZ |\n| Data 
Augmentation for Copy-Mechanism in Dialogue State Tracking [(arxiv '20)](https:\u002F\u002Farxiv.org\u002Fabs\u002F2002.09634) | WoZ, DSTC2, MultiWOZ |\n| Simple is Better! Lightweight Data Augmentation for Low Resource Slot Filling and Intent Classification [(PACLIC '20)](https:\u002F\u002Fwww.aclweb.org\u002Fanthology\u002F2020.paclic-1.20\u002F) [code](https:\u002F\u002Fgithub.com\u002Fslouvan\u002Fsaug) | ATIS, SNIPS, FB |\n| Conversation Graph: Data Augmentation, Training, and Evaluation for Non-Deterministic Dialogue Management [(TACL '21)](https:\u002F\u002Fdirect.mit.edu\u002Ftacl\u002Farticle\u002Fdoi\u002F10.1162\u002Ftacl_a_00352\u002F97777\u002FConversation-Graph-Data-Augmentation-Training-and) | M2M, MultiWOZ |\n| GOLD: Improving Out-of-Scope Detection in Dialogues using Data Augmentation [(EMNLP '21)](https:\u002F\u002Faclanthology.org\u002F2021.emnlp-main.35\u002F) [code](https:\u002F\u002Fgithub.com\u002Fasappresearch\u002Fgold) | SMCalFlow, ROSTD |\n| Improving Automated Evaluation of Open Domain Dialog via Diverse Reference Augmentation [(ACL '21 Findings)](https:\u002F\u002Faclanthology.org\u002F2021.findings-acl.357\u002F) [code](https:\u002F\u002Fgithub.com\u002Fharsh19\u002FDiverse-Reference-Augmentation\u002F) | DailyDialog |\n\n### Multimodal\n| Paper | Datasets | \n| -- | --- |\n| Data Augmentation for Visual Question Answering [(INLG '17)](https:\u002F\u002Fwww.aclweb.org\u002Fanthology\u002FW17-3529\u002F) | COCO-VQA, COCO-QA |\n| Low Resource Multi-modal Data Augmentation for End-to-end ASR [(CoRR '18)](https:\u002F\u002Fdeepai.org\u002Fpublication\u002Flow-resource-multi-modal-data-augmentation-for-end-to-end-asr) | TODO |\n| Multi-Modal Data Augmentation for End-to-end ASR [(Interspeech '18)](https:\u002F\u002Fwww.isca-speech.org\u002Farchive\u002FInterspeech_2018\u002Fabstracts\u002F2456.html) | Voxforge, HUB4 |\n| Augmenting Image Question Answering Dataset by Exploiting Image Captions [(LREC 
'18)](https:\u002F\u002Fwww.aclweb.org\u002Fanthology\u002FL18-1436\u002F) | IQA |\n| Multimodal Continuous Emotion Recognition with Data Augmentation Using Recurrent Neural Networks [(AVEC '18)](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002F10.1145\u002F3266302.3266304) | TODO |\n| Multimodal Dialogue State Tracking By QA Approach with Data Augmentation [(DSTC8 @ AAAI '20)](https:\u002F\u002Farxiv.org\u002Fabs\u002F2007.09903) | DSTC7-AVSD |\n| Data augmentation techniques for the Video Question Answering task [(arxiv '20)](https:\u002F\u002Farxiv.org\u002Fabs\u002F2008.09849) | TGIF-QA,  MSVD-QA |\n| Data Augmentation for Training Dialog Models Robust to Speech Recognition Errors [(NLP for ConvAI @ ACL '20)](https:\u002F\u002Farxiv.org\u002Fabs\u002F2006.05635) | DSTC2 |\n| Semantic Equivalent Adversarial Data Augmentation for Visual Question Answering [(ECCV '20)](https:\u002F\u002Flink.springer.com\u002Fchapter\u002F10.1007\u002F978-3-030-58529-7_26) | TODO |\n| Text Augmentation Using BERT for Image Captioning [(Applied Sciences '20)](https:\u002F\u002Fwww.mdpi.com\u002F2076-3417\u002F10\u002F17\u002F5978) | MSCOCO |\n| MDA: Multimodal Data Augmentation Framework for Boosting Performance on Image-Text Sentiment\u002FEmotion Classification Tasks [(IEEE Intelligent Systems '20)](https:\u002F\u002Fieeexplore.ieee.org\u002Fdocument\u002F9206007) | TODO |\n\n### Mitigating Bias\n| Paper | Datasets | \n| -- | --- |\n| Gender Bias in Coreference Resolution: Evaluation and Debiasing Methods. 
[(NAACL '18)](https:\u002F\u002Fwww.aclweb.org\u002Fanthology\u002FN18-2003\u002F) [code](https:\u002F\u002Fgithub.com\u002Fuclanlp\u002FcorefBias) | WinoBias, OntoNotes|\n| Counterfactual Data Augmentation for Mitigating Gender Stereotypes in Languages with Rich Morphology [(ACL '19)](https:\u002F\u002Fwww.aclweb.org\u002Fanthology\u002FP19-1161\u002F) [code](https:\u002F\u002Fgithub.com\u002Frycolab\u002FbiasCDA) | TODO |\n| CONAN - COunter NArratives through Nichesourcing: a Multilingual Dataset of Responses to Fight Online Hate Speech [(ACL '19)](https:\u002F\u002Faclanthology.org\u002FP19-1271.pdf) [Dataset](https:\u002F\u002Fgithub.com\u002Fmarcoguerini\u002FCONAN)| New Dataset Created|\n| It’s All in the Name: Mitigating Gender Bias with Name-Based Counterfactual Data Substitution [(EMNLP '19)](https:\u002F\u002Fwww.aclweb.org\u002Fanthology\u002FD19-1530\u002F) [code](https:\u002F\u002Fgithub.com\u002Frowanhm\u002Fcounterfactual-data-substitution) | SSA, Stanford Large Movie Review, SimLex-999 |\n| Gender Bias in Neural Natural Language Processing. 
[(Springer '20)](https:\u002F\u002Flink.springer.com\u002Fchapter\u002F10.1007%2F978-3-030-62077-6_14) | Wikitext-2, CoNLL-2012 |\n| Improving Robustness by Augmenting Training Sentences with Predicate-Argument Structures [(arxiv '20)](https:\u002F\u002Farxiv.org\u002Fabs\u002F2010.12510) | SWAG, CoNLL2009, MultiNLI, HANS|\n\n### Mitigating Class Imbalance\n| Paper | Datasets | \n| -- | --- |\n| SMOTE: Synthetic Minority Over-sampling Technique [(Journal of Artificial Intelligence Research '02)](https:\u002F\u002Fwww.jair.org\u002Findex.php\u002Fjair\u002Farticle\u002Fview\u002F10302) | Pima, Phoneme, Adult, E-state, Satimage, Forest Cover, Oil, Mammography, Can |\n| Active Learning for Word Sense Disambiguation with Methods for Addressing the Class Imbalance Problem [(EMNLP '07)](https:\u002F\u002Fwww.aclweb.org\u002Fanthology\u002FD07-1082\u002F) | TODO |\n| MLSMOTE: Approaching imbalanced multilabel learning through synthetic instance generation [(Knowledge-Based Systems '15)](https:\u002F\u002Fwww.sciencedirect.com\u002Fscience\u002Farticle\u002Fabs\u002Fpii\u002FS0950705115002737?via%3Dihub) | bibtex, cal500, corel5k, slashdot, tmc2007, mediamill, medical, scene, enron, emotions |\n| SMOTE for Learning from Imbalanced Data: Progress and Challenges, Marking the 15-year Anniversary [(Journal of Artificial Intelligence Research '18)](https:\u002F\u002Fwww.jair.org\u002Findex.php\u002Fjair\u002Farticle\u002Fview\u002F11192) | TODO |\n\n### Adversarial examples\n\n| Paper | Datasets | \n| -- | --- |\n| Adversarial Example Generation with Syntactically Controlled Paraphrase Networks [(NAACL '18)](https:\u002F\u002Fwww.aclweb.org\u002Fanthology\u002FN18-1170\u002F) [code](https:\u002F\u002Fgithub.com\u002Fmiyyer\u002Fscpn)| SST, SICK | \n| AdvEntuRe: Adversarial Training for Textual Entailment with Knowledge-Guided Examples [(ACL '18)](https:\u002F\u002Fwww.aclweb.org\u002Fanthology\u002FP18-1225\u002F) [code](https:\u002F\u002Fgithub.com\u002Fdykang\u002Fadventure)| 
WordNet, PPDB, SICK, SNLI, SciTail | \n| Breaking NLI Systems with Sentences that Require Simple Lexical Inferences [(ACL '18)](https:\u002F\u002Fwww.aclweb.org\u002Fanthology\u002FP18-2103\u002F) | SNLI, SciTail, MultiNLI |\n| Certified Robustness to Adversarial Word Substitutions [(EMNLP '19)](https:\u002F\u002Fwww.aclweb.org\u002Fanthology\u002FD19-1423\u002F) [code](https:\u002F\u002Fgithub.com\u002Frobinjia\u002Fcertified-word-sub)| IMDB, SNLI | \n| PAWS: Paraphrase Adversaries from Word Scrambling [(NAACL '19)](https:\u002F\u002Fwww.aclweb.org\u002Fanthology\u002FN19-1131\u002F) [code](https:\u002F\u002Fgithub.com\u002Fgoogle-research-datasets\u002Fpaws)| PAWS (QQP + Wikipedia) | \n| Generating Natural Language Adversarial Examples through Probability Weighted Word Saliency [(ACL '19)](https:\u002F\u002Faclanthology.org\u002FP19-1103\u002F) [code](https:\u002F\u002Fgithub.com\u002FJHL-HUST\u002FPWWS) | IMDB, AG’s News, Yahoo Answers |\n\n\n### Compositionality\n\n| Paper | Datasets | \n| -- | --- |\n| Good-Enough Compositional Data Augmentation [(ACL '20)](https:\u002F\u002Fwww.aclweb.org\u002Fanthology\u002F2020.acl-main.676.pdf) [code](https:\u002F\u002Fgithub.com\u002Fjacobandreas\u002Fgeca) | SCAN |\n| Sequence-Level Mixed Sample Data Augmentation [(EMNLP '20)](https:\u002F\u002Fwww.aclweb.org\u002Fanthology\u002F2020.emnlp-main.447) [code](https:\u002F\u002Fgithub.com\u002Fdguo98\u002Fseqmix) | IWSLT ’14, WMT ’14 | \n\n### Automated Augmentation\n\n| Paper                                                        | Datasets                    |\n| ------------------------------------------------------------ | --------------------------- |\n| Learning Data Manipulation for Augmentation and Weighting [(NeurIPS '19)](https:\u002F\u002Fpapers.nips.cc\u002Fpaper\u002F2019\u002Ffile\u002F671f0311e2754fcdd37f70a8550379bc-Paper.pdf) [code](https:\u002F\u002Fgithub.com\u002Ftanyuqian\u002Flearning-data-manipulation) | SST, IMDB, TREC, CIFAR-10   |\n| Data 
Manipulation: Towards Effective Instance Learning for Neural Dialogue Generation via Learning to Augment and Reweight [(ACL '20)](https:\u002F\u002Fwww.aclweb.org\u002Fanthology\u002F2020.acl-main.564.pdf) | DailyDialog,  OpenSubtitles |\n| Text AutoAugment: Learning Compositional Augmentation Policy for Text Classification [(EMNLP '21)](https:\u002F\u002Farxiv.org\u002Fabs\u002F2109.00523) [code](https:\u002F\u002Fgithub.com\u002Flancopku\u002Ftext-autoaugment) | IMDB, SST2, SST5, TREC, YELP2, YELP5 |\n\n\n### Popular Resources\n- [A visual survey of data augmentation in NLP](https:\u002F\u002Famitness.com\u002F2020\u002F05\u002Fdata-augmentation-for-nlp\u002F)\n- [nlpaug](https:\u002F\u002Fgithub.com\u002Fmakcedward\u002Fnlpaug)\n- [TextAttack](https:\u002F\u002Fgithub.com\u002FQData\u002FTextAttack)\n- [AugLy](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002FAugLy)\n- [NL-Augmenter 🦎 → 🐍](https:\u002F\u002Fgithub.com\u002FGEM-benchmark\u002FNL-Augmenter\u002F)\n","# 自然语言处理中的数据增强技术\n\n\n如果您希望添加自己的论文，请不要通过电子邮件联系我们。相反，请阅读[添加新条目](https:\u002F\u002Fgithub.com\u002Fstyfeng\u002FDataAug4NLP\u002Fblob\u002Fmain\u002Frules.md)的流程说明，并提交一个拉取请求。\n\n我们按照以下任务类别对论文进行分组：[文本分类](#text-classification)、[机器翻译](#translation)、[文本摘要](#summarization)、[问答](#question-answering)、[序列标注](#sequence-tagging)、[句法分析](#parsing)、[语法错误修正](#grammatical-error-correction)、[文本生成](#generation)、[对话系统](#dialogue)、[多模态](#multimodal)、[缓解偏见](#mitigating-bias)、[缓解类别不平衡](#mitigating-class-imbalance)、[对抗样本](#adversarial-examples)、[组合性](#compositionality)以及[自动化数据增强](#automated-augmentation)。\n\n本仓库基于我们的论文《自然语言处理中数据增强方法综述（ACL 2021发现）》(Findings of ACL '21)。您可以按如下方式引用该论文：\n```\n@inproceedings{feng-etal-2021-survey,\n    title = \"A Survey of Data Augmentation Approaches for {NLP}\",\n    author = \"Feng, Steven Y.  
and\n      Gangal, Varun  and\n      Wei, Jason  and\n      Chandar, Sarath  and\n      Vosoughi, Soroush  and\n      Mitamura, Teruko  and\n      Hovy, Eduard\",\n    booktitle = \"Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021\",\n    month = aug,\n    year = \"2021\",\n    address = \"Online\",\n    publisher = \"Association for Computational Linguistics\",\n    url = \"https:\u002F\u002Faclanthology.org\u002F2021.findings-acl.84\",\n    doi = \"10.18653\u002Fv1\u002F2021.findings-acl.84\",\n    pages = \"968--988\",\n}\n```\n\n作者：Steven Y. Feng (\u003Ca href=\"https:\u002F\u002Fscholar.google.ca\u002Fcitations?hl=en&user=zwiszZIAAAAJ\">链接\u003C\u002Fa>)、Varun Gangal (\u003Ca href=\"https:\u002F\u002Fscholar.google.com\u002Fcitations?user=rWZq2nQAAAAJ&hl=en\">链接\u003C\u002Fa>)、Jason Wei (\u003Ca href=\"https:\u002F\u002Fscholar.google.com\u002Fcitations?user=wA5TK_0AAAAJ&hl=en\">链接\u003C\u002Fa>)、Sarath Chandar (\u003Ca href=\"https:\u002F\u002Fscholar.google.co.in\u002Fcitations?user=yxWtZLAAAAAJ&hl=en\">链接\u003C\u002Fa>)、Soroush Vosoughi (\u003Ca href=\"https:\u002F\u002Fscholar.google.ca\u002Fcitations?user=45DAXkwAAAAJ&hl=en\">链接\u003C\u002Fa>)、Teruko Mitamura (\u003Ca href=\"https:\u002F\u002Fscholar.google.com\u002Fcitations?user=gjsxBCkAAAAJ&hl=en\">链接\u003C\u002Fa>)、Eduard Hovy (\u003Ca href=\"https:\u002F\u002Fscholar.google.com\u002Fcitations?user=PUFxrroAAAAJ&hl=en\">链接\u003C\u002Fa>)。\n\n特别感谢 Ryan Shentu、Fiona Feng、Karen Liu、Emily Nie、Tanya Lu 和 Bonnie Ma 在本仓库建设过程中提供的帮助。\n\n注：项目仍在开发中。我们将在近期从综述论文中补充更多内容到本仓库。如有任何问题，请发送邮件至 stevenyfeng@gmail.com 或在此处提交问题。\n\n此外，您还可以观看我们的**Google Research演讲**（由 Steven Feng 和 Varun Gangal 主讲）[这里](https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=kNBVesKUZCk&ab_channel=StevenFeng)，以及我们的**播客节目**（由 Steven Feng 和 Eduard Hovy 
主持）[这里](https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=qmqyT_97Poc)和[这里](https:\u002F\u002Fthedataexchange.media\u002Fdata-augmentation-in-natural-language-processing\u002F)。\n\n### 文本分类\n| 论文 | 数据集 |\n| -- | --- |\n| 无监督词义消歧媲美有监督方法（[ACL '95](https:\u002F\u002Fwww.aclweb.org\u002Fanthology\u002FP95-1026.pdf)） | 论文专用\u002F遗留语料库 |\n| 同义词替换（用于文本分类的字符级卷积神经网络，[NeurIPS '15](https:\u002F\u002Fpapers.nips.cc\u002Fpaper\u002F2015\u002Ffile\u002F250cf8b51c773f3f8dc8b4be867a9a02-Paper.pdf)） | AG新闻、DBpedia、Yelp、Yahoo问答、亚马逊 |\n| 真烦人！！！：基于词汇和框架语义嵌入的数据增强方法，用于利用#petpeeve推文自动分类令人讨厌的行为（[EMNLP '15](https:\u002F\u002Fwww.aclweb.org\u002Fanthology\u002FD15-1306.pdf)） | Twitter |\n| 面对语言学对抗的鲁棒训练（[EACL '17](https:\u002F\u002Fwww.aclweb.org\u002Fanthology\u002FE17-2004\u002F) [代码](https:\u002F\u002Fgithub.com\u002Flrank\u002FLinguistic_adversity)） | 电影评论、客户评论、SUBJ、SST |\n| 上下文增强：利用范例关系词进行数据增强（[NAACL '18](https:\u002F\u002Fwww.aclweb.org\u002Fanthology\u002FN18-2072.pdf) [代码](https:\u002F\u002Fgithub.com\u002Fpfnet-research\u002Fcontextual_augmentation)） | SST、SUBJ、MRQA、RT、TREC |\n| 半监督文本分类的变分预训练（[ACL '19](https:\u002F\u002Fwww.aclweb.org\u002Fanthology\u002FP19-1590.pdf) [代码](http:\u002F\u002Fgithub.com\u002Fallenai\u002Fvampire)） | IMDB、AG新闻、Yahoo、仇恨言论 |\n| EDA：提升文本分类任务性能的简单数据增强技术（[EMNLP '19](http:\u002F\u002Fdx.doi.org\u002F10.18653\u002Fv1\u002FD19-1670) [代码](https:\u002F\u002Fgithub.com\u002Fjasonwei20\u002Feda_nlp)） | SST、CR、SUBJ、TREC、PC |\n| 少样本意图分类中特征空间数据增强的深入研究（DeepLo @ EMNLP '19）（[arXiv](https:\u002F\u002Farxiv.org\u002Fabs\u002F1910.04176)） | SNIPS |\n| 非线性Mixup：面向文本分类的流形外数据增强（[AAAI '20](https:\u002F\u002Fdoi.org\u002F10.1609\u002Faaai.v34i04.5822)） | TREC、SST、Subj、MR |\n| MixText：面向半监督文本分类的隐空间语言学启发式插值（[ACL '20](https:\u002F\u002Fwww.aclweb.org\u002Fanthology\u002F2020.acl-main.194\u002F) [代码](https:\u002F\u002Fgithub.com\u002FGT-SALT\u002FMixText)） | AG新闻、DBpedia、Yahoo、IMDb |\n| 用于一致性训练的无监督数据增强（[NeurIPS 
'20](https:\u002F\u002Fpapers.nips.cc\u002Fpaper\u002F2020\u002Fhash\u002F44feb0096faa8326192570788b38c1d1-Abstract.html) [代码](https:\u002F\u002Fpapers.nips.cc\u002Fpaper\u002F2020\u002Fhash\u002F44feb0096faa8326192570788b38c1d1-Abstract.html)） | Yelp、IMDb、亚马逊、DBpedia |\n| 数据不足？深度学习来救场！（[AAAI '20](https:\u002F\u002Farxiv.org\u002Fabs\u002F1911.03118)） | ATIS、TREC、WVA |\n| 使用预训练Transformer模型进行数据增强 [LifeLongNLP @ AACL '20](https:\u002F\u002Farxiv.org\u002Fabs\u002F2003.02245)，[代码](https:\u002F\u002Fgithub.com\u002Fvarunkumar-dev\u002FTransformersDataAugmentation) | SNIPS、TREC、SST2 |\n| SSMBA：基于自监督流形的数据增强，以提升域外稳健性（[EMNLP '20](https:\u002F\u002Fwww.aclweb.org\u002Fanthology\u002F2020.emnlp-main.97\u002F) [代码](https:\u002F\u002Fgithub.com\u002Fnng555\u002Fssmba)） | IWSLT'14 |\n| Data Boost：通过强化学习引导的条件生成进行文本数据增强（[EMNLP '20](https:\u002F\u002Fwww.aclweb.org\u002Fanthology\u002F2020.emnlp-main.726\u002F)） | ICWSM 20’ 数据挑战赛、SemEval '17 情感分析、SemEval '18 反讽 |\n| 面向小型数据集的有效主动学习的文本数据增强（[EMNLP '20](https:\u002F\u002Fwww.aclweb.org\u002Fanthology\u002F2020.emnlp-main.600\u002F)） | SST2、TREC |\n| 多任务视角下的文本增强（[EACL '21](https:\u002F\u002Fwww.aclweb.org\u002Fanthology\u002F2021.eacl-main.252\u002F)） | SST2、TREC、SUBJ |\n| GPT3Mix：利用大规模语言模型进行文本增强（[arXiv '21](https:\u002F\u002Farxiv.org\u002Fabs\u002F2104.08826)） | SST2、CR、TREC、SUBJ、MPQA、CoLA |\n| 基于三元组损失、数据增强和课程学习的少样本文本分类（[NAACL '21](https:\u002F\u002Farxiv.org\u002Fabs\u002F2103.07552) [代码](https:\u002F\u002Fgithub.com\u002Fjasonwei20\u002Ftriplet-loss)） | HUFF、COV-Q、AMZN、FEWREL |\n| 文本AutoAugment：学习用于文本分类的组合式增强策略（[EMNLP '21](https:\u002F\u002Farxiv.org\u002Fabs\u002F2109.00523) [代码](https:\u002F\u002Fgithub.com\u002Flancopku\u002Ftext-autoaugment)） | IMDB、SST2、SST5、TREC、YELP2、YELP5 |\n| AEDA：一种更简单的文本分类数据增强技术（[EMNLP '21](https:\u002F\u002Farxiv.org\u002Fabs\u002F2108.13230) [代码](https:\u002F\u002Fgithub.com\u002Fakkarimi\u002Faeda_nlp)） | SST、CR、SUBJ、TREC、PC |\n\n### 翻译\n\n| 论文 | 数据集 |\n| -- | --- |\n| 反向翻译（利用单语数据改进神经机器翻译模型，[ACL 
'16](https:\u002F\u002Fwww.aclweb.org\u002Fanthology\u002FP16-1009.pdf)） | WMT '15 英德、IWSLT '15 英土 |\n| 使用平行合成数据调整神经机器翻译 [(WMT '17)](https:\u002F\u002Fwww.aclweb.org\u002Fanthology\u002FW17-4714\u002F) | COMMON、10亿词、dev2013、XRCE、IT、E-Com|\n| 低资源神经机器翻译的数据增强 [(ACL '17)](https:\u002F\u002Fwww.aclweb.org\u002Fanthology\u002FP17-2090\u002F) [代码](https:\u002F\u002Fgithub.com\u002Fmarziehf\u002FDataAugmentationNMT) | WMT '14\u002F'15\u002F'16 英德\u002F德英|\n| 用于口语方言神经机器翻译的合成数据 [(arxiv '17)](https:\u002F\u002Farxiv.org\u002Fabs\u002F1707.00079) | LDC2012T09、OpenSubtitles-2013|\n| 基于数据增强的多源神经机器翻译 [(IWSLT '18)](https:\u002F\u002Farxiv.org\u002Fabs\u002F1810.06826) | TED演讲|\n| SwitchOut：一种高效的神经机器翻译数据增强算法 [(EMNLP '18)](https:\u002F\u002Fwww.aclweb.org\u002Fanthology\u002FD18-1100\u002F) | IWSLT '15 英越、IWSLT '16 德英、WMT '15 英德|\n| 神经机器翻译中反向翻译的泛化 [(WMT '19)](https:\u002F\u002Fwww.aclweb.org\u002Fanthology\u002FW19-5205\u002F) | ed NewsCrawl2、WMT'18 德英|\n| 神经模糊修复：将模糊匹配整合到神经机器翻译中 [(ACL '19)](https:\u002F\u002Fwww.aclweb.org\u002Fanthology\u002FP19-1175\u002F) | DGT-TM 英马\u002F英匈|\n| 利用知识图谱增强神经机器翻译 [(arxiv '19)](https:\u002F\u002Farxiv.org\u002Fabs\u002F1902.08816) | WMT '14 -'18|\n| 针对低资源翻译的广义数据增强 [(ACL '19)](https:\u002F\u002Fwww.aclweb.org\u002Fanthology\u002FP19-1579\u002F) [代码](https:\u002F\u002Fgithub.com\u002Fxiamengzhou\u002FDataAugForLRL) | ENG-HRL-LRL、HRL-LRL|\n| 通过合成噪声提高机器翻译鲁棒性 [(NAACL '19)](https:\u002F\u002Fwww.aclweb.org\u002Fanthology\u002FN19-1190\u002F) [代码](https:\u002F\u002Fgithub.com\u002FMysteryVaibhav\u002Frobust_mtnt) | EP、TED、MTNT 英法、英日|\n| 神经机器翻译的软上下文数据增强 [(ACL '19)](https:\u002F\u002Fwww.aclweb.org\u002Fanthology\u002FP19-1555\u002F) [代码](https:\u002F\u002Fgithub.com\u002Fteslacool\u002FSCA) | IWSLT '14 德西希英、WMT '14 英德|\n| 基于反向翻译的数据增强用于上下文感知的神经机器翻译 [(DiscoMT @ EMNLP '19)](https:\u002F\u002Fwww.aclweb.org\u002Fanthology\u002FD19-6504\u002F) [代码](https:\u002F\u002Fgithub.com\u002Fsugi-a\u002Fdiscomt2019) | IWSLT'17 英日\u002F英法、BookCorpus、Europarl v7、日本国会|\n| 
通过数据增强提升神经机器翻译鲁棒性：超越反向翻译 [(W-NUT @ EMNLP '19)](https:\u002F\u002Fwww.aclweb.org\u002Fanthology\u002FD19-5543\u002F) | WMT'15\u002F'19 英法、MTNT、IWSLT'17、MuST-C|\n| 面向流水线式语音翻译的数据增强 [(Baltic HLT '20)](https:\u002F\u002Fhal.inria.fr\u002Fhal-02907053) | WMT '17|\n| 基于数据增强的词汇约束感知神经机器翻译 [(IJCAI '20)](https:\u002F\u002Fwww.ijcai.org\u002Fproceedings\u002F2020\u002F496) [代码](https:\u002F\u002Fgithub.com\u002Fghchen18\u002Fleca) | WMT '16 德英、NIST 中英|\n| 低资源神经机器翻译的多样化数据增强策略 [(Information '20)](https:\u002F\u002Fwww.mdpi.com\u002F2078-2489\u002F11\u002F5\u002F255) | IWSLT '14 英德|\n| 针对神经机器翻译的句法感知数据增强 [(arxiv '20)](https:\u002F\u002Farxiv.org\u002Fabs\u002F2004.14200) | WMT '14 英德、IWSLT '14 德英|\n| SSMBA：基于自监督流形的数据增强以提升域外鲁棒性 [(EMNLP '20)](https:\u002F\u002Fwww.aclweb.org\u002Fanthology\u002F2020.emnlp-main.97\u002F) [代码](https:\u002F\u002Fgithub.com\u002Fnng555\u002Fssmba) | IWSLT'14|\n| 数据多样化：神经机器翻译的简单策略 [(NeurIPS '20)](https:\u002F\u002Fproceedings.neurips.cc\u002Fpaper\u002F2020\u002Ffile\u002F7221e5c8ec6b08ef6d3f9ff3ce6eb1d1-Paper.pdf) [代码](https:\u002F\u002Fgithub.com\u002Fnxphi47\u002Fdata_diversification) | WMT '14 英德\u002F英法、IWSLT '13\u002F'14\u002F'15 英德\u002F德英\u002F英法|\n| AdvAug：针对神经机器翻译的鲁棒对抗增强 [(ACL '20)](https:\u002F\u002Fwww.aclweb.org\u002Fanthology\u002F2020.acl-main.529\u002F) | NIST 中英、WMT '14 英德|\n| 基于词典的跨领域神经机器翻译数据增强 [(arxiv '20)](https:\u002F\u002Farxiv.org\u002Fabs\u002F2004.02577) | WMT '14\u002F'19|\n| 针对神经机器翻译鲁棒性的句子边界增强 [(arxiv '20)](https:\u002F\u002Farxiv.org\u002Fabs\u002F2010.11132) | IWSLT '14\u002F'15\u002F'18 英德、WMT '18 英德|\n| Valar nmt：极度缺乏资源的神经机器翻译 [(斯坦福CS224N)](https:\u002F\u002Fweb.stanford.edu\u002Fclass\u002Farchive\u002Fcs\u002Fcs224n\u002Fcs224n.1194\u002Freports\u002Fcustom\u002F15811193.pdf) | 圣经、杂项、Europarl v8、Newstest '18|\n\n\n### 摘要生成\n\n| 论文 | 数据集 |\n| -- | --- |\n| 将维基百科转化为查询聚焦摘要的增强数据 [(arxiv '19)](https:\u002F\u002Farxiv.org\u002Fabs\u002F1911.03324) | DUC|\n| 基于合成数据的迭代数据增强（抽象文本摘要：一项低资源挑战 [(EMNLP 
'19)](https:\u002F\u002Fwww.aclweb.org\u002Fanthology\u002FD19-1616\u002F) | Swisstext、commoncrawl|\n| 通过中间微调和数据增强改进零样本和少样本抽象摘要生成 [(NAACL '21)](https:\u002F\u002Farxiv.org\u002Fabs\u002F2010.12836) | CNN-DailyMail|\n| 针对抽象查询聚焦多文档摘要生成的数据增强 [(AAAI '21)](https:\u002F\u002Farxiv.org\u002Fabs\u002F2103.01863) [代码](https:\u002F\u002Fgithub.com\u002Framakanth-pasunuru\u002FQmdsCnnIr) | QMDSCNN、QMDSIR、WikiSum、DUC 2006、DUC 2007|\n\n\n### 问答\n\n| 论文 | 数据集 |\n| -- | --- |\n| QANet：结合局部卷积与全局自注意力的阅读理解模型 [(ICLR '18)](https:\u002F\u002Fopenreview.net\u002Fforum?id=B14TlG-RW) | SQuAD、TriviaQA|\n| 针对领域无关问答的数据增强与采样技术探索 [(EMNLP '19研讨会)](https:\u002F\u002Fwww.aclweb.org\u002Fanthology\u002FD19-5829\u002F) | MRQA|\n| 面向开放域问答的BERT微调数据增强 [(arxiv '19)](https:\u002F\u002Farxiv.org\u002Fabs\u002F1904.06652) | SQuAD、Trivia-QA、CMRC、DRCD|\n| XLDA：面向自然语言推理与问答的跨语言数据增强 [(arxiv '19)](https:\u002F\u002Fopenreview.net\u002Fforum?id=BJgAf6Etwr) | XNLI、SQuAD|\n| 零样本跨语言问答的合成数据增强 [(arxiv '20)](https:\u002F\u002Farxiv.org\u002Fabs\u002F2010.12643) | MLQA、XQuAD、SQuAD-it、PIAF|\n| 面向一致问答的逻辑引导数据增强与正则化 [(ACL '20)](https:\u002F\u002Fwww.aclweb.org\u002Fanthology\u002F2020.acl-main.499\u002F) [代码](https:\u002F\u002Fgithub.com\u002FAkariAsai\u002Flogic_guided_qa) | WIQA、QuaRel、HotpotQA|\n\n### 序列标注\n\n| 论文 | 数据集 |\n| -- | --- |\n| 基于依存树变形的低资源语言数据增强 [(EMNLP '18)](https:\u002F\u002Fwww.aclweb.org\u002Fanthology\u002FD18-1545.pdf) [代码](https:\u002F\u002Fgithub.com\u002Fgozdesahin\u002Fcrop-rotate-augment) | 通用依存项目 |\n| DAGA：面向低资源标注任务的生成式数据增强 [(EMNLP '20)](https:\u002F\u002Fwww.aclweb.org\u002Fanthology\u002F2020.emnlp-main.488\u002F) [代码](https:\u002F\u002Fgithub.com\u002Fntunlp\u002Fdaga) | CoNLL2002\u002F2003 |\n| 命名实体识别中简单数据增强方法的分析 [(COLING '20)](https:\u002F\u002Fwww.aclweb.org\u002Fanthology\u002F2020.coling-main.343\u002F) | MaSciP, i2b2-2010 |\n| SeqMix：通过序列混合增强主动序列标注 [(EMNLP '20)](https:\u002F\u002Fwww.aclweb.org\u002Fanthology\u002F2020.emnlp-main.691\u002F) 
[代码](https:\u002F\u002Fgithub.com\u002Frz-zhang\u002FSeqMix) | CoNLL-03、ACE05、Webpage |\n\n\n### 句法分析\n| 论文 | 数据集 |\n| -- | --- |\n| 面向神经网络语义解析的数据重组 [(ACL '16)](https:\u002F\u002Fwww.aclweb.org\u002Fanthology\u002FP16-1002\u002F) [代码](https:\u002F\u002Fgithub.com\u002Fdongpobeyond\u002FSeq2Act) | GeoQuery、ATIS、Overnight |\n| 真正低资源语言上低资源依存句法分析方法的系统性比较 [(EMNLP '19)](https:\u002F\u002Fwww.aclweb.org\u002Fanthology\u002FD19-1102\u002F) | 通用依存树库版本2.2 |\n| 基于语义增强的社交媒体文本命名实体识别 [(EMNLP '20)](https:\u002F\u002Fwww.aclweb.org\u002Fanthology\u002F2020.emnlp-main.107\u002F)[代码](https:\u002F\u002Fgithub.com\u002Fcuhksz-nlp\u002FSANER) | WNUT16、WNUT17、Weibo |\n| 足够好的组合式数据增强 [(ACL '20)](https:\u002F\u002Fwww.aclweb.org\u002Fanthology\u002F2020.acl-main.676\u002F) [代码](https:\u002F\u002Fgithub.com\u002Fjacobandreas\u002Fgeca) | SCAN |\n| GraPPa：用于表格语义解析的语法增强预训练 [(ICLR '21)](https:\u002F\u002Fopenreview.net\u002Fforum?id=kyaIeYj4zZ) | SPIDER、WIKISQL、WIKITABLEQUESTIONS |\n\n\n### 语法错误修正\n| 论文 | 数据集 |\n| -- | --- |\n| GenERRate：为语法错误检测生成错误 [(BEA '09)](https:\u002F\u002Fwww.aclweb.org\u002Fanthology\u002FW09-2112\u002F) | Ungram-BNC |\n| 从语言学习社交网络的修订日志中挖掘数据以自动纠正日语作为第二语言学习者的错误 [(IJCNLP '11)](https:\u002F\u002Fwww.aclweb.org\u002Fanthology\u002FI11-1017\u002F) [代码](https:\u002F\u002Fgithub.com\u002Fgoogle-research-datasets\u002Fclang8) | Lang-8 |\n| 基于翻译的语法错误修正中的人工错误生成 [(剑桥大学技术报告 '16)](https:\u002F\u002Fwww.cl.cam.ac.uk\u002Ftechreports\u002FUCAM-CL-TR-895.pdf) | 多个数据集 |\n| 自然语言的加噪与去噪：用于语法修正的多样化反向翻译。[(NAACL'18)](https:\u002F\u002Fwww.aclweb.org\u002Fanthology\u002FN18-1057\u002F) | Lang-8、CoNLL-2014、CoNLL-2013、JFLEG |\n| 在低资源语法错误修正中使用维基百科编辑内容。[(WNUT @ EMNLP '18)](https:\u002F\u002Fdoi.org\u002F10.18653\u002Fv1\u002FW18-6111) | Falko-MERLIN GEC语料库 |\n| 带有数据增强的序列到序列预训练用于句子重写 [(arxiv '19)](https:\u002F\u002Farxiv.org\u002Fabs\u002F1909.06002) | CoNLL-2014、JFLEG |\n| 用于语法错误修正的可控数据合成方法 [(arxiv '19)](https:\u002F\u002Farxiv.org\u002Fabs\u002F1909.13302) 
[代码](https:\u002F\u002Fgithub.com\u002Fmarumalo\u002Fsurvey\u002Fissues\u002F21) | NUCLE、Lang-8、One-Billion、CoNLL2013、CoNLL2014 |\n| 基于合成数据无监督预训练的神经网络语法错误修正系统。[(BEA @ ACL '19)](https:\u002F\u002Fdoi.org\u002F10.18653\u002Fv1\u002FW19-4427) | FCE、NUCLE、W&I+LOCNESS、Lang-8 |\n| 用于语法错误修正的语料库生成 [(NAACL'19)](https:\u002F\u002Fdoi.org\u002F10.18653\u002Fv1\u002FN19-1333) | CoNLL-2014、JFLEG、Lang-8 |\n| 用于语法错误修正的错误数据生成 [(BEA @ ACL '19)](https:\u002F\u002Fwww.aclweb.org\u002Fanthology\u002FW19-4415\u002F) | Lang-8、n个CoNLL、JFLEG、CoNLL-2014、ABCN、FCE |\n| 带有数据增强的序列到序列预训练用于句子重写 [(arxiv '19)](https:\u002F\u002Farxiv.org\u002Fabs\u002F1909.06002) [代码](https:\u002F\u002Fgithub.com\u002Fmarumalo\u002Fsurvey\u002Fissues\u002F6) | GYAFC、WMT14、WMT18 |\n| 基于更好预训练和序列迁移学习构建的神经网络语法错误修正系统。[(BEA @ ACL '19)](https:\u002F\u002Fdoi.org\u002F10.18653\u002Fv1\u002FW19-4423) | FCE、NUCLE、W&I+LOCNESS、Lang-8、Gutenberg、Tatoeba、WikiText-103 |\n| 通过编辑潜在表示进行数据增强以改进语法错误修正 [(COLING'20)](https:\u002F\u002Fdoi.org\u002F10.18653\u002Fv1\u002F2020.coling-main.200) | FCE、NUCLE、W&I+LOCNESS、Lang-8 |\n| 用于语法错误修正的合成数据生成方法比较研究 [(BEA @ ACL '20)](https:\u002F\u002Fwww.aclweb.org\u002Fanthology\u002F2020.bea-1.21\u002F) | W&I+LOCNESS、FCE、News Crawl 2、W&I+L train、FCE-train、NUCLE、Lang-8、W&I+L dev、FCE-test、Tatoeba、WikiText-103 |\n| 用于日语语法错误修正的基于句法规则的平行数据合成框架 [(MIT论文 '20)](https:\u002F\u002Fdspace.mit.edu\u002Fhandle\u002F1721.1\u002F127416) | Lang-8 |\n\n\n### 文本生成\n\n| 论文 | 数据集 |\n| -- | --- |\n| TNT-NLG，系统2：通过数据重复和语义表示操作提升神经网络生成能力 [(E2E NLG挑战系统描述)](http:\u002F\u002Fwww.macs.hw.ac.uk\u002FInteractionLab\u002FE2E\u002Ffinal_papers\u002FE2E-TNT_NLG2.pdf) | 待办事项 |\n| 第三届神经网络生成与翻译研讨会成果 [(WNGT @ EMNLP '19)](https:\u002F\u002Fwww.aclweb.org\u002Fanthology\u002FD19-5601\u002F) | RotoWire英德双语数据 |\n| 好样本难寻：噪声注入采样与自训练在神经语言生成模型中的应用 [(INLG '19)](https:\u002F\u002Fwww.aclweb.org\u002Fanthology\u002FW19-8672\u002F) [代码](https:\u002F\u002Fgithub.com\u002Fkedz\u002Fnoiseylg) | E2E挑战数据集、笔记本电脑、电视机 |\n| GenAug：用于微调文本生成器的数据增强 [(DeeLIO @ 
EMNLP '20)](https:\u002F\u002Fwww.aclweb.org\u002Fanthology\u002F2020.deelio-1.4\u002F) [代码](https:\u002F\u002Fgithub.com\u002Fstyfeng\u002FGenAug) | Yelp |\n| 去噪预训练和数据增强策略，以提升使用Transformer的RDF口头化效果 [(WebNLG+ @ INLG '20)](https:\u002F\u002Fwww.aclweb.org\u002Fanthology\u002F2020.webnlg-1.9\u002F) | WebNLG |\n\n### 对话\n| 论文 | 数据集 | \n| -- | --- |\n| 面向对话语言理解的序列到序列数据增强 [(COLING '18)](https:\u002F\u002Fwww.aclweb.org\u002Fanthology\u002FC18-1105\u002F) [代码](https:\u002F\u002Fgithub.com\u002FAtmaHou\u002FSeq2SeqDataAugmentationForLU) | ATIS, Dec94, 斯坦福对话 |\n| 在相同上下文中考虑多种适当回复的任务导向对话系统 [(arxiv '19)](https:\u002F\u002Farxiv.org\u002Fabs\u002F1911.10484) [代码](https:\u002F\u002Fgithub.com\u002Fthu-spmi\u002Fdamd-multiwoz) | MultiWOZ |\n| 用于语音语言理解中开放词汇槽位的数据加噪增强 [(NAACL '19 学生研究研讨会)](https:\u002F\u002Fwww.aclweb.org\u002Fanthology\u002FN19-3014\u002F) | ATIS, Snips, MR |\n| 基于原子模板的语音语言理解数据增强 [(EMNLP '19)](https:\u002F\u002Fwww.aclweb.org\u002Fanthology\u002FD19-1375\u002F) [代码](https:\u002F\u002Fgithub.com\u002Fsz128\u002FDAAT_SLU) | DSTC 2&3, DSTC2 |\n| 通过联合变分生成进行语音语言理解的数据增强 [(AAAI '19)](https:\u002F\u002Fojs.aaai.org\u002Findex.php\u002FAAAI\u002Farticle\u002Fview\u002F4729) | ATIS, Snips, MIT |\n| 面向端到端任务导向对话的有效数据增强方法 [(IALP '19)](https:\u002F\u002Fieeexplore.ieee.org\u002Fdocument\u002F9037690) | CamRest676, KVRET |\n| 带释义增强的任务导向对话生成 [(ACL '20)](https:\u002F\u002Fwww.aclweb.org\u002Fanthology\u002F2020.acl-main.60\u002F) [代码](https:\u002F\u002Fgithub.com\u002Fthu-spmi\u002FPARG) | TCamRest676, MultiWOZ |\n| 基于强化学习数据增强的对话状态跟踪 [(AAAI '20)](https:\u002F\u002Fojs.aaai.org\u002Findex.php\u002FAAAI\u002Farticle\u002Fview\u002F6491) | WoZ, MultiWoZ |\n| 对话状态跟踪中复制机制的数据增强 [(arxiv '20)](https:\u002F\u002Farxiv.org\u002Fabs\u002F2002.09634) | WoZ, DSTC2, Multi |\n| 简单就是最好！面向低资源槽位填充和意图分类的轻量级数据增强 [(PACLIC '20)](https:\u002F\u002Fwww.aclweb.org\u002Fanthology\u002F2020.paclic-1.20\u002F) [代码](https:\u002F\u002Fgithub.com\u002Fslouvan\u002Fsaug) | ATIS, SNIPS, FB |\n| 
对话图：非确定性对话管理中的数据增强、训练与评估 [(TACL '21)](https:\u002F\u002Fdirect.mit.edu\u002Ftacl\u002Farticle\u002Fdoi\u002F10.1162\u002Ftacl_a_00352\u002F97777\u002FConversation-Graph-Data-Augmentation-Training-and) | M2M, MultiWOZ |\n| GOLD：利用数据增强改进对话中的域外检测 [(EMNLP '21)](https:\u002F\u002Faclanthology.org\u002F2021.emnlp-main.35\u002F) [代码](https:\u002F\u002Fgithub.com\u002Fasappresearch\u002Fgold) | SMCalFlow, ROSTD |\n| 通过多样化参考数据增强提升开放域对话的自动评价 [(ACL '21 Findings)](https:\u002F\u002Faclanthology.org\u002F2021.findings-acl.357\u002F) [代码](https:\u002F\u002Fgithub.com\u002Fharsh19\u002FDiverse-Reference-Augmentation\u002F) | DailyDialog |\n\n### 多模态\n| 论文 | 数据集 | \n| -- | --- |\n| 视觉问答的数据增强 [(INLG '17)](https:\u002F\u002Fwww.aclweb.org\u002Fanthology\u002FW17-3529\u002F) | COCO-VQA, COCO-QA |\n| 面向端到端自动语音识别的低资源多模态数据增强 [(CoRR ’18)](https:\u002F\u002Fdeepai.org\u002Fpublication\u002Flow-resource-multi-modal-data-augmentation-for-end-to-end-asr) | 待定 |\n| 面向端到端自动语音识别的多模态数据增强 [(Interspeech '18)](https:\u002F\u002Fwww.isca-speech.org\u002Farchive\u002FInterspeech_2018\u002Fabstracts\u002F2456.html) | Voxforge, HUB4 |\n| 利用图像说明扩充图像问答数据集 [(LREC '18)](https:\u002F\u002Fwww.aclweb.org\u002Fanthology\u002FL18-1436\u002F) | IQA |\n| 基于循环神经网络的数据增强实现多模态连续情绪识别 [(AVEC '18)](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002F10.1145\u002F3266302.3266304) | 待定 |\n| 基于问答方法并结合数据增强的多模态对话状态跟踪 [(DSTC8 @ AAAI '20)](https:\u002F\u002Farxiv.org\u002Fabs\u002F2007.09903) | DSTC7-AVSD |\n| 视频问答任务的数据增强技术 [(arxiv '20)](https:\u002F\u002Farxiv.org\u002Fabs\u002F2008.09849) | TGIF-QA, MSVD-QA |\n| 针对语音识别错误鲁棒的对话模型训练数据增强 [(NLP for ConvAI @ ACL '20)](https:\u002F\u002Farxiv.org\u002Fabs\u002F2006.05635) | DSTC2 |\n| 视觉问答的语义等价对抗性数据增强 [(ECCV '20)](https:\u002F\u002Flink.springer.com\u002Fchapter\u002F10.1007\u002F978-3-030-58529-7_26) | 待定 |\n| 使用 BERT 进行文本增强以辅助图像描述生成 [(Applied Sciences '20)](https:\u002F\u002Fwww.mdpi.com\u002F2076-3417\u002F10\u002F17\u002F5978) | MSCOCO |\n| MDA：用于提升图像-文本情感\u002F情绪分类任务性能的多模态数据增强框架 
[(IEEE Intelligent Systems '20)](https:\u002F\u002Fieeexplore.ieee.org\u002Fdocument\u002F9206007) | 待定 |\n\n### 缓解偏见\n| 论文 | 数据集 | \n| -- | --- |\n| 核心指代消解中的性别偏见：评估与去偏方法。[(NAACL '18)](https:\u002F\u002Fwww.aclweb.org\u002Fanthology\u002FN18-2003\u002F) [代码](https:\u002F\u002Fgithub.com\u002Fuclanlp\u002FcorefBias) | WinoBias, OntoNotes|\n| 用于缓解具有丰富形态学特征的语言中性别刻板印象的反事实数据增强 [(ACL '19)](https:\u002F\u002Fwww.aclweb.org\u002Fanthology\u002FP19-1161\u002F) [代码](https:\u002F\u002Fgithub.com\u002Frycolab\u002FbiasCDA) | 待定 |\n| CONAN - 通过利基来源构建反叙事：打击在线仇恨言论的多语言回应数据集 [(ACL '19)](https:\u002F\u002Faclanthology.org\u002FP19-1271.pdf) [数据集](https:\u002F\u002Fgithub.com\u002Fmarcoguerini\u002FCONAN)| 新创建的数据集 |\n| 关键在于名字：基于名字的反事实数据替换以缓解性别偏见 [(EMNLP '19)](https:\u002F\u002Fwww.aclweb.org\u002Fanthology\u002FD19-1530\u002F) [代码](https:\u002F\u002Fgithub.com\u002Frowanhm\u002Fcounterfactual-data-substitution) | SSA, 斯坦福大型影评, SimLex-999 |\n| 神经网络自然语言处理中的性别偏见。[(Springer '20)](https:\u002F\u002Flink.springer.com\u002Fchapter\u002F10.1007%2F978-3-030-62077-6_14 ) | Wikitext-2, CoNLL-2012 |\n| 通过添加谓词-论元结构来增强训练句的鲁棒性 [(arxiv '20)](https:\u002F\u002Farxiv.org\u002Fabs\u002F2010.12510) | SWAG, CoNLL2009, MultiNLI, HANS|\n\n### 缓解类别不平衡\n| 论文 | 数据集 | \n| -- | --- |\n| SMOTE：合成少数类过采样技术 [(Journal of Artificial Intelligence Research '02)](https:\u002F\u002Fwww.jair.org\u002Findex.php\u002Fjair\u002Farticle\u002Fview\u002F10302) | Pima, Phoneme, Adult, E-state, Satimage, Forest Cover, Oil, Mammography, Can |\n| 用于解决类别不平衡问题的词语义消歧主动学习 [(EMNLP '07)](https:\u002F\u002Fwww.aclweb.org\u002Fanthology\u002FD07-1082\u002F) | 待定 |\n| MLSMOTE：通过合成实例生成解决多标签学习中的类别不平衡问题 [(Knowledge-Based Systems '15)](https:\u002F\u002Fwww.sciencedirect.com\u002Fscience\u002Farticle\u002Fabs\u002Fpii\u002FS0950705115002737?via%3Dihub) | bibtex, cal500, corel5k, slashdot, tmc2007, mediamill, medical, scene, enron, emotions |\n| 面向不平衡数据学习的 SMOTE：进展与挑战，纪念 15 周年 [(Journal of Artificial Intelligence Research 
'18)](https:\u002F\u002Fwww.jair.org\u002Findex.php\u002Fjair\u002Farticle\u002Fview\u002F11192) | 待定 |\n\n### 对抗样本\n\n| 论文 | 数据集 |\n| -- | --- |\n| 基于句法控制释义网络的对抗样本生成 [(NAACL '18)](https:\u002F\u002Fwww.aclweb.org\u002Fanthology\u002FN18-1170\u002F) [代码](https:\u002F\u002Fgithub.com\u002Fmiyyer\u002Fscpn) | SST, SICK |\n| AdvEntuRe：基于知识引导样例的文本蕴含对抗训练 [(ACL '18)](https:\u002F\u002Fwww.aclweb.org\u002Fanthology\u002FP18-1225\u002F) [代码](https:\u002F\u002Fgithub.com\u002Fdykang\u002Fadventure) | WordNet, PPDB, SICK, SNLI, SciTail |\n| 用需要简单词汇推理的句子攻破自然语言推理系统 [(ACL '18)](https:\u002F\u002Fwww.aclweb.org\u002Fanthology\u002FP18-2103\u002F) | SNLI, SciTail, MultiNLI |\n| 对抗性词语替换的认证鲁棒性 [(EMNLP '19)](https:\u002F\u002Fwww.aclweb.org\u002Fanthology\u002FD19-1423\u002F) [代码](https:\u002F\u002Fgithub.com\u002Frobinjia\u002Fcertified-word-sub) | IMDB, SNLI |\n| PAWS：通过打乱词序生成释义对抗样本 [(NAACL '19)](https:\u002F\u002Fwww.aclweb.org\u002Fanthology\u002FN19-1131\u002F) [代码](https:\u002F\u002Fgithub.com\u002Fgoogle-research-datasets\u002Fpaws) | PAWS (QQP + Wikipedia) |\n| 基于概率加权词重要性的自然语言对抗样本生成 [(ACL '19)](https:\u002F\u002Faclanthology.org\u002FP19-1103\u002F) [代码](https:\u002F\u002Fgithub.com\u002FJHL-HUST\u002FPWWS) | IMDB, AG’s News, Yahoo Answers |\n\n\n### 复合性\n\n| 论文 | 数据集 |\n| -- | --- |\n| 足够好的复合数据增强 [(ACL '20)](https:\u002F\u002Fwww.aclweb.org\u002Fanthology\u002F2020.acl-main.676.pdf) [代码](https:\u002F\u002Fgithub.com\u002Fjacobandreas\u002Fgeca) | SCAN |\n| 序列级混合样本数据增强 [(EMNLP '20)](https:\u002F\u002Fwww.aclweb.org\u002Fanthology\u002F2020.emnlp-main.447) [代码](https:\u002F\u002Fgithub.com\u002Fdguo98\u002Fseqmix) | IWSLT ’14, WMT ’14 | \n\n### 自动化增强\n\n| 论文                                                        | 数据集                     |\n| ------------------------------------------------------------ | --------------------------- |\n| 学习数据操作以进行增强和加权 [(NeurIPS 
'19)](https:\u002F\u002Fpapers.nips.cc\u002Fpaper\u002F2019\u002Ffile\u002F671f0311e2754fcdd37f70a8550379bc-Paper.pdf) [代码](https:\u002F\u002Fgithub.com\u002Ftanyuqian\u002Flearning-data-manipulation) | SST, IMDB, TREC, CIFAR-10   |\n| 数据操作：通过学习增强和重新加权，实现神经对话生成的有效实例学习 [(ACL '20)](https:\u002F\u002Fwww.aclweb.org\u002Fanthology\u002F2020.acl-main.564.pdf) | DailyDialog, OpenSubtitles |\n| 文本自动增强：为文本分类学习组合式增强策略 [(EMNLP '21)](https:\u002F\u002Farxiv.org\u002Fabs\u002F2109.00523) [代码](https:\u002F\u002Fgithub.com\u002Flancopku\u002Ftext-autoaugment) | IMDB, SST2, SST5, TREC, YELP2, YELP5 |\n\n\n### 热门资源\n- [NLP中数据增强的可视化综述](https:\u002F\u002Famitness.com\u002F2020\u002F05\u002Fdata-augmentation-for-nlp\u002F)\n- [nlpaug](https:\u002F\u002Fgithub.com\u002Fmakcedward\u002Fnlpaug)\n- [TextAttack](https:\u002F\u002Fgithub.com\u002FQData\u002FTextAttack)\n- [AugLy](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002FAugLy)\n- [NL-Augmenter 🦎 → 🐍](https:\u002F\u002Fgithub.com\u002FGEM-benchmark\u002FNL-Augmenter\u002F)","# DataAug4NLP Quick-Start Guide\n\n**DataAug4NLP** is not a single installable Python package but an academically maintained **open-source collection of papers and code**: it systematically catalogs data augmentation research in NLP, together with algorithm implementations and the datasets they apply to. This guide helps you locate the technique you need and run its code.\n\n## Environment Setup\n\nThe repository links to algorithms implemented at different times in different frameworks, so there is no unified one-click install. Prepare the environment required by the specific paper\u002Falgorithm you choose.\n\n### System Requirements\n- **Operating system**: Linux, macOS, or Windows (Linux recommended)\n- **Python version**: most modern implementations need Python 3.6+ (some older paper code may require Python 2.7; prefer recent implementations)\n- **Hardware**: augmentation methods built on deep models (e.g. BERT, GPT, Transformers) are best run on an NVIDIA GPU with CUDA.\n\n### Prerequisites\nBefore cloning, make sure the basic tooling is installed:\n- `git`\n- `python3` & `pip`\n- a common deep-learning framework (install as needed):\n  ```bash\n  pip install torch torchvision torchaudio --index-url https:\u002F\u002Fdownload.pytorch.org\u002Fwhl\u002Fcu118\n  # or\n  pip install tensorflow\n  ```\n- general-purpose NLP libraries:\n  ```bash\n  pip install numpy pandas scikit-learn transformers datasets\n  ```\n\n> **Mirror tips for users in mainland China**:\n> - Install Python packages via the Tsinghua or Alibaba mirror:\n>   ```bash\n>   pip install \u003Cpackage_name> -i https:\u002F\u002Fpypi.tuna.tsinghua.edu.cn\u002Fsimple\n>   ```\n> 
- 若克隆 GitHub 速度慢，可使用国内镜像站（如 Gitee 镜像，若有）或配置代理。\n\n## 安装步骤\n\n该项目以代码清单和链接形式存在，使用前需克隆仓库并定位具体算法的代码库。\n\n1. **克隆主仓库**\n   获取完整的论文列表和索引：\n   ```bash\n   git clone https:\u002F\u002Fgithub.com\u002Fstyfeng\u002FDataAug4NLP.git\n   cd DataAug4NLP\n   ```\n\n2. **定位并获取具体算法代码**\n   浏览仓库中的 `README.md` 或按任务分类（如 `Text Classification`, `Translation`）找到您需要的论文。\n   \n   *示例*：如果您想使用经典的 **EDA (Easy Data Augmentation)** 技术：\n   - 在列表中找到 \"EDA: Easy Data Augmentation Techniques...\" 一行。\n   - 点击对应的 `[code]` 链接（通常指向独立的 GitHub 仓库，如 `jasonwei20\u002Feda_nlp`）。\n   - 克隆该具体算法的仓库：\n     ```bash\n     git clone https:\u002F\u002Fgithub.com\u002Fjasonwei20\u002Feda_nlp.git\n     cd eda_nlp\n     ```\n\n3. **安装特定算法依赖**\n   进入具体算法目录后，查看其自带的 `requirements.txt` 并安装：\n   ```bash\n   pip install -r requirements.txt\n   ```\n   *(注：若无 requirements.txt，请参考该子项目的 README 说明)*\n\n## 基本使用\n\n以下以 **EDA (Easy Data Augmentation)** 为例，展示如何对文本分类数据进行增强。其他算法的使用逻辑类似，请参考各自子仓库的说明。\n\n### 1. 准备输入数据\n创建一个名为 `input.txt` 的文件，每行包含一条原始文本数据：\n```text\nThis movie is absolutely fantastic and I loved it.\nThe service was terrible and the food was cold.\n```\n*(注：部分实现（如 eda_nlp）要求每行以标签开头、以制表符与文本分隔，具体输入格式请以对应子仓库 README 为准)*\n\n### 2. 运行增强脚本\n执行子仓库提供的增强脚本（具体脚本名与参数以其 README 为准），指定输入文件、输出文件及增强参数（如每条数据生成的副本数量 `num_aug`）：\n\n```bash\npython eda.py --input=input.txt --output=output_eda.txt --num_aug=9 --alpha=0.1\n```\n\n**参数说明：**\n- `--input`: 原始数据文件路径。\n- `--output`: 增强后的数据保存路径。\n- `--num_aug`: 每条原始数据生成的增强样本数。\n- `--alpha`: 控制增强强度的参数（例如同义词替换的比例）。\n\n### 3. 
验证结果\n查看生成的 `output_eda.txt`，您将看到经过同义词替换、随机插入、随机交换和随机删除等操作后的新句子，可直接用于后续模型训练。\n\n---\n**提示**：对于其他任务（如机器翻译的回译 Back-translation），通常需要加载预训练模型，请务必查阅对应子项目 README 中关于模型下载和推理的具体指令。","某电商初创公司的算法团队正致力于构建一个能精准识别用户评论情感（正面\u002F负面\u002F中性）的分类模型，但面临标注数据严重不足的困境。\n\n### 没有 DataAug4NLP 时\n- **数据匮乏导致过拟合**：由于只有少量人工标注的评论数据，模型在训练集上表现尚可，但在真实用户评论中泛化能力极差，极易过拟合。\n- **类别分布严重失衡**：负面投诉样本稀缺，导致模型倾向于将大多数输入预测为“正面”，无法有效识别潜在的公关危机。\n- **试错成本高昂**：团队需花费数周时间手动查阅文献寻找增强方法，且难以确定哪种技术（如同义词替换或回译）最适合当前业务场景。\n- **鲁棒性不足**：模型对用户拼写错误或口语化表达的抵抗力弱，稍微变换句式的评论就会被误判。\n\n### 使用 DataAug4NLP 后\n- **快速匹配最佳方案**：团队利用其分类索引，迅速锁定了针对“文本分类”和“缓解类别不平衡”的成熟论文（如 Synonym Replacement），直接复用经过验证的策略。\n- **低成本扩充高质量数据**：通过应用文中推荐的自动化增强技术，将有限的负面样本扩充了十倍，显著改善了类别失衡问题，使模型能敏锐捕捉异常情绪。\n- **提升模型泛化与鲁棒性**：引入基于对抗样本和语言变化的增强数据后，模型对口语化表达及轻微噪声的识别准确率提升了 15%。\n- **研发效率大幅跃升**：无需从零开始摸索，团队依据资源库中的代码链接和实验设置，将数据预处理周期从数周缩短至两天。\n\nDataAug4NLP 通过系统化整合前沿增强技术，帮助团队在低资源条件下快速构建了高鲁棒性的情感分析模型，极大降低了落地门槛。","https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fstyfeng_DataAug4NLP_baee21d8.png","styfeng","Steven Yuanshuo Feng","https:\u002F\u002Foss.gittoolsai.com\u002Favatars\u002Fstyfeng_aecb1773.jpg","Stanford CS PhD student. Previously master's at Carnegie Mellon University. 
Strong passion for NLP, computer vision, machine learning, and AI research.","Stanford University",null,"stevenyfeng","https:\u002F\u002Fstyfeng.github.io\u002F","https:\u002F\u002Fgithub.com\u002Fstyfeng",833,77,"2026-03-26T22:43:28",5,"","未说明",{"notes":91,"python":89,"dependencies":92},"该仓库主要是一个 NLP 数据增强技术的论文和代码资源列表（Survey Repository），而非一个单一的、具有统一运行环境要求的软件工具。列表中包含了数十篇不同论文的独立代码实现，每篇论文的技术栈、依赖库（如 TensorFlow 或 PyTorch 的不同版本）及硬件需求各不相同。用户需根据具体想复现的论文，点击 README 中对应的代码链接（code column）前往其独立的 GitHub 仓库查看具体的环境配置说明。",[],[51,13,26],[95,96,97,98,99,100,101,102,103,104],"data-augmentation","text-classification","acl2021","natural-language-processing","machine-learning","artificial-intelligence","deep-learning","survey","survey-paper","transformers","2026-03-27T02:49:30.150509","2026-04-06T05:16:01.396041",[108,113],{"id":109,"question_zh":110,"answer_zh":111,"source_url":112},12937,"如果某篇论文使用了数据增强技术（如改写），但主要贡献是创建新数据集，应该归类到哪个领域？","这类论文可以根据其具体应用场景进行归类。例如，如果该数据集旨在对抗在线仇恨言论，将其归类为“减轻偏见（Mitigating Bias）”领域是合适的。维护者建议查看相关后续研究，如果作者尝试了生成而不仅仅是分析，这也支持将其纳入数据增强范畴。用户可以根据论文的核心目标提出 Pull Request 并将其归入最匹配的领域。","https:\u002F\u002Fgithub.com\u002Fstyfeng\u002FDataAug4NLP\u002Fissues\u002F7",{"id":114,"question_zh":115,"answer_zh":116,"source_url":117},12938,"数据增强技术是最近才发明的吗？早期的相关工作在哪里可以找到？","不是，数据增强并非近期发明，其历史至少可以追溯到 20 世纪 90 年代甚至更早。例如，1995 年就有相关论文发表（ACL Anthology P95-1026）。项目维护者已确认会添加早期的数据增强论文到仓库中，以补充历史记录。读者也可以参考关于 Transformer 文本排序的书籍摘要（arXiv:2010.06467 第 7 页）来获取简短的历史总结。","https:\u002F\u002Fgithub.com\u002Fstyfeng\u002FDataAug4NLP\u002Fissues\u002F1",[]]
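
附：上文 EDA 示例中提到的“随机交换”“随机删除”等操作，可用纯 Python 粗略示意如下。这只是一个最小化示意（假设性实现）：未包含依赖 WordNet 的同义词替换与随机插入，`eda_lite`、`random_swap`、`random_deletion` 等名称均为本示例自拟，实际行为请以 eda_nlp 仓库代码为准。

```python
import random

def random_swap(words, n):
    """随机交换：随机选取两个位置并交换，重复 n 次。"""
    words = words[:]
    for _ in range(n):
        if len(words) < 2:
            break
        i, j = random.sample(range(len(words)), 2)
        words[i], words[j] = words[j], words[i]
    return words

def random_deletion(words, p):
    """随机删除：每个词以概率 p 被删除；至少保留一个词。"""
    kept = [w for w in words if random.random() > p]
    return kept if kept else [random.choice(words)]

def eda_lite(sentence, num_aug=4, alpha=0.1):
    """对一条句子生成 num_aug 条增强样本；alpha 控制改动强度。"""
    words = sentence.split()
    n = max(1, int(alpha * len(words)))  # 每条样本改动的词数
    augmented = []
    for _ in range(num_aug):
        if random.random() < 0.5:
            augmented.append(" ".join(random_swap(words, n)))
        else:
            augmented.append(" ".join(random_deletion(words, alpha)))
    return augmented

if __name__ == "__main__":
    random.seed(0)
    for aug in eda_lite("This movie is absolutely fantastic and I loved it ."):
        print(aug)
```

完整的 EDA 实现还会结合同义词替换（依赖 NLTK WordNet）与随机插入，并按 `num_aug` 将每条原始样本的增强结果逐行写入输出文件。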