[{"data":1,"prerenderedAt":-1},["ShallowReactive",2],{"similar-atpaino--deep-text-corrector":3,"tool-atpaino--deep-text-corrector":65},[4,17,27,35,48,57],{"id":5,"name":6,"github_repo":7,"description_zh":8,"stars":9,"difficulty_score":10,"last_commit_at":11,"category_tags":12,"status":16},1381,"everything-claude-code","affaan-m\u002Feverything-claude-code","everything-claude-code 是一套专为 AI 编程助手（如 Claude Code、Codex、Cursor 等）打造的高性能优化系统。它不仅仅是一组配置文件，而是一个经过长期实战打磨的完整框架，旨在解决 AI 代理在实际开发中面临的效率低下、记忆丢失、安全隐患及缺乏持续学习能力等核心痛点。\n\n通过引入技能模块化、直觉增强、记忆持久化机制以及内置的安全扫描功能，everything-claude-code 能显著提升 AI 在复杂任务中的表现，帮助开发者构建更稳定、更智能的生产级 AI 代理。其独特的“研究优先”开发理念和针对 Token 消耗的优化策略，使得模型响应更快、成本更低，同时有效防御潜在的攻击向量。\n\n这套工具特别适合软件开发者、AI 研究人员以及希望深度定制 AI 工作流的技术团队使用。无论您是在构建大型代码库，还是需要 AI 协助进行安全审计与自动化测试，everything-claude-code 都能提供强大的底层支持。作为一个曾荣获 Anthropic 黑客大奖的开源项目，它融合了多语言支持与丰富的实战钩子（hooks），让 AI 真正成长为懂上",149489,2,"2026-04-10T11:32:46",[13,14,15],"开发框架","Agent","语言模型","ready",{"id":18,"name":19,"github_repo":20,"description_zh":21,"stars":22,"difficulty_score":23,"last_commit_at":24,"category_tags":25,"status":16},4487,"LLMs-from-scratch","rasbt\u002FLLMs-from-scratch","LLMs-from-scratch 是一个基于 PyTorch 的开源教育项目，旨在引导用户从零开始一步步构建一个类似 ChatGPT 的大型语言模型（LLM）。它不仅是同名技术著作的官方代码库，更提供了一套完整的实践方案，涵盖模型开发、预训练及微调的全过程。\n\n该项目主要解决了大模型领域“黑盒化”的学习痛点。许多开发者虽能调用现成模型，却难以深入理解其内部架构与训练机制。通过亲手编写每一行核心代码，用户能够透彻掌握 Transformer 架构、注意力机制等关键原理，从而真正理解大模型是如何“思考”的。此外，项目还包含了加载大型预训练权重进行微调的代码，帮助用户将理论知识延伸至实际应用。\n\nLLMs-from-scratch 特别适合希望深入底层原理的 AI 开发者、研究人员以及计算机专业的学生。对于不满足于仅使用 API，而是渴望探究模型构建细节的技术人员而言，这是极佳的学习资源。其独特的技术亮点在于“循序渐进”的教学设计：将复杂的系统工程拆解为清晰的步骤，配合详细的图表与示例，让构建一个虽小但功能完备的大模型变得触手可及。无论你是想夯实理论基础，还是为未来研发更大规模的模型做准备",90106,3,"2026-04-06T11:19:32",[15,26,14,13],"图像",{"id":28,"name":29,"github_repo":30,"description_zh":31,"stars":32,"difficulty_score":10,"last_commit_at":33,"category_tags":34,"status":16},3704,"NextChat","ChatGPTNextWeb\u002FNextChat","NextChat 是一款轻量且极速的 AI 助手，旨在为用户提供流畅、跨平台的大模型交互体验。它完美解决了用户在多设备间切换时难以保持对话连续性，以及面对众多 AI 模型不知如何统一管理的痛点。无论是日常办公、学习辅助还是创意激发，NextChat 都能让用户随时随地通过网页、iOS、Android、Windows、MacOS 或 Linux 端无缝接入智能服务。\n\n这款工具非常适合普通用户、学生、职场人士以及需要私有化部署的企业团队使用。对于开发者而言，它也提供了便捷的自托管方案，支持一键部署到 Vercel 或 Zeabur 等平台。\n\nNextChat 的核心亮点在于其广泛的模型兼容性，原生支持 Claude、DeepSeek、GPT-4 及 Gemini Pro 等主流大模型，让用户在一个界面即可自由切换不同 AI 能力。此外，它还率先支持 MCP（Model Context Protocol）协议，增强了上下文处理能力。针对企业用户，NextChat 提供专业版解决方案，具备品牌定制、细粒度权限控制、内部知识库整合及安全审计等功能，满足公司对数据隐私和个性化管理的高标准要求。",87618,"2026-04-05T07:20:52",[13,15],{"id":36,"name":37,"github_repo":38,"description_zh":39,"stars":40,"difficulty_score":10,"last_commit_at":41,"category_tags":42,"status":16},2268,"ML-For-Beginners","microsoft\u002FML-For-Beginners","ML-For-Beginners 是由微软推出的一套系统化机器学习入门课程，旨在帮助零基础用户轻松掌握经典机器学习知识。这套课程将学习路径规划为 12 周，包含 26 节精炼课程和 52 道配套测验，内容涵盖从基础概念到实际应用的完整流程，有效解决了初学者面对庞大知识体系时无从下手、缺乏结构化指导的痛点。\n\n无论是希望转型的开发者、需要补充算法背景的研究人员，还是对人工智能充满好奇的普通爱好者，都能从中受益。课程不仅提供了清晰的理论讲解，还强调动手实践，让用户在循序渐进中建立扎实的技能基础。其独特的亮点在于强大的多语言支持，通过自动化机制提供了包括简体中文在内的 50 多种语言版本，极大地降低了全球不同背景用户的学习门槛。此外，项目采用开源协作模式，社区活跃且内容持续更新，确保学习者能获取前沿且准确的技术资讯。如果你正寻找一条清晰、友好且专业的机器学习入门之路，ML-For-Beginners 将是理想的起点。",85092,"2026-04-10T11:13:16",[26,43,44,45,14,46,15,13,47],"数据工具","视频","插件","其他","音频",{"id":49,"name":50,"github_repo":51,"description_zh":52,"stars":53,"difficulty_score":54,"last_commit_at":55,"category_tags":56,"status":16},5784,"funNLP","fighting41love\u002FfunNLP","funNLP 是一个专为中文自然语言处理（NLP）打造的超级资源库，被誉为\"NLP 民工的乐园”。它并非单一的软件工具，而是一个汇集了海量开源项目、数据集、预训练模型和实用代码的综合性平台。\n\n面对中文 NLP 领域资源分散、入门门槛高以及特定场景数据匮乏的痛点，funNLP 
提供了“一站式”解决方案。这里不仅涵盖了分词、命名实体识别、情感分析、文本摘要等基础任务的标准工具，还独特地收录了丰富的垂直领域资源，如法律、医疗、金融行业的专用词库与数据集，甚至包含古诗词生成、歌词创作等趣味应用。其核心亮点在于极高的全面性与实用性，从基础的字典词典到前沿的 BERT、GPT-2 模型代码，再到高质量的标注数据和竞赛方案，应有尽有。\n\n无论是刚刚踏入 NLP 领域的学生、需要快速验证想法的算法工程师，还是从事人工智能研究的学者，都能在这里找到急需的“武器弹药”。对于开发者而言，它能大幅减少寻找数据和复现模型的时间；对于研究者，它提供了丰富的基准测试资源和前沿技术参考。funNLP 以开放共享的精神，极大地降低了中文自然语言处理的开发与研究成本，是中文 AI 社区不可或缺的宝藏仓库。",79857,1,"2026-04-08T20:11:31",[15,43,46],{"id":58,"name":59,"github_repo":60,"description_zh":61,"stars":62,"difficulty_score":23,"last_commit_at":63,"category_tags":64,"status":16},3128,"ragflow","infiniflow\u002Fragflow","RAGFlow 是一款领先的开源检索增强生成（RAG）引擎，旨在为大语言模型构建更精准、可靠的上下文层。它巧妙地将前沿的 RAG 技术与智能体（Agent）能力相结合，不仅支持从各类文档中高效提取知识，还能让模型基于这些知识进行逻辑推理和任务执行。\n\n在大模型应用中，幻觉问题和知识滞后是常见痛点。RAGFlow 通过深度解析复杂文档结构（如表格、图表及混合排版），显著提升了信息检索的准确度，从而有效减少模型“胡编乱造”的现象，确保回答既有据可依又具备时效性。其内置的智能体机制更进一步，使系统不仅能回答问题，还能自主规划步骤解决复杂问题。\n\n这款工具特别适合开发者、企业技术团队以及 AI 研究人员使用。无论是希望快速搭建私有知识库问答系统，还是致力于探索大模型在垂直领域落地的创新者，都能从中受益。RAGFlow 提供了可视化的工作流编排界面和灵活的 API 接口，既降低了非算法背景用户的上手门槛，也满足了专业开发者对系统深度定制的需求。作为基于 Apache 2.0 协议开源的项目，它正成为连接通用大模型与行业专有知识之间的重要桥梁。",77062,"2026-04-04T04:44:48",[14,26,13,15,46],{"id":66,"github_repo":67,"name":68,"description_en":69,"description_zh":70,"ai_summary_zh":70,"readme_en":71,"readme_zh":72,"quickstart_zh":73,"use_case_zh":74,"hero_image_url":75,"owner_login":76,"owner_name":77,"owner_avatar_url":78,"owner_bio":79,"owner_company":79,"owner_location":80,"owner_email":81,"owner_twitter":79,"owner_website":82,"owner_url":83,"languages":84,"stars":93,"forks":94,"last_commit_at":95,"license":96,"difficulty_score":97,"env_os":98,"env_gpu":98,"env_ram":98,"env_deps":99,"category_tags":104,"github_topics":79,"view_count":10,"oss_zip_url":79,"oss_zip_packed_at":79,"status":16,"created_at":105,"updated_at":106,"faqs":107,"releases":118},6207,"atpaino\u002Fdeep-text-corrector","deep-text-corrector","Deep learning models trained to correct input errors in short, message-like text","deep-text-corrector 是一款基于深度学习的开源项目，旨在自动修正短信、即时通讯等短文本中的细微语法错误。它主要解决了传统拼写检查工具无法识别上下文相关语法问题的痛点，例如将漏掉冠词的\"I'm going to store\"智能还原为地道的\"I'm going to the store\"，或纠正同音词误用及动词缩写缺失等常见错误。\n\n该工具非常适合自然语言处理领域的研究人员和开发者使用，尤其是那些希望构建或优化英语语法纠错系统、需要序列到序列（Seq2Seq）模型训练方案的技术人员。其独特的技术亮点在于创新的数据集构建策略：利用电影对白等高质量语料，通过算法随机引入特定类型的语法扰动（如删除冠词、替换同音词）来生成大规模的“错误-正确”配对数据。这种方法有效克服了真实语法错误标注数据稀缺的难题，使得模型能够高效学习并掌握从非规范输入到规范输出的映射能力，为提升书面沟通的准确性提供了强有力的技术支持。","# Deep Text Corrector\n\nDeep Text Corrector uses [TensorFlow](https:\u002F\u002Fwww.tensorflow.org\u002F) to train sequence-to-sequence models that are capable of automatically correcting small grammatical errors in conversational written English (e.g. SMS messages). \nIt does this by taking English text samples that are known to be mostly grammatically correct and randomly introducing a handful of small grammatical errors (e.g. removing articles) to each sentence to produce input-output pairs (where the output is the original sample), which are then used to train a sequence-to-sequence model.\n\nSee [this blog post](http:\u002F\u002Fatpaino.com\u002F2017\u002F01\u002F03\u002Fdeep-text-correcter.html) for a more thorough write-up of this work.\n\n## Motivation\nWhile context-sensitive spell-check systems are able to automatically correct a large number of input errors in instant messaging, email, and SMS messages, they are unable to correct even simple grammatical errors. \nFor example, the message \"I'm going to store\" would be unaffected by typical autocorrection systems, when the user most likely intended to write \"I'm going to _the_ store\". 
\nThese kinds of simple grammatical mistakes are common in so-called \"learner English\", and constructing systems capable of detecting and correcting these mistakes has been the subject of multiple [CoNLL shared tasks](http:\u002F\u002Fwww.aclweb.org\u002Fanthology\u002FW14-1701.pdf).\n\nThe goal of this project is to train sequence-to-sequence models that are capable of automatically correcting such errors. \nSpecifically, the models are trained to provide a function mapping a potentially errant input sequence to a sequence with all (small) grammatical errors corrected.\nGiven these models, it would be possible to construct tools to help correct these simple errors in written communications, such as emails, instant messaging, etc.\n\n## Correcting Grammatical Errors with Deep Learning\nThe basic idea behind this project is that we can generate large training datasets for the task of grammar correction by starting with grammatically correct samples and introducing small errors to produce input-output pairs, which can then be used to train a sequence-to-sequence model.\nThe details of how we construct these datasets, train models using them, and produce predictions for this task are described below.\n\n### Datasets\nTo create a dataset for Deep Text Corrector models, we start with a large collection of mostly grammatically correct samples of conversational written English. \nThe primary dataset considered in this project is the [Cornell Movie-Dialogs Corpus](http:\u002F\u002Fwww.cs.cornell.edu\u002F~cristian\u002FCornell_Movie-Dialogs_Corpus.html), which contains over 300k lines from movie scripts.\nThis was the largest collection of conversational written English I could find that was mostly grammatically correct. \n\nGiven a sample of text like this, the next step is to generate input-output pairs to be used during training. \nThis is done by:\n1. Drawing a sample sentence from the dataset.\n2. Setting the input sequence to this sentence after randomly applying certain perturbations.\n3. Setting the output sequence to the unperturbed sentence.\n\nwhere the perturbations applied in step (2) are intended to introduce small grammatical errors which we would like the model to learn to correct. \nThus far, these perturbations are limited to:\n- subtraction of articles (a, an, the)\n- subtraction of the second part of a verb contraction (e.g. \"'ve\", \"'ll\", \"'s\", \"'m\")\n- replacement of a few common homophones with one of their counterparts (e.g. replacing \"their\" with \"there\", \"then\" with \"than\")\n\nThe rates with which these perturbations are introduced are loosely based on figures taken from the [CoNLL 2014 Shared Task on Grammatical Error Correction](http:\u002F\u002Fwww.aclweb.org\u002Fanthology\u002FW14-1701.pdf). \nIn this project, each perturbation is applied in 25% of cases where it could potentially be applied.\n\n### Training\nTo artificially increase the size of the dataset when training a sequence model, we perform the sampling strategy described above multiple times to arrive at 2-3x the number of input-output pairs. \nGiven this augmented dataset, training proceeds in a very similar manner to [TensorFlow's sequence-to-sequence tutorial](https:\u002F\u002Fwww.tensorflow.org\u002Ftutorials\u002Fseq2seq\u002F). \nThat is, we train a sequence-to-sequence model using LSTM encoders and decoders with an attention mechanism as described in [Bahdanau et al., 2014](http:\u002F\u002Farxiv.org\u002Fabs\u002F1409.0473) using stochastic gradient descent. 
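\nAs a concrete illustration of the perturbation scheme described above in the Datasets section, here is a minimal sketch of the data-generation step (illustrative only, not the project's actual \`DataReader\` code; it assumes a tokenization in which contraction suffixes such as \"'ve\" appear as separate tokens):\n\n\`\`\`\nimport random\n\nARTICLES = {'a', 'an', 'the'}\nCONTRACTION_PARTS = {\"'ve\", \"'ll\", \"'s\", \"'m\"}\nHOMOPHONES = {'their': 'there', 'there': 'their', 'then': 'than', 'than': 'then'}\nRATE = 0.25  # each perturbation fires in 25% of the cases where it could apply\n\ndef perturb(tokens):\n    # Introduce small grammatical errors into a grammatically correct sentence.\n    out = []\n    for tok in tokens:\n        if tok in ARTICLES and random.random() < RATE:\n            continue  # subtract an article\n        if tok in CONTRACTION_PARTS and random.random() < RATE:\n            continue  # subtract the second part of a verb contraction\n        if tok in HOMOPHONES and random.random() < RATE:\n            tok = HOMOPHONES[tok]  # swap in a common homophone\n        out.append(tok)\n    return out\n\ndef make_pair(sentence):\n    tokens = sentence.split()  # naive whitespace tokenization, for illustration\n    return perturb(tokens), tokens  # (errant input, original output)\n\`\`\`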
\n\n### Decoding\n\nInstead of using the most probable decoding according to the seq2seq model, this project takes advantage of the unique structure of the problem to impose the prior that all tokens in a decoded sequence should either exist in the input sequence or belong to a set of \"corrective\" tokens. \nThe \"corrective\" token set is constructed during training and contains all tokens seen in the target, but not the source, for at least one sample in the training set. \nThe intuition here is that the errors seen during training involve the misuse of a relatively small vocabulary of common words (e.g. \"the\", \"an\", \"their\") and that the model should only be allowed to perform corrections in this domain.\n\nThis prior is carried out through a modification to the seq2seq model's decoding loop in addition to a post-processing step that resolves out-of-vocabulary (OOV) tokens:\n\n**Biased Decoding**\n\nTo restrict the decoding such that it only ever chooses tokens from the input sequence or corrective token set, this project applies a binary mask to the model's logits prior to extracting the prediction to be fed into the next time step. \nThis mask is constructed such that `mask[i] == 1.0 if (i in input or corrective_tokens) else 0.0`. \nSince this mask is applied to the result of a softmax transformation (which guarantees all outputs are non-negative), we can be sure that only input or corrective tokens are ever selected.\n\nNote that this logic is not used during training, as this would only serve to eliminate potentially useful signal from the model.\n\n**Handling OOV Tokens**\n\nSince the decoding bias described above is applied within the truncated vocabulary used by the model, we will still see the unknown token in its output for any OOV tokens. \nThe more generic problem of resolving these OOV tokens is non-trivial (e.g. see [Addressing the Rare Word Problem in NMT](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1410.8206v4.pdf)), but in this project we can again take advantage of its unique structure to create a fairly straightforward OOV token resolution scheme. \nThat is, if we assume the sequence of OOV tokens in the input is equal to the sequence of OOV tokens in the output sequence, then we can trivially assign the appropriate token to each \"unknown\" token encountered in the decoding. \nEmpirically, and intuitively, this appears to be an appropriate assumption, as the relatively simple class of errors these models are being trained to address should never include mistakes that warrant the insertion or removal of a rare token.\n
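\nTo make these two decoding-time mechanisms concrete, here is a minimal, illustrative sketch of the masked decoding step and the OOV copy-through (NumPy pseudocode with made-up names, not the project's actual code, which lives in \`text_corrector_models.py\` and \`correct_text.py\`):\n\n\`\`\`\nimport numpy as np\n\ndef masked_argmax(probs, allowed_ids):\n    # probs: softmax outputs for one decoding step (all non-negative);\n    # allowed_ids: ids of tokens in the input plus the corrective token set.\n    mask = np.zeros_like(probs)\n    mask[list(allowed_ids)] = 1.0\n    return int(np.argmax(probs * mask))  # only allowed tokens can ever be selected\n\ndef resolve_oov(raw_input_tokens, decoded_tokens, vocab, unk='_UNK'):\n    # Replace each unknown token in the decoded output with the next\n    # out-of-vocabulary token from the raw input, in order.\n    oov_queue = [t for t in raw_input_tokens if t not in vocab]\n    return [oov_queue.pop(0) if t == unk else t for t in decoded_tokens]\n\`\`\`\n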
\n## Experiments and Results\n\nBelow are some anecdotal and aggregate results from experiments using the Deep Text Corrector model with the [Cornell Movie-Dialogs Corpus](http:\u002F\u002Fwww.cs.cornell.edu\u002F~cristian\u002FCornell_Movie-Dialogs_Corpus.html). \nThe dataset consists of 304,713 lines from movie scripts, of which 243,768 lines were used to train the model and 30,474 lines each were used for the validation and testing sets. \nThe sets were selected such that no lines from the same movie were present in both the training and testing sets.\n\nThe model being evaluated below is a sequence-to-sequence model, with attention, where the encoder and decoder were both 2-layer, 512 hidden unit LSTMs. \nThe model was trained with a vocabulary of the 2k most common words seen in the training set.\n\n### Aggregate Performance\nBelow are the BLEU scores and accuracy numbers over the test dataset for both a trained model and a baseline, where the baseline is the identity function (which assumes no errors exist in the input).\n\nYou'll notice that the model outperforms this baseline for all bucket sizes in terms of accuracy, and outperforms all but one in terms of BLEU score. \nThis tells us that applying the Deep Text Corrector model to a potentially errant writing sample would, on average, result in a more grammatically correct writing sample. \nAnyone who tends to make errors similar to those the model has been trained on could therefore benefit from passing their messages through this model.\n\n```\nBucket 0: (10, 10)\n        Baseline BLEU = 0.8341\n        Model BLEU = 0.8516\n        Baseline Accuracy: 0.9083\n        Model Accuracy: 0.9384\nBucket 1: (15, 15)\n        Baseline BLEU = 0.8850\n        Model BLEU = 0.8860\n        Baseline Accuracy: 0.8156\n        Model Accuracy: 0.8491\nBucket 2: (20, 20)\n        Baseline BLEU = 0.8876\n        Model BLEU = 0.8880\n        Baseline Accuracy: 0.7291\n        Model Accuracy: 0.7817\nBucket 3: (40, 40)\n        Baseline BLEU = 0.9099\n        Model BLEU = 0.9045\n        Baseline Accuracy: 0.6073\n        Model Accuracy: 0.6425\n```\n\n### Examples\nDecoding a sentence with a missing article:\n\n```\nIn [31]: decode(\"Kvothe went to market\")\nOut[31]: 'Kvothe went to the market'\n```\n\nDecoding a sentence with then\u002Fthan confusion:\n\n```\nIn [30]: decode(\"the Cardinals did better then the Cubs in the offseason\")\nOut[30]: 'the Cardinals did better than the Cubs in the offseason'\n```\n\n\n## Implementation Details\nThis project reuses and slightly extends TensorFlow's [`Seq2SeqModel`](https:\u002F\u002Fgithub.com\u002Ftensorflow\u002Ftensorflow\u002Fblob\u002Fmaster\u002Ftensorflow\u002Fmodels\u002Frnn\u002Ftranslate\u002Fseq2seq_model.py), which itself implements a sequence-to-sequence model with an attention mechanism as described in https:\u002F\u002Farxiv.org\u002Fpdf\u002F1412.7449v3.pdf. \nThe primary contributions of this project are:\n\n- `data_reader.py`: an abstract class that defines the interface for classes which are capable of reading a source dataset and producing input-output pairs, where the input is a grammatically incorrect variant of a source sentence and the output is the original sentence.\n- `text_corrector_data_readers.py`: contains a few implementations of `DataReader`, one over the [Penn Treebank dataset](http:\u002F\u002Fwww.fit.vutbr.cz\u002F~imikolov\u002Frnnlm\u002Fsimple-examples.tgz) and one over the [Cornell Movie-Dialogs Corpus](http:\u002F\u002Fwww.cs.cornell.edu\u002F~cristian\u002FCornell_Movie-Dialogs_Corpus.html).\n- `text_corrector_models.py`: contains a version of `Seq2SeqModel` modified such that it implements the logic described in [Biased Decoding](#biased-decoding)\n- `correct_text.py`: a collection of helper functions that together allow for the training of a model and the usage of it to decode errant input sequences (at test time). The `decode` method defined here implements the [OOV token resolution logic](#handling-oov-tokens). This also defines a main method, and can be invoked from the command line. 
It was largely derived from TensorFlow's [`translate.py`](https:\u002F\u002Fwww.tensorflow.org\u002Ftutorials\u002Fseq2seq\u002F).\n- `TextCorrector.ipynb`: an IPython notebook which ties together all of the above pieces to allow for the training and evaluation of the model in an interactive fashion.\n\n### Example Usage\nNote: this project requires TensorFlow version >= 0.11. See [this page](https:\u002F\u002Fwww.tensorflow.org\u002Fget_started\u002Fos_setup) for setup instructions.\n\n**Preprocess Movie Dialog Data**\n```\npython preprocessors\u002Fpreprocess_movie_dialogs.py --raw_data movie_lines.txt \\\n                                                 --out_file preprocessed_movie_lines.txt\n```\nThis preprocessed file can then be split up however you like to create training, validation, and testing sets.\n\n**Training:**\n```\npython correct_text.py --train_path \u002Fmovie_dialog_train.txt \\\n                       --val_path \u002Fmovie_dialog_val.txt \\\n                       --config DefaultMovieDialogConfig \\\n                       --data_reader_type MovieDialogReader \\\n                       --model_path \u002Fmovie_dialog_model\n```\n\n**Testing:**\n```\npython correct_text.py --test_path \u002Fmovie_dialog_test.txt \\\n                       --config DefaultMovieDialogConfig \\\n                       --data_reader_type MovieDialogReader \\\n                       --model_path \u002Fmovie_dialog_model \\\n                       --decode\n```\n\n","# 深度文本校正器\n\n深度文本校正器使用 [TensorFlow](https:\u002F\u002Fwww.tensorflow.org\u002F) 训练序列到序列模型，能够自动纠正会话式书面英语（例如短信）中的小语法错误。其方法是：选取已知基本语法正确的英语文本样本，在每句话中随机引入少量小语法错误（例如删除冠词），从而生成输入-输出对（输出为原始样本），再用这些对来训练序列到序列模型。\n\n有关这项工作的更详细说明，请参阅[这篇博客文章](http:\u002F\u002Fatpaino.com\u002F2017\u002F01\u002F03\u002Fdeep-text-correcter.html)。\n\n## 动机\n尽管上下文感知的拼写检查系统能够自动纠正即时通讯、电子邮件和短信中的大量输入错误，但它们却无法纠正哪怕是简单的语法错误。例如，消息“I'm going to store”通常不会被自动纠错系统修改，而用户很可能原本想写的是“I'm going to _the_ store”。这类简单的语法错误在所谓的“学习者英语”中十分常见，构建能够检测并纠正这些错误的系统一直是多个 [CoNLL 共享任务](http:\u002F\u002Fwww.aclweb.org\u002Fanthology\u002FW14-1701.pdf) 的研究主题。\n\n本项目的目标是训练能够自动纠正此类错误的序列到序列模型。具体而言，这些模型被训练成能够将可能包含错误的输入序列映射为所有（小）语法错误均已修正的序列。有了这些模型，便可以构建工具来帮助纠正电子邮件、即时通讯等书面交流中的简单语法错误。\n\n## 使用深度学习纠正语法错误\n本项目的基本思路是：我们可以通过从语法正确的样本出发，人为引入小错误来生成用于语法纠正任务的大规模训练数据集，进而利用这些数据对序列到序列模型进行训练。以下将详细介绍如何构建这些数据集、如何使用它们训练模型，以及如何针对该任务生成预测结果。\n\n### 数据集\n为了创建深度文本校正器的数据集，我们首先选取大量基本语法正确的会话式书面英语样本。本项目主要使用的数据集是 [康奈尔电影对话语料库](http:\u002F\u002Fwww.cs.cornell.edu\u002F~cristian\u002FCornell_Movie-Dialogs_Corpus.html)，其中包含超过 30 万行电影剧本内容。这是我能找到的、大部分语法正确的最大规模会话式书面英语集合。\n\n有了这样的文本样本后，下一步就是生成用于训练的输入-输出对。具体步骤如下：\n1. 从数据集中随机抽取一句话。\n2. 对该句子施加若干随机扰动，作为输入序列。\n3. 
将未受扰动的原句作为输出序列。\n\n其中，步骤 (2) 中所施加的扰动旨在引入我们希望模型学会纠正的小型语法错误。截至目前，这些扰动主要包括：\n- 删除冠词（a、an、the）\n- 删除动词缩写形式中的第二部分（如“'ve”、“'ll”、“'s”、“'m”）\n- 将少数常见同音异义词替换为其对应的另一个词（例如将“their”替换成“there”，或将“then”替换成“than”）\n\n这些扰动的引入频率大致参考了 [2014 年 CoNLL 语法错误纠正共享任务](http:\u002F\u002Fwww.aclweb.org\u002Fanthology\u002FW14-1701.pdf) 中的相关数据。在本项目中，每种扰动仅在可能应用的情况下以 25% 的概率执行。\n\n### 训练\n为了在训练序列模型时人为扩充数据集，我们会多次重复上述采样策略，最终使输入-输出对的数量达到原始数量的 2 到 3 倍。有了扩充后的数据集，训练过程与 [TensorFlow 的序列到序列教程](https:\u002F\u002Fwww.tensorflow.org\u002Ftutorials\u002Fseq2seq\u002F) 非常相似。也就是说，我们使用 LSTM 编码器和解码器，并结合 [Bahdanau 等人，2014 年](http:\u002F\u002Farxiv.org\u002Fabs\u002F1409.0473) 提出的注意力机制，通过随机梯度下降法来训练序列到序列模型。\n\n### 解码\n\n与使用序列到序列模型中最可能的解码方法不同，本项目利用问题的独特结构，施加了一个先验约束：解码出的序列中的所有标记要么存在于输入序列中，要么属于一组“纠正”标记。  \n“纠正”标记集是在训练过程中构建的，包含了在训练集中至少一个样本的目标序列中出现、但源序列中未出现的所有标记。  \n其背后的直觉是，训练过程中观察到的错误主要涉及一些常见词汇（如“the”、“an”、“their”）的误用，因此模型应当只允许在此范围内进行纠正。\n\n这一先验约束通过修改序列到序列模型的解码循环以及一个用于处理未登录词（OOV）的后处理步骤来实现：\n\n**有偏解码**\n\n为了限制解码过程仅从输入序列或纠正标记集中选择标记，本项目在提取预测结果并将其输入到下一个时间步之前，对模型的 logits 应用了一个二值掩码。  \n该掩码的构造方式为：`mask[i] == 1.0 如果 i 在输入或纠正标记集中，否则为 0.0`。  \n由于此掩码应用于 softmax 转换后的结果（保证所有输出均为非负值），我们可以确保始终只选择输入序列或纠正标记集中的标记。\n\n需要注意的是，这一逻辑在训练过程中并不使用，因为那样会过滤掉模型可能有用的信号。\n\n**处理未登录词**\n\n由于上述解码偏置是在模型使用的截断词汇表内应用的，对于任何未登录词，输出中仍会出现未知标记。  \n更一般地解决这些未登录词的问题并非易事（例如参见 [Addressing the Rare Word Problem in NMT](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1410.8206v4.pdf)），但在本项目中，我们同样可以利用其独特的结构，设计出一种相对简单的未登录词处理方案。  \n具体来说，如果我们假设输入序列中的未登录词顺序与输出序列中的未登录词顺序相同，则可以简单地将每个“未知”标记替换为相应的正确标记。  \n从经验上和直觉上看，这一假设是合理的，因为这些模型所训练应对的错误类型相对简单，通常不会涉及需要插入或删除罕见词汇的错误。\n\n## 实验与结果\n\n以下是使用深度文本纠正模型，并基于 [康奈尔电影对话语料库](http:\u002F\u002Fwww.cs.cornell.edu\u002F~cristian\u002FCornell_Movie-Dialogs_Corpus.html) 进行的一些实验的轶事性和汇总结果。  \n该数据集包含 304,713 行电影剧本，其中 243,768 行用于模型训练，验证集和测试集各包含 30,474 行。数据集的划分方式确保同一部电影的台词不会同时出现在训练集和测试集中。\n\n下面评估的模型是一个带有注意力机制的序列到序列模型，编码器和解码器均为两层、每层 512 个隐藏单元的 LSTM。  \n模型使用了训练集中出现频率最高的 2,000 个单词作为词汇表进行训练。\n\n### 汇总性能\n\n以下报告了在测试数据集上，经过训练的模型与基线模型的 BLEU 分数和准确率。其中，基线模型为恒等函数（即假定输入中不存在任何错误）。\n\n可以看出，就准确率而言，模型在所有分桶大小下均优于基线；而在 BLEU 分数方面，除一个分桶外，模型也优于基线。  \n这表明，将深度文本纠正模型应用于可能存在错误的文本时，平均而言会产生语法更加正确的文本。因此，那些经常犯与模型训练目标类似的错误的人，可以通过让他们的文本经过该模型来受益。\n\n```\n分桶 0: (10, 10)\n        基线 BLEU = 0.8341\n        模型 BLEU = 0.8516\n        基线准确率：0.9083\n        模型准确率：0.9384\n分桶 1: (15, 15)\n        基线 BLEU = 0.8850\n        模型 BLEU = 0.8860\n        基线准确率：0.8156\n        模型准确率：0.8491\n分桶 2: (20, 20)\n        基线 BLEU = 0.8876\n        模型 BLEU = 0.8880\n        基线准确率：0.7291\n        模型准确率：0.7817\n分桶 3: (40, 40)\n        基线 BLEU = 0.9099\n        模型 BLEU = 0.9045\n        基线准确率：0.6073\n        模型准确率：0.6425\n```\n\n### 示例\n\n解码一个缺少冠词的句子：\n\n```\nIn [31]: decode(\"Kvothe went to market\")\nOut[31]: 'Kvothe went to the market'\n```\n\n解码一个混淆“then”和“than”的句子：\n\n```\nIn [30]: decode(\"the Cardinals did better then the Cubs in the offseason\")\nOut[30]: 'the Cardinals did better than the Cubs in the offseason'\n```\n\n\n## 实现细节\n\n本项目复用了 TensorFlow 的 [`Seq2SeqModel`](https:\u002F\u002Fgithub.com\u002Ftensorflow\u002Ftensorflow\u002Fblob\u002Fmaster\u002Ftensorflow\u002Fmodels\u002Frnn\u002Ftranslate\u002Fseq2seq_model.py)，并对其进行了轻微扩展。该模型本身实现了带有注意力机制的序列到序列模型，其原理参见 https:\u002F\u002Farxiv.org\u002Fpdf\u002F1412.7449v3.pdf。  \n本项目的主要贡献包括：\n\n- `data_reader.py`: 一个抽象类，定义了能够读取源数据集并生成输入-输出对的接口，其中输入是语法错误的源句变体，输出则是原始句子。\n- `text_corrector_data_readers.py`: 包含几个 `DataReader` 的实现，分别针对 [Penn Treebank 
数据集](http:\u002F\u002Fwww.fit.vutbr.cz\u002F~imikolov\u002Frnnlm\u002Fsimple-examples.tgz) 和 [康奈尔电影对话语料库](http:\u002F\u002Fwww.cs.cornell.edu\u002F~cristian\u002FCornell_Movie-Dialogs_Corpus.html)。\n- `text_corrector_models.py`: 包含一个修改后的 `Seq2SeqModel` 版本，实现了 [有偏解码](#biased-decoding) 中描述的逻辑。\n- `correct_text.py`: 一系列辅助函数，共同支持模型的训练以及在测试时对错误输入序列的解码。此处定义的 `decode` 方法实现了 [处理未登录词的逻辑](#handling-oov-tokens)。此外，该文件还定义了一个主方法，可通过命令行调用。它主要基于 TensorFlow 的 [`translate.py`](https:\u002F\u002Fwww.tensorflow.org\u002Ftutorials\u002Fseq2seq\u002F) 开发。\n- `TextCorrector.ipynb`: 一个 IPython 笔记本，将上述所有组件整合在一起，以交互式的方式进行模型的训练和评估。\n\n### 使用示例\n注意：本项目要求 TensorFlow 版本 >= 0.11。请参阅[此页面](https:\u002F\u002Fwww.tensorflow.org\u002Fget_started\u002Fos_setup)以获取安装说明。\n\n**预处理电影对话数据**\n```\npython preprocessors\u002Fpreprocess_movie_dialogs.py --raw_data movie_lines.txt \\\n                                                 --out_file preprocessed_movie_lines.txt\n```\n随后，您可以根据需要对预处理后的文件进行分割，以创建训练集、验证集和测试集。\n\n**训练：**\n```\npython correct_text.py --train_path \u002Fmovie_dialog_train.txt \\\n                       --val_path \u002Fmovie_dialog_val.txt \\\n                       --config DefaultMovieDialogConfig \\\n                       --data_reader_type MovieDialogReader \\\n                       --model_path \u002Fmovie_dialog_model\n```\n\n**测试：**\n```\npython correct_text.py --test_path \u002Fmovie_dialog_test.txt \\\n                       --config DefaultMovieDialogConfig \\\n                       --data_reader_type MovieDialogReader \\\n                       --model_path \u002Fmovie_dialog_model \\\n                       --decode\n```","# Deep Text Corrector 快速上手指南\n\nDeep Text Corrector 是一个基于 TensorFlow 的序列到序列（Seq2Seq）模型，旨在自动纠正英语口语化文本（如短信、即时通讯消息）中的细微语法错误（例如冠词缺失、同音词混淆等）。\n\n## 环境准备\n\n在开始之前，请确保您的开发环境满足以下要求：\n\n*   **操作系统**：Linux, macOS 或 Windows\n*   **Python 版本**：建议 Python 3.6+\n*   **核心依赖**：\n    *   TensorFlow (版本 >= 0.11；代码基于旧版 API，在新版本环境中可能需要适配)\n    *   NumPy\n    *   Jupyter Notebook (可选，用于交互式运行 `TextCorrector.ipynb`)\n\n**安装依赖：**\n\n```bash\npip install tensorflow numpy jupyter\n```\n\n> **提示**：国内用户推荐使用清华源或阿里源加速安装：\n> `pip install -i https:\u002F\u002Fpypi.tuna.tsinghua.edu.cn\u002Fsimple tensorflow numpy jupyter`\n\n## 安装步骤\n\n该项目为开源代码库，无需通过包管理器安装，直接克隆源码即可。\n\n1.  **克隆仓库**\n    ```bash\n    git clone https:\u002F\u002Fgithub.com\u002Fatpaino\u002Fdeep-text-corrector.git\n    cd deep-text-corrector\n    ```\n\n2.  **数据预处理（以电影对话数据集为例）**\n    在使用模型前，需要先将原始文本数据转换为训练所需的输入-输出对。项目提供了针对 Cornell Movie-Dialogs Corpus 的预处理脚本。\n\n    假设您已下载 `movie_lines.txt`，运行以下命令生成预处理文件：\n    ```bash\n    python preprocessors\u002Fpreprocess_movie_dialogs.py --raw_data movie_lines.txt \\\n                                                     --out_file preprocessed_movie_lines.txt\n    ```\n\n    *注：生成的 `preprocessed_movie_lines.txt` 可自行划分为训练集、验证集和测试集（参见下方示意脚本）。*\n\n
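如果只是想快速开始实验，可以参考下面的示意脚本按行顺序做 8:1:1 切分（仅为示例，文件名与比例均为假设；README 的实验中是按电影划分的，以避免同一部电影的台词同时出现在训练集和测试集中）：\n\n```python\n# 示意：将预处理结果按 8:1:1 顺序切分为训练、验证、测试集\nwith open('preprocessed_movie_lines.txt', encoding='utf-8') as f:\n    lines = f.readlines()\n\nn = len(lines)\nsplits = {\n    'movie_dialog_train.txt': lines[:int(n * 0.8)],\n    'movie_dialog_val.txt': lines[int(n * 0.8):int(n * 0.9)],\n    'movie_dialog_test.txt': lines[int(n * 0.9):],\n}\nfor name, part in splits.items():\n    with open(name, 'w', encoding='utf-8') as out:\n        out.writelines(part)\n```\n\n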
## 基本使用\n\n### 1. 训练模型\n\n使用预处理好的数据训练 Seq2Seq 模型。以下命令展示了如何指定训练路径、验证路径以及配置参数：\n\n```bash\npython correct_text.py --train_path \u002Fpath\u002Fto\u002Fmovie_dialog_train.txt \\\n                       --val_path \u002Fpath\u002Fto\u002Fmovie_dialog_val.txt \\\n                       --config DefaultMovieDialogConfig \\\n                       --data_reader_type MovieDialogReader \\\n                       --model_path \u002Fpath\u002Fto\u002Fsave\u002Fmodel\n```\n\n*   `--train_path`: 训练数据文件路径\n*   `--val_path`: 验证数据文件路径\n*   `--config`: 配置文件类名（默认使用 `DefaultMovieDialogConfig`）\n*   `--data_reader_type`: 数据读取器类型（此处使用 `MovieDialogReader`）\n*   `--model_path`: 模型保存路径\n\n### 2. 推理与纠错\n\n训练完成后，您可以加载模型对含有语法错误的句子进行纠正。可以通过 Python 交互式环境或直接修改脚本调用 `decode` 方法（以下示例与 README 中的交互式示例一致；实际调用前通常还需先加载已训练的模型，并按 FAQ 所述通过 `get_corrective_tokens` 获取纠正令牌）。\n\n**Python 交互示例：**\n\n```python\nfrom correct_text import decode\n\n# 示例 1: 纠正缺失的冠词\nresult_1 = decode(\"Kvothe went to market\")\nprint(result_1)\n# 输出: 'Kvothe went to the market'\n\n# 示例 2: 纠正同音词混淆 (then -> than)\nresult_2 = decode(\"the Cardinals did better then the Cubs in the offseason\")\nprint(result_2)\n# 输出: 'the Cardinals did better than the Cubs in the offseason'\n```\n\n**命令行解码模式：**\n您也可以直接从命令行对测试文件进行解码，对应 README 中的测试命令（使用 `--decode` 标志）：\n\n```bash\npython correct_text.py --test_path \u002Fpath\u002Fto\u002Fmovie_dialog_test.txt \\\n                       --config DefaultMovieDialogConfig \\\n                       --data_reader_type MovieDialogReader \\\n                       --model_path \u002Fpath\u002Fto\u002Fsave\u002Fmodel \\\n                       --decode\n```\n\n### 3. 使用 Notebook 进行实验\n\n项目包含一个完整的 Jupyter Notebook (`TextCorrector.ipynb`)，集成了数据加载、模型训练和评估流程，适合开发者进行交互式探索和调试：\n\n```bash\njupyter notebook TextCorrector.ipynb\n```","某跨国电商平台的客服团队每天需处理大量非英语母语用户发来的英文咨询消息，这些消息常包含细微的语法错误，影响理解效率。\n\n### 没有 deep-text-corrector 时\n- 客服人员需花费额外时间脑补用户意图，例如将\"I'm going to store\"自行推断为\"I'm going to **the** store\"，降低响应速度。\n- 传统拼写检查工具对冠词缺失（如漏掉 a\u002Fan\u002Fthe）或缩写错误（如把 I'm 写成 I）完全无效，导致错误信息直接流入工单系统。\n- 同音词混淆（如把 their 写成 there）频繁引发误解，甚至导致发错货或退款纠纷，增加售后成本。\n- 新员工培训成本高，需专门学习如何“翻译”各类典型的“学习者英语”错误模式。\n- 自动化聊天机器人因无法识别此类语法偏差，经常给出答非所问的回复，用户体验极差。\n\n### 使用 deep-text-corrector 后\n- 用户输入的破碎句子被实时修正为标准英语，客服无需猜测即可直接理解意图，平均响应时间缩短 40%。\n- 针对冠词遗漏、动词缩写缺失等特定错误进行精准修复，让原本会被忽略的语法问题在进入人工流程前就被自动清洗。\n- 智能识别并纠正同音词误用，从源头减少因语义歧义导致的业务差错，显著降低客诉率。\n- 新人上手更快，因为系统展示的都是规范文本，不再需要掌握复杂的错误解码技巧。\n- 接入客服机器人的预处理器后，意图识别准确率大幅提升，自动回复更加自然流畅，用户满意度明显提高。\n\ndeep-text-corrector 通过深度学习模型将碎片化的“中式英语”或“学习者英语”实时转化为规范表达，成为连接非母语用户与高效服务之间的隐形桥梁。","https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fatpaino_deep-text-corrector_8b0213a7.png","atpaino","Alex Paino","https:\u002F\u002Foss.gittoolsai.com\u002Favatars\u002Fatpaino_d98ae7ac.jpg",null,"United States","atpaino@gmail.com","www.atpaino.com","https:\u002F\u002Fgithub.com\u002Fatpaino",[85,89],{"name":86,"color":87,"percentage":88},"Python","#3572A5",70.1,{"name":90,"color":91,"percentage":92},"Jupyter Notebook","#DA5B0B",29.9,1237,260,"2026-03-28T00:22:00","Apache-2.0",4,"未说明",{"notes":100,"python":101,"dependencies":102},"该项目基于较旧版本的 TensorFlow (>=0.11) 构建，主要使用 LSTM 编码器-解码器架构。代码复用了 TensorFlow 早期的 seq2seq 教程实现。由于依赖版本过低，在现代环境中运行可能需要对代码进行适配或使用兼容的旧版环境。训练数据主要使用康奈尔电影对话语料库 (Cornell Movie-Dialogs Corpus)。","未说明 (需配合 TensorFlow >= 0.11)",[103],"tensorflow>=0.11",[15],"2026-03-27T02:49:30.150509","2026-04-10T20:35:36.555247",[108,113],{"id":109,"question_zh":110,"answer_zh":111,"source_url":112},28118,"导入模块时出现 'ModuleNotFoundError: No module named text_correcter_data_readers' 错误怎么办？","这是一个拼写错误。代码中的模块名 'text_correcter' 应更正为 'text_corrector'（将 'er' 改为 'or'）。请检查并修改导入语句为：from text_corrector_data_readers import PTBDataReader, 
MovieDialogReader。","https:\u002F\u002Fgithub.com\u002Fatpaino\u002Fdeep-text-corrector\u002Fissues\u002F27",{"id":114,"question_zh":115,"answer_zh":116,"source_url":117},28119,"模型运行后输出与输入完全一致，没有进行纠错，如何解决？","这是因为在运行解码（decode）时未传入纠正令牌（corrective tokens）。解决方法是先调用 get_corrective_tokens(data_reader, train_path) 函数获取纠正令牌，然后将其传递给 decode 函数。完成此步骤后，模型即可正常输出纠错结果。","https:\u002F\u002Fgithub.com\u002Fatpaino\u002Fdeep-text-corrector\u002Fissues\u002F23",[]]