[{"data":1,"prerenderedAt":-1},["ShallowReactive",2],{"similar-OpenNMT--OpenNMT-tf":3,"tool-OpenNMT--OpenNMT-tf":64},[4,17,27,35,43,56],{"id":5,"name":6,"github_repo":7,"description_zh":8,"stars":9,"difficulty_score":10,"last_commit_at":11,"category_tags":12,"status":16},3808,"stable-diffusion-webui","AUTOMATIC1111\u002Fstable-diffusion-webui","stable-diffusion-webui 是一个基于 Gradio 构建的网页版操作界面，旨在让用户能够轻松地在本地运行和使用强大的 Stable Diffusion 图像生成模型。它解决了原始模型依赖命令行、操作门槛高且功能分散的痛点，将复杂的 AI 绘图流程整合进一个直观易用的图形化平台。\n\n无论是希望快速上手的普通创作者、需要精细控制画面细节的设计师，还是想要深入探索模型潜力的开发者与研究人员，都能从中获益。其核心亮点在于极高的功能丰富度：不仅支持文生图、图生图、局部重绘（Inpainting）和外绘（Outpainting）等基础模式，还独创了注意力机制调整、提示词矩阵、负向提示词以及“高清修复”等高级功能。此外，它内置了 GFPGAN 和 CodeFormer 等人脸修复工具，支持多种神经网络放大算法，并允许用户通过插件系统无限扩展能力。即使是显存有限的设备，stable-diffusion-webui 也提供了相应的优化选项，让高质量的 AI 艺术创作变得触手可及。",162132,3,"2026-04-05T11:01:52",[13,14,15],"开发框架","图像","Agent","ready",{"id":18,"name":19,"github_repo":20,"description_zh":21,"stars":22,"difficulty_score":23,"last_commit_at":24,"category_tags":25,"status":16},1381,"everything-claude-code","affaan-m\u002Feverything-claude-code","everything-claude-code 是一套专为 AI 编程助手（如 Claude Code、Codex、Cursor 等）打造的高性能优化系统。它不仅仅是一组配置文件，而是一个经过长期实战打磨的完整框架，旨在解决 AI 代理在实际开发中面临的效率低下、记忆丢失、安全隐患及缺乏持续学习能力等核心痛点。\n\n通过引入技能模块化、直觉增强、记忆持久化机制以及内置的安全扫描功能，everything-claude-code 能显著提升 AI 在复杂任务中的表现，帮助开发者构建更稳定、更智能的生产级 AI 代理。其独特的“研究优先”开发理念和针对 Token 消耗的优化策略，使得模型响应更快、成本更低，同时有效防御潜在的攻击向量。\n\n这套工具特别适合软件开发者、AI 研究人员以及希望深度定制 AI 工作流的技术团队使用。无论您是在构建大型代码库，还是需要 AI 协助进行安全审计与自动化测试，everything-claude-code 都能提供强大的底层支持。作为一个曾荣获 Anthropic 黑客大奖的开源项目，它融合了多语言支持与丰富的实战钩子（hooks），让 AI 真正成长为懂上",138956,2,"2026-04-05T11:33:21",[13,15,26],"语言模型",{"id":28,"name":29,"github_repo":30,"description_zh":31,"stars":32,"difficulty_score":23,"last_commit_at":33,"category_tags":34,"status":16},2271,"ComfyUI","Comfy-Org\u002FComfyUI","ComfyUI 是一款功能强大且高度模块化的视觉 AI 引擎，专为设计和执行复杂的 Stable Diffusion 图像生成流程而打造。它摒弃了传统的代码编写模式，采用直观的节点式流程图界面，让用户通过连接不同的功能模块即可构建个性化的生成管线。\n\n这一设计巧妙解决了高级 AI 
绘图工作流配置复杂、灵活性不足的痛点。用户无需具备编程背景，也能自由组合模型、调整参数并实时预览效果，轻松实现从基础文生图到多步骤高清修复等各类复杂任务。ComfyUI 拥有极佳的兼容性，不仅支持 Windows、macOS 和 Linux 全平台，还广泛适配 NVIDIA、AMD、Intel 及苹果 Silicon 等多种硬件架构，并率先支持 SDXL、Flux、SD3 等前沿模型。\n\n无论是希望深入探索算法潜力的研究人员和开发者，还是追求极致创作自由度的设计师与资深 AI 绘画爱好者，ComfyUI 都能提供强大的支持。其独特的模块化架构允许社区不断扩展新功能，使其成为当前最灵活、生态最丰富的开源扩散模型工具之一，帮助用户将创意高效转化为现实。",107662,"2026-04-03T11:11:01",[13,14,15],{"id":36,"name":37,"github_repo":38,"description_zh":39,"stars":40,"difficulty_score":23,"last_commit_at":41,"category_tags":42,"status":16},3704,"NextChat","ChatGPTNextWeb\u002FNextChat","NextChat 是一款轻量且极速的 AI 助手，旨在为用户提供流畅、跨平台的大模型交互体验。它完美解决了用户在多设备间切换时难以保持对话连续性，以及面对众多 AI 模型不知如何统一管理的痛点。无论是日常办公、学习辅助还是创意激发，NextChat 都能让用户随时随地通过网页、iOS、Android、Windows、MacOS 或 Linux 端无缝接入智能服务。\n\n这款工具非常适合普通用户、学生、职场人士以及需要私有化部署的企业团队使用。对于开发者而言，它也提供了便捷的自托管方案，支持一键部署到 Vercel 或 Zeabur 等平台。\n\nNextChat 的核心亮点在于其广泛的模型兼容性，原生支持 Claude、DeepSeek、GPT-4 及 Gemini Pro 等主流大模型，让用户在一个界面即可自由切换不同 AI 能力。此外，它还率先支持 MCP（Model Context Protocol）协议，增强了上下文处理能力。针对企业用户，NextChat 提供专业版解决方案，具备品牌定制、细粒度权限控制、内部知识库整合及安全审计等功能，满足公司对数据隐私和个性化管理的高标准要求。",87618,"2026-04-05T07:20:52",[13,26],{"id":44,"name":45,"github_repo":46,"description_zh":47,"stars":48,"difficulty_score":23,"last_commit_at":49,"category_tags":50,"status":16},2268,"ML-For-Beginners","microsoft\u002FML-For-Beginners","ML-For-Beginners 是由微软推出的一套系统化机器学习入门课程，旨在帮助零基础用户轻松掌握经典机器学习知识。这套课程将学习路径规划为 12 周，包含 26 节精炼课程和 52 道配套测验，内容涵盖从基础概念到实际应用的完整流程，有效解决了初学者面对庞大知识体系时无从下手、缺乏结构化指导的痛点。\n\n无论是希望转型的开发者、需要补充算法背景的研究人员，还是对人工智能充满好奇的普通爱好者，都能从中受益。课程不仅提供了清晰的理论讲解，还强调动手实践，让用户在循序渐进中建立扎实的技能基础。其独特的亮点在于强大的多语言支持，通过自动化机制提供了包括简体中文在内的 50 多种语言版本，极大地降低了全球不同背景用户的学习门槛。此外，项目采用开源协作模式，社区活跃且内容持续更新，确保学习者能获取前沿且准确的技术资讯。如果你正寻找一条清晰、友好且专业的机器学习入门之路，ML-For-Beginners 将是理想的起点。",84991,"2026-04-05T10:45:23",[14,51,52,53,15,54,26,13,55],"数据工具","视频","插件","其他","音频",{"id":57,"name":58,"github_repo":59,"description_zh":60,"stars":61,"difficulty_score":10,"last_commit_at":62,"category_tags":63,"status":16},3128,"ragflow","infiniflow\u002Fragflow","RAGFlow 
是一款领先的开源检索增强生成（RAG）引擎，旨在为大语言模型构建更精准、可靠的上下文层。它巧妙地将前沿的 RAG 技术与智能体（Agent）能力相结合，不仅支持从各类文档中高效提取知识，还能让模型基于这些知识进行逻辑推理和任务执行。\n\n在大模型应用中，幻觉问题和知识滞后是常见痛点。RAGFlow 通过深度解析复杂文档结构（如表格、图表及混合排版），显著提升了信息检索的准确度，从而有效减少模型“胡编乱造”的现象，确保回答既有据可依又具备时效性。其内置的智能体机制更进一步，使系统不仅能回答问题，还能自主规划步骤解决复杂问题。\n\n这款工具特别适合开发者、企业技术团队以及 AI 研究人员使用。无论是希望快速搭建私有知识库问答系统，还是致力于探索大模型在垂直领域落地的创新者，都能从中受益。RAGFlow 提供了可视化的工作流编排界面和灵活的 API 接口，既降低了非算法背景用户的上手门槛，也满足了专业开发者对系统深度定制的需求。作为基于 Apache 2.0 协议开源的项目，它正成为连接通用大模型与行业专有知识之间的重要桥梁。",77062,"2026-04-04T04:44:48",[15,14,13,26,54],{"id":65,"github_repo":66,"name":67,"description_en":68,"description_zh":69,"ai_summary_zh":69,"readme_en":70,"readme_zh":71,"quickstart_zh":72,"use_case_zh":73,"hero_image_url":74,"owner_login":75,"owner_name":75,"owner_avatar_url":76,"owner_bio":77,"owner_company":78,"owner_location":78,"owner_email":78,"owner_twitter":78,"owner_website":79,"owner_url":80,"languages":81,"stars":90,"forks":91,"last_commit_at":92,"license":93,"difficulty_score":23,"env_os":94,"env_gpu":95,"env_ram":96,"env_deps":97,"category_tags":102,"github_topics":103,"view_count":23,"oss_zip_url":78,"oss_zip_packed_at":78,"status":16,"created_at":111,"updated_at":112,"faqs":113,"releases":134},1369,"OpenNMT\u002FOpenNMT-tf","OpenNMT-tf","Neural machine translation and sequence learning using TensorFlow","OpenNMT-tf 是一个基于 TensorFlow 2 的通用序列学习工具包，主打神经机器翻译，也能轻松胜任文本生成、序列标注、分类和语言模型等任务。它把复杂的 Transformer、RNN 等模型拆成可插拔的积木，开发者只需几行 Python 就能拼出自定义网络，比如多输入、共享词向量、级联编码器等，既适合快速实验，也支持生产部署。  \n借助 TensorFlow 2 生态，OpenNMT-tf 原生支持多 GPU、混合精度、分布式训练、TensorBoard 可视化，并能一键导出 SavedModel 上线推理。  \n如果你是对 NLP 感兴趣的开发者、研究人员，或想把翻译、文本生成能力集成到产品中的工程师，OpenNMT-tf 提供了简洁的 API、丰富的模型库和向后兼容保证，让你专注算法创新，而不用重复造轮子。","[![CI](https:\u002F\u002Fgithub.com\u002FOpenNMT\u002FOpenNMT-tf\u002Fworkflows\u002FCI\u002Fbadge.svg)](https:\u002F\u002Fgithub.com\u002FOpenNMT\u002FOpenNMT-tf\u002Factions?query=workflow%3ACI) 
[![codecov](https:\u002F\u002Fcodecov.io\u002Fgh\u002FOpenNMT\u002FOpenNMT-tf\u002Fbranch\u002Fmaster\u002Fgraph\u002Fbadge.svg)](https:\u002F\u002Fcodecov.io\u002Fgh\u002FOpenNMT\u002FOpenNMT-tf) [![PyPI version](https:\u002F\u002Fbadge.fury.io\u002Fpy\u002FOpenNMT-tf.svg)](https:\u002F\u002Fbadge.fury.io\u002Fpy\u002FOpenNMT-tf) [![Documentation](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002Fdocs-latest-blue.svg)](https:\u002F\u002Fopennmt.net\u002FOpenNMT-tf\u002F) [![Gitter](https:\u002F\u002Fbadges.gitter.im\u002FOpenNMT\u002FOpenNMT-tf.svg)](https:\u002F\u002Fgitter.im\u002FOpenNMT\u002FOpenNMT-tf?utm_source=badge&utm_medium=badge&utm_campaign=pr-badge) [![Forum](https:\u002F\u002Fimg.shields.io\u002Fdiscourse\u002Fstatus?server=https%3A%2F%2Fforum.opennmt.net%2F)](https:\u002F\u002Fforum.opennmt.net\u002F)\n\n# OpenNMT-tf\n\nOpenNMT-tf is a general purpose sequence learning toolkit using TensorFlow 2. While neural machine translation is the main target task, it has been designed to more generally support:\n\n* sequence to sequence mapping\n* sequence tagging\n* sequence classification\n* language modeling\n\nThe project is production-oriented and comes with [backward compatibility guarantees](https:\u002F\u002Fgithub.com\u002FOpenNMT\u002FOpenNMT-tf\u002Fblob\u002Fmaster\u002FCHANGELOG.md).\n\n## Key features\n\n### Modular model architecture\n\nModels are described with code to allow training custom architectures and overriding default behavior. 
For example, the following instance defines a sequence to sequence model with 2 concatenated input features, a self-attentional encoder, and an attentional RNN decoder sharing its input and output embeddings:\n\n```python\nopennmt.models.SequenceToSequence(\n    source_inputter=opennmt.inputters.ParallelInputter(\n        [\n            opennmt.inputters.WordEmbedder(embedding_size=256),\n            opennmt.inputters.WordEmbedder(embedding_size=256),\n        ],\n        reducer=opennmt.layers.ConcatReducer(axis=-1),\n    ),\n    target_inputter=opennmt.inputters.WordEmbedder(embedding_size=512),\n    encoder=opennmt.encoders.SelfAttentionEncoder(num_layers=6),\n    decoder=opennmt.decoders.AttentionalRNNDecoder(\n        num_layers=4,\n        num_units=512,\n        attention_mechanism_class=tfa.seq2seq.LuongAttention,\n    ),\n    share_embeddings=opennmt.models.EmbeddingsSharingLevel.TARGET,\n)\n```\n\nThe [`opennmt`](https:\u002F\u002Fopennmt.net\u002FOpenNMT-tf\u002Fpackage\u002Foverview.html) package exposes other building blocks that can be used to design:\n\n* [multiple input features](https:\u002F\u002Fopennmt.net\u002FOpenNMT-tf\u002Fpackage\u002Fopennmt.inputters.ParallelInputter.html)\n* [mixed embedding representation](https:\u002F\u002Fopennmt.net\u002FOpenNMT-tf\u002Fpackage\u002Fopennmt.inputters.MixedInputter.html)\n* [multi-source context](https:\u002F\u002Fopennmt.net\u002FOpenNMT-tf\u002Fpackage\u002Fopennmt.inputters.ParallelInputter.html)\n* [cascaded](https:\u002F\u002Fopennmt.net\u002FOpenNMT-tf\u002Fpackage\u002Fopennmt.encoders.SequentialEncoder.html) or [multi-column](https:\u002F\u002Fopennmt.net\u002FOpenNMT-tf\u002Fpackage\u002Fopennmt.encoders.ParallelEncoder.html) encoder\n* [hybrid sequence to sequence models](https:\u002F\u002Fopennmt.net\u002FOpenNMT-tf\u002Fpackage\u002Fopennmt.models.SequenceToSequence.html)\n\nStandard models such as the Transformer are defined in a [model 
catalog](https:\u002F\u002Fgithub.com\u002FOpenNMT\u002FOpenNMT-tf\u002Fblob\u002Fmaster\u002Fopennmt\u002Fmodels\u002Fcatalog.py) and can be used without additional configuration.\n\n*Find more information about model configuration in the [documentation](https:\u002F\u002Fopennmt.net\u002FOpenNMT-tf\u002Fmodel.html).*\n\n### Full TensorFlow 2 integration\n\nOpenNMT-tf is fully integrated in the TensorFlow 2 ecosystem:\n\n* Reusable layers extending [`tf.keras.layers.Layer`](https:\u002F\u002Fwww.tensorflow.org\u002Fapi_docs\u002Fpython\u002Ftf\u002Fkeras\u002Flayers\u002FLayer)\n* Multi-GPU training with [`tf.distribute`](https:\u002F\u002Fwww.tensorflow.org\u002Fapi_docs\u002Fpython\u002Ftf\u002Fdistribute) and distributed training with [Horovod](https:\u002F\u002Fgithub.com\u002Fhorovod\u002Fhorovod)\n* Mixed precision training with [`tf.keras.mixed_precision`](https:\u002F\u002Fwww.tensorflow.org\u002Fguide\u002Fmixed_precision)\n* Visualization with [TensorBoard](https:\u002F\u002Fwww.tensorflow.org\u002Ftensorboard)\n* `tf.function` graph tracing that can be [exported to a SavedModel](https:\u002F\u002Fopennmt.net\u002FOpenNMT-tf\u002Fserving.html) and served with [TensorFlow Serving](https:\u002F\u002Fgithub.com\u002FOpenNMT\u002FOpenNMT-tf\u002Ftree\u002Fmaster\u002Fexamples\u002Fserving\u002Ftensorflow_serving) or [Python](https:\u002F\u002Fgithub.com\u002FOpenNMT\u002FOpenNMT-tf\u002Ftree\u002Fmaster\u002Fexamples\u002Fserving\u002Fpython)\n\n### Compatibility with CTranslate2\n\n[CTranslate2](https:\u002F\u002Fgithub.com\u002FOpenNMT\u002FCTranslate2) is an optimized inference engine for OpenNMT models featuring fast CPU and GPU execution, model quantization, parallel translations, dynamic memory usage, interactive decoding, and more! 
OpenNMT-tf can [automatically export](https:\u002F\u002Fopennmt.net\u002FOpenNMT-tf\u002Fserving.html#ctranslate2) models to be used in CTranslate2.\n\n### Dynamic data pipeline\n\nOpenNMT-tf does not require compiling the data before training. Instead, it can directly read text files and preprocess the data as needed during training. This allows [on-the-fly tokenization](https:\u002F\u002Fopennmt.net\u002FOpenNMT-tf\u002Ftokenization.html) and data augmentation by injecting random noise.\n\n### Model fine-tuning\n\nOpenNMT-tf supports model fine-tuning workflows:\n\n* Model weights can be transferred to new word vocabularies, e.g. to inject domain terminology before fine-tuning on in-domain data\n* [Contrastive learning](https:\u002F\u002Fai.google\u002Fresearch\u002Fpubs\u002Fpub48253\u002F) to reduce word omission errors\n\n### Source-target alignment\n\nSequence to sequence models can be trained with [guided alignment](https:\u002F\u002Farxiv.org\u002Fabs\u002F1607.01628), and alignment information is returned as part of the translation API.\n\n---\n\nOpenNMT-tf also implements most of the techniques commonly used to train and evaluate sequence models, such as:\n\n* automatic evaluation during training\n* multiple decoding strategies: greedy search, beam search, random sampling\n* N-best rescoring\n* gradient accumulation\n* scheduled sampling\n* checkpoint averaging\n* ... 
and more!\n\n*See the [documentation](https:\u002F\u002Fopennmt.net\u002FOpenNMT-tf\u002F) to learn how to use these features.*\n\n## Usage\n\nOpenNMT-tf requires:\n\n* Python 3.7 or above\n* TensorFlow 2.6, 2.7, 2.8, 2.9, 2.10, 2.11, 2.12, or 2.13\n\nWe recommend installing it with `pip`:\n\n```bash\npip install --upgrade pip\npip install OpenNMT-tf\n```\n\n*See the [documentation](https:\u002F\u002Fopennmt.net\u002FOpenNMT-tf\u002Finstallation.html) for more information.*\n\n### Command line\n\nOpenNMT-tf comes with several command line utilities to prepare data, train, and evaluate models.\n\nFor all tasks involving model execution, OpenNMT-tf uses a single entry point: `onmt-main`. A typical OpenNMT-tf run consists of 3 elements:\n\n* the **model** type\n* the **parameters** described in a YAML file\n* the **run** type such as `train`, `eval`, `infer`, `export`, `score`, `average_checkpoints`, or `update_vocab`\n\nthat are passed to the main script:\n\n```\nonmt-main --model_type \u003Cmodel> --config \u003Cconfig_file.yml> --auto_config \u003Crun_type> \u003Crun_options>\n```\n\n*For more information and examples on how to use OpenNMT-tf, please visit [our documentation](https:\u002F\u002Fopennmt.net\u002FOpenNMT-tf).*\n\n### Library\n\nOpenNMT-tf also exposes [well-defined and stable APIs](https:\u002F\u002Fopennmt.net\u002FOpenNMT-tf\u002Fpackage\u002Foverview.html), from high-level training utilities to low-level model layers and dataset transformations.\n\nFor example, the `Runner` class can be used to train and evaluate models with a few lines of code:\n\n```python\nimport opennmt\n\nconfig = {\n    \"model_dir\": \"\u002Fdata\u002Fwmt-ende\u002Fcheckpoints\u002F\",\n    \"data\": {\n        \"source_vocabulary\": \"\u002Fdata\u002Fwmt-ende\u002Fjoint-vocab.txt\",\n        \"target_vocabulary\": \"\u002Fdata\u002Fwmt-ende\u002Fjoint-vocab.txt\",\n        \"train_features_file\": \"\u002Fdata\u002Fwmt-ende\u002Ftrain.en\",\n        \"train_labels_file\": 
\"\u002Fdata\u002Fwmt-ende\u002Ftrain.de\",\n        \"eval_features_file\": \"\u002Fdata\u002Fwmt-ende\u002Fvalid.en\",\n        \"eval_labels_file\": \"\u002Fdata\u002Fwmt-ende\u002Fvalid.de\",\n    }\n}\n\nmodel = opennmt.models.TransformerBase()\nrunner = opennmt.Runner(model, config, auto_config=True)\nrunner.train(num_devices=2, with_eval=True)\n```\n\nHere is another example using OpenNMT-tf to run efficient beam search with a self-attentional decoder:\n\n```python\ndecoder = opennmt.decoders.SelfAttentionDecoder(num_layers=6, vocab_size=32000)\n\ninitial_state = decoder.initial_state(\n    memory=memory, memory_sequence_length=memory_sequence_length\n)\n\nbatch_size = tf.shape(memory)[0]\nstart_ids = tf.fill([batch_size], opennmt.START_OF_SENTENCE_ID)\n\ndecoding_result = decoder.dynamic_decode(\n    target_embedding,\n    start_ids=start_ids,\n    initial_state=initial_state,\n    decoding_strategy=opennmt.utils.BeamSearch(4),\n)\n```\n\nMore examples using OpenNMT-tf as a library can be found online:\n\n* The directory [examples\u002Flibrary](https:\u002F\u002Fgithub.com\u002FOpenNMT\u002FOpenNMT-tf\u002Ftree\u002Fmaster\u002Fexamples\u002Flibrary) contains additional examples that use OpenNMT-tf as a library\n* [nmt-wizard-docker](https:\u002F\u002Fgithub.com\u002FOpenNMT\u002Fnmt-wizard-docker) uses the high-level `opennmt.Runner` API to wrap OpenNMT-tf with a custom interface for training, translating, and serving\n\n*For a complete overview of the APIs, see the [package documentation](https:\u002F\u002Fopennmt.net\u002FOpenNMT-tf\u002Fpackage\u002Foverview.html).*\n\n## Additional resources\n\n* [Documentation](https:\u002F\u002Fopennmt.net\u002FOpenNMT-tf)\n* [Forum](https:\u002F\u002Fforum.opennmt.net)\n* 
[Gitter](https:\u002F\u002Fgitter.im\u002FOpenNMT\u002FOpenNMT-tf)\n","[![CI](https:\u002F\u002Fgithub.com\u002FOpenNMT\u002FOpenNMT-tf\u002Fworkflows\u002FCI\u002Fbadge.svg)](https:\u002F\u002Fgithub.com\u002FOpenNMT\u002FOpenNMT-tf\u002Factions?query=workflow%3ACI) [![codecov](https:\u002F\u002Fcodecov.io\u002Fgh\u002FOpenNMT\u002FOpenNMT-tf\u002Fbranch\u002Fmaster\u002Fgraph\u002Fbadge.svg)](https:\u002F\u002Fcodecov.io\u002Fgh\u002FOpenNMT\u002FOpenNMT-tf) [![PyPI 版本](https:\u002F\u002Fbadge.fury.io\u002Fpy\u002FOpenNMT-tf.svg)](https:\u002F\u002Fbadge.fury.io\u002Fpy\u002FOpenNMT-tf) [![文档](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002Fdocs-latest-blue.svg)](https:\u002F\u002Fopennmt.net\u002FOpenNMT-tf\u002F) [![Gitter](https:\u002F\u002Fbadges.gitter.im\u002FOpenNMT\u002FOpenNMT-tf.svg)](https:\u002F\u002Fgitter.im\u002FOpenNMT\u002FOpenNMT-tf?utm_source=badge&utm_medium=badge&utm_campaign=pr-badge) [![论坛](https:\u002F\u002Fimg.shields.io\u002Fdiscourse\u002Fstatus?server=https%3A%2F%2Fforum.opennmt.net%2F)](https:\u002F\u002Fforum.opennmt.net\u002F)\n\n# OpenNMT-tf\n\nOpenNMT-tf 是一款基于 TensorFlow 2 的通用序列学习工具包。虽然神经机器翻译是其主要目标任务，但该工具包也旨在更广泛地支持以下多种任务：\n\n* 序列到序列映射\n* 序列标注\n* 序列分类\n* 语言建模\n\n该项目面向生产环境，提供[向后兼容性保障](https:\u002F\u002Fgithub.com\u002FOpenNMT\u002FOpenNMT-tf\u002Fblob\u002Fmaster\u002FCHANGELOG.md)。\n\n## 核心功能\n\n### 模块化模型架构\n\n模型的定义采用代码形式，便于用户自定义模型架构并覆盖默认行为。例如，以下示例定义了一个序列到序列模型，该模型包含 2 个串联的输入特征、一个自注意力编码器，以及一个共享输入和输出嵌入的注意力 RNN 解码器：\n\n```python\nopennmt.models.SequenceToSequence(\n    source_inputter=opennmt.inputters.ParallelInputter(\n        [\n            opennmt.inputters.WordEmbedder(embedding_size=256),\n            opennmt.inputters.WordEmbedder(embedding_size=256),\n        ],\n        reducer=opennmt.layers.ConcatReducer(axis=-1),\n    ),\n    target_inputter=opennmt.inputters.WordEmbedder(embedding_size=512),\n    encoder=opennmt.encoders.SelfAttentionEncoder(num_layers=6),\n    decoder=opennmt.decoders.AttentionalRNNDecoder(\n        num_layers=4,\n        
num_units=512,\n        attention_mechanism_class=tfa.seq2seq.LuongAttention,\n    ),\n    share_embeddings=opennmt.models.EmbeddingsSharingLevel.TARGET,\n)\n```\n\n[`opennmt`](https:\u002F\u002Fopennmt.net\u002FOpenNMT-tf\u002Fpackage\u002Foverview.html) 包还提供了其他构建模块，可用于设计各类模型：\n\n* [多个输入特征](https:\u002F\u002Fopennmt.net\u002FOpenNMT-tf\u002Fpackage\u002Fopennmt.inputters.ParallelInputter.html)\n* [混合嵌入表示](https:\u002F\u002Fopennmt.net\u002FOpenNMT-tf\u002Fpackage\u002Fopennmt.inputters.MixedInputter.html)\n* [多源上下文](https:\u002F\u002Fopennmt.net\u002FOpenNMT-tf\u002Fpackage\u002Fopennmt.inputters.ParallelInputter.html)\n* [串行编码器](https:\u002F\u002Fopennmt.net\u002FOpenNMT-tf\u002Fpackage\u002Fopennmt.encoders.SequentialEncoder.html)或[多列编码器](https:\u002F\u002Fopennmt.net\u002FOpenNMT-tf\u002Fpackage\u002Fopennmt.encoders.ParallelEncoder.html)\n* [混合序列到序列模型](https:\u002F\u002Fopennmt.net\u002FOpenNMT-tf\u002Fpackage\u002Fopennmt.models.SequenceToSequence.html)\n\n标准模型（例如 Transformer）已在[模型目录](https:\u002F\u002Fgithub.com\u002FOpenNMT\u002FOpenNMT-tf\u002Fblob\u002Fmaster\u002Fopennmt\u002Fmodels\u002Fcatalog.py)中定义，无需额外配置即可直接使用。\n\n*更多关于模型配置的信息，请参阅[文档](https:\u002F\u002Fopennmt.net\u002FOpenNMT-tf\u002Fmodel.html)。*\n\n### 全面集成 TensorFlow 2\n\nOpenNMT-tf 与 TensorFlow 2 生态系统深度整合：\n\n* 可重用层，扩展自 [`tf.keras.layers.Layer`](https:\u002F\u002Fwww.tensorflow.org\u002Fapi_docs\u002Fpython\u002Ftf\u002Fkeras\u002Flayers\u002FLayer)\n* 通过 [`tf.distribute`](https:\u002F\u002Fwww.tensorflow.org\u002Fapi_docs\u002Fpython\u002Ftf\u002Fdistribute) 进行多 GPU 训练，并可借助 [Horovod](https:\u002F\u002Fgithub.com\u002Fhorovod\u002Fhorovod) 进行分布式训练\n* 通过 [`tf.keras.mixed_precision`](https:\u002F\u002Fwww.tensorflow.org\u002Fguide\u002Fmixed_precision) 进行混合精度训练\n* 通过 [TensorBoard](https:\u002F\u002Fwww.tensorflow.org\u002Ftensorboard) 进行可视化\n* `tf.function` 图追踪，可[导出为 SavedModel](https:\u002F\u002Fopennmt.net\u002FOpenNMT-tf\u002Fserving.html)，并通过 [TensorFlow Serving](https:\u002F\u002Fgithub.com\u002FOpenNMT\u002FOpenNMT-tf\u002Ftree\u002Fmaster\u002Fexamples\u002Fserving\u002Ftensorflow_serving) 或 [Python](https:\u002F\u002Fgithub.com\u002FOpenNMT\u002FOpenNMT-tf\u002Ftree\u002Fmaster\u002Fexamples\u002Fserving\u002Fpython) 提供服务\n\n### 与 CTranslate2 的兼容性\n\n[CTranslate2](https:\u002F\u002Fgithub.com\u002FOpenNMT\u002FCTranslate2) 是一款专为 OpenNMT 模型优化的推理引擎，具备高效的 CPU 和 GPU 执行、模型量化、并行翻译、动态内存管理、交互式解码等功能！OpenNMT-tf 能够[自动导出](https:\u002F\u002Fopennmt.net\u002FOpenNMT-tf\u002Fserving.html#ctranslate2)模型，供 CTranslate2 使用。\n\n### 动态数据管道\n\nOpenNMT-tf 无需在训练前预编译数据，而是直接读取文本文件，并在训练过程中按需预处理。这使得[即时分词](https:\u002F\u002Fopennmt.net\u002FOpenNMT-tf\u002Ftokenization.html)以及通过注入随机噪声进行数据增强成为可能。\n\n### 模型微调\n\nOpenNMT-tf 支持模型微调流程：\n\n* 模型权重可以迁移到新的词表，例如在针对领域内数据微调之前注入领域术语\n* 通过[对比学习](https:\u002F\u002Fai.google\u002Fresearch\u002Fpubs\u002Fpub48253\u002F)减少漏词错误\n\n### 源端-目标端对齐\n\n序列到序列模型可使用[引导对齐](https:\u002F\u002Farxiv.org\u002Fabs\u002F1607.01628)进行训练，对齐信息会作为翻译 API 的一部分返回。\n\n---\n\nOpenNMT-tf 同时实现了大多数训练和评估序列模型的常用技术，例如：\n\n* 训练过程中自动评估\n* 多种解码策略：贪婪搜索、束搜索、随机采样\n* N-best 重打分\n* 梯度累积\n* 计划采样（scheduled sampling）\n* 检查点平均（checkpoint averaging）\n* ……以及更多！\n\n*请参阅[文档](https:\u002F\u002Fopennmt.net\u002FOpenNMT-tf\u002F)以了解如何使用这些功能。*\n\n## 使用方法\n\nOpenNMT-tf 需要：\n\n* Python 3.7 或更高版本\n* TensorFlow 2.6、2.7、2.8、2.9、2.10、2.11、2.12 或 2.13\n\n我们建议使用 `pip` 进行安装：\n\n```bash\npip install --upgrade pip\npip install OpenNMT-tf\n```\n\n*请参阅[文档](https:\u002F\u002Fopennmt.net\u002FOpenNMT-tf\u002Finstallation.html)以获取更多信息。*\n\n### 命令行\n\nOpenNMT-tf 提供了多个命令行工具，用于准备数据、训练和评估模型。\n\n对于所有涉及模型运行的任务，OpenNMT-tf 都使用同一个入口点：`onmt-main`。典型的 OpenNMT-tf 运行由三个部分组成：\n\n* **模型**类型\n* **参数**，以 YAML 文件描述\n* **运行**类型，例如 `train`、`eval`、`infer`、`export`、`score`、`average_checkpoints` 或 `update_vocab`\n\n它们被传递给主脚本：\n\n```\nonmt-main --model_type \u003Cmodel> --config \u003Cconfig_file.yml> --auto_config \u003Crun_type> \u003Crun_options>\n```\n\n*如需了解更多 OpenNMT-tf 的使用方法及示例，请访问[我们的文档](https:\u002F\u002Fopennmt.net\u002FOpenNMT-tf)。*\n\n### 作为库使用\n\nOpenNMT-tf 还提供了[定义清晰且稳定的 API](https:\u002F\u002Fopennmt.net\u002FOpenNMT-tf\u002Fpackage\u002Foverview.html)，从高级训练工具到低级模型层和数据集转换，一应俱全。\n\n例如，只需寥寥数行代码，即可使用 `Runner` 类训练和评估模型：\n\n```python\nimport opennmt\n\nconfig = {\n    \"model_dir\": \"\u002Fdata\u002Fwmt-ende\u002Fcheckpoints\u002F\",\n    \"data\": {\n        \"source_vocabulary\": \"\u002Fdata\u002Fwmt-ende\u002Fjoint-vocab.txt\",\n        \"target_vocabulary\": \"\u002Fdata\u002Fwmt-ende\u002Fjoint-vocab.txt\",\n        \"train_features_file\": \"\u002Fdata\u002Fwmt-ende\u002Ftrain.en\",\n        \"train_labels_file\": \"\u002Fdata\u002Fwmt-ende\u002Ftrain.de\",\n        \"eval_features_file\": \"\u002Fdata\u002Fwmt-ende\u002Fvalid.en\",\n        \"eval_labels_file\": \"\u002Fdata\u002Fwmt-ende\u002Fvalid.de\",\n    }\n}\n\nmodel = opennmt.models.TransformerBase()\nrunner = opennmt.Runner(model, config, auto_config=True)\nrunner.train(num_devices=2, with_eval=True)\n```\n\n以下是另一个示例，使用 OpenNMT-tf 对自注意力解码器运行高效束搜索：\n\n```python\ndecoder = opennmt.decoders.SelfAttentionDecoder(num_layers=6, vocab_size=32000)\n\ninitial_state = decoder.initial_state(\n    memory=memory, 
memory_sequence_length=memory_sequence_length\n)\n\nbatch_size = tf.shape(memory)[0]\nstart_ids = tf.fill([batch_size], opennmt.START_OF_SENTENCE_ID)\n\ndecoding_result = decoder.dynamic_decode(\n    target_embedding,\n    start_ids=start_ids,\n    initial_state=initial_state,\n    decoding_strategy=opennmt.utils.BeamSearch(4),\n)\n```\n\n更多使用 OpenNMT-tf 作为库的示例，可在线查阅：\n\n* [examples\u002Flibrary](https:\u002F\u002Fgithub.com\u002FOpenNMT\u002FOpenNMT-tf\u002Ftree\u002Fmaster\u002Fexamples\u002Flibrary) 目录收录了更多以 OpenNMT-tf 作为库使用的示例\n* [nmt-wizard-docker](https:\u002F\u002Fgithub.com\u002FOpenNMT\u002Fnmt-wizard-docker) 使用高级的 `opennmt.Runner` API，为 OpenNMT-tf 封装了自定义接口，用于训练、翻译及服务\n\n*如需全面了解各类 API，请参阅[软件包文档](https:\u002F\u002Fopennmt.net\u002FOpenNMT-tf\u002Fpackage\u002Foverview.html)。*\n\n## 其他资源\n\n* [文档](https:\u002F\u002Fopennmt.net\u002FOpenNMT-tf)\n* [论坛](https:\u002F\u002Fforum.opennmt.net)\n* [Gitter](https:\u002F\u002Fgitter.im\u002FOpenNMT\u002FOpenNMT-tf)","# OpenNMT-tf 快速上手指南\n\n## 环境准备\n- **操作系统**：Linux \u002F macOS \u002F Windows  \n- **Python**：3.7 及以上  \n- **TensorFlow**：2.6 ~ 2.13 任一版本  \n- **硬件**：CPU 即可运行；GPU 训练需 CUDA 11.2+（推荐 NVIDIA 驱动 ≥ 470）\n\n## 安装步骤\n```bash\n# 1. 升级 pip\npip install --upgrade pip\n\n# 2. 安装 OpenNMT-tf（国内可换清华源加速）\npip install OpenNMT-tf\n# 如需 GPU 支持，先装对应版本 TensorFlow\n# pip install tensorflow==2.12\n```\n\n## 基本使用\n### 1. 准备数据（示例：英德翻译）\n```\ntrain.en   # 英文源文件\ntrain.de   # 德文目标文件\nvalid.en   # 验证源文件\nvalid.de   # 验证目标文件\n```\n\n### 2. 创建配置文件 `config.yml`\n```yaml\nmodel_dir: .\u002Frun\u002F\n\ndata:\n  source_vocabulary: vocab.src\n  target_vocabulary: vocab.tgt\n  train_features_file: train.en\n  train_labels_file: train.de\n  eval_features_file: valid.en\n  eval_labels_file: valid.de\n```\n\n### 3. 一键训练\n```bash\n# 使用内置 Transformer 模型\nonmt-main --model_type TransformerBase \\\n          --config config.yml \\\n          --auto_config train --with_eval\n```\n\n### 4. 
推理翻译\n```bash\necho \"Hello world\" > input.txt\nonmt-main --model_type TransformerBase \\\n          --config config.yml \\\n          --auto_config infer --features_file input.txt\n```\n\n输出结果将保存在 `run\u002Fpredictions.txt`。","一家跨境电商初创公司需要把 200 万条商品标题和描述从中文自动翻译成英语、西班牙语和法语，以同步到 Amazon 北美、欧洲站点。\n\n### 没有 OpenNMT-tf 时\n- 团队只能调用通用翻译 API，每条 0.001 美元，200 万条需 2000 美元，且品牌词、规格词常被误译，导致退货率上升 3%。\n- 通用模型无法识别“连帽卫衣”与“卫衣”在标题长度限制下的差异，经常输出过长文本，需要人工二次截断，每天 2 人专职处理。\n- 多语言模型分散在 3 个不同框架里，维护 3 套代码，GPU 利用率不到 40%，训练一次要 5 天。\n- 无法增量学习：新品类“露营灯”上线后，旧模型把“太阳能”译成 “sun ability”，必须重新训练整个模型，周期两周。\n\n### 使用 OpenNMT-tf 后\n- 用公司历史语料（含品牌词、规格表）在 OpenNMT-tf 上训练专属 Transformer 模型，200 万条本地推理 0 成本，品牌词准确率从 82% 提升到 96%，退货率降至 1.2%。\n- 通过自定义 `target_inputter` 加入长度惩罚因子，模型自动在 60 字符内生成最简有效标题，人工截断岗位直接取消。\n- 统一代码库：同一套 YAML 配置即可切换 en\u002Fes\u002Ffr 三语，利用 `tf.distribute` 在四张 A100 上并行训练，GPU 利用率 95%，训练时间缩短到 9 小时。\n- 新增品类只需追加 5 千条标注数据做增量微调，OpenNMT-tf 的 checkpoint 热更新机制让模型 30 分钟内上线，“太阳能露营灯”被准确译成 “solar camping lantern”。\n\nOpenNMT-tf 让这家初创公司在 3 周内拥有低成本、高准确率且可持续进化的专属多语言翻译管线。","https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FOpenNMT_OpenNMT-tf_8808ab88.png","OpenNMT","https:\u002F\u002Foss.gittoolsai.com\u002Favatars\u002FOpenNMT_cd98805a.png","Open source ecosystem for neural machine translation and neural sequence learning",null,"https:\u002F\u002Fopennmt.net\u002F","https:\u002F\u002Fgithub.com\u002FOpenNMT",[82,86],{"name":83,"color":84,"percentage":85},"Python","#3572A5",99.1,{"name":87,"color":88,"percentage":89},"Shell","#89e051",0.9,1485,380,"2026-04-02T16:35:36","MIT","Linux, macOS, Windows","非必需，支持多 GPU 训练；若使用 GPU 需 TensorFlow 2.x 对应 CUDA\u002FcuDNN（官方未列具体版本）","未说明",{"notes":98,"python":99,"dependencies":100},"可通过 pip 安装 OpenNMT-tf；支持 Horovod 分布式训练；模型可导出为 SavedModel 或 CTranslate2 
格式用于生产部署；无需预编译数据，可直接读取文本文件并动态预处理","3.7+",[101],"TensorFlow>=2.6,\u003C=2.13",[26,13],[104,105,106,107,108,109,110],"neural-machine-translation","tensorflow","python","opennmt","machine-translation","deep-learning","natural-language-processing","2026-03-27T02:49:30.150509","2026-04-06T05:35:27.852394",[114,119,124,129],{"id":115,"question_zh":116,"answer_zh":117,"source_url":118},6279,"开启 gpu_allow_growth 后显存仍被一次性占满，如何解决？","问题通常是因为在程序最开始就调用了检测 GPU 数量的代码，导致 TensorFlow 提前初始化了全部显存。\n解决步骤：\n1. 不要在主进程最外层 import 或调用任何会触发 TensorFlow 初始化 GPU 的代码。\n2. 把 GPU 检测逻辑放到每个训练子进程内部再执行。\n3. 确认启动命令已加 `--gpu_allow_growth`，正常启动后显存占用应只有百兆级别（如 111 MB）。","https:\u002F\u002Fgithub.com\u002FOpenNMT\u002FOpenNMT-tf\u002Fissues\u002F168",{"id":120,"question_zh":121,"answer_zh":122,"source_url":123},6280,"单机 8 卡异步分布式训练升级版本后崩溃，提示 RecvTensor 重复请求，怎么办？","TensorFlow 的 gRPC 在 fork 后并不安全，直接在主进程 import opennmt 再 multiprocessing 启动子进程会导致通信异常。\n解决步骤：\n1. 不要在主脚本顶层 `from opennmt.bin.main import main`。\n2. 把 import 语句放到每个子进程内部，确保每个进程独立 import。\n3. 或者改用 N 个独立终端\u002FSSH 会话手动启动各角色，避免 fork。\n4. 若仍想用脚本，可用 subprocess.Popen 分别启动不同命令，而不是 multiprocessing。","https:\u002F\u002Fgithub.com\u002FOpenNMT\u002FOpenNMT-tf\u002Fissues\u002F449",{"id":125,"question_zh":126,"answer_zh":127,"source_url":128},6281,"将已有 .h5 音频特征转成 .tfrecord 后，LAS 模型训练 10 万步仍不收敛，输出几乎相同句子，如何排查？","99% 的情况都是数据集本身的问题。\n排查与修复：\n1. 检查 .tfrecord 生成脚本，确认特征与标签严格对齐，没有错位或重复。\n2. 用少量数据（如 100 条）过拟合测试，若仍无法过拟合，说明数据或预处理脚本有误。\n3. 确认音频特征提取与训练阶段使用完全一致的帧移、窗长、梅尔滤波器个数等超参。\n4. 若仍无解，可先用 PyTorch 版 LAS 做小规模实验，验证数据正确性后再回到 OpenNMT-tf 放大训练。","https:\u002F\u002Fgithub.com\u002FOpenNMT\u002FOpenNMT-tf\u002Fissues\u002F184",{"id":130,"question_zh":131,"answer_zh":132,"source_url":133},6282,"使用 onmt-update-vocab 更新词表后进行领域微调，分布式训练报错，需要改哪些配置？","更新词表后，只需确保以下文件一致即可：\n1. 用 `onmt-update-vocab` 生成的新 checkpoint 目录（含更新后的嵌入）。\n2. 训练\u002F验证的 `.yml` 配置文件中：\n   - `train_features_file` 与 `train_labels_file` 指向新领域数据。\n   - `source_vocabulary` 与 `target_vocabulary` 指向新的 50k 词表文件。\n3. 
其余超参（层数、隐层大小等）保持与基线模型一致。\n4. 启动命令示例：\n   ```bash\n   onmt-main train_and_eval \\\n     --model_type Transformer \\\n     --checkpoint_path \u002Fpath\u002Fto\u002Fupdated_vocab \\\n     --config \u002Fpath\u002Fto\u002Fconfig_da.yml \\\n     --auto_config --num_gpus 8\n   ```","https:\u002F\u002Fgithub.com\u002FOpenNMT\u002FOpenNMT-tf\u002Fissues\u002F269",[135,140,145,150,155,160,165,170,175,180,185,190,195,200,205,210,215,220,225,230],{"id":136,"version":137,"summary_zh":138,"released_at":139},105845,"v2.32.0","## New features\r\n\r\n* Support TensorFlow 2.12 and 2.13\r\n* Make timeout value configurable while searching for an optimal batch size","2023-08-04T08:42:18",{"id":141,"version":142,"summary_zh":143,"released_at":144},105846,"v2.31.0","## New features\r\n\r\n* Add option `--jit_compile` to compile the model with XLA (only applied in training at the moment)\r\n\r\n## Fixes and improvements\r\n\r\n* Improve correctness of gradient accumulation and multi-GPU training by normalizing the gradients with the true global batch size instead of using an approximation\r\n* Report the total number of tokens per second in the training logs, in addition to the source and target numbers\r\n* Relax the sacreBLEU version requirement to include any 2.x versions","2023-01-13T15:54:34",{"id":146,"version":147,"summary_zh":148,"released_at":149},105847,"v2.30.0","## Changes\r\n\r\n* The model attribute `ctranslate2_spec` has been removed as it is no longer relevant with the new CTranslate2 converter\r\n* The global gradient norm is no longer reported in TensorBoard because it was misleading: it did not take into account gradient accumulation and multi-GPU\r\n\r\n## New features\r\n\r\n* Support TensorFlow 2.11 (note that the new Keras optimizers are not yet supported, if you are creating optimizers manually please use an optimizer in `tf.keras.optimizers.legacy` for now)\r\n* Support CTranslate2 3.0\r\n* Add training parameter `pad_to_bucket_boundary` to pad the batch length 
…to a multiple of `length_bucket_width` (this is useful to reduce the number of recompilations with XLA)
* Integrate the scorers `chrf` and `chrf++` from SacreBLEU

## Fixes and improvements

* Fix error when training with Horovod and using an early stopping condition
* Fix error when using guided alignment with mixed precision

(Released 2022-12-12)

# v2.29.1 (2022-10-03)

## Fixes and improvements

* Fix error when using gzipped training data files
* Remove an unnecessary cast in `MultiHeadAttention` for a small performance improvement

# v2.29.0 (2022-09-26)

## New features

* Support TensorFlow 2.10
* Add model configurations `ScalingNmtEnDe` and `ScalingNmtEnFr` from [Ott et al. 2018](https://aclanthology.org/W18-6301/)
* Add embedding parameter `EmbeddingsSharingLevel.AUTO` to automatically share embeddings when the vocabulary is shared
* Extend method `Runner.average_checkpoints` to accept a list of checkpoints to average

## Fixes and improvements

* Make batch size autotuning faster when using gradient accumulation

# v2.28.0 (2022-07-29)

## New features

* Add `initial_learning_rate` parameter to the `InvSqrtDecay` schedule
* Add new arguments to the `Transformer` constructor:
  * `mha_bias`: disable the bias terms in the multi-head attention (as presented in the original paper)
  * `output_layer_bias`: disable the bias in the output linear layer

## Fixes and improvements

* Fix incorrect dtype for the `SequenceRecordInputter` length vector
* Fix a rounding error when batching datasets that could make the number of tokens in a batch greater than the configured batch size
* Fix deprecation warning when using `distutils.version.LooseVersion`; use `packaging.version.Version` instead
* Make the length dimension unknown in the dataset used for batch size autotuning so that it matches the behavior in training
* Update the SacreBLEU requirement to include the new version 2.2

# v2.27.1 (2022-06-02)

## Fixes and improvements

* Fix evaluation and scoring with language models

# v2.27.0 (2022-05-30)

## Changes

* Remove support for older TensorFlow versions 2.4 and 2.5
* Remove support for the deprecated Python version 3.6

## New features

* Support TensorFlow 2.9
* Integrate the new CTranslate2 converter to export more Transformer variants, including multi-feature models

## Fixes and improvements

* Fix error when loading the SavedModel of Transformer models with relative position representations
* Fix dataset error in inference with language models
* Fix batch size autotuning error with language models
* Fix division-by-zero error on some systems when the time since the last training log is too small

# v2.26.1 (2022-03-31)

## Fixes and improvements

* Fix documentation build error

# v2.26.0 (2022-03-31)

## New features

* Add learning rate schedule `InvSqrtDecay`
* Enable CTranslate2 conversion for models using GELU or Swish activations

## Fixes and improvements

* Fix inference error when using the `decoding_noise` parameter
* Clarify the inference log about buffered predictions

# v2.25.0 (2022-02-21)

## New features

* Support TensorFlow 2.8
* Add training flag `--continue_from_checkpoint` to simplify continuing the training in another model directory (to be used in combination with `--checkpoint_path`)

## Fixes and improvements

* Fix target unknowns replacement when the source has BOS or EOS tokens
* Update the length constraints in the Transformer automatic configuration to work with multiple sources
* Allow explicit configuration of the first argument of learning rate schedules (if not set, `learning_rate` is passed as the first argument)

# v2.24.0 (2021-12-17)

## New features

* Add experimental parameter `mask_loss_outliers` to mask high loss values considered as outliers (requires the `tensorflow-probability` module)

## Fixes and improvements

* Fix TensorFlow Lite conversion for models using a `PositionEmbedder` layer
* Automatically pad the weights of linear layers to enable Tensor Cores in mixed precision training
* Correctly set the CTranslate2 options `alignment_layer` and `alignment_heads` when converting models using the attention reduction `AVERAGE_LAST_LAYER`
* Raise an error if a training dataset or annotation file has an unexpected size
* Warn about duplicated tokens when loading vocabularies

# v2.23.0 (2021-11-15)

## Changes

* Remove support for TensorFlow 2.3

## New features

* Support TensorFlow 2.7
* Add a CTranslate2 exporter with "int8_float16" quantization

## Fixes and improvements

* Improve performance when applying the OpenNMT tokenization during training by vectorizing the dataset transformation
* Disable configuration merge for the fields `optimizer_params` and `decay_params`
* Enable the CTranslate2 integration when installing OpenNMT-tf on Windows
* Include PyYAML 6 in the supported versions

# v2.22.0 (2021-09-30)

## New features

* Support TensorFlow Lite conversion for Transformer models
* Make the options `model_dir` and `auto_config` available in both the command line and the configuration file
* Paths in the `data` configuration can now be relative to the model directory

## Fixes and improvements

* Fix encoding when writing sentences detokenized by an in-graph tokenizer
* Always output the tokenized target in scoring, even when a target tokenization is configured
* Enable the OpenNMT Tokenizer when installing OpenNMT-tf on Windows

# v2.21.0 (2021-08-30)

## New features

* Support TensorFlow 2.6
* Add tokenizer `SentencePieceTokenizer`, an in-graph SentencePiece tokenizer provided by [tensorflow-text](https://www.tensorflow.org/text)
* Add methods to facilitate training a `Model` instance:
  * `model.compute_training_loss`
  * `model.compute_gradients`
  * `model.train`
* Add `--output_file` argument to the `score` command

## Fixes and improvements

* Fix the `make_features` method of the inputters `WordEmbedder` and `SequenceRecordInputter` to work on a batch of elements
* Fix error when `SelfAttentionDecoder` is called without `memory_sequence_length`
* Fix `ConvEncoder` on variable-length inputs
* Support SacreBLEU 2.0

# v2.20.1 (2021-07-01)

## Fixes and improvements

* Fix missing environment variables in the child process when autotuning the batch size
* Fix error during evaluation when setting the inference parameter `n_best` > 1
* Fix error in the Python serving example when using TensorFlow 2.5
* Log some information about the input layer after initialization (vocabulary size, special tokens, etc.)
* Update the minimum required pyonmttok version to 1.26.4 to include the latest fixes

# v2.20.0 (2021-06-17)

## New features

* Update the minimum required CTranslate2 version to 2.0

## Fixes and improvements

* Set a timeout for each training attempt when autotuning the batch size
* Set `keep_checkpoint_max` to `average_last_checkpoints` if the latter value is larger
* Update the minimum required pyonmttok version to include the latest fixes

# v2.19.0 (2021-05-31)

## New features

* Support TensorFlow 2.5

## Fixes and improvements

* Fix dtype error in the RNN decoder when enabling mixed precision
* Pass the training flag to tokenizers to disable subword regularization during inference
* Update Sphinx from 2.3 to 3.5 to generate the documentation

# v2.18.1 (2021-04-27)

## Fixes and improvements

* Fix vocabulary update for models with shared embeddings
* Fix a compatibility issue with TensorFlow 2.5 for early users
* When all training attempts fail in batch size autotuning, log the error message of the last attempt

# v2.18.0 (2021-04-19)

## New features

* Add `TransformerBaseSharedEmbeddings` and `TransformerBigSharedEmbeddings` to the model catalog

## Fixes and improvements

* Fix loss normalization when using sentence weighting
* Tune the automatic batch size selection to avoid some out-of-memory errors
* Harmonize the training log format when using `onmt-main`
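The `length_bucket_width` rounding mentioned at the top of these notes can be illustrated with a minimal standalone sketch (the function below is mine, not the OpenNMT-tf API): padding every sequence length up to the next multiple of the bucket width keeps the set of distinct tensor shapes small, which is what reduces XLA recompilations.

```python
import math

def bucket_length(length, width):
    """Round a sequence length up to the next multiple of `width`.

    Hypothetical helper illustrating length bucketing: with width 8,
    lengths 9..16 all pad to 16, so XLA sees one shape instead of eight.
    """
    if width <= 1:
        return length
    return int(math.ceil(length / width)) * width

print([bucket_length(n, 8) for n in [3, 8, 13, 21]])  # -> [8, 8, 16, 24]
```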
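The `InvSqrtDecay` schedule (added in v2.26.0, extended with `initial_learning_rate` in v2.28.0) can be sketched in plain Python. This is one common form of inverse-square-root decay, assuming `initial_learning_rate` is the starting point of a linear warmup; the exact formula used by OpenNMT-tf may differ.

```python
import math

def inv_sqrt_decay(step, learning_rate, warmup_steps, initial_learning_rate=0.0):
    """Inverse-square-root learning rate decay with linear warmup (a sketch).

    Ramps linearly from `initial_learning_rate` to `learning_rate` over
    `warmup_steps`, then decays proportionally to 1/sqrt(step).
    """
    if step < warmup_steps:
        frac = step / warmup_steps
        return initial_learning_rate + (learning_rate - initial_learning_rate) * frac
    return learning_rate * math.sqrt(warmup_steps / step)

# With warmup_steps=4000, the rate peaks at step 4000 and halves by step 16000.
print(inv_sqrt_decay(16000, 1.0, 4000))  # -> 0.5
```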
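The checkpoint averaging behind `Runner.average_checkpoints` (extended in v2.29.0 to accept an explicit list of checkpoints) is a simple element-wise mean over variables. The sketch below models each checkpoint as a plain dict of floats; the real method operates on TensorFlow checkpoints, so this only illustrates the arithmetic, not the library call.

```python
def average_checkpoints(checkpoints):
    """Average variable values across a list of checkpoints.

    Illustration only: each "checkpoint" is a dict mapping variable
    names to values, and the result is their element-wise mean.
    """
    if not checkpoints:
        raise ValueError("need at least one checkpoint")
    names = checkpoints[0].keys()
    return {
        name: sum(ckpt[name] for ckpt in checkpoints) / len(checkpoints)
        for name in names
    }

print(average_checkpoints([{"w": 1.0, "b": 0.0}, {"w": 3.0, "b": 2.0}]))
# -> {'w': 2.0, 'b': 1.0}
```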
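The v2.28.0 fix for the batching rounding error restores a simple invariant: when the batch size is expressed in tokens, the padded size of a batch (number of examples times the longest sequence) must never exceed the budget. A minimal greedy sketch of token-based batching with that invariant (my illustration, not the OpenNMT-tf implementation):

```python
def batch_by_tokens(lengths, max_tokens):
    """Greedily group sequence lengths into batches of at most `max_tokens`
    padded tokens, where a batch costs len(batch) * max(batch).

    Sketch of the invariant only; real token-based batching also sorts
    or buckets by length to reduce padding waste.
    """
    batches, current = [], []
    for length in lengths:
        candidate = current + [length]
        if current and len(candidate) * max(candidate) > max_tokens:
            batches.append(current)
            candidate = [length]
        current = candidate
    if current:
        batches.append(current)
    return batches

print(batch_by_tokens([4, 5, 9, 2, 8], 20))  # -> [[4, 5], [9, 2], [8]]
```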
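The experimental `mask_loss_outliers` parameter (v2.24.0) zeroes out abnormally high per-example losses so a few corrupted training examples cannot dominate a gradient step. The sketch below uses a mean-plus-k-standard-deviations threshold; the actual criterion in OpenNMT-tf (which relies on `tensorflow-probability`) may differ.

```python
def mask_outlier_losses(losses, num_stddevs=3.0):
    """Zero out loss values more than `num_stddevs` standard deviations
    above the mean (a sketch of outlier masking, not the library's rule)."""
    mean = sum(losses) / len(losses)
    var = sum((x - mean) ** 2 for x in losses) / len(losses)
    threshold = mean + num_stddevs * var ** 0.5
    return [0.0 if x > threshold else x for x in losses]

# A single corrupted example with loss 100 is masked; normal losses survive.
print(mask_outlier_losses([1, 1, 1, 1, 100], num_stddevs=1.0))
# -> [1, 1, 1, 1, 0.0]
```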