[{"data":1,"prerenderedAt":-1},["ShallowReactive",2],{"similar-NVIDIA--NVFlare":3,"tool-NVIDIA--NVFlare":61},[4,18,26,36,44,53],{"id":5,"name":6,"github_repo":7,"description_zh":8,"stars":9,"difficulty_score":10,"last_commit_at":11,"category_tags":12,"status":17},4358,"openclaw","openclaw\u002Fopenclaw","OpenClaw 是一款专为个人打造的本地化 AI 助手，旨在让你在自己的设备上拥有完全可控的智能伙伴。它打破了传统 AI 助手局限于特定网页或应用的束缚，能够直接接入你日常使用的各类通讯渠道，包括微信、WhatsApp、Telegram、Discord、iMessage 等数十种平台。无论你在哪个聊天软件中发送消息，OpenClaw 都能即时响应，甚至支持在 macOS、iOS 和 Android 设备上进行语音交互，并提供实时的画布渲染功能供你操控。\n\n这款工具主要解决了用户对数据隐私、响应速度以及“始终在线”体验的需求。通过将 AI 部署在本地，用户无需依赖云端服务即可享受快速、私密的智能辅助，真正实现了“你的数据，你做主”。其独特的技术亮点在于强大的网关架构，将控制平面与核心助手分离，确保跨平台通信的流畅性与扩展性。\n\nOpenClaw 非常适合希望构建个性化工作流的技术爱好者、开发者，以及注重隐私保护且不愿被单一生态绑定的普通用户。只要具备基础的终端操作能力（支持 macOS、Linux 及 Windows WSL2），即可通过简单的命令行引导完成部署。如果你渴望拥有一个懂你",349277,3,"2026-04-06T06:32:30",[13,14,15,16],"Agent","开发框架","图像","数据工具","ready",{"id":19,"name":20,"github_repo":21,"description_zh":22,"stars":23,"difficulty_score":10,"last_commit_at":24,"category_tags":25,"status":17},3808,"stable-diffusion-webui","AUTOMATIC1111\u002Fstable-diffusion-webui","stable-diffusion-webui 是一个基于 Gradio 构建的网页版操作界面，旨在让用户能够轻松地在本地运行和使用强大的 Stable Diffusion 图像生成模型。它解决了原始模型依赖命令行、操作门槛高且功能分散的痛点，将复杂的 AI 绘图流程整合进一个直观易用的图形化平台。\n\n无论是希望快速上手的普通创作者、需要精细控制画面细节的设计师，还是想要深入探索模型潜力的开发者与研究人员，都能从中获益。其核心亮点在于极高的功能丰富度：不仅支持文生图、图生图、局部重绘（Inpainting）和外绘（Outpainting）等基础模式，还独创了注意力机制调整、提示词矩阵、负向提示词以及“高清修复”等高级功能。此外，它内置了 GFPGAN 和 CodeFormer 等人脸修复工具，支持多种神经网络放大算法，并允许用户通过插件系统无限扩展能力。即使是显存有限的设备，stable-diffusion-webui 也提供了相应的优化选项，让高质量的 AI 艺术创作变得触手可及。",162132,"2026-04-05T11:01:52",[14,15,13],{"id":27,"name":28,"github_repo":29,"description_zh":30,"stars":31,"difficulty_score":32,"last_commit_at":33,"category_tags":34,"status":17},1381,"everything-claude-code","affaan-m\u002Feverything-claude-code","everything-claude-code 是一套专为 AI 编程助手（如 Claude Code、Codex、Cursor 等）打造的高性能优化系统。它不仅仅是一组配置文件，而是一个经过长期实战打磨的完整框架，旨在解决 AI 
代理在实际开发中面临的效率低下、记忆丢失、安全隐患及缺乏持续学习能力等核心痛点。\n\n通过引入技能模块化、直觉增强、记忆持久化机制以及内置的安全扫描功能，everything-claude-code 能显著提升 AI 在复杂任务中的表现，帮助开发者构建更稳定、更智能的生产级 AI 代理。其独特的“研究优先”开发理念和针对 Token 消耗的优化策略，使得模型响应更快、成本更低，同时有效防御潜在的攻击向量。\n\n这套工具特别适合软件开发者、AI 研究人员以及希望深度定制 AI 工作流的技术团队使用。无论您是在构建大型代码库，还是需要 AI 协助进行安全审计与自动化测试，everything-claude-code 都能提供强大的底层支持。作为一个曾荣获 Anthropic 黑客大奖的开源项目，它融合了多语言支持与丰富的实战钩子（hooks），让 AI 真正成长为懂上",159267,2,"2026-04-17T11:29:14",[14,13,35],"语言模型",{"id":37,"name":38,"github_repo":39,"description_zh":40,"stars":41,"difficulty_score":32,"last_commit_at":42,"category_tags":43,"status":17},2271,"ComfyUI","Comfy-Org\u002FComfyUI","ComfyUI 是一款功能强大且高度模块化的视觉 AI 引擎，专为设计和执行复杂的 Stable Diffusion 图像生成流程而打造。它摒弃了传统的代码编写模式，采用直观的节点式流程图界面，让用户通过连接不同的功能模块即可构建个性化的生成管线。\n\n这一设计巧妙解决了高级 AI 绘图工作流配置复杂、灵活性不足的痛点。用户无需具备编程背景，也能自由组合模型、调整参数并实时预览效果，轻松实现从基础文生图到多步骤高清修复等各类复杂任务。ComfyUI 拥有极佳的兼容性，不仅支持 Windows、macOS 和 Linux 全平台，还广泛适配 NVIDIA、AMD、Intel 及苹果 Silicon 等多种硬件架构，并率先支持 SDXL、Flux、SD3 等前沿模型。\n\n无论是希望深入探索算法潜力的研究人员和开发者，还是追求极致创作自由度的设计师与资深 AI 绘画爱好者，ComfyUI 都能提供强大的支持。其独特的模块化架构允许社区不断扩展新功能，使其成为当前最灵活、生态最丰富的开源扩散模型工具之一，帮助用户将创意高效转化为现实。",108322,"2026-04-10T11:39:34",[14,15,13],{"id":45,"name":46,"github_repo":47,"description_zh":48,"stars":49,"difficulty_score":32,"last_commit_at":50,"category_tags":51,"status":17},6121,"gemini-cli","google-gemini\u002Fgemini-cli","gemini-cli 是一款由谷歌推出的开源 AI 命令行工具，它将强大的 Gemini 大模型能力直接集成到用户的终端环境中。对于习惯在命令行工作的开发者而言，它提供了一条从输入提示词到获取模型响应的最短路径，无需切换窗口即可享受智能辅助。\n\n这款工具主要解决了开发过程中频繁上下文切换的痛点，让用户能在熟悉的终端界面内直接完成代码理解、生成、调试以及自动化运维任务。无论是查询大型代码库、根据草图生成应用，还是执行复杂的 Git 操作，gemini-cli 都能通过自然语言指令高效处理。\n\n它特别适合广大软件工程师、DevOps 人员及技术研究人员使用。其核心亮点包括支持高达 100 万 token 的超长上下文窗口，具备出色的逻辑推理能力；内置 Google 搜索、文件操作及 Shell 命令执行等实用工具；更独特的是，它支持 MCP（模型上下文协议），允许用户灵活扩展自定义集成，连接如图像生成等外部能力。此外，个人谷歌账号即可享受免费的额度支持，且项目基于 Apache 2.0 
协议完全开源，是提升终端工作效率的理想助手。",100752,"2026-04-10T01:20:03",[52,13,15,14],"插件",{"id":54,"name":55,"github_repo":56,"description_zh":57,"stars":58,"difficulty_score":32,"last_commit_at":59,"category_tags":60,"status":17},4721,"markitdown","microsoft\u002Fmarkitdown","MarkItDown 是一款由微软 AutoGen 团队打造的轻量级 Python 工具，专为将各类文件高效转换为 Markdown 格式而设计。它支持 PDF、Word、Excel、PPT、图片（含 OCR）、音频（含语音转录）、HTML 乃至 YouTube 链接等多种格式的解析，能够精准提取文档中的标题、列表、表格和链接等关键结构信息。\n\n在人工智能应用日益普及的今天，大语言模型（LLM）虽擅长处理文本，却难以直接读取复杂的二进制办公文档。MarkItDown 恰好解决了这一痛点，它将非结构化或半结构化的文件转化为模型“原生理解”且 Token 效率极高的 Markdown 格式，成为连接本地文件与 AI 分析 pipeline 的理想桥梁。此外，它还提供了 MCP（模型上下文协议）服务器，可无缝集成到 Claude Desktop 等 LLM 应用中。\n\n这款工具特别适合开发者、数据科学家及 AI 研究人员使用，尤其是那些需要构建文档检索增强生成（RAG）系统、进行批量文本分析或希望让 AI 助手直接“阅读”本地文件的用户。虽然生成的内容也具备一定可读性，但其核心优势在于为机器",93400,"2026-04-06T19:52:38",[52,14],{"id":62,"github_repo":63,"name":64,"description_en":65,"description_zh":66,"ai_summary_zh":66,"readme_en":67,"readme_zh":68,"quickstart_zh":69,"use_case_zh":70,"hero_image_url":71,"owner_login":72,"owner_name":73,"owner_avatar_url":74,"owner_bio":75,"owner_company":76,"owner_location":76,"owner_email":76,"owner_twitter":76,"owner_website":77,"owner_url":78,"languages":79,"stars":118,"forks":119,"last_commit_at":120,"license":121,"difficulty_score":122,"env_os":123,"env_gpu":124,"env_ram":123,"env_deps":125,"category_tags":133,"github_topics":134,"view_count":32,"oss_zip_url":76,"oss_zip_packed_at":76,"status":17,"created_at":142,"updated_at":143,"faqs":144,"releases":174},8495,"NVIDIA\u002FNVFlare","NVFlare","NVIDIA Federated Learning Application Runtime Environment","NVFlare 是英伟达推出的开源联邦学习应用运行时环境，旨在帮助研究人员和数据科学家轻松将现有的机器学习工作流迁移到联邦学习范式。它核心解决了数据隐私与协作建模之间的矛盾，让多方能够在不共享原始数据的前提下，安全地联合训练高性能模型，特别适用于医疗、金融等对数据敏感性要求极高的场景。\n\n这款工具非常适合希望探索或落地联邦学习的开发者、算法研究员及平台架构师。NVFlare 具备显著的跨框架兼容性，不仅支持 PyTorch、TensorFlow 等主流深度学习库，也兼容 Scikit-learn、XGBoost 等传统机器学习算法。其独特的技术亮点在于提供了从本地模拟仿真到真实生产部署的无缝过渡能力，用户只需极少代码改动即可完成转换。此外，它内置了 FedAvg 
等多种经典联邦算法，并集成了差分隐私、同态加密等高级隐私保护机制，配合模块化设计和可视化管理仪表盘，让用户能灵活构建安全、健壮且可扩展的分布式协作系统。","\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FNVIDIA_NVFlare_readme_90ca51eedf8e.png\" alt=\"NVIDIA Logo\" width=\"200\">\n\n# NVIDIA FLARE\n\n[Website](https:\u002F\u002Fnvidia.github.io\u002FNVFlare) | [Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2210.13291) | [Blogs](https:\u002F\u002Fdeveloper.nvidia.com\u002Fblog\u002Ftag\u002Ffederated-learning) | [Talks & Papers](https:\u002F\u002Fnvflare.readthedocs.io\u002Fen\u002Fmain\u002Fpublications_and_talks.html) | [Webinars](https:\u002F\u002Fnvidia.github.io\u002FNVFlare\u002Fwebinars) | [Research](.\u002Fresearch\u002FREADME.md) | [Documentation](https:\u002F\u002Fnvflare.readthedocs.io\u002Fen\u002Fmain)\n\n[![Blossom-CI](https:\u002F\u002Fgithub.com\u002FNVIDIA\u002Fnvflare\u002Fworkflows\u002FBlossom-CI\u002Fbadge.svg?branch=main)](https:\u002F\u002Fgithub.com\u002FNVIDIA\u002Fnvflare\u002Factions)\n[![documentation](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FNVIDIA_NVFlare_readme_13d664e1afd7.png)](https:\u002F\u002Fnvflare.readthedocs.io\u002Fen\u002Fmain\u002F?badge=main)\n[![license](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FLicense-Apache%202.0-brightgreen.svg)](.\u002FLICENSE)\n[![pypi](https:\u002F\u002Fbadge.fury.io\u002Fpy\u002Fnvflare.svg)](https:\u002F\u002Fbadge.fury.io\u002Fpy\u002Fnvflare)\n[![pyversion](https:\u002F\u002Fimg.shields.io\u002Fpypi\u002Fpyversions\u002Fnvflare.svg)](https:\u002F\u002Fbadge.fury.io\u002Fpy\u002Fnvflare)\n[![downloads](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FNVIDIA_NVFlare_readme_3fe80ca5cd92.png)](https:\u002F\u002Fpepy.tech\u002Fproject\u002Fnvflare)\n[![Ask DeepWiki](https:\u002F\u002Fdeepwiki.com\u002Fbadge.svg)](https:\u002F\u002Fdeepwiki.com\u002FNVIDIA\u002FNVFlare)\n\n[NVIDIA FLARE](https:\u002F\u002Fnvidia.github.io\u002FNVFlare\u002F) (**NV**IDIA **F**ederated **L**earning **A**pplication **R**untime **E**nvironment)\nis 
a domain-agnostic, open-source, extensible Python SDK that allows researchers and data scientists to adapt existing ML\u002FDL workflows to a federated paradigm.\nIt enables platform developers to build a secure, privacy-preserving offering for a distributed multi-party collaboration.\n\n## Features\nFLARE is built on a componentized architecture that allows you to take federated learning workloads\nfrom research and simulation to real-world production deployment.\n\nApplication Features\n* Support both deep learning and traditional machine learning algorithms (e.g., PyTorch, TensorFlow, scikit-learn, XGBoost, etc.)\n* Support horizontal and vertical federated learning\n* Built-in Federated Learning algorithms (e.g., FedAvg, FedProx, FedOpt, Scaffold, Ditto, etc.)\n* Support multiple server and client-controlled training workflows (e.g., scatter & gather, cyclic) and validation workflows (global model evaluation, cross-site validation)\n* Support both data analytics (federated statistics) and machine learning lifecycle management\n* Privacy preservation with differential privacy, homomorphic encryption, private set intersection (PSI)\n\nFrom Simulation to Real-World\n* FLARE Client API to transition seamlessly from ML\u002FDL to FL with minimal code changes\n* Simulator and POC mode for rapid development and prototyping\n* Fully customizable and extensible components with modular design\n* Deployment on cloud and on-premise\n* Dashboard for project management and deployment\n* Security enforcement through federated authorization and privacy policy\n* Built-in support for system resiliency and fault tolerance\n\n> _Take a look at [NVIDIA FLARE Overview](https:\u002F\u002Fnvflare.readthedocs.io\u002Fen\u002Fmain\u002Fflare_overview.html) for a complete overview, and [What's New](https:\u002F\u002Fnvflare.readthedocs.io\u002Fen\u002Fmain\u002Fwhats_new.html) for the latest changes._\n\n## Installation\nTo install the [current 
release](https:\u002F\u002Fpypi.org\u002Fproject\u002Fnvflare\u002F):\n```\n$ python -m pip install nvflare\n```\n\nFor detailed installation please refer to [NVIDIA FLARE installation](https:\u002F\u002Fnvflare.readthedocs.io\u002Fen\u002Fmain\u002Finstallation.html).\n\n## Getting Started\n\n* To get started, refer to the [Quick Start](https:\u002F\u002Fnvflare.readthedocs.io\u002Fen\u002Fmain\u002Fquickstart.html) documentation\n\n* Structured, self-paced learning is available through curated tutorials and training paths on the website.\n  * DLI courses:\n    * https:\u002F\u002Flearn.nvidia.com\u002Fcourses\u002Fcourse-detail?course_id=course-v1:DLI+S-FX-28+V1\n    * https:\u002F\u002Flearn.nvidia.com\u002Fcourses\u002Fcourse-detail?course_id=course-v1:DLI+S-FX-29+V1\n* Visit the [developer portal](https:\u002F\u002Fdeveloper.nvidia.com\u002Fflare).\n\n## Community\n\nWe welcome community contributions! Please refer to the [contributing guidelines](.\u002FCONTRIBUTING.md) for more details.\n\nAsk and answer questions, share ideas, and engage with other community members at [NVFlare Discussions](https:\u002F\u002Fgithub.com\u002FNVIDIA\u002FNVFlare\u002Fdiscussions).\n\n## Related Talks and Publications\n\nTake a look at our growing list of [talks and publications](https:\u002F\u002Fnvflare.readthedocs.io\u002Fen\u002Fmain\u002Fpublications_and_talks.html), and [technical blogs](https:\u002F\u002Fdeveloper.nvidia.com\u002Fblog\u002Ftag\u002Ffederated-learning) related to NVIDIA FLARE.\n\n\n## License\n\nNVIDIA FLARE is released under an [Apache 2.0 license](.\u002FLICENSE).\n","\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FNVIDIA_NVFlare_readme_90ca51eedf8e.png\" alt=\"NVIDIA Logo\" width=\"200\">\n\n# NVIDIA FLARE\n\n[官网](https:\u002F\u002Fnvidia.github.io\u002FNVFlare) | [论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F2210.13291) | [博客](https:\u002F\u002Fdeveloper.nvidia.com\u002Fblog\u002Ftag\u002Ffederated-learning) | 
[演讲与论文](https:\u002F\u002Fnvflare.readthedocs.io\u002Fen\u002Fmain\u002Fpublications_and_talks.html) | [网络研讨会](https:\u002F\u002Fnvidia.github.io\u002FNVFlare\u002Fwebinars) | [研究](.\u002Fresearch\u002FREADME.md) | [文档](https:\u002F\u002Fnvflare.readthedocs.io\u002Fen\u002Fmain)\n\n[![Blossom-CI](https:\u002F\u002Fgithub.com\u002FNVIDIA\u002Fnvflare\u002Fworkflows\u002FBlossom-CI\u002Fbadge.svg?branch=main)](https:\u002F\u002Fgithub.com\u002FNVIDIA\u002Fnvflare\u002Factions)\n[![文档](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FNVIDIA_NVFlare_readme_13d664e1afd7.png)](https:\u002F\u002Fnvflare.readthedocs.io\u002Fen\u002Fmain\u002F?badge=main)\n[![许可证](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FLicense-Apache%202.0-brightgreen.svg)](.\u002FLICENSE)\n[![PyPI](https:\u002F\u002Fbadge.fury.io\u002Fpy\u002Fnvflare.svg)](https:\u002F\u002Fbadge.fury.io\u002Fpy\u002Fnvflare)\n[![Python版本](https:\u002F\u002Fimg.shields.io\u002Fpypi\u002Fpyversions\u002Fnvflare.svg)](https:\u002F\u002Fbadge.fury.io\u002Fpy\u002Fnvflare)\n[![下载量](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FNVIDIA_NVFlare_readme_3fe80ca5cd92.png)](https:\u002F\u002Fpepy.tech\u002Fproject\u002Fnvflare)\n[![Ask DeepWiki](https:\u002F\u002Fdeepwiki.com\u002Fbadge.svg)](https:\u002F\u002Fdeepwiki.com\u002FNVIDIA\u002FNVFlare)\n\n[NVIDIA FLARE](https:\u002F\u002Fnvidia.github.io\u002FNVFlare\u002F) (**NV**IDIA **F**ederated **L**earning **A**pplication **R**untime **E**nvironment)\n是一个领域无关、开源且可扩展的 Python SDK，使研究人员和数据科学家能够将现有的机器学习\u002F深度学习工作流迁移到联邦学习范式。它还支持平台开发者构建安全、保护隐私的分布式多方协作解决方案。\n\n## 特性\nFLARE 基于组件化架构，允许您将联邦学习任务从研究和仿真阶段过渡到实际生产部署。\n\n应用特性\n* 同时支持深度学习和传统机器学习算法（如 PyTorch、TensorFlow、scikit-learn、XGBoost 等）\n* 支持横向和纵向联邦学习\n* 内置联邦学习算法（如 FedAvg、FedProx、FedOpt、Scaffold、Ditto 等）\n* 支持多种服务器和客户端控制的训练流程（如散聚式、循环式）以及验证流程（全局模型评估、跨站点验证）\n* 同时支持数据分析（联邦统计）和机器学习生命周期管理\n* 通过差分隐私、同态加密、私有集合交集（PSI）等技术实现隐私保护\n\n从仿真到真实世界\n* FLARE 客户端 API 可以在 ML\u002FDL 和 FL 之间无缝切换，只需少量代码修改\n* 提供模拟器和 POC 模式，便于快速开发和原型设计\n* 
组件完全可定制且可扩展，采用模块化设计\n* 支持云端和本地部署\n* 提供项目管理和部署仪表盘\n* 通过联邦授权和隐私策略强化安全性\n* 内置系统弹性与容错机制\n\n> _请参阅 [NVIDIA FLARE 概览](https:\u002F\u002Fnvflare.readthedocs.io\u002Fen\u002Fmain\u002Fflare_overview.html) 获取完整介绍，以及 [新功能](https:\u002F\u002Fnvflare.readthedocs.io\u002Fen\u002Fmain\u002Fwhats_new.html) 查看最新更新。_\n\n## 安装\n要安装[当前版本](https:\u002F\u002Fpypi.org\u002Fproject\u002Fnvflare\u002F)：\n```\n$ python -m pip install nvflare\n```\n\n有关详细安装说明，请参阅[NVIDIA FLARE 安装指南](https:\u002F\u002Fnvflare.readthedocs.io\u002Fen\u002Fmain\u002Finstallation.html)。\n\n## 入门\n\n* 如需开始使用，请参考[快速入门](https:\u002F\u002Fnvflare.readthedocs.io\u002Fen\u002Fmain\u002Fquickstart.html)文档。\n\n* 网站上提供了结构化的自学课程和培训路径。\n  * DLI 课程：\n    * https:\u002F\u002Flearn.nvidia.com\u002Fcourses\u002Fcourse-detail?course_id=course-v1:DLI+S-FX-28+V1\n    * https:\u002F\u002Flearn.nvidia.com\u002Fcourses\u002Fcourse-detail?course_id=course-v1:DLI+S-FX-29+V1\n* 请访问[开发者门户](https:\u002F\u002Fdeveloper.nvidia.com\u002Fflare)。\n\n## 社区\n\n我们欢迎社区贡献！更多详情请参阅[贡献指南](.\u002FCONTRIBUTING.md)。\n\n您可以在[NVFlare 讨论区](https:\u002F\u002Fgithub.com\u002FNVIDIA\u002FNVFlare\u002Fdiscussions)提问、回答问题、分享想法并与社区成员互动。\n\n## 相关演讲与出版物\n\n请查看我们不断增长的[演讲与出版物列表](https:\u002F\u002Fnvflare.readthedocs.io\u002Fen\u002Fmain\u002Fpublications_and_talks.html)，以及与 NVIDIA FLARE 相关的[技术博客](https:\u002F\u002Fdeveloper.nvidia.com\u002Fblog\u002Ftag\u002Ffederated-learning)。\n\n\n## 许可证\n\nNVIDIA FLARE 采用[Apache 2.0 许可证](.\u002FLICENSE)发布。","# NVIDIA FLARE (NVFlare) 快速上手指南\n\nNVIDIA FLARE 是一个开源、可扩展的 Python SDK，旨在帮助研究人员和数据科学家将现有的机器学习\u002F深度学习工作流适配到联邦学习（Federated Learning）范式。它支持从模拟仿真到真实生产环境的无缝部署，内置多种联邦学习算法及隐私保护机制。\n\n## 1. 环境准备\n\n在开始之前，请确保您的开发环境满足以下要求：\n\n*   **操作系统**：Linux (推荐 Ubuntu 18.04\u002F20.04\u002F22.04) 或 macOS。Windows 用户建议使用 WSL2。\n*   **Python 版本**：Python 3.8, 3.9, 3.10 或 3.11。\n*   **前置依赖**：\n    *   已安装 `pip` 包管理工具。\n    *   若需运行深度学习示例，建议预先安装对应的深度学习框架（如 PyTorch 或 TensorFlow）。\n*   **网络环境**：确保可以访问 PyPI 源。国内用户可配置清华或阿里镜像源以加速下载。\n\n## 2. 
安装步骤\n\n### 标准安装\n使用 pip 直接安装最新稳定版：\n\n```bash\npython -m pip install nvflare\n```\n\n### 国内加速安装（推荐）\n如果您在中国大陆地区，建议使用国内镜像源以提高下载速度：\n\n```bash\npython -m pip install nvflare -i https:\u002F\u002Fpypi.tuna.tsinghua.edu.cn\u002Fsimple\n```\n\n### 验证安装\n安装完成后，可通过以下命令检查版本信息，确认安装成功：\n\n```bash\npython -c \"import nvflare; print(nvflare.__version__)\"\n```\n\n## 3. 基本使用\n\nNVFlare 的核心优势在于能够通过极少的代码改动，将现有的单机训练脚本转换为联邦学习流程。以下是最简单的入门路径：\n\n### 步骤一：准备现有模型\n假设您已经有一个基于 PyTorch 的标准训练脚本 `train.py`。NVFlare 不需要您重写整个训练逻辑，而是通过封装器（Wrapper）来适配。\n\n### 步骤二：使用模拟器快速原型开发\n在生产环境部署前，您可以使用 NVFlare 自带的 **Simulator** 在单机上模拟多客户端的联邦训练过程。这是最快的验证方式。\n\n准备一个包含 `config_fed_server.json` 和 `config_fed_client.json` 的作业（job）目录，或直接使用官方提供的示例作业，然后通过 `nvflare` 命令行运行模拟器：\n\n```bash\n# 使用模拟器运行联邦学习任务，模拟 2 个客户端、2 个线程\n# （.\u002Fjobs\u002Fhello-pt 为示例作业目录，请替换为实际路径）\nnvflare simulator -w .\u002Fmy_sim_workspace -n 2 -t 2 .\u002Fjobs\u002Fhello-pt\n```\n\n> **提示**：初学者可以直接克隆官方仓库中的 `examples` 目录，里面包含了针对 PyTorch、TensorFlow 和 Scikit-learn 的完整演示项目（如 `hello-pt`），直接运行其中的启动脚本即可观察效果。\n\n### 步骤三：过渡到真实部署\n当模拟测试通过后，只需将运行模式从 Simulator 切换为 POC (Proof of Concept) 模式或生产模式，即可在分布式环境中启动：\n\n1.  **初始化工作空间**（使用项目配置文件生成各站点的启动套件）：\n    ```bash\n    nvflare provision -p project.yml\n    ```\n2.  
**启动服务器与客户端**：\n    在生成的部署包中，分别在不同机器或终端执行启动脚本：\n    ```bash\n    # 在服务器端\n    .\u002Fstart.sh\n    \n    # 在客户端端\n    .\u002Fstart.sh\n    ```\n\n通过以上步骤，您即可完成从本地单机训练到分布式联邦学习的快速迁移。更多详细教程和进阶用法，请访问 [NVFlare 官方文档](https:\u002F\u002Fnvflare.readthedocs.io\u002Fen\u002Fmain)。","某跨国医疗集团联合三家医院研发肺癌 CT 影像诊断模型，需在严格保护患者隐私且数据不出院的前提下完成联合训练。\n\n### 没有 NVFlare 时\n- **数据孤岛难打破**：受限于医疗合规要求，各医院原始数据无法集中，传统集中式训练方案直接不可行。\n- **开发适配成本高**：团队需手动重写大量代码来模拟分布式通信，将现有的 PyTorch 流程强行改造为联邦学习架构，极易出错。\n- **隐私安全无保障**：缺乏内置的同态加密或差分隐私机制，传输模型梯度时存在患者信息泄露风险。\n- **部署运维复杂**：不同医院的网络环境和硬件配置各异，缺乏统一的管理控制台，故障排查和节点监控极其困难。\n\n### 使用 NVFlare 后\n- **无缝迁移工作流**：利用 FLARE Client API，团队仅需修改少量代码即可将原有深度学习流程平滑迁移至联邦范式，支持横向与纵向联邦学习。\n- **内置算法与安全**：直接调用内置的 FedAvg 等算法及同态加密组件，在确保梯度传输安全的同时，大幅缩短研发周期。\n- **仿真到生产一键切换**：通过模拟器快速验证原型，随后无缝切换至真实的跨院云边端部署，无需重构系统架构。\n- **可视化统一管理**：借助内置 Dashboard 实时监控各医院节点的训练状态与资源消耗，实现了对多中心协作的高效管控。\n\nNVFlare 让医疗机构在数据不出域的安全底线之上，以最低的开发成本实现了高质量的跨机构协同智能。","https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FNVIDIA_NVFlare_90ca51ee.png","NVIDIA","NVIDIA Corporation","https:\u002F\u002Foss.gittoolsai.com\u002Favatars\u002FNVIDIA_7dcf6000.png","",null,"https:\u002F\u002Fnvidia.com","https:\u002F\u002Fgithub.com\u002FNVIDIA",[80,84,88,92,96,99,103,107,110,114],{"name":81,"color":82,"percentage":83},"Python","#3572A5",92.2,{"name":85,"color":86,"percentage":87},"Kotlin","#A97BFF",1.8,{"name":89,"color":90,"percentage":91},"Astro","#ff5a03",1.7,{"name":93,"color":94,"percentage":95},"Jupyter Notebook","#DA5B0B",1.3,{"name":97,"color":98,"percentage":95},"C++","#f34b7d",{"name":100,"color":101,"percentage":102},"Swift","#F05138",0.8,{"name":104,"color":105,"percentage":106},"Shell","#89e051",0.3,{"name":108,"color":109,"percentage":106},"Objective-C++","#6866fb",{"name":111,"color":112,"percentage":113},"HTML","#e34c26",0.2,{"name":115,"color":116,"percentage":117},"CMake","#DA3434",0.1,919,248,"2026-04-16T16:32:19","Apache-2.0",4,"未说明","未说明（支持多种深度学习框架如 PyTorch、TensorFlow，具体 GPU 
需求取决于所选算法和模型）",{"notes":126,"python":127,"dependencies":128},"该工具为联邦学习运行时环境，支持横向和纵向联邦学习。内置多种算法（如 FedAvg, FedProx 等）及隐私保护技术（差分隐私、同态加密）。可通过 pip 直接安装。具体硬件资源需求取决于实际部署的机器学习工作负载规模。","3.8+",[129,130,131,132],"PyTorch","TensorFlow","scikit-learn","XGBoost",[14],[135,136,137,138,139,140,141],"python","decentralized","federated-analytics","federated-learning","pet","privacy-protection","federated-computing","2026-03-27T02:49:30.150509","2026-04-18T00:45:54.361345",[145,150,154,159,164,169],{"id":146,"question_zh":147,"answer_zh":148,"source_url":149},38023,"运行模拟器时遇到 \"unable to handle command: simulator due to: 'run_status'\" 错误怎么办？","该问题通常与语法或环境配置有关。维护者建议提交 PR 修复以使 TensorFlow 变为可选依赖。作为临时解决方案，可以尝试清理旧的工作空间并重新启动。如果问题依旧，请分享日志（log.txt）和输出文件夹结构（使用 tree 命令），以便进一步调查。此外，确保按照 README 的步骤正确执行，检查 Python 版本（推荐 3.10+）和 NVFlare 版本是否匹配。","https:\u002F\u002Fgithub.com\u002FNVIDIA\u002FNVFlare\u002Fissues\u002F1843",{"id":151,"question_zh":152,"answer_zh":153,"source_url":149},38024,"如何加速 NVFlare 的模拟训练过程？","可以通过以下几种方式加速训练：\n1. 使用 GPU 进行训练。\n2. 在数据加载时启用 pin_memory 选项。\n3. 增加数据加载的 worker 数量（num_workers）。\n这些设置可以显著减少数据读取和预处理的时间，从而加快整体训练进度。",{"id":155,"question_zh":156,"answer_zh":157,"source_url":158},38025,"XGBoost 垂直联邦学习（VFL）示例中如何进行预测？","目前的 XGBoost VFL 示例主要实现了训练功能。对于预测功能，如果遇到问题（如 ZeroDivisionError），可能是数据格式问题。有用户反馈通过手动移除 CSV 文件的表头行（例如使用 `df = df.iloc[1:]`）解决了 pandas 无法识别表头导致的错误。如果需要在分布式客户端上进行预测，可能需要参考源代码自行扩展预测逻辑，或关注官方后续更新的示例。","https:\u002F\u002Fgithub.com\u002FNVIDIA\u002FNVFlare\u002Fissues\u002F3336",{"id":160,"question_zh":161,"answer_zh":162,"source_url":163},38026,"启动服务器时遇到 \"SSL_ERROR_SSL: WRONG_VERSION_NUMBER\" 错误如何解决？","该错误通常表示 SSL 握手失败，可能是由于版本不匹配或配置文件残留导致。建议步骤如下：\n1. 删除旧的工作空间（workspace）目录，确保从头开始。\n2. 重新配置并启动 1 个服务器和 2 个客户端进行测试。\n3. 检查服务器和客户端的 log.txt 日志文件（位于 startup 文件夹中），确认 \"Run number\" 是否已设置。\n4. 
如果服务器进程意外终止，请检查端口占用情况并重启。","https:\u002F\u002Fgithub.com\u002FNVIDIA\u002FNVFlare\u002Fissues\u002F636",{"id":165,"question_zh":166,"answer_zh":167,"source_url":168},38027,"在 CIFAR-10 示例中使用 FedOpt 算法更换模型（如 MobileNetV2, ResNet18）后准确率不提升怎么办？","这是因为 FedOpt 算法与包含 Batch Normalization (BN) 层的模型存在兼容性问题。BN 层的权重会被平均，但运行统计量（running stats，如 running_mean 和 running_var）不会，导致模型行为异常。\n解决方案：\n1. 推荐使用不包含 BN 层而使用 Group Normalization (GN) 的模型（如某些 Vision Transformer 或特定配置的 ResNet）。\n2. 或者采用变通方法：使用 FedOpt 优化全局可训练参数，同时使用 FedAvg 更新 BN 统计量。\n3. 避免直接使用带有 BN 层的预训练 torchvision 模型，除非进行上述调整。","https:\u002F\u002Fgithub.com\u002FNVIDIA\u002FNVFlare\u002Fissues\u002F1718",{"id":170,"question_zh":171,"answer_zh":172,"source_url":173},38028,"如何在 NVIDIA Jetson Nano 等嵌入式设备上部署 NVFlare 真实客户端？","目前官方文档中关于将示例（如 CIFAR-10）迁移到嵌入式设备（如 Jetson Nano）的具体步骤较少。一般流程包括：\n1. 在嵌入式设备上安装与环境匹配的 NVFlare 版本。\n2. 配置客户端的 startup 脚本，指向正确的服务器地址。\n3. 根据设备算力调整模型大小和数据加载策略（如减小 batch size）。\n建议参考官方提供的真实世界示例代码结构，手动修改配置文件以适应嵌入式环境的资源限制。如有具体报错，需查看客户端日志进行调试。","https:\u002F\u002Fgithub.com\u002FNVIDIA\u002FNVFlare\u002Fissues\u002F1931",[175,180,185,190,195,200,205,210,215,220,225,230,235,240,245,250,255,260,265,270],{"id":176,"version":177,"summary_zh":178,"released_at":179},306215,"2.7.2","### 2.7.2 版本贡献者（按时间顺序）\n\n  本次发布共提交 213 个 Pull Request\n\n  - [Holger Roth](https:\u002F\u002Fgithub.com\u002Fholgerroth) — 52 个 PR\n  - [Chester Chen](https:\u002F\u002Fgithub.com\u002Fchesterxgchen) — 40 个 PR\n  - [Yuan-Ting Hsieh (謝沅廷)](https:\u002F\u002Fgithub.com\u002FYuanTingHsieh) — 39 个 PR\n  - [Ziyue Xu](https:\u002F\u002Fgithub.com\u002FZiyueXu77) — 26 个 PR\n  - [nvkevlu](https:\u002F\u002Fgithub.com\u002Fnvkevlu) — 21 个 PR\n  - [Zhihong Zhang](https:\u002F\u002Fgithub.com\u002Fnvidianz) — 13 个 PR\n  - [Peter Cnudde](https:\u002F\u002Fgithub.com\u002Fpcnudde) — 11 个 PR\n  - [Isaac Yang](https:\u002F\u002Fgithub.com\u002FIsaacYangSLA) — 10 个 PR\n  - [GeorgeWang-nv](https:\u002F\u002Fgithub.com\u002FGeorgeWang-nv) — 1 个 PR\n\n  ---\n\n  ### 🎉 
欢迎首次贡献者！\n\n  - [Peter Cnudde](https:\u002F\u002Fgithub.com\u002Fpcnudde) — 11 个 PR\n  - [GeorgeWang-nv](https:\u002F\u002Fgithub.com\u002FGeorgeWang-nv) — 1 个 PR\n\n  ---\n\n  ### 功能亮点\n\n  **Job Recipe API — 正式可用**\n\n  在 2.7.0 版本中作为技术预览推出的 Job Recipe API，现已正式可用。NVFlare 代码库中的几乎所有示例均已迁移到使用 Job Recipes。现提供开箱即用的配方，适用于 FedAvg、FedProx、FedOpt、SCAFFOLD、循环学习、XGBoost、联邦统计、FedEval、跨站点评估、PSI、Flower、Swarm Learning 和边缘联邦学习等场景。同一份配方可在 SimEnv、PoCEnv 和 ProdEnv 环境中无缝运行。\n\n  **大模型训练的内存管理**\n\n  FLARE 2.7.2 提供了从 FL 服务器到客户端训练子进程各层级的完整内存管理栈。三项互补功能协同工作：\n\n  - **TensorDownloader**：针对 PyTorch 大模型的零代码内存优化。张量以 safetensors 格式逐步序列化，并通过拉取式下载服务进行分发。在所有训练模式下均可降低服务器端和客户端的峰值内存占用。\n  - **CJ 进程中的零拷贝中继（直通模式）**：对于子进程模式的客户端（`ClientAPILauncherExecutor`），CJ 中继现在使用轻量级的 `LazyDownloadRef` 占位符，而非直接加载完整的张量，从而使 CJ 的内存占用与模型大小无关。这一改进对 LLM 规模的模型（7B–70B 参数）尤为显著。\n  - **服务器与客户端内存清理**：在服务器端以及完整的客户端流水线（CJ 中继 + 训练子进程）上，自动定期执行 `gc.collect()` 和 `malloc_trim()`，可有效防止长时间多轮作业中 RSS 内存无限制增长。同时包含用于 GPU 内存的 `torch.cuda.empty_cache()`。可通过 `client_memory_gc_rounds` 配置，无需修改任何训练脚本。\n\n  在一个 5 GB 的 PyTorch 模型基准测试中，采用 FedAvg 算法并连接 4 个客户端时，服务器端峰值内存降低了 **60–75%**（进程内模式），而客户端峰值内存则降低了 **70–82%**（子进程模式）。\n\n  **可靠性与性能**\n\n  针对 F3 流式传输层及大模型子进程处理进行了重点加固：\n\n  - **避免队头阻塞**：引入有限的发送超时机制、ACK 进度监控守护程序，以及可选的…","2026-03-20T20:51:32",{"id":181,"version":182,"summary_zh":183,"released_at":184},306216,"2.7.2rc20","## 变更内容\n* [2.7] @nvidianz 在 https:\u002F\u002Fgithub.com\u002FNVIDIA\u002FNVFlare\u002Fpull\u002F4320 中将 monai 类型加入白名单\n* [2.7] @nvidianz 在 https:\u002F\u002Fgithub.com\u002FNVIDIA\u002FNVFlare\u002Fpull\u002F4325 中向 decomposer_for_large_object.rst 添加 FOBS 安全章节 [skip ci]\n\n\n**完整变更日志**: https:\u002F\u002Fgithub.com\u002FNVIDIA\u002FNVFlare\u002Fcompare\u002F2.7.2rc19...2.7.2rc20","2026-03-19T00:08:39",{"id":186,"version":187,"summary_zh":188,"released_at":189},306217,"2.7.2rc19","## 变更内容\n* [2.7] 从发布说明中删除安全修复的实现细节 [skip ci]，由 @chesterxgchen 在 https:\u002F\u002Fgithub.com\u002FNVIDIA\u002FNVFlare\u002Fpull\u002F4332 中完成\n* [2.7] 修复 
MetricRelay 多轮管道的生命周期问题及因无效负载导致的崩溃，由 @YuanTingHsieh 在 https:\u002F\u002Fgithub.com\u002FNVIDIA\u002FNVFlare\u002Fpull\u002F4334 中完成\n\n\n**完整变更日志**: https:\u002F\u002Fgithub.com\u002FNVIDIA\u002FNVFlare\u002Fcompare\u002F2.7.2rc18...2.7.2rc19","2026-03-18T23:22:36",{"id":191,"version":192,"summary_zh":193,"released_at":194},306218,"2.7.2rc18","## 变更内容\n* [2.7] 2.7.2 版本发布说明中优化了内存相关部分，并添加了基准测试表格 [skip-ci]，由 @chesterxgchen 在 https:\u002F\u002Fgithub.com\u002FNVIDIA\u002FNVFlare\u002Fpull\u002F4323 中完成\n* [2.7] 清除每轮之间的管道状态，以防止出现过时的 PEER_GONE 状态，由 @YuanTingHsieh 在 https:\u002F\u002Fgithub.com\u002FNVIDIA\u002FNVFlare\u002Fpull\u002F4326 中完成\n\n\n**完整变更日志**: https:\u002F\u002Fgithub.com\u002FNVIDIA\u002FNVFlare\u002Fcompare\u002F2.7.2rc17...2.7.2rc18","2026-03-18T02:28:10",{"id":196,"version":197,"summary_zh":198,"released_at":199},306219,"2.7.2rc17","## 变更内容\n* [2.7] 修复 SignatureBuilder：仅在 CC 模式下对根节点进行签名，非 CC 模式下恢复为 startup+local 方式，由 @chesterxgchen 在 https:\u002F\u002Fgithub.com\u002FNVIDIA\u002FNVFlare\u002Fpull\u002F4318 中完成\n* [2.7] 修复 FedAvg 自定义聚合器丢失 fl_ctx CURRENT_ROUND 的问题，由 @ZiyueXu77 在 https:\u002F\u002Fgithub.com\u002FNVIDIA\u002FNVFlare\u002Fpull\u002F4317 中完成\n* [2.7] 更新 2.7.2 版本发布说明，添加安全修复的 CWE ID [skip ci]，由 @chesterxgchen 在 https:\u002F\u002Fgithub.com\u002FNVIDIA\u002FNVFlare\u002Fpull\u002F4319 中完成\n\n\n**完整变更日志**: https:\u002F\u002Fgithub.com\u002FNVIDIA\u002FNVFlare\u002Fcompare\u002F2.7.2rc16...2.7.2rc17","2026-03-16T16:38:30",{"id":201,"version":202,"summary_zh":203,"released_at":204},306220,"2.7.2rc16","## 变更内容\n* [2.7] 由 @YuanTingHsieh 在 https:\u002F\u002Fgithub.com\u002FNVIDIA\u002FNVFlare\u002Fpull\u002F4312 中修复接收端 PASS_THROUGH 通道与 GET_TASK 不匹配的问题\n* [2.7] 由 @YuanTingHsieh 在 https:\u002F\u002Fgithub.com\u002FNVIDIA\u002FNVFlare\u002Fpull\u002F4315 中修复认证测试问题\n* [2.7] 由 @YuanTingHsieh 在 https:\u002F\u002Fgithub.com\u002FNVIDIA\u002FNVFlare\u002Fpull\u002F4314 中修复 Swarm 死锁问题：_executing 保护机制阻止事务中途替换管道处理器\n* [2.7] 由 @chesterxgchen 在 
https:\u002F\u002Fgithub.com\u002FNVIDIA\u002FNVFlare\u002Fpull\u002F4316 中将 2.7.2 版本发布说明精简为高层次功能亮点 [跳过 CI] \n\n\n**完整变更日志**: https:\u002F\u002Fgithub.com\u002FNVIDIA\u002FNVFlare\u002Fcompare\u002F2.7.2rc15...2.7.2rc16","2026-03-13T22:44:59",{"id":206,"version":207,"summary_zh":208,"released_at":209},306221,"2.7.2rc15","## 变更内容\n* [2.7] 添加了由 @nvidianz 在 https:\u002F\u002Fgithub.com\u002FNVIDIA\u002FNVFlare\u002Fpull\u002F4295 中提出的修复，解决了因未验证 type_name 导致的 RCE 漏洞。\n* @nvidianz 在 https:\u002F\u002Fgithub.com\u002FNVIDIA\u002FNVFlare\u002Fpull\u002F4298 中添加了 FOBS 缺失的类型定义。\n* [2.7] @YuanTingHsieh 在 https:\u002F\u002Fgithub.com\u002FNVIDIA\u002FNVFlare\u002Fpull\u002F4296 中修复了 FilePipe 的 TOCTOU 问题。\n* [2.7] @YuanTingHsieh 在 https:\u002F\u002Fgithub.com\u002FNVIDIA\u002FNVFlare\u002Fpull\u002F4297 中修复了 pt_init_client 的 data_path，以避免 CIFAR10 并发下载导致的数据损坏。\n* [2.7] 数据集许可证；更新 swarm 说明文档 [跳过 CI]，由 @holgerroth 在 https:\u002F\u002Fgithub.com\u002FNVIDIA\u002FNVFlare\u002Fpull\u002F4303 中完成。\n* [2.7] 更新文档，加入 CVM 基础镜像的构建说明，由 @nvidianz 在 https:\u002F\u002Fgithub.com\u002FNVIDIA\u002FNVFlare\u002Fpull\u002F4306 中完成。\n* [2.7]: 由 @IsaacYangSLA 在 https:\u002F\u002Fgithub.com\u002FNVIDIA\u002FNVFlare\u002Fpull\u002F4301 中对启动套件的根目录进行签名。\n* [2.7]: 由 @IsaacYangSLA 在 https:\u002F\u002Fgithub.com\u002FNVIDIA\u002FNVFlare\u002Fpull\u002F4308 中修复了文档与代码之间的不一致问题。\n* [2.7] 由 @YuanTingHsieh 在 https:\u002F\u002Fgithub.com\u002FNVIDIA\u002FNVFlare\u002Fpull\u002F4309 中实现了接收端针对 ext-process CJ 转发路径的 PASS_THROUGH 功能。\n* [2.7] 将 FilePipe 的默认心跳超时值提高至 600 秒，由 @YuanTingHsieh 在 https:\u002F\u002Fgithub.com\u002FNVIDIA\u002FNVFlare\u002Fpull\u002F4307 中完成。\n\n\n**完整变更日志**: https:\u002F\u002Fgithub.com\u002FNVIDIA\u002FNVFlare\u002Fcompare\u002F2.7.2rc14...2.7.2rc15","2026-03-13T05:47:05",{"id":211,"version":212,"summary_zh":213,"released_at":214},306222,"2.7.2rc14","## 变更内容\n* [2.7] 由 @ZiyueXu77 在 https:\u002F\u002Fgithub.com\u002FNVIDIA\u002FNVFlare\u002Fpull\u002F4273 中向 ET 类添加 device_wait_timeout\n* 
[2.7] 由 @YuanTingHsieh 在 https:\u002F\u002Fgithub.com\u002FNVIDIA\u002FNVFlare\u002Fpull\u002F4249 中对 CI 的 install_requirements 进行增强\n* [2.7] 脚手架更新：增加防御性检查；更新文档字符串，由 @holgerroth 在 https:\u002F\u002Fgithub.com\u002FNVIDIA\u002FNVFlare\u002Fpull\u002F4279 中完成\n* [2.7] 2.7.2 版本文档更新 [跳过 CI]，由 @chesterxgchen 在 https:\u002F\u002Fgithub.com\u002FNVIDIA\u002FNVFlare\u002Fpull\u002F4278 中完成\n* [2.7] 修复服务器下载前的外部终止问题，由 @YuanTingHsieh 在 https:\u002F\u002Fgithub.com\u002FNVIDIA\u002FNVFlare\u002Fpull\u002F4275 中完成\n\n\n**完整变更日志**: https:\u002F\u002Fgithub.com\u002FNVIDIA\u002FNVFlare\u002Fcompare\u002F2.7.2rc13...2.7.2rc14","2026-03-09T23:39:46",{"id":216,"version":217,"summary_zh":218,"released_at":219},306223,"2.7.2rc13","## 变更内容\n* [2.7] Fedbuff 文档及检查更新，由 @ZiyueXu77 在 https:\u002F\u002Fgithub.com\u002FNVIDIA\u002FNVFlare\u002Fpull\u002F4262 中完成\n* [2.7] 针对不兼容的 flwr CLI 版本集成 Guard Flower 保护机制，由 @holgerroth 在 https:\u002F\u002Fgithub.com\u002FNVIDIA\u002FNVFlare\u002Fpull\u002F4253 中完成\n* [2.7] 澄清 TensorFlow 示例文档，要求使用继承自 Keras 的模型 [文档]，由 @pcnudde 在 https:\u002F\u002Fgithub.com\u002FNVIDIA\u002FNVFlare\u002Fpull\u002F4264 中完成\n* [2.7] 改进 FedAvg HE TenSEAL 上下文配置指南及验证流程，由 @holgerroth 在 https:\u002F\u002Fgithub.com\u002FNVIDIA\u002FNVFlare\u002Fpull\u002F4258 中完成\n* [2.7] 改进 et 导入功能，由 @YuanTingHsieh 在 https:\u002F\u002Fgithub.com\u002FNVIDIA\u002FNVFlare\u002Fpull\u002F4259 中完成\n* [2.7] 更新示例 README 文件 [跳过 CI]，由 @YuanTingHsieh 在 https:\u002F\u002Fgithub.com\u002FNVIDIA\u002FNVFlare\u002Fpull\u002F4272 中完成\n* [2.7] 修复 RC12 Swarm ext-process 中的 bug：msg_root 竞争条件、CSE 模型加载问题、验证卡死以及 round_timeout 问题，由 @chesterxgchen 在 https:\u002F\u002Fgithub.com\u002FNVIDIA\u002FNVFlare\u002Fpull\u002F4270 中完成\n\n\n**完整变更日志**: https:\u002F\u002Fgithub.com\u002FNVIDIA\u002FNVFlare\u002Fcompare\u002F2.7.2rc12...2.7.2rc13","2026-03-06T00:24:29",{"id":221,"version":222,"summary_zh":223,"released_at":224},306224,"2.7.2rc12","## 变更内容\n* [2.7] 由 @holgerroth 在 
https:\u002F\u002Fgithub.com\u002FNVIDIA\u002FNVFlare\u002Fpull\u002F4235 中修复 FedOpt 参数定义\n* [2.7] 为保持作业 API 的一致性，在组件配置中接受 'class_path'，由 @holgerroth 在 https:\u002F\u002Fgithub.com\u002FNVIDIA\u002FNVFlare\u002Fpull\u002F4239 中完成\n* 由 @holgerroth 在 https:\u002F\u002Fgithub.com\u002FNVIDIA\u002FNVFlare\u002Fpull\u002F4242 中对类路径组件构建器进行增强\n* [2.7] 检查完整的 Apache 许可证头，并规范不一致的许可证头，由 @holgerroth 在 https:\u002F\u002Fgithub.com\u002FNVIDIA\u002FNVFlare\u002Fpull\u002F4243 中完成\n* [2.7] 修复 1-18：大模型子进程内存、反向\u002F正向 PASS_THROUGH、下载门控、MSG_ROOT_TTL、LazyRef 本地聚合、日志记录、最小客户端数、诊断功能等问题，由 @chesterxgchen 在 https:\u002F\u002Fgithub.com\u002FNVIDIA\u002FNVFlare\u002Fpull\u002F4247 中完成\n* [2.7] 添加缺失的 submit_model 执行器，由 @nvkevlu 在 https:\u002F\u002Fgithub.com\u002FNVIDIA\u002FNVFlare\u002Fpull\u002F4254 中完成\n* [2.7] 修复 XGB 循环问题，由 @ZiyueXu77 在 https:\u002F\u002Fgithub.com\u002FNVIDIA\u002FNVFlare\u002Fpull\u002F4255 中完成\n* [2.7] 使 SimEnv 兼容严格的 simulator_run API，由 @YuanTingHsieh 在 https:\u002F\u002Fgithub.com\u002FNVIDIA\u002FNVFlare\u002Fpull\u002F4250 中完成\n* [2.7] 跳过不支持的指标进行聚合（#4223），由 @holgerroth 在 https:\u002F\u002Fgithub.com\u002FNVIDIA\u002FNVFlare\u002Fpull\u002F4252 中完成\n* [2.7] 修复 swarm 配方模型问题，由 @YuanTingHsieh 在 https:\u002F\u002Fgithub.com\u002FNVIDIA\u002FNVFlare\u002Fpull\u002F4260 中完成\n\n\n**完整变更日志**：https:\u002F\u002Fgithub.com\u002FNVIDIA\u002FNVFlare\u002Fcompare\u002F2.7.2rc11...2.7.2rc12","2026-03-04T03:19:59",{"id":226,"version":227,"summary_zh":228,"released_at":229},306225,"2.7.2rc11","## What's Changed\r\n* [2.7] pin fastdigest==0.4.0 due to API changes by @chesterxgchen in https:\u002F\u002Fgithub.com\u002FNVIDIA\u002FNVFlare\u002Fpull\u002F4217\r\n* [2.7] Skip unsupported metrics for aggregation by @holgerroth in https:\u002F\u002Fgithub.com\u002FNVIDIA\u002FNVFlare\u002Fpull\u002F4223\r\n* [2.7] Fix global model selection by @holgerroth in https:\u002F\u002Fgithub.com\u002FNVIDIA\u002FNVFlare\u002Fpull\u002F4222\r\n* [2.7] Pin pandas\u003C3.0 and fix pandas 3.x 
compatibility in federated statistics by @chesterxgchen in https:\u002F\u002Fgithub.com\u002FNVIDIA\u002FNVFlare\u002Fpull\u002F4227\r\n* [2.7] Fix recipe API bug list and harden recipe behavior by @chesterxgchen in https:\u002F\u002Fgithub.com\u002FNVIDIA\u002FNVFlare\u002Fpull\u002F4228\r\n* [2.7] Fix subprocess converter wiring, swarm learning bugs, and recipe enhancements by @chesterxgchen in https:\u002F\u002Fgithub.com\u002FNVIDIA\u002FNVFlare\u002Fpull\u002F4225\r\n* [2.7]: Fix a security issue on FileRetriever by @IsaacYangSLA in https:\u002F\u002Fgithub.com\u002FNVIDIA\u002FNVFlare\u002Fpull\u002F4230\r\n* [2.7] Fix hierarchical FL startup failures: deployment timeouts, selective client exclusion, and dead-detection debounce by @chesterxgchen in https:\u002F\u002Fgithub.com\u002FNVIDIA\u002FNVFlare\u002Fpull\u002F4209\r\n* [2.7] Fix client-side RSS memory growth and subprocess logging gap by @chesterxgchen in https:\u002F\u002Fgithub.com\u002FNVIDIA\u002FNVFlare\u002Fpull\u002F4231\r\n* [2.7] Update 2.7.2 release notes: streaming hardening, memory management, hierarchical startup stability [skip ci] by @chesterxgchen in https:\u002F\u002Fgithub.com\u002FNVIDIA\u002FNVFlare\u002Fpull\u002F4218\r\n\r\n\r\n**Full Changelog**: https:\u002F\u002Fgithub.com\u002FNVIDIA\u002FNVFlare\u002Fcompare\u002F2.7.2rc10...2.7.2rc11","2026-02-25T01:02:49",{"id":231,"version":232,"summary_zh":233,"released_at":234},306226,"2.7.2rc10","## What's Changed\r\n* [2.7] Fix hello-numpy-cross-val example by @nvkevlu in https:\u002F\u002Fgithub.com\u002FNVIDIA\u002FNVFlare\u002Fpull\u002F4168\r\n* [2.7] Add end-to-end download starvation test for stream pool fix by @chesterxgchen in https:\u002F\u002Fgithub.com\u002FNVIDIA\u002FNVFlare\u002Fpull\u002F4172\r\n* [2.7] Fix numpy cross val path of results by @nvkevlu in https:\u002F\u002Fgithub.com\u002FNVIDIA\u002FNVFlare\u002Fpull\u002F4173\r\n* [2.7] update class_path for llm example by @ZiyueXu77 in 
https:\u002F\u002Fgithub.com\u002FNVIDIA\u002FNVFlare\u002Fpull\u002F4175\r\n* [2.7] Remove roadmap from documentation [skip ci] by @chesterxgchen in https:\u002F\u002Fgithub.com\u002FNVIDIA\u002FNVFlare\u002Fpull\u002F4179\r\n* [2.7] Initial checkpoint info by @holgerroth in https:\u002F\u002Fgithub.com\u002FNVIDIA\u002FNVFlare\u002Fpull\u002F4178\r\n* [2.7] Fix cifar10 integration tests by @YuanTingHsieh in https:\u002F\u002Fgithub.com\u002FNVIDIA\u002FNVFlare\u002Fpull\u002F4150\r\n* [2.7] Avoid self-message deadlock for local swarm result submission by @pcnudde in https:\u002F\u002Fgithub.com\u002FNVIDIA\u002FNVFlare\u002Fpull\u002F4186\r\n* [2.7] smaller lock in produce item by @pcnudde in https:\u002F\u002Fgithub.com\u002FNVIDIA\u002FNVFlare\u002Fpull\u002F4174\r\n* [2.7] Replace job.to with alternative recipe apis by @ZiyueXu77 in https:\u002F\u002Fgithub.com\u002FNVIDIA\u002FNVFlare\u002Fpull\u002F4183\r\n* [2.7] Fix recipes and job api model\u002Finitial_model confusion by @YuanTingHsieh in https:\u002F\u002Fgithub.com\u002FNVIDIA\u002FNVFlare\u002Fpull\u002F4188\r\n* [2.7] Fix numpy cross val sticky property by @nvkevlu in https:\u002F\u002Fgithub.com\u002FNVIDIA\u002FNVFlare\u002Fpull\u002F4181\r\n* [2.7] Use Initial Global Model in BioNeMo Recipe Examples by @holgerroth in https:\u002F\u002Fgithub.com\u002FNVIDIA\u002FNVFlare\u002Fpull\u002F4189\r\n* [2.7] Add PR-4172 style tests for swarm self-result submission fix by @pcnudde in https:\u002F\u002Fgithub.com\u002FNVIDIA\u002FNVFlare\u002Fpull\u002F4190\r\n* [2.7] update to match fix in main by @nvkevlu in https:\u002F\u002Fgithub.com\u002FNVIDIA\u002FNVFlare\u002Fpull\u002F4202\r\n* [2.7] Fix RxTask self-deadlock on stream error cleanup by @pcnudde in https:\u002F\u002Fgithub.com\u002FNVIDIA\u002FNVFlare\u002Fpull\u002F4204\r\n* [2.7] Mitigate F3 streaming Head-of-Line (HOL) stalls and add guardrails by @chesterxgchen in https:\u002F\u002Fgithub.com\u002FNVIDIA\u002FNVFlare\u002Fpull\u002F4206\r\n* 
[2.7] Client-side memory management  by @chesterxgchen in https:\u002F\u002Fgithub.com\u002FNVIDIA\u002FNVFlare\u002Fpull\u002F4211\r\n* [2.7] Pass-Through: Zero Tensor Copy at CJ for Large-Model Federated Training by @chesterxgchen in https:\u002F\u002Fgithub.com\u002FNVIDIA\u002FNVFlare\u002Fpull\u002F4210\r\n\r\n\r\n**Full Changelog**: https:\u002F\u002Fgithub.com\u002FNVIDIA\u002FNVFlare\u002Fcompare\u002F2.7.2rc9...2.7.2rc10","2026-02-21T00:17:39",{"id":236,"version":237,"summary_zh":238,"released_at":239},306227,"2.7.2rc9","## What's Changed\r\n* [2.7] Update links on web page by @nvkevlu in https:\u002F\u002Fgithub.com\u002FNVIDIA\u002FNVFlare\u002Fpull\u002F4160\r\n* [2.7] Restructure documentation for persona-driven navigation and real-world use cases [skip ci] by @chesterxgchen in https:\u002F\u002Fgithub.com\u002FNVIDIA\u002FNVFlare\u002Fpull\u002F4159\r\n* [2.7] Add retry mechanism for streaming download on TIMEOUT by @chesterxgchen in https:\u002F\u002Fgithub.com\u002FNVIDIA\u002FNVFlare\u002Fpull\u002F4167\r\n* [2.7] Fix stream pool starvation issue by @YuanTingHsieh in https:\u002F\u002Fgithub.com\u002FNVIDIA\u002FNVFlare\u002Fpull\u002F4171\r\n\r\n\r\n**Full Changelog**: https:\u002F\u002Fgithub.com\u002FNVIDIA\u002FNVFlare\u002Fcompare\u002F2.7.2rc8...2.7.2rc9","2026-02-11T02:41:11",{"id":241,"version":242,"summary_zh":243,"released_at":244},306228,"2.7.2rc8","## What's Changed\r\n* [2.7] Add comprehensive timeout documentation [skip ci] by @chesterxgchen in https:\u002F\u002Fgithub.com\u002FNVIDIA\u002FNVFlare\u002Fpull\u002F4083\r\n* [2.7] Rename recipe argument 'initial_model' to 'model' by @chesterxgchen in https:\u002F\u002Fgithub.com\u002FNVIDIA\u002FNVFlare\u002Fpull\u002F4144\r\n* [2.7] Fix arg docstring [skip ci] by @YuanTingHsieh in https:\u002F\u002Fgithub.com\u002FNVIDIA\u002FNVFlare\u002Fpull\u002F4145\r\n* [2.7] added comment in job_recipe notebook by @pcnudde in 
https:\u002F\u002Fgithub.com\u002FNVIDIA\u002FNVFlare\u002Fpull\u002F4147\r\n* [2.7] CIFAR-10 Experiment Tracking Instructions Corrections by @holgerroth in https:\u002F\u002Fgithub.com\u002FNVIDIA\u002FNVFlare\u002Fpull\u002F4148\r\n* [2.7] Recipe: support relative initial_ckpt and API enhancements by @chesterxgchen in https:\u002F\u002Fgithub.com\u002FNVIDIA\u002FNVFlare\u002Fpull\u002F4155\r\n* [2.7] Revert model back to initial model for Job API by @YuanTingHsieh in https:\u002F\u002Fgithub.com\u002FNVIDIA\u002FNVFlare\u002Fpull\u002F4153\r\n* [2.7] Fix fed eval example by @YuanTingHsieh in https:\u002F\u002Fgithub.com\u002FNVIDIA\u002FNVFlare\u002Fpull\u002F4152\r\n* [2.7] Rename sklearn recipe args by @YuanTingHsieh in https:\u002F\u002Fgithub.com\u002FNVIDIA\u002FNVFlare\u002Fpull\u002F4154\r\n* [2.7] Recipe API: use class_path for model dict, rename validate_ckpt by @YuanTingHsieh in https:\u002F\u002Fgithub.com\u002FNVIDIA\u002FNVFlare\u002Fpull\u002F4156\r\n\r\n\r\n**Full Changelog**: https:\u002F\u002Fgithub.com\u002FNVIDIA\u002FNVFlare\u002Fcompare\u002F2.7.2rc7...2.7.2rc8","2026-02-08T22:40:34",{"id":246,"version":247,"summary_zh":248,"released_at":249},306229,"2.7.2rc7","## What's Changed\r\n* [2.7] Snapshot task data only by @YuanTingHsieh in https:\u002F\u002Fgithub.com\u002FNVIDIA\u002FNVFlare\u002Fpull\u002F4126\r\n* [2.7] Chapter 2 Fixes by @holgerroth in https:\u002F\u002Fgithub.com\u002FNVIDIA\u002FNVFlare\u002Fpull\u002F4115\r\n* [2.7] Update jobapi pt example by @ZiyueXu77 in https:\u002F\u002Fgithub.com\u002FNVIDIA\u002FNVFlare\u002Fpull\u002F4113\r\n* [2.7] Use dict-based config in HuggingFace example by @holgerroth in https:\u002F\u002Fgithub.com\u002FNVIDIA\u002FNVFlare\u002Fpull\u002F4123\r\n* [2.7] Chapter 10 Fixes by @holgerroth in https:\u002F\u002Fgithub.com\u002FNVIDIA\u002FNVFlare\u002Fpull\u002F4119\r\n* [2.7] improve tensor streaming by @YuanTingHsieh in 
https:\u002F\u002Fgithub.com\u002FNVIDIA\u002FNVFlare\u002Fpull\u002F4120\r\n* [2.7] Chapter 12 Fixes by @holgerroth in https:\u002F\u002Fgithub.com\u002FNVIDIA\u002FNVFlare\u002Fpull\u002F4122\r\n* [2.7] CIFAR-10 Update tb event reader and requirements [skip ci] by @holgerroth in https:\u002F\u002Fgithub.com\u002FNVIDIA\u002FNVFlare\u002Fpull\u002F4125\r\n* [2.7] Updates for latest api from PEFT\u002FTRL by @ZiyueXu77 in https:\u002F\u002Fgithub.com\u002FNVIDIA\u002FNVFlare\u002Fpull\u002F4128\r\n* [2.7] Add tbparse license [skip ci] by @holgerroth in https:\u002F\u002Fgithub.com\u002FNVIDIA\u002FNVFlare\u002Fpull\u002F4136\r\n* [2.7] Touchups on local training scripts by @ZiyueXu77 in https:\u002F\u002Fgithub.com\u002FNVIDIA\u002FNVFlare\u002Fpull\u002F4137\r\n* [2.7]: Fix the monitoring example by @IsaacYangSLA in https:\u002F\u002Fgithub.com\u002FNVIDIA\u002FNVFlare\u002Fpull\u002F4099\r\n* [2.7] Cherry-pick LLM HF updates  by @ZiyueXu77 in https:\u002F\u002Fgithub.com\u002FNVIDIA\u002FNVFlare\u002Fpull\u002F4140\r\n* [2.7] Fix POC Run result caching and cleanup by @chesterxgchen in https:\u002F\u002Fgithub.com\u002FNVIDIA\u002FNVFlare\u002Fpull\u002F4132\r\n* [2.7] Recipe Interface Part 3: Dict Model Config and Initial Checkpoint Support by @chesterxgchen in https:\u002F\u002Fgithub.com\u002FNVIDIA\u002FNVFlare\u002Fpull\u002F4130\r\n* [2.7] Fix xgboost adaptor by @nvkevlu in https:\u002F\u002Fgithub.com\u002FNVIDIA\u002FNVFlare\u002Fpull\u002F4127\r\n* [2.7] Recipe Interface Part 3: Documentation Updates [skip ci] by @chesterxgchen in https:\u002F\u002Fgithub.com\u002FNVIDIA\u002FNVFlare\u002Fpull\u002F4131\r\n* [2.7] Tutorials disclaimer by @holgerroth in https:\u002F\u002Fgithub.com\u002FNVIDIA\u002FNVFlare\u002Fpull\u002F4142\r\n* [2.7] Fix swarm controller + tensor streaming issue by @YuanTingHsieh in https:\u002F\u002Fgithub.com\u002FNVIDIA\u002FNVFlare\u002Fpull\u002F4141\r\n\r\n\r\n**Full Changelog**: 
https:\u002F\u002Fgithub.com\u002FNVIDIA\u002FNVFlare\u002Fcompare\u002F2.7.2rc6...2.7.2rc7","2026-02-06T02:41:55",{"id":251,"version":252,"summary_zh":253,"released_at":254},306230,"2.7.2rc6","## What's Changed\r\n* [2.7] Add Server-Side Memory Management for Long-Running FL Jobs by @chesterxgchen in https:\u002F\u002Fgithub.com\u002FNVIDIA\u002FNVFlare\u002Fpull\u002F4042\r\n* [2.7] same switch to open model for llm by @ZiyueXu77 in https:\u002F\u002Fgithub.com\u002FNVIDIA\u002FNVFlare\u002Fpull\u002F4044\r\n* [2.7] Support bf16 in PT model persistor by @holgerroth in https:\u002F\u002Fgithub.com\u002FNVIDIA\u002FNVFlare\u002Fpull\u002F4045\r\n* [2.7] remove no longer supported arg, fix data format by @ZiyueXu77 in https:\u002F\u002Fgithub.com\u002FNVIDIA\u002FNVFlare\u002Fpull\u002F4047\r\n* [2.7] Add integration tests for examples (cherrypick from #4041) by @pcnudde in https:\u002F\u002Fgithub.com\u002FNVIDIA\u002FNVFlare\u002Fpull\u002F4051\r\n* [2.7] Move monai examples under advanced by @holgerroth in https:\u002F\u002Fgithub.com\u002FNVIDIA\u002FNVFlare\u002Fpull\u002F4053\r\n* [2.7] Update custom authentication example by @holgerroth in https:\u002F\u002Fgithub.com\u002FNVIDIA\u002FNVFlare\u002Fpull\u002F4055\r\n* Fixed keycloak docker tag by @pcnudde in https:\u002F\u002Fgithub.com\u002FNVIDIA\u002FNVFlare\u002Fpull\u002F4058\r\n* [2.7] increase link check timout by @holgerroth in https:\u002F\u002Fgithub.com\u002FNVIDIA\u002FNVFlare\u002Fpull\u002F4060\r\n* [2.7] Increase BioNeMo external script init timeout by @holgerroth in https:\u002F\u002Fgithub.com\u002FNVIDIA\u002FNVFlare\u002Fpull\u002F4057\r\n* [2.7] Update Edge for Android by @holgerroth in https:\u002F\u002Fgithub.com\u002FNVIDIA\u002FNVFlare\u002Fpull\u002F4064\r\n* [2.7] update GNN readme by @holgerroth in https:\u002F\u002Fgithub.com\u002FNVIDIA\u002FNVFlare\u002Fpull\u002F4065\r\n* [2.7] Update info logging of Cacheable by @holgerroth in 
https:\u002F\u002Fgithub.com\u002FNVIDIA\u002FNVFlare\u002Fpull\u002F4062\r\n* [2.7] Fix the rest of the examples by @nvkevlu in https:\u002F\u002Fgithub.com\u002FNVIDIA\u002FNVFlare\u002Fpull\u002F4039\r\n* [2.7] Updates to notebooks by @ZiyueXu77 in https:\u002F\u002Fgithub.com\u002FNVIDIA\u002FNVFlare\u002Fpull\u002F4066\r\n* [2.7] Add timeout check by @holgerroth in https:\u002F\u002Fgithub.com\u002FNVIDIA\u002FNVFlare\u002Fpull\u002F4070\r\n* [2.7] Update to standardize all cifar10 data location in self-paced tutorails by @ZiyueXu77 in https:\u002F\u002Fgithub.com\u002FNVIDIA\u002FNVFlare\u002Fpull\u002F4072\r\n* [2.7] Fix\u002Fnotebooks by @pcnudde in https:\u002F\u002Fgithub.com\u002FNVIDIA\u002FNVFlare\u002Fpull\u002F4068\r\n* [2.7] fix skmeans and vertical learning notebooks by @pcnudde in https:\u002F\u002Fgithub.com\u002FNVIDIA\u002FNVFlare\u002Fpull\u002F4076\r\n* [2.7] Replace NLP-NER with link to tutorial by @holgerroth in https:\u002F\u002Fgithub.com\u002FNVIDIA\u002FNVFlare\u002Fpull\u002F4077\r\n* [2.7] Add missing server_memory_gc_rounds parameter to recipes by @chesterxgchen in https:\u002F\u002Fgithub.com\u002FNVIDIA\u002FNVFlare\u002Fpull\u002F4081\r\n* [2.7] Redesign Job-level Authorization Example by @holgerroth in https:\u002F\u002Fgithub.com\u002FNVIDIA\u002FNVFlare\u002Fpull\u002F4074\r\n* [2.7] amplify tutorial updates by @ZiyueXu77 in https:\u002F\u002Fgithub.com\u002FNVIDIA\u002FNVFlare\u002Fpull\u002F4089\r\n* [2.7] Remove link check on github by @YuanTingHsieh in https:\u002F\u002Fgithub.com\u002FNVIDIA\u002FNVFlare\u002Fpull\u002F4104\r\n* [2.7] Logging tutorial fix by @holgerroth in https:\u002F\u002Fgithub.com\u002FNVIDIA\u002FNVFlare\u002Fpull\u002F4098\r\n* [2.7] Job CLI Tutorial Fixes by @holgerroth in https:\u002F\u002Fgithub.com\u002FNVIDIA\u002FNVFlare\u002Fpull\u002F4096\r\n* [2.7] Add dynamic ignore_result_error logic and POC environment cleanup by @chesterxgchen in 
https:\u002F\u002Fgithub.com\u002FNVIDIA\u002FNVFlare\u002Fpull\u002F4084\r\n* [2.7] Update docker example by @ZiyueXu77 in https:\u002F\u002Fgithub.com\u002FNVIDIA\u002FNVFlare\u002Fpull\u002F4102\r\n* [2.7] Chapter 1 TensorBoard Streaming Fix by @holgerroth in https:\u002F\u002Fgithub.com\u002FNVIDIA\u002FNVFlare\u002Fpull\u002F4103\r\n* [2.7] Update holoscan tutorial by @ZiyueXu77 in https:\u002F\u002Fgithub.com\u002FNVIDIA\u002FNVFlare\u002Fpull\u002F4095\r\n* [2.7] Improve df_stat example by @ZiyueXu77 in https:\u002F\u002Fgithub.com\u002FNVIDIA\u002FNVFlare\u002Fpull\u002F4108\r\n* [2.7] Chapter 8 Fixes by @holgerroth in https:\u002F\u002Fgithub.com\u002FNVIDIA\u002FNVFlare\u002Fpull\u002F4105\r\n* [2.7]: Fix finance example by @IsaacYangSLA in https:\u002F\u002Fgithub.com\u002FNVIDIA\u002FNVFlare\u002Fpull\u002F4091\r\n* [2.7] Add model config interface with dict-based model input and initial_ckpt support part 1 of 3 by @chesterxgchen in https:\u002F\u002Fgithub.com\u002FNVIDIA\u002FNVFlare\u002Fpull\u002F4082\r\n* [2.7]: Update the streaming example with more details by @IsaacYangSLA in https:\u002F\u002Fgithub.com\u002FNVIDIA\u002FNVFlare\u002Fpull\u002F4114\r\n* [2.7] Fix xgboost recipes by @nvkevlu in https:\u002F\u002Fgithub.com\u002FNVIDIA\u002FNVFlare\u002Fpull\u002F4087\r\n* [2.7] Fix federated_policy example by @IsaacYangSLA in https:\u002F\u002Fgithub.com\u002FNVIDIA\u002FNVFlare\u002Fpull\u002F4090\r\n* [2.7] Remove multi gpu tf 2.7 by @nvkevlu in https:\u002F\u002Fgithub.com\u002FNVIDIA\u002FNVFlare\u002Fpull\u002F4116\r\n* [2.7] Add dict config and initial_ckpt support to standard recipes (Part 2) by @chesterxgchen in https:\u002F\u002Fgithub.com\u002FNVIDIA\u002FNVFlare\u002Fpull\u002F4117\r\n* [2.7] Address potential data corruption issue with Streamer by @YuanTingHsieh in https:\u002F\u002Fgithub.com\u002FNVIDIA\u002FNVFlare\u002Fpull\u002F4100\r\n\r\n\r\n**Full Changelog**: 
https:\u002F\u002Fgithub.com\u002FNVIDIA\u002FNVFlare\u002Fcompare\u002F2.7.2rc5...2.7.2rc6","2026-02-04T01:45:22",{"id":256,"version":257,"summary_zh":258,"released_at":259},306231,"2.7.2rc5","## What's Changed\r\n* [2.7] Mandating signature by @IsaacYangSLA in https:\u002F\u002Fgithub.com\u002FNVIDIA\u002FNVFlare\u002Fpull\u002F4008\r\n* [2.7] Fix NumPy CSE regression in 2.7.2rc4 by @nvkevlu in https:\u002F\u002Fgithub.com\u002FNVIDIA\u002FNVFlare\u002Fpull\u002F4011\r\n* [2.7] handle non exists file with absolute file path by @chesterxgchen in https:\u002F\u002Fgithub.com\u002FNVIDIA\u002FNVFlare\u002Fpull\u002F4007\r\n* [2.7] Cherry pick Add Recipe for Experiment Tracking by @nvkevlu in https:\u002F\u002Fgithub.com\u002FNVIDIA\u002FNVFlare\u002Fpull\u002F4009\r\n* [2.7] Release news and feature highlights, doc restructure [skip ci] by @chesterxgchen in https:\u002F\u002Fgithub.com\u002FNVIDIA\u002FNVFlare\u002Fpull\u002F4017\r\n* [2.7] Update LR related examples by @YuanTingHsieh in https:\u002F\u002Fgithub.com\u002FNVIDIA\u002FNVFlare\u002Fpull\u002F4014\r\n* [2.7] Update product feature docs [skip ci] by @chesterxgchen in https:\u002F\u002Fgithub.com\u002FNVIDIA\u002FNVFlare\u002Fpull\u002F4018\r\n* [2.7] Fix sklearn examples by @YuanTingHsieh in https:\u002F\u002Fgithub.com\u002FNVIDIA\u002FNVFlare\u002Fpull\u002F4015\r\n* [2.7] Increase link check timeout by @holgerroth in https:\u002F\u002Fgithub.com\u002FNVIDIA\u002FNVFlare\u002Fpull\u002F4027\r\n* [2.7] Convert AMPLIFY example to recipe by @holgerroth in https:\u002F\u002Fgithub.com\u002FNVIDIA\u002FNVFlare\u002Fpull\u002F4022\r\n* [2.7] Recipe site configuration [skip ci] by @chesterxgchen in https:\u002F\u002Fgithub.com\u002FNVIDIA\u002FNVFlare\u002Fpull\u002F4021\r\n* [2.7] Add Brats to research by @ZiyueXu77 in https:\u002F\u002Fgithub.com\u002FNVIDIA\u002FNVFlare\u002Fpull\u002F4026\r\n* [2.7] Fixed the swarm bug by @nvidianz in 
https:\u002F\u002Fgithub.com\u002FNVIDIA\u002FNVFlare\u002Fpull\u002F4025\r\n* [2.7] Add recipes to __init__ by @YuanTingHsieh in https:\u002F\u002Fgithub.com\u002FNVIDIA\u002FNVFlare\u002Fpull\u002F4031\r\n* [2.7] Enhance weighted agg helper by @YuanTingHsieh in https:\u002F\u002Fgithub.com\u002FNVIDIA\u002FNVFlare\u002Fpull\u002F4020\r\n* [2.7] Cherry pick 4002 4005 by @YuanTingHsieh in https:\u002F\u002Fgithub.com\u002FNVIDIA\u002FNVFlare\u002Fpull\u002F4034\r\n* [2.7] Added origin to the stream lookup key by @nvidianz in https:\u002F\u002Fgithub.com\u002FNVIDIA\u002FNVFlare\u002Fpull\u002F4033\r\n\r\n\r\n**Full Changelog**: https:\u002F\u002Fgithub.com\u002FNVIDIA\u002FNVFlare\u002Fcompare\u002F2.7.2rc4...2.7.2rc5","2026-01-24T18:41:45",{"id":261,"version":262,"summary_zh":263,"released_at":264},306232,"2.7.2rc4","## What's Changed\r\n* [2.7] Fix TLS corruption by replacing fork with posix_spawn (#3856) by @GeorgeWang-nv in https:\u002F\u002Fgithub.com\u002FNVIDIA\u002FNVFlare\u002Fpull\u002F3983\r\n* [2.7] Temporally remove pre install tool and example from 2.7 branch by @chesterxgchen in https:\u002F\u002Fgithub.com\u002FNVIDIA\u002FNVFlare\u002Fpull\u002F3986\r\n* [2.7] multinode guide for llm_hf  by @ZiyueXu77 in https:\u002F\u002Fgithub.com\u002FNVIDIA\u002FNVFlare\u002Fpull\u002F3991\r\n* [2.7] Convert MONAI examples to Recipe by @holgerroth in https:\u002F\u002Fgithub.com\u002FNVIDIA\u002FNVFlare\u002Fpull\u002F3971\r\n* [2.7] Cherrypick from main: Fix docs [skip ci] (#3974) by @pcnudde in https:\u002F\u002Fgithub.com\u002FNVIDIA\u002FNVFlare\u002Fpull\u002F3992\r\n* [2.7] BioNeMo Task Fitting with PyTorch by @holgerroth in https:\u002F\u002Fgithub.com\u002FNVIDIA\u002FNVFlare\u002Fpull\u002F3982\r\n* [2.7] Cherry Pick Comprehensively remove mention of SAG by @nvkevlu in https:\u002F\u002Fgithub.com\u002FNVIDIA\u002FNVFlare\u002Fpull\u002F3997\r\n* [2.7] Cherry pick Make updates to Client API tutorials by @nvkevlu in 
https:\u002F\u002Fgithub.com\u002FNVIDIA\u002FNVFlare\u002Fpull\u002F3995\r\n* [2.7] Update web page and bump dependencies for security fixes by @nvkevlu in https:\u002F\u002Fgithub.com\u002FNVIDIA\u002FNVFlare\u002Fpull\u002F3999\r\n* [2.7] Cherry pick Add recipe for xgboost by @nvkevlu in https:\u002F\u002Fgithub.com\u002FNVIDIA\u002FNVFlare\u002Fpull\u002F3994\r\n* [2.7] Ignore downloder no ref_id errors by @nvidianz in https:\u002F\u002Fgithub.com\u002FNVIDIA\u002FNVFlare\u002Fpull\u002F4004\r\n* [2.7] Cherry Pick Fix hello-numpy-cross-val example by @nvkevlu in https:\u002F\u002Fgithub.com\u002FNVIDIA\u002FNVFlare\u002Fpull\u002F3998\r\n* [2.7] FedAvg Merge with FedAvgEarlyStopping + InTimeAggregation by @chesterxgchen in https:\u002F\u002Fgithub.com\u002FNVIDIA\u002FNVFlare\u002Fpull\u002F3993\r\n\r\n\r\n**Full Changelog**: https:\u002F\u002Fgithub.com\u002FNVIDIA\u002FNVFlare\u002Fcompare\u002F2.7.2rc3...2.7.2rc4","2026-01-22T02:47:57",{"id":266,"version":267,"summary_zh":268,"released_at":269},306233,"2.7.2rc3","## What's Changed\r\n* [2.7] Cherry pick simplify CSE recipe (#3942) by @nvkevlu in https:\u002F\u002Fgithub.com\u002FNVIDIA\u002FNVFlare\u002Fpull\u002F3957\r\n* [2.7] Cifar10 tf central training logs path by @holgerroth in https:\u002F\u002Fgithub.com\u002FNVIDIA\u002FNVFlare\u002Fpull\u002F3955\r\n* [2.7] BioNeMo Recipes by @holgerroth in https:\u002F\u002Fgithub.com\u002FNVIDIA\u002FNVFlare\u002Fpull\u002F3943\r\n* [2.7] Apply same llm updates to 2.7 by @ZiyueXu77 in https:\u002F\u002Fgithub.com\u002FNVIDIA\u002FNVFlare\u002Fpull\u002F3956\r\n* [2.7] Apply same GNN updates to 2.7 by @ZiyueXu77 in https:\u002F\u002Fgithub.com\u002FNVIDIA\u002FNVFlare\u002Fpull\u002F3966\r\n* [2.7] cherry pick fix for NumpyFedAvgRecipe experiment tracking by @nvkevlu in https:\u002F\u002Fgithub.com\u002FNVIDIA\u002FNVFlare\u002Fpull\u002F3970\r\n* [2.7] Remove step-by-step by @ZiyueXu77 in 
https:\u002F\u002Fgithub.com\u002FNVIDIA\u002FNVFlare\u002Fpull\u002F3972\r\n* [2.7] Hello Differential Privacy by @holgerroth in https:\u002F\u002Fgithub.com\u002FNVIDIA\u002FNVFlare\u002Fpull\u002F3961\r\n* [2.7] expose key metric in recipe by @ZiyueXu77 in https:\u002F\u002Fgithub.com\u002FNVIDIA\u002FNVFlare\u002Fpull\u002F3973\r\n* [2.7] Update key_metric over all examples by @ZiyueXu77 in https:\u002F\u002Fgithub.com\u002FNVIDIA\u002FNVFlare\u002Fpull\u002F3981\r\n* [2.7] Raise exception on FOBS errors by @nvidianz in https:\u002F\u002Fgithub.com\u002FNVIDIA\u002FNVFlare\u002Fpull\u002F3969\r\n\r\n\r\n**Full Changelog**: https:\u002F\u002Fgithub.com\u002FNVIDIA\u002FNVFlare\u002Fcompare\u002F2.7.2rc2...2.7.2rc3","2026-01-17T02:31:08",{"id":271,"version":272,"summary_zh":273,"released_at":274},306234,"2.7.2rc2","## What's Changed\r\n* [2.7] Lower Downloader logging verbosity by @nvidianz in https:\u002F\u002Fgithub.com\u002FNVIDIA\u002FNVFlare\u002Fpull\u002F3913\r\n* [2.7] Cherry pick Convert PSI example to use recipe (#3901) by @YuanTingHsieh in https:\u002F\u002Fgithub.com\u002FNVIDIA\u002FNVFlare\u002Fpull\u002F3911\r\n* [2.7] Redesign Cifar10 PT by @holgerroth in https:\u002F\u002Fgithub.com\u002FNVIDIA\u002FNVFlare\u002Fpull\u002F3905\r\n* [2.7] Restructure CIFAR-10 PT example by @holgerroth in https:\u002F\u002Fgithub.com\u002FNVIDIA\u002FNVFlare\u002Fpull\u002F3916\r\n* [2.7] Refactor CIFAR-10 TensorFlow example by @holgerroth in https:\u002F\u002Fgithub.com\u002FNVIDIA\u002FNVFlare\u002Fpull\u002F3919\r\n* [2.7] Consolidate BaseFedJob and fedavg.py and Fix import error for TBAnalyticsReceiver by @nvkevlu in https:\u002F\u002Fgithub.com\u002FNVIDIA\u002FNVFlare\u002Fpull\u002F3918\r\n* [2.7] Cherry pick from main by @YuanTingHsieh in https:\u002F\u002Fgithub.com\u002FNVIDIA\u002FNVFlare\u002Fpull\u002F3920\r\n* [2.7] Cherry pick of Fix preflight check and ci (#3917) by @YuanTingHsieh in 
https:\u002F\u002Fgithub.com\u002FNVIDIA\u002FNVFlare\u002Fpull\u002F3929\r\n* [2.7] Fix Job API TF examples by @holgerroth in https:\u002F\u002Fgithub.com\u002FNVIDIA\u002FNVFlare\u002Fpull\u002F3926\r\n* [2.7] Kaplan Meier updates by @ZiyueXu77 in https:\u002F\u002Fgithub.com\u002FNVIDIA\u002FNVFlare\u002Fpull\u002F3935\r\n* [2.7] Multinode doc update by @holgerroth in https:\u002F\u002Fgithub.com\u002FNVIDIA\u002FNVFlare\u002Fpull\u002F3939\r\n* [2.7] Add missing tensorboard requirements by @holgerroth in https:\u002F\u002Fgithub.com\u002FNVIDIA\u002FNVFlare\u002Fpull\u002F3938\r\n* [2.7] Cherry pick cross site eval 3923 by @nvkevlu in https:\u002F\u002Fgithub.com\u002FNVIDIA\u002FNVFlare\u002Fpull\u002F3936\r\n* [2.7] Improve error message in client API by @holgerroth in https:\u002F\u002Fgithub.com\u002FNVIDIA\u002FNVFlare\u002Fpull\u002F3947\r\n* [2.7] Removed references to ws by @nvidianz in https:\u002F\u002Fgithub.com\u002FNVIDIA\u002FNVFlare\u002Fpull\u002F3950\r\n* [2.7] Cherry pick 3930 3945 by @YuanTingHsieh in https:\u002F\u002Fgithub.com\u002FNVIDIA\u002FNVFlare\u002Fpull\u002F3952\r\n* [2.7] Cherry-pick of Consolidate LR examples (#3944) by @YuanTingHsieh in https:\u002F\u002Fgithub.com\u002FNVIDIA\u002FNVFlare\u002Fpull\u002F3953\r\n* [2.7] Expose launch_once option in ScriptRunner by @holgerroth in https:\u002F\u002Fgithub.com\u002FNVIDIA\u002FNVFlare\u002Fpull\u002F3954\r\n\r\n\r\n**Full Changelog**: https:\u002F\u002Fgithub.com\u002FNVIDIA\u002FNVFlare\u002Fcompare\u002F2.7.2rc1...2.7.2rc2","2026-01-14T17:59:54"]