[{"data":1,"prerenderedAt":-1},["ShallowReactive",2],{"similar-google-parfait--tensorflow-federated":3,"tool-google-parfait--tensorflow-federated":61},[4,18,26,36,44,53],{"id":5,"name":6,"github_repo":7,"description_zh":8,"stars":9,"difficulty_score":10,"last_commit_at":11,"category_tags":12,"status":17},4358,"openclaw","openclaw\u002Fopenclaw","OpenClaw 是一款专为个人打造的本地化 AI 助手，旨在让你在自己的设备上拥有完全可控的智能伙伴。它打破了传统 AI 助手局限于特定网页或应用的束缚，能够直接接入你日常使用的各类通讯渠道，包括微信、WhatsApp、Telegram、Discord、iMessage 等数十种平台。无论你在哪个聊天软件中发送消息，OpenClaw 都能即时响应，甚至支持在 macOS、iOS 和 Android 设备上进行语音交互，并提供实时的画布渲染功能供你操控。\n\n这款工具主要解决了用户对数据隐私、响应速度以及“始终在线”体验的需求。通过将 AI 部署在本地，用户无需依赖云端服务即可享受快速、私密的智能辅助，真正实现了“你的数据，你做主”。其独特的技术亮点在于强大的网关架构，将控制平面与核心助手分离，确保跨平台通信的流畅性与扩展性。\n\nOpenClaw 非常适合希望构建个性化工作流的技术爱好者、开发者，以及注重隐私保护且不愿被单一生态绑定的普通用户。只要具备基础的终端操作能力（支持 macOS、Linux 及 Windows WSL2），即可通过简单的命令行引导完成部署。如果你渴望拥有一个懂你",349277,3,"2026-04-06T06:32:30",[13,14,15,16],"Agent","开发框架","图像","数据工具","ready",{"id":19,"name":20,"github_repo":21,"description_zh":22,"stars":23,"difficulty_score":10,"last_commit_at":24,"category_tags":25,"status":17},3808,"stable-diffusion-webui","AUTOMATIC1111\u002Fstable-diffusion-webui","stable-diffusion-webui 是一个基于 Gradio 构建的网页版操作界面，旨在让用户能够轻松地在本地运行和使用强大的 Stable Diffusion 图像生成模型。它解决了原始模型依赖命令行、操作门槛高且功能分散的痛点，将复杂的 AI 绘图流程整合进一个直观易用的图形化平台。\n\n无论是希望快速上手的普通创作者、需要精细控制画面细节的设计师，还是想要深入探索模型潜力的开发者与研究人员，都能从中获益。其核心亮点在于极高的功能丰富度：不仅支持文生图、图生图、局部重绘（Inpainting）和外绘（Outpainting）等基础模式，还独创了注意力机制调整、提示词矩阵、负向提示词以及“高清修复”等高级功能。此外，它内置了 GFPGAN 和 CodeFormer 等人脸修复工具，支持多种神经网络放大算法，并允许用户通过插件系统无限扩展能力。即使是显存有限的设备，stable-diffusion-webui 也提供了相应的优化选项，让高质量的 AI 艺术创作变得触手可及。",162132,"2026-04-05T11:01:52",[14,15,13],{"id":27,"name":28,"github_repo":29,"description_zh":30,"stars":31,"difficulty_score":32,"last_commit_at":33,"category_tags":34,"status":17},1381,"everything-claude-code","affaan-m\u002Feverything-claude-code","everything-claude-code 是一套专为 AI 编程助手（如 Claude Code、Codex、Cursor 等）打造的高性能优化系统。它不仅仅是一组配置文件，而是一个经过长期实战打磨的完整框架，旨在解决 AI 代理在实际开发中面临的效率低下、记忆丢失、安全隐患及缺乏持续学习能力等核心痛点。\n\n通过引入技能模块化、直觉增强、记忆持久化机制以及内置的安全扫描功能，everything-claude-code 能显著提升 AI 在复杂任务中的表现，帮助开发者构建更稳定、更智能的生产级 AI 代理。其独特的“研究优先”开发理念和针对 Token 消耗的优化策略，使得模型响应更快、成本更低，同时有效防御潜在的攻击向量。\n\n这套工具特别适合软件开发者、AI 研究人员以及希望深度定制 AI 工作流的技术团队使用。无论您是在构建大型代码库，还是需要 AI 协助进行安全审计与自动化测试，everything-claude-code 都能提供强大的底层支持。作为一个曾荣获 Anthropic 黑客大奖的开源项目，它融合了多语言支持与丰富的实战钩子（hooks），让 AI 真正成长为懂上",144730,2,"2026-04-07T23:26:32",[14,13,35],"语言模型",{"id":37,"name":38,"github_repo":39,"description_zh":40,"stars":41,"difficulty_score":32,"last_commit_at":42,"category_tags":43,"status":17},2271,"ComfyUI","Comfy-Org\u002FComfyUI","ComfyUI 是一款功能强大且高度模块化的视觉 AI 引擎，专为设计和执行复杂的 Stable Diffusion 图像生成流程而打造。它摒弃了传统的代码编写模式，采用直观的节点式流程图界面，让用户通过连接不同的功能模块即可构建个性化的生成管线。\n\n这一设计巧妙解决了高级 AI 绘图工作流配置复杂、灵活性不足的痛点。用户无需具备编程背景，也能自由组合模型、调整参数并实时预览效果，轻松实现从基础文生图到多步骤高清修复等各类复杂任务。ComfyUI 拥有极佳的兼容性，不仅支持 Windows、macOS 和 Linux 全平台，还广泛适配 NVIDIA、AMD、Intel 及苹果 Silicon 等多种硬件架构，并率先支持 SDXL、Flux、SD3 等前沿模型。\n\n无论是希望深入探索算法潜力的研究人员和开发者，还是追求极致创作自由度的设计师与资深 AI 绘画爱好者，ComfyUI 都能提供强大的支持。其独特的模块化架构允许社区不断扩展新功能，使其成为当前最灵活、生态最丰富的开源扩散模型工具之一，帮助用户将创意高效转化为现实。",107888,"2026-04-06T11:32:50",[14,15,13],{"id":45,"name":46,"github_repo":47,"description_zh":48,"stars":49,"difficulty_score":32,"last_commit_at":50,"category_tags":51,"status":17},4721,"markitdown","microsoft\u002Fmarkitdown","MarkItDown 是一款由微软 AutoGen 团队打造的轻量级 Python 工具，专为将各类文件高效转换为 Markdown 格式而设计。它支持 PDF、Word、Excel、PPT、图片（含 OCR）、音频（含语音转录）、HTML 乃至 YouTube 链接等多种格式的解析，能够精准提取文档中的标题、列表、表格和链接等关键结构信息。\n\n在人工智能应用日益普及的今天，大语言模型（LLM）虽擅长处理文本，却难以直接读取复杂的二进制办公文档。MarkItDown 
恰好解决了这一痛点，它将非结构化或半结构化的文件转化为模型“原生理解”且 Token 效率极高的 Markdown 格式，成为连接本地文件与 AI 分析 pipeline 的理想桥梁。此外，它还提供了 MCP（模型上下文协议）服务器，可无缝集成到 Claude Desktop 等 LLM 应用中。\n\n这款工具特别适合开发者、数据科学家及 AI 研究人员使用，尤其是那些需要构建文档检索增强生成（RAG）系统、进行批量文本分析或希望让 AI 助手直接“阅读”本地文件的用户。虽然生成的内容也具备一定可读性，但其核心优势在于为机器",93400,"2026-04-06T19:52:38",[52,14],"插件",{"id":54,"name":55,"github_repo":56,"description_zh":57,"stars":58,"difficulty_score":10,"last_commit_at":59,"category_tags":60,"status":17},4487,"LLMs-from-scratch","rasbt\u002FLLMs-from-scratch","LLMs-from-scratch 是一个基于 PyTorch 的开源教育项目，旨在引导用户从零开始一步步构建一个类似 ChatGPT 的大型语言模型（LLM）。它不仅是同名技术著作的官方代码库，更提供了一套完整的实践方案，涵盖模型开发、预训练及微调的全过程。\n\n该项目主要解决了大模型领域“黑盒化”的学习痛点。许多开发者虽能调用现成模型，却难以深入理解其内部架构与训练机制。通过亲手编写每一行核心代码，用户能够透彻掌握 Transformer 架构、注意力机制等关键原理，从而真正理解大模型是如何“思考”的。此外，项目还包含了加载大型预训练权重进行微调的代码，帮助用户将理论知识延伸至实际应用。\n\nLLMs-from-scratch 特别适合希望深入底层原理的 AI 开发者、研究人员以及计算机专业的学生。对于不满足于仅使用 API，而是渴望探究模型构建细节的技术人员而言，这是极佳的学习资源。其独特的技术亮点在于“循序渐进”的教学设计：将复杂的系统工程拆解为清晰的步骤，配合详细的图表与示例，让构建一个虽小但功能完备的大模型变得触手可及。无论你是想夯实理论基础，还是为未来研发更大规模的模型做准备",90106,"2026-04-06T11:19:32",[35,15,13,14],{"id":62,"github_repo":63,"name":64,"description_en":65,"description_zh":66,"ai_summary_zh":66,"readme_en":67,"readme_zh":68,"quickstart_zh":69,"use_case_zh":70,"hero_image_url":71,"owner_login":72,"owner_name":73,"owner_avatar_url":74,"owner_bio":75,"owner_company":76,"owner_location":76,"owner_email":76,"owner_twitter":76,"owner_website":77,"owner_url":78,"languages":79,"stars":96,"forks":97,"last_commit_at":98,"license":99,"difficulty_score":100,"env_os":101,"env_gpu":102,"env_ram":102,"env_deps":103,"category_tags":107,"github_topics":76,"view_count":32,"oss_zip_url":76,"oss_zip_packed_at":76,"status":17,"created_at":108,"updated_at":109,"faqs":110,"releases":111},5335,"google-parfait\u002Ftensorflow-federated","tensorflow-federated","An open-source framework for machine learning and other computations on decentralized data.","TensorFlow Federated 是一个专为去中心化数据设计的开源机器学习框架。它核心解决了数据隐私与孤岛难题，允许在数据保留于本地设备（如手机、传感器）的前提下，协同训练全局模型，无需将敏感原始数据上传至服务器。这种“联邦学习”模式既保障了用户隐私，又实现了多方数据的价值挖掘。\n\n该工具主要面向机器学习开发者与研究人员。对于希望快速落地的开发者，其提供的联邦学习高层接口（FL API）能轻松将现有 TensorFlow 模型转化为联邦训练任务；对于致力于算法创新的研究者，底层的联邦核心接口（FC API）则结合了强类型函数式编程与分布式通信算子，支持灵活构建和实验 novel 算法。此外，它也适用于需要跨机构进行聚合分析但不愿共享原始数据的场景。\n\nTensorFlow Federated 的独特亮点在于其分层架构设计：上层封装了成熟的联邦平均等算法，降低使用门槛；下层提供了声明式的计算表达机制，使同一套逻辑可部署于多样化的运行环境。框架内置单机模拟运行时，方便用户在本地高效验证想法，是探索隐私保护计算的理想起点。","# TensorFlow Federated\n\nTensorFlow Federated (TFF) is an open-source framework for machine learning and\nother computations on decentralized data. TFF has been developed to facilitate\nopen research and experimentation with\n[Federated Learning (FL)](https:\u002F\u002Fai.googleblog.com\u002F2017\u002F04\u002Ffederated-learning-collaborative.html),\nan approach to machine learning where a shared global model is trained across\nmany participating clients that keep their training data locally. For example,\nFL has been used to train\n[prediction models for mobile keyboards](https:\u002F\u002Farxiv.org\u002Fabs\u002F1811.03604)\nwithout uploading sensitive typing data to servers.\n\nTFF enables developers to use the included federated learning algorithms with\ntheir models and data, as well as to experiment with novel algorithms. 
The\nbuilding blocks provided by TFF can also be used to implement non-learning\ncomputations, such as aggregated analytics over decentralized data.\n\nTFF's interfaces are organized in two layers:\n\n*   [Federated Learning (FL) API](https:\u002F\u002Fgithub.com\u002Fgoogle-parfait\u002Ftensorflow-federated\u002Fblob\u002Fmain\u002Fdocs\u002Ffederated_learning.md)\n    The `tff.learning` layer offers a set of high-level interfaces that allow\n    developers to apply the included implementations of federated training and\n    evaluation to their existing TensorFlow models.\n\n*   [Federated Core (FC) API](https:\u002F\u002Fgithub.com\u002Fgoogle-parfait\u002Ftensorflow-federated\u002Fblob\u002Fmain\u002Fdocs\u002Ffederated_core.md)\n    At the core of the system is a set of lower-level interfaces for concisely\n    expressing novel federated algorithms by combining TensorFlow with\n    distributed communication operators within a strongly-typed functional\n    programming environment. This layer also serves as the foundation upon which\n    we've built `tff.learning`.\n\nTFF enables developers to declaratively express federated computations, so they\ncould be deployed to diverse runtime environments. Included with TFF is a\nsingle-machine simulation runtime for experiments. Please visit the tutorials\nand try it out yourself!\n\n## Installation\n\nSee the\n[install](https:\u002F\u002Fgithub.com\u002Fgoogle-parfait\u002Ftensorflow-federated\u002Fblob\u002Fmain\u002Fdocs\u002Finstall.md)\ndocumentation for instructions on how to install TensorFlow Federated as a\npackage or build TensorFlow Federated from source.\n\n## Getting Started\n\nSee the\n[get started](https:\u002F\u002Fgithub.com\u002Fgoogle-parfait\u002Ftensorflow-federated\u002Fblob\u002Fmain\u002Fdocs\u002Fget_started.md)\ndocumentation for instructions on how to use TensorFlow Federated.\n\n## Contributing\n\nThere are a number of ways to contribute depending on what you're interested in:\n\n*   If you are interested in developing new federated learning algorithms, the\n    best way to start would be to study the implementations of federated\n    averaging and evaluation in `tff.learning`, and to think of extensions to\n    the existing implementation (or alternative approaches). If you have a\n    proposal for a new algorithm, we recommend starting by staging your project\n    in the `research` directory and including a colab notebook to showcase the\n    new features.\n\n    You may want to also develop new algorithms in your own repository. We are\n    happy to feature pointers to academic publications and\u002For repos using TFF on\n    [tensorflow.org\u002Ffederated](http:\u002F\u002Fwww.tensorflow.org\u002Ffederated).\n\n*   If you are interested in applying federated learning, consider contributing\n    a tutorial, a new federated dataset, or an example model that others could\n    use for experiments and testing, or writing helper classes that others can\n    use in setting up simulations.\n\n*   If you are interested in helping us improve the developer experience, the\n    best way to start would be to study the implementations behind the\n    `tff.learning` API, and to reflect on how we could make the code more\n    streamlined. 
You could contribute helper classes that build upon the FC API\n    or suggest extensions to the FC API itself.\n\n*   If you are interested in helping us develop runtime infrastructure for\n    simulations and beyond, please wait for a future release in which we will\n    introduce interfaces and guidelines for contributing to a simulation\n    infrastructure.\n\nPlease be sure to review the\n[contribution](https:\u002F\u002Fgithub.com\u002Fgoogle-parfait\u002Ftensorflow-federated\u002Fblob\u002Fmain\u002FCONTRIBUTING.md#guidelines)\nguidelines on how to contribute.\n\n## Issues\n\nUse\n[GitHub issues](https:\u002F\u002Fgithub.com\u002Fgoogle-parfait\u002Ftensorflow-federated\u002Fissues)\nfor tracking requests and bugs.\n\n## Questions\n\nPlease direct questions to [Stack Overflow](https:\u002F\u002Fstackoverflow.com) using the\n[tensorflow-federated](https:\u002F\u002Fstackoverflow.com\u002Fquestions\u002Ftagged\u002Ftensorflow-federated)\ntag.\n","# TensorFlow Federated\n\nTensorFlow Federated (TFF) 是一个用于在去中心化数据上进行机器学习及其他计算的开源框架。TFF 的开发旨在促进对 [联邦学习 (FL)](https:\u002F\u002Fai.googleblog.com\u002F2017\u002F04\u002Ffederated-learning-collaborative.html) 的开放研究与实验。联邦学习是一种机器学习方法，其中共享的全局模型由众多参与客户端协作训练，而这些客户端则将其训练数据保留在本地。例如，联邦学习已被用于训练 [移动键盘预测模型](https:\u002F\u002Farxiv.org\u002Fabs\u002F1811.03604)，且无需将敏感的输入数据上传至服务器。\n\nTFF 使开发者能够在其模型和数据上使用内置的联邦学习算法，同时也可以尝试新的算法。此外，TFF 提供的构建模块还可用于实现非学习类计算，例如对去中心化数据的聚合分析。\n\nTFF 的接口分为两层：\n\n*   [联邦学习 (FL) API](https:\u002F\u002Fgithub.com\u002Fgoogle-parfait\u002Ftensorflow-federated\u002Fblob\u002Fmain\u002Fdocs\u002Ffederated_learning.md)  \n    `tff.learning` 层提供了一组高级接口，允许开发者将内置的联邦训练和评估实现应用于其现有的 TensorFlow 模型。\n\n*   [联邦核心 (FC) API](https:\u002F\u002Fgithub.com\u002Fgoogle-parfait\u002Ftensorflow-federated\u002Fblob\u002Fmain\u002Fdocs\u002Ffederated_core.md)  \n    系统的核心是一组低级接口，可在强类型函数式编程环境中，通过结合 TensorFlow 和分布式通信算子，简洁地表达新颖的联邦算法。这一层也是我们构建 `tff.learning` 的基础。\n\nTFF 允许开发者以声明式的方式表达联邦计算，从而使其能够部署到不同的运行时环境。TFF 自带单机模拟运行时，可用于实验。请参阅教程并亲自试用！\n\n## 安装\n\n有关如何将 TensorFlow Federated 作为软件包安装或从源代码构建的说明，请参阅 [安装文档](https:\u002F\u002Fgithub.com\u002Fgoogle-parfait\u002Ftensorflow-federated\u002Fblob\u002Fmain\u002Fdocs\u002Finstall.md)。\n\n## 入门\n\n有关如何使用 TensorFlow Federated 的说明，请参阅 [入门文档](https:\u002F\u002Fgithub.com\u002Fgoogle-parfait\u002Ftensorflow-federated\u002Fblob\u002Fmain\u002Fdocs\u002Fget_started.md)。\n\n## 贡献\n\n根据您的兴趣，有多种方式可以参与贡献：\n\n*   如果您对开发新的联邦学习算法感兴趣，最好的起点是研究 `tff.learning` 中联邦平均和评估的实现，并思考对现有实现的扩展（或替代方案）。如果您有新的算法提案，建议先将其项目暂存于 `research` 目录中，并附上 Colab 笔记本来展示新功能。\n\n    您也可以在自己的仓库中开发新算法。我们很乐意在 [tensorflow.org\u002Ffederated](http:\u002F\u002Fwww.tensorflow.org\u002Ffederated) 上推荐使用 TFF 的学术论文和\u002F或代码库链接。\n\n*   如果您对应用联邦学习感兴趣，可以考虑贡献教程、新的联邦数据集，或可供他人用于实验和测试的示例模型；或者编写帮助类，以便他人在搭建模拟环境时使用。\n\n*   如果您希望帮助我们改善开发者体验，最好的开始方式是研究 `tff.learning` API 背后的实现，并思考如何使代码更加简洁高效。您可以贡献基于 FC API 的辅助类，或为 FC API 本身提出改进建议。\n\n*   如果您希望帮助我们开发用于模拟及其他场景的运行时基础设施，请等待未来的版本发布，届时我们将推出用于贡献模拟基础设施的接口和指南。\n\n请务必阅读 [贡献指南](https:\u002F\u002Fgithub.com\u002Fgoogle-parfait\u002Ftensorflow-federated\u002Fblob\u002Fmain\u002FCONTRIBUTING.md#guidelines)，了解具体的贡献方式。\n\n## 问题\n\n请使用 [GitHub Issues](https:\u002F\u002Fgithub.com\u002Fgoogle-parfait\u002Ftensorflow-federated\u002Fissues) 来跟踪请求和 bug。\n\n## 问答\n\n如有疑问，请在 [Stack Overflow](https:\u002F\u002Fstackoverflow.com) 上使用 [tensorflow-federated](https:\u002F\u002Fstackoverflow.com\u002Fquestions\u002Ftagged\u002Ftensorflow-federated) 标签提问。","# TensorFlow Federated 快速上手指南\n\nTensorFlow Federated (TFF) 是一个开源框架，旨在支持去中心化数据上的机器学习与其他计算。它专为联邦学习（Federated 
Learning, FL）的研究与实验而设计，允许在数据保留在本地客户端的情况下训练共享的全局模型。\n\n## 环境准备\n\n在开始之前，请确保您的开发环境满足以下要求：\n\n*   **操作系统**：Linux 或 macOS（Windows 用户建议使用 WSL2 或 Docker）。\n*   **Python 版本**：推荐 Python 3.8 - 3.10。\n*   **前置依赖**：\n    *   已安装 `pip` 包管理工具。\n    *   建议先安装基础版 `tensorflow`（CPU 或 GPU 版本均可），TFF 会自动处理其余依赖。\n\n> **国内加速建议**：\n> 在中国大陆地区，建议在安装命令中指定清华或阿里镜像源，以显著提升下载速度。\n> 临时使用镜像源方法：在 pip 命令后添加 `-i https:\u002F\u002Fpypi.tuna.tsinghua.edu.cn\u002Fsimple`。\n\n## 安装步骤\n\n您可以选择通过 PyPI 直接安装稳定版，或从源码构建（适合需要最新特性的开发者）。\n\n### 方式一：使用 pip 安装（推荐）\n\n执行以下命令安装 TensorFlow Federated：\n\n```bash\npip install tensorflow-federated\n```\n\n**国内用户加速安装命令：**\n\n```bash\npip install tensorflow-federated -i https:\u002F\u002Fpypi.tuna.tsinghua.edu.cn\u002Fsimple\n```\n\n### 方式二：从源码构建\n\n如果需要最新的开发版本，可以克隆仓库，并按照官方安装文档 (docs\u002Finstall.md) 使用 Bazel 构建 Python 包后再安装：\n\n```bash\ngit clone https:\u002F\u002Fgithub.com\u002Fgoogle-parfait\u002Ftensorflow-federated.git\ncd tensorflow-federated\n```\n\n## 基本使用\n\nTFF 提供了两层接口：高层的 `tff.learning`（适用于快速应用联邦学习算法）和底层的 Federated Core（FC）API（适用于自定义联邦算法）。以下是一个使用 `tff.learning` 进行简单联邦平均（Federated Averaging）的最小化示例。\n\n### 1. 导入库并准备数据\n\n首先导入必要的模块，并加载一个简单的数据集（此处以 EMNIST 为例，TFF 内置了数据加载工具）。\n\n```python\nimport collections\n\nimport tensorflow as tf\nimport tensorflow_federated as tff\n\n# 加载预处理的 EMNIST 数据集\nemnist_train, emnist_test = tff.simulation.datasets.emnist.load_data()\n\n# 定义数据预处理函数：将 EMNIST 的 OrderedDict 元素整理为 (x, y) 结构并分批\ndef preprocess(dataset):\n    def format_example(element):\n        return collections.OrderedDict(\n            x=element['pixels'], y=element['label'])\n    return dataset.map(format_example).batch(20)\n\n# 取出单个客户端的数据并预处理（稍后用于推断 input_spec）\ntrain_data = emnist_train.create_tf_dataset_for_client(emnist_train.client_ids[0])\ntrain_data = preprocess(train_data)\n```\n\n### 2. 定义模型与联邦学习过程\n\n使用 Keras 定义一个基础模型，并将其包装为 TFF 可用的联邦学习流程。\n\n```python\n# 创建一个简单的 Keras 模型\ndef create_keras_model():\n    return tf.keras.Sequential([\n        tf.keras.layers.InputLayer(input_shape=(28, 28)),\n        tf.keras.layers.Flatten(),\n        tf.keras.layers.Dense(10, activation='softmax')\n    ])\n\n# 将 Keras 模型转换为 TFF 模型：build_weighted_fed_avg 需要一个无参的 model_fn\ndef model_fn():\n    return tff.learning.models.from_keras_model(\n        create_keras_model(),\n        input_spec=train_data.element_spec,\n        loss=tf.keras.losses.SparseCategoricalCrossentropy(),\n        metrics=[tf.keras.metrics.SparseCategoricalAccuracy()]\n    )\n\n# 构建联邦平均算法（需显式指定客户端优化器）\niterative_process = tff.learning.algorithms.build_weighted_fed_avg(\n    model_fn,\n    client_optimizer_fn=tff.learning.optimizers.build_sgdm(learning_rate=0.02)\n)\n\n# 初始化全局状态\nstate = iterative_process.initialize()\n```\n\n### 3. 
执行联邦训练\n\n模拟一轮联邦训练过程。在实际场景中，`client_data` 会来自多个不同的客户端。\n\n```python\n# 模拟一轮训练 (Round)\n# 在实际应用中，这里会选取多个客户端的数据进行聚合\nclient_data = [preprocess(emnist_train.create_tf_dataset_for_client(cid)) \n               for cid in emnist_train.client_ids[:5]]\n\n# 执行下一轮训练\nstate, metrics = iterative_process.next(state, client_data)\n\nprint(f\"训练后的指标: {metrics}\")\n```\n\n通过以上步骤，您已成功运行了一个基础的联邦学习流程。您可以参考官方文档中的 `tff.learning` 和 `tff.federated.core` 教程，进一步探索自定义算法或更复杂的部署场景。","某大型医疗联盟希望联合多家医院训练一个疾病预测模型，但受限于患者隐私法规，各医院的病历数据严禁出院或上传至云端。\n\n### 没有 tensorflow-federated 时\n- **数据孤岛难以突破**：由于无法集中数据，研究人员只能在各医院本地单独训练模型，导致单个模型因数据量小而准确率低下，无法形成合力。\n- **隐私合规风险极高**：若强行构建中心服务器收集数据，不仅违反 HIPAA 等隐私法规，还面临巨大的数据泄露法律风险。\n- **算法验证成本高昂**：研发人员需手动编写复杂的分布式通信代码来模拟联邦学习流程，调试困难且极易出错，新算法从构思到验证周期长达数月。\n- **异构数据适配困难**：不同医院的设备算力和数据格式差异巨大，缺乏统一框架来屏蔽底层异构性，导致部署方案难以通用。\n\n### 使用 tensorflow-federated 后\n- **实现“数据不动模型动”**：利用 `tff.learning` 接口，直接将全局模型下发至各医院本地训练，仅上传加密后的模型参数更新，完美解决隐私合规难题。\n- **快速构建高精度全局模型**：通过内置的联邦平均算法（Federated Averaging），自动聚合多家医院的局部更新，在数据不出院的前提下显著提升了模型的泛化能力和准确率。\n- **高效实验与原型开发**：借助单机模拟运行时，开发者可在本地快速复现分布式场景，将新算法的验证周期从数月缩短至几天，大幅降低试错成本。\n- **灵活定制联邦策略**：通过底层的 Federated Core (FC) API，研究团队能轻松自定义聚合逻辑，灵活适配不同医院的算力限制和数据分布特征。\n\ntensorflow-federated 让医疗机构在不牺牲数据隐私的前提下，成功打破了数据孤岛，实现了安全、高效的协同智能进化。","https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fgoogle-parfait_tensorflow-federated_3ca20cad.png","google-parfait","Parfait","https:\u002F\u002Foss.gittoolsai.com\u002Favatars\u002Fgoogle-parfait_2c208bf2.png","Private aggregation & retrieval, federated, analytics, inference, & training from Google.",null,"federated.withgoogle.com","https:\u002F\u002Fgithub.com\u002Fgoogle-parfait",[80,84,88,92],{"name":81,"color":82,"percentage":83},"Python","#3572A5",64.3,{"name":85,"color":86,"percentage":87},"C++","#f34b7d",31.2,{"name":89,"color":90,"percentage":91},"Starlark","#76d275",4.4,{"name":93,"color":94,"percentage":95},"Shell","#89e051",0.1,2430,605,"2026-04-07T14:46:35","Apache-2.0",4,"","未说明",{"notes":104,"python":102,"dependencies":105},"README 中未直接列出具体的操作系统、GPU、内存或 Python 版本要求，仅指出包含单机模拟运行时用于实验。详细的安装指令（包括依赖版本和环境配置）需参考其文档链接 (docs\u002Finstall.md)。该工具主要依赖 TensorFlow 生态。",[106],"tensorflow",[14],"2026-03-27T02:49:30.150509","2026-04-08T12:17:52.017051",[],[112,117,122,127,132,137,142,147,152,157,162,167,172,177,182,187,192,197,202,207],{"id":113,"version":114,"summary_zh":115,"released_at":116},145761,"v0.88.0","# 版本 0.88.0\n\n## 新增\n\n*   `tff.tensorflow.to_type`。\n*   在 `framework` 下的公共 API 中新增了 `pack_args_into_struct` 和 `unpack_args_from_struct`。\n\n## 已弃用\n\n*   `tff.types.tensorflow_to_type`，请改用 `tff.tensorflow.to_type`。\n\n## 变更\n\n*   更新为使用与环境无关的方式表示数据序列。\n*   更新 JAX 计算和上下文，使其能够处理序列类型。\n*   将 `tff.types.structure_from_tensor_type_tree` 和 `tff.types.type_to_tf_tensor_specs` 移至 `tff.tensorflow` 包。\n\n## 移除\n\n*   `tff.framework.merge_cardinalities`\n*   `tff.framework.CardinalityCarrying`\n*   `tff.framework.CardinalityFreeDataDescriptor`\n*   `tff.framework.CreateDataDescriptor`\n*   `tff.framework.DataDescriptor`\n*   `tff.framework.Ingestable`","2024-09-26T16:21:47",{"id":118,"version":119,"summary_zh":120,"released_at":121},145762,"v0.87.0","# 发布版本 0.87.0\n\n## 新增\n\n*   在 `tff.learning.optimizers` 中添加了 AdamW 的实现。\n\n## 变更\n\n*   支持 `tff.learning.optimizers` 中的 `None` 梯度。这模仿了 `tf.keras.optimizers` 的行为——值为 `None` 的梯度将被跳过，其对应的优化器输出（例如动量和权重）也不会更新。\n*   修改了 `DPGroupingFederatedSum::Clamp` 的行为：现在会将负值设为 0。相关测试代码也已更新。原因在于，用于计算差分隐私噪声的敏感度是基于非负值校准的。\n*   更新教程，使其与 `tff.learning` 计算结合使用 `tff.learning.optimizers`。\n*   
`tff.simulation.datasets.TestClientData` 现仅接受叶节点不是 `tf.Tensor` 的字典。\n\n## 修复\n\n*   修复了一个 bug：每次调用 `.next()` 时，`tff.learning.optimizers.build_adafactor` 的步数计数器都会被更新两次。\n*   修复了一个 bug：当梯度的数据类型混合时，`tff.learning.optimizers.build_sgdm` 的张量学习率会失败。\n*   修复了一个 bug：不同优化器对空权重结构的行为不一致。现在，TFF 优化器在处理空权重结构时都能一致地接受，并将其视为无操作。\n*   修复了一个 bug：`tff.simulation.datasets.TestClientData.dataset_computation` 生成的数据集形状不确定。\n\n## 移除\n\n*   `tff.jax_computation`，请改用 `tff.jax.computation`。\n*   `tff.profiler`，该 API 已不再使用。\n*   移除了多个过时的教程。\n*   从 `tff.program.SavedModelFileReleaseManager` 的 `get_value` 方法参数中移除了 `structure`。\n*   移除了 `tff.learning` 对 `tf.keras.optimizers` 的支持。","2024-09-17T17:44:56",{"id":123,"version":124,"summary_zh":125,"released_at":126},145763,"v0.86.0","# 版本 0.86.0\n\n## 新增\n\n*   `tff.tensorflow.transform_args` 和 `tff.tensorflow.transform_result`，这两个函数旨在 TensorFlow 环境中实例化和执行上下文时使用。\n\n## 变更\n\n*   将 `Value` Protocol Buffer 中的 `tensor` 字段替换为 `array` 字段，并更新序列化逻辑以使用这个新字段。","2024-08-20T15:33:53",{"id":128,"version":129,"summary_zh":130,"released_at":131},145764,"v0.85.0","# 版本 0.85.0\n\n## 新增\n\n*   `dp_noise_mechanisms` 头文件和源文件：包含根据隐私参数和范数界生成 `differential_privacy::LaplaceMechanism` 或 `differential_privacy::GaussianMechanism` 的函数。这些函数均返回一个 `DPHistogramBundle` 结构体，其中包含机制、用于差分隐私开放域直方图所需的阈值，以及一个布尔值，指示是否使用了拉普拉斯噪声。\n*   将部分 TFF 执行器类添加到公共 API 中（CPPExecutorFactory、ResourceManagingExecutorFactory、RemoteExecutor、RemoteExecutorGrpcStub）。\n*   添加对 `ml_dtypes` 包中 `bfloat16` 数据类型的支持。\n\n## 修复\n\n*   修复了一个错误：此前错误地允许将 `tf.string` 作为 `tff.types.TensorType` 的数据类型。现在必须使用 `np.str_`。\n\n## 变更\n\n*   `tff.Computation` 和 `tff.framework.ConcreteComputation` 现在能够转换计算的输入参数和输出结果。\n*   `DPClosedDomainHistogram::Report` 和 `DPOpenDomainHistogram::Report`：两者均使用 `dp_noise_mechanisms` 中 `CreateDPHistogramBundle` 函数生成的 `DPHistogramBundles`。\n*   `DPGroupByFactory::CreateInternal`：当未提供 `delta` 参数时，会检查是否提供了正确的范数界以计算 L1 敏感度（用于拉普拉斯机制）。\n*   `CreateRemoteExecutorStack` 现在允许指定组成执行器，并将客户端值分配给叶执行器，使得所有叶执行器接收相同数量的客户端，除非最后一个叶执行器可能接收较少的客户端。\n*   允许 `tff.learning.programs.train_model` 接受 `should_discard_round` 函数，以决定是否应丢弃并重试某一轮次。\n\n## 移除\n\n*   `tff.structure.to_container_recursive`：此函数不应在外部使用。","2024-08-14T19:58:59",{"id":133,"version":134,"summary_zh":135,"released_at":136},145765,"v0.84.0","# 版本 0.84.0\n\n## 新增\n\n*   将部分 TFF 执行器类添加到公共 API 中（`ComposingExecutor`、`ExecutorTestBase`、`MockExecutor`、`ThreadPool`）。\n*   将一些编译器转换辅助函数添加到公共 API 中（`replace_intrinsics_with_bodies`、`unique_name_generator`、`transform_preorder`、`to_call_dominant`）。\n*   在 `CheckpointAggregator` API 中新增了一个用于获取已聚合检查点数量的方法。\n*   新增函数 `DPClosedDomainHistogram::IncrementDomainIndices`。该函数允许调用代码通过 do-while 循环遍历复合键的域。\n\n## 变更\n\n*   修复了 `tff.jax.computation` 中的一个 bug，该 bug 会在计算包含未使用参数时引发错误。\n*   修复了在使用 `tff.backends.xla` 执行栈时的一个 bug，该 bug 会在由 `tff.jax.computation` 包装的方法返回单元素结构时引发错误。\n*   修改了 `tff.learning.programs.train_model` 中模型输出的发布频率，改为每 10 轮及最后一轮发布。\n*   放宽了 `kEpsilonThreshold` 常量，并相应更新了 `DPOpenDomainHistogram` 的测试用例。\n*   修改了 `DPClosedDomainHistogram::Report()` 的行为：现在会为每个可能的键组合生成一个聚合值。对于 `GroupByAggregator` 尚未分配聚合值的复合键，将赋值为 0。未来的 CL 将在此基础上添加噪声。\n*   修改了 `tff.learning.algorithms.build_weighted_fed_avg` 函数，使其在 `use_experimental_simulation_loop=True` 且 `model_fn` 为 `tff.learning.models.FunctionalModel` 类型时，生成不同的训练图。","2024-07-29T16:08:08",{"id":138,"version":139,"summary_zh":140,"released_at":141},145766,"v0.83.0","# 版本 0.83.0\n\n## 变更\n\n*   修改了 `tff.learning.programs.train_model` 程序的逻辑，以便在程序状态中保存数据源迭代器的深拷贝。\n*   基于文件的原生程序组件不再对值进行展平和反展平操作。\n\n## 移除\n\n*   从 
`tensorflow_utils` 中移除了未使用的函数。\n*   移除了将原始 `tf.Tensor` 值序列化为 `Value` Protocol Buffer 的功能。\n*   移除了对 `dataclasses` 的部分支持。","2024-07-16T16:19:12",{"id":143,"version":144,"summary_zh":145,"released_at":146},145767,"v0.82.0","# 版本 0.82.0\n\n## 新增\n\n*   向 Array protocol buffer 添加了一个序列化的原始数组内容字段。\n*   为 `DPCompositeKeyCombiner` 增加了一个函数，用于获取序号。该功能旨在供封闭域差分隐私直方图聚合核心使用。\n*   定义了无效序号常量和默认的 `l0_bound_` 常量。\n\n## 变更\n\n*   修改了 `DPCompositeKeyCombiner` 处理无效 `l0_bound_` 值的方式。\n*   将 `DPCompositeKeyCombiner` 中的默认 `l0_bound_` 值更新为新的常量。\n*   重新组织了差分隐私直方图相关代码。此前，开放域直方图类与工厂类并列存在于 `dp_group_by_aggregator.h\u002Fcc` 文件中。现在已拆分为 `dp_open_domain_histogram.h\u002Fcc` 和 `dp_group_by_factory.h\u002Fcc`，这将便于未来添加封闭域直方图的相关代码。\n*   将 `tff.federated_secure_modular_sum` 移至 mapreduce 后端，请改用 `tff.backends.mapreduce.federated_secure_modular_sum`。","2024-06-28T17:16:27",{"id":148,"version":149,"summary_zh":150,"released_at":151},145768,"v0.81.0","# 版本 0.81.0\n\n## 新增\n\n*   一个辅助函数，用于获取张量元素的字符串向量，以帮助格式化。\n*   在 `tensor` 协议缓冲区中添加了一个 `string_val` 字段，以便显式地表示字符串值。\n\n## 变更\n\n*   将发行说明的格式（即 `RELEASE.md`）调整为基于 https:\u002F\u002Fkeepachangelog.com\u002Fen\u002F1.1.0\u002F 的规范。\n*   将对 `linfinity_bound` 的约束从 `DPGroupingFederatedSumFactory` 移至 `DPGroupByFactory`，因为封闭域直方图算法会使用 `DPGroupingFederatedSum`，但并不需要正的 `linfinity_bound`。\n\n## 移除\n\n*   对 `semantic-version` 的依赖。\n*   `tff.async_utils` 包，改用 `asyncio`。","2024-06-17T16:46:11",{"id":153,"version":154,"summary_zh":155,"released_at":156},145769,"v0.80.0","# 版本 0.80.0\n\n## 破坏性变更\n\n- 将 tools 包移动到仓库的根目录。\n- Bazel 更新至 6.5.0 版本。\n- rules_python 更新至 0.31.0 版本。\n- 删除已弃用的 tff.learning.build_federated_evaluation，其已被 tff.learning.algorithms.build_fed_eval 取代。","2024-06-08T05:47:20",{"id":158,"version":159,"summary_zh":160,"released_at":161},145770,"v0.79.0","# 版本 0.79.0\n\n## 主要功能与改进\n\n*   在 `tff.learning.models.functional_model_from_keras` 中启用对包含不可训练变量的模型的支持。\n\n## 破坏性变更\n\n*   移除了 `farmhashpy` 依赖。\n*   将 `com_github_grpc_grpc` 更新至版本 `1.50.0`。\n*   将 TFF 仓库从 https:\u002F\u002Fgithub.com\u002Ftensorflow\u002Ffederated 迁移到 https:\u002F\u002Fgithub.com\u002Fgoogle-parfait\u002Ftensorflow-federated。","2024-05-24T20:47:54",{"id":163,"version":164,"summary_zh":165,"released_at":166},145771,"v0.78.0","# Release 0.78.0\r\n\r\n## Major Features and Improvements\r\n\r\n*   Moved aggregation from https:\u002F\u002Fgithub.com\u002Fgoogle-parfait\u002Ffederated-compute\r\n    to TFF to consolidate the federated language and remove circular\r\n    dependencies.\r\n\r\n## Breaking Changes\r\n\r\n*   Updated `rules_license` to version `0.0.8`.\r\n*   Removed `elias_gamma_encode` module.\r\n*   Removed `tensorflow_compression` dependency.\r\n","2024-05-09T20:21:18",{"id":168,"version":169,"summary_zh":170,"released_at":171},145772,"v0.77.0","# Release 0.77.0\r\n\r\n## Major Features and Improvements\r\n\r\n*   Added an implementation of `__eq__()` on `building blocks`.\r\n\r\n## Bug Fixes\r\n\r\n*   Fix #4588: Target Haswell CPU architectures (`-march=haswell`) instead of\r\n    whatever is native to the build infrastructure to ensure that binaries in\r\n    the pip package and executable on Colab CPU runtimes.","2024-04-26T20:13:52",{"id":173,"version":174,"summary_zh":175,"released_at":176},145773,"v0.76.0","# Release 0.76.0\r\n\r\n## Major Features and Improvements\r\n\r\n*   Added a `Literal` to the TFF language, part 2. 
This change updates the\r\n    tracing and execution portions of TFF to begin using the `Literal`.\r\n*   Added an implementation of the Adafactor optimizer to\r\n    `tff.learning.optimizers.build_adafactor`\r\n*   Added a new field, `content`, to the `Data` proto.\r\n\r\n## Breaking Changes\r\n\r\n*   Removed the `check_foo()` methods on building blocks.\r\n*   Removed `tff.data`, this symbol is not used.\r\n\r\n## Bug Fixes\r\n\r\n*   Fix a bug where the pip package default executor stack cannot execute\r\n    computations that have `Lambda`s under `sequence_*` intrinsics.","2024-04-19T16:56:25",{"id":178,"version":179,"summary_zh":180,"released_at":181},145774,"v0.75.0","# Release 0.75.0\r\n\r\n## Major Features and Improvements\r\n\r\n*   Updated the type annotation for MaterializedValue to include the Python\r\n    scalar types in addition to the numpy scalar types.\r\n*   Added a `Literal` to the TFF language, part 1.\r\n*   Added `Literal` to the framework package.\r\n*   Extended\r\n    `tff.learning.algorithms.build_weighted_fed_avg_with_optimizer_schedule` to\r\n    support `tff.learning.models.FunctionalModel`.\r\n\r\n## Breaking Changes\r\n\r\n*   Deleted the `tff.learning.framework` namespace⚰️.\r\n\r\n## Bug Fixes\r\n\r\n*   Fixed logic for determining if a value can be cast to a specific dtype.\r\n*   Fixed a bug where repeated calls to\r\n    `FilePerUserClientData.create_tf_dataset_for_client` could blow up memory\r\n    usage","2024-04-05T20:29:40",{"id":183,"version":184,"summary_zh":185,"released_at":186},145775,"v0.74.0","# Release 0.74.0\r\n\r\n## Major Features and Improvements\r\n\r\n*   Make some of the C++ executor APIs public visibility for downstream repos.\r\n*   Moved the `DataType` protobuf object into its own module. 
Moving the\r\n    `DataType` object into its own module allows `DataType` to be used outside\r\n    of a `Computation` more easily and prevents a circular dependency between\r\n    `Computation` and `Array` which both require a `DataType`.\r\n*   Updated `build_apply_optimizer_finalizer` to allow custom reject update\r\n    function.\r\n*   Relaxed the type requirement of the attributes of `ModelWeights` to allow\r\n    assigning list or tuples of matching values to other sequence types on\r\n    `tf.keras.Model` instances.\r\n*   Improved the errors raised by JAX computations for various types.\r\n*   Updated tutorials to use recommended `tff.learning` APIs.\r\n\r\n## Breaking Changes\r\n\r\n*   Removed the runtime-agnostic support for `tf.RaggedTensor` and\r\n    `tf.SparseTensor`.\r\n\r\n**Full Changelog**: https:\u002F\u002Fgithub.com\u002Ftensorflow\u002Ffederated\u002Fcompare\u002Fv0.73.0...v0.74.0","2024-03-20T23:24:22",{"id":188,"version":189,"summary_zh":190,"released_at":191},145776,"v0.73.0","# Release 0.73.0\r\n\r\n## Major Features and Improvements\r\n\r\n*   Make some of the C++ executor APIs public visibility for downstream repos.\r\n*   `tff.learning.algorithms.build_fed_kmeans` supports floating point weights,\r\n    enabling compatibility with `tff.aggregators` using differential privacy.\r\n*   Added two new metrics aggregators:\r\n    `tff.learning.metrics.finalize_then_sample` and\r\n    `tff.learning.metrics.FinalizeThenSampleFactory`.\r\n\r\n## Breaking Changes\r\n\r\n*   Remove the ability to return `SequenceType` from `tff.federated_computation`\r\n    decorated callables.\r\n\r\n## Bug Fixes\r\n\r\n*   `tff.learning` algorithms now correctly do *not* include metrics for clients\r\n    that had zero weight due to model updates containing non-finite values.\r\n    Previously the update was rejected, but the metrics still aggregated.","2024-03-11T19:58:11",{"id":193,"version":194,"summary_zh":195,"released_at":196},145777,"v0.72.0","# Release 0.72.0\r\n\r\n## Major Features and Improvements\r\n\r\n* Added an async XLA runtime under `tff.backends.xla`. \r\n\r\n## Breaking Changes\r\n\r\n* Updated `tensorflow-privacy` version to `0.9.0`.\r\n* Removed the deprecated `type_signature` parameter from the `tff.program.ReleaseManager.release` method.\r\n\r\n","2024-02-27T16:50:10",{"id":198,"version":199,"summary_zh":200,"released_at":201},145778,"v0.71.0","# Release 0.71.0\r\n\r\n## Major Features and Improvements\r\n\r\n* Added new environment-specific packages to TFF.\r\n","2024-02-13T21:25:43",{"id":203,"version":204,"summary_zh":205,"released_at":206},145779,"v0.70.0","## Breaking Changes\r\n\r\n*   Temporarily disable `tff.program.PrefetchingDataSource` due to flakiness\r\n    from a lack of determinism.\r\n*   Removed support for invoking `infer_type` with TensorFlow values.\r\n*   Removed deprecated `tff.aggregators.federated_(min|max)`symbols, please use\r\n    `tff.federated_(min|max)` instead.\r\n*   Removed support for creating a `tff.TensorType` using a `tf.dtypes.DType`.\r\n*   Removed `tff.check_return_type`.\r\n\r\n## Bug Fixes\r\n\r\n*   Declared `OwnedValueId::INVALID_ID` as a static constexpr.","2024-02-02T19:02:30",{"id":208,"version":209,"summary_zh":210,"released_at":211},145780,"v0.69.0","# Release 0.69.0\r\n\r\n## Major Features and Improvements\r\n\r\n*   The `local_unfinalized_metrics_type` argument to\r\n    tff.learning.metrics.(secure_)sum_then_finalize is now optional (and is not\r\n    actually used). 
It will be removed in a future release.\r\n\r\n## Breaking Changes\r\n\r\n*   tff.learning.metrics.(secure_)sum_then_finalize now return polymorphic\r\n    computations. They can still be passed into algorithm builders (e.g.\r\n    tff.learning.algorithms.build_weighted_fed_avg) but to be called directly\r\n    they must first be traced with explicit types.\r\n*   Removed support for handling `tf.TensorSpec` using `to_type`, use\r\n    `tensorflow_to_type` instead.\r\n*   Removed support for calling `tff.TensorType` using a `tf.dtypes.DType`.","2024-01-23T19:39:17"]