[{"data":1,"prerenderedAt":-1},["ShallowReactive",2],{"similar-FedML-AI--FedML":3,"tool-FedML-AI--FedML":64},[4,17,27,35,43,56],{"id":5,"name":6,"github_repo":7,"description_zh":8,"stars":9,"difficulty_score":10,"last_commit_at":11,"category_tags":12,"status":16},3808,"stable-diffusion-webui","AUTOMATIC1111\u002Fstable-diffusion-webui","stable-diffusion-webui 是一个基于 Gradio 构建的网页版操作界面，旨在让用户能够轻松地在本地运行和使用强大的 Stable Diffusion 图像生成模型。它解决了原始模型依赖命令行、操作门槛高且功能分散的痛点，将复杂的 AI 绘图流程整合进一个直观易用的图形化平台。\n\n无论是希望快速上手的普通创作者、需要精细控制画面细节的设计师，还是想要深入探索模型潜力的开发者与研究人员，都能从中获益。其核心亮点在于极高的功能丰富度：不仅支持文生图、图生图、局部重绘（Inpainting）和外绘（Outpainting）等基础模式，还独创了注意力机制调整、提示词矩阵、负向提示词以及“高清修复”等高级功能。此外，它内置了 GFPGAN 和 CodeFormer 等人脸修复工具，支持多种神经网络放大算法，并允许用户通过插件系统无限扩展能力。即使是显存有限的设备，stable-diffusion-webui 也提供了相应的优化选项，让高质量的 AI 艺术创作变得触手可及。",162132,3,"2026-04-05T11:01:52",[13,14,15],"开发框架","图像","Agent","ready",{"id":18,"name":19,"github_repo":20,"description_zh":21,"stars":22,"difficulty_score":23,"last_commit_at":24,"category_tags":25,"status":16},1381,"everything-claude-code","affaan-m\u002Feverything-claude-code","everything-claude-code 是一套专为 AI 编程助手（如 Claude Code、Codex、Cursor 等）打造的高性能优化系统。它不仅仅是一组配置文件，而是一个经过长期实战打磨的完整框架，旨在解决 AI 代理在实际开发中面临的效率低下、记忆丢失、安全隐患及缺乏持续学习能力等核心痛点。\n\n通过引入技能模块化、直觉增强、记忆持久化机制以及内置的安全扫描功能，everything-claude-code 能显著提升 AI 在复杂任务中的表现，帮助开发者构建更稳定、更智能的生产级 AI 代理。其独特的“研究优先”开发理念和针对 Token 消耗的优化策略，使得模型响应更快、成本更低，同时有效防御潜在的攻击向量。\n\n这套工具特别适合软件开发者、AI 研究人员以及希望深度定制 AI 工作流的技术团队使用。无论您是在构建大型代码库，还是需要 AI 协助进行安全审计与自动化测试，everything-claude-code 都能提供强大的底层支持。作为一个曾荣获 Anthropic 黑客大奖的开源项目，它融合了多语言支持与丰富的实战钩子（hooks），让 AI 真正成长为懂上",138956,2,"2026-04-05T11:33:21",[13,15,26],"语言模型",{"id":28,"name":29,"github_repo":30,"description_zh":31,"stars":32,"difficulty_score":23,"last_commit_at":33,"category_tags":34,"status":16},2271,"ComfyUI","Comfy-Org\u002FComfyUI","ComfyUI 是一款功能强大且高度模块化的视觉 AI 引擎，专为设计和执行复杂的 Stable Diffusion 图像生成流程而打造。它摒弃了传统的代码编写模式，采用直观的节点式流程图界面，让用户通过连接不同的功能模块即可构建个性化的生成管线。\n\n这一设计巧妙解决了高级 AI 绘图工作流配置复杂、灵活性不足的痛点。用户无需具备编程背景，也能自由组合模型、调整参数并实时预览效果，轻松实现从基础文生图到多步骤高清修复等各类复杂任务。ComfyUI 拥有极佳的兼容性，不仅支持 Windows、macOS 和 Linux 全平台，还广泛适配 NVIDIA、AMD、Intel 及苹果 Silicon 等多种硬件架构，并率先支持 SDXL、Flux、SD3 等前沿模型。\n\n无论是希望深入探索算法潜力的研究人员和开发者，还是追求极致创作自由度的设计师与资深 AI 绘画爱好者，ComfyUI 都能提供强大的支持。其独特的模块化架构允许社区不断扩展新功能，使其成为当前最灵活、生态最丰富的开源扩散模型工具之一，帮助用户将创意高效转化为现实。",107662,"2026-04-03T11:11:01",[13,14,15],{"id":36,"name":37,"github_repo":38,"description_zh":39,"stars":40,"difficulty_score":23,"last_commit_at":41,"category_tags":42,"status":16},3704,"NextChat","ChatGPTNextWeb\u002FNextChat","NextChat 是一款轻量且极速的 AI 助手，旨在为用户提供流畅、跨平台的大模型交互体验。它完美解决了用户在多设备间切换时难以保持对话连续性，以及面对众多 AI 模型不知如何统一管理的痛点。无论是日常办公、学习辅助还是创意激发，NextChat 都能让用户随时随地通过网页、iOS、Android、Windows、MacOS 或 Linux 端无缝接入智能服务。\n\n这款工具非常适合普通用户、学生、职场人士以及需要私有化部署的企业团队使用。对于开发者而言，它也提供了便捷的自托管方案，支持一键部署到 Vercel 或 Zeabur 等平台。\n\nNextChat 的核心亮点在于其广泛的模型兼容性，原生支持 Claude、DeepSeek、GPT-4 及 Gemini Pro 等主流大模型，让用户在一个界面即可自由切换不同 AI 能力。此外，它还率先支持 MCP（Model Context Protocol）协议，增强了上下文处理能力。针对企业用户，NextChat 提供专业版解决方案，具备品牌定制、细粒度权限控制、内部知识库整合及安全审计等功能，满足公司对数据隐私和个性化管理的高标准要求。",87618,"2026-04-05T07:20:52",[13,26],{"id":44,"name":45,"github_repo":46,"description_zh":47,"stars":48,"difficulty_score":23,"last_commit_at":49,"category_tags":50,"status":16},2268,"ML-For-Beginners","microsoft\u002FML-For-Beginners","ML-For-Beginners 是由微软推出的一套系统化机器学习入门课程，旨在帮助零基础用户轻松掌握经典机器学习知识。这套课程将学习路径规划为 12 周，包含 26 节精炼课程和 52 道配套测验，内容涵盖从基础概念到实际应用的完整流程，有效解决了初学者面对庞大知识体系时无从下手、缺乏结构化指导的痛点。\n\n无论是希望转型的开发者、需要补充算法背景的研究人员，还是对人工智能充满好奇的普通爱好者，都能从中受益。课程不仅提供了清晰的理论讲解，还强调动手实践，让用户在循序渐进中建立扎实的技能基础。其独特的亮点在于强大的多语言支持，通过自动化机制提供了包括简体中文在内的 50 
多种语言版本，极大地降低了全球不同背景用户的学习门槛。此外，项目采用开源协作模式，社区活跃且内容持续更新，确保学习者能获取前沿且准确的技术资讯。如果你正寻找一条清晰、友好且专业的机器学习入门之路，ML-For-Beginners 将是理想的起点。",84991,"2026-04-05T10:45:23",[14,51,52,53,15,54,26,13,55],"数据工具","视频","插件","其他","音频",{"id":57,"name":58,"github_repo":59,"description_zh":60,"stars":61,"difficulty_score":10,"last_commit_at":62,"category_tags":63,"status":16},3128,"ragflow","infiniflow\u002Fragflow","RAGFlow 是一款领先的开源检索增强生成（RAG）引擎，旨在为大语言模型构建更精准、可靠的上下文层。它巧妙地将前沿的 RAG 技术与智能体（Agent）能力相结合，不仅支持从各类文档中高效提取知识，还能让模型基于这些知识进行逻辑推理和任务执行。\n\n在大模型应用中，幻觉问题和知识滞后是常见痛点。RAGFlow 通过深度解析复杂文档结构（如表格、图表及混合排版），显著提升了信息检索的准确度，从而有效减少模型“胡编乱造”的现象，确保回答既有据可依又具备时效性。其内置的智能体机制更进一步，使系统不仅能回答问题，还能自主规划步骤解决复杂问题。\n\n这款工具特别适合开发者、企业技术团队以及 AI 研究人员使用。无论是希望快速搭建私有知识库问答系统，还是致力于探索大模型在垂直领域落地的创新者，都能从中受益。RAGFlow 提供了可视化的工作流编排界面和灵活的 API 接口，既降低了非算法背景用户的上手门槛，也满足了专业开发者对系统深度定制的需求。作为基于 Apache 2.0 协议开源的项目，它正成为连接通用大模型与行业专有知识之间的重要桥梁。",77062,"2026-04-04T04:44:48",[15,14,13,26,54],{"id":65,"github_repo":66,"name":67,"description_en":68,"description_zh":69,"ai_summary_zh":69,"readme_en":70,"readme_zh":71,"quickstart_zh":72,"use_case_zh":73,"hero_image_url":74,"owner_login":75,"owner_name":76,"owner_avatar_url":77,"owner_bio":78,"owner_company":79,"owner_location":79,"owner_email":79,"owner_twitter":79,"owner_website":80,"owner_url":81,"languages":82,"stars":121,"forks":122,"last_commit_at":123,"license":124,"difficulty_score":125,"env_os":126,"env_gpu":127,"env_ram":126,"env_deps":128,"category_tags":131,"github_topics":132,"view_count":23,"oss_zip_url":79,"oss_zip_packed_at":79,"status":16,"created_at":144,"updated_at":145,"faqs":146,"releases":182},2239,"FedML-AI\u002FFedML","FedML","FEDML - The unified and scalable ML library for large-scale distributed training, model serving, and federated learning. FEDML Launch, a cross-cloud scheduler, further enables running any AI jobs on any GPU cloud or on-premise cluster. Built on this library, TensorOpera AI (https:\u002F\u002FTensorOpera.ai) is your generative AI platform at scale.","FedML 是一个统一且可扩展的机器学习开源库，旨在帮助用户在任何规模下轻松运行分布式训练、模型服务及联邦学习任务。它核心解决了跨云环境、本地集群乃至边缘设备上进行 AI 开发时面临的资源调度复杂、环境配置繁琐以及算力成本高昂等痛点。\n\n无论是希望快速微调大语言模型的开发者，还是从事分布式系统研究的研究人员，亦或是需要管理混合云基础设施的工程团队，都能从 FedML 中获益。其独特亮点在于配套的 FedML Launch 跨云调度器，能够自动匹配最具性价比的 GPU 资源，无需手动搭建复杂环境即可启动任务。此外，FedML 构建了完整的三层架构：通过 Studio 和 Job Store 提供友好的 MLOps 体验；利用高性能计算库支持大规模训练与低延迟推理；更拥有业界领先的联邦学习操作（FLOps）能力，支持在手机端与云端之间协同训练。作为 TensorOpera AI 平台的底层基石，FedML 让生成式 AI 应用的构建变得更加经济、安全且高效。","\n# FEDML Open Source: A Unified and Scalable Machine Learning Library for Running Training and Deployment Anywhere at Any Scale\n\nBacked by TensorOpera AI: Your Generative AI Platform at Scale (https:\u002F\u002FTensorOpera.ai)\n\n\u003Cdiv align=\"center\">\n \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FFedML-AI_FedML_readme_b9061afb9613.png\" width=\"600px\">\n\u003C\u002Fdiv>\n\nTensorOpera Documentation: https:\u002F\u002Fdocs.TensorOpera.ai\n\nTensorOpera Homepage: https:\u002F\u002FTensorOpera.ai\u002F \\\nTensorOpera Blog: https:\u002F\u002Fblog.TensorOpera.ai\u002F\n\nJoin the Community:\nSlack: https:\u002F\u002Fjoin.slack.com\u002Ft\u002Ffedml\u002Fshared_invite\u002Fzt-havwx1ee-a1xfOUrATNfc9DFqU~r34w \\\nDiscord: https:\u002F\u002Fdiscord.gg\u002F9xkW8ae6RV\n\n\nTensorOpera® AI (https:\u002F\u002FTensorOpera.ai) is the next-gen cloud service for LLMs & Generative AI. 
It helps developers launch complex model training, deployment, and federated learning anywhere on decentralized GPUs, multi-clouds, edge servers, and smartphones, easily, economically, and securely.

Highly integrated with the TensorOpera open source library, TensorOpera AI provides holistic support for three interconnected AI infrastructure layers: user-friendly MLOps, a well-managed scheduler, and high-performance ML libraries for running any AI job across GPU clouds.

A typical workflow is shown in the figure above. When a developer wants to run a pre-built job from Studio or the Job Store, TensorOpera® Launch swiftly pairs the AI job with the most economical GPU resources, auto-provisions them, and runs the job, eliminating complex environment setup and management. While the job is running, TensorOpera® Launch orchestrates the compute plane across different cluster topologies and configurations so that any complex AI job is supported, whether it is model training, deployment, or federated learning. TensorOpera® Open Source is the unified and scalable machine learning library for running these AI jobs anywhere, at any scale.

In the MLOps layer of TensorOpera AI:
- **TensorOpera® Studio** embraces the power of generative AI! Access popular open-source foundation models (e.g., LLMs), fine-tune them seamlessly with your own data, and deploy them scalably and cost-effectively through TensorOpera Launch on the GPU marketplace.
- **TensorOpera® Job Store** maintains a list of pre-built jobs for training, deployment, and federated learning. Developers are encouraged to run them directly with customized datasets or models on cheaper GPUs.

In the scheduler layer of TensorOpera AI:
- **TensorOpera® Launch** swiftly pairs AI jobs with the most economical GPU resources, auto-provisions them, and runs the job, eliminating complex environment setup and management. It supports a range of compute-intensive jobs for generative AI and LLMs, such as large-scale training, serverless deployments, and vector DB searches. TensorOpera Launch also facilitates on-prem cluster management and deployment on private or hybrid clouds.

In the compute layer of TensorOpera AI:
- **TensorOpera® Deploy** is a model serving platform for high scalability and low latency.
- **TensorOpera® Train** focuses on distributed training of large and foundation models.
- **TensorOpera® Federate** is a federated learning platform backed by the most popular federated learning open-source library and the world's first FLOps (federated learning Ops), offering on-device training on smartphones and across cloud GPU servers.
- **TensorOpera® Open Source** is the unified and scalable machine learning library for running these AI jobs anywhere, at any scale.
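To make the scheduler layer concrete, below is a minimal sketch of submitting a job through FedML Launch from the command line. The `fedml login` and `fedml launch` subcommands are part of the FedML CLI; the exact YAML fields (`workspace`, `job`, `computing`, and so on) follow the patterns used in FedML's launch examples but are not verified here, so treat the file as an illustrative assumption and check it against the current documentation.

```bash
# Authenticate against the platform with your account API key (placeholder value).
fedml login YOUR_API_KEY

# hello_job.yaml: illustrative launch spec; field names follow FedML Launch examples.
cat > hello_job.yaml <<'EOF'
workspace: hello_world          # local folder uploaded together with the job
job: |                          # entry commands executed on the matched GPU node
  echo "Hello FedML Launch"
  python3 hello_world.py
computing:
  minimum_num_gpus: 1           # how many GPUs the scheduler should match
  maximum_cost_per_hour: $1.5   # example cost ceiling used when pairing resources
  resource_type: A100-80G       # example requested GPU type
EOF

# Ask FedML Launch to pair the job with the cheapest matching GPUs and run it.
fedml launch hello_job.yaml
```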
# Contributing
FedML embraces and thrives through open source. We welcome all kinds of contributions from the community. Kudos to all of <a href="https://github.com/fedml-ai/fedml/graphs/contributors" target="_blank">our amazing contributors</a>!
FedML has adopted the [Contributor Covenant](https://github.com/FedML-AI/FedML/blob/master/CODE_OF_CONDUCT.md).

## Quickstart

FedML is a unified, scalable machine learning library that runs training and deployment jobs on decentralized GPUs, multi-clouds, edge servers, and smartphones at any scale. It integrates tightly with TensorOpera AI, covering the full stack from MLOps and scheduling down to the high-performance compute layer.

### Prerequisites

Before you start, make sure your development environment meets the following requirements:

*   **Operating system**: Linux (Ubuntu/CentOS), macOS, or Windows (WSL2 recommended).
*   **Python**: 3.7 to 3.10.
*   **Dependencies**:
    *   `pip` (package manager)
    *   `git` (version control)
    *   (Optional) CUDA Toolkit, version 11.0+ recommended, if you want GPU acceleration.

### Installation

You can install the FedML core library directly from PyPI. Developers in mainland China may prefer the Tsinghua or Alibaba mirror to speed up downloads.

#### Option 1: install with pip (recommended)

Using a domestic mirror:

```bash
pip install fedml -i https://pypi.tuna.tsinghua.edu.cn/simple
```

Or using the official index:

```bash
pip install fedml
```

#### Option 2: install from source (for contributors or the latest features)

```bash
git clone https://github.com/fedml-ai/fedml.git
cd fedml
pip install -e . -i https://pypi.tuna.tsinghua.edu.cn/simple
```

> **Note**: Specific feature modules (federated learning `federate`, distributed training `train`, model deployment `deploy`) may require extra dependency packages, for example `pip install fedml[federate]`. Check the official documentation for the exact install command of each module.
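A quick way to confirm the installation succeeded before moving on; this one-liner only assumes that the installed `fedml` package exposes a `__version__` attribute, which the usage examples below also rely on.

```bash
# Print the installed FedML version; a traceback here means the install failed.
python -c "import fedml; print(fedml.__version__)"
```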
### Basic usage

FedML is designed to streamline the path from local prototyping to large-scale distributed runs. The following minimal examples show the basic usage patterns.

**1. Initialize FedML**

Before running any task with FedML you normally initialize its configuration, either through command-line arguments or a config file. The following script loads FedML and prints its version:

```python
import fedml

if __name__ == "__main__":
    # Initialize the FedML environment
    fedml.init()

    # Print the installed version
    print(f"FedML Version: {fedml.__version__}")

    # Add your concrete training or inference logic here,
    # e.g. calls into the trainer or federate modules.
```

**2. Run a simple federated learning job**

FedML provides a unified interface for starting a basic federated learning flow. The following conceptual code shows how a role is defined and a job started; a real run needs your own model code:

```python
import fedml

def run_federated_learning():
    # Initialize FedML
    fedml.init()

    # Query device information
    device_info = fedml.get_device_info()
    print(f"Running on device: {device_info}")

    # Simulate starting the training flow.
    # In a real scenario this would connect to TensorOpera Launch or a local cluster.
    print("Starting federated training job...")

    # Your model training logic
    # model.train()

if __name__ == "__main__":
    run_federated_learning()
```

**3. Submit a job from the command line (with TensorOpera)**

If you have a TensorOpera account configured, the CLI can submit jobs to the cloud or to edge nodes:

```bash
# Log in to TensorOpera (first use)
fedml login

# Submit a pre-defined job
fedml run --job-name my_llm_finetune --gpu-type A100 --num-gpus 4
```

With these steps you have a working FedML environment and know the basic usage patterns. For advanced features such as cross-cloud scheduling and federated training on smartphones, see the TensorOpera documentation.

## Use case

A health-tech startup needs to train a high-accuracy lung-disease diagnosis model on data from several hospitals without sharing patient data.

### Without FedML
- **Data silos stay closed**: privacy regulations prevent centralizing the hospitals' data, so conventional centralized training is not an option and the project stalls.
- **Tedious environment setup**: engineers have to configure dependencies, drivers, and networking separately for every hospital's on-premise servers and the cloud cluster, which takes weeks and is highly error-prone.
- **Expensive, rigid scheduling**: without a unified cross-cloud scheduler, the team is locked into a single expensive cloud vendor's GPUs and cannot use idle on-premise compute or cheaper heterogeneous resources.
- **Slow model rollout**: trained models are hard to adapt to edge devices such as in-hospital diagnostic terminals, stretching the path from experiment to clinical use to months.

### With FedML
- **Federated learning in production**: the FedML Federate module updates models directly on hospital machines and mobile devices, exchanging only encrypted parameters rather than raw data, breaking the data silos in a compliant way.
- **One-click cross-cloud orchestration**: the unified FedML Launch scheduler automatically distributes training to the most economical mix of resources, including on-premise clusters and public-cloud GPUs, removing the environment-setup burden.
- **Elastic scaling cuts cost**: compute is matched dynamically to the workload across decentralized GPU resources, reducing overall training cost by more than 60%.
- **Fast edge serving**: FedML Deploy pushes the optimized model to hospital edge servers for low-latency, real-time diagnostic inference, shortening time to market.

By combining federated learning with unified cross-cloud scheduling, FedML lets the company train and ship a large-scale distributed AI model at minimal cost while preserving data privacy.
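To make the federated workflow above concrete, here is a minimal cross-silo sketch in the style of FedML's published one-line examples (e.g. `torch_fedavg_mnist_lr_one_line_example.py`). The `fedml.init`, `fedml.device.get_device`, `fedml.data.load`, `fedml.model.create`, and `FedMLRunner` calls follow that example, but exact signatures can differ between FedML versions, so treat this as an illustrative sketch to be checked against the documentation; it also assumes a `fedml_config.yaml` (passed via `--cf`) describing the dataset, model, and federation settings.

```python
# Minimal cross-silo FedAvg sketch, modeled on FedML's one-line example.
# Run on the server and on each client with a matching fedml_config.yaml, e.g.:
#   python main.py --cf fedml_config.yaml --rank 0 --role server
import fedml
from fedml import FedMLRunner

if __name__ == "__main__":
    # Parse CLI arguments and the YAML config (dataset, model, FL hyperparameters).
    args = fedml.init()

    # Pick the CPU/GPU device for this process based on the config.
    device = fedml.device.get_device(args)

    # Load the federated dataset split described in the config.
    dataset, output_dim = fedml.data.load(args)

    # Build the model named in the config (e.g. a logistic-regression baseline).
    model = fedml.model.create(args, output_dim)

    # Start the federated run; the same entry point serves both server and clients.
    fedml_runner = FedMLRunner(args, device, dataset, model)
    fedml_runner.run()
```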
## Repository facts

- Repository: FedML-AI/FedML, maintained by TensorOpera (formerly FEDML), "TensorOpera - Your Generative AI Platform at Scale" (https://github.com/FedML-AI, https://tensoropera.ai)
- Stars: 4,029 · Forks: 768 · License: Apache-2.0 · Last commit: 2026-04-04
- Languages: Python 78.5%, Jupyter Notebook 14.5%, Java 2.9%, C++ 1.4%, Shell, Dockerfile, Batchfile, Smarty, CMake, Jinja
- Categories: development framework, agent
- GitHub topics: federated-learning, deep-learning, distributed-training, edge-ai, machine-learning, on-device-training, inference-engine, mlops, model-deployment, model-serving, ai-agent

## Environment requirements (as listed)

- **OS / RAM / Python version**: not specified.
- **GPU**: decentralized GPUs, multi-cloud GPU clusters, and edge servers are supported; specific models, VRAM sizes, and CUDA versions are not stated, but the scheduler can automatically match the most economical GPU resources for large-scale training and deployment jobs.
- **Notes**: FedML is a unified machine learning library meant to run training, deployment, and federated learning at any scale, from smartphones to cloud GPU clusters. It integrates deeply with the TensorOpera AI platform, which configures environments automatically and orchestrates complex compute topologies, and it specifically supports on-device training on smartphones as well as cross-cloud federated learning on GPU servers. Concrete technical dependencies (Python version, exact library list) are not given in the README, which focuses on architecture and features; consult the official documentation or the code repository for details.

## FAQ

**Q: How do I fix boto3 / S3 errors when running the FedIoT scripts?**
A: This is an AWS S3 configuration and dependency issue. First make sure boto3 is installed (`pip install boto3`). Then check that your AWS credentials (access key and secret key) are configured, either via `aws configure` or by passing them explicitly in code. If you are testing locally, confirm that the network is up and S3 is reachable. The issue was addressed in later releases, so upgrading to the latest version is recommended. (https://github.com/FedML-AI/FedML/issues/294)

**Q: FedAvg training accuracy is stuck below 50%. What can I do?**
A: This usually comes from running old code or inconsistent parameters. Check the following: 1. upgrade to the latest version of the FedML repository; 2. verify the optimizer setting, since the benchmark uses SGD while older versions may default to Adam; 3. check for extra gradient clipping, which newer versions may apply but the benchmark branch does not; 4. confirm that `client_num_per_round` matches the benchmark. Comparing against the benchmark branch is the easiest way to reproduce the published results. (https://github.com/FedML-AI/FedML/issues/115)

**Q: I get "failed to log in to the FedML MLOps platform" or network errors when running the examples.**
A: This was fixed in later releases. Upgrade FedML to 0.7.122 or newer, typically with `pip install --upgrade fedml`, and rerun the example. (https://github.com/FedML-AI/FedML/issues/343)

**Q: The MNIST dataset cannot be downloaded automatically, or the link is dead.**
A: Use the following manual download instead of the automatic script:

```bash
rm MNIST.zip
rm -rf mnist
wget --no-check-certificate -r "https://drive.google.com/uc?export=download&id=1cU_LcBAUZvfZWveOMhG4G5Fg9uFXhVdf" -O MNIST.zip
unzip MNIST.zip
rm MNIST.zip
```

If this fails because of Google Drive's confirmation page, use a wget invocation that handles cookies, or download the file in a browser and unzip it into the expected directory. (https://github.com/FedML-AI/FedML/issues/35)

**Q: Running the MPI example script on Windows fails with "-np: command not found". Why?**
A: Running the bash script directly in PowerShell or CMD causes a syntax mismatch: on Windows, `mpiexec`/`mpirun` must be invoked directly rather than through `$(which mpirun)` inside a bash script. Run the command directly, for example `mpiexec -np <num_processes> python torch_fedavg_mnist_lr_one_line_example.py --cf config\fedml_config.yaml`, and make sure an MPI runtime such as Microsoft MPI or Intel MPI is installed. (https://github.com/FedML-AI/FedML/issues/361)

**Q: Why do changes to fedml_config.yaml (e.g. setting use_gpu to true) not take effect?**
A: In older versions of FedML's simulation mode, the code force-loads a built-in default config and ignores the YAML path you pass in; this is a known limitation. Upgrade to the latest FedML, where this behavior was fixed in the refactored architecture. If you must stay on an old version, edit the hard-coded default config path in the source. (https://github.com/FedML-AI/FedML/issues/218)

**Q: Installation fails with "subprocess-exited-with-error" or a missing file path. What now?**
A: This usually happens on Windows when file paths are too long or the directory tree is nested too deeply. Try the following: 1. clone the project to a short path near the drive root (e.g. C:\fedml); 2. run the shell with administrator privileges; 3. install a prebuilt wheel instead of building from source: `pip install fedml --only-binary=:all:`; 4. check Python compatibility, with Python 3.7 or 3.8 recommended. (https://github.com/FedML-AI/FedML/issues/530)
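Several of the answers above revolve around `fedml_config.yaml`. For orientation, here is a small sketch of such a config in the cross-silo/simulation style used by FedML's FedAvg MNIST examples; the section and key names (`common_args`, `train_args`, `client_num_per_round`, `using_gpu`, and so on) follow those examples but may vary between versions, so verify them against the config shipped with the example you are actually running.

```yaml
# Illustrative fedml_config.yaml in the style of FedML's FedAvg MNIST example.
common_args:
  training_type: "cross_silo"      # or "simulation" for single-machine runs
  random_seed: 0

data_args:
  dataset: "mnist"
  data_cache_dir: "./data/mnist"
  partition_method: "hetero"       # non-IID partition across clients

model_args:
  model: "lr"                      # logistic-regression baseline

train_args:
  federated_optimizer: "FedAvg"
  client_num_in_total: 10
  client_num_per_round: 2          # see the FedAvg-accuracy FAQ above
  comm_round: 50
  epochs: 1
  batch_size: 10
  learning_rate: 0.03

device_args:
  using_gpu: false                 # the use_gpu FAQ above applies to old versions

comm_args:
  backend: "MQTT_S3"               # MPI and gRPC are alternative backends

tracking_args:
  enable_wandb: false
```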
## Releases

### v0.8.9 (2023-10-28)

The release notes for this version reproduce the project README under the earlier branding: "FEDML Open Source: A Unified and Scalable Machine Learning Library for Running Training and Deployment Anywhere at Any Scale", backed by FEDML Nexus AI (https://nexus.fedml.ai), the next-gen cloud service for LLMs and generative AI. FEDML® stands for Foundational Ecosystem Design for Machine Learning. The described architecture is the same three-layer stack as above, with MLOps (FEDML® Studio and FEDML® Job Store), the scheduler (FEDML® Launch), and the compute layer (FEDML® Deploy, FEDML® Train, FEDML® Federate, FEDML® Open Source). Project links at that time: documentation https://doc.fedml.ai, homepage https://fedml.ai/, blog https://blog.fedml.ai/, Medium https://medium.com/@FedML, research papers https://fedml.ai/research-papers/.
### v0.8.7 (2023-07-23)

**New features**
- [CoreEngine/MLOps] Supported LLM record logging.
- [Serving] Made the inference backend for DeepSpeed work.
- [CoreEngine/DevOps] Made the public cloud server schedulable onto specific nodes.
- [DevOps] Added the fedml light Docker image and related documents.
- [DevOps] Built and pushed light Docker images and the related pipelines.
- [CoreEngine] Added a timestamp when reporting system metrics.
- [DevOps] Made the serving k8s cluster work with the latest images and updated the related chart files.
- [CoreEngine] Added the skip_log_model_net option for LLM training.
- [CoreEngine/CrossSilo] Supported customized hierarchical cross-silo.
- [Serving] Created a default model config and README file when the user does not provide them while creating a model card.
- [Serving] Allowed users to customize their token for endpoints and inference.

**Bug fixes**
- [CoreEngine] Fixed compatibility when opening a subprocess on Windows.
- [CoreEngine] Fixed the issue that MPI mode did not have client rank -1.
- [CoreEngine] Set the Python interpreter based on the currently running Python version.
- [CoreEngine] Fixed a failure to verify the pip SSL certificate when checking OTA versions.
- [CrossDevice] Fixed issues where test metrics were reported twice to MLOps and loss metrics were clipped to integers on the Beehive platform.
- [App] Fixed issues when installing flamby for the heart-disease app.
- [CoreEngine] Added a handler for when UTF-8 cannot decode the output and error strings.
- [App] Fixed scripts and requirements for the FedNLP app.
- [CoreEngine] Fixed FileExistsError being triggered for all os.makedirs calls.
- [Serving] Changed the model URL to open.fedml.ai.
- [Serving] Fixed the OnnxExporterError issue and added ONNX as a default dependency when installing fedml.
- [Serving] Fixed the issue where the local package name differed from the one shown in the MLOps UI.

**Enhancements**
- [Serving] Establish containers based on the user's config and improve code readability.
readability.","2023-07-23T09:55:29",{"id":194,"version":195,"summary_zh":196,"released_at":197},107550,"v0.8.4","# What's Changed\r\n## New Features in 0.8.4\r\nAt FedML, our mission is to remove the friction and pain points of converting your ML & AI models from R&D into production-scale-distributed and federated training & serving via our no-code MLOps platform. \r\nFedML is happy to announce our update 0.8.4.  This release is filled with new capabilities, bug fixes, and enhancements.  A key announcement is the launch of FedLLM for simplifying & reducing the costs associated with training & serving large language models. You can read more about it on our [blog post](https:\u002F\u002Fblog.fedml.ai\u002Freleasing-fedllm-build-your-own-large-language-models-on-proprietary-data-using-the-fedml-platform\u002F).\r\n\u003Cimg width=\"655\" alt=\"Screenshot 2023-06-21 at 01 42 07\" src=\"https:\u002F\u002Fgithub.com\u002FFedML-AI\u002FFedML\u002Fassets\u002F68619812\u002F6da6c1de-d649-4b7b-bb32-daf074fc05cc\">\r\n\r\n## New Features\r\n\r\n- [CoreEngine\u002FMLOps] Launched FedLLM (Federated Large Language Model) for training and serving [GitHub](https:\u002F\u002Fgithub.com\u002FFedML-AI\u002FFedML\u002Ftree\u002Fmaster\u002Fpython\u002Fapp\u002Ffedllm)  [Blog](https:\u002F\u002Fblog.fedml.ai\u002Freleasing-fedllm-build-your-own-large-language-models-on-proprietary-data-using-the-fedml-platform\u002F)\r\n\r\n- [CoreEngine] Deployed Helm Charts to our repository for packaging and ease of deploying on Kubernetes  https:\u002F\u002Fgithub.com\u002FFedML-AI\u002FFedML\u002Fblob\u002Fmaster\u002Finstallation\u002Finstall_on_k8s\u002Ffedml-edge-client-server\u002Ffedml-server-deployment-latest.tgz https:\u002F\u002Fgithub.com\u002FFedML-AI\u002FFedML\u002Fblob\u002Fmaster\u002Finstallation\u002Finstall_on_k8s\u002Ffedml-edge-client-server\u002Ffedml-client-deployment-latest.tgz\r\n\r\n- [Documents] Refactored the devops and installation structures (devops for internal pipelines, installation for external users). https:\u002F\u002Fgithub.com\u002FFedML-AI\u002FFedML\u002Ftree\u002Fmaster\u002Finstallation\r\n\r\n- [DevOps] Deployed a new fedml fedml-light docker image and related documents. [DockerHub](https:\u002F\u002Fhub.docker.com\u002Fr\u002Ffedml\u002Ffedml\u002Ftags)  [GitHub doc](https:\u002F\u002Fgithub.com\u002FFedML-AI\u002FFedML\u002Fblob\u002Fmaster\u002Fdoc\u002Fen\u002Fstarter\u002Finstallation.md)\r\n\r\n- [DevOps] Built the light docker image to deploy to the k8s cluster, refined k8s related installation sections in the document. https:\u002F\u002Fhub.docker.com\u002Fr\u002Ffedml\u002Ffedml-edge-client-server-light\u002Ftags\r\n\r\n- [CoreEngine] Added support for multiple simultaneous training jobs when using our open source MLOPs commands. \r\n\r\n- [CoreEngine] Improved training health monitoring and properly report failed status.\r\n\r\n- [CoreEngine] Added APIs for enabling, disabling and querying client agent status. 
```bash
curl -XPOST http://localhost:40800/fedml/api/v2/disableAgent -d '{}'
curl -XPOST http://localhost:40800/fedml/api/v2/enableAgent -d '{}'
curl -XPOST http://localhost:40800/fedml/api/v2/queryAgentStatus -d '{}'
```

**Bug fixes**
- [CoreEngine] Create distinct device IDs when running multiple Docker containers to simulate multiple clients or silos on one machine; the device ID is now the product ID plus a random ID.
- [CoreEngine] Fixed a device assignment issue in get_torch_device in distributed training mode.
- [Serving] Fixed exceptions that occurred when recovering at startup after upgrading.
- [CoreEngine] Fixed the device ID issue when running in Docker on macOS.
- [App] Fixed the issue in the FedProx + SAGE graph regression and graph classification apps.
- [App] Fixed an issue with the heart-disease app failing when running in MLOps.
- [App] Fixed an issue with the heart-disease app's performance curve.
- [App/Android] Improved the Android start/stop mechanism and fixed the following issues: the status display after stopping a run; when stopping a run during an unfinished round, the [MNN](https://github.com/alibaba/MNN) process now remains in the IDLE state (it previously went OFFLINE); when stopping after a round is done, training now actually stops; the Python server TAG in the logs is now correct, so the server is easy to find in the logs.

**Enhancements**
- [Serving] Tested the inference backend and check the response after model deployment finishes.
- [CoreEngine/Serving] Set the GPU option based on CUDA availability when running the inference backend, and optimized MQTT connection checking.
- [CoreEngine] Store model caches in the user's home directory when running federated learning.
- [CoreEngine] Added the device ID to the monitor message when processing inference requests.
- [CoreEngine] Report runner exceptions and ignore exceptions when the bootstrap section is missing from fedml_config.yaml.

### v0.8.3 (2023-04-23)

**New features**
- [CoreEngine/MLOps] Introduced the FedML OTA (over-the-air) upgrade mechanism for the training and serving platforms.
- [Documents] Added guidance on the OTA mechanism to the user guide.

**Bug fixes**
- [Serving] Fixed exceptions that occurred when activating model inference.
- [CoreEngine] Fixed aggregator exceptions when running MPI scripts.
- [Documents] Fixed broken links in the user guide.
- [CoreEngine] Check whether the current job is empty in the get_current_job_status API.
- [CoreEngine] Fixed high CPU usage when the reload option was enabled in the client API.

**Enhancements**
- [Serving] Improved data syncing between the Redis server and the SQLite database.
- [Serving] Use a triple (endpoint name / model name / model version) to identify each inference API request.
- [DevOps] Updated the Jenkinsfile to automate building and deploying the model-serving Docker image to the k8s cluster.
- [Serving] Stop the model monitor when deactivating or deleting a model deployment.
- [Serving] Check the endpoint status when recovering on startup.
- [CoreEngine] Refactored the OTA upgrade process for improved robustness.
- [CoreEngine] Attach logs to the new run ID when starting a new run or deploying a model.
- [CoreEngine] Refined upgrade status messages for clarity.

### v0.8.2 (2023-03-31)

**New features**
- [CoreEngine/MLOps] Refactored the entire serving platform so it runs more smoothly on a Kubernetes cluster.

**Bug fixes**
- [Training] Fixed the training status still showing as running after training finished.
- [Training] Fixed the issue that the Parrot platform could not collect and analyze metrics, events, and logs.
- [CoreEngine] Made the device ID unique inside a Docker container.

**Enhancements**
- [CoreEngine/MLOps] Fixed printed logs not showing on the MLOps distributed logging platform.
- [CoreEngine/MLOps] Use the bootstrap script to upgrade the FedML version when publishing a new pip package is not needed.

### v0.8.0 (2023-03-23): FedML Open and Collaborative AI Platform

Train, deploy, monitor, and improve machine learning models anywhere (edge/cloud), powered by collaboration on combined data, models, and computing resources.
![image](https://user-images.githubusercontent.com/7238845/227211348-87a9fa62-1ef0-4c7d-9e5f-12e3dd2fa154.png)

**Feature overview**
1. MLOps support (https://open.fedml.ai).
2. Multiple scenarios:
   - FedML Octopus: cross-silo federated learning
   - FedML Beehive: cross-device federated learning
   - FedML Parrot: FL simulation with a single process or distributed computing, for a smooth migration from research to production
   - FedML Spider: federated learning in web browsers
3. Support for any machine learning framework: PyTorch, TensorFlow, JAX with Haiku, and MXNet.
4. Diverse communication backends (MPI, gRPC, PyTorch RPC, MQTT + S3).
5. Differential privacy (CDP: central DP; LDP: local DP).
6. Attacker (API: fedml.core.FedMLAttacker); README: [python/fedml/core/security/readme.md](https://github.com/FedML-AI/FedML/blob/master/python/fedml/core/security/readme.md)
7. Defender (API: fedml.core.FedMLDefender); README: [python/fedml/core/security/readme.md](https://github.com/FedML-AI/FedML/blob/master/python/fedml/core/security/readme.md)
8. Secure aggregation (multi-party computation): [cross_silo/light_sec_agg_example](https://github.com/FedML-AI/FedML/blob/master/python/examples/cross_silo/light_sec_agg_example)
9. The [FedML/python/app](https://github.com/FedML-AI/FedML/blob/master/python/app) folder provides applications in real-world settings.
10. Federated model inference at MLOps (https://open.fedml.ai).

For more detailed instructions, please refer to https://doc.fedml.ai/

**New features**
- [Serving] Made all serving pipelines work: device login, model creation, model packaging, model pushing, model deployment, and model monitoring.
- [Serving] Made all three entry points for creating model cards work: from the trained model list, from the model-card creation web page, and from the related `fedml model` CLI.
- [OpenSource] Formally released all previous work as v0.8.0: training, security, aggregator, communication backends, MQTT optimization, metrics tracing, event tracing, and real-time logs.

**Bug fixes**
- [CoreEngine] Fixed a CLI engine error when running simulation.
- [Serving] Adjusted the training code to follow the ONNX sequence rule.
- [Serving] Fixed a URL error in the model serving platform.

**Enhancements**
- [CoreEngine/MLOps][log] Format the log time as NTP time.
- [CoreEngine/MLOps] Show a progress bar and the size of the transferred data in the log when the client downloads or uploads a model.
- [CoreEngine] Client optimization for weak or disconnected networks.

### fedml_v0.6_before_fundraising (2022-04-30)

No release notes were provided for this tag.