[{"data":1,"prerenderedAt":-1},["ShallowReactive",2],{"similar-alibaba--FederatedScope":3,"tool-alibaba--FederatedScope":61},[4,18,26,36,44,53],{"id":5,"name":6,"github_repo":7,"description_zh":8,"stars":9,"difficulty_score":10,"last_commit_at":11,"category_tags":12,"status":17},4358,"openclaw","openclaw\u002Fopenclaw","OpenClaw 是一款专为个人打造的本地化 AI 助手，旨在让你在自己的设备上拥有完全可控的智能伙伴。它打破了传统 AI 助手局限于特定网页或应用的束缚，能够直接接入你日常使用的各类通讯渠道，包括微信、WhatsApp、Telegram、Discord、iMessage 等数十种平台。无论你在哪个聊天软件中发送消息，OpenClaw 都能即时响应，甚至支持在 macOS、iOS 和 Android 设备上进行语音交互，并提供实时的画布渲染功能供你操控。\n\n这款工具主要解决了用户对数据隐私、响应速度以及“始终在线”体验的需求。通过将 AI 部署在本地，用户无需依赖云端服务即可享受快速、私密的智能辅助，真正实现了“你的数据，你做主”。其独特的技术亮点在于强大的网关架构，将控制平面与核心助手分离，确保跨平台通信的流畅性与扩展性。\n\nOpenClaw 非常适合希望构建个性化工作流的技术爱好者、开发者，以及注重隐私保护且不愿被单一生态绑定的普通用户。只要具备基础的终端操作能力（支持 macOS、Linux 及 Windows WSL2），即可通过简单的命令行引导完成部署。如果你渴望拥有一个懂你",349277,3,"2026-04-06T06:32:30",[13,14,15,16],"Agent","开发框架","图像","数据工具","ready",{"id":19,"name":20,"github_repo":21,"description_zh":22,"stars":23,"difficulty_score":10,"last_commit_at":24,"category_tags":25,"status":17},3808,"stable-diffusion-webui","AUTOMATIC1111\u002Fstable-diffusion-webui","stable-diffusion-webui 是一个基于 Gradio 构建的网页版操作界面，旨在让用户能够轻松地在本地运行和使用强大的 Stable Diffusion 图像生成模型。它解决了原始模型依赖命令行、操作门槛高且功能分散的痛点，将复杂的 AI 绘图流程整合进一个直观易用的图形化平台。\n\n无论是希望快速上手的普通创作者、需要精细控制画面细节的设计师，还是想要深入探索模型潜力的开发者与研究人员，都能从中获益。其核心亮点在于极高的功能丰富度：不仅支持文生图、图生图、局部重绘（Inpainting）和外绘（Outpainting）等基础模式，还独创了注意力机制调整、提示词矩阵、负向提示词以及“高清修复”等高级功能。此外，它内置了 GFPGAN 和 CodeFormer 等人脸修复工具，支持多种神经网络放大算法，并允许用户通过插件系统无限扩展能力。即使是显存有限的设备，stable-diffusion-webui 也提供了相应的优化选项，让高质量的 AI 艺术创作变得触手可及。",162132,"2026-04-05T11:01:52",[14,15,13],{"id":27,"name":28,"github_repo":29,"description_zh":30,"stars":31,"difficulty_score":32,"last_commit_at":33,"category_tags":34,"status":17},1381,"everything-claude-code","affaan-m\u002Feverything-claude-code","everything-claude-code 是一套专为 AI 编程助手（如 Claude Code、Codex、Cursor 等）打造的高性能优化系统。它不仅仅是一组配置文件，而是一个经过长期实战打磨的完整框架，旨在解决 AI 
代理在实际开发中面临的效率低下、记忆丢失、安全隐患及缺乏持续学习能力等核心痛点。\n\n通过引入技能模块化、直觉增强、记忆持久化机制以及内置的安全扫描功能，everything-claude-code 能显著提升 AI 在复杂任务中的表现，帮助开发者构建更稳定、更智能的生产级 AI 代理。其独特的“研究优先”开发理念和针对 Token 消耗的优化策略，使得模型响应更快、成本更低，同时有效防御潜在的攻击向量。\n\n这套工具特别适合软件开发者、AI 研究人员以及希望深度定制 AI 工作流的技术团队使用。无论您是在构建大型代码库，还是需要 AI 协助进行安全审计与自动化测试，everything-claude-code 都能提供强大的底层支持。作为一个曾荣获 Anthropic 黑客大奖的开源项目，它融合了多语言支持与丰富的实战钩子（hooks），让 AI 真正成长为懂上",160784,2,"2026-04-19T11:32:54",[14,13,35],"语言模型",{"id":37,"name":38,"github_repo":39,"description_zh":40,"stars":41,"difficulty_score":32,"last_commit_at":42,"category_tags":43,"status":17},2271,"ComfyUI","Comfy-Org\u002FComfyUI","ComfyUI 是一款功能强大且高度模块化的视觉 AI 引擎，专为设计和执行复杂的 Stable Diffusion 图像生成流程而打造。它摒弃了传统的代码编写模式，采用直观的节点式流程图界面，让用户通过连接不同的功能模块即可构建个性化的生成管线。\n\n这一设计巧妙解决了高级 AI 绘图工作流配置复杂、灵活性不足的痛点。用户无需具备编程背景，也能自由组合模型、调整参数并实时预览效果，轻松实现从基础文生图到多步骤高清修复等各类复杂任务。ComfyUI 拥有极佳的兼容性，不仅支持 Windows、macOS 和 Linux 全平台，还广泛适配 NVIDIA、AMD、Intel 及苹果 Silicon 等多种硬件架构，并率先支持 SDXL、Flux、SD3 等前沿模型。\n\n无论是希望深入探索算法潜力的研究人员和开发者，还是追求极致创作自由度的设计师与资深 AI 绘画爱好者，ComfyUI 都能提供强大的支持。其独特的模块化架构允许社区不断扩展新功能，使其成为当前最灵活、生态最丰富的开源扩散模型工具之一，帮助用户将创意高效转化为现实。",109154,"2026-04-18T11:18:24",[14,15,13],{"id":45,"name":46,"github_repo":47,"description_zh":48,"stars":49,"difficulty_score":32,"last_commit_at":50,"category_tags":51,"status":17},6121,"gemini-cli","google-gemini\u002Fgemini-cli","gemini-cli 是一款由谷歌推出的开源 AI 命令行工具，它将强大的 Gemini 大模型能力直接集成到用户的终端环境中。对于习惯在命令行工作的开发者而言，它提供了一条从输入提示词到获取模型响应的最短路径，无需切换窗口即可享受智能辅助。\n\n这款工具主要解决了开发过程中频繁上下文切换的痛点，让用户能在熟悉的终端界面内直接完成代码理解、生成、调试以及自动化运维任务。无论是查询大型代码库、根据草图生成应用，还是执行复杂的 Git 操作，gemini-cli 都能通过自然语言指令高效处理。\n\n它特别适合广大软件工程师、DevOps 人员及技术研究人员使用。其核心亮点包括支持高达 100 万 token 的超长上下文窗口，具备出色的逻辑推理能力；内置 Google 搜索、文件操作及 Shell 命令执行等实用工具；更独特的是，它支持 MCP（模型上下文协议），允许用户灵活扩展自定义集成，连接如图像生成等外部能力。此外，个人谷歌账号即可享受免费的额度支持，且项目基于 Apache 2.0 
协议完全开源，是提升终端工作效率的理想助手。",100752,"2026-04-10T01:20:03",[52,13,15,14],"插件",{"id":54,"name":55,"github_repo":56,"description_zh":57,"stars":58,"difficulty_score":32,"last_commit_at":59,"category_tags":60,"status":17},4721,"markitdown","microsoft\u002Fmarkitdown","MarkItDown 是一款由微软 AutoGen 团队打造的轻量级 Python 工具，专为将各类文件高效转换为 Markdown 格式而设计。它支持 PDF、Word、Excel、PPT、图片（含 OCR）、音频（含语音转录）、HTML 乃至 YouTube 链接等多种格式的解析，能够精准提取文档中的标题、列表、表格和链接等关键结构信息。\n\n在人工智能应用日益普及的今天，大语言模型（LLM）虽擅长处理文本，却难以直接读取复杂的二进制办公文档。MarkItDown 恰好解决了这一痛点，它将非结构化或半结构化的文件转化为模型“原生理解”且 Token 效率极高的 Markdown 格式，成为连接本地文件与 AI 分析 pipeline 的理想桥梁。此外，它还提供了 MCP（模型上下文协议）服务器，可无缝集成到 Claude Desktop 等 LLM 应用中。\n\n这款工具特别适合开发者、数据科学家及 AI 研究人员使用，尤其是那些需要构建文档检索增强生成（RAG）系统、进行批量文本分析或希望让 AI 助手直接“阅读”本地文件的用户。虽然生成的内容也具备一定可读性，但其核心优势在于为机器",93400,"2026-04-06T19:52:38",[52,14],{"id":62,"github_repo":63,"name":64,"description_en":65,"description_zh":66,"ai_summary_zh":66,"readme_en":67,"readme_zh":68,"quickstart_zh":69,"use_case_zh":70,"hero_image_url":71,"owner_login":72,"owner_name":73,"owner_avatar_url":74,"owner_bio":75,"owner_company":76,"owner_location":76,"owner_email":76,"owner_twitter":76,"owner_website":77,"owner_url":78,"languages":79,"stars":96,"forks":97,"last_commit_at":98,"license":99,"difficulty_score":10,"env_os":100,"env_gpu":101,"env_ram":102,"env_deps":103,"category_tags":111,"github_topics":112,"view_count":32,"oss_zip_url":76,"oss_zip_packed_at":76,"status":17,"created_at":116,"updated_at":117,"faqs":118,"releases":148},9653,"alibaba\u002FFederatedScope","FederatedScope","An easy-to-use federated learning platform","FederatedScope 是一款由阿里巴巴开源的联邦学习平台，旨在让数据隐私保护下的协同建模变得简单高效。在数据孤岛日益严重且隐私法规趋严的背景下，它解决了多方在不共享原始数据的前提下，如何安全、灵活地联合训练人工智能模型的难题。\n\n该平台特别适合人工智能研究人员、算法工程师以及希望探索隐私计算技术的开发者使用。无论是学术界的理论验证，还是工业界的实际落地，FederatedScope 都能提供强有力的支持。其核心亮点在于采用了“事件驱动”的系统架构，这不仅赋予了系统极高的灵活性，方便用户根据需求定制复杂的联邦学习流程，还内置了丰富的功能模块，涵盖了从个性化联邦学习到超参数优化、甚至防御后门攻击等多种前沿场景。\n\n此外，FederatedScope 拥有完善的文档和在线试玩环境（Playground），用户无需繁琐的配置即可通过 JupyterLab 或 Google Colab 
快速上手。凭借其在顶会论文中的技术积淀和对易用性的极致追求，FederatedScope 正成为连接联邦学习理论与实践的重要桥梁，帮助用户更安全、有效地挖掘数据价值。","\u003Ch1 align=\"center\">\n    \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Falibaba_FederatedScope_readme_849a91ef5cec.png\" width=\"400\" alt=\"federatedscope-logo\">\n\u003C\u002Fh1>\n\n![](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002Flanguage-python-blue.svg)\n![](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002Flicense-Apache-000000.svg)\n[![Website](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002Fwebsite-FederatedScope-0000FF)](https:\u002F\u002Ffederatedscope.io\u002F)\n[![Playground](https:\u002F\u002Fshields.io\u002Fbadge\u002FJupyterLab-Enjoy%20Your%20FL%20Journey!-F37626?logo=jupyter)](https:\u002F\u002Ftry.federatedscope.io\u002F)\n[![Contributing](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FPRs-welcome-brightgreen.svg)](https:\u002F\u002Ffederatedscope.io\u002Fdocs\u002Fcontributor\u002F)\n\nFederatedScope is a comprehensive federated learning platform that provides convenient usage and flexible customization for various federated learning tasks in both academia and industry.  
Based on an event-driven architecture, FederatedScope integrates rich collections of functionalities to satisfy the burgeoning demands from federated learning, and aims to build up an easy-to-use platform for promoting learning safely and effectively.\n\nA detailed tutorial is provided on our website: [federatedscope.io](https:\u002F\u002Ffederatedscope.io\u002F)\n\nYou can try FederatedScope via [FederatedScope Playground](https:\u002F\u002Ftry.federatedscope.io\u002F) or [Google Colab](https:\u002F\u002Fcolab.research.google.com\u002Fgithub\u002Falibaba\u002FFederatedScope).\n\n| [Code Structure](#code-structure) | [Quick Start](#quick-start) | [Advanced](#advanced) | [Documentation](#documentation) | [Publications](#publications) | [Contributing](#contributing) | \n\n## News\n- ![new](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Falibaba_FederatedScope_readme_b2e9abe195d0.png) [05-17-2023] Our paper [FS-REAL](https:\u002F\u002Farxiv.org\u002Fabs\u002F2303.13363) has been accepted by KDD'2023!\n- ![new](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Falibaba_FederatedScope_readme_b2e9abe195d0.png) [05-17-2023] Our benchmark paper for FL backdoor attacks [Backdoor Attacks Bench](https:\u002F\u002Farxiv.org\u002Fabs\u002F2302.01677) has been accepted by KDD'2023!\n- ![new](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Falibaba_FederatedScope_readme_b2e9abe195d0.png) [05-17-2023] Our paper [Communication Efficient and Differentially Private Logistic Regression under the Distributed Setting]() has been accepted by KDD'2023!\n- ![new](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Falibaba_FederatedScope_readme_b2e9abe195d0.png) [04-25-2023] Our paper [pFedGate](https:\u002F\u002Farxiv.org\u002Fabs\u002F2305.02776) has been accepted by ICML'2023!\n- ![new](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Falibaba_FederatedScope_readme_b2e9abe195d0.png) [04-25-2023] Our benchmark paper for FedHPO 
[FedHPO-Bench](https:\u002F\u002Farxiv.org\u002Fabs\u002F2206.03966) has been accepted by ICML'2023!\n- ![new](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Falibaba_FederatedScope_readme_b2e9abe195d0.png) [04-03-2023] We release FederatedScope v0.3.0!\n- [02-10-2022] Our [paper](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2204.05011.pdf) elaborating on FederatedScope has been accepted by VLDB'23!\n- [10-05-2022] Our benchmark paper for personalized FL, [pFL-Bench](https:\u002F\u002Farxiv.org\u002Fabs\u002F2206.03655) has been accepted by NeurIPS'22, Dataset and Benchmark Track!\n- [08-18-2022] Our KDD 2022 [paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2204.05562) on federated graph learning received the KDD Best Paper Award for the ADS track!\n- [07-30-2022] We release FederatedScope v0.2.0! \n- [06-17-2022] We release **pFL-Bench**, a comprehensive benchmark for personalized Federated Learning (pFL), containing 10+ datasets and 20+ baselines. [[code](https:\u002F\u002Fgithub.com\u002Falibaba\u002FFederatedScope\u002Ftree\u002Fmaster\u002Fbenchmark\u002FpFL-Bench), [pdf](https:\u002F\u002Farxiv.org\u002Fabs\u002F2206.03655)]\n- [06-17-2022] We release **FedHPO-Bench**, a benchmark suite for studying federated hyperparameter optimization. [[code](https:\u002F\u002Fgithub.com\u002Falibaba\u002FFederatedScope\u002Ftree\u002Fmaster\u002Fbenchmark\u002FFedHPOBench), [pdf](https:\u002F\u002Farxiv.org\u002Fabs\u002F2206.03966)]\n- [06-17-2022] We release **B-FHTL**, a benchmark suite for studying federated hetero-task learning. [[code](https:\u002F\u002Fgithub.com\u002Falibaba\u002FFederatedScope\u002Ftree\u002Fmaster\u002Fbenchmark\u002FB-FHTL), [pdf](https:\u002F\u002Farxiv.org\u002Fabs\u002F2206.03436)]\n- [06-13-2022] Our project came under attack; the issue has been resolved. 
[More details](https:\u002F\u002Fgithub.com\u002Falibaba\u002FFederatedScope\u002Fblob\u002Fmaster\u002Fdoc\u002Fnews\u002F06-13-2022_Declaration_of_Emergency.txt).\n- [05-25-2022] Our paper [FederatedScope-GNN](https:\u002F\u002Farxiv.org\u002Fabs\u002F2204.05562) has been accepted by KDD'2022!\n- [05-06-2022] We release FederatedScope v0.1.0! \n\n## Code Structure\n```\nFederatedScope\n├── federatedscope\n│   ├── core\n│   |   ├── workers              # Behaviors of participants (i.e., server and clients)\n│   |   ├── trainers             # Details of local training\n│   |   ├── aggregators          # Details of federated aggregation\n│   |   ├── configs              # Customizable configurations\n│   |   ├── monitors             # The monitor module for logging and demonstrating\n│   |   ├── communication.py     # Implementation of communication among participants\n│   |   ├── fed_runner.py        # The runner for building and running an FL course\n│   |   ├── ... ..\n│   ├── cv                       # Federated learning in CV\n│   ├── nlp                      # Federated learning in NLP\n│   ├── gfl                      # Graph federated learning\n│   ├── autotune                 # Auto-tuning for federated learning\n│   ├── vertical_fl              # Vertical federated learning\n│   ├── contrib\n│   ├── main.py\n│   ├── ... ...
\n├── scripts                      # Scripts for reproducing existing algorithms\n├── benchmark                    # We release several benchmarks for convenient and fair comparisons\n├── doc                          # For automatic documentation\n├── environment                  # Installation requirements and provided docker files\n├── materials                    # Materials of related topics (e.g., paper lists)\n│   ├── notebook                        \n│   ├── paper_list                                        \n│   ├── tutorial                                       \n│   ├── ... ...                                      \n├── tests                        # Unittest modules for continuous integration\n├── LICENSE\n└── setup.py\n```\n\n## Quick Start\n\nWe provide an end-to-end example for users to start running a standard FL course with FederatedScope.\n\n### Step 1. Installation\n\nFirst of all, users need to clone the source code and install the required packages (we suggest python version >= 3.9). 
You can choose between the following two installation methods (via docker or conda) to install FederatedScope.\n\n```bash\ngit clone https:\u002F\u002Fgithub.com\u002Falibaba\u002FFederatedScope.git\ncd FederatedScope\n```\n#### Use Docker\n\nYou can build a docker image and run within the docker environment (cuda 11 and torch 1.10):\n\n```\ndocker build -f environment\u002Fdocker_files\u002Ffederatedscope-torch1.10.Dockerfile -t alibaba\u002Ffederatedscope:base-env-torch1.10 .\ndocker run --gpus device=all --rm -it --name \"fedscope\" -w $(pwd) alibaba\u002Ffederatedscope:base-env-torch1.10 \u002Fbin\u002Fbash\n```\nIf you need to run downstream tasks such as graph FL, change the requirements\u002Fdocker file names to the application variants when executing the above commands:\n```\n# environment\u002Frequirements-torch1.10.txt -> \nenvironment\u002Frequirements-torch1.10-application.txt\n\n# environment\u002Fdocker_files\u002Ffederatedscope-torch1.10.Dockerfile ->\nenvironment\u002Fdocker_files\u002Ffederatedscope-torch1.10-application.Dockerfile\n```\nNote: You can choose to use cuda 10 and torch 1.8 by changing `torch1.10` to `torch1.8`.\nThe docker images are based on nvidia-docker. Please pre-install the NVIDIA drivers and `nvidia-docker2` on the host machine. See more details [here](https:\u002F\u002Fgithub.com\u002Falibaba\u002FFederatedScope\u002Ftree\u002Fmaster\u002Fenvironment\u002Fdocker_files).\n\n#### Use Conda\n\nWe recommend using a new virtual environment to install FederatedScope:\n\n```bash\nconda create -n fs python=3.9\nconda activate fs\n```\n\nIf your backend is torch, please install torch in advance ([torch-get-started](https:\u002F\u002Fpytorch.org\u002Fget-started\u002Flocally\u002F)). 
For example, if your cuda version is 11.3, please execute the following command:\n\n```bash\nconda install -y pytorch=1.10.1 torchvision=0.11.2 torchaudio=0.10.1 torchtext=0.11.1 cudatoolkit=11.3 -c pytorch -c conda-forge\n```\n\nFor users with Apple M1 chips:\n```bash\nconda install pytorch torchvision torchaudio -c pytorch\n# Downgrade torchvision to avoid segmentation fault\npython -m pip install torchvision==0.11.3\n```\n\nFinally, after the backend is installed, you can install FederatedScope from `source`:\n\n##### From source\n\n```bash\n# Editable mode\npip install -e .\n\n# Or (developers for dev mode)\npip install -e .[dev]\npre-commit install\n```\n\nNow, you have successfully installed the minimal version of FederatedScope. (**Optional**) For the application version including graph, nlp and speech, run:\n\n```bash\nbash environment\u002Fextra_dependencies_torch1.10-application.sh\n```\n\n### Step 2. Prepare datasets\n\nTo run an FL task, users should prepare a dataset. \nThe DataZoo provided in FederatedScope can help to automatically download and preprocess widely-used public datasets for various FL applications, including CV, NLP, graph learning, recommendation, etc. Users can directly specify `cfg.data.type = DATASET_NAME` in the configuration. For example, \n\n```bash\ncfg.data.type = 'femnist'\n```\n\nTo use customized datasets, you need to prepare the datasets following a certain format and register them. Please refer to [Customized Datasets](https:\u002F\u002Ffederatedscope.io\u002Fdocs\u002Fown-case\u002F#data) for more details.\n\n### Step 3. Prepare models\n\nThen, users should specify the model architecture that will be trained in the FL course.\nFederatedScope provides a ModelZoo that contains the implementation of widely adopted model architectures for various FL applications. Users can set up `cfg.model.type = MODEL_NAME` to apply a specific model architecture in FL tasks. 
For example,\n\n```yaml\ncfg.model.type = 'convnet2'\n```\n\nFederatedScope allows users to use customized models via registration. Please refer to [Customized Models](https:\u002F\u002Ffederatedscope.io\u002Fdocs\u002Fown-case\u002F#model) for more details about how to customize a model architecture.\n\n### Step 4. Start running an FL task\n\nNote that FederatedScope provides a unified interface for both standalone mode and distributed mode, and allows users to switch between them via configuration. \n\n#### Standalone mode\n\nThe standalone mode in FederatedScope simulates multiple participants (servers and clients) on a single device, while participants' data are isolated from each other and their models might be shared via message passing. \n\nHere we demonstrate how to run a standard FL task with FederatedScope, setting `cfg.data.type = 'FEMNIST'` and `cfg.model.type = 'ConvNet2'` to run vanilla FedAvg for an image classification task. Users can customize training configurations, such as `cfg.federate.total_round_num`, `cfg.dataloader.batch_size`, and `cfg.train.optimizer.lr`, in the configuration (a .yaml file), and run a standard FL task as: \n\n```bash\n# Run with default configurations\npython federatedscope\u002Fmain.py --cfg scripts\u002Fexample_configs\u002Ffemnist.yaml\n# Or with custom configurations\npython federatedscope\u002Fmain.py --cfg scripts\u002Fexample_configs\u002Ffemnist.yaml federate.total_round_num 50 dataloader.batch_size 128\n```\n\nThen you can observe some monitored metrics during the training process as:\n\n```\nINFO: Server has been set up ...\nINFO: Model meta-info: \u003Cclass 'federatedscope.cv.model.cnn.ConvNet2'>.\n... ...\nINFO: Client has been set up ...\nINFO: Model meta-info: \u003Cclass 'federatedscope.cv.model.cnn.ConvNet2'>.\n... 
...\nINFO: {'Role': 'Client #5', 'Round': 0, 'Results_raw': {'train_loss': 207.6341676712036, 'train_acc': 0.02, 'train_total': 50, 'train_loss_regular': 0.0, 'train_avg_loss': 4.152683353424072}}\nINFO: {'Role': 'Client #1', 'Round': 0, 'Results_raw': {'train_loss': 209.0940284729004, 'train_acc': 0.02, 'train_total': 50, 'train_loss_regular': 0.0, 'train_avg_loss': 4.1818805694580075}}\nINFO: {'Role': 'Client #8', 'Round': 0, 'Results_raw': {'train_loss': 202.24929332733154, 'train_acc': 0.04, 'train_total': 50, 'train_loss_regular': 0.0, 'train_avg_loss': 4.0449858665466305}}\nINFO: {'Role': 'Client #6', 'Round': 0, 'Results_raw': {'train_loss': 209.43883895874023, 'train_acc': 0.06, 'train_total': 50, 'train_loss_regular': 0.0, 'train_avg_loss': 4.1887767791748045}}\nINFO: {'Role': 'Client #9', 'Round': 0, 'Results_raw': {'train_loss': 208.83140087127686, 'train_acc': 0.0, 'train_total': 50, 'train_loss_regular': 0.0, 'train_avg_loss': 4.1766280174255375}}\nINFO: ----------- Starting a new training round (Round #1) -------------\n... ...\nINFO: Server: Training is finished! Starting evaluation.\nINFO: Client #1: (Evaluation (test set) at Round #20) test_loss is 163.029045\n... ...\nINFO: Server: Final evaluation is finished! Starting merging results.\n... ...\n```\n\n#### Distributed mode\n\nThe distributed mode in FederatedScope denotes running multiple procedures to build up an FL course, where each procedure plays as a participant (server or client) that instantiates its model and loads its data. 
The communication between participants is already provided by the communication module of FederatedScope.\n\nTo run with distributed mode, you only need to:\n\n- Prepare an isolated data file and set up `cfg.data.file_path = PATH\u002FTO\u002FDATA` for each participant;\n- Change `cfg.federate.mode = 'distributed'`, and specify the role of each participant by `cfg.distribute.role = 'server'\u002F'client'`.\n- Set up a valid address by `cfg.distribute.server_host\u002Fclient_host = x.x.x.x` and `cfg.distribute.server_port\u002Fclient_port = xxxx`. (Note that for a server, you need to set up `server_host` and `server_port` for listening to messages, while for a client, you need to set up `client_host` and `client_port` for listening as well as `server_host` and `server_port` for joining an FL course.)\n\nWe prepare a synthetic example for running with distributed mode:\n\n```bash\n# For server\npython federatedscope\u002Fmain.py --cfg scripts\u002Fdistributed_scripts\u002Fdistributed_configs\u002Fdistributed_server.yaml data.file_path 'PATH\u002FTO\u002FDATA' distribute.server_host x.x.x.x distribute.server_port xxxx\n\n# For clients\npython federatedscope\u002Fmain.py --cfg scripts\u002Fdistributed_scripts\u002Fdistributed_configs\u002Fdistributed_client_1.yaml data.file_path 'PATH\u002FTO\u002FDATA' distribute.server_host x.x.x.x distribute.server_port xxxx distribute.client_host x.x.x.x distribute.client_port xxxx\npython federatedscope\u002Fmain.py --cfg scripts\u002Fdistributed_scripts\u002Fdistributed_configs\u002Fdistributed_client_2.yaml data.file_path 'PATH\u002FTO\u002FDATA' distribute.server_host x.x.x.x distribute.server_port xxxx distribute.client_host x.x.x.x distribute.client_port xxxx\npython federatedscope\u002Fmain.py --cfg scripts\u002Fdistributed_scripts\u002Fdistributed_configs\u002Fdistributed_client_3.yaml data.file_path 'PATH\u002FTO\u002FDATA' distribute.server_host x.x.x.x distribute.server_port xxxx distribute.client_host x.x.x.x 
distribute.client_port xxxx\n```\n\nAn executable example with generated toy data can be run with (a script can be found in `scripts\u002Frun_distributed_lr.sh`):\n```bash\n# Generate the toy data\npython scripts\u002Fdistributed_scripts\u002Fgen_data.py\n\n# Firstly start the server that is waiting for clients to join in\npython federatedscope\u002Fmain.py --cfg scripts\u002Fdistributed_scripts\u002Fdistributed_configs\u002Fdistributed_server.yaml data.file_path toy_data\u002Fserver_data distribute.server_host 127.0.0.1 distribute.server_port 50051\n\n# Start the client #1 (with another process)\npython federatedscope\u002Fmain.py --cfg scripts\u002Fdistributed_scripts\u002Fdistributed_configs\u002Fdistributed_client_1.yaml data.file_path toy_data\u002Fclient_1_data distribute.server_host 127.0.0.1 distribute.server_port 50051 distribute.client_host 127.0.0.1 distribute.client_port 50052\n# Start the client #2 (with another process)\npython federatedscope\u002Fmain.py --cfg scripts\u002Fdistributed_scripts\u002Fdistributed_configs\u002Fdistributed_client_2.yaml data.file_path toy_data\u002Fclient_2_data distribute.server_host 127.0.0.1 distribute.server_port 50051 distribute.client_host 127.0.0.1 distribute.client_port 50053\n# Start the client #3 (with another process)\npython federatedscope\u002Fmain.py --cfg scripts\u002Fdistributed_scripts\u002Fdistributed_configs\u002Fdistributed_client_3.yaml data.file_path toy_data\u002Fclient_3_data distribute.server_host 127.0.0.1 distribute.server_port 50051 distribute.client_host 127.0.0.1 distribute.client_port 50054\n```\n\nAnd you can observe the results as (the IP addresses are anonymized with 'x.x.x.x'):\n\n```\nINFO: Server: Listen to x.x.x.x:xxxx...\nINFO: Server has been set up ...\nModel meta-info: \u003Cclass 'federatedscope.core.lr.LogisticRegression'>.\n... 
...\nINFO: Client: Listen to x.x.x.x:xxxx...\nINFO: Client (address x.x.x.x:xxxx) has been set up ...\nClient (address x.x.x.x:xxxx) is assigned with #1.\nINFO: Model meta-info: \u003Cclass 'federatedscope.core.lr.LogisticRegression'>.\n... ...\n{'Role': 'Client #2', 'Round': 0, 'Results_raw': {'train_avg_loss': 5.215108394622803, 'train_loss': 333.7669372558594, 'train_total': 64}}\n{'Role': 'Client #1', 'Round': 0, 'Results_raw': {'train_total': 64, 'train_loss': 290.9668884277344, 'train_avg_loss': 4.54635763168335}}\n----------- Starting a new training round (Round #1) -------------\n... ...\nINFO: Server: Training is finished! Starting evaluation.\nINFO: Client #1: (Evaluation (test set) at Round #20) test_loss is 30.387419\n... ...\nINFO: Server: Final evaluation is finished! Starting merging results.\n... ...\n```\n\n\n## Advanced\n\nAs a comprehensive FL platform, FederatedScope provides the fundamental implementation to support requirements of various FL applications and frontier studies, towards both convenient usage and flexible extension, including:\n\n- **Personalized Federated Learning**: Client-specific model architectures and training configurations are applied to handle the non-IID issues caused by the diverse data distributions and heterogeneous system resources.\n- **Federated Hyperparameter Optimization**: When hyperparameter optimization (HPO) comes to Federated Learning, each attempt is extremely costly due to multiple rounds of communication across participants. 
It is worth noting that HPO in the FL setting is unique, and techniques such as low-fidelity HPO deserve further study.\n- **Privacy Attacker**: Privacy attack algorithms are important and convenient tools for verifying the privacy protection strength of designed FL systems and algorithms, and this area keeps growing along with Federated Learning.\n- **Graph Federated Learning**: Working on ubiquitous graph data, Graph Federated Learning aims to exploit isolated sub-graph data to learn a global model, and has attracted increasing attention.\n- **Recommendation**: As a number of laws and regulations go into effect all over the world, more and more people are aware of the importance of privacy protection, which urges recommender systems to learn from user data in a privacy-preserving manner.\n- **Differential Privacy**: Different from encryption algorithms that require a large amount of computation resources, differential privacy is an economical yet flexible technique to protect privacy, which has achieved great success in databases and is ever-growing in federated learning.\n- ...\n\nMore support is coming soon! We have prepared a [tutorial](https:\u002F\u002Ffederatedscope.io\u002F) to provide more details about how to utilize FederatedScope to enjoy your journey of Federated Learning! 
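All of the directions above build on the same primitive: a server aggregating client updates, most commonly by sample-weighted averaging (vanilla FedAvg, as in the Quick Start example). As a minimal, framework-independent sketch — the `fedavg_aggregate` helper below is illustrative only, not FederatedScope's actual aggregator API:

```python
# Minimal FedAvg sketch: average client model weights, weighted by each
# client's number of training samples. Illustrative only -- this is NOT
# FederatedScope's real aggregator interface.

def fedavg_aggregate(client_updates):
    """client_updates: list of (num_samples, weights) pairs, where weights
    maps parameter names to flat lists of floats."""
    total = sum(n for n, _ in client_updates)
    aggregated = {}
    for name in client_updates[0][1]:
        dim = len(client_updates[0][1][name])
        aggregated[name] = [
            sum(n / total * w[name][i] for n, w in client_updates)
            for i in range(dim)
        ]
    return aggregated

# Two clients with different data sizes: the larger client dominates.
updates = [
    (30, {"linear.weight": [1.0, 2.0]}),
    (10, {"linear.weight": [5.0, 6.0]}),
]
print(fedavg_aggregate(updates))  # {'linear.weight': [2.0, 3.0]}
```

In FederatedScope itself this logic lives under `federatedscope/core/aggregators`, with the event-driven workers deciding when a round's aggregation fires.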
\n\nMaterials of related topics are constantly being updated, please refer to [FL-Recommendation](https:\u002F\u002Fgithub.com\u002Falibaba\u002FFederatedScope\u002Ftree\u002Fmaster\u002Fmaterials\u002Fpaper_list\u002FFL-Recommendation), [Federated-HPO](https:\u002F\u002Fgithub.com\u002Falibaba\u002FFederatedScope\u002Ftree\u002Fmaster\u002Fmaterials\u002Fpaper_list\u002FFederated_HPO), [Personalized FL](https:\u002F\u002Fgithub.com\u002Falibaba\u002FFederatedScope\u002Ftree\u002Fmaster\u002Fmaterials\u002Fpaper_list\u002FPersonalized_FL), [Federated Graph Learning](https:\u002F\u002Fgithub.com\u002Falibaba\u002FFederatedScope\u002Ftree\u002Fmaster\u002Fmaterials\u002Fpaper_list\u002FFederated_Graph_Learning), [FL-NLP](https:\u002F\u002Fgithub.com\u002Falibaba\u002FFederatedScope\u002Ftree\u002Fmaster\u002Fmaterials\u002Fpaper_list\u002FFL-NLP), [FL-Attacker](https:\u002F\u002Fgithub.com\u002Falibaba\u002FFederatedScope\u002Ftree\u002Fmaster\u002Fmaterials\u002Fpaper_list\u002FFL-Attacker), [FL-Incentive-Mechanism](https:\u002F\u002Fgithub.com\u002Falibaba\u002FFederatedScope\u002Ftree\u002Fmaster\u002Fmaterials\u002Fpaper_list\u002FFL-Incentive), [FL-Fairness](https:\u002F\u002Fgithub.com\u002Falibaba\u002FFederatedScope\u002Ftree\u002Fmaster\u002Fmaterials\u002Fpaper_list\u002FFL-Fiarness) and so on. \n\n## Documentation\n\nThe classes and methods of FederatedScope have been well documented so that users can generate the API references by:\n\n```shell\ncd doc\npip install -r requirements.txt\nmake html\n```\nNOTE:\n* The `doc\u002Frequirements.txt` is only for documentation of API by Sphinx, which can be automatically generated by Github actions `.github\u002Fworkflows\u002Fsphinx.yml`. 
(Triggered by a pull request if `DOC` is in the title.)\n* Download via Artifacts in Github actions.\n\nWe put the API references on our [website](https:\u002F\u002Ffederatedscope.io\u002Frefs\u002Findex).\n\nBesides, we provide documents for [executable scripts](https:\u002F\u002Fgithub.com\u002Falibaba\u002FFederatedScope\u002Ftree\u002Fmaster\u002Fscripts) and [customizable configurations](https:\u002F\u002Fgithub.com\u002Falibaba\u002FFederatedScope\u002Ftree\u002Fmaster\u002Ffederatedscope\u002Fcore\u002Fconfigs).\n\n## License\n\nFederatedScope is released under Apache License 2.0.\n\n## Publications\nIf you find FederatedScope useful for your research or development, please cite the following \u003Ca href=\"https:\u002F\u002Farxiv.org\u002Fabs\u002F2204.05011\" target=\"_blank\">paper\u003C\u002Fa>:\n```\n@article{federatedscope,\n  title = {FederatedScope: A Flexible Federated Learning Platform for Heterogeneity},\n  author = {Xie, Yuexiang and Wang, Zhen and Gao, Dawei and Chen, Daoyuan and Yao, Liuyi and Kuang, Weirui and Li, Yaliang and Ding, Bolin and Zhou, Jingren},\n  journal = {Proceedings of the VLDB Endowment},\n  volume = {16},\n  number = {5},\n  pages = {1059--1072},\n  year = {2023}\n}\n```\nMore publications can be found on the [Publications](https:\u002F\u002Ffederatedscope.io\u002Fpub\u002F) page.\n\n## Contributing\n\nWe **greatly appreciate** any contribution to FederatedScope! 
We provide a developer version of FederatedScope with additional pre-commit hooks to perform commit checks compared to the official version:\n\n```bash\n# Install the developer version\npip install -e .[dev]\npre-commit install\n\n# Or switch to the developer version from the official version\npip install pre-commit\npre-commit install\npre-commit run --all-files\n```\n\nYou can refer to [Contributing to FederatedScope](https:\u002F\u002Ffederatedscope.io\u002Fdocs\u002Fcontributor\u002F) for more details.\n\nWelcome to join in our [Slack channel](https:\u002F\u002Fjoin.slack.com\u002Ft\u002Ffederatedscopeteam\u002Fshared_invite\u002Fzt-1apmfjqmc-hvpYbsWJdm7D93wPNXbqww), or DingDing group (please scan the following QR code) for discussion.\n\n\u003Cimg width=\"150\" src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Falibaba_FederatedScope_readme_b9341500c63b.jpg\" width=\"400\" alt=\"federatedscope-logo\">\n","\u003Ch1 align=\"center\">\n    \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Falibaba_FederatedScope_readme_849a91ef5cec.png\" width=\"400\" alt=\"federatedscope-logo\">\n\u003C\u002Fh1>\n\n![](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002Flanguage-python-blue.svg)\n![](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002Flicense-Apache-000000.svg)\n[![Website](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002Fwebsite-FederatedScope-0000FF)](https:\u002F\u002Ffederatedscope.io\u002F)\n[![Playground](https:\u002F\u002Fshields.io\u002Fbadge\u002FJupyterLab-Enjoy%20Your%20FL%20Journey!-F37626?logo=jupyter)](https:\u002F\u002Ftry.federatedscope.io\u002F)\n[![Contributing](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FPRs-welcome-brightgreen.svg)](https:\u002F\u002Ffederatedscope.io\u002Fdocs\u002Fcontributor\u002F)\n\nFederatedScope 是一个全面的联邦学习平台，为学术界和工业界的各类联邦学习任务提供便捷的使用方式和灵活的定制能力。基于事件驱动架构，FederatedScope 
集成了丰富的功能模块，以满足联邦学习领域日益增长的需求，并致力于打造一个易于使用的平台，推动安全、高效的联邦学习实践。\n\n详细的教程请访问我们的官网：[federatedscope.io](https:\u002F\u002Ffederatedscope.io\u002F)\n\n您可以通过 [FederatedScope Playground](https:\u002F\u002Ftry.federatedscope.io\u002F) 或 [Google Colab](https:\u002F\u002Fcolab.research.google.com\u002Fgithub\u002Falibaba\u002FFederatedScope) 体验 FederatedScope。\n\n| [代码结构](#code-structure) | [快速入门](#quick-start) | [进阶指南](#advanced) | [文档](#documentation) | [相关论文](#publications) | [贡献指南](#contributing) | \n\n## 新闻\n- ![new](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Falibaba_FederatedScope_readme_b2e9abe195d0.png) [2023年5月17日] 我们的论文 [FS-REAL](https:\u002F\u002Farxiv.org\u002Fabs\u002F2303.13363) 已被 KDD'2023 接受！\n- ![new](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Falibaba_FederatedScope_readme_b2e9abe195d0.png) [2023年5月17日] 我们关于 FL 后门攻击的基准研究论文 [Backdoor Attacks Bench](https:\u002F\u002Farxiv.org\u002Fabs\u002F2302.01677) 已被 KDD'2023 接受！\n- ![new](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Falibaba_FederatedScope_readme_b2e9abe195d0.png) [2023年5月17日] 我们的论文 [分布式环境下的通信高效且差分隐私逻辑回归]() 已被 KDD'2023 接受！\n- ![new](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Falibaba_FederatedScope_readme_b2e9abe195d0.png) [2023年4月25日] 我们的论文 [pFedGate](https:\u002F\u002Farxiv.org\u002Fabs\u002F2305.02776) 已被 ICML'2023 接受！\n- ![new](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Falibaba_FederatedScope_readme_b2e9abe195d0.png) [2023年4月25日] 我们关于 FedHPO 的基准研究论文 [FedHPO-Bench](https:\u002F\u002Farxiv.org\u002Fabs\u002F2206.03966) 已被 ICML'2023 接受！\n- ![new](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Falibaba_FederatedScope_readme_b2e9abe195d0.png) [2023年4月3日] 我们发布了 FederatedScope v0.3.0！\n- [2022年2月10日] 我们阐述 FederatedScope 的论文 [论文](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2204.05011.pdf) 被 VLDB'23 接受！\n- [2022年10月5日] 我们关于个性化联邦学习的基准研究论文 [pFL-Bench](https:\u002F\u002Farxiv.org\u002Fabs\u002F2206.03655) 被 NeurIPS'22 数据集与基准赛道接受！\n- [2022年8月18日] 我们在 KDD 2022 
上发表的关于联邦图学习的论文 [论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F2204.05562) 获得了 ADS 赛道的最佳论文奖！\n- [2022年7月30日] 我们发布了 FederatedScope v0.2.0！ \n- [2022年6月17日] 我们发布了 **pFL-Bench**，这是一个针对个性化联邦学习（pFL）的综合性基准，包含 10 多个数据集和 20 多个基线。[[代码](https:\u002F\u002Fgithub.com\u002Falibaba\u002FFederatedScope\u002Ftree\u002Fmaster\u002Fbenchmark\u002FpFL-Bench), [pdf](https:\u002F\u002Farxiv.org\u002Fabs\u002F2206.03655)]\n- [2022年6月17日] 我们发布了 **FedHPO-Bench**，一套用于研究联邦超参数优化的基准工具。[[代码](https:\u002F\u002Fgithub.com\u002Falibaba\u002FFederatedScope\u002Ftree\u002Fmaster\u002Fbenchmark\u002FFedHPOBench), [pdf](https:\u002F\u002Farxiv.org\u002Fabs\u002F2206.03966)]\n- [2022年6月17日] 我们发布了 **B-FHTL**，一套用于研究联邦异构任务学习的基准工具。[[代码](https:\u002F\u002Fgithub.com\u002Falibaba\u002FFederatedScope\u002Ftree\u002Fmaster\u002Fbenchmark\u002FB-FHTL), [pdf](https:\u002F\u002Farxiv.org\u002Fabs\u002F2206.03436)]\n- [2022年6月13日] 我们的项目曾遭受攻击，现已解决。[详情](https:\u002F\u002Fgithub.com\u002Falibaba\u002FFederatedScope\u002Fblob\u002Fmaster\u002Fdoc\u002Fnews\u002F06-13-2022_Declaration_of_Emergency.txt).\n- [2022年5月25日] 我们的论文 [FederatedScope-GNN](https:\u002F\u002Farxiv.org\u002Fabs\u002F2204.05562) 已被 KDD'2022 接受！\n- [2022年5月6日] 我们发布了 FederatedScope v0.1.0！\n\n## 代码结构\n```\nFederatedScope\n├── federatedscope\n│   ├── core           \n│   |   ├── workers              # 参与者的具体行为（即服务器和客户端）\n│   |   ├── trainers             # 本地训练的详细实现\n│   |   ├── aggregators          # 联邦聚合的具体实现\n│   |   ├── configs              # 可配置的参数设置\n│   |   ├── monitors             # 日志记录与可视化模块\n│   |   ├── communication.py     # 参与者之间的通信实现   \n│   |   ├── fed_runner.py        # 用于构建和运行联邦学习流程的主程序\n│   |   ├── ... 
..\n│   ├── cv                       # 计算机视觉领域的联邦学习        \n│   ├── nlp                      # 自然语言处理领域的联邦学习          \n│   ├── gfl                      # 图联邦学习          \n│   ├── autotune                 # 联邦学习中的自动调参         \n│   ├── vertical_fl              # 垂直联邦学习         \n│   ├── contrib                          \n│   ├── main.py           \n│   ├── ... ...          \n├── scripts                      # 用于复现现有算法的脚本\n├── benchmark                    # 我们发布了一些基准，便于公平比较\n├── doc                          # 用于自动生成文档\n├── environment                  # 安装要求及提供的 Docker 文件\n├── materials                    # 相关主题的资料（如论文列表）\n│   ├── notebook                        \n│   ├── paper_list                                        \n│   ├── tutorial                                       \n│   ├── ... ...                                      \n├── tests                        # 用于持续集成的单元测试模块\n├── LICENSE\n└── setup.py\n```\n\n## 快速入门\n\n我们提供了一个端到端的示例，帮助用户使用 FederatedScope 开始运行一个标准的联邦学习流程。\n\n### 第1步：安装\n\n首先，用户需要克隆源代码并安装所需的软件包（建议使用Python 3.9及以上版本）。您可以选择以下两种安装方式（通过Docker或Conda）来安装FederatedScope。\n\n```bash\ngit clone https:\u002F\u002Fgithub.com\u002Falibaba\u002FFederatedScope.git\ncd FederatedScope\n```\n\n#### 使用 Docker\n\n您可以构建Docker镜像，并在带有CUDA 11和PyTorch 1.10的Docker环境中运行：\n\n```\ndocker build -f environment\u002Fdocker_files\u002Ffederatedscope-torch1.10.Dockerfile -t alibaba\u002Ffederatedscope:base-env-torch1.10 .\ndocker run --gpus device=all --rm -it --name \"fedscope\" -w $(pwd) alibaba\u002Ffederatedscope:base-env-torch1.10 \u002Fbin\u002Fbash\n```\n\n如果您需要运行下游任务，例如图联邦学习，在执行上述命令时，请将依赖文件和Docker文件名更改为相应的文件：\n\n```\n# environment\u002Frequirements-torch1.10.txt -> \nenvironment\u002Frequirements-torch1.10-application.txt\n\n# environment\u002Fdocker_files\u002Ffederatedscope-torch1.10.Dockerfile ->\nenvironment\u002Fdocker_files\u002Ffederatedscope-torch1.10-application.Dockerfile\n```\n\n注意：您可以通过将`torch1.10`替换为`torch1.8`来选择使用CUDA 10和PyTorch 
1.8。这些Docker镜像基于nvidia-docker。请提前在宿主机上安装NVIDIA驱动程序和`nvidia-docker2`。更多详细信息请参阅[此处](https:\u002F\u002Fgithub.com\u002Falibaba\u002FFederatedScope\u002Ftree\u002Fmaster\u002Fenvironment\u002Fdocker_files)。\n\n#### 使用 Conda\n\n我们建议使用一个新的虚拟环境来安装FederatedScope：\n\n```bash\nconda create -n fs python=3.9\nconda activate fs\n```\n\n如果您的后端是PyTorch，请提前安装PyTorch（[PyTorch入门](https:\u002F\u002Fpytorch.org\u002Fget-started\u002Flocally\u002F)）。例如，如果您的CUDA版本是11.3，请执行以下命令：\n\n```bash\nconda install -y pytorch=1.10.1 torchvision=0.11.2 torchaudio=0.10.1 torchtext=0.11.1 cudatoolkit=11.3 -c pytorch -c conda-forge\n```\n\n对于配备Apple M1芯片的用户：\n\n```bash\nconda install pytorch torchvision torchaudio -c pytorch\n# 降级torchvision以避免段错误\npython -m pip install torchvision==0.11.3\n```\n\n最后，在安装好后端之后，您可以从源代码安装FederatedScope：\n\n##### 从源代码安装\n\n```bash\n# 可编辑模式\npip install -e .\n\n# 或者（开发者用于开发模式）\npip install -e .[dev]\npre-commit install\n```\n\n现在，您已成功安装了FederatedScope的最小版本。（可选）对于包含图、NLP和语音等应用的版本，请运行：\n\n```bash\nbash environment\u002Fextra_dependencies_torch1.10-application.sh\n```\n\n### 第2步：准备数据集\n\n要运行一个联邦学习任务，用户需要准备一个数据集。FederatedScope提供的DataZoo可以帮助自动下载和预处理广泛使用的公共数据集，适用于各种联邦学习应用场景，包括计算机视觉、自然语言处理、图学习、推荐系统等。用户可以直接在配置中指定`cfg.data.type = DATASET_NAME`。例如：\n\n```bash\ncfg.data.type = 'femnist'\n```\n\n如果要使用自定义数据集，您需要按照特定格式准备数据集并进行注册。更多详情请参阅[自定义数据集](https:\u002F\u002Ffederatedscope.io\u002Fdocs\u002Fown-case\u002F#data)。\n\n### 第3步：准备模型\n\n接下来，用户应指定将在联邦学习过程中训练的模型架构。FederatedScope提供了一个ModelZoo，其中包含了各种联邦学习应用中广泛采用的模型架构实现。用户可以通过设置`cfg.model.type = MODEL_NAME`来在联邦学习任务中应用特定的模型架构。例如：\n\n```yaml\ncfg.model.type = 'convnet2'\n```\n\nFederatedScope还允许用户通过注册的方式使用自定义模型。有关如何自定义模型架构的更多信息，请参阅[自定义模型](https:\u002F\u002Ffederatedscope.io\u002Fdocs\u002Fown-case\u002F#model)。\n\n### 第4步：开始运行联邦学习任务\n\n请注意，FederatedScope同时支持单机模式和分布式模式，并且用户可以通过配置轻松切换。\n\n#### 单机模式\n\nFederatedScope中的单机模式是指在一台设备上模拟多个参与者（服务器和客户端），参与者的数据彼此隔离，但他们的模型可以通过消息传递共享。\n\n下面我们演示如何使用FederatedScope运行一个标准的联邦学习任务，设置`cfg.data.type = 
'FEMNIST'`和`cfg.model.type = 'ConvNet2'`，以运行针对图像分类任务的Vanilla FedAvg算法。用户可以在配置文件（.yaml文件）中自定义训练参数，如`cfg.federate.total_round_num`、`cfg.dataloader.batch_size`和`cfg.train.optimizer.lr`，然后按如下方式运行标准联邦学习任务：\n\n```bash\n# 使用默认配置运行\npython federatedscope\u002Fmain.py --cfg scripts\u002Fexample_configs\u002Ffemnist.yaml\n\n# 或者使用自定义配置\npython federatedscope\u002Fmain.py --cfg scripts\u002Fexample_configs\u002Ffemnist.yaml federate.total_round_num 50 dataloader.batch_size 128\n```\n\n然后，您可以在训练过程中观察到一些监控指标，如下所示：\n\n```\nINFO: 服务器已启动 ...\nINFO: 模型元信息：`\u003Cclass 'federatedscope.cv.model.cnn.ConvNet2'>`。\n... ...\nINFO: 客户端已启动 ...\nINFO: 模型元信息：`\u003Cclass 'federatedscope.cv.model.cnn.ConvNet2'>`。\n... ...\nINFO: {'角色': '客户端 #5', '轮次': 0, '原始结果': {'train_loss': 207.6341676712036, 'train_acc': 0.02, 'train_total': 50, 'train_loss_regular': 0.0, 'train_avg_loss': 4.152683353424072}}\nINFO: {'角色': '客户端 #1', '轮次': 0, '原始结果': {'train_loss': 209.0940284729004, 'train_acc': 0.02, 'train_total': 50, 'train_loss_regular': 0.0, 'train_avg_loss': 4.1818805694580075}}\nINFO: {'角色': '客户端 #8', '轮次': 0, '原始结果': {'train_loss': 202.24929332733154, 'train_acc': 0.04, 'train_total': 50, 'train_loss_regular': 0.0, 'train_avg_loss': 4.0449858665466305}}\nINFO: {'角色': '客户端 #6', '轮次': 0, '原始结果': {'train_loss': 209.43883895874023, 'train_acc': 0.06, 'train_total': 50, 'train_loss_regular': 0.0, 'train_avg_loss': 4.1887767791748045}}\nINFO: {'角色': '客户端 #9', '轮次': 0, '原始结果': {'train_loss': 208.83140087127686, 'train_acc': 0.0, 'train_total': 50, 'train_loss_regular': 0.0, 'train_avg_loss': 4.1766280174255375}}\nINFO: ----------- 开始新一轮训练（第1轮） -------------\n... ...\nINFO: 服务器：训练结束！开始评估。\nINFO: 客户端 #1：（第20轮测试集评估）test_loss 为 163.029045\n... ...\nINFO: 服务器：最终评估结束！开始合并结果。\n... 
...\n```\n\n#### 分布式模式\n\nFederatedScope 中的分布式模式是指通过运行多个进程来构建一个联邦学习流程，其中每个进程充当参与者（服务器或客户端），实例化其模型并加载数据。参与者之间的通信已经由 FederatedScope 的通信模块提供。\n\n要以分布式模式运行，您只需：\n\n- 为每个参与者准备独立的数据文件，并设置 `cfg.data.file_path = PATH\u002FTO\u002FDATA`；\n- 设置 `cfg.federate.mode = 'distributed'`，并通过 `cfg.distribute.role = 'server'\u002F'client'` 指定每个参与者的角色；\n- 通过 `cfg.distribute.server_host\u002Fclient_host = x.x.x.x` 和 `cfg.distribute.server_port\u002Fclient_port = xxxx` 设置有效的地址。（请注意，对于服务器，您需要设置 `server_host` 和 `server_port` 来监听消息；而对于客户端，除了需要设置用于监听的 `client_host` 和 `client_port` 外，还需要设置用于加入联邦学习流程的 `server_host` 和 `server_port`）\n\n我们准备了一个使用分布式模式运行的合成示例：\n\n```bash\n# 对于服务器\npython federatedscope\u002Fmain.py --cfg scripts\u002Fdistributed_scripts\u002Fdistributed_configs\u002Fdistributed_server.yaml data.file_path 'PATH\u002FTO\u002FDATA' distribute.server_host x.x.x.x distribute.server_port xxxx\n\n# 对于客户端\npython federatedscope\u002Fmain.py --cfg scripts\u002Fdistributed_scripts\u002Fdistributed_configs\u002Fdistributed_client_1.yaml data.file_path 'PATH\u002FTO\u002FDATA' distribute.server_host x.x.x.x distribute.server_port xxxx distribute.client_host x.x.x.x distribute.client_port xxxx\npython federatedscope\u002Fmain.py --cfg scripts\u002Fdistributed_scripts\u002Fdistributed_configs\u002Fdistributed_client_2.yaml data.file_path 'PATH\u002FTO\u002FDATA' distribute.server_host x.x.x.x distribute.server_port xxxx distribute.client_host x.x.x.x distribute.client_port xxxx\npython federatedscope\u002Fmain.py --cfg scripts\u002Fdistributed_scripts\u002Fdistributed_configs\u002Fdistributed_client_3.yaml data.file_path 'PATH\u002FTO\u002FDATA' distribute.server_host x.x.x.x distribute.server_port xxxx distribute.client_host x.x.x.x distribute.client_port xxxx\n```\n\n使用生成的玩具数据的可执行示例可以这样运行（脚本可在 `scripts\u002Frun_distributed_lr.sh` 中找到）：\n```bash\n# 生成玩具数据\npython scripts\u002Fdistributed_scripts\u002Fgen_data.py\n\n# 首先启动等待客户端加入的服务器\npython federatedscope\u002Fmain.py --cfg 
scripts\u002Fdistributed_scripts\u002Fdistributed_configs\u002Fdistributed_server.yaml data.file_path toy_data\u002Fserver_data distribute.server_host 127.0.0.1 distribute.server_port 50051\n\n# 启动客户端 #1（通过另一个进程）\npython federatedscope\u002Fmain.py --cfg scripts\u002Fdistributed_scripts\u002Fdistributed_configs\u002Fdistributed_client_1.yaml data.file_path toy_data\u002Fclient_1_data distribute.server_host 127.0.0.1 distribute.server_port 50051 distribute.client_host 127.0.0.1 distribute.client_port 50052\n# 启动客户端 #2（通过另一个进程）\npython federatedscope\u002Fmain.py --cfg scripts\u002Fdistributed_scripts\u002Fdistributed_configs\u002Fdistributed_client_2.yaml data.file_path toy_data\u002Fclient_2_data distribute.server_host 127.0.0.1 distribute.server_port 50051 distribute.client_host 127.0.0.1 distribute.client_port 50053\n# 启动客户端 #3（通过另一个进程）\npython federatedscope\u002Fmain.py --cfg scripts\u002Fdistributed_scripts\u002Fdistributed_configs\u002Fdistributed_client_3.yaml data.file_path toy_data\u002Fclient_3_data distribute.server_host 127.0.0.1 distribute.server_port 50051 distribute.client_host 127.0.0.1 distribute.client_port 50054\n```\n\n您可以观察到以下结果（IP 地址已用 'x.x.x.x' 匿名化）：\n\n```\nINFO: 服务器：正在监听 x.x.x.x:xxxx...\nINFO: 服务器已启动 ...\n模型元信息：`\u003Cclass 'federatedscope.core.lr.LogisticRegression'>`。\n... ...\nINFO: 客户端：正在监听 x.x.x.x:xxxx...\nINFO: 客户端（地址 x.x.x.x:xxxx）已启动 ...\n客户端（地址 x.x.x.x:xxxx）被分配为 #1。\nINFO: 模型元信息：`\u003Cclass 'federatedscope.core.lr.LogisticRegression'>`。\n... ...\n{'角色': '客户端 #2', '轮次': 0, '原始结果': {'train_avg_loss': 5.215108394622803, 'train_loss': 333.7669372558594, 'train_total': 64}}\n{'角色': '客户端 #1', '轮次': 0, '原始结果': {'train_total': 64, 'train_loss': 290.9668884277344, 'train_avg_loss': 4.54635763168335}}\n----------- 开始新一轮训练（第1轮） -------------\n... ...\nINFO: 服务器：训练结束！开始评估。\nINFO: 客户端 #1：（第20轮测试集评估）test_loss 为 30.387419\n... ...\nINFO: 服务器：最终评估结束！开始合并结果。\n... 
...\n```\n\n## 高级功能\n\n作为一款全面的联邦学习平台，FederatedScope 提供了支持各类联邦学习应用及前沿研究需求的基础实现，在易用性和灵活性扩展方面均表现出色，具体包括：\n\n- **个性化联邦学习**：通过为不同客户端定制模型架构和训练配置，有效应对由数据分布不独立同分布（non-IID）及系统资源异构性带来的挑战。\n- **联邦超参数优化**：在联邦学习场景下进行超参数优化（HPO）时，由于需要跨参与方进行多轮通信，每次尝试的成本极高。值得注意的是，联邦学习环境下的超参数优化具有独特性，亟需推广低精度 HPO 等技术。\n- **隐私攻击者**：隐私攻击算法是验证联邦学习系统与算法隐私保护强度的重要工具，随着联邦学习的发展，其重要性日益凸显。\n- **图联邦学习**：针对无处不在的图数据，图联邦学习旨在利用分散的子图数据联合学习全局模型，近年来受到广泛关注。\n- **推荐系统**：随着全球范围内一系列法律法规的实施，隐私保护的重要性愈发受到重视，这促使推荐系统必须以隐私友好的方式从用户数据中学习。\n- **差分隐私**：与需要大量计算资源的加密算法不同，差分隐私是一种经济高效且灵活的隐私保护技术，在数据库领域已取得显著成效，并在联邦学习中不断发展壮大。\n- …\n\n更多支持即将推出！我们准备了一份[教程](https:\u002F\u002Ffederatedscope.io\u002F)，详细介绍如何使用 FederatedScope 开启您的联邦学习之旅！\n\n相关主题资料正在持续更新，请参阅 [FL-Recommendation](https:\u002F\u002Fgithub.com\u002Falibaba\u002FFederatedScope\u002Ftree\u002Fmaster\u002Fmaterials\u002Fpaper_list\u002FFL-Recommendation)、[Federated-HPO](https:\u002F\u002Fgithub.com\u002Falibaba\u002FFederatedScope\u002Ftree\u002Fmaster\u002Fmaterials\u002Fpaper_list\u002FFederated_HPO)、[Personalized FL](https:\u002F\u002Fgithub.com\u002Falibaba\u002FFederatedScope\u002Ftree\u002Fmaster\u002Fmaterials\u002Fpaper_list\u002FPersonalized_FL)、[Federated Graph Learning](https:\u002F\u002Fgithub.com\u002Falibaba\u002FFederatedScope\u002Ftree\u002Fmaster\u002Fmaterials\u002Fpaper_list\u002FFederated_Graph_Learning)、[FL-NLP](https:\u002F\u002Fgithub.com\u002Falibaba\u002FFederatedScope\u002Ftree\u002Fmaster\u002Fmaterials\u002Fpaper_list\u002FFL-NLP)、[FL-Attacker](https:\u002F\u002Fgithub.com\u002Falibaba\u002FFederatedScope\u002Ftree\u002Fmaster\u002Fmaterials\u002Fpaper_list\u002FFL-Attacker)、[FL-Incentive-Mechanism](https:\u002F\u002Fgithub.com\u002Falibaba\u002FFederatedScope\u002Ftree\u002Fmaster\u002Fmaterials\u002Fpaper_list\u002FFL-Incentive)、[FL-Fairness](https:\u002F\u002Fgithub.com\u002Falibaba\u002FFederatedScope\u002Ftree\u002Fmaster\u002Fmaterials\u002Fpaper_list\u002FFL-Fiarness) 等。\n\n## 文档\n\nFederatedScope 的类和方法均已完善文档化，用户可通过以下步骤生成 API 参考文档：\n\n```shell\ncd 
doc\npip install -r requirements.txt\nmake html\n```\n\n注意：\n* `doc\u002Frequirements.txt` 仅用于通过 Sphinx 生成 API 文档，该过程可由 GitHub Actions 中的 `.github\u002Fworkflows\u002Fsphinx.yml` 自动触发。（标题中包含 `DOC` 时会触发构建。）\n* 文档也可通过 GitHub Actions 中的 Artifacts 下载。\n\n我们已将 API 参考文档部署至我们的[官网](https:\u002F\u002Ffederatedscope.io\u002Frefs\u002Findex)。\n\n此外，我们还提供了[可执行脚本](https:\u002F\u002Fgithub.com\u002Falibaba\u002FFederatedScope\u002Ftree\u002Fmaster\u002Fscripts)和[可定制配置](https:\u002F\u002Fgithub.com\u002Falibaba\u002FFederatedScope\u002Ftree\u002Fmaster\u002Ffederatedscope\u002Fcore\u002Fconfigs)的相关文档。\n\n## 许可证\n\nFederatedScope 采用 Apache License 2.0 许可协议发布。\n\n## 出版物\n\n如果您在研究或开发中使用了 FederatedScope，请引用以下\u003Ca href=\"https:\u002F\u002Farxiv.org\u002Fabs\u002F2204.05011\" target=\"_blank\">论文\u003C\u002Fa>：\n```\n@article{federatedscope,\n  title = {FederatedScope: A Flexible Federated Learning Platform for Heterogeneity},\n  author = {Xie, Yuexiang and Wang, Zhen and Gao, Dawei and Chen, Daoyuan and Yao, Liuyi and Kuang, Weirui and Li, Yaliang and Ding, Bolin and Zhou, Jingren},\n  journal={Proceedings of the VLDB Endowment},\n  volume={16},\n  number={5},\n  pages={1059--1072},\n  year={2023}\n}\n```\n\n更多出版物请访问[FederatedScope 出版物页面](https:\u002F\u002Ffederatedscope.io\u002Fpub\u002F)。\n\n## 贡献\n\n我们非常欢迎对 FederatedScope 的任何贡献！我们提供开发者版本的 FederatedScope，其中包含额外的 pre-commit 钩子，用于比官方版本更严格的提交检查：\n\n```bash\n# 安装开发者版本\npip install -e .[dev]\npre-commit install\n\n# 或从官方版本切换到开发者版本\npip install pre-commit\npre-commit install\npre-commit run --all-files\n```\n\n更多详情请参阅[FederatedScope 贡献指南](https:\u002F\u002Ffederatedscope.io\u002Fdocs\u002Fcontributor\u002F)。\n\n欢迎加入我们的[Slack 社区](https:\u002F\u002Fjoin.slack.com\u002Ft\u002Ffederatedscopeteam\u002Fshared_invite\u002Fzt-1apmfjqmc-hvpYbsWJdm7D93wPNXbqww)，或钉钉群（请扫描下方二维码）进行讨论。\n\n\u003Cimg width=\"150\" src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Falibaba_FederatedScope_readme_b9341500c63b.jpg\" 
alt=\"federatedscope-logo\">","# FederatedScope 快速上手指南\n\nFederatedScope 是一个综合性的联邦学习平台，基于事件驱动架构，旨在为学术界和工业界提供便捷易用且灵活定制的联邦学习解决方案。\n\n## 环境准备\n\n*   **操作系统**: Linux \u002F macOS (Windows 需通过 WSL 或 Docker)\n*   **Python 版本**: >= 3.9\n*   **后端依赖**: PyTorch (推荐 1.10 或 1.8)\n*   **硬件要求**: \n    *   单机模拟模式：普通 CPU\u002FGPU 即可\n    *   分布式模式：多机网络环境\n    *   若运行图联邦学习等下游任务，建议配备 NVIDIA GPU (CUDA 11.3+)\n\n## 安装步骤\n\n你可以选择 **Conda** 或 **Docker** 方式进行安装。\n\n### 方式一：使用 Conda 安装（推荐）\n\n1.  **克隆代码库**\n    ```bash\n    git clone https:\u002F\u002Fgithub.com\u002Falibaba\u002FFederatedScope.git\n    cd FederatedScope\n    ```\n\n2.  **创建虚拟环境**\n    ```bash\n    conda create -n fs python=3.9\n    conda activate fs\n    ```\n\n3.  **安装 PyTorch**\n    请根据你的 CUDA 版本安装对应的 PyTorch。例如 CUDA 11.3：\n    ```bash\n    conda install -y pytorch=1.10.1 torchvision=0.11.2 torchaudio=0.10.1 torchtext=0.11.1 cudatoolkit=11.3 -c pytorch -c conda-forge\n    ```\n    *注：Apple M1 芯片用户请直接运行 `conda install pytorch torchvision torchaudio -c pytorch`，随后执行 `python -m pip install torchvision==0.11.3` 以避免段错误。*\n\n4.  **安装 FederatedScope**\n    ```bash\n    # 可编辑模式安装（源码修改即时生效）\n    pip install -e .\n    \n    # (可选) 如需运行图学习、NLP 等应用任务，安装额外依赖\n    bash environment\u002Fextra_dependencies_torch1.10-application.sh\n    ```\n\n### 方式二：使用 Docker 安装\n\n如果你希望避免环境配置冲突，可以使用预构建的 Docker 镜像。\n\n```bash\n# 构建镜像 (基于 Torch 1.10 + CUDA 11)\ndocker build -f environment\u002Fdocker_files\u002Ffederatedscope-torch1.10.Dockerfile -t alibaba\u002Ffederatedscope:base-env-torch1.10 .\n\n# 启动容器\ndocker run --gpus device=all --rm -it --name \"fedscope\" -w $(pwd) alibaba\u002Ffederatedscope:base-env-torch1.10 \u002Fbin\u002Fbash\n```\n*注：运行前请确保宿主机已安装 NVIDIA 驱动和 `nvidia-docker2`。*\n\n## 基本使用\n\nFederatedScope 提供了统一的接口，支持在单机上模拟多参与方（服务器与客户端）的联邦学习过程。以下是一个运行标准 FedAvg 算法进行图像分类的最简示例。\n\n### 1. 准备配置\nFederatedScope 内置了常用的数据集（DataZoo）和模型（ModelZoo）。你只需通过配置文件指定数据集和模型类型即可。\n*   数据集示例：`femnist`\n*   模型示例：`convnet2`\n\n### 2. 
运行任务\n使用提供的示例配置文件启动训练。\n\n**使用默认配置运行：**\n```bash\npython federatedscope\u002Fmain.py --cfg scripts\u002Fexample_configs\u002Ffemnist.yaml\n```\n\n**自定义参数运行：**\n你也可以在命令行直接覆盖配置文件中的参数，例如修改总轮数为 50，批大小为 128：\n```bash\npython federatedscope\u002Fmain.py --cfg scripts\u002Fexample_configs\u002Ffemnist.yaml federate.total_round_num 50 dataloader.batch_size 128\n```\n\n### 3. 查看结果\n运行后，终端将实时输出监控指标，包括各客户端的训练损失（train_loss）、准确率（train_acc）等信息：\n```text\nINFO: Server has been set up ...\nINFO: Model meta-info: \u003Cclass 'federatedscope.cv.model.cnn.ConvNet2'>.\n...\nINFO: {'Role': 'Client #5', 'Round': 0, 'Results_raw': {'train_loss': 207.63, 'train_acc': 0.02, ...}}\nINFO: {'Role': 'Client #1', 'Round': 0, 'Results_raw': {'train_loss': 209.09, 'train_acc': 0.02, ...}}\n```\n\n更多高级用法、自定义数据集注册及分布式部署详情，请访问官方文档：[federatedscope.io](https:\u002F\u002Ffederatedscope.io\u002F)","某大型连锁医疗机构希望联合多家分院的数据训练疾病预测模型，但受限于患者隐私法规，原始数据无法集中上传至云端。\n\n### 没有 FederatedScope 时\n- **开发门槛极高**：团队需从零搭建通信架构，手动处理各分院间的网络同步与断点重连，耗费数周仅完成基础框架。\n- **隐私保护难落地**：自行实现差分隐私或安全聚合算法容易出错，难以通过严格的合规审计，存在数据泄露风险。\n- **异构适配困难**：各分院设备算力与数据分布差异巨大，缺乏现成策略处理非独立同分布（Non-IID）数据，导致模型收敛极慢甚至失败。\n- **实验复现成本高**：缺乏统一的基准测试集与评估工具，调整超参数或对比新算法时，每次都需要重新配置复杂的环境。\n\n### 使用 FederatedScope 后\n- **快速启动项目**：利用其事件驱动架构和预置模板，开发人员仅需修改少量配置文件即可在几天内部署跨院区的联邦学习任务。\n- **内置安全机制**：直接调用平台集成的差分隐私与安全聚合模块，无需重复造轮子，轻松满足医疗行业的高标准隐私合规要求。\n- **智能应对异构**：借助内置的个性化联邦学习（pFL）算法和自动调优工具（FedHPO），有效解决数据分布不均问题，显著提升模型在各分院的准确率。\n- **高效迭代验证**：依托丰富的基准测试库（如 pFL-Bench），团队可一键复现前沿论文算法，快速对比不同策略效果，大幅缩短研发周期。\n\nFederatedScope 将复杂的联邦学习底层技术封装为易用的标准化流程，让机构能在严守数据隐私的前提下，高效释放多方数据的联合价值。","https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Falibaba_FederatedScope_4195081b.png","alibaba","Alibaba","https:\u002F\u002Foss.gittoolsai.com\u002Favatars\u002Falibaba_f65f7221.png","Alibaba Open 
Source",null,"https:\u002F\u002Fopensource.alibaba.com\u002F","https:\u002F\u002Fgithub.com\u002Falibaba",[80,84,88,92],{"name":81,"color":82,"percentage":83},"Python","#3572A5",91.6,{"name":85,"color":86,"percentage":87},"Shell","#89e051",5.3,{"name":89,"color":90,"percentage":91},"Jupyter Notebook","#DA5B0B",2.4,{"name":93,"color":94,"percentage":95},"Dockerfile","#384d54",0.7,1528,259,"2026-04-13T22:08:11","Apache-2.0","Linux, macOS","可选。若使用 GPU 加速，需 NVIDIA GPU 并预装 NVIDIA 驱动及 nvidia-docker2。支持 CUDA 10 (对应 Torch 1.8) 或 CUDA 11 (对应 Torch 1.10\u002F1.13)。Apple M1 芯片可使用 CPU\u002FMPS 后端。","未说明",{"notes":104,"python":105,"dependencies":106},"1. 推荐使用 Conda 创建虚拟环境或 Docker 进行部署。2. Apple M1 用户安装 torchvision 后需降级至 0.11.3 以避免段错误。3. 若运行图联邦学习、NLP 等下游任务，需额外安装应用版依赖包。4. 单机模式可在单设备上模拟多参与方，无需多台机器即可运行联邦学习任务。",">=3.9",[107,108,109,110],"torch>=1.8","torchvision","torchaudio","torchtext",[14],[113,114,115],"federated-learning","machine-learning","pytorch","2026-03-27T02:49:30.150509","2026-04-20T04:06:01.084975",[119,124,129,134,139,144],{"id":120,"question_zh":121,"answer_zh":122,"source_url":123},43362,"在国内运行示例代码时无法下载数据，或者在 Windows 上运行时报错 \"getaddrinfo failed\" 怎么办？","这通常是因为 `os.path.join` 在 Windows 和 Linux 上的行为不同导致的。在 Windows 上，它使用反斜杠 `\\` 连接 URL，导致生成的链接格式错误（如 `...com\\data.zip`），从而无法下载。\n解决方案是修改源码 `leaf_cv.py`，将生成 URL 的逻辑从 `osp.join(url, name)` 改为字符串拼接：`url = url + '\u002F' + name`。这样可以确保 URL 中的分隔符始终为正斜杠 `\u002F`，兼容所有操作系统。","https:\u002F\u002Fgithub.com\u002Falibaba\u002FFederatedScope\u002Fissues\u002F80",{"id":125,"question_zh":126,"answer_zh":127,"source_url":128},43363,"如何在使用预训练模型初始化联邦学习流程时，正确加载权重以避免训练准确率不提升？","直接使用 `Fed_runner.server.model = copy.deepcopy(pretrained_model)` 会破坏服务器端 `server.model` 与 `server.models[0]` 之间的同步引用关系，导致广播的参数不是最新的。\n正确的做法有两种：\n1. 使用 `load_state_dict` 加载权重：`Fed_runner.server.model.load_state_dict(pretrained_model.state_dict().copy())`。\n2. 
或者直接替换模型列表：`Fed_runner.server.models = [copy.deepcopy(pretrained_model)]`。\n请避免直接赋值给 `server.model` 属性。","https:\u002F\u002Fgithub.com\u002Falibaba\u002FFederatedScope\u002Fissues\u002F477",{"id":130,"question_zh":131,"answer_zh":132,"source_url":133},43364,"在 'fedavg' 模式下使用 WandB 记录日志时，为什么看不到 'test_acc' 或 'test_loss' 曲线，只有 Server Round 信息？","这是因为在 `federatedscope\u002Fcore\u002Fauxiliaries\u002Flogging.py` 文件的第 228-230 行中，Server 的原始结果（Results_raw）被默认丢弃了。\n解决方法：\n1. 注释掉该文件中丢弃 Results_raw 的代码行。\n2. 如果注释后出现内存不足导致内核崩溃（\"The kernel appears to have died\"），需要在配置文件中添加 `cfg.wandb.client_train_info = False` 以减少资源占用。\n完成上述设置后，即可在 WandB 中看到服务器端的测试准确率及损失曲线。注意：此功能在 Python 脚本中运行正常，在 Jupyter Notebook 中可能存在问题。","https:\u002F\u002Fgithub.com\u002Falibaba\u002FFederatedScope\u002Fissues\u002F591",{"id":135,"question_zh":136,"answer_zh":137,"source_url":138},43365,"运行基于 LLaMA 的联邦训练配置时，Loss 不下降且测试损失很高，如何启用多 GPU 训练或修复配置？","首先，检查配置文件是否过时。对于 DeepSpeed 相关配置，应参考 `federatedscope\u002Fcore\u002Fconfigs\u002Fcfg_llm.py` 进行设置，例如：\n```python\ncfg.llm.deepspeed = CN()\ncfg.llm.deepspeed.use = False\ncfg.llm.deepspeed.ds_config = ''\n```\n其次，如果需要启用 DataParallel 进行多卡训练，可以在配置中设置 `cfg.train.data_para_dids = []`。如果问题依旧，请确认使用的模型路径和半精度训练设置（`is_enable_half: True`）是否与当前硬件环境兼容。","https:\u002F\u002Fgithub.com\u002Falibaba\u002FFederatedScope\u002Fissues\u002F742",{"id":140,"question_zh":141,"answer_zh":142,"source_url":143},43366,"运行后门攻击（Backdoor Attack）相关的 YAML 配置文件时报错，提示找不到文件或格式错误，如何解决？","后门攻击基准测试（backdoor bench）中的 `.yaml` 配置文件格式可能与常规联邦学习脚本不同。如果遇到无法运行的情况，请检查以下几点：\n1. 确认配置文件路径是否正确，特别是 `backdoor_scripts\u002Fattack_config` 目录下的文件。\n2. 检查 YAML 文件内的字段定义是否符合后门攻击模块的特定要求，不要直接套用普通联邦学习的配置模板。\n3. 
建议参考项目中最新更新的示例脚本或文档，因为部分旧配置可能存在格式不一致的问题。","https:\u002F\u002Fgithub.com\u002Falibaba\u002FFederatedScope\u002Fissues\u002F723",{"id":145,"question_zh":146,"answer_zh":147,"source_url":138},43367,"如何在客户端或集中式训练设置中为单个客户端启用多 GPU 训练？","可以通过配置 DataParallel 来实现。在配置文件中设置 `cfg.train.data_para_dids = []` 即可启用 `torch.nn.DataParallel`，从而允许单个客户端利用多个 GPU 进行训练。如果需要更高级的分布式训练支持，还可以配置 DeepSpeed 相关选项。",[149,154,159],{"id":150,"version":151,"summary_zh":152,"released_at":153},343016,"v0.3.0","## 摘要\n本次发布（FederatedScope v0.3.0）的改进与优化总结如下：\n- **基于树的模型**。FederatedScope 现已支持在纵向联邦学习（VFL）中训练基于树的模型。我们提供了多种常用模型（如 XGBoost、GBDT、随机森林等）的实现，以及基准数据集的数据加载器。针对 VFL 中不同类型的树模型，用户可以灵活选择不同的隐私保护机制（如差分隐私、OP_Boost、同态加密等），以按需调整隐私保护强度。值得注意的是，这些模块均采用事件驱动架构设计，既便于使用，又支持灵活的自定义扩展。更多详情请参阅 `federatedscope\u002Fvertical_fl`。\n- **效率与效果**。我们新增了多项高级功能，旨在提升联邦学习算法在计算和通信方面的效率与效果，包括训练并行化（位于 `federatedscope\u002Fcore\u002Fparallel`）、消息压缩（位于 `federatedscope\u002Fcore\u002Fcompression`）以及鲁棒聚合器（位于 `federatedscope\u002Fcore\u002Faggregators`）。这些功能对于推动联邦学习在学术研究和实际应用中的发展均具有重要价值。\n- **攻击与防御**。我们提供了一系列针对对抗性攻击的防御策略，包括 Krum、Multi-Krum、中位数法、范数约束法、Bulyan 以及 [Simple Tunning](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2302.01677.pdf)。此外，我们还将发布一个面向个性化联邦学习的后门攻击与防御基准测试平台，支持用户测试多种基于数据投毒的后门攻击方法，如 BadNet、Blend、SIG 及边缘案例攻击。\n- **个性化联邦学习**。借助 FederatedScope，我们构建了一个全面的[个性化联邦学习基准测试平台](https:\u002F\u002Farxiv.org\u002Fabs\u002F2206.03655)（已被 NeurIPS 2022 接收）。在此过程中，大量个性化算法的实现得到了优化与验证，并新增了一些新的个性化 FL 算法。我们诚挚欢迎更多关于个性化联邦学习的研究与应用方面的贡献与反馈！\n- **更多探索与资料**。我们在联邦学习的多个应用领域和研究方向上持续探索与开发新算法，涵盖超参数优化、图学习、自然语言处理、公平性等多个主题。相关资料（如[论文列表](https:\u002F\u002Fgithub.com\u002Falibaba\u002FFederatedScope\u002Ftree\u002Fmaster\u002Fmaterials\u002Fpaper_list)）会不断更新，欢迎大家积极参与贡献！\n\nFederatedScope 致力于为用户提供既易用又灵活的开发接口。我们衷心希望 FederatedScope 能够助力用户和开发者构建新型联邦学习应用、提出创新算法，并热烈欢迎社区通过讨论、建议、代码贡献等多种方式参与共建。感谢您的关注与支持！ \n\n## 结束","2023-04-03T14:31:49",{"id":155,"version":156,"summary_zh":157,"released_at":158},343017,"v0.2.0","# 总结\n\n本次发布（FederatedScope v0.2.0）的改进总结如下：\n- FederatedScope 
基于事件驱动架构，支持在联邦学习中应用异步训练策略，包括不同的聚合条件、过时容忍度、广播方式等。同时，我们还提供了一种高效的单机模拟工具，用于大规模参与者的跨设备联邦学习场景。\n- 我们新增了三个基准测试：[联邦超参优化](https:\u002F\u002Fgithub.com\u002Falibaba\u002FFederatedScope\u002Ftree\u002Fmaster\u002Fbenchmark\u002FFedHPOB)、[个性化联邦学习](https:\u002F\u002Fgithub.com\u002Falibaba\u002FFederatedScope\u002Ftree\u002Fmaster\u002Fbenchmark\u002FpFL-Bench) 和 [异构任务联邦学习](https:\u002F\u002Fgithub.com\u002Falibaba\u002FFederatedScope\u002Ftree\u002Fmaster\u002Fbenchmark\u002FB-FHTL)，以推动联邦学习在更广泛场景中的应用。\n- 我们简化了安装、配置和持续集成（CI）流程，使用户能够更轻松地上手和定制。此外，FederatedScope 还增加了实用的可视化功能，帮助用户监控训练过程和评估结果。\n- 我们补充了相关主题的论文列表，包括 [FL-推荐](https:\u002F\u002Fgithub.com\u002Falibaba\u002FFederatedScope\u002Ftree\u002Fmaster\u002Fmaterials\u002Fpaper_list\u002FFL-Recommendation)、[联邦超参优化](https:\u002F\u002Fgithub.com\u002Falibaba\u002FFederatedScope\u002Ftree\u002Fmaster\u002Fmaterials\u002Fpaper_list\u002FFederated_HPO)、[个性化联邦学习](https:\u002F\u002Fgithub.com\u002Falibaba\u002FFederatedScope\u002Ftree\u002Fmaster\u002Fmaterials\u002Fpaper_list\u002FPersonalized_FL)、[联邦图学习](https:\u002F\u002Fgithub.com\u002Falibaba\u002FFederatedScope\u002Ftree\u002Fmaster\u002Fmaterials\u002Fpaper_list\u002FFederated_Graph_Learning)、[FL-NLP](https:\u002F\u002Fgithub.com\u002Falibaba\u002FFederatedScope\u002Ftree\u002Fmaster\u002Fmaterials\u002Fpaper_list\u002FFL-NLP)、[FL-攻击者](https:\u002F\u002Fgithub.com\u002Falibaba\u002FFederatedScope\u002Ftree\u002Fmaster\u002Fmaterials\u002Fpaper_list\u002FFL-Attacker)、[FL-激励机制](https:\u002F\u002Fgithub.com\u002Falibaba\u002FFederatedScope\u002Ftree\u002Fmaster\u002Fmaterials\u002Fpaper_list\u002FFL-Incentive) 等。这些资料将持续更新。\n- 本次版本还引入了多项新特性，如性能攻击、组织者模块、未见客户端泛化能力、数据拆分器、客户端采样器等，进一步提升了 FederatedScope 的鲁棒性和全面性。\n\n\n# 提交记录\n\n## 🚀 改进与新特性\n\n- 添加后门攻击 @Alan-Qin (#267)\n- 在 FederatedScope 中添加组织者模块 @rayrayraykk (#265, #257)\n- 监控客户端粒度及全局的 WandB 信息 @yxdyc (#260, #226, #206, #176, #90)\n- 更友好的安装、配置及贡献指南 @rayrayraykk (#255, #192)\n- 在 FS 中添加学习率调度器 @DavdGao (#248)\n- 支持通过 gRPC 通信时使用不同类型的键 
@xieyxclack (#239)\n- 支持在服务器无数据的情况下构建联邦学习课程 @xieyxclack (#236)\n- 启用未见客户端场景，以检验参与泛化差距 @yxdyc (#238, #100)\n- 支持 YAML 文件中更稳健的类型转换 @yxdyc (#229)\n- 异步联邦学习 @xieyxclack (#225)\n- 支持 bo","2022-07-30T14:09:31",{"id":160,"version":161,"summary_zh":162,"released_at":163},343018,"v0.1.0","发布 FederatedScope v0.1.0","2022-05-06T12:07:30"]