[{"data":1,"prerenderedAt":-1},["ShallowReactive",2],{"similar-lasgroup--SDPO":3,"tool-lasgroup--SDPO":64},[4,17,27,35,43,56],{"id":5,"name":6,"github_repo":7,"description_zh":8,"stars":9,"difficulty_score":10,"last_commit_at":11,"category_tags":12,"status":16},3808,"stable-diffusion-webui","AUTOMATIC1111\u002Fstable-diffusion-webui","stable-diffusion-webui 是一个基于 Gradio 构建的网页版操作界面，旨在让用户能够轻松地在本地运行和使用强大的 Stable Diffusion 图像生成模型。它解决了原始模型依赖命令行、操作门槛高且功能分散的痛点，将复杂的 AI 绘图流程整合进一个直观易用的图形化平台。\n\n无论是希望快速上手的普通创作者、需要精细控制画面细节的设计师，还是想要深入探索模型潜力的开发者与研究人员，都能从中获益。其核心亮点在于极高的功能丰富度：不仅支持文生图、图生图、局部重绘（Inpainting）和外绘（Outpainting）等基础模式，还独创了注意力机制调整、提示词矩阵、负向提示词以及“高清修复”等高级功能。此外，它内置了 GFPGAN 和 CodeFormer 等人脸修复工具，支持多种神经网络放大算法，并允许用户通过插件系统无限扩展能力。即使是显存有限的设备，stable-diffusion-webui 也提供了相应的优化选项，让高质量的 AI 艺术创作变得触手可及。",162132,3,"2026-04-05T11:01:52",[13,14,15],"开发框架","图像","Agent","ready",{"id":18,"name":19,"github_repo":20,"description_zh":21,"stars":22,"difficulty_score":23,"last_commit_at":24,"category_tags":25,"status":16},1381,"everything-claude-code","affaan-m\u002Feverything-claude-code","everything-claude-code 是一套专为 AI 编程助手（如 Claude Code、Codex、Cursor 等）打造的高性能优化系统。它不仅仅是一组配置文件，而是一个经过长期实战打磨的完整框架，旨在解决 AI 代理在实际开发中面临的效率低下、记忆丢失、安全隐患及缺乏持续学习能力等核心痛点。\n\n通过引入技能模块化、直觉增强、记忆持久化机制以及内置的安全扫描功能，everything-claude-code 能显著提升 AI 在复杂任务中的表现，帮助开发者构建更稳定、更智能的生产级 AI 代理。其独特的“研究优先”开发理念和针对 Token 消耗的优化策略，使得模型响应更快、成本更低，同时有效防御潜在的攻击向量。\n\n这套工具特别适合软件开发者、AI 研究人员以及希望深度定制 AI 工作流的技术团队使用。无论您是在构建大型代码库，还是需要 AI 协助进行安全审计与自动化测试，everything-claude-code 都能提供强大的底层支持。作为一个曾荣获 Anthropic 黑客大奖的开源项目，它融合了多语言支持与丰富的实战钩子（hooks），让 AI 真正成长为懂上",138956,2,"2026-04-05T11:33:21",[13,15,26],"语言模型",{"id":28,"name":29,"github_repo":30,"description_zh":31,"stars":32,"difficulty_score":23,"last_commit_at":33,"category_tags":34,"status":16},2271,"ComfyUI","Comfy-Org\u002FComfyUI","ComfyUI 是一款功能强大且高度模块化的视觉 AI 引擎，专为设计和执行复杂的 Stable Diffusion 图像生成流程而打造。它摒弃了传统的代码编写模式，采用直观的节点式流程图界面，让用户通过连接不同的功能模块即可构建个性化的生成管线。\n\n这一设计巧妙解决了高级 AI 绘图工作流配置复杂、灵活性不足的痛点。用户无需具备编程背景，也能自由组合模型、调整参数并实时预览效果，轻松实现从基础文生图到多步骤高清修复等各类复杂任务。ComfyUI 拥有极佳的兼容性，不仅支持 Windows、macOS 和 Linux 全平台，还广泛适配 NVIDIA、AMD、Intel 及苹果 Silicon 等多种硬件架构，并率先支持 SDXL、Flux、SD3 等前沿模型。\n\n无论是希望深入探索算法潜力的研究人员和开发者，还是追求极致创作自由度的设计师与资深 AI 绘画爱好者，ComfyUI 都能提供强大的支持。其独特的模块化架构允许社区不断扩展新功能，使其成为当前最灵活、生态最丰富的开源扩散模型工具之一，帮助用户将创意高效转化为现实。",107662,"2026-04-03T11:11:01",[13,14,15],{"id":36,"name":37,"github_repo":38,"description_zh":39,"stars":40,"difficulty_score":23,"last_commit_at":41,"category_tags":42,"status":16},3704,"NextChat","ChatGPTNextWeb\u002FNextChat","NextChat 是一款轻量且极速的 AI 助手，旨在为用户提供流畅、跨平台的大模型交互体验。它完美解决了用户在多设备间切换时难以保持对话连续性，以及面对众多 AI 模型不知如何统一管理的痛点。无论是日常办公、学习辅助还是创意激发，NextChat 都能让用户随时随地通过网页、iOS、Android、Windows、MacOS 或 Linux 端无缝接入智能服务。\n\n这款工具非常适合普通用户、学生、职场人士以及需要私有化部署的企业团队使用。对于开发者而言，它也提供了便捷的自托管方案，支持一键部署到 Vercel 或 Zeabur 等平台。\n\nNextChat 的核心亮点在于其广泛的模型兼容性，原生支持 Claude、DeepSeek、GPT-4 及 Gemini Pro 等主流大模型，让用户在一个界面即可自由切换不同 AI 能力。此外，它还率先支持 MCP（Model Context Protocol）协议，增强了上下文处理能力。针对企业用户，NextChat 提供专业版解决方案，具备品牌定制、细粒度权限控制、内部知识库整合及安全审计等功能，满足公司对数据隐私和个性化管理的高标准要求。",87618,"2026-04-05T07:20:52",[13,26],{"id":44,"name":45,"github_repo":46,"description_zh":47,"stars":48,"difficulty_score":23,"last_commit_at":49,"category_tags":50,"status":16},2268,"ML-For-Beginners","microsoft\u002FML-For-Beginners","ML-For-Beginners 是由微软推出的一套系统化机器学习入门课程，旨在帮助零基础用户轻松掌握经典机器学习知识。这套课程将学习路径规划为 12 周，包含 26 节精炼课程和 52 道配套测验，内容涵盖从基础概念到实际应用的完整流程，有效解决了初学者面对庞大知识体系时无从下手、缺乏结构化指导的痛点。\n\n无论是希望转型的开发者、需要补充算法背景的研究人员，还是对人工智能充满好奇的普通爱好者，都能从中受益。课程不仅提供了清晰的理论讲解，还强调动手实践，让用户在循序渐进中建立扎实的技能基础。其独特的亮点在于强大的多语言支持，通过自动化机制提供了包括简体中文在内的 50 
多种语言版本，极大地降低了全球不同背景用户的学习门槛。此外，项目采用开源协作模式，社区活跃且内容持续更新，确保学习者能获取前沿且准确的技术资讯。如果你正寻找一条清晰、友好且专业的机器学习入门之路，ML-For-Beginners 将是理想的起点。",84991,"2026-04-05T10:45:23",[14,51,52,53,15,54,26,13,55],"数据工具","视频","插件","其他","音频",{"id":57,"name":58,"github_repo":59,"description_zh":60,"stars":61,"difficulty_score":10,"last_commit_at":62,"category_tags":63,"status":16},3128,"ragflow","infiniflow\u002Fragflow","RAGFlow 是一款领先的开源检索增强生成（RAG）引擎，旨在为大语言模型构建更精准、可靠的上下文层。它巧妙地将前沿的 RAG 技术与智能体（Agent）能力相结合，不仅支持从各类文档中高效提取知识，还能让模型基于这些知识进行逻辑推理和任务执行。\n\n在大模型应用中，幻觉问题和知识滞后是常见痛点。RAGFlow 通过深度解析复杂文档结构（如表格、图表及混合排版），显著提升了信息检索的准确度，从而有效减少模型“胡编乱造”的现象，确保回答既有据可依又具备时效性。其内置的智能体机制更进一步，使系统不仅能回答问题，还能自主规划步骤解决复杂问题。\n\n这款工具特别适合开发者、企业技术团队以及 AI 研究人员使用。无论是希望快速搭建私有知识库问答系统，还是致力于探索大模型在垂直领域落地的创新者，都能从中受益。RAGFlow 提供了可视化的工作流编排界面和灵活的 API 接口，既降低了非算法背景用户的上手门槛，也满足了专业开发者对系统深度定制的需求。作为基于 Apache 2.0 协议开源的项目，它正成为连接通用大模型与行业专有知识之间的重要桥梁。",77062,"2026-04-04T04:44:48",[15,14,13,26,54],{"id":65,"github_repo":66,"name":67,"description_en":68,"description_zh":69,"ai_summary_zh":69,"readme_en":70,"readme_zh":71,"quickstart_zh":72,"use_case_zh":73,"hero_image_url":74,"owner_login":75,"owner_name":76,"owner_avatar_url":77,"owner_bio":78,"owner_company":79,"owner_location":79,"owner_email":79,"owner_twitter":79,"owner_website":80,"owner_url":81,"languages":82,"stars":99,"forks":100,"last_commit_at":101,"license":102,"difficulty_score":10,"env_os":103,"env_gpu":104,"env_ram":105,"env_deps":106,"category_tags":116,"github_topics":117,"view_count":10,"oss_zip_url":79,"oss_zip_packed_at":79,"status":16,"created_at":122,"updated_at":123,"faqs":124,"releases":155},635,"lasgroup\u002FSDPO","SDPO","Reinforcement Learning via Self-Distillation (SDPO)","SDPO（Self-Distilled Policy Optimization）是一个专为大语言模型设计的强化学习框架，主要用于代码生成、数学推理等可验证领域的后训练优化。\n\n传统强化学习方法通常仅依赖最终成功与否的简单反馈，难以告诉模型具体错在哪里，导致学习效率受限。SDPO 通过引入“丰富反馈”机制解决了这一痛点。它的独特之处在于无需外部教师或复杂的奖励模型，而是利用模型自身产生的丰富文本反馈（如运行错误、评估意见）作为学习信号。SDPO 将当前模型视为自我教师，通过自蒸馏技术将反馈信息转化为密集的监督信号，帮助模型更精准地识别并修正上下文中的错误。即使在环境反馈稀疏的情况下，它也能通过重用高奖励轨迹保持高效学习。\n\n这项技术显著提升了模型在推理任务上的准确率与收敛速度。SDPO 非常适合致力于大模型对齐、强化学习算法优化的研究人员及开发者使用，尤其适用于需要低成本提升模型推理能力的场景。","\u003Cdiv align=\"center\">\n\n# Reinforcement Learning via Self-Distillation (SDPO)\n\n[![Paper](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002Fpaper-A42C25?style=for-the-badge&logo=arxiv&logoColor=white)](https:\u002F\u002Farxiv.org\u002Fabs\u002F2601.20802)  [![Github](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FCode-000000?style=for-the-badge&logo=github&logoColor=000&logoColor=white)](https:\u002F\u002Fgithub.com\u002Flasgroup\u002FSDPO) [![W&B Logs](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FWandB%20Logs-%2300B4AB?style=for-the-badge&logo=weightsandbiases&logoColor=white&labelColor=000000)](https:\u002F\u002Fwandb.ai\u002Fjonhue\u002FSDPO?nw=mgotcx6kk7)\n\n\u003C\u002Fdiv>\n\n\u003Cdiv align=\"center\" style=\"font-family: Arial, sans-serif;\">\n  \u003Cp>\n    \u003Ca href=\"#-introduction\" style=\"text-decoration: none; font-weight: bold;\">📖 Introduction\u003C\u002Fa> •\n    \u003Ca href=\"#-main-results\" style=\"text-decoration: none; font-weight: bold;\">📊 Main Results\u003C\u002Fa> •\n    \u003Ca href=\"#-getting-started\" style=\"text-decoration: none; font-weight: bold;\">🚀 Getting Started\u003C\u002Fa>\n  \u003C\u002Fp>\n  \u003Cp>\n    \u003Ca href=\"#usage-documentation\" style=\"text-decoration: none; font-weight: bold;\">Usage Documentation\u003C\u002Fa> •\n    \u003Ca href=\"#citation\" style=\"text-decoration: none; font-weight: bold;\">Citation\u003C\u002Fa>\n  \u003C\u002Fp>\n\u003C\u002Fdiv>\n\n## 📖 
Introduction\n\nLarge language models are increasingly post-trained with reinforcement learning in verifiable domains such as code and math. Yet, current methods for reinforcement learning with verifiable rewards (RLVR) learn only from a scalar outcome reward per attempt, creating a severe credit-assignment bottleneck. Many verifiable environments actually provide rich textual feedback, such as runtime errors or judge evaluations, that explain *why* an attempt failed. We formalize this setting as *Reinforcement Learning with Rich Feedback* (RLRF):\n\n\u003Cp align=\"center\">\n\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Flasgroup_SDPO_readme_018c01a07533.png\" alt=\"Reinforcement Learning from Rich Feedback\" width=\"80%\">\n\u003C\u002Fp>\n\n**We propose Self-Distilled Policy Optimization (SDPO)**, a reinforcement learning framework that augments on-policy optimization with self-distillation from the model’s own high-reward trajectories.\n\nSDPO converts tokenized feedback into a dense learning signal without any external teacher or explicit reward model. SDPO treats the current model conditioned on feedback as a self-teacher and distills its feedback-informed next-token predictions back into the policy. In this way, SDPO leverages the model's ability to retrospectively identify its own mistakes in-context.\n\n\u003Cp align=\"center\">\n\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Flasgroup_SDPO_readme_6df7e1c6cb19.png\" alt=\"SDPO\" width=\"80%\">\n\u003C\u002Fp>\n\n---\n\n## 📊 Main Results\n\n### Learning without Rich Environment Feedback\n\nWhen environment feedback is sparse or rule-based, standard reinforcement learning methods struggle to propagate learning signals efficiently. SDPO addresses this by reusing high-reward rollouts as implicit feedback, providing dense supervision even in the absence of rich environment feedback.\n\n\u003Cp align=\"center\">\n\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Flasgroup_SDPO_readme_536c336077c2.png\" alt=\"SDPO Performance vs. Training Steps\" width=\"80%\">\n\u003C\u002Fp>\n\n*Training progression of Olmo3-7B-Instruct on Chemistry. We report the average accuracy across 16 samples per question and a rolling average of response lengths over 5 steps. We report GRPO with the optimal hyperparameters for this model and task. We run each configuration for 3 seeds and report standard errors as shaded areas.*\n\n\u003Cp align=\"center\">\n\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Flasgroup_SDPO_readme_57af68acabc5.png\" alt=\"SDPO Performance without Rich Environment Feedback\" width=\"80%\">\n\u003C\u002Fp>\n\n***Comparison of SDPO and GRPO on reasoning-related benchmarks.** We report the highest achieved avg@16 within 1 hour and 5 hours of wall-clock training time, respectively.\nBoth SDPO and on-policy GRPO perform one gradient step per generation batch, while standard GRPO performs 4 off-policy mini-batch steps. We select optimal hyperparameters for SDPO and baselines based on 5h accuracy. Each run is performed on a node with 4 NVIDIA GH200 GPUs. Together with initialization and validation, each run takes approximately 6 hours.*\n\n---\n\n### Learning with Rich Environment Feedback\n\nIn settings where environments provide structured or textual feedback, SDPO naturally incorporates this information into self-distillation. 
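\n\nConcretely, enabling this mode comes down to a few configuration options (a minimal sketch based on the options documented under [Usage Documentation](#usage-documentation) below; apart from `loss_mode`, the values shown are the documented defaults, and the nesting follows the `actor.*` paths given there):\n\n```yaml\nactor:\n  policy_loss:\n    loss_mode: \"sdpo\"                  # enable self-distillation (default: \"vanilla\")\n  self_distillation:\n    full_logit_distillation: true      # dense, logit-level credit assignment\n    alpha: 0.5                         # 0.0 = forward KL, 1.0 = reverse KL, 0.5 = JSD\n    include_environment_feedback: true # inject rich feedback (e.g., test errors) into reprompting\n    environment_feedback_only_without_solution: true\n```\n\n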
By conditioning future attempts on both successful demonstrations and feedback from failed attempts, SDPO achieves faster convergence and more stable training.\n\n\u003Cp align=\"center\">\n\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Flasgroup_SDPO_readme_c91b02c160d2.png\" alt=\"SDPO Performance with Rich Environment Feedback\" width=\"80%\">\n\u003C\u002Fp>\n\n***SDPO with rich environment feedback.**\nLeft: SDPO benefits from denser credit assignment (logit > token > sequence-level) and consistently outperforms GRPO when rich feedback is available.\nRight: The self-teacher improves throughout training, and the final student substantially surpasses the initial teacher. Error bars show variability across seeds.*\n\n---\n\n### Solving Hard Questions via Test-Time Self-Distillation\n\nSDPO also enables **test-time self-distillation**. By generating multiple candidate solutions, identifying high-quality responses, and reusing them as demonstrations, the model can iteratively refine its outputs at inference time.  This leads to substantial gains on hard reasoning tasks without additional training.\n\n\u003Cp align=\"center\">\n\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Flasgroup_SDPO_readme_630b19159bfe.png\" alt=\"Test-Time Self-Distillation\" width=\"80%\">\n\u003C\u002Fp>\n\n***Test-time self-distillation on hard coding problems.**\nSDPO solves questions that neither the base model nor multi-turn interaction can solve, achieving higher solution discovery rates across generation budgets.*\n\n---\n\n## 🚀 Getting Started\n\n### System Requirements\n*   **Operating System:** Linux (Tested on SLES 15 SP5 and Ubuntu 22.04)\n*   **Hardware:** NVIDIA GPUs (CUDA compatible)\n*   **Python:** 3.12 (Tested on 3.12.3)\n*   **CUDA Driver:** Compatible with the PyTorch version installed (see below).\n\n---\n\n### Installation\n\n#### Option 1: Docker (Recommended for HPC\u002FGH200 Clusters)\n\nFor NVIDIA GH200 (aarch64) clusters with CUDA 13.1, we provide a pre-configured Dockerfile based on the NGC vLLM container.\n\n**Build and deploy:**\n```bash\n# Build the image\npodman build . -f Dockerfile.gh200 -t sdpo-gh200\n\n# Export for cluster use (enroot\u002Fsquashfs)\nenroot import -x mount -o sdpo-gh200.sqsh podman:\u002F\u002Flocalhost\u002Fsdpo-gh200:latest\n```\n\n> [!NOTE]\n> The Docker images use `requirements-gh200.txt` which contains pinned versions from `requirements-full.txt`, excluding packages pre-installed in the NGC vLLM container (torch, vllm, flash-attn, xformers, triton).\n\n---\n\n#### Option 2: Local Installation\n\n1. **Install PyTorch:**\n\n*   **For Ampere\u002FHopper (RTX 30\u002F40, H100):**\n    ```bash\n    pip install torch==2.5.1 torchvision torchaudio --index-url https:\u002F\u002Fdownload.pytorch.org\u002Fwhl\u002Fcu124\n    ```\n\n*   **For Blackwell (RTX 50, RTX PRO 2000 Blackwell):**\n    ```bash\n    pip install torch==2.7.0 torchvision torchaudio --index-url https:\u002F\u002Fdownload.pytorch.org\u002Fwhl\u002Fcu128\n    ```\n\n2. **Install SDPO and Dependencies:**\n```bash\n# Install core dependencies (pinned versions)\npip install -r requirements.txt\n\n# Install SDPO (verl) in editable mode\npip install -e .\n\n# Install Flash Attention 2 (compiled from source)\npip install flash-attn --no-build-isolation\n```\n\n3. 
**Optional: Install SGLang\u002FvLLM for high-throughput inference:**\n```bash\npip install -r requirements_sglang.txt\n```\n\n---\n\n### Requirement Files\n\n| File | Description |\n|------|-------------|\n| `requirements.txt` | Core dependencies with pinned versions |\n| `requirements-gh200.txt` | For NGC vLLM container (excludes pre-installed packages) |\n| `requirements-full.txt` | Complete pip freeze from working environment |\n| `requirements_sglang.txt` | SGLang\u002FvLLM stack for local inference |\n| `requirements-cuda.txt` | Flash Attention (for non-Docker installs) |\n\n**vLLM Version Note:**\n```\n# vllm==0.8.4       # GH200 cluster\n# vllm>=0.12.0      # Blackwell (RTX 50 series, B100\u002FB200) - NOT FULLY TESTED\n```\n\n> [!WARNING]\n> Blackwell architecture support (RTX 50 series, B100\u002FB200) has not been fully tested.\n\n> [!TIP]\n> For reproducibility, use `requirements-full.txt` which contains the exact versions from a tested environment.\n\n> [!NOTE]\n> For more specific instructions on `verl` architecture and advanced configuration, refer to the [official verl repository](https:\u002F\u002Fgithub.com\u002Fvolcengine\u002Fverl).\n\n---\n\n### Data Preparation\n\nThe data is already loaded and split into train and test sets in the `datasets` directory. You can proceed directly to **preprocessing** the data.\n\nIf you want to load and process the data yourself, you can run the following commands:\n\n#### Data Loading\nThe detailed instructions for loading the data are provided in `data\u002FREADME.md`.\n\nOne example is provided below:\n```bash\npython data\u002Fload_dataset.py \\\n    --dataset_name Chemistry \\\n    --output_path datasets\u002Fsciknoweval\u002Fchemistry.json\n```\n\nTo split the data into train and test sets, run the following command:\n```bash\npython data\u002Fsplit_tasks.py \\\n    --json_path datasets\u002Fsciknoweval\u002Fchemistry.json \\\n    --output_dir datasets\u002Fsciknoweval\u002Fchemistry \\\n    --test_ratio 0.1 \\\n    --seed 42\n```\n\nFor `LiveCodeBenchv6`, to split the _unit tests_ into train and test sets, run the following command:\n```bash\npython data\u002Fsplit_tests.py \\\n    --json_path datasets\u002Flcb_v6.json \\\n    --output_dir datasets\u002Flcb_v6\n```\n\n#### Data Preprocessing\nOur implementation uses the `parquet` format for the data. 
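\n\nAfter the preprocessing command below has been run, the resulting parquet files can be sanity-checked with any parquet reader. A minimal sketch (assuming `pandas` and `pyarrow` are installed; the output location and column names depend on `data\u002Fpreprocess.py`, so the path here is a placeholder):\n\n```python\nimport pandas as pd\n\n# Placeholder path: point this at the parquet file produced by data\u002Fpreprocess.py.\ndf = pd.read_parquet(\"DATASET_PATH\u002Ftrain.parquet\")\n\n# Report row count and schema to confirm the split was written as expected.\nprint(len(df))\nprint(df.dtypes)\n```\n\n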
To preprocess the data, run the following command:\n\n```bash\npython data\u002Fpreprocess.py \\\n    --data_source DATASET_PATH\n```\n`DATASET_PATH` should contain the `train.json` and `test.json` files.\n\n---\n\n### Configuration\nBefore running experiments, adapt the paths in `verl\u002Ftrainer\u002Fconfig\u002Fuser.yaml` to your environment:\n\n```yaml\nvars:\n  dir: \u002Fpath\u002Fto\u002Fyour\u002FSDPO              # Path to the SDPO repository\n  log_dir: \u002Fpath\u002Fto\u002Fyour\u002Flogs          # Directory for logs\n  ckpt_dir: \u002Fpath\u002Fto\u002Fyour\u002Fcheckpoints  # Directory for model checkpoints\n```\n\n---\n\n### Training\n\n#### Reproducing Results (Without Rich Environment Feedback)\n\nRun the following commands to reproduce the results without rich environment feedback.\n\n**GRPO baseline:**\n```bash\nbash experiments\u002Fgeneralization\u002Frun_baseline_grpo_all.sh\n```\n\n**SDPO:**\n```bash\nbash experiments\u002Fgeneralization\u002Frun_sdpo_all.sh\n```\n\n#### Reproducing Results (With Rich Environment Feedback)\nRun the following commands to reproduce the results with rich environment feedback.\n\n**GRPO baseline:**\n```bash\nbash experiments\u002Frich_feedback\u002Frun_baseline_grpo.sh\n```\n\n**SDPO:**\n```bash\nbash experiments\u002Frich_feedback\u002Frun_sdpo.sh\n```\n\n---\n\n### Multi-turn Baseline of Section 5\n\nPrepare the data by splitting it into individual tasks:\n\n```bash\nexport MY_DATA_SPLITS_DIR=lcb_v6\nexport MY_DATA_SINGLES_DIR=lcb_v6_singles\nbash data\u002Fprepare_data_splits.sh datasets\u002Flcb_v6.json\n```\n\nRun the multi-turn baseline for, e.g., question 120:\n\n```bash\npython baseline_multiturn\u002Fmultiturn.py --data-dir=lcb_v6_singles\u002Fq_120 --run-name multiturn_q120\n```\n\nOr, for all hard questions:\n\n```bash\nbash experiments\u002Fttt\u002Frun_multiturn_all.sh\n```\n\n---\n\n## Usage Documentation\n\nThis section documents the configuration options added by SDPO on top of the base verl framework.\n\n### Policy Loss Configuration\n\nLocated at `actor.policy_loss` in the config.\n\n- **loss_mode** (str, default: `\"vanilla\"`): Loss function mode. Set to `\"sdpo\"` to enable self-distillation. Options: `vanilla`, `sdpo`.\n\n### Self-Distillation Configuration\n\nLocated at `actor.self_distillation` in the config. Only active when `actor.policy_loss.loss_mode = \"sdpo\"`.\n\n#### Core Settings\n\n- **full_logit_distillation** (bool, default: `True`): Whether to use full-logit KL distillation.\n\n- **alpha** (float, default: `0.5`): KL interpolation coefficient. `0.0` = forward KL, `1.0` = reverse KL, `0.5` = JSD.\n\n- **success_reward_threshold** (float, default: `1.0`): Minimum sequence reward to be considered a successful demonstration.\n\n- **teacher_regularization** (str, default: `\"ema\"`): Teacher regularization mode. Options: `ema`, `trust-region`. Note: if `ema` is used, the model on the `RefWorker` is updated as an exponential moving average. `trust-region` requires `use_fused_kernels = False`.\n\n- **teacher_update_rate** (float, default: `0.05`): EMA update rate for teacher weights, or trust-region mixing coefficient.\n\n- **distillation_topk** (int | None, default: `100`): If set, use top-k logits for distillation instead of full distribution.\n\n- **distillation_add_tail** (bool, default: `True`): Whether to add a tail bucket for top-k distillation.\n\n- **is_clip** (float | None, default: `2.0`): Clip value for importance sampling ratio. 
`None` disables IS weighting.\n\n#### Reprompting Settings\n\n- **max_reprompt_len** (int, default: `10240`): Maximum token length of the reprompted prompt.\n\n- **reprompt_truncation** (str, default: `\"right\"`): Truncation method for reprompted prompts. Options: `left`, `right`, `error`.\n\n- **dont_reprompt_on_self_success** (bool, default: `True`): If `True`, don't use a sample's own successful response as demonstration.\n\n- **remove_thinking_from_demonstration** (bool, default: `True`): Whether to remove `\u003Cthink>...\u003C\u002Fthink>` tags from demonstrations.\n\n#### Template Settings\n\n- **reprompt_template** (str): Main template for reprompting. Placeholders: `{prompt}`, `{solution}`, `{feedback}`.\n\n- **solution_template** (str): Template for the solution section. Placeholder: `{successful_previous_attempt}`.\n\n- **feedback_template** (str): Template for the feedback section. Placeholder: `{feedback_raw}`.\n\n#### Feedback Settings\n\n- **include_environment_feedback** (bool, default: `True`): Whether to include environment feedback (e.g., test errors) in reprompting.\n\n- **environment_feedback_only_without_solution** (bool, default: `True`): If `True`, only use feedback when no successful solution is available.\n\n---\n\n## Citation\nIf you find this work helpful, please cite us.\n\n```bibtex\n@article{hubotter2026reinforcement,\n  title = {Reinforcement Learning via Self-Distillation},\n  author = {Hübotter, Jonas and Lübeck, Frederike and Behric, Lejs and Baumann, Anton and Bagatella, Marco and Marta, Daniel and Hakimi, Ido and Shenfeld, Idan and Kleine Buening, Thomas and Guestrin, Carlos and Krause, Andreas},\n  year = {2026},\n  journal = {arXiv preprint arXiv:2601.20802},\n}\n```\n\n## Attribution\n\nOur implementation is based on a recent version of [verl](https:\u002F\u002Fgithub.com\u002Fverl-project\u002Fverl).\n","\u003Cdiv align=\"center\">\n\n# 通过自蒸馏进行强化学习 (SDPO)\n\n[![Paper](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002Fpaper-A42C25?style=for-the-badge&logo=arxiv&logoColor=white)](https:\u002F\u002Farxiv.org\u002Fabs\u002F2601.20802)  [![Github](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FCode-000000?style=for-the-badge&logo=github&logoColor=000&logoColor=white)](https:\u002F\u002Fgithub.com\u002Flasgroup\u002FSDPO) [![W&B Logs](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FWandB%20Logs-%2300B4AB?style=for-the-badge&logo=weightsandbiases&logoColor=white&labelColor=000000)](https:\u002F\u002Fwandb.ai\u002Fjonhue\u002FSDPO?nw=mgotcx6kk7)\n\n\u003C\u002Fdiv>\n\n\u003Cdiv align=\"center\" style=\"font-family: Arial, sans-serif;\">\n  \u003Cp>\n    \u003Ca href=\"#-introduction\" style=\"text-decoration: none; font-weight: bold;\">📖 简介\u003C\u002Fa> •\n    \u003Ca href=\"#-main-results\" style=\"text-decoration: none; font-weight: bold;\">📊 主要结果\u003C\u002Fa> •\n    \u003Ca href=\"#-getting-started\" style=\"text-decoration: none; font-weight: bold;\">🚀 快速开始\u003C\u002Fa>\n  \u003C\u002Fp>\n  \u003Cp>\n    \u003Ca href=\"#usage-documentation\" style=\"text-decoration: none; font-weight: bold;\">使用文档\u003C\u002Fa> •\n    \u003Ca href=\"#citation\" style=\"text-decoration: none; font-weight: bold;\">引用\u003C\u002Fa>\n  \u003C\u002Fp>\n\u003C\u002Fdiv>\n\n## 📖 简介\n\n大型语言模型（Large Language Models）在代码和数学等可验证领域越来越多地使用强化学习进行后训练。然而，当前具有可验证奖励的强化学习（Reinforcement Learning with Verifiable Rewards, **RLVR**）方法仅从每次尝试的标量结果奖励中学习，造成了严重的信用分配瓶颈。许多可验证环境实际上提供了丰富的文本反馈，例如运行时错误或评估器评价，解释了尝试失败的原因。我们将此设置形式化为*具有丰富反馈的强化学习*（Reinforcement Learning with Rich Feedback, 
**RLRF**）：\n\n\u003Cp align=\"center\">\n\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Flasgroup_SDPO_readme_018c01a07533.png\" alt=\"来自丰富反馈的强化学习\" width=\"80%\">\n\u003C\u002Fp>\n\n**我们提出了自蒸馏策略优化（Self-Distilled Policy Optimization, SDPO）**，这是一种强化学习框架，它利用模型自身高奖励轨迹的自蒸馏来增强基于策略的优化（on-policy optimization)。\n\nSDPO 将分词化的反馈转换为密集的学习信号，无需任何外部教师或显式奖励模型。SDPO 将基于反馈条件化的当前模型视为自教师，并将其受反馈影响的下一个 token 预测蒸馏回策略中。通过这种方式，SDPO 利用了模型在上下文中回溯识别自身错误的能力。\n\n\u003Cp align=\"center\">\n\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Flasgroup_SDPO_readme_6df7e1c6cb19.png\" alt=\"SDPO\" width=\"80%\">\n\u003C\u002Fp>\n\n---\n\n## 📊 主要结果\n\n### 无丰富环境反馈下的学习\n\n当环境反馈稀疏或基于规则时，标准强化学习方法难以有效地传播学习信号。SDPO 通过重用高奖励轨迹（rollouts）作为隐式反馈来解决这一问题，即使在缺乏丰富环境反馈的情况下也能提供密集监督。\n\n\u003Cp align=\"center\">\n\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Flasgroup_SDPO_readme_536c336077c2.png\" alt=\"SDPO 性能与训练步骤对比\" width=\"80%\">\n\u003C\u002Fp>\n\n*Olmo3-7B-Instruct 在化学任务上的训练进展。我们报告了每个问题 16 个样本的平均准确率以及 5 步的响应长度滚动平均值。我们报告的 GRPO 采用了针对该模型和任务的最优超参数。每种配置运行 3 个种子，并以阴影区域报告标准误差。*\n\n\u003Cp align=\"center\">\n\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Flasgroup_SDPO_readme_57af68acabc5.png\" alt=\"无丰富环境反馈下的 SDPO 性能\" width=\"80%\">\n\u003C\u002Fp>\n\n***SDPO 与 GRPO 在推理相关基准测试上的比较。**我们分别报告了在 1 小时和 5 小时墙钟（wall-clock）训练时间内达到的最高 avg@16。SDPO 和基于策略（on-policy）的 GRPO 每个生成批次执行一次梯度步骤，而标准 GRPO 执行 4 次离策略（off-policy）mini-batch 步骤。我们根据 5 小时准确率选择 SDPO 和基线的最优超参数。每次运行在拥有 4 块 NVIDIA GH200 GPU 的节点上进行。加上初始化和验证，每次运行大约需要 6 小时。*\n\n---\n\n### 有丰富环境反馈下的学习\n\n在环境提供结构化或文本反馈的设置中，SDPO 自然地将这些信息纳入自蒸馏过程。通过将未来尝试的条件化建立在成功演示和失败尝试的反馈之上，SDPO 实现了更快的收敛和更稳定的训练。\n\n\u003Cp align=\"center\">\n\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Flasgroup_SDPO_readme_c91b02c160d2.png\" alt=\"有丰富环境反馈下的 SDPO 性能\" width=\"80%\">\n\u003C\u002Fp>\n\n***带有丰富环境反馈的 SDPO。**\n左图：SDPO 受益于更密集的信用分配（logit > token > sequence-level），并且在有丰富反馈可用时始终优于 GRPO。\n右图：自教师在训练过程中不断改进，最终学生显著超越初始教师。误差条显示了不同种子之间的变异性。*\n\n---\n\n### 通过测试时自蒸馏解决难题\n\nSDPO 还支持**测试时自蒸馏（test-time self-distillation）**。通过生成多个候选解决方案、识别高质量响应并将它们作为演示重用，模型可以在推理时迭代优化其输出，从而无需额外训练即可在高难度推理任务上获得显著提升。\n\n\u003Cp align=\"center\">\n\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Flasgroup_SDPO_readme_630b19159bfe.png\" alt=\"测试时自蒸馏\" width=\"80%\">\n\u003C\u002Fp>\n\n***在高难度编码问题上的测试时自蒸馏。**SDPO 解决了基础模型和多轮交互都无法解决的问题，在不同生成预算下均实现了更高的解决方案发现率。*\n\n---\n\n## 🚀 快速开始\n\n### 系统要求\n*   **操作系统：** Linux（已在 SLES 15 SP5 和 Ubuntu 22.04 上测试）\n*   **硬件：** NVIDIA GPU（兼容 CUDA）\n*   **Python：** 3.12（已在 3.12.3 上测试）\n*   **CUDA 驱动：** 与已安装的 PyTorch 版本兼容（见下文）。\n\n---\n\n### 安装\n\n#### 选项 1：Docker（推荐用于 HPC\u002FGH200 集群）\n\n对于配备 CUDA 13.1 的 NVIDIA GH200 (aarch64) 集群，我们提供了一个基于 NGC vLLM 容器的预配置 Dockerfile。\n\n**构建和部署：**\n```bash\n# 构建镜像\npodman build . -f Dockerfile.gh200 -t sdpo-gh200\n\n# 导出供集群使用（enroot\u002Fsquashfs）\nenroot import -x mount -o sdpo-gh200.sqsh podman:\u002F\u002Flocalhost\u002Fsdpo-gh200:latest\n```\n\n> [!NOTE]\n> Docker 镜像使用 `requirements-gh200.txt`，其中包含来自 `requirements-full.txt` 的固定版本，排除了 NGC vLLM 容器中预安装的包（torch, vllm, flash-attn, xformers, triton）。\n\n---\n\n#### 选项 2：本地安装\n\n1. 
**安装 PyTorch：**\n\n*   **适用于 Ampere\u002FHopper (RTX 30\u002F40, H100)：**\n    ```bash\n    pip install torch==2.5.1 torchvision torchaudio --index-url https:\u002F\u002Fdownload.pytorch.org\u002Fwhl\u002Fcu124\n    ```\n\n*   **适用于 Blackwell (RTX 50, RTX PRO 2000 Blackwell)：**\n    ```bash\n    pip install torch==2.7.0 torchvision torchaudio --index-url https:\u002F\u002Fdownload.pytorch.org\u002Fwhl\u002Fcu128\n    ```\n\n2. **安装 SDPO 及依赖项：**\n```bash\n# 安装核心依赖（固定版本）\npip install -r requirements.txt\n\n# 以可编辑模式安装 SDPO (verl)\npip install -e .\n\n# 安装 Flash Attention 2（从源码编译）\npip install flash-attn --no-build-isolation\n```\n\n3. **可选：安装 SGLang\u002FvLLM 用于高吞吐量推理：**\n```bash\npip install -r requirements_sglang.txt\n```\n\n---\n\n### 需求文件\n\n| 文件 | 描述 |\n|------|-------------|\n| `requirements.txt` | 带有固定版本的核心依赖 |\n| `requirements-gh200.txt` | 用于 NGC vLLM 容器（排除预装包） |\n| `requirements-full.txt` | 来自工作环境的完整 pip freeze |\n| `requirements_sglang.txt` | 用于本地推理的 SGLang\u002FvLLM 堆栈 |\n| `requirements-cuda.txt` | Flash Attention（用于非 Docker 安装） |\n\n**vLLM 版本说明：**\n```\n# vllm==0.8.4       # GH200 集群\n# vllm>=0.12.0      # Blackwell（RTX 50 系列，B100\u002FB200）- 尚未完全测试\n```\n\n> [!WARNING]\n> Blackwell 架构支持（RTX 50 系列，B100\u002FB200）尚未完全测试。\n\n> [!TIP]\n> 为了可复现性，请使用 `requirements-full.txt`，其中包含来自已测试环境的精确版本。\n\n> [!NOTE]\n> 关于 `verl`（Volcano Engine Reinforcement Learning，火山引擎强化学习框架）架构和高级配置的更具体说明，请参阅 [官方 verl 仓库](https:\u002F\u002Fgithub.com\u002Fvolcengine\u002Fverl)。\n\n---\n\n### 数据准备\n\n数据已经加载并拆分到 `datasets` 目录中的训练集和测试集。您可以直接进行数据的**预处理**。\n\n如果您想自己加载和处理数据，可以运行以下命令：\n\n#### 数据加载\n加载数据的详细说明在 `data\u002FREADME.md` 中提供。\n\n下面提供了一个示例：\n```bash\npython data\u002Fload_dataset.py \\\n    --dataset_name Chemistry \\\n    --output_path datasets\u002Fsciknoweval\u002Fchemistry.json\n```\n\n要将数据拆分为训练集和测试集，请运行以下命令：\n```bash\npython data\u002Fsplit_tasks.py \\\n    --json_path datasets\u002Fsciknoweval\u002Fchemistry.json \\\n    --output_dir datasets\u002Fsciknoweval\u002Fchemistry \\\n    --test_ratio 0.1 \\\n    --seed 42\n```\n\n对于 `LiveCodeBenchv6`，将 _单元测试_ 拆分为训练集和测试集，请运行以下命令：\n```bash\npython data\u002Fsplit_tests.py \\\n    --json_path datasets\u002Flcb_v6.json \\\n    --output_dir datasets\u002Flcb_v6\n```\n\n#### 数据预处理\n我们的实现使用 `parquet`（列式存储）格式存储数据。要预处理数据，请运行以下命令：\n\n```bash\npython data\u002Fpreprocess.py \\\n    --data_source DATASET_PATH\n```\n`DATASET_PATH` 应包含 `train.json` 和 `test.json` 文件。\n\n---\n\n### 配置\n\n在运行实验之前，请根据您的环境调整 `verl\u002Ftrainer\u002Fconfig\u002Fuser.yaml` 中的路径：\n\n```yaml\nvars:\n  dir: \u002Fpath\u002Fto\u002Fyour\u002FSDPO              # 指向 SDPO 仓库的路径\n  log_dir: \u002Fpath\u002Fto\u002Fyour\u002Flogs          # 日志目录\n  ckpt_dir: \u002Fpath\u002Fto\u002Fyour\u002Fcheckpoints  # 模型检查点目录\n```\n\n---\n\n### 训练\n\n#### 复现结果（无丰富环境反馈）\n\n运行以下命令以复现无丰富环境反馈的结果。\n\n**GRPO（组相对策略优化）基线：**\n```bash\nbash experiments\u002Fgeneralization\u002Frun_baseline_grpo_all.sh\n```\n\n**SDPO（自蒸馏策略优化）：**\n```bash\nbash experiments\u002Fgeneralization\u002Frun_sdpo_all.sh\n```\n\n#### 复现结果（有丰富环境反馈）\n运行以下命令以复现有丰富环境反馈的结果。\n\n**GRPO（组相对策略优化）基线：**\n```bash\nbash experiments\u002Frich_feedback\u002Frun_baseline_grpo.sh\n```\n\n**SDPO（自蒸馏策略优化）：**\n```bash\nbash experiments\u002Frich_feedback\u002Frun_sdpo.sh\n```\n\n---\n\n### 第 5 节的多轮基线\n\n通过将数据拆分为单个任务来准备数据：\n\n```bash\nexport MY_DATA_SPLITS_DIR=lcb_v6\nexport MY_DATA_SINGLES_DIR=lcb_v6_singles\nbash data\u002Fprepare_data_splits.sh datasets\u002Flcb_v6.json\n```\n\n运行多轮基线，例如问题 120：\n\n```bash\npython baseline_multiturn\u002Fmultiturn.py 
--data-dir=lcb_v6_singles\u002Fq_120 --run-name multiturn_q120\n```\n\n或者，针对所有难题：\n\n```bash\nbash experiments\u002Fttt\u002Frun_multiturn_all.sh\n```\n\n---\n\n## 使用说明文档\n\n本节记录 SDPO 在基础 verl 框架之上添加的配置选项。\n\n### 策略损失配置\n\n位于配置中的 `actor.policy_loss`。\n\n- **loss_mode** (str, default: `\"vanilla\"`): 损失函数模式。设置为 `\"sdpo\"` 以启用自蒸馏。选项：`vanilla`（标准模式）, `sdpo`。\n\n### 自蒸馏配置\n\n位于配置中的 `actor.self_distillation`。仅在 `actor.policy_loss.loss_mode = \"sdpo\"` 时激活。\n\n#### 核心设置\n\n- **full_logit_distillation** (bool, default: `True`): 是否使用完整 Logits（对数几率）KL（Kullback-Leibler 散度）蒸馏。\n\n- **alpha** (float, default: `0.5`): KL 插值系数。`0.0` = 前向 KL，`1.0` = 反向 KL，`0.5` = JSD（杰森-香农散度）。\n\n- **success_reward_threshold** (float, default: `1.0`): 被视为成功演示的最小序列奖励。\n\n- **teacher_regularization** (str, default: `\"ema\"`): 教师正则化模式。选项：`ema`（指数移动平均）, `trust-region`（信任区域）。注意：如果使用 `ema`，`RefWorker` 上的模型将作为指数移动平均值更新。`trust-region` 需要 `use_fused_kernels = False`。\n\n- **teacher_update_rate** (float, default: `0.05`): 教师权重的 EMA 更新率，或信任区域混合系数。\n\n- **distillation_topk** (int | None, default: `100`): 如果设置，则使用 top-k logits 进行蒸馏而不是完整分布。\n\n- **distillation_add_tail** (bool, default: `True`): 是否为 top-k 蒸馏添加尾部桶。\n\n- **is_clip** (float | None, default: `2.0`): 重要性采样（IS）比率的裁剪值。`None` 禁用 IS 加权。\n\n#### 重提示设置\n\n- **max_reprompt_len** (int, default: `10240`): 重提示后提示词的最大 Token（词元）长度。\n\n- **reprompt_truncation** (str, default: `\"right\"`): 重提示后提示词的截断方法。选项：`left`, `right`, `error`。\n\n- **dont_reprompt_on_self_success** (bool, default: `True`): 如果为 `True`，不使用样本自身的成功响应作为演示。\n\n- **remove_thinking_from_demonstration** (bool, default: `True`): 是否从演示中移除 `\u003Cthink>...\u003C\u002Fthink>` 标签。\n\n#### 模板设置\n\n- **reprompt_template** (str): 重提示的主要模板。占位符：`{prompt}`, `{solution}`, `{feedback}`。\n\n- **solution_template** (str): 解决方案部分的模板。占位符：`{successful_previous_attempt}`。\n\n- **feedback_template** (str): 反馈部分的模板。占位符：`{feedback_raw}`。\n\n#### 反馈设置\n\n- **include_environment_feedback** (bool, default: `True`): 是否在重提示中包含环境反馈（例如测试错误）。\n\n- **environment_feedback_only_without_solution** (bool, default: `True`): 如果为 `True`，仅在没有可用成功解决方案时使用反馈。\n\n---\n\n## 引用\n如果您发现这项工作有所帮助，请引用我们。\n\n```bibtex\n@article{hubotter2026reinforcement,\n  title = {Reinforcement Learning via Self-Distillation},\n  author = {Hübotter, Jonas and Lübeck, Frederike and Behric, Lejs and Baumann, Anton and Bagatella, Marco and Marta, Daniel and Hakimi, Ido and Shenfeld, Idan and Kleine Buening, Thomas and Guestrin, Carlos and Krause, Andreas},\n  year = {2026},\n  journal = {arXiv preprint arXiv:2601.20802},\n}\n```\n\n## 致谢\n\n我们的实现基于 [verl](https:\u002F\u002Fgithub.com\u002Fverl-project\u002Fverl) 的较新版本。","# SDPO 快速上手指南\n\n**SDPO (Self-Distilled Policy Optimization)** 是一种基于强化学习的框架，通过模型自身的高奖励轨迹进行自蒸馏，无需外部教师或显式奖励模型即可增强策略优化。适用于代码、数学等可验证领域的强化学习场景。\n\n---\n\n## 🛠️ 环境准备\n\n*   **操作系统**: Linux (推荐 Ubuntu 22.04 或 SLES 15 SP5)\n*   **硬件**: NVIDIA GPU (需支持 CUDA)\n*   **Python**: 3.12 (测试版本 3.12.3)\n*   **CUDA**: 与安装的 PyTorch 版本兼容\n\n> **注意**: \n> *   Ampere\u002FHopper 架构 (RTX 30\u002F40, H100): 使用 CUDA 12.4\n> *   Blackwell 架构 (RTX 50, B100\u002FB200): 使用 CUDA 12.8 (尚未完全测试)\n\n---\n\n## 🚀 安装步骤\n\n### 1. 安装 PyTorch\n\n根据您的显卡架构选择对应的命令：\n\n**Ampere \u002F Hopper (RTX 30\u002F40, H100):**\n```bash\npip install torch==2.5.1 torchvision torchaudio --index-url https:\u002F\u002Fdownload.pytorch.org\u002Fwhl\u002Fcu124\n```\n\n**Blackwell (RTX 50, RTX PRO 2000 Blackwell):**\n```bash\npip install torch==2.7.0 torchvision torchaudio --index-url https:\u002F\u002Fdownload.pytorch.org\u002Fwhl\u002Fcu128\n```\n\n### 2. 
安装 SDPO 核心依赖\n\n```bash\n# 安装核心依赖 (固定版本)\npip install -r requirements.txt\n\n# 以可编辑模式安装 SDPO (verl)\npip install -e .\n\n# 编译安装 Flash Attention 2\npip install flash-attn --no-build-isolation\n```\n\n### 3. 可选：高性能推理支持\n\n如需使用 SGLang\u002FvLLM 进行高吞吐量推理：\n```bash\npip install -r requirements_sglang.txt\n```\n\n> **提示**: 建议使用 `requirements-full.txt` 以确保环境复现性。若使用 HPC\u002FGH200 集群，推荐使用 Docker 方案（见 README）。\n\n---\n\n## 💻 基本使用\n\n### 1. 数据准备\n\n数据已预置于 `datasets` 目录。如需自行加载和预处理：\n\n**加载数据集：**\n```bash\npython data\u002Fload_dataset.py \\\n    --dataset_name Chemistry \\\n    --output_path datasets\u002Fsciknoweval\u002Fchemistry.json\n```\n\n**划分训练\u002F测试集：**\n```bash\npython data\u002Fsplit_tasks.py \\\n    --json_path datasets\u002Fsciknoweval\u002Fchemistry.json \\\n    --output_dir datasets\u002Fsciknoweval\u002Fchemistry \\\n    --test_ratio 0.1 \\\n    --seed 42\n```\n\n**数据预处理 (Parquet 格式)：**\n```bash\npython data\u002Fpreprocess.py \\\n    --data_source DATASET_PATH\n```\n\n### 2. 配置修改\n\n在运行实验前，请修改配置文件 `verl\u002Ftrainer\u002Fconfig\u002Fuser.yaml`，设置以下路径：\n\n```yaml\nvars:\n  dir: \u002Fpath\u002Fto\u002Fyour\u002FSDPO              # SDPO 仓库路径\n  log_dir: \u002Fpath\u002Fto\u002Fyour\u002Flogs          # 日志目录\n  ckpt_dir: \u002Fpath\u002Fto\u002Fyour\u002Fcheckpoints  # 检查点目录\n```\n\n**启用 SDPO 模式：**\n确保在配置文件中将策略损失模式设置为 `sdpo`：\n```yaml\nactor:\n  policy_loss:\n    loss_mode: \"sdpo\"  # 默认是 \"vanilla\"，改为 \"sdpo\" 启用自蒸馏\n```\n\n### 3. 开始训练\n\n**无丰富环境反馈场景 (通用泛化)：**\n\n运行 SDPO 训练：\n```bash\nbash experiments\u002Fgeneralization\u002Frun_sdpo_all.sh\n```\n\n**有丰富环境反馈场景：**\n\n运行 SDPO 训练：\n```bash\nbash experiments\u002Frich_feedback\u002Frun_sdpo.sh\n```\n\n> **提示**: 训练完成后，可通过 W&B 查看日志监控训练进度。","某算法团队正在微调开源大模型用于自动化数学解题，目标是在不依赖人工标注的情况下，显著提升复杂推理任务的准确率。\n\n### 没有 SDPO 时\n- 仅依赖最终对错作为标量奖励，模型无法定位解题步骤中的具体错误环节。\n- 面对稀疏反馈时强化学习面临信用分配瓶颈，导致训练收敛极慢且不稳定。\n- 若要获取详细反馈需引入外部奖励模型，增加了额外的计算开销与部署复杂度。\n- 模型难以利用运行时产生的文本错误信息来自我修正策略，浪费大量算力。\n\n### 使用 SDPO 后\n- SDPO 将 Token 化的环境反馈转化为稠密学习信号，无需任何外部教师模型介入。\n- 通过自蒸馏机制，模型基于自身高奖励轨迹生成隐式反馈，有效指导参数更新。\n- 即使环境反馈稀疏或仅基于规则，SDPO 也能复用成功样本提供监督，显著提升数学推理准确率。\n- 训练过程中模型自动识别上下文错误模式，大幅减少无效尝试并加快收敛速度。\n\nSDPO 通过自蒸馏机制将稀疏的验证反馈转化为稠密信号，不仅解决了信用分配难题，还显著提升了模型在可验证领域的自我进化能力与训练效率。","https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Flasgroup_SDPO_536c3360.png","lasgroup","LAS @ ETH Zurich","https:\u002F\u002Foss.gittoolsai.com\u002Favatars\u002Flasgroup_0049dd28.png","Learning and Adaptive Systems Group",null,"https:\u002F\u002Flas.inf.ethz.ch\u002F","https:\u002F\u002Fgithub.com\u002Flasgroup",[83,87,91,95],{"name":84,"color":85,"percentage":86},"Python","#3572A5",94.4,{"name":88,"color":89,"percentage":90},"Shell","#89e051",5.4,{"name":92,"color":93,"percentage":94},"Jinja","#a52a22",0.1,{"name":96,"color":97,"percentage":98},"Dockerfile","#384d54",0,725,80,"2026-04-04T01:10:14","Apache-2.0","Linux","需要 NVIDIA GPU，支持 CUDA 12.4\u002F12.8\u002F13.1，推荐多卡环境 (如 4x GH200)，显存需求未明确","未说明",{"notes":107,"python":108,"dependencies":109},"推荐使用 Docker 部署 (特别是 GH200 集群); Blackwell 架构 (RTX 50\u002FB100\u002FB200) 支持未完全测试; 需修改 verl 配置文件路径并预处理数据; 核心依赖包含 verl 框架","3.12",[110,111,112,113,114,115],"torch","vllm","flash-attn","verl","xformers","triton",[26,13],[118,119,120,121],"distillation","llm","reasoning","rl","2026-03-27T02:49:30.150509","2026-04-06T05:17:23.410402",[125,130,135,140,145,150],{"id":126,"question_zh":127,"answer_zh":128,"source_url":129},2607,"复现论文 Table 3 结果时，训练配置（如 Epochs、数据集）应该如何设置？","论文中每个 run 约 6 小时，若使用默认配置（total_epochs: 30）可能导致训练时间过长（如 70 小时）。建议：1. 
数据集需显式移除非选择题（remove non multiple-choice questions），仅保留选择题数据。2. vLLM 生成 rollout 时存在随机性且难以固定种子，建议重复运行 3 次取均值和误差条。3. 具体准确率参考 W&B 日志（约 0.8）。","https:\u002F\u002Fgithub.com\u002Flasgroup\u002FSDPO\u002Fissues\u002F17",{"id":131,"question_zh":132,"answer_zh":133,"source_url":134},2608,"训练过程中出现大量指标为零值（如 critic\u002Freturns, actor\u002Fpg_loss）是什么原因？","这是由于 `load_dataset.py` 引入的 bug 导致数据集的 `system` 列被丢弃。解决方案：1. 更新代码至修复该问题的版本（见 PR #12）。2. 临时变通方案：在用户提示词中硬编码系统提示（Hardcoding the system prompt in the user prompt），效果大致相同。","https:\u002F\u002Fgithub.com\u002Flasgroup\u002FSDPO\u002Fissues\u002F8",{"id":136,"question_zh":137,"answer_zh":138,"source_url":139},2609,"不同 NVIDIA GPU 架构（如 H20 vs GH200）应如何配置环境？推荐使用哪个 vLLM 版本？","不同架构对 PyTorch 和 CUDA 版本要求不同，易产生依赖冲突。建议：1. 优先使用官方提供的 Docker 镜像。2. 使用完全锁定版本的 `requirements.txt`。目前实验主要在 GH200 架构上进行，通用指令较难提供，请遵循 Docker 和 pinned requirements。","https:\u002F\u002Fgithub.com\u002Flasgroup\u002FSDPO\u002Fissues\u002F9",{"id":141,"question_zh":142,"answer_zh":143,"source_url":144},2610,"SDPO 与 SDFT 算法有什么本质区别？","两者的损失函数是相同的。主要区别在于“自教师”（self-teacher）的数据来源：SDFT 使用数据集中的演示（demonstrations）构建 ICL，而 SDPO 使用环境中生成的轨迹（generated trajectories，标记为正确的）以及其他丰富信号在线构建教师提示。","https:\u002F\u002Fgithub.com\u002Flasgroup\u002FSDPO\u002Fissues\u002F6",{"id":146,"question_zh":147,"answer_zh":148,"source_url":149},2611,"既然环境中可观测反馈，为什么需要训练学生模型，而不是直接在测试时从自教师采样？","原因主要有两点：1. 经过一次反馈后的教师模型通常仍无法解决问题。2. 具有多次顺序反馈上下文（multi-turn baseline）的教师才是基线方法，因此需要训练学生来学习利用这些反馈。","https:\u002F\u002Fgithub.com\u002Flasgroup\u002FSDPO\u002Fissues\u002F3",{"id":151,"question_zh":152,"answer_zh":153,"source_url":154},2612,"复现 LCB_V6 结果时遇到内存错误或精度异常怎么办？","如果遇到内存错误，可以尝试降低 batch size。关于复现结果精度与论文不符的情况，建议检查是否进行了官方评估，或对比训练配置（如 batch size）以确保一致性。","https:\u002F\u002Fgithub.com\u002Flasgroup\u002FSDPO\u002Fissues\u002F14",[]]