[{"data":1,"prerenderedAt":-1},["ShallowReactive",2],{"similar-WooooDyy--AgentGym-RL":3,"tool-WooooDyy--AgentGym-RL":62},[4,18,26,36,46,54],{"id":5,"name":6,"github_repo":7,"description_zh":8,"stars":9,"difficulty_score":10,"last_commit_at":11,"category_tags":12,"status":17},4358,"openclaw","openclaw\u002Fopenclaw","OpenClaw 是一款专为个人打造的本地化 AI 助手，旨在让你在自己的设备上拥有完全可控的智能伙伴。它打破了传统 AI 助手局限于特定网页或应用的束缚，能够直接接入你日常使用的各类通讯渠道，包括微信、WhatsApp、Telegram、Discord、iMessage 等数十种平台。无论你在哪个聊天软件中发送消息，OpenClaw 都能即时响应，甚至支持在 macOS、iOS 和 Android 设备上进行语音交互，并提供实时的画布渲染功能供你操控。\n\n这款工具主要解决了用户对数据隐私、响应速度以及“始终在线”体验的需求。通过将 AI 部署在本地，用户无需依赖云端服务即可享受快速、私密的智能辅助，真正实现了“你的数据，你做主”。其独特的技术亮点在于强大的网关架构，将控制平面与核心助手分离，确保跨平台通信的流畅性与扩展性。\n\nOpenClaw 非常适合希望构建个性化工作流的技术爱好者、开发者，以及注重隐私保护且不愿被单一生态绑定的普通用户。只要具备基础的终端操作能力（支持 macOS、Linux 及 Windows WSL2），即可通过简单的命令行引导完成部署。如果你渴望拥有一个懂你",349277,3,"2026-04-06T06:32:30",[13,14,15,16],"Agent","开发框架","图像","数据工具","ready",{"id":19,"name":20,"github_repo":21,"description_zh":22,"stars":23,"difficulty_score":10,"last_commit_at":24,"category_tags":25,"status":17},3808,"stable-diffusion-webui","AUTOMATIC1111\u002Fstable-diffusion-webui","stable-diffusion-webui 是一个基于 Gradio 构建的网页版操作界面，旨在让用户能够轻松地在本地运行和使用强大的 Stable Diffusion 图像生成模型。它解决了原始模型依赖命令行、操作门槛高且功能分散的痛点，将复杂的 AI 绘图流程整合进一个直观易用的图形化平台。\n\n无论是希望快速上手的普通创作者、需要精细控制画面细节的设计师，还是想要深入探索模型潜力的开发者与研究人员，都能从中获益。其核心亮点在于极高的功能丰富度：不仅支持文生图、图生图、局部重绘（Inpainting）和外绘（Outpainting）等基础模式，还独创了注意力机制调整、提示词矩阵、负向提示词以及“高清修复”等高级功能。此外，它内置了 GFPGAN 和 CodeFormer 等人脸修复工具，支持多种神经网络放大算法，并允许用户通过插件系统无限扩展能力。即使是显存有限的设备，stable-diffusion-webui 也提供了相应的优化选项，让高质量的 AI 艺术创作变得触手可及。",162132,"2026-04-05T11:01:52",[14,15,13],{"id":27,"name":28,"github_repo":29,"description_zh":30,"stars":31,"difficulty_score":32,"last_commit_at":33,"category_tags":34,"status":17},1381,"everything-claude-code","affaan-m\u002Feverything-claude-code","everything-claude-code 是一套专为 AI 编程助手（如 Claude Code、Codex、Cursor 等）打造的高性能优化系统。它不仅仅是一组配置文件，而是一个经过长期实战打磨的完整框架，旨在解决 AI 代理在实际开发中面临的效率低下、记忆丢失、安全隐患及缺乏持续学习能力等核心痛点。\n\n通过引入技能模块化、直觉增强、记忆持久化机制以及内置的安全扫描功能，everything-claude-code 能显著提升 AI 在复杂任务中的表现，帮助开发者构建更稳定、更智能的生产级 AI 代理。其独特的“研究优先”开发理念和针对 Token 消耗的优化策略，使得模型响应更快、成本更低，同时有效防御潜在的攻击向量。\n\n这套工具特别适合软件开发者、AI 研究人员以及希望深度定制 AI 工作流的技术团队使用。无论您是在构建大型代码库，还是需要 AI 协助进行安全审计与自动化测试，everything-claude-code 都能提供强大的底层支持。作为一个曾荣获 Anthropic 黑客大奖的开源项目，它融合了多语言支持与丰富的实战钩子（hooks），让 AI 真正成长为懂上",160784,2,"2026-04-19T11:32:54",[14,13,35],"语言模型",{"id":37,"name":38,"github_repo":39,"description_zh":40,"stars":41,"difficulty_score":42,"last_commit_at":43,"category_tags":44,"status":17},8272,"opencode","anomalyco\u002Fopencode","OpenCode 是一款开源的 AI 编程助手（Coding Agent），旨在像一位智能搭档一样融入您的开发流程。它不仅仅是一个代码补全插件，而是一个能够理解项目上下文、自主规划任务并执行复杂编码操作的智能体。无论是生成全新功能、重构现有代码，还是排查难以定位的 Bug，OpenCode 都能通过自然语言交互高效完成，显著减少开发者在重复性劳动和上下文切换上的时间消耗。\n\n这款工具专为软件开发者、工程师及技术研究人员设计，特别适合希望利用大模型能力来提升编码效率、加速原型开发或处理遗留代码维护的专业人群。其核心亮点在于完全开源的架构，这意味着用户可以审查代码逻辑、自定义行为策略，甚至私有化部署以保障数据安全，彻底打破了传统闭源 AI 助手的“黑盒”限制。\n\n在技术体验上，OpenCode 提供了灵活的终端界面（Terminal UI）和正在测试中的桌面应用程序，支持 macOS、Windows 及 Linux 全平台。它兼容多种包管理工具，安装便捷，并能无缝集成到现有的开发环境中。无论您是追求极致控制权的资深极客，还是渴望提升产出的独立开发者，OpenCode 都提供了一个透明、可信",144296,1,"2026-04-16T14:50:03",[13,45],"插件",{"id":47,"name":48,"github_repo":49,"description_zh":50,"stars":51,"difficulty_score":32,"last_commit_at":52,"category_tags":53,"status":17},2271,"ComfyUI","Comfy-Org\u002FComfyUI","ComfyUI 是一款功能强大且高度模块化的视觉 AI 引擎，专为设计和执行复杂的 Stable Diffusion 图像生成流程而打造。它摒弃了传统的代码编写模式，采用直观的节点式流程图界面，让用户通过连接不同的功能模块即可构建个性化的生成管线。\n\n这一设计巧妙解决了高级 AI 绘图工作流配置复杂、灵活性不足的痛点。用户无需具备编程背景，也能自由组合模型、调整参数并实时预览效果，轻松实现从基础文生图到多步骤高清修复等各类复杂任务。ComfyUI 拥有极佳的兼容性，不仅支持 Windows、macOS 和 
Linux 全平台，还广泛适配 NVIDIA、AMD、Intel 及苹果 Silicon 等多种硬件架构，并率先支持 SDXL、Flux、SD3 等前沿模型。\n\n无论是希望深入探索算法潜力的研究人员和开发者，还是追求极致创作自由度的设计师与资深 AI 绘画爱好者，ComfyUI 都能提供强大的支持。其独特的模块化架构允许社区不断扩展新功能，使其成为当前最灵活、生态最丰富的开源扩散模型工具之一，帮助用户将创意高效转化为现实。",109154,"2026-04-18T11:18:24",[14,15,13],{"id":55,"name":56,"github_repo":57,"description_zh":58,"stars":59,"difficulty_score":32,"last_commit_at":60,"category_tags":61,"status":17},6121,"gemini-cli","google-gemini\u002Fgemini-cli","gemini-cli 是一款由谷歌推出的开源 AI 命令行工具，它将强大的 Gemini 大模型能力直接集成到用户的终端环境中。对于习惯在命令行工作的开发者而言，它提供了一条从输入提示词到获取模型响应的最短路径，无需切换窗口即可享受智能辅助。\n\n这款工具主要解决了开发过程中频繁上下文切换的痛点，让用户能在熟悉的终端界面内直接完成代码理解、生成、调试以及自动化运维任务。无论是查询大型代码库、根据草图生成应用，还是执行复杂的 Git 操作，gemini-cli 都能通过自然语言指令高效处理。\n\n它特别适合广大软件工程师、DevOps 人员及技术研究人员使用。其核心亮点包括支持高达 100 万 token 的超长上下文窗口，具备出色的逻辑推理能力；内置 Google 搜索、文件操作及 Shell 命令执行等实用工具；更独特的是，它支持 MCP（模型上下文协议），允许用户灵活扩展自定义集成，连接如图像生成等外部能力。此外，个人谷歌账号即可享受免费的额度支持，且项目基于 Apache 2.0 协议完全开源，是提升终端工作效率的理想助手。",100752,"2026-04-10T01:20:03",[45,13,15,14],{"id":63,"github_repo":64,"name":65,"description_en":66,"description_zh":67,"ai_summary_zh":67,"readme_en":68,"readme_zh":69,"quickstart_zh":70,"use_case_zh":71,"hero_image_url":72,"owner_login":73,"owner_name":74,"owner_avatar_url":75,"owner_bio":76,"owner_company":77,"owner_location":78,"owner_email":78,"owner_twitter":79,"owner_website":80,"owner_url":81,"languages":82,"stars":95,"forks":96,"last_commit_at":97,"license":98,"difficulty_score":99,"env_os":100,"env_gpu":101,"env_ram":102,"env_deps":103,"category_tags":110,"github_topics":111,"view_count":32,"oss_zip_url":78,"oss_zip_packed_at":78,"status":17,"created_at":116,"updated_at":117,"faqs":118,"releases":119},9833,"WooooDyy\u002FAgentGym-RL","AgentGym-RL","Code and implementations for the paper \"AgentGym-RL: Training LLM Agents for Long-Horizon Decision Making through Multi-Turn Reinforcement Learning\" by Zhiheng Xi et al.","AgentGym-RL 是一个专为训练大语言模型（LLM）智能体而设计的强化学习框架，旨在提升其在复杂真实场景中进行多轮交互与长程决策的能力。传统方法往往依赖人类示范，难以让智能体获得真正的突破性进展，且现有研究多局限于数学或代码等单轮任务，缺乏环境多样性并面临训练不稳定的挑战。AgentGym-RL 通过提供丰富的真实世界场景和主流强化学习算法支持，有效解决了这些痛点。\n\n该工具特别适合 AI 研究人员和开发者使用，帮助他们构建能够自主探索环境、积累经验并持续进化的智能体。其核心技术亮点在于提出了\"ScalingInter-RL\"方法，通过在训练过程中逐步延长智能体与环境的交互跨度，巧妙平衡了探索与利用的关系，显著提升了优化的稳定性与效率。实验表明，基于该框架训练的 7B 规模开源模型，在 27 项多样化任务上的表现已媲美甚至超越商业模型。此外，AgentGym-RL 还配备了可视化的交互式界面，支持完整交互轨迹的回放与分析，极大地简化了迭代开发中的实证研究工作。","# AgentGym-RL: Training LLM Agents for Long-Horizon Decision Making through Multi-Turn Reinforcement Learning\n\u003Cp align=\"center\">\n  📃 \u003Ca href=\"https:\u002F\u002Farxiv.org\u002Fabs\u002F2509.08755\" target=\"_blank\">Paper\u003C\u002Fa > • 🌐 \u003Ca href=\"https:\u002F\u002Fagentgym-rl.github.io\u002F\" target=\"_blank\">Project Page\u003C\u002Fa > • 🤗 \u003Ca href=\"https:\u002F\u002Fhuggingface.co\u002Fdatasets\u002FAgentGym\u002FAgentGym-RL-Data-ID\" target=\"_blank\">AgentGym-RL-Data-ID\u003C\u002Fa >\n\u003C\u002Fp >\n\nAgentGym-RL is a new framework to train LLM agents for **multi-turn** interactive decision-making through RL. It encompasses a wide variety of **real-world scenarios** and supports mainstream RL algorithms. 
Extensive experiments show that our framework and method substantially enhance the open-sourced 7B-scale model to a level that **matches or surpasses commercial models** on **27 tasks** across diverse environments.\n\n![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FWooooDyy_AgentGym-RL_readme_93d1e689f7da.jpg)\n\n## 🔔 News\n- **🏆[2026-02-06]** Our paper has been accepted to ICLR 2026 as an Oral presentation!\n- **🎉[2025-09-10]** You can develop your custom environment for AgentGym and perform RL! The tutorial is [here](https:\u002F\u002Fgithub.com\u002FWooooDyy\u002FAgentGym\u002Fblob\u002Fmain\u002Fdocs\u002Ftutorials\u002Fen\u002F05-2nd-Development.md).\n- **🥳[2025-09-10]** Our paper is released on arXiv: [AgentGym-RL: Training LLM Agents for Long-Horizon Decision Making through Multi-Turn Reinforcement Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2509.08755)\n- **🍺[2025-09-10]** Our RL dataset and benchmark are available on Hugging Face: [AgentGym-RL-Data-ID](https:\u002F\u002Fhuggingface.co\u002Fdatasets\u002FAgentGym\u002FAgentGym-RL-Data-ID)\n\n## 🌟 Overview\n\nDeveloping autonomous LLM agents capable of making a series of intelligent decisions to solve complex, real-world tasks is a fast-evolving frontier. Merely relying on human demonstrations for behaviour cloning can make agents competent at tasks, but rarely leads to genuine breakthroughs. As Richard Sutton emphasizes, it is the knowledge, skills and experience acquired through exploration and interaction with the environment that truly drive agents forward. Therefore, a promising approach is to train these agents using Reinforcement Learning.\n\nMost existing studies remain limited to single-turn tasks like math and coding. Recent attempts to extend RL to train LLM agents with multi-turn capabilities face notable challenges:\n\n- **Restricted task complexity and environment diversity.** In the era of reinforcement learning, environments have become increasingly crucial. Agents that perform well only in toy settings struggle to transfer to real-world scenarios, while diversity in environments is a prerequisite for their generalization.\n- **Difficulties in achieving stable and efficient optimization.** Multi-turn interaction dramatically enlarges the search space and increases variance in training signals, making it challenging to strike a balance between exploration and exploitation.\n\nTo address these challenges, we introduce **AgentGym-RL**, a new framework to train LLM agents for **multi-turn** interactive decision-making through RL. It encompasses a wide variety of **real-world scenarios** and supports mainstream RL algorithms, establishing a foundation for research and practice **in the era of experience**.\n\n![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FWooooDyy_AgentGym-RL_readme_e7458203c0a3.png)\n\nFurthermore, to tackle the exploration–exploitation trade-off and improve optimization stability in agent RL training, we propose **ScalingInter-RL**, a method that progressively extends the agent–environment interaction horizon during training. 
Experiments across different environments show that leveraging our AgentGym-RL framework with the ScalingInter-RL algorithm yields stable, sustained and substantial behavioral improvement.\n\nIn addition, to facilitate probing of data and model behaviors, we provide a **visualized interactive user interface** that allows for the replay and examination of full interaction trajectories, thereby streamlining empirical analysis for iterative development.\n\n![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FWooooDyy_AgentGym-RL_readme_fe7a4c773bbf.jpg)\n\n## 📖 Table of Contents\n\n- [AgentGym-RL: Training LLM Agents for Long-Horizon Decision Making through Multi-Turn Reinforcement Learning](#agentgym-rl-training-llm-agents-for-long-horizon-decision-making-through-multi-turn-reinforcement-learning)\n  * [🔔 News](#-news)\n  * [🌟 Overview](#-overview)\n  * [Features](#features)\n    + [Modular System Design of AgentGym-RL](#modular-system-design-of-agentgym-rl)\n    + [Environments](#environments)\n    + [Post-Training Strategies](#post-training-strategies)\n    + [ScalingInter-RL: Progressive Scaling Interaction for Agent RL](#scalinginter-rl-progressive-scaling-interaction-for-agent-rl)\n    + [Extending Verl](#extending-verl)\n  * [Performance](#performance)\n  * [Running Tutorial](#running-tutorial)\n    + [Environment Setup](#environment-setup)\n    + [Training](#training)\n    + [Evaluation](#evaluation)\n    + [Visualized user interface](#visualized-user-interface)\n  * [Acknowledgement](#acknowledgement)\n  * [Contact](#Contact)\n  * [Citation](#citation)\n\n## Features\n\n### Modular System Design of AgentGym-RL\n\nWe adopt a modular and decoupled design to implement AgentGym-RL, organizing it into three main components:\n\n- **Environment module**: provides diverse scenarios via a standardized server–client architecture with unified HTTP protocols and parallel requests (see the HTTP sketch after the environment list below).\n- **Agent module**: encapsulates the reasoning and decision-making process of agents in multi-turn interactions, with support for advanced mechanisms such as long-horizon planning and self-reflection.\n- **Training module**: implements reinforcement learning pipelines and other training methods to optimize agent policies.\n\n![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FWooooDyy_AgentGym-RL_readme_a5ac97f20fc9.jpg)\n\n### Environments\n\n* **Web Navigation**: We include **WebArena**, a realistic and reproducible web environment containing 4 distinct domains prevalent on the internet: online shopping, discussion forums, collaborative development, and business content management.\n* **Deep Search**: Building upon **Search-R1**, we include a RAG-based environment which enables LLMs to interact with search engines and solve multi-turn retrieval and reasoning tasks.\n* **Digital Games**: We include **TextCraft**, a text-based crafting game environment in which agents complete tasks via natural language interactions and task-based planning.\n* **Embodied Tasks**: We include **BabyAI**, which provides a controllable grid world with text instructions for embodied reasoning in simulated environments.\n* **Scientific Tasks**: We include **SciWorld**, which offers a scientific exploration simulator where agents conduct scientific experiments through text-driven reasoning cycles. 
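\n\nEach of these environments is exposed through the standardized HTTP server–client architecture described above, so a rollout worker only needs an HTTP client to drive an episode. Below is a minimal sketch of that pattern in Python; the endpoint paths and payload fields are illustrative assumptions rather than the actual AgentGym routes, so consult the AgentGym `README.md` for the real interface.\n\n```python\n# Minimal sketch of the unified server\u002Fclient pattern. The routes\n# (\u002Fcreate, \u002Fobservation, \u002Fstep) and payload fields are hypothetical.\nimport requests\n\nBASE = \"http:\u002F\u002Flocalhost:8000\"  # address of a launched environment server\n\n# create an environment instance for a task (hypothetical route)\nenv_id = requests.post(f\"{BASE}\u002Fcreate\", json={\"task\": \"webarena\"}).json()[\"id\"]\n\n# fetch the current observation (hypothetical route)\nobs = requests.get(f\"{BASE}\u002Fobservation\", params={\"env_id\": env_id}).json()\n\n# submit one action and read back observation \u002F reward \u002F done\nresult = requests.post(f\"{BASE}\u002Fstep\", json={\"env_id\": env_id, \"action\": \"click [42]\"}).json()\nprint(result[\"observation\"], result[\"reward\"], result[\"done\"])\n```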
\n\n### Post-Training Strategies\n\nAgentGym-RL supports a suite of mainstream online RL algorithms: **PPO, GRPO, RLOO, REINFORCE++.**\n\nBeyond online RL, AgentGym-RL also supports a broad range of complementary training paradigms: **SFT, DPO, AgentEvol.**\n\n### ScalingInter-RL: Progressive Scaling Interaction for Agent RL\n\nScalingInter-RL is a training approach designed to balance exploration and exploitation while ensuring stable optimization. At its core is a **progressive horizon-scaling strategy** that adaptively adjusts the number of interaction turns during RL.\n\n![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FWooooDyy_AgentGym-RL_readme_af4161f2d2da.png)\n\nWe start training with a smaller horizon, allowing the agent to efficiently exploit its policy and gain early proficiency on simple tasks. This establishes the groundwork for deeper, long-horizon reasoning. As training progresses, we gradually extend the horizon, enabling the agent to explore longer decision paths and fostering the emergence of higher-order cognitive behaviors.\n\n### Extending Verl\n\nWe make the following modifications to verl in order to develop AgentGym-RL:\n\n1. **Rollout using the vLLM engine**: To support multi-turn rollouts and efficient interaction with the environment, we introduce:\n\n   * `RolloutHandler` to handle trajectories. It correctly computes the attention masks, loss masks, position ids and sequence ids for environment observations and the assistant's actions in each turn, and also tracks historical messages, status and reward.\n\n   * `EnvClient` to handle interactions. The `EnvClient` provides several methods to facilitate interactions with the environment during rollout, such as `observation()` to get the current observation from the environment, `available_actions()` to get the currently available actions, `step()` to perform an action, and `reset()` to reset the environment (see the sketch after this list). To improve efficiency, our framework initializes environments and collects trajectories in parallel.\n\n2. **Advantage computation**: We revise verl's implementation of advantage computation for REINFORCE++ and GAE to ensure correctness in both single-turn and multi-turn scenarios.\n\n3. **Scaling interaction during training**: To develop ScalingInter-RL, we introduce `RoundScheduler` to scale interactions during training. The `FixedRoundsScheduler` enforces a fixed maximum number of interactions. The `StepRoundsScheduler` gradually increases the interaction horizon in a step-wise manner, enabling progressive scaling during training.\n\n
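A typical multi-turn rollout then just drives the `EnvClient` interface in a loop. The sketch below shows that loop; the method names follow the description above, while the constructor, the return structure of `step()`, and the `policy` callable are assumptions for illustration only.\n\n```python\n# Sketch of one episode driven through the EnvClient interface. The\n# observation()\u002Favailable_actions()\u002Fstep()\u002Freset() names come from the\n# README; the return fields of step() are assumed for illustration.\nfrom typing import Callable\n\ndef run_episode(client, policy: Callable[[str, list], str], max_rounds: int) -> float:\n    client.reset()                          # start a fresh episode\n    total_reward = 0.0\n    for _ in range(max_rounds):             # horizon cap, cf. RoundScheduler\n        obs = client.observation()          # current observation\n        actions = client.available_actions()\n        action = policy(obs, actions)       # the LLM agent picks an action\n        result = client.step(action)        # assumed to return obs\u002Freward\u002Fdone\n        total_reward += result[\"reward\"]\n        if result[\"done\"]:\n            break\n    return total_reward\n```\n\n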
## Performance\n\nWe leverage Qwen2.5-3B and Qwen2.5-7B as our primary backbone models. We evaluate AgentGym-RL and ScalingInter-RL across **five scenarios** and include multiple closed-source and open-source models for comparison. The evaluation results on the WebArena benchmark are as follows, while results on other benchmarks can be found in our [paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2509.08755).\n\n![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FWooooDyy_AgentGym-RL_readme_644bd8b9ade1.png)\n\n- The **ScalingInter-7B** model significantly **surpasses top-tier proprietary models** like GPT-4o, and **performs on par with larger models** like DeepSeek-R1-0528 and Gemini-2.5-Pro. Moreover, in Shopping and CMS, its score matches the best performance among all models in these categories.\n- The **AgentGym-RL-7B** model achieves an overall score that **matches the performance of GPT-4o**.\n\nMoreover, ScalingInter-RL demonstrates more **stable and efficient** training dynamics during RL optimization, as shown in the figure below.\n\n![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FWooooDyy_AgentGym-RL_readme_f1d70fffbfdc.jpg)\n\n* Longer-turn settings initially achieve higher rewards by enabling richer exploration but rapidly collapse; shorter-turn settings yield more stable but less exploratory learning, leading to a performance ceiling.\n* Our ScalingInter-RL method **progressively increases the interaction horizon**, and ultimately achieves **higher and more efficient** long-term performance.\n\n## Running Tutorial\n\n### Environment Setup\n\nWe recommend using CUDA 12.4, PyTorch 2.4, and Python 3.10. First, install the requirements using the following commands:\n```sh\necho \"Preparing environment for agentgym-rl...\"\nconda create -n agentgym-rl python==3.10 -y\nconda activate agentgym-rl\npip3 install torch==2.4.0 --index-url https:\u002F\u002Fdownload.pytorch.org\u002Fwhl\u002Fcu124\n# install flash-attn\nFLASH_ATTENTION_URL=\"https:\u002F\u002Fgithub.com\u002FDao-AILab\u002Fflash-attention\u002Freleases\u002Fdownload\u002Fv2.7.3\u002Fflash_attn-2.7.3+cu12torch2.4cxx11abiFALSE-cp310-cp310-linux_x86_64.whl\"\nFLASH_ATTENTION_NAME=\"flash_attn-2.7.3+cu12torch2.4cxx11abiFALSE-cp310-cp310-linux_x86_64.whl\"\nwget -q $FLASH_ATTENTION_URL -O $FLASH_ATTENTION_NAME\npip3 install $FLASH_ATTENTION_NAME\nrm -f $FLASH_ATTENTION_NAME\n# for RL\ncd AgentGym-RL\npip3 install -e .\n# for agentgym\necho \"Preparing environment for agentenv...\"\ncd AgentGym\u002Fagentenv\npip3 install -e .\npip3 install transformers==4.51.3\n```\n\n### Training\n\nFor SFT, DPO and AgentEvol, please refer to the `README.md` of [AgentGym](https:\u002F\u002Fgithub.com\u002FWooooDyy\u002FAgentGym\u002Ftree\u002F640f8bca6901a6a6d540ff61522b813988da47c4\u002F).\n\nFor RL training:\n\n**1. Environment Setup**\n\nMake sure you have the required environments set up (see the [Environment Setup section](#environment-setup) above).\n\n**2. Data Preparation**\n\nDownload the AgentGym-RL-Data-ID dataset from [Huggingface](https:\u002F\u002Fhuggingface.co\u002Fdatasets\u002FAgentGym\u002FAgentGym-RL-Data-ID).\n\n**3. Launch the environment server**\n\nPlease launch the environment server by referring to the `README.md` of [AgentGym](https:\u002F\u002Fgithub.com\u002FWooooDyy\u002FAgentGym\u002Ftree\u002F640f8bca6901a6a6d540ff61522b813988da47c4).\n\n**4. Training**\n\nYou can find the training example scripts for each task in [examples\u002Ftrain](.\u002Fexamples\u002Ftrain), for both AgentGym-RL and ScalingInter-RL. In addition, you may refer to the training parameters configured in those scripts.\n\n```sh\nbash webarena_train.sh\n```\n\nMost explanations of the arguments can be found in the docs of [verl](https:\u002F\u002Fverl.readthedocs.io\u002Fen\u002Flatest\u002Fexamples\u002Fconfig.html). 
Other key arguments:\n* `data.max_prompt_length`: Maximum length of the general task description prompt in the first turn.\n* `data.max_response_length`: Maximum total token length of the interaction trajectory (excluding the task prompt).\n* `actor_rollout_ref.agentgym.task_name`: Training task name of AgentGym.\n* `actor_rollout_ref.agentgym.env_addr`: URL of the AgentGym environment server.\n* `actor_rollout_ref.rollout.max_tokens`: Maximum token length of a single response per turn.\n* `actor_rollout_ref.rollout.rollout_log_dir`: Directory for storing rollout trajectories.\n* `algorithm.rounds_ctrl.type`: Strategy for controlling the maximum number of interaction turns. Options:\n  - `fixed`: fixed number of turns.\n  - `scaling_inter_stepwise`: number of turns increases at fixed step intervals.\n* `algorithm.rounds_ctrl.rounds`: Maximum number of allowed interaction turns.\n* `algorithm.rounds_ctrl.steps_scaling_inter`: Frequency (in training steps) at which to increase the maximum number of turns when using `scaling_inter_stepwise`.\n\nSee [AgentGym-RL\u002Fverl\u002Fagent_trainer\u002Fconfig\u002Fppo_trainer.yaml](.\u002FAgentGym-RL\u002Fverl\u002Fagent_trainer\u002Fconfig\u002Fppo_trainer.yaml) for more details.\n\nTo launch AgentGym-RL training, set:\n\n```sh\nalgorithm.rounds_ctrl.type=fixed \\\nalgorithm.rounds_ctrl.rounds=15 \\\n```\n\nSee [examples\u002Ftrain\u002FAgentGym-RL\u002Fwebarena_train.sh](.\u002Fexamples\u002Ftrain\u002FAgentGym-RL\u002Fwebarena_train.sh) for an example.\n\nTo launch ScalingInter-RL training, set:\n\n```sh\nalgorithm.rounds_ctrl.type=scaling_inter_stepwise \\\nalgorithm.rounds_ctrl.steps_scaling_inter=100 \\\nalgorithm.rounds_ctrl.rounds=[10,20,30] \\\n```\n\nSee [examples\u002Ftrain\u002FScalingInter-RL\u002Fwebarena_train.sh](.\u002Fexamples\u002Ftrain\u002FScalingInter-RL\u002Fwebarena_train.sh) for an example.\n\n
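With this configuration, the horizon cap follows the stepwise schedule of `StepRoundsScheduler`: with `rounds=[10,20,30]` and `steps_scaling_inter=100`, the maximum number of turns rises every 100 training steps. The snippet below illustrates that schedule only; it is a standalone sketch of the documented behaviour, not the scheduler class's actual implementation.\n\n```python\n# Illustration of the stepwise horizon schedule configured above. The\n# mapping from training step to turn cap mirrors the documented behaviour\n# of StepRoundsScheduler; the real class internals may differ.\ndef max_rounds_at(step: int, rounds: list, steps_scaling_inter: int) -> int:\n    stage = min(step \u002F\u002F steps_scaling_inter, len(rounds) - 1)\n    return rounds[stage]\n\nassert max_rounds_at(0, [10, 20, 30], 100) == 10    # steps 0-99    -> 10 turns\nassert max_rounds_at(150, [10, 20, 30], 100) == 20  # steps 100-199 -> 20 turns\nassert max_rounds_at(999, [10, 20, 30], 100) == 30  # capped at the last stage\n```\n\n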
### Evaluation\n\n**1. Environment Setup**\n\nMake sure you have the required environments set up (see the [Environment Setup section](#environment-setup) above).\n\n**2. Data Preparation**\n\nDownload the AgentGym-RL-Data-ID dataset from [Huggingface](https:\u002F\u002Fhuggingface.co\u002Fdatasets\u002FAgentGym\u002FAgentGym-RL-Data-ID).\n\n**3. Launch the environment server**\n\nPlease launch the environment server by referring to the `README.md` of [AgentGym](https:\u002F\u002Fgithub.com\u002FWooooDyy\u002FAgentGym\u002Ftree\u002F640f8bca6901a6a6d540ff61522b813988da47c4).\n\n**4. Evaluation**\n\nYou can find the evaluation example scripts for each task in `examples\u002Feval`. In addition, you may refer to the evaluation parameters configured in those scripts.\n\nTo run the evaluation, see `examples\u002Feval\u002Fwebarena_eval.sh` as an example.\n\n```sh\nbash webarena_eval.sh\n```\n\nMost explanations of the arguments can be found in the docs of [verl](https:\u002F\u002Fverl.readthedocs.io\u002Fen\u002Flatest\u002Fexamples\u002Fconfig.html). See `AgentGym-RL\u002Fverl\u002Fagent_trainer\u002Fconfig\u002Fgeneration.yaml` for more details.\n\n### Visualized user interface\n\nCheck [here](https:\u002F\u002Fgithub.com\u002FWooooDyy\u002FAgentGym\u002Ftree\u002F640f8bca6901a6a6d540ff61522b813988da47c4\u002Fenv-visualization) for setup instructions.\n\n## Acknowledgement\n\nThe Training module of AgentGym-RL is built upon [Verl](https:\u002F\u002Fgithub.com\u002Fvolcengine\u002Fverl), and the Environment module is built upon [AgentGym](https:\u002F\u002Fgithub.com\u002FWooooDyy\u002FAgentGym). We are grateful for their infrastructure support. We also extend our thanks to [TextCraft](https:\u002F\u002Fgithub.com\u002Farchiki\u002FADaPT), [BabyAI](https:\u002F\u002Fgithub.com\u002Fmila-iqia\u002Fbabyai), [SciWorld](https:\u002F\u002Fgithub.com\u002Fallenai\u002FScienceWorld), [WebArena](https:\u002F\u002Fgithub.com\u002Fweb-arena-x\u002Fwebarena), and [Search-R1](https:\u002F\u002Fgithub.com\u002Fnyu-dl\u002Fdl4ir-searchQA) for their open-source contributions.\n\n## Contact\n\n- zhxi22@m.fudan.edu.cn\n\n## Citation\n\nPlease cite the following papers if you find AgentGym-RL helpful!\n\n```\n@misc{xi2025agentgymrltrainingllmagents,\n      title={AgentGym-RL: Training LLM Agents for Long-Horizon Decision Making through Multi-Turn Reinforcement Learning}, \n      author={Zhiheng Xi and Jixuan Huang and Chenyang Liao and Baodai Huang and Honglin Guo and Jiaqi Liu and Rui Zheng and Junjie Ye and Jiazheng Zhang and Wenxiang Chen and Wei He and Yiwen Ding and Guanyu Li and Zehui Chen and Zhengyin Du and Xuesong Yao and Yufei Xu and Jiecao Chen and Tao Gui and Zuxuan Wu and Qi Zhang and Xuanjing Huang and Yu-Gang Jiang},\n      year={2025},\n      eprint={2509.08755},\n      archivePrefix={arXiv},\n      primaryClass={cs.LG},\n      url={https:\u002F\u002Farxiv.org\u002Fabs\u002F2509.08755}, \n}\n```\n\n```\n@misc{xi2024agentgymevolvinglargelanguage,\n      title={AgentGym: Evolving Large Language Model-based Agents across Diverse Environments}, \n      author={Zhiheng Xi and Yiwen Ding and Wenxiang Chen and Boyang Hong and Honglin Guo and Junzhe Wang and Dingwen Yang and Chenyang Liao and Xin Guo and Wei He and Songyang Gao and Lu Chen and Rui Zheng and Yicheng Zou and Tao Gui and Qi Zhang and Xipeng Qiu and Xuanjing Huang and Zuxuan Wu and Yu-Gang Jiang},\n      year={2024},\n      eprint={2406.04151},\n      archivePrefix={arXiv},\n      primaryClass={cs.AI},\n      url={https:\u002F\u002Farxiv.org\u002Fabs\u002F2406.04151}, \n}\n```\n\n\u003Cdiv align=\"center\">\n\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FWooooDyy_AgentGym-RL_readme_19e8f4a60c66.png\" height=50>\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FWooooDyy_AgentGym-RL_readme_6a0d1d7b7f37.jpg\" height=50>\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FWooooDyy_AgentGym-RL_readme_251b3079ba2f.png\" height=50>\n\u003C\u002Fdiv>\n","# AgentGym-RL：通过多轮强化学习训练长程决策的 LLM 代理\n\u003Cp align=\"center\">\n  📃 \u003Ca href=\"https:\u002F\u002Farxiv.org\u002Fabs\u002F2509.08755\" target=\"_blank\">论文\u003C\u002Fa > • 🌐 \u003Ca href=\"https:\u002F\u002Fagentgym-rl.github.io\u002F\" target=\"_blank\">项目页面\u003C\u002Fa > • 🤗 \u003Ca href=\"https:\u002F\u002Fhuggingface.co\u002Fdatasets\u002FAgentGym\u002FAgentGym-RL-Data-ID\" target=\"_blank\">AgentGym-RL-Data-ID\u003C\u002Fa >\n\u003C\u002Fp >\n\nAgentGym-RL 是一个全新的框架，用于通过强化学习训练能够进行 **多轮** 交互式决策的 LLM 代理。它涵盖了多种 **现实场景**，并支持主流的强化学习算法。大量实验表明，我们的框架和方法可以显著提升开源的 7B 规模模型，在不同环境下的 **27 项任务** 上达到 **媲美或超越商业模型** 的水平。\n\n![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FWooooDyy_AgentGym-RL_readme_93d1e689f7da.jpg)\n\n## 🔔 新闻\n- **🏆[2026-02-06]** 我们的论文已被 ICLR 2026 接受，并将作为口头报告发表！\n- **🎉[2025-09-10]** 您现在可以为 AgentGym 开发自定义环境并进行强化学习！教程请见 [这里](https:\u002F\u002Fgithub.com\u002FWooooDyy\u002FAgentGym\u002Fblob\u002Fmain\u002Fdocs\u002Ftutorials\u002Fen\u002F05-2nd-Development.md)。\n- **🥳[2025-09-10]** 我们的论文已在 arXiv 上发布：[AgentGym-RL：通过多轮强化学习训练长程决策的 LLM 代理](https:\u002F\u002Farxiv.org\u002Fabs\u002F2509.08755)\n- **🍺[2025-09-10]** 
我们的强化学习数据集和基准测试已在 Hugging Face 上发布：[AgentGym-RL-Data-ID](https:\u002F\u002Fhuggingface.co\u002Fdatasets\u002FAgentGym\u002FAgentGym-RL-Data-ID)\n\n## 🌟 概述\n\n开发能够做出一系列智能决策以解决复杂现实任务的自主 LLM 代理，是当前快速发展的前沿领域。仅仅依赖人类示范进行行为克隆，虽然可以让代理胜任某些任务，但很少能带来真正的突破。正如理查德·萨顿所强调的那样，真正推动代理进步的是通过探索与环境互动所获得的知识、技能和经验。因此，一种很有前景的方法是使用强化学习来训练这些代理。\n\n然而，现有的大多数研究仍然局限于数学和编程等单轮任务。最近尝试将强化学习扩展到具备多轮能力的 LLM 代理时，却面临一些显著挑战：\n\n- **任务复杂度和环境多样性的限制。** 在强化学习时代，环境的重要性日益凸显。那些只在玩具环境中表现良好的代理，很难迁移到真实场景中；而环境的多样性则是代理实现泛化能力的前提。\n- **难以实现稳定高效的优化。** 多轮交互会极大地扩大搜索空间，并增加训练信号的方差，从而使得在探索与利用之间取得平衡变得非常困难。\n\n为应对这些挑战，我们提出了 **AgentGym-RL**，这是一个全新的框架，用于通过强化学习训练能够进行 **多轮** 交互式决策的 LLM 代理。它涵盖了广泛的 **现实场景**，并支持主流的强化学习算法，为 **经验时代** 的研究与实践奠定了基础。\n\n![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FWooooDyy_AgentGym-RL_readme_e7458203c0a3.png)\n\n此外，为了更好地处理探索与利用之间的权衡，并提高代理强化学习训练中的优化稳定性，我们提出了 **ScalingInter-RL** 方法，该方法在训练过程中逐步扩展代理与环境的交互范围。不同环境下的实验结果表明，结合 AgentGym-RL 框架与 ScalingInter-RL 算法，能够带来稳定、持续且显著的行为改进。\n\n同时，为了便于对数据和模型行为进行深入分析，我们还提供了一个 **可视化交互式用户界面**，允许回放和检查完整的交互轨迹，从而简化了迭代开发中的实证分析过程。\n\n![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FWooooDyy_AgentGym-RL_readme_fe7a4c773bbf.jpg)\n\n## 📖 目录\n\n- [AgentGym-RL：通过多轮强化学习训练长程决策的 LLM 代理](#agentgym-rl-training-llm-agents-for-long-horizon-decision-making-through-multi-turn-reinforcement-learning)\n  * [🔔 新闻](#-news)\n  * [🌟 概述](#-overview)\n  * [特性](#features)\n    + [AgentGym-RL 的模块化系统设计](#modular-system-design-of-agentgym-rl)\n    + [环境](#environments)\n    + [训练后策略](#post-training-strategies)\n    + [ScalingInter-RL：面向代理强化学习的渐进式交互扩展](#scalinginter-rl-progressive-scaling-interaction-for-agent-rl)\n    + [扩展 Verl](#extending-verl)\n  * [性能](#performance)\n  * [运行教程](#running-tutorial)\n    + [环境设置](#environment-setup)\n    + [训练](#training)\n    + [评估](#evaluation)\n    + [可视化用户界面](#visualized-user-interface)\n  * [致谢](#acknowledgement)\n  * [联系方式](#Contact)\n  * [引用](#citation)\n\n## 特性\n\n### AgentGym-RL 的模块化系统设计\n\n我们采用模块化和解耦的设计来实现 AgentGym-RL，将其划分为三个主要组件：\n\n- **环境模块**：通过标准化的服务器–客户端架构，使用统一的 HTTP 协议和并行请求，提供多样化的场景。\n- **代理模块**：封装代理在多轮交互中的推理和决策过程，支持长程规划和自我反思等高级机制。\n- **训练模块**：实现强化学习流水线及其他训练方法，以优化代理策略。\n\n![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FWooooDyy_AgentGym-RL_readme_a5ac97f20fc9.jpg)\n\n### 环境\n\n* **网页导航**：我们包含了 **WebArena**，这是一个真实且可复现的网页环境，包含互联网上常见的四个不同领域：在线购物、讨论论坛、协作开发和内容管理。\n* **深度搜索**：基于 **Search-R1**，我们引入了一个基于 RAG 的环境，使 LLM 能够与搜索引擎交互，解决多轮检索和推理任务。\n* **数字游戏**：我们包含了 **TextCraft**，这是一个基于文本的合成制作游戏环境，代理可以通过自然语言交互和任务规划完成各种任务。\n* **具身任务**：我们加入了 **BabyAI**，它提供一个可控的网格世界，配有文本指令，用于在模拟环境中进行具身推理。\n* **科学任务**：我们提供了 **SciWorld**，这是一个科学探索模拟器，代理可以通过文本驱动的推理循环来进行科学实验。\n\n### 训练后策略\n\nAgentGym-RL 支持一系列主流的在线强化学习算法：**PPO、GRPO、RLOO、REINFORCE++。**\n\n除了在线强化学习之外，AgentGym-RL 还支持广泛的补充训练范式：**SFT、DPO、AgentEvol。**\n\n### ScalingInter-RL：面向智能体强化学习的渐进式交互规模扩展\n\nScalingInter-RL 是一种训练方法，旨在平衡探索与利用，同时确保优化过程的稳定性。其核心是一种**渐进式的时域规模扩展策略**，能够自适应地调整强化学习中的交互轮次数量。\n\n![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FWooooDyy_AgentGym-RL_readme_af4161f2d2da.png)\n\n我们从较小的时域开始训练，使智能体能够高效地利用现有策略，在简单任务上快速掌握技能。这为后续更深层次、长时域的推理奠定了基础。随着训练的推进，我们会逐步延长时域长度，让智能体有机会探索更长的决策路径，从而促进更高阶认知行为的涌现。\n\n### Verl 的扩展\n\n为了开发 AgentGym-RL，我们对 Verl 进行了以下改进：\n\n1. **基于 vLLM 引擎的 Rollout**：为支持多轮 rollout 及与环境的高效交互，我们引入了：\n   * `RolloutHandler` 用于处理轨迹。该组件负责正确计算每一轮中环境观测和助手动作的注意力掩码、损失掩码、位置 ID 及序列 ID，并管理历史消息、状态和奖励。\n   * `EnvClient` 用于处理环境交互。`EnvClient` 提供了多种方法来简化 rollout 过程中的环境交互，例如 `observation()` 用于获取当前环境观测、`available_actions()` 获取当前可用动作、`step()` 执行动作以及 `reset()` 重置环境。为提升效率，我们的框架会并行初始化环境并收集轨迹。\n\n2. 
**优势函数计算**：我们修订了 Verl 中 REINFORCE++ 和 GAE 的优势函数实现，以确保在单轮和多轮场景下均能正确计算。\n\n3. **训练过程中的交互规模扩展**：为实现 ScalingInter-RL，我们引入了 `RoundScheduler` 来控制训练过程中交互轮次的规模。其中，`FixedRoundsScheduler` 固定最大交互轮次；而 `StepRoundsScheduler` 则以阶梯式方式逐步增加交互时域长度，从而实现训练过程中的渐进式规模扩展。\n\n## 性能表现\n\n我们主要使用 Qwen2.5-3B 和 Qwen2.5-7B 作为基础模型。在 AgentGym-RL 和 ScalingInter-RL 的评估中，我们覆盖了**五个场景**，并与多个闭源及开源模型进行了对比。以下是 WebArena 基准测试的结果；其他基准测试的结果则可在我们的论文中查阅（[arXiv:2509.08755](https:\u002F\u002Farxiv.org\u002Fabs\u002F2509.08755)）。\n\n![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FWooooDyy_AgentGym-RL_readme_644bd8b9ade1.png)\n\n- **ScalingInter-7B** 模型显著**超越了 GPT-4o 等顶级专有模型**，并且**性能与 DeepSeek-R1-0528、Gemini-2.5-Pro 等更大规模模型相当**。此外，在 Shopping 和 CMS 场景中，其得分已达到同类模型中的最佳水平。\n- **AgentGym-RL-7B** 的综合得分则**与 GPT-4o 相当**。\n\n此外，如图所示，ScalingInter-RL 在强化学习优化过程中展现出更为**稳定且高效的**训练动态。\n\n![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FWooooDyy_AgentGym-RL_readme_f1d70fffbfdc.jpg)\n\n- 较长的交互时域初始阶段能通过更丰富的探索获得更高的奖励，但随后会迅速回落；而较短的时域虽然学习过程更稳定，但探索性不足，最终导致性能停滞。\n- 我们的 ScalingInter-RL 方法通过**逐步增加交互时域长度**，最终实现了**更高且更高效的**长期性能。\n\n## 运行教程\n\n### 环境搭建\n\n建议使用 CUDA 12.4、PyTorch 2.4 和 Python 3.10。首先，请通过以下命令安装所需依赖：\n```sh\necho \"Preparing environment for agentgym-rl...\"\nconda create -n agentgym-rl python==3.10 -y\nconda activate agentgym-rl\npip3 install torch==2.4.0 --index-url https:\u002F\u002Fdownload.pytorch.org\u002Fwhl\u002Fcu124\n# 安装 FlashAttention\nFLASH_ATTENTION_URL=\"https:\u002F\u002Fgithub.com\u002FDao-AILab\u002Fflash-attention\u002Freleases\u002Fdownload\u002Fv2.7.3\u002Fflash_attn-2.7.3+cu12torch2.4cxx11abiFALSE-cp310-cp310-linux_x86_64.whl\"\nFLASH_ATTENTION_NAME=\"flash_attn-2.7.3+cu12torch2.4cxx11abiFALSE-cp310-cp310-linux_x86_64.whl\"\nwget -q $FLASH_ATTENTION_URL -O $FLASH_ATTENTION_NAME\npip3 install $FLASH_ATTENTION_NAME\nrm -f $FLASH_ATTENTION_NAME\n# 安装 RL 相关依赖\ncd AgentGym-RL\npip3 install -e .\n# 安装 AgentGym 环境相关依赖\necho \"Preparing environment for agentenv...\"\ncd AgentGym\u002Fagentenv\npip3 install -e .\npip3 install transformers==4.51.3\n```\n\n### 训练\n\n对于 SFT、DPO 和 AgentEvol，请参考 [AgentGym](https:\u002F\u002Fgithub.com\u002FWooooDyy\u002FAgentGym\u002Ftree\u002F640f8bca6901a6a6d540ff61522b813988da47c4\u002F) 的 `README.md`。\n\n对于 RL 训练：\n\n**1. 环境设置**\n\n请确保已搭建好所需的环境（参见上文的[环境设置部分](#environment-setup)）。\n\n**2. 数据准备**\n\n从 [Huggingface](https:\u002F\u002Fhuggingface.co\u002Fdatasets\u002FAgentGym\u002FAgentGym-RL-Data-ID) 下载 AgentGym-RL-Data-ID 数据集。\n\n**3. 启动环境服务器**\n\n请参照 [AgentGym](https:\u002F\u002Fgithub.com\u002FWooooDyy\u002FAgentGym\u002Ftree\u002F640f8bca6901a6a6d540ff61522b813988da47c4\u002F) 的 `README.md` 启动环境服务器。\n\n**4. 
训练**\n\n您可以在 AgentGym-RL 和 ScalingInter-RL 的 [examples\u002Ftrain](.\u002Fexamples\u002Ftrain) 中找到各个任务的训练示例脚本，同时也可以参考这些脚本中配置的训练参数。\n\n```sh\nbash webarena_train.sh\n```\n\n大多数参数的解释都可以在 [verl](https:\u002F\u002Fverl.readthedocs.io\u002Fen\u002Flatest\u002Fexamples\u002Fconfig.html) 的文档中找到。其他关键参数如下：\n* `data.max_prompt_length`: 第一轮中通用任务描述提示的最大长度。\n* `data.max_response_length`: 交互轨迹的最大总 token 长度（不包括任务提示）。\n* `actor_rollout_ref.agentgym.task_name`: AgentGym 的训练任务名称。\n* `actor_rollout_ref.agentgym.env_addr`: AgentGym 环境服务器的 URL。\n* `actor_rollout_ref.rollout.max_tokens`: 每轮单次回复的最大 token 长度。\n* `actor_rollout_ref.rollout.rollout_log_dir`: 存储 rollout 轨迹的目录。\n* `algorithm.rounds_ctrl.type`: 控制最大交互轮数的策略。选项：\n  - `fixed`: 固定轮数。\n  - `scaling_inter_stepwise`: 轮数按固定步长递增。\n* `algorithm.rounds_ctrl.rounds`: 允许的最大交互轮数。\n* `algorithm.rounds_ctrl.steps_scaling_inter`: 使用 `scaling_inter_stepwise` 时，增加最大轮数的频率（以训练步数为单位）。\n\n更多详细信息请参阅 [AgentGym-RL\u002Fverl\u002Fagent_trainer\u002Fconfig\u002Fppo_trainer.yaml](.\u002FAgentGym-RL\u002Fverl\u002Fagent_trainer\u002Fconfig\u002Fppo_trainer.yaml)。\n\n要启动 AgentGym-RL 训练，请设置：\n\n```sh\nalgorithm.rounds_ctrl.type=fixed \\\nalgorithm.rounds_ctrl.rounds=15 \\\n```\n\n您可以参考 [examples\u002Ftrain\u002FAgentGym-RL\u002Fwebarena_train.sh](.\u002Fexamples\u002Ftrain\u002FAgentGym-RL\u002Fwebarena_train.sh) 作为示例。\n\n要启动 ScalingInter-RL 训练，请设置：\n\n```sh\nalgorithm.rounds_ctrl.type=scaling_inter_stepwise \\\nalgorithm.rounds_ctrl.steps_scaling_inter=100 \\\nalgorithm.rounds_ctrl.rounds=[10,20,30] \\\n```\n\n您可以参考 [examples\u002Ftrain\u002FScalingInter-RL\u002Fwebarena_train.sh](.\u002Fexamples\u002Ftrain\u002FScalingInter-RL\u002Fwebarena_train.sh) 作为示例。\n\n### 评估\n\n**1. 环境设置**\n\n请确保已搭建好所需的环境（参见上文的[环境设置部分](#environment-setup)）。\n\n**2. 数据准备**\n\n从 [Huggingface](https:\u002F\u002Fhuggingface.co\u002Fdatasets\u002FAgentGym\u002FAgentGym-RL-Data-ID) 下载 AgentGym-RL-Data-ID 数据集。\n\n**3. 启动环境服务器**\n\n请参照 [AgentGym](https:\u002F\u002Fgithub.com\u002FWooooDyy\u002FAgentGym\u002Ftree\u002F640f8bca6901a6a6d540ff61522b813988da47c4\u002F) 的 `README.md` 启动环境服务器。\n\n**4. 
评估**\n\n您可以在 `examples\u002Feval` 中找到各个任务的评估示例脚本，同时也可以参考这些脚本中配置的评估参数。\n\n要运行评估，可以参考 `examples\u002Feval\u002Fwebarena_eval.sh` 作为示例。\n\n```sh\nbash webarena_eval.sh\n```\n\n大多数参数的解释都可以在 [verl](https:\u002F\u002Fverl.readthedocs.io\u002Fen\u002Flatest\u002Fexamples\u002Fconfig.html) 的文档中找到。更多细节请参阅 `AgentGym-RL\u002Fverl\u002Fagent_trainer\u002Fconfig\u002Fgeneration.yaml`。\n\n### 可视化用户界面\n\n设置说明请查看 [这里](https:\u002F\u002Fgithub.com\u002FWooooDyy\u002FAgentGym\u002Ftree\u002F640f8bca6901a6a6d540ff61522b813988da47c4\u002Fenv-visualization)。\n\n## 致谢\n\nAgentGym-RL 的训练模块基于 [Verl](https:\u002F\u002Fgithub.com\u002Fvolcengine\u002Fverl)，环境模块则基于 [AgentGym](https:\u002F\u002Fgithub.com\u002FWooooDyy\u002FAgentGym)。我们感谢它们提供的基础设施支持。此外，我们还要感谢 [TextCraft](https:\u002F\u002Fgithub.com\u002Farchiki\u002FADaPT)、[BabyAI](https:\u002F\u002Fgithub.com\u002Fmila-iqia\u002Fbabyai)、[SciWorld](https:\u002F\u002Fgithub.com\u002Fallenai\u002FScienceWorld)、[WebArena](https:\u002F\u002Fgithub.com\u002Fweb-arena-x\u002Fwebarena)、[Search-R1](https:\u002F\u002Fgithub.com\u002Fnyu-dl\u002Fdl4ir-searchQA) 等开源项目。\n\n## 联系方式\n\n- zhxi22@m.fudan.edu.cn\n\n## 引用\n\n如果您觉得 AgentGym-RL 对您有所帮助，请引用以下论文！\n\n```\n@misc{xi2025agentgymrltrainingllmagents,\n      title={AgentGym-RL: Training LLM Agents for Long-Horizon Decision Making through Multi-Turn Reinforcement Learning}, \n      author={Zhiheng Xi and Jixuan Huang and Chenyang Liao and Baodai Huang and Honglin Guo and Jiaqi Liu and Rui Zheng and Junjie Ye and Jiazheng Zhang and Wenxiang Chen and Wei He and Yiwen Ding and Guanyu Li and Zehui Chen and Zhengyin Du and Xuesong Yao and Yufei Xu and Jiecao Chen and Tao Gui and Zuxuan Wu and Qi Zhang and Xuanjing Huang and Yu-Gang Jiang},\n      year={2025},\n      eprint={2509.08755},\n      archivePrefix={arXiv},\n      primaryClass={cs.LG},\n      url={https:\u002F\u002Farxiv.org\u002Fabs\u002F2509.08755}, \n}\n```\n\n\n```\n@misc{xi2024agentgymevolvinglargelanguage,\n      title={AgentGym: Evolving Large Language Model-based Agents across Diverse Environments}, \n      author={Zhiheng Xi and Yiwen Ding and Wenxiang Chen and Boyang Hong and Honglin Guo and Junzhe Wang and Dingwen Yang and Chenyang Liao and Xin Guo and Wei He and Songyang Gao and Lu Chen and Rui Zheng and Yicheng Zou and Tao Gui and Qi Zhang and Xipeng Qiu and Xuanjing Huang and Zuxuan Wu and Yu-Gang Jiang},\n      year={2024},\n      eprint={2406.04151},\n      archivePrefix={arXiv},\n      primaryClass={cs.AI},\n      url={https:\u002F\u002Farxiv.org\u002Fabs\u002F2406.04151}, \n}\n```\n\n\n\u003Cdiv align=\"center\">\n\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FWooooDyy_AgentGym-RL_readme_19e8f4a60c66.png\" height=50>\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FWooooDyy_AgentGym-RL_readme_6a0d1d7b7f37.jpg\" height=50>\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FWooooDyy_AgentGym-RL_readme_251b3079ba2f.png\" height=50>\n\u003C\u002Fdiv>","# AgentGym-RL 快速上手指南\n\nAgentGym-RL 是一个通过多轮强化学习（RL）训练大语言模型（LLM）智能体进行长程决策的框架。它支持多种真实场景（如网页导航、深度搜索、数字游戏等），并引入了 **ScalingInter-RL** 算法以平衡探索与利用，显著提升开源模型在复杂任务中的表现。\n\n## 环境准备\n\n在开始之前，请确保您的系统满足以下要求：\n\n*   **操作系统**: Linux (推荐)\n*   **GPU**: 支持 CUDA 的 NVIDIA 显卡\n*   **CUDA 版本**: 12.4\n*   **Python 版本**: 3.10\n*   **PyTorch 版本**: 2.4.0\n\n## 安装步骤\n\n建议使用 `conda` 创建独立的虚拟环境。以下是完整的安装命令流程：\n\n```bash\n# 1. 创建并激活 conda 环境\necho \"Preparing environment for agentgym-rl...\"\nconda create -n agentgym-rl python==3.10 -y\nconda activate agentgym-rl\n\n# 2. 
安装 PyTorch (指定 CUDA 12.4 版本)\npip3 install torch==2.4.0 --index-url https:\u002F\u002Fdownload.pytorch.org\u002Fwhl\u002Fcu124\n\n# 3. 安装 Flash Attention (预编译包，加速注意力计算)\nFLASH_ATTENTION_URL=\"https:\u002F\u002Fgithub.com\u002FDao-AILab\u002Fflash-attention\u002Freleases\u002Fdownload\u002Fv2.7.3\u002Fflash_attn-2.7.3+cu12torch2.4cxx11abiFALSE-cp310-cp310-linux_x86_64.whl\"\nFLASH_ATTENTION_NAME=\"flash_attn-2.7.3+cu12torch2.4cxx11abiFALSE-cp310-cp310-linux_x86_64.whl\"\nwget -q $FLASH_ATTENTION_URL -O $FLASH_ATTENTION_NAME\npip3 install $FLASH_ATTENTION_NAME\nrm -f $FLASH_ATTENTION_NAME\n\n# 4. 安装 AgentGym-RL 核心库\ncd AgentGym-RL\npip3 install -e .\n\n# 5. 安装 AgentGym 环境依赖\necho \"Preparing environment for agentenv...\"\ncd ..\u002FAgentGym\u002Fagentenv\npip3 install -e .\npip3 install transformers==4.51.3\n```\n\n> **注意**：如果下载 `flash_attn` 或 `torch` 速度较慢，可尝试配置国内镜像源（如清华源或阿里源），但需确保源中包含对应 CUDA 版本的 wheel 包。\n\n## 基本使用\n\n### 1. 数据准备\n从 Hugging Face 下载必要的训练数据集：\n*   数据集名称：[AgentGym\u002FAgentGym-RL-Data-ID](https:\u002F\u002Fhuggingface.co\u002Fdatasets\u002FAgentGym\u002FAgentGym-RL-Data-ID)\n\n### 2. 启动环境服务器\n在进行 RL 训练前，需要先启动环境服务器。具体启动方式请参考 [AgentGym 主仓库 README](https:\u002F\u002Fgithub.com\u002FWooooDyy\u002FAgentGym)。\n\n### 3. 开始训练\nAgentGym-RL 支持多种训练策略。对于强化学习（RL）训练，框架内置了 PPO, GRPO, RLOO, REINFORCE++ 等主流算法，以及特有的 **ScalingInter-RL** 渐进式交互策略。\n\n训练通过 `examples\u002Ftrain` 目录下的示例脚本启动，脚本内部以 verl 风格的参数进行配置：\n\n```bash\n# 示例：启动带有 ScalingInter-RL 策略的 RL 训练\n# （参见 examples\u002Ftrain\u002FScalingInter-RL\u002Fwebarena_train.sh）\nbash webarena_train.sh\n```\n\n与 ScalingInter-RL 相关的关键参数包括 `algorithm.rounds_ctrl.type=scaling_inter_stepwise`、`algorithm.rounds_ctrl.steps_scaling_inter=100`（每 100 个训练步提升一次轮数上限）以及 `algorithm.rounds_ctrl.rounds=[10,20,30]`。\n\n*   **SFT\u002FDPO\u002FAgentEvol**: 若需进行监督微调或其他后训练策略，请参阅 [AgentGym 仓库](https:\u002F\u002Fgithub.com\u002FWooooDyy\u002FAgentGym) 的相关文档。\n*   **可视化分析**: 训练完成后，可使用框架提供的可视化用户界面回放和检查完整的交互轨迹，以便分析智能体行为。\n\n### 核心特性提示\n*   **模块化设计**: 环境、智能体和训练模块解耦，便于自定义扩展。\n*   **渐进式交互 (ScalingInter-RL)**: 训练初期使用较短的交互轮次建立基础能力，随后逐渐增加轮次以探索长程决策，有效避免训练崩溃并提升最终性能。\n*   **多场景支持**: 内置 WebArena (网页导航), Search-R1 (深度搜索), TextCraft (数字游戏), BabyAI (具身任务), SciWorld (科学实验) 等多种环境。","某电商平台的自动化运营团队正尝试构建一个能独立处理“用户投诉 - 查询订单 - 协调退款 - 发送安抚邮件”这一完整长流程的智能客服 Agent。\n\n### 没有 AgentGym-RL 时\n- **长程决策断裂**：模型在处理多轮对话时，往往记得住开头却忘了最终目标，导致在第三步协调退款时丢失了“安抚用户”的核心指令。\n- **环境适应性差**：仅在简单的数学或代码测试集上训练过，一旦面对真实复杂的电商后台接口和多变的用户情绪，Agent 立刻束手无策。\n- **训练难以收敛**：由于多轮交互的搜索空间过大，传统强化学习算法信号方差极高，模型训练经常震荡甚至崩溃，无法稳定提升。\n- **黑盒调试困难**：当 Agent 犯错时，开发人员只能看到最终错误结果，无法回溯具体的交互轨迹来定位是哪一轮决策出了问题。\n\n### 使用 AgentGym-RL 后\n- **长程逻辑连贯**：借助 ScalingInter-RL 算法逐步延长交互视界，Agent 能稳稳记住六轮对话前的初始目标，完整闭环解决复杂投诉。\n- **真实场景泛化**：利用框架内置的多样化真实世界场景进行训练，7B 规模的小模型也能在复杂的电商环境中表现出媲美商业大模型的决策力。\n- **优化稳定高效**：框架有效平衡了探索与利用，显著降低了训练信号的方差，让模型在多轮任务中的表现持续、稳定地提升。\n- **全链路可视化复盘**：通过自带的可视化交互界面，团队可以像看录像一样回放完整的决策轨迹，快速定位并修复特定轮次的逻辑漏洞。\n\nAgentGym-RL 通过多轮强化学习框架，将原本只能处理单点任务的弱模型，进化为能在真实复杂环境中独立解决长程难题的自主智能体。","https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FWooooDyy_AgentGym-RL_93d1e689.jpg","WooooDyy","Zhiheng Xi","https:\u002F\u002Foss.gittoolsai.com\u002Favatars\u002FWooooDyy_0bce9f32.jpg","Now PhD student at Fudan NLP Group of Fudan University. 
Previously received a Bachelor's degree from Nanjing University.\r\n","Fudan University",null,"Be1ong1","https:\u002F\u002Fwoooodyy.github.io\u002F","https:\u002F\u002Fgithub.com\u002FWooooDyy",[83,87,91],{"name":84,"color":85,"percentage":86},"Python","#3572A5",99.9,{"name":88,"color":89,"percentage":90},"Makefile","#427819",0.1,{"name":92,"color":93,"percentage":94},"Shell","#89e051",0,702,70,"2026-04-19T08:43:05","MIT",4,"Linux","必需 NVIDIA GPU，需安装 CUDA 12.4，显存大小未说明（建议根据模型规模配置，7B 模型通常需 16GB+）","未说明",{"notes":104,"python":105,"dependencies":106},"项目明确推荐使用 CUDA 12.4 和 PyTorch 2.4。安装过程包含手动下载并安装特定版本的 flash-attn wheel 包。框架基于 verl 修改，支持多轮次强化学习训练。环境设置分为 agentgym-rl 核心库和 agentenv 环境库两部分安装。","3.10",[107,108,109],"torch==2.4.0","flash-attn==2.7.3","transformers==4.51.3",[13,35,14],[112,113,114,115],"agent","llm","llm-based-agent","scaling","2026-03-27T02:49:30.150509","2026-04-20T07:16:14.354488",[],[]]