[{"data":1,"prerenderedAt":-1},["ShallowReactive",2],{"similar-opendilab--LightZero":3,"tool-opendilab--LightZero":61},[4,18,26,36,44,53],{"id":5,"name":6,"github_repo":7,"description_zh":8,"stars":9,"difficulty_score":10,"last_commit_at":11,"category_tags":12,"status":17},4358,"openclaw","openclaw\u002Fopenclaw","OpenClaw 是一款专为个人打造的本地化 AI 助手，旨在让你在自己的设备上拥有完全可控的智能伙伴。它打破了传统 AI 助手局限于特定网页或应用的束缚，能够直接接入你日常使用的各类通讯渠道，包括微信、WhatsApp、Telegram、Discord、iMessage 等数十种平台。无论你在哪个聊天软件中发送消息，OpenClaw 都能即时响应，甚至支持在 macOS、iOS 和 Android 设备上进行语音交互，并提供实时的画布渲染功能供你操控。\n\n这款工具主要解决了用户对数据隐私、响应速度以及“始终在线”体验的需求。通过将 AI 部署在本地，用户无需依赖云端服务即可享受快速、私密的智能辅助，真正实现了“你的数据，你做主”。其独特的技术亮点在于强大的网关架构，将控制平面与核心助手分离，确保跨平台通信的流畅性与扩展性。\n\nOpenClaw 非常适合希望构建个性化工作流的技术爱好者、开发者，以及注重隐私保护且不愿被单一生态绑定的普通用户。只要具备基础的终端操作能力（支持 macOS、Linux 及 Windows WSL2），即可通过简单的命令行引导完成部署。如果你渴望拥有一个懂你",349277,3,"2026-04-06T06:32:30",[13,14,15,16],"Agent","开发框架","图像","数据工具","ready",{"id":19,"name":20,"github_repo":21,"description_zh":22,"stars":23,"difficulty_score":10,"last_commit_at":24,"category_tags":25,"status":17},3808,"stable-diffusion-webui","AUTOMATIC1111\u002Fstable-diffusion-webui","stable-diffusion-webui 是一个基于 Gradio 构建的网页版操作界面，旨在让用户能够轻松地在本地运行和使用强大的 Stable Diffusion 图像生成模型。它解决了原始模型依赖命令行、操作门槛高且功能分散的痛点，将复杂的 AI 绘图流程整合进一个直观易用的图形化平台。\n\n无论是希望快速上手的普通创作者、需要精细控制画面细节的设计师，还是想要深入探索模型潜力的开发者与研究人员，都能从中获益。其核心亮点在于极高的功能丰富度：不仅支持文生图、图生图、局部重绘（Inpainting）和外绘（Outpainting）等基础模式，还独创了注意力机制调整、提示词矩阵、负向提示词以及“高清修复”等高级功能。此外，它内置了 GFPGAN 和 CodeFormer 等人脸修复工具，支持多种神经网络放大算法，并允许用户通过插件系统无限扩展能力。即使是显存有限的设备，stable-diffusion-webui 也提供了相应的优化选项，让高质量的 AI 艺术创作变得触手可及。",162132,"2026-04-05T11:01:52",[14,15,13],{"id":27,"name":28,"github_repo":29,"description_zh":30,"stars":31,"difficulty_score":32,"last_commit_at":33,"category_tags":34,"status":17},1381,"everything-claude-code","affaan-m\u002Feverything-claude-code","everything-claude-code 是一套专为 AI 编程助手（如 Claude Code、Codex、Cursor 等）打造的高性能优化系统。它不仅仅是一组配置文件，而是一个经过长期实战打磨的完整框架，旨在解决 AI 代理在实际开发中面临的效率低下、记忆丢失、安全隐患及缺乏持续学习能力等核心痛点。\n\n通过引入技能模块化、直觉增强、记忆持久化机制以及内置的安全扫描功能，everything-claude-code 能显著提升 AI 在复杂任务中的表现，帮助开发者构建更稳定、更智能的生产级 AI 代理。其独特的“研究优先”开发理念和针对 Token 消耗的优化策略，使得模型响应更快、成本更低，同时有效防御潜在的攻击向量。\n\n这套工具特别适合软件开发者、AI 研究人员以及希望深度定制 AI 工作流的技术团队使用。无论您是在构建大型代码库，还是需要 AI 协助进行安全审计与自动化测试，everything-claude-code 都能提供强大的底层支持。作为一个曾荣获 Anthropic 黑客大奖的开源项目，它融合了多语言支持与丰富的实战钩子（hooks），让 AI 真正成长为懂上",156033,2,"2026-04-14T23:32:00",[14,13,35],"语言模型",{"id":37,"name":38,"github_repo":39,"description_zh":40,"stars":41,"difficulty_score":32,"last_commit_at":42,"category_tags":43,"status":17},2271,"ComfyUI","Comfy-Org\u002FComfyUI","ComfyUI 是一款功能强大且高度模块化的视觉 AI 引擎，专为设计和执行复杂的 Stable Diffusion 图像生成流程而打造。它摒弃了传统的代码编写模式，采用直观的节点式流程图界面，让用户通过连接不同的功能模块即可构建个性化的生成管线。\n\n这一设计巧妙解决了高级 AI 绘图工作流配置复杂、灵活性不足的痛点。用户无需具备编程背景，也能自由组合模型、调整参数并实时预览效果，轻松实现从基础文生图到多步骤高清修复等各类复杂任务。ComfyUI 拥有极佳的兼容性，不仅支持 Windows、macOS 和 Linux 全平台，还广泛适配 NVIDIA、AMD、Intel 及苹果 Silicon 等多种硬件架构，并率先支持 SDXL、Flux、SD3 等前沿模型。\n\n无论是希望深入探索算法潜力的研究人员和开发者，还是追求极致创作自由度的设计师与资深 AI 绘画爱好者，ComfyUI 都能提供强大的支持。其独特的模块化架构允许社区不断扩展新功能，使其成为当前最灵活、生态最丰富的开源扩散模型工具之一，帮助用户将创意高效转化为现实。",108322,"2026-04-10T11:39:34",[14,15,13],{"id":45,"name":46,"github_repo":47,"description_zh":48,"stars":49,"difficulty_score":32,"last_commit_at":50,"category_tags":51,"status":17},6121,"gemini-cli","google-gemini\u002Fgemini-cli","gemini-cli 是一款由谷歌推出的开源 AI 命令行工具，它将强大的 Gemini 大模型能力直接集成到用户的终端环境中。对于习惯在命令行工作的开发者而言，它提供了一条从输入提示词到获取模型响应的最短路径，无需切换窗口即可享受智能辅助。\n\n这款工具主要解决了开发过程中频繁上下文切换的痛点，让用户能在熟悉的终端界面内直接完成代码理解、生成、调试以及自动化运维任务。无论是查询大型代码库、根据草图生成应用，还是执行复杂的 Git 操作，gemini-cli 都能通过自然语言指令高效处理。\n\n它特别适合广大软件工程师、DevOps 
人员及技术研究人员使用。其核心亮点包括支持高达 100 万 token 的超长上下文窗口，具备出色的逻辑推理能力；内置 Google 搜索、文件操作及 Shell 命令执行等实用工具；更独特的是，它支持 MCP（模型上下文协议），允许用户灵活扩展自定义集成，连接如图像生成等外部能力。此外，个人谷歌账号即可享受免费的额度支持，且项目基于 Apache 2.0 协议完全开源，是提升终端工作效率的理想助手。",100752,"2026-04-10T01:20:03",[52,13,15,14],"插件",{"id":54,"name":55,"github_repo":56,"description_zh":57,"stars":58,"difficulty_score":32,"last_commit_at":59,"category_tags":60,"status":17},4721,"markitdown","microsoft\u002Fmarkitdown","MarkItDown 是一款由微软 AutoGen 团队打造的轻量级 Python 工具，专为将各类文件高效转换为 Markdown 格式而设计。它支持 PDF、Word、Excel、PPT、图片（含 OCR）、音频（含语音转录）、HTML 乃至 YouTube 链接等多种格式的解析，能够精准提取文档中的标题、列表、表格和链接等关键结构信息。\n\n在人工智能应用日益普及的今天，大语言模型（LLM）虽擅长处理文本，却难以直接读取复杂的二进制办公文档。MarkItDown 恰好解决了这一痛点，它将非结构化或半结构化的文件转化为模型“原生理解”且 Token 效率极高的 Markdown 格式，成为连接本地文件与 AI 分析 pipeline 的理想桥梁。此外，它还提供了 MCP（模型上下文协议）服务器，可无缝集成到 Claude Desktop 等 LLM 应用中。\n\n这款工具特别适合开发者、数据科学家及 AI 研究人员使用，尤其是那些需要构建文档检索增强生成（RAG）系统、进行批量文本分析或希望让 AI 助手直接“阅读”本地文件的用户。虽然生成的内容也具备一定可读性，但其核心优势在于为机器",93400,"2026-04-06T19:52:38",[52,14],{"id":62,"github_repo":63,"name":64,"description_en":65,"description_zh":66,"ai_summary_zh":66,"readme_en":67,"readme_zh":68,"quickstart_zh":69,"use_case_zh":70,"hero_image_url":71,"owner_login":72,"owner_name":73,"owner_avatar_url":74,"owner_bio":75,"owner_company":76,"owner_location":76,"owner_email":77,"owner_twitter":73,"owner_website":76,"owner_url":78,"languages":79,"stars":111,"forks":112,"last_commit_at":113,"license":114,"difficulty_score":10,"env_os":115,"env_gpu":116,"env_ram":115,"env_deps":117,"category_tags":124,"github_topics":125,"view_count":32,"oss_zip_url":76,"oss_zip_packed_at":76,"status":17,"created_at":146,"updated_at":147,"faqs":148,"releases":177},7594,"opendilab\u002FLightZero","LightZero","[NeurIPS 2023 Spotlight] LightZero: A Unified Benchmark for Monte Carlo Tree Search in General Sequential Decision Scenarios (awesome MCTS)","LightZero 是一个轻量级、高效且易于理解的开源算法工具包，旨在将蒙特卡洛树搜索（MCTS）与深度强化学习（RL）无缝结合。它主要解决了在通用序列决策场景中，缺乏统一基准来评估和比较不同 MCTS 算法性能的问题。以往像 AlphaZero 或 MuZero 这样的顶尖算法虽在游戏领域表现卓越，但其实现复杂且难以直接迁移到新场景，而 LightZero 通过提供标准化的接口和丰富的环境支持，大幅降低了研究与复现的门槛。\n\n这款工具特别适合人工智能领域的研究人员、算法开发者以及希望深入理解决策智能的学生使用。无论是想要快速验证新想法的学者，还是致力于构建通用决策系统的工程师，都能从中受益。LightZero 的独特亮点在于其“统一基准”的设计理念，不仅支持多种经典游戏环境，还具备高度的模块化特性，允许用户灵活替换模型组件或调整搜索策略。此外，作为 NeurIPS 2023 的焦点论文成果，它持续吸纳了如 UniZero、ReZero 等前沿研究的进展，确保用户能接触到最新的算法优化方案，是探索高效世界模型与多任务规划的理想起点。","\u003Cdiv id=\"top\">\u003C\u002Fdiv>\n\n# LightZero\n\n\u003Cdiv align=\"center\">\n    \u003Cimg width=\"1000px\" height=\"auto\" src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fopendilab_LightZero_readme_39f1f155ea20.png\">\u003C\u002Fa>\n\u003C\u002Fdiv>\n\n---\n\n[![Twitter](https:\u002F\u002Fimg.shields.io\u002Ftwitter\u002Furl?style=social&url=https%3A%2F%2Ftwitter.com%2Fopendilab)](https:\u002F\u002Ftwitter.com\u002Fopendilab)\n[![PyPI](https:\u002F\u002Fimg.shields.io\u002Fpypi\u002Fv\u002FLightZero)](https:\u002F\u002Fpypi.org\u002Fproject\u002FLightZero\u002F)\n![PyPI - Python Version](https:\u002F\u002Fimg.shields.io\u002Fpypi\u002Fpyversions\u002FLightZero)\n![Loc](https:\u002F\u002Fimg.shields.io\u002Fendpoint?url=https:\u002F\u002Fgist.githubusercontent.com\u002FHansBug\u002Fe002642132ec758e99264118c66778a4\u002Fraw\u002Floc.json)\n![Comments](https:\u002F\u002Fimg.shields.io\u002Fendpoint?url=https:\u002F\u002Fgist.githubusercontent.com\u002FHansBug\u002Fe002642132ec758e99264118c66778a4\u002Fraw\u002Fcomments.json)\n\n[![Code 
Test](https:\u002F\u002Fgithub.com\u002Fopendilab\u002FLightZero\u002Fworkflows\u002FCode%20Test\u002Fbadge.svg)](https:\u002F\u002Fgithub.com\u002Fopendilab\u002FLightZero\u002Factions?query=workflow%3A%22Code+Test%22)\n[![Badge Creation](https:\u002F\u002Fgithub.com\u002Fopendilab\u002FLightZero\u002Fworkflows\u002FBadge%20Creation\u002Fbadge.svg)](https:\u002F\u002Fgithub.com\u002Fopendilab\u002FLightZero\u002Factions?query=workflow%3A%22Badge+Creation%22)\n[![Package Release](https:\u002F\u002Fgithub.com\u002Fopendilab\u002FLightZero\u002Fworkflows\u002FPackage%20Release\u002Fbadge.svg)](https:\u002F\u002Fgithub.com\u002Fopendilab\u002FLightZero\u002Factions?query=workflow%3A%22Package+Release%22)\n\n![GitHub Org's stars](https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Fstars\u002Fopendilab)\n[![GitHub stars](https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Fstars\u002Fopendilab\u002FLightZero)](https:\u002F\u002Fgithub.com\u002Fopendilab\u002FLightZero\u002Fstargazers)\n[![GitHub forks](https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Fforks\u002Fopendilab\u002FLightZero)](https:\u002F\u002Fgithub.com\u002Fopendilab\u002FLightZero\u002Fnetwork)\n![GitHub commit activity](https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Fcommit-activity\u002Fm\u002Fopendilab\u002FLightZero)\n[![GitHub issues](https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Fissues\u002Fopendilab\u002FLightZero)](https:\u002F\u002Fgithub.com\u002Fopendilab\u002FLightZero\u002Fissues)\n[![GitHub pulls](https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Fissues-pr\u002Fopendilab\u002FLightZero)](https:\u002F\u002Fgithub.com\u002Fopendilab\u002FLightZero\u002Fpulls)\n[![Contributors](https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Fcontributors\u002Fopendilab\u002FLightZero)](https:\u002F\u002Fgithub.com\u002Fopendilab\u002FLightZero\u002Fgraphs\u002Fcontributors)\n[![GitHub license](https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Flicense\u002Fopendilab\u002FLightZero)](https:\u002F\u002Fgithub.com\u002Fopendilab\u002FLightZero\u002Fblob\u002Fmaster\u002FLICENSE)\n[![discord badge](https:\u002F\u002Fdcbadge.vercel.app\u002Fapi\u002Fserver\u002FdkZS2JF56X?style=flat)](https:\u002F\u002Fdiscord.gg\u002FdkZS2JF56X)\n\nUpdated on 2026.03.11 LightZero-v0.2.0\n\n\n> LightZero is a lightweight, efficient, and easy-to-understand open-source algorithm toolkit that combines Monte Carlo Tree Search (MCTS) and Deep Reinforcement Learning (RL).\n\nEnglish | [简体中文(Simplified Chinese)](https:\u002F\u002Fgithub.com\u002Fopendilab\u002FLightZero\u002Fblob\u002Fmain\u002FREADME.zh.md) | [Documentation](https:\u002F\u002Fopendilab.github.io\u002FLightZero) | [LightZero Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2310.08348) | [UniZero Paper](https:\u002F\u002Fopenreview.net\u002Fforum?id=Gl6dF9soQo) | [ReZero Paper](https:\u002F\u002Fopenreview.net\u002Fforum?id=F9Y7j3AJTu) | [🔥ScaleZero Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2509.07945)\n\n## News\n\n- [2026.02] 🔥 The ScaleZero paper has been accepted as a conference paper at ICLR 2026: [One Model for All Tasks: Leveraging Efficient World Models in Multi-Task Planning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2509.07945).\n- [2025.08] The [ReZero paper](https:\u002F\u002Fopenreview.net\u002Fforum?id=F9Y7j3AJTu) has been accepted by the CoRL 2025 RemembeRL workshop.\n- [2025.06] The [UniZero paper](https:\u002F\u002Fopenreview.net\u002Fforum?id=Gl6dF9soQo) has been accepted by Transactions on Machine Learning Research (TMLR 2025).\n- [2023.09] The [LightZero 
paper](https:\u002F\u002Fproceedings.neurips.cc\u002Fpaper_files\u002Fpaper\u002F2023\u002Fhash\u002F765043fe026f7d704c96cec027f13843-Abstract-Datasets_and_Benchmarks.html) has been accepted as a Spotlight Presentation at the NeurIPS 2023 Datasets and Benchmarks Track.\n- [2023.04] LightZero v0.0.1 was officially released.\n\n## 🔍 Background\n\nThe integration of Monte Carlo Tree Search and Deep Reinforcement Learning,\nexemplified by AlphaZero and MuZero,\nhas achieved unprecedented performance levels in various games, including Go and Atari.\nThis advanced methodology has also made significant strides in scientific domains like protein structure prediction and the search for matrix multiplication algorithms.\nThe following is an overview of the historical evolution of the Monte Carlo Tree Search algorithm series:\n![pipeline](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fopendilab_LightZero_readme_5cda461b3377.png)\n\n## 🎨 Overview\n\n**LightZero** is an open-source algorithm toolkit that combines Monte Carlo Tree Search (MCTS) and Reinforcement Learning (RL) for PyTorch. It supports a range of MCTS-based RL algorithms and applications, offering several key advantages:\n- Lightweight.\n- Efficient.\n- Easy-to-understand.\n\nFor further details, please refer to [Features](#features), [Framework Structure](#framework-structure) and [Integrated Algorithms](#integrated-algorithms).\n\n**LightZero** aims to **promote the standardization of the MCTS+RL algorithm family to accelerate related research and applications**. A performance comparison of all implemented algorithms under a unified framework is presented in the [Benchmark](#benchmark).\n\n### Outline\n\n- [LightZero](#lightzero)\n  - [News](#news)\n  - [🔍 Background](#-background)\n  - [🎨 Overview](#-overview)\n    - [Outline](#outline)\n    - [💥 Features](#-features)\n    - [🧩 Framework Structure](#-framework-structure)\n    - [🎁 Integrated Algorithms](#-integrated-algorithms)\n  - [⚙️ Installation](#️-installation)\n    - [Installation with Docker](#installation-with-docker)\n  - [🚀 Quick Start](#-quick-start)\n  - [📚 Documentation](#-documentation)\n  - [📊 Benchmark](#-benchmark)\n  - [📝 Awesome-MCTS Notes](#-awesome-mcts-notes)\n    - [Paper Notes](#paper-notes)\n    - [Algo. Overview](#algo-overview)\n  - [Awesome-MCTS Papers](#awesome-mcts-papers)\n    - [Classic & Foundational Papers](#classic--foundational-papers)\n      - [LightZero Implemented series](#lightzero-implemented-series)\n      - [AlphaGo series](#alphago-series)\n      - [MuZero series](#muzero-series)\n      - [MCTS Analysis](#mcts-analysis)\n      - [MCTS Application](#mcts-application)\n    - [Recent Research & Emerging Applications](#recent-research--emerging-applications)\n      - [ICML](#icml)\n      - [ICLR](#iclr)\n      - [NeurIPS](#neurips)\n      - [Other Conference or Journal](#other-conference-or-journal)\n  - [💬 Feedback and Contribution](#-feedback-and-contribution)\n  - [🌏 Citation](#-citation)\n  - [💓 Acknowledgments](#-acknowledgments)\n  - [🏷️ License](#️-license)\n\n### 💥 Features\n\n**Lightweight**: LightZero integrates multiple MCTS algorithm families and can solve decision-making problems with various attributes in a lightweight framework. 
The algorithms and environments LightZero implemented can be found [here](#integrated-algorithms).\n\n**Efficient**: LightZero uses mixed heterogeneous computing programming to improve computational efficiency for the most time-consuming part of MCTS algorithms.\n\n**Easy-to-understand**: LightZero provides detailed documentation and algorithm framework diagrams for all integrated algorithms to help users understand the algorithm's core and compare the differences and similarities between algorithms under the same paradigm. LightZero also provides function call graphs and network structure diagrams for algorithm code implementation, making it easier for users to locate critical code. All the documentation can be found [here](#paper-notes).\n\n### 🧩 Framework Structure\n\n[comment]: \u003C> (\u003Cp align=\"center\">)\n\n[comment]: \u003C> (  \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fopendilab_LightZero_readme_04e55907b30a.png\" alt=\"Image Description 1\" width=\"45%\" height=\"auto\" style=\"margin: 0 1%;\">)\n\n[comment]: \u003C> (  \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fopendilab_LightZero_readme_fa2f81a9302f.png\" alt=\"Image Description 2\" width=\"45%\" height=\"auto\" style=\"margin: 0 1%;\">)\n\n[comment]: \u003C> (\u003C\u002Fp>)\n\n\u003Cp align=\"center\">\n  \u003Cimg src=\"assets\u002Flightzero_pipeline.svg\" alt=\"Image Description 2\" width=\"50%\" height=\"auto\" style=\"margin: 0 1%;\">\n\u003C\u002Fp>\n\nThe above picture is the framework pipeline of LightZero. We briefly introduce the three core modules below: \n\n**Model**:\n``Model`` is used to define the network structure, including the ``__init__`` function for initializing the network structure and the ``forward`` function for computing the network's forward propagation.\n\n**Policy**:\n``Policy`` defines the way the network is updated and interacts with the environment, including three processes: the ``learning`` process, the ``collecting`` process, and the ``evaluation`` process.\n\n**MCTS**:\n``MCTS`` defines the structure of the Monte Carlo search tree and the way it interacts with the Policy. The implementation of MCTS includes two languages: Python and C++, implemented in ``ptree`` and ``ctree``, respectively.\n\nFor the file structure of LightZero, please refer to [lightzero_file_structure](https:\u002F\u002Fgithub.com\u002Fopendilab\u002FLightZero\u002Fblob\u002Fmain\u002Fassets\u002Flightzero_file_structure.svg).\n\n### 🎁 Integrated Algorithms\nLightZero is a library with a [PyTorch](https:\u002F\u002Fpytorch.org\u002F) implementation of MCTS algorithms (sometimes combined with cython and cpp), including:\n- [AlphaZero](https:\u002F\u002Fwww.science.org\u002Fdoi\u002F10.1126\u002Fscience.aar6404)\n- [MuZero](https:\u002F\u002Farxiv.org\u002Fabs\u002F1911.08265)\n- [Sampled MuZero](https:\u002F\u002Farxiv.org\u002Fabs\u002F2104.06303)\n- [Stochastic MuZero](https:\u002F\u002Fopenreview.net\u002Fpdf?id=X6D9bAHhBQ1)\n- [EfficientZero](https:\u002F\u002Farxiv.org\u002Fabs\u002F2111.00210)\n- [Gumbel MuZero](https:\u002F\u002Fopenreview.net\u002Fpdf?id=bERaNdoegnO&)\n- [ReZero](https:\u002F\u002Farxiv.org\u002Fabs\u002F2404.16364)\n- [UniZero](https:\u002F\u002Farxiv.org\u002Fabs\u002F2406.10667)\n\nThe environments and algorithms currently supported by LightZero are shown in the table below:\n\n\n| Env.\u002FAlgo.             
| AlphaZero | MuZero | Sampled MuZero | EfficientZero | Sampled EfficientZero | Gumbel MuZero | Stochastic MuZero | UniZero | Sampled UniZero | ReZero |\n|------------------------| -------- | ---- |---------------| ---------- | ------------------ | ------------- | ---------------- | ------- | --- | ------ |\n| TicTacToe              | ✔        | ✔    | 🔒            | 🔒         | 🔒                | ✔             | 🔒               | ✔      | 🔒   | 🔒     |\n| Gomoku                 | ✔        | ✔    | 🔒            | 🔒         | 🔒                | ✔             | 🔒               | ✔      | 🔒   | ✔      |\n| Connect4               | ✔        | ✔    | 🔒            | 🔒         | 🔒                | 🔒             | 🔒               | ✔      | 🔒   | ✔      |\n| 2048                   | ---      | ✔    | 🔒            | 🔒         | 🔒                | 🔒             | ✔               | ✔      | 🔒   | 🔒     |\n| Chess                  | 🔒        | 🔒   | 🔒            | 🔒         | 🔒                | 🔒             | 🔒               | 🔒      | 🔒  | 🔒     |\n| Go                     | 🔒        | 🔒   | 🔒            | 🔒         | 🔒                | 🔒             | 🔒               | 🔒      | 🔒  | 🔒     |\n| CartPole               | ---      | ✔    | 🔒            | ✔          | ✔                 | ✔             | ✔               | ✔      | 🔒   | ✔      |\n| Pendulum               | ---      | ✔    | ✔             | ✔          | ✔                 | ✔             | ✔               | 🔒      | ✔  | 🔒     |\n| LunarLander            | ---      | ✔    | ✔             | ✔          | ✔                 | ✔             | ✔               | ✔      | ✔  | 🔒     |\n| BipedalWalker          | ---      | ✔    | ✔             | ✔          | ✔                 | ✔             | 🔒               | 🔒      | ✔  | 🔒     |\n| Atari                  | ---      | ✔    | 🔒            | ✔          | ✔                 | ✔             | ✔               | ✔      | 🔒   | ✔      |\n| DeepMind Control       | ---      | ---     | ✔            | ---            | ✔                 | 🔒             | 🔒               | 🔒      | ✔  | 🔒     |\n| MuJoCo                 | ---      | ✔    | 🔒            | ✔          | ✔                 | 🔒             | 🔒               | 🔒      | 🔒  | 🔒     |\n| MiniGrid               | ---      | ✔    | 🔒            | ✔          | ✔                 | 🔒             | 🔒               | ✔      | 🔒   | 🔒     |\n| Bsuite                 | ---      | ✔    | 🔒            | ✔          | ✔                 | 🔒             | 🔒               | ✔      | 🔒   | 🔒     |\n| Memory                 | ---      | ✔    | 🔒              | ✔          | ✔                 | 🔒             | 🔒               | ✔      | 🔒   | 🔒     |\n| SumToThree (billiards) | ---      | 🔒   | 🔒            | 🔒         | ✔                 | 🔒             | 🔒               | 🔒      | 🔒  | 🔒     |\n| MetaDrive     | ---      | 🔒     | 🔒      | 🔒  | ✔               | 🔒         | 🔒           | 🔒  | 🔒 |🔒             |\n\n\n\u003Csup>(1): \"✔\" means that the corresponding item is finished and well-tested.\u003C\u002Fsup>\n\n\u003Csup>(2): \"🔒\" means that the corresponding item is in the waiting-list (Work In Progress).\u003C\u002Fsup>\n\n\u003Csup>(3): \"---\" means that this algorithm doesn't support this environment.\u003C\u002Fsup>\n\n\n## ⚙️ Installation\n\nYou can install the latest LightZero in development from the GitHub source codes with the following command:\n\n```bash\ngit clone https:\u002F\u002Fgithub.com\u002Fopendilab\u002FLightZero.git\ncd LightZero\npip3 
install -e .\n```\n\nKindly note that LightZero currently supports compilation only on `Linux` and `macOS` platforms.\nWe are actively working towards extending this support to the `Windows` platform. \nYour patience during this transition is greatly appreciated.\n\n### Installation with Docker\n\nWe also provide a Dockerfile that sets up an environment with all dependencies needed to run the LightZero library. This Docker image is based on Ubuntu 20.04 and installs Python 3.8, along with other necessary tools and libraries.\nHere's how to use our Dockerfile to build a Docker image, run a container from this image, and execute LightZero code inside the container.\n1. **Download the Dockerfile**: The Dockerfile is located in the root directory of the LightZero repository. Download this [file](https:\u002F\u002Fgithub.com\u002Fopendilab\u002FLightZero\u002Fblob\u002Fmain\u002FDockerfile) to your local machine.\n2. **Prepare the build context**: Create a new empty directory on your local machine, move the Dockerfile into this directory, and navigate into this directory. This step helps to avoid sending unnecessary files to the Docker daemon during the build process.\n    ```bash\n    mkdir lightzero-docker\n    mv Dockerfile lightzero-docker\u002F\n    cd lightzero-docker\u002F\n    ```\n3. **Build the Docker image**: Use the following command to build the Docker image. This command should be run from inside the directory that contains the Dockerfile.\n    ```bash\n    docker build -t ubuntu-py38-lz:latest -f .\u002FDockerfile .\n    ```\n4. **Run a container from the image**: Use the following command to start a container from the image in interactive mode with a Bash shell.\n    ```bash\n    docker run -dit --rm ubuntu-py38-lz:latest \u002Fbin\u002Fbash\n    ```\n5. **Execute LightZero code inside the container**: Once you're inside the container, you can run the example Python script with the following command:\n    ```bash\n    python .\u002FLightZero\u002Fzoo\u002Fclassic_control\u002Fcartpole\u002Fconfig\u002Fcartpole_muzero_config.py\n    ```\n\n[comment]: \u003C> (- [AlphaGo Zero]&#40;https:\u002F\u002Fwww.nature.com\u002Farticles\u002Fnature24270&#41; )\n\n## 🚀 Quick Start\n\nTrain a MuZero agent to play [CartPole](https:\u002F\u002Fgymnasium.farama.org\u002Fenvironments\u002Fclassic_control\u002Fcart_pole\u002F):\n\n```bash\ncd LightZero\npython3 -u zoo\u002Fclassic_control\u002Fcartpole\u002Fconfig\u002Fcartpole_muzero_config.py\n```\n\nTrain a MuZero agent to play [Pong](https:\u002F\u002Fgymnasium.farama.org\u002Fenvironments\u002Fatari\u002Fpong\u002F):\n\n```bash\ncd LightZero\npython3 -u zoo\u002Fatari\u002Fconfig\u002Fatari_muzero_segment_config.py\n```\n\nTrain a MuZero agent to play [TicTacToe](https:\u002F\u002Fen.wikipedia.org\u002Fwiki\u002FTic-tac-toe):\n\n```bash\ncd LightZero\npython3 -u zoo\u002Fboard_games\u002Ftictactoe\u002Fconfig\u002Ftictactoe_muzero_bot_mode_config.py\n```\n\nTrain a UniZero agent to play [Pong](https:\u002F\u002Fgymnasium.farama.org\u002Fenvironments\u002Fatari\u002Fpong\u002F):\n\n```bash\ncd LightZero\npython3 -u zoo\u002Fatari\u002Fconfig\u002Fatari_unizero_segment_config.py\n```\n\n## 📚 Documentation\n\nThe LightZero documentation can be found [here](https:\u002F\u002Fopendilab.github.io\u002FLightZero\u002F). 
It contains tutorials and the API reference.\n\nFor those interested in customizing environments and algorithms, we provide relevant guides:\n\n- [Customize Environments](https:\u002F\u002Fgithub.com\u002Fopendilab\u002FLightZero\u002Fblob\u002Fmain\u002Fdocs\u002Fsource\u002F\u002Ftutorials\u002Fenvs\u002Fcustomize_envs.md)\n- [Customize Algorithms](https:\u002F\u002Fgithub.com\u002Fopendilab\u002FLightZero\u002Fblob\u002Fmain\u002Fdocs\u002Fsource\u002F\u002Ftutorials\u002Falgos\u002Fcustomize_algos.md)\n- [How to Set Configuration Files?](https:\u002F\u002Fgithub.com\u002Fopendilab\u002FLightZero\u002Fblob\u002Fmain\u002Fdocs\u002Fsource\u002F\u002Ftutorials\u002Fconfig\u002Fconfig.md)\n- [Logging and Monitoring System](https:\u002F\u002Fgithub.com\u002Fopendilab\u002FLightZero\u002Fblob\u002Fmain\u002Fdocs\u002Fsource\u002F\u002Ftutorials\u002Flogs\u002Flogs.md)\n- [Loss Landscape Visualization](https:\u002F\u002Fgithub.com\u002Fopendilab\u002FLightZero\u002Fblob\u002Fmain\u002Flzero\u002Floss_landscape\u002FREADME.md)\n\nShould you have any questions, feel free to contact us for support.\n\n## 📊 Benchmark\n\n\u003Cdetails>\u003Csummary>Click to expand\u003C\u002Fsummary>\n\n- Below are the benchmark results of [AlphaZero](https:\u002F\u002Fgithub.com\u002Fopendilab\u002FLightZero\u002Fblob\u002Fmain\u002Flzero\u002Fpolicy\u002Falphazero.py) and [MuZero](https:\u002F\u002Fgithub.com\u002Fopendilab\u002FLightZero\u002Fblob\u002Fmain\u002Flzero\u002Fpolicy\u002Fmuzero.py) on three board games: [TicTacToe](https:\u002F\u002Fgithub.com\u002Fopendilab\u002FLightZero\u002Fblob\u002Fmain\u002Fzoo\u002Fboard_games\u002Ftictactoe\u002Fenvs\u002Ftictactoe_env.py), [Connect4](https:\u002F\u002Fgithub.com\u002Fopendilab\u002FLightZero\u002Fblob\u002Fmain\u002Fzoo\u002Fboard_games\u002Fconnect4\u002Fenvs\u002Fconnect4_env.py), [Gomoku](https:\u002F\u002Fgithub.com\u002Fopendilab\u002FLightZero\u002Fblob\u002Fmain\u002Fzoo\u002Fboard_games\u002Fgomoku\u002Fenvs\u002Fgomoku_env.py).\n\u003Cp align=\"center\">\n  \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fopendilab_LightZero_readme_9ad7439e7495.png\" alt=\"tictactoe_bot-mode_main\" width=\"30%\" height=\"auto\" style=\"margin: 0 1%;\">\n  \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fopendilab_LightZero_readme_2b34ab768a3f.png\" alt=\"connect4_bot-mode_main\" width=\"30%\" height=\"auto\" style=\"margin: 0 1%;\">\n  \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fopendilab_LightZero_readme_2b34ab768a3f.png\" alt=\"gomoku_bot-mode_main\" width=\"30%\" height=\"auto\" style=\"margin: 0 1%;\">\n\u003C\u002Fp>\n\n- Below are the benchmark results of [MuZero](https:\u002F\u002Fgithub.com\u002Fopendilab\u002FLightZero\u002Fblob\u002Fmain\u002Flzero\u002Fpolicy\u002Fmuzero.py), [MuZero w\u002F SSL](https:\u002F\u002Fgithub.com\u002Fopendilab\u002FLightZero\u002Fblob\u002Fmain\u002Flzero\u002Fpolicy\u002Fmuzero.py) , [EfficientZero](https:\u002F\u002Fgithub.com\u002Fopendilab\u002FLightZero\u002Fblob\u002Fmain\u002Flzero\u002Fpolicy\u002Fefficientzero.py) and [Sampled EfficientZero](https:\u002F\u002Fgithub.com\u002Fopendilab\u002FLightZero\u002Fblob\u002Fmain\u002Flzero\u002Fpolicy\u002Fsampled_efficientzero.py) on three discrete action space games in [Atari](https:\u002F\u002Fgithub.com\u002Fopendilab\u002FLightZero\u002Fblob\u002Fmain\u002Fzoo\u002Fatari\u002Fenvs\u002Fatari_lightzero_env.py).\n\u003Cp align=\"center\">\n  \u003Cimg 
src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fopendilab_LightZero_readme_dc3a472d62d8.png\" alt=\"pong_main\" width=\"23%\" height=\"auto\" style=\"margin: 0 1%;\">\n  \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fopendilab_LightZero_readme_d7baef6450a1.png\" alt=\"qbert_main\" width=\"23%\" height=\"auto\" style=\"margin: 0 1%;\">\n  \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fopendilab_LightZero_readme_321c5146b6e0.png\" alt=\"mspacman_main\" width=\"23%\" height=\"auto\" style=\"margin: 0 1%;\">\n  \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fopendilab_LightZero_readme_805ba560a4d3.png\" alt=\"mspacman_sez_K\" width=\"23%\" height=\"auto\" style=\"margin: 0 1%;\">\n\u003C\u002Fp>\n\n\n- Below are the benchmark results of [Sampled EfficientZero](https:\u002F\u002Fgithub.com\u002Fopendilab\u002FLightZero\u002Fblob\u002Fmain\u002Flzero\u002Fpolicy\u002Fsampled_efficientzero.py) with ``Factored\u002FGaussian`` policy representation on three classic continuous action space games: [Pendulum-v1](https:\u002F\u002Fgithub.com\u002Fopendilab\u002FLightZero\u002Fblob\u002Fmain\u002Fzoo\u002Fclassic_control\u002Fpendulum\u002Fenvs\u002Fpendulum_lightzero_env.py), [LunarLanderContinuous-v2](https:\u002F\u002Fgithub.com\u002Fopendilab\u002FLightZero\u002Fblob\u002Fmain\u002Fzoo\u002Fbox2d\u002Flunarlander\u002Fenvs\u002Flunarlander_env.py), [BipedalWalker-v3](https:\u002F\u002Fgithub.com\u002Fopendilab\u002FLightZero\u002Fblob\u002Fmain\u002Fzoo\u002Fbox2d\u002Fbipedalwalker\u002Fenvs\u002Fbipedalwalker_env.py)\nand two MuJoCo continuous action space games: [Hopper-v3](https:\u002F\u002Fgithub.com\u002Fopendilab\u002FLightZero\u002Fblob\u002Fmain\u002Fzoo\u002Fmujoco\u002Fenvs\u002Fmujoco_lightzero_env.py), [Walker2d-v3](https:\u002F\u002Fgithub.com\u002Fopendilab\u002FLightZero\u002Fblob\u002Fmain\u002Fzoo\u002Fmujoco\u002Fenvs\u002Fmujoco_lightzero_env.py).\n> \"Factored Policy\" indicates that the agent learns a policy network that outputs a categorical distribution. After manual discretization, the dimensions of the action space for the five environments are 11, 49 (7^2), 256 (4^4), 64 (4^3), and 4096 (4^6), respectively. 
On the other hand, \"Gaussian Policy\" refers to the agent learning a policy network that directly outputs parameters (mu and sigma) for a Gaussian distribution.\n\u003Cp align=\"center\">\n  \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fopendilab_LightZero_readme_0ebc3ff49c98.png\" alt=\"pendulum_main\" width=\"30%\" height=\"auto\" style=\"margin: 0 1%;\">\n  \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fopendilab_LightZero_readme_7ba5a2a8a1cf.png\" alt=\"pendulum_sez_K\" width=\"30%\" height=\"auto\" style=\"margin: 0 1%;\">\n  \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fopendilab_LightZero_readme_4fac92a43fad.png\" alt=\"lunarlander_main\" width=\"30%\" height=\"auto\" style=\"margin: 0 1%;\">\n\u003C\u002Fp>\n\u003Cp align=\"center\">\n  \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fopendilab_LightZero_readme_0d9f85047e97.png\" alt=\"bipedalwalker_main\" width=\"30%\" height=\"auto\" style=\"margin: 0 1%;\">\n  \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fopendilab_LightZero_readme_ac378132e830.png\" alt=\"hopper_main\" width=\"31.5%\" height=\"auto\" style=\"margin: 0 1%;\">\n  \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fopendilab_LightZero_readme_35fe538e99cd.png\" alt=\"walker2d_main\" width=\"31.5%\" height=\"auto\" style=\"margin: 0 1%;\">\n\u003C\u002Fp>\n\n- Below are the benchmark results of [GumbelMuZero](https:\u002F\u002Fgithub.com\u002Fopendilab\u002FLightZero\u002Fblob\u002Fmain\u002Flzero\u002Fpolicy\u002Fgumbel_muzero.py) and [MuZero](https:\u002F\u002Fgithub.com\u002Fopendilab\u002FLightZero\u002Fblob\u002Fmain\u002Flzero\u002Fpolicy\u002Fmuzero.py) (under different simulation cost) on four environments: [PongNoFrameskip-v4](https:\u002F\u002Fgithub.com\u002Fopendilab\u002FLightZero\u002Fblob\u002Fmain\u002Fzoo\u002Fatari\u002Fenvs\u002Fatari_lightzero_env.py), [MsPacmanNoFrameskip-v4]((https:\u002F\u002Fgithub.com\u002Fopendilab\u002FLightZero\u002Fblob\u002Fmain\u002Fzoo\u002Fatari\u002Fenvs\u002Fatari_lightzero_env.py)), [Gomoku](https:\u002F\u002Fgithub.com\u002Fopendilab\u002FLightZero\u002Fblob\u002Fmain\u002Fzoo\u002Fboard_games\u002Fgomoku\u002Fenvs\u002Fgomoku_env.py), and [LunarLanderContinuous-v2](https:\u002F\u002Fgithub.com\u002Fopendilab\u002FLightZero\u002Fblob\u002Fmain\u002Fzoo\u002Fbox2d\u002Flunarlander\u002Fenvs\u002Flunarlander_env.py).\n\u003Cp align=\"center\">\n  \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fopendilab_LightZero_readme_e9c9f845b745.png\" alt=\"pong_gmz_ns\" width=\"23%\" height=\"auto\" style=\"margin: 0 1%;\">\n  \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fopendilab_LightZero_readme_ad43cfdec7b5.png\" alt=\"mspacman_gmz_ns\" width=\"23%\" height=\"auto\" style=\"margin: 0 1%;\">\n  \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fopendilab_LightZero_readme_b97eb57bbc46.png\" alt=\"gomoku_bot-mode_gmz_ns\" width=\"23%\" height=\"auto\" style=\"margin: 0 1%;\">\n  \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fopendilab_LightZero_readme_c41198d7136f.png\" alt=\"lunarlander_gmz_ns\" width=\"23%\" height=\"auto\" style=\"margin: 0 1%;\">\n\u003C\u002Fp>\n\n- Below are the benchmark results of [StochasticMuZero](https:\u002F\u002Fgithub.com\u002Fopendilab\u002FLightZero\u002Fblob\u002Fmain\u002Flzero\u002Fpolicy\u002Fstochastic_muzero.py) and 
[MuZero](https:\u002F\u002Fgithub.com\u002Fopendilab\u002FLightZero\u002Fblob\u002Fmain\u002Flzero\u002Fpolicy\u002Fmuzero.py) on [2048 environment](https:\u002F\u002Fgithub.com\u002Fopendilab\u002FLightZero\u002Fblob\u002Fmain\u002Fzoo\u002Fgame_2048\u002Fenvs\u002Fgame_2048_env.py) with varying levels of chance (num_chances=2 and 5).\n\u003Cp align=\"center\">\n  \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fopendilab_LightZero_readme_f856ab75072b.png\" alt=\"2048_stochasticmz_mz\" width=\"30%\" height=\"auto\" style=\"margin: 0 1%;\">\n  \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fopendilab_LightZero_readme_8da1b7cdc615.png\" alt=\"mspacman_gmz_ns\" width=\"30%\" height=\"auto\" style=\"margin: 0 1%;\">\n\u003C\u002Fp>\n\n- Below are the benchmark results of various MCTS exploration mechanisms of [MuZero w\u002F SSL](https:\u002F\u002Fgithub.com\u002Fopendilab\u002FLightZero\u002Fblob\u002Fmain\u002Flzero\u002Fpolicy\u002Fmuzero.py) in the [MiniGrid environment](https:\u002F\u002Fgithub.com\u002Fopendilab\u002FLightZero\u002Fblob\u002Fmain\u002Fzoo\u002Fminigrid\u002Fenvs\u002Fminigrid_lightzero_env.py).\n\u003Cp align=\"center\">\n  \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fopendilab_LightZero_readme_fc7f7bd8505b.png\" alt=\"keycorridors3r3_exploration\" width=\"30%\" height=\"auto\" style=\"margin: 0 1%;\">\n  \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fopendilab_LightZero_readme_a31bef7217b0.png\" alt=\"fourrooms_exploration\" width=\"30%\" height=\"auto\" style=\"margin: 0 1%;\">\n\u003C\u002Fp>\n\n\u003C\u002Fdetails>\n\n\n## 📝 Awesome-MCTS Notes\n\n### Paper Notes\nThe following are the detailed paper notes (in Chinese) of the above algorithms:\n\n\u003Cdetails open>\u003Csummary>Click to collapse\u003C\u002Fsummary>\n\n  \n- [AlphaZero](https:\u002F\u002Fgithub.com\u002Fopendilab\u002FLightZero\u002Fblob\u002Fmain\u002Fassets\u002Fpaper_notes\u002FAlphaZero.pdf)\n- [MuZero](https:\u002F\u002Fgithub.com\u002Fopendilab\u002FLightZero\u002Fblob\u002Fmain\u002Fassets\u002Fpaper_notes\u002FMuZero.pdf)\n- [EfficientZero](https:\u002F\u002Fgithub.com\u002Fopendilab\u002FLightZero\u002Fblob\u002Fmain\u002Fassets\u002Fpaper_notes\u002FEfficientZero.pdf)\n- [SampledMuZero](https:\u002F\u002Fgithub.com\u002Fopendilab\u002FLightZero\u002Fblob\u002Fmain\u002Fassets\u002Fpaper_notes\u002FSampledMuZero.pdf)\n- [GumbelMuZero](https:\u002F\u002Fgithub.com\u002Fopendilab\u002FLightZero\u002Fblob\u002Fmain\u002Fassets\u002Fpaper_notes\u002FGumbelMuZero.pdf)\n- [StochasticMuZero](https:\u002F\u002Fgithub.com\u002Fopendilab\u002FLightZero\u002Fblob\u002Fmain\u002Fassets\u002Fpaper_notes\u002FStochasticMuZero.pdf)\n- [NotationTable](https:\u002F\u002Fgithub.com\u002Fopendilab\u002FLightZero\u002Fblob\u002Fmain\u002Fassets\u002Fpaper_notes\u002FSymbolTable.pdf)\n\n\u003C\u002Fdetails>\n\nYou can also refer to the relevant Zhihu column (in Chinese): [In-depth Analysis of MCTS+RL Frontier Theories and Applications](https:\u002F\u002Fwww.zhihu.com\u002Fcolumn\u002Fc_1764308735227662336).\n\n### Algo. 
Overview\n\nThe following are the overview MCTS principle diagrams of the above algorithms:\n\n\u003Cdetails>\u003Csummary>Click to expand\u003C\u002Fsummary>\n\n- [MCTS](https:\u002F\u002Fgithub.com\u002Fopendilab\u002FLightZero\u002Fblob\u002Fmain\u002Fassets\u002Falgo_overview\u002Fmcts_overview.pdf)\n- [AlphaZero](https:\u002F\u002Fgithub.com\u002Fopendilab\u002FLightZero\u002Fblob\u002Fmain\u002Fassets\u002Falgo_overview\u002Falphazero_overview.pdf)\n- [MuZero](https:\u002F\u002Fgithub.com\u002Fopendilab\u002FLightZero\u002Fblob\u002Fmain\u002Fassets\u002Falgo_overview\u002Fmuzero_overview.png)\n- [EfficientZero](https:\u002F\u002Fgithub.com\u002Fopendilab\u002FLightZero\u002Fblob\u002Fmain\u002Fassets\u002Falgo_overview\u002Fefficientzero_overview.png)\n- [SampledMuZero](https:\u002F\u002Fgithub.com\u002Fopendilab\u002FLightZero\u002Fblob\u002Fmain\u002Fassets\u002Falgo_overview\u002Fsampled_muzero_overview.png)\n- [GumbelMuZero](https:\u002F\u002Fgithub.com\u002Fopendilab\u002FLightZero\u002Fblob\u002Fmain\u002Fassets\u002Falgo_overview\u002Fgumbel_muzero_overview.png)\n- [StochasticMuZero](https:\u002F\u002Fgithub.com\u002Fopendilab\u002FLightZero\u002Fblob\u002Fmain\u002Fassets\u002Falgo_overview\u002Fstochastic_muzero_overview.png)\n\n\u003C\u002Fdetails>\n\n## Awesome-MCTS Papers\n\nHere is a collection of research papers about **Monte Carlo Tree Search**.\n[This Section](#awesome-msts-papers) will be continuously updated to track the frontier of MCTS. \n\n### Classic & Foundational Papers\n\n\u003Cdetails>\u003Csummary>Click to expand\u003C\u002Fsummary>\n\n#### LightZero Implemented series\n\n- [2018 _Science_ AlphaZero: A general reinforcement learning algorithm that masters chess, shogi, and Go through self-play](https:\u002F\u002Fwww.science.org\u002Fdoi\u002F10.1126\u002Fscience.aar6404)\n- [2019 MuZero: Mastering Atari, Go, Chess and Shogi by Planning with a Learned Model](https:\u002F\u002Farxiv.org\u002Fabs\u002F1911.08265)\n- [2021 EfficientZero: Mastering Atari Games with Limited Data](https:\u002F\u002Farxiv.org\u002Fabs\u002F2111.00210)\n- [2021 Sampled MuZero: Learning and Planning in Complex Action Spaces](https:\u002F\u002Farxiv.org\u002Fabs\u002F2104.06303)\n- [2022 Stochastic MuZero: Planning in Stochastic Environments with A Learned Model](https:\u002F\u002Fopenreview.net\u002Fpdf?id=X6D9bAHhBQ1)\n- [2022 Gumbel MuZero: Policy Improvement by Planning with Gumbel](https:\u002F\u002Fopenreview.net\u002Fpdf?id=bERaNdoegnO&)\n- [2024 UniZero: Generalized and Efficient Planning with Scalable Latent World Models](https:\u002F\u002Farxiv.org\u002Fabs\u002F2406.10667)\n\n#### AlphaGo series\n- [2015 _Nature_ AlphaGo Mastering the game of Go with deep neural networks and tree search](https:\u002F\u002Fwww.nature.com\u002Farticles\u002Fnature16961)\n- [2017 _Nature_ AlphaGo Zero Mastering the game of Go without human knowledge](https:\u002F\u002Fwww.nature.com\u002Farticles\u002Fnature24270)\n- [2019 ELF OpenGo: An Analysis and Open Reimplementation of AlphaZero](https:\u002F\u002Farxiv.org\u002Fabs\u002F1902.04522) \n  - [Code](https:\u002F\u002Fgithub.com\u002Fpytorch\u002FELF)\n- [2023 Student of Games: A unified learning algorithm for both perfect and imperfect information games](https:\u002F\u002Fwww.science.org\u002Fdoi\u002F10.1126\u002Fsciadv.adg3256)\n\n#### MuZero series\n- [2022 Online and Offline Reinforcement Learning by Planning with a Learned Model](https:\u002F\u002Farxiv.org\u002Fabs\u002F2104.06294)\n- [2021 Vector Quantized Models for 
Planning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2106.04615)\n- [2021 Muesli: Combining Improvements in Policy Optimization. ](https:\u002F\u002Farxiv.org\u002Fabs\u002F2104.06159)\n\n#### MCTS Analysis\n- [2020 Monte-Carlo Tree Search as Regularized Policy Optimization](https:\u002F\u002Farxiv.org\u002Fabs\u002F2007.12509)\n- [2021 Self-Consistent Models and Values](https:\u002F\u002Farxiv.org\u002Fabs\u002F2110.12840)\n- [2022 Adversarial Policies Beat Professional-Level Go AIs](https:\u002F\u002Farxiv.org\u002Fabs\u002F2211.00241)\n- [2022 _PNAS_ Acquisition of Chess Knowledge in AlphaZero.](https:\u002F\u002Farxiv.org\u002Fabs\u002F2111.09259)\n\n#### MCTS Application\n- [2023 Symbolic Physics Learner: Discovering governing equations via Monte Carlo tree search](https:\u002F\u002Fopenreview.net\u002Fpdf?id=ZTK3SefE8_Z)\n- [2022 _Nature_ Discovering faster matrix multiplication algorithms with reinforcement learning](https:\u002F\u002Fwww.nature.com\u002Farticles\u002Fs41586-022-05172-4) \n  - [Code](https:\u002F\u002Fgithub.com\u002Fdeepmind\u002Falphatensor)\n- [2022 MuZero with Self-competition for Rate Control in VP9 Video Compression](https:\u002F\u002Farxiv.org\u002Fabs\u002F2202.06626)\n- [2021 DouZero: Mastering DouDizhu with Self-Play Deep Reinforcement Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2106.06135)\n- [2019 Combining Planning and Deep Reinforcement Learning in Tactical Decision Making for Autonomous Driving](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1905.02680.pdf)\n\n\u003C\u002Fdetails>\n\n### Recent Research & Emerging Applications\n\n\u003Cdetails>\u003Csummary>Click to expand\u003C\u002Fsummary>\n\n#### ICML\n- [STAIR: Improving Safety Alignment with Introspective Reasoning](https:\u002F\u002Fopenreview.net\u002Fforum?id=aHzPGyUhZa) 2025\n  - Yichi Zhang, Siyuan Zhang, Yao Huang, Zeyu Xia, Zhengwei Fang, Xiao Yang, Ranjie Duan, Dong Yan, Yinpeng Dong, Jun Zhu\n  - Key: LLM, Safety Alignment, Reasoning\n  - ExpEnv: StrongReject, XsTest, WildChat, Do-Not-Answer, GSM8k, AlpacaEval 2.0, BIG-bench HHH, SimpleQA, InfoFlow, AdvGLUE\n- [rStar-Math: Small LLMs Can Master Math Reasoning with Self-Evolved Deep Thinking](https:\u002F\u002Fopenreview.net\u002Fforum?id=5zwF1GizFa) 2025\n  - Xinyu Guan, Li Lyna Zhang, Yifei Liu, Ning Shang, Youran Sun, Yi Zhu, Fan Yang, Mao Yang\n  - Key: LLM, Reasoning, Self-evolution\n  - ExpEnv: GSM8K, MATH, AIME 2024, AMC 2023, Olympiad Bench, College Math, Gaokao (Chinese College Entrance Exam 2023)\n- [Monte-Carlo Tree Search with Uncertainty Propagation via Optimal Transport](https:\u002F\u002Fopenreview.net\u002Fforum?id=DUGFTH9W8B) 2025\n  - Tuan Quang Dam, Pascal Stenger, Lukas Schneider, Joni Pajarinen, Carlo D'Eramo, Odalric-Ambrym Maillard\n  - Key: Monte-Carlo Tree Search, Planning under Uncertainty\n  - ExpEnv: FrozenLake, NChain, RiverSwim, SixArms, Taxi, Rocksample, Pocman, Tag, LaserTag\n- [Re-ranking Reasoning Context with Tree Search Makes Large Vision-Language Models Stronger](https:\u002F\u002Fopenreview.net\u002Fforum?id=DJcEoC9JpQ) 2025\n  - Qi Yang, Chenghao Zhang, Lubin Fan, Kun Ding, Jieping Ye, Shiming Xiang\n  - Key: Large Vision Language Model, Multimodal Retrieval-Augmented Generation, In-context Learning, Monte Carlo Tree Search\n  - ExpEnv: ScienceQA, MMMU, MathV, VizWiz, VSR-MC \n- [Mastering Board Games by External and Internal Planning with Language Models](https:\u002F\u002Fopenreview.net\u002Fforum?id=KKwBo3u3IW) 2025\n  - John Schultz, Jakub Adamek, Matej Jusup, Marc Lanctot, Michael Kaisers, 
Sarah Perrin, Daniel Hennes, Jeremy Shar, Cannada A. Lewis, Anian Ruoss, Tom Zahavy, Petar Veličković, Laurel Prince, Satinder Singh, Eric Malmi, Nenad Tomasev\n  - Key: search, planning, language models, games, chess\n  - ExpEnv: Chess, Chess960, Connect Four, Hex \n- [Language Models as Implicit Tree Search](https:\u002F\u002Fopenreview.net\u002Fforum?id=bEqMmGu6qg) 2025\n  - Ziliang Chen, Zhao-Rong Lai, Yufeng Yang, Liangda Fang, ZHANFU YANG, Liang Lin\n  - Key: RL-free preference optimization; LLM based MCTS; LLM alignment;LLM reasoning\n  - ExpEnv: Anthropic HH, GSM8K, MATH, Game24\n- [Free Process Rewards without Process Labels](https:\u002F\u002Fopenreview.net\u002Fforum?id=8ThnPFhGm8) 2025\n  - Lifan Yuan, Wendi Li, Huayu Chen, Ganqu Cui, Ning Ding, Kaiyan Zhang, Bowen Zhou, Zhiyuan Liu, Hao Peng\n  - Key: Process Reward Model\n  - ExpEnv: MATH\n- [Monte Carlo Tree Search for Comprehensive Exploration in LLM-Based Automatic Heuristic Design](https:\u002F\u002Fopenreview.net\u002Fforum?id=Do1OdZzYHr) 2025\n  - Zhi Zheng, Zhuoliang Xie, Zhenkun Wang, Bryan Hooi\n  - Key: Automatic Heuristic Design, Combinatorial Optimization, Large Language Model, Neural Combinatorial Optimization, Monte Carlo Tree Search\n  - ExpEnv: TSP, Knapsack, CVRP, Multiple Knapsack, Bin Packing, Admissible Set Problem, Bayesian Optimization\n- [Boosting Virtual Agent Learning and Reasoning: A Step-Wise, Multi-Dimensional, and Generalist Reward Model with Benchmark](https:\u002F\u002Fopenreview.net\u002Fforum?id=OKWlVPHeW1) 2025\n  - Bingchen Miao, Yang Wu, Minghe Gao, Qifan Yu, Wendong Bu, Wenqiao Zhang, Yunfei Li, Siliang Tang, Tat-Seng Chua, Juncheng Li\n  - Key: Virtual Agent; Digital Agent; Reward Model\n  - ExpEnv: WebArena, VisualWebArena, Android World, OSWorld\n- [Online Robust Reinforcement Learning Through Monte-Carlo Planning](https:\u002F\u002Fopenreview.net\u002Fforum?id=m25ma7O7Ec) 2025\n  - Tuan Quang Dam, Kishan Panaganti, Brahim Driss, Adam Wierman\n  - Key: Monte-carlo tree search, distributionally robust reinforcement learning, online reinforcement learning\n  - ExpEnv: Gambler’s Problem, Frozen Lake, American Option Pricing \n- [Trust-Region Twisted Policy Improvement](https:\u002F\u002Fopenreview.net\u002Fgroup?id=ICML.cc\u002F2025\u002FConference#tab-accept-oral) 2025\n  - Joery A. de Vries, Jinke He, Yaniv Oren, Matthijs T. J. 
Spaan\n  - Key: Reinforcement Learning; Sequential Monte-Carlo; Monte-Carlo Tree Search; planning; model-based; policy improvement\n  - ExpEnv: Brax, Jumanji \n- [KBQA-o1: Agentic Knowledge Base Question Answering with Monte Carlo Tree Search](https:\u002F\u002Fopenreview.net\u002Fforum?id=QuecSemZIy) 2025\n  - Haoran Luo, Haihong E, Yikai Guo, Qika Lin, Xiaobao Wu, Xinyu Mu, Wenhao Liu, Meina Song, Yifan Zhu, Anh Tuan Luu\n  - Key: Knowledge Base Question Answering, Large Language Model, LLM Agents, Monte Carlo Tree Search\n  - ExpEnv: GrailQA, WebQSP, GraphQ\n- [Monte Carlo Tree Diffusion for System 2 Planning](https:\u002F\u002Fproceedings.mlr.press\u002Fv267\u002Fyoon25a.html) 2025\n  - Jaesik Yoon, Hyeonseo Cho, Doojin Baek, Yoshua Bengio, Sungjin Ahn\n  - Key: Diffusion Models, MCTS, System 2 Planning, Trajectory Optimization\n  - ExpEnv: Maze2D, Kitchen, Block stacking\n  - [Code](https:\u002F\u002Fgithub.com\u002Fahn-ml\u002Fmctd)\n- [Monte-Carlo Tree Search with Uncertainty Propagation via Optimal Transport](https:\u002F\u002Fopenreview.net\u002Fforum?id=DUGFTH9W8B) 2025\n  - Tuan Quang Dam, Pascal Stenger, Lukas Schneider, Joni Pajarinen, Carlo D’Eramo, Odalric-Ambrym Maillard\n  - Key: Optimal Transport, Wasserstein Distance, Uncertainty Propagation, MCTS\n  - ExpEnv: FrozenLake, NChain, RiverSwim, SixArms, Taxi, Rocksample\n- [Online Robust Reinforcement Learning Through Monte-Carlo Planning](https:\u002F\u002Fopenreview.net\u002Fforum?id=m25ma7O7Ec) 2025\n  - Tuan Quang Dam, Kishan Panaganti, Brahim Driss, Adam Wierman\n  - Key: Robust RL, MCTS, Distributionally Robust Optimization, Sim-to-Real\n  - ExpEnv: Gambler’s Problem, Frozen Lake, American Option Pricing\n  - [Code](https:\u002F\u002Fgithub.com\u002Fbrahimdriss\u002FRobustMCTS)\n- [Power Mean Estimation in Stochastic Continuous Monte-Carlo Tree Search](https:\u002F\u002Ficml.cc\u002Fvirtual\u002F2025\u002Fposter\u002F45596) 2025\n  - Tuan Quang Dam\n  - Key: Continuous MCTS, Polynomial Exploration, Stochastic Environments, Power Mean\n  - ExpEnv: Continuous Cartpole, Inverted Pendulum\n- [Language Agent Tree Search Unifies Reasoning, Acting, and Planning in Language Models](https:\u002F\u002Ficml.cc\u002Fvirtual\u002F2024\u002Fposter\u002F33107) 2024  \n  - Andy Zhou, Kai Yan, Michal Shlapentokh-Rothman, Haohan Wang, Yu-Xiong Wang  \n  - Key: language models, decision-making, Monte Carlo Tree Search, reasoning, acting, planning  \n  - ExpEnv: HumanEval, WebShop, interactive QA, programming, math\n- [Efficient Adaptation in Mixed-Motive Environments via Hierarchical Opponent Modeling and Planning](https:\u002F\u002Fproceedings.mlr.press\u002Fv235\u002Fhuang24p.html) 2024  \n  - Yizhe Huang, Anji Liu, Fanqi Kong, Yaodong Yang, Song-Chun Zhu, Xue Feng  \n  - Key: multi-agent reinforcement learning, hierarchical opponent modeling, Monte Carlo Tree Search, few-shot adaptation, mixed-motive environments  \n  - ExpEnv: multi-agent decision-making scenarios, self-play, mixed-motive interactions\n- [Accelerating Look-ahead in Bayesian Optimization: Multilevel Monte Carlo is All you Need](https:\u002F\u002Fopenreview.net\u002Fforum?id=46vXhZn7lN) 2024  \n  - Shangda Yang, Vitaly Zankin, Maximilian Balandat, Stefan Scherer, Kevin Thomas Carlberg, Neil Walton, Kody J. H. 
Law  \n  - Key: Bayesian optimization, multilevel Monte Carlo, nested expectations, acquisition functions  \n  - ExpEnv: Benchmark examples\n- [Accelerated Speculative Sampling Based on Tree Monte Carlo](https:\u002F\u002Fopenreview.net\u002Fforum?id=stMhi1Sn2G) 2024  \n  - Zhengmian Hu, Heng Huang  \n  - Key: speculative sampling, large language models, tree Monte Carlo, inference acceleration  \n  - ExpEnv: Not specified\n- [Provably Efficient Long-Horizon Exploration in Monte Carlo Tree Search through State Occupancy Regularization](https:\u002F\u002Farxiv.org\u002Fabs\u002F2407.05511) 2024\n  - Liam Schramm, Abdeslam Boularias\n  - Key: Exploration, State Occupancy, Long-horizon planning, Volume-MCTS\n  - ExpEnv: Robot Navigation, 2D Maze\n  - [Code](https:\u002F\u002Fgithub.com\u002Fschrammlb2\u002FVolume-MCTS-ICML)\n- [Scalable Safe Policy Improvement via Monte Carlo Tree Search](https:\u002F\u002Fopenreview.net\u002Fpdf?id=tevbBSzSfK) 2023\n  - Alberto Castellini, Federico Bianchi, Edoardo Zorzi, Thiago D. Simão, Alessandro Farinelli, Matthijs T. J. Spaan\n  - Key: safe policy improvement online using a MCTS based strategy, Safe Policy Improvement with Baseline Bootstrapping\n  - ExpEnv: Gridworld and SysAdmin\n- [Efficient Learning for AlphaZero via Path Consistency](https:\u002F\u002Fproceedings.mlr.press\u002Fv162\u002Fzhao22h\u002Fzhao22h.pdf) 2022\n  - Dengwei Zhao, Shikui Tu, Lei Xu\n  - Key: limited amount of self-plays,  path consistency (PC) optimality\n  - ExpEnv: Go, Othello, Gomoku\n- [Visualizing MuZero Models](https:\u002F\u002Farxiv.org\u002Fabs\u002F2102.12924) 2021\n  - Joery A. de Vries, Ken S. Voskuil, Thomas M. Moerland, Aske Plaat\n  - Key: visualizing the value equivalent dynamics model, action trajectories diverge, two regularization techniques\n  - ExpEnv: CartPole and MountainCar.\n- [Convex Regularization in Monte-Carlo Tree Search](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2007.00391.pdf) 2021\n  - Tuan Dam, Carlo D'Eramo, Jan Peters, Joni Pajarinen\n  - Key: entropy-regularization backup operators, regret analysis, Tsallis entropy\n  - ExpEnv: synthetic tree, Atari\n- [Information Particle Filter Tree: An Online Algorithm for POMDPs with Belief-Based Rewards on Continuous Domains](http:\u002F\u002Fproceedings.mlr.press\u002Fv119\u002Ffischer20a\u002Ffischer20a.pdf) 2020\n  - Johannes Fischer, Ömer Sahin Tas\n  - Key: Continuous POMDP, Particle Filter Tree, information-based reward shaping, Information Gathering.\n  - ExpEnv: POMDPs.jl framework\n  - [Code](https:\u002F\u002Fgithub.com\u002Fjohannes-fischer\u002Ficml2020_ipft)\n- [Retro*: Learning Retrosynthetic Planning with Neural Guided A* Search](http:\u002F\u002Fproceedings.mlr.press\u002Fv119\u002Fchen20k\u002Fchen20k.pdf) 2020\n  - Binghong Chen, Chengtao Li, Hanjun Dai, Le Song \n  - Key: chemical retrosynthetic planning, neural-based A*-like algorithm, ANDOR tree\n  - ExpEnv: USPTO datasets\n  - [Code](https:\u002F\u002Fgithub.com\u002Fbinghong-ml\u002Fretro_star)\n#### ICLR\n- [OptionZero: Planning with Learned Options](https:\u002F\u002Fopenreview.net\u002Fforum?id=3IFRygQKGL) 2025  \n  - Po-Wei Huang, Pei-Chiun Peng, Hung Guei, Ti-Rong Wu  \n  - Key: Option, Semi-MDP, MuZero, MCTS, Planning, Reinforcement Learning  \n  - ExpEnv: 26 Atari games\n- [Monte Carlo Planning with Large Language Model for Text-Based Games](https:\u002F\u002Fopenreview.net\u002Fforum?id=r1KcapkzCt) 2025  \n  - Zijing Shi, Meng Fang, Ling Chen  \n  - Key: Large language model, Monte Carlo tree search, Text-based games  
\n  - ExpEnv: Jericho benchmark\n- [Epistemic Monte Carlo Tree Search](https:\u002F\u002Fopenreview.net\u002Fforum?id=Tb8RiXOc3N) 2025  \n  - Yaniv Oren, Viliam Vadocz, Matthijs T. J. Spaan, Wendelin Boehmer  \n  - Key: model based, epistemic uncertainty, exploration, planning, alphazero, muzero  \n  - ExpEnv: SUBLEQ (Assembly language), Deep Sea\n- [Enhancing Software Agents with Monte Carlo Tree Search and Hindsight Feedback](https:\u002F\u002Fopenreview.net\u002Fforum?id=G7sIFXugTX) 2025  \n  - Antonis Antoniades, Albert Örwall, Kexun Zhang, Yuxi Xie, Anirudh Goyal, William Yang Wang  \n  - Key: agents, LLM, SWE-agents, SWE-bench, search, planning, reasoning, self-improvement, open-ended  \n  - ExpEnv: SWE-bench\n- [Epistemic Monte Carlo Tree Search](https:\u002F\u002Fopenreview.net\u002Fforum?id=Tb8RiXOc3N) 2025\n  - Wendelin Boehmer, Zheng Shen, Haoran Duan, Chengzhi Mao, Rosario Scalise\n  - Key: MCTS, Epistemic Uncertainty, Exploration, Sparse Reward, Model-based RL\n  - ExpEnv: Deep Sea, SUBLEQ (Assembly language)\n- [DeepSeek-Prover-V1.5: Harnessing Proof Assistant Feedback for Reinforcement Learning and Monte-Carlo Tree Search](https:\u002F\u002Fopenreview.net\u002Fforum?id=I4YAIwrsXa) 2025\n  - DeepSeek Prover Team\n  - Key: Automated Theorem Proving, LLM, MCTS, RL from Proof Assistant Feedback (RLPAF), RMaxTS\n  - ExpEnv: Lean 4, miniF2F, ProofNet\n  - [Code](https:\u002F\u002Fgithub.com\u002Fdeepseek-ai\u002FDeepSeek-Prover-V1.5)\n- [Bayes Adaptive Monte Carlo Tree Search for Offline Model-based Reinforcement Learning](https:\u002F\u002Fopenreview.net\u002Fforum?id=RGjqr1jBJy) 2025\n  - Lucas Niu Janson, et al.\n  - Key: Offline RL, Model-based RL, Bayes-Adaptive MDP, Uncertainty Propagation\n  - ExpEnv: D4RL\n- [The Update Equivalence Framework for Decision-Time Planning](https:\u002F\u002Fopenreview.net\u002Fforum?id=JXGph215fL) 2024\n  - Samuel Sokota, Gabriele Farina, David J Wu, Hengyuan Hu, Kevin A. 
Wang, J Zico Kolter, Noam Brown\n  - Key: imperfect-information games, search, decision-time planning, update equivalence\n  - ExpEnv: Hanabi, 3x3 Abrupt Dark Hex and Phantom Tic-Tac-Toe\n- [Efficient Multi-agent Reinforcement Learning by Planning](https:\u002F\u002Fopenreview.net\u002Fforum?id=CpnKq3UJwp) 2024\n  - Qihan Liu, Jianing Ye, Xiaoteng Ma, Jun Yang, Bin Liang, Chongjie Zhang\n  - Key: multi-agent reinforcement learning, planning, multi-agent MCTS\n  - ExpEnv: SMAC, LunarLander, MuJoCo, and Google Research Football\n- [PromptAgent: Strategic Planning with Large Language Models Enables Expert-Level Prompt Optimization](https:\u002F\u002Fopenreview.net\u002Fforum?id=22pyNMuIoa) 2024\n  - Zhutian Yang, et al.\n  - Key: Prompt Optimization, Strategic Planning, MCTS, LLM Agent\n  - ExpEnv: BIG-Bench Hard (BBH), MMLU, HellaSwag\n  - [Code](https:\u002F\u002Fgithub.com\u002Fzhutianyang\u002FPromptAgent)\n- [Become a Proficient Player with Limited Data through Watching Pure Videos](https:\u002F\u002Fopenreview.net\u002Fpdf?id=Sy-o2N0hF4f) 2023\n  - Weirui Ye, Yunsheng Zhang, Pieter Abbeel, Yang Gao\n  - Key: pre-training from action-free videos, forward-inverse cycle consistency (FICC) objective based on vector quantization, pre-training phase, fine-tuning phase.\n  - ExpEnv: Atari\n- [Policy-Based Self-Competition for Planning Problems](https:\u002F\u002Farxiv.org\u002Fabs\u002F2306.04403) 2023\n  - Jonathan Pirnay, Quirin Göttl, Jakob Burger, Dominik Gerhard Grimm\n  - Key: self-competition, find strong trajectories by planning against possible strategies of its past self.\n  - ExpEnv: Traveling Salesman Problem and the Job-Shop Scheduling Problem.\n- [Explaining Temporal Graph Models through an Explorer-Navigator Framework](https:\u002F\u002Fopenreview.net\u002Fpdf?id=BR_ZhvcYbGJ) 2023\n  - Wenwen Xia, Mincai Lai, Caihua Shan, Yao Zhang, Xinnan Dai, Xiang Li, Dongsheng Li\n  - Key: Temporal GNN Explainer, an explorer to find the event subsets with MCTS, a navigator that learns the correlations between events and helps reduce the search space.\n  - ExpEnv: Wikipedia and Reddit, Synthetic datasets\n- [SpeedyZero: Mastering Atari with Limited Data and Time](https:\u002F\u002Fopenreview.net\u002Fpdf?id=Mg5CLXZgvLJ) 2023\n  - Yixuan Mei, Jiaxuan Gao, Weirui Ye, Shaohuai Liu, Yang Gao, Yi Wu\n  - Key: distributed RL system, Priority Refresh, Clipped LARS\n  - ExpEnv: Atari\n- [Efficient Offline Policy Optimization with a Learned Model](https:\u002F\u002Fopenreview.net\u002Fpdf?id=Yt-yM-JbYFO) 2023\n  - Zichen Liu, Siyi Li, Wee Sun Lee, Shuicheng YAN, Zhongwen Xu\n  - Key: Regularized One-Step Model-based algorithm for Offline-RL\n  - ExpEnv: Atari，BSuite\n  - [Code](https:\u002F\u002Fgithub.com\u002Fsail-sg\u002Frosmo\u002Ftree\u002Fmain)\n- [Enabling Arbitrary Translation Objectives with Adaptive Tree Search](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2202.11444.pdf) 2022\n  - Wang Ling, Wojciech Stokowiec, Domenic Donato, Chris Dyer, Lei Yu, Laurent Sartran, Austin Matthews\n  - Key: adaptive tree search, translation models, autoregressive models\n  - ExpEnv: Chinese–English and Pashto–English tasks from WMT2020, German–English from WMT2014\n- [What's Wrong with Deep Learning in Tree Search for Combinatorial Optimization](https:\u002F\u002Farxiv.org\u002Fabs\u002F2201.10494) 2022\n  - Maximili1an Böther, Otto Kißig, Martin Taraz, Sarel Cohen, Karen Seidel, Tobias Friedrich\n  - Key: combinatorial optimization, open-source benchmark suite for the NP-hard maximum independent set problem, an 
in-depth analysis of the popular guided tree search algorithm, compare the tree search implementations to other solvers\n  - ExpEnv: NP-hard MAXIMUM INDEPENDENT SET.\n  - [Code](https:\u002F\u002Fgithub.com\u002Fmaxiboether\u002Fmis-benchmark-framework)\n- [Monte-Carlo Planning and Learning with Language Action Value Estimates](https:\u002F\u002Fopenreview.net\u002Fpdf?id=7_G8JySGecm) 2021\n  - Youngsoo Jang, Seokin Seo, Jongmin Lee, Kee-Eung Kim\n  - Key: Monte-Carlo tree search with language-driven exploration, locally optimistic language value estimates.\n  - ExpEnv: Interactive Fiction (IF) games\n- [Practical Massively Parallel Monte-Carlo Tree Search Applied to Molecular Design](https:\u002F\u002Farxiv.org\u002Fabs\u002F2006.10504) 2021\n  - Xiufeng Yang, Tanuj Kr Aasawat, Kazuki Yoshizoe\n  - Key: massively parallel Monte-Carlo Tree Search, molecular design, Hash-driven parallel search\n  - ExpEnv:  octanol-water partition coefficient (logP) penalized by the synthetic accessibility (SA) and large Ring Penalty score.\n- [Watch the Unobserved: A Simple Approach to Parallelizing Monte Carlo Tree Search](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1810.11755.pdf) 2020\n  - Anji Liu, Jianshu Chen, Mingze Yu, Yu Zhai, Xuewen Zhou, Ji Liu\n  - Key: parallel Monte-Carlo Tree Search, partition the tree into sub-trees efficiently, compare the observation ratio of each processor.\n  - ExpEnv: speedup and performance comparison on JOY-CITY game, average episode return on atari game\n  - [Code](https:\u002F\u002Fgithub.com\u002Fliuanji\u002FWU-UCT)\n- [Learning to Plan in High Dimensions via Neural Exploration-Exploitation Trees](https:\u002F\u002Fopenreview.net\u002Fpdf?id=rJgJDAVKvB) 2020\n  - Binghong Chen, Bo Dai, Qinjie Lin, Guo Ye, Han Liu, Le Song\n  - Key: meta path planning algorithm, exploits a novel neural architecture which can learn promising search directions from problem structures.\n  - ExpEnv: a 2d workspace with a 2 DoF (degrees of freedom) point robot, a 3 DoF stick robot and a 5 DoF snake robot\n#### NeurIPS\n- [Feedback-Aware MCTS for Goal-Oriented Information Seeking](https:\u002F\u002Fopenreview.net\u002Fpdf?id=ustF8MMZDJ) 2025\n  - Harmanpreet Chopra, Chirag Shah\n  - Key: Conversational AI, Goal-Oriented Information Seeking, MCTS, LLM\n  - ExpEnv: 20 Questions, GuessWhat?, MutualFriends\n- [MCTS-Transfer: Monte Carlo Tree Search based Space Transfer for Black-box Optimization](https:\u002F\u002Fopenreview.net\u002Fforum?id=T5UfIfmDbq) 2024\n  - Shukuan Wang, Ke Xue, Lei Song, Xiaobin Huang, Chao Qian\n  - Key: Black-box Optimization, Transfer Learning, MCTS, Search Space Transfer\n  - ExpEnv: Synthetic functions (Ackley, etc.), Design-Bench, Hyper-parameter optimization\n  - [Code](https:\u002F\u002Fgithub.com\u002Flamda-bbo\u002Fmcts-transfer)\n- [Speculative Monte-Carlo Tree Search](https:\u002F\u002Fproceedings.neurips.cc\u002Fpaper_files\u002Fpaper\u002F2024\u002Ffile\u002Fa19940b01b77b6acd41ff8b32b334e7c-Paper-Conference.pdf) 2024\n  - Jungwoo Park, David Wu, Kellin Pelrine, Jimmy Wei, Thomas Anthony, Julian Schrittwieser, Junwhan Ahn\n  - Key: Efficiency, Speculative Execution, Parallelism, AlphaZero\n  - ExpEnv: Go (9x9, 19x19)\n- [Generating Code World Models with Large Language Models Guided by Monte Carlo Tree Search](https:\u002F\u002Fproceedings.neurips.cc\u002Fpaper_files\u002Fpaper\u002F2024\u002Ffile\u002F6f479ea488e0908ac8b1b37b27fd134c-Paper-Conference.pdf) 2024\n  - Nicola Dainese, Matteo Merler, Minttu Alakuijala, Pekka Marttinen\n  - Key: Code 
Generation, World Models, MCTS, Model-based Planning\n  - ExpEnv: CWMB (Code World Models Benchmark), Crafter\n- [ReST-MCTS*: LLM Self-Training via Process Reward Guided Tree Search](https:\u002F\u002Fopenreview.net\u002Fforum?id=8rcFOqEud5) 2024\n  - Dan Zhang, Sining Zhoubian, Ziniu Hu, Yisong Yue, Yuxiao Dong, Jie Tang\n  - Key: LLM Self-training, Process Reward, Reasoning, CoT\n  - ExpEnv: GSM8K, MATH\n  - [Code](https:\u002F\u002Fgithub.com\u002FTHUDM\u002FReST-MCTS)\n- [LightZero: A Unified Benchmark for Monte Carlo Tree Search in General Sequential Decision Scenarios](https:\u002F\u002Fopenreview.net\u002Fpdf?id=oIUXpBnyjv) 2023\n  - Yazhe Niu, Yuan Pu, Zhenjie Yang, Xueyan Li, Tong Zhou, Jiyuan Ren, Shuai Hu, Hongsheng Li, Yu Liu\n  - Key: the first unified benchmark for deploying MCTS\u002FMuZero in general sequential decision scenarios.\n  - ExpEnv: ClassicControl, Box2D, Atari, MuJoCo, GoBigger, MiniGrid, TicTacToe, ConnectFour, Gomoku, 2048, etc.\n- [Large Language Models as Commonsense Knowledge for Large-Scale Task Planning](https:\u002F\u002Fopenreview.net\u002Fpdf?id=Wjp1AYB8lH) 2023\n  - Zirui Zhao, Wee Sun Lee, David Hsu\n  - Key: world model (LLM) and the LLM-induced policy can be combined in MCTS, to scale up task planning.\n  - ExpEnv: multiplication, travel planning, object rearrangement\n- [Monte Carlo Tree Search with Boltzmann Exploration](https:\u002F\u002Fopenreview.net\u002Fpdf?id=NG4DaApavi) 2023\n  - Michael Painter, Mohamed Baioumy, Nick Hawes, Bruno Lacerda\n  - Key: Boltzmann exploration with MCTS, optimal actions for the maximum entropy objective do not necessarily correspond to optimal actions for the original objective, two improved algorithms.\n  - ExpEnv: the Frozen Lake environment, the Sailing Problem, Go\n- [Generalized Weighted Path Consistency for Mastering Atari Games](https:\u002F\u002Fopenreview.net\u002Fpdf?id=vHRLS8HhK1) 2023\n  - Dengwei Zhao, Shikui Tu, Lei Xu\n  - Key: Generalized Weighted Path Consistency, A weighting mechanism.\n  - ExpEnv: Atari\n- [Accelerating Monte Carlo Tree Search with Probability Tree State Abstraction](https:\u002F\u002Fopenreview.net\u002Fpdf?id=0zeLTZAqaJ) 2023\n  - Yangqing Fu, Ming Sun, Buqing Nie, Yue Gao\n  - Key: probability tree state abstraction, transitivity and aggregation error bound\n  - ExpEnv: Atari, CartPole, LunarLander, Gomoku\n- [Spending Thinking Time Wisely: Accelerating MCTS with Virtual Expansions](https:\u002F\u002Fopenreview.net\u002Fpdf?id=B_LdLljS842) 2022\n  - Weirui Ye, Pieter Abbeel, Yang Gao\n  - Key: trade off computation versus performancem, virtual expansions, spend thinking time adaptively.\n  - ExpEnv: Atari, 9x9 Go\n- [Planning for Sample Efficient Imitation Learning](https:\u002F\u002Fopenreview.net\u002Fforum?id=BkN5UoAqF7) 2022\n  - Zhao-Heng Yin, Weirui Ye, Qifeng Chen, Yang Gao\n  - Key: Behavioral Cloning，Adversarial Imitation Learning (AIL)，MCTS-based RL.\n  - ExpEnv:  DeepMind Control Suite\n  - [Code](https:\u002F\u002Fgithub.com\u002Fzhaohengyin\u002FEfficientImitate)\n- [Evaluation Beyond Task Performance: Analyzing Concepts in AlphaZero in Hex](https:\u002F\u002Fopenreview.net\u002Fpdf?id=dwKwB2Cd-Km) 2022 \n  - Charles Lovering, Jessica Zosa Forde, George Konidaris, Ellie Pavlick, Michael L. 
Littman\n  - Key: AlphaZero’s internal representations, model probing and behavioral tests, how these concepts are captured in the network.\n  - ExpEnv: Hex\n- [Are AlphaZero-like Agents Robust to Adversarial Perturbations?](https:\u002F\u002Fopenreview.net\u002Fpdf?id=yZ_JlZaOCzv) 2022\n  - Li-Cheng Lan, Huan Zhang, Ti-Rong Wu, Meng-Yu Tsai, I-Chen Wu, 4 Cho-Jui Hsieh\n  - Key: adversarial states, first adversarial attack on Go AIs.\n  - ExpEnv: Go\n- [Monte Carlo Tree Descent for Black-Box Optimization](https:\u002F\u002Fopenreview.net\u002Fpdf?id=FzdmrTUyZ4g) 2022\n  - Yaoguang Zhai, Sicun Gao\n  - Key: Black-Box Optimization, how to further integrate samplebased descent for faster optimization. \n  - ExpEnv: synthetic functions for nonlinear optimization, reinforcement learning problems in MuJoCo locomotion environments, and optimization problems in Neural Architecture Search (NAS).\n- [Monte Carlo Tree Search based Variable Selection for High Dimensional Bayesian Optimization](https:\u002F\u002Fopenreview.net\u002Fpdf?id=SUzPos_pUC) 2022\n  - Lei Song∗ , Ke Xue∗ , Xiaobin Huang, Chao Qian\n  - Key: a low-dimensional subspace via MCTS, optimizes in the subspace with any Bayesian optimization algorithm.\n  - ExpEnv: NAS-bench problems and MuJoCo locomotion\n- [Monte Carlo Tree Search With Iteratively Refining State Abstractions](https:\u002F\u002Fproceedings.neurips.cc\u002Fpaper\u002F2021\u002Ffile\u002F9b0ead00a217ea2c12e06a72eec4923f-Paper.pdf) 2021\n  - Samuel Sokota, Caleb Ho, Zaheen Ahmad, J. Zico Kolter\n  - Key: stochastic environments, Progressive widening, abstraction refining\n  - ExpEnv: Blackjack, Trap, five by five Go.\n- [Deep Synoptic Monte Carlo Planning in Reconnaissance Blind Chess](https:\u002F\u002Fproceedings.neurips.cc\u002Fpaper\u002F2021\u002Ffile\u002F215a71a12769b056c3c32e7299f1c5ed-Paper.pdf) 2021\n  - Gregory Clark\n  - Key: imperfect information, belief state with an unweighted particle filter, a novel stochastic abstraction of information states.\n  - ExpEnv:  reconnaissance blind chess\n- [POLY-HOOT: Monte-Carlo Planning in Continuous Space MDPs with Non-Asymptotic Analysis](https:\u002F\u002Fproceedings.neurips.cc\u002Fpaper\u002F2020\u002Ffile\u002F30de24287a6d8f07b37c716ad51623a7-Paper.pdf) 2020\n  - Weichao Mao, Kaiqing Zhang, Qiaomin Xie, Tamer Ba¸sar\n  - Key: continuous state-action spaces, Hierarchical Optimistic Optimization.\n  - ExpEnv: CartPole, Inverted Pendulum, Swing-up, and LunarLander.\n- [Learning Search Space Partition for Black-box Optimization using Monte Carlo Tree Search](https:\u002F\u002Fproceedings.neurips.cc\u002Fpaper\u002F2020\u002Ffile\u002Fe2ce14e81dba66dbff9cbc35ecfdb704-Paper.pdf) 2020\n  - Linnan Wang, Rodrigo Fonseca, Yuandong Tian\n  - Key: learns the partition of the search space using a few samples, a nonlinear decision boundary and learns a local model to pick good candidates.\n  - ExpEnv: MuJoCo locomotion tasks, Small-scale Benchmarks\n- [Mix and Match: An Optimistic Tree-Search Approach for Learning Models from Mixture Distributions](https:\u002F\u002Farxiv.org\u002Fabs\u002F1907.10154) 2020\n  - Matthew Faw, Rajat Sen, Karthikeyan Shanmugam, Constantine Caramanis, Sanjay Shakkottai\n  - Key: covariate shift problem, Mix&Match combines stochastic gradient descent (SGD) with optimistic tree search and model re-use (evolving partially trained models with samples from different mixture distributions)\n  - [Code](https:\u002F\u002Fgithub.com\u002Fmatthewfaw\u002Fmixnmatch)\n#### Other Conference or Journal\n- 
[Learning to Stop: Dynamic Simulation Monte-Carlo Tree Search](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2012.07910.pdf) AAAI 2021.\n- [On Monte Carlo Tree Search and Reinforcement Learning](https:\u002F\u002Fwww.jair.org\u002Findex.php\u002Fjair\u002Farticle\u002Fdownload\u002F11099\u002F26289\u002F20632) Journal of Artificial Intelligence Research 2017.\n- [Sample-Efficient Neural Architecture Search by Learning Actions for Monte Carlo Tree Search](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1906.06832) IEEE Transactions on Pattern Analysis and Machine Intelligence 2022.\n\u003C\u002Fdetails>\n\n\n## 💬 Feedback and Contribution\n\n- [File an issue](https:\u002F\u002Fgithub.com\u002Fopendilab\u002FLightZero\u002Fissues\u002Fnew\u002Fchoose) on Github\n- Open or participate in our [discussion forum](https:\u002F\u002Fgithub.com\u002Fopendilab\u002FLightZero\u002Fdiscussions)\n- Discuss on LightZero [discord server](https:\u002F\u002Fdiscord.gg\u002FdkZS2JF56X)\n- Contact our email (opendilab@pjlab.org.cn)\n\n- We appreciate all the feedback and contributions to improve LightZero, both algorithms and system designs. \n\n[comment]: \u003C> (- Contributes to our future plan [Roadmap]&#40;https:\u002F\u002Fgithub.com\u002Fopendilab\u002FLightZero\u002Fprojects&#41;)\n\n[comment]: \u003C> (And `CONTRIBUTING.md` offers some necessary information.)\n\n\n## 🌏 Citation\n```latex\n@article{niu2024lightzero,\n  title={LightZero: A Unified Benchmark for Monte Carlo Tree Search in General Sequential Decision Scenarios},\n  author={Niu, Yazhe and Pu, Yuan and Yang, Zhenjie and Li, Xueyan and Zhou, Tong and Ren, Jiyuan and Hu, Shuai and Li, Hongsheng and Liu, Yu},\n  journal={Advances in Neural Information Processing Systems},\n  volume={36},\n  year={2024}\n}\n\n@article{puunizero,\n  title={UniZero: Generalized and Efficient Planning with Scalable Latent World Models},\n  author={Pu, Yuan and Niu, Yazhe and Yang, Zhenjie and Ren, Jiyuan and Li, Hongsheng and Liu, Yu},\n  journal={Transactions on Machine Learning Research}\n}\n\n@article{xuan2024rezero,\n  title={ReZero: Boosting MCTS-based Algorithms by Backward-view and Entire-buffer Reanalyze},\n  author={Xuan, Chunyu and Niu, Yazhe and Pu, Yuan and Hu, Shuai and Liu, Yu and Yang, Jing},\n  journal={arXiv preprint arXiv:2404.16364},\n  year={2024}\n}\n\n@article{pu2025one,\n  title={One Model for All Tasks: Leveraging Efficient World Models in Multi-Task Planning},\n  author={Pu, Yuan and Niu, Yazhe and Tang, Jia and Xiong, Junyu and Hu, Shuai and Li, Hongsheng},\n  journal={arXiv preprint arXiv:2509.07945},\n  year={2025}\n}\n```\n\n## 💓 Acknowledgments\n\nThis project has been developed partially based on the following pioneering works on GitHub repositories.\nWe express our profound gratitude for these foundational resources:\n- https:\u002F\u002Fgithub.com\u002Fopendilab\u002FDI-engine\n- https:\u002F\u002Fgithub.com\u002Fdeepmind\u002Fmctx\n- https:\u002F\u002Fgithub.com\u002FYeWR\u002FEfficientZero\n- https:\u002F\u002Fgithub.com\u002Fwerner-duvaud\u002Fmuzero-general\n\nWe would like to extend our special thanks to the following contributors [@PaParaZz1](https:\u002F\u002Fgithub.com\u002FPaParaZz1), [@karroyan](https:\u002F\u002Fgithub.com\u002Fkarroyan), [@nighood](https:\u002F\u002Fgithub.com\u002Fnighood), \n[@jayyoung0802](https:\u002F\u002Fgithub.com\u002Fjayyoung0802), [@timothijoe](https:\u002F\u002Fgithub.com\u002Ftimothijoe), [@TuTuHuss](https:\u002F\u002Fgithub.com\u002FTuTuHuss), 
[@HarryXuancy](https:\u002F\u002Fgithub.com\u002FHarryXuancy), [@puyuan1996](https:\u002F\u002Fgithub.com\u002Fpuyuan1996), [@HansBug](https:\u002F\u002Fgithub.com\u002FHansBug) for their valuable contributions and support to this algorithm library.\n\nThanks to all who contributed to this project:\n\u003Ca href=\"https:\u002F\u002Fgithub.com\u002Fopendilab\u002FLightZero\u002Fgraphs\u002Fcontributors\">\n\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fopendilab_LightZero_readme_e4559a668299.png\" \u002F>\n\u003C\u002Fa>\n\n\n## 🏷️ License\nAll code within this repository is under [Apache License 2.0](https:\u002F\u002Fwww.apache.org\u002Flicenses\u002FLICENSE-2.0).\n\n\u003Cp align=\"right\">(\u003Ca href=\"#top\">Back to top\u003C\u002Fa>)\u003C\u002Fp>\n","\u003Cdiv id=\"top\">\u003C\u002Fdiv>\n\n# LightZero\n\n\u003Cdiv align=\"center\">\n    \u003Cimg width=\"1000px\" height=\"auto\" src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fopendilab_LightZero_readme_39f1f155ea20.png\">\u003C\u002Fa>\n\u003C\u002Fdiv>\n\n---\n\n[![Twitter](https:\u002F\u002Fimg.shields.io\u002Ftwitter\u002Furl?style=social&url=https%3A%2F%2Ftwitter.com%2Fopendilab)](https:\u002F\u002Ftwitter.com\u002Fopendilab)\n[![PyPI](https:\u002F\u002Fimg.shields.io\u002Fpypi\u002Fv\u002FLightZero)](https:\u002F\u002Fpypi.org\u002Fproject\u002FLightZero\u002F)\n![PyPI - Python Version](https:\u002F\u002Fimg.shields.io\u002Fpypi\u002Fpyversions\u002FLightZero)\n![Loc](https:\u002F\u002Fimg.shields.io\u002Fendpoint?url=https:\u002F\u002Fgist.githubusercontent.com\u002FHansBug\u002Fe002642132ec758e99264118c66778a4\u002Fraw\u002Floc.json)\n![Comments](https:\u002F\u002Fimg.shields.io\u002Fendpoint?url=https:\u002F\u002Fgist.githubusercontent.com\u002FHansBug\u002Fe002642132ec758e99264118c66778a4\u002Fraw\u002Fcomments.json)\n\n[![Code Test](https:\u002F\u002Fgithub.com\u002Fopendilab\u002FLightZero\u002Fworkflows\u002FCode%20Test\u002Fbadge.svg)](https:\u002F\u002Fgithub.com\u002Fopendilab\u002FLightZero\u002Factions?query=workflow%3A%22Code+Test%22)\n[![Badge Creation](https:\u002F\u002Fgithub.com\u002Fopendilab\u002FLightZero\u002Fworkflows\u002FBadge%20Creation\u002Fbadge.svg)](https:\u002F\u002Fgithub.com\u002Fopendilab\u002FLightZero\u002Factions?query=workflow%3A%22Badge+Creation%22)\n[![Package Release](https:\u002F\u002Fgithub.com\u002Fopendilab\u002FLightZero\u002Fworkflows\u002FPackage%20Release\u002Fbadge.svg)](https:\u002F\u002Fgithub.com\u002Fopendilab\u002FLightZero\u002Factions?query=workflow%3A%22Package+Release%22)\n\n![GitHub Org's stars](https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Fstars\u002Fopendilab)\n[![GitHub stars](https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Fstars\u002Fopendilab\u002FLightZero)](https:\u002F\u002Fgithub.com\u002Fopendilab\u002FLightZero\u002Fstargazers)\n[![GitHub forks](https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Fforks\u002Fopendilab\u002FLightZero)](https:\u002F\u002Fgithub.com\u002Fopendilab\u002FLightZero\u002Fnetwork)\n![GitHub commit activity](https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Fcommit-activity\u002Fm\u002Fopendilab\u002FLightZero)\n[![GitHub issues](https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Fissues\u002Fopendilab\u002FLightZero)](https:\u002F\u002Fgithub.com\u002Fopendilab\u002FLightZero\u002Fissues)\n[![GitHub 
pulls](https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Fissues-pr\u002Fopendilab\u002FLightZero)](https:\u002F\u002Fgithub.com\u002Fopendilab\u002FLightZero\u002Fpulls)\n[![Contributors](https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Fcontributors\u002Fopendilab\u002FLightZero)](https:\u002F\u002Fgithub.com\u002Fopendilab\u002FLightZero\u002Fgraphs\u002Fcontributors)\n[![GitHub license](https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Flicense\u002Fopendilab\u002FLightZero)](https:\u002F\u002Fgithub.com\u002Fopendilab\u002FLightZero\u002Fblob\u002Fmaster\u002FLICENSE)\n[![discord badge](https:\u002F\u002Fdcbadge.vercel.app\u002Fapi\u002Fserver\u002FdkZS2JF56X?style=flat)](https:\u002F\u002Fdiscord.gg\u002FdkZS2JF56X)\n\n更新于 2026年3月11日 LightZero-v0.2.0\n\n\n> LightZero 是一个轻量级、高效且易于理解的开源算法工具包，结合了蒙特卡洛树搜索（MCTS）和深度强化学习（RL）。\n\nEnglish | [简体中文(Simplified Chinese)](https:\u002F\u002Fgithub.com\u002Fopendilab\u002FLightZero\u002Fblob\u002Fmain\u002FREADME.zh.md) | [Documentation](https:\u002F\u002Fopendilab.github.io\u002FLightZero) | [LightZero Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2310.08348) | [UniZero Paper](https:\u002F\u002Fopenreview.net\u002Fforum?id=Gl6dF9soQo) | [ReZero Paper](https:\u002F\u002Fopenreview.net\u002Fforum?id=F9Y7j3AJTu) | [🔥ScaleZero Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2509.07945)\n\n## 新闻\n\n- [2026.02] 🔥 ScaleZero 论文已被 ICLR 2026 接受为会议论文：《一个模型适用于所有任务：在多任务规划中利用高效的世界模型》(https:\u002F\u002Farxiv.org\u002Fabs\u002F2509.07945)。\n- [2025.08] [ReZero 论文](https:\u002F\u002Fopenreview.net\u002Fforum?id=F9Y7j3AJTu)已被 CoRL 2025 RemembeRL 研讨会接受。\n- [2025.06] [UniZero 论文](https:\u002F\u002Fopenreview.net\u002Fforum?id=Gl6dF9soQo)已被 Transactions on Machine Learning Research (TMLR 2025) 接受。\n- [2023.09] [LightZero 论文](https:\u002F\u002Fproceedings.neurips.cc\u002Fpaper_files\u002Fpaper\u002F2023\u002Fhash\u002F765043fe026f7d704c96cec027f13843-Abstract-Datasets_and_Benchmarks.html)已被 NeurIPS 2023 数据集与基准赛道接受为 Spotlight Presentation。\n- [2023.04] LightZero v0.0.1 正式发布。\n\n## 🔍 背景\n\n以 AlphaZero 和 MuZero 为代表的蒙特卡洛树搜索与深度强化学习的结合，在围棋、Atari 游戏等多种游戏中取得了前所未有的性能水平。这一先进方法也在蛋白质结构预测和矩阵乘法算法搜索等科学领域取得了显著进展。以下是蒙特卡洛树搜索算法系列的历史演进概述：\n![pipeline](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fopendilab_LightZero_readme_5cda461b3377.png)\n\n## 🎨 概述\n\n**LightZero** 是一个基于 PyTorch 的开源算法工具包，结合了蒙特卡洛树搜索（MCTS）和强化学习（RL）。它支持一系列基于 MCTS 的 RL 算法及应用，具有以下几大优势：\n- 轻量级。\n- 高效。\n- 易于理解。\n\n更多详细信息，请参阅 [特性](#features)、[框架结构](#framework-structure) 和 [集成算法](#integrated-algorithms)。\n\n**LightZero** 的目标是 **推动 MCTS+RL 算法家族的标准化，以加速相关研究与应用**。所有实现算法在统一框架下的性能比较见 [基准测试](#benchmark)。\n\n### 大纲\n\n- [LightZero](#lightzero)\n  - [新闻](#news)\n  - [🔍 背景](#-background)\n  - [🎨 概述](#-overview)\n    - [大纲](#outline)\n    - [💥 特性](#-features)\n    - [🧩 框架结构](#-framework-structure)\n    - [🎁 集成算法](#-integrated-algorithms)\n  - [⚙️ 安装](#️-installation)\n    - [使用 Docker 安装](#installation-with-docker)\n  - [🚀 快速入门](#-quick-start)\n  - [📚 文档](#-documentation)\n  - [📊 基准测试](#-benchmark)\n  - [📝 Awesome-MCTS 笔记](#-awesome-mcts-notes)\n    - [论文笔记](#paper-notes)\n    - [算法概述](#algo-overview)\n  - [Awesome-MCTS 论文](#awesome-mcts-papers)\n    - [经典与基础论文](#classic--foundational-papers)\n      - [LightZero 实现系列](#lightzero-implemented-series)\n      - [AlphaGo 系列](#alphago-series)\n      - [MuZero 系列](#muzero-series)\n      - [MCTS 分析](#mcts-analysis)\n      - [MCTS 应用](#mcts-application)\n    - [近期研究与新兴应用](#recent-research--emerging-applications)\n      - [ICML](#icml)\n      - [ICLR](#iclr)\n      - [NeurIPS](#neurips)\n  
    - [其他会议或期刊](#other-conference-or-journal)\n  - [💬 反馈与贡献](#-feedback-and-contribution)\n  - [🌏 引用](#-citation)\n  - [💓 致谢](#-acknowledgments)\n  - [🏷️ 许可证](#️-license)\n\n### 💥 功能特性\n\n**轻量级**: LightZero 集成了多种 MCTS 算法家族，能够在轻量级框架中解决具有不同属性的决策问题。LightZero 已实现的算法和环境列表请参见 [此处](#integrated-algorithms)。\n\n**高效性**: LightZero 采用混合异构计算编程，以提升 MCTS 算法中最耗时部分的计算效率。\n\n**易理解**: LightZero 为所有集成的算法提供了详细的文档和算法框架图，帮助用户理解算法的核心，并比较同一种范式下不同算法之间的异同。此外，LightZero 还提供了算法代码实现的函数调用图和网络结构图，便于用户定位关键代码。所有文档均可在 [这里](#paper-notes)找到。\n\n### 🧩 框架结构\n\n[comment]: \u003C> (\u003Cp align=\"center\">)\n\n[comment]: \u003C> (  \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fopendilab_LightZero_readme_04e55907b30a.png\" alt=\"Image Description 1\" width=\"45%\" height=\"auto\" style=\"margin: 0 1%;\">)\n\n[comment]: \u003C> (  \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fopendilab_LightZero_readme_fa2f81a9302f.png\" alt=\"Image Description 2\" width=\"45%\" height=\"auto\" style=\"margin: 0 1%;\">)\n\n[comment]: \u003C> (\u003C\u002Fp>)\n\n\u003Cp align=\"center\">\n  \u003Cimg src=\"assets\u002Flightzero_pipeline.svg\" alt=\"Image Description 2\" width=\"50%\" height=\"auto\" style=\"margin: 0 1%;\">\n\u003C\u002Fp>\n\n上图展示了 LightZero 的框架流程。下面我们简要介绍三个核心模块：\n\n**模型**:\n``Model`` 用于定义网络结构，包括用于初始化网络结构的 ``__init__`` 函数以及用于计算网络前向传播的 ``forward`` 函数。\n\n**策略**:\n``Policy`` 定义了网络更新及与环境交互的方式，包含三个过程：学习过程、收集过程和评估过程。\n\n**MCTS**:\n``MCTS`` 定义了蒙特卡洛搜索树的结构及其与策略模块的交互方式。MCTS 的实现使用 Python 和 C++ 两种语言，分别由 ``ptree`` 和 ``ctree`` 实现。\n\n关于 LightZero 的文件结构，请参阅 [lightzero_file_structure](https:\u002F\u002Fgithub.com\u002Fopendilab\u002FLightZero\u002Fblob\u002Fmain\u002Fassets\u002Flightzero_file_structure.svg)。\n\n### 🎁 集成的算法\nLightZero 是一个基于 PyTorch 的 MCTS 算法库（有时结合 Cython 和 C++），其中包括：\n- [AlphaZero](https:\u002F\u002Fwww.science.org\u002Fdoi\u002F10.1126\u002Fscience.aar6404)\n- [MuZero](https:\u002F\u002Farxiv.org\u002Fabs\u002F1911.08265)\n- [采样 MuZero](https:\u002F\u002Farxiv.org\u002Fabs\u002F2104.06303)\n- [随机 MuZero](https:\u002F\u002Fopenreview.net\u002Fpdf?id=X6D9bAHhBQ1)\n- [EfficientZero](https:\u002F\u002Farxiv.org\u002Fabs\u002F2111.00210)\n- [Gumbel MuZero](https:\u002F\u002Fopenreview.net\u002Fpdf?id=bERaNdoegnO&)\n- [ReZero](https:\u002F\u002Farxiv.org\u002Fabs\u002F2404.16364)\n- [UniZero](https:\u002F\u002Farxiv.org\u002Fabs\u002F2406.10667)\n\n目前 LightZero 支持的环境和算法如下表所示：\n\n\n| 环境\u002F算法             | AlphaZero | MuZero | 采样 MuZero | EfficientZero | 采样 EfficientZero | Gumbel MuZero | 随机 MuZero | UniZero | 采样 UniZero | ReZero |\n|------------------------| -------- | ---- |---------------| ---------- | ------------------ | ------------- | ---------------- | ------- | --- | ------ |\n| 井字棋              | ✔        | ✔    | 🔒            | 🔒         | 🔒                | ✔             | 🔒               | ✔      | 🔒   | 🔒     |\n| 五子棋                 | ✔        | ✔    | 🔒            | 🔒         | 🔒                | ✔             | 🔒               | ✔      | 🔒   | ✔      |\n| 四子连珠               | ✔        | ✔    | 🔒            | 🔒         | 🔒                | 🔒             | 🔒               | ✔      | 🔒   | ✔      |\n| 2048                   | ---      | ✔    | 🔒            | 🔒         | 🔒                | 🔒             | ✔               | ✔      | 🔒   | 🔒     |\n| 国际象棋                  | 🔒        | 🔒   | 🔒            | 🔒         | 🔒                | 🔒             | 🔒               | 🔒      | 🔒  | 🔒     |\n| 围棋                     | 🔒        | 🔒   | 🔒            | 🔒         | 🔒              
  | 🔒             | 🔒               | 🔒      | 🔒  | 🔒     |\n| 倒立摆               | ---      | ✔    | 🔒            | ✔          | ✔                 | ✔             | ✔               | ✔      | 🔒   | ✔      |\n| 单摆               | ---      | ✔    | ✔             | ✔          | ✔                 | ✔             | ✔               | 🔒      | ✔  | 🔒     |\n| 登月飞船            | ---      | ✔    | ✔             | ✔          | ✔                 | ✔             | ✔               | ✔      | ✔  | 🔒     |\n| 双足行走者          | ---      | ✔    | ✔             | ✔          | ✔                 | ✔             | 🔒               | 🔒      | ✔  | 🔒     |\n| Atar i游戏                  | ---      | ✔    | 🔒            | ✔          | ✔                 | ✔             | ✔               | ✔      | 🔒   | ✔      |\n| DeepMind 控制       | ---      | ---     | ✔            | ---            | ✔                 | 🔒             | 🔒               | 🔒      | ✔  | 🔒     |\n| MuJoCo                 | ---      | ✔    | 🔒            | ✔          | ✔                 | 🔒             | 🔒               | 🔒      | 🔒  | 🔒     |\n| MiniGrid               | ---      | ✔    | 🔒            | ✔          | ✔                 | 🔒             | 🔒               | ✔      | 🔒   | 🔒     |\n| Bsuite                 | ---      | ✔    | 🔒            | ✔          | ✔                 | 🔒             | 🔒               | ✔      | 🔒   | 🔒     |\n| 记忆任务                 | ---      | ✔    | 🔒              | ✔          | ✔                 | 🔒             | 🔒               | ✔      | 🔒   | 🔒     |\n| 三球相加（台球） | ---      | 🔒   | 🔒            | 🔒         | ✔                 | 🔒             | 🔒               | 🔒      | 🔒  | 🔒     |\n| MetaDrive     | ---      | 🔒     | 🔒      | 🔒  | ✔               | 🔒         | 🔒           | 🔒  | 🔒 |🔒             |\n\n\n\u003Csup>(1): “✔” 表示该项目已完成并经过充分测试。\u003C\u002Fsup>\n\n\u003Csup>(2): “🔒” 表示该项目处于待处理状态（开发中）。\u003C\u002Fsup>\n\n\u003Csup>(3): “---” 表示该算法不支持此环境。\u003C\u002Fsup>\n\n\n## ⚙️ 安装说明\n\n您可以通过以下命令从 GitHub 源代码安装最新版本的 LightZero 开发版：\n\n```bash\ngit clone https:\u002F\u002Fgithub.com\u002Fopendilab\u002FLightZero.git\ncd LightZero\npip3 install -e .\n```\n\n请注意，LightZero 目前仅支持在 `Linux` 和 `macOS` 平台上进行编译。我们正在积极努力将支持扩展到 `Windows` 平台。在此过渡期间，您的耐心等待将不胜感激。\n\n### 使用 Docker 安装\n\n我们还提供了一个 Dockerfile，用于搭建运行 LightZero 库所需的所有依赖环境。该 Docker 镜像是基于 Ubuntu 20.04 构建的，并安装了 Python 3.8 以及其他必要的工具和库。\n以下是使用我们的 Dockerfile 构建 Docker 镜像、从该镜像运行容器，并在容器内执行 LightZero 代码的方法。\n1. **下载 Dockerfile**：Dockerfile 位于 LightZero 仓库的根目录下。请将此[文件](https:\u002F\u002Fgithub.com\u002Fopendilab\u002FLightZero\u002Fblob\u002Fmain\u002FDockerfile)下载到本地机器。\n2. **准备构建上下文**：在本地机器上创建一个空目录，将 Dockerfile 移动到该目录中，并进入该目录。这一步有助于避免在构建过程中向 Docker 守护进程发送不必要的文件。\n    ```bash\n    mkdir lightzero-docker\n    mv Dockerfile lightzero-docker\u002F\n    cd lightzero-docker\u002F\n    ```\n3. **构建 Docker 镜像**：使用以下命令构建 Docker 镜像。此命令应在包含 Dockerfile 的目录内执行。\n    ```bash\n    docker build -t ubuntu-py38-lz:latest -f .\u002FDockerfile .\n    ```\n4. **从镜像运行容器**：使用以下命令以交互模式并带有 Bash shell 启动容器。\n    ```bash\n    docker run -dit --rm ubuntu-py38-lz:latest \u002Fbin\u002Fbash\n    ```\n5. 
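**进入正在运行的容器**：上一步使用 `-dit` 参数让容器在后台运行。若需进入该容器，可先用 `docker ps` 查询容器 ID，再通过 `docker exec` 打开交互式 Bash（以下为通用 Docker 用法示例，\u003C容器ID> 需替换为实际查询到的值）。\n    ```bash\n    docker ps\n    docker exec -it \u003C容器ID> \u002Fbin\u002Fbash\n    ```\n6. 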
**在容器内执行 LightZero 代码**：进入容器后，可以使用以下命令运行示例 Python 脚本：\n    ```bash\n    python .\u002FLightZero\u002Fzoo\u002Fclassic_control\u002Fcartpole\u002Fconfig\u002Fcartpole_muzero_config.py\n    ```\n\n[comment]: \u003C> (- [AlphaGo Zero]&#40;https:\u002F\u002Fwww.nature.com\u002Farticles\u002Fnature24270&#41; )\n\n## 🚀 快速入门\n\n训练一个 MuZero 智能体来玩 [CartPole](https:\u002F\u002Fgymnasium.farama.org\u002Fenvironments\u002Fclassic_control\u002Fcart_pole\u002F)：\n\n```bash\ncd LightZero\npython3 -u zoo\u002Fclassic_control\u002Fcartpole\u002Fconfig\u002Fcartpole_muzero_config.py\n```\n\n训练一个 MuZero 智能体来玩 [Pong](https:\u002F\u002Fgymnasium.farama.org\u002Fenvironments\u002Fatari\u002Fpong\u002F)：\n\n```bash\ncd LightZero\npython3 -u zoo\u002Fatari\u002Fconfig\u002Fatari_muzero_segment_config.py\n```\n\n训练一个 MuZero 智能体来玩 [TicTacToe](https:\u002F\u002Fen.wikipedia.org\u002Fwiki\u002FTic-tac-toe)：\n\n```bash\ncd LightZero\npython3 -u zoo\u002Fboard_games\u002Ftictactoe\u002Fconfig\u002Ftictactoe_muzero_bot_mode_config.py\n```\n\n训练一个 UniZero 智能体来玩 [Pong](https:\u002F\u002Fgymnasium.farama.org\u002Fenvironments\u002Fatari\u002Fpong\u002F)：\n\n```bash\ncd LightZero\npython3 -u zoo\u002Fatari\u002Fconfig\u002Fatari_unizero_segment_config.py\n```\n\n## 📚 文档\n\nLightZero 的文档可以在这里找到：[LightZero 文档](https:\u002F\u002Fopendilab.github.io\u002FLightZero\u002F)。其中包含了教程和 API 参考。\n\n对于希望自定义环境和算法的用户，我们提供了相关指南：\n\n- [自定义环境](https:\u002F\u002Fgithub.com\u002Fopendilab\u002FLightZero\u002Fblob\u002Fmain\u002Fdocs\u002Fsource\u002F\u002Ftutorials\u002Fenvs\u002Fcustomize_envs.md)\n- [自定义算法](https:\u002F\u002Fgithub.com\u002Fopendilab\u002FLightZero\u002Fblob\u002Fmain\u002Fdocs\u002Fsource\u002F\u002Ftutorials\u002Falgos\u002Fcustomize_algos.md)\n- [如何设置配置文件？](https:\u002F\u002Fgithub.com\u002Fopendilab\u002FLightZero\u002Fblob\u002Fmain\u002Fdocs\u002Fsource\u002F\u002Ftutorials\u002Fconfig\u002Fconfig.md)\n- [日志与监控系统](https:\u002F\u002Fgithub.com\u002Fopendilab\u002FLightZero\u002Fblob\u002Fmain\u002Fdocs\u002Fsource\u002F\u002Ftutorials\u002Flogs\u002Flogs.md)\n- [损失景观可视化](https:\u002F\u002Fgithub.com\u002Fopendilab\u002FLightZero\u002Fblob\u002Fmain\u002Flzero\u002Floss_landscape\u002FREADME.md)\n\n如果您有任何问题，请随时联系我们获取支持。\n\n## 📊 基准测试\n\n\u003Cdetails>\u003Csummary>点击展开\u003C\u002Fsummary>\n\n- 以下是 [AlphaZero](https:\u002F\u002Fgithub.com\u002Fopendilab\u002FLightZero\u002Fblob\u002Fmain\u002Flzero\u002Fpolicy\u002Falphazero.py) 和 [MuZero](https:\u002F\u002Fgithub.com\u002Fopendilab\u002FLightZero\u002Fblob\u002Fmain\u002Flzero\u002Fpolicy\u002Fmuzero.py) 在三种棋类游戏上的基准测试结果：[井字棋](https:\u002F\u002Fgithub.com\u002Fopendilab\u002FLightZero\u002Fblob\u002Fmain\u002Fzoo\u002Fboard_games\u002Ftictactoe\u002Fenvs\u002Ftictactoe_env.py)、[连珠四子棋](https:\u002F\u002Fgithub.com\u002Fopendilab\u002FLightZero\u002Fblob\u002Fmain\u002Fzoo\u002Fboard_games\u002Fconnect4\u002Fenvs\u002Fconnect4_env.py)、[五子棋](https:\u002F\u002Fgithub.com\u002Fopendilab\u002FLightZero\u002Fblob\u002Fmain\u002Fzoo\u002Fboard_games\u002Fgomoku\u002Fenvs\u002Fgomoku_env.py)。\n\u003Cp align=\"center\">\n  \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fopendilab_LightZero_readme_9ad7439e7495.png\" alt=\"tictactoe_bot-mode_main\" width=\"30%\" height=\"auto\" style=\"margin: 0 1%;\">\n  \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fopendilab_LightZero_readme_2b34ab768a3f.png\" alt=\"connect4_bot-mode_main\" width=\"30%\" height=\"auto\" style=\"margin: 0 1%;\">\n  \u003Cimg 
src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fopendilab_LightZero_readme_2b34ab768a3f.png\" alt=\"gomoku_bot-mode_main\" width=\"30%\" height=\"auto\" style=\"margin: 0 1%;\">\n\u003C\u002Fp>\n\n- 以下是 [MuZero](https:\u002F\u002Fgithub.com\u002Fopendilab\u002FLightZero\u002Fblob\u002Fmain\u002Flzero\u002Fpolicy\u002Fmuzero.py)、[带 SSL 的 MuZero](https:\u002F\u002Fgithub.com\u002Fopendilab\u002FLightZero\u002Fblob\u002Fmain\u002Flzero\u002Fpolicy\u002Fmuzero.py)、[EfficientZero](https:\u002F\u002Fgithub.com\u002Fopendilab\u002FLightZero\u002Fblob\u002Fmain\u002Flzero\u002Fpolicy\u002Fefficientzero.py) 和 [采样 EfficientZero](https:\u002F\u002Fgithub.com\u002Fopendilab\u002FLightZero\u002Fblob\u002Fmain\u002Flzero\u002Fpolicy\u002Fsampled_efficientzero.py) 在 [Atari] 游戏中的三个离散动作空间环境上的基准测试结果：[Pong](https:\u002F\u002Fgithub.com\u002Fopendilab\u002FLightZero\u002Fblob\u002Fmain\u002Fzoo\u002Fatari\u002Fenvs\u002Fatari_lightzero_env.py)、[Q*bert](https:\u002F\u002Fgithub.com\u002Fopendilab\u002FLightZero\u002Fblob\u002Fmain\u002Fzoo\u002Fatari\u002Fenvs\u002Fatari_lightzero_env.py)、[Ms. Pac-Man](https:\u002F\u002Fgithub.com\u002Fopendilab\u002FLightZero\u002Fblob\u002Fmain\u002Fzoo\u002Fatari\u002Fenvs\u002Fatari_lightzero_env.py)。\n\u003Cp align=\"center\">\n  \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fopendilab_LightZero_readme_dc3a472d62d8.png\" alt=\"pong_main\" width=\"23%\" height=\"auto\" style=\"margin: 0 1%;\">\n  \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fopendilab_LightZero_readme_d7baef6450a1.png\" alt=\"qbert_main\" width=\"23%\" height=\"auto\" style=\"margin: 0 1%;\">\n  \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fopendilab_LightZero_readme_321c5146b6e0.png\" alt=\"mspacman_main\" width=\"23%\" height=\"auto\" style=\"margin: 0 1%;\">\n  \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fopendilab_LightZero_readme_805ba560a4d3.png\" alt=\"mspacman_sez_K\" width=\"23%\" height=\"auto\" style=\"margin: 0 1%;\">\n\u003C\u002Fp>\n\n\n- 以下是 [采样 EfficientZero](https:\u002F\u002Fgithub.com\u002Fopendilab\u002FLightZero\u002Fblob\u002Fmain\u002Flzero\u002Fpolicy\u002Fsampled_efficientzero.py) 使用 ``Factored\u002FGaussian`` 策略表示在三个经典连续动作空间游戏上的基准测试结果：[摆锤-v1](https:\u002F\u002Fgithub.com\u002Fopendilab\u002FLightZero\u002Fblob\u002Fmain\u002Fzoo\u002Fclassic_control\u002Fpendulum\u002Fenvs\u002Fpendulum_lightzero_env.py)、[月球着陆器连续-v2](https:\u002F\u002Fgithub.com\u002Fopendilab\u002FLightZero\u002Fblob\u002Fmain\u002Fzoo\u002Fbox2d\u002Flunarlander\u002Fenvs\u002Flunarlander_env.py)、[双足行走者-v3](https:\u002F\u002Fgithub.com\u002Fopendilab\u002FLightZero\u002Fblob\u002Fmain\u002Fzoo\u002Fbox2d\u002Fbipedalwalker\u002Fenvs\u002Fbipedalwalker_env.py)，以及两个 MuJoCo 连续动作空间游戏：[跳鼠-v3](https:\u002F\u002Fgithub.com\u002Fopendilab\u002FLightZero\u002Fblob\u002Fmain\u002Fzoo\u002Fmujoco\u002Fenvs\u002Fmujoco_lightzero_env.py)、[Walker2d-v3](https:\u002F\u002Fgithub.com\u002Fopendilab\u002FLightZero\u002Fblob\u002Fmain\u002Fzoo\u002Fmujoco\u002Fenvs\u002Fmujoco_lightzero_env.py)。\n> “Factored Policy” 表示智能体学习一个输出分类分布的策略网络。经过手动离散化后，这五个环境的动作空间维度分别为 11、49 (7^2)、256 (4^4)、64 (4^3) 和 4096 (4^6)。另一方面，“Gaussian Policy” 指的是智能体学习一个直接输出高斯分布参数（均值 μ 和标准差 σ）的策略网络。\n\u003Cp align=\"center\">\n  \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fopendilab_LightZero_readme_0ebc3ff49c98.png\" alt=\"pendulum_main\" width=\"30%\" height=\"auto\" style=\"margin: 0 1%;\">\n  \u003Cimg 
src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fopendilab_LightZero_readme_7ba5a2a8a1cf.png\" alt=\"pendulum_sez_K\" width=\"30%\" height=\"auto\" style=\"margin: 0 1%;\">\n  \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fopendilab_LightZero_readme_4fac92a43fad.png\" alt=\"lunarlander_main\" width=\"30%\" height=\"auto\" style=\"margin: 0 1%;\">\n\u003C\u002Fp>\n\u003Cp align=\"center\">\n  \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fopendilab_LightZero_readme_0d9f85047e97.png\" alt=\"bipedalwalker_main\" width=\"30%\" height=\"auto\" style=\"margin: 0 1%;\">\n  \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fopendilab_LightZero_readme_ac378132e830.png\" alt=\"hopper_main\" width=\"31.5%\" height=\"auto\" style=\"margin: 0 1%;\">\n  \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fopendilab_LightZero_readme_35fe538e99cd.png\" alt=\"walker2d_main\" width=\"31.5%\" height=\"auto\" style=\"margin: 0 1%;\">\n\u003C\u002Fp>\n\n- 以下是 [GumbelMuZero](https:\u002F\u002Fgithub.com\u002Fopendilab\u002FLightZero\u002Fblob\u002Fmain\u002Flzero\u002Fpolicy\u002Fgumbel_muzero.py) 和 [MuZero](https:\u002F\u002Fgithub.com\u002Fopendilab\u002FLightZero\u002Fblob\u002Fmain\u002Flzero\u002Fpolicy\u002Fmuzero.py)（在不同模拟成本下）在四个环境上的基准测试结果：[PongNoFrameskip-v4](https:\u002F\u002Fgithub.com\u002Fopendilab\u002FLightZero\u002Fblob\u002Fmain\u002Fzoo\u002Fatari\u002Fenvs\u002Fatari_lightzero_env.py)、[Ms. Pac-Man NoFrameskip-v4](https:\u002F\u002Fgithub.com\u002Fopendilab\u002FLightZero\u002Fblob\u002Fmain\u002Fzoo\u002Fatari\u002Fenvs\u002Fatari_lightzero_env.py)、[五子棋](https:\u002F\u002Fgithub.com\u002Fopendilab\u002FLightZero\u002Fblob\u002Fmain\u002Fzoo\u002Fboard_games\u002Fgomoku\u002Fenvs\u002Fgomoku_env.py)、[月球着陆器连续-v2](https:\u002F\u002Fgithub.com\u002Fopendilab\u002FLightZero\u002Fblob\u002Fmain\u002Fzoo\u002Fbox2d\u002Flunarlander\u002Fenvs\u002Flunarlander_env.py)。\n\u003Cp align=\"center\">\n  \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fopendilab_LightZero_readme_e9c9f845b745.png\" alt=\"pong_gmz_ns\" width=\"23%\" height=\"auto\" style=\"margin: 0 1%;\">\n  \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fopendilab_LightZero_readme_ad43cfdec7b5.png\" alt=\"mspacman_gmz_ns\" width=\"23%\" height=\"auto\" style=\"margin: 0 1%;\">\n  \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fopendilab_LightZero_readme_b97eb57bbc46.png\" alt=\"gomoku_bot-mode_gmz_ns\" width=\"23%\" height=\"auto\" style=\"margin: 0 1%;\">\n  \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fopendilab_LightZero_readme_c41198d7136f.png\" alt=\"lunarlander_gmz_ns\" width=\"23%\" height=\"auto\" style=\"margin: 0 1%;\">\n\u003C\u002Fp>\n\n- 以下是 [StochasticMuZero](https:\u002F\u002Fgithub.com\u002Fopendilab\u002FLightZero\u002Fblob\u002Fmain\u002Flzero\u002Fpolicy\u002Fstochastic_muzero.py) 和 [MuZero](https:\u002F\u002Fgithub.com\u002Fopendilab\u002FLightZero\u002Fblob\u002Fmain\u002Flzero\u002Fpolicy\u002Fmuzero.py) 在 [2048 环境](https:\u002F\u002Fgithub.com\u002Fopendilab\u002FLightZero\u002Fblob\u002Fmain\u002Fzoo\u002Fgame_2048\u002Fenvs\u002Fgame_2048_env.py) 上，面对不同随机性水平（num_chances=2 和 5）时的基准测试结果。\n\u003Cp align=\"center\">\n  \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fopendilab_LightZero_readme_f856ab75072b.png\" alt=\"2048_stochasticmz_mz\" width=\"30%\" height=\"auto\" style=\"margin: 0 1%;\">\n  \u003Cimg 
src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fopendilab_LightZero_readme_8da1b7cdc615.png\" alt=\"mspacman_gmz_ns\" width=\"30%\" height=\"auto\" style=\"margin: 0 1%;\">\n\u003C\u002Fp>\n\n- 以下是 [带 SSL 的 MuZero](https:\u002F\u002Fgithub.com\u002Fopendilab\u002FLightZero\u002Fblob\u002Fmain\u002Flzero\u002Fpolicy\u002Fmuzero.py) 在 [MiniGrid 环境](https:\u002F\u002Fgithub.com\u002Fopendilab\u002FLightZero\u002Fblob\u002Fmain\u002Fzoo\u002Fminigrid\u002Fenvs\u002Fminigrid_lightzero_env.py) 中，采用多种 MCTS 探索机制时的基准测试结果。\n\u003Cp align=\"center\">\n  \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fopendilab_LightZero_readme_fc7f7bd8505b.png\" alt=\"keycorridors3r3_exploration\" width=\"30%\" height=\"auto\" style=\"margin: 0 1%;\">\n  \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fopendilab_LightZero_readme_a31bef7217b0.png\" alt=\"fourrooms_exploration\" width=\"30%\" height=\"auto\" style=\"margin: 0 1%;\">\n\u003C\u002Fp>\n\n\u003C\u002Fdetails>\n\n\n## 📝 Awesome-MCTS 笔记\n\n### 论文笔记\n以下是上述算法的详细论文笔记（中文）：\n\n\u003Cdetails open>\u003Csummary>点击收起\u003C\u002Fsummary>\n\n  \n- [AlphaZero](https:\u002F\u002Fgithub.com\u002Fopendilab\u002FLightZero\u002Fblob\u002Fmain\u002Fassets\u002Fpaper_notes\u002FAlphaZero.pdf)\n- [MuZero](https:\u002F\u002Fgithub.com\u002Fopendilab\u002FLightZero\u002Fblob\u002Fmain\u002Fassets\u002Fpaper_notes\u002FMuZero.pdf)\n- [EfficientZero](https:\u002F\u002Fgithub.com\u002Fopendilab\u002FLightZero\u002Fblob\u002Fmain\u002Fassets\u002Fpaper_notes\u002FEfficientZero.pdf)\n- [SampledMuZero](https:\u002F\u002Fgithub.com\u002Fopendilab\u002FLightZero\u002Fblob\u002Fmain\u002Fassets\u002Fpaper_notes\u002FSampledMuZero.pdf)\n- [GumbelMuZero](https:\u002F\u002Fgithub.com\u002Fopendilab\u002FLightZero\u002Fblob\u002Fmain\u002Fassets\u002Fpaper_notes\u002FGumbelMuZero.pdf)\n- [StochasticMuZero](https:\u002F\u002Fgithub.com\u002Fopendilab\u002FLightZero\u002Fblob\u002Fmain\u002Fassets\u002Fpaper_notes\u002FStochasticMuZero.pdf)\n- [符号表](https:\u002F\u002Fgithub.com\u002Fopendilab\u002FLightZero\u002Fblob\u002Fmain\u002Fassets\u002Fpaper_notes\u002FSymbolTable.pdf)\n\n你也可以参考相关的知乎专栏（中文）：[MCTS+RL前沿理论与应用深度解析](https:\u002F\u002Fwww.zhihu.com\u002Fcolumn\u002Fc_1764308735227662336)。\n\n### 算法概览\n\n以下是上述算法的MCTS原理示意图概览：\n\n\u003Cdetails>\u003Csummary>点击展开\u003C\u002Fsummary>\n\n- [MCTS](https:\u002F\u002Fgithub.com\u002Fopendilab\u002FLightZero\u002Fblob\u002Fmain\u002Fassets\u002Falgo_overview\u002Fmcts_overview.pdf)\n- [AlphaZero](https:\u002F\u002Fgithub.com\u002Fopendilab\u002FLightZero\u002Fblob\u002Fmain\u002Fassets\u002Falgo_overview\u002Falphazero_overview.pdf)\n- [MuZero](https:\u002F\u002Fgithub.com\u002Fopendilab\u002FLightZero\u002Fblob\u002Fmain\u002Fassets\u002Falgo_overview\u002Fmuzero_overview.png)\n- [EfficientZero](https:\u002F\u002Fgithub.com\u002Fopendilab\u002FLightZero\u002Fblob\u002Fmain\u002Fassets\u002Falgo_overview\u002Fefficientzero_overview.png)\n- [SampledMuZero](https:\u002F\u002Fgithub.com\u002Fopendilab\u002FLightZero\u002Fblob\u002Fmain\u002Fassets\u002Falgo_overview\u002Fsampled_muzero_overview.png)\n- [GumbelMuZero](https:\u002F\u002Fgithub.com\u002Fopendilab\u002FLightZero\u002Fblob\u002Fmain\u002Fassets\u002Falgo_overview\u002Fgumbel_muzero_overview.png)\n- [StochasticMuZero](https:\u002F\u002Fgithub.com\u002Fopendilab\u002FLightZero\u002Fblob\u002Fmain\u002Fassets\u002Falgo_overview\u002Fstochastic_muzero_overview.png)\n\n\u003C\u002Fdetails>\n\n## 
Awesome-MCTS论文\n\n这里收集了关于**蒙特卡洛树搜索**的研究论文。\n[本节](#awesome-msts-papers)将持续更新，以追踪MCTS的前沿进展。\n\n### 经典与基础论文\n\n\u003Cdetails>\u003Csummary>点击展开\u003C\u002Fsummary>\n\n#### LightZero实现系列\n\n- [2018年《科学》杂志 AlphaZero：通过自我对弈掌握国际象棋、将棋和围棋的通用强化学习算法](https:\u002F\u002Fwww.science.org\u002Fdoi\u002F10.1126\u002Fscience.aar6404)\n- [2019年 MuZero：通过规划与学习模型掌握雅达利游戏、围棋、国际象棋和将棋](https:\u002F\u002Farxiv.org\u002Fabs\u002F1911.08265)\n- [2021年 EfficientZero：在数据有限的情况下掌握雅达利游戏](https:\u002F\u002Farxiv.org\u002Fabs\u002F2111.00210)\n- [2021年 Sampled MuZero：在复杂动作空间中进行学习与规划](https:\u002F\u002Farxiv.org\u002Fabs\u002F2104.06303)\n- [2022年 Stochastic MuZero：利用学习模型在随机环境中进行规划](https:\u002F\u002Fopenreview.net\u002Fpdf?id=X6D9bAHhBQ1)\n- [2022年 Gumbel MuZero：通过Gumbel规划改进策略](https:\u002F\u002Fopenreview.net\u002Fpdf?id=bERaNdoegnO&)\n- [2024年 UniZero：基于可扩展潜在世界模型的通用高效规划](https:\u002F\u002Farxiv.org\u002Fabs\u002F2406.10667)\n\n#### AlphaGo系列\n- [2015年《自然》杂志 AlphaGo：利用深度神经网络和树搜索掌握围棋](https:\u002F\u002Fwww.nature.com\u002Farticles\u002Fnature16961)\n- [2017年《自然》杂志 AlphaGo Zero：无需人类知识即可掌握围棋](https:\u002F\u002Fwww.nature.com\u002Farticles\u002Fnature24270)\n- [2019年 ELF OpenGo：对AlphaZero的分析与开源重实现](https:\u002F\u002Farxiv.org\u002Fabs\u002F1902.04522) \n  - [代码](https:\u002F\u002Fgithub.com\u002Fpytorch\u002FELF)\n- [2023年 Student of Games：一种适用于完全信息和不完全信息游戏的统一学习算法](https:\u002F\u002Fwww.science.org\u002Fdoi\u002F10.1126\u002Fsciadv.adg3256)\n\n#### MuZero系列\n- [2022年 利用学习模型进行在线与离线强化学习的规划](https:\u002F\u002Farxiv.org\u002Fabs\u002F2104.06294)\n- [2021年 用于规划的向量量化模型](https:\u002F\u002Farxiv.org\u002Fabs\u002F2106.04615)\n- [2021年 Muesli：结合策略优化的改进方法](https:\u002F\u002Farxiv.org\u002Fabs\u002F2104.06159)\n\n#### MCTS分析\n- [2020年 蒙特卡洛树搜索作为正则化的策略优化](https:\u002F\u002Farxiv.org\u002Fabs\u002F2007.12509)\n- [2021年 自洽模型与价值函数](https:\u002F\u002Farxiv.org\u002Fabs\u002F2110.12840)\n- [2022年 对抗性策略击败职业级围棋AI](https:\u002F\u002Farxiv.org\u002Fabs\u002F2211.00241)\n- [2022年《PNAS》AlphaZero中的国际象棋知识习得](https:\u002F\u002Farxiv.org\u002Fabs\u002F2111.09259)\n\n#### MCTS应用\n- [2023年 符号物理学习者：通过蒙特卡洛树搜索发现控制方程](https:\u002F\u002Fopenreview.net\u002Fpdf?id=ZTK3SefE8_Z)\n- [2022年《自然》杂志 利用强化学习发现更快的矩阵乘法算法](https:\u002F\u002Fwww.nature.com\u002Farticles\u002Fs41586-022-05172-4) \n  - [代码](https:\u002F\u002Fgithub.com\u002Fdeepmind\u002Falphatensor)\n- [2022年 MuZero通过自我对抗实现VP9视频压缩中的速率控制](https:\u002F\u002Farxiv.org\u002Fabs\u002F2202.06626)\n- [2021年 DouZero：利用自我对弈的深度强化学习掌握斗地主](https:\u002F\u002Farxiv.org\u002Fabs\u002F2106.06135)\n- [2019年 将规划与深度强化学习结合用于自动驾驶中的战术决策](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1905.02680.pdf)\n\n\u003C\u002Fdetails>\n\n### 最新研究与新兴应用\n\n\u003Cdetails>\u003Csummary>点击展开\u003C\u002Fsummary>\n\n#### ICML\n- [STAIR: 通过内省推理提升安全对齐](https:\u002F\u002Fopenreview.net\u002Fforum?id=aHzPGyUhZa) 2025\n  - 张一驰、张思远、黄耀、夏泽宇、方正伟、杨晓、段然杰、闫东、董银鹏、朱俊\n  - 关键词：大语言模型、安全对齐、推理\n  - 实验环境：StrongReject、XsTest、WildChat、Do-Not-Answer、GSM8k、AlpacaEval 2.0、BIG-bench HHH、SimpleQA、InfoFlow、AdvGLUE\n- [rStar-Math: 小型大语言模型通过自我进化式深度思考掌握数学推理](https:\u002F\u002Fopenreview.net\u002Fforum?id=5zwF1GizFa) 2025\n  - 关鑫宇、李琳娜·张、刘一飞、尚宁、孙佑然、朱毅、杨帆、杨茂\n  - 关键词：大语言模型、推理、自我进化\n  - 实验环境：GSM8K、MATH、AIME 2024、AMC 2023、奥林匹克基准测试、大学数学、高考（2023年中国高考）\n- [基于最优传输的不确定性传播蒙特卡洛树搜索](https:\u002F\u002Fopenreview.net\u002Fforum?id=DUGFTH9W8B) 2025\n  - 段端光、帕斯卡尔·施滕格、卢卡斯·施奈德、乔尼·帕亚里宁、卡洛·德埃拉莫、奥达尔里克-安布里姆·迈亚尔\n  - 关键词：蒙特卡洛树搜索、不确定性下的规划\n  - 实验环境：FrozenLake、NChain、RiverSwim、SixArms、Taxi、Rocksample、Pocman、Tag、LaserTag\n- 
[利用树搜索对推理上下文进行重排序使大型视觉-语言模型更强大](https:\u002F\u002Fopenreview.net\u002Fforum?id=DJcEoC9JpQ) 2025\n  - 杨琪、张成浩、范鲁斌、丁坤、叶洁平、项世明\n  - 关键词：大型视觉语言模型、多模态检索增强生成、上下文学习、蒙特卡洛树搜索\n  - 实验环境：ScienceQA、MMMU、MathV、VizWiz、VSR-MC\n- [通过语言模型实现内外部规划来精通棋类游戏](https:\u002F\u002Fopenreview.net\u002Fforum?id=KKwBo3u3IW) 2025\n  - 约翰·舒尔茨、雅库布·阿达梅克、马泰伊·尤苏普、马克·兰科特、迈克尔·凯瑟斯、萨拉·佩林、丹尼尔·亨尼斯、杰里米·沙尔、坎纳达·A·刘易斯、阿尼安·鲁奥斯、汤姆·扎哈维、彼得·韦利奇科维奇、劳雷尔·普林斯、萨廷德·辛格、埃里克·马尔米、内纳德·托马塞夫\n  - 关键词：搜索、规划、语言模型、游戏、国际象棋\n  - 实验环境：国际象棋、Chess960、四子连珠、六角棋\n- [语言模型作为隐式树搜索](https:\u002F\u002Fopenreview.net\u002Fforum?id=bEqMmGu6qg) 2025\n  - 陈子良、赖兆荣、杨宇峰、方亮达、杨振福、林亮\n  - 关键词：无强化学习偏好优化；基于大语言模型的蒙特卡洛树搜索；大语言模型对齐；大语言模型推理\n  - 实验环境：Anthropic HH、GSM8K、MATH、Game24\n- [无需过程标签即可获得过程奖励](https:\u002F\u002Fopenreview.net\u002Fforum?id=8ThnPFhGm8) 2025\n  - 袁立凡、李文迪、陈华宇、崔甘渠、丁宁、张凯燕、周博文、刘志远、彭浩\n  - 关键词：过程奖励模型\n  - 实验环境：MATH\n- [基于大语言模型的自动启发式设计中的全面探索蒙特卡洛树搜索](https:\u002F\u002Fopenreview.net\u002Fforum?id=Do1OdZzYHr) 2025\n  - 郑智、谢卓亮、王振坤、布莱恩·胡伊\n  - 关键词：自动启发式设计、组合优化、大语言模型、神经组合优化、蒙特卡洛树搜索\n  - 实验环境：TSP、背包问题、CVRP、多背包问题、装箱问题、容许集问题、贝叶斯优化\n- [提升虚拟智能体学习与推理能力：具有基准测试的分步、多维度且通用型奖励模型](https:\u002F\u002Fopenreview.net\u002Fforum?id=OKWlVPHeW1) 2025\n  - 苗炳辰、吴洋、高明赫、于启凡、卜文东、张文桥、李云飞、唐思亮、蔡宗盛、李俊诚\n  - 关键词：虚拟智能体；数字智能体；奖励模型\n  - 实验环境：WebArena、VisualWebArena、Android World、OSWorld\n- [通过蒙特卡洛规划实现在线稳健强化学习](https:\u002F\u002Fopenreview.net\u002Fforum?id=m25ma7O7Ec) 2025\n  - 段端光、基尚·帕纳甘蒂、布拉欣·德里斯、亚当·维尔曼\n  - 关键词：蒙特卡洛树搜索、分布鲁棒强化学习、在线强化学习\n  - 实验环境：赌徒问题、冰湖、美式期权定价\n- [信任区域扭曲策略改进](https:\u002F\u002Fopenreview.net\u002Fgroup?id=ICML.cc\u002F2025\u002FConference#tab-accept-oral) 2025\n  - 乔瑞·A·德弗里斯、何金科、亚尼夫·奥伦、马蒂斯·T·J·斯潘\n  - 关键词：强化学习；序列蒙特卡洛；蒙特卡洛树搜索；规划；基于模型；策略改进\n  - 实验环境：Brax、Jumanji\n- [KBQA-o1: 基于蒙特卡洛树搜索的代理式知识库问答](https:\u002F\u002Fopenreview.net\u002Fforum?id=QuecSemZIy) 2025\n  - 罗浩然、E海红、郭义凯、林其卡、吴小宝、穆新宇、刘文豪、宋美娜、朱一凡、刘安团\n  - 关键词：知识库问答、大语言模型、LLM代理、蒙特卡洛树搜索\n  - 实验环境：GrailQA、WebQSP、GraphQ\n- [蒙特卡洛树扩散用于系统2规划](https:\u002F\u002Fproceedings.mlr.press\u002Fv267\u002Fyoon25a.html) 2025\n  - 尹在植、赵贤书、白斗镇、约书亚·本吉奥、安成镇\n  - 关键词：扩散模型、MCTS、系统2规划、轨迹优化\n  - 实验环境：2D迷宫、厨房、积木堆叠\n  - [代码](https:\u002F\u002Fgithub.com\u002Fahn-ml\u002Fmctd)\n- [基于最优传输的不确定性传播蒙特卡洛树搜索](https:\u002F\u002Fopenreview.net\u002Fforum?id=DUGFTH9W8B) 2025\n  - 段端光、帕斯卡尔·施滕格、卢卡斯·施奈德、乔尼·帕亚里宁、卡洛·德埃拉莫、奥达尔里克-安布里姆·迈亚尔\n  - 关键词：最优传输、Wasserstein距离、不确定性传播、MCTS\n  - 实验环境：FrozenLake、NChain、RiverSwim、SixArms、Taxi、Rocksample\n- [通过蒙特卡洛规划实现在线稳健强化学习](https:\u002F\u002Fopenreview.net\u002Fforum?id=m25ma7O7Ec) 2025\n  - 段端光、基尚·帕纳甘蒂、布拉欣·德里斯、亚当·维尔曼\n  - 关键词：稳健RL、MCTS、分布鲁棒优化、模拟到现实\n  - 实验环境：赌徒问题、冰湖、美式期权定价\n  - [代码](https:\u002F\u002Fgithub.com\u002Fbrahimdriss\u002FRobustMCTS)\n- [随机连续蒙特卡洛树搜索中的幂平均估计](https:\u002F\u002Ficml.cc\u002Fvirtual\u002F2025\u002Fposter\u002F45596) 2025\n  - 段端光\n  - 关键词：连续MCTS、多项式探索、随机环境、幂平均\n  - 实验环境：连续Cartpole、倒立摆\n- [语言代理树搜索统一了语言模型中的推理、行动和规划](https:\u002F\u002Ficml.cc\u002Fvirtual\u002F2024\u002Fposter\u002F33107) 2024  \n  - 周安迪、严凯、米哈尔·什拉彭托赫-罗斯曼、王浩瀚、王宇雄  \n  - 关键词：语言模型、决策、蒙特卡洛树搜索、推理、行动、规划  \n  - 实验环境：HumanEval、WebShop、交互式问答、编程、数学\n- [通过层次化对手建模和规划实现在混合动机环境中的高效适应](https:\u002F\u002Fproceedings.mlr.press\u002Fv235\u002Fhuang24p.html) 2024  \n  - 黄一哲、刘安吉、孔凡奇、杨耀东、朱松春、冯雪  \n  - 关键词：多智能体强化学习、层次化对手建模、蒙特卡洛树搜索、少量样本适应、混合动机环境  \n  - 实验环境：多智能体决策场景、自我对弈、混合动机互动\n- [加速贝叶斯优化中的前瞻：只需多级蒙特卡洛即可](https:\u002F\u002Fopenreview.net\u002Fforum?id=46vXhZn7lN) 2024  \n  - 杨尚达、维塔利·赞金、马克西米利安·巴兰达特、斯特凡·舍雷尔、凯文·托马斯·卡尔伯格、尼尔·沃尔顿、科迪·J·H·劳  \n  - 关键词：贝叶斯优化、多级蒙特卡洛、嵌套期望、采集函数  \n  - 实验环境：基准示例\n- [基于树状蒙特卡洛的加速推测采样](https:\u002F\u002Fopenreview.net\u002Fforum?id=stMhi1Sn2G) 2024  \n  - 胡正勉、黄恒  
\n  - 关键词：推测采样、大语言模型、树状蒙特卡洛、推理加速  \n  - 实验环境：未指定\n- [通过状态占用正则化在蒙特卡洛树搜索中实现可证明高效的长 horizon 探索](https:\u002F\u002Farxiv.org\u002Fabs\u002F2407.05511) 2024\n  - 利亚姆·施拉姆、阿卜杜斯拉姆·布拉里亚斯\n  - 关键词：探索、状态占用、长 horizon 规划、体积-MCTS\n  - 实验环境：机器人导航、2D迷宫\n  - [代码](https:\u002F\u002Fgithub.com\u002Fschrammlb2\u002FVolume-MCTS-ICML)\n- [通过蒙特卡洛树搜索实现可扩展的安全策略改进](https:\u002F\u002Fopenreview.net\u002Fpdf?id=tevbBSzSfK) 2023\n  - 阿尔贝托·卡斯特利尼、费德里科·比安奇、爱德华多·佐尔齐、蒂亚戈·D·西芒、亚历山德罗·法里内利、马蒂斯·T·J·斯潘\n  - 关键词：使用基于MCTS的策略在线进行安全策略改进、带有基准引导的安全策略改进\n  - 实验环境：Gridworld和SysAdmin\n- [通过路径一致性实现AlphaZero的高效学习](https:\u002F\u002Fproceedings.mlr.press\u002Fv162\u002Fzhao22h\u002Fzhao22h.pdf) 2022\n  - 赵登伟、涂士奎、徐磊\n  - 关键词：有限数量的自我对弈、路径一致性（PC）最优性\n  - 实验环境：围棋、奥赛罗、五子棋\n- [可视化MuZero模型](https:\u002F\u002Farxiv.org\u002Fabs\u002F2102.12924) 2021\n  - 乔瑞·A·德弗里斯、肯·S·沃斯奎尔、托马斯·M·莫兰德、阿斯克·普拉特\n  - 关键词：可视化价值等效动力学模型、动作轨迹发散、两种正则化技术\n  - 实验环境：CartPole和MountainCar。\n- [蒙特卡洛树搜索中的凸正则化](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2007.00391.pdf) 2021\n  - 段端光、卡洛·德埃拉莫、扬·彼得斯、乔尼·帕亚里宁\n  - 关键词：熵正则化备份算子、遗憾分析、Tsallis熵\n  - 实验环境：合成树、Atari\n- [信息粒子滤波树：一种适用于连续域上基于信念奖励的POMDP的在线算法](http:\u002F\u002Fproceedings.mlr.press\u002Fv119\u002Ffischer20a\u002Ffischer20a.pdf) 2020\n  - 约翰内斯·费舍尔、厄默·萨欣·塔斯\n  - 关键词：连续POMDP、粒子滤波树、基于信息的奖励塑造、信息收集。\n  - 实验环境：POMDPs.jl框架\n  - [代码](https:\u002F\u002Fgithub.com\u002Fjohannes-fischer\u002Ficml2020_ipft)\n- [Retro*: 使用神经引导的A*搜索学习逆向合成规划](http:\u002F\u002Fproceedings.mlr.press\u002Fv119\u002Fchen20k\u002Fchen20k.pdf) 2020\n  - 陈炳宏、李承涛、戴汉军、宋乐\n  - 关键词：化学逆向合成规划、基于神经网络的类似A*的算法、ANDOR树\n  - 实验环境：USPTO数据集\n  - [代码](https:\u002F\u002Fgithub.com\u002Fbinghong-ml\u002Fretro_star)\n#### ICLR\n- [OptionZero: 带有学习选项的规划](https:\u002F\u002Fopenreview.net\u002Fforum?id=3IFRygQKGL) 2025  \n  - 黄柏伟、彭沛纯、洪贵、吴季荣  \n  - 关键词：选项、半马尔可夫决策过程、MuZero、MCTS、规划、强化学习  \n  - 实验环境：26款Atari游戏\n- [使用大语言模型为文字冒险游戏进行蒙特卡洛规划](https:\u002F\u002Fopenreview.net\u002Fforum?id=r1KcapkzCt) 2025  \n  - 史子京、方萌、陈玲  \n  - 关键词：大语言模型、蒙特卡洛树搜索、文字冒险游戏  \n  - 实验环境：Jericho基准测试\n- [认识论蒙特卡洛树搜索](https:\u002F\u002Fopenreview.net\u002Fforum?id=Tb8RiXOc3N) 2025  \n  - 亚尼夫·奥伦、维利亚姆·瓦多奇、马蒂斯·T·J·斯潘、温德林·博默  \n  - 关键词：基于模型、认识论不确定性、探索、规划、alphazero、muzero  \n  - 实验环境：SUBLEQ（汇编语言）、深海\n- [利用蒙特卡洛树搜索和事后反馈增强软件智能体](https:\u002F\u002Fopenreview.net\u002Fforum?id=G7sIFXugTX) 2025  \n  - 安东尼斯·安东尼阿德斯、阿尔伯特·厄尔瓦尔、张克勋、谢宇熙、阿尼鲁德·戈亚尔、威廉·杨·王  \n  - 关键词：智能体、LLM、SWE智能体、SWE基准测试、搜索、规划、推理、自我改进、开放性  \n  - 实验环境：SWE基准测试\n- [认识论蒙特卡洛树搜索](https:\u002F\u002Fopenreview.net\u002Fforum?id=Tb8RiXOc3N) 2025\n  - 温德林·博默、沈郑、段浩然、毛成志、罗萨里奥·斯卡利塞\n  - 关键词：MCTS、认识论不确定性、探索、稀疏奖励、基于模型的强化学习\n  - 实验环境：深海、SUBLEQ（汇编语言）\n- [DeepSeek-Prover-V1.5: 利用证明助手反馈进行强化学习和蒙特卡洛树搜索](https:\u002F\u002Fopenreview.net\u002Fforum?id=I4YAIwrsXa) 2025\n  - DeepSeek Prover团队\n  - 关键词：自动化定理证明、LLM、MCTS、来自证明助手反馈的强化学习（RLPAF）、RMaxTS\n  - 实验环境：Lean 4、miniF2F、ProofNet\n  - [代码](https:\u002F\u002Fgithub.com\u002Fdeepseek-ai\u002FDeepSeek-Prover-V1.5)\n- [面向离线基于模型的强化学习的贝叶斯自适应蒙特卡洛树搜索](https:\u002F\u002Fopenreview.net\u002Fforum?id=RGjqr1jBJy) 2025\n  - 卢卡斯·牛·詹森等人\n  - 关键词：离线RL、基于模型的RL、贝叶斯自适应MDP、不确定性传播\n  - 实验环境：D4RL\n- [决策时间规划的更新等价框架](https:\u002F\u002Fopenreview.net\u002Fforum?id=JXGph215fL) 2024\n  - 塞缪尔·索科塔、加布里埃莱·法里纳、大卫·J·吴、胡恒源、凯文·A·王、J·齐科·科尔特、诺姆·布朗\n  - 关键词：不完全信息游戏、搜索、决策时间规划、更新等价\n  - 实验环境：Hanabi、3x3突发黑暗六角棋和幽灵井字棋\n- [通过规划实现高效的多智能体强化学习](https:\u002F\u002Fopenreview.net\u002Fforum?id=CpnKq3UJwp) 2024\n  - 刘启涵、叶嘉宁、马晓腾、杨俊、梁斌、张崇杰\n  - 关键词：多智能体强化学习、规划、多智能体MCTS\n  - 实验环境：SMAC、LunarLander、MuJoCo和Google Research Football\n- [PromptAgent: 
带有大语言模型的战略规划实现专家级提示优化](https:\u002F\u002Fopenreview.net\u002Fforum?id=22pyNMuIoa) 2024\n  - 杨竹天等人\n  - 关键词：提示优化、战略规划、MCTS、LLM代理\n  - 实验环境：BIG-Bench Hard（BBH）、MMLU、HellaSwag\n  - [代码](https:\u002F\u002Fgithub.com\u002Fzhutianyang\u002FPromptAgent)\n- [通过观看纯视频以有限数据成为熟练玩家](https:\u002F\u002Fopenreview.net\u002Fpdf?id=Sy-o2N0hF4f) 2023\n  - 叶伟睿、张云生、皮特·阿贝尔、高阳\n  - 关键词：从无动作视频中预训练、基于向量量化的目标前向-反向循环一致性（FICC）、预训练阶段、微调阶段\n  - 实验环境：Atari\n- [基于策略的自我竞争解决规划问题](https:\u002F\u002Farxiv.org\u002Fabs\u002F2306.04403) 2023\n  - 乔纳森·皮尔奈、奎林·格特尔、雅各布·布尔格、多米尼克·格哈德·格林姆\n  - 关键词：自我竞争、通过针对过去可能策略的规划找到强轨迹\n  - 实验环境：旅行商问题和作业车间调度问题\n- [通过探险者-导航者框架解释时序图模型](https:\u002F\u002Fopenreview.net\u002Fpdf?id=BR_ZhvcYbGJ) 2023\n  - 夏文文、赖敏才、单彩华、张瑶、戴新楠、李翔、李东升\n  - 关键词：时序GNN解释器、利用MCTS寻找事件子集的探险者、学习事件之间相关性的导航者，从而减少搜索空间\n  - 实验环境：维基百科和Reddit、合成数据集\n- [SpeedyZero: 以有限数据和时间掌握Atari](https:\u002F\u002Fopenreview.net\u002Fpdf?id=Mg5CLXZgvLJ) 2023\n  - 梅一轩、高佳轩、叶伟睿、刘绍怀、高阳、吴毅\n  - 关键词：分布式RL系统、优先刷新、剪切LARS\n  - 实验环境：Atari\n- [使用学习模型进行高效的离线策略优化](https:\u002F\u002Fopenreview.net\u002Fpdf?id=Yt-yM-JbYFO) 2023\n  - 刘子晨、李思怡、李伟孙、颜水成、徐仲文\n  - 关键词：正则化一步基于模型的算法用于离线RL\n  - 实验环境：Atari、BSuite\n  - [代码](https:\u002F\u002Fgithub.com\u002Fsail-sg\u002Frosmo\u002Ftree\u002Fmain)\n- [利用适应性树搜索实现任意翻译目标](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2202.11444.pdf) 2022\n  - 王凌、沃伊切赫·斯托科维茨、多梅尼克·多纳托、克里斯·戴尔、余磊、洛朗·萨特朗、奥斯汀·马修斯\n  - 关键词：适应性树搜索、翻译模型、自回归模型\n  - 实验环境：WMT2020的中文–英语和普什图语–英语任务、WMT2014的德语–英语任务\n- [组合优化树搜索中的深度学习有何问题](https:\u002F\u002Farxiv.org\u002Fabs\u002F2201.10494) 2022\n  - 马克西米利安·博特尔、奥托·基西格、马丁·塔拉兹、萨雷尔·科恩、卡伦·赛德尔、托比亚斯·弗里德里希\n  - 关键词：组合优化、用于NP-hard最大独立集问题的开源基准测试套件、对流行的引导式树搜索算法的深入分析、将树搜索实现与其他求解器进行比较\n  - 实验环境：NP-hard的最大独立集\n  - [代码](https:\u002F\u002Fgithub.com\u002Fmaxiboether\u002Fmis-benchmark-framework)\n- [带有语言行动价值估计的蒙特卡洛规划与学习](https:\u002F\u002Fopenreview.net\u002Fpdf?id=7_G8JySGecm) 2021\n  - 张英洙、徐世钦、李钟民、金基雄\n  - 关键词：蒙特卡洛树搜索结合语言驱动的探索、本地乐观的语言价值估计\n  - 实验环境：互动小说（IF）游戏\n- [应用于分子设计的实际大规模并行蒙特卡洛树搜索](https:\u002F\u002Farxiv.org\u002Fabs\u002F2006.10504) 2021\n  - 杨秀峰、塔努杰·克·阿萨瓦特、吉崎和纪\n  - 关键词：大规模并行蒙特卡洛树搜索、分子设计、哈希驱动的并行搜索\n  - 实验环境：辛醇-水分配系数（logP）受合成可及性（SA）和高额环罚分的影响\n- [观察未被观测的事物：一种简单的方法来并行化蒙特卡洛树搜索](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1810.11755.pdf) 2020\n  - 刘安吉、陈建树、俞明泽、翟宇、周雪雯、刘继\n  - 关键词：并行蒙特卡洛树搜索、高效地将树分割成子树、比较每个处理器的观测比例\n  - 实验环境：JOY-CITY游戏的速度提升和性能对比、Atari游戏的平均回合回报\n  - [代码](https:\u002F\u002Fgithub.com\u002Fliuanji\u002FWU-UCT)\n- [通过神经探索-开发树学习高维规划](https:\u002F\u002Fopenreview.net\u002Fpdf?id=rJgJDAVKvB) 2020\n  - 陈炳宏、戴博、林秦杰、叶国、刘汉、宋乐\n  - 关键词：元路径规划算法、利用一种新颖的神经架构，可以从问题结构中学习有希望的搜索方向\n  - 实验环境：一个具有2自由度点机器人、一个3自由度棍形机器人和一个5自由度蛇形机器人的2D工作空间\n#### NeurIPS\n- [面向目标的信息获取的反馈感知MCTS](https:\u002F\u002Fopenreview.net\u002Fpdf?id=ustF8MMZDJ) 2025\n  - 哈曼普里特·乔普拉、奇拉格·沙阿\n  - 关键词：对话式AI、目标导向的信息获取、MCTS、LLM\n  - 实验环境：20个问题、GuessWhat?、MutualFriends\n- [MCTS-Transfer: 基于蒙特卡洛树搜索的空间转移用于黑箱优化](https:\u002F\u002Fopenreview.net\u002Fforum?id=T5UfIfmDbq) 2024\n  - 王淑宽、薛科、宋乐、黄晓彬、钱超\n  - 关键词：黑箱优化、迁移学习、MCTS、搜索空间转移\n  - 实验环境：合成函数（Ackley等）、Design-Bench、超参数优化\n  - [代码](https:\u002F\u002Fgithub.com\u002Flamda-bbo\u002Fmcts-transfer)\n- [推测式蒙特卡洛树搜索](https:\u002F\u002Fproceedings.neurips.cc\u002Fpaper_files\u002Fpaper\u002F2024\u002Ffile\u002Fa19940b01b77b6acd41ff8b32b334e7c-Paper-Conference.pdf) 2024\n  - 朴勇宇、大卫·吴、凯林·佩尔林、吉米·魏、托马斯·安东尼、朱利安·施里特维瑟、安俊焕\n  - 关键词：效率、推测执行、并行性、AlphaZero\n  - 实验环境：围棋（9x9、19x19）\n- [利用大语言模型引导的蒙特卡洛树搜索生成代码世界模型](https:\u002F\u002Fproceedings.neurips.cc\u002Fpaper_files\u002Fpaper\u002F2024\u002Ffile\u002F6f479ea488e0908ac8b1b37b27fd134c-Paper-Conference.pdf) 2024\n  - 
尼古拉·戴内塞、马泰奥·梅勒、明图·阿拉库亚拉、佩卡·马尔蒂宁\n  - 关键词：代码生成、世界模型、MCTS、基于模型的规划\n  - 实验环境：CWMB（代码世界模型基准测试）、Crafter\n- [ReST-MCTS*: LLM通过过程奖励引导的树搜索进行自我训练](https:\u002F\u002Fopenreview.net\u002Fforum?id=8rcFOqEud5) 2024\n  - 张丹、周思宁、胡子牛、岳一松、董宇晓、唐杰\n  - 关键词：LLM自我训练、过程奖励、推理、CoT\n  - 实验环境：GSM8K、MATH\n  - [代码](https:\u002F\u002Fgithub.com\u002FTHUDM\u002FReST-MCTS)\n- [LightZero: 用于一般顺序决策场景中蒙特卡洛树搜索的统一基准](https:\u002F\u002Fopenreview.net\u002Fpdf?id=oIUXpBnyjv) 2023\n  - 牛雅哲、浦元、杨振杰、李雪艳、周彤、任继远、胡帅、李鸿胜、刘宇\n  - 关键词：首个用于在一般顺序决策场景中部署MCTS\u002FMuZero的统一基准\n  - 实验环境：ClassicControl、Box2D、Atari、MuJoCo、GoBigger、MiniGrid、井字棋、四子连珠、五子棋、2048等\n- [大语言模型作为常识知识用于大规模任务规划](https:\u002F\u002Fopenreview.net\u002Fpdf?id=Wjp1AYB8lH) 2023\n  - 赵子睿、李伟孙、大卫·许\n  - 关键词：世界模型（LLM）和由LLM诱导的策略可以结合在MCTS中，以扩大任务规划规模\n  - 实验环境：乘法运算、旅行规划、物品重新排列\n- [带有玻尔兹曼探索的蒙特卡洛树搜索](https:\u002F\u002Fopenreview.net\u002Fpdf?id=NG4DaApavi) 2023\n  - 迈克尔·画家、穆罕默德·拜乌米、尼克·霍斯、布鲁诺·拉塞尔达\n  - 关键词：带有MCTS的玻尔兹曼探索、最大化熵目标的最佳行动不一定对应于原始目标的最佳行动、两种改进算法\n  - 实验环境：冰湖环境、航海问题、围棋\n- [广义加权路径一致性以掌握Atari游戏](https:\u002F\u002Fopenreview.net\u002Fpdf?id=vHRLS8HhK1) 2023\n  - 赵登伟、涂士奎、徐磊\n  - 关键词：广义加权路径一致性、一种加权机制\n  - 实验环境：Atari\n- [通过概率树状态抽象加速蒙特卡洛树搜索](https:\u002F\u002Fopenreview.net\u002Fpdf?id=0zeLTZAqaJ) 2023\n  - 傅阳青、孙明、聂步清、高悦\n  - 关键词：概率树状态抽象、传递性和聚合误差界限\n  - 实验环境：Atari、CartPole、LunarLander、五子棋\n- [明智地利用思考时间：利用虚拟扩展加速MCTS](https:\u002F\u002Fopenreview.net\u002Fpdf?id=B_LdLljS842) 2022\n  - 叶伟睿、皮特·阿贝尔、高阳\n  - 关键词：计算与性能之间的权衡、虚拟扩展、灵活地分配思考时间\n  - 实验环境：Atari、9x9围棋\n- [为样本高效的模仿学习进行规划](https:\u002F\u002Fopenreview.net\u002Fforum?id=BkN5UoAqF7) 2022\n  - 尹兆恒、叶伟睿、陈启峰、高阳\n  - 关键词：行为克隆、对抗性模仿学习（AIL）、基于MCTS的RL\n  - 实验环境：DeepMind Control Suite\n  - [代码](https:\u002F\u002Fgithub.com\u002Fzhaohengyin\u002FEfficientImitate)\n- [超越任务表现的评估：分析Hex中AlphaZero的概念](https:\u002F\u002Fopenreview.net\u002Fpdf?id=dwKwB2Cd-Km) 2022 \n  - 查尔斯·洛弗林、杰西卡·佐萨·福德、乔治·科尼达里斯、艾莉·帕夫利克、迈克尔·L·利特曼\n  - 关键词：AlphaZero的内部表征、模型探测和行为测试、这些概念如何在网络中被捕获\n  - 实验环境：Hex\n- [类似AlphaZero的智能体是否能抵御对抗性扰动？](https:\u002F\u002Fopenreview.net\u002Fpdf?id=yZ_JlZaOCzv) 2022\n  - 兰力成、张欢、吴季荣、蔡孟瑜、吴一臣、4位侯居正\n  - 关键词：对抗性状态、首次针对围棋AI的攻击\n  - 实验环境：围棋\n- [用于黑箱优化的蒙特卡洛树下降](https:\u002F\u002Fopenreview.net\u002Fpdf?id=FzdmrTUyZ4g) 2022\n  - 翟耀光、高思存\n  - 关键词：黑箱优化、如何进一步整合基于样本的下降以加快优化速度\n  - 实验环境：非线性优化的合成函数、MuJoCo运动环境中的强化学习问题以及神经架构搜索（NAS）中的优化问题\n- [基于蒙特卡洛树搜索的高维贝叶斯优化变量选择](https:\u002F\u002Fopenreview.net\u002Fpdf?id=SUzPos_pUC) 2022\n  - 宋乐∗、薛科∗、黄晓彬、钱超\n  - 关键词：通过MCTS确定低维子空间，在该子空间中使用任何贝叶斯优化算法进行优化\n  - 实验环境：NAS-bench问题和MuJoCo运动\n- [带有迭代细化状态抽象的蒙特卡洛树搜索](https:\u002F\u002Fproceedings.neurips.cc\u002Fpaper\u002F2021\u002Ffile\u002F9b0ead00a217ea2c12e06a72eec4923f-Paper.pdf) 2021\n  - 塞缪尔·索科塔、迦勒·霍、扎欣·艾哈迈德、J·齐科·科尔特\n  - 关键词：随机环境、渐进式扩张、抽象细化\n  - 实验环境：二十一点、陷阱、五乘五围棋\n- [侦察盲棋中的深度综合蒙特卡洛规划](https:\u002F\u002Fproceedings.neurips.cc\u002Fpaper\u002F2021\u002Ffile\u002F215a71a12769b056c3c32e7299f1c5ed-Paper.pdf) 2021\n  - 格雷戈里·克拉克\n  - 关键词：不完全信息、使用无权重粒子滤波器的状态信念、一种新型的随机信息状态抽象\n  - 实验环境：侦察盲棋\n- [POLY-HOOT: 在连续空间MDP中进行蒙特卡洛规划，并附带非渐近分析](https:\u002F\u002Fproceedings.neurips.cc\u002Fpaper\u002F2020\u002Ffile\u002F30de24287a6d8f07b37c716ad51623a7-Paper.pdf) 2020\n  - 茂伟超、张凯庆、谢巧敏、塔米尔·巴萨尔\n  - 关键词：连续状态-动作空间、分层乐观优化\n  - 实验环境：CartPole、倒立摆、摇摆起、月球着陆器\n- [利用蒙特卡洛树搜索学习黑箱优化的搜索空间划分](https:\u002F\u002Fproceedings.neurips.cc\u002Fpaper\u002F2020\u002Ffile\u002Fe2ce14e81dba66dbff9cbc35ecfdb704-Paper.pdf) 2020\n  - 王琳楠、罗德里戈·丰塞卡、田元东\n  - 关键词：通过少量样本学习搜索空间的划分、非线性决策边界以及学习局部模型以挑选出优秀的候选者\n  - 实验环境：MuJoCo运动任务、小型基准测试\n- [混搭：一种乐观的树搜索方法，用于从混合分布中学习模型](https:\u002F\u002Farxiv.org\u002Fabs\u002F1907.10154) 2020\n  - 
马修·法夫、拉贾特·森、卡尔提凯扬·桑穆甘、康斯坦丁·卡拉马尼斯、桑杰·沙科泰\n  - 关键词：协变量偏移问题、Mix&Match将随机梯度下降（SGD）与乐观树搜索和模型再利用相结合（用来自不同混合分布的样本逐步完善部分训练好的模型）\n  - [代码](https:\u002F\u002Fgithub.com\u002Fmatthewfaw\u002Fmixnmatch)\n#### 其他会议或期刊\n- [学会停止：动态模拟蒙特卡洛树搜索](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2012.07910.pdf) AAAI 2021。\n- [关于蒙特卡洛树搜索和强化学习](https:\u002F\u002Fwww.jair.org\u002Findex.php\u002Fjair\u002Farticle\u002Fdownload\u002F11099\u002F26289\u002F20632) 《人工智能研究杂志》2017年。\n- [通过学习蒙特卡洛树搜索的动作实现样本高效的神经架构搜索](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1906.06832) IEEE模式分析与机器智能事务 2022年。\n\n## 💬 反馈与贡献\n\n- 在 Github 上 [提交问题](https:\u002F\u002Fgithub.com\u002Fopendilab\u002FLightZero\u002Fissues\u002Fnew\u002Fchoose)\n- 打开或参与我们的 [讨论论坛](https:\u002F\u002Fgithub.com\u002Fopendilab\u002FLightZero\u002Fdiscussions)\n- 在 LightZero 的 [Discord 服务器](https:\u002F\u002Fdiscord.gg\u002FdkZS2JF56X) 中讨论\n- 联系我们的邮箱 (opendilab@pjlab.org.cn)\n\n- 我们非常感谢所有关于改进 LightZero 的反馈和贡献，无论是算法还是系统设计方面。\n\n[comment]: \u003C> (- 为我们的未来计划 [Roadmap]&#40;https:\u002F\u002Fgithub.com\u002Fopendilab\u002FLightZero\u002Fprojects&#41; 做出贡献)\n\n[comment]: \u003C> (并且 `CONTRIBUTING.md` 提供了一些必要的信息。)\n\n\n## 🌏 引用\n```latex\n@article{niu2024lightzero,\n  title={LightZero: A Unified Benchmark for Monte Carlo Tree Search in General Sequential Decision Scenarios},\n  author={Niu, Yazhe and Pu, Yuan and Yang, Zhenjie and Li, Xueyan and Zhou, Tong and Ren, Jiyuan and Hu, Shuai and Li, Hongsheng and Liu, Yu},\n  journal={Advances in Neural Information Processing Systems},\n  volume={36},\n  year={2024}\n}\n\n@article{puunizero,\n  title={UniZero: Generalized and Efficient Planning with Scalable Latent World Models},\n  author={Pu, Yuan and Niu, Yazhe and Yang, Zhenjie and Ren, Jiyuan and Li, Hongsheng and Liu, Yu},\n  journal={Transactions on Machine Learning Research}\n}\n\n@article{xuan2024rezero,\n  title={ReZero: Boosting MCTS-based Algorithms by Backward-view and Entire-buffer Reanalyze},\n  author={Xuan, Chunyu and Niu, Yazhe and Pu, Yuan and Hu, Shuai and Liu, Yu and Yang, Jing},\n  journal={arXiv preprint arXiv:2404.16364},\n  year={2024}\n}\n\n@article{pu2025one,\n  title={One Model for All Tasks: Leveraging Efficient World Models in Multi-Task Planning},\n  author={Pu, Yuan and Niu, Yazhe and Tang, Jia and Xiong, Junyu and Hu, Shuai and Li, Hongsheng},\n  journal={arXiv preprint arXiv:2509.07945},\n  year={2025}\n}\n```\n\n## 💓 致谢\n\n本项目部分基于 GitHub 仓库中的以下开创性工作开发而成。我们对这些基础资源表示由衷的感谢：\n- https:\u002F\u002Fgithub.com\u002Fopendilab\u002FDI-engine\n- https:\u002F\u002Fgithub.com\u002Fdeepmind\u002Fmctx\n- https:\u002F\u002Fgithub.com\u002FYeWR\u002FEfficientZero\n- https:\u002F\u002Fgithub.com\u002Fwerner-duvaud\u002Fmuzero-general\n\n我们还要特别感谢以下贡献者 [@PaParaZz1](https:\u002F\u002Fgithub.com\u002FPaParaZz1)、[@karroyan](https:\u002F\u002Fgithub.com\u002Fkarroyan)、[@nighood](https:\u002F\u002Fgithub.com\u002Fnighood)、\n[@jayyoung0802](https:\u002F\u002Fgithub.com\u002Fjayyoung0802)、[@timothijoe](https:\u002F\u002Fgithub.com\u002Ftimothijoe)、[@TuTuHuss](https:\u002F\u002Fgithub.com\u002FTuTuHuss)、[@HarryXuancy](https:\u002F\u002Fgithub.com\u002FHarryXuancy)、[@puyuan1996](https:\u002F\u002Fgithub.com\u002Fpuyuan1996)、[@HansBug](https:\u002F\u002Fgithub.com\u002FHansBug)，感谢他们对本算法库的宝贵贡献和支持。\n\n感谢所有为本项目做出贡献的人：\n\u003Ca href=\"https:\u002F\u002Fgithub.com\u002Fopendilab\u002FLightZero\u002Fgraphs\u002Fcontributors\">\n\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fopendilab_LightZero_readme_e4559a668299.png\" \u002F>\n\u003C\u002Fa>\n\n\n## 🏷️ 
许可证\n本仓库中的所有代码均采用 [Apache License 2.0](https:\u002F\u002Fwww.apache.org\u002Flicenses\u002FLICENSE-2.0) 许可证。\n\n\u003Cp align=\"right\">(\u003Ca href=\"#top\">返回顶部\u003C\u002Fa>)\u003C\u002Fp>","# LightZero 快速上手指南\n\nLightZero 是一个轻量级、高效且易于理解的开源算法工具包，结合了蒙特卡洛树搜索（MCTS）与深度强化学习（RL），基于 PyTorch 构建。它支持 AlphaZero、MuZero 及其多种变体算法，旨在推动 MCTS+RL 算法家族的标准化研究与应用。\n\n## 环境准备\n\n在开始之前，请确保您的开发环境满足以下要求：\n\n*   **操作系统**: Linux (推荐 Ubuntu 18.04+) 或 macOS。Windows 用户建议使用 WSL2 或 Docker。\n*   **Python 版本**: Python 3.8 - 3.10。\n*   **核心依赖**:\n    *   PyTorch >= 1.8.0\n    *   NumPy\n    *   Gym \u002F Gymnasium (用于强化学习环境)\n*   **可选加速**: 若需极致性能，部分 MCTS 模块支持 C++\u002FCython 扩展，需安装 `gcc` 和 `cython`。\n\n> **提示**：建议创建独立的虚拟环境（如使用 `conda` 或 `venv`）以避免依赖冲突。\n\n## 安装步骤\n\n您可以通过 PyPI 直接安装稳定版，或从源码安装以获取最新功能。\n\n### 方式一：通过 PyPI 安装（推荐）\n\n这是最快捷的安装方式，适合大多数用户。\n\n```bash\npip install LightZero\n```\n\n**国内加速方案**：\n如果您在中国大陆，建议使用国内镜像源以提升下载速度：\n\n```bash\npip install LightZero -i https:\u002F\u002Fpypi.tuna.tsinghua.edu.cn\u002Fsimple\n```\n\n### 方式二：从源码安装\n\n如果您需要修改代码或使用最新开发版功能：\n\n```bash\ngit clone https:\u002F\u002Fgithub.com\u002Fopendilab\u002FLightZero.git\ncd LightZero\npip install -e .\n```\n\n### 方式三：使用 Docker（可选）\n\n为避免环境配置问题，您可以直接使用官方提供的 Docker 镜像（如有发布）或根据仓库中的 `Dockerfile` 构建环境。\n\n```bash\n# 示例：构建并运行容器（具体指令请参考仓库最新 Docker 文档）\ndocker build -t lightzero-env .\ndocker run -it --gpus all lightzero-env\n```\n\n## 基本使用\n\nLightZero 的设计遵循模块化原则，主要包含 `Model`（网络结构）、`Policy`（策略交互）和 `MCTS`（搜索树）三个核心部分；各算法与环境组合的训练配置则集中存放在仓库的 `zoo` 目录中。以下以 **MuZero** 算法在 **CartPole** 环境中的训练为最小化示例（具体文件名与默认参数请以仓库最新代码为准）。\n\n### 1. 选择环境与算法配置\n\nLightZero 采用“配置文件 + 训练入口”的组织方式：每个环境与算法组合都对应一个配置脚本，例如 CartPole + MuZero 对应 `zoo\u002Fclassic_control\u002Fcartpole\u002Fconfig\u002Fcartpole_muzero_config.py`，其中已包含网络结构、MCTS 模拟次数等默认参数。\n\n### 2. 直接运行内置配置脚本\n\n最快的上手方式是在从源码安装后直接运行配置脚本，它会自动完成数据采集（Collect）、学习（Learn）和评估（Eval）的完整训练流程。\n\n```bash\ncd LightZero\npython3 -u zoo\u002Fclassic_control\u002Fcartpole\u002Fconfig\u002Fcartpole_muzero_config.py\n```\n\n### 3. 在自定义脚本中调用训练入口\n\n也可以在自己的 Python 脚本中导入配置对象，并调用训练入口 `train_muzero`，便于与已有代码集成。\n\n```python\n# 导入 CartPole + MuZero 的默认配置与训练入口\nfrom zoo.classic_control.cartpole.config.cartpole_muzero_config import main_config, create_config\nfrom lzero.entry import train_muzero\n\nif __name__ == \"__main__\":\n    # 以默认配置启动训练，seed 控制实验随机种子\n    train_muzero([main_config, create_config], seed=0)\n```\n\n
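若希望在不修改仓库文件的前提下调整实验设置，可以在调用训练入口前覆盖配置对象中的相应字段。下面给出一个示意性写法：其中 `exp_name`、`policy.num_simulations`、`policy.collector_env_num` 等字段名以及 `max_env_step` 参数均为基于常见配置结构的假设，具体请以所选配置文件和 `train_muzero` 的实际定义为准。\n\n```python\n# 示意：在启动训练前覆盖部分配置（字段名请以实际配置文件为准）\nfrom zoo.classic_control.cartpole.config.cartpole_muzero_config import main_config, create_config\nfrom lzero.entry import train_muzero\n\nif __name__ == \"__main__\":\n    main_config.exp_name = \"cartpole_muzero_quickstart\"  # 实验输出目录名称\n    main_config.policy.num_simulations = 25              # 降低 MCTS 模拟次数，加快单次迭代\n    main_config.policy.collector_env_num = 4             # 并行数据采集环境的数量\n\n    # max_env_step 用于限制与环境交互的总步数，便于快速完成一次冒烟测试\n    train_muzero([main_config, create_config], seed=0, max_env_step=int(1e4))\n```\n\n这种“先改配置、再调用入口”的方式与直接运行 zoo 配置脚本等价，只是把参数改动收敛到自己的启动脚本中，便于管理批量实验。\n\n### 4. 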
查看结果\n\n训练完成后，您可以在指定的日志目录（默认为 `.\u002Flogs` 或配置中指定的路径）查看 TensorBoard 日志，监控奖励曲线和损失函数变化。\n\n```bash\ntensorboard --logdir .\u002Flogs\n```\n\n---\n\n**下一步**：\n*   访问 [官方文档](https:\u002F\u002Fopendilab.github.io\u002FLightZero) 查阅详细的 API 参考和架构图解。\n*   查看 `Integrated Algorithms` 列表，尝试在 Atari 或 DeepMind Control 等更复杂的环境中运行 AlphaZero 或 UniZero 算法。","某自动驾驶初创公司的算法团队正在研发城市路况下的智能决策系统，需要在复杂多变的交通环境中实现安全且高效的路径规划。\n\n### 没有 LightZero 时\n- **基准缺失导致重复造轮子**：团队需手动复现 AlphaZero 或 MuZero 等经典算法作为基线，耗费数周时间搭建环境且难以保证还原度，严重拖慢研发进度。\n- **场景适配成本高昂**：面对从棋盘游戏到连续控制（如车辆驾驶）的不同场景，原有框架缺乏统一接口，每次迁移新任务都需重写大量底层代码。\n- **性能评估标准不一**：由于缺乏标准化的评测流程，不同算法版本间的对比实验变量众多，难以客观判断模型改进是否真正有效。\n- **资源消耗难以控制**：自行集成的 MCTS 与深度学习模块往往耦合紧密且未优化，导致训练过程显存占用过高，无法在有限算力下进行大规模实验。\n\n### 使用 LightZero 后\n- **开箱即用的统一基准**：直接调用 LightZero 内置的标准化 MCTS 基准套件，几分钟内即可建立可靠的对比基线，将算法验证周期从数周缩短至数天。\n- **通用接口加速任务迁移**：利用其统一的序列决策接口，团队无需修改核心逻辑即可将算法从仿真环境快速迁移至真实的车辆控制任务中。\n- **公平高效的性能对标**：基于官方提供的标准化评测流程，团队能精准量化策略提升效果，迅速定位模型瓶颈并进行针对性优化。\n- **轻量架构降低算力门槛**：得益于 LightZero 轻量化且高效的设计，相同硬件配置下的训练吞吐量显著提升，使得大规模并行实验成为可能。\n\nLightZero 通过提供统一、轻量且高效的 MCTS 基准框架，彻底解决了通用序列决策场景中算法复现难、迁移成本高及评估标准混乱的核心痛点。","https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fopendilab_LightZero_9ad7439e.png","opendilab","OpenDILab","https:\u002F\u002Foss.gittoolsai.com\u002Favatars\u002Fopendilab_83f31d72.png","Open-source Decision Intelligence (DI) Platform",null,"opendilab@pjlab.org.cn","https:\u002F\u002Fgithub.com\u002Fopendilab",[80,84,88,92,96,100,104,108],{"name":81,"color":82,"percentage":83},"Python","#3572A5",91.6,{"name":85,"color":86,"percentage":87},"C++","#f34b7d",6.4,{"name":89,"color":90,"percentage":91},"Cython","#fedf5b",1.2,{"name":93,"color":94,"percentage":95},"Jupyter Notebook","#DA5B0B",0.5,{"name":97,"color":98,"percentage":99},"Shell","#89e051",0.2,{"name":101,"color":102,"percentage":103},"CMake","#DA3434",0.1,{"name":105,"color":106,"percentage":107},"Dockerfile","#384d54",0,{"name":109,"color":110,"percentage":107},"Makefile","#427819",1568,189,"2026-04-14T11:30:14","Apache-2.0","未说明","未说明（项目提及使用混合异构计算编程优化 MCTS，且基于 PyTorch，通常建议配备 NVIDIA GPU 以加速训练，但 README 未明确具体型号或显存要求）",{"notes":118,"python":119,"dependencies":120},"该工具结合了蒙特卡洛树搜索（MCTS）和深度强化学习（RL）。核心模块包含 Python 和 C++ 两种实现（ptree 和 ctree），因此安装可能需要 C++ 编译环境。支持多种算法（如 AlphaZero, MuZero, UniZero 等）及多种环境（Atari, MuJoCo, 棋类游戏等）。具体版本依赖需参考官方文档或 setup.py，README 中未列出确切版本号。","3.8+",[121,122,123],"torch","cython","cpp (用于 ctrees 实现)",[14],[126,127,128,129,130,131,132,133,134,135,136,137,138,139,140,141,142,143,144,145],"alpha-beta-pruning","alphazero","atari","board-games","continuous-control","gomoku","monte-carlo-tree-search","muzero","pytorch","reinforcement-learning","tictactoe","efficientzero","sampled-muzero","mcts","mcts-algorithm","board-game","gym","self-play","stochastic-muzero","gumbel-muzero","2026-03-27T02:49:30.150509","2026-04-15T10:58:34.970813",[149,154,159,164,169,173],{"id":150,"question_zh":151,"answer_zh":152,"source_url":153},34280,"如何配置以使用多张 GPU 进行模型训练？","虽然该 Issue 标题询问多 GPU 配置，但提供的评论主要讨论了评估脚本生成的文件管理。关于文件生成：该文件由评估脚本在 `ding\u002Fconfig\u002Fconfig.py` 第 465 行生成，用于记录关键统计信息，建议保留以便后续审查。如果不需要，目前只能手动删除。关于限制检查点数量：目前尚未实现自动保留最新 N 个检查点（如最新 5 个）的功能。如需节省空间，建议修改 `ding\u002Fworker\u002Flearner\u002Flearner_hook.py` 第 155 行附近的存储逻辑，或在本地环境中编写脚本自行管理。","https:\u002F\u002Fgithub.com\u002Fopendilab\u002FLightZero\u002Fissues\u002F196",{"id":155,"question_zh":156,"answer_zh":157,"source_url":158},34281,"为什么在 MsPacman 和 SpaceInvaders 等 Atari 游戏上长期运行表现不佳或无法收敛？","表现不佳的一个关键原因可能是输入数据与原始 MuZero 论文不一致。当前 LightZero 实现使用的是 4 帧灰度图拼接作为输入（类似 EfficientZero），而原始 MuZero 
论文（附录 E）使用的是最后 32 帧 RGB 图像（96x96 分辨率）以及导致这些帧的最近 32 个动作。这种输入差异巨大，可能导致难以复现原始论文结果。此外，实验中断也可能导致数据不完整，建议检查是否因磁盘空间不足导致实验提前终止，并尝试使用更大的网络结构或调整学习率策略。","https:\u002F\u002Fgithub.com\u002Fopendilab\u002FLightZero\u002Fissues\u002F233",{"id":160,"question_zh":161,"answer_zh":162,"source_url":163},34282,"在 UniZero 中，num_unroll_steps 和 infer_context_length 这两个参数有什么区别？为什么要设置为不同的值？","虽然具体技术细节的回复在提供的片段中被截断，但该问题涉及核心算法设计。通常 `infer_context_length` 指推理时用于构建状态的历史上下文长度（例如过去 4 帧），而 `num_unroll_steps` 指在潜在空间中向前推演的步数（用于计算目标值）。两者设置不同是因为它们服务于不同的目的：前者是感知输入的大小，后者是规划搜索的深度。将它们设为相同可能会限制模型的规划能力或导致信息冗余。具体配置需参考官方文档中关于 TensorBoard 标签 `_step`（环境步数）和 `_iter`（训练迭代次数）的说明，以正确解读训练曲线。","https:\u002F\u002Fgithub.com\u002Fopendilab\u002FLightZero\u002Fissues\u002F248",{"id":165,"question_zh":166,"answer_zh":167,"source_url":168},34283,"如何在频繁变化的网格环境（如挖矿游戏）中对观察空间进行建模？","对于此类动作空间固定且有限（例如 10x10 网格对应 100 个动作）的环境，MuZero 理论上能够处理。如果学习效果不佳，可能是因为陷入了局部最优。建议采取以下措施：1. 监控 TensorBoard 中的 `loss` 和 `policy_entropy` 指标；2. 增加温度参数（temperature）以提高探索率；3. 启用或调整 epsilon-greedy 策略。关于温度衰减，默认配置 `manual_temperature_decay` 为 False，即使用固定温度值。若需动态衰减（1->0.5->0.25），可在配置文件或将代码 `lzero\u002Fpolicy\u002Fmuzero.py` 第 163 行的 `manual_temperature_decay` 设为 True，并在训练入口 `lzero\u002Fentry\u002Ftrain_muzero.py` 第 144 行触发衰减逻辑。","https:\u002F\u002Fgithub.com\u002Fopendilab\u002FLightZero\u002Fissues\u002F224",{"id":170,"question_zh":171,"answer_zh":172,"source_url":163},34284,"TensorBoard 中的日志标签 `_step` 和 `_iter` 分别代表什么含义？论文中的曲线基于哪种数据？","在 TensorBoard 中，带有 `_step` 前缀的标签（如 `collector_step`）表示“环境步数”（Environment Steps, EnvStep），即智能体与环境交互的总步数。带有 `_iter` 前缀的标签表示“训练迭代次数”（Training Iterations）。除非另有说明，论文中展示的学习曲线通常基于评估器（Evaluator）的数据，且横坐标多为环境步数。用户在分析数据时应区分采集器（Collector）和评估器（Evaluator）的数据源，以确保正确解读模型性能。",{"id":174,"question_zh":175,"answer_zh":176,"source_url":168},34285,"如何修改配置以启用温度参数（temperature）的动态衰减？","默认情况下，`manual_temperature_decay` 参数在 `lzero\u002Fpolicy\u002Fmuzero.py` 第 163 行被设置为 False，这意味着系统会使用固定的温度值。若要启用动态衰减（按 1 -> 0.5 -> 0.25 的顺序衰减），有两种方法：1. 直接在配置文件中将 `manual_temperature_decay` 设置为 True；2. 修改源代码 `lzero\u002Fpolicy\u002Fmuzero.py` 第 163 行，将其硬编码为 True。启用后，训练脚本 `lzero\u002Fentry\u002Ftrain_muzero.py` 第 144 行会自动执行衰减逻辑。这有助于在训练后期减少探索，促进策略收敛。",[178,183,188,193,198,203,208],{"id":179,"version":180,"summary_zh":181,"released_at":182},264152,"v0.2.0","# 环境\n1. 添加 Metadrive 环境及其配置 (#192)  \n2. 添加采样版 MuZero\u002FUniZero 和 DMC 环境及相应配置 (#260)  \n3. 优化国际象棋环境及其渲染方法；添加单元测试和配置 (#272)  \n4. 添加 Jericho 环境及其相关配置 (#307)  \n\n# 算法\n1. 在 MuZero 中加入 Harmony Dream 损失平衡 (#242)  \n2. 将 AlphaZero 应用于非零和博弈 (#245)  \n3. 添加 AlphaZero CTree 的单元测试 (#306)  \n4. 补充近期与 MCTS 相关的论文 (#324)  \n5. 引入 rope，以真实的时间步索引作为 pos_index (#266)  \n6. 添加 Jericho DDP 配置 (#337)  \n\n# 增强\n1. 添加 LightZero Sphinx 文档 (#237)  \n2. 增加 Wandb 支持 (#294)  \n3. 添加 Atari100k 指标工具 (#295)  \n4. 添加 eval_benchmark 测试 (#296)  \n5. 在 Jericho 中加入 save_replay 和 collect_episode_data 选项 (#333)  \n6. 添加单文件 MCTS 井字棋演示 (#315)  \n\n# 优化\n1. 优化 Atari 和 DMC 上的效率与性能 (#292)  \n2. 更新依赖项要求 (#298)  \n3. 优化 reward\u002Fvalue\u002Fpolicy_head_hidden_channels 参数设置 (#314)  \n4. 更新教程中的配置和日志说明 (#330)  \n\n# 修复\n1. 修复不同观测形状下的 DownSample 问题 (#254)  \n2. 修正 Stochastic MuZero 中错误的 chance 值 (#275)  \n3. 在 CartPole 中使用 display_frames_as_gif (#288)  \n4. 修复 stochastic_muzero_model_mlp.py 中的 chance 编码器问题 (#284)  \n6. 修正 model\u002Futils.py 中的拼写错误 (#290)  \n7. 修复 world_model 中 SMZ 的 compile_args 和 num_simulations 错误 (#297)  \n8. 修复 2048 中的 reward 类型错误以及 CartPole 中的 OS 导入问题 (#304)  \n9. 将 action 切换至 macos-13 (#319)  \n10. 为基于像素的 DMC 修复 SMZ 和 SEZ 配置 (#322)  \n11. 
修复 DDP 设置中的 update_per_collect 问题 (#321)  \n12. 修复 initialize_zeros_batch 中 obs_shape 元组的 bug (#327)  \n13. 修复 prepare_obs_stack_for_unizero 的问题 (#328)  \n14. 修复当 len(ready_env_id) \u003C collector_env_num 时 random_policy 的问题 (#335)  \n15. 修复时间步兼容性问题 (#339)  \n\n# CI & 测试\n1. 添加自托管的 Linux (Ubuntu) CI runner (#259)  \n2. 为 CI 测试添加自托管的 Linux runner (#323)  \n\n---\n\n**完整变更日志**: https:\u002F\u002Fgithub.com\u002Fopendilab\u002FLightZero\u002Fcompare\u002Fv0.1.0...v0.2.0\n\n**贡献者: @ruiheng123 @TuTuHuss @HarryXuancy @ShivamKumar2002 @Roland0511 @cmarlin @xiongjyu @PaParaZz1  @puyuan1996**","2025-04-09T09:06:57",{"id":184,"version":185,"summary_zh":186,"released_at":187},264153,"v0.1.0","# 环境\n1. SumToThree 环境，来自 pooltool (#227)\n\n# 算法\n1. UniZero (#232)\n2. ReZero (#238)\n\n# 增强\n1. 添加日志和配置文档 (#220)\n2. 优化 atari_env_action_space_map，修复 test_muzero_game_buffer\n3. 优化 release.yml\n\n# 风格\n1. 更新 Discord 链接，并在 README 中添加新徽章 (#221)\n\n**完整变更日志**: https:\u002F\u002Fgithub.com\u002Fopendilab\u002FLightZero\u002Fcompare\u002Fv0.0.5...v0.1.0\n\n\n**贡献者: @ekiefl @TuTuHuss @HarryXuancy @PaParaZz1  @puyuan1996**\n","2024-07-12T10:26:59",{"id":189,"version":190,"summary_zh":191,"released_at":192},264154,"v0.0.5","# 环境\n1. MemoryEnv (#197)\n2. MountainCar (#181)\n\n# 算法\n1. ctree 中的 Gumbel AlphaZero (#212)\n\n# 增强功能\n1. 添加 eval_offline 选项 (#188)\n2. 在重新分析过程中将更新后的搜索策略和价值保存到缓冲区 (#190)\n3. 添加 MuZero 可视化功能 (#181)\n4. 添加 EfficientZero 井字棋配置 (#204)\n5. 添加 2 篇与 MCTS 相关的 ICLR2024 论文\n6. 在 test_game_segment 中添加加载预训练模型的选项 (#194)\n7. 优化 _forward_learn() 及部分数据处理操作 (#191)\n\n# 修复\n1. 修复 DDP 设置中的 sync_gradients 和日志记录问题 (#200)\n2. 修复 channel_last 错误\n3. 修复收集器中的 total_episode_count 错误\n4. 修复 memory_lightzero_env 返回值错误\n5. 修复 memory_env 中的 obs_max_scale 错误\n\n# 风格改进\n1. 添加 ZeroPal 和 Discord 链接 (#209)\n2. 为 game_buffer_muzero 添加单元测试 (#186)\n3. 在 README 中添加自定义文档章节\n\n**完整变更日志**: https:\u002F\u002Fgithub.com\u002Fopendilab\u002FLightZero\u002Fcompare\u002Fv0.0.4...v0.0.5\n\n\n**贡献者: @karroyan @HarryXuancy @nighood @puyuan1996**","2024-04-16T10:41:53",{"id":194,"version":195,"summary_zh":196,"released_at":197},264155,"v0.0.4","# 增强功能\n1. 添加智能体配置，并优化重放视频的保存方法 (#184)\n2. 优化工作进程文件中的注释\n3. 优化树搜索文件中的注释 (#185)\n4. 将 mcts_mode 重命名为 battle_mode_in_simulation_env，并为井字棋添加采样的 AlphaZero 配置 (#179)\n5. 优化冗余数据压缩操作 (#177)\n6. 优化 SEZ 模型中的连续动作处理流程\n7. 优化 BipedalWalker 环境\n\n# 修复\n1. 修复在 Gumbel MuZero 中，当 action_mask 中存在零时导致的值无穷大问题 (#178)\n2. 修复使用 Gymnasium 时的渲染设置 (#173)\n3. 修复 sampled_efficientzero_model.py 中的 lstm_hidden_size 参数\n4. 修复 BipedalWalker 连续离散环境中的 action_mask，并修复采样 EfficientZero 中的设备相关 bug (#168)\n\n**完整变更日志**: https:\u002F\u002Fgithub.com\u002Fopendilab\u002FLightZero\u002Fcompare\u002Fv0.0.3...v0.0.4\n\n\n**贡献者: @karroyan @HarryXuancy @puyuan1996 @zjowowen**\n","2024-02-21T10:16:28",{"id":199,"version":200,"summary_zh":201,"released_at":202},264156,"v0.0.3","# 环境\n1. MiniGrid 环境 (#110)\n2. Bsuite 环境 (#110)\n3. GoBigger 环境 (#39)\n\n# 算法\n1. 采样版 AlphaZero (#141)\n2. MuZero+RND (#110)\n3. 多智能体 MuZero\u002FEfficientZero (#39)\n\n# 增强功能\n1. 在 AlphaZero 中添加 CTree 版本的 MCTS (#142)\n2. 将对 Gym 的依赖升级为 Gymnasium (#150)\n3. 添加智能体类以支持 LightZero 的 HuggingFace 模型库 (#163)\n4. 在 README 中添加近期与 MCTS 相关的论文 (#159)\n5. 为 Connect4 添加 MuZero 配置 (#107)\n6. 添加 CONTRIBUTING.md (#119)\n7. 添加 .gitpod.yml 和 .gitpod.Dockerfile (#123)\n8. 在 README 中添加贡献者小节 (#132)\n9. 添加 CODE_OF_CONDUCT.md (#127)\n10. 优化各类常用环境的注释及 render_eval 配置 (#154) (#161)\n11. 优化 action_type 和 env_type，修复 test.yml，并修正单元测试 (#160)\n12. 更新环境与算法教程文档 (#106)\n13. 优化 Gomoku 环境 (#141)\n14. 
为连续动作空间环境添加 random_policy 支持 (#118)\n15. 优化 ptree_az 的模拟方法 (#120)\n16. 优化 game_segment_to_array 的注释\n\n# 修复\n1. 修复各类常用环境的 render 方法 (#154) (#161)\n2. 修复 Gumbel MuZero 收集器的 bug，并修正 Gumbel 的拼写错误 (#144)\n3. 修复 game_segment.py 中的 assert bug (#138)\n4. 修正 muzero_evaluator 中 visit_count_distributions 的名称\n5. 修复 MCTS 和 AlphaBeta bot 的单元测试 (#120)\n6. 修复 ptree_mz.py 中的错别字 (#113)\n7. 修复 sez ptree 中 root_sampled_actions_tmp 形状的 bug\n8. 修复 policy utils 的单元测试\n9. 修复 README 中的错别字，并在 README 中添加“返回顶部”按钮 (#104) (#109) (#111)\n\n# 样式\n1. 添加 NeurIPS 2023 论文链接\n\n# 新闻\n1. NeurIPS 2023 Spotlight：[LightZero：面向通用序列决策场景的蒙特卡洛树搜索统一基准](https:\u002F\u002Fgithub.com\u002Fopendilab\u002FLightZero)\n\n**完整变更日志**：https:\u002F\u002Fgithub.com\u002Fopendilab\u002FLightZero\u002Fcompare\u002Fv0.0.2...v0.0.3\n\n\n**贡献者：@PaParaZz1 @karroyan @nighood @jayyoung0802 @timothijoe @TuTuHuss @HarryXuancy @puyuan1996 @HansBug @mohitd404 @@PentesterPriyanshu @0Armaan025 @prajjwalyd @suravshresth @sohamtembhurne @eltociear**\n","2023-12-07T08:27:56",{"id":204,"version":205,"summary_zh":206,"released_at":207},264157,"v0.0.2","# 环境\n1. MuJoCo 环境 (#50)\n2. 2048 环境 (#64)\n3. 连四格环境 (#63)\n\n# 算法\n1. Gumbel MuZero (#22)\n2. 随机性 MuZero (#64)\n\n# 增强功能\n1. 优化 MCTS 和 ptree_az (#57) (#61)\n2. 优化 README 文件 (#36) (#47) (#51) (#77) (#95) (#96)\n3. 更新论文笔记 (#89) (#91)\n4. 优化模型和配置文件 (#26) (#27) (#50)\n5. 添加 Dockerfile 及其使用说明 (#95)\n6. 添加关于如何自定义环境和算法的文档 (#78)\n7. 添加 PyTorch DDP 支持 (#68)\n8. 在 train_muzero_entry 中添加 ε-贪婪策略和随机收集选项 (#54)\n9. 添加 Atari 可视化选项 (#40)\n10. 添加 log_buffer_memory_usage 工具函数 (#30)\n\n# 修复\n1. 修复 MuZero 数据收集器中的优先级 bug (#74)\n\n# 风格调整\n1. 更新 GitHub Actions 流水线 (#71) (#72) (#73) (#81) (#83) (#84) (#90)\n\n\n**完整变更日志**: https:\u002F\u002Fgithub.com\u002Fopendilab\u002FLightZero\u002Fcompare\u002Fv0.0.1...v0.0.2\n\n\n**贡献者: @PaParaZz1 @karroyan @nighood @jayyoung0802 @timothijoe @TuTuHuss @HarryXuancy @puyuan1996 @HansBug**\n","2023-09-21T10:55:05",{"id":209,"version":210,"summary_zh":211,"released_at":212},264158,"v0.0.1","**完整更新日志**: https:\u002F\u002Fgithub.com\u002Fopendilab\u002FLightZero\u002Fcommits\u002Fv0.0.1","2023-04-14T10:54:00"]