[{"data":1,"prerenderedAt":-1},["ShallowReactive",2],{"similar-MushroomRL--mushroom-rl":3,"tool-MushroomRL--mushroom-rl":64},[4,17,27,35,43,56],{"id":5,"name":6,"github_repo":7,"description_zh":8,"stars":9,"difficulty_score":10,"last_commit_at":11,"category_tags":12,"status":16},3808,"stable-diffusion-webui","AUTOMATIC1111\u002Fstable-diffusion-webui","stable-diffusion-webui 是一个基于 Gradio 构建的网页版操作界面，旨在让用户能够轻松地在本地运行和使用强大的 Stable Diffusion 图像生成模型。它解决了原始模型依赖命令行、操作门槛高且功能分散的痛点，将复杂的 AI 绘图流程整合进一个直观易用的图形化平台。\n\n无论是希望快速上手的普通创作者、需要精细控制画面细节的设计师，还是想要深入探索模型潜力的开发者与研究人员，都能从中获益。其核心亮点在于极高的功能丰富度：不仅支持文生图、图生图、局部重绘（Inpainting）和外绘（Outpainting）等基础模式，还独创了注意力机制调整、提示词矩阵、负向提示词以及“高清修复”等高级功能。此外，它内置了 GFPGAN 和 CodeFormer 等人脸修复工具，支持多种神经网络放大算法，并允许用户通过插件系统无限扩展能力。即使是显存有限的设备，stable-diffusion-webui 也提供了相应的优化选项，让高质量的 AI 艺术创作变得触手可及。",162132,3,"2026-04-05T11:01:52",[13,14,15],"开发框架","图像","Agent","ready",{"id":18,"name":19,"github_repo":20,"description_zh":21,"stars":22,"difficulty_score":23,"last_commit_at":24,"category_tags":25,"status":16},1381,"everything-claude-code","affaan-m\u002Feverything-claude-code","everything-claude-code 是一套专为 AI 编程助手（如 Claude Code、Codex、Cursor 等）打造的高性能优化系统。它不仅仅是一组配置文件，而是一个经过长期实战打磨的完整框架，旨在解决 AI 代理在实际开发中面临的效率低下、记忆丢失、安全隐患及缺乏持续学习能力等核心痛点。\n\n通过引入技能模块化、直觉增强、记忆持久化机制以及内置的安全扫描功能，everything-claude-code 能显著提升 AI 在复杂任务中的表现，帮助开发者构建更稳定、更智能的生产级 AI 代理。其独特的“研究优先”开发理念和针对 Token 消耗的优化策略，使得模型响应更快、成本更低，同时有效防御潜在的攻击向量。\n\n这套工具特别适合软件开发者、AI 研究人员以及希望深度定制 AI 工作流的技术团队使用。无论您是在构建大型代码库，还是需要 AI 协助进行安全审计与自动化测试，everything-claude-code 都能提供强大的底层支持。作为一个曾荣获 Anthropic 黑客大奖的开源项目，它融合了多语言支持与丰富的实战钩子（hooks），让 AI 真正成长为懂上",140436,2,"2026-04-05T23:32:43",[13,15,26],"语言模型",{"id":28,"name":29,"github_repo":30,"description_zh":31,"stars":32,"difficulty_score":23,"last_commit_at":33,"category_tags":34,"status":16},2271,"ComfyUI","Comfy-Org\u002FComfyUI","ComfyUI 是一款功能强大且高度模块化的视觉 AI 引擎，专为设计和执行复杂的 Stable Diffusion 图像生成流程而打造。它摒弃了传统的代码编写模式，采用直观的节点式流程图界面，让用户通过连接不同的功能模块即可构建个性化的生成管线。\n\n这一设计巧妙解决了高级 AI 绘图工作流配置复杂、灵活性不足的痛点。用户无需具备编程背景，也能自由组合模型、调整参数并实时预览效果，轻松实现从基础文生图到多步骤高清修复等各类复杂任务。ComfyUI 拥有极佳的兼容性，不仅支持 Windows、macOS 和 Linux 全平台，还广泛适配 NVIDIA、AMD、Intel 及苹果 Silicon 等多种硬件架构，并率先支持 SDXL、Flux、SD3 等前沿模型。\n\n无论是希望深入探索算法潜力的研究人员和开发者，还是追求极致创作自由度的设计师与资深 AI 绘画爱好者，ComfyUI 都能提供强大的支持。其独特的模块化架构允许社区不断扩展新功能，使其成为当前最灵活、生态最丰富的开源扩散模型工具之一，帮助用户将创意高效转化为现实。",107662,"2026-04-03T11:11:01",[13,14,15],{"id":36,"name":37,"github_repo":38,"description_zh":39,"stars":40,"difficulty_score":23,"last_commit_at":41,"category_tags":42,"status":16},3704,"NextChat","ChatGPTNextWeb\u002FNextChat","NextChat 是一款轻量且极速的 AI 助手，旨在为用户提供流畅、跨平台的大模型交互体验。它完美解决了用户在多设备间切换时难以保持对话连续性，以及面对众多 AI 模型不知如何统一管理的痛点。无论是日常办公、学习辅助还是创意激发，NextChat 都能让用户随时随地通过网页、iOS、Android、Windows、MacOS 或 Linux 端无缝接入智能服务。\n\n这款工具非常适合普通用户、学生、职场人士以及需要私有化部署的企业团队使用。对于开发者而言，它也提供了便捷的自托管方案，支持一键部署到 Vercel 或 Zeabur 等平台。\n\nNextChat 的核心亮点在于其广泛的模型兼容性，原生支持 Claude、DeepSeek、GPT-4 及 Gemini Pro 等主流大模型，让用户在一个界面即可自由切换不同 AI 能力。此外，它还率先支持 MCP（Model Context Protocol）协议，增强了上下文处理能力。针对企业用户，NextChat 提供专业版解决方案，具备品牌定制、细粒度权限控制、内部知识库整合及安全审计等功能，满足公司对数据隐私和个性化管理的高标准要求。",87618,"2026-04-05T07:20:52",[13,26],{"id":44,"name":45,"github_repo":46,"description_zh":47,"stars":48,"difficulty_score":23,"last_commit_at":49,"category_tags":50,"status":16},2268,"ML-For-Beginners","microsoft\u002FML-For-Beginners","ML-For-Beginners 是由微软推出的一套系统化机器学习入门课程，旨在帮助零基础用户轻松掌握经典机器学习知识。这套课程将学习路径规划为 12 周，包含 26 节精炼课程和 52 
- **ragflow** (infiniflow/ragflow, ★77,062): A leading open-source retrieval-augmented generation (RAG) engine that builds a more accurate, reliable context layer for large language models. Deep parsing of complex document structure (tables, charts, mixed layouts) improves retrieval accuracy and curbs hallucination, and built-in agent capabilities let the system plan multi-step solutions rather than just answer questions. A visual workflow editor and flexible APIs serve both non-specialists and developers, under the Apache 2.0 license.

## Overview

*Python library for Reinforcement Learning.*

MushroomRL is an open-source Python library for reinforcement learning (RL) that aims to make algorithm experimentation simple and efficient. It addresses a recurring problem for researchers: the tedious refactoring and compatibility issues that come with trying out different algorithms or switching simulation environments. Thanks to a highly modular architecture, MushroomRL plugs into mainstream deep-learning frameworks such as PyTorch and TensorFlow, and natively supports classic benchmark environments and physics simulators including Gymnasium, PyBullet, MuJoCo, and the Deepmind Control Suite.

The library ships complete implementations ranging from classical algorithms such as Q-Learning and SARSA to deep RL algorithms such as DQN, PPO, SAC, and TD3, so users can assemble and run complex training tasks without writing the underlying machinery from scratch. It also integrates with high-fidelity simulation platforms such as Habitat and iGibson, supporting RGBD images and multimodal sensory input, which makes it well suited to building embodied-AI applications.

MushroomRL is aimed at AI researchers, algorithm engineers, and developers in related fields. For anyone who would rather focus on policy innovation than on low-level engineering detail, it offers a flexible, stable, and extensible experimentation platform for both academic exploration and industrial prototyping.

## README

**********
MushroomRL
**********

.. image:: https://github.com/MushroomRL/mushroom-rl/actions/workflows/continuous_integration.yml/badge.svg?branch=dev
   :target: https://github.com/MushroomRL/mushroom-rl/actions/workflows/continuous_integration.yml
   :alt: Continuous Integration

.. image:: https://readthedocs.org/projects/mushroomrl/badge/?version=latest
   :target: https://mushroomrl.readthedocs.io/en/latest/?badge=latest
   :alt: Documentation Status

.. image:: https://qlty.sh/gh/MushroomRL/projects/mushroom-rl/maintainability.svg
   :target: https://qlty.sh/gh/MushroomRL/projects/mushroom-rl
   :alt: Maintainability

.. image:: https://qlty.sh/gh/MushroomRL/projects/mushroom-rl/coverage.svg
   :target: https://qlty.sh/gh/MushroomRL/projects/mushroom-rl
   :alt: Test Coverage

**MushroomRL: Reinforcement Learning Python library.**

.. contents:: **Contents of this document:**
   :depth: 2

What is MushroomRL
==================
MushroomRL is a Python Reinforcement Learning (RL) library whose modularity makes it
easy to combine well-known Python libraries for tensor computation (e.g. PyTorch,
TensorFlow) and RL benchmarks (e.g. Gymnasium, PyBullet, Deepmind Control Suite).
It lets you run RL experiments in a simple way, providing classical RL algorithms
(e.g. Q-Learning, SARSA, FQI) as well as deep RL algorithms (e.g. DQN, DDPG, SAC,
TD3, TRPO, PPO).

`Full documentation and tutorials available here <http://mushroomrl.readthedocs.io/en/latest/>`_.

Installation
============

You can do a minimal installation of ``MushroomRL`` with:

.. code:: shell

    pip3 install mushroom_rl

Installing everything
---------------------
``MushroomRL`` also contains some optional components, e.g. support for ``Gymnasium``
environments, Atari 2600 games from the ``Arcade Learning Environment``, and
physics simulators such as ``PyBullet`` and ``MuJoCo``.
Support for these classes is not enabled by default.

To install the whole set of features, you will need additional packages. You can
install everything by running:

.. code:: shell

    pip3 install mushroom_rl[all]

This will install every dependency of MushroomRL except the Plots dependency.
For Ubuntu > 20.04, you may need to install the pygame and gym dependencies:

.. code:: shell

    sudo apt -y install libsdl-image1.2-dev libsdl-mixer1.2-dev libsdl-ttf2.0-dev \
                     libsdl1.2-dev libsmpeg-dev libportmidi-dev ffmpeg libswscale-dev \
                     libavformat-dev libavcodec-dev swig

Note that some of these dependencies still need to be installed on other operating
systems, e.g. swig on macOS.

To install the Plots dependencies, run:

.. code:: shell

    sudo apt -y install python3-pyqt5
    pip3 install mushroom_rl[plots]

You might need to install external dependencies first. For more information about
mujoco-py installation, follow the instructions on the
`project page <https://github.com/openai/mujoco-py>`_.

    WARNING! When using conda, there may be issues with QT. You can fix them by
    adding the following lines to your code, replacing ``<conda_base_path>`` with
    the path to your conda distribution and ``<env_name>`` with the name of the
    conda environment you are using:

.. code:: python

   import os
   os.environ['QT_QPA_PLATFORM_PLUGIN_PATH'] = '<conda_base_path>/envs/<env_name>/bin/platforms'

To use the dm_control MushroomRL interface, install ``dm_control`` following the
instructions found `here <https://github.com/deepmind/dm_control>`_.
Using Habitat and iGibson with MushroomRL
-----------------------------------------

`Habitat <https://aihabitat.org/>`__ and `iGibson <http://svl.stanford.edu/igibson/>`__
are simulation platforms providing realistic and sensory-rich learning environments.
In MushroomRL, the agent's default observations are RGB images, but RGBD,
agent sensory data, and other information can also be used.

    If you have previous versions of iGibson or Habitat already installed, we
    recommend removing them and doing clean installs.

iGibson Installation
^^^^^^^^^^^^^^^^^^^^
Follow the `official guide <http://svl.stanford.edu/igibson/#install_env>`__ and install its
`assets <http://svl.stanford.edu/igibson/docs/assets.html>`__ and
`datasets <http://svl.stanford.edu/igibson/docs/dataset.html>`__.

To run ``<MUSHROOM_RL PATH>/mushroom-rl/examples/igibson_dqn.py`` you need to run:

.. code:: shell

    python -m igibson.utils.assets_utils --download_assets
    python -m igibson.utils.assets_utils --download_demo_data
    python -m igibson.utils.assets_utils --download_ig_dataset

You can also use `third party datasets <https://github.com/StanfordVL/iGibson/tree/master/igibson/utils/data_utils/ext_scene>`__.

The scene details are defined in a YAML file that needs to be passed to the agent.
See ``<IGIBSON PATH>/igibson/test/test_house.YAML`` for an example.


Habitat Installation
^^^^^^^^^^^^^^^^^^^^
Follow the `official guide <https://github.com/facebookresearch/habitat-lab/#installation>`__
and do a **full install** with ``habitat_baselines``.
Then you can download interactive datasets following
`this <https://github.com/facebookresearch/habitat-lab#data>`__ and
`this <https://github.com/facebookresearch/habitat-lab#task-datasets>`__.
If you need to download other datasets, you can use
`this utility <https://github.com/facebookresearch/habitat-sim/blob/master/habitat_sim/utils/datasets_download.py>`__.
Basic Usage of Habitat
^^^^^^^^^^^^^^^^^^^^^^
When you create a ``Habitat`` environment, you need to pass a wrapper name and two
YAML files: ``Habitat(wrapper, config_file, base_config_file)``. A construction
sketch follows this list.

* The wrapper has to be among the ones defined in ``<MUSHROOM_RL PATH>/mushroom-rl/environments/habitat_env.py``,
  and takes care of converting actions and observations to a gym-like format. If your
  task / robot requires it, you may need to define new wrappers.

* The YAML files define every detail: the Habitat environment, the scene, the
  sensors available to the robot, the rewards, the action discretization, and any
  additional information you may need. The second YAML file is optional, and
  overwrites whatever was already defined in the first YAML.

    If you use YAMLs from ``habitat-lab``, check if they define a YAML for
    ``BASE_TASK_CONFIG_PATH``. If they do, you need to pass it as ``base_config_file`` to
    ``Habitat()``. ``habitat-lab`` YAMLs, in fact, use relative paths, and calling them
    from outside its root folder will cause errors.

* If you use a dataset, be sure that the path defined in the YAML file is correct,
  especially if you use relative paths. ``habitat-lab`` YAMLs use relative paths, so
  be careful with that. By default, the path defined in the YAML file is relative to
  where you launched the Python code. If your ``data`` folder is somewhere else, you
  may also create a symbolic link.
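A minimal construction sketch of the call above. This is an illustration, not code
from the repository: the wrapper name ``'NavRLEnv'`` and both YAML paths are
placeholders that must be replaced with a wrapper actually defined in
``habitat_env.py`` and config files valid for your installation.

.. code:: python

    from mushroom_rl.environments.habitat_env import Habitat

    # Placeholder wrapper name and YAML paths -- substitute your own.
    mdp = Habitat('NavRLEnv',                            # wrapper (hypothetical name)
                  'pointnav_apartment-0.yaml',           # main config file
                  '/abs/path/to/base_task_config.yaml')  # optional base config

    # The resulting mdp behaves like any other MushroomRL environment.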
Rearrange Task Example
^^^^^^^^^^^^^^^^^^^^^^
* Download the ReplicaCAD datasets (``--data-path data`` downloads them into the
  folder from which you are launching your code):

.. code:: shell

    python -m habitat_sim.utils.datasets_download --uids replica_cad_dataset --data-path data

* For this task we use ``<HABITAT_LAB PATH>/habitat_baselines/config/rearrange/rl_pick.yaml``.
  This YAML defines ``BASE_TASK_CONFIG_PATH: configs/tasks/rearrange/pick.yaml``,
  and since this is a relative path we need to overwrite it by passing its absolute
  path as the ``base_config_file`` argument to ``Habitat()``.

* Then, ``pick.yaml`` defines the dataset to be used relative to ``<HABITAT_LAB PATH>``.
  If you did not use the ``--data-path`` argument with the previous download command,
  the ReplicaCAD dataset is now in ``<HABITAT_LAB PATH>/data`` and you need to
  make a link to it:

.. code:: shell

    ln -s <HABITAT_LAB PATH>/data/ <MUSHROOM_RL PATH>/mushroom-rl/examples/habitat

* Finally, you can launch ``python habitat_rearrange_sac.py``.

Navigation Task Example
^^^^^^^^^^^^^^^^^^^^^^^
* Download and extract the Replica scenes:

    WARNING! The dataset is very large!

.. code:: shell

    sudo apt-get install pigz
    git clone https://github.com/facebookresearch/Replica-Dataset.git
    cd Replica-Dataset
    ./download.sh replica-path

* For this task we only use the custom YAML file ``pointnav_apartment-0.yaml``.

* ``DATA_PATH: "replica_{split}_apartment-0.json.gz"`` defines the JSON file with
  some scene details, such as the agent's initial position and orientation.
  The ``{split}`` value is defined by the ``SPLIT`` key.

    If you want to try new positions, you can sample some from the set of the scene's navigable points.
    After initializing a ``habitat`` environment, for example ``mdp = Habitat(...)``,
    run ``mdp.env._env._sim.sample_navigable_point()``.

* ``SCENES_DIR: "Replica-Dataset/replica-path/apartment_0"`` defines the scene.
  As said before, this path is relative to where you launch the script, so we need
  to link the Replica folder. If you launch ``habitat_nav_dqn.py`` from its example
  folder, run:

.. code:: shell

    ln -s <PATH TO>/Replica-Dataset/ <MUSHROOM_RL PATH>/mushroom-rl/examples/habitat

* Finally, you can launch ``python habitat_nav_dqn.py``.


Editable Installation
---------------------

You can also perform a local editable installation by using:
.. code:: shell

    pip install --no-use-pep517 -e .

To also install the optional dependencies:

.. code:: shell

    pip install --no-use-pep517 -e .[all]


How to set up and run an experiment
===================================
To run experiments, MushroomRL requires a script file that provides the necessary
information for the experiment. Follow the scripts in the ``examples`` folder to
get an idea of how an experiment can be run.

For instance, to run a quick experiment with one of the provided example scripts, run:

.. code:: shell

    python3 examples/car_on_hill_fqi.py
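The example scripts all share the same environment-policy-agent-core structure. The
sketch below shows that skeleton with tabular Q-Learning on the built-in
``GridWorld``; it follows the patterns in the library's tutorials, but treat the
exact signatures as assumptions and check them against your installed version.

.. code:: python

    from mushroom_rl.core import Core
    from mushroom_rl.environments import GridWorld
    from mushroom_rl.algorithms.value import QLearning
    from mushroom_rl.policy import EpsGreedy
    from mushroom_rl.utils.parameters import Parameter

    # 1. Environment: a small grid with the goal in the far corner
    mdp = GridWorld(width=3, height=3, goal=(2, 2), start=(0, 0))

    # 2. Policy and agent: epsilon-greedy exploration around tabular Q-Learning
    policy = EpsGreedy(epsilon=Parameter(value=.1))
    agent = QLearning(mdp.info, policy, learning_rate=Parameter(value=.3))

    # 3. Core: couples agent and environment and runs the interaction loop
    core = Core(agent, mdp)
    core.learn(n_steps=10000, n_steps_per_fit=1)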
Cite MushroomRL
===============
If you are using MushroomRL for your scientific publications, please cite:

.. code:: bibtex

    @article{JMLR:v22:18-056,
        author  = {Carlo D'Eramo and Davide Tateo and Andrea Bonarini and Marcello Restelli and Jan Peters},
        title   = {MushroomRL: Simplifying Reinforcement Learning Research},
        journal = {Journal of Machine Learning Research},
        year    = {2021},
        volume  = {22},
        number  = {131},
        pages   = {1-5},
        url     = {http://jmlr.org/papers/v22/18-056.html}
    }

How to contact us
=================
For any questions, drop an e-mail at mushroom4rl@gmail.com.

Follow us on Twitter `@Mushroom_RL <https://twitter.com/mushroom_rl>`_!
## Quickstart

MushroomRL is a modular Python reinforcement-learning (RL) library that supports deep-learning frameworks such as PyTorch and TensorFlow and is compatible with mainstream benchmark environments such as Gymnasium, PyBullet, and the Deepmind Control Suite. It provides complete implementations from classical algorithms (e.g. Q-Learning) to deep RL algorithms (e.g. DQN, PPO, SAC).

### Prerequisites

*   **Operating system**: Linux (Ubuntu > 20.04 recommended), macOS, Windows
*   **Python version**: Python 3.x
*   **System dependencies**:
    *   For the plotting features or Atari games, Ubuntu users need system-level packages:
        ```bash
        sudo apt -y install libsdl-image1.2-dev libsdl-mixer1.2-dev libsdl-ttf2.0-dev \
                         libsdl1.2-dev libsmpeg-dev libportmidi-dev ffmpeg libswscale-dev \
                         libavformat-dev libavcodec-dev swig python3-pyqt5
        ```
    *   **Mirror tip (mainland China)**: install Python packages through the Tsinghua or Alibaba mirrors for faster downloads.

### Installation steps

#### 1. Minimal installation
Installs only the core library, without the extra environment-simulator dependencies:
```bash
pip3 install mushroom_rl -i https://pypi.tuna.tsinghua.edu.cn/simple
```

#### 2. Full installation
Installs all optional components (including Gymnasium, Atari, and PyBullet support):
```bash
pip3 install "mushroom_rl[all]" -i https://pypi.tuna.tsinghua.edu.cn/simple
```

#### 3. Optional components
*   **Plotting support**:
    ```bash
    pip3 install "mushroom_rl[plots]" -i https://pypi.tuna.tsinghua.edu.cn/simple
    ```
*   **Conda users**: if you hit QT-related errors inside a conda environment, add the following at the top of your code (replacing `<conda_base_path>` and `<env_name>`):
    ```python
    import os
    os.environ['QT_QPA_PLATFORM_PLUGIN_PATH'] = '<conda_base_path>/envs/<env_name>/bin/platforms'
    ```

#### 4. Developer installation (editable mode)
If you need to modify the source or contribute code:
```bash
pip install --no-use-pep517 -e .
# Editable install including all optional dependencies
pip install --no-use-pep517 -e .[all]
```

> **Note**: to use MuJoCo or Deepmind Control (`dm_control`), install their dependencies separately following the respective official documentation.
### Basic usage

MushroomRL runs experiments from script files; the `examples` folder in the repository provides plenty of samples.

#### Run an example experiment
After cloning the repository, run one of the provided example scripts to start a simple RL experiment (e.g. fitted Q-iteration, FQI, on the "car on hill" environment):

```bash
python3 examples/car_on_hill_fqi.py
```

#### Structure of a custom experiment
Writing your own experiment script usually involves the following steps (an evaluation sketch follows this list):
1.  Define the environment (Environment).
2.  Define the agent (Agent) and its policy (Policy).
3.  Define the core (Core) and run the experiment.
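A sketch of how these steps typically finish with an evaluation pass, continuing from a trained `agent`, `mdp`, and `core` such as the ones in the README sketch above. `compute_J` is the discounted-return helper used throughout the 1.x examples; verify the import path against your installed version.

```python
import numpy as np
from mushroom_rl.utils.dataset import compute_J

# Roll out the learned policy for a few episodes and score it
dataset = core.evaluate(n_episodes=10)
J = np.mean(compute_J(dataset, mdp.info.gamma))  # mean discounted return
print(f'Average discounted return: {J:.3f}')
```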
See the other scripts in the `examples` directory (such as `dqn_atari.py` and `ppo_gym.py`) for concrete configurations across different environments and algorithms.

---
**Citation**: if you use this library in your research, please cite:
*Carlo D'Eramo et al., "MushroomRL: Simplifying Reinforcement Learning Research", JMLR 2021.*

## Example use case

An algorithms team at a robotics startup is building a deep-RL grasping system for a robot arm and needs to validate several mainstream algorithms (such as SAC and PPO) quickly in the PyBullet physics simulator.

### Without mushroom-rl
- **Reinventing the wheel**: the team must hand-write the core logic of every algorithm and its environment-interaction glue, wasting time on infrastructure rather than policy optimization.
- **Framework lock-in**: moving experiments from TensorFlow to PyTorch means refactoring most of the code; there is no flexible tensor-computation backend.
- **Environment adaptation**: wiring up simulators such as PyBullet or Gymnasium means repeatedly adjusting data formats and reward interfaces, a tedious and error-prone process.
- **No baselines**: without built-in classical algorithms (such as Q-Learning and FQI) to serve as baselines, it is hard to assess objectively how much a newly proposed deep-RL model actually improves.

### With mushroom-rl
- **Out-of-the-box speed**: calling the built-in SAC, PPO, and classical baselines directly cuts the validation cycle from weeks to days.
- **Swappable deep-learning backends**: the modular design lets the team choose PyTorch or TensorFlow for tensor computation without touching the core logic.
- **Uniform environment interface**: one standardized agent interface connects PyBullet, Gymnasium, and even the Deepmind Control Suite, eliminating the adaptation pain.
- **Organized experiments**: the experiment-management facilities make it straightforward to record and compare algorithms under identical physical scenarios, producing more convincing data.

By being highly modular and compatible with the mainstream ecosystem, mushroom-rl frees the team from engineering plumbing and lets it focus on iterating the actual policies.

## Project facts

- **Owner**: MushroomRL (https://github.com/MushroomRL) · contact: mushroom4rl@gmail.com · Twitter: @Mushroom_RL
- **Stars / forks**: 925 / 156 · **Last commit**: 2026-04-02 · **License**: MIT
- **Languages**: Python ~100%, Makefile <1%
- **OS**: Linux, macOS · **GPU / RAM**: not specified (depends on the chosen deep-learning backend such as PyTorch/TensorFlow and on environments such as MuJoCo/Habitat)
- **Python**: 3.x (inferred from the `pip3`/`python3` commands; no explicit version constraint)
- **Optional dependencies**: PyTorch, TensorFlow, Gymnasium, PyBullet, MuJoCo, dm_control, Habitat, iGibson, PyQt5, SWIG
- **GitHub topics**: reinforcement-learning, deep-reinforcement-learning, deep-learning, openai-gym, atari, rl, pytorch, mujoco, dqn, ddpg, trpo, qlearning, pybullet, sac

Environment notes:
1. A basic install needs only `pip install mushroom_rl`; the full feature set requires `mushroom_rl[all]`.
2. Ubuntu needs system-level SDL and FFmpeg packages installed manually; macOS needs swig.
3. Under conda, QT plotting problems may require setting the `QT_QPA_PLATFORM_PLUGIN_PATH` environment variable by hand.
4. Complex simulation environments such as Habitat or iGibson require separately downloading large datasets and asset files per their official guides, and may need YAML path configuration or symbolic links.
5. MuJoCo has additional installation steps described on its project page.

## FAQ

**Q: How do I record the loss during training and plot a loss curve?**

A: Pass a fit callback object to the core. The callback can be a class instance whose constructor receives a reference to the model (e.g. its `.model` attribute), so the model's data can be accessed from inside the callback. If you use a custom loss class, you can also log directly inside the loss-computation function. The multi-loss case is untested by default, but a custom callback or modified loss internals is a workable solution. ([source](https://github.com/MushroomRL/mushroom-rl/issues/59))
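A sketch of that callback pattern, under stated assumptions: `callbacks_fit` is the Core argument named in the 1.5.0 changelog below, callbacks are assumed to be callables invoked with the fitted dataset, and the `loss_history` attribute read here is hypothetical — substitute whatever your approximator or custom loss actually exposes.

```python
class LossLogger:
    """Fit callback that records a loss value after every fit (sketch)."""

    def __init__(self, approximator):
        self._approximator = approximator  # reference to the agent's model
        self.losses = []

    def __call__(self, dataset):
        # Called by the Core after each fit; 'loss_history' is a hypothetical
        # attribute standing in for whatever your model actually exposes.
        self.losses.append(self._approximator.loss_history[-1])

# Usage (continuing from an existing agent and mdp):
#   logger = LossLogger(agent.approximator)
#   core = Core(agent, mdp, callbacks_fit=[logger])
#   ... after training, plot logger.losses
```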
**Q: How do I use a Categorical policy for policy-gradient learning in a discrete action space (e.g. GridWorld)?**

A: MushroomRL does not directly support policy-search methods over finite state spaces, and the existing Boltzmann policy is based on Q-values. The solution is to implement your own Boltzmann policy over a Categorical distribution against the `ParametricPolicy` interface. The maintainers do not guarantee this works in every scenario, but it is the recommended route to a simple REINFORCE setup without pulling in deep networks or TD-learning assumptions. ([source](https://github.com/MushroomRL/mushroom-rl/issues/86))
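A skeleton of that suggestion, assuming the `ParametricPolicy` methods (`draw_action`, `diff_log`, `get_weights`/`set_weights`, `weights_size`) keep the names used in the 1.x policy module; the tabular softmax parameterisation is an illustration, not official code.

```python
import numpy as np
from mushroom_rl.policy import ParametricPolicy

class CategoricalBoltzmann(ParametricPolicy):
    """Tabular softmax policy over discrete states and actions (sketch)."""

    def __init__(self, n_states, n_actions):
        self._theta = np.zeros((n_states, n_actions))  # one logit per (s, a)

    def _probs(self, s):
        e = np.exp(self._theta[s] - self._theta[s].max())  # stable softmax
        return e / e.sum()

    def draw_action(self, state):
        s = int(np.asarray(state).ravel()[0])
        return np.array([np.random.choice(self._theta.shape[1], p=self._probs(s))])

    def diff_log(self, state, action):
        # grad of log pi(a|s) w.r.t. theta: one-hot(a) - pi(.|s), on row s only
        s = int(np.asarray(state).ravel()[0])
        a = int(np.asarray(action).ravel()[0])
        g = np.zeros_like(self._theta)
        g[s] = -self._probs(s)
        g[s, a] += 1.
        return g.ravel()

    def set_weights(self, weights):
        self._theta = weights.reshape(self._theta.shape)

    def get_weights(self):
        return self._theta.ravel()

    @property
    def weights_size(self):
        return self._theta.size
```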
**Q: PPO fails to converge (or hits a bug) on Lunar Lander — what now?**

A: This is a known distribution-handling problem, already fixed in the latest commits on the dev branch. The fix wraps the distribution and modifies the `log_prob` behaviour so that only the Categorical distribution is affected, leaving others (e.g. Gaussian) untouched. Update to a version containing the fix; if the problem persists, reopen the issue. ([source](https://github.com/MushroomRL/mushroom-rl/issues/124))

**Q: Memory keeps growing (a leak) when training DQN on Atari — how do I fix it?**

A: This is usually caused by an Atari environment version left incompatible by an OpenAI Gym interface change, not by a bug in MushroomRL itself. The fix is to find an Atari library version compatible with your Gym version. One user reported that reinstalling all dependencies on another server (using the latest GitHub version of mushroom_rl) resolved the problem. Check and adjust the combination of `gym`, `atari-py`, and `torch` versions for compatibility. ([source](https://github.com/MushroomRL/mushroom-rl/issues/111))

**Q: Is the baseline shape in REINFORCE computed correctly? How is the single-trajectory case handled?**

A: The standard use of REINFORCE estimates the gradient from multiple trajectories. In the single-trajectory case, REINFORCE without a baseline performs very poorly, so to keep the code clean the maintainers plan to add an assert preventing the algorithm from running in single-trajectory mode. If you must handle a single trajectory, one workaround is to force the baseline to 0, though this is not the recommended standard practice. ([source](https://github.com/MushroomRL/mushroom-rl/issues/89))

## Releases

### 1.10.2 (2025-04-14)
Maintenance release for the Mushroom-RL 1.x line:
- Pinned package versions to simplify installation;
- Removed the minigrid example (currently incompatible with the 1.0 dependencies);
- Fixed a bug in CV2Viewer.

Note that Mushroom 2.0 is under active development; the 1.x series receives bug fixes only.

### 1.10.1 (2024-04-18)
- Fixed loading of the alpha parameter in SAC.

### 1.10.0 (2023-10-26)
- Implemented a recorder interface for recording environment videos;
- Updated the MuJoCo interface to support multiple environment XML files;
- Updated the MuJoCo viewer: headless rendering, multiple backends, advanced features and options, and multi-view support;
- Improved the SAC algorithm;
- Various bug fixes and code cleanup.

### 1.9.2 (2023-06-14)
Minor release with bug fixes and improvements:
- Fixed scaling of the MuJoCo viewer window on macOS;
- Improved polynomial features and Gaussian RBFs;
- Added the ProMP policy;
- Fixed a bug in BoltzmannTorchPolicy, which now works correctly with PPO and TRPO;
- Minor serialization bug fixes.

### 1.9.1 (2023-02-14)
Minor changes to the MuJoCo interface:
- Updated to support the latest MuJoCo release [2.3.2](https://github.com/deepmind/mujoco/releases/tag/2.3.2);
- Added the ability to reset the MuJoCo environment state from an observation.

### 1.9.0 (2023-01-31)
- Removed all Cython dependencies; installation is now much easier!
- Removed the Cython-based humanoid environment.
- Improved the PyBullet environments.
- Added a MuJoCo interface using DeepMind's native MuJoCo bindings.
- Added an air-hockey environment implemented on MuJoCo.
- The core module now collects environment info and passes it to the agent's `fit` method. This breaks the previous MushroomRL interface, but enables support for different kinds of algorithms (e.g. safe RL methods).
- Improved the documentation.
- Several minor updates and bug fixes.

### 1.7.2 (2022-06-30)
- Added plotting features, previously part of the MushroomRL benchmarks;
- Fixed the MuJoCo interface;
- Added a missing discount factor in the eNAC update;
- Gym real-time rendering;
- The PyBullet interface now enforces joint torque limits.

### 1.7.1 (2022-05-02)
- Improved the documentation;
- Added the MORE algorithm;
- Added quantile-regression DQN;
- Added wrappers for Minigrid, Habitat, and iGibson (thanks @sparisi);
- Added the AirHockey environments (still experimental; these environments may change in the future);
- Upgraded to the new OpenAI Gym;
- Fixed several bugs in NoisyDQN and LSPI;
- Fixed the clipped Gaussian policy, which now works as intended;
- Improved the DMControl environments, adding pixel-observation support and new environments such as the 'manipulator' robot arm (thanks @jdsalmonson).

### 1.7.0 (2021-06-09)
- The agent and environment interfaces have been moved to the core module;
- Added a convenient environment registration interface: environments can now be created directly by name (see the sketch after this entry);
- Updated the documentation;
- Added tutorials;
- Improved CONTRIBUTING.md;
- Added ConstrainedREPS;
- Fixed a bug in GPOMDP;
- Improved loss logging in the regressor fit function;
- General cleanup of the environment constructors;
- Improved the PyBullet environment;
- Improved Voronoi tiles;
- Added a prediction parameter to the DQN and actor-critic algorithms;
- Added Logger support to DQN.
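A sketch of the registration interface mentioned in the 1.7.0 entry above. The registered name `'Gym.CartPole-v1'` is an assumption borrowed from the documentation's naming pattern; the names actually available depend on which optional extras you installed.

```python
from mushroom_rl.core import Environment

# Create an environment by its registered name (the name is an assumption)
env = Environment.make('Gym.CartPole-v1')
print(env.info.observation_space, env.info.action_space)
```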
### 1.6.1 (2021-03-30)
- The replay memory can return truncated n-step returns;
- Added the Rainbow and NoisyDQN algorithms;
- Improved the PyBullet simulation environment;
- Added a clipped Gaussian policy;
- Added a prediction parameter to policies and approximators.

### 1.6.0 (2021-02-02)
- Added the MushroomRL logger;
- Support for wrapper args in the Gym environment;
- Fixes in tiles;
- Dueling DQN added;
- MDPInfo and spaces are now serializable;
- Optimizers are now serializable;
- DoubleFQI and BoostedFQI split into separate modules;
- Minor bug fixes.

### 1.5.4 (2020-12-09)
- Fixes in tiles;
- Maxmin DQN and Maxmin Q-Learning added;
- Minor fixes in SAC;
- Added an optimizer to the policy gradient algorithms;
- Code quality improvements;
- Minor bug fixes.

### 1.5.3 (2020-09-28)
- Added the QLambda algorithm (thanks to @nikosNalmpantis);
- Improved regressor support for Torch tensors;
- Added the frames module;
- Fixed the observation shape in the Atari environment.

### 1.5.2 (2020-08-26)
- Added missing resources to setup;
- Moved the humanoid meshes into the meshes folder (was Geometry);
- Improved the makefile to ensure a proper build before uploading.

### 1.5.1 (2020-08-24)
Fixed the source distribution by adding a missing .pyx file.

### 1.5.0 (2020-08-24)
- Implemented PEP 517/518;
- Improved the saving system (see the save/load sketch at the end of this changelog):
    * Added a Serializable interface that can be used by any Mushroom class;
    * Approximator is now serializable;
    * Fixed an issue with save/load attributes;
    * Everything is saved in a zip file with the .msh extension.
- Implemented the CMAC regressor;
- Step callback added in core;
- callbacks_episode renamed to callbacks_fit;
- Distributions are now serializable;
- Added entropy computation for distributions;
- Policies are now serializable;
- Added BoltzmannTorchPolicy;
- Added an LQR solver;
- Added the PyBullet environment;
- Fixed an issue in the Gym environment when the PyBullet import was missing;
- LQR bug fixed and the generate function improved;
- Improved the MuJoCo environment;
- Cythonization and cleanup of the HumanoidGait environment;
- TRPO bug fixed;
- Added RandomFourierBasis;
- Fixed the pendulum_trust_region example.

### 1.4.0 (2020-03-31)
- Save and load of the agent;
- Online data and performance visualization using pyqtgraph;
- Normalization functions added as preprocessors;
- Step callback added;
- Humanoid gait environment.

### 1.3.0 (2020-01-04)
MushroomRL is the new name of the library. Code, docs, and config files have been updated accordingly. The package is now available on PyPI and can be installed using pip.

### 1.2.0 (2019-11-18)
- Improved the test suite;
- Implemented deep RL algorithms;
- Improved visualization;
- Improved code quality;
- Several bug fixes;
- Refactoring of the Torch approximator.

### 1.1 (2019-07-19)
- Bug fixes;
- Torch support;
- dm_control and PyBullet interfaces;
- Improved docs.
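The agent save/load mechanism from the 1.4.0 and 1.5.0 entries above looks roughly like this in use. A minimal sketch, assuming `Agent.load` is exposed through the Serializable interface as in the 1.x releases; the file name is a placeholder.

```python
from mushroom_rl.core import Agent

# Continuing from any trained `agent` (agents are Serializable since 1.5.0):
agent.save('my_agent.msh')             # zip-based .msh format
restored = Agent.load('my_agent.msh')  # restore the trained agent from disk
```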