[{"data":1,"prerenderedAt":-1},["ShallowReactive",2],{"similar-qgallouedec--panda-gym":3,"tool-qgallouedec--panda-gym":64},[4,17,27,35,43,56],{"id":5,"name":6,"github_repo":7,"description_zh":8,"stars":9,"difficulty_score":10,"last_commit_at":11,"category_tags":12,"status":16},3808,"stable-diffusion-webui","AUTOMATIC1111\u002Fstable-diffusion-webui","stable-diffusion-webui 是一个基于 Gradio 构建的网页版操作界面，旨在让用户能够轻松地在本地运行和使用强大的 Stable Diffusion 图像生成模型。它解决了原始模型依赖命令行、操作门槛高且功能分散的痛点，将复杂的 AI 绘图流程整合进一个直观易用的图形化平台。\n\n无论是希望快速上手的普通创作者、需要精细控制画面细节的设计师，还是想要深入探索模型潜力的开发者与研究人员，都能从中获益。其核心亮点在于极高的功能丰富度：不仅支持文生图、图生图、局部重绘（Inpainting）和外绘（Outpainting）等基础模式，还独创了注意力机制调整、提示词矩阵、负向提示词以及“高清修复”等高级功能。此外，它内置了 GFPGAN 和 CodeFormer 等人脸修复工具，支持多种神经网络放大算法，并允许用户通过插件系统无限扩展能力。即使是显存有限的设备，stable-diffusion-webui 也提供了相应的优化选项，让高质量的 AI 艺术创作变得触手可及。",162132,3,"2026-04-05T11:01:52",[13,14,15],"开发框架","图像","Agent","ready",{"id":18,"name":19,"github_repo":20,"description_zh":21,"stars":22,"difficulty_score":23,"last_commit_at":24,"category_tags":25,"status":16},1381,"everything-claude-code","affaan-m\u002Feverything-claude-code","everything-claude-code 是一套专为 AI 编程助手（如 Claude Code、Codex、Cursor 等）打造的高性能优化系统。它不仅仅是一组配置文件，而是一个经过长期实战打磨的完整框架，旨在解决 AI 代理在实际开发中面临的效率低下、记忆丢失、安全隐患及缺乏持续学习能力等核心痛点。\n\n通过引入技能模块化、直觉增强、记忆持久化机制以及内置的安全扫描功能，everything-claude-code 能显著提升 AI 在复杂任务中的表现，帮助开发者构建更稳定、更智能的生产级 AI 代理。其独特的“研究优先”开发理念和针对 Token 消耗的优化策略，使得模型响应更快、成本更低，同时有效防御潜在的攻击向量。\n\n这套工具特别适合软件开发者、AI 研究人员以及希望深度定制 AI 工作流的技术团队使用。无论您是在构建大型代码库，还是需要 AI 协助进行安全审计与自动化测试，everything-claude-code 都能提供强大的底层支持。作为一个曾荣获 Anthropic 黑客大奖的开源项目，它融合了多语言支持与丰富的实战钩子（hooks），让 AI 真正成长为懂上",138956,2,"2026-04-05T11:33:21",[13,15,26],"语言模型",{"id":28,"name":29,"github_repo":30,"description_zh":31,"stars":32,"difficulty_score":23,"last_commit_at":33,"category_tags":34,"status":16},2271,"ComfyUI","Comfy-Org\u002FComfyUI","ComfyUI 是一款功能强大且高度模块化的视觉 AI 引擎，专为设计和执行复杂的 Stable Diffusion 图像生成流程而打造。它摒弃了传统的代码编写模式，采用直观的节点式流程图界面，让用户通过连接不同的功能模块即可构建个性化的生成管线。\n\n这一设计巧妙解决了高级 AI 
绘图工作流配置复杂、灵活性不足的痛点。用户无需具备编程背景，也能自由组合模型、调整参数并实时预览效果，轻松实现从基础文生图到多步骤高清修复等各类复杂任务。ComfyUI 拥有极佳的兼容性，不仅支持 Windows、macOS 和 Linux 全平台，还广泛适配 NVIDIA、AMD、Intel 及苹果 Silicon 等多种硬件架构，并率先支持 SDXL、Flux、SD3 等前沿模型。\n\n无论是希望深入探索算法潜力的研究人员和开发者，还是追求极致创作自由度的设计师与资深 AI 绘画爱好者，ComfyUI 都能提供强大的支持。其独特的模块化架构允许社区不断扩展新功能，使其成为当前最灵活、生态最丰富的开源扩散模型工具之一，帮助用户将创意高效转化为现实。",107662,"2026-04-03T11:11:01",[13,14,15],{"id":36,"name":37,"github_repo":38,"description_zh":39,"stars":40,"difficulty_score":23,"last_commit_at":41,"category_tags":42,"status":16},3704,"NextChat","ChatGPTNextWeb\u002FNextChat","NextChat 是一款轻量且极速的 AI 助手，旨在为用户提供流畅、跨平台的大模型交互体验。它完美解决了用户在多设备间切换时难以保持对话连续性，以及面对众多 AI 模型不知如何统一管理的痛点。无论是日常办公、学习辅助还是创意激发，NextChat 都能让用户随时随地通过网页、iOS、Android、Windows、MacOS 或 Linux 端无缝接入智能服务。\n\n这款工具非常适合普通用户、学生、职场人士以及需要私有化部署的企业团队使用。对于开发者而言，它也提供了便捷的自托管方案，支持一键部署到 Vercel 或 Zeabur 等平台。\n\nNextChat 的核心亮点在于其广泛的模型兼容性，原生支持 Claude、DeepSeek、GPT-4 及 Gemini Pro 等主流大模型，让用户在一个界面即可自由切换不同 AI 能力。此外，它还率先支持 MCP（Model Context Protocol）协议，增强了上下文处理能力。针对企业用户，NextChat 提供专业版解决方案，具备品牌定制、细粒度权限控制、内部知识库整合及安全审计等功能，满足公司对数据隐私和个性化管理的高标准要求。",87618,"2026-04-05T07:20:52",[13,26],{"id":44,"name":45,"github_repo":46,"description_zh":47,"stars":48,"difficulty_score":23,"last_commit_at":49,"category_tags":50,"status":16},2268,"ML-For-Beginners","microsoft\u002FML-For-Beginners","ML-For-Beginners 是由微软推出的一套系统化机器学习入门课程，旨在帮助零基础用户轻松掌握经典机器学习知识。这套课程将学习路径规划为 12 周，包含 26 节精炼课程和 52 道配套测验，内容涵盖从基础概念到实际应用的完整流程，有效解决了初学者面对庞大知识体系时无从下手、缺乏结构化指导的痛点。\n\n无论是希望转型的开发者、需要补充算法背景的研究人员，还是对人工智能充满好奇的普通爱好者，都能从中受益。课程不仅提供了清晰的理论讲解，还强调动手实践，让用户在循序渐进中建立扎实的技能基础。其独特的亮点在于强大的多语言支持，通过自动化机制提供了包括简体中文在内的 50 多种语言版本，极大地降低了全球不同背景用户的学习门槛。此外，项目采用开源协作模式，社区活跃且内容持续更新，确保学习者能获取前沿且准确的技术资讯。如果你正寻找一条清晰、友好且专业的机器学习入门之路，ML-For-Beginners 将是理想的起点。",84991,"2026-04-05T10:45:23",[14,51,52,53,15,54,26,13,55],"数据工具","视频","插件","其他","音频",{"id":57,"name":58,"github_repo":59,"description_zh":60,"stars":61,"difficulty_score":10,"last_commit_at":62,"category_tags":63,"status":16},3128,"ragflow","infiniflow\u002Fragflow","RAGFlow 
是一款领先的开源检索增强生成（RAG）引擎，旨在为大语言模型构建更精准、可靠的上下文层。它巧妙地将前沿的 RAG 技术与智能体（Agent）能力相结合，不仅支持从各类文档中高效提取知识，还能让模型基于这些知识进行逻辑推理和任务执行。\n\n在大模型应用中，幻觉问题和知识滞后是常见痛点。RAGFlow 通过深度解析复杂文档结构（如表格、图表及混合排版），显著提升了信息检索的准确度，从而有效减少模型“胡编乱造”的现象，确保回答既有据可依又具备时效性。其内置的智能体机制更进一步，使系统不仅能回答问题，还能自主规划步骤解决复杂问题。\n\n这款工具特别适合开发者、企业技术团队以及 AI 研究人员使用。无论是希望快速搭建私有知识库问答系统，还是致力于探索大模型在垂直领域落地的创新者，都能从中受益。RAGFlow 提供了可视化的工作流编排界面和灵活的 API 接口，既降低了非算法背景用户的上手门槛，也满足了专业开发者对系统深度定制的需求。作为基于 Apache 2.0 协议开源的项目，它正成为连接通用大模型与行业专有知识之间的重要桥梁。",77062,"2026-04-04T04:44:48",[15,14,13,26,54],{"id":65,"github_repo":66,"name":67,"description_en":68,"description_zh":69,"ai_summary_zh":69,"readme_en":70,"readme_zh":71,"quickstart_zh":72,"use_case_zh":73,"hero_image_url":74,"owner_login":75,"owner_name":76,"owner_avatar_url":77,"owner_bio":78,"owner_company":79,"owner_location":80,"owner_email":81,"owner_twitter":82,"owner_website":81,"owner_url":83,"languages":84,"stars":93,"forks":94,"last_commit_at":95,"license":96,"difficulty_score":97,"env_os":98,"env_gpu":98,"env_ram":98,"env_deps":99,"category_tags":104,"github_topics":105,"view_count":10,"oss_zip_url":81,"oss_zip_packed_at":81,"status":16,"created_at":113,"updated_at":114,"faqs":115,"releases":149},205,"qgallouedec\u002Fpanda-gym","panda-gym","Set of robotic environments based on PyBullet physics engine and gymnasium.","panda-gym 是一个基于 PyBullet 物理引擎和 gymnasium 接口打造的机器人仿真环境集合，专为机器人学习设计。panda-gym 解决了强化学习算法在真实硬件上训练成本高、风险大的难题，让研究人员能够在安全的仿真环境中快速验证想法。\n\npanda-gym 非常适合从事机器人控制、深度强化学习研究的开发者与科研人员。内置了多种经典任务场景，包括机械臂到达目标、推动物体、滑动、抓取放置、堆叠及翻转等。所有环境均完全兼容标准的 Gym 接口，只需几行代码即可启动训练。此外，panda-gym 还提供了丰富的预训练模型基线，支持用户在 Hugging Face 上直接获取代理模型进行对比实验。作为 OpenAI Fetch 环境的开源替代方案，panda-gym 凭借易安装、文档完善的特点，已成为机器人仿真学习领域的高效选择。","# panda-gym\n\nSet of robotic environments based on PyBullet physics engine and gymnasium.\n\n[![PyPI 
version](https:\u002F\u002Fimg.shields.io\u002Fpypi\u002Fv\u002Fpanda-gym.svg?logo=pypi&logoColor=FFE873)](https:\u002F\u002Fpypi.org\u002Fproject\u002Fpanda-gym\u002F)\n[![Downloads](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fqgallouedec_panda-gym_readme_dfdd18368d89.png)](https:\u002F\u002Fpepy.tech\u002Fproject\u002Fpanda-gym)\n[![GitHub](https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Flicense\u002Fqgallouedec\u002Fpanda-gym.svg)](LICENSE.txt)\n[![build](https:\u002F\u002Fgithub.com\u002Fqgallouedec\u002Fpanda-gym\u002Factions\u002Fworkflows\u002Fbuild.yml\u002Fbadge.svg?branch=master)](https:\u002F\u002Fgithub.com\u002Fqgallouedec\u002Fpanda-gym\u002Factions\u002Fworkflows\u002Fbuild.yml)\n[![codecov](https:\u002F\u002Fcodecov.io\u002Fgh\u002Fqgallouedec\u002Fpanda-gym\u002Fbranch\u002Fmaster\u002Fgraph\u002Fbadge.svg?token=pv0VdsXByP)](https:\u002F\u002Fcodecov.io\u002Fgh\u002Fqgallouedec\u002Fpanda-gym)\n[![Code style: black](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002Fcode%20style-black-000000.svg)](https:\u002F\u002Fgithub.com\u002Fpsf\u002Fblack)\n[![arXiv](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002Fcs.LG-arXiv%3A2106.13687-B31B1B.svg)](https:\u002F\u002Farxiv.org\u002Fabs\u002F2106.13687)\n\n## Documentation\n\nCheck out the [documentation](https:\u002F\u002Fpanda-gym.readthedocs.io\u002Fen\u002Flatest\u002F).\n\n## Installation\n\n### Using PyPI\n\n```bash\npip install panda-gym\n```\n\n### From source\n\n```bash\ngit clone https:\u002F\u002Fgithub.com\u002Fqgallouedec\u002Fpanda-gym.git\npip install -e panda-gym\n```\n\n## Usage\n\n```python\nimport gymnasium as gym\nimport panda_gym\n\nenv = gym.make('PandaReach-v3', render_mode=\"human\")\n\nobservation, info = env.reset()\n\nfor _ in range(1000):\n    action = env.action_space.sample() # random action\n    observation, reward, terminated, truncated, info = env.step(action)\n\n    if terminated or truncated:\n        observation, info = env.reset()\n\nenv.close()\n```\n\nYou 
can also [![Open in Colab](https:\u002F\u002Fcolab.research.google.com\u002Fassets\u002Fcolab-badge.svg)](https:\u002F\u002Fcolab.research.google.com\u002Fgithub\u002Fqgallouedec\u002Fpanda-gym\u002Fblob\u002Fmaster\u002Fexamples\u002FPickAndPlace.ipynb)\n\n## Environments\n\n|                                  |                                                |\n| :------------------------------: | :--------------------------------------------: |\n|         `PandaReach-v3`          |                 `PandaPush-v3`                 |\n| ![PandaReach-v3](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fqgallouedec_panda-gym_readme_4f6e63320623.png) |         ![PandaPush-v3](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fqgallouedec_panda-gym_readme_5bda099a42d0.png)         |\n|         `PandaSlide-v3`          |             `PandaPickAndPlace-v3`             |\n| ![PandaSlide-v3](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fqgallouedec_panda-gym_readme_6ae5c2d56f79.png) | ![PandaPickAndPlace-v3](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fqgallouedec_panda-gym_readme_3c4db5e5489c.png) |\n|         `PandaStack-v3`          |              `PandaFlip-v3`                    |\n| ![PandaStack-v3](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fqgallouedec_panda-gym_readme_bab03d81d5d1.png) | ![PandaFlip-v3](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fqgallouedec_panda-gym_readme_88110a5a1ae5.png) |\n\n## Baselines results\n\nBaselines results are available in [rl-baselines3-zoo](https:\u002F\u002Fgithub.com\u002FDLR-RM\u002Frl-baselines3-zoo) and the pre-trained agents in the [Hugging Face Hub](https:\u002F\u002Fhuggingface.co\u002Fsb3).\n\n## Citation\n\nCite as\n\n```bib\n@article{gallouedec2021pandagym,\n  title        = {{panda-gym: Open-Source Goal-Conditioned Environments for Robotic Learning}},\n  author       = {Gallou{\\'e}dec, Quentin and Cazin, Nicolas and Dellandr{\\'e}a, Emmanuel and Chen, Liming},\n  year         
= 2021,\n  journal      = {4th Robot Learning Workshop: Self-Supervised and Lifelong Learning at NeurIPS},\n}\n```\n\nEnvironments are widely inspired from [OpenAI Fetch environments](https:\u002F\u002Fopenai.com\u002Fblog\u002Fingredients-for-robotics-research\u002F). \n","# panda-gym\n\n基于 PyBullet 物理引擎和 gymnasium（强化学习库）的一组机器人环境（environments）。\n\n[![PyPI version](https:\u002F\u002Fimg.shields.io\u002Fpypi\u002Fv\u002Fpanda-gym.svg?logo=pypi&logoColor=FFE873)](https:\u002F\u002Fpypi.org\u002Fproject\u002Fpanda-gym\u002F)\n[![Downloads](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fqgallouedec_panda-gym_readme_dfdd18368d89.png)](https:\u002F\u002Fpepy.tech\u002Fproject\u002Fpanda-gym)\n[![GitHub](https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Flicense\u002Fqgallouedec\u002Fpanda-gym.svg)](LICENSE.txt)\n[![build](https:\u002F\u002Fgithub.com\u002Fqgallouedec\u002Fpanda-gym\u002Factions\u002Fworkflows\u002Fbuild.yml\u002Fbadge.svg?branch=master)](https:\u002F\u002Fgithub.com\u002Fqgallouedec\u002Fpanda-gym\u002Factions\u002Fworkflows\u002Fbuild.yml)\n[![codecov](https:\u002F\u002Fcodecov.io\u002Fgh\u002Fqgallouedec\u002Fpanda-gym\u002Fbranch\u002Fmaster\u002Fgraph\u002Fbadge.svg?token=pv0VdsXByP)](https:\u002F\u002Fcodecov.io\u002Fgh\u002Fqgallouedec\u002Fpanda-gym)\n[![Code style: black](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002Fcode%20style-black-000000.svg)](https:\u002F\u002Fgithub.com\u002Fpsf\u002Fblack)\n[![arXiv](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002Fcs.LG-arXiv%3A2106.13687-B31B1B.svg)](https:\u002F\u002Farxiv.org\u002Fabs\u002F2106.13687)\n\n## 文档\n\n查看 [文档](https:\u002F\u002Fpanda-gym.readthedocs.io\u002Fen\u002Flatest\u002F)。\n\n## 安装\n\n### 使用 PyPI\n\n```bash\npip install panda-gym\n```\n\n### 从源代码安装\n\n```bash\ngit clone https:\u002F\u002Fgithub.com\u002Fqgallouedec\u002Fpanda-gym.git\npip install -e panda-gym\n```\n\n## 使用方法\n\n```python\nimport gymnasium as gym\nimport panda_gym\n\nenv = gym.make('PandaReach-v3', 
render_mode=\"human\")\n\nobservation, info = env.reset()\n\nfor _ in range(1000):\n    action = env.action_space.sample() # random action\n    observation, reward, terminated, truncated, info = env.step(action)\n\n    if terminated or truncated:\n        observation, info = env.reset()\n\nenv.close()\n```\n\n您也可以 [![Open in Colab](https:\u002F\u002Fcolab.research.google.com\u002Fassets\u002Fcolab-badge.svg)](https:\u002F\u002Fcolab.research.google.com\u002Fgithub\u002Fqgallouedec\u002Fpanda-gym\u002Fblob\u002Fmaster\u002Fexamples\u002FPickAndPlace.ipynb)\n\n## 环境\n\n|                                  |                                                |\n| :------------------------------: | :--------------------------------------------: |\n|         `PandaReach-v3`          |                 `PandaPush-v3`                 |\n| ![PandaReach-v3](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fqgallouedec_panda-gym_readme_4f6e63320623.png) |         ![PandaPush-v3](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fqgallouedec_panda-gym_readme_5bda099a42d0.png)         |\n|         `PandaSlide-v3`          |             `PandaPickAndPlace-v3`             |\n| ![PandaSlide-v3](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fqgallouedec_panda-gym_readme_6ae5c2d56f79.png) | ![PandaPickAndPlace-v3](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fqgallouedec_panda-gym_readme_3c4db5e5489c.png) |\n|         `PandaStack-v3`          |              `PandaFlip-v3`                    |\n| ![PandaStack-v3](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fqgallouedec_panda-gym_readme_bab03d81d5d1.png) | ![PandaFlip-v3](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fqgallouedec_panda-gym_readme_88110a5a1ae5.png) |\n\n## 基线结果\n\n基线结果可在 [rl-baselines3-zoo](https:\u002F\u002Fgithub.com\u002FDLR-RM\u002Frl-baselines3-zoo) 中找到，预训练智能体（agents）位于 [Hugging Face Hub](https:\u002F\u002Fhuggingface.co\u002Fsb3)。\n\n## 
引用\n\n引用格式如下\n\n```bib\n@article{gallouedec2021pandagym,\n  title        = {{panda-gym: Open-Source Goal-Conditioned Environments for Robotic Learning}},\n  author       = {Gallou{\\'e}dec, Quentin and Cazin, Nicolas and Dellandr{\\'e}a, Emmanuel and Chen, Liming},\n  year         = 2021,\n  journal      = {4th Robot Learning Workshop: Self-Supervised and Lifelong Learning at NeurIPS},\n}\n```\n\n环境设计广泛灵感来源于 [OpenAI Fetch 环境](https:\u002F\u002Fopenai.com\u002Fblog\u002Fingredients-for-robotics-research\u002F)。","# panda-gym 快速上手指南\n\npanda-gym 是一套基于 PyBullet 物理引擎和 Gymnasium 接口的机器人学习环境，广泛用于强化学习算法的开发与测试。\n\n## 环境准备\n\n*   **操作系统**: Linux \u002F macOS \u002F Windows\n*   **Python 版本**: 建议 Python 3.7 及以上\n*   **前置依赖**: 无需手动安装，`panda-gym` 会自动安装 `gymnasium` 和 `pybullet` 等依赖库\n\n## 安装步骤\n\n### 方式一：通过 PyPI 安装（推荐）\n\n```bash\npip install panda-gym\n```\n\n> **提示**：国内用户如遇下载速度慢，可添加国内镜像源（如清华源）：\n> `pip install panda-gym -i https:\u002F\u002Fpypi.tuna.tsinghua.edu.cn\u002Fsimple`\n\n### 方式二：从源码安装\n\n```bash\ngit clone https:\u002F\u002Fgithub.com\u002Fqgallouedec\u002Fpanda-gym.git\npip install -e panda-gym\n```\n\n## 基本使用\n\n以下是最简单的环境运行示例，以 `PandaReach-v3` 为例：\n\n```python\nimport gymnasium as gym\nimport panda_gym\n\nenv = gym.make('PandaReach-v3', render_mode=\"human\")\n\nobservation, info = env.reset()\n\nfor _ in range(1000):\n    action = env.action_space.sample() # 随机动作\n    observation, reward, terminated, truncated, info = env.step(action)\n\n    if terminated or truncated:\n        observation, info = env.reset()\n\nenv.close()\n```\n\n**可用环境列表**：\n除了 `PandaReach-v3`，还支持以下环境：\n*   `PandaPush-v3` (推动)\n*   `PandaSlide-v3` (滑动)\n*   `PandaPickAndPlace-v3` (抓取放置)\n*   `PandaStack-v3` (堆叠)\n*   `PandaFlip-v3` (翻转)\n\n更多详细信息请参阅 [官方文档](https:\u002F\u002Fpanda-gym.readthedocs.io\u002Fen\u002Flatest\u002F)。","某高校机器人实验室正在研发基于强化学习的机械臂抓取算法，需要在仿真环境中高效验证控制策略。\n\n### 没有 panda-gym 时\n- 团队需手动搭建 PyBullet 物理仿真场景，配置机械臂模型与碰撞检测耗时数周，起步门槛极高。\n- 
自定义奖励函数和观测空间容易出错，调试环境代码占用了大部分研发时间，挤占了算法优化精力。\n- 缺乏标准基准环境，难以与现有研究成果进行公平对比，论文复现困难，实验可信度受质疑。\n- 每次算法迭代都要重新检查环境稳定性，实验效率极低，阻碍算法快速迭代与验证。\n\n### 使用 panda-gym 后\n- 通过 `pip install panda-gym` 即可直接调用 `PandaPickAndPlace-v3` 等标准环境，几分钟内完成配置。\n- 兼容 gymnasium 接口，只需关注算法逻辑，无需关心底层物理引擎细节，开发体验流畅。\n- 内置多种任务场景（如 Reach、Push、Slide），可快速验证算法在不同任务上的泛化能力。\n- 支持对接 rl-baselines3-zoo，直接复用预训练模型加速实验进程，显著缩短研发周期。\n\npanda-gym 将繁琐的仿真环境搭建转化为简单的接口调用，让研究者专注于核心算法创新。","https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fqgallouedec_panda-gym_4f6e6332.png","qgallouedec","Quentin Gallouédec","https:\u002F\u002Foss.gittoolsai.com\u002Favatars\u002Fqgallouedec_e543af38.jpg","Research @huggingface","@huggingface","Canada",null,"QGallouedec","https:\u002F\u002Fgithub.com\u002Fqgallouedec",[85,89],{"name":86,"color":87,"percentage":88},"Python","#3572A5",99.4,{"name":90,"color":91,"percentage":92},"Makefile","#427819",0.6,746,135,"2026-03-27T19:33:02","MIT",1,"未说明",{"notes":100,"python":98,"dependencies":101},"基于 PyBullet 物理引擎和 gymnasium 的机器人环境集合。支持通过 PyPI 或源码安装。提供 Google Colab 示例。基线结果和预训练代理可在 rl-baselines3-zoo 和 Hugging Face Hub 获取。",[102,103],"gymnasium","pybullet",[54,13],[106,107,108,109,110,111,112],"franka-emika","robotics","reinforcement-learning","python","deep-learning","machine-learning","artificial-intelligence","2026-03-27T02:49:30.150509","2026-04-06T07:15:03.538330",[116,121,126,130,135,139,144],{"id":117,"question_zh":118,"answer_zh":119,"source_url":120},546,"PandaPickAndPlace 任务的训练超参数是多少？如何复现结果？","建议使用 TQC 算法，DDPG 和 SAC 的推荐超参数较少。如果无法复现结果，请检查代码是否有 bug。参考的 TQC 配置包括：`batch_size=2048`, `buffer_size=1_000_000`, `gamma=0.95`, `learning_rate=0.001`, `policy_kwargs=dict(net_arch=[512, 512, 512], n_critics=2)`, `replay_buffer_kwargs=dict(goal_selection_strategy='future', n_sampled_goal=4)`。使用 HuggingFace 上的超参数通常可以成功。","https:\u002F\u002Fgithub.com\u002Fqgallouedec\u002Fpanda-gym\u002Fissues\u002F66",{"id":122,"question_zh":123,"answer_zh":124,"source_url":125},547,"哪里可以查看官方基准测试结果和超参数？","维护者推荐查看 
openrlbenchmark 项目，这里包含了所有任务的结果和超参数。访问地址：https:\u002F\u002Fwandb.ai\u002Fopenrlbenchmark\u002Fsb3","https:\u002F\u002Fgithub.com\u002Fqgallouedec\u002Fpanda-gym\u002Fissues\u002F53",{"id":127,"question_zh":128,"answer_zh":129,"source_url":125},548,"PandaPush 任务的动作空间是基于末端执行器还是关节？控制模式是什么？","使用 `PandaPush` 时，观察和控制都与末端执行器相关。动作是目标位移（target displacement）。原始动作缩放 0.05 后加到当前关节位置得到目标角度，然后 PyBullet 使用 PD 控制器计算扭矩。可以视为作用在关节上的虚拟力。",{"id":131,"question_zh":132,"answer_zh":133,"source_url":134},549,"如何使用图像输入（如 RGB）并处理观察空间？","可以使用 `PixelObservationWrapper` 和 `FilterObservation` 包装器来处理图像输入。在提供的代码示例中，代理的输入由 (1) 图像和 (2) 期望目标（x, y, z 坐标向量）组成。","https:\u002F\u002Fgithub.com\u002Fqgallouedec\u002Fpanda-gym\u002Fissues\u002F82",{"id":136,"question_zh":137,"answer_zh":138,"source_url":134},550,"使用图像输入时，是否还需要末端执行器的位置和速度信息？","这取决于需求。在提供的代码示例中，这些值不是输入的一部分。在更接近现实的场景中，可以包含机器人相关数据，但不包含物体数据，让代理从图像中推断物体位置。",{"id":140,"question_zh":141,"answer_zh":142,"source_url":143},551,"如何定义自定义任务（如添加末端旋转控制或固定目标位置）？","请参考官方文档关于自定义任务的说明：https:\u002F\u002Fpanda-gym.readthedocs.io\u002Fen\u002Flatest\u002Fcustom\u002Fcustom_task.html。如果需要固定目标位置而不是每次运行随机化，需查看文档中的实现方式。","https:\u002F\u002Fgithub.com\u002Fqgallouedec\u002Fpanda-gym\u002Fissues\u002F49",{"id":145,"question_zh":146,"answer_zh":147,"source_url":148},552,"如何限制机器人只在特定区域（如前方）移动以提高效率？","可以检查 `is_front` 方法是否符合预期。有用户反馈通过在 `_get_obs` 中设置 `self._last_obs = obs[\"observation\"]` 而不是在 `__init__` 中解决了观察值相关的问题。此外，可以使用多进程加速学习（注意离线策略算法的支持情况）。","https:\u002F\u002Fgithub.com\u002Fqgallouedec\u002Fpanda-gym\u002Fissues\u002F14",[150,155,160,165,170,175,180,185,190,195,200,205,210,215],{"id":151,"version":152,"summary_zh":153,"released_at":154},109844,"v3.0.7","## What's Changed\r\n* Allow the env to be closed many times by @qgallouedec in https:\u002F\u002Fgithub.com\u002Fqgallouedec\u002Fpanda-gym\u002Fpull\u002F68\r\n* Push coverage once by @qgallouedec in https:\u002F\u002Fgithub.com\u002Fqgallouedec\u002Fpanda-gym\u002Fpull\u002F69\r\n* Update README.md by 
@qgallouedec in https:\u002F\u002Fgithub.com\u002Fqgallouedec\u002Fpanda-gym\u002Fpull\u002F73\r\n* fixed a typo in core.py by @mahaozhe in https:\u002F\u002Fgithub.com\u002Fqgallouedec\u002Fpanda-gym\u002Fpull\u002F76\r\n* Small typo fix in comments by @Ivan-267 in https:\u002F\u002Fgithub.com\u002Fqgallouedec\u002Fpanda-gym\u002Fpull\u002F83\r\n* Fix CI for `macos-latest` by @qgallouedec in https:\u002F\u002Fgithub.com\u002Fqgallouedec\u002Fpanda-gym\u002Fpull\u002F91\r\n* fix typo by @Me-in-U in https:\u002F\u002Fgithub.com\u002Fqgallouedec\u002Fpanda-gym\u002Fpull\u002F89\r\n\r\n## New Contributors\r\n* @mahaozhe made their first contribution in https:\u002F\u002Fgithub.com\u002Fqgallouedec\u002Fpanda-gym\u002Fpull\u002F76\r\n* @Ivan-267 made their first contribution in https:\u002F\u002Fgithub.com\u002Fqgallouedec\u002Fpanda-gym\u002Fpull\u002F83\r\n* @Me-in-U made their first contribution in https:\u002F\u002Fgithub.com\u002Fqgallouedec\u002Fpanda-gym\u002Fpull\u002F89\r\n\r\n**Full Changelog**: https:\u002F\u002Fgithub.com\u002Fqgallouedec\u002Fpanda-gym\u002Fcompare\u002Fv3.0.6...v3.0.7","2024-06-10T10:13:17",{"id":156,"version":157,"summary_zh":158,"released_at":159},109845,"v3.0.6","## What's Changed\r\n* Use render arguments for debug GUI by @qgallouedec in https:\u002F\u002Fgithub.com\u002Fqgallouedec\u002Fpanda-gym\u002Fpull\u002F64\r\n\r\n\r\n**Full Changelog**: https:\u002F\u002Fgithub.com\u002Fqgallouedec\u002Fpanda-gym\u002Fcompare\u002Fv3.05...v3.0.6","2023-04-27T09:08:56",{"id":161,"version":162,"summary_zh":163,"released_at":164},109846,"v3.05","## What's Changed\r\n* Fix typo by @yuanzhedong in https:\u002F\u002Fgithub.com\u002Fqgallouedec\u002Fpanda-gym\u002Fpull\u002F57\r\n* Upgrade to `gymnasium>=0.26` by @qgallouedec in https:\u002F\u002Fgithub.com\u002Fqgallouedec\u002Fpanda-gym\u002Fpull\u002F60\r\n\r\n## New Contributors\r\n* @yuanzhedong made their first contribution in 
https:\u002F\u002Fgithub.com\u002Fqgallouedec\u002Fpanda-gym\u002Fpull\u002F57\r\n\r\n**Full Changelog**: https:\u002F\u002Fgithub.com\u002Fqgallouedec\u002Fpanda-gym\u002Fcompare\u002Fv3.0.4...v3.05","2023-04-27T09:04:53",{"id":166,"version":167,"summary_zh":168,"released_at":169},109847,"v3.0.4","## What's Changed\r\n* Fix rendering for `render_mode=human` by @qgallouedec in https:\u002F\u002Fgithub.com\u002Fqgallouedec\u002Fpanda-gym\u002Fpull\u002F56\r\n\r\n\r\n**Full Changelog**: https:\u002F\u002Fgithub.com\u002Fqgallouedec\u002Fpanda-gym\u002Fcompare\u002Fv3.0.3...v3.0.4","2023-01-25T10:03:08",{"id":171,"version":172,"summary_zh":173,"released_at":174},109848,"v3.0.3","## What's Changed\r\n* Documentation Update by @simoninithomas in https:\u002F\u002Fgithub.com\u002Fqgallouedec\u002Fpanda-gym\u002Fpull\u002F50\r\n* `np.bool` deprecation by @qgallouedec in https:\u002F\u002Fgithub.com\u002Fqgallouedec\u002Fpanda-gym\u002Fpull\u002F51\r\n* Update \"rgb_array\" rendering mode warning and add render_fps metadata by @qgallouedec in https:\u002F\u002Fgithub.com\u002Fqgallouedec\u002Fpanda-gym\u002Fpull\u002F52\r\n\r\n## New Contributors\r\n* @simoninithomas made their first contribution in https:\u002F\u002Fgithub.com\u002Fqgallouedec\u002Fpanda-gym\u002Fpull\u002F50\r\n\r\n**Full Changelog**: https:\u002F\u002Fgithub.com\u002Fqgallouedec\u002Fpanda-gym\u002Fcompare\u002Fv3.0.1...v3.0.3","2023-01-02T17:01:01",{"id":176,"version":177,"summary_zh":178,"released_at":179},109849,"v3.0.1","## What's Changed\r\n* Upgrade github-actions by @qgallouedec in https:\u002F\u002Fgithub.com\u002Fqgallouedec\u002Fpanda-gym\u002Fpull\u002F45\r\n* PyBullet argument list reduction via whitespace removal by @jonasreiher in https:\u002F\u002Fgithub.com\u002Fqgallouedec\u002Fpanda-gym\u002Fpull\u002F44\r\n* Fix render by @qgallouedec in https:\u002F\u002Fgithub.com\u002Fqgallouedec\u002Fpanda-gym\u002Fpull\u002F48\r\n\r\n## New Contributors\r\n* @jonasreiher made their first 
contribution in https:\u002F\u002Fgithub.com\u002Fqgallouedec\u002Fpanda-gym\u002Fpull\u002F44\r\n\r\n**Full Changelog**: https:\u002F\u002Fgithub.com\u002Fqgallouedec\u002Fpanda-gym\u002Fcompare\u002Fv3.0.0...v3.0.1","2022-12-19T10:46:57",{"id":181,"version":182,"summary_zh":183,"released_at":184},109850,"v3.0.0","## What's Changed\r\n\r\n* panda-gym now uses Gymnasium by @qgallouedec in https:\u002F\u002Fgithub.com\u002Fqgallouedec\u002Fpanda-gym\u002Fpull\u002F35\r\n\r\n\r\n**Full Changelog**: https:\u002F\u002Fgithub.com\u002Fqgallouedec\u002Fpanda-gym\u002Fcompare\u002Fv2.0.4...v3.0.0","2022-10-12T16:55:35",{"id":186,"version":187,"summary_zh":188,"released_at":189},109851,"v2.0.4","## What's Changed\r\n* Implemented methods to save and restore PyBullet states. by @louixp in https:\u002F\u002Fgithub.com\u002Fqgallouedec\u002Fpanda-gym\u002Fpull\u002F33\r\n\r\n## New Contributors\r\n* @louixp made their first contribution in https:\u002F\u002Fgithub.com\u002Fqgallouedec\u002Fpanda-gym\u002Fpull\u002F33\r\n\r\n**Full Changelog**: https:\u002F\u002Fgithub.com\u002Fqgallouedec\u002Fpanda-gym\u002Fcompare\u002Fv2.0.3...v2.0.4","2022-07-05T10:10:28",{"id":191,"version":192,"summary_zh":193,"released_at":194},109852,"v2.0.3","Fix gym version to be \u003C0.24","2022-07-04T09:53:18",{"id":196,"version":197,"summary_zh":198,"released_at":199},109853,"v2.0.2","Changes:\r\n\r\n- Fix `gym>=0.22` dependency","2022-05-25T14:36:10",{"id":201,"version":202,"summary_zh":203,"released_at":204},109854,"v2.0.0","Many breaking changes:\r\n\r\n- Changing dynamics (friction coefficient, among other things)\r\n- Make code structure cleaner\r\n- Add a task (flipping)\r\n- Panda joint control\r\n- readthedocs documentation","2021-11-19T10:29:45",{"id":206,"version":207,"summary_zh":208,"released_at":209},109855,"v1.1.1","New in v1.1.1\r\n- Improve rendering\r\n- Fix closing env issue","2021-09-23T13:03:53",{"id":211,"version":212,"summary_zh":213,"released_at":214},109856,"v1.1.0","New 
in v1.1.0\r\n\r\n- `panda-gym` can now handle multiple physics clients.\r\n- Correct implementation of a random seed for reproducibility.","2021-06-29T15:40:54",{"id":216,"version":217,"summary_zh":81,"released_at":218},109857,"V1.0.0","2021-04-18T16:20:26"]