[{"data":1,"prerenderedAt":-1},["ShallowReactive",2],{"similar-haosulab--ManiSkill":3,"tool-haosulab--ManiSkill":62},[4,18,26,35,44,53],{"id":5,"name":6,"github_repo":7,"description_zh":8,"stars":9,"difficulty_score":10,"last_commit_at":11,"category_tags":12,"status":17},4358,"openclaw","openclaw\u002Fopenclaw","OpenClaw 是一款专为个人打造的本地化 AI 助手，旨在让你在自己的设备上拥有完全可控的智能伙伴。它打破了传统 AI 助手局限于特定网页或应用的束缚，能够直接接入你日常使用的各类通讯渠道，包括微信、WhatsApp、Telegram、Discord、iMessage 等数十种平台。无论你在哪个聊天软件中发送消息，OpenClaw 都能即时响应，甚至支持在 macOS、iOS 和 Android 设备上进行语音交互，并提供实时的画布渲染功能供你操控。\n\n这款工具主要解决了用户对数据隐私、响应速度以及“始终在线”体验的需求。通过将 AI 部署在本地，用户无需依赖云端服务即可享受快速、私密的智能辅助，真正实现了“你的数据，你做主”。其独特的技术亮点在于强大的网关架构，将控制平面与核心助手分离，确保跨平台通信的流畅性与扩展性。\n\nOpenClaw 非常适合希望构建个性化工作流的技术爱好者、开发者，以及注重隐私保护且不愿被单一生态绑定的普通用户。只要具备基础的终端操作能力（支持 macOS、Linux 及 Windows WSL2），即可通过简单的命令行引导完成部署。如果你渴望拥有一个懂你",349277,3,"2026-04-06T06:32:30",[13,14,15,16],"Agent","开发框架","图像","数据工具","ready",{"id":19,"name":20,"github_repo":21,"description_zh":22,"stars":23,"difficulty_score":10,"last_commit_at":24,"category_tags":25,"status":17},3808,"stable-diffusion-webui","AUTOMATIC1111\u002Fstable-diffusion-webui","stable-diffusion-webui 是一个基于 Gradio 构建的网页版操作界面，旨在让用户能够轻松地在本地运行和使用强大的 Stable Diffusion 图像生成模型。它解决了原始模型依赖命令行、操作门槛高且功能分散的痛点，将复杂的 AI 绘图流程整合进一个直观易用的图形化平台。\n\n无论是希望快速上手的普通创作者、需要精细控制画面细节的设计师，还是想要深入探索模型潜力的开发者与研究人员，都能从中获益。其核心亮点在于极高的功能丰富度：不仅支持文生图、图生图、局部重绘（Inpainting）和外绘（Outpainting）等基础模式，还独创了注意力机制调整、提示词矩阵、负向提示词以及“高清修复”等高级功能。此外，它内置了 GFPGAN 和 CodeFormer 等人脸修复工具，支持多种神经网络放大算法，并允许用户通过插件系统无限扩展能力。即使是显存有限的设备，stable-diffusion-webui 也提供了相应的优化选项，让高质量的 AI 艺术创作变得触手可及。",162132,"2026-04-05T11:01:52",[14,15,13],{"id":27,"name":28,"github_repo":29,"description_zh":30,"stars":31,"difficulty_score":32,"last_commit_at":33,"category_tags":34,"status":17},2271,"ComfyUI","Comfy-Org\u002FComfyUI","ComfyUI 是一款功能强大且高度模块化的视觉 AI 引擎，专为设计和执行复杂的 Stable Diffusion 图像生成流程而打造。它摒弃了传统的代码编写模式，采用直观的节点式流程图界面，让用户通过连接不同的功能模块即可构建个性化的生成管线。\n\n这一设计巧妙解决了高级 AI 
绘图工作流配置复杂、灵活性不足的痛点。用户无需具备编程背景，也能自由组合模型、调整参数并实时预览效果，轻松实现从基础文生图到多步骤高清修复等各类复杂任务。ComfyUI 拥有极佳的兼容性，不仅支持 Windows、macOS 和 Linux 全平台，还广泛适配 NVIDIA、AMD、Intel 及苹果 Silicon 等多种硬件架构，并率先支持 SDXL、Flux、SD3 等前沿模型。\n\n无论是希望深入探索算法潜力的研究人员和开发者，还是追求极致创作自由度的设计师与资深 AI 绘画爱好者，ComfyUI 都能提供强大的支持。其独特的模块化架构允许社区不断扩展新功能，使其成为当前最灵活、生态最丰富的开源扩散模型工具之一，帮助用户将创意高效转化为现实。",109154,2,"2026-04-18T11:18:24",[14,15,13],{"id":36,"name":37,"github_repo":38,"description_zh":39,"stars":40,"difficulty_score":32,"last_commit_at":41,"category_tags":42,"status":17},6121,"gemini-cli","google-gemini\u002Fgemini-cli","gemini-cli 是一款由谷歌推出的开源 AI 命令行工具，它将强大的 Gemini 大模型能力直接集成到用户的终端环境中。对于习惯在命令行工作的开发者而言，它提供了一条从输入提示词到获取模型响应的最短路径，无需切换窗口即可享受智能辅助。\n\n这款工具主要解决了开发过程中频繁上下文切换的痛点，让用户能在熟悉的终端界面内直接完成代码理解、生成、调试以及自动化运维任务。无论是查询大型代码库、根据草图生成应用，还是执行复杂的 Git 操作，gemini-cli 都能通过自然语言指令高效处理。\n\n它特别适合广大软件工程师、DevOps 人员及技术研究人员使用。其核心亮点包括支持高达 100 万 token 的超长上下文窗口，具备出色的逻辑推理能力；内置 Google 搜索、文件操作及 Shell 命令执行等实用工具；更独特的是，它支持 MCP（模型上下文协议），允许用户灵活扩展自定义集成，连接如图像生成等外部能力。此外，个人谷歌账号即可享受免费的额度支持，且项目基于 Apache 2.0 协议完全开源，是提升终端工作效率的理想助手。",100752,"2026-04-10T01:20:03",[43,13,15,14],"插件",{"id":45,"name":46,"github_repo":47,"description_zh":48,"stars":49,"difficulty_score":10,"last_commit_at":50,"category_tags":51,"status":17},4487,"LLMs-from-scratch","rasbt\u002FLLMs-from-scratch","LLMs-from-scratch 是一个基于 PyTorch 的开源教育项目，旨在引导用户从零开始一步步构建一个类似 ChatGPT 的大型语言模型（LLM）。它不仅是同名技术著作的官方代码库，更提供了一套完整的实践方案，涵盖模型开发、预训练及微调的全过程。\n\n该项目主要解决了大模型领域“黑盒化”的学习痛点。许多开发者虽能调用现成模型，却难以深入理解其内部架构与训练机制。通过亲手编写每一行核心代码，用户能够透彻掌握 Transformer 架构、注意力机制等关键原理，从而真正理解大模型是如何“思考”的。此外，项目还包含了加载大型预训练权重进行微调的代码，帮助用户将理论知识延伸至实际应用。\n\nLLMs-from-scratch 特别适合希望深入底层原理的 AI 开发者、研究人员以及计算机专业的学生。对于不满足于仅使用 
API，而是渴望探究模型构建细节的技术人员而言，这是极佳的学习资源。其独特的技术亮点在于“循序渐进”的教学设计：将复杂的系统工程拆解为清晰的步骤，配合详细的图表与示例，让构建一个虽小但功能完备的大模型变得触手可及。无论你是想夯实理论基础，还是为未来研发更大规模的模型做准备",90106,"2026-04-06T11:19:32",[52,15,13,14],"语言模型",{"id":54,"name":55,"github_repo":56,"description_zh":57,"stars":58,"difficulty_score":10,"last_commit_at":59,"category_tags":60,"status":17},4292,"Deep-Live-Cam","hacksider\u002FDeep-Live-Cam","Deep-Live-Cam 是一款专注于实时换脸与视频生成的开源工具，用户仅需一张静态照片，即可通过“一键操作”实现摄像头画面的即时变脸或制作深度伪造视频。它有效解决了传统换脸技术流程繁琐、对硬件配置要求极高以及难以实时预览的痛点，让高质量的数字内容创作变得触手可及。\n\n这款工具不仅适合开发者和技术研究人员探索算法边界，更因其极简的操作逻辑（仅需三步：选脸、选摄像头、启动），广泛适用于普通用户、内容创作者、设计师及直播主播。无论是为了动画角色定制、服装展示模特替换，还是制作趣味短视频和直播互动，Deep-Live-Cam 都能提供流畅的支持。\n\n其核心技术亮点在于强大的实时处理能力，支持口型遮罩（Mouth Mask）以保留使用者原始的嘴部动作，确保表情自然精准；同时具备“人脸映射”功能，可同时对画面中的多个主体应用不同面孔。此外，项目内置了严格的内容安全过滤机制，自动拦截涉及裸露、暴力等不当素材，并倡导用户在获得授权及明确标注的前提下合规使用，体现了技术发展与伦理责任的平衡。",88924,"2026-04-06T03:28:53",[14,15,13,61],"视频",{"id":63,"github_repo":64,"name":65,"description_en":66,"description_zh":67,"ai_summary_zh":67,"readme_en":68,"readme_zh":69,"quickstart_zh":70,"use_case_zh":71,"hero_image_url":72,"owner_login":73,"owner_name":74,"owner_avatar_url":75,"owner_bio":76,"owner_company":77,"owner_location":77,"owner_email":77,"owner_twitter":78,"owner_website":79,"owner_url":80,"languages":81,"stars":98,"forks":99,"last_commit_at":100,"license":101,"difficulty_score":10,"env_os":102,"env_gpu":103,"env_ram":104,"env_deps":105,"category_tags":111,"github_topics":113,"view_count":32,"oss_zip_url":77,"oss_zip_packed_at":77,"status":17,"created_at":123,"updated_at":124,"faqs":125,"releases":155},9263,"haosulab\u002FManiSkill","ManiSkill","SAPIEN Manipulation Skill Framework, an open source GPU parallelized robotics simulator and benchmark","ManiSkill 是一个基于 SAPIEN 引擎打造的开源机器人操作技能框架，专注于提供高性能的仿真环境与训练基准。它主要解决了传统机器人模拟中数据收集效率低、场景单一以及从仿真到现实（Sim2Real）迁移困难等痛点，让研究人员能够以极低的成本获取海量高质量的合成数据。\n\n这款工具特别适合机器人学研究人员、AI 开发者以及强化学习工程师使用。其最引人注目的技术亮点在于全面的 GPU 并行化能力：不仅支持视觉数据的并行采集，在高端显卡上每秒可生成超过 30,000 帧的 RGBD 
及分割数据，还能实现异构场景的并行仿真，即每个并行环境都能拥有完全不同的场景和物体组合。此外，ManiSkill 提供了简洁的任务构建接口，屏蔽了复杂的底层内存管理，并内置了多种前沿的强化学习、模仿学习及视觉语言动作模型基线，帮助用户快速验证算法。目前该框架正处于 v3 测试阶段，是探索具身智能与机器人操作技能的强大助手。","# ManiSkill 3 (Beta)\n\n\n![teaser](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fhaosulab_ManiSkill_readme_007fa911261c.jpg)\n\u003Cp style=\"text-align: center; font-size: 0.8rem; color: #999;margin-top: -1rem;\">Sample of environments\u002Frobots rendered with ray-tracing. Scene datasets sourced from AI2THOR and ReplicaCAD\u003C\u002Fp>\n\n[![Downloads](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fhaosulab_ManiSkill_readme_9c5915a1ee24.png)](https:\u002F\u002Fpepy.tech\u002Fproject\u002Fmani_skill)\n[![Open In Colab](https:\u002F\u002Fcolab.research.google.com\u002Fassets\u002Fcolab-badge.svg)](https:\u002F\u002Fcolab.research.google.com\u002Fgithub\u002Fhaosulab\u002FManiSkill\u002Fblob\u002Fmain\u002Fexamples\u002Ftutorials\u002F1_quickstart.ipynb)\n[![PyPI version](https:\u002F\u002Fbadge.fury.io\u002Fpy\u002Fmani-skill.svg)](https:\u002F\u002Fbadge.fury.io\u002Fpy\u002Fmani-skill)\n[![Docs status](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002Fdocs-passing-brightgreen.svg)](https:\u002F\u002Fmaniskill.readthedocs.io\u002Fen\u002Flatest\u002F)\n[![Discord](https:\u002F\u002Fimg.shields.io\u002Fdiscord\u002F996566046414753822?logo=discord)](https:\u002F\u002Fdiscord.gg\u002Fx8yUZe5AdN)\n\nManiSkill is a powerful unified framework for robot simulation and training powered by [SAPIEN](https:\u002F\u002Fsapien.ucsd.edu\u002F), with a strong focus on manipulation skills. The entire tech stack is as open-source as possible and ManiSkill v3 is in beta release now. Among its features include:\n- GPU parallelized visual data collection system. 
On the high end you can collect RGBD + Segmentation data at 30,000+ FPS with a 4090 GPU!\n- GPU parallelized simulation, enabling high throughput state-based synthetic data collection in simulation\n- GPU parallelized heterogeneous simulation, where every parallel environment has a completely different scene\u002Fset of objects\n- Example tasks cover a wide range of different robot embodiments (humanoids, mobile manipulators, single-arm robots) as well as a wide range of different tasks (table-top, drawing\u002Fcleaning, dexterous manipulation)\n- Flexible and simple task building API that abstracts away much of the complex GPU memory management code via an object oriented design\n- Real2sim environments for scalably evaluating real-world policies 100x faster via GPU simulation.\n- Sim2real support for deploying policies trained in simulation to the real world\n- Many tuned robot learning baselines in Reinforcement Learning (e.g. PPO, SAC, [TD-MPC2](https:\u002F\u002Fgithub.com\u002Fnicklashansen\u002Ftdmpc2)), Imitation Learning (e.g. Behavior Cloning, [Diffusion Policy](https:\u002F\u002Fgithub.com\u002Freal-stanford\u002Fdiffusion_policy)), and large Vision Language Action (VLA) models (e.g. [Octo](https:\u002F\u002Fgithub.com\u002Focto-models\u002Focto), [RDT-1B](https:\u002F\u002Fgithub.com\u002Fthu-ml\u002FRoboticsDiffusionTransformer), [RT-x](https:\u002F\u002Frobotics-transformer-x.github.io\u002F))\n\nFor more details we encourage you to take a look at our [paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2410.00425), published at [RSS 2025](https:\u002F\u002Froboticsconference.org\u002F).\n\nPlease refer to our [documentation](https:\u002F\u002Fmaniskill.readthedocs.io\u002Fen\u002Flatest\u002Fuser_guide) to learn more information from tutorials on building tasks to sim2real to running baselines.\n\n**NOTE:**\nThis project currently is in a **beta release**, so not all features have been added in yet and there may be some bugs. 
If you find any bugs or have any feature requests please post them to our [GitHub issues](https:\u002F\u002Fgithub.com\u002Fhaosulab\u002FManiSkill\u002Fissues\u002F) or discuss about them on [GitHub discussions](https:\u002F\u002Fgithub.com\u002Fhaosulab\u002FManiSkill\u002Fdiscussions\u002F). We also have a [Discord Server](https:\u002F\u002Fdiscord.gg\u002Fx8yUZe5AdN) through which we make announcements and discuss about ManiSkill.\n\nUsers looking for the original ManiSkill2 can find the commit for that codebase at the [v0.5.3 tag](https:\u002F\u002Fgithub.com\u002Fhaosulab\u002FManiSkill\u002Ftree\u002Fv0.5.3)\n\n\n## Installation\nInstallation of ManiSkill is extremely simple, you only need to run a few pip installs and setup Vulkan for rendering.\n\n```bash\n# install the package\npip install --upgrade mani_skill\n# install a version of torch that is compatible with your system\npip install torch\n```\n\nFinally you also need to set up Vulkan with [instructions here](https:\u002F\u002Fmaniskill.readthedocs.io\u002Fen\u002Flatest\u002Fuser_guide\u002Fgetting_started\u002Finstallation.html#vulkan)\n\nFor more details about installation (e.g. from source, or doing troubleshooting) see [the documentation](https:\u002F\u002Fmaniskill.readthedocs.io\u002Fen\u002Flatest\u002Fuser_guide\u002Fgetting_started\u002Finstallation.html\n)\n\n## Getting Started\n\nTo get started, check out the quick start documentation: https:\u002F\u002Fmaniskill.readthedocs.io\u002Fen\u002Flatest\u002Fuser_guide\u002Fgetting_started\u002Fquickstart.html\n\nWe also have a quick start [colab notebook](https:\u002F\u002Fcolab.research.google.com\u002Fgithub\u002Fhaosulab\u002FManiSkill\u002Fblob\u002Fmain\u002Fexamples\u002Ftutorials\u002F1_quickstart.ipynb) that lets you try out GPU parallelized simulation without needing your own hardware. 
Everything is runnable on Colab free tier.\n\nFor a full list of example scripts you can run, see [the docs](https:\u002F\u002Fmaniskill.readthedocs.io\u002Fen\u002Flatest\u002Fuser_guide\u002Fdemos\u002Findex.html).\n\n## System Support\n\nWe currently best support Linux based systems. There is limited support for windows and MacOS at the moment. We are working on trying to support more features on other systems but this may take some time. Most constraints stem from what the [SAPIEN](https:\u002F\u002Fgithub.com\u002Fhaosulab\u002FSAPIEN\u002F) package is capable of supporting.\n\n| System \u002F GPU         | CPU Sim | GPU Sim | Rendering |\n| -------------------- | ------- | ------- | --------- |\n| Linux \u002F NVIDIA GPU   | ✅      | ✅      | ✅        |\n| Windows \u002F NVIDIA GPU | ✅      | ❌      | ✅        |\n| Windows \u002F AMD GPU    | ✅      | ❌      | ✅        |\n| WSL \u002F Anything       | ✅      | ❌      | ❌        |\n| MacOS \u002F Anything     | ✅      | ❌      | ✅        |\n\n## Citation\n\n\nIf you use ManiSkill3 (versions `mani_skill>=3.0.0`) in your work please cite our [ManiSkill3 paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2410.00425) as so:\n\n```\n@article{taomaniskill3,\n  title={ManiSkill3: GPU Parallelized Robotics Simulation and Rendering for Generalizable Embodied AI},\n  author={Stone Tao and Fanbo Xiang and Arth Shukla and Yuzhe Qin and Xander Hinrichsen and Xiaodi Yuan and Chen Bao and Xinsong Lin and Yulin Liu and Tse-kai Chan and Yuan Gao and Xuanlin Li and Tongzhou Mu and Nan Xiao and Arnav Gurha and Viswesh Nagaswamy Rajesh and Yong Woo Choi and Yen-Ru Chen and Zhiao Huang and Roberto Calandra and Rui Chen and Shan Luo and Hao Su},\n  journal = {Robotics: Science and Systems},\n  year={2025},\n} \n```\n\nIf you use ManiSkill2 (version `mani_skill==0.5.3` or lower) in your work please cite the ManiSkill2 paper as so:\n```\n@inproceedings{gu2023maniskill2,\n  title={ManiSkill2: A Unified Benchmark for Generalizable 
Manipulation Skills},\n  author={Gu, Jiayuan and Xiang, Fanbo and Li, Xuanlin and Ling, Zhan and Liu, Xiqiang and Mu, Tongzhou and Tang, Yihe and Tao, Stone and Wei, Xinyue and Yao, Yunchao and Yuan, Xiaodi and Xie, Pengwei and Huang, Zhiao and Chen, Rui and Su, Hao},\n  booktitle={International Conference on Learning Representations},\n  year={2023}\n}\n```\n\nNote that some other assets, algorithms, etc. in ManiSkill are from other sources\u002Fresearch. We try our best to include the correct citation bibtex where possible when introducing the different components provided by ManiSkill.\n\n## License\n\nAll rigid body environments in ManiSkill are licensed under fully permissive licenses (e.g., Apache-2.0).\n\nThe assets are licensed under [CC BY-NC 4.0](https:\u002F\u002Fcreativecommons.org\u002Flicenses\u002Fby-nc\u002F4.0\u002Flegalcode).\n","# ManiSkill 3 (测试版)\n\n\n![预告图](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fhaosulab_ManiSkill_readme_007fa911261c.jpg)\n\u003Cp style=\"text-align: center; font-size: 0.8rem; color: #999;margin-top: 
-1rem;\">使用光线追踪渲染的环境\u002F机器人示例。场景数据集来源于AI2THOR和ReplicaCAD\u003C\u002Fp>\n\n[![下载量](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fhaosulab_ManiSkill_readme_9c5915a1ee24.png)](https:\u002F\u002Fpepy.tech\u002Fproject\u002Fmani_skill)\n[![在Colab中打开](https:\u002F\u002Fcolab.research.google.com\u002Fassets\u002Fcolab-badge.svg)](https:\u002F\u002Fcolab.research.google.com\u002Fgithub\u002Fhaosulab\u002FManiSkill\u002Fblob\u002Fmain\u002Fexamples\u002Ftutorials\u002F1_quickstart.ipynb)\n[![PyPI版本](https:\u002F\u002Fbadge.fury.io\u002Fpy\u002Fmani-skill.svg)](https:\u002F\u002Fbadge.fury.io\u002Fpy\u002Fmani-skill)\n[![文档状态](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002Fdocs-passing-brightgreen.svg)](https:\u002F\u002Fmaniskill.readthedocs.io\u002Fen\u002Flatest\u002F)\n[![Discord](https:\u002F\u002Fimg.shields.io\u002Fdiscord\u002F996566046414753822?logo=discord)](https:\u002F\u002Fdiscord.gg\u002Fx8yUZe5AdN)\n\nManiSkill是一个功能强大的统一机器人仿真与训练框架，基于[SAPIEN](https:\u002F\u002Fsapien.ucsd.edu\u002F)构建，专注于操作技能。整个技术栈尽可能开源，目前ManiSkill v3处于测试版发布阶段。其主要特性包括：\n- GPU并行化的视觉数据采集系统。在高端配置下，使用4090显卡可实现超过30,000 FPS的RGBD + 分割数据采集！\n- GPU并行化仿真，支持高吞吐量的状态驱动合成数据采集。\n- GPU并行化的异构仿真，每个并行环境都拥有完全不同的场景\u002F物体集合。\n- 示例任务覆盖多种机器人形态（人形机器人、移动操作臂、单臂机器人）以及多种任务类型（桌面操作、绘画\u002F清洁、灵巧操作）。\n- 灵活且简洁的任务构建API，通过面向对象的设计抽象掉了复杂的GPU内存管理代码。\n- Real2sim环境，可通过GPU仿真以100倍速度高效评估真实世界策略。\n- Sim2real支持，可将仿真中训练好的策略部署到真实世界。\n- 许多经过调优的机器人学习基准模型，涵盖强化学习（如PPO、SAC、[TD-MPC2](https:\u002F\u002Fgithub.com\u002Fnicklashansen\u002Ftdmpc2)）、模仿学习（如行为克隆、[Diffusion Policy](https:\u002F\u002Fgithub.com\u002Freal-stanford\u002Fdiffusion_policy)）以及大型视觉语言行动（VLA）模型（如[Octo](https:\u002F\u002Fgithub.com\u002Focto-models\u002Focto)、[RDT-1B](https:\u002F\u002Fgithub.com\u002Fthu-ml\u002FRoboticsDiffusionTransformer)、[RT-x](https:\u002F\u002Frobotics-transformer-x.github.io\u002F))。\n\n欲了解更多详情，请参阅我们在[RSS 
2025](https:\u002F\u002Froboticsconference.org\u002F)上发表的[论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F2410.00425)。\n\n请查阅我们的[文档](https:\u002F\u002Fmaniskill.readthedocs.io\u002Fen\u002Flatest\u002Fuser_guide)，其中包含从任务构建到Sim2Real再到运行基准模型的教程。\n\n**注意：**\n本项目目前处于**测试版**阶段，因此并非所有功能均已完善，可能存在一些Bug。如果您发现任何问题或有功能需求，请提交至我们的[GitHub Issues](https:\u002F\u002Fgithub.com\u002Fhaosulab\u002FManiSkill\u002Fissues\u002F)或在[GitHub Discussions](https:\u002F\u002Fgithub.com\u002Fhaosulab\u002FManiSkill\u002Fdiscussions\u002F)中讨论。我们还有一个[Discord服务器](https:\u002F\u002Fdiscord.gg\u002Fx8yUZe5AdN)，用于发布公告及讨论ManiSkill相关事宜。\n\n希望使用原始ManiSkill2的用户，可在[v0.5.3标签](https:\u002F\u002Fgithub.com\u002Fhaosulab\u002FManiSkill\u002Ftree\u002Fv0.5.3)中找到该版本的代码。\n\n## 安装\nManiSkill的安装非常简单，只需执行几条pip命令并设置Vulkan进行渲染即可。\n\n```bash\n# 安装软件包\npip install --upgrade mani_skill\n# 安装与您的系统兼容的torch版本\npip install torch\n```\n\n最后，您还需要按照[此处的说明](https:\u002F\u002Fmaniskill.readthedocs.io\u002Fen\u002Flatest\u002Fuser_guide\u002Fgetting_started\u002Finstallation.html#vulkan)设置Vulkan。\n\n有关安装的更多详细信息（例如从源码安装或故障排除），请参阅[文档](https:\u002F\u002Fmaniskill.readthedocs.io\u002Fen\u002Flatest\u002Fuser_guide\u002Fgetting_started\u002Finstallation.html)。\n\n## 入门指南\n\n要开始使用，请查看快速入门文档：https:\u002F\u002Fmaniskill.readthedocs.io\u002Fen\u002Flatest\u002Fuser_guide\u002Fgetting_started\u002Fquickstart.html\n\n我们还提供了一个快速入门的[Colab笔记本](https:\u002F\u002Fcolab.research.google.com\u002Fgithub\u002Fhaosulab\u002FManiSkill\u002Fblob\u002Fmain\u002Fexamples\u002Ftutorials\u002F1_quickstart.ipynb)，让您无需自己的硬件即可体验GPU并行化仿真。所有内容均可在Colab免费层级上运行。\n\n完整的示例脚本列表，请参阅[文档](https:\u002F\u002Fmaniskill.readthedocs.io\u002Fen\u002Flatest\u002Fuser_guide\u002Fdemos\u002Findex.html)。\n\n## 系统支持\n\n我们目前最支持Linux系统。Windows和MacOS的支持相对有限。我们正在努力增加对其他系统的支持，但这可能需要一些时间。大多数限制源于[SAPIEN](https:\u002F\u002Fgithub.com\u002Fhaosulab\u002FSAPIEN\u002F)包所能支持的功能。\n\n| 系统 \u002F 显卡         | CPU仿真 | GPU仿真 | 渲染 |\n| -------------------- | ------- | ------- | --------- |\n| Linux 
\u002F NVIDIA显卡   | ✅      | ✅      | ✅        |\n| Windows \u002F NVIDIA显卡 | ✅      | ❌      | ✅        |\n| Windows \u002F AMD显卡    | ✅      | ❌      | ✅        |\n| WSL \u002F 任何显卡       | ✅      | ❌      | ❌        |\n| MacOS \u002F 任何显卡     | ✅      | ❌      | ✅        |\n\n## 引用\n\n\n如果您在工作中使用了ManiSkill3（`mani_skill>=3.0.0`版本），请引用我们的[ManiSkill3论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F2410.00425)，BibTeX 条目请保留英文原文，格式如下：\n\n```\n@article{taomaniskill3,\n  title={ManiSkill3: GPU Parallelized Robotics Simulation and Rendering for Generalizable Embodied AI},\n  author={Stone Tao and Fanbo Xiang and Arth Shukla and Yuzhe Qin and Xander Hinrichsen and Xiaodi Yuan and Chen Bao and Xinsong Lin and Yulin Liu and Tse-kai Chan and Yuan Gao and Xuanlin Li and Tongzhou Mu and Nan Xiao and Arnav Gurha and Viswesh Nagaswamy Rajesh and Yong Woo Choi and Yen-Ru Chen and Zhiao Huang and Roberto Calandra and Rui Chen and Shan Luo and Hao Su},\n  journal = {Robotics: Science and Systems},\n  year={2025},\n} \n```\n\n如果您在工作中使用了ManiSkill2（`mani_skill==0.5.3`或更低版本），请引用ManiSkill2论文，格式如下：\n```\n@inproceedings{gu2023maniskill2,\n  title={ManiSkill2: A Unified Benchmark for Generalizable Manipulation Skills},\n  author={Gu, Jiayuan and Xiang, Fanbo and Li, Xuanlin and Ling, Zhan and Liu, Xiqiang and Mu, Tongzhou and Tang, Yihe and Tao, Stone and Wei, Xinyue and Yao, Yunchao and Yuan, Xiaodi and Xie, Pengwei and Huang, Zhiao and Chen, Rui and Su, Hao},\n  booktitle={International Conference on Learning Representations},\n  year={2023}\n}\n```\n\n请注意，ManiSkill中的部分资产、算法等来自其他来源或研究。我们在介绍ManiSkill提供的各个组件时，会尽力包含正确的引用BibTeX。\n\n## 许可证\n\nManiSkill 中的所有刚体环境均采用完全宽松的许可证（例如 Apache-2.0）进行授权。\n\n这些资产采用 [CC BY-NC 4.0](https:\u002F\u002Fcreativecommons.org\u002Flicenses\u002Fby-nc\u002F4.0\u002Flegalcode) 许可证授权。","# ManiSkill 3 快速上手指南\n\nManiSkill 是一个基于 [SAPIEN](https:\u002F\u002Fsapien.ucsd.edu\u002F) 构建的强大机器人仿真与训练统一框架，专注于操作技能（Manipulation Skills）。ManiSkill v3 目前处于 Beta 版本，支持大规模 GPU 并行仿真、异构环境模拟以及从强化学习到视觉语言动作模型（VLA）的多种基线算法。\n\n## 环境准备\n\n### 系统要求\n目前 ManiSkill 对 **Linux** 系统的支持最为完善。Windows 和 macOS 仅支持部分功能（如 CPU 仿真和渲染，但不支持 GPU 并行仿真）。\n\n| 系统 \u002F GPU | CPU 仿真 | GPU 并行仿真 | 渲染 |\n| :--- | :---: | :---: | :---: |\n| **Linux \u002F NVIDIA GPU** | ✅ | ✅ | ✅ |\n| Windows \u002F NVIDIA GPU | ✅ | ❌ | 
✅ |\n| Windows \u002F AMD GPU | ✅ | ❌ | ✅ |\n| WSL \u002F 任意 GPU | ✅ | ❌ | ❌ |\n| MacOS \u002F 任意 GPU | ✅ | ❌ | ✅ |\n\n> **注意**：要体验完整的 GPU 并行加速能力（如 30,000+ FPS 数据采集），推荐使用 **Linux + NVIDIA GPU** 环境。\n\n### 前置依赖\n- **Vulkan**：必须安装并配置 Vulkan 以支持渲染功能。\n  - 安装指南请参考官方文档：[Vulkan Installation](https:\u002F\u002Fmaniskill.readthedocs.io\u002Fen\u002Flatest\u002Fuser_guide\u002Fgetting_started\u002Finstallation.html#vulkan)\n- **Python**：建议使用 Python 3.8 或更高版本。\n- **PyTorch**：需安装与您的 CUDA 版本兼容的 PyTorch。\n\n## 安装步骤\n\n### 1. 安装 ManiSkill\n使用 pip 安装最新版本的 ManiSkill：\n\n```bash\npip install --upgrade mani_skill\n```\n\n> **国内加速建议**：如果下载速度较慢，可使用清华或阿里镜像源：\n> ```bash\n> pip install --upgrade mani_skill -i https:\u002F\u002Fpypi.tuna.tsinghua.edu.cn\u002Fsimple\n> ```\n\n### 2. 安装 PyTorch\n请根据您的硬件环境（CUDA 版本）安装对应的 PyTorch 版本。以下为通用安装命令（具体版本请访问 [PyTorch 官网](https:\u002F\u002Fpytorch.org\u002Fget-started\u002Flocally\u002F) 查询）：\n\n```bash\npip install torch\n```\n\n### 3. 验证安装\n确保 Vulkan 已正确配置后，即可开始使用。详细的故障排查可参考 [安装文档](https:\u002F\u002Fmaniskill.readthedocs.io\u002Fen\u002Flatest\u002Fuser_guide\u002Fgetting_started\u002Finstallation.html)。\n\n## 基本使用\n\n### 方式一：在线体验（无需本地硬件）\n如果您想快速体验 GPU 并行仿真而无需配置本地环境，可以直接运行 Google Colab 示例：\n\n[![Open In Colab](https:\u002F\u002Fcolab.research.google.com\u002Fassets\u002Fcolab-badge.svg)](https:\u002F\u002Fcolab.research.google.com\u002Fgithub\u002Fhaosulab\u002FManiSkill\u002Fblob\u002Fmain\u002Fexamples\u002Ftutorials\u002F1_quickstart.ipynb)\n\n点击上述链接即可在免费版的 Colab 中运行所有示例。\n\n### 方式二：本地运行示例\n安装完成后，您可以运行官方提供的示例脚本来快速上手。\n\n1. **查看示例列表**：\n   访问 [示例脚本文档](https:\u002F\u002Fmaniskill.readthedocs.io\u002Fen\u002Flatest\u002Fuser_guide\u002Fdemos\u002Findex.html) 获取完整列表。\n\n2. 
**运行快速入门教程**：\n   克隆仓库并运行基础的 Quickstart 脚本（假设已安装 git）：\n\n```bash\ngit clone https:\u002F\u002Fgithub.com\u002Fhaosulab\u002FManiSkill.git\ncd ManiSkill\npython examples\u002Ftutorials\u002F1_quickstart.py\n```\n\n### 核心功能概览\n- **任务构建**：通过面向对象的 API 轻松构建新任务，底层复杂的 GPU 内存管理已被抽象封装。\n- **数据采集**：利用 GPU 并行化特性，高效采集 RGBD、分割掩码等视觉数据。\n- **算法基线**：框架内置了多种调优过的基线算法，包括 PPO、SAC、Diffusion Policy、Octo 等，可直接用于复现或作为开发起点。\n\n更多详细教程（从任务构建到 Sim2Real 部署）请参阅 [用户指南](https:\u002F\u002Fmaniskill.readthedocs.io\u002Fen\u002Flatest\u002Fuser_guide)。","某机器人实验室团队正致力于训练一个能处理复杂家庭场景的通用机械臂策略，需要海量多样化的交互数据来强化模型的泛化能力。\n\n### 没有 ManiSkill 时\n- **数据采集效率极低**：依赖单线程 CPU 仿真或真实机器人采集，生成带深度图和语义分割的 RGBD 数据速度缓慢，难以满足大规模训练需求。\n- **场景多样性受限**：并行环境通常只能复制相同的场景配置，导致模型过拟合于特定物体摆放，无法适应现实世界中千变万化的家居布局。\n- **开发门槛高且易错**：开发者需手动编写复杂的底层 GPU 内存管理代码来构建新任务，调试困难且极易引发显存溢出错误。\n- **虚实迁移验证周期长**：缺乏高效的 Real2sim 评估流程，在将仿真策略部署到真机前，难以快速验证其在真实物理环境中的表现。\n\n### 使用 ManiSkill 后\n- **数据采集速度飞跃**：利用 GPU 并行渲染技术，单张 RTX 4090 即可实现每秒 30,000+ 帧的高质量 RGBD 数据收集，将数周的数据准备时间缩短至几小时。\n- **异构环境全面覆盖**：支持完全异构的并行仿真，每个并行环境均可加载不同的场景数据集（如 AI2THOR 或 ReplicaCAD）和物体组合，显著提升策略鲁棒性。\n- **任务构建简洁高效**：通过面向对象的灵活 API 自动抽象底层 GPU 细节，研究人员可专注于逻辑设计，快速搭建从桌面操作到灵巧手抓取等各类新任务。\n- **闭环验证加速落地**：内置 Real2sim 与 Sim2real 完整链路，支持在 GPU 仿真中以 100 倍速评估真实世界策略，大幅降低真机试错成本并加快部署进程。\n\nManiSkill 通过极致的 GPU 并行化能力，将机器人技能学习从“手工小作坊”模式升级为“工业化流水线”模式，让通用机器人策略的快速迭代成为可能。","https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fhaosulab_ManiSkill_007fa911.jpg","haosulab","Hao Su's Lab, UCSD","https:\u002F\u002Foss.gittoolsai.com\u002Favatars\u002Fhaosulab_d853116a.png","",null,"haosulabucsd","http:\u002F\u002Fcseweb.ucsd.edu\u002F~haosu\u002F","https:\u002F\u002Fgithub.com\u002Fhaosulab",[82,86,90,94],{"name":83,"color":84,"percentage":85},"Python","#3572A5",98.9,{"name":87,"color":88,"percentage":89},"Shell","#89e051",1,{"name":91,"color":92,"percentage":93},"Dockerfile","#384d54",0.1,{"name":95,"color":96,"percentage":97},"CMake","#DA3434",0,2796,466,"2026-04-18T07:38:54","Apache-2.0","Linux","GPU 模拟和渲染必需 NVIDIA GPU（支持 
Vulkan），Windows\u002FmacOS 仅支持 CPU 模拟；具体显存大小未说明，但高性能数据收集示例提及 RTX 4090","未说明",{"notes":106,"python":104,"dependencies":107},"该项目目前处于 Beta 版本。必须配置 Vulkan 用于渲染。Linux 系统支持最完善（支持 CPU\u002FGPU 模拟及渲染）；Windows 和 macOS 仅支持 CPU 模拟和渲染，不支持 GPU 并行模拟；WSL 不支持渲染和 GPU 模拟。核心底层依赖为 SAPIEN。",[108,109,110],"mani_skill","torch","sapien",[15,112],"其他",[114,115,116,117,118,119,120,121,122],"3d-computer-vision","reinforcement-learning","robotics","computer-vision","robot-manipulation","simulation-environment","embodied-ai","robotics-simulation","robot-learning","2026-03-27T02:49:30.150509","2026-04-19T03:05:57.103234",[126,131,136,141,146,151],{"id":127,"question_zh":128,"answer_zh":129,"source_url":130},41577,"如何在多 GPU 环境下运行 ManiSkill？","目前后端必须保持一致，无法将一个 GPU 专门用于渲染而另一个用于模拟。常见的做法是使用传统的基于 CPU 的 PPO 多进程实现：为 N 个 GPU 启动 N 个子进程，每个子进程中运行约 120 个基于 GPU 的并行环境。每个子进程（即每个 GPU）拥有一份网络副本，独立探索环境并计算梯度，最后由主进程平均所有 N*120 个环境的梯度来更新网络。注意不能简单地在不同 GPU 间传输 CUDA 张量。","https:\u002F\u002Fgithub.com\u002Fhaosulab\u002FManiSkill\u002Fissues\u002F460",{"id":132,"question_zh":133,"answer_zh":134,"source_url":135},41578,"在阿里云等无头服务器上报错\"Cannot find a suitable rendering device\"或\"DISPLAY environment variable is missing\"如何解决？","该错误通常是因为服务器缺少图形显示环境（X11）。解决方法是配置虚拟显示或使用无头渲染模式。确保在运行代码前设置了正确的 DISPLAY 变量，或者在初始化环境时指定不使用 GLFW 进行渲染（如果支持）。对于 multiprocessing 环境下的 SubProcVecEnv，需要确保子进程也能访问渲染设备，通常需要在服务器安装 xvfb (X Virtual Framebuffer) 并通过 `xvfb-run` 命令启动脚本，例如：`xvfb-run -a python your_script.py`。","https:\u002F\u002Fgithub.com\u002Fhaosulab\u002FManiSkill\u002Fissues\u002F160",{"id":137,"question_zh":138,"answer_zh":139,"source_url":140},41579,"模拟 InspireHand 灵巧手时出现 NaN 位置或关节误差过大怎么办？","这是一个已知的控制器信号发送问题。当使用固定肌腱（基于 mimic 标签创建）时，如果向所有关节（包括控制关节和模仿关节）同时发送位置信号，会导致 NaN。解决方案是更新到修复后的版本（参考 fix-mimic-joints 分支），该版本修改了向 PhysX 引擎发送关节信号的方式，不再对模仿关节重复更新位置目标。你可以尝试运行演示脚本来验证修复效果：`python mani_skill\u002Fexamples\u002Fdemo_robot.py -r floating_inspire_hand_right --random-actions -c pd_joint_delta_pos -b physx_cuda`（或使用 physx_cpu 
后端）。","https:\u002F\u002Fgithub.com\u002Fhaosulab\u002FManiSkill\u002Fissues\u002F925",{"id":142,"question_zh":143,"answer_zh":144,"source_url":145},41580,"本地训练成功率正常，但提交到排行榜后得分为 0 是什么原因？","这通常是因为模型过拟合了训练集的资产，而排行榜测试使用了不同的资产，导致泛化能力不足。此外，早期排行榜可能存在非确定性部分未正确处理的问题，现在服务器端已对所有评估进行了正确的种子设置。如果分数极低但不为 0（如 0.04），可能是策略刚好略低于成功阈值，存在一定的随机性。建议检查是否严格遵循了测试集的评估规范，并确保代码中没有硬编码训练集特有的特征。","https:\u002F\u002Fgithub.com\u002Fhaosulab\u002FManiSkill\u002Fissues\u002F93",{"id":147,"question_zh":148,"answer_zh":149,"source_url":150},41581,"在特定环境（如 MoveBucket-v1）下训练时出现\"Segmentation fault (core dumped)\"错误如何解决？","如果在其他环境（如 PickCube-v0）正常仅在特定环境崩溃，通常与环境配置参数或物理仿真稳定性有关。建议检查 `env_cfg` 中的参数设置，特别是 `obs_mode`、`n_points` 和 `control_mode` 是否与当前环境版本兼容。有时减少并行进程数（`rollout_cfg.num_procs`）或调整物理子步数可以缓解内存冲突导致的段错误。如果问题依旧，请确保使用的是最新版本的 ManiSkill 和相关依赖，因为旧版本可能在处理复杂软体或特定资产时存在内存泄漏 bug。","https:\u002F\u002Fgithub.com\u002Fhaosulab\u002FManiSkill\u002Fissues\u002F79",{"id":152,"question_zh":153,"answer_zh":154,"source_url":150},41582,"如何正确提交多个任务的模型到排行榜？","根据最新规则，每个赛道（track）的最终提交数量上限已提高（例如刚体赛道为 14 次）。这意味着你不需要将所有任务的最佳模型合并到一个 Docker 镜像中，而是可以为每个任务单独提交一个镜像。请利用增加的提交次数，针对每个任务分别打包和提交表现最好的模型，以避免因合并模型导致的兼容性或权重冲突问题。",[156,161,166,171,176,181,186,191,196,201,206,211,216,221,226,231,236,241,246,251],{"id":157,"version":158,"summary_zh":159,"released_at":160},333572,"v3.0.0b22","## 变更内容\n* 移除冗余的 `num_actors` 定义，由 @Gamenot 在 https:\u002F\u002Fgithub.com\u002Fhaosulab\u002FManiSkill\u002Fpull\u002F1056 中完成\n* [BugFix] 修复固定肌腱建模机器人在 PhysX 后端中的一些 NaN 问题。由 @StoneT2000 在 https:\u002F\u002Fgithub.com\u002Fhaosulab\u002FManiSkill\u002Fpull\u002F1080 中完成\n* 更新文档，在 Ubuntu 上使用正确的 Vulkan 包，由 @akashsharma02 在 https:\u002F\u002Fgithub.com\u002Fhaosulab\u002FManiSkill\u002Fpull\u002F1086 中完成\n* [BugFix]: 在 GPU 模拟中修复 IK 错误，由 @arth-shukla 在 https:\u002F\u002Fgithub.com\u002Fhaosulab\u002FManiSkill\u002Fpull\u002F996 中完成\n* [BugFix] 为 TurnFaucet 任务注册缺失的资产下载 ID，由 @StoneT2000 在 
https:\u002F\u002Fgithub.com\u002Fhaosulab\u002FManiSkill\u002Fpull\u002F1092 中完成\n* 为行为克隆 RGBD 添加 panda_wristcam 支持，由 @chenyenru 在 https:\u002F\u002Fgithub.com\u002Fhaosulab\u002FManiSkill\u002Fpull\u002F814 中完成\n* [BugFix] 停止 Fetch 机器人模仿控制器的警告信息，由 @StoneT2000 在 https:\u002F\u002Fgithub.com\u002Fhaosulab\u002FManiSkill\u002Fpull\u002F1103 中完成\n* [Documentation] 使用新数据和文件更新基准测试文档，由 @StoneT2000 在 https:\u002F\u002Fgithub.com\u002Fhaosulab\u002FManiSkill\u002Fpull\u002F1104 中完成\n* [BUG] 从 PlugCharger-v1 中移除空的密集奖励函数，由 @jstmn 在 https:\u002F\u002Fgithub.com\u002Fhaosulab\u002FManiSkill\u002Fpull\u002F1096 中完成\n* [Feature] 支持 WidowXAI 机器人，并提供立方体抓取的强化学习基线，同时更新任务文档，由 @hu-po 在 https:\u002F\u002Fgithub.com\u002Fhaosulab\u002FManiSkill\u002Fpull\u002F904 中完成\n* [Feature] 提供 Sim2Real 工具和教程，支持 LeRobot 硬件，由 @StoneT2000 在 https:\u002F\u002Fgithub.com\u002Fhaosulab\u002FManiSkill\u002Fpull\u002F942 中完成\n* [Feature] 支持不同的机器人颜色 \u002F 机器人颜色域随机化，用于 SO100GraspCube 的 Sim2Real 测试，由 @StoneT2000 在 https:\u002F\u002Fgithub.com\u002Fhaosulab\u002FManiSkill\u002Fpull\u002F1132 中完成\n* [Feature] 支持仅使用 CPU 进行模拟，以及在未找到可用设备时的回退选项，用于渲染，由 @StoneT2000 在 https:\u002F\u002Fgithub.com\u002Fhaosulab\u002FManiSkill\u002Fpull\u002F1133 中完成\n* [BugFix] 修复 Mac 设备未被识别为具备渲染能力的问题，由 @StoneT2000 在 https:\u002F\u002Fgithub.com\u002Fhaosulab\u002FManiSkill\u002Fpull\u002F1137 中完成\n* 将 ActionRepeatWrapper 添加到 wrappers 导入中，由 @AlexandreBrown 在 https:\u002F\u002Fgithub.com\u002Fhaosulab\u002FManiSkill\u002Fpull\u002F1141 中完成\n* 修复动作重复示意图未在线显示的问题，由 @AlexandreBrown 在 https:\u002F\u002Fgithub.com\u002Fhaosulab\u002FManiSkill\u002Fpull\u002F1142 中完成\n* vectorenv: 将环境类型注解放宽至 gymnasium.Env，由 @songyuc 在 https:\u002F\u002Fgithub.com\u002Fhaosulab\u002FManiSkill\u002Fpull\u002F1144 中完成\n* [BugFix] 修复某些 RGB 叠加模式下继承自基础数字孪生环境的环境无法释放内存的问题，由 @StoneT2000 在 https:\u002F\u002Fgithub.com\u002Fhaosulab\u002FManiSkill\u002Fpull\u002F1158 中完成\n* 修复数字孪生任务的部分重置支持，由 @zanghz21 在 
https:\u002F\u002Fgithub.com\u002Fhaosulab\u002FManiSkill\u002Fpull\u002F1155 中完成\n* [BugFix] 统一多智能体和基础智能体中的控制器状态获取与设置函数，修复轨迹记录中的一些问题，由 @StoneT2000 在 https:\u002F\u002Fgithub.com\u002Fhaosulab\u002FManiSkill\u002Fpull\u002F1161 中完成\n* [Feature] 支持设置单个关节的 qpos 值，由 @StoneT2000 在 https:\u002F\u002Fgithub.com\u002Fhaosulab\u002FManiSkill\u002Fpull\u002F1162 中完成\n* 带有运动规划支持的 Stackpyramid 任务，由 @chenyenru 完成","2025-12-05T09:27:11",{"id":162,"version":163,"summary_zh":164,"released_at":165},333573,"v3.0.0b21","## 变更内容\n* [功能] @StoneT2000 在 https:\u002F\u002Fgithub.com\u002Fhaosulab\u002FManiSkill\u002Fpull\u002F941 中实现了对 macOS 的初步支持\n* [功能] @StoneT2000 在 https:\u002F\u002Fgithub.com\u002Fhaosulab\u002FManiSkill\u002Fpull\u002F947 中添加了夜间构建\n* [BugFix] @StoneT2000 在 https:\u002F\u002Fgithub.com\u002Fhaosulab\u002FManiSkill\u002Fpull\u002F949 中修复了 macOS 平台上的性能分析代码，并更新了文档，说明了 macOS Sapien 的一些限制\n* [BugFix] @StoneT2000 在 https:\u002F\u002Fgithub.com\u002Fhaosulab\u002FManiSkill\u002Fpull\u002F951 中修复了 PhysX CPU 后端允许 `num_envs > 1` 的 bug\n* @dwaitbhatt 在 https:\u002F\u002Fgithub.com\u002Fhaosulab\u002FManiSkill\u002Fpull\u002F929 中更新了 xarm6_robotiq 机器人，使其静止姿态更加自然\n* [BugFix] @StoneT2000 在 https:\u002F\u002Fgithub.com\u002Fhaosulab\u002FManiSkill\u002Fpull\u002F974 中修复了渲染相机局部位姿未使缓存失效的问题\n* [文档] @StoneT2000 在 https:\u002F\u002Fgithub.com\u002Fhaosulab\u002FManiSkill\u002Fpull\u002F980 中更新了状态注册表的过时文档函数\n* [功能] @StoneT2000 在 https:\u002F\u002Fgithub.com\u002Fhaosulab\u002FManiSkill\u002Fpull\u002F981 中增加了更高级的模仿控制器配置，并实现了浮动式 Inspire 手部\n* @jstmn 在 https:\u002F\u002Fgithub.com\u002Fhaosulab\u002FManiSkill\u002Fpull\u002F976 中为 `stack_cube.py` 和 `peg_insertion_side.py` 添加了缺失的 `.cpu()` 调用\n* [功能] @StoneT2000 在 https:\u002F\u002Fgithub.com\u002Fhaosulab\u002FManiSkill\u002Fpull\u002F984 中自动生成机器人文档，并添加了初始质量评分\n* [功能] @StoneT2000 在 https:\u002F\u002Fgithub.com\u002Fhaosulab\u002FManiSkill\u002Fpull\u002F985 中实现了控制器的美观打印表示\n* [功能] @StoneT2000 在 
https:\u002F\u002Fgithub.com\u002Fhaosulab\u002FManiSkill\u002Fpull\u002F986 中增加了 Lerobot 硬件支持，以及 SO100 和 Koch 型号机械臂的建模\n* [文档] @StoneT2000 在 https:\u002F\u002Fgithub.com\u002Fhaosulab\u002FManiSkill\u002Fpull\u002F988 中修复了机器人文档链接，并修正了 Koch 机械臂控制器的错误命名\n* [功能] @StoneT2000 在 https:\u002F\u002Fgithub.com\u002Fhaosulab\u002FManiSkill\u002Fpull\u002F995 中进行了 SO100 PickCube 模拟测试\n* [功能] @StoneT2000 在 https:\u002F\u002Fgithub.com\u002Fhaosulab\u002FManiSkill\u002Fpull\u002F997 中改进了 SO100 模型\n* [功能] @StoneT2000 在 https:\u002F\u002Fgithub.com\u002Fhaosulab\u002FManiSkill\u002Fpull\u002F1008 中为 SO100 添加了凸包网格，以支持运动规划，并提供了生成凸包网格的示例脚本\n* @XuGW-Kevin 在 https:\u002F\u002Fgithub.com\u002Fhaosulab\u002FManiSkill\u002Fpull\u002F788 中通过混合专家和 BEE 算子增强了 SAC 算法，提升了稳定性和性能\n* @AlexandreBrown 在 https:\u002F\u002Fgithub.com\u002Fhaosulab\u002FManiSkill\u002Fpull\u002F1022 中修复了 PushCube 策略在将方块举过目标位置时不被惩罚的问题\n* [BugFix] @StoneT2000 在 https:\u002F\u002Fgithub.com\u002Fhaosulab\u002FManiSkill\u002Fpull\u002F1034 中修复了 CPU 模拟回放中的 bug\n* [功能] @dwaitbhatt 在 https:\u002F\u002Fgithub.com\u002Fhaosulab\u002FManiSkill\u002Fpull\u002F936 中为 xarm6_robotiq 提供了运动规划解决方案\n* [BugFix] @StoneT2000 在 https:\u002F\u002Fgithub.com\u002Fhaosulab\u002FManiSkill\u002Fpull\u002F1035 中更新了获取碰撞网格的函数\n* @AlexandreBrown 在 https:\u002F\u002Fgithub.com\u002Fhaosulab\u002FManiSkill\u002Fpull\u002F832 中添加了 Gym 环境动作重复包装器\n* [文档] @StoneT2000 在 https:\u002F\u002Fgithub.com\u002Fhaosulab\u002FManiSkill\u002Fpull\u002F1039 中更新了引用信息，加入了 RSS 2025 论文的详细内容\n* [BugFix] CONT 中的链接","2025-05-20T05:05:42",{"id":167,"version":168,"summary_zh":169,"released_at":170},333574,"v3.0.0b20","## 变更内容\n* [BugFix] 修复 framestack 包装器不接受 `env_idx=0...num_envs` 的问题（会导致重置所有环境，而非部分重置）。由 @StoneT2000 在 https:\u002F\u002Fgithub.com\u002Fhaosulab\u002FManiSkill\u002Fpull\u002F888 中完成。\n* 修复绘图任务文档中的视频链接。由 @arnavg115 在 https:\u002F\u002Fgithub.com\u002Fhaosulab\u002FManiSkill\u002Fpull\u002F890 中完成。\n* [BugFix] 对某些属性先创建 NumPy 数组，再转换为 PyTorch 张量。由 @StoneT2000 在 
https:\u002F\u002Fgithub.com\u002Fhaosulab\u002FManiSkill\u002Fpull\u002F894 中完成。\n* [Feature] 更新 TD-MPC2 基线，支持 128x128 RGB 图像及额外的状态数据。由 @t-sekai 在 https:\u002F\u002Fgithub.com\u002Fhaosulab\u002FManiSkill\u002Fpull\u002F903 中完成。\n* [Feature] 更新关于角色\u002F连杆领域随机化的文档，并支持对机器人\u002F智能体进行领域随机化。由 @StoneT2000 在 https:\u002F\u002Fgithub.com\u002Fhaosulab\u002FManiSkill\u002Fpull\u002F910 中完成。\n* 修复：查看器 + RGB-D 观测。由 @arth-shukla 在 https:\u002F\u002Fgithub.com\u002Fhaosulab\u002FManiSkill\u002Fpull\u002F897 中完成。\n* [BugFix] YCB 校验码问题。由 @arth-shukla 在 https:\u002F\u002Fgithub.com\u002Fhaosulab\u002FManiSkill\u002Fpull\u002F918 中完成。\n* [BugFix] 修复绘图 MP 解决方案的导入问题。由 @arnavg115 在 https:\u002F\u002Fgithub.com\u002Fhaosulab\u002FManiSkill\u002Fpull\u002F916 中完成。\n* 更新 interactive_panda.py。由 @Ethan-Chen-plus 在 https:\u002F\u002Fgithub.com\u002Fhaosulab\u002FManiSkill\u002Fpull\u002F917 中完成。\n* 修复 GPU 模拟演示，内存类型为 None 的问题。由 @SajjadPSavoji 在 https:\u002F\u002Fgithub.com\u002Fhaosulab\u002FManiSkill\u002Fpull\u002F924 中完成。\n* [BugFix] 修复使用非零初始位姿构建的关节模型会生成错误碰撞网格的问题，并弃用旧的获取关节网格函数。由 @StoneT2000 在 https:\u002F\u002Fgithub.com\u002Fhaosulab\u002FManiSkill\u002Fpull\u002F927 中完成。\n\n## 新贡献者\n* @Ethan-Chen-plus 在 https:\u002F\u002Fgithub.com\u002Fhaosulab\u002FManiSkill\u002Fpull\u002F917 中完成了首次贡献。\n* @SajjadPSavoji 在 https:\u002F\u002Fgithub.com\u002Fhaosulab\u002FManiSkill\u002Fpull\u002F924 中完成了首次贡献。\n\n**完整变更日志**：https:\u002F\u002Fgithub.com\u002Fhaosulab\u002FManiSkill\u002Fcompare\u002Fv3.0.0b19...v3.0.0b20","2025-03-17T17:20:30",{"id":172,"version":173,"summary_zh":174,"released_at":175},333575,"v3.0.0b19","## 变更内容\n* [功能] @StoneT2000 在 https:\u002F\u002Fgithub.com\u002Fhaosulab\u002FManiSkill\u002Fpull\u002F854 中升级了 pytorch_kinematics\n* @AlexandreBrown 在 https:\u002F\u002Fgithub.com\u002Fhaosulab\u002FManiSkill\u002Fpull\u002F828 中修复了重置后 final_info 的 elapsed_steps 值错误，以及 CUDA 上 final_observation 错误的问题\n* @shaido987 在 
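The v3.0.0b20 framestack fix above (PR #888) concerns partial resets: `reset` with an `env_idx` argument must clear only the listed environments' frame stacks, not all of them. A minimal, hypothetical sketch of that behavior in plain Python — `FrameStackBuffer` is illustrative and not ManiSkill's wrapper class:

```python
from collections import deque


class FrameStackBuffer:
    """Per-environment frame stacking with partial resets (illustrative sketch)."""

    def __init__(self, num_envs: int, num_stack: int):
        self.num_stack = num_stack
        self.stacks = [deque(maxlen=num_stack) for _ in range(num_envs)]

    def reset(self, first_frames, env_idx=None):
        # env_idx=None resets everything; otherwise only the listed envs,
        # which is exactly the distinction the bug above got wrong.
        idx = range(len(self.stacks)) if env_idx is None else env_idx
        for i in idx:
            self.stacks[i].clear()
            # pad with the first observation so the stack is always full
            self.stacks[i].extend([first_frames[i]] * self.num_stack)

    def append(self, frames):
        for stack, frame in zip(self.stacks, frames):
            stack.append(frame)

    def observe(self):
        return [list(stack) for stack in self.stacks]
```

After a partial reset of environment 1 only, environment 0's history must survive untouched.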
https:\u002F\u002Fgithub.com\u002Fhaosulab\u002FManiSkill\u002Fpull\u002F856 中将智能体的 qpos 类型修正为浮点数\n* [文档] @StoneT2000 在 https:\u002F\u002Fgithub.com\u002Fhaosulab\u002FManiSkill\u002Fpull\u002F861 中更新了关于标准 IL 数据集及如何回放它们的 IL 基线文档，并移除了旧的 examples.sh 文件\n* @arth-shukla 在 https:\u002F\u002Fgithub.com\u002Fhaosulab\u002FManiSkill\u002Fpull\u002F859 中更新了 YCB 和 RCAD 场景构建器，并添加了 demo_manual_control 示例\n* [BugFix] @StoneT2000 在 https:\u002F\u002Fgithub.com\u002Fhaosulab\u002FManiSkill\u002Fpull\u002F865 中更新了 ycb 数据集的校验和\n* [BugFix] @StoneT2000 在 https:\u002F\u002Fgithub.com\u002Fhaosulab\u002FManiSkill\u002Fpull\u002F870 中修复了 CPU 模拟速度过慢的问题，并优化了 CPU 模拟基准测试代码以提高速度\n* [功能] @StoneT2000 在 https:\u002F\u002Fgithub.com\u002Fhaosulab\u002FManiSkill\u002Fpull\u002F876 中新增了 RGB DP 基线以及 DrawTriangle\u002FSVG 任务\n* [功能] @StoneT2000 在 https:\u002F\u002Fgithub.com\u002Fhaosulab\u002FManiSkill\u002Fpull\u002F863 中更新了针对单件 YCB 物品抓取的 PPO 基线，并移除了 wandb-entity CLI 参数\n* @shaido987 在 https:\u002F\u002Fgithub.com\u002Fhaosulab\u002FManiSkill\u002Fpull\u002F880 中修复了多刚体 Actor 在 CPU 后端中的崩溃问题\n\n## 新贡献者\n* @AlexandreBrown 在 https:\u002F\u002Fgithub.com\u002Fhaosulab\u002FManiSkill\u002Fpull\u002F828 中完成了首次贡献\n* @shaido987 在 https:\u002F\u002Fgithub.com\u002Fhaosulab\u002FManiSkill\u002Fpull\u002F856 中完成了首次贡献\n\n**完整变更日志**: https:\u002F\u002Fgithub.com\u002Fhaosulab\u002FManiSkill\u002Fcompare\u002Fv3.0.0b18...v3.0.0b19","2025-02-25T23:55:41",{"id":177,"version":178,"summary_zh":179,"released_at":180},333576,"v3.0.0b18","## 变更内容\n* [BugFix] 修复由 @StoneT2000 在 https:\u002F\u002Fgithub.com\u002Fhaosulab\u002FManiSkill\u002Fpull\u002F842 中提出的保存 GPU 重放轨迹时的越界错误\n\n\n**完整变更日志**: https:\u002F\u002Fgithub.com\u002Fhaosulab\u002FManiSkill\u002Fcompare\u002Fv3.0.0b17...v3.0.0b18","2025-02-06T21:20:58",{"id":182,"version":183,"summary_zh":184,"released_at":185},333577,"v3.0.0b17","## 变更内容\n* [功能] 支持将 pd_joint_pos 到 pd_ee_pose 控制器的动作进行转换。修复了运动规划示例默认使用 GPU 仿真后端回放时无法正常工作的问题。由 @StoneT2000 在 
https:\u002F\u002Fgithub.com\u002Fhaosulab\u002FManiSkill\u002Fpull\u002F834 中完成。\n* [功能] 处理更灵活的观测模式，支持用户检查是否请求了真实状态数据。由 @StoneT2000 在 https:\u002F\u002Fgithub.com\u002Fhaosulab\u002FManiSkill\u002Fpull\u002F835 中完成。\n* [BugFix] 修复 AI2THOR 场景中静态物体的初始位姿在 CPU 和 GPU 仿真中不为零的问题。由 @StoneT2000 在 https:\u002F\u002Fgithub.com\u002Fhaosulab\u002FManiSkill\u002Fpull\u002F836 中完成。\n* [功能] 移除 PPO 代码中的额外张量初始化。由 @StoneT2000 在 https:\u002F\u002Fgithub.com\u002Fhaosulab\u002FManiSkill\u002Fpull\u002F837 中完成。\n* [BugFix] 修复回放缓冲区轨迹工具会使用 PyTorch 张量种子并尝试将其保存为 JSON 文件的问题。由 @StoneT2000 在 https:\u002F\u002Fgithub.com\u002Fhaosulab\u002FManiSkill\u002Fpull\u002F840 中完成。\n\n\n**完整变更日志**: https:\u002F\u002Fgithub.com\u002Fhaosulab\u002FManiSkill\u002Fcompare\u002Fv3.0.0b16...v3.0.0b17","2025-02-06T16:43:09",{"id":187,"version":188,"summary_zh":189,"released_at":190},333578,"v3.0.0b16","## 变更内容\n* [文档\u002F修复] 修复 SAC 重放缓冲区在达到最大容量后错误填满的 bug，并由 @StoneT2000 在 https:\u002F\u002Fgithub.com\u002Fhaosulab\u002FManiSkill\u002Fpull\u002F731 中为多项任务添加新的 PPO 基线。\n* 由 @songyuc 在 https:\u002F\u002Fgithub.com\u002Fhaosulab\u002FManiSkill\u002Fpull\u002F745 中修复 ppo.py 中 torch_deterministic 参数的文档。\n* [新功能] 支持对物体施加外力，由 @StoneT2000 在 https:\u002F\u002Fgithub.com\u002Fhaosulab\u002FManiSkill\u002Fpull\u002F752 中实现。\n* 将基于 Transformer 的动作分块（ACT）加入基线，由 @ywchoi02 在 https:\u002F\u002Fgithub.com\u002Fhaosulab\u002FManiSkill\u002Fpull\u002F640 中完成。\n* [修复] 修复末端执行器目标控制的动作转换中出现的抖动问题，由 @StoneT2000 在 https:\u002F\u002Fgithub.com\u002Fhaosulab\u002FManiSkill\u002Fpull\u002F756 中完成。\n* [修复] 修复因缺少属性导致在运行 ManiSkill 环境时 GPU 基准测试脚本出错的问题，由 @StoneT2000 在 https:\u002F\u002Fgithub.com\u002Fhaosulab\u002FManiSkill\u002Fpull\u002F762 中完成。\n* [新功能] 提供批量且更简洁的 API 来设置锁定的运动轴，由 @StoneT2000 在 https:\u002F\u002Fgithub.com\u002Fhaosulab\u002FManiSkill\u002Fpull\u002F763 中实现。\n* [新功能] 支持在 GPU 上设置物体质量，由 @StoneT2000 在 https:\u002F\u002Fgithub.com\u002Fhaosulab\u002FManiSkill\u002Fpull\u002F792 中完成。\n* 由 @vncntt 在 
https:\u002F\u002Fgithub.com\u002Fhaosulab\u002FManiSkill\u002Fpull\u002F801 中修正文档中的 minor 拼写错误。\n* 移除 `info` 到 NumPy 的不必要的转换（#791），由 @hesic73 在 https:\u002F\u002Fgithub.com\u002Fhaosulab\u002FManiSkill\u002Fpull\u002F796 中完成。\n* [修复] 修复在 anymalc-reach 上运行 TD-MPC2 时出现的数据类型错误，由 @t-sekai 在 https:\u002F\u002Fgithub.com\u002Fhaosulab\u002FManiSkill\u002Fpull\u002F808 中完成。\n* 将 ManiSkill-HAB 添加到文档中，由 @arth-shukla 在 https:\u002F\u002Fgithub.com\u002Fhaosulab\u002FManiSkill\u002Fpull\u002F759 中完成。\n* [新功能] 清理并标准化模仿学习基线和数据集，由 @StoneT2000 在 https:\u002F\u002Fgithub.com\u002Fhaosulab\u002FManiSkill\u002Fpull\u002F811 中完成。\n* [新功能] 为 ACT 基线和 DP 提供更整洁的基线脚本，由 @StoneT2000 在 https:\u002F\u002Fgithub.com\u002Fhaosulab\u002FManiSkill\u002Fpull\u002F815 中完成。\n* [修复] 改进 PPO 的默认参数，并修复传感器\u002F相机延迟初始化的 bug，由 @StoneT2000 在 https:\u002F\u002Fgithub.com\u002Fhaosulab\u002FManiSkill\u002Fpull\u002F816 中完成。\n* [文档] 更新图库，增加更多视频，由 @StoneT2000 在 https:\u002F\u002Fgithub.com\u002Fhaosulab\u002FManiSkill\u002Fpull\u002F819 中完成。\n* [新功能] 支持 state+\u003Cvisual_textures> 模式，由 @StoneT2000 在 https:\u002F\u002Fgithub.com\u002Fhaosulab\u002FManiSkill\u002Fpull\u002F821 中实现。\n* 由 @vncntt 在 https:\u002F\u002Fgithub.com\u002Fhaosulab\u002FManiSkill\u002Fpull\u002F802 中修正拼写错误。\n* 修复：vector-gymnasium-wrapper —— 修复 'call' 方法 —— getattr(self.env...) 
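The `state+<visual_textures>` observation mode added above (PR #821) composes the state vector with named visual textures such as normals and albedo. A hypothetical sketch of how such a mode string might be parsed — `parse_obs_mode`, its texture list, and its return fields are illustrative assumptions, not ManiSkill's actual parser:

```python
def parse_obs_mode(obs_mode: str):
    """Split a mode string like "state+rgb+normal" into flags (illustrative)."""
    parts = obs_mode.split("+")
    # texture names assumed for illustration, mirroring those the notes mention
    known_textures = {"rgb", "depth", "segmentation", "normal", "albedo"}
    textures = [p for p in parts if p in known_textures]
    unknown = [p for p in parts if p not in known_textures and p != "state"]
    if unknown:
        # fail loudly instead of silently dropping a misspelled component
        raise ValueError(f"unknown observation components: {unknown}")
    return {"use_state": "state" in parts, "textures": textures}
```

Parsing up front lets the environment validate requested components once, before any sensors are configured.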
…，由 @imishani 在 https:\u002F\u002Fgithub.com\u002Fhaosulab\u002FManiSkill\u002Fpull\u002F783 中完成。\n* [新功能] 支持配备 Robotiq 夹爪的 XArm6，由 @StoneT2000 在 https:\u002F\u002Fgithub.com\u002Fhaosulab\u002FManiSkill\u002Fpull\u002F820 中完成。\n* [修复] 修复 MS_ASSET_DIR 变量的使用问题，由 @StoneT2000 在 https:\u002F\u002Fgithub.com\u002Fhaosulab\u002FManiSkill\u002Fpull\u002F824 中完成。\n\n## 新贡献者\n* @songyuc 在 https:\u002F\u002Fgithub.com\u002Fhaosulab\u002FManiSkill\u002Fpull\u002F745 中完成了首次贡献。\n* @ywchoi02 在 https:\u002F\u002Fgithub.com\u002Fhaosulab\u002FManiSkill\u002Fpull\u002F640 中完成了首次贡献。\n* @vncntt 在 https:\u002F\u002Fgithub.com\u002Fhaosulab\u002FManiSkill\u002Fpull\u002F801 中完成了首次贡献。\n* @imishani 完成了首次贡","2025-02-04T22:37:15",{"id":192,"version":193,"summary_zh":194,"released_at":195},333579,"v3.0.0b15","## 变更内容\n* [BugFix] 修复单 CPU 环境下记录 episode 包装器的 bug，由 @StoneT2000 在 https:\u002F\u002Fgithub.com\u002Fhaosulab\u002FManiSkill\u002Fpull\u002F723 中完成\n* [Feature] 子步渲染功能，由 @StoneT2000 在 https:\u002F\u002Fgithub.com\u002Fhaosulab\u002FManiSkill\u002Fpull\u002F724 中实现\n\n\n**完整变更日志**: https:\u002F\u002Fgithub.com\u002Fhaosulab\u002FManiSkill\u002Fcompare\u002Fv3.0.0b14...v3.0.0b15","2024-11-28T19:36:39",{"id":197,"version":198,"summary_zh":199,"released_at":200},333580,"v3.0.0b14","## 变更内容\n* [版本] 版本号升级至 b14，由 @StoneT2000 在 https:\u002F\u002Fgithub.com\u002Fhaosulab\u002FManiSkill\u002Fpull\u002F721 中完成\n* [BugFix] 修复了 panda 棒状物运动规划文件夹中缺少 init.py 文件的 bug\n\n**完整变更日志**: https:\u002F\u002Fgithub.com\u002Fhaosulab\u002FManiSkill\u002Fcompare\u002Fv3.0.0b13...v3.0.0b14","2024-11-28T02:36:46",{"id":202,"version":203,"summary_zh":204,"released_at":205},333581,"v3.0.0b13","# v3.0.0b13 发布\n显著变更\u002F新增内容：\n- 在 examples\u002Fbaselines\u002Fppo_fast 中提供了一个基于 LeanRL 的新 PPO 实现。现在 PPO 训练速度大幅提升（一分钟内即可解决 PickCube 任务！约一小时可解决 PegInsertionSide 任务！）\n- 现在 scripts\u002Fdata_generation\u002Frl.sh 中提供了基于强化学习的数据生成脚本。\n- 新增了多个任务，其中包括一些双臂人形机器人任务。\n- 修复了一些小 bug，并从多方面提升了仿真速度。\n- 
更全面的领域随机化工具文档及如何将其应用于自定义环境的说明：https:\u002F\u002Fmaniskill.readthedocs.io\u002Fen\u002Flatest\u002Fuser_guide\u002Ftutorials\u002Fdomain_randomization.html\n- 添加了基于 RGB(D) 的 SAC 基线实现。\n- 修复了 pd_ee_delta_pose 控制器在使用 PPO 代码时因就地修改而无法正常工作的问题。\n- 升级了文档，现在更详细地展示了 ManiSkill 默认提供的所有任务信息。任务文档会自动注明该任务是否包含演示、奖励是密集型还是稀疏型，以及每个回合的长度。\n- 增加了许多演示视频，未来还将继续增加。所有演示视频现已统一放置在以下 Hugging Face 数据集仓库中：https:\u002F\u002Fhuggingface.co\u002Fdatasets\u002Fhaosulab\u002FManiSkill_Demonstrations。\n\nhttps:\u002F\u002Fgithub.com\u002Fuser-attachments\u002Fassets\u002F211e8b87-6b81-460e-841f-081979d1ea35\n\n\n\n完整更新日志：\n\n## 变更内容\n* [修复] 使用与基座关节相同的索引设置根节点速度，由 @arth-shukla 在 https:\u002F\u002Fgithub.com\u002Fhaosulab\u002FManiSkill\u002Fpull\u002F665 中完成。\n* [BugFix] 修复数字孪生 RGB 叠加层因新的 to tensor 系统导致的问题，并在选择的观测模式无效时显示可用的观测模式，由 @StoneT2000 在 https:\u002F\u002Fgithub.com\u002Fhaosulab\u002FManiSkill\u002Fpull\u002F668 中完成。\n* [BugFix] 修复记录回合包装器仅存储所有并行环境的回合种子而非单个种子的问题，由 @StoneT2000 在 https:\u002F\u002Fgithub.com\u002Fhaosulab\u002FManiSkill\u002Fpull\u002F671 中完成。\n* [Bug] 修复：仅在不进行部分重置时才调用 .step() 方法，由 @arth-shukla 在 https:\u002F\u002Fgithub.com\u002Fhaosulab\u002FManiSkill\u002Fpull\u002F672 中完成。\n* [BugFix] 修复 demo_vis_segmentation 脚本，由 @StoneT2000 在 https:\u002F\u002Fgithub.com\u002Fhaosulab\u002FManiSkill\u002Fpull\u002F675 中完成。\n* [特性] 为异构仿真环境统一状态表示，由 @StoneT2000 在 https:\u002F\u002Fgithub.com\u002Fhaosulab\u002FManiSkill\u002Fpull\u002F676 中完成。\n* [文档] 添加关于仿真属性的文档，例如可在运行时动态更新的对象纹理和控制器属性，由 @StoneT2000 在 https:\u002F\u002Fgithub.com\u002Fhaosulab\u002FManiSkill\u002Fpull\u002F681 中完成。\n* [特性] 支持在观测数据中包含法线和反照率，由 @StoneT2000 在 https:\u002F\u002Fgithub.com\u002Fhaosulab\u002FManiSkill\u002Fpull\u002F682 中完成。\n* [特性] 为 look_at 矩阵构造添加批量输入支持，由 @Xander-Hinrichsen 在 https:\u002F\u002Fgithub.com\u002Fhaosulab\u002FManiSkill\u002Fpull\u002F678 中完成。\n* [特性] 更新 interactive_panda.py，使其能够使用不同的查看器着色器，由 @StoneT2000 在 https:\u002F\u002Fgithub.com\u002Fhaosulab\u002FManiSkill\u002Fpull\u002F687 中完成。\n* [特性] 极速 PPO，由 
@StoneT2000 在 https:\u002F\u002Fgithub.com\u002Fhaos","2024-11-27T22:54:08",{"id":207,"version":208,"summary_zh":209,"released_at":210},333582,"v3.0.0b12","\r\n# v3.0.0b12 Release\r\n\r\nBig changes include:\r\n- RoboCasa scenes integrated into ManiSkill, easily simulating 500-1000 RGBD FPS. Run `python -m mani_skill.examples.demo_random_action -e \"RoboCasaKitchen-v1\" --render-mode=\"human\" --shader rt-fast` to try it!\r\n- Batched RNG is now recommended for scene loading in tasks, ensuring CPU\u002FGPU sims generate the exact same objects\u002Fgeometries\u002Ftextures for a task when given the same list of seeds. Read https:\u002F\u002Fmaniskill.readthedocs.io\u002Fen\u002Flatest\u002Fuser_guide\u002Fconcepts\u002Frng.html to learn more about the details\r\n- Motion planning solutions now have CPU parallelization options\r\n- Fix bug with point cloud data being scaled down twice\r\n- Add a warning if you don't add initial poses to builders. Setting initial poses is now **strongly** recommended: override `_load_agent` to initialize a robot pose, and set initial poses of other objects such that they don't intersect if spawned at those poses. This is absolutely necessary for fast and accurate GPU simulation, and it fixes a number of issues with the open cabinet task.\r\n- Major bug fixes to the RNG introduced in 3.0.0b11; notably, environments would sometimes not randomize at all despite being reset. 
Please upgrade to 3.0.0b12 instead of using b11.\r\n\r\n![sapien_screenshot_6 1](https:\u002F\u002Fgithub.com\u002Fuser-attachments\u002Fassets\u002F8d78d8db-d829-4747-bab5-86c15c6abfb7)\r\n![robocasa_0](https:\u002F\u002Fgithub.com\u002Fuser-attachments\u002Fassets\u002Fafab1413-1a3e-4a0e-82d8-7574a2e74f73)\r\n![sapien_screenshot_9 1](https:\u002F\u002Fgithub.com\u002Fuser-attachments\u002Fassets\u002F22c2f026-02e9-4b3f-b3c1-855aea491aab)\r\n\r\n\r\n## What's Changed\r\n* docs: update README.md by @eltociear in https:\u002F\u002Fgithub.com\u002Fhaosulab\u002FManiSkill\u002Fpull\u002F594\r\n* add missing 'sensor_data' handling in `parse_visual_obs_mode_to_struct` by @hesic73 in https:\u002F\u002Fgithub.com\u002Fhaosulab\u002FManiSkill\u002Fpull\u002F606\r\n* Update docs by @FarukhS52 in https:\u002F\u002Fgithub.com\u002Fhaosulab\u002FManiSkill\u002Fpull\u002F614\r\n* [BugFix] ControllerDict prevented before_simulation_step on child controllers by @RoboNuke in https:\u002F\u002Fgithub.com\u002Fhaosulab\u002FManiSkill\u002Fpull\u002F613\r\n* [Docs] Update docs on demonstrations and imitation learning by @StoneT2000 in https:\u002F\u002Fgithub.com\u002Fhaosulab\u002FManiSkill\u002Fpull\u002F618\r\n* [Docs] Fix Typos in docs by @VaibhavWakde52 in https:\u002F\u002Fgithub.com\u002Fhaosulab\u002FManiSkill\u002Fpull\u002F622\r\n* [BugFix] Agent registration not working when overriding a robot uid by @rechim25 in https:\u002F\u002Fgithub.com\u002Fhaosulab\u002FManiSkill\u002Fpull\u002F623\r\n* [Feature]: Add CPU multiprocessing for Panda Motionplanner's Trajecto… by @chenyenru in https:\u002F\u002Fgithub.com\u002Fhaosulab\u002FManiSkill\u002Fpull\u002F611\r\n* [Feature] Habitat Rearrange tasks + Bug fixes use due to bad default initial poses by @arth-shukla in https:\u002F\u002Fgithub.com\u002Fhaosulab\u002FManiSkill\u002Fpull\u002F529\r\n* [BugFix] Add error handling for Panda's Motionplanner by @chenyenru in 
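The batched RNG recommendation in the v3.0.0b12 notes above boils down to giving each parallel environment its own seed-keyed generator, so sampled scene properties depend only on that environment's seed, not on the simulation backend or batch composition. A minimal sketch of the idea — the `BatchedRNG` class and `sample_scale` method are illustrative assumptions, not ManiSkill's API:

```python
import random


class BatchedRNG:
    """One independent RNG per parallel environment, keyed by its seed."""

    def __init__(self, seeds):
        self.rngs = [random.Random(s) for s in seeds]

    def sample_scale(self):
        # e.g. a per-env randomized object scale drawn at scene-load time
        return [rng.uniform(0.5, 1.5) for rng in self.rngs]
```

Because each stream is keyed only by its own seed, an environment samples the same values whether it runs alone or inside a batch — which is what makes CPU and GPU scene loading reproducible from the same seed list.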
https:\u002F\u002Fgithub.com\u002Fhaosulab\u002FManiSkill\u002Fpull\u002F627\r\n* [Feature] Reproducible batched RNG support, fix state representations of various tasks  by @StoneT2000 in https:\u002F\u002Fgithub.com\u002Fhaosulab\u002FManiSkill\u002Fpull\u002F619\r\n* [BugFix] Fix OpenCabinetDrawer-v1 by @StoneT2000 in https:\u002F\u002Fgithub.com\u002Fhaosulab\u002FManiSkill\u002Fpull\u002F630\r\n* [BugFix] Fix bug in SAC code and remove some old code in RL examples by @StoneT2000 in https:\u002F\u002Fgithub.com\u002Fhaosulab\u002FManiSkill\u002Fpull\u002F639\r\n* [Feature] Support the Robocasa scene dataset by @StoneT2000 in https:\u002F\u002Fgithub.com\u002Fhaosulab\u002FManiSkill\u002Fpull\u002F648\r\n* [(Small) Feature]: small build util changes, extra humanoid keyframes + attrs by @arth-shukla in https:\u002F\u002Fgithub.com\u002Fhaosulab\u002FManiSkill\u002Fpull\u002F651\r\n* [BugFix] Improve GPU sim speed and new docs on initial pose handling by @StoneT2000 in https:\u002F\u002Fgithub.com\u002Fhaosulab\u002FManiSkill\u002Fpull\u002F655\r\n* [BugFix]: improve perf for batched_rng by @arth-shukla in https:\u002F\u002Fgithub.com\u002Fhaosulab\u002FManiSkill\u002Fpull\u002F656\r\n* [BugFix] Fix bug where sometimes the same episode rng is given if reconfiguration freq is 1 but enhanced determinism is false and no seed is given by @StoneT2000 in https:\u002F\u002Fgithub.com\u002Fhaosulab\u002FManiSkill\u002Fpull\u002F659\r\n\r\n## New Contributors\r\n* @eltociear made their first contribution in https:\u002F\u002Fgithub.com\u002Fhaosulab\u002FManiSkill\u002Fpull\u002F594\r\n* @hesic73 made their first contribution in https:\u002F\u002Fgithub.com\u002Fhaosulab\u002FManiSkill\u002Fpull\u002F606\r\n* @FarukhS52 made their first contribution in https:\u002F\u002Fgithub.com\u002Fhaosulab\u002FManiSkill\u002Fpull\u002F614\r\n* @RoboNuke made their first contribution in https:\u002F\u002Fgithub.com\u002Fhaosulab\u002FManiSkill\u002Fpull\u002F613\r\n* @VaibhavWakde52 
made their first contribution in https:\u002F\u002Fgithub.com\u002Fhaosulab\u002FManiSkill\u002Fpull\u002F622\r\n* @rechim25 made their first contribution in https:\u002F\u002Fgithub.com\u002Fhaosulab\u002FManiSkill\u002Fpull\u002F623\r\n\r\n**Full Changelog**: https:\u002F\u002Fgithub.com\u002Fhaosulab\u002FManiSkill\u002Fcompare\u002Fv3.0.0b10...v3.0.0b12","2024-10-29T21:29:10",{"id":212,"version":213,"summary_zh":214,"released_at":215},333583,"v3.0.0b10","# v3.0.0b10 is released\r\n\r\nThis new version introduces many new features:\r\n- Ant walk\u002Frun control tasks added and solvable via visual\u002Fstate based PPO\r\n- New, clean, systematic wrappers\u002Fdesign for fairly evaluating algorithms in simulation for RL, IL, etc.\r\n- All end-effector \u002F IK based controllers are now GPU parallelized. Big thank you to @LemonPi for help on batched IK control via their pytorch kinematics library\r\n- Four Real2Sim environments from SIMPLER have been added and cleaned up; they show good correlation between sim and real success rates and low MMRV relative to the original CPU implementations. Evaluation is now 60-100x faster than the real world!\r\n- The trajectory conversion tool for the panda arm is now much more accurate. 
Tasks like peg insertion \u002F plug charger can convert from pd joint position to any other action space with a much higher success rate.\r\n- Panda wristcam robot had an incorrect initialization on the table scene which has been fixed.\r\n- A base drawing simulation environment with no tasks or demos at the moment\r\n- TwoRobotPickCube-v1 has a new more optimized reward function\r\n- Updated baselines to adhere to new standards of evaluation metrics and reporting\r\n- New sim+rendering benchmarks\r\n- Fix various bugs\r\n\r\n\r\nDemos of the new drawing environment and real2sim environments below\r\n\r\n\r\nhttps:\u002F\u002Fgithub.com\u002Fuser-attachments\u002Fassets\u002F60c2a5c5-13d8-4d89-b5f3-1fb92ba55772\r\n\r\nhttps:\u002F\u002Fgithub.com\u002Fuser-attachments\u002Fassets\u002F50ef4ae8-e6ac-412e-bc7e-94d23f85a6cb\r\n\r\n\r\n## What's Changed\r\n* [BugFix] Fix bug with processing segmentation images for older torch versions by @StoneT2000 in https:\u002F\u002Fgithub.com\u002Fhaosulab\u002FManiSkill\u002Fpull\u002F532\r\n* [BugFix] Bump sapien dependency in tdmpc2 baseline for mani_skill update by @chenyenru in https:\u002F\u002Fgithub.com\u002Fhaosulab\u002FManiSkill\u002Fpull\u002F527\r\n* Added MS Ant Walk and Run Tasks by @Xander-Hinrichsen in https:\u002F\u002Fgithub.com\u002Fhaosulab\u002FManiSkill\u002Fpull\u002F533\r\n* [Feature] Updated TD-MPC2 evaluation and fixed some bugs by @t-sekai in https:\u002F\u002Fgithub.com\u002Fhaosulab\u002FManiSkill\u002Fpull\u002F538\r\n* Added behavior cloning baselines by @arnavg115 in https:\u002F\u002Fgithub.com\u002Fhaosulab\u002FManiSkill\u002Fpull\u002F531\r\n* [Feature] Align and simplify policy\u002FML evaluation in ManiSkill by @StoneT2000 in https:\u002F\u002Fgithub.com\u002Fhaosulab\u002FManiSkill\u002Fpull\u002F544\r\n* [Feature] Massively improve success rate of pd joint pos \u002F delta pos conversion to pd ee delta pos\u002Fpose controller by @StoneT2000 in 
https:\u002F\u002Fgithub.com\u002Fhaosulab\u002FManiSkill\u002Fpull\u002F549\r\n* [Feature] Clean up trajectory replay code and more docs on the tool by @StoneT2000 in https:\u002F\u002Fgithub.com\u002Fhaosulab\u002FManiSkill\u002Fpull\u002F552\r\n* panda wristcam initialization added by @jamesahou in https:\u002F\u002Fgithub.com\u002Fhaosulab\u002FManiSkill\u002Fpull\u002F553\r\n* [Feature] Drawing Simulation Base Environment by @StoneT2000 in https:\u002F\u002Fgithub.com\u002Fhaosulab\u002FManiSkill\u002Fpull\u002F555\r\n* [BugFix] Fix CPU articulation net contact forces being wrong shape by @StoneT2000 in https:\u002F\u002Fgithub.com\u002Fhaosulab\u002FManiSkill\u002Fpull\u002F556\r\n* [Feature] Real2Sim Eval Digital Twins by @StoneT2000 in https:\u002F\u002Fgithub.com\u002Fhaosulab\u002FManiSkill\u002Fpull\u002F536\r\n* Solved PPO for TwoRobotPickCube-v1 by @jamesahou in https:\u002F\u002Fgithub.com\u002Fhaosulab\u002FManiSkill\u002Fpull\u002F562\r\n* Behavior cloning bugfixes by @arnavg115 in https:\u002F\u002Fgithub.com\u002Fhaosulab\u002FManiSkill\u002Fpull\u002F565\r\n* [Docs] New benchmarking scripts by @StoneT2000 in https:\u002F\u002Fgithub.com\u002Fhaosulab\u002FManiSkill\u002Fpull\u002F568\r\n* Benchmarking code 2.0 by @StoneT2000 in https:\u002F\u002Fgithub.com\u002Fhaosulab\u002FManiSkill\u002Fpull\u002F573\r\n* [BugFix] Fix panda_v2_gripper.urdf visual mesh suffix (dae->glb) by @kh11kim in https:\u002F\u002Fgithub.com\u002Fhaosulab\u002FManiSkill\u002Fpull\u002F575\r\n* [Docs] More rendering benchmarking by @StoneT2000 in https:\u002F\u002Fgithub.com\u002Fhaosulab\u002FManiSkill\u002Fpull\u002F580\r\n* [Docs] Sim+rendering Benchmarking, matching visuals by @StoneT2000 in https:\u002F\u002Fgithub.com\u002Fhaosulab\u002FManiSkill\u002Fpull\u002F584\r\n* [Docs] Paper upload and update documentation and READMEs by @StoneT2000 in https:\u002F\u002Fgithub.com\u002Fhaosulab\u002FManiSkill\u002Fpull\u002F585\r\n* [Docs] fix typos in paper header by 
@StoneT2000 in https:\u002F\u002Fgithub.com\u002Fhaosulab\u002FManiSkill\u002Fpull\u002F586\r\n* [Docs] Update docs on controllers and fix video link by @StoneT2000 in https:\u002F\u002Fgithub.com\u002Fhaosulab\u002FManiSkill\u002Fpull\u002F589\r\n* [Feature] Updated TD-MPC2 with CPU mode and new shared metrics by @t-sekai in https:\u002F\u002Fgithub.com\u002Fhaosulab\u002FManiSkill\u002Fpull\u002F578\r\n* [Version] v3.0.0b10 by @StoneT2000 in https:\u002F\u002Fgithub.com\u002Fhaosulab\u002FManiSkill\u002Fpull\u002F591\r\n\r\n## New Contributors\r\n* @chenyenru made their first contribution in https:\u002F\u002Fgithub.com\u002Fhaosulab\u002FManiSkill\u002Fpull\u002F527\r\n* @jamesahou made their first contribution in https:\u002F\u002Fgithub.com\u002Fhaosulab\u002FManiSkill\u002Fpull\u002F553\r\n* @kh11kim made their first contribution in https:\u002F\u002Fgithub.com\u002Fhaosulab\u002FManiSkill\u002Fpull\u002F575\r\n\r\n**Full Changelog**: https:\u002F\u002Fgithub.com\u002Fhaosulab\u002FManiSkill\u002Fcompare\u002Fv3.0.0b9...v3.0.0b10","2024-10-01T20:22:06",{"id":217,"version":218,"summary_zh":219,"released_at":220},333584,"v3.0.0b9","## What's Changed\r\n* [BugFix] Fix render all mode by @StoneT2000 in https:\u002F\u002Fgithub.com\u002Fhaosulab\u002FManiSkill\u002Fpull\u002F521\r\n\r\n\r\n**Full Changelog**: https:\u002F\u002Fgithub.com\u002Fhaosulab\u002FManiSkill\u002Fcompare\u002Fv3.0.0b8...v3.0.0b9","2024-08-22T09:42:09",{"id":222,"version":223,"summary_zh":224,"released_at":225},333585,"v3.0.0b8","## What's Changed\r\n* [Feature] More standardized metrics reporting for diffusion policy by @StoneT2000 in https:\u002F\u002Fgithub.com\u002Fhaosulab\u002FManiSkill\u002Fpull\u002F495\r\n* [Feature] Update PPO RGB baselines by @StoneT2000 in https:\u002F\u002Fgithub.com\u002Fhaosulab\u002FManiSkill\u002Fpull\u002F475\r\n* [Docs] Add missing citations for baselines by @StoneT2000 in https:\u002F\u002Fgithub.com\u002Fhaosulab\u002FManiSkill\u002Fpull\u002F497\r\n* 
Fix pytorch kinematics bugs by @StoneT2000 in https:\u002F\u002Fgithub.com\u002Fhaosulab\u002FManiSkill\u002Fpull\u002F500\r\n* [Feature] New shader config system and refactors by @StoneT2000 in https:\u002F\u002Fgithub.com\u002Fhaosulab\u002FManiSkill\u002Fpull\u002F499\r\n* [Bug Fix] Add missing _viewer typing by @StoneT2000 in https:\u002F\u002Fgithub.com\u002Fhaosulab\u002FManiSkill\u002Fpull\u002F506\r\n* [BugFix] Support triangle mesh collision shapes in collision mesh generations by @StoneT2000 in https:\u002F\u002Fgithub.com\u002Fhaosulab\u002FManiSkill\u002Fpull\u002F507\r\n* [BugFixes] Fix default robots used for harder table top tasks fix record episode bug when applied to cpu gym wrapped envs by @StoneT2000 in https:\u002F\u002Fgithub.com\u002Fhaosulab\u002FManiSkill\u002Fpull\u002F508\r\n* [Feature] Add floating robotiq_2f_85 gripper support + fix bugs with other robot arms that use that gripper. by @StoneT2000 in https:\u002F\u002Fgithub.com\u002Fhaosulab\u002FManiSkill\u002Fpull\u002F513\r\n* Dm control humanoid ppo learnable stand, walk, run by @Xander-Hinrichsen in https:\u002F\u002Fgithub.com\u002Fhaosulab\u002FManiSkill\u002Fpull\u002F484\r\n* [Docs] Upgrading docs by @StoneT2000 in https:\u002F\u002Fgithub.com\u002Fhaosulab\u002FManiSkill\u002Fpull\u002F516\r\n\r\n\r\n**Full Changelog**: https:\u002F\u002Fgithub.com\u002Fhaosulab\u002FManiSkill\u002Fcompare\u002Fv3.0.0b7...v3.0.0b8","2024-08-22T04:34:19",{"id":227,"version":228,"summary_zh":229,"released_at":230},333586,"v3.0.0b7","Fix a bug with the CPU IK solver and the record episode wrapper.\r\n## What's Changed\r\n* [Feature] Diffusion policy baseline by @StoneT2000 in https:\u002F\u002Fgithub.com\u002Fhaosulab\u002FManiSkill\u002Fpull\u002F493\r\n\r\n\r\n**Full Changelog**: https:\u002F\u002Fgithub.com\u002Fhaosulab\u002FManiSkill\u002Fcompare\u002Fv3.0.0b6...v3.0.0b7","2024-08-12T22:05:22",{"id":232,"version":233,"summary_zh":234,"released_at":235},333587,"v3.0.0b6","## What's 
Changed\r\n* use zeros_like in scene setup gpu to avoid nan by @hzaskywalker in https:\u002F\u002Fgithub.com\u002Fhaosulab\u002FManiSkill\u002Fpull\u002F400\r\n* Dense Rews PullCube & LiftPegUpright by @Xander-Hinrichsen in https:\u002F\u002Fgithub.com\u002Fhaosulab\u002FManiSkill\u002Fpull\u002F403\r\n* Update docstring to correctly cite ReplicaCAD as default by @chennisden in https:\u002F\u002Fgithub.com\u002Fhaosulab\u002FManiSkill\u002Fpull\u002F410\r\n* [BugFix] Replaces the default gymnasium timelimit wrapper with a batched\u002Ftorch version when gym.make is used by @StoneT2000 in https:\u002F\u002Fgithub.com\u002Fhaosulab\u002FManiSkill\u002Fpull\u002F416\r\n* [Feature] Expose articulation acceleration values by @StoneT2000 in https:\u002F\u002Fgithub.com\u002Fhaosulab\u002FManiSkill\u002Fpull\u002F415\r\n* [BugFix] Fix bug where motion planning example code kept trying to recreate grasp pose visual objects by @StoneT2000 in https:\u002F\u002Fgithub.com\u002Fhaosulab\u002FManiSkill\u002Fpull\u002F428\r\n* [Docs] In ReplicaCAD scene builder, fix comment documenting ASSET_DIR location by @chennisden in https:\u002F\u002Fgithub.com\u002Fhaosulab\u002FManiSkill\u002Fpull\u002F430\r\n* Remove unnecessary global declaration by @chennisden in https:\u002F\u002Fgithub.com\u002Fhaosulab\u002FManiSkill\u002Fpull\u002F433\r\n* [Feature] add G1 robot by @matheecs in https:\u002F\u002Fgithub.com\u002Fhaosulab\u002FManiSkill\u002Fpull\u002F405\r\n* [Feature] Add a floating panda gripper and update missing docs on new robots by @StoneT2000 in https:\u002F\u002Fgithub.com\u002Fhaosulab\u002FManiSkill\u002Fpull\u002F436\r\n* Initialize qvel when resetting for control_mode = \"pd_joint_pos_vel\" by @sean1295 in https:\u002F\u002Fgithub.com\u002Fhaosulab\u002FManiSkill\u002Fpull\u002F432\r\n* [Feature] Add RLPD Baseline by @StoneT2000 in https:\u002F\u002Fgithub.com\u002Fhaosulab\u002FManiSkill\u002Fpull\u002F437\r\n* Update Docs by @Xander-Hinrichsen in 
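The first v3.0.0b6 entry above (PR #400) replaces uninitialized allocations with `zeros_like` during GPU scene setup. NumPy shows the hazard in miniature: an `empty_like` buffer contains whatever bytes were already in the allocation, which in GPU memory can decode to NaN and silently poison downstream physics state, while `zeros_like` costs one memset and always yields a defined starting value:

```python
import numpy as np

pose = np.array([0.1, 0.2, 0.3], dtype=np.float32)

# empty_like only allocates: its contents are arbitrary leftover memory
scratch = np.empty_like(pose)  # values here are undefined

# zeros_like guarantees a well-defined, NaN-free buffer of the same shape/dtype
init = np.zeros_like(pose)
```

This is a generic stand-in for the fix, not the patched SAPIEN/ManiSkill code itself.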
https:\u002F\u002Fgithub.com\u002Fhaosulab\u002FManiSkill\u002Fpull\u002F441\r\n* [Feature] Support ppo_rgb with rgb only, no state. And some bug fixes for ppo_rgb for locomotion tasks by @StoneT2000 in https:\u002F\u002Fgithub.com\u002Fhaosulab\u002FManiSkill\u002Fpull\u002F446\r\n* [BugFix] Fix bug where linear\u002Fangular velocities are swapped in GPU sim vs CPU sim by @StoneT2000 in https:\u002F\u002Fgithub.com\u002Fhaosulab\u002FManiSkill\u002Fpull\u002F452\r\n* [Docs] Update RFCL baseline documentation to pin compatible jax\u002Ftorch… by @StoneT2000 in https:\u002F\u002Fgithub.com\u002Fhaosulab\u002FManiSkill\u002Fpull\u002F453\r\n* Control Hopper Env, Stand + Hop by @Xander-Hinrichsen in https:\u002F\u002Fgithub.com\u002Fhaosulab\u002FManiSkill\u002Fpull\u002F440\r\n* [Feature] New assets for floor\u002Flighting and support for recording all envs in one scene in the parallel GUI render mode by @StoneT2000 in https:\u002F\u002Fgithub.com\u002Fhaosulab\u002FManiSkill\u002Fpull\u002F445\r\n* [Feature] SIm\u002FRender device selection by @StoneT2000 in https:\u002F\u002Fgithub.com\u002Fhaosulab\u002FManiSkill\u002Fpull\u002F455\r\n* [Feature] Automatic asset download checks by @StoneT2000 in https:\u002F\u002Fgithub.com\u002Fhaosulab\u002FManiSkill\u002Fpull\u002F456\r\n* simbackend for cpu by @hzaskywalker in https:\u002F\u002Fgithub.com\u002Fhaosulab\u002FManiSkill\u002Fpull\u002F459\r\n* [Feature] New assets, vision based quadruped reach example, and new render all mode by @StoneT2000 in https:\u002F\u002Fgithub.com\u002Fhaosulab\u002FManiSkill\u002Fpull\u002F461\r\n* [Feature] Add unitree go 2 task with simplified collisions and new penalty for quadruped reach for better gaits by @StoneT2000 in https:\u002F\u002Fgithub.com\u002Fhaosulab\u002FManiSkill\u002Fpull\u002F463\r\n* [Feature] Quadruped spin control task by @StoneT2000 in https:\u002F\u002Fgithub.com\u002Fhaosulab\u002FManiSkill\u002Fpull\u002F464\r\n* [Docs] Update renders with new floor texture 
and update docs on humanoid tasks by @StoneT2000 in https:\u002F\u002Fgithub.com\u002Fhaosulab\u002FManiSkill\u002Fpull\u002F465\r\n* [BugFix] Fix memory leak by forcing a gc.collect upon reconfiguring by @StoneT2000 in https:\u002F\u002Fgithub.com\u002Fhaosulab\u002FManiSkill\u002Fpull\u002F467\r\n* [BugFix] Fix multi-gpu setup documentation and future proof some code by @StoneT2000 in https:\u002F\u002Fgithub.com\u002Fhaosulab\u002FManiSkill\u002Fpull\u002F468\r\n* [Docs] Documentation on proper RL benchmarking and state based PPO results by @StoneT2000 in https:\u002F\u002Fgithub.com\u002Fhaosulab\u002FManiSkill\u002Fpull\u002F470\r\n* [Feature] Basic support for stable baselines 3 by @StoneT2000 in https:\u002F\u002Fgithub.com\u002Fhaosulab\u002FManiSkill\u002Fpull\u002F476\r\n* [Feature] Code to support future render system by @StoneT2000 in https:\u002F\u002Fgithub.com\u002Fhaosulab\u002FManiSkill\u002Fpull\u002F480\r\n* [BugFix] Add back pytorch kinematics which is likely more stable by @StoneT2000 in https:\u002F\u002Fgithub.com\u002Fhaosulab\u002FManiSkill\u002Fpull\u002F485\r\n* [Feature] Add in vectorized TDMPC-2 Baseline by @t-sekai in https:\u002F\u002Fgithub.com\u002Fhaosulab\u002FManiSkill\u002Fpull\u002F451\r\n* [Docs] Update docs and fix some bugs with autodoc by @StoneT2000 in https:\u002F\u002Fgithub.com\u002Fhaosulab\u002FManiSkill\u002Fpull\u002F487\r\n* [Feature] Align RFCL and RLPD training metrics with other RL baselines by @StoneT2000 in https:\u002F\u002Fgithub.com\u002Fhaosulab\u002FManiSkill\u002Fpull\u002F489\r\n* [BugFix] Adding missing init.py file for users who pip install from git by @StoneT2000 in https:\u002F\u002Fgithub.com\u002Fhaosulab\u002FManiSkill\u002Fpull\u002F491\r\n\r\n## New Contributors\r\n* @chennisden made their first contribution in https:\u002F\u002Fgithub.com\u002Fhaosulab\u002FManiSkill\u002Fpull\u002F410\r\n* @matheecs made their first contribution in 
https:\u002F\u002Fgithub.com\u002Fhaosulab\u002FManiSkill\u002Fpull\u002F405\r\n* @sean1295 made their first contribution in https:\u002F\u002Fgithub.com\u002Fhaosulab\u002FManiSkill\u002Fpull\u002F432\r\n* @t-sekai made their first contribution in https:\u002F\u002Fgithub.com\u002Fhaosulab\u002FManiSkill\u002Fpull\u002F451\r\n\r\n**Full Changelog**: https:","2024-08-12T18:21:33",{"id":237,"version":238,"summary_zh":239,"released_at":240},333588,"v3.0.0b5","## What's Changed\r\n* [Feature] SAC Parallelized Baseline by @StoneT2000 in https:\u002F\u002Fgithub.com\u002Fhaosulab\u002FManiSkill\u002Fpull\u002F345\r\n* [Feature] Add Panda Wrist Cam option to PickSingleYCB task by @StoneT2000 in https:\u002F\u002Fgithub.com\u002Fhaosulab\u002FManiSkill\u002Fpull\u002F349\r\n* [Feature] Improved QuadrupedReach rewards and make env more markov for faster training by @StoneT2000 in https:\u002F\u002Fgithub.com\u002Fhaosulab\u002FManiSkill\u002Fpull\u002F350\r\n* [Feature] Add back the high quality AI2THOR Scenes by @StoneT2000 in https:\u002F\u002Fgithub.com\u002Fhaosulab\u002FManiSkill\u002Fpull\u002F352\r\n* [Docs] New documentation on scene datasets by @StoneT2000 in https:\u002F\u002Fgithub.com\u002Fhaosulab\u002FManiSkill\u002Fpull\u002F353\r\n* [Docs] Old hyperlink fix by @CreativeNick in https:\u002F\u002Fgithub.com\u002Fhaosulab\u002FManiSkill\u002Fpull\u002F356\r\n* [Feature] New CPU\u002FGPU contact forces\u002Fimpulses unified API by @StoneT2000 in https:\u002F\u002Fgithub.com\u002Fhaosulab\u002FManiSkill\u002Fpull\u002F354\r\n* [Docs] Fix docs on contacts API by @StoneT2000 in https:\u002F\u002Fgithub.com\u002Fhaosulab\u002FManiSkill\u002Fpull\u002F359\r\n* [BugFix]: Remove extraneous print() by @arth-shukla in https:\u002F\u002Fgithub.com\u002Fhaosulab\u002FManiSkill\u002Fpull\u002F360\r\n* [Fix]: Fix Fetch grip directions by @arth-shukla in https:\u002F\u002Fgithub.com\u002Fhaosulab\u002FManiSkill\u002Fpull\u002F361\r\n* [Feature] SAC + reverse forward 
curriculum learning by @StoneT2000 in https://github.com/haosulab/ManiSkill/pull/365
* [Feature] Parallel Rendering Benchmark Code by @StoneT2000 in https://github.com/haosulab/ManiSkill/pull/363
* [Docs] Some benchmark results on 4090 by @StoneT2000 in https://github.com/haosulab/ManiSkill/pull/368
* [BugFix] Fix trajectory replay saving the wrong first observation if using use first env state by @StoneT2000 in https://github.com/haosulab/ManiSkill/pull/369
* [Feature] Faster jax RFCL by @StoneT2000 in https://github.com/haosulab/ManiSkill/pull/373
* [Docs] Fix docs links for RFCL by @StoneT2000 in https://github.com/haosulab/ManiSkill/pull/374
* [Feature] Add more scripts for generating motionplanning demos, update demo links to new datasets on HF by @StoneT2000 in https://github.com/haosulab/ManiSkill/pull/372
* [BugFix] allow set collision group bit to 0 in set_collision_group_bit by @arth-shukla in https://github.com/haosulab/ManiSkill/pull/376
* [CSE 276F Submission] PokeCube-v1 by @xixinzhang in https://github.com/haosulab/ManiSkill/pull/375
* Added RollBall env by @guru-narayana in https://github.com/haosulab/ManiSkill/pull/366
* [Docs] Fix docs on roll ball and update the demo video by @StoneT2000 in https://github.com/haosulab/ManiSkill/pull/380
* fix data set & apply_gpu order in setup_gpu by @hzaskywalker in https://github.com/haosulab/ManiSkill/pull/381
* [CSE 276F Submission] PlaceSphere-v1 by @astonastonaston in https://github.com/haosulab/ManiSkill/pull/379
* [CSE 276F Submission] PushT-v1 by @Xander-Hinrichsen in https://github.com/haosulab/ManiSkill/pull/378
* [Feature] Parallel render viewer by @StoneT2000 in https://github.com/haosulab/ManiSkill/pull/383
* [Feature] Disable some collisions on fetch to make it run faster by @StoneT2000 in https://github.com/haosulab/ManiSkill/pull/385
* fetch move forward/backward only by @arth-shukla in https://github.com/haosulab/ManiSkill/pull/386
* [BugFix] Fix PD joint position to PD ee delta pose control conversion by @StoneT2000 in https://github.com/haosulab/ManiSkill/pull/388
* [Docs] Update ppo examples by @StoneT2000 in https://github.com/haosulab/ManiSkill/pull/389
* [BugFix] Fix bug with recorded motion planning demos of some tasks by @StoneT2000 in https://github.com/haosulab/ManiSkill/pull/391

## New Contributors
* @CreativeNick made their first contribution in https://github.com/haosulab/ManiSkill/pull/356
* @xixinzhang made their first contribution in https://github.com/haosulab/ManiSkill/pull/375
* @guru-narayana made their first contribution in https://github.com/haosulab/ManiSkill/pull/366
* @hzaskywalker made their first contribution in https://github.com/haosulab/ManiSkill/pull/381
* @astonastonaston made their first contribution in https://github.com/haosulab/ManiSkill/pull/379
* @Xander-Hinrichsen made their first contribution in https://github.com/haosulab/ManiSkill/pull/378

**Full Changelog**: https://github.com/haosulab/ManiSkill/compare/v3.0.0b4...v3.0.0b5
Released: 2024-06-23

# v3.0.0b4 Release

Major changes:
- The OpenCV (cv2) package is no longer a dependency, making it easier to run ManiSkill on more systems.
- EE-based control (PDEEPoseController) has been overhauled, and the documentation has been updated to explain it much more clearly. Importantly, there are now 4 frames of control (arising from the product of 2 rotation frames and 2 translation frames), and the default has been changed to the more intuitive one (e.g. +Z is actually upwards / in the root link frame). Moreover, the CPU sim's default frame is now the same frame used by the GPU sim. See https://maniskill.readthedocs.io/en/latest/user_guide/concepts/controllers.html#pd-ee-end-effector-pose. This also means policies previously trained on EE controllers (like pd_ee_delta_pose) will not behave the same as before and do not transfer directly to this version.
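As a quick illustration of the "product of 2 rotation and 2 translation frames" arithmetic, a minimal stdlib sketch (the frame names below are illustrative placeholders, not ManiSkill's actual frame identifiers; see the controllers documentation linked above for the real names):

```python
from itertools import product

# One translation-frame choice times one rotation-frame choice
# yields the controller's full set of control frames.
# NOTE: names are hypothetical placeholders for illustration only.
translation_frames = ("root_translation", "body_translation")
rotation_frames = ("root_aligned_body_rotation", "body_rotation")

control_frames = [f"{t}:{r}" for t, r in product(translation_frames, rotation_frames)]
for frame in control_frames:
    print(frame)
print(len(control_frames))  # 2 x 2 = 4 frames of control
```

This is why the release notes speak of exactly 4 frames: every combination of the two binary choices is a distinct control frame.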
- The SAPIEN dependency has been upgraded, which fixes a number of bugs, particularly around support for older / less common GPUs.

## What's Changed
* [BugFix] Windows/WSL bug fixes by @StoneT2000 in https://github.com/haosulab/ManiSkill/pull/333
* [Feature] Remove cv2 by @StoneT2000 in https://github.com/haosulab/ManiSkill/pull/335
* [Feature] Update scene builder api by @arth-shukla in https://github.com/haosulab/ManiSkill/pull/315
* [BugFix] Fix bug where Actors/Root Links did not accept raw tensors for poses by @StoneT2000 in https://github.com/haosulab/ManiSkill/pull/336
* [BugFix] Fix typo in ppo baselines with printed evaluation step values by @StoneT2000 in https://github.com/haosulab/ManiSkill/pull/337
* [BugFix] Update motionplanner.py by @PartyPenguin in https://github.com/haosulab/ManiSkill/pull/338
* [Feature] Update and document ee-control controllers by @StoneT2000 in https://github.com/haosulab/ManiSkill/pull/339
* [Docs] Fix video links by @StoneT2000 in https://github.com/haosulab/ManiSkill/pull/340
* [Feature] Support any task loading any robot and print warning instead of assert by @StoneT2000 in https://github.com/haosulab/ManiSkill/pull/341

## New Contributors
* @PartyPenguin made their first contribution in https://github.com/haosulab/ManiSkill/pull/338

**Full Changelog**: https://github.com/haosulab/ManiSkill/compare/v3.0.0b3...v3.0.0b4

Released: 2024-05-23

# v3.0.0b3 Release

## What's Changed
* [Docs] Fix docs for visualizing reset distributions demo by @StoneT2000 in
https://github.com/haosulab/ManiSkill/pull/313
* [Docs] Fix typo in readmes/docs by @StoneT2000 in https://github.com/haosulab/ManiSkill/pull/314
* [BugFix] Fix fast-kinematics dependency by @StoneT2000 in https://github.com/haosulab/ManiSkill/pull/318
* [BugFix] fix bug for demo robot code where mimic joints could not work with keyframe actions by @StoneT2000 in https://github.com/haosulab/ManiSkill/pull/319
* [BugFix] copy info dict to avoid self reference memory leak in the gymnasium vector env wrapper by @StoneT2000 in https://github.com/haosulab/ManiSkill/pull/320
* [BugFix] Fix bug with ppo example code under new wrapper API fix by @StoneT2000 in https://github.com/haosulab/ManiSkill/pull/321
* version bump by @StoneT2000 in https://github.com/haosulab/ManiSkill/pull/322

**Full Changelog**: https://github.com/haosulab/ManiSkill/compare/v3.0.0.b2...v3.0.0b3

Released: 2024-05-06

# v3.0.0.b2: ManiSkill 3 Beta Release

Many, many changes compared to ManiSkill 2. ManiSkill 3 is now a GPU-accelerated robotics simulator with extremely fast parallel rendering, supporting all kinds of robot learning workflows. There are a lot of new robots, tasks, and features; find out more in our documentation: https://maniskill.readthedocs.io/en/latest/

![teaser](https://github.com/haosulab/ManiSkill/assets/35373228/84b68155-2a7a-4f9e-9c55-e24565a3404e)

A short summary of some key features:
- GPU-parallelized visual data collection system.
On the high end, you can collect RGBD + segmentation data at 20k FPS with a 4090 GPU, 10-100x faster than most other simulators.
- Example tasks covering a wide range of robot embodiments (quadrupeds, mobile manipulators, single-arm robots) as well as a wide range of tasks (table-top, locomotion, dexterous manipulation).
- GPU-parallelized tasks, enabling incredibly fast synthetic data collection in simulation.
- GPU-parallelized tasks support simulating diverse scenes, where every parallel environment has a completely different scene / set of objects.
- A flexible task-building API that abstracts away much of the complex GPU memory-management code.

## What's Changed
* Setup Automatic Pypi uploads on release by @StoneT2000 in https://github.com/haosulab/ManiSkill/pull/137
* Update README.md by @StoneT2000 in https://github.com/haosulab/ManiSkill/pull/280
* Initial ManiSkill 3 merge into main branch by @StoneT2000 in https://github.com/haosulab/ManiSkill/pull/287
* Improve rgb ppo, update code for latest sapien release by @StoneT2000 in https://github.com/haosulab/ManiSkill/pull/291
* Update quadruped envs and procedural generated floor texture by @StoneT2000 in https://github.com/haosulab/ManiSkill/pull/279
* Improved demo code, moving some robot assets to other repos, bug fixes by @StoneT2000 in https://github.com/haosulab/ManiSkill/pull/292
* Unitree go2 initial work by @StoneT2000 in https://github.com/haosulab/ManiSkill/pull/293
* Quadruped rl by @StoneT2000 in https://github.com/haosulab/ManiSkill/pull/294
* Cartpole by @StoneT2000 in https://github.com/haosulab/ManiSkill/pull/296
* Updated robot tutorial by @StoneT2000 in
https://github.com/haosulab/ManiSkill/pull/297
* More docs, robots by @StoneT2000 in https://github.com/haosulab/ManiSkill/pull/298
* code refactoring for some sensors, base env is always batched+torch unless user uses wrapper, make some code cleaner by @StoneT2000 in https://github.com/haosulab/ManiSkill/pull/299
* Allow import mani_skill to get access to envs + bug fix with pick clutter env by @StoneT2000 in https://github.com/haosulab/ManiSkill/pull/301
* update custom robot docs, property to check if gpu sim is enabled, code to test gpu sim of robot by @StoneT2000 in https://github.com/haosulab/ManiSkill/pull/302
* More docs on existing tasks by @StoneT2000 in https://github.com/haosulab/ManiSkill/pull/303
* Unify the way non primitive actors can be built by @StoneT2000 in https://github.com/haosulab/ManiSkill/pull/304
* Refactoring ._scene->.scene, refactor custom task docs, more custom task docs, doc on simulation 101, some bug fixes for a missing function by @StoneT2000 in https://github.com/haosulab/ManiSkill/pull/305
* More docs on demo scripts, fix old demo scripts, add some missing videos of tasks by @StoneT2000 in https://github.com/haosulab/ManiSkill/pull/306
* Plug charger task by @StoneT2000 in https://github.com/haosulab/ManiSkill/pull/307
* version bump + update README + prepare for beta release by @StoneT2000 in https://github.com/haosulab/ManiSkill/pull/308

**Full Changelog**: https://github.com/haosulab/ManiSkill/compare/v0.5.3...v3.0.0.b1

Released: 2024-05-02
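As a minimal usage sketch of the GPU-parallelized, batched workflow that ManiSkill 3 introduces: the snippet below follows the quickstart pattern from the ManiSkill documentation and assumes `pip install mani_skill`, a CUDA-capable GPU, and the `PickCube-v1` task; exact keyword arguments may differ slightly between beta versions, so treat it as a sketch rather than a definitive recipe.

```python
import gymnasium as gym
import mani_skill.envs  # noqa: F401  -- importing registers ManiSkill envs with gymnasium

# num_envs > 1 selects the GPU-parallelized simulator; observations,
# rewards, and termination flags come back batched as torch tensors.
env = gym.make(
    "PickCube-v1",
    num_envs=64,               # 64 parallel environments on one GPU
    obs_mode="state",          # or "rgbd" for parallel visual data collection
    control_mode="pd_ee_delta_pose",
)
obs, _ = env.reset(seed=0)
for _ in range(100):
    action = env.action_space.sample()  # batched random actions
    obs, reward, terminated, truncated, info = env.step(action)
env.close()
```

The same `gym.make` call with `num_envs=1` falls back to the CPU simulator, which is what makes it easy to prototype a task locally and then scale up data collection on a GPU.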