[{"data":1,"prerenderedAt":-1},["ShallowReactive",2],{"tool-mujocolab--mjlab":3,"similar-mujocolab--mjlab":117},{"id":4,"github_repo":5,"name":6,"description_en":7,"description_zh":8,"ai_summary_zh":8,"readme_en":9,"readme_zh":10,"quickstart_zh":11,"use_case_zh":12,"hero_image_url":13,"owner_login":14,"owner_name":14,"owner_avatar_url":15,"owner_bio":16,"owner_company":16,"owner_location":16,"owner_email":16,"owner_twitter":16,"owner_website":16,"owner_url":17,"languages":18,"stars":38,"forks":39,"last_commit_at":40,"license":41,"difficulty_score":42,"env_os":43,"env_gpu":44,"env_ram":45,"env_deps":46,"category_tags":49,"github_topics":51,"view_count":42,"oss_zip_url":16,"oss_zip_packed_at":16,"status":57,"created_at":58,"updated_at":59,"faqs":60,"releases":91},711,"mujocolab\u002Fmjlab","mjlab","Isaac Lab API, powered by MuJoCo-Warp, for RL and robotics research.","mjlab 是一个专为强化学习与机器人研究打造的开源框架。它将 Isaac Lab 灵活的管理器式 API 与 MuJoCo Warp 的 GPU 加速引擎相结合，解决了传统仿真环境搭建繁琐、训练速度受限的痛点。通过提供可组合的环境设计模块，mjlab 让用户能以更少的依赖直接操作原生 MuJoCo 数据结构，显著提升开发效率。\n\nmjlab 特别适合需要高性能仿真的研究人员和开发者。无论是训练人形机器人进行速度跟踪还是动作模仿，mjlab 都提供了丰富的现成示例。mjlab 支持多 GPU 分布式训练，还能在 Google Colab 上无需本地配置即可运行演示。此外，内置的智能体功能帮助开发者在正式训练前快速验证任务逻辑。如果你正在寻找高效、易用的机器人仿真平台，不妨关注 mjlab。","![Project banner](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fmujocolab_mjlab_readme_69d3917e383c.jpg)\n\n# mjlab\n\n[![GitHub 
Actions](https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Factions\u002Fworkflow\u002Fstatus\u002Fmujocolab\u002Fmjlab\u002Fci.yml?branch=main)](https:\u002F\u002Fgithub.com\u002Fmujocolab\u002Fmjlab\u002Factions\u002Fworkflows\u002Fci.yml?query=branch%3Amain)\n[![Documentation](https:\u002F\u002Fgithub.com\u002Fmujocolab\u002Fmjlab\u002Factions\u002Fworkflows\u002Fdocs.yml\u002Fbadge.svg)](https:\u002F\u002Fmujocolab.github.io\u002Fmjlab\u002F)\n[![License](https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Flicense\u002Fmujocolab\u002Fmjlab)](https:\u002F\u002Fgithub.com\u002Fmujocolab\u002Fmjlab\u002Fblob\u002Fmain\u002FLICENSE)\n[![Nightly Benchmarks](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FNightly-Benchmarks-blue)](https:\u002F\u002Fmujocolab.github.io\u002Fmjlab\u002Fnightly\u002F)\n[![PyPI](https:\u002F\u002Fimg.shields.io\u002Fpypi\u002Fv\u002Fmjlab)](https:\u002F\u002Fpypi.org\u002Fproject\u002Fmjlab\u002F)\n\nmjlab combines [Isaac Lab](https:\u002F\u002Fgithub.com\u002Fisaac-sim\u002FIsaacLab)'s manager-based API with [MuJoCo Warp](https:\u002F\u002Fgithub.com\u002Fgoogle-deepmind\u002Fmujoco_warp), a GPU-accelerated version of [MuJoCo](https:\u002F\u002Fgithub.com\u002Fgoogle-deepmind\u002Fmujoco).\nThe framework provides composable building blocks for environment design,\nwith minimal dependencies and direct access to native MuJoCo data structures.\n\n## Getting Started\n\nmjlab requires an NVIDIA GPU for training. 
macOS is supported for evaluation only.\n\n**Try it now:**\n\nRun the demo (no installation needed):\n\n```bash\nuvx --from mjlab --refresh demo\n```\n\nOr try in [Google Colab](https:\u002F\u002Fcolab.research.google.com\u002Fgithub\u002Fmujocolab\u002Fmjlab\u002Fblob\u002Fmain\u002Fnotebooks\u002Fdemo.ipynb) (no local setup required).\n\n**Install from source:**\n\n```bash\ngit clone https:\u002F\u002Fgithub.com\u002Fmujocolab\u002Fmjlab.git && cd mjlab\nuv run demo\n```\n\nFor alternative installation methods (PyPI, Docker), see the [Installation Guide](https:\u002F\u002Fmujocolab.github.io\u002Fmjlab\u002Fmain\u002Fsource\u002Finstallation.html).\n\n## Training Examples\n\n### 1. Velocity Tracking\n\nTrain a Unitree G1 humanoid to follow velocity commands on flat terrain:\n\n```bash\nuv run train Mjlab-Velocity-Flat-Unitree-G1 --env.scene.num-envs 4096\n```\n\n**Multi-GPU Training:** Scale to multiple GPUs using `--gpu-ids`:\n\n```bash\nuv run train Mjlab-Velocity-Flat-Unitree-G1 \\\n  --gpu-ids \"[0, 1]\" \\\n  --env.scene.num-envs 4096\n```\n\nSee the [Distributed Training guide](https:\u002F\u002Fmujocolab.github.io\u002Fmjlab\u002Fmain\u002Fsource\u002Ftraining\u002Fdistributed_training.html) for details.\n\nEvaluate a policy while training (fetches latest checkpoint from Weights & Biases):\n\n```bash\nuv run play Mjlab-Velocity-Flat-Unitree-G1 --wandb-run-path your-org\u002Fmjlab\u002Frun-id\n```\n\n### 2. Motion Imitation\n\nTrain a humanoid to mimic reference motions. See the [motion imitation guide](https:\u002F\u002Fmujocolab.github.io\u002Fmjlab\u002Fmain\u002Fsource\u002Ftraining\u002Fmotion_imitation.html) for preprocessing setup.\n\n```bash\nuv run train Mjlab-Tracking-Flat-Unitree-G1 --registry-name your-org\u002Fmotions\u002Fmotion-name --env.scene.num-envs 4096\nuv run play Mjlab-Tracking-Flat-Unitree-G1 --wandb-run-path your-org\u002Fmjlab\u002Frun-id\n```\n\n### 3. 
Sanity-check with Dummy Agents\n\nUse built-in agents to sanity check your MDP before training:\n\n```bash\nuv run play Mjlab-Your-Task-Id --agent zero  # Sends zero actions\nuv run play Mjlab-Your-Task-Id --agent random  # Sends uniform random actions\n```\n\nWhen running motion-tracking tasks, add `--registry-name your-org\u002Fmotions\u002Fmotion-name` to the command.\n\n\n## Documentation\n\nFull documentation is available at **[mujocolab.github.io\u002Fmjlab](https:\u002F\u002Fmujocolab.github.io\u002Fmjlab\u002F)**.\n\n## Development\n\n```bash\nmake test          # Run all tests\nmake test-fast     # Skip slow tests\nmake format        # Format and lint\nmake docs          # Build docs locally\n```\n\nFor development setup: `uvx pre-commit install`\n\n## Citation\n\nmjlab is used in published research and open-source robotics projects. See the [Research](https:\u002F\u002Fmujocolab.github.io\u002Fmjlab\u002Fmain\u002Fsource\u002Fresearch.html) page for publications and projects, or share your own in [Show and Tell](https:\u002F\u002Fgithub.com\u002Fmujocolab\u002Fmjlab\u002Fdiscussions\u002Fcategories\u002Fshow-and-tell).\n\nIf you use mjlab in your research, please consider citing:\n\n```bibtex\n@misc{zakka2026mjlablightweightframeworkgpuaccelerated,\n  title={mjlab: A Lightweight Framework for GPU-Accelerated Robot Learning},\n  author={Kevin Zakka and Qiayuan Liao and Brent Yi and Louis Le Lay and Koushil Sreenath and Pieter Abbeel},\n  year={2026},\n  eprint={2601.22074},\n  archivePrefix={arXiv},\n  primaryClass={cs.RO},\n  url={https:\u002F\u002Farxiv.org\u002Fabs\u002F2601.22074},\n}\n```\n\n## License\n\nmjlab is licensed under the [Apache License, Version 2.0](LICENSE).\n\n### Third-Party Code\n\nSome portions of mjlab are forked from external projects:\n\n- **`src\u002Fmjlab\u002Futils\u002Flab_api\u002F`** — Utilities forked from [NVIDIA Isaac\n  Lab](https:\u002F\u002Fgithub.com\u002Fisaac-sim\u002FIsaacLab) (BSD-3-Clause license, see file\n  
headers)\n\nForked components retain their original licenses. See file headers for details.\n\n## Acknowledgments\n\nmjlab wouldn't exist without the excellent work of the Isaac Lab team, whose API\ndesign and abstractions mjlab builds upon.\n\nThanks to the MuJoCo Warp team — especially Erik Frey and Taylor Howell — for\nanswering our questions, giving helpful feedback, and implementing features\nbased on our requests countless times.\n","![Project banner](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fmujocolab_mjlab_readme_69d3917e383c.jpg)\n\n# mjlab\n\n[![GitHub Actions](https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Factions\u002Fworkflow\u002Fstatus\u002Fmujocolab\u002Fmjlab\u002Fci.yml?branch=main)](https:\u002F\u002Fgithub.com\u002Fmujocolab\u002Fmjlab\u002Factions\u002Fworkflows\u002Fci.yml?query=branch%3Amain)\n[![Documentation](https:\u002F\u002Fgithub.com\u002Fmujocolab\u002Fmjlab\u002Factions\u002Fworkflows\u002Fdocs.yml\u002Fbadge.svg)](https:\u002F\u002Fmujocolab.github.io\u002Fmjlab\u002F)\n[![License](https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Flicense\u002Fmujocolab\u002Fmjlab)](https:\u002F\u002Fgithub.com\u002Fmujocolab\u002Fmjlab\u002Fblob\u002Fmain\u002FLICENSE)\n[![Nightly Benchmarks](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FNightly-Benchmarks-blue)](https:\u002F\u002Fmujocolab.github.io\u002Fmjlab\u002Fnightly\u002F)\n[![PyPI](https:\u002F\u002Fimg.shields.io\u002Fpypi\u002Fv\u002Fmjlab)](https:\u002F\u002Fpypi.org\u002Fproject\u002Fmjlab\u002F)\n\nmjlab 结合了 [Isaac Lab](https:\u002F\u002Fgithub.com\u002Fisaac-sim\u002FIsaacLab) 的基于管理器（manager-based）API 与 [MuJoCo Warp](https:\u002F\u002Fgithub.com\u002Fgoogle-deepmind\u002Fmujoco_warp)，后者是 [MuJoCo](https:\u002F\u002Fgithub.com\u002Fgoogle-deepmind\u002Fmujoco) 的 GPU 加速版本。\n该框架为环境设计提供了可组合的构建模块，\n依赖极少，并可直接访问原生的 MuJoCo 数据结构。\n\n## 入门指南\n\nmjlab 需要 NVIDIA GPU 进行训练。macOS 仅支持评估用途。\n\n**立即尝试：**\n\n运行演示（无需安装）：\n\n```bash\nuvx --from mjlab --refresh demo\n```\n\n或者在 [Google 
Colab](https:\u002F\u002Fcolab.research.google.com\u002Fgithub\u002Fmujocolab\u002Fmjlab\u002Fblob\u002Fmain\u002Fnotebooks\u002Fdemo.ipynb) 中尝试（无需本地设置）。\n\n**从源码安装：**\n\n```bash\ngit clone https:\u002F\u002Fgithub.com\u002Fmujocolab\u002Fmjlab.git && cd mjlab\nuv run demo\n```\n\n对于其他安装方法（PyPI、Docker），请参阅 [安装指南](https:\u002F\u002Fmujocolab.github.io\u002Fmjlab\u002Fmain\u002Fsource\u002Finstallation.html)。\n\n## 训练示例\n\n### 1. 速度跟踪\n\n训练 Unitree G1 人形机器人在平坦地形上跟随速度指令：\n\n```bash\nuv run train Mjlab-Velocity-Flat-Unitree-G1 --env.scene.num-envs 4096\n```\n\n**多 GPU 训练：** 使用 `--gpu-ids` 扩展至多个 GPU：\n\n```bash\nuv run train Mjlab-Velocity-Flat-Unitree-G1 \\\n  --gpu-ids \"[0, 1]\" \\\n  --env.scene.num-envs 4096\n```\n\n详见 [分布式训练指南](https:\u002F\u002Fmujocolab.github.io\u002Fmjlab\u002Fmain\u002Fsource\u002Ftraining\u002Fdistributed_training.html)。\n\n训练时评估策略（从 Weights & Biases 获取最新检查点）：\n\n```bash\nuv run play Mjlab-Velocity-Flat-Unitree-G1 --wandb-run-path your-org\u002Fmjlab\u002Frun-id\n```\n\n### 2. 运动模仿\n\n训练人形机器人模仿参考动作。详见 [运动模仿指南](https:\u002F\u002Fmujocolab.github.io\u002Fmjlab\u002Fmain\u002Fsource\u002Ftraining\u002Fmotion_imitation.html) 以了解预处理设置。\n\n```bash\nuv run train Mjlab-Tracking-Flat-Unitree-G1 --registry-name your-org\u002Fmotions\u002Fmotion-name --env.scene.num-envs 4096\nuv run play Mjlab-Tracking-Flat-Unitree-G1 --wandb-run-path your-org\u002Fmjlab\u002Frun-id\n```\n\n### 3. 
使用虚拟代理进行健全性检查\n\n在训练前使用内置代理对您的 MDP（马尔可夫决策过程）进行健全性检查：\n\n```bash\nuv run play Mjlab-Your-Task-Id --agent zero  # 发送零动作\nuv run play Mjlab-Your-Task-Id --agent random  # 发送均匀随机动作\n```\n\n运行运动跟踪任务时，请在命令中添加 `--registry-name your-org\u002Fmotions\u002Fmotion-name`。\n\n\n## 文档\n\n完整文档可在 **[mujocolab.github.io\u002Fmjlab](https:\u002F\u002Fmujocolab.github.io\u002Fmjlab\u002F)** 获取。\n\n## 开发\n\n```bash\nmake test          # 运行所有测试\nmake test-fast     # 跳过慢速测试\nmake format        # 格式化与代码检查\nmake docs          # 本地构建文档\n```\n\n开发环境设置：`uvx pre-commit install`\n\n## 引用\n\nmjlab 被用于已发表的研究和开源机器人项目中。请查看 [研究](https:\u002F\u002Fmujocolab.github.io\u002Fmjlab\u002Fmain\u002Fsource\u002Fresearch.html) 页面以获取出版物和项目，或在 [Show and Tell](https:\u002F\u002Fgithub.com\u002Fmujocolab\u002Fmjlab\u002Fdiscussions\u002Fcategories\u002Fshow-and-tell) 中分享您自己的项目。\n\n如果您在研究中使用了 mjlab，请考虑引用：\n\n```bibtex\n@misc{zakka2026mjlablightweightframeworkgpuaccelerated,\n  title={mjlab: A Lightweight Framework for GPU-Accelerated Robot Learning},\n  author={Kevin Zakka and Qiayuan Liao and Brent Yi and Louis Le Lay and Koushil Sreenath and Pieter Abbeel},\n  year={2026},\n  eprint={2601.22074},\n  archivePrefix={arXiv},\n  primaryClass={cs.RO},\n  url={https:\u002F\u002Farxiv.org\u002Fabs\u002F2601.22074},\n}\n```\n\n## 许可\n\nmjlab 采用 [Apache License, Version 2.0](LICENSE) 许可。\n\n### 第三方代码\n\nmjlab 的部分代码是从外部项目分叉而来的：\n\n- **`src\u002Fmjlab\u002Futils\u002Flab_api\u002F`** — 从 [NVIDIA Isaac\n  Lab](https:\u002F\u002Fgithub.com\u002Fisaac-sim\u002FIsaacLab) 分叉的工具（BSD-3-Clause 许可，见文件头部）\n\n分叉组件保留其原始许可。详情请参见文件头部。\n\n## 致谢\n\n如果没有 Isaac Lab 团队的杰出工作，mjlab 就不会存在，它正是建立在其 API\n设计和抽象之上。\n\n感谢 MuJoCo Warp 团队 —— 特别是 Erik Frey 和 Taylor Howell —— 回答我们的问题，提供有益反馈，并根据我们的请求无数次实现功能。","# mjlab 快速上手指南\n\n**mjlab** 是一个轻量级框架，结合了 Isaac Lab 的基于管理器的 API 与 MuJoCo Warp（GPU 加速版 MuJoCo）。它专为机器人学习设计，提供可组合的环境构建模块，支持直接访问原生 MuJoCo 数据结构。\n\n## 环境准备\n\n*   **操作系统**：推荐使用 Linux 进行训练；macOS 仅支持评估任务。\n*   **硬件要求**：**必须配备 NVIDIA GPU** 用于训练。\n* 
  **前置依赖**：\n    *   Python 环境\n    *   [uv](https:\u002F\u002Fgithub.com\u002Fastral-sh\u002Fuv)（高性能 Python 包管理器）\n    *   Git\n\n> **注意**：确保您的 NVIDIA 驱动已正确安装并支持 CUDA。\n\n## 安装步骤\n\n### 方法一：免安装体验（推荐新手）\n无需本地配置，直接使用 `uvx` 运行演示：\n\n```bash\nuvx --from mjlab --refresh demo\n```\n\n或者在 [Google Colab](https:\u002F\u002Fcolab.research.google.com\u002Fgithub\u002Fmujocolab\u002Fmjlab\u002Fblob\u002Fmain\u002Fnotebooks\u002Fdemo.ipynb) 中尝试（无需本地设置）。\n\n### 方法二：从源码安装\n克隆仓库并使用 `uv` 运行：\n\n```bash\ngit clone https:\u002F\u002Fgithub.com\u002Fmujocolab\u002Fmjlab.git && cd mjlab\nuv run demo\n```\n\n如需其他安装方式（如 PyPI、Docker），请查阅官方 [安装指南](https:\u002F\u002Fmujocolab.github.io\u002Fmjlab\u002Fmain\u002Fsource\u002Finstallation.html)。\n\n## 基本使用\n\n### 1. 运行演示\n进入项目目录后，运行以下命令启动基础演示：\n\n```bash\nuv run demo\n```\n\n### 2. 训练示例（速度跟踪）\n训练 Unitree G1 人形机器人在平坦地形上跟随速度指令：\n\n```bash\nuv run train Mjlab-Velocity-Flat-Unitree-G1 --env.scene.num-envs 4096\n```\n\n**多 GPU 训练**：通过 `--gpu-ids` 参数扩展至多卡：\n\n```bash\nuv run train Mjlab-Velocity-Flat-Unitree-G1 \\\n  --gpu-ids \"[0, 1]\" \\\n  --env.scene.num-envs 4096\n```\n\n### 3. 策略评估\n在训练过程中或之后评估策略（需指定 Weights & Biases 路径）：\n\n```bash\nuv run play Mjlab-Velocity-Flat-Unitree-G1 --wandb-run-path your-org\u002Fmjlab\u002Frun-id\n```\n\n### 4. 
调试检查\n使用内置代理进行 MDP 合理性检查：\n\n```bash\n# 发送零动作\nuv run play Mjlab-Your-Task-Id --agent zero \n\n# 发送均匀随机动作\nuv run play Mjlab-Your-Task-Id --agent random\n```\n\n> **提示**：对于运动跟踪任务，请在命令中添加 `--registry-name your-org\u002Fmotions\u002Fmotion-name`。\n\n---\n\n更多详细文档请访问：[mujocolab.github.io\u002Fmjlab](https:\u002F\u002Fmujocolab.github.io\u002Fmjlab\u002F)","某高校机器人实验室团队正在基于 Unitree G1 人形机器人开发自适应行走策略，急需一个高效的仿真平台来验证强化学习算法。\n\n### 没有 mjlab 时\n- 环境配置极其繁琐，需手动整合 MuJoCo 与 Isaac Sim 接口，常因 CUDA 版本不兼容导致项目无法启动。\n- 仿真运行主要依赖 CPU，单步计算慢，导致一次完整的策略训练周期长达数天，严重拖慢算法迭代进度。\n- 调试过程中难以深入底层，无法直接读取物理引擎内部状态，排查碰撞或关节异常耗时费力且容易出错。\n\n### 使用 mjlab 后\n- mjlab 内置组合式 API 和最小化依赖，配合 Google Colab 即可零安装快速启动实验环境，减少运维时间。\n- 利用 MuJoCo-Warp 的 GPU 加速能力，支持 4096 个并行环境及多卡分布式训练，训练效率提升数十倍。\n- 提供对原生 MuJoCo 数据结构的直接访问，结合内置的随机代理脚本，能快速验证任务逻辑是否正确。\n\nmjlab 通过降低环境搭建门槛与释放 GPU 算力，将机器人强化学习的研发周期从周级缩短至小时级。","https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fmujocolab_mjlab_69d3917e.jpg","mujocolab","https:\u002F\u002Foss.gittoolsai.com\u002Favatars\u002Fmujocolab_f66f607f.png",null,"https:\u002F\u002Fgithub.com\u002Fmujocolab",[19,23,27,31,35],{"name":20,"color":21,"percentage":22},"Python","#3572A5",98.6,{"name":24,"color":25,"percentage":26},"Jupyter Notebook","#DA5B0B",1,{"name":28,"color":29,"percentage":30},"Shell","#89e051",0.3,{"name":32,"color":33,"percentage":34},"Makefile","#427819",0,{"name":36,"color":37,"percentage":34},"Dockerfile","#384d54",2080,307,"2026-04-05T22:01:50","Apache-2.0",3,"Linux, macOS","需要 NVIDIA GPU（训练必需），具体型号、显存大小及 CUDA 版本未说明","未说明",{"notes":47,"python":45,"dependencies":48},"训练必须使用 NVIDIA GPU；macOS 仅支持评估任务；推荐使用 uv 工具管理环境；支持多 GPU 分布式训练；可通过 Google Colab 在线运行无需本地安装。",[45],[50],"其他",[52,53,54,55,56],"isaaclab","mujoco","mujoco-warp","reinforcement-learning","robotics-simulation","ready","2026-03-27T02:49:30.150509","2026-04-06T09:46:09.519139",[61,66,71,76,81,86],{"id":62,"question_zh":63,"answer_zh":64,"source_url":65},2991,"多环境训练时频繁崩溃且报错 `std` 无效怎么办？","检查几何体（geom）的 `solimp` 和 `solref` 
设置。若使用了自定义设置（如 `solimp=\"0.95 1.0 0.001 0.5 2\"`），请恢复为默认值，这能解决因噪声标准差变为负数或 NaN 导致的崩溃。","https:\u002F\u002Fgithub.com\u002Fmujocolab\u002Fmjlab\u002Fissues\u002F224",{"id":67,"question_zh":68,"answer_zh":69,"source_url":70},2992,"macOS 系统下安装或运行 mjlab 遇到兼容性错误怎么办？","macOS 用户需要启用 viser 查看器。运行脚本时请添加 `--viewer viser` 参数，以解决平台兼容性问题并确保正常运行。","https:\u002F\u002Fgithub.com\u002Fmujocolab\u002Fmjlab\u002Fissues\u002F306",{"id":72,"question_zh":73,"answer_zh":74,"source_url":75},2993,"训练时 episode_length 始终为 1 且机器人悬浮在空中是什么原因？","这通常是由于环境配置中的终止条件（termination）设置错误导致的。请检查环境配置文件，确认是否存在错误的终止逻辑导致回合立即结束。","https:\u002F\u002Fgithub.com\u002Fmujocolab\u002Fmjlab\u002Fissues\u002F403",{"id":77,"question_zh":78,"answer_zh":79,"source_url":80},2994,"传感器数据中出现 NaN 导致训练崩溃如何排查？","NaN 可能源自接触传感器的力数据（force），而当前的 nan guard 未检查此类数据。建议检查 `foot_contact` 等传感器的 force 数据，并考虑在观测管理器中屏蔽 nan\u002Finf 项。","https:\u002F\u002Fgithub.com\u002Fmujocolab\u002Fmjlab\u002Fissues\u002F520",{"id":82,"question_zh":83,"answer_zh":84,"source_url":85},2995,"mjlab 中的射线检测传感器性能较慢，有更快的替代方案吗？","主分支已集成 BVH 加速的射线投射（BVH-accelerated ray casting）。建议更新代码至最新主分支以利用此功能提升地形感知速度。","https:\u002F\u002Fgithub.com\u002Fmujocolab\u002Fmjlab\u002Fissues\u002F486",{"id":87,"question_zh":88,"answer_zh":89,"source_url":90},2996,"加载新机器人模型时发生段错误（Segmentation fault）如何调试？","请确保遵循 Warp 的安装指南和推荐配置。检查碰撞几何体的 `conaffinity` 和 `contype` 设置（建议设为 1），并参考 G1 示例或 mujoco_warp 的 panda 示例进行验证。","https:\u002F\u002Fgithub.com\u002Fmujocolab\u002Fmjlab\u002Fissues\u002F134",[92,97,102,107,112],{"id":93,"version":94,"summary_zh":95,"released_at":96},112195,"v1.2.0","Our biggest release yet. 60+ pull requests from 12 contributors. 
A ground up redesign of domain randomization, major viewer improvements, cloud training support, and many bug fixes.\r\n\r\n```bash\r\npip install mjlab\r\n```\r\n\r\nhttps:\u002F\u002Fgithub.com\u002Fuser-attachments\u002Fassets\u002F18fe2bde-5fa3-4a61-ac19-4e464991aed3\r\n\r\n_Domain randomization on the yam lift cube task: cube color, cube size, cube mass, link orientations, link inertias, camera FOV, and lighting all randomized per environment on every reset._\r\n\r\n## Domain Randomization, Redesigned\r\n\r\nDomain randomization is a key technique for sim-to-real transfer. The new dr module replaces the previous `randomize_field` interface with typed, per-field randomization functions. These functions automatically recompute dependent physical quantities when a parameter is modified. For example, if body mass is randomized, the corresponding inertia values are updated to remain physically consistent. Similarly, when geom size parameters change, the broadphase collision bounds are recomputed. This design removes the need for manual `set_const` calls and reduces the risk of introducing inconsistent physics states.\r\n\r\n```python\r\nimport mjlab.envs.mdp.dr as dr\r\n\r\ndr.geom_friction(env, cfg, operation=dr.scale, distribution=dr.uniform, ranges=(0.8, 1.2))\r\ndr.pseudo_inertia(env, cfg, alpha_range=(-0.3, 0.3), d_range=(-0.3, 0.3))\r\ndr.mat_rgba(env, cfg, operation=dr.add, distribution=dr.gaussian, ranges=(-0.1, 0.1))\r\n```\r\n\r\nThe full lineup covers geometry, bodies, visuals, cameras, and lights. Custom operations and distributions are first class: define your own and pass them anywhere a string is accepted. The native viewer syncs all randomized fields from the GPU model on every reset, so DR changes are immediately visible.\r\n\r\n## Viewer Overhaul\r\n\r\nhttps:\u002F\u002Fgithub.com\u002Fuser-attachments\u002Fassets\u002F345bde31-30b9-4466-8fe5-d5a54aad37e6\r\n\r\nThe viewer timing model was rewritten. 
A single sim budget accumulator with a wall time deadline keeps physics and rendering in sync at any speed multiplier (1\u002F32x to 8x). When physics cannot keep up, the deadline caps the burst so the renderer always gets a turn.\r\n\r\nNew in both viewers:\r\n- **Single step mode** to advance exactly one physics step while paused\r\n- **Error recovery** that pauses and logs the traceback instead of crashing\r\n- **Force arrows** that visualize `apply_body_impulse` events in real time\r\n- **Realtime factor** displayed alongside FPS\r\n\r\nNew in Viser:\r\n- **Velocity joystick** for manual command override\r\n- **Revamped term plotter** with per term filtering\r\n- **Reorganized controls** with a cleaner folder hierarchy\r\n\r\n## Step Events and Body Impulses\r\n\r\nThe new `\"step\"` event mode fires every environment step, not just on reset. Combine it with `apply_body_impulse` to throw external forces at your robot during training, with configurable duration, magnitude, and application point.\r\n\r\nhttps:\u002F\u002Fgithub.com\u002Fuser-attachments\u002Fassets\u002F5999a0b1-a874-43e4-843b-40abfbf4691b\r\n\r\n## Cloud Training\r\n\r\nTrain on cloud GPUs with a single command. We added [SkyPilot](https:\u002F\u002Fskypilot.readthedocs.io\u002F) integration for Lambda Cloud with docs covering setup, monitoring, and cost management. W&B sweep scripts distribute one agent per GPU across multi GPU instances.\r\n\r\n## Documentation\r\n\r\nThe docs have been completely rewritten with improved guides, API reference, and multi versioned support. 
Check them out at [mujocolab.github.io\u002Fmjlab](https:\u002F\u002Fmujocolab.github.io\u002Fmjlab\u002Fmain\u002Findex.html).\r\n\r\n## Also In This Release\r\n\r\n- **`export-scene` CLI** to dump any task scene or asset zoo entity to a directory or zip for inspection and debugging\r\n- **`rsl-rl-lib` upgraded to 5.0.1** with automatic checkpoint migration for the new distribution config format\r\n- **Contact sensor history** across decimation substeps for more reliable self collision and illegal contact detection\r\n- **Docker images** published on every push to main\r\n- **`joint_torques_l2`** now accepts `actuator_ids` for penalizing a subset of actuators\r\n\r\n## Breaking Changes\r\n\r\n- `randomize_field` is removed. Use typed functions from the `dr` module (e.g. `dr.geom_friction`, `dr.body_mass`).\r\n- `EventTermCfg` no longer accepts `domain_randomization`.\r\n- `RslRlModelCfg` uses `distribution_cfg` dict instead of `stochastic`\u002F`init_noise_std`\u002F`noise_std_type`. Existing checkpoints are migrated automatically on load.\r\n\r\n## Bug Fixes\r\n\r\n- Viewer FPS drops from physics starving the renderer (#694, #705)\r\n- `height_scan` returning ~0 for missed rays (#646)\r\n- Ghost mesh rendering for fixed base entities (#645)\r\n- Actuator target resolution for entities with internal attach prefixes (#714)\r\n- Offscreen rendering artifacts in large vectorized scenes (#682)\r\n- Viser viewer crashing on scenes with no mocap bodies (#662)\r\n- Bundled `ffmpeg` via `imageio-ffmpeg`, no more system install required (#650)\r\n\r\n## New Contributors\r\n\r\nThank you to @Msornerrrr, @rdeits-bd, @jonzamora, @jgueldenstein, @saikishor, @ax-anoop, @chengruiz, @ManuelActisCa","2026-03-06T22:22:37",{"id":98,"version":99,"summary_zh":100,"released_at":101},112196,"v1.1.1","Minor patch release with bug fixes and small improvements. 
Highlights include a new differential IK action space, reward visualization in the native viewer, and a switch from `moviepy` to `mediapy` for video recording.\r\n\r\n## What's Changed\r\n* Enable reward plots in the native viewer. by @kevinzakka in https:\u002F\u002Fgithub.com\u002Fmujocolab\u002Fmjlab\u002Fpull\u002F629\r\n* Extend Viser plotting to plot metrics by @saikishor in https:\u002F\u002Fgithub.com\u002Fmujocolab\u002Fmjlab\u002Fpull\u002F625\r\n* Fix viser depth image display for vision example tasks by @pthangeda in https:\u002F\u002Fgithub.com\u002Fmujocolab\u002Fmjlab\u002Fpull\u002F627\r\n* Add differential IK action space. by @kevinzakka in https:\u002F\u002Fgithub.com\u002Fmujocolab\u002Fmjlab\u002Fpull\u002F632\r\n* fix(play): use MjlabOnPolicyRunner as default runner by @griffinaddison in https:\u002F\u002Fgithub.com\u002Fmujocolab\u002Fmjlab\u002Fpull\u002F626\r\n* Remove unsafe body fields from domain randomization by @kevinzakka in https:\u002F\u002Fgithub.com\u002Fmujocolab\u002Fmjlab\u002Fpull\u002F631\r\n* Replace moviepy with mediapy for video recording by @kevinzakka in https:\u002F\u002Fgithub.com\u002Fmujocolab\u002Fmjlab\u002Fpull\u002F637\r\n* Use bleeding edge mujoco and mujoco_warp for dev. by @kevinzakka in https:\u002F\u002Fgithub.com\u002Fmujocolab\u002Fmjlab\u002Fpull\u002F638\r\n\r\n## New Contributors\r\n* @pthangeda made their first contribution in https:\u002F\u002Fgithub.com\u002Fmujocolab\u002Fmjlab\u002Fpull\u002F627\r\n* @griffinaddison made their first contribution in https:\u002F\u002Fgithub.com\u002Fmujocolab\u002Fmjlab\u002Fpull\u002F626\r\n\r\n**Full Changelog**: https:\u002F\u002Fgithub.com\u002Fmujocolab\u002Fmjlab\u002Fcompare\u002Fv1.1.0...v1.1.1","2026-02-15T01:55:52",{"id":103,"version":104,"summary_zh":105,"released_at":106},112197,"v1.1.0","mjlab and all its dependencies (including mujoco-warp) are now available directly from [PyPI](https:\u002F\u002Fpypi.org\u002Fproject\u002Fmjlab\u002F). 
Installation no longer requires pinning a specific mujoco-warp revision or custom indices and is now just:\r\n\r\n```bash\r\npip install mjlab\r\n```\r\n\r\nor try it instantly with:\r\n\r\n```bash\r\nuvx --from mjlab demo\r\n```\r\n\r\n## What's new\r\n\r\n- RGB and depth camera sensors with BVH-accelerated raycasting\r\n- MetricsManager for logging custom metrics during training\r\n- Terrain visualizer and many new terrain types\r\n- Site group visualization in the Viser viewer\r\n- Upgraded rsl-rl-lib to 4.0.0 with native ONNX export\r\n- Various bug fixes\r\n\r\nSee the full [changelog](https:\u002F\u002Fmujocolab.github.io\u002Fmjlab\u002Fsource\u002Fchangelog.html#version-1-1-0-february-12-2026) for details.","2026-02-13T05:41:02",{"id":108,"version":109,"summary_zh":110,"released_at":111},112198,"v1.0.0","mjlab is now stable. Thank you to everyone who contributed code, reported issues, and provided feedback along the way. This release wouldn't have happened without you.\r\n\r\nSome highlights:\r\n  - RayCastSensor: Terrain and obstacle detection for navigation tasks\r\n  - ContactSensor improvements: History tracking for better contact dynamics\r\n  - Muscle actuator support: Biomechanical simulation capabilities\r\n  - Sensor caching: Performance optimizations for large-scale training\r\n  - Better NaN handling: Easier debugging with detection in observations and sensor data\r\n\r\nv1.1 will follow shortly after mjwarp exits beta (imminent), adding RGB-D camera support (experimental https:\u002F\u002Fgithub.com\u002Fmujocolab\u002Fmjlab\u002Fpull\u002F511).\r\n\r\nCheers!","2026-01-29T05:38:09",{"id":113,"version":114,"summary_zh":115,"released_at":116},112199,"v0.1.0","**mjlab is public and on PyPI! 🎉**\r\n\r\nWe're excited to announce the release of `mjlab`. 
It is available here on GitHub and on [PyPI](https:\u002F\u002Fpypi.org\u002Fproject\u002Fmjlab\u002F).\r\n\r\n### Quick Demo\r\n\r\nSee `mjlab` in action with a pre-trained motion imitation policy on the Unitree G1 humanoid:\r\n\r\n```bash\r\nuvx --from mjlab --with \"mujoco-warp @ git+https:\u002F\u002Fgithub.com\u002Fgoogle-deepmind\u002Fmujoco_warp\" demo\r\n```\r\n\r\n### Beta Release\r\n\r\nThis is an early beta release - we're actively implementing missing features and would love your feedback on what to prioritize! The API may evolve as we incorporate community input, add new features and squash bugs.\r\n\r\nThanks!","2025-09-29T10:00:45",[118,135,143,151,159,167],{"id":119,"name":120,"github_repo":121,"description_zh":122,"stars":123,"difficulty_score":124,"last_commit_at":125,"category_tags":126,"status":57},2268,"ML-For-Beginners","microsoft\u002FML-For-Beginners","ML-For-Beginners 是由微软推出的一套系统化机器学习入门课程，旨在帮助零基础用户轻松掌握经典机器学习知识。这套课程将学习路径规划为 12 周，包含 26 节精炼课程和 52 道配套测验，内容涵盖从基础概念到实际应用的完整流程，有效解决了初学者面对庞大知识体系时无从下手、缺乏结构化指导的痛点。\n\n无论是希望转型的开发者、需要补充算法背景的研究人员，还是对人工智能充满好奇的普通爱好者，都能从中受益。课程不仅提供了清晰的理论讲解，还强调动手实践，让用户在循序渐进中建立扎实的技能基础。其独特的亮点在于强大的多语言支持，通过自动化机制提供了包括简体中文在内的 50 多种语言版本，极大地降低了全球不同背景用户的学习门槛。此外，项目采用开源协作模式，社区活跃且内容持续更新，确保学习者能获取前沿且准确的技术资讯。如果你正寻找一条清晰、友好且专业的机器学习入门之路，ML-For-Beginners 将是理想的起点。",84991,2,"2026-04-05T10:45:23",[127,128,129,130,131,50,132,133,134],"图像","数据工具","视频","插件","Agent","语言模型","开发框架","音频",{"id":136,"name":137,"github_repo":138,"description_zh":139,"stars":140,"difficulty_score":42,"last_commit_at":141,"category_tags":142,"status":57},3128,"ragflow","infiniflow\u002Fragflow","RAGFlow 是一款领先的开源检索增强生成（RAG）引擎，旨在为大语言模型构建更精准、可靠的上下文层。它巧妙地将前沿的 RAG 技术与智能体（Agent）能力相结合，不仅支持从各类文档中高效提取知识，还能让模型基于这些知识进行逻辑推理和任务执行。\n\n在大模型应用中，幻觉问题和知识滞后是常见痛点。RAGFlow 通过深度解析复杂文档结构（如表格、图表及混合排版），显著提升了信息检索的准确度，从而有效减少模型“胡编乱造”的现象，确保回答既有据可依又具备时效性。其内置的智能体机制更进一步，使系统不仅能回答问题，还能自主规划步骤解决复杂问题。\n\n这款工具特别适合开发者、企业技术团队以及 AI 研究人员使用。无论是希望快速搭建私有知识库问答系统，还是致力于探索大模型在垂直领域落地的创新者，都能从中受益。RAGFlow 提供了可视化的工作流编排界面和灵活的 API 
接口，既降低了非算法背景用户的上手门槛，也满足了专业开发者对系统深度定制的需求。作为基于 Apache 2.0 协议开源的项目，它正成为连接通用大模型与行业专有知识之间的重要桥梁。",77062,"2026-04-04T04:44:48",[131,127,133,132,50],{"id":144,"name":145,"github_repo":146,"description_zh":147,"stars":148,"difficulty_score":42,"last_commit_at":149,"category_tags":150,"status":57},519,"PaddleOCR","PaddlePaddle\u002FPaddleOCR","PaddleOCR 是一款基于百度飞桨框架开发的高性能开源光学字符识别工具包。它的核心能力是将图片、PDF 等文档中的文字提取出来，转换成计算机可读取的结构化数据，让机器真正“看懂”图文内容。\n\n面对海量纸质或电子文档，PaddleOCR 解决了人工录入效率低、数字化成本高的问题。尤其在人工智能领域，它扮演着连接图像与大型语言模型（LLM）的桥梁角色，能将视觉信息直接转化为文本输入，助力智能问答、文档分析等应用场景落地。\n\nPaddleOCR 适合开发者、算法研究人员以及有文档自动化需求的普通用户。其技术优势十分明显：不仅支持全球 100 多种语言的识别，还能在 Windows、Linux、macOS 等多个系统上运行，并灵活适配 CPU、GPU、NPU 等各类硬件。作为一个轻量级且社区活跃的开源项目，PaddleOCR 既能满足快速集成的需求，也能支撑前沿的视觉语言研究，是处理文字识别任务的理想选择。",74939,"2026-04-05T23:16:38",[132,127,133,50],{"id":152,"name":153,"github_repo":154,"description_zh":155,"stars":156,"difficulty_score":26,"last_commit_at":157,"category_tags":158,"status":57},3215,"awesome-machine-learning","josephmisiti\u002Fawesome-machine-learning","awesome-machine-learning 是一份精心整理的机器学习资源清单，汇集了全球优秀的机器学习框架、库和软件工具。面对机器学习领域技术迭代快、资源分散且难以甄选的痛点，这份清单按编程语言（如 Python、C++、Go 等）和应用场景（如计算机视觉、自然语言处理、深度学习等）进行了系统化分类，帮助使用者快速定位高质量项目。\n\n它特别适合开发者、数据科学家及研究人员使用。无论是初学者寻找入门库，还是资深工程师对比不同语言的技术选型，都能从中获得极具价值的参考。此外，清单还延伸提供了免费书籍、在线课程、行业会议、技术博客及线下聚会等丰富资源，构建了从学习到实践的全链路支持体系。\n\n其独特亮点在于严格的维护标准：明确标记已停止维护或长期未更新的项目，确保推荐内容的时效性与可靠性。作为机器学习领域的“导航图”，awesome-machine-learning 以开源协作的方式持续更新，旨在降低技术探索门槛，让每一位从业者都能高效地站在巨人的肩膀上创新。",72149,"2026-04-03T21:50:24",[133,50],{"id":160,"name":161,"github_repo":162,"description_zh":163,"stars":164,"difficulty_score":26,"last_commit_at":165,"category_tags":166,"status":57},2234,"scikit-learn","scikit-learn\u002Fscikit-learn","scikit-learn 是一个基于 Python 构建的开源机器学习库，依托于 SciPy、NumPy 等科学计算生态，旨在让机器学习变得简单高效。它提供了一套统一且简洁的接口，涵盖了从数据预处理、特征工程到模型训练、评估及选择的全流程工具，内置了包括线性回归、支持向量机、随机森林、聚类等在内的丰富经典算法。\n\n对于希望快速验证想法或构建原型的数据科学家、研究人员以及 Python 开发者而言，scikit-learn 
是不可或缺的基础设施。它有效解决了机器学习入门门槛高、算法实现复杂以及不同模型间调用方式不统一的痛点，让用户无需重复造轮子，只需几行代码即可调用成熟的算法解决分类、回归、聚类等实际问题。\n\n其核心技术亮点在于高度一致的 API 设计风格，所有估算器（Estimator）均遵循相同的调用逻辑，极大地降低了学习成本并提升了代码的可读性与可维护性。此外，它还提供了强大的模型选择与评估工具，如交叉验证和网格搜索，帮助用户系统地优化模型性能。作为一个由全球志愿者共同维护的成熟项目，scikit-learn 以其稳定性、详尽的文档和活跃的社区支持，成为连接理论学习与工业级应用的最",65628,"2026-04-05T10:10:46",[133,50,128],{"id":168,"name":169,"github_repo":170,"description_zh":171,"stars":172,"difficulty_score":124,"last_commit_at":173,"category_tags":174,"status":57},3364,"keras","keras-team\u002Fkeras","Keras 是一个专为人类设计的深度学习框架，旨在让构建和训练神经网络变得简单直观。它解决了开发者在不同深度学习后端之间切换困难、模型开发效率低以及难以兼顾调试便捷性与运行性能的痛点。\n\n无论是刚入门的学生、专注算法的研究人员，还是需要快速落地产品的工程师，都能通过 Keras 轻松上手。它支持计算机视觉、自然语言处理、音频分析及时间序列预测等多种任务。\n\nKeras 3 的核心亮点在于其独特的“多后端”架构。用户只需编写一套代码，即可灵活选择 TensorFlow、JAX、PyTorch 或 OpenVINO 作为底层运行引擎。这一特性不仅保留了 Keras 一贯的高层易用性，还允许开发者根据需求自由选择：利用 JAX 或 PyTorch 的即时执行模式进行高效调试，或切换至速度最快的后端以获得最高 350% 的性能提升。此外，Keras 具备强大的扩展能力，能无缝从本地笔记本电脑扩展至大规模 GPU 或 TPU 集群，是连接原型开发与生产部署的理想桥梁。",63927,"2026-04-04T15:24:37",[133,128,50]]