[{"data":1,"prerenderedAt":-1},["ShallowReactive",2],{"similar-PKU-MARL--DexterousHands":3,"tool-PKU-MARL--DexterousHands":61},[4,18,26,36,44,53],{"id":5,"name":6,"github_repo":7,"description_zh":8,"stars":9,"difficulty_score":10,"last_commit_at":11,"category_tags":12,"status":17},4358,"openclaw","openclaw\u002Fopenclaw","OpenClaw 是一款专为个人打造的本地化 AI 助手，旨在让你在自己的设备上拥有完全可控的智能伙伴。它打破了传统 AI 助手局限于特定网页或应用的束缚，能够直接接入你日常使用的各类通讯渠道，包括微信、WhatsApp、Telegram、Discord、iMessage 等数十种平台。无论你在哪个聊天软件中发送消息，OpenClaw 都能即时响应，甚至支持在 macOS、iOS 和 Android 设备上进行语音交互，并提供实时的画布渲染功能供你操控。\n\n这款工具主要解决了用户对数据隐私、响应速度以及“始终在线”体验的需求。通过将 AI 部署在本地，用户无需依赖云端服务即可享受快速、私密的智能辅助，真正实现了“你的数据，你做主”。其独特的技术亮点在于强大的网关架构，将控制平面与核心助手分离，确保跨平台通信的流畅性与扩展性。\n\nOpenClaw 非常适合希望构建个性化工作流的技术爱好者、开发者，以及注重隐私保护且不愿被单一生态绑定的普通用户。只要具备基础的终端操作能力（支持 macOS、Linux 及 Windows WSL2），即可通过简单的命令行引导完成部署。如果你渴望拥有一个懂你",349277,3,"2026-04-06T06:32:30",[13,14,15,16],"Agent","开发框架","图像","数据工具","ready",{"id":19,"name":20,"github_repo":21,"description_zh":22,"stars":23,"difficulty_score":10,"last_commit_at":24,"category_tags":25,"status":17},3808,"stable-diffusion-webui","AUTOMATIC1111\u002Fstable-diffusion-webui","stable-diffusion-webui 是一个基于 Gradio 构建的网页版操作界面，旨在让用户能够轻松地在本地运行和使用强大的 Stable Diffusion 图像生成模型。它解决了原始模型依赖命令行、操作门槛高且功能分散的痛点，将复杂的 AI 绘图流程整合进一个直观易用的图形化平台。\n\n无论是希望快速上手的普通创作者、需要精细控制画面细节的设计师，还是想要深入探索模型潜力的开发者与研究人员，都能从中获益。其核心亮点在于极高的功能丰富度：不仅支持文生图、图生图、局部重绘（Inpainting）和外绘（Outpainting）等基础模式，还独创了注意力机制调整、提示词矩阵、负向提示词以及“高清修复”等高级功能。此外，它内置了 GFPGAN 和 CodeFormer 等人脸修复工具，支持多种神经网络放大算法，并允许用户通过插件系统无限扩展能力。即使是显存有限的设备，stable-diffusion-webui 也提供了相应的优化选项，让高质量的 AI 艺术创作变得触手可及。",162132,"2026-04-05T11:01:52",[14,15,13],{"id":27,"name":28,"github_repo":29,"description_zh":30,"stars":31,"difficulty_score":32,"last_commit_at":33,"category_tags":34,"status":17},1381,"everything-claude-code","affaan-m\u002Feverything-claude-code","everything-claude-code 是一套专为 AI 编程助手（如 Claude Code、Codex、Cursor 等）打造的高性能优化系统。它不仅仅是一组配置文件，而是一个经过长期实战打磨的完整框架，旨在解决 AI 
代理在实际开发中面临的效率低下、记忆丢失、安全隐患及缺乏持续学习能力等核心痛点。\n\n通过引入技能模块化、直觉增强、记忆持久化机制以及内置的安全扫描功能，everything-claude-code 能显著提升 AI 在复杂任务中的表现，帮助开发者构建更稳定、更智能的生产级 AI 代理。其独特的“研究优先”开发理念和针对 Token 消耗的优化策略，使得模型响应更快、成本更低，同时有效防御潜在的攻击向量。\n\n这套工具特别适合软件开发者、AI 研究人员以及希望深度定制 AI 工作流的技术团队使用。无论您是在构建大型代码库，还是需要 AI 协助进行安全审计与自动化测试，everything-claude-code 都能提供强大的底层支持。作为一个曾荣获 Anthropic 黑客大奖的开源项目，它融合了多语言支持与丰富的实战钩子（hooks），让 AI 真正成长为懂上",145895,2,"2026-04-08T11:32:59",[14,13,35],"语言模型",{"id":37,"name":38,"github_repo":39,"description_zh":40,"stars":41,"difficulty_score":32,"last_commit_at":42,"category_tags":43,"status":17},2271,"ComfyUI","Comfy-Org\u002FComfyUI","ComfyUI 是一款功能强大且高度模块化的视觉 AI 引擎，专为设计和执行复杂的 Stable Diffusion 图像生成流程而打造。它摒弃了传统的代码编写模式，采用直观的节点式流程图界面，让用户通过连接不同的功能模块即可构建个性化的生成管线。\n\n这一设计巧妙解决了高级 AI 绘图工作流配置复杂、灵活性不足的痛点。用户无需具备编程背景，也能自由组合模型、调整参数并实时预览效果，轻松实现从基础文生图到多步骤高清修复等各类复杂任务。ComfyUI 拥有极佳的兼容性，不仅支持 Windows、macOS 和 Linux 全平台，还广泛适配 NVIDIA、AMD、Intel 及苹果 Silicon 等多种硬件架构，并率先支持 SDXL、Flux、SD3 等前沿模型。\n\n无论是希望深入探索算法潜力的研究人员和开发者，还是追求极致创作自由度的设计师与资深 AI 绘画爱好者，ComfyUI 都能提供强大的支持。其独特的模块化架构允许社区不断扩展新功能，使其成为当前最灵活、生态最丰富的开源扩散模型工具之一，帮助用户将创意高效转化为现实。",108111,"2026-04-08T11:23:26",[14,15,13],{"id":45,"name":46,"github_repo":47,"description_zh":48,"stars":49,"difficulty_score":32,"last_commit_at":50,"category_tags":51,"status":17},4721,"markitdown","microsoft\u002Fmarkitdown","MarkItDown 是一款由微软 AutoGen 团队打造的轻量级 Python 工具，专为将各类文件高效转换为 Markdown 格式而设计。它支持 PDF、Word、Excel、PPT、图片（含 OCR）、音频（含语音转录）、HTML 乃至 YouTube 链接等多种格式的解析，能够精准提取文档中的标题、列表、表格和链接等关键结构信息。\n\n在人工智能应用日益普及的今天，大语言模型（LLM）虽擅长处理文本，却难以直接读取复杂的二进制办公文档。MarkItDown 恰好解决了这一痛点，它将非结构化或半结构化的文件转化为模型“原生理解”且 Token 效率极高的 Markdown 格式，成为连接本地文件与 AI 分析 pipeline 的理想桥梁。此外，它还提供了 MCP（模型上下文协议）服务器，可无缝集成到 Claude Desktop 等 LLM 应用中。\n\n这款工具特别适合开发者、数据科学家及 AI 研究人员使用，尤其是那些需要构建文档检索增强生成（RAG）系统、进行批量文本分析或希望让 AI 
助手直接“阅读”本地文件的用户。虽然生成的内容也具备一定可读性，但其核心优势在于为机器",93400,"2026-04-06T19:52:38",[52,14],"插件",{"id":54,"name":55,"github_repo":56,"description_zh":57,"stars":58,"difficulty_score":10,"last_commit_at":59,"category_tags":60,"status":17},4487,"LLMs-from-scratch","rasbt\u002FLLMs-from-scratch","LLMs-from-scratch 是一个基于 PyTorch 的开源教育项目，旨在引导用户从零开始一步步构建一个类似 ChatGPT 的大型语言模型（LLM）。它不仅是同名技术著作的官方代码库，更提供了一套完整的实践方案，涵盖模型开发、预训练及微调的全过程。\n\n该项目主要解决了大模型领域“黑盒化”的学习痛点。许多开发者虽能调用现成模型，却难以深入理解其内部架构与训练机制。通过亲手编写每一行核心代码，用户能够透彻掌握 Transformer 架构、注意力机制等关键原理，从而真正理解大模型是如何“思考”的。此外，项目还包含了加载大型预训练权重进行微调的代码，帮助用户将理论知识延伸至实际应用。\n\nLLMs-from-scratch 特别适合希望深入底层原理的 AI 开发者、研究人员以及计算机专业的学生。对于不满足于仅使用 API，而是渴望探究模型构建细节的技术人员而言，这是极佳的学习资源。其独特的技术亮点在于“循序渐进”的教学设计：将复杂的系统工程拆解为清晰的步骤，配合详细的图表与示例，让构建一个虽小但功能完备的大模型变得触手可及。无论你是想夯实理论基础，还是为未来研发更大规模的模型做准备",90106,"2026-04-06T11:19:32",[35,15,13,14],{"id":62,"github_repo":63,"name":64,"description_en":65,"description_zh":66,"ai_summary_zh":66,"readme_en":67,"readme_zh":68,"quickstart_zh":69,"use_case_zh":70,"hero_image_url":71,"owner_login":72,"owner_name":72,"owner_avatar_url":73,"owner_bio":74,"owner_company":75,"owner_location":75,"owner_email":76,"owner_twitter":75,"owner_website":75,"owner_url":77,"languages":78,"stars":95,"forks":96,"last_commit_at":97,"license":98,"difficulty_score":99,"env_os":100,"env_gpu":101,"env_ram":102,"env_deps":103,"category_tags":110,"github_topics":112,"view_count":32,"oss_zip_url":75,"oss_zip_packed_at":75,"status":17,"created_at":116,"updated_at":117,"faqs":118,"releases":148},5601,"PKU-MARL\u002FDexterousHands","DexterousHands","This is a library that provides dual dexterous hand manipulation tasks through Isaac Gym","DexterousHands（又名 Bi-DexHands）是一个基于 Isaac Gym 构建的开源库，专注于提供双手灵巧操作任务与强化学习算法。它旨在解决现代机器人研究中双手协调与手指精细操作难以达到人类水平的挑战，为学术界和工业界提供了一个高保真的仿真基准。\n\n该工具特别适合机器人学研究人员、强化学习开发者以及多智能体系统探索者使用。其核心亮点在于极高的运行效率，利用 GPU 并行计算能力，单张显卡即可同时运行数千个仿真环境，大幅加速训练过程。此外，DexterousHands 支持异构智能体协作，真实模拟了手指、关节等不同部件的特性，区别于传统参数共享的多智能体环境。它还提供了丰富的任务场景（如传递、投掷、放置等）和超过 2000 
种物体模型，并支持视觉观测与点云输入，能够很好地满足元学习、多任务学习及离线强化学习等前沿算法的验证需求。无论是想要复现经典算法，还是探索新的双手协作策略，DexterousHands 都是一个功能强大且易于集成的研究平台。","# Bi-DexHands: Bimanual Dexterous Manipulation via Reinforcement Learning\n\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FPKU-MARL_DexterousHands_readme_9149870c4210.jpg\" width=\"1000\" border=\"1\"\u002F>\n\n****\n[![PyPI](https:\u002F\u002Fimg.shields.io\u002Fpypi\u002Fv\u002Fbidexhands)](https:\u002F\u002Fpypi.org\u002Fproject\u002Fbidexhands\u002F)\n[![Organization](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FOrganization-PKU_MARL-blue.svg \"Organization\")](https:\u002F\u002Fgithub.com\u002FPKU-MARL \"Organization\")\n[![Unittest](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FUnittest-passing-green.svg \"Unittest\")](https:\u002F\u002Fgithub.com\u002FPKU-MARL \"Unittest\")\n[![Docs](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FDocs-In_development-red.svg \"Author\")](https:\u002F\u002Fgithub.com\u002FPKU-MARL \"Docs\")\n[![GitHub license](https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Flicense\u002FPKU-MARL\u002FDexterousHands)](https:\u002F\u002Fgithub.com\u002FPKU-MARL\u002FDexterousHands\u002Fblob\u002Fmain\u002FLICENSE)\n\n### Update\n\n[2023\u002F02\u002F09] We re-package the Bi-DexHands. Now you can call the Bi-DexHands' environments not only on the command line, but also in your Python script. 
Check our README [Use Bi-DexHands in Python scripts](#Use-Bi-DexHands-in-Python-scripts) below.\n\n[2022\u002F11\u002F24] Now we support visual observation for all the tasks, check this [document for visual input](.\u002Fdocs\u002Fvisual-in.md).\n\n[2022\u002F10\u002F02] Now we support the default IsaacGymEnvs RL library [rl-games](https:\u002F\u002Fgithub.com\u002FDenys88\u002Frl_games), check our README [below](#Use-rl_games-to-train-our-tasks).\n\n**Bi-DexHands** ([click bi-dexhands.ai](https:\u002F\u002Fpku-marl.github.io\u002FDexterousHands\u002F)) provides a collection of bimanual dexterous manipulation tasks and reinforcement learning algorithms. \nReaching human-level sophistication of hand dexterity and bimanual coordination remains an open challenge for modern robotics researchers. To better help the community study this problem, Bi-DexHands is developed with the following key features:\n- **Isaac Efficiency**: Bi-DexHands is built within [Isaac Gym](https:\u002F\u002Fdeveloper.nvidia.com\u002Fisaac-gym); it supports running thousands of environments simultaneously. For example, on one NVIDIA RTX 3090 GPU, Bi-DexHands can reach **40,000+ mean FPS** by running 2,048 environments in parallel. \n- **Comprehensive RL Benchmark**: we provide the first bimanual manipulation task environment for RL, MARL, Multi-task RL, Meta RL, and Offline RL practitioners, along with a comprehensive benchmark for SOTA continuous control model-free RL\u002FMARL methods. See [example](.\u002Fbidexhands\u002Falgorithms\u002Fmarl\u002F).\n- **Heterogeneous-agents Cooperation**: Agents in Bi-DexHands (i.e., joints, fingers, hands, ...) are genuinely heterogeneous; this is very different from common multi-agent environments such as [SMAC](https:\u002F\u002Fgithub.com\u002Foxwhirl\u002Fsmac) where agents can simply share parameters to solve the task. 
\n- **Task Generalization**: we introduce a variety of dexterous manipulation tasks (e.g., handover, lift up, throw, place, put...) as well as a large set of target objects from the [YCB](https:\u002F\u002Frse-lab.cs.washington.edu\u002Fprojects\u002Fposecnn\u002F) and [SAPIEN](https:\u002F\u002Fsapien.ucsd.edu\u002F) datasets (>2,000 objects); this allows meta-RL and multi-task RL algorithms to be tested on the task generalization front. \n- **Point Cloud**: We provide the ability to use point clouds as observations. We use the depth camera in Isaac Gym to get the depth image and then convert it to a partial point cloud. We can customize the pose and number of depth cameras to get point clouds from different angles. The density of the generated point cloud depends on the number of camera pixels. See the [point cloud docs](.\u002Fdocs\u002Fpoint-cloud.md). \n- **Quick Demos**\n\n\u003Cdiv align=center>\n\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FPKU-MARL_DexterousHands_readme_3b95bbf6b13d.gif\" align=\"center\" width=\"600\"\u002F>\n\u003C\u002Fdiv> \n \n\nContents of this repo are as follows:\n\n- [Installation](#Installation)\n  - [Pre-requisites](#Installation)\n  \u003C!-- - [Install from PyPI](#Install-from-PyPI) -->\n  - [Install from source code](#Install-from-source-code)\n- [Introduction to Bi-DexHands](#Introduction-to-Bi-DexHands)\n- [Overview of Environments](.\u002Fdocs\u002Fenvironments.md)\n- [Overview of Algorithms](.\u002Fdocs\u002Falgorithms.md)\n- [Getting Started](#Getting-Started)\n  - [Tasks](#Tasks)\n  - [Training](#Training)\n  - [Testing](#Testing)\n  - [Plotting](#Plotting)\n  - [Use Bi-DexHands in Python scripts](#Use-Bi-DexHands-in-Python-scripts)\n- [Environment Performance](#Environment-Performance)\n  - [Figures](#Figures)\n- [Offline RL Datasets](#Offline-RL-Datasets)\n- [Use rl_games to train our tasks](#Use-rl_games-to-train-our-tasks)\n- [Future Plan](#Future-Plan)\n- [Customizing your 
Environments](docs\u002Fcustomize-the-environment.md)\n- [How to change the type of dexterous hand](docs\u002FChange-the-type-of-dexterous-hand.md)\n- [How to add a robotic arm drive to the dexterous hand](docs\u002FAdd-a-robotic-arm-drive-to-the-dexterous-hand.md)\n- [The Team](#The-Team)\n- [License](#License)\n\u003Cbr>\u003C\u002Fbr>\n\nFor more information about this work, please check [our paper.](https:\u002F\u002Farxiv.org\u002Fabs\u002F2206.08686)\n\n****\n## Installation\n\nDetails regarding installation of IsaacGym can be found [here](https:\u002F\u002Fdeveloper.nvidia.com\u002Fisaac-gym). **We currently support the `Preview Release 3\u002F4` version of IsaacGym.**\n\n### Pre-requisites\n\nThe code has been tested on Ubuntu 18.04\u002F20.04 with Python 3.7\u002F3.8. The minimum recommended NVIDIA driver\nversion for Linux is `470.74` (dictated by support of IsaacGym).\n\nIt uses [Anaconda](https:\u002F\u002Fwww.anaconda.com\u002F) to create virtual environments.\nTo install Anaconda, follow instructions [here](https:\u002F\u002Fdocs.anaconda.com\u002Fanaconda\u002Finstall\u002Flinux\u002F).\n\nEnsure that Isaac Gym works on your system by running one of the examples from the `python\u002Fexamples` \ndirectory, like `joint_monkey.py`. Please follow troubleshooting steps described in the Isaac Gym Preview Release 3\u002F4\ninstall instructions if you have any trouble running the samples.\n\nOnce Isaac Gym is installed and samples work within your current python environment, install this repo:\n\n\u003C!-- #### Install from PyPI\nBi-DexHands is hosted on PyPI. It requires Python >= 3.7.\nYou can simply install Bi-DexHands from PyPI with the following command:\n\n```bash\npip install bidexhands\n``` -->\n\n#### Install from source code\nYou can also install this repo from the source code:\n\n```bash\npip install -e .\n```\n\n## Introduction\n\nThis repository contains complex dexterous hands control tasks. 
Bi-DexHands is built in the NVIDIA Isaac Gym with high performance guarantee for training RL algorithms. Our environments focus on applying model-free RL\u002FMARL algorithms for bimanual dexterous manipulation, which are considered as a challenging task for traditional control methods. \n\n## Getting Started\n\n### \u003Cspan id=\"task\">Tasks\u003C\u002Fspan>\n\nSource code for tasks can be found in `envs\u002Ftasks`. The detailed settings of state\u002Faction\u002Freward are in [here](.\u002Fdocs\u002Fenvironments.md).\n\nSo far, we release the following tasks (with many more to come):\n\n| Environments | Description | Demo     |\n|  :----:  | :----:  | :----:  |\n|ShadowHand Over| These environments involve two fixed-position hands. The hand which starts with the object must find a way to hand it over to the second hand. | \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FPKU-MARL_DexterousHands_readme_13a79f6ec884.gif\" width=\"250\"\u002F>    |\n|ShadowHandCatch Underarm|These environments again have two hands, however now they have some additional degrees of freedom that allows them to translate\u002Frotate their centre of masses within some constrained region. | \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FPKU-MARL_DexterousHands_readme_9f546dc4b7b2.gif\" align=\"middle\" width=\"250\"\u002F>    |\n|ShadowHandCatch Over2Underarm| This environment is made up of half ShadowHandCatchUnderarm and half ShadowHandCatchOverarm, the object needs to be thrown from the vertical hand to the palm-up hand | \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FPKU-MARL_DexterousHands_readme_aeda24cbc09d.gif\" align=\"middle\" width=\"250\"\u002F>    |\n|ShadowHandCatch Abreast| This environment is similar to ShadowHandCatchUnderarm, the difference is that the two hands are changed from relative to side-by-side posture. 
| \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FPKU-MARL_DexterousHands_readme_14e7bef41965.gif\" align=\"middle\" width=\"250\"\u002F>    |\n|ShadowHandCatch TwoCatchUnderarm| These environments involve coordination between the two hands so as to throw the two objects between hands (i.e. swapping them). | \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FPKU-MARL_DexterousHands_readme_aab5beca5a81.gif\" align=\"middle\" width=\"250\"\u002F>    |\n|ShadowHandLift Underarm | This environment requires grasping the pot handle with two hands and lifting the pot to the designated position  | \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FPKU-MARL_DexterousHands_readme_4f926e1e7029.gif\" align=\"middle\" width=\"250\"\u002F>    |\n|ShadowHandDoor OpenInward | This environment requires the closed door to be opened, and the door can only be pulled inwards | \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FPKU-MARL_DexterousHands_readme_96a89a68f0c8.gif\" align=\"middle\" width=\"250\"\u002F>    |\n|ShadowHandDoor OpenOutward | This environment requires a closed door to be opened and the door can only be pushed outwards  | \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FPKU-MARL_DexterousHands_readme_5707c4c17f33.gif\" align=\"middle\" width=\"250\"\u002F>    |\n|ShadowHandDoor CloseInward | This environment requires the open door to be closed, and the door is initially open inwards | \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FPKU-MARL_DexterousHands_readme_bc283b31c935.gif\" align=\"middle\" width=\"250\"\u002F>    |\n|ShadowHand BottleCap | This environment involves two hands and a bottle, we need to hold the bottle with one hand and open the bottle cap with the other hand  | \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FPKU-MARL_DexterousHands_readme_f82022b8ebc8.gif\" align=\"middle\" width=\"250\"\u002F>    
|\n|ShadowHandPush Block | This environment requires both hands to touch the block and push it forward | \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FPKU-MARL_DexterousHands_readme_b67bbd4b5919.gif\" align=\"middle\" width=\"250\"\u002F>    |\n|ShadowHandOpen Scissors | This environment requires both hands to cooperate to open the scissors | \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FPKU-MARL_DexterousHands_readme_78ca83845672.gif\" align=\"middle\" width=\"250\"\u002F>    |\n|ShadowHandOpen PenCap | This environment requires both hands to cooperate to open the pen cap  | \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FPKU-MARL_DexterousHands_readme_c43c600c7728.gif\" align=\"middle\" width=\"250\"\u002F>    |\n|ShadowHandSwing Cup | This environment requires two hands to hold the cup handle and rotate it 90 degrees | \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FPKU-MARL_DexterousHands_readme_945950570528.gif\" align=\"middle\" width=\"250\"\u002F>    |\n|ShadowHandTurn Botton | This environment requires both hands to press the button | \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FPKU-MARL_DexterousHands_readme_7f2845e9a123.gif\" align=\"middle\" width=\"250\"\u002F>    |\n|ShadowHandGrasp AndPlace | This environment has a bucket and an object, we need to put the object into the bucket  | \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FPKU-MARL_DexterousHands_readme_3038ec45efb4.gif\" align=\"middle\" width=\"250\"\u002F>    |\n\n\n### Training\n\n#### Training Examples\n\n##### RL\u002FMARL Examples\nFor example, if you want to train a policy for the ShadowHandOver task by the PPO algorithm, run this line in `bidexhands` folder:\n\n```bash\npython train.py --task=ShadowHandOver --algo=ppo\n```\n\nTo select an algorithm, pass `--algo=ppo\u002Fmappo\u002Fhappo\u002Fhatrpo\u002F...` \nas an argument. 
For example, if you want to use the HAPPO algorithm, run this line in the `bidexhands` folder:\n\n```bash\npython train.py --task=ShadowHandOver --algo=happo\n``` \n\nSupported Single-Agent RL algorithms are listed below:\n\n- [Proximal Policy Optimization (PPO)](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1707.06347.pdf)\n- [Trust Region Policy Optimization (TRPO)](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1502.05477.pdf)\n- [Twin Delayed DDPG (TD3)](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1802.09477.pdf)\n- [Soft Actor-Critic (SAC)](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1812.05905.pdf)\n- [Deep Deterministic Policy Gradient (DDPG)](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1509.02971.pdf)\n\nSupported Multi-Agent RL algorithms are listed below:\n\n- [Heterogeneous-Agent Proximal Policy Optimization (HAPPO)](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2109.11251.pdf)\n- [Heterogeneous-Agent Trust Region Policy Optimization (HATRPO)](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2109.11251.pdf)\n- [Multi-Agent Proximal Policy Optimization (MAPPO)](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2103.01955.pdf)\n- [Independent Proximal Policy Optimization (IPPO)](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2011.09533.pdf)\n- [Multi-Agent Deep Deterministic Policy Gradient (MADDPG)](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1706.02275.pdf)\n\n##### Multi-task\u002FMeta RL Examples\n\nThe training method for multi-task\u002Fmeta RL is similar to RL\u002FMARL; you only need to select the multi-task\u002Fmeta category and the corresponding algorithm. 
For example, if you want to train a policy for the ShadowHandMT4 categories by the MTPPO algorithm, run this line in `bidexhands` folder:\n\n```bash\npython train.py --task=ShadowHandMetaMT4 --algo=mtppo\n```\n\nSupported Multi-task RL algorithms are listed below:\n\n- [Multi-task Proximal Policy Optimization (MTPPO)](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1707.06347.pdf)\n- [Multi-task Trust Region Policy Optimization (MTTRPO)](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1502.05477.pdf)\n- [Multi-task Soft Actor-Critic (MTSAC)](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1812.05905.pdf)\n\nSupported Meta RL algorithms are listed below:\n\n- [ProMP: Proximal Meta-Policy Search (ProMP)](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1810.06784.pdf)\n\n\n\n#### Gym-Like API\n\nWe provide a Gym-Like API that allows us to get information from the Isaac Gym environment. Our single-agent Gym-Like wrapper is the code of the Isaac Gym team used, and we have developed a multi-agent Gym-Like wrapper based on it:\n\n```python\nclass MultiVecTaskPython(MultiVecTask):\n    # Get environment state information\n    def get_state(self):\n        return torch.clamp(self.task.states_buf, -self.clip_obs, self.clip_obs).to(self.rl_device)\n\n    def step(self, actions):\n        # Stack all agent actions in order and enter them into the environment\n        a_hand_actions = actions[0]\n        for i in range(1, len(actions)):\n            a_hand_actions = torch.hstack((a_hand_actions, actions[i]))\n        actions = a_hand_actions\n        # Clip the actions\n        actions_tensor = torch.clamp(actions, -self.clip_actions, self.clip_actions)\n        self.task.step(actions_tensor)\n        # Obtain information in the environment and distinguish the observation of different agents by hand\n        obs_buf = torch.clamp(self.task.obs_buf, -self.clip_obs, self.clip_obs).to(self.rl_device)\n        hand_obs = []\n        hand_obs.append(torch.cat([obs_buf[:, :self.num_hand_obs], obs_buf[:, 
2*self.num_hand_obs:]], dim=1))\n        hand_obs.append(torch.cat([obs_buf[:, self.num_hand_obs:2*self.num_hand_obs], obs_buf[:, 2*self.num_hand_obs:]], dim=1))\n        rewards = self.task.rew_buf.unsqueeze(-1).to(self.rl_device)\n        dones = self.task.reset_buf.to(self.rl_device)\n        # Organize information into Multi-Agent RL format\n        # Refer to https:\u002F\u002Fgithub.com\u002Ftinyzqh\u002Flight_mappo\u002Fblob\u002FHEAD\u002Fenvs\u002Fenv.py\n        sub_agent_obs = []\n        ...\n        sub_agent_done = []\n        for i in range(len(self.agent_index[0] + self.agent_index[1])):\n            ...\n            sub_agent_done.append(dones)\n        # Transpose dim-0 and dim-1 values\n        obs_all = torch.transpose(torch.stack(sub_agent_obs), 1, 0)\n        ...\n        done_all = torch.transpose(torch.stack(sub_agent_done), 1, 0)\n        return obs_all, state_all, reward_all, done_all, info_all, None\n\n    def reset(self):\n        # Use a random action as the first action after the environment reset\n        actions = 0.01 * (1 - 2 * torch.rand([self.task.num_envs, self.task.num_actions * 2], dtype=torch.float32, device=self.rl_device))\n        # step the simulator\n        self.task.step(actions)\n        # Get the observation and state buffer in the environment, the detailed are the same as step(self, actions)\n        obs_buf = torch.clamp(self.task.obs_buf, -self.clip_obs, self.clip_obs)\n        ...\n        obs = torch.transpose(torch.stack(sub_agent_obs), 1, 0)\n        state_all = torch.transpose(torch.stack(agent_state), 1, 0)\n        return obs, state_all, None\n```\n#### RL\u002FMulti-Agent RL API\n\nWe also provide single-agent and multi-agent RL interfaces. In order to adapt to Isaac Gym and speed up the running efficiency, all operations are implemented on GPUs using tensor. 
Therefore, there is no need to transfer data between the CPU and GPU.\n\nWe give an example using ***HATRPO (the SOTA MARL algorithm for cooperative tasks)*** to illustrate multi-agent RL APIs, please refer to [https:\u002F\u002Fgithub.com\u002Fcyanrain7\u002FTRPO-in-MARL](https:\u002F\u002Fgithub.com\u002Fcyanrain7\u002FTRPO-in-MARL):\n\n```python\nfrom algorithms.marl.hatrpo_trainer import HATRPO as TrainAlgo\nfrom algorithms.marl.hatrpo_policy import HATRPO_Policy as Policy\n...\n# warmup before the main loop starts\nself.warmup()\n# log data\nstart = time.time()\nepisodes = int(self.num_env_steps) \u002F\u002F self.episode_length \u002F\u002F self.n_rollout_threads\ntrain_episode_rewards = torch.zeros(1, self.n_rollout_threads, device=self.device)\n# main loop\nfor episode in range(episodes):\n    if self.use_linear_lr_decay:\n        self.trainer.policy.lr_decay(episode, episodes)\n    done_episodes_rewards = []\n    for step in range(self.episode_length):\n        # Sample actions\n        values, actions, action_log_probs, rnn_states, rnn_states_critic = self.collect(step)\n        # Obser reward and next obs\n        obs, share_obs, rewards, dones, infos, _ = self.envs.step(actions)\n        dones_env = torch.all(dones, dim=1)\n        reward_env = torch.mean(rewards, dim=1).flatten()\n        train_episode_rewards += reward_env\n        # Record reward at the end of each episode\n        for t in range(self.n_rollout_threads):\n            if dones_env[t]:\n                done_episodes_rewards.append(train_episode_rewards[:, t].clone())\n                train_episode_rewards[:, t] = 0\n\n        data = obs, share_obs, rewards, dones, infos, \\\n                values, actions, action_log_probs, \\\n                rnn_states, rnn_states_critic\n        # insert data into buffer\n        self.insert(data)\n\n    # compute return and update network\n    self.compute()\n    train_infos = self.train()\n    # post process\n    total_num_steps = (episode + 1) * 
self.episode_length * self.n_rollout_threads\n    # save model\n    if (episode % self.save_interval == 0 or episode == episodes - 1):\n        self.save()\n```\n\n### Testing\n\nThe trained model will be saved to the `logs\u002F${Task Name}\u002F${Algorithm Name}` folder.\n\nTo load a trained model and only perform inference (no training), pass `--test` \nas an argument, and pass `--model_dir` to specify the trained model you want to load.\nFor single-agent reinforcement learning, you need to pass `--model_dir` to specify exactly which `.pt` model you want to load. An example using the PPO algorithm is as follows:\n\n```bash\npython train.py --task=ShadowHandOver --algo=ppo --model_dir=logs\u002Fshadow_hand_over\u002Fppo\u002Fppo_seed0\u002Fmodel_5000.pt --test\n```\n\nFor multi-agent reinforcement learning, pass `--model_dir` to specify the path to the folder where all your agent model files are saved. An example using the HAPPO algorithm is as follows:\n\n```bash\npython train.py --task=ShadowHandOver --algo=happo --model_dir=logs\u002Fshadow_hand_over\u002Fhappo\u002Fmodels_seed0 --test\n```\n\n### Plotting\n\nUsers can convert all tfevent files into CSV files and then try plotting the results. Note that you should verify that `env-num` and `env-step` match your experimental setting. 
For the details, please refer to `.\u002Futils\u002Flogger\u002Ftools.py`.\n\n```bash\n# generate csv for sarl and marl algorithms\n$ python .\u002Futils\u002Flogger\u002Ftools.py --alg-name \u003Csarl algorithm> --alg-type sarl --env-num 2048 --env-step 8 --root-dir .\u002Flogs\u002Fshadow_hand_over --refresh \n$ python .\u002Futils\u002Flogger\u002Ftools.py --alg-name \u003Cmarl algorithm> --alg-type marl --env-num 2048 --env-step 8 --root-dir .\u002Flogs\u002Fshadow_hand_over --refresh \n# generate figures\n$ python .\u002Futils\u002Flogger\u002Fplotter.py --root-dir .\u002Flogs\u002Fshadow_hand_over --shaded-std --legend-pattern \"\\\\w+\" --output-path=.\u002Flogs\u002Fshadow_hand_over\u002Ffigure.png\n```\n\n### Use Bi-DexHands in Python scripts\n\n```python\nimport bidexhands as bi\nimport torch\n\nenv_name = 'ShadowHandOver'\nalgo = \"ppo\"\nenv = bi.make(env_name, algo)\n\nobs = env.reset()\nterminated = False\n\nwhile not terminated:\n    act = torch.tensor(env.action_space.sample()).repeat((env.num_envs, 1))\n    obs, reward, done, info = env.step(act)\n```\n\n## Environment Performance\n\n### Figures\n\nWe provide stable and reproducible baselines run with the **PPO, HAPPO, MAPPO, SAC** algorithms. All baselines are run with the parameters `2048 num_env` and `100M total_step`. The `dataset` folder contains the raw CSV files. 
\n\n\u003Ctable>\n    \u003Ctr>\n        \u003Cth colspan=\"2\">ShadowHandOver\u003C\u002Fth>\n        \u003Cth colspan=\"2\">ShadowHandLiftUnderarm\u003C\u002Fth>\n    \u003Ctr>\n    \u003Ctr>\n        \u003Ctd>\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FPKU-MARL_DexterousHands_readme_fe027b1285be.jpg\" align=\"middle\" width=\"750\"\u002F>\u003C\u002Ftd>\n        \u003Ctd>\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FPKU-MARL_DexterousHands_readme_44b56019e6ef.png\" align=\"middle\" width=\"750\"\u002F>\u003C\u002Ftd>\n        \u003Ctd>\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FPKU-MARL_DexterousHands_readme_d07541e56ad7.jpg\" align=\"middle\" width=\"750\"\u002F>\u003C\u002Ftd>\n        \u003Ctd>\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FPKU-MARL_DexterousHands_readme_e6dae50d313f.png\" align=\"middle\" width=\"750\"\u002F>\u003C\u002Ftd>\n    \u003Ctr>\n    \u003Ctr>\n        \u003Cth colspan=\"2\">ShadowHandCatchUnderarm\u003C\u002Fth>\n        \u003Cth colspan=\"2\">ShadowHandDoorOpenInward\u003C\u002Fth>\n    \u003Ctr>\n    \u003Ctr>\n        \u003Ctd>\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FPKU-MARL_DexterousHands_readme_9de49bc269f9.jpg\" align=\"middle\" width=\"750\"\u002F>\u003C\u002Ftd>\n        \u003Ctd>\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FPKU-MARL_DexterousHands_readme_1a8ebae1ade7.png\" align=\"middle\" width=\"750\"\u002F>\u003C\u002Ftd>\n        \u003Ctd>\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FPKU-MARL_DexterousHands_readme_8b0a21adae16.jpg\" align=\"middle\" width=\"750\"\u002F>\u003C\u002Ftd>\n        \u003Ctd>\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FPKU-MARL_DexterousHands_readme_8ad410604730.png\" align=\"middle\" width=\"750\"\u002F>\u003C\u002Ftd>\n    \u003Ctr>\n    \u003Ctr>\n        \u003Cth 
colspan=\"2\">ShadowHandOver2Underarm\u003C\u002Fth>\n        \u003Cth colspan=\"2\">ShadowHandDoorOpenOutward\u003C\u002Fth>\n    \u003Ctr>\n    \u003Ctr>\n        \u003Ctd>\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FPKU-MARL_DexterousHands_readme_9bc615112dc6.jpg\" align=\"middle\" width=\"750\"\u002F>\u003C\u002Ftd>\n        \u003Ctd>\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FPKU-MARL_DexterousHands_readme_027d4d12eb88.png\" align=\"middle\" width=\"750\"\u002F>\u003C\u002Ftd>\n        \u003Ctd>\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FPKU-MARL_DexterousHands_readme_02871589bf20.jpg\" align=\"middle\" width=\"750\"\u002F>\u003C\u002Ftd>\n        \u003Ctd>\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FPKU-MARL_DexterousHands_readme_f1639a9eaa96.png\" align=\"middle\" width=\"750\"\u002F>\u003C\u002Ftd>\n    \u003Ctr>\n    \u003Ctr>\n        \u003Cth colspan=\"2\">ShadowHandCatchAbreast\u003C\u002Fth>\n        \u003Cth colspan=\"2\">ShadowHandDoorCloseInward\u003C\u002Fth>\n    \u003Ctr>\n    \u003Ctr>\n        \u003Ctd>\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FPKU-MARL_DexterousHands_readme_cf501285a330.jpg\" align=\"middle\" width=\"750\"\u002F>\u003C\u002Ftd>\n        \u003Ctd>\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FPKU-MARL_DexterousHands_readme_8bac5460b67b.png\" align=\"middle\" width=\"750\"\u002F>\u003C\u002Ftd>\n        \u003Ctd>\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FPKU-MARL_DexterousHands_readme_f578ccaf3c73.jpg\" align=\"middle\" width=\"750\"\u002F>\u003C\u002Ftd>\n        \u003Ctd>\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FPKU-MARL_DexterousHands_readme_952d06197de2.png\" align=\"middle\" width=\"750\"\u002F>\u003C\u002Ftd>\n    \u003Ctr>\n    \u003Ctr>\n        \u003Cth colspan=\"2\">ShadowHandTwoCatchUnderarm\u003C\u002Fth>\n        
\u003Cth colspan=\"2\">ShadowHandDoorCloseOutward\u003C\u002Fth>\n    \u003Ctr>\n    \u003Ctr>\n        \u003Ctd>\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FPKU-MARL_DexterousHands_readme_109daeb317c2.jpg\" align=\"middle\" width=\"750\"\u002F>\u003C\u002Ftd>\n        \u003Ctd>\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FPKU-MARL_DexterousHands_readme_b7c715847ee7.png\" align=\"middle\" width=\"750\"\u002F>\u003C\u002Ftd>\n        \u003Ctd>\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FPKU-MARL_DexterousHands_readme_e9cdca918e2d.jpg\" align=\"middle\" width=\"750\"\u002F>\u003C\u002Ftd>\n        \u003Ctd>\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FPKU-MARL_DexterousHands_readme_57cac8c7ca57.png\" align=\"middle\" width=\"750\"\u002F>\u003C\u002Ftd>\n    \u003Ctr>\n    \u003Ctr>\n        \u003Cth colspan=\"2\">ShadowHandPushBlock\u003C\u002Fth>\n        \u003Cth colspan=\"2\">ShadowHandPen\u003C\u002Fth>\n    \u003Ctr>\n    \u003Ctr>\n        \u003Ctd>\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FPKU-MARL_DexterousHands_readme_9ae601ea92db.jpg\" align=\"middle\" width=\"750\"\u002F>\u003C\u002Ftd>\n        \u003Ctd>\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FPKU-MARL_DexterousHands_readme_454df77aafdb.png\" align=\"middle\" width=\"750\"\u002F>\u003C\u002Ftd>\n        \u003Ctd>\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FPKU-MARL_DexterousHands_readme_eb00740b7191.jpg\" align=\"middle\" width=\"750\"\u002F>\u003C\u002Ftd>\n        \u003Ctd>\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FPKU-MARL_DexterousHands_readme_7e0f15c6efee.png\" align=\"middle\" width=\"750\"\u002F>\u003C\u002Ftd>\n    \u003Ctr>\n    \u003Ctr>\n        \u003Cth colspan=\"2\">ShadowHandScissors\u003C\u002Fth>\n        \u003Cth colspan=\"2\">ShadowHandSwingCup\u003C\u002Fth>\n    \u003Ctr>\n    
\u003Ctr>\n        \u003Ctd>\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FPKU-MARL_DexterousHands_readme_565adad69156.jpg\" align=\"middle\" width=\"750\"\u002F>\u003C\u002Ftd>\n        \u003Ctd>\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FPKU-MARL_DexterousHands_readme_e09985a1b4e3.png\" align=\"middle\" width=\"750\"\u002F>\u003C\u002Ftd>\n        \u003Ctd>\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FPKU-MARL_DexterousHands_readme_24ceabebfd93.jpg\" align=\"middle\" width=\"750\"\u002F>\u003C\u002Ftd>\n        \u003Ctd>\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FPKU-MARL_DexterousHands_readme_6134cbef0906.png\" align=\"middle\" width=\"750\"\u002F>\u003C\u002Ftd>\n    \u003Ctr>\n    \u003Ctr>\n        \u003Cth colspan=\"2\">ShadowHandBlockStack\u003C\u002Fth>\n        \u003Cth colspan=\"2\">ShadowHandReOrientation\u003C\u002Fth>\n    \u003Ctr>\n    \u003Ctr>\n        \u003Ctd>\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FPKU-MARL_DexterousHands_readme_a75bc9f7c830.jpg\" align=\"middle\" width=\"750\"\u002F>\u003C\u002Ftd>\n        \u003Ctd>\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FPKU-MARL_DexterousHands_readme_629bc05e17f7.png\" align=\"middle\" width=\"750\"\u002F>\u003C\u002Ftd>\n        \u003Ctd>\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FPKU-MARL_DexterousHands_readme_ce473d41f609.jpg\" align=\"middle\" width=\"750\"\u002F>\u003C\u002Ftd>\n        \u003Ctd>\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FPKU-MARL_DexterousHands_readme_1d6d71ef6d3a.png\" align=\"middle\" width=\"750\"\u002F>\u003C\u002Ftd>\n    \u003Ctr>\n    \u003Ctr>\n        \u003Cth colspan=\"2\">ShadowHandPourWater\u003C\u002Fth>\n        \u003Cth colspan=\"2\">ShadowHandSwitch\u003C\u002Fth>\n    \u003Ctr>\n    \u003Ctr>\n        \u003Ctd>\u003Cimg 
src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FPKU-MARL_DexterousHands_readme_2b1dce337d83.jpg\" align=\"middle\" width=\"750\"\u002F>\u003C\u002Ftd>\n        \u003Ctd>\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FPKU-MARL_DexterousHands_readme_63c94bed81cb.png\" align=\"middle\" width=\"750\"\u002F>\u003C\u002Ftd>\n        \u003Ctd>\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FPKU-MARL_DexterousHands_readme_d561c6110cb4.jpg\" align=\"middle\" width=\"750\"\u002F>\u003C\u002Ftd>\n        \u003Ctd>\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FPKU-MARL_DexterousHands_readme_7530fadc1a4e.png\" align=\"middle\" width=\"750\"\u002F>\u003C\u002Ftd>\n    \u003Ctr>\n    \u003Ctr>\n        \u003Cth colspan=\"2\">ShadowHandGraspAndPlace\u003C\u002Fth>\n        \u003Cth colspan=\"2\">ShadowHandBottleCap\u003C\u002Fth>\n    \u003Ctr>\n    \u003Ctr>\n        \u003Ctd>\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FPKU-MARL_DexterousHands_readme_425535f4659d.jpg\" align=\"middle\" width=\"750\"\u002F>\u003C\u002Ftd>\n        \u003Ctd>\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FPKU-MARL_DexterousHands_readme_57c949f19880.png\" align=\"middle\" width=\"750\"\u002F>\u003C\u002Ftd>\n        \u003Ctd>\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FPKU-MARL_DexterousHands_readme_a4468ed9d073.jpg\" align=\"middle\" width=\"750\"\u002F>\u003C\u002Ftd>\n        \u003Ctd>\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FPKU-MARL_DexterousHands_readme_7503b7f5f64e.png\" align=\"middle\" width=\"750\"\u002F>\u003C\u002Ftd>\n    \u003Ctr>\n\u003C\u002Ftable>\n\n\u003C!-- ## Building the Documentation\n\nTo build documentation in various formats, you will need [Sphinx](http:\u002F\u002Fwww.sphinx-doc.org) and the\nreadthedocs theme.\n\n```bash\ncd docs\u002F\npip install -r requirements.txt\n```\nYou can then build the 
documentation by running `make \u003Cformat>` from the\n`docs\u002F` folder. Run `make` to get a list of all available output formats.\n\nIf you get a katex error run `npm install katex`.  If it persists, try\n`npm install -g katex` -->\n\n## Offline RL Datasets\n\n### Data Collection\n\n`ppo_collect` is the algorithm that collects offline data; the procedure is essentially the same as the MuJoCo data collection in D4RL. First, train PPO for 5000 iterations, collecting and saving the demonstration data during the first 2500 iterations:\n\n```bash\npython train.py --task=ShadowHandOver --algo=ppo_collect --num_envs=2048 --headless\n```\n\nSelect `model_5000.pt` as the expert policy to collect the expert dataset:\n\n```bash\npython3 train.py --task=ShadowHandOver --algo=ppo_collect --model_dir=.\u002Flogs\u002Fshadow_hand_over\u002Fppo_collect\u002Fppo_collect_seed-1\u002Fmodel_5000.pt --test --num_envs=200 --headless\n```\n\nSimilarly, select `model.pt` as the random policy and a partially trained checkpoint as the medium policy, collect random data and medium data as above, and evenly sample the replay dataset from the demonstration data collected before training reached the medium policy. Each dataset contains 10e6 samples. Run `merge.py` to obtain the medium-expert dataset.\n\n### Offline Data\n\nThe data originally collected for our paper is available at: \n[Shadow Hand Over](https:\u002F\u002Fdisk.pku.edu.cn:443\u002Flink\u002F74F68A12FFE4A12048604CFC37A54F9C), \n[Shadow Hand Door Open Outward](https:\u002F\u002Fdisk.pku.edu.cn:443\u002Flink\u002FA696D1FF69D60AC2E3E033C4567C108E).\n\n### Use rl_games to train our tasks\n\n**Please note that we have only tested rl-games==1.5.2. Higher or lower versions may cause errors.**\n\nFor example, if you want to train a policy for the ShadowHandOver task with the PPO algorithm, run this line in the `bidexhands` folder:\n\n```bash\npython train_rlgames.py --task=ShadowHandOver --algo=ppo\n```\n\nCurrently we only support PPO and PPO with LSTM methods in rl_games. 
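Because only rl-games==1.5.2 is covered by our tests, you may want your launch script to fail fast on an untested version. The following is an optional sketch; the `check_rlgames_version` helper is not part of Bi-DexHands, only an illustration:

```python
from importlib.metadata import PackageNotFoundError, version

TESTED_RLGAMES = "1.5.2"  # the only rl-games version the authors report testing

def check_rlgames_version(installed: str, tested: str = TESTED_RLGAMES) -> bool:
    """Return True when the installed rl-games matches the tested pin."""
    return installed == tested

try:
    installed = version("rl-games")
except PackageNotFoundError:
    installed = None  # rl-games is not installed in this environment

if installed is not None and not check_rlgames_version(installed):
    print(f"warning: rl-games=={installed} is untested; 1.5.2 is the tested pin")
```
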
If you want to use PPO with LSTM, run this line in the `bidexhands` folder:\n\n```bash\npython train_rlgames.py --task=ShadowHandOver --algo=ppo_lstm\n```\n\nThe rl_games log files can be found in the `bidexhands\u002Fruns` folder.\n\n## Known issues\n\nIt must be pointed out that Bi-DexHands is still under development, and there are some known issues:\n- Some environments may report errors late in a run due to bugs in PhysX's collision computation:\n```\nRuntimeError: CUDA error: an illegal memory access was encountered\nCUDA kernel errors might be asynchronously reported at some other API call,so the stacktrace below might be incorrect.\n```\n- Although we provide implementations of **DDPG**, **TD3**, and **MADDPG**, we have not tested these algorithms, and they may still have bugs.\n\n## Future Plan\n - [x] Success metric for all tasks\n - [ ] Add factory environment (see [this](https:\u002F\u002Fsites.google.com\u002Fnvidia.com\u002Ffactory))\n - [x] Add support for the default IsaacGymEnvs RL library [rl-games](https:\u002F\u002Fgithub.com\u002FDenys88\u002Frl_games)\n\n## Citation\nPlease cite as follows if you find this work helpful:\n```\n@inproceedings{\nchen2022towards,\ntitle={Towards Human-Level Bimanual Dexterous Manipulation with Reinforcement Learning},\nauthor={Yuanpei Chen and Yaodong Yang and Tianhao Wu and Shengjie Wang and Xidong Feng and Jiechuan Jiang and Zongqing Lu and Stephen Marcus McAleer and Hao Dong and Song-Chun Zhu},\nbooktitle={Thirty-sixth Conference on Neural Information Processing Systems Datasets and Benchmarks Track},\nyear={2022},\nurl={https:\u002F\u002Fopenreview.net\u002Fforum?id=D29JbExncTP}\n}\n```\n\n## The Team\n\nBi-DexHands is a project contributed by [Yuanpei Chen](https:\u002F\u002Fgithub.com\u002Fcypypccpy), [Yaodong Yang](https:\u002F\u002Fwww.yangyaodong.com\u002F), [Tianhao Wu](https:\u002F\u002Ftianhaowuhz.github.io\u002F), [Shengjie Wang](https:\u002F\u002Fgithub.com\u002FShengjie-bob), 
[Xidong Feng](https:\u002F\u002Fgithub.com\u002Fwaterhorse1), [Jiechuan Jiang](https:\u002F\u002Fgithub.com\u002Fjiechuanjiang), [Hao Dong](https:\u002F\u002Fzsdonghao.github.io), [Zongqing Lu](https:\u002F\u002Fz0ngqing.github.io), and [Song-Chun Zhu](http:\u002F\u002Fwww.stat.ucla.edu\u002F~sczhu\u002F) at Peking University. Please contact yaodong.yang@pku.edu.cn if you are interested in collaborating.\n\n\nWe also thank the contributors of the following two open-source repositories: \n[Isaac Gym](https:\u002F\u002Fgithub.com\u002FNVIDIA-Omniverse\u002FIsaacGymEnvs), [HATRPO](https:\u002F\u002Fgithub.com\u002Fcyanrain7\u002FTRPO-in-MARL).\n\nWe also recommend reading [the early work](https:\u002F\u002Farxiv.org\u002Fabs\u002F2009.05104) on dexterous hand manipulation that inspired this work.\n\n## License\n\nBi-DexHands has an Apache license, as found in the [LICENSE](LICENSE) file.\n","# Bi-DexHands：基于强化学习的双手灵巧操作\n\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FPKU-MARL_DexterousHands_readme_9149870c4210.jpg\" width=\"1000\" border=\"1\"\u002F>\n\n****\n[![PyPI](https:\u002F\u002Fimg.shields.io\u002Fpypi\u002Fv\u002Fbidexhands)](https:\u002F\u002Fpypi.org\u002Fproject\u002Fbidexhands\u002F)\n[![组织](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FOrganization-PKU_MARL-blue.svg \"Organization\")](https:\u002F\u002Fgithub.com\u002FPKU-MARL \"Organization\")\n[![单元测试](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FUnittest-passing-green.svg \"Unittest\")](https:\u002F\u002Fgithub.com\u002FPKU-MARL \"Unittest\")\n[![文档](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FDocs-In_development-red.svg \"Author\")](https:\u002F\u002Fgithub.com\u002FPKU-MARL \"Docs\")\n[![GitHub许可证](https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Flicense\u002FPKU-MARL\u002FDexterousHands)](https:\u002F\u002Fgithub.com\u002FPKU-MARL\u002FDexterousHands\u002Fblob\u002Fmain\u002FLICENSE)\n\n### 更新\n\n[2023\u002F02\u002F09] 
我们重新封装了Bi-DexHands。现在您不仅可以在命令行中调用Bi-DexHands环境，还可以在Python脚本中使用。请查看下方的README中的[在Python脚本中使用Bi-DexHands](#Use-Bi-DexHands-in-Python-scripts)部分。\n\n[2022\u002F11\u002F24] 现在我们支持所有任务的视觉观测，请参阅这篇关于[视觉输入](.\u002Fdocs\u002Fvisual-in.md)的文档。\n\n[2022\u002F10\u002F02] 现在我们支持默认的IsaacGymEnvs强化学习库[rl_games](https:\u002F\u002Fgithub.com\u002FDenys88\u002Frl_games)，请参阅我们的README中的[下方内容](#Use-rl_games-to-train-our-tasks)。\n\n**Bi-DexHands**（[点击bi-dexhands.ai](https:\u002F\u002Fpku-marl.github.io\u002FDexterousHands\u002F)）提供了一系列双手灵巧操作任务和强化学习算法。\n达到人类水平的手部灵巧性和双手协调性仍然是现代机器人研究领域的一个开放性挑战。为了更好地帮助社区研究这一问题，Bi-DexHands具备以下关键特性：\n- **Isaac效率**：Bi-DexHands构建于[Isaac Gym](https:\u002F\u002Fdeveloper.nvidia.com\u002Fisaac-gym)之上；它支持同时运行数千个环境。例如，在一块NVIDIA RTX 3090 GPU上，Bi-DexHands通过并行运行2,048个环境，可以达到**40,000+的平均FPS**。\n- **全面的强化学习基准**：我们为RL、MARL、多任务RL、元强化学习和离线强化学习的研究者提供了首个双手操作任务环境，并附带针对SOTA连续控制无模型RL\u002FMARL方法的全面基准。请参阅[示例](.\u002Fbidexhands\u002Falgorithms\u002Fmarl\u002F)。\n- **异构智能体协作**：Bi-DexHands中的智能体（即关节、手指、手等）是真正意义上的异构体；这与常见的多智能体环境（如[SMAC](https:\u002F\u002Fgithub.com\u002Foxwhirl\u002Fsmac)）截然不同，在那些环境中智能体通常可以通过共享参数来解决问题。\n- **任务泛化能力**：我们引入了多种灵巧操作任务（如交接、举起、投掷、放置等），以及来自[YCB](https:\u002F\u002Frse-lab.cs.washington.edu\u002Fprojects\u002Fposecnn\u002F)和[SAPIEN](https:\u002F\u002Fsapien.ucsd.edu\u002F)数据集的大量目标物体（超过2,000种）；这使得元强化学习和多任务强化学习算法能够在任务泛化方面得到测试。\n- **点云**：我们提供了使用点云作为观测值的功能。我们利用Isaac Gym中的深度相机获取深度图像，然后将其转换为局部点云。我们可以自定义深度相机的位置和数量，以从不同角度获取点云。生成点云的密度取决于相机像素的数量。请参阅[点云文档](.\u002Fdocs\u002Fpoint-cloud.md)。\n- **快速演示**\n\n\u003Cdiv align=center>\n\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FPKU-MARL_DexterousHands_readme_3b95bbf6b13d.gif\" align=\"center\" width=\"600\"\u002F>\n\u003C\u002Fdiv> \n \n\n此仓库的内容如下：\n\n- [安装](#Installation)\n  - [先决条件](#Installation)\n  \u003C!-- - [从PyPI安装](#Install-from-PyPI) -->\n  - [从源代码安装](#Install-from-source-code)\n- [Bi-DexHands简介](#Introduction-to-Bi-DexHands)\n- [环境概述](.\u002Fdocs\u002Fenvironments.md)\n- 
[算法概述](.\u002Fdocs\u002Falgorithms.md)\n- [开始使用](#Getting-Started)\n  - [任务](#Tasks)\n  - [训练](#Training)\n  - [测试](#Testing)\n  - [绘图](#Plotting)\n  - [在Python脚本中使用Bi-DexHands](#Use-Bi-DexHands-in-Python-scripts)\n- [环境性能](#Enviroments-Performance)\n  - [图表](#Figures)\n- [离线强化学习数据集](#Offline-RL-Datasets)\n- [使用rl_games训练我们的任务](#Use-rl_games-to-train-our-tasks)\n- [未来计划](#Future-Plan)\n- [自定义您的环境](docs\u002Fcustomize-the-environment.md)\n- [如何更改灵巧手类型](docs\u002FChange-the-type-of-dexterous-hand.md)\n- [如何为灵巧手添加机械臂驱动](docs\u002FAdd-a-robotic-arm-drive-to-the-dexterous-hand.md)\n- [团队](#The-Team)\n- [许可证](#License)\n\u003Cbr>\u003C\u002Fbr>\n\n有关这项工作的更多信息，请参阅[我们的论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F2206.08686)。\n\n****\n## 安装\n\n关于IsaacGym的安装详情，请参阅[此处](https:\u002F\u002Fdeveloper.nvidia.com\u002Fisaac-gym)。**我们目前支持IsaacGym的`Preview Release 3\u002F4`版本。**\n\n### 先决条件\n\n该代码已在Ubuntu 18.04\u002F20.04系统上使用Python 3.7\u002F3.8进行了测试。Linux系统上推荐的最低NVIDIA驱动版本为`470.74`（由IsaacGym的支持要求决定）。\n\n它使用[Anaconda](https:\u002F\u002Fwww.anaconda.com\u002F)来创建虚拟环境。\n要安装Anaconda，请按照[此处](https:\u002F\u002Fdocs.anaconda.com\u002Fanaconda\u002Finstall\u002Flinux\u002F)的说明进行操作。\n\n请确保Isaac Gym能在您的系统上正常运行，方法是运行`python\u002Fexamples`目录下的示例之一，比如`joint_monkey.py`。如果您在运行示例时遇到任何问题，请按照Isaac Gym Preview Release 3\u002F4安装说明中的故障排除步骤操作。\n\n一旦Isaac Gym安装完成且示例在您当前的Python环境中能够正常运行，即可安装本仓库：\n\n\u003C!-- #### 从PyPI安装\nBi-DexHands托管在PyPI上。它需要Python >= 3.7。\n您可以通过以下命令简单地从PyPI安装Bi-DexHands：\n\n```bash\npip install bidexhands\n``` -->\n\n#### 从源代码安装\n您也可以从源代码安装本仓库：\n\n```bash\npip install -e .\n```\n\n## 简介\n\n此仓库包含复杂的灵巧手控制任务。Bi-DexHands构建于NVIDIA Isaac Gym之上，具有高性能保障，可用于训练强化学习算法。我们的环境专注于应用无模型RL\u002FMARL算法进行双手灵巧操作，而这类任务被普遍认为对传统控制方法而言极具挑战性。\n\n## 开始使用\n\n### \u003Cspan id=\"task\">任务\u003C\u002Fspan>\n\n任务的源代码可以在 `envs\u002Ftasks` 中找到。状态\u002F动作\u002F奖励的具体设置请参见[这里](.\u002Fdocs\u002Fenvironments.md)。\n\n截至目前，我们发布了以下任务（未来还将推出更多）：\n\n| 环境 | 描述 | 演示     |\n|  :----:  | :----:  | :----:  |\n|ShadowHand 传递| 
这些环境涉及两只位置固定的机械手。初始持有物体的手需要设法将物体传递给另一只手。 | \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FPKU-MARL_DexterousHands_readme_13a79f6ec884.gif\" width=\"250\"\u002F>    |\n|ShadowHand 腋下接球| 这些环境同样包含两只机械手，但它们现在具有额外的自由度，可以在一定范围内平移或旋转其质心。 | \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FPKU-MARL_DexterousHands_readme_9f546dc4b7b2.gif\" align=\"middle\" width=\"250\"\u002F>    |\n|ShadowHand 上抛下接| 该环境由一半 ShadowHandCatchUnderarm 和一半 ShadowHandCatchOverarm 组成，物体需要从垂直放置的手抛向掌心朝上的另一只手。 | \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FPKU-MARL_DexterousHands_readme_aeda24cbc09d.gif\" align=\"middle\" width=\"250\"\u002F>    |\n|ShadowHand 并排接球| 该环境与 ShadowHandCatchUnderarm 类似，不同之处在于两只手的位置从相对变为并排。 | \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FPKU-MARL_DexterousHands_readme_14e7bef41965.gif\" align=\"middle\" width=\"250\"\u002F>    |\n|ShadowHand 双人腋下接球| 这些环境要求两只手协同配合，将两个物体在手中来回抛掷（即交换位置）。 | \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FPKU-MARL_DexterousHands_readme_aab5beca5a81.gif\" align=\"middle\" width=\"250\"\u002F>    |\n|ShadowHand 腋下提壶| 该环境要求用两只手抓住壶柄，并将壶提升到指定位置。 | \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FPKU-MARL_DexterousHands_readme_4f926e1e7029.gif\" align=\"middle\" width=\"250\"\u002F>    |\n|ShadowHand 向内开门| 该环境要求打开一扇关闭的门，且门只能向内拉动。 | \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FPKU-MARL_DexterousHands_readme_96a89a68f0c8.gif\" align=\"middle\" width=\"250\"\u002F>    |\n|ShadowHand 向外开门| 该环境要求打开一扇关闭的门，且门只能向外推开。 | \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FPKU-MARL_DexterousHands_readme_5707c4c17f33.gif\" align=\"middle\" width=\"250\"\u002F>    |\n|ShadowHand 向内关门| 该环境要求关闭一扇已经打开的门，且门最初是向内打开的。 | \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FPKU-MARL_DexterousHands_readme_bc283b31c935.gif\" align=\"middle\" width=\"250\"\u002F>    |\n|ShadowHand 
开瓶盖| 该环境涉及两只机械手和一个瓶子，我们需要用一只手握住瓶子，另一只手打开瓶盖。 | \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FPKU-MARL_DexterousHands_readme_f82022b8ebc8.gif\" align=\"middle\" width=\"250\"\u002F>    |\n|ShadowHand 推方块| 该环境要求两只手同时接触方块，并将其向前推动。 | \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FPKU-MARL_DexterousHands_readme_b67bbd4b5919.gif\" align=\"middle\" width=\"250\"\u002F>    |\n|ShadowHand 打开剪刀| 该环境要求两只手协同合作，打开一把剪刀。 | \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FPKU-MARL_DexterousHands_readme_78ca83845672.gif\" align=\"middle\" width=\"250\"\u002F>    |\n|ShadowHand 打开笔帽| 该环境要求两只手协同合作，打开一支笔的笔帽。 | \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FPKU-MARL_DexterousHands_readme_c43c600c7728.gif\" align=\"middle\" width=\"250\"\u002F>    |\n|ShadowHand 摆动杯子| 该环境要求两只手握住杯柄，将杯子旋转 90 度。 | \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FPKU-MARL_DexterousHands_readme_945950570528.gif\" align=\"middle\" width=\"250\"\u002F>    |\n|ShadowHand 按按钮| 该环境要求两只手同时按下按钮。 | \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FPKU-MARL_DexterousHands_readme_7f2845e9a123.gif\" align=\"middle\" width=\"250\"\u002F>    |\n|ShadowHand 抓取并放置| 该环境有一个水桶和一个物体，我们需要将物体放入水桶中。 | \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FPKU-MARL_DexterousHands_readme_3038ec45efb4.gif\" align=\"middle\" width=\"250\"\u002F>    |\n\n### 训练\n\n#### 训练示例\n\n##### 强化学习\u002F多智能体强化学习示例\n\n例如，如果你想使用PPO算法为ShadowHandOver任务训练策略，可以在`bidexhands`文件夹中运行以下命令：\n\n```bash\npython train.py --task=ShadowHandOver --algo=ppo\n```\n\n要选择算法，只需通过`--algo=ppo\u002Fmappo\u002Fhappo\u002Fhatrpo\u002F...`作为参数即可。例如，如果你想使用happo算法，可以在`bidexhands`文件夹中运行以下命令：\n\n```bash\npython train.py --task=ShadowHandOver --algo=happo\n```\n\n支持的单智能体强化学习算法如下：\n\n- [近端策略优化（PPO）](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1707.06347.pdf)\n- [信任域策略优化（TRPO）](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1502.05477.pdf)\n- 
[双延迟DDPG（TD3）](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1802.09477.pdf)\n- [软演员-评论家（SAC）](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1812.05905.pdf)\n- [深度确定性策略梯度（DDPG）](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1509.02971.pdf)\n\n支持的多智能体强化学习算法如下：\n\n- [异构智能体近端策略优化（HAPPO）](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2109.11251.pdf)\n- [异构智能体信任域策略优化（HATRPO）](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2109.11251.pdf)\n- [多智能体近端策略优化（MAPPO）](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2103.01955.pdf)\n- [独立近端策略优化（IPPO）](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2011.09533.pdf)\n- [多智能体深度确定性策略梯度（MADDPG）](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1706.02275.pdf)\n\n##### 多任务\u002F元强化学习示例\n\n多任务\u002F元强化学习的训练方法与RL\u002FMARL类似，只需选择相应的多任务\u002F元类别及对应算法即可。例如，如果你想使用MTPPO算法为ShadowHandMT4类别训练策略，可以在`bidexhands`文件夹中运行以下命令：\n\n```bash\npython train.py --task=ShadowHandMetaMT4 --algo=mtppo\n```\n\n支持的多任务强化学习算法如下：\n\n- [多任务近端策略优化（MTPPO）](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1707.06347.pdf)\n- [多任务信任域策略优化（MTTRPO）](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1502.05477.pdf)\n- [多任务软演员-评论家（MTSAC）](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1812.05905.pdf)\n\n支持的元强化学习算法如下：\n\n- [ProMP：近端元策略搜索（ProMP）](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1810.06784.pdf)\n\n\n\n#### 类Gym API\n\n我们提供了一个类Gym API，允许从Isaac Gym环境中获取信息。我们的单智能体类Gym封装直接沿用了Isaac Gym团队的代码，并在此基础上开发了多智能体类Gym封装：\n\n```python\nclass MultiVecTaskPython(MultiVecTask):\n    # 获取环境状态信息\n    def get_state(self):\n        return torch.clamp(self.task.states_buf, -self.clip_obs, self.clip_obs).to(self.rl_device)\n\n    def step(self, actions):\n        # 将所有智能体的动作按顺序堆叠并输入到环境中\n        a_hand_actions = actions[0]\n        for i in range(1, len(actions)):\n            a_hand_actions = torch.hstack((a_hand_actions, actions[i]))\n        actions = a_hand_actions\n        # 对动作进行裁剪\n        actions_tensor = torch.clamp(actions, -self.clip_actions, self.clip_actions)\n        self.task.step(actions_tensor)\n        # 获取环境中的信息，并手动区分不同智能体的观测值\n     
   obs_buf = torch.clamp(self.task.obs_buf, -self.clip_obs, self.clip_obs).to(self.rl_device)\n        hand_obs = []\n        hand_obs.append(torch.cat([obs_buf[:, :self.num_hand_obs], obs_buf[:, 2*self.num_hand_obs:]], dim=1))\n        hand_obs.append(torch.cat([obs_buf[:, self.num_hand_obs:2*self.num_hand_obs], obs_buf[:, 2*self.num_hand_obs:]], dim=1))\n        rewards = self.task.rew_buf.unsqueeze(-1).to(self.rl_device)\n        dones = self.task.reset_buf.to(self.rl_device)\n        # 将信息整理成多智能体强化学习格式\n        # 参考 https:\u002F\u002Fgithub.com\u002Ftinyzqh\u002Flight_mappo\u002Fblob\u002FHEAD\u002Fenvs\u002Fenv.py\n        sub_agent_obs = []\n        ...\n        sub_agent_done = []\n        for i in range(len(self.agent_index[0] + self.agent_index[1])):\n            ...\n            sub_agent_done.append(dones)\n        # 转置第0维和第1维的值\n        obs_all = torch.transpose(torch.stack(sub_agent_obs), 1, 0)\n        ...\n        done_all = torch.transpose(torch.stack(sub_agent_done), 1, 0)\n        return obs_all, state_all, reward_all, done_all, info_all, None\n\n    def reset(self):\n        # 环境重置后，使用随机动作作为首次动作\n        actions = 0.01 * (1 - 2 * torch.rand([self.task.num_envs, self.task.num_actions * 2], dtype=torch.float32, device=self.rl_device))\n        # 运行模拟器\n        self.task.step(actions)\n        # 获取环境中的观测值和状态缓冲区，详细过程与step(self, actions)相同\n        obs_buf = torch.clamp(self.task.obs_buf, -self.clip_obs, self.clip_obs)\n        ...\n        obs = torch.transpose(torch.stack(sub_agent_obs), 1, 0)\n        state_all = torch.transpose(torch.stack(agent_state), 1, 0)\n        return obs, state_all, None\n```\n#### RL\u002F多智能体RL API\n\n我们还提供了单智能体和多智能体RL接口。为了适配Isaac Gym并提高运行效率，所有操作均在GPU上以张量形式实现，因此无需在CPU和GPU之间传输数据。\n\n我们以***HATRPO（用于合作任务的SOTA多智能体强化学习算法）***为例来说明多智能体RL API，请参考[https:\u002F\u002Fgithub.com\u002Fcyanrain7\u002FTRPO-in-MARL](https:\u002F\u002Fgithub.com\u002Fcyanrain7\u002FTRPO-in-MARL)：\n\n```python\nfrom algorithms.marl.hatrpo_trainer import HATRPO 
as TrainAlgo\nfrom algorithms.marl.hatrpo_policy import HATRPO_Policy as Policy\n...\n# 主循环开始前的预热\nself.warmup()\n# 记录数据\nstart = time.time()\nepisodes = int(self.num_env_steps) \u002F\u002F self.episode_length \u002F\u002F self.n_rollout_threads\ntrain_episode_rewards = torch.zeros(1, self.n_rollout_threads, device=self.device)\n\n# 主循环\nfor episode in range(episodes):\n    if self.use_linear_lr_decay:\n        self.trainer.policy.lr_decay(episode, episodes)\n    done_episodes_rewards = []\n    for step in range(self.episode_length):\n        # 采样动作\n        values, actions, action_log_probs, rnn_states, rnn_states_critic = self.collect(step)\n        # 观测奖励和下一个观测\n        obs, share_obs, rewards, dones, infos, _ = self.envs.step(actions)\n        dones_env = torch.all(dones, dim=1)\n        reward_env = torch.mean(rewards, dim=1).flatten()\n        train_episode_rewards += reward_env\n        # 记录每个回合结束时的奖励\n        for t in range(self.n_rollout_threads):\n            if dones_env[t]:\n                done_episodes_rewards.append(train_episode_rewards[:, t].clone())\n                train_episode_rewards[:, t] = 0\n\n        data = obs, share_obs, rewards, dones, infos, \\\n                values, actions, action_log_probs, \\\n                rnn_states, rnn_states_critic\n        # 将数据插入缓冲区\n        self.insert(data)\n\n    # 计算回报并更新网络\n    self.compute()\n    train_infos = self.train()\n    # 后处理\n    total_num_steps = (episode + 1) * self.episode_length * self.n_rollout_threads\n    # 保存模型\n    if (episode % self.save_interval == 0 or episode == episodes - 1):\n        self.save()\n```\n\n### 测试\n\n训练好的模型将被保存到 `logs\u002F${Task Name}\u002F${Algorithm Name}` 文件夹中。\n\n要加载已训练好的模型并仅进行推理（不进行训练），请传递 `--test` 参数，并使用 `--model_dir` 指定您想要加载的已训练模型路径。对于单智能体强化学习，您需要通过 `--model_dir` 精确指定要加载的 `.pt` 模型文件。以 PPO 算法为例：\n\n```bash\npython train.py --task=ShadowHandOver --algo=ppo --model_dir=logs\u002Fshadow_hand_over\u002Fppo\u002Fppo_seed0\u002Fmodel_5000.pt 
--test\n```\n\n对于多智能体强化学习，则需通过 `--model_dir` 指定包含所有智能体模型文件的文件夹路径。以 HAPPO 算法为例：\n\n```bash\npython train.py --task=ShadowHandOver --algo=happo --model_dir=logs\u002Fshadow_hand_over\u002Fhappo\u002Fmodels_seed0 --test\n```\n\n### 绘图\n\n用户可以将所有的 tfevent 文件转换为 csv 文件，然后尝试绘制结果。请注意，应确保 `env-num` 和 `env-step` 与您的实验设置一致。详细信息请参阅 `.\u002Futils\u002Flogger\u002Ftools.py`。\n\n```bash\n# 生成单智能体和多智能体算法的 csv 文件\n$ python .\u002Futils\u002Flogger\u002Ftools.py --alg-name \u003Csarl algorithm> --alg-type sarl --env-num 2048 --env-step 8 --root-dir .\u002Flogs\u002Fshadow_hand_over --refresh \n$ python .\u002Futils\u002Flogger\u002Ftools.py --alg-name \u003Cmarl algorithm> --alg-type marl --env-num 2048 --env-step 8 --root-dir .\u002Flogs\u002Fshadow_hand_over --refresh \n# 生成图表\n$ python .\u002Futils\u002Flogger\u002Fplotter.py --root-dir .\u002Flogs\u002Fshadow_hand_over --shaded-std --legend-pattern \"\\\\w+\"  --output-path=.\u002Flogs\u002Fshadow_hand_over\u002Ffigure.png\n```\n\n### 在 Python 脚本中使用 Bi-DexHands\n\n```python\nimport bidexhands as bi\nimport torch\n\nenv_name = 'ShadowHandOver'\nalgo = \"ppo\"\nenv = bi.make(env_name, algo)\n\nobs = env.reset()\nterminated = False\n\nwhile not terminated:\n    act = torch.tensor(env.action_space.sample()).repeat((env.num_envs, 1))\n    obs, reward, done, info = env.step(act)\n```\n\n## 环境性能\n\n### 图表\n\n我们提供了由 **PPO、HAPPO、MAPPO、SAC** 算法运行的稳定且可复现的基准测试结果。所有基准测试均在 `2048 num_env` 和 `100M total_step` 的参数设置下运行。`dataset` 文件夹中包含了原始的 CSV 文件。\n\n\u003Ctable>\n    \u003Ctr>\n        \u003Cth colspan=\"2\">ShadowHandOver\u003C\u002Fth>\n        \u003Cth colspan=\"2\">ShadowHandLiftUnderarm\u003C\u002Fth>\n    \u003Ctr>\n    \u003Ctr>\n        \u003Ctd>\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FPKU-MARL_DexterousHands_readme_fe027b1285be.jpg\" align=\"middle\" width=\"750\"\u002F>\u003C\u002Ftd>\n        \u003Ctd>\u003Cimg 
src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FPKU-MARL_DexterousHands_readme_44b56019e6ef.png\" align=\"middle\" width=\"750\"\u002F>\u003C\u002Ftd>\n        \u003Ctd>\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FPKU-MARL_DexterousHands_readme_d07541e56ad7.jpg\" align=\"middle\" width=\"750\"\u002F>\u003C\u002Ftd>\n        \u003Ctd>\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FPKU-MARL_DexterousHands_readme_e6dae50d313f.png\" align=\"middle\" width=\"750\"\u002F>\u003C\u002Ftd>\n    \u003Ctr>\n    \u003Ctr>\n        \u003Cth colspan=\"2\">ShadowHandCatchUnderarm\u003C\u002Fth>\n        \u003Cth colspan=\"2\">ShadowHandDoorOpenInward\u003C\u002Fth>\n    \u003Ctr>\n    \u003Ctr>\n        \u003Ctd>\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FPKU-MARL_DexterousHands_readme_9de49bc269f9.jpg\" align=\"middle\" width=\"750\"\u002F>\u003C\u002Ftd>\n        \u003Ctd>\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FPKU-MARL_DexterousHands_readme_1a8ebae1ade7.png\" align=\"middle\" width=\"750\"\u002F>\u003C\u002Ftd>\n        \u003Ctd>\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FPKU-MARL_DexterousHands_readme_8b0a21adae16.jpg\" align=\"middle\" width=\"750\"\u002F>\u003C\u002Ftd>\n        \u003Ctd>\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FPKU-MARL_DexterousHands_readme_8ad410604730.png\" align=\"middle\" width=\"750\"\u002F>\u003C\u002Ftd>\n    \u003Ctr>\n    \u003Ctr>\n        \u003Cth colspan=\"2\">ShadowHandOver2Underarm\u003C\u002Fth>\n        \u003Cth colspan=\"2\">ShadowHandDoorOpenOutward\u003C\u002Fth>\n    \u003Ctr>\n    \u003Ctr>\n        \u003Ctd>\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FPKU-MARL_DexterousHands_readme_9bc615112dc6.jpg\" align=\"middle\" width=\"750\"\u002F>\u003C\u002Ftd>\n        \u003Ctd>\u003Cimg 
src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FPKU-MARL_DexterousHands_readme_027d4d12eb88.png\" align=\"middle\" width=\"750\"\u002F>\u003C\u002Ftd>\n        \u003Ctd>\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FPKU-MARL_DexterousHands_readme_02871589bf20.jpg\" align=\"middle\" width=\"750\"\u002F>\u003C\u002Ftd>\n        \u003Ctd>\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FPKU-MARL_DexterousHands_readme_f1639a9eaa96.png\" align=\"middle\" width=\"750\"\u002F>\u003C\u002Ftd>\n    \u003Ctr>\n    \u003Ctr>\n        \u003Cth colspan=\"2\">ShadowHandCatchAbreast\u003C\u002Fth>\n        \u003Cth colspan=\"2\">ShadowHandDoorCloseInward\u003C\u002Fth>\n    \u003Ctr>\n    \u003Ctr>\n        \u003Ctd>\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FPKU-MARL_DexterousHands_readme_cf501285a330.jpg\" align=\"middle\" width=\"750\"\u002F>\u003C\u002Ftd>\n        \u003Ctd>\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FPKU-MARL_DexterousHands_readme_8bac5460b67b.png\" align=\"middle\" width=\"750\"\u002F>\u003C\u002Ftd>\n        \u003Ctd>\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FPKU-MARL_DexterousHands_readme_f578ccaf3c73.jpg\" align=\"middle\" width=\"750\"\u002F>\u003C\u002Ftd>\n        \u003Ctd>\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FPKU-MARL_DexterousHands_readme_952d06197de2.png\" align=\"middle\" width=\"750\"\u002F>\u003C\u002Ftd>\n    \u003Ctr>\n    \u003Ctr>\n        \u003Cth colspan=\"2\">ShadowHandTwoCatchUnderarm\u003C\u002Fth>\n        \u003Cth colspan=\"2\">ShadowHandDoorCloseOutward\u003C\u002Fth>\n    \u003Ctr>\n    \u003Ctr>\n        \u003Ctd>\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FPKU-MARL_DexterousHands_readme_109daeb317c2.jpg\" align=\"middle\" width=\"750\"\u002F>\u003C\u002Ftd>\n        \u003Ctd>\u003Cimg 
src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FPKU-MARL_DexterousHands_readme_b7c715847ee7.png\" align=\"middle\" width=\"750\"\u002F>\u003C\u002Ftd>\n        \u003Ctd>\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FPKU-MARL_DexterousHands_readme_e9cdca918e2d.jpg\" align=\"middle\" width=\"750\"\u002F>\u003C\u002Ftd>\n        \u003Ctd>\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FPKU-MARL_DexterousHands_readme_57cac8c7ca57.png\" align=\"middle\" width=\"750\"\u002F>\u003C\u002Ftd>\n    \u003Ctr>\n    \u003Ctr>\n        \u003Cth colspan=\"2\">ShadowHandPushBlock\u003C\u002Fth>\n        \u003Cth colspan=\"2\">ShadowHandPen\u003C\u002Fth>\n    \u003Ctr>\n    \u003Ctr>\n        \u003Ctd>\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FPKU-MARL_DexterousHands_readme_9ae601ea92db.jpg\" align=\"middle\" width=\"750\"\u002F>\u003C\u002Ftd>\n        \u003Ctd>\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FPKU-MARL_DexterousHands_readme_454df77aafdb.png\" align=\"middle\" width=\"750\"\u002F>\u003C\u002Ftd>\n        \u003Ctd>\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FPKU-MARL_DexterousHands_readme_eb00740b7191.jpg\" align=\"middle\" width=\"750\"\u002F>\u003C\u002Ftd>\n        \u003Ctd>\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FPKU-MARL_DexterousHands_readme_7e0f15c6efee.png\" align=\"middle\" width=\"750\"\u002F>\u003C\u002Ftd>\n    \u003Ctr>\n    \u003Ctr>\n        \u003Cth colspan=\"2\">ShadowHandScissors\u003C\u002Fth>\n        \u003Cth colspan=\"2\">ShadowHandSwingCup\u003C\u002Fth>\n    \u003Ctr>\n    \u003Ctr>\n        \u003Ctd>\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FPKU-MARL_DexterousHands_readme_565adad69156.jpg\" align=\"middle\" width=\"750\"\u002F>\u003C\u002Ftd>\n        \u003Ctd>\u003Cimg 
src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FPKU-MARL_DexterousHands_readme_e09985a1b4e3.png\" align=\"middle\" width=\"750\"\u002F>\u003C\u002Ftd>\n        \u003Ctd>\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FPKU-MARL_DexterousHands_readme_24ceabebfd93.jpg\" align=\"middle\" width=\"750\"\u002F>\u003C\u002Ftd>\n        \u003Ctd>\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FPKU-MARL_DexterousHands_readme_6134cbef0906.png\" align=\"middle\" width=\"750\"\u002F>\u003C\u002Ftd>\n    \u003C\u002Ftr>\n    \u003Ctr>\n        \u003Cth colspan=\"2\">ShadowHandBlockStack\u003C\u002Fth>\n        \u003Cth colspan=\"2\">ShadowHandReOrientation\u003C\u002Fth>\n    \u003C\u002Ftr>\n    \u003Ctr>\n        \u003Ctd>\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FPKU-MARL_DexterousHands_readme_a75bc9f7c830.jpg\" align=\"middle\" width=\"750\"\u002F>\u003C\u002Ftd>\n        \u003Ctd>\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FPKU-MARL_DexterousHands_readme_629bc05e17f7.png\" align=\"middle\" width=\"750\"\u002F>\u003C\u002Ftd>\n        \u003Ctd>\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FPKU-MARL_DexterousHands_readme_ce473d41f609.jpg\" align=\"middle\" width=\"750\"\u002F>\u003C\u002Ftd>\n        \u003Ctd>\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FPKU-MARL_DexterousHands_readme_1d6d71ef6d3a.png\" align=\"middle\" width=\"750\"\u002F>\u003C\u002Ftd>\n    \u003C\u002Ftr>\n\u003C\u002Ftable>\n\n### 数据收集\n\n`ppo_collect` 是用于收集离线数据的算法，其基本流程与 D4RL 中的 Mujoco 数据收集相同。首先使用 PPO 算法训练 5000 次迭代，并在前 2500 次迭代中收集并保存示范数据：\n\n```bash\npython train.py --task=ShadowHandOver --algo=ppo_collect --num_envs=2048 --headless\n```\n\n选择 `model_5000.pt` 作为导出策略来收集专家数据集：\n\n```bash\npython3 train.py --task=ShadowHandOver --algo=ppo_collect --model_dir=.\u002Flogs\u002Fshadow_hand_over\u002Fppo_collect\u002Fppo_collect_seed-1\u002Fmodel_5000.pt --test --num_envs=200 --headless\n```\n\n类似地，选择 
`model.pt` 作为随机策略，并选择另一个模型作为中等难度策略，分别按照上述方法收集随机数据和中等难度数据。在训练至中等难度策略之前，从示范数据中均匀采样得到回放缓冲区数据。每个数据集的大小均为 10^6 条样本。运行 `merge.py` 脚本以生成中等-专家数据集。\n\n### 离线数据\n\n我们论文中最初收集的数据可在此处获取：\n[Shadow Hand Over](https:\u002F\u002Fdisk.pku.edu.cn:443\u002Flink\u002F74F68A12FFE4A12048604CFC37A54F9C)，\n[Shadow Hand Door Open Outward](https:\u002F\u002Fdisk.pku.edu.cn:443\u002Flink\u002FA696D1FF69D60AC2E3E033C4567C108E)。\n\n### 使用 rl_games 训练我们的任务\n\n**请注意，我们仅测试过 rl-games==1.5.2 版本，更高或更低版本可能会导致错误。**\n\n例如，若要使用 PPO 算法为 ShadowHandOver 任务训练策略，请在 `bidexhands` 文件夹中运行以下命令：\n\n```bash\npython train_rlgames.py --task=ShadowHandOver --algo=ppo\n```\n\n目前，rl_games 仅支持 PPO 和带有 LSTM 的 PPO 方法。若要使用带有 LSTM 的 PPO，请在 `bidexhands` 文件夹中运行以下命令：\n\n```bash\npython train_rlgames.py --task=ShadowHandOver --algo=ppo_lstm\n```\n\n使用 rl_games 生成的日志文件可在 `bidexhands\u002Fruns` 文件夹中找到。\n\n## 已知问题\n\n需要指出的是，Bi-DexHands 仍处于开发阶段，存在一些已知问题：\n- 部分环境在程序运行后期可能因 PhysX 的碰撞计算缺陷而报错：\n```\nRuntimeError: CUDA error: an illegal memory access was encountered\nCUDA kernel errors might be asynchronously reported at some other API call,so the stacktrace below might be incorrect.\n```\n- 尽管我们提供了实现，但并未对 **DDPG**、**TD3** 和 **MADDPG** 算法进行测试，这些算法可能仍存在 bug。\n\n## 未来计划\n- [x] 为所有任务定义成功指标\n- [ ] 添加工厂环境（参见 [此链接](https:\u002F\u002Fsites.google.com\u002Fnvidia.com\u002Ffactory)）\n- [x] 增加对默认 IsaacGymEnvs RL 库 [rl-games](https:\u002F\u002Fgithub.com\u002FDenys88\u002Frl_games) 的支持\n\n## 引用\n如果您认为本工作对您有所帮助，请按以下方式引用：\n```\n@inproceedings{\nchen2022towards,\ntitle={Towards Human-Level Bimanual Dexterous Manipulation with Reinforcement Learning},\nauthor={Yuanpei Chen and Yaodong Yang and Tianhao Wu and Shengjie Wang and Xidong Feng and Jiechuan Jiang and Zongqing Lu and Stephen Marcus McAleer and Hao Dong and Song-Chun Zhu},\nbooktitle={Thirty-sixth Conference on Neural Information Processing Systems Datasets and Benchmarks Track},\nyear={2022},\nurl={https:\u002F\u002Fopenreview.net\u002Fforum?id=D29JbExncTP}\n}\n```\n\n## 团队\nBi-DexHands 是由 
[Yuanpei Chen](https:\u002F\u002Fgithub.com\u002Fcypypccpy)、[Yaodong Yang](https:\u002F\u002Fwww.yangyaodong.com\u002F)、[Tianhao Wu](https:\u002F\u002Ftianhaowuhz.github.io\u002F)、[Shengjie Wang](https:\u002F\u002Fgithub.com\u002FShengjie-bob)、[Xidong Feng](https:\u002F\u002Fgithub.com\u002Fwaterhorse1)、[Jiechuan Jiang](https:\u002F\u002Fgithub.com\u002Fjiechuanjiang)、[Hao Dong](https:\u002F\u002Fzsdonghao.github.io)、[Zongqing Lu](https:\u002F\u002Fz0ngqing.github.io) 以及来自北京大学的 [Song-Chun Zhu](http:\u002F\u002Fwww.stat.ucla.edu\u002F~sczhu\u002F) 共同贡献的项目。如有合作意向，请联系 yaodong.yang@pku.edu.cn。\n\n我们还感谢以下两个开源仓库的贡献者：\n[Isaac Gym](https:\u002F\u002Fgithub.com\u002FNVIDIA-Omniverse\u002FIsaacGymEnvs)、[HATRPO](https:\u002F\u002Fgithub.com\u002Fcyanrain7\u002FTRPO-in-MARL)。\n\n此外，我们也建议用户阅读启发本工作的早期关于灵巧手操作的研究成果：[arxiv.org\u002Fabs\u002F2009.05104](https:\u002F\u002Farxiv.org\u002Fabs\u002F2009.05104)。\n\n## 许可证\nBi-DexHands 采用 Apache 许可证，详情请参阅 [LICENSE](LICENSE) 文件。","# Bi-DexHands 快速上手指南\n\nBi-DexHands 是一个基于 NVIDIA Isaac Gym 构建的双手机器人灵巧操作强化学习平台，支持大规模并行环境训练及多种单智能体\u002F多智能体强化学习算法。\n\n## 1. 环境准备\n\n在开始之前，请确保您的系统满足以下硬件和软件要求：\n\n*   **操作系统**: Ubuntu 18.04 或 20.04\n*   **Python 版本**: 3.7 或 3.8\n*   **GPU**: NVIDIA 显卡（推荐 RTX 3090 或更高以发挥并行优势）\n*   **NVIDIA 驱动**: 最低版本 `470.74`（由 Isaac Gym 决定）\n*   **Isaac Gym**: 必须安装 **Preview Release 3** 或 **Preview Release 4** 版本。\n    *   下载地址：[NVIDIA Isaac Gym](https:\u002F\u002Fdeveloper.nvidia.com\u002Fisaac-gym)\n    *   *验证安装*：安装完成后，运行 Isaac Gym 自带示例（如 `python\u002Fexamples\u002Fjoint_monkey.py`）确保其正常工作。\n*   **包管理工具**: 推荐使用 [Anaconda](https:\u002F\u002Fwww.anaconda.com\u002F) 创建虚拟环境。\n\n## 2. 安装步骤\n\n建议使用 Anaconda 创建独立的 Python 环境，然后从源码安装 Bi-DexHands。\n\n```bash\n# 1. 创建并激活虚拟环境 (示例使用 Python 3.8)\nconda create -n bidexhands python=3.8\nconda activate bidexhands\n\n# 2. 克隆仓库代码\ngit clone https:\u002F\u002Fgithub.com\u002FPKU-MARL\u002FDexterousHands.git\ncd DexterousHands\n\n# 3. 
以可编辑模式安装依赖包\npip install -e .\n```\n\n> **提示**：如果下载依赖较慢，可添加国内镜像源加速，例如：\n> `pip install -e . -i https:\u002F\u002Fpypi.tuna.tsinghua.edu.cn\u002Fsimple`\n\n## 3. 基本使用\n\n安装完成后，您可以立即开始训练任务。Bi-DexHands 支持通过命令行指定任务（`--task`）和算法（`--algo`）。\n\n### 训练示例\n\n进入项目根目录，运行以下命令开始训练：\n\n**示例 1：使用 PPO 算法训练“双手传递物体”任务**\n```bash\npython train.py --task=ShadowHandOver --algo=ppo\n```\n\n**示例 2：使用 HAPPO 算法（异构多智能体）训练同一任务**\n```bash\npython train.py --task=ShadowHandOver --algo=happo\n```\n\n### 支持的算法参数 (`--algo`)\n\n*   **单智能体 RL**: `ppo`, `trpo`, `td3`, `sac`, `ddpg`\n*   **多智能体 MARL**: `happo`, `hatrpo`, `mappo`, `ippo`, `maddpg`\n\n### 可用任务示例 (`--task`)\n\n平台内置了丰富的双手机器人操作任务，部分常用任务名称如下：\n\n| 任务名称 | 描述 |\n| :--- | :--- |\n| `ShadowHandOver` | 双手传递物体（一手交，一手接） |\n| `ShadowHandCatchUnderarm` | 腋下接物（手部可在限定区域移动\u002F旋转） |\n| `ShadowHandLiftUnderarm` | 双手协作提壶 |\n| `ShadowHandDoorOpenInward` | 向内开门 |\n| `ShadowHandBottleCap` | 一手持瓶，一手开盖 |\n| `ShadowHandOpenScissors` | 双手协作打开剪刀 |\n| `ShadowHandGraspAndPlace` | 抓取物体并放入桶中 |\n\n更多任务详情及状态\u002F动作空间定义，请参阅项目内的 `docs\u002Fenvironments.md` 文档。","某机器人实验室团队正致力于训练一双机械手协同完成复杂的“物体递接”与“精准放置”任务，以服务于柔性制造场景。\n\n### 没有 DexterousHands 时\n- **仿真效率极低**：传统单环境串行训练方式导致数据采样缓慢，在普通显卡上难以支撑大规模强化学习所需的亿级交互步数，模型收敛需数周时间。\n- **缺乏双手基准**：社区缺少专门针对双臂异构协作的标准测试环境，研究人员不得不自行搭建简陋场景，难以公平对比多智能体强化学习（MARL）算法效果。\n- **感知能力受限**：现有工具多仅支持关节角度等本体感知，缺乏深度相机点云输入支持，导致机器人无法像人类一样基于视觉进行精细的空间定位与抓取。\n- **泛化验证困难**：由于缺乏多样化的物体库（如 YCB 数据集），训练出的策略往往只能处理特定形状物体，一旦更换目标物品，机器人便无法适应。\n\n### 使用 DexterousHands 后\n- **千倍加速训练**：依托 Isaac Gym 并行架构，DexterousHands 能在单张 RTX 3090 上同时运行 2048 个环境，实现超 40,000 FPS 的仿真速度，将训练周期从数周缩短至数小时。\n- **开箱即用的基准**：直接提供涵盖递接、抛掷、放置等丰富任务的双手操作基准，支持主流 RL 算法一键调用，让团队能立即专注于算法优化而非环境搭建。\n- **视觉融合感知**：原生支持将深度图像转换为点云作为观测输入，使机械手能基于实时视觉反馈调整手指姿态，显著提升了非结构化环境下的操作成功率。\n- **海量物体泛化**：内置超过 2000 种来自 YCB 和 SAPIEN 数据集的物体，轻松验证元学习与多任务算法的泛化能力，确保机器人面对新物品时依然灵活可靠。\n\nDexterousHands 
通过极致的仿真效率与丰富的双手协作基准，彻底打破了复杂灵巧操作算法研发的效率瓶颈。","https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FPKU-MARL_DexterousHands_9149870c.jpg","PKU-MARL","https:\u002F\u002Foss.gittoolsai.com\u002Favatars\u002FPKU-MARL_95f1bd1d.png","RL Research Group, Institute for AI @ Peking University",null,"yaodong.yang@pku.edu.cn","https:\u002F\u002Fgithub.com\u002FPKU-MARL",[79,83,87,91],{"name":80,"color":81,"percentage":82},"Python","#3572A5",99.3,{"name":84,"color":85,"percentage":86},"HTML","#e34c26",0.4,{"name":88,"color":89,"percentage":90},"CMake","#DA3434",0.3,{"name":92,"color":93,"percentage":94},"Shell","#89e051",0,1001,117,"2026-04-03T09:13:52","Apache-2.0",4,"Linux (Ubuntu 18.04\u002F20.04)","必需 NVIDIA GPU，支持 Isaac Gym (最低驱动版本 470.74)，文中示例提及 RTX 3090 可达高性能","未说明",{"notes":104,"python":105,"dependencies":106},"该工具基于 NVIDIA Isaac Gym 构建，必须安装 Isaac Gym Preview Release 3 或 4 版本并验证其示例脚本（如 joint_monkey.py）能正常运行。建议使用 Anaconda 创建虚拟环境。支持在命令行或 Python 脚本中调用环境。","3.7, 3.8",[107,108,109],"Isaac Gym (Preview Release 3\u002F4)","Anaconda","rl_games (可选)",[14,111],"其他",[113,114,115],"deep-reinforcement-learning","dexterous-robotic-hand","reinforcement-learning","2026-03-27T02:49:30.150509","2026-04-09T01:38:55.939911",[119,124,129,134,139,144],{"id":120,"question_zh":121,"answer_zh":122,"source_url":123},25423,"使用 URDF 资产重置环境时出现物理模拟错误（物体飞起或穿模），如何解决？","这通常不是代码逻辑错误，而是物理引擎参数配置问题。尝试调整 `sim_params.physx` 的相关参数，特别是将速度迭代次数设为 0 并指定线程数。参考配置如下：\n```python\nsim_params.physx.solver_type = 1\nsim_params.physx.num_position_iterations = 8\nsim_params.physx.num_velocity_iterations = 0  # 关键修改\nsim_params.physx.rest_offset = 0.0\nsim_params.physx.contact_offset = 0.001\nsim_params.physx.friction_offset_threshold = 0.001\nsim_params.physx.friction_correlation_distance = 0.0005\nsim_params.physx.num_threads = 16  # 关键修改\nsim_params.physx.use_gpu = True\n```\n如果问题依旧，检查是否使用了特定的 URDF 文件（如 mobility.urdf），有时替换为 XML 
格式资产可临时规避问题。","https:\u002F\u002Fgithub.com\u002FPKU-MARL\u002FDexterousHands\u002Fissues\u002F13",{"id":125,"question_zh":126,"answer_zh":127,"source_url":128},25424,"如何更换项目中的机械手模型以使用自定义的机械手进行训练？","目前不支持仅通过修改配置文件名称来直接切换机械手模型。您需要手动修改代码以加载自定义的 URDF 文件，并相应地设置该机械手的参数（如关节限制、动作空间等）。虽然过程不复杂，但需要针对新模型调整代码逻辑，暂无一键替换的简单方法。","https:\u002F\u002Fgithub.com\u002FPKU-MARL\u002FDexterousHands\u002Fissues\u002F42",{"id":130,"question_zh":131,"answer_zh":132,"source_url":133},25425,"使用点云观测或相机 API 时返回 handle 为 -1 或渲染报错，怎么办？","该错误通常发生在无显示器（headless）的服务器环境中，Isaac Gym 在此类环境下渲染存在已知 Bug。\n1. 确认是否在无显示器服务器上运行，尝试不使用 `--headless` 参数看是否解决。\n2. 如果是服务器环境，建议前往 Isaac Gym 官方论坛查找相关渲染问题的解决方案。\n3. 确保相机初始化逻辑正确，但在某些 Isaac Gym 版本中，无头模式下的相机句柄获取确实会失败。","https:\u002F\u002Fgithub.com\u002FPKU-MARL\u002FDexterousHands\u002Fissues\u002F20",{"id":135,"question_zh":136,"answer_zh":137,"source_url":138},25426,"运行 train_rlgames.py 时出现与 rl-games 版本不兼容的错误，如何解决？","该项目目前仅测试支持 rl-games 的 1.5.2 版本。如果您安装了其他版本，会导致类加载失败。请执行以下命令卸载当前版本并安装指定版本：\n```bash\npip uninstall rl-games\npip install rl-games==1.5.2\n```\n安装完成后即可正常运行。","https:\u002F\u002Fgithub.com\u002FPKU-MARL\u002FDexterousHands\u002Fissues\u002F19",{"id":140,"question_zh":141,"answer_zh":142,"source_url":143},25427,"在 Docker 容器中运行训练任务结束时出现 'Segmentation fault (core dumped)' 错误，但模型已保存，这是为什么？","如果在控制台看到训练已完成（例如显示了平均奖励且模型权重 .pt 文件已正确保存到 logs 目录），随后的 'Segmentation fault' 通常发生在程序退出清理阶段，不影响已训练好的模型。这可能与 Docker 环境、GPU 驱动或 Isaac Gym 在容器内的资源释放机制有关。只要训练过程中的日志正常且模型文件完整，通常可以忽略此退出时的崩溃错误。","https:\u002F\u002Fgithub.com\u002FPKU-MARL\u002FDexterousHands\u002Fissues\u002F8",{"id":145,"question_zh":146,"answer_zh":147,"source_url":133},25428,"Isaac Gym 在无显示器服务器上运行时遇到渲染相关的问题有哪些常见表现？","常见表现包括相机句柄（camera_handle）返回 -1，或者在使用 `--headless` 参数时触发未知 Bug。这是因为 Isaac Gym 的渲染模块在无显示器环境下支持不完善。如果遇到此类问题，建议检查是否必须使用视觉观测，或者参考 NVIDIA 官方论坛中关于 Headless 模式渲染的特定补丁和讨论。",[]]
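上文「数据收集」一节提到，随机 / 中等 / 专家三类数据集各含 10^6 条样本，并通过 `merge.py` 生成中等-专家数据集。其核心操作可以用下面的示意代码说明。注意：这只是按 D4RL 风格数据集（以数组为值的字典）写的一个假设性草图，函数名 `merge_datasets` 为示例自拟，仓库中 `merge.py` 的实际实现可能不同：

```python
import numpy as np

def merge_datasets(medium: dict, expert: dict) -> dict:
    """将中等数据集与专家数据集沿样本维拼接为中等-专家数据集。

    两个数据集均为 D4RL 风格的字典（observations/actions/rewards 等键，
    值为首维等长的数组）。仅拼接两者共有的键。
    示意实现，非仓库中 merge.py 的实际代码。
    """
    shared_keys = medium.keys() & expert.keys()
    return {
        k: np.concatenate([medium[k], expert[k]], axis=0)
        for k in shared_keys
    }
```

按此思路，合并后数据集的样本数即为两个来源样本数之和（例如各 10^6 条时共 2×10^6 条）。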