[{"data":1,"prerenderedAt":-1},["ShallowReactive",2],{"similar-AndrejOrsula--drl_grasping":3,"tool-AndrejOrsula--drl_grasping":65},[4,23,32,40,49,57],{"id":5,"name":6,"github_repo":7,"description_zh":8,"stars":9,"difficulty_score":10,"last_commit_at":11,"category_tags":12,"status":22},2268,"ML-For-Beginners","microsoft\u002FML-For-Beginners","ML-For-Beginners 是由微软推出的一套系统化机器学习入门课程，旨在帮助零基础用户轻松掌握经典机器学习知识。这套课程将学习路径规划为 12 周，包含 26 节精炼课程和 52 道配套测验，内容涵盖从基础概念到实际应用的完整流程，有效解决了初学者面对庞大知识体系时无从下手、缺乏结构化指导的痛点。\n\n无论是希望转型的开发者、需要补充算法背景的研究人员，还是对人工智能充满好奇的普通爱好者，都能从中受益。课程不仅提供了清晰的理论讲解，还强调动手实践，让用户在循序渐进中建立扎实的技能基础。其独特的亮点在于强大的多语言支持，通过自动化机制提供了包括简体中文在内的 50 多种语言版本，极大地降低了全球不同背景用户的学习门槛。此外，项目采用开源协作模式，社区活跃且内容持续更新，确保学习者能获取前沿且准确的技术资讯。如果你正寻找一条清晰、友好且专业的机器学习入门之路，ML-For-Beginners 将是理想的起点。",85267,2,"2026-04-18T11:00:28",[13,14,15,16,17,18,19,20,21],"图像","数据工具","视频","插件","Agent","其他","语言模型","开发框架","音频","ready",{"id":24,"name":25,"github_repo":26,"description_zh":27,"stars":28,"difficulty_score":29,"last_commit_at":30,"category_tags":31,"status":22},5784,"funNLP","fighting41love\u002FfunNLP","funNLP 是一个专为中文自然语言处理（NLP）打造的超级资源库，被誉为\"NLP 民工的乐园”。它并非单一的软件工具，而是一个汇集了海量开源项目、数据集、预训练模型和实用代码的综合性平台。\n\n面对中文 NLP 领域资源分散、入门门槛高以及特定场景数据匮乏的痛点，funNLP 提供了“一站式”解决方案。这里不仅涵盖了分词、命名实体识别、情感分析、文本摘要等基础任务的标准工具，还独特地收录了丰富的垂直领域资源，如法律、医疗、金融行业的专用词库与数据集，甚至包含古诗词生成、歌词创作等趣味应用。其核心亮点在于极高的全面性与实用性，从基础的字典词典到前沿的 BERT、GPT-2 模型代码，再到高质量的标注数据和竞赛方案，应有尽有。\n\n无论是刚刚踏入 NLP 领域的学生、需要快速验证想法的算法工程师，还是从事人工智能研究的学者，都能在这里找到急需的“武器弹药”。对于开发者而言，它能大幅减少寻找数据和复现模型的时间；对于研究者，它提供了丰富的基准测试资源和前沿技术参考。funNLP 以开放共享的精神，极大地降低了中文自然语言处理的开发与研究成本，是中文 AI 社区不可或缺的宝藏仓库。",79857,1,"2026-04-08T20:11:31",[19,14,18],{"id":33,"name":34,"github_repo":35,"description_zh":36,"stars":37,"difficulty_score":29,"last_commit_at":38,"category_tags":39,"status":22},5773,"cs-video-courses","Developer-Y\u002Fcs-video-courses","cs-video-courses 是一个精心整理的计算机科学视频课程清单，旨在为自学者提供系统化的学习路径。它汇集了全球知名高校（如加州大学伯克利分校、新南威尔士大学等）的完整课程录像，涵盖从编程基础、数据结构与算法，到操作系统、分布式系统、数据库等核心领域，并深入延伸至人工智能、机器学习、量子计算及区块链等前沿方向。\n\n面对网络上零散且质量参差不齐的教学资源，cs-video-courses 解决了学习者难以找到成体系、高难度大学级别课程的痛点。该项目严格筛选内容，仅收录真正的大学层级课程，排除了碎片化的简短教程或商业广告，确保用户能接触到严谨的学术内容。\n\n这份清单特别适合希望夯实计算机基础的开发者、需要补充特定领域知识的研究人员，以及渴望像在校生一样系统学习计算机科学的自学者。其独特的技术亮点在于分类极其详尽，不仅包含传统的软件工程与网络安全，还细分了生成式 AI、大语言模型、计算生物学等新兴学科，并直接链接至官方视频播放列表，让用户能一站式获取高质量的教育资源，免费享受世界顶尖大学的课堂体验。",79792,"2026-04-08T22:03:59",[18,13,14,20],{"id":41,"name":42,"github_repo":43,"description_zh":44,"stars":45,"difficulty_score":46,"last_commit_at":47,"category_tags":48,"status":22},3128,"ragflow","infiniflow\u002Fragflow","RAGFlow 是一款领先的开源检索增强生成（RAG）引擎，旨在为大语言模型构建更精准、可靠的上下文层。它巧妙地将前沿的 RAG 技术与智能体（Agent）能力相结合，不仅支持从各类文档中高效提取知识，还能让模型基于这些知识进行逻辑推理和任务执行。\n\n在大模型应用中，幻觉问题和知识滞后是常见痛点。RAGFlow 通过深度解析复杂文档结构（如表格、图表及混合排版），显著提升了信息检索的准确度，从而有效减少模型“胡编乱造”的现象，确保回答既有据可依又具备时效性。其内置的智能体机制更进一步，使系统不仅能回答问题，还能自主规划步骤解决复杂问题。\n\n这款工具特别适合开发者、企业技术团队以及 AI 研究人员使用。无论是希望快速搭建私有知识库问答系统，还是致力于探索大模型在垂直领域落地的创新者，都能从中受益。RAGFlow 提供了可视化的工作流编排界面和灵活的 API 接口，既降低了非算法背景用户的上手门槛，也满足了专业开发者对系统深度定制的需求。作为基于 Apache 2.0 协议开源的项目，它正成为连接通用大模型与行业专有知识之间的重要桥梁。",77062,3,"2026-04-04T04:44:48",[17,13,20,19,18],{"id":50,"name":51,"github_repo":52,"description_zh":53,"stars":54,"difficulty_score":46,"last_commit_at":55,"category_tags":56,"status":22},519,"PaddleOCR","PaddlePaddle\u002FPaddleOCR","PaddleOCR 是一款基于百度飞桨框架开发的高性能开源光学字符识别工具包。它的核心能力是将图片、PDF 等文档中的文字提取出来，转换成计算机可读取的结构化数据，让机器真正“看懂”图文内容。\n\n面对海量纸质或电子文档，PaddleOCR 解决了人工录入效率低、数字化成本高的问题。尤其在人工智能领域，它扮演着连接图像与大型语言模型（LLM）的桥梁角色，能将视觉信息直接转化为文本输入，助力智能问答、文档分析等应用场景落地。\n\nPaddleOCR 适合开发者、算法研究人员以及有文档自动化需求的普通用户。其技术优势十分明显：不仅支持全球 100 多种语言的识别，还能在 Windows、Linux、macOS 等多个系统上运行，并灵活适配 CPU、GPU、NPU 
等各类硬件。作为一个轻量级且社区活跃的开源项目，PaddleOCR 既能满足快速集成的需求，也能支撑前沿的视觉语言研究，是处理文字识别任务的理想选择。",75940,"2026-04-19T21:42:30",[19,13,20,18],{"id":58,"name":59,"github_repo":60,"description_zh":61,"stars":62,"difficulty_score":29,"last_commit_at":63,"category_tags":64,"status":22},3215,"awesome-machine-learning","josephmisiti\u002Fawesome-machine-learning","awesome-machine-learning 是一份精心整理的机器学习资源清单，汇集了全球优秀的机器学习框架、库和软件工具。面对机器学习领域技术迭代快、资源分散且难以甄选的痛点，这份清单按编程语言（如 Python、C++、Go 等）和应用场景（如计算机视觉、自然语言处理、深度学习等）进行了系统化分类，帮助使用者快速定位高质量项目。\n\n它特别适合开发者、数据科学家及研究人员使用。无论是初学者寻找入门库，还是资深工程师对比不同语言的技术选型，都能从中获得极具价值的参考。此外，清单还延伸提供了免费书籍、在线课程、行业会议、技术博客及线下聚会等丰富资源，构建了从学习到实践的全链路支持体系。\n\n其独特亮点在于严格的维护标准：明确标记已停止维护或长期未更新的项目，确保推荐内容的时效性与可靠性。作为机器学习领域的“导航图”，awesome-machine-learning 以开源协作的方式持续更新，旨在降低技术探索门槛，让每一位从业者都能高效地站在巨人的肩膀上创新。",72149,"2026-04-03T21:50:24",[20,18],{"id":66,"github_repo":67,"name":68,"description_en":69,"description_zh":70,"ai_summary_zh":70,"readme_en":71,"readme_zh":72,"quickstart_zh":73,"use_case_zh":74,"hero_image_url":75,"owner_login":76,"owner_name":77,"owner_avatar_url":78,"owner_bio":79,"owner_company":80,"owner_location":81,"owner_email":82,"owner_twitter":83,"owner_website":84,"owner_url":85,"languages":86,"stars":103,"forks":104,"last_commit_at":105,"license":106,"difficulty_score":107,"env_os":108,"env_gpu":109,"env_ram":110,"env_deps":111,"category_tags":123,"github_topics":124,"view_count":10,"oss_zip_url":83,"oss_zip_packed_at":83,"status":22,"created_at":138,"updated_at":139,"faqs":140,"releases":173},10019,"AndrejOrsula\u002Fdrl_grasping","drl_grasping","Deep Reinforcement Learning for Robotic Grasping from Octrees","drl_grasping 是一个专注于利用深度强化学习提升机器人抓取能力的开源项目。它旨在解决机器人在面对未知物体、复杂地形或全新视角时，难以仅凭紧凑的 3D 观测数据实现稳定抓取的难题。通过引入八叉树（Octrees）作为核心感知输入，该项目让机器人能够高效理解三维空间结构，从而学习到鲁棒的抓取策略。\n\n该工具特别适合机器人领域的研究人员与开发者，尤其是那些致力于探索仿真到现实迁移（Sim-to-Real）、无模型强化学习算法以及机械臂控制的专业人士。其独特的技术亮点在于支持多种观测模式（如 RGB 图像、深度图及八叉树）的直接对比，并基于 ROS 2、Gazebo 和 MoveIt 2 构建了完整的训练与评估闭环。项目不仅集成了 TD3、SAC 等主流端到端算法，还成功验证了策略在零样本情况下从仿真环境迁移至真实机器人甚至类月面场景的能力，为复杂环境下的智能操作提供了强有力的实验平台。","# Deep Reinforcement Learning for Robotic Grasping from Octrees\n\nThis project focuses on applying deep reinforcement learning to acquire a robust policy that allows robots to grasp diverse objects from compact 3D observations in the form of octrees.\n\n\u003Cp align=\"center\" float=\"middle\">\n  \u003Ca href=\"https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=1-cudiW4eaU\">\n    \u003Cimg width=\"100.0%\" src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FAndrejOrsula_drl_grasping_readme_f3f541d51d9a.webp\"\u002F>\n  \u003C\u002Fa>\n  \u003Cem>Evaluation of a trained policy on novel scenes (previously unseen camera poses, objects, terrain textures, ...).\u003C\u002Fem>\n\u003C\u002Fp>\n\n\u003Cp align=\"center\" float=\"middle\">\n  \u003Ca href=\"https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=btxqzFOgCyQ\">\n    \u003Cimg width=\"100.0%\" src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FAndrejOrsula_drl_grasping_readme_b460b86a81aa.webp\"\u002F>\n  \u003C\u002Fa>\n  \u003Cem>Sim-to-Real transfer of a policy trained solely inside a simulation (zero-shot transfer). 
Credit: Aalborg University\u003C\u002Fem>\n\u003C\u002Fp>\n\n\u003Cp align=\"center\" float=\"middle\">\n  \u003Ca href=\"https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=FZSoOkK6VFc\">\n    \u003Cimg width=\"100.0%\" src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FAndrejOrsula_drl_grasping_readme_2886a9f288d4.webp\"\u002F>\n  \u003C\u002Fa>\n  \u003Cem>Evaluation of a trained policy for grasping rocks on the Moon inside a simulation.\u003C\u002Fem>\n\u003C\u002Fp>\n\n\u003Cp align=\"center\" float=\"middle\">\n  \u003Ca href=\"https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=FZSoOkK6VFc\">\n    \u003Cimg width=\"100.0%\" src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FAndrejOrsula_drl_grasping_readme_8e88a5a0a569.webp\"\u002F>\n  \u003C\u002Fa>\n  \u003Cem>Sim-to-Real transfer in a Moon-analogue facility (zero-shot transfer). Credit: University of Luxembourg\u003C\u002Fem>\n\u003C\u002Fp>\n\n## Overview\n\n\u003Cp align=\"center\">\n  \u003Ca href=\"https:\u002F\u002Fdocs.ros.org\u002Fen\u002Fgalactic\">\n    \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FMiddleware-ROS%202%20Galactic-38469E\"\u002F>\n  \u003C\u002Fa>\n  \u003Ca href=\"https:\u002F\u002Fgazebosim.org\">\n    \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FRobotics%20Simulator-Gazebo%20Fortress-F58113\"\u002F>\n  \u003C\u002Fa>\n  \u003Ca href=\"https:\u002F\u002Fmoveit.ros.org\">\n    \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FMotion%20Planning-MoveIt%202-0A58F7\"\u002F>\n  \u003C\u002Fa>\n  \u003Cbr>\n  \u003Ca href=\"https:\u002F\u002Fwww.gymlibrary.ml\">\n    \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FRL%20Environment%20API-OpenAI%20Gym-CBCBCC\"\u002F>\n  \u003C\u002Fa>\n  \u003Ca href=\"https:\u002F\u002Fstable-baselines3.readthedocs.io\">\n    \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FPrimary%20RL%20Framework-Stable--Baselines3-BDF25E\"\u002F>\n  \u003C\u002Fa>\n\u003C\u002Fp>\n\nThis repository contains multiple RL environments for robotic manipulation, focusing on robotic grasping using continuous actions in Cartesian space. All environments have several observation variants that enable direct comparison (RGB images, depth maps, octrees, ...). Each task is coupled with a simulation environment that can be used to train RL agents. These agents can subsequently be evaluated on real robots that integrate [ros2_control](https:\u002F\u002Fcontrol.ros.org) (or [ros_control](https:\u002F\u002Fwiki.ros.org\u002Fros_control) via [ros1_bridge](https:\u002F\u002Fgithub.com\u002Fros2\u002Fros1_bridge)).\n\nEnd-to-end model-free actor-critic algorithms have been tested on these environments ([TD3](https:\u002F\u002Farxiv.org\u002Fabs\u002F1802.09477), [SAC](https:\u002F\u002Farxiv.org\u002Fabs\u002F1801.01290) and [TQC](https:\u002F\u002Farxiv.org\u002Fabs\u002F2005.04269) | [SB3 PyTorch implementation](https:\u002F\u002Fgithub.com\u002FDLR-RM\u002Fstable-baselines3)). A setup for experimenting with model-based algorithm ([DreamerV2](https:\u002F\u002Farxiv.org\u002Fabs\u002F2010.02193) | [original TensorFlow implementation](https:\u002F\u002Fgithub.com\u002Fdanijar\u002Fdreamerv2)) is also provided, however, it is currently limited to RGB image observations. 
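
For orientation, here is what a random rollout looks like through the Gym API — a minimal sketch, assuming the classic `reset()`/`step()` signatures used in this project's own examples and that `import drl_grasping` performs the environment registration from [Gym registration](./drl_grasping/envs/__init__.py):

```python
# Minimal random-rollout sketch (classic Gym API, as used in this project's examples).
# Assumes `import drl_grasping` registers the Task-Obs-vX / Task-Obs-Gazebo-vX environments.
import gym

import drl_grasping  # side effect: registers the environments listed below

# "-Gazebo-" variants couple the task logic with the simulation environment
env = gym.make("Grasp-Octree-Gazebo-v0")

observation = env.reset()
done = False
while not done:
    action = env.action_space.sample()  # random continuous action in Cartesian space
    observation, reward, done, info = env.step(action)

env.close()
```
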
Interoperability of environments with most algorithms and their implementations should be possible due to compatibility with the [Gym](https:\u002F\u002Fwww.gymlibrary.ml) API.\n\n\u003Cdetails open>\u003Csummary>\u003Cb>List of Environments\u003C\u002Fb>\u003C\u002Fsummary>\n\nBelow is the list of implemented environments. Each environment (observation variant) has two alternatives, `Task-Obs-vX` and `Task-Obs-Gazebo-vX` (omitted from the table). Here, `Task-Obs-vX` implements the logic of the environment and can be used on real robots, whereas `Task-Obs-Gazebo-vX` combines this logic with the simulation environment inside Gazebo. Robots should be interchangeable for most parts, with some limitations (e.g. `GraspPlanetary` task requires a mobile manipulator to randomize the environment fully).\n\nIf you are interested in configuring these environments, first take a look at the list of their parameters inside [Gym registration](.\u002Fdrl_grasping\u002Fenvs\u002F__init__.py) and then at their individual source code.\n\n\u003Cdiv align=\"center\" class=\"tg-wrap\">\n\u003Ctable>\n\u003Cthead>\n  \u003Ctr align=\"center\" valign=\"bottom\">\n    \u003Cth>\n      \u003Ca href=\".\u002Fdrl_grasping\u002Fenvs\u002Ftasks\u002Freach\">\n        \u003Cimg width=\"100.0%\" src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FAndrejOrsula_drl_grasping_readme_d14c6021dd37.png\"\u002F>\n      \u003C\u002Fa>\n      \u003Cem>Reach the end-effector goal.\u003C\u002Fem>\n    \u003C\u002Fth>\n    \u003Cth>\n      \u003Ca href=\".\u002Fdrl_grasping\u002Fenvs\u002Ftasks\u002Fgrasp\">\n        \u003Cimg width=\"100.0%\" src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FAndrejOrsula_drl_grasping_readme_abb1234bb0bc.png\"\u002F>\n      \u003C\u002Fa>\n      \u003Cem>Grasp and lift a random object.\u003C\u002Fem>\n    \u003C\u002Fth>\n    \u003Cth>\n      \u003Ca href=\".\u002Fdrl_grasping\u002Fenvs\u002Ftasks\u002Fgrasp_planetary\">\n        \u003Cimg width=\"100.0%\" src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FAndrejOrsula_drl_grasping_readme_517c77c4a1ce.png\"\u002F>\n      \u003C\u002Fa>\n      \u003Cem>Grasp and lift a Moon rock.\u003C\u002Fem>\n    \u003C\u002Fth>\n  \u003C\u002Ftr>\n\u003C\u002Fthead>\n\u003Ctbody>\n  \u003Ctr>\n    \u003Ctd>Reach-v0 (state obs)\u003C\u002Ftd>\n    \u003Ctd>Grasp-v0 (state obs)\u003C\u002Ftd>\n    \u003Ctd>GraspPlanetary-v0 (state obs)\u003C\u002Ftd>\n  \u003C\u002Ftr>\n  \u003Ctr>\n    \u003Ctd align=\"center\">—\u003C\u002Ftd>\n    \u003Ctd align=\"center\">—\u003C\u002Ftd>\n    \u003Ctd>GraspPlanetary-MonoImage-v0\u003C\u002Ftd>\n  \u003C\u002Ftr>\n  \u003Ctr>\n    \u003Ctd>Reach-ColorImage-v0\u003C\u002Ftd>\n    \u003Ctd align=\"center\">—\u003C\u002Ftd>\n    \u003Ctd>GraspPlanetary-ColorImage-v0\u003C\u002Ftd>\n  \u003C\u002Ftr>\n  \u003Ctr>\n    \u003Ctd>Reach-DepthImage-v0\u003C\u002Ftd>\n    \u003Ctd align=\"center\">—\u003C\u002Ftd>\n    \u003Ctd>GraspPlanetary-DepthImage-v0\u003C\u002Ftd>\n  \u003C\u002Ftr>\n  \u003Ctr>\n    \u003Ctd align=\"center\">—\u003C\u002Ftd>\n    \u003Ctd align=\"center\">—\u003C\u002Ftd>\n    \u003Ctd>GraspPlanetary-DepthImageWithIntensity-v0\u003C\u002Ftd>\n  \u003C\u002Ftr>\n  \u003Ctr>\n    \u003Ctd align=\"center\">—\u003C\u002Ftd>\n    \u003Ctd align=\"center\">—\u003C\u002Ftd>\n    \u003Ctd>GraspPlanetary-DepthImageWithColor-v0\u003C\u002Ftd>\n  \u003C\u002Ftr>\n  \u003Ctr>\n    \u003Ctd>Reach-Octree-v0\u003C\u002Ftd>\n    \u003Ctd>Grasp-Octree-v0\u003C\u002Ftd>\n    
\u003Ctd>GraspPlanetary-Octree-v0\u003C\u002Ftd>\n  \u003C\u002Ftr>\n  \u003Ctr>\n    \u003Ctd>Reach-OctreeWithIntensity-v0\u003C\u002Ftd>\n    \u003Ctd>Grasp-OctreeWithIntensity-v0\u003C\u002Ftd>\n    \u003Ctd>GraspPlanetary-OctreeWithIntensity-v0\u003C\u002Ftd>\n  \u003C\u002Ftr>\n  \u003Ctr>\n    \u003Ctd>Reach-OctreeWithColor-v0\u003C\u002Ftd>\n    \u003Ctd>Grasp-OctreeWithColor-v0\u003C\u002Ftd>\n    \u003Ctd>GraspPlanetary-OctreeWithColor-v0\u003C\u002Ftd>\n  \u003C\u002Ftr>\n\u003C\u002Ftbody>\n\u003C\u002Ftable>\n\u003C\u002Fdiv>\n\nBy default, `Grasp` and `GraspPlanetary` tasks utilize [`GraspCurriculum`](.\u002Fdrl_grasping\u002Fenvs\u002Ftasks\u002Fcurriculums\u002Fgrasp.py) that shapes their reward function and environment difficulty.\n\n\u003C\u002Fdetails>\n\n\u003Cdetails>\u003Csummary>\u003Cb>Domain Randomization\u003C\u002Fb>\u003C\u002Fsummary>\n\nTo facilitate the sim-to-real transfer of trained agents, simulation environments introduce domain randomization with the aim of improving the generalization of learned policies. This randomization is accomplished via [`ManipulationGazeboEnvRandomizer`](.\u002Fdrl_grasping\u002Fenvs\u002Frandomizers\u002Fmanipulation.py) that populates the virtual world and enables randomizing of several properties at each reset of the environment. As this randomizer is configurable with numerous parameters, please take a look at the source code to see what environments you can create.\n\n\u003Cp align=\"center\" float=\"middle\">\n  \u003Ca href=\".\u002Fdrl_grasping\u002Fenvs\u002Frandomizers\u002Fmanipulation.py\">\n    \u003Cimg width=\"100.0%\" src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FAndrejOrsula_drl_grasping_readme_4c7972d0c01d.png\"\u002F>\n  \u003C\u002Fa>\n  \u003Cem>Examples of domain randomization for the \u003Ccode>Grasp\u003C\u002Fcode> task.\u003C\u002Fem>\n\u003C\u002Fp>\n\n\u003Cp align=\"center\" float=\"middle\">\n  \u003Ca href=\".\u002Fdrl_grasping\u002Fenvs\u002Frandomizers\u002Fmanipulation.py\">\n    \u003Cimg width=\"100.0%\" src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FAndrejOrsula_drl_grasping_readme_dc716b022172.png\"\u002F>\n  \u003C\u002Fa>\n  \u003Cem>Examples of domain randomization for the \u003Ccode>GraspPlanetary\u003C\u002Fcode> task.\u003C\u002Fem>\n\u003C\u002Fp>\n\n#### Model Datasets\n\nSimulation environments in this repository can utilize datasets of any [SDF](http:\u002F\u002Fsdformat.org) models, e.g. models from [Fuel](https:\u002F\u002Fapp.gazebosim.org). By default, the `Grasp` task uses [Google Scanned Objects collection](https:\u002F\u002Fapp.gazebosim.org\u002FGoogleResearch\u002Ffuel\u002Fcollections\u002FScanned%20Objects%20by%20Google%20Research) together with a set of PBR textures pointed to by `TEXTURE_DIRS` environment variable. On the contrary, the `GraspPlanetary` task employs custom models that are procedurally generated via [Blender](https:\u002F\u002Fblender.org). However, this can be adjusted if desired.\n\nAll external models can be automatically configured and randomized in several ways via [`ModelCollectionRandomizer`](.\u002Fdrl_grasping\u002Fenvs\u002Fmodels\u002Futils\u002Fmodel_collection_randomizer.py) before their insertion into the world, e.g. optimization of collision geometry, estimation of (randomized) inertial properties and randomization of parameters such as geometry scale or surface friction. 
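
As a conceptual illustration only — this is *not* the `ModelCollectionRandomizer` API — the kind of pre-insertion randomization described above (geometry scale, surface friction) can be pictured as follows, assuming a typical SDF layout and nothing beyond the Python standard library:

```python
# Conceptual sketch of randomizing an SDF model before insertion into the world.
# Element paths assume a typical SDF layout; this does not mirror the project's code.
import random
import xml.etree.ElementTree as ET

SDF = """
<sdf version="1.7">
  <model name="object">
    <link name="link">
      <collision name="collision">
        <geometry><mesh><uri>model.obj</uri><scale>1 1 1</scale></mesh></geometry>
        <surface><friction><ode><mu>1.0</mu></ode></friction></surface>
      </collision>
    </link>
  </model>
</sdf>
"""

root = ET.fromstring(SDF)
scale = random.uniform(0.8, 1.2)  # randomize geometry scale uniformly
root.find(".//mesh/scale").text = f"{scale} {scale} {scale}"
root.find(".//friction/ode/mu").text = str(random.uniform(0.5, 1.5))  # surface friction
print(ET.tostring(root, encoding="unicode"))
```
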
When processing large collections, model filtering can also be enabled based on several aspects, such as the complexity of the geometry or the existence of disconnected components. A few scripts for managing datasets can be found under [scripts\u002Futils\u002F](.\u002Fscripts\u002Futils\u002F) directory.\n\n\u003C\u002Fdetails>\n\n\u003Cdetails>\u003Csummary>\u003Cb>End-to-End Learning from 3D Octree Observations\u003C\u002Fb>\u003C\u002Fsummary>\n\nThis project initially investigated how 3D visual observations can be leveraged to improve end-to-end learning of manipulation skills. Octrees were selected for this purpose due to their efficiently organized structure compared to other 3D representations.\n\nTo enable the extraction of abstract features from 3D octree observations, an octree-based 3D CNN is employed. The network module that accomplishes such feature extraction is implemented in the form of [`OctreeCnnFeaturesExtractor`](.\u002Fdrl_grasping\u002Fdrl_octree\u002Ffeatures_extractor\u002Foctree_cnn.py) (PyTorch). This features extractor is part of the `OctreeCnnPolicy` policy implemented for TD3, SAC and TQC algorithms. Internally, the feature extractor utilizes [O-CNN](https:\u002F\u002Fgithub.com\u002Fmicrosoft\u002FO-CNN) implementation to benefit from hardware acceleration on NVIDIA GPUs.\n\n\u003Cp align=\"center\" float=\"middle\">\n  \u003Ca href=\".\u002Fdrl_grasping\u002Fdrl_octree\u002Ffeatures_extractor\u002Foctree_cnn.py\">\n    \u003Cimg width=\"100.0%\" src=\"https:\u002F\u002Fuser-images.githubusercontent.com\u002F22929099\u002F176558147-600646ce-ff9c-4660-8300-532acb6df0e4.svg\"\u002F>\n  \u003C\u002Fa>\n  \u003Cem>Illustration of the end-to-end actor-critic network architecture with octree-based 3D CNN feature extractor.\u003C\u002Fem>\n\u003C\u002Fp>\n\n\u003C\u002Fdetails>\n\n\u003Cdetails>\u003Csummary>\u003Cb>Limitations\u003C\u002Fb>\u003C\u002Fsummary>\n\nThe known limitations of this repository are listed below for your convenience.\n\n- **No parallel environments –** It is currently not possible to run multiple instances of the environment simultaneously.\n- **Slow training –** The simulation environments are computationally complex (physics, rendering, underlying low-level control, ...). This significantly impacts the ability to train agents with time and computational constraints. The performance of some of these aspects can be improved at the cost of accuracy and realism (e.g. `physics_rate`\u002F`step_size`).\n- **Suboptimal hyperparameters –** Although a hyperparameter optimization framework was employed for some combinations of environments and algorithms, it is a prolonged process. This problem is exacerbated by the vast quantity of hyperparameters and their general brittleness. Therefore, the default hyperparameters provided in this repository might not be optimal.\n- **Nondeterministic –** Experiments are not fully repeatable, and even the same seed of the pseudorandom generator can lead to different results. This is caused by several aspects, such as the nondeterministic nature of network-based communication and non-determinism in the underlying deep learning frameworks and hardware.\n\n\u003C\u002Fdetails>\n\n## Instructions\n\nSetup-wise, there are two options when using this repository. **Option A – Docker** is recommended when trying this repository due to its simplicity. Otherwise, **Option B – Local Installation** can be used if a local setup is preferred. 
Both of these options are equal for the usage of this repository; however, pre-built Docker images come with all the required datasets while enabling isolation of runs.\n\n\u003Cdetails>\u003Csummary>\u003Cb>Option A – Docker\u003C\u002Fb>\u003C\u002Fsummary>\n\n### Hardware Requirements\n\n- **CUDA GPU –** CUDA-enabled GPU is required for hardware-accelerated processing of octree observations. Everything else should also be functional on the CPU.\n\n### Install Docker\n\nFirst, ensure your system has a setup for using Docker with NVIDIA GPUs. You can follow [`install_docker_with_nvidia.bash`](.\u002F.docker\u002Fhost\u002Finstall_docker_with_nvidia.bash) installation script for Debian-based distributions. Alternatively, consult the [NVIDIA Container Toolkit Installation Guide](https:\u002F\u002Fdocs.nvidia.com\u002Fdatacenter\u002Fcloud-native\u002Fcontainer-toolkit\u002Finstall-guide.html) for other Linux distributions.\n\n```bash\n# Execute script inside a cloned repository\n.docker\u002Fhost\u002Finstall_docker_with_nvidia.bash\n# (Alternative) Execute script from URL\nbash -c \"$(wget -qO - https:\u002F\u002Fraw.githubusercontent.com\u002FAndrejOrsula\u002Fdrl_grasping\u002Fmaster\u002F.docker\u002Fhost\u002Finstall_docker_with_nvidia.bash)\"\n```\n\n### Clone a Prebuilt Docker Image\n\nPrebuilt Docker images of `drl_grasping` can be pulled directly from [Docker Hub](https:\u002F\u002Fhub.docker.com\u002Frepository\u002Fdocker\u002Fandrejorsula\u002Fdrl_grasping) without needing to build them locally. You can use the following command to manually pull the latest image or one of the previous tagged [Releases](https:\u002F\u002Fgithub.com\u002FAndrejOrsula\u002Fdrl_grasping\u002Freleases). The average size of images is 25GB (including datasets).\n\n```bash\ndocker pull andrejorsula\u002Fdrl_grasping:${TAG:-latest}\n```\n\n### (Optional) Build a New Image\n\nIt is also possible to build the Docker image locally using the included [Dockerfile](.\u002FDockerfile). To do this, [`build.bash`](.\u002F.docker\u002Fbuild.bash) script can be executed as shown below (arguments are optional). This script will always print the corresponding low-level `docker build ...` command for your reference.\n\n```bash\n.docker\u002Fbuild.bash ${TAG:-latest} ${BUILD_ARGS}\n```\n\n### Run a Docker Container\n\nFor simplicity, please run `drl_grasping` Docker containers using the included [`run.bash`](.\u002F.docker\u002Frun.bash) script shown below (arguments are optional). It enables NVIDIA GPUs and GUI interface while automatically mounting the necessary volumes (e.g. persistent logging) and setting environment variables (e.g. synchronization of middleware communication with the host). This script will always print the corresponding low-level `docker run ...` command for your reference.\n\n```bash\n# Execute script inside a cloned repository\n.docker\u002Frun.bash ${TAG:-latest} ${CMD}\n# (Alternative) Execute script from URL\nbash -c \"$(wget -qO - https:\u002F\u002Fraw.githubusercontent.com\u002FAndrejOrsula\u002Fdrl_grasping\u002Fmaster\u002F.docker\u002Frun.bash)\" -- ${TAG:-latest} ${CMD}\n```\n\nThe network communication of `drl_grasping` within this Docker container is configured based on the ROS 2 [`ROS_DOMAIN_ID`](https:\u002F\u002Fdocs.ros.org\u002Fen\u002Fgalactic\u002FConcepts\u002FAbout-Domain-ID.html) environment variable, which can be set via `ROS_DOMAIN_ID={0...101} .docker\u002Frun.bash ${TAG:-latest} ${CMD}`. 
By default (`ROS_DOMAIN_ID=0`), external communication is restricted and multicast is disabled. With `ROS_DOMAIN_ID=42`, the communication remains restricted to `localhost` with multicast enabled, enabling monitoring of communication outside the container but within the same system. Using `ROS_DOMAIN_ID=69` will use the default network interface and multicast settings, which can enable monitoring of communication within the same LAN. All other `ROS_DOMAIN_ID`s share the default behaviour and can be employed to enable communication partitioning for running of multiple `drl_grasping` instances.\n\n\u003C\u002Fdetails>\n\n\u003Cdetails>\u003Csummary>\u003Cb>Option B – Local Installation\u003C\u002Fb>\u003C\u002Fsummary>\n\n### Hardware Requirements\n\n- **CUDA GPU –** CUDA-enabled GPU is required for hardware-accelerated processing of octree observations. Everything else should also be functional on the CPU.\n\n### Dependencies\n\n> Ubuntu 20.04 (Focal Fossa) is the recommended OS for local installation. Other Linux distributions might work but require most dependencies to be built from the source.\n\nThese are the primary dependencies required to use this project that must be installed on your system.\n\n- [Python 3.8](https:\u002F\u002Fpython.org\u002Fdownloads)\n- ROS 2 [Galactic](https:\u002F\u002Fdocs.ros.org\u002Fen\u002Fgalactic\u002FInstallation.html)\n- Gazebo [Fortress](https:\u002F\u002Fgazebosim.org\u002Fdocs\u002Ffortress)\n- [Gym-Ignition](https:\u002F\u002Fgithub.com\u002Frobotology\u002Fgym-ignition)\n  - Please use [AndrejOrsula\u002Fgym-ignition](https:\u002F\u002Fgithub.com\u002FAndrejOrsula\u002Fgym-ignition) fork in order to ensure compatibility (default branch – [`drl_grasping`](https:\u002F\u002Fgithub.com\u002FAndrejOrsula\u002Fgym-ignition\u002Ftree\u002Fdrl_grasping)).\n- [O-CNN](https:\u002F\u002Fgithub.com\u002Fmicrosoft\u002FO-CNN)\n  - Please use [AndrejOrsula\u002FO-CNN](https:\u002F\u002Fgithub.com\u002FAndrejOrsula\u002FO-CNN) fork in order to ensure compatibility (default branch – [`master`](https:\u002F\u002Fgithub.com\u002FAndrejOrsula\u002FO-CNN\u002Ftree\u002Fmaster)).\n\nAll additional dependencies are either pulled via [vcstool](https:\u002F\u002Fwiki.ros.org\u002Fvcstool) ([drl_grasping.repos](.\u002Fdrl_grasping.repos)) or installed via [pip](https:\u002F\u002Fpip.pypa.io\u002Fen\u002Fstable\u002Finstallation) ([python_requirements.txt](.\u002Fpython_requirements.txt)) and [rosdep](https:\u002F\u002Fwiki.ros.org\u002Frosdep) during the building process below.\n\n### Building\n\nClone this repository recursively and import VCS dependencies. 
Then install dependencies and build with [colcon](https:\u002F\u002Fcolcon.readthedocs.io).\n\n```bash\n# Clone this repository into your favourite ROS 2 workspace\ngit clone --recursive https:\u002F\u002Fgithub.com\u002FAndrejOrsula\u002Fdrl_grasping.git\n# Install Python requirements\npip3 install -r drl_grasping\u002Fpython_requirements.txt\n# Import dependencies\nvcs import \u003C drl_grasping\u002Fdrl_grasping.repos\n# Install dependencies\nIGNITION_VERSION=fortress rosdep install -y -r -i --rosdistro ${ROS_DISTRO} --from-paths .\n# Build\ncolcon build --merge-install --symlink-install --cmake-args \"-DCMAKE_BUILD_TYPE=Release\"\n```\n\n### Sourcing\n\nBefore utilizing this project via local installation, remember to source the ROS 2 workspace.\n\n```bash\nsource install\u002Flocal_setup.bash\n```\n\nThis enables:\n\n- Use of `drl_grasping` Python module\n- Execution of binaries, scripts and examples via `ros2 run drl_grasping \u003Cexecutable>`\n- Launching of setup scripts via `ros2 launch drl_grasping \u003Claunch_script>`\n- Discoverability of shared resources\n\n\u003C\u002Fdetails>\n\n\u003Cdetails>\u003Csummary>\u003Cb>Test Random Agents\u003C\u002Fb>\u003C\u002Fsummary>\n\nA good starting point is to simulate some episodes using random agents where actions are sampled from the defined action space. This is also useful when modifying environments because it lets you analyze the consequences of actions and resulting observations without deep learning pipelines running in the background. To get started, run the following example. It should open RViz 2 and Gazebo client instances that provide you with visual feedback.\n\n```bash\nros2 run drl_grasping ex_random_agent.bash\n```\n\nAfter running the example script, the underlying `ros2 launch drl_grasping random_agent.launch.py ...` command with all arguments will always be printed for your reference (example shown below). If desired, you can launch this command directly with custom arguments.\n\n```bash\nros2 launch drl_grasping random_agent.launch.py seed:=42 robot_model:=lunalab_summit_xl_gen env:=GraspPlanetary-Octree-Gazebo-v0 check_env:=false render:=true enable_rviz:=true log_level:=warn\n```\n\n\u003C\u002Fdetails>\n\n\u003C!-- \u003Cdetails>\u003Csummary>\u003Cb>[WIP] Try Pre-trained Agents\u003C\u002Fb>\u003C\u002Fsummary>\n\n**Note:** Submodule `pretrained_agents` is currently incompatible with `drl_grasping` version `2.0.0`. Previously released versions using the Docker setup are functional if you want to test this feature.\n\nSubmodule [pretrained_agents](https:\u002F\u002Fgithub.com\u002FAndrejOrsula\u002Fdrl_grasping_pretrained_agents) contains a selection of agents that are already trained and ready. To try them out, run the following example. It should open RViz 2 and Gazebo client instances that provide you with visual feedback, while the agent's performance will be logged and printed to `STDOUT`.\n\n```bash\nros2 run drl_grasping ex_evaluate_pretrained_agent.bash\n```\n\nAfter running the example script, the underlying `ros2 launch drl_grasping evaluate.launch.py ...` command with all arguments will always be printed for your reference (example shown below). If desired, you can launch this command directly with custom arguments. 
For example, you can select what agent to try according to the support matrix from [AndrejOrsula\u002Fdrl_grasping_pretrained_agents](.\u002Fpretrained_agents\u002FREADME.md).\n\n```bash\nros2 launch drl_grasping evaluate.launch.py seed:=77 robot_model:=panda env:=Grasp-Octree-Gazebo-v0 algo:=tqc log_folder:=\u002Froot\u002Fws\u002Finstall\u002Fshare\u002Fdrl_grasping\u002Fpretrained_agents reward_log:=\u002Froot\u002Fdrl_grasping_training\u002Fevaluate\u002FGrasp-Octree-Gazebo-v0 stochastic:=false n_episodes:=200 load_best:=false enable_rviz:=true log_level:=error\n```\n\n\u003C\u002Fdetails> -->\n\n\u003Cdetails>\u003Csummary>\u003Cb>Train New Agents\u003C\u002Fb>\u003C\u002Fsummary>\n\nYou can also train your agents from scratch. To begin the training, run the following example. By default, headless mode is used during the training to reduce computational load.\n\n```bash\nros2 run drl_grasping ex_train.bash\n```\n\nAfter running the example script, the underlying `ros2 launch drl_grasping train.launch.py ...` command with all arguments will always be printed for your reference (example shown below). If desired, you can launch this command directly with custom arguments.\n\n```bash\nros2 launch drl_grasping train.launch.py seed:=42 robot_model:=panda env:=Grasp-OctreeWithColor-Gazebo-v0 algo:=tqc log_folder:=\u002Froot\u002Fdrl_grasping_training\u002Ftrain\u002FGrasp-OctreeWithColor-Gazebo-v0\u002Flogs tensorboard_log:=\u002Froot\u002Fdrl_grasping_training\u002Ftrain\u002FGrasp-OctreeWithColor-Gazebo-v0\u002Ftensorboard_logs save_freq:=10000 save_replay_buffer:=true log_interval:=-1 eval_freq:=10000 eval_episodes:=20 enable_rviz:=false log_level:=fatal\n```\n\n#### Remote Visualization\n\nTo visualize the agent while training, separate RViz 2 and Gazebo client instances can be opened. For the Docker setup, these commands can be executed in a new `drl_grasping` container with the same `ROS_DOMAIN_ID`.\n\n```bash\n# RViz 2 (Note: Visualization of robot model will not be loaded using this approach)\nrviz2 -d $(ros2 pkg prefix --share drl_grasping)\u002Frviz\u002Fdrl_grasping.rviz\n# Gazebo client\nign gazebo -g\n```\n\n#### TensorBoard\n\nTensorBoard logs will be generated during training in a directory specified by the `tensorboard_log:=${TENSORBOARD_LOG}` argument. You can open them in your web browser using the following command.\n\n```bash\ntensorboard --logdir ${TENSORBOARD_LOG}\n```\n\n#### (Experimental) Train with Dreamer V2\n\nYou can also try to train some agents using the model-based Dreamer V2 algorithm. To begin the training, run the following example. By default, headless mode is used during the training to reduce computational load.\n\n```bash\nros2 run drl_grasping ex_train_dreamerv2.bash\n```\n\nAfter running the example script, the underlying `ros2 launch drl_grasping train_dreamerv2.launch.py ...` command with all arguments will always be printed for your reference (example shown below). If desired, you can launch this command directly with custom arguments.\n\n```bash\nros2 launch drl_grasping train_dreamerv2.launch.py seed:=42 robot_model:=lunalab_summit_xl_gen env:=GraspPlanetary-ColorImage-Gazebo-v0 log_folder:=\u002Froot\u002Fdrl_grasping_training\u002Ftrain\u002FGraspPlanetary-ColorImage-Gazebo-v0\u002Flogs eval_freq:=10000 enable_rviz:=false log_level:=fatal\n```\n\n\u003C\u002Fdetails>\n\n\u003Cdetails>\u003Csummary>\u003Cb>Evaluate New Agents\u003C\u002Fb>\u003C\u002Fsummary>\n\nOnce you train your agents, you can evaluate them. 
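
Under the hood, evaluation amounts to deterministic rollouts of the loaded Stable-Baselines3 policy. A minimal sketch of that mechanic, assuming an agent saved as `sac_grasp_octree` (a hypothetical checkpoint name) from a SAC training run; the launch workflow described next is the supported path:

```python
# Hedged sketch of evaluation at the Stable-Baselines3 level: deterministic
# rollouts of a loaded policy. "sac_grasp_octree" is a hypothetical checkpoint.
import gym

import drl_grasping  # registers the environments
from stable_baselines3 import SAC

env = gym.make("Grasp-Octree-Gazebo-v0")
model = SAC.load("sac_grasp_octree")

episode_returns = []
for _ in range(10):
    obs, done, ep_return = env.reset(), False, 0.0
    while not done:
        # deterministic=True corresponds to the stochastic:=false launch argument
        action, _ = model.predict(obs, deterministic=True)
        obs, reward, done, info = env.step(action)
        ep_return += reward
    episode_returns.append(ep_return)

print(f"Mean return: {sum(episode_returns) / len(episode_returns):.2f}")
env.close()
```
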
Start by looking at [ex_evaluate.bash](.\u002Fexamples\u002Fex_evaluate.bash), which can be modified to fit your trained agent. It should open RViz 2 and Gazebo client instances that provide you with visual feedback, while the agent's performance will be logged and printed to `STDOUT`.\n\n```bash\nros2 run drl_grasping ex_evaluate.bash\n```\n\nAfter running the example script, the underlying `ros2 launch drl_grasping evaluate.launch.py ...` command with all arguments will always be printed for your reference (example shown below). If desired, you can launch this command directly with custom arguments. For example, you can select a specific checkpoint with the `load_checkpoint:=${LOAD_CHECKPOINT}` argument instead of running the final model.\n\n```bash\nros2 launch drl_grasping evaluate.launch.py seed:=77 robot_model:=panda env:=Grasp-Octree-Gazebo-v0 algo:=tqc log_folder:=\u002Froot\u002Fdrl_grasping_training\u002Ftrain\u002FGrasp-Octree-Gazebo-v0\u002Flogs reward_log:=\u002Froot\u002Fdrl_grasping_training\u002Fevaluate\u002FGrasp-Octree-Gazebo-v0 stochastic:=false n_episodes:=200 load_best:=false enable_rviz:=true log_level:=warn\n```\n\n\u003C\u002Fdetails>\n\n\u003Cdetails>\u003Csummary>\u003Cb>Optimize Hyperparameters\u003C\u002Fb>\u003C\u002Fsummary>\n\nThe default hyperparameters for training agents with TD3, SAC and TQC can be found under the [hyperparams](.\u002Fhyperparams) directory. [Optuna](https:\u002F\u002Foptuna.org) can be employed to autotune some of these parameters. To get started, run the following example. By default, headless mode is used during hyperparameter optimization to reduce computational load.\n\n```bash\nros2 run drl_grasping ex_optimize.bash\n```\n\nAfter running the example script, the underlying `ros2 launch drl_grasping train.launch.py ...` command with all arguments will always be printed for your reference (example shown below). 
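
For readers new to Optuna, here is a self-contained sketch of the primitives that launch arguments such as `sampler:=tpe`, `pruner:=median`, `n_trials:=20` and `n_startup_trials:=5` map onto; the objective below is a stand-in, not the project's actual training loop:

```python
# Standalone sketch of the Optuna mechanics behind the launch arguments
# (sampler:=tpe -> TPESampler, pruner:=median -> MedianPruner).
# The objective is a synthetic stand-in for a full training run.
import optuna


def objective(trial: optuna.Trial) -> float:
    # Sampled hyperparameters (names are illustrative, not the project's search space)
    learning_rate = trial.suggest_float("learning_rate", 1e-5, 1e-3, log=True)
    gamma = trial.suggest_float("gamma", 0.95, 0.999)

    value = 0.0
    for evaluation in range(4):  # mirrors n_evaluations:=4 intermediate evaluations
        value += learning_rate * 1e4 * gamma  # stand-in for mean evaluation reward
        trial.report(value, step=evaluation)
        if trial.should_prune():  # MedianPruner stops unpromising trials early
            raise optuna.TrialPruned()
    return value


study = optuna.create_study(
    direction="maximize",
    sampler=optuna.samplers.TPESampler(n_startup_trials=5),
    pruner=optuna.pruners.MedianPruner(n_startup_trials=5),
)
study.optimize(objective, n_trials=20)
print(study.best_params)
```
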
If desired, you can launch this command directly with custom arguments.\n\n```bash\nros2 launch drl_grasping optimize.launch.py seed:=69 robot_model:=panda env:=Grasp-Octree-Gazebo-v0 algo:=tqc log_folder:=\u002Froot\u002Fdrl_grasping_training\u002Foptimize\u002FGrasp-Octree-Gazebo-v0\u002Flogs tensorboard_log:=\u002Froot\u002Fdrl_grasping_training\u002Foptimize\u002FGrasp-Octree-Gazebo-v0\u002Ftensorboard_logs n_timesteps:=1000000 sampler:=tpe pruner:=median n_trials:=20 n_startup_trials:=5 n_evaluations:=4 eval_episodes:=20 log_interval:=-1 enable_rviz:=true log_level:=fatal\n```\n\n\u003C\u002Fdetails>\n\n## Citation\n\nPlease use the following citation if you use `drl_grasping` in your work.\n\n```bibtex\n@inproceedings{orsula_learning_2022,\n  author    = {Andrej Orsula and Simon B{\\o}gh and Miguel Olivares-Mendez and Carol Martinez},\n  title     = {{Learning} to {Grasp} on the {Moon} from {3D} {Octree} {Observations} with {Deep} {Reinforcement} {Learning}},\n  year      = {2022},\n  booktitle = {2022 IEEE\u002FRSJ International Conference on Intelligent Robots and Systems (IROS)},\n  pages     = {4112--4119},\n  doi       = {10.1109\u002FIROS47612.2022.9981661}\n}\n```\n\n## Directory Structure\n\n```bash\n.\n├── drl_grasping\u002F        # [dir] Primary Python module of this project\n│   ├── drl_octree\u002F      # [dir] Submodule for end-to-end learning from 3D octree observations\n│   ├── envs\u002F            # [dir] Submodule for environments\n│   │   ├── control\u002F     # [dir] Interfaces for the control of agents\n│   │   ├── models\u002F      # [dir] Functional models for simulation environments\n│   │   ├── perception\u002F  # [dir] Interfaces for the perception of agents\n│   │   ├── randomizers\u002F # [dir] Domain randomization of the simulated environments\n│   │   ├── runtimes\u002F    # [dir] Runtime implementations of the task (sim\u002Freal)\n│   │   ├── tasks\u002F       # [dir] Implementation of tasks\n│   │   ├── utils\u002F       # [dir] Environment-specific utilities used across the submodule\n│   │   └── worlds\u002F      # [dir] Minimal templates of worlds for simulation environments\n│   └── utils\u002F           # [dir] Submodule for training and evaluation scripts boilerplate (using SB3)\n├── examples\u002F            # [dir] Examples for training and evaluating RL agents\n├── hyperparams\u002F         # [dir] Default hyperparameters for training RL agents\n├── launch\u002F              # [dir] ROS 2 launch scripts that can be used to interact with this repository\n├── pretrained_agents\u002F   # [dir] Collection of pre-trained agents\n├── rviz\u002F                # [dir] RViz2 config for visualization\n├── scripts\u002F             # [dir] Helpful scripts for training, evaluation and other utilities\n├── CMakeLists.txt       # Colcon-enabled CMake recipe\n└── package.xml          # ROS 2 package metadata\n```\n","# 基于八叉树的机器人抓取深度强化学习\n\n本项目专注于应用深度强化学习，从紧凑的八叉树形式的3D观测中获取鲁棒的策略，使机器人能够抓取各种不同的物体。\n\n\u003Cp align=\"center\" float=\"middle\">\n  \u003Ca href=\"https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=1-cudiW4eaU\">\n    \u003Cimg width=\"100.0%\" src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FAndrejOrsula_drl_grasping_readme_f3f541d51d9a.webp\"\u002F>\n  \u003C\u002Fa>\n  \u003Cem>在新场景上评估训练好的策略（之前未见过的相机位姿、物体、地形纹理等）。\u003C\u002Fem>\n\u003C\u002Fp>\n\n\u003Cp align=\"center\" float=\"middle\">\n  \u003Ca href=\"https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=btxqzFOgCyQ\">\n    \u003Cimg width=\"100.0%\" 
src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FAndrejOrsula_drl_grasping_readme_b460b86a81aa.webp\"\u002F>\n  \u003C\u002Fa>\n  \u003Cem>仅在仿真环境中训练的策略的仿真到现实迁移（零样本迁移）。鸣谢：奥尔堡大学\u003C\u002Fem>\n\u003C\u002Fp>\n\n\u003Cp align=\"center\" float=\"middle\">\n  \u003Ca href=\"https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=FZSoOkK6VFc\">\n    \u003Cimg width=\"100.0%\" src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FAndrejOrsula_drl_grasping_readme_2886a9f288d4.webp\"\u002F>\n  \u003C\u002Fa>\n  \u003Cem>在仿真环境中评估用于月球表面岩石抓取的训练策略。\u003C\u002Fem>\n\u003C\u002Fp>\n\n\u003Cp align=\"center\" float=\"middle\">\n  \u003Ca href=\"https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=FZSoOkK6VFc\">\n    \u003Cimg width=\"100.0%\" src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FAndrejOrsula_drl_grasping_readme_8e88a5a0a569.webp\"\u002F>\n  \u003C\u002Fa>\n  \u003Cem>在月球模拟设施中的仿真到现实迁移（零样本迁移）。鸣谢：卢森堡大学\u003C\u002Fem>\n\u003C\u002Fp>\n\n## 概述\n\n\u003Cp align=\"center\">\n  \u003Ca href=\"https:\u002F\u002Fdocs.ros.org\u002Fen\u002Fgalactic\">\n    \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FMiddleware-ROS%202%20Galactic-38469E\"\u002F>\n  \u003C\u002Fa>\n  \u003Ca href=\"https:\u002F\u002Fgazebosim.org\">\n    \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FRobotics%20Simulator-Gazebo%20Fortress-F58113\"\u002F>\n  \u003C\u002Fa>\n  \u003Ca href=\"https:\u002F\u002Fmoveit.ros.org\">\n    \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FMotion%20Planning-MoveIt%202-0A58F7\"\u002F>\n  \u003C\u002Fa>\n  \u003Cbr>\n  \u003Ca href=\"https:\u002F\u002Fwww.gymlibrary.ml\">\n    \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FRL%20Environment%20API-OpenAI%20Gym-CBCBCC\"\u002F>\n  \u003C\u002Fa>\n  \u003Ca href=\"https:\u002F\u002Fstable-baselines3.readthedocs.io\">\n    \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FPrimary%20RL%20Framework-Stable--Baselines3-BDF25E\"\u002F>\n  \u003C\u002Fa>\n\u003C\u002Fp>\n\n该仓库包含多个用于机器人操作的强化学习环境，重点是使用笛卡尔空间中的连续动作进行机器人抓取。所有环境都有多种观测变体，便于直接比较（RGB图像、深度图、八叉树等）。每个任务都与一个仿真环境相关联，可用于训练强化学习智能体。这些智能体随后可以在集成[ros2_control](https:\u002F\u002Fcontrol.ros.org)（或通过[ros1_bridge](https:\u002F\u002Fgithub.com\u002Fros2\u002Fros1_bridge)集成[ros_control](https:\u002F\u002Fwiki.ros.org\u002Fros_control)）的实体机器人上进行评估。\n\n端到端的无模型演员-评论家算法已在这些环境中进行了测试（[TD3](https:\u002F\u002Farxiv.org\u002Fabs\u002F1802.09477)、[SAC](https:\u002F\u002Farxiv.org\u002Fabs\u002F1801.01290)和[TQC](https:\u002F\u002Farxiv.org\u002Fabs\u002F2005.04269) | [SB3 PyTorch实现](https:\u002F\u002Fgithub.com\u002FDLR-RM\u002Fstable-baselines3)）。还提供了一个用于实验基于模型的算法（[DreamerV2](https:\u002F\u002Farxiv.org\u002Fabs\u002F2010.02193) | [原始TensorFlow实现](https:\u002F\u002Fgithub.com\u002Fdanijar\u002Fdreamerv2))的设置，不过目前仅限于RGB图像观测。由于与[Gym](https:\u002F\u002Fwww.gymlibrary.ml) API兼容，这些环境应能与大多数算法及其实现互操作。\n\n\u003Cdetails open>\u003Csummary>\u003Cb>环境列表\u003C\u002Fb>\u003C\u002Fsummary>\n\n以下是已实现的环境列表。每个环境（观测变体）都有两个版本，`Task-Obs-vX`和`Task-Obs-Gazebo-vX`（表格中省略）。其中，`Task-Obs-vX`实现了环境的逻辑，可用于真实机器人；而`Task-Obs-Gazebo-vX`则将该逻辑与Gazebo内的仿真环境结合在一起。对于大部分任务而言，机器人之间可以互换，但也有一些限制（例如，`GraspPlanetary`任务需要移动机械臂来完全随机化环境）。\n\n如果您有兴趣配置这些环境，请先查看[Gym注册](.\u002Fdrl_grasping\u002Fenvs\u002F__init__.py)中的参数列表，然后再查看各自的源代码。\n\n\u003Cdiv align=\"center\" class=\"tg-wrap\">\n\u003Ctable>\n\u003Cthead>\n  \u003Ctr align=\"center\" valign=\"bottom\">\n    \u003Cth>\n      \u003Ca href=\".\u002Fdrl_grasping\u002Fenvs\u002Ftasks\u002Freach\">\n        \u003Cimg width=\"100.0%\" 
src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FAndrejOrsula_drl_grasping_readme_d14c6021dd37.png\"\u002F>\n      \u003C\u002Fa>\n      \u003Cem>到达末端执行器目标。\u003C\u002Fem>\n    \u003C\u002Fth>\n    \u003Cth>\n      \u003Ca href=\".\u002Fdrl_grasping\u002Fenvs\u002Ftasks\u002Fgrasp\">\n        \u003Cimg width=\"100.0%\" src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FAndrejOrsula_drl_grasping_readme_abb1234bb0bc.png\"\u002F>\n      \u003C\u002Fa>\n      \u003Cem>抓取并举起一个随机物体。\u003C\u002Fem>\n    \u003C\u002Fth>\n    \u003Cth>\n      \u003Ca href=\".\u002Fdrl_grasping\u002Fenvs\u002Ftasks\u002Fgrasp_planetary\">\n        \u003Cimg width=\"100.0%\" src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FAndrejOrsula_drl_grasping_readme_517c77c4a1ce.png\"\u002F>\n      \u003C\u002Fa>\n      \u003Cem>抓取并举起一块月球岩石。\u003C\u002Fem>\n    \u003C\u002Fth>\n  \u003C\u002Ftr>\n\u003C\u002Fthead>\n\u003Ctbody>\n  \u003Ctr>\n    \u003Ctd>Reach-v0（状态观测）\u003C\u002Ftd>\n    \u003Ctd>Grasp-v0（状态观测）\u003C\u002Ftd>\n    \u003Ctd>GraspPlanetary-v0（状态观测）\u003C\u002Ftd>\n  \u003C\u002Ftr>\n  \u003Ctr>\n    \u003Ctd align=\"center\">—\u003C\u002Ftd>\n    \u003Ctd align=\"center\">—\u003C\u002Ftd>\n    \u003Ctd>GraspPlanetary-MonoImage-v0\u003C\u002Ftd>\n  \u003C\u002Ftr>\n  \u003Ctr>\n    \u003Ctd>Reach-ColorImage-v0\u003C\u002Ftd>\n    \u003Ctd align=\"center\">—\u003C\u002Ftd>\n    \u003Ctd>GraspPlanetary-ColorImage-v0\u003C\u002Ftd>\n  \u003C\u002Ftr>\n  \u003Ctr>\n    \u003Ctd>Reach-DepthImage-v0\u003C\u002Ftd>\n    \u003Ctd align=\"center\">—\u003C\u002Ftd>\n    \u003Ctd>GraspPlanetary-DepthImage-v0\u003C\u002Ftd>\n  \u003C\u002Ftr>\n  \u003Ctr>\n    \u003Ctd align=\"center\">—\u003C\u002Ftd>\n    \u003Ctd align=\"center\">—\u003C\u002Ftd>\n    \u003Ctd>GraspPlanetary-DepthImageWithIntensity-v0\u003C\u002Ftd>\n  \u003C\u002Ftr>\n  \u003Ctr>\n    \u003Ctd align=\"center\">—\u003C\u002Ftd>\n    \u003Ctd align=\"center\">—\u003C\u002Ftd>\n    \u003Ctd>GraspPlanetary-DepthImageWithColor-v0\u003C\u002Ftd>\n  \u003C\u002Ftr>\n  \u003Ctr>\n    \u003Ctd>Reach-Octree-v0\u003C\u002Ftd>\n    \u003Ctd>Grasp-Octree-v0\u003C\u002Ftd>\n    \u003Ctd>GraspPlanetary-Octree-v0\u003C\u002Ftd>\n  \u003C\u002Ftr>\n  \u003Ctr>\n    \u003Ctd>Reach-OctreeWithIntensity-v0\u003C\u002Ftd>\n    \u003Ctd>Grasp-OctreeWithIntensity-v0\u003C\u002Ftd>\n    \u003Ctd>GraspPlanetary-OctreeWithIntensity-v0\u003C\u002Ftd>\n  \u003C\u002Ftr>\n  \u003Ctr>\n    \u003Ctd>Reach-OctreeWithColor-v0\u003C\u002Ftd>\n    \u003Ctd>Grasp-OctreeWithColor-v0\u003C\u002Ftd>\n    \u003Ctd>GraspPlanetary-OctreeWithColor-v0\u003C\u002Ftd>\n  \u003C\u002Ftr>\n\u003C\u002Ftbody>\n\u003C\u002Ftable>\n\u003C\u002Fdiv>\n\n默认情况下，`Grasp`和`GraspPlanetary`任务会使用[`GraspCurriculum`](.\u002Fdrl_grasping\u002Fenvs\u002Ftasks\u002Fcurriculums\u002Fgrasp.py)，以调整其奖励函数和环境难度。\n\n\u003C\u002Fdetails>\n\n\u003Cdetails>\u003Csummary>\u003Cb>领域随机化\u003C\u002Fb>\u003C\u002Fsummary>\n\n为了促进训练好的智能体从仿真环境到真实世界的迁移，仿真环境引入了领域随机化技术，旨在提高学习策略的泛化能力。这种随机化通过[`ManipulationGazeboEnvRandomizer`](.\u002Fdrl_grasping\u002Fenvs\u002Frandomizers\u002Fmanipulation.py)实现，它会在每次环境重置时填充虚拟世界，并支持对多个属性进行随机化。由于该随机化器包含大量可配置参数，请查看源代码以了解您可以创建哪些环境。\n\n\u003Cp align=\"center\" float=\"middle\">\n  \u003Ca href=\".\u002Fdrl_grasping\u002Fenvs\u002Frandomizers\u002Fmanipulation.py\">\n    \u003Cimg width=\"100.0%\" src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FAndrejOrsula_drl_grasping_readme_4c7972d0c01d.png\"\u002F>\n  \u003C\u002Fa>\n  
\u003Cem>“Grasp”任务的领域随机化示例。\u003C\u002Fem>\n\u003C\u002Fp>\n\n\u003Cp align=\"center\" float=\"middle\">\n  \u003Ca href=\".\u002Fdrl_grasping\u002Fenvs\u002Frandomizers\u002Fmanipulation.py\">\n    \u003Cimg width=\"100.0%\" src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FAndrejOrsula_drl_grasping_readme_dc716b022172.png\"\u002F>\n  \u003C\u002Fa>\n  \u003Cem>“GraspPlanetary”任务的领域随机化示例。\u003C\u002Fem>\n\u003C\u002Fp>\n\n#### 模型数据集\n\n本仓库中的仿真环境可以使用任何[SDF](http:\u002F\u002Fsdformat.org)模型的数据集，例如来自[Fuel](https:\u002F\u002Fapp.gazebosim.org)的模型。默认情况下，“Grasp”任务会使用[Google扫描对象集合](https:\u002F\u002Fapp.gazebosim.org\u002FGoogleResearch\u002Ffuel\u002Fcollections\u002FScanned%20Objects%20by%20Google%20Research)，并结合由`TEXTURE_DIRS`环境变量指定的一组PBR材质贴图。相比之下，“GraspPlanetary”任务则采用通过[Blender](https:\u002F\u002Fblender.org)程序化生成的自定义模型。不过，这些设置也可以根据需要进行调整。\n\n所有外部模型在被插入到场景之前，都可以通过[`ModelCollectionRandomizer`](.\u002Fdrl_grasping\u002Fenvs\u002Fmodels\u002Futils\u002Fmodel_collection_randomizer.py)以多种方式进行自动配置和随机化，例如优化碰撞几何、估算（随机化的）惯性属性，以及随机化几何尺度或表面摩擦等参数。在处理大型模型集合时，还可以基于几何复杂度或是否存在孤立组件等多个方面启用模型过滤功能。用于管理数据集的一些脚本可以在[scripts\u002Futils\u002F](.\u002Fscripts\u002Futils\u002F)目录下找到。\n\n\u003C\u002Fdetails>\n\n\u003Cdetails>\u003Csummary>\u003Cb>基于3D八叉树观测的端到端学习\u003C\u002Fb>\u003C\u002Fsummary>\n\n该项目最初研究了如何利用3D视觉观测来提升操纵技能的端到端学习效果。之所以选择八叉树结构，是因为与其他3D表示方法相比，它具有更高效的组织方式。\n\n为从3D八叉树观测中提取抽象特征，项目采用了基于八叉树的3D卷积神经网络。负责此类特征提取的网络模块以[`OctreeCnnFeaturesExtractor`](.\u002Fdrl_grasping\u002Fdrl_octree\u002Ffeatures_extractor\u002Foctree_cnn.py)的形式实现（PyTorch）。该特征提取器是为TD3、SAC和TQC算法实现的`OctreeCnnPolicy`策略的一部分。内部，该提取器利用了[Microsoft O-CNN](https:\u002F\u002Fgithub.com\u002Fmicrosoft\u002FO-CNN)的实现，从而能够在NVIDIA GPU上获得硬件加速。\n\n\u003Cp align=\"center\" float=\"middle\">\n  \u003Ca href=\".\u002Fdrl_grasping\u002Fdrl_octree\u002Ffeatures_extractor\u002Foctree_cnn.py\">\n    \u003Cimg width=\"100.0%\" src=\"https:\u002F\u002Fuser-images.githubusercontent.com\u002F22929099\u002F176558147-600646ce-ff9c-4660-8300-532acb6df0e4.svg\"\u002F>\n  \u003C\u002Fa>\n  \u003Cem>带有八叉树3D卷积神经网络特征提取器的端到端演员-评论家网络架构示意图。\u003C\u002Fem>\n\u003C\u002Fp>\n\n\u003C\u002Fdetails>\n\n\u003Cdetails>\u003Csummary>\u003Cb>局限性\u003C\u002Fb>\u003C\u002Fsummary>\n\n为方便起见，以下列出了本仓库已知的局限性：\n\n- **不支持并行环境——** 目前无法同时运行多个环境实例。\n- **训练速度较慢——** 仿真环境计算复杂度较高（物理模拟、渲染、底层低级控制等），这显著影响了在时间和计算资源受限的情况下训练智能体的能力。可以通过牺牲部分精度和真实性来提升某些方面的性能（如调整`physics_rate`\u002F`step_size`）。\n- **超参数欠佳——** 尽管针对部分环境与算法组合使用了超参数优化框架，但这一过程较为耗时。加之超参数数量众多且普遍较为敏感，因此本仓库提供的默认超参数可能并非最优。\n- **非确定性——** 实验结果并不完全可重复，即使使用相同的伪随机数种子也可能产生不同结果。这主要归因于基于网络的通信的非确定性，以及底层深度学习框架和硬件本身的非确定性等因素。\n\n\u003C\u002Fdetails>\n\n\n\n## 使用说明\n\n在使用本仓库时，有两种设置方式可供选择。对于初次尝试本仓库的用户，推荐使用**选项A——Docker**，因为它更为简便。如果您更倾向于本地安装，则可以选择**选项B——本地安装**。这两种方式在使用本仓库时并无优劣之分；然而，预构建的Docker镜像自带所有必要的数据集，并能实现运行环境的隔离。\n\n\u003Cdetails>\u003Csummary>\u003Cb>选项A——Docker\u003C\u002Fb>\u003C\u002Fsummary>\n\n### 硬件要求\n\n- **CUDA GPU——** 处理八叉树观测需要支持CUDA的GPU。其他部分则可在CPU上正常运行。\n\n### 安装Docker\n\n首先，请确保您的系统已配置好使用Docker与NVIDIA GPU。您可以按照适用于Debian系发行版的[`install_docker_with_nvidia.bash`](.\u002F.docker\u002Fhost\u002Finstall_docker_with_nvidia.bash)安装脚本进行操作。或者，您也可以参考[NVIDIA Container Toolkit安装指南](https:\u002F\u002Fdocs.nvidia.com\u002Fdatacenter\u002Fcloud-native\u002Fcontainer-toolkit\u002Finstall-guide.html)来为其他Linux发行版进行配置。\n\n```bash\n# 在克隆的仓库内执行脚本\n.docker\u002Fhost\u002Finstall_docker_with_nvidia.bash\n# （替代方案）直接从URL执行脚本\nbash -c \"$(wget -qO - 
https:\u002F\u002Fraw.githubusercontent.com\u002FAndrejOrsula\u002Fdrl_grasping\u002Fmaster\u002F.docker\u002Fhost\u002Finstall_docker_with_nvidia.bash)\"\n```\n\n### 克隆预构建的 Docker 镜像\n\n`drl_grasping` 的预构建 Docker 镜像可以直接从 [Docker Hub](https:\u002F\u002Fhub.docker.com\u002Frepository\u002Fdocker\u002Fandrejorsula\u002Fdrl_grasping) 拉取，无需在本地构建。您可以使用以下命令手动拉取最新镜像或之前标记的某个 [Release](https:\u002F\u002Fgithub.com\u002FAndrejOrsula\u002Fdrl_grasping\u002Freleases)。镜像的平均大小为 25GB（包含数据集）。\n\n```bash\ndocker pull andrejorsula\u002Fdrl_grasping:${TAG:-latest}\n```\n\n### （可选）构建新镜像\n\n您也可以使用附带的 [Dockerfile](.\u002FDockerfile) 在本地构建 Docker 镜像。为此，可以按如下方式执行 [`build.bash`](.\u002F.docker\u002Fbuild.bash) 脚本（参数为可选）。该脚本始终会打印相应的底层 `docker build ...` 命令，供您参考。\n\n```bash\n.docker\u002Fbuild.bash ${TAG:-latest} ${BUILD_ARGS}\n```\n\n### 运行 Docker 容器\n\n为简便起见，请使用附带的 [`run.bash`](.\u002F.docker\u002Frun.bash) 脚本运行 `drl_grasping` Docker 容器（参数为可选）。该脚本启用了 NVIDIA GPU 和 GUI 界面，同时自动挂载必要的卷（例如持久化日志）并设置环境变量（例如使中间件通信与主机同步）。该脚本始终会打印相应的底层 `docker run ...` 命令，供您参考。\n\n```bash\n# 在克隆的仓库内执行脚本\n.docker\u002Frun.bash ${TAG:-latest} ${CMD}\n# （替代方法）直接从 URL 执行脚本\nbash -c \"$(wget -qO - https:\u002F\u002Fraw.githubusercontent.com\u002FAndrejOrsula\u002Fdrl_grasping\u002Fmaster\u002F.docker\u002Frun.bash)\" -- ${TAG:-latest} ${CMD}\n```\n\n此 Docker 容器中 `drl_grasping` 的网络通信基于 ROS 2 的 [`ROS_DOMAIN_ID`](https:\u002F\u002Fdocs.ros.org\u002Fen\u002Fgalactic\u002FConcepts\u002FAbout-Domain-ID.html) 环境变量进行配置，可通过 `ROS_DOMAIN_ID={0...101} .docker\u002Frun.bash ${TAG:-latest} ${CMD}` 来设置。默认情况下（`ROS_DOMAIN_ID=0`），外部通信受限且禁用多播。当设置为 `ROS_DOMAIN_ID=42` 时，通信仍限制在 `localhost`，但启用多播，允许监控容器外、同一系统内的通信。而使用 `ROS_DOMAIN_ID=69` 则会采用默认的网络接口和多播设置，从而能够监控同一局域网内的通信。其他所有 `ROS_DOMAIN_ID` 值均保持默认行为，可用于实现通信分区，以运行多个 `drl_grasping` 实例。\n\n\u003C\u002Fdetails>\n\n\u003Cdetails>\u003Csummary>\u003Cb>选项 B – 本地安装\u003C\u002Fb>\u003C\u002Fsummary>\n\n### 硬件要求\n\n- **CUDA GPU –** 必须配备支持 CUDA 的 GPU，以实现八叉树观测的硬件加速处理。其余部分在 CPU 上也可正常运行。\n\n### 依赖项\n\n> 推荐使用 Ubuntu 20.04（Focal Fossa）作为本地安装的操作系统。其他 Linux 发行版也可能适用，但需要从源代码编译大部分依赖项。\n\n以下是使用该项目所需的主依赖项，必须安装到您的系统中。\n\n- [Python 3.8](https:\u002F\u002Fpython.org\u002Fdownloads)\n- ROS 2 [Galactic](https:\u002F\u002Fdocs.ros.org\u002Fen\u002Fgalactic\u002FInstallation.html)\n- Gazebo [Fortress](https:\u002F\u002Fgazebosim.org\u002Fdocs\u002Ffortress)\n- [Gym-Ignition](https:\u002F\u002Fgithub.com\u002Frobotology\u002Fgym-ignition)\n  - 请使用 [AndrejOrsula\u002Fgym-ignition](https:\u002F\u002Fgithub.com\u002FAndrejOrsula\u002Fgym-ignition) 分支以确保兼容性（默认分支为 [`drl_grasping`](https:\u002F\u002Fgithub.com\u002FAndrejOrsula\u002Fgym-ignition\u002Ftree\u002Fdrl_grasping)）。\n- [O-CNN](https:\u002F\u002Fgithub.com\u002Fmicrosoft\u002FO-CNN)\n  - 请使用 [AndrejOrsula\u002FO-CNN](https:\u002F\u002Fgithub.com\u002FAndrejOrsula\u002FO-CNN) 分支以确保兼容性（默认分支为 [`master`](https:\u002F\u002Fgithub.com\u002FAndrejOrsula\u002FO-CNN\u002Ftree\u002Fmaster)）。\n\n所有其他依赖项将通过 [vcstool](https:\u002F\u002Fwiki.ros.org\u002Fvcstool)（[drl_grasping.repos](.\u002Fdrl_grasping.repos)）获取，或在下方构建过程中通过 [pip](https:\u002F\u002Fpip.pypa.io\u002Fen\u002Fstable\u002Finstallation)（[python_requirements.txt](.\u002Fpython_requirements.txt)）和 [rosdep](https:\u002F\u002Fwiki.ros.org\u002Frosdep) 安装。\n\n### 构建\n\n递归克隆此仓库并导入 VCS 依赖项。然后安装依赖项，并使用 [colcon](https:\u002F\u002Fcolcon.readthedocs.io) 进行构建。\n\n```bash\n# 将此仓库克隆到您喜欢的 ROS 2 工作空间\ngit clone --recursive https:\u002F\u002Fgithub.com\u002FAndrejOrsula\u002Fdrl_grasping.git\n# 安装 Python 依赖项\npip3 install -r 
drl_grasping\u002Fpython_requirements.txt\n# 导入依赖项\nvcs import \u003C drl_grasping\u002Fdrl_grasping.repos\n# 安装依赖项\nIGNITION_VERSION=fortress rosdep install -y -r -i --rosdistro ${ROS_DISTRO} --from-paths .\n# 构建\ncolcon build --merge-install --symlink-install --cmake-args \"-DCMAKE_BUILD_TYPE=Release\"\n```\n\n### 引用环境变量\n\n在通过本地安装使用该项目之前，请务必先引用 ROS 2 工作空间。\n\n```bash\nsource install\u002Flocal_setup.bash\n```\n\n这将启用以下功能：\n\n- 使用 `drl_grasping` Python 模块\n- 通过 `ros2 run drl_grasping \u003C可执行文件>` 运行二进制文件、脚本和示例\n- 通过 `ros2 launch drl_grasping \u003C启动脚本>` 启动设置脚本\n- 共享资源的可发现性\n\n\u003C\u002Fdetails>\n\n\u003Cdetails>\u003Csummary>\u003Cb>测试随机智能体\u003C\u002Fb>\u003C\u002Fsummary>\n\n一个好的起点是使用随机智能体模拟一些训练回合，其中动作从定义的动作空间中随机采样。这在修改环境时也非常有用，因为它允许您在没有深度学习流水线运行的情况下分析动作及其产生的观测结果。要开始，请运行以下示例。它应该会打开 RViz 2 和 Gazebo 客户端实例，为您提供可视化反馈。\n\n```bash\nros2 run drl_grasping ex_random_agent.bash\n```\n\n运行示例脚本后，底层的 `ros2 launch drl_grasping random_agent.launch.py ...` 命令及其所有参数都会被打印出来供您参考（示例如下）。如果需要，您可以直接使用自定义参数来启动该命令。\n\n```bash\nros2 launch drl_grasping random_agent.launch.py seed:=42 robot_model:=lunalab_summit_xl_gen env:=GraspPlanetary-Octree-Gazebo-v0 check_env:=false render:=true enable_rviz:=true log_level:=warn\n```\n\n\u003C\u002Fdetails>\n\n\u003C!-- \u003Cdetails>\u003Csummary>\u003Cb>[开发中] 尝试预训练智能体\u003C\u002Fb>\u003C\u002Fsummary>\n\n**注意：** 子模块 `pretrained_agents` 目前与 `drl_grasping` 版本 `2.0.0` 不兼容。如果您想测试此功能，可以使用之前发布的基于 Docker 的版本。\n\n子模块 [pretrained_agents](https:\u002F\u002Fgithub.com\u002FAndrejOrsula\u002Fdrl_grasping_pretrained_agents) 包含一组已经训练好的智能体。要尝试它们，请运行以下示例。它应该会打开 RViz 2 和 Gazebo 客户端实例，提供可视化反馈，同时智能体的表现会被记录并输出到 `STDOUT`。\n\n```bash\nros2 run drl_grasping ex_evaluate_pretrained_agent.bash\n```\n\n运行示例脚本后，底层的 `ros2 launch drl_grasping evaluate.launch.py ...` 命令及其所有参数都会被打印出来供您参考（示例如下）。如果需要，您可以直接使用自定义参数来启动该命令。例如，您可以根据 [AndrejOrsula\u002Fdrl_grasping_pretrained_agents](.\u002Fpretrained_agents\u002FREADME.md) 中的支持矩阵选择要尝试的智能体。\n\n```bash\nros2 launch drl_grasping evaluate.launch.py seed:=77 robot_model:=panda env:=Grasp-Octree-Gazebo-v0 algo:=tqc log_folder:=\u002Froot\u002Fws\u002Finstall\u002Fshare\u002Fdrl_grasping\u002Fpretrained_agents reward_log:=\u002Froot\u002Fdrl_grasping_training\u002Fevaluate\u002FGrasp-Octree-Gazebo-v0 stochastic:=false n_episodes:=200 load_best:=false enable_rviz:=true log_level:=error\n```\n\n\u003C\u002Fdetails> -->\n\n\u003Cdetails>\u003Csummary>\u003Cb>训练新智能体\u003C\u002Fb>\u003C\u002Fsummary>\n\n您也可以从头开始训练自己的智能体。要开始训练，请运行以下示例。默认情况下，训练过程中会使用无头模式以减少计算负载。\n\n```bash\nros2 run drl_grasping ex_train.bash\n```\n\n运行示例脚本后，底层的 `ros2 launch drl_grasping train.launch.py ...` 命令及其所有参数都会被打印出来供您参考（示例如下）。如果需要，您可以直接使用自定义参数来启动该命令。\n\n```bash\nros2 launch drl_grasping train.launch.py seed:=42 robot_model:=panda env:=Grasp-OctreeWithColor-Gazebo-v0 algo:=tqc log_folder:=\u002Froot\u002Fdrl_grasping_training\u002Ftrain\u002FGrasp-OctreeWithColor-Gazebo-v0\u002Flogs tensorboard_log:=\u002Froot\u002Fdrl_grasping_training\u002Ftrain\u002FGrasp-OctreeWithColor-Gazebo-v0\u002Ftensorboard_logs save_freq:=10000 save_replay_buffer:=true log_interval:=-1 eval_freq:=10000 eval_episodes:=20 enable_rviz:=false log_level:=fatal\n```\n\n#### 远程可视化\n\n为了在训练过程中可视化智能体，可以分别打开 RViz 2 和 Gazebo 客户端实例。对于 Docker 部署，这些命令可以在一个新的 `drl_grasping` 容器中执行，且需使用相同的 `ROS_DOMAIN_ID`。\n\n```bash\n# RViz 2（注：采用此方法不会加载机器人模型的可视化）\nrviz2 -d $(ros2 pkg prefix --share drl_grasping)\u002Frviz\u002Fdrl_grasping.rviz\n\n# 阁楼客户端\nign gazebo -g\n```\n\n#### TensorBoard\n\n训练过程中，TensorBoard 日志将被生成到由 
`tensorboard_log:=${TENSORBOARD_LOG}` 参数指定的目录中。你可以使用以下命令在浏览器中打开这些日志。\n\n```bash\ntensorboard --logdir ${TENSORBOARD_LOG}\n```\n\n#### （实验性）使用 Dreamer V2 进行训练\n\n你也可以尝试使用基于模型的 Dreamer V2 算法来训练一些智能体。要开始训练，请运行以下示例。默认情况下，训练过程中会使用无头模式以减少计算负载。\n\n```bash\nros2 run drl_grasping ex_train_dreamerv2.bash\n```\n\n运行示例脚本后，底层的 `ros2 launch drl_grasping train_dreamerv2.launch.py ...` 命令及其所有参数都会被打印出来供你参考（示例如下）。如果需要，你可以直接使用自定义参数来启动该命令。\n\n```bash\nros2 launch drl_grasping train_dreamerv2.launch.py seed:=42 robot_model:=lunalab_summit_xl_gen env:=GraspPlanetary-ColorImage-Gazebo-v0 log_folder:=\u002Froot\u002Fdrl_grasping_training\u002Ftrain\u002FGraspPlanetary-ColorImage-Gazebo-v0\u002Flogs eval_freq:=10000 enable_rviz:=false log_level:=fatal\n```\n\n\u003C\u002Fdetails>\n\n\u003Cdetails>\u003Csummary>\u003Cb>评估新智能体\u003C\u002Fb>\u003C\u002Fsummary>\n\n一旦你训练好智能体，就可以对其进行评估。首先查看 [ex_evaluate.bash](.\u002Fexamples\u002Fex_evaluate.bash)，你可以根据自己的训练结果进行修改。该脚本会打开 RViz 2 和 Gazebo 客户端实例，为你提供可视化反馈，同时智能体的表现会被记录并输出到 `STDOUT`。\n\n```bash\nros2 run drl_grasping ex_evaluate.bash\n```\n\n运行示例脚本后，底层的 `ros2 launch drl_grasping evaluate.launch.py ...` 命令及其所有参数也会被打印出来供你参考（示例如下）。如果需要，你可以直接使用自定义参数来启动该命令。例如，你可以通过 `load_checkpoint:=${LOAD_CHECKPOINT}` 参数选择特定的检查点，而不是运行最终模型。\n\n```bash\nros2 launch drl_grasping evaluate.launch.py seed:=77 robot_model:=panda env:=Grasp-Octree-Gazebo-v0 algo:=tqc log_folder:=\u002Froot\u002Fdrl_grasping_training\u002Ftrain\u002FGrasp-Octree-Gazebo-v0\u002Flogs reward_log:=\u002Froot\u002Fdrl_grasping_training\u002Fevaluate\u002FGrasp-Octree-Gazebo-v0 stochastic:=false n_episodes:=200 load_best:=false enable_rviz:=true log_level:=warn\n```\n\n\u003C\u002Fdetails>\n\n\u003Cdetails>\u003Csummary>\u003Cb>优化超参数\u003C\u002Fb>\u003C\u002Fsummary>\n\n使用 TD3、SAC 和 TQC 训练智能体的默认超参数可以在 [hyperparams](.\u002Fhyperparams) 目录下找到。可以使用 [Optuna](https:\u002F\u002Foptuna.org) 自动调优其中的一些参数。要开始，运行以下示例。默认情况下，超参数优化过程中会使用无头模式以减少计算负载。\n\n```bash\nros2 run drl_grasping ex_optimize.bash\n```\n\n运行示例脚本后，底层的 `ros2 launch drl_grasping train.launch.py ...` 命令及其所有参数也会被打印出来供你参考（示例如下）。如果需要，你可以直接使用自定义参数来启动该命令。\n\n```bash\nros2 launch drl_grasping optimize.launch.py seed:=69 robot_model:=panda env:=Grasp-Octree-Gazebo-v0 algo:=tqc log_folder:=\u002Froot\u002Fdrl_grasping_training\u002Foptimize\u002FGrasp-Octree-Gazebo-v0\u002Flogs tensorboard_log:=\u002Froot\u002Fdrl_grasping_training\u002Foptimize\u002FGrasp-Octree-Gazebo-v0\u002Ftensorboard_logs n_timesteps:=1000000 sampler:=tpe pruner:=median n_trials:=20 n_startup_trials:=5 n_evaluations:=4 eval_episodes:=20 log_interval:=-1 enable_rviz:=true log_level:=fatal\n```\n\n\u003C\u002Fdetails>\n\n## 引用\n\n如果你在工作中使用了 `drl_grasping`，请使用以下引用：\n\n```bibtex\n@inproceedings{orsula_learning_2022,\n  author    = {Andrej Orsula and Simon B{\\o}gh and Miguel Olivares-Mendez and Carol Martinez},\n  title     = {{Learning} to {Grasp} on the {Moon} from {3D} {Octree} {Observations} with {Deep} {Reinforcement} {Learning}},\n  year      = {2022},\n  booktitle = {2022 IEEE\u002FRSJ International Conference on Intelligent Robots and Systems (IROS)},\n  pages     = {4112--4119},\n  doi       = {10.1109\u002FIROS47612.2022.9981661}\n}\n```\n\n## 目录结构\n\n```bash\n.\n├── drl_grasping\u002F        # [dir] 该项目的主要 Python 模块\n│   ├── drl_octree\u002F      # [dir] 用于从 3D 八叉树观测数据进行端到端学习的子模块\n│   ├── envs\u002F            # [dir] 环境子模块\n│   │   ├── control\u002F     # [dir] 智能体控制接口\n│   │   ├── models\u002F      # [dir] 用于仿真环境的功能性模型\n│   │   ├── perception\u002F  # [dir] 智能体感知接口\n│   │   ├── randomizers\u002F # 
## 引用\n\n如果你在工作中使用了 `drl_grasping`，请使用以下引用：\n\n```bibtex\n@inproceedings{orsula_learning_2022,\n  author    = {Andrej Orsula and Simon B{\\o}gh and Miguel Olivares-Mendez and Carol Martinez},\n  title     = {{Learning} to {Grasp} on the {Moon} from {3D} {Octree} {Observations} with {Deep} {Reinforcement} {Learning}},\n  year      = {2022},\n  booktitle = {2022 IEEE\u002FRSJ International Conference on Intelligent Robots and Systems (IROS)},\n  pages     = {4112--4119},\n  doi       = {10.1109\u002FIROS47612.2022.9981661}\n}\n```\n\n## 目录结构\n\n```bash\n.\n├── drl_grasping\u002F        # [dir] 该项目的主要 Python 模块\n│   ├── drl_octree\u002F      # [dir] 用于从 3D 八叉树观测数据进行端到端学习的子模块\n│   ├── envs\u002F            # [dir] 环境子模块\n│   │   ├── control\u002F     # [dir] 智能体控制接口\n│   │   ├── models\u002F      # [dir] 用于仿真环境的功能性模型\n│   │   ├── perception\u002F  # [dir] 智能体感知接口\n│   │   ├── randomizers\u002F # [dir] 仿真环境的领域随机化\n│   │   ├── runtimes\u002F    # [dir] 任务的运行时实现（仿真\u002F真实）\n│   │   ├── tasks\u002F       # [dir] 任务的具体实现\n│   │   ├── utils\u002F       # [dir] 该子模块中跨模块使用的环境特定工具\n│   │   └── worlds\u002F      # [dir] 用于仿真环境的最小世界模板\n│   └── utils\u002F           # [dir] 用于训练和评估脚本样板的子模块（使用 SB3）\n├── examples\u002F            # [dir] 用于训练和评估强化学习智能体的示例\n├── hyperparams\u002F         # [dir] 用于训练强化学习智能体的默认超参数\n├── launch\u002F              # [dir] 可用于与该仓库交互的 ROS 2 启动脚本\n├── pretrained_agents\u002F   # [dir] 预训练智能体集合\n├── rviz\u002F                # [dir] RViz2 的可视化配置\n├── scripts\u002F             # [dir] 用于训练、评估及其他实用功能的辅助脚本\n├── CMakeLists.txt       # 支持 Colcon 的 CMake 构建文件\n└── package.xml          # ROS 2 包元数据\n```","# drl_grasping 快速上手指南\n\n`drl_grasping` 是一个基于深度强化学习（DRL）的机器人抓取开源项目。它支持机器人利用紧凑的 3D 观测数据（如八叉树 Octrees）在多样化场景中执行鲁棒的抓取策略，并支持从仿真到真机（Sim-to-Real）的零样本迁移。\n\n## 环境准备\n\n本项目依赖 ROS 2、Gazebo 仿真器以及深度学习框架。推荐使用 **Ubuntu 20.04** 系统。\n\n### 系统要求\n*   **操作系统**: Ubuntu 20.04 (推荐)\n*   **中间件**: ROS 2 Galactic\n*   **仿真器**: Gazebo Fortress\n*   **运动规划**: MoveIt 2\n*   **GPU**: 推荐使用 NVIDIA GPU 以加速八叉树卷积神经网络（O-CNN）的训练\n\n### 前置依赖\n确保已安装以下基础工具：\n```bash\nsudo apt update\nsudo apt install -y python3-pip python3-venv git curl wget\n```\n\n> **注意**：由于涉及复杂的仿真和深度学习依赖，强烈建议使用官方提供的 **Docker** 方案（选项 A），以避免环境配置冲突。若需在本地直接安装（选项 B），请确保已正确配置 ROS 2 Galactic 和 Gazebo Fortress 环境。\n\n## 安装步骤\n\n### 方案 A：使用 Docker（推荐）\n\n这是最简便且能保证复现性的方法。\n\n1.  **克隆仓库**：\n    ```bash\n    git clone https:\u002F\u002Fgithub.com\u002FAndrejOrsula\u002Fdrl_grasping.git\n    cd drl_grasping\n    ```\n\n2.  **构建 Docker 镜像**：\n    项目根目录包含 `Dockerfile`，使用以下命令构建镜像（需确保已安装 Docker 和 nvidia-docker2 以支持 GPU）：\n    ```bash\n    docker build -t drl_grasping:latest .\n    ```\n\n3.  **运行容器**：\n    启动容器并挂载当前目录以便代码同步：\n    ```bash\n    docker run -it --rm --gpus all -v $(pwd):\u002Fworkspace\u002Fdrl_grasping drl_grasping:latest\n    ```\n\n### 方案 B：本地源码安装\n\n如果你选择在本机环境中运行，请按以下步骤操作：\n\n1.  **创建 Python 虚拟环境**：\n    ```bash\n    python3 -m venv venv\n    source venv\u002Fbin\u002Factivate\n    ```\n\n2.  **安装 Python 依赖**（依赖清单为仓库中的 `python_requirements.txt`）：\n    ```bash\n    pip install -r python_requirements.txt\n    ```\n    *(注：若下载速度慢，可临时指定国内源，例如：`pip install -r python_requirements.txt -i https:\u002F\u002Fpypi.tuna.tsinghua.edu.cn\u002Fsimple`)*\n\n3.  **安装项目包**：\n    ```bash\n    pip install -e .\n    ```\n\n4.  **配置环境变量**：\n    确保已 source ROS 2 和 Gazebo 的设置文件：\n    ```bash\n    source \u002Fopt\u002Fros\u002Fgalactic\u002Fsetup.bash\n    source \u002Fusr\u002Fshare\u002Fgazebo\u002Fsetup.bash\n    ```\n\n## 基本使用\n\n本项目提供了多种预定义的 RL 环境，涵盖从简单的到达任务（Reach）到复杂的行星岩石抓取（GraspPlanetary），支持状态、图像、深度图及八叉树等多种观测模式。\n\n### 1. 验证环境注册\n在 Python 中检查环境是否成功注册：\n\n```python\nimport gym\nimport drl_grasping\n\n# 列出所有由 drl_grasping 注册的环境\n# （环境 ID 本身不含包名，因此这里按 entry_point 过滤）\nenv_ids = [spec.id for spec in gym.envs.registry.all() if 'drl_grasping' in str(spec.entry_point)]\nprint(env_ids)\n```\n\n### 2. 运行最简单的示例\n以下示例展示如何初始化一个基于八叉树观测的抓取环境 (`Grasp-Octree-v0`) 并执行随机动作：\n\n```python\nimport gym\nimport drl_grasping\n\n# 创建环境实例\n# 可选环境包括：Reach-Octree-v0, Grasp-Octree-v0, GraspPlanetary-Octree-v0 等\nenv = gym.make(\"Grasp-Octree-v0\")\n\nobservation = env.reset()\n\n# 模拟一步交互\naction = env.action_space.sample()  # 采样随机动作\nnext_observation, reward, done, info = env.step(action)\n\nprint(f\"Reward: {reward}\")\nprint(f\"Done: {done}\")\n\nenv.close()\n```\n\n
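在单步交互的基础上，下面再给出一个跑完整回合的最小循环示意（沿用上例的环境名；回合数等取值仅为示例假设）：\n\n```python\nimport gym\nimport drl_grasping\n\nenv = gym.make(\"Grasp-Octree-v0\")\n\nfor episode in range(3):  # 示例：连续运行 3 个回合\n    observation = env.reset()\n    done, total_reward, steps = False, 0.0, 0\n    while not done:\n        action = env.action_space.sample()  # 随机策略，仅用于验证环境闭环\n        observation, reward, done, info = env.step(action)\n        total_reward += reward\n        steps += 1\n    print(f\"Episode {episode}: steps={steps}, return={total_reward:.2f}\")\n\nenv.close()\n```\n\n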
### 3. 训练代理 (Agent)\n项目主要兼容 `stable-baselines3` 库。以下是一个使用 SAC 算法进行训练的简要示例：\n\n```python\nfrom stable_baselines3 import SAC\nfrom stable_baselines3.common.callbacks import CheckpointCallback\nimport gym\nimport drl_grasping\n\n# 创建环境\nenv = gym.make(\"Grasp-Octree-v0\")\n\n# 定义回调以保存模型\ncheckpoint_callback = CheckpointCallback(save_freq=10000, save_path=\".\u002Flogs\u002F\")\n\n# 初始化 SAC 模型\n# 注意：对于 Octree 观测，可能需要自定义 Policy (OctreeCnnPolicy)\nmodel = SAC(\"MlpPolicy\", env, verbose=1)\n\n# 开始训练\nmodel.learn(total_timesteps=100000, callback=checkpoint_callback)\n\n# 保存模型\nmodel.save(\"sac_grasp_octree\")\n```\n\n> **提示**：针对八叉树（Octree）观测输入，请使用项目中提供的 `OctreeCnnPolicy` 策略类替换默认的 `\"MlpPolicy\"`，以利用 3D CNN 提取特征。具体导入路径参考 `drl_grasping.drl_octree` 模块。
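\n\n一个示意性的替换写法如下（`OctreeCnnPolicy` 的具体导入路径与构造参数请以仓库源码为准，此处的导入语句仅为假设）：\n\n```python\nimport gym\nfrom stable_baselines3 import SAC\n\nimport drl_grasping\n\n# 假设性导入：实际类所在的子模块请在 drl_grasping.drl_octree 源码中确认\nfrom drl_grasping.drl_octree import OctreeCnnPolicy\n\nenv = gym.make(\"Grasp-Octree-v0\")\n\n# stable-baselines3 支持直接传入策略类（而非 \"MlpPolicy\" 这类字符串别名）\nmodel = SAC(OctreeCnnPolicy, env, verbose=1)\nmodel.learn(total_timesteps=100000)\n```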
","某航天科研团队正在开发月球采样返回任务中的自主机械臂系统，需要在未知且复杂的月面环境中抓取形态各异的岩石样本。\n\n### 没有 drl_grasping 时\n- **感知数据冗余且处理慢**：传统方法依赖高分辨率 RGB 图像或深度图，数据传输带宽占用大，且在嵌入式设备上实时处理 3D 点云的计算延迟极高。\n- **泛化能力差，难以应对新环境**：基于规则或监督学习的策略一旦遇到训练集中未出现的岩石形状、光照变化或月壤纹理，抓取成功率急剧下降。\n- **仿真到现实的鸿沟巨大（Sim-to-Real Gap）**：在模拟器中训练好的模型，部署到真实机械臂上往往因物理参数差异而失效，需要耗费数周时间进行繁琐的微调。\n- **动作规划不够灵活**：传统的离散动作空间限制了机械臂的灵活性，难以在狭窄或非结构化地形中执行精细的连续笛卡尔空间运动。\n\n### 使用 drl_grasping 后\n- **高效紧凑的八叉树感知**：利用八叉树（Octrees）作为输入，以紧凑的结构高效压缩 3D 观测数据，显著降低了计算与传输负载，支持近实时的决策响应。\n- **强大的零样本迁移能力**：基于深度强化学习训练的策略具备极强的鲁棒性，能直接将在仿真中学到的技能“零样本”迁移到真实的月球模拟设施中，无需额外微调。\n- **适应多样化未知场景**：模型在面对从未见过的相机角度、陌生岩石形态及复杂地形纹理时，仍能保持高成功率，高度契合月球探测的不确定性。\n- **连续的精细操作控制**：支持连续动作空间的端到端训练，使机械臂能够像人类一样平滑调整姿态，在复杂月面环境中完成高精度的抓取任务。\n\ndrl_grasping 通过结合紧凑的八叉树感知与深度强化学习，显著缓解了非结构化环境下机器人抓取的泛化难题，实现了从仿真到真实月球场景的跨越。","https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FAndrejOrsula_drl_grasping_d14c6021.png","AndrejOrsula","Andrej Orsula","https:\u002F\u002Foss.gittoolsai.com\u002Favatars\u002FAndrejOrsula_2efd5881.png","Robot Learning in Space","@snt-spacer","Luxembourg","orsula.andrej@gmail.com",null,"https:\u002F\u002FAndrejOrsula.github.io","https:\u002F\u002Fgithub.com\u002FAndrejOrsula",[87,91,95,99],{"name":88,"color":89,"percentage":90},"Python","#3572A5",96.1,{"name":92,"color":93,"percentage":94},"Shell","#89e051",2.4,{"name":96,"color":97,"percentage":98},"Dockerfile","#384d54",1.2,{"name":100,"color":101,"percentage":102},"CMake","#DA3434",0.3,508,65,"2026-04-12T07:59:11","BSD-3-Clause",4,"Linux","需要 NVIDIA GPU (用于 O-CNN 硬件加速)，具体型号和显存未说明，需支持 CUDA","未说明 (但提及仿真环境计算复杂，训练缓慢，暗示需要较高配置)",{"notes":112,"python":113,"dependencies":114},"1. 该项目强依赖机器人仿真生态，必须安装 ROS 2 Galactic、Gazebo Fortress 和 MoveIt 2。2. 核心算法基于 PyTorch 和 Stable-Baselines3，3D 特征提取使用微软的 O-CNN 库以利用 NVIDIA GPU 加速。3. 提供 Docker 部署选项（推荐）以避免复杂的环境配置。4. 目前不支持并行环境运行，且由于物理仿真和渲染的计算复杂性，训练速度较慢。5. 实验具有非确定性，即使种子相同结果也可能不同。","未说明",[115,116,117,118,119,120,121,122],"ROS 2 Galactic","Gazebo Fortress","MoveIt 2","PyTorch","Stable-Baselines3","OpenAI Gym","O-CNN","TensorFlow (仅用于 DreamerV2)",[18],[125,126,127,128,129,130,131,132,133,134,135,136,137],"robotics","grasping","reinforcement-learning","octree","domain-randomization","sim2real","ros2","gym-ignition","stable-baselines3","deep-reinforcement-learning","openai-gym","gazebo","ros","2026-03-27T02:49:30.150509","2026-04-20T16:46:42.703543",[141,146,150,155,159,164,168],{"id":142,"question_zh":143,"answer_zh":144,"source_url":145},45004,"本地安装时应该使用哪个 O-CNN 仓库？README 和 .repos 文件中的链接不一致。","虽然 README 提到需要安装维护者的 O-CNN fork，但 drl_grasping.repos 文件中指定的是官方仓库 URL。如果遇到版本兼容性问题（例如 stable-baselines3 新版本导致的 `AttributeError: module 'stable_baselines3.common.logger' has no attribute 'record'` 错误），建议尝试安装特定版本的 stable-baselines3（如 1.1.0a7）以解决兼容性问题。","https:\u002F\u002Fgithub.com\u002FAndrejOrsula\u002Fdrl_grasping\u002Fissues\u002F88",{"id":147,"question_zh":148,"answer_zh":149,"source_url":145},45005,"在没有 sudo 权限的情况下如何在本地安装项目依赖？","Dockerfile 中的指令默认假设用户拥有 `sudo` 权限：`sudo apt-get install ...`、`sudo cmake install`、`sudo python3 setup.py install` 等系统级安装命令都需要管理员权限。在没有 sudo 的环境下，可以让部分工具安装到用户可写的目录（例如为 CMake 指定用户级安装前缀），或使用 `cmake --build . --target install` 等方式进行非特权安装。",{"id":151,"question_zh":152,"answer_zh":153,"source_url":154},45006,"如何在 Docker 环境中将机器人模型从 UR5 替换为 UR10e？","可以参考 `ur5_rg2_ign` 仓库，修改其中的描述文件（description）和网格模型（meshes）以匹配 UR10e 机器人。完成修改后，将新的仓库挂载或提交到 Docker 容器内部即可使用。","https:\u002F\u002Fgithub.com\u002FAndrejOrsula\u002Fdrl_grasping\u002Fissues\u002F108",{"id":156,"question_zh":157,"answer_zh":158,"source_url":154},45007,"如何在 Docker 容器中实现 Sim2Real（仿真到真机）部署？","其原理与本地安装相同。运行 Docker 容器时，需要使用 `docker run --device ...` 映射具体设备，或者为了简便使用 `docker run --privileged ...`（虽安全性较低但配置简单），这样容器就能与真实机器人通信。此外，有一个包含基础实时运行实现的 `sim2real` 分支（commit: aa7d750），但该代码较为混乱且未合并到主分支，建议仅作为参考，正式部署推荐使用 MoveIt 2 配合 `ros2_control`。",{"id":160,"question_zh":161,"answer_zh":162,"source_url":163},45008,"本地安装 Gazebo Fortress 时遇到 SDF 转换错误或仿真无法启动怎么办？","如果遇到 \"Tried to convert SDF [world] into [plugin]\" 错误，建议使用特定 commit hash（2938ede）的 gz-sim 版本从源码编译 Gazebo Fortress。此外，仿真无法启动往往是因为对象模型下载失败。目前从 Fuel 自动下载 Google Scanned Objects 集合可能已失效，需手动下载数据集。可以使用 `drl_grasping\u002Fscripts\u002Futils\u002Fdataset` 目录下的 bash 脚本，或运行 `drl_grasping\u002Fscripts\u002Futils\u002Fprocess_collection.py` 来正确处理模型文件。","https:\u002F\u002Fgithub.com\u002FAndrejOrsula\u002Fdrl_grasping\u002Fissues\u002F104",{"id":165,"question_zh":166,"answer_zh":167,"source_url":163},45009,"为什么 RVIZ 中机器人连杆的 TF 变换无法加载或地面纹理缺失？","这通常是因为 Google 物体数据集未正确下载或放置。如果物体模型缺失，会导致相机数据或 TF 变换发布异常。请确保手动下载并正确放置 Google Dataset 文件。关于地面纹理缺失的问题，检查是否成功下载了纹理文件，正确放置后通常可解决。若仍有报错但功能正常，可通过修改日志级别隐藏相关错误信息。",{"id":169,"question_zh":170,"answer_zh":171,"source_url":172},45010,"Kinova 机械臂使用 SAC 算法训练不收敛（不学习抓取）怎么办？","如果在默认配置下训练不收敛，可能需要调整超参数或环境设置。关于 Kinova 的 Sim2Real 部署，官方提供了一个位于 `sim2real` 分支的早期实现（commit: aa7d750），包含针对 UR 和 Panda 的基础实时控制代码。若要适配 Kinova，最简单的方案是利用其内部控制器和官方 API 进行开发，而不是直接复用该分支中较为混乱的代码。","https:\u002F\u002Fgithub.com\u002FAndrejOrsula\u002Fdrl_grasping\u002Fissues\u002F97",[174,179,184],{"id":175,"version":176,"summary_zh":177,"released_at":178},359897,"2.0.0","第二个主要版本新增了 `GraspPlanetary` 环境，以及一款新的移动机械臂和若干模型。控制模块经过重新设计，并与 `ros2_control` 完全集成，同时支持通过 MoveIt 2 Servo 的全新控制方式。此外，大多数模块和 Docker 部署配置也进行了大规模重构，以提升代码的整体质量。\n\n有关详细变更，请参阅 
[CHANGELOG@2.0.0](https:\u002F\u002Fgithub.com\u002FAndrejOrsula\u002Fdrl_grasping\u002Fblob\u002Fmaster\u002FCHANGELOG.md#200---2022-12-16)。","2022-12-16T17:42:02",{"id":180,"version":181,"summary_zh":182,"released_at":183},359898,"1.1.0","这是一个次要版本，包含一些小的新增功能。本次发布的主要目标是确保与所有依赖库的最新版本兼容，同时对部分实现进行了简化。环境、算法和智能体等核心模块并未进行任何影响功能的实质性修改（不过，更新后的依赖库可能会对部分定量结果产生影响，例如物理引擎内部的变化）。\n\n有关详细变更，请参阅 [CHANGELOG@1.1.0](https:\u002F\u002Fgithub.com\u002FAndrejOrsula\u002Fdrl_grasping\u002Fblob\u002Fmaster\u002FCHANGELOG.md#110---2021-10-13)。","2021-10-14T08:46:33",{"id":185,"version":186,"summary_zh":187,"released_at":188},359899,"1.0.0","本版本与我在[硕士论文](https:\u002F\u002Fgithub.com\u002FAndrejOrsula\u002Fmaster_thesis)中所描述的内容完全对应。目前，源代码中还包含了我在整个项目周期内尝试过的多种不同方法的变体，并提供了高度可配置性。其中许多只是临时性的快速修复，也有一些方案并未带来任何改进，因此代码整体较为杂乱。\n\n如果后续有时间，我可能会发布 2.0.0 版本，对代码进行重构和清理，将项目拆分为多个模块，并进一步提升对不同机器人的通用性。此外，还考虑用更底层的语言（如 Rust 或 C++）重写部分代码。这一计划将在 https:\u002F\u002Fgithub.com\u002FAndrejOrsula\u002Fdrl_grasping\u002Fissues\u002F85 中跟踪。\n\n本次发布的版本不包含 Sim2Real 相关内容（运行时及与控制器的接口）。这些代码保留在 [sim2real](https:\u002F\u002Fgithub.com\u002FAndrejOrsula\u002Fdrl_grasping\u002Ftree\u002Fsim2real) 分支中，仅供有兴趣者参考。不过，这部分代码的安全性并不高，因此我认为，如果未来要将其应用于可能损坏设备或伤害人员的真实机器人上，最好还是使用由使用者自行编写并完全理解的版本。","2021-06-08T20:37:56"]