[{"data":1,"prerenderedAt":-1},["ShallowReactive",2],{"similar-Pointcept--Pointcept":3,"tool-Pointcept--Pointcept":61},[4,18,26,36,44,53],{"id":5,"name":6,"github_repo":7,"description_zh":8,"stars":9,"difficulty_score":10,"last_commit_at":11,"category_tags":12,"status":17},4358,"openclaw","openclaw\u002Fopenclaw","OpenClaw 是一款专为个人打造的本地化 AI 助手，旨在让你在自己的设备上拥有完全可控的智能伙伴。它打破了传统 AI 助手局限于特定网页或应用的束缚，能够直接接入你日常使用的各类通讯渠道，包括微信、WhatsApp、Telegram、Discord、iMessage 等数十种平台。无论你在哪个聊天软件中发送消息，OpenClaw 都能即时响应，甚至支持在 macOS、iOS 和 Android 设备上进行语音交互，并提供实时的画布渲染功能供你操控。\n\n这款工具主要解决了用户对数据隐私、响应速度以及“始终在线”体验的需求。通过将 AI 部署在本地，用户无需依赖云端服务即可享受快速、私密的智能辅助，真正实现了“你的数据，你做主”。其独特的技术亮点在于强大的网关架构，将控制平面与核心助手分离，确保跨平台通信的流畅性与扩展性。\n\nOpenClaw 非常适合希望构建个性化工作流的技术爱好者、开发者，以及注重隐私保护且不愿被单一生态绑定的普通用户。只要具备基础的终端操作能力（支持 macOS、Linux 及 Windows WSL2），即可通过简单的命令行引导完成部署。如果你渴望拥有一个懂你",349277,3,"2026-04-06T06:32:30",[13,14,15,16],"Agent","开发框架","图像","数据工具","ready",{"id":19,"name":20,"github_repo":21,"description_zh":22,"stars":23,"difficulty_score":10,"last_commit_at":24,"category_tags":25,"status":17},3808,"stable-diffusion-webui","AUTOMATIC1111\u002Fstable-diffusion-webui","stable-diffusion-webui 是一个基于 Gradio 构建的网页版操作界面，旨在让用户能够轻松地在本地运行和使用强大的 Stable Diffusion 图像生成模型。它解决了原始模型依赖命令行、操作门槛高且功能分散的痛点，将复杂的 AI 绘图流程整合进一个直观易用的图形化平台。\n\n无论是希望快速上手的普通创作者、需要精细控制画面细节的设计师，还是想要深入探索模型潜力的开发者与研究人员，都能从中获益。其核心亮点在于极高的功能丰富度：不仅支持文生图、图生图、局部重绘（Inpainting）和外绘（Outpainting）等基础模式，还独创了注意力机制调整、提示词矩阵、负向提示词以及“高清修复”等高级功能。此外，它内置了 GFPGAN 和 CodeFormer 等人脸修复工具，支持多种神经网络放大算法，并允许用户通过插件系统无限扩展能力。即使是显存有限的设备，stable-diffusion-webui 也提供了相应的优化选项，让高质量的 AI 艺术创作变得触手可及。",162132,"2026-04-05T11:01:52",[14,15,13],{"id":27,"name":28,"github_repo":29,"description_zh":30,"stars":31,"difficulty_score":32,"last_commit_at":33,"category_tags":34,"status":17},1381,"everything-claude-code","affaan-m\u002Feverything-claude-code","everything-claude-code 是一套专为 AI 编程助手（如 Claude Code、Codex、Cursor 等）打造的高性能优化系统。它不仅仅是一组配置文件，而是一个经过长期实战打磨的完整框架，旨在解决 AI 代理在实际开发中面临的效率低下、记忆丢失、安全隐患及缺乏持续学习能力等核心痛点。\n\n通过引入技能模块化、直觉增强、记忆持久化机制以及内置的安全扫描功能，everything-claude-code 能显著提升 AI 在复杂任务中的表现，帮助开发者构建更稳定、更智能的生产级 AI 代理。其独特的“研究优先”开发理念和针对 Token 消耗的优化策略，使得模型响应更快、成本更低，同时有效防御潜在的攻击向量。\n\n这套工具特别适合软件开发者、AI 研究人员以及希望深度定制 AI 工作流的技术团队使用。无论您是在构建大型代码库，还是需要 AI 协助进行安全审计与自动化测试，everything-claude-code 都能提供强大的底层支持。作为一个曾荣获 Anthropic 黑客大奖的开源项目，它融合了多语言支持与丰富的实战钩子（hooks），让 AI 真正成长为懂上",156804,2,"2026-04-15T11:34:33",[14,13,35],"语言模型",{"id":37,"name":38,"github_repo":39,"description_zh":40,"stars":41,"difficulty_score":32,"last_commit_at":42,"category_tags":43,"status":17},2271,"ComfyUI","Comfy-Org\u002FComfyUI","ComfyUI 是一款功能强大且高度模块化的视觉 AI 引擎，专为设计和执行复杂的 Stable Diffusion 图像生成流程而打造。它摒弃了传统的代码编写模式，采用直观的节点式流程图界面，让用户通过连接不同的功能模块即可构建个性化的生成管线。\n\n这一设计巧妙解决了高级 AI 绘图工作流配置复杂、灵活性不足的痛点。用户无需具备编程背景，也能自由组合模型、调整参数并实时预览效果，轻松实现从基础文生图到多步骤高清修复等各类复杂任务。ComfyUI 拥有极佳的兼容性，不仅支持 Windows、macOS 和 Linux 全平台，还广泛适配 NVIDIA、AMD、Intel 及苹果 Silicon 等多种硬件架构，并率先支持 SDXL、Flux、SD3 等前沿模型。\n\n无论是希望深入探索算法潜力的研究人员和开发者，还是追求极致创作自由度的设计师与资深 AI 绘画爱好者，ComfyUI 都能提供强大的支持。其独特的模块化架构允许社区不断扩展新功能，使其成为当前最灵活、生态最丰富的开源扩散模型工具之一，帮助用户将创意高效转化为现实。",108322,"2026-04-10T11:39:34",[14,15,13],{"id":45,"name":46,"github_repo":47,"description_zh":48,"stars":49,"difficulty_score":32,"last_commit_at":50,"category_tags":51,"status":17},6121,"gemini-cli","google-gemini\u002Fgemini-cli","gemini-cli 是一款由谷歌推出的开源 AI 命令行工具，它将强大的 Gemini 大模型能力直接集成到用户的终端环境中。对于习惯在命令行工作的开发者而言，它提供了一条从输入提示词到获取模型响应的最短路径，无需切换窗口即可享受智能辅助。\n\n这款工具主要解决了开发过程中频繁上下文切换的痛点，让用户能在熟悉的终端界面内直接完成代码理解、生成、调试以及自动化运维任务。无论是查询大型代码库、根据草图生成应用，还是执行复杂的 Git 操作，gemini-cli 都能通过自然语言指令高效处理。\n\n它特别适合广大软件工程师、DevOps 
人员及技术研究人员使用。其核心亮点包括支持高达 100 万 token 的超长上下文窗口，具备出色的逻辑推理能力；内置 Google 搜索、文件操作及 Shell 命令执行等实用工具；更独特的是，它支持 MCP（模型上下文协议），允许用户灵活扩展自定义集成，连接如图像生成等外部能力。此外，个人谷歌账号即可享受免费的额度支持，且项目基于 Apache 2.0 协议完全开源，是提升终端工作效率的理想助手。",100752,"2026-04-10T01:20:03",[52,13,15,14],"插件",{"id":54,"name":55,"github_repo":56,"description_zh":57,"stars":58,"difficulty_score":32,"last_commit_at":59,"category_tags":60,"status":17},4721,"markitdown","microsoft\u002Fmarkitdown","MarkItDown 是一款由微软 AutoGen 团队打造的轻量级 Python 工具，专为将各类文件高效转换为 Markdown 格式而设计。它支持 PDF、Word、Excel、PPT、图片（含 OCR）、音频（含语音转录）、HTML 乃至 YouTube 链接等多种格式的解析，能够精准提取文档中的标题、列表、表格和链接等关键结构信息。\n\n在人工智能应用日益普及的今天，大语言模型（LLM）虽擅长处理文本，却难以直接读取复杂的二进制办公文档。MarkItDown 恰好解决了这一痛点，它将非结构化或半结构化的文件转化为模型“原生理解”且 Token 效率极高的 Markdown 格式，成为连接本地文件与 AI 分析 pipeline 的理想桥梁。此外，它还提供了 MCP（模型上下文协议）服务器，可无缝集成到 Claude Desktop 等 LLM 应用中。\n\n这款工具特别适合开发者、数据科学家及 AI 研究人员使用，尤其是那些需要构建文档检索增强生成（RAG）系统、进行批量文本分析或希望让 AI 助手直接“阅读”本地文件的用户。虽然生成的内容也具备一定可读性，但其核心优势在于为机器",93400,"2026-04-06T19:52:38",[52,14],{"id":62,"github_repo":63,"name":64,"description_en":65,"description_zh":66,"ai_summary_zh":66,"readme_en":67,"readme_zh":68,"quickstart_zh":69,"use_case_zh":70,"hero_image_url":71,"owner_login":64,"owner_name":64,"owner_avatar_url":72,"owner_bio":73,"owner_company":73,"owner_location":73,"owner_email":73,"owner_twitter":73,"owner_website":73,"owner_url":74,"languages":75,"stars":92,"forks":93,"last_commit_at":94,"license":95,"difficulty_score":96,"env_os":97,"env_gpu":98,"env_ram":97,"env_deps":99,"category_tags":106,"github_topics":108,"view_count":32,"oss_zip_url":73,"oss_zip_packed_at":73,"status":17,"created_at":112,"updated_at":113,"faqs":114,"releases":140},7786,"Pointcept\u002FPointcept","Pointcept","Pointcept: Perceive the world with sparse points, a codebase for point cloud perception research. Latest works: Utonia, Concerto (NeurIPS'25), Sonata (CVPR'25 Highlight), PTv3 (CVPR'24 Oral)","Pointcept 是一个专为点云感知研究打造的强大且灵活的开源代码库，旨在帮助开发者通过稀疏点数据高效地“感知”三维世界。它主要解决了三维视觉领域中算法复现难、框架不统一以及大规模预训练模型缺失等痛点，为从基础骨干网络到前沿自监督学习提供了标准化的实现路径。\n\n这款工具特别适合从事计算机视觉、机器人导航及自动驾驶领域的研究人员与算法工程师使用。无论是希望快速验证新想法的学术探索者，还是需要构建高精度三维理解系统的工业界开发者，都能从中获益。Pointcept 的独特亮点在于其集成了多项顶会最新成果，包括 CVPR 2024 口头报告论文 Point Transformer V3（以更简洁架构实现更强性能）、NeurIPS 2025 的 Concerto（联合 2D-3D 自监督学习）以及 CVPR 2025 高光论文 Sonata。此外，它还支持 Utonia 等通用编码器方案，推动了多数据集提示训练与大规模三维表示学习的发展，让用户能够轻松调用先进的预训练权重，显著降低科研与开发门槛。","\u003Cp align=\"center\">\n    \u003C!-- pypi-strip -->\n    \u003Cpicture>\n    \u003Csource media=\"(prefers-color-scheme: dark)\" srcset=\"https:\u002F\u002Fraw.githubusercontent.com\u002Fpointcept\u002Fassets\u002Fmain\u002Fpointcept\u002Flogo_dark.png\">\n    \u003Csource media=\"(prefers-color-scheme: light)\" srcset=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FPointcept_Pointcept_readme_4bbbaf8439c3.png\">\n    \u003C!-- \u002Fpypi-strip -->\n    \u003Cimg alt=\"pointcept\" src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FPointcept_Pointcept_readme_4bbbaf8439c3.png\" width=\"400\">\n    \u003C!-- pypi-strip -->\n    \u003C\u002Fpicture>\u003Cbr>\n    \u003C!-- \u002Fpypi-strip -->\n\u003C\u002Fp>\n\n[![Formatter](https:\u002F\u002Fgithub.com\u002Fpointcept\u002Fpointcept\u002Factions\u002Fworkflows\u002Fformatter.yml\u002Fbadge.svg)](https:\u002F\u002Fgithub.com\u002Fpointcept\u002Fpointcept\u002Factions\u002Fworkflows\u002Fformatter.yml)\n\n**Pointcept** is a powerful and flexible codebase for point cloud perception research. 
It is also an official implementation of the following paper:\n- 🚀 **Utonia: Toward One Encoder for All Point Clouds**  \n*Yujia Zhang, Xiaoyang Wu, Yunhan Yang, Xianzhe Fan, Han Li, Yuechen Zhang, Zehao Huang, Naiyan Wang, Hengshuang Zhao*  \n[ Pretrain ] [Utonia] - [ [Project](https:\u002F\u002Fpointcept.github.io\u002FUtonia\u002F) ] [ [Bib](https:\u002F\u002Fpointcept.github.io\u002FUtonia\u002F#citation) ] [ [HF Demo](https:\u002F\u002Fhuggingface.co\u002Fspaces\u002Fpointcept-bot\u002FUtonia) ] [ [Inference](https:\u002F\u002Fgithub.com\u002FPointcept\u002FUtonia) ] [ [Weight](https:\u002F\u002Fhuggingface.co\u002FPointcept\u002FUtonia) ] &rarr; [here](#utonia)\n\n\n- **Concerto: Joint 2D-3D Self-Supervised Learning Emerges Spatial Representations**  \n*Yujia Zhang, Xiaoyang Wu, Yixing Lao, Chengyao Wang, Zhuotao Tian, Naiyan Wang, Hengshuang Zhao*   \nConference on Neural Information Processing Systems (**NeurIPS**) 2025  \n[ Pretrain ] [Concerto] - [ [Project](https:\u002F\u002Fpointcept.github.io\u002FConcerto\u002F) ] [ [Bib](https:\u002F\u002Fxywu.me\u002Fresearch\u002Fconcerto\u002Fbib.txt) ] [ [HF Demo](https:\u002F\u002Fhuggingface.co\u002Fspaces\u002FPointcept\u002FConcerto) ] [ [Inference](https:\u002F\u002Fgithub.com\u002FPointcept\u002FConcerto) ] [ [Weight](https:\u002F\u002Fhuggingface.co\u002FPointcept\u002FConcerto) ] &rarr; [here](#concerto)\n\n\n- **Sonata: Self-Supervised Learning of Reliable Point Representations**  \n*Xiaoyang Wu, Daniel DeTone, Duncan Frost, Tianwei Shen, Chris Xie, Nan Yang, Jakob Engel, Richard Newcombe, Hengshuang Zhao, Julian Straub*  \nIEEE Conference on Computer Vision and Pattern Recognition (**CVPR**) 2025 - Highlight  \n[ Pretrain ] [Sonata] - [ [Project](https:\u002F\u002Fxywu.me\u002Fsonata\u002F) ] [ [arXiv](https:\u002F\u002Farxiv.org\u002Fabs\u002F2503.16429) ] [ [Bib](https:\u002F\u002Fxywu.me\u002Fresearch\u002Fsonata\u002Fbib.txt) ] [ [Demo](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fsonata) ] [ [Weight](https:\u002F\u002Fhuggingface.co\u002Ffacebook\u002Fsonata) ] &rarr; [here](#sonata)\n\n\n- **Point Transformer V3: Simpler, Faster, Stronger**  \n*Xiaoyang Wu, Li Jiang, Peng-Shuai Wang, Zhijian Liu, Xihui Liu, Yu Qiao, Wanli Ouyang, Tong He, Hengshuang Zhao*  \nIEEE Conference on Computer Vision and Pattern Recognition (**CVPR**) 2024 - Oral  \n[ Backbone ] [PTv3] - [ [arXiv](https:\u002F\u002Farxiv.org\u002Fabs\u002F2312.10035) ] [ [Bib](https:\u002F\u002Fxywu.me\u002Fresearch\u002Fptv3\u002Fbib.txt) ] [ [Project](https:\u002F\u002Fgithub.com\u002FPointcept\u002FPointTransformerV3) ] &rarr; [here](https:\u002F\u002Fgithub.com\u002FPointcept\u002FPointTransformerV3)\n\n\n- **OA-CNNs: Omni-Adaptive Sparse CNNs for 3D Semantic Segmentation**  \n*Bohao Peng, Xiaoyang Wu, Li Jiang, Yukang Chen, Hengshuang Zhao, Zhuotao Tian, Jiaya Jia*  \nIEEE Conference on Computer Vision and Pattern Recognition (**CVPR**) 2024  \n[ Backbone ] [ OA-CNNs ] - [ [arXiv](https:\u002F\u002Farxiv.org\u002Fabs\u002F2403.14418) ] [ [Bib](https:\u002F\u002Fxywu.me\u002Fresearch\u002Foacnns\u002Fbib.txt) ] &rarr; [here](#oa-cnns)\n\n\n- **Towards Large-scale 3D Representation Learning with Multi-dataset Point Prompt Training**  \n*Xiaoyang Wu, Zhuotao Tian, Xin Wen, Bohao Peng, Xihui Liu, Kaicheng Yu, Hengshuang Zhao*  \nIEEE Conference on Computer Vision and Pattern Recognition (**CVPR**) 2024  \n[ Pretrain ] [PPT] - [ [arXiv](https:\u002F\u002Farxiv.org\u002Fabs\u002F2308.09718) ] [ 
[Bib](https:\u002F\u002Fxywu.me\u002Fresearch\u002Fppt\u002Fbib.txt) ] &rarr; [here](#point-prompt-training-ppt)\n\n\n- **Masked Scene Contrast: A Scalable Framework for Unsupervised 3D Representation Learning**  \n*Xiaoyang Wu, Xin Wen, Xihui Liu, Hengshuang Zhao*  \nIEEE Conference on Computer Vision and Pattern Recognition (**CVPR**) 2023  \n[ Pretrain ] [ MSC ] - [ [arXiv](https:\u002F\u002Farxiv.org\u002Fabs\u002F2303.14191) ] [ [Bib](https:\u002F\u002Fxywu.me\u002Fresearch\u002Fmsc\u002Fbib.txt) ] &rarr; [here](#masked-scene-contrast-msc)\n\n\n- **Learning Context-aware Classifier for Semantic Segmentation** (3D Part)  \n*Zhuotao Tian, Jiequan Cui, Li Jiang, Xiaojuan Qi, Xin Lai, Yixin Chen, Shu Liu, Jiaya Jia*  \nAAAI Conference on Artificial Intelligence (**AAAI**) 2023 - Oral  \n[ SemSeg ] [ CAC ] - [ [arXiv](https:\u002F\u002Farxiv.org\u002Fabs\u002F2303.11633) ] [ [Bib](https:\u002F\u002Fxywu.me\u002Fresearch\u002Fcac\u002Fbib.txt) ] [ [2D Part](https:\u002F\u002Fgithub.com\u002Ftianzhuotao\u002FCAC) ] &rarr; [here](#context-aware-classifier)\n\n\n- **Point Transformer V2: Grouped Vector Attention and Partition-based Pooling**   \n*Xiaoyang Wu, Yixing Lao, Li Jiang, Xihui Liu, Hengshuang Zhao*  \nConference on Neural Information Processing Systems (**NeurIPS**) 2022  \n[ Backbone ] [ PTv2 ] - [ [arXiv](https:\u002F\u002Farxiv.org\u002Fabs\u002F2210.05666) ] [ [Bib](https:\u002F\u002Fxywu.me\u002Fresearch\u002Fptv2\u002Fbib.txt) ] &rarr; [here](#point-transformers)\n\n\n- **Point Transformer**   \n*Hengshuang Zhao, Li Jiang, Jiaya Jia, Philip Torr, Vladlen Koltun*  \nIEEE International Conference on Computer Vision (**ICCV**) 2021 - Oral  \n[ Backbone ] [ PTv1 ] - [ [arXiv](https:\u002F\u002Farxiv.org\u002Fabs\u002F2012.09164) ] [ [Bib](https:\u002F\u002Fhszhao.github.io\u002Fpapers\u002Ficcv21_pointtransformer_bib.txt) ] &rarr; [here](#point-transformers)\n\nAdditionally, **Pointcept** integrates the following excellent work (contain above):  \nBackbone: \n[MinkUNet](https:\u002F\u002Fgithub.com\u002FNVIDIA\u002FMinkowskiEngine) ([here](#sparseunet)),\n[SpUNet](https:\u002F\u002Fgithub.com\u002Ftraveller59\u002Fspconv) ([here](#sparseunet)),\n[SPVCNN](https:\u002F\u002Fgithub.com\u002Fmit-han-lab\u002Fspvnas) ([here](#spvcnn)),\n[OACNNs](https:\u002F\u002Farxiv.org\u002Fabs\u002F2403.14418) ([here](#oa-cnns)),\n[PTv1](https:\u002F\u002Farxiv.org\u002Fabs\u002F2012.09164) ([here](#point-transformers)),\n[PTv2](https:\u002F\u002Farxiv.org\u002Fabs\u002F2210.05666) ([here](#point-transformers)),\n[PTv3](https:\u002F\u002Farxiv.org\u002Fabs\u002F2312.10035) ([here](#point-transformers)),\n[StratifiedFormer](https:\u002F\u002Fgithub.com\u002Fdvlab-research\u002FStratified-Transformer) ([here](#stratified-transformer)),\n[OctFormer](https:\u002F\u002Fgithub.com\u002Foctree-nn\u002Foctformer) ([here](#octformer)),\n[Swin3D](https:\u002F\u002Fgithub.com\u002Fmicrosoft\u002FSwin3D) ([here](#swin3d)),\n[LitePT](https:\u002F\u002Fgithub.com\u002Fprs-eth\u002FLitePT) ([here](#litept));   \nSemantic Segmentation:\n[Mix3d](https:\u002F\u002Fgithub.com\u002Fkumuji\u002Fmix3d) ([here](https:\u002F\u002Fgithub.com\u002FPointcept\u002FPointcept\u002Fblob\u002Fmain\u002Fconfigs\u002Fscannet\u002Fsemseg-spunet-v1m1-0-base.py#L5)),\n[CAC](https:\u002F\u002Farxiv.org\u002Fabs\u002F2303.11633) ([here](#context-aware-classifier));  \nInstance Segmentation: \n[PointGroup](https:\u002F\u002Fgithub.com\u002Fdvlab-research\u002FPointGroup) ([here](#pointgroup));  \nPre-training: 
\n[PointContrast](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002FPointContrast) ([here](#pointcontrast)), \n[Contrastive Scene Contexts](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002FContrastiveSceneContexts) ([here](#contrastive-scene-contexts)),\n[Masked Scene Contrast](https:\u002F\u002Farxiv.org\u002Fabs\u002F2303.14191) ([here](#masked-scene-contrast-msc)),\n[Point Prompt Training](https:\u002F\u002Farxiv.org\u002Fabs\u002F2308.09718) ([here](#point-prompt-training-ppt)),\n[Sonata](https:\u002F\u002Farxiv.org\u002Fabs\u002F2503.16429) ([here](#sonata)),\n[Concerto](https:\u002F\u002Farxiv.org\u002Fabs\u002F2510.23607) ([here](#concerto)),\n[Utonia](https:\u002F\u002Farxiv.org\u002Fabs\u002F2603.03283) ([here](#utonia));  \nDatasets:\n[ScanNet](http:\u002F\u002Fwww.scan-net.org\u002F) ([here](#scannet-v2)), \n[ScanNet200](http:\u002F\u002Fwww.scan-net.org\u002F) ([here](#scannet-v2)),\n[ScanNet++](https:\u002F\u002Fkaldir.vc.in.tum.de\u002Fscannetpp\u002F) ([here](#scannet)),\n[S3DIS](https:\u002F\u002Fdocs.google.com\u002Fforms\u002Fd\u002Fe\u002F1FAIpQLScDimvNMCGhy_rmBA2gHfDu3naktRm6A8BPwAWWDv-Uhm6Shw\u002Fviewform?c=0&w=1) ([here](#s3dis)),\n[ArkitScene](https:\u002F\u002Fgithub.com\u002Fapple\u002FARKitScenes) ([here](#arkitscenes)),\n[HM3D](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fhabitat-matterport3d-dataset\u002F) ([here](#habitat---matterport-3d-hm3d)),\n[Matterport3D](https:\u002F\u002Fniessner.github.io\u002FMatterport\u002F) ([here](#matterport3d)),\n[Structured3D](https:\u002F\u002Fstructured3d-dataset.org\u002F) ([here](#structured3d)),\n[SemanticKITTI](http:\u002F\u002Fwww.semantic-kitti.org\u002F) ([here](#semantickitti)),\n[nuScenes](https:\u002F\u002Fwww.nuscenes.org\u002Fnuscenes) ([here](#nuscenes)),\n[Waymo](https:\u002F\u002Fwaymo.com\u002Fopen\u002F) ([here](#waymo)),\n[ModelNet40](https:\u002F\u002Fmodelnet.cs.princeton.edu\u002F) ([here](#modelnet)),\n[ScanObjectNN](https:\u002F\u002Fhkust-vgd.github.io\u002Fscanobjectnn\u002F) ([here](#scanobjectnn)),\n[ShapeNetPart](https:\u002F\u002Fshapenet.org\u002F) ([here](#shapenetpart)),\n[PartNetE](https:\u002F\u002Fcolin97.github.io\u002FPartSLIP_page\u002F) ([here](#partnete)).\n\n\n## Highlights\n- *Mar 2026* 🚀: **Utonia** code is released with Pointcept v1.7.0 and we provide an easy-to-use pre-trained model for inference, tuning, and visualization in our project **[repository](https:\u002F\u002Fgithub.com\u002FPointcept\u002FUtonia)**.\n- *Oct 2025* : **Concerto** is accepted by NeurIPS 2025! We release the pre-training **[code](#concerto)** along with Pointcept v1.6.1 and provide an easy-to-use pre-trained model for inference, tuning, and visualization in our project **[repository](https:\u002F\u002Fgithub.com\u002FPointcept\u002FConcerto)**.\n- *Apr 2025* : We now support `wandb`, check the [Quick Start](#quick-start) training section for more information. (Thanks @Streakfull for his contribution!)\n- *Mar 2025* : **Sonata** is accepted by CVPR 2025 and selected as one of the **Highlight** presentations (3.0% submissions)! We release the code with Pointcept v1.6.0. We release the pre-training **[code](#sonata)** along with Pointcept v1.6.0 and provide an easy-to-use pre-trained model for inference, tuning, and visualization in our project **[repository](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fsonata)** hosted by Meta.\n- *May 2024*: In v1.5.2, we redesigned the default structure for each dataset for better performance. 
Please **re-preprocess** datasets or **download** our preprocessed datasets from **[here](https:\u002F\u002Fhuggingface.co\u002FPointcept)**.\n- *Apr 2024*: **PTv3** is selected as one of the 90 **Oral** papers (3.3% accepted papers, 0.78% submissions) by CVPR'24!\n- *Mar 2024*: We release code for **OA-CNNs**, accepted by CVPR'24. Issue related to **OA-CNNs** can @Pbihao.\n- *Feb 2024*: **PTv3** and **PPT** are accepted by CVPR'24, another **two** papers by our Pointcept team have also been accepted by CVPR'24 🎉🎉🎉. We will make them publicly available soon!\n- *Dec 2023*: **PTv3** is released on arXiv, and the code is available in Pointcept. PTv3 is an efficient backbone model that achieves SOTA performances across indoor and outdoor scenarios.\n- *Aug 2023*: **PPT** is released on arXiv. PPT presents a multi-dataset pre-training framework that achieves SOTA performance in both **indoor** and **outdoor** scenarios. It is compatible with various existing pre-training frameworks and backbones.  A **pre-release** version of the code is accessible; for those interested, please feel free to contact me directly for access.\n- *Mar 2023*: We released our codebase, **Pointcept**, a highly potent tool for point cloud representation learning and perception. We welcome new work to join the _Pointcept_ family and highly recommend reading [Quick Start](#quick-start) before starting your trail.\n- *Feb 2023*: **MSC** and **CeCo** accepted by CVPR 2023. _MSC_ is a highly efficient and effective pretraining framework that facilitates cross-dataset large-scale pretraining, while _CeCo_ is a segmentation method specifically designed for long-tail datasets. Both approaches are compatible with all existing backbone models in our codebase, and we will soon make the code available for public use.\n- *Jan 2023*: **CAC**, oral work of AAAI 2023, has expanded its 3D result with the incorporation of Pointcept. This addition will allow CAC to serve as a pluggable segmentor within our codebase.\n- *Sep 2022*: **PTv2** accepted by NeurIPS 2022. It is a continuation of the Point Transformer. The proposed GVA theory can apply to most existing attention mechanisms, while Grid Pooling is also a practical addition to existing pooling methods.\n\n## Citation\nIf you find _Pointcept_ useful to your research, please cite our work as encouragement. (੭ˊ꒳​ˋ)੭✧\n```\n@misc{pointcept2023,\n    title={Pointcept: A Codebase for Point Cloud Perception Research},\n    author={Pointcept Contributors},\n    howpublished = {\\url{https:\u002F\u002Fgithub.com\u002FPointcept\u002FPointcept}},\n    year={2023}\n}\n```\n\n## Overview\n\n- [Installation](#installation)\n- [Data Preparation](#data-preparation)\n- [Quick Start](#quick-start)\n- [Model Zoo](#model-zoo)\n- [Acknowledgement](#acknowledgement)\n\n## Installation\n\n### Requirements\n- Ubuntu: 18.04 and above.\n- CUDA: 11.3 and above.\n- PyTorch: 1.10.0 and above.\n\n### Conda Environment\n- **Method 1**: Utilize conda `environment.yml` to create a new environment with one line code:\n  ```bash\n  # Create and activate conda environment named as 'pointcept-torch2.5.0-cu12.4'\n  # cuda: 12.4, pytorch: 2.5.0\n\n  # run `unset CUDA_PATH` if you have installed cuda in your local environment\n  conda env create -f environment.yml --verbose\n  conda activate pointcept-torch2.5.0-cu12.4\n  ```\n\n- **Method 2**: Use our pre-built Docker image and refer to the supported tags [here](https:\u002F\u002Fhub.docker.com\u002Fr\u002Fpointcept\u002Fpointcept\u002Ftags). 
Quickly verify the Docker image on your local machine with the following command:\n  ```bash\n  docker run --gpus all -it --rm pointcept\u002Fpointcept:v1.6.0-pytorch2.5.0-cuda12.4-cudnn9-devel bash\n  git clone https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fsonata\n  cd sonata\n  export PYTHONPATH=.\u002F && python demo\u002F0_pca.py\n  # Ignore the GUI error, we cannot expect a container to have its GUI, right?\n  ```\n\n- **Method 3**: Manually create a conda environment:\n  ```bash\n  conda create -n pointcept python=3.10 -y\n  conda activate pointcept\n  \n  # (Optional) If no CUDA installed\n  conda install nvidia\u002Flabel\u002Fcuda-12.4.1::cuda conda-forge::cudnn conda-forge::gcc=13.2 conda-forge::gxx=13.2 -y\n  \n  conda install ninja -y\n  # Choose version you want here: https:\u002F\u002Fpytorch.org\u002Fget-started\u002Fprevious-versions\u002F\n  conda install pytorch==2.5.0 torchvision==0.20.0 torchaudio==2.5.0 pytorch-cuda=12.4 -c pytorch -y\n  conda install h5py pyyaml -c anaconda -y\n  conda install sharedarray tensorboard tensorboardx wandb yapf addict einops scipy plyfile termcolor timm -c conda-forge -y\n  conda install pytorch-cluster pytorch-scatter pytorch-sparse -c pyg -y\n  pip install torch-geometric\n\n  # spconv (SparseUNet)\n  # refer https:\u002F\u002Fgithub.com\u002Ftraveller59\u002Fspconv\n  pip install spconv-cu124\n\n  # PPT (clip)\n  pip install ftfy regex tqdm\n  pip install git+https:\u002F\u002Fgithub.com\u002Fopenai\u002FCLIP.git\n\n  # transformers and peft\n  pip install transformers==4.50.3\n  pip install peft\n\n  # PTv1 & PTv2 or precise eval\n  cd libs\u002Fpointops\n  # usual\n  python setup.py install\n  # docker & multi GPU arch\n  TORCH_CUDA_ARCH_LIST=\"ARCH LIST\" python setup.py install\n  # e.g. 
7.5: RTX 3000; 8.0: a100 More available in: https:\u002F\u002Fdeveloper.nvidia.com\u002Fcuda-gpus\n  TORCH_CUDA_ARCH_LIST=\"7.5 8.0\" python  setup.py install\n  cd ..\u002F..\n\n  # Open3D (visualization, optional)\n  pip install open3d\n  ```\n\n## Data Preparation\n\n### ScanNet v2\n\nThe preprocessing supports semantic and instance segmentation for both `ScanNet20`, `ScanNet200`, and `ScanNet Data Efficient`.\n- Download the [ScanNet](http:\u002F\u002Fwww.scan-net.org\u002F) v2 dataset.\n- Run preprocessing code for raw ScanNet as follows:\n\n  ```bash\n  # RAW_SCANNET_DIR: the directory of downloaded ScanNet v2 raw dataset.\n  # PROCESSED_SCANNET_DIR: the directory of the processed ScanNet dataset (output dir).\n  python pointcept\u002Fdatasets\u002Fpreprocessing\u002Fscannet\u002Fpreprocess_scannet.py --dataset_root ${RAW_SCANNET_DIR} --output_root ${PROCESSED_SCANNET_DIR}\n  ```\n- (Optional) Download ScanNet Data Efficient files:\n  ```bash\n  # download-scannet.py is the official download script\n  # or follow instructions here: https:\u002F\u002Fkaldir.vc.in.tum.de\u002Fscannet_benchmark\u002Fdata_efficient\u002Fdocumentation#download\n  python download-scannet.py --data_efficient -o ${RAW_SCANNET_DIR}\n  # unzip downloads\n  cd ${RAW_SCANNET_DIR}\u002Ftasks\n  unzip limited-annotation-points.zip\n  unzip limited-reconstruction-scenes.zip\n  # copy files to processed dataset folder\n  mkdir ${PROCESSED_SCANNET_DIR}\u002Ftasks\n  cp -r ${RAW_SCANNET_DIR}\u002Ftasks\u002Fpoints ${PROCESSED_SCANNET_DIR}\u002Ftasks\n  cp -r ${RAW_SCANNET_DIR}\u002Ftasks\u002Fscenes ${PROCESSED_SCANNET_DIR}\u002Ftasks\n  ```\n- (Alternative) Our preprocess data can be directly downloaded [[here](https:\u002F\u002Fhuggingface.co\u002Fdatasets\u002FPointcept\u002Fscannet-compressed)], please agree the official license before download it.\n\n- Link processed dataset to codebase:\n  ```bash\n  # PROCESSED_SCANNET_DIR: the directory of the processed ScanNet dataset.\n  mkdir data\n  ln -s ${PROCESSED_SCANNET_DIR} ${CODEBASE_DIR}\u002Fdata\u002Fscannet\n  ```\n\n### ScanNet++\n- Download the [ScanNet++](https:\u002F\u002Fkaldir.vc.in.tum.de\u002Fscannetpp\u002F) dataset.\n- Run preprocessing code for raw ScanNet++ as follows:\n  ```bash\n  # RAW_SCANNETPP_DIR: the directory of downloaded ScanNet++ raw dataset.\n  # PROCESSED_SCANNETPP_DIR: the directory of the processed ScanNet++ dataset (output dir).\n  # NUM_WORKERS: the number of workers for parallel preprocessing.\n  python pointcept\u002Fdatasets\u002Fpreprocessing\u002Fscannetpp\u002Fpreprocess_scannetpp.py --dataset_root ${RAW_SCANNETPP_DIR} --output_root ${PROCESSED_SCANNETPP_DIR} --num_workers ${NUM_WORKERS}\n  ```\n- Sampling and chunking large point cloud data in train\u002Fval split as follows (only used for training):\n  ```bash\n  # PROCESSED_SCANNETPP_DIR: the directory of the processed ScanNet++ dataset (output dir).\n  # NUM_WORKERS: the number of workers for parallel preprocessing.\n  python pointcept\u002Fdatasets\u002Fpreprocessing\u002Fsampling_chunking_data.py --dataset_root ${PROCESSED_SCANNETPP_DIR} --grid_size 0.01 --chunk_range 6 6 --chunk_stride 3 3 --split train --num_workers ${NUM_WORKERS}\n  python pointcept\u002Fdatasets\u002Fpreprocessing\u002Fsampling_chunking_data.py --dataset_root ${PROCESSED_SCANNETPP_DIR} --grid_size 0.01 --chunk_range 6 6 --chunk_stride 3 3 --split val --num_workers ${NUM_WORKERS}\n  ```\n- Link processed dataset to codebase:\n  ```bash\n  # PROCESSED_SCANNETPP_DIR: the directory of the processed 
ScanNet dataset.\n  mkdir data\n  ln -s ${PROCESSED_SCANNETPP_DIR} ${CODEBASE_DIR}\u002Fdata\u002Fscannetpp\n  ```\n\n### S3DIS\n\n- Download S3DIS data by filling this [Google form](https:\u002F\u002Fdocs.google.com\u002Fforms\u002Fd\u002Fe\u002F1FAIpQLScDimvNMCGhy_rmBA2gHfDu3naktRm6A8BPwAWWDv-Uhm6Shw\u002Fviewform?c=0&w=1). Download the `Stanford3dDataset_v1.2.zip` file and unzip it.\n- Fix error in `Area_5\u002Foffice_19\u002FAnnotations\u002Fceiling` Line 323474 (103.0�0000 => 103.000000).\n- (Optional) Download Full 2D-3D S3DIS dataset (no XYZ) from [here](https:\u002F\u002Fgithub.com\u002Falexsax\u002F2D-3D-Semantics) for parsing normal.\n- Run preprocessing code for S3DIS as follows:\n\n  ```bash\n  # S3DIS_DIR: the directory of downloaded Stanford3dDataset_v1.2 dataset.\n  # RAW_S3DIS_DIR: the directory of Stanford2d3dDataset_noXYZ dataset. (optional, for parsing normal)\n  # PROCESSED_S3DIS_DIR: the directory of processed S3DIS dataset (output dir).\n  \n  # S3DIS without aligned angle\n  python pointcept\u002Fdatasets\u002Fpreprocessing\u002Fs3dis\u002Fpreprocess_s3dis.py --dataset_root ${S3DIS_DIR} --output_root ${PROCESSED_S3DIS_DIR}\n  # S3DIS with aligned angle\n  python pointcept\u002Fdatasets\u002Fpreprocessing\u002Fs3dis\u002Fpreprocess_s3dis.py --dataset_root ${S3DIS_DIR} --output_root ${PROCESSED_S3DIS_DIR} --align_angle\n  # S3DIS with normal vector (recommended, normal is helpful)\n  python pointcept\u002Fdatasets\u002Fpreprocessing\u002Fs3dis\u002Fpreprocess_s3dis.py --dataset_root ${S3DIS_DIR} --output_root ${PROCESSED_S3DIS_DIR} --raw_root ${RAW_S3DIS_DIR} --parse_normal\n  python pointcept\u002Fdatasets\u002Fpreprocessing\u002Fs3dis\u002Fpreprocess_s3dis.py --dataset_root ${S3DIS_DIR} --output_root ${PROCESSED_S3DIS_DIR} --raw_root ${RAW_S3DIS_DIR} --align_angle --parse_normal\n  ```\n\n- (Alternative) Our preprocess data can also be downloaded [[here](https:\u002F\u002Fhuggingface.co\u002Fdatasets\u002FPointcept\u002Fs3dis-compressed\n)] (with normal vector and aligned angle), please agree with the official license before downloading it.\n\n- Link processed dataset to codebase.\n  ```bash\n  # PROCESSED_S3DIS_DIR: the directory of processed S3DIS dataset.\n  mkdir data\n  ln -s ${PROCESSED_S3DIS_DIR} ${CODEBASE_DIR}\u002Fdata\u002Fs3dis\n  ```\n\n\n### ArkitScenes\n\n- Download ArkitScenes 3DOD split with the following commands:\n  ```bash\n  # RAW_AS_DIR: the directory of downloaded Raw ArkitScenes dataset.\n  git clone https:\u002F\u002Fgithub.com\u002Fapple\u002FARKitScenes.git\n  cd ARKitScenes\n  python download_data.py 3dod --download_dir $RAW_AS_DIR --video_id_csv threedod\u002F3dod_train_val_splits.csv\n  ```\n- Run preprocessing code for ArkitScenes as follows:\n  ```bash\n  # RAW_AS_DIR: the directory of downloaded ArkitScenes dataset.\n  # PROCESSED_AS_DIR: the directory of processed ArkitScenes dataset (output dir).\n  # NUM_WORKERS: Number for workers for preprocessing, default same as cpu count (might OOM).\n  cd $POINTCEPT_DIR\n  export PYTHONPATH=.\u002F\n  python pointcept\u002Fdatasets\u002Fpreprocessing\u002Farkitscenes\u002Fpreprocess_arkitscenes_mesh.py --dataset_root $RAW_AS_DIR --output_root $PROCESSED_AS_DIR --num_workers $NUM_WORKERS\n  ```\n\n- (Alternative) Our preprocess data can also be downloaded [[here](https:\u002F\u002Fhuggingface.co\u002Fdatasets\u002FPointcept\u002Farkitscenes-compressed\n)] please read and agree the official [license](https:\u002F\u002Fgithub.com\u002Fapple\u002FARKitScenes?tab=License-1-ov-file#readme) before 
downloading it. (Unzip with the following command:  \n `find .\u002F -name '*.tar.gz' | xargs -n 1 -P 8 -I {} sh -c 'tar -xzvf {}'`)\n\n- Link processed dataset to codebase.\n  ```bash\n  # PROCESSED_AR_DIR: the directory of processed ArkitScenes dataset (output dir).\n  mkdir data\n  ln -s ${PROCESSED_AR_DIR} ${CODEBASE_DIR}\u002Fdata\u002Farkitscenes\n  ```\n\n### Habitat - Matterport 3D (HM3D)\n\n- Download HM3D `hm3d-train-glb-v0.2.tar` and `hm3d-val-glb-v0.2.tar` with the instructions [here](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fhabitat-sim\u002Fblob\u002Fmain\u002FDATASETS.md#habitat-matterport-3d-research-dataset-hm3d) and unzip them.\n- Run preprocessing code for HM3D as follows:\n  ```bash\n  # RAW_HM_DIR: the directory of downloaded HM3D dataset.\n  # PROCESSED_HM_DIR: the directory of processed HM3D dataset (output dir).\n  # NUM_WORKERS: number of workers for preprocessing, default same as cpu count (might OOM).\n  export PYTHONPATH=.\u002F\n  python pointcept\u002Fdatasets\u002Fpreprocessing\u002Fhm3d\u002Fpreprocess_hm3d.py --dataset_root $RAW_HM_DIR --output_root $PROCESSED_HM_DIR --density 0.02 --num_workers $NUM_WORKERS\n  ```\n\n- (Alternative) Our preprocessed data can also be downloaded [[here](https:\u002F\u002Fhuggingface.co\u002Fdatasets\u002FPointcept\u002Fhm3d-compressed)], please read and agree to the official [license](https:\u002F\u002Fmatterport.com\u002Flegal\u002Fmatterport-end-user-license-agreement-academic-use-model-data) before downloading it. (Unzip with the following command:  \n `find .\u002F -name '*.tar.gz' | xargs -n 1 -P 4 -I {} sh -c 'tar -xzvf {}'`)\n\n- Link processed dataset to codebase.\n  ```bash\n  # PROCESSED_HM_DIR: the directory of processed HM3D dataset (output dir).\n  mkdir data\n  ln -s ${PROCESSED_HM_DIR} ${CODEBASE_DIR}\u002Fdata\u002Fhm3d\n  ```\n\n### Matterport3D\n- Follow [this page](https:\u002F\u002Fniessner.github.io\u002FMatterport\u002F#download) to request access to the dataset.\n- Download the \"region_segmentation\" type, which represents the division of a scene into individual rooms.\n  ```bash\n  # download-mp.py is the official download script\n  # MATTERPORT3D_DIR: the directory of downloaded Matterport3D dataset.\n  python download-mp.py -o {MATTERPORT3D_DIR} --type region_segmentations\n  ```\n- Unzip the region_segmentations data:\n  ```bash\n  # MATTERPORT3D_DIR: the directory of downloaded Matterport3D dataset.\n  python pointcept\u002Fdatasets\u002Fpreprocessing\u002Fmatterport3d\u002Funzip_matterport3d_region_segmentation.py --dataset_root {MATTERPORT3D_DIR}\n  ```\n- Run preprocessing code for Matterport3D as follows:\n  ```bash\n  # MATTERPORT3D_DIR: the directory of downloaded Matterport3D dataset.\n  # PROCESSED_MATTERPORT3D_DIR: the directory of processed Matterport3D dataset (output dir).\n  # NUM_WORKERS: the number of workers for this preprocessing.\n  python pointcept\u002Fdatasets\u002Fpreprocessing\u002Fmatterport3d\u002Fpreprocess_matterport3d_mesh.py --dataset_root ${MATTERPORT3D_DIR} --output_root ${PROCESSED_MATTERPORT3D_DIR} --num_workers ${NUM_WORKERS}\n  ```\n- Link processed dataset to codebase.\n  ```bash\n  # PROCESSED_MATTERPORT3D_DIR: the directory of processed Matterport3D dataset (output dir).\n  mkdir data\n  ln -s ${PROCESSED_MATTERPORT3D_DIR} ${CODEBASE_DIR}\u002Fdata\u002Fmatterport3d\n  ```\n\nFollowing the instruction of [OpenRooms](https:\u002F\u002Fgithub.com\u002FViLab-UCSD\u002FOpenRooms), we remapped Matterport3D's categories to ScanNet 20 semantic categories with the addition 
of a ceiling category.\n* (Alternative) Our preprocess data can also be downloaded [here](https:\u002F\u002Fhuggingface.co\u002Fdatasets\u002FPointcept\u002Fmatterport3d-compressed), please agree the official license before download it.\n\n\n### Structured3D\n\n- Download Structured3D panorama related and perspective (full) related zip files by filling this [Google form](https:\u002F\u002Fdocs.google.com\u002Fforms\u002Fd\u002Fe\u002F1FAIpQLSc0qtvh4vHSoZaW6UvlXYy79MbcGdZfICjh4_t4bYofQIVIdw\u002Fviewform?pli=1) (no need to unzip them).\n- Organize all downloaded zip file in one folder (`${STRUCT3D_DIR}`).\n- Run preprocessing code for Structured3D as follows:\n  ```bash\n  # STRUCT3D_DIR: the directory of downloaded Structured3D dataset.\n  # PROCESSED_STRUCT3D_DIR: the directory of processed Structured3D dataset (output dir).\n  # NUM_WORKERS: Number for workers for preprocessing, default same as cpu count (might OOM).\n  export PYTHONPATH=.\u002F\n  python pointcept\u002Fdatasets\u002Fpreprocessing\u002Fstructured3d\u002Fpreprocess_structured3d.py --dataset_root ${STRUCT3D_DIR} --output_root ${PROCESSED_STRUCT3D_DIR} --num_workers ${NUM_WORKERS} --grid_size 0.01 --fuse_prsp --fuse_pano\n  ```\nFollowing the instruction of [Swin3D](https:\u002F\u002Farxiv.org\u002Fabs\u002F2304.06906), we keep 25 categories with frequencies of more than 0.001, out of the original 40 categories.\n\n[\u002F\u002F]: # (- &#40;Alternative&#41; Our preprocess data can also be downloaded [[here]&#40;&#41;], please agree the official license before download it.)\n\n- (Alternative) Our preprocess data can also be downloaded [[here](https:\u002F\u002Fhuggingface.co\u002Fdatasets\u002FPointcept\u002Fstructured3d-compressed\n)] (with perspective views and panorama view, 471.7G after unzipping), please agree the official license before download it. (Unzip with the following command:  \n `find .\u002F -name '*.tar.gz' | xargs -n 1 -P 15 -I {} sh -c 'tar -xzvf {}'`)\n\n- Link processed dataset to codebase.\n  ```bash\n  # PROCESSED_STRUCT3D_DIR: the directory of processed Structured3D dataset (output dir).\n  mkdir data\n  ln -s ${PROCESSED_STRUCT3D_DIR} ${CODEBASE_DIR}\u002Fdata\u002Fstructured3d\n  ```\n\n### SemanticKITTI\n- Download [SemanticKITTI](http:\u002F\u002Fwww.semantic-kitti.org\u002Fdataset.html#download) dataset.\n- Link dataset to codebase.\n  ```bash\n  # SEMANTIC_KITTI_DIR: the directory of SemanticKITTI dataset.\n  # |- SEMANTIC_KITTI_DIR\n  #   |- dataset\n  #     |- sequences\n  #       |- 00\n  #       |- 01\n  #       |- ...\n  \n  mkdir -p data\n  ln -s ${SEMANTIC_KITTI_DIR} ${CODEBASE_DIR}\u002Fdata\u002Fsemantic_kitti\n  ```\n\n### nuScenes\n- Download the official [NuScene](https:\u002F\u002Fwww.nuscenes.org\u002Fnuscenes#download) dataset (with Lidar Segmentation) and organize the downloaded files as follows:\n  ```bash\n  NUSCENES_DIR\n  │── samples\n  │── sweeps\n  │── lidarseg\n  ...\n  │── v1.0-trainval \n  │── v1.0-test\n  ```\n- Run information preprocessing code (modified from OpenPCDet) for nuScenes as follows:\n  ```bash\n  # NUSCENES_DIR: the directory of downloaded nuScenes dataset.\n  # PROCESSED_NUSCENES_DIR: the directory of processed nuScenes dataset (output dir).\n  # MAX_SWEEPS: Max number of sweeps. 
Default: 10.\n  pip install nuscenes-devkit pyquaternion\n  python pointcept\u002Fdatasets\u002Fpreprocessing\u002Fnuscenes\u002Fpreprocess_nuscenes_info.py --dataset_root ${NUSCENES_DIR} --output_root ${PROCESSED_NUSCENES_DIR} --max_sweeps ${MAX_SWEEPS} --with_camera\n  ```\n- (Alternative) Our preprocess nuScenes information data can also be downloaded [[here](\nhttps:\u002F\u002Fhuggingface.co\u002Fdatasets\u002FPointcept\u002Fnuscenes-compressed)] (only processed information, still need to download raw dataset and link to the folder), please agree the official license before download it.\n\n- Link raw dataset to processed NuScene dataset folder:\n  ```bash\n  # NUSCENES_DIR: the directory of downloaded nuScenes dataset.\n  # PROCESSED_NUSCENES_DIR: the directory of processed nuScenes dataset (output dir).\n  ln -s ${NUSCENES_DIR} {PROCESSED_NUSCENES_DIR}\u002Fraw\n  ```\n  then the processed nuscenes folder is organized as follows:\n  ```bash\n  nuscene\n  |── raw\n      │── samples\n      │── sweeps\n      │── lidarseg\n      ...\n      │── v1.0-trainval\n      │── v1.0-test\n  |── info\n  ```\n\n- Link processed dataset to codebase.\n  ```bash\n  # PROCESSED_NUSCENES_DIR: the directory of processed nuScenes dataset (output dir).\n  mkdir data\n  ln -s ${PROCESSED_NUSCENES_DIR} ${CODEBASE_DIR}\u002Fdata\u002Fnuscenes\n  ```\n\n### Waymo\n- Download the official [Waymo](https:\u002F\u002Fwaymo.com\u002Fopen\u002Fdownload\u002F) dataset (v1.4.3) and organize the downloaded files as follows:\n  ```bash\n  WAYMO_RAW_DIR\n  │── training\n  │── validation\n  │── testing\n  ```\n- Install the following dependence:\n  ```bash\n  # If shows \"No matching distribution found\", download whl directly from Pypi and install the package.\n  conda create -n waymo python=3.10 -y\n  conda activate waymo\n  pip install waymo-open-dataset-tf-2-12-0\n  ```\n- Run the preprocessing code as follows:\n  ```bash\n  # WAYMO_DIR: the directory of the downloaded Waymo dataset.\n  # PROCESSED_WAYMO_DIR: the directory of the processed Waymo dataset (output dir).\n  # NUM_WORKERS: num workers for preprocessing\n  python pointcept\u002Fdatasets\u002Fpreprocessing\u002Fwaymo\u002Fpreprocess_waymo.py --dataset_root ${WAYMO_DIR} --output_root ${PROCESSED_WAYMO_DIR} --splits training validation --num_workers ${NUM_WORKERS}\n  ```\n\n- Link processed dataset to the codebase.\n  ```bash\n  # PROCESSED_WAYMO_DIR: the directory of the processed Waymo dataset (output dir).\n  mkdir data\n  ln -s ${PROCESSED_WAYMO_DIR} ${CODEBASE_DIR}\u002Fdata\u002Fwaymo\n  ```\n\n### ModelNet40\n- Download [modelnet40_normal_resampled.zip](https:\u002F\u002Fhuggingface.co\u002Fdatasets\u002FPointcept\u002Fmodelnet40_normal_resampled-compressed) and unzip.\n- Link dataset to the codebase.\n  ```bash\n  mkdir -p data\n  ln -s ${MODELNET_DIR} ${CODEBASE_DIR}\u002Fdata\u002Fmodelnet40_normal_resampled\n  ```\n\n### ScanObjectNN\n  - Download the [ScanObjectNN](https:\u002F\u002Fforms.gle\u002FZZRnnmaUdwfRucoy7) dataset, including `h5_files.zip` and `raw\u002Fobject_dataset.zip`. 
Unzip them to \\${BENCHMARK_SCANOBJECTNN_DIR} and ${RAW_SCANOBJECTNN_DIR}.\n  ```\n  ln -s ${BENCHMARK_SCANOBJECTNN_DIR} data\u002Fscanobject_eval\n  ```\n\n### ShapeNetPart\n  - Download [ShapeNetPart](https:\u002F\u002Fdrive.usercontent.google.com\u002Fdownload?id=1W3SEE-dY1sxvlECcOwWSDYemwHEUbJIS&authuser=0).\n  - Link dataset to the codebase.\n  ```bash\n  mkdir -p data\n  ln -s ${RAW_SHAPENETPART_DIR} ${CODEBASE_DIR}\u002Fdata\u002F\n  ```\n\n### PartNetE\n - Download [PartNetE](https:\u002F\u002Fdrive.google.com\u002Fdrive\u002Fu\u002F0\u002Ffolders\u002F13boiefNs2XvhoSvvDiaOATKAPB7XbE6g) (data.zip)\n - Run preprocessing code for raw PartNetE as follows:\n\n  ```bash\n  # RAW_PARTNETE_DIR: the directory of downloaded PartNetE dataset.\n  python pointcept\u002Fdatasets\u002Fpreprocessing\u002Fpartnete\u002Fpreprocess_partnete.py --dataset_root ${RAW_PARTNETE_DIR}\n  ```\n - Link dataset to the codebase.\n  ```bash\n  mkdir -p data\n  ln -s ${RAW_PARTNETE_DIR} ${CODEBASE_DIR}\u002Fdata\u002F\n  ```\n\n## Quick Start\n\n### Training\n**Train from scratch.** The training processing is based on configs in `configs` folder. \nThe training script will generate an experiment folder in `exp` folder and backup essential code in the experiment folder.\nTraining config, log, tensorboard, and checkpoints will also be saved into the experiment folder during the training process.\n```bash\nexport CUDA_VISIBLE_DEVICES=${CUDA_VISIBLE_DEVICES}\n# Script (Recommended)\nsh scripts\u002Ftrain.sh -p ${INTERPRETER_PATH} -g ${NUM_GPU} -d ${DATASET_NAME} -c ${CONFIG_NAME} -n ${EXP_NAME}\n# Direct\nexport PYTHONPATH=.\u002F\npython tools\u002Ftrain.py --config-file ${CONFIG_PATH} --num-gpus ${NUM_GPU} --options save_path=${SAVE_PATH}\n```\n\nFor example:\n```bash\n# By script (Recommended)\n# -p is default set as python and can be ignored\nsh scripts\u002Ftrain.sh -p python -d scannet -c semseg-pt-v2m2-0-base -n semseg-pt-v2m2-0-base\n# Direct\nexport PYTHONPATH=.\u002F\npython tools\u002Ftrain.py --config-file configs\u002Fscannet\u002Fsemseg-pt-v2m2-0-base.py --options save_path=exp\u002Fscannet\u002Fsemseg-pt-v2m2-0-base\n```\n**Resume training from checkpoint.** If the training process is interrupted by accident, the following script can resume training from a given checkpoint.\n```bash\nexport CUDA_VISIBLE_DEVICES=${CUDA_VISIBLE_DEVICES}\n# Script (Recommended)\n# simply add \"-r true\"\nsh scripts\u002Ftrain.sh -p ${INTERPRETER_PATH} -g ${NUM_GPU} -d ${DATASET_NAME} -c ${CONFIG_NAME} -n ${EXP_NAME} -r true\n# Direct\nexport PYTHONPATH=.\u002F\npython tools\u002Ftrain.py --config-file ${CONFIG_PATH} --num-gpus ${NUM_GPU} --options save_path=${SAVE_PATH} resume=True weight=${CHECKPOINT_PATH}\n```\n**Weights and Biases.**\nPointcept by default enables both `tensorboard` and `wandb`. There are some usage notes related to `wandb`:\n1. Disable by set `enable_wandb=False`;\n2. Sync with  `wandb` remote server by `wandb login` in the terminal or set `wandb_key=YOUR_WANDB_KEY` in config.\n3. The project name is \"Pointcept\" by default, custom it to your research project name by setting `wandb_project=YOUR_PROJECT_NAME` (e.g. Sonata-Dev, PointTransformerV3-Dev)\n\n### Testing\nDuring training, model evaluation is performed on point clouds after grid sampling (voxelization), providing an initial assessment of model performance. ~~However, to obtain precise evaluation results, testing is **essential**~~ *(now we automatically run the testing process after training with the `PreciseEvaluation` hook)*. 
The testing process involves subsampling a dense point cloud into a sequence of voxelized point clouds, ensuring comprehensive coverage of all points. These sub-results are then predicted and collected to form a complete prediction of the entire point cloud. This approach yields  higher evaluation results compared to simply mapping\u002Finterpolating the prediction. In addition, our testing code supports TTA (test time augmentation) testing, which further enhances the stability of evaluation performance.\n\n```bash\n# By script (Based on experiment folder created by training script)\nsh scripts\u002Ftest.sh -p ${INTERPRETER_PATH} -g ${NUM_GPU} -d ${DATASET_NAME} -n ${EXP_NAME} -w ${CHECKPOINT_NAME}\n# Direct\nexport PYTHONPATH=.\u002F\npython tools\u002Ftest.py --config-file ${CONFIG_PATH} --num-gpus ${NUM_GPU} --options save_path=${SAVE_PATH} weight=${CHECKPOINT_PATH}\n```\nFor example:\n```bash\n# By script (Based on experiment folder created by training script)\n# -p is default set as python and can be ignored\n# -w is default set as model_best and can be ignored\nsh scripts\u002Ftest.sh -p python -d scannet -n semseg-pt-v2m2-0-base -w model_best\n# Direct\nexport PYTHONPATH=.\u002F\npython tools\u002Ftest.py --config-file configs\u002Fscannet\u002Fsemseg-pt-v2m2-0-base.py --options save_path=exp\u002Fscannet\u002Fsemseg-pt-v2m2-0-base weight=exp\u002Fscannet\u002Fsemseg-pt-v2m2-0-base\u002Fmodel\u002Fmodel_best.pth\n```\n\nThe TTA can be disabled by replace `data.test.test_cfg.aug_transform = [...]` with:\n\n```python\ndata = dict(\n    train = dict(...),\n    val = dict(...),\n    test = dict(\n        ...,\n        test_cfg = dict(\n            ...,\n            aug_transform = [\n                [dict(type=\"RandomRotateTargetAngle\", angle=[0], axis=\"z\", center=[0, 0, 0], p=1)]\n            ]\n        )\n    )\n)\n```\n\n### Offset\n`Offset` is the separator of point clouds in batch data, and it is similar to the concept of `Batch` in PyG. \nA visual illustration of batch and offset is as follows:\n\u003Cp align=\"center\">\n    \u003C!-- pypi-strip -->\n    \u003Cpicture>\n    \u003Csource media=\"(prefers-color-scheme: dark)\" srcset=\"https:\u002F\u002Fraw.githubusercontent.com\u002Fpointcept\u002Fassets\u002Fmain\u002Fpointcept\u002Foffset_dark.png\">\n    \u003Csource media=\"(prefers-color-scheme: light)\" srcset=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FPointcept_Pointcept_readme_83d25dfed8e3.png\">\n    \u003C!-- \u002Fpypi-strip -->\n    \u003Cimg alt=\"pointcept\" src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FPointcept_Pointcept_readme_83d25dfed8e3.png\" width=\"480\">\n    \u003C!-- pypi-strip -->\n    \u003C\u002Fpicture>\u003Cbr>\n    \u003C!-- \u002Fpypi-strip -->\n\u003C\u002Fp>\n\n## Model Zoo\n### 1. Backbones and Semantic Segmentation\n#### SparseUNet\n\n_Pointcept_ provides `SparseUNet` implemented by `SpConv` and `MinkowskiEngine`. The SpConv version is recommended since SpConv is easy to install and faster than MinkowskiEngine. 
Meanwhile, SpConv is also widely applied in outdoor perception.\n\n- **SpConv (recommend)**\n\nThe SpConv version `SparseUNet` in the codebase was fully rewrite from `MinkowskiEngine` version, example running script is as follows:\n\n```bash\n# ScanNet val\nsh scripts\u002Ftrain.sh -g 4 -d scannet -c semseg-spunet-v1m1-0-base -n semseg-spunet-v1m1-0-base\n# ScanNet200\nsh scripts\u002Ftrain.sh -g 4 -d scannet200 -c semseg-spunet-v1m1-0-base -n semseg-spunet-v1m1-0-base\n# S3DIS\nsh scripts\u002Ftrain.sh -g 4 -d s3dis -c semseg-spunet-v1m1-0-base -n semseg-spunet-v1m1-0-base\n# S3DIS (with normal)\nsh scripts\u002Ftrain.sh -g 4 -d s3dis -c semseg-spunet-v1m1-0-cn-base -n semseg-spunet-v1m1-0-cn-base\n# SemanticKITTI\nsh scripts\u002Ftrain.sh -g 4 -d semantic_kitti -c semseg-spunet-v1m1-0-base -n semseg-spunet-v1m1-0-base\n# nuScenes\nsh scripts\u002Ftrain.sh -g 4 -d nuscenes -c semseg-spunet-v1m1-0-base -n semseg-spunet-v1m1-0-base\n# ModelNet40\nsh scripts\u002Ftrain.sh -g 2 -d modelnet40 -c cls-spunet-v1m1-0-base -n cls-spunet-v1m1-0-base\n\n# ScanNet Data Efficient\nsh scripts\u002Ftrain.sh -g 4 -d scannet -c semseg-spunet-v1m1-2-efficient-la20 -n semseg-spunet-v1m1-2-efficient-la20\nsh scripts\u002Ftrain.sh -g 4 -d scannet -c semseg-spunet-v1m1-2-efficient-la50 -n semseg-spunet-v1m1-2-efficient-la50\nsh scripts\u002Ftrain.sh -g 4 -d scannet -c semseg-spunet-v1m1-2-efficient-la100 -n semseg-spunet-v1m1-2-efficient-la100\nsh scripts\u002Ftrain.sh -g 4 -d scannet -c semseg-spunet-v1m1-2-efficient-la200 -n semseg-spunet-v1m1-2-efficient-la200\nsh scripts\u002Ftrain.sh -g 4 -d scannet -c semseg-spunet-v1m1-2-efficient-lr1 -n semseg-spunet-v1m1-2-efficient-lr1\nsh scripts\u002Ftrain.sh -g 4 -d scannet -c semseg-spunet-v1m1-2-efficient-lr5 -n semseg-spunet-v1m1-2-efficient-lr5\nsh scripts\u002Ftrain.sh -g 4 -d scannet -c semseg-spunet-v1m1-2-efficient-lr10 -n semseg-spunet-v1m1-2-efficient-lr10\nsh scripts\u002Ftrain.sh -g 4 -d scannet -c semseg-spunet-v1m1-2-efficient-lr20 -n semseg-spunet-v1m1-2-efficient-lr20\n\n# Profile model run time\nsh scripts\u002Ftrain.sh -g 4 -d scannet -c semseg-spunet-v1m1-0-enable-profiler -n semseg-spunet-v1m1-0-enable-profiler\n```\n\n- **MinkowskiEngine**\n\nThe MinkowskiEngine version `SparseUNet` in the codebase was modified from the original MinkowskiEngine repo, and example running scripts are as follows:\n1. Install MinkowskiEngine, refer https:\u002F\u002Fgithub.com\u002FNVIDIA\u002FMinkowskiEngine\n2. Training with the following example scripts:\n```bash\n# Uncomment \"# from .sparse_unet import *\" in \"pointcept\u002Fmodels\u002F__init__.py\"\n# Uncomment \"# from .mink_unet import *\" in \"pointcept\u002Fmodels\u002Fsparse_unet\u002F__init__.py\"\n# ScanNet\nsh scripts\u002Ftrain.sh -g 4 -d scannet -c semseg-minkunet34c-0-base -n semseg-minkunet34c-0-base\n# ScanNet200\nsh scripts\u002Ftrain.sh -g 4 -d scannet200 -c semseg-minkunet34c-0-base -n semseg-minkunet34c-0-base\n# S3DIS\nsh scripts\u002Ftrain.sh -g 4 -d s3dis -c semseg-minkunet34c-0-base -n semseg-minkunet34c-0-base\n# SemanticKITTI\nsh scripts\u002Ftrain.sh -g 2 -d semantic_kitti -c semseg-minkunet34c-0-base -n semseg-minkunet34c-0-base\n```\n\n#### OA-CNNs\nIntroducing Omni-Adaptive 3D CNNs (**OA-CNNs**), a family of networks that integrates a lightweight module to greatly enhance the adaptivity of sparse CNNs at minimal computational cost. 
Without any self-attention modules, **OA-CNNs** favorably surpass point transformers in terms of accuracy in both indoor and outdoor scenes, with much less latency and memory cost. Issue related to **OA-CNNs** can @Pbihao.\n```bash\n# ScanNet\nsh scripts\u002Ftrain.sh -g 4 -d scannet -c semseg-oacnns-v1m1-0-base -n semseg-oacnns-v1m1-0-base\n```\n\n#### Point Transformers\n- **LitePT**\n\nLitePT (CVPR 2026) is a state-of-the-art point cloud backbone that delivers superior or competitive performance with significantly improved efficiency compared to prior point Transformers. \n\n1. Additional requirements:\n\nCompile CUDA implementation of PointROPE. Otherwise, the system will automatically fall back to a slower PyTorch implementation. \n\n```bash\ncd libs\u002Fpointrope\npython setup.py install\ncd ..\u002F..\n```\n\n2. Example running scripts:\n\nThe model is registered as `LitePT-v1` and shared across small\u002Fbase\u002Flarge variants. In config filenames, `v1m1` refers to the lightweight decoder (no conv\u002Fattn), while `v1m2` uses a decoder with conv or attention at selected stages. See **Decoder design** in Sec 4.1 in the paper for details.\n\n```bash\n### NuScenes + LitePT-S\nsh scripts\u002Ftrain.sh -g 4 -d nuscenes -c semseg-litept-v1m1-0-small -n semseg-litept-v1m1-0-small\n### Waymo + LitePT-S\nsh scripts\u002Ftrain.sh -g 4 -d waymo -c semseg-litept-v1m1-0-small -n semseg-litept-v1m1-0-small\n### ScanNet + LitePT-S\nsh scripts\u002Ftrain.sh -g 4 -d scannet -c semseg-litept-v1m1-0-small -n semseg-litept-v1m1-0-small\n### Structured3D + LitePT-S\nsh scripts\u002Ftrain.sh -g 16 -d structured3d -c semseg-litept-v1m1-0-small -n semseg-litept-v1m1-0-small\n### Structured3D + LitePT-B\nsh scripts\u002Ftrain.sh -g 16 -d structured3d -c semseg-litept-v1m1-0-base -n semseg-litept-v1m1-0-base\n### Structured3D + LitePT-L\nsh scripts\u002Ftrain.sh -g 16 -d structured3d -c semseg-litept-v1m1-0-large -n semseg-litept-v1m1-0-large\n```\n\nDetailed instructions and weights are available in the [project repository](https:\u002F\u002Fgithub.com\u002Fprs-eth\u002FLitePT). \n\n- **PTv3**\n\n[PTv3](https:\u002F\u002Farxiv.org\u002Fabs\u002F2312.10035) is an efficient backbone model that achieves SOTA performances across indoor and outdoor scenarios. The full PTv3 relies on FlashAttention, while FlashAttention relies on CUDA 11.6 and above, make sure your local Pointcept environment satisfies the requirements.\n\nIf you can not upgrade your local environment to satisfy the requirements (CUDA >= 11.6), then you can disable FlashAttention by setting the model parameter `enable_flash` to `false` and reducing the `enc_patch_size` and `dec_patch_size` to a level (e.g. 128).\n\nFlashAttention force disables RPE and forces the accuracy reduced to fp16. If you require these features, please disable `enable_flash` and adjust `enable_rpe`, `upcast_attention` and`upcast_softmax`.\n\nDetailed instructions and experiment records (containing weights) are available on the [project repository](https:\u002F\u002Fgithub.com\u002FPointcept\u002FPointTransformerV3). 
Example running scripts are as follows:\n```bash\n# Scratched ScanNet\nsh scripts\u002Ftrain.sh -g 4 -d scannet -c semseg-pt-v3m1-0-base -n semseg-pt-v3m1-0-base\n# PPT joint training (ScanNet + Structured3D) and evaluate in ScanNet\nsh scripts\u002Ftrain.sh -g 8 -d scannet -c semseg-pt-v3m1-1-ppt-extreme -n semseg-pt-v3m1-1-ppt-extreme\n\n# Scratched ScanNet200\nsh scripts\u002Ftrain.sh -g 4 -d scannet200 -c semseg-pt-v3m1-0-base -n semseg-pt-v3m1-0-base\n# Fine-tuning from PPT joint training (ScanNet + Structured3D) with ScanNet200\n# PTV3_PPT_WEIGHT_PATH: Path to model weight trained by PPT multi-dataset joint training\n# e.g. exp\u002Fscannet\u002Fsemseg-pt-v3m1-1-ppt-extreme\u002Fmodel\u002Fmodel_best.pth\nsh scripts\u002Ftrain.sh -g 4 -d scannet200 -c semseg-pt-v3m1-1-ppt-ft -n semseg-pt-v3m1-1-ppt-ft -w ${PTV3_PPT_WEIGHT_PATH}\n\n# Scratched ScanNet++\nsh scripts\u002Ftrain.sh -g 4 -d scannetpp -c semseg-pt-v3m1-0-base -n semseg-pt-v3m1-0-base\n# Scratched ScanNet++ test\nsh scripts\u002Ftrain.sh -g 4 -d scannetpp -c semseg-pt-v3m1-1-submit -n semseg-pt-v3m1-1-submit\n\n\n# Scratched S3DIS\nsh scripts\u002Ftrain.sh -g 4 -d s3dis -c semseg-pt-v3m1-0-base -n semseg-pt-v3m1-0-base\n# an example for disabling flash_attention and enabling rpe.\nsh scripts\u002Ftrain.sh -g 4 -d s3dis -c semseg-pt-v3m1-1-rpe -n semseg-pt-v3m1-0-rpe\n# PPT joint training (ScanNet + S3DIS + Structured3D) and evaluate in ScanNet\nsh scripts\u002Ftrain.sh -g 8 -d s3dis -c semseg-pt-v3m1-1-ppt-extreme -n semseg-pt-v3m1-1-ppt-extreme\n# S3DIS 6-fold cross validation\n# 1. The default configs are evaluated on Area_5; modify the \"data.train.split\", \"data.val.split\", and \"data.test.split\" to make the config evaluated on Area_1 ~ Area_6 respectively.\n# 2. Train and evaluate the model on each split of areas and gather result files located in \"exp\u002Fs3dis\u002FEXP_NAME\u002Fresult\u002FArea_x.pth\" in one single folder, noted as RECORD_FOLDER.\n# 3. 
Run the following script to get S3DIS 6-fold cross validation performance:\nexport PYTHONPATH=.\u002F\npython tools\u002Ftest_s3dis_6fold.py --record_root ${RECORD_FOLDER}\n\n# Scratched nuScenes\nsh scripts\u002Ftrain.sh -g 4 -d nuscenes -c semseg-pt-v3m1-0-base -n semseg-pt-v3m1-0-base\n# Scratched Waymo\nsh scripts\u002Ftrain.sh -g 4 -d waymo -c semseg-pt-v3m1-0-base -n semseg-pt-v3m1-0-base\n\n# More configs and exp records for PTv3 will be available soon.\n```\n\nIndoor semantic segmentation  \n| Model | Benchmark | Additional Data | Num GPUs | Val mIoU | Config | Tensorboard | Exp Record |\n| :---: | :---: |:---------------:| :---: | :---: | :---: | :---: | :---: |\n| PTv3 | ScanNet |     &cross;     | 4 | 77.6% | [link](https:\u002F\u002Fgithub.com\u002FPointcept\u002FPointcept\u002Fblob\u002Fmain\u002Fconfigs\u002Fscannet\u002Fsemseg-pt-v3m1-0-base.py) | [link](https:\u002F\u002Fhuggingface.co\u002FPointcept\u002FPointTransformerV3\u002Ftensorboard) | [link](https:\u002F\u002Fhuggingface.co\u002FPointcept\u002FPointTransformerV3\u002Ftree\u002Fmain\u002Fscannet-semseg-pt-v3m1-0-base) |\n| PTv3 + PPT | ScanNet |     &check;     | 8 | 78.5% | [link](https:\u002F\u002Fgithub.com\u002FPointcept\u002FPointcept\u002Fblob\u002Fmain\u002Fconfigs\u002Fscannet\u002Fsemseg-pt-v3m1-1-ppt-extreme.py) | [link](https:\u002F\u002Fhuggingface.co\u002FPointcept\u002FPointTransformerV3\u002Ftensorboard) | [link](https:\u002F\u002Fhuggingface.co\u002FPointcept\u002FPointTransformerV3\u002Ftree\u002Fmain\u002Fscannet-semseg-pt-v3m1-1-ppt-extreme) |\n| PTv3 | ScanNet200 |     &cross;     | 4 | 35.3% | [link](https:\u002F\u002Fgithub.com\u002FPointcept\u002FPointcept\u002Fblob\u002Fmain\u002Fconfigs\u002Fscannet200\u002Fsemseg-pt-v3m1-0-base.py) | [link](https:\u002F\u002Fhuggingface.co\u002FPointcept\u002FPointTransformerV3\u002Ftensorboard) |[link](https:\u002F\u002Fhuggingface.co\u002FPointcept\u002FPointTransformerV3\u002Ftree\u002Fmain\u002Fscannet200-semseg-pt-v3m1-0-base)|\n| PTv3 | S3DIS (Area5) |     &cross;     | 4 | 73.6% | [link](https:\u002F\u002Fgithub.com\u002FPointcept\u002FPointcept\u002Fblob\u002Fmain\u002Fconfigs\u002Fs3dis\u002Fsemseg-pt-v3m1-0-rpe.py) | [link](https:\u002F\u002Fhuggingface.co\u002FPointcept\u002FPointTransformerV3\u002Ftensorboard) | [link](https:\u002F\u002Fhuggingface.co\u002FPointcept\u002FPointTransformerV3\u002Ftree\u002Fmain\u002Fs3dis-semseg-pt-v3m1-0-rpe) |\n| PTv3 + PPT | S3DIS (Area5) |     &check;     | 8 | 75.4% | [link](https:\u002F\u002Fgithub.com\u002FPointcept\u002FPointcept\u002Fblob\u002Fmain\u002Fconfigs\u002Fs3dis\u002Fsemseg-pt-v3m1-1-ppt-extreme.py) | [link](https:\u002F\u002Fhuggingface.co\u002FPointcept\u002FPointTransformerV3\u002Ftensorboard) | [link](https:\u002F\u002Fhuggingface.co\u002FPointcept\u002FPointTransformerV3\u002Ftree\u002Fmain\u002Fs3dis-semseg-pt-v3m1-1-ppt-extreme) |\n_**\\*Released model weights are trained for v1.5.1, weights for v1.5.2 and later is still ongoing.**_\n\n- **PTv2 mode2**\n\nThe original PTv2 was trained on 4 * RTX a6000 (48G memory). Even enabling AMP, the memory cost of the original PTv2 is slightly larger than 24G. Considering GPUs with 24G memory are much more accessible, I tuned the PTv2 on the latest Pointcept and made it runnable on 4 * RTX 3090 machines.\n\n`PTv2 Mode2` enables AMP and disables _Position Encoding Multiplier_ & _Grouped Linear_. 
During our further research, we found that precise coordinates are not necessary for point cloud understanding (replacing precise coordinates with grid coordinates does not affect performance; SparseUNet is one example). As for Grouped Linear, my implementation of Grouped Linear seems to cost more memory than the Linear layer provided by PyTorch. Benefiting from the codebase and better parameter tuning, we also alleviated the overfitting problem. The reproduced performance is even better than the results reported in our paper.\n\nExample running scripts are as follows:\n\n```bash\n# ptv2m2: PTv2 mode2, disable PEM & Grouped Linear, GPU memory cost \u003C 24G (recommended)\n# ScanNet\nsh scripts\u002Ftrain.sh -g 4 -d scannet -c semseg-pt-v2m2-0-base -n semseg-pt-v2m2-0-base\nsh scripts\u002Ftrain.sh -g 4 -d scannet -c semseg-pt-v2m2-3-lovasz -n semseg-pt-v2m2-3-lovasz\n\n# ScanNet test\nsh scripts\u002Ftrain.sh -g 4 -d scannet -c semseg-pt-v2m2-1-submit -n semseg-pt-v2m2-1-submit\n# ScanNet200\nsh scripts\u002Ftrain.sh -g 4 -d scannet200 -c semseg-pt-v2m2-0-base -n semseg-pt-v2m2-0-base\n# ScanNet++\nsh scripts\u002Ftrain.sh -g 4 -d scannetpp -c semseg-pt-v2m2-0-base -n semseg-pt-v2m2-0-base\n# ScanNet++ test\nsh scripts\u002Ftrain.sh -g 4 -d scannetpp -c semseg-pt-v2m2-1-submit -n semseg-pt-v2m2-1-submit\n# S3DIS\nsh scripts\u002Ftrain.sh -g 4 -d s3dis -c semseg-pt-v2m2-0-base -n semseg-pt-v2m2-0-base\n# SemanticKITTI\nsh scripts\u002Ftrain.sh -g 4 -d semantic_kitti -c semseg-pt-v2m2-0-base -n semseg-pt-v2m2-0-base\n# nuScenes\nsh scripts\u002Ftrain.sh -g 4 -d nuscenes -c semseg-pt-v2m2-0-base -n semseg-pt-v2m2-0-base\n```\n\n- **PTv2 mode1**\n\n`PTv2 mode1` is the original PTv2 reported in our paper; example running scripts are as follows:\n\n```bash\n# ptv2m1: PTv2 mode1, original PTv2, GPU memory cost > 24G\n# ScanNet\nsh scripts\u002Ftrain.sh -g 4 -d scannet -c semseg-pt-v2m1-0-base -n semseg-pt-v2m1-0-base\n# ScanNet200\nsh scripts\u002Ftrain.sh -g 4 -d scannet200 -c semseg-pt-v2m1-0-base -n semseg-pt-v2m1-0-base\n# S3DIS\nsh scripts\u002Ftrain.sh -g 4 -d s3dis -c semseg-pt-v2m1-0-base -n semseg-pt-v2m1-0-base\n```\n\n- **PTv1**\n\nThe original PTv1 is also available in our Pointcept codebase. I haven't run PTv1 for a long time, but I have ensured that the example running script works well.\n\n```bash\n# ScanNet\nsh scripts\u002Ftrain.sh -g 4 -d scannet -c semseg-pt-v1-0-base -n semseg-pt-v1-0-base\n# ScanNet200\nsh scripts\u002Ftrain.sh -g 4 -d scannet200 -c semseg-pt-v1-0-base -n semseg-pt-v1-0-base\n# S3DIS\nsh scripts\u002Ftrain.sh -g 4 -d s3dis -c semseg-pt-v1-0-base -n semseg-pt-v1-0-base\n```\n\n\n#### Stratified Transformer\n1. Additional requirements:\n```bash\npip install torch-points3d\n# Fix a dependency issue caused by installing torch-points3d\npip uninstall SharedArray\npip install SharedArray==3.2.1\n\ncd libs\u002Fpointops2\npython setup.py install\ncd ..\u002F..\n```\n2. Uncomment `# from .stratified_transformer import *` in `pointcept\u002Fmodels\u002F__init__.py`.\n3. Refer to [Optional Installation](#installation) to install the dependencies.\n4. 
Training with the following example scripts:\n```bash\n# stv1m1: Stratified Transformer mode1, modified from the original Stratified Transformer code.\n# stv1m2: Stratified Transformer mode2, my rewritten version (recommended).\n\n# ScanNet\nsh scripts\u002Ftrain.sh -g 4 -d scannet -c semseg-st-v1m2-0-refined -n semseg-st-v1m2-0-refined\nsh scripts\u002Ftrain.sh -g 4 -d scannet -c semseg-st-v1m1-0-origin -n semseg-st-v1m1-0-origin\n# ScanNet200\nsh scripts\u002Ftrain.sh -g 4 -d scannet200 -c semseg-st-v1m2-0-refined -n semseg-st-v1m2-0-refined\n# S3DIS\nsh scripts\u002Ftrain.sh -g 4 -d s3dis -c semseg-st-v1m2-0-refined -n semseg-st-v1m2-0-refined\n```\n\n#### SPVCNN\n`SPVCNN` is a baseline model of [SPVNAS](https:\u002F\u002Fgithub.com\u002Fmit-han-lab\u002Fspvnas), and it is also a practical baseline for outdoor datasets.\n1. Install torchsparse:\n```bash\n# refer https:\u002F\u002Fgithub.com\u002Fmit-han-lab\u002Ftorchsparse\n# installation method that avoids sudo apt install\nconda install google-sparsehash -c bioconda\nexport C_INCLUDE_PATH=${CONDA_PREFIX}\u002Finclude:$C_INCLUDE_PATH\nexport CPLUS_INCLUDE_PATH=${CONDA_PREFIX}\u002Finclude:$CPLUS_INCLUDE_PATH\npip install --upgrade git+https:\u002F\u002Fgithub.com\u002Fmit-han-lab\u002Ftorchsparse.git\n```\n2. Training with the following example scripts:\n```bash\n# SemanticKITTI\nsh scripts\u002Ftrain.sh -g 2 -d semantic_kitti -c semseg-spvcnn-v1m1-0-base -n semseg-spvcnn-v1m1-0-base\n```\n\n#### OctFormer\nOctFormer is from _OctFormer: Octree-based Transformers for 3D Point Clouds_.\n1. Additional requirements:\n```bash\ncd libs\ngit clone https:\u002F\u002Fgithub.com\u002Foctree-nn\u002Fdwconv.git\npip install .\u002Fdwconv\npip install ocnn\n```\n2. Uncomment `# from .octformer import *` in `pointcept\u002Fmodels\u002F__init__.py`.\n3. Training with the following example scripts:\n```bash\n# ScanNet\nsh scripts\u002Ftrain.sh -g 4 -d scannet -c semseg-octformer-v1m1-0-base -n semseg-octformer-v1m1-0-base\n```\n\n#### Swin3D\nSwin3D is from _Swin3D: A Pretrained Transformer Backbone for 3D Indoor Scene Understanding_.\n1. Additional requirements:\n```bash\n# 1. Install MinkowskiEngine v0.5.4, following the readme at https:\u002F\u002Fgithub.com\u002FNVIDIA\u002FMinkowskiEngine;\n# 2. Install Swin3D, mainly for its CUDA operators:\ncd libs\ngit clone https:\u002F\u002Fgithub.com\u002Fmicrosoft\u002FSwin3D.git\ncd Swin3D\npip install .\u002F\n```\n2. Uncomment `# from .swin3d import *` in `pointcept\u002Fmodels\u002F__init__.py`.\n3. Pre-training with the following example scripts (for Structured3D preprocessing, refer [here](#structured3d)):\n```bash\n# Structured3D + Swin-S\nsh scripts\u002Ftrain.sh -g 4 -d structured3d -c semseg-swin3d-v1m1-0-small -n semseg-swin3d-v1m1-0-small\n# Structured3D + Swin-L\nsh scripts\u002Ftrain.sh -g 4 -d structured3d -c semseg-swin3d-v1m1-1-large -n semseg-swin3d-v1m1-1-large\n\n# Additional baselines\n# Structured3D + SpUNet\nsh scripts\u002Ftrain.sh -g 4 -d structured3d -c semseg-spunet-v1m1-0-base -n semseg-spunet-v1m1-0-base\n# Structured3D + PTv2\nsh scripts\u002Ftrain.sh -g 4 -d structured3d -c semseg-pt-v2m2-0-base -n semseg-pt-v2m2-0-base\n```\n4. 
Fine-tuning with the following example scripts:\n```bash\n# ScanNet + Swin-S\nsh scripts\u002Ftrain.sh -g 4 -d scannet -w exp\u002Fstructured3d\u002Fsemseg-swin3d-v1m1-1-large\u002Fmodel\u002Fmodel_last.pth -c semseg-swin3d-v1m1-0-small -n semseg-swin3d-v1m1-0-small\n# ScanNet + Swin-L\nsh scripts\u002Ftrain.sh -g 4 -d scannet -w exp\u002Fstructured3d\u002Fsemseg-swin3d-v1m1-1-large\u002Fmodel\u002Fmodel_last.pth -c semseg-swin3d-v1m1-1-large -n semseg-swin3d-v1m1-1-large\n\n# S3DIS + Swin-S (here we provide config support S3DIS normal vector)\nsh scripts\u002Ftrain.sh -g 4 -d s3dis -w exp\u002Fstructured3d\u002Fsemseg-swin3d-v1m1-1-large\u002Fmodel\u002Fmodel_last.pth -c semseg-swin3d-v1m1-0-small -n semseg-swin3d-v1m1-0-small\n# S3DIS + Swin-L (here we provide config support S3DIS normal vector)\nsh scripts\u002Ftrain.sh -g 4 -d s3dis -w exp\u002Fstructured3d\u002Fsemseg-swin3d-v1m1-1-large\u002Fmodel\u002Fmodel_last.pth -c semseg-swin3d-v1m1-1-large -n semseg-swin3d-v1m1-1-large\n```\n\n#### Context-Aware Classifier\n`Context-Aware Classifier` is a segmentor that can further boost the performance of each backbone, as a replacement for `Default Segmentor`.  Training with the following example scripts:\n```bash\n# ScanNet\nsh scripts\u002Ftrain.sh -g 4 -d scannet -c semseg-cac-v1m1-0-spunet-base -n semseg-cac-v1m1-0-spunet-base\nsh scripts\u002Ftrain.sh -g 4 -d scannet -c semseg-cac-v1m1-1-spunet-lovasz -n semseg-cac-v1m1-1-spunet-lovasz\nsh scripts\u002Ftrain.sh -g 4 -d scannet -c semseg-cac-v1m1-2-ptv2-lovasz -n semseg-cac-v1m1-2-ptv2-lovasz\n\n# ScanNet200\nsh scripts\u002Ftrain.sh -g 4 -d scannet200 -c semseg-cac-v1m1-0-spunet-base -n semseg-cac-v1m1-0-spunet-base\nsh scripts\u002Ftrain.sh -g 4 -d scannet200 -c semseg-cac-v1m1-1-spunet-lovasz -n semseg-cac-v1m1-1-spunet-lovasz\nsh scripts\u002Ftrain.sh -g 4 -d scannet200 -c semseg-cac-v1m1-2-ptv2-lovasz -n semseg-cac-v1m1-2-ptv2-lovasz\n```\n\n\n### 2. Instance Segmentation\n#### PointGroup\n[PointGroup](https:\u002F\u002Fgithub.com\u002Fdvlab-research\u002FPointGroup) is a baseline framework for point cloud instance segmentation.\n1. Additional requirements:\n```bash\nconda install -c bioconda google-sparsehash \ncd libs\u002Fpointgroup_ops\npython setup.py install --include_dirs=${CONDA_PREFIX}\u002Finclude\ncd ..\u002F..\n```\n2. Uncomment `# from .point_group import *` in `pointcept\u002Fmodels\u002F__init__.py`.\n3. Training with the following example scripts:\n```bash\n# ScanNet\nsh scripts\u002Ftrain.sh -g 4 -d scannet -c insseg-pointgroup-v1m1-0-spunet-base -n insseg-pointgroup-v1m1-0-spunet-base\n# S3DIS\nsh scripts\u002Ftrain.sh -g 4 -d scannet -c insseg-pointgroup-v1m1-0-spunet-base -n insseg-pointgroup-v1m1-0-spunet-base\n```\n\n### 3. Pre-training\n#### Utonia\nFollow the instruction [here](https:\u002F\u002Fgithub.com\u002FPointcept\u002FPointcept\u002Ftree\u002Fmain\u002Fpointcept\u002Fmodels\u002Futonia).\n\n#### Concerto\nFollow the instruction [here](https:\u002F\u002Fgithub.com\u002FPointcept\u002FPointcept\u002Ftree\u002Fmain\u002Fpointcept\u002Fmodels\u002Fconcerto).\n\n#### Sonata\nFollow the instruction [here](https:\u002F\u002Fgithub.com\u002FPointcept\u002FPointcept\u002Ftree\u002Fmain\u002Fpointcept\u002Fmodels\u002Fsonata).\n\n#### Masked Scene Contrast (MSC)\n1. Pre-training with the following example scripts:\n```bash\n# ScanNet\nsh scripts\u002Ftrain.sh -g 8 -d scannet -c pretrain-msc-v1m1-0-spunet-base -n pretrain-msc-v1m1-0-spunet-base\n```\n\n2. 
Fine-tuning with the following example scripts:  \nenable PointGroup ([here](#pointgroup)) before fine-tuning on instance segmentation task.\n```bash\n# ScanNet20 Semantic Segmentation\nsh scripts\u002Ftrain.sh -g 8 -d scannet -w exp\u002Fscannet\u002Fpretrain-msc-v1m1-0-spunet-base\u002Fmodel\u002Fmodel_last.pth -c semseg-spunet-v1m1-4-ft -n semseg-msc-v1m1-0f-spunet-base\n# ScanNet20 Instance Segmentation (enable PointGroup before running the script)\nsh scripts\u002Ftrain.sh -g 4 -d scannet -w exp\u002Fscannet\u002Fpretrain-msc-v1m1-0-spunet-base\u002Fmodel\u002Fmodel_last.pth -c insseg-pointgroup-v1m1-0-spunet-base -n insseg-msc-v1m1-0f-pointgroup-spunet-base\n```\n3. Example log and weight: [[Pretrain](https:\u002F\u002Fconnecthkuhk-my.sharepoint.com\u002F:u:\u002Fg\u002Fpersonal\u002Fwuxy_connect_hku_hk\u002FEYvNV4XUJ_5Mlk-g15RelN4BW_P8lVBfC_zhjC_BlBDARg?e=UoGFWH)] [[Semseg](https:\u002F\u002Fconnecthkuhk-my.sharepoint.com\u002F:u:\u002Fg\u002Fpersonal\u002Fwuxy_connect_hku_hk\u002FEQkDiv5xkOFKgCpGiGtAlLwBon7i8W6my3TIbGVxuiTttQ?e=tQFnbr)]\n\n#### Point Prompt Training (PPT)\nPPT presents a multi-dataset pre-training framework, and it is compatible with various existing pre-training frameworks and backbones.\n1. PPT supervised joint training with the following example scripts:\n```bash\n# ScanNet + Structured3d, validate on ScanNet (S3DIS might cause long data time, w\u002Fo S3DIS for a quick validation) >= 3090 * 8 \nsh scripts\u002Ftrain.sh -g 8 -d scannet -c semseg-ppt-v1m1-0-sc-st-spunet -n semseg-ppt-v1m1-0-sc-st-spunet\nsh scripts\u002Ftrain.sh -g 8 -d scannet -c semseg-ppt-v1m1-1-sc-st-spunet-submit -n semseg-ppt-v1m1-1-sc-st-spunet-submit\n# ScanNet + S3DIS + Structured3d, validate on S3DIS (>= a100 * 8)\nsh scripts\u002Ftrain.sh -g 8 -d s3dis -c semseg-ppt-v1m1-0-s3-sc-st-spunet -n semseg-ppt-v1m1-0-s3-sc-st-spunet\n# SemanticKITTI + nuScenes + Waymo, validate on SemanticKITTI (bs12 >= 3090 * 4 >= 3090 * 8, v1m1-0 is still on tuning)\nsh scripts\u002Ftrain.sh -g 4 -d semantic_kitti -c semseg-ppt-v1m1-0-nu-sk-wa-spunet -n semseg-ppt-v1m1-0-nu-sk-wa-spunet\nsh scripts\u002Ftrain.sh -g 4 -d semantic_kitti -c semseg-ppt-v1m2-0-sk-nu-wa-spunet -n semseg-ppt-v1m2-0-sk-nu-wa-spunet\nsh scripts\u002Ftrain.sh -g 4 -d semantic_kitti -c semseg-ppt-v1m2-1-sk-nu-wa-spunet-submit -n semseg-ppt-v1m2-1-sk-nu-wa-spunet-submit\n# SemanticKITTI + nuScenes + Waymo, validate on nuScenes (bs12 >= 3090 * 4; bs24 >= 3090 * 8, v1m1-0 is still on tuning))\nsh scripts\u002Ftrain.sh -g 4 -d nuscenes -c semseg-ppt-v1m1-0-nu-sk-wa-spunet -n semseg-ppt-v1m1-0-nu-sk-wa-spunet\nsh scripts\u002Ftrain.sh -g 4 -d nuscenes -c semseg-ppt-v1m2-0-nu-sk-wa-spunet -n semseg-ppt-v1m2-0-nu-sk-wa-spunet\nsh scripts\u002Ftrain.sh -g 4 -d nuscenes -c semseg-ppt-v1m2-1-nu-sk-wa-spunet-submit -n semseg-ppt-v1m2-1-nu-sk-wa-spunet-submit\n```\n\n#### PointContrast\n1. Preprocess and link ScanNet-Pair dataset (pair-wise matching with ScanNet raw RGB-D frame, ~1.5T):\n```bash\n# RAW_SCANNET_DIR: the directory of downloaded ScanNet v2 raw dataset.\n# PROCESSED_SCANNET_PAIR_DIR: the directory of processed ScanNet pair dataset (output dir).\npython pointcept\u002Fdatasets\u002Fpreprocessing\u002Fscannet\u002Fscannet_pair\u002Fpreprocess.py --dataset_root ${RAW_SCANNET_DIR} --output_root ${PROCESSED_SCANNET_PAIR_DIR}\nln -s ${PROCESSED_SCANNET_PAIR_DIR} ${CODEBASE_DIR}\u002Fdata\u002Fscannet\n```\n2. 
Pre-training with the following example scripts:\n```bash\n# ScanNet\nsh scripts\u002Ftrain.sh -g 8 -d scannet -c pretrain-msc-v1m1-1-spunet-pointcontrast -n pretrain-msc-v1m1-1-spunet-pointcontrast\n```\n3. Fine-tuning refer [MSC](#masked-scene-contrast-msc).\n\n#### Contrastive Scene Contexts\n1. Preprocess and link ScanNet-Pair dataset (refer [PointContrast](#pointcontrast)):\n2. Pre-training with the following example scripts:\n```bash\n# ScanNet\nsh scripts\u002Ftrain.sh -g 8 -d scannet -c pretrain-msc-v1m2-0-spunet-csc -n pretrain-msc-v1m2-0-spunet-csc\n```\n3. Fine-tuning refer [MSC](#masked-scene-contrast-msc).\n\n## Acknowledgement\n_Pointcept_ is designed by [Xiaoyang](https:\u002F\u002Fxywu.me\u002F), named by [Yixing](https:\u002F\u002Fgithub.com\u002Fyxlao) and the logo is created by [Yuechen](https:\u002F\u002Fjulianjuaner.github.io\u002F). It is derived from [Hengshuang](https:\u002F\u002Fhszhao.github.io\u002F)'s [Semseg](https:\u002F\u002Fgithub.com\u002Fhszhao\u002Fsemseg) and inspirited by several repos, e.g., [MinkowskiEngine](https:\u002F\u002Fgithub.com\u002FNVIDIA\u002FMinkowskiEngine), [pointnet2](https:\u002F\u002Fgithub.com\u002Fcharlesq34\u002Fpointnet2), [mmcv](https:\u002F\u002Fgithub.com\u002Fopen-mmlab\u002Fmmcv\u002Ftree\u002Fmaster\u002Fmmcv), and [Detectron2](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fdetectron2).\n","\u003Cp align=\"center\">\n    \u003C!-- pypi-strip -->\n    \u003Cpicture>\n    \u003Csource media=\"(prefers-color-scheme: dark)\" srcset=\"https:\u002F\u002Fraw.githubusercontent.com\u002Fpointcept\u002Fassets\u002Fmain\u002Fpointcept\u002Flogo_dark.png\">\n    \u003Csource media=\"(prefers-color-scheme: light)\" srcset=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FPointcept_Pointcept_readme_4bbbaf8439c3.png\">\n    \u003C!-- \u002Fpypi-strip -->\n    \u003Cimg alt=\"pointcept\" src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FPointcept_Pointcept_readme_4bbbaf8439c3.png\" width=\"400\">\n    \u003C!-- pypi-strip -->\n    \u003C\u002Fpicture>\u003Cbr>\n    \u003C!-- \u002Fpypi-strip -->\n\u003C\u002Fp>\n\n[![Formatter](https:\u002F\u002Fgithub.com\u002Fpointcept\u002Fpointcept\u002Factions\u002Fworkflows\u002Fformatter.yml\u002Fbadge.svg)](https:\u002F\u002Fgithub.com\u002Fpointcept\u002Fpointcept\u002Factions\u002Fworkflows\u002Fformatter.yml)\n\n**Pointcept** 是一个功能强大且灵活的点云感知研究代码库。它同时也是以下论文的官方实现：\n- 🚀 **Utonia：迈向适用于所有点云的单一编码器**  \n*张宇佳、吴晓阳、杨云汉、范贤哲、李翰、张岳辰、黄泽浩、王乃彦、赵恒爽*  \n[ 预训练 ] [Utonia] - [ [项目](https:\u002F\u002Fpointcept.github.io\u002FUtonia\u002F) ] [ [引用](https:\u002F\u002Fpointcept.github.io\u002FUtonia\u002F#citation) ] [ [HF 演示](https:\u002F\u002Fhuggingface.co\u002Fspaces\u002Fpointcept-bot\u002FUtonia) ] [ [推理](https:\u002F\u002Fgithub.com\u002FPointcept\u002FUtonia) ] [ [权重](https:\u002F\u002Fhuggingface.co\u002FPointcept\u002FUtonia) ] &rarr; [此处](#utonia)\n\n\n- **Concerto：2D-3D 联合自监督学习催生空间表征**  \n*张宇佳、吴晓阳、劳一星、王成耀、田卓涛、王乃彦、赵恒爽*  \n神经信息处理系统大会（**NeurIPS**）2025  \n[ 预训练 ] [Concerto] - [ [项目](https:\u002F\u002Fpointcept.github.io\u002FConcerto\u002F) ] [ [引用](https:\u002F\u002Fxywu.me\u002Fresearch\u002Fconcerto\u002Fbib.txt) ] [ [HF 演示](https:\u002F\u002Fhuggingface.co\u002Fspaces\u002FPointcept\u002FConcerto) ] [ [推理](https:\u002F\u002Fgithub.com\u002FPointcept\u002FConcerto) ] [ [权重](https:\u002F\u002Fhuggingface.co\u002FPointcept\u002FConcerto) ] &rarr; [此处](#concerto)\n\n\n- **Sonata：可靠点表示的自监督学习**  \n*吴晓阳、Daniel DeTone、Duncan Frost、沈天伟、Chris Xie、Yang Nan、Jakob Engel、Richard Newcombe、赵恒爽、Julian 
Straub*  \nIEEE 计算机视觉与模式识别会议（**CVPR**）2025 - 亮点  \n[ 预训练 ] [Sonata] - [ [项目](https:\u002F\u002Fxywu.me\u002Fsonata\u002F) ] [ [arXiv](https:\u002F\u002Farxiv.org\u002Fabs\u002F2503.16429) ] [ [引用](https:\u002F\u002Fxywu.me\u002Fresearch\u002Fsonata\u002Fbib.txt) ] [ [演示](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fsonata) ] [ [权重](https:\u002F\u002Fhuggingface.co\u002Ffacebook\u002Fsonata) ] &rarr; [此处](#sonata)\n\n\n- **Point Transformer V3：更简单、更快、更强**  \n*吴晓阳、江立、王鹏帅、刘志坚、刘希辉、乔宇、欧阳万利、何通、赵恒爽*  \nIEEE 计算机视觉与模式识别会议（**CVPR**）2024 - 口头报告  \n[ 主干网络 ] [PTv3] - [ [arXiv](https:\u002F\u002Farxiv.org\u002Fabs\u002F2312.10035) ] [ [引用](https:\u002F\u002Fxywu.me\u002Fresearch\u002Fptv3\u002Fbib.txt) ] [ [项目](https:\u002F\u002Fgithub.com\u002FPointcept\u002FPointTransformerV3) ] &rarr; [此处](https:\u002F\u002Fgithub.com\u002FPointcept\u002FPointTransformerV3)\n\n\n- **OA-CNNs：用于 3D 语义分割的全适应稀疏 CNN**  \n*彭博豪、吴晓阳、江立、陈宇康、赵恒爽、田卓涛、贾继雅*  \nIEEE 计算机视觉与模式识别会议（**CVPR**）2024  \n[ 主干网络 ] [ OA-CNNs ] - [ [arXiv](https:\u002F\u002Farxiv.org\u002Fabs\u002F2403.14418) ] [ [引用](https:\u002F\u002Fxywu.me\u002Fresearch\u002Foacnns\u002Fbib.txt) ] &rarr; [此处](#oa-cnns)\n\n\n- **通过多数据集点提示训练实现大规模 3D 表征学习**  \n*吴晓阳、田卓涛、温鑫、彭博豪、刘希辉、于凯成、赵恒爽*  \nIEEE 计算机视觉与模式识别会议（**CVPR**）2024  \n[ 预训练 ] [PPT] - [ [arXiv](https:\u002F\u002Farxiv.org\u002Fabs\u002F2308.09718) ] [ [引用](https:\u002F\u002Fxywu.me\u002Fresearch\u002Fppt\u002Fbib.txt) ] &rarr; [此处](#point-prompt-training-ppt)\n\n\n- **掩码场景对比：一种可扩展的无监督 3D 表征学习框架**  \n*吴晓阳、温鑫、刘希辉、赵恒爽*  \nIEEE 计算机视觉与模式识别会议（**CVPR**）2023  \n[ 预训练 ] [ MSC ] - [ [arXiv](https:\u002F\u002Farxiv.org\u002Fabs\u002F2303.14191) ] [ [引用](https:\u002F\u002Fxywu.me\u002Fresearch\u002Fmsc\u002Fbib.txt) ] &rarr; [此处](#masked-scene-contrast-msc)\n\n\n- **面向语义分割的上下文感知分类器学习**（3D 部分）  \n*田卓涛、崔杰泉、江立、齐小娟、赖欣、陈怡心、刘舒、贾继雅*  \nAAAI 人工智能大会（**AAAI**）2023 - 口头报告  \n[ 语义分割 ] [ CAC ] - [ [arXiv](https:\u002F\u002Farxiv.org\u002Fabs\u002F2303.11633) ] [ [引用](https:\u002F\u002Fxywu.me\u002Fresearch\u002Fcac\u002Fbib.txt) ] [ [2D 部分](https:\u002F\u002Fgithub.com\u002Ftianzhuotao\u002FCAC) ] &rarr; [此处](#context-aware-classifier)\n\n\n- **Point Transformer V2：分组向量注意力与基于分区的池化**  \n*吴晓阳、劳一星、江立、刘希辉、赵恒爽*  \n神经信息处理系统大会（**NeurIPS**）2022  \n[ 主干网络 ] [ PTv2 ] - [ [arXiv](https:\u002F\u002Farxiv.org\u002Fabs\u002F2210.05666) ] [ [引用](https:\u002F\u002Fxywu.me\u002Fresearch\u002Fptv2\u002Fbib.txt) ] &rarr; [此处](#point-transformers)\n\n\n- **Point Transformer**  \n*赵恒爽、江立、贾继雅、Philip Torr、Vladlen Koltun*  \nIEEE 国际计算机视觉会议（**ICCV**）2021 - 口头报告  \n[ 主干网络 ] [ PTv1 ] - [ [arXiv](https:\u002F\u002Farxiv.org\u002Fabs\u002F2012.09164) ] [ [引用](https:\u002F\u002Fhszhao.github.io\u002Fpapers\u002Ficcv21_pointtransformer_bib.txt) ] &rarr; [此处](#point-transformers)\n\n此外，**Pointcept** 集成了以下优秀工作（包含上述内容）：  \n骨干网络：  \n[MinkUNet](https:\u002F\u002Fgithub.com\u002FNVIDIA\u002FMinkowskiEngine)（[此处](#sparseunet)），  \n[SpUNet](https:\u002F\u002Fgithub.com\u002Ftraveller59\u002Fspconv)（[此处](#sparseunet)），  \n[SPVCNN](https:\u002F\u002Fgithub.com\u002Fmit-han-lab\u002Fspvnas)（[此处](#spvcnn)），  \n[OACNNs](https:\u002F\u002Farxiv.org\u002Fabs\u002F2403.14418)（[此处](#oa-cnns)），  \n[PTv1](https:\u002F\u002Farxiv.org\u002Fabs\u002F2012.09164)（[此处](#point-transformers)），  \n[PTv2](https:\u002F\u002Farxiv.org\u002Fabs\u002F2210.05666)（[此处](#point-transformers)），  \n[PTv3](https:\u002F\u002Farxiv.org\u002Fabs\u002F2312.10035)（[此处](#point-transformers)），  
\n[StratifiedFormer](https:\u002F\u002Fgithub.com\u002Fdvlab-research\u002FStratified-Transformer)（[此处](#stratified-transformer)），  \n[OctFormer](https:\u002F\u002Fgithub.com\u002Foctree-nn\u002Foctformer)（[此处](#octformer)），  \n[Swin3D](https:\u002F\u002Fgithub.com\u002Fmicrosoft\u002FSwin3D)（[此处](#swin3d)），  \n[LitePT](https:\u002F\u002Fgithub.com\u002Fprs-eth\u002FLitePT)（[此处](#litept)）；   \n语义分割：  \n[Mix3d](https:\u002F\u002Fgithub.com\u002Fkumuji\u002Fmix3d)（[此处](https:\u002F\u002Fgithub.com\u002FPointcept\u002FPointcept\u002Fblob\u002Fmain\u002Fconfigs\u002Fscannet\u002Fsemseg-spunet-v1m1-0-base.py#L5)），  \n[CAC](https:\u002F\u002Farxiv.org\u002Fabs\u002F2303.11633)（[此处](#context-aware-classifier)）；  \n实例分割：  \n[PointGroup](https:\u002F\u002Fgithub.com\u002Fdvlab-research\u002FPointGroup)（[此处](#pointgroup)）；  \n预训练：  \n[PointContrast](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002FPointContrast)（[此处](#pointcontrast)），  \n[Contrastive Scene Contexts](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002FContrastiveSceneContexts)（[此处](#contrastive-scene-contexts)），  \n[Masked Scene Contrast](https:\u002F\u002Farxiv.org\u002Fabs\u002F2303.14191)（[此处](#masked-scene-contrast-msc)），  \n[Point Prompt Training](https:\u002F\u002Farxiv.org\u002Fabs\u002F2308.09718)（[此处](#point-prompt-training-ppt)），  \n[Sonata](https:\u002F\u002Farxiv.org\u002Fabs\u002F2503.16429)（[此处](#sonata)），  \n[Concerto](https:\u002F\u002Farxiv.org\u002Fabs\u002F2510.23607)（[此处](#concerto)），  \n[Utonia](https:\u002F\u002Farxiv.org\u002Fabs\u002F2603.03283)（[此处](#utonia)）；  \n数据集：  \n[ScanNet](http:\u002F\u002Fwww.scan-net.org\u002F)（[此处](#scannet-v2)），  \n[ScanNet200](http:\u002F\u002Fwww.scan-net.org\u002F)（[此处](#scannet-v2)），  \n[ScanNet++](https:\u002F\u002Fkaldir.vc.in.tum.de\u002Fscannetpp\u002F)（[此处](#scannet)），  \n[S3DIS](https:\u002F\u002Fdocs.google.com\u002Fforms\u002Fd\u002Fe\u002F1FAIpQLScDimvNMCGhy_rmBA2gHfDu3naktRm6A8BPwAWWDv-Uhm6Shw\u002Fviewform?c=0&w=1)（[此处](#s3dis)），  \n[ArkitScene](https:\u002F\u002Fgithub.com\u002Fapple\u002FARKitScenes)（[此处](#arkitscenes)），  \n[HM3D](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fhabitat-matterport3d-dataset\u002F)（[此处](#habitat---matterport-3d-hm3d)），  \n[Matterport3D](https:\u002F\u002Fniessner.github.io\u002FMatterport\u002F)（[此处](#matterport3d)），  \n[Structured3D](https:\u002F\u002Fstructured3d-dataset.org\u002F)（[此处](#structured3d)），  \n[SemanticKITTI](http:\u002F\u002Fwww.semantic-kitti.org\u002F)（[此处](#semantickitti)），  \n[nuScenes](https:\u002F\u002Fwww.nuscenes.org\u002Fnuscenes)（[此处](#nuscenes)），  \n[Waymo](https:\u002F\u002Fwaymo.com\u002Fopen\u002F)（[此处](#waymo)），  \n[ModelNet40](https:\u002F\u002Fmodelnet.cs.princeton.edu\u002F)（[此处](#modelnet)），  \n[ScanObjectNN](https:\u002F\u002Fhkust-vgd.github.io\u002Fscanobjectnn\u002F)（[此处](#scanobjectnn)），  \n[ShapeNetPart](https:\u002F\u002Fshapenet.org\u002F)（[此处](#shapenetpart)），  \n[PartNetE](https:\u002F\u002Fcolin97.github.io\u002FPartSLIP_page\u002F)（[此处](#partnete)）。\n\n\n\n\n## 亮点\n- *2026年3月* 🚀：**Utonia** 代码随 Pointcept v1.7.0 一同发布，并在我们的项目 **[仓库](https:\u002F\u002Fgithub.com\u002FPointcept\u002FUtonia)** 中提供了易于使用的预训练模型，用于推理、调优和可视化。\n- *2025年10月*：**Concerto** 被 NeurIPS 2025 接受！我们随 Pointcept v1.6.1 发布了预训练的 **[代码](#concerto)**，并在 Meta 托管的项目 **[仓库](https:\u002F\u002Fgithub.com\u002FPointcept\u002FConcerto)** 中提供了易于使用的预训练模型，用于推理、调优和可视化。\n- *2025年4月*：我们现在支持 `wandb`，更多信息请参阅 [快速入门](#quick-start) 的训练部分。（感谢 @Streakfull 的贡献！）\n- *2025年3月*：**Sonata** 被 CVPR 2025 接受，并被选为 **Highlight** 报告之一（仅占提交论文的 
3.0%）！我们随 Pointcept v1.6.0 发布了代码。同时，我们还随 Pointcept v1.6.0 发布了预训练的 **[代码](#sonata)**，并在 Meta 托管的项目 **[仓库](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fsonata)** 中提供了易于使用的预训练模型，用于推理、调优和可视化。\n- *2024年5月*：在 v1.5.2 中，我们重新设计了每个数据集的默认结构，以提升性能。请 **重新预处理** 数据集，或从 **[这里](https:\u002F\u002Fhuggingface.co\u002FPointcept)** 下载我们预处理好的数据集。\n- *2024年4月*：**PTv3** 被选为 CVPR'24 的 90 篇 **口头报告** 论文之一（占接受论文的 3.3%，占提交论文的 0.78%）！\n- *2024年3月*：我们发布了被 CVPR'24 接受的 **OA-CNNs** 的代码。关于 **OA-CNNs** 的问题可 @Pbihao。\n- *2024年2月*：**PTv3** 和 **PPT** 被 CVPR'24 接受，我们 Pointcept 团队还有另外 **两篇** 论文也被 CVPR'24 接受 🎉🎉🎉。我们将很快公开这些成果！\n- *2023年12月*：**PTv3** 在 arXiv 上发布，其代码已在 Pointcept 中提供。PTv3 是一种高效的骨干网络模型，在室内和室外场景中均达到 SOTA 性能。\n- *2023年8月*：**PPT** 在 arXiv 上发布。PPT 提出了一种多数据集预训练框架，在 **室内** 和 **室外** 场景中均达到 SOTA 性能。它兼容现有的各种预训练框架和骨干网络。代码的 **预发布** 版本现已开放，感兴趣者可直接联系我获取访问权限。\n- *2023年3月*：我们发布了我们的代码库 **Pointcept**，这是一个功能强大的点云表征学习与感知工具。我们欢迎新的工作加入 _Pointcept_ 大家庭，并强烈建议在开始探索之前阅读 [快速入门](#quick-start)。\n- *2023年2月*：**MSC** 和 **CeCo** 被 CVPR 2023 接受。_MSC_ 是一个高效且有效的预训练框架，可促进跨数据集的大规模预训练；而 _CeCo_ 则是一种专为长尾数据集设计的分割方法。这两种方法都兼容我们代码库中的所有现有骨干网络模型，我们也将很快向公众开放相关代码。\n- *2023年1月*：**CAC** 是 AAAI 2023 的口头报告作品，通过结合 Pointcept 将其 3D 结果进一步扩展。这一补充将使 CAC 能够作为我们代码库中的可插拔分割器使用。\n- *2022年9月*：**PTv2** 被 NeurIPS 2022 接受。它是 Point Transformer 的延续。所提出的 GVA 理论可应用于大多数现有的注意力机制，而 Grid Pooling 也是对现有池化方法的实用补充。\n\n## 引用\n如果您发现 _Pointcept_ 对您的研究有所帮助，请引用我们的工作以示鼓励。（੭ˊ꒳​ˋ)੭✧\n```\n@misc{pointcept2023,\n    title={Pointcept: A Codebase for Point Cloud Perception Research},\n    author={Pointcept Contributors},\n    howpublished = {\\url{https:\u002F\u002Fgithub.com\u002FPointcept\u002FPointcept}},\n    year={2023}\n}\n```\n\n## 概述\n\n- [安装](#installation)\n- [数据准备](#data-preparation)\n- [快速入门](#quick-start)\n- [模型库](#model-zoo)\n- [致谢](#acknowledgement)\n\n## 安装\n\n### 要求\n- Ubuntu: 18.04 及以上版本。\n- CUDA: 11.3 及以上版本。\n- PyTorch: 1.10.0 及以上版本。\n\n### Conda 环境\n- **方法 1**：使用 conda `environment.yml` 文件，通过一行命令即可创建新环境：\n  ```bash\n  # 创建并激活名为 'pointcept-torch2.5.0-cu12.4' 的 conda 环境\n  # cuda: 12.4, pytorch: 2.5.0\n\n  # 如果本地已安装 CUDA，请先运行 `unset CUDA_PATH`\n  conda env create -f environment.yml --verbose\n  conda activate pointcept-torch2.5.0-cu12.4\n  ```\n\n- **方法 2**：使用我们预构建的 Docker 镜像，并参考支持的标签 [这里](https:\u002F\u002Fhub.docker.com\u002Fr\u002Fpointcept\u002Fpointcept\u002Ftags)。您可以通过以下命令在本地快速验证 Docker 镜像：\n  ```bash\n  docker run --gpus all -it --rm pointcept\u002Fpointcept:v1.6.0-pytorch2.5.0-cuda12.4-cudnn9-devel bash\n  git clone https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fsonata\n  cd sonata\n  export PYTHONPATH=.\u002F && python demo\u002F0_pca.py\n  # 忽略 GUI 错误，毕竟容器本身并不具备图形界面，对吧？\n  ```\n\n- **方法 3**：手动创建 conda 环境：\n  ```bash\n  conda create -n pointcept python=3.10 -y\n  conda activate pointcept\n  \n  # （可选）如果未安装 CUDA\n  conda install nvidia\u002Flabel\u002Fcuda-12.4.1::cuda conda-forge::cudnn conda-forge::gcc=13.2 conda-forge::gxx=13.2 -y\n  \n  conda install ninja -y\n  # 在此选择所需版本：https:\u002F\u002Fpytorch.org\u002Fget-started\u002Fprevious-versions\u002F\n  conda install pytorch==2.5.0 torchvision==0.13.1 torchaudio==0.20.0 pytorch-cuda=12.4 -c pytorch -y\n  conda install h5py pyyaml -c anaconda -y\n  conda install sharedarray tensorboard tensorboardx wandb yapf addict einops scipy plyfile termcolor timm -c conda-forge -y\n  conda install pytorch-cluster pytorch-scatter pytorch-sparse -c pyg -y\n  pip install torch-geometric\n\n  # spconv (SparseUNet)\n  # 参考 https:\u002F\u002Fgithub.com\u002Ftraveller59\u002Fspconv\n  pip install spconv-cu124\n\n  # PPT (clip)\n  
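# 注：此处为 PPT 安装 CLIP——其多数据集联合训练会用 CLIP 的文本编码器处理各数据集的类别名称；ftfy\u002Fregex\u002Ftqdm 为 CLIP 的运行依赖\n  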
pip install ftfy regex tqdm\n  pip install git+https:\u002F\u002Fgithub.com\u002Fopenai\u002FCLIP.git\n\n  # transformers 和 peft\n  pip install transformers==4.50.3\n  pip install peft\n\n  # PTv1 & PTv2 或精确评估\n  cd libs\u002Fpointops\n  # 常规方式\n  python setup.py install\n  # Docker 和多 GPU 架构\n  TORCH_CUDA_ARCH_LIST=\"ARCH LIST\" python setup.py install\n  # 例如 7.5：RTX 3000；8.0：a100 更多信息请参见：https:\u002F\u002Fdeveloper.nvidia.com\u002Fcuda-gpus\n  TORCH_CUDA_ARCH_LIST=\"7.5 8.0\" python setup.py install\n  cd ..\u002F..\n\n  # Open3D（可视化，可选）\n  pip install open3d\n  ```\n\n## 数据准备\n\n### ScanNet v2\n\n预处理支持语义分割和实例分割，适用于 `ScanNet20`、`ScanNet200` 和 `ScanNet Data Efficient`。\n- 下载 [ScanNet](http:\u002F\u002Fwww.scan-net.org\u002F) v2 数据集。\n- 运行原始 ScanNet 数据的预处理代码如下：\n\n  ```bash\n  # RAW_SCANNET_DIR：下载的 ScanNet v2 原始数据集目录。\n  # PROCESSED_SCANNET_DIR：处理后的 ScanNet 数据集目录（输出目录）。\n  python pointcept\u002Fdatasets\u002Fpreprocessing\u002Fscannet\u002Fpreprocess_scannet.py --dataset_root ${RAW_SCANNET_DIR} --output_root ${PROCESSED_SCANNET_DIR}\n  ```\n- （可选）下载 ScanNet Data Efficient 文件：\n  ```bash\n  # download-scannet.py 是官方下载脚本\n  # 或者按照此处说明操作：https:\u002F\u002Fkaldir.vc.in.tum.de\u002Fscannet_benchmark\u002Fdata_efficient\u002Fdocumentation#download\n  python download-scannet.py --data_efficient -o ${RAW_SCANNET_DIR}\n  # 解压下载内容\n  cd ${RAW_SCANNET_DIR}\u002Ftasks\n  unzip limited-annotation-points.zip\n  unzip limited-reconstruction-scenes.zip\n  # 将文件复制到处理后的数据集文件夹\n  mkdir ${PROCESSED_SCANNET_DIR}\u002Ftasks\n  cp -r ${RAW_SCANNET_DIR}\u002Ftasks\u002Fpoints ${PROCESSED_SCANNET_DIR}\u002Ftasks\n  cp -r ${RAW_SCANNET_DIR}\u002Ftasks\u002Fscenes ${PROCESSED_SCANNET_DIR}\u002Ftasks\n  ```\n- （替代方案）我们的预处理数据可以直接下载 [[这里](https:\u002F\u002Fhuggingface.co\u002Fdatasets\u002FPointcept\u002Fscannet-compressed)]，请在下载前同意官方许可协议。\n\n- 将处理后的数据集链接到代码库：\n  ```bash\n  # PROCESSED_SCANNET_DIR：处理后的 ScanNet 数据集目录。\n  mkdir data\n  ln -s ${PROCESSED_SCANNET_DIR} ${CODEBASE_DIR}\u002Fdata\u002Fscannet\n  ```\n\n### ScanNet++\n- 下载 [ScanNet++](https:\u002F\u002Fkaldir.vc.in.tum.de\u002Fscannetpp\u002F) 数据集。\n- 运行原始 ScanNet++ 数据的预处理代码如下：\n  ```bash\n  # RAW_SCANNETPP_DIR：下载的 ScanNet++ 原始数据集目录。\n  # PROCESSED_SCANNETPP_DIR：处理后的 ScanNet++ 数据集目录（输出目录）。\n  # NUM_WORKERS：用于并行预处理的工作线程数。\n  python pointcept\u002Fdatasets\u002Fpreprocessing\u002Fscannetpp\u002Fpreprocess_scannetpp.py --dataset_root ${RAW_SCANNETPP_DIR} --output_root ${PROCESSED_SCANNETPP_DIR} --num_workers ${NUM_WORKERS}\n  ```\n- 对大型点云数据进行采样和分块处理，以用于训练\u002F验证划分（仅用于训练）：\n  ```bash\n  # PROCESSED_SCANNETPP_DIR：处理后的 ScanNet++ 数据集目录（输出目录）。\n  # NUM_WORKERS：用于并行预处理的工作线程数。\n  python pointcept\u002Fdatasets\u002Fpreprocessing\u002Fsampling_chunking_data.py --dataset_root ${PROCESSED_SCANNETPP_DIR} --grid_size 0.01 --chunk_range 6 6 --chunk_stride 3 3 --split train --num_workers ${NUM_WORKERS}\n  python pointcept\u002Fdatasets\u002Fpreprocessing\u002Fsampling_chunking_data.py --dataset_root ${PROCESSED_SCANNETPP_DIR} --grid_size 0.01 --chunk_range 6 6 --chunk_stride 3 3 --split val --num_workers ${NUM_WORKERS}\n  ```\n- 将处理后的数据集链接到代码库：\n  ```bash\n  # PROCESSED_SCANNETPP_DIR：处理后的 ScanNet 数据集目录。\n  mkdir data\n  ln -s ${PROCESSED_SCANNETPP_DIR} ${CODEBASE_DIR}\u002Fdata\u002Fscannetpp\n  ```\n\n### S3DIS\n\n- 通过填写此 [Google 表单](https:\u002F\u002Fdocs.google.com\u002Fforms\u002Fd\u002Fe\u002F1FAIpQLScDimvNMCGhy_rmBA2gHfDu3naktRm6A8BPwAWWDv-Uhm6Shw\u002Fviewform?c=0&w=1) 下载 S3DIS 数据。下载 `Stanford3dDataset_v1.2.zip` 文件并解压。\n- 修复 
`Area_5\u002Foffice_19\u002FAnnotations\u002Fceiling` 文件第 323474 行的错误（将 `103.00000` 改为 `103.000000`）。\n- （可选）从 [这里](https:\u002F\u002Fgithub.com\u002Falexsax\u002F2D-3D-Semantics) 下载完整的 2D-3D S3DIS 数据集（不含 XYZ 坐标），用于解析法线。\n- 按照以下步骤运行 S3DIS 的预处理代码：\n\n  ```bash\n  # S3DIS_DIR：已下载的 Stanford3dDataset_v1.2 数据集目录。\n  # RAW_S3DIS_DIR：Stanford2d3dDataset_noXYZ 数据集目录。（可选，用于解析法线）\n  # PROCESSED_S3DIS_DIR：已处理的 S3DIS 数据集目录（输出目录）。\n\n  # 不带对齐角度的 S3DIS\n  python pointcept\u002Fdatasets\u002Fpreprocessing\u002Fs3dis\u002Fpreprocess_s3dis.py --dataset_root ${S3DIS_DIR} --output_root ${PROCESSED_S3DIS_DIR}\n  # 带对齐角度的 S3DIS\n  python pointcept\u002Fdatasets\u002Fpreprocessing\u002Fs3dis\u002Fpreprocess_s3dis.py --dataset_root ${S3DIS_DIR} --output_root ${PROCESSED_S3DIS_DIR} --align_angle\n  # 带法线向量的 S3DIS（推荐，法线有助于提升效果）\n  python pointcept\u002Fdatasets\u002Fpreprocessing\u002Fs3dis\u002Fpreprocess_s3dis.py --dataset_root ${S3DIS_DIR} --output_root ${PROCESSED_S3DIS_DIR} --raw_root ${RAW_S3DIS_DIR} --parse_normal\n  python pointcept\u002Fdatasets\u002Fpreprocessing\u002Fs3dis\u002Fpreprocess_s3dis.py --dataset_root ${S3DIS_DIR} --output_root ${PROCESSED_S3DIS_DIR} --raw_root ${RAW_S3DIS_DIR} --align_angle --parse_normal\n  ```\n\n- （替代方案）我们的预处理数据也可以从 [[这里](https:\u002F\u002Fhuggingface.co\u002Fdatasets\u002FPointcept\u002Fs3dis-compressed)] 下载（包含法线向量和对齐角度），请在下载前同意官方许可协议。\n\n- 将处理后的数据集链接到代码库：\n  ```bash\n  # PROCESSED_S3DIS_DIR：已处理的 S3DIS 数据集目录。\n  mkdir data\n  ln -s ${PROCESSED_S3DIS_DIR} ${CODEBASE_DIR}\u002Fdata\u002Fs3dis\n  ```\n\n\n### ArkitScenes\n\n- 使用以下命令下载 ArkitScenes 3DOD 划分数据集：\n  ```bash\n  # RAW_AS_DIR：已下载的原始 ArkitScenes 数据集目录。\n  git clone https:\u002F\u002Fgithub.com\u002Fapple\u002FARKitScenes.git\n  cd ARKitScenes\n  python download_data.py 3dod --download_dir $RAW_AS_DIR --video_id_csv threedod\u002F3dod_train_val_splits.csv\n  ```\n- 按照以下步骤运行 ArkitScenes 的预处理代码：\n  ```bash\n  # RAW_AS_DIR：已下载的 ArkitScenes 数据集目录。\n  # PROCESSED_AS_DIR：已处理的 ArkitScenes 数据集目录（输出目录）。\n  # NUM_WORKERS：预处理时使用的进程数，默认与 CPU 核心数相同（可能导致内存溢出）。\n  cd $POINTCEPT_DIR\n  export PYTHONPATH=.\u002F\n  python pointcept\u002Fdatasets\u002Fpreprocessing\u002Farkitscenes\u002Fpreprocess_arkitscenes_mesh.py --dataset_root $RAW_AS_DIR --output_root $PROCESSED_AS_DIR --num_workers $NUM_WORKERS\n  ```\n\n- （替代方案）我们的预处理数据也可以从 [[这里](https:\u002F\u002Fhuggingface.co\u002Fdatasets\u002FPointcept\u002Farkitscenes-compressed)] 下载，请在下载前阅读并同意官方 [许可协议](https:\u002F\u002Fgithub.com\u002Fapple\u002FARKitScenes?tab=License-1-ov-file#readme)。（解压命令如下：\n `find .\u002F -name '*.tar.gz' | xargs -n 1 -P 8 -I {} sh -c 'tar -xzvf {}'`）\n\n- 将处理后的数据集链接到代码库：\n  ```bash\n  # PROCESSED_AR_DIR：已处理的 ArkitScenes 数据集目录（输出目录）。\n  mkdir data\n  ln -s ${PROCESSED_AR_DIR} ${CODEBASE_DIR}\u002Fdata\u002Farkitscenes\n  ```\n\n### Habitat - Matterport 3D (HM3D)\n\n- 按照 [这里](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fhabitat-sim\u002Fblob\u002Fmain\u002FDATASETS.md#habitat-matterport-3d-research-dataset-hm3d) 的说明下载 HM3D 的 `hm3d-train-glb-v0.2.tar` 和 `hm3d-val-glb-v0.2.tar` 文件，并解压它们。\n- 按照以下步骤运行 HM3D 的预处理代码：\n  ```bash\n  # RAW_HM_DIR：已下载的 HM3D 数据集目录。\n  # PROCESSED_HM_DIR：已处理的 HM3D 数据集目录（输出目录）。\n  # NUM_WORKERS：预处理时使用的进程数，默认与 CPU 核心数相同（可能导致内存溢出）。\n  export PYTHONPATH=.\u002F\n  python pointcept\u002Fdatasets\u002Fpreprocessing\u002Fhm3d\u002Fpreprocess_hm3d.py --dataset_root $RAW_HM_DIR --output_root $PROCESSED_HM_DIR --density 0.02 --num_workers $NUM_WORKERS\n  ```\n\n- （替代方案）我们的预处理数据也可以从 
[[这里](https:\u002F\u002Fhuggingface.co\u002Fdatasets\u002FPointcept\u002Fhm3d-compressed)] 下载，请在下载前阅读并同意官方 [许可协议](https:\u002F\u002Fmatterport.com\u002Flegal\u002Fmatterport-end-user-license-agreement-academic-use-model-data)。（解压命令如下：\n `find .\u002F -name '*.tar.gz' | xargs -n 1 -P 4 -I {} sh -c 'tar -xzvf {}'`）\n\n- 将处理后的数据集链接到代码库：\n  ```bash\n  # PROCESSED_HM_DIR：已处理的 HM3D 数据集目录（输出目录）。\n  mkdir data\n  ln -s ${PROCESSED_HM_DIR} ${CODEBASE_DIR}\u002Fdata\u002Fhm3d\n  ```\n\n\n### Matterport3D\n- 按照 [此页面](https:\u002F\u002Fniessner.github.io\u002FMatterport\u002F#download) 申请访问该数据集。\n- 下载“region_segmentation”类型的数据，它表示场景被划分为独立房间。\n  ```bash\n  # download-mp.py 是官方下载脚本\n  # MATTERPORT3D_DIR：已下载的 Matterport3D 数据集目录。\n  python download-mp.py -o {MATTERPORT3D_DIR} --type region_segmentations\n  ```\n- 解压 region_segmentation 数据：\n  ```bash\n  # MATTERPORT3D_DIR：已下载的 Matterport3D 数据集目录。\n  python pointcept\u002Fdatasets\u002Fpreprocessing\u002Fmatterport3d\u002Funzip_matterport3d_region_segmentation.py --dataset_root {MATTERPORT3D_DIR}\n  ```\n- 按照以下步骤运行 Matterport3D 的预处理代码：\n  ```bash\n  # MATTERPORT3D_DIR：已下载的 Matterport3D 数据集目录。\n  # PROCESSED_MATTERPORT3D_DIR：已处理的 Matterport3D 数据集目录（输出目录）。\n  # NUM_WORKERS：本次预处理使用的进程数。\n  python pointcept\u002Fdatasets\u002Fpreprocessing\u002Fmatterport3d\u002Fpreprocess_matterport3d_mesh.py --dataset_root ${MATTERPORT3D_DIR} --output_root ${PROCESSED_MATTERPORT3D_DIR} --num_workers ${NUM_WORKERS}\n  ```\n- 将处理后的数据集链接到代码库：\n  ```bash\n  # PROCESSED_MATTERPORT3D_DIR：已处理的 Matterport3D 数据集目录（输出目录）。\n  mkdir data\n  ln -s ${PROCESSED_MATTERPORT3D_DIR} ${CODEBASE_DIR}\u002Fdata\u002Fmatterport3d\n  ```\n\n根据 [OpenRooms](https:\u002F\u002Fgithub.com\u002FViLab-UCSD\u002FOpenRooms) 的说明，我们将 Matterport3D 的类别重新映射为 ScanNet 20 种语义类别，并新增了一个天花板类别。\n* （替代方案）我们的预处理数据也可以从 [这里](https:\u002F\u002Fhuggingface.co\u002Fdatasets\u002FPointcept\u002Fmatterport3d-compressed) 下载，请在下载前同意官方许可协议。\n\n### Structured3D\n\n- 通过填写此[Google表单](https:\u002F\u002Fdocs.google.com\u002Fforms\u002Fd\u002Fe\u002F1FAIpQLSc0qtvh4vHSoZaW6UvlXYy79MbcGdZfICjh4_t4bYofQIVIdw\u002Fviewform?pli=1)下载Structured3D全景图相关及透视图相关（完整版）的zip文件（无需解压）。\n- 将所有下载的zip文件整理到一个文件夹中（`${STRUCT3D_DIR}`）。\n- 按照以下方式运行Structured3D的预处理代码：\n  ```bash\n  # STRUCT3D_DIR：已下载的Structured3D数据集目录。\n  # PROCESSED_STRUCT3D_DIR：已处理的Structured3D数据集目录（输出目录）。\n  # NUM_WORKERS：预处理时使用的进程数，默认与CPU核心数相同（可能会导致内存溢出）。\n  export PYTHONPATH=.\u002F\n  python pointcept\u002Fdatasets\u002Fpreprocessing\u002Fstructured3d\u002Fpreprocess_structured3d.py --dataset_root ${STRUCT3D_DIR} --output_root ${PROCESSED_STRUCT3D_DIR} --num_workers ${NUM_WORKERS} --grid_size 0.01 --fuse_prsp --fuse_pano\n  ```\n根据[Swin3D](https:\u002F\u002Farxiv.org\u002Fabs\u002F2304.06906)的说明，我们从原始的40个类别中保留了频率大于0.001的25个类别。\n\n[\u002F\u002F]: # (- &#40;Alternative&#41; Our preprocess data can also be downloaded [[here]&#40;&#41;], please agree the official license before download it.)\n\n- （可选）我们的预处理数据也可以从[[这里](https:\u002F\u002Fhuggingface.co\u002Fdatasets\u002FPointcept\u002Fstructured3d-compressed\n)]下载（包含透视视图和全景视图，解压后为471.7G），请在下载前同意官方许可。（使用以下命令解压：  \n `find .\u002F -name '*.tar.gz' | xargs -n 1 -P 15 -I {} sh -c 'tar -xzvf {}'`）\n\n- 将处理后的数据集链接到代码库。\n  ```bash\n  # PROCESSED_STRUCT3D_DIR：已处理的Structured3D数据集目录（输出目录）。\n  mkdir data\n  ln -s ${PROCESSED_STRUCT3D_DIR} ${CODEBASE_DIR}\u002Fdata\u002Fstructured3d\n  ```\n\n### SemanticKITTI\n- 下载[SemanticKITTI](http:\u002F\u002Fwww.semantic-kitti.org\u002Fdataset.html#download)数据集。\n- 将数据集链接到代码库。\n  ```bash\n  # 
SEMANTIC_KITTI_DIR：SemanticKITTI数据集目录。\n  # |- SEMANTIC_KITTI_DIR\n  #   |- dataset\n  #     |- sequences\n  #       |- 00\n  #       |- 01\n  #       |- ...\n  \n  mkdir -p data\n  ln -s ${SEMANTIC_KITTI_DIR} ${CODEBASE_DIR}\u002Fdata\u002Fsemantic_kitti\n  ```\n\n### nuScenes\n- 下载官方的[NuScene](https:\u002F\u002Fwww.nuscenes.org\u002Fnuscenes#download)数据集（包含激光雷达分割信息），并将下载的文件按如下方式组织：\n  ```bash\n  NUSCENES_DIR\n  │── samples\n  │── sweeps\n  │── lidarseg\n  ...\n  │── v1.0-trainval \n  │── v1.0-test\n  ```\n- 按照以下方式运行nuScenes的信息预处理代码（基于OpenPCDet修改）：\n  ```bash\n  # NUSCENES_DIR：已下载的nuScenes数据集目录。\n  # PROCESSED_NUSCENES_DIR：已处理的nuScenes数据集目录（输出目录）。\n  # MAX_SWEEPS：最大扫描次数。默认值：10。\n  pip install nuscenes-devkit pyquaternion\n  python pointcept\u002Fdatasets\u002Fpreprocessing\u002Fnuscenes\u002Fpreprocess_nuscenes_info.py --dataset_root ${NUSCENES_DIR} --output_root ${PROCESSED_NUSCENES_DIR} --max_sweeps ${MAX_SWEEPS} --with_camera\n  ```\n- （可选）我们的nuScenes信息预处理数据也可以从[[这里](\nhttps:\u002F\u002Fhuggingface.co\u002Fdatasets\u002FPointcept\u002Fnuscenes-compressed)]下载（仅包含处理后的信息，仍需下载原始数据并将其链接到相应文件夹），请在下载前同意官方许可。\n\n- 将原始数据集链接到已处理的NuScene数据集文件夹：\n  ```bash\n  # NUSCENES_DIR：已下载的nuScenes数据集目录。\n  # PROCESSED_NUSCENES_DIR：已处理的nuScenes数据集目录（输出目录）。\n  ln -s ${NUSCENES_DIR} {PROCESSED_NUSCENES_DIR}\u002Fraw\n  ```\n  此时，已处理的nuScenes文件夹将按如下方式组织：\n  ```bash\n  nuscene\n  |── raw\n      │── samples\n      │── sweeps\n      │── lidarseg\n      ...\n      │── v1.0-trainval\n      │── v1.0-test\n  |── info\n  ```\n\n- 将处理后的数据集链接到代码库。\n  ```bash\n  # PROCESSED_NUSCENES_DIR：已处理的nuScenes数据集目录（输出目录）。\n  mkdir data\n  ln -s ${PROCESSED_NUSCENES_DIR} ${CODEBASE_DIR}\u002Fdata\u002Fnuscenes\n  ```\n\n### Waymo\n- 下载官方的[Waymo](https:\u002F\u002Fwaymo.com\u002Fopen\u002Fdownload\u002F)数据集（v1.4.3），并将下载的文件按如下方式组织：\n  ```bash\n  WAYMO_RAW_DIR\n  │── training\n  │── validation\n  │── testing\n  ```\n- 安装以下依赖：\n  ```bash\n  # 如果提示“未找到匹配的发行版”，请直接从Pypi下载whl文件并安装该包。\n  conda create -n waymo python=3.10 -y\n  conda activate waymo\n  pip install waymo-open-dataset-tf-2-12-0\n  ```\n- 按照以下方式运行预处理代码：\n  ```bash\n  # WAYMO_DIR：已下载的Waymo数据集目录。\n  # PROCESSED_WAYMO_DIR：已处理的Waymo数据集目录（输出目录）。\n  # NUM_WORKERS：预处理时使用的进程数。\n  python pointcept\u002Fdatasets\u002Fpreprocessing\u002Fwaymo\u002Fpreprocess_waymo.py --dataset_root ${WAYMO_DIR} --output_root ${PROCESSED_WAYMO_DIR} --splits training validation --num_workers ${NUM_WORKERS}\n  ```\n\n- 将处理后的数据集链接到代码库。\n  ```bash\n  # PROCESSED_WAYMO_DIR：已处理的Waymo数据集目录（输出目录）。\n  mkdir data\n  ln -s ${PROCESSED_WAYMO_DIR} ${CODEBASE_DIR}\u002Fdata\u002Fwaymo\n  ```\n\n### ModelNet40\n- 下载[modelnet40_normal_resampled.zip](https:\u002F\u002Fhuggingface.co\u002Fdatasets\u002FPointcept\u002Fmodelnet40_normal_resampled-compressed)并解压。\n- 将数据集链接到代码库。\n  ```bash\n  mkdir -p data\n  ln -s ${MODELNET_DIR} ${CODEBASE_DIR}\u002Fdata\u002Fmodelnet40_normal_resampled\n  ```\n\n### ScanObjectNN\n  - 下载[ScanObjectNN](https:\u002F\u002Fforms.gle\u002FZZRnnmaUdwfRucoy7)数据集，包括`h5_files.zip`和`raw\u002Fobject_dataset.zip`。将它们分别解压到\\${BENCHMARK_SCANOBJECTNN_DIR}和${RAW_SCANOBJECTNN_DIR}。\n  ```\n  ln -s ${BENCHMARK_SCANOBJECTNN_DIR} data\u002Fscanobject_eval\n  ```\n\n### ShapeNetPart\n  - 下载[ShapeNetPart](https:\u002F\u002Fdrive.usercontent.google.com\u002Fdownload?id=1W3SEE-dY1sxvlECcOwWSDYemwHEUbJIS&authuser=0)。\n  - 将数据集链接到代码库。\n  ```bash\n  mkdir -p data\n  ln -s ${RAW_SHAPENETPART_DIR} ${CODEBASE_DIR}\u002Fdata\u002F\n  ```\n\n### PartNetE\n - 
下载[PartNetE](https:\u002F\u002Fdrive.google.com\u002Fdrive\u002Fu\u002F0\u002Ffolders\u002F13boiefNs2XvhoSvvDiaOATKAPB7XbE6g)（data.zip）\n - 按照以下方式运行PartNetE原始数据的预处理代码：\n\n  ```bash\n  # RAW_PARTNETE_DIR：已下载的PartNetE数据集目录。\n  python pointcept\u002Fdatasets\u002Fpreprocessing\u002Fpartnete\u002Fpreprocess_partnete.py --dataset_root ${RAW_PARTNETE_DIR}\n  ```\n - 将数据集链接到代码库。\n  ```bash\n  mkdir -p data\n  ln -s ${RAW_PARTNETE_DIR} ${CODEBASE_DIR}\u002Fdata\u002F\n  ```\n\n## 快速入门\n\n### 训练\n**从零开始训练。** 训练过程基于`configs`文件夹中的配置文件。\n训练脚本将在`exp`文件夹中生成一个实验文件夹，并将必要的代码备份到该实验文件夹中。\n训练配置、日志、TensorBoard记录以及检查点也会在训练过程中保存到该实验文件夹中。\n```bash\nexport CUDA_VISIBLE_DEVICES=${CUDA_VISIBLE_DEVICES}\n\n# 脚本（推荐）\nsh scripts\u002Ftrain.sh -p ${INTERPRETER_PATH} -g ${NUM_GPU} -d ${DATASET_NAME} -c ${CONFIG_NAME} -n ${EXP_NAME}\n# 直接运行\nexport PYTHONPATH=.\u002F\npython tools\u002Ftrain.py --config-file ${CONFIG_PATH} --num-gpus ${NUM_GPU} --options save_path=${SAVE_PATH}\n```\n\n例如：\n```bash\n# 通过脚本（推荐）\n# -p 默认设置为 python，可以忽略\nsh scripts\u002Ftrain.sh -p python -d scannet -c semseg-pt-v2m2-0-base -n semseg-pt-v2m2-0-base\n# 直接运行\nexport PYTHONPATH=.\u002F\npython tools\u002Ftrain.py --config-file configs\u002Fscannet\u002Fsemseg-pt-v2m2-0-base.py --options save_path=exp\u002Fscannet\u002Fsemseg-pt-v2m2-0-base\n```\n**从检查点恢复训练。** 如果训练过程意外中断，以下脚本可以从给定的检查点恢复训练。\n```bash\nexport CUDA_VISIBLE_DEVICES=${CUDA_VISIBLE_DEVICES}\n# 脚本（推荐）\n# 只需添加 \"-r true\"\nsh scripts\u002Ftrain.sh -p ${INTERPRETER_PATH} -g ${NUM_GPU} -d ${DATASET_NAME} -c ${CONFIG_NAME} -n ${EXP_NAME} -r true\n# 直接运行\nexport PYTHONPATH=.\u002F\npython tools\u002Ftrain.py --config-file ${CONFIG_PATH} --num-gpus ${NUM_GPU} --options save_path=${SAVE_PATH} resume=True weight=${CHECKPOINT_PATH}\n```\n**Weights and Biases。**\nPointcept 默认同时启用 `tensorboard` 和 `wandb`。关于 `wandb` 的一些使用注意事项如下：\n1. 可通过设置 `enable_wandb=False` 来禁用；\n2. 通过在终端中执行 `wandb login` 或在配置文件中设置 `wandb_key=YOUR_WANDB_KEY` 来与 `wandb` 远程服务器同步；\n3. 
项目名称默认为 “Pointcept”，可通过设置 `wandb_project=YOUR_PROJECT_NAME` 自定义为你的研究项目名称（例如 Sonata-Dev、PointTransformerV3-Dev）。\n\n### 测试\n在训练过程中，模型评估是在经过网格采样（体素化）后的点云上进行的，这提供了对模型性能的初步评估。~~然而，要获得精确的评估结果，测试是~~ **必不可少的** ~~（现在我们会在训练结束后自动运行测试流程，借助 `PreciseEvaluation` 钩子）~~。测试过程涉及将密集点云下采样为一系列体素化的点云，以确保全面覆盖所有点。然后对这些子结果进行预测并汇总，形成对整个点云的完整预测。这种方法相比简单地映射或插值预测，能够得到更高的评估结果。此外，我们的测试代码还支持 TTA（测试时增强）测试，这进一步提高了评估性能的稳定性。\n\n```bash\n# 通过脚本（基于训练脚本创建的实验文件夹）\nsh scripts\u002Ftest.sh -p ${INTERPRETER_PATH} -g ${NUM_GPU} -d ${DATASET_NAME} -n ${EXP_NAME} -w ${CHECKPOINT_NAME}\n# 直接运行\nexport PYTHONPATH=.\u002F\npython tools\u002Ftest.py --config-file ${CONFIG_PATH} --num-gpus ${NUM_GPU} --options save_path=${SAVE_PATH} weight=${CHECKPOINT_PATH}\n```\n例如：\n```bash\n# 通过脚本（基于训练脚本创建的实验文件夹）\n# -p 默认设置为 python，可以忽略\n# -w 默认设置为 model_best，也可以忽略\nsh scripts\u002Ftest.sh -p python -d scannet -n semseg-pt-v2m2-0-base -w model_best\n# 直接运行\nexport PYTHONPATH=.\u002F\npython tools\u002Ftest.py --config-file configs\u002Fscannet\u002Fsemseg-pt-v2m2-0-base.py --options save_path=exp\u002Fscannet\u002Fsemseg-pt-v2m2-0-base weight=exp\u002Fscannet\u002Fsemseg-pt-v2m2-0-base\u002Fmodel\u002Fmodel_best.pth\n```\n\n可以通过将 `data.test.test_cfg.aug_transform = [...]` 替换为以下内容来禁用 TTA：\n\n```python\ndata = dict(\n    train = dict(...),\n    val = dict(...),\n    test = dict(\n        ...,\n        test_cfg = dict(\n            ...,\n            aug_transform = [\n                [dict(type=\"RandomRotateTargetAngle\", angle=[0], axis=\"z\", center=[0, 0, 0], p=1)]\n            ]\n        )\n    )\n)\n```\n\n### Offset\n`Offset` 是批量数据中点云之间的分隔符，类似于 PyG 中的 `Batch` 概念。\n批量和 offset 的可视化说明如下：\n\u003Cp align=\"center\">\n    \u003C!-- pypi-strip -->\n    \u003Cpicture>\n    \u003Csource media=\"(prefers-color-scheme: dark)\" srcset=\"https:\u002F\u002Fraw.githubusercontent.com\u002Fpointcept\u002Fassets\u002Fmain\u002Fpointcept\u002Foffset_dark.png\">\n    \u003Csource media=\"(prefers-color-scheme: light)\" srcset=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FPointcept_Pointcept_readme_83d25dfed8e3.png\">\n    \u003C!-- \u002Fpypi-strip -->\n    \u003Cimg alt=\"pointcept\" src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FPointcept_Pointcept_readme_83d25dfed8e3.png\" width=\"480\">\n    \u003C!-- pypi-strip -->\n    \u003C\u002Fpicture>\u003Cbr>\n    \u003C!-- \u002Fpypi-strip -->\n\u003C\u002Fp>\n\n## 模型库\n### 1. 
主干网络与语义分割\n#### SparseUNet\n\n_Pointcept_ 提供了由 `SpConv` 和 `MinkowskiEngine` 实现的 `SparseUNet`。推荐使用 SpConv 版本，因为 SpConv 安装方便且速度比 MinkowskiEngine 更快。同时，SpConv 在室外感知领域也得到了广泛应用。\n\n- **SpConv（推荐）**\n\n代码库中的 SpConv 版本 `SparseUNet` 是完全从 `MinkowskiEngine` 版本重写而来的，示例运行脚本如下：\n\n```bash\n# ScanNet 验证集\nsh scripts\u002Ftrain.sh -g 4 -d scannet -c semseg-spunet-v1m1-0-base -n semseg-spunet-v1m1-0-base\n# ScanNet200\nsh scripts\u002Ftrain.sh -g 4 -d scannet200 -c semseg-spunet-v1m1-0-base -n semseg-spunet-v1m1-0-base\n# S3DIS\nsh scripts\u002Ftrain.sh -g 4 -d s3dis -c semseg-spunet-v1m1-0-base -n semseg-spunet-v1m1-0-base\n# S3DIS（带法线）\nsh scripts\u002Ftrain.sh -g 4 -d s3dis -c semseg-spunet-v1m1-0-cn-base -n semseg-spunet-v1m1-0-cn-base\n# SemanticKITTI\nsh scripts\u002Ftrain.sh -g 4 -d semantic_kitti -c semseg-spunet-v1m1-0-base -n semseg-spunet-v1m1-0-base\n# nuScenes\nsh scripts\u002Ftrain.sh -g 4 -d nuscenes -c semseg-spunet-v1m1-0-base -n semseg-spunet-v1m1-0-base\n# ModelNet40\nsh scripts\u002Ftrain.sh -g 2 -d modelnet40 -c cls-spunet-v1m1-0-base -n cls-spunet-v1m1-0-base\n\n# ScanNet 数据高效版\nsh scripts\u002Ftrain.sh -g 4 -d scannet -c semseg-spunet-v1m1-2-efficient-la20 -n semseg-spunet-v1m1-2-efficient-la20\nsh scripts\u002Ftrain.sh -g 4 -d scannet -c semseg-spunet-v1m1-2-efficient-la50 -n semseg-spunet-v1m1-2-efficient-la50\nsh scripts\u002Ftrain.sh -g 4 -d scannet -c semseg-spunet-v1m1-2-efficient-la100 -n semseg-spunet-v1m1-2-efficient-la100\nsh scripts\u002Ftrain.sh -g 4 -d scannet -c semseg-spunet-v1m1-2-efficient-la200 -n semseg-spunet-v1m1-2-efficient-la200\nsh scripts\u002Ftrain.sh -g 4 -d scannet -c semseg-spunet-v1m1-2-efficient-lr1 -n semseg-spunet-v1m1-2-efficient-lr1\nsh scripts\u002Ftrain.sh -g 4 -d scannet -c semseg-spunet-v1m1-2-efficient-lr5 -n semseg-spunet-v1m1-2-efficient-lr5\nsh scripts\u002Ftrain.sh -g 4 -d scannet -c semseg-spunet-v1m1-2-efficient-lr10 -n semseg-spunet-v1m1-2-efficient-lr10\nsh scripts\u002Ftrain.sh -g 4 -d scannet -c semseg-spunet-v1m1-2-efficient-lr20 -n semseg-spunet-v1m1-2-efficient-lr20\n\n# 使用 profiler 统计模型运行时间\nsh scripts\u002Ftrain.sh -g 4 -d scannet -c semseg-spunet-v1m1-0-enable-profiler -n semseg-spunet-v1m1-0-enable-profiler\n```\n\n- **MinkowskiEngine**\n\n代码库中 MinkowskiEngine 版本的 `SparseUNet` 基于原始的 MinkowskiEngine 仓库进行了修改，示例运行脚本如下：\n1. 安装 MinkowskiEngine，请参考：https:\u002F\u002Fgithub.com\u002FNVIDIA\u002FMinkowskiEngine\n2. 使用以下示例脚本来进行训练：\n```bash\n# 取消注释 \"pointcept\u002Fmodels\u002F__init__.py\" 中的 \"# from .sparse_unet import *\"\n# 取消注释 \"pointcept\u002Fmodels\u002Fsparse_unet\u002F__init__.py\" 中的 \"# from .mink_unet import *\"\n# ScanNet\nsh scripts\u002Ftrain.sh -g 4 -d scannet -c semseg-minkunet34c-0-base -n semseg-minkunet34c-0-base\n# ScanNet200\nsh scripts\u002Ftrain.sh -g 4 -d scannet200 -c semseg-minkunet34c-0-base -n semseg-minkunet34c-0-base\n# S3DIS\nsh scripts\u002Ftrain.sh -g 4 -d s3dis -c semseg-minkunet34c-0-base -n semseg-minkunet34c-0-base\n# SemanticKITTI\nsh scripts\u002Ftrain.sh -g 2 -d semantic_kitti -c semseg-minkunet34c-0-base -n semseg-minkunet34c-0-base\n```\n\n#### OA-CNNs\n**OA-CNNs**（全适配性 3D CNN）是一系列网络，通过集成一个轻量级模块，在几乎不增加计算成本的情况下，极大地提升了稀疏 CNN 的自适应能力。在没有任何自注意力模块的情况下，**OA-CNNs** 在室内和室外场景中均以更高的准确率、更低的延迟和更少的内存消耗，显著优于点云 Transformer。关于 **OA-CNNs** 的问题可以 @Pbihao。\n```bash\n# ScanNet\nsh scripts\u002Ftrain.sh -g 4 -d scannet -c semseg-oacnns-v1m1-0-base -n semseg-oacnns-v1m1-0-base\n```\n\n#### 点云 Transformer\n- **LitePT**\n\nLitePT（CVPR 2026）是一种最先进的点云骨干网络，与之前的点云 Transformer 相比，其性能更优或相当，同时效率大幅提升。\n\n1. 
需要额外准备：\n\n编译 PointROPE 的 CUDA 实现。否则，系统将自动回退到较慢的 PyTorch 实现。\n\n```bash\ncd libs\u002Fpointrope\npython setup.py install\ncd ..\u002F..\n```\n\n2. 示例运行脚本：\n\n该模型注册为 `LitePT-v1`，并共享于小型\u002F基础\u002F大型变体。在配置文件名中，`v1m1` 表示轻量级解码器（无卷积\u002F注意力），而 `v1m2` 则在选定阶段使用带有卷积或注意力的解码器。详细信息请参阅论文第 4.1 节中的“解码器设计”。\n\n```bash\n### NuScenes + LitePT-S\nsh scripts\u002Ftrain.sh -g 4 -d nuscenes -c semseg-litept-v1m1-0-small -n semseg-litept-v1m1-0-small\n### Waymo + LitePT-S\nsh scripts\u002Ftrain.sh -g 4 -d waymo -c semseg-litept-v1m1-0-small -n semseg-litept-v1m1-0-small\n### ScanNet + LitePT-S\nsh scripts\u002Ftrain.sh -g 4 -d scannet -c semseg-litept-v1m1-0-small -n semseg-litept-v1m1-0-small\n### Structured3D + LitePT-S\nsh scripts\u002Ftrain.sh -g 16 -d structured3d -c semseg-litept-v1m1-0-small -n semseg-litept-v1m1-0-small\n### Structured3D + LitePT-B\nsh scripts\u002Ftrain.sh -g 16 -d structured3d -c semseg-litept-v1m1-0-base -n semseg-litept-v1m1-0-base\n### Structured3D + LitePT-L\nsh scripts\u002Ftrain.sh -g 16 -d structured3d -c semseg-litept-v1m1-0-large -n semseg-litept-v1m1-0-large\n```\n\n详细的说明和权重可在 [项目仓库](https:\u002F\u002Fgithub.com\u002Fprs-eth\u002FLitePT) 中找到。\n\n- **PTv3**\n\n[PTv3](https:\u002F\u002Farxiv.org\u002Fabs\u002F2312.10035) 是一种高效的骨干模型，在室内和室外场景中均达到了 SOTA 性能。完整的 PTv3 依赖于 FlashAttention，而 FlashAttention 又依赖于 CUDA 11.6 及以上版本，因此请确保本地 Pointcept 环境满足要求。\n\n如果您无法升级本地环境以满足要求（CUDA ≥ 11.6），则可以通过将模型参数 `enable_flash` 设置为 `false`，并将 `enc_patch_size` 和 `dec_patch_size` 降低至一定水平（例如 128）来禁用 FlashAttention。\n\n启用 FlashAttention 会强制禁用 RPE，并将精度降至 fp16。如果您需要这些功能，请禁用 `enable_flash` 并调整 `enable_rpe`、`upcast_attention` 和 `upcast_softmax`。\n\n详细的说明和实验记录（包含权重）可在 [项目仓库](https:\u002F\u002Fgithub.com\u002FPointcept\u002FPointTransformerV3) 中找到。示例运行脚本如下：\n```bash\n# 从头开始训练 ScanNet\nsh scripts\u002Ftrain.sh -g 4 -d scannet -c semseg-pt-v3m1-0-base -n semseg-pt-v3m1-0-base\n# PPT 联合训练（ScanNet + Structured3D），并在 ScanNet 上评估\nsh scripts\u002Ftrain.sh -g 8 -d scannet -c semseg-pt-v3m1-1-ppt-extreme -n semseg-pt-v3m1-1-ppt-extreme\n\n# 从头开始训练 ScanNet200\nsh scripts\u002Ftrain.sh -g 4 -d scannet200 -c semseg-pt-v3m1-0-base -n semseg-pt-v3m1-0-base\n# 基于 PPT 联合训练（ScanNet + Structured3D）对 ScanNet200 进行微调\n# PTV3_PPT_WEIGHT_PATH：PPT 多数据集联合训练所得到的模型权重路径\n# 例如：exp\u002Fscannet\u002Fsemseg-pt-v3m1-1-ppt-extreme\u002Fmodel\u002Fmodel_best.pth\nsh scripts\u002Ftrain.sh -g 4 -d scannet200 -c semseg-pt-v3m1-1-ppt-ft -n semseg-pt-v3m1-1-ppt-ft -w ${PTV3_PPT_WEIGHT_PATH}\n\n# 从头开始训练 ScanNet++\nsh scripts\u002Ftrain.sh -g 4 -d scannetpp -c semseg-pt-v3m1-0-base -n semseg-pt-v3m1-0-base\n# 对 ScanNet++ 进行测试\nsh scripts\u002Ftrain.sh -g 4 -d scannetpp -c semseg-pt-v3m1-1-submit -n semseg-pt-v3m1-1-submit\n\n\n# 从头开始训练 S3DIS\nsh scripts\u002Ftrain.sh -g 4 -d s3dis -c semseg-pt-v3m1-0-base -n semseg-pt-v3m1-0-base\n# 禁用 flash_attention 并启用 rpe 的示例。\nsh scripts\u002Ftrain.sh -g 4 -d s3dis -c semseg-pt-v3m1-1-rpe -n semseg-pt-v3m1-0-rpe\n# PPT 联合训练（ScanNet + S3DIS + Structured3D），并在 ScanNet 上评估\nsh scripts\u002Ftrain.sh -g 8 -d s3dis -c semseg-pt-v3m1-1-ppt-extreme -n semseg-pt-v3m1-1-ppt-extreme\n# S3DIS 6 折交叉验证\n# 1. 默认配置是在 Area_5 上评估的，需修改 \"data.train.split\"、\"data.val.split\" 和 \"data.test.split\"，使配置分别在 Area_1 至 Area_6 上评估。\n# 2. 在每个区域划分上训练并评估模型，将位于 \"exp\u002Fs3dis\u002FEXP_NAME\u002Fresult\u002FArea_x.pth\" 的结果文件收集到一个文件夹中，称为 RECORD_FOLDER。\n# 3. 
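例如：若要改为在 Area_1 上评估（仅为示意，\"-area1\" 配置名为假设命名，并非官方配置），可复制基础配置，将 \"data.train.split\" 设为其余五个区域，将 \"data.val.split\" 和 \"data.test.split\" 设为 \"Area_1\"，然后照常训练：\n#    sh scripts\u002Ftrain.sh -g 4 -d s3dis -c semseg-pt-v3m1-0-base-area1 -n semseg-pt-v3m1-0-base-area1\n# 4. 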
运行以下脚本以获得 S3DIS 6 折交叉验证的性能：\nexport PYTHONPATH=.\u002F\npython tools\u002Ftest_s3dis_6fold.py --record_root ${RECORD_FOLDER}\n\n# 从头开始训练 nuScenes\nsh scripts\u002Ftrain.sh -g 4 -d nuscenes -c semseg-pt-v3m1-0-base -n semseg-pt-v3m1-0-base\n# 从头开始训练 Waymo\nsh scripts\u002Ftrain.sh -g 4 -d waymo -c semseg-pt-v3m1-0-base -n semseg-pt-v3m1-0-base\n\n# PTv3 的更多配置和实验记录将很快发布。\n```\n\n室内语义分割  \n| 模型 | 基准 | 额外数据 | GPU 数量 | 验证 mIoU | 配置 | TensorBoard | 实验记录 |\n| :---: | :---: |:---------------:| :---: | :---: | :---: | :---: | :---: |\n| PTv3 | ScanNet |     &cross;     | 4 | 77.6% | [链接](https:\u002F\u002Fgithub.com\u002FPointcept\u002FPointcept\u002Fblob\u002Fmain\u002Fconfigs\u002Fscannet\u002Fsemseg-pt-v3m1-0-base.py) | [链接](https:\u002F\u002Fhuggingface.co\u002FPointcept\u002FPointTransformerV3\u002Ftensorboard) | [链接](https:\u002F\u002Fhuggingface.co\u002FPointcept\u002FPointTransformerV3\u002Ftree\u002Fmain\u002Fscannet-semseg-pt-v3m1-0-base) |\n| PTv3 + PPT | ScanNet |     &check;     | 8 | 78.5% | [链接](https:\u002F\u002Fgithub.com\u002FPointcept\u002FPointcept\u002Fblob\u002Fmain\u002Fconfigs\u002Fscannet\u002Fsemseg-pt-v3m1-1-ppt-extreme.py) | [链接](https:\u002F\u002Fhuggingface.co\u002FPointcept\u002FPointTransformerV3\u002Ftensorboard) | [链接](https:\u002F\u002Fhuggingface.co\u002FPointcept\u002FPointTransformerV3\u002Ftree\u002Fmain\u002Fscannet-semseg-pt-v3m1-1-ppt-extreme) |\n| PTv3 | ScanNet200 |     &cross;     | 4 | 35.3% | [链接](https:\u002F\u002Fgithub.com\u002FPointcept\u002FPointcept\u002Fblob\u002Fmain\u002Fconfigs\u002Fscannet200\u002Fsemseg-pt-v3m1-0-base.py) | [链接](https:\u002F\u002Fhuggingface.co\u002FPointcept\u002FPointTransformerV3\u002Ftensorboard) |[链接](https:\u002F\u002Fhuggingface.co\u002FPointcept\u002FPointTransformerV3\u002Ftree\u002Fmain\u002Fscannet200-semseg-pt-v3m1-0-base)|\n| PTv3 | S3DIS (Area5) |     &cross;     | 4 | 73.6% | [链接](https:\u002F\u002Fgithub.com\u002FPointcept\u002FPointcept\u002Fblob\u002Fmain\u002Fconfigs\u002Fs3dis\u002Fsemseg-pt-v3m1-0-rpe.py) | [链接](https:\u002F\u002Fhuggingface.co\u002FPointcept\u002FPointTransformerV3\u002Ftensorboard) | [链接](https:\u002F\u002Fhuggingface.co\u002FPointcept\u002FPointTransformerV3\u002Ftree\u002Fmain\u002Fs3dis-semseg-pt-v3m1-0-rpe) |\n| PTv3 + PPT | S3DIS (Area5) |     &check;     | 8 | 75.4% | [链接](https:\u002F\u002Fgithub.com\u002FPointcept\u002FPointcept\u002Fblob\u002Fmain\u002Fconfigs\u002Fs3dis\u002Fsemseg-pt-v3m1-1-ppt-extreme.py) | [链接](https:\u002F\u002Fhuggingface.co\u002FPointcept\u002FPointTransformerV3\u002Ftensorboard) | [链接](https:\u002F\u002Fhuggingface.co\u002FPointcept\u002FPointTransformerV3\u002Ftree\u002Fmain\u002Fs3dis-semseg-pt-v3m1-1-ppt-extreme) |\n_**\\*已发布的模型权重是基于 v1.5.1 训练的，v1.5.2 及更高版本的权重仍在训练中。**_\n\n- **PTv2 mode2**\n\n原始的 PTv2 是在 4 张 RTX a6000（每张显存 48G）上训练的。即使启用了 AMP，原始 PTv2 的显存占用仍略高于 24G。考虑到 24G 显存的显卡更为普及，我在最新的 Pointcept 上对 PTv2 进行了调整，使其能够在 4 张 RTX 3090 的机器上运行。\n\n`PTv2 Mode2` 启用 AMP，并禁用了 _Position Encoding Multiplier_ 和 _Grouped Linear_。在进一步的研究中，我们发现点云理解并不需要精确的坐标信息（用网格坐标替代精确坐标并不会影响性能，SparseUNet 就是一个例子）。至于 Grouped Linear，我发现我实现的 Grouped Linear 比 PyTorch 自带的 Linear 层更占显存。得益于代码库和更好的参数调优，我们也缓解了过拟合问题。复现的效果甚至优于我们在论文中报告的结果。\n\n示例运行脚本如下：\n\n```bash\n# ptv2m2: PTv2 mode2，禁用 PEM 和 Grouped Linear，显存占用 \u003C 24G（推荐）\n# ScanNet\nsh scripts\u002Ftrain.sh -g 4 -d scannet -c semseg-pt-v2m2-0-base -n semseg-pt-v2m2-0-base\nsh scripts\u002Ftrain.sh -g 4 -d scannet -c semseg-pt-v2m2-3-lovasz -n semseg-pt-v2m2-3-lovasz\n\n# ScanNet 测试\nsh scripts\u002Ftrain.sh -g 4 -d scannet -c 
semseg-pt-v2m2-1-submit -n semseg-pt-v2m2-1-submit\n# ScanNet200\nsh scripts\u002Ftrain.sh -g 4 -d scannet200 -c semseg-pt-v2m2-0-base -n semseg-pt-v2m2-0-base\n# ScanNet++\nsh scripts\u002Ftrain.sh -g 4 -d scannetpp -c semseg-pt-v2m2-0-base -n semseg-pt-v2m2-0-base\n# ScanNet++ 测试\nsh scripts\u002Ftrain.sh -g 4 -d scannetpp -c semseg-pt-v2m2-1-submit -n semseg-pt-v2m2-1-submit\n# S3DIS\nsh scripts\u002Ftrain.sh -g 4 -d s3dis -c semseg-pt-v2m2-0-base -n semseg-pt-v2m2-0-base\n# SemanticKITTI\nsh scripts\u002Ftrain.sh -g 4 -d semantic_kitti -c semseg-pt-v2m2-0-base -n semseg-pt-v2m2-0-base\n# nuScenes\nsh scripts\u002Ftrain.sh -g 4 -d nuscenes -c semseg-pt-v2m2-0-base -n semseg-pt-v2m2-0-base\n```\n\n- **PTv2 mode1**\n\n`PTv2 mode1` 是我们在论文中报道的原始 PTv2，示例运行脚本如下：\n\n```bash\n# ptv2m1: PTv2 mode1，原始 PTv2，显存占用 > 24G\n# ScanNet\nsh scripts\u002Ftrain.sh -g 4 -d scannet -c semseg-pt-v2m1-0-base -n semseg-pt-v2m1-0-base\n# ScanNet200\nsh scripts\u002Ftrain.sh -g 4 -d scannet200 -c semseg-pt-v2m1-0-base -n semseg-pt-v2m1-0-base\n# S3DIS\nsh scripts\u002Ftrain.sh -g 4 -d s3dis -c semseg-pt-v2m1-0-base -n semseg-pt-v2m1-0-base\n```\n\n- **PTv1**\n\n原始的 PTv1 也在我们的 Pointcept 代码库中提供。虽然我已经很久没有运行 PTv1 了，但我已经确认示例运行脚本可以正常工作。\n\n```bash\n# ScanNet\nsh scripts\u002Ftrain.sh -g 4 -d scannet -c semseg-pt-v1-0-base -n semseg-pt-v1-0-base\n# ScanNet200\nsh scripts\u002Ftrain.sh -g 4 -d scannet200 -c semseg-pt-v1-0-base -n semseg-pt-v1-0-base\n# S3DIS\nsh scripts\u002Ftrain.sh -g 4 -d s3dis -c semseg-pt-v1-0-base -n semseg-pt-v1-0-base\n```\n\n\n#### 分层 Transformer\n1. 需要额外安装：\n```bash\npip install torch-points3d\n# 修复因安装 torch-points3d 导致的依赖问题\npip uninstall SharedArray\npip install SharedArray==3.2.1\n\ncd libs\u002Fpointops2\npython setup.py install\ncd ..\u002F..\n```\n2. 在 `pointcept\u002Fmodels\u002F__init__.py` 中取消注释 `# from .stratified_transformer import *`。\n3. 参阅 [可选安装](installation) 来安装依赖。\n4. 使用以下示例脚本来训练：\n```bash\n# stv1m1: 分层 Transformer mode1，基于原始分层 Transformer 代码修改。\n# PTv2m2: 分层 Transformer mode2，我的重写版本（推荐）。\n\n# ScanNet\nsh scripts\u002Ftrain.sh -g 4 -d scannet -c semseg-st-v1m2-0-refined -n semseg-st-v1m2-0-refined\nsh scripts\u002Ftrain.sh -g 4 -d scannet -c semseg-st-v1m1-0-origin -n semseg-st-v1m1-0-origin\n# ScanNet200\nsh scripts\u002Ftrain.sh -g 4 -d scannet200 -c semseg-st-v1m2-0-refined -n semseg-st-v1m2-0-refined\n# S3DIS\nsh scripts\u002Ftrain.sh -g 4 -d s3dis -c semseg-st-v1m2-0-refined -n semseg-st-v1m2-0-refined\n```\n\n#### SPVCNN\n`SPVCNN` 是 [SPVNAS](https:\u002F\u002Fgithub.com\u002Fmit-han-lab\u002Fspvnas) 的基线模型，同时也是室外数据集的一个实用基线。\n1. 安装 torchsparse：\n```bash\n# 参考 https:\u002F\u002Fgithub.com\u002Fmit-han-lab\u002Ftorchsparse\n# 不使用 sudo apt install 的安装方法\nconda install google-sparsehash -c bioconda\nexport C_INCLUDE_PATH=${CONDA_PREFIX}\u002Finclude:$C_INCLUDE_PATH\nexport CPLUS_INCLUDE_PATH=${CONDA_PREFIX}\u002Finclude:CPLUS_INCLUDE_PATH\npip install --upgrade git+https:\u002F\u002Fgithub.com\u002Fmit-han-lab\u002Ftorchsparse.git\n```\n2. 使用以下示例脚本来训练：\n```bash\n\n# SemanticKITTI\nsh scripts\u002Ftrain.sh -g 2 -d semantic_kitti -c semseg-spvcnn-v1m1-0-base -n semseg-spvcnn-v1m1-0-base\n```\n\n#### OctFormer\nOctFormer 来自论文 _OctFormer: 基于八叉树的 3D 点云 Transformer_。\n1. 需要安装的额外依赖：\n```bash\ncd libs\ngit clone https:\u002F\u002Fgithub.com\u002Foctree-nn\u002Fdwconv.git\npip install .\u002Fdwconv\npip install ocnn\n```\n2. 在 `pointcept\u002Fmodels\u002F__init__.py` 中取消注释 `# from .octformer import *`。\n2. 
使用以下示例脚本进行训练：\n```bash\n# ScanNet\nsh scripts\u002Ftrain.sh -g 4 -d scannet -c semseg-octformer-v1m1-0-base -n semseg-octformer-v1m1-0-base\n```\n\n#### Swin3D\nSwin3D 来自论文 _Swin3D: 用于 3D 室内场景理解的预训练 Transformer 主干网络_。\n1. 需要安装的额外依赖：\n```bash\n# 1. 安装 MinkEngine v0.5.4，按照 https:\u002F\u002Fgithub.com\u002FNVIDIA\u002FMinkowskiEngine 中的说明操作；\n# 2. 安装 Swin3D，主要用于 CUDA 运行：\ncd libs\ngit clone https:\u002F\u002Fgithub.com\u002Fmicrosoft\u002FSwin3D.git\ncd Swin3D\npip install .\u002F\n```\n2. 在 `pointcept\u002Fmodels\u002F__init__.py` 中取消注释 `# from .swin3d import *`。\n3. 使用以下示例脚本进行预训练（Structured3D 的预处理请参考 [这里](#structured3d)）：\n```bash\n# Structured3D + Swin-S\nsh scripts\u002Ftrain.sh -g 4 -d structured3d -c semseg-swin3d-v1m1-0-small -n semseg-swin3d-v1m1-0-small\n# Structured3D + Swin-L\nsh scripts\u002Ftrain.sh -g 4 -d structured3d -c semseg-swin3d-v1m1-1-large -n semseg-swin3d-v1m1-1-large\n\n# 补充\n# Structured3D + SpUNet\nsh scripts\u002Ftrain.sh -g 4 -d structured3d -c semseg-spunet-v1m1-0-base -n semseg-spunet-v1m1-0-base\n# Structured3D + PTv2\nsh scripts\u002Ftrain.sh -g 4 -d structured3d -c semseg-pt-v2m2-0-base -n semseg-pt-v2m2-0-base\n```\n4. 使用以下示例脚本进行微调：\n```bash\n# ScanNet + Swin-S\nsh scripts\u002Ftrain.sh -g 4 -d scannet -w exp\u002Fstructured3d\u002Fsemseg-swin3d-v1m1-1-large\u002Fmodel\u002Fmodel_last.pth -c semseg-swin3d-v1m1-0-small -n semseg-swin3d-v1m1-0-small\n# ScanNet + Swin-L\nsh scripts\u002Ftrain.sh -g 4 -d scannet -w exp\u002Fstructured3d\u002Fsemseg-swin3d-v1m1-1-large\u002Fmodel\u002Fmodel_last.pth -c semseg-swin3d-v1m1-1-large -n semseg-swin3d-v1m1-1-large\n\n# S3DIS + Swin-S（此处提供支持 S3DIS 法向量的配置）\nsh scripts\u002Ftrain.sh -g 4 -d s3dis -w exp\u002Fstructured3d\u002Fsemseg-swin3d-v1m1-1-large\u002Fmodel\u002Fmodel_last.pth -c semseg-swin3d-v1m1-0-small -n semseg-swin3d-v1m1-0-small\n# S3DIS + Swin-L（此处提供支持 S3DIS 法向量的配置）\nsh scripts\u002Ftrain.sh -g 4 -d s3dis -w exp\u002Fstructured3d\u002Fsemseg-swin3d-v1m1-1-large\u002Fmodel\u002Fmodel_last.pth -c semseg-swin3d-v1m1-1-large -n semseg-swin3d-v1m1-1-large\n```\n\n#### 上下文感知分类器\n`上下文感知分类器` 是一种可以进一步提升各主干网络性能的分割模型，可替代 `默认分割器`。使用以下示例脚本进行训练：\n```bash\n# ScanNet\nsh scripts\u002Ftrain.sh -g 4 -d scannet -c semseg-cac-v1m1-0-spunet-base -n semseg-cac-v1m1-0-spunet-base\nsh scripts\u002Ftrain.sh -g 4 -d scannet -c semseg-cac-v1m1-1-spunet-lovasz -n semseg-cac-v1m1-1-spunet-lovasz\nsh scripts\u002Ftrain.sh -g 4 -d scannet -c semseg-cac-v1m1-2-ptv2-lovasz -n semseg-cac-v1m1-2-ptv2-lovasz\n\n# ScanNet200\nsh scripts\u002Ftrain.sh -g 4 -d scannet200 -c semseg-cac-v1m1-0-spunet-base -n semseg-cac-v1m1-0-spunet-base\nsh scripts\u002Ftrain.sh -g 4 -d scannet200 -c semseg-cac-v1m1-1-spunet-lovasz -n semseg-cac-v1m1-1-spunet-lovasz\nsh scripts\u002Ftrain.sh -g 4 -d scannet200 -c semseg-cac-v1m1-2-ptv2-lovasz -n semseg-cac-v1m1-2-ptv2-lovasz\n```\n\n\n### 2. 实例分割\n#### PointGroup\n[PointGroup](https:\u002F\u002Fgithub.com\u002Fdvlab-research\u002FPointGroup) 是点云实例分割的一个基准框架。\n1. 需要安装的额外依赖：\n```bash\nconda install -c bioconda google-sparsehash \ncd libs\u002Fpointgroup_ops\npython setup.py install --include_dirs=${CONDA_PREFIX}\u002Finclude\ncd ..\u002F..\n```\n2. 在 `pointcept\u002Fmodels\u002F__init__.py` 中取消注释 `# from .point_group import *`。\n3. 
使用以下示例脚本进行训练：\n```bash\n# ScanNet\nsh scripts\u002Ftrain.sh -g 4 -d scannet -c insseg-pointgroup-v1m1-0-spunet-base -n insseg-pointgroup-v1m1-0-spunet-base\n# S3DIS\nsh scripts\u002Ftrain.sh -g 4 -d scannet -c insseg-pointgroup-v1m1-0-spunet-base -n insseg-pointgroup-v1m1-0-spunet-base\n```\n\n### 3. 预训练\n#### Utonia\n请按照 [这里](https:\u002F\u002Fgithub.com\u002FPointcept\u002FPointcept\u002Ftree\u002Fmain\u002Fpointcept\u002Fmodels\u002Futonia) 的说明进行操作。\n\n#### Concerto\n请按照 [这里](https:\u002F\u002Fgithub.com\u002FPointcept\u002FPointcept\u002Ftree\u002Fmain\u002Fpointcept\u002Fmodels\u002Fconcerto) 的说明进行操作。\n\n#### Sonata\n请按照 [这里](https:\u002F\u002Fgithub.com\u002FPointcept\u002FPointcept\u002Ftree\u002Fmain\u002Fpointcept\u002Fmodels\u002Fsonata) 的说明进行操作。\n\n#### 掩码场景对比学习 (MSC)\n1. 使用以下示例脚本进行预训练：\n```bash\n# ScanNet\nsh scripts\u002Ftrain.sh -g 8 -d scannet -c pretrain-msc-v1m1-0-spunet-base -n pretrain-msc-v1m1-0-spunet-base\n```\n\n2. 使用以下示例脚本进行微调：  \n在进行实例分割任务的微调之前，请先启用 PointGroup（[这里](#pointgroup)）。\n```bash\n# ScanNet20 语义分割\nsh scripts\u002Ftrain.sh -g 8 -d scannet -w exp\u002Fscannet\u002Fpretrain-msc-v1m1-0-spunet-base\u002Fmodel\u002Fmodel_last.pth -c semseg-spunet-v1m1-4-ft -n semseg-msc-v1m1-0f-spunet-base\n# ScanNet20 实例分割（运行脚本前需启用 PointGroup）\nsh scripts\u002Ftrain.sh -g 4 -d scannet -w exp\u002Fscannet\u002Fpretrain-msc-v1m1-0-spunet-base\u002Fmodel\u002Fmodel_last.pth -c insseg-pointgroup-v1m1-0-spunet-base -n insseg-msc-v1m1-0f-pointgroup-spunet-base\n```\n3. 示例日志和权重：[[预训练](https:\u002F\u002Fconnecthkuhk-my.sharepoint.com\u002F:u:\u002Fg\u002Fpersonal\u002Fwuxy_connect_hku_hk\u002FEYvNV4XUJ_5Mlk-g15RelN4BW_P8lVBfC_zhjC_BlBDARg?e=UoGFWH)] [[语义分割](https:\u002F\u002Fconnecthkuhk-my.sharepoint.com\u002F:u:\u002Fg\u002Fpersonal\u002Fwuxy_connect_hku_hk\u002FEQkDiv5xkOFKgCpGiGtAlLwBon7i8W6my3TIbGVxuiTttQ?e=tQFnbr)]\n\n#### 点提示训练 (PPT)\nPPT 提出了一种多数据集的预训练框架，它与现有的各种预训练框架和主干网络兼容。\n1. 使用以下示例脚本进行 PPT 监督联合训练：\n```bash\n# ScanNet + Structured3d，在 ScanNet 上验证（S3DIS 可能导致数据加载时间过长，为快速验证可不包含 S3DIS）>= 3090 * 8\nsh scripts\u002Ftrain.sh -g 8 -d scannet -c semseg-ppt-v1m1-0-sc-st-spunet -n semseg-ppt-v1m1-0-sc-st-spunet\nsh scripts\u002Ftrain.sh -g 8 -d scannet -c semseg-ppt-v1m1-1-sc-st-spunet-submit -n semseg-ppt-v1m1-1-sc-st-spunet-submit\n# ScanNet + S3DIS + Structured3d，在 S3DIS 上验证（>= a100 * 8）\nsh scripts\u002Ftrain.sh -g 8 -d s3dis -c semseg-ppt-v1m1-0-s3-sc-st-spunet -n semseg-ppt-v1m1-0-s3-sc-st-spunet\n# SemanticKITTI + nuScenes + Waymo，在 SemanticKITTI 上验证（bs12 >= 3090 * 4 >= 3090 * 8，v1m1-0 仍在调试中）\nsh scripts\u002Ftrain.sh -g 4 -d semantic_kitti -c semseg-ppt-v1m1-0-nu-sk-wa-spunet -n semseg-ppt-v1m1-0-nu-sk-wa-spunet\nsh scripts\u002Ftrain.sh -g 4 -d semantic_kitti -c semseg-ppt-v1m2-0-sk-nu-wa-spunet -n semseg-ppt-v1m2-0-sk-nu-wa-spunet\nsh scripts\u002Ftrain.sh -g 4 -d semantic_kitti -c semseg-ppt-v1m2-1-sk-nu-wa-spunet-submit -n semseg-ppt-v1m2-1-sk-nu-wa-spunet-submit\n\n# SemanticKITTI + nuScenes + Waymo，在 nuScenes 上进行验证（bs12 >= 3090 * 4；bs24 >= 3090 * 8，v1m1-0 仍在调参中）\nsh scripts\u002Ftrain.sh -g 4 -d nuscenes -c semseg-ppt-v1m1-0-nu-sk-wa-spunet -n semseg-ppt-v1m1-0-nu-sk-wa-spunet\nsh scripts\u002Ftrain.sh -g 4 -d nuscenes -c semseg-ppt-v1m2-0-nu-sk-wa-spunet -n semseg-ppt-v1m2-0-nu-sk-wa-spunet\nsh scripts\u002Ftrain.sh -g 4 -d nuscenes -c semseg-ppt-v1m2-1-nu-sk-wa-spunet-submit -n semseg-ppt-v1m2-1-nu-sk-wa-spunet-submit\n```\n\n#### PointContrast\n1. 
预处理并链接 ScanNet-Pair 数据集（与 ScanNet 原始 RGB-D 帧进行成对匹配，约 1.5T）：\n```bash\n# RAW_SCANNET_DIR：下载的 ScanNet v2 原始数据集目录。\n# PROCESSED_SCANNET_PAIR_DIR：处理后的 ScanNet 对数据集目录（输出目录）。\npython pointcept\u002Fdatasets\u002Fpreprocessing\u002Fscannet\u002Fscannet_pair\u002Fpreprocess.py --dataset_root ${RAW_SCANNET_DIR} --output_root ${PROCESSED_SCANNET_PAIR_DIR}\nln -s ${PROCESSED_SCANNET_PAIR_DIR} ${CODEBASE_DIR}\u002Fdata\u002Fscannet\n```\n2. 使用以下示例脚本进行预训练：\n```bash\n# ScanNet\nsh scripts\u002Ftrain.sh -g 8 -d scannet -c pretrain-msc-v1m1-1-spunet-pointcontrast -n pretrain-msc-v1m1-1-spunet-pointcontrast\n```\n3. 微调请参考 [MSC](#masked-scene-contrast-msc)。\n\n#### 对比场景上下文\n1. 预处理并链接 ScanNet-Pair 数据集（参考 [PointContrast](#pointcontrast)）：\n2. 使用以下示例脚本进行预训练：\n```bash\n# ScanNet\nsh scripts\u002Ftrain.sh -g 8 -d scannet -c pretrain-msc-v1m2-0-spunet-csc -n pretrain-msc-v1m2-0-spunet-csc\n```\n3. 微调请参考 [MSC](#masked-scene-contrast-msc)。\n\n## 致谢\n_Pointcept_ 由 [Xiaoyang](https:\u002F\u002Fxywu.me\u002F) 设计，命名者为 [Yixing](https:\u002F\u002Fgithub.com\u002Fyxlao)，标志由 [Yuechen](https:\u002F\u002Fjulianjuaner.github.io\u002F) 创作。它源自 [Hengshuang](https:\u002F\u002Fhszhao.github.io\u002F) 的 [Semseg](https:\u002F\u002Fgithub.com\u002Fhszhao\u002Fsemseg)，并受到多个仓库的启发，例如 [MinkowskiEngine](https:\u002F\u002Fgithub.com\u002FNVIDIA\u002FMinkowskiEngine)、[pointnet2](https:\u002F\u002Fgithub.com\u002Fcharlesq34\u002Fpointnet2)、[mmcv](https:\u002F\u002Fgithub.com\u002Fopen-mmlab\u002Fmmcv\u002Ftree\u002Fmaster\u002Fmmcv) 和 [Detectron2](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fdetectron2)。","# Pointcept 快速上手指南\n\nPointcept 是一个强大且灵活的点云感知研究代码库，集成了 Point Transformer V3、Utonia、Concerto、Sonata 等多个 SOTA 模型及预训练框架。本指南将帮助你快速搭建环境并运行基础示例。\n\n## 1. 环境准备\n\n在开始之前，请确保你的系统满足以下要求：\n\n*   **操作系统**: Linux (推荐 Ubuntu 18.04\u002F20.04\u002F22.04)\n*   **Python**: 3.8 或更高版本\n*   **GPU**: 支持 CUDA 的 NVIDIA GPU (建议显存 ≥ 16GB 以运行大型模型)\n*   **CUDA**: 根据显卡驱动安装对应的 CUDA Toolkit (推荐 11.3+)\n*   **PyTorch**: 需预先安装与 CUDA 版本匹配的 PyTorch\n\n**前置依赖安装：**\n\n建议先创建虚拟环境并安装 PyTorch。国内用户可使用清华源加速：\n\n```bash\n# 创建虚拟环境\nconda create -n pointcept python=3.9 -y\nconda activate pointcept\n\n# 安装 PyTorch (以 CUDA 11.8 为例，其他版本请访问 pytorch.org 获取对应命令)\npip install torch torchvision torchaudio --index-url https:\u002F\u002Fdownload.pytorch.org\u002Fwhl\u002Fcu118\n```\n\n## 2. 安装步骤\n\n你可以选择从源码安装（推荐，便于修改和调试）或通过 pip 安装。\n\n### 方式一：从源码安装（推荐）\n\n克隆仓库并安装依赖：\n\n```bash\n# 克隆项目\ngit clone https:\u002F\u002Fgithub.com\u002Fpointcept\u002Fpointcept.git\ncd pointcept\n\n# 安装核心依赖\npip install -r requirements.txt\n\n# 安装 Pointcept 包\npip install -e .\n```\n\n> **注意**：部分稀疏卷积后端（如 MinkowskiEngine, SpConv）可能需要单独编译安装。如果遇到编译错误，请参考各子模块的官方文档安装对应的系统级依赖（如 `libopenblas-dev` 等）。\n\n### 方式二：通过 pip 安装\n\n如果你仅需使用预置模型进行推理或微调，可直接安装发布版：\n\n```bash\npip install pointcept\n```\n\n## 3. 
基本使用\n\nPointcept 采用配置文件驱动的方式运行。以下以 **ScanNet** 数据集上的语义分割任务为例，展示最基础的训练流程。\n\n### 步骤 1: 数据准备\n\n下载 ScanNet 数据集并按照项目说明进行预处理。为了节省时间，国内用户可直接下载作者提供的**预处理后数据集**：\n\n```bash\n# 示例：从 HuggingFace 下载预处理好的 ScanNet 数据 (需安装 huggingface-cli)\nhuggingface-cli download Pointcept\u002Fscannet --local-dir .\u002Fdata\u002Fscannet\n```\n\n确保目录结构符合 `configs\u002F_base_\u002Fdatasets\u002Fscannet.py` 中的定义。\n\n### 步骤 2: 运行训练\n\n使用内置配置启动训练。以下命令使用 Point Transformer V3 (PTv3)  backbone 在 ScanNet 上进行训练，并启用 `wandb` 记录日志：\n\n```bash\n# 单卡训练示例\npython tools\u002Ftrain.py --config-file configs\u002Fscannet\u002Fsemseg-ptv3m1-0-base.py \\\n       --num-gpus 1 \\\n       --options save_path=output\u002Fscannet_ptv3_demo\n```\n\n*   `--config-file`: 指定配置文件路径。\n*   `--options`: 动态覆盖配置项，此处指定输出目录。\n*   若需使用多卡，修改 `--num-gpus` 即可。\n\n### 步骤 3: 模型推理\n\n使用训练好的权重或官方预训练模型进行推理：\n\n```bash\npython tools\u002Ftest.py --config-file configs\u002Fscannet\u002Fsemseg-ptv3m1-0-base.py \\\n       --weights path\u002Fto\u002Fyour\u002Fmodel.pth \\\n       --options test_data.list_path=data\u002Fscannet\u002Fval.txt\n```\n\n### 进阶：使用预训练模型 (Utonia\u002FSonata 等)\n\n对于 Utonia、Concerto 或 Sonata 等最新模型，建议访问其专属项目仓库获取专门的推理和微调脚本，或直接加载 HuggingFace 上的预训练权重：\n\n```python\n# 代码内加载预训练权重的简单示例 (伪代码)\nfrom pointcept.models import build_model\nfrom pointcept.utils.checkpoint import load_checkpoint\n\ncfg = \"configs\u002Futonia\u002Fpretrain-utonia-base.py\"\nmodel = build_model(cfg)\nload_checkpoint(model, \"https:\u002F\u002Fhuggingface.co\u002FPointcept\u002FUtonia\u002Fresolve\u002Fmain\u002Fmodel.pth\")\n```\n\n现在你已经成功运行了 Pointcept！更多详细配置和高级用法请参阅 `configs\u002F` 目录下的具体配置文件及官方文档。","某自动驾驶初创团队正在开发城市道路感知系统，急需从激光雷达采集的稀疏点云数据中精准识别车道线、行人及障碍物。\n\n### 没有 Pointcept 时\n- **模型复现困难**：团队需手动重构 PTv3 或 OA-CNNs 等前沿算法的代码，常因细节缺失导致无法复现论文中的高精度结果。\n- **训练效率低下**：缺乏统一的数据加载与增强模块，处理大规模多数据集（如 Waymo、nuScenes）时预处理耗时极长，迭代周期以周计。\n- **泛化能力不足**：自研模型在未见过的场景（如恶劣天气或特殊路况）下表现糟糕，缺乏类似 Utonia 或 PPT 的大规模预训练权重支持。\n- **研发资源分散**：工程师将 70% 的时间耗费在调试底层代码和适配不同硬件上，而非优化核心感知逻辑。\n\n### 使用 Pointcept 后\n- **即插即用前沿模型**：直接调用内置的 PTv3、Sonata 等官方实现，一键加载预训练权重，首日即可达到 SOTA（最先进）基准性能。\n- **高效流水线加速**：利用其灵活的数据引擎统一处理多源点云，训练速度提升 3 倍，模型迭代周期从数周缩短至数天。\n- **强泛化感知能力**：基于 Utonia“单一编码器”架构和 PPT 多点提示训练技术，模型在复杂长尾场景下的识别鲁棒性显著增强。\n- **聚焦核心创新**：研发团队得以从繁琐的工程泥潭中解脱，将精力集中于上层策略优化与特定场景的微调。\n\nPointcept 通过提供标准化且高性能的点云感知基座，让自动驾驶团队从“造轮子”转向“赛车”，大幅降低了 3D 视觉技术的落地门槛。","https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FPointcept_Pointcept_4bbbaf84.png","https:\u002F\u002Foss.gittoolsai.com\u002Favatars\u002FPointcept_6d4c62fb.png",null,"https:\u002F\u002Fgithub.com\u002FPointcept",[76,80,84,88],{"name":77,"color":78,"percentage":79},"Python","#3572A5",95.7,{"name":81,"color":82,"percentage":83},"Cuda","#3A4E3A",2.1,{"name":85,"color":86,"percentage":87},"C++","#f34b7d",1.8,{"name":89,"color":90,"percentage":91},"Shell","#89e051",0.4,2958,371,"2026-04-15T07:18:08","MIT",4,"未说明","需要 NVIDIA GPU（依赖 MinkowskiEngine, SpConv 等稀疏卷积库及 CUDA 加速），具体显存和 CUDA 版本未说明",{"notes":100,"python":97,"dependencies":101},"该工具集成了多种点云处理骨干网络（如 PTv3, OA-CNNs）和预训练模型。部分依赖库（如 MinkowskiEngine, spconv）通常需要编译安装且对 CUDA 环境有特定要求。2024 年 5 月后的版本（v1.5.2+）建议重新预处理数据集或从 HuggingFace 下载预处理后的数据。支持使用 wandb 进行训练监控。",[102,103,104,105],"torch","MinkowskiEngine","spconv","wandb",[107,14],"其他",[109,110,111],"3d-vision","point-cloud","pytorch","2026-03-27T02:49:30.150509","2026-04-16T01:43:24.679059",[115,120,125,130,135],{"id":116,"question_zh":117,"answer_zh":118,"source_url":119},34867,"安装时遇到 'No CUDA runtime is found' 或编译错误怎么办？","这通常是由于 CUDA 环境配置不正确导致的。可以尝试以下解决方案：\n1. 
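Before working through the numbered fixes, it is often quickest to confirm what PyTorch itself reports. This is a minimal check using only the standard PyTorch API, nothing Pointcept-specific:

```python
# Quick sanity check: does the installed PyTorch build actually see CUDA?
# If is_available() prints False, matching the PyTorch/CUDA versions and
# setting CUDA_HOME (as described below) are the first things to try.
import torch

print("torch version:", torch.__version__)
print("built for CUDA:", torch.version.cuda)
print("CUDA available:", torch.cuda.is_available())
```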
重置环境：有时重启机器或重新创建虚拟环境能解决大部分问题。\n2. 重新安装 PyTorch：确保安装与你的 CUDA 版本匹配的 PyTorch。例如，对于 CUDA 11.3，运行：\n   pip uninstall torch\n   pip install torch torchvision torchaudio --index-url https:\u002F\u002Fdownload.pytorch.org\u002Fwhl\u002Fcu113\n3. 设置 CUDA_HOME 环境变量：明确指定 CUDA 路径，例如：\n   export CUDA_HOME='\u002Fusr\u002Flocal\u002Fcuda-11.3'","https:\u002F\u002Fgithub.com\u002FPointcept\u002FPointcept\u002Fissues\u002F185",{"id":121,"question_zh":122,"answer_zh":123,"source_url":124},34868,"为什么 PTv2 的训练速度比日志中显示的要慢很多（batch time 过高）？","训练速度慢通常是因为数据增强管道在 CPU 上运行造成的瓶颈。基准测试显示，'ElasticDistortion' 和 'GridSample' 分别占据了 65% 和 14% 的数据加载时间。维护者正在考虑以下优化方案：\n1. 调整增强顺序：将 ElasticDistortion 移到 GridSampling 和 SphereCrop 之后执行。\n2. 启用 GPU 后端：为 GridSampling 启用 GPU 加速后端。\n3. 启用共享内存：重新启用共享内存数据加载机制。\n如果问题持续，请检查是否使用了复杂的 CPU 数据增强流程。","https:\u002F\u002Fgithub.com\u002FPointcept\u002FPointcept\u002Fissues\u002F32",{"id":126,"question_zh":127,"answer_zh":128,"source_url":129},34869,"如何在自定义数据集上进行训练或实现实例分割？","若要实现实例分割，建议在模型中添加额外的实例分割器（instance segmenter）并在其中加入后处理步骤。对于聚类算法（如 mean-shift）的集成，通常需要将其作为后处理模块添加到分割器代码中。此外，如果遇到环境依赖问题（如 TensorFlow 版本冲突），请确保安装兼容的版本（例如 numpy=1.20.3）。具体的代码插入位置取决于模型架构，通常位于生成最终掩码或聚类标签的阶段。","https:\u002F\u002Fgithub.com\u002FPointcept\u002FPointcept\u002Fissues\u002F8",{"id":131,"question_zh":132,"answer_zh":133,"source_url":134},34870,"可以将 PointGroup 的主干网络替换为 PTv2 吗？","是的，可以将 PointGroup 的主干网络（backbone）从 SpUnet 替换为 PTv2。但需要注意的是，由于不熟悉 PointGroup 的具体超参数和拟合机制，替换后可能需要调整相关的超参数以获得最佳结果。在配置文件中，可能还需要修改相关的索引设置（例如将 _boll_ index 改为 1，并根据情况调整 _branch_ index）。","https:\u002F\u002Fgithub.com\u002FPointcept\u002FPointcept\u002Fissues\u002F40",{"id":136,"question_zh":137,"answer_zh":138,"source_url":139},34871,"使用 Structured3D 预训练的模型在 ScanNet 上微调时出现输出层尺寸冲突错误怎么办？","这是一个已知问题，源于不同数据集（Structured3D 和 ScanNet）的类别数量不同，导致预训练模型的输出层大小与微调任务的头不匹配。虽然报错是加载权重时的尺寸冲突，但这属于正常现象。理想的解决方案是在代码库中添加特殊逻辑，在加载预训练权重时自动忽略或跳过输出层（classification head）的权重加载，只加载主干网络的权重，然后随机初始化新的输出层以适应新数据集的类别数。","https:\u002F\u002Fgithub.com\u002FPointcept\u002FPointcept\u002Fissues\u002F77",[141,146,151,156,161,166,171,176,181,186,191],{"id":142,"version":143,"summary_zh":144,"released_at":145},272188,"v1.7.0","更新日志：\n1. 发布 Utonia 论文的训练代码，包括 RoPE 增强的 Point Transformer V3、相应配置文件、数据预处理以及下游任务评估。\n2. 新增 Utonia 模型 `utonia_v1m1_base` 作为预训练框架。\n3. 新增带有 3D 旋转位置编码的 Point Transformer V3 第 3 模式骨干网络 `point_transformer_v3m3_utonia`。\n4. 新增 Utonia 在跨域基准数据集上的语义分割下游配置，涵盖线性探查、解码器和微调三种方式，并包含无颜色和无法线的消融实验。\n5. 新增针对 Tiny 和 Small 学生模型的 Utonia 知识蒸馏配置示例。\n6. 新增 Concerto 知识蒸馏模型 `concerto_v1m2_distill`，用于知识蒸馏任务。\n7. 新增多个数据集：ScanObjectNN、PartNet、ParNetE、ShapeNetPart、Cap3D、GraspNet 和 HK 遥感，并提供相应的预处理代码。\n8. 新增 `PartialSampledTrainer`，用于在预训练过程中对多数据集进行不均衡采样（适用于大型 Cap3D 数据集）。\n9. 新增 `PartNetEPartSegEvaluator`、`PartNetEPartSegTester`、`ShapeNetPartSegEvaluator` 和 `ShapeNetPartSegTester`，用于零件分割任务的评估。\n10. 新增 `MuonKIMI` 优化器，用于模型训练。\n11. 更新 `transform.py` 和 `utils.py`，以支持 Utonia 中点云数据增强。","2026-04-02T11:32:23",{"id":147,"version":148,"summary_zh":149,"released_at":150},272189,"v1.6.1","更新日志：\n\n0. 发布 Concerto 论文的训练代码。\n1. 添加 `DefaultImagePointDataset` 用于加载图像与点云配对数据，同时也支持单模态点云输入。\n2. 添加 Concerto 预训练和微调所用的室外数据集。\n3. 添加 Concerto 室内预训练数据集的预处理代码。\n4. 添加 Concerto 室外预训练数据集的预处理代码。\n5. 更新 `transform.py` 和 `utils.py`，以支持图像与点云的配对输入。\n6. 添加 `DefaultLORASegmentorV2`，以便在预训练权重上进行 LoRA 微调。\n7. 更新 `PT-v3m2` 模型，记录 Concerto 所需的必要信息。\n8. 添加 `Sonata-v1m3` 模型，用于 Sonata 蒸馏示例。\n9. 在所有模型中将 `cls_mode` 重命名为 `enc_mode`。","2026-02-28T05:03:49",{"id":152,"version":153,"summary_zh":154,"released_at":155},272190,"v1.6.0","更新日志：\n\n0. 
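One of the FAQ entries above describes fine-tuning a Structured3D-pretrained checkpoint on ScanNet and hitting an output-layer size conflict; the suggested workaround is to load only the backbone weights and let the new head stay randomly initialized. A minimal sketch of that idea in plain PyTorch (the `seg_head` prefix is a hypothetical name for the output layer; inspect your checkpoint's keys and substitute the real prefix):

```python
import torch

def load_backbone_only(model, checkpoint_path, head_prefix="seg_head"):
    """Load pretrained weights while skipping the classification head.

    `head_prefix` is a hypothetical key prefix; replace it with the actual
    name of the output layer in your checkpoint.
    """
    ckpt = torch.load(checkpoint_path, map_location="cpu")
    state_dict = ckpt.get("state_dict", ckpt)
    filtered = {k: v for k, v in state_dict.items() if not k.startswith(head_prefix)}
    # strict=False leaves the randomly initialized head untouched and reports
    # which keys were skipped or missing.
    missing, unexpected = model.load_state_dict(filtered, strict=False)
    return missing, unexpected
```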
发布Sonata论文的训练代码\n1. 新增`ModelHook`和`PointModel`。继承自`PointModel`的PyTorch模型可以自定义为Pointcept钩子（可自定义`before_step()`、`after_step()`等方法），这些自定义操作将由`ModelHook`启用（默认配置下已启用）。\n2. 在`PointSequential`中添加对OCNN的支持。\n3. 在`DefaultSegmenter`中通过`freeze_backbone`启用语义分割的线性探测。\n4. 新增`DINOEnhancedSegmenter`，该模型能够利用额外的DINO特征进行点云分割。所使用的DINO特征已在数据集加载时预先计算并缓存。\n5. 在钩子中新增`GarbageHandler`，用于手动控制垃圾回收（参考[此处](https:\u002F\u002Fpytorch.org\u002Fblog\u002Funderstanding-gpu-memory-2\u002F)的优化说明）。\n6. 在钩子中新增`WeightDecaySchedular`，使权重衰减受余弦调度器控制。\n7. 将数据增强中的“Add”更名为“Update”。\n8. 添加Arkitscene和HM3D的数据预处理代码。\n9. 优化多节点训练的`train.sh`脚本。\n10. 移除GridSampling中的“key”控制，改用`data_dict`中的`index_valid_keys`来控制各增强操作的索引操作。","2025-03-25T09:27:32",{"id":157,"version":158,"summary_zh":159,"released_at":160},272191,"v1.5.2","**警告**：ScanNet、S3DIS、Structured3D、Waymo 数据集的数据预处理。请重新对数据集进行预处理，或从[这里](https:\u002F\u002Fhuggingface.co\u002FPointcept)下载我们预处理好的数据集（若适用相应许可协议，请务必在下载前签署各数据集的许可协议）。\n\n变更日志：\n1. 重新设计了各个数据集及默认数据集的预处理代码，默认数据集应如下所示：\n```json\ndataset\n    |- split1 (如 train)\n        |- data 1\n            |- coord.npy\n            |- color.npy\n            |- normal.npy\n            ...\n            |- segment.npy\n        ...\n        |- data k\n    ...\n    |- split n \n```\n\n2. 增加了对 Waymo 数据集多帧训练的支持（通过时间戳控制）；\n3. 新增每 epoch 清空缓存的选项；\n4. 增加了对 ScanNet++ 数据集的支持（包括预处理、训练、测试和提交）；\n5. 增加了针对目标分类的随机投票机制（这似乎是一种常见的技巧，但我觉得有些“作弊”，不建议使用）；\n6. 修复了 PTv3 结构中的一个错误（对性能无影响），已发布的模型需要重新训练；\n7. 使默认的 PTv3 配置在 S3DIS 数据集上也能取得很好的效果；\n8. 增加了对 ModelNet40 的支持（目前尚未仔细调优性能，仅供参考）。TODO\n\n待办事项：\n1. 调优目标分类任务的性能；\n2. 为 v1.5.3 版本重新训练已发布的模型权重；\n3. 为室外数据集提供更多详细的配置文件；","2024-05-17T13:35:42",{"id":162,"version":163,"summary_zh":164,"released_at":165},272192,"v1.5.1","更新日志：\n1. 发布 PTv3 室内感知的大部分配置。（我忘了 ScanNet200 的 PPT 版本）\n2. 发布 PTv3 户外感知的若干配置。（最近计算资源有限，我想确保最终版本的代码可复现）\n3. 修复处理后的 NuScenes 信息下载链接。\n4. 优化 Octformer 的输入 logits。\n5. 优化 PPT。\n\n任务：\n1. PTv3 的所有配置\n2. 支持 ImageNet 分类\n3. 支持目标级数据集\n4. 在目标级数据集中应用预训练技术\n\n（希望我能有足够的时间完成这些任务）","2024-02-25T13:11:27",{"id":167,"version":168,"summary_zh":169,"released_at":170},272193,"v1.5.0","更新日志：\n1. 发布 Point Transformer V3 的模型代码；\n2. 发布在 ScanNet 和 ScanNet200 数据集上从头训练 PTv3 的配置文件及实验记录；\n3. 修复 Swin3D 和 OctFormer 的位移计算问题；\n4. 将超时限制提高至 60 分钟；\n5. 采用更灵活的导入策略，将大多数模型默认设置为已导入状态；\n6. 添加 Point.PointModule.PointSequence；\n7. 将 PDNorm 独立出来，作为 Pointcept 模块；\n8. 新增 DefaultSegmentorV2（仅用于特征提取的骨干网络）。","2023-12-31T13:21:53",{"id":172,"version":173,"summary_zh":174,"released_at":175},272194,"v1.4.0","1. 发布 PPT（检查 MultiDatasetLoader、Trainer 和模型）；\r\n2. 增加对 Waymo 数据集的支持；\r\n3. 优化精确测试（采用双网格采样，参考 NuScenes 配置）；\r\n4. 将 `discrete_coord` 重命名为 `grid_coord`；\r\n5. 添加 S3DIS 6 折交叉验证的测试脚本；","2023-12-14T17:00:19",{"id":177,"version":178,"summary_zh":179,"released_at":180},272195,"v1.3.1","1. 发布 CAC 代码；2. 使用 PTv2 增加对 NuScenes 和 SemanticKITTI 的支持；3. 增加对 Swin3d 和 OctFormer 的支持；4. 增加对 Structured3D 预训练的支持。","2023-07-10T05:03:48",{"id":182,"version":183,"summary_zh":184,"released_at":185},272196,"v1.3.0","1. 发布 `Masked Scene Contrast`（MSC）的代码；\r\n2. 增加对 `PointContrast` 和 `Contrastive Scene Contexts` 的支持；\r\n3. 将 `pointcept.utils.losses` 移至 `pointcept.models.losses`；\r\n4. 将 `Voxelize` 重命名为 `GridSample`；\r\n5. 移除 `max_batch_point` 机制；\r\n6. 启用 `PreciseEvaluator` 作为语义分割的默认钩子；\r\n7. 启用 `DDP 测试`（钩子及测试脚本）。","2023-06-16T15:32:53",{"id":187,"version":188,"summary_zh":189,"released_at":190},272197,"v1.2.1","1. 通过 PointGroup 增加对 **实例分割** 的支持。  \n2. 增加对 **Python >= 3.10** 的支持。  \n3. 统一配置和模型的 **命名规范**。  \n4. 
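The per-scene layout shown in the v1.5.2 notes above (coord.npy, color.npy, normal.npy, segment.npy, ...) is plain NumPy, so a preprocessed scene can be inspected directly. A short sketch, with a placeholder path and only the keys named in those notes:

```python
import numpy as np
from pathlib import Path

# Placeholder path: point this at any scene folder produced by the preprocessing
# scripts or downloaded from the preprocessed datasets on HuggingFace.
scene = Path("data/scannet/train/scene0000_00")

coord = np.load(scene / "coord.npy")      # (N, 3) point coordinates
color = np.load(scene / "color.npy")      # (N, 3) per-point colors
segment = np.load(scene / "segment.npy")  # (N,) semantic labels
print(coord.shape, color.shape, segment.shape)
```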
添加 **PreciseEvaluator** 作为钩子，在训练结束后自动执行测试流程。","2023-06-05T03:13:03",{"id":192,"version":193,"summary_zh":194,"released_at":195},272198,"v1.2.0","We upgraded the released codebase of Point Transformer V2 and named it **Pointcept**. Compared with the PTv2 released code, the main changes are as follows:\r\n\r\n1. Enable the `hook` mechanism, which greatly increases the flexibility of _Pointcept_.\r\n2. Move loss computation from the trainer into the model, so that Pointcept can support various downstream tasks.\r\n3. Better dataflow design: to distinguish labels for different tasks, we use `category` for classification labels, `segment` for semantic segmentation, `instance` for instance segmentation, and `bbox` for object detection.\r\n4. Add support for ModelNet40.\r\n5. Add support for the ScanNet Data Efficient Benchmark.\r\n6. Enable the PyTorch Profiler and shared memory as hooks.\r\n7. Refine various implementation details.","2023-03-23T09:38:23"]