[{"data":1,"prerenderedAt":-1},["ShallowReactive",2],{"similar-NVIDIA--MinkowskiEngine":3,"tool-NVIDIA--MinkowskiEngine":61},[4,18,26,36,44,53],{"id":5,"name":6,"github_repo":7,"description_zh":8,"stars":9,"difficulty_score":10,"last_commit_at":11,"category_tags":12,"status":17},4358,"openclaw","openclaw\u002Fopenclaw","OpenClaw 是一款专为个人打造的本地化 AI 助手，旨在让你在自己的设备上拥有完全可控的智能伙伴。它打破了传统 AI 助手局限于特定网页或应用的束缚，能够直接接入你日常使用的各类通讯渠道，包括微信、WhatsApp、Telegram、Discord、iMessage 等数十种平台。无论你在哪个聊天软件中发送消息，OpenClaw 都能即时响应，甚至支持在 macOS、iOS 和 Android 设备上进行语音交互，并提供实时的画布渲染功能供你操控。\n\n这款工具主要解决了用户对数据隐私、响应速度以及“始终在线”体验的需求。通过将 AI 部署在本地，用户无需依赖云端服务即可享受快速、私密的智能辅助，真正实现了“你的数据，你做主”。其独特的技术亮点在于强大的网关架构，将控制平面与核心助手分离，确保跨平台通信的流畅性与扩展性。\n\nOpenClaw 非常适合希望构建个性化工作流的技术爱好者、开发者，以及注重隐私保护且不愿被单一生态绑定的普通用户。只要具备基础的终端操作能力（支持 macOS、Linux 及 Windows WSL2），即可通过简单的命令行引导完成部署。如果你渴望拥有一个懂你",349277,3,"2026-04-06T06:32:30",[13,14,15,16],"Agent","开发框架","图像","数据工具","ready",{"id":19,"name":20,"github_repo":21,"description_zh":22,"stars":23,"difficulty_score":10,"last_commit_at":24,"category_tags":25,"status":17},3808,"stable-diffusion-webui","AUTOMATIC1111\u002Fstable-diffusion-webui","stable-diffusion-webui 是一个基于 Gradio 构建的网页版操作界面，旨在让用户能够轻松地在本地运行和使用强大的 Stable Diffusion 图像生成模型。它解决了原始模型依赖命令行、操作门槛高且功能分散的痛点，将复杂的 AI 绘图流程整合进一个直观易用的图形化平台。\n\n无论是希望快速上手的普通创作者、需要精细控制画面细节的设计师，还是想要深入探索模型潜力的开发者与研究人员，都能从中获益。其核心亮点在于极高的功能丰富度：不仅支持文生图、图生图、局部重绘（Inpainting）和外绘（Outpainting）等基础模式，还独创了注意力机制调整、提示词矩阵、负向提示词以及“高清修复”等高级功能。此外，它内置了 GFPGAN 和 CodeFormer 等人脸修复工具，支持多种神经网络放大算法，并允许用户通过插件系统无限扩展能力。即使是显存有限的设备，stable-diffusion-webui 也提供了相应的优化选项，让高质量的 AI 艺术创作变得触手可及。",162132,"2026-04-05T11:01:52",[14,15,13],{"id":27,"name":28,"github_repo":29,"description_zh":30,"stars":31,"difficulty_score":32,"last_commit_at":33,"category_tags":34,"status":17},1381,"everything-claude-code","affaan-m\u002Feverything-claude-code","everything-claude-code 是一套专为 AI 编程助手（如 Claude Code、Codex、Cursor 等）打造的高性能优化系统。它不仅仅是一组配置文件，而是一个经过长期实战打磨的完整框架，旨在解决 AI 
代理在实际开发中面临的效率低下、记忆丢失、安全隐患及缺乏持续学习能力等核心痛点。\n\n通过引入技能模块化、直觉增强、记忆持久化机制以及内置的安全扫描功能，everything-claude-code 能显著提升 AI 在复杂任务中的表现，帮助开发者构建更稳定、更智能的生产级 AI 代理。其独特的“研究优先”开发理念和针对 Token 消耗的优化策略，使得模型响应更快、成本更低，同时有效防御潜在的攻击向量。\n\n这套工具特别适合软件开发者、AI 研究人员以及希望深度定制 AI 工作流的技术团队使用。无论您是在构建大型代码库，还是需要 AI 协助进行安全审计与自动化测试，everything-claude-code 都能提供强大的底层支持。作为一个曾荣获 Anthropic 黑客大奖的开源项目，它融合了多语言支持与丰富的实战钩子（hooks），让 AI 真正成长为懂上",149489,2,"2026-04-10T11:32:46",[14,13,35],"语言模型",{"id":37,"name":38,"github_repo":39,"description_zh":40,"stars":41,"difficulty_score":32,"last_commit_at":42,"category_tags":43,"status":17},2271,"ComfyUI","Comfy-Org\u002FComfyUI","ComfyUI 是一款功能强大且高度模块化的视觉 AI 引擎，专为设计和执行复杂的 Stable Diffusion 图像生成流程而打造。它摒弃了传统的代码编写模式，采用直观的节点式流程图界面，让用户通过连接不同的功能模块即可构建个性化的生成管线。\n\n这一设计巧妙解决了高级 AI 绘图工作流配置复杂、灵活性不足的痛点。用户无需具备编程背景，也能自由组合模型、调整参数并实时预览效果，轻松实现从基础文生图到多步骤高清修复等各类复杂任务。ComfyUI 拥有极佳的兼容性，不仅支持 Windows、macOS 和 Linux 全平台，还广泛适配 NVIDIA、AMD、Intel 及苹果 Silicon 等多种硬件架构，并率先支持 SDXL、Flux、SD3 等前沿模型。\n\n无论是希望深入探索算法潜力的研究人员和开发者，还是追求极致创作自由度的设计师与资深 AI 绘画爱好者，ComfyUI 都能提供强大的支持。其独特的模块化架构允许社区不断扩展新功能，使其成为当前最灵活、生态最丰富的开源扩散模型工具之一，帮助用户将创意高效转化为现实。",108322,"2026-04-10T11:39:34",[14,15,13],{"id":45,"name":46,"github_repo":47,"description_zh":48,"stars":49,"difficulty_score":32,"last_commit_at":50,"category_tags":51,"status":17},6121,"gemini-cli","google-gemini\u002Fgemini-cli","gemini-cli 是一款由谷歌推出的开源 AI 命令行工具，它将强大的 Gemini 大模型能力直接集成到用户的终端环境中。对于习惯在命令行工作的开发者而言，它提供了一条从输入提示词到获取模型响应的最短路径，无需切换窗口即可享受智能辅助。\n\n这款工具主要解决了开发过程中频繁上下文切换的痛点，让用户能在熟悉的终端界面内直接完成代码理解、生成、调试以及自动化运维任务。无论是查询大型代码库、根据草图生成应用，还是执行复杂的 Git 操作，gemini-cli 都能通过自然语言指令高效处理。\n\n它特别适合广大软件工程师、DevOps 人员及技术研究人员使用。其核心亮点包括支持高达 100 万 token 的超长上下文窗口，具备出色的逻辑推理能力；内置 Google 搜索、文件操作及 Shell 命令执行等实用工具；更独特的是，它支持 MCP（模型上下文协议），允许用户灵活扩展自定义集成，连接如图像生成等外部能力。此外，个人谷歌账号即可享受免费的额度支持，且项目基于 Apache 2.0 
协议完全开源，是提升终端工作效率的理想助手。",100752,"2026-04-10T01:20:03",[52,13,15,14],"插件",{"id":54,"name":55,"github_repo":56,"description_zh":57,"stars":58,"difficulty_score":32,"last_commit_at":59,"category_tags":60,"status":17},4721,"markitdown","microsoft\u002Fmarkitdown","MarkItDown 是一款由微软 AutoGen 团队打造的轻量级 Python 工具，专为将各类文件高效转换为 Markdown 格式而设计。它支持 PDF、Word、Excel、PPT、图片（含 OCR）、音频（含语音转录）、HTML 乃至 YouTube 链接等多种格式的解析，能够精准提取文档中的标题、列表、表格和链接等关键结构信息。\n\n在人工智能应用日益普及的今天，大语言模型（LLM）虽擅长处理文本，却难以直接读取复杂的二进制办公文档。MarkItDown 恰好解决了这一痛点，它将非结构化或半结构化的文件转化为模型“原生理解”且 Token 效率极高的 Markdown 格式，成为连接本地文件与 AI 分析 pipeline 的理想桥梁。此外，它还提供了 MCP（模型上下文协议）服务器，可无缝集成到 Claude Desktop 等 LLM 应用中。\n\n这款工具特别适合开发者、数据科学家及 AI 研究人员使用，尤其是那些需要构建文档检索增强生成（RAG）系统、进行批量文本分析或希望让 AI 助手直接“阅读”本地文件的用户。虽然生成的内容也具备一定可读性，但其核心优势在于为机器",93400,"2026-04-06T19:52:38",[52,14],{"id":62,"github_repo":63,"name":64,"description_en":65,"description_zh":66,"ai_summary_zh":66,"readme_en":67,"readme_zh":68,"quickstart_zh":69,"use_case_zh":70,"hero_image_url":71,"owner_login":72,"owner_name":73,"owner_avatar_url":74,"owner_bio":75,"owner_company":76,"owner_location":76,"owner_email":76,"owner_twitter":76,"owner_website":77,"owner_url":78,"languages":79,"stars":104,"forks":105,"last_commit_at":106,"license":107,"difficulty_score":108,"env_os":109,"env_gpu":110,"env_ram":111,"env_deps":112,"category_tags":120,"github_topics":122,"view_count":32,"oss_zip_url":76,"oss_zip_packed_at":76,"status":17,"created_at":143,"updated_at":144,"faqs":145,"releases":174},6264,"NVIDIA\u002FMinkowskiEngine","MinkowskiEngine","Minkowski Engine is an auto-diff neural network library for high-dimensional sparse tensors","Minkowski Engine 是一款专为高维稀疏张量设计的自动微分神经网络库。它主要解决了传统深度学习框架在处理空间稀疏数据（如 3D 点云、体素网格或高维表面数据）时效率低下的痛点。在传统方法中，即使数据大部分为空，网络仍需对密集张量进行计算，导致显存占用巨大且推理速度缓慢。Minkowski Engine 通过原生支持稀疏卷积、池化及广播等操作，仅对有效数据进行计算，从而显著降低内存 footprint 并加速训练与推理过程。\n\n该工具特别适合从事计算机视觉、机器人感知及三维重建领域的研究人员与开发者，尤其是那些需要构建高效 3D 语义分割、分类、检测或生成模型的专业人士。其核心技术亮点在于将稀疏性从“参数层面”延伸至“空间层面”，并提供了完整的 CUDA 加速坐标管理功能，让用户能像使用普通 PyTorch 
层一样轻松搭建复杂的稀疏神经网络架构。无论是学术探索还是工业级应用，Minkowski Engine 都为处理大规模稀疏几何数据提供了强大而灵活的基础设施。","[pypi-image]: https:\u002F\u002Fbadge.fury.io\u002Fpy\u002FMinkowskiEngine.svg\n[pypi-url]: https:\u002F\u002Fpypi.org\u002Fproject\u002FMinkowskiEngine\u002F\n[pypi-download]: https:\u002F\u002Fimg.shields.io\u002Fpypi\u002Fdm\u002FMinkowskiEngine\n[slack-badge]: https:\u002F\u002Fimg.shields.io\u002Fbadge\u002Fslack-join%20chats-brightgreen\n[slack-url]: https:\u002F\u002Fjoin.slack.com\u002Ft\u002Fminkowskiengine\u002Fshared_invite\u002Fzt-piq2x02a-31dOPocLt6bRqOGY3U_9Sw\n\n# Minkowski Engine\n\n[![PyPI Version][pypi-image]][pypi-url] [![pypi monthly download][pypi-download]][pypi-url] [![slack chat][slack-badge]][slack-url]\n\nThe Minkowski Engine is an auto-differentiation library for sparse tensors. It supports all standard neural network layers such as convolution, pooling, unpooling, and broadcasting operations for sparse tensors. For more information, please visit [the documentation page](http:\u002F\u002Fnvidia.github.io\u002FMinkowskiEngine\u002Foverview.html).\n\n## News\n\n- 2021-08-11 Docker installation instruction added\n- 2021-08-06 All installation errors with pytorch 1.8 and 1.9 have been resolved.\n- 2021-04-08 Due to recent errors in [pytorch 1.8 + CUDA 11](https:\u002F\u002Fgithub.com\u002FNVIDIA\u002FMinkowskiEngine\u002Fissues\u002F330), it is recommended to use [anaconda for installation](#anaconda).\n- 2020-12-24 v0.5 is now available! The new version provides CUDA accelerations for all coordinate management functions.\n\n## Example Networks\n\nThe Minkowski Engine supports various functions that can be built on a sparse tensor. We list a few popular network architectures and applications here. 
To run the examples, please install the package and run the command in the package root directory.\n\n| Examples              | Networks and Commands                                                                                                                                                           |\n|:---------------------:|:-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------:|\n| Semantic Segmentation | \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FNVIDIA_MinkowskiEngine_readme_398aeee5ae4a.png\"> \u003Cbr \u002F> \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FNVIDIA_MinkowskiEngine_readme_5c0e7670c416.png\" width=\"256\"> \u003Cbr \u002F> `python -m examples.indoor` |\n| Classification        | ![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FNVIDIA_MinkowskiEngine_readme_3c03fd83f819.png) \u003Cbr \u002F> `python -m examples.classification_modelnet40`                                                          |\n| Reconstruction        | \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FNVIDIA_MinkowskiEngine_readme_04747d614f57.png\"> \u003Cbr \u002F> \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FNVIDIA_MinkowskiEngine_readme_15f25281a58b.gif\" width=\"256\"> \u003Cbr \u002F> `python -m examples.reconstruction` |\n| Completion            | \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FNVIDIA_MinkowskiEngine_readme_87d3ea94bdcc.png\"> \u003Cbr \u002F> `python -m examples.completion`                                                       |\n| Detection             | \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FNVIDIA_MinkowskiEngine_readme_e228614cc790.png\">                                                                                               |\n\n\n## Sparse Tensor Networks: Neural 
Networks for Spatially Sparse Tensors\n\nCompressing a neural network to speed up inference and minimize memory footprint has been studied widely. One of the popular techniques for model compression is pruning the weights in convnets, also known as [*sparse convolutional networks*](https:\u002F\u002Fwww.cv-foundation.org\u002Fopenaccess\u002Fcontent_cvpr_2015\u002Fpapers\u002FLiu_Sparse_Convolutional_Neural_2015_CVPR_paper.pdf). Such parameter-space sparsity used for model compression compresses networks that operate on dense tensors, and all intermediate activations of these networks are also dense tensors.\n\nHowever, in this work, we focus on [*spatially* sparse data](https:\u002F\u002Farxiv.org\u002Fabs\u002F1409.6070), in particular, spatially sparse high-dimensional inputs, 3D data, and convolution on the surface of 3D objects, first proposed in [Siggraph'17](https:\u002F\u002Fwang-ps.github.io\u002FO-CNN.html). We can also represent these data as sparse tensors, and these sparse tensors are commonplace in high-dimensional problems such as 3D perception, registration, and statistical data. We define neural networks specialized for these inputs as *sparse tensor networks*, and these sparse tensor networks process and generate sparse tensors as outputs. To construct a sparse tensor network, we build all standard neural network layers such as MLPs, non-linearities, convolution, normalizations, and pooling operations the same way we define them on a dense tensor, and implement them in the Minkowski Engine.\n\nWe visualize a sparse tensor network operation on a sparse tensor, convolution, below. The convolution layer on a sparse tensor works similarly to that on a dense tensor. However, on a sparse tensor, we compute convolution outputs only on a few specified points, which we can control in the [generalized convolution](https:\u002F\u002Fnvidia.github.io\u002FMinkowskiEngine\u002Fsparse_tensor_network.html). 
For more information, please visit [the documentation page on sparse tensor networks](https:\u002F\u002Fnvidia.github.io\u002FMinkowskiEngine\u002Fsparse_tensor_network.html) and [the terminology page](https:\u002F\u002Fnvidia.github.io\u002FMinkowskiEngine\u002Fterminology.html).\n\n| Dense Tensor                                                                | Sparse Tensor                                                                |\n|:---------------------------------------------------------------------------:|:----------------------------------------------------------------------------:|\n| \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FNVIDIA_MinkowskiEngine_readme_6b3b7971b5e3.gif\"> | \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FNVIDIA_MinkowskiEngine_readme_4fd96ce062cb.gif\"> |\n\n--------------------------------------------------------------------------------\n\n## Features\n\n- Unlimited high-dimensional sparse tensor support\n- All standard neural network layers (Convolution, Pooling, Broadcast, etc.)\n- Dynamic computation graph\n- Custom kernel shapes\n- Multi-GPU training\n- Multi-threaded kernel map\n- Multi-threaded compilation\n- Highly-optimized GPU kernels\n\n\n## Requirements\n\n- Ubuntu >= 14.04\n- CUDA >= 10.1.243 and **the same CUDA version used for pytorch** (e.g. if you use conda cudatoolkit=11.1, use CUDA=11.1 for MinkowskiEngine compilation)\n- pytorch >= 1.7. To specify the CUDA version, please use conda for installation; you must match the CUDA version pytorch uses with the CUDA version used for the Minkowski Engine installation (e.g. `conda install -y -c nvidia -c pytorch pytorch=1.8.1 cudatoolkit=10.2`)\n- python >= 3.6\n- ninja (for installation)\n- GCC >= 7.4.0\n\n\n## Installation\n\nYou can install the Minkowski Engine with `pip`, with anaconda, or on the system directly. 
If you experience issues installing the package, please check out [the installation wiki page](https:\u002F\u002Fgithub.com\u002FNVIDIA\u002FMinkowskiEngine\u002Fwiki\u002FInstallation).\nIf you cannot find a relevant problem, please report the issue on [the github issue page](https:\u002F\u002Fgithub.com\u002FNVIDIA\u002FMinkowskiEngine\u002Fissues).\n\n- [PIP](https:\u002F\u002Fgithub.com\u002FNVIDIA\u002FMinkowskiEngine#pip) installation\n- [Conda](https:\u002F\u002Fgithub.com\u002FNVIDIA\u002FMinkowskiEngine#anaconda) installation\n- [Python](https:\u002F\u002Fgithub.com\u002FNVIDIA\u002FMinkowskiEngine#system-python) installation\n- [Docker](https:\u002F\u002Fgithub.com\u002FNVIDIA\u002FMinkowskiEngine#docker) installation\n\n\n### Pip\n\nThe MinkowskiEngine is distributed via [PyPI MinkowskiEngine][pypi-url] and can be installed simply with `pip`.\nFirst, install pytorch following the [instructions](https:\u002F\u002Fpytorch.org). Next, install `openblas`.\n\n```\nsudo apt install build-essential python3-dev libopenblas-dev\npip install torch ninja\npip install -U MinkowskiEngine --install-option=\"--blas=openblas\" -v --no-deps\n\n# For pip installation from the latest source\n# pip install -U git+https:\u002F\u002Fgithub.com\u002FNVIDIA\u002FMinkowskiEngine --no-deps\n```\n\nIf you want to specify arguments for the setup script, please refer to the following command.\n\n```\n# Uncomment some options if things don't work\n# export CXX=c++; # set this if you want to use a different C++ compiler\n# export CUDA_HOME=\u002Fusr\u002Flocal\u002Fcuda-11.1; # or select the correct cuda version on your system.\npip install -U git+https:\u002F\u002Fgithub.com\u002FNVIDIA\u002FMinkowskiEngine -v --no-deps \\\n#                           \\ # uncomment the following line if you want to force cuda installation\n#                           --install-option=\"--force_cuda\" \\\n#                           \\ # uncomment the following line if you want to force no cuda 
installation. force_cuda supersedes cpu_only\n#                           --install-option=\"--cpu_only\" \\\n#                           \\ # uncomment the following line to override to openblas, atlas, mkl, blas\n#                           --install-option=\"--blas=openblas\" \\\n```\n\n### Anaconda\n\nMinkowskiEngine supports both CUDA 10.2 and CUDA 11.1, which work for most of the latest pytorch versions.\n\n#### CUDA 10.2\n\nWe recommend `python>=3.6` for installation.\nFirst, follow [the anaconda documentation](https:\u002F\u002Fdocs.anaconda.com\u002Fanaconda\u002Finstall\u002F) to install anaconda on your computer.\n\n```\nsudo apt install g++-7  # For CUDA 10.2, must use GCC \u003C 8\n# Make sure `g++-7 --version` is at least 7.4.0\nconda create -n py3-mink python=3.8\nconda activate py3-mink\n\nconda install openblas-devel -c anaconda\nconda install pytorch=1.9.0 torchvision cudatoolkit=10.2 -c pytorch -c nvidia\n\n# Install MinkowskiEngine\nexport CXX=g++-7\n# Uncomment the following line to specify the cuda home. 
Make sure `$CUDA_HOME\u002Fnvcc --version` is 10.2\n# export CUDA_HOME=\u002Fusr\u002Flocal\u002Fcuda-10.2\npip install -U git+https:\u002F\u002Fgithub.com\u002FNVIDIA\u002FMinkowskiEngine -v --no-deps --install-option=\"--blas_include_dirs=${CONDA_PREFIX}\u002Finclude\" --install-option=\"--blas=openblas\"\n\n# Or if you want local MinkowskiEngine\ngit clone https:\u002F\u002Fgithub.com\u002FNVIDIA\u002FMinkowskiEngine.git\ncd MinkowskiEngine\nexport CXX=g++-7\npython setup.py install --blas_include_dirs=${CONDA_PREFIX}\u002Finclude --blas=openblas\n```\n\n#### CUDA 11.X\n\nWe recommend `python>=3.6` for installation.\nFirst, follow [the anaconda documentation](https:\u002F\u002Fdocs.anaconda.com\u002Fanaconda\u002Finstall\u002F) to install anaconda on your computer.\n\n```\nconda create -n py3-mink python=3.8\nconda activate py3-mink\n\nconda install openblas-devel -c anaconda\nconda install pytorch=1.9.0 torchvision cudatoolkit=11.1 -c pytorch -c nvidia\n\n# Install MinkowskiEngine\n\n# Uncomment the following line to specify the cuda home. 
Make sure `$CUDA_HOME\u002Fnvcc --version` is 11.X\n# export CUDA_HOME=\u002Fusr\u002Flocal\u002Fcuda-11.1\npip install -U git+https:\u002F\u002Fgithub.com\u002FNVIDIA\u002FMinkowskiEngine -v --no-deps --install-option=\"--blas_include_dirs=${CONDA_PREFIX}\u002Finclude\" --install-option=\"--blas=openblas\"\n\n# Or if you want local MinkowskiEngine\ngit clone https:\u002F\u002Fgithub.com\u002FNVIDIA\u002FMinkowskiEngine.git\ncd MinkowskiEngine\npython setup.py install --blas_include_dirs=${CONDA_PREFIX}\u002Finclude --blas=openblas\n```\n\n### System Python\n\nLike the anaconda installation, make sure that you install pytorch with the same CUDA version that `nvcc` uses.\n\n```\n# install system requirements\nsudo apt install build-essential python3-dev libopenblas-dev\n\n# Skip if you already have pip installed on your python3\ncurl https:\u002F\u002Fbootstrap.pypa.io\u002Fget-pip.py | python3\n\n# Get pip and install python requirements\npython3 -m pip install torch numpy ninja\n\ngit clone https:\u002F\u002Fgithub.com\u002FNVIDIA\u002FMinkowskiEngine.git\n\ncd MinkowskiEngine\n\npython setup.py install\n# To specify blas, CXX, CUDA_HOME and force CUDA installation, use the following command\n# export CXX=c++; export CUDA_HOME=\u002Fusr\u002Flocal\u002Fcuda-11.1; python setup.py install --blas=openblas --force_cuda\n```\n\n### Docker\n\n```\ngit clone https:\u002F\u002Fgithub.com\u002FNVIDIA\u002FMinkowskiEngine\ncd MinkowskiEngine\ndocker build -t minkowski_engine docker\n```\n\nOnce the docker image is built, check that it loads MinkowskiEngine correctly.\n\n```\ndocker run minkowski_engine python3 -c \"import MinkowskiEngine; print(MinkowskiEngine.__version__)\"\n```\n\n## CPU only build and BLAS configuration (MKL)\n\nThe Minkowski Engine supports a CPU-only build on platforms that do not have NVIDIA GPUs. 
Please refer to [quick start](https:\u002F\u002Fnvidia.github.io\u002FMinkowskiEngine\u002Fquick_start.html) for more details.\n\n\n## Quick Start\n\nTo use the Minkowski Engine, you first would need to import the engine.\nThen, you would need to define the network. If the data you have is not\nquantized, you would need to voxelize or quantize the (spatial) data into a\nsparse tensor.  Fortunately, the Minkowski Engine provides the quantization\nfunction (`MinkowskiEngine.utils.sparse_quantize`).\n\n\n### Creating a Network\n\n```python\nimport torch.nn as nn\nimport MinkowskiEngine as ME\n\nclass ExampleNetwork(ME.MinkowskiNetwork):\n\n    def __init__(self, in_feat, out_feat, D):\n        super(ExampleNetwork, self).__init__(D)\n        self.conv1 = nn.Sequential(\n            ME.MinkowskiConvolution(\n                in_channels=in_feat,\n                out_channels=64,\n                kernel_size=3,\n                stride=2,\n                dilation=1,\n                bias=False,\n                dimension=D),\n            ME.MinkowskiBatchNorm(64),\n            ME.MinkowskiReLU())\n        self.conv2 = nn.Sequential(\n            ME.MinkowskiConvolution(\n                in_channels=64,\n                out_channels=128,\n                kernel_size=3,\n                stride=2,\n                dimension=D),\n            ME.MinkowskiBatchNorm(128),\n            ME.MinkowskiReLU())\n        self.pooling = ME.MinkowskiGlobalPooling()\n        self.linear = ME.MinkowskiLinear(128, out_feat)\n\n    def forward(self, x):\n        out = self.conv1(x)\n        out = self.conv2(out)\n        out = self.pooling(out)\n        return self.linear(out)\n```\n\n### Forward and backward using the custom network\n\n```python\n    # loss and network\n    criterion = nn.CrossEntropyLoss()\n    net = ExampleNetwork(in_feat=3, out_feat=5, D=2)\n    print(net)\n\n    # a data loader must return a tuple of coords, features, and labels.\n    coords, feat, label = 
data_loader()\n    input = ME.SparseTensor(feat, coordinates=coords)\n    # Forward\n    output = net(input)\n\n    # Loss\n    loss = criterion(output.F, label)\n```\n\n## Discussion and Documentation\n\nFor discussion and questions, please use `minkowskiengine@googlegroups.com`.\nFor API and general usage, please refer to the [MinkowskiEngine documentation\npage](http:\u002F\u002Fnvidia.github.io\u002FMinkowskiEngine\u002F) for more detail.\n\nFor issues not listed on the API page and for feature requests, feel free to submit\nan issue on the [github issue\npage](https:\u002F\u002Fgithub.com\u002FNVIDIA\u002FMinkowskiEngine\u002Fissues).\n\n\n## Known Issues\n\n### Specifying CUDA architecture list\n\nIn some cases, you need to explicitly specify which compute capability your GPU uses. The default list might not contain your architecture.\n\n```bash\nexport TORCH_CUDA_ARCH_LIST=\"5.2 6.0 6.1 7.0 7.5 8.0 8.6+PTX\"; python setup.py install --force_cuda\n```\n\n### Unhandled Out-Of-Memory thrust::system exception\n\nThere is [a known issue](https:\u002F\u002Fgithub.com\u002FNVIDIA\u002Fthrust\u002Fissues\u002F1448) in thrust with CUDA 10 that leads to an unhandled thrust exception. Please refer to the [issue](https:\u002F\u002Fgithub.com\u002FNVIDIA\u002FMinkowskiEngine\u002Fissues\u002F357) for detail.\n\n### Too much GPU memory usage or Frequent Out of Memory\n\nThere are a few causes for this error.\n\n1. Out of memory during a long-running training\n\nMinkowskiEngine is a specialized library that can handle a different number of points, or non-zero elements, at every iteration during training, which is common in point cloud data.\nHowever, pytorch is implemented assuming that the number of points, or the size of the activations, does not change at every iteration. 
Thus, the GPU memory caching used by pytorch can result in unnecessarily large memory consumption.\n\nSpecifically, pytorch caches chunks of memory space to speed up the allocations used in every tensor creation. If it fails to find a suitable cached block, it splits an existing cached block, or allocates new space if there is no cached block large enough for the requested size. Thus, every time we use a different number of points (non-zero elements) with pytorch, it either splits an existing cache block or reserves new memory. If the cache becomes too fragmented and has allocated all GPU space, it will raise an out-of-memory error.\n\n**To prevent this, you must clear the cache at regular intervals with `torch.cuda.empty_cache()`.**\n\n### CUDA 11.1 Installation\n\n```\nwget https:\u002F\u002Fdeveloper.download.nvidia.com\u002Fcompute\u002Fcuda\u002F11.1.1\u002Flocal_installers\u002Fcuda_11.1.1_455.32.00_linux.run\nsudo sh cuda_11.1.1_455.32.00_linux.run --toolkit --silent --override\n\n# Install MinkowskiEngine with CUDA 11.1\nexport CUDA_HOME=\u002Fusr\u002Flocal\u002Fcuda-11.1; pip install MinkowskiEngine -v --no-deps\n```\n\n### Running the MinkowskiEngine on nodes with a large number of CPUs\n\nThe MinkowskiEngine uses OpenMP to parallelize the kernel map generation. However, when the number of threads used for parallelization is too large (e.g. OMP_NUM_THREADS=80), the efficiency drops rapidly as all threads simply wait for locks to be released.\nIn such cases, set the number of threads used for OpenMP. 
Usually, any number below 24 would be fine, but search for the optimal setup on your system.\n\n```\nexport OMP_NUM_THREADS=\u003Cnumber of threads to use>; python \u003Cyour_program.py>\n```\n\n## Citing Minkowski Engine\n\nIf you use the Minkowski Engine, please cite:\n\n- [4D Spatio-Temporal ConvNets: Minkowski Convolutional Neural Networks, CVPR'19](https:\u002F\u002Farxiv.org\u002Fabs\u002F1904.08755), [[pdf]](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1904.08755.pdf)\n\n```\n@inproceedings{choy20194d,\n  title={4D Spatio-Temporal ConvNets: Minkowski Convolutional Neural Networks},\n  author={Choy, Christopher and Gwak, JunYoung and Savarese, Silvio},\n  booktitle={Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition},\n  pages={3075--3084},\n  year={2019}\n}\n```\n\nFor multi-threaded kernel map generation, please cite:\n\n```\n@inproceedings{choy2019fully,\n  title={Fully Convolutional Geometric Features},\n  author={Choy, Christopher and Park, Jaesik and Koltun, Vladlen},\n  booktitle={Proceedings of the IEEE International Conference on Computer Vision},\n  pages={8958--8966},\n  year={2019}\n}\n```\n\nFor strided pooling layers for high-dimensional convolutions, please cite:\n\n```\n@inproceedings{choy2020high,\n  title={High-dimensional Convolutional Networks for Geometric Pattern Recognition},\n  author={Choy, Christopher and Lee, Junha and Ranftl, Rene and Park, Jaesik and Koltun, Vladlen},\n  booktitle={Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition},\n  year={2020}\n}\n```\n\nFor generative transposed convolution, please cite:\n\n```\n@inproceedings{gwak2020gsdn,\n  title={Generative Sparse Detection Networks for 3D Single-shot Object Detection},\n  author={Gwak, JunYoung and Choy, Christopher B and Savarese, Silvio},\n  booktitle={European conference on computer vision},\n  year={2020}\n}\n```\n\n\n## Unittest\n\nFor unittests and gradcheck, use torch >= 1.7\n\n## Projects using Minkowski 
Engine\n\nPlease feel free to update [the wiki page](https:\u002F\u002Fgithub.com\u002FNVIDIA\u002FMinkowskiEngine\u002Fwiki\u002FUsage) to add your projects!\n\n- [Projects using MinkowskiEngine](https:\u002F\u002Fgithub.com\u002FNVIDIA\u002FMinkowskiEngine\u002Fwiki\u002FUsage)\n\n- Segmentation: [3D and 4D Spatio-Temporal Semantic Segmentation, CVPR'19](https:\u002F\u002Fgithub.com\u002Fchrischoy\u002FSpatioTemporalSegmentation)\n- Representation Learning: [Fully Convolutional Geometric Features, ICCV'19](https:\u002F\u002Fgithub.com\u002Fchrischoy\u002FFCGF)\n- 3D Registration: [Learning multiview 3D point cloud registration, CVPR'20](https:\u002F\u002Farxiv.org\u002Fabs\u002F2001.05119)\n- 3D Registration: [Deep Global Registration, CVPR'20](https:\u002F\u002Farxiv.org\u002Fabs\u002F2004.11540)\n- Pattern Recognition: [High-Dimensional Convolutional Networks for Geometric Pattern Recognition, CVPR'20](https:\u002F\u002Farxiv.org\u002Fabs\u002F2005.08144)\n- Detection: [Generative Sparse Detection Networks for 3D Single-shot Object Detection, ECCV'20](https:\u002F\u002Farxiv.org\u002Fabs\u002F2006.12356)\n- Image matching: [Sparse Neighbourhood Consensus Networks, ECCV'20](https:\u002F\u002Fwww.di.ens.fr\u002Fwillow\u002Fresearch\u002Fsparse-ncnet\u002F)\n","[pypi-image]: https:\u002F\u002Fbadge.fury.io\u002Fpy\u002FMinkowskiEngine.svg\n[pypi-url]: https:\u002F\u002Fpypi.org\u002Fproject\u002FMinkowskiEngine\u002F\n[pypi-download]: https:\u002F\u002Fimg.shields.io\u002Fpypi\u002Fdm\u002FMinkowskiEngine\n[slack-badge]: https:\u002F\u002Fimg.shields.io\u002Fbadge\u002Fslack-join%20chats-brightgreen\n[slack-url]: https:\u002F\u002Fjoin.slack.com\u002Ft\u002Fminkowskiengine\u002Fshared_invite\u002Fzt-piq2x02a-31dOPocLt6bRqOGY3U_9Sw\n\n# Minkowski Engine\n\n[![PyPI 版本][pypi-image]][pypi-url] [![PyPI 每月下载量][pypi-download]][pypi-url] [![Slack 聊天][slack-badge]][slack-url]\n\nMinkowski Engine 是一个用于稀疏张量的自动微分库。它支持所有标准的神经网络层，例如卷积、池化、反池化以及针对稀疏张量的广播操作。更多信息请访问 
[文档页面](http:\u002F\u002Fnvidia.github.io\u002FMinkowskiEngine\u002Foverview.html)。\n\n## 最新消息\n\n- 2021-08-11 添加了 Docker 安装说明\n- 2021-08-06 已解决与 PyTorch 1.8 和 1.9 相关的所有安装问题。\n- 2021-04-08 由于近期在 [PyTorch 1.8 + CUDA 11](https:\u002F\u002Fgithub.com\u002FNVIDIA\u002FMinkowskiEngine\u002Fissues\u002F330) 中出现的问题，建议使用 [Anaconda 进行安装](#anaconda)。\n- 2020-12-24 v0.5 现已发布！新版本为所有坐标管理函数提供了 CUDA 加速。\n\n## 示例网络\n\nMinkowski Engine 支持多种基于稀疏张量构建的功能。我们在此列出几种流行的网络架构和应用。要运行这些示例，请先安装该包，然后在包的根目录下执行相应命令。\n\n| 示例              | 网络及命令                                                                                                                                                           |\n|:---------------------:|:-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------:|\n| 语义分割 | \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FNVIDIA_MinkowskiEngine_readme_398aeee5ae4a.png\"> \u003Cbr \u002F> \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FNVIDIA_MinkowskiEngine_readme_5c0e7670c416.png\" width=\"256\"> \u003Cbr \u002F> `python -m examples.indoor` |\n| 分类        | ![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FNVIDIA_MinkowskiEngine_readme_3c03fd83f819.png) \u003Cbr \u002F> `python -m examples.classification_modelnet40`                                                          |\n| 重建        | \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FNVIDIA_MinkowskiEngine_readme_04747d614f57.png\"> \u003Cbr \u002F> \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FNVIDIA_MinkowskiEngine_readme_15f25281a58b.gif\" width=\"256\"> \u003Cbr \u002F> `python -m examples.reconstruction` |\n| 完善            | \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FNVIDIA_MinkowskiEngine_readme_87d3ea94bdcc.png\"> \u003Cbr \u002F> `python -m examples.completion`                     
                                  |\n| 检测             | \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FNVIDIA_MinkowskiEngine_readme_e228614cc790.png\">                                                                                               |\n\n\n## 稀疏张量网络：面向空间稀疏张量的神经网络\n\n通过压缩神经网络来加速推理并最小化内存占用一直是广泛研究的主题。一种流行的模型压缩技术是剪枝卷积神经网络中的权重，也被称为 [*稀疏卷积网络*](https:\u002F\u002Fwww.cv-foundation.org\u002Fopenaccess\u002Fcontent_cvpr_2015\u002Fpapers\u002FLiu_Sparse_Convolutional_Neural_2015_CVPR_paper.pdf)。这种用于模型压缩的参数空间稀疏性通常应用于处理稠密张量的网络，而这些网络的所有中间激活也是稠密张量。\n\n然而，在这项工作中，我们专注于 [*空间上* 稀疏的数据](https:\u002F\u002Farxiv.org\u002Fabs\u002F1409.6070)，特别是高维的空间稀疏输入、3D 数据以及 3D 对象表面上的卷积，这一概念最早由 [Siggraph'17](https:\u002F\u002Fwang-ps.github.io\u002FO-CNN.html) 提出。我们也可以将这些数据表示为稀疏张量，而这类稀疏张量在 3D 感知、配准和统计数据分析等高维问题中非常常见。我们将专为这类输入设计的神经网络称为 *稀疏张量网络*，这些稀疏张量网络以稀疏张量作为输入和输出进行处理和生成。为了构建稀疏张量网络，我们可以像定义稠密张量上的网络层一样，使用 MLP、非线性变换、卷积、归一化、池化等标准神经网络层，并将其在 Minkowski Engine 中实现。\n\n我们在下方展示了稀疏张量网络对稀疏张量进行卷积运算的过程。稀疏张量上的卷积层与稠密张量上的卷积层类似。然而，在稀疏张量上，我们只在少数指定的点上计算卷积输出，这些点可以通过 [广义卷积](https:\u002F\u002Fnvidia.github.io\u002FMinkowskiEngine\u002Fsparse_tensor_network.html) 来控制。更多相关信息请访问 [稀疏张量网络的文档页面](https:\u002F\u002Fnvidia.github.io\u002FMinkowskiEngine\u002Fsparse_tensor_network.html) 和 [术语页面](https:\u002F\u002Fnvidia.github.io\u002FMinkowskiEngine\u002Fterminology.html)。\n\n| 稠密张量                                                                | 稀疏张量                                                                |\n|:---------------------------------------------------------------------------:|:----------------------------------------------------------------------------:|\n| \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FNVIDIA_MinkowskiEngine_readme_6b3b7971b5e3.gif\"> | \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FNVIDIA_MinkowskiEngine_readme_4fd96ce062cb.gif\"> 
--------------------------------------------------------------------------------

## Features

- Unlimited high-dimensional sparse tensor support
- All standard neural network layers (convolution, pooling, broadcast, etc.)
- Dynamic computation graph
- Custom kernel shapes
- Multi-GPU training
- Multi-threaded kernel map
- Multi-threaded compilation
- Highly-optimized GPU kernels

## Requirements

- Ubuntu >= 14.04
- CUDA >= 10.1.243, and **the CUDA version must match the one PyTorch was built with** (e.g., if you use conda's cudatoolkit=11.1, MinkowskiEngine must also be compiled with CUDA 11.1)
- PyTorch >= 1.7 (to pin a specific CUDA build, install with conda, e.g. `conda install -y -c nvidia -c pytorch pytorch=1.8.1 cudatoolkit=10.2`; PyTorch and MinkowskiEngine must use the same CUDA version)
- Python >= 3.6
- ninja (for installation)
- GCC >= 7.4.0

## Installation

You can install the Minkowski Engine with `pip`, with Anaconda, or directly on your system. If you run into problems during installation, please check the [installation wiki page](https://github.com/NVIDIA/MinkowskiEngine/wiki/Installation).

If you cannot find a related issue there, please report it on the [GitHub issues page](https://github.com/NVIDIA/MinkowskiEngine/issues).

- [Pip](https://github.com/NVIDIA/MinkowskiEngine#pip) installation
- [Conda](https://github.com/NVIDIA/MinkowskiEngine#anaconda) installation
- [System Python](https://github.com/NVIDIA/MinkowskiEngine#system-python) installation
- [Docker](https://github.com/NVIDIA/MinkowskiEngine#docker) installation


### Pip

MinkowskiEngine is distributed via [PyPI MinkowskiEngine][pypi-url] and can be installed simply with `pip`.

First, install PyTorch following the [official PyTorch documentation](https://pytorch.org). Then install `openblas`:

```
sudo apt install build-essential python3-dev libopenblas-dev
pip install torch ninja
pip install -U MinkowskiEngine --install-option="--blas=openblas" -v --no-deps

# To install from the latest source instead:
# pip install -U git+https://github.com/NVIDIA/MinkowskiEngine --no-deps
```

If you want to pass arguments to the setup script, refer to the following command:

```
# Uncomment some of these options if installation fails.
# export CXX=c++;                          # set this if you want a different C++ compiler
# export CUDA_HOME=/usr/local/cuda-11.1;   # or choose the correct CUDA version on your system
pip install -U git+https://github.com/NVIDIA/MinkowskiEngine -v --no-deps \
#                           \ # uncomment the following line if you want to force CUDA installation
#                           --install-option="--force_cuda" \
#                           \ # uncomment the following line for a CPU-only build; force_cuda overrides cpu_only
#                           --install-option="--cpu_only" \
#                           \ # uncomment the following line to override the BLAS library (openblas, atlas, mkl, ...)
#                           --install-option="--blas=openblas" \
```

### Anaconda

MinkowskiEngine supports both CUDA 10.2 and CUDA 11.1, which work with most recent PyTorch versions.

#### CUDA 10.2

We recommend `python>=3.6` for the installation. First, install Anaconda on your machine following the [official Anaconda documentation](https://docs.anaconda.com/anaconda/install/).

```
sudo apt install g++-7  # GCC < 8 is required for CUDA 10.2
# Make sure `g++-7 --version` is at least 7.4.0
conda create -n py3-mink python=3.8
conda activate py3-mink

conda install openblas-devel -c anaconda
conda install pytorch=1.9.0 torchvision cudatoolkit=10.2 -c pytorch -c nvidia

# Install MinkowskiEngine
export CXX=g++-7
# Uncomment the following line to specify the CUDA installation path.
# Make sure `$CUDA_HOME/bin/nvcc --version` reports 10.2
# export CUDA_HOME=/usr/local/cuda-10.2
pip install -U git+https://github.com/NVIDIA/MinkowskiEngine -v --no-deps --install-option="--blas_include_dirs=${CONDA_PREFIX}/include" --install-option="--blas=openblas"

# Or, for a local installation of MinkowskiEngine:
git clone https://github.com/NVIDIA/MinkowskiEngine.git
cd MinkowskiEngine
export CXX=g++-7
python setup.py install --blas_include_dirs=${CONDA_PREFIX}/include --blas=openblas
```

#### CUDA 11.X

We recommend `python>=3.6` for the installation. First, install Anaconda on your machine following the [official Anaconda documentation](https://docs.anaconda.com/anaconda/install/).

```
conda create -n py3-mink python=3.8
conda activate py3-mink

conda install openblas-devel -c anaconda
conda install pytorch=1.9.0 torchvision cudatoolkit=11.1 -c pytorch -c nvidia

# Install MinkowskiEngine

# Uncomment the following line to specify the CUDA installation path.
# Make sure `$CUDA_HOME/bin/nvcc --version` reports 11.X
# export CUDA_HOME=/usr/local/cuda-11.1
pip install -U git+https://github.com/NVIDIA/MinkowskiEngine -v --no-deps --install-option="--blas_include_dirs=${CONDA_PREFIX}/include" --install-option="--blas=openblas"

# Or, for a local installation of MinkowskiEngine:
git clone https://github.com/NVIDIA/MinkowskiEngine.git
cd MinkowskiEngine
python setup.py install --blas_include_dirs=${CONDA_PREFIX}/include --blas=openblas
```

### System Python

As with the Anaconda installation, make sure the PyTorch you install uses the same CUDA version as `nvcc`.

```
# Install system dependencies
sudo apt install build-essential python3-dev libopenblas-dev

# Skip this step if your Python 3 already has pip
curl https://bootstrap.pypa.io/get-pip.py | python3

# Install Python dependencies
python3 -m pip install torch numpy ninja

git clone https://github.com/NVIDIA/MinkowskiEngine.git

cd MinkowskiEngine

python setup.py install
# To specify BLAS, CXX, and CUDA_HOME and force a CUDA build:
# export CXX=c++; export CUDA_HOME=/usr/local/cuda-11.1; python setup.py install --blas=openblas --force_cuda
```

### Docker

```
git clone https://github.com/NVIDIA/MinkowskiEngine
cd MinkowskiEngine
docker build -t minkowski_engine docker
```

Once the build completes, check that Docker loads MinkowskiEngine correctly (note that Docker image names must be lowercase, matching the tag passed to `docker build`):

```
docker run minkowski_engine python3 -c "import MinkowskiEngine; print(MinkowskiEngine.__version__)"
```

## CPU-only Build and BLAS Configuration (MKL)

The Minkowski Engine supports CPU-only builds on platforms without an NVIDIA GPU. See the [quick start](https://nvidia.github.io/MinkowskiEngine/quick_start.html) for more details.


## Quick Start

To use the Minkowski Engine, first import the engine, then define the network. If your data is not quantized yet, you need to voxelize or quantize it into a sparse tensor. Conveniently, the Minkowski Engine provides a quantization function (`MinkowskiEngine.utils.sparse_quantize`).
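Quantization maps continuous coordinates onto an integer grid and merges points that land in the same cell. A minimal pure-Python sketch of the idea follows — it is not the actual `sparse_quantize` implementation, and the merge-by-averaging policy here is just one of several possible choices.

```python
import math

# Illustrative sketch of coordinate quantization (NOT ME's sparse_quantize).
# Points falling into the same voxel are merged by averaging their features.

def quantize(points, features, voxel_size):
    """points: list of (x, y, z) floats; features: list of floats."""
    sums, counts = {}, {}
    for (x, y, z), f in zip(points, features):
        cell = (math.floor(x / voxel_size),
                math.floor(y / voxel_size),
                math.floor(z / voxel_size))
        sums[cell] = sums.get(cell, 0.0) + f
        counts[cell] = counts.get(cell, 0) + 1
    coords = sorted(sums)
    feats = [sums[c] / counts[c] for c in coords]
    return coords, feats

pts = [(0.1, 0.2, 0.0), (0.3, 0.1, 0.0), (1.2, 0.0, 0.0)]
fts = [1.0, 3.0, 5.0]
coords, feats = quantize(pts, fts, voxel_size=0.5)
print(coords)  # the first two points share voxel (0, 0, 0)
print(feats)
```

After quantization, each integer coordinate is unique, which is exactly the precondition a sparse tensor needs.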
### Creating a Network

```python
import torch.nn as nn
import MinkowskiEngine as ME

class ExampleNetwork(ME.MinkowskiNetwork):

    def __init__(self, in_feat, out_feat, D):
        super(ExampleNetwork, self).__init__(D)
        self.conv1 = nn.Sequential(
            ME.MinkowskiConvolution(
                in_channels=in_feat,
                out_channels=64,
                kernel_size=3,
                stride=2,
                dilation=1,
                bias=False,
                dimension=D),
            ME.MinkowskiBatchNorm(64),
            ME.MinkowskiReLU())
        self.conv2 = nn.Sequential(
            ME.MinkowskiConvolution(
                in_channels=64,
                out_channels=128,
                kernel_size=3,
                stride=2,
                dimension=D),
            ME.MinkowskiBatchNorm(128),
            ME.MinkowskiReLU())
        self.pooling = ME.MinkowskiGlobalPooling()
        self.linear = ME.MinkowskiLinear(128, out_feat)

    def forward(self, x):
        out = self.conv1(x)
        out = self.conv2(out)
        out = self.pooling(out)
        return self.linear(out)
```

### Forward and Backward Passes with the Custom Network

```python
    # Loss function and network
    criterion = nn.CrossEntropyLoss()
    net = ExampleNetwork(in_feat=3, out_feat=5, D=2)
    print(net)

    # The data loader must return a tuple of coordinates, features, and labels.
    coords, feat, label = data_loader()
    input = ME.SparseTensor(feat, coordinates=coords)
    # Forward pass
    output = net(input)

    # Loss
    loss = criterion(output.F, label)
```
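`ExampleNetwork` halves the coordinate resolution twice via `stride=2` convolutions. On a sparse tensor, a strided convolution coarsens the coordinate set itself: inputs that land in the same coarse cell are merged, so the active set shrinks. The following pure-Python sketch illustrates that coarsening under the assumption that stride-s outputs sit at coordinates divisible by s; it is not ME's kernel-map machinery.

```python
# Illustrative sketch: how a stride-s sparse convolution coarsens coordinates.
# Each input coordinate maps to floor(c / s) * s on the coarser grid, and
# coordinates that collide are merged -- so the active set shrinks.

def downsample_coords(coords, stride):
    out = set()
    for c in coords:
        out.add(tuple((x // stride) * stride for x in c))
    return sorted(out)

coords = [(0, 0), (1, 0), (2, 3), (3, 3), (8, 8)]
print(downsample_coords(coords, stride=2))
# (0,0)/(1,0) merge and (2,3)/(3,3) merge -> 3 coordinates remain
```

Applied twice, as in `ExampleNetwork`, this is why the global pooling at the end operates on far fewer sites than the input had.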
## Discussion and Documentation

For discussions and questions, please email `minkowskiengine@googlegroups.com`.
For details on the API and general usage, see the [MinkowskiEngine documentation page](http://nvidia.github.io/MinkowskiEngine/).

For questions not covered by the API documentation, and for feature requests, feel free to open an issue on the [GitHub issues page](https://github.com/NVIDIA/MinkowskiEngine/issues).

## Known Issues

### Specifying the CUDA Architecture List

In some cases you need to explicitly specify the compute capability of your GPU, since the default list may not include your architecture.

```bash
export TORCH_CUDA_ARCH_LIST="5.2 6.0 6.1 7.0 7.5 8.0 8.6+PTX"; python setup.py install --force_cuda
```

### Unhandled Out-of-Memory thrust::system Exceptions

thrust in CUDA 10 has a [known issue](https://github.com/NVIDIA/thrust/issues/1448) that leads to unhandled thrust exceptions. See [this issue](https://github.com/NVIDIA/MinkowskiEngine/issues/357) for details.

### Excessive GPU Memory Usage or Frequent Out-of-Memory Errors

There are several causes of this error:

1. Out of memory during a long training session

MinkowskiEngine is a library specialized for point-cloud data, and it can process a different number of points — nonzero elements — in every iteration. PyTorch's implementation, however, assumes that the number of points or activation sizes do not change between iterations, so PyTorch's GPU memory caching can lead to unnecessarily large memory consumption.

Specifically, PyTorch caches blocks of memory to speed up allocation when tensors are created. If it cannot find a suitable block, it splits an existing cached block, or allocates new memory when no cached block is large enough. Consequently, when PyTorch processes a varying number of points (i.e., a varying number of nonzero elements), it either splits the existing cache or reserves new memory. If the cache becomes heavily fragmented and occupies all GPU memory, an out-of-memory error is thrown.

**To prevent this, you must call `torch.cuda.empty_cache()` periodically to clear the cache.**

### CUDA 11.1 Installation

```
wget https://developer.download.nvidia.com/compute/cuda/11.1.1/local_installers/cuda_11.1.1_455.32.00_linux.run
sudo sh cuda_11.1.1_455.32.00_linux.run --toolkit --silent --override

# Install MinkowskiEngine with CUDA 11.1
export CUDA_HOME=/usr/local/cuda-11.1; pip install MinkowskiEngine -v --no-deps
```

### Running MinkowskiEngine on Nodes with Many CPU Cores

MinkowskiEngine uses OpenMP to parallelize kernel-map generation. However, when too many threads are used for parallelization (e.g., OMP_NUM_THREADS=80), efficiency drops quickly as all threads wait on a multithreaded lock. In such cases, set the number of threads OpenMP uses. Typically fewer than 24 threads works well, but you should still find the best configuration for your system.

```
export OMP_NUM_THREADS=<number of threads to use>; python <your_program.py>
```

## Citing the Minkowski Engine

If you use the Minkowski Engine, please cite:

- [4D Spatio-Temporal ConvNets: Minkowski Convolutional Neural Networks, CVPR'19](https://arxiv.org/abs/1904.08755), [[pdf]](https://arxiv.org/pdf/1904.08755.pdf)

```
@inproceedings{choy20194d,
  title={4D Spatio-Temporal ConvNets: Minkowski Convolutional Neural Networks},
  author={Choy, Christopher and Gwak, JunYoung and Savarese, Silvio},
  booktitle={Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition},
  pages={3075--3084},
  year={2019}
}
```

For the multithreaded kernel-map generation, please cite:

```
@inproceedings{choy2019fully,
  title={Fully Convolutional Geometric Features},
  author={Choy, Christopher and Park, Jaesik and Koltun, Vladlen},
  booktitle={Proceedings of the IEEE International Conference on Computer Vision},
  pages={8958--8966},
  year={2019}
}
```
For strided pooling layers in high-dimensional convolutions, please cite:

```
@inproceedings{choy2020high,
  title={High-dimensional Convolutional Networks for Geometric Pattern Recognition},
  author={Choy, Christopher and Lee, Junha and Ranftl, Rene and Park, Jaesik and Koltun, Vladlen},
  booktitle={Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition},
  year={2020}
}
```

For generative transposed convolutions, please cite:

```
@inproceedings{gwak2020gsdn,
  title={Generative Sparse Detection Networks for 3D Single-shot Object Detection},
  author={Gwak, JunYoung and Choy, Christopher B and Savarese, Silvio},
  booktitle={European conference on computer vision},
  year={2020}
}
```


## Unit Tests

Use PyTorch >= 1.7 for the unit tests and gradient checks.

## Projects Using the Minkowski Engine

Feel free to update the [wiki page](https://github.com/NVIDIA/MinkowskiEngine/wiki/Usage) to add your project!

- [Projects using MinkowskiEngine](https://github.com/NVIDIA/MinkowskiEngine/wiki/Usage)

- Segmentation: [3D and 4D Spatio-Temporal Semantic Segmentation, CVPR'19](https://github.com/chrischoy/SpatioTemporalSegmentation)
- Representation learning: [Fully Convolutional Geometric Features, ICCV'19](https://github.com/chrischoy/FCGF)
- 3D registration: [Learning Multiview 3D Point Cloud Registration, CVPR'20](https://arxiv.org/abs/2001.05119)
- 3D registration: [Deep Global Registration, CVPR'20](https://arxiv.org/abs/2004.11540)
- Pattern recognition: [High-Dimensional Convolutional Networks for Geometric Pattern Recognition, CVPR'20](https://arxiv.org/abs/2005.08144)
- Detection: [Generative Sparse Detection Networks for 3D Single-shot Object Detection, ECCV'20](https://arxiv.org/abs/2006.12356)
- Image matching: [Sparse Neighbourhood Consensus Networks, ECCV'20](https://www.di.ens.fr/willow/research/sparse-ncnet/)

--------------------------------------------------------------------------------

# MinkowskiEngine Quick Start Guide

The Minkowski Engine is an auto-differentiation library designed for **sparse tensors**, supporting standard neural-network layers such as convolution, pooling, and broadcast. It is particularly suited to high-dimensional spatial data (e.g., 3D point clouds and voxel grids), and can significantly reduce memory usage and speed up inference.
## 1. Environment Setup

Before starting, make sure your system meets the following requirements. **Critical: the CUDA version PyTorch uses must exactly match the CUDA version used to compile MinkowskiEngine.**

### System Requirements
- **OS**: Ubuntu >= 14.04 (Ubuntu 18.04/20.04 recommended)
- **Python**: >= 3.6
- **Compilers**: GCC >= 7.4.0, Ninja
- **CUDA**: >= 10.1.243 (must match the PyTorch build)
- **PyTorch**: >= 1.7

### Prerequisites
Install the system-level dependencies first:
```bash
sudo apt install build-essential python3-dev libopenblas-dev ninja-build
```

## 2. Installation

We recommend installing with **Conda**, since it manages the dependencies between the CUDA toolkit and the Python environment more reliably.

### Option A: Conda (Recommended)

This method avoids most CUDA version-mismatch problems. The example below uses **CUDA 11.1** (for CUDA 10.2, change `cudatoolkit` to `10.2` and make sure your GCC version is < 8).

1. **Create and activate a virtual environment**:
   ```bash
   conda create -n py3-mink python=3.8
   conda activate py3-mink
   ```

2. **Install PyTorch and OpenBLAS**:
   *Note: users in mainland China can speed up the `-c` channels with a mirror, or add `-c https://mirrors.tuna.tsinghua.edu.cn/anaconda/cloud/pytorch/`*
   ```bash
   conda install openblas-devel -c anaconda
   conda install pytorch=1.9.0 torchvision cudatoolkit=11.1 -c pytorch -c nvidia
   ```
3. **Set the compiler and install MinkowskiEngine**:
   ```bash
   # Specify the C++ compiler (may need adjusting for your system;
   # CUDA 11+ usually does not require forcing g++-7)
   export CXX=g++

   # Install from source
   pip install -U git+https://github.com/NVIDIA/MinkowskiEngine -v --no-deps \
       --install-option="--blas_include_dirs=${CONDA_PREFIX}/include" \
       --install-option="--blas=openblas"
   ```

### Option B: Pip

If you already have a system-level PyTorch and CUDA environment configured:

```bash
# Make sure torch and ninja are installed
pip install torch ninja

# Install MinkowskiEngine
pip install -U MinkowskiEngine --install-option="--blas=openblas" -v --no-deps
```

*If the installation fails, try installing from the latest source:*
```bash
export CUDA_HOME=/usr/local/cuda-11.1  # replace with your actual CUDA path
pip install -U git+https://github.com/NVIDIA/MinkowskiEngine -v --no-deps \
    --install-option="--force_cuda" \
    --install-option="--blas=openblas"
```

### Verifying the Installation
Run the following to check whether the installation succeeded:
```bash
python3 -c "import MinkowskiEngine; print(MinkowskiEngine.__version__)"
```

## 3. Basic Usage

The core of MinkowskiEngine is the `SparseTensor`. Below is a minimal example that builds a simple sparse convolutional network.

### Code Example

```python
import torch
import MinkowskiEngine as ME

# 1. Define coordinates and features
# coords: [N, D] integer coordinates, e.g. [x, y, z] for 3D space
coords = torch.IntTensor([
    [0, 0, 0],
    [1, 0, 0],
    [0, 1, 0],
    [1, 1, 0]
])

# feats: [N, C] floating-point features
feats = torch.rand(4, 16)

# 2. Create the sparse tensor
# quantization_mode controls how duplicate coordinates are merged
# (e.g. UNWEIGHTED_SUM adds the features at the same coordinate)
sparse_tensor = ME.SparseTensor(
    features=feats,
    coordinates=coords,
    quantization_mode=ME.SparseTensorQuantizationMode.UNWEIGHTED_AVERAGE
)

# 3. Define a sparse convolution layer
conv_layer = ME.MinkowskiConvolution(
    in_channels=16,
    out_channels=32,
    kernel_size=3,
    stride=1,
    dimension=3  # the spatial dimension; 3 for 3D
)

# 4. Forward pass
output_tensor = conv_layer(sparse_tensor)

print(f"Input sparse tensor shape: {sparse_tensor.shape}")
print(f"Output sparse tensor shape: {output_tensor.shape}")
print(f"Output feature shape: {output_tensor.features.shape}")
```

### Core Concepts
- **Coordinates**: the indices of the valid (nonzero) locations of the sparse data.
- **Features**: the data values at those locations.
- **Dimension**: the spatial dimension must be specified when constructing a layer (e.g., 2 for 2D images, 3 for 3D point clouds).
- **Quantization**: when the input coordinates contain duplicates, a quantization mode (e.g., sum or average) merges them into unique sparse coordinates.

--------------------------------------------------------------------------------

## Use Case: LiDAR Semantic Segmentation

The perception team at an autonomous-driving startup is building a 3D semantic segmentation system on LiDAR point clouds to recognize pedestrians, vehicles, and obstacles on the road.

### Without MinkowskiEngine
- **Explosive memory growth**: to process sparse LiDAR point clouds, engineers were forced to voxelize the data into dense tensors, wasting 99% of GPU memory on empty cells and making high-resolution models impossible on a single GPU.
- **Wasted computation**: conventional convolutional networks perform useless multiplications over large empty regions, pushing inference latency to hundreds of milliseconds — far from real-time driving requirements.
- **Constrained architectures**: the memory bottleneck forced the team to lower input resolution or cut network depth, directly hurting accuracy on distant small objects (e.g., traffic cones, children).
- **Tedious preprocessing**: complex custom code was needed to manually manage sparse coordinate indices, slowing development and introducing hard-to-trace bugs.

### With MinkowskiEngine
- **Memory usage reduced by two orders of magnitude**: MinkowskiEngine natively supports high-dimensional sparse tensors and stores only the coordinates and features of valid points, allowing larger and finer point-cloud scenes within the same memory budget.
- **Significantly faster inference**: the built-in sparse convolution operators automatically skip empty regions; combined with CUDA acceleration, end-to-end latency fits within real-time bounds suitable for in-vehicle deployment.
- **Full model capacity unlocked**: freed from the memory ceiling, the team deployed deeper and wider 3D architectures, markedly improving segmentation accuracy and small-object detection in complex scenes.
- **Standardized development**: standard layers (sparse convolution, pooling) come ready-made, so engineers can focus on architecture innovation rather than low-level optimization.

MinkowskiEngine turns spatial sparsity into a computational advantage, removing the memory and efficiency bottlenecks of 3D deep learning on large-scale point clouds.
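The memory argument above is easy to quantify: a dense voxel grid stores every cell, while a COO-style sparse layout stores only the occupied ones. The stdlib-only sketch below counts stored values under simplified assumptions (one feature channel per point); it illustrates the scaling, not MinkowskiEngine's internal representation.

```python
# Compare stored-element counts: dense voxel grid vs. COO sparse layout.
# Illustrative only -- not MinkowskiEngine's internal representation.

def dense_cells(resolution, dims=3):
    return resolution ** dims            # every voxel is stored

def coo_cells(num_points, dims=3):
    return num_points * (dims + 1)       # per point: dims coords + 1 feature

res = 512            # a 512^3 voxel grid
points = 100_000     # on the order of one LiDAR sweep
dense = dense_cells(res)
sparse = coo_cells(points)
print(f"dense: {dense:,} cells, sparse: {sparse:,} values")
print(f"ratio: {dense / sparse:.0f}x")
```

The gap widens with resolution: dense storage grows cubically while COO storage grows only with the number of measured points.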
## Compatibility Notes

- **OS**: Linux (Ubuntu >= 14.04); macOS and Windows are not listed in the system requirements.
- **GPU**: an NVIDIA GPU is required for CUDA builds; CUDA >= 10.1.243, and the version must exactly match the one PyTorch uses (e.g., 10.2 or 11.1).
- **CUDA/PyTorch matching**: the CUDA version used to compile MinkowskiEngine must strictly match the cudatoolkit version of the installed PyTorch, or the installation will fail.
- **Compilers**: GCC >= 7.4.0 in general; for CUDA 10.2, GCC < 8 (g++-7 recommended) is required.
- **Recommended method**: the authors strongly recommend installing with Anaconda to resolve dependency and version-compatibility issues.
- **No GPU**: a CPU-only build is available via the `--cpu_only` flag.
- **Python and dependencies**: Python >= 3.6; torch >= 1.7, ninja, libopenblas-dev.

## FAQ

**Q: What should I do about `TypeError: can only concatenate str (not "list") to str` during installation?**

A: This is usually related to the OPENBLAS directory configuration in cluster environments. Install directly on a compute node rather than on the head node. If you have a batch system, try running the installation as a batch job. With a shared filesystem (e.g., NetApp), installing once on one compute node makes the package available to all nodes. Make sure the environment can access a GPU when installing the GPU build. ([#94](https://github.com/NVIDIA/MinkowskiEngine/issues/94))

**Q: How do I resolve the `Invalid in_feat size` assertion error with CUDA 11.1?**

A: This occurred with specific MinkowskiEngine versions (0.5.1/0.5.2) combined with CUDA 11.1, and was fixed after upgrading to PyTorch 1.9 + CUDA 11.1/11.2. Upgrade your environment to PyTorch 1.9 or later and make sure the NVCC compiler version matches the CUDA driver (e.g., CUDA 11.1). If the problem persists, check whether your code passes empty features into the coordinate manager. ([#330](https://github.com/NVIDIA/MinkowskiEngine/issues/330))

**Q: GPU memory grows gradually during training until a crash — is this a memory leak?**

A: The maintainers found no memory leak in the Minkowski Engine itself across tests of more than 100,000 iterations. Continuously growing memory usually comes from user code that fails to release intermediate variables or clear the computation graph. Check for retained references to old batches, network outputs, or gradients. Submit a minimal reproducible example for further investigation; in most cases this is a user-side logic issue rather than a library defect. ([#150](https://github.com/NVIDIA/MinkowskiEngine/issues/150))
**Q: Conda installation fails compiling with `fatal error: cblas.h: No such file or directory`. How do I fix it?**

A: The compiler cannot find the BLAS headers. Two fixes:
1. With sudo, install the OpenBLAS development package: `sudo apt install libopenblas-dev`.
2. Without sudo, or when using Conda, specify the include path explicitly — for example, check out v0.4.3 and run `python setup.py install --blas=openblas --blas_include_dirs=/usr/include/openblas`.

On CentOS, install with `yum install openblas-devel.x86_64`, then specify the path: `python setup.py install --blas_include_dirs=/usr/include/openblas:${CONDA_PREFIX}/include --blas=openblas`. ([#300](https://github.com/NVIDIA/MinkowskiEngine/issues/300))

**Q: How do I install MinkowskiEngine on a cluster without root access?**

A: Install locally in your user directory with Conda or pip. The key is specifying the BLAS paths correctly: install the necessary dependencies first (e.g., `conda install mkl mkl-include -c intel`), then pass the header path to `setup.py` via `--blas_include_dirs` (e.g., your Conda environment's include directory, or a user-installed openblas directory). On a shared filesystem, install on a GPU compute node via an interactive or batch job so the installed package is visible to all nodes. ([#94](https://github.com/NVIDIA/MinkowskiEngine/issues/94))

**Q: How do different CUDA versions affect MinkowskiEngine compatibility?**

A: Specific MinkowskiEngine versions can conflict with specific CUDA versions. For example, ME 0.5.1/0.5.2 hit a coordinate-manager assertion error under CUDA 11.1 while working fine under CUDA 10.2; the problem was later fixed with newer PyTorch (1.9+) and newer CUDA toolchains. When you encounter strange system-level errors or assertion failures, try switching CUDA versions (downgrading to 10.2 or upgrading to 11.2+) or updating PyTorch to rule out environment incompatibilities. ([#330](https://github.com/NVIDIA/MinkowskiEngine/issues/330))
## Release Notes

### v0.5.4 (2021-05-21)
- Fix `TensorField.sparse()` behavior when there are no duplicate coordinates
- Skip the unnecessary sparse-matrix multiplication in `SparseTensor.initialize_coordinates()` when there are no duplicate coordinates
- Add a model summary utility function
- `TensorField.splat`, to splat features onto a sparse tensor
- `SparseTensor.interpolate`, to extract interpolated features
- Add a `coordinate_key` property to `SparseTensor` and `TensorField`
- Fix `.dense()` for GPU tensors (PR #319)

### v0.5.3 (2021-04-14)
- Update the README for PyTorch 1.8.1
- Replace Thrust vectors with a custom `gpu_storage` for faster constructors
- Update the PyTorch installation instructions
- Fix the transposed-convolution mapping when `kernel_size == stride_size`
- Update the reconstruction and VAE examples for the v0.5 API
- Add the `stack_unet.py` example and update the related API
- Add the `MinkowskiToFeature` layer

### v0.5.2 (2021-03-05)
- spmm average CUDA function
- SparseTensor list operators (cat, mean, sum, var)
- MinkowskiStack containers
- Replace all `at::cuda::getCurrentCUDASparseHandle` with a custom `getCurrentCUDASparseHandle` (issue #308)
- Fix the Python function for coordinate-manager kernel maps
- Direct max pooling
    - SparseTensorQuantizationMode.MAX_POOL
- TensorField global max pooling
    - raw field
    - raw field map
    - MinkowskiGlobalMaxPool CPU/GPU updates for field inputs
- `SparseTensor.dense()` now raises a ValueError when coordinates are negative, instead of subtracting the minimum coordinate from the sparse tensor (issue #316)
- Add a `to_sparse()` method that removes zero elements (issue #317)
    - the previous `to_sparse()` method was renamed `to_sparse_all()`
    - `MinkowskiToSparseTensor` takes an optional `remove_zeros` boolean argument
- Fix global max pooling with batch size 1
- For `gpu_kernel_map`, use separate memory blocks for the in map, out map, and kernel indices to avoid misaligned GPU memory errors

### v0.5.1 (2021-02-06)
- v0.5 documentation updates
- Nonlinearity functionals and modules
- Warning when using CUDA without ME CUDA support
- Diagnostics test
- TensorField slicing
    - cache unique map and inverse map pairs in the coordinate manager
    - generate the inverse map on the fly
- Coordinate manager
    - `field_to_sparse_insert_and_map`
    - `exists_field_to_sparse`
    - `get_field_to_sparse_map`
    - fix `kernel_map` with an empty coordinate map
- Coordinate field map
    - `quantize_coordinates`
- TensorField binary-operation fixes
- Minkowski synchronized batch norm
    - tfield support
    - convert-sync-batchnorm updates
- TensorField to sparse conversion using a coordinate map key
- Sparse matrix multiplication
    - force matrices to be contiguous
- Fix the AveragePooling cudaErrorMisalignedAddress error on CUDA 10 (#246)
### v0.5.0 (2020-12-31)
- Significant speedup from the GPU coordinate map/manager backend.

### v0.4.3 (2020-06-11)
- (no release summary recorded)