[{"data":1,"prerenderedAt":-1},["ShallowReactive",2],{"similar-facebookresearch--pytorch3d":3,"tool-facebookresearch--pytorch3d":61},[4,18,26,36,44,53],{"id":5,"name":6,"github_repo":7,"description_zh":8,"stars":9,"difficulty_score":10,"last_commit_at":11,"category_tags":12,"status":17},4358,"openclaw","openclaw\u002Fopenclaw","OpenClaw 是一款专为个人打造的本地化 AI 助手，旨在让你在自己的设备上拥有完全可控的智能伙伴。它打破了传统 AI 助手局限于特定网页或应用的束缚，能够直接接入你日常使用的各类通讯渠道，包括微信、WhatsApp、Telegram、Discord、iMessage 等数十种平台。无论你在哪个聊天软件中发送消息，OpenClaw 都能即时响应，甚至支持在 macOS、iOS 和 Android 设备上进行语音交互，并提供实时的画布渲染功能供你操控。\n\n这款工具主要解决了用户对数据隐私、响应速度以及“始终在线”体验的需求。通过将 AI 部署在本地，用户无需依赖云端服务即可享受快速、私密的智能辅助，真正实现了“你的数据，你做主”。其独特的技术亮点在于强大的网关架构，将控制平面与核心助手分离，确保跨平台通信的流畅性与扩展性。\n\nOpenClaw 非常适合希望构建个性化工作流的技术爱好者、开发者，以及注重隐私保护且不愿被单一生态绑定的普通用户。只要具备基础的终端操作能力（支持 macOS、Linux 及 Windows WSL2），即可通过简单的命令行引导完成部署。如果你渴望拥有一个懂你",349277,3,"2026-04-06T06:32:30",[13,14,15,16],"Agent","开发框架","图像","数据工具","ready",{"id":19,"name":20,"github_repo":21,"description_zh":22,"stars":23,"difficulty_score":10,"last_commit_at":24,"category_tags":25,"status":17},3808,"stable-diffusion-webui","AUTOMATIC1111\u002Fstable-diffusion-webui","stable-diffusion-webui 是一个基于 Gradio 构建的网页版操作界面，旨在让用户能够轻松地在本地运行和使用强大的 Stable Diffusion 图像生成模型。它解决了原始模型依赖命令行、操作门槛高且功能分散的痛点，将复杂的 AI 绘图流程整合进一个直观易用的图形化平台。\n\n无论是希望快速上手的普通创作者、需要精细控制画面细节的设计师，还是想要深入探索模型潜力的开发者与研究人员，都能从中获益。其核心亮点在于极高的功能丰富度：不仅支持文生图、图生图、局部重绘（Inpainting）和外绘（Outpainting）等基础模式，还独创了注意力机制调整、提示词矩阵、负向提示词以及“高清修复”等高级功能。此外，它内置了 GFPGAN 和 CodeFormer 等人脸修复工具，支持多种神经网络放大算法，并允许用户通过插件系统无限扩展能力。即使是显存有限的设备，stable-diffusion-webui 也提供了相应的优化选项，让高质量的 AI 艺术创作变得触手可及。",162132,"2026-04-05T11:01:52",[14,15,13],{"id":27,"name":28,"github_repo":29,"description_zh":30,"stars":31,"difficulty_score":32,"last_commit_at":33,"category_tags":34,"status":17},1381,"everything-claude-code","affaan-m\u002Feverything-claude-code","everything-claude-code 是一套专为 AI 编程助手（如 Claude Code、Codex、Cursor 等）打造的高性能优化系统。它不仅仅是一组配置文件，而是一个经过长期实战打磨的完整框架，旨在解决 AI 
代理在实际开发中面临的效率低下、记忆丢失、安全隐患及缺乏持续学习能力等核心痛点。\n\n通过引入技能模块化、直觉增强、记忆持久化机制以及内置的安全扫描功能，everything-claude-code 能显著提升 AI 在复杂任务中的表现，帮助开发者构建更稳定、更智能的生产级 AI 代理。其独特的“研究优先”开发理念和针对 Token 消耗的优化策略，使得模型响应更快、成本更低，同时有效防御潜在的攻击向量。\n\n这套工具特别适合软件开发者、AI 研究人员以及希望深度定制 AI 工作流的技术团队使用。无论您是在构建大型代码库，还是需要 AI 协助进行安全审计与自动化测试，everything-claude-code 都能提供强大的底层支持。作为一个曾荣获 Anthropic 黑客大奖的开源项目，它融合了多语言支持与丰富的实战钩子（hooks），让 AI 真正成长为懂上",153609,2,"2026-04-13T11:34:59",[14,13,35],"语言模型",{"id":37,"name":38,"github_repo":39,"description_zh":40,"stars":41,"difficulty_score":32,"last_commit_at":42,"category_tags":43,"status":17},2271,"ComfyUI","Comfy-Org\u002FComfyUI","ComfyUI 是一款功能强大且高度模块化的视觉 AI 引擎，专为设计和执行复杂的 Stable Diffusion 图像生成流程而打造。它摒弃了传统的代码编写模式，采用直观的节点式流程图界面，让用户通过连接不同的功能模块即可构建个性化的生成管线。\n\n这一设计巧妙解决了高级 AI 绘图工作流配置复杂、灵活性不足的痛点。用户无需具备编程背景，也能自由组合模型、调整参数并实时预览效果，轻松实现从基础文生图到多步骤高清修复等各类复杂任务。ComfyUI 拥有极佳的兼容性，不仅支持 Windows、macOS 和 Linux 全平台，还广泛适配 NVIDIA、AMD、Intel 及苹果 Silicon 等多种硬件架构，并率先支持 SDXL、Flux、SD3 等前沿模型。\n\n无论是希望深入探索算法潜力的研究人员和开发者，还是追求极致创作自由度的设计师与资深 AI 绘画爱好者，ComfyUI 都能提供强大的支持。其独特的模块化架构允许社区不断扩展新功能，使其成为当前最灵活、生态最丰富的开源扩散模型工具之一，帮助用户将创意高效转化为现实。",108322,"2026-04-10T11:39:34",[14,15,13],{"id":45,"name":46,"github_repo":47,"description_zh":48,"stars":49,"difficulty_score":32,"last_commit_at":50,"category_tags":51,"status":17},6121,"gemini-cli","google-gemini\u002Fgemini-cli","gemini-cli 是一款由谷歌推出的开源 AI 命令行工具，它将强大的 Gemini 大模型能力直接集成到用户的终端环境中。对于习惯在命令行工作的开发者而言，它提供了一条从输入提示词到获取模型响应的最短路径，无需切换窗口即可享受智能辅助。\n\n这款工具主要解决了开发过程中频繁上下文切换的痛点，让用户能在熟悉的终端界面内直接完成代码理解、生成、调试以及自动化运维任务。无论是查询大型代码库、根据草图生成应用，还是执行复杂的 Git 操作，gemini-cli 都能通过自然语言指令高效处理。\n\n它特别适合广大软件工程师、DevOps 人员及技术研究人员使用。其核心亮点包括支持高达 100 万 token 的超长上下文窗口，具备出色的逻辑推理能力；内置 Google 搜索、文件操作及 Shell 命令执行等实用工具；更独特的是，它支持 MCP（模型上下文协议），允许用户灵活扩展自定义集成，连接如图像生成等外部能力。此外，个人谷歌账号即可享受免费的额度支持，且项目基于 Apache 2.0 
协议完全开源，是提升终端工作效率的理想助手。",100752,"2026-04-10T01:20:03",[52,13,15,14],"插件",{"id":54,"name":55,"github_repo":56,"description_zh":57,"stars":58,"difficulty_score":32,"last_commit_at":59,"category_tags":60,"status":17},4721,"markitdown","microsoft\u002Fmarkitdown","MarkItDown 是一款由微软 AutoGen 团队打造的轻量级 Python 工具，专为将各类文件高效转换为 Markdown 格式而设计。它支持 PDF、Word、Excel、PPT、图片（含 OCR）、音频（含语音转录）、HTML 乃至 YouTube 链接等多种格式的解析，能够精准提取文档中的标题、列表、表格和链接等关键结构信息。\n\n在人工智能应用日益普及的今天，大语言模型（LLM）虽擅长处理文本，却难以直接读取复杂的二进制办公文档。MarkItDown 恰好解决了这一痛点，它将非结构化或半结构化的文件转化为模型“原生理解”且 Token 效率极高的 Markdown 格式，成为连接本地文件与 AI 分析 pipeline 的理想桥梁。此外，它还提供了 MCP（模型上下文协议）服务器，可无缝集成到 Claude Desktop 等 LLM 应用中。\n\n这款工具特别适合开发者、数据科学家及 AI 研究人员使用，尤其是那些需要构建文档检索增强生成（RAG）系统、进行批量文本分析或希望让 AI 助手直接“阅读”本地文件的用户。虽然生成的内容也具备一定可读性，但其核心优势在于为机器",93400,"2026-04-06T19:52:38",[52,14],{"id":62,"github_repo":63,"name":64,"description_en":65,"description_zh":66,"ai_summary_zh":66,"readme_en":67,"readme_zh":68,"quickstart_zh":69,"use_case_zh":70,"hero_image_url":71,"owner_login":72,"owner_name":73,"owner_avatar_url":74,"owner_bio":75,"owner_company":76,"owner_location":76,"owner_email":76,"owner_twitter":76,"owner_website":77,"owner_url":78,"languages":79,"stars":111,"forks":112,"last_commit_at":113,"license":114,"difficulty_score":115,"env_os":116,"env_gpu":117,"env_ram":118,"env_deps":119,"category_tags":124,"github_topics":76,"view_count":32,"oss_zip_url":76,"oss_zip_packed_at":76,"status":17,"created_at":126,"updated_at":127,"faqs":128,"releases":158},7188,"facebookresearch\u002Fpytorch3d","pytorch3d","PyTorch3D is FAIR's library of reusable components for deep learning with 3D data","PyTorch3D 是 Facebook AI Research（FAIR）开源的一个专为 3D 深度学习打造的工具库。它旨在解决传统 3D 数据处理难以直接融入现代深度学习流程的痛点，让研究人员能够像处理图像或文本一样，高效地对三维模型进行训练和推理。\n\n这款工具特别适合从事计算机视觉、图形学研究的开发者与科研人员使用。无论是需要构建复杂的 3D 重建模型，还是探索神经隐式表示（如 Implicitron 框架），PyTorch3D 都能提供强有力的支持。其核心亮点在于提供了一套完全可微分（differentiable）的组件，包括高效的三角网格数据结构、投影变换、图卷积操作以及一个可微分的网格渲染器。\n\n这意味着所有操作均基于 PyTorch 张量实现，不仅天然支持 GPU 
加速以大幅提升计算效率，还能轻松处理包含不同形状数据的迷你批次（minibatches）。通过将 3D 几何操作无缝集成到深度学习的反向传播过程中，PyTorch3D 极大地简化了从 Mesh R-CNN 到新视角合成等前沿项目的开发难度，是连接 3D 几何与深度神经网络的重要桥梁。","\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Ffacebookresearch_pytorch3d_readme_23de2fd6d0c3.png\" width=\"900\"\u002F>\n\n[![CircleCI](https:\u002F\u002Fcircleci.com\u002Fgh\u002Ffacebookresearch\u002Fpytorch3d.svg?style=svg)](https:\u002F\u002Fcircleci.com\u002Fgh\u002Ffacebookresearch\u002Fpytorch3d)\n[![Anaconda-Server Badge](https:\u002F\u002Fanaconda.org\u002Fpytorch3d\u002Fpytorch3d\u002Fbadges\u002Fversion.svg)](https:\u002F\u002Fanaconda.org\u002Fpytorch3d\u002Fpytorch3d)\n\n# Introduction\n\nPyTorch3D provides efficient, reusable components for 3D Computer Vision research with [PyTorch](https:\u002F\u002Fpytorch.org).\n\nKey features include:\n\n- Data structure for storing and manipulating triangle meshes\n- Efficient operations on triangle meshes (projective transformations, graph convolution, sampling, loss functions)\n- A differentiable mesh renderer\n- Implicitron, see [its README](projects\u002Fimplicitron_trainer), a framework for new-view synthesis via implicit representations. 
([blog post](https:\u002F\u002Fai.facebook.com\u002Fblog\u002Fimplicitron-a-new-modular-extensible-framework-for-neural-implicit-representations-in-pytorch3d\u002F))\n\nPyTorch3D is designed to integrate smoothly with deep learning methods for predicting and manipulating 3D data.\nFor this reason, all operators in PyTorch3D:\n\n- Are implemented using PyTorch tensors\n- Can handle minibatches of heterogeneous data\n- Can be differentiated\n- Can utilize GPUs for acceleration\n\nWithin FAIR, PyTorch3D has been used to power research projects such as [Mesh R-CNN](https:\u002F\u002Farxiv.org\u002Fabs\u002F1906.02739).\n\nSee our [blog post](https:\u002F\u002Fai.facebook.com\u002Fblog\u002F-introducing-pytorch3d-an-open-source-library-for-3d-deep-learning\u002F) to see more demos and learn about PyTorch3D.\n\n## Installation\n\nFor detailed instructions refer to [INSTALL.md](INSTALL.md).\n\n## License\n\nPyTorch3D is released under the [BSD License](LICENSE).\n\n## Tutorials\n\nGet started with PyTorch3D by trying one of the tutorial notebooks.\n\n|\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Ffacebookresearch_pytorch3d_readme_a6554a1eed49.gif\" width=\"310\"\u002F>|\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Ffacebookresearch_pytorch3d_readme_7bf685565d13.gif\" width=\"310\"\u002F>|\n|:-----------------------------------------------------------------------------------------------------------:|:--------------------------------------------------:|\n| [Deform a sphere mesh to dolphin](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fpytorch3d\u002Fblob\u002Fmain\u002Fdocs\u002Ftutorials\u002Fdeform_source_mesh_to_target_mesh.ipynb)| [Bundle adjustment](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fpytorch3d\u002Fblob\u002Fmain\u002Fdocs\u002Ftutorials\u002Fbundle_adjustment.ipynb) |\n\n| \u003Cimg 
src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Ffacebookresearch_pytorch3d_readme_36da7942860e.gif\" width=\"310\"\u002F> | \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Ffacebookresearch_pytorch3d_readme_70909cf211d5.gif\" width=\"310\" height=\"310\"\u002F>\n|:------------------------------------------------------------:|:--------------------------------------------------:|\n| [Render textured meshes](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fpytorch3d\u002Fblob\u002Fmain\u002Fdocs\u002Ftutorials\u002Frender_textured_meshes.ipynb)| [Camera position optimization](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fpytorch3d\u002Fblob\u002Fmain\u002Fdocs\u002Ftutorials\u002Fcamera_position_optimization_with_differentiable_rendering.ipynb)|\n\n| \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Ffacebookresearch_pytorch3d_readme_71a521087d0f.png\" width=\"310\"\u002F> | \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Ffacebookresearch_pytorch3d_readme_e70c85454e55.gif\" width=\"310\" height=\"310\"\u002F>\n|:------------------------------------------------------------:|:--------------------------------------------------:|\n| [Render textured pointclouds](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fpytorch3d\u002Fblob\u002Fmain\u002Fdocs\u002Ftutorials\u002Frender_colored_points.ipynb)| [Fit a mesh with texture](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fpytorch3d\u002Fblob\u002Fmain\u002Fdocs\u002Ftutorials\u002Ffit_textured_mesh.ipynb)|\n\n| \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Ffacebookresearch_pytorch3d_readme_cb9325943d65.png\" width=\"310\"\u002F> | \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Ffacebookresearch_pytorch3d_readme_3212ced291cb.png\" width=\"310\" 
height=\"310\"\u002F>\n|:------------------------------------------------------------:|:--------------------------------------------------:|\n| [Render DensePose data](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fpytorch3d\u002Fblob\u002Fmain\u002Fdocs\u002Ftutorials\u002Frender_densepose.ipynb)| [Load & Render ShapeNet data](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fpytorch3d\u002Fblob\u002Fmain\u002Fdocs\u002Ftutorials\u002Fdataloaders_ShapeNetCore_R2N2.ipynb)|\n\n| \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Ffacebookresearch_pytorch3d_readme_fbc57f64c8a1.gif\" width=\"310\"\u002F> | \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Ffacebookresearch_pytorch3d_readme_5eaec6c91cca.gif\" width=\"310\" height=\"310\"\u002F>\n|:------------------------------------------------------------:|:--------------------------------------------------:|\n| [Fit Textured Volume](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fpytorch3d\u002Fblob\u002Fmain\u002Fdocs\u002Ftutorials\u002Ffit_textured_volume.ipynb)| [Fit A Simple Neural Radiance Field](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fpytorch3d\u002Fblob\u002Fmain\u002Fdocs\u002Ftutorials\u002Ffit_simple_neural_radiance_field.ipynb)|\n\n| \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Ffacebookresearch_pytorch3d_readme_fbc57f64c8a1.gif\" width=\"310\"\u002F> | \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Ffacebookresearch_pytorch3d_readme_ebee152c8916.gif\" width=\"310\" height=\"310\"\u002F>\n|:------------------------------------------------------------:|:--------------------------------------------------:|\n| [Fit Textured Volume in Implicitron](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fpytorch3d\u002Fblob\u002Fmain\u002Fdocs\u002Ftutorials\u002Fimplicitron_volumes.ipynb)| [Implicitron Config 
System](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fpytorch3d\u002Fblob\u002Fmain\u002Fdocs\u002Ftutorials\u002Fimplicitron_config_system.ipynb)|\n\n\n\n\n\n## Documentation\n\nLearn more about the API by reading the PyTorch3D [documentation](https:\u002F\u002Fpytorch3d.readthedocs.org\u002F).\n\nWe also have deep dive notes on several API components:\n\n- [Heterogeneous Batching](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fpytorch3d\u002Ftree\u002Fmain\u002Fdocs\u002Fnotes\u002Fbatching.md)\n- [Mesh IO](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fpytorch3d\u002Ftree\u002Fmain\u002Fdocs\u002Fnotes\u002Fmeshes_io.md)\n- [Differentiable Rendering](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fpytorch3d\u002Ftree\u002Fmain\u002Fdocs\u002Fnotes\u002Frenderer_getting_started.md)\n\n### Overview Video\n\nWe have created a short (~14 min) video tutorial providing an overview of the PyTorch3D codebase including several code examples. Click on the image below to watch the video on YouTube:\n\n\u003Ca href=\"http:\u002F\u002Fwww.youtube.com\u002Fwatch?v=Pph1r-x9nyY\">\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Ffacebookresearch_pytorch3d_readme_e52e715b4510.jpg\" height=\"225\" >\u003C\u002Fa>\n\n## Development\n\nWe welcome new contributions to PyTorch3D and we will be actively maintaining this library! Please refer to [CONTRIBUTING.md](.\u002F.github\u002FCONTRIBUTING.md) for full instructions on how to run the code, tests and linter, and submit your pull requests.\n\n## Development and Compatibility\n\n- `main` branch: actively developed, without any guarantee; anything can be broken at any time\n  - REMARK: this includes nightly builds which are built from `main`\n  - HINT: the commit history can help locate regressions or changes\n- backward-compatibility between releases: no guarantee. Best efforts to communicate breaking changes and facilitate migration of code or data (incl. 
models).\n\n## Contributors\n\nPyTorch3D is written and maintained by the Facebook AI Research Computer Vision Team.\n\nIn alphabetical order:\n\n* Amitav Baruah\n* Steve Branson\n* Krzysztof Chalupka\n* Jiali Duan\n* Luya Gao\n* Georgia Gkioxari\n* Taylor Gordon\n* Justin Johnson\n* Patrick Labatut\n* Christoph Lassner\n* Wan-Yen Lo\n* David Novotny\n* Nikhila Ravi\n* Jeremy Reizenstein\n* Dave Schnizlein\n* Roman Shapovalov\n* Olivia Wiles\n\n## Citation\n\nIf you find PyTorch3D useful in your research, please cite our tech report:\n\n```bibtex\n@article{ravi2020pytorch3d,\n    author = {Nikhila Ravi and Jeremy Reizenstein and David Novotny and Taylor Gordon\n                  and Wan-Yen Lo and Justin Johnson and Georgia Gkioxari},\n    title = {Accelerating 3D Deep Learning with PyTorch3D},\n    journal = {arXiv:2007.08501},\n    year = {2020},\n}\n```\n\nIf you are using the pulsar backend for sphere-rendering (the `PulsarPointRenderer` or `pytorch3d.renderer.points.pulsar.Renderer`), please cite the tech report:\n\n```bibtex\n@article{lassner2020pulsar,\n    author = {Christoph Lassner and Michael Zollh\\\"ofer},\n    title = {Pulsar: Efficient Sphere-based Neural Rendering},\n    journal = {arXiv:2004.07484},\n    year = {2020},\n}\n```\n\n## News\n\nPlease see below for a timeline of the codebase updates in reverse chronological order. We are sharing updates on the releases as well as research projects which are built with PyTorch3D. 
The changelogs for the releases are available under [`Releases`](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fpytorch3d\u002Freleases),  and the builds can be installed using `conda` as per the instructions in [INSTALL.md](INSTALL.md).\n\n**[Oct 31st 2023]:**   PyTorch3D [v0.7.5](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fpytorch3d\u002Freleases\u002Ftag\u002Fv0.7.5) released.\n\n**[May 10th 2023]:**   PyTorch3D [v0.7.4](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fpytorch3d\u002Freleases\u002Ftag\u002Fv0.7.4) released.\n\n**[Apr 5th 2023]:**   PyTorch3D [v0.7.3](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fpytorch3d\u002Freleases\u002Ftag\u002Fv0.7.3) released.\n\n**[Dec 19th 2022]:**   PyTorch3D [v0.7.2](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fpytorch3d\u002Freleases\u002Ftag\u002Fv0.7.2) released.\n\n**[Oct 23rd 2022]:**   PyTorch3D [v0.7.1](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fpytorch3d\u002Freleases\u002Ftag\u002Fv0.7.1) released.\n\n**[Aug 10th 2022]:**   PyTorch3D [v0.7.0](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fpytorch3d\u002Freleases\u002Ftag\u002Fv0.7.0) released with Implicitron and MeshRasterizerOpenGL.\n\n**[Apr 28th 2022]:**   PyTorch3D [v0.6.2](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fpytorch3d\u002Freleases\u002Ftag\u002Fv0.6.2) released\n\n**[Dec 16th 2021]:**   PyTorch3D [v0.6.1](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fpytorch3d\u002Freleases\u002Ftag\u002Fv0.6.1) released\n\n**[Oct 6th 2021]:**   PyTorch3D [v0.6.0](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fpytorch3d\u002Freleases\u002Ftag\u002Fv0.6.0) released\n\n**[Aug 5th 2021]:**   PyTorch3D [v0.5.0](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fpytorch3d\u002Freleases\u002Ftag\u002Fv0.5.0) released\n\n**[Feb 9th 2021]:** PyTorch3D [v0.4.0](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fpytorch3d\u002Freleases\u002Ftag\u002Fv0.4.0) 
released with support for implicit functions, volume rendering and a [reimplementation of NeRF](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fpytorch3d\u002Ftree\u002Fmain\u002Fprojects\u002Fnerf).\n\n**[November 2nd 2020]:** PyTorch3D [v0.3.0](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fpytorch3d\u002Freleases\u002Ftag\u002Fv0.3.0) released, integrating the pulsar backend.\n\n**[Aug 28th 2020]:**   PyTorch3D [v0.2.5](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fpytorch3d\u002Freleases\u002Ftag\u002Fv0.2.5) released\n\n**[July 17th 2020]:**   PyTorch3D tech report published on ArXiv: https:\u002F\u002Farxiv.org\u002Fabs\u002F2007.08501\n\n**[April 24th 2020]:**   PyTorch3D [v0.2.0](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fpytorch3d\u002Freleases\u002Ftag\u002Fv0.2.0) released\n\n**[March 25th 2020]:**   [SynSin](https:\u002F\u002Farxiv.org\u002Fabs\u002F1912.08804) codebase released using PyTorch3D: https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fsynsin\n\n**[March 8th 2020]:**   PyTorch3D [v0.1.1](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fpytorch3d\u002Freleases\u002Ftag\u002Fv0.1.1) bug fix release\n\n**[Jan 23rd 2020]:**   PyTorch3D [v0.1.0](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fpytorch3d\u002Freleases\u002Ftag\u002Fv0.1.0) released. 
[Mesh R-CNN](https:\u002F\u002Farxiv.org\u002Fabs\u002F1906.02739) codebase released: https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fmeshrcnn\n","\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Ffacebookresearch_pytorch3d_readme_23de2fd6d0c3.png\" width=\"900\"\u002F>\n\n[![CircleCI](https:\u002F\u002Fcircleci.com\u002Fgh\u002Ffacebookresearch\u002Fpytorch3d.svg?style=svg)](https:\u002F\u002Fcircleci.com\u002Fgh\u002Ffacebookresearch\u002Fpytorch3d)\n[![Anaconda-Server Badge](https:\u002F\u002Fanaconda.org\u002Fpytorch3d\u002Fpytorch3d\u002Fbadges\u002Fversion.svg)](https:\u002F\u002Fanaconda.org\u002Fpytorch3d\u002Fpytorch3d)\n\n# 简介\n\nPyTorch3D 为使用 [PyTorch](https:\u002F\u002Fpytorch.org) 进行 3D 计算机视觉研究提供了高效、可复用的组件。\n\n其主要特性包括：\n\n- 用于存储和操作三角网格的数据结构\n- 针对三角网格的高效操作（投影变换、图卷积、采样、损失函数）\n- 可微分的网格渲染器\n- Implicitron，详见 [其 README](projects\u002Fimplicitron_trainer)，一个基于隐式表示的新视角合成框架。([博客文章](https:\u002F\u002Fai.facebook.com\u002Fblog\u002Fimplicitron-a-new-modular-extensible-framework-for-neural-implicit-representations-in-pytorch3d\u002F))\n\nPyTorch3D 旨在与用于预测和操作 3D 数据的深度学习方法无缝集成。因此，PyTorch3D 中的所有算子：\n\n- 均基于 PyTorch 张量实现\n- 能够处理异构数据的小批量\n- 支持自动微分\n- 可利用 GPU 加速\n\n在 FAIR 内部，PyTorch3D 已被用于支持诸如 [Mesh R-CNN](https:\u002F\u002Farxiv.org\u002Fabs\u002F1906.02739) 等研究项目。\n\n请参阅我们的 [博客文章](https:\u002F\u002Fai.facebook.com\u002Fblog\u002F-introducing-pytorch3d-an-open-source-library-for-3d-deep-learning\u002F) ，以获取更多演示并了解 PyTorch3D。\n\n## 安装\n\n有关详细说明，请参阅 [INSTALL.md](INSTALL.md)。\n\n## 许可证\n\nPyTorch3D 采用 [BSD 许可证](LICENSE) 发布。\n\n## 教程\n\n通过尝试以下教程笔记本开始使用 PyTorch3D。\n\n|\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Ffacebookresearch_pytorch3d_readme_a6554a1eed49.gif\" width=\"310\"\u002F>|\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Ffacebookresearch_pytorch3d_readme_7bf685565d13.gif\" 
width=\"310\"\u002F>|\n|:-----------------------------------------------------------------------------------------------------------:|:--------------------------------------------------:|\n| [将球形网格变形为海豚](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fpytorch3d\u002Fblob\u002Fmain\u002Fdocs\u002Ftutorials\u002Fdeform_source_mesh_to_target_mesh.ipynb)| [光束法平差](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fpytorch3d\u002Fblob\u002Fmain\u002Fdocs\u002Ftutorials\u002Fbundle_adjustment.ipynb) |\n\n| \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Ffacebookresearch_pytorch3d_readme_36da7942860e.gif\" width=\"310\"\u002F> | \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Ffacebookresearch_pytorch3d_readme_70909cf211d5.gif\" width=\"310\" height=\"310\"\u002F>\n|:------------------------------------------------------------:|:--------------------------------------------------:|\n| [渲染纹理化网格](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fpytorch3d\u002Fblob\u002Fmain\u002Fdocs\u002Ftutorials\u002Frender_textured_meshes.ipynb)| [相机位置优化](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fpytorch3d\u002Fblob\u002Fmain\u002Fdocs\u002Ftutorials\u002Fcamera_position_optimization_with_differentiable_rendering.ipynb)|\n\n| \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Ffacebookresearch_pytorch3d_readme_71a521087d0f.png\" width=\"310\"\u002F> | \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Ffacebookresearch_pytorch3d_readme_e70c85454e55.gif\" width=\"310\" height=\"310\"\u002F>\n|:------------------------------------------------------------:|:--------------------------------------------------:|\n| [渲染纹理化点云](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fpytorch3d\u002Fblob\u002Fmain\u002Fdocs\u002Ftutorials\u002Frender_colored_points.ipynb)| 
[拟合带纹理的网格](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fpytorch3d\u002Fblob\u002Fmain\u002Fdocs\u002Ftutorials\u002Ffit_textured_mesh.ipynb)|\n\n| \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Ffacebookresearch_pytorch3d_readme_cb9325943d65.png\" width=\"310\"\u002F> | \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Ffacebookresearch_pytorch3d_readme_3212ced291cb.png\" width=\"310\" height=\"310\"\u002F>\n|:------------------------------------------------------------:|:--------------------------------------------------:|\n| [渲染 DensePose 数据](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fpytorch3d\u002Fblob\u002Fmain\u002Fdocs\u002Ftutorials\u002Frender_densepose.ipynb)| [加载并渲染 ShapeNet 数据](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fpytorch3d\u002Fblob\u002Fmain\u002Fdocs\u002Ftutorials\u002Fdataloaders_ShapeNetCore_R2N2.ipynb)|\n\n| \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Ffacebookresearch_pytorch3d_readme_fbc57f64c8a1.gif\" width=\"310\"\u002F> | \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Ffacebookresearch_pytorch3d_readme_5eaec6c91cca.gif\" width=\"310\" height=\"310\"\u002F>\n|:------------------------------------------------------------:|:--------------------------------------------------:|\n| [拟合纹理化体积](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fpytorch3d\u002Fblob\u002Fmain\u002Fdocs\u002Ftutorials\u002Ffit_textured_volume.ipynb)| [拟合一个简单的神经辐射场](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fpytorch3d\u002Fblob\u002Fmain\u002Fdocs\u002Ftutorials\u002Ffit_simple_neural_radiance_field.ipynb)|\n\n| \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Ffacebookresearch_pytorch3d_readme_fbc57f64c8a1.gif\" width=\"310\"\u002F> | \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Ffacebookresearch_pytorch3d_readme_ebee152c8916.gif\" width=\"310\" 
height=\"310\"\u002F>\n|:------------------------------------------------------------:|:--------------------------------------------------:|\n| [在 Implicitron 中拟合纹理化体积](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fpytorch3d\u002Fblob\u002Fmain\u002Fdocs\u002Ftutorials\u002Fimplicitron_volumes.ipynb)| [Implicitron 配置系统](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fpytorch3d\u002Fblob\u002Fmain\u002Fdocs\u002Ftutorials\u002Fimplicitron_config_system.ipynb)|\n\n\n\n\n\n## 文档\n\n通过阅读 PyTorch3D 的 [文档](https:\u002F\u002Fpytorch3d.readthedocs.org\u002F)，了解更多关于 API 的信息。\n\n我们还针对多个 API 组件提供了深入解析笔记：\n\n- [异构批处理](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fpytorch3d\u002Ftree\u002Fmain\u002Fdocs\u002Fnotes\u002Fbatching.md)\n- [网格 I\u002FO](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fpytorch3d\u002Ftree\u002Fmain\u002Fdocs\u002Fnotes\u002Fmeshes_io.md)\n- [可微分渲染](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fpytorch3d\u002Ftree\u002Fmain\u002Fdocs\u002Fnotes\u002Frenderer_getting_started.md)\n\n### 概览视频\n\n我们制作了一段简短的（约 14 分钟）视频教程，概述了 PyTorch3D 的代码库，并包含多个代码示例。点击下方图片即可在 YouTube 上观看该视频：\n\n\u003Ca href=\"http:\u002F\u002Fwww.youtube.com\u002Fwatch?v=Pph1r-x9nyY\">\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Ffacebookresearch_pytorch3d_readme_e52e715b4510.jpg\" height=\"225\" >\u003C\u002Fa>\n\n## 开发\n\n我们欢迎对 PyTorch3D 的新贡献，并将持续积极维护此库！请参阅 [CONTRIBUTING.md](.\u002F.github\u002FCONTRIBUTING.md) ，以获取有关如何运行代码、测试和 linter，以及提交拉取请求的完整说明。\n\n## 开发与兼容性\n\n- `main` 分支：处于积极开发状态，不提供任何保证，随时可能引入破坏性变更。\n  - 注意：这包括基于 `main` 分支构建的夜间版本。\n  - 提示：可以通过提交历史来定位回归问题或相关更改。\n- 版本间的向后兼容性：不保证。我们将尽最大努力提前通知破坏性变更，并协助用户迁移代码或数据（包括模型）。\n\n## 贡献者\n\nPyTorch3D 由 Facebook AI 研究院计算机视觉团队编写和维护。\n\n按字母顺序排列如下：\n\n* Amitav Baruah\n* Steve Branson\n* Krzysztof Chalupka\n* Jiali Duan\n* Luya Gao\n* Georgia Gkioxari\n* Taylor Gordon\n* Justin Johnson\n* Patrick Labatut\n* Christoph Lassner\n* Wan-Yen Lo\n* David Novotny\n* Nikhila Ravi\n* Jeremy 
Reizenstein\n* Dave Schnizlein\n* Roman Shapovalov\n* Olivia Wiles\n\n## 引用\n\n如果您在研究中使用了 PyTorch3D，请引用我们的技术报告：\n\n```bibtex\n@article{ravi2020pytorch3d,\n    author = {Nikhila Ravi and Jeremy Reizenstein and David Novotny and Taylor Gordon\n                  and Wan-Yen Lo and Justin Johnson and Georgia Gkioxari},\n    title = {Accelerating 3D Deep Learning with PyTorch3D},\n    journal = {arXiv:2007.08501},\n    year = {2020},\n}\n```\n\n如果您使用了用于球体渲染的 Pulsar 后端（即 `PulsarPointRenderer` 或 `pytorch3d.renderer.points.pulsar.Renderer`），请同时引用以下技术报告：\n\n```bibtex\n@article{lassner2020pulsar,\n    author = {Christoph Lassner and Michael Zollh\\\"ofer},\n    title = {Pulsar: Efficient Sphere-based Neural Rendering},\n    journal = {arXiv:2004.07484},\n    year = {2020},\n}\n```\n\n## 最新动态\n\n以下是代码库更新的时间线，按时间倒序排列。我们不仅会分享发布版本的更新信息，还会介绍基于 PyTorch3D 构建的研究项目。各版本的变更日志可在 [`Releases`](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fpytorch3d\u002Freleases) 页面查看，安装包则可通过 `conda` 按照 [INSTALL.md](INSTALL.md) 中的说明进行安装。\n\n**[2023年10月31日]:** PyTorch3D 发布 [v0.7.5](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fpytorch3d\u002Freleases\u002Ftag\u002Fv0.7.5)。\n\n**[2023年5月10日]:** PyTorch3D 发布 [v0.7.4](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fpytorch3d\u002Freleases\u002Ftag\u002Fv0.7.4)。\n\n**[2023年4月5日]:** PyTorch3D 发布 [v0.7.3](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fpytorch3d\u002Freleases\u002Ftag\u002Fv0.7.3)。\n\n**[2022年12月19日]:** PyTorch3D 发布 [v0.7.2](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fpytorch3d\u002Freleases\u002Ftag\u002Fv0.7.2)。\n\n**[2022年10月23日]:** PyTorch3D 发布 [v0.7.1](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fpytorch3d\u002Freleases\u002Ftag\u002Fv0.7.1)。\n\n**[2022年8月10日]:** PyTorch3D 发布 [v0.7.0](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fpytorch3d\u002Freleases\u002Ftag\u002Fv0.7.0)，新增 Implicitron 和 MeshRasterizerOpenGL 功能。\n\n**[2022年4月28日]:** PyTorch3D 发布 
[v0.6.2](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fpytorch3d\u002Freleases\u002Ftag\u002Fv0.6.2)。\n\n**[2021年12月16日]:** PyTorch3D 发布 [v0.6.1](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fpytorch3d\u002Freleases\u002Ftag\u002Fv0.6.1)。\n\n**[2021年10月6日]:** PyTorch3D 发布 [v0.6.0](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fpytorch3d\u002Freleases\u002Ftag\u002Fv0.6.0)。\n\n**[2021年8月5日]:** PyTorch3D 发布 [v0.5.0](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fpytorch3d\u002Freleases\u002Ftag\u002Fv0.5.0)。\n\n**[2021年2月9日]:** PyTorch3D 发布 [v0.4.0](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fpytorch3d\u002Freleases\u002Ftag\u002Fv0.4.0)，新增对隐式函数、体积渲染的支持，并包含 [NeRF 的重新实现](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fpytorch3d\u002Ftree\u002Fmain\u002Fprojects\u002Fnerf)。\n\n**[2020年11月2日]:** PyTorch3D 发布 [v0.3.0](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fpytorch3d\u002Freleases\u002Ftag\u002Fv0.3.0)，集成了 Pulsar 后端。\n\n**[2020年8月28日]:** PyTorch3D 发布 [v0.2.5](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fpytorch3d\u002Freleases\u002Ftag\u002Fv0.2.5)。\n\n**[2020年7月17日]:** PyTorch3D 的技术报告在 ArXiv 上发表：https:\u002F\u002Farxiv.org\u002Fabs\u002F2007.08501。\n\n**[2020年4月24日]:** PyTorch3D 发布 [v0.2.0](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fpytorch3d\u002Freleases\u002Ftag\u002Fv0.2.0)。\n\n**[2020年3月25日]:** 使用 PyTorch3D 的 [SynSin](https:\u002F\u002Farxiv.org\u002Fabs\u002F1912.08804) 代码库发布：https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fsynsin。\n\n**[2020年3月8日]:** PyTorch3D 发布 [v0.1.1](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fpytorch3d\u002Freleases\u002Ftag\u002Fv0.1.1)，为修复 bug 的版本。\n\n**[2020年1月23日]:** PyTorch3D 发布 [v0.1.0](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fpytorch3d\u002Freleases\u002Ftag\u002Fv0.1.0)。同时发布了 [Mesh R-CNN](https:\u002F\u002Farxiv.org\u002Fabs\u002F1906.02739) 
的代码库：https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fmeshrcnn。","# PyTorch3D 快速上手指南\n\nPyTorch3D 是 Facebook AI Research (FAIR) 开源的高效、可重用的 3D 计算机视觉组件库。它基于 PyTorch 构建，支持三角网格操作、可微分渲染器以及隐式表示新视图合成（Implicitron），所有算子均支持 GPU 加速和自动求导。\n\n## 环境准备\n\n在开始之前，请确保满足以下系统要求和依赖：\n\n*   **操作系统**: Linux 或 macOS (Windows 支持有限，通常建议使用 WSL2)。\n*   **Python**: 推荐 Python 3.8 - 3.10。\n*   **PyTorch**: 必须预先安装与你的 CUDA 版本匹配的 PyTorch。\n    *   访问 [PyTorch 官网](https:\u002F\u002Fpytorch.org\u002Fget-started\u002Flocally\u002F) 获取安装命令。\n    *   **国内加速**: 推荐使用清华源或阿里源安装 PyTorch。\n      ```bash\n      pip install torch torchvision torchaudio --index-url https:\u002F\u002Fpypi.tuna.tsinghua.edu.cn\u002Fsimple\n      ```\n*   **编译依赖** (若从源码安装): `gcc`, `g++`, `cmake`。\n\n## 安装步骤\n\nPyTorch3D 推荐通过 `conda` 或 `pip` 进行安装。由于包含自定义 CUDA 算子，**强烈建议预编译的二进制包版本与你的 PyTorch\u002FCUDA 版本严格匹配**。\n\n### 方法一：使用 Conda 安装（推荐）\n\n这是最稳定的方式。请访问 [Anaconda PyTorch3D 页面](https:\u002F\u002Fanaconda.org\u002Fpytorch3d\u002Fpytorch3d\u002Ffiles) 查找与你当前 PyTorch 和 CUDA 版本对应的具体版本号。\n\n假设你已安装 PyTorch 1.13 + CUDA 11.7，示例命令如下：\n\n```bash\nconda install -c pytorch3d pytorch3d\n```\n\n*注意：如果默认频道找不到对应版本，可能需要指定具体的 channel 或版本字符串，例如 `conda install -c pytorch3d pytorch3d=0.7.5`。*\n\n### 方法二：使用 Pip 安装（预编译轮文件）\n\n如果你确定有匹配你环境的预编译 wheel 包，可以直接使用 pip。\n\n```bash\npip install pytorch3d\n```\n\n*国内用户若下载缓慢，可尝试指定清华源（需确认源中是否有对应版本的 wheel）：*\n```bash\npip install pytorch3d -i https:\u002F\u002Fpypi.tuna.tsinghua.edu.cn\u002Fsimple\n```\n\n### 方法三：从源码安装（适用于最新特性或无匹配包时）\n\n如果没有预编译包，需要从源码编译。确保已安装 `gcc` 和 `cuda-toolkit`。\n\n```bash\ngit clone https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fpytorch3d.git\ncd pytorch3d\npip install -e .\n```\n\n## 基本使用\n\n以下是一个最简单的示例，展示如何加载一个网格模型并使用可微分渲染器将其渲染为图像。\n\n### 1. 
导入依赖与加载数据\n\n```python\nimport torch\nfrom pytorch3d.io import load_obj\nfrom pytorch3d.structures import Meshes\nfrom pytorch3d.renderer import TexturesUV\nfrom pytorch3d.vis.texture_vis import texturesuv_image_matplotlib\nimport matplotlib.pyplot as plt\n\n# 加载 OBJ 文件 (替换为你的文件路径)\n# verts: 顶点坐标, faces_idx: 面索引 (含 verts_idx\u002Ftextures_idx), aux: 纹理贴图、UV 坐标等辅助信息\nverts, faces_idx, aux = load_obj(\".\u002Fdata\u002Fcow.obj\")\n\n# 从 aux 构建 UV 纹理 (aux 本身不能直接作为 textures 传入 Meshes)\ntextures = None\nif aux.texture_images:\n    # 取第一张纹理贴图, 形状为 (1, H, W, 3)\n    tex_map = list(aux.texture_images.values())[0][None]\n    textures = TexturesUV(\n        maps=tex_map,\n        faces_uvs=[faces_idx.textures_idx],\n        verts_uvs=[aux.verts_uvs],\n    )\n\n# 创建 Meshes 对象，支持批量处理\nmeshes = Meshes(\n    verts=[verts],\n    faces=[faces_idx.verts_idx],\n    textures=textures\n)\n\n# 可视化纹理 (该函数直接绘制到当前 matplotlib 画布上)\nif meshes.textures is not None:\n    texturesuv_image_matplotlib(meshes.textures, subsample=None)\n    plt.show()\n```\n\n### 2. 设置相机与渲染器并渲染\n\n```python\nfrom pytorch3d.renderer import (\n    look_at_view_transform,\n    FoVPerspectiveCameras, \n    PointLights, \n    DirectionalLights, \n    Materials, \n    RasterizationSettings, \n    MeshRenderer, \n    MeshRasterizer, \n    SoftPhongShader,\n)\n\n# 设置相机位置\nR, T = look_at_view_transform(2.0, 30, 30)\ncameras = FoVPerspectiveCameras(device=\"cuda\", R=R, T=T)\n\n# 定义光照\nlights = PointLights(device=\"cuda\", location=[[2.0, 2.0, 2.0]])\n\n# 配置光栅化设置\nraster_settings = RasterizationSettings(\n    image_size=512, \n    blur_radius=0.0, \n    faces_per_pixel=1, \n)\n\n# 构建渲染器\nrenderer = MeshRenderer(\n    rasterizer=MeshRasterizer(\n        cameras=cameras, \n        raster_settings=raster_settings\n    ),\n    shader=SoftPhongShader(\n        device=\"cuda\", \n        cameras=cameras,\n        lights=lights,\n        materials=Materials(device=\"cuda\"),\n    )\n)\n\n# 执行渲染 (将 mesh 移动到 GPU)\nimages = renderer(meshes.to(\"cuda\"))\n\n# 显示结果 (images 形状为 [N, H, W, C])\nplt.imshow(images[0, ..., :3].cpu().numpy())\nplt.axis(\"off\")\nplt.show()\n```\n\n### 3. 
利用可微分特性进行优化\n\n由于渲染过程是可微分的，你可以直接对网格顶点或相机参数进行梯度下降优化。\n\n```python\n# 将顶点克隆为独立的叶子张量并开启梯度计算\nmeshes_verts = meshes.verts_packed().clone().detach().requires_grad_(True)\n\noptimizer = torch.optim.SGD([meshes_verts], lr=0.05)\n\nfor i in range(100):\n    optimizer.zero_grad()\n    \n    # 重新构建带有更新顶点的 Meshes 对象 (复用第 1 步构建的纹理)\n    new_meshes = Meshes(\n        verts=[meshes_verts],\n        faces=[faces_idx.verts_idx],\n        textures=meshes.textures\n    )\n    \n    # 渲染\n    rendered_images = renderer(new_meshes.to(\"cuda\"))\n    \n    # 计算损失 (例如：与目标图像的差异)\n    # loss = ((rendered_images - target_images) ** 2).mean()\n    loss = rendered_images.mean() # 仅作演示\n    \n    loss.backward()\n    optimizer.step()\n    \n    if i % 10 == 0:\n        print(f\"Iteration {i}, Loss: {loss.item()}\")\n```","某电商平台的 3D 视觉团队正致力于开发一个自动化系统，旨在从单张商品照片中重建高精度的 3D 模型，并生成可交互的旋转展示视频。\n\n### 没有 pytorch3d 时\n- **渲染不可导**：传统渲染管线（如 OpenGL）无法融入深度学习训练循环，导致无法通过图像误差直接反向传播优化 3D 几何形状。\n- **批处理困难**：难以高效处理批次中不同顶点数量的异构网格数据，被迫使用低效的 Python 循环逐个处理，严重拖慢训练速度。\n- **算子缺失**：缺乏原生支持 GPU 加速的网格采样、图卷积及投影变换算子，需自行编写复杂的 CUDA 内核，开发周期长且易出错。\n- **调试复杂**：3D 数据结构与 PyTorch 张量不兼容，需要在 CPU 和 GPU 间频繁转换数据格式，不仅占用显存还增加了代码维护难度。\n\n### 使用 pytorch3d 后\n- **端到端优化**：利用其可微分网格渲染器，直接将渲染图像与真实照片的像素差异作为损失函数，实现了从 2D 图像到 3D 形状的端到端自动优化。\n- **高效批处理**：借助专为异构数据设计的打包（Pack\u002FUnpack）机制，轻松实现变长网格数据的并行 GPU 计算，训练吞吐量提升数倍。\n- **开箱即用**：直接调用库内成熟的高效算子进行网格变形和纹理拟合，无需底层开发，将算法验证周期从数周缩短至几天。\n- **无缝集成**：所有操作均基于 PyTorch 张量构建，完美契合现有深度学习工作流，开发者可专注于模型逻辑而非数据转换细节。\n\npytorch3d 通过提供可微分、高效率且原生兼容 PyTorch 的 3D 算子，彻底打通了 2D 视觉感知与 3D 几何重建之间的技术壁垒。","https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Ffacebookresearch_pytorch3d_23de2fd6.png","facebookresearch","Meta 
Research","https:\u002F\u002Foss.gittoolsai.com\u002Favatars\u002Ffacebookresearch_449342bd.png","",null,"https:\u002F\u002Fopensource.fb.com","https:\u002F\u002Fgithub.com\u002Ffacebookresearch",[80,84,88,92,96,100,104,108],{"name":81,"color":82,"percentage":83},"Python","#3572A5",80.8,{"name":85,"color":86,"percentage":87},"C++","#f34b7d",10.2,{"name":89,"color":90,"percentage":91},"Cuda","#3A4E3A",6.3,{"name":93,"color":94,"percentage":95},"C","#555555",0.9,{"name":97,"color":98,"percentage":99},"Shell","#89e051",0.8,{"name":101,"color":102,"percentage":103},"JavaScript","#f1e05a",0.5,{"name":105,"color":106,"percentage":107},"Batchfile","#C1F12E",0.2,{"name":109,"color":110,"percentage":107},"CSS","#663399",9850,1454,"2026-04-13T09:19:44","NOASSERTION",4,"Linux, macOS","需要 GPU 以加速（支持 CUDA），具体型号和显存大小未说明，但需兼容已安装的 PyTorch CUDA 版本","未说明",{"notes":120,"python":118,"dependencies":121},"该工具深度集成 PyTorch，所有算子均基于 PyTorch 张量实现并支持 GPU 加速。README 中未列出具体版本号，详细安装指令（包括特定的 Python、PyTorch 和 CUDA 版本对应关系）请参考项目根目录下的 INSTALL.md 文件。Windows 系统在官方安装指南中通常支持有限或需要额外编译步骤，主要推荐 Linux 和 macOS 环境。",[122,123,64],"torch","fvcore",[14,125],"其他","2026-03-27T02:49:30.150509","2026-04-14T03:11:05.915768",[129,134,139,144,149,154],{"id":130,"question_zh":131,"answer_zh":132,"source_url":133},32258,"在 Windows 上安装 PyTorch3D 后，torch.cuda.is_available() 返回 False 怎么办？","这通常是因为 PyTorch3D 为了适配 PyTorch 的除法运算变更而进行了代码修改。你需要应用特定的提交补丁到本地副本中。具体参考提交记录：https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fpytorch3d\u002Fcommit\u002F5444c53ceed57969ee1a8e1186b3e3010d437c19。应用这些更改后通常可以解决 CUDA 不可用的问题。此外，如果遇到 'Integer division' 错误，请检查代码中是否混用了 '\u002F' 和 '\u002F\u002F'，在较新版本的 PyTorch 中应使用 'true_divide' 或 'floor_divide (\u002F\u002F)'。","https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fpytorch3d\u002Fissues\u002F711",{"id":135,"question_zh":136,"answer_zh":137,"source_url":138},32259,"如何在 Windows 10 + RTX 4090 + CUDA 11.8\u002F12.1 环境下安装 PyTorch3D？","在较新的硬件（如 RTX 4090）和高版本 CUDA 上安装较为困难。建议步骤如下：\n1. 
安装 Visual Studio (2019 或 2022) 并勾选 C++ 构建工具和 SDK。\n2. 创建 Conda 环境：`conda create -n pytorch3d python=3.8`\n3. 安装 PyTorch (注意版本匹配)：`conda install pytorch torchvision torchaudio pytorch-cuda=11.8 -c pytorch -c nvidia`\n4. 安装依赖：`conda install -c fvcore -c iopath -c conda-forge fvcore iopath`\n5. 尝试通过 pip 安装源码：`pip install \"git+https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fpytorch3d.git\"`\n如果编译失败，可以尝试设置环境变量 `DISTUTILS_USE_SDK=1` 和 `PYTORCH3D_NO_NINJA=1`。若仍失败，可能需要手动修改 setup.py 或等待官方支持该架构的预编译包。","https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fpytorch3d\u002Fissues\u002F1509",{"id":140,"question_zh":141,"answer_zh":142,"source_url":143},32260,"在 Windows 上使用 pip 安装 PyTorch3D 时遇到 'setup.py install' 错误或找不到文件怎么办？","这是 Windows 上常见的编译环境问题。解决方法包括：\n1. 确保已安装 'wheel' 包：`pip install wheel`，以避免回退到 setup.py install。\n2. 必须安装 Microsoft C++ Build Tools (对应你的 Visual Studio 版本)。\n3. 如果使用 `pip install \"git+...\"` 失败，可以尝试克隆仓库后本地安装：\n   ```bash\n   git clone https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fpytorch3d.git\n   cd pytorch3d\n   pip install .\n   ```\n4. 确保 Python 版本与 PyTorch 版本兼容（例如 Python 3.8\u002F3.9 配合 PyTorch 1.10+）。\n如果报错提示 \"The system cannot find the file specified\"，通常意味着编译器路径未正确配置或缺少必要的构建工具。","https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fpytorch3d\u002Fissues\u002F1055",{"id":145,"question_zh":146,"answer_zh":147,"source_url":148},32261,"为什么渲染出的图像是左右翻转（镜像）的？","PyTorch3D 的相机坐标系与某些软件（如 MeshLab）或传统 OpenGL 坐标系可能存在差异，导致渲染结果沿 X 轴或 Y 轴翻转。\n解决方案：\n1. 检查相机定义中的 `R` (旋转) 和 `T` (平移) 矩阵。\n2. 可以通过在相机前增加一个翻转矩阵来修正，或者在渲染后对图像进行水平翻转。\n3. 确认加载的 OBJ 模型本身的法线方向和顶点顺序是否符合预期。\n这是一个已知行为，用户通常需要手动调整相机视角或对输出图像进行后处理以匹配其他软件的视图。","https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fpytorch3d\u002Fissues\u002F78",{"id":150,"question_zh":151,"answer_zh":152,"source_url":153},32262,"在使用 TexturesVertex 进行优化时，为什么顶点的梯度 (grad) 为 None？","如果在优化过程中发现 `sphere_verts_rgb.grad_fn` 为 `None`，说明该张量未参与到计算图中或梯度传播被阻断。\n检查点：\n1. 确保创建张量时设置了 `requires_grad=True`（代码中已体现）。\n2. 
确认 `TexturesVertex` 正确接收了该张量，并且后续的渲染操作（如 `renderer(meshes)`）确实使用了这些纹理信息。\n3. 检查是否有操作 detach 了计算图。\n4. 确保损失函数 (Loss) 是基于渲染结果计算的，并且执行了 `loss.backward()`。\n如果问题依旧，可能是特定版本（如 0.4.0）的 Bug，建议升级到最新版本或检查 Meshes 对象的构建方式是否正确传递了梯度需求。","https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fpytorch3d\u002Fissues\u002F763",{"id":155,"question_zh":156,"answer_zh":157,"source_url":133},32263,"遇到 'Integer division of tensors' 运行时错误该如何修复？","错误信息 `RuntimeError: Integer division of tensors using div or \u002F is no longer supported` 表明代码中使用了 `\u002F` 进行整数除法，而新版 PyTorch 要求明确区分真除法和地板除法。\n解决方法：\n1. 将代码中所有用于整数除法的 `\u002F` 替换为 `\u002F\u002F` (floor_divide)。\n2. 或者使用 `torch.true_divide()` 进行真除法。\n3. 如果是使用旧版本的 PyTorch3D (如 0.2.0)，建议直接升级到最新版本，因为维护者已经在后续提交中修复了这些除法兼容性问题。如果不升级，需手动遍历项目代码替换所有相关的除法运算符。",[159,164,169,174,179,184,189,194,199,204,209,214,219,224,229,234,239,244,249,254],{"id":160,"version":161,"summary_zh":162,"released_at":163},247059,"v0.7.9","各种修复和改进。","2025-11-28T12:43:29",{"id":165,"version":166,"summary_zh":167,"released_at":168},247060,"V0.7.8","此版本支持 PyTorch 2.1 至 2.4。\n\n新特性\n* Hausdorff 距离 \u002F 为 Chamfer 距离添加“max”点归约 https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fpytorch3d\u002Fcommit\u002F44702fdb4ba0f80e96bee724766c545d4d93509c\n* 对 AMD GPU 的有限支持。不包括 Pulsar。\n\n改进\n* 允许 matrix_to_quaternion 的 ONNX 导出 https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fpytorch3d\u002Fcommit\u002F7edaee71a9583f8f1eedf91187aa77a0797c38e0\n* 移除对 fvcore 的依赖 https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fpytorch3d\u002Fcommit\u002F4df110b0a94786bc6a312e6f5214ae6990492d88\n\n修复\n* 在部分材质名称未指定但并非全部未指定的情况下，修复 OBJ 材质加载问题 https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fpytorch3d\u002Fcommit\u002F9acdd67b831f2d3d9db249b1c46d42139024cebf\n* 修复：为保持向后兼容性而设置 FrameData.crop_bbox_xywh 
https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fpytorch3d\u002Fcommit\u002Fd0d0e020071c34ffa0953c00f9e85be0672b597e","2024-09-13T14:37:23",{"id":170,"version":171,"summary_zh":172,"released_at":173},247061,"v0.7.7","此版本支持 PyTorch 2.0 至 2.3。\n\n## 新特性\n* 允许对继承自 Transform3d 的类进行索引 https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fpytorch3d\u002Fcommit\u002Fb0462d80799543c6ebec06d156a583f42209e9ff\n* ImplicitronRayBundle.float https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fpytorch3d\u002Fcommit\u002F4ae25bfce7eb42042a34585acc3df81cf4be7d85\n* TexturesUV 中支持多张贴图 https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fpytorch3d\u002Fcommit\u002F38cf0dc1c52138987e6e66295c5a2d192a6914bd\n\n## 改进\n* 优化 list_to_packed https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fpytorch3d\u002Fcommit\u002Fccf22911d4daa74af7fbf70b3373bc0fe46d6d7c\n* 使教程中构建的安装过程更加稳健 https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fpytorch3d\u002Fcommit\u002F31e3488a5199b62880542919498bb24b72a7b901\n\n## 修复\n* 修复上一版本引入的 CUDA Marching Cubes Bug https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fpytorch3d\u002Fcommit\u002F7566530669203769783c94024c25a39e1744e4ed\n* KNN 错误检查 https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fpytorch3d\u002Fcommit\u002F717493cb79f16e67a0d64653bbfd36558683f78b","2024-06-27T11:32:25",{"id":175,"version":176,"summary_zh":177,"released_at":178},247062,"v0.7.6","此版本支持 PyTorch 2.2.0。\n\n## 新特性\n* 现在可以从 PLY 文件中读取 UV 纹理 https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fpytorch3d\u002Fpull\u002F1100\n* Volumes 中的 `align_corners` 开关 https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fpytorch3d\u002Fcommit\u002F94da8841af444312da13a1e2d96924e44a9d0d10\n* 子网格化现在支持 `TexturesUV` 和 `TexturesAtlas` https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fpytorch3d\u002Fcommit\u002F8a27590c5fd6aba4d138660614c7a18832701671 
https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fpytorch3d\u002Fcommit\u002Fe46ab49a34aebe8ca4f3de62b06be2d84e09a2ce\n* `cubify` 中的颜色处理 https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fpytorch3d\u002Fcommit\u002Fae9d8787ce4be8be6ac87eeac6ed82ca02919056\n\n## 改进\n* `PointsRenderer` 的文档更新 https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fpytorch3d\u002Fcommit\u002F3b4f8a4980e889936650e6841c6861ac45ed1117\n* 提升了 `TexturesUVs` 的效率 https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fpytorch3d\u002Fcommit\u002F06cdc313a7996a3363e78b19edaf893f4454ba1c\n* 改进了 so3_log_map \u002F so3_exp_map 的数值稳定性 https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fpytorch3d\u002Fcommit\u002F292acc71a33bf389225ef02af237dd82a8319f59\n* `get_rgbd_point_cloud` 现在可以接受任意数量的通道，而不仅是 RGB https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fpytorch3d\u002Fcommit\u002F799c1cd21beff84e50ac4ab7a480e715780da2de\n\n## 修复\n* 修复了 C++ Marching Cubes 算法中的问题 https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fpytorch3d\u002Fcommit\u002Fc292c71c1adb0712c12cf4fa67a7a84ad9b44e5c\n* 修复了 CUDA Marching Cubes 算法（该算法在 0.7.5 版本中已损坏） https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fpytorch3d\u002Fcommit\u002Ff61368255184694483db6d42a329f6d92b9f597d https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fpytorch3d\u002Fcommit\u002F7606854ff7d87b2b3ae99102d9868de57b88f9df\n* 移除了 `get_rgbd_point_cloud` 中未使用的参数 `mask_points`，并修复了 `get_implicitron_sequence_pointcloud` 中假设该参数被使用的问题 https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fpytorch3d\u002Fcommit\u002Ff4f22092712251d9bf0f01519dbadb1419480ae0\n* 统一了 `matrix_to_quaternion` 的输出格式 https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fpytorch3d\u002Fcommit\u002F3087ab7f62b5d581a133e54849d462f37fdf4c2d\n","2024-02-22T18:30:39",
https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fpytorch3d\u002Fcommit\u002Fccf860f1db38b839db9dcde206b6b5091ac50385 https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fpytorch3d\u002Fcommit\u002F29b8ebd80206d729e80264662519f7ab168f046e https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fpytorch3d\u002Fcommit\u002F5910d81b7bec2e8d328a8d4e1435e1041c9921a7 https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fpytorch3d\u002Fcommit\u002F3d011a9198a041bee93d16f42872b797b48be67b https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fpytorch3d\u002Fcommit\u002F9446d91fae56dd86b1f58d106360aab4491a0b2a \n* Chamfer 距离：单向选项、绝对值选项 https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fpytorch3d\u002Fcommit\u002F5ffeb4d580f5c7043ed1691e49d2d99f0f655bbc\n* 不进行点云降采样的 Chamfer 距离 https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fpytorch3d\u002Fcommit\u002Fd84f274a0822da969668d00e831870fd88327845 \n* VolumeSampler：公开用于内外渲染的 padding_mode https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fpytorch3d\u002Fcommit\u002Fb7f4ba097cebab303471119fdaac97b50a72e7d0 \n\n## Implicitron 改进\n* 渐进式弃用 stats.print 中的 get_str=False 标志。https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fpytorch3d\u002Fcommit\u002Fd08fe6d45a537d6d57d704c37d33ae01359a029a\n* FrameDataBuilder 更具扩展性 https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fpytorch3d\u002Fcommit\u002Fb0462598ac59ba01c9866a3268f9c0da44d62456 \n* SQLDataset 改进 https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fpytorch3d\u002Fcommit\u002Fd2119c285fa00438de168e17f8a453681dbed4d3 https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fpytorch3d\u002Fcommit\u002F09a99f2e6d9164066ca78aaebe4c9a811ab5fbb9 https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fpytorch3d\u002Fcommit\u002Fcd5db076d5c849494b86e9404f2a30ff5aa7f0e8 \n* OverfitModel 的精细和粗糙隐式函数拥有正确的名称 https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fpytorch3d\u002Fcommit\u002F35badc0892275c35818ca39800ec55d9c7342c8f\n* 使用 OpenCV 从 OpenXR 文件中读取深度图 
https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fpytorch3d\u002Fcommit\u002Fd851bc3173550197c588f5887d89677632faafdb \n\n## 小改进\n* matplotlib 兼容性  https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fpytorch3d\u002Fcommit\u002F928efdd640358d3affcf493c4311b85350ea1103 https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fpytorch3d\u002Fcommit\u002F5592d25f68939767c1580241cbf4e7fa6040364e https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fpytorch3d\u002Fcommit\u002F88429853b932471cfdfefd259855d6b8a23aa7c3\n* Implicitron：fg_mask\u002Ffg_probability 类型注解修复 https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fpytorch3d\u002Fcommit\u002F8164ac4081d574e4e3f55a18fc7fb12389c144f1 \n","2023-10-31T20:06:30",{"id":185,"version":186,"summary_zh":187,"released_at":188},247064,"v0.7.4","一个小版本发布。它还提供了适用于 PyTorch 2.0.1 的构建版本。\n\n#### 小幅新增功能\n* 将法线保存到 OBJ 文件：https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fpytorch3d\u002Fcommit\u002F092400f1e79826cbb503b5f309c2d55cd5ca43ef\n* 将 TexturesVertex 数据保存到 GLB 文件：https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fpytorch3d\u002Fcommit\u002F178a7774d4424ca2f0047a8d48aa40c39b67e1d2\n* Implicitron 现在可以读取一种新的数据集格式，该格式将元数据存储在 SQLite 文件中。这需要 sqlalchemy 库的支持。https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fpytorch3d\u002Fcommit\u002F32e1992924929a9b79e880ed6f5bdc74089e8c73\n\n#### 修复内容\n* cameras.md 中示例的错误：https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fpytorch3d\u002Fcommit\u002Ff5a117c74b3c34f1e2b4fb5dfa2e68dec2f42975\n* 使用较新版本 CUB 时的构建修复：https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fpytorch3d\u002Fcommit\u002Fb921efae3e52dcd93e553db3d02378951e894769\n* 当不遵循用户设置 `torch.use_deterministic_algorithms` 时显示的警告：pytorch3d 中部分 CUDA 实现是非确定性的。https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fpytorch3d\u002Fcommit\u002Fef5f620263562657a022dee419abd63534d123e7\n* 在未提供真实深度图的情况下，Implicitron 
的评估仍能正常工作：https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fpytorch3d\u002Fcommit\u002F0e3138eca89336d222d9338687741da824a39443\n* 在 Implicitron 中加载 Co3D 类型数据集中的边界框时，若未请求掩码，则会失败的问题已修复：https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fpytorch3d\u002Fcommit\u002F7aeedd17a4140eef139987e946a7017df7a97433","2023-05-10T15:14:23",{"id":190,"version":191,"summary_zh":192,"released_at":193},247065,"v0.7.3","一个小版本发布。我们现在支持 PyTorch 2.0。\n\n## 微小的新特性\n* join_pointclouds_as_scene https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fpytorch3d\u002Fcommit\u002Fa123815f40d19127ad8dc20ae93483303220046f\n* OverfitModel 用于 NeRF 风格的训练，专门针对单个场景进行过拟合，是 GenericModel 的一种特殊情况 https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fpytorch3d\u002Fcommit\u002F813e941de5084902fb07c5e95dc58314b9bf8a27 \n\n\n## 改进\n* GenericModel 中不再使用相对导入，使得该文件更容易复制到用户的项目中。https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fpytorch3d\u002Fcommit\u002Fd388881f2ca41713d8e20c64c10abfa0c76b73cc\n* 同时是 nn.Module 和可配置组件的 Implicitron 类现在具有更简单的初始化方式，用户无需担心调用 nn.Module 的 `__init__` 方法。https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fpytorch3d\u002Fcommit\u002F9540c29023c2b6bb53e5a26a5e7a9d34ce88e9b1 https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fpytorch3d\u002Fcommit\u002Fc8af1c45ca9f4fdd4e59b49172ca74983ff3147a \n* 修复了 Windows 上的编译问题 https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fpytorch3d\u002Fcommit\u002F3388d3f0aa6bc44fe704fca78d11743a0fcac38c\n* 文档改进：OpenCV 相机 https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fpytorch3d\u002Fcommit\u002F18c38ad6006d046dc1d841d6020c6fbe6ad34a94 \n* 相机对象现在可以被迭代 https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fpytorch3d\u002Fcommit\u002F84851c8312a709a95215d36adc103f7af97d5b5f \n* 修复了 RayBundle 绘图中的问题 https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fpytorch3d\u002Fcommit\u002F9dc28f5dd5c529026cc47dfcc4dbf0d34dbaa67b \n* 修复了 rasterizer.py 
中因重复行导致的错误（#1421）https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fpytorch3d\u002Fcommit\u002Fb95535c8b73299134bb2144a8b36edfffaf2e225 \n* get_rgbd_point_cloud 的文档和修复，包括新的欧几里得选项 https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fpytorch3d\u002Fcommit\u002Fa12612a48fa6e3578294dc4f418d3abce10a7938\n* Implicitron 在 Co3D 风格数据集和评估方面的改进，包括 render_flyaround：https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fpytorch3d\u002Fcommit\u002F1de2d0c8207b0f3b7f6119eeae5562fe57efeea9  https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fpytorch3d\u002Fcommit\u002Fd561f1913ed41e2e2a6b292f1c91fe8b96cbaf61  https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fpytorch3d\u002Fcommit\u002F54eb76d48c936da9848cfa56a6c487f45cb2ecf1  https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fpytorch3d\u002Fcommit\u002F3239594f78632fe207195e4622c00fb1656c4675  https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fpytorch3d\u002Fcommit\u002F11959e0b24933d80e41260dd47e95430a2e3c465  https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fpytorch3d\u002Fcommit\u002Fa7256e4034376f3e802178da5e8498f8ba184888 \n* 对张量就地修改时的版本递增处理 https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fpytorch3d\u002Fcommit\u002F7d8b029aaed5f247e3898f4b51e6a79ca52bfc54 \n* Implicitron 的各个组件现在通过一个新的扩展点 pre_expand 确保注册表在使用前已被预填充。这是为了与 cinder 等延迟加载系统兼容。https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fpytorch3d\u002Fcommit\u002Fc759fc560f84eaff3577afac0083a2a2f07b349f \n* 对 Implicitron 的 json_index_dataset 进行了大规模重构，大量加载逻辑被移至一个新的 FrameDataBuilder 概念中。http","2023-04-05T13:04:31",{"id":195,"version":196,"summary_zh":197,"released_at":198},247066,"v0.7.2","一个小版本发布。Python 3.7 已不再受支持。我们目前支持 PyTorch 1.9.0 到 1.13.0。[编辑：发布后，也增加了针对 1.13.1 的构建。]\n\n### 新特性\n* Marching Cubes 现在有了高效的 CUDA 实现。https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fpytorch3d\u002Fcommit\u002F8b8291830e6ca1f5882700e214f114d5442a04db 。CPU 代码也有改进 
https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fpytorch3d\u002Fcommit\u002F1706eb8216248e54f68cad86f7ea4125c79a3ca4\n\n### 较小的新特性\n* 实验性的 glTF 支持现在允许将网格写入 GLB 文件，包括带有 TextureUV 的纹理。https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fpytorch3d\u002Fcommit\u002Fcc2840eb4408f8fd6ad0531e3a817d73e0a53e03\n* Implicitron 中参数组的指定变得更加灵活 https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fpytorch3d\u002Fcommit\u002F7be49bf46fdce44ec41fc22d29e99d614c1988d6\n* Implicitron 的 Co3D 样式数据集加载器可以加载多个类别 https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fpytorch3d\u002Fcommit\u002Fe4a329814978934b2fda8fb92c2c14ebb42fa0b2\n* Implicitron 即使没有 lpips 库也可以使用。https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fpytorch3d\u002Fcommit\u002F322c8f2f50bb95e9d89c47126e081cb57395d116\n\n### 修复\n* Implicitron 可视化相关修复 https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fpytorch3d\u002Fcommit\u002Fc3a6ab02da3192aa18cda85d66d957a11aacfa10 和 https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fpytorch3d\u002Fcommit\u002F94f321fa3dd776da5edd1efa80e8a094ee5e6b02\n* Implicitron 的 rasterize_mc 相关修复 https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fpytorch3d\u002Fcommit\u002F1706eb8216248e54f68cad86f7ea4125c79a3ca4\n* Implicitron 的体素网格，尤其是其支架结构相关修复 https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fpytorch3d\u002Fcommit\u002Fa1f2ded58a502f0d65c86ff6c86f417689a5c8d4、https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fpytorch3d\u002Fcommit\u002Fbea84a6fcd9bf35950f32868deb9fb5b540c4534 以及 https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fpytorch3d\u002Fcommit\u002Ff711c4bfe9b1e3dcc968c3182bfb563dc0299893\n* raysampling：当 unit_directions=True 时，raybundle 中的 `origins`（下游代码通常不使用）存在错误 https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fpytorch3d\u002Fcommit\u002Feff0aad15acf9125c1dae6820ae1414f783466f0\n\n### 文档\n* 对 README 和 ReadTheDocs 的修复\n","2022-12-19T23:45:33",{"id":200,"version":201,"summary_zh":202,"released_at":203},247067,"v0.7.1","本次发布为 Implicitron 
带来了大量修复和改进。\n\nPyTorch 1.8 已不再支持。我们现在支持的版本范围是 PyTorch 1.9.0 到 1.12.1。\n\n### 新特性\n- 对现有 Python 实现的 Marching Cubes 算法进行了修复，并在数据位于 CPU 上时提供了快速的 C++ 实现：https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fpytorch3d\u002Fcommit\u002F0d8608b9f99ac53d95256e124eaf9126e00adef5 和 https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fpytorch3d\u002Fcommit\u002F850efdf7065c238d4d1294d375278318005fd098。\n- 新增鱼眼相机对象。这涉及到对 API 的更改，以支持此类“非线性”相机，即其投影不是射影变换的相机。特别是，get_projection_transform 方法现在可能会失败：https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fpytorch3d\u002Fcommit\u002Fb0515e14615abe6e154f6dcf671ec8e54f29aaf4、https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fpytorch3d\u002Fcommit\u002Fd19e6243d0294bf83a9f6debc6f55e14becf43b6、https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fpytorch3d\u002Fcommit\u002F84fa966643aa002b0075f3b4a0eddd22d1076a86 以及 https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fpytorch3d\u002Fcommit\u002F8a96770dc28d393ebc2caacd18c7f5dfee8889a0。\n\n### 小型新特性\n- Transform3D 类中的 get_se3_log 函数：https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fpytorch3d\u002Fcommit\u002F9a0f9ae57280d5d38c9e59d0517599ccb834b81b。\n- circle_fitting 模块中的 get_rotation_to_best_fit_xy 函数：https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fpytorch3d\u002Fcommit\u002F74bbd6fd76742466bc28134c9b8dfb99e4a677af。\n- 通过 IO 将网格保存为 OBJ 格式时，现在会包含 TexturesUV 纹理信息：https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fpytorch3d\u002Fcommit\u002F6ae6ff9cf73221ce60617ef4658b4892b986ba9d。\n- 棋盘格网格工具：https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fpytorch3d\u002Fcommit\u002Fce3fce49d7ad1a680d8c9be660164d5f7a0bb976。\n- 相机批次现在可以使用布尔张量进行索引：https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fpytorch3d\u002Fcommit\u002Fb7c826b7863a4fb3c4ce0e10e5ed7400d32ed512。\n- Implicitron：可以直接使用 Configurable 类，无需再先调用 expand_args_fields 或 
get_default_args：https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fpytorch3d\u002Fcommit\u002Fd6a197be3662cdfa57a34e3134fea1bb04eb1614。\n- Implicitron：在最新版本的 PyTorch 中，现在可以使用优化器的更快 `foreach` 版本：https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fpytorch3d\u002Fcommit\u002F209c160a20ce4d87d4ca7a06f2975ba998765087。\n- Implicitron：新增了 psnr、l1 和 lpips 损失函数的 full_image 变体（忽略掩码）。带有掩码的版本则被重命名以明确区分：https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fpytorch3d\u002Fcommit\u002F7b985702bb660110dc70c2b8c6e6ed0a1a6bcd66 和 https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fpytorch3d\u002Fcommit\u002Fd35781f2d79ffe5a895025ec386c47f7d77c085c。\n- Implicitron：为 json 数据集提供者 v2 启用了额外的测试时源视图：https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fpytorch3d\u002Fcommit\u002F2ff2c7c836c2d47bb5b6fab57e7363862de6e423。\n- Implicitron：在轨迹估计中增加了过滤异常输入相机的选项：https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fpytorch3d\u002Fcommit\u002Fd281f8efd1e52172256ecdf21e82c7547f235ef2。\n- Implicitron：提供了用于在 CO3Dv2 数据集上训练选定方法的 YAML 配置文件：https:\u002F\u002F","2022-10-23T16:13:11",{"id":205,"version":206,"summary_zh":207,"released_at":208},247068,"v0.7.0","本次发布引入了 **Implicitron** 和 **MeshRasterizerOpenGL**。我们提供了适用于 PyTorch 1.12.0 的构建版本，但不再支持 1.7.x 版本。\n\n### 重大新特性\n- Implicitron：一个基于神经网络表示建模的新视图合成框架。有关介绍，请参阅其 [README](projects\u002Fimplicitron_trainer)。\n- MeshRasterizerOpenGL：这是对 [Cole 等人](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FICCV2021\u002Fpapers\u002FCole_Differentiable_Surface_Rendering_via_Non-Differentiable_Sampling_ICCV_2021_paper.pdf) 中描述的 MeshRasterizer 的更快替代方案。在处理包含 200 万张面的大网格时，其速度提升可达约 20 倍，且随着网格规模的增大而进一步提高。该光栅化器专为新的逐像素光照模型 SplatterPhongShader 设计。要使用它，您需要安装启用了 gl 扩展的 [pycuda](https:\u002F\u002Fgithub.com\u002Finducer\u002Fpycuda\u002Ftree\u002Fmain\u002Fpycuda)，以及 [pyopengl](https:\u002F\u002Fpypi.org\u002Fproject\u002FPyOpenGL\u002F)。\n\n### 新特性\n- 现在可以选择忽略相机 `transform_points_screen` 中的 XY 轴翻转 
[commit](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fpytorch3d\u002Fcommit\u002F4372001981868657ea2111575dc8704404a25090)\n- `Fragments` 现在是一个带有文档字符串和 `detach()` 方法的数据类 [commit](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fpytorch3d\u002Fcommit\u002Fc21ba144e7b03562e28cb82341eb6b00d442f21c)\n- 用于渲染深度图的 `SoftDepthShader` 和 `HardDepthShader` [commit](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fpytorch3d\u002Fcommit\u002F7e0146ece438f6de98a7ae930a339587311cb410) 和 [commit](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fpytorch3d\u002Fcommit\u002F4ecc9ea89d55b51c6ad66996ff0edd013ded0815)\n- `AmbientLights` 现在可以用于渲染任意数量的通道（不仅限于 RGB）[commit](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fpytorch3d\u002Fcommit\u002F8d10ba52b2c8aa59d6c54a193145492b1c196d4d)\n\n### Bug 修复\n- 修复了加载包含异构面（例如三角形和四边形混合）的 PLY 文件的问题 [commit](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fpytorch3d\u002Fcommit\u002F90d00f1b2be4b2f2c5eec40dd10894b2a449fbdf)\n- `Pointclouds.num_points_per_cloud` 现在始终返回张量 [commit](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fpytorch3d\u002Fcommit\u002Fc6519f29f0512e209906f8265e0d049085670304)\n- 修复了空点云的 Chamfer 距离计算问题 [commit](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fpytorch3d\u002Fcommit\u002Fc6519f29f0512e209906f8265e0d049085670304)\n\n### 针对开发者\n- 我们现在使用 black 22.3 或更高版本。\n- 测试应从 pytorch3d 根目录运行，而非 tests 目录。\n\n### 其他小改进\n- 在混合算法中，将公共功能提取到 `get_background_color` 中 [commit](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fpytorch3d\u002Fcommit\u002Fea5df60d72307378d4c0641519e4e7a3671458dc)\n- 使用 `from None` 抛出翻译后的错误，以便打印更简洁的回溯信息 https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fpytorch3d\u002Fcommit\u002F8e0c82b89a96a1f1bed1ae5b84bd37524d0fe154\n- 修复了 PnP 测试中的问题 [commit](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fpytorch3d\u002Fcommit\u002F379c8b27803ce527387854ea9f7f612170a5ecbb)\n- 修复了一个测试中引用 CPU 张量 CUDA 索引的问题 
[commit](https:\u002F\u002Fgithub.com\u002Ffacebookresear","2022-08-10T11:16:16",{"id":210,"version":211,"summary_zh":212,"released_at":213},247069,"v0.6.2","This release brings new targets and many improvements.\r\n\r\nBuilds for PyTorch 1.10.1, 1.10.2, 1.11.0, but no longer 1.6.0. Builds for Python 3.10 but no longer 3.6.\r\n\r\n## New Features\r\n- MultinomialRaysampler and NDCMultinomialRaysampler replace GridRaysampler and NDCGridRaysampler [commit](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fpytorch3d\u002Fcommit\u002F3eb4233844a8a3c6441e91ebe22a4354da8f5fae) which can sample using the new n_rays input and also bring stratified sampling along rays and direction normalization (also [commit](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fpytorch3d\u002Fcommit\u002F67778caee87b5852250ad4c15d17d7c608e6f1bc)\r\n- Function join_cameras_as_batch [commit](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fpytorch3d\u002Fcommit\u002F39bb2ce06301d8085e8003a7e536aa72d5c969c6)\r\n- Function join_pointclouds_as_batch [commit](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fpytorch3d\u002Fcommit\u002F262c1bfcd4a25e7ab796573644968e4d843b7ffe)\r\n- Camera batches can be indexed [commit](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fpytorch3d\u002Fcommit\u002F28ccdb73280a2b2bc47d1202f922df67b3b2e63e) and [commit](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fpytorch3d\u002Fcommit\u002Fd67662d13ce497e746196f68cfd90b61954fea23)\r\n- Meshes.submesh function to take a set of faces from a mesh. 
[commit](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fpytorch3d\u002Fcommit\u002F22f86072ca1cda37a679fc7937311af47bf8fa3b) and [commit](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fpytorch3d\u002Fcommit\u002F050f650ae84b07fb430617a8db65d257e5df129a)\r\n- L1 support for KNN and chamfer [commit](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fpytorch3d\u002Fcommit\u002F67fff956a26c0abdc4f3ddb495b61c8935972bf1)\r\n\r\n## Bug fixes\r\n- Rasterizer.to broken without cameras [commit](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fpytorch3d\u002Fcommit\u002Fc371a9a6ccec687a298a3c23cd3d4c673f039eed)\r\n- Joining batches of meshes with TexturesAtlas and TexturesUV broke first input meshes’ texture [commit](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fpytorch3d\u002Fcommit\u002F4a1f1760540b791e638c5bcbb974b59583a0ada8)  \r\n- Transform3D.stack was entirely broken. [commit](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fpytorch3d\u002Fcommit\u002Fc8f3d6bc0bc366d44c8fca790c5e433503c7785f) which also added typing.\r\n- The function cameras_from_opencv_projection always created on CPU #1021 [commit](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fpytorch3d\u002Fcommit\u002F45d096e219379bd0b3d14bbe4633b09f818f988d)\r\n- Batching didn’t work for AmbientLights #1043 [commit](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fpytorch3d\u002Fcommit\u002F9e2bc3a17faf7dfffad9b0803f335da328b08a61) \r\n- Pointclouds.subsample failed on windows #1015 [commit](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fpytorch3d\u002Fcommit\u002Fd6a12afbe77f8129b9285b30343b33e9cbf1bdec)\r\n- Fit_textured_meshes tutorial now turns off perspective_correct in the final optimization, to avoid nans. 
[commit](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fpytorch3d\u002Fcommit\u002F7ea0756b05d58938ea25c0cd05f37f35524000d0)\r\n\r\n## Improvements\r\n- Points_normals much faster through use of symeig workaround [commit](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fpytorch3d\u002Fcommit\u002Fc2862ff42706ae98ce4053e7d76959f05fb2c3b3)  \r\n- A warning is now printed to console on rasterizer bin overflow [commit](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fpytorch3d\u002Fcommit\u002F6726500ad39f1f135464b50086dc47953f98a4bb)\r\n\r\n## Small improvements\r\n- [MeshRendererWithFragments](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fpytorch3d\u002Fcommit\u002F57a33b25c1105b8424ca4ec927d19d530f03d3eb) and [matrix_to_axis_angle](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fpytorch3d\u002Fcommit\u002F1cbf80dab6611efa305345fd8047bf6e2e4cbe6a) made more importable\r\n- PointFaceDistance, FacePointDistance and point_mesh_face_distance get min_triangle_area argument [commit](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fpytorch3d\u002Fcommit\u002F471b12681888d37e15bd3fe6ae2b70032f06c026) - default also changed in [commit](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fpytorch3d\u002Fcommit\u002Fee71c7c44786e41049829a5e461ce860d80a80c7)\r\n- Fix for small faces in IoU3D [commit](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fpytorch3d\u002Fcommit\u002Fccfb72cc502cc3fe30e168fcd220791ee8e449a2) \r\n- Lower the epsilon value in the IoU3D [commit](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fpytorch3d\u002Fcommit\u002F3de41223dde5facc3ba5c445e2cb11e6b4410d6d)\r\n- Flexible background color for point compositing [commit](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fpytorch3d\u002Fcommit\u002F59972b121d8c7bfc0e156b5ad5fcd77c11874178) and [commit](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fpytorch3d\u002Fcommit\u002F0e377c6850f96b881680d40b7bce1e0104a10793)\r\n- In 
points_to_volumes, the rescaling of features is now optional. [commit](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fpytorch3d\u002Fcommit\u002F78fd5af1a6c8174fd3f6f4080e218a55f0ba6fce)\r\n- LinearWithRepeat layer clarified and moved inside PyTorch3D from the NeRF project [commit](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fpytorch3d\u002Fcommit\u002F2a1de3b610b8f2e95b3aeda6de36805a0baa0e9d)\r\n- HarmonicEmbedding moved inside pytorch3d from projects\u002Fnerf. [commit](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fpytorch3d\u002Fcommit\u002Ff9a26a22fcd6f7f16eae7dc8fd6e48ecadd7486b) and [commit](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fpytorch3d\u002Fcommit\u002F52c71b88169350bbeda0c18aaa73d6c9eeab5524)\r\n- Invalid default values in Meshes.__init__ removed","2022-04-28T16:15:23",{"id":215,"version":216,"summary_zh":217,"released_at":218},247070,"v0.6.1","This release brings PyTorch 1.10 builds and numerical fixes and improvements.\r\n\r\n## Large fixes\r\n- Raysampling now works with non-square image conventions [commit](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fpytorch3d\u002Fcommit\u002Fcac6cb1b7813a4f09a05e0ade43c63292bb08b79)\r\n- Perspective_correct mesh rasterization calculation is protected against divide-by-zero. This fixes quite a number of bug reports, e.g. #561. [commit](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fpytorch3d\u002Fcommit\u002F29417d1f9b181f907f7e3729791a43554f3bbf56)\r\n\r\n## Breaking changes\r\n- [This commit](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fpytorch3d\u002Fcommit\u002Fbf3bc6f8e36385398c0be1bc03304e07964026b1) makes camera code more consistent in behaving like align_corners=False everywhere, by removing some extra -1’s in the code for screen space conversion. (1) If you create a camera in screen coordinates, i.e. 
setting in_ndc=False, then anything you do with the camera which touches NDC space may be affected, including trying to use renderers. The transform_points_screen function will not be affected. (2) If you call the function “transform_points_screen” on a camera defined in NDC space then results will be different.\r\n\r\n## Bug fixes\r\n- Raysampling now works with cameras defined in screen space. [commit](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fpytorch3d\u002Fcommit\u002Fcff4876131e79b7c2ad0c13fc875292d9dc29f8c). \r\n- Pointclouds.inside_box is now properly reduced rather than returning separate results per axis. [commit](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fpytorch3d\u002Fcommit\u002Fa6508ac3dfaaf59d8bdce176bfbafad94c1d0604)\r\n- PulsarPointsRenderer fixed with non-square cameras [commit](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fpytorch3d\u002Fcommit\u002Fa0247ea6bd1b4e32b61addc28cc368476e917ce2)\r\n- Functions clone and detach on TexturesUV properly propagate align_corners and padding_mode options. [commit](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fpytorch3d\u002Fcommit\u002Fd9f709599b600f4d0739fdbe20f1a0be867e5db9)\r\n- Fixing default arguments of add_points_features_to_volume_densities_features [commit](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fpytorch3d\u002Fcommit\u002F34b1b4ab8bb2dd619e57d230cdaf8b0a35196a85)\r\n- Fix opencv camera conversion for non-square images (affects pulsar) [commit](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fpytorch3d\u002Fcommit\u002F8fa438cbda382602ad64afac5713f4e7e0461f88)\r\n\r\n## Small improvements\r\n- Some matrix conversions are now traceable with torch.jit. 
[commit](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fpytorch3d\u002Fcommit\u002Fbee31c48d3d36a8ea268f9835663c52ff4a476ec)\r\n- Fixes to compiled code to make Windows builds happier [commit](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fpytorch3d\u002Fcommit\u002F1a7442a483ef92be720e88633b5e47e7b1e9e60c) [commit](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fpytorch3d\u002Fcommit\u002F3953de47eefed08466f32db2334b03cd2363b625)\r\n- A new set of tests, test_camera_pixels, illustrates the precise mapping of pixels for cameras. [commit](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fpytorch3d\u002Fcommit\u002F70acb3e415203fb9e4d646d9cfeb449971092e1d)\r\n- Numerical improvements in the IoU calculation [commit](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fpytorch3d\u002Fcommit\u002F2f2466f472c5c431508dc3e45441130313e4df2f) and [commit](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fpytorch3d\u002Fcommit\u002F6dfa32692277735b369a5c7a28aceb47263d451a)\r\n- New option on TexturesUV to choose sampling_mode. 
[commit](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fpytorch3d\u002Fcommit\u002Fd9f709599b600f4d0739fdbe20f1a0be867e5db9)\r\n- The plotly visualization will show face colors from a TexturesAtlas with K=1 [commit](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fpytorch3d\u002Fcommit\u002F7ce18f38cd7e7f61037322d7532cd7891190a540)\r\n\r\n\r\n## Internal\r\n- Benchmarks have been moved to tests\u002Fbenchmarks [commit](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fpytorch3d\u002Fcommit\u002Fa0e2d2e3c3020f5e5899a93e5744fdb26de703fe)\r\n- Spelling [commit](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fpytorch3d\u002Fcommit\u002F7fa333f63240cf92068991f81fd20b4faea5c15d)\r\n- Some more type annotations for Pointclouds [commit](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fpytorch3d\u002Fcommit\u002Fbfeb82efa38f29ed5b9cf8d8986fab744fe559ea)\r\n- Test fixes [commit](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fpytorch3d\u002Fcommit\u002Fb26f4bc33ad19c0ff10990f75e827010b1a15d85)\r\n- Special implementation of eigenvalue calculation for 3x3 matrices which can be faster and more reliable  than native PyTorch. Currently not used [commit](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fpytorch3d\u002Fcommit\u002Fd7d740abe9bada1d3187118b2e8d54b9c119737b)\r\n","2021-12-16T19:13:11",{"id":220,"version":221,"summary_zh":222,"released_at":223},247071,"v0.6.0","This release contains several new optimized operations related to point clouds.\r\n\r\n## New features\r\n- Farthest point sampling [here](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fpytorch3d\u002Fcommit\u002F3b7d78c7a7bbd321fe181cb53f028c46ce78dfe1), [here](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fpytorch3d\u002Fcommit\u002Fd9f7611c4b5e1b7182192e05611fd615728ab29d) and [here](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fpytorch3d\u002Fcommit\u002Fbd04ffaf778074f267250ea5ce2d4a77a20afff5)\r\n- Ball query operation. 
[commit](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fpytorch3d\u002Fcommit\u002F103da63393d6bbb697835ddbfc86b07572ea4d0c)\r\n- Sample_pdf importance sampling operation, implemented with a CUDA kernel. Previously in Python in the NeRF project. [here](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fpytorch3d\u002Fcommit\u002F7d7d00f2883b13a79681a9ccbbe41fc951533d9b) and [here](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fpytorch3d\u002Fcommit\u002F1ea2b7272a23b09987a2dc4cb34bcfd9596301a8)\r\n- Fast accurate calculation of Intersection over union for 3D boxes. See [note](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fpytorch3d\u002Fblob\u002Fv0.6.0\u002Fdocs\u002Fnotes\u002Fiou3d.md). [here](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fpytorch3d\u002Fcommit\u002F2293f1fed096642246d3e97a6b8478fa32c61d5e), [here](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fpytorch3d\u002Fcommit\u002Fff8d4762f43fb19cf426e91f34babec5def4fc89) and [here](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fpytorch3d\u002Fcommit\u002F53266ec9ff02e4bc5e471216eb92a1c867473dcb)\r\n- Subsample method for Pointclouds [commit](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fpytorch3d\u002Fcommit\u002F4281df19cefb640067a49b961587342d9e4d85ba)\r\n- Adding point clouds to volumes is now implemented in C++\u002FCUDA, which means it can always operate inplace [here](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fpytorch3d\u002Fcommit\u002Fee2b2feb9891a26939a688fd3c57d03462d7f773), [here](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fpytorch3d\u002Fcommit\u002F9ad98c87c314877541187724a620c81332339a87) and [here](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fpytorch3d\u002Fcommit\u002F0dfc6e0eb8a252878784dc9ae749d5298c5830b2)   \r\n\r\n## Breaking changes\r\n- [This commit](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fpytorch3d\u002Fcommit\u002F1b8d86a104eab24ac25863c423d084d611f64bae) 
removes `_xy_grid` from the saved state of the `GridRaySampler` module, if the PyTorch version is 1.6.0 or higher. This means that you will get errors if you try to load a model saved with such a module by an older version of PyTorch3D (unless you load the state dict with `strict=False`). Similarly the NeRF project’s `HarmonicEmbedding` no longer stores `_frequencies`.\r\n- PyTorch 1.5.0 or above is now required.\r\n\r\n## Bug fixes\r\n- Fix duplicate argument errors when cameras are specified in certain places [commit](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fpytorch3d\u002Fcommit\u002F804117833e3c02c19a6774f70bac1a780a322228)\r\n- Fix to join_scene for meshes using TexturesUV, which picked the wrong verts_uvs in certain cases [commit](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fpytorch3d\u002Fcommit\u002Fa0d76a7080e263e2244abd67eb8ddf6667194b25)\r\n- Fix to edge cases when creating TexturesAtlas. [commit](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fpytorch3d\u002Fcommit\u002Ffc156b50c0d6147ca00755059fb1ff96133827df)\r\n- Points to volumes fix when the grid_sizes are not specified. [commit](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fpytorch3d\u002Fcommit\u002Fb481cfbd019e1508e90fef39a0eeefc1b2759291)\r\n\r\n## Small improvements\r\n- Making the rasterizer deterministic if there are ties between faces [commit](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fpytorch3d\u002Fcommit\u002F860b742a02e62ec85f48929268349916ca4ce8a5)\r\n- The function so3_log_map is now torchscriptable [commit](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fpytorch3d\u002Fcommit\u002F46f727cb68d4477183160efd411706e637dffbbc)\r\n- The GridRaySampler change means it can be reused at different image sizes. 
[commit](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fpytorch3d\u002Fcommit\u002F1b8d86a104eab24ac25863c423d084d611f64bae)\r\n- More documentation in the renderer, with RasterizationSettings and PointRasterizationSettings being dataclasses [commit](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fpytorch3d\u002Fcommit\u002F9a737da83c8f45c010ce818355bc28aae6cfafcd) and [here](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fpytorch3d\u002Fcommit\u002F4ad8576541009694f33f3db7468e28b9f8879d29)\r\n- Ability to save colors as 8bit (i.e. uint8) when writing data to PLY files [commit](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fpytorch3d\u002Fcommit\u002Fdd76b4101468d61233eff7f240870ab13a8b8662)\r\n\r\n## Internal\r\n- Coarse rasterization code has been reorganized [here](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fpytorch3d\u002Fcommit\u002F62dbf371aef2aeac11802901a771d85116a3717d), [here](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fpytorch3d\u002Fcommit\u002Feed68f457d690c70ccea75598fad60c63504bd0d) and [here](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fpytorch3d\u002Fcommit\u002Fbbc7573261ebfbbd63cc7bb80c071d04e836acbe)\r\n","2021-10-06T13:00:09",{"id":225,"version":226,"summary_zh":227,"released_at":228},247072,"v0.5.0","This release includes several very significant bug fixes and performance improvements as well as some new features for rendering, dataloading and visualization. Please read the breaking changes carefully and update your code accordingly for this new PyTorch3D version. \r\n\r\n## Breaking changes\r\n- [This commit](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fpytorch3d\u002Fcommit\u002F0c32f094afbb5c7206589e4a5516b6836d1d7f2a) changes the cameras objects and will affect you if (a) you use a cameras object with non-square images, (b) you call the transform_points_screen method, (c) you initialise a cameras object in screen space, i.e. you have been specifying image_size. 
See [here](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fpytorch3d\u002Fdiscussions\u002F783) for more details on the changes and how to update your code.\r\n- The functions `random_rotations`, `random_rotation` and `random_quaternions` no longer have a potentially confusing `requires_grad` parameter. | [commit](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fpytorch3d\u002Fcommit\u002Fce60d4b00e1dc975af49c99b7e6ebe0b4c997f8f)\r\n- The call `pytorch3d.loss.mesh_laplacian_smoothing.laplacian_cot(meshes)` should now be `pytorch3d.ops.cot_laplacian(meshes.verts_packed(), meshes.faces_packed())` | [commit](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fpytorch3d\u002Fcommit\u002F07a5a68d5034da9507a7fae1cf0717247ab255ba)\r\n\r\n## New deprecations\r\n- The function `so3_exponential_map` is deprecated in favor of the new function `so3_exp_map` | [commit](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fpytorch3d\u002Fcommit\u002F5284de6e97f1a206526045349beb3b11b8568238)\r\n\r\n## New features\r\n- Cameras can be defined and used regardless of coordinate system conventions. 
They can be defined in NDC or screen space and are converted appropriately to interface with the PyTorch3D renderers according to their conventions | [commit](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fpytorch3d\u002Fcommit\u002F0c32f094afbb5c7206589e4a5516b6836d1d7f2a)\r\n- The standard mesh laplacian calculation has been added and now all three laplacians (standard, cot, norm) live in `pytorch3d.ops.laplacian_matrices`  | [commit](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fpytorch3d\u002Fcommit\u002F07a5a68d5034da9507a7fae1cf0717247ab255ba)\r\n- RayBundles can be viewed in `plotly_vis` | [commit](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fpytorch3d\u002Fcommit\u002F4426a9d12c16751c3afe5a4f5de8db89d58e6811)\r\n- Support for the OFF file format  for loading meshes | [commit](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fpytorch3d\u002Fcommit\u002F0345f860d4d44e0a36b3a366644f6432458ae5cc)\r\n- Experimental support for loading some glTF files | [commit](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fpytorch3d\u002Fcommit\u002Fed6983ea843590c05593fcd4c8de40f5c7bb0970)\r\n- PLY format now supports mesh vertex colors, i.e. 
`TexturesVertex` | [commit](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fpytorch3d\u002Fcommit\u002F6c3fe952d1a65696701bba1b037a1b34ba33e4fc)\r\n- Saving a mesh to an OBJ file can include `TexturesUV` data | [commit](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fpytorch3d\u002Fcommit\u002F542e2e7c07fdeef815312b087acfa58094a7aa1e)\r\n- User can now specify vertex normals explicitly on a Meshes object | [commit]( https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fpytorch3d\u002Fcommit\u002F2bbca5f2a7de1db1e398d0c50ce4242871957965)\r\n- Pointcloud normals and mesh vertex normals can now be [saved to](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fpytorch3d\u002Fcommit\u002Fb314beeda1092337458fca8cd993536463172f8e) and [loaded from](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fpytorch3d\u002Fcommit\u002F6fa66f55341fe1fa8c84c2611a6ec57c1d83b4fb) PLY files\r\n- New `rotate_on_spot` function to relatively rotate a camera position | [commit](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fpytorch3d\u002Fcommit\u002F61e38de034b57f3c703d5049a117764e78f72fe2)\r\n- New `AmbientLights` object for ambient-only lighting | [commit](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fpytorch3d\u002Fcommit\u002F0e85652f07ecca0da06146895921e008e8b839c8)\r\n- Updated the alpha channel in the `hard_rgb_blend` function to return the mask of the pixels which have overlapping mesh faces | [commit](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fpytorch3d\u002Fcommit\u002Fa15c33a3cc8857e282d535e27821e8304a4f146d)\r\n- Camera conversions [from](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fpytorch3d\u002Fcommit\u002F8006842f2a5ab1546a90797a6394f875adce045c) and [to](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fpytorch3d\u002Fcommit\u002Fda9974b4160b6afe2587b473cdd471d4e299b323) OpenCV cameras\r\n- SE3 exponential and logarithm maps | 
[commit](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fpytorch3d\u002Fcommit\u002Fb2ac2655b3b5a95eea49e72a543f06be4c18e688)\r\n- `TensorProperties` classes (e.g. `Pointclouds` and `Cameras`) now have a cuda() function. | [commit](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fpytorch3d\u002Fcommit\u002F0c02ae907edc2db9aee7d5bda1159814ce06ee56)\r\n\r\n## Internal-facing new features\r\n- New linearly extrapolated acos function | [commit](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fpytorch3d\u002Fcommit\u002Fdd45123f202441e7539c4af9b35d07317b786528)\r\n- New custom 3x3 determinant calculation | [commit](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fpytorch3d\u002Fcommit\u002Fbcee361d048f14b3d1fbfa2c3e498d64c06a7612)\r\n- New function Meshes.has_verts_normals | [commit](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fpytorc","2021-08-05T14:51:34",{"id":230,"version":231,"summary_zh":232,"released_at":233},247073,"v0.4.0","# Changelog\r\n\r\nThe key new feature in this release is support for Implicit\u002FVolume Rendering. This includes several methods for sampling camera rays and marching along the rays in order to render their color. We further introduce support for voxel grids. To this end, we implemented a new `Volumes` structure and methods for converting between `Pointclouds` and `Volumes`. The rendering of implicit surfaces as well as voxel grids has been showcased in two new tutorial Jupyter notebooks.\r\n\r\nWe are also introducing a new [projects](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fpytorch3d\u002Ftree\u002Fmaster\u002Fprojects\u002Fnerf) folder with an implementation of NeRF. 
We plan to add more examples of papers which can be implemented using components from PyTorch3D.\r\n\r\n## Key features\r\n- Volumes Datastructure | [commit](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fpytorch3d\u002Fcommit\u002F03ee1dbf8238e22589b6afb7440351e79e34b099)\r\n- Raysamplers: `GridRaysampler, MonteCarloRaysampler, NDCGridRaysampler`; `RayBundle` datastructure | [commit](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fpytorch3d\u002Fcommit\u002Fe6bc960fb565cd3a8bbc26edcd1109be4b0856a2)\r\n- Raymarchers: `AbsorptionOnlyRaymarcher, EmissionAbsorptionRaymarcher` | [commit](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fpytorch3d\u002Fcommit\u002F1af1a36bd61f60b16b29fb5adbc3a5d740de9444)\r\n- Implicit\u002FVolume Renderer | [commit](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fpytorch3d\u002Fcommit\u002Fb466c381da3db7cc462c1df21d44b06a9ac7b191)\r\n- Pointclouds to Volumes conversion | [commit](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fpytorch3d\u002Fcommit\u002Faa9bcaf04c520b5fd0aa8d6d807b8090ea43d61c)\r\n\r\n## Projects\r\n- Reimplementation of NeRF | [commit](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fpytorch3d\u002Fcommit\u002F51de308b808ba28096d72ccd3f7c1019da4dea74)\r\n\r\n## Additional new features\r\n- Taubin Smoothing for Meshes | [commit](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fpytorch3d\u002Fcommit\u002F112959e0871ffd5ac0588ebf24e3d9aa21174ffa)\r\n- Non Square Image Rasterization for Meshes | [commit](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fpytorch3d\u002Fcommit\u002Fd07307a451f3521e4cf522876b67b14b34021809)\r\n- Non Square Image Rasterization for Pointclouds | [commit](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fpytorch3d\u002Fcommit\u002F3d769a66cb184d75126600abeb4ad953cd56cb8d)\r\n- Naive PyTorch version of Marching Cubes | 
[commit](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fpytorch3d\u002Fcommit\u002Febac66daeb4ea226291f6bd6c1516e690657c4c8)\r\n- New Pluggable Data Loading API | [commit](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fpytorch3d\u002Fcommit\u002Fb183dcb6e8494e43d1ea5bf8e3bd0fa0816ca3f5)\r\n  - Mesh formats | [commit](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fpytorch3d\u002Fcommit\u002F89532a876e77c09edf581f3b7a0de39df761d457)\r\n  - Pointcloud formats | [commit](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fpytorch3d\u002Fcommit\u002F95707fba1c8eff99757691468f440f9ce63c8027)\r\n\r\n## New Tutorials\r\n- Fit Textured Volume | [commit](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fpytorch3d\u002Fcommit\u002F01f86ddeb1058e06e74fa95c3975e8c78826c813)\r\n- Fit Neural Radiance Field | [commit](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fpytorch3d\u002Fcommit\u002F9fc661f8b325142323b1925109ccf87ebb5904f2)\r\n\r\n## Small Updates\r\n- Change io functions to use `iopath` | [commit](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fpytorch3d\u002Fcommit\u002F513a6476bc41c7f5a26c77bd585b68026b643cde)\r\n- Read heterogeneous non-list PLY properties as arrays | [commit](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fpytorch3d\u002Fcommit\u002F3b9fbfc08c528d6c7388b570622fe7fc025d7890)\r\n- Update the `MeshRasterizer` to automatically infer the camera type and set the `perspective_correct` setting for correcting barycentric coordinates after rasterization | [commit](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fpytorch3d\u002Fcommit\u002F838b73d3b6eae97b2d536f2b9c5a35de5e112c20)\r\n\r\n## Bug Fixes\r\n- Rasterization of mesh faces partially behind the image plane\r\n  - Full fix which clips meshes at a specified z value prior to rasterization instead of only culling. 
| [commit1](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fpytorch3d\u002Fcommit\u002F23279c5f1da5f6e1c0bb8f305e6636b8c857f3fd), [commit2](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fpytorch3d\u002Fcommit\u002F39f49c22cd99feb5a50cf98287644cc56e2cd39e) \r\n  - Introduced two new rasterization settings (`z_clip_value`, `cull_to_frustum`)  | [commit](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fpytorch3d\u002Fcommit\u002F340662e98e97c5e105cf6570765d7bae3e6228bf) \r\n- Check for verts\u002Ffaces in Meshes to be on the same device | [commit](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fpytorch3d\u002Fcommit\u002F569e5229a9cf3b92e7acaa657efe036935c10f51) \r\n- Fix for index error with Texture Atlas sampling | [commit](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fpytorch3d\u002Fcommit\u002F01759d8ffb4899de1157398e4052f76a13c56527) \r\n\r\n## Builds\r\n- For Linux, instead of uploading wheels to PyPI which will only work with one particular version of PyTorch and CUDA, we now provide a range of built wheels on S3. Refer to [INSTALL.md](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fpytorch3d\u002Fblob\u002Fmaster\u002FINSTALL.md#3-install-wheels-for-linux) for instructions on how to download and install.","2021-02-09T18:11:34",{"id":235,"version":236,"summary_zh":237,"released_at":238},247074,"v0.3.0","# Changelog\r\n\r\nThe key new feature in this release is support for [Pulsar](https:\u002F\u002Farxiv.org\u002Fabs\u002F2004.07484) as an alternative backend for point cloud rendering. Pulsar is a highly tuned backend with better scaling performance than the default backend but with less flexibility: it does not have support for custom rasterizers and compositors. It can be used for scenes with millions of spheres and up to 4K resolution and supports opacity optimization. 
\r\n\r\n*   CUDA\u002FC++ implementation | [commit](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fpytorch3d\u002Fcommit\u002Fb19fe1de2fd39aa40c888a1cc02e45ed91a87851)\r\n*   PyTorch interface | [commit](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fpytorch3d\u002Fcommit\u002F960fd6d8b6f55257dc1d205e8c8f3366202c23b7) \r\n*   Examples and Demos | [commit](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fpytorch3d\u002Fcommit\u002F039e02601d447c127cc3a56c3d16716ef7d97a16)\r\n\r\n## Additional new features\r\n*   Plotly functions for visualization | [commit](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fpytorch3d\u002Fcommit\u002F964893cdcba11c188c42716683fbc39a0d07ff43)\r\n    *   Support for rendering batches of meshes\u002Fcameras\u002Fpointclouds\r\n    *   Support for viewing plotly plots from PyTorch3D Cameras point-of-view\r\n    *   `plot_scene` function \r\n*   Extend `sample_points_from_meshes` to support sampling textures | [commit](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fpytorch3d\u002Fcommit\u002F327bd2b9762c05ec7a6f74c2ec1e46f2a764e326)\r\n*   `corresponding_cameras_alignment` function that estimates a similarity transformation between two sets of cameras | [commit](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fpytorch3d\u002Fcommit\u002F316b77782e2aedea817bd2a4c6a7a787f3b61a1a) \r\n*   Support for variable size point radius in the pointcloud rasterizer | [commit](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fpytorch3d\u002Fcommit\u002Febe2693b11bffe8aa1d4d312ce812fcd5ee8f928) \r\n*   `save_ply` now by default saves to binary instead of ascii. 
An option makes the previous functionality available | [commit](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fpytorch3d\u002Fcommit\u002F197f1d6217a6148b91b75c3a475f247218ccb488) \r\n*   Functions to convert to and from axis-angle representation of rotations | [commit](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fpytorch3d\u002Fcommit\u002Fc93c4dd7b2b68db92997f65e11d7b08acecce891)\r\n*   Visualization functions for the TexturesUV class | [commit](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fpytorch3d\u002Fcommit\u002Faa4cc0adbce5f6277a862f5cedacd1c4555bb66e)\r\n\r\n## New tutorials\r\n*   Rendering DensePose Data with PyTorch3D | [commit1](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fpytorch3d\u002Fcommit\u002F956d3a010cf8f1b0338436cd6697f1ab21099648), [commit2](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fpytorch3d\u002Fcommit\u002Ff34f4073f0287ec961f43dd451d9470baf0db557) \r\n*   Updates to Pointcloud and Mesh rendering tutorials | [commit](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fpytorch3d\u002Fcommit\u002F9a5341bde3400c426385ba60b697ff95f54f89c5)\r\n\r\n## Small updates\r\n*   Fixes to support multigpu rendering | [commit](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fpytorch3d\u002Fcommit\u002F563d441b000ad95c90f70ad77018c410a4626637)\r\n*   Support texture atlas loading for `.obj` files with meshes which only have material properties | [commit](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fpytorch3d\u002Fcommit\u002Ff5383a7e5a79bd9e912af0fb0199c557b9987877) \r\n*   Add support for changing the background color in Pointcloud compositors | [commit](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fpytorch3d\u002Fcommit\u002F872ff8c796e0947469fda76a028439e1dc8d3696) \r\n*   Add support for returning the fragments from the output of the `MeshRenderer` | 
[commit](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fpytorch3d\u002Fcommit\u002Fc41aff23f0d4a3a91226fa73a3b37579f6087844)\r\n*   Support for axis-angle representation of rotations | [commit](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fpytorch3d\u002Fcommit\u002Fc93c4dd7b2b68db92997f65e11d7b08acecce891)\r\n\r\n\r\n## Bug Fixes\r\n*   Fixed corner case in `look_at_view_transform` | [commit](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fpytorch3d\u002Fcommit\u002Feb517dd70b20c8431a1cfc46e47b4c5c78529c5b)\r\n*   Fixed softmax_rgb_blend when the mesh is outside zfar | [commit](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fpytorch3d\u002Fcommit\u002Ff8ea5906c0ae5ef6fb7800e3f0a05ebf56cdd927)\r\n*   Fixed corner case in `matrix_to_quaternion` function | [commit](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fpytorch3d\u002Fcommit\u002F4d52f9fb8b5e53f5c6f98475fa0d005f7845e3b1)\r\n\r\n## Misc\r\nThe introduction of Pulsar adds the [NVIDIA CUB library](https:\u002F\u002Fdocs.nvidia.com\u002Fcuda\u002Fcub\u002Findex.html) as a build-time dependency. 
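The axis-angle conversion functions listed above implement Rodrigues' rotation formula. A minimal pure-Python sketch of that math follows; it is illustrative only, the real `pytorch3d.transforms` functions operate on batched torch tensors, and the helper names here are made up for the example:

```python
import math

def axis_angle_to_matrix(axis, angle):
    """Rotation matrix from an axis and angle (radians) via Rodrigues' formula.

    Hypothetical standalone sketch; not the PyTorch3D API, which is batched
    and tensor-based.
    """
    x, y, z = axis
    norm = math.sqrt(x * x + y * y + z * z)
    x, y, z = x / norm, y / norm, z / norm  # normalize to a unit rotation axis
    c, s = math.cos(angle), math.sin(angle)
    t = 1.0 - c
    # R = I + sin(theta) * K + (1 - cos(theta)) * K^2, where K = skew(axis)
    return [
        [t * x * x + c,     t * x * y - s * z, t * x * z + s * y],
        [t * x * y + s * z, t * y * y + c,     t * y * z - s * x],
        [t * x * z - s * y, t * y * z + s * x, t * z * z + c],
    ]

def rotate(matrix, point):
    """Apply a 3x3 rotation matrix to a 3-vector."""
    return [sum(matrix[i][j] * point[j] for j in range(3)) for i in range(3)]
```

For example, a quarter turn about the z axis maps the x axis onto the y axis.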
","2020-11-11T18:38:34",{"id":240,"version":241,"summary_zh":242,"released_at":243},247075,"v0.2.5","# Changelog\r\n\r\n## New features\r\n- Data loaders for common 3D datasets\r\n  - ShapeNetCore dataloader | [commit](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fpytorch3d\u002Fcommit\u002F358e211cde4412c24675af3d048f2d6d4391df59)\r\n  - R2N2 dataloader | [commit](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fpytorch3d\u002Fcommit\u002F63ba74f1a8d5aa73d36999f5b5a7bf2af0fd8066)\r\n- New texturing API\r\n  - Separate classes: TexturesVertex, TexturesUV, TexturesAtlas | [commit](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fpytorch3d\u002Fcommit\u002Fa3932960b31ff07e942a54e4608eae6ba12bf40a)\r\n  - Existing Textures class is now deprecated and will be removed in the next release.\r\n- Cameras API refactor\r\n  - Renaming and restructuring for consistency across all classes | [commit](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fpytorch3d\u002Fcommit\u002F57a22e7306db24140cb2133aa12a613cbf971c4c)\r\n  - Cameras have been renamed as follows:\r\n    - `OpenGLPerspectiveCameras` -> `FoVPerspectiveCameras`\r\n    - `OpenGLOrthographicCameras` -> `FoVOrthographicCameras`\r\n    - `SfMPerspectiveCameras` -> `PerspectiveCameras`\r\n    - `SfMOrthographicCameras` -> `OrthographicCameras`\r\n  - All cameras now output projected values in NDC with the option to provide params in screen coordinates and convert them to NDC.\r\n  - Refer to the new note [cameras.md](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fpytorch3d\u002Fblob\u002Fmaster\u002Fdocs\u002Fnotes\u002Fcameras.md) for more detailed information.\r\n- Barycentric clipping in CUDA\r\n  - Move barycentric clipping from PyTorch to CUDA for increased efficiency. Now available as a rasterization setting `clip_barycentric_coords`. 
| [commit](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fpytorch3d\u002Fcommit\u002Fcc70950f4064e3feeb55281b829aa55aa4a7e942)\r\n- One new representation for rotations\r\n  - Conversions to and from the Zhou et al. 6D rotation representation | [commit](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fpytorch3d\u002Fcommit\u002F2f3cd987253c5da3741af9ceaec95c1bdbf12583)\r\n- Customizable background color\r\n  - Option added to HardPhongShader, HardGouraudShader, and HardFlatShader | [commit](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fpytorch3d\u002Fcommit\u002F65620e716c184ec162df04df2f3fe08351a957f5)\r\n- Joining several meshes to render a single scene\r\n  - New `join_meshes_as_scene` function which also supports joining textures | [commit1](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fpytorch3d\u002Fcommit\u002Fe053d7c45609ab25e345ac277a89232bfede8e90), [commit2](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fpytorch3d\u002Fcommit\u002F909dc835050f84fc739fe75fd02884f305195afb)\r\n- CUDA op for interpolating face attributes\r\n  - Functionality which was in Python moved to CUDA | [commit](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fpytorch3d\u002Fcommit\u002F26d2cc24c1382047a81dd182f9621a17184e0a95)\r\n- Gather scatter on CPU | [commit](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fpytorch3d\u002Fcommit\u002F7944d24d4872bdb01b821450840049e28d0ce12b)\r\n- C++\u002FCUDA implementations of sigmoid\u002Fsoftmax blending functions | [commit 1](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fpytorch3d\u002Fcommit\u002Fbce396df93e338cc0d35256d650a3b6ab3f8b973)\r\n- C++ implementations for point-to-mesh distance functions | [commit](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fpytorch3d\u002Fcommit\u002F74659aef26db47342d71df99f31b3f63eacd7182)\r\n- `detach` method for `Meshes`, `Pointclouds` and `Textures` | 
[commit](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fpytorch3d\u002Fcommit\u002F7f2f95f2255431267b9aabf90bb2bba7bb2b0880)\r\n- Support for multiple `align_modes` in the Cubify operator | [commit](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fpytorch3d\u002Fcommit\u002Fa61c9376d578525c218c2e0ba7eeedef3d418076)\r\n- Texture maps (i.e. the TexturesUV class) now have customizable `align_corners` and `padding_mode`, and the default has changed to `align_corners=True`. | [commit](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fpytorch3d\u002Fcommit\u002Fe25ccab3d9e4d5c8877627ab7d95dd3f77c9a993)\r\n\r\n## New tutorials:\r\n- Data loading with ShapeNetCore and R2N2 dataloaders | [commit](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fpytorch3d\u002Fcommit\u002F9242e7e65d3c1c841333ece7b7469cd677100e34)\r\n- Fitting a textured mesh from multiview images | [commit](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fpytorch3d\u002Fcommit\u002F9a5341bde3400c426385ba60b697ff95f54f89c5)\r\n\r\n## Small updates\r\n- Compatibility with PyTorch 1.6\r\n- Flag to make sorting optional in KNN | [commit](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fpytorch3d\u002Fcommit\u002F806ca361c0d701e0269070e4d58be55e99d3b70e)\r\n- `update_padded` method on meshes\r\n  - Other optimizations use this | [commit](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fpytorch3d\u002Fcommit\u002F1fb97f9c84598d3cc880713912df1d9a44fd1abe)\r\n\r\n## Bug Fixes:\r\n- Temporary fix for rendering from inside a surface, which resulted in uninterpretable images | [commit](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fpytorch3d\u002Fcommit\u002F9aaba0483c08c9a40c26db0858f8c0688f33e850)\r\n  - This fix culls all faces which are partially behind the image plane\r\n- Scaling Pointclouds by a scalar now works | [commit](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fpytorch3d\u002Fcommit\u002F2f0fd60186ef3ae8c6691841be315609f196fe42)\r\n- SO3 log map fix 
for singularity at PI | [commit](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fpytorch3d\u002Fcommit\u002F34a0df0630c964d4e4be225b1dc0ccf166743e75)\r\n- Join mismatched texture maps on CUDA | [c","2020-08-28T15:17:47",{"id":245,"version":246,"summary_zh":247,"released_at":248},247076,"v0.2.0","# Changelog\r\n\r\n## New Features\r\n\r\n- Pointclouds datastructure | https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fpytorch3d\u002Fcommit\u002F767d68a3af16b22f70dde29537334a51f98c75d6\r\n  - Support for batches of pointclouds with additional helper methods, e.g. `get_bounding_boxes`\r\n- Pointclouds rendering | https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fpytorch3d\u002Fcommit\u002F53599770dd642f0bc59a6554bd7c46e4a8044334\r\n  - C++, CUDA and Python implementations of pointcloud rendering with heterogeneous pointclouds. Exposed via Rasterizer, Renderer and Compositor PyTorch classes, with three types of compositors available. A Jupyter notebook tutorial has also been added.\r\n- Umeyama - estimate a rigid motion between two sets of corresponding points. 
| https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fpytorch3d\u002Fcommit\u002Fe5b1d6d3a3614749d2ddcae9b42e50869c8266a2\r\n- Weighted Umeyama | https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fpytorch3d\u002Fcommit\u002Fe37085d9990bf94c7fcee6a9df5dc8db35391608\r\n- Iterative Closest Point algorithm | https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fpytorch3d\u002Fcommit\u002F8abbe22ffbc306b7be0e2e09ba1ce167430f2c7f\r\n- Efficient PnP algorithm to fit 2D-to-3D correspondences under the perspective assumption | https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fpytorch3d\u002Fcommit\u002F04d8bf6a435da136331cdb33be3f5cf85a678e2c\r\n- Pointcloud normal estimation | https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fpytorch3d\u002Fcommit\u002F365945b1fd097a4eed391843408c31e86f8592cb\r\n  - Supports batches of pointclouds\r\n  - Also available on the `Pointclouds` datastructure as a method `estimate_normals`\r\n- Point-to-mesh distances | https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fpytorch3d\u002Fcommit\u002F487d4d6607a60a8be7135b334137985f40953a92\r\n  - Python, C++ and CUDA implementations of point-to-face and face-to-point algorithms, including array versions\r\n- K nearest neighbors\r\n  - Multiple KNN methods for pointclouds based on the input parameters | https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fpytorch3d\u002Fcommit\u002F870290df345873492d88f70b942893cd3b5deb87\r\n  - KNN with heterogeneous pointclouds (i.e. different numbers of points) | https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fpytorch3d\u002Fcommit\u002F01b5f7b228378b6d12eaa78b86fb5215d6b4eec7\r\n  - Autograd wrapper for KNN, including a backward pass for distances | https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fpytorch3d\u002Fcommit\u002Fb2b0c5a4426bb907517452a6fe643eda39dd73c8\r\n    - `knn_points` function to return the neighbor points and distances\r\n    - `knn_gather` function to allow gathering of additional features based 
on the knn indices\r\n\r\n## Updates to existing Operators\r\n- Chamfer loss support for heterogeneous pointclouds | https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fpytorch3d\u002Fcommit\u002F790eb8c4020994a673033cc9b1fe92caad6281ac\r\n  - Support for chamfer loss between two batches of pointclouds, where each pointcloud in the batch can have a different number of points\r\n- Vert align for pointcloud objects | https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fpytorch3d\u002Fcommit\u002Ff25af9695999133667ee76734c3139465b445a6a\r\n- Cameras\r\n  - `unproject_points` function to convert from screen to world coordinates | https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fpytorch3d\u002Fcommit\u002F365945b1fd097a4eed391843408c31e86f8592cb\r\n  - `look_at_transform` update to enable specifying the `eye` (camera center) directly | https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fpytorch3d\u002Fcommit\u002F2480723adf1ce8a5cfca5c190f5fba7a48549f75\r\n- `Transforms3D` update to allow initialization of the `Transforms3D` class with a custom matrix | https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fpytorch3d\u002Fcommit\u002F90dc7a08568072375fe9f7ecc3201618fba86287\r\n- Mesh rendering update to enable back-face culling in rasterization - available as a `cull_backfaces` boolean setting in `raster_settings` | https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fpytorch3d\u002Fcommit\u002F4bf30593ffc5488a03c75a16dd118013f5d0eb5e\r\n- Mesh loading - update to `load_obj` to support loading textures as per-face textures (following the approach from SoftRasterizer). 
There is a new boolean argument called `create_texture_atlas` for the `load_obj` function to enable this | https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fpytorch3d\u002Fcommit\u002Fc9267ab7af0f72217a1ee1a0b37941a5c8fdb325\r\n- `join_meshes_as_batch` method to create a batch of meshes from a list of Meshes objects | https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fpytorch3d\u002Fcommit\u002Fb64fe5136002b94caaaa97720b19fc8b3ba8da3c\r\n\r\n## Bug Fixes\r\n- `nan` check in sample points from meshes | https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fpytorch3d\u002Fcommit\u002F6c48ff6ad9005cfc03704c77531a4a25d1c8d843\r\n- Texturing function reshape\u002Fview fixes | https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fpytorch3d\u002Fcommit\u002F5d3cc3569a03f44647857243efd0d80588a6785b\r\n- `SfMPerspectiveCameras` projection matrix - add the principal points after the perspective divide | https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fpytorch3d\u002Fcommit\u002F677b0bd5aecb3069e3e8d4de41656f786cfa4312\r\n- Enable CUDA kernels to be launched on any GPU (not just the default) | https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fpytorch3d\u002Fcommit\u002Fc3d636dc8c68cb2fd36b32d8dcc4bad27e2a551b\r\n\r\n## Breaking changes\r\n- The nearest neighbors implementation has been entirely removed - use KNN instead. | https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fpyt","2020-04-27T02:22:26",{"id":250,"version":251,"summary_zh":252,"released_at":253},247077,"v0.1.1","# Changelog\r\n\r\n## New features:\r\n\r\n- `load_textures` boolean parameter for `load_obj` function | https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fpytorch3d\u002Fcommit\u002F659ad34389d47bf13f37a340a18a5784ad9b2695\r\n- Single function to load mesh and texture data from OBJs. 
| https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fpytorch3d\u002Fcommit\u002F8fe65d5f560611f4f60b8ea549b6bf7e75e3ae7f\r\n- CPU implementations for\r\n   - Nearest neighbor | https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fpytorch3d\u002Fcommit\u002Fe290f87ca949c077803d2da02c48173607ce70e4\r\n   - Face areas and normals | https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fpytorch3d\u002Fcommit\u002Fe290f87ca949c077803d2da02c48173607ce70e4\r\n   - Packed to padded tensor | https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fpytorch3d\u002Fcommit\u002Fe290f87ca949c077803d2da02c48173607ce70e4\r\n- Mesh rendering\r\n   - Flat shading for meshes | https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fpytorch3d\u002Fcommit\u002Fff19c642cb22a5b6a073d611e593baa836e5ebe4\r\n   - Barycentric clipping before z-buffer and texture interpolation | https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fpytorch3d\u002Fcommit\u002Fff19c642cb22a5b6a073d611e593baa836e5ebe4\r\n- Support for building on Windows | https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fpytorch3d\u002Fcommit\u002F9e21659fc5a26c427cbb186d55c84e6f8e8bc21d\r\n\r\n## Bug fixes:\r\n\r\n- Several documentation, installation and correctness fixes, including:\r\n  - `expsumlog` in soft blending replaced with `torch.prod`, which makes the soft blending backward pass stable | https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fpytorch3d\u002Fcommit\u002Fba11c0b59cb53d50dc5f50e0e0148b3f2e43f39f\r\n- Fix matrix convention for rotations in Transforms3D | https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fpytorch3d\u002Fcommit\u002F8301163d2466799ba932eae9ff7170110711bff6\r\n- Rendering flipping - the y-axis flip in the blending functions has been removed and the rasterization step has been updated to ensure the directions of the axes are correct in the rendered image. The documentation for the renderer has been updated with the convention for world and camera coordinates. 
| https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fpytorch3d\u002Fcommit\u002F15c72be444fab0d1ba6879d097399b12f6a2a8b0\r\n\r\n## Breaking changes\r\n\r\n- The spelling of \u002FGourad\u002F has been fixed to \u002FGouraud\u002F. | https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fpytorch3d\u002Fcommit\u002F9ca5489107ce887875cb4c24059c8810119ebe11\r\n- Shaders have been renamed to make clear if they are Hard or Soft (probabilistic) | https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fpytorch3d\u002Fcommit\u002Ff0dc65110aa9687e83712950f6d47b280761f078\r\n","2020-03-08T19:55:49",{"id":255,"version":256,"summary_zh":76,"released_at":257},247078,"v0.1.0","2020-03-05T12:52:36"]