[{"data":1,"prerenderedAt":-1},["ShallowReactive",2],{"similar-maximeraafat--BlenderNeRF":3,"tool-maximeraafat--BlenderNeRF":64},[4,17,27,35,43,56],{"id":5,"name":6,"github_repo":7,"description_zh":8,"stars":9,"difficulty_score":10,"last_commit_at":11,"category_tags":12,"status":16},3808,"stable-diffusion-webui","AUTOMATIC1111\u002Fstable-diffusion-webui","stable-diffusion-webui 是一个基于 Gradio 构建的网页版操作界面，旨在让用户能够轻松地在本地运行和使用强大的 Stable Diffusion 图像生成模型。它解决了原始模型依赖命令行、操作门槛高且功能分散的痛点，将复杂的 AI 绘图流程整合进一个直观易用的图形化平台。\n\n无论是希望快速上手的普通创作者、需要精细控制画面细节的设计师，还是想要深入探索模型潜力的开发者与研究人员，都能从中获益。其核心亮点在于极高的功能丰富度：不仅支持文生图、图生图、局部重绘（Inpainting）和外绘（Outpainting）等基础模式，还独创了注意力机制调整、提示词矩阵、负向提示词以及“高清修复”等高级功能。此外，它内置了 GFPGAN 和 CodeFormer 等人脸修复工具，支持多种神经网络放大算法，并允许用户通过插件系统无限扩展能力。即使是显存有限的设备，stable-diffusion-webui 也提供了相应的优化选项，让高质量的 AI 艺术创作变得触手可及。",162132,3,"2026-04-05T11:01:52",[13,14,15],"开发框架","图像","Agent","ready",{"id":18,"name":19,"github_repo":20,"description_zh":21,"stars":22,"difficulty_score":23,"last_commit_at":24,"category_tags":25,"status":16},1381,"everything-claude-code","affaan-m\u002Feverything-claude-code","everything-claude-code 是一套专为 AI 编程助手（如 Claude Code、Codex、Cursor 等）打造的高性能优化系统。它不仅仅是一组配置文件，而是一个经过长期实战打磨的完整框架，旨在解决 AI 代理在实际开发中面临的效率低下、记忆丢失、安全隐患及缺乏持续学习能力等核心痛点。\n\n通过引入技能模块化、直觉增强、记忆持久化机制以及内置的安全扫描功能，everything-claude-code 能显著提升 AI 在复杂任务中的表现，帮助开发者构建更稳定、更智能的生产级 AI 代理。其独特的“研究优先”开发理念和针对 Token 消耗的优化策略，使得模型响应更快、成本更低，同时有效防御潜在的攻击向量。\n\n这套工具特别适合软件开发者、AI 研究人员以及希望深度定制 AI 工作流的技术团队使用。无论您是在构建大型代码库，还是需要 AI 协助进行安全审计与自动化测试，everything-claude-code 都能提供强大的底层支持。作为一个曾荣获 Anthropic 黑客大奖的开源项目，它融合了多语言支持与丰富的实战钩子（hooks），让 AI 真正成长为懂上",138956,2,"2026-04-05T11:33:21",[13,15,26],"语言模型",{"id":28,"name":29,"github_repo":30,"description_zh":31,"stars":32,"difficulty_score":23,"last_commit_at":33,"category_tags":34,"status":16},2271,"ComfyUI","Comfy-Org\u002FComfyUI","ComfyUI 是一款功能强大且高度模块化的视觉 AI 引擎，专为设计和执行复杂的 Stable Diffusion 图像生成流程而打造。它摒弃了传统的代码编写模式，采用直观的节点式流程图界面，让用户通过连接不同的功能模块即可构建个性化的生成管线。\n\n这一设计巧妙解决了高级 AI 绘图工作流配置复杂、灵活性不足的痛点。用户无需具备编程背景，也能自由组合模型、调整参数并实时预览效果，轻松实现从基础文生图到多步骤高清修复等各类复杂任务。ComfyUI 拥有极佳的兼容性，不仅支持 Windows、macOS 和 Linux 全平台，还广泛适配 NVIDIA、AMD、Intel 及苹果 Silicon 等多种硬件架构，并率先支持 SDXL、Flux、SD3 等前沿模型。\n\n无论是希望深入探索算法潜力的研究人员和开发者，还是追求极致创作自由度的设计师与资深 AI 绘画爱好者，ComfyUI 都能提供强大的支持。其独特的模块化架构允许社区不断扩展新功能，使其成为当前最灵活、生态最丰富的开源扩散模型工具之一，帮助用户将创意高效转化为现实。",107662,"2026-04-03T11:11:01",[13,14,15],{"id":36,"name":37,"github_repo":38,"description_zh":39,"stars":40,"difficulty_score":23,"last_commit_at":41,"category_tags":42,"status":16},3704,"NextChat","ChatGPTNextWeb\u002FNextChat","NextChat 是一款轻量且极速的 AI 助手，旨在为用户提供流畅、跨平台的大模型交互体验。它完美解决了用户在多设备间切换时难以保持对话连续性，以及面对众多 AI 模型不知如何统一管理的痛点。无论是日常办公、学习辅助还是创意激发，NextChat 都能让用户随时随地通过网页、iOS、Android、Windows、MacOS 或 Linux 端无缝接入智能服务。\n\n这款工具非常适合普通用户、学生、职场人士以及需要私有化部署的企业团队使用。对于开发者而言，它也提供了便捷的自托管方案，支持一键部署到 Vercel 或 Zeabur 等平台。\n\nNextChat 的核心亮点在于其广泛的模型兼容性，原生支持 Claude、DeepSeek、GPT-4 及 Gemini Pro 等主流大模型，让用户在一个界面即可自由切换不同 AI 能力。此外，它还率先支持 MCP（Model Context Protocol）协议，增强了上下文处理能力。针对企业用户，NextChat 提供专业版解决方案，具备品牌定制、细粒度权限控制、内部知识库整合及安全审计等功能，满足公司对数据隐私和个性化管理的高标准要求。",87618,"2026-04-05T07:20:52",[13,26],{"id":44,"name":45,"github_repo":46,"description_zh":47,"stars":48,"difficulty_score":23,"last_commit_at":49,"category_tags":50,"status":16},2268,"ML-For-Beginners","microsoft\u002FML-For-Beginners","ML-For-Beginners 是由微软推出的一套系统化机器学习入门课程，旨在帮助零基础用户轻松掌握经典机器学习知识。这套课程将学习路径规划为 12 周，包含 26 节精炼课程和 52 
道配套测验，内容涵盖从基础概念到实际应用的完整流程，有效解决了初学者面对庞大知识体系时无从下手、缺乏结构化指导的痛点。\n\n无论是希望转型的开发者、需要补充算法背景的研究人员，还是对人工智能充满好奇的普通爱好者，都能从中受益。课程不仅提供了清晰的理论讲解，还强调动手实践，让用户在循序渐进中建立扎实的技能基础。其独特的亮点在于强大的多语言支持，通过自动化机制提供了包括简体中文在内的 50 多种语言版本，极大地降低了全球不同背景用户的学习门槛。此外，项目采用开源协作模式，社区活跃且内容持续更新，确保学习者能获取前沿且准确的技术资讯。如果你正寻找一条清晰、友好且专业的机器学习入门之路，ML-For-Beginners 将是理想的起点。",84991,"2026-04-05T10:45:23",[14,51,52,53,15,54,26,13,55],"数据工具","视频","插件","其他","音频",{"id":57,"name":58,"github_repo":59,"description_zh":60,"stars":61,"difficulty_score":10,"last_commit_at":62,"category_tags":63,"status":16},3128,"ragflow","infiniflow\u002Fragflow","RAGFlow 是一款领先的开源检索增强生成（RAG）引擎，旨在为大语言模型构建更精准、可靠的上下文层。它巧妙地将前沿的 RAG 技术与智能体（Agent）能力相结合，不仅支持从各类文档中高效提取知识，还能让模型基于这些知识进行逻辑推理和任务执行。\n\n在大模型应用中，幻觉问题和知识滞后是常见痛点。RAGFlow 通过深度解析复杂文档结构（如表格、图表及混合排版），显著提升了信息检索的准确度，从而有效减少模型“胡编乱造”的现象，确保回答既有据可依又具备时效性。其内置的智能体机制更进一步，使系统不仅能回答问题，还能自主规划步骤解决复杂问题。\n\n这款工具特别适合开发者、企业技术团队以及 AI 研究人员使用。无论是希望快速搭建私有知识库问答系统，还是致力于探索大模型在垂直领域落地的创新者，都能从中受益。RAGFlow 提供了可视化的工作流编排界面和灵活的 API 接口，既降低了非算法背景用户的上手门槛，也满足了专业开发者对系统深度定制的需求。作为基于 Apache 2.0 协议开源的项目，它正成为连接通用大模型与行业专有知识之间的重要桥梁。",77062,"2026-04-04T04:44:48",[15,14,13,26,54],{"id":65,"github_repo":66,"name":67,"description_en":68,"description_zh":69,"ai_summary_zh":69,"readme_en":70,"readme_zh":71,"quickstart_zh":72,"use_case_zh":73,"hero_image_url":74,"owner_login":75,"owner_name":76,"owner_avatar_url":77,"owner_bio":78,"owner_company":79,"owner_location":80,"owner_email":80,"owner_twitter":80,"owner_website":81,"owner_url":82,"languages":83,"stars":88,"forks":89,"last_commit_at":90,"license":91,"difficulty_score":23,"env_os":92,"env_gpu":93,"env_ram":92,"env_deps":94,"category_tags":98,"github_topics":99,"view_count":23,"oss_zip_url":80,"oss_zip_packed_at":80,"status":16,"created_at":110,"updated_at":111,"faqs":112,"releases":142},2071,"maximeraafat\u002FBlenderNeRF","BlenderNeRF","Easy NeRF synthetic dataset creation within Blender","BlenderNeRF 是一款专为 Blender 设计的插件，旨在让用户通过单次点击即可轻松生成用于训练神经辐射场（NeRF）和高斯泼溅（Gaussian Splatting）的合成数据集。它不仅能自动渲染图像，还能同步导出精确的相机参数文件，省去了繁琐的手动配置过程。\n\n在三维重建领域，获取带有准确相机位姿的训练数据通常门槛较高且耗时费力。BlenderNeRF 完美解决了这一痛点，将原本需要复杂代码提取的参数工作简化为直观的一键操作，大幅降低了数据准备的时间成本。无论是视觉特效艺术家、科研人员，还是计算机图形学爱好者，都能利用它在完全可控的三维场景中快速构建高质量的训练与测试数据。\n\n该工具的独特亮点在于提供了灵活的数据生成策略：既支持从相机动画中按间隔抽取帧序列（SOF 模式），适用于静态场景的大范围动画插值；也允许用户分别定义独立的训练与测试相机路径（TTC 模式），以便更严谨地评估模型效果。最终生成的数据会自动打包为包含图像和 JSON 配置文件的标准格式，可直接对接主流 NeRF 算法进行模型训练与新视角合成，是连接三维创作与前沿 AI 渲染技术的高效桥梁。","# BlenderNeRF\n\nWhether a VFX artist, a research fellow or a graphics amateur, **BlenderNeRF** is the easiest and fastest way to create synthetic NeRF and Gaussian Splatting datasets within Blender. Obtain renders and camera parameters with a single click, while having full user control over the 3D scene and camera!\n\n\u003Cp align='center'>\n  \u003Ca href=\"https:\u002F\u002Fyoutu.be\u002FC8YuDoU11cg\">\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fmaximeraafat_BlenderNeRF_readme_fd27b08d0f7d.jpg\" width='90%'>\u003C\u002Fa>\n  \u003Cbr>\n  Are you ready to NeRF? Start with a single click in Blender by checking out \u003Ca href=\"https:\u002F\u002Fyoutu.be\u002FC8YuDoU11cg\">this tutorial\u003C\u002Fa>!\n\u003C\u002Fp>\n\n\n## Neural Radiance Fields\n\n**Neural Radiance Fields ([NeRF](https:\u002F\u002Fwww.matthewtancik.com\u002Fnerf))** aim at representing a 3D scene as a view dependent volumetric object from 2D images only, alongside their respective camera information. 
The 3D scene is reverse engineered from the training images with the help of a simple neural network.\n\n[**Gaussian Splatting**](https:\u002F\u002Frepo-sam.inria.fr\u002Ffungraph\u002F3d-gaussian-splatting\u002F) is a follow-up method for rendering radiance fields in a point-based manner. This representation is highly optimised for GPU rendering and leverages more traditional graphics techniques to achieve high frame rates.\n\nI recommend watching [this YouTube video](https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=YX5AoaWrowY) by **Corridor Crew** for a thrilling investigation of a few use cases and future potential applications of NeRFs.\n\n\n## Motivation\n\nRendering is an expensive computation. Photorealistic scenes can take seconds to hours to render depending on the scene complexity, hardware and available software resources.\n\nNeRFs and Gaussian splats can speed up this process, but require camera information typically extracted via cumbersome code. This plugin enables anyone to get renders and cameras with a single click in Blender.\n\n\u003Cp align='center'>\n  \u003Cimg src='https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fmaximeraafat_BlenderNeRF_readme_3d6e69af896c.gif' width='90%'\u002F>\n\u003C\u002Fp>\n\n\n## Installation\n\n1. Download this repository as a **ZIP** file\n2. Open Blender (4.0.0 or above)\n3. In Blender, head to **Edit > Preferences > Add-ons**, and select **Install From Disk** under the drop-down icon\n4. Select the downloaded **ZIP** file\n\nAlthough release versions of **BlenderNeRF** are available for download, they are primarily intended for tracking major code changes and for citation purposes. I recommend downloading the current repository directly, since minor changes or bug fixes might not be included in a release right away.\n\n\n## Setting\n\n**BlenderNeRF** consists of 3 methods discussed in the sub-sections below. Each method is capable of creating **training** data and **testing** data for NeRF, in the form of training images with a `transforms_train.json` file, and a `transforms_test.json` file respectively, both JSON files holding the corresponding camera information. The data is archived into a single **ZIP** file containing training and testing folders. Training data can then be used by a NeRF model to learn the 3D scene representation. Once trained, the model may be evaluated (or tested) on the testing data (camera information only) to obtain novel renders.\n\n
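As a quick sanity check of an exported dataset, the short sketch below (not part of the add-on itself) unzips the archive and prints the registered training cameras. It assumes the default dataset name `dataset`, the standard NeRF \u002F Instant NGP key names (`camera_angle_x`, `frames`, `file_path`, `transform_matrix`), and that the camera files sit at the archive root; adjust the member paths if your ZIP nests a folder.\n\n```python\nimport json\nimport zipfile\n\n# Hypothetical archive produced with the default `Name` property\nwith zipfile.ZipFile('dataset.zip') as archive:\n    with archive.open('transforms_train.json') as file:\n        transforms = json.load(file)\n\n# Horizontal field of view shared by all frames (NeRF convention)\nprint('camera_angle_x:', transforms['camera_angle_x'])\n\n# Each frame stores an image path and a 4x4 camera-to-world matrix\nfor frame in transforms['frames']:\n    print(frame['file_path'], '-', len(frame['transform_matrix']), 'matrix rows')\n```\n\n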
### Subset of Frames\n\n**Subset of Frames (SOF)** renders every **N** frames from a camera animation, and utilises the rendered subset of frames as NeRF training data. The registered testing data spans all frames of the same camera animation, including training frames. When trained, the NeRF model can render the full camera animation and is consequently well suited for interpolating or rendering large animations of static scenes.\n\n\u003Cp align='center'>\n  \u003Cimg src='https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fmaximeraafat_BlenderNeRF_readme_f8bf3117fa5b.gif' width='90%'\u002F>\n\u003C\u002Fp>\n\n### Train and Test Cameras\n\n**Train and Test Cameras (TTC)** registers training and testing data from two separate user defined cameras. A NeRF model can then be fitted with the data extracted from the training camera, and be evaluated on the testing data.\n\n\u003Cp align='center'>\n  \u003Cimg src='https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fmaximeraafat_BlenderNeRF_readme_abee65f1fa10.gif' width='90%'\u002F>\n\u003C\u002Fp>\n\n### Camera on Sphere\n\n**Camera on Sphere (COS)** renders training frames by uniformly sampling random camera views on a user controlled sphere, directed at its center. Testing data is extracted from a selected camera.\n\n\u003Cp align='center'>\n  \u003Cimg src='https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fmaximeraafat_BlenderNeRF_readme_e89c71366102.gif' width='90%'\u002F>\n\u003C\u002Fp>\n\n\n## How to use the Methods\n\nThe add-on properties panel is available under `3D View > N panel > BlenderNeRF` (the **N panel** opens in the 3D viewport when pressing `N`). All 3 methods (**SOF**, **TTC** and **COS**) share a common tab called `BlenderNeRF shared UI` with the below listed controllable properties.\n\n* `Train` (activated by default) : whether to register training data (renderings + camera information)\n* `Test` (activated by default) : whether to register testing data (camera information only)\n* `AABB` (by default set to **4**) : aabb scale parameter as described in Instant NGP (more details below)\n* `Render Frames` (activated by default) : whether to render the frames\n* `Save Log File` (deactivated by default) : whether to save a log file containing reproducibility information on the **BlenderNeRF** run\n* `File Format` (**NGP** by default) : whether to export the camera files in the Instant NGP or default NeRF file format convention\n* `Gaussian Points` (deactivated by default) : whether to export a `points3d.ply` file for Gaussian Splatting\n* `Gaussian Test Camera Poses` (**Dummy** by default) : whether to export a dummy test camera file or the full set of test camera poses (only with `Gaussian Points`)\n* `Save Path` (empty by default) : path to the output directory in which the dataset will be created\n\nIf the `Gaussian Points` property is active, **BlenderNeRF** will create an additional `points3d.ply` file from all visible meshes (at render time), where each vertex will be used as an initialization point. Vertex colors will be stored if available, and set to black otherwise.\n\n
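According to the v6 release notes, this file is written with Blender's built-in `bpy.ops.wm.ply_export` operator (Blender 4.0+). As a rough illustration of the idea (a sketch, not the add-on's actual code; keyword arguments other than `filepath` and `export_selected_objects` may vary between Blender versions), the export boils down to:\n\n```python\nimport bpy\n\n# Select every mesh object that is visible in the render\nbpy.ops.object.select_all(action='DESELECT')\nfor obj in bpy.context.scene.objects:\n    if obj.type == 'MESH' and not obj.hide_render:\n        obj.select_set(True)\n\n# Export the selected vertices as the 3DGS initialization point cloud;\n# vertex colors are kept when the meshes provide them\nbpy.ops.wm.ply_export(filepath='points3d.ply', export_selected_objects=True)\n```\n\n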
The [**Gaussian Splatting**](https:\u002F\u002Fgithub.com\u002Fgraphdeco-inria\u002Fgaussian-splatting) repository natively supports **NeRF** datasets, but requires both train and test data. The `Dummy` option for the `Gaussian Test Camera Poses` property creates an empty test camera pose file, in case no test images are needed. The `Full` option exports the default test camera poses, but will require separately rendering a `test` folder containing all the test renders.\n\n`AABB` is restricted to be an integer power of 2; it defines the side length of the bounding box volume in which NeRF will trace rays. The property was introduced with **NVIDIA's [Instant NGP](https:\u002F\u002Fgithub.com\u002FNVlabs\u002Finstant-ngp)** version of NeRF.\n\nThe `File Format` property can either be **NGP** or **NeRF**. The **NGP** file format convention is the same as the **NeRF** one, with a few additional parameters which can be accessed by Instant NGP.\n\nNotice that each method has its distinctive `Name` property (by default set to `dataset`) corresponding to the dataset name and created **ZIP** filename for the respective method. Please note that unsupported characters, such as spaces, `#` or `\u002F`, will automatically be replaced by an underscore.\n\nThe properties specific to each method are described below (the `Name` property is left out, since already discussed above).\n\n### How to SOF\n\n* `Frame Step` (by default set to **3**) : **N** (as defined in the [Setting](#setting) section) = frequency at which the training frames are registered\n* `Camera` (always set to the active camera) : camera used for registering training and testing data\n* `PLAY SOF` : play the **Subset of Frames** method operator to export NeRF data\n\n### How to TTC\n\n* `Frames` (by default set to **100**) : number of training frames used from the training camera\n* `Train Cam` (empty by default) : camera used for registering the training data\n* `Test Cam` (empty by default) : camera used for registering the testing data\n* `PLAY TTC` : play the **Train and Test Cameras** method operator to export NeRF data\n\nThe number of training frames set by `Frames` will be captured with the `Train Cam` object, starting from the scene start frame.\n\n### How to COS\n\n* `Camera` (always set to the active camera) : camera used for registering the testing data\n* `Location` (by default set to **0 m** vector) : center position of the training sphere from which camera views are sampled\n* `Rotation` (by default set to **0°** vector) : rotation of the training sphere from which camera views are sampled\n* `Scale` (by default set to **1** vector) : scale vector of the training sphere in xyz axes\n* `Radius` (by default set to **4 m**) : radius scalar of the training sphere\n* `Lens` (by default set to **50 mm**) : focal length of the training camera\n* `Seed` (by default set to **0**) : seed to initialize the random camera view sampling procedure\n* `Frames` (by default set to **100**) : number of training frames sampled and rendered from the training sphere\n* `Sphere` (deactivated by default) : whether to show the training sphere from which random views will be sampled\n* `Camera` (deactivated by default) : whether to show the camera used for registering the training data\n* `Upper Views` (deactivated by default) : whether to sample views from the upper training hemisphere only (rotation variant)\n* `Outwards` (deactivated by default) : whether to point the camera outwards from the training sphere\n* `PLAY COS` : play the **Camera on Sphere** method operator to export NeRF data\n\nNote that activating the `Sphere` and `Camera` properties creates a `BlenderNeRF Sphere` empty object and a `BlenderNeRF Camera` camera object respectively. Please do not create any objects with these names manually, since this might break the add-on functionalities.\n\nThe number of training frames set by `Frames` will be captured with the `BlenderNeRF Camera` object, starting from the scene start frame. Finally, keep in mind that the training camera is locked in place and cannot manually be moved.\n\n
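For intuition, the uniform view sampling that **COS** performs can be pictured with a few lines of standalone Python: drawing the height coordinate and the azimuth independently yields uniformly distributed positions on a sphere. The sketch below mirrors the `Frames`, `Radius`, `Seed` and `Upper Views` properties, but it is an illustration only, not the add-on's actual implementation.\n\n```python\nimport math\nimport random\n\ndef sample_views_on_sphere(frames=100, radius=4.0, seed=0, upper_views=False):\n    '''Uniformly sample `frames` camera positions on a sphere around the origin.'''\n    rng = random.Random(seed)\n    views = []\n    for _ in range(frames):\n        z = rng.uniform(0.0 if upper_views else -1.0, 1.0)  # cosine of the polar angle\n        phi = rng.uniform(0.0, 2.0 * math.pi)               # azimuth\n        r = math.sqrt(1.0 - z * z)\n        views.append((radius * r * math.cos(phi), radius * r * math.sin(phi), radius * z))\n    return views\n\nprint(sample_views_on_sphere(frames=3))  # three positions, each aimed at the center\n```\n\n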
\n## Tips for Optimal Results\n\nNVIDIA provides a few helpful tips on how to train a NeRF model using [Instant NGP](https:\u002F\u002Fgithub.com\u002FNVlabs\u002Finstant-ngp\u002Fblob\u002Fmaster\u002Fdocs\u002Fnerf_dataset_tips.md). Feel free to visit their repository for further help. Below are some quick tips for optimal **nerfing**, gained from personal experience.\n\n* NeRF trains best with 50 to 150 images\n* Testing views should not deviate too much from training views\n* Scene movement, motion blur or blurring artefacts can degrade the reconstruction quality\n* The captured scene should be at least one Blender unit away from the camera\n* Keep `AABB` as tight as possible to the scene scale; higher values will slow down training\n* If the reconstruction quality appears blurry, start by adjusting `AABB` while keeping it a power of 2\n* Avoid adjusting the camera focal lengths during the animation; the vanilla NeRF methods do not support multiple focal lengths\n* Avoid extreme focal lengths; values between 30 mm and 70 mm work well in practice\n* A `Vertical` camera sensor fit sometimes leads to distorted NeRF volumes; avoid it if possible\n\n\n## How to NeRF\n\nIf you have access to an NVIDIA GPU, you might want to install [Instant NGP](https:\u002F\u002Fgithub.com\u002FNVlabs\u002Finstant-ngp#installation) on your own device for an optimal user experience by following the instructions provided in their repository. Otherwise, you can run NeRF in a COLAB notebook on Google GPUs for free with a Google account.\n\nOpen this [COLAB notebook](https:\u002F\u002Fcolab.research.google.com\u002Fdrive\u002F1dQInHx0Eg5LZUpnhEfoHDP77bCMwAPab?usp=sharing) (also downloadable [here](https:\u002F\u002Fgist.github.com\u002Fmaximeraafat\u002F122a63c81affd6d574c67d187b82b0b0)) and follow the instructions.\n\n\n## Remarks\n\nThis add-on is being developed as a fun side project over the course of multiple months and versions of Blender, mainly on macOS. If you encounter any issues with the plugin functionalities, feel free to open a GitHub issue with a clear description of the problem, the **BlenderNeRF** version with which the issue occurred, and any further relevant information.\n\n### Real World Data\n\nWhile this extension is intended for synthetic dataset creation, existing tools for importing motion tracking data from real world cameras are available. One such example is **[Tracky](https:\u002F\u002Fgithub.com\u002FShopify\u002Ftracky)** by **Shopify**, an open source iOS app with an accompanying Blender plugin that records motion tracking data from an ARKit session on iPhone. Keep in mind however that tracking data can be subject to drift and inaccuracies, which might affect the resulting NeRF reconstruction quality.\n\n\n## Citation\n\nIf you find this repository useful in your research, please consider citing **BlenderNeRF** using the dedicated GitHub button above. If you made use of this extension for your artistic projects, feel free to share some of your work using the `#blendernerf` hashtag on social media! 
:)\n","# BlenderNeRF\n\n无论您是视觉特效艺术家、研究人员，还是图形爱好者，**BlenderNeRF** 都是在 Blender 中创建合成 NeRF 和 Gaussian Splatting 数据集的最简单快捷方式。只需点击一下即可获取渲染结果和相机参数，同时完全掌控 3D 场景和相机！\n\n\u003Cp align='center'>\n  \u003Ca href=\"https:\u002F\u002Fyoutu.be\u002FC8YuDoU11cg\">\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fmaximeraafat_BlenderNeRF_readme_fd27b08d0f7d.jpg\" width='90%'>\u003C\u002Fa>\n  \u003Cbr>\n  准备好开始 NeRF 吗？立即在 Blender 中点击一下，观看 \u003Ca href=\"https:\u002F\u002Fyoutu.be\u002FC8YuDoU11cg\">本教程\u003C\u002Fa> 吧！\n\u003C\u002Fp>\n\n\n## 神经辐射场\n\n**神经辐射场（[NeRF](https:\u002F\u002Fwww.matthewtancik.com\u002Fnerf)）** 的目标是仅利用 2D 图像及其对应的相机信息，将 3D 场景表示为与视角相关的体三维对象。通过一个简单的神经网络，从训练图像中逆向重建出 3D 场景。\n\n**Gaussian Splatting** 是一种后续方法，以基于点的方式渲染辐射场。这种表示方式针对 GPU 渲染进行了高度优化，并借助更传统的图形技术实现高帧率。\n\n我推荐观看 **Corridor Crew** 制作的 [这则 YouTube 视频](https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=YX5AoaWrowY)，它深入探讨了 NeRF 的一些应用场景及未来潜力。\n\n\n## 动机\n\n渲染是一项计算密集型任务。逼真的场景根据其复杂度、硬件条件以及可用的软件资源，可能需要几秒到几小时才能完成渲染。\n\nNeRF 和 Gaussian Splat 可以加速这一过程，但通常需要通过繁琐的代码提取相机信息。而这款插件让任何人都能在 Blender 中只需点击一下，即可轻松获得渲染结果和相机数据。\n\n\u003Cp align='center'>\n  \u003Cimg src='https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fmaximeraafat_BlenderNeRF_readme_3d6e69af896c.gif' width='90%'\u002F>\n\u003C\u002Fp>\n\n\n## 安装\n\n1. 将本仓库下载为 **ZIP** 文件。\n2. 打开 Blender（4.0.0 或更高版本）。\n3. 在 Blender 中，前往 **编辑 > 首选项 > 插件**，并在下拉菜单中选择 **从磁盘安装**。\n4. 选择已下载的 **ZIP** 文件。\n\n尽管 **BlenderNeRF** 提供了可下载的发布版本，但这些版本主要用于追踪重大代码变更和引用目的。我建议直接下载当前仓库，因为一些小的改动或错误修复可能不会立即包含在发布版本中。\n\n\n## 设置\n\n**BlenderNeRF** 包含以下三个方法，分别在下面的小节中介绍。每种方法都能生成用于 NeRF 训练的数据和测试数据：训练数据以训练图像的形式提供，而测试数据则以 `transforms_train.json` 和 `transforms_test.json` 文件的形式存储相应的相机信息。这些数据会被归档到一个包含训练和测试文件夹的单个 **ZIP** 文件中。训练数据可供 NeRF 模型学习 3D 场景的表示；训练完成后，该模型可以使用测试数据（仅包含相机信息）进行评估（或测试），从而生成新的渲染结果。\n\n### 帧子集\n\n**帧子集（SOF）** 会从摄像机动画中每隔 **N** 帧渲染一次，并将这些渲染帧作为 NeRF 的训练数据。注册的测试数据则涵盖同一摄像机动画的所有帧，包括训练帧。经过训练后，NeRF 模型能够渲染完整的摄像机动画，因此非常适合对静态场景的大规模动画进行插值或渲染。\n\n\u003Cp align='center'>\n  \u003Cimg src='https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fmaximeraafat_BlenderNeRF_readme_f8bf3117fa5b.gif' width='90%'\u002F>\n\u003C\u002Fp>\n\n### 训练与测试相机\n\n**训练与测试相机（TTC）** 会从用户定义的两台独立摄像机中分别记录训练和测试数据。随后，NeRF 模型可以使用从训练相机中提取的数据进行拟合，并在测试数据上进行评估。\n\n\u003Cp align='center'>\n  \u003Cimg src='https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fmaximeraafat_BlenderNeRF_readme_abee65f1fa10.gif' width='90%'\u002F>\n\u003C\u002Fp>\n\n### 球面上的相机\n\n**球面上的相机（COS）** 会通过均匀采样随机的相机视角，从用户控制的球体上朝向中心进行训练帧的渲染。测试数据则从选定的相机中提取。\n\n\u003Cp align='center'>\n  \u003Cimg src='https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fmaximeraafat_BlenderNeRF_readme_e89c71366102.gif' width='90%'\u002F>\n\u003C\u002Fp>\n\n## 如何使用这些方法\n\n附加组件的属性面板可在 `3D视图 > N面板 > BlenderNeRF` 中找到（按 `N` 键即可打开 **N面板**）。所有三种方法（**SOF**、**TTC** 和 **COS**）共享一个名为 `BlenderNeRF 共享UI` 的选项卡，其中包含以下可控制的属性。\n\n* `Train`（默认启用）：是否注册训练数据（渲染图像 + 相机信息）\n* `Test`（默认启用）：是否注册测试数据（仅相机信息）\n* `AABB`（默认设置为 **4**）：如 Instant NGP 中所述的 aabb 缩放参数（详情见下文）\n* `Render Frames`（默认启用）：是否渲染帧\n* `Save Log File`（默认禁用）：是否保存包含 **BlenderNeRF** 运行可复现性信息的日志文件\n* `File Format`（默认为 **NGP**）：是否以 Instant NGP 或默认 NeRF 文件格式导出相机文件\n* `Gaussian Points`（默认禁用）：是否导出用于高斯泼溅的 `points3d.ply` 文件\n* `Gaussian Test Camera Poses`（默认为 **Dummy**）：是否导出虚拟测试相机文件，或完整的测试相机位姿集（仅在启用 `Gaussian Points` 时可用）\n* `Save Path`（默认为空）：输出目录路径，数据集将在此处创建\n\n如果 `Gaussian Points` 属性处于启用状态，**BlenderNeRF** 将从所有可见网格（在渲染时）中创建一个额外的 `points3d.ply` 文件，其中每个顶点都将用作初始化点。如果有顶点颜色，则会保留；否则设为黑色。\n\n[**Gaussian 
Splatting**](https:\u002F\u002Fgithub.com\u002Fgraphdeco-inria\u002Fgaussian-splatting) 仓库原生支持 **NeRF** 数据集，但需要同时提供训练和测试数据。对于 `Gaussian Test Camera Poses` 属性，选择 `Dummy` 选项会生成一个空的测试相机位姿文件，适用于不需要测试图像的情况。而选择 `Full` 选项则会导出默认的测试相机位姿，但需要单独渲染一个包含所有测试渲染图像的 `test` 文件夹。\n\n`AABB` 限制为 2 的整数次幂，它定义了 NeRF 在其中追踪光线的包围盒体积的边长。该属性由 NVIDIA 的 **Instant NGP** 版本的 NeRF 引入。\n\n`File Format` 属性可选择 **NGP** 或 **NeRF**。**NGP** 文件格式与 **NeRF** 格式相同，但增加了一些额外的参数，这些参数可通过 Instant NGP 访问。\n\n请注意，每种方法都有其独特的 `Name` 属性（默认设置为 `dataset`），对应于数据集名称以及相应方法生成的 **ZIP** 文件名。请记住，不支持的字符，例如空格、`#` 或 `\u002F`，将自动替换为下划线。\n\n以下是每种方法特有的属性说明（已略去 `Name` 属性，因其已在上文讨论过）。\n\n### SOF 使用方法\n\n* `Frame Step`（默认设置为 **3**）：**N**（如 [设置](#setting) 部分所定义）= 注册训练帧的频率\n* `Camera`（始终设置为当前活动相机）：用于注册训练和测试数据的相机\n* `PLAY SOF`：运行 **Subset of Frames** 方法操作符以导出 NeRF 数据\n\n### TTC 使用方法\n\n* `Frames`（默认设置为 **100**）：从训练相机中使用的训练帧数量\n* `Train Cam`（默认为空）：用于注册训练数据的相机\n* `Test Cam`（默认为空）：用于注册测试数据的相机\n* `PLAY TTC`：运行 **Train and Test Cameras** 方法操作符以导出 NeRF 数据\n\n`Frames` 指定数量的训练帧将使用 `Train Cam` 对象从场景开始帧开始捕获。\n\n### COS 使用方法\n\n* `Camera`（始终设置为当前活动相机）：用于注册测试数据的相机\n* `Location`（默认设置为 **0 m** 向量）：采样相机视角的训练球体中心位置\n* `Rotation`（默认设置为 **0°** 向量）：采样相机视角的训练球体旋转角度\n* `Scale`（默认设置为 **1** 向量）：训练球体在 xyz 轴上的缩放向量\n* `Radius`（默认设置为 **4 m**）：训练球体的半径标量\n* `Lens`（默认设置为 **50 mm**）：训练相机的焦距\n* `Seed`（默认设置为 **0**）：用于初始化随机相机视角采样过程的种子\n* `Frames`（默认设置为 **100**）：从训练球体中采样并渲染的训练帧数量\n* `Sphere`（默认禁用）：是否显示用于随机视角采样的训练球体\n* `Camera`（默认禁用）：是否显示用于注册训练数据的相机\n* `Upper Views`（默认禁用）：是否仅从训练球体的上半球采样视角（旋转变体）\n* `Outwards`（默认禁用）：是否让相机指向训练球体外部\n* `PLAY COS`：运行 **Camera on Sphere** 方法操作符以导出 NeRF 数据\n\n请注意，启用 `Sphere` 和 `Camera` 属性会分别创建一个名为 `BlenderNeRF Sphere` 的空对象和一个名为 `BlenderNeRF Camera` 的相机对象。请勿手动创建具有这些名称的对象，否则可能会导致附加组件功能失效。\n\n`Frames` 指定数量的训练帧将使用 `BlenderNeRF Camera` 对象从场景开始帧开始捕获。最后，请注意，训练相机被锁定在原位，无法手动移动。\n\n\n## 获得最佳效果的提示\n\nNVIDIA 提供了一些关于如何使用 [Instant NGP](https:\u002F\u002Fgithub.com\u002FNVlabs\u002Finstant-ngp\u002Fblob\u002Fmaster\u002Fdocs\u002Fnerf_dataset_tips.md) 训练 NeRF 模型的实用建议。欢迎访问他们的仓库获取更多帮助。以下是一些基于个人经验总结的优化 **nerfing** 的快速提示。\n\n* NeRF 最适合使用 50 至 150 张图像进行训练\n* 测试视角不应与训练视角相差过大\n* 场景运动、运动模糊或模糊伪影会降低重建质量\n* 捕获的场景应至少距离相机一个 Blender 单位\n* 尽可能将 `AABB` 设置为与场景规模相匹配，过大的值会减慢训练速度\n* 如果重建质量看起来模糊，可以先调整 `AABB`，同时确保其为 2 的整数次幂\n* 动画过程中避免调整相机焦距，标准 NeRF 方法不支持多焦距\n* 避免使用极端焦距，实践中 30 mm 至 70 mm 的焦距效果较好\n* 垂直传感器的相机有时会导致 NeRF 体积变形，应尽量避免\n\n## 如何使用 NeRF\n\n如果您有一块 NVIDIA GPU，为了获得最佳的使用体验，建议您按照 [Instant NGP](https:\u002F\u002Fgithub.com\u002FNVlabs\u002Finstant-ngp#installation) 仓库中的说明，在自己的设备上安装它。否则，您也可以通过 Google 账号，在 Google 的 COLAB 笔记本中免费使用 Google GPU 来运行 NeRF。\n\n打开这个 [COLAB 笔记本](https:\u002F\u002Fcolab.research.google.com\u002Fdrive\u002F1dQInHx0Eg5LZUpnhEfoHDP77bCMwAPab?usp=sharing)（也可在此 [下载](https:\u002F\u002Fgist.github.com\u002Fmaximeraafat\u002F122a63c81affd6d574c67d187b82b0b0)），并按照其中的指示操作即可。\n\n\n## 备注\n\n此插件是一个历时数月、伴随 Blender 不同版本开发的趣味性副项目，主要在 macOS 系统上进行。如果您在使用插件功能时遇到任何问题，请随时在 GitHub 上提交一个问题，并清晰地描述问题的具体情况、出现问题的 **BlenderNeRF** 版本，以及任何其他相关的信息。\n\n### 现实世界数据\n\n虽然此扩展主要用于创建合成数据集，但也有一些现成的工具可以导入来自真实世界相机的运动跟踪数据。例如，由 Shopify 开发的开源 iOS 应用程序及其配套的 Blender 插件 **[Tracky](https:\u002F\u002Fgithub.com\u002FShopify\u002Ftracky)**，能够记录 iPhone 上 ARKit 会话中的运动跟踪数据。不过需要注意的是，跟踪数据可能会出现漂移和不准确性，从而影响最终的 NeRF 重建质量。\n\n\n## 引用\n\n如果您在研究中觉得本仓库很有帮助，请考虑使用上方的专用 GitHub 按钮来引用 **BlenderNeRF**。如果您在艺术创作中使用了此扩展，也欢迎在社交媒体上使用 `#blendernerf` 标签分享您的作品！ :)","# BlenderNeRF 快速上手指南\n\nBlenderNeRF 是一款 Blender 插件，旨在帮助视觉特效艺术家、研究人员及图形学爱好者，一键生成用于训练 **NeRF (神经辐射场)** 和 **Gaussian Splatting (高斯泼溅)** 的合成数据集。它能在保留用户对 3D 场景和相机完全控制权的同时，自动输出渲染图像及对应的相机参数文件。\n\n## 
环境准备\n\n在开始之前，请确保满足以下系统要求：\n\n*   **操作系统**：Windows, macOS 或 Linux（插件主要在 macOS 上开发，但跨平台兼容）。\n*   **核心软件**：**Blender 4.0.0** 或更高版本。\n    *   *注意：低于 4.0.0 的版本可能无法正常运行该插件。*\n*   **硬件建议**：\n    *   若需本地训练 NeRF 模型，推荐配备 **NVIDIA GPU** 并安装 [Instant NGP](https:\u002F\u002Fgithub.com\u002FNVlabs\u002Finstant-ngp)。\n    *   若无高性能显卡，可使用 Google Colab 免费云端 GPU 进行后续模型训练。\n*   **前置依赖**：无需额外安装 Python 库，插件内置所需逻辑。\n\n## 安装步骤\n\nBlenderNeRF 以插件形式分发，请按以下步骤安装：\n\n1.  **下载插件**：\n    访问项目仓库，将代码库下载为 **ZIP** 文件（推荐直接下载主分支以获取最新修复，而非仅下载 Release 版本）。\n    ```bash\n    # 示例：通过 git 克隆（可选，或直接网页下载 ZIP）\n    git clone https:\u002F\u002Fgithub.com\u002Fmaximeraafat\u002FBlenderNeRF.git\n    ```\n\n2.  **启动 Blender**：\n    打开 Blender 4.0.0 或更新版本。\n\n3.  **安装插件**：\n    *   点击顶部菜单栏的 `Edit` (编辑) > `Preferences` (偏好设置)。\n    *   在左侧选择 `Add-ons` (插件)。\n    *   点击右上角的 `Install...` (安装...) 按钮（图标通常为向下箭头或文件夹）。\n    *   在文件选择器中，找到并选中刚才下载的 **ZIP** 文件。\n\n4.  **启用插件**：\n    安装后，在列表中找到 `BlenderNeRF`，勾选其名称左侧的复选框以激活插件。\n\n## 基本使用\n\n安装完成后，插件面板位于 3D 视图右侧。按下键盘上的 `N` 键展开侧边栏，点击 `BlenderNeRF` 标签页即可看到操作界面。\n\n### 1. 通用配置\n所有方法共享以下基础设置（位于 `BlenderNeRF shared UI`）：\n*   **Train \u002F Test**: 默认勾选，分别表示生成训练数据（图像 + 相机参数）和测试数据（仅相机参数）。\n*   **File Format**: 默认为 `NGP` (兼容 Instant NGP)，也可选标准 `NeRF` 格式。\n*   **Gaussian Points**: 若需进行 Gaussian Splatting 训练，请勾选此项以生成 `points3d.ply` 文件。\n*   **Save Path**: 指定数据集输出的文件夹路径（留空则保存在默认位置）。\n\n### 2. 选择数据采集方法\n根据场景需求，选择以下三种模式之一：\n\n#### 模式 A: Subset of Frames (SOF) - 动画子集\n适用于静态场景的长动画渲染。\n*   **原理**：每隔 **N** 帧渲染一张图作为训练集，所有帧作为测试集。\n*   **操作**：\n    1.  确保场景中已设置好相机动画。\n    2.  设置 `Frame Step` (默认 3，即每 3 帧取一帧)。\n    3.  点击 `PLAY SOF` 按钮开始导出。\n\n#### 模式 B: Train and Test Cameras (TTC) - 双相机模式\n适用于需要严格区分训练视角和测试视角的场景。\n*   **原理**：使用两个独立的相机，一个负责采集训练数据，另一个负责采集测试数据。\n*   **操作**：\n    1.  在场景中创建或选择两个相机对象。\n    2.  在 `Train Cam` 中选择训练用相机，在 `Test Cam` 中选择测试用相机。\n    3.  设置 `Frames` (默认 100，即采集帧数)。\n    4.  点击 `PLAY TTC` 按钮开始导出。\n\n#### 模式 C: Camera on Sphere (COS) - 球面采样\n适用于围绕物体进行全方位扫描的场景。\n*   **原理**：在用户定义的球面上随机采样相机视角指向中心。\n*   **操作**：\n    1.  调整 `Location` (球心), `Radius` (半径), 和 `Frames` (采样数量，默认 100)。\n    2.  (可选) 勾选 `Sphere` 预览采样球体，或勾选 `Upper Views` 仅采样上半球。\n    3.  点击 `PLAY COS` 按钮。插件会自动创建 `BlenderNeRF Camera` 并执行渲染。\n    *   *注意：请勿手动创建名为 \"BlenderNeRF Sphere\" 或 \"BlenderNeRF Camera\" 的对象，以免冲突。*\n\n### 3. 获取结果\n执行完毕后，插件会在指定的 `Save Path` 下生成一个 **ZIP** 压缩包。内容包括：\n*   `transforms_train.json` \u002F `transforms_test.json`: 包含相机位姿和内参。\n*   对应的渲染图像文件夹。\n*   (若开启) `points3d.ply`: 用于高斯泼溅初始化的点云文件。\n\n### 4. 
后续训练\n生成的数据集可直接用于以下流程：\n*   **本地训练**：配合 [Instant NGP](https:\u002F\u002Fgithub.com\u002FNVlabs\u002Finstant-ngp) 使用。\n*   **云端训练**：上传至 Google Colab Notebook 进行免费训练。\n\n> **优化提示**：为获得最佳重建效果，建议采集 50-150 张图片，保持相机焦距在 30mm-70mm 之间，并确保场景距离相机至少 1 个 Blender 单位。","一位视觉特效艺术家需要在 Blender 中为静态文物场景构建高保真的 NeRF 训练数据集，以便后续实现自由视角的沉浸式展示。\n\n### 没有 BlenderNeRF 时\n- **参数提取繁琐**：必须编写复杂的 Python 脚本手动遍历每一帧，从 Blender 内部提取相机内参和外参矩阵，极易因代码错误导致数据错位。\n- **渲染流程割裂**：需要单独设置渲染队列输出图像，再另行处理相机数据，两者难以自动对齐，人工核对耗时且容易出错。\n- **测试集构建困难**：难以快速生成用于验证模型泛化能力的独立测试集（如不同角度的相机路径），通常只能复用训练数据，导致评估结果不可靠。\n- **迭代成本高昂**：一旦调整了相机动画或场景布局，整个数据导出流程需重新手动执行，严重拖慢研发进度。\n\n### 使用 BlenderNeRF 后\n- **一键自动化导出**：只需点击一次按钮，BlenderNeRF 即可自动渲染图像序列并同步生成包含精确相机信息的 `transforms_train.json` 和 `transforms_test.json` 文件。\n- **智能数据划分**：利用“子集帧（SOF）”或“训练\u002F测试相机（TTC）”模式，能瞬间将动画帧划分为训练集与测试集，确保数据格式直接适配主流 NeRF 算法。\n- **全流程可控**：艺术家可在熟悉的 Blender 界面中直观调整灯光、材质及相机轨迹，实时预览并立即重新生成数据集，无需切换工具或修改代码。\n- **高效闭环验证**：快速构建包含独立测试相机的数据集，让模型训练后立即在新视角下进行推理验证，大幅缩短从场景搭建到效果评估的周期。\n\nBlenderNeRF 将原本需要数小时编码与调试的数据准备过程压缩至分钟级，让创作者能专注于场景艺术表现而非底层数据工程。","https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fmaximeraafat_BlenderNeRF_fd27b08d.jpg","maximeraafat","Maxime Raafat","https:\u002F\u002Foss.gittoolsai.com\u002Favatars\u002Fmaximeraafat_6cbe88d3.png","Founder of UV Gen","UV Gen",null,"https:\u002F\u002Fmaximeraafat.github.io","https:\u002F\u002Fgithub.com\u002Fmaximeraafat",[84],{"name":85,"color":86,"percentage":87},"Python","#3572A5",100,995,73,"2026-03-28T04:42:24","MIT","未说明","非插件运行必需，但训练 NeRF 模型时推荐 NVIDIA GPU（用于 Instant NGP）；插件本身在 Blender 中运行，依赖宿主硬件进行渲染",{"notes":95,"python":92,"dependencies":96},"该工具是 Blender 插件，主要运行环境需求为 Blender 4.0.0 及以上版本。插件本身用于生成数据集，不直接依赖特定的 Python 库或 CUDA 版本。若要使用生成的数据进行 NeRF 训练，需额外安装 NVIDIA Instant NGP（需要 NVIDIA GPU）或使用 Google Colab。开发者主要在 macOS 上开发，但未明确限制操作系统。",[97],"Blender>=4.0.0",[53,15,13,14],[100,101,102,103,104,105,106,107,108,109],"ai","blender","computer-graphics","computer-vision","instant-ngp","nerf","python","addons","neural-rendering","gaussian-splatting","2026-03-27T02:49:30.150509","2026-04-06T07:13:13.955281",[113,118,122,127,132,137],{"id":114,"question_zh":115,"answer_zh":116,"source_url":117},9426,"如何开始使用 BlenderNeRF？我应该提供图像进行训练还是使用已训练好的数据集？","该插件用于生成训练和测试数据集，而非直接训练模型。使用方法如下：\n1. 在共享用户界面中同时选择 `Train` 和 `Test` 按钮。\n2. **SOF 方法**：选择相机并运行操作符。训练数据集将包含该动画相机每隔 **n** 帧的渲染图像（n 为设定的帧步长），`transforms_train.json` 包含对应的相机姿态；`transforms_test.json` 则包含所有帧的姿态。\n3. **TTC 方法**：运行操作符时，训练数据由 `Train Cam` 渲染的 **n** 帧构成，测试数据由测试相机生成。\n4. 若需分离训练和测试数据，可先取消勾选 `Test` 运行 SOF，再取消 `Train` 勾选 `Test` 运行 TTC，最后合并文件夹。\n5. 插件直接使用 Blender 的渲染属性（如起止帧、分辨率等）。","https:\u002F\u002Fgithub.com\u002Fmaximeraafat\u002FBlenderNeRF\u002Fissues\u002F7",{"id":119,"question_zh":120,"answer_zh":121,"source_url":117},9427,"如何生成用于测试相机路径的渲染图像（Test Renders）？","插件本身设计上不支持直接渲染测试图像，但可以通过以下方式实现：\n1. **在 Blender 中手动渲染**：选择所需的测试相机，按下 `Render Animation` 命令。\n2. 
**使用 Instant NGP 脚本**：如果已在 NGP 中训练好模型，可以使用以下脚本生成测试渲染（适用于 Linux shell）：\n```bash\npython instant-ngp\u002Fscripts\u002Frun.py --training_data=\u003CtrainPath> \\\n                                  --screenshot_transforms=\u003CtestJson> \\\n                                  --screenshot_w=\u003Cwidth> --screenshot_h=\u003Cheight> \\\n                                  --screenshot_dir=\u003CtestPath> \\\n                                  --save_snapshot=snapshot.msgpack \\\n                                  --n_steps=\u003Citerations>\n```\n其中 `\u003CtrainPath>` 是包含训练图像和 `transforms_train.json` 的路径，`\u003CtestJson>` 是 `transforms_test.json` 的路径。",{"id":123,"question_zh":124,"answer_zh":125,"source_url":126},9428,"从 Blender 导出的数据集在 Instant NGP 中相机位置不匹配怎么办？","这是一个已知的代码 Bug，会影响多种相机设置（包括传感器拟合模式 `auto`\u002F`vertical`\u002F`horizontal` 和任意宽高比）。\n解决方案：请更新到 **BlenderNeRF v5** 或更高版本。维护者已通过大量实验修复了该问题，确保 Blender 渲染与 NGP 渲染在任何相机类型下都能完美匹配。如果更新后仍有问题，建议检查 `.blend` 文件或高斯泼溅实现是否有异常。","https:\u002F\u002Fgithub.com\u002Fmaximeraafat\u002FBlenderNeRF\u002Fissues\u002F18",{"id":128,"question_zh":129,"answer_zh":130,"source_url":131},9429,"如何使用 BlenderNeRF 创建 3D Gaussian Splatting (3DGS) 数据集？","从 **BlenderNeRF v5.1.0** 版本开始，已原生支持创建 3DGS 数据集。\n- 旧版本用户可能需要手动转换相机姿态数据或使用外部脚本（如修改 `blender_nerf_operator.py`）来适配 3DGS 格式。\n- 新版本实现了不同的导出逻辑，可直接生成兼容的数据集。如果您使用的是 Nerfstudio 或其他衍生版本（非 Inria 原版），请注意数据集结构差异（如需要 `images.txt`, `points3D.txt`, `cameras.txt` 等文件），建议升级到最新版插件以获得最佳支持。","https:\u002F\u002Fgithub.com\u002Fmaximeraafat\u002FBlenderNeRF\u002Fissues\u002F36",{"id":133,"question_zh":134,"answer_zh":135,"source_url":136},9430,"如何制作动态 NeRF 场景？相机在圆形轨迹运动时出现万向节锁（Gimbal Lock）翻转怎么办？","当相机在 YOZ 平面上沿圆形轨迹绕原点物体运动并经过 Z 轴上方时，可能会遇到万向节锁导致相机翻转。\n建议解决方案：\n1. 检查 `.blend` 文件中是否有异常设置。\n2. 审查所使用的高斯泼溅或 NeRF 实现是否对相机姿态有特殊要求。\n3. 由于该问题可能与特定轨迹算法有关，建议尝试调整相机约束或使用四元数旋转以避免欧拉角奇异点。如果问题持续，可参考社区讨论或寻求更详细的几何变换指导。","https:\u002F\u002Fgithub.com\u002Fmaximeraafat\u002FBlenderNeRF\u002Fissues\u002F44",{"id":138,"question_zh":139,"answer_zh":140,"source_url":141},9431,"AABB 体积设置为 1 是指 1x1x1 的立方体吗？为什么默认值是 4？","AABB（轴对齐包围盒）的具体含义取决于 NeRF 采样网格的行为：\n- 如果 3D 网格分辨率固定，那么 AABB 设为 1 确实代表 1x1x1 的立方体，增大 AABB 会导致渲染质量下降（因为采样点变稀疏）。\n- 根据实验观察，增加 AABB 值通常会增加渲染时间，这暗示网格分辨率可能随 AABB 增大而自动增加，以维持采样密度。\n目前插件尚未提供 AABB 立方体的可视化功能，用户需根据场景实际大小调整该参数以平衡质量和速度。","https:\u002F\u002Fgithub.com\u002Fmaximeraafat\u002FBlenderNeRF\u002Fissues\u002F10",[143,148,153,158,163,168],{"id":144,"version":145,"summary_zh":146,"released_at":147},106814,"v6","This release introduces two major updates:\r\n\r\n* Transition from the deprecated Blender add-on format to the new [Blender extension](https:\u002F\u002Fdocs.blender.org\u002Fmanual\u002Fen\u002F4.2\u002Fadvanced\u002Fextensions\u002F) convention introduced in [Blender 4.2](https:\u002F\u002Fwww.blender.org\u002Fdownload\u002Freleases\u002F4-2\u002F),\r\n* Support for Gaussian Splatting dataset creation.\r\n\r\nAlthough the `bl_info` dictionary in the `__init__.py` file is no longer necessary, I have retained it for extended compatibility.\r\n\r\n## Gaussian Splatting Dataset Creation Support\r\n\r\nThe [Gaussian Splatting](https:\u002F\u002Fgithub.com\u002Fgraphdeco-inria\u002Fgaussian-splatting) repository (referred to as **3DGS**) natively supports the NeRF file format convention, but benefits from an additional point cloud initialisation. 
This release enables the optional creation of a `points3d.ply` file when the `Gaussian Points (PLY file)` option is selected.\r\n\r\nThe `points3d.ply` file is generated using Blender's built-in `bpy.ops.wm.ply_export` function, introduced in [Blender 4.0](https:\u002F\u002Fdeveloper.blender.org\u002Fdocs\u002Frelease_notes\u002F4.0\u002Fpython_api\u002F#importexport). Therefore Blender 4.0 is the lowest supported version of this release. While previous versions of **BlenderNeRF** (compatible with Blender versions `>= 3.0.0`) remain available, using Blender 4.0 or later is recommended for this functionality.\r\n\r\nThe `points3d.ply` file contains vertices of all visible meshes at render time, serving as the initial points for 3DGS. Only mesh objects visible in the render will be included. Meshes hidden in render or within hidden collections will be excluded.\r\n\r\n### Known issues\r\n\r\n* The functionality may break in specific scenarios, such as scenes with rendered child objects but hidden parent objects (e.g., a particle system where the emitter is hidden). Despite potential mismatches, the `ply` file is merely a starting point for 3DGS and will be optimised further.\r\n* Modifiers **are applied** to meshes during vertex storage, affecting the `ply` file's total vertex count. If more or fewer points are desired for 3DGS initialisation, consider applying subdivision or decimate modifiers accordingly.\r\n\r\n### Additional notes\r\n\r\n* Vertex colours: if vertex colours are available, they will be included in the `ply` file. If not, vertex colours will default to zero, as their existence is required by 3DGS.\r\n* Export time: generating the `ply` file can be time-consuming, naturally depending on the scene’s vertex count.\r\n\r\nWhen using NeRF datasets, 3DGS currently requires both `transforms_train.json` and `transforms_test.json` camera files. For users solely interested in creating a splat file, the new `dummy` option creates an empty `transforms_test.json` file, bypassing the need for a complete test dataset. This is a temporary measure until the 3DGS repository potentially supports optional NeRF test camera poses.\r\n\r\n### File path adjustments\r\n\r\nWhen exporting with the `Gaussian Points` option, file extensions are removed from paths in the `transforms.json` files, as 3DGS automatically appends a PNG extension. This feature remains until the 3DGS repository offers an argument for specifying file extensions or automatically detects them.","2024-08-06T23:38:43",{"id":149,"version":150,"summary_zh":151,"released_at":152},106815,"v5","This release mainly focuses on fixing a bug pointed out in issue #18, in which a miscomputation of the camera intrinsics sometimes leads to mismatching fields of view between ground truth Blender renders and the corresponding NeRF renders. The mismatch is due to the wrong computation of focal lengths, when the camera sensor fit is not `Horizontal`.\r\n\r\nTo this end, 27 datasets have been captured for debugging purposes, in which different aspect ratios, pixel ratios and camera sensor fits have been evaluated. The dataset and respective Blender file are made available [here](https:\u002F\u002Fdrive.google.com\u002Fdrive\u002Ffolders\u002F13-Mka1ccteXcXUGT4Abu2f8b1GG5laFj?usp=share_link) and a thorough description of the data and capturing process is described in the contained `README.txt` file.\r\n\r\nThe mismatch should now have been alleviated. 
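For reference, the intrinsics at stake reduce to converting Blender's focal length in millimetres into pixels, where the sensor fit decides which sensor dimension anchors the conversion. The sketch below is a simplified illustration assuming square pixels (the function name is hypothetical, and `AUTO`\u002F`HORIZONTAL`\u002F`VERTICAL` refer to Blender's `sensor_fit` options), not the add-on's exact code:\r\n\r\n```python\r\ndef focal_length_px(focal_mm, sensor_w_mm, sensor_h_mm, res_x, res_y, sensor_fit):\r\n    '''Focal length in pixels for a Blender camera, assuming square pixels.'''\r\n    if sensor_fit == 'AUTO':\r\n        # AUTO fits the sensor to the larger image dimension\r\n        sensor_fit = 'HORIZONTAL' if res_x >= res_y else 'VERTICAL'\r\n    if sensor_fit == 'HORIZONTAL':\r\n        return focal_mm * res_x \u002F sensor_w_mm\r\n    return focal_mm * res_y \u002F sensor_h_mm\r\n```\r\n\r\n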
Below are depicted a few results in which NeRF renders (left column of each image) are compared to their respective ground truth, rendered in Blender with a 1:1 pixel ratio (right column). Camera parameters for each set of renders are described in the table underneath the image. The NGP renders are automatically stretched or squeezed to _undo_ the non-uniform pixel ratio in training frames. Feel free to inspect the data for further details.\r\n\r\n![debug](https:\u002F\u002Fgithub.com\u002Fmaximeraafat\u002FBlenderNeRF\u002Fassets\u002F51931580\u002F1626040e-9023-48b9-987d-e73efcc7e987)\r\n\r\n| | Left | Middle | Right |\r\n| :--- | :---: | :---: | :---: |\r\n| **Sensor Fit** | `Auto` | `Horizontal` | `Vertical` |\r\n| **Aspect Ratio** | 1 : 1 | 16 : 9 | 2 : 3 |\r\n| **Resolution** | 300 * 300 | 576 * 324 | 300 * 450 |\r\n| **Pixel Ratio** | 1 : 1.5 | 1.5 : 1 | 1 : 1 |\r\n\r\nBelow are two relevant observations and takeaways.\r\n* For the middle image, the NeRF volume is partially cropped at the top and bottom of the donut. This is because the training images are stretched out (pixel ratio 1.5:1), and therefore only a smaller region of the donut is visible in these frames (see the corresponding frames in the data). As validated here, the NeRF render reshapes the scene to a uniform pixel ratio, thereby _undoing_ the stretching effect.\r\n* The `Vertical` sensor fit often results in somewhat distorted NeRF volumes. I am however unsure of the cause, and consequently recommend avoiding it if possible.\r\n\r\n---\r\n\r\nThis release additionally comprises new functionality and a few warning\u002Ferror fixes.\r\n\r\n1. The number of training frames used from the training camera with the **TTC** method can now be changed independently of the number of testing frames. The latter (number of testing frames) remains the frame range of the Blender scene.\r\n2. The issues highlighted in pull request #19 have been resolved.\r\n","2023-05-11T17:54:31",{"id":154,"version":155,"summary_zh":156,"released_at":157},106816,"v4","New features and updated functionalities\r\n\r\n1. Starting frame with the COS training method is now the default scene frame start, instead of frame 1\r\n2. New option for outwards facing cameras with the COS training method\r\n3. Revised file and folder output structure: the output convention follows the original NeRF synthetic dataset structure. The output folder contains the `transforms.json` files directly, alongside a `train` folder with images\r\n4. Enabled support for the original NeRF `transforms.json` file format convention (alongside the Instant NGP file convention)\r\n5. Enabled option to save a log file containing all the necessary information for reproducibility of a BlenderNeRF run","2023-02-11T23:19:20",{"id":159,"version":160,"summary_zh":161,"released_at":162},106817,"v3","Third stable version of the BlenderNeRF add-on: support for all methods (SOF, TTC and COS)\r\n","2023-01-07T00:35:10",{"id":164,"version":165,"summary_zh":166,"released_at":167},106818,"v2","Second stable version of the BlenderNeRF add-on: support for SOF and TTC methods + simplified UI redesign","2022-07-19T15:50:07",{"id":169,"version":170,"summary_zh":171,"released_at":172},106819,"v1","First public release with stable version of the BlenderNeRF add-on: currently only supports SOF with Instant NGP","2022-07-13T22:39:28"]