[{"data":1,"prerenderedAt":-1},["ShallowReactive",2],{"similar-SuLvXiangXin--zipnerf-pytorch":3,"tool-SuLvXiangXin--zipnerf-pytorch":64},[4,17,27,35,43,56],{"id":5,"name":6,"github_repo":7,"description_zh":8,"stars":9,"difficulty_score":10,"last_commit_at":11,"category_tags":12,"status":16},3808,"stable-diffusion-webui","AUTOMATIC1111\u002Fstable-diffusion-webui","stable-diffusion-webui 是一个基于 Gradio 构建的网页版操作界面，旨在让用户能够轻松地在本地运行和使用强大的 Stable Diffusion 图像生成模型。它解决了原始模型依赖命令行、操作门槛高且功能分散的痛点，将复杂的 AI 绘图流程整合进一个直观易用的图形化平台。\n\n无论是希望快速上手的普通创作者、需要精细控制画面细节的设计师，还是想要深入探索模型潜力的开发者与研究人员，都能从中获益。其核心亮点在于极高的功能丰富度：不仅支持文生图、图生图、局部重绘（Inpainting）和外绘（Outpainting）等基础模式，还独创了注意力机制调整、提示词矩阵、负向提示词以及“高清修复”等高级功能。此外，它内置了 GFPGAN 和 CodeFormer 等人脸修复工具，支持多种神经网络放大算法，并允许用户通过插件系统无限扩展能力。即使是显存有限的设备，stable-diffusion-webui 也提供了相应的优化选项，让高质量的 AI 艺术创作变得触手可及。",162132,3,"2026-04-05T11:01:52",[13,14,15],"开发框架","图像","Agent","ready",{"id":18,"name":19,"github_repo":20,"description_zh":21,"stars":22,"difficulty_score":23,"last_commit_at":24,"category_tags":25,"status":16},1381,"everything-claude-code","affaan-m\u002Feverything-claude-code","everything-claude-code 是一套专为 AI 编程助手（如 Claude Code、Codex、Cursor 等）打造的高性能优化系统。它不仅仅是一组配置文件，而是一个经过长期实战打磨的完整框架，旨在解决 AI 代理在实际开发中面临的效率低下、记忆丢失、安全隐患及缺乏持续学习能力等核心痛点。\n\n通过引入技能模块化、直觉增强、记忆持久化机制以及内置的安全扫描功能，everything-claude-code 能显著提升 AI 在复杂任务中的表现，帮助开发者构建更稳定、更智能的生产级 AI 代理。其独特的“研究优先”开发理念和针对 Token 消耗的优化策略，使得模型响应更快、成本更低，同时有效防御潜在的攻击向量。\n\n这套工具特别适合软件开发者、AI 研究人员以及希望深度定制 AI 工作流的技术团队使用。无论您是在构建大型代码库，还是需要 AI 协助进行安全审计与自动化测试，everything-claude-code 都能提供强大的底层支持。作为一个曾荣获 Anthropic 黑客大奖的开源项目，它融合了多语言支持与丰富的实战钩子（hooks），让 AI 真正成长为懂上",140436,2,"2026-04-05T23:32:43",[13,15,26],"语言模型",{"id":28,"name":29,"github_repo":30,"description_zh":31,"stars":32,"difficulty_score":23,"last_commit_at":33,"category_tags":34,"status":16},2271,"ComfyUI","Comfy-Org\u002FComfyUI","ComfyUI 是一款功能强大且高度模块化的视觉 AI 引擎，专为设计和执行复杂的 Stable Diffusion 图像生成流程而打造。它摒弃了传统的代码编写模式，采用直观的节点式流程图界面，让用户通过连接不同的功能模块即可构建个性化的生成管线。\n\n这一设计巧妙解决了高级 AI 绘图工作流配置复杂、灵活性不足的痛点。用户无需具备编程背景，也能自由组合模型、调整参数并实时预览效果，轻松实现从基础文生图到多步骤高清修复等各类复杂任务。ComfyUI 拥有极佳的兼容性，不仅支持 Windows、macOS 和 Linux 全平台，还广泛适配 NVIDIA、AMD、Intel 及苹果 Silicon 等多种硬件架构，并率先支持 SDXL、Flux、SD3 等前沿模型。\n\n无论是希望深入探索算法潜力的研究人员和开发者，还是追求极致创作自由度的设计师与资深 AI 绘画爱好者，ComfyUI 都能提供强大的支持。其独特的模块化架构允许社区不断扩展新功能，使其成为当前最灵活、生态最丰富的开源扩散模型工具之一，帮助用户将创意高效转化为现实。",107662,"2026-04-03T11:11:01",[13,14,15],{"id":36,"name":37,"github_repo":38,"description_zh":39,"stars":40,"difficulty_score":23,"last_commit_at":41,"category_tags":42,"status":16},3704,"NextChat","ChatGPTNextWeb\u002FNextChat","NextChat 是一款轻量且极速的 AI 助手，旨在为用户提供流畅、跨平台的大模型交互体验。它完美解决了用户在多设备间切换时难以保持对话连续性，以及面对众多 AI 模型不知如何统一管理的痛点。无论是日常办公、学习辅助还是创意激发，NextChat 都能让用户随时随地通过网页、iOS、Android、Windows、MacOS 或 Linux 端无缝接入智能服务。\n\n这款工具非常适合普通用户、学生、职场人士以及需要私有化部署的企业团队使用。对于开发者而言，它也提供了便捷的自托管方案，支持一键部署到 Vercel 或 Zeabur 等平台。\n\nNextChat 的核心亮点在于其广泛的模型兼容性，原生支持 Claude、DeepSeek、GPT-4 及 Gemini Pro 等主流大模型，让用户在一个界面即可自由切换不同 AI 能力。此外，它还率先支持 MCP（Model Context Protocol）协议，增强了上下文处理能力。针对企业用户，NextChat 提供专业版解决方案，具备品牌定制、细粒度权限控制、内部知识库整合及安全审计等功能，满足公司对数据隐私和个性化管理的高标准要求。",87618,"2026-04-05T07:20:52",[13,26],{"id":44,"name":45,"github_repo":46,"description_zh":47,"stars":48,"difficulty_score":23,"last_commit_at":49,"category_tags":50,"status":16},2268,"ML-For-Beginners","microsoft\u002FML-For-Beginners","ML-For-Beginners 是由微软推出的一套系统化机器学习入门课程，旨在帮助零基础用户轻松掌握经典机器学习知识。这套课程将学习路径规划为 12 周，包含 26 节精炼课程和 52 道配套测验，内容涵盖从基础概念到实际应用的完整流程，有效解决了初学者面对庞大知识体系时无从下手、缺乏结构化指导的痛点。\n\n无论是希望转型的开发者、需要补充算法背景的研究人员，还是对人工智能充满好奇的普通爱好者，都能从中受益。课程不仅提供了清晰的理论讲解，还强调动手实践，让用户在循序渐进中建立扎实的技能基础。其独特的亮点在于强大的多语言支持，通过自动化机制提供了包括简体中文在内的 50 多种语言版本，极大地降低了全球不同背景用户的学习门槛。此外，项目采用开源协作模式，社区活跃且内容持续更新，确保学习者能获取前沿且准确的技术资讯。如果你正寻找一条清晰、友好且专业的机器学习入门之路，ML-For-Beginners 将是理想的起点。",84991,"2026-04-05T10:45:23",[14,51,52,53,15,54,26,13,55],"数据工具","视频","插件","其他","音频",{"id":57,"name":58,"github_repo":59,"description_zh":60,"stars":61,"difficulty_score":10,"last_commit_at":62,"category_tags":63,"status":16},3128,"ragflow","infiniflow\u002Fragflow","RAGFlow 是一款领先的开源检索增强生成（RAG）引擎，旨在为大语言模型构建更精准、可靠的上下文层。它巧妙地将前沿的 RAG 技术与智能体（Agent）能力相结合，不仅支持从各类文档中高效提取知识，还能让模型基于这些知识进行逻辑推理和任务执行。\n\n在大模型应用中，幻觉问题和知识滞后是常见痛点。RAGFlow 通过深度解析复杂文档结构（如表格、图表及混合排版），显著提升了信息检索的准确度，从而有效减少模型“胡编乱造”的现象，确保回答既有据可依又具备时效性。其内置的智能体机制更进一步，使系统不仅能回答问题，还能自主规划步骤解决复杂问题。\n\n这款工具特别适合开发者、企业技术团队以及 AI 研究人员使用。无论是希望快速搭建私有知识库问答系统，还是致力于探索大模型在垂直领域落地的创新者，都能从中受益。RAGFlow 提供了可视化的工作流编排界面和灵活的 API 接口，既降低了非算法背景用户的上手门槛，也满足了专业开发者对系统深度定制的需求。作为基于 Apache 2.0 协议开源的项目，它正成为连接通用大模型与行业专有知识之间的重要桥梁。",77062,"2026-04-04T04:44:48",[15,14,13,26,54],{"id":65,"github_repo":66,"name":67,"description_en":68,"description_zh":69,"ai_summary_zh":69,"readme_en":70,"readme_zh":71,"quickstart_zh":72,"use_case_zh":73,"hero_image_url":74,"owner_login":75,"owner_name":76,"owner_avatar_url":77,"owner_bio":78,"owner_company":79,"owner_location":80,"owner_email":81,"owner_twitter":82,"owner_website":83,"owner_url":84,"languages":85,"stars":110,"forks":111,"last_commit_at":112,"license":113,"difficulty_score":114,"env_os":115,"env_gpu":116,"env_ram":115,"env_deps":117,"category_tags":128,"github_topics":129,"view_count":10,"oss_zip_url":82,"oss_zip_packed_at":82,"status":16,"created_at":132,"updated_at":133,"faqs":134,"releases":163},514,"SuLvXiangXin\u002Fzipnerf-pytorch","zipnerf-pytorch","Unofficial implementation of ZipNeRF","zipnerf-pytorch 是经典论文 ZipNeRF 的 PyTorch 非官方实现，致力于将神经辐射场（NeRF）技术转化为高效可用的 3D 重建方案。它主要解决传统 NeRF 渲染速度慢、计算资源消耗大的痛点，利用反锯齿网格化方法在保持高画质的同时显著提升训练与推理效率。\n\n这个项目特别适合计算机视觉开发者、3D 图形研究人员以及对数字孪生感兴趣的技术爱好者。除了核心的渲染能力外，zipnerf-pytorch 还拓展了丰富的功能生态：支持 NerfStudio 集成、提供网格（Mesh）提取功能，并兼容 Intel DPC++ 后端以适配更多硬件环境。针对常见的近景漂浮物问题，代码中还加入了梯度缩放优化策略。性能测试显示，其 PSNR 指标已非常接近原始论文结果。如果你需要在本地复现高质量 3D 场景或探索神经渲染的前沿应用，zipnerf-pytorch 是一个灵活且强大的开源选择。","# ZipNeRF\n\nAn unofficial pytorch implementation of \n\"Zip-NeRF: Anti-Aliased Grid-Based Neural Radiance Fields\" \n[https:\u002F\u002Farxiv.org\u002Fabs\u002F2304.06706](https:\u002F\u002Farxiv.org\u002Fabs\u002F2304.06706).\nThis work is based on [multinerf](https:\u002F\u002Fgithub.com\u002Fgoogle-research\u002Fmultinerf), so features in refnerf,rawnerf,mipnerf360 are also available.\n\n## News\n- (2024.2.2) Add support for nerfstudio, credits to [Ling Jing](https:\u002F\u002Fgithub.com\u002FJing1Ling).\n- (2024.12.8) Add support for Intel's DPC++ backend, credits to [Zong Wei](https:\u002F\u002Fgithub.com\u002Fzongwave).\n- (2023.6.22) Add extracting mesh through tsdf; add [gradient scaling](https:\u002F\u002Fgradient-scaling.github.io\u002F) for near plane floaters.\n- (2023.5.26) Implement the latest version of ZipNeRF [https:\u002F\u002Farxiv.org\u002Fabs\u002F2304.06706](https:\u002F\u002Farxiv.org\u002Fabs\u002F2304.06706).\n- (2023.5.22) Add extracting mesh; add logging,checkpointing system\n\n## Results\nNew results(5.27): [Pretrained weights](https:\u002F\u002Fdrive.google.com\u002Fdrive\u002Ffolders\u002F1W1jFa519m7Ye9Pcz5N_30TMPM-7KTTBc?usp=sharing)\n\n360_v2:\n\nhttps:\u002F\u002Fgithub.com\u002FSuLvXiangXin\u002Fzipnerf-pytorch\u002Fassets\u002F83005605\u002F2b276e48-2dc4-4508-8441-e90ec963f7d9\n\n\n360_v2_glo:(fewer floaters, but worse metric)\n\n\nhttps:\u002F\u002Fgithub.com\u002FSuLvXiangXin\u002Fzipnerf-pytorch\u002Fassets\u002F83005605\u002Fbddb5610-2a4f-4981-8e17-71326a24d291\n\n\n\n\n\n\nmesh results(5.27):\n\n![mesh](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FSuLvXiangXin_zipnerf-pytorch_readme_2499952a91e6.png)\n\n\n\nMipnerf360(PSNR):\n\n|           | bicycle | garden | stump | room  | counter | kitchen | bonsai |\n|:---------:|:-------:|:------:|:-----:|:-----:|:-------:|:-------:|:------:|\n|   Paper   |  25.80  | 28.20  | 27.55 | 32.65 |  29.38  |  32.50  | 34.46  |\n| This repo |  25.44  | 27.98  | 26.75 | 32.13 |  29.10  |  32.63  | 34.20  |\n\n\nBlender(PSNR):\n\n|           | chair | drums | ficus | hotdog | lego  | materials |  mic  | ship  |\n|:---------:|:-----:|:-----:|:-----:|:------:|:-----:|:---------:|:-----:|:-----:|\n|   Paper   | 34.84 | 25.84 | 33.90 | 37.14  | 34.84 |   31.66   | 35.15 | 31.38 |\n| This repo | 35.26 | 25.51 | 32.66 | 36.56  | 35.04 |   29.43   | 34.93 | 31.38 |\n\nFor Mipnerf360 dataset, the model is trained with a downsample factor of 4 for outdoor scene and 2 for indoor scene(same as in paper).\nTraining speed is about 1.5x slower than paper(1.5 hours on 8 A6000).\n\nThe hash decay loss seems to have little effect(?), as many floaters can be found in the final results in both experiments (especially in Blender).\n\n## Install CUDA backend\n\n```\n# Clone the repo.\ngit clone https:\u002F\u002Fgithub.com\u002FSuLvXiangXin\u002Fzipnerf-pytorch.git\ncd zipnerf-pytorch\n\n# Make a conda environment.\nconda create --name zipnerf python=3.9\nconda activate zipnerf\n\n# Install requirements.\npip install -r requirements.txt\n\n# Install other cuda extensions\npip install .\u002Fextensions\u002Fcuda\n\n# Install nvdiffrast (optional, for textured mesh)\ngit clone https:\u002F\u002Fgithub.com\u002FNVlabs\u002Fnvdiffrast\npip install .\u002Fnvdiffrast\n\n# Install a specific cuda version of torch_scatter \n# see more detail at https:\u002F\u002Fgithub.com\u002Frusty1s\u002Fpytorch_scatter\nCUDA=cu117\npip install torch-scatter -f https:\u002F\u002Fdata.pyg.org\u002Fwhl\u002Ftorch-2.0.0+${CUDA}.html\n```\n\n\n## Install DPCPP backend\n\n```\n  # Install drivers, oneAPI and ipex for Intel GPUs\n  Following the steps in the below page to install gpu drivers, oneAPI BaseKit, and pytorch+ipex (abbr. intel-extension-for-pytorch):\n  https:\u002F\u002Fintel.github.io\u002Fintel-extension-for-pytorch\u002Fxpu\u002F1.13.120+xpu\u002Ftutorials\u002Finstallation.html\n  For pytorch and Ipex versions, please install the version 1.13.120 with\n\n  python -m pip install torch==1.13.0a0+git6c9b55e intel_extension_for_pytorch==1.13.120+xpu -f https:\u002F\u002Fdeveloper.intel.com\u002Fipex-whl-stable-xpu\n\n  After the installation is done, make sure it is successfully by running the example provided by\n  https:\u002F\u002Fgithub.com\u002Fintel\u002Fintel-extension-for-pytorch\u002Ftree\u002Frelease\u002Fxpu\u002F1.13.120#inference-on-gpu\n```\n\n  ### Preparing environment\n```\n  export DPCPP_HOME=path\u002Fto\u002Fllvm  # path to the folder for llvm, default value:~\n  bash scripts\u002Fset_dpcpp_env.sh intel # for intel's gpu\n  bash scripts\u002Fset_dpcpp_env.sh nvidia # for nvidia's gpu\n```\n\n  ### Reference of DPCPP support for CUDA\n```\n  https:\u002F\u002Fgithub.com\u002Fintel\u002Fllvm\u002Fblob\u002Fsycl\u002Fsycl\u002Fdoc\u002FGetStartedGuide.md#build-dpc-toolchain-with-support-for-nvidia-cuda\n```\n\n\n## Dataset\n[mipnerf360](http:\u002F\u002Fstorage.googleapis.com\u002Fgresearch\u002Frefraw360\u002F360_v2.zip)\n\n[refnerf](https:\u002F\u002Fstorage.googleapis.com\u002Fgresearch\u002Frefraw360\u002Fref.zip)\n\n[nerf_synthetic](https:\u002F\u002Fdrive.google.com\u002Fdrive\u002Ffolders\u002F128yBriW1IG_3NJ5Rp7APSTZsJqdJdfc1)\n\n[nerf_llff_data](https:\u002F\u002Fdrive.google.com\u002Fdrive\u002Ffolders\u002F128yBriW1IG_3NJ5Rp7APSTZsJqdJdfc1)\n\n```\nmkdir data\ncd data\n\n# e.g. mipnerf360 data\nwget http:\u002F\u002Fstorage.googleapis.com\u002Fgresearch\u002Frefraw360\u002F360_v2.zip\nunzip 360_v2.zip\n```\n\n## Train\n```\n# Configure your training (DDP? fp16? ...)\n# see https:\u002F\u002Fhuggingface.co\u002Fdocs\u002Faccelerate\u002Findex for details\naccelerate config\n\n# Where your data is \nDATA_DIR=data\u002F360_v2\u002Fbicycle\nEXP_NAME=360_v2\u002Fbicycle\n\n# Experiment will be conducted under \"exp\u002F${EXP_NAME}\" folder\n# \"--gin_configs=configs\u002F360.gin\" can be seen as a default config \n# and you can add specific config useing --gin_bindings=\"...\" \naccelerate launch train.py \\\n    --gin_configs=configs\u002F360.gin \\\n    --gin_bindings=\"Config.data_dir = '${DATA_DIR}'\" \\\n    --gin_bindings=\"Config.exp_name = '${EXP_NAME}'\" \\\n    --gin_bindings=\"Config.factor = 4\"\n\n# or you can also run without accelerate (without DDP)\nCUDA_VISIBLE_DEVICES=0 python train.py \\\n    --gin_configs=configs\u002F360.gin \\\n    --gin_bindings=\"Config.data_dir = '${DATA_DIR}'\" \\\n    --gin_bindings=\"Config.exp_name = '${EXP_NAME}'\" \\\n      --gin_bindings=\"Config.factor = 4\" \n\n# alternatively you can use an example training script \nbash scripts\u002Ftrain_360.sh\n\n# blender dataset\nbash scripts\u002Ftrain_blender.sh\n\n# metric, render image, etc can be viewed through tensorboard\ntensorboard --logdir \"exp\u002F${EXP_NAME}\"\n\n```\n\n## Train & Render with DPCPP backend\n```\n# add config in command line\n      --gin_bindings=\"Config.dpcpp_backend = True\" \\\n```\n\n### Render\nRendering results can be found in the directory `exp\u002F${EXP_NAME}\u002Frender`\n```\naccelerate launch render.py \\\n    --gin_configs=configs\u002F360.gin \\\n    --gin_bindings=\"Config.data_dir = '${DATA_DIR}'\" \\\n    --gin_bindings=\"Config.exp_name = '${EXP_NAME}'\" \\\n    --gin_bindings=\"Config.render_path = True\" \\\n    --gin_bindings=\"Config.render_path_frames = 480\" \\\n    --gin_bindings=\"Config.render_video_fps = 60\" \\\n    --gin_bindings=\"Config.factor = 4\"  \n\n# alternatively you can use an example rendering script \nbash scripts\u002Frender_360.sh\n```\n## Evaluate\nEvaluating results can be found in the directory `exp\u002F${EXP_NAME}\u002Ftest_preds`\n```\n# using the same exp_name as in training\naccelerate launch eval.py \\\n    --gin_configs=configs\u002F360.gin \\\n    --gin_bindings=\"Config.data_dir = '${DATA_DIR}'\" \\\n    --gin_bindings=\"Config.exp_name = '${EXP_NAME}'\" \\\n    --gin_bindings=\"Config.factor = 4\"\n\n\n# alternatively you can use an example evaluating script \nbash scripts\u002Feval_360.sh\n```\n\n## Use NerfStudio\nhttps:\u002F\u002Fgithub.com\u002Fnerfstudio-project\u002Fnerfstudio  \nNerfstudio provides a simple API that allows for a simplified end-to-end process of creating, training, and testing NeRFs. The library supports a more interpretable implementation of NeRFs by modularizing each component. \nYou can use the viewer provided by nerfstudio to view the render results during the training process.\n### Install \n```\npip install nerfstudio  \n# cd zipnerf-pytorch\npip install -e . \nns-install-cli\n```\n\n### Train & eval \n```\nns-train zipnerf --data {DATA_DIR\u002FSCENE}\nns-eval --load-config {outputs\u002FSCENE\u002Fzipnerf\u002FEXP_DIR\u002Fconfig.yml}\n\nns-train zipnerf -h  # show the full list of model configuration options.\nns-train zipnerf colmap -h  # dataparset configuration options\n```\n*Nerfstudio's ColmapDataParser rounds down the image size when downscaling, which is different from the 360_v2 dataset.You can use nerfstudio to reprocess the data or modify the code logic for downscale in the library as dicussed in https:\u002F\u002Fgithub.com\u002Fnerfstudio-project\u002Fnerfstudio\u002Fissues\u002F1438.  \n*Nerfstudio's train\u002Feval division strategy is different from this repo. Final training and evaluation results may vary.\n\nFor more usage or information, please see https:\u002F\u002Fgithub.com\u002Fnerfstudio-project\u002Fnerfstudio.\n\n### Configuration \n#### for Zipnerf-pytorch\nYou can create a new .gin file and pass in the 'gin_file' list in ZipNerfModelConfig of zipnerf_ns\u002Fzipnerf_config.py or update the contents of the default .gin file.\n#### for nerfstudio\n```\nns-train zipnerf -h\nns-train zipnerf colmap -h\n```\nYou can modify zipnerf_ns\u002Fzipnerf_config.py, or use the instruction.\n\n### Viewer\nGiven a pretrained model checkpoint, you can start the viewer by running\n```\nns-viewer --load-config outputs\u002FSCENE\u002Fzipnerf\u002FEXP_TIME\u002Fconfig.yml  \n```\n\n#### Remote Server\nIf you are running on a remote machine, you will need to port forward the websocket port (defaults to 7007). SSH must be set up on the remote machine. Then run the following on this machine:\n```\nssh -L \u003Cport>:localhost:\u003Cport> USER@REMOTE.SERVER.IP\n```\n\n## Extract mesh\nMesh results can be found in the directory `exp\u002F${EXP_NAME}\u002Fmesh`\n```\n# more configuration can be found in internal\u002Fconfigs.py\naccelerate launch extract.py \\\n    --gin_configs=configs\u002F360.gin \\\n    --gin_bindings=\"Config.data_dir = '${DATA_DIR}'\" \\\n    --gin_bindings=\"Config.exp_name = '${EXP_NAME}'\" \\\n    --gin_bindings=\"Config.factor = 4\"\n#    --gin_bindings=\"Config.mesh_radius = 1\"  # (optional) smaller for more details e.g. 0.2 in bicycle scene\n#    --gin_bindings=\"Config.isosurface_threshold = 20\"  # (optional) empirical value\n#    --gin_bindings=\"Config.mesh_voxels=134217728\"  # (optional) number of voxels used to extract mesh, e.g. 134217728 equals to 512**3 . Smaller values may solve OutoFMemoryError\n#    --gin_bindings=\"Config.vertex_color = True\"  # (optional) saving mesh with vertex color instead of atlas which is much slower but with more details.\n#    --gin_bindings=\"Config.vertex_projection = True\"  # (optional) use projection for vertex color\n\n# or extracting mesh using tsdf method\naccelerate launch tsdf.py \\\n    --gin_configs=configs\u002F360.gin \\\n    --gin_bindings=\"Config.data_dir = '${DATA_DIR}'\" \\\n    --gin_bindings=\"Config.exp_name = '${EXP_NAME}'\" \\\n    --gin_bindings=\"Config.factor = 4\"\n\n# alternatively you can use an example script \nbash scripts\u002Fextract_360.sh\n```\n\n## OutOfMemory\nyou can decrease the total batch size by \nadding e.g.  `--gin_bindings=\"Config.batch_size = 8192\" `, \nor decrease the test chunk size by adding e.g.  `--gin_bindings=\"Config.render_chunk_size = 8192\" `,\nor use more GPU by configure `accelerate config` .\n\n\n## Preparing custom data\nMore details can be found at https:\u002F\u002Fgithub.com\u002Fgoogle-research\u002Fmultinerf\n```\nDATA_DIR=my_dataset_dir\nbash scripts\u002Flocal_colmap_and_resize.sh ${DATA_DIR}\n```\n\n## TODO\n- [x] Add MultiScale training and testing\n\n## Citation\n```\n@misc{barron2023zipnerf,\n      title={Zip-NeRF: Anti-Aliased Grid-Based Neural Radiance Fields}, \n      author={Jonathan T. Barron and Ben Mildenhall and Dor Verbin and Pratul P. Srinivasan and Peter Hedman},\n      year={2023},\n      eprint={2304.06706},\n      archivePrefix={arXiv},\n      primaryClass={cs.CV}\n}\n\n@misc{multinerf2022,\n      title={{MultiNeRF}: {A} {Code} {Release} for {Mip-NeRF} 360, {Ref-NeRF}, and {RawNeRF}},\n      author={Ben Mildenhall and Dor Verbin and Pratul P. Srinivasan and Peter Hedman and Ricardo Martin-Brualla and Jonathan T. Barron},\n      year={2022},\n      url={https:\u002F\u002Fgithub.com\u002Fgoogle-research\u002Fmultinerf},\n}\n\n@Misc{accelerate,\n  title =        {Accelerate: Training and inference at scale made simple, efficient and adaptable.},\n  author =       {Sylvain Gugger, Lysandre Debut, Thomas Wolf, Philipp Schmid, Zachary Mueller, Sourab Mangrulkar},\n  howpublished = {\\url{https:\u002F\u002Fgithub.com\u002Fhuggingface\u002Faccelerate}},\n  year =         {2022}\n}\n\n@misc{torch-ngp,\n    Author = {Jiaxiang Tang},\n    Year = {2022},\n    Note = {https:\u002F\u002Fgithub.com\u002Fashawkey\u002Ftorch-ngp},\n    Title = {Torch-ngp: a PyTorch implementation of instant-ngp}\n}\n```\n\n## Acknowledgements\nThis work is based on my another repo https:\u002F\u002Fgithub.com\u002FSuLvXiangXin\u002Fmultinerf-pytorch, \nwhich is basically a pytorch translation from [multinerf](https:\u002F\u002Fgithub.com\u002Fgoogle-research\u002Fmultinerf)\n\n- Thanks to [multinerf](https:\u002F\u002Fgithub.com\u002Fgoogle-research\u002Fmultinerf) for amazing multinerf(MipNeRF360,RefNeRF,RawNeRF) implementation\n- Thanks to [accelerate](https:\u002F\u002Fgithub.com\u002Fhuggingface\u002Faccelerate) for distributed training\n- Thanks to [torch-ngp](https:\u002F\u002Fgithub.com\u002Fashawkey\u002Ftorch-ngp) for super useful hashencoder\n- Thanks to [Yurui Chen](https:\u002F\u002Fgithub.com\u002F519401113) for discussing the details of the paper.\n","# ZipNeRF\n\n一个非官方的 PyTorch 实现，用于“抗锯齿基于网格的神经辐射场”（Anti-Aliased Grid-Based Neural Radiance Fields）[https:\u002F\u002Farxiv.org\u002Fabs\u002F2304.06706](https:\u002F\u002Farxiv.org\u002Fabs\u002F2304.06706)。\n本项目基于 [multinerf](https:\u002F\u002Fgithub.com\u002Fgoogle-research\u002Fmultinerf)，因此 refnerf、rawnerf、mipnerf360 中的功能也均适用。\n\n## 更新日志\n- (2024.2.2) 添加对 nerfstudio 的支持，感谢 [Ling Jing](https:\u002F\u002Fgithub.com\u002FJing1Ling)。\n- (2024.12.8) 添加对 Intel 的 DPC++ 后端的支持，感谢 [Zong Wei](https:\u002F\u002Fgithub.com\u002Fzongwave)。\n- (2023.6.22) 添加通过 tsdf 提取网格；为近景漂浮伪影添加 [梯度缩放](https:\u002F\u002Fgradient-scaling.github.io\u002F)。\n- (2023.5.26) 实现最新版本的 ZipNeRF [https:\u002F\u002Farxiv.org\u002Fabs\u002F2304.06706](https:\u002F\u002Farxiv.org\u002Fabs\u002F2304.06706)。\n- (2023.5.22) 添加提取网格；添加日志记录、检查点系统\n\n## 结果\n新结果 (5.27): [预训练权重](https:\u002F\u002Fdrive.google.com\u002Fdrive\u002Ffolders\u002F1W1jFa519m7Ye9Pcz5N_30TMPM-7KTTBc?usp=sharing)\n\n360_v2:\n\nhttps:\u002F\u002Fgithub.com\u002FSuLvXiangXin\u002Fzipnerf-pytorch\u002Fassets\u002F83005605\u002F2b276e48-2dc4-4508-8441-e90ec963f7d9\n\n\n360_v2_glo:(更少的漂浮伪影，但指标较差)\n\n\nhttps:\u002F\u002Fgithub.com\u002FSuLvXiangXin\u002Fzipnerf-pytorch\u002Fassets\u002F83005605\u002Fbddb5610-2a4f-4981-8e17-71326a24d291\n\n\n\n\n\n\n网格结果 (5.27):\n\n![mesh](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FSuLvXiangXin_zipnerf-pytorch_readme_2499952a91e6.png)\n\n\n\nMipNeRF360(峰值信噪比 (PSNR)):\n\n|           | bicycle | garden | stump | room  | counter | kitchen | bonsai |\n|:---------:|:-------:|:------:|:-----:|:-----:|:-------:|:-------:|:------:|\n|   论文   |  25.80  | 28.20  | 27.55 | 32.65 |  29.38  |  32.50  | 34.46  |\n| 本仓库 |  25.44  | 27.98  | 26.75 | 32.13 |  29.10  |  32.63  | 34.20  |\n\n\nBlender(峰值信噪比 (PSNR)):\n\n|           | chair | drums | ficus | hotdog | lego  | materials |  mic  | ship  |\n|:---------:|:-----:|:-----:|:-----:|:------:|:-----:|:---------:|:-----:|:-----:|\n|   论文   | 34.84 | 25.84 | 33.90 | 37.14  | 34.84 |   31.66   | 35.15 | 31.38 |\n| 本仓库 | 35.26 | 25.51 | 32.66 | 36.56  | 35.04 |   29.43   | 34.93 | 31.38 |\n\n对于 MipNeRF360 数据集，模型使用 4 的下采样因子 (downsample factor) 训练室外场景，室内场景使用 2（与论文相同）。\n训练速度约为论文的 1.5 倍慢（在 8 张 A6000 上耗时约 1.5 小时）。\n\nHash 衰减损失 (hash decay loss) 似乎影响很小 (?), 因为在两个实验的最终结果中都能发现许多漂浮伪影 (floaters)（尤其是在 Blender 中）。\n\n## 安装 CUDA 后端\n\n```\n# Clone the repo.\ngit clone https:\u002F\u002Fgithub.com\u002FSuLvXiangXin\u002Fzipnerf-pytorch.git\ncd zipnerf-pytorch\n\n# Make a conda environment.\nconda create --name zipnerf python=3.9\nconda activate zipnerf\n\n# Install requirements.\npip install -r requirements.txt\n\n# Install other cuda extensions\npip install .\u002Fextensions\u002Fcuda\n\n# Install nvdiffrast (optional, for textured mesh)\ngit clone https:\u002F\u002Fgithub.com\u002FNVlabs\u002Fnvdiffrast\npip install .\u002Fnvdiffrast\n\n# Install a specific cuda version of torch_scatter \n# see more detail at https:\u002F\u002Fgithub.com\u002Frusty1s\u002Fpytorch_scatter\nCUDA=cu117\npip install torch-scatter -f https:\u002F\u002Fdata.pyg.org\u002Fwhl\u002Ftorch-2.0.0+${CUDA}.html\n```\n\n\n## 安装 DPCPP 后端\n\n```\n  # Install drivers, oneAPI and ipex for Intel GPUs\n  Following the steps in the below page to install gpu drivers, oneAPI BaseKit, and pytorch+ipex (abbr. intel-extension-for-pytorch):\n  https:\u002F\u002Fintel.github.io\u002Fintel-extension-for-pytorch\u002Fxpu\u002F1.13.120+xpu\u002Ftutorials\u002Finstallation.html\n  For pytorch and Ipex versions, please install the version 1.13.120 with\n\n  python -m pip install torch==1.13.0a0+git6c9b55e intel_extension_for_pytorch==1.13.120+xpu -f https:\u002F\u002Fdeveloper.intel.com\u002Fipex-whl-stable-xpu\n\n  After the installation is done, make sure it is successfully by running the example provided by\n  https:\u002F\u002Fgithub.com\u002Fintel\u002Fintel-extension-for-pytorch\u002Ftree\u002Frelease\u002Fxpu\u002F1.13.120#inference-on-gpu\n```\n\n  ### 准备环境\n```\n  export DPCPP_HOME=path\u002Fto\u002Fllvm  # path to the folder for llvm, default value:~\n  bash scripts\u002Fset_dpcpp_env.sh intel # for intel's gpu\n  bash scripts\u002Fset_dpcpp_env.sh nvidia # for nvidia's gpu\n```\n\n  ### DPCPP 支持 CUDA 参考\n```\n  https:\u002F\u002Fgithub.com\u002Fintel\u002Fllvm\u002Fblob\u002Fsycl\u002Fsycl\u002Fdoc\u002FGetStartedGuide.md#build-dpc-toolchain-with-support-for-nvidia-cuda\n```\n\n\n## 数据集\n[mipnerf360](http:\u002F\u002Fstorage.googleapis.com\u002Fgresearch\u002Frefraw360\u002F360_v2.zip)\n\n[refnerf](https:\u002F\u002Fstorage.googleapis.com\u002Fgresearch\u002Frefraw360\u002Fref.zip)\n\n[nerf_synthetic](https:\u002F\u002Fdrive.google.com\u002Fdrive\u002Ffolders\u002F128yBriW1IG_3NJ5Rp7APSTZsJqdJdfc1)\n\n[nerf_llff_data](https:\u002F\u002Fdrive.google.com\u002Fdrive\u002Ffolders\u002F128yBriW1IG_3NJ5Rp7APSTZsJqdJdfc1)\n\n```\nmkdir data\ncd data\n\n# e.g. mipnerf360 data\nwget http:\u002F\u002Fstorage.googleapis.com\u002Fgresearch\u002Frefraw360\u002F360_v2.zip\nunzip 360_v2.zip\n```\n\n## 训练\n```\n# Configure your training (DDP? fp16? ...)\n# see https:\u002F\u002Fhuggingface.co\u002Fdocs\u002Faccelerate\u002Findex for details\naccelerate config\n\n# Where your data is \nDATA_DIR=data\u002F360_v2\u002Fbicycle\nEXP_NAME=360_v2\u002Fbicycle\n\n# Experiment will be conducted under \"exp\u002F${EXP_NAME}\" folder\n# \"--gin_configs=configs\u002F360.gin\" can be seen as a default config \n# and you can add specific config useing --gin_bindings=\"...\" \naccelerate launch train.py \\\n    --gin_configs=configs\u002F360.gin \\\n    --gin_bindings=\"Config.data_dir = '${DATA_DIR}'\" \\\n    --gin_bindings=\"Config.exp_name = '${EXP_NAME}'\" \\\n    --gin_bindings=\"Config.factor = 4\"\n\n# or you can also run without accelerate (without DDP)\nCUDA_VISIBLE_DEVICES=0 python train.py \\\n    --gin_configs=configs\u002F360.gin \\\n    --gin_bindings=\"Config.data_dir = '${DATA_DIR}'\" \\\n    --gin_bindings=\"Config.exp_name = '${EXP_NAME}'\" \\\n      --gin_bindings=\"Config.factor = 4\" \n\n# alternatively you can use an example training script \nbash scripts\u002Ftrain_360.sh\n\n# blender dataset\nbash scripts\u002Ftrain_blender.sh\n\n# metric, render image, etc can be viewed through tensorboard\ntensorboard --logdir \"exp\u002F${EXP_NAME}\"\n\n```\n\n## 使用 DPCPP 后端进行训练和渲染\n```\n# add config in command line\n      --gin_bindings=\"Config.dpcpp_backend = True\" \\\n```\n\n### 渲染\n渲染结果可在目录 `exp\u002F${EXP_NAME}\u002Frender` 中找到\n```\naccelerate launch render.py \\\n    --gin_configs=configs\u002F360.gin \\\n    --gin_bindings=\"Config.data_dir = '${DATA_DIR}'\" \\\n    --gin_bindings=\"Config.exp_name = '${EXP_NAME}'\" \\\n    --gin_bindings=\"Config.render_path = True\" \\\n    --gin_bindings=\"Config.render_path_frames = 480\" \\\n    --gin_bindings=\"Config.render_video_fps = 60\" \\\n    --gin_bindings=\"Config.factor = 4\"  \n\n# alternatively you can use an example rendering script \nbash scripts\u002Frender_360.sh\n```\n## 评估\n评估结果可在目录 `exp\u002F${EXP_NAME}\u002Ftest_preds` 中找到\n```\n# using the same exp_name as in training\naccelerate launch eval.py \\\n    --gin_configs=configs\u002F360.gin \\\n    --gin_bindings=\"Config.data_dir = '${DATA_DIR}'\" \\\n    --gin_bindings=\"Config.exp_name = '${EXP_NAME}'\" \\\n    --gin_bindings=\"Config.factor = 4\"\n\n\n# alternatively you can use an example evaluating script \nbash scripts\u002Feval_360.sh\n```\n\n## 使用 NerfStudio\nhttps:\u002F\u002Fgithub.com\u002Fnerfstudio-project\u002Fnerfstudio  \nNerfStudio 提供了一个简单的 API（应用程序编程接口），允许简化创建、训练和测试 NeRF（神经辐射场）的端到端流程。该库通过模块化每个组件来支持更可解释的 NeRF 实现。你可以使用 NerfStudio 提供的查看器在训练过程中查看渲染结果。\n### 安装 \n```\npip install nerfstudio  \n# cd zipnerf-pytorch\npip install -e . \nns-install-cli\n```\n\n### 训练与评估 \n```\nns-train zipnerf --data {DATA_DIR\u002FSCENE}\nns-eval --load-config {outputs\u002FSCENE\u002Fzipnerf\u002FEXP_DIR\u002Fconfig.yml}\n\nns-train zipnerf -h  # show the full list of model configuration options.\nns-train zipnerf colmap -h  # dataparset configuration options\n```\n*NerfStudio 的 ColmapDataParser 在下采样时会向下取整图像尺寸，这与 360_v2 数据集不同。你可以使用 NerfStudio 重新处理数据，或者按照 https:\u002F\u002Fgithub.com\u002Fnerfstudio-project\u002Fnerfstudio\u002Fissues\u002F1438 中讨论的那样修改库中的下采样代码逻辑。  \n*NerfStudio 的训练\u002F评估划分策略与此仓库不同。最终的训练和评估结果可能会有所差异。\n\n更多用法或信息，请参阅 https:\u002F\u002Fgithub.com\u002Fnerfstudio-project\u002Fnerfstudio。\n\n### 配置 \n#### 针对 Zipnerf-pytorch\n你可以创建一个新的 .gin 文件，并在 zipnerf_ns\u002Fzipnerf_config.py 的 ZipNerfModelConfig 中传入 'gin_file' 列表，或者更新默认 .gin 文件的内容。\n#### 针对 NerfStudio\n```\nns-train zipnerf -h\nns-train zipnerf colmap -h\n```\n你可以修改 zipnerf_ns\u002Fzipnerf_config.py，或者使用上述指令。\n\n### 查看器\n给定一个预训练的模型检查点，你可以通过运行以下命令启动查看器：\n```\nns-viewer --load-config outputs\u002FSCENE\u002Fzipnerf\u002FEXP_TIME\u002Fconfig.yml  \n```\n\n#### 远程服务器\n如果你在远程机器上运行，你需要进行 WebSocket 端口转发（默认为 7007）。必须在远程机器上设置 SSH。然后在此机器上运行以下内容：\n```\nssh -L \u003Cport>:localhost:\u003Cport> USER@REMOTE.SERVER.IP\n```\n\n## 提取网格\n网格结果可以在目录 `exp\u002F${EXP_NAME}\u002Fmesh` 中找到\n```\n# more configuration can be found in internal\u002Fconfigs.py\naccelerate launch extract.py \\\n    --gin_configs=configs\u002F360.gin \\\n    --gin_bindings=\"Config.data_dir = '${DATA_DIR}'\" \\\n    --gin_bindings=\"Config.exp_name = '${EXP_NAME}'\" \\\n    --gin_bindings=\"Config.factor = 4\"\n#    --gin_bindings=\"Config.mesh_radius = 1\"  # (optional) smaller for more details e.g. 0.2 in bicycle scene\n#    --gin_bindings=\"Config.isosurface_threshold = 20\"  # (optional) empirical value\n#    --gin_bindings=\"Config.mesh_voxels=134217728\"  # (optional) number of voxels used to extract mesh, e.g. 134217728 equals to 512**3 . Smaller values may solve OutoFMemoryError\n#    --gin_bindings=\"Config.vertex_color = True\"  # (optional) saving mesh with vertex color instead of atlas which is much slower but with more details.\n#    --gin_bindings=\"Config.vertex_projection = True\"  # (optional) use projection for vertex color\n\n# or extracting mesh using tsdf method\naccelerate launch tsdf.py \\\n    --gin_configs=configs\u002F360.gin \\\n    --gin_bindings=\"Config.data_dir = '${DATA_DIR}'\" \\\n    --gin_bindings=\"Config.exp_name = '${EXP_NAME}'\" \\\n    --gin_bindings=\"Config.factor = 4\"\n\n# alternatively you can use an example script \nbash scripts\u002Fextract_360.sh\n```\n\n## 内存不足\n你可以通过添加例如 `--gin_bindings=\"Config.batch_size = 8192\"` 来减少总批次大小，或者通过添加例如 `--gin_bindings=\"Config.render_chunk_size = 8192\"` 来减少测试分块大小，或者通过配置 `accelerate config` 使用更多的 GPU。\n\n\n## 准备自定义数据\n更多详细信息可在 https:\u002F\u002Fgithub.com\u002Fgoogle-research\u002Fmultinerf 找到\n```\nDATA_DIR=my_dataset_dir\nbash scripts\u002Flocal_colmap_and_resize.sh ${DATA_DIR}\n```\n\n## 待办事项\n- [x] 添加多尺度训练和测试\n\n## 引用\n```\n@misc{barron2023zipnerf,\n      title={Zip-NeRF: Anti-Aliased Grid-Based Neural Radiance Fields}, \n      author={Jonathan T. Barron and Ben Mildenhall and Dor Verbin and Pratul P. Srinivasan and Peter Hedman},\n      year={2023},\n      eprint={2304.06706},\n      archivePrefix={arXiv},\n      primaryClass={cs.CV}\n}\n\n@misc{multinerf2022,\n      title={{MultiNeRF}: {A} {Code} {Release} for {Mip-NeRF} 360, {Ref-NeRF}, and {RawNeRF}},\n      author={Ben Mildenhall and Dor Verbin and Pratul P. Srinivasan and Peter Hedman and Ricardo Martin-Brualla and Jonathan T. Barron},\n      year={2022},\n      url={https:\u002F\u002Fgithub.com\u002Fgoogle-research\u002Fmultinerf},\n}\n\n@Misc{accelerate,\n  title =        {Accelerate: Training and inference at scale made simple, efficient and adaptable.},\n  author =       {Sylvain Gugger, Lysandre Debut, Thomas Wolf, Philipp Schmid, Zachary Mueller, Sourab Mangrulkar},\n  howpublished = {\\url{https:\u002F\u002Fgithub.com\u002Fhuggingface\u002Faccelerate}},\n  year =         {2022}\n}\n\n@misc{torch-ngp,\n    Author = {Jiaxiang Tang},\n    Year = {2022},\n    Note = {https:\u002F\u002Fgithub.com\u002Fashawkey\u002Ftorch-ngp},\n    Title = {Torch-ngp: a PyTorch implementation of instant-ngp}\n}\n```\n\n## 致谢\n这项工作基于我的另一个仓库 https:\u002F\u002Fgithub.com\u002FSuLvXiangXin\u002Fmultinerf-pytorch，它基本上是从 [multinerf](https:\u002F\u002Fgithub.com\u002Fgoogle-research\u002Fmultinerf) 进行的 PyTorch 移植。\n\n- 感谢 [multinerf](https:\u002F\u002Fgithub.com\u002Fgoogle-research\u002Fmultinerf) 提供了出色的 multinerf(MipNeRF360,RefNeRF,RawNeRF) 实现\n- 感谢 [accelerate](https:\u002F\u002Fgithub.com\u002Fhuggingface\u002Faccelerate) 提供的分布式训练支持\n- 感谢 [torch-ngp](https:\u002F\u002Fgithub.com\u002Fashawkey\u002Ftorch-ngp) 提供的超有用的 hashencoder\n- 感谢 [Yurui Chen](https:\u002F\u002Fgithub.com\u002F519401113) 对论文细节的讨论。","# zipnerf-pytorch 快速上手指南\n\n**ZipNeRF** 是论文 *\"Zip-NeRF: Anti-Aliased Grid-Based Neural Radiance Fields\"* 的非官方 PyTorch 实现。该工具基于 `multinerf`，支持训练、渲染、网格提取及 Nerfstudio 集成，适用于神经辐射场（NeRF）相关研究与应用。\n\n## 环境准备\n\n*   **操作系统**: Linux \u002F Windows (推荐 Linux)\n*   **Python**: 3.9+\n*   **GPU**: NVIDIA GPU (CUDA 后端) 或 Intel GPU (DPC++ 后端)\n*   **依赖管理**: 推荐使用 Conda 管理虚拟环境\n*   **前置软件**: Git, CUDA Toolkit (若使用 CUDA 后端)\n\n## 安装步骤\n\n### 1. 克隆仓库与创建环境\n```bash\ngit clone https:\u002F\u002Fgithub.com\u002FSuLvXiangXin\u002Fzipnerf-pytorch.git\ncd zipnerf-pytorch\n\nconda create --name zipnerf python=3.9\nconda activate zipnerf\n```\n\n### 2. 安装基础依赖\n```bash\npip install -r requirements.txt\n```\n\n### 3. 安装 CUDA 扩展 (核心步骤)\n```bash\npip install .\u002Fextensions\u002Fcuda\n```\n\n### 4. 安装可选组件\n*   **纹理网格提取** (需要 nvdiffrast):\n    ```bash\n    git clone https:\u002F\u002Fgithub.com\u002FNVlabs\u002Fnvdiffrast\n    pip install .\u002Fnvdiffrast\n    ```\n*   **torch_scatter** (需匹配当前 CUDA 版本，例如 cu117):\n    ```bash\n    CUDA=cu117\n    pip install torch-scatter -f https:\u002F\u002Fdata.pyg.org\u002Fwhl\u002Ftorch-2.0.0+${CUDA}.html\n    ```\n\n> **注意**: 如需使用 Intel GPU (DPC++ 后端)，请参考 README 中的 `Install DPCPP backend` 章节配置 OneAPI 和 ipex。\n\n## 基本使用\n\n### 1. 数据集准备\n下载官方数据集并解压至 `data` 目录：\n```bash\nmkdir data\ncd data\n# 以 mipnerf360 为例\nwget http:\u002F\u002Fstorage.googleapis.com\u002Fgresearch\u002Frefraw360\u002F360_v2.zip\nunzip 360_v2.zip\n```\n\n### 2. 配置加速环境\n在使用 `accelerate` 进行分布式训练前，需先配置：\n```bash\naccelerate config\n```\n\n### 3. 模型训练\n以 `bicycle` 场景为例，启动训练：\n```bash\nDATA_DIR=data\u002F360_v2\u002Fbicycle\nEXP_NAME=360_v2\u002Fbicycle\n\naccelerate launch train.py \\\n    --gin_configs=configs\u002F360.gin \\\n    --gin_bindings=\"Config.data_dir = '${DATA_DIR}'\" \\\n    --gin_bindings=\"Config.exp_name = '${EXP_NAME}'\" \\\n    --gin_bindings=\"Config.factor = 4\"\n```\n*查看训练进度可使用 TensorBoard：*\n```bash\ntensorboard --logdir \"exp\u002F${EXP_NAME}\"\n```\n\n### 4. 渲染视频\n训练完成后，生成漫游视频：\n```bash\naccelerate launch render.py \\\n    --gin_configs=configs\u002F360.gin \\\n    --gin_bindings=\"Config.data_dir = '${DATA_DIR}'\" \\\n    --gin_bindings=\"Config.exp_name = '${EXP_NAME}'\" \\\n    --gin_bindings=\"Config.render_path = True\" \\\n    --gin_bindings=\"Config.render_path_frames = 480\" \\\n    --gin_bindings=\"Config.render_video_fps = 60\" \\\n    --gin_bindings=\"Config.factor = 4\"\n```\n结果保存在 `exp\u002F${EXP_NAME}\u002Frender` 目录。\n\n### 5. 网格提取 (Mesh)\n提取三维网格模型：\n```bash\naccelerate launch extract.py \\\n    --gin_configs=configs\u002F360.gin \\\n    --gin_bindings=\"Config.data_dir = '${DATA_DIR}'\" \\\n    --gin_bindings=\"Config.exp_name = '${EXP_NAME}'\" \\\n    --gin_bindings=\"Config.factor = 4\"\n```\n结果保存在 `exp\u002F${EXP_NAME}\u002Fmesh` 目录。\n\n### 6. Nerfstudio 集成 (可选)\n支持通过 Nerfstudio 进行更便捷的端到端流程：\n```bash\npip install nerfstudio\npip install -e .\nns-install-cli\n\n# 训练\nns-train zipnerf --data {DATA_DIR\u002FSCENE}\n\n# 评估\nns-eval --load-config {outputs\u002FSCENE\u002Fzipnerf\u002FEXP_DIR\u002Fconfig.yml}\n\n# 可视化\nns-viewer --load-config outputs\u002FSCENE\u002Fzipnerf\u002FEXP_TIME\u002Fconfig.yml\n```\n\n---\n**提示**: 如遇显存不足 (OutOfMemory)，可通过减小 `batch_size` 或 `render_chunk_size` 解决，例如添加 `--gin_bindings=\"Config.batch_size = 8192\"`。","某数字孪生项目团队需要利用无人机航拍照片快速构建城市建筑的 3D 可视化模型，并需确保在网页端流畅加载。\n\n### 没有 zipnerf-pytorch 时\n- 传统神经辐射场方法训练周期长达数天，严重拖慢项目交付进度，无法满足敏捷开发需求。\n- 复杂光照环境下生成的场景存在大量噪点和漂浮物，导致近景细节模糊，视觉体验差。\n- 缺乏高效的网格提取接口，无法将重建结果直接转换为可用于实时渲染引擎的 Mesh 模型。\n- 对硬件要求极高，团队现有的消费级显卡无法运行大型数据集，被迫租赁昂贵云端资源。\n\n### 使用 zipnerf-pytorch 后\n- 反走样网格架构大幅优化了计算效率，训练速度显著提升且显存占用更低，本地即可完成训练。\n- 算法有效抑制了近景漂浮物，在保持高 PSNR 指标的同时获得了更干净、清晰的几何结构。\n- 集成 TSDF 提取功能，能够一键生成带纹理的网格，直接导入 Unity 或 WebGL 进行二次开发。\n- 兼容 CUDA 及 Intel DPC++ 后端，灵活适配不同厂商的硬件设备，降低了部署门槛和维护成本。\n\n核心价值：通过高效的重建算法打通了从照片到可用 3D 资产的自动化生产链路，大幅降低技术门槛。","https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FSuLvXiangXin_zipnerf-pytorch_9b71687c.png","SuLvXiangXin","SuLvXiangXin(SII)","https:\u002F\u002Foss.gittoolsai.com\u002Favatars\u002FSuLvXiangXin_9145a840.png","A Student in SII and Fudan University. SII is an institution dedicated to innovation in education and research in the field of AI.","SII","Shanghai","cgu19@fudan.edu.cn",null,"sulvxiangxin.github.io","https:\u002F\u002Fgithub.com\u002FSuLvXiangXin",[86,90,94,98,102,106],{"name":87,"color":88,"percentage":89},"Python","#3572A5",78.6,{"name":91,"color":92,"percentage":93},"C++","#f34b7d",9.6,{"name":95,"color":96,"percentage":97},"Cuda","#3A4E3A",7.1,{"name":99,"color":100,"percentage":101},"Shell","#89e051",3.7,{"name":103,"color":104,"percentage":105},"Makefile","#427819",0.5,{"name":107,"color":108,"percentage":109},"C","#555555",0.4,850,93,"2026-04-05T09:58:07","Apache-2.0",4,"未说明","需要 GPU，支持 NVIDIA (CUDA 11.7+) 或 Intel (DPC++)，显存需求未明确",{"notes":118,"python":119,"dependencies":120},"建议使用 conda 创建独立环境；需手动编译 CUDA 扩展和 nvdiffrast；支持 NVIDIA 和 Intel 显卡后端；训练前需运行 Colmap 脚本预处理数据；可通过 TensorBoard 查看训练进度；内存不足时可减小 batch_size 或 render_chunk_size。","3.9",[121,122,123,124,125,126,127],"torch","accelerate","pytorch_scatter","nvdiffrast","nerfstudio","gin-config","colmap",[13],[130,131],"deep-learning","pytorch","2026-03-27T02:49:30.150509","2026-04-06T08:52:31.517816",[135,140,145,149,154,159],{"id":136,"question_zh":137,"answer_zh":138,"source_url":139},2061,"运行时提示 `No module named '_cuda_backend'` 错误怎么办？","有用户反馈通过卸载 `ninja` 包解决了此问题。但请注意，卸载 `ninja` 可能会影响其他依赖该包的模型，建议先确认是否必须。","https:\u002F\u002Fgithub.com\u002FSuLvXiangXin\u002Fzipnerf-pytorch\u002Fissues\u002F101",{"id":141,"question_zh":142,"answer_zh":143,"source_url":144},2062,"`pip install .\u002Fextensions\u002Fcuda` 编译过程中报错如何解决？","需要更新项目根目录下的 `setup.py` 文件。将 `nvcc_flags` 和 `c_flags` 中的标准设置为 C++17（例如添加 `-std=c++17`）。同时确保您的 GCC 版本和 CUDA Toolkit 版本满足兼容性要求（建议 GCC 9.0+，CUDA 11.0+）。","https:\u002F\u002Fgithub.com\u002FSuLvXiangXin\u002Fzipnerf-pytorch\u002Fissues\u002F113",{"id":146,"question_zh":147,"answer_zh":148,"source_url":144},2063,"推荐的完整环境安装和依赖配置步骤是什么？","建议使用 conda 创建独立环境并安装指定版本的依赖。参考步骤如下：\n1. `conda create --name zipnerf python=3.9`\n2. `conda activate zipnerf`\n3. 安装 CUDA 11.7 及对应 PyTorch\n4. `pip install .\u002Fextensions\u002Fcuda`\n5. `pip install -r requirements.txt`\n6. 安装 `torch-scatter`、`nvdiffrast` 和 `pymeshlab` 等相关扩展库。",{"id":150,"question_zh":151,"answer_zh":152,"source_url":153},2064,"训练完成后渲染的视频无法播放或显示黑屏怎么办？","这通常是由于系统缺少视频编解码器库导致的。请在终端运行以下命令安装必要的依赖：`pip install imageio[ffmpeg] imageio[pyav]`。安装完成后重新尝试渲染。","https:\u002F\u002Fgithub.com\u002FSuLvXiangXin\u002Fzipnerf-pytorch\u002Fissues\u002F10",{"id":155,"question_zh":156,"answer_zh":157,"source_url":158},2065,"推理时报错找不到 `scaler.pt` 或 checkpoint 加载失败如何处理？","此错误通常与混合精度训练配置有关。请检查项目中的 `default_config.yaml` 配置文件，找到 `'fp16'` 相关的行并将其删除或注释掉。保存配置后再次尝试加载模型。","https:\u002F\u002Fgithub.com\u002FSuLvXiangXin\u002Fzipnerf-pytorch\u002Fissues\u002F23",{"id":160,"question_zh":161,"answer_zh":162,"source_url":144},2066,"编译时出现 GCC 版本警告或对版本有具体要求吗？","编译日志中可能会出现关于 GCC 版本界限的警告（如 `no x86_64-linux-gnu-g++ version bounds defined`）。根据经验，使用 Torch 2.1.2 + CUDA 11.8 + GCC 9.4 的组合是可行的，通常也适用于 CUDA 11.0+ 和 GCC 9.0+ 的环境。只要按上述方法更新了 `setup.py`，通常可忽略警告。",[]]