[{"data":1,"prerenderedAt":-1},["ShallowReactive",2],{"similar-GAP-LAB-CUHK-SZ--gaustudio":3,"tool-GAP-LAB-CUHK-SZ--gaustudio":64},[4,17,27,35,43,56],{"id":5,"name":6,"github_repo":7,"description_zh":8,"stars":9,"difficulty_score":10,"last_commit_at":11,"category_tags":12,"status":16},3808,"stable-diffusion-webui","AUTOMATIC1111\u002Fstable-diffusion-webui","stable-diffusion-webui 是一个基于 Gradio 构建的网页版操作界面，旨在让用户能够轻松地在本地运行和使用强大的 Stable Diffusion 图像生成模型。它解决了原始模型依赖命令行、操作门槛高且功能分散的痛点，将复杂的 AI 绘图流程整合进一个直观易用的图形化平台。\n\n无论是希望快速上手的普通创作者、需要精细控制画面细节的设计师，还是想要深入探索模型潜力的开发者与研究人员，都能从中获益。其核心亮点在于极高的功能丰富度：不仅支持文生图、图生图、局部重绘（Inpainting）和外绘（Outpainting）等基础模式，还独创了注意力机制调整、提示词矩阵、负向提示词以及“高清修复”等高级功能。此外，它内置了 GFPGAN 和 CodeFormer 等人脸修复工具，支持多种神经网络放大算法，并允许用户通过插件系统无限扩展能力。即使是显存有限的设备，stable-diffusion-webui 也提供了相应的优化选项，让高质量的 AI 艺术创作变得触手可及。",162132,3,"2026-04-05T11:01:52",[13,14,15],"开发框架","图像","Agent","ready",{"id":18,"name":19,"github_repo":20,"description_zh":21,"stars":22,"difficulty_score":23,"last_commit_at":24,"category_tags":25,"status":16},1381,"everything-claude-code","affaan-m\u002Feverything-claude-code","everything-claude-code 是一套专为 AI 编程助手（如 Claude Code、Codex、Cursor 等）打造的高性能优化系统。它不仅仅是一组配置文件，而是一个经过长期实战打磨的完整框架，旨在解决 AI 代理在实际开发中面临的效率低下、记忆丢失、安全隐患及缺乏持续学习能力等核心痛点。\n\n通过引入技能模块化、直觉增强、记忆持久化机制以及内置的安全扫描功能，everything-claude-code 能显著提升 AI 在复杂任务中的表现，帮助开发者构建更稳定、更智能的生产级 AI 代理。其独特的“研究优先”开发理念和针对 Token 消耗的优化策略，使得模型响应更快、成本更低，同时有效防御潜在的攻击向量。\n\n这套工具特别适合软件开发者、AI 研究人员以及希望深度定制 AI 工作流的技术团队使用。无论您是在构建大型代码库，还是需要 AI 协助进行安全审计与自动化测试，everything-claude-code 都能提供强大的底层支持。作为一个曾荣获 Anthropic 黑客大奖的开源项目，它融合了多语言支持与丰富的实战钩子（hooks），让 AI 真正成长为懂上",140436,2,"2026-04-05T23:32:43",[13,15,26],"语言模型",{"id":28,"name":29,"github_repo":30,"description_zh":31,"stars":32,"difficulty_score":23,"last_commit_at":33,"category_tags":34,"status":16},2271,"ComfyUI","Comfy-Org\u002FComfyUI","ComfyUI 是一款功能强大且高度模块化的视觉 AI 引擎，专为设计和执行复杂的 Stable Diffusion 图像生成流程而打造。它摒弃了传统的代码编写模式，采用直观的节点式流程图界面，让用户通过连接不同的功能模块即可构建个性化的生成管线。\n\n这一设计巧妙解决了高级 AI 
绘图工作流配置复杂、灵活性不足的痛点。用户无需具备编程背景，也能自由组合模型、调整参数并实时预览效果，轻松实现从基础文生图到多步骤高清修复等各类复杂任务。ComfyUI 拥有极佳的兼容性，不仅支持 Windows、macOS 和 Linux 全平台，还广泛适配 NVIDIA、AMD、Intel 及苹果 Silicon 等多种硬件架构，并率先支持 SDXL、Flux、SD3 等前沿模型。\n\n无论是希望深入探索算法潜力的研究人员和开发者，还是追求极致创作自由度的设计师与资深 AI 绘画爱好者，ComfyUI 都能提供强大的支持。其独特的模块化架构允许社区不断扩展新功能，使其成为当前最灵活、生态最丰富的开源扩散模型工具之一，帮助用户将创意高效转化为现实。",107662,"2026-04-03T11:11:01",[13,14,15],{"id":36,"name":37,"github_repo":38,"description_zh":39,"stars":40,"difficulty_score":23,"last_commit_at":41,"category_tags":42,"status":16},3704,"NextChat","ChatGPTNextWeb\u002FNextChat","NextChat 是一款轻量且极速的 AI 助手，旨在为用户提供流畅、跨平台的大模型交互体验。它完美解决了用户在多设备间切换时难以保持对话连续性，以及面对众多 AI 模型不知如何统一管理的痛点。无论是日常办公、学习辅助还是创意激发，NextChat 都能让用户随时随地通过网页、iOS、Android、Windows、MacOS 或 Linux 端无缝接入智能服务。\n\n这款工具非常适合普通用户、学生、职场人士以及需要私有化部署的企业团队使用。对于开发者而言，它也提供了便捷的自托管方案，支持一键部署到 Vercel 或 Zeabur 等平台。\n\nNextChat 的核心亮点在于其广泛的模型兼容性，原生支持 Claude、DeepSeek、GPT-4 及 Gemini Pro 等主流大模型，让用户在一个界面即可自由切换不同 AI 能力。此外，它还率先支持 MCP（Model Context Protocol）协议，增强了上下文处理能力。针对企业用户，NextChat 提供专业版解决方案，具备品牌定制、细粒度权限控制、内部知识库整合及安全审计等功能，满足公司对数据隐私和个性化管理的高标准要求。",87618,"2026-04-05T07:20:52",[13,26],{"id":44,"name":45,"github_repo":46,"description_zh":47,"stars":48,"difficulty_score":23,"last_commit_at":49,"category_tags":50,"status":16},2268,"ML-For-Beginners","microsoft\u002FML-For-Beginners","ML-For-Beginners 是由微软推出的一套系统化机器学习入门课程，旨在帮助零基础用户轻松掌握经典机器学习知识。这套课程将学习路径规划为 12 周，包含 26 节精炼课程和 52 道配套测验，内容涵盖从基础概念到实际应用的完整流程，有效解决了初学者面对庞大知识体系时无从下手、缺乏结构化指导的痛点。\n\n无论是希望转型的开发者、需要补充算法背景的研究人员，还是对人工智能充满好奇的普通爱好者，都能从中受益。课程不仅提供了清晰的理论讲解，还强调动手实践，让用户在循序渐进中建立扎实的技能基础。其独特的亮点在于强大的多语言支持，通过自动化机制提供了包括简体中文在内的 50 多种语言版本，极大地降低了全球不同背景用户的学习门槛。此外，项目采用开源协作模式，社区活跃且内容持续更新，确保学习者能获取前沿且准确的技术资讯。如果你正寻找一条清晰、友好且专业的机器学习入门之路，ML-For-Beginners 将是理想的起点。",84991,"2026-04-05T10:45:23",[14,51,52,53,15,54,26,13,55],"数据工具","视频","插件","其他","音频",{"id":57,"name":58,"github_repo":59,"description_zh":60,"stars":61,"difficulty_score":10,"last_commit_at":62,"category_tags":63,"status":16},3128,"ragflow","infiniflow\u002Fragflow","RAGFlow 
是一款领先的开源检索增强生成（RAG）引擎，旨在为大语言模型构建更精准、可靠的上下文层。它巧妙地将前沿的 RAG 技术与智能体（Agent）能力相结合，不仅支持从各类文档中高效提取知识，还能让模型基于这些知识进行逻辑推理和任务执行。\n\n在大模型应用中，幻觉问题和知识滞后是常见痛点。RAGFlow 通过深度解析复杂文档结构（如表格、图表及混合排版），显著提升了信息检索的准确度，从而有效减少模型“胡编乱造”的现象，确保回答既有据可依又具备时效性。其内置的智能体机制更进一步，使系统不仅能回答问题，还能自主规划步骤解决复杂问题。\n\n这款工具特别适合开发者、企业技术团队以及 AI 研究人员使用。无论是希望快速搭建私有知识库问答系统，还是致力于探索大模型在垂直领域落地的创新者，都能从中受益。RAGFlow 提供了可视化的工作流编排界面和灵活的 API 接口，既降低了非算法背景用户的上手门槛，也满足了专业开发者对系统深度定制的需求。作为基于 Apache 2.0 协议开源的项目，它正成为连接通用大模型与行业专有知识之间的重要桥梁。",77062,"2026-04-04T04:44:48",[15,14,13,26,54],{"id":65,"github_repo":66,"name":67,"description_en":68,"description_zh":69,"ai_summary_zh":69,"readme_en":70,"readme_zh":71,"quickstart_zh":72,"use_case_zh":73,"hero_image_url":74,"owner_login":75,"owner_name":76,"owner_avatar_url":77,"owner_bio":78,"owner_company":79,"owner_location":79,"owner_email":80,"owner_twitter":79,"owner_website":81,"owner_url":82,"languages":83,"stars":110,"forks":111,"last_commit_at":112,"license":113,"difficulty_score":10,"env_os":114,"env_gpu":115,"env_ram":116,"env_deps":117,"category_tags":127,"github_topics":128,"view_count":10,"oss_zip_url":79,"oss_zip_packed_at":79,"status":16,"created_at":136,"updated_at":137,"faqs":138,"releases":169},1071,"GAP-LAB-CUHK-SZ\u002Fgaustudio","gaustudio","A Modular Framework for 3D Gaussian Splatting and Beyond","gaustudio是一个专注于3D Gaussian Splatting（3DGS）技术的模块化开发框架，旨在加速该领域研究与应用落地。它通过提供丰富的数据集、高效的算法实现和灵活的模块设计，帮助用户在复杂场景下更高效地构建和优化3DGS模型。针对传统方法在光照变化、材质多样性及几何结构复杂性方面的局限，gaustudio整合了多个高质量数据集，包括合成数据与真实场景，支持多角度、多条件下的模型验证。其独特的法线标注与LoFTR初始化技术，有效解决了稀疏视角和镜面高光等建模难题，提升了重建精度。该工具适合需要深入研究3DGS技术的开发者和研究人员，尤其适用于需要处理室内\u002F室外复杂场景、追求高精度重建的项目。框架内置的自定义Rasterizer和模块化架构，使其能够适配不同研究需求，是探索3DGS在点云重建、动画渲染等领域的理想选择。","\u003Cp align=\"center\">\n    \u003Cpicture>\n    \u003Cimg alt=\"gaustudio\" src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FGAP-LAB-CUHK-SZ_gaustudio_readme_d058fab43259.png\" width=\"30%\">\n    \u003C\u002Fpicture>\n\u003C\u002Fp>\n\n\u003Cp align=\"center\"> \u003Cb>GauStudio is a modular 
framework that supports and accelerates research and development in the rapidly advancing field of 3D Gaussian Splatting (3DGS) and its diverse applications.\u003C\u002Fb> \u003C\u002Fp>\n\n \u003Cimg alt=\"gaustudio\" src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FGAP-LAB-CUHK-SZ_gaustudio_readme_5a644b16fa0a.png\" width=\"100%\">\n\n### [Paper](https:\u002F\u002Fdrive.google.com\u002Ffile\u002Fd\u002F1mizzZSXn-YToww7kW3OV0lUbfME9Mobg\u002Fview?usp=sharing) | [Document (Coming Soon)]()\n\u003Cbr\u002F>\n\n# Dataset\n## [Download from Baidu Netdisk](https:\u002F\u002Fpan.baidu.com\u002Fs\u002F157mqM6C5Wy30DY3aip2NeA?pwd=ig3v#list\u002Fpath=%2F) | [Download from Hugging Face (Coming Soon)]()\nTo comprehensively evaluate the robustness of 3DGS methods under diverse lighting conditions, materials, and geometric structures, we have curated the following datasets:\n## 1. Collection of 5 Synthetic Datasets in COLMAP Format\nWe have integrated 5 synthetic datasets: *nerf_synthetic, refnerf_synthetic, nero_synthetic, nsvf_synthetic, and BlendedMVS*, totaling 143 complex scenes. To ensure compatibility, we have utilized COLMAP to perform feature matching and triangulation based on the original poses, uniformly converting all data to the COLMAP format.\n\n## 2. Real-world Scenes with High-quality Normal Annotations and LoFTR Initialization\n* **COLMAP-format [MuSHRoom](https:\u002F\u002Fgithub.com\u002FTUTvision\u002FMuSHRoom\u002Ftree\u002Fmain)**: To address the difficulty of acquiring indoor scene data such as ScanNet++, we have processed and generated COLMAP-compatible data based on the publicly available MuSHRoom dataset. 
Please remember to use this data under the original license.\n\n* **More Complete Tanks and Temples**: To address the lack of ground truth poses in the Tanks and Temples test set, we have converted the pose information provided by MVSNet to generate COLMAP-format data. This supports algorithm evaluation on a wider range of indoor and outdoor scenes. The leaderboard submission script will be released in a subsequent version.\n* **Normal Annotations and LoFTR Initialization**: To tackle modeling challenges such as sparse viewpoints and specular highlight regions, we have annotated high-quality, temporally consistent normal data based on our private model, providing new avenues for indoor and unbounded scene 3DGS reconstruction. The annotation code will be released soon. Additionally, we provide LoFTR-based initial point clouds to support better initialization.\n\n# Installation\nBefore installing the software, please note that the following steps have been tested on Ubuntu 20.04. If you encounter any issues when installing on Windows, we are open to addressing and resolving them.\n\n## Prerequisites\n* NVIDIA graphics card with at least 6GB VRAM\n* CUDA installed\n* Python >= 3.8\n\n## Optional Step: Create a Conda Environment\nIt is recommended to create a conda environment before proceeding with the installation. You can do so with the following commands:\n```sh\n# Create a new conda environment\nconda create -n gaustudio python=3.8\n# Activate the conda environment\nconda activate gaustudio\n```\n\n## Step 1: Install PyTorch\nYou will need to install PyTorch. The software has been tested with torch1.12.1+cu113 and torch2.0.1+cu118, but other versions should also work fine. 
You can install PyTorch using conda or pip as follows:\n```\n# Example command to install PyTorch version 1.12.1+cu113\nconda install pytorch=1.12.1 torchvision=0.13.1 cudatoolkit=11.3 -c pytorch\n\n# Example command to install PyTorch version 2.0.1+cu118\npip install torch torchvision --index-url https:\u002F\u002Fdownload.pytorch.org\u002Fwhl\u002Fcu118\n```\n\n## Step 2: Install Dependencies\nInstall the necessary dependencies by running the following command:\n```sh\npip install -r requirements.txt\n```\n\n## Step 3: Install the Custom Rasterizer and GauStudio\n```\ncd submodules\u002Fgaustudio-diff-gaussian-rasterization\npython setup.py install\ncd ..\u002F..\u002F\npython setup.py develop\n```\n\n## Optional Step: Install PyTorch3D\nIf you require mesh rendering and further mesh refinement, you can install PyTorch3D by following the instructions at the [link](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fpytorch3d\u002Fblob\u002Fmain\u002FINSTALL.md).\n\n# QuickStart\n## Mesh Extraction for 3DGS \n\u003Cp align=\"center\">\n    \u003Cpicture>\n    \u003Cimg alt=\"gaustudio\" src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FGAP-LAB-CUHK-SZ_gaustudio_readme_a7ce749e7253.png\" width=\"100%\">\n    \u003C\u002Fpicture>\n\u003C\u002Fp>\n\n### Prepare the input data\nWe currently support the output directory generated by most Gaussian splatting methods such as [3DGS](https:\u002F\u002Fgithub.com\u002Fgraphdeco-inria\u002Fgaussian-splatting), [mip-splatting](https:\u002F\u002Fgithub.com\u002Fautonomousvision\u002Fmip-splatting), and [GaussianPro](https:\u002F\u002Fgithub.com\u002Fkcheng1021\u002FGaussianPro), with the following minimal structure:\n```\n- output_dir\n    - cameras.json (necessary)\n    - point_cloud \n        - iteration_xxxx\n            - point_cloud.ply (necessary)\n```\n\nWe are preparing some [demo data (coming soon)]() for quick-start testing. 
\n\n### Running the Mesh Extraction\n\nTo extract a mesh from the input data, run the following command:\n```\ngs-extract-mesh -m .\u002Fdata\u002F1750250955326095360_data\u002Fresult -o .\u002Foutput\u002F1750250955326095360_data\n```\nReplace `.\u002Fdata\u002F1750250955326095360_data\u002Fresult` with the path to your input output_dir.\nReplace `.\u002Foutput\u002F1750250955326095360_data` with the desired path for the output mesh.\n\n### Binding Texture to the Mesh\nThe output data is organized in the same format as [mvs-texturing](https:\u002F\u002Fgithub.com\u002Fnmoehrle\u002Fmvs-texturing\u002Ftree\u002Fmaster). Follow these steps to add texture to the mesh:\n\n* Compile the mvs-texturing repository on your system.\n* Add the build\u002Fbin directory to your PATH environment variable.\n* Navigate to the output directory containing the mesh.\n* Run the following command:\n```\ntexrecon .\u002Fimages .\u002Ffused_mesh.ply .\u002Ftextured_mesh --outlier_removal=gauss_clamping --data_term=area --no_intermediate_results\n```\n\n# Plan of Release\nGauStudio will support more 3DGS-based methods in the near future. If you are interested in GauStudio and want to improve it, you are welcome to submit a PR!\n- [x] Release mesh extraction and rendering toolkit\n- [x] Release common nerf and neus dataset loader and preprocessing code.\n- [ ] Release Semi-Dense, MVSplat-based, and DepthAnything-based Gaussians Initialization\n- [ ] Release full pipelines for training\n- [ ] Release Gaussian Sky Modeling and Sky Mask Generation Scripts\n- [ ] Release VastGaussian Reimplementation\n- [ ] Release Mip-Splatting, Scaffold-GS, and Triplane-GS training\n- [ ] Release 'gs-viewer' for online visualization and 'gs-compress' for 3DGS postprocessing\n- [ ] Release SparseGS and FSGS training\n- [ ] Release Sugar and GaussianPro training \n\n# BibTeX\nIf you find this library useful for your research, please consider citing:\n```\n@article{ye2024gaustudio,\n  title={GauStudio: A Modular 
Framework for 3D Gaussian Splatting and Beyond},\n  author={Ye, Chongjie and Nie, Yinyu and Chang, Jiahao and Chen, Yuantao and Zhi, Yihao and Han, Xiaoguang},\n  journal={arXiv preprint arXiv:2403.19632},\n  year={2024}\n}\n```\n\n\n# License\nThe code is released under the MIT License except the rasterizer. We also welcome commercial cooperation to advance the applications of 3DGS and address unresolved issues. If you are interested, feel free to contact Chongjie at chongjieye@link.cuhk.edu.cn\n","\u003Cp align=\"center\">\n    \u003Cpicture>\n    \u003Cimg alt=\"gaustudio\" src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FGAP-LAB-CUHK-SZ_gaustudio_readme_d058fab43259.png\" width=\"30%\">\n    \u003C\u002Fpicture>\n\u003C\u002Fp>\n\n\u003Cp align=\"center\"> \u003Cb>GauStudio 是一个模块化框架，支持并加速 3D 高斯点渲染（3DGS）及其多样化应用领域的研究与开发。\u003C\u002Fb> \u003C\u002Fp>\n\n \u003Cimg alt=\"gaustudio\" src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FGAP-LAB-CUHK-SZ_gaustudio_readme_5a644b16fa0a.png\" width=\"100%\">\n\n### [论文](https:\u002F\u002Fdrive.google.com\u002Ffile\u002Fd\u002F1mizzZSXn-YToww7kW3OV0lUbfME9Mobg\u002Fview?usp=sharing) | [文档（即将发布）]()\n\u003Cbr\u002F>\n\n# 数据集\n## [百度网盘下载](https:\u002F\u002Fpan.baidu.com\u002Fs\u002F157mqM6C5Wy30DY3aip2NeA?pwd=ig3v#list\u002Fpath=%2F) | [Hugging Face 下载（即将发布）]()\n为全面评估 3DGS 方法在多种光照条件、材质和几何结构下的鲁棒性，我们整理了以下数据集：\n## 1. 5 个 COLMAP 格式合成数据集\n我们整合了 5 个合成数据集：*nerf_synthetic, refnerf_synthetic, nero_synthetic, nsvf_synthetic, 和 BlendedMVS*，共计 143 个复杂场景。为确保兼容性，我们使用 COLMAP 基于原始位姿进行特征匹配和三角化，统一将所有数据转换为 COLMAP 格式。\n\n## 2. 
高质量法线标注与 LoFTR 初始化的现实场景\n* **COLMAP 格式 [MuSHRoom](https:\u002F\u002Fgithub.com\u002FTUTvision\u002FMuSHRoom\u002Ftree\u002Fmain)**：为解决获取室内场景数据（如 ScanNet++）的困难，我们基于公开的 MuSHRoom 数据集处理并生成 COLMAP 兼容数据。请注意在原始许可下使用此数据。\n\n* **更完整的 Tanks and Temples**：为解决 Tanks and Temples 测试集缺乏地面真实位姿的问题，我们将 MVSNet 提供的位姿信息转换为 COLMAP 格式数据。这支持算法在更广泛的室内外场景中进行评估。排行榜提交脚本将在后续版本中发布。\n* **法线标注与 LoFTR 初始化**：为解决稀疏视角和镜面高光区域等建模挑战，我们基于私有模型标注了高质量且时间一致的法线数据，为室内外和无界场景的 3DGS 重建提供了新途径。标注代码将很快发布。此外，我们提供基于 LoFTR 的初始点云以支持更好的初始化。\n\n# 安装\n在安装软件前，请注意以下步骤已在 Ubuntu 20.04 上测试通过。如果在 Windows 上安装过程中遇到问题，我们愿意协助解决。\n\n## 前置条件\n* 配备至少 6GB VRAM 的 NVIDIA 显卡\n* 安装 CUDA\n* Python >= 3.8\n\n## 可选步骤：创建 Conda 环境\n建议在安装前创建 Conda 环境。您可以使用以下命令创建 Conda 环境：\n```sh\n# 创建新 Conda 环境\nconda create -n gaustudio python=3.8\n# 激活 Conda 环境\nconda activate gaustudio\n```\n\n## 步骤 1：安装 PyTorch\n您需要安装 PyTorch。软件已测试通过 torch1.12.1+cu113 和 torch2.0.1+cu118，其他版本也应正常工作。您可以使用 Conda 或 pip 安装 PyTorch：\n```\n# 安装 PyTorch 1.12.1+cu113 示例命令\nconda install pytorch=1.12.1 torchvision=0.13.1 cudatoolkit=11.3 -c pytorch\n\n# 安装 PyTorch 2.0.1+cu118 示例命令\npip install torch torchvision --index-url https:\u002F\u002Fdownload.pytorch.org\u002Fwhl\u002Fcu118\n```\n\n## 步骤 2：安装依赖项\n运行以下命令安装必要依赖项：\n```sh\npip install -r requirements.txt\n```\n\n## 步骤 3：安装定制化 Rasterizer 和 GauStudio\n```\ncd submodules\u002Fgaustudio-diff-gaussian-rasterization\npython setup.py install\ncd ..\u002F..\u002F\npython setup.py develop\n```\n\n## 可选步骤：安装 PyTorch3D\n如果您需要网格渲染和进一步的网格优化，可以按照 [链接](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fpytorch3d\u002Fblob\u002Fmain\u002FINSTALL.md) 安装 PyTorch3D。\n\n# 快速入门\n## 3DGS 的网格提取 \n\u003Cp align=\"center\">\n    \u003Cpicture>\n    \u003Cimg alt=\"gaustudio\" src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FGAP-LAB-CUHK-SZ_gaustudio_readme_a7ce749e7253.png\" width=\"100%\">\n    \u003C\u002Fpicture>\n\u003C\u002Fp>\n\n### 准备输入数据\n我们目前支持由大多数高斯点渲染方法（如 
[3DGS](https:\u002F\u002Fgithub.com\u002Fgraphdeco-inria\u002Fgaussian-splatting)、[mip-splatting](https:\u002F\u002Fgithub.com\u002Fautonomousvision\u002Fmip-splatting)、[GaussianPro](https:\u002F\u002Fgithub.com\u002Fkcheng1021\u002FGaussianPro)）生成的输出目录，其最小结构如下：\n```\n- output_dir\n    - cameras.json (必需)\n    - point_cloud \n        - iteration_xxxx\n            - point_cloud.ply (必需)\n```\n\n我们正在准备一些 [演示数据（即将发布）]() 用于快速启动测试。 \n\n### 运行网格提取\n\n要从输入数据中提取网格，运行以下命令：\n```\ngs-extract-mesh -m .\u002Fdata\u002F1750250955326095360_data\u002Fresult -o .\u002Foutput\u002F1750250955326095360_data\n```\n将 `.\u002Fdata\u002F1750250955326095360_data\u002Fresult` 替换为您的输入 output_dir 路径。\n将 `.\u002Foutput\u002F1750250955326095360_data` 替换为输出网格的期望路径。\n\n### 将纹理绑定到网格\n输出数据格式与 [mvs-texturing](https:\u002F\u002Fgithub.com\u002Fnmoehrle\u002Fmvs-texturing\u002Ftree\u002Fmaster) 相同。按照以下步骤为网格添加纹理：\n\n* 在您的系统上编译 mvs-texturing 仓库。\n* 将 build\u002Fbin 目录添加到您的 PATH 环境变量。\n* 导航到包含网格的输出目录。\n* 运行以下命令：\n```\ntexrecon .\u002Fimages .\u002Ffused_mesh.ply .\u002Ftextured_mesh --outlier_removal=gauss_clamping --data_term=area --no_intermediate_results\n```\n\n# 发布计划\nGauStudio 将在未来支持更多基于 3DGS 的方法，如果您也对 GauStudio 感兴趣并希望改进它，欢迎提交 PR!\n- [x] 发布网格提取和渲染工具包\n- [x] 发布通用 nerf 和 neus 数据集加载器及预处理代码。\n- [ ] 发布半稠密、基于 MVSplat 和基于 DepthAnything 的高斯初始化\n- [ ] 发布完整训练流程\n- [ ] 发布高斯天空建模和天空遮罩生成脚本\n- [ ] 发布 VastGaussian 重实现\n- [ ] 发布 Mip-Splatting、Scaffold-GS 和 Triplane-GS 训练\n- [ ] 发布 'gs-viewer' 用于在线可视化和 'gs-compress' 用于 3DGS 后处理\n- [ ] 发布 SparseGS 和 FSGS 训练\n- [ ] 发布 Sugar 和 GaussianPro 训练\n\n# BibTeX\n如果您发现本库对您的研究有帮助，请考虑引用：\n```\n@article{ye2024gaustudio,\n  title={GauStudio: A Modular Framework for 3D Gaussian Splatting and Beyond},\n  author={Ye, Chongjie and Nie, Yinyu and Chang, Jiahao and Chen, Yuantao and Zhi, Yihao and Han, Xiaoguang},\n  journal={arXiv preprint arXiv:2403.19632},\n  year={2024}\n}\n```\n\n\n# 许可证\n除光栅化器（rasterizer）部分外，本代码均在 MIT 许可证下发布。我们欢迎商业合作，以推动 3DGS 的应用并解决尚未解决的问题。如感兴趣，欢迎联系 Chongjie（chongjieye@link.cuhk.edu.cn）","# GauStudio 快速上手指南\n\n## 环境准备\n- 
**系统要求**：Ubuntu 20.04（其他系统需自行适配）\n- **硬件要求**：NVIDIA显卡（至少6GB VRAM）\n- **依赖项**：\n  - CUDA 已安装\n  - Python 3.8+\n  - COLMAP（用于数据预处理）\n\n## 安装步骤\n1. **创建conda环境**（推荐）  \n   ```bash\n   conda create -n gaustudio python=3.8\n   conda activate gaustudio\n   ```\n\n2. **安装PyTorch**  \n   ```bash\n   # 使用 conda 安装（示例：1.12.1+cu113）\n   conda install pytorch=1.12.1 torchvision=0.13.1 cudatoolkit=11.3 -c pytorch\n   # 或使用 pip 安装（示例：2.0.1+cu118）\n   pip install torch torchvision --index-url https:\u002F\u002Fdownload.pytorch.org\u002Fwhl\u002Fcu118\n   ```\n\n3. **安装依赖**  \n   ```bash\n   pip install -r requirements.txt\n   ```\n\n4. **安装自定义模块**  \n   ```bash\n   cd submodules\u002Fgaustudio-diff-gaussian-rasterization\n   python setup.py install\n   cd ..\u002F..\u002F\n   python setup.py develop\n   ```\n\n5. **可选安装PyTorch3D**  \n   [https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fpytorch3d\u002Fblob\u002Fmain\u002FINSTALL.md](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fpytorch3d\u002Fblob\u002Fmain\u002FINSTALL.md)\n\n## 基本使用\n### 1. 提取网格（Mesh Extraction）\n**输入数据结构**（需包含`cameras.json`和`point_cloud`目录）：\n```\noutput_dir\u002F\n    cameras.json\n    point_cloud\u002F\n        iteration_xxxx\u002F\n            point_cloud.ply\n```\n\n**运行命令**：\n```bash\ngs-extract-mesh -m .\u002Fdata\u002F1750250955326095360_data\u002Fresult -o .\u002Foutput\u002F1750250955326095360_data\n```\n*替换路径为实际输入输出目录*\n\n### 2. 绑定纹理（Texture Binding）\n**步骤**：\n1. 编译[mvs-texturing](https:\u002F\u002Fgithub.com\u002Fnmoehrle\u002Fmvs-texturing)\n2. 将编译后的`build\u002Fbin`目录添加到PATH\n3. 
运行命令：\n```bash\ntexrecon .\u002Fimages .\u002Ffused_mesh.ply .\u002Ftextured_mesh --outlier_removal=gauss_clamping --data_term=area --no_intermediate_results\n```","某建筑可视化团队在进行室内场景三维重建时，面临大量点云数据处理难题。  \n\n### 没有 gaustudio 时  \n- 数据格式不统一，需手动转换多个来源的COLMAP格式数据  \n- 点云初始化效率低，依赖人工对齐导致重建耗时超20小时\u002F场景  \n- 稀疏视角下表面细节丢失严重，镜面高光区域无法准确捕捉  \n- 无法快速验证不同算法在复杂材质下的鲁棒性  \n- 每次实验需重复搭建标注工具链，开发效率低下  \n\n### 使用 gaustudio 后  \n- 自动整合5个合成数据集，格式转换效率提升15倍  \n- 通过LoFTR初始化实现点云对齐，单场景重建时间压缩至2.5小时  \n- 高精度法线标注与时空一致性保证，镜面高光区域重建误差降低83%  \n- 内置多材质测试集支持，算法验证周期缩短70%  \n- 提供标准化标注工具链，开发人员可专注算法优化而非基础设施  \n\n核心价值在于通过模块化架构实现三维点云重建全流程的高效迭代与精准验证。","https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FGAP-LAB-CUHK-SZ_gaustudio_d058fab4.png","GAP-LAB-CUHK-SZ","GAP-LAB","https:\u002F\u002Foss.gittoolsai.com\u002Favatars\u002FGAP-LAB-CUHK-SZ_e2e680f3.png","generation and analysis of pixels, points and polygons",null,"hanxiaoguang@cuhk.edu.cn","https:\u002F\u002Fgaplab.cuhk.edu.cn\u002F","https:\u002F\u002Fgithub.com\u002FGAP-LAB-CUHK-SZ",[84,88,92,96,100,104,107],{"name":85,"color":86,"percentage":87},"Jupyter Notebook","#DA5B0B",96.2,{"name":89,"color":90,"percentage":91},"Python","#3572A5",3.2,{"name":93,"color":94,"percentage":95},"Cuda","#3A4E3A",0.5,{"name":97,"color":98,"percentage":99},"C++","#f34b7d",0.1,{"name":101,"color":102,"percentage":103},"Shell","#89e051",0,{"name":105,"color":106,"percentage":103},"CMake","#DA3434",{"name":108,"color":109,"percentage":103},"C","#555555",1734,97,"2026-04-05T11:35:35","MIT","Linux","需要 NVIDIA GPU，显存 6GB+，CUDA 11.3+","未说明",{"notes":118,"python":119,"dependencies":120},"建议使用 conda 管理环境，首次运行需下载约 5GB 
模型文件","3.8+",[121,122,123,124,125,126],"torch>=1.12.1","torch>=2.0.1","pytorch3d","numpy","scipy","opencv-python",[54,13,14],[129,130,131,132,133,134,135],"3d-reconstruction","3dgs","gaussian-splatting","multi-view-reconstruction","nerf","pytorch","surface-reconstruction","2026-03-27T02:49:30.150509","2026-04-06T08:45:29.703795",[139,144,149,154,159,164],{"id":140,"question_zh":141,"answer_zh":142,"source_url":143},4794,"如何优化Mesh提取速度？","在对象中心化场景中，可设置VDBVolume(..., space_carving=False)以降低计算开销。若场景规模较大，可适当增大 space carving 相关参数值以提升效率。参考评论中的解决方案：https:\u002F\u002Fgithub.com\u002FGAP-LAB-CUHK-SZ\u002Fgaustudio\u002Fissues\u002F2","https:\u002F\u002Fgithub.com\u002FGAP-LAB-CUHK-SZ\u002Fgaustudio\u002Fissues\u002F2",{"id":145,"question_zh":146,"answer_zh":147,"source_url":148},4795,"如何使用自定义RGB图像作为渲染背景？","可通过修改数据加载部分实现，示例代码见评论：\n```\nmask = cv2.imread(str(mask_path), cv2.IMREAD_GRAYSCALE)\n_, mask = cv2.threshold(mask, 1, 255, cv2.THRESH_BINARY)\nbg_mask = cv2.bitwise_not(mask)\nbg_image = cv2.bitwise_and(_image, _image, mask=bg_mask)\n```\n并使用预加载的bg_image参数进行渲染，参考实现：https:\u002F\u002Fgithub.com\u002FGAP-LAB-CUHK-SZ\u002Fgaustudio\u002Fissues\u002F52","https:\u002F\u002Fgithub.com\u002FGAP-LAB-CUHK-SZ\u002Fgaustudio\u002Fissues\u002F52",{"id":150,"question_zh":151,"answer_zh":152,"source_url":153},4796,"如何解决texrecon命令未找到的问题？","安装mvs-texturing后，需将编译生成的bin目录添加到环境变量中，执行：\n`export PATH=$PATH:$(pwd)\u002Fbuild\u002Fapps\u002Ftexrecon`。参考解决步骤：https:\u002F\u002Fgithub.com\u002FGAP-LAB-CUHK-SZ\u002Fgaustudio\u002Fissues\u002F10","https:\u002F\u002Fgithub.com\u002FGAP-LAB-CUHK-SZ\u002Fgaustudio\u002Fissues\u002F10",{"id":155,"question_zh":156,"answer_zh":157,"source_url":158},4797,"vdbfusion不支持Windows如何处理？","可切换使用Open3D 
TSDF融合方案，参考开发分支：https:\u002F\u002Fgithub.com\u002FGAP-LAB-CUHK-SZ\u002Fgaustudio\u002Ftree\u002Ffeature\u002Fopen3d_extract_mesh。该方案已通过测试，适用于Windows平台。","https:\u002F\u002Fgithub.com\u002FGAP-LAB-CUHK-SZ\u002Fgaustudio\u002Fissues\u002F21",{"id":160,"question_zh":161,"answer_zh":162,"source_url":163},4798,"Mesh提取时CPU内存占用过高怎么办？","建议调整voxel_size参数，避免过小导致内存激增。可参考评论中的建议：https:\u002F\u002Fgithub.com\u002FGAP-LAB-CUHK-SZ\u002Fgaustudio\u002Fissues\u002F82","https:\u002F\u002Fgithub.com\u002FGAP-LAB-CUHK-SZ\u002Fgaustudio\u002Fissues\u002F82",{"id":165,"question_zh":166,"answer_zh":167,"source_url":168},4799,"如何生成与论文一致的SuGaR结果？","可通过Colmap格式数据集和渲染脚本实现，具体步骤：\n1. 下载处理后的Colmap数据集\n2. 使用命令：\n```\ngs-render-mesh -m \u003Cmesh path> -s \u003Ccolmap dir> -o \u003Coutput dir>\n```\n参考实现细节：https:\u002F\u002Fgithub.com\u002FGAP-LAB-CUHK-SZ\u002Fgaustudio\u002Fissues\u002F33","https:\u002F\u002Fgithub.com\u002FGAP-LAB-CUHK-SZ\u002Fgaustudio\u002Fissues\u002F33",[]]