[{"data":1,"prerenderedAt":-1},["ShallowReactive",2],{"similar-natowi--3D-Reconstruction-with-Deep-Learning-Methods":3,"tool-natowi--3D-Reconstruction-with-Deep-Learning-Methods":64},[4,17,27,35,43,56],{"id":5,"name":6,"github_repo":7,"description_zh":8,"stars":9,"difficulty_score":10,"last_commit_at":11,"category_tags":12,"status":16},3808,"stable-diffusion-webui","AUTOMATIC1111\u002Fstable-diffusion-webui","stable-diffusion-webui 是一个基于 Gradio 构建的网页版操作界面，旨在让用户能够轻松地在本地运行和使用强大的 Stable Diffusion 图像生成模型。它解决了原始模型依赖命令行、操作门槛高且功能分散的痛点，将复杂的 AI 绘图流程整合进一个直观易用的图形化平台。\n\n无论是希望快速上手的普通创作者、需要精细控制画面细节的设计师，还是想要深入探索模型潜力的开发者与研究人员，都能从中获益。其核心亮点在于极高的功能丰富度：不仅支持文生图、图生图、局部重绘（Inpainting）和外绘（Outpainting）等基础模式，还独创了注意力机制调整、提示词矩阵、负向提示词以及“高清修复”等高级功能。此外，它内置了 GFPGAN 和 CodeFormer 等人脸修复工具，支持多种神经网络放大算法，并允许用户通过插件系统无限扩展能力。即使是显存有限的设备，stable-diffusion-webui 也提供了相应的优化选项，让高质量的 AI 艺术创作变得触手可及。",162132,3,"2026-04-05T11:01:52",[13,14,15],"开发框架","图像","Agent","ready",{"id":18,"name":19,"github_repo":20,"description_zh":21,"stars":22,"difficulty_score":23,"last_commit_at":24,"category_tags":25,"status":16},1381,"everything-claude-code","affaan-m\u002Feverything-claude-code","everything-claude-code 是一套专为 AI 编程助手（如 Claude Code、Codex、Cursor 等）打造的高性能优化系统。它不仅仅是一组配置文件，而是一个经过长期实战打磨的完整框架，旨在解决 AI 代理在实际开发中面临的效率低下、记忆丢失、安全隐患及缺乏持续学习能力等核心痛点。\n\n通过引入技能模块化、直觉增强、记忆持久化机制以及内置的安全扫描功能，everything-claude-code 能显著提升 AI 在复杂任务中的表现，帮助开发者构建更稳定、更智能的生产级 AI 代理。其独特的“研究优先”开发理念和针对 Token 消耗的优化策略，使得模型响应更快、成本更低，同时有效防御潜在的攻击向量。\n\n这套工具特别适合软件开发者、AI 研究人员以及希望深度定制 AI 工作流的技术团队使用。无论您是在构建大型代码库，还是需要 AI 协助进行安全审计与自动化测试，everything-claude-code 都能提供强大的底层支持。作为一个曾荣获 Anthropic 黑客大奖的开源项目，它融合了多语言支持与丰富的实战钩子（hooks），让 AI 真正成长为懂上",140436,2,"2026-04-05T23:32:43",[13,15,26],"语言模型",{"id":28,"name":29,"github_repo":30,"description_zh":31,"stars":32,"difficulty_score":23,"last_commit_at":33,"category_tags":34,"status":16},2271,"ComfyUI","Comfy-Org\u002FComfyUI","ComfyUI 是一款功能强大且高度模块化的视觉 AI 引擎，专为设计和执行复杂的 Stable Diffusion 
图像生成流程而打造。它摒弃了传统的代码编写模式，采用直观的节点式流程图界面，让用户通过连接不同的功能模块即可构建个性化的生成管线。\n\n这一设计巧妙解决了高级 AI 绘图工作流配置复杂、灵活性不足的痛点。用户无需具备编程背景，也能自由组合模型、调整参数并实时预览效果，轻松实现从基础文生图到多步骤高清修复等各类复杂任务。ComfyUI 拥有极佳的兼容性，不仅支持 Windows、macOS 和 Linux 全平台，还广泛适配 NVIDIA、AMD、Intel 及苹果 Silicon 等多种硬件架构，并率先支持 SDXL、Flux、SD3 等前沿模型。\n\n无论是希望深入探索算法潜力的研究人员和开发者，还是追求极致创作自由度的设计师与资深 AI 绘画爱好者，ComfyUI 都能提供强大的支持。其独特的模块化架构允许社区不断扩展新功能，使其成为当前最灵活、生态最丰富的开源扩散模型工具之一，帮助用户将创意高效转化为现实。",107662,"2026-04-03T11:11:01",[13,14,15],{"id":36,"name":37,"github_repo":38,"description_zh":39,"stars":40,"difficulty_score":23,"last_commit_at":41,"category_tags":42,"status":16},3704,"NextChat","ChatGPTNextWeb\u002FNextChat","NextChat 是一款轻量且极速的 AI 助手，旨在为用户提供流畅、跨平台的大模型交互体验。它完美解决了用户在多设备间切换时难以保持对话连续性，以及面对众多 AI 模型不知如何统一管理的痛点。无论是日常办公、学习辅助还是创意激发，NextChat 都能让用户随时随地通过网页、iOS、Android、Windows、MacOS 或 Linux 端无缝接入智能服务。\n\n这款工具非常适合普通用户、学生、职场人士以及需要私有化部署的企业团队使用。对于开发者而言，它也提供了便捷的自托管方案，支持一键部署到 Vercel 或 Zeabur 等平台。\n\nNextChat 的核心亮点在于其广泛的模型兼容性，原生支持 Claude、DeepSeek、GPT-4 及 Gemini Pro 等主流大模型，让用户在一个界面即可自由切换不同 AI 能力。此外，它还率先支持 MCP（Model Context Protocol）协议，增强了上下文处理能力。针对企业用户，NextChat 提供专业版解决方案，具备品牌定制、细粒度权限控制、内部知识库整合及安全审计等功能，满足公司对数据隐私和个性化管理的高标准要求。",87618,"2026-04-05T07:20:52",[13,26],{"id":44,"name":45,"github_repo":46,"description_zh":47,"stars":48,"difficulty_score":23,"last_commit_at":49,"category_tags":50,"status":16},2268,"ML-For-Beginners","microsoft\u002FML-For-Beginners","ML-For-Beginners 是由微软推出的一套系统化机器学习入门课程，旨在帮助零基础用户轻松掌握经典机器学习知识。这套课程将学习路径规划为 12 周，包含 26 节精炼课程和 52 道配套测验，内容涵盖从基础概念到实际应用的完整流程，有效解决了初学者面对庞大知识体系时无从下手、缺乏结构化指导的痛点。\n\n无论是希望转型的开发者、需要补充算法背景的研究人员，还是对人工智能充满好奇的普通爱好者，都能从中受益。课程不仅提供了清晰的理论讲解，还强调动手实践，让用户在循序渐进中建立扎实的技能基础。其独特的亮点在于强大的多语言支持，通过自动化机制提供了包括简体中文在内的 50 多种语言版本，极大地降低了全球不同背景用户的学习门槛。此外，项目采用开源协作模式，社区活跃且内容持续更新，确保学习者能获取前沿且准确的技术资讯。如果你正寻找一条清晰、友好且专业的机器学习入门之路，ML-For-Beginners 
将是理想的起点。",84991,"2026-04-05T10:45:23",[14,51,52,53,15,54,26,13,55],"数据工具","视频","插件","其他","音频",{"id":57,"name":58,"github_repo":59,"description_zh":60,"stars":61,"difficulty_score":10,"last_commit_at":62,"category_tags":63,"status":16},3128,"ragflow","infiniflow\u002Fragflow","RAGFlow 是一款领先的开源检索增强生成（RAG）引擎，旨在为大语言模型构建更精准、可靠的上下文层。它巧妙地将前沿的 RAG 技术与智能体（Agent）能力相结合，不仅支持从各类文档中高效提取知识，还能让模型基于这些知识进行逻辑推理和任务执行。\n\n在大模型应用中，幻觉问题和知识滞后是常见痛点。RAGFlow 通过深度解析复杂文档结构（如表格、图表及混合排版），显著提升了信息检索的准确度，从而有效减少模型“胡编乱造”的现象，确保回答既有据可依又具备时效性。其内置的智能体机制更进一步，使系统不仅能回答问题，还能自主规划步骤解决复杂问题。\n\n这款工具特别适合开发者、企业技术团队以及 AI 研究人员使用。无论是希望快速搭建私有知识库问答系统，还是致力于探索大模型在垂直领域落地的创新者，都能从中受益。RAGFlow 提供了可视化的工作流编排界面和灵活的 API 接口，既降低了非算法背景用户的上手门槛，也满足了专业开发者对系统深度定制的需求。作为基于 Apache 2.0 协议开源的项目，它正成为连接通用大模型与行业专有知识之间的重要桥梁。",77062,"2026-04-04T04:44:48",[15,14,13,26,54],{"id":65,"github_repo":66,"name":67,"description_en":68,"description_zh":69,"ai_summary_zh":70,"readme_en":71,"readme_zh":72,"quickstart_zh":73,"use_case_zh":74,"hero_image_url":75,"owner_login":76,"owner_name":77,"owner_avatar_url":78,"owner_bio":79,"owner_company":77,"owner_location":77,"owner_email":77,"owner_twitter":77,"owner_website":77,"owner_url":80,"languages":77,"stars":81,"forks":82,"last_commit_at":83,"license":84,"difficulty_score":85,"env_os":86,"env_gpu":87,"env_ram":88,"env_deps":89,"category_tags":98,"github_topics":99,"view_count":23,"oss_zip_url":77,"oss_zip_packed_at":77,"status":16,"created_at":106,"updated_at":107,"faqs":108,"releases":109},3606,"natowi\u002F3D-Reconstruction-with-Deep-Learning-Methods","3D-Reconstruction-with-Deep-Learning-Methods","List of projects for 3d reconstruction","3D-Reconstruction-with-Deep-Learning-Methods 是一个专注于汇集基于深度学习技术的开源 3D 重建项目的资源列表。它旨在解决从单目图像、多视角照片或点云数据中高效恢复三维几何结构的技术难题，涵盖了深度估计、场景补全、语义分割及相机定位等核心任务。\n\n该列表精选了多个托管在 GitHub 上的高质量项目，如利用迁移学习实现高质量单目深度估计的 DenseDepth、处理大规模场景补全的 ScanComplete，以及经典的 PointNet 点云分类框架。这些项目大多提供了基于 TensorFlow 或 PyTorch 的代码实现，部分还结合了 
GAN（生成对抗网络）与混合集成方法，展现了当前学术界在几何特征提取与语义理解融合方面的前沿探索。\n\n这份资源特别适合计算机视觉领域的研究人员、算法工程师以及希望深入探索 3D 视觉技术的开发者使用。无论是需要复现最新论文成果，还是寻找特定场景下的基线模型进行二次开发，用户都能在此找到对应的开源方案。对于设计师或普通用户而言，虽然直接上手可能需要一定的编程基础，但它为理解后端 3D 生成逻辑提供了宝贵的技术窗口。","3D-Reconstruction-with-Deep-Learning-Methods 是一个专注于汇集基于深度学习技术的开源 3D 重建项目的资源列表。它旨在解决从单目图像、多视角照片或点云数据中高效恢复三维几何结构的技术难题，涵盖了深度估计、场景补全、语义分割及相机定位等核心任务。\n\n该列表精选了多个托管在 GitHub 上的高质量项目，如利用迁移学习实现高质量单目深度估计的 DenseDepth、处理大规模场景补全的 ScanComplete，以及经典的 PointNet 点云分类框架。这些项目大多提供了基于 TensorFlow 或 PyTorch 的代码实现，部分还结合了 GAN（生成对抗网络）与混合集成方法，展现了当前学术界在几何特征提取与语义理解融合方面的前沿探索。\n\n这份资源特别适合计算机视觉领域的研究人员、算法工程师以及希望深入探索 3D 视觉技术的开发者使用。无论是需要复现最新论文成果，还是寻找特定场景下的基线模型进行二次开发，用户都能在此找到对应的开源方案。对于设计师或普通用户而言，虽然直接上手可能需要一定的编程基础，但它为理解后端 3D 生成逻辑提供了宝贵的技术窗口。","# 3D-Reconstruction-with-Deep-Learning-Methods\n\nThe focus of this list is on open-source projects hosted on Github.\n\n**Projects released on Github**\n\n| TITLE                                                        | KEYWORDS                    | URL                                                          | LICENSE                                                      | Awesomeness |\n| ------------------------------------------------------------ | --------------------------- | ------------------------------------------------------------ | ------------------------------------------------------------ | ----------- |\n| High Quality Monocular Depth Estimation via Transfer Learning | TensorFlow, PyTorch         | https:\u002F\u002Fgithub.com\u002Fialhashim\u002FDenseDepth https:\u002F\u002Farxiv.org\u002Fabs\u002F1812.11941 | GPL-3.0                                                      |             |\n| Multi-view stereo image-based 3D reconstruction              |                             | https:\u002F\u002Fgithub.com\u002Fadahbingee\u002Fpais-mvs                       | nn                                                           |             |\n| Hybrid Ensemble Approach For 3D Object Reconstruction from Multi-View Monocular RGB images |             
                | https:\u002F\u002Fgithub.com\u002FAjithbalakrishnan\u002F3D-Object-Reconstruction-from-Multi-View-Monocular-RGB-images | nn                                                           |             |\n| Deep 3D Semantic Scene Extrapolation                         | hybrid CNN, GAN, TensorFlow | https:\u002F\u002Fgithub.com\u002FAliAbbasi\u002FDeep-3D-Semantic-Scene-Extrapolation http:\u002F\u002Fuser.ceng.metu.edu.tr\u002F~ys\u002Fpubs\u002Fextrap-tvcj18.pdf | nn                                                           |             |\n| ScanComplete: Large-Scale Scene Completion and Semantic Segmentation for 3D Scans | TensorFlow                  | https:\u002F\u002Fgithub.com\u002Fangeladai\u002FScanComplete                    | Apache-2.0                                                   |             |\n| AtLoc: Attention Guided Camera Localization                  | PyTorch                     | https:\u002F\u002Fgithub.com\u002FBingCS\u002FAtLoc https:\u002F\u002Farxiv.org\u002Fabs\u002F1909.03557 | BY-NC-SA 4.0                                                 |             |\n| PointNet: Deep Learning on Point Sets for 3D Classification and Segmentation | TensorFlow, cuDNN           | https:\u002F\u002Fgithub.com\u002Fcharlesq34\u002Fpointnet                       | MIT License                                                  |             |\n| PyTorch Implementation of DeepVO                             | PyTorch, CNN                | https:\u002F\u002Fgithub.com\u002FChiWeiHsiao\u002FDeepVO-pytorch                | nn                                                           |             |\n| Fully Convolutional Geometric Features: Fast and accurate 3D features for registration and correspondence. 
| PyTorch                     | https:\u002F\u002Fgithub.com\u002Fchrischoy\u002FFCGF                            | MIT License                                                  |             |\n| Morphing and Sampling Network for Dense Point Cloud Completion (AAAI2020) | PyTorch                     | https:\u002F\u002Fgithub.com\u002FColin97\u002FMSN-Point-Cloud-Completion        | Apache-2.0                                                   |             |\n| Real-Time Self-Adaptive Deep Stereo                          | TensorFlow                  | https:\u002F\u002Fgithub.com\u002FCVLAB-Unibo\u002FReal-time-self-adaptive-deep-stereo | Apache-2.0                                                   |             |\n| Geometry meets semantics for semi-supervised monocular depth estimation - ACCV 2018 | TensorFlow                  | https:\u002F\u002Fgithub.com\u002FCVLAB-Unibo\u002FSemantic-Mono-Depth           | MIT License                                                  |             |\n| BlenderProc: A procedural blender pipeline to generate images for deep learning |  Blender                    | https:\u002F\u002Fgithub.com\u002FDLR-RM\u002FBlenderProc                        | GPL-3.0                                                      |             |\n| SingleViewReconstruction: 3D Scene Reconstruction from a Single Viewport |  TensorFlow               | https:\u002F\u002Fgithub.com\u002FDLR-RM\u002FSingleViewReconstruction                        | MIT License                                                      |             |\n| NNCAP — Neural Network Complex Approach to Photogrammetry    |                             | https:\u002F\u002Fgithub.com\u002FDok11\u002Fnn-dldm                             | nn                                                           |             |\n| Pytorch Implementation of Deeper Depth Prediction with Fully Convolutional Residual Networks | PyTorch                     | 
https:\u002F\u002Fgithub.com\u002FdontLoveBugs\u002FFCRN_pytorch                 | nn                                                           |             |\n| Improved Adversarial Systems for 3D Object Generation and Reconstruction | GAN                         | https:\u002F\u002Fgithub.com\u002FEdwardSmith1884\u002F3D-IWGAN                  | MIT License                                                  |             |\n| Deep Learning for Visual-Inertial Odometry                   | PyTorch, CNN                | https:\u002F\u002Fgithub.com\u002FElliotHYLee\u002FDeep_Visual_Inertial_Odometry | MIT License                                                  |             |\n| Machine Vision                                               | List                        | https:\u002F\u002Fgithub.com\u002FEwenwan\u002FMVision                           | nn                                                           |             |\n| Mesh R-CNN, an academic publication, presented at ICCV 2019  | PyTorch, R-CNN              | https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fmeshrcnn                 | BSD-3-Clause License                                         |             |\n| PyTorch3d is FAIR's library of reusable components for deep learning with 3D data. 
| PyTorch                     | https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fpytorch3d                | BSD-3-Clause License                                         |             |\n| Self-supervised Sparse-to-Dense: Self-supervised Depth Completion from LiDAR and Monocular Camera | PyTorch                     | https:\u002F\u002Fgithub.com\u002Ffangchangma\u002Fself-supervised-depth-completion | MIT License                                                  |             |\n| Sparse-to-Dense: Depth Prediction from Sparse Depth Samples and a Single Image | PyTorch                     | https:\u002F\u002Fgithub.com\u002Ffangchangma\u002Fsparse-to-dense               | BSD License                                                  |             |\n| Sparse-to-Dense: Depth Prediction from Sparse Depth Samples and a Single Image | PyTorch                     | https:\u002F\u002Fgithub.com\u002Ffangchangma\u002Fsparse-to-dense.pytorch       | nn                                                           |             |\n| PackNet-SfM: 3D Packing for Self-Supervised Monocular Depth Estimation | PyTorch                     | https:\u002F\u002Fgithub.com\u002FFangGet\u002FPackNet-SFM-PyTorch               | GPL-3.0                                                      |             |\n| InvSFM: Revealing Scenes by Inverting Structure from Motion Reconstructions [CVPR 2019] | TensorFlow                  | https:\u002F\u002Fgithub.com\u002Ffrancescopittaluga\u002Finvsfm                 | MIT License                                                  |             |\n| Deep Monocular Visual Odometry using PyTorch (Experimental)  | PyTorch                     | https:\u002F\u002Fgithub.com\u002Ffshamshirdar\u002FDeepVO                       | nn                                                           |             |\n| PointNet: Deep Learning on Point Sets for 3D Classification and Segmentation | PyTorch                     | 
https:\u002F\u002Fgithub.com\u002Ffxia22\u002Fpointnet.pytorch                   | MIT License                                                  |             |\n| Pix2Depth - Depth Map Estimation from Monocular Image        | Keras                       | https:\u002F\u002Fgithub.com\u002Fgautam678\u002FPix2Depth                       | GPL-3.0                                                      |             |\n| 3DRegNet: A Deep Neural Network for 3D Point Registration    | TensorFlow                  | https:\u002F\u002Fgithub.com\u002Fgoncalo120\u002F3DRegNet                       | MIT License                                                  |             |\n| Neural 3D Mesh Renderer – Single-Image 3D Reconstruction using Neural Renderer |                             | https:\u002F\u002Fgithub.com\u002Fhiroharu-kato\u002Fmesh_reconstruction         | MIT License                                                  |             |\n| Real-time Scalable Dense Surfel Mapping                      |                             | https:\u002F\u002Fgithub.com\u002FHKUST-Aerial-Robotics\u002FDenseSurfelMapping  | nn                                                           |             |\n| MVDepthNet: real-time multiview depth estimation neural network | PyTorch                     | https:\u002F\u002Fgithub.com\u002FHKUST-Aerial-Robotics\u002FMVDepthNet          | nn                                                           |             |\n| DeepMatchVO: Beyond Photometric Loss for Self-Supervised Ego-Motion Estimation |                             | https:\u002F\u002Fgithub.com\u002Fhlzz\u002FDeepMatchVO                          | MIT License                                                  |             |\n| MIRorR: Matchable Image Retrieval by Learning from Surface Reconstruction | TensorFlow, CNN             | https:\u002F\u002Fgithub.com\u002Fhlzz\u002Fmirror                               | MIT License                                                  |           
  |\n| Unsupervised Learning of Monocular Depth Estimation and Visual Odometry with Deep Feature Reconstruction | Caffe                       | https:\u002F\u002Fgithub.com\u002FHuangying-Zhan\u002FDepth-VO-Feat              | non-commercial                                               |             |\n| Deep Learning 3D vision papers                               | papers, list, CN            | https:\u002F\u002Fgithub.com\u002Fhuayong\u002Fdl-vision-papers                  | nn                                                           |             |\n| Open3D PointNet implementation with PyTorch                  | PyTorch, jupyter, Open3D    | https:\u002F\u002Fgithub.com\u002Fintel-isl\u002FOpen3D-PointNet                 | MIT License                                                  |             |\n| Semantic-TSDF for Self-driving Static Scene Reconstruction   | PyTorch                     | https:\u002F\u002Fgithub.com\u002Firsisyphus\u002Fsemantic-tsdf                  | MIT License                                                  |             |\n| Weakly supervised 3D Reconstruction with Adversarial Constraint |                             | https:\u002F\u002Fgithub.com\u002Fjgwak\u002FMcRecon                             | MIT License                                                  |             |\n| Using Deep learning Technique for Stereo vision and 3D reconstruction | TensorFlow, CN              | https:\u002F\u002Fgithub.com\u002Fjiafeng5513\u002FEvisionNet                    | nn                                                           |             |\n| Unsupervised Scale-consistent Depth and Ego-motion Learning from Monocular Video | PyTorch                     | https:\u002F\u002Fgithub.com\u002FJiawangBian\u002FSC-SfMLearner-Release         | GPL-3.0                                                      |             |\n| Revisiting Single Image Depth Estimation: Toward  Higher Resolution Maps with Accurate Object Boundaries (official  
implementation) | PyTorch                     | https:\u002F\u002Fgithub.com\u002FJunjH\u002FRevisiting_Single_Depth_Estimation  | nn                                                           |             |\n| Visualization of Convolutional Neural Networks for Monocular Depth Estimation (official  implementation) | CNN, PyTorch                | https:\u002F\u002Fgithub.com\u002FJunjH\u002FVisualizing-CNNs-for-monocular-depth-estimation | MIT License                                                  |             |\n| DeepVO: Towards End-to-End Visual Odometry with Deep Recurrent Convolutional Neural Networks | PyTorch                     | https:\u002F\u002Fgithub.com\u002Fkrrish94\u002FDeepVO                           | nn                                                           |             |\n| DISN: Deep Implicit Surface Network for High-quality Single-view 3D Reconstruction | Tensorflow                  | https:\u002F\u002Fgithub.com\u002Flaughtervv\u002FDISN                           | nn                                                           |             |\n| DeepTAM: Deep Tracking and Mapping                           |                             | https:\u002F\u002Fgithub.com\u002Flmb-freiburg\u002Fdeeptam                      | GPL-3.0                                                      |             |\n| DeMoN: Depth and Motion Network                              | Tensorflow                  | https:\u002F\u002Fgithub.com\u002Flmb-freiburg\u002Fdemon                        | GPL-3.0                                                      |             |\n| PyTorch implementation of CloudWalk's recent work DenseBody  | PyTorch                     | https:\u002F\u002Fgithub.com\u002FLotayou\u002Fdensebody_pytorch                 | GPL-3.0                                                      |             |\n| Self-supervised learning for dense depth estimation in monocular endoscopy | Tensorflow, Torch           | 
https:\u002F\u002Fgithub.com\u002Flppllppl920\u002FEndoscopyDepthEstimation-Pytorch | non-commercial                                               |             |\n| ContextDesc: Local Descriptor Augmentation with Cross-Modality Context | Tensorflow                  | https:\u002F\u002Fgithub.com\u002Flzx551402\u002Fcontextdesc                     | nn                                                           |             |\n| GL3D (Geometric Learning with 3D Reconstruction): a  large-scale database created for 3D reconstruction and geometry-related  learning problems |                             | https:\u002F\u002Fgithub.com\u002Flzx551402\u002FGL3D                            | MIT License                                                  |             |\n| Deeper Depth Prediction with Fully Convolutional Residual Networks (official implementation) | Tensorflow                  | https:\u002F\u002Fgithub.com\u002FMahmoudSelmy\u002FDeeperDepthEstimation        | nn                                                           |             |\n| Fine-Tuning Vgg16 For Depth Estimation                       | Tensorflow                  | https:\u002F\u002Fgithub.com\u002FMahmoudSelmy\u002FDepthEstimationVGG           | nn                                                           |             |\n| 3D reconstruction with neural networks using Tensorflow. 
See link for Video |                             | https:\u002F\u002Fgithub.com\u002Fmicmelesse\u002F3D-reconstruction-with-Neural-Networks | nn                                                           |             |\n| Learning Depth from Monocular Videos using Direct Methods    | PyTorch                     | https:\u002F\u002Fgithub.com\u002FMightyChaos\u002FLKVOLearner                   | BSD-3-Clause                                                 |             |\n| PointNetVLAD: Deep Point Cloud Based Retrieval for Large-Scale Place Recognition | Tensorflow                  | https:\u002F\u002Fgithub.com\u002Fmikacuy\u002Fpointnetvlad                      | MIT License                                                  |             |\n| Attempting to estimate topography of a region from image data |                             | https:\u002F\u002Fgithub.com\u002Fnbelakovski\u002Ftopography_neural_net         | nn                                                           |             |\n| DDRNet: Depth Map Denoising and Refinement for Consumer Depth Cameras Using Cascaded CNNs | Tensorflow                  | https:\u002F\u002Fgithub.com\u002Fneycyanshi\u002FDDRNet                         | MIT License                                                  |             |\n| Monocular depth estimation from a single image               | PyTorch                     | https:\u002F\u002Fgithub.com\u002Fnianticlabs\u002Fmonodepth2                    | Copyright © Niantic, Inc. 2018. 
Patent Pending - non-commercial use only |             |\n| 3D-RelNet: Joint Object and Relation Network for 3D prediction | Torch, jupyter              | https:\u002F\u002Fgithub.com\u002Fnileshkulkarni\u002Frelative3d                 | nn                                                           |             |\n| PlaneRCNN detects and reconstructs piece-wise planar surfaces from a single RGB image | Torch, RCNN                 | https:\u002F\u002Fgithub.com\u002FNVlabs\u002Fplanercnn                          | Copyright (c) 2018 NVIDIA Corp.  All Rights Reserved. This work is licensed under the [Creative Commons Attribution NonCommercial ShareAlike 4.0 License](https:\u002F\u002Fcreativecommons.org\u002Flicenses\u002Fby-nc-sa\u002F4.0\u002Flegalcode). |             |\n| OctoMap - An Efficient Probabilistic 3D Mapping Framework Based on Octrees. |                             | https:\u002F\u002Fgithub.com\u002FOctoMap\u002Foctomap                           | University of Freiburg, Copyright (C) 2009-2014, octomap: New BSD License, octovis and related libraries: GPL |             |\n| Unsupervised Monocular Depth Estimation neural network MonoDepth in PyTorch (Unofficial implementation) | PyTorch                     | https:\u002F\u002Fgithub.com\u002FOniroAI\u002FMonoDepth-PyTorch                 | nn                                                           |             |\n| Learning to Sample: A learned sampling approach for point clouds |                             | https:\u002F\u002Fgithub.com\u002Forendv\u002Flearning_to_sample                 | MIT License                                                  |             |\n| DeepMVS: Learning Multi-View Stereopsis                      | CNN, PyTorch                | https:\u002F\u002Fgithub.com\u002Fphuang17\u002FDeepMVS                          | BSD 2-clause                                                 |             |\n| DeepV2D: Video to Depth with Differentiable Structure from Motion | Tensorflow   
               | https:\u002F\u002Fgithub.com\u002Fprinceton-vl\u002FDeepV2D                      | nn                                                           |             |\n| High Quality Monocular Depth Estimation via Transfer Learning | Tensorflow                  | https:\u002F\u002Fgithub.com\u002Fpriya-dwivedi\u002FDeep-Learning\u002Ftree\u002Fmaster\u002Fdepth_estimation | nn (GPL-3.0 ?)                                               |             |\n| Deep Single-View 3D Object Reconstruction with Visual Hull Embedding | CNN, Tensorflow             | https:\u002F\u002Fgithub.com\u002Fqweas120\u002FPSVH-3d-reconstruction           | MIT License                                                  |             |\n| ScanNet is an RGB-D video dataset containing 2.5 million views in more than 1500 scans, annotated with 3D camera poses,... |                             | https:\u002F\u002Fgithub.com\u002FScanNet\u002FScanNet                           | Can be used with the restriction to give credit and include original Copyright |             |\n| Visual inspection of bridges is customarily used to identify and evaluate faults | CNN                         | https:\u002F\u002Fgithub.com\u002FShaggyshak\u002FCS543_project_Image-based-Localization-of-Bridge-Defects-with-AR-Visualization | nn                                                           |             |\n| Semantic 3D Occupancy Mapping through Efficient High Order CRFs | CNN                         | https:\u002F\u002Fgithub.com\u002Fshichaoy\u002Fsemantic_3d_mapping              | BSD-3-Clause                                                 |             |\n| Factoring Shape, Pose, and Layout from the 2D Image of a 3D Scene |                             | https:\u002F\u002Fgithub.com\u002Fshubhtuls\u002Ffactored3d                      | nn                                                           |             |\n| Motion R-CNN codebase (old)                                  | RCNN                      
  | https:\u002F\u002Fgithub.com\u002Fsimonmeister\u002Fold-motion-rcnn              | MIT License                                                  |             |\n| Geometry-Aware Symmetric Domain Adaptation for Monocular Depth Estimation | PyTorch                     | https:\u002F\u002Fgithub.com\u002Fsshan-zhao\u002FGASDA                          | nn                                                           |             |\n| 3D Scene Graph: A Structure for Unified Semantics, 3D Space, and Camera |                             | https:\u002F\u002Fgithub.com\u002FStanfordVL\u002F3DSceneGraph                   | MIT License                                                  |             |\n| Minkowski Engine is an auto-diff convolutional neural network library for high-dimensional sparse tensors | PyTorch                     | https:\u002F\u002Fgithub.com\u002Fstanfordvl\u002FMinkowskiEngine                | MIT License                                                  |             |\n| Learning Single-View 3D Reconstruction with Limited Pose Supervision (Official implementation) | Tensorflow                  | https:\u002F\u002Fgithub.com\u002Fstevenygd\u002F3d-recon                        | MIT License                                                  |             |\n| VNect: Real-time 3D Human Pose Estimation with a Single RGB Camera (Tensorflow version) | Tensorflow                  | https:\u002F\u002Fgithub.com\u002Ftimctho\u002FVNect-tensorflow                  | Apache-2.0                                                   |             |\n| 3D-LMNet: Latent Embedding Matching for Accurate and Diverse 3D Point Cloud Reconstruction from a Single Image |                             | https:\u002F\u002Fgithub.com\u002Fval-iisc\u002F3d-lmnet                         | MIT License                                                  |             |\n| Learning to Find Good Correspondences                        |                             | 
https:\u002F\u002Fgithub.com\u002Fvcg-uvic\u002Flearned-correspondence-release   | For research and evaluation only. Commercial usage requires written approval |             |\n| A Framework for the Volumetric Integration of Depth Images   |                             | https:\u002F\u002Fgithub.com\u002Fvictorprad\u002FInfiniTAM                      | non-commercial                                               |             |\n| Pixel2Mesh++: Multi-View 3D Mesh Generation via Deformation  | Tensorflow                  | https:\u002F\u002Fgithub.com\u002Fwalsvid\u002FPixel2MeshPlusPlus                | BSD-3-Clause                                                 |             |\n| Adversarial Semantic Scene Completion from a Single Depth Image (Official implementation) | Tensorflow                  | https:\u002F\u002Fgithub.com\u002Fwangyida\u002Fgan-depth-semantic3d             | nn                                                           |             |\n| SurfelWarp: Efficient Non-Volumetric Dynamic Reconstruction  |                             | https:\u002F\u002Fgithub.com\u002Fweigao95\u002Fsurfelwarp                       | BSD-3-Clause                                                 |             |\n| PCN: Point Completion Network                                | Tensorflow                  | https:\u002F\u002Fgithub.com\u002Fwentaoyuan\u002Fpcn                            | MIT License                                                  |             |\n| DISN: Deep Implicit Surface Network for High-quality Single-view 3D Reconstruction |                             | https:\u002F\u002Fgithub.com\u002FXharlie\u002FDISN                              | nn                                                           |             |\n| Real-time motion from structure                              | CNN                         | https:\u002F\u002Fgithub.com\u002Fyan99033\u002FCNN-SVO                          | nn                                                         
  |             |\n| Dense 3D Object Reconstruction from a Single Depth View      | Tensorflow                  | https:\u002F\u002Fgithub.com\u002FYang7879\u002F3D-RecGAN-extended               | MIT License                                                  |             |\n| Semi-supervised monocular depth map prediction               | Tensorflow                  | https:\u002F\u002Fgithub.com\u002FYevkuzn\u002Fsemodepth                         | GPL-3.0                                                      |             |\n| 3DFeat-Net: Weakly Supervised Local 3D Features for Point Cloud Registration | Tensorflow                  | https:\u002F\u002Fgithub.com\u002Fyewzijian\u002F3DFeatNet                       | MIT License                                                  |             |\n| Estimated Depth Map Helps Image Classification: Depth estimation with neural network, and learning on RGBD images |                             | https:\u002F\u002Fgithub.com\u002Fyihui-he\u002FEstimated-Depth-Map-Helps-Image-Classification | MIT License                                                  |             |\n| Fit 3DMM to front and side face images simultaneously.       
|                             | https:\u002F\u002Fgithub.com\u002FYinghao-Li\u002F3DMM-fitting                   | nn                                                           |             |\n| The Perfect Match: 3D Point Cloud Matching with Smoothed Densities | CNN, Tensorflow             | https:\u002F\u002Fgithub.com\u002Fzgojcic\u002F3DSmoothNet                       | BSD-3-Clause                                                 |             |\n| NeurVPS: Neural Vanishing Point Scanning via Conic Convolution | Tenosorflow                 | https:\u002F\u002Fgithub.com\u002Fzhou13\u002Fneurvps                            | MIT License                                                  |             |\n| LayoutNet: Reconstructing the 3D Room Layout from a Single RGB Image (Torch implementation) | Torch                       | https:\u002F\u002Fgithub.com\u002Fzouchuhang\u002FLayoutNet                      | MIT License                                                  |             |\n| NeRF: Neural Radiance Fields                                 |                             | https:\u002F\u002Fgithub.com\u002Fbmild\u002Fnerf                                | MIT License                                                  | 10          |\n| Local Light Field Fusion at SIGGRAPH 2019                    |                             | https:\u002F\u002Fgithub.com\u002Ffyusion\u002Fllff                              | GPL-3.0                                                      | 10          |\n| neural-volumes-learning-dynamic-renderable-volumes-from-images |                             | https:\u002F\u002Fresearch.fb.com\u002Fpublications\u002Fneural-volumes-learning-dynamic-renderable-volumes-from-images\u002F\u003Cbr\u002F>https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fneuralvolumes | BY-NC 4.0                                                    |             |\n| Learning Less is More - 6D Camera Localization via 3D Surface Regression |                         
    | https:\u002F\u002Fgithub.com\u002Fvislearn\u002FLessMore                         | BSD-3-Clause                                                 |             |\n| Local features                                               |                             | https:\u002F\u002Fgithub.com\u002Fvcg-uvic\u002Flf-net-release                   |                                                              |             |\n| Pix2Vox                                                      |                             | https:\u002F\u002Fgithub.com\u002Fhzxie\u002FPix2Vox                             |                                                              |             |\n| PlanarReconstruction: Single-Image Piece-wise Planar 3D Reconstruction via Associative Embedding | pytorch                     | https:\u002F\u002Fgithub.com\u002Fsvip-lab\u002FPlanarReconstruction             |                                                              |             |\n| Depth estimation with deep Neural networks                   |                             | https:\u002F\u002Fmedium.com\u002F@omarbarakat1995\u002Fdepth-estimation-with-deep-neural-networks-part-1-5fa6d2237d0d\u003Cbr\u002F>https:\u002F\u002Fmedium.com\u002Fdatadriveninvestor\u002Fdepth-estimation-with-deep-neural-networks-part-2-81ee374888eb\u003Cbr\u002F>https:\u002F\u002Fgithub.com\u002FMahmoudSelmy\u002FDeeperDepthEstimation\u003Cbr\u002F>https:\u002F\u002Fgithub.com\u002FMahmoudSelmy\u002FDepthEstimationVGG\u002Fblob\u002Fmaster\u002FREADME.md |                                                              |             |\n| High Quality Monocular Depth Estimation via Transfer Learning |                             | https:\u002F\u002Fgithub.com\u002Fialhashim\u002FDenseDepth\u003Cbr \u002F>https:\u002F\u002Farxiv.org\u002Fabs\u002F1812.11941 |                                                              |             |\n| D3Feat                                                        |                     
        | https:\u002F\u002Fgithub.com\u002FXuyangBai\u002FD3Feat               |                                                               |             |\n| Hierarchical Deep Stereo Matching on High Resolution Images  | pytorch           | https:\u002F\u002Fgithub.com\u002Fgengshan-y\u002Fhigh-res-stereo                | MIT          |             |\n| Structure-Aware Residual Pyramid Network for Monocular Depth Estimation | pytorch           | https:\u002F\u002Fgithub.com\u002FXt-Chen\u002FSARPN                             | nn           |             |\n| Pytorch code to construct a 3D point cloud model from single RGB image. | pytorch           | https:\u002F\u002Fgithub.com\u002Flkhphuc\u002Fpytorch-3d-point-cloud-generation | nn           |             |\n| Depth estimation from RGB images using fully convolutional neural networks | pytorch           | https:\u002F\u002Fgithub.com\u002Fkaroly-hars\u002FDE_resnet_unet_hyb            | BSD-3-Clause |             |\n| Single-Image Piece-wise Planar 3D Reconstruction via Associative Embedding | torch, tensorflow | https:\u002F\u002Fgithub.com\u002Fsvip-lab\u002FPlanarReconstruction             | MIT          |             |\n| TriDepth: Triangular Patch-based Deep Depth Prediction       | PyTorch           | https:\u002F\u002Fgithub.com\u002Fsyinari0123\u002Ftridepth                      | MIT          |             |\n| Depth Map Prediction from a Single Image using a Multi-Scale Deep Network | torch             | https:\u002F\u002Fgithub.com\u002Fimran3180\u002Fdepth-map-prediction            | nn           |             |\n| Hybrid CNN for Single Image Depth Estimation                 | torch             | https:\u002F\u002Fgithub.com\u002Fkaroly-hars\u002FDE_resnet_unet_hyb            | BSD-3-Clause |             |\n| MarrNet: 3D Shape Reconstruction via 2.5D Sketches           | torch             | https:\u002F\u002Fgithub.com\u002Fjiajunwu\u002Fmarrnet                          | nn           |          
   |\n| Consistent Video Depth Estimation                            |                   | https:\u002F\u002Froxanneluo.github.io\u002FConsistent-Video-Depth-Estimation\u002F | nn           |             |\n| HF-Net: Robust Hierarchical Localization at Large Scale      | torch, tensorflow | https:\u002F\u002Fgithub.com\u002Fethz-asl\u002Fhfnet                            | MIT          |             |\n| Hierarchical Deep Stereo Matching on High Resolution Images  | pytorch           | https:\u002F\u002Fgithub.com\u002Fgengshan-y\u002Fhigh-res-stereo                | MIT          |             |\n| Structure-Aware Residual Pyramid Network for Monocular Depth Estimation | pytorch           | https:\u002F\u002Fgithub.com\u002FXt-Chen\u002FSARPN                             | nn           |             |\n| Pytorch code to construct a 3D point cloud model from single RGB image. | pytorch           | https:\u002F\u002Fgithub.com\u002Flkhphuc\u002Fpytorch-3d-point-cloud-generation | nn           |             |\n| Depth estimation from RGB images using fully convolutional neural networks | pytorch           | https:\u002F\u002Fgithub.com\u002Fkaroly-hars\u002FDE_resnet_unet_hyb            | BSD-3-Clause |             |\n| Single-Image Piece-wise Planar 3D Reconstruction via Associative Embedding | torch, tensorflow | https:\u002F\u002Fgithub.com\u002Fsvip-lab\u002FPlanarReconstruction             | MIT          |             |\n| TriDepth: Triangular Patch-based Deep Depth Prediction       | PyTorch           | https:\u002F\u002Fgithub.com\u002Fsyinari0123\u002Ftridepth                      | MIT          |             |\n| Depth Map Prediction from a Single Image using a Multi-Scale Deep Network | torch             | https:\u002F\u002Fgithub.com\u002Fimran3180\u002Fdepth-map-prediction            | nn           |             |\n| Hybrid CNN for Single Image Depth Estimation                 | torch             | 
https:\u002F\u002Fgithub.com\u002Fkaroly-hars\u002FDE_resnet_unet_hyb            | BSD-3-Clause |             |\n| MarrNet: 3D Shape Reconstruction via 2.5D Sketches           | torch             | https:\u002F\u002Fgithub.com\u002Fjiajunwu\u002Fmarrnet                          | nn           |             |\n| Consistent Video Depth Estimation                            |                   | https:\u002F\u002Froxanneluo.github.io\u002FConsistent-Video-Depth-Estimation\u002F | nn           |             |\n| HF-Net: Robust Hierarchical Localization at Large Scale      | torch, tensorflow | https:\u002F\u002Fgithub.com\u002Fethz-asl\u002Fhfnet                            | MIT          |             |\n\n\n\n**Other Projects**\n\n| TITLE                                                        | KEYWORDS              | URL                                                          | LICENSE |\n| ------------------------------------------------------------ | --------------------- | ------------------------------------------------------------ | ------- |\n| 3D-Scene-GAN: Three-dimensional Scene Reconstruction with Generative Adversarial Networks | paper                 | https:\u002F\u002Fopenreview.net\u002Fforum?id=SkNEsmJwf                    |         |\n| Google: Deep Learning Depth Prediction                       | magazine article, GER | https:\u002F\u002Fwww.digitalproduction.com\u002F2019\u002F05\u002F27\u002Fgoogle-deep-learning-depth-prediction\u002F |         |\n| SLAM and Deep Leraning for 3D Indoor Scene Understanding     | PhD thesis            | https:\u002F\u002Fwww.doc.ic.ac.uk\u002F~ajd\u002FPublications\u002FMcCormac-J-2019-PhD-Thesis.pdf |         |\n| Dense 3D Object Reconstruction from a Single Depth View      | 3D-RecGAN++           | https:\u002F\u002Farxiv.org\u002Fabs\u002F1802.00411                             |         |\n| Moving Camera, Moving People: A Deep Learning Approach to Depth Prediction |                       | 
https:\u002F\u002Fai.googleblog.com\u002F2019\u002F05\u002Fmoving-camera-moving-people-deep.html |         |\n| Depth Estimation from a Single RGB Image                     |                       | http:\u002F\u002Fcampar.in.tum.de\u002FChair\u002FProjectDepthPrediction         |         |\n| Deep Fundamental Matrix Estimation                           |                       | http:\u002F\u002Fvladlen.info\u002Fpapers\u002Fdeep-fundamental.pdf              |         |\n| depth_estimation                                             |                       | https:\u002F\u002Ftowardsdatascience.com\u002Fdepth-estimation-on-camera-images-using-densenets-ac454caa893 |         |\n| **3D-Machine-Learning List**                                 |                       | https:\u002F\u002Fgithub.com\u002Ftimzhang642\u002F3D-Machine-Learning           |         |\n| **DEEP LEARNING-BASED 3D OBJECT RECONSTRUCTION - A SURVEY - Image-based 3D Object Reconstruction:State-of-the-Art and Trends in the DeepLearning Era** |                       | https:\u002F\u002Farxiv.org\u002Fpdf\u002F1906.06543.pdf                         |         |\n|                                                              |                       |                                                              |         |\n|                                                              |                       |                                                              |         |\n|                                                              |                       |                                                              |         |\n|                                                              |                       |                                                              |         |\n\nI2-SDF: Intrinsic Indoor Scene Reconstruction and Editing via Raytracing in Neural SDFs (CVPR 2023) https:\u002F\u002Fgithub.com\u002Fjingsenzhu\u002Fi2-sdf MIT 
\n\nhttps:\u002F\u002Fgithub.com\u002Flioryariv\u002Fidr\n\nhttps:\u002F\u002Fgithub.com\u002Fautonomousvision\u002Fdifferentiable_volumetric_rendering\n\nhttps:\u002F\u002Fgithub.com\u002FDok11\u002Fsurface-match-dataset\n\nImage-based 3D Object Reconstruction:State-of-the-Art and Trends in the DeepLearning Era https:\u002F\u002Farxiv.org\u002Fpdf\u002F1906.06543v3.pdf\n\nDense 3D Object Reconstructionfrom a Single Depth View https:\u002F\u002Farxiv.org\u002Fpdf\u002F1802.00411v2.pdf\n\nhttps:\u002F\u002Fdagshub.com\u002FOperationSavta\u002FSavtaDepth https:\u002F\u002Fcolab.research.google.com\u002Fdrive\u002F1XU4DgQ217_hUMU1dllppeQNw3pTRlHy1?usp=sharing https:\u002F\u002Fhuggingface.co\u002Fspaces\u002Fkingabzpro\u002Fsavtadepth MIT License\n\nhttps:\u002F\u002Fgithub.com\u002Fgradslam\u002Fgradslam pyTorch\n\nhttps:\u002F\u002Fgithub.com\u002Fventusff\u002Fneurecon\n\nhttps:\u002F\u002Fgithub.com\u002FtheICTlab\u002F3DUNDERWORLD-SLS-GPU_CPU\n","# 基于深度学习方法的3D重建\n\n本列表的重点是托管在Github上的开源项目。\n\n**在Github上发布的项目**\n\n| 标题                                                        | 关键词                    | URL                                                          | 许可证                                                      | 优秀程度 |\n| ------------------------------------------------------------ | --------------------------- | ------------------------------------------------------------ | ------------------------------------------------------------ | ----------- |\n| 通过迁移学习实现高质量单目深度估计 | TensorFlow, PyTorch         | https:\u002F\u002Fgithub.com\u002Fialhashim\u002FDenseDepth https:\u002F\u002Farxiv.org\u002Fabs\u002F1812.11941 | GPL-3.0                                                      |             |\n| 基于多视角立体图像的3D重建              |                             | https:\u002F\u002Fgithub.com\u002Fadahbingee\u002Fpais-mvs                       | nn                                                           |             |\n| 基于多视角单目RGB图像的3D物体重建混合集成方法 |                    
         | https:\u002F\u002Fgithub.com\u002FAjithbalakrishnan\u002F3D-Object-Reconstruction-from-Multi-View-Monocular-RGB-images | nn                                                           |             |\n| 深度3D语义场景外推                        | 混合CNN、GAN、TensorFlow   | https:\u002F\u002Fgithub.com\u002FAliAbbasi\u002FDeep-3D-Semantic-Scene-Extrapolation http:\u002F\u002Fuser.ceng.metu.edu.tr\u002F~ys\u002Fpubs\u002Fextrap-tvcj18.pdf | nn                                                           |             |\n| ScanComplete：大规模场景补全与3D扫描语义分割 | TensorFlow                  | https:\u002F\u002Fgithub.com\u002Fangeladai\u002FScanComplete                    | Apache-2.0                                                   |             |\n| AtLoc：注意力引导的相机定位                | PyTorch                     | https:\u002F\u002Fgithub.com\u002FBingCS\u002FAtLoc https:\u002F\u002Farxiv.org\u002Fabs\u002F1909.03557 | BY-NC-SA 4.0                                                 |             |\n| PointNet：用于3D分类和分割的点云深度学习 | TensorFlow, cuDNN           | https:\u002F\u002Fgithub.com\u002Fcharlesq34\u002Fpointnet                       | MIT许可证                                                    |             |\n| DeepVO的PyTorch实现                        | PyTorch, CNN                | https:\u002F\u002Fgithub.com\u002FChiWeiHsiao\u002FDeepVO-pytorch                | nn                                                           |             |\n| 全卷积几何特征：用于配准和对应关系的快速准确3D特征。 | PyTorch                     | https:\u002F\u002Fgithub.com\u002Fchrischoy\u002FFCGF                            | MIT许可证                                                    |             |\n| 用于密集点云补全的变形与采样网络（AAAI2020） | PyTorch                     | https:\u002F\u002Fgithub.com\u002FColin97\u002FMSN-Point-Cloud-Completion        | Apache-2.0                                                   |             |\n| 实时自适应深度立体视觉                    | TensorFlow                  | 
https:\u002F\u002Fgithub.com\u002FCVLAB-Unibo\u002FReal-time-self-adaptive-deep-stereo | Apache-2.0                                                   |             |\n| 几何与语义结合的半监督单目深度估计 - ACCV 2018 | TensorFlow                  | https:\u002F\u002Fgithub.com\u002FCVLAB-Unibo\u002FSemantic-Mono-Depth           | MIT许可证                                                    |             |\n| BlenderProc：用于生成深度学习数据的程序化Blender管道 | Blender                    | https:\u002F\u002Fgithub.com\u002FDLR-RM\u002FBlenderProc                        | GPL-3.0                                                      |             |\n| SingleViewReconstruction：从单一视口重建3D场景 | TensorFlow               | https:\u002F\u002Fgithub.com\u002FDLR-RM\u002FSingleViewReconstruction                        | MIT许可证                                                      |             |\n| NNCAP — 用于摄影测量的神经网络复杂方法    |                             | https:\u002F\u002Fgithub.com\u002FDok11\u002Fnn-dldm                             | nn                                                           |             |\n| 使用全卷积残差网络改进深度预测的PyTorch实现 | PyTorch                     | https:\u002F\u002Fgithub.com\u002FdontLoveBugs\u002FFCRN_pytorch                 | nn                                                           |             |\n| 用于3D物体生成和重建的改进对抗系统       | GAN                         | https:\u002F\u002Fgithub.com\u002FEdwardSmith1884\u002F3D-IWGAN                  | MIT许可证                                                    |             |\n| 视觉惯性里程计中的深度学习                | PyTorch, CNN                | https:\u002F\u002Fgithub.com\u002FElliotHYLee\u002FDeep_Visual_Inertial_Odometry | MIT许可证                                                    |             |\n| 机器视觉                                  | 列表                        | https:\u002F\u002Fgithub.com\u002FEwenwan\u002FMVision                           | nn                                                           |             |\n| Mesh R-CNN，一篇在ICCV 
2019上发表的学术论文 | PyTorch, R-CNN              | https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fmeshrcnn                 | BSD-3-Clause许可证                                           |             |\n| PyTorch3d是FAIR公司提供的用于3D数据深度学习的可重用组件库。 | PyTorch                     | https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fpytorch3d                | BSD-3-Clause许可证                                           |             |\n| 自监督稀疏转密集：来自LiDAR和单目相机的自监督深度补全 | PyTorch                     | https:\u002F\u002Fgithub.com\u002Ffangchangma\u002Fself-supervised-depth-completion | MIT许可证                                                    |             |\n| 稀疏转密集：从稀疏深度样本和单张图像进行深度预测 | PyTorch                     | https:\u002F\u002Fgithub.com\u002Ffangchangma\u002Fsparse-to-dense               | BSD许可证                                                    |             |\n| 稀疏转密集：从稀疏深度样本和单张图像进行深度预测 | PyTorch                     | https:\u002F\u002Fgithub.com\u002Ffangchangma\u002Fsparse-to-dense.pytorch       | nn                                                           |             |\n| PackNet-SfM：用于自监督单目深度估计的3D打包 | PyTorch                     | https:\u002F\u002Fgithub.com\u002FFangGet\u002FPackNet-SFM-PyTorch               | GPL-3.0                                                      |             |\n| InvSFM：通过反转运动恢复结构来揭示场景 [CVPR 2019] | TensorFlow                  | https:\u002F\u002Fgithub.com\u002Ffrancescopittaluga\u002Finvsfm                 | MIT许可证                                                    |             |\n| 使用PyTorch的深度单目视觉里程计（实验性）  | PyTorch                     | https:\u002F\u002Fgithub.com\u002Ffshamshirdar\u002FDeepVO                       | nn                                                           |             |\n| PointNet：用于3D分类和分割的点云深度学习 | PyTorch                     | https:\u002F\u002Fgithub.com\u002Ffxia22\u002Fpointnet.pytorch                   | MIT许可证                                                    |             |\n| Pix2Depth - 
从单目图像估计深度图          | Keras                       | https:\u002F\u002Fgithub.com\u002Fgautam678\u002FPix2Depth                       | GPL-3.0                                                      |             |\n| 3DRegNet：用于3D点云配准的深度神经网络    | TensorFlow                  | https:\u002F\u002Fgithub.com\u002Fgoncalo120\u002F3DRegNet                       | MIT许可证                                                    |             |\n| 神经3D网格渲染器 – 使用神经渲染器从单张图像重建3D |                             | https:\u002F\u002Fgithub.com\u002Fhiroharu-kato\u002Fmesh_reconstruction         | MIT许可证                                                    |             |\n| 实时可扩展的密集Surfel映射                |                             | https:\u002F\u002Fgithub.com\u002FHKUST-Aerial-Robotics\u002FDenseSurfelMapping  | nn                                                           |             |\n| MVDepthNet：实时多视角深度估计神经网络     | PyTorch                     | https:\u002F\u002Fgithub.com\u002FHKUST-Aerial-Robotics\u002FMVDepthNet          | nn                                                           |             |\n| DeepMatchVO：超越光度损失的自监督自我运动估计 |                             | https:\u002F\u002Fgithub.com\u002Fhlzz\u002FDeepMatchVO                          | MIT许可证                                                    |             |\n| MIRorR：通过学习表面重建进行可匹配的图像检索 | TensorFlow, CNN             | https:\u002F\u002Fgithub.com\u002Fhlzz\u002Fmirror                               | MIT许可证                                                    |             |\n| 使用深度特征重建进行单目深度估计和视觉里程计的无监督学习 | Caffe                       | https:\u002F\u002Fgithub.com\u002FHuangying-Zhan\u002FDepth-VO-Feat              | 非商业用途                                                   |             |\n| 深度学习3D视觉论文                        | 论文、列表、CN             | https:\u002F\u002Fgithub.com\u002Fhuayong\u002Fdl-vision-papers                  | nn                                                           |             |\n| Open3D 
PointNet的PyTorch实现                | PyTorch, jupyter, Open3D    | https:\u002F\u002Fgithub.com\u002Fintel-isl\u002FOpen3D-PointNet                 | MIT许可证                                                    |             |\n| 用于自动驾驶静态场景重建的语义TSDF        | PyTorch                     | https:\u002F\u002Fgithub.com\u002Firsisyphus\u002Fsemantic-tsdf                  | MIT许可证                                                    |             |\n| 带有对抗约束的弱监督3D重建                |                             | https:\u002F\u002Fgithub.com\u002Fjgwak\u002FMcRecon                             | MIT许可证                                                    |             |\n| 使用深度学习技术进行立体视觉和3D重建     | TensorFlow, CN              | https:\u002F\u002Fgithub.com\u002Fjiafeng5513\u002FEvisionNet                    | nn                                                           |             |\n| 从单目视频中无监督地学习尺度一致的深度和自我运动 | PyTorch                     | https:\u002F\u002Fgithub.com\u002FJiawangBian\u002FSC-SfMLearner-Release         | GPL-3.0                                                      |             |\n| 重新审视单张图像深度估计：迈向具有精确物体边界的更高分辨率地图（官方实现） | PyTorch                     | https:\u002F\u002Fgithub.com\u002FJunjH\u002FRevisiting_Single_Depth_Estimation  | nn                                                           |             |\n| 用于单目深度估计的卷积神经网络可视化（官方实现） | CNN, PyTorch                | https:\u002F\u002Fgithub.com\u002FJunjH\u002FVisualizing-CNNs-for-monocular-depth-estimation | MIT许可证                                                    |             |\n| DeepVO：迈向使用深度循环卷积神经网络的端到端视觉里程计 | PyTorch                     | https:\u002F\u002Fgithub.com\u002Fkrrish94\u002FDeepVO                           | nn                                                           |             |\n| DISN：用于高质量单视角3D重建的深度隐式表面网络 | Tensorflow                  | https:\u002F\u002Fgithub.com\u002Flaughtervv\u002FDISN                           | nn                                                           |  
           |\n| DeepTAM：深度跟踪与 mapping                |                             | https:\u002F\u002Fgithub.com\u002Flmb-freiburg\u002Fdeeptam                      | GPL-3.0                                                      |             |\n| DeMoN：深度与运动网络                      | Tensorflow                  | https:\u002F\u002Fgithub.com\u002Flmb-freiburg\u002Fdemon                        | GPL-3.0                                                      |             |\n| CloudWalk最近工作的DenseBody的PyTorch实现 | PyTorch                     | https:\u002F\u002Fgithub.com\u002FLotayou\u002Fdensebody_pytorch                 | GPL-3.0                                                      |             |\n| 在单目内窥镜检查中进行密集深度估计的自监督学习 | Tensorflow, Torch           | https:\u002F\u002Fgithub.com\u002Flppllppl920\u002FEndoscopyDepthEstimation-Pytorch | 非商业用途                                                   |             |\n| ContextDesc：跨模态上下文增强的局部描述符   | Tensorflow                  | https:\u002F\u002Fgithub.com\u002Flzx551402\u002Fcontextdesc                     | nn                                                           |             |\n| GL3D（基于3D重建的几何学习）：为3D重建和几何相关学习问题创建的大规模数据库 |                             | https:\u002F\u002Fgithub.com\u002Flzx551402\u002FGL3D                            | MIT许可证                                                    |             |\n| 使用全卷积残差网络进行更深层次的深度预测（官方实现） | Tensorflow                  | https:\u002F\u002Fgithub.com\u002FMahmoudSelmy\u002FDeeperDepthEstimation        | nn                                                           |             |\n| 为深度估计微调Vgg16                       | Tensorflow                  | https:\u002F\u002Fgithub.com\u002FMahmoudSelmy\u002FDepthEstimationVGG           | nn                                                           |             |\n| 使用TensorFlow的神经网络进行3D重建。视频链接见此处 |                             | https:\u002F\u002Fgithub.com\u002Fmicmelesse\u002F3D-reconstruction-with-Neural-Networks | nn         
                                                  |             |\n| 使用直接方法从单目视频中学习深度          | PyTorch                     | https:\u002F\u002Fgithub.com\u002FMightyChaos\u002FLKVOLearner                   | BSD-3-Clause                                                 |             |\n| PointNetVLAD：用于大规模场所识别的基于深度点云的检索 | Tensorflow                  | https:\u002F\u002Fgithub.com\u002Fmikacuy\u002Fpointnetvlad                      | MIT许可证                                                    |             |\n| 尝试从图像数据估计一个地区的地形          |                             | https:\u002F\u002Fgithub.com\u002Fnbelakovski\u002Ftopography_neural_net         | nn                                                           |             |\n| DDRNet：使用级联CNN为消费级深度相机去噪和优化深度图 | Tensorflow                  | https:\u002F\u002Fgithub.com\u002Fneycyanshi\u002FDDRNet                         | MIT许可证                                                    |             |\n| 从单张图像进行单目深度估计                | PyTorch                     | https:\u002F\u002Fgithub.com\u002Fnianticlabs\u002Fmonodepth2                    | 版权所有 © Niantic, Inc. 2018。专利待批 - 仅限非商业用途 |             |\n| 3D-RelNet：用于3D预测的联合对象与关系网络 | Torch, jupyter              | https:\u002F\u002Fgithub.com\u002Fnileshkulkarni\u002Frelative3d                 | nn                                                           |             |\n| PlaneRCNN从单张RGB图像中检测并重建分段平面表面 | Torch, RCNN                 | https:\u002F\u002Fgithub.com\u002FNVlabs\u002Fplanercnn                          | 版权所有 (c) 2018 NVIDIA Corp. 
保留所有权利。本作品根据[知识共享署名-非商业性使用-相同方式共享4.0许可协议](https:\u002F\u002Fcreativecommons.org\u002Flicenses\u002Fby-nc-sa\u002F4.0\u002Flegalcode)授权。 |             |\n| OctoMap - 基于八叉树的高效概率3D映射框架。 |                             | https:\u002F\u002Fgithub.com\u002FOctoMap\u002Foctomap                           | 弗莱堡大学，版权 (C) 2009-2014，octomap：新BSD许可证，octovis及相关库：GPL |             |\n| 无监督单目深度估计神经网络MonoDepth的PyTorch实现（非官方实现） | PyTorch                     | https:\u002F\u002Fgithub.com\u002FOniroAI\u002FMonoDepth-PyTorch                 | nn                                                           |             |\n| 学习采样：一种用于点云的可学习采样方法    |                             | https:\u002F\u002Fgithub.com\u002Forendv\u002Flearning_to_sample                 | MIT许可证                                                    |             |\n| DeepMVS：学习多视角立体视觉                | CNN, PyTorch                | https:\u002F\u002Fgithub.com\u002Fphuang17\u002FDeepMVS                          | BSD 2条款                                                    |             |\n| DeepV2D：利用可微分运动恢复结构将视频转换为深度 | Tensorflow                  | https:\u002F\u002Fgithub.com\u002Fprinceton-vl\u002FDeepV2D                      | nn                                                           |             |\n| 通过迁移学习实现高质量单目深度估计      | Tensorflow                  | https:\u002F\u002Fgithub.com\u002Fpriya-dwivedi\u002FDeep-Learning\u002Ftree\u002Fmaster\u002Fdepth_estimation | nn（GPL-3.0？）                                               |             |\n| 带有视觉外壳嵌入的深度单视角3D物体重建    | CNN, Tensorflow             | https:\u002F\u002Fgithub.com\u002Fqweas120\u002FPSVH-3d-reconstruction           | MIT许可证                                                    |             |\n| ScanNet是一个包含250万帧、超过1500次扫描的RGB-D视频数据集，标注了3D相机位姿,... 
|                             | https:\u002F\u002Fgithub.com\u002FScanNet\u002FScanNet                           | 可以使用，但需注明出处并包含原始版权声明 |             |\n| 桥梁的视觉检查通常用于识别和评估缺陷      | CNN                         | https:\u002F\u002Fgithub.com\u002FShaggyshak\u002FCS543_project_Image-based-Localization-of-Bridge-Defects-with-AR-Visualization | nn                                                           |             |\n| 通过高效的高阶CRF进行语义3D占用映射      | CNN                         | https:\u002F\u002Fgithub.com\u002Fshichaoy\u002Fsemantic_3d_mapping              | BSD-3-Clause                                                 |             |\n| 从3D场景的2D图像中分解形状、姿态和布局    |                             | https:\u002F\u002Fgithub.com\u002Fshubhtuls\u002Ffactored3d                      | nn                                                           |             |\n| Motion R-CNN代码库（旧版）                  | RCNN                        | https:\u002F\u002Fgithub.com\u002Fsimonmeister\u002Fold-motion-rcnn              | MIT许可证                                                    |             |\n| 几何感知的对称域适应用于单目深度估计       | PyTorch                     | https:\u002F\u002Fgithub.com\u002Fsshan-zhao\u002FGASDA                          | nn                                                           |             |\n| 3D场景图：统一语义、3D空间和相机的结构    |                             | https:\u002F\u002Fgithub.com\u002FStanfordVL\u002F3DSceneGraph                   | MIT许可证                                                    |             |\n| Minkowski Engine是一个用于高维稀疏张量的自动微分卷积神经网络库 | PyTorch                     | https:\u002F\u002Fgithub.com\u002Fstanfordvl\u002FMinkowskiEngine                | MIT许可证                                                    |             |\n| 在有限姿态监督下学习单视角3D重建（官方实现） | Tensorflow                  | https:\u002F\u002Fgithub.com\u002Fstevenygd\u002F3d-recon                        | MIT许可证                                                    |             |\n| 
VNect：使用单个RGB摄像头进行实时3D人体姿态估计（TensorFlow版本） | Tensorflow                  | https:\u002F\u002Fgithub.com\u002Ftimctho\u002FVNect-tensorflow                  | Apache-2.0                                                   |             |\n| 3D-LMNet：用于从单张图像准确且多样化地重建3D点云的潜在嵌入匹配 |                             | https:\u002F\u002Fgithub.com\u002Fval-iisc\u002F3d-lmnet                         | MIT许可证                                                    |             |\n| 学习寻找良好的对应关系                      |                             | https:\u002F\u002Fgithub.com\u002Fvcg-uvic\u002Flearned-correspondence-release   | 仅用于研究和评估。商业用途需书面批准                       |             |\n| 用于深度图像体积积分的框架                |                             | https:\u002F\u002Fgithub.com\u002Fvictorprad\u002FInfiniTAM                      | 非商业用途                                                   |             |\n| Pixel2Mesh++：通过变形进行多视角3D网格生成 | Tensorflow                  | https:\u002F\u002Fgithub.com\u002Fwalsvid\u002FPixel2MeshPlusPlus                | BSD-3-Clause                                                 |             |\n| 来自单个深度图像的对抗性语义场景补全（官方实现） | Tensorflow                  | https:\u002F\u002Fgithub.com\u002Fwangyida\u002Fgan-depth-semantic3d             | nn                                                           |             |\n| SurfelWarp：高效的非体积动态重建          |                             | https:\u002F\u002Fgithub.com\u002Fweigao95\u002Fsurfelwarp                       | BSD-3-Clause                                                 |             |\n| PCN：点云补全网络                          | Tensorflow                  | https:\u002F\u002Fgithub.com\u002Fwentaoyuan\u002Fpcn                            | MIT许可证                                                    |             |\n| DISN：用于高质量单视角3D重建的深度隐式表面网络 |                             | https:\u002F\u002Fgithub.com\u002FXharlie\u002FDISN                              | nn                                                           |    
|\n| Real-time motion from structure | CNN | https:\u002F\u002Fgithub.com\u002Fyan99033\u002FCNN-SVO | nn | |\n| Dense 3D Object Reconstruction from a Single Depth View | TensorFlow | https:\u002F\u002Fgithub.com\u002FYang7879\u002F3D-RecGAN-extended | MIT License | |\n| Semi-supervised monocular depth map prediction | Tensorflow | https:\u002F\u002Fgithub.com\u002FYevkuzn\u002Fsemodepth | GPL-3.0 | |\n| 3DFeat-Net: Weakly Supervised Local 3D Features for Point Cloud Registration | Tensorflow | https:\u002F\u002Fgithub.com\u002Fyewzijian\u002F3DFeatNet | MIT License | |\n| Estimated depth map helps image classification: depth estimation with neural networks, learned on RGBD images | | https:\u002F\u002Fgithub.com\u002Fyihui-he\u002FEstimated-Depth-Map-Helps-Image-Classification | MIT License | |\n| Fitting a 3DMM to frontal and profile face images simultaneously. | | https:\u002F\u002Fgithub.com\u002FYinghao-Li\u002F3DMM-fitting | nn | |\n| The Perfect Match: 3D Point Cloud Matching with Smoothed Densities | CNN, Tensorflow | https:\u002F\u002Fgithub.com\u002Fzgojcic\u002F3DSmoothNet | BSD-3-Clause | |\n| NeurVPS: Neural Vanishing Point Scanning via Conic Convolution | Tensorflow | https:\u002F\u002Fgithub.com\u002Fzhou13\u002Fneurvps | MIT License | |\n| LayoutNet: Reconstructing the 3D Room Layout from a Single RGB Image (Torch implementation) | Torch | https:\u002F\u002Fgithub.com\u002Fzouchuhang\u002FLayoutNet | MIT License
|  |\n| NeRF: Neural Radiance Fields | | https:\u002F\u002Fgithub.com\u002Fbmild\u002Fnerf | MIT License | 10 |\n| Local Light Field Fusion (SIGGRAPH 2019) | | https:\u002F\u002Fgithub.com\u002Ffyusion\u002Fllff | GPL-3.0 | 10 |\n| Neural Volumes: Learning Dynamic Renderable Volumes from Images | | https:\u002F\u002Fresearch.fb.com\u002Fpublications\u002Fneural-volumes-learning-dynamic-renderable-volumes-from-images\u002F\u003Cbr\u002F>https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fneuralvolumes | BY-NC 4.0 | |\n| Learning Less is More: 6D Camera Localization via 3D Surface Regression | | https:\u002F\u002Fgithub.com\u002Fvislearn\u002FLessMore | BSD-3-Clause | |\n| Local features (LF-Net) | | https:\u002F\u002Fgithub.com\u002Fvcg-uvic\u002Flf-net-release | | |\n| Pix2Vox | | https:\u002F\u002Fgithub.com\u002Fhzxie\u002FPix2Vox | | |\n| Planar reconstruction: single-image piece-wise planar 3D reconstruction via associative embedding | pytorch | https:\u002F\u002Fgithub.com\u002Fsvip-lab\u002FPlanarReconstruction | | |\n| Depth estimation with deep neural networks | | 
https:\u002F\u002Fmedium.com\u002F@omarbarakat1995\u002Fdepth-estimation-with-deep-neural-networks-part-1-5fa6d2237d0d\u003Cbr\u002F>https:\u002F\u002Fmedium.com\u002Fdatadriveninvestor\u002Fdepth-estimation-with-deep-neural-networks-part-2-81ee374888eb\u003Cbr\u002F>https:\u002F\u002Fgithub.com\u002FMahmoudSelmy\u002FDeeperDepthEstimation\u003Cbr\u002F>https:\u002F\u002Fgithub.com\u002FMahmoudSelmy\u002FDepthEstimationVGG\u002Fblob\u002Fmaster\u002FREADME.md | | |\n| High Quality Monocular Depth Estimation via Transfer Learning | | https:\u002F\u002Fgithub.com\u002Fialhashim\u002FDenseDepth\u003Cbr \u002F>https:\u002F\u002Farxiv.org\u002Fabs\u002F1812.11941 | | |\n| D3Feat | | https:\u002F\u002Fgithub.com\u002FXuyangBai\u002FD3Feat | | |\n| Hierarchical Deep Stereo Matching on High-Resolution Images | pytorch | https:\u002F\u002Fgithub.com\u002Fgengshan-y\u002Fhigh-res-stereo | MIT | |\n| Structure-Aware Residual Pyramid Network for Monocular Depth Estimation | pytorch | https:\u002F\u002Fgithub.com\u002FXt-Chen\u002FSARPN | nn | |\n| PyTorch code for reconstructing a 3D point cloud model from a single RGB image. | pytorch | https:\u002F\u002Fgithub.com\u002Flkhphuc\u002Fpytorch-3d-point-cloud-generation | nn | |\n| Depth estimation from RGB images with fully convolutional neural networks | pytorch | https:\u002F\u002Fgithub.com\u002Fkaroly-hars\u002FDE_resnet_unet_hyb | BSD-3-Clause | |\n| Single-Image Piece-wise Planar 3D Reconstruction via Associative Embedding | torch, tensorflow | https:\u002F\u002Fgithub.com\u002Fsvip-lab\u002FPlanarReconstruction | MIT | |\n| TriDepth: Triangular Patch-based Deep Depth Prediction | PyTorch | https:\u002F\u002Fgithub.com\u002Fsyinari0123\u002Ftridepth | MIT
|  |\n| Depth Map Prediction from a Single Image using a Multi-Scale Deep Network | torch | https:\u002F\u002Fgithub.com\u002Fimran3180\u002Fdepth-map-prediction | nn | |\n| Hybrid CNN for single-image depth estimation | torch | https:\u002F\u002Fgithub.com\u002Fkaroly-hars\u002FDE_resnet_unet_hyb | BSD-3-Clause | |\n| MarrNet: 3D Shape Reconstruction via 2.5D Sketches | torch | https:\u002F\u002Fgithub.com\u002Fjiajunwu\u002Fmarrnet | nn | |\n| Consistent Video Depth Estimation | | https:\u002F\u002Froxanneluo.github.io\u002FConsistent-Video-Depth-Estimation\u002F | nn | |\n| HF-Net: Robust Hierarchical Localization at Large Scale | torch, tensorflow | https:\u002F\u002Fgithub.com\u002Fethz-asl\u002Fhfnet | MIT | 
|\n\n**Other projects**\n\n| Title | Keywords | URL | License |\n| ------------------------------------------------------------ | --------------------- | ------------------------------------------------------------ | ------- |\n| 3D-Scene-GAN: Three-dimensional Scene Reconstruction with Generative Adversarial Networks | paper | https:\u002F\u002Fopenreview.net\u002Fforum?id=SkNEsmJwf | |\n| Google: deep-learning depth prediction | magazine article, German | https:\u002F\u002Fwww.digitalproduction.com\u002F2019\u002F05\u002F27\u002Fgoogle-deep-learning-depth-prediction\u002F | |\n| SLAM and deep learning for indoor scene understanding | PhD thesis | https:\u002F\u002Fwww.doc.ic.ac.uk\u002F~ajd\u002FPublications\u002FMcCormac-J-2019-PhD-Thesis.pdf | |\n| Dense 3D Object Reconstruction from a Single Depth View | 3D-RecGAN++ | https:\u002F\u002Farxiv.org\u002Fabs\u002F1802.00411 | |\n| Moving Camera, Moving People: A Deep Learning Approach to Depth Prediction | | https:\u002F\u002Fai.googleblog.com\u002F2019\u002F05\u002Fmoving-camera-moving-people-deep.html | |\n| Depth estimation from a single RGB image | | http:\u002F\u002Fcampar.in.tum.de\u002FChair\u002FProjectDepthPrediction | |\n| Deep Fundamental Matrix Estimation | 
| http:\u002F\u002Fvladlen.info\u002Fpapers\u002Fdeep-fundamental.pdf | |\n| depth_estimation | | https:\u002F\u002Ftowardsdatascience.com\u002Fdepth-estimation-on-camera-images-using-densenets-ac454caa893 | |\n| **3D Machine Learning list** | | https:\u002F\u002Fgithub.com\u002Ftimzhang642\u002F3D-Machine-Learning | |\n| **Survey of deep-learning 3D object reconstruction: Image-based 3D Object Reconstruction: State-of-the-Art and Trends in the Deep Learning Era** | | https:\u002F\u002Farxiv.org\u002Fpdf\u002F1906.06543.pdf | |\n\nI2-SDF: Intrinsic Indoor Scene Reconstruction and Editing via Raytracing in Neural SDFs (CVPR 2023) https:\u002F\u002Fgithub.com\u002Fjingsenzhu\u002Fi2-sdf MIT\n\nhttps:\u002F\u002Fgithub.com\u002Flioryariv\u002Fidr\n\nhttps:\u002F\u002Fgithub.com\u002Fautonomousvision\u002Fdifferentiable_volumetric_rendering\n\nhttps:\u002F\u002Fgithub.com\u002FDok11\u002Fsurface-match-dataset\n\nImage-based 3D Object Reconstruction: State-of-the-Art and Trends in the Deep Learning Era https:\u002F\u002Farxiv.org\u002Fpdf\u002F1906.06543v3.pdf\n\nDense 3D object reconstruction from a single depth view https:\u002F\u002Farxiv.org\u002Fpdf\u002F1802.00411v2.pdf\n\nhttps:\u002F\u002Fdagshub.com\u002FOperationSavta\u002FSavtaDepth https:\u002F\u002Fcolab.research.google.com\u002Fdrive\u002F1XU4DgQ217_hUMU1dllppeQNw3pTRlHy1?usp=sharing 
https:\u002F\u002Fhuggingface.co\u002Fspaces\u002Fkingabzpro\u002Fsavtadepth MIT License\n\nhttps:\u002F\u002Fgithub.com\u002Fgradslam\u002Fgradslam PyTorch\n\nhttps:\u002F\u002Fgithub.com\u002Fventusff\u002Fneurecon\n\nhttps:\u002F\u002Fgithub.com\u002FtheICTlab\u002F3DUNDERWORLD-SLS-GPU_CPU","# 3D-Reconstruction-with-Deep-Learning-Methods Quick-Start Guide\n\n**Note**: this project is an open-source project list (an Awesome List) that aggregates many deep-learning-based 3D reconstruction tools; it is not a single runnable application. The guide below uses two representative, widely used entries, **PointNet** (3D classification and segmentation) and **DenseDepth** (monocular depth estimation), to show how development with such projects typically proceeds. Most projects on the list follow a similar installation pattern.\n\n## 1. Environment Setup\n\nBefore starting, make sure your development environment meets the baseline requirements below. Because model training and inference are involved, an environment with an NVIDIA GPU is strongly recommended.\n\n*   **Operating system**: Linux (Ubuntu 18.04\u002F20.04 recommended) or macOS; Windows users should use WSL2.\n*   **Hardware**:\n    *   NVIDIA GPU (8 GB+ VRAM recommended)\n    *   CUDA Toolkit (version must match your PyTorch\u002FTensorFlow build; 11.0+ is typical)\n    *   cuDNN\n*   **Software dependencies**:\n    *   Python 3.6 - 3.9\n    *   Git\n    *   Conda (recommended for managing virtual environments)\n\n## 2. Installation\n\nThe steps below use the PyTorch implementation of **PointNet** as an example. Other projects (DenseDepth, ScanComplete, etc.) install similarly; just substitute the corresponding repository URL.\n\n### 2.1 Create a virtual environment\nUse Conda to create an isolated Python environment and avoid dependency conflicts.\n\n```bash\nconda create -n pointnet python=3.8\nconda activate pointnet\n```\n\n### 2.2 Install a deep-learning framework\nInstall PyTorch as the project requires. A regional mirror (e.g. the Tsinghua PyPI mirror) can speed up downloads in mainland China.\n\n```bash\n# Install PyTorch (CUDA 11.3 example)\npip install torch torchvision torchaudio --index-url https:\u002F\u002Fdownload.pytorch.org\u002Fwhl\u002Fcu113\n```\n\nIf the project is TensorFlow-based (e.g. DenseDepth), install instead:\n\n```bash\n# Install TensorFlow-GPU via the Tsinghua mirror\npip install tensorflow-gpu==2.6.0 -i https:\u002F\u002Fpypi.tuna.tsinghua.edu.cn\u002Fsimple\n```\n\n### 2.3 Clone the project code\nClone the chosen project's source code from GitHub.\n\n```bash\n# Example: clone the PointNet PyTorch implementation\ngit clone https:\u002F\u002Fgithub.com\u002Ffxia22\u002Fpointnet.pytorch.git\ncd pointnet.pytorch\n```\n\n### 2.4 Install project dependencies\nInside the project directory, install the libraries listed in `requirements.txt`.\n\n```bash\npip install -r requirements.txt -i https:\u002F\u002Fpypi.tuna.tsinghua.edu.cn\u002Fsimple\n```\n\n*Note: some projects also compile CUDA extensions; if that step fails, make sure `build-essential` and a matching `cuda-toolkit` are installed.*\n\n## 3. 
Basic Usage\n\nOnce installation finishes, run the project's scripts for data preprocessing, training, or testing. The simplest **inference\u002Ftesting** flow is shown below.\n\n### 3.1 Prepare data\nMost 3D reconstruction projects require specific dataset formats (e.g. ModelNet40, ShapeNet); projects usually ship download scripts or instructions.\n\n```bash\n# Example: download and preprocess ModelNet40 (exact commands vary by project)\npython scripts\u002Fdownload_modelnet.py\npython scripts\u002Fpreprocess_data.py\n```\n\n### 3.2 Run a pretrained model or test\nUse the provided pretrained weights on a sample to verify the environment is configured correctly.\n\n```bash\n# Example: run the PointNet classification test\n# --model_path points to the downloaded pretrained weights\npython classify.py --model_path checkpoints\u002Fmodel.pth --input_path data\u002Ftest_sample.npy\n```\n\nFor monocular depth estimation projects (e.g. DenseDepth), a run looks like:\n\n```bash\n# Example: estimate depth for a single image\npython run_inference.py --image_path images\u002Fsample.jpg --output_dir results\u002F\n```\n\n### 3.3 Start training (optional)\nTo train a model from scratch, use the training script with your parameters.\n\n```bash\n# Example: start training\npython train.py --dataset modelnet40 --batch_size 32 --epochs 50\n```\n\n**Tip**: the other projects on the list (e.g. `ScanComplete`, `Mesh R-CNN`, `DeepVO`) are used in much the same way; see the `README.md` at each repository root for parameter details and dataset links.","An autonomous-driving startup is building high-fidelity 3D semantic maps from monocular dash-camera footage of city streets to train its perception algorithms.\n\n### Without 3D-Reconstruction-with-Deep-Learning-Methods\n- **Expensive data acquisition**: accurate depth requires costly LiDAR hardware or manual modeling, blowing the data-collection budget.\n- **Incomplete scene recovery**: classical multi-view geometry often fails on texture-poor regions (blank walls, road surfaces), leaving holes and noise in the reconstructed 3D models.\n- **Geometry without semantics**: reconstructed models carry no semantic labels, so roads, vehicles, and pedestrians cannot be distinguished without an extra, complex post-processing and alignment pipeline.\n- **Slow iteration**: the team must hand-integrate scattered open-source codebases, environment conflicts are frequent, and going from raw data to a usable model often takes weeks.\n\n### With 3D-Reconstruction-with-Deep-Learning-Methods\n- **Low-cost, high-accuracy reconstruction**: projects on the list such as `DenseDepth` estimate high-quality depth maps from monocular RGB alone via transfer learning, sharply lowering the hardware bar.\n- **Automatic completion of missing detail**: algorithms such as `ScanComplete` or `MSN-Point-Cloud-Completion` infer and fill occluded or unscanned geometry, producing dense, smooth point clouds.\n- **Fused geometry and semantics**: approaches such as `Deep 3D Semantic Scene Extrapolation` emit semantic segmentation during reconstruction, yielding labeled 3D scenes directly.\n- **One-stop development**: mature PyTorch\u002FTensorFlow implementations (e.g. `PointNet`) can be located and deployed straight from the list, cutting weeks of integration down to days and accelerating algorithm validation.\n\nBy aggregating state-of-the-art open-source projects, 3D-Reconstruction-with-Deep-Learning-Methods turns high-barrier 3D 
visual reconstruction into a standardized workflow that can be deployed quickly, markedly improving R&D efficiency and model quality.","https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fnatowi_3D-Reconstruction-with-Deep-Learning-Methods_263f0513.png","natowi",null,"https:\u002F\u002Foss.gittoolsai.com\u002Favatars\u002Fnatowi_936a3ebe.jpg"," ","https:\u002F\u002Fgithub.com\u002Fnatowi",1013,131,"2026-03-31T14:37:12","Unlicense",4,"","Not specified (some projects mention TensorFlow, PyTorch, and cuDNN, which usually implies an NVIDIA GPU, but no specific model or VRAM requirement is given)","Not specified",{"notes":90,"python":88,"dependencies":91},"This README is a list of open-source projects rather than a single tool, so there is no unified runtime requirement. The listed projects depend on different deep-learning frameworks (TensorFlow, PyTorch, Keras, Caffe) and specific libraries (cuDNN, Blender). Choose a concrete project from the list (e.g. DenseDepth, PointNet, ScanComplete) and consult its GitHub repository for detailed installation and environment instructions.",[92,93,94,95,96,97],"TensorFlow","PyTorch","cuDNN","Keras","Caffe","Blender",[13,54],[100,101,102,103,104,105],"list","deep-learning","deep-neural-networks","depth-prediction","3d-reconstruction","depth-estimation","2026-03-27T02:49:30.150509","2026-04-06T09:46:11.071153",[],[]]
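The scenario above hinges on one step the listed depth-estimation projects share: turning a predicted depth map into 3D geometry. As a framework-agnostic illustration, here is a minimal NumPy sketch of pinhole back-projection; the intrinsics (`fx`, `fy`, `cx`, `cy`) and the flat 2 m depth map are hypothetical values, not taken from any specific project on the list.

```python
import numpy as np

def depth_to_point_cloud(depth, fx, fy, cx, cy):
    """Back-project a depth map of shape (H, W) into an (N, 3) point cloud
    using the pinhole model: X = (u - cx) * Z / fx, Y = (v - cy) * Z / fy."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))  # pixel coordinate grids, each (H, W)
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    pts = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return pts[pts[:, 2] > 0]  # drop invalid (zero-depth) pixels

# Hypothetical intrinsics for a 640x480 camera; depth map is a flat wall 2 m away
depth = np.full((480, 640), 2.0, dtype=np.float32)
cloud = depth_to_point_cloud(depth, fx=525.0, fy=525.0, cx=319.5, cy=239.5)
print(cloud.shape)  # (307200, 3)
```

In practice the depth map would come from a model such as DenseDepth and the intrinsics from camera calibration; the back-projection step itself is the same.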
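Point-cloud completion entries on the list (e.g. PCN) are commonly evaluated with the Chamfer distance between the predicted and ground-truth clouds. A minimal NumPy sketch of the symmetric metric follows; it uses the naive O(N·M) pairwise formulation, which is fine for small clouds but would need a KD-tree for large ones. The toy point sets are illustrative only.

```python
import numpy as np

def chamfer_distance(a, b):
    """Symmetric Chamfer distance between point sets a (N, 3) and b (M, 3):
    mean nearest-neighbor distance from a to b, plus the reverse direction."""
    d2 = np.sum((a[:, None, :] - b[None, :, :]) ** 2, axis=-1)  # (N, M) squared distances
    return np.sqrt(d2.min(axis=1)).mean() + np.sqrt(d2.min(axis=0)).mean()

# Toy check: a cloud compared with itself has zero Chamfer distance
pts = np.random.rand(128, 3)
print(chamfer_distance(pts, pts))  # 0.0
```

Variants in the literature differ in whether squared distances are used and whether the two directions are averaged or summed; check the convention of the specific project before comparing numbers.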