[{"data":1,"prerenderedAt":-1},["ShallowReactive",2],{"similar-bluestyle97--awesome-3d-reconstruction-papers":3,"tool-bluestyle97--awesome-3d-reconstruction-papers":65},[4,23,32,40,49,57],{"id":5,"name":6,"github_repo":7,"description_zh":8,"stars":9,"difficulty_score":10,"last_commit_at":11,"category_tags":12,"status":22},2268,"ML-For-Beginners","microsoft\u002FML-For-Beginners","ML-For-Beginners 是由微软推出的一套系统化机器学习入门课程，旨在帮助零基础用户轻松掌握经典机器学习知识。这套课程将学习路径规划为 12 周，包含 26 节精炼课程和 52 道配套测验，内容涵盖从基础概念到实际应用的完整流程，有效解决了初学者面对庞大知识体系时无从下手、缺乏结构化指导的痛点。\n\n无论是希望转型的开发者、需要补充算法背景的研究人员，还是对人工智能充满好奇的普通爱好者，都能从中受益。课程不仅提供了清晰的理论讲解，还强调动手实践，让用户在循序渐进中建立扎实的技能基础。其独特的亮点在于强大的多语言支持，通过自动化机制提供了包括简体中文在内的 50 多种语言版本，极大地降低了全球不同背景用户的学习门槛。此外，项目采用开源协作模式，社区活跃且内容持续更新，确保学习者能获取前沿且准确的技术资讯。如果你正寻找一条清晰、友好且专业的机器学习入门之路，ML-For-Beginners 将是理想的起点。",85267,2,"2026-04-18T11:00:28",[13,14,15,16,17,18,19,20,21],"图像","数据工具","视频","插件","Agent","其他","语言模型","开发框架","音频","ready",{"id":24,"name":25,"github_repo":26,"description_zh":27,"stars":28,"difficulty_score":29,"last_commit_at":30,"category_tags":31,"status":22},5784,"funNLP","fighting41love\u002FfunNLP","funNLP 是一个专为中文自然语言处理（NLP）打造的超级资源库，被誉为\"NLP 民工的乐园”。它并非单一的软件工具，而是一个汇集了海量开源项目、数据集、预训练模型和实用代码的综合性平台。\n\n面对中文 NLP 领域资源分散、入门门槛高以及特定场景数据匮乏的痛点，funNLP 提供了“一站式”解决方案。这里不仅涵盖了分词、命名实体识别、情感分析、文本摘要等基础任务的标准工具，还独特地收录了丰富的垂直领域资源，如法律、医疗、金融行业的专用词库与数据集，甚至包含古诗词生成、歌词创作等趣味应用。其核心亮点在于极高的全面性与实用性，从基础的字典词典到前沿的 BERT、GPT-2 模型代码，再到高质量的标注数据和竞赛方案，应有尽有。\n\n无论是刚刚踏入 NLP 领域的学生、需要快速验证想法的算法工程师，还是从事人工智能研究的学者，都能在这里找到急需的“武器弹药”。对于开发者而言，它能大幅减少寻找数据和复现模型的时间；对于研究者，它提供了丰富的基准测试资源和前沿技术参考。funNLP 以开放共享的精神，极大地降低了中文自然语言处理的开发与研究成本，是中文 AI 社区不可或缺的宝藏仓库。",79857,1,"2026-04-08T20:11:31",[19,14,18],{"id":33,"name":34,"github_repo":35,"description_zh":36,"stars":37,"difficulty_score":29,"last_commit_at":38,"category_tags":39,"status":22},5773,"cs-video-courses","Developer-Y\u002Fcs-video-courses","cs-video-courses 是一个精心整理的计算机科学视频课程清单，旨在为自学者提供系统化的学习路径。它汇集了全球知名高校（如加州大学伯克利分校、新南威尔士大学等）的完整课程录像，涵盖从编程基础、数据结构与算法，到操作系统、分布式系统、数据库等核心领域，并深入延伸至人工智能、机器学习、量子计算及区块链等前沿方向。\n\n面对网络上零散且质量参差不齐的教学资源，cs-video-courses 解决了学习者难以找到成体系、高难度大学级别课程的痛点。该项目严格筛选内容，仅收录真正的大学层级课程，排除了碎片化的简短教程或商业广告，确保用户能接触到严谨的学术内容。\n\n这份清单特别适合希望夯实计算机基础的开发者、需要补充特定领域知识的研究人员，以及渴望像在校生一样系统学习计算机科学的自学者。其独特的技术亮点在于分类极其详尽，不仅包含传统的软件工程与网络安全，还细分了生成式 AI、大语言模型、计算生物学等新兴学科，并直接链接至官方视频播放列表，让用户能一站式获取高质量的教育资源，免费享受世界顶尖大学的课堂体验。",79792,"2026-04-08T22:03:59",[18,13,14,20],{"id":41,"name":42,"github_repo":43,"description_zh":44,"stars":45,"difficulty_score":46,"last_commit_at":47,"category_tags":48,"status":22},3128,"ragflow","infiniflow\u002Fragflow","RAGFlow 是一款领先的开源检索增强生成（RAG）引擎，旨在为大语言模型构建更精准、可靠的上下文层。它巧妙地将前沿的 RAG 技术与智能体（Agent）能力相结合，不仅支持从各类文档中高效提取知识，还能让模型基于这些知识进行逻辑推理和任务执行。\n\n在大模型应用中，幻觉问题和知识滞后是常见痛点。RAGFlow 通过深度解析复杂文档结构（如表格、图表及混合排版），显著提升了信息检索的准确度，从而有效减少模型“胡编乱造”的现象，确保回答既有据可依又具备时效性。其内置的智能体机制更进一步，使系统不仅能回答问题，还能自主规划步骤解决复杂问题。\n\n这款工具特别适合开发者、企业技术团队以及 AI 研究人员使用。无论是希望快速搭建私有知识库问答系统，还是致力于探索大模型在垂直领域落地的创新者，都能从中受益。RAGFlow 提供了可视化的工作流编排界面和灵活的 API 接口，既降低了非算法背景用户的上手门槛，也满足了专业开发者对系统深度定制的需求。作为基于 Apache 2.0 协议开源的项目，它正成为连接通用大模型与行业专有知识之间的重要桥梁。",77062,3,"2026-04-04T04:44:48",[17,13,20,19,18],{"id":50,"name":51,"github_repo":52,"description_zh":53,"stars":54,"difficulty_score":46,"last_commit_at":55,"category_tags":56,"status":22},519,"PaddleOCR","PaddlePaddle\u002FPaddleOCR","PaddleOCR 是一款基于百度飞桨框架开发的高性能开源光学字符识别工具包。它的核心能力是将图片、PDF 等文档中的文字提取出来，转换成计算机可读取的结构化数据，让机器真正“看懂”图文内容。\n\n面对海量纸质或电子文档，PaddleOCR 解决了人工录入效率低、数字化成本高的问题。尤其在人工智能领域，它扮演着连接图像与大型语言模型（LLM）的桥梁角色，能将视觉信息直接转化为文本输入，助力智能问答、文档分析等应用场景落地。\n\nPaddleOCR 适合开发者、算法研究人员以及有文档自动化需求的普通用户。其技术优势十分明显：不仅支持全球 100 多种语言的识别，还能在 
Windows、Linux、macOS 等多个系统上运行，并灵活适配 CPU、GPU、NPU 等各类硬件。作为一个轻量级且社区活跃的开源项目，PaddleOCR 既能满足快速集成的需求，也能支撑前沿的视觉语言研究，是处理文字识别任务的理想选择。",75895,"2026-04-18T23:09:57",[19,13,20,18],{"id":58,"name":59,"github_repo":60,"description_zh":61,"stars":62,"difficulty_score":29,"last_commit_at":63,"category_tags":64,"status":22},3215,"awesome-machine-learning","josephmisiti\u002Fawesome-machine-learning","awesome-machine-learning 是一份精心整理的机器学习资源清单，汇集了全球优秀的机器学习框架、库和软件工具。面对机器学习领域技术迭代快、资源分散且难以甄选的痛点，这份清单按编程语言（如 Python、C++、Go 等）和应用场景（如计算机视觉、自然语言处理、深度学习等）进行了系统化分类，帮助使用者快速定位高质量项目。\n\n它特别适合开发者、数据科学家及研究人员使用。无论是初学者寻找入门库，还是资深工程师对比不同语言的技术选型，都能从中获得极具价值的参考。此外，清单还延伸提供了免费书籍、在线课程、行业会议、技术博客及线下聚会等丰富资源，构建了从学习到实践的全链路支持体系。\n\n其独特亮点在于严格的维护标准：明确标记已停止维护或长期未更新的项目，确保推荐内容的时效性与可靠性。作为机器学习领域的“导航图”，awesome-machine-learning 以开源协作的方式持续更新，旨在降低技术探索门槛，让每一位从业者都能高效地站在巨人的肩膀上创新。",72149,"2026-04-03T21:50:24",[20,18],{"id":66,"github_repo":67,"name":68,"description_en":69,"description_zh":70,"ai_summary_zh":70,"readme_en":71,"readme_zh":72,"quickstart_zh":73,"use_case_zh":74,"hero_image_url":75,"owner_login":76,"owner_name":77,"owner_avatar_url":78,"owner_bio":79,"owner_company":80,"owner_location":81,"owner_email":82,"owner_twitter":83,"owner_website":84,"owner_url":85,"languages":83,"stars":86,"forks":87,"last_commit_at":88,"license":83,"difficulty_score":29,"env_os":89,"env_gpu":90,"env_ram":90,"env_deps":91,"category_tags":94,"github_topics":83,"view_count":10,"oss_zip_url":83,"oss_zip_packed_at":83,"status":22,"created_at":95,"updated_at":96,"faqs":97,"releases":98},9362,"bluestyle97\u002Fawesome-3d-reconstruction-papers","awesome-3d-reconstruction-papers","A collection of 3D reconstruction papers in the deep learning era.","awesome-3d-reconstruction-papers 是一个专注于深度学习时代 3D 重建领域的论文精选合集。它致力于解决研究人员和开发者在面对海量学术文献时，难以快速定位高质量、分类清晰的研究成果的痛点。\n\n该资源将复杂的 3D 重建技术体系化地梳理为对象级（单视图、多视图、无监督）、场景级、神经表面表示以及综述等多个维度。其独特亮点在于不仅罗列论文标题，还详细标注了每篇研究采用的几何表示方法（如点云、网格、体素等）、发表会议（如 CVPR、ICCV、NeurIPS）以及对应的项目主页或开源代码链接。这种结构化的整理方式，极大地降低了追踪前沿算法复现细节的门槛。\n\n无论是希望快速了解领域全貌的初学者，还是需要寻找特定技术路线参考的资深研究员，亦或是正在探索 3D 视觉落地的工程师，都能从中高效获取所需信息。通过持续更新的社区贡献，awesome-3d-reconstruction-papers 已成为连接理论与工程实践的重要桥梁，帮助用户在 3D 重建的探索之路上少走弯路。","# Awesome 3D Reconstruction Papers\n[![Awesome](https:\u002F\u002Fawesome.re\u002Fbadge.svg)](https:\u002F\u002Fawesome.re)\n\nA collection of 3D reconstruction papers in the deep learning era. 
Feel free to contribute :)\n\nTable of Contents\n=================\n\n  * [Object-level](#object-level)\n     * [Single-view](#single-view)\n     * [Multi-view](#multi-view)\n     * [Unsupervised](#unsupervised)\n  * [Scene-level](#scene-level)\n     * [Single-view](#single-view-1)\n     * [Multi-view](#multi-view-1)\n  * [Neural-Surface](#neural-surface)\n     * [Multi-view](#multi-view-2)\n     * [Point-cloud](#point-cloud)\n     * [RGB-D](#rgb-d)\n  * [Survey](#survey)\n\n## Object-level\n\n### Single-view\n\n| Paper | Representation| Publisher | Project\u002FCode |\n| :----------------------------------------------------------: | :-------: | :-------: | :-----------------------------------------------------: |\n| [A Point Set Generation Network for 3D Object Reconstruction from a Single Image](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2017\u002Fhtml\u002FFan_A_Point_Set_CVPR_2017_paper.html) | Point Cloud | CVPR 2017 | [Code](https:\u002F\u002Fgithub.com\u002Ffanhqme\u002FPointSetGeneration) |\n| [SurfNet: Generating 3D Shape Surfaces Using Deep Residual Networks](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2017\u002Fhtml\u002FSinha_SurfNet_Generating_3D_CVPR_2017_paper.html) | Mesh | CVPR 2017 | [Code](https:\u002F\u002Fgithub.com\u002Fsinhayan\u002Fsurfnet) |\n| [OctNet: Learning Deep 3D Representations at High Resolutions](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2017\u002Fhtml\u002FRiegler_OctNet_Learning_Deep_CVPR_2017_paper.html) | Voxel | CVPR 2017 | [Code](https:\u002F\u002Fgithub.com\u002Fgriegler\u002Foctnet) |\n| [Rethinking Reprojection: Closing the Loop for Pose-Aware Shape Reconstruction From a Single Image](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_iccv_2017\u002Fhtml\u002FZhu_Rethinking_Reprojection_Closing_ICCV_2017_paper.html) | Voxel | ICCV 2017 | \u002F |\n| [MarrNet: 3D Shape Reconstruction via 2.5D Sketches](https:\u002F\u002Fproceedings.neurips.cc\u002Fpaper\u002F2017\u002Fhash\u002Fad972f10e0800b49d76fed33a21f6698-Abstract.html) | Voxel | NIPS 2017 | [Project](http:\u002F\u002Fmarrnet.csail.mit.edu\u002F) |\n| [Hierarchical Surface Prediction for 3D Object Reconstruction](https:\u002F\u002Farxiv.org\u002Fabs\u002F1704.00710) | Voxel | 3DV 2017 | [Code](https:\u002F\u002Fgithub.com\u002Fchaene\u002Fhsp) |\n| [Image2Mesh: A Learning Framework for Single Image 3D Reconstruction](https:\u002F\u002Farxiv.org\u002Fabs\u002F1711.10669) | Mesh | ACCV 2018 | [Code](https:\u002F\u002Fgithub.com\u002Fjhonykaesemodel\u002Fimage2mesh) |\n| [Learning Efficient Point Cloud Generation for Dense 3D Object Reconstruction](https:\u002F\u002Faaai.org\u002Focs\u002Findex.php\u002FAAAI\u002FAAAI18\u002Fpaper\u002Fview\u002F16530) | Point Cloud | AAAI 2018 | [Project](https:\u002F\u002Fchenhsuanlin.bitbucket.io\u002F3D-point-cloud-generation\u002F) |\n| [A Papier-Mâché Approach to Learning 3D Surface Generation](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2018\u002Fhtml\u002FGroueix_A_Papier-Mache_Approach_CVPR_2018_paper.html) | Mesh | CVPR 2018 | [Project](http:\u002F\u002Fimagine.enpc.fr\u002F~groueixt\u002Fatlasnet\u002F) |\n| [Pixels, voxels, and views: A study of shape representations for single view 3D object shape prediction](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2018\u002Fhtml\u002FShin_Pixels_Voxels_and_CVPR_2018_paper.html) | Generic | CVPR 2018 | [Project](https:\u002F\u002Fwww.ics.uci.edu\u002F~daeyuns\u002Fpixels-voxels-views\u002F) |\n| [Im2Struct: Recovering 3D Shape Structure 
From a Single RGB Image](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2018\u002Fhtml\u002FNiu_Im2Struct_Recovering_3D_CVPR_2018_paper.html) | Parts | CVPR 2018 | [Code](https:\u002F\u002Fgithub.com\u002Fchengjieniu\u002FIm2Struct) |\n| [Matryoshka Networks: Predicting 3D Geometry via Nested Shape Layers](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2018\u002Fhtml\u002FRichter_Matryoshka_Networks_Predicting_CVPR_2018_paper.html) | Voxel | CVPR 2018 | [Code](https:\u002F\u002Fbitbucket.org\u002Fvisinf\u002Fprojects-2018-matryoshka\u002Fsrc\u002Fmaster\u002F) |\n| [Multi-View Consistency as Supervisory Signal for Learning Shape and Pose Prediction](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2018\u002Fhtml\u002FTulsiani_Multi-View_Consistency_as_CVPR_2018_paper.html) | Voxel | CVPR 2018 | [Project](https:\u002F\u002Fshubhtuls.github.io\u002FmvcSnP\u002F) |\n| [Efficient Dense Point Cloud Object Reconstruction using Deformation Vector Fields](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_ECCV_2018\u002Fhtml\u002FKejie_Li_Efficient_Dense_Point_ECCV_2018_paper.html) | Point Cloud | ECCV 2018 | \u002F |\n| [GAL: Geometric Adversarial Loss for Single-View 3D-Object Reconstruction](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_ECCV_2018\u002Fhtml\u002FLi_Jiang_GAL_Geometric_Adversarial_ECCV_2018_paper.html) | Point Cloud | ECCV 2018 | \u002F |\n| [Learning Category-Specific Mesh Reconstruction from Image Collections](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_ECCV_2018\u002Fhtml\u002FAngjoo_Kanazawa_Learning_Category-Specific_Mesh_ECCV_2018_paper.html) | Mesh | ECCV 2018 | [Project](https:\u002F\u002Fakanazawa.github.io\u002Fcmr\u002F) |\n| [Learning Shape Priors for Single-View 3D Completion and Reconstruction](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_ECCV_2018\u002Fhtml\u002FJiajun_Wu_Learning_3D_Shape_ECCV_2018_paper.html) | Voxel | ECCV 2018 | \u002F |\n| [Learning Single-View 3D Reconstruction with Limited Pose Supervision](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_ECCV_2018\u002Fhtml\u002FGuandao_Yang_A_Unified_Framework_ECCV_2018_paper.html) | Voxel | ECCV 2018 | [Code](https:\u002F\u002Fgithub.com\u002Fstevenygd\u002F3d-recon) |\n| [Pixel2Mesh: Generating 3D Mesh Models from Single RGB Images](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_ECCV_2018\u002Fhtml\u002FNanyang_Wang_Pixel2Mesh_Generating_3D_ECCV_2018_paper.html) | Mesh | ECCV 2018 | [Code](https:\u002F\u002Fgithub.com\u002Fnywang16\u002FPixel2Mesh) |\n| [Residual MeshNet: Learning to Deform Meshes for Single-View 3D Reconstruction](https:\u002F\u002Fieeexplore.ieee.org\u002Fabstract\u002Fdocument\u002F8491025) | Mesh | 3DV 2018 | \u002F |\n| [Learning to Reconstruct Shapes from Unseen Classes](https:\u002F\u002Fproceedings.neurips.cc\u002Fpaper\u002F2018\u002Fhash\u002F208e43f0e45c4c78cafadb83d2888cb6-Abstract.html) | Generic | NIPS 2018 | [Project](http:\u002F\u002Fgenre.csail.mit.edu\u002F) |\n| [Multi-View Silhouette and Depth Decomposition for High Resolution 3D Object Representation](https:\u002F\u002Fproceedings.neurips.cc\u002Fpaper\u002F2018\u002Fhash\u002F39ae2ed11b14a4ccb41d35e9d1ba5d11-Abstract.html) | Voxel | NIPS 2018 | [Code](https:\u002F\u002Fgithub.com\u002FEdwardSmith1884\u002FMulti-View-Silhouette-and-Depth-Decomposition-for-High-Resolution-3D-Object-Representation) |\n| [MVPNet: Multi-View Point Regression Networks for 3D Object Reconstruction from A Single 
Image](https:\u002F\u002Fojs.aaai.org\u002Findex.php\u002FAAAI\u002Farticle\u002Fview\u002F4923) | Point Cloud | AAAI 2019 | [Project](https:\u002F\u002Fjingluw.github.io\u002Fprojects\u002Fmvpnet\u002F) |\n| [Deep Single-View 3D Object Reconstruction with Visual Hull Embedding](https:\u002F\u002Fojs.aaai.org\u002F\u002Findex.php\u002FAAAI\u002Farticle\u002Fview\u002F4922) | Voxel | AAAI 2019 | [Code](https:\u002F\u002Fgithub.com\u002FHanqingWangAI\u002FPSVH-3d-reconstruction) |\n| [Occupancy Networks: Learning 3D Reconstruction in Function Space](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_CVPR_2019\u002Fhtml\u002FMescheder_Occupancy_Networks_Learning_3D_Reconstruction_in_Function_Space_CVPR_2019_paper.html) | Implicit | CVPR 2019 | [Code](https:\u002F\u002Fgithub.com\u002Fautonomousvision\u002Foccupancy_networks) |\n| [Learning Implicit Fields for Generative Shape Modeling](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_CVPR_2019\u002Fhtml\u002FChen_Learning_Implicit_Fields_for_Generative_Shape_Modeling_CVPR_2019_paper.html) | Implicit | CVPR 2019 | [Project](https:\u002F\u002Fwww.sfu.ca\u002F~zhiqinc\u002Fimgan\u002FReadme.html) |\n| [A Skeleton-Bridged Deep Learning Approach for Generating Meshes of Complex Topologies From Single RGB Images](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_CVPR_2019\u002Fhtml\u002FTang_A_Skeleton-Bridged_Deep_Learning_Approach_for_Generating_Meshes_of_Complex_CVPR_2019_paper.html) | Mesh | CVPR 2019 | [Code](https:\u002F\u002Fgithub.com\u002Ftangjiapeng\u002FSkeletonBridgeRecon) |\n| [What Do Single-view 3D Reconstruction Networks Learn?](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_CVPR_2019\u002Fhtml\u002FTatarchenko_What_Do_Single-View_3D_Reconstruction_Networks_Learn_CVPR_2019_paper.html) | Generic | CVPR 2019 | [Code](https:\u002F\u002Fgithub.com\u002Flmb-freiburg\u002Fwhat3d) |\n| [Deep Level Sets: Implicit Surface Representations for 3D Shape Inference](https:\u002F\u002Farxiv.org\u002Fabs\u002F1901.06802) | Implicit | arXiv 2019 | \u002F |\n| [Deep Mesh Reconstruction From Single RGB Images via Topology Modification Networks](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_ICCV_2019\u002Fhtml\u002FPan_Deep_Mesh_Reconstruction_From_Single_RGB_Images_via_Topology_Modification_ICCV_2019_paper.html) | Mesh | ICCV 2019 | [Code](https:\u002F\u002Fgithub.com\u002Fjnypan\u002FTMNet) |\n| [Deep Meta Functionals for Shape Representation](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_ICCV_2019\u002Fhtml\u002FLittwin_Deep_Meta_Functionals_for_Shape_Representation_ICCV_2019_paper.html) | Implicit | ICCV 2019 | [Code](https:\u002F\u002Fgithub.com\u002Fgidilittwin\u002FDeep-Meta) |\n| [GraphX-Convolution for Point Cloud Deformation in 2D-to-3D Conversion](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_ICCV_2019\u002Fhtml\u002FNguyen_GraphX-Convolution_for_Point_Cloud_Deformation_in_2D-to-3D_Conversion_ICCV_2019_paper.html) | Point Cloud | ICCV 2019 | [Code](https:\u002F\u002Fgithub.com\u002Fywcmaike\u002Fpcdnet) |\n| [Pix2Vox: Context-Aware 3D Reconstruction From Single and Multi-View Images](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_ICCV_2019\u002Fhtml\u002FXie_Pix2Vox_Context-Aware_3D_Reconstruction_From_Single_and_Multi-View_Images_ICCV_2019_paper.html) | Voxel | ICCV 2019 | [Code](https:\u002F\u002Fgithub.com\u002Fhzxie\u002FPix2Vox) |\n| [Domain-Adaptive Single-View 3D 
Reconstruction](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_ICCV_2019\u002Fhtml\u002FPinheiro_Domain-Adaptive_Single-View_3D_Reconstruction_ICCV_2019_paper.html) | Voxel | ICCV 2019 | [Code](https:\u002F\u002Fgithub.com\u002FGitikameher\u002FDomain-Adaptive-Single-View-3D-Reconstruction) |\n| [Few-Shot Generalization for Single-Image 3D Reconstruction via Priors](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_ICCV_2019\u002Fhtml\u002FWallace_Few-Shot_Generalization_for_Single-Image_3D_Reconstruction_via_Priors_ICCV_2019_paper.html) | Voxel | ICCV 2019 | [Code](https:\u002F\u002Fgithub.com\u002FBramSW\u002Ficcv_2019_few_shot_3d_wallace) |\n| [DISN: Deep Implicit Surface Network for High-quality Single-view 3D Reconstruction](https:\u002F\u002Fproceedings.neurips.cc\u002Fpaper\u002F2019\u002Fhash\u002F39059724f73a9969845dfe4146c5660e-Abstract.html) | Implicit | NIPS 2019 | [Code](https:\u002F\u002Fgithub.com\u002Flaughtervv\u002FDISN) |\n| [Front2Back: Single View 3D Shape Reconstruction via Front to Back Prediction](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_CVPR_2020\u002Fhtml\u002FYao_Front2Back_Single_View_3D_Shape_Reconstruction_via_Front_to_Back_CVPR_2020_paper.html) | Mesh | CVPR 2020 | [Code](https:\u002F\u002Fgithub.com\u002Frozentill\u002FFront2Back) |\n| [BSP-Net: Generating Compact Meshes via Binary Space Partitioning](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_CVPR_2020\u002Fhtml\u002FChen_BSP-Net_Generating_Compact_Meshes_via_Binary_Space_Partitioning_CVPR_2020_paper.html) | Mesh | CVPR 2020 | [Project](https:\u002F\u002Fbsp-net.github.io\u002F) |\n| [Height and Uprightness Invariance for 3D Prediction From a Single View](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_CVPR_2020\u002Fhtml\u002FBaradad_Height_and_Uprightness_Invariance_for_3D_Prediction_From_a_Single_CVPR_2020_paper.html) | Point Cloud | CVPR 2020 | [Code](https:\u002F\u002Fgithub.com\u002Fmbaradad\u002Fim2pcl) |\n| [Implicit Functions in Feature Space for 3D Shape Reconstruction and Completion](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_CVPR_2020\u002Fhtml\u002FChibane_Implicit_Functions_in_Feature_Space_for_3D_Shape_Reconstruction_and_CVPR_2020_paper.html) | Implicit | CVPR 2020 | [Project](https:\u002F\u002Fvirtualhumans.mpi-inf.mpg.de\u002Fifnets\u002F) |\n| [Unsupervised Learning of Probably Symmetric Deformable 3D Objects From Images in the Wild](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_CVPR_2020\u002Fhtml\u002FWu_Unsupervised_Learning_of_Probably_Symmetric_Deformable_3D_Objects_From_Images_CVPR_2020_paper.html) | Mesh | CVPR 2020 | [Project](https:\u002F\u002Fwww.robots.ox.ac.uk\u002F~vgg\u002Fblog\u002Funsupervised-learning-of-probably-symmetric-deformable-3d-objects-from-images-in-the-wild.html?image=004_face&type=human) |\n| [CvxNet: Learnable Convex Decomposition](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_CVPR_2020\u002Fhtml\u002FDeng_CvxNet_Learnable_Convex_Decomposition_CVPR_2020_paper.html) | Primitive | CVPR 2020 | [Project](https:\u002F\u002Fcvxnet.github.io\u002F) |\n| [Deep Local Shapes: Learning Local SDF Priors for Detailed 3D Reconstruction](https:\u002F\u002Fwww.ecva.net\u002Fpapers\u002Feccv_2020\u002Fpapers_ECCV\u002Fhtml\u002F6873_ECCV_2020_paper.php) | Implicit | ECCV 2020 | [Code](https:\u002F\u002Fgithub.com\u002FKamysek\u002FDeepLocalShapes) |\n| [Few-Shot Single-View 3-D Object Reconstruction with Compositional 
Priors](https:\u002F\u002Fwww.ecva.net\u002Fpapers\u002Feccv_2020\u002Fpapers_ECCV\u002Fpapers\u002F123700613.pdf) | Voxel | ECCV 2020 | [Code](https:\u002F\u002Fgithub.com\u002FJeremyFisher\u002Ffew_shot_3dr) |\n| [GSIR: Generalizable 3D Shape Interpretation and Reconstruction](https:\u002F\u002Fwww.ecva.net\u002Fpapers\u002Feccv_2020\u002Fpapers_ECCV\u002Fhtml\u002F1955_ECCV_2020_paper.php) | Voxel | ECCV 2020 | \u002F |\n| [DR-KFS: A Differentiable Visual Similarity Metric for 3D Shape Reconstruction](https:\u002F\u002Fwww.ecva.net\u002Fpapers\u002Feccv_2020\u002Fpapers_ECCV\u002Fpapers\u002F123660290.pdf) | Mesh | ECCV 2020 | \u002F |\n| [Self-supervised Single-view 3D Reconstruction via Semantic Consistency](https:\u002F\u002Fwww.ecva.net\u002Fpapers\u002Feccv_2020\u002Fpapers_ECCV\u002Fpapers\u002F123590664.pdf) | Mesh | ECCV 2020 | [Project](https:\u002F\u002Fsites.google.com\u002Fnvidia.com\u002Funsup-mesh-2020) |\n| [Shape and Viewpoint without Keypoints](https:\u002F\u002Fwww.ecva.net\u002Fpapers\u002Feccv_2020\u002Fpapers_ECCV\u002Fpapers\u002F123600086.pdf) | Mesh | ECCV 2020 | [Project](https:\u002F\u002Fshubham-goel.github.io\u002Fucmr\u002F) |\n| [Ladybird: Quasi-Monte Carlo Sampling for Deep Implicit Field Based 3D Reconstruction with Symmetry](https:\u002F\u002Fwww.ecva.net\u002Fpapers\u002Feccv_2020\u002Fpapers_ECCV\u002Fpapers\u002F123460239.pdf) | Mesh | ECCV 2020 | [Code](https:\u002F\u002Fgithub.com\u002FFuxiCV\u002FLadybird) |\n| [Learning Deformable Tetrahedral Meshes for 3D Reconstruction](https:\u002F\u002Fproceedings.neurips.cc\u002F\u002Fpaper\u002F2020\u002Ffile\u002F7137debd45ae4d0ab9aa953017286b20-Paper.pdf) | Mesh | NIPS 2020 | [Project](https:\u002F\u002Fnv-tlabs.github.io\u002FDefTet\u002F) |\n| [SDF-SRN: Learning Signed Distance 3D Object Reconstruction from Static Images](https:\u002F\u002Fpapers.nips.cc\u002Fpaper\u002F2020\u002Ffile\u002F83fa5a432ae55c253d0e60dbfa716723-Paper.pdf) | Implicit | NIPS 2020 | [Project](https:\u002F\u002Fchenhsuanlin.bitbucket.io\u002Fsigned-distance-SRN\u002F) |\n| [UCLID-Net: Single View Reconstruction in Object Space](https:\u002F\u002Fpapers.nips.cc\u002Fpaper\u002F2020\u002Fhash\u002F21327ba33b3689e713cdff1641128004-Abstract.html) | Mesh | NIPS 2020 | [Code](https:\u002F\u002Fgithub.com\u002Fcvlab-epfl\u002FUCLID-Net) |\n| [Pix2Vox++: Multi-scale Context-aware 3D Object Reconstruction from Single and Multiple Images](https:\u002F\u002Farxiv.org\u002Fabs\u002F2006.12250) | Voxel | IJCV 2020 | [Code](https:\u002F\u002Fgitlab.com\u002Fhzxie\u002FPix2Vox) |\n| [D2IM-Net: Learning Detail Disentangled Implicit Fields From Single Images](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FCVPR2021\u002Fhtml\u002FLi_D2IM-Net_Learning_Detail_Disentangled_Implicit_Fields_From_Single_Images_CVPR_2021_paper.html) | Implicit | CVPR 2021 | [Code](https:\u002F\u002Fgithub.com\u002FManyiLi12345\u002FD2IM-Net) |\n| [NeRD: Neural 3D Reflection Symmetry Detector](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FCVPR2021\u002Fhtml\u002FZhou_NeRD_Neural_3D_Reflection_Symmetry_Detector_CVPR_2021_paper.html) | \u002F | CVPR 2021 | [Code](https:\u002F\u002Fgithub.com\u002Fzhou13\u002Fnerd) |\n| [Fostering Generalization in Single-view 3D Reconstruction by Learning a Hierarchy of Local and Global Shape Priors](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FCVPR2021\u002Fhtml\u002FBechtold_Fostering_Generalization_in_Single-View_3D_Reconstruction_by_Learning_a_Hierarchy_CVPR_2021_paper.html) | Implicit | CVPR 2021 | 
[Code](https:\u002F\u002Fgithub.com\u002Fboschresearch\u002FHierarchicalPriorNetworks) |\n| [Single-View 3D Object Reconstruction From Shape Priors in Memory](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FCVPR2021\u002Fhtml\u002FYang_Single-View_3D_Object_Reconstruction_From_Shape_Priors_in_Memory_CVPR_2021_paper.html) | Voxel | CVPR 2021 | [Project](https:\u002F\u002Fcvxnet.github.io\u002F) |\n| [Implicit Surface Representations as Layers in Neural Networks](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_ICCV_2019\u002Fhtml\u002FMichalkiewicz_Implicit_Surface_Representations_As_Layers_in_Neural_Networks_ICCV_2019_paper.html) | Implicit | ICCV 2021 | \u002F |\n| [Ray-ONet: Efficient 3D Reconstruction From A Single RGB Image](https:\u002F\u002Farxiv.org\u002Fabs\u002F2107.01899) | Implicit | BMVC 2021 | [Project](https:\u002F\u002Frayonet.active.vision\u002F) |\n| [Learning Anchored Unsigned Distance Functions with Gradient Direction Alignment for Single-view Garment Reconstruction](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FICCV2021\u002Fhtml\u002FZhao_Learning_Anchored_Unsigned_Distance_Functions_With_Gradient_Direction_Alignment_for_ICCV_2021_paper.html) | Implicit | ICCV 2021 | [Code](https:\u002F\u002Fgithub.com\u002Fzhaofang0627\u002FAnchorUDF) |\n| [Geometric Granularity Aware Pixel-to-Mesh](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FICCV2021\u002Fhtml\u002FShi_Geometric_Granularity_Aware_Pixel-To-Mesh_ICCV_2021_paper.html) | Mesh | ICCV 2021 | \u002F |\n| [Sketch2Mesh: Reconstructing and Editing 3D Shapes from Sketches](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FICCV2021\u002Fhtml\u002FGuillard_Sketch2Mesh_Reconstructing_and_Editing_3D_Shapes_From_Sketches_ICCV_2021_paper.html) | Mesh | ICCV 2021 | [Code](https:\u002F\u002Fgithub.com\u002Fcvlab-epfl\u002Fsketch2mesh) |\n| [3DIAS: 3D Shape Reconstruction With Implicit Algebraic Surfaces](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FICCV2021\u002Fhtml\u002FYavartanoo_3DIAS_3D_Shape_Reconstruction_With_Implicit_Algebraic_Surfaces_ICCV_2021_paper.html) | Primitive | ICCV 2021 | [Project](https:\u002F\u002Fmyavartanoo.github.io\u002F3dias\u002F) |\n| [A Dataset-Dispersion Perspective on Reconstruction Versus Recognition in Single-View 3D Reconstruction Networks](https:\u002F\u002Farxiv.org\u002Fabs\u002F2111.15158) | Point Cloud | 3DV 2021 | [Code](https:\u002F\u002Fgithub.com\u002Fyefanzhou\u002Fdispersion-score) |\n| [3D Reconstruction of Novel Object Shapes from Single Images](https:\u002F\u002Farxiv.org\u002Fabs\u002F2006.07752) | Implicit | 3DV 2021 | [Project](https:\u002F\u002Fdevlearning-gt.github.io\u002F3DShapeGen\u002F) |\n| [AutoSDF: Shape Priors for 3D Completion, Reconstruction and Generation](https:\u002F\u002Farxiv.org\u002Fabs\u002F2203.09516) | Implicit | CVPR 2022 | [Project](https:\u002F\u002Fyccyenchicheng.github.io\u002FAutoSDF\u002F) |\n| [3D Shape Reconstruction from 2D Images with Disentangled Attribute Flow](https:\u002F\u002Farxiv.org\u002Fabs\u002F2203.15190) | Point Cloud | CVPR 2022 | [Code](https:\u002F\u002Fgithub.com\u002Fjunshengzhou\u002F3dattriflow) |\n| [Pre-train, Self-train, Distill: A simple recipe for Supersizing 3D Reconstruction](https:\u002F\u002Farxiv.org\u002Fabs\u002F2203.15190) | Implicit | CVPR 2022 | [Project](https:\u002F\u002Fshubhtuls.github.io\u002Fss3d\u002F) |\n| [Neural Template: Topology-aware Reconstruction and Disentangled Generation of 3D 
Meshes](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FCVPR2022\u002Fhtml\u002FHui_Neural_Template_Topology-Aware_Reconstruction_and_Disentangled_Generation_of_3D_Meshes_CVPR_2022_paper.html) | Hybrid | CVPR 2022 | [Code](https:\u002F\u002Fgithub.com\u002Fedward1997104\u002FNeural-Template) |\n| [SkeletonNet: A Topology-Preserving Solution for Learning Mesh Reconstruction of Object Surfaces from RGB Images](https:\u002F\u002Farxiv.org\u002Fabs\u002F2008.05742) | Mesh | TPAMI 2022 | [Code](https:\u002F\u002Fgithub.com\u002Ftangjiapeng\u002FSkeletonNet) |\n| [Training Data Generating Networks: Shape Reconstruction via Bi-level Optimization](https:\u002F\u002Farxiv.org\u002Fabs\u002F2010.08276) | Implicit | ICLR 2022 | \u002F |\n| [Structural Causal 3D Reconstruction](https:\u002F\u002Farxiv.org\u002Fabs\u002F2207.10156) | Hybrid | ECCV 2022 | \u002F |\n| [Few-shot Single-view 3D Reconstruction with Memory Prior Contrastive Network](https:\u002F\u002Fwww.ecva.net\u002Fpapers\u002Feccv_2022\u002Fpapers_ECCV\u002Fpapers\u002F136610054.pdf) | Voxel | ECCV 2022 | \u002F |\n| [Semi-Supervised Single-View 3D Reconstruction via Prototype Shape Priors](https:\u002F\u002Fwww.ecva.net\u002Fpapers\u002Feccv_2022\u002Fpapers_ECCV\u002Fpapers\u002F136610528.pdf) | Voxel | ECCV 2022 | \u002F |\n| [Single-view 3D Scene Reconstruction with High-fidelity Shape and Texture](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2311.00457.pdf) | Implicit | 3DV 2024 | [Code](https:\u002F\u002Fgithub.com\u002FDaLi-Jack\u002FSSR-code) |\n\n### Multi-view\n\n|                            Paper                             | Representation | Publisher |                         Project\u002FCode                         |\n| :----------------------------------------------------------: | :------------: | :-------: | :----------------------------------------------------------: |\n| [3D-R2N2: A Unified Approach for Single and Multi-view 3D Object Reconstruction](https:\u002F\u002Farxiv.org\u002Fabs\u002F1604.00449) |     Voxel      | ECCV 2016 |        [Code](https:\u002F\u002Fgithub.com\u002Fchrischoy\u002F3D-R2N2)        |\n| [3D Shape Induction from 2D Views of Multiple Objects](https:\u002F\u002Farxiv.org\u002Fabs\u002F1612.05872) |     Voxel      | 3DV 2017  |      [Code](https:\u002F\u002Fgithub.com\u002Fmatheusgadelha\u002FPrGAN)       |\n| [Learning Efficient Point Cloud Generation for Dense 3D Object Reconstruction](https:\u002F\u002Fwww.aaai.org\u002Focs\u002Findex.php\u002FAAAI\u002FAAAI18\u002Fpaper\u002Fdownload\u002F16530\u002F16302) |  Point Cloud   | AAAI 2018 | [Project](https:\u002F\u002Fchenhsuanlin.bitbucket.io\u002F3D-point-cloud-generation\u002F) |\n| [Conditional Single-view Shape Generation for Multi-view Stereo Reconstruction](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_CVPR_2019\u002Fhtml\u002FWei_Conditional_Single-View_Shape_Generation_for_Multi-View_Stereo_Reconstruction_CVPR_2019_paper.html) |  Point Cloud   | CVPR 2019 |      [Code](https:\u002F\u002Fgithub.com\u002Fweiyithu\u002FOptimizeMVS)       |\n| [Pixel2Mesh++: Multi-View 3D Mesh Generation via Deformation](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_ICCV_2019\u002Fhtml\u002FWen_Pixel2Mesh_Multi-View_3D_Mesh_Generation_via_Deformation_ICCV_2019_paper.html) |      Mesh      | ICCV 2019 |  [Project](https:\u002F\u002Fwalsvid.github.io\u002FPixel2MeshPlusPlus\u002F)  |\n| [Multiview Aggregation for Learning Category-Specific Shape 
Reconstruction](https:\u002F\u002Fpapers.nips.cc\u002Fpaper\u002F8506-multiview-aggregation-for-learning-category-specific-shape-reconstruction.pdf) |  Point Cloud   | NIPS 2019 |     [Code](https:\u002F\u002Fgithub.com\u002Fdrsrinathsridhar\u002Fxnocs)      |\n| [Pix2Surf: Learning Parametric 3D Surface Models of Objects from Images](https:\u002F\u002Farxiv.org\u002Fabs\u002F2008.07760) |      Patches      | ECCV 2020  |        [Project](https:\u002F\u002Fgeometry.stanford.edu\u002Fprojects\u002Fpix2surf\u002F)         |\n| [Multi-view 3D Reconstruction with Transformers](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FICCV2021\u002Fhtml\u002FWang_Multi-View_3D_Reconstruction_With_Transformers_ICCV_2021_paper.html) |      Voxel      | ICCV 2021  | \u002F |\n| [3D-C2FT: Coarse-to-fine Transformer for Multi-view 3D Reconstruction](https:\u002F\u002Farxiv.org\u002Fabs\u002F2205.14575) |      Voxel      | ACCV 2022  | \u002F |\n| [FvOR: Robust Joint Shape and Pose Optimization for Few-view Object Reconstruction](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FCVPR2022\u002Fhtml\u002FYang_FvOR_Robust_Joint_Shape_and_Pose_Optimization_for_Few-View_Object_CVPR_2022_paper.html) |      Implicit      | CVPR 2022  | [Code](https:\u002F\u002Fgithub.com\u002Fzhenpeiyang\u002FFvOR\u002F) |\n| [FOUND: Foot Optimisation with Uncertain Normals for Surface Deformation using Synthetic Data](https:\u002F\u002Follieboyne.com\u002FFOUND) | Mesh | WACV 2024 | [Code](https:\u002F\u002Fgithub.com\u002FOllieBoyne\u002FFOUND) |\n\n### Unsupervised\n|                            Paper                             | Representation | Publisher |                         Project\u002FCode                         |\n| :----------------------------------------------------------: | :------------: | :-------: | :----------------------------------------------------------: |\n| [Perspective Transformer Nets: Learning Single-View 3D Object Reconstruction without 3D Supervision](https:\u002F\u002Fproceedings.neurips.cc\u002Fpaper\u002F2016\u002Fhash\u002Fe820a45f1dfc7b95282d10b6087e11c0-Abstract.html) | Voxel | NIPS 2016 |      [Code](https:\u002F\u002Fgithub.com\u002Fxcyan\u002Fnips16_PTN)      |\n| [Multi-view Supervision for Single-View Reconstruction via Differentiable Ray Consistency](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2017\u002Fhtml\u002FTulsiani_Multi-View_Supervision_for_CVPR_2017_paper.html) |      Voxel      | CVPR 2017 |        [Project](https:\u002F\u002Fshubhtuls.github.io\u002Fdrc\u002F)         |\n| [Rethinking Reprojection: Closing the Loop for Pose-Aware Shape Reconstruction from a Single Image](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_iccv_2017\u002Fhtml\u002FZhu_Rethinking_Reprojection_Closing_ICCV_2017_paper.html) |      Voxel      | ICCV 2017 |  \u002F  |\n| [Learning Category-Specific Mesh Reconstruction from Image Collections](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_ECCV_2018\u002Fhtml\u002FAngjoo_Kanazawa_Learning_Category-Specific_Mesh_ECCV_2018_paper.html) |      Mesh      | ECCV 2018 |        [Project](https:\u002F\u002Fakanazawa.github.io\u002Fcmr\u002F)         |\n| [Learning Single-View 3D Reconstruction with Limited Pose Supervision](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_ECCV_2018\u002Fhtml\u002FGuandao_Yang_A_Unified_Framework_ECCV_2018_paper.html) |      Voxel      | ECCV 2018 |        [Code](https:\u002F\u002Fgithub.com\u002Fstevenygd\u002F3d-recon)         |\n| [Multi-View Consistency as Supervisory Signal for 
Learning Shape and Pose Prediction](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2018\u002Fhtml\u002FTulsiani_Multi-View_Consistency_as_CVPR_2018_paper.html) |      Voxel      | CVPR 2018 |        [Project](https:\u002F\u002Fshubhtuls.github.io\u002FmvcSnP\u002F)         |\n| [Learning View Priors for Single-view 3D Reconstruction](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_CVPR_2019\u002Fhtml\u002FKato_Learning_View_Priors_for_Single-View_3D_Reconstruction_CVPR_2019_paper.html) |      Mesh      | CVPR 2019 |        [Code](https:\u002F\u002Fgithub.com\u002Fhiroharu-kato\u002Fview_prior_learning)         |\n| [Escaping Plato's Cave: 3D Shape From Adversarial Rendering](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_ICCV_2019\u002Fhtml\u002FHenzler_Escaping_Platos_Cave_3D_Shape_From_Adversarial_Rendering_ICCV_2019_paper.html) |      Voxel      | ICCV 2019 |        [Project](https:\u002F\u002Fgeometry.cs.ucl.ac.uk\u002Fprojects\u002F2019\u002Fplatonicgan\u002F)         |\n| [Learning to Infer Implicit Surfaces without 3D Supervision](https:\u002F\u002Fproceedings.neurips.cc\u002Fpaper\u002F2019\u002Fhash\u002Fbdf3fd65c81469f9b74cedd497f2f9ce-Abstract.html) |      Implicit      | NIPS 2019 |  \u002F  |\n| [Learning to Predict 3D Objects with an Interpolation-based Differentiable Renderer](https:\u002F\u002Fproceedings.neurips.cc\u002Fpaper\u002F2019\u002Fhash\u002Ff5ac21cd0ef1b88e9848571aeb53551a-Abstract.html) |      Mesh      | NIPS 2019 |        [Project](https:\u002F\u002Fnv-tlabs.github.io\u002FDIB-R\u002F)         |\n| [Unsupervised Learning of Probably Symmetric Deformable 3D Objects from Images in the Wild](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_CVPR_2020\u002Fhtml\u002FWu_Unsupervised_Learning_of_Probably_Symmetric_Deformable_3D_Objects_From_Images_CVPR_2020_paper.html) |      Mesh      | CVPR 2020 |        [Project](https:\u002F\u002Felliottwu.com\u002Fprojects\u002F20_unsup3d\u002F)         |\n| [Leveraging 2D Data to Learn Textured 3D Mesh Generation](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_CVPR_2020\u002Fhtml\u002FHenderson_Leveraging_2D_Data_to_Learn_Textured_3D_Mesh_Generation_CVPR_2020_paper.html) |      Mesh      | CVPR 2020 |        [Code](https:\u002F\u002Fgithub.com\u002Fpmh47\u002Ftextured-mesh-gen)         |\n| [Implicit Mesh Reconstruction from Unannotated Image Collections](https:\u002F\u002Farxiv.org\u002Fabs\u002F2007.08504) |      Mesh      | arXiv 2020 |        [Project](https:\u002F\u002Fshubhtuls.github.io\u002Fimr\u002F)         |\n| [Shape and Viewpoint without Keypoints](https:\u002F\u002Fwww.ecva.net\u002Fpapers\u002Feccv_2020\u002Fpapers_ECCV\u002Fpapers\u002F123600086.pdf) |      Mesh      | ECCV 2020 |        [Project](https:\u002F\u002Fshubham-goel.github.io\u002Fucmr\u002F)         |\n| [Self-supervised Single-view 3D Reconstruction via Semantic Consistency](https:\u002F\u002Farxiv.org\u002Fabs\u002F2003.06473) |      Mesh      | ECCV 2020 |        [Project](https:\u002F\u002Fsites.google.com\u002Fnvidia.com\u002Funsup-mesh-2020)         |\n| [SDF-SRN: Learning Signed Distance 3D Object Reconstruction from Static Images](https:\u002F\u002Fproceedings.neurips.cc\u002Fpaper\u002F2020\u002Fhash\u002F83fa5a432ae55c253d0e60dbfa716723-Abstract.html) |      Implicit      | NIPS 2020 |        [Project](https:\u002F\u002Fchenhsuanlin.bitbucket.io\u002Fsigned-distance-SRN\u002F)         |\n| [Shelf-Supervised Mesh Prediction in the 
Wild](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FCVPR2021\u002Fhtml\u002FYe_Shelf-Supervised_Mesh_Prediction_in_the_Wild_CVPR_2021_paper.html) |      Mesh      | CVPR 2021 |        [Project](https:\u002F\u002Fjudyye.github.io\u002FShSMesh\u002F)         |\n| [Fully Understanding Generic Objects: Modeling, Segmentation, and Reconstruction](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FCVPR2021\u002Fhtml\u002FLiu_Fully_Understanding_Generic_Objects_Modeling_Segmentation_and_Reconstruction_CVPR_2021_paper.html) |      Implicit      | CVPR 2021 |        [Project](http:\u002F\u002Fcvlab.cse.msu.edu\u002Fproject-fully3dobject.html)         |\n| [Self-Supervised 3D Mesh Reconstruction from Single Images](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FCVPR2021\u002Fhtml\u002FHu_Self-Supervised_3D_Mesh_Reconstruction_From_Single_Images_CVPR_2021_paper.html) |      Mesh      | CVPR 2021 |        [Code](https:\u002F\u002Fgithub.com\u002Fdvlab-research\u002FSMR)         |\n| [View Generalization for Single Image Textured 3D Models](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FCVPR2021\u002Fhtml\u002FBhattad_View_Generalization_for_Single_Image_Textured_3D_Models_CVPR_2021_paper.html) |      Mesh      | CVPR 2021 |        [Project](https:\u002F\u002Fnv-adlr.github.io\u002Fview-generalization)         |\n| [Do 2D GANs Know 3D Shape? Unsupervised 3D shape reconstruction from 2D Image GANs](https:\u002F\u002Farxiv.org\u002Fabs\u002F2011.00844) |      Mesh      | ICLR 2021 |        [Project](https:\u002F\u002Fxingangpan.github.io\u002Fprojects\u002FGAN2Shape.html)         |\n| [Image GANs meet Differentiable Rendering for Inverse Graphics and Interpretable 3D Neural Rendering](https:\u002F\u002Farxiv.org\u002Fabs\u002F2010.09125) |      Mesh      | ICLR 2021 |        [Project](https:\u002F\u002Fnv-tlabs.github.io\u002FGANverse3D\u002F)    |\n| [Discovering 3D Parts from Image Collections](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FICCV2021\u002Fhtml\u002FYao_Discovering_3D_Parts_From_Image_Collections_ICCV_2021_paper.html) |      Mesh      | ICCV 2021 |        [Project](https:\u002F\u002Fchhankyao.github.io\u002Flpd\u002F)         |\n| [Learning Canonical 3D Object Representation for Fine-Grained Recognition](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FICCV2021\u002Fhtml\u002FJoung_Learning_Canonical_3D_Object_Representation_for_Fine-Grained_Recognition_ICCV_2021_paper.html) |      Mesh      | ICCV 2021 |  \u002F  |\n| [Toward Realistic Single-View 3D Object Reconstruction with Unsupervised Learning from Multiple Images](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FICCV2021\u002Fhtml\u002FHo_Toward_Realistic_Single-View_3D_Object_Reconstruction_With_Unsupervised_Learning_From_ICCV_2021_paper.html) |      Mesh      | ICCV 2021 |        [Code](https:\u002F\u002Fgithub.com\u002FVinAIResearch\u002FLeMul)         |\n| [Learning Generative Models of Textured 3D Meshes from Real-World Images](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FICCV2021\u002Fhtml\u002FPavllo_Learning_Generative_Models_of_Textured_3D_Meshes_From_Real-World_Images_ICCV_2021_paper.html) |      Mesh      | ICCV 2021 |        [Code](https:\u002F\u002Fgithub.com\u002Fdariopavllo\u002Ftextured-3d-gan)         |\n| [To The Point: Correspondence-driven monocular 3D category reconstruction](https:\u002F\u002Fproceedings.neurips.cc\u002Fpaper\u002F2021\u002Fhash\u002F40008b9a5380fcacce3976bf7c08af5b-Abstract.html) |      Mesh      | NIPS 2021 | 
       [Project](https:\u002F\u002Ffkokkinos.github.io\u002Fto_the_point\u002F)         |\n| [Topologically-Aware Deformation Fields for Single-View 3D Reconstruction](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FCVPR2022\u002Fhtml\u002FDuggal_Topologically-Aware_Deformation_Fields_for_Single-View_3D_Reconstruction_CVPR_2022_paper.html) |      Implicit      | CVPR 2022 |        [Project](https:\u002F\u002Fshivamduggal4.github.io\u002Ftars-3D\u002F)     |\n| [2D GANs Meet Unsupervised Single-View 3D Reconstruction](https:\u002F\u002Farxiv.org\u002Fabs\u002F2207.10183) |      Implicit      | ECCV 2022 |        [Project](http:\u002F\u002Fcvlab.cse.msu.edu\u002Fproject-gansvr.html)    |\n| [Monocular 3D Object Reconstruction with GAN Inversion](https:\u002F\u002Fwww.ecva.net\u002Fpapers\u002Feccv_2022\u002Fpapers_ECCV\u002Fpapers\u002F136610665.pdf) |      Mesh      | ECCV 2022 |        [Project](https:\u002F\u002Fwww.mmlab-ntu.com\u002Fproject\u002Fmeshinversion\u002F)    |\n| [Share With Thy Neighbors: Single-View Reconstruction by Cross-Instance Consistency](https:\u002F\u002Fwww.ecva.net\u002Fpapers\u002Feccv_2022\u002Fpapers_ECCV\u002Fpapers\u002F136610282.pdf) |      Mesh      | ECCV 2022 |        [Project](http:\u002F\u002Fimagine.enpc.fr\u002F~monniert\u002FUNICORN\u002F)         |\n| [Shape, Pose, and Appearance from a Single Image via Bootstrapped Radiance Field Inversion](https:\u002F\u002Farxiv.org\u002Fabs\u002F2211.11674) |      Implicit      | CVPR 2023 | [Code](https:\u002F\u002Fgithub.com\u002Fgoogle-research\u002Fnerf-from-image) |\n| [Seeing a Rose in Five Thousand Ways](https:\u002F\u002Farxiv.org\u002Fabs\u002F2212.04965) |      Implicit      | CVPR 2023 | [Project](https:\u002F\u002Fcs.stanford.edu\u002F~yzzhang\u002Fprojects\u002Frose\u002F) |\n| [ShapeClipper: Scalable 3D Shape Learning From Single-View Images via Geometric and CLIP-Based Consistency](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FCVPR2023\u002Fhtml\u002FHuang_ShapeClipper_Scalable_3D_Shape_Learning_From_Single-View_Images_via_Geometric_CVPR_2023_paper.html) |      Implicit      | CVPR 2023 | [Project](https:\u002F\u002Fzixuanh.com\u002Fprojects\u002Fshapeclipper.html) |\n| [SAOR: Single-View Articulated Object Reconstruction](https:\u002F\u002Farxiv.org\u002Fabs\u002F2303.13514) |      Implicit      | arXiv 2023 | [Project](https:\u002F\u002Fmehmetaygun.github.io\u002Fsaor) |\n| [Progressive Learning of 3D Reconstruction Network from 2D GAN Data](https:\u002F\u002Farxiv.org\u002Fabs\u002F2305.11102) |      Mesh      | arXiv 2023 | [Project](https:\u002F\u002Fresearch.nvidia.com\u002Flabs\u002Fadlr\u002Fprogressive-3d-learning\u002F) |\n\n## Scene-level\n\n### Single-view\n|                            Paper                             | Representation | Publisher |                         Project\u002FCode                         |\n| :----------------------------------------------------------: | :------------: | :-------: | :----------------------------------------------------------: |\n| [IM2CAD](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2017\u002Fhtml\u002FIzadinia_IM2CAD_CVPR_2017_paper.html) |      CAD       | CVPR 2017 |         [Code](https:\u002F\u002Fgithub.com\u002Fyyong119\u002FIM2CAD)         |\n| [3D-RCNN: Instance-level 3D Object Reconstruction via Render-and-Compare](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2018\u002Fhtml\u002FKundu_3D-RCNN_Instance-Level_3D_CVPR_2018_paper.html) |     Priors     | CVPR 2018 |   
[Project](https:\u002F\u002Fabhijitkundu.info\u002Fprojects\u002F3D-RCNN\u002F)   |\n| [Factoring Shape, Pose, and Layout from the 2D Image of a 3D Scene](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2018\u002Fhtml\u002FTulsiani_Factoring_Shape_Pose_CVPR_2018_paper.html) |     Voxel      | CVPR 2018 |     [Project](https:\u002F\u002Fshubhtuls.github.io\u002Ffactored3d\u002F)     |\n| [Holistic 3D Scene Parsing and Reconstruction from a Single RGB Image](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_ECCV_2018\u002Fhtml\u002FSiyuan_Huang_Monocular_Scene_Parsing_ECCV_2018_paper.html) |      Mesh       | ECCV 2018 | [Project](https:\u002F\u002Fsiyuanhuang.com\u002Fholistic_parsing\u002Fmain.html) |\n| [Mesh R-CNN](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_ICCV_2019\u002Fhtml\u002FGkioxari_Mesh_R-CNN_ICCV_2019_paper.html) |      Mesh      | ICCV 2019 |    [Code](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fmeshrcnn)    |\n| [3D Scene Reconstruction With Multi-Layer Depth and Epipolar Transformers](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_ICCV_2019\u002Fhtml\u002FShin_3D_Scene_Reconstruction_With_Multi-Layer_Depth_and_Epipolar_Transformers_ICCV_2019_paper.html) |      Mesh      | ICCV 2019 | [Project](https:\u002F\u002Fresearch.dshin.org\u002Ficcv19\u002Fmulti-layer-depth) |\n| [3D-RelNet: Joint Object and Relational Network for 3D Prediction](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_ICCV_2019\u002Fhtml\u002FKulkarni_3D-RelNet_Joint_Object_and_Relational_Network_for_3D_Prediction_ICCV_2019_paper.html) |     Voxel      | ICCV 2019 |  [Project](https:\u002F\u002Fnileshkulkarni.github.io\u002Frelative3d\u002F)   |\n| [Total3DUnderstanding: Joint Layout, Object Pose and Mesh Reconstruction for Indoor Scenes from a Single Image](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_CVPR_2020\u002Fhtml\u002FNie_Total3DUnderstanding_Joint_Layout_Object_Pose_and_Mesh_Reconstruction_for_Indoor_CVPR_2020_paper.html) |      Mesh      | CVPR 2020 |       [Project](https:\u002F\u002Fyinyunie.github.io\u002FTotal3D\u002F)       |\n| [3D Scene Reconstruction from a Single Viewport](https:\u002F\u002Fwww.ecva.net\u002Fpapers\u002Feccv_2020\u002Fpapers_ECCV\u002Fpapers\u002F123670052.pdf) |     Voxel      | ECCV 2020 | [Code](https:\u002F\u002Fgithub.com\u002FDLR-RM\u002FSingleViewReconstruction) |\n| [CoReNet: Coherent 3D scene reconstruction from a single RGB image](https:\u002F\u002Fwww.ecva.net\u002Fpapers\u002Feccv_2020\u002Fpapers_ECCV\u002Fpapers\u002F123470358.pdf) | Voxel+Implicit | ECCV 2020 |     [Code](https:\u002F\u002Fgithub.com\u002Fgoogle-research\u002Fcorenet)     |\n| [Image-to-Voxel Model Translation for 3D Scene Reconstruction and Segmentation](https:\u002F\u002Fwww.ecva.net\u002Fpapers\u002Feccv_2020\u002Fpapers_ECCV\u002Fpapers\u002F123520103.pdf) |     Voxel      | ECCV 2020 |           [Code](https:\u002F\u002Fgithub.com\u002Fvlkniaz\u002FSSZ)           |\n| [Holistic 3D Scene Understanding from a Single Image with Implicit Representation](https:\u002F\u002Farxiv.org\u002Fabs\u002F2103.06422) |    Implicit    | CVPR 2021 |  [Project](https:\u002F\u002Fchengzhag.github.io\u002Fpublication\u002Fim3d\u002F)  |\n| [From Points to Multi-Object 3D Reconstruction](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FCVPR2021\u002Fhtml\u002FEngelmann_From_Points_to_Multi-Object_3D_Reconstruction_CVPR_2021_paper.html) |    Implicit    | CVPR 2021 |  
[Project](https:\u002F\u002Ffrancisengelmann.github.io\u002Fpoints2objects\u002F)  |\n| [Learning to Recover 3D Scene Shape from a Single Image](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FCVPR2021\u002Fhtml\u002FYin_Learning_To_Recover_3D_Scene_Shape_From_a_Single_Image_CVPR_2021_paper.html) |    Point Cloud    | CVPR 2021 |  [Code](https:\u002F\u002Fgithub.com\u002Faim-uofa\u002FAdelaiDepth)  |\n| [Patch2CAD: Patchwise Embedding Learning for In-the-Wild Shape Retrieval from a Single Image](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FICCV2021\u002Fhtml\u002FKuo_Patch2CAD_Patchwise_Embedding_Learning_for_In-the-Wild_Shape_Retrieval_From_a_ICCV_2021_paper.html) |    Mesh    | ICCV 2021 |  \u002F  |\n| [Panoptic 3D Scene Reconstruction From a Single RGB Image](https:\u002F\u002Fproceedings.neurips.cc\u002Fpaper\u002F2021\u002Fhash\u002F46031b3d04dc90994ca317a7c55c4289-Abstract.html) |    Voxel    | NIPS 2021 |  [Project](https:\u002F\u002Fmanuel-dahnert.com\u002Fresearch\u002Fpanoptic-reconstruction)  |\n| [Voxel-based 3D Detection and Reconstruction of Multiple Objects from a Single Image](https:\u002F\u002Fproceedings.neurips.cc\u002Fpaper\u002F2021\u002Fhash\u002F1415db70fe9ddb119e23e9b2808cde38-Abstract.html) |    Implicit    | NIPS 2021 |  [Project](http:\u002F\u002Fcvlab.cse.msu.edu\u002Fproject-mdr.html)  |\n| [Towards High-Fidelity Single-view Holistic Reconstruction of Indoor Scenes](https:\u002F\u002Farxiv.org\u002Fabs\u002F2207.08656) |    Implicit    | ECCV 2022 |  [Code](https:\u002F\u002Fgithub.com\u002FUncleMEDM\u002FInstPIFu)  |\n| [3D-Former: Monocular Scene Reconstruction with SDF 3D Transformers](https:\u002F\u002Farxiv.org\u002Fabs\u002F2301.13510) |    Implicit    | ICLR 2023 |  [Project](https:\u002F\u002Fweihaosky.github.io\u002Fformer3d\u002F)  |\n| [BUOL: A Bottom-Up Framework With Occupancy-Aware Lifting for Panoptic 3D Scene Reconstruction From a Single Image](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FCVPR2023\u002Fhtml\u002FChu_BUOL_A_Bottom-Up_Framework_With_Occupancy-Aware_Lifting_for_Panoptic_3D_CVPR_2023_paper.html) |    Implicit    | CVPR 2023 |  [Code](https:\u002F\u002Fgithub.com\u002Fchtsy\u002Fbuol)  |\n\n### Multi-view\n|                            Paper                             | Representation | Publisher |                         Project\u002FCode                         |\n| :----------------------------------------------------------: | :------------: | :-------: | :----------------------------------------------------------: |\n| [MARMVS: Matching Ambiguity Reduced Multiple View Stereo for Efficient Large Scale Scene Reconstruction](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_CVPR_2020\u002Fhtml\u002FXu_MARMVS_Matching_Ambiguity_Reduced_Multiple_View_Stereo_for_Efficient_Large_CVPR_2020_paper.html) | Point Cloud    | CVPR 2020 | \u002F |\n| [FroDO: From Detections to 3D Objects](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_CVPR_2020\u002Fhtml\u002FRunz_FroDO_From_Detections_to_3D_Objects_CVPR_2020_paper.html) | Implicit | CVPR 2020 | \u002F |\n| [Associative3D: Volumetric Reconstruction from Sparse Views](https:\u002F\u002Fwww.ecva.net\u002Fpapers\u002Feccv_2020\u002Fpapers_ECCV\u002Fpapers\u002F123600137.pdf) | Voxel          | ECCV 2020 | [Project](https:\u002F\u002Fjasonqsy.github.io\u002FAssociative3D\u002F) |\n| [Atlas: End-to-End 3D Scene Reconstruction from Posed Images](https:\u002F\u002Fwww.ecva.net\u002Fpapers\u002Feccv_2020\u002Fpapers_ECCV\u002Fpapers\u002F123520409.pdf) | 
Mesh           | ECCV 2020 | [Project](http:\u002F\u002Fzak.murez.com\u002Fatlas\u002F)               |\n| [NeuralRecon: Real-Time Coherent 3D Reconstruction from Monocular Video](https:\u002F\u002Farxiv.org\u002Fabs\u002F2104.00681) | Mesh           | CVPR 2021 | [Project](https:\u002F\u002Fzju3dv.github.io\u002Fneuralrecon\u002F)     |\n| [TransformerFusion: Monocular RGB Scene Reconstruction using Transformers](https:\u002F\u002Fproceedings.neurips.cc\u002Fpaper\u002F2021\u002Fhash\u002F0a87257e5308197df43230edf4ad1dae-Abstract.html) |    Implicit    | NIPS 2021 |  [Project](https:\u002F\u002Faljazbozic.github.io\u002Ftransformerfusion\u002F)  |\n| [Learning 3D Object Shape and Layout without 3D Supervision](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FCVPR2022\u002Fhtml\u002FGkioxari_Learning_3D_Object_Shape_and_Layout_Without_3D_Supervision_CVPR_2022_paper.html) |    Mesh    | CVPR 2022 |  [Project](https:\u002F\u002Fgkioxari.github.io\u002Fusl\u002Findex.html)  |\n| [Directed Ray Distance Functions for 3D Scene Reconstruction](https:\u002F\u002Fwww.ecva.net\u002Fpapers\u002Feccv_2022\u002Fpapers_ECCV\u002Fpapers\u002F136620193.pdf) |    Implicit    | ECCV 2022 |  [Project](https:\u002F\u002Fnileshkulkarni.github.io\u002Fscene_drdf\u002F)  |\n| [Learning 3D Scene Priors with 2D Supervision](https:\u002F\u002Farxiv.org\u002Fabs\u002F2211.14157) |    Mesh    | arXiv 2022 |  [Project](https:\u002F\u002Fyinyunie.github.io\u002Fsceneprior-page\u002F)  |\n| [FineRecon: Depth-aware Feed-forward Network for Detailed 3D Reconstruction](https:\u002F\u002Farxiv.org\u002Fabs\u002F2304.01480) |    Implicit    | arXiv 2023 |  [Code](https:\u002F\u002Fgithub.com\u002Fapple\u002Fml-finerecon)  |\n| [CVRecon: Rethinking 3D Geometric Feature Learning For Neural Reconstruction](https:\u002F\u002Farxiv.org\u002Fabs\u002F2304.14633) |    Implicit    | arXiv 2023 |  [Project](https:\u002F\u002Fcvrecon.ziyue.cool\u002F)  |\n| [VisFusion: Visibility-Aware Online 3D Scene Reconstruction From Videos](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FCVPR2023\u002Fhtml\u002FGao_VisFusion_Visibility-Aware_Online_3D_Scene_Reconstruction_From_Videos_CVPR_2023_paper.html) |    Implicit    | CVPR 2023 |  [Project](https:\u002F\u002Fhuiyu-gao.github.io\u002Fvisfusion\u002F)  |\n\n## Neural Surface\n\n### Multi-view\n|                            Paper                             | Representation | Publisher |                         Project\u002FCode                         |\n| :----------------------------------------------------------: | :------------: | :-------: | :----------------------------------------------------------: |\n| [SDFDiff: Differentiable Rendering of Signed Distance Fields for 3D Shape](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_CVPR_2020\u002Fhtml\u002FJiang_SDFDiff_Differentiable_Rendering_of_Signed_Distance_Fields_for_3D_Shape_CVPR_2020_paper.html) |      Implicit      | CVPR 2020 |        [Code](https:\u002F\u002Fyuejiang-nj.github.io\u002Fpapers\u002FCVPR2020_SDFDiff\u002Fproject_page.html)         |\n| [Differentiable Volumetric Rendering: Learning Implicit 3D Representations without 3D Supervision](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_CVPR_2020\u002Fhtml\u002FNiemeyer_Differentiable_Volumetric_Rendering_Learning_Implicit_3D_Representations_Without_3D_Supervision_CVPR_2020_paper.html) |      Implicit      | CVPR 2020 |        [Code](https:\u002F\u002Fgithub.com\u002Fautonomousvision\u002Fdifferentiable_volumetric_rendering)         |\n| 
[Multiview Neural Surface Reconstruction by Disentangling Geometry and Appearance](https:\u002F\u002Fpapers.nips.cc\u002Fpaper\u002F2020\u002Fhash\u002F1a77befc3b608d6ed363567685f70e1e-Abstract.html) |      Implicit      | NIPS 2020 |        [Project](https:\u002F\u002Flioryariv.github.io\u002Fidr\u002F)         |\n| [Unsupervised Learning of 3D Object Categories from Videos in the Wild](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FCVPR2021\u002Fhtml\u002FHenzler_Unsupervised_Learning_of_3D_Object_Categories_From_Videos_in_the_CVPR_2021_paper.html) |    Implicit    | CVPR 2021 |  [Project](https:\u002F\u002Fhenzler.github.io\u002Fpublication\u002Funsupervised_videos\u002F)  |\n| [Neural Lumigraph Rendering](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FCVPR2021\u002Fhtml\u002FKellnhofer_Neural_Lumigraph_Rendering_CVPR_2021_paper.html) |    Implicit    | CVPR 2021 |  [Project](http:\u002F\u002Fwww.computationalimaging.org\u002Fpublications\u002Fnlr\u002F)  |\n| [Iso-Points: Optimizing Neural Implicit Surfaces With Hybrid Representations](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FCVPR2021\u002Fhtml\u002FYifan_Iso-Points_Optimizing_Neural_Implicit_Surfaces_With_Hybrid_Representations_CVPR_2021_paper.html) |    Implicit    | CVPR 2021 |  [Project](https:\u002F\u002Fyifita.github.io\u002Fpublication\u002Fiso_points\u002F)  |\n| [Learning Signed Distance Field for Multi-view Surface Reconstruction](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FICCV2021\u002Fhtml\u002FZhang_Learning_Signed_Distance_Field_for_Multi-View_Surface_Reconstruction_ICCV_2021_paper.html) |      Implicit      | ICCV 2021 |        [Code](https:\u002F\u002Fgithub.com\u002Fjzhangbs\u002FMVSDF)         |\n| [UNISURF: Unifying Neural Implicit Surfaces and Radiance Fields for Multi-View Reconstruction](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FICCV2021\u002Fhtml\u002FOechsle_UNISURF_Unifying_Neural_Implicit_Surfaces_and_Radiance_Fields_for_Multi-View_ICCV_2021_paper.html) |      Implicit      | ICCV 2021 |        [Project](https:\u002F\u002Fmoechsle.github.io\u002Funisurf\u002F)         |\n| [NeuS: Learning Neural Implicit Surfaces by Volume Rendering for Multi-view Reconstruction](https:\u002F\u002Fproceedings.neurips.cc\u002Fpaper\u002F2021\u002Fhash\u002Fe41e164f7485ec4a28741a2d0ea41c74-Abstract.html) |      Implicit      | NIPS 2021 |        [Project](https:\u002F\u002Flingjie0206.github.io\u002Fpapers\u002FNeuS\u002F)         |\n| [NeRS: Neural Reflectance Surfaces for Sparse-View 3D Reconstruction in the Wild](https:\u002F\u002Fproceedings.neurips.cc\u002Fpaper\u002F2021\u002Fhash\u002Ff95ec3de395b4bce25b39ef6138da871-Abstract.html) |    Implicit    | NIPS 2021 |  [Project](https:\u002F\u002Fjasonyzhang.com\u002Fners\u002F)  |\n| [Volume Rendering of Neural Implicit Surfaces](https:\u002F\u002Fproceedings.neurips.cc\u002Fpaper\u002F2021\u002Fhash\u002F25e2a30f44898b9f3e978b1786dcd85c-Abstract.html) |      Implicit      | ICCV 2021 |        [Unofficial Code](https:\u002F\u002Fgithub.com\u002Fventusff\u002Fneurecon)         |\n| [NeuralWarp: Improving neural implicit surfaces geometry with patch warping](https:\u002F\u002Farxiv.org\u002Fabs\u002F2112.09648) |      Implicit      | CVPR 2022 |        [Project](http:\u002F\u002Fimagine.enpc.fr\u002F~darmonf\u002FNeuralWarp\u002F)         |\n| [Neural 3D Scene Reconstruction with the Manhattan-world Assumption](https:\u002F\u002Farxiv.org\u002Fabs\u002F2205.02836) |    Implicit    | CVPR 2022 |  
[Project](https:\u002F\u002Fzju3dv.github.io\u002Fmanhattan_sdf\u002F)  |\n| [GenDR: A Generalized Differentiable Renderer](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FCVPR2022\u002Fhtml\u002FPetersen_GenDR_A_Generalized_Differentiable_Renderer_CVPR_2022_paper.html) |    Mesh    | CVPR 2022 |  [Code](https:\u002F\u002Fgithub.com\u002FFelix-Petersen\u002Fgendr)  |\n| [NeRFusion: Fusing Radiance Fields for Large-Scale Scene Reconstruction](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FCVPR2022\u002Fhtml\u002FZhang_NeRFusion_Fusing_Radiance_Fields_for_Large-Scale_Scene_Reconstruction_CVPR_2022_paper.html) |    Implicit    | CVPR 2022 |  [Project](https:\u002F\u002Fjetd1.github.io\u002FNeRFusion-Web\u002F)  |\n| [Critical Regularizations for Neural Surface Reconstruction in the Wild](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FCVPR2022\u002Fhtml\u002FZhang_Critical_Regularizations_for_Neural_Surface_Reconstruction_in_the_Wild_CVPR_2022_paper.html) |    Implicit    | CVPR 2022 | \u002F |\n| [Multi-View Mesh Reconstruction with Neural Deferred Shading](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FCVPR2022\u002Fhtml\u002FWorchel_Multi-View_Mesh_Reconstruction_With_Neural_Deferred_Shading_CVPR_2022_paper.html) |    Mesh    | CVPR 2022 |  [Project](https:\u002F\u002Ffraunhoferhhi.github.io\u002Fneural-deferred-shading\u002F)  |\n| [Differentiable Stereopsis: Meshes From Multiple Views Using Differentiable Rendering](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FCVPR2022\u002Fhtml\u002FGoel_Differentiable_Stereopsis_Meshes_From_Multiple_Views_Using_Differentiable_Rendering_CVPR_2022_paper.html) |    Mesh    | CVPR 2022 |  [Code](https:\u002F\u002Fgithub.com\u002Fshubham-goel\u002Fds)  |\n| [SparseNeuS: Fast Generalizable Neural Surface Reconstruction from Sparse views](https:\u002F\u002Farxiv.org\u002Fabs\u002F2206.05737) |    Implicit    | ECCV 2022 |  [Project](https:\u002F\u002Fwww.xxlong.site\u002FSparseNeuS\u002F)  |\n| [Object-Compositional Neural Implicit Surfaces](https:\u002F\u002Fwww.ecva.net\u002Fpapers\u002Feccv_2022\u002Fpapers_ECCV\u002Fpapers\u002F136870194.pdf) |    Implicit    | ECCV 2022 | [Project](https:\u002F\u002Fwuqianyi.top\u002Fobjectsdf\u002F) |\n| [SNeS: Learning Probably Symmetric Neural Surfaces from Incomplete Data](https:\u002F\u002Farxiv.org\u002Fabs\u002F2206.06340) |    Implicit    | ECCV 2022 | [Project](https:\u002F\u002Fwww.robots.ox.ac.uk\u002F~vgg\u002Fresearch\u002Fsnes\u002F) |\n| [Neural 3D Reconstruction in the Wild](https:\u002F\u002Farxiv.org\u002Fabs\u002F2205.12955) |    Implicit    | SIGGRAPH 2022 |  [Project](https:\u002F\u002Fzju3dv.github.io\u002Fneuralrecon-w\u002F)  |\n| [Differentiable Signed Distance Function Rendering](http:\u002F\u002Frgl.s3.eu-central-1.amazonaws.com\u002Fmedia\u002Fpapers\u002FVicini2022sdf_1.pdf) |    Implicit    | SIGGRAPH 2022 |  [Project](http:\u002F\u002Frgl.epfl.ch\u002Fpublications\u002FVicini2022SDF)  |\n| [Differentiable Rendering of Neural SDFs through Reparameterization](https:\u002F\u002Farxiv.org\u002Fabs\u002F2206.05344) |    Implicit    | SIGGRAPH Asia 2022 | [Project](https:\u002F\u002Fpeople.csail.mit.edu\u002Fsbangaru\u002Fprojects\u002Fdsdf-2022\u002Findex.html) |\n| [Learning Consistency-Aware Unsigned Distance Functions Progressively from Raw Point Clouds](https:\u002F\u002Farxiv.org\u002Fabs\u002F2210.02757) |    Implicit    | NIPS 2022 | [Project](https:\u002F\u002Fjunshengzhou.github.io\u002FCAP-UDF\u002F) |\n| [Geo-Neus: 
Geometry-Consistent Neural Implicit Surfaces Learning for Multi-view Reconstruction](https:\u002F\u002Farxiv.org\u002Fabs\u002F2205.15848) |    Implicit    | NIPS 2022 |  [Code](https:\u002F\u002Fgithub.com\u002FGhiXu\u002FGeo-Neus)  |\n| [MonoSDF: Exploring Monocular Geometric Cues for Neural Implicit Surface Reconstruction](https:\u002F\u002Farxiv.org\u002Fabs\u002F2206.00665) |    Implicit    | NIPS 2022 |  [Project](https:\u002F\u002Fniujinshuchong.github.io\u002Fmonosdf\u002F)  |\n| [HF-NeuS: Improved Surface Reconstruction Using High-Frequency Details](https:\u002F\u002Farxiv.org\u002Fabs\u002F2206.07850) |    Implicit    | NIPS 2022 |  [Project](https:\u002F\u002Fgithub.com\u002Fyiqun-wang\u002FHFS)  |\n| [Recovering Fine Details for Neural Implicit Surface Reconstruction](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FWACV2023\u002Fhtml\u002FChen_Recovering_Fine_Details_for_Neural_Implicit_Surface_Reconstruction_WACV_2023_paper.html) |    Implicit    | WACV 2023 |  [Code](https:\u002F\u002Fgithub.com\u002Ffraunhoferhhi\u002FD-NeuS)  |\n| [NeuRIS: Neural Reconstruction of Indoor Scenes Using Normal Priors](https:\u002F\u002Farxiv.org\u002Fabs\u002F2206.13597) |    Implicit    | ECCV 2022 |  [Project](https:\u002F\u002Fjiepengwang.github.io\u002FNeuRIS\u002F)  |\n| [Sphere-Guided Training of Neural Implicit Surfaces](https:\u002F\u002Farxiv.org\u002Fabs\u002F2209.15511) |    Implicit    | arXiv 2022 | \u002F |\n| [QFF: Quantized Fourier Features for Neural Field Representations](https:\u002F\u002Farxiv.org\u002Fabs\u002F2212.00914) |    Implicit    | arXiv 2022 |  \u002F  |\n| [NeuS2: Fast Learning of Neural Implicit Surfaces for Multi-view Reconstruction](https:\u002F\u002Farxiv.org\u002Fabs\u002F2212.05231) |    Implicit    | arXiv 2022 |  [Project](https:\u002F\u002Fvcai.mpi-inf.mpg.de\u002Fprojects\u002FNeuS2\u002F)  |\n| [Voxurf: Voxel-based Efficient and Accurate Neural Surface Reconstruction](https:\u002F\u002Farxiv.org\u002Fabs\u002F2208.12697) |    Implicit    | ICLR 2023 | [Code](https:\u002F\u002Fgithub.com\u002Fwutong16\u002FVoxurf) |\n| [PermutoSDF: Fast Multi-View Reconstruction with Implicit Surfaces using Permutohedral Lattices](https:\u002F\u002Farxiv.org\u002Fabs\u002F2211.12562) |    Implicit    | CVPR 2023 | [Project](https:\u002F\u002Fradualexandru.github.io\u002Fpermuto_sdf\u002F) |\n| [ShadowNeuS: Neural SDF Reconstruction by Shadow Ray Supervision](https:\u002F\u002Farxiv.org\u002Fabs\u002F2211.14086) |    Implicit    | CVPR 2023 |  [Project](https:\u002F\u002Fgerwang.github.io\u002Fshadowneus\u002F)  |\n| [NeuralUDF: Learning Unsigned Distance Fields for Multi-view Reconstruction of Surfaces with Arbitrary Topologies](https:\u002F\u002Farxiv.org\u002Fabs\u002F2211.14173) |    Implicit    | CVPR 2023 |  [Project](https:\u002F\u002Fwww.xxlong.site\u002FNeuralUDF\u002F)  |\n| [NeuDA: Neural Deformable Anchor for High-Fidelity Implicit Surface Reconstruction](https:\u002F\u002Farxiv.org\u002Fabs\u002F2303.02375) |     Implicit      | CVPR 2023 | [Project](https:\u002F\u002F3d-front-future.github.io\u002Fneuda\u002F) |\n| [SparseFusion: Distilling View-conditioned Diffusion for 3D Reconstruction](https:\u002F\u002Farxiv.org\u002Fabs\u002F2212.00792) |    Implicit    | CVPR 2023 |  [Project](https:\u002F\u002Fsparsefusion.github.io\u002F)  |\n| [I$^2$-SDF: Intrinsic Indoor Scene Reconstruction and Editing via Raytracing in Neural SDFs](https:\u002F\u002Farxiv.org\u002Fabs\u002F2303.07634) |     Implicit      | CVPR 2023 | 
[Project](https:\u002F\u002Fjingsenzhu.github.io\u002Fi2-sdf\u002F) |\n| [NeAT: Learning Neural Implicit Surfaces with Arbitrary Topologies from Multi-view Images](https:\u002F\u002Farxiv.org\u002Fabs\u002F2303.12012) |     Implicit      | CVPR 2023 | [Project](https:\u002F\u002Fxmeng525.github.io\u002Fxiaoxumeng.github.io\u002Fprojects\u002Fcvpr23_neat) |\n| [NeUDF: Leaning Neural Unsigned Distance Fields With Volume Rendering](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FCVPR2023\u002Fhtml\u002FLiu_NeUDF_Leaning_Neural_Unsigned_Distance_Fields_With_Volume_Rendering_CVPR_2023_paper.html) |     Implicit      | CVPR 2023 | [Project](http:\u002F\u002Fgeometrylearning.com\u002Fneudf\u002F) |\n| [Towards Better Gradient Consistency for Neural Signed Distance Functions via Level Set Alignment](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FCVPR2023\u002Fhtml\u002FMa_Towards_Better_Gradient_Consistency_for_Neural_Signed_Distance_Functions_via_CVPR_2023_paper.html) |     Implicit      | CVPR 2023 | [Code](https:\u002F\u002Fgithub.com\u002Fmabaorui\u002FTowardsBetterGradient\u002F) |\n| [Neuralangelo: High-Fidelity Neural Surface Reconstruction](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FCVPR2023\u002Fhtml\u002FLi_Neuralangelo_High-Fidelity_Neural_Surface_Reconstruction_CVPR_2023_paper.html) |     Implicit      | CVPR 2023 | [Project](https:\u002F\u002Fresearch.nvidia.com\u002Flabs\u002Fdir\u002Fneuralangelo\u002F) |\n| [VolRecon: Volume Rendering of Signed Ray Distance Functions for Generalizable Multi-View Reconstruction](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FCVPR2023\u002Fhtml\u002FRen_VolRecon_Volume_Rendering_of_Signed_Ray_Distance_Functions_for_Generalizable_CVPR_2023_paper.html) |     Implicit      | CVPR 2023 | [Project](https:\u002F\u002Ffangjinhuawang.github.io\u002FVolRecon\u002F) |\n| [PET-NeuS: Positional Encoding Tri-Planes for Neural Surfaces](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FCVPR2023\u002Fhtml\u002FWang_PET-NeuS_Positional_Encoding_Tri-Planes_for_Neural_Surfaces_CVPR_2023_paper.html) |     Implicit      | CVPR 2023 | \u002F |\n| [HR-NeuS: Recovering High-Frequency Surface Geometry via Neural Implicit Surfaces](https:\u002F\u002Farxiv.org\u002Fabs\u002F2302.06793) |    Implicit    | arXiv 2023 | \u002F |\n| [RICO: Regularizing the Unobservable for Indoor Compositional Reconstruction](https:\u002F\u002Farxiv.org\u002Fabs\u002F2303.08605) |    Implicit    | arXiv 2023 | \u002F |\n| [Learning a Room with the Occ-SDF Hybrid: Signed Distance Function Mingled with Occupancy Aids Scene Representation](https:\u002F\u002Farxiv.org\u002Fabs\u002F2303.09152) |    Implicit    | arXiv 2023 | \u002F |\n| [NeUDF: Learning Unsigned Distance Fields from Multi-view Images for Reconstructing Non-watertight Models](https:\u002F\u002Farxiv.org\u002Fabs\u002F2303.15368) |     Implicit      | arXiv 2023 | \u002F |\n| [S-VolSDF: Sparse Multi-View Stereo Regularization of Neural Implicit](https:\u002F\u002Farxiv.org\u002Fabs\u002F2303.17712) |     Implicit      | arXiv 2023 | [Project](https:\u002F\u002Fhao-yu-wu.github.io\u002Fs-volsdf\u002F) |\n| [VDN-NeRF: Resolving Shape-Radiance Ambiguity via View-Dependence Normalization](https:\u002F\u002Farxiv.org\u002Fabs\u002F2303.17968) |     Implicit      | arXiv 2023 | \u002F |\n| [FastMESH: Fast Surface Reconstruction by Hexagonal Mesh-based Neural Rendering](https:\u002F\u002Farxiv.org\u002Fabs\u002F2305.17858) |     Mesh      | arXiv 2023 | \u002F |\n| [Explicit Neural 
Surfaces: Learning Continuous Geometry With Deformation Fields](https:\u002F\u002Farxiv.org\u002Fabs\u002F2306.02956) |     Implicit      | arXiv 2023 | \u002F |\n\n### Point-cloud\n|                            Paper                             | Representation | Publisher |                         Project\u002FCode                         |\n| :----------------------------------------------------------: | :------------: | :-------: | :----------------------------------------------------------: |\n| [Deep Geometric Prior for Surface Reconstruction](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_CVPR_2019\u002Fhtml\u002FWilliams_Deep_Geometric_Prior_for_Surface_Reconstruction_CVPR_2019_paper.html) |     Patches      | CVPR 2019 |        [Code](https:\u002F\u002Fgithub.com\u002Ffwilliams\u002Fdeep-geometric-prior)        |\n| [Scan2Mesh: From Unstructured Range Scans to 3D Meshes](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_CVPR_2019\u002Fhtml\u002FDai_Scan2Mesh_From_Unstructured_Range_Scans_to_3D_Meshes_CVPR_2019_paper.html) |     Mesh      | CVPR 2019 | [Code](https:\u002F\u002Fgithub.com\u002Fmohamed-ebbed\u002FScan2Mesh) |\n| [Meshlet Priors for 3D Mesh Reconstruction](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_CVPR_2020\u002Fhtml\u002FBadki_Meshlet_Priors_for_3D_Mesh_Reconstruction_CVPR_2020_paper.html) |     Mesh      | CVPR 2020 | [Code](https:\u002F\u002Fgithub.com\u002FNVlabs\u002Fmeshlets) |\n| [SSRNet: Scalable 3D Surface Reconstruction Network](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_CVPR_2020\u002Fhtml\u002FMi_SSRNet_Scalable_3D_Surface_Reconstruction_Network_CVPR_2020_paper.html) |     Implicit      | CVPR 2020 | \u002F |\n| [SAL: Sign Agnostic Learning of Shapes from Raw Data](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_CVPR_2020\u002Fhtml\u002FAtzmon_SAL_Sign_Agnostic_Learning_of_Shapes_From_Raw_Data_CVPR_2020_paper.html) |     Implicit      | CVPR 2020 |        [Code](https:\u002F\u002Fgithub.com\u002Fmatanatz\u002FSAL)      |\n| [Implicit Geometric Regularization for Learning Shapes](https:\u002F\u002Fproceedings.mlr.press\u002Fv119\u002Fgropp20a.html) |     Implicit      | ICML 2020 |        [Code](https:\u002F\u002Fgithub.com\u002Famosgropp\u002FIGR)        |\n| [Meshing Point Clouds with Predicted Intrinsic-Extrinsic Ratio Guidance](https:\u002F\u002Farxiv.org\u002Fabs\u002F2007.09267) |     Mesh      | ECCV 2020 |        [Code](https:\u002F\u002Fgithub.com\u002FColin97\u002FPoint2Mesh)        |\n| [PointTriNet: Learned Triangulation of 3D Point Sets](https:\u002F\u002Farxiv.org\u002Fabs\u002F2005.02138) |     Mesh      | ECCV 2020 |        [Code](https:\u002F\u002Fgithub.com\u002Fnmwsharp\u002Flearned-triangulation)      |\n| [Points2Surf: Learning Implicit Surfaces from Point Cloud Patches](https:\u002F\u002Farxiv.org\u002Fabs\u002F2007.10453) |     Implicit      | ECCV 2020 |        [Code](https:\u002F\u002Fgithub.com\u002FErlerPhilipp\u002Fpoints2surf)      |\n| [Convolutional Occupancy Networks](https:\u002F\u002Farxiv.org\u002Fabs\u002F2003.04618) |     Implicit      | ECCV 2020 |        [Code](https:\u002F\u002Fgithub.com\u002Fautonomousvision\u002Fconvolutional_occupancy_networks)      |\n| [Implicit Neural Representations with Periodic Activation Functions](https:\u002F\u002Fproceedings.neurips.cc\u002Fpaper\u002F2020\u002Fhash\u002F53c04118df112c13a8c34b38343b9c10-Abstract.html) |     Implicit      | NIPS 2020 |        [Project](https:\u002F\u002Fwww.vincentsitzmann.com\u002Fsiren\u002F)      |\n| [Neural Unsigned 
Distance Fields for Implicit Function Learning](https:\u002F\u002Fproceedings.neurips.cc\u002Fpaper\u002F2020\u002Fhash\u002Ff69e505b08403ad2298b9f262659929a-Abstract.html) |     Implicit      | NIPS 2020 |        [Code](https:\u002F\u002Fgithub.com\u002Fjchibane\u002Fndf)        |\n| [Differentiable Surface Triangulation](https:\u002F\u002Farxiv.org\u002Fabs\u002F2109.10695) |     Mesh      | TOG 2021 |        [Code](https:\u002F\u002Fgithub.com\u002Fmrakotosaon\u002Fdiff-surface-triangulation)      |\n| [SALD: Sign Agnostic Learning with Derivatives](https:\u002F\u002Farxiv.org\u002Fabs\u002F2006.05400) |     Implicit      | ICLR 2021 |        [Code](https:\u002F\u002Fgithub.com\u002Fmatanatz\u002FSALD)      |\n| [Deep Implicit Moving Least-Squares Functions for 3D Reconstruction](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FCVPR2021\u002Fhtml\u002FLiu_Deep_Implicit_Moving_Least-Squares_Functions_for_3D_Reconstruction_CVPR_2021_paper.html) |     Implicit      | CVPR 2021 |        [Code](https:\u002F\u002Fgithub.com\u002FAndy97\u002FDeepMLS)      |\n| [Sign-Agnostic Implicit Learning of Surface Self-Similarities for Shape Modeling and Reconstruction from Raw Point Clouds](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FCVPR2021\u002Fhtml\u002FZhao_Sign-Agnostic_Implicit_Learning_of_Surface_Self-Similarities_for_Shape_Modeling_and_CVPR_2021_paper.html) |     Implicit      | CVPR 2021 | \u002F |\n| [Learning Delaunay Surface Elements for Mesh Reconstruction](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FCVPR2021\u002Fhtml\u002FRakotosaona_Learning_Delaunay_Surface_Elements_for_Mesh_Reconstruction_CVPR_2021_paper.html) |     Mesh      | CVPR 2021 |        [Code](https:\u002F\u002Fgithub.com\u002Fmrakotosaon\u002Fdse-meshing)        |\n| [Neural Splines: Fitting 3D Surfaces with Infinitely-Wide Neural Networks](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FCVPR2021\u002Fhtml\u002FWilliams_Neural_Splines_Fitting_3D_Surfaces_With_Infinitely-Wide_Neural_Networks_CVPR_2021_paper.html) |     Implicit      | CVPR 2021 |        [Code](https:\u002F\u002Fgithub.com\u002Ffwilliams\u002Fneural-splines)      |\n| [Neural-Pull: Learning Signed Distance Functions from Point Clouds by Learning to Pull Space onto Surfaces](https:\u002F\u002Fproceedings.mlr.press\u002Fv139\u002Fma21b.html) |     Implicit      | ICML 2021 |        [Code](https:\u002F\u002Fgithub.com\u002Fmabaorui\u002FNeuralPull)      |\n| [Phase Transitions, Distance Functions, and Implicit Neural Representations](https:\u002F\u002Farxiv.org\u002Fabs\u002F2106.07689) |     Implicit      | ICML 2021 | \u002F |\n| [Vis2Mesh: Efficient Mesh Reconstruction from Unstructured Point Clouds of Large Scenes with Learned Virtual View Visibility](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FICCV2021\u002Fpapers\u002FSong_Vis2Mesh_Efficient_Mesh_Reconstruction_From_Unstructured_Point_Clouds_of_Large_ICCV_2021_paper.pdf) |     Mesh      | ICCV 2021 |        [Code](https:\u002F\u002Fgithub.com\u002FGDAOSU\u002Fvis2mesh)      |\n| [Deep Hybrid Self-Prior for Full 3D Mesh Generation](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FICCV2021\u002Fhtml\u002FWei_Deep_Hybrid_Self-Prior_for_Full_3D_Mesh_Generation_ICCV_2021_paper.html) |     Mesh      | ICCV 2021 | [Project](https:\u002F\u002Fyqdch.github.io\u002FDHSP3D\u002F) |\n| [Adaptive Surface Reconstruction with Multiscale Convolutional 
Kernels](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FICCV2021\u002Fpapers\u002FUmmenhofer_Adaptive_Surface_Reconstruction_With_Multiscale_Convolutional_Kernels_ICCV_2021_paper.pdf) |     Mesh      | ICCV 2021 |        [Code](https:\u002F\u002Fgithub.com\u002Fisl-org\u002Fadaptive-surface-reconstruction)      |\n| [SA-ConvONet: Sign-Agnostic Optimization of Convolutional Occupancy Networks](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FICCV2021\u002Fhtml\u002FTang_SA-ConvONet_Sign-Agnostic_Optimization_of_Convolutional_Occupancy_Networks_ICCV_2021_paper.html) |     Implicit      | ICCV 2021 |        [Code](https:\u002F\u002Fgithub.com\u002Ftangjiapeng\u002FSA-ConvONet)      |\n| [Deep Implicit Surface Point Prediction Networks](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FICCV2021\u002Fhtml\u002FVenkatesh_Deep_Implicit_Surface_Point_Prediction_Networks_ICCV_2021_paper.html) |     Implicit      | ICCV 2021 |        [Project](https:\u002F\u002Fsites.google.com\u002Fview\u002Fcspnet)      |\n| [Shape As Points: A Differentiable Poisson Solver](https:\u002F\u002Farxiv.org\u002Fabs\u002F2106.03452) |     Mesh      | NIPS 2021 |        [Project](https:\u002F\u002Fpengsongyou.github.io\u002Fsap)      |\n| [AIR-Nets: An Attention-Based Framework for Locally Conditioned Implicit Representations](https:\u002F\u002Farxiv.org\u002Fabs\u002F2110.11860) |     Implicit      | 3DV 2021 |     [Code](https:\u002F\u002Fgithub.com\u002FSimonGiebenhain\u002FAIR-Nets)    |\n| [Scalable Surface Reconstruction with Delaunay-Graph Neural Networks](https:\u002F\u002Farxiv.org\u002Fabs\u002F2107.06130) |     Mesh      | SGP 2021 | [Code](https:\u002F\u002Fgithub.com\u002Fraphaelsulzer\u002Fdgnn) |\n| [Neural-IMLS: Learning Implicit Moving Least-Squares for Surface Reconstruction from Unoriented Point clouds](https:\u002F\u002Farxiv.org\u002Fabs\u002F2109.04398) |     Implicit      | arXiv 2021 |        [Project](https:\u002F\u002Fqiujiedong.github.io\u002Fpublications\u002FNeural_IMLS\u002F)      |\n| [Neural Fields as Learnable Kernels for 3D Reconstruction](https:\u002F\u002Farxiv.org\u002Fabs\u002F2111.13674) |     Implicit      | CVPR 2022 |        [Project](https:\u002F\u002Fnv-tlabs.github.io\u002Fnkf\u002F)      |\n| [POCO: Point Convolution for Surface Reconstruction](https:\u002F\u002Farxiv.org\u002Fabs\u002F2201.01831) |     Implicit      | CVPR 2022 |     [Code](https:\u002F\u002Fgithub.com\u002Fvaleoai\u002FPOCO)    |\n| [GIFS: Neural Implicit Function for General Shape Representation](https:\u002F\u002Farxiv.org\u002Fabs\u002F2204.07126) |     Implicit      | CVPR 2022 |     [Project](https:\u002F\u002Fjianglongye.com\u002Fgifs\u002F)    |\n| [Reconstructing Surfaces for Sparse Point Clouds with On-Surface Priors](https:\u002F\u002Farxiv.org\u002Fabs\u002F2204.10603) |     Implicit      | CVPR 2022 |     [Code](https:\u002F\u002Fgithub.com\u002Fmabaorui\u002FOnSurfacePrior)    |\n| [Surface Reconstruction from Point Clouds by Learning Predictive Context Priors](https:\u002F\u002Farxiv.org\u002Fabs\u002F2204.11015) |     Implicit      | CVPR 2022 |     [Code](https:\u002F\u002Fgithub.com\u002Fmabaorui\u002Fpredictablecontextprior)    |\n| [DiGS: Divergence Guided Shape Implicit Neural Representation for Unoriented Point Clouds](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FCVPR2022\u002Fhtml\u002FBen-Shabat_DiGS_Divergence_Guided_Shape_Implicit_Neural_Representation_for_Unoriented_Point_CVPR_2022_paper.html) |     Implicit      | CVPR 2022 |     
[Project](https:\u002F\u002Fchumbyte.github.io\u002FDiGS-Site\u002F)    |\n| [VisCo Grids: Surface Reconstruction with Viscosity and Coarea Grids](https:\u002F\u002Farxiv.org\u002Fabs\u002F2303.14569) |     Implicit      | NIPS 2022 | \u002F |\n| [GenSDF: Two-Stage Learning of Generalizable Signed Distance Functions](https:\u002F\u002Farxiv.org\u002Fabs\u002F2206.02780) |     Implicit      | NIPS 2022 | [Project](https:\u002F\u002Flight.princeton.edu\u002Fpublication\u002Fgensdf\u002F) |\n| [Dual Octree Graph Networks for Learning Adaptive Volumetric Shape Representations](https:\u002F\u002Farxiv.org\u002Fabs\u002F2205.02825) |     Implicit      | SIGGRAPH 2022 | [Code](https:\u002F\u002Fgithub.com\u002Fmicrosoft\u002FDualOctreeGNN) |\n| [Deep Point Cloud Simplification for High-quality Surface Reconstruction](https:\u002F\u002Farxiv.org\u002Fabs\u002F2203.09088) |     Implicit      | arXiv 2022 | \u002F |\n| [RangeUDF: Semantic Surface Reconstruction from 3D Point Clouds](https:\u002F\u002Farxiv.org\u002Fabs\u002F2204.09138) |     Implicit      | arXiv 2022 | [Code](https:\u002F\u002Fgithub.com\u002Fvlar-group\u002Frangeudf) |\n| [Neural Poisson: Indicator Functions for Neural Fields](https:\u002F\u002Farxiv.org\u002Fabs\u002F2211.14249) |     Implicit      | arXiv 2022 | \u002F |\n| [GeoUDF: Surface Reconstruction from 3D Point Clouds via Geometry-guided Distance Representation](https:\u002F\u002Farxiv.org\u002Fabs\u002F2211.16762) |     Implicit      | arXiv 2022 | [Code](https:\u002F\u002Fgithub.com\u002Frsy6318\u002FGeoUDF) |\n| [CircNet: Meshing 3D Point Clouds with Circumcenter Detection](https:\u002F\u002Farxiv.org\u002Fabs\u002F2301.09253) |     Mesh      | ICLR 2023 | \u002F |\n| [ALTO: Alternating Latent Topologies for Implicit 3D Reconstruction](https:\u002F\u002Farxiv.org\u002Fabs\u002F2212.04096) |     Implicit      | CVPR 2023 | [Project](http:\u002F\u002Fvisual.ee.ucla.edu\u002Falto.htm\u002F) |\n| [Octree Guided Unoriented Surface Reconstruction](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FCVPR2023\u002Fhtml\u002FKoneputugodage_Octree_Guided_Unoriented_Surface_Reconstruction_CVPR_2023_paper.html) |     Implicit      | CVPR 2023 | [Project](https:\u002F\u002Fchumbyte.github.io\u002FOG-INR-Site\u002F) |\n| [Unsupervised Inference of Signed Distance Functions From Single Sparse Point Clouds Without Learning Priors](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FCVPR2023\u002Fhtml\u002FChen_Unsupervised_Inference_of_Signed_Distance_Functions_From_Single_Sparse_Point_CVPR_2023_paper.html) |     Implicit      | CVPR 2023 | [Code](https:\u002F\u002Fgithub.com\u002Fchenchao15\u002FNeuralTPS) |\n| [Neural Vector Fields: Implicit Representation by Explicit Learning](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FCVPR2023\u002Fhtml\u002FYang_Neural_Vector_Fields_Implicit_Representation_by_Explicit_Learning_CVPR_2023_paper.html) |     Implicit      | CVPR 2023 | [Code](https:\u002F\u002Fgithub.com\u002FWi-sc\u002FNVF) |\n| [StEik: Stabilizing the Optimization of Neural Signed Distance Functions and Finer Shape Representation](https:\u002F\u002Farxiv.org\u002Fabs\u002F2305.18414) |     Implicit      | arXiv 2023 | \u002F |\n| [Learning Signed Distance Functions from Noisy 3D Point Clouds via Noise to Noise Mapping](https:\u002F\u002Farxiv.org\u002Fabs\u002F2306.01405) |     Implicit      | ICML 2023 | [Project](https:\u002F\u002Fgithub.com\u002Fmabaorui\u002FNoise2NoiseMapping\u002F) |\n\n### RGB-D\n|                            Paper                  
           | Representation | Publisher |                         Project\u002FCode                         |\n| :----------------------------------------------------------: | :------------: | :-------: | :----------------------------------------------------------: |\n| [Neural RGB-D Surface Reconstruction](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FCVPR2022\u002Fhtml\u002FAzinovic_Neural_RGB-D_Surface_Reconstruction_CVPR_2022_paper.html) |     Implicit      | CVPR 2022 |        [Project](https:\u002F\u002Fdazinovic.github.io\u002Fneural-rgbd-surface-reconstruction\u002F)        |\n| [BNV-Fusion: Dense 3D Reconstruction Using Bi-Level Neural Volume Fusion](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FCVPR2022\u002Fhtml\u002FLi_BNV-Fusion_Dense_3D_Reconstruction_Using_Bi-Level_Neural_Volume_Fusion_CVPR_2022_paper.html) |     Implicit      | CVPR 2022 |        [Code](https:\u002F\u002Fgithub.com\u002Flikojack\u002Fbnv_fusion)        |\n| [NICE-SLAM: Neural Implicit Scalable Encoding for SLAM](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FCVPR2022\u002Fhtml\u002FZhu_NICE-SLAM_Neural_Implicit_Scalable_Encoding_for_SLAM_CVPR_2022_paper.html) |     Implicit      | CVPR 2022 |   [Project](https:\u002F\u002Fpengsongyou.github.io\u002Fnice-slam)   |\n| [ShAPO: Implicit Representations for Multi-Object Shape, Appearance, and Pose Optimization](https:\u002F\u002Fwww.ecva.net\u002Fpapers\u002Feccv_2022\u002Fpapers_ECCV\u002Fpapers\u002F136620266.pdf) |     Implicit      | ECCV 2022 |        [Project](https:\u002F\u002Fzubair-irshad.github.io\u002Fprojects\u002FShAPO.html)        |\n| [CIRCLE: Convolutional Implicit Reconstruction and Completion for Large-scale Indoor Scene](https:\u002F\u002Fwww.ecva.net\u002Fpapers\u002Feccv_2022\u002Fpapers_ECCV\u002Fhtml\u002F4658_ECCV_2022_paper.php) |     Implicit      | ECCV 2022 |        [Code](https:\u002F\u002Fgithub.com\u002Fotakuxiang\u002Fcircle)        |\n| [Neural Surface Reconstruction of Dynamic Scenes with Monocular RGB-D Camera](https:\u002F\u002Farxiv.org\u002Fabs\u002F2206.15258) |     Implicit      | NIPS 2022 |        [Project](https:\u002F\u002Fustc3dv.github.io\u002Fndr\u002F)        |\n| [GO-Surf: Neural Feature Grid Optimization for Fast, High-Fidelity RGB-D Surface Reconstruction](https:\u002F\u002Farxiv.org\u002Fabs\u002F2206.14735) |     Implicit      | 3DV 2022 |        [Project](https:\u002F\u002Fjingwenwang95.github.io\u002Fgo_surf\u002F)        |\n| [FastSurf: Fast Neural RGB-D Surface Reconstruction using Per-Frame Intrinsic Refinement and TSDF Fusion Prior Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2303.04508) |     Implicit      | arXiv 2023 |        [Project](https:\u002F\u002Frokit-healthcare.github.io\u002FFastSurf\u002F)        |\n| [Dynamic Voxel Grid Optimization for High-Fidelity RGB-D Supervised Surface Reconstruction](https:\u002F\u002Farxiv.org\u002Fabs\u002F2304.06178) |     Implicit      | arXiv 2023 |        \u002F        |\n| [Multiview Compressive Coding for 3D Reconstruction](https:\u002F\u002Farxiv.org\u002Fabs\u002F2301.08247) |     Implicit      | CVPR 2023 |        [Project](https:\u002F\u002Fmcc3d.github.io\u002F)        |\n| [MobileBrick: Building LEGO for 3D Reconstruction on Mobile Devices](https:\u002F\u002Farxiv.org\u002Fabs\u002F2303.01932) |     Implicit      | CVPR 2023 |        [Project](https:\u002F\u002Fcode.active.vision\u002FMobileBrick\u002F)        |\n| [TMO: Textured Mesh Acquisition of Objects with a Mobile Device by using Differentiable 
Rendering](https:\u002F\u002Farxiv.org\u002Fabs\u002F2303.15060) |     Mesh      | CVPR 2023 |        [Project](https:\u002F\u002Fjh-choi.github.io\u002FTMO\u002F)        |\n\n\n## Survey\n\n|                            Paper                             | Publisher  |\n| :--------------------------------------------------------------------: | :--------: |\n| [Image-based 3D Object Reconstruction: State-of-the-Art and Trends in the Deep Learning Era](https:\u002F\u002Farxiv.org\u002Fabs\u002F1906.06543) | TPAMI 2019 |\n| [Neural Fields in Visual Computing and Beyond](https:\u002F\u002Farxiv.org\u002Fabs\u002F2111.11426) | arXiv 2021 |\n| [Advances in Neural Rendering](https:\u002F\u002Farxiv.org\u002Fabs\u002F2111.05849) | EUROGRAPHICS 2022 |\n| [Surface Reconstruction from Point Clouds: A Survey and a Benchmark](https:\u002F\u002Farxiv.org\u002Fabs\u002F2205.02413) | arXiv 2022 |\n| [NeRF: Neural Radiance Field in 3D Vision, A Comprehensive Review](https:\u002F\u002Farxiv.org\u002Fabs\u002F2210.00379) | arXiv 2022 |\n| [A Review of Deep Learning-Powered Mesh Reconstruction Methods](https:\u002F\u002Farxiv.org\u002Fabs\u002F2303.02879) | arXiv 2023 |\n","# 优秀的3D重建论文\n[![Awesome](https:\u002F\u002Fawesome.re\u002Fbadge.svg)](https:\u002F\u002Fawesome.re)\n\n深度学习时代下的3D重建论文合集。欢迎贡献 :)\n\n目录\n====\n\n  * [对象级](#object-level)\n     * [单视角](#single-view)\n     * [多视角](#multi-view)\n     * [无监督](#unsupervised)\n  * [场景级](#scene-level)\n     * [单视角](#single-view-1)\n     * [多视角](#multi-view-1)\n  * [神经表面](#neural-surface)\n     * [多视角](#multi-view-2)\n     * [点云](#point-cloud)\n     * [RGB-D](#rgb-d)\n  * [综述](#survey)\n\n## 对象级\n\n### 单视角\n\n| 论文 | 表示形式 | 会议\u002F期刊 | 项目\u002F代码 |\n| :----------------------------------------------------------: | :-------: | :-------: | :-----------------------------------------------------: |\n| [基于单张图像的3D物体重建点云生成网络](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2017\u002Fhtml\u002FFan_A_Point_Set_CVPR_2017_paper.html) | 点云 | CVPR 2017 | [代码](https:\u002F\u002Fgithub.com\u002Ffanhqme\u002FPointSetGeneration) |\n| [SurfNet：利用深度残差网络生成3D形状表面](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2017\u002Fhtml\u002FSinha_SurfNet_Generating_3D_CVPR_2017_paper.html) | 网格 | CVPR 2017 | [代码](https:\u002F\u002Fgithub.com\u002Fsinhayan\u002Fsurfnet) |\n| [OctNet：在高分辨率下学习深度3D表示](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2017\u002Fhtml\u002FRiegler_OctNet_Learning_Deep_CVPR_2017_paper.html) | 体素 | CVPR 2017 | [代码](https:\u002F\u002Fgithub.com\u002Fgriegler\u002Foctnet) |\n| [重新思考重投影：闭合单张图像姿态感知形状重建的闭环](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_iccv_2017\u002Fhtml\u002FZhu_Rethinking_Reprojection_Closing_ICCV_2017_paper.html) | 体素 | ICCV 2017 | \u002F |\n| [MarrNet：通过2.5D草图进行3D形状重建](https:\u002F\u002Fproceedings.neurips.cc\u002Fpaper\u002F2017\u002Fhash\u002Fad972f10e0800b49d76fed33a21f6698-Abstract.html) | 体素 | NIPS 2017 | [项目](http:\u002F\u002Fmarrnet.csail.mit.edu\u002F) |\n| [用于3D物体重建的层次化表面预测](https:\u002F\u002Farxiv.org\u002Fabs\u002F1704.00710) | 体素 | 3DV 2017 | [代码](https:\u002F\u002Fgithub.com\u002Fchaene\u002Fhsp) |\n| [Image2Mesh：单张图像3D重建的学习框架](https:\u002F\u002Farxiv.org\u002Fabs\u002F1711.10669) | 网格 | ACCV 2018 | [代码](https:\u002F\u002Fgithub.com\u002Fjhonykaesemodel\u002Fimage2mesh) |\n| [为密集3D物体重建学习高效的点云生成](https:\u002F\u002Faaai.org\u002Focs\u002Findex.php\u002FAAAI\u002FAAAI18\u002Fpaper\u002Fview\u002F16530) | 点云 | AAAI 2018 | [项目](https:\u002F\u002Fchenhsuanlin.bitbucket.io\u002F3D-point-cloud-generation\u002F) |\n| 
[纸浆工艺方法用于学习3D表面生成](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2018\u002Fhtml\u002FGroueix_A_Papier-Mache_Approach_CVPR_2018_paper.html) | 网格 | CVPR 2018 | [项目](http:\u002F\u002Fimagine.enpc.fr\u002F~groueixt\u002Fatlasnet\u002F) |\n| [像素、体素和视角：单视图3D物体形状预测的形状表示研究](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2018\u002Fhtml\u002FShin_Pixels_Voxels_and_CVPR_2018_paper.html) | 通用 | CVPR 2018 | [项目](https:\u002F\u002Fwww.ics.uci.edu\u002F~daeyuns\u002Fpixels-voxels-views\u002F) |\n| [Im2Struct：从单张RGB图像恢复3D形状结构](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2018\u002Fhtml\u002FNiu_Im2Struct_Recovering_3D_CVPR_2018_paper.html) | 部件 | CVPR 2018 | [代码](https:\u002F\u002Fgithub.com\u002Fchengjieniu\u002FIm2Struct) |\n| [套娃网络：通过嵌套形状层预测3D几何](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2018\u002Fhtml\u002FRichter_Matryoshka_Networks_Predicting_CVPR_2018_paper.html) | 体素 | CVPR 2018 | [代码](https:\u002F\u002Fbitbucket.org\u002Fvisinf\u002Fprojects-2018-matryoshka\u002Fsrc\u002Fmaster\u002F) |\n| [多视图一致性作为学习形状和姿态预测的监督信号](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2018\u002Fhtml\u002FTulsiani_Multi-View_Consistency_as_CVPR_2018_paper.html) | 体素 | CVPR 2018 | [项目](https:\u002F\u002Fshubhtuls.github.io\u002FmvcSnP\u002F) |\n| [利用变形向量场进行高效密集点云物体重建](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_ECCV_2018\u002Fhtml\u002FKejie_Li_Efficient_Dense_Point_ECCV_2018_paper.html) | 点云 | ECCV 2018 | \u002F |\n| [GAL：用于单视图3D物体重建的几何对抗损失](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_ECCV_2018\u002Fhtml\u002FLi_Jiang_GAL_Geometric_Adversarial_ECCV_2018_paper.html) | 点云 | ECCV 2018 | \u002F |\n| [从图像集合中学习特定类别的网格重建](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_ECCV_2018\u002Fhtml\u002FAngjoo_Kanazawa_Learning_Category-Specific_Mesh_ECCV_2018_paper.html) | 网格 | ECCV 2018 | [项目](https:\u002F\u002Fakanazawa.github.io\u002Fcmr\u002F) |\n| [学习单视图3D补全和重建的形状先验](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_ECCV_2018\u002Fhtml\u002FJiajun_Wu_Learning_3D_Shape_ECCV_2018_paper.html) | 体素 | ECCV 2018 | \u002F |\n| [在有限姿态监督下学习单视图3D重建](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_ECCV_2018\u002Fhtml\u002FGuandao_Yang_A_Unified_Framework_ECCV_2018_paper.html) | 体素 | ECCV 2018 | [代码](https:\u002F\u002Fgithub.com\u002Fstevenygd\u002F3d-recon) |\n| [Pixel2Mesh：从单张RGB图像生成3D网格模型](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_ECCV_2018\u002Fhtml\u002FNanyang_Wang_Pixel2Mesh_Generating_3D_ECCV_2018_paper.html) | 网格 | ECCV 2018 | [代码](https:\u002F\u002Fgithub.com\u002Fnywang16\u002FPixel2Mesh) |\n| [残差MeshNet：学习对网格进行变形以实现单视图3D重建](https:\u002F\u002Fieeexplore.ieee.org\u002Fabstract\u002Fdocument\u002F8491025) | 网格 | 3DV 2018 | \u002F |\n| [学习从未见过的类别中重建形状](https:\u002F\u002Fproceedings.neurips.cc\u002Fpaper\u002F2018\u002Fhash\u002F208e43f0e45c4c78cafadb83d2888cb6-Abstract.html) | 通用 | NIPS 2018 | [项目](http:\u002F\u002Fgenre.csail.mit.edu\u002F) |\n| [多视图轮廓与深度分解用于高分辨率3D物体表示](https:\u002F\u002Fproceedings.neurips.cc\u002Fpaper\u002F2018\u002Fhash\u002F39ae2ed11b14a4ccb41d35e9d1ba5d11-Abstract.html) | 体素 | NIPS 2018 | [代码](https:\u002F\u002Fgithub.com\u002FEdwardSmith1884\u002FMulti-View-Silhouette-and-Depth-Decomposition-for-High-Resolution-3D-Object-Representation) |\n| [MVPNet：用于从单张图像重建3D物体的多视图点回归网络](https:\u002F\u002Fojs.aaai.org\u002Findex.php\u002FAAAI\u002Farticle\u002Fview\u002F4923) | 点云 | AAAI 2019 | [项目](https:\u002F\u002Fjingluw.github.io\u002Fprojects\u002Fmvpnet\u002F) |\n| 
[带有视觉壳嵌入的深度单视图3D物体重建](https:\u002F\u002Fojs.aaai.org\u002F\u002Findex.php\u002FAAAI\u002Farticle\u002Fview\u002F4922) | 体素 | AAAI 2019 | [代码](https:\u002F\u002Fgithub.com\u002FHanqingWangAI\u002FPSVH-3d-reconstruction) |\n| [占用网络：在函数空间中学习3D重建](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_CVPR_2019\u002Fhtml\u002FMescheder_Occupancy_Networks_Learning_3D_Reconstruction_in_Function_Space_CVPR_2019_paper.html) | 隐式 | CVPR 2019 | [代码](https:\u002F\u002Fgithub.com\u002Fautonomousvision\u002Foccupancy_networks) |\n| [学习用于生成式形状建模的隐式场](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_CVPR_2019\u002Fhtml\u002FChen_Learning_Implicit_Fields_for_Generative_Shape_Modeling_CVPR_2019_paper.html) | 隐式 | CVPR 2019 | [项目](https:\u002F\u002Fwww.sfu.ca\u002F~zhiqinc\u002Fimgan\u002FReadme.html) |\n| [基于骨架桥接的深度学习方法，用于从单张RGB图像生成复杂拓扑的网格](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_CVPR_2019\u002Fhtml\u002FTang_A_Skeleton-Bridged_Deep_Learning_Approach_for_Generating_Meshes_of_Complex_CVPR_2019_paper.html) | 网格 | CVPR 2019 | [代码](https:\u002F\u002Fgithub.com\u002Ftangjiapeng\u002FSkeletonBridgeRecon) |\n| [单视图3D重建网络学到了什么？](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_CVPR_2019\u002Fhtml\u002FTatarchenko_What_Do_Single-View_3D_Reconstruction_Networks_Learn_CVPR_2019_paper.html) | 通用 | CVPR 2019 | [代码](https:\u002F\u002Fgithub.com\u002Flmb-freiburg\u002Fwhat3d) |\n| [深度水平集：用于3D形状推断的隐式表面表示](https:\u002F\u002Farxiv.org\u002Fabs\u002F1901.06802) | 隐式 | arXiv 2019 | \u002F |\n| [通过拓扑修改网络从单张RGB图像进行深度网格重建](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_ICCV_2019\u002Fhtml\u002FPan_Deep_Mesh_Reconstruction_From_Single_RGB_Images_via_Topology_Modification_ICCV_2019_paper.html) | 网格 | ICCV 2019 | [代码](https:\u002F\u002Fgithub.com\u002Fjnypan\u002FTMNet) |\n| [用于形状表示的深度元函数](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_ICCV_2019\u002Fhtml\u002FLittwin_Deep_Meta_Functionals_for_Shape_Representation_ICCV_2019_paper.html) | 隐式 | ICCV 2019 | [代码](https:\u002F\u002Fgithub.com\u002Fgidilittwin\u002FDeep-Meta) |\n| [GraphX卷积用于2D到3D转换中的点云变形](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_ICCV_2019\u002Fhtml\u002FNguyen_GraphX-Convolution_for_Point_Cloud_Deformation_in_2D-to-3D_Conversion_ICCV_2019_paper.html) | 点云 | ICCV 2019 | [代码](https:\u002F\u002Fgithub.com\u002Fywcmaike\u002Fpcdnet) |\n| [Pix2Vox：上下文感知的单视图和多视图图像3D重建](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_ICCV_2019\u002Fhtml\u002FXie_Pix2Vox_Context-Aware_3D_Reconstruction_From_Single_and_Multi-View_Images_ICCV_2019_paper.html) | 体素 | ICCV 2019 | [代码](https:\u002F\u002Fgithub.com\u002Fhzxie\u002FPix2Vox) |\n| [领域自适应的单视图3D重建](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_ICCV_2019\u002Fhtml\u002FPinheiro_Domain-Adaptive_Single-View_3D_Reconstruction_ICCV_2019_paper.html) | 体素 | ICCV 2019 | [代码](https:\u002F\u002Fgithub.com\u002FGitikameher\u002FDomain-Adaptive-Single-View-3D-Reconstruction) |\n| [通过先验知识实现单张图像3D重建的少样本泛化](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_ICCV_2019\u002Fhtml\u002FWallace_Few-Shot_Generalization_for_Single-Image_3D_Reconstruction_via_Priors_ICCV_2019_paper.html) | 体素 | ICCV 2019 | [代码](https:\u002F\u002Fgithub.com\u002FBramSW\u002Ficcv_2019_few_shot_3d_wallace) |\n| [DISN：高质量单视图3D重建的深度隐式表面网络](https:\u002F\u002Fproceedings.neurips.cc\u002Fpaper\u002F2019\u002Fhash\u002F39059724f73a9969845dfe4146c5660e-Abstract.html) | 隐式 | NIPS 2019 | [代码](https:\u002F\u002Fgithub.com\u002Flaughtervv\u002FDISN) |\n| 
[Front2Back：通过前后预测进行单视图3D形状重建](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_CVPR_2020\u002Fhtml\u002FYao_Front2Back_Single_View_3D_Shape_Reconstruction_via_Front_to_Back_CVPR_2020_paper.html) | 网格 | CVPR 2020 | [代码](https:\u002F\u002Fgithub.com\u002Frozentill\u002FFront2Back) |\n| [BSP-Net：通过二叉空间分割生成紧凑网格](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_CVPR_2020\u002Fhtml\u002FChen_BSP-Net_Generating_Compact_Meshes_via_Binary_Space_Partitioning_CVPR_2020_paper.html) | 网格 | CVPR 2020 | [项目](https:\u002F\u002Fbsp-net.github.io\u002F) |\n| [单视图3D预测的高度和直立性不变性](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_CVPR_2020\u002Fhtml\u002FBaradad_Height_and_Uprightness_Invariance_for_3D_Prediction_From_a_Single_CVPR_2020_paper.html) | 点云 | CVPR 2020 | [代码](https:\u002F\u002Fgithub.com\u002Fmbaradad\u002Fim2pcl) |\n| [特征空间中的隐式函数用于3D形状重建和补全](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_CVPR_2020\u002Fhtml\u002FChibane_Implicit_Functions_in_Feature_Space_for_3D_Shape_Reconstruction_and_Completion_CVPR_2020_paper.html) | 隐式 | CVPR 2020 | [项目](https:\u002F\u002Fvirtualhumans.mpi-inf.mpg.de\u002Fifnets\u002F) |\n| [无监督学习可能对称的可变形3D物体，来自野外图像](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_CVPR_2020\u002Fhtml\u002FWu_Unsupervised_Learning_of_Probably_Symmetric_Deformable_3D_Objects_From_Images_CVPR_2020_paper.html) | 网格 | CVPR 2020 | [项目](https:\u002F\u002Fwww.robots.ox.ac.uk\u002F~vgg\u002Fblog\u002Funsupervised-learning-of-probably-symmetric-deformable-3d-objects-from-images-in-the-wild.html?image=004_face&type=human) |\n| [CvxNet：可学习的凸分解](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_CVPR_2020\u002Fhtml\u002FDeng_CvxNet_Learnable_Convex_Decomposition_CVPR_2020_paper.html) | 原始 | CVPR 2020 | [项目](https:\u002F\u002Fcvxnet.github.io\u002F) |\n| [深度局部形状：学习用于精细3D重建的局部SDF先验](https:\u002F\u002Fwww.ecva.net\u002Fpapers\u002Feccv_2020\u002Fpapers_ECCV\u002Fhtml\u002F6873_ECCV_2020_paper.php) | 隐式 | ECCV 2020 | [代码](https:\u002F\u002Fgithub.com\u002FKamysek\u002FDeepLocalShapes) |\n| [具有组合先验的少样本单视图3D物体重建](https:\u002F\u002Fwww.ecva.net\u002Fpapers\u002Feccv_2020\u002Fpapers_ECCV\u002Fpapers\u002F123700613.pdf) | 体素 | ECCV 2020 | [代码](https:\u002F\u002Fgithub.com\u002FJeremyFisher\u002Ffew_shot_3dr) |\n| [GSIR：可泛化的3D形状解释和重建](https:\u002F\u002Fwww.ecva.net\u002Fpapers\u002Feccv_2020\u002Fpapers_ECCV\u002Fhtml\u002F1955_ECCV_2020_paper.php) | 体素 | ECCV 2020 | \u002F |\n| [DR-KFS：用于3D形状重建的可微分视觉相似度度量](https:\u002F\u002Fwww.ecva.net\u002Fpapers\u002Feccv_2020\u002Fpapers_ECCV\u002Fpapers\u002F123660290.pdf) | 网格 | ECCV 2020 | \u002F |\n| [基于语义一致性的自监督单视图3D重建](https:\u002F\u002Fwww.ecva.net\u002Fpapers\u002Feccv_2020\u002Fpapers_ECCV\u002Fpapers\u002F123590664.pdf) | 网格 | ECCV 2020 | [项目](https:\u002F\u002Fsites.google.com\u002Fnvidia.com\u002Funsup-mesh-2020) |\n| [无关键点的形状和视点](https:\u002F\u002Fwww.ecva.net\u002Fpapers\u002Feccv_2020\u002Fpapers_ECCV\u002Fpapers\u002F123600086.pdf) | 网格 | ECCV 2020 | [项目](https:\u002F\u002Fshubham-goel.github.io\u002Fucmr\u002F) |\n| [瓢虫：用于具有对称性的深度隐式场3D重建的准蒙特卡洛采样](https:\u002F\u002Fwww.ecva.net\u002Fpapers\u002Feccv_2020\u002Fpapers_ECCV\u002Fpapers\u002F123460239.pdf) | 网格 | ECCV 2020 | [代码](https:\u002F\u002Fgithub.com\u002FFuxiCV\u002FLadybird) |\n| [学习用于3D重建的可变形四面体网格](https:\u002F\u002Fproceedings.neurips.cc\u002F\u002Fpaper\u002F2020\u002Ffile\u002F7137debd45ae4d0ab9aa953017286b20-Paper.pdf) | 网格 | NIPS 2020 | [项目](https:\u002F\u002Fnv-tlabs.github.io\u002FDefTet\u002F) |\n| 
[SDF-SRN：从静态图像学习有符号距离3D物体重建](https:\u002F\u002Fpapers.nips.cc\u002Fpaper\u002F2020\u002Ffile\u002F83fa5a432ae55c253d0e60dbfa716723-Paper.pdf) | 隐式 | NIPS 2020 | [项目](https:\u002F\u002Fchenhsuanlin.bitbucket.io\u002Fsigned-distance-SRN\u002F) |\n| [UCLID-Net：在对象空间中进行单视图重建](https:\u002F\u002Fpapers.nips.cc\u002Fpaper\u002F2020\u002Fhash\u002F21327ba33b3689e713cdff1641128004-Abstract.html) | 网格 | NIPS 2020 | [代码](https:\u002F\u002Fgithub.com\u002Fcvlab-epfl\u002FUCLID-Net) |\n| [Pix2Vox++：多尺度上下文感知的单张和多张图像3D物体重建](https:\u002F\u002Farxiv.org\u002Fabs\u002F2006.12250) | 体素 | IJCV 2020 | [代码](https:\u002F\u002Fgitlab.com\u002Fhzxie\u002FPix2Vox) |\n| [D2IM-Net：从单张图像学习细节解耦的隐式场](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FCVPR2021\u002Fhtml\u002FLi_D2IM-Net_Learning_Detail_Disentangled_Implicit_Fields_From_Single_Images_CVPR_2021_paper.html) | 隐式 | CVPR 2021 | [代码](https:\u002F\u002Fgithub.com\u002FManyiLi12345\u002FD2IM-Net) |\n| [NeRD：神经3D反射对称检测器](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FCVPR2021\u002Fhtml\u002FZhou_NeRD_Neural_3D_Reflection_Symmetry_Detector_CVPR_2021_paper.html) | \u002F | CVPR 2021 | [代码](https:\u002F\u002Fgithub.com\u002Fzhou13\u002Fnerd) |\n| [通过学习局部和全局形状先验的层级结构来促进单视图3D重建的泛化](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FCVPR2021\u002Fhtml\u002FBechtold_Fostering_Generalization_in_Single-View_3D_Reconstruction_by_Learning_a_Hierarchy_CVPR_2021_paper.html) | 隐式 | CVPR 2021 | [代码](https:\u002F\u002Fgithub.com\u002Fboschresearch\u002FHierarchicalPriorNetworks) |\n| [基于内存中形状先验的单视图3D物体重建](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FCVPR2021\u002Fhtml\u002FYang_Single-View_3D_Object_Reconstruction_From_Shape_Priors_in_Memory_CVPR_2021_paper.html) | 体素 | CVPR 2021 | [项目](https:\u002F\u002Fcvxnet.github.io\u002F) |\n| [隐式表面表示作为神经网络中的层](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_ICCV_2019\u002Fhtml\u002FMichalkiewicz_Implicit_Surface_Representations_As_Layers_in_Neural_Networks_ICCV_2019_paper.html) | 隐式 | ICCV 2021 | \u002F |\n| [Ray-ONet：从单张RGB图像高效重建3D](https:\u002F\u002Farxiv.org\u002Fabs\u002F2107.01899) | 隐式 | BMVC 2021 | [项目](https:\u002F\u002Frayonet.active.vision\u002F) |\n| [学习带有梯度方向对齐的锚定无符号距离函数，用于单视图服装重建](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FICCV2021\u002Fhtml\u002FZhao_Learning_Anchored_Unsigned_Distance_Functions_With_Gradient_Direction_Alignment_for_ICCV_2021_paper.html) | 隐式 | ICCV 2021 | [代码](https:\u002F\u002Fgithub.com\u002Fzhaofang0627\u002FAnchorUDF) |\n| [几何粒度感知的像素到网格](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FICCV2021\u002Fhtml\u002FShi_Geometric_Granularity_Aware_Pixel-To-Mesh_ICCV_2021_paper.html) | 网格 | ICCV 2021 | \u002F |\n| [Sketch2Mesh：从草图重建和编辑3D形状](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FICCV2021\u002Fhtml\u002FGuillard_Sketch2Mesh_Reconstructing_and_Editing_3D_Shapes_From_Sketches_ICCV_2021_paper.html) | 网格 | ICCV 2021 | [代码](https:\u002F\u002Fgithub.com\u002Fcvlab-epfl\u002Fsketch2mesh) |\n| [3DIAS：使用隐式代数曲面进行3D形状重建](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FICCV2021\u002Fhtml\u002FYavartanoo_3DIAS_3D_Shape_Reconstruction_With_Implicit_Algebraic_Surfaces_ICCV_2021_paper.html) | 原始 | ICCV 2021 | [项目](https:\u002F\u002Fmyavartanoo.github.io\u002F3dias\u002F) |\n| [单视图3D重建网络中重建与识别的数据分散视角](https:\u002F\u002Farxiv.org\u002Fabs\u002F2111.15158) | 点云 | 3DV 2021 | [代码](https:\u002F\u002Fgithub.com\u002Fyefanzhou\u002Fdispersion-score) |\n| [从单张图像重建新颖物体形状](https:\u002F\u002Farxiv.org\u002Fabs\u002F2006.07752) | 隐式 | 3DV 2021 | 
[项目](https:\u002F\u002Fdevlearning-gt.github.io\u002F3DShapeGen\u002F) |\n| [AutoSDF：用于3D补全、重建和生成的形状先验](https:\u002F\u002Farxiv.org\u002Fabs\u002F2203.09516) | 隐式 | CVPR 2022 | [项目](https:\u002F\u002Fyccyenchicheng.github.io\u002FAutoSDF\u002F) |\n| [通过解耦属性流从2D图像进行3D形状重建](https:\u002F\u002Farxiv.org\u002Fabs\u002F2203.15190) | 点云 | CVPR 2022 | [代码](https:\u002F\u002Fgithub.com\u002Fjunshengzhou\u002F3dattriflow) |\n| [预训练、自训练、蒸馏：扩大3D重建规模的简单配方](https:\u002F\u002Farxiv.org\u002Fabs\u002F2203.15190) | 隐式 | CVPR 2022 | [项目](https:\u002F\u002Fshubhtuls.github.io\u002Fss3d\u002F) |\n| [神经模板：拓扑感知的3D网格重建和解耦生成](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FCVPR2022\u002Fhtml\u002FHui_Neural_Template_Topology-Aware_Reconstruction_and_Disentangled_Generation_of_3D_Meshes_CVPR_2022_paper.html) | 混合 | CVPR 2022 | [代码](https:\u002F\u002Fgithub.com\u002Fedward1997104\u002FNeural-Template) |\n| [SkeletonNet：一种保持拓扑结构的解决方案，用于学习从RGB图像重建物体表面的网格](https:\u002F\u002Farxiv.org\u002Fabs\u002F2008.05742) | 网格 | TPAMI 2022 | [代码](https:\u002F\u002Fgithub.com\u002Ftangjiapeng\u002FSkeletonNet) |\n| [训练数据生成网络：通过双层优化进行形状重建](https:\u002F\u002Farxiv.org\u002Fabs\u002F2010.08276) | 隐式 | ICLR 2022 | \u002F |\n| [结构性因果3D重建](https:\u002F\u002Farxiv.org\u002Fabs\u002F2207.10156) | 混合 | ECCV 2022 | \u002F |\n| [具有记忆先验对比网络的少样本单视图3D重建](https:\u002F\u002Fwww.ecva.net\u002Fpapers\u002Feccv_2022\u002Fpapers_ECCV\u002Fpapers\u002F136610054.pdf) | 体素 | ECCV 2022 | \u002F |\n| [通过原型形状先验进行半监督单视图3D重建](https:\u002F\u002Fwww.ecva.net\u002Fpapers\u002Feccv_2022\u002Fpapers_ECCV\u002Fpapers\u002F136610528.pdf) | 体素 | ECCV 2022 | \u002F |\n| [具有高保真形状和纹理的单视图3D场景重建](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2311.00457.pdf) | 隐式 | 3DV 2024 | [代码](https:\u002F\u002Fgithub.com\u002FDaLi-Jack\u002FSSR-code) |\n\n### 多视角\n\n|                            论文                             | 表示方式 | 会议\u002F期刊 |                         项目\u002F代码                         |\n| :----------------------------------------------------------: | :------------: | :-------: | :----------------------------------------------------------: |\n| [3D-R2N2：单视角与多视角三维物体重建的统一方法](https:\u002F\u002Farxiv.org\u002Fabs\u002F1604.00449) |     体素      | ECCV 2016 |        [代码](https:\u002F\u002Fgithub.com\u002Fchrischoy\u002F3D-R2N2)        |\n| [基于多物体二维视图的三维形状推断](https:\u002F\u002Farxiv.org\u002Fabs\u002F1612.05872) |     体素      | 3DV 2017  |      [代码](https:\u002F\u002Fgithub.com\u002Fmatheusgadelha\u002FPrGAN)       |\n| [用于密集三维物体重建的高效点云生成学习](https:\u002F\u002Fwww.aaai.org\u002Focs\u002Findex.php\u002FAAAI\u002FAAAI18\u002Fpaper\u002Fdownload\u002F16530\u002F16302) |  点云   | AAAI 2018 | [项目](https:\u002F\u002Fchenhsuanlin.bitbucket.io\u002F3D-point-cloud-generation\u002F) |\n| [面向多视角立体重建的条件式单视角形状生成](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_CVPR_2019\u002Fhtml\u002FWei_Conditional_Single-View_Shape_Generation_for_Multi-View_Stereo_Reconstruction_CVPR_2019_paper.html) |  点云   | CVPR 2019 |      [代码](https:\u002F\u002Fgithub.com\u002Fweiyithu\u002FOptimizeMVS)       |\n| [Pixel2Mesh++：通过形变实现多视角三维网格生成](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_ICCV_2019\u002Fhtml\u002FWen_Pixel2Mesh_Multi-View_3D_Mesh_Generation_via_Deformation_ICCV_2019_paper.html) |      网格      | ICCV 2019 |  [项目](https:\u002F\u002Fwalsvid.github.io\u002FPixel2MeshPlusPlus\u002F)  |\n| [用于学习特定类别形状重建的多视角聚合](https:\u002F\u002Fpapers.nips.cc\u002Fpaper\u002F8506-multiview-aggregation-for-learning-category-specific-shape-reconstruction.pdf) |  点云   | NIPS 2019 |     
[代码](https:\u002F\u002Fgithub.com\u002Fdrsrinathsridhar\u002Fxnocs)      |\n| [Pix2Surf：从图像中学习物体的参数化三维曲面模型](https:\u002F\u002Farxiv.org\u002Fabs\u002F2008.07760) |      片段      | ECCV 2020  |        [项目](https:\u002F\u002Fgeometry.stanford.edu\u002Fprojects\u002Fpix2surf\u002F)         |\n| [基于Transformer的多视角三维重建](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FICCV2021\u002Fhtml\u002FWang_Multi-View_3D_Reconstruction_With_Transformers_ICCV_2021_paper.html) |      体素      | ICCV 2021  | \u002F |\n| [3D-C2FT：用于多视角三维重建的粗细结合Transformer](https:\u002F\u002Farxiv.org\u002Fabs\u002F2205.14575) |      体素      | ACCV 2022  | \u002F |\n| [FvOR：针对少视角物体重建的鲁棒联合形状与姿态优化](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FCVPR2022\u002Fhtml\u002FYang_FvOR_Robust_Joint_Shape_and_Pose_Optimization_for_Few-View_Object_CVPR_2022_paper.html) |      隐式表示      | CVPR 2022  | [代码](https:\u002F\u002Fgithub.com\u002Fzhenpeiyang\u002FFvOR\u002F) |\n| [FOUND：利用合成数据进行表面形变时具有不确定法线的足部优化](https:\u002F\u002Follieboyne.com\u002FFOUND) | 网格 | WACV 2024 | [代码](https:\u002F\u002Fgithub.com\u002FOllieBoyne\u002FFOUND) |\n\n### 无监督\n|                            论文                             | 表征 | 出版社 |                         项目\u002F代码                         |\n| :----------------------------------------------------------: | :------------: | :-------: | :----------------------------------------------------------: |\n| [视角变换网络：无需3D监督的单视图3D物体重建](https:\u002F\u002Fproceedings.neurips.cc\u002Fpaper\u002F2016\u002Fhash\u002Fe820a45f1dfc7b95282d10b6087e11c0-Abstract.html) | 体素 | NIPS 2016 |      [代码](https:\u002F\u002Fgithub.com\u002Fxcyan\u002Fnips16_PTN)      |\n| [基于可微光线一致性的多视图监督单视图重建](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2017\u002Fhtml\u002FTulsiani_Multi-View_Supervision_for_CVPR_2017_paper.html) |      体素      | CVPR 2017 |        [项目](https:\u002F\u002Fshubhtuls.github.io\u002Fdrc\u002F)         |\n| [重新思考重投影：闭合单张图像姿态感知形状重建的闭环](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_iccv_2017\u002Fhtml\u002FZhu_Rethinking_Reprojection_Closing_ICCV_2017_paper.html) |      体素      | ICCV 2017 |  \u002F  |\n| [从图像集合中学习特定类别的网格重建](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_ECCV_2018\u002Fhtml\u002FAngjoo_Kanazawa_Learning_Category-Specific_Mesh_ECCV_2018_paper.html) |      网格      | ECCV 2018 |        [项目](https:\u002F\u002Fakanazawa.github.io\u002Fcmr\u002F)         |\n| [在有限姿态监督下学习单视图3D重建](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_ECCV_2018\u002Fhtml\u002FGuandao_Yang_A_Unified_Framework_ECCV_2018_paper.html) |      体素      | ECCV 2018 |        [代码](https:\u002F\u002Fgithub.com\u002Fstevenygd\u002F3d-recon)         |\n| [多视图一致性作为学习形状和姿态预测的监督信号](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2018\u002Fhtml\u002FTulsiani_Multi-View_Consistency_as_CVPR_2018_paper.html) |      体素      | CVPR 2018 |        [项目](https:\u002F\u002Fshubhtuls.github.io\u002FmvcSnP\u002F)         |\n| [为单视图3D重建学习视图先验](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_CVPR_2019\u002Fhtml\u002FKato_Learning_View_Priors_for_Single-View_3D_Reconstruction_CVPR_2019_paper.html) |      网格      | CVPR 2019 |        [代码](https:\u002F\u002Fgithub.com\u002Fhiroharu-kato\u002Fview_prior_learning)         |\n| [走出柏拉图的洞穴：基于对抗渲染的3D形状](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_ICCV_2019\u002Fhtml\u002FHenzler_Escaping_Platos_Cave_3D_Shape_From_Adversarial_Rendering_ICCV_2019_paper.html) |      体素      | ICCV 2019 |        
[项目](https:\u002F\u002Fgeometry.cs.ucl.ac.uk\u002Fprojects\u002F2019\u002Fplatonicgan\u002F)         |\n| [无需3D监督学习隐式曲面](https:\u002F\u002Fproceedings.neurips.cc\u002Fpaper\u002F2019\u002Fhash\u002Fbdf3fd65c81469f9b74cedd497f2f9ce-Abstract.html) |      隐式      | NIPS 2019 |  \u002F  |\n| [使用基于插值的可微渲染器预测3D物体](https:\u002F\u002Fproceedings.neurips.cc\u002Fpaper\u002F2019\u002Fhash\u002Ff5ac21cd0ef1b88e9848571aeb53551a-Abstract.html) |      网格      | NIPS 2019 |        [项目](https:\u002F\u002Fnv-tlabs.github.io\u002FDIB-R\u002F)         |\n| [从野外图像中无监督学习可能对称的可变形3D物体](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_CVPR_2020\u002Fhtml\u002FWu_Unsupervised_Learning_of_Probably_Symmetric_Deformable_3D_Objects_From_Images_CVPR_2020_paper.html) |      网格      | CVPR 2020 |        [项目](https:\u002F\u002Felliottwu.com\u002Fprojects\u002F20_unsup3d\u002F)         |\n| [利用2D数据学习纹理化3D网格生成](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_CVPR_2020\u002Fhtml\u002FHenderson_Leveraging_2D_Data_to_Learn_Textured_3D_Mesh_Generation_CVPR_2020_paper.html) |      网格      | CVPR 2020 |        [代码](https:\u002F\u002Fgithub.com\u002Fpmh47\u002Ftextured-mesh-gen)         |\n| [从未标注图像集合中隐式网格重建](https:\u002F\u002Farxiv.org\u002Fabs\u002F2007.08504) |      网格      | arXiv 2020 |        [项目](https:\u002F\u002Fshubhtuls.github.io\u002Fimr\u002F)         |\n| [无需关键点的形状与视点](https:\u002F\u002Fwww.ecva.net\u002Fpapers\u002Feccv_2020\u002Fpapers_ECCV\u002Fpapers\u002F123600086.pdf) |      网格      | ECCV 2020 |        [项目](https:\u002F\u002Fshubham-goel.github.io\u002Fucmr\u002F)         |\n| [通过语义一致性进行自监督单视图3D重建](https:\u002F\u002Farxiv.org\u002Fabs\u002F2003.06473) |      网格      | ECCV 2020 |        [项目](https:\u002F\u002Fsites.google.com\u002Fnvidia.com\u002Funsup-mesh-2020)         |\n| [SDF-SRN：从静态图像中学习带符号距离的3D物体重建](https:\u002F\u002Fproceedings.neurips.cc\u002Fpaper\u002F2020\u002Fhash\u002F83fa5a432ae55c253d0e60dbfa716723-Abstract.html) |      隐式      | NIPS 2020 |        [项目](https:\u002F\u002Fchenhsuanlin.bitbucket.io\u002Fsigned-distance-SRN\u002F)         |\n| [野外货架监督下的网格预测](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FCVPR2021\u002Fhtml\u002FYe_Shelf-Supervised_Mesh_Prediction_in_the_Wild_CVPR_2021_paper.html) |      网格      | CVPR 2021 |        [项目](https:\u002F\u002Fjudyye.github.io\u002FShSMesh\u002F)         |\n| [全面理解通用物体：建模、分割与重建](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FCVPR2021\u002Fhtml\u002FLiu_Fully_Understanding_Generic_Objects_Modeling_Segmentation_and_Reconstruction_CVPR_2021_paper.html) |      隐式      | CVPR 2021 |        [项目](http:\u002F\u002Fcvlab.cse.msu.edu\u002Fproject-fully3dobject.html)         |\n| [从单张图像中自监督3D网格重建](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FCVPR2021\u002Fhtml\u002FHu_Self-Supervised_3D_Mesh_Reconstruction_From_Single_Images_CVPR_2021_paper.html) |      网格      | CVPR 2021 |        [代码](https:\u002F\u002Fgithub.com\u002Fdvlab-research\u002FSMR)         |\n| [单张图像纹理化3D模型的视图泛化](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FCVPR2021\u002Fhtml\u002FBhattad_View_Generalization_for_Single_Image_Textured_3D_Models_CVPR_2021_paper.html) |      网格      | CVPR 2021 |        [项目](https:\u002F\u002Fnv-adlr.github.io\u002Fview-generalization)         |\n| [2D GAN是否了解3D形状？从2D图像GAN中无监督重建3D形状](https:\u002F\u002Farxiv.org\u002Fabs\u002F2011.00844) |      网格      | ICLR 2021 |        [项目](https:\u002F\u002Fxingangpan.github.io\u002Fprojects\u002FGAN2Shape.html)         |\n| 
[图像GAN与可微渲染结合用于逆向图形和可解释的3D神经渲染](https:\u002F\u002Farxiv.org\u002Fabs\u002F2010.09125) |      网格      | ICLR 2021 |        [项目](https:\u002F\u002Fnv-tlabs.github.io\u002FGANverse3D\u002F)    |\n| [从图像集合中发现3D部件](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FICCV2021\u002Fhtml\u002FYao_Discovering_3D_Parts_From_Image_Collections_ICCV_2021_paper.html) |      网格      | ICCV 2021 |        [项目](https:\u002F\u002Fchhankyao.github.io\u002Flpd\u002F)         |\n| [学习用于细粒度识别的规范3D物体表示](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FICCV2021\u002Fhtml\u002FJoung_Learning_Canonical_3D_Object_Representation_for_Fine-Grained_Recognition_ICCV_2021_paper.html) |      网格      | ICCV 2021 |  \u002F  |\n| [通过来自多张图像的无监督学习实现逼真的单视图3D物体重建](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FICCV2021\u002Fhtml\u002FHo_Toward_Realistic_Single-View_3D_Object_Reconstruction_With_Unsupervised_Learning_From_ICCV_2021_paper.html) |      网格      | ICCV 2021 |        [代码](https:\u002F\u002Fgithub.com\u002FVinAIResearch\u002FLeMul)         |\n| [从真实世界图像中学习纹理化3D网格的生成模型](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FICCV2021\u002Fhtml\u002FPavllo_Learning_Generative_Models_of_Textured_3D_Meshes_From_Real-World_Images_ICCV_2021_paper.html) |      网格      | ICCV 2021 |        [代码](https:\u002F\u002Fgithub.com\u002Fdariopavllo\u002Ftextured-3d-gan)         |\n| [直击要点：基于对应关系的单目3D类别重建](https:\u002F\u002Fproceedings.neurips.cc\u002Fpaper\u002F2021\u002Fhash\u002F40008b9a5380fcacce3976bf7c08af5b-Abstract.html) |      网格      | NIPS 2021 |        [项目](https:\u002F\u002Ffkokkinos.github.io\u002Fto_the_point\u002F)         |\n| [面向拓扑的形变场用于单视图3D重建](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FCVPR2022\u002Fhtml\u002FDuggal_Topologically-Aware_Deformation_Fields_for_Single-View_3D_Reconstruction_CVPR_2022_paper.html) |      隐式      | CVPR 2022 |        [项目](https:\u002F\u002Fshivamduggal4.github.io\u002Ftars-3D\u002F)     |\n| [2D GAN与无监督单视图3D重建相遇](https:\u002F\u002Farxiv.org\u002Fabs\u002F2207.10183) |      隐式      | ECCV 2022 |        [项目](http:\u002F\u002Fcvlab.cse.msu.edu\u002Fproject-gansvr.html)    |\n| [通过GAN反演进行单目3D物体重建](https:\u002F\u002Fwww.ecva.net\u002Fpapers\u002Feccv_2022\u002Fpapers_ECCV\u002Fpapers\u002F136610665.pdf) |      网格      | ECCV 2022 |        [项目](https:\u002F\u002Fwww.mmlab-ntu.com\u002Fproject\u002Fmeshinversion\u002F)    |\n| [与邻共享：基于跨实例一致性的单视图重建](https:\u002F\u002Fwww.ecva.net\u002Fpapers\u002Feccv_2022\u002Fpapers_ECCV\u002Fpapers\u002F136610282.pdf) |      网格      | ECCV 2022 |        [项目](http:\u002F\u002Fimagine.enpc.fr\u002F~monniert\u002FUNICORN\u002F)         |\n| [通过自举辐射场反演从一张图像中获取形状、姿态和外观](https:\u002F\u002Farxiv.org\u002Fabs\u002F2211.11674) |      隐式      | CVPR 2023 | [代码](https:\u002F\u002Fgithub.com\u002Fgoogle-research\u002Fnerf-from-image) |\n| [以五千种方式看一朵玫瑰](https:\u002F\u002Farxiv.org\u002Fabs\u002F2212.04965) |      隐式      | CVPR 2023 | [项目](https:\u002F\u002Fcs.stanford.edu\u002F~yzzhang\u002Fprojects\u002Frose\u002F) |\n| [ShapeClipper：通过几何和CLIP-based一致性从单视图图像中可扩展地学习3D形状](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FCVPR2023\u002Fhtml\u002FHuang_ShapeClipper_Scalable_3D_Shape_Learning_From_Single-View_Images_via_Geometric_CVPR_2023_paper.html) |      隐式      | CVPR 2023 | [项目](https:\u002F\u002Fzixuanh.com\u002Fprojects\u002Fshapeclipper.html) |\n| [SAOR：单视图关节物体重建](https:\u002F\u002Farxiv.org\u002Fabs\u002F2303.13514) |      隐式      | arXiv 2023 | [项目](https:\u002F\u002Fmehmetaygun.github.io\u002Fsaor) |\n| [从2D 
GAN数据中逐步学习3D重建网络](https:\u002F\u002Farxiv.org\u002Fabs\u002F2305.11102) |      网格      | arXiv 2023 | [项目](https:\u002F\u002Fresearch.nvidia.com\u002Flabs\u002Fadlr\u002Fprogressive-3d-learning\u002F) |\n\n## 场景级\n\n### 单视角\n|                            论文                             | 表征方式 | 发表会议 |                         项目\u002F代码                         |\n| :----------------------------------------------------------: | :------------: | :-------: | :----------------------------------------------------------: |\n| [IM2CAD](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2017\u002Fhtml\u002FIzadinia_IM2CAD_CVPR_2017_paper.html) |      CAD       | CVPR 2017 |         [代码](https:\u002F\u002Fgithub.com\u002Fyyong119\u002FIM2CAD)         |\n| [3D-RCNN: 基于渲染与比较的实例级3D物体重建](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2018\u002Fhtml\u002FKundu_3D-RCNN_Instance-Level_3D_CVPR_2018_paper.html) |     先验     | CVPR 2018 |   [项目](https:\u002F\u002Fabhijitkundu.info\u002Fprojects\u002F3D-RCNN\u002F)   |\n| [从3D场景的2D图像中分解形状、姿态和布局](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2018\u002Fhtml\u002FTulsiani_Factoring_Shape_Pose_CVPR_2018_paper.html) |     体素      | CVPR 2018 |     [项目](https:\u002F\u002Fshubhtuls.github.io\u002Ffactored3d\u002F)     |\n| [基于单张RGB图像的整体3D场景解析与重建](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_ECCV_2018\u002Fhtml\u002FSiyuan_Huang_Monocular_Scene_Parsing_ECCV_2018_paper.html) |      网格       | ECCV 2018 | [项目](https:\u002F\u002Fsiyuanhuang.com\u002Fholistic_parsing\u002Fmain.html) |\n| [Mesh R-CNN](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_ICCV_2019\u002Fhtml\u002FGkioxari_Mesh_R-CNN_ICCV_2019_paper.html) |      网格      | ICCV 2019 |    [代码](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fmeshrcnn)    |\n| [基于多层深度和极线变换器的3D场景重建](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_ICCV_2019\u002Fhtml\u002FShin_3D_Scene_Reconstruction_With_Multi-Layer_Depth_and_Epipolar_Transformers_ICCV_2019_paper.html) |      网格      | ICCV 2019 | [项目](https:\u002F\u002Fresearch.dshin.org\u002Ficcv19\u002Fmulti-layer-depth) |\n| [3D-RelNet: 用于3D预测的联合对象与关系网络](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_ICCV_2019\u002Fhtml\u002FKulkarni_3D-RelNet_Joint_Object_and_Relational_Network_for_3D_Prediction_ICCV_2019_paper.html) |     体素      | ICCV 2019 |  [项目](https:\u002F\u002Fnileshkulkarni.github.io\u002Frelative3d\u002F)   |\n| [Total3DUnderstanding: 基于单张图像对室内场景的布局、物体姿态及网格重建进行联合建模](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_CVPR_2020\u002Fhtml\u002FNie_Total3DUnderstanding_Joint_Layout_Object_Pose_and_Mesh_Reconstruction_for_Indoor_CVPR_2020_paper.html) |      网格      | CVPR 2020 |       [项目](https:\u002F\u002Fyinyunie.github.io\u002FTotal3D\u002F)       |\n| [基于单视口的3D场景重建](https:\u002F\u002Fwww.ecva.net\u002Fpapers\u002Feccv_2020\u002Fpapers_ECCV\u002Fpapers\u002F123670052.pdf) |     体素      | ECCV 2020 | [代码](https:\u002F\u002Fgithub.com\u002FDLR-RM\u002FSingleViewReconstruction) |\n| [CoReNet: 基于单张RGB图像的一致性3D场景重建](https:\u002F\u002Fwww.ecva.net\u002Fpapers\u002Feccv_2020\u002Fpapers_ECCV\u002Fpapers\u002F123470358.pdf) | 体素+隐式表示 | ECCV 2020 |     [代码](https:\u002F\u002Fgithub.com\u002Fgoogle-research\u002Fcorenet)     |\n| [用于3D场景重建与分割的图像到体素模型转换](https:\u002F\u002Fwww.ecva.net\u002Fpapers\u002Feccv_2020\u002Fpapers_ECCV\u002Fpapers\u002F123520103.pdf) |     体素      | ECCV 2020 |           [代码](https:\u002F\u002Fgithub.com\u002Fvlkniaz\u002FSSZ)           |\n| 
[基于隐式表示的单张图像整体3D场景理解](https:\u002F\u002Farxiv.org\u002Fabs\u002F2103.06422) |    隐式    | CVPR 2021 |  [项目](https:\u002F\u002Fchengzhag.github.io\u002Fpublication\u002Fim3d\u002F)  |\n| [从点云到多物体3D重建](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FCVPR2021\u002Fhtml\u002FEngelmann_From_Points_to_Multi-Object_3D_Reconstruction_CVPR_2021_paper.html) |    隐式    | CVPR 2021 |  [项目](https:\u002F\u002Ffrancisengelmann.github.io\u002Fpoints2objects\u002F)  |\n| [学习从单张图像恢复3D场景形状](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FCVPR2021\u002Fhtml\u002FYin_Learning_To_Recover_3D_Scene_Shape_From_a_Single_Image_CVPR_2021_paper.html) |    点云    | CVPR 2021 |  [代码](https:\u002F\u002Fgithub.com\u002Faim-uofa\u002FAdelaiDepth)  |\n| [Patch2CAD: 基于补丁嵌入学习的野外场景形状检索](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FICCV2021\u002Fhtml\u002FKuo_Patch2CAD_Patchwise_Embedding_Learning_for_In-the-Wild_Shape_Retrieval_From_a_ICCV_2021_paper.html) |    网格    | ICCV 2021 |  \u002F  |\n| [基于单张RGB图像的全景式3D场景重建](https:\u002F\u002Fproceedings.neurips.cc\u002Fpaper\u002F2021\u002Fhash\u002F46031b3d04dc90994ca317a7c55c4289-Abstract.html) |    体素    | NIPS 2021 |  [项目](https:\u002F\u002Fmanuel-dahnert.com\u002Fresearch\u002Fpanoptic-reconstruction)  |\n| [基于体素的单张图像多物体3D检测与重建](https:\u002F\u002Fproceedings.neurips.cc\u002Fpaper\u002F2021\u002Fhash\u002F1415db70fe9ddb119e23e9b2808cde38-Abstract.html) |    隐式    | NIPS 2021 |  [项目](http:\u002F\u002Fcvlab.cse.msu.edu\u002Fproject-mdr.html)  |\n| [迈向高保真度的室内场景单视角整体重建](https:\u002F\u002Farxiv.org\u002Fabs\u002F2207.08656) |    隐式    | ECCV 2022 |  [代码](https:\u002F\u002Fgithub.com\u002FUncleMEDM\u002FInstPIFu)  |\n| [3D-Former: 基于SDF的3D变换器的单目场景重建](https:\u002F\u002Farxiv.org\u002Fabs\u002F2301.13510) |    隐式    | ICLR 2023 |  [项目](https:\u002F\u002Fweihaosky.github.io\u002Fformer3d\u002F)  |\n| [BUOL: 基于占用感知提升的自底向上框架，用于单张图像的全景式3D场景重建](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FCVPR2023\u002Fhtml\u002FChu_BUOL_A_Bottom-Up_Framework_With_Occupancy-Aware_Lifting_for_Panoptic_3D_CVPR_2023_paper.html) |    隐式    | CVPR 2023 |  [代码](https:\u002F\u002Fgithub.com\u002Fchtsy\u002Fbuol)  |\n\n### 多视角\n|                            论文                             | 表示方式 | 会议\u002F期刊 |                         项目\u002F代码                         |\n| :----------------------------------------------------------: | :------------: | :-------: | :----------------------------------------------------------: |\n| [MARMVS：减少匹配歧义的多视角立体视觉用于高效的大规模场景重建](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_CVPR_2020\u002Fhtml\u002FXu_MARMVS_Matching_Ambiguity_Reduced_Multiple_View_Stereo_for_Efficient_Large_CVPR_2020_paper.html) | 点云    | CVPR 2020 | \u002F |\n| [FroDO：从检测到3D物体](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_CVPR_2020\u002Fhtml\u002FRunz_FroDO_From_Detections_to_3D_Objects_CVPR_2020_paper.html) | 隐式表示 | CVPR 2020 | \u002F |\n| [Associative3D：基于稀疏视图的体素重建](https:\u002F\u002Fwww.ecva.net\u002Fpapers\u002Feccv_2020\u002Fpapers_ECCV\u002Fpapers\u002F123600137.pdf) | 体素          | ECCV 2020 | [项目](https:\u002F\u002Fjasonqsy.github.io\u002FAssociative3D\u002F) |\n| [Atlas：从已知位姿图像端到端的3D场景重建](https:\u002F\u002Fwww.ecva.net\u002Fpapers\u002Feccv_2020\u002Fpapers_ECCV\u002Fpapers\u002F123520409.pdf) | 网格           | ECCV 2020 | [项目](http:\u002F\u002Fzak.murez.com\u002Fatlas\u002F)               |\n| [NeuralRecon：单目视频实时连贯的3D重建](https:\u002F\u002Farxiv.org\u002Fabs\u002F2104.00681) | 网格           | CVPR 2021 | 
[项目](https:\u002F\u002Fzju3dv.github.io\u002Fneuralrecon\u002F)     |\n| [TransformerFusion：使用Transformer的单目RGB场景重建](https:\u002F\u002Fproceedings.neurips.cc\u002Fpaper\u002F2021\u002Fhash\u002F0a87257e5308197df43230edf4ad1dae-Abstract.html) |    隐式表示    | NIPS 2021 |  [项目](https:\u002F\u002Faljazbozic.github.io\u002Ftransformerfusion\u002F)  |\n| [无需3D监督学习3D物体形状与布局](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FCVPR2022\u002Fhtml\u002FGkioxari_Learning_3D_Object_Shape_and_Layout_Without_3D_Supervision_CVPR_2022_paper.html) |    网格    | CVPR 2022 |  [项目](https:\u002F\u002Fgkioxari.github.io\u002Fusl\u002Findex.html)  |\n| [用于3D场景重建的定向射线距离函数](https:\u002F\u002Fwww.ecva.net\u002Fpapers\u002Feccv_2022\u002Fpapers_ECCV\u002Fpapers\u002F136620193.pdf) |    隐式表示    | ECCV 2022 |  [项目](https:\u002F\u002Fnileshkulkarni.github.io\u002Fscene_drdf\u002F)  |\n| [通过2D监督学习3D场景先验](https:\u002F\u002Farxiv.org\u002Fabs\u002F2211.14157) |    网格    | arXiv 2022 |  [项目](https:\u002F\u002Fyinyunie.github.io\u002Fsceneprior-page\u002F)  |\n| [FineRecon：深度感知的前馈网络用于精细的3D重建](https:\u002F\u002Farxiv.org\u002Fabs\u002F2304.01480) |    隐式表示    | arXiv 2023 |  [代码](https:\u002F\u002Fgithub.com\u002Fapple\u002Fml-finerecon)  |\n| [CVRecon：重新思考神经网络重建中的3D几何特征学习](https:\u002F\u002Farxiv.org\u002Fabs\u002F2304.14633) |    隐式表示    | arXiv 2023 |  [项目](https:\u002F\u002Fcvrecon.ziyue.cool\u002F)  |\n| [VisFusion：基于视频的可见性感知在线3D场景重建](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FCVPR2023\u002Fhtml\u002FGao_VisFusion_Visibility-Aware_Online_3D_Scene_Reconstruction_From_Videos_CVPR_2023_paper.html) |    隐式表示    | CVPR 2023 |  [项目](https:\u002F\u002Fhuiyu-gao.github.io\u002Fvisfusion\u002F)  |\n\n## 神经表面\n\n### 多视角\n|                            论文                             | 表示方法 | 会议\u002F期刊 |                         项目\u002F代码                         |\n| :----------------------------------------------------------: | :------------: | :-------: | :----------------------------------------------------------: |\n| [SDFDiff：用于3D形状的符号距离场可微渲染](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_CVPR_2020\u002Fhtml\u002FJiang_SDFDiff_Differentiable_Rendering_of_Signed_Distance_Fields_for_3D_Shape_CVPR_2020_paper.html) |      隐式      | CVPR 2020 |        [代码](https:\u002F\u002Fyuejiang-nj.github.io\u002Fpapers\u002FCVPR2020_SDFDiff\u002Fproject_page.html)         |\n| [可微体渲染：无需3D监督学习隐式3D表示](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_CVPR_2020\u002Fhtml\u002FNiemeyer_Differentiable_Volumetric_Rendering_Learning_Implicit_3D_Representations_Without_3D_Supervision_CVPR_2020_paper.html) |      隐式      | CVPR 2020 |        [代码](https:\u002F\u002Fgithub.com\u002Fautonomousvision\u002Fdifferentiable_volumetric_rendering)         |\n| [通过解耦几何与外观的多视图神经表面重建](https:\u002F\u002Fpapers.nips.cc\u002Fpaper\u002F2020\u002Fhash\u002F1a77befc3b608d6ed363567685f70e1e-Abstract.html) |      隐式      | NIPS 2020 |        [项目](https:\u002F\u002Flioryariv.github.io\u002Fidr\u002F)         |\n| [从野外视频中无监督学习3D物体类别](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FCVPR2021\u002Fhtml\u002FHenzler_Unsupervised_Learning_of_3D_Object_Categories_From_Videos_in_the_CVPR_2021_paper.html) |    隐式    | CVPR 2021 |  [项目](https:\u002F\u002Fhenzler.github.io\u002Fpublication\u002Funsupervised_videos\u002F)  |\n| [神经光度图渲染](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FCVPR2021\u002Fhtml\u002FKellnhofer_Neural_Lumigraph_Rendering_CVPR_2021_paper.html) |    隐式    | CVPR 2021 |  
[项目](http:\u002F\u002Fwww.computationalimaging.org\u002Fpublications\u002Fnlr\u002F)  |\n| [等值点：使用混合表示优化神经隐式曲面](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FCVPR2021\u002Fhtml\u002FYifan_Iso-Points_Optimizing_Neural_Implicit_Surfaces_With_Hybrid_Representations_CVPR_2021_paper.html) |    隐式    | CVPR 2021 |  [项目](https:\u002F\u002Fyifita.github.io\u002Fpublication\u002Fiso_points\u002F)  |\n| [为多视图表面重建学习符号距离场](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FICCV2021\u002Fhtml\u002FZhang_Learning_Signed_Distance_Field_for_Multi-View_Surface_Reconstruction_ICCV_2021_paper.html) |      隐式      | ICCV 2021 |        [代码](https:\u002F\u002Fgithub.com\u002Fjzhangbs\u002FMVSDF)         |\n| [UNISURF：统一神经隐式曲面和辐射场以进行多视图重建](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FICCV2021\u002Fhtml\u002FOechsle_UNISURF_Unifying_Neural_Implicit_Surfaces_and_Radiance_Fields_for_Multi-View_ICCV_2021_paper.html) |      隐式      | ICCV 2021 |        [项目](https:\u002F\u002Fmoechsle.github.io\u002Funisurf\u002F)         |\n| [NeuS：通过体积渲染学习神经隐式曲面以进行多视图重建](https:\u002F\u002Fproceedings.neurips.cc\u002Fpaper\u002F2021\u002Fhash\u002Fe41e164f7485ec4a28741a2d0ea41c74-Abstract.html) |      隐式      | NIPS 2021 |        [项目](https:\u002F\u002Flingjie0206.github.io\u002Fpapers\u002FNeuS\u002F)         |\n| [NeRS：用于野外稀疏视图3D重建的神经反射曲面](https:\u002F\u002Fproceedings.neurips.cc\u002Fpaper\u002F2021\u002Fhash\u002Ff95ec3de395b4bce25b39ef6138da871-Abstract.html) |    隐式    | NIPS 2021 |  [项目](https:\u002F\u002Fjasonyzhang.com\u002Fners\u002F)  |\n| [神经隐式曲面的体积渲染](https:\u002F\u002Fproceedings.neurips.cc\u002Fpaper\u002F2021\u002Fhash\u002F25e2a30f44898b9f3e978b1786dcd85c-Abstract.html) |      隐式      | NIPS 2021 |        [非官方代码](https:\u002F\u002Fgithub.com\u002Fventusff\u002Fneurecon)         |\n| [NeuralWarp：通过补丁变形改进神经隐式曲面的几何形状](https:\u002F\u002Farxiv.org\u002Fabs\u002F2112.09648) |      隐式      | CVPR 2022 |        [项目](http:\u002F\u002Fimagine.enpc.fr\u002F~darmonf\u002FNeuralWarp\u002F)         |\n| [基于曼哈顿世界假设的神经3D场景重建](https:\u002F\u002Farxiv.org\u002Fabs\u002F2205.02836) |    隐式    | CVPR 2022 |  [项目](https:\u002F\u002Fzju3dv.github.io\u002Fmanhattan_sdf\u002F)  |\n| [GenDR：一种通用的可微渲染器](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FCVPR2022\u002Fhtml\u002FPetersen_GenDR_A_Generalized_Differentiable_Renderer_CVPR_2022_paper.html) |    网格    | CVPR 2022 |  [代码](https:\u002F\u002Fgithub.com\u002FFelix-Petersen\u002Fgendr)  |\n| [NeRFusion：融合辐射场以进行大规模场景重建](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FCVPR2022\u002Fhtml\u002FZhang_NeRFusion_Fusing_Radiance_Fields_for_Large-Scale_Scene_Reconstruction_CVPR_2022_paper.html) |    隐式    | CVPR 2022 |  [项目](https:\u002F\u002Fjetd1.github.io\u002FNeRFusion-Web\u002F)  |\n| [野外神经表面重建的关键正则化](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FCVPR2022\u002Fhtml\u002FZhang_Critical_Regularizations_for_Neural_Surface_Reconstruction_in_the_Wild_CVPR_2022_paper.html) |    隐式    | CVPR 2022 | \u002F |\n| [利用神经延迟着色的多视图网格重建](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FCVPR2022\u002Fhtml\u002FWorchel_Multi-View_Mesh_Reconstruction_With_Neural_Deferred_Shading_CVPR_2022_paper.html) |    网格    | CVPR 2022 |  [项目](https:\u002F\u002Ffraunhoferhhi.github.io\u002Fneural-deferred-shading\u002F)  |\n| [可微立体视觉：使用可微渲染从多视图生成网格](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FCVPR2022\u002Fhtml\u002FGoel_Differentiable_Stereopsis_Meshes_From_Multiple_Views_Using_Differentiable_Rendering_CVPR_2022_paper.html) |    网格    | 
CVPR 2022 |  [代码](https:\u002F\u002Fgithub.com\u002Fshubham-goel\u002Fds)  |\n| [SparseNeuS：从稀疏视图快速且可推广的神经表面重建](https:\u002F\u002Farxiv.org\u002Fabs\u002F2206.05737) |    隐式    | ECCV 2022 |  [项目](https:\u002F\u002Fwww.xxlong.site\u002FSparseNeuS\u002F)  |\n| [对象组合式的神经隐式曲面](https:\u002F\u002Fwww.ecva.net\u002Fpapers\u002Feccv_2022\u002Fpapers_ECCV\u002Fpapers\u002F136870194.pdf) |    隐式    | ECCV 2022 | [项目](https:\u002F\u002Fwuqianyi.top\u002Fobjectsdf\u002F) |\n| [SNeS：从不完整数据中学习可能对称的神经曲面](https:\u002F\u002Farxiv.org\u002Fabs\u002F2206.06340) |    隐式    | ECCV 2022 | [项目](https:\u002F\u002Fwww.robots.ox.ac.uk\u002F~vgg\u002Fresearch\u002Fsnes\u002F) |\n| [野外的神经3D重建](https:\u002F\u002Farxiv.org\u002Fabs\u002F2205.12955) |    隐式    | SIGGRAPH 2022 |  [项目](https:\u002F\u002Fzju3dv.github.io\u002Fneuralrecon-w\u002F)  |\n| [可微符号距离函数渲染](http:\u002F\u002Frgl.s3.eu-central-1.amazonaws.com\u002Fmedia\u002Fpapers\u002FVicini2022sdf_1.pdf) |    隐式    | SIGGRAPH 2022 |  [项目](http:\u002F\u002Frgl.epfl.ch\u002Fpublications\u002FVicini2022SDF)  |\n| [通过重新参数化实现神经SDF的可微渲染](https:\u002F\u002Farxiv.org\u002Fabs\u002F2206.05344) |    隐式    | SIGGRAPH Asia 2022 | [项目](https:\u002F\u002Fpeople.csail.mit.edu\u002Fsbangaru\u002Fprojects\u002Fdsdf-2022\u002Findex.html) |\n| [从原始点云逐步学习一致性感知的无符号距离函数](https:\u002F\u002Farxiv.org\u002Fabs\u002F2210.02757) |    隐式    | NIPS 2022 | [项目](https:\u002F\u002Fjunshengzhou.github.io\u002FCAP-UDF\u002F) |\n| [Geo-Neus：用于多视图重建的几何一致神经隐式曲面学习](https:\u002F\u002Farxiv.org\u002Fabs\u002F2205.15848) |    隐式    | NIPS 2022 |  [代码](https:\u002F\u002Fgithub.com\u002FGhiXu\u002FGeo-Neus)  |\n| [MonoSDF：探索单目几何线索以进行神经隐式曲面重建](https:\u002F\u002Farxiv.org\u002Fabs\u002F2206.00665) |    隐式    | NIPS 2022 |  [项目](https:\u002F\u002Fniujinshuchong.github.io\u002Fmonosdf\u002F)  |\n| [HF-NeuS：利用高频细节改进表面重建](https:\u002F\u002Farxiv.org\u002Fabs\u002F2206.07850) |    隐式    | NIPS 2022 |  [项目](https:\u002F\u002Fgithub.com\u002Fyiqun-wang\u002FHFS)  |\n| [恢复精细细节以进行神经隐式曲面重建](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FWACV2023\u002Fhtml\u002FChen_Recovering_Fine_Details_for_Neural_Implicit_Surface_Reconstruction_WACV_2023_paper.html) |    隐式    | WACV 2023 |  [代码](https:\u002F\u002Fgithub.com\u002Ffraunhoferhhi\u002FD-NeuS)  |\n| [NeuRIS：利用法线先验进行室内场景的神经重建](https:\u002F\u002Farxiv.org\u002Fabs\u002F2206.13597) |    隐式    | ECCV 2022 |  [项目](https:\u002F\u002Fjiepengwang.github.io\u002FNeuRIS\u002F)  |\n| [球面引导的神经隐式曲面训练](https:\u002F\u002Farxiv.org\u002Fabs\u002F2209.15511) |    隐式    | arXiv 2022 | \u002F |\n| [QFF：用于神经场表示的量化傅里叶特征](https:\u002F\u002Farxiv.org\u002Fabs\u002F2212.00914) |    隐式    | arXiv 2022 |  \u002F  |\n| [NeuS2：用于多视图重建的快速神经隐式曲面学习](https:\u002F\u002Farxiv.org\u002Fabs\u002F2212.05231) |    隐式    | arXiv 2022 |  [项目](https:\u002F\u002Fvcai.mpi-inf.mpg.de\u002Fprojects\u002FNeuS2\u002F)  |\n| [Voxurf：基于体素的高效且精确的神经表面重建](https:\u002F\u002Farxiv.org\u002Fabs\u002F2208.12697) |    隐式    | ICLR 2023 | [代码](https:\u002F\u002Fgithub.com\u002Fwutong16\u002FVoxurf) |\n| [PermutoSDF：利用排列晶格的隐式曲面进行快速多视图重建](https:\u002F\u002Farxiv.org\u002Fabs\u002F2211.12562) |    隐式    | CVPR 2023 | [项目](https:\u002F\u002Fradualexandru.github.io\u002Fpermuto_sdf\u002F) |\n| [ShadowNeuS：通过阴影光线监督进行神经SDF重建](https:\u002F\u002Farxiv.org\u002Fabs\u002F2211.14086) |    隐式    | CVPR 2023 |  [项目](https:\u002F\u002Fgerwang.github.io\u002Fshadowneus\u002F)  |\n| [NeuralUDF：学习无符号距离场以进行具有任意拓扑结构的表面多视图重建](https:\u002F\u002Farxiv.org\u002Fabs\u002F2211.14173) |    隐式    | CVPR 2023 |  
[项目](https:\u002F\u002Fwww.xxlong.site\u002FNeuralUDF\u002F)  |\n| [NeuDA：用于高保真隐式曲面重建的神经可变形锚点](https:\u002F\u002Farxiv.org\u002Fabs\u002F2303.02375) |     隐式      | CVPR 2023 | [项目](https:\u002F\u002F3d-front-future.github.io\u002Fneuda\u002F) |\n| [SparseFusion：提炼视图条件扩散以进行3D重建](https:\u002F\u002Farxiv.org\u002Fabs\u002F2212.00792) |    隐式    | CVPR 2023 |  [项目](https:\u002F\u002Fsparsefusion.github.io\u002F)  |\n| [I$^2$-SDF：通过在神经SDF中进行光线追踪实现室内场景的内在重建与编辑](https:\u002F\u002Farxiv.org\u002Fabs\u002F2303.07634) |     隐式      | CVPR 2023 | [项目](https:\u002F\u002Fjingsenzhu.github.io\u002Fi2-sdf\u002F) |\n| [NeAT：从多视图图像中学习具有任意拓扑结构的神经隐式曲面](https:\u002F\u002Farxiv.org\u002Fabs\u002F2303.12012) |     隐式      | CVPR 2023 | [项目](https:\u002F\u002Fxmeng525.github.io\u002Fxiaoxumeng.github.io\u002Fprojects\u002Fcvpr23_neat) |\n| [NeUDF：通过体积渲染学习神经无符号距离场](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FCVPR2023\u002Fhtml\u002FLiu_NeUDF_Leaning_Neural_Unsigned_Distance_Fields_With_Volume_Rendering_CVPR_2023_paper.html) |     隐式      | CVPR 2023 | [项目](http:\u002F\u002Fgeometrylearning.com\u002Fneudf\u002F) |\n| [通过水平集对齐提升神经符号距离函数的梯度一致性](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FCVPR2023\u002Fhtml\u002FMa_Towards_Better_Gradient_Consistency_for_Neural_Signed_Distance_Functions_via_Level_Set_Alignment_CVPR_2023_paper.html) |     隐式      | CVPR 2023 | [代码](https:\u002F\u002Fgithub.com\u002Fmabaorui\u002FTowardsBetterGradient\u002F) |\n| [Neuralangelo：高保真神经表面重建](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FCVPR2023\u002Fhtml\u002FLi_Neuralangelo_High-Fidelity_Neural_Surface_Reconstruction_CVPR_2023_paper.html) |     隐式      | CVPR 2023 | [项目](https:\u002F\u002Fresearch.nvidia.com\u002Flabs\u002Fdir\u002Fneuralangelo\u002F) |\n| [VolRecon：用于可推广多视图重建的有符号射线距离函数体积渲染](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FCVPR2023\u002Fhtml\u002FRen_VolRecon_Volume_Rendering_of_Signed_Ray_Distance_Functions_for_Generalizable_Multi-View_Reconstruction_CVPR_2023_paper.html) |     隐式      | CVPR 2023 | [项目](https:\u002F\u002Ffangjinhuawang.github.io\u002FVolRecon\u002F) |\n| [PET-NeuS：用于神经曲面的位置编码三平面](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FCVPR2023\u002Fhtml\u002FWang_PET-NeuS_Positional_Encoding_Tri-Planes_for_Neural_Surfaces_CVPR_2023_paper.html) |     隐式      | CVPR 2023 | \u002F |\n| [HR-NeuS：通过神经隐式曲面恢复高频表面几何](https:\u002F\u002Farxiv.org\u002Fabs\u002F2302.06793) |    隐式    | arXiv 2023 | \u002F |\n| [RICO：为室内组合式重建正则化不可观测的部分](https:\u002F\u002Farxiv.org\u002Fabs\u002F2303.08605) |    隐式    | arXiv 2023 | \u002F |\n| [使用Occ-SDF混合模型学习房间：将符号距离函数与占用率辅助场景表示相结合](https:\u002F\u002Farxiv.org\u002Fabs\u002F2303.09152) |    隐式    | arXiv 2023 | \u002F |\n| [NeUDF：从多视图图像中学习无符号距离场以重建非水密模型](https:\u002F\u002Farxiv.org\u002Fabs\u002F2303.15368) |     隐式      | arXiv 2023 | \u002F |\n| [S-VolSDF：神经隐式的稀疏多视图立体正则化](https:\u002F\u002Farxiv.org\u002Fabs\u002F2303.17712) |     隐式      | arXiv 2023 | [项目](https:\u002F\u002Fhao-yu-wu.github.io\u002Fs-volsdf\u002F) |\n| [VDN-NeRF：通过视依赖归一化解决形状-辐射模糊](https:\u002F\u002Farxiv.org\u002Fabs\u002F2303.17968) |     隐式      | arXiv 2023 | \u002F |\n| [FastMESH：基于六边形网格的神经渲染实现快速表面重建](https:\u002F\u002Farxiv.org\u002Fabs\u002F2305.17858) |     网格      | arXiv 2023 | \u002F |\n| [显式神经曲面：通过变形场学习连续几何](https:\u002F\u002Farxiv.org\u002Fabs\u002F2306.02956) |     隐式      | arXiv 2023 | \u002F |\n\n### 点云\n|                            论文                             | 表示方式 | 会议\u002F期刊 |                         项目\u002F代码                         
|\n| :----------------------------------------------------------: | :------------: | :-------: | :----------------------------------------------------------: |\n| [用于表面重建的深度几何先验](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_CVPR_2019\u002Fhtml\u002FWilliams_Deep_Geometric_Prior_for_Surface_Reconstruction_CVPR_2019_paper.html) |     片段      | CVPR 2019 |        [代码](https:\u002F\u002Fgithub.com\u002Ffwilliams\u002Fdeep-geometric-prior)        |\n| [Scan2Mesh：从非结构化范围扫描到3D网格](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_CVPR_2019\u002Fhtml\u002FDai_Scan2Mesh_From_Unstructured_Range_Scans_to_3D_Meshes_CVPR_2019_paper.html) |     网格      | CVPR 2019 | [代码](https:\u002F\u002Fgithub.com\u002Fmohamed-ebbed\u002FScan2Mesh) |\n| [用于3D网格重建的Meshlet先验](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_CVPR_2020\u002Fhtml\u002FBadki_Meshlet_Priors_for_3D_Mesh_Reconstruction_CVPR_2020_paper.html) |     网格      | CVPR 2020 | [代码](https:\u002F\u002Fgithub.com\u002FNVlabs\u002Fmeshlets) |\n| [SSRNet：可扩展的3D表面重建网络](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_CVPR_2020\u002Fhtml\u002FMi_SSRNet_Scalable_3D_Surface_Reconstruction_Network_CVPR_2020_paper.html) |     隐式      | CVPR 2020 | \u002F |\n| [SAL：从原始数据中进行符号无关的形状学习](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_CVPR_2020\u002Fhtml\u002FAtzmon_SAL_Sign_Agnostic_Learning_of_Shapes_From_Raw_Data_CVPR_2020_paper.html) |     隐式      | CVPR 2020 |        [代码](https:\u002F\u002Fgithub.com\u002Fmatanatz\u002FSAL)      |\n| [用于形状学习的隐式几何正则化](https:\u002F\u002Fproceedings.mlr.press\u002Fv119\u002Fgropp20a.html) |     隐式      | ICML 2020 |        [代码](https:\u002F\u002Fgithub.com\u002Famosgropp\u002FIGR)        |\n| [基于预测的内外比引导的点云网格化](https:\u002F\u002Farxiv.org\u002Fabs\u002F2007.09267) |     网格      | ECCV 2020 |        [代码](https:\u002F\u002Fgithub.com\u002FColin97\u002FPoint2Mesh)        |\n| [PointTriNet：3D点集的可学习三角剖分](https:\u002F\u002Farxiv.org\u002Fabs\u002F2005.02138) |     网格      | ECCV 2020 |        [代码](https:\u002F\u002Fgithub.com\u002Fnmwsharp\u002Flearned-triangulation)      |\n| [Points2Surf：从点云片段中学习隐式曲面](https:\u002F\u002Farxiv.org\u002Fabs\u002F2007.10453) |     隐式      | ECCV 2020 |        [代码](https:\u002F\u002Fgithub.com\u002FErlerPhilipp\u002Fpoints2surf)      |\n| [卷积占用网络](https:\u002F\u002Farxiv.org\u002Fabs\u002F2003.04618) |     隐式      | ECCV 2020 |        [代码](https:\u002F\u002Fgithub.com\u002Fautonomousvision\u002Fconvolutional_occupancy_networks)      |\n| [带有周期性激活函数的隐式神经表示](https:\u002F\u002Fproceedings.neurips.cc\u002Fpaper\u002F2020\u002Fhash\u002F53c04118df112c13a8c34b38343b9c10-Abstract.html) |     隐式      | NIPS 2020 |        [项目](https:\u002F\u002Fwww.vincentsitzmann.com\u002Fsiren\u002F)      |\n| [用于隐式函数学习的神经无符号距离场](https:\u002F\u002Fproceedings.neurips.cc\u002Fpaper\u002F2020\u002Fhash\u002Ff69e505b08403ad2298b9f262659929a-Abstract.html) |     隐式      | NIPS 2020 |        [代码](https:\u002F\u002Fgithub.com\u002Fjchibane\u002Fndf)        |\n| [可微分的表面三角剖分](https:\u002F\u002Farxiv.org\u002Fabs\u002F2109.10695) |     网格      | TOG 2021 |        [代码](https:\u002F\u002Fgithub.com\u002Fmrakotosaon\u002Fdiff-surface-triangulation)      |\n| [SALD：带导数的符号无关学习](https:\u002F\u002Farxiv.org\u002Fabs\u002F2006.05400) |     隐式      | ICLR 2021 |        [代码](https:\u002F\u002Fgithub.com\u002Fmatanatz\u002FSALD)      |\n| 
[用于3D重建的深度隐式移动最小二乘函数](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FCVPR2021\u002Fhtml\u002FLiu_Deep_Implicit_Moving_Least-Squares_Functions_for_3D_Reconstruction_CVPR_2021_paper.html) |     隐式      | CVPR 2021 |        [代码](https:\u002F\u002Fgithub.com\u002FAndy97\u002FDeepMLS)      |\n| [从原始点云中进行符号无关的表面自相似性隐式学习，用于形状建模和重建](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FCVPR2021\u002Fhtml\u002FZhao_Sign-Agnostic_Implicit_Learning_of_Surface_Self-Similarities_for_Shape_Modeling_and_CVPR_2021_paper.html) |     隐式      | CVPR 2021 | \u002F |\n| [用于网格重建的学习Delaunay曲面元素](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FCVPR2021\u002Fhtml\u002FRakotosaona_Learning_Delaunay_Surface_Elements_for_Mesh_Reconstruction_CVPR_2021_paper.html) |     网格      | CVPR 2021 |        [代码](https:\u002F\u002Fgithub.com\u002Fmrakotosaon\u002Fdse-meshing)        |\n| [神经样条：用无限宽的神经网络拟合3D曲面](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FCVPR2021\u002Fhtml\u002FWilliams_Neural_Splines_Fitting_3D_Surfaces_With_Infinitely-Wide_Neural_Networks_CVPR_2021_paper.html) |     隐式      | CVPR 2021 |        [代码](https:\u002F\u002Fgithub.com\u002Ffwilliams\u002Fneural-splines)      |\n| [Neural-Pull：通过学习将空间拉向曲面来从点云中学习有符号距离函数](https:\u002F\u002Fproceedings.mlr.press\u002Fv139\u002Fma21b.html) |     隐式      | ICML 2021 |        [代码](https:\u002F\u002Fgithub.com\u002Fmabaorui\u002FNeuralPull)      |\n| [相变、距离函数与隐式神经表示](https:\u002F\u002Farxiv.org\u002Fabs\u002F2106.07689) |     隐式      | ICML 2021 | \u002F |\n| [Vis2Mesh：利用学习的虚拟视图可见性，高效地从大型场景的非结构化点云中重建网格](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FICCV2021\u002Fpapers\u002FSong_Vis2Mesh_Efficient_Mesh_Reconstruction_From_Unstructured_Point_Clouds_of_Large_ICCV_2021_paper.pdf) |     网格      | ICCV 2021 |        [代码](https:\u002F\u002Fgithub.com\u002FGDAOSU\u002Fvis2mesh)      |\n| [用于完整3D网格生成的深度混合自我先验](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FICCV2021\u002Fhtml\u002FWei_Deep_Hybrid_Self-Prior_for_Full_3D_Mesh_Generation_ICCV_2021_paper.html) |     网格      | ICCV 2021 | [项目](https:\u002F\u002Fyqdch.github.io\u002FDHSP3D\u002F) |\n| [基于多尺度卷积核的自适应表面重建](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FICCV2021\u002Fpapers\u002FUmmenhofer_Adaptive_Surface_Reconstruction_With_Multiscale_Convolutional_Kernels_ICCV_2021_paper.pdf) |     网格      | ICCV 2021 |        [代码](https:\u002F\u002Fgithub.com\u002Fisl-org\u002Fadaptive-surface-reconstruction)      |\n| [SA-ConvONet：卷积占用网络的符号无关优化](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FICCV2021\u002Fhtml\u002FTang_SA-ConvONet_Sign-Agnostic_Optimization_of_Convolutional_Occupancy_Networks_ICCV_2021_paper.html) |     隐式      | ICCV 2021 |        [代码](https:\u002F\u002Fgithub.com\u002Ftangjiapeng\u002FSA-ConvONet)      |\n| [深度隐式曲面点预测网络](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FICCV2021\u002Fhtml\u002FVenkatesh_Deep_Implicit_Surface_Point_Prediction_Networks_ICCV_2021_paper.html) |     隐式      | ICCV 2021 |        [项目](https:\u002F\u002Fsites.google.com\u002Fview\u002Fcspnet)      |\n| [形状即点：一种可微分的泊松求解器](https:\u002F\u002Farxiv.org\u002Fabs\u002F2106.03452) |     网格      | NIPS 2021 |        [项目](https:\u002F\u002Fpengsongyou.github.io\u002Fsap)      |\n| [AIR-Nets：基于注意力的局部条件隐式表示框架](https:\u002F\u002Farxiv.org\u002Fabs\u002F2110.11860) |     隐式      | 3DV 2021 |     [代码](https:\u002F\u002Fgithub.com\u002FSimonGiebenhain\u002FAIR-Nets)    |\n| [使用Delaunay图神经网络进行可扩展的表面重建](https:\u002F\u002Farxiv.org\u002Fabs\u002F2107.06130) |     网格      | 
SGP 2021 | [代码](https:\u002F\u002Fgithub.com\u002Fraphaelsulzer\u002Fdgnn) |\n| [Neural-IMLS：从无方向点云中学习用于表面重建的隐式移动最小二乘法](https:\u002F\u002Farxiv.org\u002Fabs\u002F2109.04398) |     隐式      | arXiv 2021 |        [项目](https:\u002F\u002Fqiujiedong.github.io\u002Fpublications\u002FNeural_IMLS\u002F)      |\n| [作为可学习内核的神经场用于3D重建](https:\u002F\u002Farxiv.org\u002Fabs\u002F2111.13674) |     隐式      | CVPR 2022 |        [项目](https:\u002F\u002Fnv-tlabs.github.io\u002Fnkf\u002F)      |\n| [POCO：用于表面重建的点卷积](https:\u002F\u002Farxiv.org\u002Fabs\u002F2201.01831) |     隐式      | CVPR 2022 |     [代码](https:\u002F\u002Fgithub.com\u002Fvaleoai\u002FPOCO)    |\n| [GIFS：用于通用形状表示的神经隐式函数](https:\u002F\u002Farxiv.org\u002Fabs\u002F2204.07126) |     隐式      | CVPR 2022 |     [项目](https:\u002F\u002Fjianglongye.com\u002Fgifs\u002F)    |\n| [利用表面先验对稀疏点云进行表面重建](https:\u002F\u002Farxiv.org\u002Fabs\u002F2204.10603) |     隐式      | CVPR 2022 |     [代码](https:\u002F\u002Fgithub.com\u002Fmabaorui\u002FOnSurfacePrior)    |\n| [通过学习预测性上下文先验从点云中重建表面](https:\u002F\u002Farxiv.org\u002Fabs\u002F2204.11015) |     隐式      | CVPR 2022 |     [代码](https:\u002F\u002Fgithub.com\u002Fmabaorui\u002Fpredictablecontextprior)    |\n| [DiGS：用于无方向点云的散度引导型隐式神经表示](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FCVPR2022\u002Fhtml\u002FBen-Shabat_DiGS_Divergence_Guided_Shape_Implicit_Neural_Representation_for_Unoriented_Point_CVPR_2022_paper.html) |     隐式      | CVPR 2022 |     [项目](https:\u002F\u002Fchumbyte.github.io\u002FDiGS-Site\u002F)    |\n| [VisCo网格：利用粘度和共面积网格进行表面重建](https:\u002F\u002Farxiv.org\u002Fabs\u002F2303.14569) |     隐式      | NIPS 2022 | \u002F |\n| [GenSDF：两阶段学习的可泛化有符号距离函数](https:\u002F\u002Farxiv.org\u002Fabs\u002F2206.02780) |     隐式      | NIPS 2022 | [项目](https:\u002F\u002Flight.princeton.edu\u002Fpublication\u002Fgensdf\u002F) |\n| [用于学习自适应体积形状表示的双八叉树图网络](https:\u002F\u002Farxiv.org\u002Fabs\u002F2205.02825) |     隐式      | SIGGRAPH 2022 | [代码](https:\u002F\u002Fgithub.com\u002Fmicrosoft\u002FDualOctreeGNN) |\n| [用于高质量表面重建的深度点云简化](https:\u002F\u002Farxiv.org\u002Fabs\u002F2203.09088) |     隐式      | arXiv 2022 | \u002F |\n| [RangeUDF：从3D点云中进行语义表面重建](https:\u002F\u002Farxiv.org\u002Fabs\u002F2204.09138) |     隐式      | arXiv 2022 | [代码](https:\u002F\u002Fgithub.com\u002Fvlar-group\u002Frangeudf) |\n| [神经泊松：神经场的指示函数](https:\u002F\u002Farxiv.org\u002Fabs\u002F2211.14249) |     隐式      | arXiv 2022 | \u002F |\n| [GeoUDF：通过几何引导的距离表示从3D点云中重建表面](https:\u002F\u002Farxiv.org\u002Fabs\u002F2211.16762) |     隐式      | arXiv 2022 | [代码](https:\u002F\u002Fgithub.com\u002Frsy6318\u002FGeoUDF) |\n| [CircNet：通过外心检测对3D点云进行网格化](https:\u002F\u002Farxiv.org\u002Fabs\u002F2301.09253) |     网格      | ICLR 2023 | \u002F |\n| [ALTO：用于隐式3D重建的交替潜在拓扑](https:\u002F\u002Farxiv.org\u002Fabs\u002F2212.04096) |     隐式      | CVPR 2023 | [项目](http:\u002F\u002Fvisual.ee.ucla.edu\u002Falto.htm\u002F) |\n| [八叉树引导的无方向表面重建](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FCVPR2023\u002Fhtml\u002FKoneputugodage_Octree_Guided_Unoriented_Surface_Reconstruction_CVPR_2023_paper.html) |     隐式      | CVPR 2023 | [项目](https:\u002F\u002Fchumbyte.github.io\u002FOG-INR-Site\u002F) |\n| [无需学习先验，仅从单个稀疏点云中无监督推断有符号距离函数](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FCVPR2023\u002Fhtml\u002FChen_Unsupervised_Inference_of_Signed_Distance_Functions_From_Single_Sparse_Point_CVPR_2023_paper.html) |     隐式      | CVPR 2023 | [代码](https:\u002F\u002Fgithub.com\u002Fchenchao15\u002FNeuralTPS) |\n| 
[神经向量场：通过显式学习实现隐式表示](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FCVPR2023\u002Fhtml\u002FYang_Neural_Vector_Fields_Implicit_Representation_by_Explicit_Learning_CVPR_2023_paper.html) |     隐式      | CVPR 2023 | [代码](https:\u002F\u002Fgithub.com\u002FWi-sc\u002FNVF) |\n| [StEik：稳定神经有符号距离函数的优化并实现更精细的形状表示](https:\u002F\u002Farxiv.org\u002Fabs\u002F2305.18414) |     隐式      | arXiv 2023 | \u002F |\n| [通过噪声到噪声映射从噪声3D点云中学习有符号距离函数](https:\u002F\u002Farxiv.org\u002Fabs\u002F2306.01405) |     隐式      | ICML 2023 | [项目](https:\u002F\u002Fgithub.com\u002Fmabaorui\u002FNoise2NoiseMapping\u002F) |\n\n### RGB-D\n|                            论文                             | 表示方法 | 会议\u002F期刊 |                         项目\u002F代码                         |\n| :----------------------------------------------------------: | :------------: | :-------: | :----------------------------------------------------------: |\n| [神经RGB-D表面重建](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FCVPR2022\u002Fhtml\u002FAzinovic_Neural_RGB-D_Surface_Reconstruction_CVPR_2022_paper.html) |     隐式      | CVPR 2022 |        [项目](https:\u002F\u002Fdazinovic.github.io\u002Fneural-rgbd-surface-reconstruction\u002F)        |\n| [BNV-Fusion：基于双层神经体积融合的稠密3D重建](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FCVPR2022\u002Fhtml\u002FLi_BNV-Fusion_Dense_3D_Reconstruction_Using_Bi-Level_Neural_Volume_Fusion_CVPR_2022_paper.html) |     隐式      | CVPR 2022 |        [代码](https:\u002F\u002Fgithub.com\u002Flikojack\u002Fbnv_fusion)        |\n| [NICE-SLAM：用于SLAM的神经隐式可扩展编码](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FCVPR2022\u002Fhtml\u002FZhu_NICE-SLAM_Neural_Implicit_Scalable_Encoding_for_SLAM_CVPR_2022_paper.html) |     隐式      | CVPR 2022 |   [项目](https:\u002F\u002Fpengsongyou.github.io\u002Fnice-slam)   |\n| [ShAPO：用于多物体形状、外观和姿态优化的隐式表示](https:\u002F\u002Fwww.ecva.net\u002Fpapers\u002Feccv_2022\u002Fpapers_ECCV\u002Fpapers\u002F136620266.pdf) |     隐式      | ECCV 2022 |        [项目](https:\u002F\u002Fzubair-irshad.github.io\u002Fprojects\u002FShAPO.html)        |\n| [CIRCLE：面向大规模室内场景的卷积隐式重建与补全](https:\u002F\u002Fwww.ecva.net\u002Fpapers\u002Feccv_2022\u002Fpapers_ECCV\u002Fhtml\u002F4658_ECCV_2022_paper.php) |     隐式      | ECCV 2022 |        [代码](https:\u002F\u002Fgithub.com\u002Fotakuxiang\u002Fcircle)        |\n| [单目RGB-D相机下动态场景的神经表面重建](https:\u002F\u002Farxiv.org\u002Fabs\u002F2206.15258) |     隐式      | NIPS 2022 |        [项目](https:\u002F\u002Fustc3dv.github.io\u002Fndr\u002F)        |\n| [GO-Surf：用于快速、高保真RGB-D表面重建的神经特征网格优化](https:\u002F\u002Farxiv.org\u002Fabs\u002F2206.14735) |     隐式      | 3DV 2022 |        [项目](https:\u002F\u002Fjingwenwang95.github.io\u002Fgo_surf\u002F)        |\n| [FastSurf：利用逐帧内在精炼和TSDF融合先验学习的快速神经RGB-D表面重建](https:\u002F\u002Farxiv.org\u002Fabs\u002F2303.04508) |     隐式      | arXiv 2023 |        [项目](https:\u002F\u002Frokit-healthcare.github.io\u002FFastSurf\u002F)        |\n| [用于高保真RGB-D监督表面重建的动态体素网格优化](https:\u002F\u002Farxiv.org\u002Fabs\u002F2304.06178) |     隐式      | arXiv 2023 |        \u002F        |\n| [用于3D重建的多视角压缩编码](https:\u002F\u002Farxiv.org\u002Fabs\u002F2301.08247) |     隐式      | CVPR 2023 |        [项目](https:\u002F\u002Fmcc3d.github.io\u002F)        |\n| [MobileBrick：在移动设备上进行3D重建的乐高搭建](https:\u002F\u002Farxiv.org\u002Fabs\u002F2303.01932) |     隐式      | CVPR 2023 |        [项目](https:\u002F\u002Fcode.active.vision\u002FMobileBrick\u002F)        |\n| [TMO：利用可微渲染通过移动设备获取物体的纹理网格](https:\u002F\u002Farxiv.org\u002Fabs\u002F2303.15060) |     网格      | CVPR 
2023 |        [项目](https:\u002F\u002Fjh-choi.github.io\u002FTMO\u002F)        |\n\n\n## 综述\n\n|                            论文                             | 会议\u002F期刊  |\n| :--------------------------------------------------------------------: | :--------: |\n| [基于图像的3D物体重建：深度学习时代的现状与趋势](https:\u002F\u002Farxiv.org\u002Fabs\u002F1906.06543) | TPAMI 2019 |\n| [视觉计算及其他领域的神经场](https:\u002F\u002Farxiv.org\u002Fabs\u002F2111.11426) | arXiv 2021 |\n| [神经渲染的进展](https:\u002F\u002Farxiv.org\u002Fabs\u002F2111.05849) | EUROGRAPHICS 2022 |\n| [点云表面重建：综述与基准测试](https:\u002F\u002Farxiv.org\u002Fabs\u002F2205.02413) | arXiv 2022 |\n| [NeRF：3D视觉中的神经辐射场，全面综述](https:\u002F\u002Farxiv.org\u002Fabs\u002F2210.00379) | arXiv 2022 |\n| [深度学习驱动的网格重建方法综述](https:\u002F\u002Farxiv.org\u002Fabs\u002F2303.02879) | arXiv 2023 |","# awesome-3d-reconstruction-papers 快速上手指南\n\n`awesome-3d-reconstruction-papers` 并非一个可直接运行的软件工具或代码库，而是一个**精选的深度学习时代 3D 重建论文清单**。它旨在帮助开发者快速定位特定任务（如单视图重建、多视图重建、神经表面表示等）的前沿研究、开源代码和项目主页。\n\n本指南将指导你如何获取该列表，并如何利用它快速找到适合你项目的开源实现。\n\n## 环境准备\n\n由于这是一个文档资源库，无需安装复杂的深度学习环境即可浏览列表。但若要运行列表中链接的具体代码项目，通常需要以下基础环境：\n\n*   **操作系统**: Linux (推荐 Ubuntu 18.04\u002F20.04) 或 macOS\n*   **Python**: 3.6 或更高版本\n*   **深度学习框架**: 根据具体论文需求，通常需安装 **PyTorch** 或 **TensorFlow**\n*   **依赖管理**: `pip` 或 `conda`\n*   **GPU**: 大多数 3D 重建模型需要 NVIDIA GPU 及 CUDA 支持\n\n> **提示**：具体的版本要求（如 PyTorch 1.7+）需查看你选定的具体论文对应的 GitHub 仓库 README。\n\n## 获取与使用步骤\n\n### 1. 克隆仓库\n首先，将该论文列表克隆到本地以便查阅：\n\n```bash\ngit clone https:\u002F\u002Fgithub.com\u002Fbluestyle97\u002Fawesome-3d-reconstruction-papers.git\ncd awesome-3d-reconstruction-papers\n```\n\n*如果 GitHub 访问较慢或受限，可在浏览器中直接访问其 GitHub 页面查看渲染后的 Markdown 表格，或使用镜像\u002F代理加速（如有）。*\n\n### 2. 浏览与筛选\n打开根目录下的 `README.md` 文件。该文件按任务类型进行了清晰分类：\n\n*   **Object-level (物体级)**: 包含单视图 (Single-view)、多视图 (Multi-view) 和无监督 (Unsupervised) 方法。\n*   **Scene-level (场景级)**: 针对更大规模场景的重建。\n*   **Neural-Surface (神经表面)**: 涵盖多视图、点云和 RGB-D 输入的隐式表面表示方法。\n*   **Survey (综述)**: 领域内的综述文章。\n\n在表格中，你可以找到每篇论文的以下关键信息：\n*   **Paper**: 论文标题及链接。\n*   **Representation**: 3D 表示形式（如 Point Cloud, Mesh, Voxel, Implicit）。\n*   **Publisher**: 发表会议\u002F期刊及年份。\n*   **Project\u002FCode**: 指向官方项目主页或 GitHub 代码库的链接。\n\n### 3. 运行具体项目示例\n假设你对 **单视图网格重建** 感兴趣，在列表中找到 `Pixel2Mesh` (ECCV 2018)，点击其 **Code** 列的链接进入官方仓库。以下是基于该类项目通用的快速启动流程（以 Pixel2Mesh 为例）：\n\n#### A. 创建虚拟环境并安装依赖\n```bash\nconda create -n p2m python=3.7\nconda activate p2m\n# 按所选项目 README 安装对应的深度学习框架（二选一即可），例如：\npip install torch torchvision\n# 或（若该项目基于 TensorFlow 1.x）：pip install tensorflow-gpu==1.15.0\n# 进入克隆的具体项目目录后安装其余依赖\npip install -r requirements.txt\n```\n\n#### B. 下载预训练模型与数据\n大多数项目会提供预训练权重。通常在项目页面的 \"Usage\" 部分有说明。\n```bash\n# 示例：下载预训练模型（以下链接仅为示意，实际地址请以对应项目 README 为准）\nwget https:\u002F\u002Fgithub.com\u002Fnywang16\u002FPixel2Mesh\u002Freleases\u002Fdownload\u002Fv1.0\u002Fpretrained_model.zip\nunzip pretrained_model.zip\n```\n\n#### C. 
执行推理测试\n使用单张 RGB 图片生成 3D 网格：\n```bash\npython demo.py --input_path .\u002Fdata\u002Fexample.png --output_path .\u002Fresults\u002Foutput.obj --checkpoint .\u002Fpretrained_model\u002Fmodel.ckpt\n```\n\n## 下一步建议\n*   **复现研究**：根据 `README.md` 中的分类，寻找与你当前数据集（如 ShapeNet, ScanNet）匹配的 SOTA 模型。\n*   **对比实验**：利用列表中的 \"Representation\" 列，快速对比体素 (Voxel)、点云 (Point Cloud) 和隐式场 (Implicit) 不同表示方法的优劣。\n*   **贡献社区**：如果你发现了新的相关论文，欢迎通过 Pull Request 向该仓库提交更新。","某自动驾驶初创公司的算法团队正致力于提升车辆对道路障碍物的感知能力，计划引入最新的单目 3D 重建技术，以便仅通过车载摄像头就能精准还原前方车辆的立体结构。\n\n### 没有 awesome-3d-reconstruction-papers 时\n- **文献检索如大海捞针**：研究人员需在 Google Scholar 和 arXiv 上手动筛选海量论文，难以区分哪些是真正针对“单视图物体级重建”的前沿成果，极易遗漏关键研究。\n- **复现成本高昂且盲目**：找到论文后，往往发现官方未开源代码或链接失效，团队不得不花费数周时间尝试复现基础模型，却不知已有成熟的开源项目（如 PointSetGeneration 或 AtlasNet）可直接参考。\n- **技术选型缺乏全局视野**：由于缺乏系统分类，团队难以快速对比体素（Voxel）、点云（Point Cloud）和网格（Mesh）等不同表示方法在特定场景下的优劣，导致技术路线决策缓慢且可能存在偏差。\n- **前沿动态跟进滞后**：深度学习领域迭代极快，人工追踪最新会议（如 CVPR、ICCV）的 3D 重建论文效率低下，容易错失能显著提升精度的新架构。\n\n### 使用 awesome-3d-reconstruction-papers 后\n- **精准定位核心资源**：团队直接查阅\"Object-level -> Single-view\"分类表，瞬间锁定近五年顶会中所有相关论文，并一键获取对应的代码仓库或项目主页，将调研时间从数周缩短至数小时。\n- **高效验证与复用**：借助列表中提供的成熟代码链接（如 SurfNet 或 Image2Mesh），工程师能快速搭建基线系统进行测试，避免了重复造轮子，将精力集中在针对驾驶场景的优化上。\n- **科学决策技术路线**：通过表格清晰对比不同论文的“表示类型”和“发表 venue\"，团队迅速评估出适合实时性要求的点云生成方案，制定了更稳健的开发路径。\n- **同步学术最前沿**：依托该清单的持续更新机制，团队能即时掌握神经表面重建等新兴方向，确保技术方案始终处于行业领先地位。\n\nawesome-3d-reconstruction-papers 将分散的学术成果转化为结构化的工程资产，极大降低了 3D 视觉技术的落地门槛与研发周期。","https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fbluestyle97_awesome-3d-reconstruction-papers_6f2620a9.png","bluestyle97","Jiale Xu","https:\u002F\u002Foss.gittoolsai.com\u002Favatars\u002Fbluestyle97_1bd7fbed.png","Too young, too simple, sometimes naive.","Meshy AI","Shenzhen, China","bluestyle928@gmail.com",null,"https:\u002F\u002Fbluestyle97.github.io","https:\u002F\u002Fgithub.com\u002Fbluestyle97",908,64,"2026-04-14T15:06:06","","未说明",{"notes":92,"python":90,"dependencies":93},"该仓库是一个论文列表合集（Awesome List），而非单个可执行的软件工具。它收录了多篇关于 3D 重建的学术论文及其对应的外部代码仓库链接。因此，本仓库本身没有特定的运行环境、GPU、内存或依赖库需求。若要运行列表中提到的具体算法，需访问各论文对应的独立项目链接，并参考其各自的安装说明。",[],[18],"2026-03-27T02:49:30.150509","2026-04-19T09:20:01.960469",[],[]]