[{"data":1,"prerenderedAt":-1},["ShallowReactive",2],{"similar-KangLiao929--Awesome-Deep-Camera-Calibration":3,"tool-KangLiao929--Awesome-Deep-Camera-Calibration":64},[4,17,26,40,48,56],{"id":5,"name":6,"github_repo":7,"description_zh":8,"stars":9,"difficulty_score":10,"last_commit_at":11,"category_tags":12,"status":16},3808,"stable-diffusion-webui","AUTOMATIC1111\u002Fstable-diffusion-webui","stable-diffusion-webui 是一个基于 Gradio 构建的网页版操作界面，旨在让用户能够轻松地在本地运行和使用强大的 Stable Diffusion 图像生成模型。它解决了原始模型依赖命令行、操作门槛高且功能分散的痛点，将复杂的 AI 绘图流程整合进一个直观易用的图形化平台。\n\n无论是希望快速上手的普通创作者、需要精细控制画面细节的设计师，还是想要深入探索模型潜力的开发者与研究人员，都能从中获益。其核心亮点在于极高的功能丰富度：不仅支持文生图、图生图、局部重绘（Inpainting）和外绘（Outpainting）等基础模式，还独创了注意力机制调整、提示词矩阵、负向提示词以及“高清修复”等高级功能。此外，它内置了 GFPGAN 和 CodeFormer 等人脸修复工具，支持多种神经网络放大算法，并允许用户通过插件系统无限扩展能力。即使是显存有限的设备，stable-diffusion-webui 也提供了相应的优化选项，让高质量的 AI 艺术创作变得触手可及。",162132,3,"2026-04-05T11:01:52",[13,14,15],"开发框架","图像","Agent","ready",{"id":18,"name":19,"github_repo":20,"description_zh":21,"stars":22,"difficulty_score":23,"last_commit_at":24,"category_tags":25,"status":16},2271,"ComfyUI","Comfy-Org\u002FComfyUI","ComfyUI 是一款功能强大且高度模块化的视觉 AI 引擎，专为设计和执行复杂的 Stable Diffusion 图像生成流程而打造。它摒弃了传统的代码编写模式，采用直观的节点式流程图界面，让用户通过连接不同的功能模块即可构建个性化的生成管线。\n\n这一设计巧妙解决了高级 AI 绘图工作流配置复杂、灵活性不足的痛点。用户无需具备编程背景，也能自由组合模型、调整参数并实时预览效果，轻松实现从基础文生图到多步骤高清修复等各类复杂任务。ComfyUI 拥有极佳的兼容性，不仅支持 Windows、macOS 和 Linux 全平台，还广泛适配 NVIDIA、AMD、Intel 及苹果 Silicon 等多种硬件架构，并率先支持 SDXL、Flux、SD3 等前沿模型。\n\n无论是希望深入探索算法潜力的研究人员和开发者，还是追求极致创作自由度的设计师与资深 AI 绘画爱好者，ComfyUI 都能提供强大的支持。其独特的模块化架构允许社区不断扩展新功能，使其成为当前最灵活、生态最丰富的开源扩散模型工具之一，帮助用户将创意高效转化为现实。",107662,2,"2026-04-03T11:11:01",[13,14,15],{"id":27,"name":28,"github_repo":29,"description_zh":30,"stars":31,"difficulty_score":23,"last_commit_at":32,"category_tags":33,"status":16},2268,"ML-For-Beginners","microsoft\u002FML-For-Beginners","ML-For-Beginners 是由微软推出的一套系统化机器学习入门课程，旨在帮助零基础用户轻松掌握经典机器学习知识。这套课程将学习路径规划为 12 周，包含 26 节精炼课程和 52 
道配套测验，内容涵盖从基础概念到实际应用的完整流程，有效解决了初学者面对庞大知识体系时无从下手、缺乏结构化指导的痛点。\n\n无论是希望转型的开发者、需要补充算法背景的研究人员，还是对人工智能充满好奇的普通爱好者，都能从中受益。课程不仅提供了清晰的理论讲解，还强调动手实践，让用户在循序渐进中建立扎实的技能基础。其独特的亮点在于强大的多语言支持，通过自动化机制提供了包括简体中文在内的 50 多种语言版本，极大地降低了全球不同背景用户的学习门槛。此外，项目采用开源协作模式，社区活跃且内容持续更新，确保学习者能获取前沿且准确的技术资讯。如果你正寻找一条清晰、友好且专业的机器学习入门之路，ML-For-Beginners 将是理想的起点。",84991,"2026-04-05T10:45:23",[14,34,35,36,15,37,38,13,39],"数据工具","视频","插件","其他","语言模型","音频",{"id":41,"name":42,"github_repo":43,"description_zh":44,"stars":45,"difficulty_score":10,"last_commit_at":46,"category_tags":47,"status":16},3128,"ragflow","infiniflow\u002Fragflow","RAGFlow 是一款领先的开源检索增强生成（RAG）引擎，旨在为大语言模型构建更精准、可靠的上下文层。它巧妙地将前沿的 RAG 技术与智能体（Agent）能力相结合，不仅支持从各类文档中高效提取知识，还能让模型基于这些知识进行逻辑推理和任务执行。\n\n在大模型应用中，幻觉问题和知识滞后是常见痛点。RAGFlow 通过深度解析复杂文档结构（如表格、图表及混合排版），显著提升了信息检索的准确度，从而有效减少模型“胡编乱造”的现象，确保回答既有据可依又具备时效性。其内置的智能体机制更进一步，使系统不仅能回答问题，还能自主规划步骤解决复杂问题。\n\n这款工具特别适合开发者、企业技术团队以及 AI 研究人员使用。无论是希望快速搭建私有知识库问答系统，还是致力于探索大模型在垂直领域落地的创新者，都能从中受益。RAGFlow 提供了可视化的工作流编排界面和灵活的 API 接口，既降低了非算法背景用户的上手门槛，也满足了专业开发者对系统深度定制的需求。作为基于 Apache 2.0 协议开源的项目，它正成为连接通用大模型与行业专有知识之间的重要桥梁。",77062,"2026-04-04T04:44:48",[15,14,13,38,37],{"id":49,"name":50,"github_repo":51,"description_zh":52,"stars":53,"difficulty_score":10,"last_commit_at":54,"category_tags":55,"status":16},519,"PaddleOCR","PaddlePaddle\u002FPaddleOCR","PaddleOCR 是一款基于百度飞桨框架开发的高性能开源光学字符识别工具包。它的核心能力是将图片、PDF 等文档中的文字提取出来，转换成计算机可读取的结构化数据，让机器真正“看懂”图文内容。\n\n面对海量纸质或电子文档，PaddleOCR 解决了人工录入效率低、数字化成本高的问题。尤其在人工智能领域，它扮演着连接图像与大型语言模型（LLM）的桥梁角色，能将视觉信息直接转化为文本输入，助力智能问答、文档分析等应用场景落地。\n\nPaddleOCR 适合开发者、算法研究人员以及有文档自动化需求的普通用户。其技术优势十分明显：不仅支持全球 100 多种语言的识别，还能在 Windows、Linux、macOS 等多个系统上运行，并灵活适配 CPU、GPU、NPU 等各类硬件。作为一个轻量级且社区活跃的开源项目，PaddleOCR 既能满足快速集成的需求，也能支撑前沿的视觉语言研究，是处理文字识别任务的理想选择。",74939,"2026-04-05T23:16:38",[38,14,13,37],{"id":57,"name":58,"github_repo":59,"description_zh":60,"stars":61,"difficulty_score":23,"last_commit_at":62,"category_tags":63,"status":16},2471,"tesseract","tesseract-ocr\u002Ftesseract","Tesseract 
是一款历史悠久且备受推崇的开源光学字符识别（OCR）引擎，最初由惠普实验室开发，后由 Google 维护，目前由全球社区共同贡献。它的核心功能是将图片中的文字转化为可编辑、可搜索的文本数据，有效解决了从扫描件、照片或 PDF 文档中提取文字信息的难题，是数字化归档和信息自动化的重要基础工具。\n\n在技术层面，Tesseract 展现了强大的适应能力。从版本 4 开始，它引入了基于长短期记忆网络（LSTM）的神经网络 OCR 引擎，显著提升了行识别的准确率；同时，为了兼顾旧有需求，它依然支持传统的字符模式识别引擎。Tesseract 原生支持 UTF-8 编码，开箱即用即可识别超过 100 种语言，并兼容 PNG、JPEG、TIFF 等多种常见图像格式。输出方面，它灵活支持纯文本、hOCR、PDF、TSV 等多种格式，方便后续数据处理。\n\nTesseract 主要面向开发者、研究人员以及需要构建文档处理流程的企业用户。由于它本身是一个命令行工具和库（libtesseract），不包含图形用户界面（GUI），因此最适合具备一定编程能力的技术人员集成到自动化脚本或应用程序中",73286,"2026-04-03T01:56:45",[13,14],{"id":65,"github_repo":66,"name":67,"description_en":68,"description_zh":69,"ai_summary_zh":70,"readme_en":71,"readme_zh":72,"quickstart_zh":73,"use_case_zh":74,"hero_image_url":75,"owner_login":76,"owner_name":77,"owner_avatar_url":78,"owner_bio":79,"owner_company":80,"owner_location":81,"owner_email":82,"owner_twitter":76,"owner_website":83,"owner_url":84,"languages":82,"stars":85,"forks":86,"last_commit_at":87,"license":82,"difficulty_score":88,"env_os":89,"env_gpu":90,"env_ram":90,"env_deps":91,"category_tags":94,"github_topics":82,"view_count":23,"oss_zip_url":82,"oss_zip_packed_at":82,"status":16,"created_at":95,"updated_at":96,"faqs":97,"releases":118},1424,"KangLiao929\u002FAwesome-Deep-Camera-Calibration","Awesome-Deep-Camera-Calibration","Deep Learning for Camera Calibration and Beyond: A Survey","Awesome-Deep-Camera-Calibration 是一个专注于深度学习相机标定领域的开源资源汇总项目，旨在为研究人员和开发者提供一份全面、前沿的学术指南。它核心解决了传统相机标定方法在复杂场景下适应性不足的问题，系统梳理了如何利用深度学习技术实现更精准的参数估计、姿态推算乃至三维重建。\n\n该项目不仅收录了经典的几何视觉理论，更构建了一套完整的深度学习标定分类体系，涵盖了从基础原理、最新算法模型到专用数据集和基准测试（Benchmark）的全方位内容。其独特亮点在于持续更新的文献综述，已纳入超过 100 篇 2023 至 2024 年的最新论文，并深入探讨了神经辐射场（NeRF）等新兴技术与标定任务的结合。此外，项目还提出了具有潜力的新型标定表示方法，有望替代传统的神经网络优化目标。\n\n无论是从事计算机视觉、摄影测量、具身智能研究的学者，还是希望深入了解相机成像原理的工程师，都能从中获得宝贵的参考。通过结构化的知识整理和开放的社区维护，Awesome-Deep-Camera-Calibration 降低了进入该专业领域的门槛，是推动空间智能与多模态视觉技术发展的重要基础设施","Awesome-Deep-Camera-Calibration 
是一个专注于深度学习相机标定领域的开源资源汇总项目，旨在为研究人员和开发者提供一份全面、前沿的学术指南。它核心解决了传统相机标定方法在复杂场景下适应性不足的问题，系统梳理了如何利用深度学习技术实现更精准的参数估计、姿态推算乃至三维重建。\n\n该项目不仅收录了经典的几何视觉理论，更构建了一套完整的深度学习标定分类体系，涵盖了从基础原理、最新算法模型到专用数据集和基准测试（Benchmark）的全方位内容。其独特亮点在于持续更新的文献综述，已纳入超过 100 篇 2023 至 2024 年的最新论文，并深入探讨了神经辐射场（NeRF）等新兴技术与标定任务的结合。此外，项目还提出了具有潜力的新型标定表示方法，有望替代传统的神经网络优化目标。\n\n无论是从事计算机视觉、摄影测量、具身智能研究的学者，还是希望深入了解相机成像原理的工程师，都能从中获得宝贵的参考。通过结构化的知识整理和开放的社区维护，Awesome-Deep-Camera-Calibration 降低了进入该专业领域的门槛，是推动空间智能与多模态视觉技术发展的重要基础设施。","# Awesome-Deep-Camera-Calibration\n[![arXiv](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FarXiv-2303.10559-b31b1b.svg)](https:\u002F\u002Farxiv.org\u002Fabs\u002F2303.10559)\n[![Survey](https:\u002F\u002Fcdn.rawgit.com\u002Fsindresorhus\u002Fawesome\u002Fd7305f38d29fed78fa85652e3a63e154dd8e8829\u002Fmedia\u002Fbadge.svg)](https:\u002F\u002Fgithub.com\u002Fsindresorhus\u002Fawesome) \n[![Maintenance](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FMaintained%3F-yes-green.svg)](https:\u002F\u002FGitHub.com\u002FNaereen\u002FStrapDown.js\u002Fgraphs\u002Fcommit-activity) \n[![PR's Welcome](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FPRs-welcome-brightgreen.svg?style=flat)](http:\u002F\u002Fmakeapullrequest.com) \n[![GitHub license](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FKangLiao929_Awesome-Deep-Camera-Calibration_readme_f15977e92e8a.png)](https:\u002F\u002Fgithub.com\u002FNaereen\u002FStrapDown.js\u002Fblob\u002Fmaster\u002FLICENSE)\n\u003C!-- [![made-with-Markdown](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FMade%20with-Markdown-1f425f.svg)](http:\u002F\u002Fcommonmark.org) -->\n\u003C!-- [![Documentation Status](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FKangLiao929_Awesome-Deep-Camera-Calibration_readme_13d664e1afd7.png)](http:\u002F\u002Fansicolortags.readthedocs.io\u002F?badge=latest) -->\n\n![Overview](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FKangLiao929_Awesome-Deep-Camera-Calibration_readme_e237075102d0.png)\n**\u003Cdiv 
align=\"center\">Popular calibration objectives, models, and extended applications in camera calibration\u003C\u002Fdiv>**\n\nMore content and details can be found in our Survey Paper: [Deep Learning for Camera Calibration and Beyond: A Survey](https:\u002F\u002Farxiv.org\u002Fabs\u002F2303.10559). \n\n## 🚩Contents \n1. [Basics](#Basics)\n2. [Taxonomy and Statistics](#Taxonomy-and-Statistics)\n3. [Benchmark](#Benchmark)\n4. [Novel Calibration Representations](#Novel-Calibration-Representations)\n5. [Methods](#Methods)\n6. [Datasets](#Datasets)\n7. [Citation](#Citation)\n8. [Handbook in Chinese](#Handbook-in-Chinese)\n9. [Project and Course using Our Survey](#Project-and-Course-using-Our-Survey)\n\n## 📢 News\nOur recent work **Puffin** unifies camera-centric understanding (camera calibration, pose estimation) and generation (camera-controllable T2I and I2I generation) within a cohesive multimodal framework. It achieves more precise understanding and generation performance through our proposed *thinking with camera*, and provides insights on the meaningful mutual effect among multimodal tasks. If you are interested in camera-related 3D vision, photography, embodied AI, and spatial intelligence, please check out more details [here](https:\u002F\u002Fkangliao929.github.io\u002Fprojects\u002Fpuffin\u002F).\n\n## 📝 Changelog\n- [x] 2025.02.24: Update the [Novel Calibration Representations](#Novel-Calibration-Representations) in learning-based camera calibration, which show the potential to replace the traditional calibration objectives for neural networks.\n- [x] 2025.02.24: Update the literature reviews for 2023 and 2024 (more than 100 new papers!). 
Please refer to our [arXiv-v3 version](https:\u002F\u002Farxiv.org\u002Fabs\u002F2303.10559v3) for more details.\n- [x] 2024.06.05: Update the survey paper (supplementary material) with the evaluation on the constructed benchmark.\n- [x] 2024.06.05: Update the survey paper (Section 3.3.3-Calibration with Reconstruction) with more technical discussion of NeRF, especially on camera parameter initialization.\n- [x] 2024.06.04: More details about the calibrated camera parameters are updated in our [Benchmark](https:\u002F\u002Fgithub.com\u002FKangLiao929\u002FAwesome-Deep-Camera-Calibration\u002Fblob\u002Fmain\u002FBenchmark\u002Freadme.md).\n- [x] 2024.01.05: The benchmark is released. Please refer to [Benchmark](https:\u002F\u002Fgithub.com\u002FKangLiao929\u002FAwesome-Deep-Camera-Calibration\u002Fblob\u002Fmain\u002FBenchmark\u002Freadme.md) for the dataset link and more details.\n- [x] 2023.03.19: The arXiv version of the survey is online.\n\n## 📖Basics\n* [Multiple View Geometry in Computer Vision](https:\u002F\u002Fcseweb.ucsd.edu\u002Fclasses\u002Fsp13\u002Fcse252B-a\u002FHZ2eCh2.pdf) - Hartley, R., & Zisserman, A. (2004)\n* [A Flexible New Technique for Camera Calibration](https:\u002F\u002Fwww.microsoft.com\u002Fen-us\u002Fresearch\u002Fwp-content\u002Fuploads\u002F2016\u002F02\u002Ftr98-71.pdf) - Zhengyou Zhang. (2000)\n\n## 📊Taxonomy and Statistics\n![Overview](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FKangLiao929_Awesome-Deep-Camera-Calibration_readme_9ed74adb91e5.png)\n**\u003Cdiv align=\"center\">The structural and hierarchical taxonomy of camera calibration with deep learning. 
Some classical methods are listed under each category.\u003C\u002Fdiv>**\n\n![Overview](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FKangLiao929_Awesome-Deep-Camera-Calibration_readme_38f321209cd1.png)\n**\u003Cdiv align=\"center\">A concise milestone of deep learning-based camera calibration methods.\u003C\u002Fdiv>** \nWe classify all methods based on the uncalibrated camera model and its extended applications: standard model, distortion model, cross-view model, and cross-sensor model.\n\n![Overview](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FKangLiao929_Awesome-Deep-Camera-Calibration_readme_2caefaec81b7.png)\n**\u003Cdiv align=\"center\">A statistical analysis of deep learning-based camera calibration methods.\u003C\u002Fdiv>** \nWe summarize all literature based on the number of publications per year, calibration objectives, simulation of the dataset, and learning strategy.\n\n## 📁Benchmark\n![Overview](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FKangLiao929_Awesome-Deep-Camera-Calibration_readme_cbfb3ead8193.png)\n**\u003Cdiv align=\"center\">Overview of our collected benchmark, which covers all models reviewed in this survey.\u003C\u002Fdiv>** \nIn this dataset, the images and videos were captured by diverse cameras in different environments. Accurate ground truth and labels are provided for each sample. 
Please refer to [Benchmark](https:\u002F\u002Fgithub.com\u002FKangLiao929\u002FAwesome-Deep-Camera-Calibration\u002Fblob\u002Fmain\u002FBenchmark\u002Freadme.md) for the dataset link and more details.\n\n\n## 🏁Novel Calibration Representations\n![Overview](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FKangLiao929_Awesome-Deep-Camera-Calibration_readme_f7588231e6a7.png)\n**\u003Cdiv align=\"center\">Novel calibration representations are designed to replace the traditional calibration objectives.\u003C\u002Fdiv>** \nRecent learning-based camera calibration works tend to design a novel geometry field to replace the traditional camera parameters (*i.e.*, intrinsics and extrinsics) as the new learning target, inspired by priors of camera models or the perspective properties of captured images, such as the [distortion distribution map](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2007.10689), [perspective field](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2212.03239), [incidence field](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2306.10988), [camera rays](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2402.14817), and [camera image](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2411.17240). These fields represent a pixel-wise or patch-wise parametrization of the intrinsic and\u002For extrinsic invariants. 
They show an explicit relationship to the image details and are learning-friendly for neural networks.\n\n\n## 📸Methods\n\n|Year|Publication|Title|Abbreviation|Objective|Platform|Network|\n|---|---|---|---|---|---|---|\n|2015|[ICIP](https:\u002F\u002Fieeexplore.ieee.org\u002Fabstract\u002Fdocument\u002F7351024)|Deepfocal: A method for direct focal length estimation|DeepFocal|Intrinsics|Caffe|AlexNet|\n|2015|[ICCV](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_iccv_2015\u002Fhtml\u002FKendall_PoseNet_A_Convolutional_ICCV_2015_paper.html)|Posenet: A convolutional network for real-time 6-dof camera relocalization|PoseNet|Extrinsics|Caffe|GoogLeNet|\n|2016|[BMVC](https:\u002F\u002Farxiv.org\u002Fabs\u002F1604.02129)|Horizon lines in the wild|DeepHorizon|Extrinsics|Caffe|GoogLeNet|\n|2016|[CVPR](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2016\u002Fhtml\u002FZhai_Detecting_Vanishing_Points_CVPR_2016_paper.html)|Detecting vanishing points using global image context in a non-manhattan world|DeepVP|Extrinsics|Caffe|AlexNet|\n|2016|[ACCV](https:\u002F\u002Flink.springer.com\u002Fchapter\u002F10.1007\u002F978-3-319-54187-7_3)|Radial lens distortion correction using convolutional neural networks trained with synthesized images|Rong et al.|Distortion coefficients|Caffe|AlexNet|\n|2016|[RSSW](https:\u002F\u002Farxiv.org\u002Fabs\u002F1606.03798)|Deep image homography estimation|DHN|Projection matrixs|Caffe|VGG|\n|2017|[CVPR](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2017\u002Fhtml\u002FChang_CLKN_Cascaded_Lucas-Kanade_CVPR_2017_paper.html)|Clkn: Cascaded lucas-kanade networks for image alignment|CLKN|Projection matrixs|Torch|CNN + Lucas-Kanade layer|\n|2017|[ICCVW](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_ICCV_2017_workshops\u002Fw17\u002Fhtml\u002FNowruzi_Homography_Estimation_From_ICCV_2017_paper.html)|Homography estimation from image pairs with hierarchical convolutional networks|HierarchicalNet|Projection 
matrixs|TensorFlow|VGG|\n|2017|[CVPR](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2017\u002Fpapers\u002FRengarajan_Unrolling_the_Shutter_CVPR_2017_paper.pdf)|Unrolling the Shutter: CNN to Correct Motion Distortions|URS-CNN|Undistortion|Torch|CNNs|\n|2017|[IV](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1707.03167.pdf)|RegNet: Multimodal sensor registration using deep neural networks|RegNet|Camera + LiDAR|Caffe|CNNs|\n|2018|[CVPR](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2018\u002Fhtml\u002FHold-Geoffroy_A_Perceptual_Measure_CVPR_2018_paper.html)|A perceptual measure for deep single image camera calibration|Hold-Geoffroy et al.|Intrinsics + Extrinsics| |DenseNet|\n|2018|[CVMP](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fabs\u002F10.1145\u002F3278471.3278479)|DeepCalib: a deep learning approach for automatic intrinsic calibration of wide field-of-view cameras|DeepCalib|Intrinsics + Distortion coefficients|TensorFlow|Inception-V3|\n|2018|[ECCV](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_ECCV_2018\u002Fhtml\u002FXiaoqing_Yin_FishEyeRecNet_A_Multi-Context_ECCV_2018_paper.html)|Fisheyerecnet: A multi-context collaborative deep network for fisheye image rectification|FishEyeRecNet|Distortion coefficients|Caffe|VGG|\n|2018|[ICPR](https:\u002F\u002Fieeexplore.ieee.org\u002Fabstract\u002Fdocument\u002F8545218)|Radial lens distortion correction by adding a weight layer with inverted foveal models to convolutional neural networks|Shi et al.|Distortion coefficients|PyTorch|ResNet|\n|2018|[ECCV](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_ECCV_2018\u002Fhtml\u002FRene_Ranftl_Deep_Fundamental_Matrix_ECCV_2018_paper.html)|Deep fundamental matrix estimation|DeepFM|Projection matrixs|PyTorch|ResNet|\n|2018|[ECCVW](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_eccv_2018_workshops\u002Fw16\u002Fhtml\u002FPoursaeed_Deep_Fundamental_Matrix_Estimation_without_Correspondences_ECCVW_2018_paper.html)|Deep fundamental matrix estimation 
without correspondences|Poursaeed et al.|Projection matrixs| |CNNs|\n|2018|[RAL](https:\u002F\u002Fieeexplore.ieee.org\u002Fabstract\u002Fdocument\u002F8302515)|Unsupervised deep homography: A fast and robust homography estimation model|UDHN|Projection matrixs|TensorFlow|VGG|\n|2018|[ACCV](https:\u002F\u002Flink.springer.com\u002Fchapter\u002F10.1007\u002F978-3-030-20876-9_36)|Rethinking planar homography estimation using perspective fields|PFNet|Projection matrixs|TensorFlow|FCN|\n|2018|[IROS](https:\u002F\u002Fieeexplore.ieee.org\u002Fstamp\u002Fstamp.jsp?tp=&arnumber=8593693)|CalibNet: Geometrically Supervised Extrinsic Calibration using 3D Spatial Transformer Networks|CalibNet|Camera + LiDAR|TensorFlow|ResNet|\n|2018|[ICRA](https:\u002F\u002Fieeexplore.ieee.org\u002Fdocument\u002F8460499)|DeepVP: Deep Learning for Vanishing Point Detection on 1 Million Street View Images|Chang et al.|Standard|Matconvnet|AlexNet|\n|2019|[CVPR](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_CVPR_2019\u002Fhtml\u002FLopez_Deep_Single_Image_Camera_Calibration_With_Radial_Distortion_CVPR_2019_paper.html)|Deep single image camera calibration with radial distortion|Lopez et al.|Intrinsics + Extrinsics + Distortion coefficients|PyTorch|DenseNet|\n|2019|[ICCV](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_ICCV_2019\u002Fhtml\u002FXian_UprightNet_Geometry-Aware_Camera_Orientation_Estimation_From_Single_Images_ICCV_2019_paper.html)|UprightNet: geometry-aware camera orientation estimation from single images|UprightNet|Extrinsics|PyTorch|U-Net|\n|2019|[IROS](https:\u002F\u002Fieeexplore.ieee.org\u002Fabstract\u002Fdocument\u002F8967912)|Degeneracy in self-calibration revisited and a deep learning solution for uncalibrated slam|Zhuang et al.|Intrinsics + Distortion coefficients|PyTorch|ResNet|\n|2019|[PRL](https:\u002F\u002Fwww.sciencedirect.com\u002Fscience\u002Farticle\u002Fabs\u002Fpii\u002FS0141938221001062)|Self-Supervised deep homography estimation with 
invertibility constraints|SSR-Net|Projection matrixs|PyTorch|ResNet|\n|2019|[ICCVW](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_ICCVW_2019\u002Fhtml\u002FGMDL\u002FAbbas_A_Geometric_Approach_to_Obtain_a_Birds_Eye_View_From_ICCVW_2019_paper.html)|A geometric approach to obtain a bird's eye view from an image|Abbas et al.|Projection matrixs|TensorFlow|CNNs|\n|2019|[TCSVT](https:\u002F\u002Fieeexplore.ieee.org\u002Fabstract\u002Fdocument\u002F8636975)|DR-GAN: Automatic radial distortion rectification using conditional GAN in real-time|DR-GAN|Undistortion|TensorFlow|GANs|\n|2019|[TCSVT](https:\u002F\u002Fieeexplore.ieee.org\u002Fabstract\u002Fdocument\u002F8926530)|Distortion rectification from static to dynamic: A distortion sequence construction perspective|STD|Undistortion|TensorFlow|GANs|\n|2019|[VR](https:\u002F\u002Fieeexplore.ieee.org\u002Fabstract\u002Fdocument\u002F8798326)|Deep360Up: A deep learning-based approach for automatic VR image upright adjustment|Deep360Up|Extrinsics| |DenseNet|\n|2019|[JVCIR](https:\u002F\u002Fwww.sciencedirect.com\u002Fscience\u002Farticle\u002Fabs\u002Fpii\u002FS104732031930313X)|Unsupervised fisheye image correction through bidirectional loss with geometric prior|UnFishCor|Distortion coefficients|TensorFlow|VGG|\n|2019|[CVPR](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_CVPR_2019\u002Fhtml\u002FLi_Blind_Geometric_Distortion_Correction_on_Images_Through_Deep_Learning_CVPR_2019_paper.html)|Blind geometric distortion correction on images through deep learning|BlindCor|Undistortion|PyTorch|U-Net|\n|2019|[CVPR](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_CVPR_2019\u002Fhtml\u002FZhuang_Learning_Structure-And-Motion-Aware_Rolling_Shutter_Correction_CVPR_2019_paper.html)|Learning structure-and-motion-aware rolling shutter 
correction|RSC-Net|Undistortion|PyTorch|ResNet|\n|2019|[CVPR](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_CVPR_2019\u002Fhtml\u002FXue_Learning_to_Calibrate_Straight_Lines_for_Fisheye_Image_Rectification_CVPR_2019_paper.html)|Learning to calibrate straight lines for fisheye image rectification|Xue et al.|Distortion coefficients|PyTorch|ResNet|\n|2019|[ICCV](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_ICCV_2019\u002Fhtml\u002FZhao_Learning_Perspective_Undistortion_of_Portraits_ICCV_2019_paper.html)|Learning perspective undistortion of portraits|Zhao et al.|Intrinsics + Undistortion||VGG + U-Net|\n|2019|[NeurIPS](https:\u002F\u002Fproceedings.neurips.cc\u002Fpaper\u002F2019\u002Ffile\u002F8e6b42f1644ecb1327dc03ab345e618b-Paper.pdf)|NeurVPS: Neural Vanishing Point Scanning via Conic Convolution|NeurVPS|Standard|PyTorch|CNNs|\n|2020|[CVPR](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_CVPR_2020\u002Fhtml\u002FSha_End-to-End_Camera_Calibration_for_Broadcast_Videos_CVPR_2020_paper.html)|End-to-end camera calibration for broadcast videos|Sha et al.|Projection matrixs|TensorFlow|Siamese-Net + U-Net|\n|2020|[ECCV](https:\u002F\u002Flink.springer.com\u002Fchapter\u002F10.1007\u002F978-3-030-58610-2_32)|Neural geometric parser for single image camera calibration|Lee et al.|Intrinsics + Extrinsics| |PointNet + CNNs|\n|2020|[ICRA](https:\u002F\u002Fieeexplore.ieee.org\u002Fabstract\u002Fdocument\u002F9197378)|Learning camera miscalibration detection|MisCaliDet|Average pixel position difference|TensorFlow|CNNs|\n|2020|[WACV](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_WACV_2020\u002Fhtml\u002FZhang_DeepPTZ_Deep_Self-Calibration_for_PTZ_Cameras_WACV_2020_paper.html)|DeepPTZ: deep self-calibration for PTZ cameras|DeepPTZ|Intrinsics + Extrinsics + Distortion 
coefficients|PyTorch|Inception-V3|\n|2020|[CVPR](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_CVPR_2020\u002Fhtml\u002FLe_Deep_Homography_Estimation_for_Dynamic_Scenes_CVPR_2020_paper.html)|Deep homography estimation for dynamic scenes|MHN|Projection matrixs|TensorFlow|VGG|\n|2020|[ACMMM](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002F10.1145\u002F3394171.3413870)|SRHEN: Stepwise-Refining Homography Estimation Network via Parsing Geometric Correspondences in Deep Latent Space|SRHEN|Projection matrixs| |CNNs|\n|2020|[ECCV](https:\u002F\u002Flink.springer.com\u002Fchapter\u002F10.1007\u002F978-3-030-58604-1_35)|360∘ camera alignment via segmentation|Davidson et al.|Extrinsics| |FCN|\n|2020|[ECCV](https:\u002F\u002Flink.springer.com\u002Fchapter\u002F10.1007\u002F978-3-030-58452-8_38)|Content-aware unsupervised deep homography estimation|CA-UDHN|Projection matrixs|PyTorch|FCN + ResNet|\n|2020|[IROS](https:\u002F\u002Fieeexplore.ieee.org\u002Fabstract\u002Fdocument\u002F9341229)|Deep keypoint-based camera pose estimation with geometric constraints|DeepFEPE|Extrinsics|PyTorch|VGG + PointNet|\n|2020|[TIP](https:\u002F\u002Fieeexplore.ieee.org\u002Fabstract\u002Fdocument\u002F8962122)|Model-free distortion rectification framework bridged by distortion distribution map|DDM|Undistortion|Tensorflow|GANs|\n|2020|[TIP](https:\u002F\u002Fieeexplore.ieee.org\u002Fabstract\u002Fdocument\u002F9184235)|Deep face rectification for 360° dual-fisheye cameras|Li et al.|Undistortion| |CNNs|\n|2020|[ICPR](https:\u002F\u002Fieeexplore.ieee.org\u002Fabstract\u002Fdocument\u002F9412305)|Position-aware and symmetry enhanced GAN for radial distortion correction|PSE-GAN|Undistortion| |GANs|\n|2020|[ICIP](https:\u002F\u002Fieeexplore.ieee.org\u002Fabstract\u002Fdocument\u002F9191107)|A simple yet effective pipeline for radial distortion 
correction|RDC-Net|Undistortion|PyTorch|ResNet|\n|2020|[ICASSP](https:\u002F\u002Fieeexplore.ieee.org\u002Fabstract\u002Fdocument\u002F9054191)|Self-supervised deep learning for fisheye image rectification|FE-GAN|Undistortion|PyTorch|GANs|\n|2020|[CVPR](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_CVPR_2020\u002Fhtml\u002FZhao_RDCFace_Radial_Distortion_Correction_for_Face_Recognition_CVPR_2020_paper.html)|RDCFace: radial distortion correction for face recognition|RDCFace|Undistortion| |ResNet|\n|2020|[arXiv](https:\u002F\u002Farxiv.org\u002Fabs\u002F2003.11386)|Fisheye distortion rectification from deep straight lines|LaRecNet|Distortion coefficients|PyTorch|ResNet|\n|2020|[CVPR](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_CVPR_2020\u002Fhtml\u002FBaradad_Height_and_Uprightness_Invariance_for_3D_Prediction_From_a_Single_CVPR_2020_paper.html)|Height and uprightness invariance for 3d prediction from a single view|Baradad et al.|Intrinsics + Extrinsics|PyTorch|CNNs|\n|2020|[CVPR](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_CVPR_2020\u002Fhtml\u002FZheng_What_Does_Plate_Glass_Reveal_About_Camera_Calibration_CVPR_2020_paper.html)|What does plate glass reveal about camera calibration?|Zheng et al.|Intrinsics + Extrinsics| |CNNs|\n|2020|[ECCV](https:\u002F\u002Flink.springer.com\u002Fchapter\u002F10.1007\u002F978-3-030-58621-8_19)|Single view metrology in the wild|Zhu et al.|Intrinsics + Extrinsics|PyTorch|CNNs + PointNet|\n|2020|[CVPR](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_CVPR_2020\u002Fpapers\u002FLiu_Deep_Shutter_Unrolling_Network_CVPR_2020_paper.pdf)|Deep Shutter Unrolling Network|DeepUnrollNet|Undistortion|PyTorch|FCN|\n|2020|[RAL](https:\u002F\u002Fieeexplore.ieee.org\u002Fstamp\u002Fstamp.jsp?tp=&arnumber=9206138)|RGGNet: Tolerance Aware LiDAR-Camera Online Calibration With Geometric Deep Learning and Generative Model|RGGNet|Camera + 
LiDAR|Tensorflow|ResNet|\n|2020|[IROS](https:\u002F\u002Fieeexplore.ieee.org\u002Fdocument\u002F9341147)|CalibRCNN: Calibrating Camera and LiDAR by Recurrent Convolutional Neural Network and Geometric Constraints|CalibRCNN|Camera + LiDAR|Tensorflow|RNN|\n|2020|[ICRA](https:\u002F\u002Fieeexplore.ieee.org\u002Fdocument\u002F9196627)|Online Camera-LiDAR Calibration with Sensor Semantic Information|SSI-Calib|Camera-LiDAR|Tensorflow|CNNs|\n|2020|[arXiv](https:\u002F\u002Farxiv.org\u002Fabs\u002F2003.04260)|SOIC: Semantic Online Initialization and Calibration for LiDAR and Camera|SOIC|Camera-LiDAR|-|ResNet+PointRCNN|\n|2020|[ICPR](https:\u002F\u002Fieeexplore.ieee.org\u002Fdocument\u002F9412653)|NetCalib: A Novel Approach for LiDAR-Camera Auto-calibration Based on Deep Learning|NetCalib|Camera-LiDAR|PyTorch|CNNs|\n|2021|[TCI](https:\u002F\u002Fieeexplore.ieee.org\u002Fabstract\u002Fdocument\u002F9495157)|Online training of stereo self-calibration using monocular depth estimation|StereoCaliNet|Extrinsics|PyTorch|U-Net|\n|2021|[ICCV](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FICCV2021\u002Fhtml\u002FLee_CTRL-C_Camera_Calibration_TRansformer_With_Line-Classification_ICCV_2021_paper.html)|CTRL-C: Camera calibration TRansformer with Line-Classification|CTRL-C|Intrinsics + Extrinsics|PyTorch|Transformer|\n|2021|[ICCVW](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FICCV2021W\u002FPBDL\u002Fhtml\u002FWakai_Deep_Single_Fisheye_Image_Camera_Calibration_for_Over_180-Degree_Projection_ICCVW_2021_paper.html)|Deep single fisheye image camera calibration for over 180-degree projection of field of view|Wakai et al.|Intrinsics + Extrinsics| |DenseNet|\n|2021|[TIP](https:\u002F\u002Fieeexplore.ieee.org\u002Fabstract\u002Fdocument\u002F9366359)|A deep ordinal distortion estimation approach for distortion rectification|OrdinalDistortion|Distortion 
coefficients|TensorFlow|CNNs|\n|2021|[TCSVT](https:\u002F\u002Fieeexplore.ieee.org\u002Fabstract\u002Fdocument\u002F9567670)|Revisiting radial distortion rectification in polar-coordinates: A new and efficient learning perspective|PolarRecNet|Undistortion|PyTorch|VGG + U-Net|\n|2021|[PRL](https:\u002F\u002Fwww.sciencedirect.com\u002Fscience\u002Farticle\u002Fabs\u002Fpii\u002FS0167865521003299)|DQN-based gradual fisheye image rectification|DQN-RecNet|Undistortion|PyTorch|VGG|\n|2021|[CVPR](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FCVPR2021\u002Fhtml\u002FTan_Practical_Wide-Angle_Portraits_Correction_With_Deep_Structured_Models_CVPR_2021_paper.html)|Practical wide-angle portraits correction with deep structured models|Tan et al.|Undistortion|PyTorch|U-Net|\n|2021|[CVPR](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FCVPR2021\u002Fhtml\u002FYang_Progressively_Complementary_Network_for_Fisheye_Image_Rectification_Using_Appearance_Flow_CVPR_2021_paper.html)|Progressively complementary network for fisheye image rectification using appearance flow|PCN|Undistortion|PyTorch|U-Net|\n|2021|[ICCV](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FICCV2021\u002Fhtml\u002FLiao_Multi-Level_Curriculum_for_Training_a_Distortion-Aware_Barrel_Distortion_Rectification_Model_ICCV_2021_paper.html)|Multi-level curriculum for training a distortion-aware barrel distortion rectification model|DaRecNet|Undistortion|TensorFlow|U-Net|\n|2021|[CVPR](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FCVPR2021\u002Fhtml\u002FZhao_Deep_Lucas-Kanade_Homography_for_Multimodal_Image_Alignment_CVPR_2021_paper.html)|Deep Lucas-Kanade homography for multimodal image alignment|DLKFM|Projection matrixs|TensorFlow|Siamese-Net|\n|2021|[ICCV](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FICCV2021\u002Fhtml\u002FShao_LocalTrans_A_Multiscale_Local_Transformer_Network_for_Cross-Resolution_Homography_Estimation_ICCV_2021_paper.html)|LocalTrans: A multiscale 
local transformer network for cross-resolution homography estimation|LocalTrans|Projection matrixs|PyTorch|Transformer|\n|2021|[ICCV](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FICCV2021\u002Fhtml\u002FYe_Motion_Basis_Learning_for_Unsupervised_Deep_Homography_Estimation_With_Subspace_ICCV_2021_paper.html)|Motion basis learning for unsupervised deep homography estimation with subspace projection|BasesHomo|Projection matrixs|PyTorch|ResNet|\n|2021|[ICIP](https:\u002F\u002Fieeexplore.ieee.org\u002Fabstract\u002Fdocument\u002F9506264)|Fast and accurate homography estimation using extendable compression network|ShuffleHomoNet|Projection matrixs|TensorFlow|ShuffleNet|\n|2021|[TCSVT](https:\u002F\u002Farxiv.org\u002Fabs\u002F2107.02524)|Depth-aware multi-grid deep homography estimation with contextual correlation|DAMG-Homo|Projection matrixs|TensorFlow|CNNs|\n|2021|[BMVC](https:\u002F\u002Fwww.bmvc2021-virtualconference.com\u002Fassets\u002Fpapers\u002F1364.pdf)|A simple approach to image tilt correction with self-attention MobileNet for smartphones|SA-MobileNet|Extrinsics|TensorFlow|MobileNet|\n|2021|[ICCV](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FICCV2021\u002Fhtml\u002FKocabas_SPEC_Seeing_People_in_the_Wild_With_an_Estimated_Camera_ICCV_2021_paper.html)|SPEC: Seeing people in the wild with an estimated camera|SPEC|Intrinsics + Extrinsics|PyTorch|ResNet|\n|2021|[CVPR](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FCVPR2021\u002Fpapers\u002FChen_Wide-Baseline_Relative_Camera_Pose_Estimation_With_Directional_Learning_CVPR_2021_paper.pdf)|Wide-Baseline Relative Camera Pose Estimation with Directional Learning|DirectionNet|Extrinsics|TensorFlow|U-Net|\n|2021|[CVPR](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2104.01601.pdf)|Towards Rolling Shutter Correction and Deblurring in Dynamic 
Scenes|JCD|Undistortion|PyTorch|FCN|\n|2021|[CVPRW](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FCVPR2021W\u002FWAD\u002Fpapers\u002FLv_LCCNet_LiDAR_and_Camera_Self-Calibration_Using_Cost_Volume_Network_CVPRW_2021_paper.pdf)|LCCNet: LiDAR and Camera Self-Calibration using Cost Volume Network|LCCNet|Camera + LiDAR|PyTorch|CNNs|\n|2021|[Sensors](https:\u002F\u002Fwww.ncbi.nlm.nih.gov\u002Fpmc\u002Farticles\u002FPMC8662422\u002Fpdf\u002Fsensors-21-08112.pdf)|CFNet: LiDAR-Camera Registration Using Calibration Flow Network|CFNet|Camera + LiDAR|PyTorch|FCN|\n|2021|[ICCV](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FICCV2021\u002Fpapers\u002FFan_Inverting_a_Rolling_Shutter_Camera_Bring_Rolling_Shutter_Images_to_ICCV_2021_paper.pdf)|Inverting a Rolling Shutter Camera: Bring Rolling Shutter Images to High Framerate Global Shutter Video|Fan et al.|Distortion|PyTorch|U-Net|\n|2021|[ICCV](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FICCV2021\u002Fpapers\u002FFan_SUNet_Symmetric_Undistortion_Network_for_Rolling_Shutter_Correction_ICCV_2021_paper.pdf)|SUNet: Symmetric Undistortion Network for Rolling Shutter Correction|SUNet|Distortion|PyTorch|DenseNet+ResNet|\n|2021|[IROS](https:\u002F\u002Fsemalign.mit.edu\u002Fassets\u002Fpaper.pdf)|SemAlign: Annotation-Free Camera-LiDAR Calibration with Semantic Alignment Loss|SemAlign|Camera-LiDAR|PyTorch|CNNs|\n|2022|[CVPR](https:\u002F\u002Farxiv.org\u002Fabs\u002F2203.08586)|Deep vanishing point detection: Geometric priors make dataset variations vanish|DVPD|Extrinsics|PyTorch|CNNs|\n|2022|[ICRA](https:\u002F\u002Farxiv.org\u002Fabs\u002F2112.03325)|Self-supervised camera self-calibration from video|Fang et al.|Intrinsics + Extrinsics|PyTorch|CNNs|\n|2022|[ICASSP](https:\u002F\u002Fieeexplore.ieee.org\u002Fabstract\u002Fdocument\u002F9746819)|Camera calibration through camera projection loss|CPL|Intrinsics + 
Extrinsics|TensorFlow|Inception-V3|\n|2022|[CVPR](https:\u002F\u002Farxiv.org\u002Fabs\u002F2203.15982)|Iterative Deep Homography Estimation|IHN|Projection matrices|PyTorch|Siamese-Net|\n|2022|[CVPR](https:\u002F\u002Farxiv.org\u002Fabs\u002F2205.03821)|Unsupervised Homography Estimation with Coplanarity-Aware GAN|HomoGAN|Projection matrices|PyTorch|GANs|\n|2022|[CVPR](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FCVPR2022\u002Fpapers\u002FZhu_Semi-Supervised_Wide-Angle_Portraits_Correction_by_Multi-Scale_Transformer_CVPR_2022_paper.pdf)|Semi-Supervised Wide-Angle Portraits Correction by Multi-Scale Transformer|SS-WPC|Undistortion|PyTorch|Transformer|\n|2022|[CVPR](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FCVPR2022\u002Fpapers\u002FCao_Learning_Adaptive_Warping_for_Real-World_Rolling_Shutter_Correction_CVPR_2022_paper.pdf)|Learning Adaptive Warping for Real-World Rolling Shutter Correction|AW-RSC|Undistortion| |CNNs|\n|2022|[CVPR](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FCVPR2022\u002Fpapers\u002FZhou_EvUnroll_Neuromorphic_Events_Based_Rolling_Shutter_Image_Correction_CVPR_2022_paper.pdf)|EvUnroll: Neuromorphic Events based Rolling Shutter Image Correction|EvUnroll|Undistortion|PyTorch|U-Net|\n|2022|[CVPR](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FCVPR2022\u002Fpapers\u002FDo_Learning_To_Detect_Scene_Landmarks_for_Camera_Localization_CVPR_2022_paper.pdf)|Learning to Detect Scene Landmarks for Camera Localization|Do et al.|Extrinsics|PyTorch|ResNet|\n|2022|[CVPR](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FCVPR2022\u002Fpapers\u002FParameshwara_DiffPoseNet_Direct_Differentiable_Camera_Pose_Estimation_CVPR_2022_paper.pdf)|DiffPoseNet: Direct Differentiable Camera Pose Estimation|DiffPoseNet|Extrinsics|PyTorch|CNNs + 
LSTM|\n|2022|[CVPR](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FCVPR2022\u002Fpapers\u002FYang_SceneSqueezer_Learning_To_Compress_Scene_for_Camera_Relocalization_CVPR_2022_paper.pdf)|SceneSqueezer: Learning to Compress Scene for Camera Relocalization|SceneSqueezer|Extrinsics|PyTorch|Transformer|\n|2022|[arXiv](https:\u002F\u002Farxiv.org\u002Fabs\u002F2207.01925)|FishFormer: Annulus Slicing-based Transformer for Fisheye Rectification with Efficacy Domain Exploration|FishFormer|Undistortion|PyTorch|Transformer|\n|2022|[CVPR](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FCVPR2022\u002Fpapers\u002FPonimatkin_Focal_Length_and_Object_Pose_Estimation_via_Render_and_Compare_CVPR_2022_paper.pdf)|Focal Length and Object Pose Estimation via Render and Compare|FocalPose|Intrinsics + Extrinsics|PyTorch|CNNs|\n|2022|[arXiv](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2203.09385.pdf)|DXQ-Net: Differentiable LiDAR-Camera Extrinsic Calibration Using Quality-aware Flow|DXQ-Net|Camera + LiDAR|PyTorch|CNNs + RNNs|\n|2022|[ITSC](https:\u002F\u002Farxiv.org\u002Fabs\u002F2207.03704)|SST-Calib: Simultaneous Spatial-Temporal Parameter Calibration between LIDAR and Camera|SST-Calib|Camera + LiDAR|PyTorch|CNNs|\n|2022|[IROS](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2202.00158.pdf)|Learning-Based Framework for Camera Calibration with Distortion Correction and High Precision Feature Detection|CCS-Net|Undistortion|PyTorch|UNet|\n|2022|[TIP](https:\u002F\u002Farxiv.org\u002Fabs\u002F2011.14611)|SIR: Self-supervised image rectification via seeing the same scene from multiple different lenses|SIR|Undistortion|PyTorch|ResNet|\n|2022|[TIV](https:\u002F\u002Fieeexplore.ieee.org\u002Fdocument\u002F9802778)|ATOP: An Attention-to-Optimization Approach for Automatic LiDAR-Camera Calibration Via Cross-Modal Object Matching|ATOP|Camera + LiDAR||CNNs|\n|2022|[ICRA](https:\u002F\u002Fieeexplore.ieee.org\u002Fdocument\u002F9811945)|FusionNet: Coarse-to-Fine Extrinsic Calibration 
Network of LiDAR and Camera with Hierarchical Point-pixel Fusion|FusionNet|Camera + LiDAR|PyTorch|CNNs+PointNet|\n|2022|[TIM](https:\u002F\u002Fieeexplore.ieee.org\u002Fdocument\u002F9623545)|Keypoint-Based LiDAR-Camera Online Calibration With Robust Geometric Network|RGKCNet|Camera + LiDAR|PyTorch|CNNs+PointNet|\n|2022|[ECCV](https:\u002F\u002Farxiv.org\u002Fabs\u002F2111.12927)|Rethinking generic camera models for deep single image camera calibration to recover rotation and fisheye distortion|GenCaliNet|Intrinsics + Extrinsics + Distortion coefficients| |DenseNet|\n|2022|[PAMI](https:\u002F\u002Fieeexplore.ieee.org\u002Fabstract\u002Fdocument\u002F9771389)|Content-Aware Unsupervised Deep Homography Estimation and Beyond|Liu et al.|Projection matrices|PyTorch|ResNet|\n\n## 🏗️Datasets\n|Name|Publication|Real\u002FSynthetic|Image\u002FVideo|Objectives|Dataset|\n|---|---|---|---|---|---|\n|KITTI|[CVPR](https:\u002F\u002Fwww.cvlibs.net\u002Fpublications\u002FGeiger2012CVPR.pdf)|Real|Video|Base|[Dataset](https:\u002F\u002Fwww.cvlibs.net\u002Fdatasets\u002Fkitti\u002F)|\n|MS-COCO|[ECCV](https:\u002F\u002Flink.springer.com\u002Fchapter\u002F10.1007\u002F978-3-319-10602-1_48)|Real|Image|Base|[Dataset](https:\u002F\u002Fcocodataset.org\u002F#download)|\n|SUN360|[CVPR](https:\u002F\u002Fvision.cs.princeton.edu\u002Fprojects\u002F2012\u002FSUN360\u002Fpaper.pdf)|Real|Image|Base|[Dataset](https:\u002F\u002Fvision.cs.princeton.edu\u002Fprojects\u002F2012\u002FSUN360\u002Fdata\u002F)|\n|Places2|[PAMI](http:\u002F\u002Fplaces2.csail.mit.edu\u002FPAMI_places.pdf)|Real|Image|Base|[Dataset](http:\u002F\u002Fplaces2.csail.mit.edu\u002Fdownload.html)|\n|CelebA|[ICCV](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_iccv_2015\u002Fpapers\u002FLiu_Deep_Learning_Face_ICCV_2015_paper.pdf)|Real|Image|Base|[Dataset](https:\u002F\u002Fmmlab.ie.cuhk.edu.hk\u002Fprojects\u002FCelebA.html)|\n|1DSfM|[ECCV](https:\u002F\u002Fwww.cs.cornell.edu\u002Fprojects\u002F1dsfm\u002Fdocs\u002F1DSfM_ECCV
14.pdf)|Real|Image|Focal Length|[Dataset](https:\u002F\u002Fdaooshee.github.io\u002FBMVC2018website\u002F)|\n|Cambridge Landmarks|[ICCV](https:\u002F\u002Fwww.cv-foundation.org\u002Fopenaccess\u002Fcontent_iccv_2015\u002Fpapers\u002FKendall_PoseNet_A_Convolutional_ICCV_2015_paper.pdf)|Real|Video|Extrinsics|[Dataset](https:\u002F\u002Fwww.repository.cam.ac.uk\u002Fhandle\u002F1810\u002F251342;jsessionid=90AB1617B8707CD387CBF67437683F77)|\n|HLW|[BMVC](http:\u002F\u002Fwww.bmva.org\u002Fbmvc\u002F2016\u002Fpapers\u002Fpaper020\u002Fpaper020.pdf)|Real|Image|Horizon Line|[Dataset](https:\u002F\u002Fmvrl.cse.wustl.edu\u002Fdatasets\u002Fhlw\u002F)|\n|YUD|[ECCV](https:\u002F\u002Fwww.elderlab.yorku.ca\u002Fwp-content\u002Fuploads\u002F2016\u002F12\u002FDenisElderEstradaECCV08.pdf)|Real|Image|Vanishing Point|[Dataset](https:\u002F\u002Fwww.elderlab.yorku.ca\u002FYorkUrbanDB)|\n|ECD|[ECCV](https:\u002F\u002Fwww.robots.ox.ac.uk\u002F~vgg\u002Fpublications\u002F2010\u002FBarinova10a\u002FBarinova10a.pdf)|Real|Image|Vanishing Point|[Dataset](https:\u002F\u002Fwww.robots.ox.ac.uk\u002F~vgg\u002Fpublications\u002F2010\u002FBarinova10a\u002FBarinova10a.pdf)|\n|SU3 Wireframe|[ICCV](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_ICCV_2019\u002Fpapers\u002FZhou_Learning_to_Reconstruct_3D_Manhattan_Wireframes_From_a_Single_Image_ICCV_2019_paper.pdf)|Synthetic|Image|Vanishing 
Point|[Dataset](https:\u002F\u002Fgithub.com\u002Fzhou13\u002Fshapeunity)|\n|ScanNet|[CVPR](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2017\u002Fpapers\u002FDai_ScanNet_Richly-Annotated_3D_CVPR_2017_paper.pdf)|Real|Video|Extrinsics|[Dataset](http:\u002F\u002Fwww.scan-net.org\u002F#code-and-data)|\n|Indoor-6|[CVPR](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FCVPR2022\u002Fpapers\u002FDo_Learning_To_Detect_Scene_Landmarks_for_Camera_Localization_CVPR_2022_paper.pdf)|Real|Image|Extrinsics|[Dataset](https:\u002F\u002Fgithub.com\u002Fmicrosoft\u002FSceneLandmarkLocalization)|\n|DeepVP|[ICRA](http:\u002F\u002Filab.usc.edu\u002Fpublications\u002Fdoc\u002FChang_etal18icra.pdf)|Real|Image|Vanishing Point|[Dataset](http:\u002F\u002Filab.usc.edu\u002Fkai\u002Fdeepvp\u002F)|\n|CAHomo|[ECCV](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1909.05983.pdf)|Real|Video|Homography|[Dataset](https:\u002F\u002Fgithub.com\u002FJirongZhang\u002FDeepHomography)|\n|MHN|[CVPR](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_CVPR_2020\u002Fpapers\u002FLe_Deep_Homography_Estimation_for_Dynamic_Scenes_CVPR_2020_paper.pdf)|Real|Video|Homography|[Dataset](https:\u002F\u002Fgithub.com\u002Flcmhoang\u002Fhmg-dynamics)|\n|UDIS|[TIP](https:\u002F\u002Farxiv.org\u002Fabs\u002F2106.12859)|Real|Video|Homography|[Dataset](https:\u002F\u002Fgithub.com\u002Fnie-lang\u002FUnsupervisedDeepImageStitching)|\n|Carla-RS|[CVPR](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_CVPR_2020\u002Fpapers\u002FLiu_Deep_Shutter_Unrolling_Network_CVPR_2020_paper.pdf)|Synthetic|Video|RS-Distortion|[Dataset](https:\u002F\u002Fgithub.com\u002Fethliup\u002FDeepUnrollNet)|\n|Fastec-RS|[CVPR](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_CVPR_2020\u002Fpapers\u002FLiu_Deep_Shutter_Unrolling_Network_CVPR_2020_paper.pdf)|Synthetic|Video|RS-Distortion|[Dataset](https:\u002F\u002Fgithub.com\u002Fethliup\u002FDeepUnrollNet)|\n|BS-RSC|[CVPR](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u
002FCVPR2022\u002Fpapers\u002FCao_Learning_Adaptive_Warping_for_Real-World_Rolling_Shutter_Correction_CVPR_2022_paper.pdf)|Real|Video|RS-Distortion|[Dataset](https:\u002F\u002Fgithub.com\u002Fljzycmd\u002FBSRSC)|\n|GEV-RS|[CVPR](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FCVPR2022\u002Fpapers\u002FZhou_EvUnroll_Neuromorphic_Events_Based_Rolling_Shutter_Image_Correction_CVPR_2022_paper.pdf)|Real|Video|RS-Distortion|[Dataset](https:\u002F\u002Fgithub.com\u002Fzxyemo\u002FEvUnroll)|\n|LMS|[ICASSP](https:\u002F\u002Fieeexplore.ieee.org\u002Fdocument\u002F7471935)|Both|Video|Radial Distortion|[Dataset](https:\u002F\u002Fwww.lms.tf.fau.eu\u002Fresearch\u002Fdownloads\u002Ffisheye-data-set\u002F)|\n|SS-WPC|[CVPR](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FCVPR2022\u002Fpapers\u002FZhu_Semi-Supervised_Wide-Angle_Portraits_Correction_by_Multi-Scale_Transformer_CVPR_2022_paper.pdf)|Real|Image|Radial Distortion|[Dataset](https:\u002F\u002Fgithub.com\u002Fmegvii-research\u002FPortraits_Correction)|\n\n\n\n## 📜License\nThe survey and benchmark are only made available for academic research purposes.\n\n## 📚Citation\n```\n@article{kang2023deep,\nAuthor = {Kang Liao and Lang Nie and Shujuan Huang and Chunyu Lin and Jing Zhang and Yao Zhao and Moncef Gabbouj and Dacheng Tao},\nTitle = {Deep Learning for Camera Calibration and Beyond: A Survey},\nYear = {2023},\nJournal = {arXiv:2303.10559}\n}\n```\n## 🚩Handbook in Chinese\n[导读链接](https:\u002F\u002Fzhuanlan.zhihu.com\u002Fp\u002F619217025)\n\n## 🚀Project and Course using Our Survey\n* [Camera Calibration and Pose Estimation](https:\u002F\u002Fsites.ecse.rpi.edu\u002F~qji\u002FCV\u002Fcamera_calibration_pose_estimation.pdf) of [ECSE 4961\u002F6650 Computer Vision](https:\u002F\u002Fsites.ecse.rpi.edu\u002F~qji\u002FCV\u002FECSE%204961-6650%20Computer%20Vision_Sylbus_Fall23b.pdf) by [Prof. 
Qiang Ji](https:\u002F\u002Fsites.ecse.rpi.edu\u002F~qji\u002F), Rensselaer Polytechnic Institute, US.\n\n## 📭Contact\n\n```\nkang.liao@ntu.edu.sg\n```\n","# 高效深度相机标定\n[![arXiv](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FarXiv-2303.10559-b31b1b.svg)](https:\u002F\u002Farxiv.org\u002Fabs\u002F2303.10559)\n[![调查](https:\u002F\u002Fcdn.rawgit.com\u002Fsindresorhus\u002Fawesome\u002Fd7305f38d29fed78fa85652e3a63e154dd8e8829\u002Fmedia\u002Fbadge.svg)](https:\u002F\u002Fgithub.com\u002Fsindresorhus\u002Fawesome) \n[![维护](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FMaintained%3F-yes-green.svg)](https:\u002F\u002FGitHub.com\u002FNaereen\u002FStrapDown.js\u002Fgraphs\u002Fcommit-activity) \n[![PR 的欢迎](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FPRs-welcome-brightgreen.svg?style=flat)](http:\u002F\u002Fmakeapullrequest.com) \n[![GitHub 许可证](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FKangLiao929_Awesome-Deep-Camera-Calibration_readme_f15977e92e8a.png)](https:\u002F\u002Fgithub.com\u002FNaereen\u002FStrapDown.js\u002Fblob\u002Fmaster\u002FLICENSE)\n\u003C!-- [![使用 Markdown 制作](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FMade%20with-Markdown-1f425f.svg)](http:\u002F\u002Fcommonmark.org) -->\n\u003C!-- [![文档状态](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FKangLiao929_Awesome-Deep-Camera-Calibration_readme_13d664e1afd7.png)](http:\u002F\u002Fansicolortags.readthedocs.io\u002F?badge=latest) -->\n\n![概览](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FKangLiao929_Awesome-Deep-Camera-Calibration_readme_e237075102d0.png)\n**\u003Cdiv align=\"center\">相机标定中的热门标定目标、模型及扩展应用\u003C\u002Fdiv>**\n\n更多内容和详细信息，请参阅我们的调查论文：[基于深度学习的相机标定及其拓展：一份调查](https:\u002F\u002Farxiv.org\u002Fabs\u002F2303.10559)。\n\n## 🚩目录\n1. [基础知识](#基础知识)\n2. [分类与统计](#分类与统计)\n3. [基准测试](#基准测试)\n4. [新型标定表示法](#新型标定表示法)\n4. [方法](#方法)\n5. [数据集](#数据集)\n6. [引用](#引用)\n7. [中文手册](#中文手册)\n8. 
[使用我们调查的项目与课程](#使用我们调查的项目与课程)\n\n## 📢新闻\n我们近期的研究成果——Puffin，能够将以相机为中心的理解（相机标定、姿态估计）与生成能力（相机可控的T2I和I2I生成）统一在一个连贯的多模态框架中。通过我们提出的“用相机思考”方式，Puffin实现了更精准的理解与生成性能，并深入探讨了多模态任务之间有意义的相互作用。如果您对相机相关的3D视觉、摄影、具身人工智能以及空间智能感兴趣，请访问[这里](https:\u002F\u002Fkangliao929.github.io\u002Fprojects\u002Fpuffin\u002F)了解更多详情。\n\n## 📝变更记录\n- [x] 2025年2月24日：更新了基于学习的相机标定中关于“新型标定表示法”的相关内容，这些表示法展现了替代传统标定目标用于神经网络的潜力。\n- [x] 2025年2月24日：更新了2023年和2024年的文献综述（新增100多篇论文！）。更多详细信息请参阅我们的[arXiv v3 版本](https:\u002F\u002Farxiv.org\u002Fabs\u002F2303.10559v3)。\n- [x] 2024年6月5日：更新了调查论文（补充材料），并对所构建的基准进行了评估。\n- [x] 2024年6月5日：更新了调查论文（第3.3.3节：基于重建的标定），并就NeRF进行了更多技术性讨论，尤其是在相机参数初始化方面。\n- [x] 2024年6月4日：我们对[基准测试](https:\u002F\u002Fgithub.com\u002FKangLiao929\u002FAwesome-Deep-Camera-Calibration\u002Fblob\u002Fmain\u002FBenchmark\u002Freadme.md)中已标定的相机参数进行了进一步的详细说明。\n- [x] 2024年1月5日：基准测试正式发布。请参阅数据集链接及更多细节，详见[基准测试](https:\u002F\u002Fgithub.com\u002FKangLiao929\u002FAwesome-Deep-Camera-Calibration\u002Fblob\u002Fmain\u002FBenchmark\u002Freadme.md)。\n- [x] 2023年3月19日：arXiv版本的调查报告已上线。\n\n## 📖基础知识\n* [计算机视觉中的多视图几何](https:\u002F\u002Fcseweb.ucsd.edu\u002Fclasses\u002Fsp13\u002Fcse252B-a\u002FHZ2eCh2.pdf) - Hartley, R., & Zisserman, A. (2004)\n* [一种灵活的全新相机标定技术](https:\u002F\u002Fwww.microsoft.com\u002Fen-us\u002Fresearch\u002Fwp-content\u002Fuploads\u002F2016\u002F02\u002Ftr98-71.pdf) - Zhang Zhengyou. 
(2000)\n\n## 📊分类与统计\n![概览](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FKangLiao929_Awesome-Deep-Camera-Calibration_readme_9ed74adb91e5.png)\n**\u003Cdiv align=\"center\">基于深度学习的相机标定结构化与层次化分类体系。各类经典方法均列于相应类别之下。\u003C\u002Fdiv>**\n\n![概览](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FKangLiao929_Awesome-Deep-Camera-Calibration_readme_38f321209cd1.png)\n**\u003Cdiv align=\"center\">基于深度学习的相机标定方法的简洁里程碑列表。\u003C\u002Fdiv>** \n我们根据未标定的相机模型及其扩展应用对所有方法进行分类：标准模型、畸变模型、跨视图模型以及跨传感器模型。\n\n![概览](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FKangLiao929_Awesome-Deep-Camera-Calibration_readme_2caefaec81b7.png)\n**\u003Cdiv align=\"center\">基于深度学习的相机标定方法的统计分析。\u003C\u002Fdiv>** \n我们根据每年的发表数量、标定目标、数据集的模拟情况以及学习策略，对所有文献进行了汇总。\n\n## 📁基准测试\n![概览](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FKangLiao929_Awesome-Deep-Camera-Calibration_readme_cbfb3ead8193.png)\n**\u003Cdiv align=\"center\">我们收集的基准测试概述，涵盖了本次调查中所回顾的所有模型。\u003C\u002Fdiv>** \n在该数据集中，图像和视频均来自不同环境下的多种相机。每组数据均提供了精确的地面真实值和标签。请参阅数据集链接及更多细节，详见[基准测试](https:\u002F\u002Fgithub.com\u002FKangLiao929\u002FAwesome-Deep-Camera-Calibration\u002Fblob\u002Fmain\u002FBenchmark\u002Freadme.md)。\n\n## 🏁新型标定表示法\n![概览](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FKangLiao929_Awesome-Deep-Camera-Calibration_readme_f7588231e6a7.png)\n**\u003Cdiv align=\"center\">新型标定表示法旨在替代传统的标定目标。\u003C\u002Fdiv>** \n近年来，基于学习的相机标定研究倾向于设计全新的几何场，以取代传统的相机参数（即内参和外参）作为新的学习目标。这一设计灵感源自相机模型的先验知识或拍摄图像的视角特性，例如[畸变分布图](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2007.10689)、[透视场](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2212.03239)、[入射场](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2306.10988)、[相机光线](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2402.14817)以及[相机图像](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2411.17240)等。这些字段以像素级或补丁级的方式对内参和\u002F或外参的不变量进行参数化。它们与图像细节之间存在明确的关联，并且对神经网络而言具有良好的学习友好性。\n\n## 
📸方法\n\n|年份|出版物|标题|缩写|目标|平台|网络|\n|---|---|---|---|---|---|---|\n|2015|[ICIP](https:\u002F\u002Fieeexplore.ieee.org\u002Fabstract\u002Fdocument\u002F7351024)|Deepfocal：一种直接估算焦距的方法|DeepFocal|内参|Caffe|AlexNet|\n|2015|[ICCV](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_iccv_2015\u002Fhtml\u002FKendall_PoseNet_A_Convolutional_ICCV_2015_paper.html)|Posenet：一种用于实时6自由度相机重新定位的卷积神经网络|PoseNet|外参|Caffe|GoogLeNet|\n|2016|[BMVC](https:\u002F\u002Farxiv.org\u002Fabs\u002F1604.02129)|野外环境中的水平线|DeepHorizon|外参|Caffe|GoogLeNet|\n|2016|[CVPR](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2016\u002Fhtml\u002FZhai_Detecting_Vanishing_Points_CVPR_2016_paper.html)|利用全局图像上下文在非曼哈顿世界中检测消失点|DeepVP|外参|Caffe|AlexNet|\n|2016|[ACCV](https:\u002F\u002Flink.springer.com\u002Fchapter\u002F10.1007\u002F978-3-319-54187-7_3)|利用合成图像训练的卷积神经网络进行径向镜头畸变校正|Rong等|畸变系数|Caffe|AlexNet|\n|2016|[RSSW](https:\u002F\u002Farxiv.org\u002Fabs\u002F1606.03798)|深度图像同构估计|DHN|投影矩阵|Caffe|VGG|\n|2017|[CVPR](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2017\u002Fhtml\u002FChang_CLKN_Cascaded_Lucas-Kanade_CVPR_2017_paper.html)|Clkn：用于图像对齐的级联Lucas-Kanade网络|CLKN|投影矩阵|Torch|CNN + Lucas-Kanade层|\n|2017|[ICCVW](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_ICCV_2017_workshops\u002Fw17\u002Fhtml\u002FNowruzi_Homography_Estimation_From_ICCV_2017_paper.html)|利用层次化卷积神经网络从图像对中估计同构矩阵|HierarchicalNet|投影矩阵|TensorFlow|VGG|\n|2017|[CVPR](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2017\u002Fpapers\u002FRengarajan_Unrolling_the_Shutter_CVPR_2017_paper.pdf)|Unrolling the Shutter：使用CNN校正运动畸变|URS-CNN|去畸变|Torch|CNNs|\n|2017|[IV](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1707.03167.pdf)|RegNet：利用深度神经网络实现多模态传感器配准|RegNet|相机+激光雷达|Caffe|CNNs|\n|2018|[CVPR](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2018\u002Fhtml\u002FHold-Geoffroy_A_Perceptual_Measure_CVPR_2018_paper.html)|用于深度单张图像相机标定的感知性指标|Hold-Geoffroy等|内参+外参| 
|DenseNet|\n|2018|[CVMP](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fabs\u002F10.1145\u002F3278471.3278479)|DeepCalib：一种用于广角相机自动内参校准的深度学习方法|DeepCalib|内参+畸变系数|TensorFlow|Inception-V3|\n|2018|[ECCV](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_ECCV_2018\u002Fhtml\u002FXiaoqing_Yin_FishEyeRecNet_A_Multi-Context_ECCV_2018_paper.html)|FishEyeRecNet：一种用于鱼眼图像矫正的多情境协作深度网络|FishEyeRecNet|畸变系数|Caffe|VGG|\n|2018|[ICPR](https:\u002F\u002Fieeexplore.ieee.org\u002Fabstract\u002Fdocument\u002F8545218)|通过在卷积神经网络中添加带有反向视网膜模型的权重层来校正径向镜头畸变|Shi等|畸变系数|PyTorch|ResNet|\n|2018|[ECCV](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_ECCV_2018\u002Fhtml\u002FRene_Ranftl_Deep_Fundamental_Matrix_ECCV_2018_paper.html)|深度基础矩阵估计|DeepFM|投影矩阵|PyTorch|ResNet|\n|2018|[ECCVW](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_eccv_2018_workshops\u002Fw16\u002Fhtml\u002FPoursaeed_Deep_Fundamental_Matrix_Estimation_without_Correspondences_ECCVW_2018_paper.html)|无对应关系下的深度基础矩阵估计|Poursaeed等|投影矩阵| |CNNs|\n|2018|[RAL](https:\u002F\u002Fieeexplore.ieee.org\u002Fabstract\u002Fdocument\u002F8302515)|无监督深度同构：一种快速且鲁棒的同构估计模型|UDHN|投影矩阵|TensorFlow|VGG|\n|2018|[ACCV](https:\u002F\u002Flink.springer.com\u002Fchapter\u002F10.1007\u002F978-3-030-20876-9_36)|重新思考基于透视场的平面同构估计|PFNet|投影矩阵|TensorFlow|FCN|\n|2018|[IROS](https:\u002F\u002Fieeexplore.ieee.org\u002Fstamp\u002Fstamp.jsp?tp=&arnumber=8593693)|CalibNet：利用三维空间变换网络进行几何监督的外参校准|CalibNet|相机+激光雷达|TensorFlow|ResNet|\n|2018|[ICRA](https:\u002F\u002Fieeexplore.ieee.org\u002Fdocument\u002F8460499)|DeepVP：基于深度学习的100万张街景图像消失点检测|Chang等|标准|MatConvNet|AlexNet|\n|2019|[CVPR](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_CVPR_2019\u002Fhtml\u002FLopez_Deep_Single_Image_Camera_Calibration_With_Radial_Distortion_CVPR_2019_paper.html)|利用径向畸变进行深度单张图像相机标定|Lopez等|内参+外参+畸变系数|PyTorch|DenseNet|\n|2019|[ICCV](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_ICCV_2019\u002Fhtml\u002FXian_UprightNet_Geometry-Aware_Camera_Orientation_Estimation_From_Sing
le_Images_ICCV_2019_paper.html)|UprightNet：基于单张图像的几何感知相机朝向估计|UprightNet|外参|PyTorch|U-Net|\n|2019|[IROS](https:\u002F\u002Fieeexplore.ieee.org\u002Fabstract\u002Fdocument\u002F8967912)|重新审视自校准中的退化问题，并为未校准的SLAM提出一种深度学习解决方案|Zhuang等|内参+畸变系数|PyTorch|ResNet|\n|2019|[PRL](https:\u002F\u002Fwww.sciencedirect.com\u002Fscience\u002Farticle\u002Fabs\u002Fpii\u002FS0141938221001062)|利用可逆约束的自监督深度同构估计|SSR-Net|投影矩阵|PyTorch|ResNet|\n|2019|[ICCVW](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_ICCVW_2019\u002Fhtml\u002FGMDL\u002FAbbas_A_Geometric_Approach_to_Obtain_a_Birds_Eye_View_From_ICCVW_2019_paper.html)|一种从图像中获取鸟瞰视角的几何方法|Abbas等|投影矩阵|TensorFlow|CNNs|\n|2019|[TCSVT](https:\u002F\u002Fieeexplore.ieee.org\u002Fabstract\u002Fdocument\u002F8636975)|DR-GAN：利用条件GAN在实时环境中自动进行径向畸变校正|DR-GAN|去畸变|TensorFlow|GANs|\n|2019|[TCSVT](https:\u002F\u002Fieeexplore.ieee.org\u002Fabstract\u002Fdocument\u002F8926530)|从静态到动态的畸变校正：从畸变序列构建的角度来看|STD|去畸变|TensorFlow|GANs|\n|2019|[VR](https:\u002F\u002Fieeexplore.ieee.org\u002Fabstract\u002Fdocument\u002F8798326)|Deep360Up：一种基于深度学习的自动VR图像直立调整方法|Deep360Up|外参| 
|DenseNet|\n|2019|[JVCIR](https:\u002F\u002Fwww.sciencedirect.com\u002Fscience\u002Farticle\u002Fabs\u002Fpii\u002FS104732031930313X)|通过具有几何先验的双向损失实现无监督鱼眼图像矫正|UnFishCor|畸变系数|TensorFlow|VGG|\n|2019|[CVPR](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_CVPR_2019\u002Fhtml\u002FLi_Blind_Geometric_Distortion_Correction_on_Images_Through_Deep_Learning_CVPR_2019_paper.html)|通过深度学习实现图像的盲目几何畸变校正|BlindCor|去畸变|PyTorch|U-Net|\n|2019|[CVPR](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_CVPR_2019\u002Fhtml\u002FZhuang_Learning_Structure-And-Motion-Aware_Rolling_Shutter_Correction_CVPR_2019_paper.html)|学习结构与运动感知的滚动快门校正|RSC-Net|去畸变|PyTorch|ResNet|\n|2019|[CVPR](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_CVPR_2019\u002Fhtml\u002FXue_Learning_to_Calibrate_Straight_Lines_for_Fisheye_Image_Rectification_CVPR_2019_paper.html)|学习为鱼眼图像矫正校准直线|Xue等|畸变系数|PyTorch|ResNet|\n|2019|[ICCV](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_ICCV_2019\u002Fhtml\u002FZhao_Learning_Perspective_Undistortion_of_Portraits_ICCV_2019_paper.html)|学习人物肖像的视角去畸变|Zhao等|内参+去畸变||VGG + U-Net|\n|2019|[NeurIPS](https:\u002F\u002Fproceedings.neurips.cc\u002Fpaper\u002F2019\u002Ffile\u002F8e6b42f1644ecb1327dc03ab345e618b-Paper.pdf)|NeurVPS：通过圆锥卷积实现神经消失点扫描|NeurVPS|标准|PyTorch|CNNs|\n|2020|[CVPR](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_CVPR_2020\u002Fhtml\u002FSha_End-to-End_Camera_Calibration_for_Broadcast_Videos_CVPR_2020_paper.html)|端到端的广播视频相机标定|Sha等|投影矩阵|TensorFlow|Siamese-Net + U-Net|\n|2020|[ECCV](https:\u002F\u002Flink.springer.com\u002Fchapter\u002F10.1007\u002F978-3-030-58610-2_32)|单张图像相机标定的神经几何解析器|Lee等|内参+外参| |PointNet + 
CNNs|\n|2020|[ICRA](https:\u002F\u002Fieeexplore.ieee.org\u002Fabstract\u002Fdocument\u002F9197378)|学习相机误差检测|MisCaliDet|平均像素位置差异|TensorFlow|CNNs|\n|2020|[WACV](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_WACV_2020\u002Fhtml\u002FZhang_DeepPTZ_Deep_Self-Calibration_for_PTZ_Cameras_WACV_2020_paper.html)|DeepPTZ：PTZ相机的深度自校准|DeepPTZ|内参+外参+畸变系数|PyTorch|Inception-V3|\n|2020|[CVPR](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_CVPR_2020\u002Fhtml\u002FLe_Deep_Homography_Estimation_for_Dynamic_Scenes_CVPR_2020_paper.html)|动态场景下的深度同构估计|MHN|投影矩阵|TensorFlow|VGG|\n|2020|[ACMMM](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002F10.1145\u002F3394171.3413870)|SRHEN：通过在深层潜在空间中解析几何对应关系，逐步细化同构估计网络|SRHEN|投影矩阵| |CNNs|\n|2020|[ECCV](https:\u002F\u002Flink.springer.com\u002Fchapter\u002F10.1007\u002F978-3-030-58604-1_35)|通过分割实现360°相机对齐|Davidson等|外参| |FCN|\n|2020|[ECCV](https:\u002F\u002Flink.springer.com\u002Fchapter\u002F10.1007\u002F978-3-030-58452-8_38)|内容感知的无监督深度同构估计|CA-UDHN|投影矩阵|PyTorch|FCN + ResNet|\n|2020|[IROS](https:\u002F\u002Fieeexplore.ieee.org\u002Fabstract\u002Fdocument\u002F9341229)|基于几何约束的深度关键点相机姿态估计|DeepFEPE|外参|PyTorch|VGG + PointNet|\n|2020|[TIP](https:\u002F\u002Fieeexplore.ieee.org\u002Fabstract\u002Fdocument\u002F8962122)|无需模型的畸变校正框架，由畸变分布图桥梁|DDM|去畸变|Tensorflow|GANs|\n|2020|[TIP](https:\u002F\u002Fieeexplore.ieee.org\u002Fabstract\u002Fdocument\u002F9184235)|针对360°双鱼眼相机的深度人脸矫正|Li等|去畸变| |CNNs|\n|2020|[ICPR](https:\u002F\u002Fieeexplore.ieee.org\u002Fabstract\u002Fdocument\u002F9412305)|基于位置感知并增强对称性的GAN用于径向畸变校正|PSE-GAN|去畸变| 
|GANs|\n|2020|[ICIP](https:\u002F\u002Fieeexplore.ieee.org\u002Fabstract\u002Fdocument\u002F9191107)|一种简单而有效的径向畸变校正流程|RDC-Net|去畸变|PyTorch|ResNet|\n|2020|[ICASSP](https:\u002F\u002Fieeexplore.ieee.org\u002Fabstract\u002Fdocument\u002F9054191)|用于鱼眼图像矫正的自监督深度学习|FE-GAN|去畸变|PyTorch|GANs|\n|2020|[CVPR](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_CVPR_2020\u002Fhtml\u002FZhao_RDCFace_Radial_Distortion_Correction_for_Face_Recognition_CVPR_2020_paper.html)|RDCFace：用于人脸识别的径向畸变校正|RDCFace|去畸变| |ResNet|\n|2020|[arXiv](https:\u002F\u002Farxiv.org\u002Fabs\u002F2003.11386)|从深邃的直线中校正鱼眼畸变|LaRecNet|畸变系数|PyTorch|ResNet|\n|2020|[CVPR](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_CVPR_2020\u002Fhtml\u002FBaradad_Height_and_Uprightness_Invariance_for_3D_Prediction_From_a_Single_CVPR_2020_paper.html)|从单个视角进行3D预测时的高度和垂直度不变性|Baradad等|内参+外参|PyTorch|CNNs|\n|2020|[CVPR](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_CVPR_2020\u002Fhtml\u002FZheng_What_Does_Plate_Glass_Reveal_About_Camera_Calibration_CVPR_2020_paper.html)|玻璃板究竟揭示了关于相机标定的哪些信息？|Zheng等|内参+外参| |CNNs|\n|2020|[ECCV](https:\u002F\u002Flink.springer.com\u002Fchapter\u002F10.1007\u002F978-3-030-58621-8_19)|野外环境中的单视角计量|Zhu等|内参+外参|PyTorch|CNNs + 
PointNet|\n|2020|[CVPR](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_CVPR_2020\u002Fpapers\u002FLiu_Deep_Shutter_Unrolling_Network_CVPR_2020_paper.pdf)|深度快门展开网络|DeepUnrollNet|去畸变|PyTorch|FCN|\n|2020|[RAL](https:\u002F\u002Fieeexplore.ieee.org\u002Fstamp\u002Fstamp.jsp?tp=&arnumber=9206138)|RGGNet：通过几何深度学习和生成模型实现的容差感知激光雷达-相机在线校准|RGGNet|相机+激光雷达|Tensorflow|ResNet|\n|2020|[IROS](https:\u002F\u002Fieeexplore.ieee.org\u002Fdocument\u002F9341147)|CalibRCNN：通过递归卷积神经网络和几何约束校准相机与激光雷达|CalibRCNN|相机+激光雷达|Tensorflow|RNN|\n|2020|[ICRA](https:\u002F\u002Fieeexplore.ieee.org\u002Fdocument\u002F9196627)|通过传感器语义信息实现的在线相机-激光雷达校准|SSI-Calib|相机-激光雷达|Tensorflow|CNNs|\n|2020|[arXiv](https:\u002F\u002Farxiv.org\u002Fabs\u002F2003.04260)|SOIC：用于激光雷达和相机的语义在线初始化与校准|SOIC|相机-激光雷达|-|ResNet+PointRCNN|\n|2020|[ICPR](https:\u002F\u002Fieeexplore.ieee.org\u002Fdocument\u002F9412653)|NetCalib：一种基于深度学习的新型激光雷达-相机自动校准方法|NetCalib|相机-激光雷达|PyTorch|CNNs|\n|2021|[TCI](https:\u002F\u002Fieeexplore.ieee.org\u002Fabstract\u002Fdocument\u002F9495157)|利用单目深度估计进行立体自校准的在线训练|StereoCaliNet|外参|PyTorch|U-Net|\n|2021|[ICCV](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FICCV2021\u002Fhtml\u002FLee_CTRL-C_Camera_Calibration_TRansformer_With_Line-Classification_ICCV_2021_paper.html?ref=https:\u002F\u002Fgithubhelp.com)|CTRL-C：带线条分类的相机校准TRansformer|CTRL-C|内参+外参|PyTorch|Transformer|\n|2021|[ICCVW](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FICCV2021W\u002FPBDL\u002Fhtml\u002FWakai_Deep_Single_Fisheye_Image_Camera_Calibration_for_Over_180-Degree_Projection_ICCVW_2021_paper.html)|深度单鱼眼图像相机校准，适用于视野超过180度的投影|Wakai等|内参+外参| |DenseNet|\n|2021|[TIP](https:\u002F\u002Fieeexplore.ieee.org\u002Fabstract\u002Fdocument\u002F9366359)|一种用于畸变校正的深度序数畸变估计方法|OrdinalDistortion|畸变系数|TensorFlow|CNNs|\n|2021|[TCSVT](https:\u002F\u002Fieeexplore.ieee.org\u002Fabstract\u002Fdocument\u002F9567670)|重新审视极坐标系下的径向畸变校正：一种全新且高效的学习视角|PolarRecNet|去畸变|PyTorch|VGG + 
U-Net|\n|2021|[PRL](https:\u002F\u002Fwww.sciencedirect.com\u002Fscience\u002Farticle\u002Fabs\u002Fpii\u002FS0167865521003299)|基于DQN的渐进式鱼眼图像矫正|DQN-RecNet|去畸变|PyTorch|VGG|\n|2021|[CVPR](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FCVPR2021\u002Fhtml\u002FTan_Practical_Wide-Angle_Portraits_Correction_With_Deep_Structured_Models_CVPR_2021_paper.html)|利用深度结构化模型进行实用的广角人像矫正|Tan等|去畸变|PyTorch|U-Net|\n|2021|[CVPR](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FCVPR2021\u002Fhtml\u002FYang_Progressively_Complementary_Network_for_Fisheye_Image_Rectification_Using_Appearance_Flow_CVPR_2021_paper.html)|利用外观流进行的渐进式互补网络，用于鱼眼图像矫正|PCN|去畸变|PyTorch|U-Net|\n|2021|[ICCV](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FICCV2021\u002Fhtml\u002FLiao_Multi-Level_Curriculum_for_Training_a_Distortion-Aware_Barrel_Distortion_Rectification_Model_ICCV_2021_paper.html)|用于训练畸变感知桶形畸变校正模型的多级课程|DaRecNet|去畸变|TensorFlow|U-Net|\n|2021|[CVPR](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FCVPR2021\u002Fhtml\u002FZhao_Deep_Lucas-Kanade_Homography_for_Multimodal_Image_Alignment_CVPR_2021_paper.html)|用于多模态图像对齐的深度Lucas-Kanade同构|DLKFM|投影矩阵|TensorFlow|Siamese-Net|\n|2021|[ICCV](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FICCV2021\u002Fhtml\u002FShao_LocalTrans_A_Multiscale_Local_Transformer_Network_for_Cross-Resolution_Homography_Estimation_ICCV_2021_paper.html)|LocalTrans：用于跨分辨率同构估计的多尺度局部变压器网络|LocalTrans|投影矩阵|PyTorch|Transformer|\n|2021|[ICCV](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FICCV2021\u002Fhtml\u002FYe_Motion_Basis_Learning_for_Unsupervised_Deep_Homography_Estimation_With_Subspace_ICCV_2021_paper.html)|利用子空间投影进行无监督深度同构估计的运动基学习|BasesHomo|投影矩阵|PyTorch|ResNet|\n|2021|[ICIP](https:\u002F\u002Fieeexplore.ieee.org\u002Fabstract\u002Fdocument\u002F9506264)|利用可扩展压缩网络实现快速且精确的同构估计|ShuffleHomoNet|投影矩阵|TensorFlow|ShuffleNet|\n|2021|[TCSVT](https:\u002F\u002Farxiv.org\u002Fabs\u002F2107.02524)|基于深度感知的多网格深度同构估计，结合上下文相关性|DAMG-Homo|投影矩阵|Tens
orFlow|CNNs|\n|2021|[BMVC](https:\u002F\u002Fwww.bmvc2021-virtualconference.com\u002Fassets\u002Fpapers\u002F1364.pdf)|一种利用自注意力MobileNet的智能手机简单图像倾斜校正方法|SA-MobileNet|外参|TensorFlow|MobileNet|\n|2021|[ICCV](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FICCV2021\u002Fhtml\u002FKocabas_SPEC_Seeing_People_in_the_Wild_With_an_Estimated_Camera_ICCV_2021_paper.html)|SPEC：利用估计的相机在野外“看见”人们|SPEC|内参+外参|PyTorch|ResNet|\n|2021|[CVPR](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FCVPR2021\u002Fpapers\u002FChen_Wide-Baseline_Relative_Camera_Pose_Estimation_With_Directional_Learning_CVPR_2021_paper.pdf)|利用方向性学习的宽基线相对相机姿态估计|DirectionNet|外参|TensorFlow|U-Net|\n|2021|[CVPR](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2104.01601.pdf)|迈向动态场景中的滚动快门校正与去模糊|JCD|去畸变|PyTorch|FCN|\n|2021|[CVPRW](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FCVPR2021W\u002FWAD\u002Fpapers\u002FLv_LCCNet_LiDAR_and_Camera_Self-Calibration_Using_Cost_Volume_Network_CVPRW_2021_paper.pdf)|LCCNet：利用成本体积网络实现激光雷达与相机的自校准|LCCNet|相机+激光雷达|PyTorch|CNNs|\n|2021|[Sensors](https:\u002F\u002Fwww.ncbi.nlm.nih.gov\u002Fpmc\u002Farticles\u002FPMC8662422\u002Fpdf\u002Fsensors-21-08112.pdf)|CFNet：利用校准流网络实现激光雷达-相机配准|CFNet|相机+激光雷达|PyTorch|FCN|\n|2021|[ICCV](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FICCV2021\u002Fpapers\u002FFan_Inverting_a_Rolling_Shutter_Camera_Bring_Rolling_Shutter_Images_to_ICCV_2021_paper.pdf)|反转滚动快门相机：将滚动快门图像转化为高帧率的全局快门视频|Fan等|畸变|PyTorch|U-Net|\n|2021|[ICCV](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FICCV2021\u002Fpapers\u002FFan_SUNet_Symmetric_Undistortion_Network_for_Rolling_Shutter_Correction_ICCV_2021_paper.pdf)|SUNet：用于滚动快门校正的对称去畸变网络|SUNet|畸变|PyTorch|DenseNet+ResNet|\n|2021|[IROS](https:\u002F\u002Fsemalign.mit.edu\u002Fassets\u002Fpaper.pdf)|SemAlign：无标注的相机-激光雷达校准，采用语义对齐损失|SemAlign|相机-激光雷达|PyTorch|CNNs|\n|2022|[CVPR](https:\u002F\u002Farxiv.org\u002Fabs\u002F2203.08586)|深度消失点检测：几何先验让数据集的变化变得微不足道|DVPD|外参|PyTorch|CNNs|\n|2022|[ICRA](https:\u002F\u
002Farxiv.org\u002Fabs\u002F2112.03325)|基于视频的自监督相机自校准|Fang等|内参+外参|PyTorch|CNNs|\n|2022|[ICASSP](https:\u002F\u002Fieeexplore.ieee.org\u002Fabstract\u002Fdocument\u002F9746819)|通过相机投影损失进行相机标定|CPL|内参+外参|TensorFlow|Inception-V3|\n|2022|[CVPR](https:\u002F\u002Farxiv.org\u002Fabs\u002F2203.15982)|迭代深度单应估计|IHN|单应矩阵|PyTorch|Siamese-Net|\n|2022|[CVPR](https:\u002F\u002Farxiv.org\u002Fabs\u002F2205.03821)|基于共面性感知 GAN 的无监督单应估计|HomoGAN|单应矩阵|PyTorch|GANs|\n|2022|[CVPR](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FCVPR2022\u002Fpapers\u002FZhu_Semi-Supervised_Wide-Angle_Portraits_Correction_by_Multi-Scale_Transformer_CVPR_2022_paper.pdf)|基于多尺度 Transformer 的半监督广角人像矫正|SS-WPC|去畸变|PyTorch|Transformer|\n|2022|[CVPR](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FCVPR2022\u002Fpapers\u002FCao_Learning_Adaptive_Warping_for_Real-World_Rolling_Shutter_Correction_CVPR_2022_paper.pdf)|学习用于现实世界滚动快门校正的自适应扭曲|AW-RSC|去畸变| |CNNs|\n|2022|[CVPR](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FCVPR2022\u002Fpapers\u002FZhou_EvUnroll_Neuromorphic_Events_Based_Rolling_Shutter_Image_Correction_CVPR_2022_paper.pdf)|EvUnroll：基于神经形态事件的滚动快门图像校正|EvUnroll|去畸变|PyTorch|U-Net|\n|2022|[CVPR](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FCVPR2022\u002Fpapers\u002FDo_Learning_To_Detect_Scene_Landmarks_for_Camera_Localization_CVPR_2022_paper.pdf)|学习用于相机定位的场景地标检测|Do等|外参|PyTorch|ResNet|\n|2022|[CVPR](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FCVPR2022\u002Fpapers\u002FParameshwara_DiffPoseNet_Direct_Differentiable_Camera_Pose_Estimation_CVPR_2022_paper.pdf)|DiffPoseNet：直接可微的相机姿态估计|DiffPoseNet|外参|PyTorch|CNNs + 
LSTM|\n|2022|[CVPR](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FCVPR2022\u002Fpapers\u002FYang_SceneSqueezer_Learning_To_Compress_Scene_for_Camera_Relocalization_CVPR_2022_paper.pdf)|SceneSqueezer：学习为相机重定位压缩场景|SceneSqueezer|外参|PyTorch|Transformer|\n|2022|[arXiv](https:\u002F\u002Farxiv.org\u002Fabs\u002F2207.01925)|FishFormer：基于环状切分 Transformer 的鱼眼矫正，并探索有效域|FishFormer|去畸变|PyTorch|Transformer|\n|2022|[CVPR](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FCVPR2022\u002Fpapers\u002FPonimatkin_Focal_Length_and_Object_Pose_Estimation_via_Render_and_Compare_CVPR_2022_paper.pdf)|通过渲染与比较估计焦距和物体姿态|FocalPose|内参+外参|PyTorch|CNNs|\n|2022|[arXiv](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2203.09385.pdf)|DXQ-Net：利用质量感知流实现的可微激光雷达-相机外参校准|DXQ-Net|相机+激光雷达|PyTorch|CNNs + RNNs|\n|2022|[ITSC](https:\u002F\u002Farxiv.org\u002Fabs\u002F2207.03704)|SST-Calib：同时实现激光雷达与相机的空间-时间参数校准|SST-Calib|相机+激光雷达|PyTorch|CNNs|\n|2022|[IROS](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2202.00158.pdf)|基于学习的相机校准框架，兼具畸变校正与高精度特征检测|CCS-Net|去畸变|PyTorch|U-Net|\n|2022|[TIP](https:\u002F\u002Farxiv.org\u002Fabs\u002F2011.14611)|SIR：通过多种不同镜头“看见”同一场景，实现自监督图像矫正|SIR|去畸变|PyTorch|ResNet|\n|2022|[TIV](https:\u002F\u002Fieeexplore.ieee.org\u002Fdocument\u002F9802778)|ATOP：一种以注意力为导向的优化方法，通过跨模态对象匹配实现激光雷达-相机自动校准|ATOP|相机+激光雷达||CNNs|\n|2022|[ICRA](https:\u002F\u002Fieeexplore.ieee.org\u002Fdocument\u002F9811945)|FusionNet：基于层次化点-像素融合的激光雷达与相机由粗到精校准网络|FusionNet|相机+激光雷达|PyTorch|CNNs+PointNet|\n|2022|[TIM](https:\u002F\u002Fieeexplore.ieee.org\u002Fdocument\u002F9623545)|基于关键点的激光雷达-相机在线校准，采用稳健的几何网络|RGKCNet|相机+激光雷达|PyTorch|CNNs+PointNet|\n|2022|[ECCV](https:\u002F\u002Farxiv.org\u002Fabs\u002F2111.12927)|重新思考通用相机模型，用于深度单张图像相机标定，以恢复旋转和鱼眼畸变|GenCaliNet|内参+外参+畸变系数| |DenseNet|\n|2022|[PAMI](https:\u002F\u002Fieeexplore.ieee.org\u002Fabstract\u002Fdocument\u002F9771389)|内容感知的无监督深度单应估计及其拓展|Liu等|单应矩阵|PyTorch|ResNet|\n\n## 
🏗️数据集\n|名称|发表于|真实\u002F合成|图像\u002F视频|目标|数据集|\n|---|---|---|---|---|---|\n|KITTI|[CVPR](https:\u002F\u002Fwww.cvlibs.net\u002Fpublications\u002FGeiger2012CVPR.pdf)|真实|视频|基础|[数据集](https:\u002F\u002Fwww.cvlibs.net\u002Fdatasets\u002Fkitti\u002F)|\n|MS-COCO|[ECCV](https:\u002F\u002Flink.springer.com\u002Fchapter\u002F10.1007\u002F978-3-319-10602-1_48)|真实|图像|基础|[数据集](https:\u002F\u002Fcocodataset.org\u002F#download)|\n|SUN360|[CVPR](https:\u002F\u002Fvision.cs.princeton.edu\u002Fprojects\u002F2012\u002FSUN360\u002Fpaper.pdf)|真实|图像|基础|[数据集](https:\u002F\u002Fvision.cs.princeton.edu\u002Fprojects\u002F2012\u002FSUN360\u002Fdata\u002F)|\n|Places2|[PAMI](http:\u002F\u002Fplaces2.csail.mit.edu\u002FPAMI_places.pdf)|真实|图像|基础|[数据集](http:\u002F\u002Fplaces2.csail.mit.edu\u002Fdownload.html)|\n|CelebA|[ICCV](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_iccv_2015\u002Fpapers\u002FLiu_Deep_Learning_Face_ICCV_2015_paper.pdf)|真实|图像|基础|[数据集](https:\u002F\u002Fmmlab.ie.cuhk.edu.hk\u002Fprojects\u002FCelebA.html)|\n|1DSfM|[ECCV](https:\u002F\u002Fwww.cs.cornell.edu\u002Fprojects\u002F1dsfm\u002Fdocs\u002F1DSfM_ECCV14.pdf)|真实|图像|焦距|[数据集](https:\u002F\u002Fdaooshee.github.io\u002FBMVC2018website\u002F)|\n|剑桥地标|[ICCV](https:\u002F\u002Fwww.cv-foundation.org\u002Fopenaccess\u002Fcontent_iccv_2015\u002Fpapers\u002FKendall_PoseNet_A_Convolutional_ICCV_2015_paper.pdf)|真实|视频|外参|[数据集](https:\u002F\u002Fwww.repository.cam.ac.uk\u002Fhandle\u002F1810\u002F251342;jsessionid=90AB1617B8707CD387CBF67437683F77)|\n|HLW|[BMVC](http:\u002F\u002Fwww.bmva.org\u002Fbmvc\u002F2016\u002Fpapers\u002Fpaper020\u002Fpaper020.pdf)|真实|图像|地平线|[数据集](https:\u002F\u002Fmvrl.cse.wustl.edu\u002Fdatasets\u002Fhlw\u002F)|\n|YUD|[ECCV](https:\u002F\u002Fwww.elderlab.yorku.ca\u002Fwp-content\u002Fuploads\u002F2016\u002F12\u002FDenisElderEstradaECCV08.pdf)|真实|图像|消失点|[数据集](https:\u002F\u002Fwww.elderlab.yorku.ca\u002FYorkUrbanDB)|\n|ECD|[ECCV](https:\u002F\u002Fwww.robots.ox.ac.uk\u002F~vgg\u002Fpublications\u002
F2010\u002FBarinova10a\u002FBarinova10a.pdf)|真实|图像|消失点|[数据集](https:\u002F\u002Fwww.robots.ox.ac.uk\u002F~vgg\u002Fpublications\u002F2010\u002FBarinova10a\u002FBarinova10a.pdf)|\n|SU3 线框图|[ICCV](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_ICCV_2019\u002Fpapers\u002FZhou_Learning_to_Reconstruct_3D_Manhattan_Wireframes_From_a_Single_Image_ICCV_2019_paper.pdf)|合成|图像|消失点|[数据集](https:\u002F\u002Fgithub.com\u002Fzhou13\u002Fshapeunity)|\n|ScanNet|[CVPR](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2017\u002Fpapers\u002FDai_ScanNet_Richly-Annotated_3D_CVPR_2017_paper.pdf)|真实|视频|外参|[数据集](http:\u002F\u002Fwww.scan-net.org\u002F#code-and-data)|\n|室内-6|[CVPR](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FCVPR2022\u002Fpapers\u002FDo_Learning_To_Detect_Scene_Landmarks_for_Camera_Localization_CVPR_2022_paper.pdf)|真实|图像|外参|[数据集](https:\u002F\u002Fgithub.com\u002Fmicrosoft\u002FSceneLandmarkLocalization)|\n|DeepVP|[ICRA](http:\u002F\u002Filab.usc.edu\u002Fpublications\u002Fdoc\u002FChang_etal18icra.pdf)|真实|图像|消失点|[数据集](http:\u002F\u002Filab.usc.edu\u002Fkai\u002Fdeepvp\u002F)|\n|CAHomo|[ECCV](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1909.05983.pdf)|真实|视频|单应矩阵|[数据集](https:\u002F\u002Fgithub.com\u002FJirongZhang\u002FDeepHomography)|\n|MHN|[CVPR](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_CVPR_2020\u002Fpapers\u002FLe_Deep_Homography_Estimation_for_Dynamic_Scenes_CVPR_2020_paper.pdf)|真实|视频|单应矩阵|[数据集](https:\u002F\u002Fgithub.com\u002Flcmhoang\u002Fhmg-dynamics)|\n|UDIS|[TIP](https:\u002F\u002Farxiv.org\u002Fabs\u002F2106.12859)|真实|视频|单应矩阵|[数据集](https:\u002F\u002Fgithub.com\u002Fnie-lang\u002FUnsupervisedDeepImageStitching)|\n|Carla-RS|[CVPR](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_CVPR_2020\u002Fpapers\u002FLiu_Deep_Shutter_Unrolling_Network_CVPR_2020_paper.pdf)|合成|视频|RS 
畸变|[数据集](https:\u002F\u002Fgithub.com\u002Fethliup\u002FDeepUnrollNet)|\n|Fastec-RS|[CVPR](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_CVPR_2020\u002Fpapers\u002FLiu_Deep_Shutter_Unrolling_Network_CVPR_2020_paper.pdf)|合成|视频|RS 畸变|[数据集](https:\u002F\u002Fgithub.com\u002Fethliup\u002FDeepUnrollNet)|\n|BS-RSC|[CVPR](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FCVPR2022\u002Fpapers\u002FCao_Learning_Adaptive_Warping_for_Real-World_Rolling_Shutter_Correction_CVPR_2022_paper.pdf)|真实|视频|RS 畸变|[数据集](https:\u002F\u002Fgithub.com\u002Fljzycmd\u002FBSRSC)|\n|GEV-RS|[CVPR](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FCVPR2022\u002Fpapers\u002FZhou_EvUnroll_Neuromorphic_Events_Based_Rolling_Shutter_Image_Correction_CVPR_2022_paper.pdf)|真实|视频|RS 畸变|[数据集](https:\u002F\u002Fgithub.com\u002Fzxyemo\u002FEvUnroll)|\n|LMS|[ICASSP](https:\u002F\u002Fieeexplore.ieee.org\u002Fdocument\u002F7471935)|两者|视频|径向畸变|[数据集](https:\u002F\u002Fwww.lms.tf.fau.eu\u002Fresearch\u002Fdownloads\u002Ffisheye-data-set\u002F)|\n|SS-WPC|[CVPR](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FCVPR2022\u002Fpapers\u002FZhu_Semi-Supervised_Wide-Angle_Portraits_Correction_by_Multi-Scale_Transformer_CVPR_2022_paper.pdf)|真实|图像|径向畸变|[数据集](https:\u002F\u002Fgithub.com\u002Fmegvii-research\u002FPortraits_Correction)|\n\n## 📜许可证\n本综述与基准测试仅面向学术研究用途提供。\n\n## 📚引用格式\n```\n@article{kang2023deep,\n  author = {Kang Liao and Lang Nie and Shujuan Huang and Chunyu Lin and Jing Zhang and Yao Zhao and Moncef Gabbouj and Dacheng Tao},\n  title = {Deep Learning for Camera Calibration and Beyond: A Survey},\n  year = {2023},\n  journal = {arXiv:2303.10559}\n}\n```\n## 🚩中文手册\n[导读链接](https:\u002F\u002Fzhuanlan.zhihu.com\u002Fp\u002F619217025)\n\n## 🚀使用本综述的项目与课程\n* [相机标定与姿态估计](https:\u002F\u002Fsites.ecse.rpi.edu\u002F~qji\u002FCV\u002Fcamera_calibration_pose_estimation.pdf)——由美国伦斯勒理工学院的[季强教授](https:\u002F\u002Fsites.ecse.rpi.edu\u002F~qji\u002F)主讲，课程为[ECSE 4961\u002F6650 
计算机视觉](https:\u002F\u002Fsites.ecse.rpi.edu\u002F~qji\u002FCV\u002FECSE%204961-6650%20计算机视觉_2023年秋季学期课程大纲.pdf)。\n\n## 📭联系方式\n\n```\nkang.liao@ntu.edu.sg\n```","# Awesome-Deep-Camera-Calibration 快速上手指南\n\n**Awesome-Deep-Camera-Calibration** 并非一个单一的可执行软件包，而是一个汇集了基于深度学习的相机标定方法、数据集、基准测试（Benchmark）及综述论文的资源库。本指南将帮助你快速获取核心资源、配置环境并运行相关的基准代码。\n\n## 环境准备\n\n在开始之前，请确保你的开发环境满足以下要求：\n\n*   **操作系统**: Linux (推荐 Ubuntu 18.04\u002F20.04) 或 macOS。Windows 用户建议使用 WSL2。\n*   **Python**: 版本 3.7 或更高。\n*   **深度学习框架**: PyTorch (大多数现代方法的首选) 或 TensorFlow (部分早期方法)。\n*   **硬件**: 推荐使用 NVIDIA GPU (CUDA 支持) 以加速模型训练和推理。\n*   **前置依赖**:\n    *   `git`: 用于克隆仓库。\n    *   `pip` 或 `conda`: 用于管理 Python 包。\n    *   基础视觉库: `opencv-python`, `numpy`, `scipy`, `matplotlib`。\n\n## 安装步骤\n\n### 1. 克隆项目仓库\n首先，将资源库克隆到本地。国内用户可使用 Gitee 镜像（如有）或通过代理加速 GitHub 访问。\n\n```bash\ngit clone https:\u002F\u002Fgithub.com\u002FKangLiao929\u002FAwesome-Deep-Camera-Calibration.git\ncd Awesome-Deep-Camera-Calibration\n```\n\n### 2. 创建虚拟环境\n建议使用 Conda 创建独立的虚拟环境以避免依赖冲突。\n\n```bash\nconda create -n deep_calib python=3.8\nconda activate deep_calib\n```\n\n### 3. 安装基础依赖\n安装通用的计算机视觉和数据处理库。\n\n```bash\n# 推荐使用国内镜像源 (如清华源) 加速安装\npip install -i https:\u002F\u002Fpypi.tuna.tsinghua.edu.cn\u002Fsimple opencv-python numpy scipy matplotlib pandas tqdm\n```\n\n### 4. 安装深度学习框架\n根据你的显卡驱动版本选择安装 PyTorch 或 TensorFlow。以下为 PyTorch (CUDA 11.8) 的安装示例：\n\n```bash\n# 注意：pip 的 -i 与 --index-url 是同一选项，不能同时指定两个源；\n# CUDA 版本的轮子需从 PyTorch 官方索引安装\npip install torch torchvision torchaudio --index-url https:\u002F\u002Fdownload.pytorch.org\u002Fwhl\u002Fcu118\n```\n\n### 5. 
获取基准测试数据 (Benchmark)\n该项目提供了构建的基准测试数据集，包含多样化的图像\u002F视频及精确的真值（Ground Truth）。\n\n*   **查看详细说明**: 请参阅 `Benchmark\u002Freadme.md` 文件获取最新的数据集下载链接。\n*   **下载数据**: 通常数据集托管在 Google Drive 或 Baidu Netdisk。下载后解压至项目目录下的 `data\u002F` 文件夹。\n\n```bash\n# 示例：创建数据目录\nmkdir -p data\u002Fbenchmark\n# 请手动下载数据集并放入上述目录，具体链接见 Benchmark\u002Freadme.md\n```\n\n## 基本使用\n\n由于本项目是方法集合，具体的“使用”通常指复现列表中某一种算法或评估基准。以下以**加载并查看基准数据结构**为例，展示如何开始工作。\n\n### 1. 浏览方法列表\n打开根目录下的 `README.md`，在 **Methods** 表格中查找你感兴趣的算法（例如 `DeepCalib`, `PoseNet`, `UprightNet` 等）。每个条目都提供了原始论文链接和代码库链接。\n\n### 2. 运行基准数据加载示例\n假设你已经下载了基准数据，可以使用以下 Python 脚本快速验证数据格式并读取一张标定样本（需根据实际数据结构微调路径）：\n\n```python\nimport cv2\nimport numpy as np\nimport os\nimport json\n\n# 配置数据路径\nDATA_ROOT = \"data\u002Fbenchmark\"\nIMAGE_PATH = os.path.join(DATA_ROOT, \"images\", \"sample_001.jpg\")\nLABEL_PATH = os.path.join(DATA_ROOT, \"labels\", \"sample_001.json\")\n\ndef load_calibration_sample(img_path, label_path):\n    # 读取图像\n    if not os.path.exists(img_path):\n        print(f\"错误：未找到图像文件 {img_path}\")\n        return\n    \n    image = cv2.imread(img_path)\n    \n    # 读取标定真值 (通常为内参、外参或畸变系数)\n    if not os.path.exists(label_path):\n        print(f\"错误：未找到标签文件 {label_path}\")\n        return\n        \n    with open(label_path, 'r') as f:\n        ground_truth = json.load(f)\n    \n    print(\"=== 样本信息 ===\")\n    print(f\"图像尺寸：{image.shape}\")\n    print(f\"标定参数：{ground_truth}\")\n    \n    # 简单可视化 (例如绘制主点)\n    if 'principal_point' in ground_truth:\n        cx, cy = ground_truth['principal_point']\n        cv2.circle(image, (int(cx), int(cy)), 5, (0, 255, 0), -1)\n        cv2.imshow(\"Calibration Sample\", image)\n        cv2.waitKey(0)\n        cv2.destroyAllWindows()\n\n# 执行加载\nif __name__ == \"__main__\":\n    # 注意：请确保已下载真实数据并修改上述路径为有效路径\n    # 此处仅为逻辑演示\n    print(\"请确保已按照 Benchmark\u002Freadme.md 下载数据并更新脚本中的路径。\")\n    # load_calibration_sample(IMAGE_PATH, LABEL_PATH)\n```\n\n### 3. 复现特定算法\n要使用具体的标定模型（如 `DeepCalib`）：\n1.  
点击 README 中该方法的 **Title** 或 **Abbreviation** 链接跳转到原始代码仓库。\n2.  按照该独立仓库的说明安装其特定依赖。\n3.  将本项目的 **Benchmark** 数据作为输入进行测试。\n\n> **提示**: 本项目中的 **Novel Calibration Representations** 章节介绍了最新的几何场表示方法（如 Perspective Field, Camera Rays），适合希望改进传统标定目标函数的研究者参考。","某自动驾驶初创团队正在开发基于多目视觉的感知系统，急需在复杂光照和动态场景下获取高精度的相机内外参数以重建 3D 环境。\n\n### 没有 Awesome-Deep-Camera-Calibration 时\n- **标定流程繁琐低效**：工程师需人工打印棋盘格并在不同位置拍摄大量照片，一旦光线变化或镜头微调，必须重新采集数据并运行传统算法（如 Zhang 氏标定法）。\n- **极端场景失效**：在夜间、强反光或纹理缺失的道路上，传统特征点检测极易失败，导致标定参数误差大，直接影响深度估计精度。\n- **技术选型迷茫**：面对层出不穷的深度学习标定论文，团队难以快速梳理出适合车载嵌入式设备的轻量级模型，研发周期被文献调研严重拖慢。\n- **缺乏统一基准**：自测数据与公开数据集格式不一，无法客观评估新算法相对于现有 SOTA（最先进）方法的真实性能提升。\n\n### 使用 Awesome-Deep-Camera-Calibration 后\n- **端到端智能标定**：团队直接复用综述中推荐的深度学习方案，仅需少量自然图像即可实现端到端参数回归，无需依赖特定标定板，大幅缩短部署时间。\n- **鲁棒性显著增强**：引入基于神经网络的标定表示方法，即使在低光照或运动模糊条件下，也能保持亚像素级的标定精度，提升了感知系统的稳定性。\n- **研发路径清晰**：利用其完善的分类体系（Taxonomy）和统计图表，团队迅速锁定了兼顾精度与速度的模型架构，避免了重复造轮子。\n- **标准化性能评估**：直接使用项目提供的 Benchmark 数据集和评估脚本，量化验证了算法改进效果，确保模型上线前的可靠性。\n\nAwesome-Deep-Camera-Calibration 通过整合前沿理论与标准化基准，将相机标定从耗时的人工实验转变为高效、鲁棒的智能化流程，加速了 3D 视觉应用的落地。","https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FKangLiao929_Awesome-Deep-Camera-Calibration_e2370751.png","KangLiao929","Kang Liao","https:\u002F\u002Foss.gittoolsai.com\u002Favatars\u002FKangLiao929_8a47f2fb.jpg","Research Fellow at the MMLab@NTU, S-Lab","NTU","Singapore",null,"https:\u002F\u002Fkangliao929.github.io\u002F","https:\u002F\u002Fgithub.com\u002FKangLiao929",742,70,"2026-03-10T03:08:58",1,"","未说明",{"notes":92,"python":90,"dependencies":93},"该项目是一个综述列表（Awesome List），主要收集了相机校准相关的论文、数据集和方法，本身不是一个可直接运行的单一软件工具。README 中列出的方法使用了多种不同的深度学习框架（如 Caffe, Torch, TensorFlow, PyTorch）和网络架构，具体运行环境需参考各个独立项目的源代码。",[],[14,37],"2026-03-27T02:49:30.150509","2026-04-06T08:46:28.270051",[98,103,108,113],{"id":99,"question_zh":100,"answer_zh":101,"source_url":102},6534,"基准测试（Benchmark）何时发布？","基准测试已经发布，欢迎使用。此前维护者曾承诺将在综述论文发表前（预计 1-2 
个月内）发布。","https:\u002F\u002Fgithub.com\u002FKangLiao929\u002FAwesome-Deep-Camera-Calibration\u002Fissues\u002F1",{"id":104,"question_zh":105,"answer_zh":106,"source_url":107},6535,"是否可以提供论文《A Deep Ordinal Distortion Estimation Approach for Distortion Rectification》的源代码？","维护者计划于 4 月份提供该论文的代码仓库，请关注项目更新。","https:\u002F\u002Fgithub.com\u002FKangLiao929\u002FAwesome-Deep-Camera-Calibration\u002Fissues\u002F4",{"id":109,"question_zh":110,"answer_zh":111,"source_url":112},6536,"如何获取论文的补充材料（Supplementary Material）链接？","补充材料链接通常可在项目文档或相关论文页面找到，已有用户反馈自行找到了该链接。建议仔细查阅项目 README 或论文主页。","https:\u002F\u002Fgithub.com\u002FKangLiao929\u002FAwesome-Deep-Camera-Calibration\u002Fissues\u002F3",{"id":114,"question_zh":115,"answer_zh":116,"source_url":117},6537,"项目中关于论文'Keypoint-Based LiDAR-Camera Online Calibration With Robust Geometric Network'的网络名称描述是否有误？","是的，原论文中网络名称为 RGKCNet，项目中曾误写为 RKGCNet。维护者已确认该笔误并感谢用户的指正，后续版本中应已修正。","https:\u002F\u002Fgithub.com\u002FKangLiao929\u002FAwesome-Deep-Camera-Calibration\u002Fissues\u002F2",[]]