[{"data":1,"prerenderedAt":-1},["ShallowReactive",2],{"similar-4DVLab--Vision-Centric-BEV-Perception":3,"tool-4DVLab--Vision-Centric-BEV-Perception":61},[4,18,26,36,44,53],{"id":5,"name":6,"github_repo":7,"description_zh":8,"stars":9,"difficulty_score":10,"last_commit_at":11,"category_tags":12,"status":17},4358,"openclaw","openclaw\u002Fopenclaw","OpenClaw 是一款专为个人打造的本地化 AI 助手，旨在让你在自己的设备上拥有完全可控的智能伙伴。它打破了传统 AI 助手局限于特定网页或应用的束缚，能够直接接入你日常使用的各类通讯渠道，包括微信、WhatsApp、Telegram、Discord、iMessage 等数十种平台。无论你在哪个聊天软件中发送消息，OpenClaw 都能即时响应，甚至支持在 macOS、iOS 和 Android 设备上进行语音交互，并提供实时的画布渲染功能供你操控。\n\n这款工具主要解决了用户对数据隐私、响应速度以及“始终在线”体验的需求。通过将 AI 部署在本地，用户无需依赖云端服务即可享受快速、私密的智能辅助，真正实现了“你的数据，你做主”。其独特的技术亮点在于强大的网关架构，将控制平面与核心助手分离，确保跨平台通信的流畅性与扩展性。\n\nOpenClaw 非常适合希望构建个性化工作流的技术爱好者、开发者，以及注重隐私保护且不愿被单一生态绑定的普通用户。只要具备基础的终端操作能力（支持 macOS、Linux 及 Windows WSL2），即可通过简单的命令行引导完成部署。如果你渴望拥有一个懂你",349277,3,"2026-04-06T06:32:30",[13,14,15,16],"Agent","开发框架","图像","数据工具","ready",{"id":19,"name":20,"github_repo":21,"description_zh":22,"stars":23,"difficulty_score":10,"last_commit_at":24,"category_tags":25,"status":17},3808,"stable-diffusion-webui","AUTOMATIC1111\u002Fstable-diffusion-webui","stable-diffusion-webui 是一个基于 Gradio 构建的网页版操作界面，旨在让用户能够轻松地在本地运行和使用强大的 Stable Diffusion 图像生成模型。它解决了原始模型依赖命令行、操作门槛高且功能分散的痛点，将复杂的 AI 绘图流程整合进一个直观易用的图形化平台。\n\n无论是希望快速上手的普通创作者、需要精细控制画面细节的设计师，还是想要深入探索模型潜力的开发者与研究人员，都能从中获益。其核心亮点在于极高的功能丰富度：不仅支持文生图、图生图、局部重绘（Inpainting）和外绘（Outpainting）等基础模式，还独创了注意力机制调整、提示词矩阵、负向提示词以及“高清修复”等高级功能。此外，它内置了 GFPGAN 和 CodeFormer 等人脸修复工具，支持多种神经网络放大算法，并允许用户通过插件系统无限扩展能力。即使是显存有限的设备，stable-diffusion-webui 也提供了相应的优化选项，让高质量的 AI 艺术创作变得触手可及。",162132,"2026-04-05T11:01:52",[14,15,13],{"id":27,"name":28,"github_repo":29,"description_zh":30,"stars":31,"difficulty_score":32,"last_commit_at":33,"category_tags":34,"status":17},1381,"everything-claude-code","affaan-m\u002Feverything-claude-code","everything-claude-code 是一套专为 AI 编程助手（如 Claude Code、Codex、Cursor 等）打造的高性能优化系统。它不仅仅是一组配置文件，而是一个经过长期实战打磨的完整框架，旨在解决 AI 代理在实际开发中面临的效率低下、记忆丢失、安全隐患及缺乏持续学习能力等核心痛点。\n\n通过引入技能模块化、直觉增强、记忆持久化机制以及内置的安全扫描功能，everything-claude-code 能显著提升 AI 在复杂任务中的表现，帮助开发者构建更稳定、更智能的生产级 AI 代理。其独特的“研究优先”开发理念和针对 Token 消耗的优化策略，使得模型响应更快、成本更低，同时有效防御潜在的攻击向量。\n\n这套工具特别适合软件开发者、AI 研究人员以及希望深度定制 AI 工作流的技术团队使用。无论您是在构建大型代码库，还是需要 AI 协助进行安全审计与自动化测试，everything-claude-code 都能提供强大的底层支持。作为一个曾荣获 Anthropic 黑客大奖的开源项目，它融合了多语言支持与丰富的实战钩子（hooks），让 AI 真正成长为懂上",159636,2,"2026-04-17T23:33:34",[14,13,35],"语言模型",{"id":37,"name":38,"github_repo":39,"description_zh":40,"stars":41,"difficulty_score":32,"last_commit_at":42,"category_tags":43,"status":17},2271,"ComfyUI","Comfy-Org\u002FComfyUI","ComfyUI 是一款功能强大且高度模块化的视觉 AI 引擎，专为设计和执行复杂的 Stable Diffusion 图像生成流程而打造。它摒弃了传统的代码编写模式，采用直观的节点式流程图界面，让用户通过连接不同的功能模块即可构建个性化的生成管线。\n\n这一设计巧妙解决了高级 AI 绘图工作流配置复杂、灵活性不足的痛点。用户无需具备编程背景，也能自由组合模型、调整参数并实时预览效果，轻松实现从基础文生图到多步骤高清修复等各类复杂任务。ComfyUI 拥有极佳的兼容性，不仅支持 Windows、macOS 和 Linux 全平台，还广泛适配 NVIDIA、AMD、Intel 及苹果 Silicon 等多种硬件架构，并率先支持 SDXL、Flux、SD3 等前沿模型。\n\n无论是希望深入探索算法潜力的研究人员和开发者，还是追求极致创作自由度的设计师与资深 AI 绘画爱好者，ComfyUI 都能提供强大的支持。其独特的模块化架构允许社区不断扩展新功能，使其成为当前最灵活、生态最丰富的开源扩散模型工具之一，帮助用户将创意高效转化为现实。",108322,"2026-04-10T11:39:34",[14,15,13],{"id":45,"name":46,"github_repo":47,"description_zh":48,"stars":49,"difficulty_score":32,"last_commit_at":50,"category_tags":51,"status":17},6121,"gemini-cli","google-gemini\u002Fgemini-cli","gemini-cli 是一款由谷歌推出的开源 AI 命令行工具，它将强大的 Gemini 大模型能力直接集成到用户的终端环境中。对于习惯在命令行工作的开发者而言，它提供了一条从输入提示词到获取模型响应的最短路径，无需切换窗口即可享受智能辅助。\n\n这款工具主要解决了开发过程中频繁上下文切换的痛点，让用户能在熟悉的终端界面内直接完成代码理解、生成、调试以及自动化运维任务。无论是查询大型代码库、根据草图生成应用，还是执行复杂的 Git 操作，gemini-cli 
# Vision-Centric-BEV-Perception

Vision-Centric BEV Perception: A Survey

![](https://oss.gittoolsai.com/images/4DVLab_Vision-Centric-BEV-Perception_readme_70aff8d09916.png)

## Introduction

Vision-Centric-BEV-Perception is a survey resource for vision-centric bird's-eye-view (BEV) perception. It addresses a core problem in autonomous driving: converting the multi-view 2D images captured by onboard cameras into an accurate, top-down 3D representation. A unified BEV space makes multi-camera data easier to fuse and improves a vehicle's understanding of its surroundings, supporting path planning and obstacle detection.

The collection organizes the field's main technical routes, geometry-based transformation (e.g. homography) and depth-based lifting (e.g. the Lift-Splat-Shoot mechanism), alongside network-based (MLP and Transformer) approaches. It gathers dozens of key papers from early classics to recent work, together with chronological overviews and a dataset summary, and is aimed at autonomous-driving researchers, computer-vision developers, and students who want a fast entry point into BEV perception or a way to compare competing designs.

### (1) Datasets
![](https://oss.gittoolsai.com/images/4DVLab_Vision-Centric-BEV-Perception_readme_755c421a08cf.png)

### (2) Geometry-Based PV2BEV
#### Homography-based PV2BEV
Public Papers:
- IPM: Inverse perspective mapping simplifies optical flow computation and obstacle detection (Biological Cybernetics'1991) [[paper]](https://link.springer.com/article/10.1007/BF00201978)
- DSM: Automatic Dense Visual Semantic Mapping from Street-Level Imagery (IROS'12) [[paper]](https://www.robots.ox.ac.uk/~tvg/publications/2012/IROS_Mapping_ss.pdf)
- MapV: Learning to map vehicles into bird's eye view (ICIAP'17) [[paper]](https://arxiv.org/abs/1706.08442)
- BridgeGAN: Generative Adversarial Frontal View to Bird View Synthesis (3DV'18) [[paper]](https://arxiv.org/pdf/1808.00327.pdf) [[project page]](https://github.com/xinge008/BridgeGAN)
- VPOE: Deep learning based vehicle position and orientation estimation via inverse perspective mapping image (IV'19) [[paper]](https://ieeexplore.ieee.org/document/8814050)
- 3D-LaneNet: End-to-End 3D Multiple Lane Detection (ICCV'19) [[paper]](https://openaccess.thecvf.com/content_ICCV_2019/papers/Garnett_3D-LaneNet_End-to-End_3D_Multiple_Lane_Detection_ICCV_2019_paper.pdf)
- The Right (Angled) Perspective: Improving the Understanding of Road Scenes Using Boosted Inverse Perspective Mapping (IV'19) [[paper]](https://arxiv.org/abs/1812.00913)
- Cam2BEV: A Sim2Real Deep Learning Approach for the Transformation of Images from Multiple Vehicle-Mounted Cameras to a Semantically Segmented Image in Bird's Eye View (ITSC'20) [[paper]](https://arxiv.org/abs/2005.04078) [[project page]](https://github.com/ika-rwth-aachen/Cam2BEV)
- MonoLayout: Amodal Scene Layout from a Single Image (WACV'20) [[paper]](https://arxiv.org/abs/2002.08394) [[project page]](https://github.com/hbutsuak95/monolayout)
- MVNet: Multiview Detection with Feature Perspective Transformation (ECCV'20) [[paper]](https://arxiv.org/abs/2007.07247) [[project page]](https://github.com/hou-yz/MVDet)
- OGMs: Driving among Flatmobiles: Bird-Eye-View occupancy grids from a monocular camera for holistic trajectory planning (WACV'21) [[paper]](https://arxiv.org/abs/2008.04047)
- TrafCam3D: Monocular 3D Vehicle Detection Using Uncalibrated Traffic Cameras through Homography (IROS'21) [[paper]](https://arxiv.org/pdf/2103.15293.pdf) [[project page]](https://github.com/minghanz/trafcam_3d)
- SHOT: Stacked Homography Transformations for Multi-View Pedestrian Detection (ICCV'21) [[paper]](https://openaccess.thecvf.com/content/ICCV2021/papers/Song_Stacked_Homography_Transformations_for_Multi-View_Pedestrian_Detection_ICCV_2021_paper.pdf)
- HomoLoss: Homography Loss for Monocular 3D Object Detection (CVPR'22) [[paper]](https://arxiv.org/abs/2204.00754)

Chronological Overview:

![](https://oss.gittoolsai.com/images/4DVLab_Vision-Centric-BEV-Perception_readme_957d3da3d2cc.png)
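All of the entries above share one geometric primitive: a 3×3 homography between the ground plane and the image plane, so a BEV view is just an inverse perspective warp of the camera frame. Below is a minimal IPM sketch in Python; the intrinsics, camera height, pitch, and BEV extent are illustrative placeholders, not values from any listed paper.

```python
import numpy as np
import cv2  # opencv-python

# Illustrative pinhole calibration: camera 1.5 m above a flat road,
# pitched 5 degrees downward (axes: x right, y down, z forward).
K = np.array([[800.0,   0.0, 640.0],
              [  0.0, 800.0, 360.0],
              [  0.0,   0.0,   1.0]])
pitch = np.deg2rad(5.0)
R = np.array([[1.0, 0.0, 0.0],
              [0.0, np.cos(pitch), -np.sin(pitch)],
              [0.0, np.sin(pitch),  np.cos(pitch)]])
cam_height = 1.5

# BEV raster: 400x400 px covering x in [-10, 10] m, z in [5, 25] m (0.05 m/px).
res, x_min, z_max, size = 0.05, -10.0, 25.0, 400
# Affine map from a BEV pixel (u, v, 1) to the ground point (X, cam_height, Z).
A = np.array([[res,  0.0, x_min],
              [0.0,  0.0, cam_height],
              [0.0, -res, z_max]])

H_bev2img = K @ R @ A  # homography: BEV pixel -> image pixel (up to scale)
img = np.full((720, 1280, 3), 128, np.uint8)  # stand-in for a camera frame
bev = cv2.warpPerspective(img, np.linalg.inv(H_bev2img), (size, size))
print(bev.shape)  # (400, 400, 3)
```

The flat-ground assumption is exactly what the boosted and learned variants above (e.g. The Right (Angled) Perspective, Cam2BEV) try to relax: anything with height, such as vehicles and pedestrians, gets smeared by this warp.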
#### Depth-based PV2BEV
Public Papers:
- OFT: Orthographic Feature Transform for Monocular 3D Object Detection (BMVC'19) [[paper]](https://arxiv.org/pdf/1811.08188) [[project page]](https://github.com/tom-roddick/oft)
- CaDDN: Categorical Depth Distribution Network for Monocular 3D Object Detection (CVPR'21) [[paper]](https://openaccess.thecvf.com/content/CVPR2021/papers/Reading_Categorical_Depth_Distribution_Network_for_Monocular_3D_Object_Detection_CVPR_2021_paper.pdf) [[project page]](https://github.com/TRAILab/CaDDN)
- DSGN: Deep Stereo Geometry Network for 3D Object Detection (CVPR'20) [[paper]](https://arxiv.org/pdf/2001.03398) [[project page]](https://github.com/dvlab-research/DSGN)
- Lift, Splat, Shoot: Encoding Images From Arbitrary Camera Rigs by Implicitly Unprojecting to 3D (ECCV'20) [[paper]](https://arxiv.org/pdf/2008.05711.pdf) [[project page]](https://nv-tlabs.github.io/lift-splat-shoot/)
- PanopticSeg: Bird's-Eye-View Panoptic Segmentation Using Monocular Frontal View Images (RA-L'22) [[paper]](https://ieeexplore.ieee.org/document/9681287) [[project page]](http://panoptic-bev.cs.uni-freiburg.de/)
- FIERY: Future Instance Prediction in Bird's-Eye View from Surround Monocular Cameras (ICCV'21) [[paper]](https://openaccess.thecvf.com/content/ICCV2021/papers/Hu_FIERY_Future_Instance_Prediction_in_Birds-Eye_View_From_Surround_Monocular_ICCV_2021_paper.pdf) [[project page]](https://github.com/wayveai/fiery)
- LIGA-Stereo: Learning LiDAR Geometry Aware Representations for Stereo-based 3D Detector (ICCV'21) [[paper]](https://xy-guo.github.io/liga/liga-guo-iccv21.pdf) [[project page]](https://xy-guo.github.io/liga/)
- ImVoxelNet: Image to Voxels Projection for Monocular and Multi-View General-Purpose 3D Object Detection (WACV'22) [[paper]](https://arxiv.org/pdf/2106.01178.pdf) [[project page]](https://github.com/saic-vul/imvoxelnet)
- BEVDet: High-performance Multi-camera 3D Object Detection in Bird-Eye-View (Arxiv'21) [[paper]](https://arxiv.org/pdf/2112.11790) [[project page]](https://github.com/HuangJunJie2017/BEVDet)
- M^2BEV: Multi-Camera Joint 3D Detection and Segmentation with Unified Bird's-Eye View Representation (Arxiv'22) [[paper]](https://arxiv.org/pdf/2204.05088) [[project page]](https://nvlabs.github.io/M2BEV/)
- StretchBEV: Stretching Future Instance Prediction Spatially and Temporally (ECCV'22) [[paper]](https://arxiv.org/pdf/2203.13641) [[project page]](https://github.com/kaanakan/stretchbev)
- DfM: Monocular 3D Object Detection with Depth from Motion (ECCV'22) [[paper]](https://arxiv.org/pdf/2207.12988.pdf) [[project page]](https://github.com/Tai-Wang/Depth-from-Motion)
- BEVDet4D: Exploit Temporal Cues in Multi-camera 3D Object Detection (Arxiv'22) [[paper]](https://arxiv.org/pdf/2203.17054) [[project page]](https://github.com/HuangJunJie2017/BEVDet)
- BEVerse: Unified Perception and Prediction in Birds-Eye-View for Vision-Centric Autonomous Driving (Arxiv'22) [[paper]](https://arxiv.org/pdf/2205.09743) [[project page]](https://github.com/zhangyp15/BEVerse)
- MV-FCOS3D++: Multi-View Camera-Only 4D Object Detection with Pretrained Monocular Backbones (Arxiv'22) [[paper]](https://arxiv.org/pdf/2207.12716) [[project page]](https://github.com/Tai-Wang/Depth-from-Motion)
- Putting People in their Place: Monocular Regression of 3D People in Depth (CVPR'22) [[paper]](https://arxiv.org/abs/2112.08274) [[code]](https://github.com/Arthur151/ROMP) [[project page]](https://arthur151.github.io/BEV/BEV.html) [[video]](https://youtu.be/Q62fj_6AxRI) [[RH dataset]](https://github.com/Arthur151/Relative_Human)

Chronological Overview:

![](https://oss.gittoolsai.com/images/4DVLab_Vision-Centric-BEV-Perception_readme_bafff72e2968.png)

Benchmark Results:

![](https://oss.gittoolsai.com/images/4DVLab_Vision-Centric-BEV-Perception_readme_c3595762f2f4.png)
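A good share of this list (Lift-Splat-Shoot and its descendants BEVDet, BEVDet4D, BEVerse) revolves around one "lift" operation: predict a categorical depth distribution per pixel and take its outer product with that pixel's feature vector, filling a camera frustum with 3D features. A toy PyTorch sketch of just that step, with made-up tensor sizes:

```python
import torch

B, C, H, W, D = 1, 64, 16, 44, 32          # illustrative sizes
feat = torch.randn(B, C, H, W)             # image features from a backbone
depth_logits = torch.randn(B, D, H, W)     # per-pixel depth-bin scores

depth_prob = depth_logits.softmax(dim=1)   # categorical depth distribution
# Outer product: each pixel's feature is smeared along its depth distribution,
# yielding a (B, C, D, H, W) frustum of 3D features.
frustum = depth_prob.unsqueeze(1) * feat.unsqueeze(2)
# The "splat" step would then scatter these frustum features into BEV cells
# using each (d, h, w) cell's precomputed 3D location (omitted here).
print(frustum.shape)  # torch.Size([1, 64, 32, 16, 44])
```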
src=\".\u002Fdepth-based-overview.png\" width=\"95%\"> \u003C\u002Fp>)\n![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002F4DVLab_Vision-Centric-BEV-Perception_readme_bafff72e2968.png)\n\nBenchmark Results:\n\n[comment]: \u003C> (\u003Cp align=\"center\"> \u003Cimg src=\".\u002Fdepth-based%20results.png\" width=\"95%\"> \u003C\u002Fp>)\n![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002F4DVLab_Vision-Centric-BEV-Perception_readme_c3595762f2f4.png)\n\n### (3) NETWORK BASED PV2BEV\n#### MLP based PV2BEV\nPublic Papers:\n- VED: Monocular Semantic Occupancy Grid Mapping with Convolutional Variational Encoder-Decoder Networks (RA-L'19) [[paper]](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1804.02176.pdf) [[project page]](https:\u002F\u002Fgithub.com\u002FChenyang-Lu\u002Fmono-semantic-occupancy)\n- VPN: Cross-view Semantic Segmentation for Sensing Surroundings (IROS'20) [[paper]](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1906.03560.pdf) [[project page]](https:\u002F\u002Fview-parsing-network.github.io\u002F)\n- FishingNet: Future Inference of Semantic Heatmaps In Grids (Arxiv'20) [[paper]](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2006.09917)\n- PON: Predicting Semantic Map Representations from Images using Pyramid Occupancy Networks (CVPR'20) [[paper]](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_CVPR_2020\u002Fpapers\u002FRoddick_Predicting_Semantic_Map_Representations_From_Images_Using_Pyramid_Occupancy_Networks_CVPR_2020_paper.pdf) [[project page]](https:\u002F\u002Fgithub.com\u002Ftom-roddick\u002Fmono-semantic-maps)\n- STA-ST: Enabling spatio-temporal aggregation in Birds-Eye-View Vehicle Estimation (ICRA'21) [[paper]](https:\u002F\u002Fcvssp.org\u002FPersonal\u002FOscarMendez\u002Fpapers\u002Fpdf\u002FSahaICRA2021.pdf) \n- HDMapNet: An Online HD Map Construction and Evaluation Framework (ICRA'22) [[paper]](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2107.06307) [[project page]](https:\u002F\u002Ftsinghua-mars-lab.github.io\u002FHDMapNet\u002F)\n- Projecting Your View Attentively: Monocular Road Scene Layout Estimation via Cross-view Transformation (CVPR'21) [[paper]](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FCVPR2021\u002Fpapers\u002FYang_Projecting_Your_View_Attentively_Monocular_Road_Scene_Layout_Estimation_via_CVPR_2021_paper.pdf) [[project page]](https:\u002F\u002Fgithub.com\u002FJonDoe-297\u002Fcross-view)\n- HFT: Lifting Perspective Representations via Hybrid Feature Transformation (Arxiv'22) [[paper]](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2204.05068) [[project page]](https:\u002F\u002Fgithub.com\u002FJiayuZou2020\u002FHFT)\n\nChronological Overview:\n\n[comment]: \u003C> (\u003Cp align=\"center\"> \u003Cimg src=\".\u002FMLP-based-overview.png\" width=\"95%\"> \u003C\u002Fp>)\n![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002F4DVLab_Vision-Centric-BEV-Perception_readme_5b6c08951600.png)\n\nBenchmark Results:\n\n[comment]: \u003C> (\u003Cp align=\"center\"> \u003Cimg src=\".\u002FMLP-based-result.png\" width=\"95%\"> \u003C\u002Fp>)\n![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002F4DVLab_Vision-Centric-BEV-Perception_readme_c95905d82595.png)\n\n#### Transformer based PV2BEV\nPublic Papers:\n- STSU: Structured Bird’s-Eye-View Traffic Scene Understanding from Onboard Images (ICCV'21) [[paper]](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2110.01997) [[project page]](https:\u002F\u002Fgithub.com\u002Fybarancan\u002FSTSU)\n- Image2Map: Translating Images into Maps (ICRA'22) 
#### Transformer-based PV2BEV
Public Papers:
- STSU: Structured Bird's-Eye-View Traffic Scene Understanding from Onboard Images (ICCV'21) [[paper]](https://arxiv.org/pdf/2110.01997) [[project page]](https://github.com/ybarancan/STSU)
- Image2Map: Translating Images into Maps (ICRA'22) [[paper]](https://arxiv.org/pdf/2110.00966.pdf) [[project page]](https://github.com/avishkarsaha/translating-images-into-maps)
- DETR3D: 3D Object Detection from Multi-view Images via 3D-to-2D Queries (CoRL'21) [[paper]](https://arxiv.org/pdf/2110.06922.pdf) [[project page]](https://github.com/WangYueFt/detr3d)
- TopologyPL: Topology Preserving Local Road Network Estimation from Single Onboard Camera Image (CVPR'22) [[paper]](https://arxiv.org/pdf/2112.10155.pdf) [[project page]](https://github.com/ybarancan/TopologicalLaneGraph)
- PETR: Position Embedding Transformation for Multi-View 3D Object Detection (ECCV'22) [[paper]](https://arxiv.org/pdf/2203.05625) [[project page]](https://github.com/megvii-research/PETR)
- BEVSegFormer: Bird's Eye View Semantic Segmentation From Arbitrary Camera Rigs (Arxiv'22) [[paper]](https://arxiv.org/pdf/2203.04050)
- PersFormer: a New Baseline for 3D Laneline Detection (ECCV'22) [[paper]](https://arxiv.org/pdf/2203.11089) [[project page]](https://github.com/OpenPerceptionX/PersFormer_3DLane)
- MonoDTR: Monocular 3D Object Detection with Depth-Aware Transformer (CVPR'22) [[paper]](https://arxiv.org/pdf/2203.10981) [[project page]](https://github.com/kuanchihhuang/MonoDTR)
- MonoDETR: Depth-guided Transformer for Monocular 3D Object Detection (Arxiv'22) [[paper]](https://arxiv.org/pdf/2203.13310.pdf) [[project page]](https://github.com/ZrrSkywalker/MonoDETR)
- BEVFormer: Learning Bird's-Eye-View Representation from Multi-Camera Images via Spatiotemporal Transformers (ECCV'22) [[paper]](https://arxiv.org/pdf/2203.17270v1.pdf) [[project page]](https://github.com/zhiqi-li/BEVFormer)
- GitNet: Geometric Prior-based Transformation for Birds-Eye-View Segmentation (ECCV'22) [[paper]](https://arxiv.org/pdf/2204.07733)
- Graph-DETR3D: Rethinking Overlapping Regions for Multi-View 3D Object Detection (MM'22) [[paper]](https://arxiv.org/pdf/2204.11582)
- CVT: Cross-view Transformers for real-time Map-view Semantic Segmentation (CVPR'22) [[paper]](https://openaccess.thecvf.com/content/CVPR2022/papers/Zhou_Cross-View_Transformers_for_Real-Time_Map-View_Semantic_Segmentation_CVPR_2022_paper.pdf) [[project page]](https://github.com/bradyz/)
- PETRv2: A Unified Framework for 3D Perception from Multi-Camera Images (Arxiv'22) [[paper]](https://arxiv.org/pdf/2206.01256) [[project page]](https://github.com/megvii-research/PETR)
- Ego3RT: Learning Ego 3D Representation as Ray Tracing (ECCV'22) [[paper]](https://arxiv.org/pdf/2206.04042.pdf) [[project page]](https://github.com/fudan-zvg/Ego3RT)
- GKT: Efficient and Robust 2D-to-BEV Representation Learning via Geometry-guided Kernel Transformer (Arxiv'22) [[paper]](https://arxiv.org/pdf/2206.04584.pdf) [[project page]](https://github.com/hustvl/GKT)
- PolarDETR: Polar Parametrization for Vision-based Surround-View 3D Detection (Arxiv'22) [[paper]](https://arxiv.org/pdf/2206.10965) [[project page]](https://github.com/hustvl/PolarDETR)
- LaRa: Latents and Rays for Multi-Camera Bird's-Eye-View Semantic Segmentation (Arxiv'22) [[paper]](https://arxiv.org/pdf/2206.13294)
- SRCN3D: Sparse R-CNN 3D Surround-View Cameras 3D Object Detection and Tracking for Autonomous Driving (Arxiv'22) [[paper]](https://arxiv.org/pdf/2206.14451) [[project page]](https://github.com/synsin0/SRCN3D)
- PolarFormer: Multi-camera 3D Object Detection with Polar Transformers (Arxiv'22) [[paper]](https://arxiv.org/pdf/2206.15398) [[project page]](https://github.com/fudan-zvg/PolarFormer)
- ORA3D: Overlap Region Aware Multi-view 3D Object Detection (Arxiv'22) [[paper]](https://arxiv.org/pdf/2207.00865)
- CoBEVT: Cooperative Bird's Eye View Semantic Segmentation with Sparse Transformers (Arxiv'22) [[paper]](https://arxiv.org/pdf/2207.02202.pdf)

Chronological Overview:

![](https://oss.gittoolsai.com/images/4DVLab_Vision-Centric-BEV-Perception_readme_d270fbd06cde.png)

Benchmark Results:

![](https://oss.gittoolsai.com/images/4DVLab_Vision-Centric-BEV-Perception_readme_4eb575f82029.png)
![](https://oss.gittoolsai.com/images/4DVLab_Vision-Centric-BEV-Perception_readme_a4b33b636ccb.png)
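The unifying idea in this family is a set of BEV-anchored queries that cross-attend to multi-camera image features: DETR3D samples features at projected 3D reference points and BEVFormer uses deformable attention, but a dense cross-attention already conveys the data flow. A simplified sketch, with all sizes illustrative:

```python
import torch
import torch.nn as nn

B, C = 1, 64
n_query = 50 * 50           # one learnable query per BEV cell (illustrative)
n_kv = 6 * 16 * 44          # flattened features from six camera views

bev_queries = nn.Parameter(torch.randn(n_query, C))  # learned BEV queries
img_feats = torch.randn(B, n_kv, C)                  # stand-in backbone output

# Dense cross-attention: every BEV query attends over all image tokens.
attn = nn.MultiheadAttention(embed_dim=C, num_heads=4, batch_first=True)
q = bev_queries.unsqueeze(0).expand(B, -1, -1)
bev_feats, _ = attn(query=q, key=img_feats, value=img_feats)
bev_grid = bev_feats.transpose(1, 2).reshape(B, C, 50, 50)
print(bev_grid.shape)  # torch.Size([1, 64, 50, 50])
```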
### (4) Extension
#### Multi-Task Learning under BEV
- FIERY: Future Instance Prediction in Bird's-Eye View from Surround Monocular Cameras (ICCV'21) [[paper]](https://openaccess.thecvf.com/content/ICCV2021/papers/Hu_FIERY_Future_Instance_Prediction_in_Birds-Eye_View_From_Surround_Monocular_ICCV_2021_paper.pdf) [[project page]](https://github.com/wayveai/fiery)
- StretchBEV: Stretching Future Instance Prediction Spatially and Temporally (ECCV'22) [[paper]](https://arxiv.org/pdf/2203.13641) [[project page]](https://github.com/kaanakan/stretchbev)
- BEVerse: Unified Perception and Prediction in Birds-Eye-View for Vision-Centric Autonomous Driving (Arxiv'22) [[paper]](https://arxiv.org/pdf/2205.09743) [[project page]](https://github.com/zhangyp15/BEVerse)
- M^2BEV: Multi-Camera Joint 3D Detection and Segmentation with Unified Bird's-Eye View Representation (Arxiv'22) [[paper]](https://arxiv.org/pdf/2204.05088) [[project page]](https://nvlabs.github.io/M2BEV/)
- STSU: Structured Bird's-Eye-View Traffic Scene Understanding from Onboard Images (ICCV'21) [[paper]](https://arxiv.org/pdf/2110.01997) [[project page]](https://github.com/ybarancan/STSU)
- BEVFormer: Learning Bird's-Eye-View Representation from Multi-Camera Images via Spatiotemporal Transformers (ECCV'22) [[paper]](https://arxiv.org/pdf/2203.17270v1.pdf) [[project page]](https://github.com/zhiqi-li/BEVFormer)
- Ego3RT: Learning Ego 3D Representation as Ray Tracing (ECCV'22) [[paper]](https://arxiv.org/pdf/2206.04042.pdf) [[project page]](https://github.com/fudan-zvg/Ego3RT)
- PETRv2: A Unified Framework for 3D Perception from Multi-Camera Images (Arxiv'22) [[paper]](https://arxiv.org/pdf/2206.01256) [[project page]](https://github.com/megvii-research/PETR)
- PolarFormer: Multi-camera 3D Object Detection with Polar Transformers (Arxiv'22) [[paper]](https://arxiv.org/pdf/2206.15398) [[project page]](https://github.com/fudan-zvg/PolarFormer)

![](https://oss.gittoolsai.com/images/4DVLab_Vision-Centric-BEV-Perception_readme_153d0111bdb3.png)
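Architecturally, most of these multi-task systems hang several lightweight heads off one shared BEV feature map, so detection and map segmentation are supervised jointly. A toy sketch of that pattern (the channel counts and head designs are illustrative, not taken from any specific paper):

```python
import torch
import torch.nn as nn

class BEVMultiTaskHead(nn.Module):
    """Toy multi-task setup: one shared BEV feature map feeding separate
    detection and map-segmentation heads (sizes are illustrative)."""
    def __init__(self, c_bev=64, n_det=10, n_seg=4):
        super().__init__()
        self.det_head = nn.Conv2d(c_bev, n_det, kernel_size=1)  # per-cell object scores
        self.seg_head = nn.Conv2d(c_bev, n_seg, kernel_size=1)  # per-cell map classes

    def forward(self, bev):  # bev: (B, C, H, W), from any PV2BEV transform above
        return self.det_head(bev), self.seg_head(bev)

det, seg = BEVMultiTaskHead()(torch.randn(1, 64, 128, 128))
print(det.shape, seg.shape)  # (1, 10, 128, 128) (1, 4, 128, 128)
```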
#### Fusion under BEV
Multi-Modality Fusion:
- PointPainting: Sequential Fusion for 3D Object Detection (CVPR'20) [[paper]](https://arxiv.org/pdf/1911.10150.pdf) [[project page]](https://github.com/AmrElsersy/PointPainting)
- 3D-CVF: Generating Joint Camera and LiDAR Features Using Cross-View Spatial Feature Fusion for 3D Object Detection (ECCV'20) [[paper]](https://arxiv.org/pdf/2004.12636.pdf) [[project page]](https://github.com/rasd3/3D-CVF)
- FUTR3D: A Unified Sensor Fusion Framework for 3D Detection (Arxiv'22) [[paper]](https://arxiv.org/pdf/2203.10642) [[project page]](https://github.com/Tsinghua-MARS-Lab/futr3d)
- MVP: Multimodal Virtual Point 3D Detection (NIPS'21) [[paper]](https://arxiv.org/pdf/2111.06881.pdf) [[project page]](https://tianweiy.github.io/mvp/)
- PointAugmenting: Cross-Modal Augmentation for 3D Object Detection (CVPR'21) [[paper]](https://openaccess.thecvf.com/content/CVPR2021/html/Wang_PointAugmenting_Cross-Modal_Augmentation_for_3D_Object_Detection_CVPR_2021_paper.html) [[project page]](https://github.com/VISION-SJTU/PointAugmenting)
- FusionPainting: Multimodal Fusion with Adaptive Attention for 3D Object Detection (ITSC'21) [[paper]](https://arxiv.org/pdf/2106.12449) [[project page]](https://github.com/Shaoqing26/FusionPainting)
- Unifying Voxel-based Representation with Transformer for 3D Object Detection (Arxiv'22) [[paper]](https://arxiv.org/pdf/2206.00630) [[project page]](https://github.com/dvlab-research/UVTR)
- TransFusion: Robust LiDAR-Camera Fusion for 3D Object Detection with Transformers (CVPR'22) [[paper]](https://arxiv.org/pdf/2203.11496) [[project page]](https://github.com/XuyangBai/TransFusion)
- AutoAlign: Pixel-Instance Feature Aggregation for Multi-Modal 3D Object Detection (IJCAI'22) [[paper]](https://arxiv.org/pdf/2201.06493) [[project page]](https://github.com/zehuichen123/AutoAlignV2)
- AutoAlignV2: Deformable Feature Aggregation for Dynamic Multi-Modal 3D Object Detection (ECCV'22) [[paper]](https://arxiv.org/pdf/2207.10316v1.pdf) [[project page]](https://github.com/zehuichen123/AutoAlignV2)
- CenterFusion: Center-based Radar and Camera Fusion for 3D Object Detection (WACV'21) [[paper]](https://arxiv.org/pdf/2011.04841) [[project page]](https://github.com/mrnabati/CenterFusion)
- MSMDFusion: Fusing LiDAR and Camera at Multiple Scales with Multi-Depth Seeds for 3D Object Detection (Arxiv'22) [[paper]](https://arxiv.org/pdf/2209.03102.pdf) [[project page]](https://github.com/SxJyJay/MSMDFusion)

![](https://oss.gittoolsai.com/images/4DVLab_Vision-Centric-BEV-Perception_readme_2d1753f2503b.png)

Temporal Fusion:
- BEVDet4D: Exploit Temporal Cues in Multi-camera 3D Object Detection (Arxiv'22) [[paper]](https://arxiv.org/pdf/2203.17054) [[project page]](https://github.com/HuangJunJie2017/BEVDet)
- Image2Map: Translating Images into Maps (ICRA'22) [[paper]](https://arxiv.org/pdf/2110.00966.pdf) [[project page]](https://github.com/avishkarsaha/translating-images-into-maps)
- FIERY: Future Instance Prediction in Bird's-Eye View from Surround Monocular Cameras (ICCV'21) [[paper]](https://openaccess.thecvf.com/content/ICCV2021/papers/Hu_FIERY_Future_Instance_Prediction_in_Birds-Eye_View_From_Surround_Monocular_ICCV_2021_paper.pdf) [[project page]](https://github.com/wayveai/fiery)
- Ego3RT: Learning Ego 3D Representation as Ray Tracing (ECCV'22) [[paper]](https://arxiv.org/pdf/2206.04042.pdf) [[project page]](https://github.com/fudan-zvg/Ego3RT)
- PolarFormer: Multi-camera 3D Object Detection with Polar Transformers (Arxiv'22) [[paper]](https://arxiv.org/pdf/2206.15398) [[project page]](https://github.com/fudan-zvg/PolarFormer)
- BEVStitch: Understanding Bird's-Eye View of Road Semantics using an Onboard Camera (ICRA'22) [[paper]](https://arxiv.org/pdf/2012.03040.pdf) [[project page]](https://github.com/ybarancan/BEV_feat_stitch)
- PETRv2: A Unified Framework for 3D Perception from Multi-Camera Images (Arxiv'22) [[paper]](https://arxiv.org/pdf/2206.01256) [[project page]](https://github.com/megvii-research/PETR)
- BEVFormer: Learning Bird's-Eye-View Representation from Multi-Camera Images via Spatiotemporal Transformers (ECCV'22) [[paper]](https://arxiv.org/pdf/2203.17270v1.pdf) [[project page]](https://github.com/zhiqi-li/BEVFormer)
- UniFormer: Unified Multi-view Fusion Transformer for Spatial-Temporal Representation in Bird's-Eye-View (Arxiv'22) [[paper]](https://arxiv.org/pdf/2207.08536)
- DfM: Monocular 3D Object Detection with Depth from Motion (ECCV'22) [[paper]](https://arxiv.org/pdf/2207.12988.pdf) [[project page]](https://github.com/Tai-Wang/Depth-from-Motion)

![](https://oss.gittoolsai.com/images/4DVLab_Vision-Centric-BEV-Perception_readme_3e95ed18a4be.png)
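Most of the temporal-fusion entries share one primitive: warp the previous frame's BEV features into the current ego frame using ego-motion before fusing them with the current features, as in BEVDet4D. A sketch reduced to a single SE(2) transform; the sign conventions and normalized translation below are illustrative, not a specific paper's implementation:

```python
import math
import torch
import torch.nn.functional as F

def warp_prev_bev(prev_bev, dx, dy, dtheta):
    """Align the last frame's BEV features to the current ego pose.
    prev_bev: (B, C, H, W); dx, dy are the ego shift in BEV pixels,
    dtheta is the yaw change in radians."""
    b, _, h, w = prev_bev.shape
    cos, sin = math.cos(dtheta), math.sin(dtheta)
    # affine_grid takes the output->input mapping in normalized [-1, 1] coords.
    theta = torch.tensor([[cos, -sin, 2.0 * dx / w],
                          [sin,  cos, 2.0 * dy / h]]).repeat(b, 1, 1)
    grid = F.affine_grid(theta, list(prev_bev.shape), align_corners=False)
    return F.grid_sample(prev_bev, grid, align_corners=False)

prev_bev = torch.randn(1, 64, 128, 128)
curr_bev = torch.randn(1, 64, 128, 128)
aligned = warp_prev_bev(prev_bev, dx=2.0, dy=0.0, dtheta=0.05)
fused = torch.cat([aligned, curr_bev], dim=1)  # then fed to a BEV encoder
print(fused.shape)  # torch.Size([1, 128, 128, 128])
```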
width=\"95%\"> \u003C\u002Fp>)\n![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002F4DVLab_Vision-Centric-BEV-Perception_readme_6e4ad7c80717.png)\n\n### Citation\n\nIf you find our work useful in your research, please consider citing:\n\n```tex\n@inproceedings{Ma2022VisionCentricBP,\n  title={Vision-Centric BEV Perception: A Survey},\n  author={Yuexin Ma and Tai Wang and Xuyang Bai and Huitong Yang and Yuenan Hou and Yaming Wang and Y. Qiao and Ruigang Yang and Dinesh Manocha and Xinge Zhu},\n  year={2022}\n}\n```\n\n## Contributing\n\nPlease feel free to submit a pull request to add the new paper or related project page.\n\n\n## Related Repos\n\n","# 视觉中心的BEV感知\n视觉中心的BEV感知：综述\n\n[comment]: \u003C> (\u003Cp align=\"center\"> \u003Cimg src=\".\u002Ftaxonomy_bev.png\" width=\"95%\"> \u003C\u002Fp>)\n![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002F4DVLab_Vision-Centric-BEV-Perception_readme_70aff8d09916.png)\n\n## 引言\n\n\n\n### (1) 数据集\n![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002F4DVLab_Vision-Centric-BEV-Perception_readme_755c421a08cf.png)\n\n[comment]: \u003C> (\u003Cp align=\"center\"> \u003Cimg src=\".\u002FDatasets_bev.png\" width=\"95%\"> \u003C\u002Fp>)\n\n### (2) 基于几何的 PV2BEV\n#### 基于单应变换的 PV2BEV\n公开论文：\n- IPM：逆透视映射简化了光流计算和障碍物检测（生物控制论，1991年）[[论文]](https:\u002F\u002Flink.springer.com\u002Farticle\u002F10.1007\u002FBF00201978)\n- DSM：基于街景图像的自动密集视觉语义建图（IROS'12）[[论文]](https:\u002F\u002Fwww.robots.ox.ac.uk\u002F~tvg\u002Fpublications\u002F2012\u002FIROS_Mapping_ss.pdf)\n- MapV：学习将车辆映射到鸟瞰视角（ICIAP'17）[[论文]](https:\u002F\u002Farxiv.org\u002Fabs\u002F1706.08442)\n- BridgeGAN：生成对抗网络从前视图到鸟瞰图的合成（3DV'18）[[论文]](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1808.00327.pdf)[[项目页面]](https:\u002F\u002Fgithub.com\u002Fxinge008\u002FBridgeGAN)\n- VPOE：基于深度学习的通过逆透视映射图像进行车辆位置与姿态估计（IV'19）[[论文]](https:\u002F\u002Fieeexplore.ieee.org\u002Fdocument\u002F8814050)\n- 3D-LaneNet：端到端三维多车道检测（ICCV'19）[[论文]](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_ICCV_2019\u002Fpapers\u002FGarnett_3D-LaneNet_End-to-End_3D_Multiple_Lane_Detection_ICCV_2019_paper.pdf)\n- 正确（倾斜）的视角：利用增强型逆透视映射提升对道路场景的理解（IV'19）[[论文]](https:\u002F\u002Farxiv.org\u002Fabs\u002F1812.00913)\n- Cam2BEV：一种Sim2Real深度学习方法，用于将多路车载摄像头拍摄的图像转换为鸟瞰视角下的语义分割图像（ITSC'20）[[论文]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2005.04078) [[项目页面]](https:\u002F\u002Fgithub.com\u002Fika-rwth-aachen\u002FCam2BEV)\n- MonoLayout：从单张图像中恢复无遮挡场景布局（WACA'20）[[论文]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2002.08394) [[项目页面]](https:\u002F\u002Fgithub.com\u002Fhbutsuak95\u002Fmonolayout)\n- MVNet：基于特征透视变换的多视角检测（ECCV'20）[[论文]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2007.07247) [[项目页面]](https:\u002F\u002Fgithub.com\u002Fhou-yz\u002FMVDet)\n- OGMs：在“平板车”间行驶：基于单目相机的鸟瞰占用栅格地图，用于整体轨迹规划（WACA'21）[[论文]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2008.04047) [[项目页面]]()\n- TrafCam3D：利用单应变换，通过未标定交通摄像头实现单目三维车辆检测（IROS'21）[[论文]](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2103.15293.pdf) [[项目页面]](https:\u002F\u002Fgithub.com\u002Fminghanz\u002Ftrafcam_3d)\n- SHOT：用于多视角行人检测的堆叠单应变换（ICCV'21）[[论文]](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FICCV2021\u002Fpapers\u002FSong_Stacked_Homography_Transformations_for_Multi-View_Pedestrian_Detection_ICCV_2021_paper.pdf)\n- HomoLoss：用于单目三维目标检测的单应损失（CVPR'22）[[论文]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2204.00754)\n\n时间顺序概览：\n\n![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002F4DVLab_Vision-Centric-BEV-Perception_readme_957d3da3d2cc.png)\n\n[comment]: \u003C> (\u003Cp align=\"center\"> \u003Cimg 
src=\".\u002Fhomo-based-overview.PNG\" width=\"95%\"> \u003C\u002Fp>)\n#### 基于深度的 PV2BEV\n公开论文：\n- OFT：用于单目三维目标检测的正交特征变换（BMVC'19）[[论文]](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1811.08188) [[项目页面]](https:\u002F\u002Fgithub.com\u002Ftom-roddick\u002Foft)\n- CaDDN：用于单目三维目标检测的类别化深度分布网络（CVPR'21）[[论文]](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FCVPR2021\u002Fpapers\u002FReading_Categorical_Depth_Distribution_Network_for_Monocular_3D_Object_Detection_CVPR_2021_paper.pdf) [[项目页面]](https:\u002F\u002Fgithub.com\u002FTRAILab\u002FCaDDN)\n- DSGN：用于三维目标检测的深度立体几何网络（CVPR'20）[[论文]](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2001.03398) [[项目页面]](https:\u002F\u002Fgithub.com\u002Fdvlab-research\u002FDSGN)\n- Lift, Splat, Shoot：通过隐式反投影至三维空间来编码来自任意相机阵列的图像（ECCV'20）[[论文]](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2008.05711.pdf) [[项目页面]](https:\u002F\u002Fnv-tlabs.github.io\u002Flift-splat-shoot\u002F)\n- PanopticSeg：使用单目前视图图像进行鸟瞰全景分割（RA-L'22）[[论文]](https:\u002F\u002Fieeexplore.ieee.org\u002Fdocument\u002F9681287) [[项目页面]](http:\u002F\u002Fpanoptic-bev.cs.uni-freiburg.de\u002F)\n- FIERY：基于环绕式单目相机的鸟瞰未来实例预测（ICCV'21）[[论文]](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FICCV2021\u002Fpapers\u002FHu_FIERY_Future_Instance_Prediction_in_Birds-Eye_View_From_Surround_Monocular_ICCV_2021_paper.pdf) [[项目页面]](https:\u002F\u002Fgithub.com\u002Fwayveai\u002Ffiery)\n- LIGA-Stereo：为基于立体视觉的三维检测器学习激光雷达几何感知表示（ICCV'21）[[论文]](https:\u002F\u002Fxy-guo.github.io\u002Fliga\u002Fliga-guo-iccv21.pdf) [[项目页面]](https:\u002F\u002Fxy-guo.github.io\u002Fliga\u002F)\n- ImVoxelNet：用于单目及多视角通用三维目标检测的图像到体素投影（WACV'22）[[论文]](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2106.01178.pdf) [[项目页面]](https:\u002F\u002Fgithub.com\u002Fsaic-vul\u002Fimvoxelnet)\n- BEVDet：高性能多摄像头鸟瞰三维目标检测（Arxiv'21）[[论文]](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2112.11790) [[项目页面]](https:\u002F\u002Fgithub.com\u002FHuangJunJie2017\u002FBEVDet)\n- M^2BEV：多摄像头联合三维检测与分割，采用统一的鸟瞰视图表示（Arxiv'22）[[论文]](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2204.05088) [[项目页面]](https:\u002F\u002Fnvlabs.github.io\u002FM2BEV\u002F)\n- StretchBEV：在空间和时间维度上扩展未来实例预测（ECCV'22）[[论文]](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2203.13641) [[项目页面]](https:\u002F\u002Fgithub.com\u002Fkaanakan\u002Fstretchbev)\n- DfM：基于运动深度的单目三维目标检测（ECCV'22）[[论文]](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2207.12988.pdf) [[项目页面]](https:\u002F\u002Fgithub.com\u002FTai-Wang\u002FDepth-from-Motion)\n- BEVDet4D：在多摄像头三维目标检测中利用时间线索（Arxiv'22）[[论文]](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2203.17054) [[项目页面]](https:\u002F\u002Fgithub.com\u002FHuangJunJie2017\u002FBEVDet)\n- BEVerse：面向视觉中心自动驾驶的鸟瞰统一感知与预测（Arxiv'22）[[论文]](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2205.09743) [[项目页面]](https:\u002F\u002Fgithub.com\u002Fzhangyp15\u002FBEVerse)\n- MV-FCOS3D++：仅使用多视角摄像头，结合预训练单目骨干网络的四维目标检测（Arxiv'22）[[论文]](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2207.12716) [[项目页面]](https:\u002F\u002Fgithub.com\u002FTai-Wang\u002FDepth-from-Motion)\n- 将人放回原位：单目深度回归三维人体（CVPR'22）[[代码]](https:\u002F\u002Fgithub.com\u002FArthur151\u002FROMP) [[项目页面]](https:\u002F\u002Farthur151.github.io\u002FBEV\u002FBEV.html) [[论文]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2112.08274) [[视频]](https:\u002F\u002Fyoutu.be\u002FQ62fj_6AxRI) [[RH数据集]](https:\u002F\u002Fgithub.com\u002FArthur151\u002FRelative_Human)\n\n时间顺序概览：\n\n[comment]: \u003C> (\u003Cp align=\"center\"> \u003Cimg src=\".\u002Fdepth-based-overview.png\" width=\"95%\"> 
\u003C\u002Fp>)\n![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002F4DVLab_Vision-Centric-BEV-Perception_readme_bafff72e2968.png)\n\n基准测试结果：\n\n[comment]: \u003C> (\u003Cp align=\"center\"> \u003Cimg src=\".\u002Fdepth-based%20results.png\" width=\"95%\"> \u003C\u002Fp>)\n![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002F4DVLab_Vision-Centric-BEV-Perception_readme_c3595762f2f4.png)\n\n### (3) 基于网络的 PV2BEV\n#### 基于 MLP 的 PV2BEV\n公开论文：\n- VED：基于卷积变分编码器-解码器网络的单目语义占用栅格地图构建（RA-L'19）[[论文]](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1804.02176.pdf) [[项目页面]](https:\u002F\u002Fgithub.com\u002FChenyang-Lu\u002Fmono-semantic-occupancy)\n- VPN：用于环境感知的跨视图语义分割（IROS'20）[[论文]](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1906.03560.pdf) [[项目页面]](https:\u002F\u002Fview-parsing-network.github.io\u002F)\n- FishingNet：网格中语义热图的未来推理（Arxiv'20）[[论文]](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2006.09917)\n- PON：使用金字塔占用网络从图像预测语义地图表示（CVPR'20）[[论文]](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_CVPR_2020\u002Fpapers\u002FRoddick_Predicting_Semantic_Map_Representations_From_Images_Using_Pyramid_Occupancy_Networks_CVPR_2020_paper.pdf) [[项目页面]](https:\u002F\u002Fgithub.com\u002Ftom-roddick\u002Fmono-semantic-maps)\n- STA-ST：在鸟瞰视角车辆估计中实现时空聚合（ICRA'21）[[论文]](https:\u002F\u002Fcvssp.org\u002FPersonal\u002FOscarMendez\u002Fpapers\u002Fpdf\u002FSahaICRA2021.pdf) \n- HDMapNet：在线高清地图构建与评估框架（ICRA'22）[[论文]](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2107.06307) [[项目页面]](https:\u002F\u002Ftsinghua-mars-lab.github.io\u002FHDMapNet\u002F)\n- 专注地投射你的视角：通过跨视图变换进行单目道路场景布局估计（CVPR'21）[[论文]](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FCVPR2021\u002Fpapers\u002FYang_Projecting_Your_View_Attentively_Monocular_Road_Scene_Layout_Estimation_via_CVPR_2021_paper.pdf) [[项目页面]](https:\u002F\u002Fgithub.com\u002FJonDoe-297\u002Fcross-view)\n- HFT：通过混合特征变换提升透视表示（Arxiv'22）[[论文]](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2204.05068) [[项目页面]](https:\u002F\u002Fgithub.com\u002FJiayuZou2020\u002FHFT)\n\n时间顺序概览：\n\n[comment]: \u003C> (\u003Cp align=\"center\"> \u003Cimg src=\".\u002FMLP-based-overview.png\" width=\"95%\"> \u003C\u002Fp>)\n![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002F4DVLab_Vision-Centric-BEV-Perception_readme_5b6c08951600.png)\n\n基准测试结果：\n\n[comment]: \u003C> (\u003Cp align=\"center\"> \u003Cimg src=\".\u002FMLP-based-result.png\" width=\"95%\"> \u003C\u002Fp>)\n![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002F4DVLab_Vision-Centric-BEV-Perception_readme_c95905d82595.png)\n\n#### 基于 Transformer 的 PV2BEV\n公开论文：\n- STSU：基于车载图像的结构化鸟瞰视角交通场景理解（ICCV'21）[[论文]](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2110.01997) [[项目页面]](https:\u002F\u002Fgithub.com\u002Fybarancan\u002FSTSU)\n- Image2Map：将图像转换为地图（ICRA'22）[[论文]](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2110.00966.pdf) [[项目页面]](https:\u002F\u002Fgithub.com\u002Favishkarsaha\u002Ftranslating-images-into-maps)\n- DETR3D：通过 3D 到 2D 查询实现多视角图像中的 3D 物体检测（CoRL'21）[[论文]](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2110.06922.pdf) [[项目页面]](https:\u002F\u002Fgithub.com\u002FWangYueFt\u002Fdetr3d)\n- TopologyPL：从单张车载摄像头图像中进行拓扑保持的局部路网估计（CVPR'22）[[论文]](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2112.10155.pdf) [[项目页面]](https:\u002F\u002Fgithub.com\u002Fybarancan\u002FTopologicalLaneGraph)\n- PETR：用于多视角 3D 物体检测的位置嵌入变换（ECCV'22）[[论文]](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2203.05625) [[项目页面]](https:\u002F\u002Fgithub.com\u002Fmegvii-research\u002FPETR)\n- BEVSegFormer：来自任意相机阵列的鸟瞰视角语义分割（Arxiv'22）[[论文]](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2203.04050)\n- 
PersFormer：3D 车道线检测的新基线（ECCV'22）[[论文]](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2203.11089) [[项目页面]](https:\u002F\u002Fgithub.com\u002FOpenPerceptionX\u002FPersFormer_3DLane)\n- MonoDTR：带有深度感知的单目 3D 物体检测（CVPR'22）[[论文]](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2203.10981) [[项目页面]](https:\u002F\u002Fgithub.com\u002Fkuanchihhuang\u002FMonoDTR)\n- MonoDETR：用于单目 3D 物体检测的深度引导型 Transformer（Arxiv'22）[[论文]](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2203.13310.pdf) [[项目页面]](https:\u002F\u002Fgithub.com\u002FZrrSkywalker\u002FMonoDETR)\n- BEVFormer：通过时空 Transformer 从多相机图像中学习鸟瞰视角表示（ECCV'22）[[论文]](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2203.17270v1.pdf) [[项目页面]](https:\u002F\u002Fgithub.com\u002Fzhiqi-li\u002FBEVFormer)\n- GitNet：基于几何先验的鸟瞰视角分割变换（ECCV'22）[[论文]](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2204.07733)\n- Graph-DETR3D：重新思考多视角 3D 物体检测中的重叠区域（MM'22）[[论文]](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2204.11582)\n- CVT：用于实时地图视图语义分割的跨视图 Transformer（CVPR'22）[[论文]](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FCVPR2022\u002Fpapers\u002FZhou_Cross-View_Transformers_for_Real-Time_Map-View_Semantic_Segmentation_CVPR_2022_paper.pdf) [[项目页面]](https:\u002F\u002Fgithub.com\u002Fbradyz\u002F)\n- PETRv2：一种用于从多相机图像中进行 3D 感知的统一框架（Arxiv'22）[[论文]](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2206.01256) [[项目页面]](https:\u002F\u002Fgithub.com\u002Fmegvii-research\u002FPETR)\n- Ego3RT：以光线追踪方式学习自我 3D 表示（ECCV'22）[[论文]](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2206.04042.pdf) [[项目页面]](https:\u002F\u002Fgithub.com\u002Ffudan-zvg\u002FEgo3RT)\n- GKT：通过几何引导的核 Transformer 实现高效且鲁棒的 2D 到 BEV 表示学习（Arxiv'22）[[论文]](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2206.04584.pdf) [[项目页面]](https:\u002F\u002Fgithub.com\u002Fhustvl\u002FGKT)\n- PolarDETR：基于极坐标参数化的视觉环绕式 3D 检测（Arxiv'22）[[论文]](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2206.10965) [[项目页面]](https:\u002F\u002Fgithub.com\u002Fhustvl\u002FPolarDETR)\n- LaRa：用于多相机鸟瞰视角语义分割的潜在变量和光线（Arxiv'22）[[论文]](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2206.13294)\n- SRCN3D：稀疏 R-CNN 3D 环绕式摄像头用于自动驾驶的 3D 物体检测与跟踪（Arxiv'22）[[论文]](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2206.14451) [[项目页面]](https:\u002F\u002Fgithub.com\u002Fsynsin0\u002FSRCN3D)\n- PolarFormer：使用极坐标 Transformer 进行多相机 3D 物体检测（Arxiv'22）[[论文]](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2206.15398) [[项目页面]](https:\u002F\u002Fgithub.com\u002Ffudan-zvg\u002FPolarFormer)\n- ORA3D：ORA3D：重叠区域感知的多视角 3D 物体检测（Arxiv'22）[[论文]](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2207.00865)\n- CoBEVT：使用稀疏 Transformer 进行协作式鸟瞰视角语义分割（Arxiv'22）[[论文]](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2207.02202.pdf)\n\n时间顺序概览：\n\n[comment]: \u003C> (\u003Cp align=\"center\"> \u003Cimg src=\".\u002Ftransformer-based-overview.png\" width=\"95%\"> \u003C\u002Fp>)\n![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002F4DVLab_Vision-Centric-BEV-Perception_readme_d270fbd06cde.png)\n\n基准测试结果：\n\n![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002F4DVLab_Vision-Centric-BEV-Perception_readme_4eb575f82029.png)\n![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002F4DVLab_Vision-Centric-BEV-Perception_readme_a4b33b636ccb.png)\n\n[comment]: \u003C> (\u003Cp align=\"center\"> \u003Cimg src=\".\u002Ftransformer-based-results.png\" width=\"95%\"> \u003C\u002Fp>)\n[comment]: \u003C> (\u003Cp align=\"center\"> \u003Cimg src=\".\u002Ftransformer-based-results1.png\" width=\"95%\"> \u003C\u002Fp>)\n\n### (4) 扩展\n#### BEV下的多任务学习\n- 
FIERY：基于环视单目相机的鸟瞰图未来实例预测（ICCV'21）[[论文]](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FICCV2021\u002Fpapers\u002FHu_FIERY_Future_Instance_Prediction_in_Birds-Eye_View_From_Surround_Monocular_ICCV_2021_paper.pdf) [[项目页]](https:\u002F\u002Fgithub.com\u002Fwayveai\u002Ffiery)\n- StretchBEV：在空间和时间上扩展未来实例预测（ECCV'22）[[论文]](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2203.13641) [[项目页]](https:\u002F\u002Fgithub.com\u002Fkaanakan\u002Fstretchbev)\n- BEVerse：面向视觉中心自动驾驶的统一鸟瞰感知与预测（Arxiv'22）[[论文]](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2205.09743) [[项目页]](https:\u002F\u002Fgithub.com\u002Fzhangyp15\u002FBEVerse)\n- M^2BEV：基于统一鸟瞰表示的多摄像头联合3D检测与分割（Arxiv'22）[[论文]](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2204.05088) [[项目页]](https:\u002F\u002Fnvlabs.github.io\u002FM2BEV\u002F)\n- STSU：基于车载图像的结构化鸟瞰交通场景理解（ICCV'21）[[论文]](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2110.01997) [[项目页]](https:\u002F\u002Fgithub.com\u002Fybarancan\u002FSTSU)\n- BEVFormer：通过时空Transformer从多摄像头图像中学习鸟瞰表示（ECCV'22）[[论文]](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2203.17270v1.pdf) [[项目页]](https:\u002F\u002Fgithub.com\u002Fzhiqi-li\u002FBEVFormer)\n- Ego3RT：以光线追踪方式学习自我3D表示（ECCV'22）[[论文]](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2206.04042.pdf) [[项目页]](https:\u002F\u002Fgithub.com\u002Ffudan-zvg\u002FEgo3RT)\n- PETRv2：一种基于多摄像头图像的3D感知统一框架（Arxiv'22）[[论文]](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2206.01256) [[项目页]](https:\u002F\u002Fgithub.com\u002Fmegvii-research\u002FPETR)\n- PolarFormer：基于极坐标Transformer的多摄像头3D目标检测（Arxiv'22）[[论文]](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2206.15398) [[项目页]](https:\u002F\u002Fgithub.com\u002Ffudan-zvg\u002FPolarFormer)\n\n\n[comment]: \u003C> (\u003Cp align=\"center\"> \u003Cimg src=\".\u002Fmulti-task-results.png\" width=\"95%\"> \u003C\u002Fp>)\n![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002F4DVLab_Vision-Centric-BEV-Perception_readme_153d0111bdb3.png)\n\n#### BEV下的融合\n多模态融合：\n- PointPainting：用于3D目标检测的顺序式融合（CVPR'19）[[论文]](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1911.10150.pdf) [[项目页]](https:\u002F\u002Fgithub.com\u002FAmrElsersy\u002FPointPainting)\n- 3D-CVF：利用跨视图空间特征融合生成相机与LiDAR联合特征，用于3D目标检测（ECCV'20）[[论文]](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2004.12636.pdf) [[项目页]](https:\u002F\u002Fgithub.com\u002Frasd3\u002F3D-CVF)\n- FUTR3D：一种用于3D检测的统一传感器融合框架（Arxiv'22）[[论文]](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2203.10642) [[项目页]](https:\u002F\u002Fgithub.com\u002FTsinghua-MARS-Lab\u002Ffutr3d)\n- MVP：多模态虚拟点3D目标检测（NIPS'21）[[论文]](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2111.06881.pdf) [[项目页]](https:\u002F\u002Ftianweiy.github.io\u002Fmvp\u002F)\n- PointAugmenting：用于3D目标检测的跨模态增强（CVPR'21）[[论文]](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FCVPR2021\u002Fhtml\u002FWang_PointAugmenting_Cross-Modal_Augmentation_for_3D_Object_Detection_CVPR_2021_paper.html) [[项目页]](https:\u002F\u002Fgithub.com\u002FVISION-SJTU\u002FPointAugmenting)\n- FusionPainting：采用自适应注意力机制进行多模态融合的3D目标检测（ITSC'21）[[论文]](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2106.12449) [[项目页]](https:\u002F\u002Fgithub.com\u002FShaoqing26\u002FFusionPainting)\n- 将基于体素的表示与Transformer统一用于3D目标检测（Arxiv'21）[[论文]](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2206.00630) [[项目页]](https:\u002F\u002Fgithub.com\u002Fdvlab-research\u002FUVTR)\n- TransFusion：使用Transformer实现鲁棒的LiDAR-相机融合3D目标检测（CVPR'22）[[论文]](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2203.11496) [[项目页]](https:\u002F\u002Fgithub.com\u002FXuyangBai\u002FTransFusion)\n- AutoAlign：用于多模态3D目标检测的像素-实例特征聚合（IJCAI'22）[[论文]](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2201.06493) 
[[项目页]](https:\u002F\u002Fgithub.com\u002Fzehuichen123\u002FAutoAlignV2)\n- AutoAlignV2：用于动态多模态3D目标检测的可变形特征聚合（ECCV'22）[[论文]](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2207.10316v1.pdf) [[项目页]](https:\u002F\u002Fgithub.com\u002Fzehuichen123\u002FAutoAlignV2)\n- CenterFusion：基于中心点的雷达与相机融合3D目标检测（WACV'21）[[论文]](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2011.04841) [[项目页]](https:\u002F\u002Fgithub.com\u002Fmrnabati\u002FCenterFusion)\n- MSMDFusion：以多深度种子在多个尺度上融合LiDAR与相机进行3D目标检测（Arxiv'22）[[论文]](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2209.03102.pdf) [[项目页]](https:\u002F\u002Fgithub.com\u002FSxJyJay\u002FMSMDFusion)\n\n[comment]: \u003C> (\u003Cp align=\"center\"> \u003Cimg src=\".\u002Fmulti-modal-results.png\" width=\"95%\"> \u003C\u002Fp>)\n![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002F4DVLab_Vision-Centric-BEV-Perception_readme_2d1753f2503b.png)\n\n时序融合：\n- BEVDet4D：在多摄像头3D目标检测中利用时序线索（Arxiv'22）[[论文]](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2203.17054) [[项目页]](https:\u002F\u002Fgithub.com\u002FHuangJunJie2017\u002FBEVDet)\n- Image2Map：将图像转化为地图（ICRA'22）[[论文]](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2110.00966.pdf) [[项目页]](https:\u002F\u002Fgithub.com\u002Favishkarsaha\u002Ftranslating-images-into-maps)\n- FIERY：基于环视单目相机的鸟瞰图未来实例预测（ICCV'21）[[论文]](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FICCV2021\u002Fpapers\u002FHu_FIERY_Future_Instance_Prediction_in_Birds-Eye_View_From_Surround_Monocular_ICCV_2021_paper.pdf) [[项目页]](https:\u002F\u002Fgithub.com\u002Fwayveai\u002Ffiery)\n- Ego3RT：以光线追踪方式学习自我3D表示（ECCV'22）[[论文]](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2206.04042.pdf) [[项目页]](https:\u002F\u002Fgithub.com\u002Ffudan-zvg\u002FEgo3RT)\n- PolarFormer：基于极坐标Transformer的多摄像头3D目标检测（Arxiv'22）[[论文]](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2206.15398) [[项目页]](https:\u002F\u002Fgithub.com\u002Ffudan-zvg\u002FPolarFormer)\n- BEVStitch：利用车载摄像头理解道路语义的鸟瞰视图（ICRA'22）[[论文]](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2012.03040.pdf) [[项目页]](https:\u002F\u002Fgithub.com\u002Fybarancan\u002FBEV_feat_stitch)\n- PETRv2：一种基于多摄像头图像的3D感知统一框架（Arxiv'22）[[论文]](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2206.01256) [[项目页]](https:\u002F\u002Fgithub.com\u002Fmegvii-research\u002FPETR)\n- BEVFormer：通过时空Transformer从多摄像头图像中学习鸟瞰表示（ECCV'22）[[论文]](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2203.17270v1.pdf) [[项目页]](https:\u002F\u002Fgithub.com\u002Fzhiqi-li\u002FBEVFormer)\n- UniFormer：用于鸟瞰视图中时空表示的统一多视角融合Transformer（Arxiv'22）[[论文]](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2207.08536)\n- DfM：基于运动估计深度的单目3D目标检测（ECCV'22）[[论文]](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2207.12988.pdf) [[项目页]](https:\u002F\u002Fgithub.com\u002FTai-Wang\u002FDepth-from-Motion)\n\n[comment]: \u003C> (\u003Cp align=\"center\"> \u003Cimg src=\".\u002Ftemporal-results.png\" width=\"95%\"> \u003C\u002Fp>)\n![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002F4DVLab_Vision-Centric-BEV-Perception_readme_3e95ed18a4be.png)\n\n多智能体融合：\n- CoBEVT：基于稀疏Transformer的协作式鸟瞰语义分割（Arxiv'22）[[论文]](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2207.02202.pdf)\n\n#### 经验性知识\n[comment]: \u003C> (\u003Cp align=\"center\"> \u003Cimg src=\".\u002Fempirical-know-hows.png\" width=\"95%\"> \u003C\u002Fp>)\n![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002F4DVLab_Vision-Centric-BEV-Perception_readme_6e4ad7c80717.png)\n\n\n\n### 引用\n\n如果您在研究中使用了我们的工作，请考虑引用：\n\n```tex\n@inproceedings{Ma2022VisionCentricBP,\n  title={视觉中心的BEV感知：综述},\n  author={马跃鑫、王泰、白旭阳、杨慧通、侯元安、王亚明、Y. 
Qiao、杨瑞刚、迪内什·马诺查、朱新格},\n  year={2022}\n}\n```\n\n## 贡献\n\n欢迎提交拉取请求，以添加新的论文或相关项目页面。\n\n\n## 相关仓库","# Vision-Centric-BEV-Perception 快速上手指南\n\n**项目简介**：\nVision-Centric-BEV-Perception 是一个关于“以视觉为中心的鸟瞰图（BEV）感知”的综合综述资源库。该项目并非单一的可执行代码库，而是整理了从几何方法、深度学习方法到基于网络（MLP\u002FTransformer）的各类 BEV 感知算法论文、代码链接及基准测试结果。本指南旨在帮助开发者快速利用该资源库定位所需算法并搭建基础开发环境。\n\n## 1. 环境准备\n\n由于本项目主要作为论文与代码的索引清单，运行具体算法需参考各子项目的独立要求。但为了浏览综述内容、复现主流 Transformer 或深度学习类 BEV 算法（如 BEVFormer, PETR, DETR3D 等），建议准备以下通用环境：\n\n*   **操作系统**: Linux (推荐 Ubuntu 18.04\u002F20.04) 或 Windows (WSL2)\n*   **Python**: 3.8 或更高版本\n*   **GPU**: NVIDIA GPU (显存建议 16GB+ 以运行多相机 3D 检测模型)，驱动版本 >= 450.80.02\n*   **核心依赖**:\n    *   PyTorch >= 1.9.0\n    *   CUDA Toolkit (与 PyTorch 版本匹配，推荐 11.1 或 11.3)\n    *   MMCV \u002F MMDetection3D (许多 listed 项目基于 OpenMMLab 生态)\n\n## 2. 安装步骤\n\n### 2.1 克隆项目\n首先获取综述资源库，用于查阅论文列表和分类架构。\n\n```bash\ngit clone https:\u002F\u002Fgithub.com\u002FOpenPerceptionX\u002FVision-Centric-BEV-Perception.git\ncd Vision-Centric-BEV-Perception\n```\n\n### 2.2 配置基础深度学习环境\n大多数列出的先进算法（如 `BEVFormer`, `PETR`, `DETR3D`）依赖 PyTorch 生态。推荐使用国内镜像源加速安装。\n\n**创建虚拟环境：**\n```bash\nconda create -n bev_env python=3.8 -y\nconda activate bev_env\n```\n\n**安装 PyTorch (使用清华\u002F中科大镜像)：**\n```bash\n# 示例：安装 CUDA 11.3 版本，请根据实际显卡调整\npip install torch torchvision torchaudio --index-url https:\u002F\u002Fmirrors.tuna.tsinghua.edu.cn\u002Fpypi\u002Fweb\u002Fsimple\n```\n\n**安装通用计算机视觉依赖 (OpenMMLab 体系)：**\n许多收录的论文代码基于 MMDetection3D，建议预装基础组件。\n```bash\n# 安装 mmcv-full (注意版本兼容性，此处以 1.7.0 为例)\npip install mmcv-full -f https:\u002F\u002Fdownload.openmmlab.com\u002Fmmcv\u002Fdist\u002Fcu113\u002Ftorch1.9.0\u002Findex.html\n\n# 安装 mmdetection 和 mmdetection3d\npip install mmdet==2.25.0\npip install mmdet3d==1.0.0rc6\n```\n\n> **注意**：本仓库本身不包含统一的 `setup.py`。若要运行具体的算法（如 `BEVFormer`），请进入 README 中对应的 `[project page]` 链接（通常是独立的 GitHub 仓库），按照该特定仓库的说明进行安装。\n\n## 3. 基本使用\n\n本项目的核心用途是**技术选型**与**代码导航**。以下是典型的使用流程：\n\n### 3.1 浏览算法分类\n打开本地克隆的 `README.md` 文件或在线查看，根据需求定位算法类别：\n*   **几何方法 (Geometry Based)**: 适合算力受限场景，查看 `Homograph based` 或 `Depth based` 章节（如 `OFT`, `CaDDN`）。\n*   **网络方法 (Network Based)**: 追求高精度，查看 `MLP based` 或 `Transformer based` 章节（如 `BEVFormer`, `PETR`, `CVT`）。\n\n### 3.2 获取并运行具体模型示例\n假设你选择了 **BEVFormer** (Transformer based PV2BEV) 进行尝试：\n\n1.  **跳转至项目页**：点击 README 中 BEVFormer 的 `[project page]` 链接 (https:\u002F\u002Fgithub.com\u002Fzhiqi-li\u002FBEVFormer)。\n2.  **克隆具体代码库**：\n    ```bash\n    git clone https:\u002F\u002Fgithub.com\u002Fzhiqi-li\u002FBEVFormer.git\n    cd BEVFormer\n    ```\n3.  **数据准备**：下载 nuScenes 数据集并整理目录结构（参考该子项目文档）。\n4.  
4.  **Run inference / training**:
    ```bash
    # Example: evaluate with pretrained weights (check the sub-project for exact commands)
    python tools/test.py configs/bevformer/bevformer_base.py \
        checkpoints/bevformer_base.pth \
        --eval bbox
    ```

### 3.3 Compare benchmark results
The `Benchmark Results` figures in the Vision-Centric-BEV-Perception README (e.g. `depth-based results.png` or `MLP-based-result.png`) give a quick side-by-side of mAP and NDS on nuScenes and similar datasets, which supports the selection decision.

# Use Case

An autonomous-driving startup is building an urban driver-assistance system and urgently needs accurate perception of surrounding vehicles and lane lines from onboard monocular cameras.

### Without Vision-Centric-BEV-Perception
- **Perspective-view limits**: engineers rely on traditional perspective-view (PV) algorithms that struggle to estimate the true distance and relative speed of far-away vehicles, causing frequent phantom braking.
- **Difficult multi-sensor fusion**: without a unified bird's-eye-view (BEV) feature space, camera data and radar point clouds cannot be aligned geometrically, so fusion models converge very slowly.
- **Weak occlusion handling**: at complex intersections, conventional monocular depth estimation cannot infer the full extent and trajectory of partially occluded vehicles.
- **Unclear technology choices**: faced with dozens of approaches, from early homography transforms to recent depth-distribution networks (e.g. CaDDN, OFT), the team spends weeks of research without settling on a route.

### With Vision-Centric-BEV-Perception
- **Better spatial perception**: following the depth-based designs surveyed here (e.g. Lift-Splat-Shoot), the team builds dense BEV feature maps and cuts distance-estimation error by 40%.
- **Faster fusion**: the geometric-transformation taxonomy helps them quickly set up a shared camera-radar coordinate frame, shortening the model iteration cycle from two weeks to three days.
- **Steadier trajectory prediction**: drawing on occupancy-grid work such as OGMs, the system now "fills in" occluded vehicles, noticeably improving lane-change safety.
- **Clear R&D path**: with the taxonomy diagrams and chronological analyses, the architect quickly picks the algorithm combination best suited to the available compute, avoiding reinvented wheels.

Vision-Centric-BEV-Perception is more than a reading list: it is an efficient map for autonomous-driving teams moving from 2D vision to 3D spatial perception.