[{"data":1,"prerenderedAt":-1},["ShallowReactive",2],{"similar-SkalskiP--top-cvpr-2024-papers":3,"tool-SkalskiP--top-cvpr-2024-papers":64},[4,17,27,35,43,56],{"id":5,"name":6,"github_repo":7,"description_zh":8,"stars":9,"difficulty_score":10,"last_commit_at":11,"category_tags":12,"status":16},3808,"stable-diffusion-webui","AUTOMATIC1111\u002Fstable-diffusion-webui","stable-diffusion-webui 是一个基于 Gradio 构建的网页版操作界面，旨在让用户能够轻松地在本地运行和使用强大的 Stable Diffusion 图像生成模型。它解决了原始模型依赖命令行、操作门槛高且功能分散的痛点，将复杂的 AI 绘图流程整合进一个直观易用的图形化平台。\n\n无论是希望快速上手的普通创作者、需要精细控制画面细节的设计师，还是想要深入探索模型潜力的开发者与研究人员，都能从中获益。其核心亮点在于极高的功能丰富度：不仅支持文生图、图生图、局部重绘（Inpainting）和外绘（Outpainting）等基础模式，还独创了注意力机制调整、提示词矩阵、负向提示词以及“高清修复”等高级功能。此外，它内置了 GFPGAN 和 CodeFormer 等人脸修复工具，支持多种神经网络放大算法，并允许用户通过插件系统无限扩展能力。即使是显存有限的设备，stable-diffusion-webui 也提供了相应的优化选项，让高质量的 AI 艺术创作变得触手可及。",162132,3,"2026-04-05T11:01:52",[13,14,15],"开发框架","图像","Agent","ready",{"id":18,"name":19,"github_repo":20,"description_zh":21,"stars":22,"difficulty_score":23,"last_commit_at":24,"category_tags":25,"status":16},1381,"everything-claude-code","affaan-m\u002Feverything-claude-code","everything-claude-code 是一套专为 AI 编程助手（如 Claude Code、Codex、Cursor 等）打造的高性能优化系统。它不仅仅是一组配置文件，而是一个经过长期实战打磨的完整框架，旨在解决 AI 代理在实际开发中面临的效率低下、记忆丢失、安全隐患及缺乏持续学习能力等核心痛点。\n\n通过引入技能模块化、直觉增强、记忆持久化机制以及内置的安全扫描功能，everything-claude-code 能显著提升 AI 在复杂任务中的表现，帮助开发者构建更稳定、更智能的生产级 AI 代理。其独特的“研究优先”开发理念和针对 Token 消耗的优化策略，使得模型响应更快、成本更低，同时有效防御潜在的攻击向量。\n\n这套工具特别适合软件开发者、AI 研究人员以及希望深度定制 AI 工作流的技术团队使用。无论您是在构建大型代码库，还是需要 AI 协助进行安全审计与自动化测试，everything-claude-code 都能提供强大的底层支持。作为一个曾荣获 Anthropic 黑客大奖的开源项目，它融合了多语言支持与丰富的实战钩子（hooks），让 AI 真正成长为懂上",138956,2,"2026-04-05T11:33:21",[13,15,26],"语言模型",{"id":28,"name":29,"github_repo":30,"description_zh":31,"stars":32,"difficulty_score":23,"last_commit_at":33,"category_tags":34,"status":16},2271,"ComfyUI","Comfy-Org\u002FComfyUI","ComfyUI 是一款功能强大且高度模块化的视觉 AI 引擎，专为设计和执行复杂的 Stable Diffusion 图像生成流程而打造。它摒弃了传统的代码编写模式，采用直观的节点式流程图界面，让用户通过连接不同的功能模块即可构建个性化的生成管线。\n\n这一设计巧妙解决了高级 AI 绘图工作流配置复杂、灵活性不足的痛点。用户无需具备编程背景，也能自由组合模型、调整参数并实时预览效果，轻松实现从基础文生图到多步骤高清修复等各类复杂任务。ComfyUI 拥有极佳的兼容性，不仅支持 Windows、macOS 和 Linux 全平台，还广泛适配 NVIDIA、AMD、Intel 及苹果 Silicon 等多种硬件架构，并率先支持 SDXL、Flux、SD3 等前沿模型。\n\n无论是希望深入探索算法潜力的研究人员和开发者，还是追求极致创作自由度的设计师与资深 AI 绘画爱好者，ComfyUI 都能提供强大的支持。其独特的模块化架构允许社区不断扩展新功能，使其成为当前最灵活、生态最丰富的开源扩散模型工具之一，帮助用户将创意高效转化为现实。",107662,"2026-04-03T11:11:01",[13,14,15],{"id":36,"name":37,"github_repo":38,"description_zh":39,"stars":40,"difficulty_score":23,"last_commit_at":41,"category_tags":42,"status":16},3704,"NextChat","ChatGPTNextWeb\u002FNextChat","NextChat 是一款轻量且极速的 AI 助手，旨在为用户提供流畅、跨平台的大模型交互体验。它完美解决了用户在多设备间切换时难以保持对话连续性，以及面对众多 AI 模型不知如何统一管理的痛点。无论是日常办公、学习辅助还是创意激发，NextChat 都能让用户随时随地通过网页、iOS、Android、Windows、MacOS 或 Linux 端无缝接入智能服务。\n\n这款工具非常适合普通用户、学生、职场人士以及需要私有化部署的企业团队使用。对于开发者而言，它也提供了便捷的自托管方案，支持一键部署到 Vercel 或 Zeabur 等平台。\n\nNextChat 的核心亮点在于其广泛的模型兼容性，原生支持 Claude、DeepSeek、GPT-4 及 Gemini Pro 等主流大模型，让用户在一个界面即可自由切换不同 AI 能力。此外，它还率先支持 MCP（Model Context Protocol）协议，增强了上下文处理能力。针对企业用户，NextChat 提供专业版解决方案，具备品牌定制、细粒度权限控制、内部知识库整合及安全审计等功能，满足公司对数据隐私和个性化管理的高标准要求。",87618,"2026-04-05T07:20:52",[13,26],{"id":44,"name":45,"github_repo":46,"description_zh":47,"stars":48,"difficulty_score":23,"last_commit_at":49,"category_tags":50,"status":16},2268,"ML-For-Beginners","microsoft\u002FML-For-Beginners","ML-For-Beginners 是由微软推出的一套系统化机器学习入门课程，旨在帮助零基础用户轻松掌握经典机器学习知识。这套课程将学习路径规划为 12 周，包含 26 节精炼课程和 52 
accompanying quizzes, covering the full journey from basic concepts to practical application and solving the beginner's problem of facing a vast body of knowledge with no structured guidance on where to start.\n\nDevelopers looking to change direction, researchers who need to fill in algorithmic background, and curious enthusiasts can all benefit. The course pairs clear theoretical explanations with an emphasis on hands-on practice, building solid skills step by step. A standout feature is its strong multilingual support: an automated pipeline provides versions in more than 50 languages, including Simplified Chinese, dramatically lowering the barrier for learners worldwide. The project runs as an open-source collaboration with an active community and continuously updated content, so learners always get current, accurate material. If you are looking for a clear, friendly, and professional path into machine learning, ML-For-Beginners is an ideal starting point.",84991,"2026-04-05T10:45:23",[14,51,52,53,15,54,26,13,55],"Data Tools","Video","Plugins","Other","Audio",{"id":57,"name":58,"github_repo":59,"description_zh":60,"stars":61,"difficulty_score":10,"last_commit_at":62,"category_tags":63,"status":16},3128,"ragflow","infiniflow\u002Fragflow","RAGFlow is a leading open-source retrieval-augmented generation (RAG) engine that builds a more accurate, reliable context layer for large language models. It deftly combines cutting-edge RAG techniques with agent capabilities: besides extracting knowledge efficiently from all kinds of documents, it lets models reason and execute tasks on top of that knowledge.\n\nHallucination and stale knowledge are common pain points in LLM applications. By deeply parsing complex document structures (tables, charts, and mixed layouts), RAGFlow markedly improves retrieval accuracy, curbing fabricated answers and keeping responses both grounded and current. Its built-in agent mechanism goes a step further: the system does not just answer questions, it can plan its own steps to solve complex problems.\n\nThe tool is a strong fit for developers, enterprise engineering teams, and AI researchers, whether you want to stand up a private knowledge-base Q&A system quickly or are exploring how to bring large models into vertical domains. RAGFlow provides a visual workflow-orchestration interface and flexible APIs, lowering the bar for users without an algorithms background while satisfying professional developers' need for deep customization. Released under the Apache 2.0 license, it is becoming an important bridge between general-purpose large models and domain-specific knowledge.",77062,"2026-04-04T04:44:48",[15,14,13,26,54],{"id":65,"github_repo":66,"name":67,"description_en":68,"description_zh":69,"ai_summary_zh":69,"readme_en":70,"readme_zh":71,"quickstart_zh":72,"use_case_zh":73,"hero_image_url":74,"owner_login":75,"owner_name":76,"owner_avatar_url":77,"owner_bio":78,"owner_company":79,"owner_location":80,"owner_email":81,"owner_twitter":82,"owner_website":83,"owner_url":84,"languages":85,"stars":90,"forks":91,"last_commit_at":92,"license":93,"difficulty_score":94,"env_os":95,"env_gpu":95,"env_ram":95,"env_deps":96,"category_tags":99,"github_topics":100,"view_count":10,"oss_zip_url":109,"oss_zip_packed_at":109,"status":16,"created_at":110,"updated_at":111,"faqs":112,"releases":113},620,"SkalskiP\u002Ftop-cvpr-2024-papers","top-cvpr-2024-papers","This repository is a curated collection of the most exciting and influential CVPR 2024 papers. 🔥 [Paper + Code + Demo]","top-cvpr-2024-papers is a curated collection of papers from one of the world's most influential computer vision and pattern recognition conferences. CVPR 2024 received more than eleven thousand submissions and accepted over two thousand; faced with that volume, most researchers struggle to pinpoint the most valuable work quickly.\n\ntop-cvpr-2024-papers tackles this selection problem by collecting the year's most notable and influential papers. Beyond links to the papers themselves, it gathers the corresponding open-source code and demo videos, making reproduction far more convenient. It spans popular directions from 3D reconstruction to text-to-image generation, covering work such as SpatialTracker and ViewDiff.\n\nDeep-learning researchers, developers tracking the state of the art, and students interested in computer vision can all use it to find high-quality study material efficiently. The list is community-contributed and regenerated automatically, keeping it current and diverse. With top-cvpr-2024-papers, you can skip the mass screening and focus directly on the core innovations driving the field forward.","![visitor badge](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FSkalskiP_top-cvpr-2024-papers_readme_c020a375744d.png)\n\n\u003Cdiv align=\"center\">\n  \u003Ch1 align=\"center\">top CVPR 2024 papers\u003C\u002Fh1>\n  \u003Ca href=\"https:\u002F\u002Fgithub.com\u002FSkalskiP\u002Ftop-cvpr-2023-papers\">2023\u003C\u002Fa> | \u003Ca href=\"https:\u002F\u002Fgithub.com\u002FSkalskiP\u002Ftop-cvpr-2024-papers\">2024\u003C\u002Fa> | \u003Ca href=\"https:\u002F\u002Fgithub.com\u002FSkalskiP\u002Ftop-cvpr-2025-papers\">2025\u003C\u002Fa>\n\u003C\u002Fdiv>\n\n\u003Cbr>\n\n\u003Cdiv align=\"center\">\n  \u003Cimg width=\"600\" src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FSkalskiP_top-cvpr-2024-papers_readme_ee32e4918b74.png\" alt=\"vancouver\">\n\u003C\u002Fdiv>\n\n## 👋 hello\n\nComputer Vision and Pattern Recognition is a massive conference. In **2024** alone,\n**11,532** papers were submitted, and **2,719** were accepted. I created this repository\nto help you search for the crème de la crème of CVPR publications. 
If the paper you are\nlooking for is not on my short list, take a peek at the full\n[list](https:\u002F\u002Fcvpr.thecvf.com\u002FConferences\u002F2024\u002FAcceptedPapers) of accepted papers.\n\n## 🗞️ papers and posters\n\n*🔥 - highlighted papers*\n\n\u003C!--- AUTOGENERATED_PAPERS_LIST -->\n\u003C!---\n   WARNING: DO NOT EDIT THIS LIST MANUALLY. IT IS AUTOMATICALLY GENERATED.\n   HEAD OVER TO https:\u002F\u002Fgithub.com\u002FSkalskiP\u002Ftop-cvpr-2024-papers\u002Fblob\u002Fmaster\u002FCONTRIBUTING.md FOR MORE DETAILS ON HOW TO MAKE CHANGES PROPERLY.\n-->\n### 3d from multi-view and sensors\n\n\u003Cp align=\"left\">\n    \u003Ca href=\"https:\u002F\u002Fcvpr.thecvf.com\u002Fmedia\u002FPosterPDFs\u002FCVPR%202024\u002F31668.png?t=1717417393.7589533\" title=\"SpatialTracker: Tracking Any 2D Pixels in 3D Space\">\n        \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FSkalskiP_top-cvpr-2024-papers_readme_535388afd184.png\" alt=\"SpatialTracker: Tracking Any 2D Pixels in 3D Space\" width=\"400px\" align=\"left\" \u002F>\n    \u003C\u002Fa>\n    \u003Ca href=\"https:\u002F\u002Farxiv.org\u002Fabs\u002F2404.04319\" title=\"SpatialTracker: Tracking Any 2D Pixels in 3D Space\">\n        \u003Cstrong>🔥 SpatialTracker: Tracking Any 2D Pixels in 3D Space\u003C\u002Fstrong>\n    \u003C\u002Fa>\n    \u003Cbr\u002F>\n    Yuxi Xiao, Qianqian Wang, Shangzhan Zhang, Nan Xue, Sida Peng, Yujun Shen, Xiaowei Zhou\n    \u003Cbr\u002F>\n    [\u003Ca href=\"https:\u002F\u002Farxiv.org\u002Fabs\u002F2404.04319\">paper\u003C\u002Fa>] [\u003Ca href=\"https:\u002F\u002Fgithub.com\u002Fhenry123-boy\u002FSpaTracker\">code\u003C\u002Fa>]   \n    \u003Cbr\u002F>\n    \u003Cstrong>Topic:\u003C\u002Fstrong> 3D from multi-view and sensors\n    \u003Cbr\u002F>\n    \u003Cstrong>Session:\u003C\u002Fstrong> Fri 21 Jun 1:30 p.m. EDT — 3 p.m. EDT #84\n\u003C\u002Fp>\n\u003Cbr\u002F>\n\u003Cbr\u002F>\n\n\n\u003Cp align=\"left\">\n    \u003Ca href=\"https:\u002F\u002Fcvpr.thecvf.com\u002Fmedia\u002FPosterPDFs\u002FCVPR%202024\u002F31616.png?t=1716470830.0209699\" title=\"ViewDiff: 3D-Consistent Image Generation with Text-to-Image Models\">\n        \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FSkalskiP_top-cvpr-2024-papers_readme_abacff45e276.png\" alt=\"ViewDiff: 3D-Consistent Image Generation with Text-to-Image Models\" width=\"400px\" align=\"left\" \u002F>\n    \u003C\u002Fa>\n    \u003Ca href=\"https:\u002F\u002Farxiv.org\u002Fabs\u002F2403.01807\" title=\"ViewDiff: 3D-Consistent Image Generation with Text-to-Image Models\">\n        \u003Cstrong>ViewDiff: 3D-Consistent Image Generation with Text-to-Image Models\u003C\u002Fstrong>\n    \u003C\u002Fa>\n    \u003Cbr\u002F>\n    Lukas Höllein, Aljaž Božič, Norman Müller, David Novotny, Hung-Yu Tseng, Christian Richardt, Michael Zollhöfer, Matthias Nießner\n    \u003Cbr\u002F>\n    [\u003Ca href=\"https:\u002F\u002Farxiv.org\u002Fabs\u002F2403.01807\">paper\u003C\u002Fa>] [\u003Ca href=\"https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002FViewDiff\">code\u003C\u002Fa>] [\u003Ca href=\"https:\u002F\u002Fyoutu.be\u002FSdjoCqHzMMk\">video\u003C\u002Fa>]  \n    \u003Cbr\u002F>\n    \u003Cstrong>Topic:\u003C\u002Fstrong> 3D from multi-view and sensors\n    \u003Cbr\u002F>\n    \u003Cstrong>Session:\u003C\u002Fstrong> Wed 19 Jun 8 p.m. EDT — 9:30 p.m. 
EDT #20\n\u003C\u002Fp>\n\u003Cbr\u002F>\n\u003Cbr\u002F>\n\n\n\u003Cp align=\"left\">\n    \u003Ca href=\"https:\u002F\u002Farxiv.org\u002Fabs\u002F2405.12979\" title=\"OmniGlue: Generalizable Feature Matching with Foundation Model Guidance\">\n        \u003Cstrong>OmniGlue: Generalizable Feature Matching with Foundation Model Guidance\u003C\u002Fstrong>\n    \u003C\u002Fa>\n    \u003Cbr\u002F>\n    Hanwen Jiang, Arjun Karpur, Bingyi Cao, Qixing Huang, Andre Araujo\n    \u003Cbr\u002F>\n    [\u003Ca href=\"https:\u002F\u002Farxiv.org\u002Fabs\u002F2405.12979\">paper\u003C\u002Fa>] [\u003Ca href=\"https:\u002F\u002Fgithub.com\u002Fgoogle-research\u002Fomniglue\">code\u003C\u002Fa>]  [\u003Ca href=\"https:\u002F\u002Fhuggingface.co\u002Fspaces\u002Fqubvel-hf\u002Fomniglue\">demo\u003C\u002Fa>] \n    \u003Cbr\u002F>\n    \u003Cstrong>Topic:\u003C\u002Fstrong> 3D from multi-view and sensors\n    \u003Cbr\u002F>\n    \u003Cstrong>Session:\u003C\u002Fstrong> Fri 21 Jun 1:30 p.m. EDT — 3 p.m. EDT #32\n\u003C\u002Fp>\n\u003Cbr\u002F>\n\n### deep learning architectures and techniques\n\n\u003Cp align=\"left\">\n    \u003Ca href=\"https:\u002F\u002Fcvpr.thecvf.com\u002Fmedia\u002FPosterPDFs\u002FCVPR%202024\u002F30529.png?t=1717455193.7819567\" title=\"Florence-2: Advancing a Unified Representation for a Variety of Vision Tasks\">\n        \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FSkalskiP_top-cvpr-2024-papers_readme_976b100760d8.png\" alt=\"Florence-2: Advancing a Unified Representation for a Variety of Vision Tasks\" width=\"400px\" align=\"left\" \u002F>\n    \u003C\u002Fa>\n    \u003Ca href=\"https:\u002F\u002Farxiv.org\u002Fpdf\u002F2311.06242\" title=\"Florence-2: Advancing a Unified Representation for a Variety of Vision Tasks\">\n        \u003Cstrong>🔥 Florence-2: Advancing a Unified Representation for a Variety of Vision Tasks\u003C\u002Fstrong>\n    \u003C\u002Fa>\n    \u003Cbr\u002F>\n    Bin Xiao, Haiping Wu, Weijian Xu, Xiyang Dai, Houdong Hu, Yumao Lu, Michael Zeng, Ce Liu, Lu Yuan\n    \u003Cbr\u002F>\n    [\u003Ca href=\"https:\u002F\u002Farxiv.org\u002Fpdf\u002F2311.06242\">paper\u003C\u002Fa>]  [\u003Ca href=\"https:\u002F\u002Fyoutu.be\u002FcOlyA00K1ec\">video\u003C\u002Fa>] [\u003Ca href=\"https:\u002F\u002Fhuggingface.co\u002Fspaces\u002Fgokaygokay\u002FFlorence-2\">demo\u003C\u002Fa>] [\u003Ca href=\"https:\u002F\u002Fyoutu.be\u002FcOlyA00K1ec\">colab\u003C\u002Fa>]\n    \u003Cbr\u002F>\n    \u003Cstrong>Topic:\u003C\u002Fstrong> Deep learning architectures and techniques\n    \u003Cbr\u002F>\n    \u003Cstrong>Session:\u003C\u002Fstrong> Wed 19 Jun 8 p.m. EDT — 9:30 p.m. 
EDT #102\n\u003C\u002Fp>\n\u003Cbr\u002F>\n\u003Cbr\u002F>\n\n### document analysis and understanding\n\n\u003Cp align=\"left\">\n    \u003Ca href=\"https:\u002F\u002Farxiv.org\u002Fabs\u002F2405.04408\" title=\"DocRes: A Generalist Model Toward Unifying Document Image Restoration Tasks\">\n        \u003Cstrong>DocRes: A Generalist Model Toward Unifying Document Image Restoration Tasks\u003C\u002Fstrong>\n    \u003C\u002Fa>\n    \u003Cbr\u002F>\n    Jiaxin Zhang, Dezhi Peng, Chongyu Liu, Peirong Zhang, Lianwen Jin\n    \u003Cbr\u002F>\n    [\u003Ca href=\"https:\u002F\u002Farxiv.org\u002Fabs\u002F2405.04408\">paper\u003C\u002Fa>] [\u003Ca href=\"https:\u002F\u002Fgithub.com\u002FZZZHANG-jx\u002FDocRes\">code\u003C\u002Fa>]  [\u003Ca href=\"https:\u002F\u002Fhuggingface.co\u002Fspaces\u002Fqubvel-hf\u002Fdocuments-restoration\">demo\u003C\u002Fa>] \n    \u003Cbr\u002F>\n    \u003Cstrong>Topic:\u003C\u002Fstrong> Document analysis and understanding\n    \u003Cbr\u002F>\n    \u003Cstrong>Session:\u003C\u002Fstrong> Thu 20 Jun 8 p.m. EDT — 9:30 p.m. EDT #101\n\u003C\u002Fp>\n\u003Cbr\u002F>\n\n### efficient and scalable vision\n\n\u003Cp align=\"left\">\n    \u003Ca href=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FSkalskiP_top-cvpr-2024-papers_readme_ed9555f769f3.png\" title=\"EfficientSAM: Leveraged Masked Image Pretraining for Efficient Segment Anything\">\n        \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FSkalskiP_top-cvpr-2024-papers_readme_ed9555f769f3.png\" alt=\"EfficientSAM: Leveraged Masked Image Pretraining for Efficient Segment Anything\" width=\"400px\" align=\"left\" \u002F>\n    \u003C\u002Fa>\n    \u003Ca href=\"https:\u002F\u002Farxiv.org\u002Fabs\u002F2312.00863\" title=\"EfficientSAM: Leveraged Masked Image Pretraining for Efficient Segment Anything\">\n        \u003Cstrong>🔥 EfficientSAM: Leveraged Masked Image Pretraining for Efficient Segment Anything\u003C\u002Fstrong>\n    \u003C\u002Fa>\n    \u003Cbr\u002F>\n    Yunyang Xiong, Bala Varadarajan, Lemeng Wu, Xiaoyu Xiang, Fanyi Xiao, Chenchen Zhu, Xiaoliang Dai, Dilin Wang, Fei Sun, Forrest Iandola, Raghuraman Krishnamoorthi, Vikas Chandra\n    \u003Cbr\u002F>\n    [\u003Ca href=\"https:\u002F\u002Farxiv.org\u002Fabs\u002F2312.00863\">paper\u003C\u002Fa>] [\u003Ca href=\"https:\u002F\u002Fgithub.com\u002Fyformer\u002FEfficientSAM\">code\u003C\u002Fa>]  [\u003Ca href=\"https:\u002F\u002Fhuggingface.co\u002Fspaces\u002FSkalskiP\u002FEfficientSAM\">demo\u003C\u002Fa>] \n    \u003Cbr\u002F>\n    \u003Cstrong>Topic:\u003C\u002Fstrong> Efficient and scalable vision\n    \u003Cbr\u002F>\n    \u003Cstrong>Session:\u003C\u002Fstrong> Thu 20 Jun 8 p.m. EDT — 9:30 p.m. 
EDT #144\n\u003C\u002Fp>\n\u003Cbr\u002F>\n\u003Cbr\u002F>\n\n\n\u003Cp align=\"left\">\n    \u003Ca href=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FSkalskiP_top-cvpr-2024-papers_readme_4c7521586f40.png\" title=\"MobileCLIP: Fast Image-Text Models through Multi-Modal Reinforced Training\">\n        \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FSkalskiP_top-cvpr-2024-papers_readme_4c7521586f40.png\" alt=\"MobileCLIP: Fast Image-Text Models through Multi-Modal Reinforced Training\" width=\"400px\" align=\"left\" \u002F>\n    \u003C\u002Fa>\n    \u003Ca href=\"https:\u002F\u002Farxiv.org\u002Fabs\u002F2311.17049\" title=\"MobileCLIP: Fast Image-Text Models through Multi-Modal Reinforced Training\">\n        \u003Cstrong>MobileCLIP: Fast Image-Text Models through Multi-Modal Reinforced Training\u003C\u002Fstrong>\n    \u003C\u002Fa>\n    \u003Cbr\u002F>\n    Pavan Kumar Anasosalu Vasu, Hadi Pouransari, Fartash Faghri, Raviteja Vemulapalli, Oncel Tuzel\n    \u003Cbr\u002F>\n    [\u003Ca href=\"https:\u002F\u002Farxiv.org\u002Fabs\u002F2311.17049\">paper\u003C\u002Fa>] [\u003Ca href=\"https:\u002F\u002Fgithub.com\u002Fapple\u002Fml-mobileclip\">code\u003C\u002Fa>]  [\u003Ca href=\"https:\u002F\u002Fhuggingface.co\u002Fspaces\u002FXenova\u002Fwebgpu-mobileclip\">demo\u003C\u002Fa>] \n    \u003Cbr\u002F>\n    \u003Cstrong>Topic:\u003C\u002Fstrong> Efficient and scalable vision\n    \u003Cbr\u002F>\n    \u003Cstrong>Session:\u003C\u002Fstrong> Thu 20 Jun 8 p.m. EDT — 9:30 p.m. EDT #130\n\u003C\u002Fp>\n\u003Cbr\u002F>\n\u003Cbr\u002F>\n\n### explainable computer vision\n\n\u003Cp align=\"left\">\n    \u003Ca href=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FSkalskiP_top-cvpr-2024-papers_readme_b1e0c7f05b02.png\" title=\"Describing Differences in Image Sets with Natural Language\">\n        \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FSkalskiP_top-cvpr-2024-papers_readme_b1e0c7f05b02.png\" alt=\"Describing Differences in Image Sets with Natural Language\" width=\"400px\" align=\"left\" \u002F>\n    \u003C\u002Fa>\n    \u003Ca href=\"https:\u002F\u002Farxiv.org\u002Fabs\u002F2312.02974\" title=\"Describing Differences in Image Sets with Natural Language\">\n        \u003Cstrong>🔥 Describing Differences in Image Sets with Natural Language\u003C\u002Fstrong>\n    \u003C\u002Fa>\n    \u003Cbr\u002F>\n    Lisa Dunlap, Yuhui Zhang, Xiaohan Wang, Ruiqi Zhong, Trevor Darrell, Jacob Steinhardt, Joseph E. Gonzalez, Serena Yeung-Levy\n    \u003Cbr\u002F>\n    [\u003Ca href=\"https:\u002F\u002Farxiv.org\u002Fabs\u002F2312.02974\">paper\u003C\u002Fa>] [\u003Ca href=\"https:\u002F\u002Fgithub.com\u002FUnderstanding-Visual-Datasets\u002FVisDiff\">code\u003C\u002Fa>]   \n    \u003Cbr\u002F>\n    \u003Cstrong>Topic:\u003C\u002Fstrong> Explainable computer vision\n    \u003Cbr\u002F>\n    \u003Cstrong>Session:\u003C\u002Fstrong> Fri 21 Jun 8 p.m. EDT — 9:30 p.m. 
EDT #115\n\u003C\u002Fp>\n\u003Cbr\u002F>\n\u003Cbr\u002F>\n\n### image and video synthesis and generation\n\n\u003Cp align=\"left\">\n    \u003Ca href=\"https:\u002F\u002Farxiv.org\u002Fabs\u002F2311.16973\" title=\"DemoFusion: Democratising High-Resolution Image Generation With No $$$\">\n        \u003Cstrong>DemoFusion: Democratising High-Resolution Image Generation With No $$$\u003C\u002Fstrong>\n    \u003C\u002Fa>\n    \u003Cbr\u002F>\n    Ruoyi Du, Dongliang Chang, Timothy Hospedales, Yi-Zhe Song, Zhanyu Ma\n    \u003Cbr\u002F>\n    [\u003Ca href=\"https:\u002F\u002Farxiv.org\u002Fabs\u002F2311.16973\">paper\u003C\u002Fa>] [\u003Ca href=\"https:\u002F\u002Fgithub.com\u002FPRIS-CV\u002FDemoFusion\">code\u003C\u002Fa>]  [\u003Ca href=\"https:\u002F\u002Fhuggingface.co\u002Fspaces\u002Fradames\u002FEnhance-This-DemoFusion-SDXL\">demo\u003C\u002Fa>] [\u003Ca href=\"https:\u002F\u002Fcolab.research.google.com\u002Fgithub\u002Fcamenduru\u002FDemoFusion-colab\u002Fblob\u002Fmain\u002FDemoFusion_colab.ipynb\">colab\u003C\u002Fa>]\n    \u003Cbr\u002F>\n    \u003Cstrong>Topic:\u003C\u002Fstrong> Image and video synthesis and generation\n    \u003Cbr\u002F>\n    \u003Cstrong>Session:\u003C\u002Fstrong> Wed 19 Jun 8 p.m. EDT — 9:30 p.m. EDT #132\n\u003C\u002Fp>\n\u003Cbr\u002F>\n\n\n\u003Cp align=\"left\">\n    \u003Ca href=\"https:\u002F\u002Fgithub.com\u002FSkalskiP\u002Ftop-cvpr-2024-papers\u002Fassets\u002F26109316\u002Fb0833f6b-6924-4f28-b409-ae85aaaa4dd6\" title=\"DragDiffusion: Harnessing Diffusion Models for Interactive Point-based Image Editing\">\n        \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FSkalskiP_top-cvpr-2024-papers_readme_47c65757a68c.png\" alt=\"DragDiffusion: Harnessing Diffusion Models for Interactive Point-based Image Editing\" width=\"400px\" align=\"left\" \u002F>\n    \u003C\u002Fa>\n    \u003Ca href=\"https:\u002F\u002Farxiv.org\u002Fabs\u002F2306.14435\" title=\"DragDiffusion: Harnessing Diffusion Models for Interactive Point-based Image Editing\">\n        \u003Cstrong>🔥 DragDiffusion: Harnessing Diffusion Models for Interactive Point-based Image Editing\u003C\u002Fstrong>\n    \u003C\u002Fa>\n    \u003Cbr\u002F>\n    Yujun Shi, Chuhui Xue, Jun Hao Liew, Jiachun Pan, Hanshu Yan, Wenqing Zhang, Vincent Y. F. Tan, Song Bai\n    \u003Cbr\u002F>\n    [\u003Ca href=\"https:\u002F\u002Farxiv.org\u002Fabs\u002F2306.14435\">paper\u003C\u002Fa>] [\u003Ca href=\"https:\u002F\u002Fgithub.com\u002FYujun-Shi\u002FDragDiffusion\">code\u003C\u002Fa>] [\u003Ca href=\"https:\u002F\u002Fyoutu.be\u002FrysOFTpDBhc\">video\u003C\u002Fa>]  \n    \u003Cbr\u002F>\n    \u003Cstrong>Topic:\u003C\u002Fstrong> Image and video synthesis and generation\n    \u003Cbr\u002F>\n    \u003Cstrong>Session:\u003C\u002Fstrong> Wed 19 Jun 8 p.m. EDT — 9:30 p.m. 
EDT #392\n\u003C\u002Fp>\n\u003Cbr\u002F>\n\u003Cbr\u002F>\n\n\n\u003Cp align=\"left\">\n    \u003Ca href=\"https:\u002F\u002Fcvpr.thecvf.com\u002Fmedia\u002FPosterPDFs\u002FCVPR%202024\u002F30657.png?t=1717473392.6694562\" title=\"Visual Anagrams: Generating Multi-View Optical Illusions with Diffusion Models\">\n        \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FSkalskiP_top-cvpr-2024-papers_readme_53a1fc28590c.png\" alt=\"Visual Anagrams: Generating Multi-View Optical Illusions with Diffusion Models\" width=\"400px\" align=\"left\" \u002F>\n    \u003C\u002Fa>\n    \u003Ca href=\"https:\u002F\u002Farxiv.org\u002Fabs\u002F2311.17919\" title=\"Visual Anagrams: Generating Multi-View Optical Illusions with Diffusion Models\">\n        \u003Cstrong>🔥 Visual Anagrams: Generating Multi-View Optical Illusions with Diffusion Models\u003C\u002Fstrong>\n    \u003C\u002Fa>\n    \u003Cbr\u002F>\n    Daniel Geng, Inbum Park, Andrew Owens\n    \u003Cbr\u002F>\n    [\u003Ca href=\"https:\u002F\u002Farxiv.org\u002Fabs\u002F2311.17919\">paper\u003C\u002Fa>] [\u003Ca href=\"https:\u002F\u002Fgithub.com\u002Fdangeng\u002Fvisual_anagrams\">code\u003C\u002Fa>]   [\u003Ca href=\"https:\u002F\u002Fcolab.research.google.com\u002Fgithub\u002Fdangeng\u002Fvisual_anagrams\u002Fblob\u002Fmain\u002Fnotebooks\u002Fcolab_demo_free_tier.ipynb\">colab\u003C\u002Fa>]\n    \u003Cbr\u002F>\n    \u003Cstrong>Topic:\u003C\u002Fstrong> Image and video synthesis and generation\n    \u003Cbr\u002F>\n    \u003Cstrong>Session:\u003C\u002Fstrong> Fri 21 Jun 8 p.m. EDT — 9:30 p.m. EDT #118\n\u003C\u002Fp>\n\u003Cbr\u002F>\n\u003Cbr\u002F>\n\n### low-level vision\n\n\u003Cp align=\"left\">\n    \u003Ca href=\"https:\u002F\u002Fgithub.com\u002FSkalskiP\u002Ftop-cvpr-2024-papers\u002Fassets\u002F26109316\u002F8eb6b4f0-4ae6-4615-9921-f73fa2aa3766\" title=\"XFeat: Accelerated Features for Lightweight Image Matching\">\n        \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FSkalskiP_top-cvpr-2024-papers_readme_53bb6b1d229a.png\" alt=\"XFeat: Accelerated Features for Lightweight Image Matching\" width=\"400px\" align=\"left\" \u002F>\n    \u003C\u002Fa>\n    \u003Ca href=\"https:\u002F\u002Farxiv.org\u002Fabs\u002F2404.19174\" title=\"XFeat: Accelerated Features for Lightweight Image Matching\">\n        \u003Cstrong>XFeat: Accelerated Features for Lightweight Image Matching\u003C\u002Fstrong>\n    \u003C\u002Fa>\n    \u003Cbr\u002F>\n    Guilherme Potje, Felipe Cadar, Andre Araujo, Renato Martins, Erickson R. Nascimento\n    \u003Cbr\u002F>\n    [\u003Ca href=\"https:\u002F\u002Farxiv.org\u002Fabs\u002F2404.19174\">paper\u003C\u002Fa>] [\u003Ca href=\"https:\u002F\u002Fgithub.com\u002Fverlab\u002Faccelerated_features\">code\u003C\u002Fa>] [\u003Ca href=\"https:\u002F\u002Fyoutu.be\u002FRamC70IkZuI\">video\u003C\u002Fa>] [\u003Ca href=\"https:\u002F\u002Fhuggingface.co\u002Fspaces\u002Fqubvel-hf\u002Fxfeat\">demo\u003C\u002Fa>] [\u003Ca href=\"https:\u002F\u002Fcolab.research.google.com\u002Fgithub\u002Fverlab\u002Faccelerated_features\u002Fblob\u002Fmain\u002Fnotebooks\u002Fxfeat_matching.ipynb\">colab\u003C\u002Fa>]\n    \u003Cbr\u002F>\n    \u003Cstrong>Topic:\u003C\u002Fstrong> Low-level vision\n    \u003Cbr\u002F>\n    \u003Cstrong>Session:\u003C\u002Fstrong> Wed 19 Jun 1:30 p.m. EDT — 3 p.m. 
EDT #245\n\u003C\u002Fp>\n\u003Cbr\u002F>\n\u003Cbr\u002F>\n\n\n\u003Cp align=\"left\">\n    \u003Ca href=\"https:\u002F\u002Fgithub.com\u002FSkalskiP\u002Ftop-cvpr-2024-papers\u002Fassets\u002F26109316\u002F038bef8f-a6df-440d-9ebc-b58f69beb338\" title=\"Robust Image Denoising through Adversarial Frequency Mixup\">\n        \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FSkalskiP_top-cvpr-2024-papers_readme_2f6b1525f185.png\" alt=\"Robust Image Denoising through Adversarial Frequency Mixup\" width=\"400px\" align=\"left\" \u002F>\n    \u003C\u002Fa>\n    \u003Ca href=\"https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FCVPR2024\u002Fhtml\u002FRyou_Robust_Image_Denoising_through_Adversarial_Frequency_Mixup_CVPR_2024_paper.html\" title=\"Robust Image Denoising through Adversarial Frequency Mixup\">\n        \u003Cstrong>Robust Image Denoising through Adversarial Frequency Mixup\u003C\u002Fstrong>\n    \u003C\u002Fa>\n    \u003Cbr\u002F>\n    Donghun Ryou, Inju Ha, Hyewon Yoo, Dongwan Kim, Bohyung Han\n    \u003Cbr\u002F>\n    [\u003Ca href=\"https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FCVPR2024\u002Fhtml\u002FRyou_Robust_Image_Denoising_through_Adversarial_Frequency_Mixup_CVPR_2024_paper.html\">paper\u003C\u002Fa>] [\u003Ca href=\"https:\u002F\u002Fgithub.com\u002Fdhryougit\u002FAFM\">code\u003C\u002Fa>] [\u003Ca href=\"https:\u002F\u002Fyoutu.be\u002FzQ0pwFSk7uo\">video\u003C\u002Fa>]  \n    \u003Cbr\u002F>\n    \u003Cstrong>Topic:\u003C\u002Fstrong> Low-level vision\n    \u003Cbr\u002F>\n    \u003Cstrong>Session:\u003C\u002Fstrong> Wed 19 Jun 1:30 p.m. EDT — 3 p.m. EDT #250\n\u003C\u002Fp>\n\u003Cbr\u002F>\n\u003Cbr\u002F>\n\n### multi-modal learning\n\n\u003Cp align=\"left\">\n    \u003Ca href=\"https:\u002F\u002Farxiv.org\u002Fabs\u002F2310.03744\" title=\"Improved Baselines with Visual Instruction Tuning\">\n        \u003Cstrong>🔥 Improved Baselines with Visual Instruction Tuning\u003C\u002Fstrong>\n    \u003C\u002Fa>\n    \u003Cbr\u002F>\n    Haotian Liu, Chunyuan Li, Yuheng Li, Yong Jae Lee\n    \u003Cbr\u002F>\n    [\u003Ca href=\"https:\u002F\u002Farxiv.org\u002Fabs\u002F2310.03744\">paper\u003C\u002Fa>] [\u003Ca href=\"https:\u002F\u002Fgithub.com\u002FLLaVA-VL\u002FLLaVA-NeXT\">code\u003C\u002Fa>]   \n    \u003Cbr\u002F>\n    \u003Cstrong>Topic:\u003C\u002Fstrong> Multi-modal learning\n    \u003Cbr\u002F>\n    \u003Cstrong>Session:\u003C\u002Fstrong> Fri 21 Jun 8 p.m. EDT — 9:30 p.m. 
EDT #209\n\u003C\u002Fp>\n\u003Cbr\u002F>\n\n### recognition: categorization, detection, retrieval\n\n\u003Cp align=\"left\">\n    \u003Ca href=\"https:\u002F\u002Fcvpr.thecvf.com\u002Fmedia\u002FPosterPDFs\u002FCVPR%202024\u002F31301.png?t=1717420504.9897285\" title=\"DETRs Beat YOLOs on Real-time Object Detection\">\n        \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FSkalskiP_top-cvpr-2024-papers_readme_a52db28cd29b.png\" alt=\"DETRs Beat YOLOs on Real-time Object Detection\" width=\"400px\" align=\"left\" \u002F>\n    \u003C\u002Fa>\n    \u003Ca href=\"https:\u002F\u002Farxiv.org\u002Fabs\u002F2304.08069\" title=\"DETRs Beat YOLOs on Real-time Object Detection\">\n        \u003Cstrong>DETRs Beat YOLOs on Real-time Object Detection\u003C\u002Fstrong>\n    \u003C\u002Fa>\n    \u003Cbr\u002F>\n    Yian Zhao, Wenyu Lv, Shangliang Xu, Jinman Wei, Guanzhong Wang, Qingqing Dang, Yi Liu, Jie Chen\n    \u003Cbr\u002F>\n    [\u003Ca href=\"https:\u002F\u002Farxiv.org\u002Fabs\u002F2304.08069\">paper\u003C\u002Fa>] [\u003Ca href=\"https:\u002F\u002Fgithub.com\u002Flyuwenyu\u002FRT-DETR\">code\u003C\u002Fa>] [\u003Ca href=\"https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=UOc0qMSX4Ac\">video\u003C\u002Fa>]  \n    \u003Cbr\u002F>\n    \u003Cstrong>Topic:\u003C\u002Fstrong> Recognition: Categorization, detection, retrieval\n    \u003Cbr\u002F>\n    \u003Cstrong>Session:\u003C\u002Fstrong> Thu 20 Jun 8 p.m. EDT — 9:30 p.m. EDT #229\n\u003C\u002Fp>\n\u003Cbr\u002F>\n\u003Cbr\u002F>\n\n\n\u003Cp align=\"left\">\n    \u003Ca href=\"https:\u002F\u002Fgithub.com\u002FSkalskiP\u002Ftop-cvpr-2024-papers\u002Fassets\u002F26109316\u002Ff9023a28-aca5-4965-a194-984c62348dc0\" title=\"YOLO-World: Real-Time Open-Vocabulary Object Detection\">\n        \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FSkalskiP_top-cvpr-2024-papers_readme_051aa5ce7536.png\" alt=\"YOLO-World: Real-Time Open-Vocabulary Object Detection\" width=\"400px\" align=\"left\" \u002F>\n    \u003C\u002Fa>\n    \u003Ca href=\"https:\u002F\u002Farxiv.org\u002Fabs\u002F2401.17270\" title=\"YOLO-World: Real-Time Open-Vocabulary Object Detection\">\n        \u003Cstrong>YOLO-World: Real-Time Open-Vocabulary Object Detection\u003C\u002Fstrong>\n    \u003C\u002Fa>\n    \u003Cbr\u002F>\n    Tianheng Cheng, Lin Song, Yixiao Ge, Wenyu Liu, Xinggang Wang, Ying Shan\n    \u003Cbr\u002F>\n    [\u003Ca href=\"https:\u002F\u002Farxiv.org\u002Fabs\u002F2401.17270\">paper\u003C\u002Fa>] [\u003Ca href=\"https:\u002F\u002Fgithub.com\u002FAILab-CVC\u002FYOLO-World\">code\u003C\u002Fa>] [\u003Ca href=\"https:\u002F\u002Fyoutu.be\u002FX7gKBGVz4vs\">video\u003C\u002Fa>] [\u003Ca href=\"https:\u002F\u002Fhuggingface.co\u002Fspaces\u002FSkalskiP\u002FYOLO-World\">demo\u003C\u002Fa>] [\u003Ca href=\"https:\u002F\u002Fcolab.research.google.com\u002Fgithub\u002Froboflow-ai\u002Fnotebooks\u002Fblob\u002Fmain\u002Fnotebooks\u002Fzero-shot-object-detection-with-yolo-world.ipynb\">colab\u003C\u002Fa>]\n    \u003Cbr\u002F>\n    \u003Cstrong>Topic:\u003C\u002Fstrong> Recognition: Categorization, detection, retrieval\n    \u003Cbr\u002F>\n    \u003Cstrong>Session:\u003C\u002Fstrong> Thu 20 Jun 8 p.m. EDT — 9:30 p.m. 
EDT #223\n\u003C\u002Fp>\n\u003Cbr\u002F>\n\u003Cbr\u002F>\n\n\n\u003Cp align=\"left\">\n    \u003Ca href=\"https:\u002F\u002Fcvpr.thecvf.com\u002Fmedia\u002FPosterPDFs\u002FCVPR%202024\u002F31732.png?t=1717298372.5822952\" title=\"Object Recognition as Next Token Prediction\">\n        \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FSkalskiP_top-cvpr-2024-papers_readme_76800fa5a2c6.png\" alt=\"Object Recognition as Next Token Prediction\" width=\"400px\" align=\"left\" \u002F>\n    \u003C\u002Fa>\n    \u003Ca href=\"https:\u002F\u002Farxiv.org\u002Fabs\u002F2312.02142\" title=\"Object Recognition as Next Token Prediction\">\n        \u003Cstrong>🔥 Object Recognition as Next Token Prediction\u003C\u002Fstrong>\n    \u003C\u002Fa>\n    \u003Cbr\u002F>\n    Kaiyu Yue, Bor-Chun Chen, Jonas Geiping, Hengduo Li, Tom Goldstein, Ser-Nam Lim\n    \u003Cbr\u002F>\n    [\u003Ca href=\"https:\u002F\u002Farxiv.org\u002Fabs\u002F2312.02142\">paper\u003C\u002Fa>] [\u003Ca href=\"https:\u002F\u002Fgithub.com\u002Fkaiyuyue\u002Fnxtp\">code\u003C\u002Fa>] [\u003Ca href=\"https:\u002F\u002Fyoutu.be\u002FxeI8dZIpoco\">video\u003C\u002Fa>]  [\u003Ca href=\"https:\u002F\u002Fcolab.research.google.com\u002Fdrive\u002F1pJX37LP5xGLDzD3H7ztTmpq1RrIBeWX3?usp=sharing\">colab\u003C\u002Fa>]\n    \u003Cbr\u002F>\n    \u003Cstrong>Topic:\u003C\u002Fstrong> Recognition: Categorization, detection, retrieval\n    \u003Cbr\u002F>\n    \u003Cstrong>Session:\u003C\u002Fstrong> Thu 20 Jun 8 p.m. EDT — 9:30 p.m. EDT #199\n\u003C\u002Fp>\n\u003Cbr\u002F>\n\u003Cbr\u002F>\n\n### segmentation, grouping and shape analysis\n\n\u003Cp align=\"left\">\n    \u003Ca href=\"https:\u002F\u002Fgithub.com\u002FSkalskiP\u002Ftop-cvpr-2024-papers\u002Fassets\u002F26109316\u002F62d34981-73d6-49b2-8058-46ec99bac94d\" title=\"RobustSAM: Segment Anything Robustly on Degraded Images\">\n        \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FSkalskiP_top-cvpr-2024-papers_readme_db4ba6747844.png\" alt=\"RobustSAM: Segment Anything Robustly on Degraded Images\" width=\"400px\" align=\"left\" \u002F>\n    \u003C\u002Fa>\n    \u003Ca href=\"https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FCVPR2024\u002Fhtml\u002FChen_RobustSAM_Segment_Anything_Robustly_on_Degraded_Images_CVPR_2024_paper.html\" title=\"RobustSAM: Segment Anything Robustly on Degraded Images\">\n        \u003Cstrong>🔥 RobustSAM: Segment Anything Robustly on Degraded Images\u003C\u002Fstrong>\n    \u003C\u002Fa>\n    \u003Cbr\u002F>\n    Wei-Ting Chen, Yu-Jiet Vong, Sy-Yen Kuo, Sizhou Ma, Jian Wang\n    \u003Cbr\u002F>\n    [\u003Ca href=\"https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FCVPR2024\u002Fhtml\u002FChen_RobustSAM_Segment_Anything_Robustly_on_Degraded_Images_CVPR_2024_paper.html\">paper\u003C\u002Fa>]  [\u003Ca href=\"https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=Awukqkbs6zM\">video\u003C\u002Fa>]  \n    \u003Cbr\u002F>\n    \u003Cstrong>Topic:\u003C\u002Fstrong> Segmentation, grouping and shape analysis\n    \u003Cbr\u002F>\n    \u003Cstrong>Session:\u003C\u002Fstrong> Wed 19 Jun 1:30 p.m. EDT — 3 p.m. 
EDT #378\n\u003C\u002Fp>\n\u003Cbr\u002F>\n\u003Cbr\u002F>\n\n\n\u003Cp align=\"left\">\n    \u003Ca href=\"https:\u002F\u002Fcvpr.thecvf.com\u002Fmedia\u002FPosterPDFs\u002FCVPR%202024\u002F30253.png?t=1716781257.513028\" title=\"Frozen CLIP: A Strong Backbone for Weakly Supervised Semantic Segmentation\">\n        \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FSkalskiP_top-cvpr-2024-papers_readme_559c5b76efa5.png\" alt=\"Frozen CLIP: A Strong Backbone for Weakly Supervised Semantic Segmentation\" width=\"400px\" align=\"left\" \u002F>\n    \u003C\u002Fa>\n    \u003Ca href=\"https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FCVPR2024\u002Fhtml\u002FZhang_Frozen_CLIP_A_Strong_Backbone_for_Weakly_Supervised_Semantic_Segmentation_CVPR_2024_paper.html\" title=\"Frozen CLIP: A Strong Backbone for Weakly Supervised Semantic Segmentation\">\n        \u003Cstrong>🔥 Frozen CLIP: A Strong Backbone for Weakly Supervised Semantic Segmentation\u003C\u002Fstrong>\n    \u003C\u002Fa>\n    \u003Cbr\u002F>\n    Bingfeng Zhang, Siyue Yu, Yunchao Wei, Yao Zhao, Jimin Xiao\n    \u003Cbr\u002F>\n    [\u003Ca href=\"https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FCVPR2024\u002Fhtml\u002FZhang_Frozen_CLIP_A_Strong_Backbone_for_Weakly_Supervised_Semantic_Segmentation_CVPR_2024_paper.html\">paper\u003C\u002Fa>] [\u003Ca href=\"https:\u002F\u002Fgithub.com\u002Fzbf1991\u002FWeCLIP\">code\u003C\u002Fa>] [\u003Ca href=\"https:\u002F\u002Fyoutu.be\u002FLh489nTm_M0\">video\u003C\u002Fa>]  \n    \u003Cbr\u002F>\n    \u003Cstrong>Topic:\u003C\u002Fstrong> Segmentation, grouping and shape analysis\n    \u003Cbr\u002F>\n    \u003Cstrong>Session:\u003C\u002Fstrong> Wed 19 Jun 1:30 p.m. EDT — 3 p.m. EDT #351\n\u003C\u002Fp>\n\u003Cbr\u002F>\n\u003Cbr\u002F>\n\n\n\u003Cp align=\"left\">\n    \u003Ca href=\"https:\u002F\u002Fgithub.com\u002FSkalskiP\u002Ftop-cvpr-2024-papers\u002Fassets\u002F26109316\u002F2f2bf794-3981-48c8-992d-04dd32ee9ced\" title=\"Semantic-aware SAM for Point-Prompted Instance Segmentation\">\n        \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FSkalskiP_top-cvpr-2024-papers_readme_5b593b38a8b4.png\" alt=\"Semantic-aware SAM for Point-Prompted Instance Segmentation\" width=\"400px\" align=\"left\" \u002F>\n    \u003C\u002Fa>\n    \u003Ca href=\"https:\u002F\u002Farxiv.org\u002Fabs\u002F2312.15895\" title=\"Semantic-aware SAM for Point-Prompted Instance Segmentation\">\n        \u003Cstrong>🔥 Semantic-aware SAM for Point-Prompted Instance Segmentation\u003C\u002Fstrong>\n    \u003C\u002Fa>\n    \u003Cbr\u002F>\n    Zhaoyang Wei, Pengfei Chen, Xuehui Yu, Guorong Li, Jianbin Jiao, Zhenjun Han\n    \u003Cbr\u002F>\n    [\u003Ca href=\"https:\u002F\u002Farxiv.org\u002Fabs\u002F2312.15895\">paper\u003C\u002Fa>] [\u003Ca href=\"https:\u002F\u002Fgithub.com\u002Fzhaoyangwei123\u002FSAPNet\">code\u003C\u002Fa>] [\u003Ca href=\"https:\u002F\u002Fyoutu.be\u002F42-tJFmT7Ao\">video\u003C\u002Fa>]  \n    \u003Cbr\u002F>\n    \u003Cstrong>Topic:\u003C\u002Fstrong> Segmentation, grouping and shape analysis\n    \u003Cbr\u002F>\n    \u003Cstrong>Session:\u003C\u002Fstrong> Wed 19 Jun 1:30 p.m. EDT — 3 p.m. 
EDT #331\n\u003C\u002Fp>\n\u003Cbr\u002F>\n\u003Cbr\u002F>\n\n\n\u003Cp align=\"left\">\n    \u003Ca href=\"https:\u002F\u002Farxiv.org\u002Fabs\u002F2403.15789\" title=\"In-Context Matting\">\n        \u003Cstrong>🔥 In-Context Matting\u003C\u002Fstrong>\n    \u003C\u002Fa>\n    \u003Cbr\u002F>\n    He Guo, Zixuan Ye, Zhiguo Cao, Hao Lu\n    \u003Cbr\u002F>\n    [\u003Ca href=\"https:\u002F\u002Farxiv.org\u002Fabs\u002F2403.15789\">paper\u003C\u002Fa>] [\u003Ca href=\"https:\u002F\u002Fgithub.com\u002Ftiny-smart\u002Fin-context-matting\">code\u003C\u002Fa>]   \n    \u003Cbr\u002F>\n    \u003Cstrong>Topic:\u003C\u002Fstrong> Segmentation, grouping and shape analysis\n    \u003Cbr\u002F>\n    \u003Cstrong>Session:\u003C\u002Fstrong> Wed 19 Jun 1:30 p.m. EDT — 3 p.m. EDT #343\n\u003C\u002Fp>\n\u003Cbr\u002F>\n\n\n\u003Cp align=\"left\">\n    \u003Ca href=\"https:\u002F\u002Fgithub.com\u002FSkalskiP\u002Ftop-cvpr-2024-papers\u002Fassets\u002F26109316\u002Fbfe79038-706d-491b-ac99-083f421dc5ec\" title=\"General Object Foundation Model for Images and Videos at Scale\">\n        \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FSkalskiP_top-cvpr-2024-papers_readme_a55423b8b181.png\" alt=\"General Object Foundation Model for Images and Videos at Scale\" width=\"400px\" align=\"left\" \u002F>\n    \u003C\u002Fa>\n    \u003Ca href=\"https:\u002F\u002Farxiv.org\u002Fabs\u002F2312.09158\" title=\"General Object Foundation Model for Images and Videos at Scale\">\n        \u003Cstrong>🔥 General Object Foundation Model for Images and Videos at Scale\u003C\u002Fstrong>\n    \u003C\u002Fa>\n    \u003Cbr\u002F>\n    Junfeng Wu, Yi Jiang, Qihao Liu, Zehuan Yuan, Xiang Bai, Song Bai\n    \u003Cbr\u002F>\n    [\u003Ca href=\"https:\u002F\u002Farxiv.org\u002Fabs\u002F2312.09158\">paper\u003C\u002Fa>] [\u003Ca href=\"https:\u002F\u002Fgithub.com\u002FFoundationVision\u002FGLEE\">code\u003C\u002Fa>] [\u003Ca href=\"https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=PSVhfTPx0GQ\">video\u003C\u002Fa>]  \n    \u003Cbr\u002F>\n    \u003Cstrong>Topic:\u003C\u002Fstrong> Segmentation, grouping and shape analysis\n    \u003Cbr\u002F>\n    \u003Cstrong>Session:\u003C\u002Fstrong> Wed 19 Jun 1:30 p.m. EDT — 3 p.m. 
EDT #350\n\u003C\u002Fp>\n\u003Cbr\u002F>\n\u003Cbr\u002F>\n\n### self-supervised or unsupervised representation learning\n\n\u003Cp align=\"left\">\n    \u003Ca href=\"https:\u002F\u002Fcvpr.thecvf.com\u002Fmedia\u002FPosterPDFs\u002FCVPR%202024\u002F30014.png?t=1717339970.9614518\" title=\"InternVL: Scaling up Vision Foundation Models and Aligning for Generic Visual-Linguistic Tasks\">\n        \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FSkalskiP_top-cvpr-2024-papers_readme_d27ad5e09966.png\" alt=\"InternVL: Scaling up Vision Foundation Models and Aligning for Generic Visual-Linguistic Tasks\" width=\"400px\" align=\"left\" \u002F>\n    \u003C\u002Fa>\n    \u003Ca href=\"https:\u002F\u002Farxiv.org\u002Fabs\u002F2312.14238\" title=\"InternVL: Scaling up Vision Foundation Models and Aligning for Generic Visual-Linguistic Tasks\">\n        \u003Cstrong>🔥 InternVL: Scaling up Vision Foundation Models and Aligning for Generic Visual-Linguistic Tasks\u003C\u002Fstrong>\n    \u003C\u002Fa>\n    \u003Cbr\u002F>\n    Zhe Chen, Jiannan Wu, Wenhai Wang, Weijie Su, Guo Chen, Sen Xing, Muyan Zhong, Qinglong Zhang, Xizhou Zhu, Lewei Lu, Bin Li, Ping Luo, Tong Lu, Yu Qiao, Jifeng Dai\n    \u003Cbr\u002F>\n    [\u003Ca href=\"https:\u002F\u002Farxiv.org\u002Fabs\u002F2312.14238\">paper\u003C\u002Fa>] [\u003Ca href=\"https:\u002F\u002Fgithub.com\u002FOpenGVLab\u002FInternVL\">code\u003C\u002Fa>]  [\u003Ca href=\"https:\u002F\u002Fhuggingface.co\u002Fspaces\u002FOpenGVLab\u002FInternVL\">demo\u003C\u002Fa>] \n    \u003Cbr\u002F>\n    \u003Cstrong>Topic:\u003C\u002Fstrong> Self-supervised or unsupervised representation learning\n    \u003Cbr\u002F>\n    \u003Cstrong>Session:\u003C\u002Fstrong> Fri 21 Jun 8 p.m. EDT — 9:30 p.m. EDT #412\n\u003C\u002Fp>\n\u003Cbr\u002F>\n\u003Cbr\u002F>\n\n### video: low-level analysis, motion, and tracking\n\n\u003Cp align=\"left\">\n    \u003Ca href=\"https:\u002F\u002Fcvpr.thecvf.com\u002Fmedia\u002FPosterPDFs\u002FCVPR%202024\u002F29590.png?t=1717456006.3308516\" title=\"Matching Anything by Segmenting Anything\">\n        \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FSkalskiP_top-cvpr-2024-papers_readme_dcf8dd95caaa.png\" alt=\"Matching Anything by Segmenting Anything\" width=\"400px\" align=\"left\" \u002F>\n    \u003C\u002Fa>\n    \u003Ca href=\"https:\u002F\u002Farxiv.org\u002Fabs\u002F2406.04221\" title=\"Matching Anything by Segmenting Anything\">\n        \u003Cstrong>🔥 Matching Anything by Segmenting Anything\u003C\u002Fstrong>\n    \u003C\u002Fa>\n    \u003Cbr\u002F>\n    Siyuan Li, Lei Ke, Martin Danelljan, Luigi Piccinelli, Mattia Segu, Luc Van Gool, Fisher Yu\n    \u003Cbr\u002F>\n    [\u003Ca href=\"https:\u002F\u002Farxiv.org\u002Fabs\u002F2406.04221\">paper\u003C\u002Fa>] [\u003Ca href=\"https:\u002F\u002Fgithub.com\u002Fsiyuanliii\u002Fmasa\">code\u003C\u002Fa>] [\u003Ca href=\"https:\u002F\u002Fyoutu.be\u002FKDQVujKAWFQ\">video\u003C\u002Fa>]  \n    \u003Cbr\u002F>\n    \u003Cstrong>Topic:\u003C\u002Fstrong> Video: Low-level analysis, motion, and tracking\n    \u003Cbr\u002F>\n    \u003Cstrong>Session:\u003C\u002Fstrong> Thu 20 Jun 8 p.m. EDT — 9:30 p.m. 
EDT #421\n\u003C\u002Fp>\n\u003Cbr\u002F>\n\u003Cbr\u002F>\n\n\n\u003Cp align=\"left\">\n    \u003Ca href=\"https:\u002F\u002Fgithub.com\u002FSkalskiP\u002Ftop-cvpr-2024-papers\u002Fassets\u002F26109316\u002F9711186c-b05b-472d-b095-d98dbe386171\" title=\"DiffMOT: A Real-time Diffusion-based Multiple Object Tracker with Non-linear Prediction\">\n        \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FSkalskiP_top-cvpr-2024-papers_readme_48cf2cd7ac8a.png\" alt=\"DiffMOT: A Real-time Diffusion-based Multiple Object Tracker with Non-linear Prediction\" width=\"400px\" align=\"left\" \u002F>\n    \u003C\u002Fa>\n    \u003Ca href=\"https:\u002F\u002Farxiv.org\u002Fabs\u002F2403.02075\" title=\"DiffMOT: A Real-time Diffusion-based Multiple Object Tracker with Non-linear Prediction\">\n        \u003Cstrong>DiffMOT: A Real-time Diffusion-based Multiple Object Tracker with Non-linear Prediction\u003C\u002Fstrong>\n    \u003C\u002Fa>\n    \u003Cbr\u002F>\n    Weiyi Lv, Yuhang Huang, Ning Zhang, Ruei-Sung Lin, Mei Han, Dan Zeng\n    \u003Cbr\u002F>\n    [\u003Ca href=\"https:\u002F\u002Farxiv.org\u002Fabs\u002F2403.02075\">paper\u003C\u002Fa>] [\u003Ca href=\"https:\u002F\u002Fgithub.com\u002FKroery\u002FDiffMOT\">code\u003C\u002Fa>]   \n    \u003Cbr\u002F>\n    \u003Cstrong>Topic:\u003C\u002Fstrong> Video: Low-level analysis, motion, and tracking\n    \u003Cbr\u002F>\n    \u003Cstrong>Session:\u003C\u002Fstrong> Thu 20 Jun 8 p.m. EDT — 9:30 p.m. EDT #455\n\u003C\u002Fp>\n\u003Cbr\u002F>\n\u003Cbr\u002F>\n\n### vision, language, and reasoning\n\n\u003Cp align=\"left\">\n    \u003Ca href=\"https:\u002F\u002Fcvpr.thecvf.com\u002Fmedia\u002FPosterPDFs\u002FCVPR%202024\u002F31492.png?t=1717327133.6073072\" title=\"Alpha-CLIP: A CLIP Model Focusing on Wherever You Want\">\n        \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FSkalskiP_top-cvpr-2024-papers_readme_5d8c960b3280.png\" alt=\"Alpha-CLIP: A CLIP Model Focusing on Wherever You Want\" width=\"400px\" align=\"left\" \u002F>\n    \u003C\u002Fa>\n    \u003Ca href=\"https:\u002F\u002Farxiv.org\u002Fabs\u002F2312.03818\" title=\"Alpha-CLIP: A CLIP Model Focusing on Wherever You Want\">\n        \u003Cstrong>Alpha-CLIP: A CLIP Model Focusing on Wherever You Want\u003C\u002Fstrong>\n    \u003C\u002Fa>\n    \u003Cbr\u002F>\n    Zeyi Sun, Ye Fang, Tong Wu, Pan Zhang, Yuhang Zang, Shu Kong, Yuanjun Xiong, Dahua Lin, Jiaqi Wang\n    \u003Cbr\u002F>\n    [\u003Ca href=\"https:\u002F\u002Farxiv.org\u002Fabs\u002F2312.03818\">paper\u003C\u002Fa>] [\u003Ca href=\"https:\u002F\u002Fgithub.com\u002FSunzeY\u002FAlphaCLIP\">code\u003C\u002Fa>] [\u003Ca href=\"https:\u002F\u002Fyoutu.be\u002FQCEIKPZpZz0\">video\u003C\u002Fa>] [\u003Ca href=\"https:\u002F\u002Fhuggingface.co\u002Fspaces\u002FZery\u002FAlpha-CLIP_LLaVA-1.5\">demo\u003C\u002Fa>] \n    \u003Cbr\u002F>\n    \u003Cstrong>Topic:\u003C\u002Fstrong> Vision, language, and reasoning\n    \u003Cbr\u002F>\n    \u003Cstrong>Session:\u003C\u002Fstrong> Thu 20 Jun 1:30 p.m. EDT — 3 p.m. EDT #327\n\u003C\u002Fp>\n\u003Cbr\u002F>\n\u003Cbr\u002F>\n\n\n\u003Cp align=\"left\">\n    \u003Ca href=\"https:\u002F\u002Farxiv.org\u002Fabs\u002F2401.06209\" title=\"Eyes Wide Shut? Exploring the Visual Shortcomings of Multimodal LLMs\">\n        \u003Cstrong>🔥 Eyes Wide Shut? 
Exploring the Visual Shortcomings of Multimodal LLMs\u003C\u002Fstrong>\n    \u003C\u002Fa>\n    \u003Cbr\u002F>\n    Shengbang Tong, Zhuang Liu, Yuexiang Zhai, Yi Ma, Yann LeCun, Saining Xie\n    \u003Cbr\u002F>\n    [\u003Ca href=\"https:\u002F\u002Farxiv.org\u002Fabs\u002F2401.06209\">paper\u003C\u002Fa>] [\u003Ca href=\"https:\u002F\u002Fgithub.com\u002Ftsb0601\u002FMMVP\">code\u003C\u002Fa>]   \n    \u003Cbr\u002F>\n    \u003Cstrong>Topic:\u003C\u002Fstrong> Vision, language, and reasoning\n    \u003Cbr\u002F>\n    \u003Cstrong>Session:\u003C\u002Fstrong> Thu 20 Jun 1:30 p.m. EDT — 3 p.m. EDT #390\n\u003C\u002Fp>\n\u003Cbr\u002F>\n\n\n\u003Cp align=\"left\">\n    \u003Ca href=\"https:\u002F\u002Fcvpr.thecvf.com\u002Fmedia\u002FPosterPDFs\u002FCVPR%202024\u002F30109.png?t=1717509456.89997\" title=\"LISA: Reasoning Segmentation via Large Language Model\">\n        \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FSkalskiP_top-cvpr-2024-papers_readme_2beaf3ec057c.png\" alt=\"LISA: Reasoning Segmentation via Large Language Model\" width=\"400px\" align=\"left\" \u002F>\n    \u003C\u002Fa>\n    \u003Ca href=\"https:\u002F\u002Farxiv.org\u002Fabs\u002F2308.00692\" title=\"LISA: Reasoning Segmentation via Large Language Model\">\n        \u003Cstrong>🔥 LISA: Reasoning Segmentation via Large Language Model\u003C\u002Fstrong>\n    \u003C\u002Fa>\n    \u003Cbr\u002F>\n    Xin Lai, Zhuotao Tian, Yukang Chen, Yanwei Li, Yuhui Yuan, Shu Liu, Jiaya Jia\n    \u003Cbr\u002F>\n    [\u003Ca href=\"https:\u002F\u002Farxiv.org\u002Fabs\u002F2308.00692\">paper\u003C\u002Fa>] [\u003Ca href=\"https:\u002F\u002Fgithub.com\u002Fdvlab-research\u002FLISA\">code\u003C\u002Fa>]  [\u003Ca href=\"http:\u002F\u002F103.170.5.190:7870\u002F\">demo\u003C\u002Fa>] \n    \u003Cbr\u002F>\n    \u003Cstrong>Topic:\u003C\u002Fstrong> Vision, language, and reasoning\n    \u003Cbr\u002F>\n    \u003Cstrong>Session:\u003C\u002Fstrong> Thu 20 Jun 1:30 p.m. EDT — 3 p.m. EDT #413\n\u003C\u002Fp>\n\u003Cbr\u002F>\n\u003Cbr\u002F>\n\n\n\u003Cp align=\"left\">\n    \u003Ca href=\"https:\u002F\u002Fgithub.com\u002FSkalskiP\u002Ftop-cvpr-2024-papers\u002Fassets\u002F26109316\u002F53e03a08-4dd9-451a-975e-e3654fa5bc71\" title=\"ViP-LLaVA: Making Large Multimodal Models Understand Arbitrary Visual Prompts\">\n        \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FSkalskiP_top-cvpr-2024-papers_readme_d0b23b88239a.png\" alt=\"ViP-LLaVA: Making Large Multimodal Models Understand Arbitrary Visual Prompts\" width=\"400px\" align=\"left\" \u002F>\n    \u003C\u002Fa>\n    \u003Ca href=\"https:\u002F\u002Farxiv.org\u002Fabs\u002F2312.00784\" title=\"ViP-LLaVA: Making Large Multimodal Models Understand Arbitrary Visual Prompts\">\n        \u003Cstrong>ViP-LLaVA: Making Large Multimodal Models Understand Arbitrary Visual Prompts\u003C\u002Fstrong>\n    \u003C\u002Fa>\n    \u003Cbr\u002F>\n    Mu Cai, Haotian Liu, Dennis Park, Siva Karthik Mustikovela, Gregory P. 
Meyer, Yuning Chai, Yong Jae Lee\n    \u003Cbr\u002F>\n    [\u003Ca href=\"https:\u002F\u002Farxiv.org\u002Fabs\u002F2312.00784\">paper\u003C\u002Fa>] [\u003Ca href=\"https:\u002F\u002Fgithub.com\u002FWisconsinAIVision\u002FViP-LLaVA\">code\u003C\u002Fa>] [\u003Ca href=\"https:\u002F\u002Fyoutu.be\u002Fj_l1bRQouzc\">video\u003C\u002Fa>] [\u003Ca href=\"https:\u002F\u002Fpages.cs.wisc.edu\u002F~mucai\u002Fvip-llava.html\">demo\u003C\u002Fa>] \n    \u003Cbr\u002F>\n    \u003Cstrong>Topic:\u003C\u002Fstrong> Vision, language, and reasoning\n    \u003Cbr\u002F>\n    \u003Cstrong>Session:\u003C\u002Fstrong> Thu 20 Jun 1:30 p.m. EDT — 3 p.m. EDT #317\n\u003C\u002Fp>\n\u003Cbr\u002F>\n\u003Cbr\u002F>\n\n\n\u003Cp align=\"left\">\n    \u003Ca href=\"https:\u002F\u002Fcvpr.thecvf.com\u002Fmedia\u002FPosterPDFs\u002FCVPR%202024\u002F31040.png?t=1718300473.5736258\" title=\"MMMU: A Massive Multi-discipline Multimodal Understanding and Reasoning Benchmark for Expert AGI\">\n        \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FSkalskiP_top-cvpr-2024-papers_readme_00919d800269.png\" alt=\"MMMU: A Massive Multi-discipline Multimodal Understanding and Reasoning Benchmark for Expert AGI\" width=\"400px\" align=\"left\" \u002F>\n    \u003C\u002Fa>\n    \u003Ca href=\"https:\u002F\u002Farxiv.org\u002Fabs\u002F2311.16502\" title=\"MMMU: A Massive Multi-discipline Multimodal Understanding and Reasoning Benchmark for Expert AGI\">\n        \u003Cstrong>🔥 MMMU: A Massive Multi-discipline Multimodal Understanding and Reasoning Benchmark for Expert AGI\u003C\u002Fstrong>\n    \u003C\u002Fa>\n    \u003Cbr\u002F>\n    Xiang Yue, Yuansheng Ni, Kai Zhang, Tianyu Zheng, Ruoqi Liu, Ge Zhang, Samuel Stevens, Dongfu Jiang, Weiming Ren, Yuxuan Sun, Cong Wei, Botao Yu, Ruibin Yuan, Renliang Sun, Ming Yin, Boyuan Zheng, Zhenzhu Yang, Yibo Liu, Wenhao Huang, Huan Sun, Yu Su, Wenhu Chen\n    \u003Cbr\u002F>\n    [\u003Ca href=\"https:\u002F\u002Farxiv.org\u002Fabs\u002F2311.16502\">paper\u003C\u002Fa>]    \n    \u003Cbr\u002F>\n    \u003Cstrong>Topic:\u003C\u002Fstrong> Vision, language, and reasoning\n    \u003Cbr\u002F>\n    \u003Cstrong>Session:\u003C\u002Fstrong> Thu 20 Jun 1:30 p.m. EDT — 3 p.m. EDT #382\n\u003C\u002Fp>\n\u003Cbr\u002F>\n\u003Cbr\u002F>\n\n\u003C!--- AUTOGENERATED_PAPERS_LIST -->\n\n## 🦸 contribution\n\nWe would love your help in making this repository even better! 
If you know of an amazing\npaper that isn't listed here, or if you have any suggestions for improvement, feel free\nto open an\n[issue](https:\u002F\u002Fgithub.com\u002FSkalskiP\u002Ftop-cvpr-2024-papers\u002Fissues)\nor submit a\n[pull request](https:\u002F\u002Fgithub.com\u002FSkalskiP\u002Ftop-cvpr-2024-papers\u002Fpulls).\n","![visitor badge](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FSkalskiP_top-cvpr-2024-papers_readme_32f5209f85dd.png)\n\n\u003Cdiv align=\"center\">\n  \u003Ch1 align=\"center\">top CVPR 2024 papers\u003C\u002Fh1>\n  \u003Ca href=\"https:\u002F\u002Fgithub.com\u002FSkalskiP\u002Ftop-cvpr-2023-papers\">2023\u003C\u002Fa> | \u003Ca href=\"https:\u002F\u002Fgithub.com\u002FSkalskiP\u002Ftop-cvpr-2024-papers\">2024\u003C\u002Fa> | \u003Ca href=\"https:\u002F\u002Fgithub.com\u002FSkalskiP\u002Ftop-cvpr-2025-papers\">2025\u003C\u002Fa>\n\u003C\u002Fdiv>\n\n\u003Cbr>\n\n\u003Cdiv align=\"center\">\n  \u003Cimg width=\"600\" src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FSkalskiP_top-cvpr-2024-papers_readme_ee32e4918b74.png\" alt=\"vancouver\">\n\u003C\u002Fdiv>\n\n## 👋 hello\n\nComputer Vision and Pattern Recognition (CVPR) is a massive conference. In **2024** alone,\n**11,532** papers were submitted, and **2,719** were accepted. I created this repository\nto help you search for the crème de la crème of CVPR publications. If the paper you are looking for is not on my short list, take a peek at the full\n[list of accepted papers](https:\u002F\u002Fcvpr.thecvf.com\u002FConferences\u002F2024\u002FAcceptedPapers).\n\n## 🗞️ papers and posters\n\n*🔥 - highlighted papers*\n\n\u003C!--- AUTOGENERATED_PAPERS_LIST -->\n\u003C!---\n   WARNING: DO NOT EDIT THIS LIST MANUALLY. IT IS AUTOMATICALLY GENERATED.\n   HEAD OVER TO https:\u002F\u002Fgithub.com\u002FSkalskiP\u002Ftop-cvpr-2024-papers\u002Fblob\u002Fmaster\u002FCONTRIBUTING.md FOR MORE DETAILS ON HOW TO MAKE CHANGES PROPERLY.\n-->\n### 3d from multi-view and sensors\n\n\u003Cp align=\"left\">\n    \u003Ca href=\"https:\u002F\u002Fcvpr.thecvf.com\u002Fmedia\u002FPosterPDFs\u002FCVPR%202024\u002F31668.png?t=1717417393.7589533\" title=\"SpatialTracker: Tracking Any 2D Pixels in 3D Space\">\n        \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FSkalskiP_top-cvpr-2024-papers_readme_535388afd184.png\" alt=\"SpatialTracker: Tracking Any 2D Pixels in 3D Space\" width=\"400px\" align=\"left\" \u002F>\n    \u003C\u002Fa>\n    \u003Ca href=\"https:\u002F\u002Farxiv.org\u002Fabs\u002F2404.04319\" title=\"SpatialTracker: Tracking Any 2D Pixels in 3D Space\">\n        \u003Cstrong>🔥 SpatialTracker: Tracking Any 2D Pixels in 3D Space\u003C\u002Fstrong>\n    \u003C\u002Fa>\n    \u003Cbr\u002F>\n    Yuxi Xiao, Qianqian Wang, Shangzhan Zhang, Nan Xue, Sida Peng, Yujun Shen, Xiaowei Zhou\n    \u003Cbr\u002F>\n    [\u003Ca href=\"https:\u002F\u002Farxiv.org\u002Fabs\u002F2404.04319\">paper\u003C\u002Fa>] [\u003Ca href=\"https:\u002F\u002Fgithub.com\u002Fhenry123-boy\u002FSpaTracker\">code\u003C\u002Fa>]   \n    \u003Cbr\u002F>\n    \u003Cstrong>Topic:\u003C\u002Fstrong> 3D from multi-view and sensors\n    \u003Cbr\u002F>\n    \u003Cstrong>Session:\u003C\u002Fstrong> Fri 21 Jun 1:30 p.m. EDT — 3 p.m. EDT #84\n\u003C\u002Fp>\n\u003Cbr\u002F>\n\u003Cbr\u002F>\n\n\n\u003Cp align=\"left\">\n    \u003Ca href=\"https:\u002F\u002Fcvpr.thecvf.com\u002Fmedia\u002FPosterPDFs\u002FCVPR%202024\u002F31616.png?t=1716470830.0209699\" title=\"ViewDiff: 3D-Consistent Image Generation with Text-to-Image Models\">\n        \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FSkalskiP_top-cvpr-2024-papers_readme_abacff45e276.png\" alt=\"ViewDiff: 3D-Consistent Image Generation with Text-to-Image Models\" width=\"400px\" align=\"left\" \u002F>\n    \u003C\u002Fa>\n    \u003Ca href=\"https:\u002F\u002Farxiv.org\u002Fabs\u002F2403.01807\" title=\"ViewDiff: 3D-Consistent Image Generation with Text-to-Image Models\">\n        \u003Cstrong>ViewDiff: 3D-Consistent Image Generation with Text-to-Image 
Models\u003C\u002Fstrong>\n    \u003C\u002Fa>\n    \u003Cbr\u002F>\n    Lukas Höllein, Aljaž Božič, Norman Müller, David Novotny, Hung-Yu Tseng, Christian Richardt, Michael Zollhöfer, Matthias Nießner\n    \u003Cbr\u002F>\n    [\u003Ca href=\"https:\u002F\u002Farxiv.org\u002Fabs\u002F2403.01807\">paper\u003C\u002Fa>] [\u003Ca href=\"https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002FViewDiff\">code\u003C\u002Fa>] [\u003Ca href=\"https:\u002F\u002Fyoutu.be\u002FSdjoCqHzMMk\">video\u003C\u002Fa>]  \n    \u003Cbr\u002F>\n    \u003Cstrong>Topic:\u003C\u002Fstrong> 3D from multi-view and sensors\n    \u003Cbr\u002F>\n    \u003Cstrong>Session:\u003C\u002Fstrong> Wed 19 Jun 8 p.m. EDT — 9:30 p.m. EDT #20\n\u003C\u002Fp>\n\u003Cbr\u002F>\n\u003Cbr\u002F>\n\n\n\u003Cp align=\"left\">\n    \u003Ca href=\"https:\u002F\u002Farxiv.org\u002Fabs\u002F2405.12979\" title=\"OmniGlue: Generalizable Feature Matching with Foundation Model Guidance\">\n        \u003Cstrong>OmniGlue: Generalizable Feature Matching with Foundation Model Guidance\u003C\u002Fstrong>\n    \u003C\u002Fa>\n    \u003Cbr\u002F>\n    Hanwen Jiang, Arjun Karpur, Bingyi Cao, Qixing Huang, Andre Araujo\n    \u003Cbr\u002F>\n    [\u003Ca href=\"https:\u002F\u002Farxiv.org\u002Fabs\u002F2405.12979\">paper\u003C\u002Fa>] [\u003Ca href=\"https:\u002F\u002Fgithub.com\u002Fgoogle-research\u002Fomniglue\">code\u003C\u002Fa>]  [\u003Ca href=\"https:\u002F\u002Fhuggingface.co\u002Fspaces\u002Fqubvel-hf\u002Fomniglue\">demo\u003C\u002Fa>] \n    \u003Cbr\u002F>\n    \u003Cstrong>Topic:\u003C\u002Fstrong> 3D from multi-view and sensors\n    \u003Cbr\u002F>\n    \u003Cstrong>Session:\u003C\u002Fstrong> Fri 21 Jun 1:30 p.m. EDT — 3 p.m. EDT #32\n\u003C\u002Fp>\n\u003Cbr\u002F>\n\n### deep learning architectures and techniques\n\n\u003Cp align=\"left\">\n    \u003Ca href=\"https:\u002F\u002Fcvpr.thecvf.com\u002Fmedia\u002FPosterPDFs\u002FCVPR%202024\u002F30529.png?t=1717455193.7819567\" title=\"Florence-2: Advancing a Unified Representation for a Variety of Vision Tasks\">\n        \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FSkalskiP_top-cvpr-2024-papers_readme_976b100760d8.png\" alt=\"Florence-2: Advancing a Unified Representation for a Variety of Vision Tasks\" width=\"400px\" align=\"left\" \u002F>\n    \u003C\u002Fa>\n    \u003Ca href=\"https:\u002F\u002Farxiv.org\u002Fpdf\u002F2311.06242\" title=\"Florence-2: Advancing a Unified Representation for a Variety of Vision Tasks\">\n        \u003Cstrong>🔥 Florence-2: Advancing a Unified Representation for a Variety of Vision Tasks\u003C\u002Fstrong>\n    \u003C\u002Fa>\n    \u003Cbr\u002F>\n    Bin Xiao, Haiping Wu, Weijian Xu, Xiyang Dai, Houdong Hu, Yumao Lu, Michael Zeng, Ce Liu, Lu Yuan\n    \u003Cbr\u002F>\n    [\u003Ca href=\"https:\u002F\u002Farxiv.org\u002Fpdf\u002F2311.06242\">paper\u003C\u002Fa>]  [\u003Ca href=\"https:\u002F\u002Fyoutu.be\u002FcOlyA00K1ec\">video\u003C\u002Fa>] [\u003Ca href=\"https:\u002F\u002Fhuggingface.co\u002Fspaces\u002Fgokaygokay\u002FFlorence-2\">demo\u003C\u002Fa>] [\u003Ca href=\"https:\u002F\u002Fyoutu.be\u002FcOlyA00K1ec\">colab\u003C\u002Fa>]\n    \u003Cbr\u002F>\n    \u003Cstrong>Topic:\u003C\u002Fstrong> Deep learning architectures and techniques\n    \u003Cbr\u002F>\n    \u003Cstrong>Session:\u003C\u002Fstrong> Wed 19 Jun 8 p.m. EDT — 9:30 p.m. EDT #102\n\u003C\u002Fp>\n\u003Cbr\u002F>\n\u003Cbr\u002F>\n\n### document analysis and understanding\n\n\u003Cp align=\"left\">\n    \u003Ca href=\"https:\u002F\u002Farxiv.org\u002Fabs\u002F2405.04408\" title=\"DocRes: A Generalist Model Toward Unifying Document Image Restoration Tasks\">\n        \u003Cstrong>DocRes: A Generalist Model Toward Unifying Document Image Restoration Tasks\u003C\u002Fstrong>\n    \u003C\u002Fa>\n    \u003Cbr\u002F>\n    Jiaxin Zhang, Dezhi Peng, Chongyu Liu, Peirong Zhang, Lianwen Jin\n    \u003Cbr\u002F>\n    [\u003Ca 
href=\"https:\u002F\u002Farxiv.org\u002Fabs\u002F2405.04408\">论文\u003C\u002Fa>] [\u003Ca href=\"https:\u002F\u002Fgithub.com\u002FZZZHANG-jx\u002FDocRes\">代码\u003C\u002Fa>]  [\u003Ca href=\"https:\u002F\u002Fhuggingface.co\u002Fspaces\u002Fqubvel-hf\u002Fdocuments-restoration\">演示\u003C\u002Fa>] \n    \u003Cbr\u002F>\n    \u003Cstrong>主题:\u003C\u002Fstrong> 文档分析与理解\n    \u003Cbr\u002F>\n    \u003Cstrong>场次:\u003C\u002Fstrong> 周四 6 月 20 日 晚上 8:00 EDT — 晚上 9:30 EDT #101\n\u003C\u002Fp>\n\u003Cbr\u002F>\n\n### 高效且可扩展的视觉\n\n\u003Cp align=\"left\">\n    \u003Ca href=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FSkalskiP_top-cvpr-2024-papers_readme_ed9555f769f3.png\" title=\"EfficientSAM：利用掩码图像预训练（Masked Image Pretraining）实现高效的 Segment Anything（分割一切）\">\n        \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FSkalskiP_top-cvpr-2024-papers_readme_ed9555f769f3.png\" alt=\"EfficientSAM：利用掩码图像预训练实现高效的分割一切\" width=\"400px\" align=\"left\" \u002F>\n    \u003C\u002Fa>\n    \u003Ca href=\"https:\u002F\u002Farxiv.org\u002Fabs\u002F2312.00863\" title=\"EfficientSAM：利用掩码图像预训练实现高效的 Segment Anything（分割一切）\">\n        \u003Cstrong>🔥 EfficientSAM：利用掩码图像预训练（Masked Image Pretraining）实现高效的 Segment Anything（分割一切）\u003C\u002Fstrong>\n    \u003C\u002Fa>\n    \u003Cbr\u002F>\n    Yunyang Xiong, Bala Varadarajan, Lemeng Wu, Xiaoyu Xiang, Fanyi Xiao, Chenchen Zhu, Xiaoliang Dai, Dilin Wang, Fei Sun, Forrest Iandola, Raghuraman Krishnamoorthi, Vikas Chandra\n    \u003Cbr\u002F>\n    [\u003Ca href=\"https:\u002F\u002Farxiv.org\u002Fabs\u002F2312.00863\">论文\u003C\u002Fa>] [\u003Ca href=\"https:\u002F\u002Fgithub.com\u002Fyformer\u002FEfficientSAM\">代码\u003C\u002Fa>]  [\u003Ca href=\"https:\u002F\u002Fhuggingface.co\u002Fspaces\u002FSkalskiP\u002FEfficientSAM\">演示\u003C\u002Fa>] \n    \u003Cbr\u002F>\n    \u003Cstrong>主题：\u003C\u002Fstrong> 高效且可扩展的视觉\n    \u003Cbr\u002F>\n    \u003Cstrong>会议场次：\u003C\u002Fstrong> 周四 6 月 20 日 晚上 8 点 EDT（东部夏令时）— 晚上 9:30 点 EDT #144\n\u003C\u002Fp>\n\u003Cbr\u002F>\n\u003Cbr\u002F>\n\n\n\u003Cp align=\"left\">\n    \u003Ca href=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FSkalskiP_top-cvpr-2024-papers_readme_4c7521586f40.png\" title=\"MobileCLIP：通过多模态强化训练实现快速图文模型\">\n        \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FSkalskiP_top-cvpr-2024-papers_readme_4c7521586f40.png\" alt=\"MobileCLIP：通过多模态强化训练实现快速图文模型\" width=\"400px\" align=\"left\" \u002F>\n    \u003C\u002Fa>\n    \u003Ca href=\"https:\u002F\u002Farxiv.org\u002Fabs\u002F2311.17049\" title=\"MobileCLIP：通过多模态强化训练实现快速图文模型\">\n        \u003Cstrong>MobileCLIP：通过多模态强化训练实现快速图文模型\u003C\u002Fstrong>\n    \u003C\u002Fa>\n    \u003Cbr\u002F>\n    Pavan Kumar Anasosalu Vasu, Hadi Pouransari, Fartash Faghri, Raviteja Vemulapalli, Oncel Tuzel\n    \u003Cbr\u002F>\n    [\u003Ca href=\"https:\u002F\u002Farxiv.org\u002Fabs\u002F2311.17049\">论文\u003C\u002Fa>] [\u003Ca href=\"https:\u002F\u002Fgithub.com\u002Fapple\u002Fml-mobileclip\">代码\u003C\u002Fa>]  [\u003Ca href=\"https:\u002F\u002Fhuggingface.co\u002Fspaces\u002FXenova\u002Fwebgpu-mobileclip\">演示\u003C\u002Fa>] \n    \u003Cbr\u002F>\n    \u003Cstrong>主题：\u003C\u002Fstrong> 高效且可扩展的视觉\n    \u003Cbr\u002F>\n    \u003Cstrong>会议场次：\u003C\u002Fstrong> 周四 6 月 20 日 晚上 8 点 EDT — 晚上 9:30 点 EDT #130\n\u003C\u002Fp>\n\u003Cbr\u002F>\n\u003Cbr\u002F>\n\n### 可解释计算机视觉\n\n\u003Cp align=\"left\">\n    \u003Ca href=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FSkalskiP_top-cvpr-2024-papers_readme_b1e0c7f05b02.png\" 
title=\"使用自然语言描述图像集之间的差异\">\n        \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FSkalskiP_top-cvpr-2024-papers_readme_b1e0c7f05b02.png\" alt=\"使用自然语言描述图像集之间的差异\" width=\"400px\" align=\"left\" \u002F>\n    \u003C\u002Fa>\n    \u003Ca href=\"https:\u002F\u002Farxiv.org\u002Fabs\u002F2312.02974\" title=\"使用自然语言描述图像集之间的差异\">\n        \u003Cstrong>🔥 使用自然语言描述图像集之间的差异\u003C\u002Fstrong>\n    \u003C\u002Fa>\n    \u003Cbr\u002F>\n    Lisa Dunlap, Yuhui Zhang, Xiaohan Wang, Ruiqi Zhong, Trevor Darrell, Jacob Steinhardt, Joseph E. Gonzalez, Serena Yeung-Levy\n    \u003Cbr\u002F>\n    [\u003Ca href=\"https:\u002F\u002Farxiv.org\u002Fabs\u002F2312.02974\">论文\u003C\u002Fa>] [\u003Ca href=\"https:\u002F\u002Fgithub.com\u002FUnderstanding-Visual-Datasets\u002FVisDiff\">代码\u003C\u002Fa>]   \n    \u003Cbr\u002F>\n    \u003Cstrong>主题：\u003C\u002Fstrong> 可解释计算机视觉\n    \u003Cbr\u002F>\n    \u003Cstrong>会议场次：\u003C\u002Fstrong> 周五 6 月 21 日 晚上 8 点 EDT — 晚上 9:30 点 EDT #115\n\u003C\u002Fp>\n\u003Cbr\u002F>\n\u003Cbr\u002F>\n\n### 图像和视频合成与生成\n\n\u003Cp align=\"left\">\n    \u003Ca href=\"https:\u002F\u002Farxiv.org\u002Fabs\u002F2311.16973\" title=\"DemoFusion：零成本普及高分辨率图像生成\">\n        \u003Cstrong>DemoFusion：零成本普及高分辨率图像生成\u003C\u002Fstrong>\n    \u003C\u002Fa>\n    \u003Cbr\u002F>\n    Ruoyi Du, Dongliang Chang, Timothy Hospedales, Yi-Zhe Song, Zhanyu Ma\n    \u003Cbr\u002F>\n    [\u003Ca href=\"https:\u002F\u002Farxiv.org\u002Fabs\u002F2311.16973\">论文\u003C\u002Fa>] [\u003Ca href=\"https:\u002F\u002Fgithub.com\u002FPRIS-CV\u002FDemoFusion\">代码\u003C\u002Fa>]  [\u003Ca href=\"https:\u002F\u002Fhuggingface.co\u002Fspaces\u002Fradames\u002FEnhance-This-DemoFusion-SDXL\">演示\u003C\u002Fa>] [\u003Ca href=\"https:\u002F\u002Fcolab.research.google.com\u002Fgithub\u002Fcamenduru\u002FDemoFusion-colab\u002Fblob\u002Fmain\u002FDemoFusion_colab.ipynb\">Colab（Google Colaboratory）\u003C\u002Fa>]\n    \u003Cbr\u002F>\n    \u003Cstrong>主题：\u003C\u002Fstrong> 图像和视频合成与生成\n    \u003Cbr\u002F>\n    \u003Cstrong>会议场次：\u003C\u002Fstrong> 周三 6 月 19 日 晚上 8 点 EDT — 晚上 9:30 点 EDT #132\n\u003C\u002Fp>\n\u003Cbr\u002F>\n\n\n\u003Cp align=\"left\">\n    \u003Ca href=\"https:\u002F\u002Fgithub.com\u002FSkalskiP\u002Ftop-cvpr-2024-papers\u002Fassets\u002F26109316\u002Fb0833f6b-6924-4f28-b409-ae85aaaa4dd6\" title=\"DragDiffusion：利用扩散模型进行交互式基于点的图像编辑\">\n        \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FSkalskiP_top-cvpr-2024-papers_readme_47c65757a68c.png\" alt=\"DragDiffusion：利用扩散模型进行交互式基于点的图像编辑\" width=\"400px\" align=\"left\" \u002F>\n    \u003C\u002Fa>\n    \u003Ca href=\"https:\u002F\u002Farxiv.org\u002Fabs\u002F2306.14435\" title=\"DragDiffusion：利用扩散模型进行交互式基于点的图像编辑\">\n        \u003Cstrong>🔥 DragDiffusion：利用 Diffusion Models（扩散模型）进行交互式基于点的图像编辑\u003C\u002Fstrong>\n    \u003C\u002Fa>\n    \u003Cbr\u002F>\n    Yujun Shi, Chuhui Xue, Jun Hao Liew, Jiachun Pan, Hanshu Yan, Wenqing Zhang, Vincent Y. F. 
Tan, Song Bai\n    \u003Cbr\u002F>\n    [\u003Ca href=\"https:\u002F\u002Farxiv.org\u002Fabs\u002F2306.14435\">论文\u003C\u002Fa>] [\u003Ca href=\"https:\u002F\u002Fgithub.com\u002FYujun-Shi\u002FDragDiffusion\">代码\u003C\u002Fa>] [\u003Ca href=\"https:\u002F\u002Fyoutu.be\u002FrysOFTpDBhc\">视频\u003C\u002Fa>]  \n    \u003Cbr\u002F>\n    \u003Cstrong>主题：\u003C\u002Fstrong> 图像和视频合成与生成\n    \u003Cbr\u002F>\n    \u003Cstrong>会议场次：\u003C\u002Fstrong> 周三 6 月 19 日 晚上 8 点 EDT — 晚上 9:30 点 EDT #392\n\u003C\u002Fp>\n\u003Cbr\u002F>\n\u003Cbr\u002F>\n\n\n\u003Cp align=\"left\">\n    \u003Ca href=\"https:\u002F\u002Fcvpr.thecvf.com\u002Fmedia\u002FPosterPDFs\u002FCVPR%202024\u002F30657.png?t=1717473392.6694562\" title=\"视觉字谜：利用扩散模型生成多视角光学错觉\">\n        \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FSkalskiP_top-cvpr-2024-papers_readme_53a1fc28590c.png\" alt=\"视觉字谜：利用扩散模型生成多视角光学错觉\" width=\"400px\" align=\"left\" \u002F>\n    \u003C\u002Fa>\n    \u003Ca href=\"https:\u002F\u002Farxiv.org\u002Fabs\u002F2311.17919\" title=\"视觉字谜：利用扩散模型生成多视角光学错觉\">\n        \u003Cstrong>🔥 视觉字谜：利用扩散模型生成多视角光学错觉\u003C\u002Fstrong>\n    \u003C\u002Fa>\n    \u003Cbr\u002F>\n    Daniel Geng, Inbum Park, Andrew Owens\n    \u003Cbr\u002F>\n    [\u003Ca href=\"https:\u002F\u002Farxiv.org\u002Fabs\u002F2311.17919\">论文\u003C\u002Fa>] [\u003Ca href=\"https:\u002F\u002Fgithub.com\u002Fdangeng\u002Fvisual_anagrams\">代码\u003C\u002Fa>]   [\u003Ca href=\"https:\u002F\u002Fcolab.research.google.com\u002Fgithub\u002Fdangeng\u002Fvisual_anagrams\u002Fblob\u002Fmain\u002Fnotebooks\u002Fcolab_demo_free_tier.ipynb\">Colab\u003C\u002Fa>]\n    \u003Cbr\u002F>\n    \u003Cstrong>主题：\u003C\u002Fstrong> 图像和视频合成与生成\n    \u003Cbr\u002F>\n    \u003Cstrong>会议场次：\u003C\u002Fstrong> 周五 6 月 21 日 晚上 8 点 EDT — 晚上 9:30 点 EDT #118\n\u003C\u002Fp>\n\u003Cbr\u002F>\n\u003Cbr\u002F>\n\n### 低级视觉 (low-level vision)\n\n\u003Cp align=\"left\">\n    \u003Ca href=\"https:\u002F\u002Fgithub.com\u002FSkalskiP\u002Ftop-cvpr-2024-papers\u002Fassets\u002F26109316\u002F8eb6b4f0-4ae6-4615-9921-f73fa2aa3766\" title=\"XFeat: Accelerated Features for Lightweight Image Matching\">\n        \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FSkalskiP_top-cvpr-2024-papers_readme_53bb6b1d229a.png\" alt=\"XFeat: Accelerated Features for Lightweight Image Matching\" width=\"400px\" align=\"left\" \u002F>\n    \u003C\u002Fa>\n    \u003Ca href=\"https:\u002F\u002Farxiv.org\u002Fabs\u002F2404.19174\" title=\"XFeat: Accelerated Features for Lightweight Image Matching\">\n        \u003Cstrong>XFeat: Accelerated Features for Lightweight Image Matching\u003C\u002Fstrong>\n    \u003C\u002Fa>\n    \u003Cbr\u002F>\n    Guilherme Potje, Felipe Cadar, Andre Araujo, Renato Martins, Erickson R. 
Nascimento\n    \u003Cbr\u002F>\n    [\u003Ca href=\"https:\u002F\u002Farxiv.org\u002Fabs\u002F2404.19174\">论文\u003C\u002Fa>] [\u003Ca href=\"https:\u002F\u002Fgithub.com\u002Fverlab\u002Faccelerated_features\">代码\u003C\u002Fa>] [\u003Ca href=\"https:\u002F\u002Fyoutu.be\u002FRamC70IkZuI\">视频\u003C\u002Fa>] [\u003Ca href=\"https:\u002F\u002Fhuggingface.co\u002Fspaces\u002Fqubvel-hf\u002Fxfeat\">演示\u003C\u002Fa>] [\u003Ca href=\"https:\u002F\u002Fcolab.research.google.com\u002Fgithub\u002Fverlab\u002Faccelerated_features\u002Fblob\u002Fmain\u002Fnotebooks\u002Fxfeat_matching.ipynb\">Colab\u003C\u002Fa>]\n    \u003Cbr\u002F>\n    \u003Cstrong>主题：\u003C\u002Fstrong> 低级视觉\n    \u003Cbr\u002F>\n    \u003Cstrong>场次：\u003C\u002Fstrong> 6 月 19 日周三 下午 1:30 EDT — 下午 3:00 EDT #245\n\u003C\u002Fp>\n\u003Cbr\u002F>\n\u003Cbr\u002F>\n\n\n\u003Cp align=\"left\">\n    \u003Ca href=\"https:\u002F\u002Fgithub.com\u002FSkalskiP\u002Ftop-cvpr-2024-papers\u002Fassets\u002F26109316\u002F038bef8f-a6df-440d-9ebc-b58f69beb338\" title=\"Robust Image Denoising through Adversarial Frequency Mixup\">\n        \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FSkalskiP_top-cvpr-2024-papers_readme_2f6b1525f185.png\" alt=\"Robust Image Denoising through Adversarial Frequency Mixup\" width=\"400px\" align=\"left\" \u002F>\n    \u003C\u002Fa>\n    \u003Ca href=\"https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FCVPR2024\u002Fhtml\u002FRyou_Robust_Image_Denoising_through_Adversarial_Frequency_Mixup_CVPR_2024_paper.html\" title=\"Robust Image Denoising through Adversarial Frequency Mixup\">\n        \u003Cstrong>Robust Image Denoising through Adversarial Frequency Mixup\u003C\u002Fstrong>\n    \u003C\u002Fa>\n    \u003Cbr\u002F>\n    Donghun Ryou, Inju Ha, Hyewon Yoo, Dongwan Kim, Bohyung Han\n    \u003Cbr\u002F>\n    [\u003Ca href=\"https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FCVPR2024\u002Fhtml\u002FRyou_Robust_Image_Denoising_through_Adversarial_Frequency_Mixup_CVPR_2024_paper.html\">论文\u003C\u002Fa>] [\u003Ca href=\"https:\u002F\u002Fgithub.com\u002Fdhryougit\u002FAFM\">代码\u003C\u002Fa>] [\u003Ca href=\"https:\u002F\u002Fyoutu.be\u002FzQ0pwFSk7uo\">视频\u003C\u002Fa>]  \n    \u003Cbr\u002F>\n    \u003Cstrong>主题：\u003C\u002Fstrong> 低级视觉\n    \u003Cbr\u002F>\n    \u003Cstrong>场次：\u003C\u002Fstrong> 6 月 19 日周三 下午 1:30 EDT — 下午 3:00 EDT #250\n\u003C\u002Fp>\n\u003Cbr\u002F>\n\u003Cbr\u002F>\n\n### 多模态学习 (multi-modal learning)\n\n\u003Cp align=\"left\">\n    \u003Ca href=\"https:\u002F\u002Farxiv.org\u002Fabs\u002F2310.03744\" title=\"Improved Baselines with Visual Instruction Tuning\">\n        \u003Cstrong>🔥 Improved Baselines with Visual Instruction Tuning\u003C\u002Fstrong>\n    \u003C\u002Fa>\n    \u003Cbr\u002F>\n    Haotian Liu, Chunyuan Li, Yuheng Li, Yong Jae Lee\n    \u003Cbr\u002F>\n    [\u003Ca href=\"https:\u002F\u002Farxiv.org\u002Fabs\u002F2310.03744\">论文\u003C\u002Fa>] [\u003Ca href=\"https:\u002F\u002Fgithub.com\u002FLLaVA-VL\u002FLLaVA-NeXT\">代码\u003C\u002Fa>]   \n    \u003Cbr\u002F>\n    \u003Cstrong>主题：\u003C\u002Fstrong> 多模态学习\n    \u003Cbr\u002F>\n    \u003Cstrong>场次：\u003C\u002Fstrong> 6 月 21 日周五 晚上 8:00 EDT — 晚上 9:30 EDT #209\n\u003C\u002Fp>\n\u003Cbr\u002F>\n\n### 识别：分类、检测、检索 (recognition: categorization, detection, retrieval)\n\n\u003Cp align=\"left\">\n    \u003Ca href=\"https:\u002F\u002Fcvpr.thecvf.com\u002Fmedia\u002FPosterPDFs\u002FCVPR%202024\u002F31301.png?t=1717420504.9897285\" title=\"DETRs Beat YOLOs on Real-time Object Detection\">\n        \u003Cimg 
src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FSkalskiP_top-cvpr-2024-papers_readme_a52db28cd29b.png\" alt=\"DETRs Beat YOLOs on Real-time Object Detection\" width=\"400px\" align=\"left\" \u002F>\n    \u003C\u002Fa>\n    \u003Ca href=\"https:\u002F\u002Farxiv.org\u002Fabs\u002F2304.08069\" title=\"DETRs Beat YOLOs on Real-time Object Detection\">\n        \u003Cstrong>DETRs Beat YOLOs on Real-time Object Detection\u003C\u002Fstrong>\n    \u003C\u002Fa>\n    \u003Cbr\u002F>\n    Yian Zhao, Wenyu Lv, Shangliang Xu, Jinman Wei, Guanzhong Wang, Qingqing Dang, Yi Liu, Jie Chen\n    \u003Cbr\u002F>\n    [\u003Ca href=\"https:\u002F\u002Farxiv.org\u002Fabs\u002F2304.08069\">论文\u003C\u002Fa>] [\u003Ca href=\"https:\u002F\u002Fgithub.com\u002Flyuwenyu\u002FRT-DETR\">代码\u003C\u002Fa>] [\u003Ca href=\"https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=UOc0qMSX4Ac\">视频\u003C\u002Fa>]  \n    \u003Cbr\u002F>\n    \u003Cstrong>主题：\u003C\u002Fstrong> 识别：分类、检测、检索\n    \u003Cbr\u002F>\n    \u003Cstrong>场次：\u003C\u002Fstrong> 6 月 20 日周四 晚上 8:00 EDT — 晚上 9:30 EDT #229\n\u003C\u002Fp>\n\u003Cbr\u002F>\n\u003Cbr\u002F>\n\n\n\u003Cp align=\"left\">\n    \u003Ca href=\"https:\u002F\u002Fgithub.com\u002FSkalskiP\u002Ftop-cvpr-2024-papers\u002Fassets\u002F26109316\u002Ff9023a28-aca5-4965-a194-984c62348dc0\" title=\"YOLO-World: Real-Time Open-Vocabulary Object Detection\">\n        \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FSkalskiP_top-cvpr-2024-papers_readme_051aa5ce7536.png\" alt=\"YOLO-World: Real-Time Open-Vocabulary Object Detection\" width=\"400px\" align=\"left\" \u002F>\n    \u003C\u002Fa>\n    \u003Ca href=\"https:\u002F\u002Farxiv.org\u002Fabs\u002F2401.17270\" title=\"YOLO-World: Real-Time Open-Vocabulary Object Detection\">\n        \u003Cstrong>YOLO-World: Real-Time Open-Vocabulary Object Detection\u003C\u002Fstrong>\n    \u003C\u002Fa>\n    \u003Cbr\u002F>\n    Tianheng Cheng, Lin Song, Yixiao Ge, Wenyu Liu, Xinggang Wang, Ying Shan\n    \u003Cbr\u002F>\n    [\u003Ca href=\"https:\u002F\u002Farxiv.org\u002Fabs\u002F2401.17270\">论文\u003C\u002Fa>] [\u003Ca href=\"https:\u002F\u002Fgithub.com\u002FAILab-CVC\u002FYOLO-World\">代码\u003C\u002Fa>] [\u003Ca href=\"https:\u002F\u002Fyoutu.be\u002FX7gKBGVz4vs\">视频\u003C\u002Fa>] [\u003Ca href=\"https:\u002F\u002Fhuggingface.co\u002Fspaces\u002FSkalskiP\u002FYOLO-World\">演示\u003C\u002Fa>] [\u003Ca href=\"https:\u002F\u002Fcolab.research.google.com\u002Fgithub\u002Froboflow-ai\u002Fnotebooks\u002Fblob\u002Fmain\u002Fnotebooks\u002Fzero-shot-object-detection-with-yolo-world.ipynb\">Colab\u003C\u002Fa>]\n    \u003Cbr\u002F>\n    \u003Cstrong>主题：\u003C\u002Fstrong> 识别：分类、检测、检索\n    \u003Cbr\u002F>\n    \u003Cstrong>场次：\u003C\u002Fstrong> 6 月 20 日周四 晚上 8:00 EDT — 晚上 9:30 EDT #223\n\u003C\u002Fp>\n\u003Cbr\u002F>\n\u003Cbr\u002F>\n\n\n\u003Cp align=\"left\">\n    \u003Ca href=\"https:\u002F\u002Fcvpr.thecvf.com\u002Fmedia\u002FPosterPDFs\u002FCVPR%202024\u002F31732.png?t=1717298372.5822952\" title=\"Object Recognition as Next Token Prediction\">\n        \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FSkalskiP_top-cvpr-2024-papers_readme_76800fa5a2c6.png\" alt=\"Object Recognition as Next Token Prediction\" width=\"400px\" align=\"left\" \u002F>\n    \u003C\u002Fa>\n    \u003Ca href=\"https:\u002F\u002Farxiv.org\u002Fabs\u002F2312.02142\" title=\"Object Recognition as Next Token Prediction\">\n        \u003Cstrong>🔥 Object Recognition as Next Token Prediction\u003C\u002Fstrong>\n    \u003C\u002Fa>\n    
\u003Cbr\u002F>\n    Kaiyu Yue, Bor-Chun Chen, Jonas Geiping, Hengduo Li, Tom Goldstein, Ser-Nam Lim\n    \u003Cbr\u002F>\n    [\u003Ca href=\"https:\u002F\u002Farxiv.org\u002Fabs\u002F2312.02142\">论文\u003C\u002Fa>] [\u003Ca href=\"https:\u002F\u002Fgithub.com\u002Fkaiyuyue\u002Fnxtp\">代码\u003C\u002Fa>] [\u003Ca href=\"https:\u002F\u002Fyoutu.be\u002FxeI8dZIpoco\">视频\u003C\u002Fa>]  [\u003Ca href=\"https:\u002F\u002Fcolab.research.google.com\u002Fdrive\u002F1pJX37LP5xGLDzD3H7ztTmpq1RrIBeWX3?usp=sharing\">Colab\u003C\u002Fa>]\n    \u003Cbr\u002F>\n    \u003Cstrong>主题：\u003C\u002Fstrong> 识别：分类、检测、检索\n    \u003Cbr\u002F>\n    \u003Cstrong>场次：\u003C\u002Fstrong> 6 月 20 日周四 晚上 8:00 EDT — 晚上 9:30 EDT #199\n\u003C\u002Fp>\n\u003Cbr\u002F>\n\u003Cbr\u002F>\n\n### 分割、分组与形状分析\n\n\u003Cp align=\"left\">\n    \u003Ca href=\"https:\u002F\u002Fgithub.com\u002FSkalskiP\u002Ftop-cvpr-2024-papers\u002Fassets\u002F26109316\u002F62d34981-73d6-49b2-8058-46ec99bac94d\" title=\"RobustSAM：在退化图像上鲁棒地分割任意对象\">\n        \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FSkalskiP_top-cvpr-2024-papers_readme_db4ba6747844.png\" alt=\"RobustSAM：在退化图像上鲁棒地分割任意对象\" width=\"400px\" align=\"left\" \u002F>\n    \u003C\u002Fa>\n    \u003Ca href=\"https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FCVPR2024\u002Fhtml\u002FChen_RobustSAM_Segment_Anything_Robustly_on_Degraded_Images_CVPR_2024_paper.html\" title=\"RobustSAM：在退化图像上鲁棒地分割任意对象\">\n        \u003Cstrong>🔥 RobustSAM：在退化图像上鲁棒地分割任意对象\u003C\u002Fstrong>\n    \u003C\u002Fa>\n    \u003Cbr\u002F>\n    Wei-Ting Chen, Yu-Jiet Vong, Sy-Yen Kuo, Sizhou Ma, Jian Wang\n    \u003Cbr\u002F>\n    [\u003Ca href=\"https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FCVPR2024\u002Fhtml\u002FChen_RobustSAM_Segment_Anything_Robustly_on_Degraded_Images_CVPR_2024_paper.html\">论文\u003C\u002Fa>]  [\u003Ca href=\"https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=Awukqkbs6zM\">视频\u003C\u002Fa>]  \n    \u003Cbr\u002F>\n    \u003Cstrong>主题：\u003C\u002Fstrong> 分割、分组与形状分析\n    \u003Cbr\u002F>\n    \u003Cstrong>场次：\u003C\u002Fstrong> Wed 19 Jun 1:30 p.m. EDT — 3 p.m. 
EDT #378\n\u003C\u002Fp>\n\u003Cbr\u002F>\n\u003Cbr\u002F>\n\n\n\u003Cp align=\"left\">\n    \u003Ca href=\"https:\u002F\u002Fcvpr.thecvf.com\u002Fmedia\u002FPosterPDFs\u002FCVPR%202024\u002F30253.png?t=1716781257.513028\" title=\"Frozen CLIP：用于弱监督语义分割的强大骨干网络\">\n        \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FSkalskiP_top-cvpr-2024-papers_readme_559c5b76efa5.png\" alt=\"Frozen CLIP：用于弱监督语义分割的强大骨干网络\" width=\"400px\" align=\"left\" \u002F>\n    \u003C\u002Fa>\n    \u003Ca href=\"https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FCVPR2024\u002Fhtml\u002FZhang_Frozen_CLIP_A_Strong_Backbone_for_Weakly_Supervised_Semantic_Segmentation_CVPR_2024_paper.html\" title=\"Frozen CLIP：用于弱监督语义分割的强大骨干网络\">\n        \u003Cstrong>🔥 Frozen CLIP：用于弱监督语义分割的强大骨干网络\u003C\u002Fstrong>\n    \u003C\u002Fa>\n    \u003Cbr\u002F>\n    Bingfeng Zhang, Siyue Yu, Yunchao Wei, Yao Zhao, Jimin Xiao\n    \u003Cbr\u002F>\n    [\u003Ca href=\"https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FCVPR2024\u002Fhtml\u002FZhang_Frozen_CLIP_A_Strong_Backbone_for_Weakly_Supervised_Semantic_Segmentation_CVPR_2024_paper.html\">论文\u003C\u002Fa>] [\u003Ca href=\"https:\u002F\u002Fgithub.com\u002Fzbf1991\u002FWeCLIP\">代码\u003C\u002Fa>] [\u003Ca href=\"https:\u002F\u002Fyoutu.be\u002FLh489nTm_M0\">视频\u003C\u002Fa>]  \n    \u003Cbr\u002F>\n    \u003Cstrong>主题：\u003C\u002Fstrong> 分割、分组与形状分析\n    \u003Cbr\u002F>\n    \u003Cstrong>场次：\u003C\u002Fstrong> Wed 19 Jun 1:30 p.m. EDT — 3 p.m. EDT #351\n\u003C\u002Fp>\n\u003Cbr\u002F>\n\u003Cbr\u002F>\n\n\n\u003Cp align=\"left\">\n    \u003Ca href=\"https:\u002F\u002Fgithub.com\u002FSkalskiP\u002Ftop-cvpr-2024-papers\u002Fassets\u002F26109316\u002F2f2bf794-3981-48c8-992d-04dd32ee9ced\" title=\"面向点提示实例分割的语义感知 SAM\">\n        \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FSkalskiP_top-cvpr-2024-papers_readme_5b593b38a8b4.png\" alt=\"面向点提示实例分割的语义感知 SAM\" width=\"400px\" align=\"left\" \u002F>\n    \u003C\u002Fa>\n    \u003Ca href=\"https:\u002F\u002Farxiv.org\u002Fabs\u002F2312.15895\" title=\"面向点提示实例分割的语义感知 SAM\">\n        \u003Cstrong>🔥 面向点提示实例分割的语义感知 SAM\u003C\u002Fstrong>\n    \u003C\u002Fa>\n    \u003Cbr\u002F>\n    Zhaoyang Wei, Pengfei Chen, Xuehui Yu, Guorong Li, Jianbin Jiao, Zhenjun Han\n    \u003Cbr\u002F>\n    [\u003Ca href=\"https:\u002F\u002Farxiv.org\u002Fabs\u002F2312.15895\">论文\u003C\u002Fa>] [\u003Ca href=\"https:\u002F\u002Fgithub.com\u002Fzhaoyangwei123\u002FSAPNet\">代码\u003C\u002Fa>] [\u003Ca href=\"https:\u002F\u002Fyoutu.be\u002F42-tJFmT7Ao\">视频\u003C\u002Fa>]  \n    \u003Cbr\u002F>\n    \u003Cstrong>主题：\u003C\u002Fstrong> 分割、分组与形状分析\n    \u003Cbr\u002F>\n    \u003Cstrong>场次：\u003C\u002Fstrong> Wed 19 Jun 1:30 p.m. EDT — 3 p.m. EDT #331\n\u003C\u002Fp>\n\u003Cbr\u002F>\n\u003Cbr\u002F>\n\n\n\u003Cp align=\"left\">\n    \u003Ca href=\"https:\u002F\u002Farxiv.org\u002Fabs\u002F2403.15789\" title=\"上下文抠图\">\n        \u003Cstrong>🔥 上下文抠图\u003C\u002Fstrong>\n    \u003C\u002Fa>\n    \u003Cbr\u002F>\n    He Guo, Zixuan Ye, Zhiguo Cao, Hao Lu\n    \u003Cbr\u002F>\n    [\u003Ca href=\"https:\u002F\u002Farxiv.org\u002Fabs\u002F2403.15789\">论文\u003C\u002Fa>] [\u003Ca href=\"https:\u002F\u002Fgithub.com\u002Ftiny-smart\u002Fin-context-matting\">代码\u003C\u002Fa>]   \n    \u003Cbr\u002F>\n    \u003Cstrong>主题：\u003C\u002Fstrong> 分割、分组与形状分析\n    \u003Cbr\u002F>\n    \u003Cstrong>场次：\u003C\u002Fstrong> Wed 19 Jun 1:30 p.m. EDT — 3 p.m. 
EDT #343\n\u003C\u002Fp>\n\u003Cbr\u002F>\n\n\n\u003Cp align=\"left\">\n    \u003Ca href=\"https:\u002F\u002Fgithub.com\u002FSkalskiP\u002Ftop-cvpr-2024-papers\u002Fassets\u002F26109316\u002Fbfe79038-706d-491b-ac99-083f421dc5ec\" title=\"大规模通用图像和视频对象基础模型\">\n        \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FSkalskiP_top-cvpr-2024-papers_readme_a55423b8b181.png\" alt=\"大规模通用图像和视频对象基础模型\" width=\"400px\" align=\"left\" \u002F>\n    \u003C\u002Fa>\n    \u003Ca href=\"https:\u002F\u002Farxiv.org\u002Fabs\u002F2312.09158\" title=\"大规模通用图像和视频对象基础模型\">\n        \u003Cstrong>🔥 大规模通用图像和视频对象基础模型\u003C\u002Fstrong>\n    \u003C\u002Fa>\n    \u003Cbr\u002F>\n    Junfeng Wu, Yi Jiang, Qihao Liu, Zehuan Yuan, Xiang Bai, Song Bai\n    \u003Cbr\u002F>\n    [\u003Ca href=\"https:\u002F\u002Farxiv.org\u002Fabs\u002F2312.09158\">论文\u003C\u002Fa>] [\u003Ca href=\"https:\u002F\u002Fgithub.com\u002FFoundationVision\u002FGLEE\">代码\u003C\u002Fa>] [\u003Ca href=\"https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=PSVhfTPx0GQ\">视频\u003C\u002Fa>]  \n    \u003Cbr\u002F>\n    \u003Cstrong>主题：\u003C\u002Fstrong> 分割、分组与形状分析\n    \u003Cbr\u002F>\n    \u003Cstrong>场次：\u003C\u002Fstrong> Wed 19 Jun 1:30 p.m. EDT — 3 p.m. EDT #350\n\u003C\u002Fp>\n\u003Cbr\u002F>\n\u003Cbr\u002F>\n\n### 自监督或无监督表示学习\n\n\u003Cp align=\"left\">\n    \u003Ca href=\"https:\u002F\u002Fcvpr.thecvf.com\u002Fmedia\u002FPosterPDFs\u002FCVPR%202024\u002F30014.png?t=1717339970.9614518\" title=\"InternVL：扩展视觉基础模型并适配通用视觉 - 语言任务\">\n        \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FSkalskiP_top-cvpr-2024-papers_readme_d27ad5e09966.png\" alt=\"InternVL：扩展视觉基础模型并适配通用视觉 - 语言任务\" width=\"400px\" align=\"left\" \u002F>\n    \u003C\u002Fa>\n    \u003Ca href=\"https:\u002F\u002Farxiv.org\u002Fabs\u002F2312.14238\" title=\"InternVL：扩展视觉基础模型并适配通用视觉 - 语言任务\">\n        \u003Cstrong>🔥 InternVL：扩展视觉基础模型并适配通用视觉 - 语言任务\u003C\u002Fstrong>\n    \u003C\u002Fa>\n    \u003Cbr\u002F>\n    Zhe Chen, Jiannan Wu, Wenhai Wang, Weijie Su, Guo Chen, Sen Xing, Muyan Zhong, Qinglong Zhang, Xizhou Zhu, Lewei Lu, Bin Li, Ping Luo, Tong Lu, Yu Qiao, Jifeng Dai\n    \u003Cbr\u002F>\n    [\u003Ca href=\"https:\u002F\u002Farxiv.org\u002Fabs\u002F2312.14238\">论文\u003C\u002Fa>] [\u003Ca href=\"https:\u002F\u002Fgithub.com\u002FOpenGVLab\u002FInternVL\">代码\u003C\u002Fa>]  [\u003Ca href=\"https:\u002F\u002Fhuggingface.co\u002Fspaces\u002FOpenGVLab\u002FInternVL\">演示\u003C\u002Fa>] \n    \u003Cbr\u002F>\n    \u003Cstrong>主题：\u003C\u002Fstrong> 自监督或无监督表示学习\n    \u003Cbr\u002F>\n    \u003Cstrong>场次：\u003C\u002Fstrong> Fri 21 Jun 8 p.m. EDT — 9:30 p.m. 
EDT #412\n\u003C\u002Fp>\n\u003Cbr\u002F>\n\u003Cbr\u002F>\n\n### 视频：底层分析、运动与跟踪\n\n\u003Cp align=\"left\">\n    \u003Ca href=\"https:\u002F\u002Fcvpr.thecvf.com\u002Fmedia\u002FPosterPDFs\u002FCVPR%202024\u002F29590.png?t=1717456006.3308516\" title=\"通过分割任意对象实现任意匹配\">\n        \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FSkalskiP_top-cvpr-2024-papers_readme_dcf8dd95caaa.png\" alt=\"通过分割任意对象实现任意匹配\" width=\"400px\" align=\"left\" \u002F>\n    \u003C\u002Fa>\n    \u003Ca href=\"https:\u002F\u002Farxiv.org\u002Fabs\u002F2406.04221\" title=\"通过分割任意对象实现任意匹配\">\n        \u003Cstrong>🔥 通过分割任意对象实现任意匹配\u003C\u002Fstrong>\n    \u003C\u002Fa>\n    \u003Cbr\u002F>\n    Siyuan Li, Lei Ke, Martin Danelljan, Luigi Piccinelli, Mattia Segu, Luc Van Gool, Fisher Yu\n    \u003Cbr\u002F>\n    [\u003Ca href=\"https:\u002F\u002Farxiv.org\u002Fabs\u002F2406.04221\">论文\u003C\u002Fa>] [\u003Ca href=\"https:\u002F\u002Fgithub.com\u002Fsiyuanliii\u002Fmasa\">代码\u003C\u002Fa>] [\u003Ca href=\"https:\u002F\u002Fyoutu.be\u002FKDQVujKAWFQ\">视频\u003C\u002Fa>]  \n    \u003Cbr\u002F>\n    \u003Cstrong>主题：\u003C\u002Fstrong> 视频：底层分析、运动与跟踪\n    \u003Cbr\u002F>\n    \u003Cstrong>会议场次：\u003C\u002Fstrong> 6 月 20 日周四 晚上 8:00 EDT（东部夏令时）— 9:30 PM EDT #421\n\u003C\u002Fp>\n\u003Cbr\u002F>\n\u003Cbr\u002F>\n\n\n\u003Cp align=\"left\">\n    \u003Ca href=\"https:\u002F\u002Fgithub.com\u002FSkalskiP\u002Ftop-cvpr-2024-papers\u002Fassets\u002F26109316\u002F9711186c-b05b-472d-b095-d98dbe386171\" title=\"DiffMOT：一种基于扩散模型的实时非线性预测多目标跟踪器\">\n        \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FSkalskiP_top-cvpr-2024-papers_readme_48cf2cd7ac8a.png\" alt=\"DiffMOT：一种基于扩散模型的实时非线性预测多目标跟踪器\" width=\"400px\" align=\"left\" \u002F>\n    \u003C\u002Fa>\n    \u003Ca href=\"https:\u002F\u002Farxiv.org\u002Fabs\u002F2403.02075\" title=\"DiffMOT：一种基于扩散模型的实时非线性预测多目标跟踪器\">\n        \u003Cstrong>DiffMOT：一种基于扩散模型的实时非线性预测多目标跟踪器\u003C\u002Fstrong>\n    \u003C\u002Fa>\n    \u003Cbr\u002F>\n    Weiyi Lv, Yuhang Huang, Ning Zhang, Ruei-Sung Lin, Mei Han, Dan Zeng\n    \u003Cbr\u002F>\n    [\u003Ca href=\"https:\u002F\u002Farxiv.org\u002Fabs\u002F2403.02075\">论文\u003C\u002Fa>] [\u003Ca href=\"https:\u002F\u002Fgithub.com\u002FKroery\u002FDiffMOT\">代码\u003C\u002Fa>]   \n    \u003Cbr\u002F>\n    \u003Cstrong>主题：\u003C\u002Fstrong> 视频：底层分析、运动与跟踪\n    \u003Cbr\u002F>\n    \u003Cstrong>会议场次：\u003C\u002Fstrong> 6 月 20 日周四 晚上 8:00 EDT（东部夏令时）— 9:30 PM EDT #455\n\u003C\u002Fp>\n\u003Cbr\u002F>\n\u003Cbr\u002F>\n\n### 视觉、语言与推理\n\n\u003Cp align=\"left\">\n    \u003Ca href=\"https:\u002F\u002Fcvpr.thecvf.com\u002Fmedia\u002FPosterPDFs\u002FCVPR%202024\u002F31492.png?t=1717327133.6073072\" title=\"Alpha-CLIP：一个关注任意位置的 CLIP 模型\">\n        \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FSkalskiP_top-cvpr-2024-papers_readme_5d8c960b3280.png\" alt=\"Alpha-CLIP：一个关注任意位置的 CLIP 模型\" width=\"400px\" align=\"left\" \u002F>\n    \u003C\u002Fa>\n    \u003Ca href=\"https:\u002F\u002Farxiv.org\u002Fabs\u002F2312.03818\" title=\"Alpha-CLIP：一个关注任意位置的 CLIP 模型\">\n        \u003Cstrong>Alpha-CLIP：一个关注任意位置的 CLIP 模型\u003C\u002Fstrong>\n    \u003C\u002Fa>\n    \u003Cbr\u002F>\n    Zeyi Sun, Ye Fang, Tong Wu, Pan Zhang, Yuhang Zang, Shu Kong, Yuanjun Xiong, Dahua Lin, Jiaqi Wang\n    \u003Cbr\u002F>\n    [\u003Ca href=\"https:\u002F\u002Farxiv.org\u002Fabs\u002F2312.03818\">论文\u003C\u002Fa>] [\u003Ca href=\"https:\u002F\u002Fgithub.com\u002FSunzeY\u002FAlphaCLIP\">代码\u003C\u002Fa>] [\u003Ca 
href=\"https:\u002F\u002Fyoutu.be\u002FQCEIKPZpZz0\">视频\u003C\u002Fa>] [\u003Ca href=\"https:\u002F\u002Fhuggingface.co\u002Fspaces\u002FZery\u002FAlpha-CLIP_LLaVA-1.5\">演示\u003C\u002Fa>] \n    \u003Cbr\u002F>\n    \u003Cstrong>主题：\u003C\u002Fstrong> 视觉、语言与推理\n    \u003Cbr\u002F>\n    \u003Cstrong>场次：\u003C\u002Fstrong> Thu 20 Jun 1:30 p.m. EDT — 3 p.m. EDT #327\n\u003C\u002Fp>\n\u003Cbr\u002F>\n\u003Cbr\u002F>\n\n\n\u003Cp align=\"left\">\n    \u003Ca href=\"https:\u002F\u002Farxiv.org\u002Fabs\u002F2401.06209\" title=\"🔥 睁眼瞎？探索多模态大语言模型的视觉短板\">\n        \u003Cstrong>🔥 睁眼瞎？探索多模态大语言模型的视觉短板\u003C\u002Fstrong>\n    \u003C\u002Fa>\n    \u003Cbr\u002F>\n    Shengbang Tong, Zhuang Liu, Yuexiang Zhai, Yi Ma, Yann LeCun, Saining Xie\n    \u003Cbr\u002F>\n    [\u003Ca href=\"https:\u002F\u002Farxiv.org\u002Fabs\u002F2401.06209\">论文\u003C\u002Fa>] [\u003Ca href=\"https:\u002F\u002Fgithub.com\u002Ftsb0601\u002FMMVP\">代码\u003C\u002Fa>]   \n    \u003Cbr\u002F>\n    \u003Cstrong>主题：\u003C\u002Fstrong> 视觉、语言与推理\n    \u003Cbr\u002F>\n    \u003Cstrong>场次：\u003C\u002Fstrong> Thu 20 Jun 1:30 p.m. EDT — 3 p.m. EDT #390\n\u003C\u002Fp>\n\u003Cbr\u002F>\n\n\n\u003Cp align=\"left\">\n    \u003Ca href=\"https:\u002F\u002Fcvpr.thecvf.com\u002Fmedia\u002FPosterPDFs\u002FCVPR%202024\u002F30109.png?t=1717509456.89997\" title=\"🔥 LISA：通过大语言模型进行推理分割\">\n        \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FSkalskiP_top-cvpr-2024-papers_readme_2beaf3ec057c.png\" alt=\"🔥 LISA：通过大语言模型进行推理分割\" width=\"400px\" align=\"left\" \u002F>\n    \u003C\u002Fa>\n    \u003Ca href=\"https:\u002F\u002Farxiv.org\u002Fabs\u002F2308.00692\" title=\"🔥 LISA：通过大语言模型进行推理分割\">\n        \u003Cstrong>🔥 LISA：通过大语言模型进行推理分割\u003C\u002Fstrong>\n    \u003C\u002Fa>\n    \u003Cbr\u002F>\n    Xin Lai, Zhuotao Tian, Yukang Chen, Yanwei Li, Yuhui Yuan, Shu Liu, Jiaya Jia\n    \u003Cbr\u002F>\n    [\u003Ca href=\"https:\u002F\u002Farxiv.org\u002Fabs\u002F2308.00692\">论文\u003C\u002Fa>] [\u003Ca href=\"https:\u002F\u002Fgithub.com\u002Fdvlab-research\u002FLISA\">代码\u003C\u002Fa>]  [\u003Ca href=\"http:\u002F\u002F103.170.5.190:7870\u002F\">演示\u003C\u002Fa>] \n    \u003Cbr\u002F>\n    \u003Cstrong>主题：\u003C\u002Fstrong> 视觉、语言与推理\n    \u003Cbr\u002F>\n    \u003Cstrong>场次：\u003C\u002Fstrong> Thu 20 Jun 1:30 p.m. EDT — 3 p.m. EDT #413\n\u003C\u002Fp>\n\u003Cbr\u002F>\n\u003Cbr\u002F>\n\n\n\u003Cp align=\"left\">\n    \u003Ca href=\"https:\u002F\u002Fgithub.com\u002FSkalskiP\u002Ftop-cvpr-2024-papers\u002Fassets\u002F26109316\u002F53e03a08-4dd9-451a-975e-e3654fa5bc71\" title=\"ViP-LLaVA：使大型多模态模型理解任意视觉提示\">\n        \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FSkalskiP_top-cvpr-2024-papers_readme_d0b23b88239a.png\" alt=\"ViP-LLaVA：使大型多模态模型理解任意视觉提示\" width=\"400px\" align=\"left\" \u002F>\n    \u003C\u002Fa>\n    \u003Ca href=\"https:\u002F\u002Farxiv.org\u002Fabs\u002F2312.00784\" title=\"ViP-LLaVA：使大型多模态模型理解任意视觉提示\">\n        \u003Cstrong>ViP-LLaVA：使大型多模态模型理解任意视觉提示\u003C\u002Fstrong>\n    \u003C\u002Fa>\n    \u003Cbr\u002F>\n    Mu Cai, Haotian Liu, Dennis Park, Siva Karthik Mustikovela, Gregory P. 
Meyer, Yuning Chai, Yong Jae Lee\n    \u003Cbr\u002F>\n    [\u003Ca href=\"https:\u002F\u002Farxiv.org\u002Fabs\u002F2312.00784\">论文\u003C\u002Fa>] [\u003Ca href=\"https:\u002F\u002Fgithub.com\u002FWisconsinAIVision\u002FViP-LLaVA\">代码\u003C\u002Fa>] [\u003Ca href=\"https:\u002F\u002Fyoutu.be\u002Fj_l1bRQouzc\">视频\u003C\u002Fa>] [\u003Ca href=\"https:\u002F\u002Fpages.cs.wisc.edu\u002F~mucai\u002Fvip-llava.html\">演示\u003C\u002Fa>] \n    \u003Cbr\u002F>\n    \u003Cstrong>主题：\u003C\u002Fstrong> 视觉、语言与推理\n    \u003Cbr\u002F>\n    \u003Cstrong>场次：\u003C\u002Fstrong> Thu 20 Jun 1:30 p.m. EDT — 3 p.m. EDT #317\n\u003C\u002Fp>\n\u003Cbr\u002F>\n\u003Cbr\u002F>\n\n\n\u003Cp align=\"left\">\n    \u003Ca href=\"https:\u002F\u002Fcvpr.thecvf.com\u002Fmedia\u002FPosterPDFs\u002FCVPR%202024\u002F31040.png?t=1718300473.5736258\" title=\"🔥 MMMU：面向专家级 AGI（通用人工智能）的超大规模多学科多模态理解与推理基准\">\n        \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FSkalskiP_top-cvpr-2024-papers_readme_00919d800269.png\" alt=\"🔥 MMMU：面向专家级 AGI（通用人工智能）的超大规模多学科多模态理解与推理基准\" width=\"400px\" align=\"left\" \u002F>\n    \u003C\u002Fa>\n    \u003Ca href=\"https:\u002F\u002Farxiv.org\u002Fabs\u002F2311.16502\" title=\"🔥 MMMU：面向专家级 AGI（通用人工智能）的超大规模多学科多模态理解与推理基准\">\n        \u003Cstrong>🔥 MMMU：面向专家级 AGI（通用人工智能）的超大规模多学科多模态理解与推理基准\u003C\u002Fstrong>\n    \u003C\u002Fa>\n    \u003Cbr\u002F>\n    Xiang Yue, Yuansheng Ni, Kai Zhang, Tianyu Zheng, Ruoqi Liu, Ge Zhang, Samuel Stevens, Dongfu Jiang, Weiming Ren, Yuxuan Sun, Cong Wei, Botao Yu, Ruibin Yuan, Renliang Sun, Ming Yin, Boyuan Zheng, Zhenzhu Yang, Yibo Liu, Wenhao Huang, Huan Sun, Yu Su, Wenhu Chen\n    \u003Cbr\u002F>\n    [\u003Ca href=\"https:\u002F\u002Farxiv.org\u002Fabs\u002F2311.16502\">论文\u003C\u002Fa>]    \n    \u003Cbr\u002F>\n    \u003Cstrong>主题：\u003C\u002Fstrong> 视觉、语言与推理\n    \u003Cbr\u002F>\n    \u003Cstrong>场次：\u003C\u002Fstrong> Thu 20 Jun 1:30 p.m. EDT — 3 p.m. EDT #382\n\u003C\u002Fp>\n\u003Cbr\u002F>\n\u003Cbr\u002F>\n\n\u003C!--- AUTOGENERATED_PAPERS_LIST -->\n\n## 🦸 贡献\n\n我们非常乐意得到您的帮助，以使本仓库变得更好！如果您知道这里有遗漏的精彩论文，或者有任何改进建议，请随时提交\n[问题](https:\u002F\u002Fgithub.com\u002FSkalskiP\u002Ftop-cvpr-2024-papers\u002Fissues)\n或提交\n[拉取请求](https:\u002F\u002Fgithub.com\u002FSkalskiP\u002Ftop-cvpr-2024-papers\u002Fpulls)。","# Top CVPR 2024 Papers 快速上手指南\n\n## 简介\n`top-cvpr-2024-papers` 是一个精选的计算机视觉顶会论文资源库，收录了 CVPR 2024 会议中备受关注的优秀论文。该仓库自动聚合了论文的标题、作者、代码链接、演示地址及分类信息，旨在帮助开发者快速检索和追踪前沿成果。\n\n> **注意**：本仓库为静态资源索引列表，并非可执行软件或算法库，无需编译运行。\n\n## 环境准备\n由于本工具主要用于浏览和检索资源，对系统环境要求极低：\n- **操作系统**：Windows \u002F macOS \u002F Linux 任意版本\n- **网络环境**：需能访问 GitHub 及学术资源网站（如 arXiv）\n- **必备工具**：\n  - 现代 Web 浏览器（Chrome, Edge, Firefox 等）\n  - Git 客户端（用于本地克隆，可选）\n\n## 安装步骤\n若希望离线查看或本地管理论文列表，可通过 Git 克隆仓库到本地。\n\n### 1. 克隆仓库\n标准方式直接从 GitHub 克隆；若网络受限，可尝试国内镜像加速下载：\n\n```bash\n# 标准方式\ngit clone https:\u002F\u002Fgithub.com\u002FSkalskiP\u002Ftop-cvpr-2024-papers.git\n\n# 或使用 Gitee 镜像（如果可用，该镜像并非官方维护）\ngit clone https:\u002F\u002Fgitee.com\u002Fskalskip\u002Ftop-cvpr-2024-papers.git\n```\n\n### 2. 进入目录\n```bash\ncd top-cvpr-2024-papers\n```\n\n## 基本使用\n\n### 在线浏览\n直接访问 GitHub 页面即可查看最新整理的论文列表：\n- [GitHub 主页](https:\u002F\u002Fgithub.com\u002FSkalskiP\u002Ftop-cvpr-2024-papers)\n\n### 本地浏览\n克隆后，在终端中调用系统默认程序打开 `README.md` 即可阅读完整列表：\n```bash\n# macOS\nopen README.md\n\n# Linux\nxdg-open README.md\n\n# Windows\nstart README.md\n```\n\n### 查找与访问资源\n1. **按主题筛选**：列表按主题（Topic）分类，如「多视图与传感器三维重建」「深度学习架构与技术」等。\n2. **识别高亮论文**：标题前带有 `🔥` 标记的为推荐重点论文。\n3. **获取资源**：点击条目下方的「论文」（paper）、「代码」（code）或「演示」（demo）链接，跳转至 arXiv、GitHub 或 HuggingFace 等平台。
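\n\n### 本地批量检索（可选）\n如果希望在本地按关键词快速过滤上述列表，下面给出一个最小的示意脚本（并非仓库自带工具；文件名 `filter_papers.py` 与解析规则均为假设，仅基于上文条目的可见结构：每篇论文位于一个 `\u003Cp align=\"left\">` 块内，标题包裹在 `\u003Cstrong>` 标签中，而「主题」「场次」等标签行以冒号结尾）：\n\n```python\n# filter_papers.py —— 示意脚本（非仓库自带）：在本地 README.md 中按关键词检索论文条目\nimport re\nimport sys\n\n# 每个论文条目是一个 \u003Cp align=\"left\"> ... \u003C\u002Fp> 块\nENTRY = re.compile(r'\u003Cp align=\"left\">(.*?)\u003C\u002Fp>', re.S)\n# 标题与标签行都包裹在 \u003Cstrong> 中；标签行以冒号结尾，需要过滤\nSTRONG = re.compile(r'\u003Cstrong>(.*?)\u003C\u002Fstrong>', re.S)\n# 链接栏中的纯 href 链接（论文 \u002F 代码 \u002F 演示等）\nLINK = re.compile(r'\u003Ca href=\"(https?:\u002F\u002F[^\"]+)\">')\n\ndef load_entries(path='README.md'):\n    with open(path, encoding='utf-8') as f:\n        text = f.read()\n    for block in ENTRY.findall(text):\n        # 归一化空白后，去掉以冒号结尾的标签行，剩下的第一项即标题\n        titles = [' '.join(t.split()) for t in STRONG.findall(block)]\n        titles = [t for t in titles if t and not t.endswith((':', '：'))]\n        if titles:\n            yield titles[0], LINK.findall(block)\n\nif __name__ == '__main__':\n    # 不带参数时默认只列出带 🔥 标记的重点论文\n    keyword = sys.argv[1] if len(sys.argv) > 1 else '🔥'\n    for title, links in load_entries():\n        if keyword.lower() in title.lower():\n            print(title)\n            for url in links:\n                print('    ' + url)\n```\n\n例如，在仓库根目录执行 `python filter_papers.py SAM`，即可列出标题含「SAM」的条目及其论文、代码等链接；脚本只做粗略的文本匹配，列表更新后无需改动即可重复使用。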
\n\n### 贡献更新\n如需添加新论文或修正信息，请参考仓库中的 `CONTRIBUTING.md` 文档进行提交。","某互联网公司的视觉算法工程师小张，负责开发新一代增强现实应用，急需了解 CVPR 2024 中关于多视图几何与 3D 生成的最新进展。\n\n### 没有 top-cvpr-2024-papers 时\n- 面对超过一万一千篇投稿，人工筛选高影响力论文如同大海捞针，耗费大量时间。\n- 即使找到目标论文，也常面临代码缺失或环境配置困难，导致复现尝试屡屡失败。\n- 难以快速区分理论创新与实际落地价值，容易陷入对低质量工作的无效阅读。\n- 缺乏直观的 Demo 视频参考，仅凭文字描述难以评估模型在真实场景中的表现。\n\n### 使用 top-cvpr-2024-papers 后\n- 直接获取精心挑选的精华论文列表，跳过海量无关信息，调研效率提升数倍。\n- 每个条目均提供 Paper、Code 及 Demo 的直接链接，确保资源可用且易于上手。\n- 按 3D 重建等主题清晰分类，能快速定位 SpatialTracker 等关键技术的实现细节。\n- 通过 Highlight 标记优先关注社区认可的核心成果，避免被边缘研究分散精力。\n\ntop-cvpr-2024-papers 将碎片化的学术资源整合成高效的知识地图，让研究人员能迅速站在巨人的肩膀上开始创新。","https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FSkalskiP_top-cvpr-2024-papers_abacff45.png","SkalskiP","Piotr Skalski","https:\u002F\u002Foss.gittoolsai.com\u002Favatars\u002FSkalskiP_4b8675f3.jpg","Open Source Lead @roboflow | Founder @ makesense.ai","@roboflow","127.0.0.1","piotr.skalski92@gmail.com","skalskip92","https:\u002F\u002Fhuggingface.co\u002FSkalskiP","https:\u002F\u002Fgithub.com\u002FSkalskiP",[86],{"name":87,"color":88,"percentage":89},"Python","#3572A5",100,736,57,"2026-03-27T10:18:02","CC0-1.0",1,"未说明",{"notes":97,"python":95,"dependencies":98},"该仓库为 CVPR 2024 优秀论文列表集合，主要提供论文摘要、代码及演示链接，本身并非可执行的软件工具。因此无特定的运行环境要求。如需运行列表中提到的具体模型（如 Florence-2, EfficientSAM 等），请分别访问各论文对应的官方代码库查阅详细的依赖和环境配置。",[95],[26,14],[101,102,103,104,105,106,107,108],"computer-vision","cvpr","cvpr2024","image-segmentation","object-detection","paper","transformers","vision-and-language",null,"2026-03-27T02:49:30.150509","2026-04-06T05:44:29.245994",[],[]]