[{"data":1,"prerenderedAt":-1},["ShallowReactive",2],{"similar-FoundationVision--GLEE":3,"tool-FoundationVision--GLEE":62},[4,18,26,35,44,53],{"id":5,"name":6,"github_repo":7,"description_zh":8,"stars":9,"difficulty_score":10,"last_commit_at":11,"category_tags":12,"status":17},4358,"openclaw","openclaw\u002Fopenclaw","OpenClaw 是一款专为个人打造的本地化 AI 助手，旨在让你在自己的设备上拥有完全可控的智能伙伴。它打破了传统 AI 助手局限于特定网页或应用的束缚，能够直接接入你日常使用的各类通讯渠道，包括微信、WhatsApp、Telegram、Discord、iMessage 等数十种平台。无论你在哪个聊天软件中发送消息，OpenClaw 都能即时响应，甚至支持在 macOS、iOS 和 Android 设备上进行语音交互，并提供实时的画布渲染功能供你操控。\n\n这款工具主要解决了用户对数据隐私、响应速度以及“始终在线”体验的需求。通过将 AI 部署在本地，用户无需依赖云端服务即可享受快速、私密的智能辅助，真正实现了“你的数据，你做主”。其独特的技术亮点在于强大的网关架构，将控制平面与核心助手分离，确保跨平台通信的流畅性与扩展性。\n\nOpenClaw 非常适合希望构建个性化工作流的技术爱好者、开发者，以及注重隐私保护且不愿被单一生态绑定的普通用户。只要具备基础的终端操作能力（支持 macOS、Linux 及 Windows WSL2），即可通过简单的命令行引导完成部署。如果你渴望拥有一个懂你",349277,3,"2026-04-06T06:32:30",[13,14,15,16],"Agent","开发框架","图像","数据工具","ready",{"id":19,"name":20,"github_repo":21,"description_zh":22,"stars":23,"difficulty_score":10,"last_commit_at":24,"category_tags":25,"status":17},3808,"stable-diffusion-webui","AUTOMATIC1111\u002Fstable-diffusion-webui","stable-diffusion-webui 是一个基于 Gradio 构建的网页版操作界面，旨在让用户能够轻松地在本地运行和使用强大的 Stable Diffusion 图像生成模型。它解决了原始模型依赖命令行、操作门槛高且功能分散的痛点，将复杂的 AI 绘图流程整合进一个直观易用的图形化平台。\n\n无论是希望快速上手的普通创作者、需要精细控制画面细节的设计师，还是想要深入探索模型潜力的开发者与研究人员，都能从中获益。其核心亮点在于极高的功能丰富度：不仅支持文生图、图生图、局部重绘（Inpainting）和外绘（Outpainting）等基础模式，还独创了注意力机制调整、提示词矩阵、负向提示词以及“高清修复”等高级功能。此外，它内置了 GFPGAN 和 CodeFormer 等人脸修复工具，支持多种神经网络放大算法，并允许用户通过插件系统无限扩展能力。即使是显存有限的设备，stable-diffusion-webui 也提供了相应的优化选项，让高质量的 AI 艺术创作变得触手可及。",162132,"2026-04-05T11:01:52",[14,15,13],{"id":27,"name":28,"github_repo":29,"description_zh":30,"stars":31,"difficulty_score":32,"last_commit_at":33,"category_tags":34,"status":17},2271,"ComfyUI","Comfy-Org\u002FComfyUI","ComfyUI 是一款功能强大且高度模块化的视觉 AI 引擎，专为设计和执行复杂的 Stable Diffusion 图像生成流程而打造。它摒弃了传统的代码编写模式，采用直观的节点式流程图界面，让用户通过连接不同的功能模块即可构建个性化的生成管线。\n\n这一设计巧妙解决了高级 AI 绘图工作流配置复杂、灵活性不足的痛点。用户无需具备编程背景，也能自由组合模型、调整参数并实时预览效果，轻松实现从基础文生图到多步骤高清修复等各类复杂任务。ComfyUI 拥有极佳的兼容性，不仅支持 Windows、macOS 和 Linux 全平台，还广泛适配 NVIDIA、AMD、Intel 及苹果 Silicon 等多种硬件架构，并率先支持 SDXL、Flux、SD3 等前沿模型。\n\n无论是希望深入探索算法潜力的研究人员和开发者，还是追求极致创作自由度的设计师与资深 AI 绘画爱好者，ComfyUI 都能提供强大的支持。其独特的模块化架构允许社区不断扩展新功能，使其成为当前最灵活、生态最丰富的开源扩散模型工具之一，帮助用户将创意高效转化为现实。",109154,2,"2026-04-18T11:18:24",[14,15,13],{"id":36,"name":37,"github_repo":38,"description_zh":39,"stars":40,"difficulty_score":32,"last_commit_at":41,"category_tags":42,"status":17},6121,"gemini-cli","google-gemini\u002Fgemini-cli","gemini-cli 是一款由谷歌推出的开源 AI 命令行工具，它将强大的 Gemini 大模型能力直接集成到用户的终端环境中。对于习惯在命令行工作的开发者而言，它提供了一条从输入提示词到获取模型响应的最短路径，无需切换窗口即可享受智能辅助。\n\n这款工具主要解决了开发过程中频繁上下文切换的痛点，让用户能在熟悉的终端界面内直接完成代码理解、生成、调试以及自动化运维任务。无论是查询大型代码库、根据草图生成应用，还是执行复杂的 Git 操作，gemini-cli 都能通过自然语言指令高效处理。\n\n它特别适合广大软件工程师、DevOps 人员及技术研究人员使用。其核心亮点包括支持高达 100 万 token 的超长上下文窗口，具备出色的逻辑推理能力；内置 Google 搜索、文件操作及 Shell 命令执行等实用工具；更独特的是，它支持 MCP（模型上下文协议），允许用户灵活扩展自定义集成，连接如图像生成等外部能力。此外，个人谷歌账号即可享受免费的额度支持，且项目基于 Apache 2.0 协议完全开源，是提升终端工作效率的理想助手。",100752,"2026-04-10T01:20:03",[43,13,15,14],"插件",{"id":45,"name":46,"github_repo":47,"description_zh":48,"stars":49,"difficulty_score":10,"last_commit_at":50,"category_tags":51,"status":17},4487,"LLMs-from-scratch","rasbt\u002FLLMs-from-scratch","LLMs-from-scratch 是一个基于 PyTorch 的开源教育项目，旨在引导用户从零开始一步步构建一个类似 ChatGPT 的大型语言模型（LLM）。它不仅是同名技术著作的官方代码库，更提供了一套完整的实践方案，涵盖模型开发、预训练及微调的全过程。\n\n该项目主要解决了大模型领域“黑盒化”的学习痛点。许多开发者虽能调用现成模型，却难以深入理解其内部架构与训练机制。通过亲手编写每一行核心代码，用户能够透彻掌握 Transformer 架构、注意力机制等关键原理，从而真正理解大模型是如何“思考”的。此外，项目还包含了加载大型预训练权重进行微调的代码，帮助用户将理论知识延伸至实际应用。\n\nLLMs-from-scratch 
特别适合希望深入底层原理的 AI 开发者、研究人员以及计算机专业的学生。对于不满足于仅使用 API，而是渴望探究模型构建细节的技术人员而言，这是极佳的学习资源。其独特的技术亮点在于“循序渐进”的教学设计：将复杂的系统工程拆解为清晰的步骤，配合详细的图表与示例，让构建一个虽小但功能完备的大模型变得触手可及。无论你是想夯实理论基础，还是为未来研发更大规模的模型做准备",90106,"2026-04-06T11:19:32",[52,15,13,14],"语言模型",{"id":54,"name":55,"github_repo":56,"description_zh":57,"stars":58,"difficulty_score":10,"last_commit_at":59,"category_tags":60,"status":17},4292,"Deep-Live-Cam","hacksider\u002FDeep-Live-Cam","Deep-Live-Cam 是一款专注于实时换脸与视频生成的开源工具，用户仅需一张静态照片，即可通过“一键操作”实现摄像头画面的即时变脸或制作深度伪造视频。它有效解决了传统换脸技术流程繁琐、对硬件配置要求极高以及难以实时预览的痛点，让高质量的数字内容创作变得触手可及。\n\n这款工具不仅适合开发者和技术研究人员探索算法边界，更因其极简的操作逻辑（仅需三步：选脸、选摄像头、启动），广泛适用于普通用户、内容创作者、设计师及直播主播。无论是为了动画角色定制、服装展示模特替换，还是制作趣味短视频和直播互动，Deep-Live-Cam 都能提供流畅的支持。\n\n其核心技术亮点在于强大的实时处理能力，支持口型遮罩（Mouth Mask）以保留使用者原始的嘴部动作，确保表情自然精准；同时具备“人脸映射”功能，可同时对画面中的多个主体应用不同面孔。此外，项目内置了严格的内容安全过滤机制，自动拦截涉及裸露、暴力等不当素材，并倡导用户在获得授权及明确标注的前提下合规使用，体现了技术发展与伦理责任的平衡。",88924,"2026-04-06T03:28:53",[14,15,13,61],"视频",{"id":63,"github_repo":64,"name":65,"description_en":66,"description_zh":67,"ai_summary_zh":67,"readme_en":68,"readme_zh":69,"quickstart_zh":70,"use_case_zh":71,"hero_image_url":72,"owner_login":73,"owner_name":73,"owner_avatar_url":74,"owner_bio":75,"owner_company":76,"owner_location":76,"owner_email":76,"owner_twitter":76,"owner_website":76,"owner_url":77,"languages":78,"stars":103,"forks":104,"last_commit_at":105,"license":106,"difficulty_score":10,"env_os":107,"env_gpu":108,"env_ram":107,"env_deps":109,"category_tags":112,"github_topics":113,"view_count":32,"oss_zip_url":76,"oss_zip_packed_at":76,"status":17,"created_at":129,"updated_at":130,"faqs":131,"releases":167},9682,"FoundationVision\u002FGLEE","GLEE","[CVPR2024 Highlight]GLEE: General Object Foundation Model for Images and Videos at Scale","GLEE 是一款面向图像与视频的大规模通用物体基础模型，曾荣获 CVPR 2024 亮点论文。它旨在解决传统视觉模型在应对复杂场景、长尾类别物体以及跨模态任务时泛化能力不足的难题。无论是静态图片中的精细实例分割，还是动态视频里的多目标跟踪与指代表达理解，GLEE 都能提供统一且高精度的解决方案。\n\n该模型的核心亮点在于其强大的“通用性”与“规模化”处理能力。通过在海量数据上进行预训练，GLEE 打破了以往模型需针对特定任务单独训练的局限，能够同时胜任开放世界实例分割、 referring 表达分割及长尾视频对象分割等十余项前沿任务，并在多个权威基准测试中刷新了最佳成绩。对于开发者与研究人员而言，GLEE 提供了坚实的基座，可大幅降低下游应用开发门槛；对于需要处理复杂视觉数据的专业设计师或算法工程师，它也是提升工作效率的得力助手。如果你正在探索计算机视觉的前沿应用，或寻求一个能同时搞定看图与识视频的强力模型，GLEE 值得重点关注。","\n# GLEE: General Object Foundation Model for Images and Videos at Scale\n\n> #### Junfeng Wu\\*, Yi Jiang\\*,  Qihao Liu, Zehuan Yuan, Xiang Bai\u003Csup>&dagger;\u003C\u002Fsup>,and Song Bai\u003Csup>&dagger;\u003C\u002Fsup>\n>\n> \\* Equal Contribution, \u003Csup>&dagger;\u003C\u002Fsup>Correspondence\n\n\\[[Project Page](https:\u002F\u002Fglee-vision.github.io\u002F)\\]  \\[[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2312.09158)\\]    \\[[HuggingFace Demo](https:\u002F\u002Fhuggingface.co\u002Fspaces\u002FJunfeng5\u002FGLEE_demo)\\]   \\[[Video Demo](https:\u002F\u002Fyoutu.be\u002FPSVhfTPx0GQ)\\]  
\n\n[![PWC](https:\u002F\u002Fimg.shields.io\u002Fendpoint.svg?url=https:\u002F\u002Fpaperswithcode.com\u002Fbadge\u002Fgeneral-object-foundation-model-for-images\u002Flong-tail-video-object-segmentation-on-burst-1)](https:\u002F\u002Fpaperswithcode.com\u002Fsota\u002Flong-tail-video-object-segmentation-on-burst-1?p=general-object-foundation-model-for-images)[![PWC](https:\u002F\u002Fimg.shields.io\u002Fendpoint.svg?url=https:\u002F\u002Fpaperswithcode.com\u002Fbadge\u002Fgeneral-object-foundation-model-for-images\u002Fvideo-instance-segmentation-on-ovis-1)](https:\u002F\u002Fpaperswithcode.com\u002Fsota\u002Fvideo-instance-segmentation-on-ovis-1?p=general-object-foundation-model-for-images)[![PWC](https:\u002F\u002Fimg.shields.io\u002Fendpoint.svg?url=https:\u002F\u002Fpaperswithcode.com\u002Fbadge\u002Fgeneral-object-foundation-model-for-images\u002Freferring-video-object-segmentation-on-refer)](https:\u002F\u002Fpaperswithcode.com\u002Fsota\u002Freferring-video-object-segmentation-on-refer?p=general-object-foundation-model-for-images)[![PWC](https:\u002F\u002Fimg.shields.io\u002Fendpoint.svg?url=https:\u002F\u002Fpaperswithcode.com\u002Fbadge\u002Fgeneral-object-foundation-model-for-images\u002Freferring-expression-segmentation-on-refer-1)](https:\u002F\u002Fpaperswithcode.com\u002Fsota\u002Freferring-expression-segmentation-on-refer-1?p=general-object-foundation-model-for-images)[![PWC](https:\u002F\u002Fimg.shields.io\u002Fendpoint.svg?url=https:\u002F\u002Fpaperswithcode.com\u002Fbadge\u002Fgeneral-object-foundation-model-for-images\u002Fmulti-object-tracking-on-tao)](https:\u002F\u002Fpaperswithcode.com\u002Fsota\u002Fmulti-object-tracking-on-tao?p=general-object-foundation-model-for-images)[![PWC](https:\u002F\u002Fimg.shields.io\u002Fendpoint.svg?url=https:\u002F\u002Fpaperswithcode.com\u002Fbadge\u002Fgeneral-object-foundation-model-for-images\u002Fopen-world-instance-segmentation-on-uvo)](https:\u002F\u002Fpaperswithcode.com\u002Fsota\u002Fopen-world-instance-segmentation-on-uvo?p=general-object-foundation-model-for-images)[![PWC](https:\u002F\u002Fimg.shields.io\u002Fendpoint.svg?url=https:\u002F\u002Fpaperswithcode.com\u002Fbadge\u002Fgeneral-object-foundation-model-for-images\u002Freferring-expression-segmentation-on-refcoco)](https:\u002F\u002Fpaperswithcode.com\u002Fsota\u002Freferring-expression-segmentation-on-refcoco?p=general-object-foundation-model-for-images)[![PWC](https:\u002F\u002Fimg.shields.io\u002Fendpoint.svg?url=https:\u002F\u002Fpaperswithcode.com\u002Fbadge\u002Fgeneral-object-foundation-model-for-images\u002Freferring-expression-segmentation-on-refcocog)](https:\u002F\u002Fpaperswithcode.com\u002Fsota\u002Freferring-expression-segmentation-on-refcocog?p=general-object-foundation-model-for-images)[![PWC](https:\u002F\u002Fimg.shields.io\u002Fendpoint.svg?url=https:\u002F\u002Fpaperswithcode.com\u002Fbadge\u002Fgeneral-object-foundation-model-for-images\u002Fvideo-instance-segmentation-on-youtube-vis-1)](https:\u002F\u002Fpaperswithcode.com\u002Fsota\u002Fvideo-instance-segmentation-on-youtube-vis-1?p=general-object-foundation-model-for-images)[![PWC](https:\u002F\u002Fimg.shields.io\u002Fendpoint.svg?url=https:\u002F\u002Fpaperswithcode.com\u002Fbadge\u002Fgeneral-object-foundation-model-for-images\u002Fobject-detection-on-lvis-v1-0-val)](https:\u002F\u002Fpaperswithcode.com\u002Fsota\u002Fobject-detection-on-lvis-v1-0-val?p=general-object-foundation-model-for-images)[![PWC](https:\u002F\u002Fimg.shields.io\u002Fendpoint.svg?url=https:\u002F\u002Fpaperswithco
de.com\u002Fbadge\u002Fgeneral-object-foundation-model-for-images\u002Finstance-segmentation-on-lvis-v1-0-val)](https:\u002F\u002Fpaperswithcode.com\u002Fsota\u002Finstance-segmentation-on-lvis-v1-0-val?p=general-object-foundation-model-for-images)[![PWC](https:\u002F\u002Fimg.shields.io\u002Fendpoint.svg?url=https:\u002F\u002Fpaperswithcode.com\u002Fbadge\u002Fgeneral-object-foundation-model-for-images\u002Freferring-expression-comprehension-on-refcoco)](https:\u002F\u002Fpaperswithcode.com\u002Fsota\u002Freferring-expression-comprehension-on-refcoco?p=general-object-foundation-model-for-images)[![PWC](https:\u002F\u002Fimg.shields.io\u002Fendpoint.svg?url=https:\u002F\u002Fpaperswithcode.com\u002Fbadge\u002Fgeneral-object-foundation-model-for-images\u002Freferring-expression-segmentation-on-refcoco-3)](https:\u002F\u002Fpaperswithcode.com\u002Fsota\u002Freferring-expression-segmentation-on-refcoco-3?p=general-object-foundation-model-for-images)[![PWC](https:\u002F\u002Fimg.shields.io\u002Fendpoint.svg?url=https:\u002F\u002Fpaperswithcode.com\u002Fbadge\u002Fgeneral-object-foundation-model-for-images\u002Finstance-segmentation-on-coco-minival)](https:\u002F\u002Fpaperswithcode.com\u002Fsota\u002Finstance-segmentation-on-coco-minival?p=general-object-foundation-model-for-images)[![PWC](https:\u002F\u002Fimg.shields.io\u002Fendpoint.svg?url=https:\u002F\u002Fpaperswithcode.com\u002Fbadge\u002Fgeneral-object-foundation-model-for-images\u002Freferring-expression-comprehension-on)](https:\u002F\u002Fpaperswithcode.com\u002Fsota\u002Freferring-expression-comprehension-on?p=general-object-foundation-model-for-images)[![PWC](https:\u002F\u002Fimg.shields.io\u002Fendpoint.svg?url=https:\u002F\u002Fpaperswithcode.com\u002Fbadge\u002Fgeneral-object-foundation-model-for-images\u002Finstance-segmentation-on-coco)](https:\u002F\u002Fpaperswithcode.com\u002Fsota\u002Finstance-segmentation-on-coco?p=general-object-foundation-model-for-images)[![PWC](https:\u002F\u002Fimg.shields.io\u002Fendpoint.svg?url=https:\u002F\u002Fpaperswithcode.com\u002Fbadge\u002Fgeneral-object-foundation-model-for-images\u002Freferring-expression-comprehension-on-refcoco-1)](https:\u002F\u002Fpaperswithcode.com\u002Fsota\u002Freferring-expression-comprehension-on-refcoco-1?p=general-object-foundation-model-for-images)\n\n\n\n\n![data_demo](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FFoundationVision_GLEE_readme_2b64f699dd17.gif)\n\n## Highlight:\n\n- GLEE is accepted by **CVPR2024** as **Highlight**!\n- GLEE is a general object foundation model jointly trained on over **ten million images** from various benchmarks with diverse levels of supervision.\n- GLEE is capable of addressing **a wide range of object-centric tasks** simultaneously while maintaining **SOTA** performance.\n-  GLEE demonstrates remarkable versatility and robust **zero-shot transferability** across a spectrum of object-level image and video tasks, and able to **serve as a foundational component** for enhancing other architectures or models.\n\n\n\nWe will release the following contents for **GLEE**:exclamation:\n\n- [x] Demo Code\n\n- [x] Model Zoo\n\n- [x] Comprehensive User Guide\n\n- [x] Training Code and Scripts\n\n- [ ] Detailed Evaluation Code and Scripts\n\n- [ ] Tutorial for Zero-shot Testing or Fine-tuning GLEE on New Datasets\n\n  \n\n\n\n## Getting started\n\n1. Installation: Please refer to [INSTALL.md](assets\u002FINSTALL.md) for more details.\n2. Data preparation: Please refer to [DATA.md](assets\u002FDATA.md) for more details.\n3. 
Training: Please refer to [TRAIN.md](assets\u002FTRAIN.md) for more details.\n4. Testing: Please refer to [TEST.md](assets\u002FTEST.md) for more details. \n5. Model zoo: Please refer to [MODEL_ZOO.md](assets\u002FMODEL_ZOO.md) for more details.\n\n\n\n## Run the demo APP\n\nTry our online demo app on \\[[HuggingFace Demo](https:\u002F\u002Fhuggingface.co\u002Fspaces\u002FJunfeng5\u002FGLEE_demo)\\] or use it locally:\n\n```bash\ngit clone https:\u002F\u002Fgithub.com\u002FFoundationVision\u002FGLEE\ncd GLEE\n# support CPU and GPU running\npython app.py\n```\n\n\n\n# Introduction \n\n\n\nGLEE has been trained on over ten million images from 16 datasets, fully harnessing both existing annotated data and cost-effective automatically labeled data to construct a diverse training set. This extensive training regime endows GLEE with formidable generalization capabilities. \n\n\n\n![data_demo](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FFoundationVision_GLEE_readme_ea87f0ca747c.png)\n\n\n\nGLEE consists of an image encoder, a text encoder, a visual prompter, and an object decoder, as illustrated in the figure below. The text encoder processes arbitrary descriptions related to the task, including **1) object category list 2) object names in any form 3) captions about objects 4) referring expressions**. The visual prompter encodes user inputs such as **1) points 2) bounding boxes 3) scribbles** during interactive segmentation into corresponding visual representations of target objects. They are then integrated into a detector that extracts objects from images according to the textual and visual input.\n\n![pipeline](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FFoundationVision_GLEE_readme_7c91c87be4c0.png)\n\n\n\nBased on the above designs, GLEE can be used to seamlessly unify a wide range of object perception tasks in images and videos, including object detection, instance segmentation, grounding, multi-object tracking (MOT), video instance segmentation (VIS), video object segmentation (VOS), interactive segmentation and tracking, and supports **open-world\u002Flarge-vocabulary image and video detection and segmentation** tasks. 
\n\n\n\n# Results\n\n## Image-level tasks\n\n![imagetask](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FFoundationVision_GLEE_readme_13d880b6c63b.png)\n\n![odinw](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FFoundationVision_GLEE_readme_65046712bde8.png)\n\n## Video-level tasks\n\n![videotask](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FFoundationVision_GLEE_readme_abc38a8fb882.png)\n\n![visvosrvos](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FFoundationVision_GLEE_readme_8ebbe206c1a0.png)\n\n\n\n# Citing GLEE\n\n```\n@misc{wu2023GLEE,\n  author = {Junfeng Wu and Yi Jiang and Qihao Liu and Zehuan Yuan and Xiang Bai and Song Bai},\n  title = {General Object Foundation Model for Images and Videos at Scale},\n  year={2023},\n  eprint={2312.09158},\n  archivePrefix={arXiv}\n}\n```\n\n## Acknowledgments\n\n- Thanks [UNINEXT](https:\u002F\u002Fgithub.com\u002FMasterBin-IIAU\u002FUNINEXT) for the implementation of multi-dataset training and data processing.\n\n- Thanks [VNext](https:\u002F\u002Fgithub.com\u002Fwjf5203\u002FVNext) for sharing its experience with Video Instance Segmentation (VIS).\n\n- Thanks [SEEM](https:\u002F\u002Fgithub.com\u002FUX-Decoder\u002FSegment-Everything-Everywhere-All-At-Once) for providing the implementation of the visual prompter.\n\n- Thanks [MaskDINO](https:\u002F\u002Fgithub.com\u002FIDEA-Research\u002FMaskDINO) for providing a powerful detector and segmenter.\n\n  \n","# GLEE：面向大规模图像和视频的通用目标基础模型\n\n> #### 吴俊峰\\*、姜毅\\*、刘启豪、袁泽寰、白翔\u003Csup>&dagger;\u003C\u002Fsup>，以及白松\u003Csup>&dagger;\u003C\u002Fsup>\n>\n> \\* 共同第一作者，\u003Csup>&dagger;\u003C\u002Fsup> 通讯作者\n\n\\[[项目主页](https:\u002F\u002Fglee-vision.github.io\u002F)\\]  \\[[论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F2312.09158)\\]    \\[[HuggingFace 演示](https:\u002F\u002Fhuggingface.co\u002Fspaces\u002FJunfeng5\u002FGLEE_demo)\\]   \\[[视频演示](https:\u002F\u002Fyoutu.be\u002FPSVhfTPx0GQ)\\]  
\n\n[![PWC](https:\u002F\u002Fimg.shields.io\u002Fendpoint.svg?url=https:\u002F\u002Fpaperswithcode.com\u002Fbadge\u002Fgeneral-object-foundation-model-for-images\u002Flong-tail-video-object-segmentation-on-burst-1)](https:\u002F\u002Fpaperswithcode.com\u002Fsota\u002Flong-tail-video-object-segmentation-on-burst-1?p=general-object-foundation-model-for-images)[![PWC](https:\u002F\u002Fimg.shields.io\u002Fendpoint.svg?url=https:\u002F\u002Fpaperswithcode.com\u002Fbadge\u002Fgeneral-object-foundation-model-for-images\u002Fvideo-instance-segmentation-on-ovis-1)](https:\u002F\u002Fpaperswithcode.com\u002Fsota\u002Fvideo-instance-segmentation-on-ovis-1?p=general-object-foundation-model-for-images)[![PWC](https:\u002F\u002Fimg.shields.io\u002Fendpoint.svg?url=https:\u002F\u002Fpaperswithcode.com\u002Fbadge\u002Fgeneral-object-foundation-model-for-images\u002Freferring-video-object-segmentation-on-refer)](https:\u002F\u002Fpaperswithcode.com\u002Fsota\u002Freferring-video-object-segmentation-on-refer?p=general-object-foundation-model-for-images)[![PWC](https:\u002F\u002Fimg.shields.io\u002Fendpoint.svg?url=https:\u002F\u002Fpaperswithcode.com\u002Fbadge\u002Fgeneral-object-foundation-model-for-images\u002Freferring-expression-segmentation-on-refer-1)](https:\u002F\u002Fpaperswithcode.com\u002Fsota\u002Freferring-expression-segmentation-on-refer-1?p=general-object-foundation-model-for-images)[![PWC](https:\u002F\u002Fimg.shields.io\u002Fendpoint.svg?url=https:\u002F\u002Fpaperswithcode.com\u002Fbadge\u002Fgeneral-object-foundation-model-for-images\u002Fmulti-object-tracking-on-tao)](https:\u002F\u002Fpaperswithcode.com\u002Fsota\u002Fmulti-object-tracking-on-tao?p=general-object-foundation-model-for-images)[![PWC](https:\u002F\u002Fimg.shields.io\u002Fendpoint.svg?url=https:\u002F\u002Fpaperswithcode.com\u002Fbadge\u002Fgeneral-object-foundation-model-for-images\u002Fopen-world-instance-segmentation-on-uvo)](https:\u002F\u002Fpaperswithcode.com\u002Fsota\u002Fopen-world-instance-segmentation-on-uvo?p=general-object-foundation-model-for-images)[![PWC](https:\u002F\u002Fimg.shields.io\u002Fendpoint.svg?url=https:\u002F\u002Fpaperswithcode.com\u002Fbadge\u002Fgeneral-object-foundation-model-for-images\u002Freferring-expression-segmentation-on-refcoco)](https:\u002F\u002Fpaperswithcode.com\u002Fsota\u002Freferring-expression-segmentation-on-refcoco?p=general-object-foundation-model-for-images)[![PWC](https:\u002F\u002Fimg.shields.io\u002Fendpoint.svg?url=https:\u002F\u002Fpaperswithcode.com\u002Fbadge\u002Fgeneral-object-foundation-model-for-images\u002Freferring-expression-segmentation-on-refcocog)](https:\u002F\u002Fpaperswithcode.com\u002Fsota\u002Freferring-expression-segmentation-on-refcocog?p=general-object-foundation-model-for-images)[![PWC](https:\u002F\u002Fimg.shields.io\u002Fendpoint.svg?url=https:\u002F\u002Fpaperswithcode.com\u002Fbadge\u002Fgeneral-object-foundation-model-for-images\u002Fvideo-instance-segmentation-on-youtube-vis-1)](https:\u002F\u002Fpaperswithcode.com\u002Fsota\u002Fvideo-instance-segmentation-on-youtube-vis-1?p=general-object-foundation-model-for-images)[![PWC](https:\u002F\u002Fimg.shields.io\u002Fendpoint.svg?url=https:\u002F\u002Fpaperswithcode.com\u002Fbadge\u002Fgeneral-object-foundation-model-for-images\u002Fobject-detection-on-lvis-v1-0-val)](https:\u002F\u002Fpaperswithcode.com\u002Fsota\u002Fobject-detection-on-lvis-v1-0-val?p=general-object-foundation-model-for-images)[![PWC](https:\u002F\u002Fimg.shields.io\u002Fendpoint.svg?url=https:\u002F\u002Fpaperswithco
de.com\u002Fbadge\u002Fgeneral-object-foundation-model-for-images\u002Finstance-segmentation-on-lvis-v1-0-val)](https:\u002F\u002Fpaperswithcode.com\u002Fsota\u002Finstance-segmentation-on-lvis-v1-0-val?p=general-object-foundation-model-for-images)[![PWC](https:\u002F\u002Fimg.shields.io\u002Fendpoint.svg?url=https:\u002F\u002Fpaperswithcode.com\u002Fbadge\u002Fgeneral-object-foundation-model-for-images\u002Freferring-expression-comprehension-on-refcoco)](https:\u002F\u002Fpaperswithcode.com\u002Fsota\u002Freferring-expression-comprehension-on-refcoco?p=general-object-foundation-model-for-images)[![PWC](https:\u002F\u002Fimg.shields.io\u002Fendpoint.svg?url=https:\u002F\u002Fpaperswithcode.com\u002Fbadge\u002Fgeneral-object-foundation-model-for-images\u002Freferring-expression-segmentation-on-refcoco-3)](https:\u002F\u002Fpaperswithcode.com\u002Fsota\u002Freferring-expression-segmentation-on-refcoco-3?p=general-object-foundation-model-for-images)[![PWC](https:\u002F\u002Fimg.shields.io\u002Fendpoint.svg?url=https:\u002F\u002Fpaperswithcode.com\u002Fbadge\u002Fgeneral-object-foundation-model-for-images\u002Finstance-segmentation-on-coco-minival)](https:\u002F\u002Fpaperswithcode.com\u002Fsota\u002Finstance-segmentation-on-coco-minival?p=general-object-foundation-model-for-images)[![PWC](https:\u002F\u002Fimg.shields.io\u002Fendpoint.svg?url=https:\u002F\u002Fpaperswithcode.com\u002Fbadge\u002Fgeneral-object-foundation-model-for-images\u002Freferring-expression-comprehension-on)](https:\u002F\u002Fpaperswithcode.com\u002Fsota\u002Freferring-expression-comprehension-on?p=general-object-foundation-model-for-images)[![PWC](https:\u002F\u002Fimg.shields.io\u002Fendpoint.svg?url=https:\u002F\u002Fpaperswithcode.com\u002Fbadge\u002Fgeneral-object-foundation-model-for-images\u002Finstance-segmentation-on-coco)](https:\u002F\u002Fpaperswithcode.com\u002Fsota\u002Finstance-segmentation-on-coco?p=general-object-foundation-model-for-images)[![PWC](https:\u002F\u002Fimg.shields.io\u002Fendpoint.svg?url=https:\u002F\u002Fpaperswithcode.com\u002Fbadge\u002Fgeneral-object-foundation-model-for-images\u002Freferring-expression-comprehension-on-refcoco-1)](https:\u002F\u002Fpaperswithcode.com\u002Fsota\u002Freferring-expression-comprehension-on-refcoco-1?p=general-object-foundation-model-for-images)\n\n\n\n\n![data_demo](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FFoundationVision_GLEE_readme_2b64f699dd17.gif)\n\n## 亮点：\n\n- GLEE 被 **CVPR2024** 接收，并被评为 **Highlight**！\n- GLEE 是一个通用目标基础模型，在来自不同基准数据集的超过 **一千万张图像** 上进行联合训练，这些数据集具有多样化的监督信息。\n- GLEE 能够同时处理 **广泛的以目标为中心的任务**，并保持 **SOTA** 性能。\n- GLEE 在各类基于目标的图像和视频任务中展现出卓越的多功能性和强大的 **零样本迁移能力**，并且可以 **作为基础组件** 来增强其他架构或模型。\n\n\n\n我们将为 **GLEE** 发布以下内容：exclamation:\n\n- [x] 演示代码\n\n- [x] 模型库\n\n- [x] 完整用户指南\n\n- [x] 训练代码及脚本\n\n- [ ] 详细的评估代码及脚本\n\n- [ ] 针对新数据集进行零样本测试或微调 GLEE 的教程\n\n\n\n## 快速入门\n\n1. 安装：请参阅 [INSTALL.md](assets\u002FINSTALL.md) 获取更多详细信息。\n2. 数据准备：请参阅 [DATA.md](assets\u002FDATA.md) 获取更多详细信息。\n3. 训练：请参阅 [TRAIN.md](assets\u002FTRAIN.md) 获取更多详细信息。\n4. 测试：请参阅 [TEST.md](assets\u002FTEST.md) 获取更多详细信息。\n5. 
模型库：请参阅 [MODEL_ZOO.md](assets\u002FMODEL_ZOO.md) 获取更多详细信息。\n\n\n\n## 运行演示应用\n\n您可以在 \\[[HuggingFace 演示](https:\u002F\u002Fhuggingface.co\u002Fspaces\u002FJunfeng5\u002FGLEE_demo)\\] 上试用我们的在线演示应用，或者在本地运行：\n\n```bash\ngit clone https:\u002F\u002Fgithub.com\u002FFoundationVision\u002FGLEE\ncd GLEE\n# 支持 CPU 和 GPU 运行\npython app.py\n```\n\n# 引言\n\n\n\nGLEE 在来自 16 个数据集的超过一千万张图像上进行了训练，充分利用了现有的标注数据以及经济高效的自动标注数据，构建了一个多样化的训练集。这种大规模的训练使 GLEE 具备强大的泛化能力。\n\n\n\n![data_demo](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FFoundationVision_GLEE_readme_ea87f0ca747c.png)\n\n\n\n如图所示，GLEE 由图像编码器、文本编码器、视觉提示器和目标解码器组成。文本编码器可以处理与任务相关的任意描述，包括 **1) 目标类别列表 2) 任何形式的目标名称 3) 关于目标的描述文字 4) 指代表达**。视觉提示器则会将用户在交互式分割过程中提供的输入，例如 **1) 点 2) 边界框 3) 涂鸦**，编码为对应目标对象的视觉表示。随后，这些信息会被整合到检测器中，根据文本和视觉输入从图像中提取目标对象。\n\n![pipeline](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FFoundationVision_GLEE_readme_7c91c87be4c0.png)\n\n\n\n基于上述设计，GLEE 可以无缝统一图像和视频中的多种目标感知任务，包括目标检测、实例分割、定位、多目标跟踪（MOT）、视频实例分割（VIS）、视频目标分割（VOS）、交互式分割与跟踪等，并支持 **开放世界\u002F大词汇量的图像和视频检测与分割** 任务。\n\n\n\n# 结果\n\n## 图像级任务\n\n![imagetask](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FFoundationVision_GLEE_readme_13d880b6c63b.png)\n\n![odinw](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FFoundationVision_GLEE_readme_65046712bde8.png)\n\n## 视频级任务\n\n![videotask](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FFoundationVision_GLEE_readme_abc38a8fb882.png)\n\n![visvosrvos](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FFoundationVision_GLEE_readme_8ebbe206c1a0.png)\n\n\n\n# 引用 GLEE\n\n```\n@misc{wu2023GLEE,\n  author = {Junfeng Wu and Yi Jiang and Qihao Liu and Zehuan Yuan and Xiang Bai and Song Bai},\n  title = {General Object Foundation Model for Images and Videos at Scale},\n  year={2023},\n  eprint={2312.09158},\n  archivePrefix={arXiv}\n}\n```\n\n## 致谢\n\n- 感谢 [UNINEXT](https:\u002F\u002Fgithub.com\u002FMasterBin-IIAU\u002FUNINEXT) 提供多数据集训练和数据处理的实现。\n\n- 感谢 [VNext](https:\u002F\u002Fgithub.com\u002Fwjf5203\u002FVNext) 提供视频实例分割（VIS）的经验。\n\n- 感谢 [SEEM](https:\u002F\u002Fgithub.com\u002FUX-Decoder\u002FSegment-Everything-Everywhere-All-At-Once) 提供视觉提示器的实现。\n\n- 感谢 [MaskDINO](https:\u002F\u002Fgithub.com\u002FIDEA-Research\u002FMaskDINO) 提供强大的检测与分割模型。","# GLEE 快速上手指南\n\nGLEE (General Object Foundation Model) 是一个面向图像和视频的通用物体基础模型，支持物体检测、实例分割、指代分割、视频目标跟踪等多种任务。本指南将帮助您快速在本地部署并运行 GLEE。\n\n## 1. 环境准备\n\n在开始之前，请确保您的开发环境满足以下要求：\n\n*   **操作系统**: Linux (推荐 Ubuntu 18.04+) 或 macOS。Windows 用户建议使用 WSL2。\n*   **Python**: 版本 >= 3.8。\n*   **PyTorch**: 版本 >= 1.9 (建议配合 CUDA 11.1+ 使用以获得最佳性能，同时也支持 CPU 运行)。\n*   **硬件**: \n    *   **GPU**: 推荐使用 NVIDIA GPU (显存建议 16GB+ 以处理高分辨率图像或视频)，用于加速推理和训练。\n    *   **CPU**: 仅用于轻量级测试或演示，速度较慢。\n\n**前置依赖安装：**\n建议使用 `conda` 创建独立环境。\n\n```bash\nconda create -n glee python=3.9 -y\nconda activate glee\n```\n\n## 2. 安装步骤\n\n### 第一步：克隆项目代码\n从 GitHub 获取最新源代码：\n\n```bash\ngit clone https:\u002F\u002Fgithub.com\u002FFoundationVision\u002FGLEE\ncd GLEE\n```\n\n### 第二步：安装核心依赖\n根据官方指引，核心依赖通常通过 `requirements.txt` 或特定的安装脚本进行安装。由于 README 指向了详细的 `INSTALL.md`，以下是通用的快速安装命令（基于常见 PyTorch 视觉项目结构）：\n\n```bash\n# 安装 PyTorch (请根据您的 CUDA 版本选择，此处以 CUDA 11.8 为例)\npip install torch torchvision torchaudio --index-url https:\u002F\u002Fdownload.pytorch.org\u002Fwhl\u002Fcu118\n\n# 安装其他 Python 依赖\npip install -r requirements.txt\n```\n\n> **注意**：如果 `requirements.txt` 中未包含某些特定视觉库（如 `detectron2`, `mmcv` 等），请参考项目根目录下的 `assets\u002FINSTALL.md` 文件进行针对性安装。国内用户若遇到下载慢的问题，可临时使用清华源：\n> `pip install -r requirements.txt -i https:\u002F\u002Fpypi.tuna.tsinghua.edu.cn\u002Fsimple`\n\n### 第三步：验证安装\n确保没有报错后，即可进行下一步使用。\n\n## 3. 
基本使用\n\nGLEE 提供了简单的演示应用 (`app.py`)，支持在本地启动一个交互界面，无需编写复杂代码即可体验模型功能。\n\n### 启动本地演示应用\n在项目根目录下运行以下命令：\n\n```bash\npython app.py\n```\n\n*   该脚本会自动加载预训练模型。\n*   支持 **CPU** 和 **GPU** 自动切换运行。\n*   启动成功后，终端会显示本地访问地址（通常为 `http:\u002F\u002F127.0.0.1:7860`），在浏览器中打开即可上传图像或视频进行测试。\n\n### 在线体验（免安装）\n如果您暂时不想配置本地环境，可以直接访问 Hugging Face 上的官方演示空间：\n\n*   **HuggingFace Demo**: [https:\u002F\u002Fhuggingface.co\u002Fspaces\u002FJunfeng5\u002FGLEE_demo](https:\u002F\u002Fhuggingface.co\u002Fspaces\u002FJunfeng5\u002FGLEE_demo)\n\n### 进阶使用提示\n*   **模型权重**：默认情况下，演示脚本可能会尝试自动下载模型。如果需要手动指定模型路径或下载特定版本的权重，请参阅 `assets\u002FMODEL_ZOO.md`。\n*   **数据集准备**：若需进行训练或在特定数据集上测试，请参考 `assets\u002FDATA.md` 配置数据路径。\n*   **训练与测试**：完整的训练和评估脚本说明请查阅 `assets\u002FTRAIN.md` 和 `assets\u002FTEST.md`。","某智慧交通研发团队正在构建一套城市道路视频分析系统，需要从海量监控录像中自动识别并追踪各类车辆、行人及突发障碍物。\n\n### 没有 GLEE 时\n- **长尾物体识别困难**：传统模型仅能识别常见车型，面对工程车、三轮车或特殊货物等“长尾”物体时漏检率极高，需针对每类新物体单独收集数据训练。\n- **视频追踪不连贯**：在车辆被遮挡或快速移动时，现有算法容易丢失目标 ID，导致同一辆车在不同帧被重复计数，统计结果严重失真。\n- **开发周期漫长**：为了支持“找出所有红色卡车”或“追踪戴安全帽的工人”等特定指令，团队需为每个新需求定制开发语义分割模型，耗时数周。\n- **泛化能力薄弱**：模型在白天训练好后，一到夜间或雨雪天气性能便大幅下降，无法适应复杂多变的真实路况。\n\n### 使用 GLEE 后\n- **通用物体全覆盖**：GLEE 作为基础模型，无需额外训练即可零样本识别包括罕见车型在内的上千种物体，显著提升了长尾场景的检出率。\n- **时空追踪更精准**：凭借强大的视频理解能力，GLEE 能在物体长时间遮挡或剧烈运动下保持 ID 一致，确保了车流统计和轨迹分析的准确性。\n- **自然语言即时交互**：团队成员直接输入“追踪画面中所有违规停放的货车”等自然语言指令，GLEE 即可实时生成对应的分割掩码和追踪结果，将开发时间从数周缩短至分钟级。\n- **鲁棒性显著增强**：得益于大规模数据预训练，GLEE 在夜间、低光照及恶劣天气下依然保持稳定的检测与分割性能，大幅降低了场景适配成本。\n\nGLEE 通过统一的通用物体基础模型架构，彻底解决了传统方案中碎片化建模的痛点，让视频感知系统具备了真正的开放世界适应能力。","https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FFoundationVision_GLEE_2b64f699.gif","FoundationVision","https:\u002F\u002Foss.gittoolsai.com\u002Favatars\u002FFoundationVision_08beeedc.jpg","Bytedance's opensource FoundationVision models",null,"https:\u002F\u002Fgithub.com\u002FFoundationVision",[79,83,87,91,95,99],{"name":80,"color":81,"percentage":82},"Python","#3572A5",94.6,{"name":84,"color":85,"percentage":86},"Cuda","#3A4E3A",3.3,{"name":88,"color":89,"percentage":90},"C++","#f34b7d",1.7,{"name":92,"color":93,"percentage":94},"Shell","#89e051",0.3,{"name":96,"color":97,"percentage":98},"Dockerfile","#384d54",0.1,{"name":100,"color":101,"percentage":102},"CMake","#DA3434",0,1172,76,"2026-04-17T14:16:06","MIT","未说明","支持 CPU 和 GPU 运行，具体显卡型号、显存大小及 CUDA 版本未在 README 中明确说明（通常此类视觉基础模型推荐 NVIDIA GPU）",{"notes":110,"python":107,"dependencies":111},"README 明确指出演示代码支持 CPU 和 GPU 运行。详细的安装步骤、数据准备、训练及测试指南需参考项目中的 INSTALL.md、DATA.md、TRAIN.md 和 TEST.md 文件。该模型是一个通用的物体基础模型，适用于图像和视频的多种任务。",[107],[61,15],[114,115,116,117,118,119,120,121,122,123,124,125,126,127,128],"foundation-model","object-detection","open-world","tracking","open-vocabulary-detection","open-vocabulary-segmentation","open-vocabulary-video-segmentation","referring-expression-comprehension","referring-expression-segmentation","video-instance-segmentation","video-object-segmentation","zero-shot-object-detection","referring-video-object-segmentation","interactive-segmentation","segment-anything","2026-03-27T02:49:30.150509","2026-04-20T04:05:07.721062",[132,137,142,147,152,157,162],{"id":133,"question_zh":134,"answer_zh":135,"source_url":136},43474,"是否会开源视频任务（VOS）的 SwinL 版本模型？为什么目前只有 R50 版本？","目前只训练并发布了 VOS 任务的 R50 版本权重。经验表明，在 VOS 任务上增加背景数据带来的提升不大（可能在 2 个点以内）。SwinL 版本的权重将在后续更新中发布，或者用户可以等待后续更新的脚本自行训练。此外，由于训练时接触的部分级数据较少，模型对未学习过的语言提示词（如自定义列表中的特定部位）效果可能较差。","https:\u002F\u002Fgithub.com\u002FFoundationVision\u002FGLEE\u002Fissues\u002F8",{"id":138,"question_zh":139,"answer_zh":140,"source_url":141},43469,"运行时报错 ImportError: cannot import name 'add_glee_config' from 
'detectron2.projects' 如何解决？","这是因为 detectron2 版本不正确。不要使用系统自带的 detectron2，必须按照 INSTALL.md 中的说明，通过命令 `pip3 install -e .` 安装作者更新后的版本，该版本包含了 GLEE 所需的配置和函数。","https:\u002F\u002Fgithub.com\u002FFoundationVision\u002FGLEE\u002Fissues\u002F33",{"id":143,"question_zh":144,"answer_zh":145,"source_url":146},43470,"运行时提示缺少 CLIP 权重文件（pytorch_model.bin 等）怎么办？","需要手动下载缺失的 CLIP 权重文件。可以使用以下 wget 命令下载到指定目录：\n`wget -P projects\u002FGLEE\u002Fclip_vit_base_patch32\u002F https:\u002F\u002Fhuggingface.co\u002Fspaces\u002FJunfeng5\u002FGLEE_demo\u002Fresolve\u002Fmain\u002FGLEE\u002Fclip_vit_base_patch32\u002Fpytorch_model.bin`\n或者直接从 HuggingFace 仓库的相关文件夹中下载。","https:\u002F\u002Fgithub.com\u002FFoundationVision\u002FGLEE\u002Fissues\u002F19",{"id":148,"question_zh":149,"answer_zh":150,"source_url":151},43471,"GLEE 模型能否像 SAM 或 CLIP 那样直接通过简单的 Python 代码调用？","目前暂时无法像 SAM 那样直接剥离调用，因为模型尚未从 Detectron2 框架中完全解耦。如果已安装 Detectron2，可以参考 HuggingFace Demo 的代码实现快速调用：https:\u002F\u002Fhuggingface.co\u002Fspaces\u002FJunfeng5\u002FGLEE_demo\u002Fapp.py。如果有强烈需求或愿意贡献代码，欢迎提交 Pull Request。","https:\u002F\u002Fgithub.com\u002FFoundationVision\u002FGLEE\u002Fissues\u002F25",{"id":153,"question_zh":154,"answer_zh":155,"source_url":156},43472,"在 COCO 数据集上对 Stage 2 模型进行微调能提升性能吗？","实验表明，继续在 COCO 数据集上微调 GLEE-Lite-Joint 并不会提升性能。这突显了 GLEE 的优势：即通过使用更大规模的数据训练来增强模型的检测能力。目前 GLEE-Pro 在 COCO 上的表现尚未达到预期，团队正在探索原因。","https:\u002F\u002Fgithub.com\u002FFoundationVision\u002FGLEE\u002Fissues\u002F14",{"id":158,"question_zh":159,"answer_zh":160,"source_url":161},43473,"论文中提到的\"Hybrid matching\"是指 H-DETR 中的方法吗？","不是指 H-DETR 论文中的方法，这里指的是 Mask-DINO 论文中提出的 hybrid matching 机制。","https:\u002F\u002Fgithub.com\u002FFoundationVision\u002FGLEE\u002Fissues\u002F15",{"id":163,"question_zh":164,"answer_zh":165,"source_url":166},43475,"未来计划支持视频全景分割（Video Panoptic Segmentation）基准测试吗？","是的，计划在后续更新中支持带有语义分割和全景分割功能的 GLEE 版本。","https:\u002F\u002Fgithub.com\u002FFoundationVision\u002FGLEE\u002Fissues\u002F2",[]]