[{"data":1,"prerenderedAt":-1},["ShallowReactive",2],{"similar-qqlu--Entity":3,"tool-qqlu--Entity":64},[4,17,27,35,43,56],{"id":5,"name":6,"github_repo":7,"description_zh":8,"stars":9,"difficulty_score":10,"last_commit_at":11,"category_tags":12,"status":16},3808,"stable-diffusion-webui","AUTOMATIC1111\u002Fstable-diffusion-webui","stable-diffusion-webui 是一个基于 Gradio 构建的网页版操作界面，旨在让用户能够轻松地在本地运行和使用强大的 Stable Diffusion 图像生成模型。它解决了原始模型依赖命令行、操作门槛高且功能分散的痛点，将复杂的 AI 绘图流程整合进一个直观易用的图形化平台。\n\n无论是希望快速上手的普通创作者、需要精细控制画面细节的设计师，还是想要深入探索模型潜力的开发者与研究人员，都能从中获益。其核心亮点在于极高的功能丰富度：不仅支持文生图、图生图、局部重绘（Inpainting）和外绘（Outpainting）等基础模式，还独创了注意力机制调整、提示词矩阵、负向提示词以及“高清修复”等高级功能。此外，它内置了 GFPGAN 和 CodeFormer 等人脸修复工具，支持多种神经网络放大算法，并允许用户通过插件系统无限扩展能力。即使是显存有限的设备，stable-diffusion-webui 也提供了相应的优化选项，让高质量的 AI 艺术创作变得触手可及。",162132,3,"2026-04-05T11:01:52",[13,14,15],"开发框架","图像","Agent","ready",{"id":18,"name":19,"github_repo":20,"description_zh":21,"stars":22,"difficulty_score":23,"last_commit_at":24,"category_tags":25,"status":16},1381,"everything-claude-code","affaan-m\u002Feverything-claude-code","everything-claude-code 是一套专为 AI 编程助手（如 Claude Code、Codex、Cursor 等）打造的高性能优化系统。它不仅仅是一组配置文件，而是一个经过长期实战打磨的完整框架，旨在解决 AI 代理在实际开发中面临的效率低下、记忆丢失、安全隐患及缺乏持续学习能力等核心痛点。\n\n通过引入技能模块化、直觉增强、记忆持久化机制以及内置的安全扫描功能，everything-claude-code 能显著提升 AI 在复杂任务中的表现，帮助开发者构建更稳定、更智能的生产级 AI 代理。其独特的“研究优先”开发理念和针对 Token 消耗的优化策略，使得模型响应更快、成本更低，同时有效防御潜在的攻击向量。\n\n这套工具特别适合软件开发者、AI 研究人员以及希望深度定制 AI 工作流的技术团队使用。无论您是在构建大型代码库，还是需要 AI 协助进行安全审计与自动化测试，everything-claude-code 都能提供强大的底层支持。作为一个曾荣获 Anthropic 黑客大奖的开源项目，它融合了多语言支持与丰富的实战钩子（hooks），让 AI 真正成长为懂上",138956,2,"2026-04-05T11:33:21",[13,15,26],"语言模型",{"id":28,"name":29,"github_repo":30,"description_zh":31,"stars":32,"difficulty_score":23,"last_commit_at":33,"category_tags":34,"status":16},2271,"ComfyUI","Comfy-Org\u002FComfyUI","ComfyUI 是一款功能强大且高度模块化的视觉 AI 引擎，专为设计和执行复杂的 Stable Diffusion 图像生成流程而打造。它摒弃了传统的代码编写模式，采用直观的节点式流程图界面，让用户通过连接不同的功能模块即可构建个性化的生成管线。\n\n这一设计巧妙解决了高级 AI 
绘图工作流配置复杂、灵活性不足的痛点。用户无需具备编程背景，也能自由组合模型、调整参数并实时预览效果，轻松实现从基础文生图到多步骤高清修复等各类复杂任务。ComfyUI 拥有极佳的兼容性，不仅支持 Windows、macOS 和 Linux 全平台，还广泛适配 NVIDIA、AMD、Intel 及苹果 Silicon 等多种硬件架构，并率先支持 SDXL、Flux、SD3 等前沿模型。\n\n无论是希望深入探索算法潜力的研究人员和开发者，还是追求极致创作自由度的设计师与资深 AI 绘画爱好者，ComfyUI 都能提供强大的支持。其独特的模块化架构允许社区不断扩展新功能，使其成为当前最灵活、生态最丰富的开源扩散模型工具之一，帮助用户将创意高效转化为现实。",107662,"2026-04-03T11:11:01",[13,14,15],{"id":36,"name":37,"github_repo":38,"description_zh":39,"stars":40,"difficulty_score":23,"last_commit_at":41,"category_tags":42,"status":16},3704,"NextChat","ChatGPTNextWeb\u002FNextChat","NextChat 是一款轻量且极速的 AI 助手，旨在为用户提供流畅、跨平台的大模型交互体验。它完美解决了用户在多设备间切换时难以保持对话连续性，以及面对众多 AI 模型不知如何统一管理的痛点。无论是日常办公、学习辅助还是创意激发，NextChat 都能让用户随时随地通过网页、iOS、Android、Windows、MacOS 或 Linux 端无缝接入智能服务。\n\n这款工具非常适合普通用户、学生、职场人士以及需要私有化部署的企业团队使用。对于开发者而言，它也提供了便捷的自托管方案，支持一键部署到 Vercel 或 Zeabur 等平台。\n\nNextChat 的核心亮点在于其广泛的模型兼容性，原生支持 Claude、DeepSeek、GPT-4 及 Gemini Pro 等主流大模型，让用户在一个界面即可自由切换不同 AI 能力。此外，它还率先支持 MCP（Model Context Protocol）协议，增强了上下文处理能力。针对企业用户，NextChat 提供专业版解决方案，具备品牌定制、细粒度权限控制、内部知识库整合及安全审计等功能，满足公司对数据隐私和个性化管理的高标准要求。",87618,"2026-04-05T07:20:52",[13,26],{"id":44,"name":45,"github_repo":46,"description_zh":47,"stars":48,"difficulty_score":23,"last_commit_at":49,"category_tags":50,"status":16},2268,"ML-For-Beginners","microsoft\u002FML-For-Beginners","ML-For-Beginners 是由微软推出的一套系统化机器学习入门课程，旨在帮助零基础用户轻松掌握经典机器学习知识。这套课程将学习路径规划为 12 周，包含 26 节精炼课程和 52 道配套测验，内容涵盖从基础概念到实际应用的完整流程，有效解决了初学者面对庞大知识体系时无从下手、缺乏结构化指导的痛点。\n\n无论是希望转型的开发者、需要补充算法背景的研究人员，还是对人工智能充满好奇的普通爱好者，都能从中受益。课程不仅提供了清晰的理论讲解，还强调动手实践，让用户在循序渐进中建立扎实的技能基础。其独特的亮点在于强大的多语言支持，通过自动化机制提供了包括简体中文在内的 50 多种语言版本，极大地降低了全球不同背景用户的学习门槛。此外，项目采用开源协作模式，社区活跃且内容持续更新，确保学习者能获取前沿且准确的技术资讯。如果你正寻找一条清晰、友好且专业的机器学习入门之路，ML-For-Beginners 将是理想的起点。",84991,"2026-04-05T10:45:23",[14,51,52,53,15,54,26,13,55],"数据工具","视频","插件","其他","音频",{"id":57,"name":58,"github_repo":59,"description_zh":60,"stars":61,"difficulty_score":10,"last_commit_at":62,"category_tags":63,"status":16},3128,"ragflow","infiniflow\u002Fragflow","RAGFlow 
是一款领先的开源检索增强生成（RAG）引擎，旨在为大语言模型构建更精准、可靠的上下文层。它巧妙地将前沿的 RAG 技术与智能体（Agent）能力相结合，不仅支持从各类文档中高效提取知识，还能让模型基于这些知识进行逻辑推理和任务执行。\n\n在大模型应用中，幻觉问题和知识滞后是常见痛点。RAGFlow 通过深度解析复杂文档结构（如表格、图表及混合排版），显著提升了信息检索的准确度，从而有效减少模型“胡编乱造”的现象，确保回答既有据可依又具备时效性。其内置的智能体机制更进一步，使系统不仅能回答问题，还能自主规划步骤解决复杂问题。\n\n这款工具特别适合开发者、企业技术团队以及 AI 研究人员使用。无论是希望快速搭建私有知识库问答系统，还是致力于探索大模型在垂直领域落地的创新者，都能从中受益。RAGFlow 提供了可视化的工作流编排界面和灵活的 API 接口，既降低了非算法背景用户的上手门槛，也满足了专业开发者对系统深度定制的需求。作为基于 Apache 2.0 协议开源的项目，它正成为连接通用大模型与行业专有知识之间的重要桥梁。",77062,"2026-04-04T04:44:48",[15,14,13,26,54],{"id":65,"github_repo":66,"name":67,"description_en":68,"description_zh":69,"ai_summary_zh":69,"readme_en":70,"readme_zh":71,"quickstart_zh":72,"use_case_zh":73,"hero_image_url":74,"owner_login":75,"owner_name":76,"owner_avatar_url":77,"owner_bio":78,"owner_company":79,"owner_location":78,"owner_email":78,"owner_twitter":78,"owner_website":80,"owner_url":81,"languages":82,"stars":114,"forks":115,"last_commit_at":116,"license":117,"difficulty_score":10,"env_os":118,"env_gpu":118,"env_ram":118,"env_deps":119,"category_tags":122,"github_topics":123,"view_count":10,"oss_zip_url":78,"oss_zip_packed_at":78,"status":16,"created_at":140,"updated_at":141,"faqs":142,"releases":173},791,"qqlu\u002FEntity","Entity","EntitySeg Toolbox: Towards Open-World and High-Quality Image Segmentation","Entity 是一个致力于实现开放世界与高品质图像分割的开源工具箱。它汇集了团队在计算机视觉领域的多项前沿成果，主要解决了传统分割模型局限于预定义类别、难以识别未知物体，以及在处理超高分辨率图像时质量下降的问题。\n\nEntity 非常适合计算机视觉方向的研究人员和开发者使用。目前，Entity 已经实现了包括开放世界实体分割、针对超高分辨率图像的高质量分割、类无关半监督学习等多项核心算法。其独特之处在于不仅关注分割精度，还重新思考了开放词汇分割的评估指标，并探索了图像生成与分割的统一表示，为模型泛化能力提供了新思路。\n\n项目中的各个子模块代码已陆续开源，未来将更好地协同工作。如果你需要构建能识别未知对象的分割系统，或者希望优化大图处理流程，Entity 提供了坚实的基础设施。具体使用方法请查阅各子项目的 README 文档。","# EntitySeg Toolbox: Towards open-world and high-quality image segmentation\n\nEntitySeg is an open source toolbox for open-world and high-quality image segmentation. 
All works related to image segmentation from our group are open-sourced here.\n\nTo date, EntitySeg implements the following algorithms:\n\n* [Open-World Entity Segmentation (TPAMI2022)](Entity\u002FREADME.md) --- _released_ \n* [High Quality Segmentation for Ultra High-resolution Images (CVPR2022)](High-Quality-Segmention\u002FREADME.md) --- _released_\n* [CA-SSL: Class-Agnostic Semi-Supervised Learning for Detection and Segmentation (ECCV2022)]() ---  _released_\n* [High-Quality Entity Segmentation (ICCV2023 Oral)](Entityv2\u002FREADME.md) ---  _released_\n* [Rethinking Evaluation Metrics of Open-Vocabulary Segmentation (Arxiv)](Open-Metrics\u002FREADME.md) --- _released_\n* [AIMS: All-Inclusive Multi-Level Segmentation (NeurIPS2023 Spotlight)]() -- code to be released\n* [UniGS: Unified Representation for Image Generation and Segmentation (Arxiv)]() -- code to be released\n\n## Usage\n\nPlease refer to the README.md of each project. All projects will be merged to support each other soon.\n\n\n## Citing Our Work\n\n\n```BibTeX\n@article{qi2022open,\n  title={Open world entity segmentation},\n  author={Qi, Lu and Kuen, Jason and Wang, Yi and Gu, Jiuxiang and Zhao, Hengshuang and Torr, Philip and Lin, Zhe and Jia, Jiaya},\n  journal={TPAMI},\n  year={2022},\n}\n\n@inproceedings{shen2021high,\n  title={High Quality Segmentation for Ultra High-resolution Images},\n  author={Shen, Tiancheng and Zhang, Yuechen and Qi, Lu and Kuen, Jason and Xie, Xingyu and Wu, Jianlong and Lin, Zhe and Jia, Jiaya},\n  booktitle={CVPR},\n  year={2022}\n}\n\n@inproceedings{qi2022cassl,\n  title={CA-SSL: Class-Agnostic Semi-Supervised Learning for Detection and Segmentation},\n  author={Qi, Lu and Kuen, Jason and Lin, Zhe and Gu, Jiuxiang and Rao, Fengyun and Li, Dian and Guo, Weidong and Wen, Zhen and Yang, Ming-Hsuan and Jia, Jiaya},\n  booktitle={ECCV},\n  year={2022}\n}\n\n@inproceedings{qi2022fine,\n  title={High-Quality entity segmentation},\n  author={Qi, Lu and Kuen, Jason and Shen, Tiancheng and 
Gu, Jiuxiang and Guo, Weidong and Jia, Jiaya and Lin, Zhe and Yang, Ming-Hsuan},\n  booktitle={ICCV},\n  year={2023}\n}\n\n```\n","# EntitySeg 工具箱：迈向开放世界与高质量图像分割\n\nEntitySeg 是一个开源工具箱，致力于实现开放世界和高质量的图像分割。我们团队所有与图像分割相关的工作均在此开源。\n\n截至目前，EntitySeg 实现了以下算法：\n\n* [开放世界实体分割 (TPAMI2022)](Entity\u002FREADME.md) --- _已发布_ \n* [超高分辨率图像的高质量分割 (CVPR2022)](High-Quality-Segmention\u002FREADME.md) --- _已发布_\n* [CA-SSL：用于检测与分割的类别无关半监督学习 (ECCV2022)]() ---  _已发布_\n* [高质量实体分割 (ICCV2023 口头报告)](Entityv2\u002FREADME.md) ---  _已发布_\n* [重新思考开放词汇分割的评估指标 (Arxiv)](Open-Metrics\u002FREADME.md) --- _已发布_\n* [AIMS：全包容多层级分割 (NeurIPS2023 亮点展示)]() -- 代码即将发布\n* [UniGS：图像生成与分割的统一表示 (Arxiv)]() -- 代码即将发布\n\n## 使用方法\n\n请参见每个项目的 README.md 文件。所有项目将在近期整合以相互支持。\n\n\n## 引用我们的作品\n\n\n```BibTeX\n@article{qi2022open,\n  title={Open world entity segmentation},\n  author={Qi, Lu and Kuen, Jason and Wang, Yi and Gu, Jiuxiang and Zhao, Hengshuang and Torr, Philip and Lin, Zhe and Jia, Jiaya},\n  journal={TPAMI},\n  year={2022},\n}\n\n@inproceedings{shen2021high,\n  title={High Quality Segmentation for Ultra High-resolution Images},\n  author={Shen, Tiancheng and Zhang, Yuechen and Qi, Lu and Kuen, Jason and Xie, Xingyu and Wu, Jianlong and Lin, Zhe and Jia, Jiaya},\n  booktitle={CVPR},\n  year={2022}\n}\n\n@inproceedings{qi2022cassl,\n  title={CA-SSL: Class-Agnostic Semi-Supervised Learning for Detection and Segmentation},\n  author={Qi, Lu and Kuen, Jason and Lin, Zhe and Gu, Jiuxiang and Rao, Fengyun and Li, Dian and Guo, Weidong and Wen, Zhen and Yang, Ming-Hsuan and Jia, Jiaya},\n  booktitle={ECCV},\n  year={2022}\n}\n\n@inproceedings{qi2022fine,\n  title={High-Quality entity segmentation},\n  author={Qi, Lu and Kuen, Jason and Shen, Tiancheng and Gu, Jiuxiang and Guo, Weidong and Jia, Jiaya and Lin, Zhe and Yang, Ming-Hsuan},\n  booktitle={ICCV},\n  year={2023}\n}\n\n```","# EntitySeg Toolbox 快速上手指南\n\n## 简介\nEntitySeg 是一个面向开放世界和高质量图像分割的开源工具箱。本工具集成了多个前沿算法，支持从开放词汇到超高分辨率图像的分割任务。\n\n## 环境准备\n*   **操作系统**: 推荐使用 Linux (Ubuntu 
16.04+)，Windows 和 macOS 亦可支持。\n*   **硬件**: 强烈建议使用 NVIDIA GPU 以加速训练与推理。\n*   **Python**: 建议 Python 3.6 及以上版本。\n*   **深度学习框架**: 需安装 PyTorch 及 CUDA 环境（具体版本请参照各子项目 `requirements.txt`）。\n*   **网络加速**: 国内用户建议在安装 Python 包时使用国内镜像源（如清华源、阿里源）以提升下载速度。\n\n## 安装步骤\n1.  **克隆仓库**\n    将项目代码下载到本地：\n    ```bash\n    git clone \u003Crepository_url>\n    cd \u003Crepository_name>\n    ```\n\n2.  **选择算法模块**\n    根据需求进入对应的算法目录（部分代码尚未发布）：\n    ```bash\n    # 示例：进入开放世界实体分割模块\n    cd Entity\n    \n    # 示例：进入超高分辨率图像分割模块\n    cd High-Quality-Segmention\n    \n    # 示例：进入高质量实体分割 v2 模块\n    cd Entityv2\n    ```\n\n3.  **安装依赖**\n    在选定的模块目录下安装所需依赖：\n    ```bash\n    pip install -r requirements.txt\n    ```\n\n## 基本使用\n由于不同算法的配置独立，具体的运行命令请以各子项目内的 `README.md` 为准。\n\n*   **常用功能模块**\n    *   [Open-World Entity Segmentation (TPAMI2022)](Entity\u002FREADME.md)\n    *   [High Quality Segmentation for Ultra High-resolution Images (CVPR2022)](High-Quality-Segmention\u002FREADME.md)\n    *   [CA-SSL: Class-Agnostic Semi-Supervised Learning (ECCV2022)]()\n    *   [High-Quality Entity Segmentation (ICCV2023 Oral)](Entityv2\u002FREADME.md)\n\n*   **运行示例**\n    一般流程为配置数据集路径后运行训练或测试脚本。例如（具体命令请查阅子目录文档）：\n    ```bash\n    python train.py --config configs\u002Fdefault.yaml\n    ```\n\n*   **引用说明**\n    若在使用本工具进行学术研究，请在论文中引用相关文献。主要引用格式如下：\n    ```BibTeX\n    @article{qi2022open,\n      title={Open world entity segmentation},\n      author={Qi, Lu and Kuen, Jason and Wang, Yi and Gu, Jiuxiang and Zhao, Hengshuang and Torr, Philip and Lin, Zhe and Jia, Jiaya},\n      journal={TPAMI},\n      year={2022},\n    }\n    ```","某智慧农业团队正在开发无人机巡检系统，需对万兆像素级的高清航拍图进行精细化分析以评估作物健康。\n\n### 没有 Entity 时\n- 传统模型受限于封闭集，遇到新型害虫或杂草时完全无法识别，漏报率极高。\n- 超高分辨率图像直接输入会导致显存爆炸，必须强行裁剪从而丢失全局上下文信息。\n- 缺乏充足标注数据时，重新收集样本并从头训练新模型耗时数月，迭代效率极低。\n- 现有分割边界模糊，难以精确计算作物受损面积，影响后续决策准确性。\n\n### 使用 Entity 后\n- Entity 的开放世界能力可自动定位未见过的异常目标，无需预先定义具体类别即可工作。\n- 专为超高分辨率设计的模块，能够直接处理整张高清大图，完整保留空间上下文关系。\n- 结合半监督学习技术，仅需少量标注样本即可快速适配新出现的病虫害场景。\n- 
输出高保真分割掩码，边缘贴合度极高，满足农业统计对精度的严苛要求。\n\nEntity 实现了从封闭分类到开放感知、从低分到高分的跨越，大幅降低了农业视觉落地的技术门槛。","https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fqqlu_Entity_0d2cdb73.png","qqlu","Lu Qi","https:\u002F\u002Foss.gittoolsai.com\u002Favatars\u002Fqqlu_13e7104d.png",null,"Insta360","www.luqi.info","https:\u002F\u002Fgithub.com\u002Fqqlu",[83,87,91,95,99,103,107,111],{"name":84,"color":85,"percentage":86},"Jupyter Notebook","#DA5B0B",47.6,{"name":88,"color":89,"percentage":90},"Python","#3572A5",46.9,{"name":92,"color":93,"percentage":94},"Cuda","#3A4E3A",1.7,{"name":96,"color":97,"percentage":98},"C++","#f34b7d",1.6,{"name":100,"color":101,"percentage":102},"Cython","#fedf5b",1.2,{"name":104,"color":105,"percentage":106},"C","#555555",1.1,{"name":108,"color":109,"percentage":110},"Shell","#89e051",0,{"name":112,"color":113,"percentage":110},"Makefile","#427819",1043,60,"2026-04-01T09:12:24","NOASSERTION","未说明",{"notes":120,"python":118,"dependencies":121},"提供的 README 内容主要介绍工具功能、已实现的算法列表及论文引用，未包含具体的安装与环境配置说明。Usage 部分提示用户参考各子项目（如 Entity\u002FREADME.md）的 README 文件以获取详细信息。",[118],[14,13],[124,125,126,127,128,129,130,131,132,133,134,135,136,137,138,139],"image-segmentation","segmentation","pytorch","instance-segmentation","panoptic-segmentation","semantic-segmentation","object-detection","fcos","condinst","detectron2","pretrained-weights","pretrained-models","computer-vision","deep-learning","cnn","pretraining","2026-03-27T02:49:30.150509","2026-04-06T05:15:58.498684",[143,148,153,158,163,168],{"id":144,"question_zh":145,"answer_zh":146,"source_url":147},3400,"运行可视化脚本时出现配置键不存在的错误怎么办？","需要在 config.py 中添加 `cfg.TEST.CLASS_AGNOSTIC = True`。维护者确认更新代码后解决了此问题，用户反馈添加该配置后成功运行。","https:\u002F\u002Fgithub.com\u002Fqqlu\u002FEntity\u002Fissues\u002F1",{"id":149,"question_zh":150,"answer_zh":151,"source_url":152},3401,"使用预训练模型评估结果很低（AP 接近 0）是什么原因？","默认 COCO API 会考虑类别信息，而本项目 Ground Truth JSON 保留了类别信息但在评估时需忽略。需要使用项目提供的 modified_cocoapi。或者手动修改 
`entityseg\u002Fevaluator\u002Fentity_evaluation.py` 文件，将 `self._coco_api = COCO(json_file, cfg.TEST.CLASS_AGNOSTIC)` 改为 `self._coco_api = COCO(json_file)`。","https:\u002F\u002Fgithub.com\u002Fqqlu\u002FEntity\u002Fissues\u002F2",{"id":154,"question_zh":155,"answer_zh":156,"source_url":157},3402,"为什么实体加载使用 `.npz` 中的 `gt_bitmasks` 而不是 `instances_*.json` 中的 RLE `gt_masks`？","因为需要全图的 mask id 特征图来解决非重叠问题。如果使用 RLE，需要在数据加载器中解码，导致 CPU 开销大。建议使用 panoptic segmentation 代码生成的 PNG 文件。转换代码示例如下（其中 H、W 为整图高宽，dts 为分割结果列表）：\n```python\nimport pycocotools.mask as maskUtils\nimport numpy as np\nimport cv2\n\n# 生成全图 mask id 特征图：0 为背景，index + 1 为第 index 个实体\nmask_id = np.zeros((H, W), dtype=np.uint8)\nfor index, dt in enumerate(dts):\n    dt_mask = maskUtils.decode(dt[\"segmentation\"]).astype(np.uint8)\n    mask_id[dt_mask == 1] = index + 1\n# 注意 cv2.imwrite 的参数顺序为（文件名, 图像）\ncv2.imwrite(\"XXX.png\", mask_id)\n```","https:\u002F\u002Fgithub.com\u002Fqqlu\u002FEntity\u002Fissues\u002F5",{"id":159,"question_zh":160,"answer_zh":161,"source_url":162},3403,"Huggingface 上的权重下载后缺少部分配置文件（config files），在哪里可以找到？","配置文件位于 `Entityv2\u002FCropFormer\u002Fconfigs\u002Fentityv2\u002Fentity_segmentation` 目录下，而非根目录。","https:\u002F\u002Fgithub.com\u002Fqqlu\u002FEntity\u002Fissues\u002F32",{"id":164,"question_zh":165,"answer_zh":166,"source_url":167},3404,"运行该方法对显卡显存和计算时间有什么要求？","训练建议至少两块 2080 Ti。推理建议使用 3090 或 A100。如果显存有限，可以减小推理时的 chunk size（分块大小）或动态提取特征。","https:\u002F\u002Fgithub.com\u002Fqqlu\u002FEntity\u002Fissues\u002F17",{"id":169,"question_zh":170,"answer_zh":171,"source_url":172},3405,"PanoFCN 或 DETR 的全景分割结果如何转换为实体分割结果进行评估？","在代码未修改的情况下，将所有实体视为实例（instances）。即全景结果中的所有 segments（包括 thing 和 stuff）直接被视为 'instances' 进行评估，无需额外的后处理步骤。","https:\u002F\u002Fgithub.com\u002Fqqlu\u002FEntity\u002Fissues\u002F16",[]]