[{"data":1,"prerenderedAt":-1},["ShallowReactive",2],{"similar-IDEA-Research--detrex":3,"tool-IDEA-Research--detrex":64},[4,17,27,35,43,56],{"id":5,"name":6,"github_repo":7,"description_zh":8,"stars":9,"difficulty_score":10,"last_commit_at":11,"category_tags":12,"status":16},3808,"stable-diffusion-webui","AUTOMATIC1111\u002Fstable-diffusion-webui","stable-diffusion-webui 是一个基于 Gradio 构建的网页版操作界面，旨在让用户能够轻松地在本地运行和使用强大的 Stable Diffusion 图像生成模型。它解决了原始模型依赖命令行、操作门槛高且功能分散的痛点，将复杂的 AI 绘图流程整合进一个直观易用的图形化平台。\n\n无论是希望快速上手的普通创作者、需要精细控制画面细节的设计师，还是想要深入探索模型潜力的开发者与研究人员，都能从中获益。其核心亮点在于极高的功能丰富度：不仅支持文生图、图生图、局部重绘（Inpainting）和外绘（Outpainting）等基础模式，还独创了注意力机制调整、提示词矩阵、负向提示词以及“高清修复”等高级功能。此外，它内置了 GFPGAN 和 CodeFormer 等人脸修复工具，支持多种神经网络放大算法，并允许用户通过插件系统无限扩展能力。即使是显存有限的设备，stable-diffusion-webui 也提供了相应的优化选项，让高质量的 AI 艺术创作变得触手可及。",162132,3,"2026-04-05T11:01:52",[13,14,15],"开发框架","图像","Agent","ready",{"id":18,"name":19,"github_repo":20,"description_zh":21,"stars":22,"difficulty_score":23,"last_commit_at":24,"category_tags":25,"status":16},1381,"everything-claude-code","affaan-m\u002Feverything-claude-code","everything-claude-code 是一套专为 AI 编程助手（如 Claude Code、Codex、Cursor 等）打造的高性能优化系统。它不仅仅是一组配置文件，而是一个经过长期实战打磨的完整框架，旨在解决 AI 代理在实际开发中面临的效率低下、记忆丢失、安全隐患及缺乏持续学习能力等核心痛点。\n\n通过引入技能模块化、直觉增强、记忆持久化机制以及内置的安全扫描功能，everything-claude-code 能显著提升 AI 在复杂任务中的表现，帮助开发者构建更稳定、更智能的生产级 AI 代理。其独特的“研究优先”开发理念和针对 Token 消耗的优化策略，使得模型响应更快、成本更低，同时有效防御潜在的攻击向量。\n\n这套工具特别适合软件开发者、AI 研究人员以及希望深度定制 AI 工作流的技术团队使用。无论您是在构建大型代码库，还是需要 AI 协助进行安全审计与自动化测试，everything-claude-code 都能提供强大的底层支持。作为一个曾荣获 Anthropic 黑客大奖的开源项目，它融合了多语言支持与丰富的实战钩子（hooks），让 AI 真正成长为懂上",140436,2,"2026-04-05T23:32:43",[13,15,26],"语言模型",{"id":28,"name":29,"github_repo":30,"description_zh":31,"stars":32,"difficulty_score":23,"last_commit_at":33,"category_tags":34,"status":16},2271,"ComfyUI","Comfy-Org\u002FComfyUI","ComfyUI 是一款功能强大且高度模块化的视觉 AI 引擎，专为设计和执行复杂的 Stable Diffusion 图像生成流程而打造。它摒弃了传统的代码编写模式，采用直观的节点式流程图界面，让用户通过连接不同的功能模块即可构建个性化的生成管线。\n\n这一设计巧妙解决了高级 AI 绘图工作流配置复杂、灵活性不足的痛点。用户无需具备编程背景，也能自由组合模型、调整参数并实时预览效果，轻松实现从基础文生图到多步骤高清修复等各类复杂任务。ComfyUI 拥有极佳的兼容性，不仅支持 Windows、macOS 和 Linux 全平台，还广泛适配 NVIDIA、AMD、Intel 及苹果 Silicon 等多种硬件架构，并率先支持 SDXL、Flux、SD3 等前沿模型。\n\n无论是希望深入探索算法潜力的研究人员和开发者，还是追求极致创作自由度的设计师与资深 AI 绘画爱好者，ComfyUI 都能提供强大的支持。其独特的模块化架构允许社区不断扩展新功能，使其成为当前最灵活、生态最丰富的开源扩散模型工具之一，帮助用户将创意高效转化为现实。",107662,"2026-04-03T11:11:01",[13,14,15],{"id":36,"name":37,"github_repo":38,"description_zh":39,"stars":40,"difficulty_score":23,"last_commit_at":41,"category_tags":42,"status":16},3704,"NextChat","ChatGPTNextWeb\u002FNextChat","NextChat 是一款轻量且极速的 AI 助手，旨在为用户提供流畅、跨平台的大模型交互体验。它完美解决了用户在多设备间切换时难以保持对话连续性，以及面对众多 AI 模型不知如何统一管理的痛点。无论是日常办公、学习辅助还是创意激发，NextChat 都能让用户随时随地通过网页、iOS、Android、Windows、MacOS 或 Linux 端无缝接入智能服务。\n\n这款工具非常适合普通用户、学生、职场人士以及需要私有化部署的企业团队使用。对于开发者而言，它也提供了便捷的自托管方案，支持一键部署到 Vercel 或 Zeabur 等平台。\n\nNextChat 的核心亮点在于其广泛的模型兼容性，原生支持 Claude、DeepSeek、GPT-4 及 Gemini Pro 等主流大模型，让用户在一个界面即可自由切换不同 AI 能力。此外，它还率先支持 MCP（Model Context Protocol）协议，增强了上下文处理能力。针对企业用户，NextChat 提供专业版解决方案，具备品牌定制、细粒度权限控制、内部知识库整合及安全审计等功能，满足公司对数据隐私和个性化管理的高标准要求。",87618,"2026-04-05T07:20:52",[13,26],{"id":44,"name":45,"github_repo":46,"description_zh":47,"stars":48,"difficulty_score":23,"last_commit_at":49,"category_tags":50,"status":16},2268,"ML-For-Beginners","microsoft\u002FML-For-Beginners","ML-For-Beginners 是由微软推出的一套系统化机器学习入门课程，旨在帮助零基础用户轻松掌握经典机器学习知识。这套课程将学习路径规划为 12 周，包含 26 节精炼课程和 52 道配套测验，内容涵盖从基础概念到实际应用的完整流程，有效解决了初学者面对庞大知识体系时无从下手、缺乏结构化指导的痛点。\n\n无论是希望转型的开发者、需要补充算法背景的研究人员，还是对人工智能充满好奇的普通爱好者，都能从中受益。课程不仅提供了清晰的理论讲解，还强调动手实践，让用户在循序渐进中建立扎实的技能基础。其独特的亮点在于强大的多语言支持，通过自动化机制提供了包括简体中文在内的 
50 多种语言版本，极大地降低了全球不同背景用户的学习门槛。此外，项目采用开源协作模式，社区活跃且内容持续更新，确保学习者能获取前沿且准确的技术资讯。如果你正寻找一条清晰、友好且专业的机器学习入门之路，ML-For-Beginners 将是理想的起点。",84991,"2026-04-05T10:45:23",[14,51,52,53,15,54,26,13,55],"数据工具","视频","插件","其他","音频",{"id":57,"name":58,"github_repo":59,"description_zh":60,"stars":61,"difficulty_score":10,"last_commit_at":62,"category_tags":63,"status":16},3128,"ragflow","infiniflow\u002Fragflow","RAGFlow 是一款领先的开源检索增强生成（RAG）引擎，旨在为大语言模型构建更精准、可靠的上下文层。它巧妙地将前沿的 RAG 技术与智能体（Agent）能力相结合，不仅支持从各类文档中高效提取知识，还能让模型基于这些知识进行逻辑推理和任务执行。\n\n在大模型应用中，幻觉问题和知识滞后是常见痛点。RAGFlow 通过深度解析复杂文档结构（如表格、图表及混合排版），显著提升了信息检索的准确度，从而有效减少模型“胡编乱造”的现象，确保回答既有据可依又具备时效性。其内置的智能体机制更进一步，使系统不仅能回答问题，还能自主规划步骤解决复杂问题。\n\n这款工具特别适合开发者、企业技术团队以及 AI 研究人员使用。无论是希望快速搭建私有知识库问答系统，还是致力于探索大模型在垂直领域落地的创新者，都能从中受益。RAGFlow 提供了可视化的工作流编排界面和灵活的 API 接口，既降低了非算法背景用户的上手门槛，也满足了专业开发者对系统深度定制的需求。作为基于 Apache 2.0 协议开源的项目，它正成为连接通用大模型与行业专有知识之间的重要桥梁。",77062,"2026-04-04T04:44:48",[15,14,13,26,54],{"id":65,"github_repo":66,"name":67,"description_en":68,"description_zh":69,"ai_summary_zh":69,"readme_en":70,"readme_zh":71,"quickstart_zh":72,"use_case_zh":73,"hero_image_url":74,"owner_login":75,"owner_name":75,"owner_avatar_url":76,"owner_bio":77,"owner_company":78,"owner_location":78,"owner_email":78,"owner_twitter":78,"owner_website":79,"owner_url":80,"languages":81,"stars":98,"forks":99,"last_commit_at":100,"license":101,"difficulty_score":10,"env_os":102,"env_gpu":102,"env_ram":102,"env_deps":103,"category_tags":108,"github_topics":109,"view_count":10,"oss_zip_url":78,"oss_zip_packed_at":78,"status":16,"created_at":126,"updated_at":127,"faqs":128,"releases":157},1108,"IDEA-Research\u002Fdetrex","detrex","detrex is a research platform for DETR-based object detection, segmentation, pose estimation and other visual recognition tasks.","detrex是IDEA研究院开源的视觉识别研究平台，专注于基于Transformer的目标检测、图像分割和姿态估计等任务。它通过优化DETR系列模型的算法性能与训练效率，解决了传统检测方法在复杂场景中计算成本高、小目标识别弱等瓶颈问题，帮助开发者在COCO等数据集上实现最高1.1AP的精度提升。\n\n这个工具采用模块化设计，将检测框架拆解为可自由组合的组件，方便研究人员快速验证新模型架构。内置的LazyConfig系统和轻量化训练引擎大幅简化了配置流程，结合Detectron2与MMDetection的成熟实现，使开发者能以更少代码量完成算法复现与改进。项目特别适合计算机视觉方向的科研人员、算法工程师，以及需要部署高精度检测模型的AI应用开发者。\n\ndetrex的技术亮点在于其对Transformer检测范式的深度优化，包括动态匈牙利匹配策略、混合查询初始化方法等创新设计。项目配套完整的文档体系、预训练模型库和可视化工具，配合活跃的社区支持，有效降低了Transformer检测技术的使用门槛。目前代码库已通过Apache 2.0协议开源，支持PyTorch 1.10+环境运行。","\u003Cdiv align=\"center\">\n  \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FIDEA-Research_detrex_readme_f1bf0f390a05.png\" width=\"30%\">\n\u003C\u002Fdiv>\n\u003Ch2 align=\"center\">🦖detrex: Benchmarking Detection Transformers\u003C\u002Fh2>\n\u003Cp align=\"center\">\n    \u003Ca href=\"https:\u002F\u002Fgithub.com\u002FIDEA-Research\u002Fdetrex\u002Freleases\">\n        \u003Cimg alt=\"release\" src=\"https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Fv\u002Frelease\u002FIDEA-Research\u002Fdetrex\">\n    \u003C\u002Fa>\n    \u003Ca href=\"https:\u002F\u002Fdetrex.readthedocs.io\u002Fen\u002Flatest\u002Findex.html\">\n        \u003Cimg alt=\"docs\" src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002Fdocs-latest-blue\">\n    \u003C\u002Fa>\n    \u003Ca href='https:\u002F\u002Fdetrex.readthedocs.io\u002Fen\u002Flatest\u002F?badge=latest'>\n    \u003Cimg src='https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FIDEA-Research_detrex_readme_13d664e1afd7.png' alt='Documentation Status' \u002F>\n    \u003C\u002Fa>\n    \u003Ca href=\"https:\u002F\u002Fgithub.com\u002FIDEA-Research\u002Fdetrex\u002Fblob\u002Fmain\u002FLICENSE\">\n        \u003Cimg alt=\"GitHub\" 
src=\"https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Flicense\u002FIDEA-Research\u002Fdetrex.svg?color=blue\">\n    \u003C\u002Fa>\n    \u003Ca href=\"https:\u002F\u002Fgithub.com\u002FIDEA-Research\u002Fdetrex\u002Fpulls\">\n        \u003Cimg alt=\"PRs Welcome\" src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FPRs-welcome-pink.svg\">\n    \u003C\u002Fa>\n    \u003Ca href=\"https:\u002F\u002Fgithub.com\u002FIDEA-Research\u002Fdetrex\u002Fissues\">\n        \u003Cimg alt=\"open issues\" src=\"https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Fissues\u002FIDEA-Research\u002Fdetrex\">\n    \u003C\u002Fa>\n\u003C\u002Fp>\n\n\n\u003Cdiv align=\"center\">\n\n\u003C!-- \u003Ca href=\"https:\u002F\u002Farxiv.org\u002Fabs\u002F2306.07265\">📚Read detrex Benchmarking Paper\u003C\u002Fa> \u003Csup>\u003Ci>\u003Cfont size=\"3\" color=\"#FF0000\">New\u003C\u002Ffont>\u003C\u002Fi>\u003C\u002Fsup> |\n\u003Ca href=\"https:\u002F\u002Frentainhe.github.io\u002Fprojects\u002Fdetrex\u002F\">🏠Project Page\u003C\u002Fa> \u003Csup>\u003Ci>\u003Cfont size=\"3\" color=\"#FF0000\">New\u003C\u002Ffont>\u003C\u002Fi>\u003C\u002Fsup> |  [🏷️Cite detrex](#citation) -->\n\n[📚Read detrex Benchmarking Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2306.07265) | [🏠Project Page](https:\u002F\u002Frentainhe.github.io\u002Fprojects\u002Fdetrex\u002F) | [🏷️Cite detrex](#citation) | [🚢DeepDataSpace](https:\u002F\u002Fgithub.com\u002FIDEA-Research\u002Fdeepdataspace)\n\n\u003C\u002Fdiv>\n\n\n\u003Cdiv align=\"center\">\n\n[📘Documentation](https:\u002F\u002Fdetrex.readthedocs.io\u002Fen\u002Flatest\u002Findex.html) |\n[🛠️Installation](https:\u002F\u002Fdetrex.readthedocs.io\u002Fen\u002Flatest\u002Ftutorials\u002FInstallation.html) |\n[👀Model Zoo](https:\u002F\u002Fdetrex.readthedocs.io\u002Fen\u002Flatest\u002Ftutorials\u002FModel_Zoo.html) |\n[🚀Awesome DETR](https:\u002F\u002Fgithub.com\u002FIDEA-Research\u002Fawesome-detection-transformer) |\n[🆕News](#whats-new) |\n[🤔Reporting Issues](https:\u002F\u002Fgithub.com\u002FIDEA-Research\u002Fdetrex\u002Fissues\u002Fnew\u002Fchoose)\n\n\u003C\u002Fdiv>\n\n\n## Introduction\n\ndetrex is an open-source toolbox that provides state-of-the-art Transformer-based detection algorithms. It is built on top of [Detectron2](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fdetectron2) and its module design is partially borrowed from [MMDetection](https:\u002F\u002Fgithub.com\u002Fopen-mmlab\u002Fmmdetection) and [DETR](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fdetr). Many thanks for their nicely organized code. The main branch works with **Pytorch 1.10+** or higher (we recommend **Pytorch 1.12**).\n\n\u003Cdiv align=\"center\">\n  \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FIDEA-Research_detrex_readme_3a76450a7af1.png\" width=\"100%\"\u002F>\n\u003C\u002Fdiv>\n\n\u003Cdetails open>\n\u003Csummary> Major Features \u003C\u002Fsummary>\n\n- **Modular Design.** detrex decomposes the Transformer-based detection framework into various components which help users easily build their own customized models.\n\n- **Strong Baselines.** detrex provides a series of strong baselines for Transformer-based detection models. 
We have further boosted the model performance from **0.2 AP** to **1.1 AP** through optimizing hyper-parameters among most of the supported algorithms.\n\n- **Easy to Use.** detrex is designed to be **light-weight** and easy for users to use:\n  - [LazyConfig System](https:\u002F\u002Fdetectron2.readthedocs.io\u002Fen\u002Flatest\u002Ftutorials\u002Flazyconfigs.html) for more flexible syntax and cleaner config files.\n  - Light-weight [training engine](.\u002Ftools\u002Ftrain_net.py) modified from detectron2 [lazyconfig_train_net.py](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fdetectron2\u002Fblob\u002Fmain\u002Ftools\u002Flazyconfig_train_net.py)\n\nApart from detrex, we also released a repo [Awesome Detection Transformer](https:\u002F\u002Fgithub.com\u002FIDEA-Research\u002Fawesome-detection-transformer) to present papers about Transformer for detection and segmentation.\n\n\u003C\u002Fdetails>\n\n## Fun Facts\nThe repo name detrex has several interpretations:\n- \u003Cfont color=blue> \u003Cb> detr-ex \u003C\u002Fb> \u003C\u002Ffont>: We take our hats off to DETR and regard this repo as an extension of Transformer-based detection algorithms.\n\n- \u003Cfont color=#db7093> \u003Cb> det-rex \u003C\u002Fb> \u003C\u002Ffont>: rex literally means 'king' in Latin. We hope this repo can help advance the state of the art on object detection by providing the best Transformer-based detection algorithms from the research community.\n\n- \u003Cfont color=#008000> \u003Cb> de-t.rex \u003C\u002Fb> \u003C\u002Ffont>: de means 'the' in Dutch. T.rex, also called Tyrannosaurus Rex, means 'king of the tyrant lizards' and connects to our research work 'DINO', which is short for Dinosaur.\n\n## What's New\nv0.5.0 was released on 16\u002F07\u002F2023:\n- Support [Focus-DETR (ICCV'2023)](.\u002Fprojects\u002Ffocus_detr\u002F).\n- Support [SQR-DETR (CVPR'2023)](https:\u002F\u002Fgithub.com\u002FIDEA-Research\u002Fdetrex\u002Ftree\u002Fmain\u002Fprojects\u002Fsqr_detr), credits to [Fangyi Chen](https:\u002F\u002Fgithub.com\u002FFangyi-Chen)\n- Support [Align-DETR (ArXiv'2023)](.\u002Fprojects\u002Falign_detr\u002F), credits to [Zhi Cai](https:\u002F\u002Fgithub.com\u002FFelixCaae)\n- Support [EVA-01 (CVPR'2023 Highlight)](https:\u002F\u002Fgithub.com\u002Fbaaivision\u002FEVA\u002Ftree\u002Fmaster\u002FEVA-01) and [EVA-02 (ArXiv'2023)](https:\u002F\u002Fgithub.com\u002Fbaaivision\u002FEVA\u002Ftree\u002Fmaster\u002FEVA-02) backbones, please check [DINO-EVA](.\u002Fprojects\u002Fdino_eva\u002F) for more benchmarking results.\n\nPlease see [changelog.md](.\u002Fchanglog.md) for details and release history.\n\n## Installation\n\nPlease refer to [Installation Instructions](https:\u002F\u002Fdetrex.readthedocs.io\u002Fen\u002Flatest\u002Ftutorials\u002FInstallation.html) for the details of installation.\n\n## Getting Started\n\nPlease refer to [Getting Started with detrex](https:\u002F\u002Fdetrex.readthedocs.io\u002Fen\u002Flatest\u002Ftutorials\u002FGetting_Started.html) for the basic usage of detrex. 
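\n\nAs a minimal sketch of that basic usage (the config file name here is illustrative and taken from the quickstart later on this page; any config under projects\u002F can be passed the same way), training is launched through the bundled LazyConfig training engine:\n\n```bash\n# Hedged example: launch training on 8 GPUs with an illustrative DINO config.\n# --config-file and --num-gpus are standard flags of the detectron2-style\n# argument parser used by tools\u002Ftrain_net.py; the config path is an assumption.\npython tools\u002Ftrain_net.py --config-file projects\u002Fdino\u002Fconfigs\u002Fdino_swint_4scale_300ep.py --num-gpus 8\n```\n\n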
We also provides other tutorials for:\n- [Learn about the config system of detrex](https:\u002F\u002Fdetrex.readthedocs.io\u002Fen\u002Flatest\u002Ftutorials\u002FConfig_System.html)\n- [How to convert the pretrained weights from original detr repo into detrex format](https:\u002F\u002Fdetrex.readthedocs.io\u002Fen\u002Flatest\u002Ftutorials\u002FConverters.html)\n- [Visualize your training data and testing results on COCO dataset](https:\u002F\u002Fdetrex.readthedocs.io\u002Fen\u002Flatest\u002Ftutorials\u002FTools.html#visualization)\n- [Analyze the model under detrex](https:\u002F\u002Fdetrex.readthedocs.io\u002Fen\u002Flatest\u002Ftutorials\u002FTools.html#model-analysis)\n- [Download and initialize with the pretrained backbone weights](https:\u002F\u002Fdetrex.readthedocs.io\u002Fen\u002Flatest\u002Ftutorials\u002FUsing_Pretrained_Backbone.html)\n- [Frequently asked questions](https:\u002F\u002Fgithub.com\u002FIDEA-Research\u002Fdetrex\u002Fissues\u002F109)\n- [A simple onnx convert tutorial provided by powermano](https:\u002F\u002Fgithub.com\u002FIDEA-Research\u002Fdetrex\u002Fissues\u002F192)\n- Simple training techniques: [Model-EMA](https:\u002F\u002Fgithub.com\u002FIDEA-Research\u002Fdetrex\u002Fpull\u002F201), [Mixed Precision Training](https:\u002F\u002Fgithub.com\u002FIDEA-Research\u002Fdetrex\u002Fpull\u002F198), [Activation Checkpoint](https:\u002F\u002Fgithub.com\u002FIDEA-Research\u002Fdetrex\u002Fpull\u002F200)\n- [Simple tutorial about custom dataset training](https:\u002F\u002Fgithub.com\u002FIDEA-Research\u002Fdetrex\u002Fpull\u002F187)\n\nAlthough some of the tutorials are currently presented with relatively simple content, we will constantly improve our documentation to help users achieve a better user experience.\n\n## Documentation\n\nPlease see [documentation](https:\u002F\u002Fdetrex.readthedocs.io\u002Fen\u002Flatest\u002Findex.html) for full API documentation and tutorials.\n\n## Model Zoo\nResults and models are available in [model zoo](https:\u002F\u002Fdetrex.readthedocs.io\u002Fen\u002Flatest\u002Ftutorials\u002FModel_Zoo.html).\n\n\u003Cdetails open>\n\u003Csummary> Supported methods \u003C\u002Fsummary>\n\n- [x] [DETR (ECCV'2020)](.\u002Fprojects\u002Fdetr\u002F)\n- [x] [Deformable-DETR (ICLR'2021 Oral)](.\u002Fprojects\u002Fdeformable_detr\u002F)\n- [x] [PnP-DETR (ICCV'2021)](.\u002Fprojects\u002Fpnp_detr\u002F)\n- [x] [Conditional-DETR (ICCV'2021)](.\u002Fprojects\u002Fconditional_detr\u002F)\n- [x] [Anchor-DETR (AAAI 2022)](.\u002Fprojects\u002Fanchor_detr\u002F)\n- [x] [DAB-DETR (ICLR'2022)](.\u002Fprojects\u002Fdab_detr\u002F)\n- [x] [DAB-Deformable-DETR (ICLR'2022)](.\u002Fprojects\u002Fdab_deformable_detr\u002F)\n- [x] [DN-DETR (CVPR'2022 Oral)](.\u002Fprojects\u002Fdn_detr\u002F)\n- [x] [DN-Deformable-DETR (CVPR'2022 Oral)](.\u002Fprojects\u002Fdn_deformable_detr\u002F)\n- [x] [Group-DETR (ICCV'2023)](.\u002Fprojects\u002Fgroup_detr\u002F)\n- [x] [DETA (ArXiv'2022)](.\u002Fprojects\u002Fdeta\u002F)\n- [x] [DINO (ICLR'2023)](.\u002Fprojects\u002Fdino\u002F)\n- [x] [H-Deformable-DETR (CVPR'2023)](.\u002Fprojects\u002Fh_deformable_detr\u002F)\n- [x] [MaskDINO (CVPR'2023)](.\u002Fprojects\u002Fmaskdino\u002F)\n- [x] [CO-MOT (ArXiv'2023)](.\u002Fprojects\u002Fco_mot\u002F)\n- [x] [SQR-DETR (CVPR'2023)](.\u002Fprojects\u002Fsqr_detr\u002F)\n- [x] [Align-DETR (ArXiv'2023)](.\u002Fprojects\u002Falign_detr\u002F)\n- [x] [EVA-01 (CVPR'2023 Highlight)](.\u002Fprojects\u002Fdino_eva\u002F)\n- [x] [EVA-02 (ArXiv'2023)](.\u002Fprojects\u002Fdino_eva\u002F)\n- 
[x] [Focus-DETR (ICCV'2023)](.\u002Fprojects\u002Ffocus_detr\u002F)\n\nPlease see [projects](.\u002Fprojects\u002F) for the details about projects that are built based on detrex.\n\n\u003C\u002Fdetails>\n\n\n## License\n\nThis project is released under the [Apache 2.0 license](LICENSE).\n\n\n## Acknowledgement\n- detrex is an open-source toolbox for Transformer-based detection algorithms created by researchers of **IDEACVR**. We appreciate all contributions to detrex!\n- detrex is built based on [Detectron2](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fdetectron2) and part of its module design is borrowed from [MMDetection](https:\u002F\u002Fgithub.com\u002Fopen-mmlab\u002Fmmdetection), [DETR](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fdetr), and [Deformable-DETR](https:\u002F\u002Fgithub.com\u002Ffundamentalvision\u002FDeformable-DETR).\n\n\n## Citation\nIf you use this toolbox in your research or wish to refer to the baseline results published here, please use the following BibTeX entries:\n\n- Citing **detrex**:\n\n```BibTeX\n@misc{ren2023detrex,\n      title={detrex: Benchmarking Detection Transformers}, \n      author={Tianhe Ren and Shilong Liu and Feng Li and Hao Zhang and Ailing Zeng and Jie Yang and Xingyu Liao and Ding Jia and Hongyang Li and He Cao and Jianan Wang and Zhaoyang Zeng and Xianbiao Qi and Yuhui Yuan and Jianwei Yang and Lei Zhang},\n      year={2023},\n      eprint={2306.07265},\n      archivePrefix={arXiv},\n      primaryClass={cs.CV}\n}\n```\n\n\u003Cdetails>\n\u003Csummary> Citing Supported Algorithms \u003C\u002Fsummary>\n\n```BibTex\n@inproceedings{carion2020end,\n  title={End-to-end object detection with transformers},\n  author={Carion, Nicolas and Massa, Francisco and Synnaeve, Gabriel and Usunier, Nicolas and Kirillov, Alexander and Zagoruyko, Sergey},\n  booktitle={European conference on computer vision},\n  pages={213--229},\n  year={2020},\n  organization={Springer}\n}\n\n@inproceedings{\n  zhu2021deformable,\n  title={Deformable {\\{}DETR{\\}}: Deformable Transformers for End-to-End Object Detection},\n  author={Xizhou Zhu and Weijie Su and Lewei Lu and Bin Li and Xiaogang Wang and Jifeng Dai},\n  booktitle={International Conference on Learning Representations},\n  year={2021},\n  url={https:\u002F\u002Fopenreview.net\u002Fforum?id=gZ9hCDWe6ke}\n}\n\n@inproceedings{meng2021-CondDETR,\n  title       = {Conditional DETR for Fast Training Convergence},\n  author      = {Meng, Depu and Chen, Xiaokang and Fan, Zejia and Zeng, Gang and Li, Houqiang and Yuan, Yuhui and Sun, Lei and Wang, Jingdong},\n  booktitle   = {Proceedings of the IEEE International Conference on Computer Vision (ICCV)},\n  year        = {2021}\n}\n\n@inproceedings{\n  liu2022dabdetr,\n  title={{DAB}-{DETR}: Dynamic Anchor Boxes are Better Queries for {DETR}},\n  author={Shilong Liu and Feng Li and Hao Zhang and Xiao Yang and Xianbiao Qi and Hang Su and Jun Zhu and Lei Zhang},\n  booktitle={International Conference on Learning Representations},\n  year={2022},\n  url={https:\u002F\u002Fopenreview.net\u002Fforum?id=oMI9PjOb9Jl}\n}\n\n@inproceedings{li2022dn,\n  title={Dn-detr: Accelerate detr training by introducing query denoising},\n  author={Li, Feng and Zhang, Hao and Liu, Shilong and Guo, Jian and Ni, Lionel M and Zhang, Lei},\n  booktitle={Proceedings of the IEEE\u002FCVF Conference on Computer Vision and Pattern Recognition},\n  pages={13619--13627},\n  year={2022}\n}\n\n@inproceedings{\n  zhang2023dino,\n  title={{DINO}: {DETR} with Improved DeNoising Anchor 
Boxes for End-to-End Object Detection},\n  author={Hao Zhang and Feng Li and Shilong Liu and Lei Zhang and Hang Su and Jun Zhu and Lionel Ni and Heung-Yeung Shum},\n  booktitle={The Eleventh International Conference on Learning Representations },\n  year={2023},\n  url={https:\u002F\u002Fopenreview.net\u002Fforum?id=3mRwyG5one}\n}\n\n@InProceedings{Chen_2023_ICCV,\n  author    = {Chen, Qiang and Chen, Xiaokang and Wang, Jian and Zhang, Shan and Yao, Kun and Feng, Haocheng and Han, Junyu and Ding, Errui and Zeng, Gang and Wang, Jingdong},\n  title     = {Group DETR: Fast DETR Training with Group-Wise One-to-Many Assignment},\n  booktitle = {Proceedings of the IEEE\u002FCVF International Conference on Computer Vision (ICCV)},\n  month     = {October},\n  year      = {2023},\n  pages     = {6633-6642}\n}\n\n@InProceedings{Jia_2023_CVPR,\n  author    = {Jia, Ding and Yuan, Yuhui and He, Haodi and Wu, Xiaopei and Yu, Haojun and Lin, Weihong and Sun, Lei and Zhang, Chao and Hu, Han},\n  title     = {DETRs With Hybrid Matching},\n  booktitle = {Proceedings of the IEEE\u002FCVF Conference on Computer Vision and Pattern Recognition (CVPR)},\n  month     = {June},\n  year      = {2023},\n  pages     = {19702-19712}\n}\n\n@InProceedings{Li_2023_CVPR,\n  author    = {Li, Feng and Zhang, Hao and Xu, Huaizhe and Liu, Shilong and Zhang, Lei and Ni, Lionel M. and Shum, Heung-Yeung},\n  title     = {Mask DINO: Towards a Unified Transformer-Based Framework for Object Detection and Segmentation},\n  booktitle = {Proceedings of the IEEE\u002FCVF Conference on Computer Vision and Pattern Recognition (CVPR)},\n  month     = {June},\n  year      = {2023},\n  pages     = {3041-3050}\n}\n\n@article{yan2023bridging,\n  title={Bridging the Gap Between End-to-end and Non-End-to-end Multi-Object Tracking},\n  author={Yan, Feng and Luo, Weixin and Zhong, Yujie and Gan, Yiyang and Ma, Lin},\n  journal={arXiv preprint arXiv:2305.12724},\n  year={2023}\n}\n\n@InProceedings{Chen_2023_CVPR,\n  author    = {Chen, Fangyi and Zhang, Han and Hu, Kai and Huang, Yu-Kai and Zhu, Chenchen and Savvides, Marios},\n  title     = {Enhanced Training of Query-Based Object Detection via Selective Query Recollection},\n  booktitle = {Proceedings of the IEEE\u002FCVF Conference on Computer Vision and Pattern Recognition (CVPR)},\n  month     = {June},\n  year      = {2023},\n  pages     = {23756-23765}\n}\n```\n\n\n\u003C\u002Fdetails>\n\n\n\n","\u003Cdiv align=\"center\">\n  \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FIDEA-Research_detrex_readme_f1bf0f390a05.png\" width=\"30%\">\n\u003C\u002Fdiv>\n\u003Ch2 align=\"center\">🦖detrex: 检测 Transformer (变换器) 基准测试\u003C\u002Fh2>\n\u003Cp align=\"center\">\n    \u003Ca href=\"https:\u002F\u002Fgithub.com\u002FIDEA-Research\u002Fdetrex\u002Freleases\">\n        \u003Cimg alt=\"release\" src=\"https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Fv\u002Frelease\u002FIDEA-Research\u002Fdetrex\">\n    \u003C\u002Fa>\n    \u003Ca href=\"https:\u002F\u002Fdetrex.readthedocs.io\u002Fen\u002Flatest\u002Findex.html\">\n        \u003Cimg alt=\"docs\" src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002Fdocs-latest-blue\">\n    \u003C\u002Fa>\n    \u003Ca href='https:\u002F\u002Fdetrex.readthedocs.io\u002Fen\u002Flatest\u002F?badge=latest'>\n    \u003Cimg src='https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FIDEA-Research_detrex_readme_13d664e1afd7.png' alt='Documentation Status' \u002F>\n    \u003C\u002Fa>\n    \u003Ca 
href=\"https:\u002F\u002Fgithub.com\u002FIDEA-Research\u002Fdetrex\u002Fblob\u002Fmain\u002FLICENSE\">\n        \u003Cimg alt=\"GitHub\" src=\"https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Flicense\u002FIDEA-Research\u002Fdetrex.svg?color=blue\">\n    \u003C\u002Fa>\n    \u003Ca href=\"https:\u002F\u002Fgithub.com\u002FIDEA-Research\u002Fdetrex\u002Fpulls\">\n        \u003Cimg alt=\"PRs Welcome\" src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FPRs-welcome-pink.svg\">\n    \u003C\u002Fa>\n    \u003Ca href=\"https:\u002F\u002Fgithub.com\u002FIDEA-Research\u002Fdetrex\u002Fissues\">\n        \u003Cimg alt=\"open issues\" src=\"https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Fissues\u002FIDEA-Research\u002Fdetrex\">\n    \u003C\u002Fa>\n\u003C\u002Fp>\n\n\n\u003Cdiv align=\"center\">\n\n\u003C!-- \u003Ca href=\"https:\u002F\u002Farxiv.org\u002Fabs\u002F2306.07265\">📚Read detrex Benchmarking Paper\u003C\u002Fa> \u003Csup>\u003Ci>\u003Cfont size=\"3\" color=\"#FF0000\">New\u003C\u002Ffont>\u003C\u002Fi>\u003C\u002Fsup> |\n\u003Ca href=\"https:\u002F\u002Frentainhe.github.io\u002Fprojects\u002Fdetrex\u002F\">🏠Project Page\u003C\u002Fa> \u003Csup>\u003Ci>\u003Cfont size=\"3\" color=\"#FF0000\">New\u003C\u002Ffont>\u003C\u002Fi>\u003C\u002Fsup> |  [🏷️Cite detrex](#citation) -->\n\n[📚阅读 detrex 基准测试论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F2306.07265) | [🏠项目主页](https:\u002F\u002Frentainhe.github.io\u002Fprojects\u002Fdetrex\u002F) | [🏷️引用 detrex](#citation) | [🚢DeepDataSpace](https:\u002F\u002Fgithub.com\u002FIDEA-Research\u002Fdeepdataspace)\n\n\u003C\u002Fdiv>\n\n\n\u003Cdiv align=\"center\">\n\n[📘文档](https:\u002F\u002Fdetrex.readthedocs.io\u002Fen\u002Flatest\u002Findex.html) |\n[🛠️安装](https:\u002F\u002Fdetrex.readthedocs.io\u002Fen\u002Flatest\u002Ftutorials\u002FInstallation.html) |\n[👀模型库 (Model Zoo)](https:\u002F\u002Fdetrex.readthedocs.io\u002Fen\u002Flatest\u002Ftutorials\u002FModel_Zoo.html) |\n[🚀Awesome DETR](https:\u002F\u002Fgithub.com\u002FIDEA-Research\u002Fawesome-detection-transformer) |\n[🆕动态](#whats-new) |\n[🤔报告问题](https:\u002F\u002Fgithub.com\u002FIDEA-Research\u002Fdetrex\u002Fissues\u002Fnew\u002Fchoose)\n\n\u003C\u002Fdiv>\n\n\n## 简介\n\ndetrex 是一个开源工具箱，提供了最先进的基于 Transformer (变换器) 的检测算法。它构建于 [Detectron2](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fdetectron2) 之上，其模块设计部分借鉴自 [MMDetection](https:\u002F\u002Fgithub.com\u002Fopen-mmlab\u002Fmmdetection) 和 [DETR](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fdetr)。非常感谢他们组织良好的代码。主分支适用于 **Pytorch 1.10+ (深度学习框架)** 或更高版本（我们推荐 **Pytorch 1.12**）。\n\n\u003Cdiv align=\"center\">\n  \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FIDEA-Research_detrex_readme_3a76450a7af1.png\" width=\"100%\"\u002F>\n\u003C\u002Fdiv>\n\n\u003Cdetails open>\n\u003Csummary> 主要特性 \u003C\u002Fsummary>\n\n- **模块化设计。** detrex 将基于 Transformer 的检测框架分解为各种组件，帮助用户轻松构建自己的定制模型。\n\n- **强基线 (Baselines)。** detrex 为基于 Transformer 的检测模型提供了一系列强大的基线。通过优化大多数支持算法中的超参数 (Hyper-parameters)，我们进一步将模型性能从 **0.2 AP (平均精度)** 提升到了 **1.1 AP (平均精度)**。\n\n- **易于使用。** detrex 旨在成为 **轻量级** 且易于用户使用的工具：\n  - [LazyConfig (延迟配置) 系统](https:\u002F\u002Fdetectron2.readthedocs.io\u002Fen\u002Flatest\u002Ftutorials\u002Flazyconfigs.html) 提供更灵活的语法和更清晰的配置文件。\n  - 轻量级 [训练引擎](.\u002Ftools\u002Ftrain_net.py)，修改自 detectron2 的 [lazyconfig_train_net.py](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fdetectron2\u002Fblob\u002Fmain\u002Ftools\u002Flazyconfig_train_net.py)\n\n除了 detrex，我们还发布了一个仓库 [Awesome Detection 
Transformer](https:\u002F\u002Fgithub.com\u002FIDEA-Research\u002Fawesome-detection-transformer)，用于展示关于用于检测和分割的 Transformer 的论文。\n\n\u003C\u002Fdetails>\n\n## 趣闻\n仓库名称 detrex 有几种解读：\n- \u003Cfont color=blue> \u003Cb> detr-ex \u003C\u002Fb> \u003C\u002Ffont>: 我们向 DETR 致敬，并将此仓库视为基于 Transformer 的检测算法的扩展。\n\n- \u003Cfont color=#db7093> \u003Cb> det-rex \u003C\u002Fb> \u003C\u002Ffont>: rex 在拉丁语中字面意思是“国王”。我们希望这个仓库能够通过提供研究社区中最好的基于 Transformer 的检测算法，帮助推动目标检测 (Object Detection) 的最先进水平 (State of the Art)。\n\n- \u003Cfont color=#008000> \u003Cb> de-t.rex \u003C\u002Fb> \u003C\u002Ffont>: de 在荷兰语中意为“该\u002F这个”。T.rex，也称为 Tyrannosaurus Rex (霸王龙)，意为“暴龙之王”，并与我们的研究工作 'DINO' 相关联，DINO 是 Dinosaur (恐龙) 的缩写。\n\n## 动态\nv0.5.0 于 2023 年 7 月 16 日发布：\n- 支持 [Focus-DETR (ICCV'2023)](.\u002Fprojects\u002Ffocus_detr\u002F)。\n- 支持 [SQR-DETR (CVPR'2023)](https:\u002F\u002Fgithub.com\u002FIDEA-Research\u002Fdetrex\u002Ftree\u002Fmain\u002Fprojects\u002Fsqr_detr)，致谢 [Fangyi Chen](https:\u002F\u002Fgithub.com\u002FFangyi-Chen)\n- 支持 [Align-DETR (ArXiv'2023)](.\u002Fprojects\u002Falign_detr\u002F)，致谢 [Zhi Cai](https:\u002F\u002Fgithub.com\u002FFelixCaae)\n- 支持 [EVA-01 (CVPR'2023 Highlight)](https:\u002F\u002Fgithub.com\u002Fbaaivision\u002FEVA\u002Ftree\u002Fmaster\u002FEVA-01) 和 [EVA-02 (ArXiv'2023)](https:\u002F\u002Fgithub.com\u002Fbaaivision\u002FEVA\u002Ftree\u002Fmaster\u002FEVA-02) Backbones (骨干网络)，请查看 [DINO-EVA](.\u002Fprojects\u002Fdino_eva\u002F) 了解更多基准测试结果。\n\n请参阅 [changelog.md (变更日志)](.\u002Fchanglog.md) 了解详情和发布历史。\n\n## 安装\n\n请参阅 [安装说明](https:\u002F\u002Fdetrex.readthedocs.io\u002Fen\u002Flatest\u002Ftutorials\u002FInstallation.html) 获取安装详情。\n\n## 快速开始\n\n请参考 [detrex 快速入门](https:\u002F\u002Fdetrex.readthedocs.io\u002Fen\u002Flatest\u002Ftutorials\u002FGetting_Started.html) 了解 detrex 的基本用法。我们还提供了其他教程，包括：\n- [了解 detrex 的配置系统 (config system)](https:\u002F\u002Fdetrex.readthedocs.io\u002Fen\u002Flatest\u002Ftutorials\u002FConfig_System.html)\n- [如何将原始 detr 仓库 (repo) 的预训练权重 (pretrained weights) 转换为 detrex 格式](https:\u002F\u002Fdetrex.readthedocs.io\u002Fen\u002Flatest\u002Ftutorials\u002FConverters.html)\n- [在 COCO 数据集 (dataset) 上可视化训练数据和测试结果](https:\u002F\u002Fdetrex.readthedocs.io\u002Fen\u002Flatest\u002Ftutorials\u002FTools.html#visualization)\n- [分析 detrex 下的模型](https:\u002F\u002Fdetrex.readthedocs.io\u002Fen\u002Flatest\u002Ftutorials\u002FTools.html#model-analysis)\n- [下载并使用预训练骨干网络 (backbone) 权重进行初始化](https:\u002F\u002Fdetrex.readthedocs.io\u002Fen\u002Flatest\u002Ftutorials\u002FUsing_Pretrained_Backbone.html)\n- [常见问题解答](https:\u002F\u002Fgithub.com\u002FIDEA-Research\u002Fdetrex\u002Fissues\u002F109)\n- [powermano 提供的简单 ONNX (模型格式) 转换教程](https:\u002F\u002Fgithub.com\u002FIDEA-Research\u002Fdetrex\u002Fissues\u002F192)\n- 简单训练技巧：[Model-EMA (模型指数移动平均)](https:\u002F\u002Fgithub.com\u002FIDEA-Research\u002Fdetrex\u002Fpull\u002F201)、[Mixed Precision Training (混合精度训练)](https:\u002F\u002Fgithub.com\u002FIDEA-Research\u002Fdetrex\u002Fpull\u002F198)、[Activation Checkpoint (激活检查点)](https:\u002F\u002Fgithub.com\u002FIDEA-Research\u002Fdetrex\u002Fpull\u002F200)\n- [关于自定义数据集训练的简单教程](https:\u002F\u002Fgithub.com\u002FIDEA-Research\u002Fdetrex\u002Fpull\u002F187)\n\n虽然部分教程目前内容相对简单，但我们将不断改进文档，帮助用户获得更好的体验。\n\n## 文档\n\n请参阅 [文档](https:\u002F\u002Fdetrex.readthedocs.io\u002Fen\u002Flatest\u002Findex.html) 获取完整的 API (应用程序接口) 文档和教程。\n\n## 模型库 (Model Zoo)\n结果和模型可在 [模型库](https:\u002F\u002Fdetrex.readthedocs.io\u002Fen\u002Flatest\u002Ftutorials\u002FModel_Zoo.html) 中找到。\n\n\u003Cdetails open>\n\u003Csummary> 支持的方法 
\u003C\u002Fsummary>\n\n- [x] [DETR (ECCV'2020)](.\u002Fprojects\u002Fdetr\u002F)\n- [x] [Deformable-DETR (ICLR'2021 Oral)](.\u002Fprojects\u002Fdeformable_detr\u002F)\n- [x] [PnP-DETR (ICCV'2021)](.\u002Fprojects\u002Fpnp_detr\u002F)\n- [x] [Conditional-DETR (ICCV'2021)](.\u002Fprojects\u002Fconditional_detr\u002F)\n- [x] [Anchor-DETR (AAAI 2022)](.\u002Fprojects\u002Fanchor_detr\u002F)\n- [x] [DAB-DETR (ICLR'2022)](.\u002Fprojects\u002Fdab_detr\u002F)\n- [x] [DAB-Deformable-DETR (ICLR'2022)](.\u002Fprojects\u002Fdab_deformable_detr\u002F)\n- [x] [DN-DETR (CVPR'2022 Oral)](.\u002Fprojects\u002Fdn_detr\u002F)\n- [x] [DN-Deformable-DETR (CVPR'2022 Oral)](.\u002Fprojects\u002Fdn_deformable_detr\u002F)\n- [x] [Group-DETR (ICCV'2023)](.\u002Fprojects\u002Fgroup_detr\u002F)\n- [x] [DETA (ArXiv'2022)](.\u002Fprojects\u002Fdeta\u002F)\n- [x] [DINO (ICLR'2023)](.\u002Fprojects\u002Fdino\u002F)\n- [x] [H-Deformable-DETR (CVPR'2023)](.\u002Fprojects\u002Fh_deformable_detr\u002F)\n- [x] [MaskDINO (CVPR'2023)](.\u002Fprojects\u002Fmaskdino\u002F)\n- [x] [CO-MOT (ArXiv'2023)](.\u002Fprojects\u002Fco_mot\u002F)\n- [x] [SQR-DETR (CVPR'2023)](.\u002Fprojects\u002Fsqr_detr\u002F)\n- [x] [Align-DETR (ArXiv'2023)](.\u002Fprojects\u002Falign_detr\u002F)\n- [x] [EVA-01 (CVPR'2023 Highlight)](.\u002Fprojects\u002Fdino_eva\u002F)\n- [x] [EVA-02 (ArXiv'2023)](.\u002Fprojects\u002Fdino_eva\u002F)\n- [x] [Focus-DETR (ICCV'2023)](.\u002Fprojects\u002Ffocus_detr\u002F)\n\n请参阅 [projects](.\u002Fprojects\u002F) 了解基于 detrex 构建的项目详情。\n\n\u003C\u002Fdetails>\n\n\n## 许可证\n\n本项目基于 [Apache 2.0 许可证](LICENSE) 发布。\n\n\n## 致谢\n- detrex 是由 **IDEACVR** 研究人员创建的基于 Transformer (变换器) 的检测算法开源工具箱。我们感谢所有对 detrex 的贡献！\n- detrex 基于 [Detectron2](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fdetectron2) 构建，部分模块 (module) 设计借鉴了 [MMDetection](https:\u002F\u002Fgithub.com\u002Fopen-mmlab\u002Fmmdetection)、[DETR](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fdetr) 和 [Deformable-DETR](https:\u002F\u002Fgithub.com\u002Ffundamentalvision\u002FDeformable-DETR)。\n\n## 引用\n如果您在研究中使用此工具箱或希望引用此处发布的基线结果，请使用以下 BibTeX (文献引用格式) 条目：\n\n- 引用 **detrex**：\n\n```BibTeX\n@misc{ren2023detrex,\n      title={detrex: Benchmarking Detection Transformers}, \n      author={Tianhe Ren and Shilong Liu and Feng Li and Hao Zhang and Ailing Zeng and Jie Yang and Xingyu Liao and Ding Jia and Hongyang Li and He Cao and Jianan Wang and Zhaoyang Zeng and Xianbiao Qi and Yuhui Yuan and Jianwei Yang and Lei Zhang},\n      year={2023},\n      eprint={2306.07265},\n      archivePrefix={arXiv},\n      primaryClass={cs.CV}\n}\n```\n\n\u003Cdetails>\n\u003Csummary> 引用支持的算法 \u003C\u002Fsummary>\n\n```BibTex\n@inproceedings{carion2020end,\n  title={End-to-end object detection with transformers},\n  author={Carion, Nicolas and Massa, Francisco and Synnaeve, Gabriel and Usunier, Nicolas and Kirillov, Alexander and Zagoruyko, Sergey},\n  booktitle={European conference on computer vision},\n  pages={213--229},\n  year={2020},\n  organization={Springer}\n}\n\n@inproceedings{\n  zhu2021deformable,\n  title={Deformable {\\{}DETR{\\}}: Deformable Transformers for End-to-End Object Detection},\n  author={Xizhou Zhu and Weijie Su and Lewei Lu and Bin Li and Xiaogang Wang and Jifeng Dai},\n  booktitle={International Conference on Learning Representations},\n  year={2021},\n  url={https:\u002F\u002Fopenreview.net\u002Fforum?id=gZ9hCDWe6ke}\n}\n\n@inproceedings{meng2021-CondDETR,\n  title       = {Conditional DETR for Fast Training Convergence},\n  author      = {Meng, Depu and 
Chen, Xiaokang and Fan, Zejia and Zeng, Gang and Li, Houqiang and Yuan, Yuhui and Sun, Lei and Wang, Jingdong},\n  booktitle   = {Proceedings of the IEEE International Conference on Computer Vision (ICCV)},\n  year        = {2021}\n}\n\n@inproceedings{\n  liu2022dabdetr,\n  title={{DAB}-{DETR}: Dynamic Anchor Boxes are Better Queries for {DETR}},\n  author={Shilong Liu and Feng Li and Hao Zhang and Xiao Yang and Xianbiao Qi and Hang Su and Jun Zhu and Lei Zhang},\n  booktitle={International Conference on Learning Representations},\n  year={2022},\n  url={https:\u002F\u002Fopenreview.net\u002Fforum?id=oMI9PjOb9Jl}\n}\n\n@inproceedings{li2022dn,\n  title={Dn-detr: Accelerate detr training by introducing query denoising},\n  author={Li, Feng and Zhang, Hao and Liu, Shilong and Guo, Jian and Ni, Lionel M and Zhang, Lei},\n  booktitle={Proceedings of the IEEE\u002FCVF Conference on Computer Vision and Pattern Recognition},\n  pages={13619--13627},\n  year={2022}\n}\n\n@inproceedings{\n  zhang2023dino,\n  title={{DINO}: {DETR} with Improved DeNoising Anchor Boxes for End-to-End Object Detection},\n  author={Hao Zhang and Feng Li and Shilong Liu and Lei Zhang and Hang Su and Jun Zhu and Lionel Ni and Heung-Yeung Shum},\n  booktitle={The Eleventh International Conference on Learning Representations },\n  year={2023},\n  url={https:\u002F\u002Fopenreview.net\u002Fforum?id=3mRwyG5one}\n}\n\n@InProceedings{Chen_2023_ICCV,\n  author    = {Chen, Qiang and Chen, Xiaokang and Wang, Jian and Zhang, Shan and Yao, Kun and Feng, Haocheng and Han, Junyu and Ding, Errui and Zeng, Gang and Wang, Jingdong},\n  title     = {Group DETR: Fast DETR Training with Group-Wise One-to-Many Assignment},\n  booktitle = {Proceedings of the IEEE\u002FCVF International Conference on Computer Vision (ICCV)},\n  month     = {October},\n  year      = {2023},\n  pages     = {6633-6642}\n}\n\n@InProceedings{Jia_2023_CVPR,\n  author    = {Jia, Ding and Yuan, Yuhui and He, Haodi and Wu, Xiaopei and Yu, Haojun and Lin, Weihong and Sun, Lei and Zhang, Chao and Hu, Han},\n  title     = {DETRs With Hybrid Matching},\n  booktitle = {Proceedings of the IEEE\u002FCVF Conference on Computer Vision and Pattern Recognition (CVPR)},\n  month     = {June},\n  year      = {2023},\n  pages     = {19702-19712}\n}\n\n@InProceedings{Li_2023_CVPR,\n  author    = {Li, Feng and Zhang, Hao and Xu, Huaizhe and Liu, Shilong and Zhang, Lei and Ni, Lionel M. 
and Shum, Heung-Yeung},\n  title     = {Mask DINO: Towards a Unified Transformer-Based Framework for Object Detection and Segmentation},\n  booktitle = {Proceedings of the IEEE\u002FCVF Conference on Computer Vision and Pattern Recognition (CVPR)},\n  month     = {June},\n  year      = {2023},\n  pages     = {3041-3050}\n}\n\n@article{yan2023bridging,\n  title={Bridging the Gap Between End-to-end and Non-End-to-end Multi-Object Tracking},\n  author={Yan, Feng and Luo, Weixin and Zhong, Yujie and Gan, Yiyang and Ma, Lin},\n  journal={arXiv preprint arXiv:2305.12724},\n  year={2023}\n}\n\n@InProceedings{Chen_2023_CVPR,\n  author    = {Chen, Fangyi and Zhang, Han and Hu, Kai and Huang, Yu-Kai and Zhu, Chenchen and Savvides, Marios},\n  title     = {Enhanced Training of Query-Based Object Detection via Selective Query Recollection},\n  booktitle = {Proceedings of the IEEE\u002FCVF Conference on Computer Vision and Pattern Recognition (CVPR)},\n  month     = {June},\n  year      = {2023},\n  pages     = {23756-23765}\n}\n```\n\n\n\u003C\u002Fdetails>","# detrex 快速上手指南\n\ndetrex 是一个基于 Transformer 的目标检测开源工具箱，构建于 [Detectron2](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fdetectron2) 之上，提供了多种 SOTA 检测算法（如 DETR、DINO 等）的基准实现。\n\n## 环境准备\n\n在开始之前，请确保您的系统满足以下要求：\n\n- **操作系统**: Linux (推荐)\n- **Python**: 3.7 或更高版本\n- **PyTorch**: 1.10 或更高版本（推荐使用 **1.12**）\n- **CUDA**: 根据您的 GPU 型号安装对应的 CUDA 版本\n\n## 安装步骤\n\n### 1. 安装 PyTorch\n请根据您的环境前往 [PyTorch 官网](https:\u002F\u002Fpytorch.org\u002F) 获取安装命令。例如：\n```bash\npip install torch==1.12.0 torchvision==0.13.0 --extra-index-url https:\u002F\u002Fdownload.pytorch.org\u002Fwhl\u002Fcu113\n```\n\n### 2. 安装 detrex\n您可以通过 pip 直接安装稳定版本：\n```bash\npip install detrex\n```\n\n或者克隆源码进行安装（适合开发模式）：\n```bash\ngit clone https:\u002F\u002Fgithub.com\u002FIDEA-Research\u002Fdetrex.git\ncd detrex\npip install -e .\n```\n\n## 基本使用\n\ndetrex 采用了 **LazyConfig System** 来管理配置文件，使得配置更加灵活简洁。\n\n### 1. 训练模型\n使用提供的训练引擎 `tools\u002Ftrain_net.py` 启动训练。以下是一个使用 DINO 算法在 COCO 数据集上训练的示例命令：\n\n```bash\npython tools\u002Ftrain_net.py --config-file projects\u002Fdino\u002Fconfigs\u002Fdino_swint_4scale_300ep.py --num-gpus 8\n```\n\n### 2. 评估模型\n训练完成后，您可以使用相同的脚本进行模型评估：\n\n```bash\npython tools\u002Ftrain_net.py --config-file projects\u002Fdino\u002Fconfigs\u002Fdino_swint_4scale_300ep.py --eval-only --model-path \u002Fpath\u002Fto\u002Fcheckpoint.pth\n```\n\n### 3. 配置文件说明\n- 配置文件位于 `projects\u002F\u003Cmodel_name>\u002Fconfigs\u002F` 目录下。\n- 您可以通过修改配置文件中的参数来调整超参数、数据路径或模型结构。\n- 更多详细用法（如自定义数据集、模型分析等）请参考 [官方文档](https:\u002F\u002Fdetrex.readthedocs.io\u002Fen\u002Flatest\u002Findex.html)。","某自动驾驶公司的算法团队正在研发基于 Transformer 的道路障碍物检测系统，需要快速验证最新的 DETR 变体模型效果并落地到实际场景。\n\n### 没有 detrex 时\n- 复现前沿论文代码耗时耗力，不同开源仓库结构差异大，整合困难。\n- 超参数调优缺乏权威参考，初始模型精度低，AP 值波动大且难以突破。\n- 修改模型结构需要改动大量底层代码，模块耦合度高，易引入 Bug。\n- 训练引擎配置复杂，每次切换实验配置都要手动修改多处脚本，效率低下。\n- 缺乏统一的基准对比，难以评估新改进是否真正有效。\n\n### 使用 detrex 后\n- 直接调用 detrex 提供的 SOTA 算法基线，复现效率提升显著，快速启动研发。\n- 享受官方优化后的超参数配置，模型精度稳步提升，AP 平均增长 0.2 至 1.1。\n- 利用模块化设计自由组合组件，定制模型无需重写底层逻辑，开发更灵活。\n- 通过 LazyConfig 系统管理实验，配置文件清晰简洁，训练启动与切换更便捷。\n- 内置强基线作为统一标准，方便团队内部横向对比实验效果，决策更科学。\n\ndetrex 通过标准化模块与强基线支持，大幅降低了 Transformer 检测模型的研发门槛与试错成本。","https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FIDEA-Research_detrex_dc5aed90.png","IDEA-Research","https:\u002F\u002Foss.gittoolsai.com\u002Favatars\u002FIDEA-Research_b8b3359e.png","The International Digital Economy Academy (“IDEA”). 
",null,"www.idea.edu.cn","https:\u002F\u002Fgithub.com\u002FIDEA-Research",[82,86,90,94],{"name":83,"color":84,"percentage":85},"Python","#3572A5",93.9,{"name":87,"color":88,"percentage":89},"Cuda","#3A4E3A",5.4,{"name":91,"color":92,"percentage":93},"C++","#f34b7d",0.7,{"name":95,"color":96,"percentage":97},"Shell","#89e051",0,2275,244,"2026-04-02T12:48:58","Apache-2.0","未说明",{"notes":104,"python":102,"dependencies":105},"详细安装步骤需参考官方文档链接。推荐使用 PyTorch 1.12。基于 Detectron2 构建，支持多种 DETR 变体模型（如 DINO, Deformable-DETR 等）。",[106,107],"torch>=1.10","detectron2",[14,13],[110,111,112,113,114,115,116,117,118,119,120,121,122,123,124,125],"detr","object-detection","pytorch","dino","state-of-the-art","dab-detr","deformable-detr","conditional-detr","dn-detr","group-detr","h-detr","mask-dino","anchor-detr","deta","pose-estimation","segmentation","2026-03-27T02:49:30.150509","2026-04-06T09:46:07.598300",[129,134,139,143,148,153],{"id":130,"question_zh":131,"answer_zh":132,"source_url":133},4992,"detrex 中 max_iter 和 epoch 是什么关系？如何设置训练轮次？","detrex 基于 detectron2，配置依赖迭代次数 (Iteration) 而非 epoch。`train.max_iter` 表示总训练迭代次数，`dataloader.train.total_batch_size` 表示所有 GPU 的总 batch size。例如 `max_iter=90000`, `total_batch_size=16`，总训练数据量等于 90k * 16。若要在 1000 张图片上训练 12 个 epoch，需根据 GPU 数量和单卡 batch size 计算总 batch size，再推算 max_iter。","https:\u002F\u002Fgithub.com\u002FIDEA-Research\u002Fdetrex\u002Fissues\u002F126",{"id":135,"question_zh":136,"answer_zh":137,"source_url":138},4993,"如何使用预训练权重在自定义数据集上训练模型？","可以使用 demo 进行预训练权重的推理。对于训练，建议在项目中直接添加一个新的 `config.py` 文件，而不是修改 `configs\u002Fcommon`。可以通过 `get_config()` 加载自己的配置文件。更多详情参考文档：[Inference demo with pre-trained models](https:\u002F\u002Fdetrex.readthedocs.io\u002Fen\u002Flatest\u002Ftutorials\u002FGetting_Started.html#inference-demo-with-pre-trained-models)。","https:\u002F\u002Fgithub.com\u002FIDEA-Research\u002Fdetrex\u002Fissues\u002F78",{"id":140,"question_zh":141,"answer_zh":142,"source_url":138},4994,"自定义配置文件应该放在哪里以避免导入错误 (ImportError)？","不建议将自定义配置文件添加到 `configs\u002Fcommon` 目录，这会导致导入问题 (ImportError)。更好的方式是在你的项目目录中直接添加一个新的 `config.py` 文件，然后直接加载该配置文件。",{"id":144,"question_zh":145,"answer_zh":146,"source_url":147},4995,"代码中的 num_classes 是否需要包含背景类 (background class)？","对于 Focal Loss，`num_classes` 只需要等于实际类别数量 (actual_num_classes)，不需要 +1 (即不包含背景类)。如果是自定义数据集，注意类别 ID 通常从 1 开始 (如 COCO 数据集)。","https:\u002F\u002Fgithub.com\u002FIDEA-Research\u002Fdetrex\u002Fissues\u002F174",{"id":149,"question_zh":150,"answer_zh":151,"source_url":152},4996,"为什么 detrex 默认训练轮次 (12-50 epochs) 比原始 DETR (500 epochs) 少？","detrex 的默认设置通常基于多卡训练环境 (如 8 卡，单卡 batch_size=2)，收敛速度较快。轮次设置与其他框架效果类似，但具体收敛情况受硬件环境和批量大小影响。","https:\u002F\u002Fgithub.com\u002FIDEA-Research\u002Fdetrex\u002Fissues\u002F329",{"id":154,"question_zh":155,"answer_zh":156,"source_url":152},4997,"训练 Conditional DETR 时精度不稳定且低怎么办？","这可能与硬件环境有关。官方默认设置是 8 卡，单卡 batch_size=2。如果在其他环境 (如单卡或少卡) 下训练，可能未经过充分测试。建议尽量复现默认硬件环境，或调整学习率及 batch size 设置。",[158,163,168,173,178,183,188],{"id":159,"version":160,"summary_zh":161,"released_at":162},114219,"v0.5.0","## Release v0.5.0\r\n\r\n**Support New Algorithms and Benchmarks, including:**\r\n- Support [Focus-DETR (ICCV'2023)](https:\u002F\u002Fgithub.com\u002FIDEA-Research\u002Fdetrex\u002Ftree\u002Fmain\u002Fprojects\u002Ffocus_detr)\r\n- Support [SQR-DETR (CVPR'2023)](https:\u002F\u002Fgithub.com\u002FIDEA-Research\u002Fdetrex\u002Ftree\u002Fmain\u002Fprojects\u002Fsqr_detr), credits to [Fangyi Chen](https:\u002F\u002Fgithub.com\u002FFangyi-Chen)\r\n- Support [EVA-01 (CVPR'2023 
Highlight)](https:\u002F\u002Fgithub.com\u002Fbaaivision\u002FEVA\u002Ftree\u002Fmaster\u002FEVA-01)\r\n- Support [EVA-02 (ArXiv'2023)](https:\u002F\u002Fgithub.com\u002Fbaaivision\u002FEVA\u002Ftree\u002Fmaster\u002FEVA-02)\r\n- Support [DINO-EVA](https:\u002F\u002Fgithub.com\u002FIDEA-Research\u002Fdetrex\u002Fblob\u002Fmain\u002Fprojects\u002Fdino_eva) benchmarks, including `dino-eva-01` and `dino-eva-02` with LSJ augmentation.\r\n- Support [Align-DETR (ArXiv'2023)](https:\u002F\u002Fgithub.com\u002FFelixCaae\u002FAlignDETR), credits to [Zhi Cai](https:\u002F\u002Fgithub.com\u002FFelixCaae)\r\n\r\n**All the pretrained DINO-EVA checkpoints can be downloaded in [Huggingface Space](https:\u002F\u002Fhuggingface.co\u002FIDEA-CVR\u002FDINO-EVA)**","2023-07-16T05:15:00",{"id":164,"version":165,"summary_zh":166,"released_at":167},114220,"v0.4.0","## Main updates\r\n- Support [CO-MOT](.\u002Fprojects\u002Fco_mot\u002F) aims for End-to-End Multi-Object Tracking by [Feng Yan](https:\u002F\u002Fscholar.google.com\u002Fcitations?user=gO4divAAAAAJ&hl=zh-CN&oi=sra).\r\n- Release `DINO` with optimized hyper-parameters which achieves `50.0 AP` under 1x settings.\r\n- Release pretrained DINO based on `InternImage`, `ConvNeXt-1K pretrained` backbones.\r\n- Release `Deformable-DETR-R50` pretrained weights.\r\n- Release `DETA` and better `H-DETR` pretrained weights: achieving `50.2 AP` and `49.1 AP` respectively.\r\n\r\n## Pretrained Model\r\n- `DETA-R50-5scale-12ep` bs=8: **`50.0AP`**\r\n- `DETA-R50-5scale-12ep` aligned hyper-param: **`49.9AP`**\r\n- `DETA-R50-5scale-12ep` with only freeze the stem of backbone: **`50.2AP`**\r\n- `H-Deformable-DETR-two-stage-R50-12ep` aligned optimizer hyper-params: **`49.1AP`**\r\n- `DINO-R50-4scale-12ep` aligned optimizer hyper-params: **`49.4AP`**\r\n- `DINO-Focal-3level-4scale-36ep`: **`58.3AP`**\r\n\r\n**Benchmark ConvNeXt on DINO**\r\n- convnext-tiny-384: **`52.4AP`**\r\n- convnext-small-384: **`54.2AP`**\r\n- convnext-base-384: **`55.1AP`**\r\n- convnext-large-384: **`55.5AP`**\r\n\r\n**Benchmark InternImage on DINO**\r\n- internimage-tiny: **`52.3AP`**\r\n- internimage-small: **`53.5AP`**\r\n- internimage-base: **`54.7AP`**\r\n- internimage-large: **`57.0AP`**\r\n\r\n**Benchmark FocalNet on DINO**\r\n- focalnet-tiny\r\n- focalnet-small\r\n- focalnet-base\r\n\r\n**Other pre-trained weights**\r\n- Deformable-DETR R50: `44.9 AP` (better than 44.5AP from original repo)\r\n- `Group-DETR-R50-12ep`: **`37.8AP`**","2023-06-02T09:26:29",{"id":169,"version":170,"summary_zh":171,"released_at":172},114221,"v0.3.0","## What's New\r\n\r\n**New Algorithms**\r\n- Support `Anchor-DETR`\r\n- Support `DETA`\r\n\r\n**More training techniques**\r\n- Support `EMAHook` during training by setting `train.model_ema.enabled=True`, which can further enhance the model performance.\r\n- Fully support mixed precision training by setting `train.amp.enabled=True`, which can **reduce 20% to 30% GPU memory usage**.\r\n- Support **encoder-decoder checkpoint** in [DINO](https:\u002F\u002Fgithub.com\u002FIDEA-Research\u002Fdetrex\u002Ftree\u002Fmain\u002Fprojects\u002Fdino) which may **reduce 30% GPU memory usage**. 
And for more details about the checkpoint usage, please refer to this PR: #200 \r\n- Support **fast debugging** by setting `train.fast_dev_run=True`.\r\n- Support a great slurm training scripts by @rayleizhu , please check this issue #213 for more details.\r\n\r\n**Release more than 10+ pretrained checkpoints**\r\n| Method | Pretrained | Epochs | Box AP |\r\n|:---|:---:|:---:|:---:|\r\n| DETR-R50-DC5 | IN1k | 500 | 43.4 |\r\n| DETR-R101-DC5 | IN1k | 500 | 44.9 |\r\n| Anchor-DETR-R50 | IN1k | 50 | 42.2 |\r\n| Anchor-DETR-R50-DC5 | IN1k | 50 | 44.2 |\r\n| Anchor-DETR-R101 | IN1k | 50 | 43.5 |\r\n| Anchor-DETR-R101-DC5 | IN1k | 50 | 45.1 |\r\n| Conditional-DETR-R50-DC5 | IN1k | 50 | 43.8 |\r\n| Conditional-DETR-R101 | IN1k | 50 | 43.0 |\r\n| Conditional-DETR-R101 -DC5 | IN1k | 50 | 45.1 |\r\n| DAB-DETR-R50-3patterns | IN1k | 50 | 42.8 |\r\n| DAB-DETR-R50-DC5 | IN1k | 50 | 44.6 |\r\n| DAB-DETR-R50-DC5-3patterns | IN1k | 50 | 45.7 |\r\n| DAB-DETR-101-DC5 | IN1k | 50 | 45.7 |\r\n| DN-DETR-R50-DC5 | IN1k | 50 | 46.3 |\r\n| DINO with EMA | IN1k | 12 | 49.4 |\r\n| DETA-R50-5scale | IN1k | 12 | 50.1 |\r\n| DETA-Swin-Large | object-365 | 24 | **62.9** |\r\n\r\nPart of the pre-trained weights are converted from their official repo, and all the pre-trained weights can be downloaded in detrex [Model Zoo](https:\u002F\u002Fdetrex.readthedocs.io\u002Fen\u002Flatest\u002Ftutorials\u002FModel_Zoo.html)\r\n","2023-03-17T09:47:59",{"id":174,"version":175,"summary_zh":176,"released_at":177},114222,"maskdino","## MaskDINO Release\r\n- Release for MaskDINO Source Code: [MaskDINO](https:\u002F\u002Fgithub.com\u002FIDEA-Research\u002FMaskDINO)\r\n- The detrex version can be found in [projects\u002Fmaskdino](https:\u002F\u002Fgithub.com\u002FIDEA-Research\u002Fdetrex\u002Ftree\u002Fmain\u002Fprojects\u002Fmaskdino)","2022-12-02T10:51:08",{"id":179,"version":180,"summary_zh":181,"released_at":182},114223,"v0.2.1","## Highlights\r\n- **DINO** has been accepted to ICLR 2023!\r\n- Thanks a lot for @powermano provides us a detailed usage about onnx export in detrex. Please see this issue https:\u002F\u002Fgithub.com\u002FIDEA-Research\u002Fdetrex\u002Fissues\u002F192\r\n\r\n## What's New\r\n#### New Algorithm\r\n- MaskDINO COCO instance-seg\u002Fpanoptic-seg pre-release [#154](https:\u002F\u002Fgithub.com\u002FIDEA-Research\u002Fdetrex\u002Fpull\u002F154)\r\n\r\n#### New Features\r\n- New baselines for `Res\u002FSwin-DINO-5scale`, `ViTDet-DINO`, `FocalNet-DINO`, etc. [#138](https:\u002F\u002Fgithub.com\u002FIDEA-Research\u002Fdetrex\u002Fpull\u002F138), [#155](https:\u002F\u002Fgithub.com\u002FIDEA-Research\u002Fdetrex\u002Fpull\u002F155)\r\n- Support FocalNet backbone [#145](https:\u002F\u002Fgithub.com\u002FIDEA-Research\u002Fdetrex\u002Fpull\u002F145)\r\n- Support Swin-V2 backbone [#152](https:\u002F\u002Fgithub.com\u002FIDEA-Research\u002Fdetrex\u002Fpull\u002F152)\r\n\r\n#### Documentation\r\n- Add ViTDet \u002F FocalNet download links and usage example, please refer to [Download Pretrained Weights](https:\u002F\u002Fdetrex.readthedocs.io\u002Fen\u002Flatest\u002Ftutorials\u002FDownload_Pretrained_Weights.html).\r\n- Add tutorial on how to verify the correct installation of detrex. 
[#194](https:\u002F\u002Fgithub.com\u002FIDEA-Research\u002Fdetrex\u002Fpull\u002F194)\r\n\r\n#### Bug Fixes\r\n- Fix demo confidence filter not to remove mask predictions [#156](https:\u002F\u002Fgithub.com\u002FIDEA-Research\u002Fdetrex\u002Fpull\u002F156)\r\n\r\n#### Code Refinement\r\n- Make more readable logging info for criterion and matcher [#151](https:\u002F\u002Fgithub.com\u002FIDEA-Research\u002Fdetrex\u002Fpull\u002F151)\r\n- Modified learning rate scheduler config usage, add fundamental scheduler configuration [#191](https:\u002F\u002Fgithub.com\u002FIDEA-Research\u002Fdetrex\u002Fpull\u002F191)\r\n\r\n## New Pretrained Models\r\nAll the pretrained weights can be downloaded in detrex [Model Zoo](https:\u002F\u002Fdetrex.readthedocs.io\u002Fen\u002Flatest\u002Ftutorials\u002FModel_Zoo.html)\r\n\r\n### DINO\r\n| Method | Pretrained | Epochs | Box AP |\r\n|:---|:---:|:---:|:---:|\r\n| DINO-ViTDet-Base-4scale | MAE | 12 | 50.2 |\r\n| DINO-ViTDet-Base-4scale | MAE | 50 | 55.0 |\r\n| DINO-ViTDet-Large-4scale | MAE | 12 | 50.2 |\r\n| DINO-ViTDet-Large-4scale | MAE | 50 | 55.0 |\r\n| DINO-FocalNet-Large-3level-4scale | IN22k| 12 | 57.5 |\r\n| DINO-FocalNet-Large-4level-4scale | IN22k| 12 | 58.0 |\r\n| DINO-FocalNet-Large-4level-5scale | IN22k| 12 | 58.5 |\r\n","2023-02-01T07:16:07",{"id":184,"version":185,"summary_zh":186,"released_at":187},114224,"v0.2.0","## What's New\r\n- Rebuild cleaner config files for projects\r\n- Support [H-Deformable-DETR](https:\u002F\u002Fgithub.com\u002FIDEA-Research\u002Fdetrex\u002Ftree\u002Fmain\u002Fprojects\u002Fh_deformable_detr), thanks a lot for @JiaDingCN \r\n- Release H-Deformable-DETR pretrained weights including `H-Deformable-DETR-R50`, `H-Deformable-DETR-Swin-Tiny`, `H-Deformable-DETR-Swin-Large`.\r\n- Add demo for visualizing customized input images or videos using pretrained weights in [demo](https:\u002F\u002Fgithub.com\u002FIDEA-Research\u002Fdetrex\u002Ftree\u002Fmain\u002Fdemo), please check our [documentation](https:\u002F\u002Fdetrex.readthedocs.io\u002Fen\u002Flatest\u002Ftutorials\u002FGetting_Started.html#inference-demo-with-pre-trained-models) about the usage.\r\n- Release new baselines for `DINO-Swin-Large-36ep`, `DAB-Deformable-DETR-R50-50ep`, `DAB-Deformable-DETR-Two-Stage-50ep`.\r\n\r\n\r\n## New Pretrained Models\r\nAll the pretrained weights can be downloaded in detrex [Model Zoo](https:\u002F\u002Fdetrex.readthedocs.io\u002Fen\u002Flatest\u002Ftutorials\u002FModel_Zoo.html)\r\n\r\n### H-Deformable-DETR\r\n| Method | Pretrained | Epochs | Query Num | Box AP |\r\n|:---|:---:|:---:|:---:|:---:|\r\n| H-Deformable-DETR-R50 + tricks | IN1k | 12 | 300 | 48.9 |\r\n| H-Deformable-DETR-R50 + tricks | IN1k | 36 | 300 | 50.3 |\r\n| H-Deformable-DETR-Swin-T + tricks | IN1k | 12 | 300 | 50.6 |\r\n| H-Deformable-DETR-Swin-T+ tricks | IN1k | 36 | 300 | 53.5 |\r\n| H-Deformable-DETR-Swin-L + tricks | IN22k | 12 | 300 | 56.2 |\r\n| H-Deformable-DETR-Swin-L + tricks | IN22k | 36 | 300 | 57.5 |\r\n| H-Deformable-DETR-Swin-L + tricks | IN22k | 12 | 900 | 56.4 |\r\n| H-Deformable-DETR-Swin-L + tricks | IN22k | 36 | 900 | 57.7 |\r\n\r\n### DINO\r\n| Method | Pretrained | Epochs | Box AP |\r\n|:---|:---:|:---:|:---:|\r\n| DINO-R50-4Scale-12ep | IN1k | 12 | 49.2 |\r\n\r\n### DAB-Deformable-DETR\r\n| Method | Pretrained | Epochs | Box AP |\r\n|:---|:---:|:---:|:---:|\r\n| DAB-Deformable-DETR-R50 | IN1k | 50 | 49.0 |\r\n| DAB-Deformable-DETR-R50-Two-Stage | IN1k | 50 | 49.7 
|","2022-11-13T08:18:12",{"id":189,"version":190,"summary_zh":191,"released_at":192},114225,"v0.1.1","## What's New\r\n- Add model analysis tools in [tools](https:\u002F\u002Fgithub.com\u002FIDEA-Research\u002Fdetrex\u002Ftree\u002Fmain\u002Ftools).\r\n- Support visualization on COCO eval results and annotations in [tools](https:\u002F\u002Fgithub.com\u002FIDEA-Research\u002Fdetrex\u002Ftree\u002Fmain\u002Ftools).\r\n- Support [Group-DETR](https:\u002F\u002Fgithub.com\u002FIDEA-Research\u002Fdetrex\u002Ftree\u002Fmain\u002Fprojects\u002Fgroup_detr).\r\n- Release more `DINO` training results in [DINO](https:\u002F\u002Fgithub.com\u002FIDEA-Research\u002Fdetrex\u002Ftree\u002Fmain\u002Fprojects\u002Fdino).\r\n- Release better `Deformable-DETR` baselines in [Deformable-DETR](https:\u002F\u002Fgithub.com\u002FIDEA-Research\u002Fdetrex\u002Ftree\u002Fmain\u002Fprojects\u002Fdeformable_detr).\r\n- Fix ConvNeXt bugs.\r\n- Add pretrained weights download links and usage in documentation, see [Download Pretrained Backbone Weights](https:\u002F\u002Fdetrex.readthedocs.io\u002Fen\u002Flatest\u002Ftutorials\u002FDownload_Pretrained_Weights.html).\r\n- Add documentation for tools, see [Practical Tools and Scripts](https:\u002F\u002Fdetrex.readthedocs.io\u002Fen\u002Flatest\u002Ftutorials\u002FTools.html).\r\n\r\n## New Pretrained Models\r\nAll the pretrained weights can be downloaded in detrex [Model Zoo](https:\u002F\u002Fdetrex.readthedocs.io\u002Fen\u002Flatest\u002Ftutorials\u002FModel_Zoo.html).\r\n\r\n### DINO\r\n| Method | Pretrained | Epochs | Box AP |\r\n|:-----|:-----:|:-----:|:-----:|\r\n| DINO-R50-4Scale | IN1k | 24 | 50.60 |\r\n| DINO-R101-4Scale | IN1k | 12 | 49.95 |\r\n| DINO-Swin-Tiny-224-4Scale | IN1k | 12 | 51.30 |\r\n| DINO-Swin-Tiny-224-4Scale | IN22k to IN1k | 12 | 51.30 |\r\n| DINO-Swin-Small-224-4Scale | IN1k | 12 | 52.96 |\r\n| DINO-Swin-Base-384-4Scale | IN22k to IN1k | 12 | 55.83 |\r\n| DINO-Swin-Large-224-4Scale | IN22k to IN1k | 12 | 56.92 |\r\n| DINO-Swin-Large-384-4Scale | IN22k to IN1k | 12 | 56.93 |\r\n\r\n### Deformable-DETR\r\n| Method | Pretrained | Epochs | Box AP |\r\n|:-----|:-----:|:-----:|:-----:|\r\n| Deformable-DETR-R50 + Box-Refinement | IN1k | 50 | 46.99 |\r\n| Deformable-DETR-R50 + Box-Refinement + Two-Stage | IN1k | 50 | 48.19 |","2022-10-18T11:04:47"]