[{"data":1,"prerenderedAt":-1},["ShallowReactive",2],{"similar-lhoyer--DAFormer":3,"tool-lhoyer--DAFormer":64},[4,17,27,35,43,56],{"id":5,"name":6,"github_repo":7,"description_zh":8,"stars":9,"difficulty_score":10,"last_commit_at":11,"category_tags":12,"status":16},3808,"stable-diffusion-webui","AUTOMATIC1111\u002Fstable-diffusion-webui","stable-diffusion-webui 是一个基于 Gradio 构建的网页版操作界面，旨在让用户能够轻松地在本地运行和使用强大的 Stable Diffusion 图像生成模型。它解决了原始模型依赖命令行、操作门槛高且功能分散的痛点，将复杂的 AI 绘图流程整合进一个直观易用的图形化平台。\n\n无论是希望快速上手的普通创作者、需要精细控制画面细节的设计师，还是想要深入探索模型潜力的开发者与研究人员，都能从中获益。其核心亮点在于极高的功能丰富度：不仅支持文生图、图生图、局部重绘（Inpainting）和外绘（Outpainting）等基础模式，还独创了注意力机制调整、提示词矩阵、负向提示词以及“高清修复”等高级功能。此外，它内置了 GFPGAN 和 CodeFormer 等人脸修复工具，支持多种神经网络放大算法，并允许用户通过插件系统无限扩展能力。即使是显存有限的设备，stable-diffusion-webui 也提供了相应的优化选项，让高质量的 AI 艺术创作变得触手可及。",162132,3,"2026-04-05T11:01:52",[13,14,15],"开发框架","图像","Agent","ready",{"id":18,"name":19,"github_repo":20,"description_zh":21,"stars":22,"difficulty_score":23,"last_commit_at":24,"category_tags":25,"status":16},1381,"everything-claude-code","affaan-m\u002Feverything-claude-code","everything-claude-code 是一套专为 AI 编程助手（如 Claude Code、Codex、Cursor 等）打造的高性能优化系统。它不仅仅是一组配置文件，而是一个经过长期实战打磨的完整框架，旨在解决 AI 代理在实际开发中面临的效率低下、记忆丢失、安全隐患及缺乏持续学习能力等核心痛点。\n\n通过引入技能模块化、直觉增强、记忆持久化机制以及内置的安全扫描功能，everything-claude-code 能显著提升 AI 在复杂任务中的表现，帮助开发者构建更稳定、更智能的生产级 AI 代理。其独特的“研究优先”开发理念和针对 Token 消耗的优化策略，使得模型响应更快、成本更低，同时有效防御潜在的攻击向量。\n\n这套工具特别适合软件开发者、AI 研究人员以及希望深度定制 AI 工作流的技术团队使用。无论您是在构建大型代码库，还是需要 AI 协助进行安全审计与自动化测试，everything-claude-code 都能提供强大的底层支持。作为一个曾荣获 Anthropic 黑客大奖的开源项目，它融合了多语言支持与丰富的实战钩子（hooks），让 AI 真正成长为懂上",138956,2,"2026-04-05T11:33:21",[13,15,26],"语言模型",{"id":28,"name":29,"github_repo":30,"description_zh":31,"stars":32,"difficulty_score":23,"last_commit_at":33,"category_tags":34,"status":16},2271,"ComfyUI","Comfy-Org\u002FComfyUI","ComfyUI 是一款功能强大且高度模块化的视觉 AI 引擎，专为设计和执行复杂的 Stable Diffusion 图像生成流程而打造。它摒弃了传统的代码编写模式，采用直观的节点式流程图界面，让用户通过连接不同的功能模块即可构建个性化的生成管线。\n\n这一设计巧妙解决了高级 AI 绘图工作流配置复杂、灵活性不足的痛点。用户无需具备编程背景，也能自由组合模型、调整参数并实时预览效果，轻松实现从基础文生图到多步骤高清修复等各类复杂任务。ComfyUI 拥有极佳的兼容性，不仅支持 Windows、macOS 和 Linux 全平台，还广泛适配 NVIDIA、AMD、Intel 及苹果 Silicon 等多种硬件架构，并率先支持 SDXL、Flux、SD3 等前沿模型。\n\n无论是希望深入探索算法潜力的研究人员和开发者，还是追求极致创作自由度的设计师与资深 AI 绘画爱好者，ComfyUI 都能提供强大的支持。其独特的模块化架构允许社区不断扩展新功能，使其成为当前最灵活、生态最丰富的开源扩散模型工具之一，帮助用户将创意高效转化为现实。",107662,"2026-04-03T11:11:01",[13,14,15],{"id":36,"name":37,"github_repo":38,"description_zh":39,"stars":40,"difficulty_score":23,"last_commit_at":41,"category_tags":42,"status":16},3704,"NextChat","ChatGPTNextWeb\u002FNextChat","NextChat 是一款轻量且极速的 AI 助手，旨在为用户提供流畅、跨平台的大模型交互体验。它完美解决了用户在多设备间切换时难以保持对话连续性，以及面对众多 AI 模型不知如何统一管理的痛点。无论是日常办公、学习辅助还是创意激发，NextChat 都能让用户随时随地通过网页、iOS、Android、Windows、MacOS 或 Linux 端无缝接入智能服务。\n\n这款工具非常适合普通用户、学生、职场人士以及需要私有化部署的企业团队使用。对于开发者而言，它也提供了便捷的自托管方案，支持一键部署到 Vercel 或 Zeabur 等平台。\n\nNextChat 的核心亮点在于其广泛的模型兼容性，原生支持 Claude、DeepSeek、GPT-4 及 Gemini Pro 等主流大模型，让用户在一个界面即可自由切换不同 AI 能力。此外，它还率先支持 MCP（Model Context Protocol）协议，增强了上下文处理能力。针对企业用户，NextChat 提供专业版解决方案，具备品牌定制、细粒度权限控制、内部知识库整合及安全审计等功能，满足公司对数据隐私和个性化管理的高标准要求。",87618,"2026-04-05T07:20:52",[13,26],{"id":44,"name":45,"github_repo":46,"description_zh":47,"stars":48,"difficulty_score":23,"last_commit_at":49,"category_tags":50,"status":16},2268,"ML-For-Beginners","microsoft\u002FML-For-Beginners","ML-For-Beginners 是由微软推出的一套系统化机器学习入门课程，旨在帮助零基础用户轻松掌握经典机器学习知识。这套课程将学习路径规划为 12 周，包含 26 节精炼课程和 52 道配套测验，内容涵盖从基础概念到实际应用的完整流程，有效解决了初学者面对庞大知识体系时无从下手、缺乏结构化指导的痛点。\n\n无论是希望转型的开发者、需要补充算法背景的研究人员，还是对人工智能充满好奇的普通爱好者，都能从中受益。课程不仅提供了清晰的理论讲解，还强调动手实践，让用户在循序渐进中建立扎实的技能基础。其独特的亮点在于强大的多语言支持，通过自动化机制提供了包括简体中文在内的 50 
多种语言版本，极大地降低了全球不同背景用户的学习门槛。此外，项目采用开源协作模式，社区活跃且内容持续更新，确保学习者能获取前沿且准确的技术资讯。如果你正寻找一条清晰、友好且专业的机器学习入门之路，ML-For-Beginners 将是理想的起点。",84991,"2026-04-05T10:45:23",[14,51,52,53,15,54,26,13,55],"数据工具","视频","插件","其他","音频",{"id":57,"name":58,"github_repo":59,"description_zh":60,"stars":61,"difficulty_score":10,"last_commit_at":62,"category_tags":63,"status":16},3128,"ragflow","infiniflow\u002Fragflow","RAGFlow 是一款领先的开源检索增强生成（RAG）引擎，旨在为大语言模型构建更精准、可靠的上下文层。它巧妙地将前沿的 RAG 技术与智能体（Agent）能力相结合，不仅支持从各类文档中高效提取知识，还能让模型基于这些知识进行逻辑推理和任务执行。\n\n在大模型应用中，幻觉问题和知识滞后是常见痛点。RAGFlow 通过深度解析复杂文档结构（如表格、图表及混合排版），显著提升了信息检索的准确度，从而有效减少模型“胡编乱造”的现象，确保回答既有据可依又具备时效性。其内置的智能体机制更进一步，使系统不仅能回答问题，还能自主规划步骤解决复杂问题。\n\n这款工具特别适合开发者、企业技术团队以及 AI 研究人员使用。无论是希望快速搭建私有知识库问答系统，还是致力于探索大模型在垂直领域落地的创新者，都能从中受益。RAGFlow 提供了可视化的工作流编排界面和灵活的 API 接口，既降低了非算法背景用户的上手门槛，也满足了专业开发者对系统深度定制的需求。作为基于 Apache 2.0 协议开源的项目，它正成为连接通用大模型与行业专有知识之间的重要桥梁。",77062,"2026-04-04T04:44:48",[15,14,13,26,54],{"id":65,"github_repo":66,"name":67,"description_en":68,"description_zh":69,"ai_summary_zh":69,"readme_en":70,"readme_zh":71,"quickstart_zh":72,"use_case_zh":73,"hero_image_url":74,"owner_login":75,"owner_name":76,"owner_avatar_url":77,"owner_bio":78,"owner_company":79,"owner_location":80,"owner_email":79,"owner_twitter":79,"owner_website":81,"owner_url":82,"languages":83,"stars":92,"forks":93,"last_commit_at":94,"license":95,"difficulty_score":10,"env_os":96,"env_gpu":97,"env_ram":98,"env_deps":99,"category_tags":105,"github_topics":106,"view_count":10,"oss_zip_url":79,"oss_zip_packed_at":79,"status":16,"created_at":111,"updated_at":112,"faqs":113,"releases":133},753,"lhoyer\u002FDAFormer","DAFormer","[CVPR22] Official Implementation of DAFormer: Improving Network Architectures and Training Strategies for Domain-Adaptive Semantic Segmentation","DAFormer 是一款专注于无监督域自适应语义分割的开源项目。在计算机视觉领域，为真实图像获取像素级标注往往成本高昂，DAFormer 通过利用易得的合成数据进行训练，并自动适配到真实图像上，无需目标域的真实标注即可实现高效学习。\n\n针对以往方法多基于陈旧网络架构的问题，DAFormer 采用了更先进的 Transformer 编码器结合多层级上下文感知特征融合解码器。其核心亮点在于三项关键训练策略：稀有类采样、物体类别 ImageNet 特征距离以及学习率预热，这些策略有效稳定了训练过程并避免了源域过拟合。实验表明，DAFormer 在多个基准测试中显著提升了性能，例如在 GTA 到 Cityscapes 的迁移任务中 mIoU 提升了 10.8。\n\n此外，DAFormer 还可扩展至域泛化任务，进一步降低对目标域数据的依赖。DAFormer 适合计算机视觉领域的研究人员及希望构建鲁棒分割系统的开发者参考与使用。如需了解最新进展，可查阅其官方论文及后续扩展工作。","## DAFormer: Improving Network Architectures and Training Strategies for Domain-Adaptive Semantic Segmentation\n\n**by [Lukas Hoyer](https:\u002F\u002Flhoyer.github.io\u002F), [Dengxin Dai](https:\u002F\u002Fvas.mpi-inf.mpg.de\u002Fdengxin\u002F), and [Luc Van Gool](https:\u002F\u002Fscholar.google.de\u002Fcitations?user=TwMib_QAAAAJ&hl=en)**\n\n**[[CVPR22 Paper]](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2111.14887.pdf)**\n**[[Extension Paper]](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2304.13615.pdf)**\n\n:bell: **News:**\n\n* [2024-07-03] We are happy to announce that our work [SemiVL](https:\u002F\u002Fgithub.com\u002Fgoogle-research\u002Fsemivl) on semi-supervised semantic segmentation with vision-language guidance was accepted at **ECCV24**.\n* [2024-07-03] We are happy to announce that our follow-up work [DGInStyle](https:\u002F\u002Fdginstyle.github.io\u002F) on image diffusion for domain-generalizable semantic segmentation was accepted at **ECCV24**.\n* [2023-09-26] We are happy to announce that our [Extension Paper](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2304.13615.pdf) on domain generalization and clear-to-adverse-weather UDA was accepted at **PAMI**. 
\n* [2023-08-25] We are happy to announce that our follow-up work [EDAPS](https:\u002F\u002Fgithub.com\u002Fsusaha\u002Fedaps) on panoptic segmentation UDA was accepted at **ICCV23**.\n* [2023-04-23] We further extend DAFormer to domain generalization and clear-to-adverse-weather UDA in the [Extension Paper](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2304.13615.pdf).\n* [2023-02-28] We are happy to announce that our follow-up work [MIC](https:\u002F\u002Fgithub.com\u002Flhoyer\u002FMIC) on context-enhanced UDA was accepted at **CVPR23**.\n* [2022-07-06] We are happy to announce that our follow-up work [HRDA](https:\u002F\u002Fgithub.com\u002Flhoyer\u002FHRDA) on high-resolution UDA was accepted at **ECCV22**.\n* [2022-03-09] We are happy to announce that DAFormer was accepted at **CVPR22**.\n\n## Overview\n\nAs acquiring pixel-wise annotations of real-world images for semantic\nsegmentation is a costly process, a model can instead be trained with more\naccessible synthetic data and adapted to real images without requiring their\nannotations. This process is studied in **Unsupervised Domain Adaptation (UDA)**.\n\nEven though a large number of methods propose new UDA strategies, they\nare mostly based on outdated network architectures. In this work, we\nparticularly study the influence of the network architecture on UDA performance\nand propose **DAFormer**, a network architecture tailored for UDA. It consists of a\nTransformer encoder and a multi-level context-aware feature fusion decoder.\n\nDAFormer is enabled by three simple but crucial training strategies to stabilize the\ntraining and to avoid overfitting the source domain: While the\n**Rare Class Sampling** on the source domain improves the quality of pseudo-labels\nby mitigating the confirmation bias of self-training towards common classes,\nthe **Thing-Class ImageNet Feature Distance** and a **Learning Rate Warmup** promote\nfeature transfer from ImageNet pretraining.\n\nDAFormer significantly improves\nthe state-of-the-art performance **by 10.8 mIoU for GTA→Cityscapes**\nand **by 5.4 mIoU for Synthia→Cityscapes** and enables learning even\ndifficult classes such as train, bus, and truck well.\n\n![UDA over time](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Flhoyer_DAFormer_readme_ab6e18d6b258.png)\n\nThe strengths of DAFormer, compared to the previous state-of-the-art UDA method\nProDA, can also be observed in qualitative examples from the Cityscapes\nvalidation set.\n\n![Demo](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Flhoyer_DAFormer_readme_f2094bae4d6d.gif)\n![Color Palette](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Flhoyer_DAFormer_readme_50986a617588.png)\n\nDAFormer can be further **extended to domain generalization** lifting the requirement\nof access to target images. 
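\n\nTo make the **Rare Class Sampling** strategy above concrete, the following is a minimal, self-contained sketch of the sampling idea (an illustration under simplified assumptions, not the repository's implementation; the actual data loader lives in `mmseg\u002Fdatasets\u002Fuda_dataset.py`, and the class frequencies and temperature below are hypothetical):\n\n```python\nimport numpy as np\n\n# Sketch of Rare Class Sampling: classes with a low source-domain pixel\n# frequency f_c are drawn with a higher probability, controlled by a\n# temperature T, following P(c) ~ exp((1 - f_c) \u002F T).\ndef rare_class_sampling_probs(freq, T=0.01):\n    freq = np.asarray(freq, dtype=np.float64)\n    logits = (1.0 - freq) \u002F T\n    logits -= logits.max()  # subtract the max for numerical stability\n    p = np.exp(logits)\n    return p \u002F p.sum()\n\n# Toy example: three classes, the last one is rare.\np = rare_class_sampling_probs([0.70, 0.29, 0.01])\nrng = np.random.default_rng(0)\nc = rng.choice(3, p=p)  # then draw a source image that contains class c\n```\n\n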
Also in domain generalization,\nDAFormer significantly improves the state-of-the-art performance by **+6.5 mIoU**.\n\nFor more information on DAFormer, please check our\n[[CVPR Paper]](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2111.14887.pdf) and the [[Extension Paper]](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2304.13615.pdf).\n\nIf you find this project useful in your research, please consider citing:\n\n```\n@InProceedings{hoyer2022daformer,\n  title={{DAFormer}: Improving Network Architectures and Training Strategies for Domain-Adaptive Semantic Segmentation},\n  author={Hoyer, Lukas and Dai, Dengxin and Van Gool, Luc},\n  booktitle={Proceedings of the IEEE\u002FCVF Conference on Computer Vision and Pattern Recognition (CVPR)},\n  pages={9924--9935},\n  year={2022}\n}\n\n@Article{hoyer2024domain,\n  title={Domain Adaptive and Generalizable Network Architectures and Training Strategies for Semantic Image Segmentation},\n  author={Hoyer, Lukas and Dai, Dengxin and Van Gool, Luc},\n  journal={IEEE Transactions on Pattern Analysis and Machine Intelligence (PAMI)}, \n  year={2024},\n  volume={46},\n  number={1},\n  pages={220-235},\n  doi={10.1109\u002FTPAMI.2023.3320613}\n}\n```\n\n## Comparison with State-of-the-Art UDA\n\nDAFormer significantly outperforms previous works on several UDA benchmarks.\nThis includes synthetic-to-real adaptation on GTA→Cityscapes and\nSynthia→Cityscapes as well as clear-to-adverse-weather adaptation on\nCityscapes→ACDC and Cityscapes→DarkZurich.\n\n|                     | GTA→CS(val)    | Synthia→CS(val)    | CS→ACDC(test)   | CS→DarkZurich(test)   |\n|---------------------|----------------|--------------------|-----------------|-----------------------|\n| ADVENT [1]          | 45.5           | 41.2               | 32.7            | 29.7                  |\n| BDL [2]             | 48.5           | --                 | 37.7            | 30.8                  |\n| FDA [3]             | 50.5           | --                 | 45.7            | --                    |\n| DACS [4]            | 52.1           | 48.3               | --              | --                    |\n| ProDA [5]           | 57.5           | 55.5               | --              | --                    |\n| MGCDA [6]           | --             | --                 | 48.7            | 42.5                  |\n| DANNet [7]          | --             | --                 | 50.0            | 45.2                  |\n| **DAFormer (Ours)** | **68.3**       | **60.9**           | **55.4***       | **53.8***             |\n\n&ast; New results of our [extension paper](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2304.13615.pdf)\n\nReferences:\n\n1. Vu et al. \"Advent: Adversarial entropy minimization for domain adaptation in semantic segmentation\" in CVPR 2019.\n2. Li et al. \"Bidirectional learning for domain adaptation of semantic segmentation\" in CVPR 2019.\n3. Yang et al. \"Fda: Fourier domain adaptation for semantic segmentation\" in CVPR 2020.\n4. Tranheden et al. \"Dacs: Domain adaptation via cross-domain mixed sampling\" in WACV 2021.\n5. Zhang et al. \"Prototypical pseudo label denoising and target structure learning for domain adaptive semantic segmentation\" in CVPR 2021.\n6. Sakaridis et al. \"Map-guided curriculum domain adaptation and uncertainty-aware evaluation for semantic nighttime image segmentation\" in TPAMI, 2020.\n7. Wu et al. 
\"DANNet: A one-stage domain adaptation network for unsupervised nighttime semantic segmentation\" in CVPR, 2021.\n\n## Comparison with State-of-the-Art Domain Generalization (DG)\n\nDAFormer significantly outperforms previous works on domain generalization from GTA to real street scenes.\n\n| DG Method       | Cityscapes     | BDD100K        | Mapillary        | Avg.           |\n|-----------------|----------------|----------------|------------------|----------------|\n| IBN-Net [1,5]   | 37.37          | 34.21          | 36.81            | 36.13          |\n| DRPC [2]        | 42.53          | 38.72          | 38.05            | 39.77          |\n| ISW [3,5]       | 37.20          | 33.36          | 35.57            | 35.38          |\n| SAN-SAW [4]     | 45.33          | 41.18          | 40.77            | 42.43          |\n| SHADE [5]       | 46.66          | 43.66          | 45.50            | 45.27          |\n| DAFormer (Ours) | 52.65&ast;     | 47.89&ast;     | 54.66&ast;       | 51.73&ast;     |\n\n&ast; New results of our [extension paper](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2304.13615.pdf)\n\nReferences:\n\n1. Pan et al. \"Two at once: Enhancing learning and generalization capacities via IBN-Net\" in ECCV, 2018.\n2. Yue et al. \"Domain randomization and pyramid consistency: Simulation-to-real generalization without accessing target domain data\" ICCV, 2019.\n3. Choi et al. \"RobustNet: Improving Domain Generalization in Urban-Scene Segmentation via Instance Selective Whitening\" in CVPR, 2021.\n4. Peng et al. \"Semantic-aware domain generalized segmentation\" in CVPR, 2022.\n5. Zhao et al. \"Style-Hallucinated Dual Consistency Learning for Domain Generalized Semantic Segmentation\" in ECCV, 2022.\n\n## Setup Environment\n\nFor this project, we used python 3.8.5. 
We recommend setting up a new virtual\nenvironment:\n\n```shell\npython -m venv ~\u002Fvenv\u002Fdaformer\nsource ~\u002Fvenv\u002Fdaformer\u002Fbin\u002Factivate\n```\n\nIn that environment, the requirements can be installed with:\n\n```shell\npip install -r requirements.txt -f https:\u002F\u002Fdownload.pytorch.org\u002Fwhl\u002Ftorch_stable.html\npip install mmcv-full==1.3.7  # requires the other packages to be installed first\n```\n\nPlease, download the MiT ImageNet weights (b3-b5) provided by [SegFormer](https:\u002F\u002Fgithub.com\u002FNVlabs\u002FSegFormer?tab=readme-ov-file#training)\nfrom their [OneDrive](https:\u002F\u002Fconnecthkuhk-my.sharepoint.com\u002F:f:\u002Fg\u002Fpersonal\u002Fxieenze_connect_hku_hk\u002FEvOn3l1WyM5JpnMQFSEO5b8B7vrHw9kDaJGII-3N9KNhrg?e=cpydzZ) and put them in the folder `pretrained\u002F`.\nFurther, download the checkpoint of [DAFormer on GTA→Cityscapes](https:\u002F\u002Fdrive.google.com\u002Ffile\u002Fd\u002F1pG3kDClZDGwp1vSTEXmTchkGHmnLQNdP\u002Fview?usp=sharing) and extract it to the folder `work_dirs\u002F`.\n\nAll experiments were executed on an NVIDIA RTX 2080 Ti.\n\n## Inference Demo\n\nAlready at this point, the provided DAFormer model can be applied to a demo image:\n\n```shell\npython -m demo.image_demo demo\u002Fdemo.png work_dirs\u002F211108_1622_gta2cs_daformer_s0_7f24c\u002F211108_1622_gta2cs_daformer_s0_7f24c.json work_dirs\u002F211108_1622_gta2cs_daformer_s0_7f24c\u002Flatest.pth\n```\n\nWhen judging the predictions, please keep in mind that DAFormer had no access\nto real-world labels during training.\n\n## Setup Datasets\n\n**Cityscapes:** Please, download leftImg8bit_trainvaltest.zip and\ngt_trainvaltest.zip from [here](https:\u002F\u002Fwww.cityscapes-dataset.com\u002Fdownloads\u002F)\nand extract them to `data\u002Fcityscapes`.\n\n**GTA:** Please, download all image and label packages from\n[here](https:\u002F\u002Fdownload.visinf.tu-darmstadt.de\u002Fdata\u002Ffrom_games\u002F) and extract\nthem to `data\u002Fgta`.\n\n**Synthia (Optional):** Please, download SYNTHIA-RAND-CITYSCAPES from\n[here](http:\u002F\u002Fsynthia-dataset.net\u002Fdownloads\u002F) and extract it to `data\u002Fsynthia`.\n\n**ACDC (Optional):** Please, download rgb_anon_trainvaltest.zip and\ngt_trainval.zip from [here](https:\u002F\u002Facdc.vision.ee.ethz.ch\u002Fdownload) and\nextract them to `data\u002Facdc`. 
Further, please restructure the folders from\n`condition\u002Fsplit\u002Fsequence\u002F` to `split\u002F` using the following commands:\n\n```shell\nrsync -a data\u002Facdc\u002Frgb_anon\u002F*\u002Ftrain\u002F*\u002F* data\u002Facdc\u002Frgb_anon\u002Ftrain\u002F\nrsync -a data\u002Facdc\u002Frgb_anon\u002F*\u002Fval\u002F*\u002F* data\u002Facdc\u002Frgb_anon\u002Fval\u002F\nrsync -a data\u002Facdc\u002Fgt\u002F*\u002Ftrain\u002F*\u002F*_labelTrainIds.png data\u002Facdc\u002Fgt\u002Ftrain\u002F\nrsync -a data\u002Facdc\u002Fgt\u002F*\u002Fval\u002F*\u002F*_labelTrainIds.png data\u002Facdc\u002Fgt\u002Fval\u002F\n```\n\n**Dark Zurich (Optional):** Please, download the Dark_Zurich_train_anon.zip\nand Dark_Zurich_val_anon.zip from\n[here](https:\u002F\u002Fwww.trace.ethz.ch\u002Fpublications\u002F2019\u002FGCMA_UIoU\u002F) and extract it\nto `data\u002Fdark_zurich`.\n\nThe final folder structure should look like this:\n\n```none\nDAFormer\n├── ...\n├── data\n│   ├── acdc (optional)\n│   │   ├── gt\n│   │   │   ├── train\n│   │   │   ├── val\n│   │   ├── rgb_anon\n│   │   │   ├── train\n│   │   │   ├── val\n│   ├── cityscapes\n│   │   ├── leftImg8bit\n│   │   │   ├── train\n│   │   │   ├── val\n│   │   ├── gtFine\n│   │   │   ├── train\n│   │   │   ├── val\n│   ├── dark_zurich (optional)\n│   │   ├── gt\n│   │   │   ├── val\n│   │   ├── rgb_anon\n│   │   │   ├── train\n│   │   │   ├── val\n│   ├── gta\n│   │   ├── images\n│   │   ├── labels\n│   ├── synthia (optional)\n│   │   ├── RGB\n│   │   ├── GT\n│   │   │   ├── LABELS\n├── ...\n```\n\n**Data Preprocessing:** Finally, please run the following scripts to convert the label IDs to the\ntrain IDs and to generate the class index for RCS:\n\n```shell\npython tools\u002Fconvert_datasets\u002Fgta.py data\u002Fgta --nproc 8\npython tools\u002Fconvert_datasets\u002Fcityscapes.py data\u002Fcityscapes --nproc 8\npython tools\u002Fconvert_datasets\u002Fsynthia.py data\u002Fsynthia\u002F --nproc 8\n```\n\n## Training\n\nFor convenience, we provide an [annotated config file](configs\u002Fdaformer\u002Fgta2cs_uda_warm_fdthings_rcs_croppl_a999_daformer_mitb5_s0.py) of the final DAFormer.\nA training job can be launched using:\n\n```shell\npython run_experiments.py --config configs\u002Fdaformer\u002Fgta2cs_uda_warm_fdthings_rcs_croppl_a999_daformer_mitb5_s0.py\n```\n\nFor the experiments in our paper (e.g. network architecture comparison,\ncomponent ablations, ...), we use a system to automatically generate\nand train the configs:\n\n```shell\npython run_experiments.py --exp \u003CID>\n```\n\nMore information about the available experiments and their assigned IDs can be\nfound in [experiments.py](experiments.py). The generated configs will be stored\nin `configs\u002Fgenerated\u002F`.\n\n## Testing & Predictions\n\nThe provided DAFormer checkpoint trained on GTA→Cityscapes\n(already downloaded by `tools\u002Fdownload_checkpoints.sh`) can be tested on the\nCityscapes validation set using:\n\n```shell\nsh test.sh work_dirs\u002F211108_1622_gta2cs_daformer_s0_7f24c\n```\n\nThe predictions are saved for inspection to\n`work_dirs\u002F211108_1622_gta2cs_daformer_s0_7f24c\u002Fpreds`\nand the mIoU of the model is printed to the console. The provided checkpoint\nshould achieve 68.85 mIoU. 
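\n\nFor a quick look at the final evaluation summary in the bundled training log, a few lines of Python are sufficient (a generic sketch, not a script shipped with this repository; any equivalent shell command such as `tail` works, and the path assumes the provided checkpoint was extracted to `work_dirs\u002F` as described in the setup section):\n\n```python\nfrom pathlib import Path\n\n# Show the final evaluation summary (including the class-wise IoU) at the\n# end of the training log that ships with the provided checkpoint.\nlog = Path('work_dirs\u002F211108_1622_gta2cs_daformer_s0_7f24c\u002F20211108_164105.log')\nfor line in log.read_text().splitlines()[-40:]:\n    print(line)\n```\n\n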
The end of\n`work_dirs\u002F211108_1622_gta2cs_daformer_s0_7f24c\u002F20211108_164105.log` provides\nmore information such as the class-wise IoU.\n\nSimilarly, other models can be tested after training has finished:\n\n```shell\nsh test.sh path\u002Fto\u002Fcheckpoint_directory\n```\n\nWhen evaluating a model trained on Synthia→Cityscapes, please note that the\nevaluation script calculates the mIoU for all 19 Cityscapes classes. However,\nSynthia contains only labels for 16 of these classes. Therefore, it is a common\npractice in UDA to report the mIoU for Synthia→Cityscapes only on these 16\nclasses. As the IoU for the 3 missing classes is 0, you can do the conversion\nmIoU16 = mIoU19 * 19 \u002F 16 (e.g., a logged mIoU19 of 51.3 corresponds to\nmIoU16 = 51.3 * 19 \u002F 16 ≈ 60.9).\n\nThe results for Cityscapes→ACDC and Cityscapes→DarkZurich are reported on\nthe test split of the target dataset. To generate the predictions for the test\nset, please run:\n\n```shell\npython -m tools.test path\u002Fto\u002Fconfig_file path\u002Fto\u002Fcheckpoint_file --test-set --format-only --eval-option imgfile_prefix=labelTrainIds to_label_id=False\n```\n\nThe predictions can be submitted to the public evaluation server of the\nrespective dataset to obtain the test score.\n\n## Domain Generalization\n\nFor the domain generalization extension of DAFormer, please refer to\nthe DG branch of the HRDA repository: [https:\u002F\u002Fgithub.com\u002Flhoyer\u002FHRDA\u002Ftree\u002Fdg](https:\u002F\u002Fgithub.com\u002Flhoyer\u002FHRDA\u002Ftree\u002Fdg)\n\n## Checkpoints\n\nBelow, we provide checkpoints of DAFormer for different benchmarks.\nAs the results in the paper are provided as the mean over three random\nseeds, we provide the checkpoint with the median validation performance here.\n\n* [DAFormer for GTA→Cityscapes](https:\u002F\u002Fdrive.google.com\u002Ffile\u002Fd\u002F1pG3kDClZDGwp1vSTEXmTchkGHmnLQNdP\u002Fview?usp=sharing)\n* [DAFormer for Synthia→Cityscapes](https:\u002F\u002Fdrive.google.com\u002Ffile\u002Fd\u002F1V9EpoTePjGq33B8MfombxEEcq9a2rBEt\u002Fview?usp=sharing)\n* [DAFormer for Cityscapes→ACDC](https:\u002F\u002Fdrive.google.com\u002Ffile\u002Fd\u002F16RSBkzJbGprWr04LjyNleqRzRZgCaEBn\u002Fview?usp=sharing)\n* [DAFormer for Cityscapes→DarkZurich](https:\u002F\u002Fdrive.google.com\u002Ffile\u002Fd\u002F1_VXKDhnp4x4sslBj5B8tqqBJXeOuI9hS\u002Fview?usp=sharing)\n* [DAFormer for GTA Domain Generalization](https:\u002F\u002Fdrive.google.com\u002Ffile\u002Fd\u002F1up9x3R3HtU_MjM6F89xNIHzPbIqBSacx\u002Fview?usp=sharing)\n\nThe checkpoints come with the training logs. Please note that:\n\n* The logs provide the mIoU for 19 classes. For Synthia→Cityscapes, it is\n  necessary to convert the mIoU to the 16 valid classes. Please, read the\n  section above for converting the mIoU.\n* The logs provide the mIoU on the validation set. For Cityscapes→ACDC and\n  Cityscapes→DarkZurich the results reported in the paper are calculated on the\n  test split. For DarkZurich, the performance significantly differs between\n  validation and test split. 
Please, read the section above on how to obtain\nthe test mIoU.\n\n## Framework Structure\n\nThis project is based on [mmsegmentation version 0.16.0](https:\u002F\u002Fgithub.com\u002Fopen-mmlab\u002Fmmsegmentation\u002Ftree\u002Fv0.16.0).\nFor more information about the framework structure and the config system,\nplease refer to the [mmsegmentation documentation](https:\u002F\u002Fmmsegmentation.readthedocs.io\u002Fen\u002Flatest\u002Findex.html)\nand the [mmcv documentation](https:\u002F\u002Fmmcv.readthedocs.io\u002Fen\u002Fv1.3.7\u002Findex.html).\n\nThe most relevant files for DAFormer are:\n\n* [configs\u002Fdaformer\u002Fgta2cs_uda_warm_fdthings_rcs_croppl_a999_daformer_mitb5_s0.py](configs\u002Fdaformer\u002Fgta2cs_uda_warm_fdthings_rcs_croppl_a999_daformer_mitb5_s0.py):\n  Annotated config file for the final DAFormer.\n* [mmseg\u002Fmodels\u002Fuda\u002Fdacs.py](mmseg\u002Fmodels\u002Fuda\u002Fdacs.py):\n  Implementation of UDA self-training with ImageNet Feature Distance.\n* [mmseg\u002Fdatasets\u002Fuda_dataset.py](mmseg\u002Fdatasets\u002Fuda_dataset.py):\n  Data loader for UDA with Rare Class Sampling.\n* [mmseg\u002Fmodels\u002Fdecode_heads\u002Fdaformer_head.py](mmseg\u002Fmodels\u002Fdecode_heads\u002Fdaformer_head.py):\n  Implementation of DAFormer decoder with context-aware feature fusion.\n* [mmseg\u002Fmodels\u002Fbackbones\u002Fmix_transformer.py](mmseg\u002Fmodels\u002Fbackbones\u002Fmix_transformer.py):\n  Implementation of Mix Transformer encoder (MiT).\n\n## Acknowledgements\n\nThis project is based on the following open-source projects. We thank their\nauthors for making the source code publicly available.\n\n* [MMSegmentation](https:\u002F\u002Fgithub.com\u002Fopen-mmlab\u002Fmmsegmentation)\n* [SegFormer](https:\u002F\u002Fgithub.com\u002FNVlabs\u002FSegFormer)\n* [DACS](https:\u002F\u002Fgithub.com\u002Fvikolss\u002FDACS)\n\n## License\n\nThis project is released under the [Apache License 2.0](LICENSE), while some \nspecific features in this repository are covered by other licenses. 
Please refer to\n[LICENSES.md](LICENSES.md) and check carefully if you are using our code for\ncommercial purposes.\n","## DAFormer：改进领域自适应语义分割的网络架构与训练策略\n\n**作者：[Lukas Hoyer](https:\u002F\u002Flhoyer.github.io\u002F), [Dengxin Dai](https:\u002F\u002Fvas.mpi-inf.mpg.de\u002Fdengxin\u002F), 和 [Luc Van Gool](https:\u002F\u002Fscholar.google.de\u002Fcitations?user=TwMib_QAAAAJ&hl=en)**\n\n**[[CVPR22 论文]](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2111.14887.pdf)**\n**[[扩展论文]](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2304.13615.pdf)**\n\n:bell: **新闻：**\n\n* [2024-07-03] 我们很高兴地宣布，我们的工作 [SemiVL](https:\u002F\u002Fgithub.com\u002Fgoogle-research\u002Fsemivl)（具有视觉-语言引导的半监督语义分割）已被 **ECCV24** 接收。\n* [2024-07-03] 我们很高兴地宣布，我们的后续工作 [DGInStyle](https:\u002F\u002Fdginstyle.github.io\u002F)（用于域泛化语义分割的图像扩散）已被 **ECCV24** 接收。\n* [2023-09-26] 我们很高兴地宣布，我们的 [扩展论文](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2304.13615.pdf)（关于域泛化和从清晰到恶劣天气的无监督域适应）已被 **PAMI** 接收。\n* [2023-08-25] 我们很高兴地宣布，我们的后续工作 [EDAPS](https:\u002F\u002Fgithub.com\u002Fsusaha\u002Fedaps)（关于全景分割 UDA）已被 **ICCV23** 接收。\n* [2023-04-23] 我们在 [扩展论文](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2304.13615.pdf) 中进一步将 DAFormer 扩展到域泛化和从清晰到恶劣天气的无监督域适应 (UDA)。\n* [2023-02-28] 我们很高兴地宣布，我们的后续工作 [MIC](https:\u002F\u002Fgithub.com\u002Flhoyer\u002FMIC)（关于上下文增强的 UDA）已被 **CVPR23** 接收。\n* [2022-07-06] 我们很高兴地宣布，我们的后续工作 [HRDA](https:\u002F\u002Fgithub.com\u002Flhoyer\u002FHRDA)（关于高分辨率 UDA）已被 **ECCV22** 接收。\n* [2022-03-09] 我们很高兴地宣布，DAFormer 已被 **CVPR22** 接收。\n\n## 概述\n\n由于获取真实图像的像素级标注是一个昂贵的过程，模型可以使用更易获得的合成数据进行训练，并适应真实图像，而无需其标注。这个过程在**无监督域适应 (Unsupervised Domain Adaptation, UDA)** 中被研究。\n\n尽管大量方法提出了新的 UDA 策略，但它们大多基于过时的网络架构。在这项工作中，我们特别研究了网络架构对 UDA 性能的影响，并提出了 **DAFormer**，一种专为 UDA 设计的网络架构。它由一个 Transformer 编码器和一个多级上下文感知特征融合解码器组成。\n\nDAFormer 得益于三种简单但关键的训练策略，用以稳定训练并避免过拟合源域：源域上的**稀有类别采样 (Rare Class Sampling)** 通过减轻自训练对常见类别的确认偏差来提高伪标签的质量，而**物体类 ImageNet 特征距离**和**学习率预热 (Learning Rate Warmup)** 则促进了来自 ImageNet 预训练的特征迁移。\n\nDAFormer 显著提升了最先进 (SOTA) 性能，**GTA→Cityscapes 提升了 10.8 mIoU (平均交并比)**，**Synthia→Cityscapes 提升了 5.4 mIoU**，并且能够很好地学习火车、公交车和卡车等困难类别。\n\n![UDA over time](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Flhoyer_DAFormer_readme_ab6e18d6b258.png)\n\n与之前的 SOTA UDA 方法 ProDA 相比，DAFormer 的优势也可以在 Cityscapes 验证集的定性示例中观察到。\n\n![Demo](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Flhoyer_DAFormer_readme_f2094bae4d6d.gif)\n![Color Palette](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Flhoyer_DAFormer_readme_50986a617588.png)\n\nDAFormer 可以进一步**扩展到域泛化**，从而不再需要访问目标图像。在域泛化任务中，DAFormer 同样将 SOTA 性能显著提高了 **+6.5 mIoU**。\n\n有关 DAFormer 的更多信息，请查看我们的 [[CVPR 论文]](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2111.14887.pdf) 和 [[扩展论文]](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2304.13615.pdf)。\n\n如果您认为本项目对您的研究有用，请考虑引用：\n\n```\n@InProceedings{hoyer2022daformer,\n  title={{DAFormer}: Improving Network Architectures and Training Strategies for Domain-Adaptive Semantic Segmentation},\n  author={Hoyer, Lukas and Dai, Dengxin and Van Gool, Luc},\n  booktitle={Proceedings of the IEEE\u002FCVF Conference on Computer Vision and Pattern Recognition (CVPR)},\n  pages={9924--9935},\n  year={2022}\n}\n\n@Article{hoyer2024domain,\n  title={Domain Adaptive and Generalizable Network Architectures and Training Strategies for Semantic Image Segmentation},\n  author={Hoyer, Lukas and Dai, Dengxin and Van Gool, Luc},\n  journal={IEEE Transactions on Pattern Analysis and Machine Intelligence (PAMI)}, \n  year={2024},\n  volume={46},\n  number={1},\n  pages={220-235},\n  
doi={10.1109\u002FTPAMI.2023.3320613}\n}\n```\n\n## 与最先进 UDA 方法的比较\n\nDAFormer 在多个 UDA 基准测试上显著优于以往的工作。这包括 GTA→Cityscapes 和 Synthia→Cityscapes 的合成到真实适应，以及 Cityscapes→ACDC 和 Cityscapes→DarkZurich 的从清晰到恶劣天气的适应。\n\n|                     | GTA→CS(val)    | Synthia→CS(val)    | CS→ACDC(test)   | CS→DarkZurich(test)   |\n|---------------------|----------------|--------------------|-----------------|-----------------------|\n| ADVENT [1]          | 45.5           | 41.2               | 32.7            | 29.7                  |\n| BDL [2]             | 48.5           | --                 | 37.7            | 30.8                  |\n| FDA [3]             | 50.5           | --                 | 45.7            | --                    |\n| DACS [4]            | 52.1           | 48.3               | --              | --                    |\n| ProDA [5]           | 57.5           | 55.5               | --              | --                    |\n| MGCDA [6]           | --             | --                 | 48.7            | 42.5                  |\n| DANNet [7]          | --             | --                 | 50.0            | 45.2                  |\n| **DAFormer (Ours)** | **68.3**       | **60.9**           | **55.4***       | **53.8***             |\n\n&ast; 我们 [扩展论文](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2304.13615.pdf) 的新结果\n\n参考文献：\n\n1. Vu et al. \"Advent: Adversarial entropy minimization for domain adaptation in semantic segmentation\" in CVPR 2019.\n2. Li et al. \"Bidirectional learning for domain adaptation of semantic segmentation\" in CVPR 2019.\n3. Yang et al. \"Fda: Fourier domain adaptation for semantic segmentation\" in CVPR 2020.\n4. Tranheden et al. \"Dacs: Domain adaptation via cross-domain mixed sampling\" in WACV 2021.\n5. Zhang et al. \"Prototypical pseudo label denoising and target structure learning for domain adaptive semantic segmentation\" in CVPR 2021.\n6. Sakaridis et al. \"Map-guided curriculum domain adaptation and uncertainty-aware evaluation for semantic nighttime image segmentation\" in TPAMI, 2020.\n7. Wu et al. \"DANNet: A one-stage domain adaptation network for unsupervised nighttime semantic segmentation\" in CVPR, 2021.\n\n## 与最先进领域泛化（Domain Generalization, DG）方法的对比\n\nDAFormer 在从 GTA 到真实街道场景的领域泛化任务上显著优于之前的工作。\n\n| DG 方法       | Cityscapes     | BDD100K        | Mapillary        | Avg.           |\n|-----------------|----------------|----------------|------------------|----------------|\n| IBN-Net [1,5]   | 37.37          | 34.21          | 36.81            | 36.13          |\n| DRPC [2]        | 42.53          | 38.72          | 38.05            | 39.77          |\n| ISW [3,5]       | 37.20          | 33.36          | 35.57            | 35.38          |\n| SAN-SAW [4]     | 45.33          | 41.18          | 40.77            | 42.43          |\n| SHADE [5]       | 46.66          | 43.66          | 45.50            | 45.27          |\n| DAFormer (本文) | 52.65&ast;     | 47.89&ast;     | 54.66&ast;       | 51.73&ast;     |\n\n&ast; 我们 [扩展论文](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2304.13615.pdf) 的新结果\n\n参考文献：\n\n1. Pan et al. \"Two at once: Enhancing learning and generalization capacities via IBN-Net\" in ECCV, 2018.\n2. Yue et al. \"Domain randomization and pyramid consistency: Simulation-to-real generalization without accessing target domain data\" ICCV, 2019.\n3. Choi et al. \"RobustNet: Improving Domain Generalization in Urban-Scene Segmentation via Instance Selective Whitening\" in CVPR, 2021.\n4. Peng et al. 
\"Semantic-aware domain generalized segmentation\" in CVPR, 2022.\n5. Zhao et al. \"Style-Hallucinated Dual Consistency Learning for Domain Generalized Semantic Segmentation\" in ECCV, 2022.\n\n## 环境设置\n\n本项目使用 Python 3.8.5。我们建议设置一个新的虚拟环境：\n\n```shell\npython -m venv ~\u002Fvenv\u002Fdaformer\nsource ~\u002Fvenv\u002Fdaformer\u002Fbin\u002Factivate\n```\n\n在该环境中，可以使用以下命令安装依赖项：\n\n```shell\npip install -r requirements.txt -f https:\u002F\u002Fdownload.pytorch.org\u002Fwhl\u002Ftorch_stable.html\npip install mmcv-full==1.3.7  # requires the other packages to be installed first\n```\n\n请从 [SegFormer](https:\u002F\u002Fgithub.com\u002FNVlabs\u002FSegFormer?tab=readme-ov-file#training) 的 [OneDrive](https:\u002F\u002Fconnecthkuhk-my.sharepoint.com\u002F:f:\u002Fg\u002Fpersonal\u002Fxieenze_connect_hku_hk\u002FEvOn3l1WyM5JpnMQFSEO5b8B7vrHw9kDaJGII-3N9KNhrg?e=cpydzZ) 下载 MiT ImageNet 权重（b3-b5），并将它们放入 `pretrained\u002F` 文件夹中。此外，请下载 [DAFormer 在 GTA→Cityscapes 上的检查点](https:\u002F\u002Fdrive.google.com\u002Ffile\u002Fd\u002F1pG3kDClZDGwp1vSTEXmTchkGHmnLQNdP\u002Fview?usp=sharing) 并解压到 `work_dirs\u002F` 文件夹中。\n\n所有实验均在 NVIDIA RTX 2080 Ti 上执行。\n\n## 推理演示\n\n在此阶段，提供的 DAFormer 模型即可应用于演示图像：\n\n```shell\npython -m demo.image_demo demo\u002Fdemo.png work_dirs\u002F211108_1622_gta2cs_daformer_s0_7f24c\u002F211108_1622_gta2cs_daformer_s0_7f24c.json work_dirs\u002F211108_1622_gta2cs_daformer_s0_7f24c\u002Flatest.pth\n```\n\n在评估预测结果时，请注意 DAFormer 在训练期间无法访问真实世界的标签。\n\n## 数据集设置\n\n**Cityscapes：** 请从 [此处](https:\u002F\u002Fwww.cityscapes-dataset.com\u002Fdownloads\u002F) 下载 leftImg8bit_trainvaltest.zip 和 gt_trainvaltest.zip，并将它们解压到 `data\u002Fcityscapes`。\n\n**GTA：** 请从 [此处](https:\u002F\u002Fdownload.visinf.tu-darmstadt.de\u002Fdata\u002Ffrom_games\u002F) 下载所有图像和标签包，并将它们解压到 `data\u002Fgta`。\n\n**Synthia（可选）：** 请从 [此处](http:\u002F\u002Fsynthia-dataset.net\u002Fdownloads\u002F) 下载 SYNTHIA-RAND-CITYSCAPES，并将其解压到 `data\u002Fsynthia`。\n\n**ACDC（可选）：** 请从 [此处](https:\u002F\u002Facdc.vision.ee.ethz.ch\u002Fdownload) 下载 rgb_anon_trainvaltest.zip 和 gt_trainval.zip，并将它们解压到 `data\u002Facdc`。此外，请使用以下命令将文件夹结构从 `condition\u002Fsplit\u002Fsequence\u002F` 重命名为 `split\u002F`：\n\n```shell\nrsync -a data\u002Facdc\u002Frgb_anon\u002F*\u002Ftrain\u002F*\u002F* data\u002Facdc\u002Frgb_anon\u002Ftrain\u002F\nrsync -a data\u002Facdc\u002Frgb_anon\u002F*\u002Fval\u002F*\u002F* data\u002Facdc\u002Frgb_anon\u002Fval\u002F\nrsync -a data\u002Facdc\u002Fgt\u002F*\u002Ftrain\u002F*\u002F*_labelTrainIds.png data\u002Facdc\u002Fgt\u002Ftrain\u002F\nrsync -a data\u002Facdc\u002Fgt\u002F*\u002Fval\u002F*\u002F*_labelTrainIds.png data\u002Facdc\u002Fgt\u002Fval\u002F\n```\n\n**Dark Zurich（可选）：** 请从 [此处](https:\u002F\u002Fwww.trace.ethz.ch\u002Fpublications\u002F2019\u002FGCMA_UIoU\u002F) 下载 Dark_Zurich_train_anon.zip 和 Dark_Zurich_val_anon.zip，并将它们解压到 `data\u002Fdark_zurich`。\n\n最终的文件夹结构应如下所示：\n\n```none\nDAFormer\n├── ...\n├── data\n│   ├── acdc (可选)\n│   │   ├── gt\n│   │   │   ├── train\n│   │   │   ├── val\n│   │   ├── rgb_anon\n│   │   │   ├── train\n│   │   │   ├── val\n│   ├── cityscapes\n│   │   ├── leftImg8bit\n│   │   │   ├── train\n│   │   │   ├── val\n│   │   ├── gtFine\n│   │   │   ├── train\n│   │   │   ├── val\n│   ├── dark_zurich (可选)\n│   │   ├── gt\n│   │   │   ├── val\n│   │   ├── rgb_anon\n│   │   │   ├── train\n│   │   │   ├── val\n│   ├── gta\n│   │   ├── images\n│   │   ├── labels\n│   ├── synthia (可选)\n│   │   ├── RGB\n│   │   ├── GT\n│   │   │   ├── LABELS\n├── ...\n```\n\n**数据预处理：** 最后，请运行以下脚本将标签 ID 转换为训练 ID，并为 RCS 
生成类别索引：\n\n```shell\npython tools\u002Fconvert_datasets\u002Fgta.py data\u002Fgta --nproc 8\npython tools\u002Fconvert_datasets\u002Fcityscapes.py data\u002Fcityscapes --nproc 8\npython tools\u002Fconvert_datasets\u002Fsynthia.py data\u002Fsynthia\u002F --nproc 8\n```\n\n## 训练\n\n为了方便起见，我们提供了最终 DAFormer 的 [注释配置文件](configs\u002Fdaformer\u002Fgta2cs_uda_warm_fdthings_rcs_croppl_a999_daformer_mitb5_s0.py)。可以使用以下命令启动训练任务：\n\n```shell\npython run_experiments.py --config configs\u002Fdaformer\u002Fgta2cs_uda_warm_fdthings_rcs_croppl_a999_daformer_mitb5_s0.py\n```\n\n对于我们在论文中的实验（例如网络架构比较、组件消融等），我们使用一个系统来自动生成和训练配置：\n\n```shell\npython run_experiments.py --exp \u003CID>\n```\n\n有关可用实验及其分配 ID 的更多信息，请参见 [experiments.py](experiments.py)。生成的配置文件将存储在 `configs\u002Fgenerated\u002F` 中。\n\n## 测试与预测\n\n提供的在 GTA→Cityscapes 上训练的 DAFormer checkpoint (检查点)（已通过 `tools\u002Fdownload_checkpoints.sh` 下载）可以使用以下命令在 Cityscapes 验证集上进行测试：\n\n```shell\nsh test.sh work_dirs\u002F211108_1622_gta2cs_daformer_s0_7f24c\n```\n\n预测结果将保存至 `work_dirs\u002F211108_1622_gta2cs_daformer_s0_7f24c\u002Fpreds` 以供检查，模型的 mIoU (平均交并比) 将打印到控制台。提供的 checkpoint (检查点) 应能达到 68.85 的 mIoU。有关更多信息（例如各类别的 IoU (交并比)），请参阅 `work_dirs\u002F211108_1622_gta2cs_daformer_s0_7f24c\u002F20211108_164105.log` 的末尾。\n\n同样，训练完成后也可以测试其他模型：\n\n```shell\nsh test.sh path\u002Fto\u002Fcheckpoint_directory\n```\n\n当评估在 Synthia→Cityscapes 上训练的模型时，请注意评估脚本计算所有 19 个 Cityscapes 类别的 mIoU (平均交并比)。然而，Synthia 仅包含其中 16 个类别的标签。因此，在 UDA (无监督域适应) 中，通常仅针对这 16 个类别报告 Synthia→Cityscapes 的 mIoU (平均交并比)。由于缺失的 3 个类别的 IoU (交并比) 为 0，您可以进行转换：mIoU16 = mIoU19 * 19 \u002F 16。\n\nCityscapes→ACDC 和 Cityscapes→DarkZurich 的结果是在目标数据集的测试集划分上报告的。要为测试集生成预测，请运行：\n\n```shell\npython -m tools.test path\u002Fto\u002Fconfig_file path\u002Fto\u002Fcheckpoint_file --test-set --format-only --eval-option imgfile_prefix=labelTrainIds to_label_id=False\n```\n\n可以将预测结果提交到相应数据集的公共评估服务器以获取测试分数。\n\n## 域泛化\n\n关于 DAFormer 的域泛化扩展，请参考 HRDA 仓库的 DG 分支：[https:\u002F\u002Fgithub.com\u002Flhoyer\u002FHRDA\u002Ftree\u002Fdg](https:\u002F\u002Fgithub.com\u002Flhoyer\u002FHRDA\u002Ftree\u002Fdg)\n\n## 检查点\n\n下面，我们提供了不同基准下的 DAFormer checkpoint (检查点)。由于论文中的结果是三个随机种子的平均值，我们在此提供具有中位数验证性能的 checkpoint (检查点)。\n\n* [DAFormer 用于 GTA→Cityscapes](https:\u002F\u002Fdrive.google.com\u002Ffile\u002Fd\u002F1pG3kDClZDGwp1vSTEXmTchkGHmnLQNdP\u002Fview?usp=sharing)\n* [DAFormer 用于 Synthia→Cityscapes](https:\u002F\u002Fdrive.google.com\u002Ffile\u002Fd\u002F1V9EpoTePjGq33B8MfombxEEcq9a2rBEt\u002Fview?usp=sharing)\n* [DAFormer 用于 Cityscapes→ACDC](https:\u002F\u002Fdrive.google.com\u002Ffile\u002Fd\u002F16RSBkzJbGprWr04LjyNleqRzRZgCaEBn\u002Fview?usp=sharing)\n* [DAFormer 用于 Cityscapes→DarkZurich](https:\u002F\u002Fdrive.google.com\u002Ffile\u002Fd\u002F1_VXKDhnp4x4sslBj5B8tqqBJXeOuI9hS\u002Fview?usp=sharing)\n* [DAFormer 用于 GTA 域泛化](https:\u002F\u002Fdrive.google.com\u002Ffile\u002Fd\u002F1up9x3R3HtU_MjM6F89xNIHzPbIqBSacx\u002Fview?usp=sharing)\n\nCheckpoint (检查点) 附带训练日志。请注意：\n\n* 日志提供 19 个类别的 mIoU (平均交并比)。对于 Synthia→Cityscapes，需要将 mIoU (平均交并比) 转换为 16 个有效类别。请参阅上述部分了解如何转换 mIoU (平均交并比)。\n* 日志提供验证集上的 mIoU (平均交并比)。对于 Cityscapes→ACDC 和 Cityscapes→DarkZurich，论文中报告的结果是基于测试集划分计算的。对于 DarkZurich，验证集和测试集划分的性能差异显著。请参阅上述部分了解如何获取测试 mIoU (平均交并比)。\n\n## 框架结构\n\n本项目基于 [mmsegmentation version 0.16.0](https:\u002F\u002Fgithub.com\u002Fopen-mmlab\u002Fmmsegmentation\u002Ftree\u002Fv0.16.0)。有关框架结构和配置系统的更多信息，请参阅 [mmsegmentation 文档](https:\u002F\u002Fmmsegmentation.readthedocs.io\u002Fen\u002Flatest\u002Findex.html) 和 [mmcv 
文档](https:\u002F\u002Fmmcv.readthedocs.io\u002Fen\u002Fv1.3.7\u002Findex.html)。\n\nDAFormer 最相关的文件如下：\n\n* [configs\u002Fdaformer\u002Fgta2cs_uda_warm_fdthings_rcs_croppl_a999_daformer_mitb5_s0.py](configs\u002Fdaformer\u002Fgta2cs_uda_warm_fdthings_rcs_croppl_a999_daformer_mitb5_s0.py):\n  最终 DAFormer 的注释配置文件。\n* [mmseg\u002Fmodels\u002Fuda\u002Fdacs.py](mmseg\u002Fmodels\u002Fuda\u002Fdacs.py):\n  带有 ImageNet 特征距离的 UDA (无监督域适应) 自训练实现。\n* [mmseg\u002Fdatasets\u002Fuda_dataset.py](mmseg\u002Fdatasets\u002Fuda_dataset.py):\n  带有稀有类采样的 UDA (无监督域适应) 数据加载器。\n* [mmseg\u002Fmodels\u002Fdecode_heads\u002Fdaformer_head.py](mmseg\u002Fmodels\u002Fdecode_heads\u002Fdaformer_head.py):\n  带有上下文感知特征融合的 DAFormer 解码器实现。\n* [mmseg\u002Fmodels\u002Fbackbones\u002Fmix_transformer.py](mmseg\u002Fmodels\u002Fbackbones\u002Fmix_transformer.py):\n  Mix Transformer 编码器 (MiT) 的实现。\n\n## 致谢\n\n本项目基于以下开源项目。感谢其作者公开源代码。\n\n* [MMSegmentation](https:\u002F\u002Fgithub.com\u002Fopen-mmlab\u002Fmmsegmentation)\n* [SegFormer](https:\u002F\u002Fgithub.com\u002FNVlabs\u002FSegFormer)\n* [DACS](https:\u002F\u002Fgithub.com\u002Fvikolss\u002FDACS)\n\n## 许可证\n\n本项目根据 [Apache License 2.0](LICENSE) 发布，而该存储库中的一些特定功能可能使用其他许可证。如果您将我们的代码用于商业用途，请参阅 [LICENSES.md](LICENSES.md) 进行仔细检查。","# DAFormer 快速上手指南\n\n**DAFormer** 是一种专为无监督领域自适应（UDA）语义分割设计的网络架构。它通过 Transformer 编码器和多尺度上下文感知特征融合解码器，显著提升了从合成数据到真实场景的分割性能。\n\n## 环境准备\n\n建议配置如下：\n*   **操作系统**: Linux (推荐) \u002F Windows (需 WSL)\n*   **Python 版本**: 3.8.5\n*   **GPU**: NVIDIA (如 RTX 2080 Ti)\n*   **依赖**: PyTorch, MMSegmentation (通过 `mmcv-full`)\n\n建议使用虚拟环境隔离依赖：\n\n```shell\npython -m venv ~\u002Fvenv\u002Fdaformer\nsource ~\u002Fvenv\u002Fdaformer\u002Fbin\u002Factivate\n```\n\n## 安装步骤\n\n在激活的虚拟环境中，依次执行以下命令安装依赖并下载预训练权重。\n\n### 1. 安装依赖包\n```shell\npip install -r requirements.txt -f https:\u002F\u002Fdownload.pytorch.org\u002Fwhl\u002Ftorch_stable.html\npip install mmcv-full==1.3.7  # 确保其他包已先安装\n```\n\n### 2. 
下载预训练模型\n请将以下文件下载至指定目录：\n\n*   **MiT ImageNet 权重 (b3-b5)**:\n    从 [SegFormer OneDrive](https:\u002F\u002Fconnecthkuhk-my.sharepoint.com\u002F:f:\u002Fg\u002Fpersonal\u002Fxieenze_connect_hku_hk\u002FEvOn3l1WyM5JpnMQFSEO5b8B7vrHw9kDaJGII-3N9KNhrg?e=cpydzZ) 下载，放入 `pretrained\u002F` 文件夹。\n*   **DAFormer 检查点 (GTA→Cityscapes)**:\n    从 [Google Drive](https:\u002F\u002Fdrive.google.com\u002Ffile\u002Fd\u002F1pG3kDClZDGwp1vSTEXmTchkGHmnLQNdP\u002Fview?usp=sharing) 下载并解压，放入 `work_dirs\u002F` 文件夹。\n\n> **提示**: 国内用户访问上述链接可能需要网络加速工具。\n\n## 基本使用\n\n### 推理演示 (Inference Demo)\n安装完成后，可直接对示例图片进行推理测试：\n\n```shell\npython -m demo.image_demo demo\u002Fdemo.png work_dirs\u002F211108_1622_gta2cs_daformer_s0_7f24c\u002F211108_1622_gta2cs_daformer_s0_7f24c.json work_dirs\u002F211108_1622_gta2cs_daformer_s0_7f24c\u002Flatest.pth\n```\n\n### 数据集准备\n若需进行训练或自定义测试，请按照以下结构组织数据：\n\n```none\ndata\n├── cityscapes       # 必须：leftImg8bit_trainvaltest.zip, gt_trainvaltest.zip\n├── gta              # 必须：图像与标签包\n├── synthia          # 可选\n├── acdc             # 可选\n└── dark_zurich      # 可选\n```\n\n具体下载链接及文件夹重命名脚本请参考项目官方 README。对于新数据集，还需运行预处理脚本来转换标签 ID 并生成类索引。","某自动驾驶初创团队正在开发城市道路感知系统，拥有大量合成驾驶数据，但缺乏标注好的真实路测图像。\n\n### 没有 DAFormer 时\n- 直接训练导致模型在真实场景中泛化能力差，车辆和行人识别率低，无法满足安全上路标准。\n- 为了提升效果，不得不投入大量人力对真实路测视频进行像素级标注，成本高昂且周期长达数月。\n- 传统 UDA 方法基于旧架构，难以捕捉复杂场景下的上下文信息，小目标如交通锥漏检严重。\n- 模型容易过拟合源域数据，遇到雨天或夜间等未见过的天气条件性能骤降，鲁棒性不足。\n\n### 使用 DAFormer 后\n- 利用 DAFormer 的 Transformer 架构，无需真实标签即可将合成数据学到的特征迁移至真实场景，mIoU 指标显著提升。\n- 通过稀有类采样策略，有效解决了公交车、卡车等不常见类别的识别难题，大幅减少漏检情况。\n- 多尺度上下文融合机制增强了模型对复杂路况的理解，即使在光照变化下也能保持稳定的分割精度。\n- 大幅降低了数据标注成本，原本需要数月的标注工作缩短为几天，加速了产品迭代上线进程。\n\nDAFormer 让低成本合成数据高效适配真实环境，显著提升了自动驾驶感知系统的落地效率与精度。","https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Flhoyer_DAFormer_f2094bae.gif","lhoyer","Lukas Hoyer","https:\u002F\u002Foss.gittoolsai.com\u002Favatars\u002Flhoyer_cadd4021.jpg","PhD Candidate at ETH Zurich",null,"Switzerland","lhoyer.github.io","https:\u002F\u002Fgithub.com\u002Flhoyer",[84,88],{"name":85,"color":86,"percentage":87},"Python","#3572A5",99.6,{"name":89,"color":90,"percentage":91},"Shell","#89e051",0.4,564,96,"2026-04-02T03:16:44","NOASSERTION","Linux, macOS","需要 NVIDIA GPU (实验环境：RTX 2080 Ti)","未说明",{"notes":100,"python":101,"dependencies":102},"需手动下载 MiT ImageNet 权重和 DAFormer 检查点；需下载并解压 Cityscapes\u002FGTA\u002FSynthia 等数据集；需按特定目录结构存放数据；ACDC 数据需使用 rsync 命令重组目录；需运行脚本预处理标签 ID 和生成类索引。","3.8+",[103,104],"mmcv-full==1.3.7","torch",[14,26],[107,108,109,110],"semantic-segmentation","unsupervised-domain-adaptation","transformer","cvpr2022","2026-03-27T02:49:30.150509","2026-04-06T05:37:18.612462",[114,119,124,128],{"id":115,"question_zh":116,"answer_zh":117,"source_url":118},3214,"为什么复现的 mIoU (66.21%) 低于论文结果 (68.3%)？","这是因为代码经过重构，旧配置文件中的某些标志（flags）不再兼容。请使用仓库中提供的特定配置文件：configs\u002Fdaformer\u002Fgta2cs_uda_warm_fdthings_rcs_croppl_a999_daformer_mitb5_s0.py。该配置具有与原始配置相同的功能，且默认禁用了 DropPath（在 dacs.py 中），使用此配置可复现论文结果。","https:\u002F\u002Fgithub.com\u002Flhoyer\u002FDAFormer\u002Fissues\u002F23",{"id":120,"question_zh":121,"answer_zh":122,"source_url":123},3215,"为什么基于 CNN 的结果比基于 Transformer 的结果更好？","这可能是由于预训练权重文件的问题导致的。建议检查是否使用了正确的预训练权重。例如，对于 MiT 模型，应使用 SegFormer 官方提供的在 ImageNet-1k 数据集上训练的 MiT 预训练权重文件（可通过 SegFormer 官方资源获取）。","https:\u002F\u002Fgithub.com\u002Flhoyer\u002FDAFormer\u002Fissues\u002F77",{"id":125,"question_zh":126,"answer_zh":127,"source_url":123},3216,"Potsdam IR-R-G 到 Vaihingen 任务 mIoU 只有 30% 左右如何解决？","请检查实验参数设置是否正确。可以尝试参考相关代码生成所需的 json 文件来配置实验参数，具体操作可参考 Issue 
评论中提供的代码截图和链接。",{"id":129,"question_zh":130,"answer_zh":131,"source_url":132},3217,"训练过程中出现 KeyError: 'data_time' 错误如何解决？","这是一个已知的 Bug。可以参考 MMDetection 的相关修复 PR (PR #5882) 进行代码修正，或者遵循作者建议更新相关代码以解决此问题。","https:\u002F\u002Fgithub.com\u002Flhoyer\u002FDAFormer\u002Fissues\u002F7",[]]