[{"data":1,"prerenderedAt":-1},["ShallowReactive",2],{"similar-JingyunLiang--SwinIR":3,"tool-JingyunLiang--SwinIR":61},[4,18,26,36,44,52],{"id":5,"name":6,"github_repo":7,"description_zh":8,"stars":9,"difficulty_score":10,"last_commit_at":11,"category_tags":12,"status":17},4358,"openclaw","openclaw\u002Fopenclaw","OpenClaw 是一款专为个人打造的本地化 AI 助手，旨在让你在自己的设备上拥有完全可控的智能伙伴。它打破了传统 AI 助手局限于特定网页或应用的束缚，能够直接接入你日常使用的各类通讯渠道，包括微信、WhatsApp、Telegram、Discord、iMessage 等数十种平台。无论你在哪个聊天软件中发送消息，OpenClaw 都能即时响应，甚至支持在 macOS、iOS 和 Android 设备上进行语音交互，并提供实时的画布渲染功能供你操控。\n\n这款工具主要解决了用户对数据隐私、响应速度以及“始终在线”体验的需求。通过将 AI 部署在本地，用户无需依赖云端服务即可享受快速、私密的智能辅助，真正实现了“你的数据，你做主”。其独特的技术亮点在于强大的网关架构，将控制平面与核心助手分离，确保跨平台通信的流畅性与扩展性。\n\nOpenClaw 非常适合希望构建个性化工作流的技术爱好者、开发者，以及注重隐私保护且不愿被单一生态绑定的普通用户。只要具备基础的终端操作能力（支持 macOS、Linux 及 Windows WSL2），即可通过简单的命令行引导完成部署。如果你渴望拥有一个懂你",349277,3,"2026-04-06T06:32:30",[13,14,15,16],"Agent","开发框架","图像","数据工具","ready",{"id":19,"name":20,"github_repo":21,"description_zh":22,"stars":23,"difficulty_score":10,"last_commit_at":24,"category_tags":25,"status":17},3808,"stable-diffusion-webui","AUTOMATIC1111\u002Fstable-diffusion-webui","stable-diffusion-webui 是一个基于 Gradio 构建的网页版操作界面，旨在让用户能够轻松地在本地运行和使用强大的 Stable Diffusion 图像生成模型。它解决了原始模型依赖命令行、操作门槛高且功能分散的痛点，将复杂的 AI 绘图流程整合进一个直观易用的图形化平台。\n\n无论是希望快速上手的普通创作者、需要精细控制画面细节的设计师，还是想要深入探索模型潜力的开发者与研究人员，都能从中获益。其核心亮点在于极高的功能丰富度：不仅支持文生图、图生图、局部重绘（Inpainting）和外绘（Outpainting）等基础模式，还独创了注意力机制调整、提示词矩阵、负向提示词以及“高清修复”等高级功能。此外，它内置了 GFPGAN 和 CodeFormer 等人脸修复工具，支持多种神经网络放大算法，并允许用户通过插件系统无限扩展能力。即使是显存有限的设备，stable-diffusion-webui 也提供了相应的优化选项，让高质量的 AI 艺术创作变得触手可及。",162132,"2026-04-05T11:01:52",[14,15,13],{"id":27,"name":28,"github_repo":29,"description_zh":30,"stars":31,"difficulty_score":32,"last_commit_at":33,"category_tags":34,"status":17},1381,"everything-claude-code","affaan-m\u002Feverything-claude-code","everything-claude-code 是一套专为 AI 编程助手（如 Claude Code、Codex、Cursor 等）打造的高性能优化系统。它不仅仅是一组配置文件，而是一个经过长期实战打磨的完整框架，旨在解决 AI 代理在实际开发中面临的效率低下、记忆丢失、安全隐患及缺乏持续学习能力等核心痛点。\n\n通过引入技能模块化、直觉增强、记忆持久化机制以及内置的安全扫描功能，everything-claude-code 能显著提升 AI 在复杂任务中的表现，帮助开发者构建更稳定、更智能的生产级 AI 代理。其独特的“研究优先”开发理念和针对 Token 消耗的优化策略，使得模型响应更快、成本更低，同时有效防御潜在的攻击向量。\n\n这套工具特别适合软件开发者、AI 研究人员以及希望深度定制 AI 工作流的技术团队使用。无论您是在构建大型代码库，还是需要 AI 协助进行安全审计与自动化测试，everything-claude-code 都能提供强大的底层支持。作为一个曾荣获 Anthropic 黑客大奖的开源项目，它融合了多语言支持与丰富的实战钩子（hooks），让 AI 真正成长为懂上",141543,2,"2026-04-06T11:32:54",[14,13,35],"语言模型",{"id":37,"name":38,"github_repo":39,"description_zh":40,"stars":41,"difficulty_score":32,"last_commit_at":42,"category_tags":43,"status":17},2271,"ComfyUI","Comfy-Org\u002FComfyUI","ComfyUI 是一款功能强大且高度模块化的视觉 AI 引擎，专为设计和执行复杂的 Stable Diffusion 图像生成流程而打造。它摒弃了传统的代码编写模式，采用直观的节点式流程图界面，让用户通过连接不同的功能模块即可构建个性化的生成管线。\n\n这一设计巧妙解决了高级 AI 绘图工作流配置复杂、灵活性不足的痛点。用户无需具备编程背景，也能自由组合模型、调整参数并实时预览效果，轻松实现从基础文生图到多步骤高清修复等各类复杂任务。ComfyUI 拥有极佳的兼容性，不仅支持 Windows、macOS 和 Linux 全平台，还广泛适配 NVIDIA、AMD、Intel 及苹果 Silicon 等多种硬件架构，并率先支持 SDXL、Flux、SD3 等前沿模型。\n\n无论是希望深入探索算法潜力的研究人员和开发者，还是追求极致创作自由度的设计师与资深 AI 绘画爱好者，ComfyUI 都能提供强大的支持。其独特的模块化架构允许社区不断扩展新功能，使其成为当前最灵活、生态最丰富的开源扩散模型工具之一，帮助用户将创意高效转化为现实。",107888,"2026-04-06T11:32:50",[14,15,13],{"id":45,"name":46,"github_repo":47,"description_zh":48,"stars":49,"difficulty_score":10,"last_commit_at":50,"category_tags":51,"status":17},4487,"LLMs-from-scratch","rasbt\u002FLLMs-from-scratch","LLMs-from-scratch 是一个基于 PyTorch 的开源教育项目，旨在引导用户从零开始一步步构建一个类似 ChatGPT 的大型语言模型（LLM）。它不仅是同名技术著作的官方代码库，更提供了一套完整的实践方案，涵盖模型开发、预训练及微调的全过程。\n\n该项目主要解决了大模型领域“黑盒化”的学习痛点。许多开发者虽能调用现成模型，却难以深入理解其内部架构与训练机制。通过亲手编写每一行核心代码，用户能够透彻掌握 Transformer 
架构、注意力机制等关键原理，从而真正理解大模型是如何“思考”的。此外，项目还包含了加载大型预训练权重进行微调的代码，帮助用户将理论知识延伸至实际应用。\n\nLLMs-from-scratch 特别适合希望深入底层原理的 AI 开发者、研究人员以及计算机专业的学生。对于不满足于仅使用 API，而是渴望探究模型构建细节的技术人员而言，这是极佳的学习资源。其独特的技术亮点在于“循序渐进”的教学设计：将复杂的系统工程拆解为清晰的步骤，配合详细的图表与示例，让构建一个虽小但功能完备的大模型变得触手可及。无论你是想夯实理论基础，还是为未来研发更大规模的模型做准备",90106,"2026-04-06T11:19:32",[35,15,13,14],{"id":53,"name":54,"github_repo":55,"description_zh":56,"stars":57,"difficulty_score":10,"last_commit_at":58,"category_tags":59,"status":17},4292,"Deep-Live-Cam","hacksider\u002FDeep-Live-Cam","Deep-Live-Cam 是一款专注于实时换脸与视频生成的开源工具，用户仅需一张静态照片，即可通过“一键操作”实现摄像头画面的即时变脸或制作深度伪造视频。它有效解决了传统换脸技术流程繁琐、对硬件配置要求极高以及难以实时预览的痛点，让高质量的数字内容创作变得触手可及。\n\n这款工具不仅适合开发者和技术研究人员探索算法边界，更因其极简的操作逻辑（仅需三步：选脸、选摄像头、启动），广泛适用于普通用户、内容创作者、设计师及直播主播。无论是为了动画角色定制、服装展示模特替换，还是制作趣味短视频和直播互动，Deep-Live-Cam 都能提供流畅的支持。\n\n其核心技术亮点在于强大的实时处理能力，支持口型遮罩（Mouth Mask）以保留使用者原始的嘴部动作，确保表情自然精准；同时具备“人脸映射”功能，可同时对画面中的多个主体应用不同面孔。此外，项目内置了严格的内容安全过滤机制，自动拦截涉及裸露、暴力等不当素材，并倡导用户在获得授权及明确标注的前提下合规使用，体现了技术发展与伦理责任的平衡。",88924,"2026-04-06T03:28:53",[14,15,13,60],"视频",{"id":62,"github_repo":63,"name":64,"description_en":65,"description_zh":66,"ai_summary_zh":67,"readme_en":68,"readme_zh":69,"quickstart_zh":70,"use_case_zh":71,"hero_image_url":72,"owner_login":73,"owner_name":74,"owner_avatar_url":75,"owner_bio":76,"owner_company":77,"owner_location":78,"owner_email":79,"owner_twitter":79,"owner_website":80,"owner_url":81,"languages":82,"stars":91,"forks":92,"last_commit_at":93,"license":94,"difficulty_score":10,"env_os":95,"env_gpu":96,"env_ram":95,"env_deps":97,"category_tags":106,"github_topics":107,"view_count":32,"oss_zip_url":79,"oss_zip_packed_at":79,"status":17,"created_at":124,"updated_at":125,"faqs":126,"releases":161},4601,"JingyunLiang\u002FSwinIR","SwinIR","SwinIR: Image Restoration Using Swin Transformer (official repository)","SwinIR 是一款基于 Swin Transformer 架构的先进图像修复开源项目，由苏黎世联邦理工学院（ETH Zurich）团队研发。它致力于解决图像质量退化问题，能够智能地将模糊、充满噪点或带有压缩瑕疵的图片恢复为清晰高质量的原貌。\n\n具体而言，SwinIR 在多个关键领域达到了业界领先的性能水平：无论是将低分辨率图片进行超分辨率放大（包括轻量级和真实场景下的放大），还是去除黑白及彩色图像中的噪点，亦或是消除 JPEG 格式带来的压缩伪影，它都能提供卓越的修复效果。\n\n这款工具非常适合计算机视觉研究人员、AI 开发者以及需要高质量图像处理方案的设计师使用。对于普通用户，通过其提供的在线演示平台，也能轻松体验强大的图像增强功能而无需编写代码。\n\nSwinIR 的核心技术亮点在于创新性地引入了“移位窗口”（Shifted Window）机制。这一设计不仅保留了 Transformer 模型捕捉长距离依赖关系的强大能力，还显著降低了计算复杂度，使得模型在处理高分辨率图像时更加高效。作为官方提供的 PyTorch 实现，SwinIR 开放了预训练模型和完整代码，并支持在移动端部署，是","SwinIR 是一款基于 Swin Transformer 架构的先进图像修复开源项目，由苏黎世联邦理工学院（ETH Zurich）团队研发。它致力于解决图像质量退化问题，能够智能地将模糊、充满噪点或带有压缩瑕疵的图片恢复为清晰高质量的原貌。\n\n具体而言，SwinIR 在多个关键领域达到了业界领先的性能水平：无论是将低分辨率图片进行超分辨率放大（包括轻量级和真实场景下的放大），还是去除黑白及彩色图像中的噪点，亦或是消除 JPEG 格式带来的压缩伪影，它都能提供卓越的修复效果。\n\n这款工具非常适合计算机视觉研究人员、AI 开发者以及需要高质量图像处理方案的设计师使用。对于普通用户，通过其提供的在线演示平台，也能轻松体验强大的图像增强功能而无需编写代码。\n\nSwinIR 的核心技术亮点在于创新性地引入了“移位窗口”（Shifted Window）机制。这一设计不仅保留了 Transformer 模型捕捉长距离依赖关系的强大能力，还显著降低了计算复杂度，使得模型在处理高分辨率图像时更加高效。作为官方提供的 PyTorch 实现，SwinIR 开放了预训练模型和完整代码，并支持在移动端部署，是当前图像复原领域极具参考价值和实用性的基准工具。","# SwinIR: Image Restoration Using Swin Transformer\n[Jingyun Liang](https:\u002F\u002Fjingyunliang.github.io), [Jiezhang Cao](https:\u002F\u002Fwww.jiezhangcao.com\u002F), [Guolei Sun](https:\u002F\u002Fvision.ee.ethz.ch\u002Fpeople-details.MjYzMjMw.TGlzdC8zMjg5LC0xOTcxNDY1MTc4.html), [Kai Zhang](https:\u002F\u002Fcszn.github.io\u002F), [Luc Van Gool](https:\u002F\u002Fscholar.google.com\u002Fcitations?user=TwMib_QAAAAJ&hl=en), [Radu Timofte](http:\u002F\u002Fpeople.ee.ethz.ch\u002F~timofter\u002F)\n\nComputer Vision Lab, ETH Zurich\n\n---\n\n[![arXiv](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FarXiv-Paper-\u003CCOLOR>.svg)](https:\u002F\u002Farxiv.org\u002Fabs\u002F2108.10257)\n[![GitHub 
Stars](https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Fstars\u002FJingyunLiang\u002FSwinIR?style=social)](https:\u002F\u002Fgithub.com\u002FJingyunLiang\u002FSwinIR)\n[![download](https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Fdownloads\u002FJingyunLiang\u002FSwinIR\u002Ftotal.svg)](https:\u002F\u002Fgithub.com\u002FJingyunLiang\u002FSwinIR\u002Freleases)\n![visitors](https:\u002F\u002Fvisitor-badge.glitch.me\u002Fbadge?page_id=jingyunliang\u002FSwinIR)\n[ \u003Ca href=\"https:\u002F\u002Fcolab.research.google.com\u002Fgist\u002FJingyunLiang\u002Fa5e3e54bc9ef8d7bf594f6fee8208533\u002Fswinir-demo-on-real-world-image-sr.ipynb\">\u003Cimg src=\"https:\u002F\u002Fcolab.research.google.com\u002Fassets\u002Fcolab-badge.svg\" alt=\"google colab logo\">\u003C\u002Fa>](https:\u002F\u002Fcolab.research.google.com\u002Fgist\u002FJingyunLiang\u002Fa5e3e54bc9ef8d7bf594f6fee8208533\u002Fswinir-demo-on-real-world-image-sr.ipynb)\n\u003Ca href=\"https:\u002F\u002Freplicate.ai\u002Fjingyunliang\u002Fswinir\">\u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fstatic\u002Fv1?label=Replicate&message=Demo and Docker Image&color=blue\">\u003C\u002Fa>\n[![PlayTorch Demo](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fplaytorch\u002Fblob\u002Fmain\u002Fwebsite\u002Fstatic\u002Fassets\u002Fplaytorch_badge.svg)](https:\u002F\u002Fplaytorch.dev\u002Fsnack\u002F@playtorch\u002Fswinir\u002F)\n[Gradio Web Demo](https:\u002F\u002Fhuggingface.co\u002Fspaces\u002Fakhaliq\u002FSwinIR)\n\nThis repository is the official PyTorch implementation of SwinIR: Image Restoration Using Shifted Window Transformer\n([arxiv](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2108.10257.pdf), [supp](https:\u002F\u002Fgithub.com\u002FJingyunLiang\u002FSwinIR\u002Freleases), [pretrained models](https:\u002F\u002Fgithub.com\u002FJingyunLiang\u002FSwinIR\u002Freleases), [visual results](https:\u002F\u002Fgithub.com\u002FJingyunLiang\u002FSwinIR\u002Freleases)). SwinIR achieves **state-of-the-art performance** in\n- bicubic\u002Flighweight\u002Freal-world image SR\n- grayscale\u002Fcolor image denoising\n- grayscale\u002Fcolor JPEG compression artifact reduction\n\n\u003C\u002Fbr>\n\n:rocket:  :rocket:  :rocket: **News**:\n- **Aug. 16, 2022**: Add PlayTorch Demo on running the real-world image SR model on mobile devices [![PlayTorch Demo](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fplaytorch\u002Fblob\u002Fmain\u002Fwebsite\u002Fstatic\u002Fassets\u002Fplaytorch_badge.svg)](https:\u002F\u002Fplaytorch.dev\u002Fsnack\u002F@playtorch\u002Fswinir\u002F).\n- **Aug. 01, 2022**: Add pretrained models and results on JPEG compression artifact reduction for color images. \n- **Jun. 
10, 2022**: See our work on video restoration :fire::fire::fire: [VRT: A Video Restoration Transformer](https:\u002F\u002Fgithub.com\u002FJingyunLiang\u002FVRT) \n[![GitHub Stars](https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Fstars\u002FJingyunLiang\u002FVRT?style=social)](https:\u002F\u002Fgithub.com\u002FJingyunLiang\u002FVRT)\n[![download](https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Fdownloads\u002FJingyunLiang\u002FVRT\u002Ftotal.svg)](https:\u002F\u002Fgithub.com\u002FJingyunLiang\u002FVRT\u002Freleases)\nand [RVRT: Recurrent Video Restoration Transformer](https:\u002F\u002Fgithub.com\u002FJingyunLiang\u002FRVRT) \n[![GitHub Stars](https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Fstars\u002FJingyunLiang\u002FRVRT?style=social)](https:\u002F\u002Fgithub.com\u002FJingyunLiang\u002FRVRT)\n[![download](https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Fdownloads\u002FJingyunLiang\u002FRVRT\u002Ftotal.svg)](https:\u002F\u002Fgithub.com\u002FJingyunLiang\u002FRVRT\u002Freleases)\nfor video SR, video deblurring, video denoising, video frame interpolation and space-time video SR.\n- **Sep. 07, 2021**: We provide an interactive online Colab demo for real-world image SR \u003Ca href=\"https:\u002F\u002Fcolab.research.google.com\u002Fgist\u002FJingyunLiang\u002Fa5e3e54bc9ef8d7bf594f6fee8208533\u002Fswinir-demo-on-real-world-image-sr.ipynb\">\u003Cimg src=\"https:\u002F\u002Fcolab.research.google.com\u002Fassets\u002Fcolab-badge.svg\" alt=\"google colab logo\">\u003C\u002Fa>:fire: for comparison with [the first practical degradation model BSRGAN (ICCV2021) ![GitHub Stars](https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Fstars\u002Fcszn\u002FBSRGAN?style=social)](https:\u002F\u002Fgithub.com\u002Fcszn\u002FBSRGAN) and a recent model RealESRGAN. Try to super-resolve your own images on Colab!\n\n|Real-World Image (x4)|[BSRGAN, ICCV2021](https:\u002F\u002Fgithub.com\u002Fcszn\u002FBSRGAN)|[Real-ESRGAN](https:\u002F\u002Fgithub.com\u002Fxinntao\u002FReal-ESRGAN)|SwinIR (ours)|SwinIR-Large (ours)|\n|       :---       |     :---:        |        :-----:         |        :-----:         |        :-----:         | \n| \u003Cimg width=\"200\" src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FJingyunLiang_SwinIR_readme_380151c15a0e.png\">|\u003Cimg width=\"200\" src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FJingyunLiang_SwinIR_readme_9b5c98b91974.png\">|\u003Cimg width=\"200\" src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FJingyunLiang_SwinIR_readme_b81eadf6a17f.jpg\">|\u003Cimg width=\"200\" src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FJingyunLiang_SwinIR_readme_ae1abf4ee967.png\">|\u003Cimg width=\"200\" src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FJingyunLiang_SwinIR_readme_a6773f15cd3a.png\">\n|\u003Cimg width=\"200\" src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FJingyunLiang_SwinIR_readme_20bb89d1f3fa.png\">|\u003Cimg width=\"200\" src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FJingyunLiang_SwinIR_readme_e7fde6cd735a.png\">|\u003Cimg width=\"200\" src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FJingyunLiang_SwinIR_readme_4236f61a51e8.png\">|\u003Cimg width=\"200\" src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FJingyunLiang_SwinIR_readme_eafac483f87f.png\">|\u003Cimg width=\"200\" src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FJingyunLiang_SwinIR_readme_431e6ded3599.png\">|\n  \n - ***Aug. 
26, 2021**: See our recent work on [real-world image SR: a pratical degrdation model BSRGAN, ICCV2021](https:\u002F\u002Fgithub.com\u002Fcszn\u002FBSRGAN)\n[![GitHub Stars](https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Fstars\u002Fcszn\u002FBSRGAN?style=social)](https:\u002F\u002Fgithub.com\u002Fcszn\u002FBSRGAN)*\n - ***Aug. 26, 2021**: See our recent work on [generative modelling of image SR and image rescaling: normalizing-flow-based HCFlow, ICCV2021](https:\u002F\u002Fgithub.com\u002FJingyunLiang\u002FHCFlow)\n[![GitHub Stars](https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Fstars\u002FJingyunLiang\u002FHCFlow?style=social)](https:\u002F\u002Fgithub.com\u002FJingyunLiang\u002FHCFlow)[ \u003Ca href=\"https:\u002F\u002Fcolab.research.google.com\u002Fgist\u002FJingyunLiang\u002Fcdb3fef89ebd174eaa43794accb6f59d\u002Fhcflow-demo-on-x8-face-image-sr.ipynb\">\u003Cimg src=\"https:\u002F\u002Fcolab.research.google.com\u002Fassets\u002Fcolab-badge.svg\" alt=\"google colab logo\">\u003C\u002Fa>](https:\u002F\u002Fcolab.research.google.com\u002Fgist\u002FJingyunLiang\u002Fcdb3fef89ebd174eaa43794accb6f59d\u002Fhcflow-demo-on-x8-face-image-sr.ipynb)*\n - ***Aug. 26, 2021**: See our recent work on [blind SR: spatially variant kernel estimation (MANet, ICCV2021)](https:\u002F\u002Fgithub.com\u002FJingyunLiang\u002FMANet) [![GitHub Stars](https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Fstars\u002FJingyunLiang\u002FMANet?style=social)](https:\u002F\u002Fgithub.com\u002FJingyunLiang\u002FMANet)\n[ \u003Ca href=\"https:\u002F\u002Fcolab.research.google.com\u002Fgist\u002FJingyunLiang\u002F4ed2524d6e08343710ee408a4d997e1c\u002Fmanet-demo-on-spatially-variant-kernel-estimation.ipynb\">\u003Cimg src=\"https:\u002F\u002Fcolab.research.google.com\u002Fassets\u002Fcolab-badge.svg\" alt=\"google colab logo\">\u003C\u002Fa>](https:\u002F\u002Fcolab.research.google.com\u002Fgist\u002FJingyunLiang\u002F4ed2524d6e08343710ee408a4d997e1c\u002Fmanet-demo-on-spatially-variant-kernel-estimation.ipynb) and [unsupervised kernel estimation (FKP, CVPR2021)](https:\u002F\u002Fgithub.com\u002FJingyunLiang\u002FFKP)\n[![GitHub Stars](https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Fstars\u002FJingyunLiang\u002FFKP?style=social)](https:\u002F\u002Fgithub.com\u002FJingyunLiang\u002FFKP)*\n\n---\n\n> Image restoration is a long-standing low-level vision problem that aims to restore high-quality images from low-quality images (e.g., downscaled, noisy and compressed images). While state-of-the-art image restoration methods are based on convolutional neural networks, few attempts have been made with Transformers which show impressive performance on high-level vision tasks. In this paper, we propose a strong baseline model SwinIR for image restoration based on the Swin Transformer. SwinIR consists of three parts: shallow feature extraction, deep feature extraction and high-quality image reconstruction. In particular, the deep feature extraction module is composed of several residual Swin Transformer blocks (RSTB), each of which has several Swin Transformer layers together with a residual connection. We conduct experiments on three representative tasks: image super-resolution (including classical, lightweight and real-world image super-resolution), image denoising (including grayscale and color image denoising) and JPEG compression artifact reduction. 
Experimental results demonstrate that SwinIR outperforms state-of-the-art methods on different tasks by up to 0.14~0.45dB, while the total number of parameters can be reduced by up to 67%.\n>\u003Cp align=\"center\">\n  \u003Cimg width=\"800\" src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FJingyunLiang_SwinIR_readme_ec9a1f8fd120.png\">\n\u003C\u002Fp>\n\n\n\n#### Contents\n\n1. [Training](#Training)\n1. [Testing](#Testing)\n1. [Results](#Results)\n1. [Citation](#Citation)\n1. [License and Acknowledgement](#License-and-Acknowledgement)\n\n\n### Training\n\n\nUsed training and testing sets can be downloaded as follows:\n\n| Task                                                |                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                        Training Set                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                         | Testing Set|    Visual Results |    \n|:----------------------------------------------------|:-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------:|     :---:      |   :---:      |\n| classical\u002Flightweight image SR                      |                                                                                                                                                                                                                                                                                                                                                                                                                                                               [DIV2K](https:\u002F\u002Fcv.snu.ac.kr\u002Fresearch\u002FEDSR\u002FDIV2K.tar) (800 training images) or DIV2K 
+[Flickr2K](https:\u002F\u002Fcv.snu.ac.kr\u002Fresearch\u002FEDSR\u002FFlickr2K.tar) (2650 images)                                                                                                                                                                                                                                                                                                                                                                                                                                                               | Set5 + Set14 + BSD100 + Urban100 + Manga109 [download all](https:\u002F\u002Fdrive.google.com\u002Fdrive\u002Ffolders\u002F1B3DJGQKB6eNdwuQIhdskA64qUuVKLZ9u) | [here](https:\u002F\u002Fgithub.com\u002FJingyunLiang\u002FSwinIR\u002Freleases) |\n| real-world image SR                                 | SwinIR-M (middle size): [DIV2K](https:\u002F\u002Fcv.snu.ac.kr\u002Fresearch\u002FEDSR\u002FDIV2K.tar) (800 training images) +[Flickr2K](https:\u002F\u002Fcv.snu.ac.kr\u002Fresearch\u002FEDSR\u002FFlickr2K.tar) (2650 images) + [OST](https:\u002F\u002Fopenmmlab.oss-cn-hangzhou.aliyuncs.com\u002Fdatasets\u002FOST_dataset.zip) ([alternative link](https:\u002F\u002Fdrive.google.com\u002Fdrive\u002Ffolders\u002F1iZfzAxAwOpeutz27HC56_y5RNqnsPPKr), 10324 images for sky,water,grass,mountain,building,plant,animal) \u003Cbr \u002F> SwinIR-L (large size): DIV2K + Flickr2K + OST + [WED](http:\u002F\u002Fivc.uwaterloo.ca\u002Fdatabase\u002FWaterlooExploration\u002Fexploration_database_and_code.rar)(4744 images) + [FFHQ](https:\u002F\u002Fdrive.google.com\u002Fdrive\u002Ffolders\u002F1tZUcXDBeOibC6jcMCtgRRz67pzrAHeHL) (first 2000 images, face) + Manga109 (manga) + [SCUT-CTW1500](https:\u002F\u002Funiversityofadelaide.box.com\u002Fshared\u002Fstatic\u002Fpy5uwlfyyytbb2pxzq9czvu6fuqbjdh8.zip) (first 100 training images, texts) \u003Cbr \u002F>\u003Cbr \u002F>  ***We use the pionnerring practical degradation model from [BSRGAN, ICCV2021  ![GitHub Stars](https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Fstars\u002Fcszn\u002FBSRGAN?style=social)](https:\u002F\u002Fgithub.com\u002Fcszn\u002FBSRGAN)** | [RealSRSet+5images](https:\u002F\u002Fgithub.com\u002FJingyunLiang\u002FSwinIR\u002Freleases\u002Fdownload\u002Fv0.0\u002FRealSRSet+5images.zip) |  [here](https:\u002F\u002Fgithub.com\u002FJingyunLiang\u002FSwinIR\u002Freleases) |\n| color\u002Fgrayscale image denoising                     |                                                                                                                                                                                                                                                                                                             [DIV2K](https:\u002F\u002Fcv.snu.ac.kr\u002Fresearch\u002FEDSR\u002FDIV2K.tar) (800 training images) + [Flickr2K](https:\u002F\u002Fcv.snu.ac.kr\u002Fresearch\u002FEDSR\u002FFlickr2K.tar) (2650 images) + [BSD500](http:\u002F\u002Fwww.eecs.berkeley.edu\u002FResearch\u002FProjects\u002FCS\u002Fvision\u002Fgrouping\u002FBSR\u002FBSR_bsds500.tgz) (400 training&testing images) + [WED](http:\u002F\u002Fivc.uwaterloo.ca\u002Fdatabase\u002FWaterlooExploration\u002Fexploration_database_and_code.rar)(4744 images)  \u003Cbr \u002F>\u003Cbr \u002F>  *BSD68\u002FBSD100 images are not used in training.                                                                                                                                                                                                                       
                                                                                       |  grayscale: Set12 + BSD68 + Urban100 \u003Cbr \u002F>  color: CBSD68 + Kodak24 + McMaster + Urban100 [download all](https:\u002F\u002Fgithub.com\u002Fcszn\u002FFFDNet\u002Ftree\u002Fmaster\u002Ftestsets) |  [here](https:\u002F\u002Fgithub.com\u002FJingyunLiang\u002FSwinIR\u002Freleases) |\n| grayscale\u002Fcolor JPEG compression artifact reduction |                                                                                                                                                                                                                                                                                                                                            [DIV2K](https:\u002F\u002Fcv.snu.ac.kr\u002Fresearch\u002FEDSR\u002FDIV2K.tar) (800 training images) + [Flickr2K](https:\u002F\u002Fcv.snu.ac.kr\u002Fresearch\u002FEDSR\u002FFlickr2K.tar) (2650 images) + [BSD500](http:\u002F\u002Fwww.eecs.berkeley.edu\u002FResearch\u002FProjects\u002FCS\u002Fvision\u002Fgrouping\u002FBSR\u002FBSR_bsds500.tgz) (400 training&testing images) + [WED](http:\u002F\u002Fivc.uwaterloo.ca\u002Fdatabase\u002FWaterlooExploration\u002Fexploration_database_and_code.rar)(4744 images)                                                                                                                                                                                                                                                                                                                                             |  grayscale: Classic5 +LIVE1 [download all](https:\u002F\u002Fgithub.com\u002Fcszn\u002FDnCNN\u002Ftree\u002Fmaster\u002Ftestsets) | [here](https:\u002F\u002Fgithub.com\u002FJingyunLiang\u002FSwinIR\u002Freleases) |\n\n\n\u003C!--\n| Task                 | Training Set | Testing Set|        Pretrained Model and Visual Results of SwinIR     | \n| :---                 | :---:        |     :---:      |:---:      |\n| image denoising (real)      | [SIDD-Medium-sRGB](https:\u002F\u002Fwww.eecs.yorku.ca\u002F~kamel\u002Fsidd\u002Fdataset.php) (320 images, [preprocess]()) + [RENOIR](http:\u002F\u002Fani.stat.fsu.edu\u002F~abarbu\u002FRenoir.html) (221 images, [preprocess](https:\u002F\u002Fgithub.com\u002FzsyOAOA\u002FDANet\u002Fblob\u002Fmaster\u002Fdatasets\u002Fpreparedata\u002FRenoir_big2small_all.py)) + [Poly](https:\u002F\u002Fgithub.com\u002Fcsjunxu\u002FPolyU-Real-World-Noisy-Images-Dataset) (40 images in .\u002FOriginalImages) |    [SIDD validation set](https:\u002F\u002Fdrive.google.com\u002Fdrive\u002Ffolders\u002F1S44fHXaVxAYW3KLNxK41NYCnyX9S79su) (1280 patches, identical to official [.mat](https:\u002F\u002Fwww.eecs.yorku.ca\u002F~kamel\u002Fsidd\u002Fbenchmark.php) version) +  [DND](https:\u002F\u002Fnoise.visinf.tu-darmstadt.de\u002Fdownloads\u002F) (pre-defined 100 patches of 50 images, [online eval](https:\u002F\u002Fnoise.visinf.tu-darmstadt.de\u002Fsubmit\u002F)) + [Nam](https:\u002F\u002Fwww.dropbox.com\u002Fs\u002F24kds7c436i5i11\u002Freal_image_noise_dataset.zip?dl=0) (random 100 patches of 17 images, [preprocess](https:\u002F\u002Fgithub.com\u002FzsyOAOA\u002FDANet\u002Fblob\u002Fmaster\u002Fdatasets\u002Fpreparedata\u002FNam_patch_prepare.py))|[download model]() [download results]() |\n| image deblurring (synthetic)   | [GoPro](https:\u002F\u002Fdrive.google.com\u002Fdrive\u002Ffolders\u002F1AsgIP9_X0bg0olu2-1N6karm2x15cJWE) (2103 training images)  |  
[GoPro](https:\u002F\u002Fdrive.google.com\u002Fdrive\u002Ffolders\u002F1a2qKfXWpNuTGOm2-Jex8kfNSzYJLbqkf) (1111 images) + [HIDE](https:\u002F\u002Fdrive.google.com\u002Fdrive\u002Ffolders\u002F1nRsTXj4iTUkTvBhTcGg8cySK8nd3vlhK) (2050 images) + [RealBlur_J](https:\u002F\u002Fdrive.google.com\u002Fdrive\u002Ffolders\u002F1KYtzeKCiDRX9DSvC-upHrCqvC4sPAiJ1) (real blur, 980 images) + [RealBlur_R](https:\u002F\u002Fdrive.google.com\u002Fdrive\u002Ffolders\u002F1EwDoajf5nStPIAcU4s9rdc8SPzfm3tW1) (real blur, 980 images) | [download model]() [download results]()|\n| image deraining (synthetic)  | [Multiple datasets](https:\u002F\u002Fdrive.google.com\u002Fdrive\u002Ffolders\u002F1Hnnlc5kI0v9_BtfMytC2LR5VpLAFZtVe) (13711 training images, see Table 1 of [MPRNet](https:\u002F\u002Fgithub.com\u002Fswz30\u002FMPRNet) for details.)  |  Rain100H (100 images) + Rain100L (100 images) + Test100 (100 images) + Test2800 (2800 images) + Test1200 (1200 images), [download all](https:\u002F\u002Fdrive.google.com\u002Fdrive\u002Ffolders\u002F1PDWggNh8ylevFmrjo-JEvlmqsDlWWvZs)  | [download model]() [download results]()|\n\nNote: above datasets may come from the official release or some awesome collections ([BasicSR](https:\u002F\u002Fgithub.com\u002Fxinntao\u002FBasicSR), [MPRNet](https:\u002F\u002Fgithub.com\u002Fswz30\u002FMPRNet)).\n\n-->\n\nThe training code is at [KAIR](https:\u002F\u002Fgithub.com\u002Fcszn\u002FKAIR\u002Fblob\u002Fmaster\u002Fdocs\u002FREADME_SwinIR.md).\n\n## Testing (without preparing datasets)\nFor your convience, we provide some example datasets (~20Mb) in `\u002Ftestsets`. \nIf you just want codes, downloading `models\u002Fnetwork_swinir.py`, `utils\u002Futil_calculate_psnr_ssim.py` and `main_test_swinir.py` is enough.\nFollowing commands will download [pretrained models](https:\u002F\u002Fgithub.com\u002FJingyunLiang\u002FSwinIR\u002Freleases) **automatically** and put them in `model_zoo\u002Fswinir`. \n**[All visual results of SwinIR can be downloaded here](https:\u002F\u002Fgithub.com\u002FJingyunLiang\u002FSwinIR\u002Freleases)**. \n\nWe also provide an [online Colab demo for real-world image SR  \u003Ca href=\"https:\u002F\u002Fcolab.research.google.com\u002Fgist\u002FJingyunLiang\u002Fa5e3e54bc9ef8d7bf594f6fee8208533\u002Fswinir-demo-on-real-world-image-sr.ipynb\">\u003Cimg src=\"https:\u002F\u002Fcolab.research.google.com\u002Fassets\u002Fcolab-badge.svg\" alt=\"google colab logo\">\u003C\u002Fa>](https:\u002F\u002Fcolab.research.google.com\u002Fgist\u002FJingyunLiang\u002Fa5e3e54bc9ef8d7bf594f6fee8208533\u002Fswinir-demo-on-real-world-image-sr.ipynb) for comparison with [the first practical degradation model BSRGAN (ICCV2021)  ![GitHub Stars](https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Fstars\u002Fcszn\u002FBSRGAN?style=social)](https:\u002F\u002Fgithub.com\u002Fcszn\u002FBSRGAN) and a recent model [RealESRGAN](https:\u002F\u002Fgithub.com\u002Fxinntao\u002FReal-ESRGAN). Try to test your own images on Colab!\n\nWe provide a PlayTorch demo [![PlayTorch Demo](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fplaytorch\u002Fblob\u002Fmain\u002Fwebsite\u002Fstatic\u002Fassets\u002Fplaytorch_badge.svg)](https:\u002F\u002Fplaytorch.dev\u002Fsnack\u002F@playtorch\u002Fswinir\u002F) for real-world image SR to showcase how to run the SwinIR model in mobile application built with React Native.\n\n```bash\n# 001 Classical Image Super-Resolution (middle size)\n# Note that --training_patch_size is just used to differentiate two different settings in Table 2 of the paper. 
Images are NOT tested patch by patch.\n# (setting1: when model is trained on DIV2K and with training_patch_size=48)\npython main_test_swinir.py --task classical_sr --scale 2 --training_patch_size 48 --model_path model_zoo\u002Fswinir\u002F001_classicalSR_DIV2K_s48w8_SwinIR-M_x2.pth --folder_lq testsets\u002FSet5\u002FLR_bicubic\u002FX2 --folder_gt testsets\u002FSet5\u002FHR\npython main_test_swinir.py --task classical_sr --scale 3 --training_patch_size 48 --model_path model_zoo\u002Fswinir\u002F001_classicalSR_DIV2K_s48w8_SwinIR-M_x3.pth --folder_lq testsets\u002FSet5\u002FLR_bicubic\u002FX3 --folder_gt testsets\u002FSet5\u002FHR\npython main_test_swinir.py --task classical_sr --scale 4 --training_patch_size 48 --model_path model_zoo\u002Fswinir\u002F001_classicalSR_DIV2K_s48w8_SwinIR-M_x4.pth --folder_lq testsets\u002FSet5\u002FLR_bicubic\u002FX4 --folder_gt testsets\u002FSet5\u002FHR\npython main_test_swinir.py --task classical_sr --scale 8 --training_patch_size 48 --model_path model_zoo\u002Fswinir\u002F001_classicalSR_DIV2K_s48w8_SwinIR-M_x8.pth --folder_lq testsets\u002FSet5\u002FLR_bicubic\u002FX8 --folder_gt testsets\u002FSet5\u002FHR\n\n# (setting2: when model is trained on DIV2K+Flickr2K and with training_patch_size=64)\npython main_test_swinir.py --task classical_sr --scale 2 --training_patch_size 64 --model_path model_zoo\u002Fswinir\u002F001_classicalSR_DF2K_s64w8_SwinIR-M_x2.pth --folder_lq testsets\u002FSet5\u002FLR_bicubic\u002FX2 --folder_gt testsets\u002FSet5\u002FHR\npython main_test_swinir.py --task classical_sr --scale 3 --training_patch_size 64 --model_path model_zoo\u002Fswinir\u002F001_classicalSR_DF2K_s64w8_SwinIR-M_x3.pth --folder_lq testsets\u002FSet5\u002FLR_bicubic\u002FX3 --folder_gt testsets\u002FSet5\u002FHR\npython main_test_swinir.py --task classical_sr --scale 4 --training_patch_size 64 --model_path model_zoo\u002Fswinir\u002F001_classicalSR_DF2K_s64w8_SwinIR-M_x4.pth --folder_lq testsets\u002FSet5\u002FLR_bicubic\u002FX4 --folder_gt testsets\u002FSet5\u002FHR\npython main_test_swinir.py --task classical_sr --scale 8 --training_patch_size 64 --model_path model_zoo\u002Fswinir\u002F001_classicalSR_DF2K_s64w8_SwinIR-M_x8.pth --folder_lq testsets\u002FSet5\u002FLR_bicubic\u002FX8 --folder_gt testsets\u002FSet5\u002FHR\n\n\n# 002 Lightweight Image Super-Resolution (small size)\npython main_test_swinir.py --task lightweight_sr --scale 2 --model_path model_zoo\u002Fswinir\u002F002_lightweightSR_DIV2K_s64w8_SwinIR-S_x2.pth --folder_lq testsets\u002FSet5\u002FLR_bicubic\u002FX2 --folder_gt testsets\u002FSet5\u002FHR\npython main_test_swinir.py --task lightweight_sr --scale 3 --model_path model_zoo\u002Fswinir\u002F002_lightweightSR_DIV2K_s64w8_SwinIR-S_x3.pth --folder_lq testsets\u002FSet5\u002FLR_bicubic\u002FX3 --folder_gt testsets\u002FSet5\u002FHR\npython main_test_swinir.py --task lightweight_sr --scale 4 --model_path model_zoo\u002Fswinir\u002F002_lightweightSR_DIV2K_s64w8_SwinIR-S_x4.pth --folder_lq testsets\u002FSet5\u002FLR_bicubic\u002FX4 --folder_gt testsets\u002FSet5\u002FHR\n\n\n# 003 Real-World Image Super-Resolution (use --tile 400 if you run out-of-memory)\n# (middle size)\npython main_test_swinir.py --task real_sr --scale 4 --model_path model_zoo\u002Fswinir\u002F003_realSR_BSRGAN_DFO_s64w8_SwinIR-M_x4_GAN.pth --folder_lq testsets\u002FRealSRSet+5images --tile\n\n# (larger size + trained on more datasets)\npython main_test_swinir.py --task real_sr --scale 4 --large_model --model_path 
model_zoo\u002Fswinir\u002F003_realSR_BSRGAN_DFOWMFC_s64w8_SwinIR-L_x4_GAN.pth --folder_lq testsets\u002FRealSRSet+5images\n\n\n# 004 Grayscale Image Denoising (middle size)\npython main_test_swinir.py --task gray_dn --noise 15 --model_path model_zoo\u002Fswinir\u002F004_grayDN_DFWB_s128w8_SwinIR-M_noise15.pth --folder_gt testsets\u002FSet12\npython main_test_swinir.py --task gray_dn --noise 25 --model_path model_zoo\u002Fswinir\u002F004_grayDN_DFWB_s128w8_SwinIR-M_noise25.pth --folder_gt testsets\u002FSet12\npython main_test_swinir.py --task gray_dn --noise 50 --model_path model_zoo\u002Fswinir\u002F004_grayDN_DFWB_s128w8_SwinIR-M_noise50.pth --folder_gt testsets\u002FSet12\n\n\n# 005 Color Image Denoising (middle size)\npython main_test_swinir.py --task color_dn --noise 15 --model_path model_zoo\u002Fswinir\u002F005_colorDN_DFWB_s128w8_SwinIR-M_noise15.pth --folder_gt testsets\u002FMcMaster\npython main_test_swinir.py --task color_dn --noise 25 --model_path model_zoo\u002Fswinir\u002F005_colorDN_DFWB_s128w8_SwinIR-M_noise25.pth --folder_gt testsets\u002FMcMaster\npython main_test_swinir.py --task color_dn --noise 50 --model_path model_zoo\u002Fswinir\u002F005_colorDN_DFWB_s128w8_SwinIR-M_noise50.pth --folder_gt testsets\u002FMcMaster\n\n\n# 006 JPEG Compression Artifact Reduction (middle size, using window_size=7 because JPEG encoding uses 8x8 blocks)\n# grayscale\npython main_test_swinir.py --task jpeg_car --jpeg 10 --model_path model_zoo\u002Fswinir\u002F006_CAR_DFWB_s126w7_SwinIR-M_jpeg10.pth --folder_gt testsets\u002Fclassic5\npython main_test_swinir.py --task jpeg_car --jpeg 20 --model_path model_zoo\u002Fswinir\u002F006_CAR_DFWB_s126w7_SwinIR-M_jpeg20.pth --folder_gt testsets\u002Fclassic5\npython main_test_swinir.py --task jpeg_car --jpeg 30 --model_path model_zoo\u002Fswinir\u002F006_CAR_DFWB_s126w7_SwinIR-M_jpeg30.pth --folder_gt testsets\u002Fclassic5\npython main_test_swinir.py --task jpeg_car --jpeg 40 --model_path model_zoo\u002Fswinir\u002F006_CAR_DFWB_s126w7_SwinIR-M_jpeg40.pth --folder_gt testsets\u002Fclassic5\n\n# color\npython main_test_swinir.py --task color_jpeg_car --jpeg 10 --model_path model_zoo\u002Fswinir\u002F006_colorCAR_DFWB_s126w7_SwinIR-M_jpeg10.pth --folder_gt testsets\u002FLIVE1\npython main_test_swinir.py --task color_jpeg_car --jpeg 20 --model_path model_zoo\u002Fswinir\u002F006_colorCAR_DFWB_s126w7_SwinIR-M_jpeg20.pth --folder_gt testsets\u002FLIVE1\npython main_test_swinir.py --task color_jpeg_car --jpeg 30 --model_path model_zoo\u002Fswinir\u002F006_colorCAR_DFWB_s126w7_SwinIR-M_jpeg30.pth --folder_gt testsets\u002FLIVE1\npython main_test_swinir.py --task color_jpeg_car --jpeg 40 --model_path model_zoo\u002Fswinir\u002F006_colorCAR_DFWB_s126w7_SwinIR-M_jpeg40.pth --folder_gt testsets\u002FLIVE1\n\n```\n\n---\n\n## Results\nWe achieved state-of-the-art performance on classical\u002Flightweight\u002Freal-world image SR, grayscale\u002Fcolor image denoising and JPEG compression artifact reduction. Detailed results can be found in the [paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2108.10257). All visual results of SwinIR can be downloaded [here](https:\u002F\u002Fgithub.com\u002FJingyunLiang\u002FSwinIR\u002Freleases). 
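\n\nTo try the real-world SR model on your own photos, here is a minimal sketch based on the testing commands above; only the input folder name my_images is a placeholder, while the task, scale, model path and --tile flag are taken directly from the commands shown earlier:\n\n```bash\n# Minimal sketch: real-world image SR (x4, middle-size model) on a custom folder.\n# my_images is an assumed placeholder for a folder of your own low-quality inputs.\n# The pretrained weights are downloaded automatically, as noted in the Testing section.\n# --tile 400 is the out-of-memory fallback suggested in the comments above.\npython main_test_swinir.py --task real_sr --scale 4 --model_path model_zoo\u002Fswinir\u002F003_realSR_BSRGAN_DFO_s64w8_SwinIR-M_x4_GAN.pth --folder_lq my_images --tile 400\n```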
\n\n\u003Cdetails>\n\u003Csummary>Classical Image Super-Resolution (click me)\u003C\u002Fsummary>\n\u003Cp align=\"center\">\n  \u003Cimg width=\"900\" src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FJingyunLiang_SwinIR_readme_0859ab6d5e33.png\">\n  \u003Cimg width=\"900\" src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FJingyunLiang_SwinIR_readme_14afa04e2303.png\">\n\u003C\u002Fp>\n  \n- More detailed comparison between SwinIR and a representative CNN-based model RCAN (classical image SR, X4)\n\n| Method             | Training Set    |  Training time  \u003Cbr \u002F> (8GeForceRTX2080Ti \u003Cbr \u002F> batch=32, iter=500k) |Y-PSNR\u002FY-SSIM \u003Cbr \u002F> on Manga109 | Run time  \u003Cbr \u002F> (1GeForceRTX2080Ti,\u003Cbr \u002F> on 256x256 LR image)* |  #Params   | #FLOPs |  Testing memory |\n| :---      | :---:        |        :-----:         |     :---:      |     :---:      |     :---:      |   :---:      |  :---:      |\n| RCAN | DIV2K | 1.6 days | 31.22\u002F0.9173 | 0.180s | 15.6M | 850.6G | 593.1M | \n| SwinIR | DIV2K | 1.8 days |31.67\u002F0.9226 | 0.539s | 11.9M | 788.6G | 986.8M | \n\n\\* We re-test the runtime when the GPU is idle. We refer to the evaluation code [here](https:\u002F\u002Fgithub.com\u002Fcszn\u002FKAIR\u002Fblob\u002Fmaster\u002Fmain_challenge_sr.py).\n\n  \n- Results on DIV2K-validation (100 images)\n  \n|  Training Set | scale factor | PSNR (RGB) | PSNR (Y) | SSIM (RGB)  | SSIM (Y) |\n| :--- | :---: | :---:        |     :---:      | :---: | :---:        |\n|  DIV2K (800 images) | 2 | 35.25 | 36.77 | 0.9423 | 0.9500 |\n|  DIV2K+Flickr2K (2650 images) | 2 | 35.34 | 36.86 | 0.9430 |0.9507 |\n|  DIV2K (800 images) | 3 | 31.50 | 32.97 | 0.8832 |0.8965 |\n|  DIV2K+Flickr2K (2650 images) | 3 | 31.63 | 33.10 | 0.8854 |0.8985 |\n|  DIV2K (800 images) | 4 | 29.48 | 30.94 | 0.8311|0.8492 |\n|  DIV2K+Flickr2K (2650 images) | 4 | 29.63 | 31.08 | 0.8347|0.8523 |\n\n\u003C\u002Fdetails>\n\n\u003Cdetails>\n\u003Csummary>Lightweight Image Super-Resolution\u003C\u002Fsummary>\n\u003Cp align=\"center\">\n  \u003Cimg width=\"900\" src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FJingyunLiang_SwinIR_readme_d5b03d1d0a5f.png\">\n\u003C\u002Fp>\n\u003C\u002Fdetails>\n\n\u003Cdetails>\n\u003Csummary>Real-World Image Super-Resolution\u003C\u002Fsummary>\n\u003Cp align=\"center\">\n  \u003Cimg width=\"900\" src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FJingyunLiang_SwinIR_readme_f11474d9e792.png\">\n\u003C\u002Fp>\n\u003C\u002Fdetails>\n\n\u003Cdetails>\n\u003Csummary>Grayscale Image Denoising\u003C\u002Fsummary>\n\u003Cp align=\"center\">\n  \u003Cimg width=\"900\" src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FJingyunLiang_SwinIR_readme_910fc320c8d0.png\">\n\u003C\u002Fp>\n\u003C\u002Fdetails>\n\n\u003Cdetails>\n\u003Csummary>Color Image Denoising\u003C\u002Fsummary>\n\u003Cp align=\"center\">\n  \u003Cimg width=\"900\" src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FJingyunLiang_SwinIR_readme_a572638fd3df.png\">\n\u003C\u002Fp>\n\u003C\u002Fdetails>\n\n\u003Cdetails>\n\u003Csummary>JPEG Compression Artifact Reduction\u003C\u002Fsummary>\n\non grayscale images\n\u003Cp align=\"center\">\n  \u003Cimg width=\"900\" src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FJingyunLiang_SwinIR_readme_adea9b6b9863.png\">\n\u003C\u002Fp>\n\non color images\n\n| Training Set | quality factor | PSNR (RGB) | PSNR-B (RGB) | SSIM (RGB) |\n|:-------------|:--------------:|:----------:|:------------:|:----------:|\n| 
LIVE1        |       10       |   28.06    |    27.76     |   0.8089   |\n| LIVE1        |       20       |   30.45    |    29.97     |   0.8741   |\n| LIVE1        |       30       |   31.82    |    31.24     |   0.9018   |\n| LIVE1        |       40       |   32.75    |    32.12     |   0.9174   |\n\u003C\u002Fdetails>\n\n\n\n## Citation\n    @article{liang2021swinir,\n      title={SwinIR: Image Restoration Using Swin Transformer},\n      author={Liang, Jingyun and Cao, Jiezhang and Sun, Guolei and Zhang, Kai and Van Gool, Luc and Timofte, Radu},\n      journal={arXiv preprint arXiv:2108.10257},\n      year={2021}\n    }\n\n\n## License and Acknowledgement\nThis project is released under the Apache 2.0 license. The codes are based on [Swin Transformer](https:\u002F\u002Fgithub.com\u002Fmicrosoft\u002FSwin-Transformer) and [KAIR](https:\u002F\u002Fgithub.com\u002Fcszn\u002FKAIR). Please also follow their licenses. Thanks for their awesome works.\n","# SwinIR：基于Swin Transformer的图像修复\n[Jingyun Liang](https:\u002F\u002Fjingyunliang.github.io)、[Jiezhang Cao](https:\u002F\u002Fwww.jiezhangcao.com\u002F)、[Guolei Sun](https:\u002F\u002Fvision.ee.ethz.ch\u002Fpeople-details.MjYzMjMw.TGlzdC8zMjg5LC0xOTcxNDY1MTc4.html)、[Kai Zhang](https:\u002F\u002Fcszn.github.io\u002F)、[Luc Van Gool](https:\u002F\u002Fscholar.google.com\u002Fcitations?user=TwMib_QAAAAJ&hl=en)、[Radu Timofte](http:\u002F\u002Fpeople.ee.ethz.ch\u002F~timofter\u002F)\n\n苏黎世联邦理工学院计算机视觉实验室\n\n---\n\n[![arXiv](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FarXiv-Paper-\u003CCOLOR>.svg)](https:\u002F\u002Farxiv.org\u002Fabs\u002F2108.10257)\n[![GitHub Stars](https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Fstars\u002FJingyunLiang\u002FSwinIR?style=social)](https:\u002F\u002Fgithub.com\u002FJingyunLiang\u002FSwinIR)\n[![download](https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Fdownloads\u002FJingyunLiang\u002FSwinIR\u002Ftotal.svg)](https:\u002F\u002Fgithub.com\u002FJingyunLiang\u002FSwinIR\u002Freleases)\n![visitors](https:\u002F\u002Fvisitor-badge.glitch.me\u002Fbadge?page_id=jingyunliang\u002FSwinIR)\n[ \u003Ca href=\"https:\u002F\u002Fcolab.research.google.com\u002Fgist\u002FJingyunLiang\u002Fa5e3e54bc9ef8d7bf594f6fee8208533\u002Fswinir-demo-on-real-world-image-sr.ipynb\">\u003Cimg src=\"https:\u002F\u002Fcolab.research.google.com\u002Fassets\u002Fcolab-badge.svg\" alt=\"google colab logo\">\u003C\u002Fa>](https:\u002F\u002Fcolab.research.google.com\u002Fgist\u002FJingyunLiang\u002Fa5e3e54bc9ef8d7bf594f6fee8208533\u002Fswinir-demo-on-real-world-image-sr.ipynb)\n\u003Ca href=\"https:\u002F\u002Freplicate.ai\u002Fjingyunliang\u002Fswinir\">\u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fstatic\u002Fv1?label=Replicate&message=Demo and Docker Image&color=blue\">\u003C\u002Fa>\n[![PlayTorch Demo](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fplaytorch\u002Fblob\u002Fmain\u002Fwebsite\u002Fstatic\u002Fassets\u002Fplaytorch_badge.svg)](https:\u002F\u002Fplaytorch.dev\u002Fsnack\u002F@playtorch\u002Fswinir\u002F)\n[Gradio Web Demo](https:\u002F\u002Fhuggingface.co\u002Fspaces\u002Fakhaliq\u002FSwinIR)\n\n本仓库是SwinIR的官方PyTorch实现，SwinIR是一种基于移位窗口Transformer的图像修复方法（[arxiv](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2108.10257.pdf)、[supp](https:\u002F\u002Fgithub.com\u002FJingyunLiang\u002FSwinIR\u002Freleases)、[预训练模型](https:\u002F\u002Fgithub.com\u002FJingyunLiang\u002FSwinIR\u002Freleases)、[可视化结果](https:\u002F\u002Fgithub.com\u002FJingyunLiang\u002FSwinIR\u002Freleases)）。SwinIR在以下任务中达到了**最先进水平**：\n- 
双三次\u002F轻量级\u002F真实世界图像超分辨率\n- 灰度\u002F彩色图像去噪\n- 灰度\u002F彩色JPEG压缩伪影去除\n\n\u003C\u002Fbr>\n\n:rocket:  :rocket:  :rocket: **新闻**：\n- **2022年8月16日**：新增PlayTorch演示，展示如何在移动设备上运行真实世界图像超分辨率模型 [![PlayTorch Demo](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fplaytorch\u002Fblob\u002Fmain\u002Fwebsite\u002Fstatic\u002Fassets\u002Fplaytorch_badge.svg)](https:\u002F\u002Fplaytorch.dev\u002Fsnack\u002F@playtorch\u002Fswinir\u002F)。\n- **2022年8月1日**：新增彩色图像JPEG压缩伪影去除的预训练模型及结果。\n- **2022年6月10日**：查看我们在视频修复方面的最新工作 :fire::fire::fire: [VRT：一种视频修复Transformer](https:\u002F\u002Fgithub.com\u002FJingyunLiang\u002FVRT) \n[![GitHub Stars](https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Fstars\u002FJingyunLiang\u002FVRT?style=social)](https:\u002F\u002Fgithub.com\u002FJingyunLiang\u002FVRT)\n[![download](https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Fdownloads\u002FJingyunLiang\u002FVRT\u002Ftotal.svg)](https:\u002F\u002Fgithub.com\u002FJingyunLiang\u002FVRT\u002Freleases)\n以及 [RVRT：递归视频修复Transformer](https:\u002F\u002Fgithub.com\u002FJingyunLiang\u002FRVRT) \n[![GitHub Stars](https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Fstars\u002FJingyunLiang\u002FRVRT?style=social)](https:\u002F\u002Fgithub.com\u002FJingyunLiang\u002FRVRT)\n[![download](https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Fdownloads\u002FJingyunLiang\u002FRVRT\u002Ftotal.svg)](https:\u002F\u002Fgithub.com\u002FJingyunLiang\u002FRVRT\u002Freleases)\n用于视频超分辨率、视频去模糊、视频去噪、视频帧插值以及时空视频超分辨率。\n- **2021年9月7日**：我们提供了一个交互式的在线Colab演示，用于真实世界图像超分辨率 \u003Ca href=\"https:\u002F\u002Fcolab.research.google.com\u002Fgist\u002FJingyunLiang\u002Fa5e3e54bc9ef8d7bf594f6fee8208533\u002Fswinir-demo-on-real-world-image-sr.ipynb\">\u003Cimg src=\"https:\u002F\u002Fcolab.research.google.com\u002Fassets\u002Fcolab-badge.svg\" alt=\"google colab logo\">\u003C\u002Fa>:fire:，可与[首个实用退化模型BSRGAN（ICCV2021） ![GitHub Stars](https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Fstars\u002Fcszn\u002FBSRGAN?style=social)](https:\u002F\u002Fgithub.com\u002Fcszn\u002FBSRGAN)和近期模型RealESRGAN进行对比。快来尝试在Colab上对您自己的图片进行超分辨率处理吧！\n\n|真实世界图像（×4）|[BSRGAN, ICCV2021](https:\u002F\u002Fgithub.com\u002Fcszn\u002FBSRGAN)|[Real-ESRGAN](https:\u002F\u002Fgithub.com\u002Fxinntao\u002FReal-ESRGAN)|SwinIR（我们的模型）|SwinIR-Large（我们的模型）|\n|       :---       |     :---:        |        :-----:         |        :-----:         |        :-----:         | \n| \u003Cimg width=\"200\" src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FJingyunLiang_SwinIR_readme_380151c15a0e.png\">|\u003Cimg width=\"200\" src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FJingyunLiang_SwinIR_readme_9b5c98b91974.png\">|\u003Cimg width=\"200\" src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FJingyunLiang_SwinIR_readme_b81eadf6a17f.jpg\">|\u003Cimg width=\"200\" src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FJingyunLiang_SwinIR_readme_ae1abf4ee967.png\">|\u003Cimg width=\"200\" src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FJingyunLiang_SwinIR_readme_a6773f15cd3a.png\">\n|\u003Cimg width=\"200\" src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FJingyunLiang_SwinIR_readme_20bb89d1f3fa.png\">|\u003Cimg width=\"200\" src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FJingyunLiang_SwinIR_readme_e7fde6cd735a.png\">|\u003Cimg width=\"200\" src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FJingyunLiang_SwinIR_readme_4236f61a51e8.png\">|\u003Cimg width=\"200\" 
src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FJingyunLiang_SwinIR_readme_eafac483f87f.png\">|\u003Cimg width=\"200\" src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FJingyunLiang_SwinIR_readme_431e6ded3599.png\">|\n\n - ***2021年8月26日**: 查看我们在[真实世界图像超分辨率方面的工作：实用退化模型BSRGAN，ICCV2021](https:\u002F\u002Fgithub.com\u002Fcszn\u002FBSRGAN)\n[![GitHub Stars](https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Fstars\u002Fcszn\u002FBSRGAN?style=social)](https:\u002F\u002Fgithub.com\u002Fcszn\u002FBSRGAN)*\n - ***2021年8月26日**: 查看我们在[图像超分辨率和图像缩放的生成建模方面的工作：基于归一化流的HCFlow，ICCV2021](https:\u002F\u002Fgithub.com\u002FJingyunLiang\u002FHCFlow)\n[![GitHub Stars](https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Fstars\u002FJingyunLiang\u002FHCFlow?style=social)](https:\u002F\u002Fgithub.com\u002FJingyunLiang\u002FHCFlow)[ \u003Ca href=\"https:\u002F\u002Fcolab.research.google.com\u002Fgist\u002FJingyunLiang\u002Fcdb3fef89ebd174eaa43794accb6f59d\u002Fhcflow-demo-on-x8-face-image-sr.ipynb\">\u003Cimg src=\"https:\u002F\u002Fcolab.research.google.com\u002Fassets\u002Fcolab-badge.svg\" alt=\"google colab logo\">\u003C\u002Fa>](https:\u002F\u002Fcolab.research.google.com\u002Fgist\u002FJingyunLiang\u002Fcdb3fef89ebd174eaa43794accb6f59d\u002Fhcflow-demo-on-x8-face-image-sr.ipynb)*\n - ***2021年8月26日**: 查看我们在[盲超分辨率方面的工作：空间变核估计（MANet，ICCV2021）](https:\u002F\u002Fgithub.com\u002FJingyunLiang\u002FMANet) [![GitHub Stars](https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Fstars\u002FJingyunLiang\u002FMANet?style=social)](https:\u002F\u002Fgithub.com\u002FJingyunLiang\u002FMANet)\n[ \u003Ca href=\"https:\u002F\u002Fcolab.research.google.com\u002Fgist\u002FJingyunLiang\u002F4ed2524d6e08343710ee408a4d997e1c\u002Fmanet-demo-on-spatially-variant-kernel-estimation.ipynb\">\u003Cimg src=\"https:\u002F\u002Fcolab.research.google.com\u002Fassets\u002Fcolab-badge.svg\" alt=\"google colab logo\">\u003C\u002Fa>](https:\u002F\u002Fcolab.research.google.com\u002Fgist\u002FJingyunLiang\u002F4ed2524d6e08343710ee408a4d997e1c\u002Fmanet-demo-on-spatially-variant-kernel-estimation.ipynb)以及[无监督核估计（FKP，CVPR2021）](https:\u002F\u002Fgithub.com\u002FJingyunLiang\u002FFKP)\n[![GitHub Stars](https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Fstars\u002FJingyunLiang\u002FFKP?style=social)](https:\u002F\u002Fgithub.com\u002FJingyunLiang\u002FFKP)*\n\n---\n\n> 图像恢复是一个由来已久的低层视觉问题，旨在从低质量图像（例如下采样、噪声和压缩图像）中恢复高质量图像。尽管当前最先进的图像恢复方法基于卷积神经网络，但很少有研究尝试使用在高层视觉任务中表现出色的Transformer模型。本文提出了一种基于Swin Transformer的强大基准模型SwinIR，用于图像恢复。SwinIR由浅层特征提取、深层特征提取和高质量图像重建三部分组成。其中，深层特征提取模块由多个残差Swin Transformer块（RSTB）构成，每个RSTB包含若干Swin Transformer层以及一个残差连接。我们在三个具有代表性的任务上进行了实验：图像超分辨率（包括经典、轻量级和真实场景图像超分辨率）、图像去噪（包括灰度和彩色图像去噪）以及JPEG压缩伪影去除。实验结果表明，SwinIR在不同任务上均优于当前最先进方法，峰值信噪比提升可达0.14~0.45dB，同时总参数量可减少多达67%。\n>\u003Cp align=\"center\">\n  \u003Cimg width=\"800\" src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FJingyunLiang_SwinIR_readme_ec9a1f8fd120.png\">\n\u003C\u002Fp>\n\n\n\n#### 目录\n\n1. [训练](#Training)\n1. [测试](#Testing)\n1. [结果](#Results)\n1. [引用](#Citation)\n1. 
[许可与致谢](#License-and-Acknowledgement)\n\n\n\n\n### 训练\n\n\n可用于训练和测试的数据集可按如下方式下载：\n\n| 任务                                                |                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                        训练集                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                         | 测试集|    可视化结果 |    \n|:----------------------------------------------------|:-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------:|     :---:      |   :---:      |\n| 经典\u002F轻量级图像超分辨率                      |                                                                                                                                                                                                                                                                                                                                                                                                                                                               [DIV2K](https:\u002F\u002Fcv.snu.ac.kr\u002Fresearch\u002FEDSR\u002FDIV2K.tar) (800张训练图像) 或 DIV2K +[Flickr2K](https:\u002F\u002Fcv.snu.ac.kr\u002Fresearch\u002FEDSR\u002FFlickr2K.tar) (2650张图像)                                                                                                                                                                                                                                                                                                                                                                                                                                                               | Set5 + Set14 + BSD100 + Urban100 + Manga109 
[下载全部](https:\u002F\u002Fdrive.google.com\u002Fdrive\u002Ffolders\u002F1B3DJGQKB6eNdwuQIhdskA64qUuVKLZ9u) | [这里](https:\u002F\u002Fgithub.com\u002FJingyunLiang\u002FSwinIR\u002Freleases) |\n| 现实世界图像超分辨率                                 | SwinIR-M（中等尺寸）：[DIV2K](https:\u002F\u002Fcv.snu.ac.kr\u002Fresearch\u002FEDSR\u002FDIV2K.tar) (800张训练图像) +[Flickr2K](https:\u002F\u002Fcv.snu.ac.kr\u002Fresearch\u002FEDSR\u002FFlickr2K.tar) (2650张图像) + [OST](https:\u002F\u002Fopenmmlab.oss-cn-hangzhou.aliyuncs.com\u002Fdatasets\u002FOST_dataset.zip) ([替代链接](https:\u002F\u002Fdrive.google.com\u002Fdrive\u002Ffolders\u002F1iZfzAxAwOpeutz27HC56_y5RNqnsPPKr), 10324张图像，涵盖天空、水、草地、山脉、建筑、植物、动物) \u003Cbr \u002F> SwinIR-L（大尺寸）：DIV2K + Flickr2K + OST + [WED](http:\u002F\u002Fivc.uwaterloo.ca\u002Fdatabase\u002FWaterlooExploration\u002Fexploration_database_and_code.rar)(4744张图像) + [FFHQ](https:\u002F\u002Fdrive.google.com\u002Fdrive\u002Ffolders\u002F1tZUcXDBeOibC6jcMCtgRRz67pzrAHeHL) (前2000张图像，人脸) + Manga109 (漫画) + [SCUT-CTW1500](https:\u002F\u002Funiversityofadelaide.box.com\u002Fshared\u002Fstatic\u002Fpy5uwlfyyytbb2pxzq9czvu6fuqbjdh8.zip) (前100张训练图像，文本) \u003Cbr \u002F>\u003Cbr \u002F>  ***我们采用来自[BSRGAN, ICCV2021]的开创性实用退化模型 ![GitHub Stars](https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Fstars\u002Fcszn\u002FBSRGAN?style=social)](https:\u002F\u002Fgithub.com\u002Fcszn\u002FBSRGAN)** | [RealSRSet+5images](https:\u002F\u002Fgithub.com\u002FJingyunLiang\u002FSwinIR\u002Freleases\u002Fdownload\u002Fv0.0\u002FRealSRSet+5images.zip) |  [这里](https:\u002F\u002Fgithub.com\u002FJingyunLiang\u002FSwinIR\u002Freleases) |\n| 彩色\u002F灰度图像去噪                     |                                                                                                                                                                                                                                                                                                             [DIV2K](https:\u002F\u002Fcv.snu.ac.kr\u002Fresearch\u002FEDSR\u002FDIV2K.tar) (800张训练图像) + [Flickr2K](https:\u002F\u002Fcv.snu.ac.kr\u002Fresearch\u002FEDSR\u002FFlickr2K.tar) (2650张图像) + [BSD500](http:\u002F\u002Fwww.eecs.berkeley.edu\u002FResearch\u002FProjects\u002FCS\u002Fvision\u002Fgrouping\u002FBSR\u002FBSR_bsds500.tgz) (400张用于训练和测试的图像) + [WED](http:\u002F\u002Fivc.uwaterloo.ca\u002Fdatabase\u002FWaterlooExploration\u002Fexploration_database_and_code.rar)(4744张图像)  \u003Cbr \u002F>\u003Cbr \u002F>  *BSD68\u002FBSD100图像未用于训练。                                                                                                                                                                                                                                                                                                              |  灰度：Set12 + BSD68 + Urban100 \u003Cbr \u002F>  彩色：CBSD68 + Kodak24 + McMaster + Urban100 [下载全部](https:\u002F\u002Fgithub.com\u002Fcszn\u002FFFDNet\u002Ftree\u002Fmaster\u002Ftestsets) |  [这里](https:\u002F\u002Fgithub.com\u002FJingyunLiang\u002FSwinIR\u002Freleases) |\n| 灰度\u002F彩色JPEG压缩伪影去除 |                                                                                                                                                                                                                                                                                                                                            [DIV2K](https:\u002F\u002Fcv.snu.ac.kr\u002Fresearch\u002FEDSR\u002FDIV2K.tar) (800张训练图像) + 
[Flickr2K](https:\u002F\u002Fcv.snu.ac.kr\u002Fresearch\u002FEDSR\u002FFlickr2K.tar) (2650张图像) + [BSD500](http:\u002F\u002Fwww.eecs.berkeley.edu\u002FResearch\u002FProjects\u002FCS\u002Fvision\u002Fgrouping\u002FBSR\u002FBSR_bsds500.tgz) (400张用于训练和测试的图像) + [WED](http:\u002F\u002Fivc.uwaterloo.ca\u002Fdatabase\u002FWaterlooExploration\u002Fexploration_database_and_code.rar)(4744张图像)                                                                                                                                                                                                                                                                                                                                             |  灰度：Classic5 + LIVE1 [下载全部](https:\u002F\u002Fgithub.com\u002Fcszn\u002FDnCNN\u002Ftree\u002Fmaster\u002Ftestsets) | [这里](https:\u002F\u002Fgithub.com\u002FJingyunLiang\u002FSwinIR\u002Freleases) |\n\n\u003C!--\n| 任务                 | 训练集 | 测试集 | SwinIR 的预训练模型及可视化结果 | \n| :---                 | :---:  | :---:  | :---:  |\n| 图像去噪（真实）      | [SIDD-Medium-sRGB](https:\u002F\u002Fwww.eecs.yorku.ca\u002F~kamel\u002Fsidd\u002Fdataset.php)（320张图像，[预处理]()) + [RENOIR](http:\u002F\u002Fani.stat.fsu.edu\u002F~abarbu\u002FRenoir.html)（221张图像，[预处理](https:\u002F\u002Fgithub.com\u002FzsyOAOA\u002FDANet\u002Fblob\u002Fmaster\u002Fdatasets\u002Fpreparedata\u002FRenoir_big2small_all.py)) + [Poly](https:\u002F\u002Fgithub.com\u002Fcsjunxu\u002FPolyU-Real-World-Noisy-Images-Dataset)（.\u002FOriginalImages中的40张图像） |    [SIDD 验证集](https:\u002F\u002Fdrive.google.com\u002Fdrive\u002Ffolders\u002F1S44fHXaVxAYW3KLNxK41NYCnyX9S79su)（1280个patch，与官方 [.mat](https:\u002F\u002Fwww.eecs.yorku.ca\u002F~kamel\u002Fsidd\u002Fbenchmark.php) 版本相同）+  [DND](https:\u002F\u002Fnoise.visinf.tu-darmstadt.de\u002Fdownloads\u002F)（预先定义的50张图像中的100个patch，[在线评估](https:\u002F\u002Fnoise.visinf.tu-darmstadt.de\u002Fsubmit\u002F)) + [Nam](https:\u002F\u002Fwww.dropbox.com\u002Fs\u002F24kds7c436i5i11\u002Freal_image_noise_dataset.zip?dl=0)（17张图像中随机选取的100个patch，[预处理](https:\u002F\u002Fgithub.com\u002FzsyOAOA\u002FDANet\u002Fblob\u002Fmaster\u002Fdatasets\u002Fpreparedata\u002FNam_patch_prepare.py))|[下载模型]() [下载结果]() |\n| 图像去模糊（合成）   | [GoPro](https:\u002F\u002Fdrive.google.com\u002Fdrive\u002Ffolders\u002F1AsgIP9_X0bg0olu2-1N6karm2x15cJWE)（2103张训练图像）  |  [GoPro](https:\u002F\u002Fdrive.google.com\u002Fdrive\u002Ffolders\u002F1a2qKfXWpNuTGOm2-Jex8kfNSzYJLbqkf)（1111张图像）+ [HIDE](https:\u002F\u002Fdrive.google.com\u002Fdrive\u002Ffolders\u002F1nRsTXj4iTUkTvBhTcGg8cySK8nd3vlhK)（2050张图像）+ [RealBlur_J](https:\u002F\u002Fdrive.google.com\u002Fdrive\u002Ffolders\u002F1KYtzeKCiDRX9DSvC-upHrCqvC4sPAiJ1)（真实模糊，980张图像）+ [RealBlur_R](https:\u002F\u002Fdrive.google.com\u002Fdrive\u002Ffolders\u002F1EwDoajf5nStPIAcU4s9rdc8SPzfm3tW1)（真实模糊，980张图像） | [下载模型]() [下载结果]()|\n| 图像去雨（合成）  | [多个数据集](https:\u002F\u002Fdrive.google.com\u002Fdrive\u002Ffolders\u002F1Hnnlc5kI0v9_BtfMytC2LR5VpLAFZtVe)（13711张训练图像，详情见 [MPRNet](https:\u002F\u002Fgithub.com\u002Fswz30\u002FMPRNet) 的表1。）  |  Rain100H（100张图像）+ Rain100L（100张图像）+ Test100（100张图像）+ Test2800（2800张图像）+ Test1200（1200张图像），[全部下载](https:\u002F\u002Fdrive.google.com\u002Fdrive\u002Ffolders\u002F1PDWggNh8ylevFmrjo-JEvlmqsDlWWvZs)  | [下载模型]() [下载结果]()|\n\n注：上述数据集可能来自官方发布或一些优秀的开源项目（[BasicSR](https:\u002F\u002Fgithub.com\u002Fxinntao\u002FBasicSR)，[MPRNet](https:\u002F\u002Fgithub.com\u002Fswz30\u002FMPRNet)）。\n\n-->\n\n训练代码位于 
[KAIR](https:\u002F\u002Fgithub.com\u002Fcszn\u002FKAIR\u002Fblob\u002Fmaster\u002Fdocs\u002FREADME_SwinIR.md)。\n\n\n\n## 测试（无需准备数据集）\n为方便起见，我们在 `\u002Ftestsets` 中提供了一些示例数据集（约20MB）。 \n如果您只需要代码，只需下载 `models\u002Fnetwork_swinir.py`、`utils\u002Futil_calculate_psnr_ssim.py` 和 `main_test_swinir.py` 即可。 \n以下命令会**自动**下载 [预训练模型](https:\u002F\u002Fgithub.com\u002FJingyunLiang\u002FSwinIR\u002Freleases) 并将其放置在 `model_zoo\u002Fswinir` 目录下。 \n**[SwinIR 的所有可视化结果可在此处下载](https:\u002F\u002Fgithub.com\u002FJingyunLiang\u002FSwinIR\u002Freleases)**。 \n\n我们还提供了一个针对真实世界图像超分辨率的在线 Colab 演示 \u003Ca href=\"https:\u002F\u002Fcolab.research.google.com\u002Fgist\u002FJingyunLiang\u002Fa5e3e54bc9ef8d7bf594f6fee8208533\u002Fswinir-demo-on-real-world-image-sr.ipynb\">\u003Cimg src=\"https:\u002F\u002Fcolab.research.google.com\u002Fassets\u002Fcolab-badge.svg\" alt=\"Google Colab 标志\">\u003C\u002Fa>，用于与 [首个实用退化模型 BSRGAN（ICCV2021） ![GitHub 星标](https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Fstars\u002Fcszn\u002FBSRGAN?style=social)](https:\u002F\u002Fgithub.com\u002Fcszn\u002FBSRGAN) 以及近期模型 [RealESRGAN](https:\u002F\u002Fgithub.com\u002Fxinntao\u002FReal-ESRGAN) 进行对比。不妨在 Colab 上测试您自己的图像！\n\n我们还提供了一个 PlayTorch 演示 [![PlayTorch 演示](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fplaytorch\u002Fblob\u002Fmain\u002Fwebsite\u002Fstatic\u002Fassets\u002Fplaytorch_badge.svg)](https:\u002F\u002Fplaytorch.dev\u002Fsnack\u002F@playtorch\u002Fswinir\u002F)，用于展示如何在基于 React Native 构建的移动应用中运行 SwinIR 模型，以实现真实世界图像的超分辨率。\n
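\n如果希望在自己的 Python 脚本中直接调用模型而不走命令行，也可以参考下面的最小示意代码：从 `models\u002Fnetwork_swinir.py` 构建网络、加载预训练权重，并对一张低分辨率图片做 4 倍经典超分。请注意这只是一个假设性的示意写法，并非官方接口约定：其中的构造参数取自经典超分 SwinIR-M（training_patch_size=48、window_size=8）这一配置，权重字典键 `params` 与示例图片路径同样是假设，实际请以 `main_test_swinir.py` 中 `define_model` 的实现和所下载的 `.pth` 文件为准；输入需要先填充到窗口大小的整数倍，推理后再裁剪回去。\n\n```python\nimport cv2\nimport numpy as np\nimport torch\nimport torch.nn.functional as F\n\nfrom models.network_swinir import SwinIR\n\n# 构造参数须与所用权重一致；以下取值对应经典超分 SwinIR-M x4（s48w8），仅作示意\nmodel = SwinIR(upscale=4, in_chans=3, img_size=48, window_size=8, img_range=1.,\n               depths=[6, 6, 6, 6, 6, 6], embed_dim=180, num_heads=[6, 6, 6, 6, 6, 6],\n               mlp_ratio=2, upsampler='pixelshuffle', resi_connection='1conv')\nckpt = torch.load('model_zoo\u002Fswinir\u002F001_classicalSR_DIV2K_s48w8_SwinIR-M_x4.pth', map_location='cpu')\nmodel.load_state_dict(ckpt['params'] if 'params' in ckpt else ckpt, strict=True)\nmodel.eval()\n\n# 读取低分辨率图片（路径仅为示例），BGR 转 RGB，归一化到 [0,1]，转为 NCHW 张量\nimg = cv2.imread('testsets\u002FSet5\u002FLR_bicubic\u002FX4\u002Fbabyx4.png', cv2.IMREAD_COLOR)\nlq = torch.from_numpy(img[:, :, ::-1].astype(np.float32) \u002F 255.).permute(2, 0, 1).unsqueeze(0)\n\n# 将输入填充到窗口大小（8）的整数倍，推理后裁回原尺寸的 4 倍\n_, _, h, w = lq.shape\npad_h, pad_w = (8 - h % 8) % 8, (8 - w % 8) % 8\nlq = F.pad(lq, (0, pad_w, 0, pad_h), mode='reflect')\nwith torch.no_grad():\n    sr = model(lq)[..., :h * 4, :w * 4]\n\n# 转回 BGR uint8 并保存结果\nout = (sr[0].clamp(0, 1).permute(1, 2, 0).numpy()[:, :, ::-1] * 255.0).round().astype(np.uint8)\ncv2.imwrite('swinir_x4_result.png', out)\n```\n\n上述思路与 `main_test_swinir.py` 的测试流程大体一致（这里用 reflect 填充代替官方脚本中的翻转拼接）；若需要 `--tile` 分块推理或批量计算 PSNR\u002FSSIM，建议直接使用下方的官方命令。\n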
\n```bash\n# 001 经典图像超分辨率（中等尺寸）\n# 注意：--training_patch_size 仅用于区分论文表2中的两种不同设置。图像并非按patch逐块测试。\n# （设置1：当模型在DIV2K上训练且 training_patch_size=48时）\npython main_test_swinir.py --task classical_sr --scale 2 --training_patch_size 48 --model_path model_zoo\u002Fswinir\u002F001_classicalSR_DIV2K_s48w8_SwinIR-M_x2.pth --folder_lq testsets\u002FSet5\u002FLR_bicubic\u002FX2 --folder_gt testsets\u002FSet5\u002FHR\npython main_test_swinir.py --task classical_sr --scale 3 --training_patch_size 48 --model_path model_zoo\u002Fswinir\u002F001_classicalSR_DIV2K_s48w8_SwinIR-M_x3.pth --folder_lq testsets\u002FSet5\u002FLR_bicubic\u002FX3 --folder_gt testsets\u002FSet5\u002FHR\npython main_test_swinir.py --task classical_sr --scale 4 --training_patch_size 48 --model_path model_zoo\u002Fswinir\u002F001_classicalSR_DIV2K_s48w8_SwinIR-M_x4.pth --folder_lq testsets\u002FSet5\u002FLR_bicubic\u002FX4 --folder_gt testsets\u002FSet5\u002FHR\npython main_test_swinir.py --task classical_sr --scale 8 --training_patch_size 48 --model_path model_zoo\u002Fswinir\u002F001_classicalSR_DIV2K_s48w8_SwinIR-M_x8.pth --folder_lq testsets\u002FSet5\u002FLR_bicubic\u002FX8 --folder_gt testsets\u002FSet5\u002FHR\n\n# （设置2：当模型在DIV2K+Flickr2K上训练且 training_patch_size=64时）\npython main_test_swinir.py --task classical_sr --scale 2 --training_patch_size 64 --model_path model_zoo\u002Fswinir\u002F001_classicalSR_DF2K_s64w8_SwinIR-M_x2.pth --folder_lq testsets\u002FSet5\u002FLR_bicubic\u002FX2 --folder_gt testsets\u002FSet5\u002FHR\npython main_test_swinir.py --task classical_sr --scale 3 --training_patch_size 64 --model_path model_zoo\u002Fswinir\u002F001_classicalSR_DF2K_s64w8_SwinIR-M_x3.pth --folder_lq testsets\u002FSet5\u002FLR_bicubic\u002FX3 --folder_gt testsets\u002FSet5\u002FHR\npython main_test_swinir.py --task classical_sr --scale 4 --training_patch_size 64 --model_path model_zoo\u002Fswinir\u002F001_classicalSR_DF2K_s64w8_SwinIR-M_x4.pth --folder_lq testsets\u002FSet5\u002FLR_bicubic\u002FX4 --folder_gt testsets\u002FSet5\u002FHR\npython main_test_swinir.py --task classical_sr --scale 8 --training_patch_size 64 --model_path model_zoo\u002Fswinir\u002F001_classicalSR_DF2K_s64w8_SwinIR-M_x8.pth --folder_lq testsets\u002FSet5\u002FLR_bicubic\u002FX8 --folder_gt testsets\u002FSet5\u002FHR\n\n\n# 002 轻量级图像超分辨率（小尺寸）\npython main_test_swinir.py --task lightweight_sr --scale 2 --model_path model_zoo\u002Fswinir\u002F002_lightweightSR_DIV2K_s64w8_SwinIR-S_x2.pth --folder_lq testsets\u002FSet5\u002FLR_bicubic\u002FX2 --folder_gt testsets\u002FSet5\u002FHR\npython main_test_swinir.py --task lightweight_sr --scale 3 --model_path model_zoo\u002Fswinir\u002F002_lightweightSR_DIV2K_s64w8_SwinIR-S_x3.pth --folder_lq testsets\u002FSet5\u002FLR_bicubic\u002FX3 --folder_gt testsets\u002FSet5\u002FHR\npython main_test_swinir.py --task lightweight_sr --scale 4 --model_path model_zoo\u002Fswinir\u002F002_lightweightSR_DIV2K_s64w8_SwinIR-S_x4.pth --folder_lq testsets\u002FSet5\u002FLR_bicubic\u002FX4 --folder_gt testsets\u002FSet5\u002FHR\n\n# 003 现实世界图像超分辨率（如果内存不足，请使用 --tile 400）\n# （中等尺寸）\npython main_test_swinir.py --task real_sr --scale 4 --model_path model_zoo\u002Fswinir\u002F003_realSR_BSRGAN_DFO_s64w8_SwinIR-M_x4_GAN.pth --folder_lq testsets\u002FRealSRSet+5images\n\n# （更大尺寸 + 在更多数据集上训练）\npython main_test_swinir.py --task real_sr --scale 4 --large_model --model_path model_zoo\u002Fswinir\u002F003_realSR_BSRGAN_DFOWMFC_s64w8_SwinIR-L_x4_GAN.pth --folder_lq testsets\u002FRealSRSet+5images\n\n\n# 004 灰度图像去噪（中等尺寸）\npython main_test_swinir.py --task gray_dn --noise 15 --model_path model_zoo\u002Fswinir\u002F004_grayDN_DFWB_s128w8_SwinIR-M_noise15.pth --folder_gt testsets\u002FSet12\npython main_test_swinir.py --task gray_dn --noise 25 --model_path model_zoo\u002Fswinir\u002F004_grayDN_DFWB_s128w8_SwinIR-M_noise25.pth --folder_gt testsets\u002FSet12\npython main_test_swinir.py --task gray_dn --noise 50 --model_path model_zoo\u002Fswinir\u002F004_grayDN_DFWB_s128w8_SwinIR-M_noise50.pth --folder_gt testsets\u002FSet12\n\n\n# 005 彩色图像去噪（中等尺寸）\npython main_test_swinir.py --task color_dn --noise 15 --model_path model_zoo\u002Fswinir\u002F005_colorDN_DFWB_s128w8_SwinIR-M_noise15.pth --folder_gt testsets\u002FMcMaster\npython main_test_swinir.py --task color_dn --noise 25 --model_path model_zoo\u002Fswinir\u002F005_colorDN_DFWB_s128w8_SwinIR-M_noise25.pth --folder_gt testsets\u002FMcMaster\npython main_test_swinir.py --task color_dn --noise 50 --model_path model_zoo\u002Fswinir\u002F005_colorDN_DFWB_s128w8_SwinIR-M_noise50.pth --folder_gt testsets\u002FMcMaster\n\n\n# 006 JPEG压缩伪影去除（中等尺寸，由于JPEG编码使用8x8块，因此使用window_size=7）\n# 灰度\npython main_test_swinir.py --task jpeg_car --jpeg 10 --model_path model_zoo\u002Fswinir\u002F006_CAR_DFWB_s126w7_SwinIR-M_jpeg10.pth --folder_gt testsets\u002Fclassic5\npython main_test_swinir.py --task jpeg_car --jpeg 20 --model_path model_zoo\u002Fswinir\u002F006_CAR_DFWB_s126w7_SwinIR-M_jpeg20.pth --folder_gt testsets\u002Fclassic5\npython main_test_swinir.py --task jpeg_car --jpeg 30 --model_path model_zoo\u002Fswinir\u002F006_CAR_DFWB_s126w7_SwinIR-M_jpeg30.pth --folder_gt testsets\u002Fclassic5\npython main_test_swinir.py --task jpeg_car --jpeg 40 --model_path model_zoo\u002Fswinir\u002F006_CAR_DFWB_s126w7_SwinIR-M_jpeg40.pth --folder_gt testsets\u002Fclassic5\n\n# 彩色\npython main_test_swinir.py --task color_jpeg_car --jpeg 10 --model_path model_zoo\u002Fswinir\u002F006_colorCAR_DFWB_s126w7_SwinIR-M_jpeg10.pth --folder_gt testsets\u002FLIVE1\npython main_test_swinir.py --task color_jpeg_car --jpeg 20 --model_path 
model_zoo\u002Fswinir\u002F006_colorCAR_DFWB_s126w7_SwinIR-M_jpeg20.pth --folder_gt testsets\u002FLIVE1\npython main_test_swinir.py --task color_jpeg_car --jpeg 30 --model_path model_zoo\u002Fswinir\u002F006_colorCAR_DFWB_s126w7_SwinIR-M_jpeg30.pth --folder_gt testsets\u002FLIVE1\npython main_test_swinir.py --task color_jpeg_car --jpeg 40 --model_path model_zoo\u002Fswinir\u002F006_colorCAR_DFWB_s126w7_SwinIR-M_jpeg40.pth --folder_gt testsets\u002FLIVE1\n\n```\n\n---\n\n## 结果\n我们在经典\u002F轻量级\u002F现实世界图像超分辨率、灰度\u002F彩色图像去噪以及JPEG压缩伪影去除任务上均取得了当前最优性能。详细结果请参阅[论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F2108.10257)。SwinIR的所有可视化结果可在此处下载：[链接](https:\u002F\u002Fgithub.com\u002FJingyunLiang\u002FSwinIR\u002Freleases)。\n\n\u003Cdetails>\n\u003Csummary>经典图像超分辨率（点击展开）\u003C\u002Fsummary>\n\u003Cp align=\"center\">\n  \u003Cimg width=\"900\" src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FJingyunLiang_SwinIR_readme_0859ab6d5e33.png\">\n  \u003Cimg width=\"900\" src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FJingyunLiang_SwinIR_readme_14afa04e2303.png\">\n\u003C\u002Fp>\n  \n- SwinIR与代表性CNN模型RCAN在经典图像超分辨率（X4）任务中的更详细对比\n\n| 方法             | 训练集    | 训练时间  \u003Cbr \u002F> (8个GeForce RTX 2080 Ti \u003Cbr \u002F> batch=32, iter=50万) | Manga109上的Y-PSNR\u002FY-SSIM | 运行时间  \u003Cbr \u002F> (1个GeForce RTX 2080 Ti,\u003Cbr \u002F> 处理256x256低分辨率图像)* | 参数量   | FLOPs | 测试内存 |\n| :---      | :---:        |        :-----:         |     :---:      |     :---:      |     :---:      |   :---:      |  :---:      |\n| RCAN | DIV2K | 1.6天 | 31.22\u002F0.9173 | 0.180秒 | 15.6M | 850.6G | 593.1M | \n| SwinIR | DIV2K | 1.8天 | 31.67\u002F0.9226 | 0.539秒 | 11.9M | 788.6G | 986.8M | \n\n\\* 我们在GPU空闲时重新测试了运行时间。评估代码参考[此处](https:\u002F\u002Fgithub.com\u002Fcszn\u002FKAIR\u002Fblob\u002Fmaster\u002Fmain_challenge_sr.py)。\n\n  \n- DIV2K验证集（100张图像）的结果\n  \n| 训练集 | 缩放因子 | PSNR (RGB) | PSNR (Y) | SSIM (RGB)  | SSIM (Y) |\n| :--- | :---: | :---:        |     :---:      | :---: | :---:        |\n|  DIV2K (800张) | 2 | 35.25 | 36.77 | 0.9423 | 0.9500 |\n|  DIV2K+Flickr2K (2650张) | 2 | 35.34 | 36.86 | 0.9430 | 0.9507 |\n|  DIV2K (800张) | 3 | 31.50 | 32.97 | 0.8832 | 0.8965 |\n|  DIV2K+Flickr2K (2650张) | 3 | 31.63 | 33.10 | 0.8854 | 0.8985 |\n|  DIV2K (800张) | 4 | 29.48 | 30.94 | 0.8311 | 0.8492 |\n|  DIV2K+Flickr2K (2650张) | 4 | 29.63 | 31.08 | 0.8347 | 0.8523 |\n\n\u003C\u002Fdetails>\n\n\u003Cdetails>\n\u003Csummary>轻量级图像超分辨率\u003C\u002Fsummary>\n\u003Cp align=\"center\">\n  \u003Cimg width=\"900\" src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FJingyunLiang_SwinIR_readme_d5b03d1d0a5f.png\">\n\u003C\u002Fp>\n\u003C\u002Fdetails>\n\n\u003Cdetails>\n\u003Csummary>现实世界图像超分辨率\u003C\u002Fsummary>\n\u003Cp align=\"center\">\n  \u003Cimg width=\"900\" src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FJingyunLiang_SwinIR_readme_f11474d9e792.png\">\n\u003C\u002Fp>\n\u003C\u002Fdetails>\n\n\u003Cdetails>\n\u003Csummary>灰度图像去噪\u003C\u002Fsummary>\n\u003Cp align=\"center\">\n  \u003Cimg width=\"900\" src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FJingyunLiang_SwinIR_readme_910fc320c8d0.png\">\n\u003C\u002Fp>\n\u003C\u002Fdetails>\n\n\u003Cdetails>\n\u003Csummary>彩色图像去噪\u003C\u002Fsummary>\n\u003Cp align=\"center\">\n  \u003Cimg width=\"900\" src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FJingyunLiang_SwinIR_readme_a572638fd3df.png\">\n\u003C\u002Fp>\n\u003C\u002Fdetails>\n\n\u003Cdetails>\n\u003Csummary>JPEG压缩伪影去除\u003C\u002Fsummary>\n\n针对灰度图像\n\u003Cp align=\"center\">\n  
\u003Cimg width=\"900\" src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FJingyunLiang_SwinIR_readme_adea9b6b9863.png\">\n\u003C\u002Fp>\n\n针对彩色图像\n\n| 训练集 | 质量因子 | PSNR (RGB) | PSNR-B (RGB) | SSIM (RGB) |\n|:-------------|:--------------:|:----------:|:------------:|:----------:|\n| LIVE1        |       10       |   28.06    |    27.76     |   0.8089   |\n| LIVE1        |       20       |   30.45    |    29.97     |   0.8741   |\n| LIVE1        |       30       |   31.82    |    31.24     |   0.9018   |\n| LIVE1        |       40       |   32.75    |    32.12     |   0.9174   |\n\u003C\u002Fdetails>\n\n\n\n## 引用\n    @article{liang2021swinir,\n      title={SwinIR: 使用Swin Transformer进行图像恢复},\n      author={Liang, Jingyun and Cao, Jiezhang and Sun, Guolei and Zhang, Kai and Van Gool, Luc and Timofte, Radu},\n      journal={arXiv预印本 arXiv:2108.10257},\n      year={2021}\n    }\n\n\n## 许可与致谢\n本项目采用Apache 2.0许可证发布。代码基于[Swin Transformer](https:\u002F\u002Fgithub.com\u002Fmicrosoft\u002FSwin-Transformer)和[KAIR](https:\u002F\u002Fgithub.com\u002Fcszn\u002FKAIR)。请一并遵守它们的许可证。感谢他们的杰出工作。","# SwinIR 快速上手指南\n\nSwinIR 是基于 Swin Transformer 的图像复原官方 PyTorch 实现，在图像超分辨率（经典\u002F轻量\u002F真实世界）、图像去噪（灰度\u002F彩色）以及 JPEG 压缩伪影去除任务上均达到了业界领先（SOTA）的性能。\n\n## 环境准备\n\n在开始之前，请确保您的系统满足以下要求：\n\n*   **操作系统**: Linux 或 Windows (推荐 Linux)\n*   **Python**: 3.7 或更高版本\n*   **PyTorch**: 1.7.0 或更高版本 (建议搭配 CUDA 使用以获得最佳性能)\n*   **其他依赖**: `torchvision`, `numpy`, `cv2` (opencv-python), `timm`, `einops`\n\n## 安装步骤\n\n### 1. 克隆仓库\n首先从 GitHub 克隆项目代码：\n\n```bash\ngit clone https:\u002F\u002Fgithub.com\u002FJingyunLiang\u002FSwinIR.git\ncd SwinIR\n```\n\n### 2. 安装依赖\n推荐使用国内镜像源加速安装过程。您可以使用以下命令安装所需依赖：\n\n```bash\n# 使用清华源安装基础依赖\npip install -r requirements.txt -i https:\u002F\u002Fpypi.tuna.tsinghua.edu.cn\u002Fsimple\n\n# 如果 requirements.txt 未包含所有包，可手动安装核心依赖\npip install torch torchvision torchaudio --index-url https:\u002F\u002Fdownload.pytorch.org\u002Fwhl\u002Fcu118\npip install timm einops opencv-python numpy -i https:\u002F\u002Fpypi.tuna.tsinghua.edu.cn\u002Fsimple\n```\n\n> **注意**：请根据您的 CUDA 版本调整 `torch` 的安装命令。上述命令示例为 CUDA 11.8，其他版本请参考 PyTorch 官网。\n\n## 基本使用\n\n以下是使用预训练模型进行**真实世界图像超分辨率 (Real-World Image SR)** 的最简示例。\n\n### 1. 下载预训练模型\n从 [Releases 页面](https:\u002F\u002Fgithub.com\u002FJingyunLiang\u002FSwinIR\u002Freleases) 下载对应的预训练模型。\n例如，下载真实世界超分模型 `003_realSR_BSRGAN_DFO_s64w8_SwinIR-M_x4_GAN.pth` 并放入 `pretrained_models` 文件夹中：\n\n```bash\nmkdir -p pretrained_models\n# 假设您已手动下载或通过 wget 获取了模型文件\n# wget -P pretrained_models https:\u002F\u002Fgithub.com\u002FJingyunLiang\u002FSwinIR\u002Freleases\u002Fdownload\u002Fv0.0.0\u002F003_realSR_BSRGAN_DFO_s64w8_SwinIR-M_x4_GAN.pth\n```\n\n### 2. 准备测试图片\n将需要处理的低质量图片放入 `testsets` 文件夹下的对应子目录（例如 `testsets\u002FSet5` 或直接放在自定义文件夹）。\n\n### 3. 
运行推理\n使用提供的测试脚本进行图像复原。以下命令演示了对 `testsets` 中的图片进行 4 倍超分辨率处理：\n\n```bash\npython main_test_swinir.py \\\n  --task real_sr \\\n  --scale 4 \\\n  --model_path pretrained_models\u002F003_realSR_BSRGAN_DFO_s64w8_SwinIR-M_x4_GAN.pth \\\n  --folder_lq testsets\u002FSet5 \\\n  --folder_gt null\n```\n\n**参数说明：**\n*   `--task`: 任务类型，可选值包括 `classical_sr` (经典超分), `lightweight_sr` (轻量超分), `real_sr` (真实世界超分), `gray_dn` (灰度去噪), `color_dn` (彩色去噪), `jpeg_car` (JPEG 去伪影)。\n*   `--scale`: 超分倍数 (仅超分任务需要，如 2, 3, 4)。\n*   `--model_path`: 预训练模型的路径。\n*   `--folder_lq`: 低质量输入图片的文件夹路径。\n*   `--folder_gt`: 高质量参考图路径 (推理时可设为 `null`)。\n\n处理完成后，结果图片将默认保存在 `results` 文件夹中。\n\n---\n*更多任务（如去噪、JPEG 修复）只需更改 `--task` 参数并加载对应的预训练模型即可，用法同上。*","一家数字档案馆正在对一批 20 年前用早期数码相机拍摄的低分辨率历史照片进行数字化修复，以便在高清电子展屏上展示。\n\n### 没有 SwinIR 时\n- 传统插值算法放大图片后，人物面部和建筑纹理模糊不清，出现明显的锯齿和马赛克。\n- 使用早期的深度学习超分模型处理真实世界的复杂噪点时，容易过度平滑，导致皮肤质感像“塑料”一样不自然。\n- 旧照片因多次压缩产生的 JPEG 块状伪影严重干扰画面，常规去噪工具无法在去除噪点的同时保留边缘细节。\n- 修复过程需要人工反复调整参数并结合多个工具（先去噪再超分），工作流繁琐且效率极低。\n\n### 使用 SwinIR 后\n- 基于 Shifted Window Transformer 架构，SwinIR 能精准重建高频细节，让模糊的历史街景呈现出清晰的砖瓦纹理。\n- 针对真实世界退化场景优化的模型，在提升分辨率的同时完美保留了胶片和早期传感器的自然颗粒感，避免虚假的平滑效果。\n- 内置的 JPEG 压缩伪影去除功能，一键消除了老旧文件因反复保存产生的色块和网格纹，画面干净通透。\n- 单个模型即可同时完成去噪、去压缩伪影和超分辨率任务，将单张照片的修复时间从半小时缩短至几秒钟。\n\nSwinIR 通过单一的端到端模型，以接近人眼视觉的逼真度解决了低质历史影像的高清复原难题，极大提升了数字档案的可用性与观赏价值。","https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FJingyunLiang_SwinIR_3c3cb3c8.png","JingyunLiang","Jingyun Liang","https:\u002F\u002Foss.gittoolsai.com\u002Favatars\u002FJingyunLiang_e3344d41.png","Image\u002FVideo Restoration. PhD Student at Computer Vision Lab, ETH Zurich.","Computer Vision Lab, ETH Zürich","Zürich, Switzerland",null,"https:\u002F\u002Fjingyunliang.github.io\u002F","https:\u002F\u002Fgithub.com\u002FJingyunLiang",[83,87],{"name":84,"color":85,"percentage":86},"Python","#3572A5",97.9,{"name":88,"color":89,"percentage":90},"Shell","#89e051",2.1,5416,639,"2026-04-06T07:52:28","Apache-2.0","未说明","需要 NVIDIA GPU（基于 PyTorch 实现），具体型号和显存大小未说明，CUDA 版本未说明",{"notes":98,"python":95,"dependencies":99},"该工具是基于 PyTorch 的官方实现，主要用于图像超分辨率、去噪和 JPEG 压缩伪影减少。README 中提供了 Colab 在线演示和移动端 PlayTorch 演示。训练和测试需要下载特定的数据集（如 DIV2K, Flickr2K 等）和预训练模型。由于是基于 Swin Transformer 的模型，推理和训练通常需要较大的显存，具体取决于模型大小（如 SwinIR-Large）和输入图像分辨率。建议使用支持 CUDA 的环境以获得最佳性能。",[100,101,102,103,104,105],"torch","torchvision","numpy","opencv-python","timm","basicSR",[35,15],[108,109,110,111,112,113,114,115,116,117,118,119,120,121,122,123],"image-super-resolution","image-denoising","compression-artifact-reduction","image-deblocking","transformer","real-world-image-super-resolution","lightweight-image-super-resolution","image-restoration","low-level-vision","vision-transformer","image-sr","restoration","super-resolution","denoising","deblocking","decompression","2026-03-27T02:49:30.150509","2026-04-07T04:11:24.497455",[127,132,137,142,147,152,156],{"id":128,"question_zh":129,"answer_zh":130,"source_url":131},20938,"SwinIR 为什么可以直接测试任意尺寸的图像，而不需要像其他 Transformer 模型那样分块处理？","SwinIR 基于窗口注意力机制（window attention），在测试时并非完全不需要处理尺寸问题。实际上，输入图像需要填充（pad）到窗口大小（默认为 8x8）的倍数。测试完成后，再将结果裁剪回与原始高分辨率图像（GT HR）相同的尺寸。这种操作对最终性能影响很小。因此，虽然可以接受任意分辨率输入，但内部仍涉及填充和裁剪步骤。","https:\u002F\u002Fgithub.com\u002FJingyunLiang\u002FSwinIR\u002Fissues\u002F9",{"id":133,"question_zh":134,"answer_zh":135,"source_url":136},20939,"训练和测试时对输入图像尺寸（img_size）有什么具体要求？必须是特定数值吗？","在训练阶段，为了进行批量训练（batch training），通常需要将图像随机裁剪为固定大小的补丁（例如 64x64 或 128x128）。这些补丁会被划分为不重叠的窗口（如 
8x8），并在窗口内计算注意力。在验证或测试阶段，不同图像可以有不同的尺寸。如果输入尺寸不是窗口大小的倍数，模型会自动计算掩码（mask）来处理边界情况，而无需强制输入为固定分辨率。","https:\u002F\u002Fgithub.com\u002FJingyunLiang\u002FSwinIR\u002Fissues\u002F5",{"id":138,"question_zh":139,"answer_zh":140,"source_url":141},20940,"当输入图像尺寸不等于预设的 input_resolution（如 128x128）时，模型中使用的 mask 是什么？","SwinIR 无论输入分辨率是否为训练时的设定值（如 48x48 或 128x128），都需要使用 mask。区别在于：对于训练时固定的分辨率，mask 是预先计算好并存储在 `.pth` 模型文件中的；而对于非标准尺寸的测试图像，模型会在运行时重新计算 mask。这个 mask 主要用于处理窗口移位（shifted window）操作中的边界遮挡，确保注意力计算的正确性。","https:\u002F\u002Fgithub.com\u002FJingyunLiang\u002FSwinIR\u002Fissues\u002F35",{"id":143,"question_zh":144,"answer_zh":145,"source_url":146},20941,"如何稳定训练 SwinIR？遇到损失函数（loss）突然翻倍或显存不足怎么办？","由于 Transformer 架构计算成本较高，建议适当减小批量大小（batch size）。有用户反馈将 batch size 设置为 32 对于图像到图像的任务已经足够大。如果遇到训练不稳定或显存问题，可以尝试降低 batch size（如设为 16 或更小），并检查学习率设置。此外，官方已发布基于 KAIR 的训练代码指南和 Colab 在线演示，可参考这些资源调整训练配置。","https:\u002F\u002Fgithub.com\u002FJingyunLiang\u002FSwinIR\u002Fissues\u002F7",{"id":148,"question_zh":149,"answer_zh":150,"source_url":151},20942,"在微调 x4 超分辨率任务时，使用什么样的学习率策略效果最好？","根据维护者的实验对比，从 x2 经典超分模型微调至 x4 时，尝试了三种学习率策略。结果显示，将初始学习率固定为 1e-5（不进行衰减，总迭代次数 250,000）的策略比论文中使用的动态衰减策略（1e-4 起步，多次衰减）略好，PSNR 提升了约 0.03 到 0.06 dB。因此，建议在微调 x4 任务时尝试固定学习率为 1e-5。","https:\u002F\u002Fgithub.com\u002FJingyunLiang\u002FSwinIR\u002Fissues\u002F3",{"id":153,"question_zh":154,"answer_zh":155,"source_url":136},20943,"去噪任务中 network_swinir.py 里的 window_size (patch_size) 设置为 1 是什么意思？","当去噪任务的窗口大小（window_size）设置为 1 时，意味着实际上不存在传统的“窗口”概念。此时模型退化为基于像素级的注意力机制（pixel-wise attention），即每个像素独立处理或与其直接相邻的像素交互，而不是在 8x8 等更大的窗口内计算注意力。这通常用于特定的去噪场景以适应局部特征。",{"id":157,"question_zh":158,"answer_zh":159,"source_url":160},20944,"我想用 SwinIR 做单图去雨（deraining）任务，应该修改代码的哪些部分？","要将 SwinIR 应用于去雨任务，主要需要调整数据集加载部分。你需要准备成对的训练数据：高清无雨图像（trainH）和对应的加雨纹图像（trainL）。在配置文件中，应将 dataset_type 设置为 'plain' 以支持成对图像训练（如果是去噪则用 'dncnn'）。如果报错，请检查数据路径配置和图像配对逻辑是否正确。此外，由于去雨属于图像恢复任务，网络结构本身通常无需大幅修改，重点在于数据预处理和损失函数的适配。","https:\u002F\u002Fgithub.com\u002FJingyunLiang\u002FSwinIR\u002Fissues\u002F48",[162],{"id":163,"version":164,"summary_zh":165,"released_at":166},126956,"v0.0","预训练模型、SwinIR 的补充材料及可视化结果","2021-08-26T05:46:59"]