[{"data":1,"prerenderedAt":-1},["ShallowReactive",2],{"similar-xinntao--Real-ESRGAN":3,"tool-xinntao--Real-ESRGAN":64},[4,17,27,35,43,56],{"id":5,"name":6,"github_repo":7,"description_zh":8,"stars":9,"difficulty_score":10,"last_commit_at":11,"category_tags":12,"status":16},3808,"stable-diffusion-webui","AUTOMATIC1111\u002Fstable-diffusion-webui","stable-diffusion-webui 是一个基于 Gradio 构建的网页版操作界面，旨在让用户能够轻松地在本地运行和使用强大的 Stable Diffusion 图像生成模型。它解决了原始模型依赖命令行、操作门槛高且功能分散的痛点，将复杂的 AI 绘图流程整合进一个直观易用的图形化平台。\n\n无论是希望快速上手的普通创作者、需要精细控制画面细节的设计师，还是想要深入探索模型潜力的开发者与研究人员，都能从中获益。其核心亮点在于极高的功能丰富度：不仅支持文生图、图生图、局部重绘（Inpainting）和外绘（Outpainting）等基础模式，还独创了注意力机制调整、提示词矩阵、负向提示词以及“高清修复”等高级功能。此外，它内置了 GFPGAN 和 CodeFormer 等人脸修复工具，支持多种神经网络放大算法，并允许用户通过插件系统无限扩展能力。即使是显存有限的设备，stable-diffusion-webui 也提供了相应的优化选项，让高质量的 AI 艺术创作变得触手可及。",162132,3,"2026-04-05T11:01:52",[13,14,15],"开发框架","图像","Agent","ready",{"id":18,"name":19,"github_repo":20,"description_zh":21,"stars":22,"difficulty_score":23,"last_commit_at":24,"category_tags":25,"status":16},1381,"everything-claude-code","affaan-m\u002Feverything-claude-code","everything-claude-code 是一套专为 AI 编程助手（如 Claude Code、Codex、Cursor 等）打造的高性能优化系统。它不仅仅是一组配置文件，而是一个经过长期实战打磨的完整框架，旨在解决 AI 代理在实际开发中面临的效率低下、记忆丢失、安全隐患及缺乏持续学习能力等核心痛点。\n\n通过引入技能模块化、直觉增强、记忆持久化机制以及内置的安全扫描功能，everything-claude-code 能显著提升 AI 在复杂任务中的表现，帮助开发者构建更稳定、更智能的生产级 AI 代理。其独特的“研究优先”开发理念和针对 Token 消耗的优化策略，使得模型响应更快、成本更低，同时有效防御潜在的攻击向量。\n\n这套工具特别适合软件开发者、AI 研究人员以及希望深度定制 AI 工作流的技术团队使用。无论您是在构建大型代码库，还是需要 AI 协助进行安全审计与自动化测试，everything-claude-code 都能提供强大的底层支持。作为一个曾荣获 Anthropic 黑客大奖的开源项目，它融合了多语言支持与丰富的实战钩子（hooks），让 AI 真正成长为懂上",140436,2,"2026-04-05T23:32:43",[13,15,26],"语言模型",{"id":28,"name":29,"github_repo":30,"description_zh":31,"stars":32,"difficulty_score":23,"last_commit_at":33,"category_tags":34,"status":16},2271,"ComfyUI","Comfy-Org\u002FComfyUI","ComfyUI 是一款功能强大且高度模块化的视觉 AI 引擎，专为设计和执行复杂的 Stable Diffusion 图像生成流程而打造。它摒弃了传统的代码编写模式，采用直观的节点式流程图界面，让用户通过连接不同的功能模块即可构建个性化的生成管线。\n\n这一设计巧妙解决了高级 AI 
绘图工作流配置复杂、灵活性不足的痛点。用户无需具备编程背景，也能自由组合模型、调整参数并实时预览效果，轻松实现从基础文生图到多步骤高清修复等各类复杂任务。ComfyUI 拥有极佳的兼容性，不仅支持 Windows、macOS 和 Linux 全平台，还广泛适配 NVIDIA、AMD、Intel 及苹果 Silicon 等多种硬件架构，并率先支持 SDXL、Flux、SD3 等前沿模型。\n\n无论是希望深入探索算法潜力的研究人员和开发者，还是追求极致创作自由度的设计师与资深 AI 绘画爱好者，ComfyUI 都能提供强大的支持。其独特的模块化架构允许社区不断扩展新功能，使其成为当前最灵活、生态最丰富的开源扩散模型工具之一，帮助用户将创意高效转化为现实。",107662,"2026-04-03T11:11:01",[13,14,15],{"id":36,"name":37,"github_repo":38,"description_zh":39,"stars":40,"difficulty_score":23,"last_commit_at":41,"category_tags":42,"status":16},3704,"NextChat","ChatGPTNextWeb\u002FNextChat","NextChat 是一款轻量且极速的 AI 助手，旨在为用户提供流畅、跨平台的大模型交互体验。它完美解决了用户在多设备间切换时难以保持对话连续性，以及面对众多 AI 模型不知如何统一管理的痛点。无论是日常办公、学习辅助还是创意激发，NextChat 都能让用户随时随地通过网页、iOS、Android、Windows、MacOS 或 Linux 端无缝接入智能服务。\n\n这款工具非常适合普通用户、学生、职场人士以及需要私有化部署的企业团队使用。对于开发者而言，它也提供了便捷的自托管方案，支持一键部署到 Vercel 或 Zeabur 等平台。\n\nNextChat 的核心亮点在于其广泛的模型兼容性，原生支持 Claude、DeepSeek、GPT-4 及 Gemini Pro 等主流大模型，让用户在一个界面即可自由切换不同 AI 能力。此外，它还率先支持 MCP（Model Context Protocol）协议，增强了上下文处理能力。针对企业用户，NextChat 提供专业版解决方案，具备品牌定制、细粒度权限控制、内部知识库整合及安全审计等功能，满足公司对数据隐私和个性化管理的高标准要求。",87618,"2026-04-05T07:20:52",[13,26],{"id":44,"name":45,"github_repo":46,"description_zh":47,"stars":48,"difficulty_score":23,"last_commit_at":49,"category_tags":50,"status":16},2268,"ML-For-Beginners","microsoft\u002FML-For-Beginners","ML-For-Beginners 是由微软推出的一套系统化机器学习入门课程，旨在帮助零基础用户轻松掌握经典机器学习知识。这套课程将学习路径规划为 12 周，包含 26 节精炼课程和 52 道配套测验，内容涵盖从基础概念到实际应用的完整流程，有效解决了初学者面对庞大知识体系时无从下手、缺乏结构化指导的痛点。\n\n无论是希望转型的开发者、需要补充算法背景的研究人员，还是对人工智能充满好奇的普通爱好者，都能从中受益。课程不仅提供了清晰的理论讲解，还强调动手实践，让用户在循序渐进中建立扎实的技能基础。其独特的亮点在于强大的多语言支持，通过自动化机制提供了包括简体中文在内的 50 多种语言版本，极大地降低了全球不同背景用户的学习门槛。此外，项目采用开源协作模式，社区活跃且内容持续更新，确保学习者能获取前沿且准确的技术资讯。如果你正寻找一条清晰、友好且专业的机器学习入门之路，ML-For-Beginners 将是理想的起点。",84991,"2026-04-05T10:45:23",[14,51,52,53,15,54,26,13,55],"数据工具","视频","插件","其他","音频",{"id":57,"name":58,"github_repo":59,"description_zh":60,"stars":61,"difficulty_score":10,"last_commit_at":62,"category_tags":63,"status":16},3128,"ragflow","infiniflow\u002Fragflow","RAGFlow 
是一款领先的开源检索增强生成（RAG）引擎，旨在为大语言模型构建更精准、可靠的上下文层。它巧妙地将前沿的 RAG 技术与智能体（Agent）能力相结合，不仅支持从各类文档中高效提取知识，还能让模型基于这些知识进行逻辑推理和任务执行。\n\n在大模型应用中，幻觉问题和知识滞后是常见痛点。RAGFlow 通过深度解析复杂文档结构（如表格、图表及混合排版），显著提升了信息检索的准确度，从而有效减少模型“胡编乱造”的现象，确保回答既有据可依又具备时效性。其内置的智能体机制更进一步，使系统不仅能回答问题，还能自主规划步骤解决复杂问题。\n\n这款工具特别适合开发者、企业技术团队以及 AI 研究人员使用。无论是希望快速搭建私有知识库问答系统，还是致力于探索大模型在垂直领域落地的创新者，都能从中受益。RAGFlow 提供了可视化的工作流编排界面和灵活的 API 接口，既降低了非算法背景用户的上手门槛，也满足了专业开发者对系统深度定制的需求。作为基于 Apache 2.0 协议开源的项目，它正成为连接通用大模型与行业专有知识之间的重要桥梁。",77062,"2026-04-04T04:44:48",[15,14,13,26,54],{"id":65,"github_repo":66,"name":67,"description_en":68,"description_zh":69,"ai_summary_zh":69,"readme_en":70,"readme_zh":71,"quickstart_zh":72,"use_case_zh":73,"hero_image_url":74,"owner_login":75,"owner_name":76,"owner_avatar_url":77,"owner_bio":78,"owner_company":79,"owner_location":80,"owner_email":81,"owner_twitter":82,"owner_website":83,"owner_url":84,"languages":85,"stars":90,"forks":91,"last_commit_at":92,"license":93,"difficulty_score":10,"env_os":94,"env_gpu":95,"env_ram":96,"env_deps":97,"category_tags":105,"github_topics":106,"view_count":115,"oss_zip_url":82,"oss_zip_packed_at":82,"status":16,"created_at":116,"updated_at":117,"faqs":118,"releases":148},804,"xinntao\u002FReal-ESRGAN","Real-ESRGAN","Real-ESRGAN aims at developing Practical Algorithms for General Image\u002FVideo Restoration.","Real-ESRGAN 是一款专注于通用图像与视频修复的开源人工智能项目。它的核心目标是提供在实际场景中真正可用的算法，解决传统超分辨率技术在处理真实世界退化图像时容易出现模糊、噪声或伪影的问题。无论是老照片修复还是低清视频增强，Real-ESRGAN 都能有效去除瑕疵并提升画面清晰度。\n\nReal-ESRGAN 适合多类人群使用。开发者可以通过 PyPI 快速安装并集成到工作流中；研究人员可利用其训练框架探索新的恢复模型；而设计师和普通用户则无需复杂配置，直接下载 Windows、Linux 或 macOS 的便携可执行文件，甚至在线体验 Colab 演示即可上手。\n\n技术亮点方面，它在经典 ESRGAN 基础上进行了改进，采用纯合成数据进行训练，使其对现实世界的复杂退化情况具有更强的鲁棒性。此外，它还特别推出了针对动漫内容的专用模型，在二次元图像和视频修复上表现尤为出色。作为一个活跃且功能丰富的开源项目，它是提升媒体质量的高效选择。","\u003Cp align=\"center\">\n  \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fxinntao_Real-ESRGAN_readme_01b27ae11e45.png\" height=120>\n\u003C\u002Fp>\n\n## \u003Cdiv align=\"center\">\u003Cb>\u003Ca 
href=\"README.md\">English\u003C\u002Fa> | \u003Ca href=\"README_CN.md\">简体中文\u003C\u002Fa>\u003C\u002Fb>\u003C\u002Fdiv>\n\n\u003Cdiv align=\"center\">\n\n👀[**Demos**](#-demos-videos) **|** 🚩[**Updates**](#-updates) **|** ⚡[**Usage**](#-quick-inference) **|** 🏰[**Model Zoo**](docs\u002Fmodel_zoo.md) **|** 🔧[Install](#-dependencies-and-installation)  **|** 💻[Train](docs\u002FTraining.md) **|** ❓[FAQ](docs\u002FFAQ.md) **|** 🎨[Contribution](docs\u002FCONTRIBUTING.md)\n\n[![download](https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Fdownloads\u002Fxinntao\u002FReal-ESRGAN\u002Ftotal.svg)](https:\u002F\u002Fgithub.com\u002Fxinntao\u002FReal-ESRGAN\u002Freleases)\n[![PyPI](https:\u002F\u002Fimg.shields.io\u002Fpypi\u002Fv\u002Frealesrgan)](https:\u002F\u002Fpypi.org\u002Fproject\u002Frealesrgan\u002F)\n[![Open issue](https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Fissues\u002Fxinntao\u002FReal-ESRGAN)](https:\u002F\u002Fgithub.com\u002Fxinntao\u002FReal-ESRGAN\u002Fissues)\n[![Closed issue](https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Fissues-closed\u002Fxinntao\u002FReal-ESRGAN)](https:\u002F\u002Fgithub.com\u002Fxinntao\u002FReal-ESRGAN\u002Fissues)\n[![LICENSE](https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Flicense\u002Fxinntao\u002FReal-ESRGAN.svg)](https:\u002F\u002Fgithub.com\u002Fxinntao\u002FReal-ESRGAN\u002Fblob\u002Fmaster\u002FLICENSE)\n[![python lint](https:\u002F\u002Fgithub.com\u002Fxinntao\u002FReal-ESRGAN\u002Factions\u002Fworkflows\u002Fpylint.yml\u002Fbadge.svg)](https:\u002F\u002Fgithub.com\u002Fxinntao\u002FReal-ESRGAN\u002Fblob\u002Fmaster\u002F.github\u002Fworkflows\u002Fpylint.yml)\n[![Publish-pip](https:\u002F\u002Fgithub.com\u002Fxinntao\u002FReal-ESRGAN\u002Factions\u002Fworkflows\u002Fpublish-pip.yml\u002Fbadge.svg)](https:\u002F\u002Fgithub.com\u002Fxinntao\u002FReal-ESRGAN\u002Fblob\u002Fmaster\u002F.github\u002Fworkflows\u002Fpublish-pip.yml)\n\n\u003C\u002Fdiv>\n\n🔥 **AnimeVideo-v3 model (动漫视频小模型)**. 
Please see [[*anime video models*](docs\u002Fanime_video_model.md)] and [[*comparisons*](docs\u002Fanime_comparisons.md)]\u003Cbr>\n🔥 **RealESRGAN_x4plus_anime_6B** for anime images **(动漫插图模型)**. Please see [[*anime_model*](docs\u002Fanime_model.md)]\n\n\u003C!-- 1. You can try in our website: [ARC Demo](https:\u002F\u002Farc.tencent.com\u002Fen\u002Fai-demos\u002FimgRestore) (now only support RealESRGAN_x4plus_anime_6B) -->\n1. :boom: **Update** online Replicate demo: [![Replicate](https:\u002F\u002Fimg.shields.io\u002Fstatic\u002Fv1?label=Demo&message=Replicate&color=blue)](https:\u002F\u002Freplicate.com\u002Fxinntao\u002Frealesrgan)\n1. Online Colab demo for Real-ESRGAN: [![Colab](https:\u002F\u002Fimg.shields.io\u002Fstatic\u002Fv1?label=Demo&message=Colab&color=orange)](https:\u002F\u002Fcolab.research.google.com\u002Fdrive\u002F1k2Zod6kSHEvraybHl50Lys0LerhyTMCo?usp=sharing) **|** Online Colab demo for Real-ESRGAN (**anime videos**): [![Colab](https:\u002F\u002Fimg.shields.io\u002Fstatic\u002Fv1?label=Demo&message=Colab&color=orange)](https:\u002F\u002Fcolab.research.google.com\u002Fdrive\u002F1yNl9ORUxxlL4N0keJa2SEPB61imPQd1B?usp=sharing)\n1. Portable [Windows](https:\u002F\u002Fgithub.com\u002Fxinntao\u002FReal-ESRGAN\u002Freleases\u002Fdownload\u002Fv0.2.5.0\u002Frealesrgan-ncnn-vulkan-20220424-windows.zip) \u002F [Linux](https:\u002F\u002Fgithub.com\u002Fxinntao\u002FReal-ESRGAN\u002Freleases\u002Fdownload\u002Fv0.2.5.0\u002Frealesrgan-ncnn-vulkan-20220424-ubuntu.zip) \u002F [MacOS](https:\u002F\u002Fgithub.com\u002Fxinntao\u002FReal-ESRGAN\u002Freleases\u002Fdownload\u002Fv0.2.5.0\u002Frealesrgan-ncnn-vulkan-20220424-macos.zip) **executable files for Intel\u002FAMD\u002FNvidia GPU**. You can find more information [here](#portable-executable-files-ncnn). The ncnn implementation is in [Real-ESRGAN-ncnn-vulkan](https:\u002F\u002Fgithub.com\u002Fxinntao\u002FReal-ESRGAN-ncnn-vulkan)\n\u003C!-- 1. 
You can watch enhanced animations in [Tencent Video](https:\u002F\u002Fv.qq.com\u002Fs\u002Ftopic\u002Fv_child\u002Frender\u002FfC4iyCAM.html). 欢迎观看[腾讯视频动漫修复](https:\u002F\u002Fv.qq.com\u002Fs\u002Ftopic\u002Fv_child\u002Frender\u002FfC4iyCAM.html) -->\n\nReal-ESRGAN aims at developing **Practical Algorithms for General Image\u002FVideo Restoration**.\u003Cbr>\nWe extend the powerful ESRGAN to a practical restoration application (namely, Real-ESRGAN), which is trained with pure synthetic data.\n\n🌌 Thanks for your valuable feedbacks\u002Fsuggestions. All the feedbacks are updated in [feedback.md](docs\u002Ffeedback.md).\n\n---\n\nIf Real-ESRGAN is helpful, please help to ⭐ this repo or recommend it to your friends 😊 \u003Cbr>\nOther recommended projects:\u003Cbr>\n▶️ [GFPGAN](https:\u002F\u002Fgithub.com\u002FTencentARC\u002FGFPGAN): A practical algorithm for real-world face restoration \u003Cbr>\n▶️ [BasicSR](https:\u002F\u002Fgithub.com\u002Fxinntao\u002FBasicSR): An open-source image and video restoration toolbox\u003Cbr>\n▶️ [facexlib](https:\u002F\u002Fgithub.com\u002Fxinntao\u002Ffacexlib): A collection that provides useful face-relation functions.\u003Cbr>\n▶️ [HandyView](https:\u002F\u002Fgithub.com\u002Fxinntao\u002FHandyView): A PyQt5-based image viewer that is handy for view and comparison \u003Cbr>\n▶️ [HandyFigure](https:\u002F\u002Fgithub.com\u002Fxinntao\u002FHandyFigure): Open source of paper figures \u003Cbr>\n\n---\n\n### 📖 Real-ESRGAN: Training Real-World Blind Super-Resolution with Pure Synthetic Data\n\n> [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2107.10833)] &emsp; [[YouTube Video](https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=fxHWoDSSvSc)] &emsp; [[B站讲解](https:\u002F\u002Fwww.bilibili.com\u002Fvideo\u002FBV1H34y1m7sS\u002F)] &emsp; [[Poster](https:\u002F\u002Fxinntao.github.io\u002Fprojects\u002FRealESRGAN_src\u002FRealESRGAN_poster.pdf)] &emsp; [[PPT 
slides](https:\u002F\u002Fdocs.google.com\u002Fpresentation\u002Fd\u002F1QtW6Iy8rm8rGLsJ0Ldti6kP-7Qyzy6XL\u002Fedit?usp=sharing&ouid=109799856763657548160&rtpof=true&sd=true)]\u003Cbr>\n> [Xintao Wang](https:\u002F\u002Fxinntao.github.io\u002F), Liangbin Xie, [Chao Dong](https:\u002F\u002Fscholar.google.com.hk\u002Fcitations?user=OSDCB0UAAAAJ), [Ying Shan](https:\u002F\u002Fscholar.google.com\u002Fcitations?user=4oXBp9UAAAAJ&hl=en) \u003Cbr>\n> [Tencent ARC Lab](https:\u002F\u002Farc.tencent.com\u002Fen\u002Fai-demos\u002FimgRestore); Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences\n\n\u003Cp align=\"center\">\n  \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fxinntao_Real-ESRGAN_readme_4842915ef584.jpg\">\n\u003C\u002Fp>\n\n---\n\n\u003C!---------------------------------- Updates --------------------------->\n## 🚩 Updates\n\n- ✅ Add the **realesr-general-x4v3** model - a tiny small model for general scenes. It also supports the **-dn** option to balance the noise (avoiding over-smooth results). **-dn** is short for denoising strength.\n- ✅ Update the **RealESRGAN AnimeVideo-v3** model. Please see [anime video models](docs\u002Fanime_video_model.md) and [comparisons](docs\u002Fanime_comparisons.md) for more details.\n- ✅ Add small models for anime videos. More details are in [anime video models](docs\u002Fanime_video_model.md).\n- ✅ Add the ncnn implementation [Real-ESRGAN-ncnn-vulkan](https:\u002F\u002Fgithub.com\u002Fxinntao\u002FReal-ESRGAN-ncnn-vulkan).\n- ✅ Add [*RealESRGAN_x4plus_anime_6B.pth*](https:\u002F\u002Fgithub.com\u002Fxinntao\u002FReal-ESRGAN\u002Freleases\u002Fdownload\u002Fv0.2.2.4\u002FRealESRGAN_x4plus_anime_6B.pth), which is optimized for **anime** images with much smaller model size. 
More details and comparisons with [waifu2x](https:\u002F\u002Fgithub.com\u002Fnihui\u002Fwaifu2x-ncnn-vulkan) are in [**anime_model.md**](docs\u002Fanime_model.md)\n- ✅ Support finetuning on your own data or paired data (*i.e.*, finetuning ESRGAN). See [here](docs\u002FTraining.md#Finetune-Real-ESRGAN-on-your-own-dataset)\n- ✅ Integrate [GFPGAN](https:\u002F\u002Fgithub.com\u002FTencentARC\u002FGFPGAN) to support **face enhancement**.\n- ✅ Integrated to [Huggingface Spaces](https:\u002F\u002Fhuggingface.co\u002Fspaces) with [Gradio](https:\u002F\u002Fgithub.com\u002Fgradio-app\u002Fgradio). See [Gradio Web Demo](https:\u002F\u002Fhuggingface.co\u002Fspaces\u002Fakhaliq\u002FReal-ESRGAN). Thanks [@AK391](https:\u002F\u002Fgithub.com\u002FAK391)\n- ✅ Support arbitrary scale with `--outscale` (It actually further resizes outputs with `LANCZOS4`). Add *RealESRGAN_x2plus.pth* model.\n- ✅ [The inference code](inference_realesrgan.py) supports: 1) **tile** options; 2) images with **alpha channel**; 3) **gray** images; 4) **16-bit** images.\n- ✅ The training codes have been released. A detailed guide can be found in [Training.md](docs\u002FTraining.md).\n\n---\n\n\u003C!---------------------------------- Demo videos --------------------------->\n## 👀 Demos Videos\n\n#### Bilibili\n\n- [大闹天宫片段](https:\u002F\u002Fwww.bilibili.com\u002Fvideo\u002FBV1ja41117zb)\n- [Anime dance cut 动漫魔性舞蹈](https:\u002F\u002Fwww.bilibili.com\u002Fvideo\u002FBV1wY4y1L7hT\u002F)\n- [海贼王片段](https:\u002F\u002Fwww.bilibili.com\u002Fvideo\u002FBV1i3411L7Gy\u002F)\n\n#### YouTube\n\n## 🔧 Dependencies and Installation\n\n- Python >= 3.7 (Recommend to use [Anaconda](https:\u002F\u002Fwww.anaconda.com\u002Fdownload\u002F#linux) or [Miniconda](https:\u002F\u002Fdocs.conda.io\u002Fen\u002Flatest\u002Fminiconda.html))\n- [PyTorch >= 1.7](https:\u002F\u002Fpytorch.org\u002F)\n\n### Installation\n\n1. 
Clone repo\n\n    ```bash\n    git clone https:\u002F\u002Fgithub.com\u002Fxinntao\u002FReal-ESRGAN.git\n    cd Real-ESRGAN\n    ```\n\n1. Install dependent packages\n\n    ```bash\n    # Install basicsr - https:\u002F\u002Fgithub.com\u002Fxinntao\u002FBasicSR\n    # We use BasicSR for both training and inference\n    pip install basicsr\n    # facexlib and gfpgan are for face enhancement\n    pip install facexlib\n    pip install gfpgan\n    pip install -r requirements.txt\n    python setup.py develop\n    ```\n\n---\n\n## ⚡ Quick Inference\n\nThere are usually three ways to run inference with Real-ESRGAN.\n\n1. [Online inference](#online-inference)\n1. [Portable executable files (NCNN)](#portable-executable-files-ncnn)\n1. [Python script](#python-script)\n\n### Online inference\n\n1. You can try it on our website: [ARC Demo](https:\u002F\u002Farc.tencent.com\u002Fen\u002Fai-demos\u002FimgRestore) (currently only supports RealESRGAN_x4plus_anime_6B)\n1. [Colab Demo](https:\u002F\u002Fcolab.research.google.com\u002Fdrive\u002F1k2Zod6kSHEvraybHl50Lys0LerhyTMCo?usp=sharing) for Real-ESRGAN **|** [Colab Demo](https:\u002F\u002Fcolab.research.google.com\u002Fdrive\u002F1yNl9ORUxxlL4N0keJa2SEPB61imPQd1B?usp=sharing) for Real-ESRGAN (**anime videos**).\n\n### Portable executable files (NCNN)\n\nYou can download [Windows](https:\u002F\u002Fgithub.com\u002Fxinntao\u002FReal-ESRGAN\u002Freleases\u002Fdownload\u002Fv0.2.5.0\u002Frealesrgan-ncnn-vulkan-20220424-windows.zip) \u002F [Linux](https:\u002F\u002Fgithub.com\u002Fxinntao\u002FReal-ESRGAN\u002Freleases\u002Fdownload\u002Fv0.2.5.0\u002Frealesrgan-ncnn-vulkan-20220424-ubuntu.zip) \u002F [MacOS](https:\u002F\u002Fgithub.com\u002Fxinntao\u002FReal-ESRGAN\u002Freleases\u002Fdownload\u002Fv0.2.5.0\u002Frealesrgan-ncnn-vulkan-20220424-macos.zip) **executable files for Intel\u002FAMD\u002FNvidia GPU**.\n\nThis executable file is **portable** and includes all the binaries and models required. 
No CUDA or PyTorch environment is needed.\u003Cbr>\n\nYou can simply run the following command (the Windows example; more information is in the README.md of each executable file):\n\n```bash\n.\u002Frealesrgan-ncnn-vulkan.exe -i input.jpg -o output.png -n model_name\n```\n\nWe have provided five models:\n\n1. realesrgan-x4plus  (default)\n2. realesrnet-x4plus\n3. realesrgan-x4plus-anime (optimized for anime images, small model size)\n4. realesr-animevideov3 (animation video)\n5. realesr-general-x4v3 (tiny model for general scenes; supports the `-dn` denoising option)\n\nYou can use the `-n` argument for other models, for example, `.\u002Frealesrgan-ncnn-vulkan.exe -i input.jpg -o output.png -n realesrnet-x4plus`\n\n#### Usage of portable executable files\n\n1. Please refer to [Real-ESRGAN-ncnn-vulkan](https:\u002F\u002Fgithub.com\u002Fxinntao\u002FReal-ESRGAN-ncnn-vulkan#computer-usages) for more details.\n1. Note that it does not support all the functions (such as `outscale`) of the Python script `inference_realesrgan.py`.\n\n```console\nUsage: realesrgan-ncnn-vulkan.exe -i infile -o outfile [options]...\n\n  -h                   show this help\n  -i input-path        input image path (jpg\u002Fpng\u002Fwebp) or directory\n  -o output-path       output image path (jpg\u002Fpng\u002Fwebp) or directory\n  -s scale             upscale ratio (can be 2, 3, 4. default=4)\n  -t tile-size         tile size (>=32\u002F0=auto, default=0) can be 0,0,0 for multi-gpu\n  -m model-path        folder path to the pre-trained models. 
default=models\n  -n model-name        model name (default=realesr-animevideov3, can be realesr-animevideov3 | realesrgan-x4plus | realesrgan-x4plus-anime | realesrnet-x4plus)\n  -g gpu-id            gpu device to use (default=auto) can be 0,1,2 for multi-gpu\n  -j load:proc:save    thread count for load\u002Fproc\u002Fsave (default=1:2:2) can be 1:2,2,2:2 for multi-gpu\n  -x                   enable tta mode\n  -f format            output image format (jpg\u002Fpng\u002Fwebp, default=ext\u002Fpng)\n  -v                   verbose output\n```\n\nNote that it may introduce block inconsistency (and also generate slightly different results from the PyTorch implementation), because this executable file first crops the input image into several tiles, then processes them separately, and finally stitches them back together.\n\n### Python script\n\n#### Usage of python script\n\n1. You can use the X4 model for **arbitrary output size** with the argument `outscale`. The program will further perform a cheap resize operation after the Real-ESRGAN output.\n\n```console\nUsage: python inference_realesrgan.py -n RealESRGAN_x4plus -i infile -o outfile [options]...\n\nA common command: python inference_realesrgan.py -n RealESRGAN_x4plus -i infile --outscale 3.5 --face_enhance\n\n  -h                   show this help\n  -i --input           Input image or folder. Default: inputs\n  -o --output          Output folder. Default: results\n  -n --model_name      Model name. Default: RealESRGAN_x4plus\n  -s, --outscale       The final upsampling scale of the image. Default: 4\n  --suffix             Suffix of the restored image. Default: out\n  -t, --tile           Tile size, 0 for no tile during testing. Default: 0\n  --face_enhance       Whether to use GFPGAN to enhance face. Default: False\n  --fp32               Use fp32 precision during inference. Default: fp16 (half precision).\n  --ext                Image extension. Options: auto | jpg | png, auto means using the same extension as inputs. 
Default: auto\n```\n\n#### Inference general images\n\nDownload pre-trained models: [RealESRGAN_x4plus.pth](https:\u002F\u002Fgithub.com\u002Fxinntao\u002FReal-ESRGAN\u002Freleases\u002Fdownload\u002Fv0.1.0\u002FRealESRGAN_x4plus.pth)\n\n```bash\nwget https:\u002F\u002Fgithub.com\u002Fxinntao\u002FReal-ESRGAN\u002Freleases\u002Fdownload\u002Fv0.1.0\u002FRealESRGAN_x4plus.pth -P weights\n```\n\nInference!\n\n```bash\npython inference_realesrgan.py -n RealESRGAN_x4plus -i inputs --face_enhance\n```\n\nResults are in the `results` folder\n\n#### Inference anime images\n\n\u003Cp align=\"center\">\n  \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fxinntao_Real-ESRGAN_readme_864ed732a54b.png\">\n\u003C\u002Fp>\n\nPre-trained models: [RealESRGAN_x4plus_anime_6B](https:\u002F\u002Fgithub.com\u002Fxinntao\u002FReal-ESRGAN\u002Freleases\u002Fdownload\u002Fv0.2.2.4\u002FRealESRGAN_x4plus_anime_6B.pth)\u003Cbr>\n More details and comparisons with [waifu2x](https:\u002F\u002Fgithub.com\u002Fnihui\u002Fwaifu2x-ncnn-vulkan) are in [**anime_model.md**](docs\u002Fanime_model.md)\n\n```bash\n# download model\nwget https:\u002F\u002Fgithub.com\u002Fxinntao\u002FReal-ESRGAN\u002Freleases\u002Fdownload\u002Fv0.2.2.4\u002FRealESRGAN_x4plus_anime_6B.pth -P weights\n# inference\npython inference_realesrgan.py -n RealESRGAN_x4plus_anime_6B -i inputs\n```\n\nResults are in the `results` folder\n\n---\n\n## BibTeX\n\n    @InProceedings{wang2021realesrgan,\n        author    = {Xintao Wang and Liangbin Xie and Chao Dong and Ying Shan},\n        title     = {Real-ESRGAN: Training Real-World Blind Super-Resolution with Pure Synthetic Data},\n        booktitle = {International Conference on Computer Vision Workshops (ICCVW)},\n        date      = {2021}\n    }\n\n## 📧 Contact\n\nIf you have any question, please email `xintao.wang@outlook.com` or `xintaowang@tencent.com`.\n\n\u003C!---------------------------------- Projects that use Real-ESRGAN 
--------------------------->\n## 🧩 Projects that use Real-ESRGAN\n\nIf you develop\u002Fuse Real-ESRGAN in your projects, welcome to let me know.\n\n- NCNN-Android: [RealSR-NCNN-Android](https:\u002F\u002Fgithub.com\u002Ftumuyan\u002FRealSR-NCNN-Android) by [tumuyan](https:\u002F\u002Fgithub.com\u002Ftumuyan)\n- VapourSynth: [vs-realesrgan](https:\u002F\u002Fgithub.com\u002FHolyWu\u002Fvs-realesrgan) by [HolyWu](https:\u002F\u002Fgithub.com\u002FHolyWu)\n- NCNN: [Real-ESRGAN-ncnn-vulkan](https:\u002F\u002Fgithub.com\u002Fxinntao\u002FReal-ESRGAN-ncnn-vulkan)\n\n&nbsp;&nbsp;&nbsp;&nbsp;**GUI**\n\n- [Waifu2x-Extension-GUI](https:\u002F\u002Fgithub.com\u002FAaronFeng753\u002FWaifu2x-Extension-GUI) by [AaronFeng753](https:\u002F\u002Fgithub.com\u002FAaronFeng753)\n- [Squirrel-RIFE](https:\u002F\u002Fgithub.com\u002FJustin62628\u002FSquirrel-RIFE) by [Justin62628](https:\u002F\u002Fgithub.com\u002FJustin62628)\n- [Real-GUI](https:\u002F\u002Fgithub.com\u002Fscifx\u002FReal-GUI) by [scifx](https:\u002F\u002Fgithub.com\u002Fscifx)\n- [Real-ESRGAN_GUI](https:\u002F\u002Fgithub.com\u002Fnet2cn\u002FReal-ESRGAN_GUI) by [net2cn](https:\u002F\u002Fgithub.com\u002Fnet2cn)\n- [Real-ESRGAN-EGUI](https:\u002F\u002Fgithub.com\u002FWGzeyu\u002FReal-ESRGAN-EGUI) by [WGzeyu](https:\u002F\u002Fgithub.com\u002FWGzeyu)\n- [anime_upscaler](https:\u002F\u002Fgithub.com\u002Fshangar21\u002Fanime_upscaler) by [shangar21](https:\u002F\u002Fgithub.com\u002Fshangar21)\n- [Upscayl](https:\u002F\u002Fgithub.com\u002Fupscayl\u002Fupscayl) by [Nayam Amarshe](https:\u002F\u002Fgithub.com\u002FNayamAmarshe) and [TGS963](https:\u002F\u002Fgithub.com\u002FTGS963)\n\n## 🤗 Acknowledgement\n\nThanks for all the contributors.\n\n- [AK391](https:\u002F\u002Fgithub.com\u002FAK391): Integrate RealESRGAN to [Huggingface Spaces](https:\u002F\u002Fhuggingface.co\u002Fspaces) with [Gradio](https:\u002F\u002Fgithub.com\u002Fgradio-app\u002Fgradio). 
See [Gradio Web Demo](https:\u002F\u002Fhuggingface.co\u002Fspaces\u002Fakhaliq\u002FReal-ESRGAN).\n- [Asiimoviet](https:\u002F\u002Fgithub.com\u002FAsiimoviet): Translate the README.md to Chinese (中文).\n- [2ji3150](https:\u002F\u002Fgithub.com\u002F2ji3150): Thanks for the [detailed and valuable feedbacks\u002Fsuggestions](https:\u002F\u002Fgithub.com\u002Fxinntao\u002FReal-ESRGAN\u002Fissues\u002F131).\n- [Jared-02](https:\u002F\u002Fgithub.com\u002FJared-02): Translate the Training.md to Chinese (中文).\n","\u003Cp align=\"center\">\n  \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fxinntao_Real-ESRGAN_readme_01b27ae11e45.png\" height=120>\n\u003C\u002Fp>\n\n## \u003Cdiv align=\"center\">\u003Cb>\u003Ca href=\"README.md\">English\u003C\u002Fa> | \u003Ca href=\"README_CN.md\">简体中文\u003C\u002Fa>\u003C\u002Fb>\u003C\u002Fdiv>\n\n\u003Cdiv align=\"center\">\n\n👀[**演示视频**](#-demos-videos) **|** 🚩[**更新**](#-updates) **|** ⚡[**使用**](#-quick-inference) **|** 🏰[**模型库**](docs\u002Fmodel_zoo.md) **|** 🔧[安装](#-dependencies-and-installation)  **|** 💻[训练](docs\u002FTraining.md) **|** ❓[常见问题](docs\u002FFAQ.md) **|** 🎨[贡献指南](docs\u002FCONTRIBUTING.md)\n\n[![download](https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Fdownloads\u002Fxinntao\u002FReal-ESRGAN\u002Ftotal.svg)](https:\u002F\u002Fgithub.com\u002Fxinntao\u002FReal-ESRGAN\u002Freleases)\n[![PyPI](https:\u002F\u002Fimg.shields.io\u002Fpypi\u002Fv\u002Frealesrgan)](https:\u002F\u002Fpypi.org\u002Fproject\u002Frealesrgan\u002F)\n[![Open issue](https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Fissues\u002Fxinntao\u002FReal-ESRGAN)](https:\u002F\u002Fgithub.com\u002Fxinntao\u002FReal-ESRGAN\u002Fissues)\n[![Closed 
issue](https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Fissues-closed\u002Fxinntao\u002FReal-ESRGAN)](https:\u002F\u002Fgithub.com\u002Fxinntao\u002FReal-ESRGAN\u002Fissues)\n[![LICENSE](https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Flicense\u002Fxinntao\u002FReal-ESRGAN.svg)](https:\u002F\u002Fgithub.com\u002Fxinntao\u002FReal-ESRGAN\u002Fblob\u002Fmaster\u002FLICENSE)\n[![python lint](https:\u002F\u002Fgithub.com\u002Fxinntao\u002FReal-ESRGAN\u002Factions\u002Fworkflows\u002Fpylint.yml\u002Fbadge.svg)](https:\u002F\u002Fgithub.com\u002Fxinntao\u002FReal-ESRGAN\u002Fblob\u002Fmaster\u002F.github\u002Fworkflows\u002Fpylint.yml)\n[![Publish-pip](https:\u002F\u002Fgithub.com\u002Fxinntao\u002FReal-ESRGAN\u002Factions\u002Fworkflows\u002Fpublish-pip.yml\u002Fbadge.svg)](https:\u002F\u002Fgithub.com\u002Fxinntao\u002FReal-ESRGAN\u002Fblob\u002Fmaster\u002F.github\u002Fworkflows\u002Fpublish-pip.yml)\n\n\u003C\u002Fdiv>\n\n🔥 **AnimeVideo-v3 模型（动漫视频小模型）**。请参阅 [[*动漫视频模型*](docs\u002Fanime_video_model.md)] 和 [[*对比*](docs\u002Fanime_comparisons.md)]\u003Cbr>\n🔥 **RealESRGAN_x4plus_anime_6B** 用于动漫图像 **(动漫插图模型)**。请参阅 [[*动漫模型*](docs\u002Fanime_model.md)]\n\n\u003C!-- 1. 您可以在我们的网站上尝试：[ARC 演示](https:\u002F\u002Farc.tencent.com\u002Fen\u002Fai-demos\u002FimgRestore)（目前仅支持 RealESRGAN_x4plus_anime_6B） -->\n1. :boom: **更新** 在线 Replicate 演示：[![Replicate](https:\u002F\u002Fimg.shields.io\u002Fstatic\u002Fv1?label=Demo&message=Replicate&color=blue)](https:\u002F\u002Freplicate.com\u002Fxinntao\u002Frealesrgan)\n1. 
Real-ESRGAN 在线 Colab 演示：[![Colab](https:\u002F\u002Fimg.shields.io\u002Fstatic\u002Fv1?label=Demo&message=Colab&color=orange)](https:\u002F\u002Fcolab.research.google.com\u002Fdrive\u002F1k2Zod6kSHEvraybHl50Lys0LerhyTMCo?usp=sharing) **|** Real-ESRGAN（**动漫视频**）在线 Colab 演示：[![Colab](https:\u002F\u002Fimg.shields.io\u002Fstatic\u002Fv1?label=Demo&message=Colab&color=orange)](https:\u002F\u002Fcolab.research.google.com\u002Fdrive\u002F1yNl9ORUxxlL4N0keJa2SEPB61imPQd1B?usp=sharing)\n1. 便携式 [Windows](https:\u002F\u002Fgithub.com\u002Fxinntao\u002FReal-ESRGAN\u002Freleases\u002Fdownload\u002Fv0.2.5.0\u002Frealesrgan-ncnn-vulkan-20220424-windows.zip) \u002F [Linux](https:\u002F\u002Fgithub.com\u002Fxinntao\u002FReal-ESRGAN\u002Freleases\u002Fdownload\u002Fv0.2.5.0\u002Frealesrgan-ncnn-vulkan-20220424-ubuntu.zip) \u002F [MacOS](https:\u002F\u002Fgithub.com\u002Fxinntao\u002FReal-ESRGAN\u002Freleases\u002Fdownload\u002Fv0.2.5.0\u002Frealesrgan-ncnn-vulkan-20220424-macos.zip) **适用于 Intel\u002FAMD\u002FNvidia GPU 的可执行文件**。您可以在 [此处](#portable-executable-files-ncnn) 找到更多信息。ncnn 实现位于 [Real-ESRGAN-ncnn-vulkan](https:\u002F\u002Fgithub.com\u002Fxinntao\u002FReal-ESRGAN-ncnn-vulkan)\n\u003C!-- 1. 
您可以在 [腾讯视频](https:\u002F\u002Fv.qq.com\u002Fs\u002Ftopic\u002Fv_child\u002Frender\u002FfC4iyCAM.html) 观看增强后的动画。欢迎观看 [腾讯视频动漫修复](https:\u002F\u002Fv.qq.com\u002Fs\u002Ftopic\u002Fv_child\u002Frender\u002FfC4iyCAM.html) -->\n\nReal-ESRGAN 旨在开发 **通用图像\u002F视频恢复的实用算法**。\u003Cbr>\n我们将强大的 ESRGAN 扩展为实用的恢复应用（即 Real-ESRGAN），该模型使用纯合成数据进行训练。\n\n🌌 感谢您的宝贵反馈\u002F建议。所有反馈均更新至 [feedback.md](docs\u002Ffeedback.md)。\n\n---\n\n如果 Real-ESRGAN 对您有帮助，请帮忙 ⭐ 此仓库或推荐给您身边的朋友 😊 \u003Cbr>\n其他推荐项目：\u003Cbr>\n▶️ [GFPGAN](https:\u002F\u002Fgithub.com\u002FTencentARC\u002FGFPGAN): 一种用于真实世界人脸恢复的实用算法 \u003Cbr>\n▶️ [BasicSR](https:\u002F\u002Fgithub.com\u002Fxinntao\u002FBasicSR): 一个开源的图像和视频恢复工具箱\u003Cbr>\n▶️ [facexlib](https:\u002F\u002Fgithub.com\u002Fxinntao\u002Ffacexlib): 提供有用的人脸相关函数集合。\u003Cbr>\n▶️ [HandyView](https:\u002F\u002Fgithub.com\u002Fxinntao\u002FHandyView): 一款基于 PyQt5 的图像查看器，方便查看和比较 \u003Cbr>\n▶️ [HandyFigure](https:\u002F\u002Fgithub.com\u002Fxinntao\u002FHandyFigure): 论文图表开源项目 \u003Cbr>\n\n---\n\n### 📖 Real-ESRGAN：使用纯合成数据训练真实世界盲超分辨率\n\n> [[论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F2107.10833)] &emsp; [[YouTube 视频](https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=fxHWoDSSvSc)] &emsp; [[B 站讲解](https:\u002F\u002Fwww.bilibili.com\u002Fvideo\u002FBV1H34y1m7sS\u002F)] &emsp; [[海报](https:\u002F\u002Fxinntao.github.io\u002Fprojects\u002FRealESRGAN_src\u002FRealESRGAN_poster.pdf)] &emsp; [[PPT 幻灯片](https:\u002F\u002Fdocs.google.com\u002Fpresentation\u002Fd\u002F1QtW6Iy8rm8rGLsJ0Ldti6kP-7Qyzy6XL\u002Fedit?usp=sharing&ouid=109799856763657548160&rtpof=true&sd=true)]\u003Cbr>\n> [Xintao Wang](https:\u002F\u002Fxinntao.github.io\u002F), Liangbin Xie, [Chao Dong](https:\u002F\u002Fscholar.google.com.hk\u002Fcitations?user=OSDCB0UAAAAJ), [Ying Shan](https:\u002F\u002Fscholar.google.com\u002Fcitations?user=4oXBp9UAAAAJ&hl=en) \u003Cbr>\n> [腾讯 ARC 实验室](https:\u002F\u002Farc.tencent.com\u002Fen\u002Fai-demos\u002FimgRestore); 中国科学院深圳先进技术研究院\n\n\u003Cp align=\"center\">\n  \u003Cimg 
src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fxinntao_Real-ESRGAN_readme_4842915ef584.jpg\">\n\u003C\u002Fp>\n\n---\n\n\u003C!---------------------------------- Updates --------------------------->\n\n## 🚩 更新\n\n- ✅ 添加 **realesr-general-x4v3** 模型——一个适用于通用场景的小型模型。它还支持 **-dn** 选项以平衡噪声（避免过度平滑的结果）。**-dn** 是 denoising strength（去噪强度）的缩写。\n- ✅ 更新 **RealESRGAN AnimeVideo-v3** 模型。更多详情请参见 [anime video models](docs\u002Fanime_video_model.md) 和 [comparisons](docs\u002Fanime_comparisons.md)。\n- ✅ 为动漫视频添加小型模型。更多详情在 [anime video models](docs\u002Fanime_video_model.md)。\n- ✅ 添加 ncnn 实现 [Real-ESRGAN-ncnn-vulkan](https:\u002F\u002Fgithub.com\u002Fxinntao\u002FReal-ESRGAN-ncnn-vulkan)。\n- ✅ 添加 [*RealESRGAN_x4plus_anime_6B.pth*](https:\u002F\u002Fgithub.com\u002Fxinntao\u002FReal-ESRGAN\u002Freleases\u002Fdownload\u002Fv0.2.2.4\u002FRealESRGAN_x4plus_anime_6B.pth)，该模型针对 **anime**（动漫）图像进行了优化，且模型体积更小。更多详情及与 [waifu2x](https:\u002F\u002Fgithub.com\u002Fnihui\u002Fwaifu2x-ncnn-vulkan) 的比较见 [**anime_model.md**](docs\u002Fanime_model.md)。\n- ✅ 支持在自有数据或配对数据上进行 finetuning（微调）（*i.e.*, finetuning ESRGAN）。参见 [此处](docs\u002FTraining.md#Finetune-Real-ESRGAN-on-your-own-dataset)。\n- ✅ 集成 [GFPGAN](https:\u002F\u002Fgithub.com\u002FTencentARC\u002FGFPGAN) 以支持 **face enhancement**（人脸增强）。\n- ✅ 已集成至 [Huggingface Spaces](https:\u002F\u002Fhuggingface.co\u002Fspaces)，使用 [Gradio](https:\u002F\u002Fgithub.com\u002Fgradio-app\u002Fgradio)。参见 [Gradio Web Demo](https:\u002F\u002Fhuggingface.co\u002Fspaces\u002Fakhaliq\u002FReal-ESRGAN)。感谢 [@AK391](https:\u002F\u002Fgithub.com\u002FAK391)。\n- ✅ 支持通过 `--outscale` 进行任意尺度缩放（它实际上使用 `LANCZOS4` 进一步调整输出大小）。添加 *RealESRGAN_x2plus.pth* 模型。\n- ✅ [推理代码](inference_realesrgan.py) 支持：1) **tile**（瓦片）选项；2) 带有 **alpha channel**（透明通道）的图像；3) **gray**（灰度）图像；4) **16-bit**（16 位）图像。\n- ✅ 训练代码已发布。详细指南可在 [Training.md](docs\u002FTraining.md) 中找到。\n\n---\n\n\u003C!---------------------------------- Demo videos --------------------------->\n## 👀 演示视频\n\n#### Bilibili\n\n- 
[大闹天宫片段](https:\u002F\u002Fwww.bilibili.com\u002Fvideo\u002FBV1ja41117zb)\n- [Anime dance cut 动漫魔性舞蹈](https:\u002F\u002Fwww.bilibili.com\u002Fvideo\u002FBV1wY4y1L7hT\u002F)\n- [海贼王片段](https:\u002F\u002Fwww.bilibili.com\u002Fvideo\u002FBV1i3411L7Gy\u002F)\n\n## 🔧 依赖项与安装\n\n- Python >= 3.7（推荐使用 [Anaconda](https:\u002F\u002Fwww.anaconda.com\u002Fdownload\u002F#linux) 或 [Miniconda](https:\u002F\u002Fdocs.conda.io\u002Fen\u002Flatest\u002Fminiconda.html)）\n- [PyTorch >= 1.7](https:\u002F\u002Fpytorch.org\u002F)\n\n### 安装\n\n1. 克隆仓库\n\n    ```bash\n    git clone https:\u002F\u002Fgithub.com\u002Fxinntao\u002FReal-ESRGAN.git\n    cd Real-ESRGAN\n    ```\n\n1. 安装依赖包\n\n    ```bash\n    # Install basicsr - https:\u002F\u002Fgithub.com\u002Fxinntao\u002FBasicSR\n    # We use BasicSR for both training and inference\n    pip install basicsr\n    # facexlib and gfpgan are for face enhancement\n    pip install facexlib\n    pip install gfpgan\n    pip install -r requirements.txt\n    python setup.py develop\n    ```\n\n---\n\n## ⚡ 快速推理\n\n通常有三种方式对 Real-ESRGAN 进行推理。\n\n1. [在线推理](#online-inference)\n1. [便携式可执行文件 (NCNN)](#portable-executable-files-ncnn)\n1. [Python 脚本](#python-script)\n\n### 在线推理\n\n1. 您可以在我们的网站上尝试：[ARC Demo](https:\u002F\u002Farc.tencent.com\u002Fen\u002Fai-demos\u002FimgRestore)（目前仅支持 RealESRGAN_x4plus_anime_6B）\n1. 
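安装完成后，可以用下面的小脚本粗略自检上文安装命令涉及的依赖是否已就位。这是笔者补充的示意（假设各包的导入名与 pip 包名一致，并非仓库自带脚本），仅使用标准库：

```python
import importlib.util

def check_packages(names):
    """返回 {包名: 是否能定位到模块}；只查找模块规格，不真正导入。"""
    return {name: importlib.util.find_spec(name) is not None for name in names}

# 上文安装命令涉及的包
status = check_packages(["torch", "basicsr", "facexlib", "gfpgan"])
for name, ok in status.items():
    print(f"{name}: {'OK' if ok else '缺失'}")
```

若某项显示“缺失”，重新执行对应的 pip install 命令即可。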
[Colab Demo](https:\u002F\u002Fcolab.research.google.com\u002Fdrive\u002F1k2Zod6kSHEvraybHl50Lys0LerhyTMCo?usp=sharing) 用于 Real-ESRGAN **|** [Colab Demo](https:\u002F\u002Fcolab.research.google.com\u002Fdrive\u002F1yNl9ORUxxlL4N0keJa2SEPB61imPQd1B?usp=sharing) 用于 Real-ESRGAN（**anime videos**）。\n\n### 便携式可执行文件 (NCNN)\n\n您可以下载 [Windows](https:\u002F\u002Fgithub.com\u002Fxinntao\u002FReal-ESRGAN\u002Freleases\u002Fdownload\u002Fv0.2.5.0\u002Frealesrgan-ncnn-vulkan-20220424-windows.zip) \u002F [Linux](https:\u002F\u002Fgithub.com\u002Fxinntao\u002FReal-ESRGAN\u002Freleases\u002Fdownload\u002Fv0.2.5.0\u002Frealesrgan-ncnn-vulkan-20220424-ubuntu.zip) \u002F [MacOS](https:\u002F\u002Fgithub.com\u002Fxinntao\u002FReal-ESRGAN\u002Freleases\u002Fdownload\u002Fv0.2.5.0\u002Frealesrgan-ncnn-vulkan-20220424-macos.zip) **适用于 Intel\u002FAMD\u002FNvidia GPU 的可执行文件**。\n\n此可执行文件是**便携**的，包含所有所需的二进制文件和模型。无需 CUDA 或 PyTorch 环境。\u003Cbr>\n\n您可以直接运行以下命令（Windows 示例，更多信息在各可执行文件的 README.md 中）：\n\n```bash\n.\u002Frealesrgan-ncnn-vulkan.exe -i input.jpg -o output.png -n model_name\n```\n\n我们提供了五个模型：\n\n1. realesrgan-x4plus  （默认）\n2. realesrnet-x4plus\n3. realesrgan-x4plus-anime（针对动漫图像优化，模型体积小）\n4. realesr-animevideov3（动画视频）\n5. realesr-general-x4v3（通用场景小模型，支持 `-dn` 去噪强度）\n\n您可以使用 `-n` 参数选择其他模型，例如，`.\u002Frealesrgan-ncnn-vulkan.exe -i input.jpg -o output.png -n realesrnet-x4plus`\n\n#### 便携式可执行文件的使用\n\n1. 请参考 [Real-ESRGAN-ncnn-vulkan](https:\u002F\u002Fgithub.com\u002Fxinntao\u002FReal-ESRGAN-ncnn-vulkan#computer-usages) 了解更多详情。\n1. 请注意，它不支持 python 脚本 `inference_realesrgan.py` 的所有功能（如 `outscale`）。\n\n```console\nUsage: realesrgan-ncnn-vulkan.exe -i infile -o outfile [options]...\n\n  -h                   show this help\n  -i input-path        input image path (jpg\u002Fpng\u002Fwebp) or directory\n  -o output-path       output image path (jpg\u002Fpng\u002Fwebp) or directory\n  -s scale             upscale ratio (can be 2, 3, 4. 
default=4)\n  -t tile-size         tile size (>=32\u002F0=auto, default=0) can be 0,0,0 for multi-gpu\n  -m model-path        folder path to the pre-trained models. default=models\n  -n model-name        model name (default=realesr-animevideov3, can be realesr-animevideov3 | realesrgan-x4plus | realesrgan-x4plus-anime | realesrnet-x4plus)\n  -g gpu-id            gpu device to use (default=auto) can be 0,1,2 for multi-gpu\n  -j load:proc:save    thread count for load\u002Fproc\u002Fsave (default=1:2:2) can be 1:2,2,2:2 for multi-gpu\n  -x                   enable tta mode\n  -f format            output image format (jpg\u002Fpng\u002Fwebp, default=ext\u002Fpng)\n  -v                   verbose output\n```\n\n请注意，它可能会引入块不一致性（并且生成的结果与 PyTorch 实现略有不同），因为此可执行文件首先将输入图像裁剪为多个瓦片，然后分别处理它们，最后拼接在一起。\n\n### Python 脚本\n\n#### Python 脚本用法\n\n1. 您可以使用 X4 模型配合 `outscale` 参数实现**任意输出尺寸**。程序将在 Real-ESRGAN 输出后进一步执行轻量级缩放操作。\n\n```console\nUsage: python inference_realesrgan.py -n RealESRGAN_x4plus -i infile -o outfile [options]...\n\nA common command: python inference_realesrgan.py -n RealESRGAN_x4plus -i infile --outscale 3.5 --face_enhance\n\n  -h                   show this help\n  -i --input           Input image or folder. Default: inputs\n  -o --output          Output folder. Default: results\n  -n --model_name      Model name. Default: RealESRGAN_x4plus\n  -s, --outscale       The final upsampling scale of the image. Default: 4\n  --suffix             Suffix of the restored image. Default: out\n  -t, --tile           Tile size, 0 for no tile during testing. Default: 0\n  --face_enhance       Whether to use GFPGAN to enhance face. Default: False\n  --fp32               Use fp32 precision during inference. Default: fp16 (half precision).\n  --ext                Image extension. Options: auto | jpg | png, auto means using the same extension as inputs. 
Default: auto\n```\n\n#### 通用图像推理\n\n下载预训练模型：[RealESRGAN_x4plus.pth](https:\u002F\u002Fgithub.com\u002Fxinntao\u002FReal-ESRGAN\u002Freleases\u002Fdownload\u002Fv0.1.0\u002FRealESRGAN_x4plus.pth)\n\n```bash\nwget https:\u002F\u002Fgithub.com\u002Fxinntao\u002FReal-ESRGAN\u002Freleases\u002Fdownload\u002Fv0.1.0\u002FRealESRGAN_x4plus.pth -P weights\n```\n\n开始推理！\n\n```bash\npython inference_realesrgan.py -n RealESRGAN_x4plus -i inputs --face_enhance\n```\n\n结果位于 `results` 文件夹中。\n\n#### 动漫图像推理\n\n\u003Cp align=\"center\">\n  \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fxinntao_Real-ESRGAN_readme_864ed732a54b.png\">\n\u003C\u002Fp>\n\n预训练模型：[RealESRGAN_x4plus_anime_6B](https:\u002F\u002Fgithub.com\u002Fxinntao\u002FReal-ESRGAN\u002Freleases\u002Fdownload\u002Fv0.2.2.4\u002FRealESRGAN_x4plus_anime_6B.pth)\u003Cbr>\n更多详情以及与 [waifu2x](https:\u002F\u002Fgithub.com\u002Fnihui\u002Fwaifu2x-ncnn-vulkan) 的对比请参见 [**anime_model.md**](docs\u002Fanime_model.md)\n\n```bash\n# download model\nwget https:\u002F\u002Fgithub.com\u002Fxinntao\u002FReal-ESRGAN\u002Freleases\u002Fdownload\u002Fv0.2.2.4\u002FRealESRGAN_x4plus_anime_6B.pth -P weights\n# inference\npython inference_realesrgan.py -n RealESRGAN_x4plus_anime_6B -i inputs\n```\n\n结果位于 `results` 文件夹中。\n\n---\n\n## BibTeX\n\n    @InProceedings{wang2021realesrgan,\n        author    = {Xintao Wang and Liangbin Xie and Chao Dong and Ying Shan},\n        title     = {Real-ESRGAN: Training Real-World Blind Super-Resolution with Pure Synthetic Data},\n        booktitle = {International Conference on Computer Vision Workshops (ICCVW)},\n        year      = {2021}\n    }\n\n## 📧 联系方式\n\n如果您有任何问题，请发送邮件至 `xintao.wang@outlook.com` 或 `xintaowang@tencent.com`。\n\n\u003C!---------------------------------- Projects that use Real-ESRGAN --------------------------->\n## 🧩 使用 Real-ESRGAN 的项目\n\n如果您在项目中开发或使用了 Real-ESRGAN，欢迎告知我。\n\n- NCNN-Android: 
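结合上面的帮助信息可以推算脚本的两个行为：`--outscale` 并非模型原生倍数，程序先按 x4 模型放大、再做一次轻量缩放得到最终尺寸（见更新列表中关于 `LANCZOS4` 的说明）；输出文件名默认附加 `--suffix`（默认 `out`）。下面是一个仅用标准库的示意（“原文件名_后缀.扩展名”的命名规则系笔者根据帮助信息的推断，并非官方实现代码）：

```python
from pathlib import Path

MODEL_NATIVE_SCALE = 4  # RealESRGAN_x4plus 的原生放大倍数

def final_size(w, h, outscale=4.0):
    """先按模型原生 4 倍放大，再按 outscale/4 的比例做一次轻量缩放。"""
    up_w, up_h = w * MODEL_NATIVE_SCALE, h * MODEL_NATIVE_SCALE
    return (round(up_w * outscale / MODEL_NATIVE_SCALE),
            round(up_h * outscale / MODEL_NATIVE_SCALE))

def output_name(input_path, suffix="out", ext="auto"):
    """按“原文件名_后缀.扩展名”的约定推算输出文件名。"""
    p = Path(input_path)
    extension = p.suffix.lstrip(".") if ext == "auto" else ext
    return f"{p.stem}_{suffix}.{extension}"

print(final_size(640, 480, outscale=3.5))  # (2240, 1680)
print(output_name("inputs/photo.jpg"))     # photo_out.jpg
```

例如 `--outscale 3.5` 时，640×480 的输入先被放大到 2560×1920，再缩放到 2240×1680。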
[RealSR-NCNN-Android](https:\u002F\u002Fgithub.com\u002Ftumuyan\u002FRealSR-NCNN-Android) by [tumuyan](https:\u002F\u002Fgithub.com\u002Ftumuyan)\n- VapourSynth: [vs-realesrgan](https:\u002F\u002Fgithub.com\u002FHolyWu\u002Fvs-realesrgan) by [HolyWu](https:\u002F\u002Fgithub.com\u002FHolyWu)\n- NCNN: [Real-ESRGAN-ncnn-vulkan](https:\u002F\u002Fgithub.com\u002Fxinntao\u002FReal-ESRGAN-ncnn-vulkan)\n\n&nbsp;&nbsp;&nbsp;&nbsp;**图形用户界面 (GUI)**\n\n- [Waifu2x-Extension-GUI](https:\u002F\u002Fgithub.com\u002FAaronFeng753\u002FWaifu2x-Extension-GUI) by [AaronFeng753](https:\u002F\u002Fgithub.com\u002FAaronFeng753)\n- [Squirrel-RIFE](https:\u002F\u002Fgithub.com\u002FJustin62628\u002FSquirrel-RIFE) by [Justin62628](https:\u002F\u002Fgithub.com\u002FJustin62628)\n- [Real-GUI](https:\u002F\u002Fgithub.com\u002Fscifx\u002FReal-GUI) by [scifx](https:\u002F\u002Fgithub.com\u002Fscifx)\n- [Real-ESRGAN_GUI](https:\u002F\u002Fgithub.com\u002Fnet2cn\u002FReal-ESRGAN_GUI) by [net2cn](https:\u002F\u002Fgithub.com\u002Fnet2cn)\n- [Real-ESRGAN-EGUI](https:\u002F\u002Fgithub.com\u002FWGzeyu\u002FReal-ESRGAN-EGUI) by [WGzeyu](https:\u002F\u002Fgithub.com\u002FWGzeyu)\n- [anime_upscaler](https:\u002F\u002Fgithub.com\u002Fshangar21\u002Fanime_upscaler) by [shangar21](https:\u002F\u002Fgithub.com\u002Fshangar21)\n- [Upscayl](https:\u002F\u002Fgithub.com\u002Fupscayl\u002Fupscayl) by [Nayam Amarshe](https:\u002F\u002Fgithub.com\u002FNayamAmarshe) and [TGS963](https:\u002F\u002Fgithub.com\u002FTGS963)\n\n## 🤗 致谢\n\n感谢所有贡献者。\n\n- [AK391](https:\u002F\u002Fgithub.com\u002FAK391): 使用 [Gradio](https:\u002F\u002Fgithub.com\u002Fgradio-app\u002Fgradio) 将 RealESRGAN 集成到 [Huggingface Spaces](https:\u002F\u002Fhuggingface.co\u002Fspaces)。参见 [Gradio Web Demo](https:\u002F\u002Fhuggingface.co\u002Fspaces\u002Fakhaliq\u002FReal-ESRGAN)。\n- [Asiimoviet](https:\u002F\u002Fgithub.com\u002FAsiimoviet): 将 README.md 翻译成中文。\n- [2ji3150](https:\u002F\u002Fgithub.com\u002F2ji3150): 感谢提供的 
[详细且有价值的反馈\u002F建议](https:\u002F\u002Fgithub.com\u002Fxinntao\u002FReal-ESRGAN\u002Fissues\u002F131)。\n- [Jared-02](https:\u002F\u002Fgithub.com\u002FJared-02): 将 Training.md 翻译成中文。","# Real-ESRGAN 快速上手指南\n\nReal-ESRGAN 旨在开发通用的图像\u002F视频恢复实用算法，基于纯合成数据训练，可将低分辨率图像\u002F视频恢复为高分辨率。\n\n## 环境准备\n\n- **Python**: >= 3.7（推荐使用 Anaconda 或 Miniconda）\n- **深度学习框架**: PyTorch >= 1.7\n\n## 安装步骤\n\n1. **克隆仓库**\n   ```bash\n   git clone https:\u002F\u002Fgithub.com\u002Fxinntao\u002FReal-ESRGAN.git\n   cd Real-ESRGAN\n   ```\n\n2. **安装依赖包**\n   ```bash\n   # 安装 basicsr（训练和推理通用）\n   pip install basicsr\n   # 面部增强相关库\n   pip install facexlib\n   pip install gfpgan\n   # 安装其他依赖并进入开发模式\n   pip install -r requirements.txt\n   python setup.py develop\n   ```\n\n## 基本使用\n\nReal-ESRGAN 提供三种主要使用方式：**在线演示**、**便携可执行文件（NCNN）** 和 **Python 脚本**。\n\n### 1. 便携可执行文件（推荐新手）\n支持 Windows \u002F Linux \u002F MacOS，无需安装 CUDA 或 PyTorch 环境，开箱即用。\n\n**命令格式：**\n```bash\n.\u002Frealesrgan-ncnn-vulkan.exe -i input.jpg -o output.png -n model_name\n```\n\n**常用模型参数 (`-n`)：**\n- `realesrgan-x4plus`（默认，通用场景）\n- `realesrnet-x4plus`\n- `realesrgan-x4plus-anime`（动漫插图优化，体积小）\n- `realesr-animevideov3`（动漫视频）\n\n> 注：Windows 用户请下载 `.exe` 文件，Linux\u002FMacOS 用户请下载对应系统版本。\n\n### 2. Python 脚本\n适合需要高级功能（如分块处理、任意缩放）的用户。\n\n**基础命令：**\n```bash\npython inference_realesrgan.py -n RealESRGAN_x4plus -i infile -o outfile\n```\n\n**进阶示例（任意缩放输出尺寸）：**\n```bash\npython inference_realesrgan.py -n RealESRGAN_x4plus -i infile --outscale 3.5\n```\n\n### 3. 
在线演示\n- **Web Demo**: [ARC Demo](https:\u002F\u002Farc.tencent.com\u002Fen\u002Fai-demos\u002FimgRestore)\n- **Colab Demo**: [Real-ESRGAN](https:\u002F\u002Fcolab.research.google.com\u002Fdrive\u002F1k2Zod6kSHEvraybHl50Lys0LerhyTMCo) | [Anime Video](https:\u002F\u002Fcolab.research.google.com\u002Fdrive\u002F1yNl9ORUxxlL4N0keJa2SEPB61imPQd1B)","某纪录片制作团队在整理 1990 年代的家庭录像素材时，面临原始画质极差导致无法适配现代高清屏幕的困境。\n\n### 没有 Real-ESRGAN 时\n- 原始截图分辨率过低，放大至 1080P 后出现严重的锯齿和模糊现象。\n- 画面存在大量压缩噪点与色块，干扰观众对历史细节的观察。\n- 传统软件仅能简单插值，导致人物面部变形或产生不自然的伪影。\n- 人工逐帧修复成本过高，项目进度因技术瓶颈被迫延期。\n\n### 使用 Real-ESRGAN 后\n- Real-ESRGAN 智能重建图像细节，将低清素材无损提升至高清标准。\n- 有效去除视频压缩痕迹，同时保留真实的胶片颗粒与光影质感。\n- 支持命令行批量处理，团队可在数小时内完成整个项目的画质升级。\n- 专用模型优化了人脸结构，确保修复后的肖像既清晰又符合历史原貌。\n\nReal-ESRGAN 凭借强大的通用恢复能力，让老旧影像资源的现代化利用变得高效可行。","https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fxinntao_Real-ESRGAN_4842915e.jpg","xinntao","Xintao","https:\u002F\u002Foss.gittoolsai.com\u002Favatars\u002Fxinntao_f13bf2bf.jpg","Researcher at Tencent ARC Lab, (Applied Research Center)","Tencent","Shenzhen, China","xintao.alpha@gmail.com",null,"https:\u002F\u002Fxinntao.github.io\u002F","https:\u002F\u002Fgithub.com\u002Fxinntao",[86],{"name":87,"color":88,"percentage":89},"Python","#3572A5",100,34929,4296,"2026-04-05T21:18:36","BSD-3-Clause","Windows, Linux, macOS","支持 Intel\u002FAMD\u002FNvidia GPU，具体显存及 CUDA 版本未说明","未说明",{"notes":98,"python":99,"dependencies":100},"推荐使用 Anaconda 或 Miniconda 管理环境；提供无需安装 CUDA\u002FPyTorch 的便携式可执行文件（NCNN 实现）；支持通过集成 GFPGAN 进行人脸增强；包含动漫专用模型。","3.7+",[101,102,103,104],"torch>=1.7","basicsr","facexlib","gfpgan",[14,13],[107,108,109,110,111,112,113,114],"esrgan","pytorch","real-esrgan","super-resolution","image-restoration","denoise","jpeg-compression","amine",70,"2026-03-27T02:49:30.150509","2026-04-06T08:52:42.997242",[119,124,129,133,138,143],{"id":120,"question_zh":121,"answer_zh":122,"source_url":123},3461,"安装时遇到依赖包版本冲突或报错怎么办？","如果遇到问题，请在 pip install 命令中添加 --upgrade 参数强制升级包。例如：pip install basicsr --upgrade。维护者已更新 
requirements.txt 并建议此操作以解决兼容性问题，防止因 BasicSR 过旧导致错误。","https:\u002F\u002Fgithub.com\u002Fxinntao\u002FReal-ESRGAN\u002Fissues\u002F33",{"id":125,"question_zh":126,"answer_zh":127,"source_url":128},3462,"处理视频时提示无法打开或读取文件如何解决？","检查输入文件的路径是否正确以及文件完整性。错误日志显示 OpenCV 无法读取文件路径（can't open\u002Fread file）。请确保视频文件路径有效且无特殊字符，同时注意程序可能不支持某些特定格式或位深的视频文件。","https:\u002F\u002Fgithub.com\u002Fxinntao\u002FReal-ESRGAN\u002Fissues\u002F216",{"id":130,"question_zh":131,"answer_zh":132,"source_url":128},3463,"处理长视频对硬件配置有什么具体要求？","深度学习处理非常消耗 GPU 资源。个人电脑建议配置为 RTX 3080 及以上显卡（显存 8G+），内存 32G。如果显存不足（如 RTX 3050ti），处理长视频可能会溢出或耗时过长，建议使用服务器进行解析以获得更好体验。",{"id":134,"question_zh":135,"answer_zh":136,"source_url":137},3464,"旧版 Linux 发行版或 Google Colab 无法运行新版本怎么办？","新版本（如 2.3.0）依赖 GLIBC 2.29，旧系统或 Colab 环境可能不满足要求。解决方法是将版本降级到 2.2.4，这样可以在旧环境中正常运行。","https:\u002F\u002Fgithub.com\u002Fxinntao\u002FReal-ESRGAN\u002Fissues\u002F204",{"id":139,"question_zh":140,"answer_zh":141,"source_url":142},3465,"如何使用自己训练的模型进行推理？","更新后的脚本不再支持 --input_path 参数，需将训练好的模型文件移动到 realesrgan\u002Fweights 目录下。如果遇到 UnboundLocalError 错误，请确认使用的模型文件名正确（如 RealESRGAN_x4plus.pth），并确保使用的是训练结束时的 net_g_last.pth 文件作为基础模型。","https:\u002F\u002Fgithub.com\u002Fxinntao\u002FReal-ESRGAN\u002Fissues\u002F189",{"id":144,"question_zh":145,"answer_zh":146,"source_url":147},3466,"Apple M1 芯片是否支持加速？如何配置？","可以通过配置 device='mps' 来启用 MPS 设备。需要在推理脚本中添加 --device 参数，或在代码初始化 RealESRGANer 时指定 device='mps'。例如：upsampler = RealESRGANer(..., device='mps')。","https:\u002F\u002Fgithub.com\u002Fxinntao\u002FReal-ESRGAN\u002Fissues\u002F378",[149,154,159,164,169,174,179,184,189,194],{"id":150,"version":151,"summary_zh":152,"released_at":153},112764,"v0.3.0","🚀 Long time no see ☄️\r\n\r\n✨ **Highlights**\r\n\r\n✅ [add realesr-general-x4v3 and realesr-general-wdn-x4v3](https:\u002F\u002Fgithub.com\u002Fxinntao\u002FReal-ESRGAN\u002Fcommit\u002Fb827be13a1db242ebaea1be8669c62b757bd2796). They are very tiny models for general scenes, and they may be more robust. 
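The **-dn** denoise strength for realesr-general-x4v3 appears to work by blending that checkpoint with its wdn counterpart via network interpolation (DNI), with the strength as the mixing coefficient. The sketch below is the author's illustration of that blending idea with toy weight dicts, not the official implementation:

```python
def dni(state_a, state_b, alpha):
    """Network interpolation (DNI) sketch: linearly blend two structurally
    identical weight dicts. alpha=1 keeps state_a, alpha=0 keeps state_b."""
    assert state_a.keys() == state_b.keys()
    return {k: alpha * state_a[k] + (1 - alpha) * state_b[k] for k in state_a}

# Toy weights standing in for the two released checkpoints
plain = {"conv1.weight": 1.0, "conv1.bias": 0.2}
wdn = {"conv1.weight": 0.0, "conv1.bias": 0.0}
mixed = dni(plain, wdn, alpha=0.5)  # a middle ground between the two models
print(mixed)
```

Sliding alpha between 0 and 1 trades noise retention against denoising strength without retraining.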
But as they are tiny models, their performance may be limited.\r\n✅ [support denoise strength for realesr-general-x4v3](https:\u002F\u002Fgithub.com\u002Fxinntao\u002FReal-ESRGAN\u002Fcommit\u002F576aaddfaf1dec031cdf580924c00f4b29b9b35a). You can use the **-dn** option to adjust the denoise strength: 0 for weak denoise (keep noise), 1 for strong denoise. Only used for the realesr-general-x4v3 model.\r\n✅ [update inference_video: support auto download](https:\u002F\u002Fgithub.com\u002Fxinntao\u002FReal-ESRGAN\u002Fcommit\u002F61e81d3108f9437a9d97b02fd94439e9fc191ed0). You do not need to download models explicitly. If the models are not downloaded, they will be downloaded automatically.\r\n\r\n✅ [support ffmpeg stream for inference_realesrgan_video](https:\u002F\u002Fgithub.com\u002Fxinntao\u002FReal-ESRGAN\u002Fcommit\u002Fcdc14b74a54e4247581b31c66caf93ea9e0cf159)\r\n✅ [fix colorspace bug & support multi-gpu and multi-processing](https:\u002F\u002Fgithub.com\u002Fxinntao\u002FReal-ESRGAN\u002Fcommit\u002F8cb9bd403e0b8206eb69780b97b35cf7aa84bd4e)\r\n✅ [Added GPU selection feature to python inference](https:\u002F\u002Fgithub.com\u002Fxinntao\u002FReal-ESRGAN\u002Fcommit\u002F6b15fc693646da1f16ba38ee47cf71a285f08390)\r\n✅ [deal with flv format](https:\u002F\u002Fgithub.com\u002Fxinntao\u002FReal-ESRGAN\u002Fcommit\u002Fe5e79fbde32cf96720d484511fbae2272144beb6)\r\n\r\n\r\n📢📢📢\r\n\r\nhttps:\u002F\u002Fgithub.com\u002Fxinntao\u002FReal-ESRGAN\u002Freleases\u002Fdownload\u002Fv0.2.5.0\u002Frealesr-general-x4v3.pth\r\n\r\nhttps:\u002F\u002Fgithub.com\u002Fxinntao\u002FReal-ESRGAN\u002Freleases\u002Fdownload\u002Fv0.2.5.0\u002Frealesr-general-wdn-x4v3.pth\r\n\r\n\u003Cp align=\"center\">\r\n   \u003Cimg src=\"https:\u002F\u002Fraw.githubusercontent.com\u002Fxinntao\u002FReal-ESRGAN\u002Fmaster\u002Fassets\u002Frealesrgan_logo.png\" 
height=150>\r\n\u003C\u002Fp>","2022-09-20T11:58:18",{"id":155,"version":156,"summary_zh":157,"released_at":158},112765,"v0.2.5.0","\u003Cp align=\"center\">\r\n  \u003Cimg src=\"https:\u002F\u002Fraw.githubusercontent.com\u002Fxinntao\u002FReal-ESRGAN\u002Fmaster\u002Fassets\u002Frealesrgan_logo.png\" height=150>\r\n\u003C\u002Fp>\r\n\r\nThe major update: 🎉\r\n\r\n✅ We update the **RealESRGAN AnimeVideo-v3** model, which can achieve better results with a faster inference speed. \r\nThe improvements are:\r\n  - **better naturalness**\r\n  - **Fewer artifacts**\r\n  - **more faithful to the original colors**\r\n  - **better texture restoration**\r\n  - **better background restoration**\r\n  \r\nYou can find more details in [anime video models](https:\u002F\u002Fgithub.com\u002Fxinntao\u002FReal-ESRGAN\u002Fblob\u002Fmaster\u002Fdocs\u002Fanime_video_model.md) and [comparisons](https:\u002F\u002Fgithub.com\u002Fxinntao\u002FReal-ESRGAN\u002Fblob\u002Fmaster\u002Fdocs\u002Fanime_comparisons.md).\r\n\r\nThe models can be downloaded from [realesr-animevideov3](https:\u002F\u002Fgithub.com\u002Fxinntao\u002FReal-ESRGAN\u002Freleases\u002Fdownload\u002Fv0.2.5.0\u002Frealesr-animevideov3.pth).\r\n\r\n✅ We also update the ncnn.\r\n Portable [Windows](https:\u002F\u002Fgithub.com\u002Fxinntao\u002FReal-ESRGAN\u002Freleases\u002Fdownload\u002Fv0.2.5.0\u002Frealesrgan-ncnn-vulkan-20220424-windows.zip) \u002F [Linux](https:\u002F\u002Fgithub.com\u002Fxinntao\u002FReal-ESRGAN\u002Freleases\u002Fdownload\u002Fv0.2.5.0\u002Frealesrgan-ncnn-vulkan-20220424-ubuntu.zip) \u002F [MacOS](https:\u002F\u002Fgithub.com\u002Fxinntao\u002FReal-ESRGAN\u002Freleases\u002Fdownload\u002Fv0.2.5.0\u002Frealesrgan-ncnn-vulkan-20220424-macos.zip) **executable files for Intel\u002FAMD\u002FNvidia GPU**. 
\r\n\r\n","2022-04-24T12:09:56",{"id":160,"version":161,"summary_zh":162,"released_at":163},112766,"v0.2.4.0","Have a nice day 😸 and happy everyday 😃 \r\n\r\nI am happy to add a simple logo to Real-ESRGAN 😋 (designed by myself! and inspired by the [4K logo](https:\u002F\u002Fwww.google.com\u002Fsearch?q=4k+logo))\r\nSo  I release a new version~\r\n\r\nYou can see it on [ReadMe](https:\u002F\u002Fgithub.com\u002Fxinntao\u002FReal-ESRGAN)\r\n\r\n\u003Cp align=\"center\">\r\n  \u003Cimg src=\"https:\u002F\u002Fraw.githubusercontent.com\u002Fxinntao\u002FReal-ESRGAN\u002Fmaster\u002Fassets\u002Frealesrgan_logo.png\" height=150>\r\n\u003C\u002Fp>\r\n\r\n","2022-02-15T16:05:52",{"id":165,"version":166,"summary_zh":167,"released_at":168},112767,"v0.2.3.0","Long long time no see😺\r\n\r\nWe have added small models for **anime videos**: 🎉\r\n\r\nYou can find demos and usages in https:\u002F\u002Fgithub.com\u002Fxinntao\u002FReal-ESRGAN\u002Fblob\u002Fmaster\u002Fdocs\u002Fanime_video_model.md\r\n\r\n\r\n:white_check_mark: We add small models that are optimized for anime videos :-)\r\n\r\nNow we have added two models.\r\n\r\n| Models                                                                                                                             | Scale | Description                    |\r\n| ---------------------------------------------------------------------------------------------------------------------------------- | :---- | :----------------------------- |\r\n| [RealESRGANv2-animevideo-xsx2](https:\u002F\u002Fgithub.com\u002Fxinntao\u002FReal-ESRGAN\u002Freleases\u002Fdownload\u002Fv0.2.3.0\u002FRealESRGANv2-animevideo-xsx2.pth) | X2    | Anime video model with XS size |\r\n| [RealESRGANv2-animevideo-xsx4](https:\u002F\u002Fgithub.com\u002Fxinntao\u002FReal-ESRGAN\u002Freleases\u002Fdownload\u002Fv0.2.3.0\u002FRealESRGANv2-animevideo-xsx4.pth) | X4    | Anime video model with XS size |\r\n\r\nThis release also contains the following files:\r\n\r\n- 
RealESRGANv2-animevideo-xsx2.pth\r\n- RealESRGANv2-animevideo-xsx4.pth\r\n- realesrgan-ncnn-vulkan-20211212-windows.zip\r\n- realesrgan-ncnn-vulkan-20211212-macos.zip\r\n- realesrgan-ncnn-vulkan-20211212-ubuntu.zip","2021-12-12T12:26:00",{"id":170,"version":171,"summary_zh":172,"released_at":173},112768,"v0.2.2.4","See you again 😺 \r\n\r\n**The major update**: :tada:\r\n\r\n- :white_check_mark: We add [*RealESRGAN_x4plus_anime_6B.pth*](https:\u002F\u002Fgithub.com\u002Fxinntao\u002FReal-ESRGAN\u002Freleases\u002Fdownload\u002Fv0.2.2.4\u002FRealESRGAN_x4plus_anime_6B.pth), which is optimized for **anime** images with much smaller model size. More details and comparisons with [waifu2x](https:\u002F\u002Fgithub.com\u002Fnihui\u002Fwaifu2x-ncnn-vulkan) are in [**anime_model.md**](https:\u002F\u002Fgithub.com\u002Fxinntao\u002FReal-ESRGAN\u002Fblob\u002Fmaster\u002Fdocs\u002Fanime_model.md)\r\n\r\n\u003Cp align=\"center\">\r\n  \u003Cimg src=\"https:\u002F\u002Fraw.githubusercontent.com\u002Fxinntao\u002Fpublic-figures\u002Fmaster\u002FReal-ESRGAN\u002Fcmp_realesrgan_anime_1.png\">\r\n\u003C\u002Fp>\r\n\r\n**Other updates**:\r\n\r\n- :white_check_mark: Support finetuning on your own data or paired data (*i.e.*, finetuning ESRGAN). See [here](https:\u002F\u002Fgithub.com\u002Fxinntao\u002FReal-ESRGAN\u002Fblob\u002Fmaster\u002FTraining.md#Finetune-Real-ESRGAN-on-your-own-dataset)\r\n- :white_check_mark: We add [CONTRIBUTING.md](https:\u002F\u002Fgithub.com\u002Fxinntao\u002FReal-ESRGAN\u002Fblob\u002Fmaster\u002FCONTRIBUTING.md). 
Welcome to your contributions 😃 \r\n\r\nThis release also contains the following files:\r\n- RealESRGAN_x4plus_anime_6B.pth\r\n- RealESRGAN_x4plus_anime_6B_netD.pth\r\n- realesrgan-ncnn-vulkan-20210901-windows.zip\r\n- realesrgan-ncnn-vulkan-20210901-macos.zip\r\n- realesrgan-ncnn-vulkan-20210901-ubuntu.zip","2021-08-31T16:30:22",{"id":175,"version":176,"summary_zh":177,"released_at":178},112769,"v0.2.2.3","- Add `finetune_realesrgan_x4plus.yml`\r\n- Add scripts for data preparation:\r\n    - `generate_multiscale_DF2K.py`\r\n    -  `generate_meta_info.py`\r\n\r\nFile list:\r\n- `RealESRGAN_x4plus_netD.pth`\r\n-  `RealESRGAN_x2plus_netD.pth`","2021-08-26T14:31:57",{"id":180,"version":181,"summary_zh":182,"released_at":183},112770,"v0.2.1","- Support PyPI\r\n- Support arbitrary scale with `--outscale`\r\n- Add RealESRGAN_x2plus.pth model\r\n\r\nFile list:\r\n- `RealESRGAN_x2plus.pth`","2021-08-08T13:35:48",{"id":185,"version":186,"summary_zh":187,"released_at":188},112771,"v0.1.2","This release is mainly for updating `realesrgan-ncnn-vulkan` executable files.\r\n\r\n- We have added Linux\u002FMacOS executable files\r\n\r\n- We add back the `tta` option\r\n- We add the function: when the destination folder does not exist, create it!\r\n\r\n\r\nFile list:\r\n- `realesrgan-ncnn-vulkan-20210801-windows.zip`\r\n- `realesrgan-ncnn-vulkan-20210801-macos.zip`\r\n- `realesrgan-ncnn-vulkan-20210801-ubuntu.zip`","2021-07-31T18:30:29",{"id":190,"version":191,"summary_zh":192,"released_at":193},112772,"v0.1.1","This release is mainly for storing pre-trained models and executable files.\r\n\r\n- `RealESRGAN-ncnn-vulkan-20210725-windows.zip`\r\n\r\n- `RealESRNet_x4plus.pth`\r\n- `ESRGAN_SRx4_DF2KOST_official-ff704c30.pth`\r\n\r\nNote that: `RealESRGAN_x4plus.pth` can be found in 
https:\u002F\u002Fgithub.com\u002Fxinntao\u002FReal-ESRGAN\u002Freleases\u002Fdownload\u002Fv0.1.0\u002FRealESRGAN_x4plus.pth","2021-07-25T08:06:01",{"id":195,"version":196,"summary_zh":197,"released_at":198},112773,"v0.1.0","This release is mainly for storing pre-trained models and executable files.\r\n\r\n1. RealESRGAN-ncnn-vulkan.zip\r\n2. RealESRGAN_x4plus.pth","2021-07-22T19:15:38"]