[{"data":1,"prerenderedAt":-1},["ShallowReactive",2],{"similar-sczhou--CodeFormer":3,"tool-sczhou--CodeFormer":64},[4,17,27,35,43,56],{"id":5,"name":6,"github_repo":7,"description_zh":8,"stars":9,"difficulty_score":10,"last_commit_at":11,"category_tags":12,"status":16},3808,"stable-diffusion-webui","AUTOMATIC1111\u002Fstable-diffusion-webui","stable-diffusion-webui 是一个基于 Gradio 构建的网页版操作界面，旨在让用户能够轻松地在本地运行和使用强大的 Stable Diffusion 图像生成模型。它解决了原始模型依赖命令行、操作门槛高且功能分散的痛点，将复杂的 AI 绘图流程整合进一个直观易用的图形化平台。\n\n无论是希望快速上手的普通创作者、需要精细控制画面细节的设计师，还是想要深入探索模型潜力的开发者与研究人员，都能从中获益。其核心亮点在于极高的功能丰富度：不仅支持文生图、图生图、局部重绘（Inpainting）和外绘（Outpainting）等基础模式，还独创了注意力机制调整、提示词矩阵、负向提示词以及“高清修复”等高级功能。此外，它内置了 GFPGAN 和 CodeFormer 等人脸修复工具，支持多种神经网络放大算法，并允许用户通过插件系统无限扩展能力。即使是显存有限的设备，stable-diffusion-webui 也提供了相应的优化选项，让高质量的 AI 艺术创作变得触手可及。",162132,3,"2026-04-05T11:01:52",[13,14,15],"开发框架","图像","Agent","ready",{"id":18,"name":19,"github_repo":20,"description_zh":21,"stars":22,"difficulty_score":23,"last_commit_at":24,"category_tags":25,"status":16},1381,"everything-claude-code","affaan-m\u002Feverything-claude-code","everything-claude-code 是一套专为 AI 编程助手（如 Claude Code、Codex、Cursor 等）打造的高性能优化系统。它不仅仅是一组配置文件，而是一个经过长期实战打磨的完整框架，旨在解决 AI 代理在实际开发中面临的效率低下、记忆丢失、安全隐患及缺乏持续学习能力等核心痛点。\n\n通过引入技能模块化、直觉增强、记忆持久化机制以及内置的安全扫描功能，everything-claude-code 能显著提升 AI 在复杂任务中的表现，帮助开发者构建更稳定、更智能的生产级 AI 代理。其独特的“研究优先”开发理念和针对 Token 消耗的优化策略，使得模型响应更快、成本更低，同时有效防御潜在的攻击向量。\n\n这套工具特别适合软件开发者、AI 研究人员以及希望深度定制 AI 工作流的技术团队使用。无论您是在构建大型代码库，还是需要 AI 协助进行安全审计与自动化测试，everything-claude-code 都能提供强大的底层支持。作为一个曾荣获 Anthropic 黑客大奖的开源项目，它融合了多语言支持与丰富的实战钩子（hooks），让 AI 真正成长为懂上",138956,2,"2026-04-05T11:33:21",[13,15,26],"语言模型",{"id":28,"name":29,"github_repo":30,"description_zh":31,"stars":32,"difficulty_score":23,"last_commit_at":33,"category_tags":34,"status":16},2271,"ComfyUI","Comfy-Org\u002FComfyUI","ComfyUI 是一款功能强大且高度模块化的视觉 AI 引擎，专为设计和执行复杂的 Stable Diffusion 图像生成流程而打造。它摒弃了传统的代码编写模式，采用直观的节点式流程图界面，让用户通过连接不同的功能模块即可构建个性化的生成管线。\n\n这一设计巧妙解决了高级 AI 
绘图工作流配置复杂、灵活性不足的痛点。用户无需具备编程背景，也能自由组合模型、调整参数并实时预览效果，轻松实现从基础文生图到多步骤高清修复等各类复杂任务。ComfyUI 拥有极佳的兼容性，不仅支持 Windows、macOS 和 Linux 全平台，还广泛适配 NVIDIA、AMD、Intel 及苹果 Silicon 等多种硬件架构，并率先支持 SDXL、Flux、SD3 等前沿模型。\n\n无论是希望深入探索算法潜力的研究人员和开发者，还是追求极致创作自由度的设计师与资深 AI 绘画爱好者，ComfyUI 都能提供强大的支持。其独特的模块化架构允许社区不断扩展新功能，使其成为当前最灵活、生态最丰富的开源扩散模型工具之一，帮助用户将创意高效转化为现实。",107662,"2026-04-03T11:11:01",[13,14,15],{"id":36,"name":37,"github_repo":38,"description_zh":39,"stars":40,"difficulty_score":23,"last_commit_at":41,"category_tags":42,"status":16},3704,"NextChat","ChatGPTNextWeb\u002FNextChat","NextChat 是一款轻量且极速的 AI 助手，旨在为用户提供流畅、跨平台的大模型交互体验。它完美解决了用户在多设备间切换时难以保持对话连续性，以及面对众多 AI 模型不知如何统一管理的痛点。无论是日常办公、学习辅助还是创意激发，NextChat 都能让用户随时随地通过网页、iOS、Android、Windows、MacOS 或 Linux 端无缝接入智能服务。\n\n这款工具非常适合普通用户、学生、职场人士以及需要私有化部署的企业团队使用。对于开发者而言，它也提供了便捷的自托管方案，支持一键部署到 Vercel 或 Zeabur 等平台。\n\nNextChat 的核心亮点在于其广泛的模型兼容性，原生支持 Claude、DeepSeek、GPT-4 及 Gemini Pro 等主流大模型，让用户在一个界面即可自由切换不同 AI 能力。此外，它还率先支持 MCP（Model Context Protocol）协议，增强了上下文处理能力。针对企业用户，NextChat 提供专业版解决方案，具备品牌定制、细粒度权限控制、内部知识库整合及安全审计等功能，满足公司对数据隐私和个性化管理的高标准要求。",87618,"2026-04-05T07:20:52",[13,26],{"id":44,"name":45,"github_repo":46,"description_zh":47,"stars":48,"difficulty_score":23,"last_commit_at":49,"category_tags":50,"status":16},2268,"ML-For-Beginners","microsoft\u002FML-For-Beginners","ML-For-Beginners 是由微软推出的一套系统化机器学习入门课程，旨在帮助零基础用户轻松掌握经典机器学习知识。这套课程将学习路径规划为 12 周，包含 26 节精炼课程和 52 道配套测验，内容涵盖从基础概念到实际应用的完整流程，有效解决了初学者面对庞大知识体系时无从下手、缺乏结构化指导的痛点。\n\n无论是希望转型的开发者、需要补充算法背景的研究人员，还是对人工智能充满好奇的普通爱好者，都能从中受益。课程不仅提供了清晰的理论讲解，还强调动手实践，让用户在循序渐进中建立扎实的技能基础。其独特的亮点在于强大的多语言支持，通过自动化机制提供了包括简体中文在内的 50 多种语言版本，极大地降低了全球不同背景用户的学习门槛。此外，项目采用开源协作模式，社区活跃且内容持续更新，确保学习者能获取前沿且准确的技术资讯。如果你正寻找一条清晰、友好且专业的机器学习入门之路，ML-For-Beginners 将是理想的起点。",84991,"2026-04-05T10:45:23",[14,51,52,53,15,54,26,13,55],"数据工具","视频","插件","其他","音频",{"id":57,"name":58,"github_repo":59,"description_zh":60,"stars":61,"difficulty_score":10,"last_commit_at":62,"category_tags":63,"status":16},3128,"ragflow","infiniflow\u002Fragflow","RAGFlow 
是一款领先的开源检索增强生成（RAG）引擎，旨在为大语言模型构建更精准、可靠的上下文层。它巧妙地将前沿的 RAG 技术与智能体（Agent）能力相结合，不仅支持从各类文档中高效提取知识，还能让模型基于这些知识进行逻辑推理和任务执行。\n\n在大模型应用中，幻觉问题和知识滞后是常见痛点。RAGFlow 通过深度解析复杂文档结构（如表格、图表及混合排版），显著提升了信息检索的准确度，从而有效减少模型“胡编乱造”的现象，确保回答既有据可依又具备时效性。其内置的智能体机制更进一步，使系统不仅能回答问题，还能自主规划步骤解决复杂问题。\n\n这款工具特别适合开发者、企业技术团队以及 AI 研究人员使用。无论是希望快速搭建私有知识库问答系统，还是致力于探索大模型在垂直领域落地的创新者，都能从中受益。RAGFlow 提供了可视化的工作流编排界面和灵活的 API 接口，既降低了非算法背景用户的上手门槛，也满足了专业开发者对系统深度定制的需求。作为基于 Apache 2.0 协议开源的项目，它正成为连接通用大模型与行业专有知识之间的重要桥梁。",77062,"2026-04-04T04:44:48",[15,14,13,26,54],{"id":65,"github_repo":66,"name":67,"description_en":68,"description_zh":69,"ai_summary_zh":69,"readme_en":70,"readme_zh":71,"quickstart_zh":72,"use_case_zh":73,"hero_image_url":74,"owner_login":75,"owner_name":76,"owner_avatar_url":77,"owner_bio":78,"owner_company":79,"owner_location":80,"owner_email":81,"owner_twitter":82,"owner_website":83,"owner_url":84,"languages":85,"stars":98,"forks":36,"last_commit_at":99,"license":100,"difficulty_score":10,"env_os":101,"env_gpu":102,"env_ram":103,"env_deps":104,"category_tags":112,"github_topics":113,"view_count":122,"oss_zip_url":123,"oss_zip_packed_at":123,"status":16,"created_at":124,"updated_at":125,"faqs":126,"releases":156},566,"sczhou\u002FCodeFormer","CodeFormer","[NeurIPS 2022] Towards Robust Blind Face Restoration with Codebook Lookup Transformer","CodeFormer 是一款基于深度学习的开源人脸修复工具，源自 NeurIPS 2022 的前沿研究。它专注于解决人脸图像模糊、低分辨率、褪色或严重损坏的问题，能够智能地恢复清晰且自然的面部细节。\n\n其核心技术亮点在于采用了 Codebook Lookup Transformer 架构，这使得它在面对极端退化情况时依然具备极强的鲁棒性，有效避免了传统方法中常见的伪影问题。除了基础的画质提升，CodeFormer 还支持黑白照片自动上色、面部局部重绘以及视频帧的增强处理，功能十分全面。\n\n对于普通用户，通过 Hugging Face 或 OpenXLab 的在线 Demo 即可轻松体验修复效果；对于开发者及研究人员，项目不仅开放了模型权重，还提供了完整的训练代码和配置文件，便于深入学习与二次开发。无论是想复活家庭老照片的设计师，还是探索计算机视觉技术的学者，CodeFormer 都提供了强大的技术支持。","\u003Cp align=\"center\">\n  \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fsczhou_CodeFormer_readme_d3f5bbc7e7c7.png\" height=110>\n\u003C\u002Fp>\n\n## Towards Robust Blind Face Restoration with Codebook Lookup Transformer (NeurIPS 
2022)\n\n[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2206.11253) | [Project Page](https:\u002F\u002Fshangchenzhou.com\u002Fprojects\u002FCodeFormer\u002F) | [Video](https:\u002F\u002Fyoutu.be\u002Fd3VDpkXlueI)\n\n\n\u003Ca href=\"https:\u002F\u002Fcolab.research.google.com\u002Fdrive\u002F1m52PNveE4PBhYrecj34cnpEeiHcC5LTb?usp=sharing\">\u003Cimg src=\"https:\u002F\u002Fcolab.research.google.com\u002Fassets\u002Fcolab-badge.svg\" alt=\"google colab logo\">\u003C\u002Fa> [![Hugging Face](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FDemo-%F0%9F%A4%97%20Hugging%20Face-blue)](https:\u002F\u002Fhuggingface.co\u002Fspaces\u002Fsczhou\u002FCodeFormer) [![Replicate](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FDemo-%F0%9F%9A%80%20Replicate-blue)](https:\u002F\u002Freplicate.com\u002Fsczhou\u002Fcodeformer) [![OpenXLab](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FDemo-%F0%9F%90%BC%20OpenXLab-blue)](https:\u002F\u002Fopenxlab.org.cn\u002Fapps\u002Fdetail\u002FShangchenZhou\u002FCodeFormer) ![Visitors](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fsczhou_CodeFormer_readme_f8a564757e13.png)\n\n\n[Shangchen Zhou](https:\u002F\u002Fshangchenzhou.com\u002F), [Kelvin C.K. Chan](https:\u002F\u002Fckkelvinchan.github.io\u002F), [Chongyi Li](https:\u002F\u002Fli-chongyi.github.io\u002F), [Chen Change Loy](https:\u002F\u002Fwww.mmlab-ntu.com\u002Fperson\u002Fccloy\u002F) \n\nS-Lab, Nanyang Technological University\n\n\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fsczhou_CodeFormer_readme_33fd916463bc.jpg\" width=\"800px\"\u002F>\n\n\n:star: If CodeFormer is helpful to your images or projects, please help star this repo. Thanks! :hugs: \n\n\n### Update\n- **2023.07.20**: Integrated to :panda_face: [OpenXLab](https:\u002F\u002Fopenxlab.org.cn\u002Fapps). Try out online demo! 
[![OpenXLab](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FDemo-%F0%9F%90%BC%20OpenXLab-blue)](https:\u002F\u002Fopenxlab.org.cn\u002Fapps\u002Fdetail\u002FShangchenZhou\u002FCodeFormer)\n- **2023.04.19**: :whale: Training codes and config files are publicly available now.\n- **2023.04.09**: Add features of inpainting and colorization for cropped and aligned face images.\n- **2023.02.10**: Include `dlib` as a new face detector option; it produces more accurate face identity.\n- **2022.10.05**: Support video input `--input_path [YOUR_VIDEO.mp4]`. Try it to enhance your videos! :clapper: \n- **2022.09.14**: Integrated to :hugs: [Hugging Face](https:\u002F\u002Fhuggingface.co\u002Fspaces). Try out online demo! [![Hugging Face](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FDemo-%F0%9F%A4%97%20Hugging%20Face-blue)](https:\u002F\u002Fhuggingface.co\u002Fspaces\u002Fsczhou\u002FCodeFormer)\n- **2022.09.09**: Integrated to :rocket: [Replicate](https:\u002F\u002Freplicate.com\u002Fexplore). Try out online demo! 
[![Replicate](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FDemo-%F0%9F%9A%80%20Replicate-blue)](https:\u002F\u002Freplicate.com\u002Fsczhou\u002Fcodeformer)\n- [**More**](docs\u002Fhistory_changelog.md)\n\n### TODO\n- [x] Add training code and config files\n- [x] Add checkpoint and script for face inpainting\n- [x] Add checkpoint and script for face colorization\n- [x] ~~Add background image enhancement~~\n\n#### :panda_face: Try Enhancing Old Photos \u002F Fixing AI-arts\n[\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fsczhou_CodeFormer_readme_0e2724d05b50.jpg\" height=\"226px\"\u002F>](https:\u002F\u002Fimgsli.com\u002FMTI3NTE2) [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fsczhou_CodeFormer_readme_4ff2a7e10ed2.jpg\" height=\"226px\"\u002F>](https:\u002F\u002Fimgsli.com\u002FMTI3NTE1) [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fsczhou_CodeFormer_readme_2b234de46ff9.jpg\" height=\"226px\"\u002F>](https:\u002F\u002Fimgsli.com\u002FMTI3NTIw) \n\n#### Face Restoration\n\n\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fsczhou_CodeFormer_readme_2ea40a221bdb.png\" width=\"400px\"\u002F> \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fsczhou_CodeFormer_readme_8964bb4c8f93.png\" width=\"400px\"\u002F>\n\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fsczhou_CodeFormer_readme_5bdc16099fd8.png\" width=\"400px\"\u002F> \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fsczhou_CodeFormer_readme_6969b60dd2e0.png\" width=\"400px\"\u002F>\n\n#### Face Color Enhancement and Restoration\n\n\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fsczhou_CodeFormer_readme_6ffbdc5dc875.png\" width=\"400px\"\u002F> \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fsczhou_CodeFormer_readme_08decd1622d5.png\" width=\"400px\"\u002F>\n\n#### Face Inpainting\n\n\u003Cimg 
src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fsczhou_CodeFormer_readme_0591da729be9.png\" width=\"400px\"\u002F> \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fsczhou_CodeFormer_readme_07388669ccda.png\" width=\"400px\"\u002F>\n\n\n\n### Dependencies and Installation\n\n- PyTorch >= 1.7.1\n- CUDA >= 10.1\n- Other required packages listed in `requirements.txt`\n```\n# git clone this repository\ngit clone https:\u002F\u002Fgithub.com\u002Fsczhou\u002FCodeFormer\ncd CodeFormer\n\n# create a new anaconda env\nconda create -n codeformer python=3.8 -y\nconda activate codeformer\n\n# install python dependencies\npip3 install -r requirements.txt\npython basicsr\u002Fsetup.py develop\n\n# optional: only for face detection or cropping with dlib\nconda install -c conda-forge dlib\n```\n\u003C!-- conda install -c conda-forge dlib -->\n\n### Quick Inference\n\n#### Download Pre-trained Models:\nDownload the facelib and dlib pretrained models from [[Releases](https:\u002F\u002Fgithub.com\u002Fsczhou\u002FCodeFormer\u002Freleases\u002Ftag\u002Fv0.1.0) | [Google Drive](https:\u002F\u002Fdrive.google.com\u002Fdrive\u002Ffolders\u002F1b_3qwrzY_kTQh0-SnBoGBgOrJ_PLZSKm?usp=sharing) | [OneDrive](https:\u002F\u002Fentuedu-my.sharepoint.com\u002F:f:\u002Fg\u002Fpersonal\u002Fs200094_e_ntu_edu_sg\u002FEvDxR7FcAbZMp_MA9ouq7aQB8XTppMb3-T0uGZ_2anI2mg?e=DXsJFo)] to the `weights\u002Ffacelib` folder. 
You can manually download the pretrained models OR download them by running the following command:\n```\npython scripts\u002Fdownload_pretrained_models.py facelib\n\n# optional: only needed for the dlib face detector\npython scripts\u002Fdownload_pretrained_models.py dlib\n```\n\nDownload the CodeFormer pretrained models from [[Releases](https:\u002F\u002Fgithub.com\u002Fsczhou\u002FCodeFormer\u002Freleases\u002Ftag\u002Fv0.1.0) | [Google Drive](https:\u002F\u002Fdrive.google.com\u002Fdrive\u002Ffolders\u002F1CNNByjHDFt0b95q54yMVp6Ifo5iuU6QS?usp=sharing) | [OneDrive](https:\u002F\u002Fentuedu-my.sharepoint.com\u002F:f:\u002Fg\u002Fpersonal\u002Fs200094_e_ntu_edu_sg\u002FEoKFj4wo8cdIn2-TY2IV6CYBhZ0pIG4kUOeHdPR_A5nlbg?e=AO8UN9)] to the `weights\u002FCodeFormer` folder. You can manually download the pretrained models OR download them by running the following command:\n```\npython scripts\u002Fdownload_pretrained_models.py CodeFormer\n```\n\n#### Prepare Testing Data:\nYou can put the testing images in the `inputs\u002FTestWhole` folder. If you would like to test on cropped and aligned faces, you can put them in the `inputs\u002Fcropped_faces` folder. You can get cropped and aligned faces by running the following command:\n```\n# you may need to install dlib via: conda install -c conda-forge dlib\npython scripts\u002Fcrop_align_face.py -i [input folder] -o [output folder]\n```\n\n\n#### Testing:\n[Note] If you want to compare against CodeFormer in your paper, please run the following command with `--has_aligned` (for cropped and aligned faces): the command for whole images involves a face-background fusion step that may damage hair texture at the boundary, which leads to an unfair comparison.\n\nThe fidelity weight *w* lies in [0, 1]. Generally, a smaller *w* tends to produce a higher-quality result, while a larger *w* yields a higher-fidelity result. 
The results will be saved in the `results` folder.\n\n\n🧑🏻 Face Restoration (cropped and aligned face)\n```\n# For cropped and aligned faces (512x512)\npython inference_codeformer.py -w 0.5 --has_aligned --input_path [image folder]|[image path]\n```\n\n:framed_picture: Whole Image Enhancement\n```\n# For whole image\n# Add '--bg_upsampler realesrgan' to enhance the background regions with Real-ESRGAN\n# Add '--face_upsample' to further upsample the restored face with Real-ESRGAN\npython inference_codeformer.py -w 0.7 --input_path [image folder]|[image path]\n```\n\n:clapper: Video Enhancement\n```\n# For Windows\u002FMac users, please install ffmpeg first\nconda install -c conda-forge ffmpeg\n```\n```\n# For video clips\n# Video path should end with '.mp4'|'.mov'|'.avi'\npython inference_codeformer.py --bg_upsampler realesrgan --face_upsample -w 1.0 --input_path [video path]\n```\n\n🌈 Face Colorization (cropped and aligned face)\n```\n# For cropped and aligned faces (512x512)\n# Colorize black-and-white or faded photos\npython inference_colorization.py --input_path [image folder]|[image path]\n```\n\n🎨 Face Inpainting (cropped and aligned face)\n```\n# For cropped and aligned faces (512x512)\n# Inputs can be masked with a white brush using an image editing app (e.g., Photoshop)\n# (check out the examples in inputs\u002Fmasked_faces)\npython inference_inpainting.py --input_path [image folder]|[image path]\n```\n### Training:\nThe training commands can be found in the documents: [English](docs\u002Ftrain.md) **|** [简体中文](docs\u002Ftrain_CN.md).\n\n### License\n\nThis project is licensed under \u003Ca rel=\"license\" href=\"https:\u002F\u002Fgithub.com\u002Fsczhou\u002FCodeFormer\u002Fblob\u002Fmaster\u002FLICENSE\">NTU S-Lab License 1.0\u003C\u002Fa>. 
Redistribution and use should follow this license.\n\n---\n### 🐼 Ecosystem Applications & Deployments\n\nCodeFormer has been widely adopted and deployed across a broad range (>20) of online applications, platforms, API services, and independent websites, and has also been integrated into many open-source projects and toolkits.\n\n> Only demos on **Hugging Face Space**, **Replicate**, and **OpenXLab** are official deployments **maintained by the authors**. All other demos, APIs, apps, websites, and integrations listed below are **third-party (non-official)** and are not affiliated with the CodeFormer authors. Please verify their legitimacy to avoid potential financial loss.\n\n\n#### Websites (Non-official)\n\n⚠️⚠️⚠️ The following websites are **not official and are not operated by us**. They use our models without any license or authorization. Please verify their legitimacy to avoid potential financial loss.\n\n\n| Website | Link | Notes |\n|---------|------|--------|\n| CodeFormer.net | https:\u002F\u002Fcodeformer.net\u002F | Non-official website |\n| CodeFormer.cn | https:\u002F\u002Fwww.codeformer.cn\u002F | Non-official website |\n| CodeFormerAI.com | https:\u002F\u002Fcodeformerai.com\u002F | Non-official website |\n\n#### Online Demos \u002F API Platforms\n\n| Platform | Link | Notes |\n|----------|------|--------|\n| Hugging Face | https:\u002F\u002Fhuggingface.co\u002Fspaces\u002Fsczhou\u002FCodeFormer | Maintained by Authors |\n| Replicate | https:\u002F\u002Freplicate.com\u002Fsczhou\u002Fcodeformer | Maintained by Authors |\n| OpenXLab | https:\u002F\u002Fopenxlab.org.cn\u002Fapps\u002Fdetail\u002FShangchenZhou\u002FCodeFormer |Maintained by Authors |\n| Segmind | https:\u002F\u002Fwww.segmind.com\u002Fmodels\u002Fcodeformer | Non-official |\n| Sieve | https:\u002F\u002Fwww.sievedata.com\u002Ffunctions\u002Fsieve\u002Fcodeformer | Non-official |\n| Fal.ai | https:\u002F\u002Ffal.ai\u002Fmodels\u002Ffal-ai\u002Fcodeformer | Non-official |\n| VaikerAI | 
https:\u002F\u002Fvaikerai.com\u002Fsczhou\u002Fcodeformer | Non-official |\n| Scade.pro | https:\u002F\u002Fwww.scade.pro\u002Fprocessors\u002Flucataco-codeformer | Non-official |\n| Grandline | https:\u002F\u002Fwww.grandline.ai\u002Fmodel\u002Fcodeformer | Non-official |\n| AI Demos | https:\u002F\u002Faidemos.com\u002Ftools\u002Fcodeformer | Non-official |\n| Synexa | https:\u002F\u002Fsynexa.ai\u002Fexplore\u002Fsczhou\u002Fcodeformer | Non-official |\n| RentPrompts | https:\u002F\u002Frentprompts.ai\u002Fmodels\u002FCodeformer | Non-official |\n| ElevaticsAI | https:\u002F\u002Felevatics.ai\u002Fmodels\u002Fsuper-resolution\u002Fcodeformer | Non-official |\n| Anakin.ai | https:\u002F\u002Fanakin.ai\u002Fapps\u002Fcodeformer-online-face-restoration-by-codeformer-19343 | Non-official |\n| Relayto | https:\u002F\u002Frelayto.com\u002Fexplore\u002Fcodeformer-yf9rj8kwc7zsr | Non-official |\n\n\n#### Open-Source Projects & Toolkits\n\n| Project \u002F Toolkit | Link | Notes |\n|-------------------|------|--------|\n| Stable Diffusion GUI | https:\u002F\u002Fnmkd.itch.io\u002Ft2i-gui | Integration |\n| Stable Diffusion WebUI | https:\u002F\u002Fgithub.com\u002FAUTOMATIC1111\u002Fstable-diffusion-webui | Integration |\n| ChaiNNer | https:\u002F\u002Fgithub.com\u002FchaiNNer-org\u002FchaiNNer | Integration |\n| PyPI | https:\u002F\u002Fpypi.org\u002Fproject\u002Fcodeformer\u002F ; https:\u002F\u002Fpypi.org\u002Fproject\u002Fcodeformer-pip\u002F | Python packages |\n| ComfyUI | https:\u002F\u002Fstable-diffusion-art.com\u002Fcodeformer\u002F | Integration |\n\n---\n### Acknowledgement\n\nThis project is based on [BasicSR](https:\u002F\u002Fgithub.com\u002FXPixelGroup\u002FBasicSR). Some codes are brought from [Unleashing Transformers](https:\u002F\u002Fgithub.com\u002Fsamb-t\u002Funleashing-transformers), [YOLOv5-face](https:\u002F\u002Fgithub.com\u002Fdeepcam-cn\u002Fyolov5-face), and [FaceXLib](https:\u002F\u002Fgithub.com\u002Fxinntao\u002Ffacexlib). 
We also adopt [Real-ESRGAN](https:\u002F\u002Fgithub.com\u002Fxinntao\u002FReal-ESRGAN) to support background image enhancement. Thanks for their awesome work.\n\n### Citation\nIf our work is useful for your research, please consider citing:\n\n    @inproceedings{zhou2022codeformer,\n        author = {Zhou, Shangchen and Chan, Kelvin C.K. and Li, Chongyi and Loy, Chen Change},\n        title = {Towards Robust Blind Face Restoration with Codebook Lookup TransFormer},\n        booktitle = {NeurIPS},\n        year = {2022}\n    }\n\n\n### Contact\nIf you have any questions, please feel free to reach out to me at `shangchenzhou@gmail.com`. \n","\u003Cp align=\"center\">\n  \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fsczhou_CodeFormer_readme_d3f5bbc7e7c7.png\" height=110>\n\u003C\u002Fp>\n\n## 迈向基于码本查找 Transformer (Codebook Lookup Transformer) 的鲁棒盲人脸修复 (NeurIPS 2022)\n\n[论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F2206.11253) | [项目页面](https:\u002F\u002Fshangchenzhou.com\u002Fprojects\u002FCodeFormer\u002F) | [视频](https:\u002F\u002Fyoutu.be\u002Fd3VDpkXlueI)\n\n\n\u003Ca href=\"https:\u002F\u002Fcolab.research.google.com\u002Fdrive\u002F1m52PNveE4PBhYrecj34cnpEeiHcC5LTb?usp=sharing\">\u003Cimg src=\"https:\u002F\u002Fcolab.research.google.com\u002Fassets\u002Fcolab-badge.svg\" alt=\"google colab logo\">\u003C\u002Fa> [![Hugging Face](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FDemo-%F0%9F%A4%97%20Hugging%20Face-blue)](https:\u002F\u002Fhuggingface.co\u002Fspaces\u002Fsczhou\u002FCodeFormer) [![Replicate](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FDemo-%F0%9F%9A%80%20Replicate-blue)](https:\u002F\u002Freplicate.com\u002Fsczhou\u002Fcodeformer) [![OpenXLab](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FDemo-%F0%9F%90%BC%20OpenXLab-blue)](https:\u002F\u002Fopenxlab.org.cn\u002Fapps\u002Fdetail\u002FShangchenZhou\u002FCodeFormer) 
![Visitors](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fsczhou_CodeFormer_readme_87f4f6d7a650.png)\n\n\n[Shangchen Zhou](https:\u002F\u002Fshangchenzhou.com\u002F), [Kelvin C.K. Chan](https:\u002F\u002Fckkelvinchan.github.io\u002F), [Chongyi Li](https:\u002F\u002Fli-chongyi.github.io\u002F), [Chen Change Loy](https:\u002F\u002Fwww.mmlab-ntu.com\u002Fperson\u002Fccloy\u002F) \n\nS 实验室，南洋理工大学\n\n\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fsczhou_CodeFormer_readme_33fd916463bc.jpg\" width=\"800px\"\u002F>\n\n\n:star: 如果 CodeFormer 对您的图片或者项目有帮助，请帮忙给这个仓库点个星。谢谢！:hugs: \n\n\n### 更新\n- **2023.07.20**: 集成到 :panda_face: [OpenXLab](https:\u002F\u002Fopenxlab.org.cn\u002Fapps)。试用在线演示！[![OpenXLab](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FDemo-%F0%9F%90%BC%20OpenXLab-blue)](https:\u002F\u002Fopenxlab.org.cn\u002Fapps\u002Fdetail\u002FShangchenZhou\u002FCodeFormer)\n- **2023.04.19**: :whale: 训练代码和配置文件现已公开可用。\n- **2023.04.09**: 为裁剪和对齐的人脸图像添加了修复 (inpainting) 和上色 (colorization) 功能。\n- **2023.02.10**: 将 `dlib` 作为新的人脸检测选项包含在内，它能产生更准确的人脸身份识别。\n- **2022.10.05**: 支持视频输入 `--input_path [YOUR_VIDEO.mp4]`。尝试用它来增强您的视频！:clapper: \n- **2022.09.14**: 集成到 :hugs: [Hugging Face](https:\u002F\u002Fhuggingface.co\u002Fspaces)。试用在线演示！[![Hugging Face](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FDemo-%F0%9F%A4%97%20Hugging%20Face-blue)](https:\u002F\u002Fhuggingface.co\u002Fspaces\u002Fsczhou\u002FCodeFormer)\n- **2022.09.09**: 集成到 :rocket: [Replicate](https:\u002F\u002Freplicate.com\u002Fexplore)。试用在线演示！[![Replicate](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FDemo-%F0%9F%9A%80%20Replicate-blue)](https:\u002F\u002Freplicate.com\u002Fsczhou\u002Fcodeformer)\n- [**更多**](docs\u002Fhistory_changelog.md)\n\n### 待办事项\n- [x] 添加训练代码和配置文件\n- [x] 添加人脸修复 (inpainting) 的检查点 (checkpoint) 和脚本\n- [x] 添加人脸上色 (colorization) 的检查点 (checkpoint) 和脚本\n- [x] ~~添加背景图像增强~~\n\n#### :panda_face: 尝试增强老照片 \u002F 修复 AI 艺术\n[\u003Cimg 
src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fsczhou_CodeFormer_readme_0e2724d05b50.jpg\" height=\"226px\"\u002F>](https:\u002F\u002Fimgsli.com\u002FMTI3NTE2) [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fsczhou_CodeFormer_readme_4ff2a7e10ed2.jpg\" height=\"226px\"\u002F>](https:\u002F\u002Fimgsli.com\u002FMTI3NTE1) [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fsczhou_CodeFormer_readme_2b234de46ff9.jpg\" height=\"226px\"\u002F>](https:\u002F\u002Fimgsli.com\u002FMTI3NTIw) \n\n#### 人脸修复\n\n\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fsczhou_CodeFormer_readme_2ea40a221bdb.png\" width=\"400px\"\u002F> \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fsczhou_CodeFormer_readme_8964bb4c8f93.png\" width=\"400px\"\u002F>\n\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fsczhou_CodeFormer_readme_5bdc16099fd8.png\" width=\"400px\"\u002F> \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fsczhou_CodeFormer_readme_6969b60dd2e0.png\" width=\"400px\"\u002F>\n\n#### 人脸色彩增强与修复\n\n\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fsczhou_CodeFormer_readme_6ffbdc5dc875.png\" width=\"400px\"\u002F> \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fsczhou_CodeFormer_readme_08decd1622d5.png\" width=\"400px\"\u002F>\n\n#### 人脸修复 (Inpainting)\n\n\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fsczhou_CodeFormer_readme_0591da729be9.png\" width=\"400px\"\u002F> \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fsczhou_CodeFormer_readme_07388669ccda.png\" width=\"400px\"\u002F>\n\n\n\n### 依赖与环境安装\n\n- Pytorch >= 1.7.1\n- CUDA >= 10.1\n- `requirements.txt` 中的其他所需包\n```\n# git clone this repository\ngit clone https:\u002F\u002Fgithub.com\u002Fsczhou\u002FCodeFormer\ncd CodeFormer\n\n# create new anaconda env\nconda create -n codeformer python=3.8 -y\nconda activate 
codeformer\n\n# install python dependencies\npip3 install -r requirements.txt\npython basicsr\u002Fsetup.py develop\n\n# 仅用于通过 dlib 进行人脸检测或裁剪\nconda install -c conda-forge dlib\n```\n\u003C!-- conda install -c conda-forge dlib (仅用于通过 dlib 进行人脸检测或裁剪) -->\n\n### 快速推理\n\n#### 下载预训练模型：\n从 [[Releases](https:\u002F\u002Fgithub.com\u002Fsczhou\u002FCodeFormer\u002Freleases\u002Ftag\u002Fv0.1.0) | [Google Drive](https:\u002F\u002Fdrive.google.com\u002Fdrive\u002Ffolders\u002F1b_3qwrzY_kTQh0-SnBoGBgOrJ_PLZSKm?usp=sharing) | [OneDrive](https:\u002F\u002Fentuedu-my.sharepoint.com\u002F:f:\u002Fg\u002Fpersonal\u002Fs200094_e_ntu_edu_sg\u002FEvDxR7FcAbZMp_MA9ouq7aQB8XTppMb3-T0uGZ_2anI2mg?e=DXsJFo)] 下载 facelib 和 dlib 预训练模型到 `weights\u002Ffacelib` 文件夹。您可以手动下载预训练模型，或运行以下命令下载：\n```\npython scripts\u002Fdownload_pretrained_models.py facelib\n\n# 仅 dlib 人脸检测器需要\npython scripts\u002Fdownload_pretrained_models.py dlib\n```\n\n从 [[Releases](https:\u002F\u002Fgithub.com\u002Fsczhou\u002FCodeFormer\u002Freleases\u002Ftag\u002Fv0.1.0) | [Google Drive](https:\u002F\u002Fdrive.google.com\u002Fdrive\u002Ffolders\u002F1CNNByjHDFt0b95q54yMVp6Ifo5iuU6QS?usp=sharing) | [OneDrive](https:\u002F\u002Fentuedu-my.sharepoint.com\u002F:f:\u002Fg\u002Fpersonal\u002Fs200094_e_ntu_edu_sg\u002FEoKFj4wo8cdIn2-TY2IV6CYBhZ0pIG4kUOeHdPR_A5nlbg?e=AO8UN9)] 下载 CodeFormer 预训练模型到 `weights\u002FCodeFormer` 文件夹。您可以手动下载预训练模型，或运行以下命令下载：\n```\npython scripts\u002Fdownload_pretrained_models.py CodeFormer\n```\n\n#### 准备测试数据：\n您可以将测试图像放入 `inputs\u002FTestWhole` 文件夹。如果您想测试裁剪和对齐的人脸，可以将它们放在 `inputs\u002Fcropped_faces` 文件夹中。您可以通过运行以下命令获取裁剪和对齐的人脸：\n```\n# you may need to install dlib via: conda install -c conda-forge dlib\npython scripts\u002Fcrop_align_face.py -i [input folder] -o [output folder]\n```\n\n\n#### 测试：\n[注意] 如果您想在论文中与 CodeFormer 进行比较，请运行以下命令并指定 `--has_aligned`（针对裁剪和对齐的人脸），因为整张图像的命令涉及人脸-背景融合过程，这可能会破坏边界处的头发纹理，导致不公平的比较。\n\n保真度权重 *w* 位于 [0, 1] 之间。通常，较小的 *w* 倾向于产生更高质量的结果，而较大的 *w* 
会产生更高保真度的结果。结果将保存在 `results` 文件夹中。\n\n\n🧑🏻 人脸修复（裁剪和对齐的人脸）\n```\n# For cropped and aligned faces (512x512)\npython inference_codeformer.py -w 0.5 --has_aligned --input_path [image folder]|[image path]\n```\n\n:framed_picture: 整图增强\n```\n# For whole image\n# Add '--bg_upsampler realesrgan' to enhance the background regions with Real-ESRGAN\n# Add '--face_upsample' to further upsample the restored face with Real-ESRGAN\npython inference_codeformer.py -w 0.7 --input_path [image folder]|[image path]\n```\n\n:clapper: 视频增强\n```\n# 对于 Windows\u002FMac 用户，请先安装 ffmpeg\nconda install -c conda-forge ffmpeg\n```\n```\n# For video clips\n# Video path should end with '.mp4'|'.mov'|'.avi'\npython inference_codeformer.py --bg_upsampler realesrgan --face_upsample -w 1.0 --input_path [video path]\n```\n\n🌈 人脸着色（裁剪并对齐的人脸）\n```\n# For cropped and aligned faces (512x512)\n# Colorize black-and-white or faded photos\npython inference_colorization.py --input_path [image folder]|[image path]\n```\n\n🎨 人脸修复（Inpainting，裁剪并对齐的人脸）\n```\n# For cropped and aligned faces (512x512)\n# Inputs can be masked with a white brush using an image editing app (e.g., Photoshop)\n# (check out the examples in inputs\u002Fmasked_faces)\npython inference_inpainting.py --input_path [image folder]|[image path]\n```\n### 训练：\n训练命令可以在文档中找到：[英文](docs\u002Ftrain.md) **|** [简体中文](docs\u002Ftrain_CN.md)。\n\n### 许可证\n\n本项目采用 \u003Ca rel=\"license\" href=\"https:\u002F\u002Fgithub.com\u002Fsczhou\u002FCodeFormer\u002Fblob\u002Fmaster\u002FLICENSE\">NTU S-Lab License 1.0\u003C\u002Fa> 许可。重新分发和使用应遵循此许可证。\n\n---\n### 🐼 生态系统应用与部署\n\nCodeFormer 已被广泛采用，部署于超过 20 个在线应用、平台、API (应用程序接口) 服务和独立网站，并已集成到许多开源项目和工具包中。\n\n> 仅在 **Hugging Face Space**、**Replicate** 和 **OpenXLab** 上的演示是作者**维护的官方部署**。以下列出的所有其他演示、API、应用、网站和集成均为**第三方（非官方）**，与 CodeFormer 作者无关。请核实其合法性以避免潜在的经济损失。\n\n\n#### 网站（非官方）\n\n⚠️⚠️⚠️ 以下网站**不是官方网站，也不是由我们运营的**。它们未经任何许可或授权使用了我们的模型。请核实其合法性以避免潜在的经济损失。\n\n\n| 网站 | 链接 | 备注 |\n|---------|------|--------|\n| CodeFormer.net 
| https:\u002F\u002Fcodeformer.net\u002F | 非官方网站 |\n| CodeFormer.cn | https:\u002F\u002Fwww.codeformer.cn\u002F | 非官方网站 |\n| CodeFormerAI.com | https:\u002F\u002Fcodeformerai.com\u002F | 非官方网站 |\n\n#### 在线演示 \u002F API 平台\n\n| 平台 | 链接 | 备注 |\n|----------|------|--------|\n| Hugging Face | https:\u002F\u002Fhuggingface.co\u002Fspaces\u002Fsczhou\u002FCodeFormer | 由作者维护 |\n| Replicate | https:\u002F\u002Freplicate.com\u002Fsczhou\u002Fcodeformer | 由作者维护 |\n| OpenXLab | https:\u002F\u002Fopenxlab.org.cn\u002Fapps\u002Fdetail\u002FShangchenZhou\u002FCodeFormer | 由作者维护 |\n| Segmind | https:\u002F\u002Fwww.segmind.com\u002Fmodels\u002Fcodeformer | 非官方 |\n| Sieve | https:\u002F\u002Fwww.sievedata.com\u002Ffunctions\u002Fsieve\u002Fcodeformer | 非官方 |\n| Fal.ai | https:\u002F\u002Ffal.ai\u002Fmodels\u002Ffal-ai\u002Fcodeformer | 非官方 |\n| VaikerAI | https:\u002F\u002Fvaikerai.com\u002Fsczhou\u002Fcodeformer | 非官方 |\n| Scade.pro | https:\u002F\u002Fwww.scade.pro\u002Fprocessors\u002Flucataco-codeformer | 非官方 |\n| Grandline | https:\u002F\u002Fwww.grandline.ai\u002Fmodel\u002Fcodeformer | 非官方 |\n| AI Demos | https:\u002F\u002Faidemos.com\u002Ftools\u002Fcodeformer | 非官方 |\n| Synexa | https:\u002F\u002Fsynexa.ai\u002Fexplore\u002Fsczhou\u002Fcodeformer | 非官方 |\n| RentPrompts | https:\u002F\u002Frentprompts.ai\u002Fmodels\u002FCodeformer | 非官方 |\n| ElevaticsAI | https:\u002F\u002Felevatics.ai\u002Fmodels\u002Fsuper-resolution\u002Fcodeformer | 非官方 |\n| Anakin.ai | https:\u002F\u002Fanakin.ai\u002Fapps\u002Fcodeformer-online-face-restoration-by-codeformer-19343 | 非官方 |\n| Relayto | https:\u002F\u002Frelayto.com\u002Fexplore\u002Fcodeformer-yf9rj8kwc7zsr | 非官方 |\n\n\n#### 开源项目与工具包\n\n| 项目 \u002F 工具包 | 链接 | 备注 |\n|-------------------|------|--------|\n| Stable Diffusion GUI | https:\u002F\u002Fnmkd.itch.io\u002Ft2i-gui | 集成 |\n| Stable Diffusion WebUI | https:\u002F\u002Fgithub.com\u002FAUTOMATIC1111\u002Fstable-diffusion-webui | 集成 |\n| ChaiNNer | 
https:\u002F\u002Fgithub.com\u002FchaiNNer-org\u002FchaiNNer | 集成 |\n| PyPI | https:\u002F\u002Fpypi.org\u002Fproject\u002Fcodeformer\u002F ; https:\u002F\u002Fpypi.org\u002Fproject\u002Fcodeformer-pip\u002F | Python 包 |\n| ComfyUI | https:\u002F\u002Fstable-diffusion-art.com\u002Fcodeformer\u002F | 集成 |\n\n---\n### 致谢\n\n本项目基于 [BasicSR](https:\u002F\u002Fgithub.com\u002FXPixelGroup\u002FBasicSR)。部分代码来自 [Unleashing Transformers](https:\u002F\u002Fgithub.com\u002Fsamb-t\u002Funleashing-transformers)、[YOLOv5-face](https:\u002F\u002Fgithub.com\u002Fdeepcam-cn\u002Fyolov5-face) 和 [FaceXLib](https:\u002F\u002Fgithub.com\u002Fxinntao\u002Ffacexlib)。我们还采用了 [Real-ESRGAN](https:\u002F\u002Fgithub.com\u002Fxinntao\u002FReal-ESRGAN) 来支持背景图像增强。感谢他们的出色工作。\n\n### 引用\n如果我们的工作对您的研究有用，请考虑引用：\n\n    @inproceedings{zhou2022codeformer,\n        author = {Zhou, Shangchen and Chan, Kelvin C.K. and Li, Chongyi and Loy, Chen Change},\n        title = {Towards Robust Blind Face Restoration with Codebook Lookup TransFormer},\n        booktitle = {NeurIPS},\n        year = {2022}\n    }\n\n\n### 联系方式\n如果您有任何问题，请随时通过 `shangchenzhou@gmail.com` 联系我。","# CodeFormer 快速上手指南\n\nCodeFormer 是一款基于 Transformer 架构的鲁棒盲人脸修复工具（NeurIPS 2022），支持老照片修复、AI 绘画修复、人脸上色及补全等功能。\n\n## 1. 环境准备\n\n*   **操作系统**: Windows \u002F macOS \u002F Linux\n*   **Python 版本**: 3.8+\n*   **GPU 驱动**: CUDA >= 10.1\n*   **深度学习框架**: PyTorch >= 1.7.1\n*   **其他依赖**: `ffmpeg` (用于视频处理), `dlib` (可选，用于更精准的人脸检测)\n\n## 2. 
### 2.1 Clone the repository
```bash
git clone https://github.com/sczhou/CodeFormer
cd CodeFormer
```

### 2.2 Create and activate a virtual environment
```bash
conda create -n codeformer python=3.8 -y
conda activate codeformer
```

### 2.3 Install dependencies
```bash
pip3 install -r requirements.txt
python basicsr/setup.py develop
# Only needed if you want to use dlib for face detection
conda install -c conda-forge dlib
```

### 2.4 Download pretrained models
Run the following scripts to download the required weight files into their target directories:
```bash
# Download the support-library models (facelib & dlib)
python scripts/download_pretrained_models.py facelib
python scripts/download_pretrained_models.py dlib

# Download the core CodeFormer model
python scripts/download_pretrained_models.py CodeFormer
```

## 3. Basic Usage

### 3.1 Prepare test data
Put the images to be processed into the `inputs/TestWhole` folder (whole images) or the `inputs/cropped_faces` folder (cropped and aligned faces).

### 3.2 Run inference

#### Scenario A: Whole-image enhancement (recommended)
For images that include background; faces are detected and restored automatically.
```bash
python inference_codeformer.py -w 0.7 --input_path [image folder]|[image path]
```
*   `-w`: fidelity weight (0–1). Smaller values favor visual quality; larger values stay closer to the input (higher fidelity).
*   Results are saved in the `results` folder.

> **Advanced options**:
> *   Enhance the background: add `--bg_upsampler realesrgan`
> *   Upsample the restored faces: add `--face_upsample`

#### Scenario B: Face restoration (cropped/aligned)
For face images that are already cropped and aligned (512x512).
```bash
python inference_codeformer.py -w 0.5 --has_aligned --input_path [image folder]|[image path]
```

#### Scenario C: Video enhancement
Install `ffmpeg` first (`conda install -c conda-forge ffmpeg`).
```bash
python inference_codeformer.py --bg_upsampler realesrgan --face_upsample -w 1.0 --input_path [video path]
```

#### Scenario D: Face colorization and inpainting
```bash
# Colorize black-and-white photos
python inference_colorization.py --input_path [image folder]|[image path]

# Face inpainting (paint over the regions to mask with a brush beforehand)
python inference_inpainting.py --input_path [image folder]|[image path]
```

---
**💡 Try it online**
If you prefer not to deploy locally, try the demos maintained by the author:
*   [OpenXLab](https://openxlab.org.cn/apps/detail/ShangchenZhou/CodeFormer) (faster access from mainland China)
*   [Hugging Face](https://huggingface.co/spaces/sczhou/CodeFormer)
*   
[Replicate](https://replicate.com/sczhou/codeformer)

---

A digital archivist is processing a batch of 1980s family photos and needs to restore the blurry, damaged portraits so a client can produce a memorial album.

### Without CodeFormer
- Traditional retouching software struggles with heavy blur and noise; manual touch-ups are slow and easily distort facial features.
- Directly upscaling low-resolution faces leaves jagged edges, and key details such as eyes and hair strands are lost entirely.
- Colorizing black-and-white photos depends on manual grading; skin tones and lighting look unnatural and mannequin-like.
- Repairing scratches, creases, or occluded facial regions requires advanced Photoshop skills beyond ordinary users.

### With CodeFormer
- CodeFormer automatically detects and enhances blurry faces, greatly improving sharpness while preserving the subject's identity.
- The model intelligently reconstructs missing facial details and removes noise, keeping edges smooth after upscaling.
- Combined with colorization, black-and-white photos can be turned into realistic color images in one step, with soft, era-appropriate tones.
- The built-in face inpainting module fills scratches and occluded areas automatically, blending seamlessly without manual masking.

With deep learning, CodeFormer makes old-portrait restoration several times faster than traditional retouching, with far better results.

---

### Project Information

*   **Author**: Shangchen Zhou ([sczhou](https://github.com/sczhou)), Research Assistant Professor at MMLab@NTU, Nanyang Technological University, Singapore
*   **Contact**: shangchenzhou@gmail.com · shangchenzhou.com
*   **Languages**: Python 85.7%, Cuda 8.5%, C++ 5.7%
*   **Stars**: ~17,867
*   **License**: NOASSERTION
*   **Platforms**: Linux, macOS, Windows
*   **Hardware**: NVIDIA GPU with CUDA 10.1+ required; VRAM requirement unspecified
*   **Python version**: 3.8
*   **Key dependencies**: `torch>=1.7.1`, `basicsr`, `dlib`, `ffmpeg`
*   **Topics**: codebook, codeformer, face-enhancement, face-restoration, pytorch, super-resolution, vqgan, restoration
*   **Notes**: create an isolated conda environment; pretrained models are downloaded into the `weights` directory on first run; video processing requires ffmpeg; face restoration, colorization, and inpainting are all supported; whole-image and cropped-face inference use different commands

### FAQ

**Q: When reproducing CodeFormer, the results show lip color bleeding. What causes this?**

A: This is usually caused by a data distribution gap. The official open-source model was trained on the FFHQ dataset, so inference on Asian faces may skew toward Western facial feature distributions. Consider training all three stages from scratch and modifying the codebook to make it more robust. Also use the official config files, and keep the GPU count and batch size configuration consistent (e.g., when training on 4 GPUs, double the per-GPU batch size to keep the effective batch size 
unchanged). ([Issue #261](https://github.com/sczhou/CodeFormer/issues/261))

**Q: When training Stage 2, how should GPU count and batch size be configured to keep training behavior consistent?**

A: To keep the effective batch size constant, increase the per-GPU batch size proportionally when reducing the number of GPUs (e.g., from 8 cards to 4). Use the same config file and change only the hardware-related settings. Users have reported that adjusting the codebook and the per-stage training configuration resolves both poor reconstruction quality and color bleeding. ([Issue #261](https://github.com/sczhou/CodeFormer/issues/261))

**Q: After inference, the images in `final_results` are blurry while those in `restored_faces` are sharp. How can I fix this?**

A: This is usually an environment compatibility issue. Try replacing Anaconda with a plain pip virtual environment (`python -m venv venv`), activate it, install the requirements, and add torch. Also make sure you have fetched the latest code updates, and restart the machine if necessary to rule out cache effects. ([Issue #274](https://github.com/sczhou/CodeFormer/issues/274))

**Q: In AI-generated 512x512 images the face occupies only 5–10% of the frame and detection fails. Any workarounds?**

A: The developers switched the default face detection model to `retinaface_resnet50` to improve small-face detection. If automatic detection still fails, crop the face region manually (e.g., in Photoshop), run CodeFormer on the crop, then composite the restored face back into the original image. ([Issue #19](https://github.com/sczhou/CodeFormer/issues/19))

**Q: Does CodeFormer support GFPGAN-style whole-image background enhancement (tiled processing)?**

A: The project focuses on face enhancement, so background enhancement is comparatively limited. Recent updates did fix the bounding-box seam lines that appeared when blending faces back into the background, and made `retinaface_resnet50` the default detector. Watch the repository for further improvements. ([Issue #6](https://github.com/sczhou/CodeFormer/issues/6))

**Q: When converting CodeFormer to ONNX for AMD GPUs, the fidelity weight (`w`) parameter stops working and a TracerWarning appears. How can this be solved?**

A: The ONNX tracer records plain Python values as constants, so `w` is frozen into the exported graph. Pass `w` and other dynamic parameters explicitly as input tensors instead of hard-coding them inside the model. Optimization tools such as olive-ai may help with the adaptation; also check the logic around `int(c)` and `if w > 0` in `vqgan_arch.py` and `codeformer_arch.py`. ([Issue #252](https://github.com/sczhou/CodeFormer/issues/252))

### Releases

*   **v0.1.0** (2022-08-10): This release is mainly for storing pre-trained models, etc.
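The effective-batch-size bookkeeping described in the FAQ answers on multi-GPU training can be sketched in a few lines. This is an illustrative helper, not part of the CodeFormer codebase, and the 8-GPU × batch-4 reference setup is assumed for the example only:

```python
def per_gpu_batch_size(effective_batch_size: int, num_gpus: int) -> int:
    """Per-GPU batch size needed to keep the effective batch size fixed.

    Effective batch size = num_gpus * per-GPU batch size, so halving the
    GPU count means doubling the per-GPU batch size.
    """
    if effective_batch_size % num_gpus != 0:
        raise ValueError("effective batch size must divide evenly across GPUs")
    return effective_batch_size // num_gpus

# Hypothetical reference setup: 8 GPUs x batch 4 -> effective batch size 32.
effective = 8 * 4

# Dropping to 4 GPUs: the per-GPU batch size must double.
print(per_gpu_batch_size(effective, 4))  # -> 8
print(per_gpu_batch_size(effective, 8))  # -> 4
```

Applying the same rule in reverse explains the FAQ advice: going from 8 cards to 4 while keeping the config file unchanged would silently halve the effective batch size, which can change training dynamics.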