[{"data":1,"prerenderedAt":-1},["ShallowReactive",2],{"similar-neuralchen--SimSwap":3,"tool-neuralchen--SimSwap":64},[4,17,27,35,43,56],{"id":5,"name":6,"github_repo":7,"description_zh":8,"stars":9,"difficulty_score":10,"last_commit_at":11,"category_tags":12,"status":16},3808,"stable-diffusion-webui","AUTOMATIC1111\u002Fstable-diffusion-webui","stable-diffusion-webui 是一个基于 Gradio 构建的网页版操作界面，旨在让用户能够轻松地在本地运行和使用强大的 Stable Diffusion 图像生成模型。它解决了原始模型依赖命令行、操作门槛高且功能分散的痛点，将复杂的 AI 绘图流程整合进一个直观易用的图形化平台。\n\n无论是希望快速上手的普通创作者、需要精细控制画面细节的设计师，还是想要深入探索模型潜力的开发者与研究人员，都能从中获益。其核心亮点在于极高的功能丰富度：不仅支持文生图、图生图、局部重绘（Inpainting）和外绘（Outpainting）等基础模式，还独创了注意力机制调整、提示词矩阵、负向提示词以及“高清修复”等高级功能。此外，它内置了 GFPGAN 和 CodeFormer 等人脸修复工具，支持多种神经网络放大算法，并允许用户通过插件系统无限扩展能力。即使是显存有限的设备，stable-diffusion-webui 也提供了相应的优化选项，让高质量的 AI 艺术创作变得触手可及。",162132,3,"2026-04-05T11:01:52",[13,14,15],"开发框架","图像","Agent","ready",{"id":18,"name":19,"github_repo":20,"description_zh":21,"stars":22,"difficulty_score":23,"last_commit_at":24,"category_tags":25,"status":16},1381,"everything-claude-code","affaan-m\u002Feverything-claude-code","everything-claude-code 是一套专为 AI 编程助手（如 Claude Code、Codex、Cursor 等）打造的高性能优化系统。它不仅仅是一组配置文件，而是一个经过长期实战打磨的完整框架，旨在解决 AI 代理在实际开发中面临的效率低下、记忆丢失、安全隐患及缺乏持续学习能力等核心痛点。\n\n通过引入技能模块化、直觉增强、记忆持久化机制以及内置的安全扫描功能，everything-claude-code 能显著提升 AI 在复杂任务中的表现，帮助开发者构建更稳定、更智能的生产级 AI 代理。其独特的“研究优先”开发理念和针对 Token 消耗的优化策略，使得模型响应更快、成本更低，同时有效防御潜在的攻击向量。\n\n这套工具特别适合软件开发者、AI 研究人员以及希望深度定制 AI 工作流的技术团队使用。无论您是在构建大型代码库，还是需要 AI 协助进行安全审计与自动化测试，everything-claude-code 都能提供强大的底层支持。作为一个曾荣获 Anthropic 黑客大奖的开源项目，它融合了多语言支持与丰富的实战钩子（hooks），让 AI 真正成长为懂上",138956,2,"2026-04-05T11:33:21",[13,15,26],"语言模型",{"id":28,"name":29,"github_repo":30,"description_zh":31,"stars":32,"difficulty_score":23,"last_commit_at":33,"category_tags":34,"status":16},2271,"ComfyUI","Comfy-Org\u002FComfyUI","ComfyUI 是一款功能强大且高度模块化的视觉 AI 引擎，专为设计和执行复杂的 Stable Diffusion 图像生成流程而打造。它摒弃了传统的代码编写模式，采用直观的节点式流程图界面，让用户通过连接不同的功能模块即可构建个性化的生成管线。\n\n这一设计巧妙解决了高级 AI 
绘图工作流配置复杂、灵活性不足的痛点。用户无需具备编程背景，也能自由组合模型、调整参数并实时预览效果，轻松实现从基础文生图到多步骤高清修复等各类复杂任务。ComfyUI 拥有极佳的兼容性，不仅支持 Windows、macOS 和 Linux 全平台，还广泛适配 NVIDIA、AMD、Intel 及苹果 Silicon 等多种硬件架构，并率先支持 SDXL、Flux、SD3 等前沿模型。\n\n无论是希望深入探索算法潜力的研究人员和开发者，还是追求极致创作自由度的设计师与资深 AI 绘画爱好者，ComfyUI 都能提供强大的支持。其独特的模块化架构允许社区不断扩展新功能，使其成为当前最灵活、生态最丰富的开源扩散模型工具之一，帮助用户将创意高效转化为现实。",107662,"2026-04-03T11:11:01",[13,14,15],{"id":36,"name":37,"github_repo":38,"description_zh":39,"stars":40,"difficulty_score":23,"last_commit_at":41,"category_tags":42,"status":16},3704,"NextChat","ChatGPTNextWeb\u002FNextChat","NextChat 是一款轻量且极速的 AI 助手，旨在为用户提供流畅、跨平台的大模型交互体验。它完美解决了用户在多设备间切换时难以保持对话连续性，以及面对众多 AI 模型不知如何统一管理的痛点。无论是日常办公、学习辅助还是创意激发，NextChat 都能让用户随时随地通过网页、iOS、Android、Windows、MacOS 或 Linux 端无缝接入智能服务。\n\n这款工具非常适合普通用户、学生、职场人士以及需要私有化部署的企业团队使用。对于开发者而言，它也提供了便捷的自托管方案，支持一键部署到 Vercel 或 Zeabur 等平台。\n\nNextChat 的核心亮点在于其广泛的模型兼容性，原生支持 Claude、DeepSeek、GPT-4 及 Gemini Pro 等主流大模型，让用户在一个界面即可自由切换不同 AI 能力。此外，它还率先支持 MCP（Model Context Protocol）协议，增强了上下文处理能力。针对企业用户，NextChat 提供专业版解决方案，具备品牌定制、细粒度权限控制、内部知识库整合及安全审计等功能，满足公司对数据隐私和个性化管理的高标准要求。",87618,"2026-04-05T07:20:52",[13,26],{"id":44,"name":45,"github_repo":46,"description_zh":47,"stars":48,"difficulty_score":23,"last_commit_at":49,"category_tags":50,"status":16},2268,"ML-For-Beginners","microsoft\u002FML-For-Beginners","ML-For-Beginners 是由微软推出的一套系统化机器学习入门课程，旨在帮助零基础用户轻松掌握经典机器学习知识。这套课程将学习路径规划为 12 周，包含 26 节精炼课程和 52 道配套测验，内容涵盖从基础概念到实际应用的完整流程，有效解决了初学者面对庞大知识体系时无从下手、缺乏结构化指导的痛点。\n\n无论是希望转型的开发者、需要补充算法背景的研究人员，还是对人工智能充满好奇的普通爱好者，都能从中受益。课程不仅提供了清晰的理论讲解，还强调动手实践，让用户在循序渐进中建立扎实的技能基础。其独特的亮点在于强大的多语言支持，通过自动化机制提供了包括简体中文在内的 50 多种语言版本，极大地降低了全球不同背景用户的学习门槛。此外，项目采用开源协作模式，社区活跃且内容持续更新，确保学习者能获取前沿且准确的技术资讯。如果你正寻找一条清晰、友好且专业的机器学习入门之路，ML-For-Beginners 将是理想的起点。",84991,"2026-04-05T10:45:23",[14,51,52,53,15,54,26,13,55],"数据工具","视频","插件","其他","音频",{"id":57,"name":58,"github_repo":59,"description_zh":60,"stars":61,"difficulty_score":10,"last_commit_at":62,"category_tags":63,"status":16},3128,"ragflow","infiniflow\u002Fragflow","RAGFlow 
是一款领先的开源检索增强生成（RAG）引擎，旨在为大语言模型构建更精准、可靠的上下文层。它巧妙地将前沿的 RAG 技术与智能体（Agent）能力相结合，不仅支持从各类文档中高效提取知识，还能让模型基于这些知识进行逻辑推理和任务执行。\n\n在大模型应用中，幻觉问题和知识滞后是常见痛点。RAGFlow 通过深度解析复杂文档结构（如表格、图表及混合排版），显著提升了信息检索的准确度，从而有效减少模型“胡编乱造”的现象，确保回答既有据可依又具备时效性。其内置的智能体机制更进一步，使系统不仅能回答问题，还能自主规划步骤解决复杂问题。\n\n这款工具特别适合开发者、企业技术团队以及 AI 研究人员使用。无论是希望快速搭建私有知识库问答系统，还是致力于探索大模型在垂直领域落地的创新者，都能从中受益。RAGFlow 提供了可视化的工作流编排界面和灵活的 API 接口，既降低了非算法背景用户的上手门槛，也满足了专业开发者对系统深度定制的需求。作为基于 Apache 2.0 协议开源的项目，它正成为连接通用大模型与行业专有知识之间的重要桥梁。",77062,"2026-04-04T04:44:48",[15,14,13,26,54],{"id":65,"github_repo":66,"name":67,"description_en":68,"description_zh":69,"ai_summary_zh":69,"readme_en":70,"readme_zh":71,"quickstart_zh":72,"use_case_zh":73,"hero_image_url":74,"owner_login":75,"owner_name":76,"owner_avatar_url":77,"owner_bio":78,"owner_company":79,"owner_location":80,"owner_email":81,"owner_twitter":82,"owner_website":82,"owner_url":83,"languages":84,"stars":97,"forks":98,"last_commit_at":99,"license":100,"difficulty_score":10,"env_os":101,"env_gpu":102,"env_ram":101,"env_deps":103,"category_tags":116,"github_topics":117,"view_count":127,"oss_zip_url":82,"oss_zip_packed_at":82,"status":16,"created_at":128,"updated_at":129,"faqs":130,"releases":161},928,"neuralchen\u002FSimSwap","SimSwap","An arbitrary face-swapping framework on images and videos with one single trained model!","SimSwap 是一个高效的人脸交换框架，能够在图片和视频中实现高质量的任意人脸替换。它最大的特点是仅需一个训练好的模型，就能处理不同的人脸交换需求，解决了传统方法需要针对特定人脸训练不同模型的局限性。\n\n对于研究人员和开发者来说，SimSwap 提供了灵活的训练和测试代码，并支持高分辨率版本（SimSwap-HQ），适合用于多媒体处理、计算机视觉等领域的研究。同时，由于其开源性质，开发者可以轻松集成到自己的项目中，或者基于它进行二次开发。\n\n技术亮点方面，SimSwap 使用 PyTorch 实现，支持 Docker 部署，并提供了高质量的数据集（如 VGGFace2-HQ）用于训练和优化模型。此外，团队还积极维护代码库，修复已知问题并持续更新功能，确保工具的稳定性和易用性。\n\n需要注意的是，SimSwap 仅限于技术研究和学术用途，请勿将其应用于非法或不道德的场景。如果你对人脸交换技术感兴趣，或者希望探索多媒体处理的前沿应用，SimSwap 将是一个非常值得尝试的选择。","# SimSwap: An Efficient Framework For High Fidelity Face Swapping\n## Proceedings of the 28th ACM International Conference on Multimedia\n**The official repository with Pytorch**\n\n**Our method can realize **arbitrary 
face swapping** on images and videos with **one single trained model**.**\n\n***We are recruiting full-time engineers. If you are interested, please send an [email](mailto:chen19910528@sjtu.edu.cn?subject=[GitHub]%20Source%20Han%20Sans) to my team. Please refer to the website for specific recruitment conditions: [Requirements](https:\u002F\u002Fjoin.sjtu.edu.cn\u002FAdmin\u002FQsPreview.aspx?qsid=44f5413a90974114b8f5e643177ef32d)***\n\nTraining and test code are now available!\n[ \u003Ca href=\"https:\u002F\u002Fcolab.research.google.com\u002Fgithub\u002Fneuralchen\u002FSimSwap\u002Fblob\u002Fmain\u002Ftrain.ipynb\">\u003Cimg src=\"https:\u002F\u002Fcolab.research.google.com\u002Fassets\u002Fcolab-badge.svg\" alt=\"google colab logo\">\u003C\u002Fa>](https:\u002F\u002Fcolab.research.google.com\u002Fgithub\u002Fneuralchen\u002FSimSwap\u002Fblob\u002Fmain\u002Ftrain.ipynb)\n\nWe are working on our upcoming paper SimSwap++; stay tuned!\n\nThe high resolution version of ***SimSwap-HQ*** is supported!\n\n[![simswaplogo](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fneuralchen_SimSwap_readme_5fe7632dadce.png)](https:\u002F\u002Fgithub.com\u002Fneuralchen\u002FSimSwap)\n\nOur paper can be downloaded from [[Arxiv]](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2106.06340v1.pdf) [[ACM DOI]](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002F10.1145\u002F3394171.3413630)\n\n\n### This project also received support from [SocialBook](https:\u002F\u002Fsocialbook.io).\n\u003C!-- [![logo](.\u002Fsimswaplogo\u002Fsocialbook_logo.2020.357eed90add7705e54a8.svg)](https:\u002F\u002Fsocialbook.io) -->\n\u003Cimg width=30% src=\".\u002Fsimswaplogo\u002Fsocialbook_logo.2020.357eed90add7705e54a8.svg\"\u002F>\n\n\u003C!-- [[Google Drive]](https:\u002F\u002Fdrive.google.com\u002Ffile\u002Fd\u002F1fcfWOGt1mkBo7F0gXVKitf8GJMAXQxZD\u002Fview?usp=sharing) \n[[Baidu Drive ]](https:\u002F\u002Fpan.baidu.com\u002Fs\u002F1-TKFuycRNUKut8hn4IimvA) Password: ```ummt``` -->\n\n## 
Attention\n***This project is for technical and academic use only. Please do not apply it to illegal and unethical scenarios.***\n\n***In the event of violation of the legal and ethical requirements of the user's country or region, this code repository is exempt from liability***\n\n***Please do not ignore the content at the end of this README!***\n\nIf you find this project useful, please star it. It is the greatest appreciation of our work.\n\n## Top News \u003Cimg width=8% src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fneuralchen_SimSwap_readme_273c0f42f6dd.gif\"\u002F>\n\n**`2023-09-26`**: We fixed bugs in Colab!\n\n**`2023-04-25`**: We fixed the \"AttributeError: 'SGD' object has no attribute 'defaults'\" bug. If you have already downloaded **arcface_checkpoint.tar**, please **download it again**. You also need to update the scripts in ```.\u002Fmodels\u002F```.\n\n**`2022-04-21`**: For resource-limited users, we provide the cropped VGGFace2-224 dataset [[Google Driver] VGGFace2-224 (10.8G)](https:\u002F\u002Fdrive.google.com\u002Ffile\u002Fd\u002F19pWvdEHS-CEG6tW3PdxdtZ5QEymVjImc\u002Fview?usp=sharing) [[Baidu Driver]](https:\u002F\u002Fpan.baidu.com\u002Fs\u002F1OiwLJHVBSYB4AY2vEcfN0A) [Password: lrod].\n\n**`2022-04-20`**: Training scripts are now available. We highly recommend training the SimSwap model with our released high-quality dataset [VGGFace2-HQ](https:\u002F\u002Fgithub.com\u002FNNNNAI\u002FVGGFace2-HQ).\n\n**`2021-11-24`**: We have trained a beta version of ***SimSwap-HQ*** on [VGGFace2-HQ](https:\u002F\u002Fgithub.com\u002FNNNNAI\u002FVGGFace2-HQ) and open-sourced the checkpoint of this model (if you think SimSwap 512 is cool, please star our [VGGFace2-HQ](https:\u002F\u002Fgithub.com\u002FNNNNAI\u002FVGGFace2-HQ) repo). 
Please don’t forget to go to [Preparation](.\u002Fdocs\u002Fguidance\u002Fpreparation.md) and [Inference for image or video face swapping](.\u002Fdocs\u002Fguidance\u002Fusage.md) to check the latest setup.\n\n**`2021-11-23`**: The Google Drive link for [VGGFace2-HQ](https:\u002F\u002Fgithub.com\u002FNNNNAI\u002FVGGFace2-HQ) has been released. \n\n**`2021-11-17`**: We released a high-resolution face dataset, [VGGFace2-HQ](https:\u002F\u002Fgithub.com\u002FNNNNAI\u002FVGGFace2-HQ), along with the method used to generate it. This dataset is for research purposes. \n\n**`2021-08-30`**: Docker is now supported; please refer [here](https:\u002F\u002Freplicate.ai\u002Fneuralchen\u002Fsimswap-image) for details.\n\n**`2021-08-17`**: We have updated [Preparation](.\u002Fdocs\u002Fguidance\u002Fpreparation.md). The main change is that the GPU version of onnx is now installed by default, which greatly reduces the time needed to process a video.\n\n**`2021-07-19`**: ***The obvious border abruptness has been resolved***. We added the ability to use a mask and upgraded the old algorithm for a better visual effect; please go to [Inference for image or video face swapping](.\u002Fdocs\u002Fguidance\u002Fusage.md) for details. Please don’t forget to go to [Preparation](.\u002Fdocs\u002Fguidance\u002Fpreparation.md) to check the latest setup. 
(Thanks for the help from [@woctezuma](https:\u002F\u002Fgithub.com\u002Fwoctezuma) and [@instant-high](https:\u002F\u002Fgithub.com\u002Finstant-high))\n\n## The first open source high resolution dataset for face swapping!!!\n## High Resolution Dataset [VGGFace2-HQ](https:\u002F\u002Fgithub.com\u002FNNNNAI\u002FVGGFace2-HQ)\n\n[![logo](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fneuralchen_SimSwap_readme_0ebdba96057f.png)](https:\u002F\u002Fgithub.com\u002FNNNNAI\u002FVGGFace2-HQ)\n\n\n\n\n## Dependencies\n- python3.6+\n- pytorch1.5+\n- torchvision\n- opencv\n- pillow\n- numpy\n- imageio\n- moviepy\n- insightface\n- ***timm==0.5.4***\n\n## Training\n\n[Preparation](.\u002Fdocs\u002Fguidance\u002Fpreparation.md)\n\nThe training script is slightly different from the original version, e.g., we replace the patch discriminator with the projected discriminator, which saves a lot of hardware overhead and achieves slightly better results.\n\nIn order to ensure the normal training, the batch size must be greater than 1.\n\nFriendly reminder, due to the difference in training settings, the user-trained model will have subtle differences in visual effects from the pre-trained model we provide.\n\n- Train 224 models with VGGFace2 224*224 [[Google Driver] VGGFace2-224 (10.8G)](https:\u002F\u002Fdrive.google.com\u002Ffile\u002Fd\u002F19pWvdEHS-CEG6tW3PdxdtZ5QEymVjImc\u002Fview?usp=sharing) [[Baidu Driver] ](https:\u002F\u002Fpan.baidu.com\u002Fs\u002F1OiwLJHVBSYB4AY2vEcfN0A) [Password: lrod]\n\nFor faster convergence and better results, a large batch size (more than 16) is recommended!\n\n***We recommend training more than 400K iterations (batch size is 16), 600K~800K will be better, more iterations will not be recommended.***\n\n\n```\npython train.py --name simswap224_test --batchSize 8  --gpu_ids 0 --dataset \u002Fpath\u002Fto\u002FVGGFace2HQ --Gdeep False\n```\n\n[Colab demo for training 224 model][ \u003Ca 
href=\"https:\u002F\u002Fcolab.research.google.com\u002Fgithub\u002Fneuralchen\u002FSimSwap\u002Fblob\u002Fmain\u002Ftrain.ipynb\">\u003Cimg src=\"https:\u002F\u002Fcolab.research.google.com\u002Fassets\u002Fcolab-badge.svg\" alt=\"google colab logo\">\u003C\u002Fa>](https:\u002F\u002Fcolab.research.google.com\u002Fgithub\u002Fneuralchen\u002FSimSwap\u002Fblob\u002Fmain\u002Ftrain.ipynb)\n\nFor faster convergence and better results, a large batch size (more than 16) is recommended!\n\n- Train 512 models with VGGFace2-HQ 512*512 [VGGFace2-HQ](https:\u002F\u002Fgithub.com\u002FNNNNAI\u002FVGGFace2-HQ).\n```\npython train.py --name simswap512_test  --batchSize 16  --gpu_ids 0 --dataset \u002Fpath\u002Fto\u002FVGGFace2HQ --Gdeep True\n```\n\n\n\n## Inference with a pretrained SimSwap model\n[Preparation](.\u002Fdocs\u002Fguidance\u002Fpreparation.md)\n\n[Inference for image or video face swapping](.\u002Fdocs\u002Fguidance\u002Fusage.md)\n\n[Colab demo](https:\u002F\u002Fcolab.research.google.com\u002Fgithub\u002Fneuralchen\u002FSimSwap\u002Fblob\u002Fmain\u002FSimSwap%20colab.ipynb)\n\n\u003Cdiv style=\"background: yellow; width:140px; font-weight:bold;font-family: sans-serif;\">Stronger feature\u003C\u002Fdiv>\n\n[Colab for switching specific faces in multi-face videos][ \u003Ca href=\"https:\u002F\u002Fcolab.research.google.com\u002Fgithub\u002Fneuralchen\u002FSimSwap\u002Fblob\u002Fmain\u002FMultiSpecific.ipynb\">\u003Cimg src=\"https:\u002F\u002Fcolab.research.google.com\u002Fassets\u002Fcolab-badge.svg\" alt=\"google colab logo\">\u003C\u002Fa>](https:\u002F\u002Fcolab.research.google.com\u002Fgithub\u002Fneuralchen\u002FSimSwap\u002Fblob\u002Fmain\u002FMultiSpecific.ipynb)\n\n[Image face swapping demo & Docker image on Replicate](https:\u002F\u002Freplicate.ai\u002Fneuralchen\u002Fsimswap-image)\n\n\n\n## Video\n\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fneuralchen_SimSwap_readme_6d7847d58f00.webp\"\u002F>\n\u003Cdiv>\n\u003Cimg 
width=24% src=\".https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fneuralchen_SimSwap_readme_beae05aa20d8.webp\"\u002F>\n\u003Cimg width=24% src=\".https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fneuralchen_SimSwap_readme_54c6e884bd01.webp\"\u002F>\n\u003Cimg width=24% src=\".https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fneuralchen_SimSwap_readme_345e12ba8dcb.webp\"\u002F>\n\u003Cimg width=24% src=\".https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fneuralchen_SimSwap_readme_69a04fcce071.webp\"\u002F>\n\u003C\u002Fdiv>\n\u003Cdiv>\n\u003Cimg width=49% src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fneuralchen_SimSwap_readme_acb0fab7d618.webp\"\u002F>\n\u003Cimg width=49% src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fneuralchen_SimSwap_readme_8ee92e4e9d97.webp\"\u002F>\n\u003C\u002Fdiv>\n\n## Results\n![Results1](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fneuralchen_SimSwap_readme_4aa5ecca869a.png)\n\n![Results2](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fneuralchen_SimSwap_readme_0159dd1756b2.png)\n\n\n\u003C!-- ![video2](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fneuralchen_SimSwap_readme_beae05aa20d8.webp)\n![video3](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fneuralchen_SimSwap_readme_54c6e884bd01.webp)\n![video4](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fneuralchen_SimSwap_readme_345e12ba8dcb.webp)\n![video5](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fneuralchen_SimSwap_readme_69a04fcce071.webp) -->\n\n\n**High-quality videos can be found in the link below:**\n\n[[Mama(video) 1080p]](https:\u002F\u002Fdrive.google.com\u002Ffile\u002Fd\u002F1mnSlwzz7f4H2O7UwApAHo64mgK4xSNyK\u002Fview?usp=sharing)\n\n[[Google Drive link for video 1]](https:\u002F\u002Fdrive.google.com\u002Ffile\u002Fd\u002F1hdne7Gw39d34zt3w1NYV3Ln5cT8PfCNm\u002Fview?usp=sharing)\n\n[[Google Drive link for video 
2]](https:\u002F\u002Fdrive.google.com\u002Ffile\u002Fd\u002F1bDEg_pVeFYLnf9QLSMuG8bsjbRPk0X5_\u002Fview?usp=sharing)\n\n[[Google Drive link for video 3]](https:\u002F\u002Fdrive.google.com\u002Ffile\u002Fd\u002F1oftHAnLmgFis4XURcHTccGSWbWSXYKK1\u002Fview?usp=sharing)\n\n[[Baidu Drive link for video]](https:\u002F\u002Fpan.baidu.com\u002Fs\u002F1WTS6jm2TY17bYJurw57LUg ) Password: ```b26n```\n\n[[Online Video]](https:\u002F\u002Fwww.bilibili.com\u002Fvideo\u002FBV12v411p7j5\u002F)\n\n## User cases\nIf you have some interesting results after using our project and are willing to share them, you can contact us by email or share them directly in an issue. Later, we may add a separate section to showcase these results, which should be cool.\n\nAt the same time, if you have suggestions for our project, please feel free to ask questions in an issue, or contact us directly via email: [email1](mailto:chenxuanhongzju@outlook.com), [email2](mailto:nicklau26@foxmail.com), [email3](mailto:ziangliu824@gmail.com). (All three can be contacted; just choose any one.)\n\n## License\nFor academic and non-commercial use only. The whole project is under the CC-BY-NC 4.0 license. 
See [LICENSE](https:\u002F\u002Fgithub.com\u002Fneuralchen\u002FSimSwap\u002Fblob\u002Fmain\u002FLICENSE) for additional details.\n\n\n## To cite our papers\n```\n@inproceedings{DBLP:conf\u002Fmm\u002FChenCNG20,\n  author    = {Renwang Chen and\n               Xuanhong Chen and\n               Bingbing Ni and\n               Yanhao Ge},\n  title     = {SimSwap: An Efficient Framework For High Fidelity Face Swapping},\n  booktitle = {{MM} '20: The 28th {ACM} International Conference on Multimedia},\n  year      = {2020}\n}\n```\n```\n@Article{simswapplusplus,\n    author  = {Xuanhong Chen and\n              Bingbing Ni and\n              Yutian Liu and\n              Naiyuan Liu and\n              Zhilin Zeng and\n              Hang Wang},\n    title   = {SimSwap++: Towards Faster and High-Quality Identity Swapping},\n    journal = {{IEEE} Trans. Pattern Anal. Mach. Intell.},\n    volume  = {46},\n    number  = {1},\n    pages   = {576--592},\n    year    = {2024}\n}\n```\n\n## Related Projects\n\n**Please visit our another ACMMM2020 high-quality style transfer project**\n\n[![logo](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fneuralchen_SimSwap_readme_65d89d4509b6.png)](https:\u002F\u002Fgithub.com\u002Fneuralchen\u002FASMAGAN)\n\n[![title](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fneuralchen_SimSwap_readme_63e022095f34.png)](https:\u002F\u002Fgithub.com\u002Fneuralchen\u002FASMAGAN)\n\n**Please visit our AAAI2021 sketch based rendering project**\n\n[![logo](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fneuralchen_SimSwap_readme_3431fee26a62.gif)](https:\u002F\u002Fgithub.com\u002FTZYSJTU\u002FSketch-Generation-with-Drawing-Process-Guided-by-Vector-Flow-and-Grayscale)\n[![title](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fneuralchen_SimSwap_readme_841238d7aff1.png)](https:\u002F\u002Fgithub.com\u002FTZYSJTU\u002FSketch-Generation-with-Drawing-Process-Guided-by-Vector-Flow-and-Grayscale)\n\n**Please visit our high resolution 
face dataset VGGFace2-HQ**\n\n[![logo](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fneuralchen_SimSwap_readme_0ebdba96057f.png)](https:\u002F\u002Fgithub.com\u002FNNNNAI\u002FVGGFace2-HQ)\n\nLearn about our other projects \n\n[[VGGFace2-HQ]](https:\u002F\u002Fgithub.com\u002FNNNNAI\u002FVGGFace2-HQ);\n\n[[RainNet]](https:\u002F\u002Fneuralchen.github.io\u002FRainNet);\n\n[[Sketch Generation]](https:\u002F\u002Fgithub.com\u002FTZYSJTU\u002FSketch-Generation-with-Drawing-Process-Guided-by-Vector-Flow-and-Grayscale);\n\n[[CooGAN]](https:\u002F\u002Fgithub.com\u002Fneuralchen\u002FCooGAN);\n\n[[Knowledge Style Transfer]](https:\u002F\u002Fgithub.com\u002FAceSix\u002FKnowledge_Transfer);\n\n[[SimSwap]](https:\u002F\u002Fgithub.com\u002Fneuralchen\u002FSimSwap);\n\n[[ASMA-GAN]](https:\u002F\u002Fgithub.com\u002Fneuralchen\u002FASMAGAN);\n\n[[SNGAN-Projection-pytorch]](https:\u002F\u002Fgithub.com\u002Fneuralchen\u002FSNGAN_Projection)\n\n[[Pretrained_VGG19]](https:\u002F\u002Fgithub.com\u002Fneuralchen\u002FPretrained_VGG19).\n\n## Acknowledgements\n\n\u003C!--ts-->\n* [Deepfacelab](https:\u002F\u002Fgithub.com\u002Fiperov\u002FDeepFaceLab)\n* [Insightface](https:\u002F\u002Fgithub.com\u002Fdeepinsight\u002Finsightface)\n* [Face-parsing.PyTorch](https:\u002F\u002Fgithub.com\u002Fzllrunning\u002Fface-parsing.PyTorch)\n* [BiSeNet](https:\u002F\u002Fgithub.com\u002FCoinCheung\u002FBiSeNet)\n\u003C!--te-->\n","# SimSwap: 一个高效的高保真人脸交换框架\n## 第28届ACM国际多媒体会议论文集\n**官方 Pytorch（一种深度学习框架）代码仓库**\n\n**我们的方法可以使用**单个训练好的模型**在图像和视频上实现**任意人脸交换**。**\n\n***我们正在招聘全职工程师。如果你感兴趣，请发送[邮件](mailto:chen19910528@sjtu.edu.cn?subject=[GitHub]%20Source%20Han%20Sans)到我的团队。具体招聘条件请参考网站：[要求](https:\u002F\u002Fjoin.sjtu.edu.cn\u002FAdmin\u002FQsPreview.aspx?qsid=44f5413a90974114b8f5e643177ef32d)***\n\n训练和测试代码现已发布！\n[ \u003Ca href=\"https:\u002F\u002Fcolab.research.google.com\u002Fgithub\u002Fneuralchen\u002FSimSwap\u002Fblob\u002Fmain\u002Ftrain.ipynb\">\u003Cimg 
src=\"https:\u002F\u002Fcolab.research.google.com\u002Fassets\u002Fcolab-badge.svg\" alt=\"google colab logo\">\u003C\u002Fa>](https:\u002F\u002Fcolab.research.google.com\u002Fgithub\u002Fneuralchen\u002FSimSwap\u002Fblob\u002Fmain\u002Ftrain.ipynb)\n\n我们正在与即将发表的论文 SimSwap++ 合作，敬请期待！\n\n支持高分辨率版本的 ***SimSwap-HQ***！\n\n[![simswaplogo](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fneuralchen_SimSwap_readme_5fe7632dadce.png)](https:\u002F\u002Fgithub.com\u002Fneuralchen\u002FSimSwap)\n\n我们的论文可以从以下链接下载：[[Arxiv]](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2106.06340v1.pdf) [[ACM DOI]](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002F10.1145\u002F3394171.3413630)\n\n\n### 本项目还得到了 [SocialBook](https:\u002F\u002Fsocialbook.io) 的支持。\n\u003C!-- [![logo](.\u002Fsimswaplogo\u002Fsocialbook_logo.2020.357eed90add7705e54a8.svg)](https:\u002F\u002Fsocialbook.io) -->\n\u003Cimg width=30% src=\".\u002Fsimswaplogo\u002Fsocialbook_logo.2020.357eed90add7705e54a8.svg\"\u002F>\n\n\u003C!-- [[Google Drive]](https:\u002F\u002Fdrive.google.com\u002Ffile\u002Fd\u002F1fcfWOGt1mkBo7F0gXVKitf8GJMAXQxZD\u002Fview?usp=sharing) \n[[Baidu Drive ]](https:\u002F\u002Fpan.baidu.com\u002Fs\u002F1-TKFuycRNUKut8hn4IimvA) 密码: ```ummt``` -->\n\n## 注意事项\n***本项目仅用于技术和学术用途。请勿将其应用于非法或不道德的场景。***\n\n***如果用户违反其所在国家或地区的法律和道德要求，此代码仓库将免责。***\n\n***请不要忽略本 README 文件末尾的内容！***\n\n如果你觉得这个项目有用，请给它点个星。这是对我们工作的最大认可。\n\n## 最新动态 \u003Cimg width=8% src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fneuralchen_SimSwap_readme_273c0f42f6dd.gif\"\u002F>\n\n**`2023-09-26`**: 我们修复了 Colab 中的错误！\n\n**`2023-04-25`**: 我们修复了“AttributeError: 'SGD' 对象没有属性 'defaults'”的错误。如果你已经下载了 **arcface_checkpoint.tar**，请**重新下载**。此外，你还需要更新 ```.\u002Fmodels\u002F``` 中的脚本。\n\n**`2022-04-21`**: 针对资源有限的用户，我们提供了裁剪后的 VGGFace2-224 数据集 [[Google Driver] VGGFace2-224 (10.8G)](https:\u002F\u002Fdrive.google.com\u002Ffile\u002Fd\u002F19pWvdEHS-CEG6tW3PdxdtZ5QEymVjImc\u002Fview?usp=sharing) [[Baidu 
Driver]](https:\u002F\u002Fpan.baidu.com\u002Fs\u002F1OiwLJHVBSYB4AY2vEcfN0A) [密码: lrod]。\n\n**`2022-04-20`**: 训练脚本现已可用。我们强烈建议大家使用我们发布的高质量数据集 [VGGFace2-HQ](https:\u002F\u002Fgithub.com\u002FNNNNAI\u002FVGGFace2-HQ) 来训练 SimSwap 模型。\n\n**`2021-11-24`**: 我们在 [VGGFace2-HQ](https:\u002F\u002Fgithub.com\u002FNNNNAI\u002FVGGFace2-HQ) 上训练了一个 ***SimSwap-HQ*** 的测试版，并开源了该模型的检查点（如果你认为 Simswap 512 很酷，请为我们的 [VGGFace2-HQ](https:\u002F\u002Fgithub.com\u002FNNNNAI\u002FVGGFace2-HQ) 仓库点个星）。别忘了前往 [准备](.\u002Fdocs\u002Fguidance\u002Fpreparation.md) 和 [图像或视频人脸交换推理](.\u002Fdocs\u002Fguidance\u002Fusage.md) 查看最新的设置。\n\n**`2021-11-23`**: [VGGFace2-HQ](https:\u002F\u002Fgithub.com\u002FNNNNAI\u002FVGGFace2-HQ) 的 Google Drive 链接已发布。\n\n**`2021-11-17`**: 我们发布了一个高分辨率人脸数据集 [VGGFace2-HQ](https:\u002F\u002Fgithub.com\u002FNNNNAI\u002FVGGFace2-HQ) 以及生成该数据集的方法。该数据集仅供研究使用。\n\n**`2021-08-30`**: 已支持 Docker，请参阅[这里](https:\u002F\u002Freplicate.ai\u002Fneuralchen\u002Fsimswap-image)了解详细信息。\n\n**`2021-08-17`**: 我们更新了 [准备](.\u002Fdocs\u002Fguidance\u002Fpreparation.md)，主要变化是现在默认安装 onnx 的 GPU 版本，这大大减少了处理视频的时间。\n\n**`2021-07-19`**: ***明显的边界突兀问题已解决***。我们增加了使用掩码的能力并升级了旧算法以获得更好的视觉效果，请参阅 [图像或视频人脸交换推理](.\u002Fdocs\u002Fguidance\u002Fusage.md) 了解详细信息。别忘了前往 [准备](.\u002Fdocs\u002Fguidance\u002Fpreparation.md) 查看最新的设置。（感谢 [@woctezuma](https:\u002F\u002Fgithub.com\u002Fwoctezuma) 和 [@instant-high](https:\u002F\u002Fgithub.com\u002Finstant-high) 的帮助）\n\n## 第一个开源的高分辨率人脸交换数据集！！！\n## 高分辨率数据集 [VGGFace2-HQ](https:\u002F\u002Fgithub.com\u002FNNNNAI\u002FVGGFace2-HQ)\n\n[![logo](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fneuralchen_SimSwap_readme_0ebdba96057f.png)](https:\u002F\u002Fgithub.com\u002FNNNNAI\u002FVGGFace2-HQ)\n\n\n\n\n## 依赖项\n- python3.6+\n- pytorch1.5+\n- torchvision\n- opencv\n- pillow\n- numpy\n- imageio\n- moviepy\n- insightface\n- ***timm==0.5.4***\n\n## 训练\n\n[准备](.\u002Fdocs\u002Fguidance\u002Fpreparation.md)\n\n训练脚本与原始版本略有不同，例如，我们用投影判别器替换了补丁判别器，这节省了大量的硬件开销并取得了稍好的结果。\n\n为了确保正常训练，批量大小必须大于 
1。\n\n温馨提示：由于训练设置的不同，用户训练的模型在视觉效果上会与我们提供的预训练模型有细微差异。\n\n- 使用 VGGFace2 224*224 训练 224 模型 [[Google Driver] VGGFace2-224 (10.8G)](https:\u002F\u002Fdrive.google.com\u002Ffile\u002Fd\u002F19pWvdEHS-CEG6tW3PdxdtZ5QEymVjImc\u002Fview?usp=sharing) [[Baidu Driver] ](https:\u002F\u002Fpan.baidu.com\u002Fs\u002F1OiwLJHVBSYB4AY2vEcfN0A) [密码: lrod]\n\n为了更快的收敛和更好的结果，建议使用较大的批量大小（超过 16）！\n\n***我们建议训练超过 400K 次迭代（批量大小为 16），600K~800K 会更好，不建议更多次迭代。***\n\n\n```\npython train.py --name simswap224_test --batchSize 8  --gpu_ids 0 --dataset \u002Fpath\u002Fto\u002FVGGFace2HQ --Gdeep False\n```\n\n[Colab demo for training 224 model][ \u003Ca href=\"https:\u002F\u002Fcolab.research.google.com\u002Fgithub\u002Fneuralchen\u002FSimSwap\u002Fblob\u002Fmain\u002Ftrain.ipynb\">\u003Cimg src=\"https:\u002F\u002Fcolab.research.google.com\u002Fassets\u002Fcolab-badge.svg\" alt=\"google colab logo\">\u003C\u002Fa>](https:\u002F\u002Fcolab.research.google.com\u002Fgithub\u002Fneuralchen\u002FSimSwap\u002Fblob\u002Fmain\u002Ftrain.ipynb)\n\n为了更快的收敛和更好的结果，建议使用较大的批量大小（超过 16）！\n\n- 使用 VGGFace2-HQ 512*512 训练 512 模型 [VGGFace2-HQ](https:\u002F\u002Fgithub.com\u002FNNNNAI\u002FVGGFace2-HQ)。\n```\npython train.py --name simswap512_test  --batchSize 16  --gpu_ids 0 --dataset \u002Fpath\u002Fto\u002FVGGFace2HQ --Gdeep True\n```\n\n## 使用预训练的 SimSwap 模型进行推理\n[准备工作](.\u002Fdocs\u002Fguidance\u002Fpreparation.md)\n\n[图像或视频人脸交换的推理](.\u002Fdocs\u002Fguidance\u002Fusage.md)\n\n[Colab 示例](https:\u002F\u002Fcolab.research.google.com\u002Fgithub\u002Fneuralchen\u002FSimSwap\u002Fblob\u002Fmain\u002FSimSwap%20colab.ipynb)\n\n\u003Cdiv style=\"background: yellow; width:140px; font-weight:bold;font-family: sans-serif;\">更强的功能\u003C\u002Fdiv>\n\n[用于在多脸视频中切换特定人脸的 Colab][ \u003Ca href=\"https:\u002F\u002Fcolab.research.google.com\u002Fgithub\u002Fneuralchen\u002FSimSwap\u002Fblob\u002Fmain\u002FMultiSpecific.ipynb\">\u003Cimg src=\"https:\u002F\u002Fcolab.research.google.com\u002Fassets\u002Fcolab-badge.svg\" alt=\"google colab 
\u003C\u002Fa>](https">
logo\">\u003C\u002Fa>](https:\u002F\u002Fcolab.research.google.com\u002Fgithub\u002Fneuralchen\u002FSimSwap\u002Fblob\u002Fmain\u002FMultiSpecific.ipynb)\n\n[Replicate 上的图像人脸交换示例和 Docker 镜像](https:\u002F\u002Freplicate.ai\u002Fneuralchen\u002Fsimswap-image)\n\n\n\n## 视频\n\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fneuralchen_SimSwap_readme_6d7847d58f00.webp\"\u002F>\n\u003Cdiv>\n\u003Cimg width=24% src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fneuralchen_SimSwap_readme_beae05aa20d8.webp\"\u002F>\n\u003Cimg width=24% src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fneuralchen_SimSwap_readme_54c6e884bd01.webp\"\u002F>\n\u003Cimg width=24% src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fneuralchen_SimSwap_readme_345e12ba8dcb.webp\"\u002F>\n\u003Cimg width=24% src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fneuralchen_SimSwap_readme_69a04fcce071.webp\"\u002F>\n\u003C\u002Fdiv>\n\u003Cdiv>\n\u003Cimg width=49% src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fneuralchen_SimSwap_readme_acb0fab7d618.webp\"\u002F>\n\u003Cimg width=49% src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fneuralchen_SimSwap_readme_8ee92e4e9d97.webp\"\u002F>\n\u003C\u002Fdiv>\n\n## 结果\n![结果1](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fneuralchen_SimSwap_readme_4aa5ecca869a.png)\n\n![结果2](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fneuralchen_SimSwap_readme_0159dd1756b2.png)\n\n\n\u003C!-- ![video2](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fneuralchen_SimSwap_readme_beae05aa20d8.webp)\n![video3](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fneuralchen_SimSwap_readme_54c6e884bd01.webp)\n![video4](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fneuralchen_SimSwap_readme_345e12ba8dcb.webp)\n![video5](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fneuralchen_SimSwap_readme_69a04fcce071.webp) -->\n\n\n**高质量视频可以在以下链接找到：**\n\n[[Mama(video) 
[[1080p]](https://drive.google.com/file/d/1mnSlwzz7f4H2O7UwApAHo64mgK4xSNyK/view?usp=sharing)

[[Google Drive video 1]](https://drive.google.com/file/d/1hdne7Gw39d34zt3w1NYV3Ln5cT8PfCNm/view?usp=sharing)

[[Google Drive video 2]](https://drive.google.com/file/d/1bDEg_pVeFYLnf9QLSMuG8bsjbRPk0X5_/view?usp=sharing)

[[Google Drive video 3]](https://drive.google.com/file/d/1oftHAnLmgFis4XURcHTccGSWbWSXYKK1/view?usp=sharing)

[[Baidu Netdisk video]](https://pan.baidu.com/s/1WTS6jm2TY17bYJurw57LUg) password: `b26n`

[[Online video]](https://www.bilibili.com/video/BV12v411p7j5/)

## User Showcase
If you get interesting results with our project and would like to share them, contact us by email or post them directly in an issue. We may later open a dedicated section to showcase these results, which should be fun.

If you have any suggestions for the project, feel free to open an issue or reach us directly by email: [email 1](mailto:chenxuanhongzju@outlook.com), [email 2](mailto:nicklau26@foxmail.com), [email 3](mailto:ziangliu824@gmail.com). (Any one of the three works.)

## License
For academic and non-commercial use only. The whole project is released under the CC-BY-NC 4.0 license. See [LICENSE](https://github.com/neuralchen/SimSwap/blob/main/LICENSE) for details.

## Citing Our Papers
```
@inproceedings{DBLP:conf/mm/ChenCNG20,
  author    = {Renwang Chen and
               Xuanhong Chen and
               Bingbing Ni and
               Yanhao Ge},
  title     = {SimSwap: An Efficient Framework For High Fidelity Face Swapping},
  booktitle = {{MM} '20: The 28th {ACM} International Conference on Multimedia},
  year      = {2020}
}
```
```
@article{simswapplusplus,
  author  = {Xuanhong Chen and
             Bingbing Ni and
             Yutian Liu and
             Naiyuan Liu and
             Zhilin Zeng and
             Hang Wang},
  title   = {SimSwap++: Towards Faster and High-Quality Identity Swapping},
  journal = {{IEEE} Trans. Pattern Anal. Mach. Intell.},
  volume  = {46},
  number  = {1},
  pages   = {576--592},
  year    = {2024}
}
```

## Related Projects

**Please visit our high-quality style-transfer project from ACM MM 2020:**

[![logo](https://oss.gittoolsai.com/images/neuralchen_SimSwap_readme_65d89d4509b6.png)](https://github.com/neuralchen/ASMAGAN)

[![title](https://oss.gittoolsai.com/images/neuralchen_SimSwap_readme_63e022095f34.png)](https://github.com/neuralchen/ASMAGAN)

**Please visit our AAAI 2021 sketch-rendering project:**

[![logo](https://oss.gittoolsai.com/images/neuralchen_SimSwap_readme_3431fee26a62.gif)](https://github.com/TZYSJTU/Sketch-Generation-with-Drawing-Process-Guided-by-Vector-Flow-and-Grayscale)
[![title](https://oss.gittoolsai.com/images/neuralchen_SimSwap_readme_841238d7aff1.png)](https://github.com/TZYSJTU/Sketch-Generation-with-Drawing-Process-Guided-by-Vector-Flow-and-Grayscale)

**Please visit our high-resolution face dataset VGGFace2-HQ:**

[![logo](https://oss.gittoolsai.com/images/neuralchen_SimSwap_readme_0ebdba96057f.png)](https://github.com/NNNNAI/VGGFace2-HQ)

Check out our other projects:

[[VGGFace2-HQ]](https://github.com/NNNNAI/VGGFace2-HQ);

[[RainNet]](https://neuralchen.github.io/RainNet);

[[Sketch Generation]](https://github.com/TZYSJTU/Sketch-Generation-with-Drawing-Process-Guided-by-Vector-Flow-and-Grayscale);

[[CooGAN]](https://github.com/neuralchen/CooGAN);

[[Knowledge Style Transfer]](https://github.com/AceSix/Knowledge_Transfer);

[[SimSwap]](https://github.com/neuralchen/SimSwap);

[[ASMA-GAN]](https://github.com/neuralchen/ASMAGAN);

[[SNGAN-Projection-pytorch]](https://github.com/neuralchen/SNGAN_Projection);

[[Pretrained_VGG19]](https://github.com/neuralchen/Pretrained_VGG19).

## Acknowledgements

* [Deepfacelab](https://github.com/iperov/DeepFaceLab)
* [Insightface](https://github.com/deepinsight/insightface)
* [Face-parsing.PyTorch](https://github.com/zllrunning/face-parsing.PyTorch)
* [BiSeNet](https://github.com/CoinCheung/BiSeNet)

# SimSwap Quick Start Guide

SimSwap is an efficient, high-fidelity face-swapping framework that can replace arbitrary faces in both images and videos.

---

## Environment Setup

### System Requirements
- Python 3.6+
- PyTorch 1.5+
- GPU recommended (CUDA support)

### Prerequisites
Install the following dependencies:
```bash
pip install torch torchvision opencv-python pillow numpy imageio moviepy insightface timm==0.5.4
```

To use Docker instead, see the [image provided by Replicate AI](https://replicate.ai/neuralchen/simswap-image).

---

## Installation

1. Clone the repository:
   ```bash
   git clone https://github.com/neuralchen/SimSwap.git
   cd SimSwap
   ```

2. Download the pretrained models:
   - Get them from [Google Drive](https://drive.google.com/file/d/1fcfWOGt1mkBo7F0gXVKitf8GJMAXQxZD/view?usp=sharing) or [Baidu Netdisk](https://pan.baidu.com/s/1-TKFuycRNUKut8hn4IimvA) (password: `ummt`).
   - Place the downloaded model files under `./checkpoints/`.
### Optional: Prepare Training Data
- For training, the high-quality [VGGFace2-HQ](https://github.com/NNNNAI/VGGFace2-HQ) dataset is recommended.
- Download links:
  - [Google Drive](https://drive.google.com/file/d/19pWvdEHS-CEG6tW3PdxdtZ5QEymVjImc/view?usp=sharing)
  - [Baidu Netdisk](https://pan.baidu.com/s/1OiwLJHVBSYB4AY2vEcfN0A) (password: `lrod`)

---

## Basic Usage

### Inference with a Pretrained Model

1. Prepare the input images or videos:
   - Put the target face image and the source image under `./assets/`.

2. Run the inference script:
   ```bash
   python test_one_image.py --isTrain False --name people --Arc_path ./arcface_model/arcface_checkpoint.tar --pic_a_path ./assets/target_face.jpg --pic_b_path ./assets/source_face.jpg --output_path ./results/output.jpg
   ```

3. Check the results:
   - Output files are saved under `./results/`.

### Video Processing

For a quick first try, open the [Colab example](https://colab.research.google.com/github/neuralchen/SimSwap/blob/main/SimSwap%20colab.ipynb) and follow its instructions to upload a video and run inference.
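When many image pairs need to be swapped, the single-image command above can be scripted. A sketch that only assembles the argument list shown in this guide; the helper name and example paths are illustrative, and the resulting list would be passed to `subprocess.run` from the repository root:

```python
def build_swap_cmd(target: str, source: str, output: str) -> list:
    """Assemble the test_one_image.py invocation from the quick-start example."""
    return [
        "python", "test_one_image.py",
        "--isTrain", "False",
        "--name", "people",
        "--Arc_path", "./arcface_model/arcface_checkpoint.tar",
        "--pic_a_path", target,   # image passed as --pic_a_path (target face in the example)
        "--pic_b_path", source,   # image passed as --pic_b_path (source face in the example)
        "--output_path", output,
    ]


cmd = build_swap_cmd(
    "./assets/target_face.jpg",
    "./assets/source_face.jpg",
    "./results/output.jpg",
)
```

Looping `build_swap_cmd` over a directory of image pairs then gives a simple batch driver without editing the script itself.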
For videos containing multiple faces, use the [MultiSpecific Colab](https://colab.research.google.com/github/neuralchen/SimSwap/blob/main/MultiSpecific.ipynb) notebook, which supports replacing a specific, designated face.

---

## Notes

- This project is for technical research and academic use only; do not use it for illegal or unethical purposes.
- If you find the project useful, please star the repository to support the developers!

## Example Use Case

An indie game developer is building a role-playing game and needs diverse facial appearances for the characters, but the budget does not allow hiring many models or buying expensive asset libraries.

### Without SimSwap
- Only a limited set of original character photos is available, so character faces look uniform and lack variety.
- Each character's facial details must be adjusted by hand, costing a great deal of time and effort.
- Player-customized character faces cannot be delivered quickly, reducing the game's playability.
- Commercial asset libraries are too expensive, and assets that exactly fit the requirements are hard to find.
- Character faces in cutscene videos do not match the in-game models, breaking immersion.

### With SimSwap
- A small set of base models can generate many characters with distinct facial features, greatly increasing variety.
- Face replacement is completed quickly with a single trained model, significantly improving development efficiency.
- Players can easily transplant their own facial features onto game characters, strengthening immersion.
- No expensive asset libraries are needed; a small amount of base material is enough.
- Character facial features stay consistent across videos and in-game footage, raising overall quality.

SimSwap lets the developer deliver high-quality character face customization at low cost, greatly improving the game's visuals and user experience.

## Project Information

- Author: [neuralchen](https://github.com/neuralchen) (Xuanhong Chen), assistant professor at ICCI of Shanghai Jiao Tong University, Shanghai, China; contact: chenxuanhongzju@outlook.com
- Languages: Python 81.6%, Jupyter Notebook 18.2%, Shell 0.2%
- Stats: 5145 stars, 1018 forks; last commit 2026-04-04
- License metadata: NOASSERTION (not stated)
- Hardware: GPU required; VRAM and CUDA version not specified
- Training tips: large-batch training (batch size > 16) is recommended, with 400K-800K iterations; the first run downloads the pretrained model and dataset files; Docker deployment is supported.
- Python: 3.6+
- Dependencies: `torch>=1.5`, `torchvision`, `opencv`, `pillow`, `numpy`, `imageio`, `moviepy`, `insightface`, `timm==0.5.4`
- Topics: face, deepfakes, faceswap, gan, swap, deepfacelab, image-manipulation, video, pytorch

## FAQ

**How do I resume training?**
Back up the checkpoints to a separate folder at around 300,000 to 320,000 steps, then, following the developers' advice, continue training to 500,000 steps and inspect the results. ([issue #252](https://github.com/neuralchen/SimSwap/issues/252))

**When will the training code be released?**
The training scripts are now available. You can also try https://github.com/a312863063/SimSwap-train. ([issue #81](https://github.com/neuralchen/SimSwap/issues/81))

**Can the watermark be removed via a parameter?**
A hyperparameter has been added that lets users choose whether to add the SimSwap logo watermark; see the "About the SimSwap logo" section of the [image/video face-swap inference guide](https://github.com/neuralchen/SimSwap/blob/main/docs/guidance/usage.md). ([issue #25](https://github.com/neuralchen/SimSwap/issues/25))

**How do I fix GPU out-of-memory errors?**
Insert a `with torch.no_grad():` statement between lines 48 and 49 of `../util/videoswap.py` and indent each of the following lines (49 through 84) by four extra spaces. ([issue #36](https://github.com/neuralchen/SimSwap/issues/36))

**Why isn't the face swapped in SimSwap 512 mode?**
Make sure the `opt.crop_size` parameter is set correctly. For example, add the following before calling `create_model(opt)`:
```
opt.crop_size = 512
crop_size = opt.crop_size
```
([issue #198](https://github.com/neuralchen/SimSwap/issues/198))

**How do I fix the "'SGD' object has no attribute 'defaults'" error?**
Re-download **arcface_checkpoint.tar** and update the scripts under `./models/`.
Alternatively, install a specific PyTorch build:
```
!pip3 install torch==1.9.1+cu111 torchvision==0.10.1+cu111 torchaudio==0.9.1 torchtext==0.10.1 -f https://download.pytorch.org/whl/torch_stable.html
```
Restart the runtime after installation. ([issue #357](https://github.com/neuralchen/SimSwap/issues/357))

## Releases

- **512_beta** (2021-11-24): checkpoint of SimSwap 512 (beta version).
- **1.0** (2021-07-04)
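The out-of-memory fix in the FAQ (wrapping the `videoswap.py` loop in `with torch.no_grad():`) works because autograd otherwise retains intermediate buffers for every frame of the video. A toy stand-in for that mechanism, not the real torch API; `ToyAutograd` and its `tape` are illustrative inventions:

```python
import contextlib


class ToyAutograd:
    """Stand-in for an autograd engine: records every op unless disabled."""

    def __init__(self):
        self.grad_enabled = True
        self.tape = []  # grows with every op while grads are on

    @contextlib.contextmanager
    def no_grad(self):
        prev, self.grad_enabled = self.grad_enabled, False
        try:
            yield
        finally:
            self.grad_enabled = prev

    def op(self, x):
        if self.grad_enabled:
            self.tape.append(x)  # retained state; the analogue of what fills GPU memory
        return x * 2


engine = ToyAutograd()
for frame in range(100):      # "video frames" processed with grads on
    engine.op(frame)
print(len(engine.tape))       # 100 retained entries

engine.tape.clear()
with engine.no_grad():        # the FAQ's fix: wrap the inference loop
    for frame in range(100):
        engine.op(frame)
print(len(engine.tape))       # 0 retained entries
```

With the context manager active, nothing accumulates per frame, which is exactly why the indented `no_grad` block in `videoswap.py` keeps memory flat during video inference.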