[{"data":1,"prerenderedAt":-1},["ShallowReactive",2],{"similar-foivospar--Arc2Face":3,"tool-foivospar--Arc2Face":61},[4,18,26,36,44,53],{"id":5,"name":6,"github_repo":7,"description_zh":8,"stars":9,"difficulty_score":10,"last_commit_at":11,"category_tags":12,"status":17},4358,"openclaw","openclaw\u002Fopenclaw","OpenClaw 是一款专为个人打造的本地化 AI 助手，旨在让你在自己的设备上拥有完全可控的智能伙伴。它打破了传统 AI 助手局限于特定网页或应用的束缚，能够直接接入你日常使用的各类通讯渠道，包括微信、WhatsApp、Telegram、Discord、iMessage 等数十种平台。无论你在哪个聊天软件中发送消息，OpenClaw 都能即时响应，甚至支持在 macOS、iOS 和 Android 设备上进行语音交互，并提供实时的画布渲染功能供你操控。\n\n这款工具主要解决了用户对数据隐私、响应速度以及“始终在线”体验的需求。通过将 AI 部署在本地，用户无需依赖云端服务即可享受快速、私密的智能辅助，真正实现了“你的数据，你做主”。其独特的技术亮点在于强大的网关架构，将控制平面与核心助手分离，确保跨平台通信的流畅性与扩展性。\n\nOpenClaw 非常适合希望构建个性化工作流的技术爱好者、开发者，以及注重隐私保护且不愿被单一生态绑定的普通用户。只要具备基础的终端操作能力（支持 macOS、Linux 及 Windows WSL2），即可通过简单的命令行引导完成部署。如果你渴望拥有一个懂你",349277,3,"2026-04-06T06:32:30",[13,14,15,16],"Agent","开发框架","图像","数据工具","ready",{"id":19,"name":20,"github_repo":21,"description_zh":22,"stars":23,"difficulty_score":10,"last_commit_at":24,"category_tags":25,"status":17},3808,"stable-diffusion-webui","AUTOMATIC1111\u002Fstable-diffusion-webui","stable-diffusion-webui 是一个基于 Gradio 构建的网页版操作界面，旨在让用户能够轻松地在本地运行和使用强大的 Stable Diffusion 图像生成模型。它解决了原始模型依赖命令行、操作门槛高且功能分散的痛点，将复杂的 AI 绘图流程整合进一个直观易用的图形化平台。\n\n无论是希望快速上手的普通创作者、需要精细控制画面细节的设计师，还是想要深入探索模型潜力的开发者与研究人员，都能从中获益。其核心亮点在于极高的功能丰富度：不仅支持文生图、图生图、局部重绘（Inpainting）和外绘（Outpainting）等基础模式，还独创了注意力机制调整、提示词矩阵、负向提示词以及“高清修复”等高级功能。此外，它内置了 GFPGAN 和 CodeFormer 等人脸修复工具，支持多种神经网络放大算法，并允许用户通过插件系统无限扩展能力。即使是显存有限的设备，stable-diffusion-webui 也提供了相应的优化选项，让高质量的 AI 艺术创作变得触手可及。",162132,"2026-04-05T11:01:52",[14,15,13],{"id":27,"name":28,"github_repo":29,"description_zh":30,"stars":31,"difficulty_score":32,"last_commit_at":33,"category_tags":34,"status":17},1381,"everything-claude-code","affaan-m\u002Feverything-claude-code","everything-claude-code 是一套专为 AI 编程助手（如 Claude Code、Codex、Cursor 等）打造的高性能优化系统。它不仅仅是一组配置文件，而是一个经过长期实战打磨的完整框架，旨在解决 AI 代理在实际开发中面临的效率低下、记忆丢失、安全隐患及缺乏持续学习能力等核心痛点。\n\n通过引入技能模块化、直觉增强、记忆持久化机制以及内置的安全扫描功能，everything-claude-code 能显著提升 AI 在复杂任务中的表现，帮助开发者构建更稳定、更智能的生产级 AI 代理。其独特的“研究优先”开发理念和针对 Token 消耗的优化策略，使得模型响应更快、成本更低，同时有效防御潜在的攻击向量。\n\n这套工具特别适合软件开发者、AI 研究人员以及希望深度定制 AI 工作流的技术团队使用。无论您是在构建大型代码库，还是需要 AI 协助进行安全审计与自动化测试，everything-claude-code 都能提供强大的底层支持。作为一个曾荣获 Anthropic 黑客大奖的开源项目，它融合了多语言支持与丰富的实战钩子（hooks），让 AI 真正成长为懂上",157379,2,"2026-04-15T23:32:42",[14,13,35],"语言模型",{"id":37,"name":38,"github_repo":39,"description_zh":40,"stars":41,"difficulty_score":32,"last_commit_at":42,"category_tags":43,"status":17},2271,"ComfyUI","Comfy-Org\u002FComfyUI","ComfyUI 是一款功能强大且高度模块化的视觉 AI 引擎，专为设计和执行复杂的 Stable Diffusion 图像生成流程而打造。它摒弃了传统的代码编写模式，采用直观的节点式流程图界面，让用户通过连接不同的功能模块即可构建个性化的生成管线。\n\n这一设计巧妙解决了高级 AI 绘图工作流配置复杂、灵活性不足的痛点。用户无需具备编程背景，也能自由组合模型、调整参数并实时预览效果，轻松实现从基础文生图到多步骤高清修复等各类复杂任务。ComfyUI 拥有极佳的兼容性，不仅支持 Windows、macOS 和 Linux 全平台，还广泛适配 NVIDIA、AMD、Intel 及苹果 Silicon 等多种硬件架构，并率先支持 SDXL、Flux、SD3 等前沿模型。\n\n无论是希望深入探索算法潜力的研究人员和开发者，还是追求极致创作自由度的设计师与资深 AI 绘画爱好者，ComfyUI 都能提供强大的支持。其独特的模块化架构允许社区不断扩展新功能，使其成为当前最灵活、生态最丰富的开源扩散模型工具之一，帮助用户将创意高效转化为现实。",108322,"2026-04-10T11:39:34",[14,15,13],{"id":45,"name":46,"github_repo":47,"description_zh":48,"stars":49,"difficulty_score":32,"last_commit_at":50,"category_tags":51,"status":17},6121,"gemini-cli","google-gemini\u002Fgemini-cli","gemini-cli 是一款由谷歌推出的开源 AI 命令行工具，它将强大的 Gemini 大模型能力直接集成到用户的终端环境中。对于习惯在命令行工作的开发者而言，它提供了一条从输入提示词到获取模型响应的最短路径，无需切换窗口即可享受智能辅助。\n\n这款工具主要解决了开发过程中频繁上下文切换的痛点，让用户能在熟悉的终端界面内直接完成代码理解、生成、调试以及自动化运维任务。无论是查询大型代码库、根据草图生成应用，还是执行复杂的 Git 操作，gemini-cli 都能通过自然语言指令高效处理。\n\n它特别适合广大软件工程师、DevOps 
人员及技术研究人员使用。其核心亮点包括支持高达 100 万 token 的超长上下文窗口，具备出色的逻辑推理能力；内置 Google 搜索、文件操作及 Shell 命令执行等实用工具；更独特的是，它支持 MCP（模型上下文协议），允许用户灵活扩展自定义集成，连接如图像生成等外部能力。此外，个人谷歌账号即可享受免费的额度支持，且项目基于 Apache 2.0 协议完全开源，是提升终端工作效率的理想助手。",100752,"2026-04-10T01:20:03",[52,13,15,14],"插件",{"id":54,"name":55,"github_repo":56,"description_zh":57,"stars":58,"difficulty_score":32,"last_commit_at":59,"category_tags":60,"status":17},4721,"markitdown","microsoft\u002Fmarkitdown","MarkItDown 是一款由微软 AutoGen 团队打造的轻量级 Python 工具，专为将各类文件高效转换为 Markdown 格式而设计。它支持 PDF、Word、Excel、PPT、图片（含 OCR）、音频（含语音转录）、HTML 乃至 YouTube 链接等多种格式的解析，能够精准提取文档中的标题、列表、表格和链接等关键结构信息。\n\n在人工智能应用日益普及的今天，大语言模型（LLM）虽擅长处理文本，却难以直接读取复杂的二进制办公文档。MarkItDown 恰好解决了这一痛点，它将非结构化或半结构化的文件转化为模型“原生理解”且 Token 效率极高的 Markdown 格式，成为连接本地文件与 AI 分析 pipeline 的理想桥梁。此外，它还提供了 MCP（模型上下文协议）服务器，可无缝集成到 Claude Desktop 等 LLM 应用中。\n\n这款工具特别适合开发者、数据科学家及 AI 研究人员使用，尤其是那些需要构建文档检索增强生成（RAG）系统、进行批量文本分析或希望让 AI 助手直接“阅读”本地文件的用户。虽然生成的内容也具备一定可读性，但其核心优势在于为机器",93400,"2026-04-06T19:52:38",[52,14],{"id":62,"github_repo":63,"name":64,"description_en":65,"description_zh":66,"ai_summary_zh":66,"readme_en":67,"readme_zh":68,"quickstart_zh":69,"use_case_zh":70,"hero_image_url":71,"owner_login":72,"owner_name":73,"owner_avatar_url":74,"owner_bio":75,"owner_company":76,"owner_location":77,"owner_email":78,"owner_twitter":79,"owner_website":80,"owner_url":81,"languages":82,"stars":87,"forks":88,"last_commit_at":89,"license":90,"difficulty_score":10,"env_os":91,"env_gpu":92,"env_ram":93,"env_deps":94,"category_tags":106,"github_topics":107,"view_count":32,"oss_zip_url":79,"oss_zip_packed_at":79,"status":17,"created_at":117,"updated_at":118,"faqs":119,"releases":155},8089,"foivospar\u002FArc2Face","Arc2Face","[ECCV 2024 Oral 🔥] Arc2Face: A Foundation Model for ID-Consistent Human Faces ------------------------ [ICCVW 2025] ID-Consistent, Precise Expression Generation with Blendshape-Guided Diffusion","Arc2Face 是一款专注于生成身份一致人脸图像的开源基础模型。它只需输入一张人脸的 ArcFace 特征向量，就能在几秒钟内合成出该人物的高质量图像，且无需针对特定人物进行额外的模型微调。\n\n这一工具主要解决了现有 AI 绘图技术在“保持人物身份一致性”上的痛点。传统方法往往需要繁琐的训练过程才能固定角色形象，而 Arc2Face 基于大规模的 WebFace42M 数据集训练，能够更精准地锁定人物身份特征，即使在生成不同姿态或表情时，也能确保“长得像同一个人”。\n\nArc2Face 特别适合研究人员、开发者以及需要批量生成特定角色素材的设计师使用。对于普通用户，它也提供了便捷的 Hugging Face 在线演示，无需复杂配置即可体验。\n\n其技术亮点在于直接构建于流行的 Stable Diffusion 架构之上，具备极强的扩展性。它不仅支持结合 ControlNet 实现精确的姿势控制，最新更新的“表情适配器”功能更能通过混合形状引导扩散模型，生成包括极端、不对称在内的各种精细面部表情。此外，项目还集成了 LCM-LoRA 加速技术，显著提升了推理速度，让实时生成成为可能。","\u003Cdiv align=\"center\">\n\n## 🚀 NEW (2025): ID-Consistent, Precise Expression Generation with Blendshape-Guided Diffusion\n\n📄 **Check out our latest extension!**  \nWe introduce a fine-grained **Expression Adapter**, enabling Arc2Face to generate any subject under any facial expression (even rare, asymmetric, subtle, or extreme ones). 
See details [below](#arc2face--expression-adapter).

<a href='http://arxiv.org/abs/2510.04706'><img src='https://img.shields.io/badge/Paper-arXiv-red'></a>
<a href='https://huggingface.co/FoivosPar/Arc2Face'><img src='https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Model-orange'></a>

<img src='https://oss.gittoolsai.com/images/foivospar_Arc2Face_readme_c0b3182dc2a6.jpg'>

---

# Arc2Face: A Foundation Model for ID-Consistent Human Faces

[Foivos Paraperas Papantoniou](https://foivospar.github.io/)<sup>1</sup> &emsp; [Alexandros Lattas](https://alexlattas.com/)<sup>1</sup> &emsp; [Stylianos Moschoglou](https://moschoglou.com/)<sup>1</sup>

[Jiankang Deng](https://jiankangdeng.github.io/)<sup>1</sup> &emsp; [Bernhard Kainz](https://bernhard-kainz.com/)<sup>1,2</sup> &emsp; [Stefanos Zafeiriou](https://www.imperial.ac.uk/people/s.zafeiriou)<sup>1</sup>

<sup>1</sup>Imperial College London, UK <br>
<sup>2</sup>FAU Erlangen-Nürnberg, Germany

<a href='https://arc2face.github.io/'><img src='https://img.shields.io/badge/Project-Page-blue'></a>
<a href='https://arxiv.org/abs/2403.11641'><img src='https://img.shields.io/badge/Paper-arXiv-red'></a>
<a href='https://huggingface.co/spaces/FoivosPar/Arc2Face'><img src='https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Demo-green'></a>
<a href='https://huggingface.co/FoivosPar/Arc2Face'><img src='https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Model-orange'></a>
<a href='https://huggingface.co/datasets/FoivosPar/Arc2Face'><img src='https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Data-8A2BE2'></a>

</div>

This is the official implementation of **[Arc2Face](https://arc2face.github.io/)**, an ID-conditioned face model:

&emsp;✅ that generates high-quality images of any subject given only its ArcFace embedding, within a few seconds<br>
&emsp;✅ trained on the large-scale WebFace42M dataset, offering superior ID similarity compared to existing models<br>
&emsp;✅ built on top of Stable Diffusion, so it can be extended to different input modalities, e.g. with ControlNet<br>

<img src='https://oss.gittoolsai.com/images/foivospar_Arc2Face_readme_34568a7e4bd7.gif'>

# News/Updates
[![PWC](https://img.shields.io/endpoint.svg?url=https://paperswithcode.com/badge/arc2face-a-foundation-model-of-human-faces/diffusion-personalization-tuning-free-on)](https://paperswithcode.com/sota/diffusion-personalization-tuning-free-on?p=arc2face-a-foundation-model-of-human-faces)

- [2025/10/07] 🔥 We release an extension for accurate and ID-consistent facial expression transfer.
See details [below](#arc2face--expression-adapter)!
- [2024/08/16] 🔥 Accepted to ECCV24 as an **oral**!
- [2024/08/06] 🔥 ComfyUI support available at [caleboleary/ComfyUI-Arc2Face](https://github.com/caleboleary/ComfyUI-Arc2Face)!
- [2024/04/12] 🔥 We add LCM-LoRA support for even faster inference (check the details [below](#lcm-lora-acceleration)).
- [2024/04/11] 🔥 We release the training dataset on [HuggingFace Datasets](https://huggingface.co/datasets/FoivosPar/Arc2Face).
- [2024/03/31] 🔥 We release our demo for pose control using Arc2Face + ControlNet (see instructions [below](#arc2face--controlnet-pose)).
- [2024/03/28] 🔥 We release our Gradio [demo](https://huggingface.co/spaces/FoivosPar/Arc2Face) on HuggingFace Spaces (thanks to the HF team for their free GPU support)!
- [2024/03/14] 🔥 We release Arc2Face.

# Installation
```bash
conda create -n arc2face python=3.10
conda activate arc2face

# Install requirements
pip install -r requirements.txt
```

# Download Models
1) The models can be downloaded manually from [HuggingFace](https://huggingface.co/FoivosPar/Arc2Face) or using python:
```python
from huggingface_hub import hf_hub_download

hf_hub_download(repo_id="FoivosPar/Arc2Face", filename="arc2face/config.json", local_dir="./models")
hf_hub_download(repo_id="FoivosPar/Arc2Face", filename="arc2face/diffusion_pytorch_model.safetensors", local_dir="./models")
hf_hub_download(repo_id="FoivosPar/Arc2Face", filename="encoder/config.json", local_dir="./models")
hf_hub_download(repo_id="FoivosPar/Arc2Face", filename="encoder/pytorch_model.bin", local_dir="./models")
```

2) For face detection and ID-embedding extraction, manually download the [antelopev2](https://github.com/deepinsight/insightface/tree/master/python-package#model-zoo) package ([direct link](https://drive.google.com/file/d/18wEUfMNohBJ4K3Ly5wpTejPfDzp-8fI8/view)) and place the checkpoints under `models/antelopev2`.

3) We use an ArcFace recognition model trained on WebFace42M. Download `arcface.onnx` from [HuggingFace](https://huggingface.co/FoivosPar/Arc2Face) and put it in `models/antelopev2`, or use python:
```python
hf_hub_download(repo_id="FoivosPar/Arc2Face", filename="arcface.onnx", local_dir="./models/antelopev2")
```
4) Then **delete** `glintr100.onnx` (the default backbone from insightface).

The `models` folder structure should finally be:
```
models
├── antelopev2
├── arc2face
└── encoder
```
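Inference silently depends on this exact layout (and on `glintr100.onnx` being gone), so a quick pre-flight check can save a confusing failure later. A minimal sketch — the file names below are just the ones downloaded in the steps above; the rest is plain standard-library code:

```python
# Sanity-check the models/ layout before loading the pipeline.
from pathlib import Path

expected = [
    "models/arc2face/diffusion_pytorch_model.safetensors",
    "models/arc2face/config.json",
    "models/encoder/pytorch_model.bin",
    "models/encoder/config.json",
    "models/antelopev2/arcface.onnx",   # antelopev2 also holds its own detector files
]

for f in expected:
    print(("OK   " if Path(f).is_file() else "MISS ") + f)

# glintr100.onnx must be deleted, or insightface may pick the wrong backbone.
assert not Path("models/antelopev2/glintr100.onnx").exists(), "delete glintr100.onnx"
```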
# Usage

Load the pipeline using [diffusers](https://huggingface.co/docs/diffusers/index):
```python
from diffusers import (
    StableDiffusionPipeline,
    UNet2DConditionModel,
    DPMSolverMultistepScheduler,
)

from arc2face import CLIPTextModelWrapper, project_face_embs

import torch
from insightface.app import FaceAnalysis
from PIL import Image
import numpy as np

# Arc2Face is built upon SD1.5
# The repo below can be used instead of the now deprecated 'runwayml/stable-diffusion-v1-5'
base_model = 'stable-diffusion-v1-5/stable-diffusion-v1-5'

encoder = CLIPTextModelWrapper.from_pretrained(
    'models', subfolder="encoder", torch_dtype=torch.float16
)

unet = UNet2DConditionModel.from_pretrained(
    'models', subfolder="arc2face", torch_dtype=torch.float16
)

pipeline = StableDiffusionPipeline.from_pretrained(
    base_model,
    text_encoder=encoder,
    unet=unet,
    torch_dtype=torch.float16,
    safety_checker=None
)
```
You can use any SD-compatible schedulers and steps, just like with Stable Diffusion. By default, we use `DPMSolverMultistepScheduler` with 25 steps, which produces very good results in just a few seconds.
```python
pipeline.scheduler = DPMSolverMultistepScheduler.from_config(pipeline.scheduler.config)
pipeline = pipeline.to('cuda')
```
Pick an image and extract the ID-embedding:
```python
app = FaceAnalysis(name='antelopev2', root='./', providers=['CUDAExecutionProvider', 'CPUExecutionProvider'])
app.prepare(ctx_id=0, det_size=(640, 640))

# PIL cannot open a URL directly; use a local image, e.g. the bundled example
img = np.array(Image.open('assets/examples/joacquin.png'))[:,:,::-1]  # RGB -> BGR for insightface

faces = app.get(img)
faces = sorted(faces, key=lambda x:(x['bbox'][2]-x['bbox'][0])*(x['bbox'][3]-x['bbox'][1]))[-1]  # select largest face (if more than one detected)
id_emb = torch.tensor(faces['embedding'], dtype=torch.float16)[None].cuda()
id_emb = id_emb/torch.norm(id_emb, dim=1, keepdim=True)   # normalize embedding
id_emb = project_face_embs(pipeline, id_emb)    # pass through the encoder
```

<div align="center">
<img src='https://oss.gittoolsai.com/images/foivospar_Arc2Face_readme_a5f5eca82820.png' style='width:25%;'>
</div>

Generate images:
```python
num_images = 4
images = pipeline(prompt_embeds=id_emb, num_inference_steps=25, guidance_scale=3.0, num_images_per_prompt=num_images).images
```
<div align="center">
<img src='https://oss.gittoolsai.com/images/foivospar_Arc2Face_readme_f37aac014e82.jpg'>
</div>
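To sanity-check how well identity is preserved, one can compare the raw ArcFace embedding of the source face with those of the generated images. This is a sketch, not part of the project's API: it reuses the `app`, `faces`, and `images` objects from the snippets above and uses cosine similarity as the standard ArcFace identity metric (detection can fail on some generated images, as the FAQ further down also notes).

```python
import numpy as np

def arcface_cosine(app, src_emb, pil_image):
    """Cosine similarity between a source ArcFace embedding and the first detected face."""
    bgr = np.array(pil_image)[:, :, ::-1]   # insightface expects BGR
    detected = app.get(bgr)
    if not detected:
        return None                          # ArcFace detection can fail on some outputs
    emb = detected[0]['embedding']
    return float(np.dot(src_emb, emb) / (np.linalg.norm(src_emb) * np.linalg.norm(emb)))

scores = [arcface_cosine(app, faces['embedding'], im) for im in images]
print(scores)  # values closer to 1.0 indicate stronger identity preservation
```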
# LCM-LoRA acceleration

[LCM-LoRA](https://arxiv.org/abs/2311.05556) allows you to reduce the sampling steps to as few as 2-4 for super-fast inference. Just plug in the pre-trained distillation adapter for SD v1.5 and switch to `LCMScheduler`:
```python
from diffusers import LCMScheduler

pipeline.load_lora_weights("latent-consistency/lcm-lora-sdv1-5")
pipeline.scheduler = LCMScheduler.from_config(pipeline.scheduler.config)
```
Then, you can sample with as few as 2 steps (and disable `guidance_scale` by using a value of 1.0, as LCM is very sensitive to it and even small values lead to oversaturation):
```python
images = pipeline(prompt_embeds=id_emb, num_inference_steps=2, guidance_scale=1.0, num_images_per_prompt=num_images).images
```
Note that this technique accelerates sampling in exchange for a slight drop in quality.

# Start a local gradio demo
You can start a local demo for inference by running:
```bash
python gradio_demo/app.py
```

# Arc2Face + ControlNet (pose)
<div align="center">
<img src='https://oss.gittoolsai.com/images/foivospar_Arc2Face_readme_ae9690ebdffd.jpg'>
</div>

We provide a ControlNet model trained on top of Arc2Face for pose control. We use [EMOCA](https://github.com/radekd91/emoca) for 3D pose extraction. To run our demo, follow the steps below:
### 1) Download Model
Download the ControlNet checkpoint manually from [HuggingFace](https://huggingface.co/FoivosPar/Arc2Face) or using python:
```python
from huggingface_hub import hf_hub_download

hf_hub_download(repo_id="FoivosPar/Arc2Face", filename="controlnet/config.json", local_dir="./models")
hf_hub_download(repo_id="FoivosPar/Arc2Face", filename="controlnet/diffusion_pytorch_model.safetensors", local_dir="./models")
```
### 2) Pull EMOCA
```bash
git submodule update --init external/emoca
```
### 3) Installation
This is the trickiest part. You will need PyTorch3D to run EMOCA. As its installation may cause conflicts, we suggest following the process below:
1) Create a new environment and start by installing PyTorch3D with GPU support first (follow the official [instructions](https://github.com/facebookresearch/pytorch3d/blob/main/INSTALL.md)).
2) Add Arc2Face + EMOCA requirements with:
```bash
pip install -r requirements_controlnet.txt
```
3) Install EMOCA code:
```bash
pip install -e external/emoca
```
4) Finally, you need to download the EMOCA/FLAME assets. Run the following and follow the instructions in the terminal:
```bash
cd external/emoca/gdl_apps/EMOCA/demos
bash download_assets.sh
cd ../../../../..
```
### 4) Start a local gradio demo
You can start a local ControlNet demo by running:
```bash
python gradio_demo/app_controlnet.py
```
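The demo script wraps everything, but if you want the ControlNet checkpoint inside your own diffusers code, loading it follows the same pattern as the base pipeline. A hedged sketch — the README only documents the gradio demo, so the conditioning image (an EMOCA-rendered normal map, per the FAQ below) is left as a placeholder:

```python
# Sketch: loading the Arc2Face ControlNet with diffusers (assumes the models/
# layout from the Download Models steps; cond_image preparation not shown).
import torch
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline, UNet2DConditionModel
from arc2face import CLIPTextModelWrapper

controlnet = ControlNetModel.from_pretrained(
    'models', subfolder="controlnet", torch_dtype=torch.float16
)
encoder = CLIPTextModelWrapper.from_pretrained('models', subfolder="encoder", torch_dtype=torch.float16)
unet = UNet2DConditionModel.from_pretrained('models', subfolder="arc2face", torch_dtype=torch.float16)

pipe = StableDiffusionControlNetPipeline.from_pretrained(
    'stable-diffusion-v1-5/stable-diffusion-v1-5',
    text_encoder=encoder, unet=unet, controlnet=controlnet,
    torch_dtype=torch.float16, safety_checker=None,
).to('cuda')

# images = pipe(prompt_embeds=id_emb, image=cond_image, num_inference_steps=25).images
```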
# Arc2Face + Expression Adapter

Our extension ["ID-Consistent, Precise Expression Generation with Blendshape-Guided Diffusion"](http://arxiv.org/abs/2510.04706) combines Arc2Face with a custom IP-Adapter designed for generating ID-consistent images with precise expression control based on FLAME blendshape parameters. We also provide an optional Reference Adapter which can be used to condition the output directly on the input image, i.e. preserving the subject's appearance and background (to an extent). You can find more details in the report.

<div align="center">
<img src='https://oss.gittoolsai.com/images/foivospar_Arc2Face_readme_ae600fd18314.jpg'>
</div>

<br>
Here's how to run it:

### 1) Download Model
Download the Expression and Reference Adapters manually from [HuggingFace](https://huggingface.co/FoivosPar/Arc2Face) or using python:
```python
from huggingface_hub import hf_hub_download

hf_hub_download(repo_id="FoivosPar/Arc2Face", filename="exp_adapter/exp_adapter.bin", local_dir="./models")
hf_hub_download(repo_id="FoivosPar/Arc2Face", filename="ref_adapter/pytorch_lora_weights.safetensors", local_dir="./models")
```
### 2) Download third-party models (SMIRK)
We use the [SMIRK](https://github.com/georgeretsi/smirk) method to extract FLAME expression parameters from the target image. Download the required checkpoints **face_landmarker.task** and **SMIRK_em1.pt** and put them under `models/smirk`:
```bash
mkdir models/smirk
wget https://storage.googleapis.com/mediapipe-models/face_landmarker/face_landmarker/float16/latest/face_landmarker.task --directory-prefix models/smirk
pip install gdown
gdown --id 1T65uEd9dVLHgVw5KiUYL66NUee-MCzoE -O models/smirk/
```
### 3) Start a local gradio demo
Then, just run the demo and follow the instructions:
```bash
python gradio_demo/app_exp_adapter.py
```

# Test Data
The test images used for comparisons in the paper (Synth-500, AgeDB) are available [here](https://drive.google.com/drive/folders/1exnvCECmqWcqNIFCck2EQD-hkE42Ayjc?usp=sharing). Please use them only for evaluation purposes and make sure to cite the corresponding [sources](https://ibug.doc.ic.ac.uk/resources/agedb/) when using them.

# Community Resources

### Replicate Demo
- [Demo link](https://replicate.com/camenduru/arc2face) by [@camenduru](https://github.com/camenduru).

### ComfyUI
- [caleboleary/ComfyUI-Arc2Face](https://github.com/caleboleary/ComfyUI-Arc2Face) by [@caleboleary](https://github.com/caleboleary).

### Pinokio
- Pinokio [implementation](https://pinokio.computer/item?uri=https://github.com/cocktailpeanutlabs/arc2face) by [@cocktailpeanut](https://github.com/cocktailpeanut) (runs locally on all OS - Windows, Mac, Linux).

# Acknowledgements
- Thanks to the creators of Stable Diffusion and the HuggingFace [diffusers](https://github.com/huggingface/diffusers) team for the awesome work ❤️.
- Thanks to the WebFace42M creators for providing such a million-scale facial dataset ❤️.
- Thanks to the HuggingFace team for their generous support through the community GPU grant for our demo ❤️.
- We also acknowledge the invaluable support of the HPC resources provided by the Erlangen National High Performance Computing Center (NHR@FAU) of the Friedrich-Alexander-Universität Erlangen-Nürnberg (FAU), which made the training of Arc2Face possible.

# Citation
If you find Arc2Face useful for your research, please consider citing us:

```bibtex
@inproceedings{paraperas2024arc2face,
      title={Arc2Face: A Foundation Model for ID-Consistent Human Faces},
      author={Paraperas Papantoniou, Foivos and Lattas, Alexandros and Moschoglou, Stylianos and Deng, Jiankang and Kainz, Bernhard and Zafeiriou, Stefanos},
      booktitle={Proceedings of the European Conference on Computer Vision (ECCV)},
      year={2024}
}
```
Additionally, if you use the Expression Adapter, please also cite the extension:

```bibtex
@inproceedings{paraperas2025arc2face_exp,
      title={ID-Consistent, Precise Expression Generation with Blendshape-Guided Diffusion},
      author={Paraperas Papantoniou, Foivos and Zafeiriou, Stefanos},
      booktitle={Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV) Workshops},
      year={2025}
}
```
# Arc2Face Quick Start Guide

Arc2Face is a face foundation model built on Stable Diffusion: given only the ArcFace feature embedding of one face photo, it generates high-fidelity, ID-consistent face images of that person within seconds.

## Environment

*   **OS**: Linux (recommended) or Windows
*   **Python**: 3.10
*   **Hardware**: NVIDIA GPU (CUDA); 8 GB+ VRAM recommended
*   **Prerequisites**: Conda (recommended for environment management)

> **Note**: The project depends on `insightface` and on `PyTorch3D` (only needed for ControlNet pose control). Both can be awkward to install on Windows; prefer Linux or WSL2.

## Installation

### 1. Create and activate a virtual environment
```bash
conda create -n arc2face python=3.10
conda activate arc2face
```

### 2. Install core dependencies
```bash
pip install -r requirements.txt
```
> **Mirror tip (mainland China)**: if downloads are slow, add the Tsinghua or Aliyun index:
> `pip install -r requirements.txt -i https://pypi.tuna.tsinghua.edu.cn/simple`

### 3. Download model files

#### A. Arc2Face main model and encoder
Download manually from [HuggingFace](https://huggingface.co/FoivosPar/Arc2Face), or fetch into `./models` with this Python script:

```python
from huggingface_hub import hf_hub_download

hf_hub_download(repo_id="FoivosPar/Arc2Face", filename="arc2face/config.json", local_dir="./models")
hf_hub_download(repo_id="FoivosPar/Arc2Face", filename="arc2face/diffusion_pytorch_model.safetensors", local_dir="./models")
hf_hub_download(repo_id="FoivosPar/Arc2Face", filename="encoder/config.json", local_dir="./models")
hf_hub_download(repo_id="FoivosPar/Arc2Face", filename="encoder/pytorch_model.bin", local_dir="./models")
```

#### B. Face detection and feature extraction models (antelopev2)
1. Manually download the [antelopev2](https://drive.google.com/file/d/18wEUfMNohBJ4K3Ly5wpTejPfDzp-8fI8/view) archive.
2. Unzip it and place the contents in `models/antelopev2`.

#### C. ArcFace recognition model
Download `arcface.onnx` into `models/antelopev2`:
```python
hf_hub_download(repo_id="FoivosPar/Arc2Face", filename="arcface.onnx", local_dir="./models/antelopev2")
```

#### D. Remove the default file
**Important**: delete `glintr100.onnx` from `models/antelopev2` (insightface's default backbone; it must be removed to avoid conflicts).

The final `models` directory should look like:
```
models/
├── antelopev2/      # arcface.onnx plus the antelopev2 files
├── arc2face/        # diffusion model files
└── encoder/         # encoder files
```

## Basic Usage

The following minimal Python inference example shows how to load the model and generate images.

### 1. Load the pipeline
```python
from diffusers import (
    StableDiffusionPipeline,
    UNet2DConditionModel,
    DPMSolverMultistepScheduler,
)
from arc2face import CLIPTextModelWrapper, project_face_embs
import torch
from insightface.app import FaceAnalysis
from PIL import Image
import numpy as np

# Base model (Arc2Face is built on SD1.5)
base_model = 'stable-diffusion-v1-5/stable-diffusion-v1-5'

# Load the custom encoder and UNet
encoder = CLIPTextModelWrapper.from_pretrained(
    'models', subfolder="encoder", torch_dtype=torch.float16
)

unet = UNet2DConditionModel.from_pretrained(
    'models', subfolder="arc2face", torch_dtype=torch.float16
)

# Build the pipeline
pipeline = StableDiffusionPipeline.from_pretrained(
    base_model,
    text_encoder=encoder,
    unet=unet,
    torch_dtype=torch.float16,
    safety_checker=None
)

# Set the sampler (DPM Solver with 25 steps recommended)
pipeline.scheduler = DPMSolverMultistepScheduler.from_config(pipeline.scheduler.config)
pipeline = pipeline.to('cuda')
```

### 2. Extract the subject's ID features
Take a picture containing a face (e.g. `assets/examples/joacquin.png`) and extract its normalized ID embedding.

```python
# Initialize the face analysis tool
app = FaceAnalysis(name='antelopev2', root='./', providers=['CUDAExecutionProvider', 'CPUExecutionProvider'])
app.prepare(ctx_id=0, det_size=(640, 640))

# Read the image (note the RGB -> BGR conversion for OpenCV-style input)
img_path = 'assets/examples/joacquin.png'
img = np.array(Image.open(img_path))[:,:,::-1]

# Detect faces and keep the largest one
faces = app.get(img)
faces = sorted(faces, key=lambda x:(x['bbox'][2]-x['bbox'][0])*(x['bbox'][3]-x['bbox'][1]))[-1]

# Fetch and process the embedding
id_emb = torch.tensor(faces['embedding'], dtype=torch.float16)[None].cuda()
id_emb = id_emb/torch.norm(id_emb, dim=1, keepdim=True)   # normalize
id_emb = project_face_embs(pipeline, id_emb)    # project through the encoder
```
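The step above indexes the detection results directly, so it raises an `IndexError` if no face is found — a failure mode the FAQ below also mentions for some generated images. A small defensive variant (a sketch, reusing the same objects as above):

```python
# Defensive variant of the face-selection step: fail with a clear message
# instead of an IndexError when detection returns nothing.
detected = app.get(img)
if not detected:
    raise RuntimeError(f"No face detected in {img_path}; try another image or a larger det_size")
faces = max(detected, key=lambda x: (x['bbox'][2]-x['bbox'][0]) * (x['bbox'][3]-x['bbox'][1]))
```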
### 3. Generate images
Use the extracted `id_emb` as conditioning. No text prompt is needed; the model generates directly from the ID.

```python
num_images = 4
images = pipeline(
    prompt_embeds=id_emb,
    num_inference_steps=25,
    guidance_scale=3.0,
    num_images_per_prompt=num_images
).images

# Save the results
for i, img in enumerate(images):
    img.save(f"output_{i}.png")
```

### 💡 Faster inference (optional)
For very fast generation (2-4 steps), load LCM-LoRA:

```python
from diffusers import LCMScheduler

# Load the LCM LoRA and switch the sampler
pipeline.load_lora_weights("latent-consistency/lcm-lora-sdv1-5")
pipeline.scheduler = LCMScheduler.from_config(pipeline.scheduler.config)

# Generate (2 steps suffice; set guidance_scale to 1.0)
images = pipeline(
    prompt_embeds=id_emb,
    num_inference_steps=2,
    guidance_scale=1.0,
    num_images_per_prompt=num_images
).images
```

# Use Case

An indie game studio is producing a large volume of cutscenes featuring a specific protagonist for a narrative-driven RPG, and must keep the character's identity highly consistent across expressions and poses.

### Without Arc2Face
- **Identity consistency is hard to maintain**: conventional generative models easily drift facial features when expression or angle changes; the protagonist looks like a different person, breaking immersion.
- **Fine-tuning is expensive**: to make a model memorize a specific character, the team must run time-consuming DreamBooth fine-tuning per character, with heavy VRAM use and hours per iteration.
- **Extreme expressions fail**: for the script's exaggerated laughs or subtle wry smiles, existing tools often produce stiff, distorted faces with no precise control over muscle movement.
- **Fragmented workflow**: pose adjustment needs a separate ControlNet plugin with manual alignment, while ID preservation needs a separately trained LoRA — cumbersome and error-prone.

### With Arc2Face
- **Zero-shot ID locking**: from a single reference image's ArcFace embedding, Arc2Face generates high-quality images of the character in any scene within seconds, faithfully reproducing the identity.
- **No training, instantly usable**: as a foundation model, Arc2Face removes per-character fine-tuning; developers can switch characters instantly for batch asset production.
- **Fine-grained expression control**: with the new blendshape-guided adapter, the team can generate complex expressions — from an asymmetric raised eyebrow to extreme anger — while strictly preserving identity.
- **Integrated pose control**: native ControlNet support for pose guidance lets developers solve "who", "what pose", and "what expression" in one unified workflow.

With its strong ID-consistency foundation and fine-grained expression control, Arc2Face raises the production efficiency of game character assets by tens of times, giving even a solo developer film-grade character expressiveness.
# Project Info

- **Repository**: [foivospar/Arc2Face](https://github.com/foivospar) · ★ 791 · 58 forks · MIT license · last commit 2026-04-09
- **Author**: Foivos Paraperas (foivospar), Computer Vision PhD student @ Imperial College London — London, United Kingdom · f.paraperas@imperial.ac.uk · https://foivospar.github.io/
- **Language**: Python (100%)
- **Difficulty score**: 3/10
- **OS**: Linux, Windows, macOS
- **GPU**: NVIDIA GPU required. The code samples use `CUDAExecutionProvider` and `torch_dtype=torch.float16`, implying a CUDA-capable NVIDIA card. VRAM needs are not stated explicitly, but running Stable Diffusion v1.5 plus the ControlNet/EMOCA extensions typically calls for 8 GB or more.
- **RAM**: not specified
- **Python**: 3.10
- **Dependencies**: torch, diffusers, insightface, transformers, gradio, PyTorch3D (ControlNet extension only), EMOCA (ControlNet extension only), SMIRK (Expression Adapter extension only)
- **Setup notes**: 1. A conda environment with Python 3.10 is recommended. 2. Basic use requires downloading the Arc2Face model, the ArcFace recognition model, and the antelopev2 face-detection models. 3. The pose-control (ControlNet) feature has a more involved setup: first install GPU-enabled PyTorch3D separately, then EMOCA and its dependencies, and download the FLAME assets. 4. The Expression Adapter feature requires the SMIRK model files. 5. Default inference uses DPMSolverMultistepScheduler (25 steps); LCM-LoRA can be configured for 2-4-step fast inference.
- **Categories**: Dev framework, Image
- **GitHub topics**: face, face-generation, stable-diffusion, id-embedding, subject-driven-generation, personalization, blendshapes, expressions, expression-adapter

# FAQ

**Q: How long does training take? What if the estimate on my hardware is absurdly long (e.g. 7,000 hours)?**
A: Training does take a few weeks, but it should never be anywhere near 7,000 hours. If you see that, try mixed-precision training (fp16) or enable xformers memory-efficient attention (`enable_xformers_memory_efficient_attention`) to speed things up.
Source: https://github.com/foivospar/Arc2Face/issues/40

**Q: Why does face detection fail on images generated by Arc2Face?**
A: This is usually down to ArcFace failure cases on certain generated images. See the related discussion in the repo (e.g. the comments on Issue #17); the fix typically involves adjusting detection parameters or preprocessing.
Source: https://github.com/foivospar/Arc2Face/issues/31

**Q: How was the EMOCA ControlNet trained? Which dataset and scripts were used?**
A: Training was based on the official diffusers example script (train_controlnet.py) with minor adjustments (e.g. the ArcFace conditioning mechanism and a pretrained checkpoint). Only the FFHQ dataset was used, with a learning rate of 1e-5 for 50 epochs. To produce the EMOCAv2 normal maps used for conditioning, see the app_controlnet.py implementation in the repo.
Source: https://github.com/foivospar/Arc2Face/issues/30

**Q: Can Arc2Face be combined with a body-pose ControlNet to generate full-body portraits?**
A: Not currently. The model was trained on aligned and cropped face images, so bolting on a pretrained body-pose ControlNet will most likely produce inconsistencies. Extending to full-body generation would probably require further training of the model, the ControlNet, or both on a dataset that includes bodies.
Source: https://github.com/foivospar/Arc2Face/issues/19

**Q: Were the paper's experiments (e.g. the InstantID comparison) run on SDXL or SD1.5? At what resolution?**
A: The experiments used the publicly available SDXL model and the official inference code from the InstantID repo. All methods in Table 1 were compared at 512x512 after face detection and alignment.
Source: https://github.com/foivospar/Arc2Face/issues/10

**Q: Can specific facial attributes (e.g. eye color) be changed by editing the face embeddings?**
A: It is not trivial. The latent space of ID embeddings has no clean semantic structure, so directly editing particular embedding values gives little precise control over specific facial attributes.
Source: https://github.com/foivospar/Arc2Face/issues/41

**Q: Why do I get degraded quality and identity drift when using the ID embedding in a manual denoising loop?**
A: The problem is usually in how the prompt embedding is handled. The prompt embedding must be passed through `pipeline.encode_prompt()`; the resulting positive and negative embeddings then need to be concatenated and passed to the UNet. Make sure the classifier-free guidance logic is implemented correctly, with `id_emb` fed into `encode_prompt`.
Source: https://github.com/foivospar/Arc2Face/issues/33
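To make the last answer concrete, here is a hedged sketch of the pattern it describes, following the standard diffusers `encode_prompt` convention. The variable names and loop skeleton are illustrative, not code from the project; `latents` is assumed to be an already-initialized noise tensor.

```python
# Sketch: classifier-free guidance with the projected ArcFace embedding in a
# manual denoising loop (diffusers convention: unconditional embeddings first).
import torch

prompt_embeds, negative_embeds = pipeline.encode_prompt(
    prompt=None,
    device='cuda',
    num_images_per_prompt=1,
    do_classifier_free_guidance=True,
    prompt_embeds=id_emb,                      # projected ArcFace embedding
)
embeds = torch.cat([negative_embeds, prompt_embeds])

guidance_scale = 3.0
for t in pipeline.scheduler.timesteps:
    # One latent copy per embedding half, scaled for the current scheduler step
    latent_in = pipeline.scheduler.scale_model_input(torch.cat([latents] * 2), t)
    noise_pred = pipeline.unet(latent_in, t, encoder_hidden_states=embeds).sample
    uncond, cond = noise_pred.chunk(2)
    noise_pred = uncond + guidance_scale * (cond - uncond)
    latents = pipeline.scheduler.step(noise_pred, t, latents).prev_sample
```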