[{"data":1,"prerenderedAt":-1},["ShallowReactive",2],{"similar-pkhungurn--talking-head-anime-demo":3,"tool-pkhungurn--talking-head-anime-demo":64},[4,17,27,35,43,56],{"id":5,"name":6,"github_repo":7,"description_zh":8,"stars":9,"difficulty_score":10,"last_commit_at":11,"category_tags":12,"status":16},3808,"stable-diffusion-webui","AUTOMATIC1111\u002Fstable-diffusion-webui","stable-diffusion-webui 是一个基于 Gradio 构建的网页版操作界面，旨在让用户能够轻松地在本地运行和使用强大的 Stable Diffusion 图像生成模型。它解决了原始模型依赖命令行、操作门槛高且功能分散的痛点，将复杂的 AI 绘图流程整合进一个直观易用的图形化平台。\n\n无论是希望快速上手的普通创作者、需要精细控制画面细节的设计师，还是想要深入探索模型潜力的开发者与研究人员，都能从中获益。其核心亮点在于极高的功能丰富度：不仅支持文生图、图生图、局部重绘（Inpainting）和外绘（Outpainting）等基础模式，还独创了注意力机制调整、提示词矩阵、负向提示词以及“高清修复”等高级功能。此外，它内置了 GFPGAN 和 CodeFormer 等人脸修复工具，支持多种神经网络放大算法，并允许用户通过插件系统无限扩展能力。即使是显存有限的设备，stable-diffusion-webui 也提供了相应的优化选项，让高质量的 AI 艺术创作变得触手可及。",162132,3,"2026-04-05T11:01:52",[13,14,15],"开发框架","图像","Agent","ready",{"id":18,"name":19,"github_repo":20,"description_zh":21,"stars":22,"difficulty_score":23,"last_commit_at":24,"category_tags":25,"status":16},1381,"everything-claude-code","affaan-m\u002Feverything-claude-code","everything-claude-code 是一套专为 AI 编程助手（如 Claude Code、Codex、Cursor 等）打造的高性能优化系统。它不仅仅是一组配置文件，而是一个经过长期实战打磨的完整框架，旨在解决 AI 代理在实际开发中面临的效率低下、记忆丢失、安全隐患及缺乏持续学习能力等核心痛点。\n\n通过引入技能模块化、直觉增强、记忆持久化机制以及内置的安全扫描功能，everything-claude-code 能显著提升 AI 在复杂任务中的表现，帮助开发者构建更稳定、更智能的生产级 AI 代理。其独特的“研究优先”开发理念和针对 Token 消耗的优化策略，使得模型响应更快、成本更低，同时有效防御潜在的攻击向量。\n\n这套工具特别适合软件开发者、AI 研究人员以及希望深度定制 AI 工作流的技术团队使用。无论您是在构建大型代码库，还是需要 AI 协助进行安全审计与自动化测试，everything-claude-code 都能提供强大的底层支持。作为一个曾荣获 Anthropic 黑客大奖的开源项目，它融合了多语言支持与丰富的实战钩子（hooks），让 AI 真正成长为懂上",140436,2,"2026-04-05T23:32:43",[13,15,26],"语言模型",{"id":28,"name":29,"github_repo":30,"description_zh":31,"stars":32,"difficulty_score":23,"last_commit_at":33,"category_tags":34,"status":16},2271,"ComfyUI","Comfy-Org\u002FComfyUI","ComfyUI 是一款功能强大且高度模块化的视觉 AI 引擎，专为设计和执行复杂的 Stable Diffusion 图像生成流程而打造。它摒弃了传统的代码编写模式，采用直观的节点式流程图界面，让用户通过连接不同的功能模块即可构建个性化的生成管线。\n\n这一设计巧妙解决了高级 AI 绘图工作流配置复杂、灵活性不足的痛点。用户无需具备编程背景，也能自由组合模型、调整参数并实时预览效果，轻松实现从基础文生图到多步骤高清修复等各类复杂任务。ComfyUI 拥有极佳的兼容性，不仅支持 Windows、macOS 和 Linux 全平台，还广泛适配 NVIDIA、AMD、Intel 及苹果 Silicon 等多种硬件架构，并率先支持 SDXL、Flux、SD3 等前沿模型。\n\n无论是希望深入探索算法潜力的研究人员和开发者，还是追求极致创作自由度的设计师与资深 AI 绘画爱好者，ComfyUI 都能提供强大的支持。其独特的模块化架构允许社区不断扩展新功能，使其成为当前最灵活、生态最丰富的开源扩散模型工具之一，帮助用户将创意高效转化为现实。",107662,"2026-04-03T11:11:01",[13,14,15],{"id":36,"name":37,"github_repo":38,"description_zh":39,"stars":40,"difficulty_score":23,"last_commit_at":41,"category_tags":42,"status":16},3704,"NextChat","ChatGPTNextWeb\u002FNextChat","NextChat 是一款轻量且极速的 AI 助手，旨在为用户提供流畅、跨平台的大模型交互体验。它完美解决了用户在多设备间切换时难以保持对话连续性，以及面对众多 AI 模型不知如何统一管理的痛点。无论是日常办公、学习辅助还是创意激发，NextChat 都能让用户随时随地通过网页、iOS、Android、Windows、MacOS 或 Linux 端无缝接入智能服务。\n\n这款工具非常适合普通用户、学生、职场人士以及需要私有化部署的企业团队使用。对于开发者而言，它也提供了便捷的自托管方案，支持一键部署到 Vercel 或 Zeabur 等平台。\n\nNextChat 的核心亮点在于其广泛的模型兼容性，原生支持 Claude、DeepSeek、GPT-4 及 Gemini Pro 等主流大模型，让用户在一个界面即可自由切换不同 AI 能力。此外，它还率先支持 MCP（Model Context Protocol）协议，增强了上下文处理能力。针对企业用户，NextChat 提供专业版解决方案，具备品牌定制、细粒度权限控制、内部知识库整合及安全审计等功能，满足公司对数据隐私和个性化管理的高标准要求。",87618,"2026-04-05T07:20:52",[13,26],{"id":44,"name":45,"github_repo":46,"description_zh":47,"stars":48,"difficulty_score":23,"last_commit_at":49,"category_tags":50,"status":16},2268,"ML-For-Beginners","microsoft\u002FML-For-Beginners","ML-For-Beginners 是由微软推出的一套系统化机器学习入门课程，旨在帮助零基础用户轻松掌握经典机器学习知识。这套课程将学习路径规划为 12 周，包含 26 节精炼课程和 52 
道配套测验，内容涵盖从基础概念到实际应用的完整流程，有效解决了初学者面对庞大知识体系时无从下手、缺乏结构化指导的痛点。\n\n无论是希望转型的开发者、需要补充算法背景的研究人员，还是对人工智能充满好奇的普通爱好者，都能从中受益。课程不仅提供了清晰的理论讲解，还强调动手实践，让用户在循序渐进中建立扎实的技能基础。其独特的亮点在于强大的多语言支持，通过自动化机制提供了包括简体中文在内的 50 多种语言版本，极大地降低了全球不同背景用户的学习门槛。此外，项目采用开源协作模式，社区活跃且内容持续更新，确保学习者能获取前沿且准确的技术资讯。如果你正寻找一条清晰、友好且专业的机器学习入门之路，ML-For-Beginners 将是理想的起点。",84991,"2026-04-05T10:45:23",[14,51,52,53,15,54,26,13,55],"数据工具","视频","插件","其他","音频",{"id":57,"name":58,"github_repo":59,"description_zh":60,"stars":61,"difficulty_score":10,"last_commit_at":62,"category_tags":63,"status":16},3128,"ragflow","infiniflow\u002Fragflow","RAGFlow 是一款领先的开源检索增强生成（RAG）引擎，旨在为大语言模型构建更精准、可靠的上下文层。它巧妙地将前沿的 RAG 技术与智能体（Agent）能力相结合，不仅支持从各类文档中高效提取知识，还能让模型基于这些知识进行逻辑推理和任务执行。\n\n在大模型应用中，幻觉问题和知识滞后是常见痛点。RAGFlow 通过深度解析复杂文档结构（如表格、图表及混合排版），显著提升了信息检索的准确度，从而有效减少模型“胡编乱造”的现象，确保回答既有据可依又具备时效性。其内置的智能体机制更进一步，使系统不仅能回答问题，还能自主规划步骤解决复杂问题。\n\n这款工具特别适合开发者、企业技术团队以及 AI 研究人员使用。无论是希望快速搭建私有知识库问答系统，还是致力于探索大模型在垂直领域落地的创新者，都能从中受益。RAGFlow 提供了可视化的工作流编排界面和灵活的 API 接口，既降低了非算法背景用户的上手门槛，也满足了专业开发者对系统深度定制的需求。作为基于 Apache 2.0 协议开源的项目，它正成为连接通用大模型与行业专有知识之间的重要桥梁。",77062,"2026-04-04T04:44:48",[15,14,13,26,54],{"id":65,"github_repo":66,"name":67,"description_en":68,"description_zh":69,"ai_summary_zh":69,"readme_en":70,"readme_zh":71,"quickstart_zh":72,"use_case_zh":73,"hero_image_url":74,"owner_login":75,"owner_name":76,"owner_avatar_url":77,"owner_bio":78,"owner_company":79,"owner_location":79,"owner_email":80,"owner_twitter":81,"owner_website":82,"owner_url":83,"languages":84,"stars":93,"forks":94,"last_commit_at":95,"license":96,"difficulty_score":10,"env_os":97,"env_gpu":98,"env_ram":97,"env_deps":99,"category_tags":108,"github_topics":109,"view_count":10,"oss_zip_url":79,"oss_zip_packed_at":79,"status":16,"created_at":119,"updated_at":120,"faqs":121,"releases":152},659,"pkhungurn\u002Ftalking-head-anime-demo","talking-head-anime-demo","Demo for the \"Talking Head Anime from a Single Image.\"","talking-head-anime-demo 是一款基于神经网络的开源项目，致力于让单张动漫图片实现生动的头部动作与表情变化。它通过深度学习技术，解决了传统二维动画制作中繁琐的骨骼绑定与关键帧绘制难题。用户只需提供符合规格的单张动漫头像，即可通过两种模式进行创作：一是利用手动滑块精细调整角色姿态；二是启用 Puppeteer 模式，通过摄像头实时捕捉人脸动作，驱动虚拟角色同步模仿。\n\n对于希望快速生成互动内容的研究者、开发者及数字艺术创作者而言，这是一个极具价值的实验平台。考虑到本地部署对硬件的高要求（需配备较新的 NVIDIA 显卡），项目提供了 Google Colab 云端运行方案，极大降低了体验门槛。其核心技术亮点在于仅凭单张图像即可重建三维头部运动，并结合 dlib 人脸关键点检测算法，实现了自然流畅的面部动作迁移。尽管对输入图片格式有特定要求（如 256x256 透明背景 PNG），但其展现的 AI 驱动动画潜力令人印象深刻。","# Demo Code for \"Talking Head Anime from a Single Image\"\n\nYou may want to check out the much more capable Version 2 of the same software: http:\u002F\u002Fgithub.com\u002Fpkhungurn\u002Ftalking-head-anime-2-demo\n  \nThis repository contains code for two applications that make use of the neural network system in the [Talking Head Anime from a Single Image](http:\u002F\u002Fpkhungurn.github.io\u002Ftalking-head-anime\u002F) project:  \n  \n* The *manual poser* allows the user to pose an anime character by manually manipulating sliders.\n* The *puppeteer* makes an anime character imitate the head movement of the human captured by a webcam feed.\n\n## Try the Manual Poser on Google Colab\n\nIf you do not have the required hardware (discussed below) or do not want to download the code and set up an environment to run it, click [![this link](https:\u002F\u002Fcolab.research.google.com\u002Fassets\u002Fcolab-badge.svg)](https:\u002F\u002Fcolab.research.google.com\u002Fgithub\u002Fpkhungurn\u002Ftalking-head-anime-demo\u002Fblob\u002Fmaster\u002Ftha_colab.ipynb) to try running the manual poser on [Google Colab](https:\u002F\u002Fresearch.google.com\u002Fcolaboratory\u002Ffaq.html).\n\n## 
Hardware Requirements\n\nAs with many modern machine learning projects written with PyTorch, this piece of code requires **a recent and powerful Nvidia GPU** to run. I have personally run the code on a GeForce GTX 1080 Ti and a Titan RTX.\n\nAlso, the puppeteer tool requires a webcam.\n\n## Dependencies\n\n* Python >= 3.6\n* pytorch >= 1.4.0\n* dlib >= 19.19\n* opencv-python >= 4.1.0.30\n* pillow >= 7.0.0\n* numpy >= 1.17.2\n\nIf you install these packages, you should be all good.\n\n## Recreating Python Environment with Anaconda\n\nIf you use [Anaconda](https:\u002F\u002Fwww.anaconda.com\u002F), you also have the option of recreating the Python environment that can be used to run the demo. Open a shell and change directory to the project's root. Then, run the following command:\n\n> `conda env create -f environment.yml`\n\nThis should download and install all the dependencies. Keep in mind, though, that this will require several gigabytes of your storage. After the installation is done, you can activate the new environment with the following command:\n\n> `conda activate talking-head-anime`\n\nOnce you are done with the environment, you can deactivate it with:\n\n> `conda deactivate`\n\n## Prepare the Data\n\nAfter you have cloned this repository to your machine's storage, you need to download the models: \n\n* Download the main models from [this link](https:\u002F\u002Fdrive.google.com\u002Fopen?id=1ajHViqyLDKFKfBtGPE5cbSGcMNa8rz8k). Unzip the file into the `data` directory under the project's root. The models are released separately with the [Creative Commons Attribution 4.0 International License](https:\u002F\u002Fcreativecommons.org\u002Flicenses\u002Fby\u002F4.0\u002Flegalcode).\n* Download `shape_predictor_68_face_landmarks.dat` and save it to the `data` directory. You can download the bzip archive from [here](https:\u002F\u002Fgithub.com\u002Fdavisking\u002Fdlib-models). Do not forget to uncompress.\n\nOnce the downloading is done, the data directory should look like the following:\n\n```\n+ data\n  + illust\n    - placeholder.txt\n    - waifu_00_256.png\n    - waifu_01_256.png\n    - waifu_02_256.png\n    - waifu_03_256.png\n    - waifu_04_256.png\n  - combiner.pt\n  - face_morpher.pt\n  - placeholder.txt\n  - shape_predictor_68_face_landmarks.dat\n  - two_algo_face_rotator.pt\n```\n\nTo play with the demo, you can use the 5 images I included in the `data\u002Fillust` directory. Or, you can prepare some character images by yourself. Images that can be animated must satisfy the following requirements:\n* It must be in PNG format.\n* It must be of size 256 x 256.\n* The head of the character must be contained in the center 128 x 128 box.\n* It must have 4 channels (RGBA).\n* Pixels that do not belong to the character's body must have value (0,0,0,0). In other words, the background must be transparent.\n\nFor more details, consult Section 4 of the [web site of the project writeup](https:\u002F\u002Fpkhungurn.github.io\u002Ftalking-head-anime\u002F). You should save all the images in the `data\u002Fillust` directory. One good way to get character images is to generate one with [Waifu Labs](https:\u002F\u002Fwaifulabs.com\u002F) and edit the image to fit the above requirements.\n\n## Running the Program\n\nChange directory to the root directory of the project. 
To run the manual poser, issue the following command in your shell:\n\n> `python app\u002Fmanual_poser.py`\n\nTo run the puppeteer, issue the following command in your shell:\n\n> `python app\u002Fpuppeteer.py`\n\n## Citation\n\nIf your academic work benefits from the code in this repository, please cite the project's web page as follows:\n\n> Pramook Khungurn. **Talking Head Anime from a Single Image.** http:\u002F\u002Fpkhungurn.github.io\u002Ftalking-head-anime\u002F, 2019. Accessed: YYYY-MM-DD.\n\nYou can also use the following BibTeX entry:\n\n```\n@misc{Khungurn:2019,\n    author = {Pramook Khungurn},\n    title = {Talking Head Anime from a Single Image},\n    howpublished = {\\url{http:\u002F\u002Fpkhungurn.github.io\u002Ftalking-head-anime\u002F}},\n    year = 2019,\n    note = {Accessed: YYYY-MM-DD},\n}\n```\n\n## Disclaimer\n\nWhile the author is an employee of Google Japan, this software is not Google's product and is not supported by Google.\n\nThe copyright of this software belongs to me as I have requested it using the \u003Ca href=\"https:\u002F\u002Fopensource.google\u002Fdocumentation\u002Freference\u002Freleasing#iarc\">IARC process\u003C\u002Fa>. However, one of the conditions for the release of this source code is that the publication of the \"Talking Head Anime from a Single Image\" be approved by the internal publication approval process. I requested approval on 2019\u002F11\u002F17. It has been reviewed by a researcher, but has not been formally approved by a manager in my product area (Google Maps). I have decided to release this code, bearing all the risks that it may incur.\n\nI made use of [the face tracker code implemented by KwanHua Lee](https:\u002F\u002Fgithub.com\u002Flincolnhard\u002Fhead-pose-estimation) to implement the puppeteer tool.\n","# “单图生成说话动漫头像”演示代码\n\n您可能想查看同一软件的更强大版本 2：http:\u002F\u002Fgithub.com\u002Fpkhungurn\u002Ftalking-head-anime-2-demo\n  \n此仓库包含两个应用程序的代码，它们使用了 [Talking Head Anime from a Single Image](http:\u002F\u002Fpkhungurn.github.io\u002Ftalking-head-anime\u002F) 项目中的神经网络系统：  \n  \n* *手动姿势调整器 (Manual Poser)* 允许用户通过手动操作滑块来摆出动漫角色的姿势。\n* *木偶控制器 (Puppeteer)* 使动漫角色模仿网络摄像头捕捉的人类头部运动。\n\n## 在 Google Colab 上尝试手动姿势调整器\n\n如果您没有所需的硬件（下文讨论）或不想下载代码并设置环境来运行它，请点击 [![this link](https:\u002F\u002Fcolab.research.google.com\u002Fassets\u002Fcolab-badge.svg)](https:\u002F\u002Fcolab.research.google.com\u002Fgithub\u002Fpkhungurn\u002Ftalking-head-anime-demo\u002Fblob\u002Fmaster\u002Ftha_colab.ipynb) 以在 [Google Colab](https:\u002F\u002Fresearch.google.com\u002Fcolaboratory\u002Ffaq.html) 上尝试运行手动姿势调整器。\n\n## 硬件要求\n\n与许多使用 PyTorch 编写的现代机器学习项目一样，这段代码需要**较新的且强大的 Nvidia GPU (图形处理器)** 才能运行。我本人曾在 GeForce GTX 1080 Ti 和 Titan RTX 上运行过该代码。\n\n此外，puppeteer (木偶控制器) 工具需要一个网络摄像头。\n\n## 依赖项\n\n* Python >= 3.6\n* PyTorch >= 1.4.0\n* dlib >= 19.19\n* opencv-python >= 4.1.0.30\n* pillow >= 7.0.0\n* numpy >= 1.17.2\n\n如果您安装了这些包，就应该没问题了。\n\n## 使用 Anaconda 重建 Python 环境\n\n如果您使用 [Anaconda](https:\u002F\u002Fwww.anaconda.com\u002F)，您还可以选择重建可用于运行演示的 Python 环境。打开 Shell 并将目录更改为项目的根目录。然后，运行以下命令：\n\n> `conda env create -f environment.yml`\n\n这将下载并安装所有依赖项。不过请注意，这需要占用几个 GB 的存储空间。安装完成后，您可以使用以下命令激活新环境：\n\n> `conda activate talking-head-anime`\n\n完成环境使用后，您可以使用以下命令将其停用：\n\n> `conda deactivate`\n\n## 准备数据\n\n将本仓库克隆到您的机器存储后，您需要下载模型： \n\n* 从 [此链接](https:\u002F\u002Fdrive.google.com\u002Fopen?id=1ajHViqyLDKFKfBtGPE5cbSGcMNa8rz8k) 下载主模型。将文件解压到项目根目录下的 `data` 目录中。这些模型是单独发布的，遵循 [Creative Commons Attribution 4.0 International 
License](https:\u002F\u002Fcreativecommons.org\u002Flicenses\u002Fby\u002F4.0\u002Flegalcode)。\n* 下载 `shape_predictor_68_face_landmarks.dat` 并将其保存到 `data` 目录中。您可以从 [此处](https:\u002F\u002Fgithub.com\u002Fdavisking\u002Fdlib-models) 下载 bzip 归档文件。别忘了解压缩。\n\n下载完成后，数据目录应如下所示：\n\n```\n+ data\n  + illust\n    - placeholder.txt\n    - waifu_00_256.png\n    - waifu_01_256.png\n    - waifu_02_256.png\n    - waifu_03_256.png\n    - waifu_04_256.png\n  - combiner.pt\n  - face_morpher.pt\n  - placeholder.txt\n  - shape_predictor_68_face_landmarks.dat\n  - two_algo_face_rotator.pt\n```\n\n要玩这个演示，您可以使用我包含在 `data\u002Fillust` 中的 5 张图片。或者，您也可以自己准备一些角色图片。可以动画化的图像必须满足以下要求：\n* 必须是 PNG 格式。\n* 尺寸必须为 256 x 256。\n* 角色的头部必须包含在中心 128 x 128 的方框内。\n* 必须具有 4 个通道 (RGBA)。\n* 不属于角色身体的像素值必须为 (0,0,0,0)。换句话说，背景必须是透明的。\n\n更多详细信息，请参阅项目撰写网页的第 4 节 [https:\u002F\u002Fpkhungurn.github.io\u002Ftalking-head-anime\u002F](https:\u002F\u002Fpkhungurn.github.io\u002Ftalking-head-anime\u002F)。您应该将所有图像保存在 `data\u002Fillust` 目录中。获取角色图片的一个好方法是使用 [Waifu Labs](https:\u002F\u002Fwaifulabs.com\u002F) 生成一张，然后编辑图像以符合上述要求。\n\n## 运行程序\n\n将目录更改为项目的根目录。要运行手动姿势调整器，请在您的 Shell 中发出以下命令：\n\n> `python app\u002Fmanual_poser.py`\n\n要运行木偶控制器，请在您的 Shell 中发出以下命令：\n\n> `python app\u002Fpuppeteer.py`\n\n## 引用\n\n如果您的学术工作受益于本仓库中的代码，请按以下方式引用项目的网页：\n\n> Pramook Khungurn. **Talking Head Anime from a Single Image.** http:\u002F\u002Fpkhungurn.github.io\u002Ftalking-head-anime\u002F, 2019. Accessed: YYYY-MM-DD.\n\n您也可以使用以下 BibTeX 条目：\n\n```\n@misc{Khungurn:2019,\n    author = {Pramook Khungurn},\n    title = {Talking Head Anime from a Single Image},\n    howpublished = {\\url{http:\u002F\u002Fpkhungurn.github.io\u002Ftalking-head-anime\u002F}},\n    year = 2019,\n    note = {Accessed: YYYY-MM-DD},\n}\n```\n\n## 免责声明\n\n虽然作者是日本谷歌的员工，但该软件不是谷歌的产品，也不受谷歌支持。\n\n本软件的版权属于我，因为我是使用 \u003Ca href=\"https:\u002F\u002Fopensource.google\u002Fdocumentation\u002Freference\u002Freleasing#iarc\">IARC 流程\u003C\u002Fa> 请求发布的。但是，发布此源代码的条件之一是“单图生成说话动漫头像”的发表需经内部出版审批流程批准。我于 2019\u002F11\u002F17 申请了批准。它已由一名研究人员审查，但尚未在我的产品领域（Google Maps）由经理正式批准。我决定发布此代码，并承担其可能产生的所有风险。\n\n我使用了 [KwanHua Lee 实现的面部追踪代码](https:\u002F\u002Fgithub.com\u002Flincolnhard\u002Fhead-pose-estimation) 来实现 puppeteer (木偶控制器) 工具。","# talking-head-anime-demo 快速上手指南\n\n本项目用于根据单张图像生成动漫人物的说话或头部运动动画，支持手动摆姿（Manual Poser）和摄像头驱动（Puppeteer）两种模式。\n\n## 环境准备\n\n### 硬件要求\n*   **GPU**: 需要较新的强力 NVIDIA GPU（如 GeForce GTX 1080 Ti 或 Titan RTX）。\n*   **摄像头**: 若使用 Puppeteer 模式，需连接网络摄像头。\n*   **替代方案**: 若无本地硬件，可通过 [Google Colab](https:\u002F\u002Fcolab.research.google.com\u002Fgithub\u002Fpkhungurn\u002Ftalking-head-anime-demo\u002Fblob\u002Fmaster\u002Ftha_colab.ipynb) 在线运行手动摆姿功能。\n\n### 软件依赖\n*   Python >= 3.6\n*   PyTorch >= 1.4.0\n*   dlib >= 19.19\n*   opencv-python >= 4.1.0.30\n*   pillow >= 7.0.0\n*   numpy >= 1.17.2\n\n## 安装步骤\n\n### 1. 克隆仓库\n```bash\ngit clone \u003Crepository_url>\ncd \u003Cproject_directory>\n```\n\n### 2. 创建 Python 环境\n推荐使用 Anaconda 管理环境：\n```bash\nconda env create -f environment.yml\nconda activate talking-head-anime\n```\n\n### 3. 
下载模型文件\n将以下模型文件解压至项目根目录下的 `data` 文件夹中：\n*   **主模型**: [Google Drive 链接](https:\u002F\u002Fdrive.google.com\u002Fopen?id=1ajHViqyLDKFKfBtGPE5cbSGcMNa8rz8k)\n*   **人脸关键点模型**: [dlib 模型链接](https:\u002F\u002Fgithub.com\u002Fdavisking\u002Fdlib-models) (下载 `shape_predictor_68_face_landmarks.dat.bz2` 并解压)\n\n确保 `data` 目录结构如下：\n```text\n+ data\n  + illust\n    - waifu_*.png (示例图片)\n  - combiner.pt\n  - face_morpher.pt\n  - shape_predictor_68_face_landmarks.dat\n  - two_algo_face_rotator.pt\n```\n\n### 4. 准备角色图片\n如需自定义图片，请放入 `data\u002Fillust` 目录，并满足以下要求：\n*   格式：PNG\n*   尺寸：256 x 256 像素\n*   通道：RGBA (4 通道)\n*   背景：透明 (非身体区域像素值为 0,0,0,0)\n*   构图：角色头部必须位于中心 128 x 128 区域内\n\n## 基本使用\n\n在激活 `talking-head-anime` 环境后，进入项目根目录运行以下命令：\n\n### 启动手动摆姿器\n允许通过滑块手动调整动漫角色的姿态：\n```bash\npython app\u002Fmanual_poser.py\n```\n\n### 启动 Puppeteer (木偶师)\n利用网络摄像头捕捉人脸动作，驱动动漫角色模仿：\n```bash\npython app\u002Fpuppeteer.py\n```","一位独立游戏开发者需要为角色制作一段动态立绘视频用于宣传片，但团队缺乏专业的动画师资源且预算有限，急需低成本解决方案。\n\n### 没有 talking-head-anime-demo 时\n- 必须依赖人工逐帧绘制角色口型与表情变化，耗时极长且容易出错。\n- 聘请专业动捕演员或外包制作成本高昂，往往超出小型项目的预算范围。\n- 静态图片直接嵌入视频显得僵硬死板，无法有效传达角色的情绪波动。\n- 现有的传统动画软件学习曲线陡峭，难以快速生成流畅自然的头部运动。\n\n### 使用 talking-head-anime-demo 后\n- 仅需上传一张符合规范的透明背景 PNG 图片，即可自动驱动角色说话和转头。\n- 利用摄像头实时捕捉面部动作，实现真人表情对虚拟角色的自然模仿效果。\n- 提供手动滑块调节功能，允许在不重新绘制的前提下微调特定角度和姿态。\n- 大幅降低技术门槛，开发者能在几分钟内产出高质量、流畅的动态素材。\n\n它让静态二次元立绘瞬间“活”起来，极大降低了个人创作者的动态内容生产成本，是独立开发者的得力助手。","https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fpkhungurn_talking-head-anime-demo_69839c09.png","pkhungurn","Pramook Khungurn","https:\u002F\u002Foss.gittoolsai.com\u002Favatars\u002Fpkhungurn_775a6f38.jpg","A software developer from Thailand, interested in computer graphics, machine learning, and algorithms.",null,"pramook@gmail.com","dragonmeteor","http:\u002F\u002Fpkhungurn.github.io","https:\u002F\u002Fgithub.com\u002Fpkhungurn",[85,89],{"name":86,"color":87,"percentage":88},"Python","#3572A5",81.6,{"name":90,"color":91,"percentage":92},"Jupyter Notebook","#DA5B0B",18.4,2022,286,"2026-04-04T11:25:08","MIT","未说明","需要 NVIDIA GPU（测试过 GTX 1080 Ti, Titan RTX），具体显存及 CUDA 版本未说明",{"notes":100,"python":101,"dependencies":102},"需手动下载模型文件及人脸关键点检测器到 data 目录；输入图片需满足 256x256 分辨率、RGBA 通道且背景透明；若硬件不足可使用 Google Colab；安装依赖需占用数 GB 存储空间","3.6+",[103,104,105,106,107],"pytorch>=1.4.0","dlib>=19.19","opencv-python>=4.1.0.30","pillow>=7.0.0","numpy>=1.17.2",[13,15,14],[110,111,112,113,114,115,116,117,118],"ai","anime","computer-graphics","computer-vision","deep-learning","machine-learning","pytorch","vtuber","python","2026-03-27T02:49:30.150509","2026-04-06T08:46:50.963639",[122,127,132,137,142,147],{"id":123,"question_zh":124,"answer_zh":125,"source_url":126},2722,"手动姿势调整器显示“Nothing yet!”或报错怎么办？","这通常是 Python 环境问题，特别是 `typing` 模块异常。最简单的解决方法是在 Anaconda 中创建一个全新的 Python 环境并重新安装代码（请参考 README 中的说明）。此外，有用户反馈删除旧的 'poser' 包也能解决问题。","https:\u002F\u002Fgithub.com\u002Fpkhungurn\u002Ftalking-head-anime-demo\u002Fissues\u002F16",{"id":128,"question_zh":129,"answer_zh":130,"source_url":131},2723,"运行 Puppeteer 时出现 `TypeError: unsupported operand type(s) for -: 'point' and 'point'` 错误如何解决？","这是由于使用的 dlib 版本不兼容导致的（例如使用了 19.7 版本）。请将 dlib 升级至 19.19 版本，该错误即可消失。","https:\u002F\u002Fgithub.com\u002Fpkhungurn\u002Ftalking-head-anime-demo\u002Fissues\u002F22",{"id":133,"question_zh":134,"answer_zh":135,"source_url":136},2724,"可以使用自定义的图片作为源图像吗？","可以。但是你的自定义图片必须符合 README.md 文件中列出的指南要求。","https:\u002F\u002Fgithub.com\u002Fpkhungurn\u002Ftalking-head-anime-demo\u002Fissues\u002F40",{"id":138,"question_zh":139,"answer_zh":140,"source_url":141},2725,"为什么 GPU 
资源占用率很低且程序运行卡顿？","Windows 系统有时无法正确报告 GPU 使用率，实际上 GPU 可能已经处于最大负载状态。建议参考 Issue #10 了解更多关于性能优化的细节。","https:\u002F\u002Fgithub.com\u002Fpkhungurn\u002Ftalking-head-anime-demo\u002Fissues\u002F24",{"id":143,"question_zh":144,"answer_zh":145,"source_url":146},2726,"能否将模型保存为 MTN 格式文件以便在网站上使用？","不能。该软件不是像 Live2D 那样基于底层可移动图像工作，而是使用神经网络根据输入生成新图像，因此无法提取和保存动作文件。","https:\u002F\u002Fgithub.com\u002Fpkhungurn\u002Ftalking-head-anime-demo\u002Fissues\u002F29",{"id":148,"question_zh":149,"answer_zh":150,"source_url":151},2727,"导入图片后 Puppeteer 运行时出现 OpenCV 错误怎么办？","这是已知问题，维护者已更新代码修复。请从仓库拉取最新代码（pull）以应用修复。","https:\u002F\u002Fgithub.com\u002Fpkhungurn\u002Ftalking-head-anime-demo\u002Fissues\u002F42",[]]