[{"data":1,"prerenderedAt":-1},["ShallowReactive",2],{"similar-PeizhuoLi--neural-blend-shapes":3,"tool-PeizhuoLi--neural-blend-shapes":64},[4,17,27,35,43,56],{"id":5,"name":6,"github_repo":7,"description_zh":8,"stars":9,"difficulty_score":10,"last_commit_at":11,"category_tags":12,"status":16},3808,"stable-diffusion-webui","AUTOMATIC1111\u002Fstable-diffusion-webui","stable-diffusion-webui 是一个基于 Gradio 构建的网页版操作界面，旨在让用户能够轻松地在本地运行和使用强大的 Stable Diffusion 图像生成模型。它解决了原始模型依赖命令行、操作门槛高且功能分散的痛点，将复杂的 AI 绘图流程整合进一个直观易用的图形化平台。\n\n无论是希望快速上手的普通创作者、需要精细控制画面细节的设计师，还是想要深入探索模型潜力的开发者与研究人员，都能从中获益。其核心亮点在于极高的功能丰富度：不仅支持文生图、图生图、局部重绘（Inpainting）和外绘（Outpainting）等基础模式，还独创了注意力机制调整、提示词矩阵、负向提示词以及“高清修复”等高级功能。此外，它内置了 GFPGAN 和 CodeFormer 等人脸修复工具，支持多种神经网络放大算法，并允许用户通过插件系统无限扩展能力。即使是显存有限的设备，stable-diffusion-webui 也提供了相应的优化选项，让高质量的 AI 艺术创作变得触手可及。",162132,3,"2026-04-05T11:01:52",[13,14,15],"开发框架","图像","Agent","ready",{"id":18,"name":19,"github_repo":20,"description_zh":21,"stars":22,"difficulty_score":23,"last_commit_at":24,"category_tags":25,"status":16},1381,"everything-claude-code","affaan-m\u002Feverything-claude-code","everything-claude-code 是一套专为 AI 编程助手（如 Claude Code、Codex、Cursor 等）打造的高性能优化系统。它不仅仅是一组配置文件，而是一个经过长期实战打磨的完整框架，旨在解决 AI 代理在实际开发中面临的效率低下、记忆丢失、安全隐患及缺乏持续学习能力等核心痛点。\n\n通过引入技能模块化、直觉增强、记忆持久化机制以及内置的安全扫描功能，everything-claude-code 能显著提升 AI 在复杂任务中的表现，帮助开发者构建更稳定、更智能的生产级 AI 代理。其独特的“研究优先”开发理念和针对 Token 消耗的优化策略，使得模型响应更快、成本更低，同时有效防御潜在的攻击向量。\n\n这套工具特别适合软件开发者、AI 研究人员以及希望深度定制 AI 工作流的技术团队使用。无论您是在构建大型代码库，还是需要 AI 协助进行安全审计与自动化测试，everything-claude-code 都能提供强大的底层支持。作为一个曾荣获 Anthropic 黑客大奖的开源项目，它融合了多语言支持与丰富的实战钩子（hooks），让 AI 真正成长为懂上",140436,2,"2026-04-05T23:32:43",[13,15,26],"语言模型",{"id":28,"name":29,"github_repo":30,"description_zh":31,"stars":32,"difficulty_score":23,"last_commit_at":33,"category_tags":34,"status":16},2271,"ComfyUI","Comfy-Org\u002FComfyUI","ComfyUI 是一款功能强大且高度模块化的视觉 AI 引擎，专为设计和执行复杂的 Stable Diffusion 图像生成流程而打造。它摒弃了传统的代码编写模式，采用直观的节点式流程图界面，让用户通过连接不同的功能模块即可构建个性化的生成管线。\n\n这一设计巧妙解决了高级 AI 绘图工作流配置复杂、灵活性不足的痛点。用户无需具备编程背景，也能自由组合模型、调整参数并实时预览效果，轻松实现从基础文生图到多步骤高清修复等各类复杂任务。ComfyUI 拥有极佳的兼容性，不仅支持 Windows、macOS 和 Linux 全平台，还广泛适配 NVIDIA、AMD、Intel 及苹果 Silicon 等多种硬件架构，并率先支持 SDXL、Flux、SD3 等前沿模型。\n\n无论是希望深入探索算法潜力的研究人员和开发者，还是追求极致创作自由度的设计师与资深 AI 绘画爱好者，ComfyUI 都能提供强大的支持。其独特的模块化架构允许社区不断扩展新功能，使其成为当前最灵活、生态最丰富的开源扩散模型工具之一，帮助用户将创意高效转化为现实。",107662,"2026-04-03T11:11:01",[13,14,15],{"id":36,"name":37,"github_repo":38,"description_zh":39,"stars":40,"difficulty_score":23,"last_commit_at":41,"category_tags":42,"status":16},3704,"NextChat","ChatGPTNextWeb\u002FNextChat","NextChat 是一款轻量且极速的 AI 助手，旨在为用户提供流畅、跨平台的大模型交互体验。它完美解决了用户在多设备间切换时难以保持对话连续性，以及面对众多 AI 模型不知如何统一管理的痛点。无论是日常办公、学习辅助还是创意激发，NextChat 都能让用户随时随地通过网页、iOS、Android、Windows、MacOS 或 Linux 端无缝接入智能服务。\n\n这款工具非常适合普通用户、学生、职场人士以及需要私有化部署的企业团队使用。对于开发者而言，它也提供了便捷的自托管方案，支持一键部署到 Vercel 或 Zeabur 等平台。\n\nNextChat 的核心亮点在于其广泛的模型兼容性，原生支持 Claude、DeepSeek、GPT-4 及 Gemini Pro 等主流大模型，让用户在一个界面即可自由切换不同 AI 能力。此外，它还率先支持 MCP（Model Context Protocol）协议，增强了上下文处理能力。针对企业用户，NextChat 提供专业版解决方案，具备品牌定制、细粒度权限控制、内部知识库整合及安全审计等功能，满足公司对数据隐私和个性化管理的高标准要求。",87618,"2026-04-05T07:20:52",[13,26],{"id":44,"name":45,"github_repo":46,"description_zh":47,"stars":48,"difficulty_score":23,"last_commit_at":49,"category_tags":50,"status":16},2268,"ML-For-Beginners","microsoft\u002FML-For-Beginners","ML-For-Beginners 是由微软推出的一套系统化机器学习入门课程，旨在帮助零基础用户轻松掌握经典机器学习知识。这套课程将学习路径规划为 12 周，包含 26 节精炼课程和 52 
内容涵盖从基础概念到实际应用的完整流程，有效解决了初学者面对庞大知识体系时无从下手、缺乏结构化指导的痛点。\n\n无论是希望转型的开发者、需要补充算法背景的研究人员，还是对人工智能充满好奇的普通爱好者，都能从中受益。课程不仅提供了清晰的理论讲解，还强调动手实践，让用户在循序渐进中建立扎实的技能基础。其独特的亮点在于强大的多语言支持，通过自动化机制提供了包括简体中文在内的 50 多种语言版本，极大地降低了全球不同背景用户的学习门槛。此外，项目采用开源协作模式，社区活跃且内容持续更新，确保学习者能获取前沿且准确的技术资讯。如果你正寻找一条清晰、友好且专业的机器学习入门之路，ML-For-Beginners 将是理想的起点。",84991,"2026-04-05T10:45:23",[14,51,52,53,15,54,26,13,55],"数据工具","视频","插件","其他","音频",{"id":57,"name":58,"github_repo":59,"description_zh":60,"stars":61,"difficulty_score":10,"last_commit_at":62,"category_tags":63,"status":16},3128,"ragflow","infiniflow\u002Fragflow","RAGFlow 是一款领先的开源检索增强生成（RAG）引擎，旨在为大语言模型构建更精准、可靠的上下文层。它巧妙地将前沿的 RAG 技术与智能体（Agent）能力相结合，不仅支持从各类文档中高效提取知识，还能让模型基于这些知识进行逻辑推理和任务执行。\n\n在大模型应用中，幻觉问题和知识滞后是常见痛点。RAGFlow 通过深度解析复杂文档结构（如表格、图表及混合排版），显著提升了信息检索的准确度，从而有效减少模型“胡编乱造”的现象，确保回答既有据可依又具备时效性。其内置的智能体机制更进一步，使系统不仅能回答问题，还能自主规划步骤解决复杂问题。\n\n这款工具特别适合开发者、企业技术团队以及 AI 研究人员使用。无论是希望快速搭建私有知识库问答系统，还是致力于探索大模型在垂直领域落地的创新者，都能从中受益。RAGFlow 提供了可视化的工作流编排界面和灵活的 API 接口，既降低了非算法背景用户的上手门槛，也满足了专业开发者对系统深度定制的需求。作为基于 Apache 2.0 协议开源的项目，它正成为连接通用大模型与行业专有知识之间的重要桥梁。",77062,"2026-04-04T04:44:48",[15,14,13,26,54],{"id":65,"github_repo":66,"name":67,"description_en":68,"description_zh":69,"ai_summary_zh":69,"readme_en":70,"readme_zh":71,"quickstart_zh":72,"use_case_zh":73,"hero_image_url":74,"owner_login":75,"owner_name":76,"owner_avatar_url":77,"owner_bio":78,"owner_company":79,"owner_location":80,"owner_email":81,"owner_twitter":80,"owner_website":82,"owner_url":83,"languages":84,"stars":89,"forks":90,"last_commit_at":91,"license":92,"difficulty_score":10,"env_os":93,"env_gpu":94,"env_ram":95,"env_deps":96,"category_tags":106,"github_topics":107,"view_count":113,"oss_zip_url":80,"oss_zip_packed_at":80,"status":16,"created_at":114,"updated_at":115,"faqs":116,"releases":147},112,"PeizhuoLi\u002Fneural-blend-shapes","neural-blend-shapes","An end-to-end library for automatic character rigging, skinning, and blend shapes generation, as well as a visualization tool [SIGGRAPH 2021]","neural-blend-shapes 是一个基于深度学习的端到端开源库，专注于实现 3D 角色的自动骨骼绑定、蒙皮权重分配以及混合形状生成。在传统动画制作流程中，角色绑定往往需要大量人工干预，耗时且门槛较高。neural-blend-shapes 通过神经网络学习骨骼关节的运动规律，能够从单一网格模型自动推导出合理的骨骼结构，并生成比传统线性蒙皮更自然的变形效果，显著提升了角色动画的生产效率。\n\nneural-blend-shapes 主要适合计算机图形学研究人员、AI 开发者以及寻求自动化工作流的 3D 设计师。核心技术源于 SIGGRAPH 2021 发表的论文，引入了“神经混合形状”概念，有效解决了复杂关节处的变形失真问题。项目基于 PyTorch 构建，支持与 Blender 联动，不仅提供预训练模型供双足角色快速测试，还支持用户导入自定义网格进行推理或重新训练。此外，neural-blend-shapes 支持输出 FBX 格式动画文件，方便直接融入现有的游戏或影视制作管线，让高质量的自动绑定技术更易于落地应用。","# Learning Skeletal Articulations with Neural Blend Shapes\n\n![Python](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FPython->=3.8-Blue?logo=python)  ![Pytorch](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FPyTorch->=1.8.0-Red?logo=pytorch)\n![Blender](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FBlender-%3E=2.8-Orange?logo=blender)\n\nThis repository provides an end-to-end library for automatic character rigging, skinning, and blend shapes generation, as well as a visualization tool. It is based on our work [Learning Skeletal Articulations with Neural Blend Shapes](https:\u002F\u002Fpeizhuoli.github.io\u002Fneural-blend-shapes\u002Findex.html), published at SIGGRAPH 2021.\n\n\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FPeizhuoLi_neural-blend-shapes_readme_9e89de117027.gif\" align=\"center\">\n\n## Prerequisites\n\nOur code has been tested on Ubuntu 18.04. 
Before starting, please configure your Anaconda environment by running\n\n~~~bash\nconda env create -f environment.yaml\nconda activate neural-blend-shapes\n~~~\n\nOr you may install the following packages (and their dependencies) manually:\n\n- pytorch 1.8\n- tensorboard\n- tqdm\n- chumpy\n\nNote that the provided environment includes only the PyTorch CPU version for compatibility reasons.\n\n## Quick Start\n\nWe provide a pretrained model dedicated to biped characters. Download and extract the pretrained model from [Google Drive](https:\u002F\u002Fdrive.google.com\u002Ffile\u002Fd\u002F1S_JQY2N4qx1V6micWiIiNkHercs557rG\u002Fview?usp=sharing) or [Baidu Disk](https:\u002F\u002Fpan.baidu.com\u002Fs\u002F1y8iBqf1QfxcPWO0AWd2aVw) (9ras) and put the `pre_trained` folder under the project directory. Run\n\n~~~bash\npython demo.py --pose_file=.\u002Feval_constant\u002Fsequences\u002Fgreeting.npy --obj_path=.\u002Feval_constant\u002Fmeshes\u002Fmaynard.obj\n~~~\n\nThe greeting animation shown above will be saved in `demo\u002Fobj` as obj files. In addition, the generated skeleton will be saved as `demo\u002Fskeleton.bvh` and the skinning weight matrix will be saved as `demo\u002Fweight.npy`. If you need the bvh file animated, you may specify `--animated_bvh=1`.\n\nIf you are interested in the result of the traditional linear blend skinning (LBS) technique generated with our rig, you can specify `--envelope_only=1` to evaluate our model only with the envelope branch.\n\nWe also provide several other meshes and animation sequences. Feel free to try their combinations!\n\n\n### FBX Output (New!)\n\nNow you can choose to output the animation as a single fbx file instead of a sequence of obj files! Simply do the following:\n\n~~~bash\npython demo.py --animated_bvh=1 --obj_output=0\ncd blender_scripts\nblender -b -P nbs_fbx_output.py -- --input ..\u002Fdemo --output ..\u002Fdemo\u002Foutput.fbx\n~~~\n\nNote that you need to install Blender (>=2.80) to generate the fbx file. You may explore more options for the generated fbx file in the source code.\n\nThis code is contributed by [@huh8686](https:\u002F\u002Fgithub.com\u002Fhuh8686).\n\n### Test on Customized Meshes\n\nYou may try to run our model with your own meshes by pointing the `--obj_path` argument to the input mesh. Please make sure your mesh is triangulated and has a consistent upright and front facing orientation. Since our model requires the input meshes to be spatially aligned, please specify `--normalize=1`. 
Alternatively, you can try to scale and translate your mesh to align it with the provided `eval_constant\u002Fmeshes\u002Fsmpl_std.obj` without specifying `--normalize=1`.\n\n### Evaluation\n\nTo reconstruct the quantitative result with the pretrained model, you need to download the test dataset from [Google Drive](https:\u002F\u002Fdrive.google.com\u002Ffile\u002Fd\u002F1RwdnnFYT30L8CkUb1E36uQwLNZd1EmvP\u002Fview?usp=sharing) or [Baidu Disk](https:\u002F\u002Fpan.baidu.com\u002Fs\u002F1c5QCQE3RXzqZo6PeYjhtqQ) (8b0f), put the two extracted folders under `.\u002Fdataset`, and run\n\n~~~bash\npython evaluation.py\n~~~\n\n\n## Train from Scratch\n\nWe provide instructions for retraining our model.\n\nNote that you may need to reinstall the PyTorch CUDA version since the provided environment only includes the PyTorch CPU version.\n\nTo train the model from scratch, you need to download the training set from [Google Drive](https:\u002F\u002Fdrive.google.com\u002Ffile\u002Fd\u002F1RSd6cPYRuzt8RYWcCVL0FFFsL42OeHA7\u002Fview?usp=sharing) or [Baidu Disk](https:\u002F\u002Fpan.baidu.com\u002Fs\u002F1J-hIVyz19hKZdwKPfS3TtQ) (uqub) and put the extracted folders under `.\u002Fdataset`.\n\nThe training process contains two stages, each corresponding to one branch. To train the first stage, please run\n\n~~~bash\npython train.py --envelope=1 --save_path=[path to save the model] --device=[cpu\u002Fcuda:0\u002Fcuda:1\u002F...]\n~~~\n\nFor the second stage, it is strongly recommended to run a preprocessing step that extracts the blend shape basis before starting the training, which is much more efficient:\n\n~~~bash\npython preprocess_bs.py --save_path=[same path as the first stage] --device=[computing device]\npython train.py --residual=1 --save_path=[same path as the first stage] --device=[computing device] --lr=1e-4\n~~~\n\n## Blender Visualization\n\nWe provide a simple wrapper of Blender's Python API (>=2.80) for rendering 3D mesh animations and visualizing skinning weights. The following code has been tested on Ubuntu 18.04 and macOS Big Sur with Blender 2.92.\n\nNote that due to a limitation of Blender, you cannot run the Eevee render engine on a headless machine. \n\nWe also provide several arguments to control the behavior of the scripts. Please refer to the code for more details. To pass arguments to a Python script in Blender, please do the following:\n\n~~~bash\nblender [blend file path (optional)] -P [python script path] [-b (run in the background, optional)] -- --arg1 [ARG1] --arg2 [ARG2]\n~~~\n\n\n\n### Animation\n\nWe provide a simple light and camera setting in `eval_constant\u002Fsimple_scene.blend`. You may need to adjust it before use. We use `ffmpeg` to convert images into video. Please make sure you have installed it before running. To render the obj files generated above, run\n\n~~~bash\ncd blender_script\nblender ..\u002Feval_constant\u002Fsimple_scene.blend -P render_mesh.py -b\n~~~\n\nThe rendered per-frame images will be saved in `demo\u002Fimages`, and the composited video will be saved as `demo\u002Fvideo.mov`. \n\n### Skinning Weight\n\nVisualizing the skinning weights is a good sanity check to see whether the model works as expected. We provide a script using Blender's built-in ShaderNodeVertexColor to visualize the skinning weights. 
Simply run\n\n~~~bash\ncd blender_script\nblender -P vertex_color.py\n~~~\n\nYou will see something similar to this if the model works as expected:\n\n\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FPeizhuoLi_neural-blend-shapes_readme_8ed506d646c5.png\" align=\"center\" width=\"50%\">\n\nMeanwhile, you can import the generated skeleton (in `demo\u002Fskeleton.bvh`) into Blender. For skeleton rendering, please refer to [deep-motion-editing](https:\u002F\u002Fgithub.com\u002FDeepMotionEditing\u002Fdeep-motion-editing).\n\n## Acknowledgements\n\nThe code in `blender_scripts\u002Fnbs_fbx_output.py` is contributed by [@huh8686](https:\u002F\u002Fgithub.com\u002Fhuh8686).\n\nThe code in `meshcnn` is adapted from [MeshCNN](https:\u002F\u002Fgithub.com\u002Franahanocka\u002FMeshCNN) by [@ranahanocka](https:\u002F\u002Fgithub.com\u002Franahanocka\u002F).\n\nThe code in `models\u002Fskeleton.py` is adapted from [deep-motion-editing](https:\u002F\u002Fgithub.com\u002FDeepMotionEditing\u002Fdeep-motion-editing) by [@kfiraberman](https:\u002F\u002Fgithub.com\u002Fkfiraberman), [@PeizhuoLi](https:\u002F\u002Fgithub.com\u002FPeizhuoLi) and [@HalfSummer11](https:\u002F\u002Fgithub.com\u002FHalfSummer11).\n\nThe code in `dataset\u002Fsmpl.py` is adapted from [SMPL](https:\u002F\u002Fgithub.com\u002FCalciferZh\u002FSMPL) by [@CalciferZh](https:\u002F\u002Fgithub.com\u002FCalciferZh).\n\nSome of the test models are taken from [SMPL](https:\u002F\u002Fsmpl.is.tue.mpg.de\u002Fen), [MultiGarmentNetwork](https:\u002F\u002Fgithub.com\u002Fbharat-b7\u002FMultiGarmentNetwork) and [Adobe Mixamo](https:\u002F\u002Fwww.mixamo.com).\n\n## Citation\n\nIf you use this code for your research, please cite our paper:\n\n~~~bibtex\n@article{li2021learning,\n  author = {Li, Peizhuo and Aberman, Kfir and Hanocka, Rana and Liu, Libin and Sorkine-Hornung, Olga and Chen, Baoquan},\n  title = {Learning Skeletal Articulations with Neural Blend Shapes},\n  journal = {ACM Transactions on Graphics (TOG)},\n  volume = {40},\n  number = {4},\n  pages = {130},\n  year = {2021},\n  publisher = {ACM}\n}\n~~~\n\n","# 使用神经混合形状（Neural Blend Shapes）学习骨骼关节运动（Skeletal Articulations）\n\n![Python](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FPython->=3.8-Blue?logo=python)  ![Pytorch](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FPyTorch->=1.8.0-Red?logo=pytorch)\n![Blender](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FBlender-%3E=2.8-Orange?logo=blender)\n\n本仓库提供了一个端到端（end-to-end）的库，用于自动角色绑定（rigging）、蒙皮（skinning）和混合形状（blend shapes）生成，以及一个可视化工具。它基于我们在 SIGGRAPH 2021 上发表的工作 [Learning Skeletal Articulations with Neural Blend Shapes](https:\u002F\u002Fpeizhuoli.github.io\u002Fneural-blend-shapes\u002Findex.html)。\n\n\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FPeizhuoLi_neural-blend-shapes_readme_9e89de117027.gif\" align=\"center\">\n\n## 前提条件\n\n我们的代码已在 Ubuntu 18.04 上测试过。开始之前，请通过以下命令配置您的 Anaconda 环境：\n\n~~~bash\nconda env create -f environment.yaml\nconda activate neural-blend-shapes\n~~~\n\n或者您可以手动安装以下包（及其依赖）：\n\n- pytorch 1.8\n- tensorboard\n- tqdm\n- chumpy\n\n注意，出于兼容性考虑，提供的环境仅包含 PyTorch CPU 版本。\n\n## 快速开始\n\n我们提供了一个专为双足角色（biped characters）设计的预训练模型。从 [Google Drive](https:\u002F\u002Fdrive.google.com\u002Ffile\u002Fd\u002F1S_JQY2N4qx1V6micWiIiNkHercs557rG\u002Fview?usp=sharing) 或 [百度网盘](https:\u002F\u002Fpan.baidu.com\u002Fs\u002F1y8iBqf1QfxcPWO0AWd2aVw) (9ras) 下载并解压预训练模型，并将 `pre_trained` 文件夹放在项目目录下。运行：\n\n~~~bash\npython demo.py --pose_file=.\u002Feval_constant\u002Fsequences\u002Fgreeting.npy --obj_path=.\u002Feval_constant\u002Fmeshes\u002Fmaynard.obj\n~~~\n\n
上述精彩的问候动画将作为 obj 文件保存在 `demo\u002Fobj` 中。此外，生成的骨骼（skeleton）将保存为 `demo\u002Fskeleton.bvh`，蒙皮权重矩阵（skinning weight matrix）将保存为 `demo\u002Fweight.npy`。如果您需要动画化的 bvh 文件，可以指定 `--animated_bvh=1`。\n\n如果您对我们绑定生成的传统线性混合蒙皮（Linear Blend Skinning, LBS）技术结果感兴趣，您可以指定 `--envelope_only=1` 仅使用包络分支（envelope branch）评估我们的模型。\n\n我们还提供了其他几个网格（meshes）和动画序列。欢迎尝试它们的组合！\n\n### FBX 输出（新！）\n\n现在您可以选择将动画输出为单个 fbx 文件，而不是一系列 obj 文件！只需执行以下操作：\n\n~~~bash\npython demo.py --animated_bvh=1 --obj_output=0\ncd blender_scripts\nblender -b -P nbs_fbx_output.py -- --input ..\u002Fdemo --output ..\u002Fdemo\u002Foutput.fbx\n~~~\n\n注意，您需要安装 Blender (>=2.80) 来生成 fbx 文件。您可以在源代码中探索生成 fbx 文件的更多选项。\n\n此代码由 [@huh8686](https:\u002F\u002Fgithub.com\u002Fhuh8686) 贡献。\n\n### 在自定义网格上测试\n\n您可以通过将 `--obj_path` 参数指向输入网格（mesh），尝试使用您自己的网格运行我们的模型。请确保您的网格已三角化，并且具有一致的正立和朝前朝向。由于我们的模型要求输入网格在空间上对齐，请指定 `--normalize=1`。或者，您可以尝试缩放和平移您的网格以对齐提供的 `eval_constant\u002Fmeshes\u002Fsmpl_std.obj`，而不指定 `--normalize=1`。\n\n### 评估\n\n要使用预训练模型重建定量结果，您需要从 [Google Drive](https:\u002F\u002Fdrive.google.com\u002Ffile\u002Fd\u002F1RwdnnFYT30L8CkUb1E36uQwLNZd1EmvP\u002Fview?usp=sharing) 或 [百度网盘](https:\u002F\u002Fpan.baidu.com\u002Fs\u002F1c5QCQE3RXzqZo6PeYjhtqQ) (8b0f) 下载测试数据集，并将两个解压后的文件夹放在 `.\u002Fdataset` 下，然后运行：\n\n~~~bash\npython evaluation.py\n~~~\n\n## 从头训练\n\n我们提供了重新训练模型的说明。\n\n注意，您可能需要重新安装 PyTorch CUDA 版本，因为提供的环境仅包含 PyTorch CPU 版本。\n\n要从头训练模型，您需要从 [Google Drive](https:\u002F\u002Fdrive.google.com\u002Ffile\u002Fd\u002F1RSd6cPYRuzt8RYWcCVL0FFFsL42OeHA7\u002Fview?usp=sharing) 或 [百度网盘](https:\u002F\u002Fpan.baidu.com\u002Fs\u002F1J-hIVyz19hKZdwKPfS3TtQ) (uqub) 下载训练集，并将解压后的文件夹放在 `.\u002Fdataset` 下。\n\n训练过程包含两个阶段，每个阶段对应一个分支。要训练第一阶段，请运行：\n\n~~~bash\npython train.py --envelope=1 --save_path=[path to save the model] --device=[cpu\u002Fcuda:0\u002Fcuda:1\u002F...]\n~~~\n\n对于第二阶段，强烈建议使用预处理来提取混合形状基（blend shapes basis），然后通过以下命令开始训练以获得更高的效率：\n\n~~~bash\npython preprocess_bs.py --save_path=[same path as the first stage] --device=[computing device]\npython train.py --residual=1 --save_path=[same path as the first stage] --device=[computing device] --lr=1e-4\n~~~\n\n## Blender 可视化\n\n我们提供了一个 Blender Python API (>=2.80) 的简单封装（wrapper），用于渲染 3D 网格动画和可视化蒙皮权重。以下代码已在 Ubuntu 18.04 和 macOS Big Sur 上使用 Blender 2.92 测试过。\n\n注意，由于 Blender 的限制，您无法在无头机器（headless machine）上运行 Eevee 渲染引擎。\n\n我们还提供了几个参数来控制脚本的行为。请参阅代码了解更多详情。要将参数传递给 Blender 中的 Python 脚本，请执行以下操作：\n\n~~~bash\nblender [blend file path (optional)] -P [python script path] [-b (run in the background, optional)] -- --arg1 [ARG1] --arg2 [ARG2]\n~~~\n\n### 动画\n\n我们在 `eval_constant\u002Fsimple_scene.blend` 中提供了简单的灯光和相机设置。使用前您可能需要调整它。我们使用 `ffmpeg` 将图像转换为视频。请确保在运行前已安装它。要渲染上述生成的 obj 文件，运行：\n\n~~~bash\ncd blender_script\nblender ..\u002Feval_constant\u002Fsimple_scene.blend -P render_mesh.py -b\n~~~\n\n渲染后的每帧图像将保存在 `demo\u002Fimages` 中，合成视频将保存为 `demo\u002Fvideo.mov`。\n\n### 蒙皮权重\n\n可视化蒙皮权重是一个很好的健全性检查（sanity check），用于查看模型是否按预期工作。我们提供了一个脚本，使用 Blender 内置的 ShaderNodeVertexColor 来可视化蒙皮权重。只需运行：\n\n~~~bash\ncd blender_script\nblender -P vertex_color.py\n~~~\n\n如果模型按预期工作，您将看到类似以下内容：\n\n\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FPeizhuoLi_neural-blend-shapes_readme_8ed506d646c5.png\" align=\"center\" width=\"50%\">\n\n同时，您可以将生成的骨骼（在 `demo\u002Fskeleton.bvh` 中）导入到 Blender。关于骨骼渲染，请参考 [deep-motion-editing](https:\u002F\u002Fgithub.com\u002FDeepMotionEditing\u002Fdeep-motion-editing)。\n\n## 致谢\n\n`blender_scripts\u002Fnbs_fbx_output.py` 中的代码由 [@huh8686](https:\u002F\u002Fgithub.com\u002Fhuh8686) 贡献。\n\n
`meshcnn` 中的代码改编自 [@ranahanocka](https:\u002F\u002Fgithub.com\u002Franahanocka\u002F) 的 [MeshCNN](https:\u002F\u002Fgithub.com\u002Franahanocka\u002FMeshCNN)。\n\n`models\u002Fskeleton.py` 中的代码改编自 [@kfiraberman](https:\u002F\u002Fgithub.com\u002Fkfiraberman)、[@PeizhuoLi](https:\u002F\u002Fgithub.com\u002FPeizhuoLi) 和 [@HalfSummer11](https:\u002F\u002Fgithub.com\u002FHalfSummer11) 的 [deep-motion-editing](https:\u002F\u002Fgithub.com\u002FDeepMotionEditing\u002Fdeep-motion-editing)。\n\n`dataset\u002Fsmpl.py` 中的代码改编自 [@CalciferZh](https:\u002F\u002Fgithub.com\u002FCalciferZh) 的 [SMPL](https:\u002F\u002Fgithub.com\u002FCalciferZh\u002FSMPL)。\n\n部分测试模型取自 [SMPL](https:\u002F\u002Fsmpl.is.tue.mpg.de\u002Fen)、[MultiGarmentNetwork](https:\u002F\u002Fgithub.com\u002Fbharat-b7\u002FMultiGarmentNetwork) 和 [Adobe Mixamo](https:\u002F\u002Fwww.mixamo.com)。\n\n## 引用\n\n如果您将此代码用于研究，请引用我们的论文：\n\n~~~bibtex\n@article{li2021learning,\n  author = {Li, Peizhuo and Aberman, Kfir and Hanocka, Rana and Liu, Libin and Sorkine-Hornung, Olga and Chen, Baoquan},\n  title = {Learning Skeletal Articulations with Neural Blend Shapes},\n  journal = {ACM Transactions on Graphics (TOG)},\n  volume = {40},\n  number = {4},\n  pages = {130},\n  year = {2021},\n  publisher = {ACM}\n}\n~~~","# neural-blend-shapes 快速上手指南\n\n## 简介\nneural-blend-shapes 是一个端到端的库，用于自动角色绑定（Rigging）、蒙皮（Skinning）和混合形状（Blend Shapes）生成。本项目基于 SIGGRAPH 2021 论文 [Learning Skeletal Articulations with Neural Blend Shapes](https:\u002F\u002Fpeizhuoli.github.io\u002Fneural-blend-shapes\u002Findex.html)。\n\n## 环境准备\n在开始之前，请确保满足以下系统要求和依赖：\n\n- **操作系统**: Ubuntu 18.04 (已测试)\n- **Python**: >= 3.8\n- **PyTorch**: >= 1.8.0 (预置环境为 CPU 版本，如需 GPU 训练请自行安装 CUDA 版本)\n- **Blender**: >= 2.8 (可选，用于 FBX 导出和可视化)\n- **依赖包**: tensorboard, tqdm, chumpy\n\n## 安装步骤\n\n1. **配置 Anaconda 环境**\n   推荐使用 Conda 创建隔离环境：\n   ```bash\n   conda env create -f environment.yaml\n   conda activate neural-blend-shapes\n   ```\n\n2. **下载预训练模型**\n   下载专为双足角色设计的预训练模型，解压后将 `pre_trained` 文件夹置于项目根目录下。\n   - **百度网盘** (推荐国内用户): [下载链接](https:\u002F\u002Fpan.baidu.com\u002Fs\u002F1y8iBqf1QfxcPWO0AWd2aVw) (提取码: 9ras)\n   - Google Drive: [下载链接](https:\u002F\u002Fdrive.google.com\u002Ffile\u002Fd\u002F1S_JQY2N4qx1V6micWiIiNkHercs557rG\u002Fview?usp=sharing)\n\n## 基本使用\n\n### 1. 运行演示脚本\n使用提供的网格和动作序列生成动画。以下命令将生成问候动画：\n\n```bash\npython demo.py --pose_file=.\u002Feval_constant\u002Fsequences\u002Fgreeting.npy --obj_path=.\u002Feval_constant\u002Fmeshes\u002Fmaynard.obj\n```\n\n**输出文件说明：**\n- `demo\u002Fobj`: 生成的动画网格序列 (obj 格式)\n- `demo\u002Fskeleton.bvh`: 生成的骨骼文件\n- `demo\u002Fweight.npy`: 蒙皮权重矩阵\n\n若需要生成动态 BVH 文件，可添加参数 `--animated_bvh=1`。\n\n### 2. 导出 FBX 动画 (可选)\n如需将动画导出为单个 FBX 文件，需安装 Blender (>=2.80) 并执行以下步骤：\n\n```bash\npython demo.py --animated_bvh=1 --obj_output=0\ncd blender_scripts\nblender -b -P nbs_fbx_output.py -- --input ..\u002Fdemo --output ..\u002Fdemo\u002Foutput.fbx\n```\n\n### 3. 测试自定义网格\n
您可以使用自己的网格文件进行测试，请确保网格已三角化且方向一致（正立且朝前）。由于模型要求输入网格空间对齐，建议启用归一化参数：\n\n```bash\npython demo.py --pose_file=.\u002Feval_constant\u002Fsequences\u002Fgreeting.npy --obj_path=[您的网格路径] --normalize=1\n```","某独立游戏团队需要在两周内为原型版本制作 20 个不同体型 NPC 的行走动画，但团队缺乏专职绑定师，时间紧迫。\n\n### 没有 neural-blend-shapes 时\n- 手动绑定每个模型耗时极长，平均每个角色需 4 小时以上，严重挤压开发时间。\n- 权重绘制高度依赖人工经验，关节弯曲处容易出现模型穿插或不自然的拉伸变形。\n- 不同体型的角色需要反复调整骨骼适配，标准化程度低，导致动画效果参差不齐。\n- 一旦模型拓扑结构修改，必须重新进行绑定流程，迭代成本过高，难以响应策划变更。\n\n### 使用 neural-blend-shapes 后\n- neural-blend-shapes 自动完成绑定与蒙皮，单个角色处理仅需几分钟，效率提升数十倍。\n- 基于神经混合形状生成，关节变形自然流畅，显著减少穿插现象，视觉效果接近手工精修。\n- 支持批量处理不同体型网格，自动适配骨骼结构，保证所有 NPC 动画风格统一且稳定。\n- 模型微调后无需重新绑定，直接输入新网格即可生成动画，大幅降低迭代门槛，适应敏捷开发。\n\nneural-blend-shapes 将繁琐的手工绑定流程自动化，让小型团队也能高效产出高质量角色动画，专注核心玩法创新。","https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FPeizhuoLi_neural-blend-shapes_9e89de11.gif","PeizhuoLi","Peizhuo Li","https:\u002F\u002Foss.gittoolsai.com\u002Favatars\u002FPeizhuoLi_a8945389.jpg","PhD student at ETH Zurich","ETH Zurich",null,"peizhuo2020@gmail.com","peizhuoli.github.io","https:\u002F\u002Fgithub.com\u002FPeizhuoLi",[85],{"name":86,"color":87,"percentage":88},"Python","#3572A5",100,700,98,"2026-02-09T12:03:14","NOASSERTION","Linux, macOS","训练需要 CUDA 版本 PyTorch，推理可使用 CPU，具体显存大小和 CUDA 版本未说明","未说明",{"notes":97,"python":98,"dependencies":99},"默认提供的 conda 环境仅包含 PyTorch CPU 版本，训练需手动重新安装 CUDA 版本；Blender 用于 FBX 导出及可视化，无头模式下无法运行 Eevee 渲染引擎；输入网格需三角化并保持方向一致；预训练模型和训练\u002F测试数据集需从网盘手动下载","3.8+",[100,101,102,103,104,105],"pytorch>=1.8.0","tensorboard","tqdm","chumpy","blender>=2.8","ffmpeg",[52,13],[108,109,110,111,112],"computer-graphics","computer-animation","deep-learning","character-animation","rigging-framework",8,"2026-03-27T02:49:30.150509","2026-04-06T08:46:46.323697",[117,122,127,132,137,142],{"id":118,"question_zh":119,"answer_zh":120,"source_url":121},71,"是否支持导出动画格式（如 BVH 或 FBX）？","支持导出动画 BVH 文件。维护者已更新代码，可通过添加参数 `--animated_bvh=1` 来输出动画 BVH 文件。如果需要生成 neural-blendshapes 模型，需先导出 `coff.npy` 和 `basis.npy`（可在 `.\u002Farchitecture\u002Fblend_shapes.py` 的 forward 函数中添加保存代码）。目前暂时无法直接将生成结果集成到 FBX 文件中，建议使用 animated BVH 进行绑定。","https:\u002F\u002Fgithub.com\u002FPeizhuoLi\u002Fneural-blend-shapes\u002Fissues\u002F4",{"id":123,"question_zh":124,"answer_zh":125,"source_url":126},72,"使用 Blender 可视化动画时出现 AttributeError 错误怎么办？","这通常是因为仓库中上传了未完成的脚本（如 `vertex_color.py`）或 Blender 版本兼容性问题。请尝试拉取最新仓库代码（pull the repo），维护者已修复相关脚本。确保使用兼容的 Blender 版本（建议 2.91 以上），如果仍有 `shading` 属性错误，请检查脚本是否已更新。","https:\u002F\u002Fgithub.com\u002FPeizhuoLi\u002Fneural-blend-shapes\u002Fissues\u002F1",{"id":128,"question_zh":129,"answer_zh":130,"source_url":131},73,"输入的 3D 模型（obj 文件）需要如何对齐？","输入模型需要在空间上对齐。可以尝试启用 `--normalize=1` 参数进行自动对齐。如果效果不佳，建议与提供的 `eval_constant\u002Fmeshes\u002Fsmpl_std.obj` 手动对齐，推荐的对齐方式为将手部高度以及手臂长度对齐。","https:\u002F\u002Fgithub.com\u002FPeizhuoLi\u002Fneural-blend-shapes\u002Fissues\u002F3",{"id":133,"question_zh":134,"answer_zh":135,"source_url":136},74,"该方法是否支持非 T-pose 的网格（如 A-pose）？","不支持。该方法仅适用于 T-pose 网格，因为 neural blend shapes 旨在补偿相对于 T-pose 的变形。如果模型不是 T-pose，建议使用 Blender 的 \"align_mesh\" 工具处理旋转、位置和缩放，将其调整为 T-pose 后再使用。","https:\u002F\u002Fgithub.com\u002FPeizhuoLi\u002Fneural-blend-shapes\u002Fissues\u002F9",{"id":138,"question_zh":139,"answer_zh":140,"source_url":141},75,"导入自定义网格时出现 ZeroDivisionError 错误如何解决？","这通常是因为自定义网格的拓扑结构问题（如邻居节点数量为 0）。确保网格已三角化（triangulated）。建议使用 `trimesh` 库的过滤器处理自定义网格后再送入模型，以修复拓扑结构问题。
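下面给出一个极简的清理示意（假设性示例，并非仓库代码，文件名均为占位符），演示如何用 trimesh 的过滤器预处理自定义网格：\n\n```python\nimport trimesh\n\n# force='mesh' 会将多部件的 OBJ 场景合并为单个网格；\n# process=True 会合并重复顶点并移除退化面，从而修复多数拓扑问题\nmesh = trimesh.load('your_mesh.obj', force='mesh', process=True)\n\n# trimesh 内部以三角面存储几何，导出即得到已三角化的 OBJ\nmesh.export('your_mesh_clean.obj')\nprint(len(mesh.vertices), len(mesh.faces))\n```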
","https:\u002F\u002Fgithub.com\u002FPeizhuoLi\u002Fneural-blend-shapes\u002Fissues\u002F11",{"id":143,"question_zh":144,"answer_zh":145,"source_url":146},76,"前向运动学实现中 skeleton 参数的具体含义是什么？","`skeleton` 表示骨骼相对于其父骨骼的偏移量（offsets）。唯一的例外是根关节（root joint）的偏移量，它是相对于原点的偏移，即绝对位置。理解这一点对正确实现骨骼变换至关重要。","https:\u002F\u002Fgithub.com\u002FPeizhuoLi\u002Fneural-blend-shapes\u002Fissues\u002F26",[]]
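附注：关于上面最后一个 FAQ 中描述的偏移量约定，下面给出一个极简的前向运动学示意（假设性示例，并非仓库实现；为简洁起见省略了关节旋转，仅计算静止姿态下各关节的世界坐标）：

```python
import numpy as np

def fk_rest_pose(offsets, parents):
    # offsets: (J, 3)，每个关节相对父关节的偏移；parents[j] 为关节 j 的父关节索引，根关节为 -1
    # 假设 parents 按拓扑序排列（父关节索引总小于子关节索引）
    world = np.zeros_like(offsets)
    for j, p in enumerate(parents):
        if p == -1:
            world[j] = offsets[j]               # 根关节：偏移量即相对原点的绝对位置
        else:
            world[j] = world[p] + offsets[j]    # 其余关节：父关节世界坐标 + 相对偏移
    return world

# 三关节链示例：根关节位于 (0, 1, 0)，两个子关节各向上偏移 0.5
offsets = np.array([[0.0, 1.0, 0.0],
                    [0.0, 0.5, 0.0],
                    [0.0, 0.5, 0.0]])
print(fk_rest_pose(offsets, [-1, 0, 1]))  # 依次输出 (0,1,0)、(0,1.5,0)、(0,2,0)
```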