[{"data":1,"prerenderedAt":-1},["ShallowReactive",2],{"similar-CMU-Perceptual-Computing-Lab--openpose_train":3,"tool-CMU-Perceptual-Computing-Lab--openpose_train":64},[4,17,27,35,43,56],{"id":5,"name":6,"github_repo":7,"description_zh":8,"stars":9,"difficulty_score":10,"last_commit_at":11,"category_tags":12,"status":16},3808,"stable-diffusion-webui","AUTOMATIC1111\u002Fstable-diffusion-webui","stable-diffusion-webui 是一个基于 Gradio 构建的网页版操作界面，旨在让用户能够轻松地在本地运行和使用强大的 Stable Diffusion 图像生成模型。它解决了原始模型依赖命令行、操作门槛高且功能分散的痛点，将复杂的 AI 绘图流程整合进一个直观易用的图形化平台。\n\n无论是希望快速上手的普通创作者、需要精细控制画面细节的设计师，还是想要深入探索模型潜力的开发者与研究人员，都能从中获益。其核心亮点在于极高的功能丰富度：不仅支持文生图、图生图、局部重绘（Inpainting）和外绘（Outpainting）等基础模式，还独创了注意力机制调整、提示词矩阵、负向提示词以及“高清修复”等高级功能。此外，它内置了 GFPGAN 和 CodeFormer 等人脸修复工具，支持多种神经网络放大算法，并允许用户通过插件系统无限扩展能力。即使是显存有限的设备，stable-diffusion-webui 也提供了相应的优化选项，让高质量的 AI 艺术创作变得触手可及。",162132,3,"2026-04-05T11:01:52",[13,14,15],"开发框架","图像","Agent","ready",{"id":18,"name":19,"github_repo":20,"description_zh":21,"stars":22,"difficulty_score":23,"last_commit_at":24,"category_tags":25,"status":16},1381,"everything-claude-code","affaan-m\u002Feverything-claude-code","everything-claude-code 是一套专为 AI 编程助手（如 Claude Code、Codex、Cursor 等）打造的高性能优化系统。它不仅仅是一组配置文件，而是一个经过长期实战打磨的完整框架，旨在解决 AI 代理在实际开发中面临的效率低下、记忆丢失、安全隐患及缺乏持续学习能力等核心痛点。\n\n通过引入技能模块化、直觉增强、记忆持久化机制以及内置的安全扫描功能，everything-claude-code 能显著提升 AI 在复杂任务中的表现，帮助开发者构建更稳定、更智能的生产级 AI 代理。其独特的“研究优先”开发理念和针对 Token 消耗的优化策略，使得模型响应更快、成本更低，同时有效防御潜在的攻击向量。\n\n这套工具特别适合软件开发者、AI 研究人员以及希望深度定制 AI 工作流的技术团队使用。无论您是在构建大型代码库，还是需要 AI 协助进行安全审计与自动化测试，everything-claude-code 都能提供强大的底层支持。作为一个曾荣获 Anthropic 黑客大奖的开源项目，它融合了多语言支持与丰富的实战钩子（hooks），让 AI 真正成长为懂上",140436,2,"2026-04-05T23:32:43",[13,15,26],"语言模型",{"id":28,"name":29,"github_repo":30,"description_zh":31,"stars":32,"difficulty_score":23,"last_commit_at":33,"category_tags":34,"status":16},2271,"ComfyUI","Comfy-Org\u002FComfyUI","ComfyUI 是一款功能强大且高度模块化的视觉 AI 引擎，专为设计和执行复杂的 Stable Diffusion 图像生成流程而打造。它摒弃了传统的代码编写模式，采用直观的节点式流程图界面，让用户通过连接不同的功能模块即可构建个性化的生成管线。\n\n这一设计巧妙解决了高级 AI 绘图工作流配置复杂、灵活性不足的痛点。用户无需具备编程背景，也能自由组合模型、调整参数并实时预览效果，轻松实现从基础文生图到多步骤高清修复等各类复杂任务。ComfyUI 拥有极佳的兼容性，不仅支持 Windows、macOS 和 Linux 全平台，还广泛适配 NVIDIA、AMD、Intel 及苹果 Silicon 等多种硬件架构，并率先支持 SDXL、Flux、SD3 等前沿模型。\n\n无论是希望深入探索算法潜力的研究人员和开发者，还是追求极致创作自由度的设计师与资深 AI 绘画爱好者，ComfyUI 都能提供强大的支持。其独特的模块化架构允许社区不断扩展新功能，使其成为当前最灵活、生态最丰富的开源扩散模型工具之一，帮助用户将创意高效转化为现实。",107662,"2026-04-03T11:11:01",[13,14,15],{"id":36,"name":37,"github_repo":38,"description_zh":39,"stars":40,"difficulty_score":23,"last_commit_at":41,"category_tags":42,"status":16},3704,"NextChat","ChatGPTNextWeb\u002FNextChat","NextChat 是一款轻量且极速的 AI 助手，旨在为用户提供流畅、跨平台的大模型交互体验。它完美解决了用户在多设备间切换时难以保持对话连续性，以及面对众多 AI 模型不知如何统一管理的痛点。无论是日常办公、学习辅助还是创意激发，NextChat 都能让用户随时随地通过网页、iOS、Android、Windows、MacOS 或 Linux 端无缝接入智能服务。\n\n这款工具非常适合普通用户、学生、职场人士以及需要私有化部署的企业团队使用。对于开发者而言，它也提供了便捷的自托管方案，支持一键部署到 Vercel 或 Zeabur 等平台。\n\nNextChat 的核心亮点在于其广泛的模型兼容性，原生支持 Claude、DeepSeek、GPT-4 及 Gemini Pro 等主流大模型，让用户在一个界面即可自由切换不同 AI 能力。此外，它还率先支持 MCP（Model Context Protocol）协议，增强了上下文处理能力。针对企业用户，NextChat 提供专业版解决方案，具备品牌定制、细粒度权限控制、内部知识库整合及安全审计等功能，满足公司对数据隐私和个性化管理的高标准要求。",87618,"2026-04-05T07:20:52",[13,26],{"id":44,"name":45,"github_repo":46,"description_zh":47,"stars":48,"difficulty_score":23,"last_commit_at":49,"category_tags":50,"status":16},2268,"ML-For-Beginners","microsoft\u002FML-For-Beginners","ML-For-Beginners 是由微软推出的一套系统化机器学习入门课程，旨在帮助零基础用户轻松掌握经典机器学习知识。这套课程将学习路径规划为 12 周，包含 26 节精炼课程和 52 
道配套测验，内容涵盖从基础概念到实际应用的完整流程，有效解决了初学者面对庞大知识体系时无从下手、缺乏结构化指导的痛点。\n\n无论是希望转型的开发者、需要补充算法背景的研究人员，还是对人工智能充满好奇的普通爱好者，都能从中受益。课程不仅提供了清晰的理论讲解，还强调动手实践，让用户在循序渐进中建立扎实的技能基础。其独特的亮点在于强大的多语言支持，通过自动化机制提供了包括简体中文在内的 50 多种语言版本，极大地降低了全球不同背景用户的学习门槛。此外，项目采用开源协作模式，社区活跃且内容持续更新，确保学习者能获取前沿且准确的技术资讯。如果你正寻找一条清晰、友好且专业的机器学习入门之路，ML-For-Beginners 将是理想的起点。",84991,"2026-04-05T10:45:23",[14,51,52,53,15,54,26,13,55],"数据工具","视频","插件","其他","音频",{"id":57,"name":58,"github_repo":59,"description_zh":60,"stars":61,"difficulty_score":10,"last_commit_at":62,"category_tags":63,"status":16},3128,"ragflow","infiniflow\u002Fragflow","RAGFlow 是一款领先的开源检索增强生成（RAG）引擎，旨在为大语言模型构建更精准、可靠的上下文层。它巧妙地将前沿的 RAG 技术与智能体（Agent）能力相结合，不仅支持从各类文档中高效提取知识，还能让模型基于这些知识进行逻辑推理和任务执行。\n\n在大模型应用中，幻觉问题和知识滞后是常见痛点。RAGFlow 通过深度解析复杂文档结构（如表格、图表及混合排版），显著提升了信息检索的准确度，从而有效减少模型“胡编乱造”的现象，确保回答既有据可依又具备时效性。其内置的智能体机制更进一步，使系统不仅能回答问题，还能自主规划步骤解决复杂问题。\n\n这款工具特别适合开发者、企业技术团队以及 AI 研究人员使用。无论是希望快速搭建私有知识库问答系统，还是致力于探索大模型在垂直领域落地的创新者，都能从中受益。RAGFlow 提供了可视化的工作流编排界面和灵活的 API 接口，既降低了非算法背景用户的上手门槛，也满足了专业开发者对系统深度定制的需求。作为基于 Apache 2.0 协议开源的项目，它正成为连接通用大模型与行业专有知识之间的重要桥梁。",77062,"2026-04-04T04:44:48",[15,14,13,26,54],{"id":65,"github_repo":66,"name":67,"description_en":68,"description_zh":69,"ai_summary_zh":69,"readme_en":70,"readme_zh":71,"quickstart_zh":72,"use_case_zh":73,"hero_image_url":74,"owner_login":75,"owner_name":75,"owner_avatar_url":76,"owner_bio":77,"owner_company":77,"owner_location":77,"owner_email":77,"owner_twitter":77,"owner_website":77,"owner_url":78,"languages":79,"stars":118,"forks":119,"last_commit_at":120,"license":121,"difficulty_score":122,"env_os":123,"env_gpu":124,"env_ram":125,"env_deps":126,"category_tags":129,"github_topics":130,"view_count":23,"oss_zip_url":77,"oss_zip_packed_at":77,"status":16,"created_at":140,"updated_at":141,"faqs":142,"releases":172},1330,"CMU-Perceptual-Computing-Lab\u002Fopenpose_train","openpose_train","Training repository for OpenPose","openpose_train 是 OpenPose 官方推出的“训练版”仓库，专门用来复现、改进或定制 OpenPose 的人体关键点检测模型。它解决了“想用 OpenPose，却只能用官方现成模型”的痛点：你可以用自己的数据训练出更准、更轻或针对特定场景的模型，还能提前体验一些尚未合并到主项目的实验网络，比如一次就能预测全身 135 个关键点的新架构 BODY_135，以及精度更高但速度稍慢的 BODY_25B。仓库自带数据预处理、训练脚本和验证脚本，并给出在 Ubuntu + CUDA 环境下的完整示例。  \n适合人群：  \n• 计算机视觉研究者——想复现论文或改进关键点算法；  \n• 算法工程师——需要为特定场景（安防、体育、医疗）定制模型；  \n• 高阶开发者——想把 OpenPose 移植到边缘设备，先做模型蒸馏或剪枝。  \n普通用户或设计师建议直接使用已编译好的 OpenPose 主项目即可。","# OpenPose Training (Experimental)\n\n\u003Cdiv align=\"center\">\n    \u003Cimg src=\".github\u002FLogo_main_black.png\", width=\"300\">\n\u003C\u002Fdiv>\n\n-----------------\n\n\n\n## Contents\n1. [Introduction](#introduction)\n2. [Functionality](#functionality)\n3. [Testing](#testing)\n4. [Training](#training)\n5. [Citation](#citation)\n6. [License](#license)\n\n\n\n## Experimental Disclaimer\nWhile [**OpenPose**](https:\u002F\u002Fgithub.com\u002FCMU-Perceptual-Computing-Lab\u002Fopenpose) is highly tested and stable, this training repository is highly experimental and not production ready. Use at your own risk.\n\nThis repository was used and tested on Ubuntu 16 with CUDA 8, and it compiles in Ubuntu 20 with WSL2 (Windows 11). 
It should work with other versions of Ubuntu and up to CUDA 10, but it might require modifications.

## Introduction
[**OpenPose**](https://github.com/CMU-Perceptual-Computing-Lab/openpose) was the **first real-time multi-person system to jointly detect human body, hand, facial, and foot keypoints (135 keypoints in total) on single images**.

It is **authored by** [**Ginés Hidalgo**](https://www.gineshidalgo.com), [**Zhe Cao**](https://people.eecs.berkeley.edu/~zhecao), [**Tomas Simon**](http://www.cs.cmu.edu/~tsimon), [**Shih-En Wei**](https://scholar.google.com/citations?user=sFQD3k4AAAAJ&hl=en), [**Yaadhav Raaj**](https://www.raaj.tech), [**Hanbyul Joo**](https://jhugestar.github.io), **and** [**Yaser Sheikh**](http://www.cs.cmu.edu/~yaser). It is **maintained by** [**Ginés Hidalgo**](https://www.gineshidalgo.com) **and** [**Yaadhav Raaj**](https://www.raaj.tech). OpenPose would not be possible without the [**CMU Panoptic Studio dataset**](http://domedb.perception.cs.cmu.edu). We would also like to thank all the people who [have helped OpenPose in any way](doc/09_authors_and_contributors.md).

[**OpenPose Training**](https://github.com/CMU-Perceptual-Computing-Lab/openpose_training) includes the training code for [**OpenPose**](https://github.com/CMU-Perceptual-Computing-Lab/openpose), as well as some experimental models that might not necessarily end up in OpenPose (to avoid confusing its users with too many models).

This repository and its documentation assume knowledge of [OpenPose](https://github.com/CMU-Perceptual-Computing-Lab/openpose). If you have not used OpenPose yet, you must familiarize yourself with it before attempting to follow this documentation.

## Functionality
- **Training code** for [**OpenPose**](https://github.com/CMU-Perceptual-Computing-Lab/openpose).
- Release of some **experimental models** that have not been included in [**OpenPose**](https://github.com/CMU-Perceptual-Computing-Lab/openpose). These models are experimental and might present some issues compared to the models officially released inside OpenPose.
    - `BODY_135`: Whole-body pose estimation models from [Single-Network Whole-Body Pose Estimation](https://arxiv.org/abs/1909.13423).
    - `BODY_25B`: Alternative to the `BODY_25` model of OpenPose, with higher accuracy but slower speed.

## Experimental Models
The `experimental_models` directory contains our experimental models, including the whole-body model from [Single-Network Whole-Body Pose Estimation](README.md#citation), as well as instructions to make it run inside [OpenPose](https://github.com/CMU-Perceptual-Computing-Lab/openpose). See [experimental_models/README.md](experimental_models/README.md) for more details.
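Once the files from `experimental_models/` are installed following that README, an experimental model can be selected like any stock one. Below is a minimal sketch using OpenPose's Python bindings; it assumes OpenPose was built with `BUILD_PYTHON` enabled and that the `BODY_25B` prototxt/caffemodel files sit under `models/` (the parameter names are OpenPose's own, but the exact BODY_25B setup steps live in `experimental_models/README.md`, so treat this as an orientation sketch rather than the repo's procedure):

```python
import cv2
import pyopenpose as op  # available after building OpenPose with BUILD_PYTHON=ON

# Select the experimental model; "model_folder" must already contain the
# BODY_25B files copied in per experimental_models/README.md.
params = {"model_folder": "models/", "model_pose": "BODY_25B"}

wrapper = op.WrapperPython()
wrapper.configure(params)
wrapper.start()

datum = op.Datum()
datum.cvInputData = cv2.imread("input.jpg")  # any test image
wrapper.emplaceAndPop(op.VectorDatum([datum]))

# poseKeypoints has shape (people, keypoints, 3): x, y, confidence.
print(datum.poseKeypoints.shape)
```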
## Testing
See [testing/README.md](testing/README.md) for more details.

## Training
The [training/](training/) directory contains the scripts that generate the training scripts, as well as the ones that run the actual training. See [training/README.md](training/README.md) for more details.

## Validation
The [validation/](validation/) directory contains multiple scripts to evaluate the accuracy of the trained models. See [validation/README.md](validation/README.md) for more details.
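The validation scripts report accuracy as COCO keypoint mAP (the metric behind the 53.2% figure quoted for `BODY_25B` in the FAQ below). For orientation, the same number can be computed directly with `pycocotools`; this is a generic sketch, not one of the repo's scripts, and it assumes predictions were exported in the standard COCO keypoint-results JSON format (the file names are placeholders):

```python
from pycocotools.coco import COCO
from pycocotools.cocoeval import COCOeval

# Ground-truth annotations and detections, both in COCO keypoint format.
coco_gt = COCO("annotations/person_keypoints_val2017.json")
coco_dt = coco_gt.loadRes("predictions_keypoints.json")  # hypothetical export path

evaluator = COCOeval(coco_gt, coco_dt, iouType="keypoints")
evaluator.evaluate()
evaluator.accumulate()
evaluator.summarize()  # the first AP line (OKS=0.50:0.95) is the headline mAP
```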
## Citation
Please cite these papers in your publications if they help your research (the face keypoint detector was trained using the procedure described in [Simon et al. 2017] for hands):

    @inproceedings{hidalgo2019singlenetwork,
      author = {Gines Hidalgo and Yaadhav Raaj and Haroon Idrees and Donglai Xiang and Hanbyul Joo and Tomas Simon and Yaser Sheikh},
      booktitle = {ICCV},
      title = {Single-Network Whole-Body Pose Estimation},
      year = {2019}
    }

    @inproceedings{cao2018openpose,
      author = {Zhe Cao and Gines Hidalgo and Tomas Simon and Shih-En Wei and Yaser Sheikh},
      booktitle = {arXiv preprint arXiv:1812.08008},
      title = {Open{P}ose: realtime multi-person 2{D} pose estimation using {P}art {A}ffinity {F}ields},
      year = {2018}
    }

Links to the papers:

- [Single-Network Whole-Body Pose Estimation](https://arxiv.org/abs/1909.13423)
- [OpenPose: Realtime Multi-Person 2D Pose Estimation using Part Affinity Fields](https://arxiv.org/abs/1812.08008)

## License
OpenPose is freely available for non-commercial use, and may be redistributed under these conditions. Please see the [license](./LICENSE) for further details. Interested in a commercial license? Check this [FlintBox link](https://cmu.flintbox.com/#technologies/b820c21d-8443-4aa2-a49f-8919d93a8740). For commercial queries, use the `Contact` section from the [FlintBox link](https://cmu.flintbox.com/#technologies/b820c21d-8443-4aa2-a49f-8919d93a8740) and also send a copy of that message to [Yaser Sheikh](mailto:yaser@cs.cmu.edu).
---

# openpose_train Quickstart

> ⚠️ Experimental: this repository contains experimental training code and is **not production-ready**; use with care.

## Requirements

| Component | Recommended version |
|---|---|
| OS | Ubuntu 16.04 / 20.04 (WSL2 also works) |
| CUDA | 8.0 – 10.x |
| cuDNN | version matching your CUDA |
| Python | 3.6+ |
| CMake | ≥ 3.12 |
| GCC | ≥ 5.4 |

> Users in mainland China: consider configuring the [Tsinghua TUNA mirror](https://mirrors.tuna.tsinghua.edu.cn/help/ubuntu/) or the [USTC mirror](https://mirrors.ustc.edu.cn/) first to speed up apt and pip downloads.

## Installation

1. Clone the repository
   ```bash
   git clone https://github.com/CMU-Perceptual-Computing-Lab/openpose_training.git
   cd openpose_training
   ```

2. Install system dependencies
   ```bash
   sudo apt update
   sudo apt install build-essential cmake git pkg-config libatlas-base-dev \
                    libboost-all-dev libopencv-dev python3-dev python3-pip
   ```

3. Install Python dependencies (optionally via the Tsinghua PyPI mirror)
   ```bash
   pip3 install -i https://pypi.tuna.tsinghua.edu.cn/simple -r requirements.txt
   ```

4. Build Caffe (OpenPose's default backend)
   ```bash
   cd training
   bash scripts/ubuntu/install_caffe.sh
   ```

5. Verify the installation
   ```bash
   python3 -c "import caffe; print('Caffe OK')"
   ```

## Basic usage

### 1. Prepare the data
Place COCO, MPII, or your own data under `training/dataset/` as described in `training/dataset/README.md`.

### 2. Generate the training scripts
```bash
cd training
python3 gen_training_scripts.py \
       --dataset coco \
       --model BODY_25B \
       --output_dir scripts
```

### 3. Start training
```bash
bash scripts/train_BODY_25B.sh
```

> The first run downloads the **BODY_25B** pretrained weights automatically; if your network blocks the download, place the weights in the expected location by hand.
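Training runs are long (the FAQ below quotes roughly 800k iterations for `BODY_25B`), so it helps to watch the loss before committing to a full run. A small helper sketch, assuming the training script's output was captured to a log file in Caffe's standard `Iteration N ..., loss = X` format (the helper itself is hypothetical, not part of the repo):

```python
import re
import sys

# Usage: python3 watch_loss.py train.log   (hypothetical helper, not in the repo)
# Prints "(iteration, loss)" pairs parsed from a standard Caffe training log.
PATTERN = re.compile(r"Iteration (\d+).*?loss = ([0-9.eE+-]+)")

with open(sys.argv[1]) as log:
    for line in log:
        match = PATTERN.search(line)
        if match:
            print(int(match.group(1)), float(match.group(2)))
```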
### 4. Validate
```bash
python3 gen_validation_scripts.py \
       --dataset coco \
       --output_dir validation
```

Once training finishes, the experimental model is ready to use.

---

# Use case

An online dance-tutoring startup needs real-time whole-body keypoint annotation on instructors' demo videos so students can check whether their movements match the reference.

**Before openpose_train**
- Only the stock BODY_25 model was available; hand and foot keypoints often drifted, and students complained that "the moves don't line up".
- When instructors wore loose clothing, occluded hips and knees were missed outright, and the scoring system returned 0.
- The team wanted to annotate 2,000 dance images and retrain, but the main project ships no training scripts.
- Rebuilding the network in a third-party training framework broke on every CUDA upgrade; two weeks went to fixing environments.

**After openpose_train**
- Loading the experimental BODY_135 model covers fingertips and toes with 135 keypoints; motion-comparison accuracy improved by 30%.
- Fine-tuning for 5 epochs on in-house dance data cut the miss rate from 18% to 4%, so the scorer stopped penalizing students unfairly.
- The training directory generates training scripts in one step; 2,000 images on two RTX 3090s trained overnight and shipped the next day.
- The code compiled directly on Ubuntu 20 + CUDA 11, and the README's WSL2 notes saved half a day of debugging.

With minimal staffing, openpose_train let the dance platform push whole-body keypoint accuracy to its ceiling; weekly student retention rose 12%.

---

# Repository metadata

- **Owner:** [CMU-Perceptual-Computing-Lab](https://github.com/CMU-Perceptual-Computing-Lab)
- **Stars / forks:** 613 / 186
- **Last commit:** 2026-04-05
- **License:** non-standard (GitHub reports NOASSERTION); see LICENSE
- **Difficulty score:** 4
- **Categories:** development framework, image
- **GitHub topics:** openpose, openpose-training, computer-vision, machine-learning, human-pose-estimation, real-time, deep-learning, human-behavior-understanding, iccv-2019

**Languages**

| Language | Share |
|---|---|
| Python | 47.9% |
| MATLAB | 17.5% |
| C++ | 15.3% |
| Pascal | 8.9% |
| C# | 8.6% |
| Shell | 1.2% |
| Jupyter Notebook | 0.5% |
| M | 0.1% |
| CMake | < 0.1% |
| Makefile | < 0.1% |

**Environment**

- OS: Linux
- GPU: NVIDIA GPU required; CUDA 8 (officially tested) up to CUDA 10 (with modifications); VRAM requirement not stated
- RAM: not stated
- Notes: experimental repository, not production-ready; tested on Ubuntu 16 + CUDA 8; compiles on Ubuntu 20 + WSL2 (Windows 11); familiarity with the main OpenPose project is required

---

# FAQ

**Q: The BODY_25 pretrained model fails to load into openpose_caffe_train for fine-tuning, complaining that clip_param.min/max are missing. What can I do?**

A: The officially released BODY_25 pretrained model is currently incompatible with openpose_caffe_train. Either wait for an official pretrained BODY_25 model compatible with the d-setLayer.py generation script, or train a BODY_25B model from scratch (in the authors' experiments, mAP reached 53.2% with 4 GPUs, batch size 10, after roughly 800k iterations). ([issue #3](https://github.com/CMU-Perceptual-Computing-Lab/openpose_train/issues/3))

**Q: I only want a model that outputs the 18 COCO body keypoints, but training fails with "mSources.size()!=nModels.size()". How do I fix it?**

A: The error means the `model` entries in the prototxt do not match the number of `source` entries. To fix it (see the sketch after this answer):
1. In `generateProtoTxt.py`, set `sAddFoot=0`, `sAddMpii=0`, `sAddFace=0`, `sAddHands=0`, `sAddDome=0`.
2. Change the data layer's `model` field to `"COCO_18"` (rather than `"COCO_17;COCO_17_17"` or similar).
3. Make sure the LMDB path is correct, regenerate the prototxt, and train again. ([issue #29](https://github.com/CMU-Perceptual-Computing-Lab/openpose_train/issues/29))
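A sketch of those settings as they would appear in `generateProtoTxt.py` (the flag names come straight from the answer above; everything else in the file stays as the repo ships it):

```python
# generateProtoTxt.py -- train a body-only COCO_18 model:
# disable every extra dataset/branch so exactly one source/model pair remains.
sAddFoot = 0   # no foot keypoints
sAddMpii = 0   # no MPII dataset
sAddFace = 0   # no face keypoints
sAddHands = 0  # no hand keypoints
sAddDome = 0   # no Panoptic (dome) data

# In the generated data layer, the model field must then read:
#   model: "COCO_18"        # not "COCO_17;COCO_17_17"
```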
**Q: Downloading the LMDBs with a_downloadAndUpzipLmdbs.sh fails with 404/502 errors. What can I do?**

A: The download links have been fixed upstream. If you still hit a 502 Bad Gateway:
1. Retry later (the server fails intermittently);
2. Download the corresponding archives manually from the CMU site and unpack them under the `dataset/` directory. ([issue #23](https://github.com/CMU-Perceptual-Computing-Lab/openpose_train/issues/23))

**Q: Where are the order and indices of the 135 BODY_135 keypoints documented? And if I only use 24 of them, what happens to the rest?**

A: No detailed index document like BODY_25's has been published yet. If you label only a subset (say, 24 body points), the annotation JSON must still list all 135 keypoints, with unused points set to (0,0,0), and the indices must follow the BODY_135 order (0: FaceContour0, 1: FaceContour1, ...). You cannot skip indices or invent your own numbering; see the sketch at the end of this FAQ. ([issue #20](https://github.com/CMU-Perceptual-Computing-Lab/openpose_train/issues/20))

**Q: Why is there no sigmoid activation on the network's last layer?**

A: The authors say they have not studied the trade-offs of such activation functions and kept the original design; users are free to experiment with the substitution and compare results. ([issue #45](https://github.com/CMU-Perceptual-Computing-Lab/openpose_train/issues/45))

**Q: What input resolution, batch size, and iteration count reproduce the official 53.2% mAP for body_25b?**

A: The authors' recommended configuration:
- input resolution 368×368;
- batch size 10 (4 GPUs in parallel);
- roughly 800k iterations, then pick the checkpoint with the highest validation mAP. ([issue #3](https://github.com/CMU-Perceptual-Computing-Lab/openpose_train/issues/3))
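Following up on the BODY_135 subset question above, here is a minimal sketch of building such an annotation in Python. The flattened `[x, y, v]` keypoint layout is a COCO-style assumption, and the two indices used are arbitrary examples; the real index-to-name mapping must follow the BODY_135 order:

```python
# Build a BODY_135 "keypoints" entry where only a labelled subset is filled in;
# every unused keypoint is written as (0, 0, 0), as the FAQ answer requires.
NUM_KEYPOINTS = 135

def make_body135_keypoints(labelled):
    """labelled: dict mapping BODY_135 index -> (x, y, visibility)."""
    kps = [0.0] * (NUM_KEYPOINTS * 3)      # all 135 points default to (0, 0, 0)
    for idx, (x, y, v) in labelled.items():
        if not 0 <= idx < NUM_KEYPOINTS:   # indices must stay in BODY_135 range
            raise ValueError(f"index {idx} outside BODY_135")
        kps[3 * idx:3 * idx + 3] = [x, y, v]
    return kps

# Hypothetical example: two labelled points; the other 133 stay zeroed.
annotation = {"keypoints": make_body135_keypoints({70: (123.0, 45.0, 2.0),
                                                   71: (130.0, 48.0, 2.0)})}
```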