[{"data":1,"prerenderedAt":-1},["ShallowReactive",2],{"similar-tensorlayer--HyperPose":3,"tool-tensorlayer--HyperPose":64},[4,17,27,35,43,56],{"id":5,"name":6,"github_repo":7,"description_zh":8,"stars":9,"difficulty_score":10,"last_commit_at":11,"category_tags":12,"status":16},3808,"stable-diffusion-webui","AUTOMATIC1111\u002Fstable-diffusion-webui","stable-diffusion-webui 是一个基于 Gradio 构建的网页版操作界面，旨在让用户能够轻松地在本地运行和使用强大的 Stable Diffusion 图像生成模型。它解决了原始模型依赖命令行、操作门槛高且功能分散的痛点，将复杂的 AI 绘图流程整合进一个直观易用的图形化平台。\n\n无论是希望快速上手的普通创作者、需要精细控制画面细节的设计师，还是想要深入探索模型潜力的开发者与研究人员，都能从中获益。其核心亮点在于极高的功能丰富度：不仅支持文生图、图生图、局部重绘（Inpainting）和外绘（Outpainting）等基础模式，还独创了注意力机制调整、提示词矩阵、负向提示词以及“高清修复”等高级功能。此外，它内置了 GFPGAN 和 CodeFormer 等人脸修复工具，支持多种神经网络放大算法，并允许用户通过插件系统无限扩展能力。即使是显存有限的设备，stable-diffusion-webui 也提供了相应的优化选项，让高质量的 AI 艺术创作变得触手可及。",162132,3,"2026-04-05T11:01:52",[13,14,15],"开发框架","图像","Agent","ready",{"id":18,"name":19,"github_repo":20,"description_zh":21,"stars":22,"difficulty_score":23,"last_commit_at":24,"category_tags":25,"status":16},1381,"everything-claude-code","affaan-m\u002Feverything-claude-code","everything-claude-code 是一套专为 AI 编程助手（如 Claude Code、Codex、Cursor 等）打造的高性能优化系统。它不仅仅是一组配置文件，而是一个经过长期实战打磨的完整框架，旨在解决 AI 代理在实际开发中面临的效率低下、记忆丢失、安全隐患及缺乏持续学习能力等核心痛点。\n\n通过引入技能模块化、直觉增强、记忆持久化机制以及内置的安全扫描功能，everything-claude-code 能显著提升 AI 在复杂任务中的表现，帮助开发者构建更稳定、更智能的生产级 AI 代理。其独特的“研究优先”开发理念和针对 Token 消耗的优化策略，使得模型响应更快、成本更低，同时有效防御潜在的攻击向量。\n\n这套工具特别适合软件开发者、AI 研究人员以及希望深度定制 AI 工作流的技术团队使用。无论您是在构建大型代码库，还是需要 AI 协助进行安全审计与自动化测试，everything-claude-code 都能提供强大的底层支持。作为一个曾荣获 Anthropic 黑客大奖的开源项目，它融合了多语言支持与丰富的实战钩子（hooks），让 AI 真正成长为懂上",138956,2,"2026-04-05T11:33:21",[13,15,26],"语言模型",{"id":28,"name":29,"github_repo":30,"description_zh":31,"stars":32,"difficulty_score":23,"last_commit_at":33,"category_tags":34,"status":16},2271,"ComfyUI","Comfy-Org\u002FComfyUI","ComfyUI 是一款功能强大且高度模块化的视觉 AI 引擎，专为设计和执行复杂的 Stable Diffusion 图像生成流程而打造。它摒弃了传统的代码编写模式，采用直观的节点式流程图界面，让用户通过连接不同的功能模块即可构建个性化的生成管线。\n\n这一设计巧妙解决了高级 AI 
绘图工作流配置复杂、灵活性不足的痛点。用户无需具备编程背景，也能自由组合模型、调整参数并实时预览效果，轻松实现从基础文生图到多步骤高清修复等各类复杂任务。ComfyUI 拥有极佳的兼容性，不仅支持 Windows、macOS 和 Linux 全平台，还广泛适配 NVIDIA、AMD、Intel 及苹果 Silicon 等多种硬件架构，并率先支持 SDXL、Flux、SD3 等前沿模型。\n\n无论是希望深入探索算法潜力的研究人员和开发者，还是追求极致创作自由度的设计师与资深 AI 绘画爱好者，ComfyUI 都能提供强大的支持。其独特的模块化架构允许社区不断扩展新功能，使其成为当前最灵活、生态最丰富的开源扩散模型工具之一，帮助用户将创意高效转化为现实。",107662,"2026-04-03T11:11:01",[13,14,15],{"id":36,"name":37,"github_repo":38,"description_zh":39,"stars":40,"difficulty_score":23,"last_commit_at":41,"category_tags":42,"status":16},3704,"NextChat","ChatGPTNextWeb\u002FNextChat","NextChat 是一款轻量且极速的 AI 助手，旨在为用户提供流畅、跨平台的大模型交互体验。它完美解决了用户在多设备间切换时难以保持对话连续性，以及面对众多 AI 模型不知如何统一管理的痛点。无论是日常办公、学习辅助还是创意激发，NextChat 都能让用户随时随地通过网页、iOS、Android、Windows、MacOS 或 Linux 端无缝接入智能服务。\n\n这款工具非常适合普通用户、学生、职场人士以及需要私有化部署的企业团队使用。对于开发者而言，它也提供了便捷的自托管方案，支持一键部署到 Vercel 或 Zeabur 等平台。\n\nNextChat 的核心亮点在于其广泛的模型兼容性，原生支持 Claude、DeepSeek、GPT-4 及 Gemini Pro 等主流大模型，让用户在一个界面即可自由切换不同 AI 能力。此外，它还率先支持 MCP（Model Context Protocol）协议，增强了上下文处理能力。针对企业用户，NextChat 提供专业版解决方案，具备品牌定制、细粒度权限控制、内部知识库整合及安全审计等功能，满足公司对数据隐私和个性化管理的高标准要求。",87618,"2026-04-05T07:20:52",[13,26],{"id":44,"name":45,"github_repo":46,"description_zh":47,"stars":48,"difficulty_score":23,"last_commit_at":49,"category_tags":50,"status":16},2268,"ML-For-Beginners","microsoft\u002FML-For-Beginners","ML-For-Beginners 是由微软推出的一套系统化机器学习入门课程，旨在帮助零基础用户轻松掌握经典机器学习知识。这套课程将学习路径规划为 12 周，包含 26 节精炼课程和 52 道配套测验，内容涵盖从基础概念到实际应用的完整流程，有效解决了初学者面对庞大知识体系时无从下手、缺乏结构化指导的痛点。\n\n无论是希望转型的开发者、需要补充算法背景的研究人员，还是对人工智能充满好奇的普通爱好者，都能从中受益。课程不仅提供了清晰的理论讲解，还强调动手实践，让用户在循序渐进中建立扎实的技能基础。其独特的亮点在于强大的多语言支持，通过自动化机制提供了包括简体中文在内的 50 多种语言版本，极大地降低了全球不同背景用户的学习门槛。此外，项目采用开源协作模式，社区活跃且内容持续更新，确保学习者能获取前沿且准确的技术资讯。如果你正寻找一条清晰、友好且专业的机器学习入门之路，ML-For-Beginners 将是理想的起点。",84991,"2026-04-05T10:45:23",[14,51,52,53,15,54,26,13,55],"数据工具","视频","插件","其他","音频",{"id":57,"name":58,"github_repo":59,"description_zh":60,"stars":61,"difficulty_score":10,"last_commit_at":62,"category_tags":63,"status":16},3128,"ragflow","infiniflow\u002Fragflow","RAGFlow 
是一款领先的开源检索增强生成（RAG）引擎，旨在为大语言模型构建更精准、可靠的上下文层。它巧妙地将前沿的 RAG 技术与智能体（Agent）能力相结合，不仅支持从各类文档中高效提取知识，还能让模型基于这些知识进行逻辑推理和任务执行。\n\n在大模型应用中，幻觉问题和知识滞后是常见痛点。RAGFlow 通过深度解析复杂文档结构（如表格、图表及混合排版），显著提升了信息检索的准确度，从而有效减少模型“胡编乱造”的现象，确保回答既有据可依又具备时效性。其内置的智能体机制更进一步，使系统不仅能回答问题，还能自主规划步骤解决复杂问题。\n\n这款工具特别适合开发者、企业技术团队以及 AI 研究人员使用。无论是希望快速搭建私有知识库问答系统，还是致力于探索大模型在垂直领域落地的创新者，都能从中受益。RAGFlow 提供了可视化的工作流编排界面和灵活的 API 接口，既降低了非算法背景用户的上手门槛，也满足了专业开发者对系统深度定制的需求。作为基于 Apache 2.0 协议开源的项目，它正成为连接通用大模型与行业专有知识之间的重要桥梁。",77062,"2026-04-04T04:44:48",[15,14,13,26,54],{"id":65,"github_repo":66,"name":67,"description_en":68,"description_zh":69,"ai_summary_zh":70,"readme_en":71,"readme_zh":72,"quickstart_zh":73,"use_case_zh":74,"hero_image_url":75,"owner_login":76,"owner_name":77,"owner_avatar_url":78,"owner_bio":79,"owner_company":80,"owner_location":80,"owner_email":81,"owner_twitter":80,"owner_website":82,"owner_url":83,"languages":84,"stars":104,"forks":105,"last_commit_at":106,"license":80,"difficulty_score":10,"env_os":107,"env_gpu":108,"env_ram":109,"env_deps":110,"category_tags":120,"github_topics":121,"view_count":10,"oss_zip_url":80,"oss_zip_packed_at":80,"status":16,"created_at":128,"updated_at":129,"faqs":130,"releases":156},976,"tensorlayer\u002FHyperPose","HyperPose","Library for Fast and Flexible Human Pose Estimation","HyperPose 是一个专注于人体姿态估计的高性能开源库，能够实时识别图像或视频中的人体关键点（如关节、头部、四肢等），从而追踪人体动作和姿势。\n\n这款工具主要解决了姿态估计领域\"速度慢\"和\"难定制\"两大痛点。传统方案如 OpenPose 虽然功能强大，但在实际部署时往往面临帧率低、资源占用高的问题；同时，研究人员想要修改模型结构或训练流程也颇为繁琐。HyperPose 通过深度优化推理引擎，在 CPU\u002FGPU 上都能实现实时处理，速度可达同类方案的 10 倍；同时提供简洁的 Python 接口，让模型开发和定制变得更加灵活。\n\nHyperPose 特别适合两类用户：一是需要将姿态估计落地到实际产品中的**算法工程师和开发者**，借助其 C++ 推理库和 Docker 部署方案，可以快速集成到游戏、健身、安防等应用场景；二是从事姿态估计研究的**研究人员和高校师生**，利用 Python API 能够自由调整网络架构、训练数据和后处理逻辑，加速学术实验。\n\n技术亮点方面，HyperPose 采用 TensorRT 加速推理，结合流水线并行和 CPU\u002FGPU 混合调度等系统级优化，在保持高精度的同时大幅提升吞吐量。项目还预置了 OpenPose、PifPaf 等经典模型的实现，并支持多卡训练，降低了复现和开发的门槛。","HyperPose 是一个专注于人体姿态估计的高性能开源库，能够实时识别图像或视频中的人体关键点（如关节、头部、四肢等），从而追踪人体动作和姿势。\n\n这款工具主要解决了姿态估计领域\"速度慢\"和\"难定制\"两大痛点。传统方案如 OpenPose 
虽然功能强大，但在实际部署时往往面临帧率低、资源占用高的问题；同时，研究人员想要修改模型结构或训练流程也颇为繁琐。HyperPose 通过深度优化推理引擎，在 CPU\u002FGPU 上都能实现实时处理，速度可达同类方案的 10 倍；同时提供简洁的 Python 接口，让模型开发和定制变得更加灵活。\n\nHyperPose 特别适合两类用户：一是需要将姿态估计落地到实际产品中的**算法工程师和开发者**，借助其 C++ 推理库和 Docker 部署方案，可以快速集成到游戏、健身、安防等应用场景；二是从事姿态估计研究的**研究人员和高校师生**，利用 Python API 能够自由调整网络架构、训练数据和后处理逻辑，加速学术实验。\n\n技术亮点方面，HyperPose 采用 TensorRT 加速推理，结合流水线并行和 CPU\u002FGPU 混合调度等系统级优化，在保持高精度的同时大幅提升吞吐量。项目还预置了 OpenPose、PifPaf 等经典模型的实现，并支持多卡训练，降低了复现和开发的门槛。","\u003C\u002Fa>\n\n\u003Cp align=\"center\">\n    \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Ftensorlayer_HyperPose_readme_156e9b48037c.png\", width=\"600\">\n\u003C\u002Fp>\n\u003Cp align=\"center\">\n    \u003Ca href=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Ftensorlayer_HyperPose_readme_13d664e1afd7.png\" title=\"Docs Building\">\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Ftensorlayer_HyperPose_readme_13d664e1afd7.png\">\u003C\u002Fa>\n    \u003Ca href=\"https:\u002F\u002Fgithub.com\u002Ftensorlayer\u002Fhyperpose\u002Factions?query=workflow%3ACI\" title=\"Build Status\">\u003Cimg src=\"https:\u002F\u002Fgithub.com\u002Ftensorlayer\u002Fhyperpose\u002Fworkflows\u002FCI\u002Fbadge.svg\">\u003C\u002Fa>\n    \u003Ca href=\"https:\u002F\u002Fhub.docker.com\u002Fr\u002Ftensorlayer\u002Fhyperpose\" title=\"Docker\">\u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fdocker\u002Fimage-size\u002Ftensorlayer\u002Fhyperpose\">\u003C\u002Fa>\n    \u003Ca href=\"https:\u002F\u002Fgithub.com\u002Ftensorlayer\u002Fhyperpose\u002Freleases\" title=\"Github Release\">\u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Fv\u002Frelease\u002Ftensorlayer\u002Fhyperpose?include_prereleases\">\u003C\u002Fa>\n    \u003Ca href=\"https:\u002F\u002Fdrive.google.com\u002Fdrive\u002Ffolders\u002F1w9EjMkrjxOmMw3Rf6fXXkiv_ge7M99jR?usp=sharing\" title=\"PreTrainedModels\">\u003Cimg 
src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FModelZoo-GoogleDrive-brightgreen.svg\">\u003C\u002Fa>\n    \u003Ca href=\"https:\u002F\u002Fen.cppreference.com\u002Fw\u002Fcpp\u002F17\" title=\"CppStandard\">\u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FC++-17-blue.svg?style=flat&logo=c%2B%2B\">\u003C\u002Fa>\n    \u003Ca href=\"https:\u002F\u002Fgithub.com\u002Ftensorlayer\u002Ftensorlayer\u002Fblob\u002Fmaster\u002FLICENSE.rst\" title=\"TensorLayer\">\u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FLicense-Apache%202.0-blue.svg\">\n\u003C\u002Fp>\n\n\n---\n\n\u003Cp align=\"center\">\n    \u003Ca href=\"#Features\">Features\u003C\u002Fa> •\n    \u003Ca href=\"#Documentation\">Documentation\u003C\u002Fa> •\n    \u003Ca href=\"#Quick-Start\">Quick Start\u003C\u002Fa> •\n    \u003Ca href=\"#Performance\">Performance\u003C\u002Fa> •\n    \u003Ca href=\"#Accuracy\">Accuracy\u003C\u002Fa> •\n    \u003Ca href=\"#Cite-Us\">Cite Us\u003C\u002Fa> •\n    \u003Ca href=\"#License\">License\u003C\u002Fa>\n\u003C\u002Fp>\n\n# HyperPose\n\nHyperPose is a library for building high-performance custom pose estimation applications.\n\n## Features\n\nHyperPose has two key features:\n\n- **High-performance pose estimation with CPUs\u002FGPUs**: HyperPose achieves real-time pose estimation through a high-performance pose estimation engine. This engine implements numerous system optimisations: pipeline parallelism, model inference with TensorRT, CPU\u002FGPU hybrid scheduling, and many others. These optimisations contribute to up to 10x higher FPS compared to OpenPose, TF-Pose and OpenPifPaf.\n- **Flexibility for developing custom pose estimation models**: HyperPose provides high-level Python APIs to develop pose estimation models. 
HyperPose users can:\n    * Customise training, evaluation, visualisation, pre-processing and post-processing in pose estimation.\n    * Customise model architectures (e.g., OpenPose, Pifpaf, PoseProposal Network) and training datasets.\n    * Speed up training with multiple GPUs.\n\n## Demo\n\n\u003C\u002Fa>\n\u003Cp align=\"center\">\n    \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Ftensorlayer_HyperPose_readme_fe3da830271e.gif\", width=\"600\">\n\u003C\u002Fp>\n\n\u003Cp align=\"center\">\n    新宝岛 with HyperPose (Lightweight OpenPose model)\n\u003C\u002Fp>\n\n## Quick Start\n\nThe HyperPose library contains two parts:\n* A C++ library for high-performance pose estimation model inference.\n* A Python library for developing custom pose estimation models.\n\n### C++ inference library\n\nThe easiest way to use the inference library is through a [Docker image](https:\u002F\u002Fhub.docker.com\u002Fr\u002Ftensorlayer\u002Fhyperpose). Pre-requisites for this image:\n\n- [CUDA Driver >= 418.81.07](https:\u002F\u002Fwww.tensorflow.org\u002Finstall\u002Fgpu) (For default CUDA 10.0 image)\n- [NVIDIA Docker >= 2.0](https:\u002F\u002Fgithub.com\u002FNVIDIA\u002Fnvidia-docker)\n- [Docker CE Engine >= 19.03](https:\u002F\u002Fdocs.docker.com\u002Fengine\u002Finstall\u002F)\n\nRun this command to check if pre-requisites are ready:\n\n```bash\nwget https:\u002F\u002Fraw.githubusercontent.com\u002Ftensorlayer\u002Fhyperpose\u002Fmaster\u002Fscripts\u002Ftest_docker.py -qO- | python\n```\n\nOnce pre-requisites are ready, pull the HyperPose docker:\n\n```bash\ndocker pull tensorlayer\u002Fhyperpose\n```\n\nWe provide 4 examples within this image (The following commands have been tested on Ubuntu 18.04):\n\n```bash\n# [Example 1]: Doing inference on given video, copy the output.avi to the local path.\ndocker run --name quick-start --gpus all tensorlayer\u002Fhyperpose --runtime=stream\ndocker cp quick-start:\u002Fhyperpose\u002Fbuild\u002Foutput.avi 
.\ndocker rm quick-start\n\n\n# [Example 2](X11 server required to see the imshow window): Real-time inference.\n# You may need to install X11 server locally:\n# sudo apt install xorg openbox xauth\nxhost +; docker run --rm --gpus all -e DISPLAY=$DISPLAY -v \u002Ftmp\u002F.X11-unix:\u002Ftmp\u002F.X11-unix tensorlayer\u002Fhyperpose --imshow\n\n\n# [Example 3]: Camera + imshow window\nxhost +; docker run --name pose-camera --rm --gpus all -e DISPLAY=$DISPLAY -v \u002Ftmp\u002F.X11-unix:\u002Ftmp\u002F.X11-unix --device=\u002Fdev\u002Fvideo0:\u002Fdev\u002Fvideo0 tensorlayer\u002Fhyperpose --source=camera --imshow\n# To quit this image, please type `docker kill pose-camera` in another terminal.\n\n\n# [Dive into the image]\nxhost +; docker run --rm --gpus all -it -e DISPLAY=$DISPLAY -v \u002Ftmp\u002F.X11-unix:\u002Ftmp\u002F.X11-unix --device=\u002Fdev\u002Fvideo0:\u002Fdev\u002Fvideo0 --entrypoint \u002Fbin\u002Fbash tensorlayer\u002Fhyperpose\n# For users that cannot access a camera or X11 server. You may also use:\n# docker run --rm --gpus all -it --entrypoint \u002Fbin\u002Fbash tensorlayer\u002Fhyperpose\n```\n\nFor more usage regarding the command line flags, please visit [here](https:\u002F\u002Fhyperpose.readthedocs.io\u002Fen\u002Flatest\u002Fmarkdown\u002Fquick_start\u002Fprediction.html#table-of-flags-for-hyperpose-cli).\n\n### Python training library\n\nWe recommend using the Python training library within an [Anaconda](https:\u002F\u002Fwww.anaconda.com\u002Fproducts\u002Findividual) environment. 
The below quick-start has been tested with these environments:\n\n| OS           | NVIDIA Driver | CUDA Toolkit | GPU            |\n| ------------ | ------------- | ------------ | -------------- |\n| Ubuntu 18.04 | 410.79        | 10.0         | Tesla V100-DGX |\n| Ubuntu 18.04 | 440.33.01     | 10.2         | Tesla V100-DGX |\n| Ubuntu 18.04 | 430.64        | 10.1         | TITAN RTX      |\n| Ubuntu 18.04 | 430.26        | 10.2         | TITAN XP       |\n| Ubuntu 16.04 | 430.50        | 10.1         | RTX 2080Ti     |\n\nOnce Anaconda is installed, run below Bash commands to create a virtual environment:\n\n```bash\n# Create virtual environment (choose yes)\nconda create -n hyperpose python=3.7\n# Activate the virtual environment, start installation\nconda activate hyperpose\n# Install cudatoolkit and cudnn library using conda\nconda install cudatoolkit=10.0.130\nconda install cudnn=7.6.0\n```\n\nWe then clone the repository and install the dependencies listed in [requirements.txt](https:\u002F\u002Fgithub.com\u002Ftensorlayer\u002Fhyperpose\u002Fblob\u002Fmaster\u002Frequirements.txt):\n\n```bash\ngit clone https:\u002F\u002Fgithub.com\u002Ftensorlayer\u002Fhyperpose.git && cd hyperpose\npip install -r requirements.txt\n```\n\nWe demonstrate how to train a custom pose estimation model with HyperPose. 
HyperPose APIs contain three key modules: *Config*, *Model* and *Dataset*, and their basic usages are shown below.\n\n```python\nfrom hyperpose import Config, Model, Dataset\n\n# Set model name to distinguish models (necessary)\nConfig.set_model_name(\"MyLightweightOpenPose\")\n\n# Set model type, model backbone and dataset\nConfig.set_model_type(Config.MODEL.LightweightOpenpose)\nConfig.set_model_backbone(Config.BACKBONE.Vggtiny)\nConfig.set_dataset_type(Config.DATA.MSCOCO)\n\n# Set single-node training or parallel-training\nConfig.set_train_type(Config.TRAIN.Single_train)\n\nconfig = Config.get_config()\nmodel = Model.get_model(config)\ndataset = Dataset.get_dataset(config)\n\n# Start the training process\nModel.get_train(config)(model, dataset)\n```\n\nThe full training program is listed [here](https:\u002F\u002Fgithub.com\u002Ftensorlayer\u002Fhyperpose\u002Fblob\u002Fmaster\u002Ftrain.py). To evaluate the trained model, you can use the evaluation program [here](https:\u002F\u002Fgithub.com\u002Ftensorlayer\u002Fhyperpose\u002Fblob\u002Fmaster\u002Feval.py). More information about the training library can be found [here](https:\u002F\u002Fhyperpose.readthedocs.io\u002Fen\u002Flatest\u002Fmarkdown\u002Fquick_start\u002Ftraining.html).\n\n\n## Documentation\n\nThe APIs of the HyperPose training library and the inference library are described in the [Documentation](https:\u002F\u002Fhyperpose.readthedocs.io\u002Fen\u002Flatest\u002F).\n\n## Performance\n\nWe compare the prediction performance of HyperPose with [OpenPose 1.6](https:\u002F\u002Fgithub.com\u002FCMU-Perceptual-Computing-Lab\u002Fopenpose), [TF-Pose](https:\u002F\u002Fgithub.com\u002Fildoonet\u002Ftf-pose-estimation) and [OpenPifPaf 0.12](https:\u002F\u002Fgithub.com\u002Fopenpifpaf\u002Fopenpifpaf). 
The test-bed has Ubuntu18.04, 1070Ti GPU, Intel i7 CPU (12 logic cores).\n\n| HyperPose Configuration  | DNN Size | Input Size | HyperPose | Baseline |\n| --------------- | ------------- | ------------------ | ------------------ | --------------------- |\n| OpenPose (VGG)   | 209.3MB       | 656 x 368            | **27.32 FPS**           | 8 FPS (OpenPose)          |\n| OpenPose (TinyVGG)  | 34.7 MB       | 384 x 256          | **124.925 FPS**         | N\u002FA                   |\n| OpenPose (MobileNet) | 17.9 MB       | 432 x 368          | **84.32 FPS**           | 8.5 FPS (TF-Pose)         |\n| OpenPose (ResNet18)  | 45.0 MB       | 432 x 368          | **62.52 FPS**           | N\u002FA                  |\n| OpenPifPaf (ResNet50)  | 97.6 MB       | 432 x 368          | **44.16 FPS**           | 14.5 FPS (OpenPifPaf)    |\n\n## Accuracy\n\nWe evaluate the accuracy of pose estimation models developed by HyperPose. The environment is Ubuntu16.04, with 4 V100-DGXs and 24 Intel Xeon CPU. The training procedure takes 1~2 weeks using 1 V100-DGX for each model. (If you don't want to train from scratch, you could use our pre-trained backbone models)\n\n| HyperPose Configuration | DNN Size | Input Size | Evaluate Dataset | Accuracy-hyperpose (Iou=0.50:0.95) | Accuracy-original (Iou=0.50:0.95) |\n| -------------------- | ---------- | ------------- | ---------------- | --------------------- | ----------------------- |\n| OpenPose (VGG19)   | 199 MB | 432 x 368 | MSCOCO2014 (random 1160 images) | 57.0 map | 58.4 map  |\n| LightweightOpenPose (Dilated MobileNet)   | 17.7 MB | 432 x 368 | MSCOCO2017(all 5000 img.) | 46.1 map | 42.8 map |\n| LightweightOpenPose (MobileNet-Thin)   | 17.4 MB | 432 x 368 | MSCOCO2017 (all 5000 img.) | 44.2 map | 28.06 map (MSCOCO2014) |\n| LightweightOpenPose (tiny VGG)   | 23.6 MB | 432 x 368 | MSCOCO2017 (all 5000 img.) | 47.3 map | - |\n| LightweightOpenPose (ResNet50)   | 42.7 MB | 432 x 368 | MSCOCO2017 (all 5000 img.) 
| 48.2 map | - |\n| PoseProposal (ResNet18)   | 45.2 MB | 384 x 384 | MPII (all 2729 img.) | 54.9 map (PCKh) | 72.8 map (PCKh)|\n\n## Cite Us\n\nIf you find HyperPose helpful for your project, please cite our paper：\n\n```\n@article{hyperpose2021,\n    author  = {Guo, Yixiao and Liu, Jiawei and Li, Guo and Mai, Luo and Dong, Hao},\n    journal = {ACM Multimedia},\n    title   = {{Fast and Flexible Human Pose Estimation with HyperPose}},\n    url     = {https:\u002F\u002Fgithub.com\u002Ftensorlayer\u002Fhyperpose},\n    year    = {2021}\n}\n```\n\n## License\n\nHyperPose is open-sourced under the [Apache 2.0 license](https:\u002F\u002Fgithub.com\u002Ftensorlayer\u002Ftensorlayer\u002Fblob\u002Fmaster\u002FLICENSE.rst).\n\n","\u003C\u002Fa>\n\n\u003Cp align=\"center\">\n    \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Ftensorlayer_HyperPose_readme_156e9b48037c.png\", width=\"600\">\n\u003C\u002Fp>\n\u003Cp align=\"center\">\n    \u003Ca href=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Ftensorlayer_HyperPose_readme_13d664e1afd7.png\" title=\"文档构建状态\">\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Ftensorlayer_HyperPose_readme_13d664e1afd7.png\">\u003C\u002Fa>\n    \u003Ca href=\"https:\u002F\u002Fgithub.com\u002Ftensorlayer\u002Fhyperpose\u002Factions?query=workflow%3ACI\" title=\"构建状态\">\u003Cimg src=\"https:\u002F\u002Fgithub.com\u002Ftensorlayer\u002Fhyperpose\u002Fworkflows\u002FCI\u002Fbadge.svg\">\u003C\u002Fa>\n    \u003Ca href=\"https:\u002F\u002Fhub.docker.com\u002Fr\u002Ftensorlayer\u002Fhyperpose\" title=\"Docker\">\u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fdocker\u002Fimage-size\u002Ftensorlayer\u002Fhyperpose\">\u003C\u002Fa>\n    \u003Ca href=\"https:\u002F\u002Fgithub.com\u002Ftensorlayer\u002Fhyperpose\u002Freleases\" title=\"GitHub 发布版本\">\u003Cimg 
src=\"https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Fv\u002Frelease\u002Ftensorlayer\u002Fhyperpose?include_prereleases\">\u003C\u002Fa>\n    \u003Ca href=\"https:\u002F\u002Fdrive.google.com\u002Fdrive\u002Ffolders\u002F1w9EjMkrjxOmMw3Rf6fXXkiv_ge7M99jR?usp=sharing\" title=\"预训练模型\">\u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FModelZoo-GoogleDrive-brightgreen.svg\">\u003C\u002Fa>\n    \u003Ca href=\"https:\u002F\u002Fen.cppreference.com\u002Fw\u002Fcpp\u002F17\" title=\"C++ 标准\">\u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FC++-17-blue.svg?style=flat&logo=c%2B%2B\">\u003C\u002Fa>\n    \u003Ca href=\"https:\u002F\u002Fgithub.com\u002Ftensorlayer\u002Ftensorlayer\u002Fblob\u002Fmaster\u002FLICENSE.rst\" title=\"TensorLayer\">\u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FLicense-Apache%202.0-blue.svg\">\n\u003C\u002Fp>\n\n\n---\n\n\u003Cp align=\"center\">\n    \u003Ca href=\"#Features\">功能特性\u003C\u002Fa> •\n    \u003Ca href=\"#Documentation\">文档\u003C\u002Fa> •\n    \u003Ca href=\"#Quick-Start\">快速开始\u003C\u002Fa> •\n    \u003Ca href=\"#Performance\">性能\u003C\u002Fa> •\n    \u003Ca href=\"#Accuracy\">精度\u003C\u002Fa> •\n    \u003Ca href=\"#Cite-Us\">引用我们\u003C\u002Fa> •\n    \u003Ca href=\"#License\">许可证\u003C\u002Fa>\n\u003C\u002Fp>\n\n# HyperPose\n\nHyperPose 是一个用于构建高性能自定义姿态估计（Pose Estimation）应用的库。\n\n## 功能特性\n\nHyperPose 具有两个关键特性：\n\n- **基于 CPU\u002FGPU 的高性能姿态估计**：HyperPose 通过一个高性能的姿态估计引擎实现了实时姿态估计。该引擎实现了许多系统优化，例如流水线并行、使用 TensorRT 进行模型推理、CPU\u002FGPU 混合调度等。这些优化使得其相比 OpenPose、TF-Pose 和 OpenPifPaf 等工具，帧率（FPS）提升了高达 10 倍。\n- **开发自定义姿态估计模型的灵活性**：HyperPose 提供了高级 Python API 来开发姿态估计模型。HyperPose 用户可以：\n    * 自定义训练、评估、可视化、预处理和后处理流程。\n    * 自定义模型架构（如 OpenPose、Pifpaf、PoseProposal Network）和训练数据集。\n    * 使用多 GPU 加速训练。\n\n## 示例\n\n\u003C\u002Fa>\n\u003Cp align=\"center\">\n    \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Ftensorlayer_HyperPose_readme_fe3da830271e.gif\", 
width=\"600\">\n\u003C\u002Fp>\n\n\u003Cp align=\"center\">\n    使用 HyperPose 实现的新宝岛（轻量级 OpenPose 模型）\n\u003C\u002Fp>\n\n## 快速开始\n\nHyperPose 库包含两个部分：\n* 一个用于高性能姿态估计模型推理的 C++ 库。\n* 一个用于开发自定义姿态估计模型的 Python 库。\n\n### C++ 推理库\n\n使用推理库最简单的方式是通过 [Docker 镜像](https:\u002F\u002Fhub.docker.com\u002Fr\u002Ftensorlayer\u002Fhyperpose)。此镜像的前置条件包括：\n\n- [CUDA 驱动 >= 418.81.07](https:\u002F\u002Fwww.tensorflow.org\u002Finstall\u002Fgpu)（适用于默认 CUDA 10.0 镜像）\n- [NVIDIA Docker >= 2.0](https:\u002F\u002Fgithub.com\u002FNVIDIA\u002Fnvidia-docker)\n- [Docker CE 引擎 >= 19.03](https:\u002F\u002Fdocs.docker.com\u002Fengine\u002Finstall\u002F)\n\n运行以下命令以检查前置条件是否已准备好：\n\n```bash\nwget https:\u002F\u002Fraw.githubusercontent.com\u002Ftensorlayer\u002Fhyperpose\u002Fmaster\u002Fscripts\u002Ftest_docker.py -qO- | python\n```\n\n前置条件准备完成后，拉取 HyperPose Docker 镜像：\n\n```bash\ndocker pull tensorlayer\u002Fhyperpose\n```\n\n我们在该镜像中提供了 4 个示例（以下命令已在 Ubuntu 18.04 上测试通过）：\n\n```bash\n# [示例 1]：对给定视频进行推理，并将 output.avi 复制到本地路径。\ndocker run --name quick-start --gpus all tensorlayer\u002Fhyperpose --runtime=stream\ndocker cp quick-start:\u002Fhyperpose\u002Fbuild\u002Foutput.avi .\ndocker rm quick-start\n\n\n# [示例 2]（需要 X11 服务器才能看到 imshow 窗口）：实时推理。\n# 您可能需要在本地安装 X11 服务器：\n# sudo apt install xorg openbox xauth\nxhost +; docker run --rm --gpus all -e DISPLAY=$DISPLAY -v \u002Ftmp\u002F.X11-unix:\u002Ftmp\u002F.X11-unix tensorlayer\u002Fhyperpose --imshow\n\n\n# [示例 3]：摄像头 + imshow 窗口\nxhost +; docker run --name pose-camera --rm --gpus all -e DISPLAY=$DISPLAY -v \u002Ftmp\u002F.X11-unix:\u002Ftmp\u002F.X11-unix --device=\u002Fdev\u002Fvideo0:\u002Fdev\u002Fvideo0 tensorlayer\u002Fhyperpose --source=camera --imshow\n# 要退出此镜像，请在另一个终端中输入 `docker kill pose-camera`。\n\n\n# [深入镜像]\nxhost +; docker run --rm --gpus all -it -e DISPLAY=$DISPLAY -v \u002Ftmp\u002F.X11-unix:\u002Ftmp\u002F.X11-unix --device=\u002Fdev\u002Fvideo0:\u002Fdev\u002Fvideo0 --entrypoint \u002Fbin\u002Fbash tensorlayer\u002Fhyperpose\n# 对于无法访问摄像头或 
X11 服务器的用户，也可以使用：\n# docker run --rm --gpus all -it --entrypoint \u002Fbin\u002Fbash tensorlayer\u002Fhyperpose\n```\n\n有关命令行标志的更多用法，请访问[此处](https:\u002F\u002Fhyperpose.readthedocs.io\u002Fen\u002Flatest\u002Fmarkdown\u002Fquick_start\u002Fprediction.html#table-of-flags-for-hyperpose-cli)。\n\n### Python 训练库\n\n我们建议在 [Anaconda](https:\u002F\u002Fwww.anaconda.com\u002Fproducts\u002Findividual) 环境中使用 Python 训练库。以下快速入门已在以下环境中测试通过：\n\n| 操作系统       | NVIDIA 驱动 | CUDA 工具包 | GPU            |\n| -------------- | ----------- | ----------- | -------------- |\n| Ubuntu 18.04   | 410.79      | 10.0        | Tesla V100-DGX |\n| Ubuntu 18.04   | 440.33.01   | 10.2        | Tesla V100-DGX |\n| Ubuntu 18.04   | 430.64      | 10.1        | TITAN RTX      |\n| Ubuntu 18.04   | 430.26      | 10.2        | TITAN XP       |\n| Ubuntu 16.04   | 430.50      | 10.1        | RTX 2080Ti     |\n\n安装 Anaconda 后，运行以下 Bash 命令创建虚拟环境：\n\n```bash\n# 创建虚拟环境（选择“是”）\nconda create -n hyperpose python=3.7\n# 激活虚拟环境，开始安装\nconda activate hyperpose\n# 使用 conda 安装 cudatoolkit 和 cudnn 库\nconda install cudatoolkit=10.0.130\nconda install cudnn=7.6.0\n```\n\n然后克隆代码仓库并安装 [requirements.txt](https:\u002F\u002Fgithub.com\u002Ftensorlayer\u002Fhyperpose\u002Fblob\u002Fmaster\u002Frequirements.txt) 中列出的依赖项：\n\n```bash\ngit clone https:\u002F\u002Fgithub.com\u002Ftensorlayer\u002Fhyperpose.git && cd hyperpose\npip install -r requirements.txt\n```\n\n我们演示如何使用 HyperPose 训练一个自定义姿态估计模型。HyperPose API 包含三个关键模块：*Config*（配置）、*Model*（模型）和 *Dataset*（数据集），其基本用法如下所示。\n\n```python\nfrom hyperpose import Config, Model, Dataset\n\n# 设置模型名称以区分不同模型（必要）\nConfig.set_model_name(\"MyLightweightOpenPose\")\n\n# 设置模型类型、模型主干网络和数据集\nConfig.set_model_type(Config.MODEL.LightweightOpenpose)\nConfig.set_model_backbone(Config.BACKBONE.Vggtiny)\nConfig.set_dataset_type(Config.DATA.MSCOCO)\n\n# 设置单节点训练或并行训练\nConfig.set_train_type(Config.TRAIN.Single_train)\n\nconfig = Config.get_config()\nmodel = Model.get_model(config)\ndataset = 
Dataset.get_dataset(config)\n\n# 开始训练过程\nModel.get_train(config)(model, dataset)\n```\n\n完整的训练程序可以在[这里](https:\u002F\u002Fgithub.com\u002Ftensorlayer\u002Fhyperpose\u002Fblob\u002Fmaster\u002Ftrain.py)找到。要评估训练好的模型，可以使用[这里的](https:\u002F\u002Fgithub.com\u002Ftensorlayer\u002Fhyperpose\u002Fblob\u002Fmaster\u002Feval.py)评估程序。更多关于训练库的信息可以参考[这里](https:\u002F\u002Fhyperpose.readthedocs.io\u002Fen\u002Flatest\u002Fmarkdown\u002Fquick_start\u002Ftraining.html)。\n\n\n## 文档\n\nHyperPose 训练库和推理库的 API 在[文档](https:\u002F\u002Fhyperpose.readthedocs.io\u002Fen\u002Flatest\u002F)中有详细描述。\n\n## 性能\n\n我们将 HyperPose 的预测性能与 [OpenPose 1.6](https:\u002F\u002Fgithub.com\u002FCMU-Perceptual-Computing-Lab\u002Fopenpose)、[TF-Pose](https:\u002F\u002Fgithub.com\u002Fildoonet\u002Ftf-pose-estimation) 和 [OpenPifPaf 0.12](https:\u002F\u002Fgithub.com\u002Fopenpifpaf\u002Fopenpifpaf) 进行了比较。测试环境为 Ubuntu18.04，1070Ti GPU，Intel i7 CPU（12个逻辑核心）。\n\n| HyperPose 配置 | DNN 大小 | 输入尺寸 | HyperPose | 基线 |\n| --------------- | ------------- | ------------------ | ------------------ | --------------------- |\n| OpenPose (VGG)   | 209.3MB       | 656 x 368            | **27.32 FPS**           | 8 FPS (OpenPose)          |\n| OpenPose (TinyVGG)  | 34.7 MB       | 384 x 256          | **124.925 FPS**         | N\u002FA                   |\n| OpenPose (MobileNet) | 17.9 MB       | 432 x 368          | **84.32 FPS**           | 8.5 FPS (TF-Pose)         |\n| OpenPose (ResNet18)  | 45.0 MB       | 432 x 368          | **62.52 FPS**           | N\u002FA                  |\n| OpenPifPaf (ResNet50)  | 97.6 MB       | 432 x 368          | **44.16 FPS**           | 14.5 FPS (OpenPifPaf)    |\n\n## 精度\n\n我们评估了由 HyperPose 开发的姿态估计模型的精度。测试环境为 Ubuntu16.04，配备 4 个 V100-DGX 和 24 核 Intel Xeon CPU。每个模型的训练过程使用 1 个 V100-DGX，耗时 1~2 周。（如果您不想从头开始训练，可以使用我们的预训练主干模型）\n\n| HyperPose 配置 | DNN 大小 | 输入尺寸 | 评估数据集 | 精度-hyperpose (Iou=0.50:0.95) | 精度-original (Iou=0.50:0.95) |\n| -------------------- | ---------- | ------------- | 
---------------- | --------------------- | ----------------------- |
| OpenPose (VGG19) | 199 MB | 432 x 368 | MSCOCO2014 (random 1160 images) | 57.0 mAP | 58.4 mAP |
| LightweightOpenPose (Dilated MobileNet) | 17.7 MB | 432 x 368 | MSCOCO2017 (all 5000 images) | 46.1 mAP | 42.8 mAP |
| LightweightOpenPose (MobileNet-Thin) | 17.4 MB | 432 x 368 | MSCOCO2017 (all 5000 images) | 44.2 mAP | 28.06 mAP (MSCOCO2014) |
| LightweightOpenPose (tiny VGG) | 23.6 MB | 432 x 368 | MSCOCO2017 (all 5000 images) | 47.3 mAP | - |
| LightweightOpenPose (ResNet50) | 42.7 MB | 432 x 368 | MSCOCO2017 (all 5000 images) | 48.2 mAP | - |
| PoseProposal (ResNet18) | 45.2 MB | 384 x 384 | MPII (all 2729 images) | 54.9 PCKh | 72.8 PCKh |

## Citation

If HyperPose helps your project, please cite our paper:

```
@article{hyperpose2021,
    author  = {Guo, Yixiao and Liu, Jiawei and Li, Guo and Mai, Luo and Dong, Hao},
    journal = {ACM Multimedia},
    title   = {{Fast and Flexible Human Pose Estimation with HyperPose}},
    url     = {https://github.com/tensorlayer/hyperpose},
    year    = {2021}
}
```

## License

HyperPose is open-sourced under the [Apache 2.0 License](https://github.com/tensorlayer/tensorlayer/blob/master/LICENSE.rst).

---

# HyperPose Quick Start Guide

HyperPose is an open-source library for building high-performance custom pose estimation applications. It consists of two parts: a C++ inference library and a Python training library.

---

## Environment Setup

### System Requirements
- **Operating system**: Ubuntu 16.04 or 18.04
- **GPU driver**: CUDA Driver >= 418.81.07 (for the default CUDA 10.0 image)
- **NVIDIA Docker**: NVIDIA Docker >= 2.0
- **Docker**: Docker CE Engine >= 19.03

### Prerequisites
- **C++ inference library**:
  - [CUDA Toolkit](https://developer.nvidia.com/cuda-toolkit)
  - [NVIDIA Docker](https://github.com/NVIDIA/nvidia-docker)
  - [Docker CE](https://docs.docker.com/engine/install/)
- **Python training library**:
  - Anaconda (recommended)
  - A GPU environment (e.g. Tesla V100, TITAN RTX)

---

## Installation

### C++ Inference Library
Install and run via Docker:

```bash
# Check that the prerequisites are satisfied
wget https://raw.githubusercontent.com/tensorlayer/hyperpose/master/scripts/test_docker.py -qO- | python

# Pull the HyperPose Docker image
docker pull tensorlayer/hyperpose
```

### Python Training Library
Install inside an Anaconda environment:

```bash
# Create and activate a virtual environment
conda create -n hyperpose python=3.7
conda activate hyperpose

# Install the CUDA and cuDNN libraries
conda install cudatoolkit=10.0.130
conda install cudnn=7.6.0

# Clone the repository and install its dependencies
git clone https://github.com/tensorlayer/hyperpose.git && cd hyperpose
pip install -r requirements.txt
```

---

## Basic Usage

### C++ Inference Library Examples

The following commands run video inference and real-time camera pose estimation with the HyperPose C++ inference library:

```bash
# Example 1: run inference on the bundled video and save the result as output.avi
docker run --name quick-start --gpus all tensorlayer/hyperpose --runtime=stream
docker cp quick-start:/hyperpose/build/output.avi .
docker rm quick-start

# Example 2: real-time inference with display (requires X11)
xhost +; docker run --rm --gpus all -e DISPLAY=$DISPLAY -v /tmp/.X11-unix:/tmp/.X11-unix tensorlayer/hyperpose --imshow

# Example 3: real-time inference from a webcam (requires camera support)
xhost +; docker run --name pose-camera --rm --gpus all -e DISPLAY=$DISPLAY -v /tmp/.X11-unix:/tmp/.X11-unix --device=/dev/video0:/dev/video0 tensorlayer/hyperpose --source=camera --imshow
```

For the full list of command-line flags, see the [official documentation](https://hyperpose.readthedocs.io/en/latest/markdown/quick_start/prediction.html#table-of-flags-for-hyperpose-cli).

---

### Python Training Library Example

The following code trains a lightweight OpenPose model with the HyperPose Python API:

```python
from hyperpose import Config, Model, Dataset

# Set the model name (required)
Config.set_model_name("MyLightweightOpenPose")

# Set the model type, backbone, and dataset
Config.set_model_type(Config.MODEL.LightweightOpenpose)
Config.set_model_backbone(Config.BACKBONE.Vggtiny)
Config.set_dataset_type(Config.DATA.MSCOCO)

# Choose single-node or parallel training
Config.set_train_type(Config.TRAIN.Single_train)

# Fetch the assembled configuration and build the model and dataset
config = Config.get_config()
model = Model.get_model(config)
dataset = Dataset.get_dataset(config)

# Start training
Model.get_train(config)(model, dataset)
```

The full training script is available in [train.py](https://github.com/tensorlayer/hyperpose/blob/master/train.py) and the evaluation script in [eval.py](https://github.com/tensorlayer/hyperpose/blob/master/eval.py).

---

## References

- **Official documentation**: [HyperPose docs](https://hyperpose.readthedocs.io/en/latest/)
- **Pretrained models**: [Google Drive model zoo](https://drive.google.com/drive/folders/1w9EjMkrjxOmMw3Rf6fXXkiv_ge7M99jR?usp=sharing)
- **Paper citation**: see the BibTeX entry above

---

## Example Use Case

A fitness-tech company is building a smart yoga assistant that analyzes a user's posture through a camera in real time and helps correct incorrect poses.

### Without HyperPose
- Traditional pose estimators such as stock OpenPose are too slow for real-time use, leaving a visible lag between the user's movement and the on-screen feedback
- High resource usage makes ordinary laptops stutter, hurting the user experience
- The team cannot easily adapt the model to yoga-specific movements, so recognition accuracy for unusual poses stays low
- Performance degrades sharply with several people in frame, ruling out group classes
- Deployment is complicated, with substantial time spent on environment and dependency setup

### With HyperPose
- TensorRT acceleration and pipeline parallelism deliver smooth real-time pose estimation, with feedback nearly synchronous with the user's movement
- Hybrid CPU/GPU scheduling significantly lowers resource consumption, so ordinary devices run smoothly
- The flexible Python API lets the team tailor models to yoga movements, improving accuracy on unusual poses
- Multi-GPU support easily handles several users at once, covering group classes
- The Docker image enables one-step deployment, greatly simplifying development and operations

HyperPose turned a complex pose-estimation feature into a stable, efficient smart yoga assistant and measurably strengthened the product.

---

## Runtime Notes

- **Platform**: Linux; an NVIDIA GPU with CUDA 10.0+ is required
- Running inside Docker or an Anaconda environment is recommended
- The C++ inference library requires C++17 support
- NVIDIA driver version >= 418.81.07 is recommended
- The first run may need to download pretrained models
- **Python**: 3.7, with dependencies including `cudatoolkit=10.0.130`, `cudnn=7.6.0`, `tensorrt`, `tensorflow`, `numpy`, and `opencv-python`

---

## FAQ

**Build errors on Windows 10?**
After installing all dependencies, make sure you use a supported compiler version (VS19, VS17, or VS15) and that the build environment is configured correctly. If you still hit "Error C2100: illegal indirection", see the suggested template-code changes, or replace the `offset` function to avoid the compile-time conflict. ([#233](https://github.com/tensorlayer/HyperPose/issues/233))

**Why is my FPS low?**
The thread pool used to allocate too many threads by default, which hurt performance; a thread-count cap has since been added (see https://github.com/tensorlayer/hyperpose/pull/366). On Windows, running the Docker image is also recommended for better performance. ([#253](https://github.com/tensorlayer/HyperPose/issues/253))

**How do I reduce high CPU usage during training?**
Enable GPU-accelerated training, increase the prefetch size, and raise the number of CPU cores used to relieve the CPU bottleneck. See [#150 (comment)](https://github.com/tensorlayer/openpose-plus/issues/150#issuecomment-481486374) and [#150 (comment)](https://github.com/tensorlayer/openpose-plus/issues/150#issuecomment-527684094). ([#150](https://github.com/tensorlayer/HyperPose/issues/150))

**FPS is low when testing on a TX2. How can I optimize?**
Make sure unnecessary drawing and saving are disabled, and check whether the initialization phase is being counted in your profiling. The pipeline may also need several runs to warm up. See the implementation in `example_stream_detector.cpp` for details. ([#156](https://github.com/tensorlayer/HyperPose/issues/156))

**How do I convert a model to .trt for use on Jetson AGX / DeepStream?**
Use the `./exectrt` tool to convert an ONNX file to TRT format. When handling the output heatmaps, resize them to the input layer's size (e.g. from 19x46x46 to 19x356x356). Note that TRT inference results may differ slightly from the original ONNX output, but keypoint locations should match. ([#290](https://github.com/tensorlayer/HyperPose/issues/290))

---

## Release Notes

### v2.2.0 (2021-06-30)
- OpenPifPaf model support (Python training & C++ inference), 3.1x speed-up 🚀🚀🚀
- Docker container updated to 2.2.0: https://hub.docker.com/repository/docker/tensorlayer/hyperpose
  - Supports the CUDA 10 series (low driver requirement) and the newer CUDA 10.2 / cuDNN 8 / TensorRT 8 series
- Pip installation: https://pypi.org/project/hyperpose/
- A batch of new pretrained models (OpenPifPaf, OpenPose-VGGv2, OpenPose-MobileNet): https://drive.google.com/drive/folders/1w9EjMkrjxOmMw3Rf6fXXkiv_ge7M99jR
- Heavily refactored documentation for a better reading experience: https://hyperpose.readthedocs.io/en/latest/index.html

### 2.2.0-alpha (2021-06-26)
- OpenPifPaf model support (Python training + C++ inference)
- Docker container updated to 2.2.0: https://hub.docker.com/repository/docker/tensorlayer/hyperpose
- A batch of new pretrained models: https://drive.google.com/drive/folders/1w9EjMkrjxOmMw3Rf6fXXkiv_ge7M99jR
- Refactored documentation

### 2.1.0 (2020-08-30)
- Docker support: https://hub.docker.com/repository/docker/tensorlayer/hyperpose
- HyperPose command-line tool: `hyperpose-cli`
- Option to keep the original aspect ratio (good for precision)
- New README
- Visualization:
  - Line width varies with person size
  - Translucent connection lines in `hyperpose-cli`

### 2.0.0 (2020-08-17)
Key features:
- Python training module
  - User-defined model architectures
  - MS COCO and MPII dataset support
  - User-defined dataset filters and training pipelines
- C++ accelerated prediction
  - Operator API
  - Stream API
- More supported models
  - OpenPose
  - Lightweight-OpenPose
  - PoseProposal Network
- [Documentation](https://hyperpose.readthedocs.io/en/latest/)
- [More pretrained models](https://drive.google.com/drive/folders/1w9EjMkrjxOmMw3Rf6fXXkiv_ge7M99jR?usp=sharing) (continuously updated)

### 1.0.0 (2020-05-03)
The first version of OpenPose-Plus provides low-level inference, local training, and distributed training. Its inference API is quite straightforward and low-level (without much high-level abstraction). With a new version of OpenPose-Plus upcoming, we made the old version a release.
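The environment setup above pins several minimum versions (CUDA Driver >= 418.81.07, NVIDIA Docker >= 2.0, Docker CE >= 19.03). As a minimal sketch of how such prerequisite checks can compare dotted version strings, here is a hypothetical helper; it is not part of HyperPose or its `test_docker.py` script:

```python
def meets_min_version(version: str, minimum: str) -> bool:
    """Return True if a dotted version string satisfies a minimum.

    Comparing component-wise as integers avoids the string-comparison
    pitfall where "9.0" > "19.03" lexicographically.
    """
    def parse(v: str) -> list[int]:
        return [int(part) for part in v.split(".")]
    return parse(version) >= parse(minimum)
```

For example, `meets_min_version("396.26", "418.81.07")` is `False`, while `meets_min_version("418.81.07", "418.81.07")` is `True`.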
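The FAQ answer on TRT conversion notes that the output heatmaps (e.g. 19x46x46) must be resized to the input layer's resolution, and that keypoint locations should survive the conversion. A minimal sketch of that post-processing step in NumPy; the function is illustrative, not HyperPose's actual implementation, and assumes the input size is an integer multiple of the heatmap size:

```python
import numpy as np

def heatmaps_to_keypoints(heatmaps, input_hw):
    """Upsample per-joint heatmaps to the network input size and take
    each channel's argmax as that joint's (x, y, score).

    heatmaps: float array of shape (C, h, w), one channel per joint.
    input_hw: (H, W) of the network input; multiples of (h, w).
    """
    C, h, w = heatmaps.shape
    H, W = input_hw
    keypoints = []
    for c in range(C):
        # Nearest-neighbour upsampling preserves the argmax location.
        up = np.repeat(np.repeat(heatmaps[c], H // h, axis=0), W // w, axis=1)
        y, x = np.unravel_index(np.argmax(up), up.shape)
        keypoints.append((int(x), int(y), float(up[y, x])))
    return keypoints
```

With a peak at row 10, column 20 of a 46x46 map and a 368x368 input, the recovered keypoint is (x=160, y=80), i.e. the same location scaled by 8.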
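The 2.1.0 release notes mention an option to keep the original aspect ratio during preprocessing because it helps precision. The usual way to do this is letterboxing: scale the image to fit, then pad the remainder. A hedged sketch in pure NumPy (nearest-neighbour resize; not HyperPose's own code):

```python
import numpy as np

def letterbox(img, target_hw):
    """Resize img to fit target_hw while preserving aspect ratio,
    centring the result on a zero-padded canvas (letterboxing)."""
    th, tw = target_hw
    h, w = img.shape[:2]
    scale = min(th / h, tw / w)
    nh, nw = int(round(h * scale)), int(round(w * scale))
    # Nearest-neighbour resize via integer index sampling.
    ys = np.clip((np.arange(nh) / scale).astype(int), 0, h - 1)
    xs = np.clip((np.arange(nw) / scale).astype(int), 0, w - 1)
    resized = img[ys][:, xs]
    canvas = np.zeros((th, tw) + img.shape[2:], dtype=img.dtype)
    top, left = (th - nh) // 2, (tw - nw) // 2
    canvas[top:top + nh, left:left + nw] = resized
    return canvas
```

Because both axes use the same scale, a person's proportions are unchanged at the network's 432x368-style input sizes, at the cost of some padded (wasted) pixels.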