[{"data":1,"prerenderedAt":-1},["ShallowReactive",2],{"similar-hukenovs--hagrid":3,"tool-hukenovs--hagrid":61},[4,18,26,36,44,53],{"id":5,"name":6,"github_repo":7,"description_zh":8,"stars":9,"difficulty_score":10,"last_commit_at":11,"category_tags":12,"status":17},4358,"openclaw","openclaw\u002Fopenclaw","OpenClaw 是一款专为个人打造的本地化 AI 助手，旨在让你在自己的设备上拥有完全可控的智能伙伴。它打破了传统 AI 助手局限于特定网页或应用的束缚，能够直接接入你日常使用的各类通讯渠道，包括微信、WhatsApp、Telegram、Discord、iMessage 等数十种平台。无论你在哪个聊天软件中发送消息，OpenClaw 都能即时响应，甚至支持在 macOS、iOS 和 Android 设备上进行语音交互，并提供实时的画布渲染功能供你操控。\n\n这款工具主要解决了用户对数据隐私、响应速度以及“始终在线”体验的需求。通过将 AI 部署在本地，用户无需依赖云端服务即可享受快速、私密的智能辅助，真正实现了“你的数据，你做主”。其独特的技术亮点在于强大的网关架构，将控制平面与核心助手分离，确保跨平台通信的流畅性与扩展性。\n\nOpenClaw 非常适合希望构建个性化工作流的技术爱好者、开发者，以及注重隐私保护且不愿被单一生态绑定的普通用户。只要具备基础的终端操作能力（支持 macOS、Linux 及 Windows WSL2），即可通过简单的命令行引导完成部署。如果你渴望拥有一个懂你",349277,3,"2026-04-06T06:32:30",[13,14,15,16],"Agent","开发框架","图像","数据工具","ready",{"id":19,"name":20,"github_repo":21,"description_zh":22,"stars":23,"difficulty_score":10,"last_commit_at":24,"category_tags":25,"status":17},3808,"stable-diffusion-webui","AUTOMATIC1111\u002Fstable-diffusion-webui","stable-diffusion-webui 是一个基于 Gradio 构建的网页版操作界面，旨在让用户能够轻松地在本地运行和使用强大的 Stable Diffusion 图像生成模型。它解决了原始模型依赖命令行、操作门槛高且功能分散的痛点，将复杂的 AI 绘图流程整合进一个直观易用的图形化平台。\n\n无论是希望快速上手的普通创作者、需要精细控制画面细节的设计师，还是想要深入探索模型潜力的开发者与研究人员，都能从中获益。其核心亮点在于极高的功能丰富度：不仅支持文生图、图生图、局部重绘（Inpainting）和外绘（Outpainting）等基础模式，还独创了注意力机制调整、提示词矩阵、负向提示词以及“高清修复”等高级功能。此外，它内置了 GFPGAN 和 CodeFormer 等人脸修复工具，支持多种神经网络放大算法，并允许用户通过插件系统无限扩展能力。即使是显存有限的设备，stable-diffusion-webui 也提供了相应的优化选项，让高质量的 AI 艺术创作变得触手可及。",162132,"2026-04-05T11:01:52",[14,15,13],{"id":27,"name":28,"github_repo":29,"description_zh":30,"stars":31,"difficulty_score":32,"last_commit_at":33,"category_tags":34,"status":17},1381,"everything-claude-code","affaan-m\u002Feverything-claude-code","everything-claude-code 是一套专为 AI 编程助手（如 Claude Code、Codex、Cursor 等）打造的高性能优化系统。它不仅仅是一组配置文件，而是一个经过长期实战打磨的完整框架，旨在解决 AI 代理在实际开发中面临的效率低下、记忆丢失、安全隐患及缺乏持续学习能力等核心痛点。\n\n通过引入技能模块化、直觉增强、记忆持久化机制以及内置的安全扫描功能，everything-claude-code 能显著提升 AI 在复杂任务中的表现，帮助开发者构建更稳定、更智能的生产级 AI 代理。其独特的“研究优先”开发理念和针对 Token 消耗的优化策略，使得模型响应更快、成本更低，同时有效防御潜在的攻击向量。\n\n这套工具特别适合软件开发者、AI 研究人员以及希望深度定制 AI 工作流的技术团队使用。无论您是在构建大型代码库，还是需要 AI 协助进行安全审计与自动化测试，everything-claude-code 都能提供强大的底层支持。作为一个曾荣获 Anthropic 黑客大奖的开源项目，它融合了多语言支持与丰富的实战钩子（hooks），让 AI 真正成长为懂上",151314,2,"2026-04-11T23:32:58",[14,13,35],"语言模型",{"id":37,"name":38,"github_repo":39,"description_zh":40,"stars":41,"difficulty_score":32,"last_commit_at":42,"category_tags":43,"status":17},2271,"ComfyUI","Comfy-Org\u002FComfyUI","ComfyUI 是一款功能强大且高度模块化的视觉 AI 引擎，专为设计和执行复杂的 Stable Diffusion 图像生成流程而打造。它摒弃了传统的代码编写模式，采用直观的节点式流程图界面，让用户通过连接不同的功能模块即可构建个性化的生成管线。\n\n这一设计巧妙解决了高级 AI 绘图工作流配置复杂、灵活性不足的痛点。用户无需具备编程背景，也能自由组合模型、调整参数并实时预览效果，轻松实现从基础文生图到多步骤高清修复等各类复杂任务。ComfyUI 拥有极佳的兼容性，不仅支持 Windows、macOS 和 Linux 全平台，还广泛适配 NVIDIA、AMD、Intel 及苹果 Silicon 等多种硬件架构，并率先支持 SDXL、Flux、SD3 等前沿模型。\n\n无论是希望深入探索算法潜力的研究人员和开发者，还是追求极致创作自由度的设计师与资深 AI 绘画爱好者，ComfyUI 都能提供强大的支持。其独特的模块化架构允许社区不断扩展新功能，使其成为当前最灵活、生态最丰富的开源扩散模型工具之一，帮助用户将创意高效转化为现实。",108322,"2026-04-10T11:39:34",[14,15,13],{"id":45,"name":46,"github_repo":47,"description_zh":48,"stars":49,"difficulty_score":32,"last_commit_at":50,"category_tags":51,"status":17},6121,"gemini-cli","google-gemini\u002Fgemini-cli","gemini-cli 是一款由谷歌推出的开源 AI 命令行工具，它将强大的 Gemini 大模型能力直接集成到用户的终端环境中。对于习惯在命令行工作的开发者而言，它提供了一条从输入提示词到获取模型响应的最短路径，无需切换窗口即可享受智能辅助。\n\n这款工具主要解决了开发过程中频繁上下文切换的痛点，让用户能在熟悉的终端界面内直接完成代码理解、生成、调试以及自动化运维任务。无论是查询大型代码库、根据草图生成应用，还是执行复杂的 Git 操作，gemini-cli 都能通过自然语言指令高效处理。\n\n它特别适合广大软件工程师、DevOps 人员及技术研究人员使用。其核心亮点包括支持高达 
100 万 token 的超长上下文窗口，具备出色的逻辑推理能力；内置 Google 搜索、文件操作及 Shell 命令执行等实用工具；更独特的是，它支持 MCP（模型上下文协议），允许用户灵活扩展自定义集成，连接如图像生成等外部能力。此外，个人谷歌账号即可享受免费的额度支持，且项目基于 Apache 2.0 协议完全开源，是提升终端工作效率的理想助手。",100752,"2026-04-10T01:20:03",[52,13,15,14],"插件",{"id":54,"name":55,"github_repo":56,"description_zh":57,"stars":58,"difficulty_score":32,"last_commit_at":59,"category_tags":60,"status":17},4721,"markitdown","microsoft\u002Fmarkitdown","MarkItDown 是一款由微软 AutoGen 团队打造的轻量级 Python 工具，专为将各类文件高效转换为 Markdown 格式而设计。它支持 PDF、Word、Excel、PPT、图片（含 OCR）、音频（含语音转录）、HTML 乃至 YouTube 链接等多种格式的解析，能够精准提取文档中的标题、列表、表格和链接等关键结构信息。\n\n在人工智能应用日益普及的今天，大语言模型（LLM）虽擅长处理文本，却难以直接读取复杂的二进制办公文档。MarkItDown 恰好解决了这一痛点，它将非结构化或半结构化的文件转化为模型“原生理解”且 Token 效率极高的 Markdown 格式，成为连接本地文件与 AI 分析 pipeline 的理想桥梁。此外，它还提供了 MCP（模型上下文协议）服务器，可无缝集成到 Claude Desktop 等 LLM 应用中。\n\n这款工具特别适合开发者、数据科学家及 AI 研究人员使用，尤其是那些需要构建文档检索增强生成（RAG）系统、进行批量文本分析或希望让 AI 助手直接“阅读”本地文件的用户。虽然生成的内容也具备一定可读性，但其核心优势在于为机器",93400,"2026-04-06T19:52:38",[52,14],{"id":62,"github_repo":63,"name":64,"description_en":65,"description_zh":66,"ai_summary_zh":66,"readme_en":67,"readme_zh":68,"quickstart_zh":69,"use_case_zh":70,"hero_image_url":71,"owner_login":72,"owner_name":73,"owner_avatar_url":74,"owner_bio":75,"owner_company":76,"owner_location":76,"owner_email":76,"owner_twitter":76,"owner_website":77,"owner_url":78,"languages":79,"stars":88,"forks":89,"last_commit_at":90,"license":76,"difficulty_score":10,"env_os":91,"env_gpu":92,"env_ram":93,"env_deps":94,"category_tags":99,"github_topics":100,"view_count":32,"oss_zip_url":76,"oss_zip_packed_at":76,"status":17,"created_at":109,"updated_at":110,"faqs":111,"releases":147},6805,"hukenovs\u002Fhagrid","hagrid","HAnd Gesture Recognition Image Dataset","HaGRID 是一个专为手势识别系统打造的大规模开源图像数据集。它旨在解决现有数据在多样性、场景覆盖及动态手势识别能力上的不足，为开发视频会议、智能家居控制及车载交互等应用提供坚实的数据基础。\n\n该数据集非常适合计算机视觉领域的研究人员、算法工程师及开发者使用。其核心亮点在于庞大的规模与丰富的内容：HaGRIDv2 版本包含超过 108 万张全高清 RGB 图像，涵盖 33 种手势类别及独特的“无手势”自然姿态类。数据源自近 6.6 万名不同受试者，覆盖了多变的室内光照、极端逆光环境以及 0.5 至 4 米的拍摄距离，极大地提升了模型的泛化能力。\n\n此外，项目团队还发布了一种创新的动态手势识别算法。该算法的独特之处在于仅使用静态图像训练，却能有效识别动态手势，打破了传统方法对视频序列数据的依赖。配合官方提供的预训练模型和详细的划分策略（按用户 ID 区分训练、验证与测试集），HaGRID 能帮助开发者高效构建高精度、实时的手势交互系统。","# HaGRID - HAnd Gesture Recognition Image Dataset\n\n![hagrid](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fhukenovs_hagrid_readme_0730fae353a3.jpg)\n\nWe introduce a large image dataset **HaGRIDv2** (**HA**nd **G**esture **R**ecognition **I**mage **D**ataset) for hand gesture recognition (HGR) systems. You can use it for image classification or image detection tasks. Proposed dataset allows to build HGR systems, which can be used in video conferencing services (Zoom, Skype, Discord, Jazz etc.), home automation systems, the automotive sector, etc. We have also released an algorithm for dynamic gesture recognition, which we described in our paper. This model is trained entirely on HaGRIDv2 and enables the recognition of dynamic gestures while being trained exclusively on static ones. You can find it in our [repository](https:\u002F\u002Fgithub.com\u002Fai-forever\u002Fdynamic_gestures).\n\nHaGRIDv2 size is **1.5T** and dataset contains **1,086,158** FullHD RGB images divided into **33** classes of gestures and a new separate \"no_gesture\" class, containing domain-specific natural hand postures. Also, some images have `no_gesture` class if there is a second gesture-free hand in the frame. This extra class contains **2,164** samples. 
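\nBecause every image is annotated with a `user_id` (see the Annotations section below), the subject-based split described in the next paragraph can be reproduced or verified directly from the annotation files. A minimal sketch, assuming the per-gesture annotation JSONs and the `hagrid_annotations\u002F{train,val,test}` layout shown later in this README (paths are illustrative, not fixed by the repository):\n```python\nimport json\nfrom pathlib import Path\n\ndef subjects_in_split(split_dir):\n    # Collect the subject id of every annotated image in one split folder,\n    # e.g. hagrid_annotations\u002Ftrain with one JSON file per gesture class.\n    users = set()\n    for ann_file in Path(split_dir).glob('*.json'):\n        annotations = json.loads(ann_file.read_text(encoding='utf-8'))\n        for image_name, record in annotations.items():\n            users.add(record['user_id'])\n    return users\n\ntrain_users = subjects_in_split('hagrid_annotations\u002Ftrain')\nval_users = subjects_in_split('hagrid_annotations\u002Fval')\ntest_users = subjects_in_split('hagrid_annotations\u002Ftest')\n\n# A split by subject means no user_id appears in more than one subset.\nassert train_users.isdisjoint(val_users)\nassert train_users.isdisjoint(test_users)\nassert val_users.isdisjoint(test_users)\nprint(len(train_users), len(val_users), len(test_users))\n```\n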
The data were split into training (76%), validation (9%) and test (15%) sets by subject `user_id`, with 821,458 images for train, 99,200 images for validation and 165,500 for test.\n\n![gestures](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fhukenovs_hagrid_readme_bdb24f1a942f.png)\n\nThe dataset contains **65,977** unique persons and at least this number of unique scenes. The subjects are people over 18 years old. The dataset was collected mainly indoors with considerable variation in lighting, including artificial and natural light. In addition, the dataset includes images taken in extreme conditions such as facing toward and away from a window. Also, the subjects had to show gestures at a distance of 0.5 to 4 meters from the camera.\n\nExample of a sample and its annotation:\n\n![example](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fhukenovs_hagrid_readme_0ed82213febc.jpg)\n\nFor more information, see our arXiv [paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2412.01508).\n\n## 🔥 Changelog\n- **`2025\u002F02\u002F27`**: We release the [Dynamic Gesture Recognition algorithm](https:\u002F\u002Fgithub.com\u002Fai-forever\u002Fdynamic_gestures). 🙋\n  - Introduced a novel algorithm that enables dynamic gesture recognition while being trained exclusively on static gestures\n  - Fully trained on the HaGRIDv2-1M dataset\n  - Designed for real-time applications in video conferencing, smart home control, automotive systems, and more\n  - Open-source implementation with pretrained models available in the repository\n- **`2024\u002F09\u002F24`**: We release [HaGRIDv2](https:\u002F\u002Fgithub.com\u002Fhukenovs\u002Fhagrid\u002Ftree\u002FHagrid_v2-1M). 🙏\n  - The HaGRID dataset has been expanded with 15 new gesture classes, including two-handed gestures\n  - A new class \"no_gesture\" with domain-specific natural hand postures was added (**2,164** samples, divided by train\u002Fval\u002Ftest containing 1,464, 200, 500 images, respectively)\n  - Extra class `no_gesture` contains **200,390** bounding boxes\n  - Added new models for gesture detection, hand detection and full-frame classification\n  - Dataset size is **1.5T**\n  - **1,086,158** FullHD RGB images\n  - Train\u002Fval\u002Ftest split: (821,458) **76%** \u002F (99,200) **9%** \u002F (165,500) **15%** by subject `user_id`\n  - **65,977** unique persons\n- **`2023\u002F09\u002F21`**: We release [HaGRID 2.0.](https:\u002F\u002Fgithub.com\u002Fhukenovs\u002Fhagrid\u002Ftree\u002FHagrid_v2) ✌️\n  - All files for training and testing are combined into one directory\n  - The data was further cleaned and new samples were added\n  - Multi-GPU training and testing\n  - Added new models for detection and full-frame classification\n  - Dataset size is **723GB**\n  - **554,800** FullHD RGB images (cleaned and updated classes, added diversity by race)\n  - Extra class `no_gesture` contains **120,105** samples\n  - Train\u002Fval\u002Ftest split: (410,800) **74%** \u002F (54,000) **10%** \u002F (90,000) **16%** by subject `user_id`\n  - **37,583** unique persons\n- **`2022\u002F06\u002F16`**: [HaGRID (Initial Dataset)](https:\u002F\u002Fgithub.com\u002Fhukenovs\u002Fhagrid\u002Ftree\u002FHagrid_v1) 💪\n  - Dataset size is **716GB**\n  - **552,992** FullHD RGB images divided into **18** classes\n  - Extra class `no_gesture` contains **123,589** samples\n  - Train\u002Ftest split: (509,323) **92%** \u002F (43,669) **8%** by subject `user_id`\n  - **34,730** unique persons from 18 to 65 years old\n  - The distance is 0.5 to 4 meters from the camera\n\n## 
Installation\nClone and install required python packages:\n```bash\ngit clone https:\u002F\u002Fgithub.com\u002Fhukenovs\u002Fhagrid.git\n# or mirror link:\ncd hagrid\n# Create virtual env by conda or venv\nconda create -n gestures python=3.11 -y\nconda activate gestures\n# Install requirements\npip install -r requirements.txt\n```\n\n## Downloads\nWe split the train dataset into 34 archives by gestures because of the large size of data. Download and unzip them from the following links:\n\n### Dataset\n\n| Gesture                           | Size    | Gesture                                   | Size    | Gesture | Size\n|-----------------------------------|---------|-------------------------------------------|---------|--------|----|\n| [`call`](https:\u002F\u002Frndml-team-cv.obs.ru-moscow-1.hc.sbercloud.ru\u002Fdatasets\u002Fhagrid\u002Fhagrid_dataset_new_554800\u002Fhagrid_dataset\u002Fcall.zip)    | 37.2 GB | [`peace`](https:\u002F\u002Frndml-team-cv.obs.ru-moscow-1.hc.sbercloud.ru\u002Fdatasets\u002Fhagrid\u002Fhagrid_dataset_new_554800\u002Fhagrid_dataset\u002Fpeace.zip)           | 41.4 GB | [`grabbing`](https:\u002F\u002Frndml-team-cv.obs.ru-moscow-1.hc.sbercloud.ru\u002Fdatasets\u002Fhagrid_v2\u002Fhagrid_v2_zip\u002Fgrabbing.zip) | 48.7 GB\n| [`dislike`](https:\u002F\u002Frndml-team-cv.obs.ru-moscow-1.hc.sbercloud.ru\u002Fdatasets\u002Fhagrid\u002Fhagrid_dataset_new_554800\u002Fhagrid_dataset\u002Fdislike.zip) | 40.9 GB | [`peace_inverted`](https:\u002F\u002Frndml-team-cv.obs.ru-moscow-1.hc.sbercloud.ru\u002Fdatasets\u002Fhagrid\u002Fhagrid_dataset_new_554800\u002Fhagrid_dataset\u002Fpeace_inverted.zip)  | 40.5 GB | [`grip`](https:\u002F\u002Frndml-team-cv.obs.ru-moscow-1.hc.sbercloud.ru\u002Fdatasets\u002Fhagrid_v2\u002Fhagrid_v2_zip\u002Fgrip.zip) | 48.6 GB\n| [`fist`](https:\u002F\u002Frndml-team-cv.obs.ru-moscow-1.hc.sbercloud.ru\u002Fdatasets\u002Fhagrid\u002Fhagrid_dataset_new_554800\u002Fhagrid_dataset\u002Ffist.zip)    | 42.3 GB | [`rock`](https:\u002F\u002Frndml-team-cv.obs.ru-moscow-1.hc.sbercloud.ru\u002Fdatasets\u002Fhagrid_v2\u002Fhagrid_v2_zip\u002Frock.zip)            | 41.7 GB | [`hand_heart`](https:\u002F\u002Frndml-team-cv.obs.ru-moscow-1.hc.sbercloud.ru\u002Fdatasets\u002Fhagrid_v2\u002Fhagrid_v2_zip\u002Fhand_heart.zip) | 39.6 GB\n| [`four`](https:\u002F\u002Frndml-team-cv.obs.ru-moscow-1.hc.sbercloud.ru\u002Fdatasets\u002Fhagrid\u002Fhagrid_dataset_new_554800\u002Fhagrid_dataset\u002Ffour.zip)    | 43.1 GB | [`stop`](https:\u002F\u002Frndml-team-cv.obs.ru-moscow-1.hc.sbercloud.ru\u002Fdatasets\u002Fhagrid\u002Fhagrid_dataset_new_554800\u002Fhagrid_dataset\u002Fstop.zip)            | 41.8 GB | [`hand_heart2`](https:\u002F\u002Frndml-team-cv.obs.ru-moscow-1.hc.sbercloud.ru\u002Fdatasets\u002Fhagrid_v2\u002Fhagrid_v2_zip\u002Fhand_heart2.zip) | 42.6 GB\n| [`like`](https:\u002F\u002Frndml-team-cv.obs.ru-moscow-1.hc.sbercloud.ru\u002Fdatasets\u002Fhagrid\u002Fhagrid_dataset_new_554800\u002Fhagrid_dataset\u002Flike.zip)    | 42.2 GB | [`stop_inverted`](https:\u002F\u002Frndml-team-cv.obs.ru-moscow-1.hc.sbercloud.ru\u002Fdatasets\u002Fhagrid\u002Fhagrid_dataset_new_554800\u002Fhagrid_dataset\u002Fstop_inverted.zip)   | 41.4 GB | [`holy`](https:\u002F\u002Frndml-team-cv.obs.ru-moscow-1.hc.sbercloud.ru\u002Fdatasets\u002Fhagrid_v2\u002Fhagrid_v2_zip\u002Fholy.zip) | 52.7 GB\n | [`mute`](https:\u002F\u002Frndml-team-cv.obs.ru-moscow-1.hc.sbercloud.ru\u002Fdatasets\u002Fhagrid\u002Fhagrid_dataset_new_554800\u002Fhagrid_dataset\u002Fmute.zip)    | 43.2 GB | 
[`three`](https:\u002F\u002Frndml-team-cv.obs.ru-moscow-1.hc.sbercloud.ru\u002Fdatasets\u002Fhagrid\u002Fhagrid_dataset_new_554800\u002Fhagrid_dataset\u002Fthree.zip)           | 42.2 GB | [`little_finger`](https:\u002F\u002Frndml-team-cv.obs.ru-moscow-1.hc.sbercloud.ru\u002Fdatasets\u002Fhagrid_v2\u002Fhagrid_v2_zip\u002Flittle_finger.zip) | 48.6 GB\n| [`ok`](https:\u002F\u002Frndml-team-cv.obs.ru-moscow-1.hc.sbercloud.ru\u002Fdatasets\u002Fhagrid\u002Fhagrid_dataset_new_554800\u002Fhagrid_dataset\u002Fok.zip)      | 42.5 GB | [`three2`](https:\u002F\u002Frndml-team-cv.obs.ru-moscow-1.hc.sbercloud.ru\u002Fdatasets\u002Fhagrid\u002Fhagrid_dataset_new_554800\u002Fhagrid_dataset\u002Fthree2.zip)          | 40.2 GB | [`middle_finger`](https:\u002F\u002Frndml-team-cv.obs.ru-moscow-1.hc.sbercloud.ru\u002Fdatasets\u002Fhagrid_v2\u002Fhagrid_v2_zip\u002Fmiddle_finger.zip) | 50.5 GB\n| [`one`](https:\u002F\u002Frndml-team-cv.obs.ru-moscow-1.hc.sbercloud.ru\u002Fdatasets\u002Fhagrid_v2\u002Fhagrid_v2_zip\u002Fone.zip)     | 42.7 GB | [`two_up`](https:\u002F\u002Frndml-team-cv.obs.ru-moscow-1.hc.sbercloud.ru\u002Fdatasets\u002Fhagrid\u002Fhagrid_dataset_new_554800\u002Fhagrid_dataset\u002Ftwo_up.zip)          | 41.8 GB | [`point`](https:\u002F\u002Frndml-team-cv.obs.ru-moscow-1.hc.sbercloud.ru\u002Fdatasets\u002Fhagrid_v2\u002Fhagrid_v2_zip\u002Fpoint.zip) | 50.4 GB\n| [`palm`](https:\u002F\u002Frndml-team-cv.obs.ru-moscow-1.hc.sbercloud.ru\u002Fdatasets\u002Fhagrid\u002Fhagrid_dataset_new_554800\u002Fhagrid_dataset\u002Fpalm.zip)    | 43.0 GB | [`two_up_inverted`](https:\u002F\u002Frndml-team-cv.obs.ru-moscow-1.hc.sbercloud.ru\u002Fdatasets\u002Fhagrid\u002Fhagrid_dataset_new_554800\u002Fhagrid_dataset\u002Ftwo_up_inverted.zip) | 40.9 GB | [`take_picture`](https:\u002F\u002Frndml-team-cv.obs.ru-moscow-1.hc.sbercloud.ru\u002Fdatasets\u002Fhagrid_v2\u002Fhagrid_v2_zip\u002Ftake_picture.zip) | 37.3 GB\n| [`three3`](https:\u002F\u002Frndml-team-cv.obs.ru-moscow-1.hc.sbercloud.ru\u002Fdatasets\u002Fhagrid_v2\u002Fhagrid_v2_zip\u002Fthree3.zip) | 54 GB | [`three_gun`](https:\u002F\u002Frndml-team-cv.obs.ru-moscow-1.hc.sbercloud.ru\u002Fdatasets\u002Fhagrid_v2\u002Fhagrid_v2_zip\u002Fthree_gun.zip) | 50.1 GB | [`thumb_index`](https:\u002F\u002Frndml-team-cv.obs.ru-moscow-1.hc.sbercloud.ru\u002Fdatasets\u002Fhagrid_v2\u002Fhagrid_v2_zip\u002Fthumb_index.zip) | 62.8 GB\n| [`thumb_index2`](https:\u002F\u002Frndml-team-cv.obs.ru-moscow-1.hc.sbercloud.ru\u002Fdatasets\u002Fhagrid_v2\u002Fhagrid_v2_zip\u002Fthumb_index2.zip) | 24.8 GB | [`timeout`](https:\u002F\u002Frndml-team-cv.obs.ru-moscow-1.hc.sbercloud.ru\u002Fdatasets\u002Fhagrid_v2\u002Fhagrid_v2_zip\u002Ftimeout.zip) | 39.5 GB | [`xsign`](https:\u002F\u002Frndml-team-cv.obs.ru-moscow-1.hc.sbercloud.ru\u002Fdatasets\u002Fhagrid_v2\u002Fhagrid_v2_zip\u002Fxsign.zip) | 51.3 GB\n| [`no_gesture`](https:\u002F\u002Frndml-team-cv.obs.ru-moscow-1.hc.sbercloud.ru\u002Fdatasets\u002Fhagrid_v2\u002Fhagrid_v2_zip\u002Fno_gesture.zip) | 493.9 MB\n\n`dataset` **annotations**: [`annotations`](https:\u002F\u002Frndml-team-cv.obs.ru-moscow-1.hc.sbercloud.ru\u002Fdatasets\u002Fhagrid_v2\u002Fannotations_with_landmarks\u002Fannotations.zip)\n\n[HaGRIDv2 512px - lightweight version of the full dataset with](https:\u002F\u002Frndml-team-cv.obs.ru-moscow-1.hc.sbercloud.ru\u002Fdatasets\u002Fhagrid_v2\u002Fhagridv2_512.zip) `min_side = 512p` `119.4 GB`\n\nor by using python script\n```bash\npython download.py --save_path \u003CPATH_TO_SAVE> \\\n                   
--annotations \\\n                   --dataset\n```\n\nRun the following command with key `--dataset` to download dataset with images. Download annotations for selected stage by `--annotations` key.\n\n```bash\nusage: download.py [-h] [-a] [-d] [-t TARGETS [TARGETS ...]] [-p SAVE_PATH]\n\nDownload dataset...\n\noptional arguments:\n  -h, --help            show this help message and exit\n  -a, --annotations     Download annotations\n  -d, --dataset         Download dataset\n  -t TARGETS [TARGETS ...], --targets TARGETS [TARGETS ...]\n                        Target(s) for downloading train set\n  -p SAVE_PATH, --save_path SAVE_PATH\n                        Save path\n```\nAfter downloading, you can unzip the archive by running the following command:\n```bash\nunzip \u003CPATH_TO_ARCHIVE> -d \u003CPATH_TO_SAVE>\n```\nThe structure of the dataset is as follows:\n```\n├── hagrid_dataset \u003CPATH_TO_DATASET_FOLDER>\n│   ├── call\n│   │   ├── 00000000.jpg\n│   │   ├── 00000001.jpg\n│   │   ├── ...\n├── hagrid_annotations\n│   ├── train \u003CPATH_TO_JSON_TRAIN>\n│   │   ├── call.json\n│   │   ├── ...\n│   ├── val \u003CPATH_TO_JSON_VAL>\n│   │   ├── call.json\n│   │   ├── ...\n│   ├── test \u003CPATH_TO_JSON_TEST>\n│   │   ├── call.json\n│   │   ├── ...\n```\n\n## Models\nWe provide some models pre-trained on HaGRIDv2 as the baseline with the classic backbone architectures for gesture classification, gesture detection and hand detection.\n\n| Gesture Detectors                                         | mAP      |\n|--------------------------------------------------|----------|\n| [YOLOv10x](https:\u002F\u002Frndml-team-cv.obs.ru-moscow-1.hc.sbercloud.ru\u002Fdatasets\u002Fhagrid_v2\u002Fmodels\u002FYOLOv10x_gestures.pt)  | **89.4**     |\n| [YOLOv10n](https:\u002F\u002Frndml-team-cv.obs.ru-moscow-1.hc.sbercloud.ru\u002Fdatasets\u002Fhagrid_v2\u002Fmodels\u002FYOLOv10n_gestures.pt)  | 88.2     |\n| [SSDLiteMobileNetV3Large](https:\u002F\u002Frndml-team-cv.obs.ru-moscow-1.hc.sbercloud.ru\u002Fdatasets\u002Fhagrid_v2\u002Fmodels\u002FSSDLiteMobileNetV3Large.pth) | 72.7 |\n\nIn addition, if you need to detect hands, you can use YOLO detection models, pre-trained on HaGRIDv2\n\n| Hand Detectors                                         | mAP      |\n|--------------------------------------------------|----------|\n| [YOLOv10x](https:\u002F\u002Frndml-team-cv.obs.ru-moscow-1.hc.sbercloud.ru\u002Fdatasets\u002Fhagrid_v2\u002Fmodels\u002FYOLOv10x_hands.pt)  | **88.8**     |\n| [YOLOv10n](https:\u002F\u002Frndml-team-cv.obs.ru-moscow-1.hc.sbercloud.ru\u002Fdatasets\u002Fhagrid_v2\u002Fmodels\u002FYOLOv10n_hands.pt)  | 87.9     |\n\n\n\nHowever, if you need a single gesture, you can use pre-trained full frame classifiers instead of detectors.\nTo use full frame models, **remove the no_gesture class**\n\n| Full Frame Classifiers                    | F1 Gestures |\n|-------------------------------------------|---------|\n| [MobileNetV3_small](https:\u002F\u002Frndml-team-cv.obs.ru-moscow-1.hc.sbercloud.ru\u002Fdatasets\u002Fhagrid_v2\u002Fmodels\u002FMobileNetV3_small.pth) | 86.7    |\n| [MobileNetV3_large](https:\u002F\u002Frndml-team-cv.obs.ru-moscow-1.hc.sbercloud.ru\u002Fdatasets\u002Fhagrid_v2\u002Fmodels\u002FMobileNetV3_large.pth) | 93.4    |\n| [VitB16](https:\u002F\u002Frndml-team-cv.obs.ru-moscow-1.hc.sbercloud.ru\u002Fdatasets\u002Fhagrid_v2\u002Fmodels\u002FVitB16.pth) | 91.7    |\n| 
[ResNet18](https:\u002F\u002Frndml-team-cv.obs.ru-moscow-1.hc.sbercloud.ru\u002Fdatasets\u002Fhagrid_v2\u002Fmodels\u002FResNet18.pth)      | 98.3    |\n| [ResNet152](https:\u002F\u002Frndml-team-cv.obs.ru-moscow-1.hc.sbercloud.ru\u002Fdatasets\u002Fhagrid_v2\u002Fmodels\u002FResNet152.pth)    | **98.6**    |\n| [ConvNeXt base](https:\u002F\u002Frndml-team-cv.obs.ru-moscow-1.hc.sbercloud.ru\u002Fdatasets\u002Fhagrid_v2\u002Fmodels\u002FConvNeXt_base.pth)    | 96.4 |\n\n\n\u003Cdetails>\u003Csummary>\u003Ch3>Train\u003C\u002Fh3>\u003C\u002Fsummary>\n\nYou can use the downloaded pre-trained models, or select parameters for training in the `configs` folder.\nTo train the model, execute the following command:\n\nSingle GPU:\n\n```bash\npython run.py -c train -p configs\u002F\u003Cconfig>\n```\nMulti GPU:\n```bash\nbash ddp_run.sh -g 0,1,2,3 -c train -p configs\u002F\u003Cconfig>\n```\nwhere `-g` is a list of GPU ids.\n\n\nAt every step, the current loss, learning rate and other values are logged to **TensorBoard**.\nSee all saved metrics and parameters by running the following command (this will open a webpage at `localhost:6006`):\n```bash\ntensorboard --logdir=\u003Cworkdir>\n```\n\u003C\u002Fdetails>\n\u003Cdetails>\u003Csummary>\u003Ch3>Test\u003C\u002Fh3>\u003C\u002Fsummary>\n\nTest your model by running the following command:\n\nSingle GPU:\n\n```bash\npython run.py -c test -p configs\u002F\u003Cconfig>\n```\nMulti GPU:\n```bash\nbash ddp_run.sh -g 0,1,2,3 -c test -p configs\u002F\u003Cconfig>\n```\nwhere `-g` is a list of GPU ids.\n\u003C\u002Fdetails>\n\n## Demo\n```bash\npython demo.py -p \u003CPATH_TO_CONFIG> --landmarks\n```\n![demo](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fhukenovs_hagrid_readme_d7865ab71372.gif)\n\n## Demo Full Frame Classifiers\n```bash\npython demo_ff.py -p \u003CPATH_TO_CONFIG>\n```\n\n## Annotations\n\nThe annotations consist of bounding boxes of hands and gestures in COCO format `[top left X position, top left Y position, width, height]` with gesture labels. We provide a `user_id` field that allows you to split the train \u002F val \u002F test dataset yourself, as well as meta-information containing automatically annotated age, gender and race.\n```json\n\"04c49801-1101-4b4e-82d0-d4607cd01df0\": {\n    \"bboxes\": [\n        [0.0694444444, 0.3104166667, 0.2666666667, 0.2640625],\n        [0.5993055556, 0.2875, 0.2569444444, 0.2760416667]\n    ],\n    \"labels\": [\n        \"thumb_index2\",\n        \"thumb_index2\"\n    ],\n    \"united_bbox\": [\n        [0.0694444444, 0.2875, 0.7868055556, 0.2869791667]\n    ],\n    \"united_label\": [\n        \"thumb_index2\"\n    ],\n    \"user_id\": \"2fe6a9156ff8ca27fbce8ada318c592b\",\n    \"hand_landmarks\": [\n            [\n                [0.37233507701702123, 0.5935673528948108],\n                [0.3997604810145188, 0.5925499847441514],\n                ...\n            ],\n            [\n                [0.37388438145820907, 0.47547576284667353],\n                [0.39460467775730607, 0.4698847093520443],\n                ...\n            ]\n        ],\n    \"meta\": {\n        \"age\": [24.41],\n        \"gender\": [\"female\"],\n        \"race\": [\"White\"]\n    }\n}\n```\n- Key - image name without extension\n- Bboxes - list of normalized bboxes for each hand `[top left X pos, top left Y pos, width, height]`\n- Labels - list of class labels for each hand, e.g. 
`like`, `stop`, `no_gesture`\n- United_bbox - united combination of two hand boxes in the case of two-handed gestures (\"hand_heart\", \"hand_heart2\", \"thumb_index2\", \"timeout\", \"holy\", \"take_picture\", \"xsign\") and 'null' in the case of one-handed gestures\n- United_label - a class label for united_bbox in the case of two-handed gestures and 'null' in the case of one-handed gestures\n- User ID - subject id (useful for splitting the data into train \u002F val subsets).\n- Hand_landmarks - landmarks for each hand, auto-annotated with MediaPipe.\n- Meta - meta-information automatically annotated with the FairFace and MiVOLO neural networks, containing age, gender and race\n\n\n### Bounding boxes\n\n| Object       | Train       | Val     | Test    |Total    |\n|--------------|-------------|---------|---------|---------|\n| gesture      | 980 924     | 120 003 | 200 006 |1 300 933|\n| no gesture   | 154 403     | 19 411  | 29 386  | 203 200 |\n| total boxes  | 1 135 327   | 139 414 | 229 392 |1 504 133|\n\n### Landmarks\n\n| Object                    | Train       | Val     | Test    |Total    |\n|---------------------------|-------------|---------|---------|---------|\n|Total hands with landmarks |  983 991    | 123 230 |201 131  |1 308 352|\n\n\n\n### Converters\n\n\u003Cdetails>\u003Csummary> \u003Cb>Yolo\u003C\u002Fb> \u003C\u002Fsummary>\n\nWe provide a script to convert annotations to [YOLO](https:\u002F\u002Fpjreddie.com\u002Fdarknet\u002Fyolo\u002F) format. To convert annotations, run the following command:\n\n```bash\npython -m converters.hagrid_to_yolo --cfg \u003CCONFIG_PATH> --mode \u003C'hands' or 'gestures'>\n```\n\nAfter conversion, you need to change the original [img2labels](https:\u002F\u002Fgithub.com\u002FWongKinYiu\u002Fyolov7\u002Fblob\u002F2fdc7f14395f6532ad05fb3e6970150a6a83d290\u002Futils\u002Fdatasets.py#L347-L350) definition to:\n\n```python\ndef img2label_paths(img_paths):\n    img_paths = list(img_paths)\n    # Define label paths as a function of image paths\n    if \"train\" in img_paths[0]:\n        return [x.replace(\"train\", \"train_labels\").replace(\".jpg\", \".txt\") for x in img_paths]\n    elif \"test\" in img_paths[0]:\n        return [x.replace(\"test\", \"test_labels\").replace(\".jpg\", \".txt\") for x in img_paths]\n    elif \"val\" in img_paths[0]:\n        return [x.replace(\"val\", \"val_labels\").replace(\".jpg\", \".txt\") for x in img_paths]\n```\n\n\u003C\u002Fdetails>\n\n\n\u003Cdetails>\u003Csummary> \u003Cb>Coco\u003C\u002Fb> \u003C\u002Fsummary>\n\nAlso, we provide a script to convert annotations to [Coco](https:\u002F\u002Fcocodataset.org\u002F#home) format. 
To convert annotations, run the following command:\n\n```bash\npython -m converters.hagrid_to_coco --cfg \u003CCONFIG_PATH> --mode \u003C'hands' or 'gestures'>\n```\n\n\u003C\u002Fdetails>\n\n### License\n\u003Ca rel=\"license\" href=\"http:\u002F\u002Fcreativecommons.org\u002Flicenses\u002Fby-sa\u002F4.0\u002F\">\u003Cimg alt=\"Creative Commons License\" style=\"border-width:0\" src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fhukenovs_hagrid_readme_9599430498d9.png\" \u002F>\u003C\u002Fa>\u003Cbr \u002F>This work is licensed under a variant of \u003Ca rel=\"license\" href=\"http:\u002F\u002Fcreativecommons.org\u002Flicenses\u002Fby-sa\u002F4.0\u002F\">Creative Commons Attribution-ShareAlike 4.0 International License\u003C\u002Fa>.\n\nPlease see the specific [license](https:\u002F\u002Fgithub.com\u002Fhukenovs\u002Fhagrid\u002Fblob\u002Fmaster\u002Flicense\u002Fen_us.pdf).\n\n### Authors and Credits\n- [Alexander Kapitanov](https:\u002F\u002Fwww.linkedin.com\u002Fin\u002Fhukenovs)\n- [Andrey Makhlyarchuk](https:\u002F\u002Fwww.linkedin.com\u002Fin\u002Fmakhliarchuk)\n- [Karina Kvanchiani](https:\u002F\u002Fwww.linkedin.com\u002Fin\u002Fkvanchiani)\n- [Aleksandr Nagaev](https:\u002F\u002Fwww.linkedin.com\u002Fin\u002Fnagadit)\n- [Roman Kraynov](https:\u002F\u002Fru.linkedin.com\u002Fin\u002Froman-kraynov-25ab44265)\n- [Anton Nuzhdin](https:\u002F\u002Fwww.linkedin.com\u002Fin\u002Fanton-nuzhdin-46b799234 )\n\n### Links\n- [Github (HaGRIDv2-1M)](https:\u002F\u002Fgithub.com\u002Fhukenovs\u002Fhagrid)\n\u003C!-- - [Mirror](https:\u002F\u002Fgitlab.aicloud.sbercloud.ru\u002Frndcv\u002Fhagrid) -->\n- [arXiv](https:\u002F\u002Farxiv.org\u002Fabs\u002F2206.08219)\n- [Github (Dynamic Gesture Recognition)](https:\u002F\u002Fgithub.com\u002Fai-forever\u002Fdynamic_gestures)\n\u003C!-- - [Kaggle](https:\u002F\u002Fwww.kaggle.com\u002Fdatasets\u002Fkapitanov\u002Fhagrid) -->\n\u003C!-- - [Habr](https:\u002F\u002Fhabr.com\u002Fru\u002Fcompany\u002Fsberdevices\u002Fblog\u002F671614\u002F) -->\n\u003C!-- - [Paperswithcode](https:\u002F\u002Fpaperswithcode.com\u002Fpaper\u002Fhagrid-hand-gesture-recognition-image-dataset) -->\n\n### Citation\nYou can cite the paper using the following BibTeX entry:\n\n    @misc{nuzhdin2024hagridv21mimagesstatic,\n        title={HaGRIDv2: 1M Images for Static and Dynamic Hand Gesture Recognition}, \n        author={Anton Nuzhdin and Alexander Nagaev and Alexander Sautin and Alexander Kapitanov and Karina Kvanchiani},\n        year={2024},\n        eprint={2412.01508},\n        archivePrefix={arXiv},\n        primaryClass={cs.CV},\n        url={https:\u002F\u002Farxiv.org\u002Fabs\u002F2412.01508}, \n    }\n\n    @InProceedings{Kapitanov_2024_WACV,\n        author    = {Kapitanov, Alexander and Kvanchiani, Karina and Nagaev, Alexander and Kraynov, Roman and Makhliarchuk, Andrei},\n        title     = {HaGRID -- HAnd Gesture Recognition Image Dataset},\n        booktitle = {Proceedings of the IEEE\u002FCVF Winter Conference on Applications of Computer Vision (WACV)},\n        month     = {January},\n        year      = {2024},\n        pages     = {4572-4581}\n    }\n","# HaGRID - 手势识别图像数据集\n\n![hagrid](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fhukenovs_hagrid_readme_0730fae353a3.jpg)\n\n我们推出了一套大型图像数据集 **HaGRIDv2**（**HA**nd **G**esture **R**ecognition **I**mage **D**ataset），用于手势识别（HGR）系统。您可以将其用于图像分类或目标检测任务。该数据集有助于构建可在视频会议服务（Zoom、Skype、Discord、Jazz 等）、智能家居系统、汽车行业等领域应用的手势识别系统。此外，我们还发布了一种动态手势识别算法，并在论文中进行了详细描述。该模型完全基于 HaGRIDv2 
训练而成，能够在仅使用静态手势训练的情况下实现动态手势的识别。您可以在我们的 [仓库](https:\u002F\u002Fgithub.com\u002Fai-forever\u002Fdynamic_gestures) 中找到相关代码。\n\nHaGRIDv2 的总大小为 **1.5TB**，包含 **1,086,158** 张 FullHD 分辨率的 RGB 图像，分为 **33** 个手势类别，以及一个全新的“no_gesture”类别，其中包含了特定领域的自然手部姿态。此外，部分图像也标注为“no_gesture”，当画面中存在另一只未做手势的手时即归入此类。这一额外类别共包含 **2,164** 个样本。数据按用户 `user_id` 进行划分，训练集占 76%，验证集占 9%，测试集占 15%，具体数量分别为：训练集 821,458 张，验证集 99,200 张，测试集 165,500 张。\n\n![gestures](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fhukenovs_hagrid_readme_bdb24f1a942f.png)\n\n该数据集涵盖了 **65,977** 名不同的个体，至少对应相同数量的独特场景。参与者均为 18 岁以上的人群。数据主要在室内采集，光照条件变化较大，包括人工光源和自然光。此外，数据集中还包括一些极端条件下的图像，例如面向或背对窗户拍摄的情况。参与者需要在距离摄像头 0.5 至 4 米的范围内展示手势。\n\n示例样本及其标注：\n\n![example](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fhukenovs_hagrid_readme_0ed82213febc.jpg)\n\n更多信息请参阅我们在 arXiv 上发表的论文 [链接](https:\u002F\u002Farxiv.org\u002Fabs\u002F2412.01508)。\n\n## 🔥 更新日志\n- **`2025\u002F02\u002F27`**：我们发布了 [动态手势识别算法](https:\u002F\u002Fgithub.com\u002Fai-forever\u002Fdynamic_gestures)。 🙋  \n  - 提出了一种创新算法，能够在仅使用静态手势训练的情况下实现动态手势识别。  \n  - 模型完全基于 HaGRIDv2-1M 数据集训练而成。  \n  - 专为实时应用场景设计，适用于视频会议、智能家居控制、汽车系统等。  \n  - 开源实现，并在仓库中提供了预训练模型。  \n- **`2024\u002F09\u002F24`**：我们发布了 [HaGRIDv2](https:\u002F\u002Fgithub.com\u002Fhukenovs\u002Fhagrid\u002Ftree\u002FHagrid_v2-1M)。 🙏  \n  - HaGRID 数据集新增了 15 个手势类别，其中包括双手手势。  \n  - 新增“no_gesture”类别，包含领域特定的自然手部姿态（共 2,164 个样本，按训练\u002F验证\u002F测试分别划分为 1,464、200、500 张）。  \n  - “no_gesture”类别共包含 **200,390** 个边界框。  \n  - 增加了手势检测、手部检测以及全帧分类的新模型。  \n  - 数据集总大小为 **1.5TB**。  \n  - 共有 **1,086,158** 张 FullHD RGB 图像。  \n  - 按用户 `user_id` 划分的训练\u002F验证\u002F测试比例为：821,458 张（76%）\u002F 99,200 张（9%）\u002F 165,500 张（15%）。  \n  - 包含 **65,977** 名不同个体。  \n- **`2023\u002F09\u002F21`**：我们发布了 [HaGRID 2.0](https:\u002F\u002Fgithub.com\u002Fhukenovs\u002Fhagrid\u002Ftree\u002FHagrid_v2)。 ✌️  \n  - 所有训练和测试文件被合并到一个目录中。  \n  - 数据进一步清理并补充了新样本。  \n  - 支持多 GPU 训练和测试。  \n  - 增加了新的检测模型和全帧分类模型。  \n  - 数据集总大小为 **723GB**。  \n  - 共有 **554,800** 张 FullHD RGB 图像（清理并更新了类别，增加了种族多样性）。  \n  - “no_gesture”类别新增 **120,105** 个样本。  \n  - 按用户 `user_id` 划分的训练\u002F验证\u002F测试比例为：410,800 张（74%）\u002F 54,000 张（10%）\u002F 90,000 张（16%）。  \n  - 包含 **37,583** 名不同个体。  \n- **`2022\u002F06\u002F16`**：[HaGRID（初始数据集）](https:\u002F\u002Fgithub.com\u002Fhukenovs\u002Fhagrid\u002Ftree\u002FHagrid_v1) 💪  \n  - 数据集总大小为 **716GB**。  \n  - 共有 **552,992** 张 FullHD RGB 图像，分为 **18** 个手势类别。  \n  - “no_gesture”类别包含 **123,589** 个样本。  \n  - 按用户 `user_id` 划分的训练\u002F测试比例为：509,323 张（92%）\u002F 43,669 张（8%）。  \n  - 包含 **34,730** 名 18 至 65 岁之间的不同个体。  \n  - 参与者需在距离摄像头 0.5 至 4 米的范围内进行手势展示。\n\n## 安装\n克隆项目并安装所需的 Python 包：\n```bash\ngit clone https:\u002F\u002Fgithub.com\u002Fhukenovs\u002Fhagrid.git\n# 或镜像链接：\ncd hagrid\n# 使用 conda 或 venv 创建虚拟环境\nconda create -n gestures python=3.11 -y\nconda activate gestures\n# 安装依赖\npip install -r requirements.txt\n```\n\n## 下载\n由于数据量庞大，我们将训练数据集按手势类别拆分为 34 个压缩包。请从以下链接下载并解压：\n\n### 数据集\n\n| 手势                           | 大小    | 手势                                   | 大小    | 手势 | 大小\n|-----------------------------------|---------|-------------------------------------------|---------|--------|----|\n| [`call`](https:\u002F\u002Frndml-team-cv.obs.ru-moscow-1.hc.sbercloud.ru\u002Fdatasets\u002Fhagrid\u002Fhagrid_dataset_new_554800\u002Fhagrid_dataset\u002Fcall.zip)    | 37.2 GB | [`peace`](https:\u002F\u002Frndml-team-cv.obs.ru-moscow-1.hc.sbercloud.ru\u002Fdatasets\u002Fhagrid\u002Fhagrid_dataset_new_554800\u002Fhagrid_dataset\u002Fpeace.zip)           | 41.4 GB | 
[`grabbing`](https:\u002F\u002Frndml-team-cv.obs.ru-moscow-1.hc.sbercloud.ru\u002Fdatasets\u002Fhagrid_v2\u002Fhagrid_v2_zip\u002Fgrabbing.zip) | 48.7 GB\n| [`dislike`](https:\u002F\u002Frndml-team-cv.obs.ru-moscow-1.hc.sbercloud.ru\u002Fdatasets\u002Fhagrid\u002Fhagrid_dataset_new_554800\u002Fhagrid_dataset\u002Fdislike.zip) | 40.9 GB | [`peace_inverted`](https:\u002F\u002Frndml-team-cv.obs.ru-moscow-1.hc.sbercloud.ru\u002Fdatasets\u002Fhagrid\u002Fhagrid_dataset_new_554800\u002Fhagrid_dataset\u002Fpeace_inverted.zip)  | 40.5 GB | [`grip`](https:\u002F\u002Frndml-team-cv.obs.ru-moscow-1.hc.sbercloud.ru\u002Fdatasets\u002Fhagrid_v2\u002Fhagrid_v2_zip\u002Fgrip.zip) | 48.6 GB\n| [`fist`](https:\u002F\u002Frndml-team-cv.obs.ru-moscow-1.hc.sbercloud.ru\u002Fdatasets\u002Fhagrid\u002Fhagrid_dataset_new_554800\u002Fhagrid_dataset\u002Ffist.zip)    | 42.3 GB | [`rock`](https:\u002F\u002Frndml-team-cv.obs.ru-moscow-1.hc.sbercloud.ru\u002Fdatasets\u002Fhagrid_v2\u002Fhagrid_v2_zip\u002Frock.zip)            | 41.7 GB | [`hand_heart`](https:\u002F\u002Frndml-team-cv.obs.ru-moscow-1.hc.sbercloud.ru\u002Fdatasets\u002Fhagrid_v2\u002Fhagrid_v2_zip\u002Fhand_heart.zip) | 39.6 GB\n| [`four`](https:\u002F\u002Frndml-team-cv.obs.ru-moscow-1.hc.sbercloud.ru\u002Fdatasets\u002Fhagrid\u002Fhagrid_dataset_new_554800\u002Fhagrid_dataset\u002Ffour.zip)    | 43.1 GB | [`stop`](https:\u002F\u002Frndml-team-cv.obs.ru-moscow-1.hc.sbercloud.ru\u002Fdatasets\u002Fhagrid\u002Fhagrid_dataset_new_554800\u002Fhagrid_dataset\u002Fstop.zip)            | 41.8 GB | [`hand_heart2`](https:\u002F\u002Frndml-team-cv.obs.ru-moscow-1.hc.sbercloud.ru\u002Fdatasets\u002Fhagrid_v2\u002Fhagrid_v2_zip\u002Fhand_heart2.zip) | 42.6 GB\n| [`like`](https:\u002F\u002Frndml-team-cv.obs.ru-moscow-1.hc.sbercloud.ru\u002Fdatasets\u002Fhagrid\u002Fhagrid_dataset_new_554800\u002Fhagrid_dataset\u002Flike.zip)    | 42.2 GB | [`stop_inverted`](https:\u002F\u002Frndml-team-cv.obs.ru-moscow-1.hc.sbercloud.ru\u002Fdatasets\u002Fhagrid\u002Fhagrid_dataset_new_554800\u002Fhagrid_dataset\u002Fstop_inverted.zip)   | 41.4 GB | [`holy`](https:\u002F\u002Frndml-team-cv.obs.ru-moscow-1.hc.sbercloud.ru\u002Fdatasets\u002Fhagrid_v2\u002Fhagrid_v2_zip\u002Fholy.zip) | 52.7 GB\n | [`mute`](https:\u002F\u002Frndml-team-cv.obs.ru-moscow-1.hc.sbercloud.ru\u002Fdatasets\u002Fhagrid\u002Fhagrid_dataset_new_554800\u002Fhagrid_dataset\u002Fmute.zip)    | 43.2 GB | [`three`](https:\u002F\u002Frndml-team-cv.obs.ru-moscow-1.hc.sbercloud.ru\u002Fdatasets\u002Fhagrid\u002Fhagrid_dataset_new_554800\u002Fhagrid_dataset\u002Fthree.zip)           | 42.2 GB | [`little_finger`](https:\u002F\u002Frndml-team-cv.obs.ru-moscow-1.hc.sbercloud.ru\u002Fdatasets\u002Fhagrid_v2\u002Fhagrid_v2_zip\u002Flittle_finger.zip) | 48.6 GB\n| [`ok`](https:\u002F\u002Frndml-team-cv.obs.ru-moscow-1.hc.sbercloud.ru\u002Fdatasets\u002Fhagrid\u002Fhagrid_dataset_new_554800\u002Fhagrid_dataset\u002Fok.zip)      | 42.5 GB | [`three2`](https:\u002F\u002Frndml-team-cv.obs.ru-moscow-1.hc.sbercloud.ru\u002Fdatasets\u002Fhagrid\u002Fhagrid_dataset_new_554800\u002Fhagrid_dataset\u002Fthree2.zip)          | 40.2 GB | [`middle_finger`](https:\u002F\u002Frndml-team-cv.obs.ru-moscow-1.hc.sbercloud.ru\u002Fdatasets\u002Fhagrid_v2\u002Fhagrid_v2_zip\u002Fmiddle_finger.zip) | 50.5 GB\n| [`one`](https:\u002F\u002Frndml-team-cv.obs.ru-moscow-1.hc.sbercloud.ru\u002Fdatasets\u002Fhagrid_v2\u002Fhagrid_v2_zip\u002Fone.zip)     | 42.7 GB | 
[`two_up`](https:\u002F\u002Frndml-team-cv.obs.ru-moscow-1.hc.sbercloud.ru\u002Fdatasets\u002Fhagrid\u002Fhagrid_dataset_new_554800\u002Fhagrid_dataset\u002Ftwo_up.zip)          | 41.8 GB | [`point`](https:\u002F\u002Frndml-team-cv.obs.ru-moscow-1.hc.sbercloud.ru\u002Fdatasets\u002Fhagrid_v2\u002Fhagrid_v2_zip\u002Fpoint.zip) | 50.4 GB\n| [`palm`](https:\u002F\u002Frndml-team-cv.obs.ru-moscow-1.hc.sbercloud.ru\u002Fdatasets\u002Fhagrid\u002Fhagrid_dataset_new_554800\u002Fhagrid_dataset\u002Fpalm.zip)    | 43.0 GB | [`two_up_inverted`](https:\u002F\u002Frndml-team-cv.obs.ru-moscow-1.hc.sbercloud.ru\u002Fdatasets\u002Fhagrid\u002Fhagrid_dataset_new_554800\u002Fhagrid_dataset\u002Ftwo_up_inverted.zip) | 40.9 GB | [`take_picture`](https:\u002F\u002Frndml-team-cv.obs.ru-moscow-1.hc.sbercloud.ru\u002Fdatasets\u002Fhagrid_v2\u002Fhagrid_v2_zip\u002Ftake_picture.zip) | 37.3 GB\n| [`three3`](https:\u002F\u002Frndml-team-cv.obs.ru-moscow-1.hc.sbercloud.ru\u002Fdatasets\u002Fhagrid_v2\u002Fhagrid_v2_zip\u002Fthree3.zip) | 54 GB | [`three_gun`](https:\u002F\u002Frndml-team-cv.obs.ru-moscow-1.hc.sbercloud.ru\u002Fdatasets\u002Fhagrid_v2\u002Fhagrid_v2_zip\u002Fthree_gun.zip) | 50.1 GB | [`thumb_index`](https:\u002F\u002Frndml-team-cv.obs.ru-moscow-1.hc.sbercloud.ru\u002Fdatasets\u002Fhagrid_v2\u002Fhagrid_v2_zip\u002Fthumb_index.zip) | 62.8 GB\n| [`thumb_index2`](https:\u002F\u002Frndml-team-cv.obs.ru-moscow-1.hc.sbercloud.ru\u002Fdatasets\u002Fhagrid_v2\u002Fhagrid_v2_zip\u002Fthumb_index2.zip) | 24.8 GB | [`timeout`](https:\u002F\u002Frndml-team-cv.obs.ru-moscow-1.hc.sbercloud.ru\u002Fdatasets\u002Fhagrid_v2\u002Fhagrid_v2_zip\u002Ftimeout.zip) | 39.5 GB | [`xsign`](https:\u002F\u002Frndml-team-cv.obs.ru-moscow-1.hc.sbercloud.ru\u002Fdatasets\u002Fhagrid_v2\u002Fhagrid_v2_zip\u002Fxsign.zip) | 51.3 GB\n| [`no_gesture`](https:\u002F\u002Frndml-team-cv.obs.ru-moscow-1.hc.sbercloud.ru\u002Fdatasets\u002Fhagrid_v2\u002Fhagrid_v2_zip\u002Fno_gesture.zip) | 493.9 MB\n\n`dataset` **annotations**: [`annotations`](https:\u002F\u002Frndml-team-cv.obs.ru-moscow-1.hc.sbercloud.ru\u002Fdatasets\u002Fhagrid_v2\u002Fannotations_with_landmarks\u002Fannotations.zip)\n\n[HaGRIDv2 512px - 轻量版完整数据集，包含](https:\u002F\u002Frndml-team-cv.obs.ru-moscow-1.hc.sbercloud.ru\u002Fdatasets\u002Fhagrid_v2\u002Fhagridv2_512.zip) `min_side = 512p` 的 `119.4 GB`\n\n或者使用 Python 脚本：\n```bash\npython download.py --save_path \u003CPATH_TO_SAVE> \\\n                   --annotations \\\n                   --dataset\n```\n\n运行以下命令并使用 `--dataset` 参数下载数据集图像。通过 `--annotations` 参数下载选定阶段的标注文件。\n\n```bash\nusage: download.py [-h] [-a] [-d] [-t TARGETS [TARGETS ...]] [-p SAVE_PATH]\n\n下载数据集...\n\n可选参数：\n  -h, --help            显示此帮助信息并退出\n  -a, --annotations     下载标注文件\n  -d, --dataset         下载数据集\n  -t TARGETS [TARGETS ...], --targets TARGETS [TARGETS ...]\n                        下载训练集的目标\n  -p SAVE_PATH, --save_path SAVE_PATH\n                        保存路径\n```\n下载完成后，可以使用以下命令解压压缩包：\n```bash\nunzip \u003CPATH_TO_ARCHIVE> -d \u003CPATH_TO_SAVE>\n```\n数据集的结构如下：\n```\n├── hagrid_dataset \u003CPATH_TO_DATASET_FOLDER>\n│   ├── call\n│   │   ├── 00000000.jpg\n│   │   ├── 00000001.jpg\n│   │   ├── ...\n├── hagrid_annotations\n│   ├── train \u003CPATH_TO_JSON_TRAIN>\n│   │   ├── call.json\n│   │   ├── ...\n│   ├── val \u003CPATH_TO_JSON_VAL>\n│   │   ├── call.json\n│   │   ├── ...\n│   ├── test \u003CPATH_TO_JSON_TEST>\n│   │   ├── call.json\n│   │   ├── ...\n```\n\n## 模型\n我们提供了一些在HaGRIDv2数据集上预训练的模型，作为手势分类、手势检测和手部检测任务的经典骨干网络架构基线。\n\n| 手势检测器              
                           | mAP      |\n|--------------------------------------------------|----------|\n| [YOLOv10x](https:\u002F\u002Frndml-team-cv.obs.ru-moscow-1.hc.sbercloud.ru\u002Fdatasets\u002Fhagrid_v2\u002Fmodels\u002FYOLOv10x_gestures.pt)  | **89.4**     |\n| [YOLOv10n](https:\u002F\u002Frndml-team-cv.obs.ru-moscow-1.hc.sbercloud.ru\u002Fdatasets\u002Fhagrid_v2\u002Fmodels\u002FYOLOv10n_gestures.pt)  | 88.2     |\n| [SSDLiteMobileNetV3Large](https:\u002F\u002Frndml-team-cv.obs.ru-moscow-1.hc.sbercloud.ru\u002Fdatasets\u002Fhagrid_v2\u002Fmodels\u002FSSDLiteMobileNetV3Large.pth) | 72.7 |\n\n此外，如果您需要检测手部，可以使用在HaGRIDv2数据集上预训练的YOLO检测模型。\n\n| 手部检测器                                         | mAP      |\n|--------------------------------------------------|----------|\n| [YOLOv10x](https:\u002F\u002Frndml-team-cv.obs.ru-moscow-1.hc.sbercloud.ru\u002Fdatasets\u002Fhagrid_v2\u002Fmodels\u002FYOLOv10x_hands.pt)  | **88.8**     |\n| [YOLOv10n](https:\u002F\u002Frndml-team-cv.obs.ru-moscow-1.hc.sbercloud.ru\u002Fdatasets\u002Fhagrid_v2\u002Fmodels\u002FYOLOv10n_hands.pt)  | 87.9     |\n\n\n\n然而，如果您只需要识别单个手势，可以使用预训练的全帧分类器，而不是检测器。\n要使用全帧模型，请**移除no_gesture类别**。\n\n| 全帧分类器                    | F1得分 |\n|-------------------------------------------|---------|\n| [MobileNetV3_small](https:\u002F\u002Frndml-team-cv.obs.ru-moscow-1.hc.sbercloud.ru\u002Fdatasets\u002Fhagrid_v2\u002Fmodels\u002FMobileNetV3_small.pth) | 86.7    |\n| [MobileNetV3_large](https:\u002F\u002Frndml-team-cv.obs.ru-moscow-1.hc.sbercloud.ru\u002Fdatasets\u002Fhagrid_v2\u002Fmodels\u002FMobileNetV3_large.pth) | 93.4    |\n| [VitB16](https:\u002F\u002Frndml-team-cv.obs.ru-moscow-1.hc.sbercloud.ru\u002Fdatasets\u002Fhagrid_v2\u002Fmodels\u002FVitB16.pth) | 91.7    |\n| [ResNet18](https:\u002F\u002Frndml-team-cv.obs.ru-moscow-1.hc.sbercloud.ru\u002Fdatasets\u002Fhagrid_v2\u002Fmodels\u002FResNet18.pth)      | 98.3    |\n| [ResNet152](https:\u002F\u002Frndml-team-cv.obs.ru-moscow-1.hc.sbercloud.ru\u002Fdatasets\u002Fhagrid_v2\u002Fmodels\u002FResNet152.pth)    | **98.6**    |\n| [ConvNeXt base](https:\u002F\u002Frndml-team-cv.obs.ru-moscow-1.hc.sbercloud.ru\u002Fdatasets\u002Fhagrid_v2\u002Fmodels\u002FConvNeXt_base.pth)    | 96.4 |\n\n\n\u003Cdetails>\u003Csummary>\u003Ch3>训练\u003C\u002Fh3>\u003C\u002Fsummary>\n\n您可以使用下载的预训练模型，或者在`configs`文件夹中选择参数进行训练。\n要训练模型，请执行以下命令：\n\n单GPU：\n\n```bash\npython run.py -c train -p configs\u002F\u003Cconfig>\n```\n多GPU：\n```bash\nbash ddp_run.sh -g 0,1,2,3 -c train -p configs\u002F\u003Cconfig>\n```\n其中`-g`是GPU ID列表。\n\n\n每一步的当前损失、学习率等值都会被记录到**TensorBoard**中。\n通过打开命令行查看所有保存的指标和参数（这将打开一个网页，地址为`localhost:6006`）：\n```bash\ntensorboard --logdir=\u003Cworkdir>\n```\n\u003C\u002Fdetails>\n\u003Cdetails>\u003Csummary>\u003Ch3>测试\u003C\u002Fh3>\u003C\u002Fsummary>\n\n通过运行以下命令来测试您的模型：\n\n单GPU：\n\n```bash\npython run.py -c test -p configs\u002F\u003Cconfig>\n```\n多GPU：\n```bash\nbash ddp_run.sh -g 0,1,2,3 -c test -p configs\u002F\u003Cconfig>\n```\n其中`-g`是GPU ID列表。\n\u003C\u002Fdetails>\n\n## 演示\n ```bash\npython demo.py -p \u003CPATH_TO_CONFIG> --landmarks\n```\n![demo](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fhukenovs_hagrid_readme_d7865ab71372.gif)\n\n## 全帧分类器演示\n ```bash\npython demo_ff.py -p \u003CPATH_TO_CONFIG>\n```\n\n## 标注\n\n标注由COCO格式的手部和手势边界框组成，格式为`[左上角X坐标，左上角Y坐标，宽度，高度]`，并附有手势标签。我们提供了`user_id`字段，允许您自行划分训练\u002F验证\u002F测试数据集；此外，元信息包含自动标注的年龄、性别和种族。\n```json\n\"04c49801-1101-4b4e-82d0-d4607cd01df0\": {\n    \"bboxes\": [\n        [0.0694444444, 0.3104166667, 0.2666666667, 0.2640625],\n      
  [0.5993055556, 0.2875, 0.2569444444, 0.2760416667]\n    ],\n    \"labels\": [\n        \"thumb_index2\",\n        \"thumb_index2\"\n    ],\n    \"united_bbox\": [\n        [0.0694444444, 0.2875, 0.7868055556, 0.2869791667]\n    ],\n    \"united_label\": [\n        \"thumb_index2\"\n    ],\n    \"user_id\": \"2fe6a9156ff8ca27fbce8ada318c592b\",\n    \"hand_landmarks\": [\n            [\n                [0.37233507701702123, 0.5935673528948108],\n                [0.3997604810145188, 0.5925499847441514],\n                ...\n            ],\n            [\n                [0.37388438145820907, 0.47547576284667353],\n                [0.39460467775730607, 0.4698847093520443],\n                ...\n            ]\n        ]\n    \"meta\": {\n        \"age\": [24.41],\n        \"gender\": [\"female\"],\n        \"race\": [\"White\"]\n    }\n```\n- 键：图像名称，不含扩展名\n- Bboxes：每个手的归一化边界框列表`[左上角X坐标，左上角Y坐标，宽度，高度]`\n- Labels：每个手的类别标签列表，例如`like`、`stop`、`no_gesture`\n- United_bbox：对于双手手势（如“hand_heart”、“hand_heart2”、“thumb_index2”、“timeout”、“holy”、“take_picture”、“xsign”），是两个手框的合并；对于单手手势，则为`null`\n- United_label：双手手势时的联合类别标签，单手手势时为`null`\n- User ID：受试者ID（可用于将数据划分为训练\u002F验证子集）\n- Hand_landmarks：使用MediaPipe地标自动标注的每只手的特征点\n- Meta：由FairFace和MiVOLO神经网络自动标注的元信息，包括年龄、性别和种族\n\n\n### 边界框\n\n| 对象       | 训练集       | 验证集     | 测试集    |总计    |\n|--------------|-------------|---------|---------|---------|\n| 手势      | 980 924     | 120 003 | 200 006 |1 300 933|\n| 无手势   | 154 403     | 19 411  | 29 386  | 203 200 |\n| 总框数  | 1 135 327   | 139 414 | 229 392 |1 504 133|\n\n### 特征点\n\n| 对象                    | 训练集       | 验证集     | 测试集    |总计    |\n|---------------------------|-------------|---------|---------|---------|\n| 总手数（带特征点） |  983 991    | 123 230 |201 131  |1 308 352|\n\n### 转换工具\n\n\u003Cdetails>\u003Csummary> \u003Cb>Yolo\u003C\u002Fb> \u003C\u002Fsummary>\n\n我们提供了一个脚本，用于将标注转换为 [YOLO](https:\u002F\u002Fpjreddie.com\u002Fdarknet\u002Fyolo\u002F) 格式。要进行标注转换，请运行以下命令：\n\n```bash\npython -m converters.hagrid_to_yolo --cfg \u003CCONFIG_PATH> --mode \u003C'hands' or 'gestures'>\n```\n\n转换完成后，您需要将原始的 `img2labels` 定义（位于 [yolov7 仓库](https:\u002F\u002Fgithub.com\u002FWongKinYiu\u002Fyolov7\u002Fblob\u002F2fdc7f14395f6532ad05fb3e6970150a6a83d290\u002Futils\u002Fdatasets.py#L347-L350)）修改为：\n\n```python\ndef img2label_paths(img_paths):\n    img_paths = list(img_paths)\n    # 根据图像路径定义标签路径\n    if \"train\" in img_paths[0]:\n        return [x.replace(\"train\", \"train_labels\").replace(\".jpg\", \".txt\") for x in img_paths]\n    elif \"test\" in img_paths[0]:\n        return [x.replace(\"test\", \"test_labels\").replace(\".jpg\", \".txt\") for x in img_paths]\n    elif \"val\" in img_paths[0]:\n        return [x.replace(\"val\", \"val_labels\").replace(\".jpg\", \".txt\") for x in img_paths]\n```\n\n\u003C\u002Fdetails>\n\n\n\u003Cdetails>\u003Csummary> \u003Cb>Coco\u003C\u002Fb> \u003C\u002Fsummary>\n\n此外，我们还提供了一个脚本，用于将标注转换为 [Coco](https:\u002F\u002Fcocodataset.org\u002F#home) 格式。要进行标注转换，请运行以下命令：\n\n```bash\npython -m converters.hagrid_to_coco --cfg \u003CCONFIG_PATH> --mode \u003C'hands' or 'gestures'>\n```\n\n\u003C\u002Fdetails>\n\n### 许可证\n\u003Ca rel=\"license\" href=\"http:\u002F\u002Fcreativecommons.org\u002Flicenses\u002Fby-sa\u002F4.0\u002F\">\u003Cimg alt=\"知识共享许可协议\" style=\"border-width:0\" src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fhukenovs_hagrid_readme_9599430498d9.png\" \u002F>\u003C\u002Fa>\u003Cbr \u002F>本作品采用一种变体的 \u003Ca rel=\"license\" 
href=\"http:\u002F\u002Fcreativecommons.org\u002Flicenses\u002Fby-sa\u002F4.0\u002F\">知识共享 署名-相同方式共享 4.0 国际许可协议\u003C\u002Fa>。\n\n具体许可协议请参阅 [此处](https:\u002F\u002Fgithub.com\u002Fhukenovs\u002Fhagrid\u002Fblob\u002Fmaster\u002Flicense\u002Fen_us.pdf)。\n\n### 作者与致谢\n- [Alexander Kapitanov](https:\u002F\u002Fwww.linkedin.com\u002Fin\u002Fhukenovs)\n- [Andrey Makhlyarchuk](https:\u002F\u002Fwww.linkedin.com\u002Fin\u002Fmakhliarchuk)\n- [Karina Kvanchiani](https:\u002F\u002Fwww.linkedin.com\u002Fin\u002Fkvanchiani)\n- [Aleksandr Nagaev](https:\u002F\u002Fwww.linkedin.com\u002Fin\u002Fnagadit)\n- [Roman Kraynov](https:\u002F\u002Fru.linkedin.com\u002Fin\u002Froman-kraynov-25ab44265)\n- [Anton Nuzhdin](https:\u002F\u002Fwww.linkedin.com\u002Fin\u002Fanton-nuzhdin-46b799234 )\n\n### 链接\n- [Github (HaGRIDv2-1M)](https:\u002F\u002Fgithub.com\u002Fhukenovs\u002Fhagrid)\n\u003C!-- - [镜像](https:\u002F\u002Fgitlab.aicloud.sbercloud.ru\u002Frndcv\u002Fhagrid) -->\n- [arXiv](https:\u002F\u002Farxiv.org\u002Fabs\u002F2206.08219)\n- [Github (动态手势识别)](https:\u002F\u002Fgithub.com\u002Fai-forever\u002Fdynamic_gestures)\n\u003C!-- - [Kaggle](https:\u002F\u002Fwww.kaggle.com\u002Fdatasets\u002Fkapitanov\u002Fhagrid) -->\n\u003C!-- - [Habr](https:\u002F\u002Fhabr.com\u002Fru\u002Fcompany\u002Fsberdevices\u002Fblog\u002F671614\u002F) -->\n\u003C!-- - [Paperswithcode](https:\u002F\u002Fpaperswithcode.com\u002Fpaper\u002Fhagrid-hand-gesture-recognition-image-dataset) -->\n\n### 引用\n您可以使用以下 BibTeX 条目引用该论文：\n\n    @misc{nuzhdin2024hagridv21mimagesstatic,\n        title={HaGRIDv2: 1M 图像用于静态和动态手势识别}, \n        author={Anton Nuzhdin、Alexander Nagaev、Alexander Sautin、Alexander Kapitanov、Karina Kvanchiani},\n        year={2024},\n        eprint={2412.01508},\n        archivePrefix={arXiv},\n        primaryClass={cs.CV},\n        url={https:\u002F\u002Farxiv.org\u002Fabs\u002F2412.01508}, \n    }\n\n    @InProceedings{Kapitanov_2024_WACV,\n        author    = {Kapitanov, Alexander、Kvanchiani, Karina、Nagaev, Alexander、Kraynov, Roman、Makhliarchuk, Andrei},\n        title     = {HaGRID -- 手势识别图像数据集},\n        booktitle = {IEEE\u002FCVF 冬季计算机视觉应用会议（WACV）论文集},\n        month     = {一月},\n        year      = {2024},\n        pages     = {4572-4581}\n    }","# HaGRID 快速上手指南\n\nHaGRID (HAnd Gesture Recognition Image Dataset) 是一个大规模的手势识别图像数据集（最新版本 HaGRIDv2），包含超过 100 万张全高清 RGB 图像，涵盖 33 种手势类别及“无手势”类别。该数据集适用于构建视频会议、智能家居及车载系统中的手势识别应用。\n\n## 环境准备\n\n在开始之前，请确保您的开发环境满足以下要求：\n\n*   **操作系统**: Linux (推荐), macOS, 或 Windows\n*   **Python 版本**: 3.11 (官方推荐)\n*   **存储空间**: 完整数据集约为 **1.5TB**，请确保有足够的磁盘空间。若空间有限，可下载轻量版 (512px, 约 119GB) 或仅下载特定手势类别。\n*   **依赖管理**: 推荐使用 `conda` 或 `venv` 创建虚拟环境。\n\n## 安装步骤\n\n### 1. 克隆项目代码\n首先从 GitHub 克隆仓库到本地：\n\n```bash\ngit clone https:\u002F\u002Fgithub.com\u002Fhukenovs\u002Fhagrid.git\ncd hagrid\n```\n\n### 2. 创建并激活虚拟环境\n使用 conda 创建 Python 3.11 环境（也可使用 venv）：\n\n```bash\nconda create -n gestures python=3.11 -y\nconda activate gestures\n```\n\n### 3. 安装依赖包\n安装项目所需的 Python 库：\n\n```bash\npip install -r requirements.txt\n```\n> **提示**: 国内用户若下载缓慢，可添加清华或阿里镜像源加速：\n> `pip install -r requirements.txt -i https:\u002F\u002Fpypi.tuna.tsinghua.edu.cn\u002Fsimple`\n\n## 基本使用\n\n### 1. 
下载数据集与标注文件\n由于数据集巨大，官方将其按手势类别分割为多个压缩包。您可以选择手动下载或使用提供的脚本下载。\n\n#### 方式 A：使用 Python 脚本下载（推荐）\n运行 `download.py` 脚本可以灵活选择下载内容（数据集图片、标注文件或特定类别）。\n\n**下载所有标注文件：**\n```bash\npython download.py --annotations --save_path .\u002Fdata\n```\n\n**下载特定手势类别的训练数据（例如 \"call\" 和 \"ok\"）：**\n```bash\npython download.py --dataset --targets call ok --save_path .\u002Fdata\n```\n\n**下载完整数据集（需极大带宽和存储空间，慎用）：**\n```bash\npython download.py --dataset --save_path .\u002Fdata\n```\n\n**脚本参数说明：**\n*   `-a, --annotations`: 下载标注文件 (JSON)\n*   `-d, --dataset`: 下载图像数据集\n*   `-t, --targets`: 指定要下载的手势类别名称（如 `call`, `fist`, `peace` 等）\n*   `-p, --save_path`: 指定保存路径\n\n#### 方式 B：手动下载\n访问 README 中的链接表格，按需下载特定手势的 `.zip` 包及 `annotations.zip`。下载完成后解压至指定目录。\n\n### 2. 数据集结构\n下载并解压后，目录结构应如下所示：\n\n```text\n├── hagrid_dataset (数据图片目录)\n│   ├── call\n│   │   ├── 00000000.jpg\n│   │   └── ...\n│   ├── ok\n│   │   └── ...\n├── hagrid_annotations (标注文件目录)\n│   ├── train\n│   │   ├── call.json\n│   │   └── ...\n│   ├── val\n│   │   └── ...\n│   └── test\n│       └── ...\n```\n\n### 3. 使用预训练模型\nHaGRID 提供了基于 YOLOv10 等架构的预训练模型，可直接用于手势检测。\n\n**示例：加载 YOLOv10x 手势检测模型**\n您需要先下载对应的 `.pt` 权重文件（见原文 Models 章节链接），然后使用 Ultralytics YOLO 库进行推理：\n\n```python\nfrom ultralytics import YOLO\n\n# 加载预训练模型\nmodel = YOLO(\"YOLOv10x_gestures.pt\")\n\n# 执行推理\nresults = model(\"path\u002Fto\u002Fyour\u002Fimage.jpg\")\n\n# 显示结果\nresults[0].show()\n```\n\n**可用模型性能参考：**\n*   **YOLOv10x**: mAP 89.4 (高精度)\n*   **YOLOv10n**: mAP 88.2 (轻量级)\n\n对于动态手势识别，请参考其独立仓库 [dynamic_gestures](https:\u002F\u002Fgithub.com\u002Fai-forever\u002Fdynamic_gestures)。","某智能会议软件团队正在开发一款无需触控的“手势静音\u002F举手”功能，旨在让用户在视频会议中通过简单挥手即可控制麦克风状态。\n\n### 没有 hagrid 时\n- **数据多样性严重不足**：团队自行采集的数据仅覆盖少数人和单一光照环境，导致模型在用户背光或距离摄像头较远（如 2 米外）时完全失效。\n- **误触率极高**：缺乏自然的“无手势”样本训练，模型常将用户托腮、整理头发等日常动作误判为控制指令，频繁意外静音。\n- **动态识别开发受阻**：团队难以获取足够标注数据来训练动态手势算法，只能依赖昂贵的静态图像堆叠，导致手势响应延迟高、不流畅。\n- **泛化能力差**：由于训练集中人物种族和手部形态单一，系统对不同肤色或手型的用户识别准确率波动巨大，引发公平性质疑。\n\n### 使用 hagrid 后\n- **全场景鲁棒性提升**：利用 hagrid 中 108 万张涵盖 6.5 万人、多种光照及 0.5-4 米距离的 FullHD 图像，模型在极端背光和远距离下依然保持高精度识别。\n- **精准过滤无效动作**：引入 hagrid 特有的\"no_gesture\"类别（含 20 多万自然手部姿态样本），有效区分控制手势与日常动作，误触率降低 90%。\n- **实现实时动态交互**：基于 hagrid 训练的专用算法，仅需静态图像数据即可实现流畅的动态手势识别，让用户挥手操作如原生般丝滑。\n- **全球用户无缝适配**：得益于数据集包含的多样化人种和独特场景，系统上线即具备强大的泛化能力，无需针对不同地区用户单独调优。\n\nhagrid 凭借海量高质量标注数据和创新的动态识别算法，将原本需数月打磨的手势交互功能缩短至数周落地，并显著提升了用户体验的稳定性与包容性。","https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fhukenovs_hagrid_0730fae3.jpg","hukenovs","Alexander Kapitanov","https:\u002F\u002Foss.gittoolsai.com\u002Favatars\u002Fhukenovs_7c5c0b0d.jpg","Data Scientist, ex. FPGA Engineer",null,"https:\u002F\u002Fhabr.com\u002Fru\u002Fusers\u002Fhukenovs\u002F","https:\u002F\u002Fgithub.com\u002Fhukenovs",[80,84],{"name":81,"color":82,"percentage":83},"Python","#3572A5",99,{"name":85,"color":86,"percentage":87},"Shell","#89e051",1,971,137,"2026-04-10T11:48:05","未说明","未明确说明具体型号，但提供 YOLOv10x 等深度学习模型及多 GPU 训练支持，隐含需要 NVIDIA GPU 以进行高效训练和推理","未说明（鉴于数据集高达 1.5TB 且包含百万级 FullHD 图像，建议大容量内存）",{"notes":95,"python":96,"dependencies":97},"1. 数据集体积巨大（HaGRIDv2 约 1.5TB），下载前请确保有足够的存储空间，官方提供了按手势分类的压缩包及轻量版（512px, 119.4GB）可选。\n2. 建议使用 conda 创建虚拟环境（示例命令使用 Python 3.11）。\n3. 项目支持多 GPU 训练和测试。\n4. 
需手动运行脚本下载数据集和标注文件，并解压到指定目录结构。","3.11",[98],"requirements.txt 中定义的包（具体列表未在 README 文本中展开）",[15,14,16],[101,102,103,104,105,106,107,108],"computer-vision","dataset","deep-learning","gesture-recognition","gestures-classification","image-classification","bounding-boxes","hands","2026-03-27T02:49:30.150509","2026-04-12T14:00:15.915008",[112,117,122,127,132,137,142],{"id":113,"question_zh":114,"answer_zh":115,"source_url":116},30685,"运行 demo.py 时遇到 'UnicodeDecodeError: utf-8 codec can't decode byte' 错误怎么办？","该错误通常发生在 Windows 系统上读取配置文件时。请确保您的配置文件（.yaml）是以 UTF-8 编码保存的。如果问题依旧，可以尝试在代码中显式指定编码方式打开文件，或者检查配置文件路径是否正确。维护者建议参考相关的 YOLOv7 ONNX 导出笔记本以确认输入图像尺寸应为 (320,320)。","https:\u002F\u002Fgithub.com\u002Fhukenovs\u002Fhagrid\u002Fissues\u002F30",{"id":118,"question_zh":119,"answer_zh":120,"source_url":121},30686,"轻量版数据集的标注文件在哪里下载？与完整版一样吗？","轻量版（low-resolution version）数据集使用的标注文件与完整版完全相同。您可以直接从仓库下载，或者通过以下链接直接获取包含地标信息的标注文件：https:\u002F\u002Frndml-team-cv.obs.ru-moscow-1.hc.sbercloud.ru\u002Fdatasets\u002Fhagrid_v2\u002Fannotations_with_landmarks\u002Fannotations.zip","https:\u002F\u002Fgithub.com\u002Fhukenovs\u002Fhagrid\u002Fissues\u002F55",{"id":123,"question_zh":124,"answer_zh":125,"source_url":126},30687,"使用预训练模型测试新图片时识别结果不准确（如无法识别简单手势），如何解决？","这通常是因为图像预处理步骤与模型训练时不一致导致的。请确保使用以下代码进行预处理：\n1. 使用 `ImageOps.pad` 将图像填充为 224x224 大小，背景色设为黑色 (0,0,0)。\n2. 转换为 tensor 并转换数据类型。\n示例代码：\n```python\nimage = Image.open(\"path\u002Fto\u002Fimage.png\")\nimage_resized = ImageOps.pad(image, tuple([224, 224]), color=(0, 0, 0))\nimage_tensor = F.pil_to_tensor(image_resized)\nimage_tensor = F.convert_image_dtype(image_tensor)\nimages = torch.stack(list(image.to(device) for image in [image_tensor]))\npredicts = model(images)\n```\n同时请确认加载预训练权重的代码正确：`model.load_state_dict(torch.load(\"\u002Fpath\u002Fto\u002Fweights\")[\"state_dict\"])`。","https:\u002F\u002Fgithub.com\u002Fhukenovs\u002Fhagrid\u002Fissues\u002F22",{"id":128,"question_zh":129,"answer_zh":130,"source_url":131},30688,"数据集或子样本下载链接失效或下载卡在 1GB 处怎么办？","下载中断可能是由于网络波动或特定地区 ISP 限制导致的。许多用户反馈稍后重试即可成功下载。如果在中国大陆下载受阻，建议尝试使用 `wget` 命令或其他下载工具，并检查网络连接稳定性。如果问题持续，请提供您的国家、ISP 服务商以及具体的报错信息以便进一步排查。","https:\u002F\u002Fgithub.com\u002Fhukenovs\u002Fhagrid\u002Fissues\u002F21",{"id":133,"question_zh":134,"answer_zh":135,"source_url":136},30689,"如何将 .pth 模型文件转换为 ONNX 格式？","您可以使用模型对象中的 `hagrid_model` 属性进行导出。首先加载配置和权重，然后将模型设置为 eval 模式，最后使用 `torch.onnx.export`。注意输入张量的形状需符合模型要求（例如检测模型可能需要特定的分辨率）。\n关键步骤参考：\n```python\nconf = OmegaConf.load(\".\u002Fconfigs\u002FResNext50.yaml\")\nmodel = build_model(conf)\n# 加载权重...\nmodel.load_state_dict(snapshot[\"MODEL_STATE\"])\nmodel.eval()\n# 定义输入输出名称\ninput_names = [ \"image\" ]\noutput_names = [ \"output\" ]\n# 执行导出\ntorch.onnx.export(model, input, \"output.onnx\", input_names=input_names, output_names=output_names)\n```","https:\u002F\u002Fgithub.com\u002Fhukenovs\u002Fhagrid\u002Fissues\u002F72",{"id":138,"question_zh":139,"answer_zh":140,"source_url":141},30690,"数据集太大，有没有缩小版或裁剪后的手部图像版本？","有的，项目提供了 HaGRID-512p 版本，这是经过调整大小的数据集。完整数据集的 512p 版本大小约为 13GB。您可以通过以下链接下载：https:\u002F\u002Fn-ws-620xz-pd11.s3pd11.sbercloud.ru\u002Fb-ws-620xz-pd11-jux\u002Fhagrid\u002Fhagrid\u002Fhagrid_512.zip","https:\u002F\u002Fgithub.com\u002Fhukenovs\u002Fhagrid\u002Fissues\u002F45",{"id":143,"question_zh":144,"answer_zh":145,"source_url":146},30691,"训练新数据时必须包含关键点（landmarks）标注吗？如果只想训练部分手势类别怎么办？","不一定必须包含关键点，这取决于您的任务需求。如果您只想训练特定的几个手势类别（例如仅 3 种），可以在配置文件中移除不需要的类别。维护者建议：只需在 config 
配置文件中删除不必要的类即可实现自定义训练。","https:\u002F\u002Fgithub.com\u002Fhukenovs\u002Fhagrid\u002Fissues\u002F41",[]]