[{"data":1,"prerenderedAt":-1},["ShallowReactive",2],{"similar-vietanhdev--anylabeling":3,"tool-vietanhdev--anylabeling":64},[4,17,27,35,43,56],{"id":5,"name":6,"github_repo":7,"description_zh":8,"stars":9,"difficulty_score":10,"last_commit_at":11,"category_tags":12,"status":16},3808,"stable-diffusion-webui","AUTOMATIC1111\u002Fstable-diffusion-webui","stable-diffusion-webui 是一个基于 Gradio 构建的网页版操作界面，旨在让用户能够轻松地在本地运行和使用强大的 Stable Diffusion 图像生成模型。它解决了原始模型依赖命令行、操作门槛高且功能分散的痛点，将复杂的 AI 绘图流程整合进一个直观易用的图形化平台。\n\n无论是希望快速上手的普通创作者、需要精细控制画面细节的设计师，还是想要深入探索模型潜力的开发者与研究人员，都能从中获益。其核心亮点在于极高的功能丰富度：不仅支持文生图、图生图、局部重绘（Inpainting）和外绘（Outpainting）等基础模式，还独创了注意力机制调整、提示词矩阵、负向提示词以及“高清修复”等高级功能。此外，它内置了 GFPGAN 和 CodeFormer 等人脸修复工具，支持多种神经网络放大算法，并允许用户通过插件系统无限扩展能力。即使是显存有限的设备，stable-diffusion-webui 也提供了相应的优化选项，让高质量的 AI 艺术创作变得触手可及。",162132,3,"2026-04-05T11:01:52",[13,14,15],"开发框架","图像","Agent","ready",{"id":18,"name":19,"github_repo":20,"description_zh":21,"stars":22,"difficulty_score":23,"last_commit_at":24,"category_tags":25,"status":16},1381,"everything-claude-code","affaan-m\u002Feverything-claude-code","everything-claude-code 是一套专为 AI 编程助手（如 Claude Code、Codex、Cursor 等）打造的高性能优化系统。它不仅仅是一组配置文件，而是一个经过长期实战打磨的完整框架，旨在解决 AI 代理在实际开发中面临的效率低下、记忆丢失、安全隐患及缺乏持续学习能力等核心痛点。\n\n通过引入技能模块化、直觉增强、记忆持久化机制以及内置的安全扫描功能，everything-claude-code 能显著提升 AI 在复杂任务中的表现，帮助开发者构建更稳定、更智能的生产级 AI 代理。其独特的“研究优先”开发理念和针对 Token 消耗的优化策略，使得模型响应更快、成本更低，同时有效防御潜在的攻击向量。\n\n这套工具特别适合软件开发者、AI 研究人员以及希望深度定制 AI 工作流的技术团队使用。无论您是在构建大型代码库，还是需要 AI 协助进行安全审计与自动化测试，everything-claude-code 都能提供强大的底层支持。作为一个曾荣获 Anthropic 黑客大奖的开源项目，它融合了多语言支持与丰富的实战钩子（hooks），让 AI 真正成长为懂上",140436,2,"2026-04-05T23:32:43",[13,15,26],"语言模型",{"id":28,"name":29,"github_repo":30,"description_zh":31,"stars":32,"difficulty_score":23,"last_commit_at":33,"category_tags":34,"status":16},2271,"ComfyUI","Comfy-Org\u002FComfyUI","ComfyUI 是一款功能强大且高度模块化的视觉 AI 引擎，专为设计和执行复杂的 Stable Diffusion 图像生成流程而打造。它摒弃了传统的代码编写模式，采用直观的节点式流程图界面，让用户通过连接不同的功能模块即可构建个性化的生成管线。\n\n这一设计巧妙解决了高级 AI 
绘图工作流配置复杂、灵活性不足的痛点。用户无需具备编程背景，也能自由组合模型、调整参数并实时预览效果，轻松实现从基础文生图到多步骤高清修复等各类复杂任务。ComfyUI 拥有极佳的兼容性，不仅支持 Windows、macOS 和 Linux 全平台，还广泛适配 NVIDIA、AMD、Intel 及苹果 Silicon 等多种硬件架构，并率先支持 SDXL、Flux、SD3 等前沿模型。\n\n无论是希望深入探索算法潜力的研究人员和开发者，还是追求极致创作自由度的设计师与资深 AI 绘画爱好者，ComfyUI 都能提供强大的支持。其独特的模块化架构允许社区不断扩展新功能，使其成为当前最灵活、生态最丰富的开源扩散模型工具之一，帮助用户将创意高效转化为现实。",107662,"2026-04-03T11:11:01",[13,14,15],{"id":36,"name":37,"github_repo":38,"description_zh":39,"stars":40,"difficulty_score":23,"last_commit_at":41,"category_tags":42,"status":16},3704,"NextChat","ChatGPTNextWeb\u002FNextChat","NextChat 是一款轻量且极速的 AI 助手，旨在为用户提供流畅、跨平台的大模型交互体验。它完美解决了用户在多设备间切换时难以保持对话连续性，以及面对众多 AI 模型不知如何统一管理的痛点。无论是日常办公、学习辅助还是创意激发，NextChat 都能让用户随时随地通过网页、iOS、Android、Windows、MacOS 或 Linux 端无缝接入智能服务。\n\n这款工具非常适合普通用户、学生、职场人士以及需要私有化部署的企业团队使用。对于开发者而言，它也提供了便捷的自托管方案，支持一键部署到 Vercel 或 Zeabur 等平台。\n\nNextChat 的核心亮点在于其广泛的模型兼容性，原生支持 Claude、DeepSeek、GPT-4 及 Gemini Pro 等主流大模型，让用户在一个界面即可自由切换不同 AI 能力。此外，它还率先支持 MCP（Model Context Protocol）协议，增强了上下文处理能力。针对企业用户，NextChat 提供专业版解决方案，具备品牌定制、细粒度权限控制、内部知识库整合及安全审计等功能，满足公司对数据隐私和个性化管理的高标准要求。",87618,"2026-04-05T07:20:52",[13,26],{"id":44,"name":45,"github_repo":46,"description_zh":47,"stars":48,"difficulty_score":23,"last_commit_at":49,"category_tags":50,"status":16},2268,"ML-For-Beginners","microsoft\u002FML-For-Beginners","ML-For-Beginners 是由微软推出的一套系统化机器学习入门课程，旨在帮助零基础用户轻松掌握经典机器学习知识。这套课程将学习路径规划为 12 周，包含 26 节精炼课程和 52 道配套测验，内容涵盖从基础概念到实际应用的完整流程，有效解决了初学者面对庞大知识体系时无从下手、缺乏结构化指导的痛点。\n\n无论是希望转型的开发者、需要补充算法背景的研究人员，还是对人工智能充满好奇的普通爱好者，都能从中受益。课程不仅提供了清晰的理论讲解，还强调动手实践，让用户在循序渐进中建立扎实的技能基础。其独特的亮点在于强大的多语言支持，通过自动化机制提供了包括简体中文在内的 50 多种语言版本，极大地降低了全球不同背景用户的学习门槛。此外，项目采用开源协作模式，社区活跃且内容持续更新，确保学习者能获取前沿且准确的技术资讯。如果你正寻找一条清晰、友好且专业的机器学习入门之路，ML-For-Beginners 将是理想的起点。",84991,"2026-04-05T10:45:23",[14,51,52,53,15,54,26,13,55],"数据工具","视频","插件","其他","音频",{"id":57,"name":58,"github_repo":59,"description_zh":60,"stars":61,"difficulty_score":10,"last_commit_at":62,"category_tags":63,"status":16},3128,"ragflow","infiniflow\u002Fragflow","RAGFlow 
是一款领先的开源检索增强生成（RAG）引擎，旨在为大语言模型构建更精准、可靠的上下文层。它巧妙地将前沿的 RAG 技术与智能体（Agent）能力相结合，不仅支持从各类文档中高效提取知识，还能让模型基于这些知识进行逻辑推理和任务执行。\n\n在大模型应用中，幻觉问题和知识滞后是常见痛点。RAGFlow 通过深度解析复杂文档结构（如表格、图表及混合排版），显著提升了信息检索的准确度，从而有效减少模型“胡编乱造”的现象，确保回答既有据可依又具备时效性。其内置的智能体机制更进一步，使系统不仅能回答问题，还能自主规划步骤解决复杂问题。\n\n这款工具特别适合开发者、企业技术团队以及 AI 研究人员使用。无论是希望快速搭建私有知识库问答系统，还是致力于探索大模型在垂直领域落地的创新者，都能从中受益。RAGFlow 提供了可视化的工作流编排界面和灵活的 API 接口，既降低了非算法背景用户的上手门槛，也满足了专业开发者对系统深度定制的需求。作为基于 Apache 2.0 协议开源的项目，它正成为连接通用大模型与行业专有知识之间的重要桥梁。",77062,"2026-04-04T04:44:48",[15,14,13,26,54],{"id":65,"github_repo":66,"name":67,"description_en":68,"description_zh":69,"ai_summary_zh":69,"readme_en":70,"readme_zh":71,"quickstart_zh":72,"use_case_zh":73,"hero_image_url":74,"owner_login":75,"owner_name":76,"owner_avatar_url":77,"owner_bio":78,"owner_company":79,"owner_location":80,"owner_email":81,"owner_twitter":75,"owner_website":82,"owner_url":83,"languages":84,"stars":93,"forks":94,"last_commit_at":95,"license":96,"difficulty_score":10,"env_os":97,"env_gpu":98,"env_ram":99,"env_deps":100,"category_tags":110,"github_topics":111,"view_count":10,"oss_zip_url":81,"oss_zip_packed_at":81,"status":16,"created_at":123,"updated_at":124,"faqs":125,"releases":161},1000,"vietanhdev\u002Fanylabeling","anylabeling","Effortless AI-assisted data labeling with AI support from YOLO, Segment Anything (SAM+SAM2\u002F2.1+SAM3), MobileSAM!!","anylabeling 是一个开源的AI辅助数据标注工具，专为简化图像标注流程而设计。它解决了传统手动标注耗时长、易出错的问题——以往标注一张图片可能需要反复调整边界框或多边形，anylabeling 则通过集成YOLOv8（用于快速对象检测）和Segment Anything系列模型（包括SAM、SAM2、SAM2.1及创新的SAM3），智能预测图像中的对象位置。用户只需轻点鼠标或输入简单文本提示（如“汽车”），工具就能自动生成精准标注，支持多边形、矩形、圆等多种形状，还涵盖文本检测和关键信息提取功能，大幅减少重复劳动。  \n\n它特别适合AI开发者、研究人员和数据科学家使用，无论是训练计算机视觉模型还是处理医疗、遥感等专业图像数据集，都能快速构建高质量标注数据。技术亮点在于SAM3的文本驱动分割能力，能用自然语言描述实现开放词汇标注，无需预定义类别。界面友好直观，融合了LabelImg和Labelme的优点，支持中英文等多语言，安装简单（提供一键可执行文件或Pypi安装）。anylabeling 让标注工作变得高效又省心，帮你把精力集中在AI创新上，而非繁琐的手动操作。","\u003Cp align=\"center\">\n  \u003Cimg alt=\"AnyLabeling\" style=\"width: 128px; height: 128px; height: auto;\" 
src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fvietanhdev_anylabeling_readme_d99ac3e4a329.png\"\u002F>\n  \u003Ch1 align=\"center\">🌟 AnyLabeling 🌟\u003C\u002Fh1>\n  \u003Cp align=\"center\">Effortless data labeling with AI support from \u003Cb>YOLO\u003C\u002Fb> and \u003Cb>Segment Anything\u003C\u002Fb>!\u003C\u002Fp>\n  \u003Cp align=\"center\">\u003Cb>AnyLabeling = LabelImg + Labelme + Improved UI + Auto-labeling\u003C\u002Fb>\u003C\u002Fp>\n\u003C\u002Fp>\n\n![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fvietanhdev_anylabeling_readme_339a3f56450b.png)\n\n[![PyPI](https:\u002F\u002Fimg.shields.io\u002Fpypi\u002Fv\u002Fanylabeling)](https:\u002F\u002Fpypi.org\u002Fproject\u002Fanylabeling)\n[![license](https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Flicense\u002Fvietanhdev\u002Fanylabeling.svg)](https:\u002F\u002Fgithub.com\u002Fvietanhdev\u002Fanylabeling\u002Fblob\u002Fmaster\u002FLICENSE)\n[![open issues](https:\u002F\u002Fisitmaintained.com\u002Fbadge\u002Fopen\u002Fvietanhdev\u002Fanylabeling.svg)](https:\u002F\u002Fgithub.com\u002Fvietanhdev\u002Fanylabeling\u002Fissues)\n[![Pypi Downloads](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fvietanhdev_anylabeling_readme_7d52cffd384e.png)](https:\u002F\u002Fpypi.org\u002Fproject\u002Fanylabeling\u002F)\n[![Documentation](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FRead-Documentation-green)](https:\u002F\u002Fanylabeling.nrl.ai\u002F)\n[![Follow](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F+Follow-vietanhdev-blue)](https:\u002F\u002Ftwitter.com\u002Fvietanhdev)\n\n[![AnyLearning-Banner](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fvietanhdev_anylabeling_readme_2d24ef055c2d.png)](https:\u002F\u002Fanylearning.nrl.ai\u002F)\n\n[![ai-flow 
62b3c222](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fvietanhdev_anylabeling_readme_7aa01c3f5deb.png)](https:\u002F\u002Fanylearning.nrl.ai\u002F)\n\n\n\u003Ca href=\"https:\u002F\u002Fyoutu.be\u002F5qVJiYNX5Kk\">\n  \u003Cimg alt=\"AnyLabeling\" src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fvietanhdev_anylabeling_readme_be08dd2a82ab.png\"\u002F>\n\u003C\u002Fa>\n\n**Auto Labeling with Segment Anything**\n\n\u003Ca href=\"https:\u002F\u002Fyoutu.be\u002F5qVJiYNX5Kk\">\n  \u003Cimg style=\"width: 800px; margin-left: auto; margin-right: auto; display: block;\" alt=\"AnyLabeling-SegmentAnything\" src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fvietanhdev_anylabeling_readme_8d388e9cea09.gif\"\u002F>\n\u003C\u002Fa>\n\n\n- **YouTube Demo:** [https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=5qVJiYNX5Kk](https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=5qVJiYNX5Kk)\n- **Documentation:** [https:\u002F\u002Fanylabeling.nrl.ai](https:\u002F\u002Fanylabeling.nrl.ai)\n\n**Features:**\n\n- [x] Image annotation for polygon, rectangle, circle, line and point.\n- [x] Auto-labeling with **YOLOv8** (object detection).\n- [x] Auto-labeling with **Segment Anything** family:\n  - **SAM** (ViT-B \u002F ViT-L \u002F ViT-H) and **MobileSAM**\n  - **SAM 2** and **SAM 2.1** (Hiera-Tiny \u002F Small \u002F Base+ \u002F Large)\n  - **SAM 3** (ViT-H) — open-vocabulary segmentation with text prompts\n- [x] Text detection, recognition and KIE (Key Information Extraction) labeling.\n- [x] Multiple languages available: English, Vietnamese, Chinese.\n\n### Supported Models\n\n| Model | Prompt Types | Notes |\n|-------|-------------|-------|\n| SAM ViT-B \u002F ViT-L \u002F ViT-H | Point, Rectangle | Original Segment Anything |\n| MobileSAM | Point, Rectangle | Lightweight SAM |\n| SAM 2 Hiera-Tiny \u002F Small \u002F Base+ \u002F Large | Point, Rectangle | Meta SAM 2 |\n| SAM 2.1 Hiera-Tiny \u002F Small \u002F Base+ \u002F Large | Point, Rectangle | Improved SAM 2 
|\n| SAM 3 ViT-H | **Text**, Point, Rectangle | Open-vocabulary; text drives detection |\n| YOLOv8n \u002F s \u002F m \u002F l \u002F x | — | Object detection & auto-labeling |\n\nAll models are downloaded automatically on first use from Hugging Face.\n\n## Install and Run\n\n### 1. Download and run executable\n\n- Download and run the newest version from [Releases](https:\u002F\u002Fgithub.com\u002Fvietanhdev\u002Fanylabeling\u002Freleases).\n- For macOS:\n  - Download the folder mode build (`AnyLabeling-Folder.zip`) from [Releases](https:\u002F\u002Fgithub.com\u002Fvietanhdev\u002Fanylabeling\u002Freleases)\n  - See [macOS folder mode instructions](docs\u002Fmacos_folder_mode.md) for details\n\n### 2. Install from PyPI\n\n- Requirements: Python 3.10+. Recommended: Python 3.12.\n- Recommended: [Miniconda\u002FAnaconda](https:\u002F\u002Fdocs.conda.io\u002Fen\u002Flatest\u002Fminiconda.html).\n\n- Create environment:\n\n```bash\nconda create -n anylabeling python=3.12\nconda activate anylabeling\n```\n\n- **(For macOS only)** Install PyQt6 using Conda:\n\n```bash\nconda install -c conda-forge pyqt=6\n```\n\n- Install anylabeling:\n\n```bash\npip install anylabeling # or pip install anylabeling-gpu for GPU support\n```\n\n- Start labeling:\n\n```bash\nanylabeling\n```\n\n## Documentation\n\n**Website:** [https:\u002F\u002Fanylabeling.nrl.ai](https:\u002F\u002Fanylabeling.nrl.ai)\n\n### Applications\n\n| **Object Detection** | **Recognition** | **Facial Landmark Detection** | **2D Pose Estimation** |\n| :---: | :---: | :---: | :---: |\n| \u003Cimg src='https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fvietanhdev_anylabeling_readme_f2adb64b78e0.png' height=\"126px\" width=\"180px\"> |  \u003Cimg src='https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fvietanhdev_anylabeling_readme_e8fb567e79ce.png' height=\"126px\" width=\"180px\"> |  \u003Cimg src='https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fvietanhdev_anylabeling_readme_d9974b6bb4d7.jpg' 
height=\"126px\" width=\"180px\"> | \u003Cimg src='https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fvietanhdev_anylabeling_readme_eeadca3183b0.gif' height=\"126px\" width=\"180px\"> |\n|  **2D Lane Detection** | **OCR** | **Medical Imaging** | **Instance Segmentation** |\n| \u003Cimg src='https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fvietanhdev_anylabeling_readme_2c4a87386ee1.jpg' height=\"126px\" width=\"180px\"> | \u003Cimg src='https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fvietanhdev_anylabeling_readme_a4ca7d6c21b3.png' height=\"126px\" width=\"180px\"> | \u003Cimg src='https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fvietanhdev_anylabeling_readme_0f5ee278977d.png' height=\"126px\" width=\"180px\"> | \u003Cimg src='https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fvietanhdev_anylabeling_readme_a57a18e802e4.jpg' height=\"126px\" width=\"180px\"> |\n|  **Image Tagging** | **Rotation** | **And more!** |\n| \u003Cimg src='https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fvietanhdev_anylabeling_readme_66106f2f3dbd.png' height=\"126px\" width=\"180px\"> | \u003Cimg src='https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fvietanhdev_anylabeling_readme_7dbcec3f0c70.png' height=\"126px\" width=\"180px\"> | Your applications here! 
|\n## Development\n\n- Install packages:\n\n```bash\npip install -r requirements-dev.txt\n# or pip install -r requirements-macos-dev.txt for MacOS\n```\n\n- Generate resources:\n\n```bash\npyrcc5 -o anylabeling\u002Fresources\u002Fresources.py anylabeling\u002Fresources\u002Fresources.qrc\n```\n\n- Run app:\n\n```bash\npython anylabeling\u002Fapp.py\n```\n\n## Build executable\n\n- Install PyInstaller:\n\n```bash\npip install -r requirements-dev.txt\n```\n\n- Build:\n\n```bash\nbash build_executable.sh\n```\n\n- Check the outputs in: `dist\u002F`.\n\n## Contribution\n\nIf you want to contribute to **AnyLabeling**, please read [Contribution Guidelines](https:\u002F\u002Fanylabeling.nrl.ai\u002Fdocs\u002Fcontribution).\n\n## Star history\n\n[![Star History Chart](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fvietanhdev_anylabeling_readme_2036fef59504.png)](https:\u002F\u002Fstar-history.com\u002F#vietanhdev\u002Fanylabeling&Date)\n\n## References\n\n- Labeling UI built with ideas and components from [LabelImg](https:\u002F\u002Fgithub.com\u002Fheartexlabs\u002FlabelImg), [LabelMe](https:\u002F\u002Fgithub.com\u002Fwkentaro\u002Flabelme).\n- Auto-labeling with [Segment Anything](https:\u002F\u002Fsegment-anything.com\u002F) (SAM, SAM 2, SAM 2.1, SAM 3), [MobileSAM](https:\u002F\u002Fgithub.com\u002FChaoningZhang\u002FMobileSAM).\n- Auto-labeling with [YOLOv8](https:\u002F\u002Fgithub.com\u002Fultralytics\u002Fultralytics).\n- Icons from FlatIcon: [DinosoftLabs](https:\u002F\u002Fwww.flaticon.com\u002Ffree-icons\u002Fsun \"sun icons\"), [Freepik](https:\u002F\u002Fwww.flaticon.com\u002Ffree-icons\u002Fmoon \"moon icons\"), [Vectoricons](https:\u002F\u002Fwww.flaticon.com\u002Ffree-icons\u002Fsystem \"system icons\"), [HideMaru](https:\u002F\u002Fwww.flaticon.com\u002Ffree-icons\u002Fungroup \"ungroup icons\").\n","\u003Cp align=\"center\">\n  \u003Cimg alt=\"AnyLabeling\" style=\"width: 128px; height: 128px; height: auto;\" 
src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fvietanhdev_anylabeling_readme_d99ac3e4a329.png\"\u002F>\n  \u003Ch1 align=\"center\">🌟 AnyLabeling 🌟\u003C\u002Fh1>\n  \u003Cp align=\"center\">借助 \u003Cb>YOLO\u003C\u002Fb> 和 \u003Cb>Segment Anything\u003C\u002Fb> 的 AI 支持，让数据标注变得轻而易举！\u003C\u002Fp>\n  \u003Cp align=\"center\">\u003Cb>AnyLabeling = LabelImg + Labelme + 改进的 UI + 自动标注\u003C\u002Fb>\u003C\u002Fp>\n\u003C\u002Fp>\n\n![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fvietanhdev_anylabeling_readme_339a3f56450b.png)\n\n[![PyPI](https:\u002F\u002Fimg.shields.io\u002Fpypi\u002Fv\u002Fanylabeling)](https:\u002F\u002Fpypi.org\u002Fproject\u002Fanylabeling)\n[![license](https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Flicense\u002Fvietanhdev\u002Fanylabeling.svg)](https:\u002F\u002Fgithub.com\u002Fvietanhdev\u002Fanylabeling\u002Fblob\u002Fmaster\u002FLICENSE)\n[![open issues](https:\u002F\u002Fisitmaintained.com\u002Fbadge\u002Fopen\u002Fvietanhdev\u002Fanylabeling.svg)](https:\u002F\u002Fgithub.com\u002Fvietanhdev\u002Fanylabeling\u002Fissues)\n[![Pypi Downloads](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fvietanhdev_anylabeling_readme_7d52cffd384e.png)](https:\u002F\u002Fpypi.org\u002Fproject\u002Fanylabeling\u002F)\n[![Documentation](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FRead-Documentation-green)](https:\u002F\u002Fanylabeling.nrl.ai\u002F)\n[![Follow](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F+Follow-vietanhdev-blue)](https:\u002F\u002Ftwitter.com\u002Fvietanhdev)\n\n[![AnyLearning-Banner](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fvietanhdev_anylabeling_readme_2d24ef055c2d.png)](https:\u002F\u002Fanylearning.nrl.ai\u002F)\n\n[![ai-flow 62b3c222](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fvietanhdev_anylabeling_readme_7aa01c3f5deb.png)](https:\u002F\u002Fanylearning.nrl.ai\u002F)\n\n\n\u003Ca 
href=\"https:\u002F\u002Fyoutu.be\u002F5qVJiYNX5Kk\">\n  \u003Cimg alt=\"AnyLabeling\" src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fvietanhdev_anylabeling_readme_be08dd2a82ab.png\"\u002F>\n\u003C\u002Fa>\n\n**使用 Segment Anything 进行自动标注**\n\n\u003Ca href=\"https:\u002F\u002Fyoutu.be\u002F5qVJiYNX5Kk\">\n  \u003Cimg style=\"width: 800px; margin-left: auto; margin-right: auto; display: block;\" alt=\"AnyLabeling-SegmentAnything\" src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fvietanhdev_anylabeling_readme_8d388e9cea09.gif\"\u002F>\n\u003C\u002Fa>\n\n\n- **YouTube 演示：** [https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=5qVJiYNX5Kk](https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=5qVJiYNX5Kk)\n- **文档：** [https:\u002F\u002Fanylabeling.nrl.ai](https:\u002F\u002Fanylabeling.nrl.ai)\n\n**功能特性：**\n\n- [x] 支持多边形、矩形、圆形、线条和点等多种图像标注方式。\n- [x] 使用 **YOLOv8**（目标检测模型）进行自动标注。\n- [x] 支持 **Segment Anything** 系列模型进行自动标注：\n  - **SAM**（ViT-B \u002F ViT-L \u002F ViT-H）和 **MobileSAM**（ViT 指 Vision Transformer，视觉 Transformer）\n  - **SAM 2** 和 **SAM 2.1**（Hiera-Tiny \u002F Small \u002F Base+ \u002F Large，Hiera 为分层架构）\n  - **SAM 3**（ViT-H）— 支持文本提示的开放词汇分割\n- [x] 文本检测、识别和 KIE（Key Information Extraction，关键信息提取）标注。\n- [x] 支持多种语言：英语、越南语、中文。\n\n### 支持的模型\n\n| 模型 | 提示类型 | 说明 |\n|-------|-------------|-------|\n| SAM ViT-B \u002F ViT-L \u002F ViT-H | 点、矩形 | 原始 Segment Anything 模型 |\n| MobileSAM | 点、矩形 | 轻量级 SAM 模型 |\n| SAM 2 Hiera-Tiny \u002F Small \u002F Base+ \u002F Large | 点、矩形 | Meta 的 SAM 2 模型 |\n| SAM 2.1 Hiera-Tiny \u002F Small \u002F Base+ \u002F Large | 点、矩形 | 改进版 SAM 2 模型 |\n| SAM 3 ViT-H | **文本**、点、矩形 | 开放词汇；文本驱动检测 |\n| YOLOv8n \u002F s \u002F m \u002F l \u002F x | — | 目标检测与自动标注 |\n\n所有模型均在首次使用时自动从 Hugging Face 下载。\n\n## 安装与运行\n\n### 1. 
下载并运行可执行文件\n\n- 从 [Releases](https:\u002F\u002Fgithub.com\u002Fvietanhdev\u002Fanylabeling\u002Freleases) 下载并运行最新版本。\n- 对于 macOS：\n  - 从 [Releases](https:\u002F\u002Fgithub.com\u002Fvietanhdev\u002Fanylabeling\u002Freleases) 下载文件夹模式构建（`AnyLabeling-Folder.zip`）\n  - 详见 [macOS 文件夹模式说明](docs\u002Fmacos_folder_mode.md)\n\n### 2. 通过 PyPI 安装\n\n- 要求：Python 3.10+，推荐使用 Python 3.12。\n- 推荐：[Miniconda\u002FAnaconda](https:\u002F\u002Fdocs.conda.io\u002Fen\u002Flatest\u002Fminiconda.html)。\n\n- 创建环境：\n\n```bash\nconda create -n anylabeling python=3.12\nconda activate anylabeling\n```\n\n- **（仅 macOS）** 使用 Conda 安装 PyQt6：\n\n```bash\nconda install -c conda-forge pyqt=6\n```\n\n- 安装 anylabeling：\n\n```bash\npip install anylabeling # 或 pip install anylabeling-gpu 以启用 GPU 支持\n```\n\n- 开始标注：\n\n```bash\nanylabeling\n```\n\n## 文档\n\n**网站：** [https:\u002F\u002Fanylabeling.nrl.ai](https:\u002F\u002Fanylabeling.nrl.ai)\n\n### 应用场景\n\n| **目标检测** | **识别** | **人脸关键点检测** | **2D 姿态估计** |\n| :---: | :---: | :---: | :---: |\n| \u003Cimg src='https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fvietanhdev_anylabeling_readme_f2adb64b78e0.png' height=\"126px\" width=\"180px\"> |  \u003Cimg src='https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fvietanhdev_anylabeling_readme_e8fb567e79ce.png' height=\"126px\" width=\"180px\"> |  \u003Cimg src='https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fvietanhdev_anylabeling_readme_d9974b6bb4d7.jpg' height=\"126px\" width=\"180px\"> | \u003Cimg src='https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fvietanhdev_anylabeling_readme_eeadca3183b0.gif' height=\"126px\" width=\"180px\"> |\n|  **2D 车道线检测** | **OCR** | **医学影像** | **实例分割** |\n| \u003Cimg src='https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fvietanhdev_anylabeling_readme_2c4a87386ee1.jpg' height=\"126px\" width=\"180px\"> | \u003Cimg src='https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fvietanhdev_anylabeling_readme_a4ca7d6c21b3.png' height=\"126px\" width=\"180px\"> | 
\u003Cimg src='https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fvietanhdev_anylabeling_readme_0f5ee278977d.png' height=\"126px\" width=\"180px\"> | \u003Cimg src='https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fvietanhdev_anylabeling_readme_a57a18e802e4.jpg' height=\"126px\" width=\"180px\"> |\n|  **图像标签** | **旋转** | **更多应用！** |\n| \u003Cimg src='https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fvietanhdev_anylabeling_readme_66106f2f3dbd.png' height=\"126px\" width=\"180px\"> | \u003Cimg src='https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fvietanhdev_anylabeling_readme_7dbcec3f0c70.png' height=\"126px\" width=\"180px\"> | 您的应用可以在这里展示！ |\n## 开发\n\n- 安装依赖包：\n\n```bash\npip install -r requirements-dev.txt\n# macOS 用户请使用 pip install -r requirements-macos-dev.txt\n```\n\n- 生成资源：\n\n```bash\npyrcc5 -o anylabeling\u002Fresources\u002Fresources.py anylabeling\u002Fresources\u002Fresources.qrc\n```\n\n- 运行应用：\n\n```bash\npython anylabeling\u002Fapp.py\n```\n\n## 构建可执行文件\n\n- 安装 PyInstaller：\n\n```bash\npip install -r requirements-dev.txt\n```\n\n- 构建：\n\n```bash\nbash build_executable.sh\n```\n\n- 在 `dist\u002F` 目录查看输出。\n\n## 贡献\n\n如果您想为 **AnyLabeling** 做出贡献，请阅读[贡献指南](https:\u002F\u002Fanylabeling.nrl.ai\u002Fdocs\u002Fcontribution)。\n\n## Star 历史\n\n[![Star History Chart](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fvietanhdev_anylabeling_readme_2036fef59504.png)](https:\u002F\u002Fstar-history.com\u002F#vietanhdev\u002Fanylabeling&Date)\n\n## 参考文献\n\n- 标注界面构建时借鉴了 [LabelImg](https:\u002F\u002Fgithub.com\u002Fheartexlabs\u002FlabelImg)、[LabelMe](https:\u002F\u002Fgithub.com\u002Fwkentaro\u002Flabelme) 的想法和组件。\n- 使用 [Segment Anything](https:\u002F\u002Fsegment-anything.com\u002F)（SAM、SAM 2、SAM 2.1、SAM 3，一种图像分割模型）和 [MobileSAM](https:\u002F\u002Fgithub.com\u002FChaoningZhang\u002FMobileSAM) 进行自动标注。\n- 使用 [YOLOv8](https:\u002F\u002Fgithub.com\u002Fultralytics\u002Fultralytics)（一种目标检测模型）进行自动标注。\n- 图标来自 
FlatIcon：[DinosoftLabs](https:\u002F\u002Fwww.flaticon.com\u002Ffree-icons\u002Fsun \"sun icons\")、[Freepik](https:\u002F\u002Fwww.flaticon.com\u002Ffree-icons\u002Fmoon \"moon icons\")、[Vectoricons](https:\u002F\u002Fwww.flaticon.com\u002Ffree-icons\u002Fsystem \"system icons\")、[HideMaru](https:\u002F\u002Fwww.flaticon.com\u002Ffree-icons\u002Fungroup \"ungroup icons\")。","# AnyLabeling 快速上手指南\n\nAnyLabeling 是一款集成了 YOLO 和 Segment Anything 的 AI 辅助数据标注工具，支持自动标注和多种标注形状。\n\n## 环境准备\n\n- **操作系统**：Windows、Linux 或 macOS\n- **Python 版本**：3.10 或更高版本（推荐 3.12）\n- **推荐环境**：Miniconda 或 Anaconda\n\n## 安装步骤\n\n### 方式一：通过 PyPI 安装（推荐）\n\n1. 创建并激活 Conda 环境：\n\n```bash\nconda create -n anylabeling python=3.12\nconda activate anylabeling\n```\n\n2. **macOS 用户**需额外安装 PyQt6：\n\n```bash\nconda install -c conda-forge pyqt=6\n```\n\n3. 安装 AnyLabeling：\n\n```bash\n# CPU 版本\npip install anylabeling\n\n# 或 GPU 版本（推荐 NVIDIA 显卡用户）\npip install anylabeling-gpu\n```\n\n### 方式二：使用预编译可执行文件\n\n从 [GitHub Releases](https:\u002F\u002Fgithub.com\u002Fvietanhdev\u002Fanylabeling\u002Freleases) 下载对应系统的最新版本直接运行。\n\n## 基本使用\n\n1. 启动应用：\n\n```bash\nanylabeling\n```\n\n2. 点击\"打开目录\"或\"打开文件\"加载需要标注的图片\n\n3. 选择标注工具（矩形、多边形等）进行手动标注，或：\n   - 选择\"AI 标注\"菜单中的模型（如 YOLOv8、SAM 系列）\n   - 模型首次使用时会自动从 Hugging Face 下载\n   - 使用自动标注功能快速生成标注\n\n4. 
标注完成后，通过\"文件\"菜单保存为 YOLO 或 VOC 格式\n\n**提示**：支持中文界面，可在设置中切换语言。","某中型电商平台的算法工程师小李，正带领5人团队对10万张商品图片进行标注，用于训练一个能识别200多个品类的商品检测模型，项目周期仅有一个月。\n\n### 没有 anylabeling 时\n- **标注效率极低**：团队每天手动框选商品、调整多边形，人均只能完成200张图片，按此速度项目至少需要3个月\n- **复杂边缘处理困难**：服装、毛绒玩具等不规则商品用传统矩形框会包含大量背景，手动描边又耗时5-8分钟\u002F个\n- **标注质量参差不齐**：不同成员对商品边界的理解不一，有的框得紧有的框得松，导致模型训练时噪声数据过多\n- **新人上手成本高**：新员工需要学习LabelImg和Labelme两个工具，熟悉标注规范至少需3天，初期错误率超过30%\n- **项目进度严重滞后**：第一周仅完成8%的工作量，老板已经开始考虑外包，但预算又超支\n\n### 使用 anylabeling 后\n- **效率提升8倍**：启用YOLOv8自动检测后，80%的商品被直接预标注，团队人均日处理量飙升至1600张，项目3周内完成\n- **一键精准分割**：点击SAM 2.1模型，不规则商品边缘瞬间生成完美多边形，单张标注时间从5分钟缩短到20秒\n- **标准统一可控**：AI预标注遵循统一标准，人工只需审核修正，数据一致性从70%提升到95%，模型mAP直接提高5个点\n- **半天即可上手**：anylabeling集成化界面比传统工具更直观，新员工跟着视频学2小时就能独立作业，首周错误率低于10%\n- **预算和周期双达标**：项目提前一周交付，节省外包费用15万元，模型上线后商品识别准确率从82%提升到91%\n\nanylabeling让小李的团队用AI完成了AI的数据准备工作，将原本不可能完成的任务变成了可快速交付的标准化流程。","https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fvietanhdev_anylabeling_2d24ef05.png","vietanhdev","Viet-Anh NGUYEN","https:\u002F\u002Foss.gittoolsai.com\u002Favatars\u002Fvietanhdev_ef209189.jpg","ML Engineer | Maker | Creator of AnyLabeling and DaisyKit | 8+ YoE Open Source Developer","Scopic","Vietnam",null,"https:\u002F\u002Fwww.vietanh.dev","https:\u002F\u002Fgithub.com\u002Fvietanhdev",[85,89],{"name":86,"color":87,"percentage":88},"Python","#3572A5",99.9,{"name":90,"color":91,"percentage":92},"Shell","#89e051",0.1,3283,337,"2026-04-04T14:03:51","GPL-3.0","Linux, macOS, Windows","可选，未说明","未说明",{"notes":101,"python":102,"dependencies":103},"1. 建议使用 Conda 管理 Python 环境；2. macOS 需通过 Conda 安装 PyQt6；3. 首次运行自动从 Hugging Face 下载模型；4. GPU 支持可选，需安装 anylabeling-gpu 版本；5. 
开发时需运行 pyrcc5 生成资源文件","3.10+（推荐 3.12）",[104,105,106,107,108,109],"PyQt6 (macOS)","onnxruntime","opencv-python","Pillow","numpy","huggingface_hub",[13,51,14],[112,113,114,115,116,117,118,119,120,121,122],"labeling","labeling-tool","segment-anything","yolov8","auto-labeling","computer-vision","onnx","mobilesam","sam2","segment-anything-2","yolo","2026-03-27T02:49:30.150509","2026-04-06T08:09:09.649708",[126,131,136,141,146,151,156],{"id":127,"question_zh":128,"answer_zh":129,"source_url":130},4445,"启动时出现 Qt xcb 插件加载错误如何解决？","此错误通常是由于缺少系统库或 OpenCV 与 Qt 冲突导致。请尝试以下解决方案：\n\n1. 安装缺失的系统库（适用于 Linux）：\n   ```bash\n   sudo apt install libxcb-xinerama0\n   ```\n\n2. 如果问题仍然存在，可能是 OpenCV 的 Qt 插件与系统冲突。建议安装 OpenCV 的 headless 版本：\n   ```bash\n   pip install opencv-python-headless\n   # 或\n   pip install opencv-contrib-python-headless\n   ```\n\n安装完成后重新启动 AnyLabeling。","https:\u002F\u002Fgithub.com\u002Fvietanhdev\u002Fanylabeling\u002Fissues\u002F108",{"id":132,"question_zh":133,"answer_zh":134,"source_url":135},4446,"程序在下载模型时崩溃怎么办？","从 AnyLabeling v0.2.22 开始，建议手动下载模型并使用自定义模型功能：\n\n1. 手动下载模型：\n   - 访问 https:\u002F\u002Fgithub.com\u002Fvietanhdev\u002Fanylabeling-assets\u002Freleases\u002Ftag\u002Fv0.4.0\n   - 下载所需模型并解压\n\n2. 使用自定义模型功能加载：\n   - 参考官方文档：https:\u002F\u002Fanylabeling.com\u002Fdocs\u002Fcustom-models\n\n3. 网络问题排查：\n   - 如果使用代理，请尝试切换到全局模式\n   - 确保网络连接稳定\n\n这种方法可以避免程序自动下载时可能出现的崩溃问题。","https:\u002F\u002Fgithub.com\u002Fvietanhdev\u002Fanylabeling\u002Fissues\u002F52",{"id":137,"question_zh":138,"answer_zh":139,"source_url":140},4447,"如何使用 GPU 运行 SAM 模型？","要使用 GPU 加速 SAM 模型，需要满足以下条件：\n\n1. 确保系统已安装 CUDA 环境：\n   - 检查 CUDA 是否正确安装并配置\n   - 验证显卡驱动版本兼容性\n\n2. 查看官方 GPU 支持文档：\n   - 详细配置指南：https:\u002F\u002Fanylabeling.com\u002Fdocs\u002Fgpu\n\n3. 
安装支持 CUDA 的 ONNX Runtime\n\n注意：GPU 加速需要 NVIDIA 显卡和完整的 CUDA 工具链支持。如果未正确配置 CUDA，程序会自动回退到 CPU 模式运行。","https:\u002F\u002Fgithub.com\u002Fvietanhdev\u002Fanylabeling\u002Fissues\u002F57",{"id":142,"question_zh":143,"answer_zh":144,"source_url":145},4448,"M1\u002FM2 Mac 上无法运行 AnyLabeling 怎么办？","M1\u002FM2 Mac 用户可能会遇到兼容性问题。根据官方反馈：\n\n1. 该问题将在后续版本中解决\n\n2. 临时启动方式：\n   - 使用 `anylabeling&` 命令后台启动（启动较快）\n   - 直接使用 `anylabeling` 命令启动（可能较慢）\n\n3. 系统要求：\n   - macOS 14.0 或更高版本\n   - Apple Silicon 芯片（M1\u002FM2）\n\n建议关注官方发布页面获取支持 Apple Silicon 的最新版本。","https:\u002F\u002Fgithub.com\u002Fvietanhdev\u002Fanylabeling\u002Fissues\u002F149",{"id":147,"question_zh":148,"answer_zh":149,"source_url":150},4449,"升级到 v0.3.2 后分割结果异常怎么办？","v0.3.2 版本存在分割掩码生成错误的已知问题，表现为无论使用 mobileSAM 还是 ViT-b 模型，都会输出几乎全图的掩码。\n\n**解决方案**：\n立即升级到 v0.3.3 或更高版本：\n- 下载地址：https:\u002F\u002Fgithub.com\u002Fvietanhdev\u002Fanylabeling\u002Freleases\u002Ftag\u002Fv0.3.3\n\n该版本修复了掩码值转换的问题。如果升级后问题仍然存在，请向开发者提供以下信息：\n- 操作系统版本\n- 测试图像样本\n- 具体的操作步骤和与旧版本（0.2.x）的差异对比","https:\u002F\u002Fgithub.com\u002Fvietanhdev\u002Fanylabeling\u002Fissues\u002F119",{"id":152,"question_zh":153,"answer_zh":154,"source_url":155},4450,"如何导入已有的二进制掩码标注进行编辑？","AnyLabeling 目前不直接支持导入二进制掩码文件进行编辑，主要原因是从二进制掩码转换为 polygon 格式是一个有损过程。\n\n**当前可行的方法**：\n1. 将二进制掩码转换为 COCO 格式的 polygon 标注 JSON 文件\n2. 使用\"Create polygon using SAM\"功能，通过像素选项手动标注\n\n**注意事项**：\n- 直接导入二进制掩码功能尚未支持\n- 转换过程可能导致精度损失\n- 对于 SA-1B 等大型数据集，建议使用支持 RLE 格式的专业工具\n\n建议关注后续版本是否增加对掩码导入的支持。","https:\u002F\u002Fgithub.com\u002Fvietanhdev\u002Fanylabeling\u002Fissues\u002F83",{"id":157,"question_zh":158,"answer_zh":159,"source_url":160},4451,"CPU 环境下运行出现 CUDA 警告是否正常？","在 CPU 环境下运行时出现以下警告是正常的：\n```\n[ WARN:0@14.885] global net_impl.cpp:174 setUpNet DNN module was not built with CUDA backend; switching to CPU\n```\n\n这表示程序检测到未编译 CUDA 支持，自动切换到 CPU 模式运行，**不会影响功能使用**。\n\n**建议**：\n1. 确保使用最新版本（v0.2.21 或更高）\n2. 如果程序运行正常，可以忽略此警告\n3. 
如需使用 GPU 加速，请参考 GPU 配置文档\n\n如果程序因此崩溃或无法使用，请检查安装环境或重新安装。","https:\u002F\u002Fgithub.com\u002Fvietanhdev\u002Fanylabeling\u002Fissues\u002F62",[162,167,172,177,182,187,192,197,202,207,212,217,222,227,232,237,242,247,252,257],{"id":163,"version":164,"summary_zh":165,"released_at":166},113590,"v0.4.35","## What's Changed\r\n\r\n![](https:\u002F\u002Fgithub.com\u002Fuser-attachments\u002Fassets\u002F460aaa6c-303f-4ae7-9e6e-b205573c58ec)\r\n\r\n- Switched to PyQt6.\r\n- Added support for SAM2.1, SAM3 (with text prompt). CoreML SAM 2.1 was added by @paulbauriegel.\r\n- Fixed some bugs and improved UX by @baiyongrui, @matrss, @danilobirbiglia.\r\n\r\n**Note:** We don't have all machine environments for testing. Please test them and open a PR if you find any issues. Thank you very much!\r\n\r\nIf you find this project useful, please consider [sponsoring](https:\u002F\u002Fko-fi.com\u002Fvietanhdev) its development.","2026-02-22T04:17:49",{"id":168,"version":169,"summary_zh":170,"released_at":171},113591,"v0.4.29","## What's Changed\r\n\r\n![](https:\u002F\u002Fgithub.com\u002Fuser-attachments\u002Fassets\u002F460aaa6c-303f-4ae7-9e6e-b205573c58ec)\r\n\r\n- Added new GitHub workflow for binary builds. Now you can download pre-built binaries for **AnyLabeling**!\r\n\r\n**Note:** We don't have all machine environments for testing. Please test them and open a PR if you find any issues. 
Thank you very much!\r\n\r\nIf you find this project useful, please consider [sponsoring](https:\u002F\u002Fko-fi.com\u002Fvietanhdev) its development.","2025-05-04T09:02:44",{"id":173,"version":174,"summary_zh":175,"released_at":176},113592,"v0.4.25","## What's Changed\r\n\r\n- Dark\u002FLight theme:\r\n\r\n![](https:\u002F\u002Fgithub.com\u002Fuser-attachments\u002Fassets\u002F21e989d0-aedb-4e63-b88b-a84ea785f9c2)\r\n\r\n![](https:\u002F\u002Fgithub.com\u002Fuser-attachments\u002Fassets\u002F1b455e09-67cf-429b-bbfe-33ad66d0c7cb)\r\n\r\n- Dock-style toolbars - Freely arrange your labeling workspace:\r\n\r\n![](https:\u002F\u002Fgithub.com\u002Fuser-attachments\u002Fassets\u002Fd302c789-8b9d-46e3-9be5-1d6025be1d63)\r\n\r\n- Fixed inconsistent colors between Objects and Labels lists when using auto-labeling feature.\r\n\r\nIf you find this project useful, please consider [sponsoring](https:\u002F\u002Fko-fi.com\u002Fvietanhdev) its development.","2025-05-03T19:00:54",{"id":178,"version":179,"summary_zh":180,"released_at":181},113593,"v0.4.16","## What's Changed\r\n\r\nExport data from AnyLearning to different formats:\r\n- YOLO\r\n- COCO\r\n- Pascal VOC\r\n- CreateML\r\n\r\n**Screenshot:**\r\n\r\n![](https:\u002F\u002Fgithub.com\u002Fuser-attachments\u002Fassets\u002Fd92fe133-90a3-41c2-83f3-df06ecc29b62)\r\n\r\nIf you find this project useful, please consider [sponsoring](https:\u002F\u002Fko-fi.com\u002Fvietanhdev) its development.","2025-03-14T15:15:10",{"id":183,"version":184,"summary_zh":185,"released_at":186},113594,"v0.4.8","## What's Changed\r\n\r\n- Add Segment Anything 2 support.\r\n- Remove YOLOv5.\r\n- Fix crash when removing points.\r\n   - PR: #163\r\n- Canvas performance improvements.\r\n   - PR: #165\r\n- Fix for loading model configs.\r\n   - PR: #166","2024-08-03T07:49:32",{"id":188,"version":189,"summary_zh":190,"released_at":191},113595,"v0.3.3","## What's Changed\r\n\r\n- Fixed some issues when inferencing SAM.\r\n- Adjust color for a better look 
in Dark mode.\r\n\r\nIf you find this project useful, please consider [sponsoring](https:\u002F\u002Fko-fi.com\u002Fvietanhdev) its development.","2023-07-02T04:01:54",{"id":193,"version":194,"summary_zh":195,"released_at":196},113596,"v0.3.2","## What's Changed\r\n\r\n- Support models from [SAM Exporter](https:\u002F\u002Fgithub.com\u002Fvietanhdev\u002Fsamexporter).\r\n- Use zip format for model downloading.\r\n- Support [MobileSAM](https:\u002F\u002Fgithub.com\u002FChaoningZhang\u002FMobileSAM) for a super-fast auto-labeling experience.\r\n- Fix wrong model URL for SAM ViT-L.\r\n\r\n\u003Cimg width=\"771\" alt=\"Screenshot 2023-06-29 at 12 54 30 AM\" src=\"https:\u002F\u002Fgithub.com\u002Fvietanhdev\u002Fanylabeling\u002Fassets\u002F18329471\u002Fe872d598-5e8d-4008-9bea-c7eefecca5b9\">\r\n\r\n\r\n\r\nIf you find this project useful, please consider [sponsoring](https:\u002F\u002Fko-fi.com\u002Fvietanhdev) its development.","2023-06-29T01:17:59",{"id":198,"version":199,"summary_zh":200,"released_at":201},113597,"v0.2.24","## What's Changed\r\n\r\n- Fix building the .dmg for macOS.\r\n\r\nIf you find this project useful, please consider [sponsoring](https:\u002F\u002Fko-fi.com\u002Fvietanhdev) its development.","2023-05-07T15:52:39",{"id":203,"version":204,"summary_zh":205,"released_at":206},113598,"v0.2.23","## What's Changed\r\n\r\n- Handle some issues with model downloading:\r\n  + Wrong download status.\r\n  + Timeout when downloading models.\r\n\r\nIf you find this project useful, please consider [sponsoring](https:\u002F\u002Fko-fi.com\u002Fvietanhdev) its development.","2023-05-07T15:28:20",{"id":208,"version":209,"summary_zh":210,"released_at":211},113599,"v0.2.22","## What's Changed\r\n\r\n- Add the option to load custom models.\r\n- Please download sample custom models from [AnyLabeling Assets v0.4.0](https:\u002F\u002Fgithub.com\u002Fvietanhdev\u002Fanylabeling-assets\u002Freleases\u002Ftag\u002Fv0.4.0).\r\n\r\n\u003Cimg width=\"623\" alt=\"Screenshot 2023-05-06 at 3 
19 47 PM\" src=\"https:\u002F\u002Fuser-images.githubusercontent.com\u002F18329471\u002F236612527-a62b0413-7179-48be-a17b-0f901eac7be8.png\">\r\n\r\n\r\nIf you find this project useful, please consider [sponsoring](https:\u002F\u002Fko-fi.com\u002Fvietanhdev) its development.","2023-05-06T08:22:49",{"id":213,"version":214,"summary_zh":215,"released_at":216},113600,"v0.2.21","## What's Changed\r\n\r\n- Add **Auto Use Last Label** to speed up labeling (#74). Documentation: https:\u002F\u002Fanylabeling.com\u002Fdocs\u002Fauto-use-last-label.\r\n\r\nIf you find this project useful, please consider [sponsoring](https:\u002F\u002Fko-fi.com\u002Fvietanhdev) its development.","2023-05-05T13:10:44",{"id":218,"version":219,"summary_zh":220,"released_at":221},113601,"v0.2.19","## What's Changed\r\n\r\n- Separated into two versions: with and without GPU support.\r\n\r\n## Installing from PyPI\r\n\r\n- For the CPU version:\r\n```\r\npip install -U anylabeling\r\n```\r\n- For the GPU version:\r\n```\r\npip install -U anylabeling-gpu\r\n```\r\n\r\n## Download binary files\r\n\r\nUse versions with `-GPU` for GPU support.\r\n\r\nIf you find this project useful, please consider [sponsoring](https:\u002F\u002Fko-fi.com\u002Fvietanhdev) its development.\r\n","2023-05-04T17:11:53",{"id":223,"version":224,"summary_zh":225,"released_at":226},113602,"v0.2.16","## What's Changed\r\n\r\n- Fixed random crashing when scrolling. Reported by @yibaoby.\r\n- Ignore SSL verification when downloading models to avoid errors when using a proxy. 
#52 \r\n\r\nIf you find this project useful, please consider [sponsoring](https:\u002F\u002Fko-fi.com\u002Fvietanhdev) its development.","2023-05-01T17:20:36",{"id":228,"version":229,"summary_zh":230,"released_at":231},113603,"v0.2.15","- Restore the app icon.","2023-04-30T19:39:08",{"id":233,"version":234,"summary_zh":235,"released_at":236},113604,"v0.2.14","## What's Changed\r\n\r\n- Don't specify providers for ONNXRuntime; let it decide automatically.\r\n- Optimize image size to a maximum of 64x64.\r\n\r\nIf you find this project useful, please consider [sponsoring](https:\u002F\u002Fko-fi.com\u002Fvietanhdev) its development.","2023-04-30T19:34:21",{"id":238,"version":239,"summary_zh":240,"released_at":241},113605,"v0.2.13","## What's Changed\r\n\r\nSupport multiple languages for all UI features:\r\n- English (default language).\r\n- Vietnamese.\r\n- Chinese (zh_CN). Thanks to @yibaoby for the translation.\r\n\r\nNow you can select the language from the menu.\r\n\r\n\u003Cimg width=\"560\" alt=\"Screenshot 2023-04-29 at 11 10 55 PM\" src=\"https:\u002F\u002Fuser-images.githubusercontent.com\u002F18329471\u002F235337893-5703ada3-81f0-4859-add1-53b55389eb08.png\">\r\n\r\nIf you find this project useful, please consider [sponsoring](https:\u002F\u002Fko-fi.com\u002Fvietanhdev) its development.","2023-04-30T07:10:51",{"id":243,"version":244,"summary_zh":245,"released_at":246},113606,"v0.2.12","## What's Changed\r\n\r\n- Update icons.\r\n- Fixed: could not open a single image file.\r\n\r\nIf you find this project useful, please consider [sponsoring](https:\u002F\u002Fko-fi.com\u002Fvietanhdev) its development.","2023-04-25T17:36:17",{"id":248,"version":249,"summary_zh":250,"released_at":251},113607,"v0.2.11","## What's Changed\r\n\r\n- Fixed #50: Crashing when running auto labeling for gray images.\r\n- Fixed wrong label color for new objects from Segment Anything. 
\r\n\r\nIf you find this project useful, please consider [sponsoring](https:\u002F\u002Fko-fi.com\u002Fvietanhdev) its development.","2023-04-23T16:57:29",{"id":253,"version":254,"summary_zh":255,"released_at":256},113608,"v0.2.10","## What's Changed\r\n\r\n- Add auto release workflow\r\n   - PR: #49 \r\n\r\nIf you find this project useful, please consider [sponsoring](https:\u002F\u002Fko-fi.com\u002Fvietanhdev) its development.","2023-04-23T15:39:11",{"id":258,"version":259,"summary_zh":260,"released_at":261},113609,"v0.2.9","## What's Changed\r\n* 🐞fix: fix `shape.label` is none caused by label_dialog return empty s… by @liaozihang in https:\u002F\u002Fgithub.com\u002Fvietanhdev\u002Fanylabeling\u002Fpull\u002F44\r\n\r\n## New Contributors\r\n* @liaozihang made their first contribution in https:\u002F\u002Fgithub.com\u002Fvietanhdev\u002Fanylabeling\u002Fpull\u002F44\r\n* @qqqhhh-any made their first contribution in https:\u002F\u002Fgithub.com\u002Fvietanhdev\u002Fanylabeling\u002Fpull\u002F47\r\n\r\n**Full Changelog**: https:\u002F\u002Fgithub.com\u002Fvietanhdev\u002Fanylabeling\u002Fcompare\u002Fv0.2.8...v0.2.9","2023-04-23T09:30:58"]