[{"data":1,"prerenderedAt":-1},["ShallowReactive",2],{"similar-Project-MONAI--MONAILabel":3,"tool-Project-MONAI--MONAILabel":61},[4,18,26,36,44,53],{"id":5,"name":6,"github_repo":7,"description_zh":8,"stars":9,"difficulty_score":10,"last_commit_at":11,"category_tags":12,"status":17},4358,"openclaw","openclaw\u002Fopenclaw","OpenClaw 是一款专为个人打造的本地化 AI 助手，旨在让你在自己的设备上拥有完全可控的智能伙伴。它打破了传统 AI 助手局限于特定网页或应用的束缚，能够直接接入你日常使用的各类通讯渠道，包括微信、WhatsApp、Telegram、Discord、iMessage 等数十种平台。无论你在哪个聊天软件中发送消息，OpenClaw 都能即时响应，甚至支持在 macOS、iOS 和 Android 设备上进行语音交互，并提供实时的画布渲染功能供你操控。\n\n这款工具主要解决了用户对数据隐私、响应速度以及“始终在线”体验的需求。通过将 AI 部署在本地，用户无需依赖云端服务即可享受快速、私密的智能辅助，真正实现了“你的数据，你做主”。其独特的技术亮点在于强大的网关架构，将控制平面与核心助手分离，确保跨平台通信的流畅性与扩展性。\n\nOpenClaw 非常适合希望构建个性化工作流的技术爱好者、开发者，以及注重隐私保护且不愿被单一生态绑定的普通用户。只要具备基础的终端操作能力（支持 macOS、Linux 及 Windows WSL2），即可通过简单的命令行引导完成部署。如果你渴望拥有一个懂你",349277,3,"2026-04-06T06:32:30",[13,14,15,16],"Agent","开发框架","图像","数据工具","ready",{"id":19,"name":20,"github_repo":21,"description_zh":22,"stars":23,"difficulty_score":10,"last_commit_at":24,"category_tags":25,"status":17},3808,"stable-diffusion-webui","AUTOMATIC1111\u002Fstable-diffusion-webui","stable-diffusion-webui 是一个基于 Gradio 构建的网页版操作界面，旨在让用户能够轻松地在本地运行和使用强大的 Stable Diffusion 图像生成模型。它解决了原始模型依赖命令行、操作门槛高且功能分散的痛点，将复杂的 AI 绘图流程整合进一个直观易用的图形化平台。\n\n无论是希望快速上手的普通创作者、需要精细控制画面细节的设计师，还是想要深入探索模型潜力的开发者与研究人员，都能从中获益。其核心亮点在于极高的功能丰富度：不仅支持文生图、图生图、局部重绘（Inpainting）和外绘（Outpainting）等基础模式，还独创了注意力机制调整、提示词矩阵、负向提示词以及“高清修复”等高级功能。此外，它内置了 GFPGAN 和 CodeFormer 等人脸修复工具，支持多种神经网络放大算法，并允许用户通过插件系统无限扩展能力。即使是显存有限的设备，stable-diffusion-webui 也提供了相应的优化选项，让高质量的 AI 艺术创作变得触手可及。",162132,"2026-04-05T11:01:52",[14,15,13],{"id":27,"name":28,"github_repo":29,"description_zh":30,"stars":31,"difficulty_score":32,"last_commit_at":33,"category_tags":34,"status":17},1381,"everything-claude-code","affaan-m\u002Feverything-claude-code","everything-claude-code 是一套专为 AI 编程助手（如 Claude Code、Codex、Cursor 等）打造的高性能优化系统。它不仅仅是一组配置文件，而是一个经过长期实战打磨的完整框架，旨在解决 AI 
代理在实际开发中面临的效率低下、记忆丢失、安全隐患及缺乏持续学习能力等核心痛点。\n\n通过引入技能模块化、直觉增强、记忆持久化机制以及内置的安全扫描功能，everything-claude-code 能显著提升 AI 在复杂任务中的表现，帮助开发者构建更稳定、更智能的生产级 AI 代理。其独特的“研究优先”开发理念和针对 Token 消耗的优化策略，使得模型响应更快、成本更低，同时有效防御潜在的攻击向量。\n\n这套工具特别适合软件开发者、AI 研究人员以及希望深度定制 AI 工作流的技术团队使用。无论您是在构建大型代码库，还是需要 AI 协助进行安全审计与自动化测试，everything-claude-code 都能提供强大的底层支持。作为一个曾荣获 Anthropic 黑客大奖的开源项目，它融合了多语言支持与丰富的实战钩子（hooks），让 AI 真正成长为懂上",157379,2,"2026-04-15T23:32:42",[14,13,35],"语言模型",{"id":37,"name":38,"github_repo":39,"description_zh":40,"stars":41,"difficulty_score":32,"last_commit_at":42,"category_tags":43,"status":17},2271,"ComfyUI","Comfy-Org\u002FComfyUI","ComfyUI 是一款功能强大且高度模块化的视觉 AI 引擎，专为设计和执行复杂的 Stable Diffusion 图像生成流程而打造。它摒弃了传统的代码编写模式，采用直观的节点式流程图界面，让用户通过连接不同的功能模块即可构建个性化的生成管线。\n\n这一设计巧妙解决了高级 AI 绘图工作流配置复杂、灵活性不足的痛点。用户无需具备编程背景，也能自由组合模型、调整参数并实时预览效果，轻松实现从基础文生图到多步骤高清修复等各类复杂任务。ComfyUI 拥有极佳的兼容性，不仅支持 Windows、macOS 和 Linux 全平台，还广泛适配 NVIDIA、AMD、Intel 及苹果 Silicon 等多种硬件架构，并率先支持 SDXL、Flux、SD3 等前沿模型。\n\n无论是希望深入探索算法潜力的研究人员和开发者，还是追求极致创作自由度的设计师与资深 AI 绘画爱好者，ComfyUI 都能提供强大的支持。其独特的模块化架构允许社区不断扩展新功能，使其成为当前最灵活、生态最丰富的开源扩散模型工具之一，帮助用户将创意高效转化为现实。",108322,"2026-04-10T11:39:34",[14,15,13],{"id":45,"name":46,"github_repo":47,"description_zh":48,"stars":49,"difficulty_score":32,"last_commit_at":50,"category_tags":51,"status":17},6121,"gemini-cli","google-gemini\u002Fgemini-cli","gemini-cli 是一款由谷歌推出的开源 AI 命令行工具，它将强大的 Gemini 大模型能力直接集成到用户的终端环境中。对于习惯在命令行工作的开发者而言，它提供了一条从输入提示词到获取模型响应的最短路径，无需切换窗口即可享受智能辅助。\n\n这款工具主要解决了开发过程中频繁上下文切换的痛点，让用户能在熟悉的终端界面内直接完成代码理解、生成、调试以及自动化运维任务。无论是查询大型代码库、根据草图生成应用，还是执行复杂的 Git 操作，gemini-cli 都能通过自然语言指令高效处理。\n\n它特别适合广大软件工程师、DevOps 人员及技术研究人员使用。其核心亮点包括支持高达 100 万 token 的超长上下文窗口，具备出色的逻辑推理能力；内置 Google 搜索、文件操作及 Shell 命令执行等实用工具；更独特的是，它支持 MCP（模型上下文协议），允许用户灵活扩展自定义集成，连接如图像生成等外部能力。此外，个人谷歌账号即可享受免费的额度支持，且项目基于 Apache 2.0 
协议完全开源，是提升终端工作效率的理想助手。",100752,"2026-04-10T01:20:03",[52,13,15,14],"插件",{"id":54,"name":55,"github_repo":56,"description_zh":57,"stars":58,"difficulty_score":32,"last_commit_at":59,"category_tags":60,"status":17},4721,"markitdown","microsoft\u002Fmarkitdown","MarkItDown 是一款由微软 AutoGen 团队打造的轻量级 Python 工具，专为将各类文件高效转换为 Markdown 格式而设计。它支持 PDF、Word、Excel、PPT、图片（含 OCR）、音频（含语音转录）、HTML 乃至 YouTube 链接等多种格式的解析，能够精准提取文档中的标题、列表、表格和链接等关键结构信息。\n\n在人工智能应用日益普及的今天，大语言模型（LLM）虽擅长处理文本，却难以直接读取复杂的二进制办公文档。MarkItDown 恰好解决了这一痛点，它将非结构化或半结构化的文件转化为模型“原生理解”且 Token 效率极高的 Markdown 格式，成为连接本地文件与 AI 分析 pipeline 的理想桥梁。此外，它还提供了 MCP（模型上下文协议）服务器，可无缝集成到 Claude Desktop 等 LLM 应用中。\n\n这款工具特别适合开发者、数据科学家及 AI 研究人员使用，尤其是那些需要构建文档检索增强生成（RAG）系统、进行批量文本分析或希望让 AI 助手直接“阅读”本地文件的用户。虽然生成的内容也具备一定可读性，但其核心优势在于为机器",93400,"2026-04-06T19:52:38",[52,14],{"id":62,"github_repo":63,"name":64,"description_en":65,"description_zh":66,"ai_summary_zh":66,"readme_en":67,"readme_zh":68,"quickstart_zh":69,"use_case_zh":70,"hero_image_url":71,"owner_login":72,"owner_name":73,"owner_avatar_url":74,"owner_bio":75,"owner_company":76,"owner_location":76,"owner_email":76,"owner_twitter":77,"owner_website":78,"owner_url":79,"languages":80,"stars":121,"forks":122,"last_commit_at":123,"license":124,"difficulty_score":10,"env_os":125,"env_gpu":126,"env_ram":127,"env_deps":128,"category_tags":142,"github_topics":144,"view_count":32,"oss_zip_url":76,"oss_zip_packed_at":76,"status":17,"created_at":154,"updated_at":155,"faqs":156,"releases":157},7953,"Project-MONAI\u002FMONAILabel","MONAILabel","MONAI Label is an intelligent open source image labeling and learning tool.","MONAI Label 是一款智能开源的医学图像标注与学习工具，旨在帮助研究人员和开发者高效创建带注释的数据集，并构建用于临床评估的 AI 标注模型。它主要解决了传统医学图像标注耗时费力、门槛高且难以持续优化的痛点，通过引入“人在回路”的交互机制，让 AI 在用户标注过程中不断自我学习，从而显著减少人工工作量并提升标注质量。\n\n这款工具特别适合医学影像领域的研究人员、AI 应用开发者以及临床医生使用。其独特的技术亮点在于采用了服务器 - 客户端架构，支持无服务器化开发模式：开发者可以将自定义标注应用作为服务部署在 MONAI Label 服务器上，而用户只需通过兼容的查看器即可进行交互式标注。系统既能在单张或多张 GPU 的本地机器上轻松运行，也支持服务端与客户端分离部署，灵活适应不同算力环境。作为 MONAI 生态的重要组成部分，MONAI Label 
不仅提供了丰富的示例应用和详细的教程文档，还允许用户像普通使用者一样与应用程序互动，从而持续改进模型表现，实现从数据准备到模型迭代的闭环工作流。","\u003C!--\nCopyright (c) MONAI Consortium\nLicensed under the Apache License, Version 2.0 (the \"License\");\nyou may not use this file except in compliance with the License.\nYou may obtain a copy of the License at\n    http:\u002F\u002Fwww.apache.org\u002Flicenses\u002FLICENSE-2.0\nUnless required by applicable law or agreed to in writing, software\ndistributed under the License is distributed on an \"AS IS\" BASIS,\nWITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\nSee the License for the specific language governing permissions and\nlimitations under the License.\n-->\n\n# MONAI Label\n[![License](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002Flicense-Apache%202.0-green.svg)](https:\u002F\u002Fopensource.org\u002Flicenses\u002FApache-2.0)\n[![CI Build](https:\u002F\u002Fgithub.com\u002FProject-MONAI\u002FMONAILabel\u002Fworkflows\u002Fbuild\u002Fbadge.svg?branch=main)](https:\u002F\u002Fgithub.com\u002FProject-MONAI\u002FMONAILabel\u002Fcommits\u002Fmain)\n[![Documentation Status](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FProject-MONAI_MONAILabel_readme_13d664e1afd7.png)](https:\u002F\u002Fmonai.readthedocs.io\u002Fprojects\u002Flabel\u002Fen\u002Flatest\u002F?badge=latest)\n[![PyPI version](https:\u002F\u002Fbadge.fury.io\u002Fpy\u002Fmonailabel.svg)](https:\u002F\u002Fbadge.fury.io\u002Fpy\u002Fmonailabel)\n[![Azure DevOps tests (compact)](https:\u002F\u002Fimg.shields.io\u002Fazure-devops\u002Ftests\u002Fprojectmonai\u002Fmonai-label\u002F10?compact_message)](https:\u002F\u002Fdev.azure.com\u002Fprojectmonai\u002Fmonai-label\u002F_test\u002Fanalytics?definitionId=10&contextType=build)\n[![Azure DevOps 
coverage](https:\u002F\u002Fimg.shields.io\u002Fazure-devops\u002Fcoverage\u002Fprojectmonai\u002Fmonai-label\u002F10)](https:\u002F\u002Fdev.azure.com\u002Fprojectmonai\u002Fmonai-label\u002F_build?definitionId=10)\n[![codecov](https:\u002F\u002Fcodecov.io\u002Fgh\u002FProject-MONAI\u002FMONAILabel\u002Fbranch\u002Fmain\u002Fgraph\u002Fbadge.svg)](https:\u002F\u002Fcodecov.io\u002Fgh\u002FProject-MONAI\u002FMONAILabel)\n\nMONAI Label is an intelligent open source image labeling and learning tool that enables users to create annotated datasets and build AI annotation models for clinical evaluation. MONAI Label enables application developers to build labeling apps in a serverless way, where custom labeling apps are exposed as a service through the MONAI Label Server.\n\nMONAI Label is a server-client system that facilitates interactive medical image annotation by using AI. It is an\nopen-source and easy-to-install ecosystem that can run locally on a machine with single or multiple GPUs. Both server\nand client work on the same\u002Fdifferent machine. It shares the same principles\nwith [MONAI](https:\u002F\u002Fgithub.com\u002FProject-MONAI).\n\nRefer to full [MONAI Label documentations](https:\u002F\u002Fmonai.readthedocs.io\u002Fprojects\u002Flabel\u002Fen\u002Flatest\u002Findex.html) for more details or check out our [MONAI Label Deep Dive videos series](https:\u002F\u002Fwww.youtube.com\u002Fplaylist?list=PLtoSVSQ2XzyD4lc-lAacFBzOdv5Ou-9IA).\n\nRefer to [MONAI Label Tutorial](https:\u002F\u002Fgithub.com\u002FProject-MONAI\u002Ftutorials\u002Ftree\u002Fmain\u002Fmonailabel) series for application and viewer workflows with different medical image tasks. Notebook-like tutorials are created for detailed instructions.\n\n### Table of Contents\n- [Overview](#overview)\n  - [Highlights and Features](#highlights-and-features)\n  - [Supported Matrix](#supported-matrix)\n- [Getting Started with MONAI Label](#getting-started-with-monai-label)\n  - [Step 1. 
Installation](#step-1-installation)\n  - [Step 2. MONAI Label Sample Applications](#step-2-monai-label-sample-applications)\n  - [Step 3. MONAI Label Supported Viewers](#step-3-monai-label-supported-viewers)\n  - [Step 4. Data Preparation](#step-4-data-preparation)\n  - [Step 5. Start MONAI Label Server and Start Annotating!](#step-5-start-monai-label-server-and-start-annotating)\n- [MONAI Label Tutorials](#monai-label-tutorials)\n- [Cite MONAI Label](#cite)\n- [Contributing](#contributing)\n- [Community](#community)\n- [Additional Resources](#additional-resources)\n\n### Overview\nMONAI Label reduces the time and effort of annotating new datasets and enables the adaptation of AI to the task at hand by continuously learning from user interactions and data. MONAI Label allows researchers and developers to make continuous improvements to their apps by allowing them to interact with their apps as the user would. End-users (clinicians, technologists, and annotators in general) benefit from AI continuously learning and becoming better at understanding what the end-user is trying to annotate.\n\nMONAI Label aims to fill the gap between developers creating new annotation applications, and the end users who want to benefit from these innovations.\n\n#### Highlights and Features\n- Framework for developing and deploying MONAI Label Apps to train and infer AI models\n- Compositional & portable APIs for ease of integration in existing workflows\n- Customizable labeling app design for varying user expertise\n- Annotation support via [3DSlicer](https:\u002F\u002Fgithub.com\u002FProject-MONAI\u002FMONAILabel\u002Ftree\u002Fmain\u002Fplugins\u002Fslicer)\n  & [OHIF](https:\u002F\u002Fgithub.com\u002FProject-MONAI\u002FMONAILabel\u002Ftree\u002Fmain\u002Fplugins\u002Fohif) for radiology\n- Annotation support via [QuPath](https:\u002F\u002Fgithub.com\u002FProject-MONAI\u002FMONAILabel\u002Ftree\u002Fmain\u002Fplugins\u002Fqupath), [Digital Slide 
Archive](https:\u002F\u002Fgithub.com\u002FProject-MONAI\u002FMONAILabel\u002Ftree\u002Fmain\u002Fplugins\u002Fdsa), and [CVAT](https:\u002F\u002Fgithub.com\u002FProject-MONAI\u002FMONAILabel\u002Ftree\u002Fmain\u002Fplugins\u002Fcvat) for\n  pathology\n- Annotation support via [CVAT](https:\u002F\u002Fgithub.com\u002FProject-MONAI\u002FMONAILabel\u002Ftree\u002Fmain\u002Fplugins\u002Fcvat) for Endoscopy\n- PACS connectivity via [DICOMWeb](https:\u002F\u002Fwww.dicomstandard.org\u002Fusing\u002Fdicomweb)\n- Automated Active Learning workflow for endoscopy using [CVAT](https:\u002F\u002Fgithub.com\u002FProject-MONAI\u002FMONAILabel\u002Ftree\u002Fmain\u002Fplugins\u002Fcvat)\n\n#### Supported Matrix\n\nMONAI Label supports many state-of-the-art (SOTA) models in the Model-Zoo, and their integration with viewers and the monaibundle app. Please refer to the [monaibundle](https:\u002F\u002Fgithub.com\u002FProject-MONAI\u002FMONAILabel\u002Ftree\u002Fmain\u002Fsample-apps\u002Fmonaibundle) app page for supported models, including whole body segmentation, whole brain segmentation, lung nodule detection, tumor segmentation, and many more.\n\nIn addition, you can find a table of the basic supported fields, modalities, viewers, and general data types below.  However, these are only the ones that we've explicitly tested, and that doesn't mean that your dataset or file type won't work with MONAI Label.  
Try MONAI for your given task and if you're having issues, reach out through GitHub Issues.\n\u003Ctable>\n\u003Ctr>\n  \u003Cth>Field\u003C\u002Fth>\n  \u003Cth>Models\u003C\u002Fth>\n  \u003Cth>Viewers\u003C\u002Fth>\n  \u003Cth>Data Types\u003C\u002Fth>\n  \u003Cth>Image Modalities\u002FTarget\u003C\u002Fth>\n\u003C\u002Ftr>\n  \u003Ctd>Radiology\u003C\u002Ftd>\n  \u003Ctd>\n    \u003Cul>\n      \u003Cli>Segmentation\u003C\u002Fli>\n      \u003Cli>DeepGrow\u003C\u002Fli>\n      \u003Cli>DeepEdit\u003C\u002Fli>\n      \u003Cli>SAM2 (2D\u002F3D)\u003C\u002Fli>\n    \u003C\u002Ful>\n  \u003C\u002Ftd>\n  \u003Ctd>\n    \u003Cul>\n      \u003Cli>3DSlicer\u003C\u002Fli>\n      \u003Cli>MITK\u003C\u002Fli>\n      \u003Cli>OHIF\u003C\u002Fli>\n    \u003C\u002Ful>\n  \u003C\u002Ftd>\n  \u003Ctd>\n    \u003Cul>\n      \u003Cli>NIfTI\u003C\u002Fli>\n      \u003Cli>NRRD\u003C\u002Fli>\n      \u003Cli>DICOM\u003C\u002Fli>\n    \u003C\u002Ful>\n  \u003C\u002Ftd>\n  \u003Ctd>\n    \u003Cul>\n      \u003Cli>CT\u003C\u002Fli>\n      \u003Cli>MRI\u003C\u002Fli>\n    \u003C\u002Ful>\n  \u003C\u002Ftd>\n\u003Ctr>\n\u003C\u002Ftr>\n  \u003Ctd>Pathology\u003C\u002Ftd>\n  \u003Ctd>\n    \u003Cul>\n      \u003Cli>DeepEdit\u003C\u002Fli>\n      \u003Cli>NuClick\u003C\u002Fli>\n      \u003Cli>Segmentation\u003C\u002Fli>\n      \u003Cli>Classification\u003C\u002Fli>\n      \u003Cli>SAM2 (2D)\u003C\u002Fli>\n    \u003C\u002Ful>\n  \u003C\u002Ftd>\n  \u003Ctd>\n    \u003Cul>\n      \u003Cli>Digital Slide Archive\u003C\u002Fli>\n      \u003Cli>QuPath\u003C\u002Fli>\n      \u003Cli>CVAT\u003C\u002Fli>\n    \u003C\u002Ful>\n  \u003C\u002Ftd>\n  \u003Ctd>\n    \u003Cul>\n      \u003Cli>TIFF\u003C\u002Fli>\n      \u003Cli>SVS\u003C\u002Fli>\n    \u003C\u002Ful>\n  \u003C\u002Ftd>\n  \u003Ctd>\n    \u003Cul>\n      \u003Cli>Nuclei Segmentation\u003C\u002Fli>\n      \u003Cli>Nuclei Classification\u003C\u002Fli>\n    \u003C\u002Ful>\n  \u003C\u002Ftd>\n\u003Ctr>\n\u003C\u002Ftr>\n  
\u003Ctd>Video\u003C\u002Ftd>\n  \u003Ctd>\n    \u003Cul>\n      \u003Cli>DeepEdit\u003C\u002Fli>\n      \u003Cli>Tooltracking\u003C\u002Fli>\n      \u003Cli>InBody\u002FOutBody\u003C\u002Fli>\n      \u003Cli>SAM2 (2D)\u003C\u002Fli>\n    \u003C\u002Ful>\n  \u003C\u002Ftd>\n  \u003Ctd>\n    \u003Cul>\n      \u003Cli>CVAT\u003C\u002Fli>\n    \u003C\u002Ful>\n  \u003C\u002Ftd>\n  \u003Ctd>\n    \u003Cul>\n      \u003Cli>JPG\u003C\u002Fli>\n      \u003Cli>3-channel Video Frames\u003C\u002Fli>\n    \u003C\u002Ful>\n  \u003C\u002Ftd>\n  \u003Ctd>\n    \u003Cul>\n      \u003Cli>Endoscopy\u003C\u002Fli>\n    \u003C\u002Ful>\n  \u003C\u002Ftd>\n\u003Ctr>\n\u003C\u002Ftable>\n\n# Getting Started with MONAI Label\n### MONAI Label requires a few steps to get started:\n- Step 1: [Install MONAI Label](#step-1-installation)\n- Step 2: [Download a MONAI Label sample app or write your own custom app](#step-2-monai-label-sample-applications)\n- Step 3: [Install a compatible viewer and supported MONAI Label Plugin](#step-3-monai-label-supported-viewers)\n- Step 4: [Prepare your Data](#step-4-data-preparation)\n- Step 5: [Launch MONAI Label Server and start Annotating!](#step-5-start-monai-label-server-and-start-annotating)\n\n## Step 1 Installation\n\n### Current Stable Version\n\u003Ca href=\"https:\u002F\u002Fpypi.org\u002Fproject\u002Fmonailabel\u002F#history\">\u003Cimg alt=\"GitHub release (latest SemVer)\" src=\"https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Fv\u002Frelease\u002Fproject-monai\u002Fmonailabel\">\u003C\u002Fa>\n\u003Cpre>pip install -U monailabel\u003C\u002Fpre>\n\nMONAI Label supports the following OS with **GPU\u002FCUDA** enabled. 
For more detailed instructions, please see the installation guides.\n- [Ubuntu](https:\u002F\u002Fmonai.readthedocs.io\u002Fprojects\u002Flabel\u002Fen\u002Flatest\u002Finstallation.html)\n- [Windows](https:\u002F\u002Fmonai.readthedocs.io\u002Fprojects\u002Flabel\u002Fen\u002Flatest\u002Finstallation.html#windows)\n\n### GPU Acceleration (Optional Dependencies)\nThe following optional dependencies can help you accelerate some GPU-based transforms from MONAI. These dependencies are enabled by default if you are using the `projectmonai\u002Fmonailabel` Docker image.\n- [CUCIM](https:\u002F\u002Fpypi.org\u002Fproject\u002Fcucim\u002F)\n- [CUPY](https:\u002F\u002Fdocs.cupy.dev\u002Fen\u002Fstable\u002Finstall.html#installing-cupy)\n- [CUDA Toolkit](https:\u002F\u002Fdeveloper.nvidia.com\u002Fcuda-downloads)\n\n### Development version\n\nTo install the _**latest features**_, use one of the following options:\n\n\u003Cdetails>\n  \u003Csummary>\u003Cstrong>Git Checkout (developer mode)\u003C\u002Fstrong>\u003C\u002Fsummary>\n  \u003Ca href=\"https:\u002F\u002Fgithub.com\u002FProject-MONAI\u002FMONAILabel\">\u003Cimg alt=\"GitHub tag (latest SemVer)\" src=\"https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Fv\u002Ftag\u002FProject-MONAI\u002Fmonailabel\">\u003C\u002Fa>\n  \u003Cbr>\n  \u003Cpre>\n  git clone https:\u002F\u002Fgithub.com\u002FProject-MONAI\u002FMONAILabel\n  pip install -r MONAILabel\u002Frequirements.txt\n  export PATH=$PATH:`pwd`\u002FMONAILabel\u002Fmonailabel\u002Fscripts\u003C\u002Fpre>\n  \u003Cp>If you are using DICOM-Web + OHIF then you have to build the OHIF package separately.  
Please refer to the setup instructions [here](https:\u002F\u002Fgithub.com\u002FProject-MONAI\u002FMONAILabel\u002Ftree\u002Fmain\u002Fplugins\u002Fohif#development-setup).\u003C\u002Fp>\n\u003C\u002Fdetails>\n\n\u003Cdetails>\n  \u003Csummary>\u003Cstrong>Docker\u003C\u002Fstrong>\u003C\u002Fsummary>\n  \u003Cimg alt=\"Docker Image Version (latest semver)\" src=\"https:\u002F\u002Fimg.shields.io\u002Fdocker\u002Fv\u002Fprojectmonai\u002Fmonailabel\">\n  \u003Cbr>\n  \u003Cpre>docker run --gpus all --rm -ti --ipc=host --net=host projectmonai\u002Fmonailabel:latest bash\u003C\u002Fpre>\n\u003C\u002Fdetails>\n\n### SAM-2\n\n> By default, the [**SAM2**](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fsam2\u002F) model is included for all the Apps when **_python >= 3.10_**:\n>  - **sam_2d**: for any organ or tissue and others over a given slice\u002F2D image.\n>  - **sam_3d**: to support SAM2 propagation over multiple slices (Radiology\u002FMONAI-Bundle).\n\nIf you install via `pip install monailabel`, it uses [SAM-2](https:\u002F\u002Fhuggingface.co\u002Ffacebook\u002Fsam2-hiera-large) models by default.\n\u003Cbr\u002F>\nTo use [SAM-2.1](https:\u002F\u002Fhuggingface.co\u002Ffacebook\u002Fsam2.1-hiera-large), use one of the following options:\n - Use the monailabel [Docker](https:\u002F\u002Fhub.docker.com\u002Fr\u002Fprojectmonai\u002Fmonailabel) instead of the pip package\n - Run monailabel in dev mode (git checkout)\n - If you have installed monailabel via pip, uninstall the **_sam2_** package (`pip uninstall sam2`), then run `pip install -r requirements.txt` or install the latest **SAM-2** from its [GitHub](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fsam2\u002Ftree\u002Fmain?tab=readme-ov-file#installation).\n\n## Step 2 MONAI Label Sample Applications\n\n\u003Ch3>Radiology\u003C\u002Fh3>\n\u003Cp>This app has example models to do both interactive and automated segmentation over radiology (3D) images. 
It includes auto segmentation with the latest deep learning models (e.g., UNet, UNETR) for multiple abdominal organs. Interactive tools include DeepEdit and Deepgrow for actively improving trained models and deployment.\u003C\u002Fp>\n\u003Cul>\n  \u003Cli>Deepedit\u003C\u002Fli>\n  \u003Cli>Deepgrow\u003C\u002Fli>\n  \u003Cli>Segmentation\u003C\u002Fli>\n  \u003Cli>Spleen Segmentation\u003C\u002Fli>\n  \u003Cli>Multi-Stage Vertebra Segmentation\u003C\u002Fli>\n\u003C\u002Ful>\n\n\u003Ch3>Pathology\u003C\u002Fh3>\n\u003Cp>This app has example models to do both interactive and automated segmentation over pathology (WSI) images. It includes nuclei multi-label segmentation for Neoplastic, Inflammatory, Connective\u002FSoft tissue, Dead, and Epithelial cells. The app provides interactive tools including DeepEdit and NuClick for interactive nuclei segmentation.\u003C\u002Fp>\n\u003Cul>\n  \u003Cli>Deepedit\u003C\u002Fli>\n  \u003Cli>NuClick\u003C\u002Fli>\n  \u003Cli>Segmentation\u003C\u002Fli>\n  \u003Cli>Classification\u003C\u002Fli>\n\u003C\u002Ful>\n\u003Ch3>Video\u003C\u002Fh3>\n\u003Cp>The Endoscopy app enables users to use interactive, automated segmentation and classification models over 2D images for the endoscopy use case. Combined with CVAT, it demonstrates the fully automated Active Learning workflow to train and fine-tune a model.\u003C\u002Fp>\n\u003Cul>\n  \u003Cli>Deepedit\u003C\u002Fli>\n  \u003Cli>ToolTracking\u003C\u002Fli>\n  \u003Cli>InBody\u002FOutBody\u003C\u002Fli>\n\u003C\u002Ful>\n\u003Ch3>Bundles\u003C\u002Fh3>\n\u003Cp>The Bundle app enables users to bring customized models for inference, training, or pre- and post-processing of any target anatomy. The specification for MONAILabel integration of the Bundle app links to the archived Model-Zoo for customized labeling (e.g., the third-party transformer model for labeling the renal cortex, medulla, and pelvicalyceal system. 
Interactive tools such as DeepEdit).\u003C\u002Fp>\n\nFor a full list of supported bundles, see the \u003Ca href=\"https:\u002F\u002Fgithub.com\u002FProject-MONAI\u002FMONAILabel\u002Ftree\u002Fmain\u002Fsample-apps\u002Fmonaibundle\">MONAI Label Bundles README\u003C\u002Fa>.\n\n## Step 3 MONAI Label Supported Viewers\n\n### Radiology\n#### 3D Slicer\n3D Slicer is a free and open-source platform for analyzing, visualizing, and understanding medical image data. In MONAI Label, 3D Slicer is the most thoroughly tested viewer for radiology studies, algorithm development, and integration.\n\n[3D Slicer Setup](https:\u002F\u002Fgithub.com\u002FProject-MONAI\u002FMONAILabel\u002Ftree\u002Fmain\u002Fplugins\u002Fslicer)\n\n#### MITK\nThe Medical Imaging Interaction ToolKit (MITK) is an open-source, standalone medical imaging platform. MONAI Label is partially integrated into MITK Workbench, a powerful and free application to view, process, and segment medical images. The MONAI Label tool in MITK is mostly tested for inference using the radiology and bundle apps, allowing for auto and click-based interactive models.\n\n[MITK Setup](https:\u002F\u002Fgithub.com\u002FProject-MONAI\u002FMONAILabel\u002Ftree\u002Fmain\u002Fplugins\u002Fmitk)\n\n#### OHIF\nThe Open Health Imaging Foundation (OHIF) Viewer is an open-source, web-based medical imaging platform. 
It aims to provide a core framework for building complex imaging applications.\n\n[OHIF Setup](https:\u002F\u002Fgithub.com\u002FProject-MONAI\u002FMONAILabel\u002Ftree\u002Fmain\u002Fplugins\u002Fohif)\n\n### Pathology\n#### QuPath\nQuantitative Pathology & Bioimage Analysis (QuPath) is an open, powerful, flexible, extensible software platform for bioimage analysis.\n\n[QuPath Setup](https:\u002F\u002Fgithub.com\u002FProject-MONAI\u002FMONAILabel\u002Ftree\u002Fmain\u002Fplugins\u002Fqupath)\n\n#### Digital Slide Archive\nThe Digital Slide Archive (DSA) is a platform that provides the ability to store, manage, visualize, and annotate large imaging data sets.\n\n[Digital Slide Archive Setup](https:\u002F\u002Fgithub.com\u002FProject-MONAI\u002FMONAILabel\u002Ftree\u002Fmain\u002Fplugins\u002Fdsa)\n\n### Video\n#### CVAT\nCVAT is an interactive video and image annotation tool for computer vision.\n\n[CVAT Setup](https:\u002F\u002Fgithub.com\u002FProject-MONAI\u002FMONAILabel\u002Ftree\u002Fmain\u002Fplugins\u002Fcvat)\n\n## Step 4 Data Preparation\nFor data preparation, you have two options: you can use a local datastore or any image archive tool that supports DICOMWeb.\n\n#### Local Datastore for the Radiology App on single modality images\nFor a Datastore in a local file archive, there is a set folder structure that MONAI Label uses. Place your image data in a folder, and if you have any segmentation files, create and place them in a subfolder called `labels\u002Ffinal`. You can see an example below:\n```\ndataset\n│-- spleen_10.nii.gz\n│-- spleen_11.nii.gz\n│   ...\n└───labels\n    └─── final\n        │-- spleen_10.nii.gz\n        │-- spleen_11.nii.gz\n        │   ...\n```\n\nIf you don't have labels, just place the images\u002Fvolumes in the dataset folder.\n\n#### DICOMWeb Support\nIf the viewer you're using supports the DICOMweb standard, you can use that instead of a local datastore to serve images to MONAI Label. 
When starting the MONAI Label server, we need to specify the URL of the DICOMweb service in the studies argument (and, optionally, the username and password for DICOM servers that require them). You can see an example of starting the MONAI Label server with a DICOMweb URL below:\n\n\n```\nmonailabel start_server --app apps\u002Fradiology --studies http:\u002F\u002F127.0.0.1:8042\u002Fdicom-web --conf models segmentation\n```\n\n## Step 5 Start MONAI Label Server and Start Annotating\nYou're now ready to start using MONAI Label.  Once you've configured your viewer, app, and datastore, you can launch the MONAI Label server with the relevant parameters. For simplicity, you can see an example where we download a Radiology sample app and dataset, then start the MONAI Label server below:\n\n```\nmonailabel apps --download --name radiology --output apps\nmonailabel datasets --download --name Task09_Spleen --output datasets\nmonailabel start_server --app apps\u002Fradiology --studies datasets\u002FTask09_Spleen\u002FimagesTr --conf models segmentation\n```\n\n**Note:** If you want to work on different labels than the ones proposed by default, change the configs file following the instructions here: https:\u002F\u002Fyoutu.be\u002FKtPE8m0LvcQ?t=622\n\n## MONAI Label Tutorials\n\n**Content**\n\n- **Radiology App**:\n  - Viewer: [3D Slicer](https:\u002F\u002Fwww.slicer.org\u002F) | Datastore: Local | Task: Segmentation\n    - [MONAILabel: HelloWorld](https:\u002F\u002Fgithub.com\u002FProject-MONAI\u002Ftutorials\u002Fblob\u002Fmain\u002Fmonailabel\u002Fmonailabel_HelloWorld_radiology_3dslicer.ipynb): Spleen segmentation with 3D Slicer setups.\n  - Viewer: [OHIF](https:\u002F\u002Fohif.org\u002F) | Datastore: Local | Task: Segmentation\n    - [MONAILabel: Web-based OHIF Viewer](https:\u002F\u002Fgithub.com\u002FProject-MONAI\u002Ftutorials\u002Fblob\u002Fmain\u002Fmonailabel\u002Fmonailabel_radiology_spleen_segmentation_OHIF.ipynb): Spleen segmentation with OHIF setups.\n- 
**MONAIBUNDLE App**:\n  - Viewer: [3D Slicer](https:\u002F\u002Fwww.slicer.org\u002F) | Datastore: Local | Task: Segmentation\n    - [MONAILabel: Pancreas Tumor Segmentation with 3D Slicer](https:\u002F\u002Fgithub.com\u002FProject-MONAI\u002Ftutorials\u002Fblob\u002Fmain\u002Fmonailabel\u002Fmonailabel_bring_your_own_data.ipynb): Pancreas and tumor segmentation with CT scans in 3D Slicer.\n    - [MONAILabel: Multi-organ Segmentation with 3D Slicer](https:\u002F\u002Fgithub.com\u002FProject-MONAI\u002Ftutorials\u002Fblob\u002Fmain\u002Fmonailabel\u002Fmonailabel_monaibundle_3dslicer_multiorgan_seg.ipynb): Multi-organ segmentation with CT scans in 3D Slicer.\n    - [MONAILabel: Whole Body CT Segmentation with 3D Slicer](https:\u002F\u002Fgithub.com\u002FProject-MONAI\u002Ftutorials\u002Fblob\u002Fmain\u002Fmonailabel\u002Fmonailabel_wholebody_totalSegmentator_3dslicer.ipynb): Whole body (104 structures) segmentation with CT scans.\n    - [MONAILabel: Lung nodule CT Detection with 3D Slicer](https:\u002F\u002Fgithub.com\u002FProject-MONAI\u002Ftutorials\u002Fblob\u002Fmain\u002Fmonailabel\u002Fmonailabel_monaibundle_3dslicer_lung_nodule_detection.ipynb): Lung nodule detection task with CT scans.\n- **Pathology App**:\n  - Viewer: [QuPath](https:\u002F\u002Fqupath.github.io\u002F) | Datastore: Local | Task: Segmentation\n    - [MONAILabel: Nuclei Segmentation with QuPath](https:\u002F\u002Fgithub.com\u002FProject-MONAI\u002Ftutorials\u002Fblob\u002Fmain\u002Fmonailabel\u002Fmonailabel_pathology_nuclei_segmentation_QuPath.ipynb) Nuclei segmentation with QuPath setup and Nuclick models.\n- **Endoscopy App**:\n  - Viewer: [CVAT](https:\u002F\u002Fgithub.com\u002Fopencv\u002Fcvat) | Datastore: Local | Task: Segmentation\n    - [MONAILabel: Tooltracking with CVAT](https:\u002F\u002Fgithub.com\u002FProject-MONAI\u002Ftutorials\u002Fblob\u002Fmain\u002Fmonailabel\u002Fmonailabel_endoscopy_cvat_tooltracking.ipynb): Surgical tool segmentation with CVAT\u002FNuclio setup.\n\n## 
Cite\n\nIf you are using MONAI Label in your research, please use the following citation:\n\n```bash\n@article{DiazPinto2022monailabel,\n   author = {Diaz-Pinto, Andres and Alle, Sachidanand and Ihsani, Alvin and Asad, Muhammad and\n            Nath, Vishwesh and P{\\'e}rez-Garc{\\'\\i}a, Fernando and Mehta, Pritesh and\n            Li, Wenqi and Roth, Holger R. and Vercauteren, Tom and Xu, Daguang and\n            Dogra, Prerna and Ourselin, Sebastien and Feng, Andrew and Cardoso, M. Jorge},\n    title = {{MONAI Label: A framework for AI-assisted Interactive Labeling of 3D Medical Images}},\n  journal = {arXiv e-prints},\n     year = 2022,\n     url  = {https:\u002F\u002Farxiv.org\u002Fpdf\u002F2203.12362.pdf}\n}\n\n@inproceedings{DiazPinto2022DeepEdit,\n      title={{DeepEdit: Deep Editable Learning for Interactive Segmentation of 3D Medical Images}},\n      author={Diaz-Pinto, Andres and Mehta, Pritesh and Alle, Sachidanand and Asad, Muhammad and Brown, Richard and Nath, Vishwesh and Ihsani, Alvin and Antonelli, Michela and Palkovics, Daniel and Pinter, Csaba and others},\n      booktitle={MICCAI Workshop on Data Augmentation, Labelling, and Imperfections},\n      pages={11--21},\n      year={2022},\n      organization={Springer}\n}\n ```\n\nOptional Citation: if you are using active learning functionality from MONAI Label, please support us:\n\n```bash\n@article{nath2020diminishing,\n  title={Diminishing uncertainty within the training pool: Active learning for medical image segmentation},\n  author={Nath, Vishwesh and Yang, Dong and Landman, Bennett A and Xu, Daguang and Roth, Holger R},\n  journal={IEEE Transactions on Medical Imaging},\n  volume={40},\n  number={10},\n  pages={2534--2547},\n  year={2020},\n  publisher={IEEE}\n}\n```\n\n## Contributing\n\nFor guidance on making a contribution to MONAI Label, see\nthe [contributing guidelines](https:\u002F\u002Fgithub.com\u002FProject-MONAI\u002FMONAILabel\u002Fblob\u002Fmain\u002FCONTRIBUTING.md).\n\n## 
Community\n\nJoin the conversation on Twitter [@ProjectMONAI](https:\u002F\u002Ftwitter.com\u002FProjectMONAI) or join\nour [Slack channel](https:\u002F\u002Fjoin.slack.com\u002Ft\u002Fprojectmonai\u002Fshared_invite\u002Fzt-3hucgm02q-i8Bn9XofDZs2UGOH4jUl4w).\n\nAsk and answer questions over\non [MONAI Label's GitHub Discussions tab](https:\u002F\u002Fgithub.com\u002FProject-MONAI\u002FMONAILabel\u002Fdiscussions).\n\n## Additional Resources\n\n- Website: https:\u002F\u002Fproject-monai.github.io\u002F\n- API documentation: https:\u002F\u002Fmonai.readthedocs.io\u002Fprojects\u002Flabel\n- Code: https:\u002F\u002Fgithub.com\u002FProject-MONAI\u002FMONAILabel\n- Project tracker: https:\u002F\u002Fgithub.com\u002FProject-MONAI\u002FMONAILabel\u002Fprojects\n- Issue tracker: https:\u002F\u002Fgithub.com\u002FProject-MONAI\u002FMONAILabel\u002Fissues\n- Wiki: https:\u002F\u002Fgithub.com\u002FProject-MONAI\u002FMONAILabel\u002Fwiki\n- Test status: https:\u002F\u002Fgithub.com\u002FProject-MONAI\u002FMONAILabel\u002Factions\n- PyPI package: https:\u002F\u002Fpypi.org\u002Fproject\u002Fmonailabel\u002F\n- Docker Hub: https:\u002F\u002Fhub.docker.com\u002Fr\u002Fprojectmonai\u002Fmonailabel\n- Client API: https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=mPMYJyzSmyo\n- Demo Videos: https:\u002F\u002Fwww.youtube.com\u002Fc\u002FProjectMONAI\n","\u003C!--\n版权所有 © MONAI 联盟\n根据 Apache License, Version 2.0（“许可证”）授权；\n除非符合许可证的规定，否则不得使用此文件。\n您可以在以下网址获取许可证副本：\n    http:\u002F\u002Fwww.apache.org\u002Flicenses\u002FLICENSE-2.0\n除非适用法律要求或书面同意，否则软件按“原样”分发，\n不提供任何形式的保证或条件，无论是明示的还是暗示的。\n有关权限和限制的具体语言，请参阅许可证。\n-->\n\n# MONAI Label\n[![许可证](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002Flicense-Apache%202.0-green.svg)](https:\u002F\u002Fopensource.org\u002Flicenses\u002FApache-2.0)\n[![CI 
构建](https:\u002F\u002Fgithub.com\u002FProject-MONAI\u002FMONAILabel\u002Fworkflows\u002Fbuild\u002Fbadge.svg?branch=main)](https:\u002F\u002Fgithub.com\u002FProject-MONAI\u002FMONAILabel\u002Fcommits\u002Fmain)\n[![文档状态](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FProject-MONAI_MONAILabel_readme_13d664e1afd7.png)](https:\u002F\u002Fmonai.readthedocs.io\u002Fprojects\u002Flabel\u002Fen\u002Flatest\u002F?badge=latest)\n[![PyPI 版本](https:\u002F\u002Fbadge.fury.io\u002Fpy\u002Fmonailabel.svg)](https:\u002F\u002Fbadge.fury.io\u002Fpy\u002Fmonailabel)\n[![Azure DevOps 测试（紧凑版）](https:\u002F\u002Fimg.shields.io\u002Fazure-devops\u002Ftests\u002Fprojectmonai\u002Fmonai-label\u002F10?compact_message)](https:\u002F\u002Fdev.azure.com\u002Fprojectmonai\u002Fmonai-label\u002F_test\u002Fanalytics?definitionId=10&contextType=build)\n[![Azure DevOps 覆盖率](https:\u002F\u002Fimg.shields.io\u002Fazure-devops\u002Fcoverage\u002Fprojectmonai\u002Fmonai-label\u002F10)](https:\u002F\u002Fdev.azure.com\u002Fprojectmonai\u002Fmonai-label\u002F_build?definitionId=10)\n[![Codecov](https:\u002F\u002Fcodecov.io\u002Fgh\u002FProject-MONAI\u002FMONAILabel\u002Fbranch\u002Fmain\u002Fgraph\u002Fbadge.svg)](https:\u002F\u002Fcodecov.io\u002Fgh\u002FProject-MONAI\u002FMONAILabel)\n\nMONAI Label 是一款智能的开源医学影像标注与学习工具，使用户能够创建带标注的数据集，并构建用于临床评估的 AI 标注模型。MONAI Label 允许应用开发者以无服务器的方式构建标注应用，自定义标注应用可通过 MONAI Label Server 作为服务对外提供。\n\nMONAI Label 是一个基于服务器-客户端架构的系统，利用 AI 技术促进交互式的医学影像标注。它是一个开源且易于安装的生态系统，可在本地单 GPU 或多 GPU 的机器上运行。服务器端和客户端可以运行在同一台或不同的机器上。其设计原则与 [MONAI](https:\u002F\u002Fgithub.com\u002FProject-MONAI) 保持一致。\n\n如需了解更多详细信息，请参阅完整的 [MONAI Label 文档](https:\u002F\u002Fmonai.readthedocs.io\u002Fprojects\u002Flabel\u002Fen\u002Flatest\u002Findex.html)，或观看我们的 [MONAI Label 深度解析视频系列](https:\u002F\u002Fwww.youtube.com\u002Fplaylist?list=PLtoSVSQ2XzyD4lc-lAacFBzOdv5Ou-9IA)。\n\n关于不同医学影像任务的应用和查看器工作流，请参考 [MONAI Label 
教程](https:\u002F\u002Fgithub.com\u002FProject-MONAI\u002Ftutorials\u002Ftree\u002Fmain\u002Fmonailabel) 系列。这些教程以笔记本形式呈现，提供详细的指导说明。\n\n### 目录\n- [概述](#overview)\n  - [亮点与特性](#highlights-and-features)\n  - [支持的环境](#supported-matrix)\n- [开始使用 MONAI Label](#getting-started-with-monai-label)\n  - [步骤 1. 安装](#step-1-installation)\n  - [步骤 2. MONAI Label 示例应用](#step-2-monai-label-sample-applications)\n  - [步骤 3. MONAI Label 支持的查看器](#step-3-monai-label-supported-viewers)\n  - [步骤 4. 数据准备](#step-4-data-preparation)\n  - [步骤 5. 启动 MONAI Label 服务器并开始标注！](#step-5-start-monai-label-server-and-start-annotating)\n- [MONAI Label 教程](#monai-label-tutorials)\n- [引用 MONAI Label](#cite)\n- [贡献](#contributing)\n- [社区](#community)\n- [其他资源](#additional-resources)\n\n### 概述\nMONAI Label 通过持续从用户交互和数据中学习，减少了标注新数据集所需的时间和精力，并使 AI 能够适应当前的任务。MONAI Label 允许研究人员和开发者像普通用户一样与他们的应用进行交互，从而不断改进应用。最终用户（如临床医生、技术人员和标注人员）则受益于 AI 的持续学习能力，使其能够更好地理解用户想要标注的内容。\n\nMONAI Label 的目标是弥合开发人员创建新的标注应用程序与希望从这些创新中获益的最终用户之间的差距。\n\n#### 亮点与功能\n- 用于开发和部署 MONAI Label 应用程序以训练和推理 AI 模型的框架\n- 组合式且可移植的 API，便于集成到现有工作流中\n- 可根据用户的专业水平自定义标注应用设计\n- 支持通过 [3DSlicer](https:\u002F\u002Fgithub.com\u002FProject-MONAI\u002FMONAILabel\u002Ftree\u002Fmain\u002Fplugins\u002Fslicer) 和 [OHIF](https:\u002F\u002Fgithub.com\u002FProject-MONAI\u002FMONAILabel\u002Ftree\u002Fmain\u002Fplugins\u002Fohif) 进行放射学标注\n- 支持通过 [QuPath](https:\u002F\u002Fgithub.com\u002FProject-MONAI\u002FMONAILabel\u002Ftree\u002Fmain\u002Fplugins\u002Fqupath)、[Digital Slide Archive](https:\u002F\u002Fgithub.com\u002FProject-MONAI\u002FMONAILabel\u002Ftree\u002Fmain\u002Fplugins\u002Fdsa) 和 [CVAT](https:\u002F\u002Fgithub.com\u002FProject-MONAI\u002FMONAILabel\u002Ftree\u002Fmain\u002Fplugins\u002Fcvat) 进行病理学标注\n- 支持通过 [CVAT](https:\u002F\u002Fgithub.com\u002FProject-MONAI\u002FMONAILabel\u002Ftree\u002Fmain\u002Fplugins\u002Fcvat) 进行内窥镜检查标注\n- 通过 [DICOMWeb](https:\u002F\u002Fwww.dicomstandard.org\u002Fusing\u002Fdicomweb) 实现 PACS 连接\n- 针对内窥镜检查的自动化主动学习工作流，使用 
[CVAT](https:\u002F\u002Fgithub.com\u002FProject-MONAI\u002FMONAILabel\u002Ftree\u002Fmain\u002Fplugins\u002Fcvat)\n\n#### 支持矩阵\n\nMONAI Label 支持 Model-Zoo 中的许多最先进（SOTA）模型，并将其与查看器和 monaibundle 应用程序集成。请参阅 [monaibundle](https:\u002F\u002Fgithub.com\u002FProject-MONAI\u002FMONAILabel\u002Ftree\u002Fmain\u002Fsample-apps\u002Fmonaibundle) 应用页面，了解支持的模型，包括全身分割、全脑分割、肺结节检测、肿瘤分割等众多任务。\n\n此外，您还可以找到一个表格，列出基本支持的领域、模态、查看器和通用数据类型。然而，这些只是我们明确测试过的选项，并不意味着您的数据集或文件类型无法与 MONAI Label 配合使用。请尝试将 MONAI 用于您的特定任务；如果遇到问题，请通过 GitHub Issues 与我们联系。\n\u003Ctable>\n\u003Ctr>\n  \u003Cth>领域\u003C\u002Fth>\n  \u003Cth>模型\u003C\u002Fth>\n  \u003Cth>查看器\u003C\u002Fth>\n  \u003Cth>数据类型\u003C\u002Fth>\n  \u003Cth>影像模态\u002F目标\u003C\u002Fth>\n\u003C\u002Ftr>\n  \u003Ctd>放射学\u003C\u002Ftd>\n  \u003Ctd>\n    \u003Cul>\n      \u003Cli>分割\u003C\u002Fli>\n      \u003Cli>DeepGrow\u003C\u002Fli>\n      \u003Cli>DeepEdit\u003C\u002Fli>\n      \u003Cli>SAM2 (2D\u002F3D)\u003C\u002Fli>\n    \u003C\u002Ful>\n  \u003C\u002Ftd>\n  \u003Ctd>\n    \u003Cul>\n      \u003Cli>3DSlicer\u003C\u002Fli>\n      \u003Cli>MITK\u003C\u002Fli>\n      \u003Cli>OHIF\u003C\u002Fli>\n    \u003C\u002Ful>\n  \u003C\u002Ftd>\n  \u003Ctd>\n    \u003Cul>\n      \u003Cli>NIfTI\u003C\u002Fli>\n      \u003Cli>NRRD\u003C\u002Fli>\n      \u003Cli>DICOM\u003C\u002Fli>\n    \u003C\u002Ful>\n  \u003C\u002Ftd>\n  \u003Ctd>\n    \u003Cul>\n      \u003Cli>CT\u003C\u002Fli>\n      \u003Cli>MRI\u003C\u002Fli>\n    \u003C\u002Ful>\n  \u003C\u002Ftd>\n\u003Ctr>\n\u003C\u002Ftr>\n  \u003Ctd>病理学\u003C\u002Ftd>\n  \u003Ctd>\n    \u003Cul>\n      \u003Cli>DeepEdit\u003C\u002Fli>\n      \u003Cli>NuClick\u003C\u002Fli>\n      \u003Cli>分割\u003C\u002Fli>\n      \u003Cli>分类\u003C\u002Fli>\n      \u003Cli>SAM2 (2D)\u003C\u002Fli>\n    \u003C\u002Ful>\n  \u003C\u002Ftd>\n  \u003Ctd>\n    \u003Cul>\n      \u003Cli>Digital Slide Archive\u003C\u002Fli>\n      \u003Cli>QuPath\u003C\u002Fli>\n      \u003Cli>CVAT\u003C\u002Fli>\n    \u003C\u002Ful>\n  \u003C\u002Ftd>\n 
 \u003Ctd>\n    \u003Cul>\n      \u003Cli>TIFF\u003C\u002Fli>\n      \u003Cli>SVS\u003C\u002Fli>\n    \u003C\u002Ful>\n  \u003C\u002Ftd>\n  \u003Ctd>\n    \u003Cul>\n      \u003Cli>细胞核分割\u003C\u002Fli>\n      \u003Cli>细胞核分类\u003C\u002Fli>\n    \u003C\u002Ful>\n  \u003C\u002Ftd>\n\u003Ctr>\n\u003C\u002Ftr>\n  \u003Ctd>视频\u003C\u002Ftd>\n  \u003Ctd>\n    \u003Cul>\n      \u003Cli>DeepEdit\u003C\u002Fli>\n      \u003Cli>工具追踪\u003C\u002Fli>\n      \u003Cli>体内\u002F体外\u003C\u002Fli>\n      \u003Cli>SAM2 (2D)\u003C\u002Fli>\n    \u003C\u002Ful>\n  \u003C\u002Ftd>\n  \u003Ctd>\n    \u003Cul>\n      \u003Cli>CVAT\u003C\u002Fli>\n    \u003C\u002Ful>\n  \u003C\u002Ftd>\n  \u003Ctd>\n    \u003Cul>\n      \u003Cli>JPG\u003C\u002Fli>\n      \u003Cli>3通道视频帧\u003C\u002Fli>\n    \u003C\u002Ful>\n  \u003C\u002Ftd>\n  \u003Ctd>\n    \u003Cul>\n      \u003Cli>内窥镜检查\u003C\u002Fli>\n    \u003C\u002Ful>\n  \u003C\u002Ftd>\n\u003Ctr>\n\u003C\u002Ftable>\n\n# 开始使用 MONAI Label\n### 使用 MONAI Label 开始需要几个步骤：\n- 步骤 1：[安装 MONAI Label](#step-1-installation)\n- 步骤 2：[下载 MONAI Label 示例应用或编写您自己的自定义应用](#step-2-monai-label-sample-applications)\n- 步骤 3：[安装兼容的查看器和受支持的 MONAI Label 插件](#step-3-monai-label-supported-viewers)\n- 步骤 4：[准备您的数据](#step-4-data-preparation)\n- 步骤 5：[启动 MONAI Label 服务器并开始标注！](#step-5-start-monai-label-server-and-start-annotating)\n\n## 步骤 1 安装\n\n### 当前稳定版本\n\u003Ca href=\"https:\u002F\u002Fpypi.org\u002Fproject\u002Fmonailabel\u002F#history\">\u003Cimg alt=\"GitHub release (latest SemVer)\" src=\"https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Fv\u002Frelease\u002Fproject-monai\u002Fmonailabel\">\u003C\u002Fa>\n\u003Cpre>pip install -U monailabel\u003C\u002Fpre>\n\nMONAI Label 支持以下启用了 **GPU\u002FCUDA** 的操作系统。有关详细说明，请参阅安装指南。\n- [Ubuntu](https:\u002F\u002Fmonai.readthedocs.io\u002Fprojects\u002Flabel\u002Fen\u002Flatest\u002Finstallation.html)\n- [Windows](https:\u002F\u002Fmonai.readthedocs.io\u002Fprojects\u002Flabel\u002Fen\u002Flatest\u002Finstallation.html#windows)\n\n### 
GPU 加速（可选依赖项）\n以下是可选的依赖项，可以帮助您加速 MONAI 中的一些基于 GPU 的变换。如果您使用 `projectmonai\u002Fmonailabel` Docker 镜像，则这些依赖项默认已启用。\n- [CUCIM](https:\u002F\u002Fpypi.org\u002Fproject\u002Fcucim\u002F)\n- [CUPY](https:\u002F\u002Fdocs.cupy.dev\u002Fen\u002Fstable\u002Finstall.html#installing-cupy)\n- [CUDA Toolkit](https:\u002F\u002Fdeveloper.nvidia.com\u002Fcuda-downloads)\n\n### 开发版本\n\n要安装 _**最新功能**_，您可以选择以下方式之一：\n\n\u003Cdetails>\n  \u003Csummary>\u003Cstrong>Git 检出（开发者模式）\u003C\u002Fstrong>\u003C\u002Fsummary>\n  \u003Ca href=\"https:\u002F\u002Fgithub.com\u002FProject-MONAI\u002FMONAILabel\">\u003Cimg alt=\"GitHub tag (latest SemVer)\" src=\"https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Fv\u002Ftag\u002FProject-MONAI\u002Fmonailabel\">\u003C\u002Fa>\n  \u003Cbr>\n  \u003Cpre>\n  git clone https:\u002F\u002Fgithub.com\u002FProject-MONAI\u002FMONAILabel\n  pip install -r MONAILabel\u002Frequirements.txt\n  export PATH=$PATH:`pwd`\u002FMONAILabel\u002Fmonailabel\u002Fscripts\u003C\u002Fpre>\n  \u003Cp>如果您使用 DICOM-Web + OHIF，则需要单独构建 OHIF 包。请参阅 [此处](https:\u002F\u002Fgithub.com\u002FProject-MONAI\u002FMONAILabel\u002Ftree\u002Fmain\u002Fplugins\u002Fohif#development-setup)。\u003C\u002Fp>\n\u003C\u002Fdetails>\n\n\u003Cdetails>\n  \u003Csummary>\u003Cstrong>Docker\u003C\u002Fstrong>\u003C\u002Fsummary>\n  \u003Cimg alt=\"Docker Image Version (latest semver)\" src=\"https:\u002F\u002Fimg.shields.io\u002Fdocker\u002Fv\u002Fprojectmonai\u002Fmonailabel\">\n  \u003Cbr>\n  \u003Cpre>docker run --gpus all --rm -ti --ipc=host --net=host projectmonai\u002Fmonailabel:latest bash\u003C\u002Fpre>\n\u003C\u002Fdetails>\n\n### SAM-2\n\n> 默认情况下，当 **_python >= 3.10_** 时，所有应用都会包含 [**SAM2**](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fsam2\u002F) 模型。\n>  - **sam_2d**：适用于给定切片\u002F2D图像上的任何器官或组织等。\n>  - **sam_3d**：支持在多个切片上进行 SAM2 传播（放射科\u002FMONAI-Bundle）。\n\n如果您使用 `pip install monailabel`，默认会使用 [SAM-2](https:\u002F\u002Fhuggingface.co\u002Ffacebook\u002Fsam2-hiera-large) 
模型。\n\u003Cbr\u002F>\n要使用 [SAM-2.1](https:\u002F\u002Fhuggingface.co\u002Ffacebook\u002Fsam2.1-hiera-large)，请采用以下任一方式：\n - 使用 MONAI Label 的 [Docker](https:\u002F\u002Fhub.docker.com\u002Fr\u002Fprojectmonai\u002Fmonailabel) 而不是 pip 包\n - 在开发模式下运行 MONAI Label（通过 git checkout）\n - 如果您已通过 pip 安装了 MONAI Label，请先卸载 **_sam2_** 包 `pip uninstall sam2`，然后运行 `pip install -r requirements.txt`，或者从其 [GitHub](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fsam2\u002Ftree\u002Fmain?tab=readme-ov-file#installation) 上安装最新的 **SAM-2**。\n\n## 第2步 MONAI Label 示例应用\n\n\u003Ch3>放射科\u003C\u002Fh3>\n\u003Cp>此应用提供了用于对放射科（3D）图像进行交互式和自动化分割的示例模型。其中包括使用最新深度学习模型（如 UNet、UNETR）对多个腹部器官进行自动分割的功能。交互式工具包括 DeepEdit 和 Deepgrow，可用于主动改进训练好的模型并进行部署。\u003C\u002Fp>\n\u003Cul>\n  \u003Cli>Deepedit\u003C\u002Fli>\n  \u003Cli>Deepgrow\u003C\u002Fli>\n  \u003Cli>分割\u003C\u002Fli>\n  \u003Cli>脾脏分割\u003C\u002Fli>\n  \u003Cli>多阶段椎体分割\u003C\u002Fli>\n\u003C\u002Ful>\n\n\u003Ch3>病理学\u003C\u002Fh3>\n\u003Cp>此应用提供了用于对病理学（WSI）图像进行交互式和自动化分割的示例模型。其中包括针对肿瘤细胞、炎症细胞、结缔组织\u002F软组织细胞、死亡细胞和上皮细胞的多标签核分割。该应用还提供交互式工具，如用于交互式核分割的 DeepEdits。\u003C\u002Fp>\n\u003Cul>\n  \u003Cli>Deepedit\u003C\u002Fli>\n  \u003Cli>Deepgrow\u003C\u002Fli>\n  \u003Cli>分割\u003C\u002Fli>\n  \u003Cli>脾脏分割\u003C\u002Fli>\n  \u003Cli>多阶段椎体分割\u003C\u002Fli>\n\u003C\u002Ful>\n\u003Ch3>视频\u003C\u002Fh3>\n\u003Cp>内窥镜应用使用户能够对内窥镜用途的 2D 图像使用交互式、自动化分割和分类模型。结合 CVAT，它将演示一个完全自动化的主动学习工作流，用于训练和微调模型。\u003C\u002Fp>\n\u003Cul>\n  \u003Cli>Deepedit\u003C\u002Fli>\n  \u003Cli>工具跟踪\u003C\u002Fli>\n  \u003Cli>体内\u002F体外\u003C\u002Fli>\n\u003C\u002Ful>\n\u003Ch3>捆绑包\u003C\u002Fh3>\n\u003Cp>捆绑包应用允许用户为任何目标解剖结构使用自定义模型进行推理、训练或前后处理。MONAI Label 集成捆绑包应用的规范链接到归档的模型库，用于自定义标注（例如，第三方转换器模型用于标注肾皮质、髓质和肾盂肾盏系统）。还包括交互式工具，如 DeepEdits。\u003C\u002Fp>\n\n有关支持的全部捆绑包列表，请参阅 \u003Ca href=\"https:\u002F\u002Fgithub.com\u002FProject-MONAI\u002FMONAILabel\u002Ftree\u002Fmain\u002Fsample-apps\u002Fmonaibundle\">MONAI Label 捆绑包 README\u003C\u002Fa>。\n\n## 第3步 MONAI Label 支持的查看器\n\n### 放射科\n#### 3D 
Slicer\n3D Slicer 是一个免费且开源的平台，用于分析、可视化和理解医学影像数据。在 MONAI Label 中，3D Slicer 已被广泛测试用于放射科研究和算法、开发及集成。\n\n[3D Slicer 设置](https:\u002F\u002Fgithub.com\u002FProject-MONAI\u002FMONAILabel\u002Ftree\u002Fmain\u002Fplugins\u002Fslicer)\n\n#### MITK\n医学影像交互工具包（MITK）是一个开源、独立的医学影像平台。MONAI Label 已部分集成到 MITK Workbench 中，这是一款功能强大且免费的应用程序，可用于查看、处理和分割医学影像。MONAI Label 工具在 MITK 中主要针对放射科和捆绑包应用进行推理测试，支持自动和基于点击的交互式模型。\n\n[MITK 设置](https:\u002F\u002Fgithub.com\u002FProject-MONAI\u002FMONAILabel\u002Ftree\u002Fmain\u002Fplugins\u002Fmitk)\n\n#### OHIF\n开放健康影像基金会（OHIF）查看器是一个开源、基于 Web 的医学影像平台。其目标是提供构建复杂影像应用程序的核心框架。\n\n[OHIF 设置](https:\u002F\u002Fgithub.com\u002FProject-MONAI\u002FMONAILabel\u002Ftree\u002Fmain\u002Fplugins\u002Fohif)\n\n### 病理学\n#### QuPath\n定量病理学与生物图像分析（QuPath）是一个开放、强大、灵活且可扩展的软件平台，用于生物图像分析。\n\n[QuPath 设置](https:\u002F\u002Fgithub.com\u002FProject-MONAI\u002FMONAILabel\u002Ftree\u002Fmain\u002Fplugins\u002Fqupath)\n\n#### 数字玻片档案\n数字玻片档案（DSA）是一个平台，提供存储、管理、可视化和标注大型影像数据集的能力。\n[数字玻片档案设置](https:\u002F\u002Fgithub.com\u002FProject-MONAI\u002FMONAILabel\u002Ftree\u002Fmain\u002Fplugins\u002Fdsa)\n\n### 视频\n#### CVAT\nCVAT 是一款用于计算机视觉的交互式视频和图像标注工具。\n[CVAT 设置](https:\u002F\u002Fgithub.com\u002FProject-MONAI\u002FMONAILabel\u002Ftree\u002Fmain\u002Fplugins\u002Fcvat)\n\n## 第4步 数据准备\n在数据准备方面，您有两种选择：可以使用本地数据存储，也可以使用任何支持 DICOMWeb 的影像存档工具。\n\n#### 放射科应用中单模态图像的本地数据存储\n对于本地文件存档中的数据存储，MONAI Label 使用一套固定的文件夹结构。请将您的图像数据放入文件夹中，如果有分割文件，则创建名为 `labels\u002Ffinal` 的子文件夹并将它们放入其中。示例如下：\n```\ndataset\n│-- spleen_10.nii.gz\n│-- spleen_11.nii.gz\n│   ...\n└───labels\n    └─── final\n        │-- spleen_10.nii.gz\n        │-- spleen_11.nii.gz\n        │   ...\n```\n\n如果您没有标签，只需将图像\u002F体积放置在 dataset 文件夹中即可。\n\n#### DICOMWeb 支持\n如果您使用的查看器支持 DICOMweb 标准，您可以使用它代替本地数据存储来向 MONAI Label 提供图像。启动 MONAI Label 服务器时，我们需要在 studies 参数中指定 DICOMweb 服务的 URL（以及可选的需要身份验证的 DICOM 服务器的用户名和密码）。以下是使用 DICOMweb URL 启动 MONAI Label 服务器的示例：\n\n```\nmonailabel start_server --app apps\u002Fradiology --studies 
http:\u002F\u002F127.0.0.1:8042\u002Fdicom-web --conf models segmentation\n```\n\n## 第5步 启动 MONAI Label 服务器并开始标注\n现在您已经准备好使用 MONAI Label 了。配置好查看器、应用和数据存储后，您可以使用相关参数启动 MONAI Label 服务器。为了简单起见，下面是一个下载放射科示例应用和数据集，然后启动 MONAI Label 服务器的示例：\n\n```\nmonailabel apps --download --name radiology --output apps\nmonailabel datasets --download --name Task09_Spleen --output datasets\nmonailabel start_server --app apps\u002Fradiology --studies datasets\u002FTask09_Spleen\u002FimagesTr --conf models segmentation\n```\n\n**注意：** 如果您想处理不同于默认建议的标签，请按照此处的说明更改配置文件：https:\u002F\u002Fyoutu.be\u002FKtPE8m0LvcQ?t=622\n\n## MONAI Label 教程\n\n**内容**\n\n- **放射科应用**：\n  - 查看器：[3D Slicer](https:\u002F\u002Fwww.slicer.org\u002F) | 数据存储：本地 | 任务：分割\n    - [MONAILabel：HelloWorld](https:\u002F\u002Fgithub.com\u002FProject-MONAI\u002Ftutorials\u002Fblob\u002Fmain\u002Fmonailabel\u002Fmonailabel_HelloWorld_radiology_3dslicer.ipynb)：使用 3D Slicer 设置进行脾脏分割。\n  - 查看器：[OHIF](https:\u002F\u002Fohif.org\u002F) | 数据存储：本地 | 任务：分割\n    - [MONAILabel：基于 Web 的 OHIF 查看器](https:\u002F\u002Fgithub.com\u002FProject-MONAI\u002Ftutorials\u002Fblob\u002Fmain\u002Fmonailabel\u002Fmonailabel_radiology_spleen_segmentation_OHIF.ipynb)：使用 OHIF 设置进行脾脏分割。\n- **MONAIBUNDLE 应用**：\n  - 查看器：[3D Slicer](https:\u002F\u002Fwww.slicer.org\u002F) | 数据存储：本地 | 任务：分割\n    - [MONAILabel：使用 3D Slicer 进行胰腺肿瘤分割](https:\u002F\u002Fgithub.com\u002FProject-MONAI\u002Ftutorials\u002Fblob\u002Fmain\u002Fmonailabel\u002Fmonailabel_bring_your_own_data.ipynb)：在 3D Slicer 中使用 CT 扫描进行胰腺和肿瘤分割。\n    - [MONAILabel：使用 3D Slicer 进行多器官分割](https:\u002F\u002Fgithub.com\u002FProject-MONAI\u002Ftutorials\u002Fblob\u002Fmain\u002Fmonailabel\u002Fmonailabel_monaibundle_3dslicer_multiorgan_seg.ipynb)：在 3D Slicer 中使用 CT 扫描进行多器官分割。\n    - [MONAILabel：使用 3D Slicer 进行全身 CT 分割](https:\u002F\u002Fgithub.com\u002FProject-MONAI\u002Ftutorials\u002Fblob\u002Fmain\u002Fmonailabel\u002Fmonailabel_wholebody_totalSegmentator_3dslicer.ipynb)：使用 CT 扫描进行全身（104 个结构）分割。\n    - 
[MONAILabel：使用 3D Slicer 检测肺结节 CT](https:\u002F\u002Fgithub.com\u002FProject-MONAI\u002Ftutorials\u002Fblob\u002Fmain\u002Fmonailabel\u002Fmonailabel_monaibundle_3dslicer_lung_nodule_detection.ipynb)：使用 CT 扫描执行肺结节检测任务。\n- **病理学应用**：\n  - 查看器：[QuPath](https:\u002F\u002Fqupath.github.io\u002F) | 数据存储：本地 | 任务：分割\n    - [MONAILabel：使用 QuPath 进行细胞核分割](https:\u002F\u002Fgithub.com\u002FProject-MONAI\u002Ftutorials\u002Fblob\u002Fmain\u002Fmonailabel\u002Fmonailabel_pathology_nuclei_segmentation_QuPath.ipynb)：使用 QuPath 设置和 Nuclick 模型进行细胞核分割。\n- **内窥镜应用**：\n  - 查看器：[CVAT](https:\u002F\u002Fgithub.com\u002Fopencv\u002Fcvat) | 数据存储：本地 | 任务：分割\n    - [MONAILabel：使用 CVAT 进行器械跟踪](https:\u002F\u002Fgithub.com\u002FProject-MONAI\u002Ftutorials\u002Fblob\u002Fmain\u002Fmonailabel\u002Fmonailabel_endoscopy_cvat_tooltracking.ipynb)：使用 CVAT\u002FNuclio 设置进行手术器械分割。\n\n## 引用\n\n如果您在研究中使用了 MONAI Label，请使用以下引用：\n\n```bash\n@article{DiazPinto2022monailabel,\n   author = {Diaz-Pinto, Andres and Alle, Sachidanand and Ihsani, Alvin and Asad, Muhammad and\n            Nath, Vishwesh and P{\\'e}rez-Garc{\\'\\i}a, Fernando and Mehta, Pritesh and\n            Li, Wenqi and Roth, Holger R. and Vercauteren, Tom and Xu, Daguang and\n            Dogra, Prerna and Ourselin, Sebastien and Feng, Andrew and Cardoso, M. 
Jorge},\n    title = {{MONAI Label: 用于 3D 医学图像 AI 辅助交互式标注的框架}},\n  journal = {arXiv e-prints},\n     year = 2022,\n     url  = {https:\u002F\u002Farxiv.org\u002Fpdf\u002F2203.12362.pdf}\n}\n\n@inproceedings{DiazPinto2022DeepEdit,\n      title={{DeepEdit：用于 3D 医学图像交互式分割的深度可编辑学习}},\n      author={Diaz-Pinto, Andres and Mehta, Pritesh and Alle, Sachidanand and Asad, Muhammad and Brown, Richard and Nath, Vishwesh and Ihsani, Alvin and Antonelli, Michela and Palkovics, Daniel and Pinter, Csaba and others},\n      booktitle={MICCAI 关于数据增强、标注和缺陷的研讨会},\n      pages={11--21},\n      year={2022},\n      organization={Springer}\n}\n ```\n\n可选引用：如果您使用了 MONAI Label 中的主动学习功能，请支持我们：\n\n```bash\n@article{nath2020diminishing,\n  title={训练集中的不确定性逐渐降低：医学图像分割的主动学习},\n  author={Nath, Vishwesh and Yang, Dong and Landman, Bennett A and Xu, Daguang and Roth, Holger R},\n  journal={IEEE 医学成像汇刊},\n  volume={40},\n  number={10},\n  pages={2534--2547},\n  year={2020},\n  publisher={IEEE}\n}\n```\n\n## 贡献\n\n有关如何为 MONAI Label 做出贡献的指导，请参阅\n[贡献指南](https:\u002F\u002Fgithub.com\u002FProject-MONAI\u002FMONAILabel\u002Fblob\u002Fmain\u002FCONTRIBUTING.md)。\n\n## 社区\n\n在 Twitter 上加入讨论 [@ProjectMONAI](https:\u002F\u002Ftwitter.com\u002FProjectMONAI)，或加入我们的\n[Slack 频道](https:\u002F\u002Fjoin.slack.com\u002Ft\u002Fprojectmonai\u002Fshared_invite\u002Fzt-3hucgm02q-i8Bn9XofDZs2UGOH4jUl4w)。\n\n您还可以在\n[MONAI Label 的 GitHub 讨论页](https:\u002F\u002Fgithub.com\u002FProject-MONAI\u002FMONAILabel\u002Fdiscussions)上提问和回答问题。\n\n## 其他资源\n\n- 官网：https:\u002F\u002Fproject-monai.github.io\u002F\n- API 文档：https:\u002F\u002Fmonai.readthedocs.io\u002Fprojects\u002Flabel\n- 代码：https:\u002F\u002Fgithub.com\u002FProject-MONAI\u002FMONAILabel\n- 项目跟踪器：https:\u002F\u002Fgithub.com\u002FProject-MONAI\u002FMONAILabel\u002Fprojects\n- 问题跟踪器：https:\u002F\u002Fgithub.com\u002FProject-MONAI\u002FMONAILabel\u002Fissues\n- 维基：https:\u002F\u002Fgithub.com\u002FProject-MONAI\u002FMONAILabel\u002Fwiki\n- 
测试状态：https:\u002F\u002Fgithub.com\u002FProject-MONAI\u002FMONAILabel\u002Factions\n- PyPI 包：https:\u002F\u002Fpypi.org\u002Fproject\u002Fmonailabel\u002F\n- Docker Hub：https:\u002F\u002Fhub.docker.com\u002Fr\u002Fprojectmonai\u002Fmonailabel\n- 客户端 API：https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=mPMYJyzSmyo\n- 演示视频：https:\u002F\u002Fwww.youtube.com\u002Fc\u002FProjectMONAI","# MONAI Label 快速上手指南\n\nMONAI Label 是一个智能开源的医学图像标注与学习工具，旨在通过 AI 辅助减少标注时间。它采用服务器 - 客户端架构，支持放射科、病理科及内窥镜等多种场景的交互式标注，并支持主动学习工作流。\n\n## 1. 环境准备\n\n### 系统要求\n- **操作系统**: Ubuntu 或 Windows (推荐 Linux 以获得最佳 GPU 支持)\n- **Python 版本**: Python >= 3.8 (若需使用默认的 SAM2 模型，建议 Python >= 3.10)\n- **硬件加速**: 推荐使用配备 CUDA 支持的 NVIDIA GPU\n\n### 前置依赖\n确保已安装以下基础组件：\n- **CUDA Toolkit**: 用于 GPU 加速 (根据显卡驱动版本安装对应版本)\n- **可选加速库**: \n  - `CUCIM` (NVIDIA Clara 加速库)\n  - `CuPy` (GPU 加速数组库)\n  \n> **注意**: 若使用官方 Docker 镜像 (`projectmonai\u002Fmonailabel`)，上述 GPU 依赖默认已包含。\n\n## 2. 安装步骤\n\n### 方式一：通过 PyPI 安装（推荐稳定版）\n直接使用 pip 安装最新稳定版本：\n\n```bash\npip install -U monailabel\n```\n\n> **国内加速建议**: 如遇下载缓慢，可使用清华或阿里镜像源：\n> ```bash\n> pip install -U monailabel -i https:\u002F\u002Fpypi.tuna.tsinghua.edu.cn\u002Fsimple\n> ```\n\n### 方式二：通过 Docker 运行（最简便，含所有依赖）\n无需配置本地环境，直接拉取包含完整依赖的镜像：\n\n```bash\ndocker run --gpus all --rm -ti --ipc=host --net=host projectmonai\u002Fmonailabel:latest bash\n```\n\n### 方式三：开发模式安装（获取最新特性）\n如需体验最新功能或使用 SAM-2.1 模型：\n\n```bash\ngit clone https:\u002F\u002Fgithub.com\u002FProject-MONAI\u002FMONAILabel\ncd MONAILabel\npip install -r requirements.txt\nexport PATH=$PATH:`pwd`\u002Fmonailabel\u002Fscripts\n```\n\n## 3. 
基本使用\n\nMONAI Label 的工作流程分为五步：安装 -> 准备应用 -> 准备查看器 -> 准备数据 -> 启动服务。\n\n### 第一步：获取示例应用\nMONAI Label 提供了针对放射科（Radiology）、病理科（Pathology）和视频（Video）的预训练示例应用。\n您可以从 GitHub 下载 `sample-apps`，或使用内置命令初始化。以下以放射科分割为例，假设您已将示例应用下载到本地目录 `.\u002Fapps`。\n\n### 第二步：准备数据\n创建一个文件夹存放您的医学图像数据（支持 NIfTI, NRRD, DICOM 等格式）。\n```bash\nmkdir .\u002Fdata\n# 将您的医学图像文件放入 .\u002Fdata 目录\n```\n\n### 第三步：启动 MONAI Label 服务器\n使用命令行启动服务器，指定应用路径和数据路径。以下命令启动一个基于放射科分割示例应用的服务器：\n\n```bash\nmonailabel start_server \\\n  --app .\u002Fapps\u002Fradiology \\\n  --studies .\u002Fdata \\\n  --conf models segment_spleen \\\n  --host 0.0.0.0 \\\n  --port 8000\n```\n*参数说明*:\n- `--app`: 选择的应用目录（如 radiology, pathology 等）\n- `--studies`: 待标注的数据目录\n- `--conf models`: 指定初始加载的模型名称\n\n### 第四步：连接客户端进行标注\n服务器启动后，您需要安装支持的客户端查看器插件来连接服务器进行交互式标注。\n\n**推荐客户端**:\n- **3D Slicer** (放射科首选): 安装 [MONAI Label Plugin](https:\u002F\u002Fgithub.com\u002FProject-MONAI\u002FMONAILabel\u002Ftree\u002Fmain\u002Fplugins\u002Fslicer) 插件。\n- **OHIF Viewer** (Web 端): 适用于放射科 DICOM 数据。\n- **QuPath \u002F CVAT**: 适用于病理科或内窥镜视频。\n\n**操作流程**:\n1. 打开 3D Slicer 并启用 MONAI Label 插件。\n2. 在插件设置中输入服务器地址（例如：`http:\u002F\u002Flocalhost:8000`）。\n3. 点击连接，加载数据集中的图像。\n4. 使用 AI 辅助工具（如 DeepEdit, Segment Anything）进行点击或框选，模型将实时返回分割结果。\n5. 
修正标注并保存，系统将利用新标注数据持续优化模型（主动学习）。\n\n---\n*更多详细教程和特定任务的工作流，请参考 [MONAI Label Tutorials](https:\u002F\u002Fgithub.com\u002FProject-MONAI\u002Ftutorials\u002Ftree\u002Fmain\u002Fmonailabel)。*","某三甲医院放射科团队正急需构建一个针对脑部 MRI 肿瘤的专用 AI 分割模型，以辅助医生进行术前规划，但面临海量历史影像数据缺乏标注的困境。\n\n### 没有 MONAILabel 时\n- 放射科医生需手动逐层勾画肿瘤轮廓，标注一张三维 MRI 耗时超过 45 分钟，人力成本极高且难以持续。\n- 初始训练数据严重不足，导致早期训练的模型精度低下，无法在实际临床中提供有效参考。\n- 算法工程师与医生协作割裂，模型迭代周期长，医生无法实时反馈修正意见，只能等待下一版本发布。\n- 标注标准难以统一，不同年资医生的勾画习惯差异大，导致数据集噪声高，影响模型泛化能力。\n\n### 使用 MONAILabel 后\n- 医生只需在 3D Slicer 等客户端点击几下提供稀疏提示，MONAILabel 服务端即刻返回高精度自动分割结果，单例标注时间缩短至 3 分钟内。\n- 系统具备“人在回路”的主动学习能力，随着医生对自动结果的微调修正，后台模型实时在线更新，越用越准。\n- 实现了医工无缝协作，医生在标注过程中直接验证模型效果，即时纠正错误，模型当即可学习新特征并优化。\n- 通过智能预标注大幅降低了人为操作差异，确保了大规模数据集的一致性与高质量，加速了模型收敛。\n\nMONAILabel 通过将被动的人工标注转变为高效的人机交互式学习，让医疗 AI 模型的冷启动与持续进化变得触手可及。","https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FProject-MONAI_MONAILabel_17d9be2c.png","Project-MONAI","Project MONAI","https:\u002F\u002Foss.gittoolsai.com\u002Favatars\u002FProject-MONAI_0276102d.png","AI Toolkit for Healthcare Imaging",null,"ProjectMONAI","https:\u002F\u002Fproject-monai.github.io\u002F","https:\u002F\u002Fgithub.com\u002FProject-MONAI",[81,85,89,93,97,101,105,109,113,117],{"name":82,"color":83,"percentage":84},"Python","#3572A5",79.9,{"name":86,"color":87,"percentage":88},"JavaScript","#f1e05a",8.7,{"name":90,"color":91,"percentage":92},"TypeScript","#3178c6",4.9,{"name":94,"color":95,"percentage":96},"Java","#b07219",3.6,{"name":98,"color":99,"percentage":100},"Shell","#89e051",1.2,{"name":102,"color":103,"percentage":104},"Stylus","#ff6347",0.6,{"name":106,"color":107,"percentage":108},"CSS","#663399",0.5,{"name":110,"color":111,"percentage":112},"CMake","#DA3434",0.4,{"name":114,"color":115,"percentage":116},"Dockerfile","#384d54",0.1,{"name":118,"color":119,"percentage":120},"Batchfile","#C1F12E",0,833,263,"2026-04-13T03:01:10","Apache-2.0","Linux (Ubuntu), Windows","可选但推荐用于加速。支持单卡或多卡 GPU。需安装 CUDA Toolkit。具体显存大小和型号未说明，但支持 SAM2 
等大模型通常建议较高显存。","未说明",{"notes":129,"python":130,"dependencies":131},"1. 这是一个服务器 - 客户端系统，服务端可本地运行（支持单\u002F多 GPU），客户端可通过 3DSlicer、OHIF、QuPath 或 CVAT 等工具连接。\n2. 若使用 pip 安装且 Python 版本大于等于 3.10，默认集成 SAM2 模型；若需使用更新的 SAM2.1，建议使用 Docker、开发模式或手动重新安装 sam2 包。\n3. 支持多种医学影像模态（CT, MRI, 病理切片，内窥镜视频等）及格式（NIfTI, NRRD, DICOM, TIFF, SVS 等）。\n4. 可通过 Docker 一键部署（projectmonai\u002Fmonailabel），Docker 镜像默认已启用 GPU 加速相关依赖。",">=3.10 (为了默认包含 SAM2 模型)",[132,133,134,135,136,137,138,139,140,141],"monailabel","MONAI","CUCIM (可选)","CuPy (可选)","CUDA Toolkit","sam2 (Python >=3.10 时默认包含)","3DSlicer (客户端插件)","OHIF (客户端插件)","QuPath (客户端插件)","CVAT (客户端插件)",[14,143,15,52],"其他",[145,146,147,148,149,150,151,152,153],"3d-slicer-extension","active-learning","pytorch","deep-learning","monai","segmentation","medical-imaging","machine-learning","3d","2026-03-27T02:49:30.150509","2026-04-16T08:12:00.095667",[],[158,163,168,173,178,183,188,193,198,203,208,213,218,223,228,233,238],{"id":159,"version":160,"summary_zh":161,"released_at":162},280808,"0.8.5","#### 新增\n- 将 [SAM2](https:\u002F\u002Fgithub.com\u002FProject-MONAI\u002FMONAILabel?tab=readme-ov-file#sam-2) 作为所有放射学\u002F病理学\u002F内镜检查用例的基础模型（仅用于推理）。\n  - 更新了 3D Slicer 插件，以支持 SAM2 的 ROI 提示。\n![1732541491294](https:\u002F\u002Fgithub.com\u002Fuser-attachments\u002Fassets\u002Fdc3d1651-1ad0-45d0-a407-49c4b7a41b85)\n\n- [OHIF V3 3.9+](https:\u002F\u002Fgithub.com\u002FProject-MONAI\u002FMONAILabel\u002Ftree\u002Fmain\u002Fplugins\u002Fohifv3)\n  - 重构了插件，以支持最新的 OHIF V3 3.9+ 版本。\n  - 自动分割功能。\n  - 点提示（Deepgrow\u002FDeepEdit\u002FSAM2）模型。\n  - 类别提示（自动分割中选择的部分类别）。\n![1732541491894](https:\u002F\u002Fgithub.com\u002Fuser-attachments\u002Fassets\u002F58d09d7f-bdac-457c-ad96-24f474eee42e)\n\n\n- [CVAT](https:\u002F\u002Fgithub.com\u002FProject-MONAI\u002FMONAILabel\u002Ftree\u002Fmain\u002Fplugins\u002Fcvat)\n  - 重构了插件，以支持最新的 CVAT 版本。\n  - 通过 MONAI Label 提供 SAM2 
模型。\n![unnamed](https:\u002F\u002Fgithub.com\u002Fuser-attachments\u002Fassets\u002Fb5e57d14-106f-4de3-bbab-2c38bf8fe98c)\n\n\n- 支持 Python 3.12\n\n#### 变更\n- CVAT 插件：MONAI Label 将负责推理逻辑，而 nuclio 函数仅作为封装层。\n- Docker 基础镜像更新为 ubuntu:22.04（以节省空间）。\n- 文档修复。\n\n#### 移除\n- 移除了对 Python 3.8 的支持","2023-10-18T15:08:49",{"id":164,"version":165,"summary_zh":166,"released_at":167},280809,"0.8.4","### **新功能：**\n\n- **添加 VISTA2D 依赖项**：\n  - 添加了 VISTA2D 的依赖项，确保在使用此功能时具有更强的兼容性和性能。\n\n- **CellProfiler 应用程序及插件**：\n  - 集成了 CellProfiler 应用程序及其配套插件，支持全新的图像分析工作流。\n\n### **改进：**\n- **发布检查与更新**：\n  - 为 0.8.4 版本的发布所做的更新和改进。\n\n- **允许使用更新版本的依赖项**：\n  - 现已支持关键依赖项的新版本，从而提升在更广泛环境中的兼容性。\n\n- **将已弃用的 IgniteMetric 替换为 IgniteMetricHandler**：\n  - 已将废弃的 `IgniteMetric` 替换为 `IgniteMetricHandler`，以跟上库的更新并避免潜在问题。\n\n- **MITK 平台支持**：\n  - MITK 已正式被添加为受支持的平台，进一步扩展了该工具在医学图像处理领域的适用性。\n\n### **错误修复：**\n- **修复非标准英语双冒号问题**：\n  - 解决了非标准英语双冒号的问题，确保文档和消息传递的一致性得到改善。\n\n- **修复 Nuclio Docker 容器目录相关错误**：\n  - 修复了 Nuclio Docker 容器中可能导致目录相关错误的缺陷，从而确保在 Docker 环境中更顺畅地运行。\n\n### **依赖项更新：**\n- **升级 `requests` 库**：\n  - 将 `requests` 库从 `2.31.0` 升级至 `2.32.2`，以解决安全漏洞和已知问题。\n\n- **升级 `urllib3` 库**：\n  - 将 `urllib3` 库从 `2.2.1` 升级至 `2.2.2`，以修复已知问题并保持安全合规。\n\n- **更新 pre-commit 钩子**：\n  - 将 `pre-commit` 钩子更新至最新版本，确保项目代码的 linting 和格式化一致性。\n\n### **其他变更：**\n- **添加 OpenCV 安装测试用例**：\n  - 引入了一个测试用例，以确保在安装 OpenCV 时两种场景均能正常启用，从而提高测试覆盖率。\n\n- **修复 cv2 依赖项问题**：\n  - 解决了与 `cv2` 库相关的冲突，以保证在不同环境中流畅运行。","2024-10-17T22:32:00",{"id":169,"version":170,"summary_zh":171,"released_at":172},280810,"0.8.3","## 变更内容\n* 添加无涂鸦模型\n* 标记 MONAILabel 3D Slicer 扩展中的可翻译字符串\n* 升级 actions\u002Fsetup-node\n* 修复 #1601 —— 在进行推理时出现的服务器 URL 缺失错误\n* 更新 pydantic 版本\n* 升级 github\u002Fcodeql-action\n* 升级 actions\u002Fupload-artifact\n* 升级 actions\u002Fsetup-python\n* 更新版本，移除 requests 和 requests-toolbelt 的版本限制\n* 向 xnat 数据存储添加保存标签功能\n* 升级 actions\u002Fcache\n* 升级 codecov\u002Fcodecov-action\n* 更新软件包依赖\n* 修复响应编码问题，明确使用 UTF-8 对 HTTP 响应数据进行编码\n* 更新 fastapi 版本\n* 更新 GitHub 
MonAI 模型库配置\n* 更新 blossom-ci 列表\n* 降低 filelock 版本\n* 修复 cv2\n* jwt 包升级至 0.8.2\n* 更新 bundle 版本以避免冲突\n* 继续更新更多 bundle 版本\n\n","2024-07-23T17:48:05",{"id":174,"version":175,"summary_zh":176,"released_at":177},280811,"0.7.0","#### 新增\r\n- 多用户身份认证及 [KeyCloak 集成](https:\u002F\u002Fgithub.com\u002FProject-MONAI\u002FMONAILabel\u002Fwiki\u002FIntegration-with-Keycloak)\r\n  - 带有 KeyCloak 集成的 MONAI Label API，用于用户身份认证和基于角色的访问控制。\r\n  - 支持通过 MONAILabel + KeyCloak 登录 3D Slicer。\r\n\r\n- [全身 CT 分割](https:\u002F\u002Fgithub.com\u002FProject-MONAI\u002Fmodel-zoo\u002Ftree\u002Fdev\u002Fmodels\u002FwholeBody_ct_segmentation)\r\n   - 4 秒内即可分割 104 个解剖结构！\r\n\r\n- MONAI Bundle 支持改进\r\n   - 支持可视化 Bundle 配置选项。\r\n   - 提升了对 MONAI Zoo 的访问体验。\r\n   - 支持从 NGC 下载 Bundle。\r\n   - 改进了 Bundle 的多 GPU 训练功能。\r\n\r\n- 新的 MONAI Label 教程系列\r\n   - 在 [notebooks](https:\u002F\u002Fgithub.com\u002FProject-MONAI\u002Ftutorials\u002Ftree\u002Fmain\u002Fmonailabel) 中提供了快速入门教程和安装说明。\r\n\r\n- 文档增强\r\n\r\n#### 变更\r\n- 更新了预训练模型：\r\n  - 分割模型\r\n  - deepedit 模型\r\n\r\n- CI\u002FCD 和测试\r\n  - 启用了 blossom CI\u002FCD 和预合并流水线。\r\n  - 单元测试覆盖率提升至 80%。\r\n\r\n#### 移除\r\n","2023-06-10T01:10:23",{"id":179,"version":180,"summary_zh":181,"released_at":182},280812,"0.6.0","#### 新增\n\n- 病理学模型\n  - [NuClick 标注模型](https:\u002F\u002Fgithub.com\u002FProject-MONAI\u002Fmodel-zoo\u002Ftree\u002Fdev\u002Fmodels\u002Fpathology_nuclick_annotation)\n  - [细胞核分类模型](https:\u002F\u002Fgithub.com\u002FProject-MONAI\u002Fmodel-zoo\u002Ftree\u002Fdev\u002Fmodels\u002Fpathology_nuclei_classification)\n  - [HoverNet 模型](https:\u002F\u002Fgithub.com\u002FProject-MONAI\u002Fmodel-zoo\u002Ftree\u002Fdev\u002Fmodels\u002Fpathology_nuclei_segmentation_classification)\n- QuPath 扩展：[0.3.1](https:\u002F\u002Fgithub.com\u002FProject-MONAI\u002FMONAILabel\u002Freleases\u002Fdownload\u002F0.6.0\u002Fqupath-extension-monailabel-0.3.1.jar) | [演示](https:\u002F\u002Fdrive.google.com\u002Ffile\u002Fd\u002F13hQUMVUDnsHcirLSbEW1umQMFPX1I4Wp\u002Fview)\n  - 
用户体验优化\n  - MONAI Label 特有的工具栏操作\n  - 支持拖放 ROI 来运行自动分割模型\n  - 单击即可运行交互式模型（NuClick）\n  - 支持主动学习中的下一个样本\u002FROI\n- 实验管理\n- 3D Slicer：在放射科用例的 MONAI Bundle 应用中支持检测模型\n- 批量推理的多 GPU\u002F多线程支持\n\n#### 变更\n- 支持 MONAI Bundle 应用中的最新版本软件包\n- 升级椎体处理流程\n- MONAI 版本 ≥ 1.1.0\n\n#### 移除\n- 病理学中用于细胞核分割的 DeepEdit 模型\n","2022-12-20T05:20:49",{"id":184,"version":185,"summary_zh":186,"released_at":187},280813,"pretrained","> 这不是发布页面\n\n请从这里下载用于示例应用的预训练模型权重（仅用于**测试\u002F演示**目的）。\n","2022-11-03T20:01:11",{"id":189,"version":190,"summary_zh":191,"released_at":192},280814,"0.5.2","#### 新增\n\n- Bundle 支持（内窥镜示例应用）\n  - [器械分割模型](https:\u002F\u002Fgithub.com\u002FProject-MONAI\u002Fmodel-zoo\u002Ftree\u002Fdev\u002Fmodels\u002Fendoscopic_tool_segmentation)\n  - [体内 vs 体外分类模型](https:\u002F\u002Fgithub.com\u002FProject-MONAI\u002Fmodel-zoo\u002Ftree\u002Fdev\u002Fmodels\u002Fendoscopic_inbody_classification)\n- 用于下载最新训练模型和统计信息的 REST API\n- DSA 中的交互式 NuClick 分割 | [演示](https:\u002F\u002Fmedicine.ai.uky.edu\u002Fwp-content\u002Fuploads\u002F2022\u002F10\u002Finteractive_cell_labeling_via_nucklick_in_dsa.mp4)\n- 对 MONAI [1.0.1](https:\u002F\u002Fgithub.com\u002FProject-MONAI\u002FMONAI\u002Freleases\u002Ftag\u002F1.0.1) 的支持\n\n#### 变更\n\n- 增加了针对 OHIF 使用场景禁用 DICOM 转 NIFTI 的选项\n- 修复了 GCP 上的代理 URL 问题，并增加了对 DICOMWeb 的支持\n- 3D Slicer UI 改进\n  - [优化选项\u002F配置窗口大小](https:\u002F\u002Fuser-images.githubusercontent.com\u002F7339051\u002F194677198-d9deb3f2-a728-453a-b68d-a1b21afa6bee.png)\n  - 修复下载标签功能，使其能够获取原始标签\n- 修复了 `segmentation_nuclei` 模型（病理）多标签输出的问题\n- MONAI Bundle 应用改进\n  - 支持本地 Bundle（预先下载）\n  - 支持自定义脚本\n\n#### 移除\n\n- 移除在 3D Slicer 中运行所有训练任务的选项（已弃用）","2022-10-24T20:49:16",{"id":194,"version":195,"summary_zh":196,"released_at":197},280815,"0.5.1","#### 新增\n\n- 内窥镜示例应用\n  - 工具追踪分割模型 | [演示](https:\u002F\u002Fdrive.google.com\u002Ffile\u002Fd\u002F190rqvSMQULUlzS3XDAZVfPma1_6znPsd\u002Fview?usp=sharing)\n  - 体内 vs 体外分类模型 | 
[演示](https:\u002F\u002Fdrive.google.com\u002Ffile\u002Fd\u002F1Ii3_mYvHVykC-UsytdFvvPdfG1oIqz8u\u002Fview?usp=sharing)\n  - DeepEdit 交互模型\n  - 集成 CVAT，支持自动化工作流以运行主动学习迭代\n- 放射科应用中的多阶段椎体分割\n\n#### 变更\n\n- 提升放射科应用性能\n  - 支持交互模型重复推理时的预处理缓存\n  - 支持 DICOM Web API 响应缓存\n  - 优化预处理流程，利用 GPU 提高最大吞吐量\n  - 修复 WADO\u002FQIDO 的 DICOM 代理问题\n- 对认知不确定性（v2）主动学习策略的改进\n- 支持 MONAI 1.0.0 及以上版本\n- 修复 Scribbles，使其支持 MetaTensor\n\n#### 移除\n- 基于 TTA 的主动学习策略已弃用","2022-09-16T20:28:26",{"id":199,"version":200,"summary_zh":201,"released_at":202},280816,"0.4.2","#### 新增\n\n- [MONAI Bundle 应用](sample-apps\u002Fmonaibundle) - 从 [MONAI Zoo](https:\u002F\u002Fgithub.com\u002FProject-MONAI\u002Fmodel-zoo) 拉取兼容的 Bundle\n  - 脾脏 CT 分割\n  - 脾脏 DeepEdit 标注\n  - 其他\n\n#### 变更\n\n- 支持 MONAI [0.9.1](https:\u002F\u002Fgithub.com\u002FProject-MONAI\u002FMONAI\u002Freleases\u002Ftag\u002F0.9.1)\n\n\n","2022-07-25T19:18:32",{"id":204,"version":205,"summary_zh":206,"released_at":207},280817,"0.4.1","#### 已更改\r\n\r\n- 将 MONAI 依赖版本修复为 0.9.0\r\n\r\n","2022-07-06T03:40:35",{"id":209,"version":210,"summary_zh":211,"released_at":212},280818,"0.4.0","#### Added\r\n\r\n- Pathology Sample App\r\n  - DeepEdit, Segmentation, [NuClick](https:\u002F\u002Farxiv.org\u002Fabs\u002F2005.14511) models\r\n  - Digital Slide Archive plugin | [Demo](https:\u002F\u002Fdrive.google.com\u002Ffile\u002Fd\u002F16HnQY81kAVEbD9TvhAp_hlLnfgHQgX8I\u002Fview)\r\n  - QuPath plugin | [Demo](https:\u002F\u002Fdrive.google.com\u002Ffile\u002Fd\u002F18mQ5DXuThp9YxXcbR0f19yS2klhmZozG\u002Fview)\r\n- Histogram-based GraphCut and Gaussian Mixture Model (GMM) based methods for scribbles\r\n\r\n#### Changed\r\n\r\n- Support for MONAI (supports 0.9.0 and above)\r\n- Radiology Sample App (Aggregation of previous radiology models)\r\n  - DeepEdit, Deepgrow, Segmentation, SegmentationSpleen models\r\n- NrrdWriter for multi-channel arrays\r\n- 3D Slicer Fixes\r\n  - Support Segmentation Editor and other UI enhancements\r\n  - Improvements for Scribble 
Interactions\r\n  - Support for _**.seg.nrrd**_ segmentation files\r\n  - Support to pre-load existing label masks during image fetch\u002Fload\r\n- Static checks using pre-commit ci\r\n\r\n#### Removed\r\n\r\n- SimpleCRF and dependent functions for scribbles","2022-06-13T18:30:46",{"id":214,"version":215,"summary_zh":216,"released_at":217},280819,"0.3.2","#### Added\r\n\r\n- SSL and multiple worker options while starting MONAI Label server\r\n- Scribbles support for OHIF Viewer\r\n- Add Citation page\r\n\r\n#### Changed\r\n\r\n- Support for MONAI (supports 0.8.1 and above)\r\n- Fix loading pretrained models for TTA and Epistemic Scoring\r\n- Fix Load MMAR API for deepgrow\u002Fsegmentation apps\r\n- Upgrade PIP Dependencies\r\n- Documentation Fixes\r\n","2022-02-28T16:50:02",{"id":219,"version":220,"summary_zh":221,"released_at":222},280820,"0.3.1","#### Added\r\n\r\n- Flexible version support for MONAI (supports 0.8.* instead of 0.8.0)\r\n\r\n#### Changed\r\n\r\n- Strict Flag is set to False while loading pretrained models\r\n- Fix inverse transform for DeepEdit sample app\r\n- Documentation Fixes\r\n\r\n#### Removed\r\n\r\n- DeepGrow Left-Atrium","2021-12-29T20:15:14",{"id":224,"version":225,"summary_zh":226,"released_at":227},280821,"0.3.0","#### Added\r\n\r\n- Multi GPU support for training\r\n  - Support for both Windows and Ubuntu\r\n  - Option to customize GPU selection\r\n- Multi Label support for DeepEdit\r\n  - DynUNET and UNETR\r\n- Multi Label support for Deepgrow App\r\n  - Annotate multiple organs (spleen, liver, pancreas, unknown etc..)\r\n  - Train Deepgrow 2D\u002F3D models to learn on existing + new labels submitted\r\n- 3D Slicer plugin\r\n  - Multi Label Interaction\r\n  - UI Enhancements\r\n  - Train\u002FUpdate specific model\r\n- Performance Improvements\r\n  - Dataset (Cached, Persistence, SmartCache)\r\n  - ThreadDataloader\r\n  - Early Stopping\r\n- Strategy Improvements to support Multi User environment \r\n- Extensibility for Server 
APIs\r\n\r\n#### Changed\r\n\r\n- Operate histogram likelihood transform in both normalized and unnormalized modes for Scribbles\r\n\r\n#### Removed\r\n\r\n- DeepGrow Left-Atrium\r\n","2021-11-28T09:57:19",{"id":229,"version":230,"summary_zh":231,"released_at":232},280822,"0.2.0","#### Added\r\n\r\n- Support for DICOMWeb connectivity to local\u002Fremote PACS\r\n- Annotations support via OHIF UI enabled in MONAI Label Server\r\n- Support for native and custom scoring methods to support next image selection strategies\r\n  - Native support for scoring and image selection using Epistemic Uncertainty and Test-time Augmentations (Aleatoric Uncertainty)\r\n- Custom `ScoringMethod` and `Strategy` implementation documentation\r\n- Scribbles-based annotation support for all sample apps\r\n\r\n#### Changed\r\n\r\n- Previously named `generic` apps now have default functionality under `deepedit`, `deepgrow` and `segmentation`\r\n- Updated `Modules Overview` documentation to include interaction between `ScoringMethod` and `Strategy`\r\n\r\n#### Removed\r\n\r\n- All spleen segmentation sample apps (DeepGrow, DeepEdit, auto-segmentation)\r\n\r\n","2021-09-23T16:10:36",{"id":234,"version":235,"summary_zh":236,"released_at":237},280823,"0.1.0","### Added\r\n\r\n#### Highlights\r\n\r\n- Framework for developing and deploying MONAI Label Apps to train and infer AI models\r\n- Compositional & portable APIs for ease of integration in existing workflows\r\n- Customizable design for varying user expertise\r\n- 3DSlicer support\r\n- Support for multi-label auto-segmentation\r\n\r\n#### Sample Apps\r\n\r\n- Template apps to customize the behavior of DeepGrow and DeepEdit\r\n- Automated segmentation of left atrium, spleen\r\n- DeepGrow AI annotation of left atrium, spleen\r\n- DeepEdit AI annotation of left atrium, spleen","2021-07-15T01:21:51",{"id":239,"version":240,"summary_zh":241,"released_at":242},280824,"data","> THIS IS NOT A RELEASE PAGE\r\n\r\nDownload Pretrained model weights 
(TEST\u002FDEMO only) for sample apps from here\r\n","2021-07-12T17:46:58"]