[{"data":1,"prerenderedAt":-1},["ShallowReactive",2],{"similar-facebookresearch--sam2":3,"tool-facebookresearch--sam2":64},[4,17,26,40,48,56],{"id":5,"name":6,"github_repo":7,"description_zh":8,"stars":9,"difficulty_score":10,"last_commit_at":11,"category_tags":12,"status":16},3808,"stable-diffusion-webui","AUTOMATIC1111\u002Fstable-diffusion-webui","stable-diffusion-webui 是一个基于 Gradio 构建的网页版操作界面，旨在让用户能够轻松地在本地运行和使用强大的 Stable Diffusion 图像生成模型。它解决了原始模型依赖命令行、操作门槛高且功能分散的痛点，将复杂的 AI 绘图流程整合进一个直观易用的图形化平台。\n\n无论是希望快速上手的普通创作者、需要精细控制画面细节的设计师，还是想要深入探索模型潜力的开发者与研究人员，都能从中获益。其核心亮点在于极高的功能丰富度：不仅支持文生图、图生图、局部重绘（Inpainting）和外绘（Outpainting）等基础模式，还独创了注意力机制调整、提示词矩阵、负向提示词以及“高清修复”等高级功能。此外，它内置了 GFPGAN 和 CodeFormer 等人脸修复工具，支持多种神经网络放大算法，并允许用户通过插件系统无限扩展能力。即使是显存有限的设备，stable-diffusion-webui 也提供了相应的优化选项，让高质量的 AI 艺术创作变得触手可及。",162132,3,"2026-04-05T11:01:52",[13,14,15],"开发框架","图像","Agent","ready",{"id":18,"name":19,"github_repo":20,"description_zh":21,"stars":22,"difficulty_score":23,"last_commit_at":24,"category_tags":25,"status":16},2271,"ComfyUI","Comfy-Org\u002FComfyUI","ComfyUI 是一款功能强大且高度模块化的视觉 AI 引擎，专为设计和执行复杂的 Stable Diffusion 图像生成流程而打造。它摒弃了传统的代码编写模式，采用直观的节点式流程图界面，让用户通过连接不同的功能模块即可构建个性化的生成管线。\n\n这一设计巧妙解决了高级 AI 绘图工作流配置复杂、灵活性不足的痛点。用户无需具备编程背景，也能自由组合模型、调整参数并实时预览效果，轻松实现从基础文生图到多步骤高清修复等各类复杂任务。ComfyUI 拥有极佳的兼容性，不仅支持 Windows、macOS 和 Linux 全平台，还广泛适配 NVIDIA、AMD、Intel 及苹果 Silicon 等多种硬件架构，并率先支持 SDXL、Flux、SD3 等前沿模型。\n\n无论是希望深入探索算法潜力的研究人员和开发者，还是追求极致创作自由度的设计师与资深 AI 绘画爱好者，ComfyUI 都能提供强大的支持。其独特的模块化架构允许社区不断扩展新功能，使其成为当前最灵活、生态最丰富的开源扩散模型工具之一，帮助用户将创意高效转化为现实。",107662,2,"2026-04-03T11:11:01",[13,14,15],{"id":27,"name":28,"github_repo":29,"description_zh":30,"stars":31,"difficulty_score":23,"last_commit_at":32,"category_tags":33,"status":16},2268,"ML-For-Beginners","microsoft\u002FML-For-Beginners","ML-For-Beginners 是由微软推出的一套系统化机器学习入门课程，旨在帮助零基础用户轻松掌握经典机器学习知识。这套课程将学习路径规划为 12 周，包含 26 节精炼课程和 52 道配套测验，内容涵盖从基础概念到实际应用的完整流程，有效解决了初学者面对庞大知识体系时无从下手、缺乏结构化指导的痛点。\n\n无论是希望转型的开发者、需要补充算法背景的研究人员，还是对人工智能充满好奇的普通爱好者，都能从中受益。课程不仅提供了清晰的理论讲解，还强调动手实践，让用户在循序渐进中建立扎实的技能基础。其独特的亮点在于强大的多语言支持，通过自动化机制提供了包括简体中文在内的 50 多种语言版本，极大地降低了全球不同背景用户的学习门槛。此外，项目采用开源协作模式，社区活跃且内容持续更新，确保学习者能获取前沿且准确的技术资讯。如果你正寻找一条清晰、友好且专业的机器学习入门之路，ML-For-Beginners 将是理想的起点。",84991,"2026-04-05T10:45:23",[14,34,35,36,15,37,38,13,39],"数据工具","视频","插件","其他","语言模型","音频",{"id":41,"name":42,"github_repo":43,"description_zh":44,"stars":45,"difficulty_score":10,"last_commit_at":46,"category_tags":47,"status":16},3128,"ragflow","infiniflow\u002Fragflow","RAGFlow 是一款领先的开源检索增强生成（RAG）引擎，旨在为大语言模型构建更精准、可靠的上下文层。它巧妙地将前沿的 RAG 技术与智能体（Agent）能力相结合，不仅支持从各类文档中高效提取知识，还能让模型基于这些知识进行逻辑推理和任务执行。\n\n在大模型应用中，幻觉问题和知识滞后是常见痛点。RAGFlow 通过深度解析复杂文档结构（如表格、图表及混合排版），显著提升了信息检索的准确度，从而有效减少模型“胡编乱造”的现象，确保回答既有据可依又具备时效性。其内置的智能体机制更进一步，使系统不仅能回答问题，还能自主规划步骤解决复杂问题。\n\n这款工具特别适合开发者、企业技术团队以及 AI 研究人员使用。无论是希望快速搭建私有知识库问答系统，还是致力于探索大模型在垂直领域落地的创新者，都能从中受益。RAGFlow 提供了可视化的工作流编排界面和灵活的 API 接口，既降低了非算法背景用户的上手门槛，也满足了专业开发者对系统深度定制的需求。作为基于 Apache 2.0 协议开源的项目，它正成为连接通用大模型与行业专有知识之间的重要桥梁。",77062,"2026-04-04T04:44:48",[15,14,13,38,37],{"id":49,"name":50,"github_repo":51,"description_zh":52,"stars":53,"difficulty_score":10,"last_commit_at":54,"category_tags":55,"status":16},519,"PaddleOCR","PaddlePaddle\u002FPaddleOCR","PaddleOCR 是一款基于百度飞桨框架开发的高性能开源光学字符识别工具包。它的核心能力是将图片、PDF 等文档中的文字提取出来，转换成计算机可读取的结构化数据，让机器真正“看懂”图文内容。\n\n面对海量纸质或电子文档，PaddleOCR 解决了人工录入效率低、数字化成本高的问题。尤其在人工智能领域，它扮演着连接图像与大型语言模型（LLM）的桥梁角色，能将视觉信息直接转化为文本输入，助力智能问答、文档分析等应用场景落地。\n\nPaddleOCR 适合开发者、算法研究人员以及有文档自动化需求的普通用户。其技术优势十分明显：不仅支持全球 100 多种语言的识别，还能在 Windows、Linux、macOS 等多个系统上运行，并灵活适配 CPU、GPU、NPU 等各类硬件。作为一个轻量级且社区活跃的开源项目，PaddleOCR 
既能满足快速集成的需求，也能支撑前沿的视觉语言研究，是处理文字识别任务的理想选择。",74913,"2026-04-05T10:44:17",[38,14,13,37],{"id":57,"name":58,"github_repo":59,"description_zh":60,"stars":61,"difficulty_score":23,"last_commit_at":62,"category_tags":63,"status":16},2471,"tesseract","tesseract-ocr\u002Ftesseract","Tesseract 是一款历史悠久且备受推崇的开源光学字符识别（OCR）引擎，最初由惠普实验室开发，后由 Google 维护，目前由全球社区共同贡献。它的核心功能是将图片中的文字转化为可编辑、可搜索的文本数据，有效解决了从扫描件、照片或 PDF 文档中提取文字信息的难题，是数字化归档和信息自动化的重要基础工具。\n\n在技术层面，Tesseract 展现了强大的适应能力。从版本 4 开始，它引入了基于长短期记忆网络（LSTM）的神经网络 OCR 引擎，显著提升了行识别的准确率；同时，为了兼顾旧有需求，它依然支持传统的字符模式识别引擎。Tesseract 原生支持 UTF-8 编码，开箱即用即可识别超过 100 种语言，并兼容 PNG、JPEG、TIFF 等多种常见图像格式。输出方面，它灵活支持纯文本、hOCR、PDF、TSV 等多种格式，方便后续数据处理。\n\nTesseract 主要面向开发者、研究人员以及需要构建文档处理流程的企业用户。由于它本身是一个命令行工具和库（libtesseract），不包含图形用户界面（GUI），因此最适合具备一定编程能力的技术人员集成到自动化脚本或应用程序中",73286,"2026-04-03T01:56:45",[13,14],{"id":65,"github_repo":66,"name":67,"description_en":68,"description_zh":69,"ai_summary_zh":69,"readme_en":70,"readme_zh":71,"quickstart_zh":72,"use_case_zh":73,"hero_image_url":74,"owner_login":75,"owner_name":76,"owner_avatar_url":77,"owner_bio":78,"owner_company":79,"owner_location":79,"owner_email":79,"owner_twitter":79,"owner_website":80,"owner_url":81,"languages":82,"stars":101,"forks":102,"last_commit_at":103,"license":104,"difficulty_score":10,"env_os":105,"env_gpu":106,"env_ram":107,"env_deps":108,"category_tags":117,"github_topics":79,"view_count":23,"oss_zip_url":79,"oss_zip_packed_at":79,"status":16,"created_at":118,"updated_at":119,"faqs":120,"releases":150},3856,"facebookresearch\u002Fsam2","sam2","The repository provides code for running inference with the Meta Segment Anything Model 2 (SAM 2), links for downloading the trained model checkpoints, and example notebooks that show how to use the model.","SAM 2 是 Meta 推出的新一代基础模型，旨在解决图像与视频中的“提示式视觉分割”难题。无论是静态图片还是动态视频，用户只需提供简单的点击、框选等提示，SAM 2 就能精准识别并分割出目标对象。它将单张图像视为单帧视频进行处理，成功打破了以往模型在视频理解上的局限。\n\n这款工具特别适合计算机视觉开发者、AI 研究人员以及需要处理视频内容的设计师使用。对于希望探索多目标跟踪或构建交互式应用的技术团队，SAM 2 提供了强大的底层支持。其核心亮点在于采用了带有流式记忆机制的 Transformer 架构，能够实现实时的视频处理性能。此外，项目配套发布了迄今为止规模最大的视频分割数据集（SA-V），并通过“模型闭环数据引擎”不断自我进化。最新更新的 SAM 2.1 版本不仅提供了更优的预训练权重，还支持全模型编译加速及灵活的多目标独立追踪，让复杂场景下的视频分析变得更加高效与便捷。","# SAM 2: Segment Anything in Images and Videos\n\n**[AI at Meta, FAIR](https:\u002F\u002Fai.meta.com\u002Fresearch\u002F)**\n\n[Nikhila Ravi](https:\u002F\u002Fnikhilaravi.com\u002F), [Valentin Gabeur](https:\u002F\u002Fgabeur.github.io\u002F), [Yuan-Ting Hu](https:\u002F\u002Fscholar.google.com\u002Fcitations?user=E8DVVYQAAAAJ&hl=en), [Ronghang Hu](https:\u002F\u002Fronghanghu.com\u002F), [Chaitanya Ryali](https:\u002F\u002Fscholar.google.com\u002Fcitations?user=4LWx24UAAAAJ&hl=en), [Tengyu Ma](https:\u002F\u002Fscholar.google.com\u002Fcitations?user=VeTSl0wAAAAJ&hl=en), [Haitham Khedr](https:\u002F\u002Fhkhedr.com\u002F), [Roman Rädle](https:\u002F\u002Fscholar.google.de\u002Fcitations?user=Tpt57v0AAAAJ&hl=en), [Chloe Rolland](https:\u002F\u002Fscholar.google.com\u002Fcitations?hl=fr&user=n-SnMhoAAAAJ), [Laura Gustafson](https:\u002F\u002Fscholar.google.com\u002Fcitations?user=c8IpF9gAAAAJ&hl=en), [Eric Mintun](https:\u002F\u002Fericmintun.github.io\u002F), [Junting Pan](https:\u002F\u002Fjunting.github.io\u002F), [Kalyan Vasudev Alwala](https:\u002F\u002Fscholar.google.co.in\u002Fcitations?user=m34oaWEAAAAJ&hl=en), [Nicolas Carion](https:\u002F\u002Fwww.nicolascarion.com\u002F), [Chao-Yuan Wu](https:\u002F\u002Fchaoyuan.org\u002F), [Ross Girshick](https:\u002F\u002Fwww.rossgirshick.info\u002F), [Piotr Dollár](https:\u002F\u002Fpdollar.github.io\u002F), [Christoph 
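As a minimal sketch of how that flag is wired in (assuming a SAM 2.1 checkpoint has already been downloaded as described under Installation and Getting Started below), enabling the compiled VOS path only requires passing `vos_optimized=True` when building the video predictor:

```python
import torch
from sam2.build_sam import build_sam2_video_predictor

# Paths follow the layout used in the snippets below; adjust to your checkout.
checkpoint = "./checkpoints/sam2.1_hiera_large.pt"
model_cfg = "configs/sam2.1/sam2.1_hiera_l.yaml"

# vos_optimized=True enables torch.compile of the full model for faster
# video (VOS) inference, per the 12/11/2024 release note above.
predictor = build_sam2_video_predictor(model_cfg, checkpoint, vos_optimized=True)
```

Note that `torch.compile` spends extra time on the first frames it sees, so the speedup mainly shows up on longer videos.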
**09/30/2024 -- SAM 2.1 Developer Suite (new checkpoints, training code, web demo) is released**

- A new suite of improved model checkpoints (denoted as **SAM 2.1**) is released. See [Model Description](#model-description) for details.
  * To use the new SAM 2.1 checkpoints, you need the latest model code from this repo. If you have installed an earlier version of this repo, please first uninstall the previous version via `pip uninstall SAM-2`, pull the latest code from this repo (with `git pull`), and then reinstall the repo following [Installation](#installation) below.
- The training (and fine-tuning) code has been released. See [`training/README.md`](training/README.md) on how to get started.
- The frontend + backend code for the SAM 2 web demo has been released. See [`demo/README.md`](demo/README.md) for details.

## Installation

SAM 2 needs to be installed first before use. The code requires `python>=3.10`, as well as `torch>=2.5.1` and `torchvision>=0.20.1`. Please follow the instructions [here](https://pytorch.org/get-started/locally/) to install both PyTorch and TorchVision dependencies. You can install SAM 2 on a GPU machine using:

```bash
git clone https://github.com/facebookresearch/sam2.git && cd sam2

pip install -e .
```

If you are installing on Windows, it's strongly recommended to use [Windows Subsystem for Linux (WSL)](https://learn.microsoft.com/en-us/windows/wsl/install) with Ubuntu.

To use the SAM 2 predictor and run the example notebooks, `jupyter` and `matplotlib` are required and can be installed by:

```bash
pip install -e ".[notebooks]"
```

Note:
1. It's recommended to create a new Python environment via [Anaconda](https://www.anaconda.com/) for this installation and install PyTorch 2.5.1 (or higher) via `pip` following https://pytorch.org/. If you have a PyTorch version lower than 2.5.1 in your current environment, the installation command above will try to upgrade it to the latest PyTorch version using `pip`.
2. The step above requires compiling a custom CUDA kernel with the `nvcc` compiler. If it isn't already available on your machine, please install the [CUDA toolkits](https://developer.nvidia.com/cuda-toolkit-archive) with a version that matches your PyTorch CUDA version.
3. If you see a message like `Failed to build the SAM 2 CUDA extension` during installation, you can ignore it and still use SAM 2 (some post-processing functionality may be limited, but it doesn't affect the results in most cases).

Please see [`INSTALL.md`](./INSTALL.md) for FAQs on potential issues and solutions.

## Getting Started

### Download Checkpoints

First, we need to download a model checkpoint. All the model checkpoints can be downloaded by running:

```bash
cd checkpoints && \
./download_ckpts.sh && \
cd ..
```

or individually from:

- [sam2.1_hiera_tiny.pt](https://dl.fbaipublicfiles.com/segment_anything_2/092824/sam2.1_hiera_tiny.pt)
- [sam2.1_hiera_small.pt](https://dl.fbaipublicfiles.com/segment_anything_2/092824/sam2.1_hiera_small.pt)
- [sam2.1_hiera_base_plus.pt](https://dl.fbaipublicfiles.com/segment_anything_2/092824/sam2.1_hiera_base_plus.pt)
- [sam2.1_hiera_large.pt](https://dl.fbaipublicfiles.com/segment_anything_2/092824/sam2.1_hiera_large.pt)

(note that these are the improved checkpoints denoted as SAM 2.1; see [Model Description](#model-description) for details.)

Then SAM 2 can be used in a few lines as follows for image and video prediction.

### Image prediction

SAM 2 has all the capabilities of [SAM](https://github.com/facebookresearch/segment-anything) on static images, and we provide image prediction APIs that closely resemble SAM for image use cases. The `SAM2ImagePredictor` class has an easy interface for image prompting.

```python
import torch
from sam2.build_sam import build_sam2
from sam2.sam2_image_predictor import SAM2ImagePredictor

checkpoint = "./checkpoints/sam2.1_hiera_large.pt"
model_cfg = "configs/sam2.1/sam2.1_hiera_l.yaml"
predictor = SAM2ImagePredictor(build_sam2(model_cfg, checkpoint))

with torch.inference_mode(), torch.autocast("cuda", dtype=torch.bfloat16):
    predictor.set_image(<your_image>)
    masks, _, _ = predictor.predict(<input_prompts>)
```

Please refer to the examples in [image_predictor_example.ipynb](./notebooks/image_predictor_example.ipynb) (also in Colab [here](https://colab.research.google.com/github/facebookresearch/sam2/blob/main/notebooks/image_predictor_example.ipynb)) for static image use cases.
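As a concrete version of the snippet above (a sketch, not taken from the official notebooks: the image file, the click coordinates, and the single foreground point are illustrative assumptions), a point prompt can be passed via `point_coords`/`point_labels`, mirroring the original SAM predictor API:

```python
import numpy as np
import torch
from PIL import Image
from sam2.build_sam import build_sam2
from sam2.sam2_image_predictor import SAM2ImagePredictor

checkpoint = "./checkpoints/sam2.1_hiera_large.pt"
model_cfg = "configs/sam2.1/sam2.1_hiera_l.yaml"
predictor = SAM2ImagePredictor(build_sam2(model_cfg, checkpoint))

# Hypothetical input image; any RGB image works.
image = np.array(Image.open("photo.jpg").convert("RGB"))

with torch.inference_mode(), torch.autocast("cuda", dtype=torch.bfloat16):
    predictor.set_image(image)
    # One foreground click at pixel (x=500, y=375); label 1 = foreground, 0 = background.
    masks, scores, _ = predictor.predict(
        point_coords=np.array([[500, 375]]),
        point_labels=np.array([1]),
        multimask_output=True,  # return several candidate masks with confidence scores
    )

# Keep the highest-scoring candidate as a boolean mask.
best_mask = masks[np.argmax(scores)].astype(bool)
```

With `multimask_output=True` the predictor returns multiple candidate masks plus predicted quality scores, which is the usual choice for a single, possibly ambiguous click.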
SAM 2 also supports automatic mask generation on images just like SAM. Please see [automatic_mask_generator_example.ipynb](./notebooks/automatic_mask_generator_example.ipynb) (also in Colab [here](https://colab.research.google.com/github/facebookresearch/sam2/blob/main/notebooks/automatic_mask_generator_example.ipynb)) for automatic mask generation in images.

### Video prediction

For promptable segmentation and tracking in videos, we provide a video predictor with APIs to, for example, add prompts and propagate masklets throughout a video. SAM 2 supports video inference on multiple objects and uses an inference state to keep track of the interactions in each video.

```python
import torch
from sam2.build_sam import build_sam2_video_predictor

checkpoint = "./checkpoints/sam2.1_hiera_large.pt"
model_cfg = "configs/sam2.1/sam2.1_hiera_l.yaml"
predictor = build_sam2_video_predictor(model_cfg, checkpoint)

with torch.inference_mode(), torch.autocast("cuda", dtype=torch.bfloat16):
    state = predictor.init_state(<your_video>)

    # add new prompts and instantly get the output on the same frame
    frame_idx, object_ids, masks = predictor.add_new_points_or_box(state, <your_prompts>)

    # propagate the prompts to get masklets throughout the video
    for frame_idx, object_ids, masks in predictor.propagate_in_video(state):
        ...
```

Please refer to the examples in [video_predictor_example.ipynb](./notebooks/video_predictor_example.ipynb) (also in Colab [here](https://colab.research.google.com/github/facebookresearch/sam2/blob/main/notebooks/video_predictor_example.ipynb)) for details on how to add click or box prompts, make refinements, and track multiple objects in videos.
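Filling in the placeholders above, a minimal end-to-end sketch might look as follows. The frame directory, the object id, and the click coordinates are illustrative assumptions; `init_state` is pointed at a directory of extracted JPEG frames as in the official video notebook, and the returned mask logits are thresholded at 0 to obtain boolean masks:

```python
import numpy as np
import torch
from sam2.build_sam import build_sam2_video_predictor

checkpoint = "./checkpoints/sam2.1_hiera_large.pt"
model_cfg = "configs/sam2.1/sam2.1_hiera_l.yaml"
predictor = build_sam2_video_predictor(model_cfg, checkpoint)

with torch.inference_mode(), torch.autocast("cuda", dtype=torch.bfloat16):
    # Hypothetical folder of extracted JPEG frames (00000.jpg, 00001.jpg, ...).
    state = predictor.init_state(video_path="./videos/interview_frames")

    # Prompt object 1 with a single foreground click on frame 0.
    frame_idx, object_ids, mask_logits = predictor.add_new_points_or_box(
        inference_state=state,
        frame_idx=0,
        obj_id=1,
        points=np.array([[210, 350]], dtype=np.float32),
        labels=np.array([1], dtype=np.int32),  # 1 = foreground, 0 = background
    )

    # Propagate through the whole video, collecting one boolean mask per object per frame.
    video_masks = {}
    for frame_idx, object_ids, mask_logits in predictor.propagate_in_video(state):
        video_masks[frame_idx] = {
            obj_id: (mask_logits[i] > 0.0).cpu().numpy()
            for i, obj_id in enumerate(object_ids)
        }
```

Because the updated predictor handles objects independently, further objects can be prompted with additional `add_new_points_or_box` calls (new `obj_id` values) even after propagation has started.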
## Load from 🤗 Hugging Face

Alternatively, models can also be loaded from [Hugging Face](https://huggingface.co/models?search=facebook/sam2) (requires `pip install huggingface_hub`).

For image prediction:

```python
import torch
from sam2.sam2_image_predictor import SAM2ImagePredictor

predictor = SAM2ImagePredictor.from_pretrained("facebook/sam2-hiera-large")

with torch.inference_mode(), torch.autocast("cuda", dtype=torch.bfloat16):
    predictor.set_image(<your_image>)
    masks, _, _ = predictor.predict(<input_prompts>)
```

For video prediction:

```python
import torch
from sam2.sam2_video_predictor import SAM2VideoPredictor

predictor = SAM2VideoPredictor.from_pretrained("facebook/sam2-hiera-large")

with torch.inference_mode(), torch.autocast("cuda", dtype=torch.bfloat16):
    state = predictor.init_state(<your_video>)

    # add new prompts and instantly get the output on the same frame
    frame_idx, object_ids, masks = predictor.add_new_points_or_box(state, <your_prompts>)

    # propagate the prompts to get masklets throughout the video
    for frame_idx, object_ids, masks in predictor.propagate_in_video(state):
        ...
```
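The snippets above pull the original SAM 2 weights. Assuming the improved SAM 2.1 checkpoints are also published on Hugging Face under the matching `facebook/sam2.1-*` model ids (this README does not list them explicitly), they would load the same way:

```python
from sam2.sam2_image_predictor import SAM2ImagePredictor

# Hypothetical SAM 2.1 model id; fall back to "facebook/sam2-hiera-large" if it is unavailable.
predictor = SAM2ImagePredictor.from_pretrained("facebook/sam2.1-hiera-large")
```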
## Model Description

### SAM 2.1 checkpoints

The table below shows the improved SAM 2.1 checkpoints released on September 29, 2024.

| **Model** | **Size (M)** | **Speed (FPS)** | **SA-V test (J&F)** | **MOSE val (J&F)** | **LVOS v2 (J&F)** |
| :---: | :---: | :---: | :---: | :---: | :---: |
| sam2.1_hiera_tiny <br /> ([config](sam2/configs/sam2.1/sam2.1_hiera_t.yaml), [checkpoint](https://dl.fbaipublicfiles.com/segment_anything_2/092824/sam2.1_hiera_tiny.pt)) | 38.9 | 91.2 | 76.5 | 71.8 | 77.3 |
| sam2.1_hiera_small <br /> ([config](sam2/configs/sam2.1/sam2.1_hiera_s.yaml), [checkpoint](https://dl.fbaipublicfiles.com/segment_anything_2/092824/sam2.1_hiera_small.pt)) | 46 | 84.8 | 76.6 | 73.5 | 78.3 |
| sam2.1_hiera_base_plus <br /> ([config](sam2/configs/sam2.1/sam2.1_hiera_b+.yaml), [checkpoint](https://dl.fbaipublicfiles.com/segment_anything_2/092824/sam2.1_hiera_base_plus.pt)) | 80.8 | 64.1 | 78.2 | 73.7 | 78.2 |
| sam2.1_hiera_large <br /> ([config](sam2/configs/sam2.1/sam2.1_hiera_l.yaml), [checkpoint](https://dl.fbaipublicfiles.com/segment_anything_2/092824/sam2.1_hiera_large.pt)) | 224.4 | 39.5 | 79.5 | 74.6 | 80.6 |

### SAM 2 checkpoints

The previous SAM 2 checkpoints released on July 29, 2024 can be found as follows:

| **Model** | **Size (M)** | **Speed (FPS)** | **SA-V test (J&F)** | **MOSE val (J&F)** | **LVOS v2 (J&F)** |
| :---: | :---: | :---: | :---: | :---: | :---: |
| sam2_hiera_tiny <br /> ([config](sam2/configs/sam2/sam2_hiera_t.yaml), [checkpoint](https://dl.fbaipublicfiles.com/segment_anything_2/072824/sam2_hiera_tiny.pt)) | 38.9 | 91.5 | 75.0 | 70.9 | 75.3 |
| sam2_hiera_small <br /> ([config](sam2/configs/sam2/sam2_hiera_s.yaml), [checkpoint](https://dl.fbaipublicfiles.com/segment_anything_2/072824/sam2_hiera_small.pt)) | 46 | 85.6 | 74.9 | 71.5 | 76.4 |
| sam2_hiera_base_plus <br /> ([config](sam2/configs/sam2/sam2_hiera_b+.yaml), [checkpoint](https://dl.fbaipublicfiles.com/segment_anything_2/072824/sam2_hiera_base_plus.pt)) | 80.8 | 64.8 | 74.7 | 72.8 | 75.8 |
| sam2_hiera_large <br /> ([config](sam2/configs/sam2/sam2_hiera_l.yaml), [checkpoint](https://dl.fbaipublicfiles.com/segment_anything_2/072824/sam2_hiera_large.pt)) | 224.4 | 39.7 | 76.0 | 74.6 | 79.8 |

Speed measured on an A100 with `torch 2.5.1, cuda 12.4`. See `benchmark.py` for an example on benchmarking (compiling all the model components). Compiling only the image encoder can be more flexible and also provide a (smaller) speed-up (set `compile_image_encoder: True` in the config).
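Since each table row pairs a config file with a checkpoint URL, a small helper makes it easy to switch between size variants. The dictionary below is only a convenience mapping assembled from the SAM 2.1 table (it is not an API provided by the repository), using the config names as written in the README snippets above:

```python
from sam2.build_sam import build_sam2
from sam2.sam2_image_predictor import SAM2ImagePredictor

# (config, local checkpoint path) per SAM 2.1 size variant, from the table above.
SAM21_VARIANTS = {
    "tiny": ("configs/sam2.1/sam2.1_hiera_t.yaml", "./checkpoints/sam2.1_hiera_tiny.pt"),
    "small": ("configs/sam2.1/sam2.1_hiera_s.yaml", "./checkpoints/sam2.1_hiera_small.pt"),
    "base_plus": ("configs/sam2.1/sam2.1_hiera_b+.yaml", "./checkpoints/sam2.1_hiera_base_plus.pt"),
    "large": ("configs/sam2.1/sam2.1_hiera_l.yaml", "./checkpoints/sam2.1_hiera_large.pt"),
}

def load_image_predictor(variant: str = "large") -> SAM2ImagePredictor:
    """Build an image predictor for one of the SAM 2.1 size variants."""
    cfg, ckpt = SAM21_VARIANTS[variant]
    return SAM2ImagePredictor(build_sam2(cfg, ckpt))
```

The smaller variants trade a few J&F points for roughly double the FPS of the large model, per the benchmark columns above.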
## Segment Anything Video Dataset

See [sav_dataset/README.md](sav_dataset/README.md) for details.

## Training SAM 2

You can train or fine-tune SAM 2 on custom datasets of images, videos, or both. Please check the training [README](training/README.md) on how to get started.

## Web demo for SAM 2

We have released the frontend + backend code for the SAM 2 web demo (a locally deployable version similar to https://sam2.metademolab.com/demo). Please see the web demo [README](demo/README.md) for details.

## License

The SAM 2 model checkpoints, SAM 2 demo code (front-end and back-end), and SAM 2 training code are licensed under [Apache 2.0](./LICENSE); however, the [Inter Font](https://github.com/rsms/inter?tab=OFL-1.1-1-ov-file) and [Noto Color Emoji](https://github.com/googlefonts/noto-emoji) used in the SAM 2 demo code are made available under the [SIL Open Font License, version 1.1](https://openfontlicense.org/open-font-license-official-text/).

## Contributing

See [contributing](CONTRIBUTING.md) and the [code of conduct](CODE_OF_CONDUCT.md).

## Contributors

The SAM 2 project was made possible with the help of many contributors (alphabetical):

Karen Bergan, Daniel Bolya, Alex Bosenberg, Kai Brown, Vispi Cassod, Christopher Chedeau, Ida Cheng, Luc Dahlin, Shoubhik Debnath, Rene Martinez Doehner, Grant Gardner, Sahir Gomez, Rishi Godugu, Baishan Guo, Caleb Ho, Andrew Huang, Somya Jain, Bob Kamma, Amanda Kallet, Jake Kinney, Alexander Kirillov, Shiva Koduvayur, Devansh Kukreja, Robert Kuo, Aohan Lin, Parth Malani, Jitendra Malik, Mallika Malhotra, Miguel Martin, Alexander Miller, Sasha Mitts, William Ngan, George Orlin, Joelle Pineau, Kate Saenko, Rodrick Shepard, Azita Shokrpour, David Soofian, Jonathan Torres, Jenny Truong, Sagar Vaze, Meng Wang, Claudette Ward, Pengchuan Zhang.

Third-party code: we use a GPU-based connected component algorithm adapted from [`cc_torch`](https://github.com/zsef123/Connected_components_PyTorch) (with its license in [`LICENSE_cctorch`](./LICENSE_cctorch)) as an optional post-processing step for the mask predictions.

## Citing SAM 2

If you use SAM 2 or the SA-V dataset in your research, please use the following BibTeX entry.

```bibtex
@article{ravi2024sam2,
  title={SAM 2: Segment Anything in Images and Videos},
  author={Ravi, Nikhila and Gabeur, Valentin and Hu, Yuan-Ting and Hu, Ronghang and Ryali, Chaitanya and Ma, Tengyu and Khedr, Haitham and R{\"a}dle, Roman and Rolland, Chloe and Gustafson, Laura and Mintun, Eric and Pan, Junting and Alwala, Kalyan Vasudev and Carion, Nicolas and Wu, Chao-Yuan and Girshick, Ross and Doll{\'a}r, Piotr and Feichtenhofer, Christoph},
  journal={arXiv preprint arXiv:2408.00714},
  url={https://arxiv.org/abs/2408.00714},
  year={2024}
}
```
# SAM 2 Quickstart Guide

SAM 2 (Segment Anything Model 2) is a next-generation foundation model from Meta FAIR that supports promptable segmentation of **images and videos**. Compared with its predecessor, SAM 2 introduces a streaming memory mechanism that enables real-time video processing and multi-object tracking.

## Environment

Before you start, make sure your development environment meets the following requirements:

*   **Operating system**: Linux (Ubuntu) is recommended. Windows users are strongly advised to use **WSL2 (Windows Subsystem for Linux)**.
*   **Python**: >= 3.10
*   **PyTorch**: >= 2.5.1
*   **TorchVision**: >= 0.20.1
*   **GPU**: An NVIDIA GPU with a matching CUDA toolkit is required (used to compile the custom CUDA kernel).

> **Note**: First install PyTorch and TorchVision builds that match your CUDA version from the [PyTorch site](https://pytorch.org/get-started/locally/). Users in mainland China can speed up installation with the Tsinghua or USTC mirror, for example as sketched below.
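A sketch of what using a mirror index might look like (the Tsinghua PyPI mirror URL is an assumption, not given in the original guide; substitute whichever mirror you prefer):

```bash
# Install SAM 2 in editable mode through the Tsinghua PyPI mirror (hypothetical example).
pip install -e . -i https://pypi.tuna.tsinghua.edu.cn/simple
```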
## Installation

### 1. Clone the repository
```bash
git clone https://github.com/facebookresearch/sam2.git && cd sam2
```

### 2. Install dependencies
Install SAM 2 in editable mode with `pip`:
```bash
pip install -e .
```

If you want to run the example notebooks, additionally install:
```bash
pip install -e ".[notebooks]"
```

> **Tip**: A `Failed to build the SAM 2 CUDA extension` warning during installation can usually be ignored and does not affect core functionality.

### 3. Download model weights
Enter the checkpoints directory and run the download script to fetch the latest improved **SAM 2.1** models:
```bash
cd checkpoints && ./download_ckpts.sh && cd ..
```

Or download individual models manually (`sam2.1_hiera_large.pt` is recommended for best quality):
*   [sam2.1_hiera_tiny.pt](https://dl.fbaipublicfiles.com/segment_anything_2/092824/sam2.1_hiera_tiny.pt)
*   [sam2.1_hiera_small.pt](https://dl.fbaipublicfiles.com/segment_anything_2/092824/sam2.1_hiera_small.pt)
*   [sam2.1_hiera_base_plus.pt](https://dl.fbaipublicfiles.com/segment_anything_2/092824/sam2.1_hiera_base_plus.pt)
*   [sam2.1_hiera_large.pt](https://dl.fbaipublicfiles.com/segment_anything_2/092824/sam2.1_hiera_large.pt)

## Basic usage

### Scenario 1: Image segmentation

SAM 2 fully supports static-image segmentation, with an interface similar to SAM v1.

```python
import torch
from sam2.build_sam import build_sam2
from sam2.sam2_image_predictor import SAM2ImagePredictor

# Model paths
checkpoint = "./checkpoints/sam2.1_hiera_large.pt"
model_cfg = "configs/sam2.1/sam2.1_hiera_l.yaml"

# Initialize the predictor
predictor = SAM2ImagePredictor(build_sam2(model_cfg, checkpoint))

with torch.inference_mode(), torch.autocast("cuda", dtype=torch.bfloat16):
    # Set the input image
    predictor.set_image(<your_image>)

    # Generate masks from prompts (e.g. points or boxes)
    masks, _, _ = predictor.predict(<input_prompts>)
```

### Scenario 2: Video segmentation and tracking

SAM 2's core strength is video: add point or box prompts and propagate masks through the whole video.

```python
import torch
from sam2.build_sam import build_sam2_video_predictor

# Model paths
checkpoint = "./checkpoints/sam2.1_hiera_large.pt"
model_cfg = "configs/sam2.1/sam2.1_hiera_l.yaml"

# Initialize the video predictor
predictor = build_sam2_video_predictor(model_cfg, checkpoint)

with torch.inference_mode(), torch.autocast("cuda", dtype=torch.bfloat16):
    # Initialize the video state
    state = predictor.init_state(<your_video>)

    # Add new prompts (points or box) and get the result on the current frame
    frame_idx, object_ids, masks = predictor.add_new_points_or_box(state, <your_prompts>)

    # Propagate the prompts through the video to get masks for every frame
    for frame_idx, object_ids, masks in predictor.propagate_in_video(state):
        # process each frame's masks here
        ...
```

### Alternative: load from Hugging Face

If you cannot download the weight files directly, you can load them automatically from Hugging Face (install `huggingface_hub` first):

```python
from sam2.sam2_image_predictor import SAM2ImagePredictor

# Image prediction
predictor = SAM2ImagePredictor.from_pretrained("facebook/sam2-hiera-large")

# Video prediction
from sam2.sam2_video_predictor import SAM2VideoPredictor
predictor = SAM2VideoPredictor.from_pretrained("facebook/sam2-hiera-large")
```

## Use case

A short-video post-production team is working on a 10-minute outdoor interview and needs to cut out the moving interviewee precisely and replace the background.

### Without sam2
- Frame-by-frame work is extremely slow: the editor has to draw masks on every keyframe by hand, and whenever the subject moves quickly or is occluded, dozens of subsequent frames need re-adjusting.
- Traditional trackers lose the target easily against complex backgrounds, causing edge flicker or sudden jumps; fixing these errors by hand often takes longer than the original pass.
- If another moving object (such as a prop in the subject's hands) turns out to have been missed, the whole tracking pass has to be redone from scratch; new targets cannot be added dynamically.
- Preview rendering is slow, so there is no real-time feedback and the crew cannot confirm composites on set.

### With sam2
- Thanks to sam2's streaming memory architecture, one click on the subject in the first frame is enough for high-accuracy tracking through the whole video, even through fast motion or brief occlusion.
- With the updated `SAM2VideoPredictor` and its independent per-object inference, a missed object can be prompted on any frame at any time without restarting the job.
- With `torch.compile` optimization enabled, inference speeds up substantially and high-quality masks can be generated at near real time on an ordinary GPU, greatly reducing waiting time.
- The resulting mask edges are smooth and natural, removing the tedious frame-by-frame cleanup and compressing hours of work into minutes.

By extending image segmentation to video streams with interactive prompting, sam2 addresses the core pain points of slow, error-prone video object tracking.

## Repository info

- **Owner**: [Meta Research](https://github.com/facebookresearch) (facebookresearch), https://opensource.fb.com
- **Stars / forks**: 18,853 / 2,413
- **License**: Apache-2.0
- **Languages**: Jupyter Notebook 97.8%, Python 2.1%, Cuda / Shell / Dockerfile < 0.1%
- **Categories**: images, video
- **Supported OS**: Linux; Windows via WSL with Ubuntu
- **GPU**: NVIDIA GPU required, with a CUDA Toolkit matching the installed PyTorch version (a custom CUDA kernel is compiled at install time)
- **RAM**: not specified
- **Python**: 3.10+
- **Dependencies**: torch>=2.5.1, torchvision>=0.20.1, jupyter, matplotlib, huggingface_hub
- **Notes**: Windows users are strongly advised to install under WSL with Ubuntu. The build needs the `nvcc` compiler for the custom CUDA kernel; install a matching CUDA Toolkit if it is missing. A `Failed to build the SAM 2 CUDA extension` error can be ignored (only some post-processing features are limited, and main results are unaffected). Creating a fresh Anaconda environment is recommended.
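Since a CUDA toolkit / PyTorch mismatch is the most common installation problem (see the FAQ below), a quick sanity check before running `pip install -e .` can save a round trip. This snippet is only a suggested check, not part of the repository:

```python
import shutil
import torch

# PyTorch must be >= 2.5.1 and built with CUDA support.
print("torch:", torch.__version__, "| CUDA available:", torch.cuda.is_available())
print("torch built for CUDA:", torch.version.cuda)

# nvcc is needed to compile the custom CUDA kernel during `pip install -e .`.
nvcc = shutil.which("nvcc")
print("nvcc found at:", nvcc if nvcc else "NOT FOUND - install a matching CUDA Toolkit")
```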
## FAQ

**How do I avoid ComplexFloat type errors when exporting SAM2's memory_attention module to ONNX?**

The error is usually caused by the complex-valued tensors in the rotary encoder. Replace the `apply_rotary_enc` implementation with a matmul-based version (`apply_rotary_enc_mat`) so no complex dtype is involved; a reference implementation is at https://github.com/dxlcnm/segment-anything-2-real-time-onnx/blob/main/sam2/modeling/position_encoding.py. With that change, the ONNX model can be exported successfully via `torch.export`. ([source](https://github.com/facebookresearch/sam2/issues/186))

**What should I do about `hydra.errors.MissingConfigException: Cannot find primary config` when loading a SAM2 model?**

Hydra cannot find the config file on its search path. Change the working directory to the directory containing the config files before initializing Hydra, and initialize with a relative path:

```python
import os
from hydra import initialize, compose
from hydra.core.global_hydra import GlobalHydra

GlobalHydra.instance().clear()
original_cwd = os.getcwd()
os.chdir(base_path)  # base_path is the directory containing the config files

initialize(config_path=".", job_name="sam2_inference", version_base=None)
config_file = "sam2.1_hiera_l.yaml"
```

Also make sure the config path is correct, for example `configs/sam2.1/sam2.1_hiera_l.yaml`. ([source](https://github.com/facebookresearch/sam2/issues/81))

**How do I fix the `CUDA_HOME environment variable is not set` error when installing SAM2 on Windows?**

Even with the CUDA environment variables set, the error can occur because pip's isolated build environment does not use the correct torch version. Specify the torch version and index URL explicitly when installing, for example:

```bash
pip install --extra-index-url <your-torch-url> .
```

and declare the dependency in `pyproject.toml`: `requires = ["torch==<yourversion>"]`. Using the `--no-build-isolation` flag directly is not recommended unless you have a specific need for it. ([source](https://github.com/facebookresearch/sam2/issues/41))

**What about a CUDA version mismatch during installation (e.g. "detected CUDA version mismatches PyTorch compiled version")?**

The CUDA version detected on the system differs from the one PyTorch was compiled against. Make sure the installed PyTorch matches your system's CUDA version. You can try installing with:

```bash
pip install --no-build-isolation -e .
```

but the recommended fix is to reinstall a PyTorch build that matches your current CUDA version, which avoids inference errors later. ([source](https://github.com/facebookresearch/sam2/issues/18))

**How can other devices on the LAN reach the SAM2 web UI after a Docker deployment?**

By default the container may bind only to 127.0.0.1, so other devices on the LAN cannot reach it. Map the port and bind to 0.0.0.0 when starting the container, for example:

```bash
docker run -p 0.0.0.0:7262:7262 your-image
```

If the browser still reports that it is unsupported, consider the official or community Hugging Face Space instead, such as https://huggingface.co/spaces/fffiloni/SAM2-Video-Predictor. ([source](https://github.com/facebookresearch/sam2/issues/366))

**Why does calling SAM2 from my own script conflict with the project's Hydra configuration?**

If your script uses Hydra itself, it can conflict with the Hydra instance SAM2 initializes internally, so config loading fails. Clear the global Hydra instance before loading the SAM2 model:

```python
from hydra.core.global_hydra import GlobalHydra
GlobalHydra.instance().clear()
```

then re-initialize the configuration you need with the correct path and context. ([source](https://github.com/facebookresearch/sam2/issues/81))