[{"data":1,"prerenderedAt":-1},["ShallowReactive",2],{"similar-OML-Team--open-metric-learning":3,"tool-OML-Team--open-metric-learning":61},[4,18,26,36,44,53],{"id":5,"name":6,"github_repo":7,"description_zh":8,"stars":9,"difficulty_score":10,"last_commit_at":11,"category_tags":12,"status":17},4358,"openclaw","openclaw\u002Fopenclaw","OpenClaw 是一款专为个人打造的本地化 AI 助手，旨在让你在自己的设备上拥有完全可控的智能伙伴。它打破了传统 AI 助手局限于特定网页或应用的束缚，能够直接接入你日常使用的各类通讯渠道，包括微信、WhatsApp、Telegram、Discord、iMessage 等数十种平台。无论你在哪个聊天软件中发送消息，OpenClaw 都能即时响应，甚至支持在 macOS、iOS 和 Android 设备上进行语音交互，并提供实时的画布渲染功能供你操控。\n\n这款工具主要解决了用户对数据隐私、响应速度以及“始终在线”体验的需求。通过将 AI 部署在本地，用户无需依赖云端服务即可享受快速、私密的智能辅助，真正实现了“你的数据，你做主”。其独特的技术亮点在于强大的网关架构，将控制平面与核心助手分离，确保跨平台通信的流畅性与扩展性。\n\nOpenClaw 非常适合希望构建个性化工作流的技术爱好者、开发者，以及注重隐私保护且不愿被单一生态绑定的普通用户。只要具备基础的终端操作能力（支持 macOS、Linux 及 Windows WSL2），即可通过简单的命令行引导完成部署。如果你渴望拥有一个懂你",349277,3,"2026-04-06T06:32:30",[13,14,15,16],"Agent","开发框架","图像","数据工具","ready",{"id":19,"name":20,"github_repo":21,"description_zh":22,"stars":23,"difficulty_score":10,"last_commit_at":24,"category_tags":25,"status":17},3808,"stable-diffusion-webui","AUTOMATIC1111\u002Fstable-diffusion-webui","stable-diffusion-webui 是一个基于 Gradio 构建的网页版操作界面，旨在让用户能够轻松地在本地运行和使用强大的 Stable Diffusion 图像生成模型。它解决了原始模型依赖命令行、操作门槛高且功能分散的痛点，将复杂的 AI 绘图流程整合进一个直观易用的图形化平台。\n\n无论是希望快速上手的普通创作者、需要精细控制画面细节的设计师，还是想要深入探索模型潜力的开发者与研究人员，都能从中获益。其核心亮点在于极高的功能丰富度：不仅支持文生图、图生图、局部重绘（Inpainting）和外绘（Outpainting）等基础模式，还独创了注意力机制调整、提示词矩阵、负向提示词以及“高清修复”等高级功能。此外，它内置了 GFPGAN 和 CodeFormer 等人脸修复工具，支持多种神经网络放大算法，并允许用户通过插件系统无限扩展能力。即使是显存有限的设备，stable-diffusion-webui 也提供了相应的优化选项，让高质量的 AI 艺术创作变得触手可及。",162132,"2026-04-05T11:01:52",[14,15,13],{"id":27,"name":28,"github_repo":29,"description_zh":30,"stars":31,"difficulty_score":32,"last_commit_at":33,"category_tags":34,"status":17},1381,"everything-claude-code","affaan-m\u002Feverything-claude-code","everything-claude-code 是一套专为 AI 编程助手（如 Claude Code、Codex、Cursor 等）打造的高性能优化系统。它不仅仅是一组配置文件，而是一个经过长期实战打磨的完整框架，旨在解决 AI 代理在实际开发中面临的效率低下、记忆丢失、安全隐患及缺乏持续学习能力等核心痛点。\n\n通过引入技能模块化、直觉增强、记忆持久化机制以及内置的安全扫描功能，everything-claude-code 能显著提升 AI 在复杂任务中的表现，帮助开发者构建更稳定、更智能的生产级 AI 代理。其独特的“研究优先”开发理念和针对 Token 消耗的优化策略，使得模型响应更快、成本更低，同时有效防御潜在的攻击向量。\n\n这套工具特别适合软件开发者、AI 研究人员以及希望深度定制 AI 工作流的技术团队使用。无论您是在构建大型代码库，还是需要 AI 协助进行安全审计与自动化测试，everything-claude-code 都能提供强大的底层支持。作为一个曾荣获 Anthropic 黑客大奖的开源项目，它融合了多语言支持与丰富的实战钩子（hooks），让 AI 真正成长为懂上",148568,2,"2026-04-09T23:34:24",[14,13,35],"语言模型",{"id":37,"name":38,"github_repo":39,"description_zh":40,"stars":41,"difficulty_score":32,"last_commit_at":42,"category_tags":43,"status":17},2271,"ComfyUI","Comfy-Org\u002FComfyUI","ComfyUI 是一款功能强大且高度模块化的视觉 AI 引擎，专为设计和执行复杂的 Stable Diffusion 图像生成流程而打造。它摒弃了传统的代码编写模式，采用直观的节点式流程图界面，让用户通过连接不同的功能模块即可构建个性化的生成管线。\n\n这一设计巧妙解决了高级 AI 绘图工作流配置复杂、灵活性不足的痛点。用户无需具备编程背景，也能自由组合模型、调整参数并实时预览效果，轻松实现从基础文生图到多步骤高清修复等各类复杂任务。ComfyUI 拥有极佳的兼容性，不仅支持 Windows、macOS 和 Linux 全平台，还广泛适配 NVIDIA、AMD、Intel 及苹果 Silicon 等多种硬件架构，并率先支持 SDXL、Flux、SD3 等前沿模型。\n\n无论是希望深入探索算法潜力的研究人员和开发者，还是追求极致创作自由度的设计师与资深 AI 绘画爱好者，ComfyUI 都能提供强大的支持。其独特的模块化架构允许社区不断扩展新功能，使其成为当前最灵活、生态最丰富的开源扩散模型工具之一，帮助用户将创意高效转化为现实。",108111,"2026-04-08T11:23:26",[14,15,13],{"id":45,"name":46,"github_repo":47,"description_zh":48,"stars":49,"difficulty_score":32,"last_commit_at":50,"category_tags":51,"status":17},6121,"gemini-cli","google-gemini\u002Fgemini-cli","gemini-cli 是一款由谷歌推出的开源 AI 命令行工具，它将强大的 Gemini 大模型能力直接集成到用户的终端环境中。对于习惯在命令行工作的开发者而言，它提供了一条从输入提示词到获取模型响应的最短路径，无需切换窗口即可享受智能辅助。\n\n这款工具主要解决了开发过程中频繁上下文切换的痛点，让用户能在熟悉的终端界面内直接完成代码理解、生成、调试以及自动化运维任务。无论是查询大型代码库、根据草图生成应用，还是执行复杂的 Git 操作，gemini-cli 
都能通过自然语言指令高效处理。\n\n它特别适合广大软件工程师、DevOps 人员及技术研究人员使用。其核心亮点包括支持高达 100 万 token 的超长上下文窗口，具备出色的逻辑推理能力；内置 Google 搜索、文件操作及 Shell 命令执行等实用工具；更独特的是，它支持 MCP（模型上下文协议），允许用户灵活扩展自定义集成，连接如图像生成等外部能力。此外，个人谷歌账号即可享受免费的额度支持，且项目基于 Apache 2.0 协议完全开源，是提升终端工作效率的理想助手。",100752,"2026-04-10T01:20:03",[52,13,15,14],"插件",{"id":54,"name":55,"github_repo":56,"description_zh":57,"stars":58,"difficulty_score":32,"last_commit_at":59,"category_tags":60,"status":17},4721,"markitdown","microsoft\u002Fmarkitdown","MarkItDown 是一款由微软 AutoGen 团队打造的轻量级 Python 工具，专为将各类文件高效转换为 Markdown 格式而设计。它支持 PDF、Word、Excel、PPT、图片（含 OCR）、音频（含语音转录）、HTML 乃至 YouTube 链接等多种格式的解析，能够精准提取文档中的标题、列表、表格和链接等关键结构信息。\n\n在人工智能应用日益普及的今天，大语言模型（LLM）虽擅长处理文本，却难以直接读取复杂的二进制办公文档。MarkItDown 恰好解决了这一痛点，它将非结构化或半结构化的文件转化为模型“原生理解”且 Token 效率极高的 Markdown 格式，成为连接本地文件与 AI 分析 pipeline 的理想桥梁。此外，它还提供了 MCP（模型上下文协议）服务器，可无缝集成到 Claude Desktop 等 LLM 应用中。\n\n这款工具特别适合开发者、数据科学家及 AI 研究人员使用，尤其是那些需要构建文档检索增强生成（RAG）系统、进行批量文本分析或希望让 AI 助手直接“阅读”本地文件的用户。虽然生成的内容也具备一定可读性，但其核心优势在于为机器",93400,"2026-04-06T19:52:38",[52,14],{"id":62,"github_repo":63,"name":64,"description_en":65,"description_zh":66,"ai_summary_zh":66,"readme_en":67,"readme_zh":68,"quickstart_zh":69,"use_case_zh":70,"hero_image_url":71,"owner_login":72,"owner_name":72,"owner_avatar_url":73,"owner_bio":74,"owner_company":75,"owner_location":75,"owner_email":76,"owner_twitter":75,"owner_website":75,"owner_url":77,"languages":78,"stars":91,"forks":92,"last_commit_at":93,"license":94,"difficulty_score":95,"env_os":96,"env_gpu":97,"env_ram":96,"env_deps":98,"category_tags":104,"github_topics":106,"view_count":32,"oss_zip_url":75,"oss_zip_packed_at":75,"status":17,"created_at":117,"updated_at":118,"faqs":119,"releases":150},6190,"OML-Team\u002Fopen-metric-learning","open-metric-learning","Metric learning and retrieval pipelines, models and zoo.","open-metric-learning 是一个基于 PyTorch 的开源框架，专为训练和验证能生成高质量嵌入（Embeddings）的模型而设计。它提供了一套完整的度量学习与检索流程、预训练模型库以及丰富的工具组件，帮助开发者轻松构建高效的相似性搜索系统。\n\n在传统分类任务中，模型通常不直接优化向量间的距离（如余弦相似度或 L2 距离），导致直接提取特征用于检索时效果不佳。open-metric-learning 正是为了解决这一痛点，通过专门的损失函数和训练策略，让模型学会“拉近”相似样本、“推远”不同样本，从而显著提升检索精度。\n\n该工具非常适合从事计算机视觉、推荐系统或搜索引擎开发的工程师与研究人员使用。无论是需要构建以图搜图功能，还是开发个性化推荐算法，都能从中获益。其亮点在于模块化设计清晰，支持多种主流度量学习算法，并兼容 Python 3.10 至 3.12 版本，便于快速实验与部署。此外，该项目已被牛津大学、高等经济大学等学术机构及多家科技企业应用于实际研究与产品中，具备良好的社区支持与可靠性。","\u003Cdiv align=\"center\">\n\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FOML-Team_open-metric-learning_readme_59cb5b15e220.jpg\" width=\"400px\">\n\n\n[![Documentation Status](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FOML-Team_open-metric-learning_readme_13d664e1afd7.png)](https:\u002F\u002Fopen-metric-learning.readthedocs.io\u002Fen\u002Flatest\u002F?badge=latest)\n[![PyPI Status](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FOML-Team_open-metric-learning_readme_552824f2fc69.png)](https:\u002F\u002Fpepy.tech\u002Fproject\u002Fopen-metric-learning)\n[![Pipi 
version](https:\u002F\u002Fimg.shields.io\u002Fpypi\u002Fv\u002Fopen-metric-learning.svg)](https:\u002F\u002Fpypi.org\u002Fproject\u002Fopen-metric-learning\u002F)\n[![python](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002Fpython_3.10-passing-success)](https:\u002F\u002Fgithub.com\u002FOML-Team\u002Fopen-metric-learning\u002Factions\u002Fworkflows\u002Ftests.yaml\u002Fbadge.svg?)\n[![python](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002Fpython_3.11-passing-success)](https:\u002F\u002Fgithub.com\u002FOML-Team\u002Fopen-metric-learning\u002Factions\u002Fworkflows\u002Ftests.yaml\u002Fbadge.svg?)\n[![python](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002Fpython_3.12-passing-success)](https:\u002F\u002Fgithub.com\u002FOML-Team\u002Fopen-metric-learning\u002Factions\u002Fworkflows\u002Ftests.yaml\u002Fbadge.svg?)\n\n\nOML is a PyTorch-based framework to train and validate the models producing high-quality embeddings.\n\n### Trusted by\n\n\u003Cdiv align=\"center\">\n\u003Ca href=\"https:\u002F\u002Fdocs.neptune.ai\u002Fintegrations\u002Fcommunity_developed\u002F\" target=\"_blank\">\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FOML-Team_open-metric-learning_readme_da17376ad006.png\" width=\"100\"\u002F>\u003C\u002Fa>ㅤㅤ\n\u003Ca href=\"https:\u002F\u002Fwww.newyorker.de\u002F\" target=\"_blank\">\u003Cimg src=\"https:\u002F\u002Fupload.wikimedia.org\u002Fwikipedia\u002Fcommons\u002Fthumb\u002Fd\u002Fd8\u002FNew_Yorker.svg\u002F1280px-New_Yorker.svg.png\" width=\"100\"\u002F>\u003C\u002Fa>ㅤㅤ\n\u003Ca href=\"https:\u002F\u002Fwww.epoch8.co\u002F\" target=\"_blank\">\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FOML-Team_open-metric-learning_readme_576b4d060b03.png\" width=\"100\"\u002F>\u003C\u002Fa>ㅤㅤ\n\u003Ca href=\"https:\u002F\u002Fwww.meituan.com\" target=\"_blank\">\u003Cimg src=\"https:\u002F\u002Fupload.wikimedia.org\u002Fwikipedia\u002Fcommons\u002F6\u002F61\u002FMeituan_English_Logo.png\" width=\"100\"\u002F>\u003C\u002Fa>ㅤㅤ\n\u003Ca href=\"https:\u002F\u002Fconstructor.io\u002F\" target=\"_blank\">\u003Cimg src=\"https:\u002F\u002Frethink.industries\u002Fwp-content\u002Fuploads\u002F2022\u002F04\u002Fconstructor.io-logo.png\" width=\"100\"\u002F>\u003C\u002Fa>ㅤㅤ\n\u003Ca href=\"https:\u002F\u002Fedgify.ai\u002F\" target=\"_blank\">\u003Cimg src=\"https:\u002F\u002Fedgify.ai\u002Fwp-content\u002Fuploads\u002F2024\u002F04\u002Fnew-edgify-logo.svg\" width=\"100\" height=\"30\"\u002F>\u003C\u002Fa>ㅤㅤ\n\u003Ca href=\"https:\u002F\u002Finspector-cloud.ru\u002F\" target=\"_blank\">\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FOML-Team_open-metric-learning_readme_01ae87cd31d5.png\" width=\"150\" height=\"30\"\u002F>\u003C\u002Fa>ㅤㅤ\n\u003Ca href=\"https:\u002F\u002Fyango-tech.com\u002F\" target=\"_blank\">\u003Cimg src=\"https:\u002F\u002Fyango-backend.sborkademo.com\u002Fmedia\u002Fpages\u002Fhome\u002F205f66f309-1717169752\u002Fopengr4-1200x630-crop-q85.jpg\" width=\"100\" height=\"30\"\u002F>\u003C\u002Fa>ㅤㅤ\n\u003Ca href=\"https:\u002F\u002Fwww.adagrad.ai\u002F\" target=\"_blank\">\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FOML-Team_open-metric-learning_readme_ff3114d64b40.png\" width=\"100\" height=\"30\"\u002F>\u003C\u002Fa>\n\n\u003Ca href=\"https:\u002F\u002Fwww.ox.ac.uk\u002F\" target=\"_blank\">\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FOML-Team_open-metric-learning_readme_e6f6f0101971.png\" width=\"120\"\u002F>\u003C\u002Fa>ㅤㅤ\n\u003Ca 
href=\"https:\u002F\u002Fwww.hse.ru\u002Fen\u002F\" target=\"_blank\">\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FOML-Team_open-metric-learning_readme_80c3a6af9039.jpg\" width=\"100\"\u002F>\u003C\u002Fa>\n\nThere is a number of people from\n[Oxford](https:\u002F\u002Fwww.ox.ac.uk\u002F) and\n[HSE](https:\u002F\u002Fwww.hse.ru\u002Fen\u002F)\nuniversities who have used OML in their theses.\n[[1]](https:\u002F\u002Fgithub.com\u002Fnilomr\u002Fopen-metric-learning\u002Ftree\u002Fgreat-tit\u002Fgreat-tit-train)\n[[2]](https:\u002F\u002Fgithub.com\u002Fnastygorodi\u002FPROJECT-Deep_Metric_Learning)\n[[3]](https:\u002F\u002Fgithub.com\u002Fnik-fedorov\u002Fterm_paper_metric_learning)\n\n\n\u003Cdiv align=\"left\">\n\n\n\n## [Documentation](https:\u002F\u002Fopen-metric-learning.readthedocs.io\u002Fen\u002Flatest\u002Findex.html)\n\n\u003Cdetails>\n\u003Csummary>FAQ\u003C\u002Fsummary>\n\n\u003Cdetails>\n\u003Csummary>Why do I need OML?\u003C\u002Fsummary>\n\u003Cp>\n\nYou may think *\"If I need image embeddings I can simply train a vanilla classifier and take its penultimate layer\"*.\nWell, it makes sense as a starting point. But there are several possible drawbacks:\n\n* If you want to use embeddings to perform searching you need to calculate some distance among them (for example, cosine or L2).\n  Usually, **you don't directly optimize these distances during the training** in the classification setup. So, you can only hope that\n  final embeddings will have the desired properties.\n\n* **The second problem is the validation process**.\n  In the searching setup, you usually care how related your top-N outputs are to the query.\n  The natural way to evaluate the model is to simulate searching requests to the reference set\n  and apply one of the retrieval metrics.\n  So, there is no guarantee that classification accuracy will correlate with these metrics.\n\n* Finally, you may want to implement a metric learning pipeline by yourself.\n  **There is a lot of work**: to use triplet loss you need to form batches in a specific way,\n  implement different kinds of triplets mining, tracking distances, etc. 
For the validation, you also need to\n  implement retrieval metrics,\n  which include effective embeddings accumulation during the epoch, covering corner cases, etc.\n  It's even harder if you have several gpus and use DDP.\n  You may also want to visualize your search requests by highlighting good and bad search results.\n  Instead of doing it by yourself, you can simply use OML for your purposes.\n\n\u003C\u002Fp>\n\u003C\u002Fdetails>\n\n\n\u003Cdetails>\n\u003Csummary>What is the difference between Open Metric Learning and PyTorch Metric Learning?\u003C\u002Fsummary>\n\u003Cp>\n\n[PML](https:\u002F\u002Fgithub.com\u002FKevinMusgrave\u002Fpytorch-metric-learning) is the popular library for Metric Learning,\nand it includes a rich collection of losses, miners, distances, and reducers; that is why we provide straightforward\n[examples](https:\u002F\u002Fopen-metric-learning.readthedocs.io\u002Fen\u002Flatest\u002Ffeature_extraction\u002Fpython_examples.html#usage-with-pytorch-metric-learning) of using them with OML.\nInitially, we tried to use PML, but in the end, we came up with our library, which is more pipeline \u002F recipes oriented.\nThat is how OML differs from PML:\n\n* OML has [Pipelines](https:\u002F\u002Fgithub.com\u002FOML-Team\u002Fopen-metric-learning\u002Ftree\u002Fmain\u002Fpipelines)\n  which allows training models by preparing a config and your data in the required format\n  (it's like converting data into COCO format to train a detector from [mmdetection](https:\u002F\u002Fgithub.com\u002Fopen-mmlab\u002Fmmdetection)).\n\n* OML focuses on end-to-end pipelines and practical use cases.\n  It has config based examples on popular benchmarks close to real life (like photos of products of thousands ids).\n  We found some good combinations of hyperparameters on these datasets, trained and published models and their configs.\n  Thus, it makes OML more recipes oriented than PML, and its author\n  [confirms](https:\u002F\u002Fgithub.com\u002FKevinMusgrave\u002Fpytorch-metric-learning\u002Fissues\u002F169#issuecomment-670814393)\n  this saying that his library is a set of tools rather the recipes, moreover, the examples in PML are mostly for CIFAR and MNIST datasets.\n\n* OML has the [Zoo](https:\u002F\u002Fgithub.com\u002FOML-Team\u002Fopen-metric-learning#zoo) of pretrained models that can be easily accessed from\n  the code in the same way as in `torchvision` (when you type `resnet50(pretrained=True)`).\n\n* OML is integrated with [PyTorch Lightning](https:\u002F\u002Fwww.pytorchlightning.ai\u002F), so, we can use the power of its\n  [Trainer](https:\u002F\u002Fpytorch-lightning.readthedocs.io\u002Fen\u002Fstable\u002Fcommon\u002Ftrainer.html).\n  This is especially helpful when we work with DDP, so, you compare our\n  [DDP example](https:\u002F\u002Fopen-metric-learning.readthedocs.io\u002Fen\u002Flatest\u002Ffeature_extraction\u002Fpython_examples.html#usage-with-pytorch-lightning)\n  and the\n  [PMLs one](https:\u002F\u002Fgithub.com\u002FKevinMusgrave\u002Fpytorch-metric-learning\u002Fblob\u002Fmaster\u002Fexamples\u002Fnotebooks\u002FDistributedTripletMarginLossMNIST.ipynb).\n  By the way, PML also has [Trainers](https:\u002F\u002Fkevinmusgrave.github.io\u002Fpytorch-metric-learning\u002Ftrainers\u002F), but it's not\n  widely used in the examples and custom `train` \u002F `test` functions are used instead.\n\nWe believe that having Pipelines, laconic examples, and Zoo of pretrained models sets the entry threshold to a really low 
value.\n\n\u003C\u002Fp>\n\u003C\u002Fdetails>\n\n\n\u003Cdetails>\n\u003Csummary>What is Metric Learning?\u003C\u002Fsummary>\n\u003Cp>\n\nMetric Learning problem (also known as *extreme classification* problem) means a situation in which we\nhave thousands of ids of some entities, but only a few samples for every entity.\nOften we assume that during the test stage (or production) we will deal with unseen entities\nwhich makes it impossible to apply the vanilla classification pipeline directly. In many cases obtained embeddings\nare used to perform search or matching procedures over them.\n\nHere are a few examples of such tasks from the computer vision sphere:\n* Person\u002FAnimal Re-Identification\n* Face Recognition\n* Landmark Recognition\n* Searching engines for online shops\n and many others.\n\u003C\u002Fp>\n\u003C\u002Fdetails>\n\n\n\u003Cdetails>\n\u003Csummary>Glossary (Naming convention) \u003C\u002Fsummary>\n\u003Cp>\n\n* `embedding` - model's output (also known as `features vector` or `descriptor`).\n* `query` - a sample which is used as a request in the retrieval procedure.\n* `gallery set` - the set of entities to search items similar to `query` (also known as `reference` or `index`).\n* `Sampler` - an argument for `DataLoader` which is used to form batches\n* `Miner` - the object to form pairs or triplets after the batch was formed by `Sampler`. It's not necessary to form\n  the combinations of samples only inside the current batch, thus, the memory bank may be a part of `Miner`.\n* `Samples`\u002F`Labels`\u002F`Instances` - as an example let's consider DeepFashion dataset. It includes thousands of\n  fashion item ids (we name them `labels`) and several photos for each item id\n  (we name the individual photo as `instance` or `sample`). All of the fashion item ids have their groups like\n  \"skirts\", \"jackets\", \"shorts\" and so on (we name them `categories`).\n  Note, we avoid using the term `class` to avoid misunderstanding.\n* `training epoch` - batch samplers which we use for combination-based losses usually have a length equal to\n  `[number of labels in training dataset] \u002F [numbers of labels in one batch]`. It means that we don't observe all of\n  the available training samples in one epoch (as opposed to vanilla classification),\n  instead, we observe all of the available labels.\n\n\u003C\u002Fp>\n\u003C\u002Fdetails>\n\n\n\u003Cdetails>\n\u003Csummary>How good may be a model trained with OML? \u003C\u002Fsummary>\n\u003Cp>\n\nIt may be comparable with the current (2022 year) [SotA](https:\u002F\u002Fpaperswithcode.com\u002Ftask\u002Fmetric-learning) methods,\nfor example, [Hyp-ViT](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2203.10833.pdf).\n*(Few words about this approach: it's a ViT architecture trained with contrastive loss,\nbut the embeddings were projected into some hyperbolic space.\nAs the authors claimed, such a space is able to describe the nested structure of real-world data.\nSo, the paper requires some heavy math to adapt the usual operations for the hyperbolical space.)*\n\nWe trained the same architecture with triplet loss, fixing the rest of the parameters:\ntraining and test transformations, image size, and optimizer. 
See configs in [Models Zoo](https:\u002F\u002Fgithub.com\u002FOML-Team\u002Fopen-metric-learning#zoo).\nThe trick was in heuristics in our miner and sampler:\n\n* [Category Balance Sampler](https:\u002F\u002Fopen-metric-learning.readthedocs.io\u002Fen\u002Flatest\u002Fcontents\u002Fsamplers.html#categorybalancesampler)\n  forms the batches limiting the number of categories *C* in it.\n  For instance, when *C = 1* it puts only jackets in one batch and only jeans into another one (just an example).\n  It automatically makes the negative pairs harder: it's more meaningful for a model to realise why two jackets\n  are different than to understand the same about a jacket and a t-shirt.\n\n* [Hard Triplets Miner](https:\u002F\u002Fopen-metric-learning.readthedocs.io\u002Fen\u002Flatest\u002Fcontents\u002Fminers.html#hardtripletsminer)\n  makes the task even harder keeping only the hardest triplets (with maximal positive and minimal negative distances).\n\nHere are *CMC@1* scores for 2 popular benchmarks.\nSOP dataset: Hyp-ViT — 85.9, ours — 86.6. DeepFashion dataset: Hyp-ViT — 92.5, ours — 92.1.\nThus, utilising simple heuristics and avoiding heavy math we are able to perform on SotA level.\n\n\u003C\u002Fp>\n\u003C\u002Fdetails>\n\n\u003Cdetails>\n\u003Csummary>What about Self-Supervised Learning?\u003C\u002Fsummary>\n\u003Cp>\n\nRecent research in SSL definitely obtained great results. The problem is that these approaches\nrequired an enormous amount of computing to train the model. But in our framework, we consider the most common case\nwhen the average user has no more than a few GPUs.\n\nAt the same time, it would be unwise to ignore success in this sphere, so we still exploit it in two ways:\n* As a source of checkpoints that would be great to start training with. From publications and our experience,\n  they are much better as initialisation than the default supervised model trained on ImageNet. Thus, we added the possibility\n  to initialise your models using these pretrained checkpoints only by passing an argument in the config or the constructor.\n* As a source of inspiration. For example, we adapted the idea of a memory bank from *MoCo* for the *TripletLoss*.\n\n\u003C\u002Fp>\n\u003C\u002Fdetails>\n\n\n\u003Cdetails>\n\u003Csummary>Do I need to know other frameworks to use OML?\u003C\u002Fsummary>\n\u003Cp>\n\nNo, you don't. OML is a framework-agnostic. Despite we use PyTorch Lightning as a loop\nrunner for the experiments, we also keep the possibility to run everything on pure PyTorch.\nThus, only the tiny part of OML is Lightning-specific and we keep this logic separately from\nother code (see `oml.lightning`). Even when you use Lightning, you don't need to know it, since\nwe provide ready to use [Pipelines](https:\u002F\u002Fgithub.com\u002FOML-Team\u002Fopen-metric-learning\u002Fblob\u002Fmain\u002Fpipelines\u002F).\n\nThe possibility of using pure PyTorch and modular structure of the code leaves a room for utilizing\nOML with your favourite framework after the implementation of the necessary wrappers.\n\n\u003C\u002Fp>\n\u003C\u002Fdetails>\n\n\n\u003Cdetails>\n\u003Csummary>Can I use OML without any knowledge in DataScience?\u003C\u002Fsummary>\n\u003Cp>\n\nYes. 
To run the experiment with [Pipelines](https:\u002F\u002Fgithub.com\u002FOML-Team\u002Fopen-metric-learning\u002Fblob\u002Fmain\u002Fpipelines\u002F)\nyou only need to write a converter\nto our format (it means preparing the\n`.csv` table with a few predefined columns).\nThat's it!\n\nProbably we already have a suitable pre-trained model for your domain\nin our *Models Zoo*. In this case, you don't even need to train it.\n\u003C\u002Fp>\n\u003C\u002Fdetails>\n\n\n\u003Cdetails>\n\u003Csummary>Can I export models to ONNX?\u003C\u002Fsummary>\n\u003Cp>\n\nCurrently, we don't support exporting models to ONNX directly.\nHowever, you can use the built-in PyTorch capabilities to achieve this. For more information, please refer to this [issue](https:\u002F\u002Fgithub.com\u002FOML-Team\u002Fopen-metric-learning\u002Fissues\u002F592).\n\u003C\u002Fp>\n\u003C\u002Fdetails>\n\n\u003C\u002Fdetails>\n\n\n[DOCUMENTATION](https:\u002F\u002Fopen-metric-learning.readthedocs.io\u002Fen\u002Flatest\u002Findex.html)\n\nTUTORIAL TO START WITH:\n[English](https:\u002F\u002Fmedium.com\u002F@AlekseiShabanov\u002Fpractical-metric-learning-b0410cda2201) |\n[Russian](https:\u002F\u002Fhabr.com\u002Fru\u002Fcompany\u002Fods\u002Fblog\u002F695380\u002F) |\n[Chinese](https:\u002F\u002Fzhuanlan.zhihu.com\u002Fp\u002F683102241)\n\n\u003Cdetails>\n\u003Csummary>MORE\u003C\u002Fsummary>\n\n* The\n[DEMO](https:\u002F\u002Fdapladoc-oml-postprocessing-demo-srcappmain-pfh2g0.streamlit.app\u002F)\nfor our paper\n[STIR: Siamese Transformers for Image Retrieval Postprocessing](https:\u002F\u002Farxiv.org\u002Fabs\u002F2304.13393)\n\n* Meet OpenMetricLearning (OML) on\n[Marktechpost](https:\u002F\u002Fwww.marktechpost.com\u002F2023\u002F12\u002F26\u002Fmeet-openmetriclearning-oml-a-pytorch-based-python-framework-to-train-and-validate-the-deep-learning-models-producing-high-quality-embeddings\u002F)\n\n* The report for Berlin-based meetup: \"Computer Vision in production\". 
November, 2022.\n[Link](https:\u002F\u002Fdrive.google.com\u002Fdrive\u002Ffolders\u002F1uHmLU8vMrMVMFodt36u0uXAgYjG_3D30?usp=share_link)\n\n\u003C\u002Fdetails>\n\n## [Installation](https:\u002F\u002Fopen-metric-learning.readthedocs.io\u002Fen\u002Flatest\u002Foml\u002Finstallation.html)\n\n```shell\npip install -U open-metric-learning  # minimum dependencies\npip install -U open-metric-learning[nlp]\npip install -U open-metric-learning[audio]\npip install -U open-metric-learning[pipelines]\n\n# in the case of conflicts install without dependencies and manage versions manually:\npip install --no-deps open-metric-learning\n```\n\n\n## OML features\n\n\u003Cdiv style=\"overflow-x: auto;\">\n\n\u003Ctable style=\"width: 100%; border-collapse: collapse; border-spacing: 0; margin: 0; padding: 0;\">\n\n\u003Ctr>\n\u003C\u002Ftr>\n\n\u003Ctr>\n\u003Ctd style=\"text-align: left;\">\n\u003Ca href=\"https:\u002F\u002Fopen-metric-learning.readthedocs.io\u002Fen\u002Flatest\u002Fcontents\u002Flosses.html\"> \u003Cb>Losses\u003C\u002Fb>\u003C\u002Fa> |\n\u003Ca href=\"https:\u002F\u002Fopen-metric-learning.readthedocs.io\u002Fen\u002Flatest\u002Fcontents\u002Fminers.html\"> \u003Cb>Miners\u003C\u002Fb>\u003C\u002Fa>\n\n```python\nminer = AllTripletsMiner()\nminer = NHardTripletsMiner()\nminer = MinerWithBank()\n...\ncriterion = TripletLossWithMiner(0.1, miner)\ncriterion = ArcFaceLoss()\ncriterion = SurrogatePrecision()\n```\n\n\u003C\u002Ftd>\n\u003Ctd style=\"text-align: left;\">\n\u003Ca href=\"https:\u002F\u002Fopen-metric-learning.readthedocs.io\u002Fen\u002Flatest\u002Fcontents\u002Fsamplers.html\"> \u003Cb>Samplers\u003C\u002Fb>\u003C\u002Fa>\n\n```python\nlabels = train.get_labels()\nl2c = train.get_label2category()\n\n\nsampler = BalanceSampler(labels)\nsampler = CategoryBalanceSampler(labels, l2c)\nsampler = DistinctCategoryBalanceSampler(labels, l2c)\n```\n\n\u003C\u002Ftd>\n\u003C\u002Ftr>\n\n\u003Ctr>\n\u003C\u002Ftr>\n\n\u003Ctr>\n\u003Ctd style=\"text-align: left;\">\n\u003Ca href=\"https:\u002F\u002Fgithub.com\u002FOML-Team\u002Fopen-metric-learning\u002Ftree\u002Fmain\u002Fpipelines\u002F\">\u003Cb>Configs support\u003C\u002Fb>\u003C\u002Fa>\n\n```yaml\nmax_epochs: 10\nsampler:\n  name: balance\n  args:\n    n_labels: 2\n    n_instances: 2\n```\n\n\u003C\u002Ftd>\n\u003Ctd style=\"text-align: left;\">\n\u003Ca href=\"https:\u002F\u002Fgithub.com\u002FOML-Team\u002Fopen-metric-learning?tab=readme-ov-file#zoo\">\u003Cb>Pre-trained models of different modalities\u003C\u002Fb>\u003C\u002Fa>\n\n```python\nmodel_hf = AutoModel.from_pretrained(\"roberta-base\")\ntokenizer = AutoTokenizer.from_pretrained(\"roberta-base\")\nextractor_txt = HFWrapper(model_hf)\n\nextractor_img = ViTExtractor.from_pretrained(\"vits16_dino\")\ntransforms, _ = get_transforms_for_pretrained(\"vits16_dino\")\n\nextractor_audio = ECAPATDNNExtractor.from_pretrained()\n```\n\n\u003C\u002Ftd>\n\u003C\u002Ftr>\n\n\u003Ctr>\n\u003C\u002Ftr>\n\n\u003Ctr>\n\u003Ctd style=\"text-align: left;\">\n\u003Ca href=\"https:\u002F\u002Fopen-metric-learning.readthedocs.io\u002Fen\u002Flatest\u002Fpostprocessing\u002Falgo_examples.html\">\u003Cb>Post-processing\u003C\u002Fb>\u003C\u002Fa>\n\n```python\nemb = inference(extractor, dataset)\nrr = RetrievalResults.from_embeddings(emb, dataset)\n\npostprocessor = AdaptiveThresholding()\nrr_upd = postprocessor.process(rr, dataset)\n```\n\n\u003C\u002Ftd>\n\u003Ctd style=\"text-align: left;\">\n\u003Ca 
href=\"https:\u002F\u002Fopen-metric-learning.readthedocs.io\u002Fen\u002Flatest\u002Fpostprocessing\u002Fsiamese_examples.html\">\u003Cb>Post-processing by NN\u003C\u002Fb>\u003C\u002Fa> |\n\u003Ca href=\"https:\u002F\u002Fgithub.com\u002FOML-Team\u002Fopen-metric-learning\u002Ftree\u002Fmain\u002Fpipelines\u002Fpostprocessing\u002Fpairwise_postprocessing\">\u003Cb>Paper\u003C\u002Fb>\u003C\u002Fa>\n\n```python\nembeddings = inference(extractor, dataset)\nrr = RetrievalResults.from_embeddings(embeddings, dataset)\n\npostprocessor = PairwiseReranker(ConcatSiamese(), top_n=3)\nrr_upd = postprocessor.process(rr, dataset)\n```\n\n\u003C\u002Ftd>\n\u003C\u002Ftr>\n\n\u003Ctr>\n\u003C\u002Ftr>\n\n\u003Ctr>\n\u003Ctd style=\"text-align: left;\">\n\u003Ca href=\"https:\u002F\u002Fopen-metric-learning.readthedocs.io\u002Fen\u002Flatest\u002Foml\u002Flogging.html#\">\u003Cb>Logging\u003C\u002Fb>\u003C\u002Fa>\u003Cbr>\n\n```python\nlogger = TensorBoardPipelineLogger()\nlogger = NeptunePipelineLogger()\nlogger = WandBPipelineLogger()\nlogger = MLFlowPipelineLogger()\nlogger = ClearMLPipelineLogger()\n```\n\n\u003C\u002Ftd>\n\u003Ctd style=\"text-align: left;\">\n\u003Ca href=\"https:\u002F\u002Fopen-metric-learning.readthedocs.io\u002Fen\u002Flatest\u002Ffeature_extraction\u002Fpython_examples.html#usage-with-pytorch-metric-learning\">\u003Cb>PML\u003C\u002Fb>\u003C\u002Fa>\u003Cbr>\n\n```python\nfrom pytorch_metric_learning import losses\n\ncriterion = losses.TripletMarginLoss(0.2, \"all\")\npred = ViTExtractor()(data)\ncriterion(pred, gts)\n```\n\n\u003C\u002Ftd>\n\u003C\u002Ftr>\n\n\u003Ctr>\n\u003C\u002Ftr>\n\n\u003Ctr>\n\u003Ctd style=\"text-align: left;\">\u003Ca href=\"https:\u002F\u002Fopen-metric-learning.readthedocs.io\u002Fen\u002Flatest\u002Ffeature_extraction\u002Fpython_examples.html#handling-categories\">\u003Cb>Categories support\u003C\u002Fb>\u003C\u002Fa>\n\n```python\n# train\nloader = DataLoader(CategoryBalanceSampler())\n\n# validation\nrr = RetrievalResults.from_embeddings()\nm.calc_retrieval_metrics_rr(rr, query_categories)\n```\n\n\u003C\u002Ftd>\n\u003Ctd style=\"text-align: left;\">\u003Ca href=\"https:\u002F\u002Fopen-metric-learning.readthedocs.io\u002Fen\u002Flatest\u002Fcontents\u002Fmetrics.html\">\u003Cb>Misc metrics\u003C\u002Fb>\u003C\u002Fa>\n\n```python\nembeddigs = inference(model, dataset)\nrr = RetrievalResults.from_embeddings(embeddings, dataset)\n\nm.calc_retrieval_metrics_rr(rr, precision_top_k=(5,))\nm.calc_fnmr_at_fmr_rr(rr, fmr_vals=(0.1,))\nm.calc_topological_metrics(embeddings, pcf_variance=(0.5,))\n```\n\n\u003C\u002Ftd>\n\u003C\u002Ftr>\n\n\u003Ctr>\n\u003C\u002Ftr>\n\n\u003Ctr>\n\u003Ctd style=\"text-align: left;\">\n\u003Ca href=\"https:\u002F\u002Fopen-metric-learning.readthedocs.io\u002Fen\u002Flatest\u002Ffeature_extraction\u002Fpython_examples.html#usage-with-pytorch-lightning\">\u003Cb>Lightning\u003C\u002Fb>\u003C\u002Fa>\u003Cbr>\n\n```python\nimport pytorch_lightning as pl\n\nmodel = ViTExtractor.from_pretrained(\"vits16_dino\")\nclb = MetricValCallback(EmbeddingMetrics(dataset))\nmodule = ExtractorModule(model, criterion, optimizer)\n\ntrainer = pl.Trainer(max_epochs=3, callbacks=[clb])\ntrainer.fit(module, train_loader, val_loader)\n```\n\n\u003C\u002Ftd>\n\u003Ctd style=\"text-align: left;\">\n\u003Ca href=\"https:\u002F\u002Fopen-metric-learning.readthedocs.io\u002Fen\u002Flatest\u002Ffeature_extraction\u002Fpython_examples.html#usage-with-pytorch-lightning\">\u003Cb>Lightning DDP\u003C\u002Fb>\u003C\u002Fa>\u003Cbr>\n\n```python\nclb = 
MetricValCallback(EmbeddingMetrics(val))\nmodule = ExtractorModuleDDP(\n    model, criterion, optimizer, train, val\n)\n\nddp = {\"devices\": 2, \"strategy\": DDPStrategy()}\ntrainer = pl.Trainer(max_epochs=3, callbacks=[clb], **ddp)\ntrainer.fit(module)\n```\n\n\u003C\u002Ftd>\n\u003C\u002Ftr>\n\n\u003C\u002Ftable>\n\n\u003C\u002Fdiv>\n\n## [Examples](https:\u002F\u002Fopen-metric-learning.readthedocs.io\u002Fen\u002Flatest\u002Ffeature_extraction\u002Fpython_examples.html#)\n\nHere is an example of how to train, validate and post-process the model\non a tiny dataset of\n[images](https:\u002F\u002Fdrive.google.com\u002Fdrive\u002Ffolders\u002F1plPnwyIkzg51-mLUXWTjREHgc1kgGrF4),\n[texts](https:\u002F\u002Fgithub.com\u002FOML-Team\u002Fopen-metric-learning\u002Fblob\u002Fmain\u002Foml\u002Futils\u002Fdownload_mock_dataset.py#L83),\nor\n[audios](https:\u002F\u002Fdrive.google.com\u002Fdrive\u002Ffolders\u002F1NcKnyXqDyyYARrDETmhJcTTXegO3W0Ju).\nSee more details on dataset\n[format](https:\u002F\u002Fopen-metric-learning.readthedocs.io\u002Fen\u002Flatest\u002Foml\u002Fdata.html).\n\nSCROLL RIGHT FOR **IMAGES** > **TEXTS** > **AUDIOS**\n\n\u003Cdiv style=\"overflow-x: auto;\">\n\n\u003Ctable style=\"width: 100%; border-collapse: collapse; border-spacing: 0; margin: 0; padding: 0;\">\n\n\u003Ctr>\n\u003C\u002Ftr>\n\n\u003Ctr>\n    \u003Ctd style=\"text-align: left; padding: 0;\">\u003Cb>IMAGES\u003C\u002Fb>\u003C\u002Ftd>\n    \u003Ctd style=\"text-align: left; padding: 0;\">\u003Cb>TEXTS\u003C\u002Fb>\u003C\u002Ftd>\n    \u003Ctd style=\"text-align: left; padding: 0;\">\u003Cb>AUDIOS\u003C\u002Fb>\u003C\u002Ftd>\n\u003C\u002Ftr>\n\n\u003Ctr>\n\u003C\u002Ftr>\n\n\u003Ctr>\n\n\u003Ctd>\n\n[comment]:train-val-img-start\n```python\nfrom torch.optim import Adam\nfrom torch.utils.data import DataLoader\n\nfrom oml import datasets as d\nfrom oml.inference import inference\nfrom oml.losses import TripletLossWithMiner\nfrom oml.metrics import calc_retrieval_metrics_rr\nfrom oml.miners import HardTripletsMiner\nfrom oml.models import ViTExtractor\nfrom oml.registry import get_transforms_for_pretrained\nfrom oml.retrieval import RetrievalResults, AdaptiveThresholding\nfrom oml.samplers import BalanceSampler\nfrom oml.utils import get_mock_images_dataset\n\nmodel = ViTExtractor.from_pretrained(\"vits16_dino\").to(\"cpu\").train()\ntransform, _ = get_transforms_for_pretrained(\"vits16_dino\")\n\ndf_train, df_val = get_mock_images_dataset(global_paths=True)\ntrain = d.ImageLabeledDataset(df_train, transform=transform)\nval = d.ImageQueryGalleryLabeledDataset(df_val, transform=transform)\n\noptimizer = Adam(model.parameters(), lr=1e-4)\ncriterion = TripletLossWithMiner(0.1, HardTripletsMiner(), need_logs=True)\nsampler = BalanceSampler(train.get_labels(), n_labels=2, n_instances=2)\n\n\n# training 1 epoch\nfor batch in DataLoader(train, batch_sampler=sampler):\n    embeddings = model(batch[\"input_tensors\"])\n    loss = criterion(embeddings, batch[\"labels\"])\n    loss.backward()\n    optimizer.step()\n    optimizer.zero_grad()\n    print(criterion.last_logs)\n\n\n# validation by retrieving relevant items\nembeddings = inference(model, val, batch_size=4, num_workers=0)\nrr = RetrievalResults.from_embeddings(embeddings, val, n_items=3)\nrr = AdaptiveThresholding(n_std=2).process(rr)\nrr.visualize(query_ids=[2, 1], dataset=val, show=True)\nprint(calc_retrieval_metrics_rr(rr, map_top_k=(3,), 
cmc_top_k=(1,)))\n\n\n\n```\n[comment]:train-val-img-end\n\n\u003C\u002Ftd>\n\n\u003Ctd>\n\n[comment]:train-val-txt-start\n```python\nfrom torch.optim import Adam\nfrom torch.utils.data import DataLoader\nfrom transformers import AutoModel, AutoTokenizer\n\nfrom oml import datasets as d\nfrom oml.inference import inference\nfrom oml.losses import TripletLossWithMiner\nfrom oml.metrics import calc_retrieval_metrics_rr\nfrom oml.miners import NHardTripletsMiner\nfrom oml.models import HFWrapper\nfrom oml.retrieval import RetrievalResults, AdaptiveThresholding\nfrom oml.samplers import BalanceSampler\nfrom oml.utils import get_mock_texts_dataset\n\nmodel = HFWrapper(AutoModel.from_pretrained(\"bert-base-uncased\"), 768).to(\"cpu\").train()\ntokenizer = AutoTokenizer.from_pretrained(\"bert-base-uncased\")\n\ndf_train, df_val = get_mock_texts_dataset()\ntrain = d.TextLabeledDataset(df_train, tokenizer=tokenizer)\nval = d.TextQueryGalleryLabeledDataset(df_val, tokenizer=tokenizer)\n\noptimizer = Adam(model.parameters(), lr=1e-4)\ncriterion = TripletLossWithMiner(\n    0.1, NHardTripletsMiner(n_positive=2, n_negative=2), need_logs=True\n)\nsampler = BalanceSampler(train.get_labels(), n_labels=2, n_instances=2)\n\n\n# training 1 epoch\nfor batch in DataLoader(train, batch_sampler=sampler):\n    embeddings = model(batch[\"input_tensors\"])\n    loss = criterion(embeddings, batch[\"labels\"])\n    loss.backward()\n    optimizer.step()\n    optimizer.zero_grad()\n    print(criterion.last_logs)\n\n\n# validation by retrieving relevant items\nembeddings = inference(model, val, batch_size=4, num_workers=0)\nrr = RetrievalResults.from_embeddings(embeddings, val, n_items=3)\nrr = AdaptiveThresholding(n_std=2).process(rr)\nrr.visualize(query_ids=[2, 1], dataset=val, show=True)\nprint(calc_retrieval_metrics_rr(rr, map_top_k=(3,), cmc_top_k=(1,)))\n\n\n\n```\n[comment]:train-val-txt-end\n\u003C\u002Ftd>\n\n\u003Ctd>\n\n[comment]:train-val-audio-start\n```python\nfrom torch.optim import Adam\nfrom torch.utils.data import DataLoader\n\nfrom oml import datasets as d\nfrom oml.inference import inference\nfrom oml.losses import ArcFaceLoss\nfrom oml.metrics import calc_retrieval_metrics_rr\nfrom oml.models import ECAPATDNNExtractor\nfrom oml.retrieval import AdaptiveThresholding, RetrievalResults\nfrom oml.samplers import BalanceSampler\nfrom oml.utils import get_mock_audios_dataset\n\nmodel = ECAPATDNNExtractor.from_pretrained(\"ecapa_tdnn_taoruijie\").to(\"cpu\").train()\n\ndf_train, df_val = get_mock_audios_dataset(global_paths=True)\ntrain = d.AudioLabeledDataset(df_train)\nval = d.AudioQueryGalleryLabeledDataset(df_val)\n\noptimizer = Adam(model.parameters(), lr=1e-4)\ncriterion = ArcFaceLoss(m=0.2, s=30, in_features=192, num_classes=4)  # similar to paper\nsampler = BalanceSampler(train.get_labels(), n_labels=2, n_instances=2)\n\n\n# training 1 epoch\nfor batch in DataLoader(train, batch_sampler=sampler):\n    embeddings = model(batch[\"input_tensors\"])\n    loss = criterion(embeddings, batch[\"labels\"])\n    loss.backward()\n    optimizer.step()\n    optimizer.zero_grad()\n    print(criterion.last_logs)\n\n\n# validation by retrieving relevant items\nembeddings = inference(model, val, batch_size=4, num_workers=0)\nrr = RetrievalResults.from_embeddings(embeddings, val, n_items=3)\nrr = AdaptiveThresholding(n_std=2).process(rr)\nrr.visualize_as_html(query_ids=[2, 1], dataset=val, show=True)\nprint(calc_retrieval_metrics_rr(rr, map_top_k=(3,), 
cmc_top_k=(1,)))\n\n\n\n```\n[comment]:train-val-audio-end\n\u003C\u002Ftd>\n\n\u003C\u002Ftr>\n\n\u003Ctr>\n\u003C\u002Ftr>\n\n\u003Ctr>\n\n\u003Ctd>\n\n\u003Cdetails style=\"padding-bottom: 10px\">\n\u003Csummary>Output\u003C\u002Fsummary>\n\n```python\n{'active_tri': 0.125, 'pos_dist': 82.5, 'neg_dist': 100.5}  # batch 1\n{'active_tri': 0.0, 'pos_dist': 36.3, 'neg_dist': 56.9}     # batch 2\n\n{'cmc': {1: 0.75}, 'precision': {5: 0.75}, 'map': {3: 0.8}}\n\n```\n\n\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FOML-Team_open-metric-learning_readme_a7415836dff9.png\" height=\"200px\">\n\n\u003C\u002Fdetails>\n\n[![Open In Colab](https:\u002F\u002Fcolab.research.google.com\u002Fassets\u002Fcolab-badge.svg)](https:\u002F\u002Fcolab.research.google.com\u002Fdrive\u002F1Fr4HhDOqmjx1hCFS30G3MlYjeqBW5vDg?usp=sharing)\n\n\u003C\u002Ftd>\n\n\u003Ctd>\n\n\u003Cdetails style=\"padding-bottom: 10px\">\n\u003Csummary>Output\u003C\u002Fsummary>\n\n```python\n{'active_tri': 0.0, 'pos_dist': 8.5, 'neg_dist': 11.0}  # batch 1\n{'active_tri': 0.25, 'pos_dist': 8.9, 'neg_dist': 9.8}  # batch 2\n\n{'cmc': {1: 0.8}, 'precision': {5: 0.7}, 'map': {3: 0.9}}\n\n```\n\n\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FOML-Team_open-metric-learning_readme_30f5e3371961.png\" height=\"200px\">\n\n\u003C\u002Fdetails>\n\n[![Open In Colab](https:\u002F\u002Fcolab.research.google.com\u002Fassets\u002Fcolab-badge.svg)](https:\u002F\u002Fcolab.research.google.com\u002Fdrive\u002F19o2Ox2VXZoOWOOXIns7mcs0aHJZgJWeO?usp=sharing)\n\n\u003C\u002Ftd>\n\n\u003Ctd>\n\n\u003Cdetails style=\"padding-bottom: 10px\">\n\u003Csummary>Output\u003C\u002Fsummary>\n\n```python\n{'active_tri': 0.25, 'pos_dist': 17.3, 'neg_dist': 18.4}  # batch 1\n{'active_tri': 0.0, 'pos_dist': 17.1, 'neg_dist': 18.5}   # batch 2\n\n{'cmc': {1: 1.0}, 'precision': {5: 1.0}, 'map': {3: 1.0}}\n\n```\n\n\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FOML-Team_open-metric-learning_readme_0ec1fba6229c.jpg\" height=\"200px\">\n\n\u003C\u002Fdetails>\n\n[![Open In Colab](https:\u002F\u002Fcolab.research.google.com\u002Fassets\u002Fcolab-badge.svg)](https:\u002F\u002Fcolab.research.google.com\u002Fdrive\u002F1Sfz7xMdjXg634-3KmBPq8Zs6i_gbsWD1?usp=sharing)\n\n\u003C\u002Ftd>\n\n\u003C\u002Ftr>\n\n\u003C\u002Ftable>\n\n\u003C\u002Fdiv>\n\n\u003Cbr>\n\n[Extra illustrations, explanations and tips](https:\u002F\u002Fgithub.com\u002FOML-Team\u002Fopen-metric-learning\u002Ftree\u002Fmain\u002Fpipelines\u002Ffeatures_extraction#training)\nfor the code above.\n\n### Retrieval by trained model\n\nHere is an inference time example (in other words, retrieval on test set).\nThe code below works for both texts and images.\n\n\u003Cdetails>\n\u003Csummary>\u003Cb>See example\u003C\u002Fb>\u003C\u002Fsummary>\n\u003Cp>\n\n[comment]:usage-retrieval-start\n```python\nfrom oml.datasets import ImageQueryGalleryDataset\nfrom oml.inference import inference\nfrom oml.models import ViTExtractor\nfrom oml.registry import get_transforms_for_pretrained\nfrom oml.utils import get_mock_images_dataset\nfrom oml.retrieval import RetrievalResults, AdaptiveThresholding\n\n_, df_test = get_mock_images_dataset(global_paths=True)\ndel df_test[\"label\"]  # we don't need gt labels for doing predictions\n\nextractor = ViTExtractor.from_pretrained(\"vits16_dino\").to(\"cpu\")\ntransform, _ = get_transforms_for_pretrained(\"vits16_dino\")\n\ndataset = ImageQueryGalleryDataset(df_test, transform=transform)\nembeddings = inference(extractor, 
dataset, batch_size=4, num_workers=0)\n\nrr = RetrievalResults.from_embeddings(embeddings, dataset, n_items=5)\nrr = AdaptiveThresholding(n_std=3.5).process(rr)\nrr.visualize(query_ids=[0, 1], dataset=dataset, show=True)\n\n# you get the ids of retrieved items and the corresponding distances\nprint(rr)\n```\n[comment]:usage-retrieval-end\n\n\u003C\u002Fdetails>\n\n\n\n### Retrieval by trained model: streaming & txt2im\n\nHere is an example where queries and galleries processed separately.\n* First, it may be useful for **streaming retrieval**, when a gallery (index) set is huge and fixed, but\n  queries are coming in batches.\n* Second, queries and galleries have different natures, for examples, **queries are texts, but galleries are images**.\n\n\n\u003Cdetails>\n\u003Csummary>\u003Cb>See example\u003C\u002Fb>\u003C\u002Fsummary>\n\u003Cp>\n\n[comment]:usage-streaming-retrieval-start\n```python\nimport pandas as pd\n\nfrom oml.datasets import ImageBaseDataset\nfrom oml.inference import inference\nfrom oml.models import ViTExtractor\nfrom oml.registry import get_transforms_for_pretrained\nfrom oml.retrieval import RetrievalResults, ConstantThresholding\nfrom oml.utils import get_mock_images_dataset\n\nextractor = ViTExtractor.from_pretrained(\"vits16_dino\").to(\"cpu\")\ntransform, _ = get_transforms_for_pretrained(\"vits16_dino\")\n\npaths = pd.concat(get_mock_images_dataset(global_paths=True))[\"path\"]\ngalleries, queries1, queries2 = paths[:20], paths[20:22], paths[22:24]\n\n# gallery is huge and fixed, so we only process it once\ndataset_gallery = ImageBaseDataset(galleries, transform=transform)\nembeddings_gallery = inference(extractor, dataset_gallery, batch_size=4, num_workers=0)\n\n# queries come \"online\" in stream\nfor queries in [queries1, queries2]:\n    dataset_query = ImageBaseDataset(queries, transform=transform)\n    embeddings_query = inference(extractor, dataset_query, batch_size=4, num_workers=0)\n\n    # for the operation below we are going to provide integrations with vector search DB like QDrant or Faiss\n    rr = RetrievalResults.from_embeddings_qg(\n        embeddings_query=embeddings_query, embeddings_gallery=embeddings_gallery,\n        dataset_query=dataset_query, dataset_gallery=dataset_gallery\n    )\n    rr = ConstantThresholding(th=80).process(rr)\n    rr.visualize_qg([0, 1], dataset_query=dataset_query, dataset_gallery=dataset_gallery, show=True)\n    print(rr)\n```\n[comment]:usage-streaming-retrieval-end\n\n\u003C\u002Fdetails>\n\n## [Pipelines](https:\u002F\u002Fgithub.com\u002FOML-Team\u002Fopen-metric-learning\u002Ftree\u002Fmain\u002Fpipelines)\n\nPipelines provide a way to run metric learning experiments via changing only the config file.\nAll you need is to prepare your dataset in a required format.\n\nSee [Pipelines](https:\u002F\u002Fgithub.com\u002FOML-Team\u002Fopen-metric-learning\u002Fblob\u002Fmain\u002Fpipelines\u002F) folder for more details:\n* Feature extractor [pipeline](https:\u002F\u002Fgithub.com\u002FOML-Team\u002Fopen-metric-learning\u002Ftree\u002Fmain\u002Fpipelines\u002Ffeatures_extraction)\n* Retrieval re-ranking [pipeline](https:\u002F\u002Fgithub.com\u002FOML-Team\u002Fopen-metric-learning\u002Ftree\u002Fmain\u002Fpipelines\u002Fpostprocessing)\n\n## [Zoo: Images](https:\u002F\u002Fopen-metric-learning.readthedocs.io\u002Fen\u002Flatest\u002Ffeature_extraction\u002Fzoo.html#zoo-images)\n\nYou can use an image model from our Zoo or\nuse other arbitrary models after you inherited it from 
[IExtractor](https:\u002F\u002Fopen-metric-learning.readthedocs.io\u002Fen\u002Flatest\u002Fcontents\u002Finterfaces.html#iextractor).\n\n\u003Cdetails style=\"padding-bottom: 15px\">\n\u003Csummary>\u003Cb>See how to use models\u003C\u002Fb>\u003C\u002Fsummary>\n\u003Cp>\n\n[comment]:zoo-image-start\n```python\nfrom oml.const import CKPT_SAVE_ROOT as CKPT_DIR, MOCK_DATASET_PATH as DATA_DIR\nfrom oml.models import ViTExtractor\nfrom oml.registry import get_transforms_for_pretrained\n\nmodel = ViTExtractor.from_pretrained(\"vits16_dino\").eval()\ntransforms, im_reader = get_transforms_for_pretrained(\"vits16_dino\")\n\nimg = im_reader(DATA_DIR \u002F \"images\" \u002F \"circle_1.jpg\")  # put path to your image here\nimg_tensor = transforms(img)\n# img_tensor = transforms(image=img)[\"image\"]  # for transforms from Albumentations\n\nfeatures = model(img_tensor.unsqueeze(0))\n\n# Check other available models:\nprint(list(ViTExtractor.pretrained_models.keys()))\n\n# Load checkpoint saved on a disk:\nmodel_ = ViTExtractor(weights=CKPT_DIR \u002F \"vits16_dino.ckpt\", arch=\"vits16\", normalise_features=False)\n```\n[comment]:zoo-image-end\n\n\u003C\u002Fp>\n\u003C\u002Fdetails>\n\n### Image models zoo\n\nModels, trained by us.\nThe metrics below are for **224 x 224** images:\n\n|                      model                      | cmc1  |         dataset          |                                              weights                                              |                                                    experiment                                                     |\n|:-----------------------------------------------:|:-----:|:------------------------:|:-------------------------------------------------------------------------------------------------:|:-----------------------------------------------------------------------------------------------------------------:|\n| `ViTExtractor.from_pretrained(\"vits16_inshop\")` | 0.921 |    DeepFashion Inshop    |    [link](https:\u002F\u002Fdrive.google.com\u002Ffile\u002Fd\u002F1niX-TC8cj6j369t7iU2baHQSVN3MVJbW\u002Fview?usp=sharing)     | [link](https:\u002F\u002Fgithub.com\u002FOML-Team\u002Fopen-metric-learning\u002Ftree\u002Fmain\u002Fpipelines\u002Ffeatures_extraction\u002Fextractor_inshop) |\n|  `ViTExtractor.from_pretrained(\"vits16_sop\")`   | 0.866 | Stanford Online Products |   [link](https:\u002F\u002Fdrive.google.com\u002Ffile\u002Fd\u002F1zuGRHvF2KHd59aw7i7367OH_tQNOGz7A\u002Fview?usp=sharing)      |  [link](https:\u002F\u002Fgithub.com\u002FOML-Team\u002Fopen-metric-learning\u002Ftree\u002Fmain\u002Fpipelines\u002Ffeatures_extraction\u002Fextractor_sop)   |\n| `ViTExtractor.from_pretrained(\"vits16_cars\")`   | 0.907 |         CARS 196         |   [link](https:\u002F\u002Fdrive.google.com\u002Fdrive\u002Ffolders\u002F17a4_fg94dox2sfkXmw-KCtiLBlx-ut-1?usp=sharing)    |  [link](https:\u002F\u002Fgithub.com\u002FOML-Team\u002Fopen-metric-learning\u002Ftree\u002Fmain\u002Fpipelines\u002Ffeatures_extraction\u002Fextractor_cars)  |\n|  `ViTExtractor.from_pretrained(\"vits16_cub\")`   | 0.837 |       CUB 200 2011       |   [link](https:\u002F\u002Fdrive.google.com\u002Fdrive\u002Ffolders\u002F1TPCN-eZFLqoq4JBgnIfliJoEK48x9ozb?usp=sharing)    |  [link](https:\u002F\u002Fgithub.com\u002FOML-Team\u002Fopen-metric-learning\u002Ftree\u002Fmain\u002Fpipelines\u002Ffeatures_extraction\u002Fextractor_cub)   |\n\nModels, trained by other researchers.\nNote, that some metrics on particular benchmarks are so high because they were part of the 
training dataset (for example `unicom`).\nThe metrics below are for 224 x 224 images:\n\n|                            model                             | Stanford Online Products | DeepFashion InShop | CUB 200 2011 | CARS 196 |\n|:------------------------------------------------------------:|:------------------------:|:------------------:|:------------:|:--------:|\n|    `ViTUnicomExtractor.from_pretrained(\"vitb16_unicom\")`     |          0.700           |       0.734        |    0.847     |  0.916   |\n|    `ViTUnicomExtractor.from_pretrained(\"vitb32_unicom\")`     |          0.690           |       0.722        |    0.796     |  0.893   |\n|    `ViTUnicomExtractor.from_pretrained(\"vitl14_unicom\")`     |          0.726           |       0.790        |    0.868     |  0.922   |\n| `ViTUnicomExtractor.from_pretrained(\"vitl14_336px_unicom\")`  |          0.745           |       0.810        |    0.875     |  0.924   |\n|    `ViTCLIPExtractor.from_pretrained(\"sber_vitb32_224\")`     |          0.547           |       0.514        |    0.448     |  0.618   |\n|    `ViTCLIPExtractor.from_pretrained(\"sber_vitb16_224\")`     |          0.565           |       0.565        |    0.524     |  0.648   |\n|    `ViTCLIPExtractor.from_pretrained(\"sber_vitl14_224\")`     |          0.512           |       0.555        |    0.606     |  0.707   |\n|   `ViTCLIPExtractor.from_pretrained(\"openai_vitb32_224\")`    |          0.612           |       0.491        |    0.560     |  0.693   |\n|   `ViTCLIPExtractor.from_pretrained(\"openai_vitb16_224\")`    |          0.648           |       0.606        |    0.665     |  0.767   |\n|   `ViTCLIPExtractor.from_pretrained(\"openai_vitl14_224\")`    |          0.670           |       0.675        |    0.745     |  0.844   |\n|        `ViTExtractor.from_pretrained(\"vits16_dino\")`         |          0.648           |       0.509        |    0.627     |  0.265   |\n|         `ViTExtractor.from_pretrained(\"vits8_dino\")`         |          0.651           |       0.524        |    0.661     |  0.315   |\n|        `ViTExtractor.from_pretrained(\"vitb16_dino\")`         |          0.658           |       0.514        |    0.541     |  0.288   |\n|         `ViTExtractor.from_pretrained(\"vitb8_dino\")`         |          0.689           |       0.599        |    0.506     |  0.313   |\n|       `ViTExtractor.from_pretrained(\"vits14_dinov2\")`        |          0.566           |       0.334        |    0.797     |  0.503   |\n|     `ViTExtractor.from_pretrained(\"vits14_reg_dinov2\")`      |          0.566           |       0.332        |    0.795     |  0.740   |\n|       `ViTExtractor.from_pretrained(\"vitb14_dinov2\")`        |          0.565           |       0.342        |    0.842     |  0.644   |\n|     `ViTExtractor.from_pretrained(\"vitb14_reg_dinov2\")`      |          0.557           |       0.324        |    0.833     |  0.828   |\n|       `ViTExtractor.from_pretrained(\"vitl14_dinov2\")`        |          0.576           |       0.352        |    0.844     |  0.692   |\n|     `ViTExtractor.from_pretrained(\"vitl14_reg_dinov2\")`      |          0.571           |       0.340        |    0.840     |  0.871   |\n|    `ResnetExtractor.from_pretrained(\"resnet50_moco_v2\")`     |          0.493           |       0.267        |    0.264     |  0.149   |\n| `ResnetExtractor.from_pretrained(\"resnet50_imagenet1k_v1\")`  |          0.515           |       0.284        |    0.455     |  0.247   |\n\n*The metrics may be different from the ones reported by 
papers,\nbecause the version of train\u002Fval split and usage of bounding boxes may differ.*\n\n## [Zoo: Texts](https:\u002F\u002Fopen-metric-learning.readthedocs.io\u002Fen\u002Flatest\u002Ffeature_extraction\u002Fzoo.html#zoo-texts)\n\nHere is a lightweight integration with [HuggingFace Transformers](https:\u002F\u002Fgithub.com\u002Fhuggingface\u002Ftransformers) models.\nYou can replace it with other arbitrary models inherited from [IExtractor](https:\u002F\u002Fopen-metric-learning.readthedocs.io\u002Fen\u002Flatest\u002Fcontents\u002Finterfaces.html#iextractor).\n\n```shell\npip install open-metric-learning[nlp]\n```\n\n\u003Cdetails style=\"padding-bottom: 15px\">\n\u003Csummary>\u003Cb>See how to use models\u003C\u002Fb>\u003C\u002Fsummary>\n\u003Cp>\n\n[comment]:zoo-text-start\n```python\nfrom transformers import AutoModel, AutoTokenizer\n\nfrom oml.models import HFWrapper\n\nmodel = AutoModel.from_pretrained('bert-base-uncased').eval()\ntokenizer = AutoTokenizer.from_pretrained('bert-base-uncased')\nextractor = HFWrapper(model=model, feat_dim=768)\n\ninp = tokenizer(text=\"Hello world\", return_tensors=\"pt\", add_special_tokens=True)\nembeddings = extractor(inp)\n```\n[comment]:zoo-text-end\n\n\u003C\u002Fp>\n\u003C\u002Fdetails>\n\nNote, we don't have our own text models zoo at the moment.\n\n## [Zoo: Audios](https:\u002F\u002Fopen-metric-learning.readthedocs.io\u002Fen\u002Flatest\u002Ffeature_extraction\u002Fzoo.html#zoo-audios)\n\n\nYou can use an audio model from our Zoo or\nuse other arbitrary models after you inherited it from [IExtractor](https:\u002F\u002Fopen-metric-learning.readthedocs.io\u002Fen\u002Flatest\u002Fcontents\u002Finterfaces.html#iextractor).\n\n```shell\npip install open-metric-learning[audio]\n```\n\n\u003Cdetails style=\"padding-bottom: 15px\">\n\u003Csummary>\u003Cb>See how to use models\u003C\u002Fb>\u003C\u002Fsummary>\n\u003Cp>\n\n[comment]:zoo-audio-start\n```python\nimport torchaudio\n\nfrom oml.models import ECAPATDNNExtractor\nfrom oml.const import CKPT_SAVE_ROOT as CKPT_DIR, MOCK_AUDIO_DATASET_PATH as DATA_DIR\n\n# replace it by your actual paths\nckpt_path = CKPT_DIR \u002F \"ecapa_tdnn_taoruijie.pth\"\nfile_path = DATA_DIR \u002F \"voices\" \u002F \"voice0_0.wav\"\n\nmodel = ECAPATDNNExtractor(weights=ckpt_path, arch=\"ecapa_tdnn_taoruijie\", normalise_features=False).to(\"cpu\").eval()\naudio, sr = torchaudio.load(file_path)\n\nif audio.shape[0] > 1:\n    audio = audio.mean(dim=0, keepdim=True)  # mean by channels\nif sr != 16000:\n    audio = torchaudio.functional.resample(audio, sr, 16000)\n\nembeddings = model.extract(audio)\n```\n[comment]:zoo-audio-end\n\n\u003C\u002Fp>\n\u003C\u002Fdetails>\n\n### Audio models zoo\n\n|                            model                             | Vox1_O | Vox1_E | Vox1_H |\n|:------------------------------------------------------------:|:------:|:------:|:------:|\n| `ECAPATDNNExtractor.from_pretrained(\"ecapa_tdnn_taoruijie\")` |  0.86  |  1.18  |  2.17  |\n\n*The metrics above represent Equal Error Rate (EER). Lower is better.*\n\n## [Contributing guide](https:\u002F\u002Fopen-metric-learning.readthedocs.io\u002Fen\u002Flatest\u002Foml\u002Fcontributing.html)\n\nWe welcome new contributors! 
Please, see our:\n* [Contributing guide](https:\u002F\u002Fopen-metric-learning.readthedocs.io\u002Fen\u002Flatest\u002Foml\u002Fcontributing.html)\n* [Kanban board](https:\u002F\u002Fgithub.com\u002FOML-Team\u002Fopen-metric-learning\u002Fprojects\u002F1)\n\n## Acknowledgments\n\n\u003Ca href=\"https:\u002F\u002Fgithub.com\u002Fcatalyst-team\u002Fcatalyst\" target=\"_blank\">\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FOML-Team_open-metric-learning_readme_da88dba8fe2a.png\" width=\"100\"\u002F>\u003C\u002Fa>\n\nThe project was started in 2020 as a module for [Catalyst](https:\u002F\u002Fgithub.com\u002Fcatalyst-team\u002Fcatalyst) library.\nI want to thank people who worked with me on that module:\n[Julia Shenshina](https:\u002F\u002Fgithub.com\u002Fjulia-shenshina),\n[Nikita Balagansky](https:\u002F\u002Fgithub.com\u002Felephantmipt),\n[Sergey Kolesnikov](https:\u002F\u002Fgithub.com\u002FScitator)\nand others.\n\nI would like to thank people who continue working on this pipeline when it became a separate project:\n[Julia Shenshina](https:\u002F\u002Fgithub.com\u002Fjulia-shenshina),\n[Misha Kindulov](https:\u002F\u002Fgithub.com\u002Fb0nce),\n[Aron Dik](https:\u002F\u002Fgithub.com\u002Fdapladoc),\n[Aleksei Tarasov](https:\u002F\u002Fgithub.com\u002FDaloroAT) and\n[Verkhovtsev Leonid](https:\u002F\u002Fgithub.com\u002Fleoromanovich).\n\n\u003Ca href=\"https:\u002F\u002Fwww.newyorker.de\u002F\" target=\"_blank\">\u003Cimg src=\"https:\u002F\u002Fupload.wikimedia.org\u002Fwikipedia\u002Fcommons\u002Fthumb\u002Fd\u002Fd8\u002FNew_Yorker.svg\u002F1280px-New_Yorker.svg.png\" width=\"100\"\u002F>\u003C\u002Fa>\n\nI also want to thank NewYorker, since the part of functionality was developed (and used) by its computer vision team led by me.\n","\u003Cdiv align=\"center\">\n\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FOML-Team_open-metric-learning_readme_59cb5b15e220.jpg\" width=\"400px\">\n\n\n[![Documentation Status](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FOML-Team_open-metric-learning_readme_13d664e1afd7.png)](https:\u002F\u002Fopen-metric-learning.readthedocs.io\u002Fen\u002Flatest\u002F?badge=latest)\n[![PyPI Status](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FOML-Team_open-metric-learning_readme_552824f2fc69.png)](https:\u002F\u002Fpepy.tech\u002Fproject\u002Fopen-metric-learning)\n[![Pipi version](https:\u002F\u002Fimg.shields.io\u002Fpypi\u002Fv\u002Fopen-metric-learning.svg)](https:\u002F\u002Fpypi.org\u002Fproject\u002Fopen-metric-learning\u002F)\n[![python](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002Fpython_3.10-passing-success)](https:\u002F\u002Fgithub.com\u002FOML-Team\u002Fopen-metric-learning\u002Factions\u002Fworkflows\u002Ftests.yaml\u002Fbadge.svg?)\n[![python](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002Fpython_3.11-passing-success)](https:\u002F\u002Fgithub.com\u002FOML-Team\u002Fopen-metric-learning\u002Factions\u002Fworkflows\u002Ftests.yaml\u002Fbadge.svg?)\n[![python](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002Fpython_3.12-passing-success)](https:\u002F\u002Fgithub.com\u002FOML-Team\u002Fopen-metric-learning\u002Factions\u002Fworkflows\u002Ftests.yaml\u002Fbadge.svg?)\n\n\nOML 是一个基于 PyTorch 的框架，用于训练和验证能够生成高质量嵌入的模型。\n\n### 受信任的机构\n\n\u003Cdiv align=\"center\">\n\u003Ca href=\"https:\u002F\u002Fdocs.neptune.ai\u002Fintegrations\u002Fcommunity_developed\u002F\" target=\"_blank\">\u003Cimg 
src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FOML-Team_open-metric-learning_readme_da17376ad006.png\" width=\"100\"\u002F>\u003C\u002Fa>ㅤㅤ\n\u003Ca href=\"https:\u002F\u002Fwww.newyorker.de\u002F\" target=\"_blank\">\u003Cimg src=\"https:\u002F\u002Fupload.wikimedia.org\u002Fwikipedia\u002Fcommons\u002Fthumb\u002Fd\u002Fd8\u002FNew_Yorker.svg\u002F1280px-New_Yorker.svg.png\" width=\"100\"\u002F>\u003C\u002Fa>ㅤㅤ\n\u003Ca href=\"https:\u002F\u002Fwww.epoch8.co\u002F\" target=\"_blank\">\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FOML-Team_open-metric-learning_readme_576b4d060b03.png\" width=\"100\"\u002F>\u003C\u002Fa>ㅤㅤ\n\u003Ca href=\"https:\u002F\u002Fwww.meituan.com\" target=\"_blank\">\u003Cimg src=\"https:\u002F\u002Fupload.wikimedia.org\u002Fwikipedia\u002Fcommons\u002F6\u002F61\u002FMeituan_English_Logo.png\" width=\"100\"\u002F>\u003C\u002Fa>ㅤㅤ\n\u003Ca href=\"https:\u002F\u002Fconstructor.io\u002F\" target=\"_blank\">\u003Cimg src=\"https:\u002F\u002Frethink.industries\u002Fwp-content\u002Fuploads\u002F2022\u002F04\u002Fconstructor.io-logo.png\" width=\"100\"\u002F>\u003C\u002Fa>ㅤㅤ\n\u003Ca href=\"https:\u002F\u002Fedgify.ai\u002F\" target=\"_blank\">\u003Cimg src=\"https:\u002F\u002Fedgify.ai\u002Fwp-content\u002Fuploads\u002F2024\u002F04\u002Fnew-edgify-logo.svg\" width=\"100\" height=\"30\"\u002F>\u003C\u002Fa>ㅤㅤ\n\u003Ca href=\"https:\u002F\u002Finspector-cloud.ru\u002F\" target=\"_blank\">\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FOML-Team_open-metric-learning_readme_01ae87cd31d5.png\" width=\"150\" height=\"30\"\u002F>\u003C\u002Fa>ㅤㅤ\n\u003Ca href=\"https:\u002F\u002Fyango-tech.com\u002F\" target=\"_blank\">\u003Cimg src=\"https:\u002F\u002Fyango-backend.sborkademo.com\u002Fmedia\u002Fpages\u002Fhome\u002F205f66f309-1717169752\u002Fopengr4-1200x630-crop-q85.jpg\" width=\"100\" height=\"30\"\u002F>\u003C\u002Fa>ㅤㅤ\n\u003Ca href=\"https:\u002F\u002Fwww.adagrad.ai\u002F\" target=\"_blank\">\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FOML-Team_open-metric-learning_readme_ff3114d64b40.png\" width=\"100\" height=\"30\"\u002F>\u003C\u002Fa>\n\n\u003Ca href=\"https:\u002F\u002Fwww.ox.ac.uk\u002F\" target=\"_blank\">\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FOML-Team_open-metric-learning_readme_e6f6f0101971.png\" width=\"120\"\u002F>\u003C\u002Fa>ㅤㅤ\n\u003Ca href=\"https:\u002F\u002Fwww.hse.ru\u002Fen\u002F\" target=\"_blank\">\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FOML-Team_open-metric-learning_readme_80c3a6af9039.jpg\" width=\"100\"\u002F>\u003C\u002Fa>\n\n来自\n[牛津大学](https:\u002F\u002Fwww.ox.ac.uk\u002F) 和\n[HSE 大学](https:\u002F\u002Fwww.hse.ru\u002Fen\u002F)\n的许多研究人员已经在他们的论文中使用了 OML。\n[[1]](https:\u002F\u002Fgithub.com\u002Fnilomr\u002Fopen-metric-learning\u002Ftree\u002Fgreat-tit\u002Fgreat-tit-train)\n[[2]](https:\u002F\u002Fgithub.com\u002Fnastygorodi\u002FPROJECT-Deep_Metric_Learning)\n[[3]](https:\u002F\u002Fgithub.com\u002Fnik-fedorov\u002Fterm_paper_metric_learning)\n\n\n\u003Cdiv align=\"left\">\n\n\n\n## [文档](https:\u002F\u002Fopen-metric-learning.readthedocs.io\u002Fen\u002Flatest\u002Findex.html)\n\n\u003Cdetails>\n\u003Csummary>常见问题解答\u003C\u002Fsummary>\n\n\u003Cdetails>\n\u003Csummary>为什么需要 OML？\u003C\u002Fsummary>\n\u003Cp>\n\n你可能会想：“如果我需要图像嵌入，可以直接训练一个普通的分类器，然后取它的倒数第二层。”\n这确实是一个不错的起点。但这样做也存在一些潜在的问题：\n\n* 如果你想利用嵌入进行检索，就需要计算它们之间的距离（例如余弦距离或 L2 距离）。\n  在分类任务中，**这些距离通常不会在训练过程中被直接优化**。因此，你只能寄希望于最终的嵌入具备理想的性质。\n\n* 
**第二个问题是验证过程**。\n  在检索任务中，我们通常关心的是前 N 个结果与查询的相关性。评估模型的自然方式是模拟对参考集的检索请求，并使用某种检索指标来衡量性能。\n  因此，分类准确率并不能保证与这些检索指标相关。\n\n* 最后，你也可以尝试自己实现度量学习的流程。\n  **这其中涉及大量工作**：比如使用三元组损失时，需要以特定方式构建批次、实现不同类型的三元组挖掘、跟踪距离等；而在验证阶段，还需要实现检索指标，\n  包括高效地积累每个 epoch 的嵌入、处理各种边界情况等。如果你有多块 GPU 并使用 DDP 分布式训练，难度会更大。\n  此外，你可能还希望可视化自己的检索请求，突出显示好的和坏的检索结果。与其从头开始自己动手，不如直接使用 OML 来满足需求。\n\n\u003C\u002Fp>\n\u003C\u002Fdetails>\n\n\n\u003Cdetails>\n\u003Csummary>Open Metric Learning 和 PyTorch Metric Learning 有什么区别？\u003C\u002Fsummary>\n\u003Cp>\n\n[PML](https:\u002F\u002Fgithub.com\u002FKevinMusgrave\u002Fpytorch-metric-learning) 是一个流行的度量学习库，它包含了丰富的损失函数、三元组挖掘方法、距离度量和归约策略；因此，我们也提供了如何将这些组件与 OML 结合使用的简单示例。\n最初，我们曾尝试使用 PML，但最终还是决定开发自己的库，更加注重流水线和实际应用。\n这就是 OML 与 PML 的主要区别：\n\n* OML 提供了 [流水线](https:\u002F\u002Fgithub.com\u002FOML-Team\u002Fopen-metric-learning\u002Ftree\u002Fmain\u002Fpipelines)，只需准备好配置文件和符合要求的数据格式，即可开始训练模型。\n  这类似于将数据转换为 COCO 格式以便使用 [mmdetection](https:\u002F\u002Fgithub.com\u002Fopen-mmlab\u002Fmmdetection) 训练目标检测器。\n\n* OML 更加专注于端到端的流水线和实际应用场景。\n  它提供了基于配置的示例，涵盖了贴近现实生活的常用基准数据集（如包含数千个类别的商品图片）。我们在这些数据集中找到了一些优秀的超参数组合，并训练和发布了相应的模型及其配置文件。\n  因此，相比 PML，OML 更加注重“配方”式的解决方案。PML 的作者也曾在评论中表示，他的库更像是工具集而非现成的解决方案，而且 PML 中的示例大多针对 CIFAR 和 MNIST 数据集。\n\n* OML 拥有 [预训练模型库](https:\u002F\u002Fgithub.com\u002FOML-Team\u002Fopen-metric-learning#zoo)，可以通过代码轻松调用，就像使用 `torchvision` 中的 `resnet50(pretrained=True)` 一样。\n\n* OML 与 [PyTorch Lightning](https:\u002F\u002Fwww.pytorchlightning.ai\u002F) 集成，因此我们可以利用其\n  [Trainer](https:\u002F\u002Fpytorch-lightning.readthedocs.io\u002Fen\u002Fstable\u002Fcommon\u002Ftrainer.html) 的强大功能。\n  这在使用 DDP 时尤其有帮助。你可以对比我们的\n  [DDP 示例](https:\u002F\u002Fopen-metric-learning.readthedocs.io\u002Fen\u002Flatest\u002Ffeature_extraction\u002Fpython_examples.html#usage-with-pytorch-lightning)\n  和\n  [PML 的示例](https:\u002F\u002Fgithub.com\u002FKevinMusgrave\u002Fpytorch-metric-learning\u002Fblob\u002Fmaster\u002Fexamples\u002Fnotebooks\u002FDistributedTripletMarginLossMNIST.ipynb)。\n  顺便说一下，PML 也有 [Trainers](https:\u002F\u002Fkevinmusgrave.github.io\u002Fpytorch-metric-learning\u002Ftrainers\u002F)，但在示例中并不常用，通常还是直接使用自定义的 `train` 或 `test` 函数。\n\n我们认为，提供流水线、简洁的示例以及预训练模型库，能够将入门门槛降到极低。\n\n\u003C\u002Fp>\n\u003C\u002Fdetails>\n\n\n\u003Cdetails>\n\u003Csummary>什么是度量学习？\u003C\u002Fsummary>\n\u003Cp>\n\n度量学习问题（也称为 *极端分类* 问题）是指我们拥有数千个实体的 ID，但每个实体只有少量样本的情况。\n通常假设在测试阶段（或生产环境中）我们会遇到未见过的实体，这就使得无法直接应用常规的分类流程。在这种情况下，通常会利用得到的嵌入向量来进行检索或匹配操作。\n\n以下是计算机视觉领域中的一些此类任务示例：\n* 人物\u002F动物重识别\n* 人脸识别\n* 地标识别\n* 电商网站的搜索引擎\n以及其他许多任务。\n\u003C\u002Fp>\n\u003C\u002Fdetails>\n\n\n\u003Cdetails>\n\u003Csummary>术语表（命名约定）\u003C\u002Fsummary>\n\u003Cp>\n\n* `embedding` - 模型的输出（也称为 `特征向量` 或 `描述符`）。\n* `query` - 在检索过程中用作查询的样本。\n* `gallery set` - 用于搜索与 `query` 相似项的实体集合（也称为 `reference` 或 `index`）。\n* `Sampler` - 用于 `DataLoader` 的参数，用来组成批次。\n* `Miner` - 在 `Sampler` 组成批次之后，用于形成样本对或三元组的对象。不一定只在当前批次内组合样本，有时还会结合内存库来完成这一工作。\n* `Samples`\u002F`Labels`\u002F`Instances` - 以 DeepFashion 数据集为例，它包含数千种时尚单品的 ID（我们称之为 `labels`），每种 ID 对应几张照片（我们称单张照片为 `instance` 或 `sample`）。所有这些时尚单品又可以分为不同的类别，如“裙子”、“夹克”、“短裤”等（我们称之为 `categories`）。\n  注意，为了避免误解，我们尽量不使用 “class” 这一术语。\n* `training epoch` - 对于基于组合损失的训练，我们使用的批次采样器长度通常等于\n  `[训练数据集中标签的数量] \u002F [每个批次中的标签数量]`。这意味着在一个 epoch 中，我们并不会遍历所有的训练样本（与常规分类不同），而是确保每个标签都被覆盖到。\n\n\u003C\u002Fp>\n\u003C\u002Fdetails>\n\n\n\u003Cdetails>\n\u003Csummary>使用 OML 训练的模型效果如何？\u003C\u002Fsummary>\n\u003Cp>\n\n其性能可以与当前（2022 年）的 [SotA](https:\u002F\u002Fpaperswithcode.com\u002Ftask\u002Fmetric-learning) 方法相媲美，例如 
[Hyp-ViT](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2203.10833.pdf)。\n*(关于这种方法的简要说明：它是一种基于 ViT 架构并采用对比损失训练的模型，但其嵌入被投影到了双曲空间中。\n作者声称，这种空间能够更好地描述现实世界数据的层次化结构。\n因此，这篇论文需要大量的数学推导来将常规运算适配到双曲空间中。)*\n\n我们在保持其他参数不变的情况下，使用相同的架构训练了一个采用三元组损失的模型：包括训练和测试时的数据增强、图像尺寸以及优化器等。相关配置请参阅 [Models Zoo](https:\u002F\u002Fgithub.com\u002FOML-Team\u002Fopen-metric-learning#zoo)。\n关键在于我们所采用的启发式矿工和采样器：\n\n* [Category Balance Sampler](https:\u002F\u002Fopen-metric-learning.readthedocs.io\u002Fen\u002Flatest\u002Fcontents\u002Fsamplers.html#categorybalancesampler)\n  通过限制每个批次中包含的类别数量 *C* 来生成批次。例如，当 *C = 1* 时，它会把所有的夹克放在一个批次里，而所有的牛仔裤则放在另一个批次里（仅为示例）。这种方式自动提高了负样本的难度：让模型区分两件夹克的不同要比区分一件夹克和一件T恤更有意义。\n\n* [Hard Triplets Miner](https:\u002F\u002Fopen-metric-learning.readthedocs.io\u002Fen\u002Flatest\u002Fcontents\u002Fminers.html#hardtripletsminer)\n  通过仅保留最难的三元组（即正样本距离最大、负样本距离最小的三元组），进一步提升了任务难度。\n\n以下是两个流行基准上的 CMC@1 分数：\nSOP 数据集：Hyp-ViT — 85.9，我们的模型 — 86.6。DeepFashion 数据集：Hyp-ViT — 92.5，我们的模型 — 92.1。\n由此可见，通过简单的启发式方法和避免复杂的数学计算，我们同样能够在 SotA 水平上取得优异表现。\n\n\u003C\u002Fp>\n\u003C\u002Fdetails>\n\n\u003Cdetails>\n\u003Csummary>自监督学习呢？\u003C\u002Fsummary>\n\u003Cp>\n\n最近的自监督学习研究确实取得了显著成果。然而，这类方法往往需要非常庞大的计算资源才能训练出模型。而在我们的框架中，我们主要考虑的是普通用户通常只有几块 GPU 的情况。\n\n尽管如此，我们也不会忽视该领域的成功，而是从两个方面加以利用：\n* 作为预训练检查点的来源，以便更好地进行模型初始化。根据文献和我们的经验，这些检查点作为初始权重比直接使用 ImageNet 上预训练的监督模型要好得多。因此，我们提供了在配置文件或构造函数中传入参数即可加载这些预训练检查点的功能。\n* 作为灵感来源。例如，我们将 *MoCo* 中的内存库思想应用到了 *TripletLoss* 上。\n\n\u003C\u002Fp>\n\u003C\u002Fdetails>\n\n\n\u003Cdetails>\n\u003Csummary>使用 OML 是否需要了解其他深度学习框架？\u003C\u002Fsummary>\n\u003Cp>\n\n不需要。OML 是框架无关的。虽然我们在实验中使用 PyTorch Lightning 作为训练循环的运行者，但也保留了完全使用原生 PyTorch 运行的能力。因此，OML 中与 Lightning 相关的部分非常少，并且这部分逻辑与其他代码是分开存放的（见 `oml.lightning`）。即使你使用 Lightning，也不必深入了解它，因为我们已经提供了开箱即用的 [Pipelines](https:\u002F\u002Fgithub.com\u002FOML-Team\u002Fopen-metric-learning\u002Fblob\u002Fmain\u002Fpipelines\u002F)。\n\n由于支持纯 PyTorch 运行以及模块化的代码结构，你可以在实现必要的封装后，轻松地将 OML 与自己喜爱的框架结合使用。\n\n\u003C\u002Fp>\n\u003C\u002Fdetails>\n\n\n\u003Cdetails>\n\u003Csummary>没有数据科学基础也能使用 OML 吗？\u003C\u002Fsummary>\n\u003Cp>\n\n是的。要使用[流水线](https:\u002F\u002Fgithub.com\u002FOML-Team\u002Fopen-metric-learning\u002Fblob\u002Fmain\u002Fpipelines\u002F)运行实验，你只需要编写一个转换器，将其数据转换为我们的格式（即准备包含几个预定义列的`.csv`表格）。就这么简单！\n\n很可能我们在模型仓库中已经为你所在的领域准备了合适的预训练模型。在这种情况下，你甚至无需再进行训练。\n\u003C\u002Fp>\n\u003C\u002Fdetails>\n\n\n\u003Cdetails>\n\u003Csummary>我可以将模型导出为ONNX格式吗？\u003C\u002Fsummary>\n\u003Cp>\n\n目前我们还不支持直接将模型导出为ONNX格式。不过，你可以利用PyTorch内置的功能来实现这一点。更多信息请参阅此[议题](https:\u002F\u002Fgithub.com\u002FOML-Team\u002Fopen-metric-learning\u002Fissues\u002F592)。\n\u003C\u002Fp>\n\u003C\u002Fdetails>\n\n\u003C\u002Fdetails>\n\n\n[文档](https:\u002F\u002Fopen-metric-learning.readthedocs.io\u002Fen\u002Flatest\u002Findex.html)\n\n入门教程：\n[英文](https:\u002F\u002Fmedium.com\u002F@AlekseiShabanov\u002Fpractical-metric-learning-b0410cda2201) |\n[俄文](https:\u002F\u002Fhabr.com\u002Fru\u002Fcompany\u002Fods\u002Fblog\u002F695380\u002F) |\n[中文](https:\u002F\u002Fzhuanlan.zhihu.com\u002Fp\u002F683102241)\n\n\u003Cdetails>\n\u003Csummary>更多\u003C\u002Fsummary>\n\n* 我们论文的[演示](https:\u002F\u002Fdapladoc-oml-postprocessing-demo-srcappmain-pfh2g0.streamlit.app\u002F)\n[STIR：用于图像检索后处理的孪生Transformer](https:\u002F\u002Farxiv.org\u002Fabs\u002F2304.13393)\n\n* 在[Marktechpost](https:\u002F\u002Fwww.marktechpost.com\u002F2023\u002F12\u002F26\u002Fmeet-openmetriclearning-oml-a-pytorch-based-python-framework-to-train-and-validate-the-deep-learning-models-producing-high-quality-embeddings\u002F)上了解OpenMetricLearning (OML)\n\n* 
柏林本地聚会“生产环境中的计算机视觉”报告，2022年11月。\n[链接](https:\u002F\u002Fdrive.google.com\u002Fdrive\u002Ffolders\u002F1uHmLU8vMrMVMFodt36u0uXAgYjG_3D30?usp=share_link)\n\n\u003C\u002Fdetails>\n\n\n\n## [安装](https:\u002F\u002Fopen-metric-learning.readthedocs.io\u002Fen\u002Flatest\u002Foml\u002Finstallation.html)\n\n```shell\npip install -U open-metric-learning  # 最小依赖\npip install -U open-metric-learning[nlp]\npip install -U open-metric-learning[audio]\npip install -U open-metric-learning[pipelines]\n\n# 如果出现冲突，可以不带依赖项安装，并手动管理版本：\npip install --no-deps open-metric-learning\n```\n\n\n## OML功能\n\n\u003Cdiv style=\"overflow-x: auto;\">\n\n\u003Ctable style=\"width: 100%; border-collapse: collapse; border-spacing: 0; margin: 0; padding: 0;\">\n\n\u003Ctr>\n\u003C\u002Ftr>\n\n\u003Ctr>\n\u003Ctd style=\"text-align: left;\">\n\u003Ca href=\"https:\u002F\u002Fopen-metric-learning.readthedocs.io\u002Fen\u002Flatest\u002Fcontents\u002Flosses.html\"> \u003Cb>损失函数\u003C\u002Fb>\u003C\u002Fa> |\n\u003Ca href=\"https:\u002F\u002Fopen-metric-learning.readthedocs.io\u002Fen\u002Flatest\u002Fcontents\u002Fminers.html\"> \u003Cb>挖掘器\u003C\u002Fb>\u003C\u002Fa>\n\n```python\nminer = AllTripletsMiner()\nminer = NHardTripletsMiner()\nminer = MinerWithBank()\n...\ncriterion = TripletLossWithMiner(0.1, miner)\ncriterion = ArcFaceLoss()\ncriterion = SurrogatePrecision()\n```\n\n\u003C\u002Ftd>\n\u003Ctd style=\"text-align: left;\">\n\u003Ca href=\"https:\u002F\u002Fopen-metric-learning.readthedocs.io\u002Fen\u002Flatest\u002Fcontents\u002Fsamplers.html\"> \u003Cb>采样器\u003C\u002Fb>\u003C\u002Fa>\n\n```python\nlabels = train.get_labels()\nl2c = train.get_label2category()\n\n\nsampler = BalanceSampler(labels)\nsampler = CategoryBalanceSampler(labels, l2c)\nsampler = DistinctCategoryBalanceSampler(labels, l2c)\n```\n\n\u003C\u002Ftd>\n\u003C\u002Ftr>\n\n\u003Ctr>\n\u003C\u002Ftr>\n\n\u003Ctr>\n\u003Ctd style=\"text-align: left;\">\n\u003Ca href=\"https:\u002F\u002Fgithub.com\u002FOML-Team\u002Fopen-metric-learning\u002Ftree\u002Fmain\u002Fpipelines\u002F\">\u003Cb>配置支持\u003C\u002Fb>\u003C\u002Fa>\n\n```yaml\nmax_epochs: 10\nsampler:\n  name: balance\n  args:\n    n_labels: 2\n    n_instances: 2\n```\n\n\u003C\u002Ftd>\n\u003Ctd style=\"text-align: left;\">\n\u003Ca href=\"https:\u002F\u002Fgithub.com\u002FOML-Team\u002Fopen-metric-learning?tab=readme-ov-file#zoo\">\u003Cb>多模态预训练模型\u003C\u002Fb>\u003C\u002Fa>\n\n```python\nmodel_hf = AutoModel.from_pretrained(\"roberta-base\")\ntokenizer = AutoTokenizer.from_pretrained(\"roberta-base\")\nextractor_txt = HFWrapper(model_hf)\n\nextractor_img = ViTExtractor.from_pretrained(\"vits16_dino\")\ntransforms, _ = get_transforms_for_pretrained(\"vits16_dino\")\n\nextractor_audio = ECAPATDNNExtractor.from_pretrained()\n```\n\n\u003C\u002Ftd>\n\u003C\u002Ftr>\n\n\u003Ctr>\n\u003C\u002Ftr>\n\n\u003Ctr>\n\u003Ctd style=\"text-align: left;\">\n\u003Ca href=\"https:\u002F\u002Fopen-metric-learning.readthedocs.io\u002Fen\u002Flatest\u002Fpostprocessing\u002Falgo_examples.html\">\u003Cb>后处理\u003C\u002Fb>\u003C\u002Fa>\n\n```python\nemb = inference(extractor, dataset)\nrr = RetrievalResults.from_embeddings(emb, dataset)\n\npostprocessor = AdaptiveThresholding()\nrr_upd = postprocessor.process(rr, dataset)\n```\n\n\u003C\u002Ftd>\n\u003Ctd style=\"text-align: left;\">\n\u003Ca href=\"https:\u002F\u002Fopen-metric-learning.readthedocs.io\u002Fen\u002Flatest\u002Fpostprocessing\u002Fsiamese_examples.html\">\u003Cb>基于神经网络的后处理\u003C\u002Fb>\u003C\u002Fa> |\n\u003Ca 
href=\"https:\u002F\u002Fgithub.com\u002FOML-Team\u002Fopen-metric-learning\u002Ftree\u002Fmain\u002Fpipelines\u002Fpostprocessing\u002Fpairwise_postprocessing\">\u003Cb>论文\u003C\u002Fb>\u003C\u002Fa>\n\n```python\nembeddings = inference(extractor, dataset)\nrr = RetrievalResults.from_embeddings(embeddings, dataset)\n\npostprocessor = PairwiseReranker(ConcatSiamese(), top_n=3)\nrr_upd = postprocessor.process(rr, dataset)\n```\n\n\u003C\u002Ftd>\n\u003C\u002Ftr>\n\n\u003Ctr>\n\u003C\u002Ftr>\n\n\u003Ctr>\n\u003Ctd style=\"text-align: left;\">\n\u003Ca href=\"https:\u002F\u002Fopen-metric-learning.readthedocs.io\u002Fen\u002Flatest\u002Foml\u002Flogging.html#\">\u003Cb>日志记录\u003C\u002Fb>\u003C\u002Fa>\u003Cbr>\n\n```python\nlogger = TensorBoardPipelineLogger()\nlogger = NeptunePipelineLogger()\nlogger = WandBPipelineLogger()\nlogger = MLFlowPipelineLogger()\nlogger = ClearMLPipelineLogger()\n```\n\n\u003C\u002Ftd>\n\u003Ctd style=\"text-align: left;\">\n\u003Ca href=\"https:\u002F\u002Fopen-metric-learning.readthedocs.io\u002Fen\u002Flatest\u002Ffeature_extraction\u002Fpython_examples.html#usage-with-pytorch-metric-learning\">\u003Cb>PML\u003C\u002Fb>\u003C\u002Fa>\u003Cbr>\n\n```python\nfrom pytorch_metric_learning import losses\n\ncriterion = losses.TripletMarginLoss(0.2, \"all\")\npred = ViTExtractor()(data)\ncriterion(pred, gts)\n```\n\n\u003C\u002Ftd>\n\u003C\u002Ftr>\n\n\u003Ctr>\n\u003C\u002Ftr>\n\n\u003Ctr>\n\u003Ctd style=\"text-align: left;\">\u003Ca href=\"https:\u002F\u002Fopen-metric-learning.readthedocs.io\u002Fen\u002Flatest\u002Ffeature_extraction\u002Fpython_examples.html#handling-categories\">\u003Cb>类别支持\u003C\u002Fb>\u003C\u002Fa>\n\n```python\n# 训练\nloader = DataLoader(CategoryBalanceSampler())\n\n# 验证\nrr = RetrievalResults.from_embeddings()\nm.calc_retrieval_metrics_rr(rr, query_categories)\n```\n\n\u003C\u002Ftd>\n\u003Ctd style=\"text-align: left;\">\u003Ca href=\"https:\u002F\u002Fopen-metric-learning.readthedocs.io\u002Fen\u002Flatest\u002Fcontents\u002Fmetrics.html\">\u003Cb>其他指标\u003C\u002Fb>\u003C\u002Fa>\n\n```python\nembeddigs = inference(model, dataset)\nrr = RetrievalResults.from_embeddings(embeddings, dataset)\n\nm.calc_retrieval_metrics_rr(rr, precision_top_k=(5,))\nm.calc_fnmr_at_fmr_rr(rr, fmr_vals=(0.1,))\nm.calc_topological_metrics(embeddings, pcf_variance=(0.5,))\n```\n\n\u003C\u002Ftd>\n\u003C\u002Ftr>\n\n\u003Ctr>\n\u003C\u002Ftr>\n\n\u003Ctr>\n\u003Ctd style=\"text-align: left;\">\n\u003Ca href=\"https:\u002F\u002Fopen-metric-learning.readthedocs.io\u002Fen\u002Flatest\u002Ffeature_extraction\u002Fpython_examples.html#usage-with-pytorch-lightning\">\u003Cb>Lightning\u003C\u002Fb>\u003C\u002Fa>\u003Cbr>\n\n```python\nimport pytorch_lightning as pl\n\nmodel = ViTExtractor.from_pretrained(\"vits16_dino\")\nclb = MetricValCallback(EmbeddingMetrics(dataset))\nmodule = ExtractorModule(model, criterion, optimizer)\n\ntrainer = pl.Trainer(max_epochs=3, callbacks=[clb])\ntrainer.fit(module, train_loader, val_loader)\n```\n\n\u003C\u002Ftd>\n\u003Ctd style=\"text-align: left;\">\n\u003Ca href=\"https:\u002F\u002Fopen-metric-learning.readthedocs.io\u002Fen\u002Flatest\u002Ffeature_extraction\u002Fpython_examples.html#usage-with-pytorch-lightning\">\u003Cb>Lightning DDP\u003C\u002Fb>\u003C\u002Fa>\u003Cbr>\n\n```python\nclb = MetricValCallback(EmbeddingMetrics(val))\nmodule = ExtractorModuleDDP(\n    model, criterion, optimizer, train, val\n)\n\nddp = {\"devices\": 2, \"strategy\": DDPStrategy()}\ntrainer = pl.Trainer(max_epochs=3, callbacks=[clb], 
**ddp)\ntrainer.fit(module)\n```\n\n\u003C\u002Ftd>\n\u003C\u002Ftr>\n\n\u003C\u002Ftable>\n\n\u003C\u002Fdiv>\n\n## [示例](https:\u002F\u002Fopen-metric-learning.readthedocs.io\u002Fen\u002Flatest\u002Ffeature_extraction\u002Fpython_examples.html#)\n\n以下是一个如何在小型数据集上训练、验证和后处理模型的示例，该数据集包含\n[图像](https:\u002F\u002Fdrive.google.com\u002Fdrive\u002Ffolders\u002F1plPnwyIkzg51-mLUXWTjREHgc1kgGrF4)、\n[文本](https:\u002F\u002Fgithub.com\u002FOML-Team\u002Fopen-metric-learning\u002Fblob\u002Fmain\u002Foml\u002Futils\u002Fdownload_mock_dataset.py#L83)，\n或\n[音频](https:\u002F\u002Fdrive.google.com\u002Fdrive\u002Ffolders\u002F1NcKnyXqDyyYARrDETmhJcTTXegO3W0Ju)。\n有关数据集格式的更多详细信息，请参阅\n[文档](https:\u002F\u002Fopen-metric-learning.readthedocs.io\u002Fen\u002Flatest\u002Foml\u002Fdata.html)。\n\n向右滚动以查看 **图像** > **文本** > **音频**\n\n\u003Cdiv style=\"overflow-x: auto;\">\n\n\u003Ctable style=\"width: 100%; border-collapse: collapse; border-spacing: 0; margin: 0; padding: 0;\">\n\n\u003Ctr>\n\u003C\u002Ftr>\n\n\u003Ctr>\n    \u003Ctd style=\"text-align: left; padding: 0;\">\u003Cb>图像\u003C\u002Fb>\u003C\u002Ftd>\n    \u003Ctd style=\"text-align: left; padding: 0;\">\u003Cb>文本\u003C\u002Fb>\u003C\u002Ftd>\n    \u003Ctd style=\"text-align: left; padding: 0;\">\u003Cb>音频\u003C\u002Fb>\u003C\u002Ftd>\n\u003C\u002Ftr>\n\n\u003Ctr>\n\u003C\u002Ftr>\n\n\u003Ctr>\n\n\u003Ctd>\n\n[comment]:train-val-img-start\n```python\nfrom torch.optim import Adam\nfrom torch.utils.data import DataLoader\n\nfrom oml import datasets as d\nfrom oml.inference import inference\nfrom oml.losses import TripletLossWithMiner\nfrom oml.metrics import calc_retrieval_metrics_rr\nfrom oml.miners import HardTripletsMiner\nfrom oml.models import ViTExtractor\nfrom oml.registry import get_transforms_for_pretrained\nfrom oml.retrieval import RetrievalResults, AdaptiveThresholding\nfrom oml.samplers import BalanceSampler\nfrom oml.utils import get_mock_images_dataset\n\nmodel = ViTExtractor.from_pretrained(\"vits16_dino\").to(\"cpu\").train()\ntransform, _ = get_transforms_for_pretrained(\"vits16_dino\")\n\ndf_train, df_val = get_mock_images_dataset(global_paths=True)\ntrain = d.ImageLabeledDataset(df_train, transform=transform)\nval = d.ImageQueryGalleryLabeledDataset(df_val, transform=transform)\n\noptimizer = Adam(model.parameters(), lr=1e-4)\ncriterion = TripletLossWithMiner(0.1, HardTripletsMiner(), need_logs=True)\nsampler = BalanceSampler(train.get_labels(), n_labels=2, n_instances=2)\n\n\n# 训练1个epoch\nfor batch in DataLoader(train, batch_sampler=sampler):\n    embeddings = model(batch[\"input_tensors\"])\n    loss = criterion(embeddings, batch[\"labels\"])\n    loss.backward()\n    optimizer.step()\n    optimizer.zero_grad()\n    print(criterion.last_logs)\n\n\n# 验证：通过检索相关项目\nembeddings = inference(model, val, batch_size=4, num_workers=0)\nrr = RetrievalResults.from_embeddings(embeddings, val, n_items=3)\nrr = AdaptiveThresholding(n_std=2).process(rr)\nrr.visualize(query_ids=[2, 1], dataset=val, show=True)\nprint(calc_retrieval_metrics_rr(rr, map_top_k=(3,), cmc_top_k=(1,)))\n\n\n\n```\n[comment]:train-val-img-end\n\n\u003C\u002Ftd>\n\n\u003Ctd>\n\n[comment]:train-val-txt-start\n```python\nfrom torch.optim import Adam\nfrom torch.utils.data import DataLoader\nfrom transformers import AutoModel, AutoTokenizer\n\nfrom oml import datasets as d\nfrom oml.inference import inference\nfrom oml.losses import TripletLossWithMiner\nfrom oml.metrics import calc_retrieval_metrics_rr\nfrom oml.miners import NHardTripletsMiner\nfrom oml.models import 
HFWrapper\nfrom oml.retrieval import RetrievalResults, AdaptiveThresholding\nfrom oml.samplers import BalanceSampler\nfrom oml.utils import get_mock_texts_dataset\n\nmodel = HFWrapper(AutoModel.from_pretrained(\"bert-base-uncased\"), 768).to(\"cpu\").train()\ntokenizer = AutoTokenizer.from_pretrained(\"bert-base-uncased\")\n\ndf_train, df_val = get_mock_texts_dataset()\ntrain = d.TextLabeledDataset(df_train, tokenizer=tokenizer)\nval = d.TextQueryGalleryLabeledDataset(df_val, tokenizer=tokenizer)\n\noptimizer = Adam(model.parameters(), lr=1e-4)\ncriterion = TripletLossWithMiner(\n    0.1, NHardTripletsMiner(n_positive=2, n_negative=2), need_logs=True\n)\nsampler = BalanceSampler(train.get_labels(), n_labels=2, n_instances=2)\n\n\n# 训练1个epoch\nfor batch in DataLoader(train, batch_sampler=sampler):\n    embeddings = model(batch[\"input_tensors\"])\n    loss = criterion(embeddings, batch[\"labels\"])\n    loss.backward()\n    optimizer.step()\n    optimizer.zero_grad()\n    print(criterion.last_logs)\n\n\n# 验证：通过检索相关项目\nembeddings = inference(model, val, batch_size=4, num_workers=0)\nrr = RetrievalResults.from_embeddings(embeddings, val, n_items=3)\nrr = AdaptiveThresholding(n_std=2).process(rr)\nrr.visualize(query_ids=[2, 1], dataset=val, show=True)\nprint(calc_retrieval_metrics_rr(rr, map_top_k=(3,), cmc_top_k=(1,)))\n\n\n\n```\n[comment]:train-val-txt-end\n\u003C\u002Ftd>\n\n\u003Ctd>\n\n[comment]:train-val-audio-start\n```python\nfrom torch.optim import Adam\nfrom torch.utils.data import DataLoader\n\nfrom oml import datasets as d\nfrom oml.inference import inference\nfrom oml.losses import ArcFaceLoss\nfrom oml.metrics import calc_retrieval_metrics_rr\nfrom oml.models import ECAPATDNNExtractor\nfrom oml.retrieval import AdaptiveThresholding, RetrievalResults\nfrom oml.samplers import BalanceSampler\nfrom oml.utils import get_mock_audios_dataset\n\nmodel = ECAPATDNNExtractor.from_pretrained(\"ecapa_tdnn_taoruijie\").to(\"cpu\").train()\n\ndf_train, df_val = get_mock_audios_dataset(global_paths=True)\ntrain = d.AudioLabeledDataset(df_train)\nval = d.AudioQueryGalleryLabeledDataset(df_val)\n\noptimizer = Adam(model.parameters(), lr=1e-4)\ncriterion = ArcFaceLoss(m=0.2, s=30, in_features=192, num_classes=4)  # 类似于论文\nsampler = BalanceSampler(train.get_labels(), n_labels=2, n_instances=2)\n\n# 训练1个epoch\nfor batch in DataLoader(train, batch_sampler=sampler):\n    embeddings = model(batch[\"input_tensors\"])\n    loss = criterion(embeddings, batch[\"labels\"])\n    loss.backward()\n    optimizer.step()\n    optimizer.zero_grad()\n    print(criterion.last_logs)\n\n\n# 通过检索相关项目进行验证\nembeddings = inference(model, val, batch_size=4, num_workers=0)\nrr = RetrievalResults.from_embeddings(embeddings, val, n_items=3)\nrr = AdaptiveThresholding(n_std=2).process(rr)\nrr.visualize_as_html(query_ids=[2, 1], dataset=val, show=True)\nprint(calc_retrieval_metrics_rr(rr, map_top_k=(3,), cmc_top_k=(1,)))\n\n\n\n```\n[comment]:train-val-audio-end\n\u003C\u002Ftd>\n\n\u003C\u002Ftr>\n\n\u003Ctr>\n\u003C\u002Ftr>\n\n\u003Ctr>\n\n\u003Ctd>\n\n\u003Cdetails style=\"padding-bottom: 10px\">\n\u003Csummary>输出\u003C\u002Fsummary>\n\n```python\n{'active_tri': 0.125, 'pos_dist': 82.5, 'neg_dist': 100.5}  # batch 1\n{'active_tri': 0.0, 'pos_dist': 36.3, 'neg_dist': 56.9}     # batch 2\n\n{'cmc': {1: 0.75}, 'precision': {5: 0.75}, 'map': {3: 0.8}}\n\n```\n\n\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FOML-Team_open-metric-learning_readme_a7415836dff9.png\" 
height=\"200px\">\n\n\u003C\u002Fdetails>\n\n[![Open In Colab](https:\u002F\u002Fcolab.research.google.com\u002Fassets\u002Fcolab-badge.svg)](https:\u002F\u002Fcolab.research.google.com\u002Fdrive\u002F1Fr4HhDOqmjx1hCFS30G3MlYjeqBW5vDg?usp=sharing)\n\n\u003C\u002Ftd>\n\n\u003Ctd>\n\n\u003Cdetails style=\"padding-bottom: 10px\">\n\u003Csummary>输出\u003C\u002Fsummary>\n\n```python\n{'active_tri': 0.0, 'pos_dist': 8.5, 'neg_dist': 11.0}  # batch 1\n{'active_tri': 0.25, 'pos_dist': 8.9, 'neg_dist': 9.8}  # batch 2\n\n{'cmc': {1: 0.8}, 'precision': {5: 0.7}, 'map': {3: 0.9}}\n\n```\n\n\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FOML-Team_open-metric-learning_readme_30f5e3371961.png\" height=\"200px\">\n\n\u003C\u002Fdetails>\n\n[![Open In Colab](https:\u002F\u002Fcolab.research.google.com\u002Fassets\u002Fcolab-badge.svg)](https:\u002F\u002Fcolab.research.google.com\u002Fdrive\u002F19o2Ox2VXZoOWOOXIns7mcs0aHJZgJWeO?usp=sharing)\n\n\u003C\u002Ftd>\n\n\u003Ctd>\n\n\u003Cdetails style=\"padding-bottom: 10px\">\n\u003Csummary>输出\u003C\u002Fsummary>\n\n```python\n{'active_tri': 0.25, 'pos_dist': 17.3, 'neg_dist': 18.4}  # batch 1\n{'active_tri': 0.0, 'pos_dist': 17.1, 'neg_dist': 18.5}   # batch 2\n\n{'cmc': {1: 1.0}, 'precision': {5: 1.0}, 'map': {3: 1.0}}\n\n```\n\n\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FOML-Team_open-metric-learning_readme_0ec1fba6229c.jpg\" height=\"200px\">\n\n\u003C\u002Fdetails>\n\n[![Open In Colab](https:\u002F\u002Fcolab.research.google.com\u002Fassets\u002Fcolab-badge.svg)](https:\u002F\u002Fcolab.research.google.com\u002Fdrive\u002F1Sfz7xMdjXg634-3KmBPq8Zs6i_gbsWD1?usp=sharing)\n\n\u003C\u002Ftd>\n\n\u003C\u002Ftr>\n\n\u003C\u002Ftable>\n\n\u003C\u002Fdiv>\n\n\u003Cbr>\n\n[额外的插图、解释和技巧](https:\u002F\u002Fgithub.com\u002FOML-Team\u002Fopen-metric-learning\u002Ftree\u002Fmain\u002Fpipelines\u002Ffeatures_extraction#training)\n用于上述代码。\n\n### 由训练好的模型进行检索\n\n这里是一个推理时的例子（换句话说，就是在测试集上进行检索）。\n下面的代码既适用于文本也适用于图像。\n\n\u003Cdetails>\n\u003Csummary>\u003Cb>查看示例\u003C\u002Fb>\u003C\u002Fsummary>\n\u003Cp>\n\n[comment]:usage-retrieval-start\n```python\nfrom oml.datasets import ImageQueryGalleryDataset\nfrom oml.inference import inference\nfrom oml.models import ViTExtractor\nfrom oml.registry import get_transforms_for_pretrained\nfrom oml.utils import get_mock_images_dataset\nfrom oml.retrieval import RetrievalResults, AdaptiveThresholding\n\n_, df_test = get_mock_images_dataset(global_paths=True)\ndel df_test[\"label\"]  # 我们不需要真实标签来进行预测\n\nextractor = ViTExtractor.from_pretrained(\"vits16_dino\").to(\"cpu\")\ntransform, _ = get_transforms_for_pretrained(\"vits16_dino\")\n\ndataset = ImageQueryGalleryDataset(df_test, transform=transform)\nembeddings = inference(extractor, dataset, batch_size=4, num_workers=0)\n\nrr = RetrievalResults.from_embeddings(embeddings, dataset, n_items=5)\nrr = AdaptiveThresholding(n_std=3.5).process(rr)\nrr.visualize(query_ids=[0, 1], dataset=dataset, show=True)\n\n# 你会得到检索到的项目的ID以及对应的距离\nprint(rr)\n```\n[comment]:usage-retrieval-end\n\n\u003C\u002Fdetails>\n\n\n\n### 由训练好的模型进行检索：流式处理与文本转图像\n\n这里有一个查询和图库分别处理的例子。\n* 首先，这可能对**流式检索**很有用，当图库（索引）集合非常庞大且固定时，而查询则是分批到达。\n* 其次，查询和图库的性质不同，例如，**查询是文本，而图库是图像**。\n\n\n\u003Cdetails>\n\u003Csummary>\u003Cb>查看示例\u003C\u002Fb>\u003C\u002Fsummary>\n\u003Cp>\n\n[comment]:usage-streaming-retrieval-start\n```python\nimport pandas as pd\n\nfrom oml.datasets import ImageBaseDataset\nfrom oml.inference import inference\nfrom oml.models import ViTExtractor\nfrom oml.registry import 
get_transforms_for_pretrained\nfrom oml.retrieval import RetrievalResults, ConstantThresholding\nfrom oml.utils import get_mock_images_dataset\n\nextractor = ViTExtractor.from_pretrained(\"vits16_dino\").to(\"cpu\")\ntransform, _ = get_transforms_for_pretrained(\"vits16_dino\")\n\npaths = pd.concat(get_mock_images_dataset(global_paths=True))[\"path\"]\ngalleries, queries1, queries2 = paths[:20], paths[20:22], paths[22:24]\n\n# 图库非常庞大且固定，所以我们只处理一次\ndataset_gallery = ImageBaseDataset(galleries, transform=transform)\nembeddings_gallery = inference(extractor, dataset_gallery, batch_size=4, num_workers=0)\n\n# 查询以“在线”流的形式到来\nfor queries in [queries1, queries2]:\n    dataset_query = ImageBaseDataset(queries, transform=transform)\n    embeddings_query = inference(extractor, dataset_query, batch_size=4, num_workers=0)\n\n    # 对于下面的操作，我们将提供与向量搜索数据库（如QDrant或Faiss）的集成\n    rr = RetrievalResults.from_embeddings_qg(\n        embeddings_query=embeddings_query, embeddings_gallery=embeddings_gallery,\n        dataset_query=dataset_query, dataset_gallery=dataset_gallery\n    )\n    rr = ConstantThresholding(th=80).process(rr)\n    rr.visualize_qg([0, 1], dataset_query=dataset_query, dataset_gallery=dataset_gallery, show=True)\n    print(rr)\n```\n[comment]:usage-streaming-retrieval-end\n\n\u003C\u002Fdetails>\n\n## [流水线](https:\u002F\u002Fgithub.com\u002FOML-Team\u002Fopen-metric-learning\u002Ftree\u002Fmain\u002Fpipelines)\n\n流水线提供了一种仅通过更改配置文件即可运行度量学习实验的方式。你所需要做的就是将你的数据集准备成所需的格式。\n\n更多详情请参阅[Pipelines](https:\u002F\u002Fgithub.com\u002FOML-Team\u002Fopen-metric-learning\u002Fblob\u002Fmain\u002Fpipelines\u002F)文件夹：\n* 特征提取器[流水线](https:\u002F\u002Fgithub.com\u002FOML-Team\u002Fopen-metric-learning\u002Ftree\u002Fmain\u002Fpipelines\u002Ffeatures_extraction)\n* 检索重排序[流水线](https:\u002F\u002Fgithub.com\u002FOML-Team\u002Fopen-metric-learning\u002Ftree\u002Fmain\u002Fpipelines\u002Fpostprocessing)\n\n## [动物园：图像模型](https:\u002F\u002Fopen-metric-learning.readthedocs.io\u002Fen\u002Flatest\u002Ffeature_extraction\u002Fzoo.html#zoo-images)\n\n你可以使用我们动物园中的图像模型，或者在继承自 [IExtractor](https:\u002F\u002Fopen-metric-learning.readthedocs.io\u002Fen\u002Flatest\u002Fcontents\u002Finterfaces.html#iextractor) 的基础上使用其他任意模型。\n\n\u003Cdetails style=\"padding-bottom: 15px\">\n\u003Csummary>\u003Cb>查看如何使用模型\u003C\u002Fb>\u003C\u002Fsummary>\n\u003Cp>\n\n[comment]:zoo-image-start\n```python\nfrom oml.const import CKPT_SAVE_ROOT as CKPT_DIR, MOCK_DATASET_PATH as DATA_DIR\nfrom oml.models import ViTExtractor\nfrom oml.registry import get_transforms_for_pretrained\n\nmodel = ViTExtractor.from_pretrained(\"vits16_dino\").eval()\ntransforms, im_reader = get_transforms_for_pretrained(\"vits16_dino\")\n\nimg = im_reader(DATA_DIR \u002F \"images\" \u002F \"circle_1.jpg\")  # 在此处放置你的图像路径\nimg_tensor = transforms(img)\n# img_tensor = transforms(image=img)[\"image\"]  # 对于 Albumentations 提供的变换\nfeatures = model(img_tensor.unsqueeze(0))\n\n# 查看其他可用模型：\nprint(list(ViTExtractor.pretrained_models.keys()))\n\n# 加载保存在磁盘上的检查点：\nmodel_ = ViTExtractor(weights=CKPT_DIR \u002F \"vits16_dino.ckpt\", arch=\"vits16\", normalise_features=False)\n```\n[comment]:zoo-image-end\n\n\u003C\u002Fp>\n\u003C\u002Fdetails>\n\n### 图像模型动物园\n\n由我们训练的模型。以下指标适用于 **224 x 224** 的图像：\n\n|                      模型                      | cmc1  |         数据集          |                                              权重                                              |                                                    实验                                                    
|\n|:-----------------------------------------------:|:-----:|:------------------------:|:-------------------------------------------------------------------------------------------------:|:-----------------------------------------------------------------------------------------------------------------:|\n| `ViTExtractor.from_pretrained(\"vits16_inshop\")` | 0.921 |    DeepFashion Inshop    |    [链接](https:\u002F\u002Fdrive.google.com\u002Ffile\u002Fd\u002F1niX-TC8cj6j369t7iU2baHQSVN3MVJbW\u002Fview?usp=sharing)     | [链接](https:\u002F\u002Fgithub.com\u002FOML-Team\u002Fopen-metric-learning\u002Ftree\u002Fmain\u002Fpipelines\u002Ffeatures_extraction\u002Fextractor_inshop) |\n|  `ViTExtractor.from_pretrained(\"vits16_sop\")`   | 0.866 | Stanford Online Products |   [链接](https:\u002F\u002Fdrive.google.com\u002Ffile\u002Fd\u002F1zuGRHvF2KHd59aw7i7367OH_tQNOGz7A\u002Fview?usp=sharing)      |  [链接](https:\u002F\u002Fgithub.com\u002FOML-Team\u002Fopen-metric-learning\u002Ftree\u002Fmain\u002Fpipelines\u002Ffeatures_extraction\u002Fextractor_sop)   |\n| `ViTExtractor.from_pretrained(\"vits16_cars\")`   | 0.907 |         CARS 196         |   [链接](https:\u002F\u002Fdrive.google.com\u002Fdrive\u002Ffolders\u002F17a4_fg94dox2sfkXmw-KCtiLBlx-ut-1?usp=sharing)    |  [链接](https:\u002F\u002Fgithub.com\u002FOML-Team\u002Fopen-metric-learning\u002Ftree\u002Fmain\u002Fpipelines\u002Ffeatures_extraction\u002Fextractor_cars)  |\n|  `ViTExtractor.from_pretrained(\"vits16_cub\")`   | 0.837 |       CUB 200 2011       |   [链接](https:\u002F\u002Fdrive.google.com\u002Fdrive\u002Ffolders\u002F1TPCN-eZFLqoq4JBgnIfliJoEK48x9ozb?usp=sharing)    |  [链接](https:\u002F\u002Fgithub.com\u002FOML-Team\u002Fopen-metric-learning\u002Ftree\u002Fmain\u002Fpipelines\u002Ffeatures_extraction\u002Fextractor_cub)   |\n\n由其他研究人员训练的模型。请注意，某些基准上的指标之所以很高，是因为这些数据曾被用作训练集的一部分（例如 `unicom`）。以下指标同样适用于 224 x 224 的图像：\n\n|                            模型                             | Stanford Online Products | DeepFashion InShop | CUB 200 2011 | CARS 196 |\n|:------------------------------------------------------------:|:------------------------:|:------------------:|:------------:|:--------:|\n|    `ViTUnicomExtractor.from_pretrained(\"vitb16_unicom\")`     |          0.700           |       0.734        |    0.847     |  0.916   |\n|    `ViTUnicomExtractor.from_pretrained(\"vitb32_unicom\")`     |          0.690           |       0.722        |    0.796     |  0.893   |\n|    `ViTUnicomExtractor.from_pretrained(\"vitl14_unicom\")`     |          0.726           |       0.790        |    0.868     |  0.922   |\n| `ViTUnicomExtractor.from_pretrained(\"vitl14_336px_unicom\")`  |          0.745           |       0.810        |    0.875     |  0.924   |\n|    `ViTCLIPExtractor.from_pretrained(\"sber_vitb32_224\")`     |          0.547           |       0.514        |    0.448     |  0.618   |\n|    `ViTCLIPExtractor.from_pretrained(\"sber_vitb16_224\")`     |          0.565           |       0.565        |    0.524     |  0.648   |\n|    `ViTCLIPExtractor.from_pretrained(\"sber_vitl14_224\")`     |          0.512           |       0.555        |    0.606     |  0.707   |\n|   `ViTCLIPExtractor.from_pretrained(\"openai_vitb32_224\")`    |          0.612           |       0.491        |    0.560     |  0.693   |\n|   `ViTCLIPExtractor.from_pretrained(\"openai_vitb16_224\")`    |          0.648           |       0.606        |    0.665     |  0.767   |\n|   `ViTCLIPExtractor.from_pretrained(\"openai_vitl14_224\")`    |          0.670           |       
0.675        |    0.745     |  0.844   |\n|        `ViTExtractor.from_pretrained(\"vits16_dino\")`         |          0.648           |       0.509        |    0.627     |  0.265   |\n|         `ViTExtractor.from_pretrained(\"vits8_dino\")`         |          0.651           |       0.524        |    0.661     |  0.315   |\n|        `ViTExtractor.from_pretrained(\"vitb16_dino\")`         |          0.658           |       0.514        |    0.541     |  0.288   |\n|         `ViTExtractor.from_pretrained(\"vitb8_dino\")`         |          0.689           |       0.599        |    0.506     |  0.313   |\n|       `ViTExtractor.from_pretrained(\"vits14_dinov2\")`        |          0.566           |       0.334        |    0.797     |  0.503   |\n|     `ViTExtractor.from_pretrained(\"vits14_reg_dinov2\")`      |          0.566           |       0.332        |    0.795     |  0.740   |\n|       `ViTExtractor.from_pretrained(\"vitb14_dinov2\")`        |          0.565           |       0.342        |    0.842     |  0.644   |\n|     `ViTExtractor.from_pretrained(\"vitb14_reg_dinov2\")`      |          0.557           |       0.324        |    0.833     |  0.828   |\n|       `ViTExtractor.from_pretrained(\"vitl14_dinov2\")`        |          0.576           |       0.352        |    0.844     |  0.692   |\n|     `ViTExtractor.from_pretrained(\"vitl14_reg_dinov2\")`      |          0.571           |       0.340        |    0.840     |  0.871   |\n|    `ResnetExtractor.from_pretrained(\"resnet50_moco_v2\")`     |          0.493           |       0.267        |    0.264     |  0.149   |\n| `ResnetExtractor.from_pretrained(\"resnet50_imagenet1k_v1\")`  |          0.515           |       0.284        |    0.455     |  0.247   |\n\n*这些指标可能与论文中报告的不同，因为训练\u002F验证集划分的方式以及是否使用了边界框可能存在差异。*\n\n## [动物园：文本](https:\u002F\u002Fopen-metric-learning.readthedocs.io\u002Fen\u002Flatest\u002Ffeature_extraction\u002Fzoo.html#zoo-texts)\n\n这里提供了一个与[HuggingFace Transformers](https:\u002F\u002Fgithub.com\u002Fhuggingface\u002Ftransformers)模型的轻量级集成。\n你可以将其替换为其他任意继承自[IExtractor](https:\u002F\u002Fopen-metric-learning.readthedocs.io\u002Fen\u002Flatest\u002Fcontents\u002Finterfaces.html#iextractor)的模型。\n\n```shell\npip install open-metric-learning[nlp]\n```\n\n\u003Cdetails style=\"padding-bottom: 15px\">\n\u003Csummary>\u003Cb>查看如何使用模型\u003C\u002Fb>\u003C\u002Fsummary>\n\u003Cp>\n\n[comment]:zoo-text-start\n```python\nfrom transformers import AutoModel, AutoTokenizer\n\nfrom oml.models import HFWrapper\n\nmodel = AutoModel.from_pretrained('bert-base-uncased').eval()\ntokenizer = AutoTokenizer.from_pretrained('bert-base-uncased')\nextractor = HFWrapper(model=model, feat_dim=768)\n\ninp = tokenizer(text=\"Hello world\", return_tensors=\"pt\", add_special_tokens=True)\nembeddings = extractor(inp)\n```\n[comment]:zoo-text-end\n\n\u003C\u002Fp>\n\u003C\u002Fdetails>\n\n请注意，目前我们还没有自己的文本模型动物园。\n\n## [动物园：音频](https:\u002F\u002Fopen-metric-learning.readthedocs.io\u002Fen\u002Flatest\u002Ffeature_extraction\u002Fzoo.html#zoo-audios)\n\n\n你可以使用我们动物园中的音频模型，或者在从[IExtractor](https:\u002F\u002Fopen-metric-learning.readthedocs.io\u002Fen\u002Flatest\u002Fcontents\u002Finterfaces.html#iextractor)继承后使用其他任意模型。\n\n```shell\npip install open-metric-learning[audio]\n```\n\n\u003Cdetails style=\"padding-bottom: 15px\">\n\u003Csummary>\u003Cb>查看如何使用模型\u003C\u002Fb>\u003C\u002Fsummary>\n\u003Cp>\n\n[comment]:zoo-audio-start\n```python\nimport torchaudio\n\nfrom oml.models import ECAPATDNNExtractor\nfrom oml.const import CKPT_SAVE_ROOT as CKPT_DIR, 
MOCK_AUDIO_DATASET_PATH as DATA_DIR\n\n# 替换为你的实际路径\nckpt_path = CKPT_DIR \u002F \"ecapa_tdnn_taoruijie.pth\"\nfile_path = DATA_DIR \u002F \"voices\" \u002F \"voice0_0.wav\"\n\nmodel = ECAPATDNNExtractor(weights=ckpt_path, arch=\"ecapa_tdnn_taoruijie\", normalise_features=False).to(\"cpu\").eval()\naudio, sr = torchaudio.load(file_path)\n\nif audio.shape[0] > 1:\n    audio = audio.mean(dim=0, keepdim=True)  # 按通道取平均\nif sr != 16000:\n    audio = torchaudio.functional.resample(audio, sr, 16000)\n\nembeddings = model.extract(audio)\n```\n[comment]:zoo-audio-end\n\n\u003C\u002Fp>\n\u003C\u002Fdetails>\n\n### 音频模型动物园\n\n|                            模型                             | Vox1_O | Vox1_E | Vox1_H |\n|:------------------------------------------------------------:|:------:|:------:|:------:|\n| `ECAPATDNNExtractor.from_pretrained(\"ecapa_tdnn_taoruijie\")` |  0.86  |  1.18  |  2.17  |\n\n*以上指标表示等错误率（EER）。数值越低越好。*\n\n## [贡献指南](https:\u002F\u002Fopen-metric-learning.readthedocs.io\u002Fen\u002Flatest\u002Foml\u002Fcontributing.html)\n\n我们欢迎新贡献者！请参阅我们的：\n* [贡献指南](https:\u002F\u002Fopen-metric-learning.readthedocs.io\u002Fen\u002Flatest\u002Foml\u002Fcontributing.html)\n* [看板](https:\u002F\u002Fgithub.com\u002FOML-Team\u002Fopen-metric-learning\u002Fprojects\u002F1)\n\n## 致谢\n\n\u003Ca href=\"https:\u002F\u002Fgithub.com\u002Fcatalyst-team\u002Fcatalyst\" target=\"_blank\">\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FOML-Team_open-metric-learning_readme_da88dba8fe2a.png\" width=\"100\"\u002F>\u003C\u002Fa>\n\n该项目于2020年作为[Catalyst](https:\u002F\u002Fgithub.com\u002Fcatalyst-team\u002Fcatalyst)库的一个模块启动。\n我要感谢当时与我一起开发该模块的人员：\n[Julia Shenshina](https:\u002F\u002Fgithub.com\u002Fjulia-shenshina),\n[Nikita Balagansky](https:\u002F\u002Fgithub.com\u002Felephantmipt),\n[Sergey Kolesnikov](https:\u002F\u002Fgithub.com\u002FScitator)\n以及其他成员。\n\n同时，我也要感谢那些在项目独立出来后继续推进这一流程的人：\n[Julia Shenshina](https:\u002F\u002Fgithub.com\u002Fjulia-shenshina),\n[Misha Kindulov](https:\u002F\u002Fgithub.com\u002Fb0nce),\n[Aron Dik](https:\u002F\u002Fgithub.com\u002Fdapladoc),\n[Aleksei Tarasov](https:\u002F\u002Fgithub.com\u002FDaloroAT)以及\n[Verkhovtsev Leonid](https:\u002F\u002Fgithub.com\u002Fleoromanovich)。\n\n\u003Ca href=\"https:\u002F\u002Fwww.newyorker.de\u002F\" target=\"_blank\">\u003Cimg src=\"https:\u002F\u002Fupload.wikimedia.org\u002Fwikipedia\u002Fcommons\u002Fthumb\u002Fd\u002Fd8\u002FNew_Yorker.svg\u002F1280px-New_Yorker.svg.png\" width=\"100\"\u002F>\u003C\u002Fa>\n\n此外，我也要感谢NewYorker公司，因为其中一部分功能是由我领导的计算机视觉团队开发并使用的。","# Open Metric Learning (OML) 快速上手指南\n\nOpen Metric Learning (OML) 是一个基于 PyTorch 的框架，旨在帮助用户训练和验证能够生成高质量嵌入（Embeddings）的模型。它特别适用于度量学习（Metric Learning）场景，如以图搜图、人脸\u002F车辆重识别、商品检索等，其中类别数量巨大但每个类别的样本较少。\n\n## 1. 环境准备\n\n在开始之前，请确保您的开发环境满足以下要求：\n\n*   **操作系统**: Linux, macOS 或 Windows\n*   **Python 版本**: 3.10, 3.11 或 3.12\n*   **核心依赖**:\n    *   PyTorch (需预先安装与您的 CUDA 版本匹配的 PyTorch)\n    *   PyTorch Lightning\n    *   torchvision\n    *   albumentations (用于图像增强)\n\n> **注意**：OML 本身不包含 PyTorch 的安装，请务必先根据 [PyTorch 官网](https:\u002F\u002Fpytorch.org\u002F) 指引安装好基础的 PyTorch 环境。\n\n## 2. 安装步骤\n\n推荐使用 `pip` 进行安装。为了获得更快的下载速度，国内用户建议使用清华或阿里镜像源。\n\n### 标准安装\n```bash\npip install open-metric-learning\n```\n\n### 使用国内镜像源加速安装\n```bash\npip install open-metric-learning -i https:\u002F\u002Fpypi.tuna.tsinghua.edu.cn\u002Fsimple\n```\n\n### 验证安装\n安装完成后，可以通过以下命令检查版本，确保安装成功：\n```bash\npython -c \"import oml; print(oml.__version__)\"\n```\n\n## 3. 
基本使用\n\nOML 的核心优势在于其“配置驱动”的 Pipeline 设计。您只需准备数据和配置文件，即可启动训练。以下是基于 Python API 的最简使用流程。\n\n### 步骤一：准备数据\nOML 期望数据以特定的格式组织。通常需要一个包含图像路径和标签索引的数据集类。以下是一个简单的自定义 Dataset 示例：\n\n```python\nfrom torch.utils.data import Dataset\nfrom PIL import Image\nfrom pathlib import Path\n\nclass SimpleReIDDataset(Dataset):\n    def __init__(self, data_root, labels_df, transforms=None):\n        \"\"\"\n        data_root: 图片根目录\n        labels_df: 包含 'image_path' 和 'label' 列的 DataFrame\n        transforms: 图像预处理变换\n        \"\"\"\n        self.data_root = Path(data_root)\n        self.labels_df = labels_df\n        self.transforms = transforms\n\n    def __getitem__(self, idx):\n        row = self.labels_df.iloc[idx]\n        img_path = self.data_root \u002F row['image_path']\n        label = row['label']\n        \n        image = Image.open(img_path).convert(\"RGB\")\n        \n        if self.transforms:\n            image = self.transforms(image=image)[\"image\"]\n            \n        return {\"images\": image, \"labels\": label}\n\n    def __len__(self):\n        return len(self.labels_df)\n```\n\n### 步骤二：配置并启动训练\nOML 提供了预定义的 Sampler（采样器）和 Miner（挖掘器）来处理度量学习特有的批次构建（如保证每个批次包含多个类别，每个类别多个样本）。\n\n以下是一个最小化的训练脚本示例：\n\n```python\nimport torch\nfrom torch import nn, optim\nfrom pytorch_lightning import Trainer\nfrom pytorch_lightning.loggers import TensorBoardLogger\nfrom torch.utils.data import DataLoader\n\n# 引入 OML 核心组件\nfrom oml.configs import GeneralTrainConfig\nfrom oml.models import ResnetExtractor\nfrom oml.losses import TripletLossWithMiner\nfrom oml.miners import HardTripletsMiner\nfrom oml.samplers import CategoryBalanceSampler\nfrom oml.integrations.pl import MetricLearningModule\n\n# 1. 初始化模型 (使用 OML Zoo 中的预训练权重或随机初始化)\n# 这里以 ResNet50 为例，输出维度为 512\nmodel = ResnetExtractor(\n    arch=\"resnet50\", \n    weights=None, # 可设置为 \"resnet50\" 加载 ImageNet 预训练权重\n    num_classes=512, \n    global_pool_fn=\"avg\"\n)\n\n# 2. 定义损失函数和挖掘策略\ncriterion = TripletLossWithMiner(\n    miner=HardTripletsMiner(), # 仅保留最难的正负样本对\n    margin=0.5\n)\n\n# 3. 封装为 PyTorch Lightning 模块\npl_module = MetricLearningModule(\n    model=model,\n    criterion=criterion,\n    optimizer=optim.SGD(model.parameters(), lr=0.01, momentum=0.9),\n    scheduler=None, # 可选：添加学习率调度器\n)\n\n# 4. 准备数据加载器 (假设已实例化 train_dataset)\n# CategoryBalanceSampler 是度量学习的关键：确保每个 batch 有 C 个类别，每个类别 K 个样本\ntrain_sampler = CategoryBalanceSampler(\n    labels=train_dataset.labels, \n    n_categories=32, \n    n_instances=4\n)\n\ntrain_loader = DataLoader(\n    dataset=train_dataset,\n    batch_size=None, # Sampler 已经控制了批次大小，此处设为 None\n    sampler=train_sampler,\n    num_workers=4\n)\n\n# 5. 
启动训练\ntrainer = Trainer(\n    max_epochs=10,\n    accelerator=\"gpu\", # 或 \"cpu\"\n    devices=1,\n    logger=TensorBoardLogger(save_dir=\"logs\u002F\")\n)\n\ntrainer.fit(pl_module, train_dataloaders=train_loader)\n```\n\n### 关键概念说明\n*   **Sampler (`CategoryBalanceSampler`)**: 不同于普通分类任务，度量学习需要精心构造 Batch。该采样器确保每个 Batch 中包含指定数量的类别（Categories）和每个类别的样本数（Instances），这对于三元组损失（Triplet Loss）至关重要。\n*   **Miner (`HardTripletsMiner`)**: 在 Batch 内部进一步筛选出“最难”的样本对参与梯度更新，能显著加速收敛并提升模型性能。\n*   **Pipeline**: 上述代码展示了 OML 如何与 PyTorch Lightning 无缝集成，利用其强大的分布式训练（DDP）和日志功能。\n\n对于更复杂的场景（如多 GPU 训练、详细的验证指标计算），推荐查阅 OML 官方文档中的配置文件示例（YAML 格式），通过 `oml.train` 命令行工具直接运行，无需编写大量 Python 代码。","某时尚电商平台的算法团队正致力于构建一个“以图搜图”功能，希望用户上传一张街拍照片后，系统能精准推荐库中款式最相似的商品。\n\n### 没有 open-metric-learning 时\n- **训练目标错位**：团队直接复用分类模型的倒数层特征，但分类任务优化的是类别边界，并未直接优化向量间的余弦距离或欧氏距离，导致检索排序不准。\n- **实验流程繁琐**：每次尝试新的损失函数（如 Triplet Loss 或 Circle Loss）都需要手动重写数据加载器和验证逻辑，开发周期长达数周。\n- **评估标准缺失**：缺乏统一的 mAP 或 CMC 曲线评估体系，难以量化不同模型在检索任务上的真实性能差异。\n- **复现难度极高**：由于缺少标准化的管道配置，团队成员间难以复现彼此的实验结果，调参过程如同“黑盒”摸索。\n\n### 使用 open-metric-learning 后\n- **度量对齐优化**：利用 open-metric-learning 内置的专用损失函数和采样策略，直接针对向量距离进行优化，显著提升了相似款式的召回率。\n- **流水线极速搭建**：通过其模块化的 PyTorch 框架，只需修改配置文件即可快速切换模型架构与损失函数，新实验上线时间从周缩短至小时级。\n- **专业指标监控**：集成标准的检索评估指标（如 mAP@K），实时可视化模型性能，让迭代方向清晰明确。\n- **标准化复现**：依托其成熟的实验管理范式，确保了从数据增强到模型验证的全流程可复现，团队协作效率大幅提升。\n\nopen-metric-learning 将原本碎片化、高门槛的度量学习研发过程，转化为标准化、高效率的工业级落地方案。","https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FOML-Team_open-metric-learning_7ecfdfd0.png","OML-Team","https:\u002F\u002Foss.gittoolsai.com\u002Favatars\u002FOML-Team_cce3be3f.jpg","OpenMetricLearning-Team",null,"alexey.shabanoff@gmail.com","https:\u002F\u002Fgithub.com\u002FOML-Team",[79,83,87],{"name":80,"color":81,"percentage":82},"Python","#3572A5",98.6,{"name":84,"color":85,"percentage":86},"Makefile","#427819",0.8,{"name":88,"color":89,"percentage":90},"Jupyter Notebook","#DA5B0B",0.6,986,76,"2026-04-08T15:37:09","Apache-2.0",1,"未说明","未说明 (基于 PyTorch，支持 DDP 多卡训练，具体显存和 CUDA 版本取决于所选模型架构)",{"notes":99,"python":100,"dependencies":101},"该工具是一个基于 PyTorch 和 PyTorch Lightning 的度量学习框架，专注于端到端的训练流程和预训练模型库（Zoo）。README 中未明确列出具体的操作系统、GPU 型号、显存大小或内存需求，这些通常取决于用户选择的具体模型（如 ResNet, ViT 等）和数据集规模。项目提供了针对大规模类别（数千个 ID）和小样本情况的优化策略。","3.10, 3.11, 3.12",[102,103,64],"torch","pytorch-lightning",[14,15,105,16],"其他",[107,108,109,110,111,103,112,113,114,115,116],"computer-vision","data-science","deep-learning","metric-learning","pytorch","representation-learning","hacktoberfest","similarity-learning","hacktoberfest-2023","hacktoberfest2023","2026-03-27T02:49:30.150509","2026-04-10T19:13:16.176310",[120,125,130,135,140,145],{"id":121,"question_zh":122,"answer_zh":123,"source_url":124},28042,"在使用 `TripletMinerWithMemory` 且 `expand_k > 1` 时，为什么会出现内存错误或采样逻辑困惑？","这是因为当保留过去多个批次（如 50 个批次）的特征和标签时，可能的三元组数量会呈组合级爆炸（公式约为 `N_tri ~= N_labels**2 * N_instances**3`），远超内存限制。`TripletMinerWithMemory` 并不会使用所有可能的三元组，而是根据 `tri_expand_k` 参数进行采样。具体规则是：返回的三元组数量 = `tri_expand_k` * 原始批次中的三元组数量。例如，如果一个批次有 120 个三元组且 `k=2`，系统会从数百万个潜在三元组中随机抽取 240 个，而不应用复杂的硬挖掘逻辑，以此来控制内存占用。","https:\u002F\u002Fgithub.com\u002FOML-Team\u002Fopen-metric-learning\u002Fissues\u002F349",{"id":126,"question_zh":127,"answer_zh":128,"source_url":129},28043,"如何优化 Pairwise 后处理验证阶段速度过慢的问题？","验证速度慢主要由两个原因造成：1. `get_query_ids` 和 `get_gallery_ids` 方法在每次验证时重复计算，但实际这些 ID 在训练过程中是不变的。解决方案是在数据集初始化（`__init__`）时预先计算并缓存这些 ID。2. 
在验证步骤中，即使只使用嵌入向量（embeddings），代码仍会重复从磁盘加载图像。解决方案是向 `PairwiseDataset` 传递一个类似 `load_images` 的参数，控制在不需要时跳过图像加载步骤，从而显著减少 I\u002FO 开销。","https:\u002F\u002Fgithub.com\u002FOML-Team\u002Fopen-metric-learning\u002Fissues\u002F599",{"id":131,"question_zh":132,"answer_zh":133,"source_url":134},28044,"在 Stanford Online Products (SOP) 数据集上验证时遇到 OOM（内存溢出）错误怎么办？","该问题通常由 `pairwise_dist` 函数引起，它在计算查询集和画廊集之间的距离矩阵时会产生巨大的内存足迹（例如 60k x 384 的张量无法放入个人电脑内存）。推荐的解决方案是将距离计算改为分批（batch）进行，避免一次性生成完整的距离矩阵。需要修改相关工具函数以支持分批计算，并在大规模数据上测试内存占用情况。","https:\u002F\u002Fgithub.com\u002FOML-Team\u002Fopen-metric-learning\u002Fissues\u002F283",{"id":136,"question_zh":137,"answer_zh":138,"source_url":139},28045,"如何将 `AllTripletsMiner` 的实现从循环改为向量化操作以提升性能？","可以将原本使用循环的实现替换为向量化操作。在开发新版本时，建议保留原有的 `get_available_triplets` 函数作为朴素实现用于测试，确保新旧版本返回相同的三元组集合。对于更新后的 Miner，可以在 `__init__` 方法中添加 `device` 参数，默认设置为 `cuda` 以利用 GPU 加速，但在某些兼容性考量下也可默认为 `cpu`。","https:\u002F\u002Fgithub.com\u002FOML-Team\u002Fopen-metric-learning\u002Fissues\u002F214",{"id":141,"question_zh":142,"answer_zh":143,"source_url":144},28046,"如何解决 GitHub Actions 自动部署到 PyPI 时因权限不足导致失败的问题？","默认情况下，新组织的 `GITHUB_TOKEN` 仅对 `contents` 范围具有读取权限，导致发布失败。解决方法有两种：1. 在组织和仓库设置中授予写权限（如果安全策略允许）。2. 如果不希望更改组织级权限，可以在 workflow 文件的具体 job 中显式声明权限，例如添加 `permissions: write-all` 配置项，以覆盖默认的限制并允许发布操作。","https:\u002F\u002Fgithub.com\u002FOML-Team\u002Fopen-metric-learning\u002Fissues\u002F171",{"id":146,"question_zh":147,"answer_zh":148,"source_url":149},28047,"是否可以在指标中加入基于统计的度量（如 FRR@FAR 或 FNMR@FMR）来辅助模型分析？","是的，引入基于统计的指标（如人脸识别中常用的 FRR@FAR、等错误率 EER 等）非常有意义，可以帮助洞察正负样本对之间的距离分布，即使检索排名指标很好，也可能存在正负样本距离过于接近的问题。建议通过重构代码，引入独立的计算函数（如 `calc_map`, `calc_cmc` 等），并在主指标计算函数中统一调用，以便灵活扩展此类统计指标。","https:\u002F\u002Fgithub.com\u002FOML-Team\u002Fopen-metric-learning\u002Fissues\u002F226",[151,155],{"id":152,"version":153,"summary_zh":75,"released_at":154},188935,"release.4.0.0","2025-04-14T11:23:25",{"id":156,"version":157,"summary_zh":158,"released_at":159},188936,"release.3.1.0","本次更新主要集中在以下几个方面：\n\n* 我们新增了对“官方”文本数据的支持，并提供了相应的 Python 示例。（请注意，目前 Pipelines 尚未支持文本数据。）\n\n* 我们引入了 `RetrievalResults`（`RR`）类——一个用于存储针对给定查询检索到的图库项的容器。`RR` 提供了一种统一的方式来可视化预测结果并计算指标（如果已知真实标签）。它还简化了后处理流程：以一个 `RR` 对象作为输入，生成另一个 `RR_upd` 对象作为输出。通过这两个对象，可以直观地或通过指标来比较检索结果。此外，您还可以轻松地构建一系列这样的后处理器。\n  * `RR` 采用了批处理优化内存使用：换言之，它不会存储完整的查询-图库距离矩阵。\n    （但这并不会使搜索变得近似。）\n\n* 我们将 `Model` 和 `Dataset` 设为唯一负责处理特定模态逻辑的类。`Model` 负责解析其输入维度：例如，图像的 `BxCxHxW` 或序列（如文本）的 `BxLxD`。`Dataset` 则负责准备单个样本：对于图像可以使用 `Transforms`，而对于文本则可以使用 `Tokenizer`。计算指标的函数，如 `calc_retrieval_metrics_rr`、`RetrievalResults`、`PairwiseReranker` 等以及其他类和函数，现已统一，可适用于任何模态。\n  * 我们新增了 `IVisualizableDataset` 接口，其中包含 `.visualize()` 方法，用于展示单个样本。如果实现了该接口，`RetrievalResults` 就能够显示检索结果的布局。\n\n#### 从 OML 2.* 迁移到新版本 [Python API]：\n\n要快速了解这些变化，最简单的方法就是重新阅读示例代码！\n\n* 推荐的验证方式是使用 `RetrievalResults` 类以及 `calc_retrieval_metrics_rr`、`calc_fnmr_at_fmr_rr` 等函数。`EmbeddingMetrics` 类仍可用于 PyTorch Lightning 和 Pipelines 内部。需要注意的是，`EmbeddingMetrics` 类的方法签名已略有调整，请参考 Lightning 的相关示例。\n\n* 由于特定模态的逻辑已被限定在 `Dataset` 中，因此它不再输出 `PATHS_KEY`、`X1_KEY`、`X2_KEY`、`Y1_KEY` 和 `Y2_KEY`。而那些与模态无关的键，如 `LABELS_KEY`、`IS_GALLERY`、`IS_QUERY_KEY` 和 `CATEGORIES_KEY`，仍然在使用中。\n\n* `inference_on_images` 现已更名为 `inference`，并且现在可以处理任何模态的数据。\n\n* `Datasets` 的接口也进行了小幅调整。例如，我们新增了 `IQueryGalleryDataset` 和 `IQueryGalleryLabeledDataset` 两个接口。前者应用于推理阶段，后者则用于验证阶段。此外，还新增了 `IVisualizableDataset` 接口。\n\n* 移除了部分内部实现，如 `IMetricDDP`、`EmbeddingMetricsDDP`、`calc_distance_matrix`、`calc_gt_mask`、`calc_mask_to_ignore` 和 
`apply_mask_to_ignore`。这些改动不应影响您的使用。同时，我们也移除了与预计算三元组流水线相关的代码。\n\n#### 从 OML 2.* 迁移到新版本 [Pipelines]：\n\n* [特征提取](https:\u002F\u002Fgithub.com\u002FOML-Team\u002Fopen-metric-","2024-06-13T17:29:04"]
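
针对上述 3.1.0 的迁移说明，下面给出一个最小化的验证草图（仅为示意，非官方迁移脚本；全部调用均取自前文 README 示例与 mock 数据集），展示推荐的 `inference` + `RetrievalResults` + `calc_retrieval_metrics_rr` 验证路径如何取代旧的 `EmbeddingMetrics` 用法：

```python
from oml import datasets as d
from oml.inference import inference
from oml.metrics import calc_retrieval_metrics_rr
from oml.models import ViTExtractor
from oml.registry import get_transforms_for_pretrained
from oml.retrieval import RetrievalResults, AdaptiveThresholding
from oml.utils import get_mock_images_dataset

# 加载预训练提取器与配套的预处理（与 README 示例一致）
extractor = ViTExtractor.from_pretrained("vits16_dino").to("cpu").eval()
transform, _ = get_transforms_for_pretrained("vits16_dino")

# 使用文档中的 mock 数据集构建查询/图库验证集
_, df_val = get_mock_images_dataset(global_paths=True)
val = d.ImageQueryGalleryLabeledDataset(df_val, transform=transform)

# 3.* 之后统一的推理入口：inference（旧的 inference_on_images 已更名）
embeddings = inference(extractor, val, batch_size=4, num_workers=0)

# 推荐的验证方式：RetrievalResults + calc_*_rr
rr = RetrievalResults.from_embeddings(embeddings, val, n_items=3)
rr = AdaptiveThresholding(n_std=2).process(rr)
print(calc_retrieval_metrics_rr(rr, map_top_k=(3,), cmc_top_k=(1,)))
```

以上接口（`inference`、`RetrievalResults.from_embeddings`、`AdaptiveThresholding`、`calc_retrieval_metrics_rr`）均出现在前文示例中；若在 Lightning 或 Pipelines 内部，仍可按迁移说明继续使用签名略有调整的 `EmbeddingMetrics`。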