[{"data":1,"prerenderedAt":-1},["ShallowReactive",2],{"similar-KevinMusgrave--pytorch-metric-learning":3,"tool-KevinMusgrave--pytorch-metric-learning":64},[4,17,27,35,43,56],{"id":5,"name":6,"github_repo":7,"description_zh":8,"stars":9,"difficulty_score":10,"last_commit_at":11,"category_tags":12,"status":16},3808,"stable-diffusion-webui","AUTOMATIC1111\u002Fstable-diffusion-webui","stable-diffusion-webui 是一个基于 Gradio 构建的网页版操作界面，旨在让用户能够轻松地在本地运行和使用强大的 Stable Diffusion 图像生成模型。它解决了原始模型依赖命令行、操作门槛高且功能分散的痛点，将复杂的 AI 绘图流程整合进一个直观易用的图形化平台。\n\n无论是希望快速上手的普通创作者、需要精细控制画面细节的设计师，还是想要深入探索模型潜力的开发者与研究人员，都能从中获益。其核心亮点在于极高的功能丰富度：不仅支持文生图、图生图、局部重绘（Inpainting）和外绘（Outpainting）等基础模式，还独创了注意力机制调整、提示词矩阵、负向提示词以及“高清修复”等高级功能。此外，它内置了 GFPGAN 和 CodeFormer 等人脸修复工具，支持多种神经网络放大算法，并允许用户通过插件系统无限扩展能力。即使是显存有限的设备，stable-diffusion-webui 也提供了相应的优化选项，让高质量的 AI 艺术创作变得触手可及。",162132,3,"2026-04-05T11:01:52",[13,14,15],"开发框架","图像","Agent","ready",{"id":18,"name":19,"github_repo":20,"description_zh":21,"stars":22,"difficulty_score":23,"last_commit_at":24,"category_tags":25,"status":16},1381,"everything-claude-code","affaan-m\u002Feverything-claude-code","everything-claude-code 是一套专为 AI 编程助手（如 Claude Code、Codex、Cursor 等）打造的高性能优化系统。它不仅仅是一组配置文件，而是一个经过长期实战打磨的完整框架，旨在解决 AI 代理在实际开发中面临的效率低下、记忆丢失、安全隐患及缺乏持续学习能力等核心痛点。\n\n通过引入技能模块化、直觉增强、记忆持久化机制以及内置的安全扫描功能，everything-claude-code 能显著提升 AI 在复杂任务中的表现，帮助开发者构建更稳定、更智能的生产级 AI 代理。其独特的“研究优先”开发理念和针对 Token 消耗的优化策略，使得模型响应更快、成本更低，同时有效防御潜在的攻击向量。\n\n这套工具特别适合软件开发者、AI 研究人员以及希望深度定制 AI 工作流的技术团队使用。无论您是在构建大型代码库，还是需要 AI 协助进行安全审计与自动化测试，everything-claude-code 都能提供强大的底层支持。作为一个曾荣获 Anthropic 黑客大奖的开源项目，它融合了多语言支持与丰富的实战钩子（hooks），让 AI 真正成长为懂上",138956,2,"2026-04-05T11:33:21",[13,15,26],"语言模型",{"id":28,"name":29,"github_repo":30,"description_zh":31,"stars":32,"difficulty_score":23,"last_commit_at":33,"category_tags":34,"status":16},2271,"ComfyUI","Comfy-Org\u002FComfyUI","ComfyUI 是一款功能强大且高度模块化的视觉 AI 引擎，专为设计和执行复杂的 Stable Diffusion 图像生成流程而打造。它摒弃了传统的代码编写模式，采用直观的节点式流程图界面，让用户通过连接不同的功能模块即可构建个性化的生成管线。\n\n这一设计巧妙解决了高级 AI 
绘图工作流配置复杂、灵活性不足的痛点。用户无需具备编程背景，也能自由组合模型、调整参数并实时预览效果，轻松实现从基础文生图到多步骤高清修复等各类复杂任务。ComfyUI 拥有极佳的兼容性，不仅支持 Windows、macOS 和 Linux 全平台，还广泛适配 NVIDIA、AMD、Intel 及苹果 Silicon 等多种硬件架构，并率先支持 SDXL、Flux、SD3 等前沿模型。\n\n无论是希望深入探索算法潜力的研究人员和开发者，还是追求极致创作自由度的设计师与资深 AI 绘画爱好者，ComfyUI 都能提供强大的支持。其独特的模块化架构允许社区不断扩展新功能，使其成为当前最灵活、生态最丰富的开源扩散模型工具之一，帮助用户将创意高效转化为现实。",107662,"2026-04-03T11:11:01",[13,14,15],{"id":36,"name":37,"github_repo":38,"description_zh":39,"stars":40,"difficulty_score":23,"last_commit_at":41,"category_tags":42,"status":16},3704,"NextChat","ChatGPTNextWeb\u002FNextChat","NextChat 是一款轻量且极速的 AI 助手，旨在为用户提供流畅、跨平台的大模型交互体验。它完美解决了用户在多设备间切换时难以保持对话连续性，以及面对众多 AI 模型不知如何统一管理的痛点。无论是日常办公、学习辅助还是创意激发，NextChat 都能让用户随时随地通过网页、iOS、Android、Windows、MacOS 或 Linux 端无缝接入智能服务。\n\n这款工具非常适合普通用户、学生、职场人士以及需要私有化部署的企业团队使用。对于开发者而言，它也提供了便捷的自托管方案，支持一键部署到 Vercel 或 Zeabur 等平台。\n\nNextChat 的核心亮点在于其广泛的模型兼容性，原生支持 Claude、DeepSeek、GPT-4 及 Gemini Pro 等主流大模型，让用户在一个界面即可自由切换不同 AI 能力。此外，它还率先支持 MCP（Model Context Protocol）协议，增强了上下文处理能力。针对企业用户，NextChat 提供专业版解决方案，具备品牌定制、细粒度权限控制、内部知识库整合及安全审计等功能，满足公司对数据隐私和个性化管理的高标准要求。",87618,"2026-04-05T07:20:52",[13,26],{"id":44,"name":45,"github_repo":46,"description_zh":47,"stars":48,"difficulty_score":23,"last_commit_at":49,"category_tags":50,"status":16},2268,"ML-For-Beginners","microsoft\u002FML-For-Beginners","ML-For-Beginners 是由微软推出的一套系统化机器学习入门课程，旨在帮助零基础用户轻松掌握经典机器学习知识。这套课程将学习路径规划为 12 周，包含 26 节精炼课程和 52 道配套测验，内容涵盖从基础概念到实际应用的完整流程，有效解决了初学者面对庞大知识体系时无从下手、缺乏结构化指导的痛点。\n\n无论是希望转型的开发者、需要补充算法背景的研究人员，还是对人工智能充满好奇的普通爱好者，都能从中受益。课程不仅提供了清晰的理论讲解，还强调动手实践，让用户在循序渐进中建立扎实的技能基础。其独特的亮点在于强大的多语言支持，通过自动化机制提供了包括简体中文在内的 50 多种语言版本，极大地降低了全球不同背景用户的学习门槛。此外，项目采用开源协作模式，社区活跃且内容持续更新，确保学习者能获取前沿且准确的技术资讯。如果你正寻找一条清晰、友好且专业的机器学习入门之路，ML-For-Beginners 将是理想的起点。",84991,"2026-04-05T10:45:23",[14,51,52,53,15,54,26,13,55],"数据工具","视频","插件","其他","音频",{"id":57,"name":58,"github_repo":59,"description_zh":60,"stars":61,"difficulty_score":10,"last_commit_at":62,"category_tags":63,"status":16},3128,"ragflow","infiniflow\u002Fragflow","RAGFlow 
是一款领先的开源检索增强生成（RAG）引擎，旨在为大语言模型构建更精准、可靠的上下文层。它巧妙地将前沿的 RAG 技术与智能体（Agent）能力相结合，不仅支持从各类文档中高效提取知识，还能让模型基于这些知识进行逻辑推理和任务执行。\n\n在大模型应用中，幻觉问题和知识滞后是常见痛点。RAGFlow 通过深度解析复杂文档结构（如表格、图表及混合排版），显著提升了信息检索的准确度，从而有效减少模型“胡编乱造”的现象，确保回答既有据可依又具备时效性。其内置的智能体机制更进一步，使系统不仅能回答问题，还能自主规划步骤解决复杂问题。\n\n这款工具特别适合开发者、企业技术团队以及 AI 研究人员使用。无论是希望快速搭建私有知识库问答系统，还是致力于探索大模型在垂直领域落地的创新者，都能从中受益。RAGFlow 提供了可视化的工作流编排界面和灵活的 API 接口，既降低了非算法背景用户的上手门槛，也满足了专业开发者对系统深度定制的需求。作为基于 Apache 2.0 协议开源的项目，它正成为连接通用大模型与行业专有知识之间的重要桥梁。",77062,"2026-04-04T04:44:48",[15,14,13,26,54],{"id":65,"github_repo":66,"name":67,"description_en":68,"description_zh":69,"ai_summary_zh":69,"readme_en":70,"readme_zh":71,"quickstart_zh":72,"use_case_zh":73,"hero_image_url":74,"owner_login":75,"owner_name":76,"owner_avatar_url":77,"owner_bio":78,"owner_company":79,"owner_location":80,"owner_email":80,"owner_twitter":81,"owner_website":82,"owner_url":83,"languages":84,"stars":93,"forks":94,"last_commit_at":95,"license":96,"difficulty_score":97,"env_os":98,"env_gpu":99,"env_ram":100,"env_deps":101,"category_tags":110,"github_topics":111,"view_count":23,"oss_zip_url":80,"oss_zip_packed_at":80,"status":16,"created_at":122,"updated_at":123,"faqs":124,"releases":154},3922,"KevinMusgrave\u002Fpytorch-metric-learning","pytorch-metric-learning","The easiest way to use deep metric learning in your application. Modular, flexible, and extensible. 
Written in PyTorch.","pytorch-metric-learning 是一个基于 PyTorch 构建的开源库，旨在让开发者轻松地在应用中实现深度度量学习。它主要解决了传统方法中损失函数编写复杂、难例挖掘（Mining）逻辑难以复用以及实验流程繁琐等痛点，将原本需要大量自定义代码的环节封装为标准化模块。\n\n该工具非常适合从事人脸识别、图像检索、聚类分析等领域的 AI 研究人员和深度学习开发者使用。无论是希望快速复现前沿论文算法的科研人员，还是需要在生产环境中构建高效特征提取系统的工程师，都能从中受益。\n\n其核心亮点在于高度的模块化与灵活性。库内包含了损失函数、难例挖掘器、距离度量等九大独立模块，用户既可以单独调用某个组件嵌入现有代码，也能组合它们构建完整的训练测试工作流。例如，它可以自动计算批次内的所有三元组，或配合特定的挖掘策略智能筛选出“最难”的正负样本对，从而显著提升模型收敛速度和最终精度。此外，它还内置了常用数据集的下载接口，并提供了丰富的 Google Colab 示例笔记，帮助用户零门槛上手，极大地降低了深度度量学习的开发与实验成本。","\u003Ch1>\n\u003Ca href=\"https:\u002F\u002Fgithub.com\u002FKevinMusgrave\u002Fpytorch-metric-learning\">\n\u003Cimg alt=\"PyTorch Metric Learning\" src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FKevinMusgrave_pytorch-metric-learning_readme_545c7b4a0c17.png\">\n\u003C\u002Fa>\n\u003C\u002Fh1>\n\n\u003Cp align=\"center\">\n \u003Ca href=\"https:\u002F\u002Fpypi.org\u002Fproject\u002Fpytorch-metric-learning\">\n     \u003Cimg alt=\"PyPi version\" src=\"https:\u002F\u002Fimg.shields.io\u002Fpypi\u002Fv\u002Fpytorch-metric-learning?color=bright-green\">\n \u003C\u002Fa>\n\t\n\t\n \n \u003Ca href=\"https:\u002F\u002Fanaconda.org\u002Fconda-forge\u002Fpytorch-metric-learning\">\n     \u003Cimg alt=\"Anaconda version\" src=\"https:\u002F\u002Fimg.shields.io\u002Fconda\u002Fv\u002Fconda-forge\u002Fpytorch-metric-learning?color=bright-green\">\n \u003C\u002Fa>\n\u003C\u002Fp>\n\n## News\n\n**August 17**: v2.9.0\n- Added [SmoothAPLoss](https:\u002F\u002Fkevinmusgrave.github.io\u002Fpytorch-metric-learning\u002Flosses\u002F#smoothaploss).\n- Improved SubCenterArcFaceLoss and GenericPairLoss.\n- Thank you [ir2718](https:\u002F\u002Fgithub.com\u002Fir2718), [lucamarini22](https:\u002F\u002Fgithub.com\u002Flucamarini22), and [marcpaga](https:\u002F\u002Fgithub.com\u002Fmarcpaga).\n\n**December 11**: v2.8.0\n- Added the [Datasets](https:\u002F\u002Fkevinmusgrave.github.io\u002Fpytorch-metric-learning\u002Fdatasets) module for easy downloading of common datasets:\n  - 
[CUB200](https:\u002F\u002Fkevinmusgrave.github.io\u002Fpytorch-metric-learning\u002Fdatasets\u002F#cub-200-2011)\n  - [Cars196](https:\u002F\u002Fkevinmusgrave.github.io\u002Fpytorch-metric-learning\u002Fdatasets\u002F#cars196)\n  - [INaturalist 2018](https:\u002F\u002Fkevinmusgrave.github.io\u002Fpytorch-metric-learning\u002Fdatasets\u002F#inaturalist2018)\n  - [Stanford Online Products](https:\u002F\u002Fkevinmusgrave.github.io\u002Fpytorch-metric-learning\u002Fdatasets\u002F#stanfordonlineproducts)\n- Thank you [ir2718](https:\u002F\u002Fgithub.com\u002Fir2718).\n\n## Documentation\n- [**View the documentation here**](https:\u002F\u002Fkevinmusgrave.github.io\u002Fpytorch-metric-learning\u002F)\n- [**View the installation instructions here**](https:\u002F\u002Fgithub.com\u002FKevinMusgrave\u002Fpytorch-metric-learning#installation)\n- [**View the available losses, miners etc. here**](https:\u002F\u002Fgithub.com\u002FKevinMusgrave\u002Fpytorch-metric-learning\u002Fblob\u002Fmaster\u002FCONTENTS.md) \n\n\n## Google Colab Examples\nSee the [examples folder](https:\u002F\u002Fgithub.com\u002FKevinMusgrave\u002Fpytorch-metric-learning\u002Fblob\u002Fmaster\u002Fexamples\u002FREADME.md) for notebooks you can download or run on Google Colab.\n\n\n## PyTorch Metric Learning Overview\nThis library contains 9 modules, each of which can be used independently within your existing codebase, or combined together for a complete train\u002Ftest workflow.\n\n![high_level_module_overview](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FKevinMusgrave_pytorch-metric-learning_readme_492dfad03aca.png)\n\n\n\n## How loss functions work\n\n### Using losses and miners in your training loop\nLet’s initialize a plain [TripletMarginLoss](https:\u002F\u002Fkevinmusgrave.github.io\u002Fpytorch-metric-learning\u002Flosses\u002F#tripletmarginloss):\n```python\nfrom pytorch_metric_learning import losses\nloss_func = losses.TripletMarginLoss()\n```\n\nTo compute the loss in your training 
loop, pass in the embeddings computed by your model, and the corresponding labels. The embeddings should have size (N, embedding_size), and the labels should have size (N), where N is the batch size.\n\n```python\n# your training loop\nfor i, (data, labels) in enumerate(dataloader):\n\toptimizer.zero_grad()\n\tembeddings = model(data)\n\tloss = loss_func(embeddings, labels)\n\tloss.backward()\n\toptimizer.step()\n```\n\nThe TripletMarginLoss computes all possible triplets within the batch, based on the labels you pass into it. Anchor-positive pairs are formed by embeddings that share the same label, and anchor-negative pairs are formed by embeddings that have different labels. \n\nSometimes it can help to add a mining function:\n```python\nfrom pytorch_metric_learning import miners, losses\nminer = miners.MultiSimilarityMiner()\nloss_func = losses.TripletMarginLoss()\n\n# your training loop\nfor i, (data, labels) in enumerate(dataloader):\n\toptimizer.zero_grad()\n\tembeddings = model(data)\n\thard_pairs = miner(embeddings, labels)\n\tloss = loss_func(embeddings, labels, hard_pairs)\n\tloss.backward()\n\toptimizer.step()\n```\nIn the above code, the miner finds positive and negative pairs that it thinks are particularly difficult. Note that even though the TripletMarginLoss operates on triplets, it’s still possible to pass in pairs. This is because the library automatically converts pairs to triplets and triplets to pairs, when necessary.\n\n### Customizing loss functions\nLoss functions can be customized using [distances](https:\u002F\u002Fkevinmusgrave.github.io\u002Fpytorch-metric-learning\u002Fdistances\u002F), [reducers](https:\u002F\u002Fkevinmusgrave.github.io\u002Fpytorch-metric-learning\u002Freducers\u002F), and [regularizers](https:\u002F\u002Fkevinmusgrave.github.io\u002Fpytorch-metric-learning\u002Fregularizers\u002F). In the diagram below, a miner finds the indices of hard pairs within a batch. 
These are used to index into the distance matrix, computed by the distance object. For this diagram, the loss function is pair-based, so it computes a loss per pair. In addition, a regularizer has been supplied, so a regularization loss is computed for each embedding in the batch. The per-pair and per-element losses are passed to the reducer, which (in this diagram) only keeps losses with a high value. The averages are computed for the high-valued pair and element losses, and are then added together to obtain the final loss.\n\n![high_level_loss_function_overview](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FKevinMusgrave_pytorch-metric-learning_readme_5a6418242862.png)\n\nNow here's an example of a customized TripletMarginLoss:\n```python\nfrom pytorch_metric_learning.distances import CosineSimilarity\nfrom pytorch_metric_learning.reducers import ThresholdReducer\nfrom pytorch_metric_learning.regularizers import LpRegularizer\nfrom pytorch_metric_learning import losses\nloss_func = losses.TripletMarginLoss(distance = CosineSimilarity(), \n\t\t\t\t     reducer = ThresholdReducer(high=0.3), \n\t\t\t \t     embedding_regularizer = LpRegularizer())\n```\nThis customized triplet loss has the following properties:\n\n - The loss will be computed using cosine similarity instead of Euclidean distance.\n - All triplet losses that are higher than 0.3 will be discarded.\n - The embeddings will be L2 regularized.  
\n\n### Using loss functions for unsupervised \u002F self-supervised learning\n\nA `SelfSupervisedLoss` wrapper is provided for self-supervised learning:\n\n```python\nfrom pytorch_metric_learning.losses import SelfSupervisedLoss, TripletMarginLoss\nloss_func = SelfSupervisedLoss(TripletMarginLoss())\n\n# your training for-loop\nfor i, data in enumerate(dataloader):\n\toptimizer.zero_grad()\n\tembeddings = your_model(data)\n\taugmented = your_model(your_augmentation(data))\n\tloss = loss_func(embeddings, augmented)\n\tloss.backward()\n\toptimizer.step()\n```\n\nIf you're interested in [MoCo](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1911.05722.pdf)-style self-supervision, take a look at the [MoCo on CIFAR10](https:\u002F\u002Fgithub.com\u002FKevinMusgrave\u002Fpytorch-metric-learning\u002Ftree\u002Fmaster\u002Fexamples#simple-examples) notebook. It uses CrossBatchMemory to implement the momentum encoder queue, which means you can use any tuple loss, and any tuple miner to extract hard samples from the queue.\n\n\n## Highlights of the rest of the library\n\n- For a convenient way to train your model, take a look at the [trainers](https:\u002F\u002Fkevinmusgrave.github.io\u002Fpytorch-metric-learning\u002Ftrainers\u002F).\n- Want to test your model's accuracy on a dataset? Try the [testers](https:\u002F\u002Fkevinmusgrave.github.io\u002Fpytorch-metric-learning\u002Ftesters\u002F).\n- To compute the accuracy of an embedding space directly, use [AccuracyCalculator](https:\u002F\u002Fkevinmusgrave.github.io\u002Fpytorch-metric-learning\u002Faccuracy_calculation\u002F).\n\nIf you're short of time and want a complete train\u002Ftest workflow, check out the [example Google Colab notebooks](https:\u002F\u002Fgithub.com\u002FKevinMusgrave\u002Fpytorch-metric-learning\u002Ftree\u002Fmaster\u002Fexamples).\n\nTo learn more about all of the above, [see the documentation](https:\u002F\u002Fkevinmusgrave.github.io\u002Fpytorch-metric-learning). 
\n\n\n## Installation\n\n### Required PyTorch version\n - ```pytorch-metric-learning >= v0.9.90``` requires ```torch >= 1.6```\n - ```pytorch-metric-learning \u003C v0.9.90``` doesn't have a version requirement, but was tested with ```torch >= 1.2```\n\nOther dependencies: ```numpy, scikit-learn, tqdm, torchvision```\n\n### Pip\n```\npip install pytorch-metric-learning\n```\n\n**To get the latest dev version**:\n```\npip install pytorch-metric-learning --pre\n```\n\n**To install on Windows**:\n```\npip install torch===1.6.0 torchvision===0.7.0 -f https:\u002F\u002Fdownload.pytorch.org\u002Fwhl\u002Ftorch_stable.html\npip install pytorch-metric-learning\n```\n\n**To install with evaluation and logging capabilities**\n\n(This will install the unofficial pypi version of faiss-gpu, plus record-keeper and tensorboard):\n```\npip install pytorch-metric-learning[with-hooks]\n```\n\n**To install with evaluation and logging capabilities (CPU)**\n\n(This will install the unofficial pypi version of faiss-cpu, plus record-keeper and tensorboard):\n```\npip install pytorch-metric-learning[with-hooks-cpu]\n```\n\t\n### Conda\n```\nconda install -c conda-forge pytorch-metric-learning\n```\n\n**To use the testing module, you'll need faiss, which can be installed via conda as well. See the [installation instructions for faiss](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Ffaiss\u002Fblob\u002Fmaster\u002FINSTALL.md).**\n\n\u003C\u002Fdetails>\n\t\n\n\n## Benchmark results\nSee [powerful-benchmarker](https:\u002F\u002Fgithub.com\u002FKevinMusgrave\u002Fpowerful-benchmarker\u002F) to view benchmark results and to use the benchmarking tool.\n\n\n## Development\nDevelopment is done on the ```dev``` branch:\n```\ngit checkout dev\n```\n\nUnit tests can be run with the default ```unittest``` library:\n```bash\npython -m unittest discover\n```\n\nYou can specify the test datatypes and test device as environment variables. 
For example, to test using float32 and float64 on the CPU:\n```bash\nTEST_DTYPES=float32,float64 TEST_DEVICE=cpu python -m unittest discover\n```\n\nTo run a single test file instead of the entire test suite, specify the file name:\n```bash\npython -m unittest tests\u002Flosses\u002Ftest_angular_loss.py\n```\n\nCode is formatted using ```black``` and ```isort```:\n```bash\npip install black isort\n.\u002Fformat_code.sh\n```\n\n\n## Acknowledgements\n\n### Contributors\nThanks to the contributors who made pull requests!\n\n| Contributor | Highlights |\n| -- | -- |\n|[domenicoMuscill0](https:\u002F\u002Fgithub.com\u002FdomenicoMuscill0)| - [ManifoldLoss](https:\u002F\u002Fkevinmusgrave.github.io\u002Fpytorch-metric-learning\u002Flosses\u002F#manifoldloss) \u003Cbr\u002F> - [P2SGradLoss](https:\u002F\u002Fkevinmusgrave.github.io\u002Fpytorch-metric-learning\u002Flosses\u002F#p2sgradloss) \u003Cbr\u002F> - [HistogramLoss](https:\u002F\u002Fkevinmusgrave.github.io\u002Fpytorch-metric-learning\u002Flosses\u002F#histogramloss) \u003Cbr\u002F> - [DynamicSoftMarginLoss](https:\u002F\u002Fkevinmusgrave.github.io\u002Fpytorch-metric-learning\u002Flosses\u002F#dynamicsoftmarginloss) \u003Cbr\u002F> - [RankedListLoss](https:\u002F\u002Fkevinmusgrave.github.io\u002Fpytorch-metric-learning\u002Flosses\u002F#rankedlistloss) |\n|[mlopezantequera](https:\u002F\u002Fgithub.com\u002Fmlopezantequera) | - Made the [testers](https:\u002F\u002Fkevinmusgrave.github.io\u002Fpytorch-metric-learning\u002Ftesters) work on any combination of query and reference sets \u003Cbr\u002F> - Made [AccuracyCalculator](https:\u002F\u002Fkevinmusgrave.github.io\u002Fpytorch-metric-learning\u002Faccuracy_calculation\u002F) work with arbitrary label comparisons |\n|[cwkeam](https:\u002F\u002Fgithub.com\u002Fcwkeam) | - [SelfSupervisedLoss](https:\u002F\u002Fkevinmusgrave.github.io\u002Fpytorch-metric-learning\u002Flosses\u002F#selfsupervisedloss) \u003Cbr\u002F> - 
[VICRegLoss](https:\u002F\u002Fkevinmusgrave.github.io\u002Fpytorch-metric-learning\u002Flosses\u002F#vicregloss) \u003Cbr\u002F> - Added mean reciprocal rank accuracy to [AccuracyCalculator](https:\u002F\u002Fkevinmusgrave.github.io\u002Fpytorch-metric-learning\u002Faccuracy_calculation\u002F) \u003Cbr\u002F> - BaseLossWrapper|\n| [ir2718](https:\u002F\u002Fgithub.com\u002Fir2718) | - [ThresholdConsistentMarginLoss](https:\u002F\u002Fkevinmusgrave.github.io\u002Fpytorch-metric-learning\u002Flosses\u002F#thresholdconsistentmarginloss) \u003Cbr\u002F> - [SmoothAPLoss](https:\u002F\u002Fkevinmusgrave.github.io\u002Fpytorch-metric-learning\u002Flosses\u002F#smoothaploss) \u003Cbr\u002F> - The [Datasets](https:\u002F\u002Fkevinmusgrave.github.io\u002Fpytorch-metric-learning\u002Fdatasets) module |\n|[marijnl](https:\u002F\u002Fgithub.com\u002Fmarijnl)| - [BatchEasyHardMiner](https:\u002F\u002Fkevinmusgrave.github.io\u002Fpytorch-metric-learning\u002Fminers\u002F#batcheasyhardminer) \u003Cbr\u002F> - [TwoStreamMetricLoss](https:\u002F\u002Fkevinmusgrave.github.io\u002Fpytorch-metric-learning\u002Ftrainers\u002F#twostreammetricloss) \u003Cbr\u002F> - [GlobalTwoStreamEmbeddingSpaceTester](https:\u002F\u002Fkevinmusgrave.github.io\u002Fpytorch-metric-learning\u002Ftesters\u002F#globaltwostreamembeddingspacetester) \u003Cbr\u002F> - [Example using trainers.TwoStreamMetricLoss](https:\u002F\u002Fgithub.com\u002FKevinMusgrave\u002Fpytorch-metric-learning\u002Fblob\u002Fmaster\u002Fexamples\u002Fnotebooks\u002FTwoStreamMetricLoss.ipynb) |\n| [chingisooinar](https:\u002F\u002Fgithub.com\u002Fchingisooinar) | [SubCenterArcFaceLoss](https:\u002F\u002Fkevinmusgrave.github.io\u002Fpytorch-metric-learning\u002Flosses\u002F#subcenterarcfaceloss) |\n| [elias-ramzi](https:\u002F\u002Fgithub.com\u002Felias-ramzi) | [HierarchicalSampler](https:\u002F\u002Fkevinmusgrave.github.io\u002Fpytorch-metric-learning\u002Fsamplers\u002F#hierarchicalsampler) |\n| 
[fjsj](https:\u002F\u002Fgithub.com\u002Ffjsj) | [SupConLoss](https:\u002F\u002Fkevinmusgrave.github.io\u002Fpytorch-metric-learning\u002Flosses\u002F#supconloss) |\n| [AlenUbuntu](https:\u002F\u002Fgithub.com\u002FAlenUbuntu) | [CircleLoss](https:\u002F\u002Fkevinmusgrave.github.io\u002Fpytorch-metric-learning\u002Flosses\u002F#circleloss) |\n| [interestingzhuo](https:\u002F\u002Fgithub.com\u002Finterestingzhuo) | [PNPLoss](https:\u002F\u002Fkevinmusgrave.github.io\u002Fpytorch-metric-learning\u002Flosses\u002F#pnploss) |\n| [wconnell](https:\u002F\u002Fgithub.com\u002Fwconnell) | [Learning a scRNAseq Metric Embedding](https:\u002F\u002Fgithub.com\u002FKevinMusgrave\u002Fpytorch-metric-learning\u002Fblob\u002Fmaster\u002Fexamples\u002Fnotebooks\u002FscRNAseq_MetricEmbedding.ipynb) |\n| [mkmenta](https:\u002F\u002Fgithub.com\u002Fmkmenta) | Improved `get_all_triplets_indices` (fixed the `INT_MAX` error) |\n| [AlexSchuy](https:\u002F\u002Fgithub.com\u002FAlexSchuy) | optimized ```utils.loss_and_miner_utils.get_random_triplet_indices``` |\n| [JohnGiorgi](https:\u002F\u002Fgithub.com\u002FJohnGiorgi) | ```all_gather``` in [utils.distributed](https:\u002F\u002Fkevinmusgrave.github.io\u002Fpytorch-metric-learning\u002Fdistributed) |\n| [Hummer12007](https:\u002F\u002Fgithub.com\u002FHummer12007) | ```utils.key_checker``` |\n| [vltanh](https:\u002F\u002Fgithub.com\u002Fvltanh) | Made ```InferenceModel.train_indexer``` accept datasets |\n| [btseytlin](https:\u002F\u002Fgithub.com\u002Fbtseytlin) | ```get_nearest_neighbors``` in [InferenceModel](https:\u002F\u002Fkevinmusgrave.github.io\u002Fpytorch-metric-learning\u002Finference_models) |\n| [mlw214](https:\u002F\u002Fgithub.com\u002Fmlw214) | Added ```return_per_class``` to [AccuracyCalculator](https:\u002F\u002Fkevinmusgrave.github.io\u002Fpytorch-metric-learning\u002Faccuracy_calculation\u002F) |\n| [layumi](https:\u002F\u002Fgithub.com\u002Flayumi) | 
[InstanceLoss](https:\u002F\u002Fkevinmusgrave.github.io\u002Fpytorch-metric-learning\u002Flosses\u002F#instanceloss) |\n| [NoTody](https:\u002F\u002Fgithub.com\u002FNoTody) | Helped add `ref_emb` and `ref_labels` to the distributed wrappers. |\n| [ElisonSherton](https:\u002F\u002Fgithub.com\u002FElisonSherton) | Fixed an edge case in ArcFaceLoss. |\n| [stompsjo](https:\u002F\u002Fgithub.com\u002Fstompsjo) | Improved documentation for NTXentLoss. |\n| [Puzer](https:\u002F\u002Fgithub.com\u002FPuzer) | Bug fix for PNPLoss. |\n| [elisim](https:\u002F\u002Fgithub.com\u002Felisim) | Developer improvements to DistributedLossWrapper. |\n| [lucamarini22](https:\u002F\u002Fgithub.com\u002Flucamarini22) | |\n| [marcpaga](https:\u002F\u002Fgithub.com\u002Fmarcpaga) | |\n| [GaetanLepage](https:\u002F\u002Fgithub.com\u002FGaetanLepage) | |\n| [z1w](https:\u002F\u002Fgithub.com\u002Fz1w) | |\n| [thinline72](https:\u002F\u002Fgithub.com\u002Fthinline72) | |\n| [tpanum](https:\u002F\u002Fgithub.com\u002Ftpanum) | |\n| [fralik](https:\u002F\u002Fgithub.com\u002Ffralik) | |\n| [joaqo](https:\u002F\u002Fgithub.com\u002Fjoaqo) | |\n| [JoOkuma](https:\u002F\u002Fgithub.com\u002FJoOkuma) | |\n| [gkouros](https:\u002F\u002Fgithub.com\u002Fgkouros) | |\n| [yutanakamura-tky](https:\u002F\u002Fgithub.com\u002Fyutanakamura-tky) | |\n| [KinglittleQ](https:\u002F\u002Fgithub.com\u002FKinglittleQ) | |\n| [martin0258](https:\u002F\u002Fgithub.com\u002Fmartin0258) | |\n| [michaeldeyzel](https:\u002F\u002Fgithub.com\u002Fmichaeldeyzel) | |\n| [HSinger04](https:\u002F\u002Fgithub.com\u002FHSinger04) | |\n| [rheum](https:\u002F\u002Fgithub.com\u002Frheum) | |\n| [bot66](https:\u002F\u002Fgithub.com\u002Fbot66) | |\n\n\n\n### Facebook AI\nThank you to [Ser-Nam Lim](https:\u002F\u002Fsites.google.com\u002Fsite\u002Fsernam) at [Facebook AI](https:\u002F\u002Fai.facebook.com\u002F), and my research advisor, [Professor Serge Belongie](https:\u002F\u002Fwww.belongielab.org\u002F). 
This project began during my internship at Facebook AI where I received valuable feedback from Ser-Nam, and his team of computer vision and machine learning engineers and research scientists. In particular, thanks to [Ashish Shah](https:\u002F\u002Fwww.linkedin.com\u002Fin\u002Fashish217\u002F) and [Austin Reiter](https:\u002F\u002Fwww.linkedin.com\u002Fin\u002Faustin-reiter-3962aa7\u002F) for reviewing my code during its early stages of development.\n\n### Open-source repos\nThis library contains code that has been adapted and modified from the following great open-source repos:\n- https:\u002F\u002Fgithub.com\u002Fbnu-wangxun\u002FDeep_Metric\n- https:\u002F\u002Fgithub.com\u002Fchaoyuaw\u002Fincubator-mxnet\u002Fblob\u002Fmaster\u002Fexample\u002Fgluon\u002Fembedding_learning\n- https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fdeepcluster\n- https:\u002F\u002Fgithub.com\u002Fgeonm\u002Fproxy-anchor-loss\n- https:\u002F\u002Fgithub.com\u002Fidstcv\u002FSoftTriple\n- https:\u002F\u002Fgithub.com\u002Fkunhe\u002FFastAP-metric-learning\n- https:\u002F\u002Fgithub.com\u002Fronekko\u002Fdeep_metric_learning\n- https:\u002F\u002Fgithub.com\u002Ftjddus9597\u002FProxy-Anchor-CVPR2020\n- http:\u002F\u002Fkaizhao.net\u002Fregularface\n- https:\u002F\u002Fgithub.com\u002Fnii-yamagishilab\u002Fproject-NN-Pytorch-scripts\n\n### Logo\nThanks to [Jeff Musgrave](https:\u002F\u002Fwww.designgenius.ca\u002F) for designing the logo.\n\n## Citing this library\nIf you'd like to cite pytorch-metric-learning in your paper, you can use this bibtex:\n```latex\n@article{Musgrave2020PyTorchML,\n  title={PyTorch Metric Learning},\n  author={Kevin Musgrave and Serge J. 
Belongie and Ser-Nam Lim},\n  journal={ArXiv},\n  year={2020},\n  volume={abs\u002F2008.09164}\n}\n```\n","\u003Ch1>\n\u003Ca href=\"https:\u002F\u002Fgithub.com\u002FKevinMusgrave\u002Fpytorch-metric-learning\">\n\u003Cimg alt=\"PyTorch度量学习\" src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FKevinMusgrave_pytorch-metric-learning_readme_545c7b4a0c17.png\">\n\u003C\u002Fa>\n\u003C\u002Fh1>\n\n\u003Cp align=\"center\">\n \u003Ca href=\"https:\u002F\u002Fpypi.org\u002Fproject\u002Fpytorch-metric-learning\">\n     \u003Cimg alt=\"PyPi版本\" src=\"https:\u002F\u002Fimg.shields.io\u002Fpypi\u002Fv\u002Fpytorch-metric-learning?color=bright-green\">\n \u003C\u002Fa>\n\t\n\t\n \n \u003Ca href=\"https:\u002F\u002Fanaconda.org\u002Fconda-forge\u002Fpytorch-metric-learning\">\n     \u003Cimg alt=\"Anaconda版本\" src=\"https:\u002F\u002Fimg.shields.io\u002Fconda\u002Fv\u002Fconda-forge\u002Fpytorch-metric-learning?color=bright-green\">\n \u003C\u002Fa>\n\u003C\u002Fp>\n\n## 最新消息\n\n**8月17日**: v2.9.0\n- 新增了[SmoothAPLoss](https:\u002F\u002Fkevinmusgrave.github.io\u002Fpytorch-metric-learning\u002Flosses\u002F#smoothaploss)。\n- 改进了SubCenterArcFaceLoss和GenericPairLoss。\n- 感谢[ir2718](https:\u002F\u002Fgithub.com\u002Fir2718)、[lucamarini22](https:\u002F\u002Fgithub.com\u002Flucamarini22)和[marcpaga](https:\u002F\u002Fgithub.com\u002Fmarcpaga)。\n\n**12月11日**: v2.8.0\n- 新增了[Datasets](https:\u002F\u002Fkevinmusgrave.github.io\u002Fpytorch-metric-learning\u002Fdatasets)模块，方便下载常用数据集：\n  - [CUB200](https:\u002F\u002Fkevinmusgrave.github.io\u002Fpytorch-metric-learning\u002Fdatasets\u002F#cub-200-2011)\n  - [Cars196](https:\u002F\u002Fkevinmusgrave.github.io\u002Fpytorch-metric-learning\u002Fdatasets\u002F#cars196)\n  - [INaturalist 2018](https:\u002F\u002Fkevinmusgrave.github.io\u002Fpytorch-metric-learning\u002Fdatasets\u002F#inaturalist2018)\n  - [Stanford Online 
Products](https:\u002F\u002Fkevinmusgrave.github.io\u002Fpytorch-metric-learning\u002Fdatasets\u002F#stanfordonlineproducts)\n- 感谢[ir2718](https:\u002F\u002Fgithub.com\u002Fir2718)。\n\n## 文档\n- [**在此查看文档**](https:\u002F\u002Fkevinmusgrave.github.io\u002Fpytorch-metric-learning\u002F)\n- [**在此查看安装说明**](https:\u002F\u002Fgithub.com\u002FKevinMusgrave\u002Fpytorch-metric-learning#installation)\n- [**在此查看可用的损失函数、挖掘器等**](https:\u002F\u002Fgithub.com\u002FKevinMusgrave\u002Fpytorch-metric-learning\u002Fblob\u002Fmaster\u002FCONTENTS.md) \n\n\n## Google Colab示例\n请参阅[examples文件夹](https:\u002F\u002Fgithub.com\u002FKevinMusgrave\u002Fpytorch-metric-learning\u002Fblob\u002Fmaster\u002Fexamples\u002FREADME.md)，其中包含可下载或在Google Colab上运行的笔记本。\n\n## PyTorch度量学习概述\n该库包含9个模块，每个模块都可以独立地集成到您现有的代码库中，也可以组合使用以实现完整的训练\u002F测试流程。\n\n![high_level_module_overview](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FKevinMusgrave_pytorch-metric-learning_readme_492dfad03aca.png)\n\n\n\n## 损失函数的工作原理\n\n### 在训练循环中使用损失函数和挖掘器\n让我们初始化一个普通的[TripletMarginLoss](https:\u002F\u002Fkevinmusgrave.github.io\u002Fpytorch-metric-learning\u002Flosses\u002F#tripletmarginloss)：\n```python\nfrom pytorch_metric_learning import losses\nloss_func = losses.TripletMarginLoss()\n```\n\n要在训练循环中计算损失，需要将模型计算出的嵌入向量和对应的标签传入。嵌入向量的形状应为(N, embedding_size)，标签的形状应为(N)，其中N是批次大小。\n\n```python\n# 您的训练循环\nfor i, (data, labels) in enumerate(dataloader):\n\toptimizer.zero_grad()\n\tembeddings = model(data)\n\tloss = loss_func(embeddings, labels)\n\tloss.backward()\n\toptimizer.step()\n```\n\nTripletMarginLoss会根据您传入的标签，在批次内计算所有可能的三元组。同标签的嵌入构成锚点-正样本对，不同标签的则构成锚点-负样本对。\n\n有时添加一个挖掘器会有帮助：\n```python\nfrom pytorch_metric_learning import miners, losses\nminer = miners.MultiSimilarityMiner()\nloss_func = losses.TripletMarginLoss()\n\n# 您的训练循环\nfor i, (data, labels) in enumerate(dataloader):\n\toptimizer.zero_grad()\n\tembeddings = model(data)\n\thard_pairs = miner(embeddings, labels)\n\tloss = loss_func(embeddings, labels, 
hard_pairs)\n\tloss.backward()\n\toptimizer.step()\n```\n\n在上述代码中，挖掘器会找到它认为特别困难的正样本和负样本对。请注意，尽管TripletMarginLoss是以三元组为基础的，但也可以传入成对的数据。这是因为该库会在必要时自动将成对数据转换为三元组，或将三元组转换为成对数据。\n\n### 自定义损失函数\n损失函数可以通过[距离度量](https:\u002F\u002Fkevinmusgrave.github.io\u002Fpytorch-metric-learning\u002Fdistances\u002F)、[规约器](https:\u002F\u002Fkevinmusgrave.github.io\u002Fpytorch-metric-learning\u002Freducers\u002F)和[正则化器](https:\u002F\u002Fkevinmusgrave.github.io\u002Fpytorch-metric-learning\u002Fregularizers\u002F)进行定制。下图中，挖掘器找到了批次内困难样本的索引，这些索引用于从距离对象计算出的距离矩阵中提取对应值。对于这张图来说，损失函数是基于成对计算的，因此它会为每一对样本计算损失。此外，还提供了一个正则化器，这样就可以为批次中的每个嵌入计算正则化损失。成对损失和单个元素的损失会被传递给规约器，在这张图中，规约器只保留高值的损失。最后，对高值的成对损失和元素损失分别求平均，并相加得到最终的损失。\n\n![high_level_loss_function_overview](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FKevinMusgrave_pytorch-metric-learning_readme_5a6418242862.png)\n\n下面是一个自定义TripletMarginLoss的例子：\n```python\nfrom pytorch_metric_learning.distances import CosineSimilarity\nfrom pytorch_metric_learning.reducers import ThresholdReducer\nfrom pytorch_metric_learning.regularizers import LpRegularizer\nfrom pytorch_metric_learning import losses\nloss_func = losses.TripletMarginLoss(distance = CosineSimilarity(), \n\t\t\t\t     reducer = ThresholdReducer(high=0.3), \n\t\t\t \t     embedding_regularizer = LpRegularizer())\n```\n\n这个自定义的三元组损失具有以下特性：\n\n - 损失将使用余弦相似度而非欧氏距离来计算。\n - 所有高于0.3的三元组损失都将被丢弃。\n - 嵌入将接受L2正则化。\n\n### 将损失函数用于无监督\u002F自监督学习\n\n为自监督学习提供了一个`SelfSupervisedLoss`包装器：\n\n```python\nfrom pytorch_metric_learning.losses import SelfSupervisedLoss, TripletMarginLoss\nloss_func = SelfSupervisedLoss(TripletMarginLoss())\n\n# 您的训练循环\nfor i, data in enumerate(dataloader):\n\toptimizer.zero_grad()\n\tembeddings = your_model(data)\n\taugmented = your_model(your_augmentation(data))\n\tloss = loss_func(embeddings, augmented)\n\tloss.backward()\n\toptimizer.step()\n```\n\n如果您对[MoCo](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1911.05722.pdf) 风格的自监督感兴趣，请查看[MoCo on 
CIFAR10](https:\u002F\u002Fgithub.com\u002FKevinMusgrave\u002Fpytorch-metric-learning\u002Ftree\u002Fmaster\u002Fexamples#simple-examples)笔记本。它使用CrossBatchMemory来实现动量编码器队列，这意味着您可以使用任何成对损失函数和成对挖掘器从队列中提取困难样本。\n\n## 库的其余部分亮点\n\n- 如果您想以一种便捷的方式训练模型，请查看[训练器](https:\u002F\u002Fkevinmusgrave.github.io\u002Fpytorch-metric-learning\u002Ftrainers\u002F)。\n- 想在数据集上测试模型的准确率吗？试试[测试器](https:\u002F\u002Fkevinmusgrave.github.io\u002Fpytorch-metric-learning\u002Ftesters\u002F)。\n- 要直接计算嵌入空间的准确率，可以使用[AccuracyCalculator](https:\u002F\u002Fkevinmusgrave.github.io\u002Fpytorch-metric-learning\u002Faccuracy_calculation\u002F)。\n\n如果您时间有限，想要一个完整的训练\u002F测试流程，可以查看[示例 Google Colab 笔记本](https:\u002F\u002Fgithub.com\u002FKevinMusgrave\u002Fpytorch-metric-learning\u002Ftree\u002Fmaster\u002Fexamples)。\n\n如需了解更多关于上述内容的信息，请参阅[文档](https:\u002F\u002Fkevinmusgrave.github.io\u002Fpytorch-metric-learning)。\n\n\n## 安装\n\n### 所需的 PyTorch 版本\n- ```pytorch-metric-learning >= v0.9.90``` 需要 ```torch >= 1.6```\n- ```pytorch-metric-learning \u003C v0.9.90``` 没有版本要求，但已在 ```torch >= 1.2``` 上进行过测试\n\n其他依赖：```numpy, scikit-learn, tqdm, torchvision```\n\n### Pip\n```\npip install pytorch-metric-learning\n```\n\n**获取最新开发版本**：\n```\npip install pytorch-metric-learning --pre\n```\n\n**在 Windows 上安装**：\n```\npip install torch===1.6.0 torchvision===0.7.0 -f https:\u002F\u002Fdownload.pytorch.org\u002Fwhl\u002Ftorch_stable.html\npip install pytorch-metric-learning\n```\n\n**安装带有评估和日志记录功能的版本**\n\n（这将安装 faiss-gpu 的非官方 pypi 版本，以及 record-keeper 和 tensorboard）：\n```\npip install pytorch-metric-learning[with-hooks]\n```\n\n**安装带有评估和日志记录功能的 CPU 版本**\n\n（这将安装 faiss-cpu 的非官方 pypi 版本，以及 record-keeper 和 tensorboard）：\n```\npip install pytorch-metric-learning[with-hooks-cpu]\n```\n\n### Conda\n```\nconda install -c conda-forge pytorch-metric-learning\n```\n\n**要使用测试模块，您需要 faiss，也可以通过 conda 安装。请参阅 [faiss 
的安装说明](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Ffaiss\u002Fblob\u002Fmaster\u002FINSTALL.md)。**\n\n\u003C\u002Fdetails>\n\t\n\n\n## 基准测试结果\n请参阅 [powerful-benchmarker](https:\u002F\u002Fgithub.com\u002FKevinMusgrave\u002Fpowerful-benchmarker\u002F) 查看基准测试结果并使用该工具。\n\n\n## 开发\n开发工作在 ```dev``` 分支上进行：\n```\ngit checkout dev\n```\n\n单元测试可以使用默认的 ```unittest``` 库运行：\n```bash\npython -m unittest discover\n```\n\n您可以使用环境变量指定测试的数据类型和设备。例如，要在 CPU 上使用 float32 和 float64 进行测试：\n```bash\nTEST_DTYPES=float32,float64 TEST_DEVICE=cpu python -m unittest discover\n```\n\n如果只想运行单个测试文件而不是整个测试套件，可以指定文件名：\n```bash\npython -m unittest tests\u002Flosses\u002Ftest_angular_loss.py\n```\n\n代码使用 ```black``` 和 ```isort``` 格式化：\n```bash\npip install black isort\n.\u002Fformat_code.sh\n```\n\n\n## 致谢\n\n### 贡献者\n感谢所有提交拉取请求的贡献者！\n\n| 贡献者 | 亮点 |\n| -- | -- |\n|[domenicoMuscill0](https:\u002F\u002Fgithub.com\u002FdomenicoMuscill0)| - [ManifoldLoss](https:\u002F\u002Fkevinmusgrave.github.io\u002Fpytorch-metric-learning\u002Flosses\u002F#manifoldloss) \u003Cbr\u002F> - [P2SGradLoss](https:\u002F\u002Fkevinmusgrave.github.io\u002Fpytorch-metric-learning\u002Flosses\u002F#p2sgradloss) \u003Cbr\u002F> - [HistogramLoss](https:\u002F\u002Fkevinmusgrave.github.io\u002Fpytorch-metric-learning\u002Flosses\u002F#histogramloss) \u003Cbr\u002F> - [DynamicSoftMarginLoss](https:\u002F\u002Fkevinmusgrave.github.io\u002Fpytorch-metric-learning\u002Flosses\u002F#dynamicsoftmarginloss) \u003Cbr\u002F> - [RankedListLoss](https:\u002F\u002Fkevinmusgrave.github.io\u002Fpytorch-metric-learning\u002Flosses\u002F#rankedlistloss) |\n|[mlopezantequera](https:\u002F\u002Fgithub.com\u002Fmlopezantequera) | - 使[测试器](https:\u002F\u002Fkevinmusgrave.github.io\u002Fpytorch-metric-learning\u002Ftesters)能够在任意查询集和参考集的组合上运行 \u003Cbr\u002F> - 使[AccuracyCalculator](https:\u002F\u002Fkevinmusgrave.github.io\u002Fpytorch-metric-learning\u002Faccuracy_calculation\u002F)能够处理任意标签比较 
|\n|[cwkeam](https:\u002F\u002Fgithub.com\u002Fcwkeam) | - [SelfSupervisedLoss](https:\u002F\u002Fkevinmusgrave.github.io\u002Fpytorch-metric-learning\u002Flosses\u002F#selfsupervisedloss) \u003Cbr\u002F> - [VICRegLoss](https:\u002F\u002Fkevinmusgrave.github.io\u002Fpytorch-metric-learning\u002Flosses\u002F#vicregloss) \u003Cbr\u002F> - 向[AccuracyCalculator](https:\u002F\u002Fkevinmusgrave.github.io\u002Fpytorch-metric-learning\u002Faccuracy_calculation\u002F)添加了平均倒数排名准确率 \u003Cbr\u002F> - BaseLossWrapper|\n| [ir2718](https:\u002F\u002Fgithub.com\u002Fir2718) | - [ThresholdConsistentMarginLoss](https:\u002F\u002Fkevinmusgrave.github.io\u002Fpytorch-metric-learning\u002Flosses\u002F#thresholdconsistentmarginloss) \u003Cbr\u002F> - [SmoothAPLoss](https:\u002F\u002Fkevinmusgrave.github.io\u002Fpytorch-metric-learning\u002Flosses\u002F#smoothaploss) \u003Cbr\u002F> - [Datasets](https:\u002F\u002Fkevinmusgrave.github.io\u002Fpytorch-metric-learning\u002Fdatasets)模块 |\n|[marijnl](https:\u002F\u002Fgithub.com\u002Fmarijnl)| - [BatchEasyHardMiner](https:\u002F\u002Fkevinmusgrave.github.io\u002Fpytorch-metric-learning\u002Fminers\u002F#batcheasyhardminer) \u003Cbr\u002F> - [TwoStreamMetricLoss](https:\u002F\u002Fkevinmusgrave.github.io\u002Fpytorch-metric-learning\u002Ftrainers\u002F#twostreammetricloss) \u003Cbr\u002F> - [GlobalTwoStreamEmbeddingSpaceTester](https:\u002F\u002Fkevinmusgrave.github.io\u002Fpytorch-metric-learning\u002Ftesters\u002F#globaltwostreamembeddingspacetester) \u003Cbr\u002F> - [使用trainers.TwoStreamMetricLoss的示例](https:\u002F\u002Fgithub.com\u002FKevinMusgrave\u002Fpytorch-metric-learning\u002Fblob\u002Fmaster\u002Fexamples\u002Fnotebooks\u002FTwoStreamMetricLoss.ipynb) |\n| [chingisooinar](https:\u002F\u002Fgithub.com\u002Fchingisooinar) | [SubCenterArcFaceLoss](https:\u002F\u002Fkevinmusgrave.github.io\u002Fpytorch-metric-learning\u002Flosses\u002F#subcenterarcfaceloss) |\n| [elias-ramzi](https:\u002F\u002Fgithub.com\u002Felias-ramzi) | 
[HierarchicalSampler](https:\u002F\u002Fkevinmusgrave.github.io\u002Fpytorch-metric-learning\u002Fsamplers\u002F#hierarchicalsampler) |\n| [fjsj](https:\u002F\u002Fgithub.com\u002Ffjsj) | [SupConLoss](https:\u002F\u002Fkevinmusgrave.github.io\u002Fpytorch-metric-learning\u002Flosses\u002F#supconloss) |\n| [AlenUbuntu](https:\u002F\u002Fgithub.com\u002FAlenUbuntu) | [CircleLoss](https:\u002F\u002Fkevinmusgrave.github.io\u002Fpytorch-metric-learning\u002Flosses\u002F#circleloss) |\n| [interestingzhuo](https:\u002F\u002Fgithub.com\u002Finterestingzhuo) | [PNPLoss](https:\u002F\u002Fkevinmusgrave.github.io\u002Fpytorch-metric-learning\u002Flosses\u002F#pnploss) |\n| [wconnell](https:\u002F\u002Fgithub.com\u002Fwconnell) | [学习scRNAseq度量嵌入](https:\u002F\u002Fgithub.com\u002FKevinMusgrave\u002Fpytorch-metric-learning\u002Fblob\u002Fmaster\u002Fexamples\u002Fnotebooks\u002FscRNAseq_MetricEmbedding.ipynb) |\n| [mkmenta](https:\u002F\u002Fgithub.com\u002Fmkmenta) | 改进了`get_all_triplets_indices`（修复了`INT_MAX`错误） |\n| [AlexSchuy](https:\u002F\u002Fgithub.com\u002FAlexSchuy) | 优化了```utils.loss_and_miner_utils.get_random_triplet_indices``` |\n| [JohnGiorgi](https:\u002F\u002Fgithub.com\u002FJohnGiorgi) | [utils.distributed](https:\u002F\u002Fkevinmusgrave.github.io\u002Fpytorch-metric-learning\u002Fdistributed)中的```all_gather``` |\n| [Hummer12007](https:\u002F\u002Fgithub.com\u002FHummer12007) | ```utils.key_checker``` |\n| [vltanh](https:\u002F\u002Fgithub.com\u002Fvltanh) | 使```InferenceModel.train_indexer```能够接受数据集 |\n| [btseytlin](https:\u002F\u002Fgithub.com\u002Fbtseytlin) | [InferenceModel](https:\u002F\u002Fkevinmusgrave.github.io\u002Fpytorch-metric-learning\u002Finference_models)中的```get_nearest_neighbors``` |\n| [mlw214](https:\u002F\u002Fgithub.com\u002Fmlw214) | 向[AccuracyCalculator](https:\u002F\u002Fkevinmusgrave.github.io\u002Fpytorch-metric-learning\u002Faccuracy_calculation\u002F)添加了```return_per_class``` |\n| [layumi](https:\u002F\u002Fgithub.com\u002Flayumi) | 
[InstanceLoss](https:\u002F\u002Fkevinmusgrave.github.io\u002Fpytorch-metric-learning\u002Flosses\u002F#instanceloss) |\n| [NoTody](https:\u002F\u002Fgithub.com\u002FNoTody) | 帮助在分布式包装器中添加了`ref_emb`和`ref_labels`。 |\n| [ElisonSherton](https:\u002F\u002Fgithub.com\u002FElisonSherton) | 修复了ArcFaceLoss中的一个边缘情况。 |\n| [stompsjo](https:\u002F\u002Fgithub.com\u002Fstompsjo) | 改进了NTXentLoss的文档。 |\n| [Puzer](https:\u002F\u002Fgithub.com\u002FPuzer) | 修复了PNPLoss中的一个错误。 |\n| [elisim](https:\u002F\u002Fgithub.com\u002Felisim) | 对DistributedLossWrapper进行了开发者层面的改进。 |\n| [lucamarini22](https:\u002F\u002Fgithub.com\u002Flucamarini22) | |\n| [marcpaga](https:\u002F\u002Fgithub.com\u002Fmarcpaga) | |\n| [GaetanLepage](https:\u002F\u002Fgithub.com\u002FGaetanLepage) | |\n| [z1w](https:\u002F\u002Fgithub.com\u002Fz1w) | |\n| [thinline72](https:\u002F\u002Fgithub.com\u002Fthinline72) | |\n| [tpanum](https:\u002F\u002Fgithub.com\u002Ftpanum) | |\n| [fralik](https:\u002F\u002Fgithub.com\u002Ffralik) | |\n| [joaqo](https:\u002F\u002Fgithub.com\u002Fjoaqo) | |\n| [JoOkuma](https:\u002F\u002Fgithub.com\u002FJoOkuma) | |\n| [gkouros](https:\u002F\u002Fgithub.com\u002Fgkouros) | |\n| [yutanakamura-tky](https:\u002F\u002Fgithub.com\u002Fyutanakamura-tky) | |\n| [KinglittleQ](https:\u002F\u002Fgithub.com\u002FKinglittleQ) | |\n| [martin0258](https:\u002F\u002Fgithub.com\u002Fmartin0258) | |\n| [michaeldeyzel](https:\u002F\u002Fgithub.com\u002Fmichaeldeyzel) | |\n| [HSinger04](https:\u002F\u002Fgithub.com\u002FHSinger04) | |\n| [rheum](https:\u002F\u002Fgithub.com\u002Frheum) | |\n| [bot66](https:\u002F\u002Fgithub.com\u002Fbot66) | |\n\n\n\n### Facebook AI\n感谢Facebook AI的[Ser-Nam Lim](https:\u002F\u002Fsites.google.com\u002Fsite\u002Fsernam)，以及我的研究导师[Serge Belongie教授](https:\u002F\u002Fwww.belongielab.org\u002F)。这个项目始于我在Facebook AI的实习期间，当时我从Ser-Nam及其计算机视觉和机器学习工程师、研究科学家团队那里获得了宝贵的反馈。特别要感谢[Ashish Shah](https:\u002F\u002Fwww.linkedin.com\u002Fin\u002Fashish217\u002F)和[Austin 
Reiter](https:\u002F\u002Fwww.linkedin.com\u002Fin\u002Faustin-reiter-3962aa7\u002F)，他们在项目早期阶段对我的代码进行了评审。\n\n### 开源仓库\n本库包含从以下优秀开源仓库改编和修改的代码：\n- https:\u002F\u002Fgithub.com\u002Fbnu-wangxun\u002FDeep_Metric\n- https:\u002F\u002Fgithub.com\u002Fchaoyuaw\u002Fincubator-mxnet\u002Fblob\u002Fmaster\u002Fexample\u002Fgluon\u002Fembedding_learning\n- https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fdeepcluster\n- https:\u002F\u002Fgithub.com\u002Fgeonm\u002Fproxy-anchor-loss\n- https:\u002F\u002Fgithub.com\u002Fidstcv\u002FSoftTriple\n- https:\u002F\u002Fgithub.com\u002Fkunhe\u002FFastAP-metric-learning\n- https:\u002F\u002Fgithub.com\u002Fronekko\u002Fdeep_metric_learning\n- https:\u002F\u002Fgithub.com\u002Ftjddus9597\u002FProxy-Anchor-CVPR2020\n- http:\u002F\u002Fkaizhao.net\u002Fregularface\n- https:\u002F\u002Fgithub.com\u002Fnii-yamagishilab\u002Fproject-NN-Pytorch-scripts\n\n### 标志\n感谢[Jeff Musgrave](https:\u002F\u002Fwww.designgenius.ca\u002F)设计了标志。\n\n## 引用本库\n如果您希望在论文中引用 pytorch-metric-learning，可以使用以下 BibTeX 格式：\n```latex\n@article{Musgrave2020PyTorchML,\n  title={PyTorch Metric Learning},\n  author={Kevin Musgrave and Serge J. 
Belongie and Ser-Nam Lim},\n  journal={ArXiv},\n  year={2020},\n  volume={abs\u002F2008.09164}\n}\n```","# PyTorch Metric Learning 快速上手指南\n\n`pytorch-metric-learning` 是一个功能强大的 PyTorch 库，提供了丰富的损失函数、挖掘器（Miners）、距离度量等模块，旨在简化度量学习（Metric Learning）模型的训练与评估流程。\n\n## 环境准备\n\n在开始之前，请确保您的开发环境满足以下要求：\n\n*   **操作系统**: Linux, macOS, 或 Windows\n*   **Python**: 建议 Python 3.6+\n*   **PyTorch**: \n    *   版本 `v0.9.90` 及以上需要 `torch >= 1.6`\n    *   旧版本已在 `torch >= 1.2` 上测试通过\n*   **其他依赖**: `numpy`, `scikit-learn`, `tqdm`, `torchvision`\n\n> **提示**：如果您需要使用评估和日志记录功能（如 FAISS），安装时可选择附加选项（见下文）。\n\n## 安装步骤\n\n您可以选择通过 `pip` 或 `conda` 进行安装。国内开发者建议使用清华或阿里镜像源以加速下载。\n\n### 方式一：使用 Pip 安装\n\n**标准安装：**\n```bash\npip install pytorch-metric-learning -i https:\u002F\u002Fpypi.tuna.tsinghua.edu.cn\u002Fsimple\n```\n\n**获取最新开发版：**\n```bash\npip install pytorch-metric-learning --pre -i https:\u002F\u002Fpypi.tuna.tsinghua.edu.cn\u002Fsimple\n```\n\n**安装包含评估和日志功能的版本（推荐）：**\n此选项会自动安装 `faiss` (GPU 或 CPU 版)、`record-keeper` 和 `tensorboard`。\n\n*   GPU 版本：\n    ```bash\n    pip install \"pytorch-metric-learning[with-hooks]\" -i https:\u002F\u002Fpypi.tuna.tsinghua.edu.cn\u002Fsimple\n    ```\n*   CPU 版本：\n    ```bash\n    pip install \"pytorch-metric-learning[with-hooks-cpu]\" -i https:\u002F\u002Fpypi.tuna.tsinghua.edu.cn\u002Fsimple\n    ```\n\n**Windows 用户特别注意：**\n如果在 Windows 上遇到 torch 兼容性问题，可能需要先指定版本安装 torch：\n```bash\npip install torch===1.6.0 torchvision===0.7.0 -f https:\u002F\u002Fdownload.pytorch.org\u002Fwhl\u002Ftorch_stable.html\npip install pytorch-metric-learning\n```\n\n### 方式二：使用 Conda 安装\n\n```bash\nconda install -c conda-forge pytorch-metric-learning\n```\n*注意：若需使用测试模块（Testers），需单独安装 `faiss`，可参考 FAISS 官方安装指南通过 conda 安装。*\n\n## 基本使用\n\n该库的核心在于灵活组合 **损失函数 (Losses)** 和 **挖掘器 (Miners)**。以下是最基础的使用流程。\n\n### 1. 
基础用法：直接使用损失函数\n\n最简单的场景是初始化一个损失函数（例如三元组损失 `TripletMarginLoss`），并在训练循环中传入模型生成的嵌入向量（embeddings）和标签（labels）。\n\n```python\nfrom pytorch_metric_learning import losses\n\n# 初始化损失函数\nloss_func = losses.TripletMarginLoss()\n\n# 训练循环示例\nfor i, (data, labels) in enumerate(dataloader):\n    optimizer.zero_grad()\n    \n    # 获取嵌入向量 (形状: N x embedding_size)\n    embeddings = model(data)\n    \n    # 计算损失 (labels 形状: N)\n    loss = loss_func(embeddings, labels)\n    \n    loss.backward()\n    optimizer.step()\n```\n*原理：`TripletMarginLoss` 会根据标签自动在批次内构建所有可能的三元组（Anchor-Positive-Negative）。*\n\n### 2. 进阶用法：结合挖掘器 (Miner)\n\n为了提升训练效果，通常只关注那些“难以区分”的样本对。可以使用 `Miner` 来筛选困难样本，并将其传递给损失函数。\n\n```python\nfrom pytorch_metric_learning import miners, losses\n\n# 初始化挖掘器和损失函数\nminer = miners.MultiSimilarityMiner()\nloss_func = losses.TripletMarginLoss()\n\n# 训练循环示例\nfor i, (data, labels) in enumerate(dataloader):\n    optimizer.zero_grad()\n    embeddings = model(data)\n    \n    # 挖掘困难样本对\n    hard_pairs = miner(embeddings, labels)\n    \n    # 将挖掘出的样本对传入损失函数\n    # 库会自动将 pairs 转换为 triplets 进行计算\n    loss = loss_func(embeddings, labels, hard_pairs)\n    \n    loss.backward()\n    optimizer.step()\n```\n\n### 3. 自定义损失函数配置\n\n您可以根据需求自定义距离计算方式、损失缩减策略以及正则化项。\n\n```python\nfrom pytorch_metric_learning.distances import CosineSimilarity\nfrom pytorch_metric_learning.reducers import ThresholdReducer\nfrom pytorch_metric_learning.regularizers import LpRegularizer\nfrom pytorch_metric_learning import losses\n\nloss_func = losses.TripletMarginLoss(\n    distance=CosineSimilarity(),        # 使用余弦相似度代替欧氏距离\n    reducer=ThresholdReducer(high=0.3), # 丢弃大于 0.3 的损失值\n    embedding_regularizer=LpRegularizer() # 添加 L2 正则化\n)\n```\n\n### 4. 
自监督学习支持\n\n库提供了 `SelfSupervisedLoss` 包装器，用于自监督学习任务（如对比学习）。\n\n```python\nfrom pytorch_metric_learning.losses import SelfSupervisedLoss, TripletMarginLoss\n\n# 包装基础损失函数\nloss_func = SelfSupervisedLoss(TripletMarginLoss())\n\nfor i, data in enumerate(dataloader):\n    optimizer.zero_grad()\n    \n    # 原始数据嵌入\n    embeddings = your_model(data)\n    # 增强数据嵌入\n    augmented = your_model(your_augmentation(data))\n    \n    # 计算自监督损失\n    loss = loss_func(embeddings, augmented)\n    \n    loss.backward()\n    optimizer.step()\n```\n\n更多高级功能（如内置训练器 `Trainers`、测试器 `Testers` 及精度计算器 `AccuracyCalculator`）请参考官方文档或 Google Colab 示例笔记本。","某电商初创公司的算法团队正在构建一个“以图搜图”功能，旨在让用户上传商品照片即可快速找到库中视觉相似的同款或替代品。\n\n### 没有 pytorch-metric-learning 时\n- **损失函数实现繁琐**：工程师需手动编写复杂的 Triplet Loss 代码，不仅要处理锚点、正负样本对的排列组合，还要小心避免梯度计算错误，开发周期长达数周。\n- **难例挖掘效率低下**：缺乏现成的挖掘策略（Miner），模型容易收敛于简单样本，导致在面对外观极其相似的竞品时区分度不足，检索准确率卡在瓶颈。\n- **实验迭代成本高昂**：每当想尝试新的损失函数（如 ArcFace）或调整挖掘策略时，都需要重构大量训练循环代码，难以快速验证不同方案的效果。\n- **数据预处理重复造轮子**：团队需自行编写脚本下载和整理 CUB200 等标准基准数据集，用于验证模型基础性能，浪费了宝贵的研发时间。\n\n### 使用 pytorch-metric-learning 后\n- **模块化极速集成**：只需几行代码即可调用内置的 `TripletMarginLoss` 或 `SubCenterArcFaceLoss`，将原本数周的损失函数开发工作缩短至几小时，且无需担心底层数学实现错误。\n- **智能难例挖掘提升精度**：直接接入 `MultiSimilarityMiner` 等模块，自动在训练批次中筛选出高难度的正负样本对，显著提升了模型对细微视觉差异的敏感度，检索命中率大幅提升。\n- **灵活配置加速迭代**：得益于其高度解耦的设计，团队可以像搭积木一样随意组合不同的损失函数与挖掘器，一天内即可完成多种架构方案的对比实验。\n- **内置数据集一键加载**：利用新增的 Datasets 模块，一键下载并加载 Stanford Online Products 等行业标准数据集，快速建立了可靠的模型评估基准。\n\npytorch-metric-learning 通过模块化设计将深度度量学习的门槛降至最低，让团队能从繁琐的代码工程中解脱，专注于核心业务效果的优化。","https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FKevinMusgrave_pytorch-metric-learning_7622b915.png","KevinMusgrave","Kevin Musgrave","https:\u002F\u002Foss.gittoolsai.com\u002Favatars\u002FKevinMusgrave_abcd34d9.jpg","Research Engineer at Fujitsu Research.\r\nCo-creator of @settadev.\r\nCreator of PyTorch Metric Learning.\r\nML PhD from Cornell.","Fujitsu 
Research",null,"KevMusgrave","KevinMusgrave.com","https:\u002F\u002Fgithub.com\u002FKevinMusgrave",[85,89],{"name":86,"color":87,"percentage":88},"Python","#3572A5",99.9,{"name":90,"color":91,"percentage":92},"Shell","#89e051",0.1,6311,661,"2026-04-05T11:10:06","MIT",1,"Linux, macOS, Windows","未说明（依赖 PyTorch 环境，可选配 faiss-gpu 以加速评估）","未说明",{"notes":102,"python":100,"dependencies":103},"核心库仅要求 torch>=1.6。若需使用评估和日志功能（with-hooks），需安装非官方的 faiss-gpu 或 faiss-cpu、record-keeper 和 tensorboard。测试模块需要单独安装 faiss。Windows 用户安装时可能需要指定特定版本的 torch 和 torchvision。",[104,105,106,107,108,109],"torch>=1.6","numpy","scikit-learn","tqdm","torchvision","faiss (可选，用于测试模块)",[13,54,14],[112,113,114,115,116,117,118,119,120,121],"metric-learning","deep-learning","computer-vision","machine-learning","pytorch","deep-metric-learning","image-retrieval","self-supervised-learning","contrastive-learning","embeddings","2026-03-27T02:49:30.150509","2026-04-06T07:13:05.443944",[125,130,135,140,145,150],{"id":126,"question_zh":127,"answer_zh":128,"source_url":129},17917,"NT-Xent Loss 是如何计算的？分母中的负样本对是如何确定的？","NT-Xent Loss 的计算中，每个正样本对（positive pair）的分母由与该正样本对具有相同“锚点”（anchor）的所有负样本对组成。具体来说，如果标签为 [a, b, c, d] 且对应的类别标签为 [0, 0, 2, 3]，那么正样本对是 [a, b]。对于该正样本对，其负样本对形式为 [a, _]，即 [a, c] 和 [a, d]。同样，[b, a] 也会作为一个独立的正样本对计算损失，其负样本对为 [b, c] 和 [b, d]。这意味着分母中的项数取决于与当前正样本对共享同一锚点的负样本对数量。","https:\u002F\u002Fgithub.com\u002FKevinMusgrave\u002Fpytorch-metric-learning\u002Fissues\u002F6",{"id":131,"question_zh":132,"answer_zh":133,"source_url":134},17918,"在 Conda 安装时遇到 CUDA 驱动不兼容的 UnsatisfiableError 错误怎么办？","如果在 Conda 安装过程中遇到与 CUDA 驱动版本不兼容的错误（UnsatisfiableError），建议不要在 Conda 中直接安装包，而是先在 Conda 环境中安装 pip，然后使用 pip 进行安装。具体命令如下：\n1. conda install pip\n2. 
pip install pytorch-metric-learning\n此外，也可以尝试从 conda-forge 频道安装相关依赖包来解决兼容性问题。","https:\u002F\u002Fgithub.com\u002FKevinMusgrave\u002Fpytorch-metric-learning\u002Fissues\u002F55",{"id":136,"question_zh":137,"answer_zh":138,"source_url":139},17919,"如何在类似 CPC 的场景中使用 NTXentLoss（例如：1 个正样本和 N-1 个负样本）？","在类似 `[问题，正样本段落，负样本段落 1, 负样本 2, ...]` 的场景中，如果你不希望负样本之间相互排斥（即只关心锚点与负样本的距离），可以通过手动创建 `indices_tuple` 来实现。默认情况下，NTXentLoss 会将所有同标签的样本视为互为正样本，并计算所有可能的正负对组合。若要自定义行为（如仅计算特定锚点的正负对），需显式传入 `indices_tuple` 参数来指定哪些样本作为锚点、正样本和负样本。","https:\u002F\u002Fgithub.com\u002FKevinMusgrave\u002Fpytorch-metric-learning\u002Fissues\u002F179",{"id":141,"question_zh":142,"answer_zh":143,"source_url":144},17920,"如何在使用 testers 和 logging_presets 时配置自定义的 collate_fn（例如用于 RNN 填充）？","如果你使用的是 `logging_presets.HookContainer.end_of_epoch_hook` 并且希望测试器（tester）使用自定义的 `collate_fn`（例如用于处理 RNN 序列填充），你需要在调用 `end_of_epoch_hook` 时显式传入 `test_collate_fn` 参数。示例代码如下：\n\nimport pytorch_metric_learning.utils.logging_presets as logging_presets\nrecord_keeper, _, _ = logging_presets.get_record_keeper(\"example_logs\", \"example_tensorboard\")\nhooks = logging_presets.get_hook_container(record_keeper)\nend_of_epoch_hook = hooks.end_of_epoch_hook(\n    tester,\n    dataset_dict,\n    model_folder,\n    test_interval=5,\n    patience=None,\n    test_collate_fn=collate_fn  # 在此处传入自定义的 collate_fn\n)","https:\u002F\u002Fgithub.com\u002FKevinMusgrave\u002Fpytorch-metric-learning\u002Fissues\u002F289",{"id":146,"question_zh":147,"answer_zh":148,"source_url":149},17921,"如何在 Siamese 网络架构中使用 ContrastiveLoss？","虽然提供的 Issue 内容被截断，但根据库的通用用法，对于返回两个嵌入向量（embeddings1, embeddings2）的 Siamese 网络，通常需要将它们拼接或分别处理后传入损失函数。在 pytorch-metric-learning 中，ContrastiveLoss 通常接受一个单一的嵌入张量和对应的标签，或者通过 `indices_tuple` 指定正负对。如果你的模型输出是两个独立的张量，你可能需要在计算损失前将它们沿批次维度拼接（concatenate），并构造相应的标签（例如前一半为锚点，后一半为正\u002F负样本），或者使用库提供的配对逻辑来生成 `indices_tuple`。具体实现需参考官方文档中关于 ContrastiveLoss 
的输入格式说明。","https:\u002F\u002Fgithub.com\u002FKevinMusgrave\u002Fpytorch-metric-learning\u002Fissues\u002F549",{"id":151,"question_zh":152,"answer_zh":153,"source_url":129},17922,"NT-Xent Loss 中，如果每个图像的阳性对数量不同，会影响损失计算吗？","不会影响损失的正确性，但会影响计算项的数量。NT-Xent Loss 的设计允许每个样本拥有不同数量的正样本对。在这种情况下，每个正样本对的分母项数会根据与其共享同一“锚点”的负样本对数量动态变化。虽然这可能导致需要计算的损失项总数增加（例如，如果每个图像有 2 个正样本对而不是 1 个，计算量大约增加 2 倍），但库内部通过向量化操作高效处理了这种情况，且分母项是可以复用的，因此逻辑上是健全且高效的。",[155,160,165,170,175,180,185,190,195,200,205,210,215,220,225,230,235,240,245,250],{"id":156,"version":157,"summary_zh":158,"released_at":159},108205,"v2.9.0","## 功能特性\n\n- 新增了 [SmoothAPLoss](https:\u002F\u002Fkevinmusgrave.github.io\u002Fpytorch-metric-learning\u002Flosses\u002F#smoothaploss)。感谢 @ir2718！\n- 改进了 SubCenterArcFaceLoss 和 GenericPairLoss。感谢 @lucamarini22、@marcpaga！","2025-08-17T17:07:40",{"id":161,"version":162,"summary_zh":163,"released_at":164},108206,"v2.8.1","修复了一些模块导入问题。","2024-12-11T19:23:54",{"id":166,"version":167,"summary_zh":168,"released_at":169},108207,"v2.8.0","## 功能特性\n\n- 新增了 [Datasets](https:\u002F\u002Fkevinmusgrave.github.io\u002Fpytorch-metric-learning\u002Fdatasets) 模块，方便下载常用数据集：\n  - [CUB200](https:\u002F\u002Fkevinmusgrave.github.io\u002Fpytorch-metric-learning\u002Fdatasets\u002F#cub-200-2011)\n  - [Cars196](https:\u002F\u002Fkevinmusgrave.github.io\u002Fpytorch-metric-learning\u002Fdatasets\u002F#cars196)\n  - [INaturalist 2018](https:\u002F\u002Fkevinmusgrave.github.io\u002Fpytorch-metric-learning\u002Fdatasets\u002F#inaturalist2018)\n  - [斯坦福在线产品数据集](https:\u002F\u002Fkevinmusgrave.github.io\u002Fpytorch-metric-learning\u002Fdatasets\u002F#stanfordonlineproducts)\n\n感谢 @ir2718！","2024-12-11T16:57:29",{"id":171,"version":172,"summary_zh":173,"released_at":174},108208,"v2.7.0","## 功能特性\n\n- 新增了 [ThresholdConsistentMarginLoss](https:\u002F\u002Fkevinmusgrave.github.io\u002Fpytorch-metric-learning\u002Flosses\u002F#thresholdconsistentmarginloss)。\n\n感谢 
@ir2718！","2024-11-02T22:08:19",{"id":176,"version":177,"summary_zh":178,"released_at":179},108209,"v2.6.0","### 对 `DistributedLossWrapper` 的改进 + 小规模破坏性变更\n\n- 将 `DistributedLossWrapper.forward` 方法中的 `emb` 参数改为 `embeddings`，以与库中其他部分保持一致。\n- 在非分布式环境下使用 `DistributedLossWrapper` 时，添加了警告并提前返回。\n- 感谢 @elisim！","2024-07-24T12:39:36",{"id":181,"version":182,"summary_zh":183,"released_at":184},108210,"v2.5.0","## 改进\n\n- [在使用 TripletMarginMiner 时允许扩大内存和批量大小](https:\u002F\u002Fgithub.com\u002FKevinMusgrave\u002Fpytorch-metric-learning\u002Fissues\u002F688)\n- 拉取请求：https:\u002F\u002Fgithub.com\u002FKevinMusgrave\u002Fpytorch-metric-learning\u002Fpull\u002F689\n\n感谢 @mkmenta！","2024-04-01T08:16:02",{"id":186,"version":187,"summary_zh":188,"released_at":189},108211,"v2.4.1","此版本与 v2.4.0 完全相同，但新增了 v2.4.0 中缺失的 LICENSE 文件。","2023-12-16T19:24:49",{"id":191,"version":192,"summary_zh":193,"released_at":194},108212,"v2.4.0","## 功能特性\n\n- 新增了 [DynamicSoftMarginLoss](https:\u002F\u002Fkevinmusgrave.github.io\u002Fpytorch-metric-learning\u002Flosses\u002F#dynamicsoftmarginloss)。详情请参阅 PR #659。感谢 @domenicoMuscill0！\n- 新增了 [RankedListLoss](https:\u002F\u002Fkevinmusgrave.github.io\u002Fpytorch-metric-learning\u002Flosses\u002F#rankedlistloss)。详情请参阅 PR #659。感谢 @domenicoMuscill0！\n\n## 错误修复\n- 修复了当一个批次样本没有对应的正样本时，PNPLoss 会返回 NaN 的问题。详情请参阅 PR #660。感谢 @Puzer 和 @interestingzhuo！\n\n## 测试\n- 修复了 HistogramLoss 的测试，使其能够兼容 PyTorch 2.1。感谢 @GaetanLepage！","2023-12-16T04:55:34",{"id":196,"version":197,"summary_zh":198,"released_at":199},108213,"v2.3.0","## 功能特性\n\n- 新增了 [HistogramLoss](https:\u002F\u002Fkevinmusgrave.github.io\u002Fpytorch-metric-learning\u002Flosses\u002F#histogramloss)。详情请参阅拉取请求 #651。感谢 @domenicoMuscill0！","2023-07-25T15:01:13",{"id":201,"version":202,"summary_zh":203,"released_at":204},108214,"v2.2.0","## 新特性\n\n- 新增了 [ManifoldLoss](https:\u002F\u002Fkevinmusgrave.github.io\u002Fpytorch-metric-learning\u002Flosses\u002F#manifoldloss)。详情请参阅 pull request #635。感谢 @domenicoMuscill0！\n- 
新增了 [P2SGradLoss](https:\u002F\u002Fkevinmusgrave.github.io\u002Fpytorch-metric-learning\u002Flosses\u002F#p2sgradloss)。详情请参阅 pull request #635。感谢 @domenicoMuscill0！\n- 为 [SelfSupervisedLoss](https:\u002F\u002Fkevinmusgrave.github.io\u002Fpytorch-metric-learning\u002Flosses\u002F#selfsupervisedloss) 添加了 `symmetric` 标志。当 `symmetric=True` 时，`embeddings` 和 `ref_emb` 中的嵌入都会被用作锚点；当 `symmetric=False` 时，则仅使用 `embeddings` 中的嵌入作为锚点。此前的行为等价于 `symmetric=False`。现在默认值已改为 `symmetric=True`，因为这通常是自监督学习相关论文（例如 SimCLR）中采用的做法。\n","2023-06-18T18:51:37",{"id":206,"version":207,"summary_zh":208,"released_at":209},108215,"v2.1.2","## Bug Fixes\r\n\r\n- Fixed bug where `set_stats` was not being called in `TripletMarginMiner` (#628)\r\n- Made `HierarchicalSampler` extend `torch.utils.data.Sampler` instead of `torch.utils.data.BatchSampler` (#613) \r\n- Made samplers documentation clearer (#615). Thanks @rheum !","2023-05-27T21:34:47",{"id":211,"version":212,"summary_zh":213,"released_at":214},108216,"v2.1.1","## Bug Fixes\r\n\r\n- Fixes bug where `BaseDistance.initial_avg_query_norm` was not actually being set (#620)","2023-05-03T11:58:37",{"id":216,"version":217,"summary_zh":218,"released_at":219},108217,"v2.1.0","## Features\r\n\r\nNew loss function: [PNPLoss](https:\u002F\u002Fkevinmusgrave.github.io\u002Fpytorch-metric-learning\u002Flosses\u002F#pnploss). Thanks @interestingzhuo!","2023-04-05T23:21:15",{"id":221,"version":222,"summary_zh":223,"released_at":224},108218,"v2.0.1","### Bug Fixes\r\n\r\n- Fixed #591. 
Thanks @HSinger04!","2023-02-21T17:34:17",{"id":226,"version":227,"summary_zh":228,"released_at":229},108219,"v2.0.0","# New features\r\n\r\n## SelfSupervisedLoss \r\nYou don't have to create labels for self-supervised learning anymore:\r\n```python\r\nfrom pytorch_metric_learning.losses import SelfSupervisedLoss, TripletMarginLoss\r\nloss_func = SelfSupervisedLoss(TripletMarginLoss())\r\nembeddings = model(data)\r\naugmented = model(augmented_data)\r\nloss = loss_func(embeddings, augmented)\r\n```\r\n\r\nThanks @cwkeam!\r\n\r\n# API changes\r\n\r\n## AccuracyCalculator.get_accuracy\r\nThe order and naming of arguments have changed.\r\n\r\n#### Before:\r\n```python\r\nget_accuracy(\r\n    query, \r\n    reference, \r\n    query_labels, \r\n    reference_labels, \r\n    embeddings_come_from_same_source=False\r\n)\r\n```\r\n\r\n#### Now:\r\n```python\r\nget_accuracy(\r\n    query, \r\n    query_labels,\r\n    reference=None,\r\n    reference_labels=None,\r\n    ref_includes_query=False\r\n)\r\n```\r\n\r\nThe benefits of this change are: \r\n- if `query is reference`, then you only need to pass in `query, query_labels`\r\n- `ref_includes_query` is shorter and clearer in meaning than `embeddings_come_from_same_source`\r\n\r\nSome example usage of the new format:\r\n\r\n```python\r\n# Accuracy of a query set, where the query set is also the reference set:\r\nget_accuracy(query, query_labels)\r\n\r\n# Accuracy of a query set with a separate reference set:\r\nget_accuracy(query, query_labels, ref, ref_labels)\r\n\r\n# Accuracy of a query set with a reference set that includes the query set:\r\nget_accuracy(query, query_labels, ref, ref_labels, ref_includes_query=True)\r\n```\r\n\r\n## `BaseMiner` instead of `BaseTupleMiner`\r\n\r\nMiners must extend `BaseMiner` because `BaseTupleMiner` no longer exists.\r\n\r\n\r\n## CrossBatchMemory's `enqueue_idx` is now `enqueue_mask`\r\n\r\nBefore, `enqueue_idx` specified the indices of `embeddings` that should be added to the memory bank.\r\n\r\nNow, 
`enqueue_mask[i]` should be `True` if `embeddings[i]` should be added to the memory bank. \r\n\r\nThe benefit of this change is that it fixed an issue in distributed training.\r\n\r\nHere's an example of the new usage:\r\n\r\n```python\r\n# enqueue the second half of a batch\r\nenqueue_mask = torch.zeros(batch_size).bool()\r\nenqueue_mask[batch_size \u002F\u002F 2:] = True\r\n```\r\n\r\n## VICRegLoss requires keyword argument\r\n\r\nBefore:\r\n```python\r\nloss_fn = VICRegLoss()\r\nloss_fn(emb, ref_emb)\r\n```\r\n\r\nNow:\r\n```python\r\nloss_fn = VICRegLoss()\r\nloss_fn(emb, ref_emb=ref_emb)\r\n```\r\n\r\nThe reason is that VICRegLoss now uses the `forward` method of `BaseMetricLossFunction`, to allow for possible generalizations in the future without causing more breaking changes.\r\n\r\n## BaseTrainer `mining_funcs` and `dataset` have swapped order\r\nThis is to allow `mining_funcs` to be optional.\r\n\r\nBefore, if you didn't want to use miners:\r\n```python\r\nMetricLossOnly(\r\n    models,\r\n    optimizers,\r\n    batch_size,\r\n    loss_funcs,\r\n    mining_funcs = {},\r\n    dataset = dataset,\r\n)\r\n```\r\n\r\nNow:\r\n```python\r\nMetricLossOnly(\r\n    models,\r\n    optimizers,\r\n    batch_size,\r\n    loss_funcs,\r\n    dataset,\r\n)\r\n```\r\n\r\n\r\n# Deletions\r\n\r\nThe following classes\u002Ffunctions were removed:\r\n\r\n- `losses.CentroidTripletLoss` (it contained a bug that I don't have time to figure out)\r\n- `miners.BaseTupleMiner` (use `miners.BaseMiner` instead)\r\n- `miners.BaseSubsetBatchMiner` (rarely used)\r\n- `miners.MaximumLossMiner` (rarely used)\r\n- `trainers.UnsupervisedEmbeddingsUsingAugmentations` (rarely used)\r\n- `utils.common_functions.Identity` (use `torch.nn.Identity` instead)\r\n\r\n\r\n# Other minor changes\r\n\r\n- VICRegLoss should now work with DistributedLossWrapper (https:\u002F\u002Fgithub.com\u002FKevinMusgrave\u002Fpytorch-metric-learning\u002Fissues\u002F535)\r\n- Dynamic recordable attribute names were removed 
(https:\u002F\u002Fgithub.com\u002FKevinMusgrave\u002Fpytorch-metric-learning\u002Fissues\u002F436)\r\n- AccuracyCalculator now returns NaN instead of 0 when none of the query labels appear in the reference set (https:\u002F\u002Fgithub.com\u002FKevinMusgrave\u002Fpytorch-metric-learning\u002Fissues\u002F397)","2023-01-29T23:57:25",{"id":231,"version":232,"summary_zh":233,"released_at":234},108220,"v1.7.3","## Bug fixes and minor improvements\r\n\r\n- Fixed #472, in which `enqueue_idx` for CrossBatchMemory could not be passed into DistributedLossWrapper\r\n- Converted CrossBatchMemory's `embedding_memory` and `label_memory` to buffers, so they can be saved and loaded as state dicts and transferred to devices use `.to(device)`.","2023-01-29T00:48:53",{"id":236,"version":237,"summary_zh":238,"released_at":239},108221,"v1.7.2","## Minor improvement\r\n\r\nResolved https:\u002F\u002Fgithub.com\u002FKevinMusgrave\u002Fpytorch-metric-learning\u002Fissues\u002F565","2023-01-21T05:30:05",{"id":241,"version":242,"summary_zh":243,"released_at":244},108222,"v1.7.1","## Features and bug fixes\r\n\r\n- Added SumReducer\r\n- Fixed bug where labels were always required for DistributedLossWrapper\r\n- Removed torchvision as a dependency","2023-01-17T01:25:22",{"id":246,"version":247,"summary_zh":248,"released_at":249},108223,"v1.7.0","## Bug fixes\r\n\r\nFixes an edge case in ArcFaceLoss. 
Thanks @ElisonSherton!\r\n\r\nRelevant links:\r\n\r\n- https:\u002F\u002Fgithub.com\u002FKevinMusgrave\u002Fpytorch-metric-learning\u002Fissues\u002F537\r\n- https:\u002F\u002Fgithub.com\u002FKevinMusgrave\u002Fpytorch-metric-learning\u002Fpull\u002F539\r\n- https:\u002F\u002Fgithub.com\u002Fdeepinsight\u002Finsightface\u002Fissues\u002F2126\r\n- https:\u002F\u002Fgithub.com\u002Fronghuaiyang\u002Farcface-pytorch\u002Fissues\u002F48\r\n\r\n","2023-01-16T20:03:06",{"id":251,"version":252,"summary_zh":253,"released_at":254},108224,"v1.6.3","## Bug Fixes\r\n\r\n- Fixed bug where `DistributedMinerWrapper` would crash when `world_size == 1` (#542)","2022-11-01T23:07:04"]