[{"data":1,"prerenderedAt":-1},["ShallowReactive",2],{"similar-Lightning-AI--torchmetrics":3,"tool-Lightning-AI--torchmetrics":61},[4,18,26,36,44,53],{"id":5,"name":6,"github_repo":7,"description_zh":8,"stars":9,"difficulty_score":10,"last_commit_at":11,"category_tags":12,"status":17},4358,"openclaw","openclaw\u002Fopenclaw","OpenClaw 是一款专为个人打造的本地化 AI 助手，旨在让你在自己的设备上拥有完全可控的智能伙伴。它打破了传统 AI 助手局限于特定网页或应用的束缚，能够直接接入你日常使用的各类通讯渠道，包括微信、WhatsApp、Telegram、Discord、iMessage 等数十种平台。无论你在哪个聊天软件中发送消息，OpenClaw 都能即时响应，甚至支持在 macOS、iOS 和 Android 设备上进行语音交互，并提供实时的画布渲染功能供你操控。\n\n这款工具主要解决了用户对数据隐私、响应速度以及“始终在线”体验的需求。通过将 AI 部署在本地，用户无需依赖云端服务即可享受快速、私密的智能辅助，真正实现了“你的数据，你做主”。其独特的技术亮点在于强大的网关架构，将控制平面与核心助手分离，确保跨平台通信的流畅性与扩展性。\n\nOpenClaw 非常适合希望构建个性化工作流的技术爱好者、开发者，以及注重隐私保护且不愿被单一生态绑定的普通用户。只要具备基础的终端操作能力（支持 macOS、Linux 及 Windows WSL2），即可通过简单的命令行引导完成部署。如果你渴望拥有一个懂你",349277,3,"2026-04-06T06:32:30",[13,14,15,16],"Agent","开发框架","图像","数据工具","ready",{"id":19,"name":20,"github_repo":21,"description_zh":22,"stars":23,"difficulty_score":10,"last_commit_at":24,"category_tags":25,"status":17},3808,"stable-diffusion-webui","AUTOMATIC1111\u002Fstable-diffusion-webui","stable-diffusion-webui 是一个基于 Gradio 构建的网页版操作界面，旨在让用户能够轻松地在本地运行和使用强大的 Stable Diffusion 图像生成模型。它解决了原始模型依赖命令行、操作门槛高且功能分散的痛点，将复杂的 AI 绘图流程整合进一个直观易用的图形化平台。\n\n无论是希望快速上手的普通创作者、需要精细控制画面细节的设计师，还是想要深入探索模型潜力的开发者与研究人员，都能从中获益。其核心亮点在于极高的功能丰富度：不仅支持文生图、图生图、局部重绘（Inpainting）和外绘（Outpainting）等基础模式，还独创了注意力机制调整、提示词矩阵、负向提示词以及“高清修复”等高级功能。此外，它内置了 GFPGAN 和 CodeFormer 等人脸修复工具，支持多种神经网络放大算法，并允许用户通过插件系统无限扩展能力。即使是显存有限的设备，stable-diffusion-webui 也提供了相应的优化选项，让高质量的 AI 艺术创作变得触手可及。",162132,"2026-04-05T11:01:52",[14,15,13],{"id":27,"name":28,"github_repo":29,"description_zh":30,"stars":31,"difficulty_score":32,"last_commit_at":33,"category_tags":34,"status":17},1381,"everything-claude-code","affaan-m\u002Feverything-claude-code","everything-claude-code 是一套专为 AI 编程助手（如 Claude Code、Codex、Cursor 等）打造的高性能优化系统。它不仅仅是一组配置文件，而是一个经过长期实战打磨的完整框架，旨在解决 AI 
代理在实际开发中面临的效率低下、记忆丢失、安全隐患及缺乏持续学习能力等核心痛点。\n\n通过引入技能模块化、直觉增强、记忆持久化机制以及内置的安全扫描功能，everything-claude-code 能显著提升 AI 在复杂任务中的表现，帮助开发者构建更稳定、更智能的生产级 AI 代理。其独特的“研究优先”开发理念和针对 Token 消耗的优化策略，使得模型响应更快、成本更低，同时有效防御潜在的攻击向量。\n\n这套工具特别适合软件开发者、AI 研究人员以及希望深度定制 AI 工作流的技术团队使用。无论您是在构建大型代码库，还是需要 AI 协助进行安全审计与自动化测试，everything-claude-code 都能提供强大的底层支持。作为一个曾荣获 Anthropic 黑客大奖的开源项目，它融合了多语言支持与丰富的实战钩子（hooks），让 AI 真正成长为懂上",150037,2,"2026-04-10T23:33:47",[14,13,35],"语言模型",{"id":37,"name":38,"github_repo":39,"description_zh":40,"stars":41,"difficulty_score":32,"last_commit_at":42,"category_tags":43,"status":17},2271,"ComfyUI","Comfy-Org\u002FComfyUI","ComfyUI 是一款功能强大且高度模块化的视觉 AI 引擎，专为设计和执行复杂的 Stable Diffusion 图像生成流程而打造。它摒弃了传统的代码编写模式，采用直观的节点式流程图界面，让用户通过连接不同的功能模块即可构建个性化的生成管线。\n\n这一设计巧妙解决了高级 AI 绘图工作流配置复杂、灵活性不足的痛点。用户无需具备编程背景，也能自由组合模型、调整参数并实时预览效果，轻松实现从基础文生图到多步骤高清修复等各类复杂任务。ComfyUI 拥有极佳的兼容性，不仅支持 Windows、macOS 和 Linux 全平台，还广泛适配 NVIDIA、AMD、Intel 及苹果 Silicon 等多种硬件架构，并率先支持 SDXL、Flux、SD3 等前沿模型。\n\n无论是希望深入探索算法潜力的研究人员和开发者，还是追求极致创作自由度的设计师与资深 AI 绘画爱好者，ComfyUI 都能提供强大的支持。其独特的模块化架构允许社区不断扩展新功能，使其成为当前最灵活、生态最丰富的开源扩散模型工具之一，帮助用户将创意高效转化为现实。",108322,"2026-04-10T11:39:34",[14,15,13],{"id":45,"name":46,"github_repo":47,"description_zh":48,"stars":49,"difficulty_score":32,"last_commit_at":50,"category_tags":51,"status":17},6121,"gemini-cli","google-gemini\u002Fgemini-cli","gemini-cli 是一款由谷歌推出的开源 AI 命令行工具，它将强大的 Gemini 大模型能力直接集成到用户的终端环境中。对于习惯在命令行工作的开发者而言，它提供了一条从输入提示词到获取模型响应的最短路径，无需切换窗口即可享受智能辅助。\n\n这款工具主要解决了开发过程中频繁上下文切换的痛点，让用户能在熟悉的终端界面内直接完成代码理解、生成、调试以及自动化运维任务。无论是查询大型代码库、根据草图生成应用，还是执行复杂的 Git 操作，gemini-cli 都能通过自然语言指令高效处理。\n\n它特别适合广大软件工程师、DevOps 人员及技术研究人员使用。其核心亮点包括支持高达 100 万 token 的超长上下文窗口，具备出色的逻辑推理能力；内置 Google 搜索、文件操作及 Shell 命令执行等实用工具；更独特的是，它支持 MCP（模型上下文协议），允许用户灵活扩展自定义集成，连接如图像生成等外部能力。此外，个人谷歌账号即可享受免费的额度支持，且项目基于 Apache 2.0 
协议完全开源，是提升终端工作效率的理想助手。",100752,"2026-04-10T01:20:03",[52,13,15,14],"插件",{"id":54,"name":55,"github_repo":56,"description_zh":57,"stars":58,"difficulty_score":32,"last_commit_at":59,"category_tags":60,"status":17},4721,"markitdown","microsoft\u002Fmarkitdown","MarkItDown 是一款由微软 AutoGen 团队打造的轻量级 Python 工具，专为将各类文件高效转换为 Markdown 格式而设计。它支持 PDF、Word、Excel、PPT、图片（含 OCR）、音频（含语音转录）、HTML 乃至 YouTube 链接等多种格式的解析，能够精准提取文档中的标题、列表、表格和链接等关键结构信息。\n\n在人工智能应用日益普及的今天，大语言模型（LLM）虽擅长处理文本，却难以直接读取复杂的二进制办公文档。MarkItDown 恰好解决了这一痛点，它将非结构化或半结构化的文件转化为模型“原生理解”且 Token 效率极高的 Markdown 格式，成为连接本地文件与 AI 分析 pipeline 的理想桥梁。此外，它还提供了 MCP（模型上下文协议）服务器，可无缝集成到 Claude Desktop 等 LLM 应用中。\n\n这款工具特别适合开发者、数据科学家及 AI 研究人员使用，尤其是那些需要构建文档检索增强生成（RAG）系统、进行批量文本分析或希望让 AI 助手直接“阅读”本地文件的用户。虽然生成的内容也具备一定可读性，但其核心优势在于为机器",93400,"2026-04-06T19:52:38",[52,14],{"id":62,"github_repo":63,"name":64,"description_en":65,"description_zh":66,"ai_summary_zh":66,"readme_en":67,"readme_zh":68,"quickstart_zh":69,"use_case_zh":70,"hero_image_url":71,"owner_login":72,"owner_name":73,"owner_avatar_url":74,"owner_bio":75,"owner_company":76,"owner_location":76,"owner_email":76,"owner_twitter":77,"owner_website":78,"owner_url":79,"languages":80,"stars":93,"forks":94,"last_commit_at":95,"license":96,"difficulty_score":97,"env_os":98,"env_gpu":99,"env_ram":100,"env_deps":101,"category_tags":109,"github_topics":111,"view_count":10,"oss_zip_url":76,"oss_zip_packed_at":76,"status":17,"created_at":119,"updated_at":120,"faqs":121,"releases":154},2858,"Lightning-AI\u002Ftorchmetrics","torchmetrics","Machine learning metrics for distributed, scalable PyTorch applications.","torchmetrics 是专为 PyTorch 生态打造的机器学习评估指标库，旨在帮助开发者高效、准确地衡量模型性能。在深度学习训练过程中，如何统一计算准确率、召回率、F1 分数等复杂指标，尤其是在多卡分布式训练或大规模数据场景下保持结果一致，往往是个棘手难题。torchmetrics 通过模块化设计，将各类常用指标封装为可复用的组件，自动处理状态更新与同步逻辑，让用户无需重复造轮子，也能轻松获得可靠评估结果。\n\n它特别适合从事算法研发的研究人员、构建生产级模型的工程师，以及需要快速验证实验效果的学生团队。无论是图像分类、自然语言处理还是推荐系统，torchmetrics 都提供了丰富的内置指标支持，并允许用户自定义扩展。其核心亮点在于原生支持分布式训练环境下的指标聚合，确保在多 GPU 或多节点场景中仍能输出全局一致的统计值，同时具备良好的可扩展性和与 PyTorch 
Lightning 等框架的无缝集成能力。安装简便，文档完善，社区活跃，是提升模型评估效率的实用利器。","\u003Cdiv align=\"center\">\n\n\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FLightning-AI_torchmetrics_readme_5251175d6209.png\" width=\"400px\">\n\n**Machine learning metrics for distributed, scalable PyTorch applications.**\n\n______________________________________________________________________\n\n\u003Cp align=\"center\">\n  \u003Ca href=\"#what-is-torchmetrics\">What is Torchmetrics\u003C\u002Fa> •\n  \u003Ca href=\"#implementing-your-own-module-metric\">Implementing a metric\u003C\u002Fa> •\n  \u003Ca href=\"#build-in-metrics\">Built-in metrics\u003C\u002Fa> •\n  \u003Ca href=\"https:\u002F\u002Flightning.ai\u002Fdocs\u002Ftorchmetrics\u002Fstable\u002F\">Docs\u003C\u002Fa> •\n  \u003Ca href=\"#community\">Community\u003C\u002Fa> •\n  \u003Ca href=\"#license\">License\u003C\u002Fa>\n\u003C\u002Fp>\n\n______________________________________________________________________\n\n[![PyPI - Python Version](https:\u002F\u002Fimg.shields.io\u002Fpypi\u002Fpyversions\u002Ftorchmetrics)](https:\u002F\u002Fpypi.org\u002Fproject\u002Ftorchmetrics\u002F)\n[![PyPI Status](https:\u002F\u002Fbadge.fury.io\u002Fpy\u002Ftorchmetrics.svg)](https:\u002F\u002Fbadge.fury.io\u002Fpy\u002Ftorchmetrics)\n[![PyPI - Downloads](https:\u002F\u002Fimg.shields.io\u002Fpypi\u002Fdm\u002Ftorchmetrics)\n](https:\u002F\u002Fpepy.tech\u002Fproject\u002Ftorchmetrics)\n[![Conda](https:\u002F\u002Fimg.shields.io\u002Fconda\u002Fv\u002Fconda-forge\u002Ftorchmetrics?label=conda&color=success)](https:\u002F\u002Fanaconda.org\u002Fconda-forge\u002Ftorchmetrics)\n[![license](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FLicense-Apache%202.0-blue.svg)](https:\u002F\u002Fgithub.com\u002FLightning-AI\u002Ftorchmetrics\u002Fblob\u002Fmaster\u002FLICENSE)\n\n[![CI testing | 
CPU](https:\u002F\u002Fgithub.com\u002FLightning-AI\u002Ftorchmetrics\u002Factions\u002Fworkflows\u002Fci-tests.yml\u002Fbadge.svg?event=push)](https:\u002F\u002Fgithub.com\u002FLightning-AI\u002Ftorchmetrics\u002Factions\u002Fworkflows\u002Fci-tests.yml)\n[![Build Status](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FLightning-AI_torchmetrics_readme_3323fe13e747.png)](https:\u002F\u002Fdev.azure.com\u002FLightning-AI\u002FMetrics\u002F_build\u002Flatest?definitionId=54&branchName=master)\n[![codecov](https:\u002F\u002Fcodecov.io\u002Fgh\u002FLightning-AI\u002Ftorchmetrics\u002Fbranch\u002Fmaster\u002Fgraph\u002Fbadge.svg?token=NER6LPI3HS)](https:\u002F\u002Fcodecov.io\u002Fgh\u002FLightning-AI\u002Ftorchmetrics)\n[![pre-commit.ci status](https:\u002F\u002Fresults.pre-commit.ci\u002Fbadge\u002Fgithub\u002FLightning-AI\u002Ftorchmetrics\u002Fmaster.svg)](https:\u002F\u002Fresults.pre-commit.ci\u002Flatest\u002Fgithub\u002FLightning-AI\u002Ftorchmetrics\u002Fmaster)\n\n[![Documentation Status](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FLightning-AI_torchmetrics_readme_13d664e1afd7.png)](https:\u002F\u002Ftorchmetrics.readthedocs.io\u002Fen\u002Flatest\u002F?badge=latest)\n[![Discord](https:\u002F\u002Fimg.shields.io\u002Fdiscord\u002F1077906959069626439?style=plastic)](https:\u002F\u002Fdiscord.gg\u002FVptPCZkGNa)\n[![DOI](https:\u002F\u002Fzenodo.org\u002Fbadge\u002FDOI\u002F10.5281\u002Fzenodo.5844769.svg)](https:\u002F\u002Fdoi.org\u002F10.5281\u002Fzenodo.5844769)\n[![JOSS status](https:\u002F\u002Fjoss.theoj.org\u002Fpapers\u002F561d9bb59b400158bc8204e2639dca43\u002Fstatus.svg)](https:\u002F\u002Fjoss.theoj.org\u002Fpapers\u002F561d9bb59b400158bc8204e2639dca43)\n\n______________________________________________________________________\n\n\u003C\u002Fdiv>\n\n# Looking for GPUs?\n\nOver 340,000 developers use [Lightning Cloud](https:\u002F\u002Flightning.ai\u002F?utm_source=tm_readme&utm_medium=referral&utm_campaign=tm_readme) - purpose-built for 
PyTorch and PyTorch Lightning.\n\n- [GPUs](https:\u002F\u002Flightning.ai\u002Fpricing?utm_source=tm_readme&utm_medium=referral&utm_campaign=tm_readme) from $0.19.\n- [Clusters](https:\u002F\u002Flightning.ai\u002Fclusters?utm_source=tm_readme&utm_medium=referral&utm_campaign=tm_readme): frontier-grade training\u002Finference clusters.\n- [AI Studio (vibe train)](https:\u002F\u002Flightning.ai\u002Fstudios?utm_source=tm_readme&utm_medium=referral&utm_campaign=tm_readme): workspaces where AI helps you debug, tune and vibe train.\n- [AI Studio (vibe deploy)](https:\u002F\u002Flightning.ai\u002Fstudios?utm_source=tm_readme&utm_medium=referral&utm_campaign=tm_readme): workspaces where AI helps you optimize, and deploy models.\n- [Notebooks](https:\u002F\u002Flightning.ai\u002Fnotebooks?utm_source=tm_readme&utm_medium=referral&utm_campaign=tm_readme): Persistent GPU workspaces where AI helps you code and analyze.\n- [Inference](https:\u002F\u002Flightning.ai\u002Fdeploy?utm_source=tm_readme&utm_medium=referral&utm_campaign=tm_readme): Deploy models as inference APIs.\n\n# Installation\n\nSimple installation from PyPI\n\n```bash\npip install torchmetrics\n```\n\n\u003Cdetails>\n  \u003Csummary>Other installations\u003C\u002Fsummary>\n\nInstall using conda\n\n```bash\nconda install -c conda-forge torchmetrics\n```\n\nInstall using uv\n\n```bash\nuv add torchmetrics\n```\n\nPip from source\n\n```bash\n# with git\npip install git+https:\u002F\u002Fgithub.com\u002FLightning-AI\u002Ftorchmetrics.git@release\u002Fstable\n```\n\nPip from archive\n\n```bash\npip install https:\u002F\u002Fgithub.com\u002FLightning-AI\u002Ftorchmetrics\u002Farchive\u002Frefs\u002Fheads\u002Frelease\u002Fstable.zip\n```\n\nExtra dependencies for specialized metrics:\n\n```bash\npip install torchmetrics[audio]\npip install torchmetrics[image]\npip install torchmetrics[text]\npip install torchmetrics[all]  # install all of the above\n```\n\nInstall latest developer version\n\n```bash\npip install 
https:\u002F\u002Fgithub.com\u002FLightning-AI\u002Ftorchmetrics\u002Farchive\u002Fmaster.zip\n```\n\n\u003C\u002Fdetails>\n\n______________________________________________________________________\n\n# What is TorchMetrics\n\nTorchMetrics is a collection of 100+ PyTorch metrics implementations and an easy-to-use API to create custom metrics. It offers:\n\n- A standardized interface to increase reproducibility\n- Reduces boilerplate\n- Automatic accumulation over batches\n- Metrics optimized for distributed-training\n- Automatic synchronization between multiple devices\n\nYou can use TorchMetrics with any PyTorch model or with [PyTorch Lightning](https:\u002F\u002Flightning.ai\u002Fdocs\u002Fpytorch\u002Fstable\u002F) to enjoy additional features such as:\n\n- Module metrics are automatically placed on the correct device.\n- Native support for logging metrics in Lightning to reduce even more boilerplate.\n\n# Using TorchMetrics\n\n### Module metrics\n\nThe [module-based metrics](https:\u002F\u002Flightning.ai\u002Fdocs\u002Ftorchmetrics\u002Fstable\u002Freferences\u002Fmetric.html) contain internal metric states (similar to the parameters of the PyTorch module) that automate accumulation and synchronization across devices!\n\n- Automatic accumulation over multiple batches\n- Automatic synchronization between multiple devices\n- Metric arithmetic\n\n**This can be run on CPU, single GPU or multi-GPUs!**\n\nFor the single GPU\u002FCPU case:\n\n```python\nimport torch\n\n# import our library\nimport torchmetrics\n\n# initialize metric\nmetric = torchmetrics.classification.Accuracy(task=\"multiclass\", num_classes=5)\n\n# move the metric to device you want computations to take place\ndevice = \"cuda\" if torch.cuda.is_available() else \"cpu\"\nmetric.to(device)\n\nn_batches = 10\nfor i in range(n_batches):\n    # simulate a classification problem\n    preds = torch.randn(10, 5).softmax(dim=-1).to(device)\n    target = torch.randint(5, (10,)).to(device)\n\n    # metric on 
current batch\n    acc = metric(preds, target)\n    print(f\"Accuracy on batch {i}: {acc}\")\n\n# metric on all batches using custom accumulation\nacc = metric.compute()\nprint(f\"Accuracy on all data: {acc}\")\n```\n\nModule metric usage remains the same when using multiple GPUs or multiple nodes.\n\n\u003Cdetails>\n  \u003Csummary>Example using DDP\u003C\u002Fsummary>\n\n\u003C!--phmdoctest-mark.skip-->\n\n```python\nimport os\nimport torch\nimport torch.distributed as dist\nimport torch.multiprocessing as mp\nfrom torch import nn\nfrom torch.nn.parallel import DistributedDataParallel as DDP\nimport torchmetrics\n\n\ndef metric_ddp(rank, world_size):\n    os.environ[\"MASTER_ADDR\"] = \"localhost\"\n    os.environ[\"MASTER_PORT\"] = \"12355\"\n\n    # create default process group\n    dist.init_process_group(\"gloo\", rank=rank, world_size=world_size)\n\n    # initialize model\n    metric = torchmetrics.classification.Accuracy(task=\"multiclass\", num_classes=5)\n\n    # define a model and append your metric to it\n    # this allows metric states to be placed on correct accelerators when\n    # .to(device) is called on the model\n    model = nn.Linear(10, 10)\n    model.metric = metric\n    model = model.to(rank)\n\n    # initialize DDP\n    model = DDP(model, device_ids=[rank])\n\n    n_epochs = 5\n    # this shows iteration over multiple training epochs\n    for n in range(n_epochs):\n        # this will be replaced by a DataLoader with a DistributedSampler\n        n_batches = 10\n        for i in range(n_batches):\n            # simulate a classification problem\n            preds = torch.randn(10, 5).softmax(dim=-1)\n            target = torch.randint(5, (10,))\n\n            # metric on current batch\n            acc = metric(preds, target)\n            if rank == 0:  # print only for rank 0\n                print(f\"Accuracy on batch {i}: {acc}\")\n\n        # metric on all batches and all accelerators using custom accumulation\n        # accuracy is same 
across both accelerators\n        acc = metric.compute()\n        print(f\"Accuracy on all data: {acc}, accelerator rank: {rank}\")\n\n        # Resetting internal state such that the metric is ready for new data\n        metric.reset()\n\n    # cleanup\n    dist.destroy_process_group()\n\n\nif __name__ == \"__main__\":\n    world_size = 2  # number of GPUs to parallelize over\n    mp.spawn(metric_ddp, args=(world_size,), nprocs=world_size, join=True)\n```\n\n\u003C\u002Fdetails>\n\n### Implementing your own Module metric\n\nImplementing your own metric is as easy as subclassing a [`torch.nn.Module`](https:\u002F\u002Fpytorch.org\u002Fdocs\u002Fstable\u002Fgenerated\u002Ftorch.nn.Module.html). Simply subclass `torchmetrics.Metric`\nand implement the `update` and `compute` methods:\n\n```python\nimport torch\nfrom torchmetrics import Metric\n\n\nclass MyAccuracy(Metric):\n    def __init__(self):\n        # remember to call super\n        super().__init__()\n        # call `self.add_state` for every internal state that is needed for the metric's computations\n        # dist_reduce_fx indicates the function that should be used to reduce\n        # state from multiple processes\n        self.add_state(\"correct\", default=torch.tensor(0), dist_reduce_fx=\"sum\")\n        self.add_state(\"total\", default=torch.tensor(0), dist_reduce_fx=\"sum\")\n\n    def update(self, preds: torch.Tensor, target: torch.Tensor) -> None:\n        # extract predicted class index for computing accuracy\n        preds = preds.argmax(dim=-1)\n        assert preds.shape == target.shape\n        # update metric states\n        self.correct += torch.sum(preds == target)\n        self.total += target.numel()\n\n    def compute(self) -> torch.Tensor:\n        # compute final result\n        return self.correct.float() \u002F self.total\n\n\nmy_metric = MyAccuracy()\npreds = torch.randn(10, 5).softmax(dim=-1)\ntarget = torch.randint(5, (10,))\n\nprint(my_metric(preds, target))\n```\n\n### Functional 
metrics\n\nSimilar to [`torch.nn`](https:\u002F\u002Fpytorch.org\u002Fdocs\u002Fstable\u002Fnn.html), most metrics have both a [module-based](https:\u002F\u002Flightning.ai\u002Fdocs\u002Ftorchmetrics\u002Fstable\u002Freferences\u002Fmetric.html) and a functional version.\nThe functional versions are simple Python functions that take [torch.tensors](https:\u002F\u002Fpytorch.org\u002Fdocs\u002Fstable\u002Ftensors.html) as input and return the corresponding metric as a [torch.tensor](https:\u002F\u002Fpytorch.org\u002Fdocs\u002Fstable\u002Ftensors.html).\n\n```python\nimport torch\n\n# import our library\nimport torchmetrics\n\n# simulate a classification problem\npreds = torch.randn(10, 5).softmax(dim=-1)\ntarget = torch.randint(5, (10,))\n\nacc = torchmetrics.functional.classification.multiclass_accuracy(\n    preds, target, num_classes=5\n)\n```\n\n### Covered domains and example metrics\n\nIn total, TorchMetrics contains [100+ metrics](https:\u002F\u002Flightning.ai\u002Fdocs\u002Ftorchmetrics\u002Fstable\u002Fall-metrics.html), which\ncover the following domains:\n\n- Audio\n- Classification\n- Detection\n- Information Retrieval\n- Image\n- Multimodal (Image-Text-3D Talking Heads)\n- Nominal\n- Regression\n- Segmentation\n- Text\n\nEach domain may require some additional dependencies, which can be installed with `pip install torchmetrics[audio]`,\n`pip install torchmetrics[image]` etc.\n\n### Additional features\n\n#### Plotting\n\nVisualization of metrics can be important to help understand what is going on with your machine learning algorithms.\nTorchMetrics has built-in plotting support (install dependencies with `pip install torchmetrics[visual]`) for nearly\nall modular metrics through the `.plot` method. 
Simply call the method to get a simple visualization of any metric!\n\n```python\nimport torch\nfrom torchmetrics.classification import MulticlassAccuracy, MulticlassConfusionMatrix\n\nnum_classes = 3\n\n# this will generate two distributions that become more similar as iterations increase\nw = torch.randn(num_classes)\ntarget = lambda it: torch.multinomial((it * w).softmax(dim=-1), 100, replacement=True)\npreds = lambda it: torch.multinomial((it * w).softmax(dim=-1), 100, replacement=True)\n\nacc = MulticlassAccuracy(num_classes=num_classes, average=\"micro\")\nacc_per_class = MulticlassAccuracy(num_classes=num_classes, average=None)\nconfmat = MulticlassConfusionMatrix(num_classes=num_classes)\n\n# plot single value\nfor i in range(5):\n    acc_per_class.update(preds(i), target(i))\n    confmat.update(preds(i), target(i))\nfig1, ax1 = acc_per_class.plot()\nfig2, ax2 = confmat.plot()\n\n# plot multiple values\nvalues = []\nfor i in range(10):\n    values.append(acc(preds(i), target(i)))\nfig3, ax3 = acc.plot(values)\n```\n\n\u003Cp align=\"center\">\n  \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FLightning-AI_torchmetrics_readme_0898f1cbdf66.png\" width=\"1000\">\n\u003C\u002Fp>\n\nFor examples of plotting different metrics, try running [this example file](_samples\u002Fplotting.py).\n\n# Contribute!\n\nThe Lightning + TorchMetrics team is hard at work adding even more metrics.\nBut we're looking for incredible contributors like you to submit new metrics\nand improve existing ones!\n\nJoin our [Discord](https:\u002F\u002Fdiscord.com\u002Finvite\u002FtfXFetEZxv) to get help with becoming a contributor!\n\n# Community\n\nFor help or questions, join our huge community on [Discord](https:\u002F\u002Fdiscord.com\u002Finvite\u002FtfXFetEZxv)!\n\n# Citation\n\nWe’re excited to continue the strong legacy of open source software and have been inspired\nover the years by Caffe, Theano, Keras, PyTorch, torchbearer, ignite, sklearn and fast.ai.\n\nIf 
you want to cite this framework feel free to use GitHub's built-in citation option to generate a bibtex or APA-Style citation based on [this file](https:\u002F\u002Fgithub.com\u002FLightning-AI\u002Ftorchmetrics\u002Fblob\u002Fmaster\u002FCITATION.cff) (but only if you loved it 😊).\n\n# License\n\nPlease observe the Apache 2.0 license that is listed in this repository.\nIn addition, the Lightning framework is Patent Pending.\n","\u003Cdiv align=\"center\">\n\n\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FLightning-AI_torchmetrics_readme_5251175d6209.png\" width=\"400px\">\n\n**面向分布式、可扩展 PyTorch 应用的机器学习指标库。**\n\n______________________________________________________________________\n\n\u003Cp align=\"center\">\n  \u003Ca href=\"#what-is-torchmetrics\">什么是 TorchMetrics\u003C\u002Fa> •\n  \u003Ca href=\"#implementing-your-own-module-metric\">实现自定义指标\u003C\u002Fa> •\n  \u003Ca href=\"#build-in-metrics\">内置指标\u003C\u002Fa> •\n  \u003Ca href=\"https:\u002F\u002Flightning.ai\u002Fdocs\u002Ftorchmetrics\u002Fstable\u002F\">文档\u003C\u002Fa> •\n  \u003Ca href=\"#community\">社区\u003C\u002Fa> •\n  \u003Ca href=\"#license\">许可证\u003C\u002Fa>\n\u003C\u002Fp>\n\n______________________________________________________________________\n\n[![PyPI - Python 版本](https:\u002F\u002Fimg.shields.io\u002Fpypi\u002Fpyversions\u002Ftorchmetrics)](https:\u002F\u002Fpypi.org\u002Fproject\u002Ftorchmetrics\u002F)\n[![PyPI 状态](https:\u002F\u002Fbadge.fury.io\u002Fpy\u002Ftorchmetrics.svg)](https:\u002F\u002Fbadge.fury.io\u002Fpy\u002Ftorchmetrics)\n[![PyPI - 
下载量](https:\u002F\u002Fimg.shields.io\u002Fpypi\u002Fdm\u002Ftorchmetrics)\n](https:\u002F\u002Fpepy.tech\u002Fproject\u002Ftorchmetrics)\n[![Conda](https:\u002F\u002Fimg.shields.io\u002Fconda\u002Fv\u002Fconda-forge\u002Ftorchmetrics?label=conda&color=success)](https:\u002F\u002Fanaconda.org\u002Fconda-forge\u002Ftorchmetrics)\n[![许可证](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FLicense-Apache%202.0-blue.svg)](https:\u002F\u002Fgithub.com\u002FLightning-AI\u002Ftorchmetrics\u002Fblob\u002Fmaster\u002FLICENSE)\n\n[![CI 测试 | CPU](https:\u002F\u002Fgithub.com\u002FLightning-AI\u002Ftorchmetrics\u002Factions\u002Fworkflows\u002Fci-tests.yml\u002Fbadge.svg?event=push)](https:\u002F\u002Fgithub.com\u002FLightning-AI\u002Ftorchmetrics\u002Factions\u002Fworkflows\u002Fci-tests.yml)\n[![构建状态](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FLightning-AI_torchmetrics_readme_3323fe13e747.png)](https:\u002F\u002Fdev.azure.com\u002FLightning-AI\u002FMetrics\u002F_build\u002Flatest?definitionId=54&branchName=master)\n[![Codecov](https:\u002F\u002Fcodecov.io\u002Fgh\u002FLightning-AI\u002Ftorchmetrics\u002Fbranch\u002Fmaster\u002Fgraph\u002Fbadge.svg?token=NER6LPI3HS)](https:\u002F\u002Fcodecov.io\u002Fgh\u002FLightning-AI\u002Ftorchmetrics)\n[![pre-commit.ci 
状态](https:\u002F\u002Fresults.pre-commit.ci\u002Fbadge\u002Fgithub\u002FLightning-AI\u002Ftorchmetrics\u002Fmaster.svg)](https:\u002F\u002Fresults.pre-commit.ci\u002Flatest\u002Fgithub\u002FLightning-AI\u002Ftorchmetrics\u002Fmaster)\n\n[![文档状态](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FLightning-AI_torchmetrics_readme_13d664e1afd7.png)](https:\u002F\u002Ftorchmetrics.readthedocs.io\u002Fen\u002Flatest\u002F?badge=latest)\n[![Discord](https:\u002F\u002Fimg.shields.io\u002Fdiscord\u002F1077906959069626439?style=plastic)](https:\u002F\u002Fdiscord.gg\u002FVptPCZkGNa)\n[![DOI](https:\u002F\u002Fzenodo.org\u002Fbadge\u002FDOI\u002F10.5281\u002Fzenodo.5844769.svg)](https:\u002F\u002Fdoi.org\u002F10.5281\u002Fzenodo.5844769)\n[![JOSS 状态](https:\u002F\u002Fjoss.theoj.org\u002Fpapers\u002F561d9bb59b400158bc8204e2639dca43\u002Fstatus.svg)](https:\u002F\u002Fjoss.theoj.org\u002Fpapers\u002F561d9bb59b400158bc8204e2639dca43)\n\n______________________________________________________________________\n\n\u003C\u002Fdiv>\n\n# 正在寻找 GPU 吗？\n\n超过 34 万名开发者正在使用 [Lightning Cloud](https:\u002F\u002Flightning.ai\u002F?utm_source=tm_readme&utm_medium=referral&utm_campaign=tm_readme)，它专为 PyTorch 和 PyTorch Lightning 打造。\n\n- [GPU](https:\u002F\u002Flightning.ai\u002Fpricing?utm_source=tm_readme&utm_medium=referral&utm_campaign=tm_readme) 低至每小时 0.19 美元。\n- [集群](https:\u002F\u002Flightning.ai\u002Fclusters?utm_source=tm_readme&utm_medium=referral&utm_campaign=tm_readme)：前沿级别的训练与推理集群。\n- [AI Studio（调试模式）](https:\u002F\u002Flightning.ai\u002Fstudios?utm_source=tm_readme&utm_medium=referral&utm_campaign=tm_readme)：AI 助力调试、调优和高效训练的工作空间。\n- [AI Studio（部署模式）](https:\u002F\u002Flightning.ai\u002Fstudios?utm_source=tm_readme&utm_medium=referral&utm_campaign=tm_readme)：AI 助力模型优化与部署的工作空间。\n- [Notebooks](https:\u002F\u002Flightning.ai\u002Fnotebooks?utm_source=tm_readme&utm_medium=referral&utm_campaign=tm_readme)：持久化的 GPU 工作空间，AI 可帮助您编写代码并进行分析。\n- 
[推理服务](https:\u002F\u002Flightning.ai\u002Fdeploy?utm_source=tm_readme&utm_medium=referral&utm_campaign=tm_readme)：将模型部署为推理 API。\n\n# 安装\n\n从 PyPI 简单安装：\n\n```bash\npip install torchmetrics\n```\n\n\u003Cdetails>\n  \u003Csummary>其他安装方式\u003C\u002Fsummary>\n\n使用 conda 安装：\n\n```bash\nconda install -c conda-forge torchmetrics\n```\n\n使用 uv 安装：\n\n```bash\nuv add torchmetrics\n```\n\n从源码安装：\n\n```bash\n# 使用 git\npip install git+https:\u002F\u002Fgithub.com\u002FLightning-AI\u002Ftorchmetrics.git@release\u002Fstable\n```\n\n从压缩包安装：\n\n```bash\npip install https:\u002F\u002Fgithub.com\u002FLightning-AI\u002Ftorchmetrics\u002Farchive\u002Frefs\u002Fheads\u002Frelease\u002Fstable.zip\n```\n\n针对特定领域指标的额外依赖：\n\n```bash\npip install torchmetrics[audio]\npip install torchmetrics[image]\npip install torchmetrics[text]\npip install torchmetrics[all]  # 安装以上所有选项\n```\n\n安装最新的开发版本：\n\n```bash\npip install https:\u002F\u002Fgithub.com\u002FLightning-AI\u002Ftorchmetrics\u002Farchive\u002Fmaster.zip\n```\n\n\u003C\u002Fdetails>\n\n______________________________________________________________________\n\n# 什么是 TorchMetrics\n\nTorchMetrics 是一个包含 100 多种 PyTorch 指标实现的库，并提供简单易用的 API 来创建自定义指标。它具有以下特点：\n\n- 标准化接口，提升实验可重复性\n- 减少样板代码\n- 自动跨批次累积\n- 针对分布式训练优化的指标\n- 自动在多设备间同步\n\n您可以将 TorchMetrics 与任何 PyTorch 模型一起使用，或结合 [PyTorch Lightning](https:\u002F\u002Flightning.ai\u002Fdocs\u002Fpytorch\u002Fstable\u002F) 使用，以享受更多功能，例如：\n\n- 模块级指标会自动放置到正确的设备上。\n- 原生支持在 Lightning 中记录指标，进一步减少样板代码。\n\n# 如何使用 TorchMetrics\n\n### 模块级指标\n\n基于模块的指标（[参考文档](https:\u002F\u002Flightning.ai\u002Fdocs\u002Ftorchmetrics\u002Fstable\u002Freferences\u002Fmetric.html)）包含内部指标状态（类似于 PyTorch 模块的参数），能够自动完成跨设备的累积与同步！\n\n- 自动跨多个批次累积\n- 自动在多设备间同步\n- 支持指标间的算术运算\n\n**无论是在 CPU、单 GPU 还是多 GPU 上，都可以运行！**\n\n对于单 GPU\u002FCPU 场景：\n\n```python\nimport torch\n\n# 导入我们的库\nimport torchmetrics\n\n# 初始化指标\nmetric = torchmetrics.classification.Accuracy(task=\"multiclass\", num_classes=5)\n\n# 将指标移动到计算所需的设备上\ndevice = \"cuda\" if 
torch.cuda.is_available() else \"cpu\"\nmetric.to(device)\n\nn_batches = 10\nfor i in range(n_batches):\n    # 模拟分类任务\n    preds = torch.randn(10, 5).softmax(dim=-1).to(device)\n    target = torch.randint(5, (10,)).to(device)\n\n    # 当前批次的指标值\n    acc = metric(preds, target)\n    print(f\"第 {i} 个批次的准确率: {acc}\")\n\n# 使用自定义累积方法计算所有批次的指标\nacc = metric.compute()\nprint(f\"所有数据上的准确率: {acc}\")\n```\n\n在使用多 GPU 或多节点时，模块化指标的用法保持不变。\n\n\u003Cdetails>\n  \u003Csummary>使用 DDP 的示例\u003C\u002Fsummary>\n\n\u003C!--phmdoctest-mark.skip-->\n\n```python\nimport os\nimport torch\nimport torch.distributed as dist\nimport torch.multiprocessing as mp\nfrom torch import nn\nfrom torch.nn.parallel import DistributedDataParallel as DDP\nimport torchmetrics\n\n\ndef metric_ddp(rank, world_size):\n    os.environ[\"MASTER_ADDR\"] = \"localhost\"\n    os.environ[\"MASTER_PORT\"] = \"12355\"\n\n    # 创建默认进程组\n    dist.init_process_group(\"gloo\", rank=rank, world_size=world_size)\n\n    # 初始化模型\n    metric = torchmetrics.classification.Accuracy(task=\"multiclass\", num_classes=5)\n\n    # 定义一个模型，并将你的指标附加到该模型上\n    # 这样当对模型调用 .to(device) 时，指标的状态会被放置到正确的加速器上\n    model = nn.Linear(10, 10)\n    model.metric = metric\n    model = model.to(rank)\n\n    # 初始化 DDP\n    model = DDP(model, device_ids=[rank])\n\n    n_epochs = 5\n    # 这里展示了对多个训练轮次的迭代\n    for n in range(n_epochs):\n        # 这里将被带有 DistributedSampler 的 DataLoader 替代\n        n_batches = 10\n        for i in range(n_batches):\n            # 模拟一个分类问题\n            preds = torch.randn(10, 5).softmax(dim=-1)\n            target = torch.randint(5, (10,))\n\n            # 当前批次的指标\n            acc = metric(preds, target)\n            if rank == 0:  # 只在 rank 0 打印\n                print(f\"第 {i} 个批次的准确率: {acc}\")\n\n        # 使用自定义累积方法计算所有批次和所有加速器上的指标\n        # 准确率在两个加速器上是相同的\n        acc = metric.compute()\n        print(f\"所有数据上的准确率: {acc}, 加速器 rank: {rank}\")\n\n        # 重置内部状态，使指标为新数据做好准备\n        metric.reset()\n\n    # 清理\n    
dist.destroy_process_group()\n\n\nif __name__ == \"__main__\":\n    world_size = 2  # 并行化的 GPU 数量\n    mp.spawn(metric_ddp, args=(world_size,), nprocs=world_size, join=True)\n```\n\n\u003C\u002Fdetails>\n\n### 实现你自己的模块化指标\n\n实现你自己的指标非常简单，只需继承 [`torch.nn.Module`](https:\u002F\u002Fpytorch.org\u002Fdocs\u002Fstable\u002Fgenerated\u002Ftorch.nn.Module.html) 即可。只需继承 `torchmetrics.Metric`，并实现 `update` 和 `compute` 方法：\n\n```python\nimport torch\nfrom torchmetrics import Metric\n\n\nclass MyAccuracy(Metric):\n    def __init__(self):\n        # 记得调用 super\n        super().__init__()\n        # 对于每个用于指标计算的内部状态，调用 `self.add_state`\n        # dist_reduce_fx 表示用于从多个进程归约状态的函数\n        self.add_state(\"correct\", default=torch.tensor(0), dist_reduce_fx=\"sum\")\n        self.add_state(\"total\", default=torch.tensor(0), dist_reduce_fx=\"sum\")\n\n    def update(self, preds: torch.Tensor, target: torch.Tensor) -> None:\n        # 提取预测的类别索引以计算准确率\n        preds = preds.argmax(dim=-1)\n        assert preds.shape == target.shape\n        # 更新指标状态\n        self.correct += torch.sum(preds == target)\n        self.total += target.numel()\n\n    def compute(self) -> torch.Tensor:\n        # 计算最终结果\n        return self.correct.float() \u002F self.total\n\n\nmy_metric = MyAccuracy()\npreds = torch.randn(10, 5).softmax(dim=-1)\ntarget = torch.randint(5, (10,))\n\nprint(my_metric(preds, target))\n```\n\n### 函数式指标\n\n与 [`torch.nn`](https:\u002F\u002Fpytorch.org\u002Fdocs\u002Fstable\u002Fnn.html) 类似，大多数指标都有基于模块的版本和函数式版本。函数式版本是简单的 Python 函数，它们接受 [torch.tensors](https:\u002F\u002Fpytorch.org\u002Fdocs\u002Fstable\u002Ftensors.html) 作为输入，并返回相应的指标作为 [torch.tensor](https:\u002F\u002Fpytorch.org\u002Fdocs\u002Fstable\u002Ftensors.html)。\n\n```python\nimport torch\n\n# 导入我们的库\nimport torchmetrics\n\n# 模拟一个分类问题\npreds = torch.randn(10, 5).softmax(dim=-1)\ntarget = torch.randint(5, (10,))\n\nacc = torchmetrics.functional.classification.multiclass_accuracy(\n    preds, target, 
num_classes=5\n)\n```\n\n### 涵盖的领域及示例指标\n\nTorchMetrics 总共包含 [100 多个指标](https:\u002F\u002Flightning.ai\u002Fdocs\u002Ftorchmetrics\u002Fstable\u002Fall-metrics.html)，涵盖了以下领域：\n\n- 音频\n- 分类\n- 检测\n- 信息检索\n- 图像\n- 多模态（图像-文本-3D 说话人头像）\n- 名义型\n- 回归\n- 分割\n- 文本\n\n每个领域可能需要一些额外的依赖项，可以通过 `pip install torchmetrics[audio]`、`pip install torchmetrics[image]` 等命令安装。\n\n### 其他功能\n\n#### 绘图\n\n可视化指标对于理解机器学习算法的工作情况非常重要。TorchMetrics 内置了绘图支持（通过 `pip install torchmetrics[visual]` 安装依赖），几乎所有模块化指标都可通过 `.plot` 方法进行简单可视化！\n\n```python\nimport torch\nfrom torchmetrics.classification import MulticlassAccuracy, MulticlassConfusionMatrix\n\nnum_classes = 3\n\n# 这将生成两个分布，随着迭代次数增加，它们会越来越相似\nw = torch.randn(num_classes)\ntarget = lambda it: torch.multinomial((it * w).softmax(dim=-1), 100, replacement=True)\npreds = lambda it: torch.multinomial((it * w).softmax(dim=-1), 100, replacement=True)\n\nacc = MulticlassAccuracy(num_classes=num_classes, average=\"micro\")\nacc_per_class = MulticlassAccuracy(num_classes=num_classes, average=None)\nconfmat = MulticlassConfusionMatrix(num_classes=num_classes)\n\n# 绘制单个值\nfor i in range(5):\n    acc_per_class.update(preds(i), target(i))\n    confmat.update(preds(i), target(i))\nfig1, ax1 = acc_per_class.plot()\nfig2, ax2 = confmat.plot()\n\n# 绘制多个值\nvalues = []\nfor i in range(10):\n    values.append(acc(preds(i), target(i)))\nfig3, ax3 = acc.plot(values)\n```\n\n\u003Cp align=\"center\">\n  \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FLightning-AI_torchmetrics_readme_0898f1cbdf66.png\" width=\"1000\">\n\u003C\u002Fp>\n\n有关不同指标绘图的示例，请运行 [此示例文件](_samples\u002Fplotting.py)。\n\n# 贡献吧！\n\nLightning + TorchMetrics 团队正在努力添加更多指标。但我们正在寻找像您一样出色的贡献者，提交新的指标并改进现有指标！\n\n加入我们的 [Discord](https:\u002F\u002Fdiscord.com\u002Finvite\u002FtfXFetEZxv) 以获取成为贡献者的帮助！\n\n# 社区\n\n如需帮助或有任何问题，请加入我们在 [Discord](https:\u002F\u002Fdiscord.com\u002Finvite\u002FtfXFetEZxv) 上的庞大社区！\n\n# 引用\n\n我们很高兴能够延续开源软件的优良传统，多年来一直受到 
Caffe、Theano、Keras、PyTorch、torchbearer、ignite、scikit-learn 和 fast.ai 等项目的启发。\n\n如果您想引用本框架，可以使用 GitHub 内置的引用功能，根据[此文件](https:\u002F\u002Fgithub.com\u002FLightning-AI\u002Ftorchmetrics\u002Fblob\u002Fmaster\u002FCITATION.cff)生成 BibTeX 或 APA 格式的引用（当然，前提是您非常喜欢它 😊）。\n\n# 许可证\n\n请遵守本仓库中列出的 Apache 2.0 许可协议。此外，Lightning 框架目前正处于专利申请中。","# TorchMetrics 快速上手指南\n\nTorchMetrics 是一个专为 PyTorch 设计的机器学习指标库，提供 100+ 种内置指标实现。它支持分布式训练、自动批次累积和多设备同步，能显著减少样板代码并提高实验可复现性。\n\n## 环境准备\n\n- **操作系统**：Linux, macOS, Windows\n- **Python 版本**：3.8+\n- **核心依赖**：\n  - PyTorch >= 1.10.0\n  - NumPy\n  - Packaging\n- **可选依赖**：根据具体任务领域（如音频、图像、文本）可能需要额外安装对应扩展包。\n\n## 安装步骤\n\n### 基础安装\n推荐使用 pip 进行安装：\n```bash\npip install torchmetrics\n```\n\n### 国内加速安装\n使用清华大学镜像源加速安装：\n```bash\npip install torchmetrics -i https:\u002F\u002Fpypi.tuna.tsinghua.edu.cn\u002Fsimple\n```\n\n### 按需安装扩展包\n针对特定领域的指标，可安装额外依赖：\n```bash\n# 音频指标\npip install torchmetrics[audio]\n\n# 图像指标\npip install torchmetrics[image]\n\n# 文本指标\npip install torchmetrics[text]\n\n# 一次性安装所有扩展\npip install torchmetrics[all]\n```\n\n### Conda 安装\n```bash\nconda install -c conda-forge torchmetrics\n```\n\n## 基本使用\n\nTorchMetrics 提供两种主要使用方式：**模块式（Module-based）**和**函数式（Functional）**。推荐在训练循环中使用模块式，因为它能自动处理状态累积和设备同步。\n\n### 1. 模块式用法（推荐用于训练）\n\n模块式指标会自动累积多个 batch 的数据，并支持多 GPU 分布式训练。\n\n```python\nimport torch\nimport torchmetrics\n\n# 初始化指标 (例如：多分类准确率)\nmetric = torchmetrics.classification.Accuracy(task=\"multiclass\", num_classes=5)\n\n# 将指标移动到计算设备 (CPU 或 CUDA)\ndevice = \"cuda\" if torch.cuda.is_available() else \"cpu\"\nmetric.to(device)\n\nn_batches = 10\nfor i in range(n_batches):\n    # 模拟数据\n    preds = torch.randn(10, 5).softmax(dim=-1).to(device)\n    target = torch.randint(5, (10,)).to(device)\n\n    # 更新当前 batch 的指标状态\n    acc = metric(preds, target)\n    print(f\"Batch {i} Accuracy: {acc}\")\n\n# 计算所有累积数据的最终指标\nfinal_acc = metric.compute()\nprint(f\"Total Data Accuracy: {final_acc}\")\n\n# 如果需要开始新一轮评估，重置状态\nmetric.reset()\n```\n\n### 2. 
函数式用法（适用于简单评估）\n\n如果你只需要对单个 Tensor 进行一次性计算，可以使用函数式接口。\n\n```python\nimport torch\nimport torchmetrics\n\n# 模拟数据\npreds = torch.randn(10, 5).softmax(dim=-1)\ntarget = torch.randint(5, (10,))\n\n# 直接调用函数计算指标\nacc = torchmetrics.functional.classification.multiclass_accuracy(\n    preds, target, num_classes=5\n)\nprint(f\"Accuracy: {acc}\")\n```\n\n### 3. 自定义指标\n\n通过继承 `torchmetrics.Metric` 类，你可以轻松实现自定义指标。只需定义 `update`（更新状态）和 `compute`（计算结果）方法。\n\n```python\nimport torch\nfrom torchmetrics import Metric\n\nclass MyAccuracy(Metric):\n    def __init__(self):\n        super().__init__()\n        # 注册内部状态，dist_reduce_fx 指定分布式下的归约方式\n        self.add_state(\"correct\", default=torch.tensor(0), dist_reduce_fx=\"sum\")\n        self.add_state(\"total\", default=torch.tensor(0), dist_reduce_fx=\"sum\")\n\n    def update(self, preds: torch.Tensor, target: torch.Tensor) -> None:\n        preds = preds.argmax(dim=-1)\n        assert preds.shape == target.shape\n        self.correct += torch.sum(preds == target)\n        self.total += target.numel()\n\n    def compute(self) -> torch.Tensor:\n        return self.correct.float() \u002F self.total\n\n# 使用自定义指标\nmy_metric = MyAccuracy()\npreds = torch.randn(10, 5).softmax(dim=-1)\ntarget = torch.randint(5, (10,))\n\nresult = my_metric(preds, target)\nprint(f\"Custom Accuracy: {result}\")\n```","某计算机视觉团队正在基于 PyTorch Lightning 开发一个分布式医疗影像分类系统，需要在多卡训练过程中实时监控并记录模型的准确率与 F1 分数。\n\n### 没有 torchmetrics 时\n- **手动实现易出错**：开发者需手写复杂的指标计算逻辑（如混淆矩阵统计），极易在边界条件或数值稳定性上引入难以察觉的 Bug。\n- **分布式同步困难**：在多 GPU 环境下，手动聚合各卡上的中间状态（如总样本数、正确预测数）代码繁琐且容易出错，导致最终指标计算不准。\n- **训练与评估割裂**：训练循环中的临时指标计算代码无法直接复用于验证阶段，造成代码冗余且维护成本高昂。\n- **缺乏标准化接口**：不同成员编写的指标类风格各异，难以统一集成到现有的日志系统（如 TensorBoard 或 WandB）中。\n\n### 使用 torchmetrics 后\n- **开箱即用的高精度指标**：直接调用 `Accuracy` 或 `F1Score` 等内置模块，无需重复造轮子，确保算法逻辑经过社区严格验证。\n- **自动处理分布式同步**：torchmetrics 内部自动管理多卡间的状态聚合（`.compute()` 时自动归约），开发者无需关心底层通信细节。\n- **无缝集成训练流程**：作为 `nn.Module` 的子类，可自然嵌入 PyTorch Lightning 的训练循环，训练与验证阶段共用同一套指标实例。\n- 
**统一日志记录体验**：所有指标输出格式标准，只需一行代码即可将结果自动推送至各类主流可视化工具，大幅提升调试效率。\n\ntorchmetrics 通过标准化、模块化且原生支持分布式的指标计算，让开发者从繁琐的数学实现与通信同步中解放出来，专注于模型架构本身的优化。","https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FLightning-AI_torchmetrics_5251175d.png","Lightning-AI","⚡️ Lightning AI ","https:\u002F\u002Foss.gittoolsai.com\u002Favatars\u002FLightning-AI_e518c84b.png","Turn ideas into AI, Lightning fast. Creators of PyTorch Lightning, Lightning AI Studio, TorchMetrics, Fabric, Lit-GPT, Lit-LLaMA",null,"LightningAI","https:\u002F\u002Flightning.ai\u002F","https:\u002F\u002Fgithub.com\u002FLightning-AI",[81,85,89],{"name":82,"color":83,"percentage":84},"Python","#3572A5",99.8,{"name":86,"color":87,"percentage":88},"Dockerfile","#384d54",0.1,{"name":90,"color":91,"percentage":92},"Makefile","#427819",0,2423,483,"2026-04-02T14:33:50","Apache-2.0",1,"Linux, macOS, Windows","非必需。支持 CPU、单 GPU 或多 GPU（分布式训练）。若使用 GPU，需兼容 PyTorch 的 NVIDIA GPU 及对应 CUDA 版本（具体版本取决于安装的 PyTorch），无特定显存要求。","未说明",{"notes":102,"python":103,"dependencies":104},"该工具是纯指标库，不包含大模型文件，无需下载额外模型。核心功能仅需 PyTorch。针对音频、图像、文本等特定领域的指标，需分别安装额外依赖（如 pip install torchmetrics[audio]）。支持分布式训练（DDP）和多设备自动同步。","3.8+",[105,106,107,108],"torch>=1.10.0","numpy","packaging","typing-extensions",[14,110,16],"其他",[112,113,114,115,116,117,118],"python","data-science","machine-learning","pytorch","deep-learning","metrics","analyses","2026-03-27T02:49:30.150509","2026-04-11T16:52:57.032004",[122,127,132,137,142,146,150],{"id":123,"question_zh":124,"answer_zh":125,"source_url":126},13209,"MeanAveragePrecision (mAP) 指标计算速度非常慢，尤其是在 0.6.0 版本之后，有什么解决办法吗？","这是一个已知问题。维护者计划重新引入 `pycocotools` 作为 MeanAveragePrecision 的必需依赖项。目前的纯 PyTorch 实现存在性能瓶颈且难以维护，而 `pycocotools` 后端能更高效地处理所有计算细节并支持分割任务。建议关注后续版本更新，届时将默认使用 `pycocotools` 来解决速度慢和计算错误的问题。","https:\u002F\u002Fgithub.com\u002FLightning-AI\u002Ftorchmetrics\u002Fissues\u002F1024",{"id":128,"question_zh":129,"answer_zh":130,"source_url":131},13210,"TorchMetrics 是否支持图像质量评估指标（如 SSIM, PSNR, MS-SSIM 等）？","是的，TorchMetrics 
已经实现了多种图像质量评估指标。社区贡献者已经完成了包括均方误差 (MSE)、峰值信噪比 (PSNR)、结构相似性指数 (SSIM)、多尺度结构相似性 (MS-SSIM) 以及空间相关系数 (SCC) 等在内的多个指标的添加工作。您可以直接在文档中查找这些图像指标的用法。","https:\u002F\u002Fgithub.com\u002FLightning-AI\u002Ftorchmetrics\u002Fissues\u002F799",{"id":133,"question_zh":134,"answer_zh":135,"source_url":136},13211,"如何将 COCO 风格的 mAP 计算迁移到 GPU 以加速训练？","虽然用户曾尝试将 `COCOeval` 类转换为纯 PyTorch 以利用 GPU 加速，但维护者指出这非常困难且容易出错。目前的最佳实践是依赖 `pycocotools` 后端，尽管它主要运行在 CPU 上，但它保证了计算的正确性和对分割任务的支持。强行转为 GPU 实现可能导致结果不准确或开发成本过高，因此官方倾向于维持基于 `pycocotools` 的实现。","https:\u002F\u002Fgithub.com\u002FLightning-AI\u002Ftorchmetrics\u002Fissues\u002F53",{"id":138,"question_zh":139,"answer_zh":140,"source_url":141},13212,"TorchMetrics 的 Metric API 设计有何重大变化？`update` 函数现在应该如何编写？","TorchMetrics 对基础 Metric 类的内部 API 进行了重构。新的设计中，`update` 函数不再直接执行计算或返回最终结果，而是负责返回一个包含当前批次状态（batch states）的字典。累积状态的计算由 `compute()` 处理，而单步计算由 `compute_on_step()` 处理。这种分离使得状态管理更加清晰，用户自定义指标时应在 `update` 中返回状态字典，例如：`return {\"total\": preds.shape[0], \"correct\": correct_count}`。","https:\u002F\u002Fgithub.com\u002FLightning-AI\u002Ftorchmetrics\u002Fissues\u002F344",{"id":143,"question_zh":144,"answer_zh":145,"source_url":126},13213,"为什么 MeanAveragePrecision 在某些版本中会出现计算错误或不可维护的情况？","由于纯 PyTorch 实现的复杂性，MeanAveragePrecision 在多个版本中出现了难以修复的 Bug（如 issue #1793, #1774 等）。维护者承认缺乏该领域足够的专业知识来手动维护所有边缘情况。因此，解决方案是将底层计算逻辑完全委托给成熟的 `pycocotools` 库，TorchMetrics 仅保留用户接口层。这确保了指标计算的准确性和长期可维护性。",{"id":147,"question_zh":148,"answer_zh":149,"source_url":141},13214,"如何在分布式训练中正确使用 TorchMetrics 的 compute 和 compute_on_step？","在新的 API 设计中，`compute()` 用于在所有秩（ranks）上对累积的状态进行计算，而 `compute_on_step()` 则用于对当前批次的状态进行计算。两者都支持 `sync_dist` 参数：设置为 `True`（默认）时会在所有进程间同步状态后计算；设置为 `False` 时则仅在当前秩上计算。用户应根据是需要全局指标还是单步指标来选择调用方式。",{"id":151,"question_zh":152,"answer_zh":153,"source_url":131},13215,"TorchMetrics 是否支持空间相关系数 (SCC) 这一图像指标？","是的，空间相关系数 (SCC) 已经被添加到 TorchMetrics 
中。这是社区贡献者针对图像质量评估需求提出的功能之一，旨在补充原有的图像指标集合。如果您需要使用该指标，请确保安装了包含此更新的最新版本。",[155,160,165,170,175,180,185,190,195,200,205,210,215,220,225,230,235,240,245,250],{"id":156,"version":157,"summary_zh":158,"released_at":159},71875,"v1.9.0","## [1.9.0] - 2026-03-06\n\n### 变更\n\n- 将 Dice 分数的默认 `average` 参数设置为 `\"macro\"` ([#3042](https:\u002F\u002Fgithub.com\u002FLightning-AI\u002Ftorchmetrics\u002Fpull\u002F3042))\n- 停止对 Python 3.9 的支持，将最低版本要求提升至 3.10 ([#3330](https:\u002F\u002Fgithub.com\u002FLightning-AI\u002Ftorchmetrics\u002Fpull\u002F3330))\n- 使用 `packaging` 替代 `pkg_resources` ([#3329](https:\u002F\u002Fgithub.com\u002FLightning-AI\u002Ftorchmetrics\u002Fpull\u002F3329))\n\n### 修复\n\n- 修复了 `Metric` 基类中的设备不匹配问题 ([#3316](https:\u002F\u002Fgithub.com\u002FLightning-AI\u002Ftorchmetrics\u002Fpull\u002F3316))\n- 修复了 n 维切片已弃用的警告 ([#3319](https:\u002F\u002Fgithub.com\u002FLightning-AI\u002Ftorchmetrics\u002Fpull\u002F3319))\n- 修复了 `logAUC` 中的张量复制警告 ([#3295](https:\u002F\u002Fgithub.com\u002FLightning-AI\u002Ftorchmetrics\u002Fpull\u002F3295))\n- 修复了使用高值索引计算检索指标时的内存问题 ([#3291](https:\u002F\u002Fgithub.com\u002FLightning-AI\u002Ftorchmetrics\u002Fpull\u002F3291))\n- 通过直接在设备上创建张量，修复了 `_safe_divide` 中的竞争条件 ([#3284](https:\u002F\u002Fgithub.com\u002FLightning-AI\u002Ftorchmetrics\u002Fpull\u002F3284))\n\n---\n\n## 主要贡献者\n\n@adaliaramon、@bhimrazy、@GdoongMathew、@Isalia20、@KyleMylonakisProtopia、@VijayVignesh1\n\n### 新贡献者\n\n* @PussyCat0700 在 https:\u002F\u002Fgithub.com\u002FLightning-AI\u002Ftorchmetrics\u002Fpull\u002F3182 中做出了首次贡献\n* @iamkulbhushansingh 在 https:\u002F\u002Fgithub.com\u002FLightning-AI\u002Ftorchmetrics\u002Fpull\u002F3196 中做出了首次贡献\n* @adosar 在 https:\u002F\u002Fgithub.com\u002FLightning-AI\u002Ftorchmetrics\u002Fpull\u002F3282 中做出了首次贡献\n* @bhimrazy 在 https:\u002F\u002Fgithub.com\u002FLightning-AI\u002Ftorchmetrics\u002Fpull\u002F3306 中做出了首次贡献\n* @VijayVignesh1 在 https:\u002F\u002Fgithub.com\u002FLightning-AI\u002Ftorchmetrics\u002Fpull\u002F3295 中做出了首次贡献\n* 
@KyleMylonakisProtopia 在 https:\u002F\u002Fgithub.com\u002FLightning-AI\u002Ftorchmetrics\u002Fpull\u002F3319 中做出了首次贡献\n* @GdoongMathew 在 https:\u002F\u002Fgithub.com\u002FLightning-AI\u002Ftorchmetrics\u002Fpull\u002F3316 中做出了首次贡献\n* @adaliaramon 在 https:\u002F\u002Fgithub.com\u002FLightning-AI\u002Ftorchmetrics\u002Fpull\u002F3291 中做出了首次贡献\n\n_如果我们因提交邮箱与 GitHub 账号不匹配而遗漏了某位贡献者，请告知我们 :]_\n\n---\n\n**完整变更日志**: https:\u002F\u002Fgithub.com\u002FLightning-AI\u002Ftorchmetrics\u002Fcompare\u002Fv1.8.0...v1.9.0","2026-03-09T17:40:37",{"id":161,"version":162,"summary_zh":163,"released_at":164},71876,"v1.8.2","## [1.8.2] - 2025-09-03\n\n### 修复\n\n- 修复了 `BinaryPrecisionRecallCurve`，当没有预测值达到阈值时，现在会返回 `NaN` 作为精确率（[#3227](https:\u002F\u002Fgithub.com\u002FLightning-AI\u002Ftorchmetrics\u002Fpull\u002F3227)）\n- 修复了 `precision_at_fixed_recall` 和 `recall_at_fixed_precision`，使其在无法满足召回率\u002F精确率条件时，能够正确返回 `NaN` 阈值（[#3226](https:\u002F\u002Fgithub.com\u002FLightning-AI\u002Ftorchmetrics\u002Fpull\u002F3226)）\n\n---\n\n### 主要贡献者\n\n@iamkulbhushansingh\n\n_如果您发现有遗漏的贡献者，请告知我们，可能是由于提交邮箱与 GitHub 账号不匹配所致 :]_\n\n---\n\n**完整变更日志**: https:\u002F\u002Fgithub.com\u002FLightning-AI\u002Ftorchmetrics\u002Fcompare\u002Fv1.8.1...v1.8.2","2025-09-03T13:54:56",{"id":166,"version":167,"summary_zh":168,"released_at":169},71877,"v1.8.1","## [1.8.1] - 2025-08-07\n\n### 变更\n\n- 向 `vif` 指标添加了 `reduction='none'` 参数 (#3196)\n- 为分割任务指标增加了浮点型输入支持 (#3198)\n\n### 修复\n\n- 修复了 `BinaryPrecisionRecallCurve` 中意外的 `sigmoid` 归一化问题 (#3182)\n\n---\n\n### 主要贡献者\n\n@iamkulbhushansingh、@PussyCat0700、@simonreise\n\n\n_如果由于提交邮箱与 GitHub 账号不匹配而遗漏了某位贡献者，请告知我们 :]_\n\n---\n\n**完整变更日志**: https:\u002F\u002Fgithub.com\u002FLightning-AI\u002Ftorchmetrics\u002Fcompare\u002Fv1.8.0...v1.8.1","2025-08-07T20:38:43",{"id":171,"version":172,"summary_zh":173,"released_at":174},71878,"v1.8.0","即将发布的 TorchMetrics v1.8.0 版本引入了三项旗舰指标，每项指标都旨在满足实际应用中的关键评估需求。\n\n视频多方法评估融合（VMAF）提供了一种与人类主观判断高度一致的感知视频质量评分，为 Netflix 和 YouTube 
等流媒体服务优化编码阶梯以确保一致的观看体验，并帮助视频修复实验室量化去噪和超分辨率算法所带来的改进效果。\n\n连续排名概率评分（CRPS）能够对完整的预测分布进行综合评估，而不仅仅是点估计。气象中心利用 CRPS 来评估概率性降水和气温预报的准确性，从而提升公众天气预警的质量；同时，能源公司也用它来衡量负荷需求预测中的不确定性，进而优化电网管理和交易策略。\n\n唇部顶点误差（LVE）用于测量预测唇部关键点与真实标注之间的差异，以量化视听同步程度。后期制作工作室使用 LVE 在电影配音过程中验证唇形同步的准确性，而 AR\u002FVR 开发者则将其集成到虚拟形象工作流中，以确保在实时虚拟会议和社交体验中口型动作的自然流畅。\n\n---\n\n## [1.8.0] - 2025-07-23\n\n### 新增\n\n- 在视频领域新增 `VMAF` 指标 (#2991)\n- 在回归领域新增 `CRPS` (#3024)\n- 为 `DiceScore` 添加 `aggregation_level` 参数 (#3018)\n- 为 `LearnedPerceptualImagePatchSimilarity` 增加对 `reduction=\"none\"` 的支持 (#3053)\n- 为 `bert_score` 的函数式接口增加对单个 `str` 输入的支持 (#3056)\n- 改进：使 `BERTScore` 能够基于多个参考文本评估假设句 (#3069)\n- 在多模态领域新增 `Lip Vertex Error (LVE)` (#3090)\n- 为 `FID` 指标添加 `antialias` 参数 (#3177)\n- 为分割类指标增加 `mixed` 输入格式 (#3176)\n\n### 变更\n\n- 将 `PSNR` 指标的 `data_range` 参数改为必填参数 (#3178)\n\n### 移除\n\n- 从 `DiceScore` 中移除 `zero_division` 参数 (#3018)\n\n---\n\n## 主要贡献者\n\n@nkaenzig、@rittik9、@simonreise、@SkafteNicki\n\n### 新晋贡献者\n\n* @lantiga 在 https:\u002F\u002Fgithub.com\u002FLightning-AI\u002Ftorchmetrics\u002Fpull\u002F3054 中完成了首次贡献\n* @AlexVerine 在 https:\u002F\u002Fgithub.com\u002FLightning-AI\u002Ftorchmetrics\u002Fpull\u002F3057 中完成了首次贡献\n* @ZhiyuanChen 在 https:\u002F\u002Fgithub.com\u002FLightning-AI\u002Ftorchmetrics\u002Fpull\u002F3059 中完成了首次贡献\n* @ahmedhshahin 在 https:\u002F\u002Fgithub.com\u002FLightning-AI\u002Ftorchmetrics\u002Fpull\u002F3101 中完成了首次贡献\n* @gratus907 在 https:\u002F\u002Fgithub.com\u002FLightning-AI\u002Ftorchmetrics\u002Fpull\u002F3103 中完成了首次贡献\n* @cyyever 在 https:\u002F\u002Fgithub.com\u002FLightning-AI\u002Ftorchmetrics\u002Fpull\u002F3118 中完成了首次贡献\n* @Armannas 在 https:\u002F\u002Fgithub.com\u002FLightning-AI\u002Ftorchmetrics\u002Fpull\u002F3124 中完成了首次贡献\n* @alifa98 在 https:\u002F\u002Fgithub.com","2025-07-23T17:33:16",{"id":176,"version":177,"summary_zh":178,"released_at":179},71879,"v1.7.4","## [1.7.4] - 2025-07-04\n\n### 变更\n\n- 改进了皮尔逊相关系数的数值稳定性 (#3152)\n\n### 修复\n\n- 修复：在检索指标中忽略零值和负值预测 (#3160)\n- 修复了分布式训练时 
`reduction=None` 情况下的 SSIM `dist_reduce_fx` (#3162, #3166)\n- 修复了属性错误 (#3154)\n- 修复了 `_pearson_corrcoef_update` 中的形状错误 (#3168)\n\n---\n\n### 主要贡献者\n\n@AymenKallala、@gratus907、@Isalia20、@rittik9\n\n\n_如果由于提交邮箱与 GitHub 账号不匹配而遗漏了某位贡献者，请告知我们 :]_\n\n---\n\n**完整变更日志**：https:\u002F\u002Fgithub.com\u002FLightning-AI\u002Ftorchmetrics\u002Fcompare\u002Fv1.7.3...v1.7.4","2025-07-05T12:22:52",{"id":181,"version":182,"summary_zh":183,"released_at":184},71880,"v1.7.3","## [1.7.3] - 2025-06-13\n\n### 修复\n\n- 修复：确保 `WrapperMetric` 重置 `wrapped_metric` 的状态 (#3123)\n- 修复了 `multiclass_accuracy` 中的 `top_k` 参数 (#3117)\n- 修复了与 `pycocotools` 2.0.10 版本的 COCO 格式兼容性问题 (#3131)\n---\n\n### 主要贡献者\n\n@rittik9\n\n_如果您发现有贡献者因提交邮箱与 GitHub 账号不匹配而被遗漏，请告知我们 :]_\n\n---\n\n**完整变更日志**：https:\u002F\u002Fgithub.com\u002FLightning-AI\u002Ftorchmetrics\u002Fcompare\u002Fv1.7.2...v1.7.3","2025-06-13T15:33:34",{"id":186,"version":187,"summary_zh":188,"released_at":189},71881,"v1.7.2","## [1.7.2] - 2025-05-27\n\n### 变更\n\n- 增强：提升 `_rank_data` 的性能 (#3103)\n\n### 修复\n\n- 修复了 `MatthewsCorrCoef` 中的 `UnboundLocalError` (#3059)\n- 修复了 MIFID 在使用自定义编码器时错误地将输入转换为 `byte` 数据类型的问题 (#3064)\n- 修复了 `MultilabelExactMatch` 中的 `ignore_index` 参数问题 (#3085)\n- 修复：在 MPS 上禁用非阻塞模式 (#3101)\n\n---\n\n### 主要贡献者\n\n@ahmedhshahin、@gratus907、@rittik9、@ZhiyuanChen\n\n_如果您发现有贡献者因提交邮箱与 GitHub 账号不匹配而被遗漏，请告知我们 :]_\n\n---\n\n**完整变更日志**：https:\u002F\u002Fgithub.com\u002FLightning-AI\u002Ftorchmetrics\u002Fcompare\u002Fv1.7.1...v1.7.2","2025-05-28T20:20:33",{"id":191,"version":192,"summary_zh":193,"released_at":194},71882,"v1.7.1","## [1.7.1] - 2025-04-06\n\n### 变更\n\n- 在 `add_metrics` 函数中，增强对将一个 `MetricCollection` 添加到另一个 `MetricCollection` 的支持 (#3032)\n\n### 修复\n\n- 修复了缺失的 `MeanIOU` 类 (#2892)\n- 修复了检测 IoU 忽略无真实标签的预测的问题 (#3025)\n- 修复了当 `top_k>1` 时 `MulticlassAccuracy` 抛出错误的问题 (#3039)\n\n---\n\n### 主要贡献者\n\n@Isalia20、@rittik9、@SkafteNicki\n\n_如果您发现有贡献者因提交邮箱与 GitHub 账号不匹配而被遗漏，请告知我们 :]_\n\n---\n\n**完整变更日志**: 
https:\u002F\u002Fgithub.com\u002FLightning-AI\u002Ftorchmetrics\u002Fcompare\u002Fv1.7.0...v1.7.1","2025-04-07T19:33:42",{"id":196,"version":197,"summary_zh":198,"released_at":199},71883,"v1.7.0","TorchMetrics 即将发布的版本将在多个领域带来一系列创新功能和增强特性，进一步巩固其作为机器学习指标领域领先工具的地位。在图像领域，新增的 ARNIQA 和 DeepImageStructureAndTextureSimilarity 指标为评估图像质量和相似性提供了全新视角。此外，CLIPScore 指标现已支持更多模型和处理器，从而提升了其在图像-文本对齐任务中的通用性。\n\n除了图像分析之外，回归包中新增了 JensenShannonDivergence 指标，为比较概率分布提供了一个强有力的工具。聚类包也迎来了显著更新，引入了 ClusterAccuracy 指标，有助于更有效地评估聚类算法的性能。\n\n在分类任务方面，新增了等错误率（EER）指标，为评估分类模型性能提供了一项关键度量，尤其适用于假阳性与假阴性代价不同的场景。同时，MeanAveragePrecision 指标现增加了函数式接口，进一步提升了其易用性和灵活性。\n\n这些更新共同增强了 TorchMetrics 的功能，使其成为机器学习从业者和研究人员更加全面且不可或缺的资源。\n\n## [1.7.0] - 2025-03-20\n\n### 新增\n\n- 图像领域新增：\n  - 添加 `ARNIQA` 指标 (#2953)\n  - 添加 `DeepImageStructureAndTextureSimilarity` (#2993)\n  - 在 `CLIPScore` 中增加对更多模型和处理器的支持 (#2978)\n- 向回归包中添加 `JensenShannonDivergence` 指标 (#2992)\n- 向聚类包中添加 `ClusterAccuracy` 指标 (#2777)\n- 向分类包中添加 `等错误率（EER）` (#3013)\n- 为 `MeanAveragePrecision` 指标添加函数式接口 (#3011)\n\n### 变更\n\n- 将 `MeanIoU` 中 `one-hot` 输入的 `num_classes` 参数设为可选 (#3012)\n\n### 移除\n\n- 从分类包中移除 `Dice` 指标 (#3017)\n\n### 修复\n\n- 修复了按类别封装器与指标跟踪器之间集成中的边缘情况 (#3008)\n- 修复了在使用 `top_k` 并仅有一个样本时 `MultiClassAccuracy` 中可能出现的 `IndexError` (#3021)\n\n---\n\n### 主要贡献者\n\n@Isalia20、@LorenzoAgnolucci、@nathanpainchaud、@rittik9、@SkafteNicki\n\n_若因提交邮箱与 GitHub 账号不匹配而遗漏了某位贡献者，请告知我们 :]_\n\n---\n\n**完整变更日志**：https:\u002F\u002Fgithub.com\u002FLightning-AI\u002Ftorchmetrics\u002Fcompare\u002Fv1.6.0...v1.7.0","2025-03-20T19:05:47",{"id":201,"version":202,"summary_zh":203,"released_at":204},71884,"v1.6.3","## [1.6.3] - 2025-03-13\n\n### 修复\n\n- 修复了 `MetricCollection` 中处理指标状态引用的逻辑 (#2990)\n- 修复了按类别封装器与指标跟踪器之间的集成问题 (#3004)\n\n---\n\n### 主要贡献者\n\n@SkafteNicki\n\n_如果您发现因提交邮箱与 GitHub 账号不匹配而遗漏了某位贡献者，请告知我们 :]_\n\n---\n\n**完整变更日志**: 
https:\u002F\u002Fgithub.com\u002FLightning-AI\u002Ftorchmetrics\u002Fcompare\u002Fv1.6.2...v1.6.3","2025-03-14T06:57:14",{"id":206,"version":207,"summary_zh":208,"released_at":209},71885,"v1.6.2","## [1.6.2] - 2025-02-28\r\n\r\n### Added\r\n\r\n- Added `zero_division` argument to `DiceScore` in segmentation package (#2860)\r\n- Added `cache_session` to `DNSMOS` metric to control caching behavior (#2974)\r\n- Added `disable` option to `nan_strategy` in basic aggregation metrics (#2943)\r\n\r\n### Changed\r\n\r\n- Make `num_classes` optional for classification in case of micro averaging (#2841)\r\n- Enhance `Clip_Score` to calculate similarities between same modalities (#2875)\r\n\r\n### Fixed\r\n\r\n- Fixed `DiceScore` when there is zero overlap between predictions and targets (#2860)\r\n- Fixed `MeanAveragePrecision` for `average=\"micro\"` when 0 label is not present (#2968)\r\n- Fixed corner-case in `PearsonCorrCoef` when input is constant (#2975)\r\n- Fixed `MetricCollection.update` gives identical results (#2944)\r\n- Fixed missing `kwargs` in `PIT` metric for permutation wise mode (#2977)\r\n- Fixed multiple errors in the `_final_aggregation` function for `PearsonCorrCoef` (#2980)\r\n- Fixed incorrect CLIP-IQA type hints (#2952)\r\n\r\n---\r\n\r\n### Key Contributors\r\n\r\n@baskrahmer, @czmrand, @rbedyakin, @rittik9, @SkafteNicki, @wooseopkim\r\n\r\n_If we forgot someone due to not matching commit email with GitHub account, let us know :]_\r\n\r\n---\r\n\r\n**Full Changelog**: https:\u002F\u002Fgithub.com\u002FLightning-AI\u002Ftorchmetrics\u002Fcompare\u002Fv1.6.1...v1.6.2","2025-03-03T11:25:27",{"id":211,"version":212,"summary_zh":213,"released_at":214},71886,"v1.6.1","## [1.6.1] - 2024-12-25\r\n\r\n### Changed\r\n\r\n- Enabled specifying weights path for FID (#2867)\r\n- Delete `Device2Host` caused by comm with device and host (#2840)\r\n\r\n### Fixed\r\n\r\n- Fixed plotting of multilabel confusion matrix (#2858)\r\n- Fixed issue with shared state in 
metric collection when using dice score (#2848)\r\n- Fixed `top_k` for `multiclassf1score` with one-hot encoding (#2839)\r\n- Fixed slow calculations of classification metrics with MPS (#2876)\r\n\r\n---\r\n\r\n### Key Contributors\r\n\r\n@Isalia20, @nkaenzig, @podgorki, @rittik9, @yuvalkirstain, @zhaozheng09\r\n\r\n_If we forgot someone due to not matching commit email with GitHub account, let us know :]_\r\n\r\n---\r\n\r\n**Full Changelog**: https:\u002F\u002Fgithub.com\u002FLightning-AI\u002Ftorchmetrics\u002Fcompare\u002Fv1.6.0...v1.6.1","2024-12-25T23:50:39",{"id":216,"version":217,"summary_zh":218,"released_at":219},71887,"v1.6.0","The latest release of TorchMetrics introduces several significant enhancements and new features that will greatly benefit users across various domains. This update includes the addition of new metrics and methods that enhance the library's functionality and usability.\r\n\r\nOne of the key additions is the `NISQA` audio metric, which provides advanced capabilities for evaluating audio quality. In the classification domain, the new `LogAUC` and `NegativePredictiveValue` metrics offer improved tools for assessing model performance, particularly in imbalanced datasets. For regression tasks, the `NormalizedRootMeanSquaredError` metric has been introduced, providing a normalized measure of prediction accuracy that is less sensitive to outliers.\r\n\r\nIn the field of image segmentation, the new `Dice` metric enhances the evaluation of segmentation models by providing a robust measure of overlap between predicted and ground truth masks. Additionally, the `merge_state` method has been added to the `Metric` class, allowing for more efficient state management and aggregation across multiple devices or processes.\r\n\r\nFurthermore, this release includes support for the propagation of the autograd graph in Distributed Data-Parallel (DDP) settings, enabling more efficient and scalable training of models across multiple GPUs. 
These enhancements collectively make TorchMetrics a more powerful and versatile tool for machine learning practitioners, enabling more accurate and efficient model evaluation across a wide range of applications.\r\n\r\n## [1.6.0] - 2024-11-12\r\n\r\n### Added\r\n\r\n- Added audio metric `NISQA` (#2792)\r\n- Added classification metric `LogAUC` (#2377)\r\n- Added classification metric `NegativePredictiveValue` (#2433)\r\n- Added regression metric `NormalizedRootMeanSquaredError` (#2442)\r\n- Added segmentation metric `Dice` (#2725)\r\n- Added method `merge_state` to `Metric` (#2786)\r\n- Added support for propagation of the autograd graph in DDP setting (#2754)\r\n\r\n### Changed\r\n\r\n- Changed naming and input order arguments in `KLDivergence` (#2800)\r\n\r\n### Deprecated\r\n\r\n- Deprecated Dice from classification metrics (#2725)\r\n\r\n### Removed\r\n\r\n- Changed minimum supported Pytorch version to 2.0 (#2671)\r\n- Dropped support for Python 3.8 (#2827)\r\n- Removed `num_outputs` in `R2Score` (#2800)\r\n\r\n### Fixed\r\n\r\n- Fixed segmentation `Dice` + `GeneralizedDice` for 2d index tensors (#2832)\r\n- Fixed mixed results of `rouge_score` with `accumulate='best'` (#2830)\r\n\r\n---\r\n### Key Contributors\r\n\r\n@Borda, @cw-tan, @philgzl, @rittik9, @SkafteNicki\r\n\r\n## New Contributors since `1.5.0`\r\n* @bfolie made their first contribution in https:\u002F\u002Fgithub.com\u002FLightning-AI\u002Ftorchmetrics\u002Fpull\u002F2793\r\n* @StalkerShurik made their first contribution in https:\u002F\u002Fgithub.com\u002FLightning-AI\u002Ftorchmetrics\u002Fpull\u002F2811\r\n* @philgzl made their first contribution in https:\u002F\u002Fgithub.com\u002FLightning-AI\u002Ftorchmetrics\u002Fpull\u002F2792\r\n* @cw-tan made their first contribution in https:\u002F\u002Fgithub.com\u002FLightning-AI\u002Ftorchmetrics\u002Fpull\u002F2754\r\n\r\n**Full Changelog**: 
https:\u002F\u002Fgithub.com\u002FLightning-AI\u002Ftorchmetrics\u002Fcompare\u002Fv1.5.0...v1.6.0","2024-11-12T19:29:02",{"id":221,"version":222,"summary_zh":223,"released_at":224},71888,"v1.5.2","## [1.5.2] - 2024-11-07\r\n\r\n### Changed\r\n\r\n- Re-adding `numpy` 2+ support (#2804)\r\n\r\n### Fixed\r\n\r\n- Fixed iou scores in detection for either empty predictions\u002Ftargets leading to wrong scores (#2805)\r\n- Fixed `MetricCollection` compatibility with `torch.jit.script` (#2813)\r\n- Fixed assert in PIT (#2811)\r\n- Patched `np.Inf` for `numpy` 2.0+ (#2826)\r\n\r\n---\r\n\r\n### Key Contributors\r\n\r\n@adamjstewart, @Borda, @SkafteNicki, @StalkerShurik, @yurithefury\r\n\r\n_If we forgot someone due to not matching commit email with GitHub account, let us know :]_\r\n\r\n---\r\n\r\n**Full Changelog**: https:\u002F\u002Fgithub.com\u002FLightning-AI\u002Ftorchmetrics\u002Fcompare\u002Fv1.5.1...v1.5.2","2024-11-08T10:35:25",{"id":226,"version":227,"summary_zh":228,"released_at":229},71889,"v1.5.1","## [1.5.1] - 2024-10-22\r\n\r\n### Fixed\r\n\r\n- Changing `_modules` dict type in Pytorch 2.5 preventing to fail collections metrics (#2793)\r\n\r\n---\r\n\r\n### Key Contributors\r\n\r\n@bfolie\r\n\r\n_If we forgot someone due to not matching commit email with GitHub account, let us know :]_\r\n\r\n---\r\n\r\n**Full Changelog**: https:\u002F\u002Fgithub.com\u002FLightning-AI\u002Ftorchmetrics\u002Fcompare\u002Fv1.5.0...v1.5.1","2024-10-23T07:05:32",{"id":231,"version":232,"summary_zh":233,"released_at":234},71890,"v1.5.0","Shape metrics are quantitative methods used to assess and compare the geometric properties of objects, often in datasets that represent shapes. One such metric is the Procrustes Disparity, which measures the sum of the squared differences between two datasets after applying a Procrustes transformation. This transformation involves scaling, rotating, and translating the datasets to achieve optimal alignment. 
The Procrustes Disparity is particularly useful when comparing datasets that are similar in structure but not perfectly aligned, allowing for more meaningful comparison by minimizing differences due to orientation or size.\r\n\r\n## [1.5.0] - 2024-10-18\r\n\r\n### Added\r\n\r\n- Added segmentation metric `HausdorffDistance` (#2122)\r\n- Added audio metric `DNSMOS` (#2525)\r\n- Added shape metric `ProcrustesDistance` (#2723)\r\n- Added `MetricInputTransformer` wrapper (#2392)\r\n- Added `input_format` argument to segmentation metrics (#2572)\r\n- Added `multi-output` support for MAE metric (#2605)\r\n- Added `truncation` argument to `BERTScore` (#2776)\r\n\r\n### Changed\r\n\r\n- Tracker higher is better integration (#2649)\r\n- Updated `InfoLM` class to dynamically set `higher_is_better` (#2674)\r\n\r\n### Deprecated\r\n\r\n- Deprecated `num_outputs` in `R2Score` (#2705)\r\n\r\n### Fixed\r\n\r\n- Fixed corner case in `IoU` metric for single empty prediction tensors (#2780)\r\n- Fixed `PSNR` calculation for integer type input images (#2788)\r\n\r\n---\r\n\r\n### Key Contributors\r\n\r\n@Astraightrain, @grahamannett, @lgienapp, @matsumotosan, @quancs, @SkafteNicki\r\n\r\n### New Contributors since `1.4.0`\r\n* @kalekundert made their first contribution in https:\u002F\u002Fgithub.com\u002FLightning-AI\u002Ftorchmetrics\u002Fpull\u002F2543\r\n* @lgienapp made their first contribution in https:\u002F\u002Fgithub.com\u002FLightning-AI\u002Ftorchmetrics\u002Fpull\u002F2392\r\n* @sweber1 made their first contribution in https:\u002F\u002Fgithub.com\u002FLightning-AI\u002Ftorchmetrics\u002Fpull\u002F2634\r\n* @gxy-gxy made their first contribution in https:\u002F\u002Fgithub.com\u002FLightning-AI\u002Ftorchmetrics\u002Fpull\u002F2347\r\n* @Astraightrain made their first contribution in https:\u002F\u002Fgithub.com\u002FLightning-AI\u002Ftorchmetrics\u002Fpull\u002F2605\r\n* @ndrwrbgs made their first contribution in 
https:\u002F\u002Fgithub.com\u002FLightning-AI\u002Ftorchmetrics\u002Fpull\u002F2640\r\n* @grahamannett made their first contribution in https:\u002F\u002Fgithub.com\u002FLightning-AI\u002Ftorchmetrics\u002Fpull\u002F2674\r\n* @petertheprocess made their first contribution in https:\u002F\u002Fgithub.com\u002FLightning-AI\u002Ftorchmetrics\u002Fpull\u002F2721\r\n* @rittik9 made their first contribution in https:\u002F\u002Fgithub.com\u002FLightning-AI\u002Ftorchmetrics\u002Fpull\u002F2726\r\n* @vkinakh made their first contribution in https:\u002F\u002Fgithub.com\u002FLightning-AI\u002Ftorchmetrics\u002Fpull\u002F2698\r\n* @likawind made their first contribution in https:\u002F\u002Fgithub.com\u002FLightning-AI\u002Ftorchmetrics\u002Fpull\u002F2732\r\n* @veera-puthiran-14082 made their first contribution in https:\u002F\u002Fgithub.com\u002FLightning-AI\u002Ftorchmetrics\u002Fpull\u002F2753\r\n* @GPPassos made their first contribution in https:\u002F\u002Fgithub.com\u002FLightning-AI\u002Ftorchmetrics\u002Fpull\u002F2727\r\n\r\n**Full Changelog**: https:\u002F\u002Fgithub.com\u002FLightning-AI\u002Ftorchmetrics\u002Fcompare\u002Fv1.4.0...v1.5.0","2024-10-18T19:35:30",{"id":236,"version":237,"summary_zh":238,"released_at":239},71891,"v1.4.3","## [1.4.3] - 2024-10-10\r\n\r\n### Fixed\r\n\r\n- Fixed for Pearson changes inputs (#2765)\r\n- Fixed bug in `PESQ` metric where `NoUtterancesError` prevented calculating on a batch of data (#2753)\r\n- Fixed corner case in `MatthewsCorrCoef` (#2743)\r\n\r\n---\r\n\r\n### Key Contributors\r\n\r\n@Borda, @SkafteNicki, @veera-puthiran-14082\r\n\r\n_If we forgot someone due to not matching commit email with GitHub account, let us know :]_\r\n\r\n---\r\n\r\n**Full Changelog**: https:\u002F\u002Fgithub.com\u002FLightning-AI\u002Ftorchmetrics\u002Fcompare\u002Fv1.4.2...v1.4.3","2024-10-10T12:04:35",{"id":241,"version":242,"summary_zh":243,"released_at":244},71892,"v1.4.2","## [1.4.2] - 2024-09-12\r\n\r\n### Added\r\n\r\n- Re-adding 
`Chrf` implementation (#2701)\r\n\r\n### Fixed\r\n\r\n- Fixed wrong aggregation in `segmentation.MeanIoU` (#2698)\r\n- Fixed handling zero division error in binary IoU (Jaccard index) calculation (#2726)\r\n- Corrected the padding related calculation errors in SSIM (#2721)\r\n- Fixed compatibility of audio domain with new `scipy` (#2733)\r\n- Fixed how `prefix`\u002F`postfix` works in `MultitaskWrapper` (#2722)\r\n- Fixed flakiness in tests related to `torch.unique` with `dim=None` (#2650)\r\n\r\n---\r\n\r\n### Key Contributors\r\n\r\n@Borda, @petertheprocess, @rittik9, @SkafteNicki, @vkinakh\r\n\r\n_If we forgot someone due to not matching commit email with GitHub account, let us know :]_\r\n\r\n---\r\n\r\n**Full Changelog**: https:\u002F\u002Fgithub.com\u002FLightning-AI\u002Ftorchmetrics\u002Fcompare\u002Fv1.4.1...v1.4.2","2024-09-13T20:01:21",{"id":246,"version":247,"summary_zh":248,"released_at":249},71893,"v1.4.1","## [1.4.1] - 2024-08-02\r\n\r\n### Changed\r\n\r\n- Calculate the text color of `ConfusionMatrix` plot based on luminance (#2590)\r\n- Updated `_safe_divide` to allow `Accuracy` to run on the GPU (#2640)\r\n- Improved better error messages for intersection detection metrics for wrong user input (#2577)\r\n\r\n### Removed\r\n\r\n- Dropped `Chrf` implementation due to licensing issues with the upstream package (#2668)\r\n\r\n### Fixed\r\n\r\n- Fixed bug in `MetricCollection` when using compute groups and `compute` is called more than once (#2571)\r\n- Fixed class order of `panoptic_quality(..., return_per_class=True)` output (#2548)\r\n- Fixed `BootstrapWrapper` not being reset correctly (#2574)\r\n- Fixed integration between `ClasswiseWrapper` and `MetricCollection` with custom `_filter_kwargs` method (#2575)\r\n- Fixed BertScore calculation: pred target misalignment (#2347)\r\n- Fixed `_cumsum` helper function in multi-gpu (#2636)\r\n- Fixed bug in `MeanAveragePrecision.coco_to_tm` (#2588)\r\n- Fixed missed f-strings in exceptions\u002Fwarnings 
(#2667)\r\n\r\n---\r\n\r\n### Key Contributors\r\n\r\n@Borda, @gxy-gxy, @i-aki-y, @ndrwrbgs, @relativityhd, @SkafteNicki\r\n\r\n_If we forgot someone due to not matching commit email with GitHub account, let us know :]_\r\n\r\n---\r\n\r\n**Full Changelog**: https:\u002F\u002Fgithub.com\u002FLightning-AI\u002Ftorchmetrics\u002Fcompare\u002Fv1.4.0...v1.4.1","2024-08-03T11:32:06",{"id":251,"version":252,"summary_zh":253,"released_at":254},71894,"v1.4.0.post0","**Full Changelog**: https:\u002F\u002Fgithub.com\u002FLightning-AI\u002Ftorchmetrics\u002Fcompare\u002Fv1.4.0...v1.4.0.post0","2024-05-15T11:24:52"]