[{"data":1,"prerenderedAt":-1},["ShallowReactive",2],{"similar-frgfm--torch-cam":3,"tool-frgfm--torch-cam":61},[4,18,26,36,44,53],{"id":5,"name":6,"github_repo":7,"description_zh":8,"stars":9,"difficulty_score":10,"last_commit_at":11,"category_tags":12,"status":17},4358,"openclaw","openclaw\u002Fopenclaw","OpenClaw 是一款专为个人打造的本地化 AI 助手，旨在让你在自己的设备上拥有完全可控的智能伙伴。它打破了传统 AI 助手局限于特定网页或应用的束缚，能够直接接入你日常使用的各类通讯渠道，包括微信、WhatsApp、Telegram、Discord、iMessage 等数十种平台。无论你在哪个聊天软件中发送消息，OpenClaw 都能即时响应，甚至支持在 macOS、iOS 和 Android 设备上进行语音交互，并提供实时的画布渲染功能供你操控。\n\n这款工具主要解决了用户对数据隐私、响应速度以及“始终在线”体验的需求。通过将 AI 部署在本地，用户无需依赖云端服务即可享受快速、私密的智能辅助，真正实现了“你的数据，你做主”。其独特的技术亮点在于强大的网关架构，将控制平面与核心助手分离，确保跨平台通信的流畅性与扩展性。\n\nOpenClaw 非常适合希望构建个性化工作流的技术爱好者、开发者，以及注重隐私保护且不愿被单一生态绑定的普通用户。只要具备基础的终端操作能力（支持 macOS、Linux 及 Windows WSL2），即可通过简单的命令行引导完成部署。如果你渴望拥有一个懂你",349277,3,"2026-04-06T06:32:30",[13,14,15,16],"Agent","开发框架","图像","数据工具","ready",{"id":19,"name":20,"github_repo":21,"description_zh":22,"stars":23,"difficulty_score":10,"last_commit_at":24,"category_tags":25,"status":17},3808,"stable-diffusion-webui","AUTOMATIC1111\u002Fstable-diffusion-webui","stable-diffusion-webui 是一个基于 Gradio 构建的网页版操作界面，旨在让用户能够轻松地在本地运行和使用强大的 Stable Diffusion 图像生成模型。它解决了原始模型依赖命令行、操作门槛高且功能分散的痛点，将复杂的 AI 绘图流程整合进一个直观易用的图形化平台。\n\n无论是希望快速上手的普通创作者、需要精细控制画面细节的设计师，还是想要深入探索模型潜力的开发者与研究人员，都能从中获益。其核心亮点在于极高的功能丰富度：不仅支持文生图、图生图、局部重绘（Inpainting）和外绘（Outpainting）等基础模式，还独创了注意力机制调整、提示词矩阵、负向提示词以及“高清修复”等高级功能。此外，它内置了 GFPGAN 和 CodeFormer 等人脸修复工具，支持多种神经网络放大算法，并允许用户通过插件系统无限扩展能力。即使是显存有限的设备，stable-diffusion-webui 也提供了相应的优化选项，让高质量的 AI 艺术创作变得触手可及。",162132,"2026-04-05T11:01:52",[14,15,13],{"id":27,"name":28,"github_repo":29,"description_zh":30,"stars":31,"difficulty_score":32,"last_commit_at":33,"category_tags":34,"status":17},1381,"everything-claude-code","affaan-m\u002Feverything-claude-code","everything-claude-code 是一套专为 AI 编程助手（如 Claude Code、Codex、Cursor 等）打造的高性能优化系统。它不仅仅是一组配置文件，而是一个经过长期实战打磨的完整框架，旨在解决 AI 
代理在实际开发中面临的效率低下、记忆丢失、安全隐患及缺乏持续学习能力等核心痛点。\n\n通过引入技能模块化、直觉增强、记忆持久化机制以及内置的安全扫描功能，everything-claude-code 能显著提升 AI 在复杂任务中的表现，帮助开发者构建更稳定、更智能的生产级 AI 代理。其独特的“研究优先”开发理念和针对 Token 消耗的优化策略，使得模型响应更快、成本更低，同时有效防御潜在的攻击向量。\n\n这套工具特别适合软件开发者、AI 研究人员以及希望深度定制 AI 工作流的技术团队使用。无论您是在构建大型代码库，还是需要 AI 协助进行安全审计与自动化测试，everything-claude-code 都能提供强大的底层支持。作为一个曾荣获 Anthropic 黑客大奖的开源项目，它融合了多语言支持与丰富的实战钩子（hooks），让 AI 真正成长为懂上",150037,2,"2026-04-10T23:33:47",[14,13,35],"语言模型",{"id":37,"name":38,"github_repo":39,"description_zh":40,"stars":41,"difficulty_score":32,"last_commit_at":42,"category_tags":43,"status":17},2271,"ComfyUI","Comfy-Org\u002FComfyUI","ComfyUI 是一款功能强大且高度模块化的视觉 AI 引擎，专为设计和执行复杂的 Stable Diffusion 图像生成流程而打造。它摒弃了传统的代码编写模式，采用直观的节点式流程图界面，让用户通过连接不同的功能模块即可构建个性化的生成管线。\n\n这一设计巧妙解决了高级 AI 绘图工作流配置复杂、灵活性不足的痛点。用户无需具备编程背景，也能自由组合模型、调整参数并实时预览效果，轻松实现从基础文生图到多步骤高清修复等各类复杂任务。ComfyUI 拥有极佳的兼容性，不仅支持 Windows、macOS 和 Linux 全平台，还广泛适配 NVIDIA、AMD、Intel 及苹果 Silicon 等多种硬件架构，并率先支持 SDXL、Flux、SD3 等前沿模型。\n\n无论是希望深入探索算法潜力的研究人员和开发者，还是追求极致创作自由度的设计师与资深 AI 绘画爱好者，ComfyUI 都能提供强大的支持。其独特的模块化架构允许社区不断扩展新功能，使其成为当前最灵活、生态最丰富的开源扩散模型工具之一，帮助用户将创意高效转化为现实。",108322,"2026-04-10T11:39:34",[14,15,13],{"id":45,"name":46,"github_repo":47,"description_zh":48,"stars":49,"difficulty_score":32,"last_commit_at":50,"category_tags":51,"status":17},6121,"gemini-cli","google-gemini\u002Fgemini-cli","gemini-cli 是一款由谷歌推出的开源 AI 命令行工具，它将强大的 Gemini 大模型能力直接集成到用户的终端环境中。对于习惯在命令行工作的开发者而言，它提供了一条从输入提示词到获取模型响应的最短路径，无需切换窗口即可享受智能辅助。\n\n这款工具主要解决了开发过程中频繁上下文切换的痛点，让用户能在熟悉的终端界面内直接完成代码理解、生成、调试以及自动化运维任务。无论是查询大型代码库、根据草图生成应用，还是执行复杂的 Git 操作，gemini-cli 都能通过自然语言指令高效处理。\n\n它特别适合广大软件工程师、DevOps 人员及技术研究人员使用。其核心亮点包括支持高达 100 万 token 的超长上下文窗口，具备出色的逻辑推理能力；内置 Google 搜索、文件操作及 Shell 命令执行等实用工具；更独特的是，它支持 MCP（模型上下文协议），允许用户灵活扩展自定义集成，连接如图像生成等外部能力。此外，个人谷歌账号即可享受免费的额度支持，且项目基于 Apache 2.0 
协议完全开源，是提升终端工作效率的理想助手。",100752,"2026-04-10T01:20:03",[52,13,15,14],"插件",{"id":54,"name":55,"github_repo":56,"description_zh":57,"stars":58,"difficulty_score":32,"last_commit_at":59,"category_tags":60,"status":17},4721,"markitdown","microsoft\u002Fmarkitdown","MarkItDown 是一款由微软 AutoGen 团队打造的轻量级 Python 工具，专为将各类文件高效转换为 Markdown 格式而设计。它支持 PDF、Word、Excel、PPT、图片（含 OCR）、音频（含语音转录）、HTML 乃至 YouTube 链接等多种格式的解析，能够精准提取文档中的标题、列表、表格和链接等关键结构信息。\n\n在人工智能应用日益普及的今天，大语言模型（LLM）虽擅长处理文本，却难以直接读取复杂的二进制办公文档。MarkItDown 恰好解决了这一痛点，它将非结构化或半结构化的文件转化为模型“原生理解”且 Token 效率极高的 Markdown 格式，成为连接本地文件与 AI 分析 pipeline 的理想桥梁。此外，它还提供了 MCP（模型上下文协议）服务器，可无缝集成到 Claude Desktop 等 LLM 应用中。\n\n这款工具特别适合开发者、数据科学家及 AI 研究人员使用，尤其是那些需要构建文档检索增强生成（RAG）系统、进行批量文本分析或希望让 AI 助手直接“阅读”本地文件的用户。虽然生成的内容也具备一定可读性，但其核心优势在于为机器",93400,"2026-04-06T19:52:38",[52,14],{"id":62,"github_repo":63,"name":64,"description_en":65,"description_zh":66,"ai_summary_zh":66,"readme_en":67,"readme_zh":68,"quickstart_zh":69,"use_case_zh":70,"hero_image_url":71,"owner_login":72,"owner_name":73,"owner_avatar_url":74,"owner_bio":75,"owner_company":76,"owner_location":77,"owner_email":78,"owner_twitter":79,"owner_website":80,"owner_url":81,"languages":82,"stars":91,"forks":92,"last_commit_at":93,"license":94,"difficulty_score":95,"env_os":96,"env_gpu":96,"env_ram":96,"env_deps":97,"category_tags":105,"github_topics":106,"view_count":121,"oss_zip_url":78,"oss_zip_packed_at":78,"status":17,"created_at":122,"updated_at":123,"faqs":124,"releases":153},615,"frgfm\u002Ftorch-cam","torch-cam","Class activation maps for your PyTorch models (CAM, Grad-CAM, Grad-CAM++, Smooth Grad-CAM++, Score-CAM, SS-CAM, IS-CAM, XGrad-CAM, Layer-CAM)","TorchCAM 是一款专为 PyTorch 深度学习框架打造的类激活图（CAM）生成库。它能够帮助用户直观地可视化卷积神经网络在进行图像分类时，究竟聚焦于图像的哪些区域。对于常被诟病为“黑盒”的深度学习模型，TorchCAM 通过热力图形式揭示了模型的决策依据，有效解决了模型可解释性不足的问题。\n\nTorchCAM 非常适合计算机视觉领域的开发工程师、AI 研究人员以及需要调试模型行为的团队使用。它内置了丰富的算法支持，涵盖经典的 Grad-CAM、Layer-CAM 以及最新的 Smooth Grad-CAM++ 等多种变体。技术亮点在于其基于 PyTorch Hook 
机制的设计，只需将模型包裹其中，无需修改原有网络结构或额外编写复杂的反向传播代码，即可无缝提取所需的激活信息。无论是快速验证模型注意力分布，还是深入分析预测逻辑，TorchCAM 都能提供简洁高效的解决方案，让模型内部运作更加透明可信。","\u003Ch1 align=\"center\">\n  TorchCAM: class activation explorer\n\u003C\u002Fh1>\n\n\u003Cp align=\"center\">\n  \u003Ca href=\"https:\u002F\u002Fgithub.com\u002Ffrgfm\u002Ftorch-cam\u002Factions\u002Fworkflows\u002Fpackage.yml\">\n    \u003Cimg alt=\"CI Status\" src=\"https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Factions\u002Fworkflow\u002Fstatus\u002Ffrgfm\u002Ftorch-cam\u002Fpackage.yml?branch=main&label=CI&logo=github&style=flat-square\">\n  \u003C\u002Fa>\n  \u003Ca href=\"https:\u002F\u002Fgithub.com\u002Fastral-sh\u002Fruff\">\n    \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FLinter-Ruff-FCC21B?style=flat-square&logo=ruff&logoColor=white\" alt=\"ruff\">\n  \u003C\u002Fa>\n  \u003Ca href=\"https:\u002F\u002Fgithub.com\u002Fastral-sh\u002Fty\">\n    \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FTypecheck-Ty-261230?style=flat-square&logo=astral&logoColor=white\" alt=\"ty\">\n  \u003C\u002Fa>\n  \u003Ca href=\"https:\u002F\u002Fwww.codacy.com\u002Fgh\u002Ffrgfm\u002Ftorch-cam\u002Fdashboard?utm_source=github.com&amp;utm_medium=referral&amp;utm_content=frgfm\u002Ftorch-cam&amp;utm_campaign=Badge_Grade\">\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Ffrgfm_torch-cam_readme_adaaa9495e23.png\"\u002F>\u003C\u002Fa>\n  \u003Ca href=\"https:\u002F\u002Fcodecov.io\u002Fgh\u002Ffrgfm\u002Ftorch-cam\">\n    \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fcodecov\u002Fc\u002Fgithub\u002Ffrgfm\u002Ftorch-cam.svg?logo=codecov&style=flat-square&label=Coverage\" alt=\"Test coverage percentage\">\n  \u003C\u002Fa>\n\u003C\u002Fp>\n\u003Cp align=\"center\">\n  \u003Ca href=\"https:\u002F\u002Fpypi.org\u002Fproject\u002Ftorchcam\u002F\">\n    \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fpypi\u002Fv\u002Ftorchcam.svg?logo=PyPI&logoColor=fff&style=flat-square&label=PyPI\" 
alt=\"PyPi Version\">\n  \u003C\u002Fa>\n  \u003Cimg alt=\"GitHub release (latest by date)\" src=\"https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Fv\u002Frelease\u002Ffrgfm\u002Ftorch-cam?label=Release&logo=github\">\n  \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fpypi\u002Fpyversions\u002Ftorchcam.svg?logo=Python&label=Python&logoColor=fff&style=flat-square\" alt=\"pyversions\">\n  \u003Ca href=\"https:\u002F\u002Fgithub.com\u002Ffrgfm\u002Ftorch-cam\u002Fblob\u002Fmain\u002FLICENSE\">\n    \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Flicense\u002Ffrgfm\u002Ftorch-cam.svg?label=License&logoColor=fff&style=flat-square\" alt=\"License\">\n  \u003C\u002Fa>\n\u003C\u002Fp>\n\u003Cp align=\"center\">\n  \u003Ca href=\"https:\u002F\u002Fhuggingface.co\u002Fspaces\u002Ffrgfm\u002Ftorch-cam\">\n    \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F%F0%9F%A4%97%20Hugging%20Face-Spaces-blue\" alt=\"Huggingface Spaces\">\n  \u003C\u002Fa>\n  \u003Ca href=\"https:\u002F\u002Fcolab.research.google.com\u002Fgithub\u002Ffrgfm\u002Fnotebooks\u002Fblob\u002Fmain\u002Ftorch-cam\u002Fquicktour.ipynb\">\n    \u003Cimg src=\"https:\u002F\u002Fcolab.research.google.com\u002Fassets\u002Fcolab-badge.svg\" alt=\"Open in Colab\">\n  \u003C\u002Fa>\n\u003C\u002Fp>\n\u003Cp align=\"center\">\n  \u003Ca href=\"https:\u002F\u002Ffrgfm.github.io\u002Ftorch-cam\">\n    \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Factions\u002Fworkflow\u002Fstatus\u002Ffrgfm\u002Ftorch-cam\u002Fpage-build.yml?branch=main&label=Documentation&logo=read-the-docs&logoColor=white&style=flat-square\" alt=\"Documentation Status\">\n  \u003C\u002Fa>\n\u003C\u002Fp>\n\nSimple way to leverage the class-specific activation of convolutional layers in PyTorch.\n\n\u003Cp align=\"center\">\n    \u003Ca alt=\"cam_examples\">\n        \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Ffrgfm_torch-cam_readme_15c178189ca7.png\" 
\u002F>\u003C\u002Fa>\n\u003C\u002Fp>\n\u003Cp align=\"center\">\n    \u003Cem>Source: image from \u003Ca href=\"https:\u002F\u002Fwww.woopets.fr\u002Fassets\u002Fraces\u002F000\u002F066\u002Fbig-portrait\u002Fborder-collie.jpg\">woopets\u003C\u002Fa> (activation maps created with a pretrained \u003Ca href=\"https:\u002F\u002Fpytorch.org\u002Fvision\u002Fstable\u002Fmodels.html#torchvision.models.resnet18\">Resnet-18\u003C\u002Fa>)\u003C\u002Fem>\n\u003C\u002Fp>\n\n\n## Quick Tour\n\n### Setting your CAM\n\nTorchCAM leverages [PyTorch hooking mechanisms](https:\u002F\u002Fpytorch.org\u002Ftutorials\u002Fbeginner\u002Fformer_torchies\u002Fnnft_tutorial.html#forward-and-backward-function-hooks) to seamlessly retrieve all required information to produce the class activation without additional efforts from the user. Each CAM object acts as a wrapper around your model.\n\nYou can find the exhaustive list of supported CAM methods in the [documentation](https:\u002F\u002Ffrgfm.github.io\u002Ftorch-cam\u002Fmethods.html), then use it as follows:\n\n```python\nfrom torchvision.models import get_model, get_model_weights\nfrom torchcam.methods import LayerCAM\n\n# Define your model\nmodel = get_model(\"resnet18\", weights=get_model_weights(\"resnet18\").DEFAULT).eval()\n# Set your CAM extractor\ncam_extractor = LayerCAM(model)\n```\n\n*Please note that by default, the layer at which the CAM is retrieved is set to the last non-reduced convolutional layer. If you wish to investigate a specific layer, use the `target_layer` argument in the constructor.*\n\n### Retrieving the class activation map\n\nOnce your CAM extractor is set, you only need to use your model to infer on your data as usual. 
If any additional information is required, the extractor will get it for you automatically.\n\n```python\nfrom torchvision.io import decode_image\nfrom torchvision.models import get_model, get_model_weights\nfrom torchcam.methods import LayerCAM\n\n# Get a model and an image\nweights = get_model_weights(\"resnet18\").DEFAULT\nmodel = get_model(\"resnet18\", weights=weights).eval()\npreprocess = weights.transforms()\nimg = decode_image(\"path\u002Fto\u002Fyour\u002Fimage.jpg\")\n\ninput_tensor = preprocess(img)\n\nwith LayerCAM(model) as cam_extractor:\n  out = model(input_tensor.unsqueeze(0))\n  # Retrieve the CAM by passing the class index and the model output\n  activation_map = cam_extractor(out.squeeze(0).argmax().item(), out)\n```\n\nIf you want to visualize your heatmap, you only need to cast the CAM to a numpy ndarray:\n\n```python\nimport matplotlib.pyplot as plt\n# Visualize the raw CAM\nplt.imshow(activation_map[0].squeeze(0).numpy()); plt.axis('off'); plt.tight_layout(); plt.show()\n```\n\n![raw_heatmap](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Ffrgfm_torch-cam_readme_8a480f197db1.png)\n\nOr if you wish to overlay it on your input image:\n\n```python\nimport matplotlib.pyplot as plt\nfrom torchvision.transforms.v2.functional import to_pil_image\nfrom torchcam.utils import overlay_mask\n\n# Resize the CAM and overlay it\nresult = overlay_mask(to_pil_image(img), to_pil_image(activation_map[0].squeeze(0), mode='F'), alpha=0.5)\nplt.imshow(result); plt.axis('off'); plt.tight_layout(); plt.show()\n```\n\n![overlayed_heatmap](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Ffrgfm_torch-cam_readme_45a5824cbf1b.png)\n\n## Setup\n\nPython 3.11 (or higher) and [uv](https:\u002F\u002Fdocs.astral.sh\u002Fuv\u002F)\u002F[pip](https:\u002F\u002Fpip.pypa.io\u002Fen\u002Fstable\u002Finstallation\u002F) are required to install TorchCAM.\n\n### Stable release\n\nYou can install the last stable release of the package using 
[pypi](https:\u002F\u002Fpypi.org\u002Fproject\u002Ftorchcam\u002F) as follows:\n\n```shell\npip install torchcam\n```\n\n### Latest version\n\nAlternatively, if you wish to use the latest features of the project that haven't made their way to a release yet, you can install the package from source:\n\n```shell\npip install \"torchcam @ git+https:\u002F\u002Fgithub.com\u002Ffrgfm\u002Ftorch-cam.git\"\n```\n\n\n## CAM Zoo\n\nThis project is developed and maintained by the repo owner, but the implementation was based on the following research papers:\n\n- [Learning Deep Features for Discriminative Localization](https:\u002F\u002Farxiv.org\u002Fabs\u002F1512.04150): the original CAM paper\n- [Grad-CAM](https:\u002F\u002Farxiv.org\u002Fabs\u002F1610.02391): GradCAM paper, generalizing CAM to models without global average pooling.\n- [Grad-CAM++](https:\u002F\u002Farxiv.org\u002Fabs\u002F1710.11063): improvement of Grad-CAM for more accurate pixel-level contribution to the activation.\n- [Smooth Grad-CAM++](https:\u002F\u002Farxiv.org\u002Fabs\u002F1908.01224): SmoothGrad mechanism coupled with GradCAM.\n- [Score-CAM](https:\u002F\u002Farxiv.org\u002Fabs\u002F1910.01279): score-weighting of class activation for better interpretability.\n- [SS-CAM](https:\u002F\u002Farxiv.org\u002Fabs\u002F2006.14255): SmoothGrad mechanism coupled with Score-CAM.\n- [IS-CAM](https:\u002F\u002Farxiv.org\u002Fabs\u002F2010.03023): integration-based variant of Score-CAM.\n- [XGrad-CAM](https:\u002F\u002Farxiv.org\u002Fabs\u002F2008.02312): improved version of Grad-CAM in terms of sensitivity and conservation.\n- [Layer-CAM](http:\u002F\u002Fmftp.mmcheng.net\u002FPapers\u002F21TIP_LayerCAM.pdf): Grad-CAM alternative leveraging pixel-wise contribution of the gradient to the activation.\n\n\u003Cp align=\"center\">\n    \u003Ca alt=\"wallaby_video_cam\">\n        \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Ffrgfm_torch-cam_readme_980b612e0c1a.gif\" 
\u002F>\u003C\u002Fa>\n\u003C\u002Fp>\n\u003Cp align=\"center\">\n    \u003Cem>Source: \u003Ca href=\"https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=hZJN5BzKfxk\">YouTube video\u003C\u002Fa> (activation maps created by \u003Ca href=\"https:\u002F\u002Ffrgfm.github.io\u002Ftorch-cam\u002Flatest\u002Fmethods.html#torchcam.methods.LayerCAM\">Layer-CAM\u003C\u002Fa> with a pretrained \u003Ca href=\"https:\u002F\u002Fpytorch.org\u002Fvision\u002Fstable\u002Fmodels.html#torchvision.models.resnet18\">ResNet-18\u003C\u002Fa>)\u003C\u002Fem>\n\u003C\u002Fp>\n\n\n\n## What else\n\n### Documentation\n\nThe full package documentation is available [here](https:\u002F\u002Ffrgfm.github.io\u002Ftorch-cam\u002F) for detailed specifications.\n\n### Playground app\n\nA minimal demo app is provided for you to play with the supported CAM methods! Feel free to check out the live demo on [![Hugging Face Spaces](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F%F0%9F%A4%97%20Hugging%20Face-Spaces-blue)](https:\u002F\u002Fhuggingface.co\u002Fspaces\u002Ffrgfm\u002Ftorch-cam)\n\nIf you prefer running the demo by yourself, you will need an extra dependency ([Streamlit](https:\u002F\u002Fstreamlit.io\u002F)) for the app to run:\n\n```\npip install -e \".[demo]\"\n```\n\nYou can then easily run your app in your default browser by running:\n\n```\nstreamlit run demo\u002Fapp.py\n```\n\n![torchcam_demo](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Ffrgfm_torch-cam_readme_745d3dcdfd9c.png)\n\n### Visualization script\n\nAn example script is provided for you to benchmark the heatmaps produced by multiple CAM approaches on the same image:\n\n```shell\npython scripts\u002Fcam_example.py --arch resnet18 --class-idx 232 --rows 2\n```\n\n![gradcam_sample](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Ffrgfm_torch-cam_readme_15c178189ca7.png)\n\n*All script arguments can be checked using `python scripts\u002Fcam_example.py --help`*\n\n### Performance benchmarks\n\nThe purpose of CAM 
methods is to provide interpretability, and they do so by pointing out the factors with the biggest influence on the model outputs. Ideally, the CAM should pinpoint all the visual cues that have any influence on the output classification score.\nFor this, we use two metrics:\n- [Increase in Confidence](https:\u002F\u002Ffrgfm.github.io\u002Ftorch-cam\u002Flatest\u002Fmetrics.html#torchcam.metrics.ClassificationMetric) (higher is better): if we forward the input masked with the CAM (keep original pixel values where the CAM is highest, nullify them where it is lowest), how many times across the dataset does the classification probability improve.\n- [Average Drop](https:\u002F\u002Ffrgfm.github.io\u002Ftorch-cam\u002Flatest\u002Fmetrics.html#torchcam.metrics.ClassificationMetric) (lower is better): if we forward the input masked with the CAM (keep original pixel values where the CAM is highest, nullify them where it is lowest), by how much does the classification probability drop.\n\n| CAM method | Arch | Average drop (↓) | Increase in confidence (↑) |\n| ---------- | ---- | ---------------- | -------------------------- |\n| [GradCAM](https:\u002F\u002Ffrgfm.github.io\u002Ftorch-cam\u002Flatest\u002Fmethods.html#torchcam.methods.GradCAM) | resnet18 | 0.2686 | 0.2250 |\n| [GradCAMpp](https:\u002F\u002Ffrgfm.github.io\u002Ftorch-cam\u002Flatest\u002Fmethods.html#torchcam.methods.GradCAMpp) | resnet18 | 0.5271 | 0.1962 |\n| [SmoothGradCAMpp](https:\u002F\u002Ffrgfm.github.io\u002Ftorch-cam\u002Flatest\u002Fmethods.html#torchcam.methods.SmoothGradCAMpp) | resnet18 | 0.2088 | 0.2499 |\n| [LayerCAM](https:\u002F\u002Ffrgfm.github.io\u002Ftorch-cam\u002Flatest\u002Fmethods.html#torchcam.methods.LayerCAM) | resnet18 | 0.1712 | 0.2819 |\n| [GradCAM](https:\u002F\u002Ffrgfm.github.io\u002Ftorch-cam\u002Flatest\u002Fmethods.html#torchcam.methods.GradCAM) | mobilenet_v3_large | 0.2678 | 0.3483 |\n| [GradCAMpp](https:\u002F\u002Ffrgfm.github.io\u002Ftorch-cam\u002Flatest\u002Fmethods.html#torchcam.methods.GradCAMpp) | 
mobilenet_v3_large | 0.3182 | 0.2535 |\n| [SmoothGradCAMpp](https:\u002F\u002Ffrgfm.github.io\u002Ftorch-cam\u002Flatest\u002Fmethods.html#torchcam.methods.SmoothGradCAMpp) | mobilenet_v3_large | 0.2681 | 0.2678 |\n| [LayerCAM](https:\u002F\u002Ffrgfm.github.io\u002Ftorch-cam\u002Flatest\u002Fmethods.html#torchcam.methods.LayerCAM) | mobilenet_v3_large | 0.2526 | 0.2882 |\n\nThis benchmark was performed over the validation set of [imagenette](https:\u002F\u002Fgithub.com\u002Ffastai\u002Fimagenette), which is a subset of ImageNet, on (224, 224) inputs.\n\nYou can run this performance benchmark for any CAM method on your hardware as follows:\n\n```bash\npython scripts\u002Feval_perf.py ~\u002FDownloads\u002Fimagenette LayerCAM --arch mobilenet_v3_large\n```\n\n*All script arguments can be checked using `python scripts\u002Feval_perf.py --help`*\n\n### Latency benchmark\n\nYou crave beautiful activation maps, but you don't know whether they fit your latency requirements?\n\nIn the table below, you will find a latency overhead benchmark (forward pass not included) for all CAM methods:\n\n| CAM method | Arch | GPU mean (std) | CPU mean (std) |\n| ---------- | ---- | -------------- | -------------- |\n| [CAM](https:\u002F\u002Ffrgfm.github.io\u002Ftorch-cam\u002Flatest\u002Fmethods.html#torchcam.methods.CAM) | resnet18           | 0.11ms (0.02ms)    | 0.14ms (0.03ms)      |\n| [GradCAM](https:\u002F\u002Ffrgfm.github.io\u002Ftorch-cam\u002Flatest\u002Fmethods.html#torchcam.methods.GradCAM) | resnet18           | 3.71ms (1.11ms)    | 40.66ms (1.82ms)     |\n| [GradCAMpp](https:\u002F\u002Ffrgfm.github.io\u002Ftorch-cam\u002Flatest\u002Fmethods.html#torchcam.methods.GradCAMpp) | resnet18           | 5.21ms (1.22ms)    | 41.61ms (3.24ms)     |\n| [SmoothGradCAMpp](https:\u002F\u002Ffrgfm.github.io\u002Ftorch-cam\u002Flatest\u002Fmethods.html#torchcam.methods.SmoothGradCAMpp) | resnet18           | 33.67ms (2.51ms)   | 239.27ms (7.85ms)    |\n| 
[ScoreCAM](https:\u002F\u002Ffrgfm.github.io\u002Ftorch-cam\u002Flatest\u002Fmethods.html#torchcam.methods.ScoreCAM) | resnet18           | 304.74ms (11.54ms) | 6796.89ms (415.14ms) |\n| [XGradCAM](https:\u002F\u002Ffrgfm.github.io\u002Ftorch-cam\u002Flatest\u002Fmethods.html#torchcam.methods.XGradCAM) | resnet18           | 3.78ms (0.96ms)    | 40.63ms (2.03ms)     |\n| [LayerCAM](https:\u002F\u002Ffrgfm.github.io\u002Ftorch-cam\u002Flatest\u002Fmethods.html#torchcam.methods.LayerCAM) | resnet18           | 3.65ms (1.04ms)    | 40.91ms (1.79ms)     |\n| [CAM](https:\u002F\u002Ffrgfm.github.io\u002Ftorch-cam\u002Flatest\u002Fmethods.html#torchcam.methods.CAM) | mobilenet_v3_large | N\u002FA*               | N\u002FA*                 |\n| [GradCAM](https:\u002F\u002Ffrgfm.github.io\u002Ftorch-cam\u002Flatest\u002Fmethods.html#torchcam.methods.GradCAM) | mobilenet_v3_large | 8.61ms (1.04ms)    | 26.64ms (3.46ms)     |\n| [GradCAMpp](https:\u002F\u002Ffrgfm.github.io\u002Ftorch-cam\u002Flatest\u002Fmethods.html#torchcam.methods.GradCAMpp) | mobilenet_v3_large | 8.83ms (1.29ms)    | 25.50ms (3.10ms)     |\n| [SmoothGradCAMpp](https:\u002F\u002Ffrgfm.github.io\u002Ftorch-cam\u002Flatest\u002Fmethods.html#torchcam.methods.SmoothGradCAMpp) | mobilenet_v3_large | 77.38ms (3.83ms)   | 156.25ms (4.89ms)    |\n| [ScoreCAM](https:\u002F\u002Ffrgfm.github.io\u002Ftorch-cam\u002Flatest\u002Fmethods.html#torchcam.methods.ScoreCAM) | mobilenet_v3_large | 35.19ms (2.11ms)   | 679.16ms (55.04ms)   |\n| [XGradCAM](https:\u002F\u002Ffrgfm.github.io\u002Ftorch-cam\u002Flatest\u002Fmethods.html#torchcam.methods.XGradCAM) | mobilenet_v3_large | 8.41ms (0.98ms)    | 24.21ms (2.94ms)     |\n| [LayerCAM](https:\u002F\u002Ffrgfm.github.io\u002Ftorch-cam\u002Flatest\u002Fmethods.html#torchcam.methods.LayerCAM) | mobilenet_v3_large | 8.02ms (0.95ms)    | 25.14ms (3.17ms)     |\n\n**The base CAM method cannot work with architectures that have multiple fully-connected layers*\n\nThis benchmark 
was performed over 100 iterations on (224, 224) inputs, on a laptop to better reflect the performance that common users can expect. The hardware setup includes an [Intel(R) Core(TM) i7-10750H](https:\u002F\u002Fark.intel.com\u002Fcontent\u002Fwww\u002Fus\u002Fen\u002Fark\u002Fproducts\u002F201837\u002Fintel-core-i710750h-processor-12m-cache-up-to-5-00-ghz.html) for the CPU, and an [NVIDIA GeForce RTX 2070 with Max-Q Design](https:\u002F\u002Fwww.nvidia.com\u002Ffr-fr\u002Fgeforce\u002Fgraphics-cards\u002Frtx-2070\u002F) for the GPU.\n\nYou can run this latency benchmark for any CAM method on your hardware as follows:\n\n```bash\npython scripts\u002Feval_latency.py SmoothGradCAMpp\n```\n\n*All script arguments can be checked using `python scripts\u002Feval_latency.py --help`*\n\n### Example notebooks\n\nLooking for more illustrations of TorchCAM features?\nYou might want to check the [Jupyter notebooks](notebooks) designed to give you a broader overview.\n\n## Citation\n\nIf you wish to cite this project, feel free to use this [BibTeX](http:\u002F\u002Fwww.bibtex.org\u002F) reference:\n\n```bibtex\n@misc{torchcam2020,\n    title={TorchCAM: class activation explorer},\n    author={François-Guillaume Fernandez},\n    year={2020},\n    month={March},\n    publisher = {GitHub},\n    howpublished = {\\url{https:\u002F\u002Fgithub.com\u002Ffrgfm\u002Ftorch-cam}}\n}\n```\n\n## Contributing\n\nFeeling like extending the range of possibilities of CAM? Or perhaps submitting a paper implementation? Any sort of contribution is greatly appreciated!\n\nYou can find a short guide in [`CONTRIBUTING`](CONTRIBUTING.md) to help grow this project!\n\n## License\n\nDistributed under the Apache 2.0 License. 
See [`LICENSE`](LICENSE) for more information.\n\n[![FOSSA Status](https:\u002F\u002Fapp.fossa.com\u002Fapi\u002Fprojects\u002Fgit%2Bgithub.com%2Ffrgfm%2Ftorch-cam.svg?type=large&issueType=license)](https:\u002F\u002Fapp.fossa.com\u002Fprojects\u002Fgit%2Bgithub.com%2Ffrgfm%2Ftorch-cam?ref=badge_large&issueType=license)\n","\u003Ch1 align=\"center\">\n  TorchCAM：类别激活探索器\n\u003C\u002Fh1>\n\n\u003Cp align=\"center\">\n  \u003Ca href=\"https:\u002F\u002Fgithub.com\u002Ffrgfm\u002Ftorch-cam\u002Factions\u002Fworkflows\u002Fpackage.yml\">\n    \u003Cimg alt=\"CI 状态\" src=\"https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Factions\u002Fworkflow\u002Fstatus\u002Ffrgfm\u002Ftorch-cam\u002Fpackage.yml?branch=main&label=CI&logo=github&style=flat-square\">\n  \u003C\u002Fa>\n  \u003Ca href=\"https:\u002F\u002Fgithub.com\u002Fastral-sh\u002Fruff\">\n    \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FLinter-Ruff-FCC21B?style=flat-square&logo=ruff&logoColor=white\" alt=\"代码检查工具 - Ruff\">\n  \u003C\u002Fa>\n  \u003Ca href=\"https:\u002F\u002Fgithub.com\u002Fastral-sh\u002Fty\">\n    \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FTypecheck-Ty-261230?style=flat-square&logo=astral&logoColor=white\" alt=\"类型检查 - Ty\">\n  \u003C\u002Fa>\n  \u003Ca href=\"https:\u002F\u002Fwww.codacy.com\u002Fgh\u002Ffrgfm\u002Ftorch-cam\u002Fdashboard?utm_source=github.com&amp;utm_medium=referral&amp;utm_content=frgfm\u002Ftorch-cam&amp;utm_campaign=Badge_Grade\">\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Ffrgfm_torch-cam_readme_adaaa9495e23.png\"\u002F>\u003C\u002Fa>\n  \u003Ca href=\"https:\u002F\u002Fcodecov.io\u002Fgh\u002Ffrgfm\u002Ftorch-cam\">\n    \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fcodecov\u002Fc\u002Fgithub\u002Ffrgfm\u002Ftorch-cam.svg?logo=codecov&style=flat-square&label=Coverage\" alt=\"测试覆盖率百分比\">\n  \u003C\u002Fa>\n\u003C\u002Fp>\n\u003Cp align=\"center\">\n  \u003Ca 
href=\"https:\u002F\u002Fpypi.org\u002Fproject\u002Ftorchcam\u002F\">\n    \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fpypi\u002Fv\u002Ftorchcam.svg?logo=PyPI&logoColor=fff&style=flat-square&label=PyPI\" alt=\"PyPI 版本\">\n  \u003C\u002Fa>\n  \u003Cimg alt=\"GitHub release (latest by date)\" src=\"https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Fv\u002Frelease\u002Ffrgfm\u002Ftorch-cam?label=Release&logo=github\">\n  \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fpypi\u002Fpyversions\u002Ftorchcam.svg?logo=Python&label=Python&logoColor=fff&style=flat-square\" alt=\"Python 版本\">\n  \u003Ca href=\"https:\u002F\u002Fgithub.com\u002Ffrgfm\u002Ftorch-cam\u002Fblob\u002Fmain\u002FLICENSE\">\n    \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Flicense\u002Ffrgfm\u002Ftorch-cam.svg?label=License&logoColor=fff&style=flat-square\" alt=\"许可证\">\n  \u003C\u002Fa>\n\u003C\u002Fp>\n\u003Cp align=\"center\">\n  \u003Ca href=\"https:\u002F\u002Fhuggingface.co\u002Fspaces\u002Ffrgfm\u002Ftorch-cam\">\n    \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F%F0%9F%A4%97%20Hugging%20Face-Spaces-blue\" alt=\"Hugging Face Spaces\">\n  \u003C\u002Fa>\n  \u003Ca href=\"https:\u002F\u002Fcolab.research.google.com\u002Fgithub\u002Ffrgfm\u002Fnotebooks\u002Fblob\u002Fmain\u002Ftorch-cam\u002Fquicktour.ipynb\">\n    \u003Cimg src=\"https:\u002F\u002Fcolab.research.google.com\u002Fassets\u002Fcolab-badge.svg\" alt=\"在 Colab 中打开\">\n  \u003C\u002Fa>\n\u003C\u002Fp>\n\u003Cp align=\"center\">\n  \u003Ca href=\"https:\u002F\u002Ffrgfm.github.io\u002Ftorch-cam\">\n    \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Factions\u002Fworkflow\u002Fstatus\u002Ffrgfm\u002Ftorch-cam\u002Fpage-build.yml?branch=main&label=Documentation&logo=read-the-docs&logoColor=white&style=flat-square\" alt=\"文档状态\">\n  \u003C\u002Fa>\n\u003C\u002Fp>\n\n一种在 PyTorch (深度学习框架) 中利用卷积层特定类别激活的简便方法。\n\n\u003Cp align=\"center\">\n    \u003Ca alt=\"cam 示例\">\n      
  \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Ffrgfm_torch-cam_readme_15c178189ca7.png\" \u002F>\u003C\u002Fa>\n\u003C\u002Fp>\n\u003Cp align=\"center\">\n    \u003Cem>来源：图片来自 \u003Ca href=\"https:\u002F\u002Fwww.woopets.fr\u002Fassets\u002Fraces\u002F000\u002F066\u002Fbig-portrait\u002Fborder-collie.jpg\">woopets\u003C\u002Fa>（激活图由预训练的 \u003Ca href=\"https:\u002F\u002Fpytorch.org\u002Fvision\u002Fstable\u002Fmodels.html#torchvision.models.resnet18\">Resnet-18\u003C\u002Fa> 创建）\u003C\u002Fem>\n\u003C\u002Fp>\n\n\n## 快速入门\n\n### 配置您的 CAM\n\nTorchCAM 利用 [PyTorch 钩子机制](https:\u002F\u002Fpytorch.org\u002Ftutorials\u002Fbeginner\u002Fformer_torchies\u002Fnnft_tutorial.html#forward-and-backward-function-hooks) 无缝检索生成类别激活图 (Class Activation Map, CAM) 所需的所有信息，无需用户额外操作。每个 CAM 对象都充当您模型的包装器。\n\n您可以在 [文档](https:\u002F\u002Ffrgfm.github.io\u002Ftorch-cam\u002Fmethods.html) 中找到支持的所有 CAM 方法的完整列表，然后按如下方式使用：\n\n```python\nfrom torchvision.models import get_model, get_model_weights\nfrom torchcam.methods import LayerCAM\n\n# Define your model\nmodel = get_model(\"resnet18\", weights=get_model_weights(\"resnet18\").DEFAULT).eval()\n# Set your CAM extractor\ncam_extractor = LayerCAM(model)\n```\n\n*请注意，默认情况下，检索 CAM 的层设置为最后一个非降采样卷积层。如果您希望调查特定的层，请在构造函数中使用 `target_layer` 参数。*\n\n### 获取类别激活图\n\n一旦设置了 CAM 提取器，您只需要像往常一样使用模型对数据进行推理即可。如果需要任何额外信息，提取器会自动为您获取。\n\n```python\nfrom torchvision.io import decode_image\nfrom torchvision.models import get_model, get_model_weights\nfrom torchcam.methods import LayerCAM\n\n# Get a model and an image\nweights = get_model_weights(\"resnet18\").DEFAULT\nmodel = get_model(\"resnet18\", weights=weights).eval()\npreprocess = weights.transforms()\nimg = decode_image(\"path\u002Fto\u002Fyour\u002Fimage.jpg\")\n\ninput_tensor = preprocess(img)\n\nwith LayerCAM(model) as cam_extractor:\n  out = model(input_tensor.unsqueeze(0))\n  # Retrieve the CAM by passing the class index and the model output\n  activation_map = 
cam_extractor(out.squeeze(0).argmax().item(), out)\n```\n\n如果您想可视化热力图，只需将 CAM 转换为 NumPy 数组：\n\n```python\nimport matplotlib.pyplot as plt\n# Visualize the raw CAM\nplt.imshow(activation_map[0].squeeze(0).numpy()); plt.axis('off'); plt.tight_layout(); plt.show()\n```\n\n![原始热力图](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Ffrgfm_torch-cam_readme_8a480f197db1.png)\n\n或者如果您希望将其叠加在输入图像上：\n\n```python\nimport matplotlib.pyplot as plt\nfrom torchvision.transforms.v2.functional import to_pil_image\nfrom torchcam.utils import overlay_mask\n\n# Resize the CAM and overlay it\nresult = overlay_mask(to_pil_image(img), to_pil_image(activation_map[0].squeeze(0), mode='F'), alpha=0.5)\nplt.imshow(result); plt.axis('off'); plt.tight_layout(); plt.show()\n```\n\n![叠加热力图](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Ffrgfm_torch-cam_readme_45a5824cbf1b.png)\n\n## 安装\n\n需要 Python 3.11（或更高版本）以及 [uv](https:\u002F\u002Fdocs.astral.sh\u002Fuv\u002F)\u002F[pip](https:\u002F\u002Fpip.pypa.io\u002Fen\u002Fstable\u002Finstallation\u002F) 来安装 TorchCAM。\n\n### 稳定版本\n\n您可以使用 [pypi](https:\u002F\u002Fpypi.org\u002Fproject\u002Ftorchcam\u002F) 安装该包的最新稳定版本，如下所示：\n\n```shell\npip install torchcam\n```\n\n### 最新版本\n\n另外，如果您希望使用尚未发布到正式版本中的项目最新功能，可以从源代码安装该包：\n\n```shell\npip install \"torchcam @ git+https:\u002F\u002Fgithub.com\u002Ffrgfm\u002Ftorch-cam.git\"\n```\n\n## CAM Zoo\n\n本项目由仓库所有者开发和维护，但其实现基于以下研究论文：\n\n- [Learning Deep Features for Discriminative Localization](https:\u002F\u002Farxiv.org\u002Fabs\u002F1512.04150)：原始的 CAM（类激活映射）论文\n- [Grad-CAM](https:\u002F\u002Farxiv.org\u002Fabs\u002F1610.02391)：GradCAM 论文，将 CAM 推广至没有全局平均池化的模型。\n- [Grad-CAM++](https:\u002F\u002Farxiv.org\u002Fabs\u002F1710.11063)：GradCAM 的改进版，用于更准确地计算像素级对激活的贡献。\n- [Smooth Grad-CAM++](https:\u002F\u002Farxiv.org\u002Fabs\u002F1908.01224)：将 SmoothGrad 机制与 GradCAM 结合。\n- [Score-CAM](https:\u002F\u002Farxiv.org\u002Fabs\u002F1910.01279)：类激活的分数加权，以获得更好的可解释性。\n- 
[SS-CAM](https:\u002F\u002Farxiv.org\u002Fabs\u002F2006.14255)：将 SmoothGrad 机制与 Score-CAM 结合。\n- [IS-CAM](https:\u002F\u002Farxiv.org\u002Fabs\u002F2010.03023)：基于积分的 Score-CAM 变体。\n- [XGrad-CAM](https:\u002F\u002Farxiv.org\u002Fabs\u002F2008.02312)：在敏感性和守恒性方面改进的 Grad-CAM 版本。\n- [Layer-CAM](http:\u002F\u002Fmftp.mmcheng.net\u002FPapers\u002F21TIP_LayerCAM.pdf)：利用梯度对激活的像素级贡献的 Grad-CAM 替代方案。\n\n\u003Cp align=\"center\">\n    \u003Ca alt=\"wallaby_video_cam\">\n        \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Ffrgfm_torch-cam_readme_980b612e0c1a.gif\" \u002F>\u003C\u002Fa>\n\u003C\u002Fp>\n\u003Cp align=\"center\">\n    \u003Cem>来源：\u003Ca href=\"https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=hZJN5BzKfxk\">YouTube 视频\u003C\u002Fa>（激活图由 \u003Ca href=\"https:\u002F\u002Ffrgfm.github.io\u002Ftorch-cam\u002Flatest\u002Fmethods.html#torchcam.methods.LayerCAM\">Layer-CAM\u003C\u002Fa> 使用预训练的 \u003Ca href=\"https:\u002F\u002Fpytorch.org\u002Fvision\u002Fstable\u002Fmodels.html#torchvision.models.resnet18\">ResNet-18\u003C\u002Fa> 创建）\u003C\u002Fem>\n\u003C\u002Fp>\n\n\n\n## 其他\n\n### 文档\n\n完整包文档可在 [此处](https:\u002F\u002Ffrgfm.github.io\u002Ftorch-cam\u002F) 获取，包含详细规格说明。\n\n### 演示应用\n\n提供了一个最小化演示应用供您体验支持的 CAM 方法！欢迎查看 [![Hugging Face Spaces](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F%F0%9F%A4%97%20Hugging%20Face-Spaces-blue)](https:\u002F\u002Fhuggingface.co\u002Fspaces\u002Ffrgfm\u002Ftorch-cam) 上的实时演示。\n\n如果您更喜欢自己运行演示，需要额外的依赖项 ([Streamlit](https:\u002F\u002Fstreamlit.io\u002F)) 才能运行该应用：\n\n```\npip install -e \".[demo]\"\n```\n\n然后您可以通过运行以下命令在默认浏览器中轻松运行您的应用：\n\n```\nstreamlit run demo\u002Fapp.py\n```\n\n![torchcam_demo](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Ffrgfm_torch-cam_readme_745d3dcdfd9c.png)\n\n### 可视化脚本\n\n提供了一个示例脚本，供您对同一图像上多种 CAM 方法生成的热力图进行基准测试：\n\n```shell\npython scripts\u002Fcam_example.py --arch resnet18 --class-idx 232 --rows 
2\n```\n\n![gradcam_sample](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Ffrgfm_torch-cam_readme_15c178189ca7.png)\n\n*所有脚本参数均可使用 `python scripts\u002Fcam_example.py --help` 进行检查*\n\n### 性能基准测试\n\nCAM 方法的目的是提供可解释性，它们通过指出对模型输出影响最大的因素来实现这一点。理想情况下，CAM 应该精确定位任何影响输出分类分数的视觉线索。为此，我们使用两个指标：\n- [置信度提升](https:\u002F\u002Ffrgfm.github.io\u002Ftorch-cam\u002Flatest\u002Fmetrics.html#torchcam.metrics.ClassificationMetric)（越高越好）：如果我们使用 CAM 掩码后的输入进行前向传播（保留 CAM 值最高处的原始像素值，最低处置零），数据集中有多少次分类概率得到了提升。\n- [平均下降](https:\u002F\u002Ffrgfm.github.io\u002Ftorch-cam\u002Flatest\u002Fmetrics.html#torchcam.metrics.ClassificationMetric)（越低越好）：如果我们使用 CAM 掩码后的输入进行前向传播（保留 CAM 值最高处的原始像素值，最低处置零），分类概率下降了多少。\n\n| CAM 方法 | 架构 | 平均下降 (↓) | 置信度提升 (↑) |\n| ---------- | ---- | ---------------- | -------------------------- |\n| [GradCAM](https:\u002F\u002Ffrgfm.github.io\u002Ftorch-cam\u002Flatest\u002Fmethods.html#torchcam.methods.GradCAM) | resnet18 | 0.2686 | 0.2250 |\n| [GradCAMpp](https:\u002F\u002Ffrgfm.github.io\u002Ftorch-cam\u002Flatest\u002Fmethods.html#torchcam.methods.GradCAMpp) | resnet18 | 0.5271 | 0.1962 |\n| [SmoothGradCAMpp](https:\u002F\u002Ffrgfm.github.io\u002Ftorch-cam\u002Flatest\u002Fmethods.html#torchcam.methods.SmoothGradCAMpp) | resnet18 | 0.2088 | 0.2499 |\n| [LayerCAM](https:\u002F\u002Ffrgfm.github.io\u002Ftorch-cam\u002Flatest\u002Fmethods.html#torchcam.methods.LayerCAM) | resnet18 | 0.1712 | 0.2819 |\n| [GradCAM](https:\u002F\u002Ffrgfm.github.io\u002Ftorch-cam\u002Flatest\u002Fmethods.html#torchcam.methods.GradCAM) | mobilenet_v3_large | 0.2678 | 0.3483 |\n| [GradCAMpp](https:\u002F\u002Ffrgfm.github.io\u002Ftorch-cam\u002Flatest\u002Fmethods.html#torchcam.methods.GradCAMpp) | mobilenet_v3_large | 0.3182 | 0.2535 |\n| [SmoothGradCAMpp](https:\u002F\u002Ffrgfm.github.io\u002Ftorch-cam\u002Flatest\u002Fmethods.html#torchcam.methods.SmoothGradCAMpp) | mobilenet_v3_large | 0.2681 | 0.2678 |\n| 
[LayerCAM](https:\u002F\u002Ffrgfm.github.io\u002Ftorch-cam\u002Flatest\u002Fmethods.html#torchcam.methods.LayerCAM) | mobilenet_v3_large | 0.2526 | 0.2882 |\n\n此基准测试是在 [imagenette](https:\u002F\u002Fgithub.com\u002Ffastai\u002Fimagenette) 的验证集上进行的，它是 Imagenet 的一个子集，输入大小为 (224, 224)。\n\n您可以在自己的硬件上为任何 CAM 方法运行此性能基准测试，如下所示：\n\n```bash\npython scripts\u002Feval_perf.py ~\u002FDownloads\u002Fimagenette LayerCAM --arch mobilenet_v3_large\n```\n\n*所有脚本参数均可使用 `python scripts\u002Feval_perf.py --help` 进行检查*\n\n### 延迟基准测试\n\n你渴望获得漂亮的激活图 (activation maps)，但不知道它们在延迟方面是否符合你的需求？\n\n在下表中，你将找到所有 CAM（类激活映射）方法的延迟开销基准测试（不包含前向传播）：\n\n| CAM 方法 | 架构 | GPU 均值 (标准差) | CPU 均值 (标准差) |\n| ---------- | ---- | -------------- | -------------- |\n| [CAM](https:\u002F\u002Ffrgfm.github.io\u002Ftorch-cam\u002Flatest\u002Fmethods.html#torchcam.methods.CAM) | resnet18           | 0.11ms (0.02ms)    | 0.14ms (0.03ms)      |\n| [GradCAM](https:\u002F\u002Ffrgfm.github.io\u002Ftorch-cam\u002Flatest\u002Fmethods.html#torchcam.methods.GradCAM) | resnet18           | 3.71ms (1.11ms)    | 40.66ms (1.82ms)     |\n| [GradCAMpp](https:\u002F\u002Ffrgfm.github.io\u002Ftorch-cam\u002Flatest\u002Fmethods.html#torchcam.methods.GradCAMpp) | resnet18           | 5.21ms (1.22ms)    | 41.61ms (3.24ms)     |\n| [SmoothGradCAMpp](https:\u002F\u002Ffrgfm.github.io\u002Ftorch-cam\u002Flatest\u002Fmethods.html#torchcam.methods.SmoothGradCAMpp) | resnet18           | 33.67ms (2.51ms)   | 239.27ms (7.85ms)    |\n| [ScoreCAM](https:\u002F\u002Ffrgfm.github.io\u002Ftorch-cam\u002Flatest\u002Fmethods.html#torchcam.methods.ScoreCAM) | resnet18           | 304.74ms (11.54ms) | 6796.89ms (415.14ms) |\n| [XGradCAM](https:\u002F\u002Ffrgfm.github.io\u002Ftorch-cam\u002Flatest\u002Fmethods.html#torchcam.methods.XGradCAM) | resnet18           | 3.78ms (0.96ms)    | 40.63ms (2.03ms)     |\n| [LayerCAM](https:\u002F\u002Ffrgfm.github.io\u002Ftorch-cam\u002Flatest\u002Fmethods.html#torchcam.methods.LayerCAM) | resnet18           | 
3.65ms (1.04ms)    | 40.91ms (1.79ms)     |\n| [CAM](https:\u002F\u002Ffrgfm.github.io\u002Ftorch-cam\u002Flatest\u002Fmethods.html#torchcam.methods.CAM) | mobilenet_v3_large | N\u002FA*               | N\u002FA*                 |\n| [GradCAM](https:\u002F\u002Ffrgfm.github.io\u002Ftorch-cam\u002Flatest\u002Fmethods.html#torchcam.methods.GradCAM) | mobilenet_v3_large | 8.61ms (1.04ms)    | 26.64ms (3.46ms)     |\n| [GradCAMpp](https:\u002F\u002Ffrgfm.github.io\u002Ftorch-cam\u002Flatest\u002Fmethods.html#torchcam.methods.GradCAMpp) | mobilenet_v3_large | 8.83ms (1.29ms)    | 25.50ms (3.10ms)     |\n| [SmoothGradCAMpp](https:\u002F\u002Ffrgfm.github.io\u002Ftorch-cam\u002Flatest\u002Fmethods.html#torchcam.methods.SmoothGradCAMpp) | mobilenet_v3_large | 77.38ms (3.83ms)   | 156.25ms (4.89ms)    |\n| [ScoreCAM](https:\u002F\u002Ffrgfm.github.io\u002Ftorch-cam\u002Flatest\u002Fmethods.html#torchcam.methods.ScoreCAM) | mobilenet_v3_large | 35.19ms (2.11ms)   | 679.16ms (55.04ms)   |\n| [XGradCAM](https:\u002F\u002Ffrgfm.github.io\u002Ftorch-cam\u002Flatest\u002Fmethods.html#torchcam.methods.XGradCAM) | mobilenet_v3_large | 8.41ms (0.98ms)    | 24.21ms (2.94ms)     |\n| [LayerCAM](https:\u002F\u002Ffrgfm.github.io\u002Ftorch-cam\u002Flatest\u002Fmethods.html#torchcam.methods.LayerCAM) | mobilenet_v3_large | 8.02ms (0.95ms)    | 25.14ms (3.17ms)     |\n\n**基础 CAM 方法无法与具有多个全连接层的架构配合使用***\n\n此基准测试在笔记本电脑上进行了 100 次迭代，输入尺寸为 (224, 224)，以更好地反映普通用户可预期的性能。硬件配置包括用于 CPU 的 [Intel(R) Core(TM) i7-10750H](https:\u002F\u002Fark.intel.com\u002Fcontent\u002Fwww\u002Fus\u002Fen\u002Fark\u002Fproducts\u002F201837\u002Fintel-core-i710750h-processor-12m-cache-up-to-5-00-ghz.html) 和用于 GPU 的 [NVIDIA GeForce RTX 2070 with Max-Q Design](https:\u002F\u002Fwww.nvidia.com\u002Ffr-fr\u002Fgeforce\u002Fgraphics-cards\u002Frtx-2070\u002F)。\n\n你可以按照以下方式在你的硬件上运行任何 CAM 方法的延迟基准测试：\n\n```bash\npython scripts\u002Feval_latency.py SmoothGradCAMpp\n```\n\n*可以使用 `python scripts\u002Feval_latency.py --help` 
检查所有脚本参数*\n\n### 示例笔记本\n\n想要了解更多 TorchCAM 功能的说明吗？\n你可能想查看 [Jupyter 笔记本](notebooks)，它们旨在为你提供更广泛的概览。\n\n## 引用\n\n如果你希望引用本项目，请随意使用此 [BibTeX](http:\u002F\u002Fwww.bibtex.org\u002F) 引用：\n\n```bibtex\n@misc{torcham2020,\n    title={TorchCAM: class activation explorer},\n    author={François-Guillaume Fernandez},\n    year={2020},\n    month={March},\n    publisher = {GitHub},\n    howpublished = {\\url{https:\u002F\u002Fgithub.com\u002Ffrgfm\u002Ftorch-cam}}\n}\n```\n\n## 贡献\n\n想要扩展 CAM 的可能性范围吗？或者提交论文实现？任何形式的贡献都深表感谢！\n\n你可以在 [`CONTRIBUTING`](CONTRIBUTING.md) 中找到简短指南，以帮助推动本项目发展！\n\n## 许可证\n\n根据 Apache 2.0 许可证分发。有关更多信息，请参阅 [`LICENSE`](LICENSE)。\n\n[![FOSSA Status](https:\u002F\u002Fapp.fossa.com\u002Fapi\u002Fprojects\u002Fgit%2Bgithub.com%2Ffrgfm%2Ftorch-cam.svg?type=large&issueType=license)](https:\u002F\u002Fapp.fossa.com\u002Fprojects\u002Fgit%2Bgithub.com%2Ffrgfm%2Ftorch-cam?ref=badge_large&issueType=license)","# TorchCAM 快速上手指南\n\nTorchCAM 是一个轻量级工具，旨在简化 PyTorch 中卷积层特定类别激活的利用，帮助开发者快速生成类激活图（Class Activation Maps, CAM）以增强模型的可解释性。\n\n## 1. 环境准备\n\n- **Python 版本**：3.11 或更高\n- **包管理器**：pip 或 uv\n- **前置依赖**：确保已安装 PyTorch 和 torchvision（示例代码依赖这些库）\n\n```bash\n# 建议先安装基础深度学习框架\npip install torch torchvision\n```\n\n## 2. 安装步骤\n\n通过 PyPI 安装最新稳定版：\n\n```shell\npip install torchcam\n```\n\n如需使用尚未发布到正式版本的最新功能，可从源码安装：\n\n```shell\npip install torchcam @ git+https:\u002F\u002Fgithub.com\u002Ffrgfm\u002Ftorch-cam.git\n```\n\n## 3. 
基本使用\n\n### 配置 CAM 提取器\n\n首先加载预训练模型并设置 CAM 提取器。默认情况下，CAM 将作用于最后一个非降采样卷积层。\n\n```python\nfrom torchvision.models import get_model, get_model_weights\nfrom torchcam.methods import LayerCAM\n\n# 定义模型\nmodel = get_model(\"resnet18\", weights=get_model_weights(\"resnet18\").DEFAULT).eval()\n# 设置 CAM 提取器\ncam_extractor = LayerCAM(model)\n```\n\n### 获取并可视化激活图\n\n运行推理后，提取器会自动捕获所需信息。你可以通过指定类别索引获取激活图，并将其转换为 NumPy 数组进行可视化。\n\n```python\nfrom torchvision.io import decode_image\nfrom torchvision.models import get_model, get_model_weights\nfrom torchcam.methods import LayerCAM\n\n# 获取模型和图片\nweights = get_model_weights(\"resnet18\").DEFAULT\nmodel = get_model(\"resnet18\", weights=weights).eval()\npreprocess = weights.transforms()\nimg = decode_image(\"path\u002Fto\u002Fyour\u002Fimage.jpg\")\n\ninput_tensor = preprocess(img)\n\nwith LayerCAM(model) as cam_extractor:\n  out = model(input_tensor.unsqueeze(0))\n  # 通过传入类别索引和模型输出来获取 CAM\n  activation_map = cam_extractor(out.squeeze(0).argmax().item(), out)\n\nimport matplotlib.pyplot as plt\n# 可视化原始 CAM\nplt.imshow(activation_map[0].squeeze(0).numpy()); plt.axis('off'); plt.tight_layout(); plt.show()\n```\n\n若希望将热力图叠加在原图上，可使用以下代码：\n\n```python\nimport matplotlib.pyplot as plt\nfrom torchvision.transforms.v2.functional import to_pil_image\nfrom torchcam.utils import overlay_mask\n\n# 调整 CAM 大小并叠加\nresult = overlay_mask(to_pil_image(img), to_pil_image(activation_map[0].squeeze(0), mode='F'), alpha=0.5)\nplt.imshow(result); plt.axis('off'); plt.tight_layout(); plt.show()\n```","医疗影像分析团队正在开发基于 CNN 的肺炎检测模型，需要验证模型是否真正关注了病灶区域而非背景噪声，以确保临床可信度。\n\n### 没有 torch-cam 时\n- 需要手动编写反向传播钩子来提取特征图梯度，代码量大且容易遗漏细节。\n- 切换不同可视化算法（如 Grad-CAM 与 Score-CAM）需重写大量底层逻辑。\n- 难以快速定位模型误判的具体原因，调试效率低下影响项目进度。\n- 缺乏统一接口，不同层级的激活值获取方式不一致导致维护困难。\n\n### 使用 torch-cam 后\n- 通过 LayerCAM 等类直接实例化，无需手动处理复杂的 Hook 机制。\n- 支持多种激活映射方法，一行代码即可切换算法对比效果。\n- 生成热力图直观展示病灶关注点，快速确认模型决策依据。\n- 兼容现有 PyTorch 模型结构，集成成本几乎为零且稳定可靠。\n\ntorch-cam 
通过提供标准化的激活映射接口，让深度学习模型的决策过程透明化，显著提升了视觉任务的可解释性与调试效率。","https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Ffrgfm_torch-cam_745d3dcd.png","frgfm","F-G Fernandez","https:\u002F\u002Foss.gittoolsai.com\u002Favatars\u002Ffrgfm_2bb614c0.jpg","Deep Learning Engineer by day, Open Source contributor by night :bat: ","@relaycli @pyronear","Paris, FR",null,"FrG_FM","https:\u002F\u002Ffgfm.dev\u002F","https:\u002F\u002Fgithub.com\u002Ffrgfm",[83,87],{"name":84,"color":85,"percentage":86},"Python","#3572A5",95.7,{"name":88,"color":89,"percentage":90},"Makefile","#427819",4.3,2297,223,"2026-04-10T13:53:43","Apache-2.0",1,"未说明",{"notes":98,"python":99,"dependencies":100},"安装需要 Python 3.11 或更高版本，支持使用 pip 或 uv 工具；运行演示应用需额外安装 Streamlit；支持在 Google Colab 和 Hugging Face Spaces 环境中直接使用","3.11+",[101,102,103,104],"torch","torchvision","matplotlib","numpy",[14],[107,108,109,110,111,112,113,114,115,116,117,118,119,120],"pytorch","python","deep-learning","cnn","activation-maps","gradcam-plus-plus","gradcam","saliency-map","interpretability","interpretable-deep-learning","smoothgrad","score-cam","class-activation-map","grad-cam",4,"2026-03-27T02:49:30.150509","2026-04-11T16:56:51.836266",[125,130,134,139,143,148],{"id":126,"question_zh":127,"answer_zh":128,"source_url":129},2522,"如何使用 torch-cam 配合自定义的多视图 CNN 模型？","需要手动注册前向钩子（hook）。参考代码示例，使用 `self.resnet.layer4.register_forward_hook(self.forward_hook())` 来捕获特征图，并在 `forward` 函数中处理多视图输入。同时可以通过 `activations_hook` 获取梯度信息。","https:\u002F\u002Fgithub.com\u002Ffrgfm\u002Ftorch-cam\u002Fissues\u002F210",{"id":131,"question_zh":132,"answer_zh":133,"source_url":129},2523,"生成热力图时 `normalized=False` 参数的具体含义是什么？","该参数表示库不会自动对 Class Activation Map 进行归一化，而是允许用户根据需求自行处理。用户可以选择不归一化，或者在生成地图后、绘图前自行执行特定的归一化操作。",{"id":135,"question_zh":136,"answer_zh":137,"source_url":138},2524,"集成 torch-cam 时遇到 Autograd Inplace 警告该如何解决？","该警告已在 v0.3.0 版本中修复。建议将 torchcam 升级到最新版本。如果在 Google Colab 环境中，可尝试通过 GitHub 
源安装以获取最新修复。","https:\u002F\u002Fgithub.com\u002Ffrgfm\u002Ftorch-cam\u002Fissues\u002F72",{"id":140,"question_zh":141,"answer_zh":142,"source_url":138},2525,"如何在 Google Colab 中安装 torchcam 的最新开发版本？","可以直接从 GitHub 仓库安装，使用命令：`!pip install git+https:\u002F\u002Fgithub.com\u002Ffrgfm\u002Ftorch-cam.git#egg=torchcam`。这比从 PyPI 安装更能保证获取到最新的修复和功能。",{"id":144,"question_zh":145,"answer_zh":146,"source_url":147},2526,"使用 timm 模型（如 BiT）时报错 `KeyError: 'fc'` 是什么原因及解决方法？","这是因为模型的实际分类层名称并非默认的 `'fc'`，导致自动解析失败。解决方法是在初始化 CAM 提取器时，手动指定正确的 `fc_layer` 参数值（需查看模型结构确认实际层名）。","https:\u002F\u002Fgithub.com\u002Ffrgfm\u002Ftorch-cam\u002Fissues\u002F68",{"id":149,"question_zh":150,"answer_zh":151,"source_url":152},2527,"torch-cam 是否支持没有全连接层（FC Layer）的模型？","不支持。维护者指出，对于没有全连接层的模型（如语义分割模型），目前尚缺乏标准的数学定义来应用此类 CAM 算法，因此无法直接使用该方法获取结果。","https:\u002F\u002Fgithub.com\u002Ffrgfm\u002Ftorch-cam\u002Fissues\u002F171",[154,159,164,169,174,179,184,189,194],{"id":155,"version":156,"summary_zh":157,"released_at":158},200654,"v0.4.1","This minor release makes sure the example scripts are compatible with the last release API changes.\r\n\r\nNote: TorchCAM 0.4.1 requires PyTorch 2.0.0 or higher.\r\n\r\n## Highlights\r\n\r\n### Minor new API for activation & gradient hook control\r\n\r\nBefore\r\n```python\r\nimport torch\r\nfrom torchvision.models import resnet18\r\nfrom torchcam.methods import LayerCAM\r\n\r\nmodel = resnet18(pretrained=True).eval()\r\n# Hooks are enabled by default\r\ncam_extractor = LayerCAM(model)\r\n\r\n# Disable it to do inference without recording CAMs (save some RAM & latency)\r\ncam_extractor._hooks_enabled = False\r\nimg = read_image(\"path\u002Fto\u002Fyour\u002Fimage.png\")\r\ninput_tensor = normalize(resize(img, (224, 224)) \u002F 255., [0.485, 0.456, 0.406], [0.229, 0.224, 0.225])\r\nwith torch.inference_mode():\r\n    out = model(input_tensor.unsqueeze(0))\r\n\r\n# Re-enable it\r\ncam_extractor._hooks_enabled = True\r\n```\r\n\r\nAfter\r\n```diff\r\nimport torch\r\nfrom 
torchvision.models import resnet18\r\nfrom torchcam.methods import LayerCAM\r\n\r\nmodel = resnet18(pretrained=True).eval()\r\n# Hooks are enabled by default\r\ncam_extractor = LayerCAM(model)\r\n\r\n# Disable it to do inference without recording CAMs (save some RAM & latency)\r\n- cam_extractor._hooks_enabled = False\r\n+ cam_extractor.disable_hooks()\r\nimg = read_image(\"path\u002Fto\u002Fyour\u002Fimage.png\")\r\ninput_tensor = normalize(resize(img, (224, 224)) \u002F 255., [0.485, 0.456, 0.406], [0.229, 0.224, 0.225])\r\nwith torch.inference_mode():\r\n    out = model(input_tensor.unsqueeze(0))\r\n\r\n# Re-enable it\r\n- cam_extractor._hooks_enabled = True\r\n+ cam_extractor.enable_hooks()\r\n```\r\n\r\n## What's Changed\r\n### Miscellaneous\r\n* chore(deps): bump ruff to 0.2.0 by @frgfm in https:\u002F\u002Fgithub.com\u002Ffrgfm\u002Ftorch-cam\u002Fpull\u002F235\r\n* build(deps-dev): bump ruff from 0.3.0 to 0.4.1 by @dependabot[bot] in https:\u002F\u002Fgithub.com\u002Ffrgfm\u002Ftorch-cam\u002Fpull\u002F241\r\n* build(deps-dev): bump mypy from 1.8.0 to 1.10.0 by @dependabot[bot] in https:\u002F\u002Fgithub.com\u002Ffrgfm\u002Ftorch-cam\u002Fpull\u002F243\r\n* build(deps): bump ruff from 0.4.1 to 0.4.9 by @dependabot[bot] in https:\u002F\u002Fgithub.com\u002Ffrgfm\u002Ftorch-cam\u002Fpull\u002F254\r\n* build(deps): bump ruff from 0.4.9 to 0.4.10 by @dependabot[bot] in https:\u002F\u002Fgithub.com\u002Ffrgfm\u002Ftorch-cam\u002Fpull\u002F255\r\n* build(deps-dev): bump ruff from 0.4.10 to 0.5.0 by @dependabot[bot] in https:\u002F\u002Fgithub.com\u002Ffrgfm\u002Ftorch-cam\u002Fpull\u002F258\r\n* build(deps): bump ruff from 0.5.0 to 0.5.1 by @dependabot[bot] in https:\u002F\u002Fgithub.com\u002Ffrgfm\u002Ftorch-cam\u002Fpull\u002F259\r\n* build(deps): bump ruff from 0.5.1 to 0.5.2 by @dependabot[bot] in https:\u002F\u002Fgithub.com\u002Ffrgfm\u002Ftorch-cam\u002Fpull\u002F260\r\n* build(deps-dev): bump ruff from 0.5.2 to 0.5.5 by @dependabot[bot] in 
https:\u002F\u002Fgithub.com\u002Ffrgfm\u002Ftorch-cam\u002Fpull\u002F266\r\n* build(deps): bump ruff from 0.5.5 to 0.5.7 by @dependabot[bot] in https:\u002F\u002Fgithub.com\u002Ffrgfm\u002Ftorch-cam\u002Fpull\u002F269\r\n* build(deps): bump ruff from 0.5.7 to 0.6.1 by @dependabot[bot] in https:\u002F\u002Fgithub.com\u002Ffrgfm\u002Ftorch-cam\u002Fpull\u002F272\r\n* build(deps): bump actions\u002Fdownload-artifact from 2 to 4.1.7 in \u002F.github\u002Fworkflows by @dependabot[bot] in https:\u002F\u002Fgithub.com\u002Ffrgfm\u002Ftorch-cam\u002Fpull\u002F275\r\n* build(deps): bump ruff from 0.6.1 to 0.8.2 by @dependabot[bot] in https:\u002F\u002Fgithub.com\u002Ffrgfm\u002Ftorch-cam\u002Fpull\u002F291\r\n* build(deps): bump the gh-actions group across 1 directory with 5 updates by @dependabot[bot] in https:\u002F\u002Fgithub.com\u002Ffrgfm\u002Ftorch-cam\u002Fpull\u002F293\r\n* build(deps): bump ruff from 0.8.2 to 0.8.4 by @dependabot[bot] in https:\u002F\u002Fgithub.com\u002Ffrgfm\u002Ftorch-cam\u002Fpull\u002F295\r\n* ci(dependabot): change the update rule for Github actions by @frgfm in https:\u002F\u002Fgithub.com\u002Ffrgfm\u002Ftorch-cam\u002Fpull\u002F296\r\n* build(deps): bump astral-sh\u002Fsetup-uv from 4 to 5 by @dependabot[bot] in https:\u002F\u002Fgithub.com\u002Ffrgfm\u002Ftorch-cam\u002Fpull\u002F297\r\n* ci(github): bump uv to 0.5.13 by @frgfm in https:\u002F\u002Fgithub.com\u002Ffrgfm\u002Ftorch-cam\u002Fpull\u002F298\r\n* build(deps): update pre-commit requirement from \u003C4.0.0,>=3.0.0 to >=3.0.0,\u003C5.0.0 by @dependabot[bot] in https:\u002F\u002Fgithub.com\u002Ffrgfm\u002Ftorch-cam\u002Fpull\u002F283\r\n* docs(readme): update installation instructions and badges by @frgfm in https:\u002F\u002Fgithub.com\u002Ffrgfm\u002Ftorch-cam\u002Fpull\u002F299\r\n* build(deps-dev): bump mypy to 1.14.0 by @frgfm in https:\u002F\u002Fgithub.com\u002Ffrgfm\u002Ftorch-cam\u002Fpull\u002F300\r\n* build(deps): bump ruff from 0.8.4 to 0.9.7 by @dependabot[bot] in 
https:\u002F\u002Fgithub.com\u002Ffrgfm\u002Ftorch-cam\u002Fpull\u002F310\r\n* build(deps): bump JamesIves\u002Fgithub-pages-deploy-action from 4.7.2 to 4.7.3 by @dependabot[bot] in https:\u002F\u002Fgithub.com\u002Ffrgfm\u002Ftorch-cam\u002Fpull\u002F311\r\n* docs: update copyright year by @frgfm in https:\u002F\u002Fgithub.com\u002Ffrgfm\u002Ftorch-cam\u002Fpull\u002F314\r\n* build(deps-dev): bump mypy to 1.15.0 by @frgfm in https:\u002F\u002Fgithub.com\u002Ffrgfm\u002Ftorch-cam\u002Fpull\u002F315\r\n* build(deps): bump ruff from 0.9.7 to 0.9.9 by @dependabot[bot] in https:\u002F\u002Fgithub.com\u002Ffrgfm\u002Ftorch-cam\u002Fpull\u002F313\r\n* build(deps): bump astral-sh\u002Fsetup-uv from 5 to 6 by @dependabot[bot] in https:\u002F\u002Fgithub.com\u002Ffrgfm\u002Ftorch-cam\u002Fpull\u002F327\r\n* build(deps): bump webfactory\u002Fssh-agent from 0.9.0 to 0.9.1 by @dependabot[bot] in https:\u002F\u002Fgithub.com\u002Ffrgfm\u002Ftorch-cam\u002Fpull\u002F321\r\n* build(deps): bump ruff from 0.9.9 to 0.11.9 by @dependabot[bot] in https:\u002F\u002Fgithub.com\u002Ffrgfm\u002Ftorch-cam\u002Fpull\u002F329\r\n* build(deps): bump actions\u002Fsetup-p","2025-10-27T15:27:08",{"id":160,"version":161,"summary_zh":162,"released_at":163},200655,"v0.4.0","This minor release adds evaluation metrics to the package and bumps PyTorch to version 2.0\r\n\r\nNote: TorchCAM 0.4.0 requires PyTorch 2.0.0 or higher.\r\n\r\n## Highlights\r\n\r\n### Evaluation metrics\r\n\r\nThis release comes with a standard way to evaluate interpretability methods. 
This allows users to better evaluate models' robustness:\r\n\r\n```python\r\nfrom functools import partial\r\nfrom torchcam.metrics import ClassificationMetric\r\nmetric = ClassificationMetric(cam_extractor, partial(torch.softmax, dim=-1))\r\nmetric.update(input_tensor)\r\nmetric.summary()\r\n```\r\n\r\n## What's Changed\r\n### New Features 🚀\r\n* feat: Added CAM evaluation metric by @frgfm in https:\u002F\u002Fgithub.com\u002Ffrgfm\u002Ftorch-cam\u002Fpull\u002F172\r\n* ci: Added FUNDING button by @frgfm in https:\u002F\u002Fgithub.com\u002Ffrgfm\u002Ftorch-cam\u002Fpull\u002F182\r\n* feat: Added new TV models to demo by @frgfm in https:\u002F\u002Fgithub.com\u002Ffrgfm\u002Ftorch-cam\u002Fpull\u002F184\r\n* ci: Added precommits, bandit & autoflake by @frgfm in https:\u002F\u002Fgithub.com\u002Ffrgfm\u002Ftorch-cam\u002Fpull\u002F192\r\n* feat: Removes model hooks when the context manager exits by @frgfm in https:\u002F\u002Fgithub.com\u002Ffrgfm\u002Ftorch-cam\u002Fpull\u002F198\r\n### Bug Fixes 🐛\r\n* chore: Applied post-release modifications by @frgfm in https:\u002F\u002Fgithub.com\u002Ffrgfm\u002Ftorch-cam\u002Fpull\u002F180\r\n* fix: Fixed division by zero during normalization by @frgfm in https:\u002F\u002Fgithub.com\u002Ffrgfm\u002Ftorch-cam\u002Fpull\u002F185\r\n* fix: Fixed zero division for weight computation in gradient based methods by @frgfm in https:\u002F\u002Fgithub.com\u002Ffrgfm\u002Ftorch-cam\u002Fpull\u002F187\r\n* docs: Fixes README badges by @frgfm in https:\u002F\u002Fgithub.com\u002Ffrgfm\u002Ftorch-cam\u002Fpull\u002F194\r\n* ci: Fixes issue templates by @frgfm in https:\u002F\u002Fgithub.com\u002Ffrgfm\u002Ftorch-cam\u002Fpull\u002F196\r\n* fix: Fixes SmoothGradCAMpp by @frgfm in https:\u002F\u002Fgithub.com\u002Ffrgfm\u002Ftorch-cam\u002Fpull\u002F204\r\n### Improvements\r\n* docs: Improved documentation build script by @frgfm in https:\u002F\u002Fgithub.com\u002Ffrgfm\u002Ftorch-cam\u002Fpull\u002F183\r\n* docs: Updates README & 
CONTRIBUTING by @frgfm in https:\u002F\u002Fgithub.com\u002Ffrgfm\u002Ftorch-cam\u002Fpull\u002F193\r\n* feat: Removed param grad computation in scripts by @frgfm in https:\u002F\u002Fgithub.com\u002Ffrgfm\u002Ftorch-cam\u002Fpull\u002F201\r\n* style: Updates precommit hooks and mypy config by @frgfm in https:\u002F\u002Fgithub.com\u002Ffrgfm\u002Ftorch-cam\u002Fpull\u002F203\r\n* refactor: Replaces flake8 by ruff and updates python version by @frgfm in https:\u002F\u002Fgithub.com\u002Ffrgfm\u002Ftorch-cam\u002Fpull\u002F211\r\n* style: Bumps ruff & black, removes isort & pydocstyle by @frgfm in https:\u002F\u002Fgithub.com\u002Ffrgfm\u002Ftorch-cam\u002Fpull\u002F216\r\n* style: Bumps ruff and updates torch & torchvision version specifiers by @frgfm in https:\u002F\u002Fgithub.com\u002Ffrgfm\u002Ftorch-cam\u002Fpull\u002F219\r\n* docs: Updates CONTRIBUTING & README by @frgfm in https:\u002F\u002Fgithub.com\u002Ffrgfm\u002Ftorch-cam\u002Fpull\u002F220\r\n* test: Speeds up test suite using plugins by @frgfm in https:\u002F\u002Fgithub.com\u002Ffrgfm\u002Ftorch-cam\u002Fpull\u002F222\r\n* ci: Adds multiple build CI jobs by @frgfm in https:\u002F\u002Fgithub.com\u002Ffrgfm\u002Ftorch-cam\u002Fpull\u002F223\r\n* feat: Removes warnings for torchvision and matplotlib by @frgfm in https:\u002F\u002Fgithub.com\u002Ffrgfm\u002Ftorch-cam\u002Fpull\u002F224\r\n\r\n\r\n**Full Changelog**: https:\u002F\u002Fgithub.com\u002Ffrgfm\u002Ftorch-cam\u002Fcompare\u002Fv0.3.2...v0.4.0","2023-10-19T18:37:25",{"id":165,"version":166,"summary_zh":167,"released_at":168},200656,"v0.3.2","This patch release fixes the Score-CAM methods and improves the base API for CAM computation.\r\n\r\n**Note**: TorchCAM 0.3.2 requires PyTorch 1.7.0 or higher.\r\n\r\n## Highlights\r\n\r\n### :hushed: Batch processing\r\n\r\nCAM computation now supports batch sizes larger than 1 (#143) ! 
Practically, this means that you can compute CAMs for multiple samples at the same time, which will let you make the most of your GPU as well :zap: \r\n\r\nThe following snippet:\r\n```python\r\nimport torch\r\nfrom torchcam.methods import LayerCAM\r\nfrom torchvision.models import resnet18\r\n\r\n# A preprocessed (resized & normalized) tensor\r\nimg_tensor = torch.rand((2, 3, 224, 224))\r\nmodel = resnet18(pretrained=True).eval()\r\n# Hook your model before inference\r\ncam_extractor = LayerCAM(model)\r\nout = model(img_tensor)\r\n# Compute the CAM\r\nactivation_map = cam_extractor(out[0].argmax().item(), out)\r\nprint(activation_map[0].ndim)\r\n```\r\nwill yield `3` as the batch dimension is now also used.\r\n\r\n### :paintbrush:  Documentation theme\r\n\r\nNew year, new documentation theme!\r\nFor clarity and improved interface, the documentation theme was changed from [Read the Docs](https:\u002F\u002Fsphinx-rtd-theme.readthedocs.io\u002Fen\u002Fstable\u002F) to [Furo](https:\u002F\u002Fpradyunsg.me\u002Ffuro\u002Fquickstart\u002F) (#162) \r\n\r\n![image](https:\u002F\u002Fuser-images.githubusercontent.com\u002F26927750\u002F182440159-2c928ba5-62a1-4ca7-ae37-68693200947b.png)\r\n\r\nThis comes with nice features like dark mode and edit button!\r\n\r\n### :computer_mouse: Contribution process\r\n\r\nContributions are important to OSS projects, and for this reason, a few improvements were made to the contribution process:\r\n- added a Makefile for easier development (#109)\r\n- added a dedicated README for the documentation (#109)\r\n- updated CONTRIBUTING (#109, #166)\r\n\r\n## Breaking changes\r\n\r\n### CAM signature\r\n\r\nCAM extractors now outputs a list of tensors. 
The size of the list is equal to the number of target layers and ordered the same way.\r\nEach of these elements used to be a 2D spatial tensor, and is now a 3D tensor to include the batch dimension:\r\n\r\n```python\r\n# Model was hooked and a tensor of shape (2, 3, 224, 224) was forwarded to it\r\namaps = cam_extractor(0, out)\r\nfor elt in amaps: print(elt.shape)\r\n```\r\nwill, from now on, yield\r\n```\r\ntorch.Size([2, 7, 7])\r\n```\r\n\r\n## What's Changed\r\n### Breaking Changes 🛠\r\n* feat: Adds support for batch processing and fixes ScoreCAMs by @frgfm in https:\u002F\u002Fgithub.com\u002Ffrgfm\u002Ftorch-cam\u002Fpull\u002F143\r\n### New Features 🚀\r\n* ci: Added release note template and a job to check PR labels by @frgfm in https:\u002F\u002Fgithub.com\u002Ffrgfm\u002Ftorch-cam\u002Fpull\u002F138\r\n* docs: Added CITATION file by @frgfm in https:\u002F\u002Fgithub.com\u002Ffrgfm\u002Ftorch-cam\u002Fpull\u002F144\r\n### Bug Fixes 🐛\r\n* fix: Updated headers and added pydocstyle by @frgfm in https:\u002F\u002Fgithub.com\u002Ffrgfm\u002Ftorch-cam\u002Fpull\u002F137\r\n* chore: Updated PyTorch version specifier by @frgfm in https:\u002F\u002Fgithub.com\u002Ffrgfm\u002Ftorch-cam\u002Fpull\u002F149\r\n* docs: Fixed deprecated method call by @frgfm in https:\u002F\u002Fgithub.com\u002Ffrgfm\u002Ftorch-cam\u002Fpull\u002F158\r\n* chore: Fixed jinja2 deps (subdep of sphinx) by @frgfm in https:\u002F\u002Fgithub.com\u002Ffrgfm\u002Ftorch-cam\u002Fpull\u002F159\r\n* docs: Fixed docstring of ISCAM by @frgfm in https:\u002F\u002Fgithub.com\u002Ffrgfm\u002Ftorch-cam\u002Fpull\u002F160\r\n* docs: Fixed multi-version build by @frgfm in https:\u002F\u002Fgithub.com\u002Ffrgfm\u002Ftorch-cam\u002Fpull\u002F163\r\n* docs: Fixed codacy badge by @frgfm in https:\u002F\u002Fgithub.com\u002Ffrgfm\u002Ftorch-cam\u002Fpull\u002F164\r\n* docs: Fixed typo in CONTRIBUTING by @frgfm in https:\u002F\u002Fgithub.com\u002Ffrgfm\u002Ftorch-cam\u002Fpull\u002F166\r\n* docs: Fixed 
author entry in pyproject by @frgfm in https:\u002F\u002Fgithub.com\u002Ffrgfm\u002Ftorch-cam\u002Fpull\u002F168\r\n* style: Fixed import order by @frgfm in https:\u002F\u002Fgithub.com\u002Ffrgfm\u002Ftorch-cam\u002Fpull\u002F175\r\n### Improvements\r\n* docs: Added PR template and tools for contributing by @frgfm in https:\u002F\u002Fgithub.com\u002Ffrgfm\u002Ftorch-cam\u002Fpull\u002F109\r\n* refactor: Removed unused import by @frgfm in https:\u002F\u002Fgithub.com\u002Ffrgfm\u002Ftorch-cam\u002Fpull\u002F110\r\n* feat: Added text strip for multiple target selection in demo by @frgfm in https:\u002F\u002Fgithub.com\u002Ffrgfm\u002Ftorch-cam\u002Fpull\u002F111\r\n* refactor: Updated environment collection script by @frgfm in https:\u002F\u002Fgithub.com\u002Ffrgfm\u002Ftorch-cam\u002Fpull\u002F112\r\n* style: Updated flake8 config by @frgfm in https:\u002F\u002Fgithub.com\u002Ffrgfm\u002Ftorch-cam\u002Fpull\u002F115\r\n* ci: Updated isort config and related CI job by @frgfm in https:\u002F\u002Fgithub.com\u002Ffrgfm\u002Ftorch-cam\u002Fpull\u002F118\r\n* ci: Speeded up the example script CI check by @frgfm in https:\u002F\u002Fgithub.com\u002Ffrgfm\u002Ftorch-cam\u002Fpull\u002F130\r\n* refactor: Updated the timing function for latency eval by @frgfm in https:\u002F\u002Fgithub.com\u002Ffrgfm\u002Ftorch-cam\u002Fpull\u002F129\r\n* docs: Updated TOC of documentation by @frgfm in https:\u002F\u002Fgithub.com\u002Ffrgfm\u002Ftorch-cam\u002Fpull\u002F161\r\n* refactor: Updated build config and documentation theme by @frgfm in https:\u002F\u002Fgithub.com\u002Ffrgfm\u002Ftorch-cam\u002Fpull\u002F162\r\n* style: Updated mypy and isort configs by @frgfm in https:\u002F\u002Fgithub.com\u002Ffrgfm\u002Ftorch-cam\u002Fpull\u002F167\r\n* chore: Improved version specifiers and fixed conda recipe by @frgfm in https:\u002F\u002Fgithub.com\u002Ffrgfm\u002Ftorch-cam\u002Fpull\u002F169\r\n* docs: Fixed README badge and updated documentation by @frgfm in 
https:\u002F\u002Fgithub.com\u002Ffrgfm\u002Ftorch-cam\u002Fpull\u002F170","2022-08-02T18:03:31",{"id":170,"version":171,"summary_zh":172,"released_at":173},200657,"v0.3.1","This patch release adds new features to the demo and reorganizes the package for a clearer hierarchy.\r\n\r\n**Note**: TorchCAM 0.3.1 requires PyTorch 1.5.1 or higher.\r\n\r\n# Highlights\r\n## CAM fusion is coming to the demo :rocket: \r\n\r\nWith release 0.3.0, the support of multiple target layers was added as well as CAM fusion. The demo was updated to automatically fuse CAMs when you hooked multiple layers (add a \"+\" separator between each layer name):\r\n\r\n![demo](https:\u002F\u002Fuser-images.githubusercontent.com\u002F26927750\u002F139595071-190aad5b-6515-4833-9f18-1774fb1fd719.png)\r\n\r\n\r\n# Breaking changes\r\n## Submodule renaming\r\n\r\nTo anticipate further developments of the library, modules were renamed:\r\n- `torchcam.cams` was renamed into `torchcam.methods`\r\n- `torchcam.cams.utils` was renamed and made private (`torchcam.methods._utils`) since it's API may evolve quickly\r\n- activation-based CAM methods are now implemented in `torchcam.methods.activation` rather than `torchcam.cams.cam`\r\n- gradient-based CAM methods are now implemented in `torchcam.methods.gradient` rather than `torchcam.cams.gradcam`\r\n\r\n0.3.0 | 0.3.1\r\n-- | --\r\n`>>> from torchcam.cams import LayerCAM` | `>>> from torchcam.methods import LayerCAM`  |\r\n\r\n\r\n## What's Changed\r\n* chore: Made post release modifications by @frgfm in https:\u002F\u002Fgithub.com\u002Ffrgfm\u002Ftorch-cam\u002Fpull\u002F103\r\n* docs: Updated changelog by @frgfm in https:\u002F\u002Fgithub.com\u002Ffrgfm\u002Ftorch-cam\u002Fpull\u002F104\r\n* feat: Added possibility to retrieve multiple CAMs in demo by @frgfm in https:\u002F\u002Fgithub.com\u002Ffrgfm\u002Ftorch-cam\u002Fpull\u002F105\r\n* refactor: Reorganized package hierarchy by @frgfm in 
https:\u002F\u002Fgithub.com\u002Ffrgfm\u002Ftorch-cam\u002Fpull\u002F106\r\n* docs: Fixed LaTeX syntax in docstrings by @frgfm in https:\u002F\u002Fgithub.com\u002Ffrgfm\u002Ftorch-cam\u002Fpull\u002F107\r\n\r\n\r\n**Full Changelog**: https:\u002F\u002Fgithub.com\u002Ffrgfm\u002Ftorch-cam\u002Fcompare\u002Fv0.3.0...v0.3.1","2021-10-31T17:55:10",{"id":175,"version":176,"summary_zh":177,"released_at":178},200658,"v0.3.0","This release extends CAM methods with Layer-CAM, greatly improves the core features (CAM computation for multiple layers at once, CAM fusion, support of `torch.nn.Module`), while improving accessibility for first-time users.\r\n\r\n**Note**: TorchCAM 0.3.0 requires PyTorch 1.5.1 or higher.\r\n\r\n# Highlights\r\n### Enter Layer-CAM\r\n\r\nThe previous release saw the introduction of Score-CAM variants, and this one introduces you to [Layer-CAM](http:\u002F\u002Fmftp.mmcheng.net\u002FPapers\u002F21TIP_LayerCAM.pdf), which is meant to be considerably faster, while offering very competitive localization cues!\r\n\r\nJust like any other CAM method, you can now use it as follows:\r\n\r\n```python\r\nfrom torchcam.cams import LayerCAM\r\n# model = ....\r\n# Hook the model\r\ncam_extractor = LayerCAM(model)\r\n```\r\n\r\nConsequently, the illustration of visual outputs for all CAM methods has been updated so that you can better choose the option that suits you:\r\n\r\n![cam_example](https:\u002F\u002Fgithub.com\u002Ffrgfm\u002Ftorch-cam\u002Freleases\u002Fdownload\u002Fv0.2.0\u002Fcam_example_2rows.png)\r\n\r\n\r\n### Computing CAMs for multiple layers & CAM fusion\r\n\r\nA class activation map is specific to a given layer in a model. 
To fully capture the influence of visual traits on your classification output, you might want to explore the CAMs for multiple layers.\r\n\r\nFor instance, here are the CAMs on the layers \"layer2\", \"layer3\" and \"layer4\" of a `resnet18`:\r\n\r\n```python\r\nfrom torchvision.io.image import read_image\r\nfrom torchvision.models import resnet18\r\nfrom torchvision.transforms.functional import normalize, resize, to_pil_image\r\nimport matplotlib.pyplot as plt\r\n\r\nfrom torchcam.cams import LayerCAM\r\nfrom torchcam.utils import overlay_mask\r\n\r\n# Download an image\r\n!wget https:\u002F\u002Fwww.woopets.fr\u002Fassets\u002Fraces\u002F000\u002F066\u002Fbig-portrait\u002Fborder-collie.jpg\r\n# Set this to your image path if you wish to run it on your own data\r\nimg_path = \"border-collie.jpg\"\r\n\r\n# Get your input\r\nimg = read_image(img_path)\r\n# Preprocess it for your chosen model\r\ninput_tensor = normalize(resize(img, (224, 224)) \u002F 255., [0.485, 0.456, 0.406], [0.229, 0.224, 0.225])\r\n# Get your model\r\nmodel = resnet18(pretrained=True).eval()\r\n# Hook the model\r\ncam_extractor = LayerCAM(model, [\"layer2\", \"layer3\", \"layer4\"])\r\n\r\nout = model(input_tensor.unsqueeze(0))\r\ncams = cam_extractor(out.squeeze(0).argmax().item(), out)\r\n# Plot the CAMs\r\n_, axes = plt.subplots(1, len(cam_extractor.target_names))\r\nfor idx, name, cam in zip(range(len(cam_extractor.target_names)), cam_extractor.target_names, cams):\r\n  axes[idx].imshow(cam.numpy()); axes[idx].axis('off'); axes[idx].set_title(name);\r\nplt.show()\r\n```\r\n\r\n![multi_cams](https:\u002F\u002Fuser-images.githubusercontent.com\u002F26927750\u002F139589179-6ad35490-d6f3-4705-900f-3a0267bf02c9.png)\r\n\r\nNow, the way you would combine those together is up to you. By default, most approaches use an element-wise maximum. 
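That element-wise maximum can be sketched in a few lines of plain PyTorch (an illustrative snippet, not part of TorchCAM's API; it assumes the per-layer CAMs have already been interpolated to a common spatial size, which in practice they are not, since layer2/layer3/layer4 of a `resnet18` produce maps of different resolutions):

```python
import torch

# Three placeholder CAMs, already resized to a common 7x7 spatial size
cams = [torch.rand(7, 7) for _ in range(3)]

# Element-wise maximum across the stacked maps: each output pixel keeps
# the strongest activation observed at that location across layers
fused = torch.stack(cams).max(dim=0).values
```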
But LayerCAM has its own fusion method:\r\n\r\n```python\r\n# Let's fuse them\r\nfused_cam = cam_extractor.fuse_cams(cams)\r\n# Plot the raw version\r\nplt.imshow(fused_cam.numpy()); plt.axis('off'); plt.title(\" + \".join(cam_extractor.target_names)); plt.show()\r\n```\r\n\r\n![fused_cams](https:\u002F\u002Fuser-images.githubusercontent.com\u002F26927750\u002F139589184-325bfe66-762a-421d-9fd3-f9e62c3c4558.png)\r\n\r\n```python\r\n# Overlay it on the image\r\nresult = overlay_mask(to_pil_image(img), to_pil_image(fused_cam, mode='F'), alpha=0.5)\r\n# Plot the result\r\nplt.imshow(result); plt.axis('off'); plt.title(\" + \".join(cam_extractor.target_names)); plt.show()\r\n```\r\n\r\n![fused_overlay](https:\u002F\u002Fuser-images.githubusercontent.com\u002F26927750\u002F139590233-9217217e-2bc4-4a4e-9b41-3178db9afc8a.png)\r\n\r\n\r\n### Support of `torch.nn.Module` as `target_layer`\r\n\r\nAs part of making the API more robust, CAM constructors now also accept `torch.nn.Module` as `target_layer`. Previously, you had to pass the name of the layer as a string, but you can now pass the object reference directly if you prefer:\r\n```python\r\nfrom torchcam.cams import LayerCAM\r\n# model = ....\r\n# Hook the model\r\ncam_extractor = LayerCAM(model, model.layer4)\r\n```\r\n\r\n### :zap: Latency benchmark :zap: \r\n\r\nSince CAMs can be used in localization or production pipelines, it is important to consider latency along with pure visual output quality. For this reason, a latency evaluation script has been included in this release along with a full [benchmark table](https:\u002F\u002Fgithub.com\u002Ffrgfm\u002Ftorch-cam#latency-benchmark).\r\n\r\nShould you wish to have latency metrics on your dedicated hardware, you can run the script on your own:\r\n```shell\r\npython scripts\u002Feval_latency.py SmoothGradCAMpp --size 224\r\n```\r\n\r\n### Notebooks :play_or_pause_button: \r\n\r\nDo you prefer to only run code rather than write it? 
Perhaps you only want to tweak a few things?\r\nThen enjoy the brand new [Jupyter notebooks](https:\u002F\u002Fgithub.com\u002Ffrgfm\u002Ftorch-cam\u002Ftree\u002Fmaster\u002Fnotebooks) that you can either run locally or on [Google Colab](https:\u002F\u002Fcolab.research.google.com\u002F)!\r\n\r\n### :hugs: Live demo :hugs:  \r\n\r\nThe ML community was recently blessed by HuggingFace with their beta of [Spaces](https:\u002F\u002Fhuggingface.","2021-10-31T15:24:56",{"id":180,"version":181,"summary_zh":182,"released_at":183},200659,"v0.2.0","This release extends TorchCAM compatibility to 3D inputs, and improves documentation.\r\n\r\n**Note**: TorchCAM 0.2.0 requires PyTorch 1.5.1 or higher.\r\n\r\n# Highlights\r\n### Compatibility for inputs with more than 2 spatial dimensions\r\nThe first papers about CAM methods were built for classification models with spatially 2D inputs. However, the latest methods can be extrapolated to higher-dimensional inputs, and it's now live:\r\n```python\r\nimport torch\r\nfrom torchcam.cams import SmoothGradCAMpp\r\n# Define your model compatible with 3D inputs\r\nvideo_model = ...\r\nextractor = SmoothGradCAMpp(video_model)\r\n# Forward your input\r\nscores = video_model(torch.rand((1, 3, 32, 224, 224)))\r\n# Retrieve the CAM\r\ncam = extractor(scores[0].argmax().item(), scores)\r\n```\r\n\r\n### Multi-version documentation\r\nWhile the documentation was kept up to date with the latest commit on the main branch, if you were running an older release of the library, you previously had no corresponding documentation.\r\n\r\nAs of now, you can select the version of the documentation you wish to access (stable releases or latest commit):\r\n![torchcam_doc](https:\u002F\u002Fuser-images.githubusercontent.com\u002F26927750\u002F114251220-8f202e00-99a0-11eb-88c4-3bc43155da3f.png)\r\n\r\n\r\n### Demo app\r\nSince spatial information is at the very core of TorchCAM, a minimal [Streamlit](https:\u002F\u002Fstreamlit.io\u002F) demo app was added to explore the 
activation of your favorite models. You can run the demo with the following command:\r\n```\r\nstreamlit run demo\u002Fapp.py\r\n```\r\n\r\nHere is how it renders when retrieving the heatmap using `SmoothGradCAMpp` on a pretrained `resnet18`:\r\n![torchcam_demo](https:\u002F\u002Fgithub.com\u002Ffrgfm\u002Ftorch-cam\u002Freleases\u002Fdownload\u002Fv0.1.2\u002Ftorchcam_demo.png)\r\n\r\n# New features\r\n\r\n## CAMs\r\nImplementations of CAM methods\r\n- Enabled CAM compatibility for inputs with more than 2 spatial dimensions #45 (@frgfm)\r\n- Added support of XGradCAM #47 (@frgfm)\r\n\r\n## Test\r\nVerifications of the package well-being before release\r\n- Added unittests for XGradCAM #47 (@frgfm)\r\n\r\n## Documentation\r\nOnline resources for potential users\r\n- Added references to XGradCAM in README and documentation #47 (@frgfm)\r\n- Added multi-version documentation & added github star button #53, #54, #55, #56 (@frgfm)\r\n- Revamped README #59 (@frgfm), focusing on short, easy code snippets\r\n- Improved documentation #60 (@frgfm)\r\n\r\n## Others\r\nOther tools and implementations\r\n- Added issue templates for bug report and feature request #49 (@frgfm)\r\n- Added option to specify a single CAM method in example script #52 (@frgfm)\r\n- Added minimal demo app #59 (@frgfm)\r\n\r\n# Bug fixes\r\n## CAMs\r\n- Fixed automatic layer resolution on GPU #41 (@frgfm)\r\n- Fixed backward hook warnings for PyTorch >= 1.8.0 #58 (@frgfm)\r\n\r\n## Utils\r\n- Fixed RGBA -> RGB conversion in `overlay_mask` #38 (@alexandrosstergiou)\r\n\r\n## Test\r\n- Fixed `overlay_mask` unittest #38 (@alexandrosstergiou)\r\n\r\n## Documentation\r\n- Fixed codacy badge in README #46 (@frgfm)\r\n- Fixed typo in documentation #62 (@frgfm)\r\n\r\n## Others\r\n- Fixed CI job for conda build #34 (@frgfm)\r\n- Fixed model mode in example script #37 (@frgfm)\r\n- Fixed sphinx version #40 (@frgfm)\r\n- Fixed usage instructions in README #43 (@frgfm)\r\n- Fixed example script for local image input #51 
(@frgfm)\r\n\r\n# Improvements\r\n\r\n## CAMs\r\n- Added NaN check in gradcams #37 (@frgfm)\r\n\r\n## Test\r\n- Added NaN check unittest for gradcam #37 (@frgfm)\r\n- Switched from `unittest` to `pytest` #45 (@frgfm) and split test files by module\r\n\r\n## Documentation\r\n- Updated README badges #34, illustration #39 and usage instructions #41 (@frgfm)\r\n- Added instructions to run all CI checks locally in CONTRIBUTING #34, #45 (@frgfm)\r\n- Updated project hierarchy description in CONTRIBUTING #43 (@frgfm)\r\n- Added minimal code snippet in documentation #41 (@frgfm)\r\n\r\n## Others\r\n- Updated version in setup #34 and requirements #61 (@frgfm)\r\n- Leveraged automatic layer resolution in example script #41 (@frgfm)\r\n- Updated CI job to run unittests #45 (@frgfm)","2021-04-10T00:05:43",{"id":185,"version":186,"summary_zh":187,"released_at":188},200660,"v0.1.2","This release adds an implementation of IS-CAM and greatly improves interface.\r\n\r\n**Note**: torchcam 0.1.2 requires PyTorch 1.1 or newer.\r\n\r\n# Highlights\r\n\r\n## CAMs\r\nImplementation of CAM extractor\r\n**New**\r\n- Add an IS-CAM implementation #13 (@frgfm)\r\n- Added automatic target layer resolution #32 (@frgfm)\r\n\r\n**Improvements**\r\n- Added support for submodule hooking #21 (@pkmandke)\r\n\r\n**Fixes**\r\n- Fixed hooking mechanism edge case #23 (@frgfm)\r\n\r\n## Test\r\nVerifications of the package well-being before release\r\n**New**\r\n- Updated test for `torchcam.cams` #13, #30 (@frgfm)\r\n\r\n**Improvements**\r\n- Removed pretrained model loading in unittests #25 (@frgfm)\r\n- Switched all models to eval, removed gradient when not required, and changed to simpler models #33 (@frgfm)\r\n\r\n## Documentation\r\nOnline resources for potential users\r\n**New**\r\n- Added entry for IS-CAM #13, #30 (@frgfm)\r\n\r\n**Fixes**\r\n- Fixed examples in docstrings of gradient-based CAMs #28, #33 (@frgfm)\r\n\r\n## Others\r\nOther tools and implementations\r\n**New**\r\n- Added annotation 
typing to the codebase & mypy verification CI job #19 (@frgfm)\r\n- Added package publishing verification jobs #12 (@frgfm)\r\n\r\n**Improvements**\r\n- Improved example script #15 (@frgfm)\r\n- Optimized CI cache #20 (@frgfm)\r\n\r\n**Fixes**\r\n- Fixed coverage upload job #16 (@frgfm)\r\n- Fixed doc deployment job #24 (@frgfm)\r\n- Fixed conda recipe #29 (@frgfm)\r\n","2020-12-27T01:43:31",{"id":190,"version":191,"summary_zh":192,"released_at":193},200661,"v0.1.1","This release adds implementations of SmoothGradCAM++, Score-CAM and SS-CAM.\r\n\r\n**Note**: torchcam 0.1.1 requires PyTorch 1.1 or newer.\r\n\r\n_brought to you by @frgfm_\r\n\r\n# Highlights\r\n\r\n## CAMs\r\nImplementation of CAM extractor\r\n**New**\r\n- Add a SmoothGradCAM++ implementation (#4)\r\n- Add a Score-CAM implementation (#5)\r\n- Add an SS-CAM implementation (#11).\r\n\r\n**Improvements**\r\n- Refactor CAM extractor for better code reusability (#6)\r\n\r\n## Test\r\nVerifications of the package well-being before release\r\n**New**\r\n- Updated test for `torchcam.cams` (#4, #5, #11)\r\n\r\n## Documentation\r\nOnline resources for potential users\r\n**Improvements**\r\n- Add detailed explanation of CAM computation (#8, #11)\r\n- Add web search referencing of documentation (#7)\r\n\r\n## Others\r\nOther tools and implementations\r\n- Fixed conda upload job (#3)\r\n","2020-08-03T21:47:49",{"id":195,"version":196,"summary_zh":197,"released_at":198},200662,"v0.1.0","This release adds implementations of CAM, GradCAM and GradCAM++.\r\n\r\n**Note**: torchcam 0.1.0 requires PyTorch 1.1 or newer.\r\n\r\n_brought to you by @frgfm_\r\n\r\n# Highlights\r\n\r\n## GradCAM\r\nImplementation of gradient-based CAM extractor\r\n**New**\r\n- Add a CAM implementation (#2)\r\n- Add Grad-CAM and Grad-CAM++ implementations (#1, #2).\r\n\r\n## Test\r\nVerifications of the package well-being before release\r\n**New**\r\n- Add test for `torchcam.cams` (#1, #2)\r\n- Add test for `torchcam.utils` (#1)\r\n\r\n## 
Documentation\r\nOnline resources for potential users\r\n**New**\r\n- Add sphinx automatic documentation build for existing features (#1, #2)\r\n- Add contribution guidelines (#1)\r\n- Add installation, usage, and benchmark in README (#1, #2)\r\n\r\n## Others\r\nOther tools and implementations\r\n- Add `overlay_mask` to easily overlay a mask on images (#1).\r\n","2020-03-24T01:31:36"]