[{"data":1,"prerenderedAt":-1},["ShallowReactive",2],{"similar-jacobgil--pytorch-grad-cam":3,"tool-jacobgil--pytorch-grad-cam":64},[4,17,27,35,43,56],{"id":5,"name":6,"github_repo":7,"description_zh":8,"stars":9,"difficulty_score":10,"last_commit_at":11,"category_tags":12,"status":16},3808,"stable-diffusion-webui","AUTOMATIC1111\u002Fstable-diffusion-webui","stable-diffusion-webui 是一个基于 Gradio 构建的网页版操作界面，旨在让用户能够轻松地在本地运行和使用强大的 Stable Diffusion 图像生成模型。它解决了原始模型依赖命令行、操作门槛高且功能分散的痛点，将复杂的 AI 绘图流程整合进一个直观易用的图形化平台。\n\n无论是希望快速上手的普通创作者、需要精细控制画面细节的设计师，还是想要深入探索模型潜力的开发者与研究人员，都能从中获益。其核心亮点在于极高的功能丰富度：不仅支持文生图、图生图、局部重绘（Inpainting）和外绘（Outpainting）等基础模式，还独创了注意力机制调整、提示词矩阵、负向提示词以及“高清修复”等高级功能。此外，它内置了 GFPGAN 和 CodeFormer 等人脸修复工具，支持多种神经网络放大算法，并允许用户通过插件系统无限扩展能力。即使是显存有限的设备，stable-diffusion-webui 也提供了相应的优化选项，让高质量的 AI 艺术创作变得触手可及。",162132,3,"2026-04-05T11:01:52",[13,14,15],"开发框架","图像","Agent","ready",{"id":18,"name":19,"github_repo":20,"description_zh":21,"stars":22,"difficulty_score":23,"last_commit_at":24,"category_tags":25,"status":16},1381,"everything-claude-code","affaan-m\u002Feverything-claude-code","everything-claude-code 是一套专为 AI 编程助手（如 Claude Code、Codex、Cursor 等）打造的高性能优化系统。它不仅仅是一组配置文件，而是一个经过长期实战打磨的完整框架，旨在解决 AI 代理在实际开发中面临的效率低下、记忆丢失、安全隐患及缺乏持续学习能力等核心痛点。\n\n通过引入技能模块化、直觉增强、记忆持久化机制以及内置的安全扫描功能，everything-claude-code 能显著提升 AI 在复杂任务中的表现，帮助开发者构建更稳定、更智能的生产级 AI 代理。其独特的“研究优先”开发理念和针对 Token 消耗的优化策略，使得模型响应更快、成本更低，同时有效防御潜在的攻击向量。\n\n这套工具特别适合软件开发者、AI 研究人员以及希望深度定制 AI 工作流的技术团队使用。无论您是在构建大型代码库，还是需要 AI 协助进行安全审计与自动化测试，everything-claude-code 都能提供强大的底层支持。作为一个曾荣获 Anthropic 黑客大奖的开源项目，它融合了多语言支持与丰富的实战钩子（hooks），让 AI 真正成长为懂上",138956,2,"2026-04-05T11:33:21",[13,15,26],"语言模型",{"id":28,"name":29,"github_repo":30,"description_zh":31,"stars":32,"difficulty_score":23,"last_commit_at":33,"category_tags":34,"status":16},2271,"ComfyUI","Comfy-Org\u002FComfyUI","ComfyUI 是一款功能强大且高度模块化的视觉 AI 引擎，专为设计和执行复杂的 Stable Diffusion 图像生成流程而打造。它摒弃了传统的代码编写模式，采用直观的节点式流程图界面，让用户通过连接不同的功能模块即可构建个性化的生成管线。\n\n这一设计巧妙解决了高级 AI 
绘图工作流配置复杂、灵活性不足的痛点。用户无需具备编程背景，也能自由组合模型、调整参数并实时预览效果，轻松实现从基础文生图到多步骤高清修复等各类复杂任务。ComfyUI 拥有极佳的兼容性，不仅支持 Windows、macOS 和 Linux 全平台，还广泛适配 NVIDIA、AMD、Intel 及苹果 Silicon 等多种硬件架构，并率先支持 SDXL、Flux、SD3 等前沿模型。\n\n无论是希望深入探索算法潜力的研究人员和开发者，还是追求极致创作自由度的设计师与资深 AI 绘画爱好者，ComfyUI 都能提供强大的支持。其独特的模块化架构允许社区不断扩展新功能，使其成为当前最灵活、生态最丰富的开源扩散模型工具之一，帮助用户将创意高效转化为现实。",107662,"2026-04-03T11:11:01",[13,14,15],{"id":36,"name":37,"github_repo":38,"description_zh":39,"stars":40,"difficulty_score":23,"last_commit_at":41,"category_tags":42,"status":16},3704,"NextChat","ChatGPTNextWeb\u002FNextChat","NextChat 是一款轻量且极速的 AI 助手，旨在为用户提供流畅、跨平台的大模型交互体验。它完美解决了用户在多设备间切换时难以保持对话连续性，以及面对众多 AI 模型不知如何统一管理的痛点。无论是日常办公、学习辅助还是创意激发，NextChat 都能让用户随时随地通过网页、iOS、Android、Windows、MacOS 或 Linux 端无缝接入智能服务。\n\n这款工具非常适合普通用户、学生、职场人士以及需要私有化部署的企业团队使用。对于开发者而言，它也提供了便捷的自托管方案，支持一键部署到 Vercel 或 Zeabur 等平台。\n\nNextChat 的核心亮点在于其广泛的模型兼容性，原生支持 Claude、DeepSeek、GPT-4 及 Gemini Pro 等主流大模型，让用户在一个界面即可自由切换不同 AI 能力。此外，它还率先支持 MCP（Model Context Protocol）协议，增强了上下文处理能力。针对企业用户，NextChat 提供专业版解决方案，具备品牌定制、细粒度权限控制、内部知识库整合及安全审计等功能，满足公司对数据隐私和个性化管理的高标准要求。",87618,"2026-04-05T07:20:52",[13,26],{"id":44,"name":45,"github_repo":46,"description_zh":47,"stars":48,"difficulty_score":23,"last_commit_at":49,"category_tags":50,"status":16},2268,"ML-For-Beginners","microsoft\u002FML-For-Beginners","ML-For-Beginners 是由微软推出的一套系统化机器学习入门课程，旨在帮助零基础用户轻松掌握经典机器学习知识。这套课程将学习路径规划为 12 周，包含 26 节精炼课程和 52 道配套测验，内容涵盖从基础概念到实际应用的完整流程，有效解决了初学者面对庞大知识体系时无从下手、缺乏结构化指导的痛点。\n\n无论是希望转型的开发者、需要补充算法背景的研究人员，还是对人工智能充满好奇的普通爱好者，都能从中受益。课程不仅提供了清晰的理论讲解，还强调动手实践，让用户在循序渐进中建立扎实的技能基础。其独特的亮点在于强大的多语言支持，通过自动化机制提供了包括简体中文在内的 50 多种语言版本，极大地降低了全球不同背景用户的学习门槛。此外，项目采用开源协作模式，社区活跃且内容持续更新，确保学习者能获取前沿且准确的技术资讯。如果你正寻找一条清晰、友好且专业的机器学习入门之路，ML-For-Beginners 将是理想的起点。",84991,"2026-04-05T10:45:23",[14,51,52,53,15,54,26,13,55],"数据工具","视频","插件","其他","音频",{"id":57,"name":58,"github_repo":59,"description_zh":60,"stars":61,"difficulty_score":10,"last_commit_at":62,"category_tags":63,"status":16},3128,"ragflow","infiniflow\u002Fragflow","RAGFlow 
是一款领先的开源检索增强生成（RAG）引擎，旨在为大语言模型构建更精准、可靠的上下文层。它巧妙地将前沿的 RAG 技术与智能体（Agent）能力相结合，不仅支持从各类文档中高效提取知识，还能让模型基于这些知识进行逻辑推理和任务执行。\n\n在大模型应用中，幻觉问题和知识滞后是常见痛点。RAGFlow 通过深度解析复杂文档结构（如表格、图表及混合排版），显著提升了信息检索的准确度，从而有效减少模型“胡编乱造”的现象，确保回答既有据可依又具备时效性。其内置的智能体机制更进一步，使系统不仅能回答问题，还能自主规划步骤解决复杂问题。\n\n这款工具特别适合开发者、企业技术团队以及 AI 研究人员使用。无论是希望快速搭建私有知识库问答系统，还是致力于探索大模型在垂直领域落地的创新者，都能从中受益。RAGFlow 提供了可视化的工作流编排界面和灵活的 API 接口，既降低了非算法背景用户的上手门槛，也满足了专业开发者对系统深度定制的需求。作为基于 Apache 2.0 协议开源的项目，它正成为连接通用大模型与行业专有知识之间的重要桥梁。",77062,"2026-04-04T04:44:48",[15,14,13,26,54],{"id":65,"github_repo":66,"name":67,"description_en":68,"description_zh":69,"ai_summary_zh":69,"readme_en":70,"readme_zh":71,"quickstart_zh":72,"use_case_zh":73,"hero_image_url":74,"owner_login":75,"owner_name":76,"owner_avatar_url":77,"owner_bio":78,"owner_company":79,"owner_location":80,"owner_email":81,"owner_twitter":82,"owner_website":83,"owner_url":84,"languages":85,"stars":90,"forks":91,"last_commit_at":92,"license":93,"difficulty_score":94,"env_os":95,"env_gpu":95,"env_ram":95,"env_deps":96,"category_tags":101,"github_topics":102,"view_count":120,"oss_zip_url":79,"oss_zip_packed_at":79,"status":16,"created_at":121,"updated_at":122,"faqs":123,"releases":149},452,"jacobgil\u002Fpytorch-grad-cam","pytorch-grad-cam","Advanced AI Explainability for computer vision.  
Support for CNNs, Vision Transformers, Classification, Object detection, Segmentation, Image similarity and more.","pytorch-grad-cam 是一个专为计算机视觉模型设计的可解释性分析库，旨在揭开深度学习模型“黑盒”的神秘面纱。它通过生成直观的类激活热力图，清晰展示模型在做出预测时关注的图像区域，从而帮助用户理解模型的决策逻辑。\n\n这个库主要解决了模型预测缺乏透明度的问题，让开发者能够诊断模型是否学到了正确的特征，还是仅仅依靠背景噪声进行“作弊”。pytorch-grad-cam 非常适合 AI 开发者和研究人员使用，无论是用于模型调试、验证，还是探索新的可解释性算法，它都能提供极大便利。\n\n在技术上，它集成了 GradCAM、ScoreCAM、EigenCAM 等多种前沿的像素归因方法，支持 CNN 和 Vision Transformer 等主流网络架构。除了基础的图像分类，它还能处理目标检测、语义分割等复杂任务，并提供了评估解释可信度的指标，确保分析结果既美观又科学可靠。","[![License: MIT](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FLicense-MIT-yellow.svg)](https:\u002F\u002Fopensource.org\u002Flicenses\u002FMIT)\n![Build Status](https:\u002F\u002Fgithub.com\u002Fjacobgil\u002Fpytorch-grad-cam\u002Fworkflows\u002FTests\u002Fbadge.svg)\n[![Downloads](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fjacobgil_pytorch-grad-cam_readme_dbf05cd93c6b.png)](https:\u002F\u002Fpepy.tech\u002Fproject\u002Fgrad-cam)\n[![Downloads](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fjacobgil_pytorch-grad-cam_readme_a7f619ad6a46.png)](https:\u002F\u002Fpepy.tech\u002Fproject\u002Fgrad-cam)\n\n# Advanced AI explainability for PyTorch\n\n`pip install grad-cam`\n\nDocumentation with advanced tutorials: [https:\u002F\u002Fjacobgil.github.io\u002Fpytorch-gradcam-book](https:\u002F\u002Fjacobgil.github.io\u002Fpytorch-gradcam-book)\n\n\nThis is a package with state of the art methods for Explainable AI for computer vision.\nThis can be used for diagnosing model predictions, either in production or while\ndeveloping models.\nThe aim is also to serve as a benchmark of algorithms and metrics for research of new explainability methods.\n\n⭐ Comprehensive collection of Pixel Attribution methods for Computer Vision.\n\n⭐ Tested on many Common CNN Networks and Vision Transformers.\n\n⭐ Advanced use cases: Works with Classification, Object Detection, Semantic Segmentation, Embedding-similarity and more.\n\n⭐ Includes smoothing methods to make the 
CAMs look nice.\n\n⭐ High performance: full support for batches of images in all methods.\n\n⭐ Includes metrics for checking if you can trust the explanations, and tuning them for best performance.\n\n\n![visualization](https:\u002F\u002Fgithub.com\u002Fjacobgil\u002Fjacobgil.github.io\u002Fblob\u002Fmaster\u002Fassets\u002Fcam_dog.gif?raw=true\n)\n\n| Method              | What it does                                                                                                                |\n|---------------------|-----------------------------------------------------------------------------------------------------------------------------|\n| GradCAM             | Weight the 2D activations by the average gradient                                                                           |\n| HiResCAM            | Like GradCAM but element-wise multiply the activations with the gradients; provably guaranteed faithfulness for certain models |\n| GradCAMElementWise  | Like GradCAM but element-wise multiply the activations with the gradients then apply a ReLU operation before summing        |\n| GradCAM++           | Like GradCAM but uses second order gradients                                                                                |\n| XGradCAM            | Like GradCAM but scale the gradients by the normalized activations                                                          |\n| AblationCAM         | Zero out activations and measure how the output drops (this repository includes a fast batched implementation)              |\n| ScoreCAM            | Perturb the image by the scaled activations and measure how the output drops                                                |\n| EigenCAM            | Takes the first principal component of the 2D Activations (no class discrimination, but seems to give great results)        |\n| EigenGradCAM        | Like EigenCAM but with class discrimination: First principal component of Activations*Grad. 
Looks like GradCAM, but cleaner |\n| LayerCAM            | Spatially weight the activations by positive gradients. Works better especially in lower layers                             |\n| FullGrad            | Computes the gradients of the biases from all over the network, and then sums them                                          |\n| Deep Feature Factorizations           | Non Negative Matrix Factorization on the 2D activations                                                   |\n| KPCA-CAM            | Like EigenCAM but with Kernel PCA instead of PCA                                                                            |            \n| FEM                 | A gradient free method that binarizes activations by an activation > mean + k * std rule.                                   |\n| ShapleyCAM          | Weight the activations using the gradient and Hessian-vector product.|\n| FinerCAM                |  Improves fine-grained classification by comparing similar classes, suppressing shared features and highlighting discriminative details.    
|\n## Visual Examples\n\n| What makes the network think the image label is 'pug, pug-dog' | What makes the network think the image label is 'tabby, tabby cat' | Combining Grad-CAM with Guided Backpropagation for the 'pug, pug-dog' class |\n| ---------------------------------------------------------------|--------------------|-----------------------------------------------------------------------------|\n \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fjacobgil_pytorch-grad-cam_readme_ada099dc5e3f.jpg\" width=\"256\" height=\"256\"> | \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fjacobgil_pytorch-grad-cam_readme_a762039d0474.jpg\" width=\"256\" height=\"256\"> | \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fjacobgil_pytorch-grad-cam_readme_b68c9828fe4f.jpg\" width=\"256\" height=\"256\"> |\n\n## Object Detection and Semantic Segmentation\n| Object Detection | Semantic Segmentation |\n| -----------------|-----------------------|\n| \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fjacobgil_pytorch-grad-cam_readme_66a9d9ed1fd1.png\" width=\"256\" height=\"256\"> | \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fjacobgil_pytorch-grad-cam_readme_e2df54161e8b.png\" width=\"256\" height=\"200\"> |\n\n| 3D Medical Semantic Segmentation |\n| -------------------------- |\n| \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fjacobgil_pytorch-grad-cam_readme_ca7e5c83e341.gif\" width=\"539\">|\n\n## Explaining similarity to other images \u002F embeddings\n\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fjacobgil_pytorch-grad-cam_readme_00c546bfff66.png\">\n\n## Deep Feature Factorization\n\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fjacobgil_pytorch-grad-cam_readme_174bff69e0ec.png\">\n\u003Cimg 
src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fjacobgil_pytorch-grad-cam_readme_96acb6bf1b87.png\">\n\n## CLIP\n| Explaining the text prompt \"a dog\" | Explaining the text prompt \"a cat\" |\n| -----------------------------------|------------------------------------|\n \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fjacobgil_pytorch-grad-cam_readme_47665d1ba04a.jpg\" width=\"256\" height=\"256\"> | \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fjacobgil_pytorch-grad-cam_readme_55f8517a0008.jpg\" width=\"256\" height=\"256\"> |\n\n## Classification\n\n#### Resnet50:\n| Category  | Image | GradCAM  |  AblationCAM |  ScoreCAM |\n| ---------|-------|----------|------------|------------|\n| Dog    | ![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fjacobgil_pytorch-grad-cam_readme_5a87ade329ab.png) | ![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fjacobgil_pytorch-grad-cam_readme_a36a1a22d62d.jpg)     |  ![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fjacobgil_pytorch-grad-cam_readme_c3fd57165590.jpg)   |![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fjacobgil_pytorch-grad-cam_readme_3b2edcde7b35.jpg)   |\n| Cat    | ![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fjacobgil_pytorch-grad-cam_readme_5a87ade329ab.png?raw=true) | ![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fjacobgil_pytorch-grad-cam_readme_fe6489f7b739.jpg)     |  ![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fjacobgil_pytorch-grad-cam_readme_7c0618e6ce61.jpg)   |![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fjacobgil_pytorch-grad-cam_readme_81b0153c7cc3.jpg)   |\n\n#### Vision Transformer (Deit Tiny):\n| Category  | Image | GradCAM  |  AblationCAM |  ScoreCAM |\n| ---------|-------|----------|------------|------------|\n| Dog    | ![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fjacobgil_pytorch-grad-cam_readme_5a87ade329ab.png) | 
![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fjacobgil_pytorch-grad-cam_readme_91465c322e85.jpg)     |  ![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fjacobgil_pytorch-grad-cam_readme_ec95c74bbf3a.jpg)   |![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fjacobgil_pytorch-grad-cam_readme_0d7af89814a0.jpg)   |\n| Cat    | ![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fjacobgil_pytorch-grad-cam_readme_5a87ade329ab.png) | ![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fjacobgil_pytorch-grad-cam_readme_616fa9935c22.jpg)     |  ![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fjacobgil_pytorch-grad-cam_readme_4816785b19c2.jpg)   |![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fjacobgil_pytorch-grad-cam_readme_4676dbeaff5b.jpg)   |\n\n#### Swin Transformer (Tiny window:7 patch:4 input-size:224):\n| Category  | Image | GradCAM  |  AblationCAM |  ScoreCAM |\n| ---------|-------|----------|------------|------------|\n| Dog    | ![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fjacobgil_pytorch-grad-cam_readme_5a87ade329ab.png) | ![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fjacobgil_pytorch-grad-cam_readme_0c09c6bba692.jpg)     |  ![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fjacobgil_pytorch-grad-cam_readme_eed0e3aa0da0.jpg)   |![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fjacobgil_pytorch-grad-cam_readme_925abe45826c.jpg)   |\n| Cat    | ![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fjacobgil_pytorch-grad-cam_readme_5a87ade329ab.png) | ![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fjacobgil_pytorch-grad-cam_readme_86609db51d8c.jpg)     |  ![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fjacobgil_pytorch-grad-cam_readme_b07baf18bfb2.jpg)   |![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fjacobgil_pytorch-grad-cam_readme_a61b3c4b5f04.jpg)   |\n\n\n# Metrics and Evaluation for XAI\n\n\u003Cimg 
src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fjacobgil_pytorch-grad-cam_readme_13c0349557d5.png\">\n\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fjacobgil_pytorch-grad-cam_readme_6dddeeed82e6.png\">\n\n----------\n\n# Usage examples\n\n```python\nfrom pytorch_grad_cam import GradCAM, HiResCAM, ScoreCAM, GradCAMPlusPlus, AblationCAM, XGradCAM, EigenCAM, FullGrad\nfrom pytorch_grad_cam.utils.model_targets import ClassifierOutputTarget\nfrom pytorch_grad_cam.utils.image import show_cam_on_image\nfrom torchvision.models import resnet50\n\nmodel = resnet50(pretrained=True)\ntarget_layers = [model.layer4[-1]]\ninput_tensor = ...  # Create an input tensor image for your model.\n# Note: input_tensor can be a batch tensor with several images!\n\n# We have to specify the target we want to generate the CAM for.\ntargets = [ClassifierOutputTarget(281)]\n\n# Construct the CAM object once, and then re-use it on many images.\nwith GradCAM(model=model, target_layers=target_layers) as cam:\n  # You can also pass aug_smooth=True and eigen_smooth=True, to apply smoothing.\n  grayscale_cam = cam(input_tensor=input_tensor, targets=targets)\n  # In this example grayscale_cam has only one image in the batch:\n  grayscale_cam = grayscale_cam[0, :]\n  # rgb_img is the original image as a float32 numpy array scaled to [0, 1].\n  visualization = show_cam_on_image(rgb_img, grayscale_cam, use_rgb=True)\n  # You can also get the model outputs without having to redo inference\n  model_outputs = cam.outputs\n```\n\n[cam.py](https:\u002F\u002Fgithub.com\u002Fjacobgil\u002Fpytorch-grad-cam\u002Fblob\u002Fmaster\u002Fcam.py) has a more detailed usage example.\n\n----------\n# Choosing the layer(s) to extract activations from\nYou need to choose the target layer to compute the CAM for.\nSome common choices are:\n- FasterRCNN: model.backbone\n- Resnet18 and 50: model.layer4[-1]\n- VGG, densenet161 and mobilenet: model.features[-1]\n- mnasnet1_0: model.layers[-1]\n- ViT: model.blocks[-1].norm1\n- SwinT: model.layers[-1].blocks[-1].norm1\n\n\nIf
you pass a list with several layers, the CAM will be averaged across them.\nThis can be useful if you're not sure what layer will perform best.\n\n----------\n\n# Adapting for new architectures and tasks\n\nMethods like GradCAM were designed for, and were originally mostly applied to, classification models,\nand specifically CNN classification models.\nHowever, you can also use this package on new architectures like Vision Transformers, and on non-classification tasks like Object Detection or Semantic Segmentation.\n\nTo be able to adapt to non-standard cases, we have two concepts.\n- The reshape transform - how do we convert activations to represent spatial images?\n- The model targets - what exactly should the explainability method try to explain?\n\n## The reshape_transform argument\nIn a CNN the intermediate activations in the model are a multi-channel image that has the dimensions channel x rows x cols,\nand the various explainability methods work with these to produce a new image.\n\nIn case of another architecture, like the Vision Transformer, the shape might be different, like (rows x cols + 1) x channels, or something else.\nThe reshape transform converts the activations back into a multi-channel image, for example by removing the class token in a vision transformer. 
\nFor examples, check [here](https:\u002F\u002Fgithub.com\u002Fjacobgil\u002Fpytorch-grad-cam\u002Fblob\u002Fmaster\u002Fpytorch_grad_cam\u002Futils\u002Freshape_transforms.py)\n\n## The model_target argument\nThe model target is just a callable that takes the model output and filters it down to the specific scalar output we want to explain.\n\nFor classification tasks, the model target will typically be the output from a specific category.\nThe `targets` parameter passed to the CAM method can then use `ClassifierOutputTarget`:\n```python\ntargets = [ClassifierOutputTarget(281)]\n```\n\nHowever, for more advanced cases, you might want a different behaviour.\nCheck [here](https:\u002F\u002Fgithub.com\u002Fjacobgil\u002Fpytorch-grad-cam\u002Fblob\u002Fmaster\u002Fpytorch_grad_cam\u002Futils\u002Fmodel_targets.py) for more examples.\n\n----------\n\n# Tutorials\nHere you can find detailed examples of how to use this for various custom use cases like object detection:\n\nThese point to the new documentation jupyter-book for fast rendering.\nThe jupyter notebooks themselves can be found under the tutorials folder in the git repository.\n\n- [Notebook tutorial: XAI Recipes for the HuggingFace 🤗 Image Classification Models](\u003Chttps:\u002F\u002Fjacobgil.github.io\u002Fpytorch-gradcam-book\u002FHuggingFace.html>)\n\n- [Notebook tutorial: Deep Feature Factorizations for better model explainability](\u003Chttps:\u002F\u002Fjacobgil.github.io\u002Fpytorch-gradcam-book\u002FDeep%20Feature%20Factorizations.html>)\n\n- [Notebook tutorial: Class Activation Maps for Object Detection with Faster-RCNN](\u003Chttps:\u002F\u002Fjacobgil.github.io\u002Fpytorch-gradcam-book\u002FClass%20Activation%20Maps%20for%20Object%20Detection%20With%20Faster%20RCNN.html>)\n\n- [Notebook tutorial: Class Activation Maps for YOLO5](\u003Chttps:\u002F\u002Fjacobgil.github.io\u002Fpytorch-gradcam-book\u002FEigenCAM%20for%20YOLO5.html>)\n\n- [Notebook tutorial: Class Activation Maps for 
Semantic Segmentation](\u003Chttps:\u002F\u002Fjacobgil.github.io\u002Fpytorch-gradcam-book\u002FClass%20Activation%20Maps%20for%20Semantic%20Segmentation.html>)\n\n- [Notebook tutorial: Adapting pixel attribution methods for embedding outputs from models](\u003Chttps:\u002F\u002Fjacobgil.github.io\u002Fpytorch-gradcam-book\u002FPixel%20Attribution%20for%20embeddings.html>)\n\n- [Notebook tutorial: May the best explanation win. CAM Metrics and Tuning](\u003Chttps:\u002F\u002Fjacobgil.github.io\u002Fpytorch-gradcam-book\u002FCAM%20Metrics%20And%20Tuning%20Tutorial.html>)\n\n- [How it works with Vision\u002FSwinT transformers](tutorials\u002Fvision_transformers.md)\n\n\n----------\n\n# Guided backpropagation\n\n```python\nimport cv2\n\nfrom pytorch_grad_cam import GuidedBackpropReLUModel\nfrom pytorch_grad_cam.utils.image import (\n    show_cam_on_image, deprocess_image, preprocess_image\n)\n\n# Run guided backprop on the device the model's parameters live on.\ngb_model = GuidedBackpropReLUModel(model=model, device=next(model.parameters()).device)\ngb = gb_model(input_tensor, target_category=None)\n\ncam_mask = cv2.merge([grayscale_cam, grayscale_cam, grayscale_cam])\ncam_gb = deprocess_image(cam_mask * gb)\nresult = deprocess_image(gb)\n```\n\n----------\n\n# Metrics and evaluating the explanations\n\n```python\nfrom pytorch_grad_cam.utils.model_targets import ClassifierOutputSoftmaxTarget\nfrom pytorch_grad_cam.metrics.cam_mult_image import CamMultImageConfidenceChange\n# Create the metric target, often the confidence drop in a score of some category\nmetric_target = ClassifierOutputSoftmaxTarget(281)\nscores, batch_visualizations = CamMultImageConfidenceChange()(input_tensor, \n  inverse_cams, targets, model, return_visualization=True)\nvisualization = deprocess_image(batch_visualizations[0, :])\n\n# State of the art metric: Remove and Debias\nfrom pytorch_grad_cam.metrics.road import ROADMostRelevantFirst, ROADLeastRelevantFirst\ncam_metric = ROADMostRelevantFirst(percentile=75)\nscores, perturbation_visualizations = cam_metric(input_tensor, \n  grayscale_cams, 
targets, model, return_visualization=True)\n\n# You can also average across different percentiles, and combine\n# (LeastRelevantFirst - MostRelevantFirst) \u002F 2\nfrom pytorch_grad_cam.metrics.road import (ROADMostRelevantFirstAverage,\n                                           ROADLeastRelevantFirstAverage,\n                                           ROADCombined)\ncam_metric = ROADCombined(percentiles=[20, 40, 60, 80])\nscores = cam_metric(input_tensor, grayscale_cams, targets, model)\n```\n\n\n# Smoothing to get nice looking CAMs\n\nTo reduce noise in the CAMs and make them fit the objects better,\ntwo smoothing methods are supported:\n\n- `aug_smooth=True`\n\n  Test time augmentation: increases the run time by x6.\n\n  Applies a combination of horizontal flips, and multiplying the image\n  by [1.0, 1.1, 0.9].\n\n  This has the effect of better centering the CAM around the objects.\n\n\n- `eigen_smooth=True`\n\n  First principal component of `activations*weights`\n\n  This has the effect of removing a lot of noise.\n\n\n|AblationCAM | aug smooth | eigen smooth | aug+eigen smooth|\n|------------|------------|--------------|--------------------|\n![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fjacobgil_pytorch-grad-cam_readme_f47c07db1771.jpg) | ![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fjacobgil_pytorch-grad-cam_readme_f7ada823a32b.jpg) | ![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fjacobgil_pytorch-grad-cam_readme_805b8d693d1f.jpg) | ![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fjacobgil_pytorch-grad-cam_readme_9fb3d027c0c6.jpg) | \n\n----------\n\n# Running the example script:\n\nUsage: `python cam.py --image-path \u003Cpath_to_image> --method \u003Cmethod> --output-dir \u003Coutput_dir_path> `\n\n\nTo use with a specific device, like cpu, cuda, cuda:0, mps or hpu:\n`python cam.py --image-path \u003Cpath_to_image> --device cuda  --output-dir \u003Coutput_dir_path> `\n\n----------\n\nYou can choose 
between:\n\n`GradCAM` , `HiResCAM`, `ScoreCAM`, `GradCAMPlusPlus`, `AblationCAM`, `XGradCAM` , `LayerCAM`, `FullGrad`, `EigenCAM`, `ShapleyCAM`, and `FinerCAM`.\n\nSome methods like ScoreCAM and AblationCAM require a large number of forward passes,\nand have a batched implementation.\n\nYou can control the batch size with\n`cam.batch_size = `\n\n----------\n\n## Citation\nIf you use this for research, please cite. Here is an example BibTeX entry:\n\n```\n@misc{jacobgilpytorchcam,\n  title={PyTorch library for CAM methods},\n  author={Jacob Gildenblat and contributors},\n  year={2021},\n  publisher={GitHub},\n  howpublished={\\url{https:\u002F\u002Fgithub.com\u002Fjacobgil\u002Fpytorch-grad-cam}},\n}\n```\n\n----------\n\n# References\nhttps:\u002F\u002Farxiv.org\u002Fabs\u002F1610.02391 \u003Cbr>\n`Grad-CAM: Visual Explanations from Deep Networks via Gradient-based Localization\nRamprasaath R. Selvaraju, Michael Cogswell, Abhishek Das, Ramakrishna Vedantam, Devi Parikh, Dhruv Batra`\n\nhttps:\u002F\u002Farxiv.org\u002Fabs\u002F2011.08891 \u003Cbr>\n`Use HiResCAM instead of Grad-CAM for faithful explanations of convolutional neural networks\nRachel L. Draelos, Lawrence Carin`\n\nhttps:\u002F\u002Farxiv.org\u002Fabs\u002F1710.11063 \u003Cbr>\n`Grad-CAM++: Improved Visual Explanations for Deep Convolutional Networks\nAditya Chattopadhyay, Anirban Sarkar, Prantik Howlader, Vineeth N Balasubramanian`\n\nhttps:\u002F\u002Farxiv.org\u002Fabs\u002F1910.01279 \u003Cbr>\n`Score-CAM: Score-Weighted Visual Explanations for Convolutional Neural Networks\nHaofan Wang, Zifan Wang, Mengnan Du, Fan Yang, Zijian Zhang, Sirui Ding, Piotr Mardziel, Xia Hu`\n\nhttps:\u002F\u002Fieeexplore.ieee.org\u002Fabstract\u002Fdocument\u002F9093360\u002F \u003Cbr>\n`Ablation-cam: Visual explanations for deep convolutional network via gradient-free localization.\nSaurabh Desai and Harish G Ramaswamy. 
In WACV, pages 972–980, 2020`\n\nhttps:\u002F\u002Farxiv.org\u002Fabs\u002F2008.02312 \u003Cbr>\n`Axiom-based Grad-CAM: Towards Accurate Visualization and Explanation of CNNs\nRuigang Fu, Qingyong Hu, Xiaohu Dong, Yulan Guo, Yinghui Gao, Biao Li`\n\nhttps:\u002F\u002Farxiv.org\u002Fabs\u002F2008.00299 \u003Cbr>\n`Eigen-CAM: Class Activation Map using Principal Components\nMohammed Bany Muhammad, Mohammed Yeasin`\n\nhttp:\u002F\u002Fmftp.mmcheng.net\u002FPapers\u002F21TIP_LayerCAM.pdf \u003Cbr>\n`LayerCAM: Exploring Hierarchical Class Activation Maps for Localization\nPeng-Tao Jiang; Chang-Bin Zhang; Qibin Hou; Ming-Ming Cheng; Yunchao Wei`\n\nhttps:\u002F\u002Farxiv.org\u002Fabs\u002F1905.00780 \u003Cbr>\n`Full-Gradient Representation for Neural Network Visualization\nSuraj Srinivas, Francois Fleuret`\n\nhttps:\u002F\u002Farxiv.org\u002Fabs\u002F1806.10206 \u003Cbr>\n`Deep Feature Factorization For Concept Discovery\nEdo Collins, Radhakrishna Achanta, Sabine Süsstrunk`\n\nhttps:\u002F\u002Farxiv.org\u002Fabs\u002F2410.00267 \u003Cbr>\n`KPCA-CAM: Visual Explainability of Deep Computer Vision Models using Kernel PCA\nSachin Karmani, Thanushon Sivakaran, Gaurav Prasad, Mehmet Ali, Wenbo Yang, Sheyang Tang`\n\nhttps:\u002F\u002Fhal.science\u002Fhal-02963298\u002Fdocument \u003Cbr>\n`Features Understanding in 3D CNNs for Actions Recognition in Video\nKazi Ahmed Asif Fuad, Pierre-Etienne Martin, Romain Giot, Romain\nBourqui, Jenny Benois-Pineau, Akka Zemmar`\n\nhttps:\u002F\u002Farxiv.org\u002Fabs\u002F2501.06261 \u003Cbr>\n`CAMs as Shapley Value-based Explainers\nHuaiguang Cai`\n\n\nhttps:\u002F\u002Farxiv.org\u002Fpdf\u002F2501.11309 \u003Cbr>\n`Finer-CAM : Spotting the Difference Reveals Finer Details for Visual Explanation`    \n`Ziheng Zhang*, Jianyang Gu*, Arpita Chowdhury, Zheda Mai, David Carlyn,Tanya Berger-Wolf, Yu Su, Wei-Lun Chao`\n","[![License: 
MIT](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FLicense-MIT-yellow.svg)](https:\u002F\u002Fopensource.org\u002Flicenses\u002FMIT)\n![Build Status](https:\u002F\u002Fgithub.com\u002Fjacobgil\u002Fpytorch-grad-cam\u002Fworkflows\u002FTests\u002Fbadge.svg)\n[![Downloads](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fjacobgil_pytorch-grad-cam_readme_dbf05cd93c6b.png)](https:\u002F\u002Fpepy.tech\u002Fproject\u002Fgrad-cam)\n[![Downloads](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fjacobgil_pytorch-grad-cam_readme_a7f619ad6a46.png)](https:\u002F\u002Fpepy.tech\u002Fproject\u002Fgrad-cam)\n\n# PyTorch 的高级 AI 可解释性\n\n`pip install grad-cam`\n\n包含高级教程的文档：[https:\u002F\u002Fjacobgil.github.io\u002Fpytorch-gradcam-book](https:\u002F\u002Fjacobgil.github.io\u002Fpytorch-gradcam-book)\n\n\n这是一个包含计算机视觉可解释人工智能（Explainable AI）最先进方法的软件包。\n它可用于诊断模型预测，无论是在生产环境还是模型开发过程中。\n其目标也是作为算法和指标的基准，用于研究新的可解释性方法。\n\n⭐ 计算机视觉像素归因（Pixel Attribution）方法的全面集合。\n\n⭐ 在许多常见的 CNN（卷积神经网络）和 Vision Transformers（视觉 Transformer）上进行了测试。\n\n⭐ 高级用例：适用于分类、目标检测、语义分割、嵌入相似度等场景。\n\n⭐ 包含平滑方法，使 CAM（类激活图）看起来更美观。\n\n⭐ 高性能：所有方法均完全支持图像批次处理。\n\n⭐ 包含用于检查解释可信度的指标，并可对其进行调优以获得最佳性能。\n\n\n![visualization](https:\u002F\u002Fgithub.com\u002Fjacobgil\u002Fjacobgil.github.io\u002Fblob\u002Fmaster\u002Fassets\u002Fcam_dog.gif?raw=true\n)\n\n| 方法              | 功能                                                                                                                |\n|---------------------|-----------------------------------------------------------------------------------------------------------------------------|\n| GradCAM             | 通过平均梯度对 2D 激活进行加权                                                                           |\n| HiResCAM            | 类似于 GradCAM，但将激活与梯度进行逐元素相乘；对于某些模型可证明保证忠实度 |\n| GradCAMElementWise  | 类似于 GradCAM，但将激活与梯度进行逐元素相乘，然后在求和前应用 ReLU 操作        |\n| GradCAM++           | 类似于 GradCAM，但使用二阶梯度                                                                                |\n| 
XGradCAM            | 类似于 GradCAM，但通过归一化的激活对梯度进行缩放                                                          |\n| AblationCAM         | 将激活置零并测量输出下降的程度（本仓库包含快速的批处理实现）              |\n| ScoreCAM            | 通过缩放的激活对图像进行扰动，并测量输出下降的程度                                              |\n| EigenCAM            | 获取 2D 激活的第一主成分（无类别区分能力，但似乎能产生很好的结果）        |\n| EigenGradCAM        | 类似于 EigenCAM 但具有类别区分能力：激活*梯度的第一主成分。看起来像 GradCAM，但更清晰 |\n| LayerCAM            | 通过正梯度在空间上对激活进行加权。在较低层表现更好                             |\n| FullGrad            | 计算网络中所有偏置的梯度，然后将它们相加                                          |\n| Deep Feature Factorizations           | 对 2D 激活进行非负矩阵分解                                                   |\n| KPCA-CAM            | 类似于 EigenCAM，但使用核 PCA（Kernel PCA）代替 PCA                                                                            |            \n| FEM                 | 一种无梯度方法，通过激活 > 均值 + k * 标准差的规则对激活进行二值化。                                   |\n| ShapleyCAM          | 使用梯度和海森向量积对激活进行加权。|\n| FinerCAM                |  通过比较相似类别、抑制共享特征并突出判别性细节来改进细粒度分类。    |\n\n## 可视化示例\n\n| 是什么让网络认为图像标签是 'pug, pug-dog' | 是什么让网络认为图像标签是 'tabby, tabby cat' | 将 Grad-CAM 与 Guided Backpropagation 结合用于 'pug, pug-dog' 类 |\n| ---------------------------------------------------------------|--------------------|-----------------------------------------------------------------------------|\n \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fjacobgil_pytorch-grad-cam_readme_ada099dc5e3f.jpg\" width=\"256\" height=\"256\"> | \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fjacobgil_pytorch-grad-cam_readme_a762039d0474.jpg\" width=\"256\" height=\"256\"> | \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fjacobgil_pytorch-grad-cam_readme_b68c9828fe4f.jpg\" width=\"256\" height=\"256\"> |\n\n## 目标检测和语义分割\n| 目标检测 | 语义分割 |\n| -----------------|-----------------------|\n| \u003Cimg 
src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fjacobgil_pytorch-grad-cam_readme_66a9d9ed1fd1.png\" width=\"256\" height=\"256\"> | \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fjacobgil_pytorch-grad-cam_readme_e2df54161e8b.png\" width=\"256\" height=\"200\"> |\n\n| 3D 医疗语义分割 |\n| -------------------------- |\n| \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fjacobgil_pytorch-grad-cam_readme_ca7e5c83e341.gif\" width=\"539\">|\n\n## 解释与其他图像\u002F嵌入的相似度\n\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fjacobgil_pytorch-grad-cam_readme_00c546bfff66.png\">\n\n## 深度特征分解\n\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fjacobgil_pytorch-grad-cam_readme_174bff69e0ec.png\">\n\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fjacobgil_pytorch-grad-cam_readme_96acb6bf1b87.png\">\n\n## CLIP\n| 解释文本提示 \"a dog\" | 解释文本提示 \"a cat\" |\n| -----------------------------------|------------------------------------|\n \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fjacobgil_pytorch-grad-cam_readme_47665d1ba04a.jpg\" width=\"256\" height=\"256\"> | \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fjacobgil_pytorch-grad-cam_readme_55f8517a0008.jpg\" width=\"256\" height=\"256\"> |\n\n## 分类\n\n#### Resnet50:\n| 类别 | 图片 | GradCAM | AblationCAM (消融CAM) | ScoreCAM (得分CAM) |\n| ---------|-------|----------|------------|------------|\n| Dog (狗)    | ![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fjacobgil_pytorch-grad-cam_readme_5a87ade329ab.png) | ![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fjacobgil_pytorch-grad-cam_readme_a36a1a22d62d.jpg)     |  ![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fjacobgil_pytorch-grad-cam_readme_c3fd57165590.jpg)   |![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fjacobgil_pytorch-grad-cam_readme_3b2edcde7b35.jpg)   |\n| Cat (猫)    | 
![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fjacobgil_pytorch-grad-cam_readme_5a87ade329ab.png?raw=true) | ![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fjacobgil_pytorch-grad-cam_readme_fe6489f7b739.jpg)     |  ![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fjacobgil_pytorch-grad-cam_readme_7c0618e6ce61.jpg)   |![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fjacobgil_pytorch-grad-cam_readme_81b0153c7cc3.jpg)   |\n\n#### Vision Transformer (Deit Tiny):\n| 类别 | 图片 | GradCAM | AblationCAM (消融CAM) | ScoreCAM (得分CAM) |\n| ---------|-------|----------|------------|------------|\n| Dog (狗)    | ![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fjacobgil_pytorch-grad-cam_readme_5a87ade329ab.png) | ![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fjacobgil_pytorch-grad-cam_readme_91465c322e85.jpg)     |  ![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fjacobgil_pytorch-grad-cam_readme_ec95c74bbf3a.jpg)   |![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fjacobgil_pytorch-grad-cam_readme_0d7af89814a0.jpg)   |\n| Cat (猫)    | ![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fjacobgil_pytorch-grad-cam_readme_5a87ade329ab.png) | ![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fjacobgil_pytorch-grad-cam_readme_616fa9935c22.jpg)     |  ![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fjacobgil_pytorch-grad-cam_readme_4816785b19c2.jpg)   |![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fjacobgil_pytorch-grad-cam_readme_4676dbeaff5b.jpg)   |\n\n#### Swin Transformer (Tiny window:7 patch:4 input-size:224):\n| 类别 | 图片 | GradCAM | AblationCAM (消融CAM) | ScoreCAM (得分CAM) |\n| ---------|-------|----------|------------|------------|\n| Dog (狗)    | ![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fjacobgil_pytorch-grad-cam_readme_5a87ade329ab.png) | ![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fjacobgil_pytorch-grad-cam_readme_0c09c6bba692.jpg)     |  
![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fjacobgil_pytorch-grad-cam_readme_eed0e3aa0da0.jpg)   |![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fjacobgil_pytorch-grad-cam_readme_925abe45826c.jpg)   |\n| Cat (猫)    | ![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fjacobgil_pytorch-grad-cam_readme_5a87ade329ab.png) | ![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fjacobgil_pytorch-grad-cam_readme_86609db51d8c.jpg)     |  ![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fjacobgil_pytorch-grad-cam_readme_b07baf18bfb2.jpg)   |![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fjacobgil_pytorch-grad-cam_readme_a61b3c4b5f04.jpg)   |\n\n\n# XAI 的指标与评估\n\n\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fjacobgil_pytorch-grad-cam_readme_13c0349557d5.png\">\n\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fjacobgil_pytorch-grad-cam_readme_6dddeeed82e6.png\">\n\n----------\n\n# 使用示例\n\n```python\nfrom pytorch_grad_cam import GradCAM, HiResCAM, ScoreCAM, GradCAMPlusPlus, AblationCAM, XGradCAM, EigenCAM, FullGrad\nfrom pytorch_grad_cam.utils.model_targets import ClassifierOutputTarget\nfrom pytorch_grad_cam.utils.image import show_cam_on_image\nfrom torchvision.models import resnet50\n\nmodel = resnet50(pretrained=True)\ntarget_layers = [model.layer4[-1]]\ninput_tensor = # Create an input tensor image for your model..\n# Note: input_tensor can be a batch tensor with several images!\n\n# We have to specify the target we want to generate the CAM for.\ntargets = [ClassifierOutputTarget(281)]\n\n# Construct the CAM object once, and then re-use it on many images.\nwith GradCAM(model=model, target_layers=target_layers) as cam:\n  # You can also pass aug_smooth=True and eigen_smooth=True, to apply smoothing.\n  grayscale_cam = cam(input_tensor=input_tensor, targets=targets)\n  # In this example grayscale_cam has only one image in the batch:\n  grayscale_cam = grayscale_cam[0, :]\n  
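# Note (added, not in the original snippet): rgb_img is assumed to be the\n  # original image as a float32 numpy array in [0, 1] with shape (H, W, 3), e.g.\n  # rgb_img = np.float32(cv2.imread("image.jpg")[:, :, ::-1]) \u002F 255\n  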
visualization = show_cam_on_image(rgb_img, grayscale_cam, use_rgb=True)\n  # You can also get the model outputs without having to redo inference\n  model_outputs = cam.outputs\n```\n\n[cam.py](https:\u002F\u002Fgithub.com\u002Fjacobgil\u002Fpytorch-grad-cam\u002Fblob\u002Fmaster\u002Fcam.py) 包含更详细的使用示例。\n\n----------\n# 选择提取激活值的层\n你需要选择目标层来计算 CAM (类激活映射)。\n一些常见的选择是：\n- FasterRCNN: model.backbone\n- Resnet18 和 50: model.layer4[-1]\n- VGG, densenet161 和 mobilenet: model.features[-1]\n- mnasnet1_0: model.layers[-1]\n- ViT: model.blocks[-1].norm1\n- SwinT: model.layers[-1].blocks[-1].norm1\n\n\n如果你传入一个包含多个层的列表，CAM 将在这些层上进行平均。\n如果你不确定哪一层表现最好，这会很有用。\n\n----------\n\n# 适配新的架构和任务\n\n像 GradCAM (梯度加权类激活映射) 这样的方法是为分类模型设计的，最初主要应用于分类模型，特别是 CNN (卷积神经网络) 分类模型。\n然而，你也可以将此库用于新的架构，如 Vision Transformer (视觉 Transformer)，以及非分类任务，如目标检测或语义分割。\n\n为了能够适应非标准情况，我们有两个概念。\n- Reshape Transform (形状变换) —— 我们如何将激活值转换为代表空间图像的形式？\n- Model Targets (模型目标) —— 可解释性方法究竟应该尝试解释什么？\n\n## reshape_transform 参数\n在 CNN 中，模型的中间激活值是一个具有通道 x 行 x 列维度的多通道图像，各种可解释性方法利用这些值来生成新图像。\n\n对于其他架构，例如 Vision Transformer，形状可能会有所不同，例如 (行 x 列 + 1) x 通道，或者其他形式。\nReshape Transform 将激活值转换回多通道图像，例如通过移除 Vision Transformer 中的 class token (类标记)。\n示例请查看 [这里](https:\u002F\u002Fgithub.com\u002Fjacobgil\u002Fpytorch-grad-cam\u002Fblob\u002Fmaster\u002Fpytorch_grad_cam\u002Futils\u002Freshape_transforms.py)\n\n## model_target 参数\nModel Target 只是一个可调用对象，它能够获取模型输出，并筛选出我们想要解释的特定标量输出。\n\n对于分类任务，Model Target 通常是特定类别的输出。\n传递给 CAM 方法的 `targets` 参数可以使用 `ClassifierOutputTarget`：\n```python\ntargets = [ClassifierOutputTarget(281)]\n```\n\n然而，对于更高级的情况，你可能需要不同的行为。\n查看 [这里](https:\u002F\u002Fgithub.com\u002Fjacobgil\u002Fpytorch-grad-cam\u002Fblob\u002Fmaster\u002Fpytorch_grad_cam\u002Futils\u002Fmodel_targets.py) 获取更多示例。\n\n----------\n\n# 教程\n这里您可以找到关于如何将其用于各种自定义用例（如目标检测）的详细示例：\n\n这些链接指向新的文档 jupyter-book 以实现快速渲染。\nJupyter notebooks 本身可以在 git 仓库的 tutorials 文件夹下找到。\n\n- [Notebook 教程：HuggingFace 🤗 图像分类模型的 XAI (可解释人工智能) 
配方](\u003Chttps:\u002F\u002Fjacobgil.github.io\u002Fpytorch-gradcam-book\u002FHuggingFace.html>)\n\n- [Notebook 教程：用于更好模型可解释性的深度特征分解](\u003Chttps:\u002F\u002Fjacobgil.github.io\u002Fpytorch-gradcam-book\u002FDeep%20Feature%20Factorizations.html>)\n\n- [Notebook 教程：使用 Faster-RCNN 进行目标检测的类别激活映射](\u003Chttps:\u002F\u002Fjacobgil.github.io\u002Fpytorch-gradcam-book\u002FClass%20Activation%20Maps%20for%20Object%20Detection%20With%20Faster%20RCNN.html>)\n\n- [Notebook 教程：YOLO5 的类别激活映射](\u003Chttps:\u002F\u002Fjacobgil.github.io\u002Fpytorch-gradcam-book\u002FEigenCAM%20for%20YOLO5.html>)\n\n- [Notebook 教程：语义分割的类别激活映射](\u003Chttps:\u002F\u002Fjacobgil.github.io\u002Fpytorch-gradcam-book\u002FClass%20Activation%20Maps%20for%20Semantic%20Segmentation.html>)\n\n- [Notebook 教程：调整像素归因方法以适应模型的嵌入输出](\u003Chttps:\u002F\u002Fjacobgil.github.io\u002Fpytorch-gradcam-book\u002FPixel%20Attribution%20for%20embeddings.html>)\n\n- [Notebook 教程：愿最佳解释获胜。CAM 指标与调优](\u003Chttps:\u002F\u002Fjacobgil.github.io\u002Fpytorch-gradcam-book\u002FCAM%20Metrics%20And%20Tuning%20Tutorial.html>)\n\n- [它如何与 Vision\u002FSwinT Transformer 协同工作](tutorials\u002Fvision_transformers.md)\n\n\n----------\n\n# 引导反向传播\n\n```python\nimport cv2\n\nfrom pytorch_grad_cam import GuidedBackpropReLUModel\nfrom pytorch_grad_cam.utils.image import (\n    show_cam_on_image, deprocess_image, preprocess_image\n)\n# Run on the same device as the model (nn.Module has no .device() method)\ngb_model = GuidedBackpropReLUModel(model=model, device=next(model.parameters()).device)\ngb = gb_model(input_tensor, target_category=None)\n\ncam_mask = cv2.merge([grayscale_cam, grayscale_cam, grayscale_cam])\ncam_gb = deprocess_image(cam_mask * gb)\nresult = deprocess_image(gb)\n```\n\n----------\n\n# 指标与评估解释\n\n```python\nfrom pytorch_grad_cam.utils.model_targets import ClassifierOutputSoftmaxTarget\nfrom pytorch_grad_cam.metrics.cam_mult_image import CamMultImageConfidenceChange\n# Create the metric target, often the confidence drop in a score of some category\nmetric_target = ClassifierOutputSoftmaxTarget(281)\nscores, batch_visualizations = 
CamMultImageConfidenceChange()(input_tensor, \n  inverse_cams, targets, model, return_visualization=True)\nvisualization = deprocess_image(batch_visualizations[0, :])\n\n# State of the art metric: Remove and Debias\nfrom pytorch_grad_cam.metrics.road import ROADMostRelevantFirst, ROADLeastRelevantFirst\ncam_metric = ROADMostRelevantFirst(percentile=75)\nscores, perturbation_visualizations = cam_metric(input_tensor, \n  grayscale_cams, targets, model, return_visualization=True)\n\n# You can also average across different percentiles, and combine\n# (LeastRelevantFirst - MostRelevantFirst) \u002F 2\nfrom pytorch_grad_cam.metrics.road import (ROADMostRelevantFirstAverage,\n                                           ROADLeastRelevantFirstAverage,\n                                           ROADCombined)\ncam_metric = ROADCombined(percentiles=[20, 40, 60, 80])\nscores = cam_metric(input_tensor, grayscale_cams, targets, model)\n```\n\n\n# 平滑处理以获得美观的 CAM\n\n为了减少 CAM (类别激活映射) 中的噪声，并使其更好地贴合目标物体，支持两种平滑方法：\n\n- `aug_smooth=True`\n\n  测试时增强：运行时间增加 6 倍。\n\n  应用水平翻转和将图像乘以 [1.0, 1.1, 0.9] 的组合。\n\n  这具有将 CAM 更好地集中在物体周围的效果。\n\n\n- `eigen_smooth=True`\n\n  `activations*weights` 的第一主成分。\n\n  这具有消除大量噪声的效果。\n\n\n|AblationCAM | aug 平滑 | eigen 平滑 | aug+eigen 平滑|\n|------------|------------|--------------|--------------------|\n![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fjacobgil_pytorch-grad-cam_readme_f47c07db1771.jpg) | ![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fjacobgil_pytorch-grad-cam_readme_f7ada823a32b.jpg) | ![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fjacobgil_pytorch-grad-cam_readme_805b8d693d1f.jpg) | ![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fjacobgil_pytorch-grad-cam_readme_9fb3d027c0c6.jpg) | \n\n----------\n\n# 运行示例脚本：\n\n用法： `python cam.py --image-path \u003Cpath_to_image> --method \u003Cmethod> --output-dir \u003Coutput_dir_path> `\n\n\n若要使用特定设备，如 cpu、cuda、cuda:0、mps 或 hpu：\n`python cam.py --image-path 
\u003Cpath_to_image> --device cuda  --output-dir \u003Coutput_dir_path> `\n\n----------\n\n您可以在以下方法中进行选择：\n\n`GradCAM` , `HiResCAM`, `ScoreCAM`, `GradCAMPlusPlus`, `AblationCAM`, `XGradCAM` , `LayerCAM`, `FullGrad`, `EigenCAM`, `ShapleyCAM`, 和 `FinerCAM`。\n\n某些方法如 ScoreCAM 和 AblationCAM 需要大量的前向传播，并具有批处理实现。\n\n您可以使用 `cam.batch_size =` 来控制批大小。\n\n----------\n\n## 引用\n如果您在研究中使用此代码，请引用。以下是一个 BibTeX 条目示例：\n\n```\n@misc{jacobgilpytorchcam,\n  title={PyTorch library for CAM methods},\n  author={Jacob Gildenblat and contributors},\n  year={2021},\n  publisher={GitHub},\n  howpublished={\\url{https:\u002F\u002Fgithub.com\u002Fjacobgil\u002Fpytorch-grad-cam}},\n}\n```\n\n----------\n\n# 参考文献\nhttps:\u002F\u002Farxiv.org\u002Fabs\u002F1610.02391 \u003Cbr>\n`Grad-CAM: Visual Explanations from Deep Networks via Gradient-based Localization\nRamprasaath R. Selvaraju, Michael Cogswell, Abhishek Das, Ramakrishna Vedantam, Devi Parikh, Dhruv Batra`\n\nhttps:\u002F\u002Farxiv.org\u002Fabs\u002F2011.08891 \u003Cbr>\n`Use HiResCAM instead of Grad-CAM for faithful explanations of convolutional neural networks\nRachel L. Draelos, Lawrence Carin`\n\nhttps:\u002F\u002Farxiv.org\u002Fabs\u002F1710.11063 \u003Cbr>\n`Grad-CAM++: Improved Visual Explanations for Deep Convolutional Networks\nAditya Chattopadhyay, Anirban Sarkar, Prantik Howlader, Vineeth N Balasubramanian`\n\nhttps:\u002F\u002Farxiv.org\u002Fabs\u002F1910.01279 \u003Cbr>\n`Score-CAM: Score-Weighted Visual Explanations for Convolutional Neural Networks\nHaofan Wang, Zifan Wang, Mengnan Du, Fan Yang, Zijian Zhang, Sirui Ding, Piotr Mardziel, Xia Hu`\n\nhttps:\u002F\u002Fieeexplore.ieee.org\u002Fabstract\u002Fdocument\u002F9093360\u002F \u003Cbr>\n`Ablation-cam: Visual explanations for deep convolutional network via gradient-free localization.\nSaurabh Desai and Harish G Ramaswamy. 
In WACV, pages 972–980, 2020`\n\nhttps:\u002F\u002Farxiv.org\u002Fabs\u002F2008.02312 \u003Cbr>\n`Axiom-based Grad-CAM: Towards Accurate Visualization and Explanation of CNNs\nRuigang Fu, Qingyong Hu, Xiaohu Dong, Yulan Guo, Yinghui Gao, Biao Li`\n\nhttps:\u002F\u002Farxiv.org\u002Fabs\u002F2008.00299 \u003Cbr>\n`Eigen-CAM: Class Activation Map using Principal Components\nMohammed Bany Muhammad, Mohammed Yeasin`\n\nhttp:\u002F\u002Fmftp.mmcheng.net\u002FPapers\u002F21TIP_LayerCAM.pdf \u003Cbr>\n`LayerCAM: Exploring Hierarchical Class Activation Maps for Localization\nPeng-Tao Jiang; Chang-Bin Zhang; Qibin Hou; Ming-Ming Cheng; Yunchao Wei`\n\nhttps:\u002F\u002Farxiv.org\u002Fabs\u002F1905.00780 \u003Cbr>\n`Full-Gradient Representation for Neural Network Visualization\nSuraj Srinivas, Francois Fleuret`\n\nhttps:\u002F\u002Farxiv.org\u002Fabs\u002F1806.10206 \u003Cbr>\n`Deep Feature Factorization For Concept Discovery\nEdo Collins, Radhakrishna Achanta, Sabine Süsstrunk`\n\nhttps:\u002F\u002Farxiv.org\u002Fabs\u002F2410.00267 \u003Cbr>\n`KPCA-CAM: Visual Explainability of Deep Computer Vision Models using Kernel PCA\nSachin Karmani, Thanushon Sivakaran, Gaurav Prasad, Mehmet Ali, Wenbo Yang, Sheyang Tang`\n\nhttps:\u002F\u002Fhal.science\u002Fhal-02963298\u002Fdocument \u003Cbr>\n`Features Understanding in 3D CNNs for Actions Recognition in Video\nKazi Ahmed Asif Fuad, Pierre-Etienne Martin, Romain Giot, Romain\nBourqui, Jenny Benois-Pineau, Akka Zemmar`\n\nhttps:\u002F\u002Farxiv.org\u002Fabs\u002F2501.06261 \u003Cbr>\n`CAMs as Shapley Value-based Explainers\nHuaiguang Cai`\n\n\nhttps:\u002F\u002Farxiv.org\u002Fpdf\u002F2501.11309 \u003Cbr>\n`Finer-CAM : Spotting the Difference Reveals Finer Details for Visual Explanation`    \n`Ziheng Zhang*, Jianyang Gu*, Arpita Chowdhury, Zheda Mai, David Carlyn,Tanya Berger-Wolf, Yu Su, Wei-Lun Chao`","# pytorch-grad-cam 快速上手指南\n\n## 1. 
环境准备\n\n在安装 `pytorch-grad-cam` 之前，请确保您的系统已安装以下依赖：\n\n*   **Python**：建议 3.6 及以上版本。\n*   **PyTorch**：建议安装较新版本，且支持 CUDA（如果需要 GPU 加速）。\n*   **TorchVision**：通常与 PyTorch 配套安装，用于加载预训练模型（如 ResNet, VGG 等）。\n*   **OpenCV**：用于图像处理（可选，但在运行示例时常用）。\n\n## 2. 安装步骤\n\n推荐使用 pip 进行安装。为了提高下载速度，建议使用国内镜像源。\n\n**基础安装命令：**\n```bash\npip install grad-cam\n```\n\n**推荐使用国内镜像源加速安装：**\n```bash\npip install grad-cam -i https:\u002F\u002Fpypi.tuna.tsinghua.edu.cn\u002Fsimple\n```\n\n## 3. 基本使用\n\n以下示例展示了如何使用 Grad-CAM 对 ResNet50 模型进行可视化分析。\n\n### 3.1 导入必要的库\n```python\nfrom pytorch_grad_cam import GradCAM\nfrom pytorch_grad_cam.utils.model_targets import ClassifierOutputTarget\nfrom pytorch_grad_cam.utils.image import show_cam_on_image\nfrom torchvision.models import resnet50\nimport torch\nfrom PIL import Image\nimport numpy as np\nimport cv2\n```\n\n### 3.2 加载模型与数据\n```python\n# 1. 加载预训练模型\nmodel = resnet50(pretrained=True)\nmodel.eval()  # 设置为评估模式\n\n# 2. 选择目标层\n# 不同的模型架构对应不同的目标层，常见选择如下：\n# Resnet18\u002F50: model.layer4[-1]\n# VGG\u002FDenseNet\u002FMobileNet: model.features[-1]\n# ViT: model.blocks[-1].norm1\ntarget_layers = [model.layer4[-1]]\n\n# 3. 准备输入数据\n# 注意：input_tensor 需要经过归一化处理，此处仅为示例\n# 假设 rgb_img 是经过 resize 且转为 float32 的原始图像数组 (H, W, C)，值域 [0, 1]\n# input_tensor 是经过 ToTensor 和 Normalize 的 tensor (C, H, W)\nrgb_img = np.array(Image.open(\"your_image.jpg\").resize((224, 224))) \u002F 255.0\ninput_tensor = torch.from_numpy(rgb_img).permute(2, 0, 1).unsqueeze(0).float() # 示例转换，实际请按模型要求归一化\n```\n\n### 3.3 运行 Grad-CAM\n```python\n# 4. 设置目标类别\n# 例如 281 是 ImageNet 中的 \"tabby, tabby cat\" 类别\ntargets = [ClassifierOutputTarget(281)]\n\n# 5. 
构建 CAM 对象并生成热力图\nwith GradCAM(model=model, target_layers=target_layers) as cam:\n    # 可以传入 aug_smooth=True 和 eigen_smooth=True 来改善视觉效果\n    grayscale_cam = cam(input_tensor=input_tensor, targets=targets)\n    \n    # 获取单张图片的 CAM\n    grayscale_cam = grayscale_cam[0, :]\n    \n    # 将热力图叠加在原图上\n    visualization = show_cam_on_image(rgb_img, grayscale_cam, use_rgb=True)\n    \n    # 保存或显示结果（visualization 已是 uint8 的 RGB 图像，转为 BGR 即可保存，无需再乘 255）\n    # cv2.imwrite('cam_result.jpg', visualization[:, :, ::-1])\n```","某工业视觉公司的算法工程师正在开发一款电路板表面缺陷检测模型，模型在测试集上准确率很高，但在实际产线部署时却频繁将良品误判为次品，急需排查原因。\n\n### 没有 pytorch-grad-cam 时\n- 面对误判结果，工程师只能看到模型输出的“次品”标签和概率值，完全不知道模型是根据图像的哪一部分特征做出的判断，调试如同“盲人摸象”。\n- 为了解决误判，只能凭借经验盲目尝试调整超参数、更换骨干网络或重新清洗数据，往往耗费数天时间却收效甚微。\n- 缺乏直观的证据来定位问题，无法判断是数据标注错误还是模型学到了错误的特征关联，难以向项目组解释模型失效的原因。\n- 产线工人对频繁的“黑盒”误报感到困惑和不信任，导致人机协作困难，AI 项目落地阻力重重。\n\n### 使用 pytorch-grad-cam 后\n- 工程师只需几行代码即可对误判图片生成类激活热力图，直观地看到模型在推理时“关注”的具体图像区域。\n- 热力图清晰显示模型关注的是传送带背景的阴影而非电路板本身的划痕，迅速定位到“背景干扰”这一根本原因，从而针对性地进行数据增强。\n- 支持多种先进算法（如 GradCAM++、EigenCAM），适用于工程师手头的 CNN 和 Vision Transformer 等不同架构，无需自行编写复杂的梯度计算代码。\n- 通过可视化图表向非技术人员展示模型决策依据，不仅快速解决了误判问题，还显著提升了团队对 AI 模型的信任度。\n\npytorch-grad-cam 将不可见的模型逻辑转化为可视化的热力图，让算法调试从“猜谜游戏”变为“精准诊断”，极大提升了模型优化的效率。","https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fjacobgil_pytorch-grad-cam_123662a2.png","jacobgil","Jacob Gildenblat","https:\u002F\u002Foss.gittoolsai.com\u002Favatars\u002Fjacobgil_be8e147b.jpg","Playing with tensors.",null,"Israel","jacob.gildenblat@gmail.com","jacobgildenblat","jacobgil.github.io","https:\u002F\u002Fgithub.com\u002Fjacobgil",[86],{"name":87,"color":88,"percentage":89},"Python","#3572A5",100,12720,1703,"2026-04-05T03:48:15","MIT",1,"未说明",{"notes":97,"python":95,"dependencies":98},"该工具为 PyTorch 计算机视觉模型提供可解释性分析方法。支持多种 CNN 和 Vision Transformer 架构（如 ResNet, ViT, Swin Transformer）。支持分类、目标检测、语义分割等多种任务。包含多种可视化方法（如 GradCAM, EigenCAM 等）及评估指标。安装命令为 'pip install 
grad-cam'。",[99,100],"torch","torchvision",[26,14,13],[103,104,105,106,107,108,109,110,111,112,113,114,115,116,117,118,119],"deep-learning","pytorch","grad-cam","visualizations","interpretability","interpretable-ai","interpretable-deep-learning","score-cam","class-activation-maps","vision-transformers","explainable-ai","xai","image-classification","machine-learning","object-detection","computer-vision","explainable-ml",19,"2026-03-27T02:49:30.150509","2026-04-06T07:15:06.492445",[124,129,134,139,144],{"id":125,"question_zh":126,"answer_zh":127,"source_url":128},1746,"运行 GradCAM 时出现 \"AxisError: axis 2 is out of bounds\" 错误如何解决？","这通常是因为选择了错误的目标层（target_layers）。GradCAM 需要二维的激活图，因此不能直接在全连接层或线性层上使用。例如，在 EfficientNet 等模型中，应选择卷积层或池化层（如 'avgpool'）作为 target_layers，而不是 'classifier' 层。","https:\u002F\u002Fgithub.com\u002Fjacobgil\u002Fpytorch-grad-cam\u002Fissues\u002F192",{"id":130,"question_zh":131,"answer_zh":132,"source_url":133},1747,"如何通过 Conda 安装 grad-cam？","可以通过 conda-forge 渠道直接安装。安装命令为：\nconda install -c conda-forge grad-cam","https:\u002F\u002Fgithub.com\u002Fjacobgil\u002Fpytorch-grad-cam\u002Fissues\u002F180",{"id":135,"question_zh":136,"answer_zh":137,"source_url":138},1748,"在 timm 的 ViT (Vision Transformer) 模型中使用 GradCAM 时遇到形状错误 怎么办？","对于 ViT 模型，需要正确设置 reshape_transform 和目标层。可以尝试将目标层设置为 model.blocks[-2] 或 model.blocks[-1].norm1，并确保 reshape 函数正确处理了 patch embedding 的维度变换（例如移除 CLS token 并调整形状）。","https:\u002F\u002Fgithub.com\u002Fjacobgil\u002Fpytorch-grad-cam\u002Fissues\u002F76",{"id":140,"question_zh":141,"answer_zh":142,"source_url":143},1749,"为什么自定义训练模型的 GradCAM 可视化结果看起来很奇怪或不正确？","请检查推理时的图像预处理参数是否与训练时完全一致。常见错误包括归一化的均值和标准差设置错误。例如，将均值 [0.485, 0.456, 406]（笔误）修正为 [0.485, 0.456, 0.406] 后，可视化结果恢复正常。","https:\u002F\u002Fgithub.com\u002Fjacobgil\u002Fpytorch-grad-cam\u002Fissues\u002F189",{"id":145,"question_zh":146,"answer_zh":147,"source_url":148},1750,"如何在 Detectron2 或 FPN 网络中使用 GradCAM？","Detectron2 模型通常接受字典列表作为输入，而 GradCAM 期望张量输入，这会导致 
IndexError。解决方法通常需要调整输入处理流程，确保传入 GradCAM 的是预处理后的张量，同时可能需要自定义 reshape 变换以适应 FPN 的输出结构。","https:\u002F\u002Fgithub.com\u002Fjacobgil\u002Fpytorch-grad-cam\u002Fissues\u002F31",[]]