**Similar projects**

- [AUTOMATIC1111/stable-diffusion-webui](https://github.com/AUTOMATIC1111/stable-diffusion-webui) (~162k stars): a Gradio-based web UI for running Stable Diffusion locally — txt2img, img2img, inpainting/outpainting, attention adjustment, prompt matrices, negative prompts, highres fix, GFPGAN/CodeFormer face restoration, neural upscalers, an extension system, and low-VRAM options.
- [affaan-m/everything-claude-code](https://github.com/affaan-m/everything-claude-code): a battle-tested optimization framework for AI coding agents (Claude Code, Codex, Cursor), adding skill modularization, persistent memory, security scanning, and token-cost optimizations; winner of an Anthropic hackathon prize.
- [Comfy-Org/ComfyUI](https://github.com/Comfy-Org/ComfyUI) (~108k stars): a modular, node-graph visual engine for building Stable Diffusion pipelines without code; runs on Windows, macOS, and Linux across NVIDIA, AMD, Intel, and Apple silicon, with early support for SDXL, Flux, and SD3.
- [ChatGPTNextWeb/NextChat](https://github.com/ChatGPTNextWeb/NextChat) (~88k stars): a lightweight, fast AI assistant for web, iOS, Android, Windows, macOS, and Linux; supports Claude, DeepSeek, GPT-4, and Gemini Pro, the MCP protocol, one-click deployment to Vercel or Zeabur, and an enterprise edition.
- [microsoft/ML-For-Beginners](https://github.com/microsoft/ML-For-Beginners) (~85k stars): Microsoft's 12-week classic machine-learning curriculum — 26 lessons, 52 quizzes, and automated translations into 50+ languages.
- [infiniflow/ragflow](https://github.com/infiniflow/ragflow) (~77k stars): a leading open-source RAG engine that couples deep parsing of complex documents (tables, charts, mixed layouts) with agent capabilities; Apache 2.0 licensed, with visual workflow orchestration and flexible APIs.

---

# Grad-CAM.pytorch

> A PyTorch implementation of Grad-CAM and Grad-CAM++ that can visualize the Class Activation Map (CAM) of any classification network, including custom ones; CAM generation is also implemented for the Faster R-CNN and RetinaNet detection networks. Feedback and issues are welcome.

Grad-CAM.pytorch is an open-source tool built on PyTorch that provides intuitive visual explanations for image-classification networks and object-detection models. It addresses the opacity of a neural network's "black-box" decision process: by generating class activation maps (CAMs), it shows which regions of an image the model attends to when classifying the image or localizing objects.

The tool reimplements both the classic Grad-CAM algorithm and its refinement Grad-CAM++, and is broadly applicable: it supports mainstream and custom classification networks such as ResNet, VGG, and DenseNet, as well as detection architectures including Faster R-CNN, RetinaNet, and FCOS. It lets you target an arbitrary network layer and class ID, and combines Guided Backpropagation to produce higher-resolution, more detailed heat maps for analyzing a model's attention.

It suits AI researchers, algorithm engineers, and deep-learning developers alike — whether you are debugging model performance, checking for dataset bias, or explaining model decisions to a non-technical audience. Installation is simple and the command-line interface is friendly, making it a practical aid for understanding how convolutional networks work internally.

A PyTorch implementation of [Grad-CAM: Visual Explanations from Deep Networks via Gradient-based Localization](https://arxiv.org/pdf/1610.02391) and [Grad-CAM++: Improved Visual Explanations for Deep Convolutional Networks](https://arxiv.org/pdf/1710.11063.pdf).

1. [Dependencies](#dependencies)
2. [Usage](#usage)
3. [Sample analysis](#sample-analysis)<br>
   3.1 [Single object](#single-object)<br>
   3.2 [Multiple objects](#multiple-objects)<br>
4. [Summary](#summary)
5. [Object detection: Faster R-CNN](#object-detection-faster-r-cnn)<br>
   5.1 [Installing detectron2](#installing-detectron2)<br>
   5.2 [Testing](#testing)<br>
   5.3 [Grad-CAM results](#grad-cam-results)<br>
   5.4 [Summary](#summary-1)
6. [Object detection: RetinaNet](#object-detection-retinanet)<br>
   6.1 [Installing detectron2](#installing-detectron2-1)<br>
   6.2 [Testing](#testing-1)<br>
   6.3 [Grad-CAM results](#grad-cam-results-1)<br>
   6.4 [Summary](#summary-2)
7. [Object detection: FCOS](#object-detection-fcos)<br>
   7.1 [Installing AdelaiDet](#installing-adelaidet)<br>
   7.2 [Testing](#testing-2)<br>
   7.3 [Grad-CAM results](#grad-cam-results-2)<br>
   7.4 [Summary](#summary-3)
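Both methods build the CAM the same way and differ only in how the channel weights are computed. For reference, the standard formulas from the two papers, where $A^k$ is the $k$-th feature map of the chosen layer, $y^c$ the score for class $c$, and $Z$ the number of spatial positions:

$$L^c = \mathrm{ReLU}\Big(\sum_k \alpha_k^c A^k\Big),\qquad \alpha_k^c = \frac{1}{Z}\sum_i\sum_j \frac{\partial y^c}{\partial A^k_{ij}} \quad \text{(Grad-CAM)}$$

$$\alpha_k^c = \sum_{i,j} \alpha^{kc}_{ij}\,\mathrm{ReLU}\!\Big(\frac{\partial y^c}{\partial A^k_{ij}}\Big),\qquad \alpha^{kc}_{ij} = \frac{\dfrac{\partial^2 y^c}{(\partial A^k_{ij})^2}}{2\dfrac{\partial^2 y^c}{(\partial A^k_{ij})^2} + \sum_{a,b} A^k_{ab}\dfrac{\partial^3 y^c}{(\partial A^k_{ij})^3}} \quad \text{(Grad-CAM++)}$$

Grad-CAM++ replaces the uniform spatial average with per-pixel weights on the positive gradients, which is what lets it cover multiple instances of a class more completely.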
**Grad-CAM overall architecture**

![](https://oss.gittoolsai.com/images/yizt_Grad-CAM.pytorch_readme_b72e04ad4515.jpg)

**Similarities and differences between Grad-CAM++ and Grad-CAM**

![](https://oss.gittoolsai.com/images/yizt_Grad-CAM.pytorch_readme_b5b2d046b772.png)

## Dependencies

```wiki
python 3.6.x
pytorch 1.0.1+
torchvision 0.2.2
opencv-python
matplotlib
scikit-image
numpy
```

## Usage

```shell
python main.py --image-path examples/pic1.jpg \
               --network densenet121 \
               --weight-path /opt/pretrained_model/densenet121-a639ec97.pth
```

**Parameters**:

- image-path: path of the image to visualize (optional; default `https://oss.gittoolsai.com/images/yizt_Grad-CAM.pytorch_readme_ccf4bb08e27b.jpg`)
- network: network name (optional; default `resnet50`)
- weight-path: path to the network's pre-trained weights (optional; by default the matching pre-trained weights are downloaded from the PyTorch site)
- layer-name: layer used for Grad-CAM (optional; default: the last convolutional layer)
- class-id: class id used for Grad-CAM and the Guided Backpropagation backward pass (optional; default: the class predicted by the network)
- output-dir: directory where visualization results are saved (optional; default: the `results` directory)

## Sample analysis

### Single object

**Original image**

![](https://oss.gittoolsai.com/images/yizt_Grad-CAM.pytorch_readme_ccf4bb08e27b.jpg)

**Results**

*(Image grid: HeatMap, Grad-CAM, HeatMap++, Grad-CAM++, Guided backpropagation, and Guided Grad-CAM visualizations for vgg16, vgg19, resnet50, resnet101, densenet121, inception_v3, mobilenet_v2, and shufflenet_v2.)*

### Multiple objects

For images containing multiple objects, Grad-CAM++'s coverage is more complete than Grad-CAM's; this is Grad-CAM++'s main advantage.

**Original image**

![](https://oss.gittoolsai.com/images/yizt_Grad-CAM.pytorch_readme_b0d01cbbeb15.jpg)

**Results**

*(Image grid: HeatMap, Grad-CAM, HeatMap++, Grad-CAM++, Guided backpropagation, and Guided Grad-CAM visualizations for the same eight networks: vgg16, vgg19, resnet50, resnet101, densenet121, inception_v3, mobilenet_v2, and shufflenet_v2.)*
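The pipeline that produces these heat maps can be sketched in a few lines of PyTorch. This is a minimal illustration under stated assumptions, not this repository's `main.py`: `TinyNet` and `grad_cam` are hypothetical stand-ins, and a real run would use a torchvision model with its last convolutional layer, as the CLI does by default.

```python
# Minimal Grad-CAM sketch. TinyNet and grad_cam are illustrative stand-ins,
# not this repository's API.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyNet(nn.Module):
    """A toy classifier standing in for resnet50 / densenet121, etc."""
    def __init__(self, num_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
            nn.Conv2d(8, 16, 3, padding=1), nn.ReLU(),
        )
        self.fc = nn.Linear(16, num_classes)

    def forward(self, x):
        x = self.features(x)
        return self.fc(F.adaptive_avg_pool2d(x, 1).flatten(1))

def grad_cam(model, layer, image, class_id=None):
    """CAM = ReLU(sum_k alpha_k * A_k), alpha_k = spatial mean of dY_c/dA_k."""
    acts, grads = {}, {}
    h1 = layer.register_forward_hook(lambda m, i, o: acts.update(v=o))
    h2 = layer.register_full_backward_hook(lambda m, gi, go: grads.update(v=go[0]))
    try:
        logits = model(image)
        cls = int(logits.argmax()) if class_id is None else class_id
        logits[0, cls].backward()                       # gradient of the class score
    finally:
        h1.remove()
        h2.remove()
    weights = grads["v"].mean(dim=(2, 3), keepdim=True)  # alpha_k, one per channel
    cam = F.relu((weights * acts["v"]).sum(dim=1))       # (N, H, W)
    return cam / (cam.max() + 1e-8)                      # normalize to [0, 1]

model = TinyNet().eval()
cam = grad_cam(model, model.features[2], torch.randn(1, 3, 32, 32))
print(cam.shape)  # torch.Size([1, 32, 32])
```

Swapping the averaged gradients for the Grad-CAM++ pixel-wise weights is the only change needed to turn this into a Grad-CAM++ sketch.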
## Summary

- vgg's Grad-CAM does not cover the whole object; resnet and densenet cover it more completely, especially densenet. This indirectly suggests that, in terms of generalization and robustness, densenet > resnet > vgg.
- Grad-CAM++ also covers objects more completely than Grad-CAM, especially when a single class has multiple instances: Grad-CAM may cover only some of the objects, while Grad-CAM++ covers essentially all of them. This advantage shows mainly for vgg, however; with densenet, plain Grad-CAM already covers essentially all objects.
- MobileNet V2's Grad-CAM coverage is also quite complete.
- The Guided Backpropagation maps of Inception V3 and MobileNet V2 have blurry contours, whereas ShuffleNet V2's contours are fairly sharp.

## Object detection: Faster R-CNN

A user, [SHAOSIHAN](https://github.com/SHAOSIHAN), asked how to use Grad-CAM for object detection. Neither the Grad-CAM nor the Grad-CAM++ paper mentions generating CAMs for detectors, likely for two reasons:

a) Detection differs from classification. A classification network has a single classification loss, every such network has the same head shape (the last layer has one neuron per class), and the final prediction is a single class-score distribution. A detector's outputs are not single values, and networks such as Faster R-CNN, CornerNet, CenterNet, and FCOS model the problem differently, so their outputs mean different things. There is therefore no unified way to generate Grad-CAM for detectors.

b) Classification is weakly supervised with respect to localization: a CAM reveals which spatial locations the network attends to when predicting — "where it looks" — which has real diagnostic value. Detection is strongly supervised: the predicted box itself already indicates "where it looks".

Here we use the Faster R-CNN network from detectron2 as an example. The main idea: take the highest-scoring predicted box, backpropagate its class score onto the feature map of the proposal box that produced it, and generate the CAM on that feature map.

### Installing detectron2

a) Download

```shell
git clone https://github.com/facebookresearch/detectron2.git
```

b) Modify the `fast_rcnn_inference_single_image` function in `detectron2/modeling/roi_heads/fast_rcnn.py`, mainly to add an index that records which proposal produced each high-scoring predicted box. The modified function:

```python
def fast_rcnn_inference_single_image(
        boxes, scores, image_shape, score_thresh, nms_thresh, topk_per_image
):
    """
    Single-image inference. Return bounding-box detection results by thresholding
    on scores and applying non-maximum suppression (NMS).

    Args:
        Same as `fast_rcnn_inference`, but with boxes, scores, and image shapes
        per image.

    Returns:
        Same as `fast_rcnn_inference`, but for only one image.
    """
    valid_mask = torch.isfinite(boxes).all(dim=1) & torch.isfinite(scores).all(dim=1)
    # Record, for every (proposal, class) score, the index of its proposal.
    indices = torch.arange(start=0, end=scores.shape[0], dtype=torch.int64)
    indices = indices.expand((scores.shape[1], scores.shape[0])).T
    if not valid_mask.all():
        boxes = boxes[valid_mask]
        scores = scores[valid_mask]
        indices = indices[valid_mask]
    scores = scores[:, :-1]
    indices = indices[:, :-1]

    num_bbox_reg_classes = boxes.shape[1] // 4
    # Convert to Boxes to use the `clip` function ...
    boxes = Boxes(boxes.reshape(-1, 4))
    boxes.clip(image_shape)
    boxes = boxes.tensor.view(-1, num_bbox_reg_classes, 4)  # R x C x 4

    # Filter results based on detection scores
    filter_mask = scores > score_thresh  # R x K
    # R' x 2. First column contains indices of the R predictions;
    # Second column contains indices of classes.
    filter_inds = filter_mask.nonzero()
    if num_bbox_reg_classes == 1:
        boxes = boxes[filter_inds[:, 0], 0]
    else:
        boxes = boxes[filter_mask]

    scores = scores[filter_mask]
    indices = indices[filter_mask]
    # Apply per-class NMS
    keep = batched_nms(boxes, scores, filter_inds[:, 1], nms_thresh)
    if topk_per_image >= 0:
        keep = keep[:topk_per_image]
    boxes, scores, filter_inds = boxes[keep], scores[keep], filter_inds[keep]
    indices = indices[keep]

    result = Instances(image_shape)
    result.pred_boxes = Boxes(boxes)
    result.scores = scores
    result.pred_classes = filter_inds[:, 1]
    result.indices = indices
    return result, filter_inds[:, 0]
```

c) Install; if you run into problems, see [detectron2](https://github.com/facebookresearch/detectron2) — installation differs across operating systems.

```shell
cd detectron2
pip install -e .
```

### Testing

a) Download the pre-trained model

```shell
wget https://dl.fbaipublicfiles.com/detectron2/PascalVOC-Detection/faster_rcnn_R_50_C4/142202221/model_final_b1acc2.pkl
```

b) Test Grad-CAM image generation by running the following in this project's root directory:

```shell
export KMP_DUPLICATE_LIB_OK=TRUE
python detection/demo.py --config-file detection/faster_rcnn_R_50_C4.yaml \
--input https://oss.gittoolsai.com/images/yizt_Grad-CAM.pytorch_readme_ccf4bb08e27b.jpg \
--opts MODEL.WEIGHTS /Users/yizuotian/pretrained_model/model_final_b1acc2.pkl MODEL.DEVICE cpu
```
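The effect of the `indices` bookkeeping added in the `fast_rcnn_inference_single_image` modification can be seen on a toy score matrix. This is a hedged sketch: the numbers are made up, and a simple threshold stands in for the full score-threshold-plus-NMS pipeline.

```python
# Toy illustration of the proposal-index bookkeeping: each (proposal, class)
# score carries the index of its proposal, so a detection that survives
# filtering can be traced back to the proposal feature map Grad-CAM needs.
import torch

scores = torch.tensor([        # R=3 proposals x (K+1)=3 columns, last = background
    [0.90, 0.05, 0.05],
    [0.20, 0.70, 0.10],
    [0.10, 0.15, 0.75],
])
R, K1 = scores.shape
indices = torch.arange(R).expand((K1, R)).T        # row i holds proposal id i
scores, indices = scores[:, :-1], indices[:, :-1]  # drop the background column

filter_mask = scores > 0.5                         # stands in for score_thresh + NMS
kept_scores = scores[filter_mask]
kept_indices = indices[filter_mask]                # proposal id per kept detection
print(kept_indices.tolist())  # [0, 1]
```

Proposal 2's best score (0.75) lands in the background column, so after the column is dropped nothing from proposal 2 survives, and the kept detections map back to proposals 0 and 1.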
### Grad-CAM results

*(Image grid, four examples: original image, detected boxes, Grad-CAM HeatMap, and Grad-CAM++ HeatMap; the predicted classes of the four boxes are Dog, Aeroplane, Person, and Horse.)*

### Summary

For object detection, Grad-CAM++ does not perform better than Grad-CAM. A plausible reason: a predicted box already contains a single object, and Grad-CAM++'s advantage over Grad-CAM shows mainly when multiple objects are present.

## Object detection: RetinaNet

After the Faster R-CNN Grad-CAM was completed, two users, [abhigoku10](https://github.com/abhigoku10) and [wangzyon](https://github.com/wangzyon), asked how to implement Grad-CAM for RetinaNet. RetinaNet's network structure differs from Faster R-CNN's, so CAM generation also differs somewhat. The details:

### Installing detectron2

a) Download

```shell
git clone https://github.com/facebookresearch/detectron2.git
```

b) Modify the `inference_single_image` function in `detectron2/modeling/meta_arch/retinanet.py`, mainly to add a feature-level index that records which feature-map level produced each high-scoring predicted box. The modified function:

```python
    def inference_single_image(self, box_cls, box_delta, anchors, image_size):
        """
        Single-image inference. Return bounding-box detection results by thresholding
        on scores and applying non-maximum suppression (NMS).

        Arguments:
            box_cls (list[Tensor]): list of #feature levels. Each entry contains
                tensor of size (H x W x A, K)
            box_delta (list[Tensor]): Same shape as 'box_cls' except that K becomes 4.
            anchors (list[Boxes]): list of #feature levels. Each entry contains
                a Boxes object, which contains all the anchors for that
                image in that feature level.
            image_size (tuple(H, W)): a tuple of the image height and width.

        Returns:
            Same as `inference`, but for only one image.
        """
        boxes_all = []
        scores_all = []
        class_idxs_all = []
        feature_level_all = []

        # Iterate over every feature level
        for i, (box_cls_i, box_reg_i, anchors_i) in enumerate(zip(box_cls, box_delta, anchors)):
            # (HxWxAxK,)
            box_cls_i = box_cls_i.flatten().sigmoid_()

            # Keep top k top scoring indices only.
            num_topk = min(self.topk_candidates, box_reg_i.size(0))
            # torch.sort is actually faster than .topk (at least on GPUs)
            predicted_prob, topk_idxs = box_cls_i.sort(descending=True)
            predicted_prob = predicted_prob[:num_topk]
            topk_idxs = topk_idxs[:num_topk]

            # filter out the proposals with low confidence score
            keep_idxs = predicted_prob > self.score_threshold
            predicted_prob = predicted_prob[keep_idxs]
            topk_idxs = topk_idxs[keep_idxs]

            anchor_idxs = topk_idxs // self.num_classes
            classes_idxs = topk_idxs % self.num_classes

            box_reg_i = box_reg_i[anchor_idxs]
            anchors_i = anchors_i[anchor_idxs]
            # predict boxes
            predicted_boxes = self.box2box_transform.apply_deltas(box_reg_i, anchors_i.tensor)

            boxes_all.append(predicted_boxes)
            scores_all.append(predicted_prob)
            class_idxs_all.append(classes_idxs)
            # Record the feature level that produced each candidate box.
            feature_level_all.append(torch.ones_like(classes_idxs) * i)

        boxes_all, scores_all, class_idxs_all, feature_level_all = [
            cat(x) for x in [boxes_all, scores_all, class_idxs_all, feature_level_all]
        ]
        keep = batched_nms(boxes_all, scores_all, class_idxs_all, self.nms_threshold)
        keep = keep[: self.max_detections_per_image]

        result = Instances(image_size)
        result.pred_boxes = Boxes(boxes_all[keep])
        result.scores = scores_all[keep]
        result.pred_classes = class_idxs_all[keep]
        result.feature_levels = feature_level_all[keep]
        return result
```

c) Add a `predict` function to `detectron2/modeling/meta_arch/retinanet.py`:

```python
    def predict(self, batched_inputs):
        """
        Args:
            batched_inputs: a list, batched outputs of :class:`DatasetMapper`.
                Each item in the list contains the inputs for one image.
                For now, each item in the list is a dict that contains:

                * image: Tensor, image in (C, H, W) format.
                * instances: Instances

                Other information that's included in the original dicts, such as:

                * "height", "width" (int): the output resolution of the model, used in inference.
                  See :meth:`postprocess` for details.
        Returns:
            list[dict]: one dict per image, mapping "instances" to the
            post-processed detection results for that image.
        """
        images = self.preprocess_image(batched_inputs)

        features = self.backbone(images.tensor)
        features = [features[f] for f in self.in_features]
        box_cls, box_delta = self.head(features)
        anchors = self.anchor_generator(features)

        results = self.inference(box_cls, box_delta, anchors, images.image_sizes)
        processed_results = []
        for results_per_image, input_per_image, image_size in zip(
                results, batched_inputs, images.image_sizes
        ):
            height = input_per_image.get("height", image_size[0])
            width = input_per_image.get("width", image_size[1])
            r = detector_postprocess(results_per_image, height, width)
            processed_results.append({"instances": r})
        return processed_results
```

d) Install; if you run into problems, see [detectron2](https://github.com/facebookresearch/detectron2) — installation differs across operating systems.

```shell
cd detectron2
pip install -e .
```

### Testing

a) Download the pre-trained model

```shell
wget https://dl.fbaipublicfiles.com/detectron2/COCO-Detection/retinanet_R_50_FPN_3x/137849486/model_final_4cafe0.pkl
```

b) Test Grad-CAM image generation by running the following in this project's root directory:

```shell
export KMP_DUPLICATE_LIB_OK=TRUE
python detection/demo_retinanet.py --config-file detection/retinanet_R_50_FPN_3x.yaml \
      --input https://oss.gittoolsai.com/images/yizt_Grad-CAM.pytorch_readme_ccf4bb08e27b.jpg \
      --layer-name head.cls_subnet.0 \
      --opts MODEL.WEIGHTS /Users/yizuotian/pretrained_model/model_final_4cafe0.pkl MODEL.DEVICE cpu
```
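The anchor/class index recovery inside the modified `inference_single_image` — flat index to anchor via `//`, to class via `%` — can be checked on a toy tensor. This is a hedged sketch with made-up scores and a hypothetical K = 4 classes.

```python
# Toy check of the flat-index arithmetic used per feature level: class
# scores are flattened to (H*W*A*K,), and each top-k flat index is split
# into an anchor index (// num_classes) and a class index (% num_classes).
import torch

num_classes = 4
scores = torch.tensor([        # 3 anchors x K=4 class scores (made up)
    [0.1, 0.9, 0.2, 0.0],
    [0.7, 0.1, 0.1, 0.1],
    [0.2, 0.2, 0.2, 0.4],
])
flat = scores.flatten()
predicted_prob, topk_idxs = flat.sort(descending=True)
topk_idxs = topk_idxs[:2]                    # keep the top-2 candidates

anchor_idxs = topk_idxs // num_classes       # which anchor produced the score
class_idxs = topk_idxs % num_classes         # which class it belongs to
print(anchor_idxs.tolist(), class_idxs.tolist())  # [0, 1] [1, 0]
```

The top two scores (0.9 and 0.7) sit at flat positions 1 and 4, which decode to (anchor 0, class 1) and (anchor 1, class 0) — exactly the pairing the modified function records alongside the feature-level index.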
                  |\n| ---------------------- | ------------------------------------------------------------ | ------------------------------------------------------------ | ------------------------------------------------------------ | ------------------------------------------------------------ |\n| 原图                   | ![](.https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fyizt_Grad-CAM.pytorch_readme_ccf4bb08e27b.jpg)                                     | ![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fyizt_Grad-CAM.pytorch_readme_9225ec332111.jpg)                                     | ![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fyizt_Grad-CAM.pytorch_readme_4c54eafc1cd8.jpg)                                     | ![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fyizt_Grad-CAM.pytorch_readme_dc41be35979b.jpg)                                     |\n| 预测边框               | ![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fyizt_Grad-CAM.pytorch_readme_a2c9a0845dc6.jpg)                | ![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fyizt_Grad-CAM.pytorch_readme_97258584d06a.jpg)                | ![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fyizt_Grad-CAM.pytorch_readme_b81bd2cefc53.jpg)                | ![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fyizt_Grad-CAM.pytorch_readme_c0a5fd37a634.jpg)                |\n| GradCAM-cls_subnet.0   | ![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fyizt_Grad-CAM.pytorch_readme_a9a5388f8a09.jpg)  | ![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fyizt_Grad-CAM.pytorch_readme_4cbb5d8f14bf.jpg)  | ![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fyizt_Grad-CAM.pytorch_readme_39d9cfc6d880.jpg)  | ![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fyizt_Grad-CAM.pytorch_readme_4db14738fae0.jpg)  |\n| GradCAM-cls_subnet.1   | 
![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fyizt_Grad-CAM.pytorch_readme_01d6e2fe8c77.jpg)  | ![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fyizt_Grad-CAM.pytorch_readme_d53df4ae6727.jpg)  | ![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fyizt_Grad-CAM.pytorch_readme_ff11e51caae9.jpg)  | ![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fyizt_Grad-CAM.pytorch_readme_901c6bab4607.jpg)  |\n| GradCAM-cls_subnet.2   | ![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fyizt_Grad-CAM.pytorch_readme_96bb1116266b.jpg)  | ![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fyizt_Grad-CAM.pytorch_readme_cf7d224b1a13.jpg)  | ![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fyizt_Grad-CAM.pytorch_readme_24cd9dd2802d.jpg)  | ![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fyizt_Grad-CAM.pytorch_readme_ccd9d289e38d.jpg)  |\n| GradCAM-cls_subnet.3   | ![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fyizt_Grad-CAM.pytorch_readme_34e5868297a8.jpg)  | ![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fyizt_Grad-CAM.pytorch_readme_05e9a30099dc.jpg)  | ![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fyizt_Grad-CAM.pytorch_readme_df817c58d320.jpg)  | ![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fyizt_Grad-CAM.pytorch_readme_d9965d8afeff.jpg)  |\n| GradCAM-cls_subnet.4   | ![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fyizt_Grad-CAM.pytorch_readme_9d8c4a75951f.jpg)  | ![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fyizt_Grad-CAM.pytorch_readme_118ae38a970a.jpg)  | ![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fyizt_Grad-CAM.pytorch_readme_5faf261e5c50.jpg)  | ![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fyizt_Grad-CAM.pytorch_readme_9b076116f286.jpg)  |\n| GradCAM-cls_subnet.5   | ![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fyizt_Grad-CAM.pytorch_readme_660870b73cfe.jpg)  | 
![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fyizt_Grad-CAM.pytorch_readme_7f31b71f41da.jpg)  | ![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fyizt_Grad-CAM.pytorch_readme_5e4455a915c9.jpg)  | ![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fyizt_Grad-CAM.pytorch_readme_1e843f07aa74.jpg)  |\n| GradCAM-cls_subnet.6   | ![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fyizt_Grad-CAM.pytorch_readme_9e876ff9060f.jpg)  | ![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fyizt_Grad-CAM.pytorch_readme_7462ba4696d0.jpg)  | ![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fyizt_Grad-CAM.pytorch_readme_752c552c7bf1.jpg)  | ![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fyizt_Grad-CAM.pytorch_readme_59fe9ea5aaac.jpg)  |\n| GradCAM-cls_subnet.7   | ![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fyizt_Grad-CAM.pytorch_readme_052747f40d3b.jpg)  | ![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fyizt_Grad-CAM.pytorch_readme_1da0ebcdcec2.jpg)  | ![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fyizt_Grad-CAM.pytorch_readme_79c8b77b2b87.jpg)  | ![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fyizt_Grad-CAM.pytorch_readme_1451e5b37498.jpg)  |\n| GradCAM++-cls_subnet.0 | ![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fyizt_Grad-CAM.pytorch_readme_57add27842bb.jpg) | ![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fyizt_Grad-CAM.pytorch_readme_776ff8115619.jpg) | ![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fyizt_Grad-CAM.pytorch_readme_d87762bb5e12.jpg) | ![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fyizt_Grad-CAM.pytorch_readme_44961a791d41.jpg) |\n| GradCAM++-cls_subnet.1 | ![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fyizt_Grad-CAM.pytorch_readme_15f2fa00c787.jpg) | ![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fyizt_Grad-CAM.pytorch_readme_063207040936.jpg) | 
![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fyizt_Grad-CAM.pytorch_readme_2254b6989c4b.jpg) | ![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fyizt_Grad-CAM.pytorch_readme_376c82f9ce56.jpg) |\n| GradCAM++-cls_subnet.2 | ![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fyizt_Grad-CAM.pytorch_readme_9bec54a616e4.jpg) | ![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fyizt_Grad-CAM.pytorch_readme_01cd8f8232d8.jpg) | ![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fyizt_Grad-CAM.pytorch_readme_cdf277035fa4.jpg) | ![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fyizt_Grad-CAM.pytorch_readme_2d41408f7b89.jpg) |\n| GradCAM++-cls_subnet.3 | ![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fyizt_Grad-CAM.pytorch_readme_cea337d15e98.jpg) | ![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fyizt_Grad-CAM.pytorch_readme_439d5cc49265.jpg) | ![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fyizt_Grad-CAM.pytorch_readme_732a5a5ad913.jpg) | ![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fyizt_Grad-CAM.pytorch_readme_29d06c97016d.jpg) |\n| GradCAM++-cls_subnet.4 | ![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fyizt_Grad-CAM.pytorch_readme_12e92db74a5e.jpg) | ![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fyizt_Grad-CAM.pytorch_readme_e341640aeae7.jpg) | ![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fyizt_Grad-CAM.pytorch_readme_bd807b98ba87.jpg) | ![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fyizt_Grad-CAM.pytorch_readme_e84b90197087.jpg) |\n| GradCAM++-cls_subnet.5 | ![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fyizt_Grad-CAM.pytorch_readme_776a5620da18.jpg) | ![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fyizt_Grad-CAM.pytorch_readme_7d2c3775352c.jpg) | ![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fyizt_Grad-CAM.pytorch_readme_0b5291736d0c.jpg) | 
![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fyizt_Grad-CAM.pytorch_readme_8407ae67746c.jpg) |\n| GradCAM++-cls_subnet.6 | ![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fyizt_Grad-CAM.pytorch_readme_520fc1034359.jpg) | ![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fyizt_Grad-CAM.pytorch_readme_1d8c0f5b4ea5.jpg) | ![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fyizt_Grad-CAM.pytorch_readme_c45ec6b1dbe1.jpg) | ![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fyizt_Grad-CAM.pytorch_readme_7094155b39bf.jpg) |\n| GradCAM++-cls_subnet.7 | ![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fyizt_Grad-CAM.pytorch_readme_d82b908c438a.jpg) | ![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fyizt_Grad-CAM.pytorch_readme_54c3b4dae2e5.jpg) | ![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fyizt_Grad-CAM.pytorch_readme_b71750d14edc.jpg) | ![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fyizt_Grad-CAM.pytorch_readme_6f0eaa53e9ed.jpg) |\n\nNote: Grad-CAM maps are generated for each of the 8 layers head.cls_subnet.0~head.cls_subnet.7; these 8 layers correspond to the feature maps of the 4 convolutional layers in the RetinaNet classification subnet and the feature maps after their ReLU activations\n\n\n\n### Summary\n\na) None of the RetinaNet Grad-CAM maps look particularly good; the middle layers head.cls_subnet.2~head.cls_subnet.4 are relatively better\n\nb) In my view, the reason RetinaNet performs poorly is that its final classifier is a convolutional layer with a 3\*3 kernel, so when gradients are back-propagated onto the last convolutional feature map, only 3\*3 units receive gradients. In a classification network, or in the Faster R-CNN classifier, the final classifier is a fully connected layer with a global receptive field, so every unit of the last convolutional feature map receives gradients.\n\nc) As gradients propagate back to shallower feature maps, the number of units with gradients gradually increases; but, as the Grad-CAM paper points out, shallower feature maps carry weaker semantic information, which is why the CAM for head.cls_subnet.0 looks so poor.\n\n\n\n## Object Detection - FCOS\n\nAfter Grad-CAM was done for the Faster R-CNN and RetinaNet detectors, a user, [**linsy-ai**](https:\u002F\u002Fgithub.com\u002Flinsy-ai), asked how to implement Grad-CAM for FCOS. FCOS is handled much like RetinaNet, since their overall network structures are similar; here we use the FCOS network from the [AdelaiDet](https:\u002F\u002Fgithub.com\u002Faim-uofa\u002FAdelaiDet) project. The detailed steps follow:\n\n### AdelaiDet Installation\n\na) Download\n\n```shell\ngit clone https:\u002F\u002Fgithub.com\u002Faim-uofa\u002FAdelaiDet.git\n```\n\nb) Install\n\n```shell\ncd AdelaiDet\npython setup.py build develop\n```\n\nNote: 1. AdelaiDet depends on [detectron2](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fdetectron2.git), so install $\color{red}{detectron2}$ first\n\n2. FCOS does not support CPU, only GPU; make sure you install and test it in a $\color{red}{GPU}$ environment\n\n\n\n### Testing\n\na) Download the pre-trained model\n\n```shell\nwget https:\u002F\u002Fcloudstor.aarnet.edu.au\u002Fplus\u002Fs\u002FglqFc13cCoEyHYy\u002Fdownload -O fcos_R_50_1x.pth\n```\n\n\n\nb) Generate the Grad-CAM visualizations\n\nRun the following command from the root directory of this project:\n\n```shell\nexport CUDA_DEVICE_ORDER=\"PCI_BUS_ID\"\nexport CUDA_VISIBLE_DEVICES=\"0\"\npython AdelaiDet\u002Fdemo_fcos.py --config-file AdelaiDet\u002FR_50_1x.yaml \\\n  --input https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fyizt_Grad-CAM.pytorch_readme_ccf4bb08e27b.jpg \\\n  --layer-name proposal_generator.fcos_head.cls_tower.8 \\\n  --opts MODEL.WEIGHTS \u002Fpath\u002Fto\u002Ffcos_R_50_1x.pth MODEL.DEVICE cuda\n```\n\n\n\n### Grad-CAM Results\n\n|                        | Image 1                                                      | Image 2                                                      | Image 3                                                      | Image 4                                                      |\n| ---------------------- | ------------------------------------------------------------ | ------------------------------------------------------------ | ------------------------------------------------------------ | ------------------------------------------------------------ |\n| Original image         | ![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fyizt_Grad-CAM.pytorch_readme_ccf4bb08e27b.jpg) | ![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fyizt_Grad-CAM.pytorch_readme_9225ec332111.jpg) | ![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fyizt_Grad-CAM.pytorch_readme_4c54eafc1cd8.jpg) | ![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fyizt_Grad-CAM.pytorch_readme_dc41be35979b.jpg) |\n| Predicted boxes        | ![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fyizt_Grad-CAM.pytorch_readme_283631fe864d.jpg) | ![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fyizt_Grad-CAM.pytorch_readme_153f23958476.jpg) | ![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fyizt_Grad-CAM.pytorch_readme_ebfa268e24e1.jpg) | ![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fyizt_Grad-CAM.pytorch_readme_23dac6ad9eae.jpg) |\n| GradCAM-cls_tower.0   | ![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fyizt_Grad-CAM.pytorch_readme_e60cc1807049.jpg) | ![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fyizt_Grad-CAM.pytorch_readme_c8e79254876f.jpg) | ![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fyizt_Grad-CAM.pytorch_readme_591cc91e4539.jpg) | ![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fyizt_Grad-CAM.pytorch_readme_5b3f570f3703.jpg) |\n| GradCAM-cls_tower.1   | ![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fyizt_Grad-CAM.pytorch_readme_bed96ffc8535.jpg) | ![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fyizt_Grad-CAM.pytorch_readme_281a4ef368c9.jpg) | ![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fyizt_Grad-CAM.pytorch_readme_6db1574f597f.jpg) | ![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fyizt_Grad-CAM.pytorch_readme_474ced3f9030.jpg) |\n| GradCAM-cls_tower.2   | ![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fyizt_Grad-CAM.pytorch_readme_11aa585c3d51.jpg) | ![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fyizt_Grad-CAM.pytorch_readme_80cfaba238bb.jpg) | ![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fyizt_Grad-CAM.pytorch_readme_e99e205fad8a.jpg) | 
![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fyizt_Grad-CAM.pytorch_readme_7faf6e1ff331.jpg)  |\n| GradCAM-cls_tower.3   | ![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fyizt_Grad-CAM.pytorch_readme_b2c4cf9d4f90.jpg)  | ![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fyizt_Grad-CAM.pytorch_readme_3a2cd8eaec6e.jpg)  | ![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fyizt_Grad-CAM.pytorch_readme_3522c1d6c517.jpg)  | ![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fyizt_Grad-CAM.pytorch_readme_ba957072b8e2.jpg)  |\n| GradCAM-cls_tower.4   | ![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fyizt_Grad-CAM.pytorch_readme_b91b3aa82deb.jpg)  | ![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fyizt_Grad-CAM.pytorch_readme_4ae317ac2edf.jpg)  | ![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fyizt_Grad-CAM.pytorch_readme_60d58a0d3a14.jpg)  | ![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fyizt_Grad-CAM.pytorch_readme_cb47e510d94b.jpg)  |\n| GradCAM-cls_tower.5   | ![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fyizt_Grad-CAM.pytorch_readme_895c16671c0b.jpg)  | ![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fyizt_Grad-CAM.pytorch_readme_52a36fd85a04.jpg)  | ![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fyizt_Grad-CAM.pytorch_readme_a5f3e5dbf697.jpg)  | ![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fyizt_Grad-CAM.pytorch_readme_45cc36c5de7e.jpg)  |\n| GradCAM-cls_tower.6   | ![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fyizt_Grad-CAM.pytorch_readme_3ee75608e1f9.jpg)  | ![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fyizt_Grad-CAM.pytorch_readme_e25e90c9ee80.jpg)  | ![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fyizt_Grad-CAM.pytorch_readme_fb2dd685235c.jpg)  | ![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fyizt_Grad-CAM.pytorch_readme_a2ae866d0e21.jpg)  |\n| GradCAM-cls_tower.7   | 
![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fyizt_Grad-CAM.pytorch_readme_01f7168ae0a8.jpg)  | ![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fyizt_Grad-CAM.pytorch_readme_cc35dac1e2c5.jpg)  | ![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fyizt_Grad-CAM.pytorch_readme_e19cb7fb0377.jpg)  | ![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fyizt_Grad-CAM.pytorch_readme_f78deb461a78.jpg)  |\n| GradCAM-cls_tower.8   | ![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fyizt_Grad-CAM.pytorch_readme_843004faedae.jpg)  | ![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fyizt_Grad-CAM.pytorch_readme_6dc0573bbd8d.jpg)  | ![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fyizt_Grad-CAM.pytorch_readme_b0feb5beed6c.jpg)  | ![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fyizt_Grad-CAM.pytorch_readme_be0cf9eab767.jpg)  |\n| GradCAM-cls_tower.9   | ![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fyizt_Grad-CAM.pytorch_readme_67445c2e23ca.jpg)  | ![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fyizt_Grad-CAM.pytorch_readme_87ce8658a408.jpg)  | ![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fyizt_Grad-CAM.pytorch_readme_1301deef7143.jpg)  | ![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fyizt_Grad-CAM.pytorch_readme_d64892c4ac64.jpg)  |\n| GradCAM-cls_tower.10   | ![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fyizt_Grad-CAM.pytorch_readme_1733b3bbc92a.jpg)  | ![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fyizt_Grad-CAM.pytorch_readme_84a79934dc64.jpg)  | ![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fyizt_Grad-CAM.pytorch_readme_7dedf38b519c.jpg)  | ![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fyizt_Grad-CAM.pytorch_readme_2426bcd83a7b.jpg)  |\n| GradCAM-cls_tower.11   | ![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fyizt_Grad-CAM.pytorch_readme_4e980f68728e.jpg)  | 
![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fyizt_Grad-CAM.pytorch_readme_5e3056d0b778.jpg)  | ![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fyizt_Grad-CAM.pytorch_readme_005d35992ee5.jpg)  | ![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fyizt_Grad-CAM.pytorch_readme_739072b0b4b0.jpg)  |\n| GradCAM++-cls_tower.0 | ![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fyizt_Grad-CAM.pytorch_readme_7131d8119a89.jpg) | ![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fyizt_Grad-CAM.pytorch_readme_1eb6d8ffa0ac.jpg) | ![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fyizt_Grad-CAM.pytorch_readme_1871be5f7914.jpg) | ![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fyizt_Grad-CAM.pytorch_readme_683d39474049.jpg) |\n| GradCAM++-cls_tower.1 | ![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fyizt_Grad-CAM.pytorch_readme_0cc2edd04958.jpg) | ![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fyizt_Grad-CAM.pytorch_readme_3e90d23ead9a.jpg) | ![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fyizt_Grad-CAM.pytorch_readme_d9e7f7437715.jpg) | ![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fyizt_Grad-CAM.pytorch_readme_cb54c99ff767.jpg) |\n| GradCAM++-cls_tower.2 | ![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fyizt_Grad-CAM.pytorch_readme_af850227cba9.jpg) | ![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fyizt_Grad-CAM.pytorch_readme_14e0e7e9d7cf.jpg) | ![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fyizt_Grad-CAM.pytorch_readme_71c17fdcefe0.jpg) | ![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fyizt_Grad-CAM.pytorch_readme_1f4daa50ba71.jpg) |\n| GradCAM++-cls_tower.3 | ![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fyizt_Grad-CAM.pytorch_readme_e5962073a6a0.jpg) | ![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fyizt_Grad-CAM.pytorch_readme_10c8f04e51ef.jpg) | 
![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fyizt_Grad-CAM.pytorch_readme_4d984edcc1c7.jpg) | ![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fyizt_Grad-CAM.pytorch_readme_a84e594288b9.jpg) |\n| GradCAM++-cls_tower.4 | ![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fyizt_Grad-CAM.pytorch_readme_bd43ad65fafc.jpg) | ![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fyizt_Grad-CAM.pytorch_readme_452c702570cb.jpg) | ![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fyizt_Grad-CAM.pytorch_readme_5cafac0e3a12.jpg) | ![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fyizt_Grad-CAM.pytorch_readme_9ebc27d5ef8f.jpg) |\n| GradCAM++-cls_tower.5 | ![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fyizt_Grad-CAM.pytorch_readme_012848e3c7a4.jpg) | ![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fyizt_Grad-CAM.pytorch_readme_aaee06491f58.jpg) | ![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fyizt_Grad-CAM.pytorch_readme_9d52d2f140c1.jpg) | ![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fyizt_Grad-CAM.pytorch_readme_64da79adc1ec.jpg) |\n| GradCAM++-cls_tower.6 | ![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fyizt_Grad-CAM.pytorch_readme_729191eb43d3.jpg) | ![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fyizt_Grad-CAM.pytorch_readme_6d657e62df6a.jpg) | ![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fyizt_Grad-CAM.pytorch_readme_b5b70d709434.jpg) | ![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fyizt_Grad-CAM.pytorch_readme_ffba46642fec.jpg) |\n| GradCAM++-cls_tower.7 | ![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fyizt_Grad-CAM.pytorch_readme_d2c0facb6a06.jpg) | ![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fyizt_Grad-CAM.pytorch_readme_8a9428e3ce2b.jpg) | ![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fyizt_Grad-CAM.pytorch_readme_005bc738b38e.jpg) | 
![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fyizt_Grad-CAM.pytorch_readme_41aefd47a46a.jpg) |\n| GradCAM++-cls_tower.8 | ![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fyizt_Grad-CAM.pytorch_readme_d10f542bbdcb.jpg) | ![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fyizt_Grad-CAM.pytorch_readme_4ab011aad7f0.jpg) | ![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fyizt_Grad-CAM.pytorch_readme_b2d4c35aa831.jpg) | ![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fyizt_Grad-CAM.pytorch_readme_c7fc44de762d.jpg) |\n| GradCAM++-cls_tower.9 | ![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fyizt_Grad-CAM.pytorch_readme_206610f18c67.jpg) | ![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fyizt_Grad-CAM.pytorch_readme_2da6f1ed4568.jpg) | ![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fyizt_Grad-CAM.pytorch_readme_47e185f9e2fe.jpg) | ![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fyizt_Grad-CAM.pytorch_readme_50b5cb5a6587.jpg) |\n| GradCAM++-cls_tower.10 | ![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fyizt_Grad-CAM.pytorch_readme_6055e6fc8d1d.jpg) | ![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fyizt_Grad-CAM.pytorch_readme_41c3fa6dd9d5.jpg) | ![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fyizt_Grad-CAM.pytorch_readme_419f19ce50fe.jpg) | ![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fyizt_Grad-CAM.pytorch_readme_3ce4acb185b7.jpg) |\n| GradCAM++-cls_tower.11 | ![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fyizt_Grad-CAM.pytorch_readme_1b553ef7bd19.jpg) | ![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fyizt_Grad-CAM.pytorch_readme_a7fdc11de995.jpg) | ![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fyizt_Grad-CAM.pytorch_readme_57ce77a14efd.jpg) | ![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fyizt_Grad-CAM.pytorch_readme_444d21f1166c.jpg) 
|\n\n\nNote: Grad-CAM maps are generated for each of the 12 layers proposal_generator.fcos_head.cls_tower.0~cls_tower.11; these 12 layers correspond to the feature maps of the 4 convolutional layers in the FCOS classification subnet, the feature maps after group normalization, and the feature maps after the ReLU activations\n\n\n\n### Summary\n\nNo summary this time; let the images speak for themselves!\n\n# Grad-CAM.pytorch\n\nA PyTorch implementation of [Grad-CAM: Visual Explanations from Deep Networks via Gradient-based Localization](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1610.02391) and\n\n[Grad-CAM++: Improved Visual Explanations for Deep Convolutional Networks](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1710.11063.pdf)\n\n1. [Dependencies](#dependencies)\n2. [Usage](#usage)\n3. [Sample Analysis](#sample-analysis)\u003Cbr>\n   3.1 [Single Object](#single-object)\u003Cbr>\n   3.2 [Multiple Objects](#multiple-objects)\u003Cbr>\n4. [Summary](#summary)\n5. [Object Detection - Faster R-CNN](#object-detection---faster-r-cnn)\u003Cbr>\n   5.1 [detectron2 Installation](#detectron2-installation)\u003Cbr>\n   5.2 [Testing](#testing)\u003Cbr>\n   5.3 [Grad-CAM Results](#grad-cam-results)\u003Cbr>\n   5.4 [Summary](#summary)\n6. [Object Detection - RetinaNet](#object-detection---retinanet)\u003Cbr>\n   6.1 [detectron2 Installation](#detectron2-installation)\u003Cbr>\n   6.2 [Testing](#testing)\u003Cbr>\n   6.3 [Grad-CAM Results](#grad-cam-results)\u003Cbr>\n   6.4 [Summary](#summary)\n7. [Object Detection - FCOS](#object-detection---fcos)\u003Cbr>\n   7.1 [AdelaiDet Installation](#adelaidet-installation)\u003Cbr>\n   7.2 [Testing](#testing)\u003Cbr>\n   7.3 [Grad-CAM Results](#grad-cam-results)\u003Cbr>\n   7.4 [Summary](#summary)\n\n**Grad-CAM overall architecture**\n\n![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fyizt_Grad-CAM.pytorch_readme_b72e04ad4515.jpg)\n\n\n\n**Grad-CAM++ vs. Grad-CAM**\n\n![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fyizt_Grad-CAM.pytorch_readme_b5b2d046b772.png)\n\n\n\n## Dependencies\n\n```wiki\npython 3.6.x\npytorch 1.0.1+\ntorchvision 0.2.2\nopencv-python\nmatplotlib\nscikit-image\nnumpy\n```\n\n\n\n## Usage\n\n```shell\npython main.py --image-path examples\u002Fpic1.jpg \\\n               --network densenet121 \\\n               --weight-path \u002Fopt\u002Fpretrained_model\u002Fdensenet121-a639ec97.pth\n```\n\n**Parameters**:\n\n- image-path: path of the image to visualize (optional; default `https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fyizt_Grad-CAM.pytorch_readme_ccf4bb08e27b.jpg`)\n\n- network: network name (optional; default `resnet50`)\n- weight-path: path to the network's pre-trained weights (optional; by default the matching pre-trained weights are downloaded from the official PyTorch site)\n- layer-name: layer used for Grad-CAM (optional; default is the last convolutional layer)\n- class-id: class id used for the Grad-CAM and Guided Back Propagation backward passes (optional; default is the class predicted by the network)\n- output-dir: directory where the visualization results are saved (optional; default is the `results` directory)\n\n\n\n## Sample Analysis\n\n### Single Object\n\n**Original image**\n\n![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fyizt_Grad-CAM.pytorch_readme_ccf4bb08e27b.jpg)\n\n**Results**\n\n| network      | HeatMap                                   | Grad-CAM                              | HeatMap++                                   | Grad-CAM++                              | Guided backpropagation               | Guided Grad-CAM                          |\n| ------------ | ----------------------------------------- | ------------------------------------- | ------------------------------------------- | --------------------------------------- | ------------------------------------ | ---------------------------------------- |\n| vgg16        | ![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fyizt_Grad-CAM.pytorch_readme_a063ca18eca0.jpg) | ![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fyizt_Grad-CAM.pytorch_readme_35bc39ce39c3.jpg) | ![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fyizt_Grad-CAM.pytorch_readme_902127b751cd.jpg) | ![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fyizt_Grad-CAM.pytorch_readme_371a7d0c9b9a.jpg) | ![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fyizt_Grad-CAM.pytorch_readme_00e09ca58135.jpg) | ![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fyizt_Grad-CAM.pytorch_readme_e5bd8a66bc90.jpg) |\n| vgg19        | ![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fyizt_Grad-CAM.pytorch_readme_b7d31d05810d.jpg) | ![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fyizt_Grad-CAM.pytorch_readme_aaf8f8b9ac3c.jpg) | ![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fyizt_Grad-CAM.pytorch_readme_114c5c2ac1ae.jpg) | 
![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fyizt_Grad-CAM.pytorch_readme_9c6832c3fe50.jpg)       | ![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fyizt_Grad-CAM.pytorch_readme_27c3f5894007.jpg)       | ![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fyizt_Grad-CAM.pytorch_readme_d66ba77ef4da.jpg)       |\n| resnet50     | ![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fyizt_Grad-CAM.pytorch_readme_a64e0759dbad.jpg)    | ![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fyizt_Grad-CAM.pytorch_readme_ac93bdd96c90.jpg)    | ![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fyizt_Grad-CAM.pytorch_readme_982c076ebcd7.jpg)    | ![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fyizt_Grad-CAM.pytorch_readme_284ffc45d712.jpg)    | ![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fyizt_Grad-CAM.pytorch_readme_f9a5e6bfdffb.jpg)    | ![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fyizt_Grad-CAM.pytorch_readme_39d6da85db4b.jpg)    |\n| resnet101    | ![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fyizt_Grad-CAM.pytorch_readme_caac4fcac2b3.jpg)   | ![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fyizt_Grad-CAM.pytorch_readme_a91e79a34739.jpg)   | ![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fyizt_Grad-CAM.pytorch_readme_667b78f22ac3.jpg)   | ![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fyizt_Grad-CAM.pytorch_readme_51e6e30173b7.jpg)    | ![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fyizt_Grad-CAM.pytorch_readme_5c588bff88fa.jpg)   | ![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fyizt_Grad-CAM.pytorch_readme_337092a927f1.jpg)   |\n| densenet121  | ![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fyizt_Grad-CAM.pytorch_readme_4098137cbea0.jpg) | ![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fyizt_Grad-CAM.pytorch_readme_dd271bf3cc32.jpg) | 
![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fyizt_Grad-CAM.pytorch_readme_98833e1a857f.jpg) | ![](results\u002Fpic1-densenet121-cam++.jpg) | ![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fyizt_Grad-CAM.pytorch_readme_55166bd0d8a2.jpg) | ![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fyizt_Grad-CAM.pytorch_readme_4a410fa680ba.jpg) |\n| inception_v3 | ![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fyizt_Grad-CAM.pytorch_readme_466c00209926.jpg)   | ![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fyizt_Grad-CAM.pytorch_readme_dc6453a48313.jpg)   | ![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fyizt_Grad-CAM.pytorch_readme_da873f1a57ec.jpg)   | ![](results\u002Fpic1-inception-cam++.jpg)   | ![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fyizt_Grad-CAM.pytorch_readme_d3562b3870ae.jpg)   | ![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fyizt_Grad-CAM.pytorch_readme_77e9a10c64cb.jpg)   |\n| mobilenet_v2 | ![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fyizt_Grad-CAM.pytorch_readme_30d8868e52d8.jpg)   | ![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fyizt_Grad-CAM.pytorch_readme_c806c11ac2dd.jpg)   | ![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fyizt_Grad-CAM.pytorch_readme_6f61bc45e347.jpg)   | ![](results\u002Fpic1-mobilenet_v2-cam++.jpg)   | ![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fyizt_Grad-CAM.pytorch_readme_9f3515855146.jpg)   | ![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fyizt_Grad-CAM.pytorch_readme_9b0f243e6094.jpg)   |\n| shufflenet_v2 | ![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fyizt_Grad-CAM.pytorch_readme_cc2e44ab484d.jpg)   | ![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fyizt_Grad-CAM.pytorch_readme_d4a9ae6eb016.jpg)   | ![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fyizt_Grad-CAM.pytorch_readme_1e0ab3327d29.jpg)   | ![](results\u002Fpic1-shufflenet_v2-cam++.jpg)   | 
![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fyizt_Grad-CAM.pytorch_readme_0ce0494144ee.jpg)   | ![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fyizt_Grad-CAM.pytorch_readme_7dbc2f33c5dd.jpg)   |\n\n### Multiple Objects\n\nWhen an image contains multiple objects, Grad-CAM++ covers them more completely than Grad-CAM; this is Grad-CAM++'s main advantage.\n\n**Original image**\n\n![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fyizt_Grad-CAM.pytorch_readme_b0d01cbbeb15.jpg)\n\n**Results**\n\n| network      | HeatMap                                   | Grad-CAM                              | HeatMap++                                   | Grad-CAM++                              | Guided backpropagation     | Guided Grad-CAM                          |\n| ------------ | ----------------------------------------- | ------------------------------------- | ------------------------------------------- | --------------------------------------- | -------------------------- | ---------------------------------------- |\n| vgg16        | ![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fyizt_Grad-CAM.pytorch_readme_83a9a7484a40.jpg) | ![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fyizt_Grad-CAM.pytorch_readme_c0b067cf1f15.jpg) | ![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fyizt_Grad-CAM.pytorch_readme_165579e7cc19.jpg) | ![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fyizt_Grad-CAM.pytorch_readme_6a2c65fb29f5.jpg) | ![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fyizt_Grad-CAM.pytorch_readme_076d910716a1.jpg) | ![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fyizt_Grad-CAM.pytorch_readme_aa1ecad7ee2c.jpg) |\n| vgg19        | ![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fyizt_Grad-CAM.pytorch_readme_aa9803c09d97.jpg) | ![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fyizt_Grad-CAM.pytorch_readme_09a509837852.jpg) | ![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fyizt_Grad-CAM.pytorch_readme_74d1fad5b6f3.jpg) | 
![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fyizt_Grad-CAM.pytorch_readme_f1e1caf1c32b.jpg)       | ![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fyizt_Grad-CAM.pytorch_readme_db5a06dda3c0.jpg)       | ![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fyizt_Grad-CAM.pytorch_readme_70673834b548.jpg)       |\n| resnet50     | ![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fyizt_Grad-CAM.pytorch_readme_8b1fc1fcb6d7.jpg)    | ![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fyizt_Grad-CAM.pytorch_readme_cbfa25656b68.jpg)    | ![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fyizt_Grad-CAM.pytorch_readme_a095d7d75308.jpg)    | ![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fyizt_Grad-CAM.pytorch_readme_c3cfaaaea218.jpg)    | ![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fyizt_Grad-CAM.pytorch_readme_551c071cee34.jpg)    | ![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fyizt_Grad-CAM.pytorch_readme_951973719a68.jpg)    |\n| resnet101    | ![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fyizt_Grad-CAM.pytorch_readme_9c5b9d653b66.jpg)   | ![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fyizt_Grad-CAM.pytorch_readme_d14d9fb0d606.jpg)   | ![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fyizt_Grad-CAM.pytorch_readme_faa812ee3524.jpg)   | ![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fyizt_Grad-CAM.pytorch_readme_c3cfaaaea218.jpg)    | ![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fyizt_Grad-CAM.pytorch_readme_1677f80ff6f7.jpg)   | ![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fyizt_Grad-CAM.pytorch_readme_46b0c4d6d1fa.jpg)   |\n| densenet121  | ![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fyizt_Grad-CAM.pytorch_readme_3185eb48845a.jpg) | ![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fyizt_Grad-CAM.pytorch_readme_bb4d9913ff21.jpg) | 
![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fyizt_Grad-CAM.pytorch_readme_37a8a06ce8c3.jpg) | ![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fyizt_Grad-CAM.pytorch_readme_572c0d492383.jpg) | ![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fyizt_Grad-CAM.pytorch_readme_b1af8c0fe69c.jpg) | ![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fyizt_Grad-CAM.pytorch_readme_a25b5946baa2.jpg) |\n| inception_v3 | ![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fyizt_Grad-CAM.pytorch_readme_9729bcd6fa31.jpg)   | ![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fyizt_Grad-CAM.pytorch_readme_4e59bb3c422a.jpg)   | ![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fyizt_Grad-CAM.pytorch_readme_d32e1d9903b8.jpg)   | ![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fyizt_Grad-CAM.pytorch_readme_db07a2e0a702.jpg)   | ![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fyizt_Grad-CAM.pytorch_readme_7b48b8331c7d.jpg)   | ![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fyizt_Grad-CAM.pytorch_readme_8ac8c8ac30a1.jpg)   |\n| mobilenet_v2 | ![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fyizt_Grad-CAM.pytorch_readme_4e6a468e4d87.jpg)   | ![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fyizt_Grad-CAM.pytorch_readme_43c9e794c0f6.jpg)   | ![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fyizt_Grad-CAM.pytorch_readme_a7b2f90da5ae.jpg)   | ![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fyizt_Grad-CAM.pytorch_readme_f1245ff445a0.jpg)   | ![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fyizt_Grad-CAM.pytorch_readme_f9ce85219cbe.jpg)   | ![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fyizt_Grad-CAM.pytorch_readme_ff17db4ad392.jpg)   |\n| shufflenet_v2 | ![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fyizt_Grad-CAM.pytorch_readme_aa5320e717aa.jpg)   | 
![](https://oss.gittoolsai.com/images/yizt_Grad-CAM.pytorch_readme_0dfea55ebf64.jpg) | ![](https://oss.gittoolsai.com/images/yizt_Grad-CAM.pytorch_readme_2f3c20a02286.jpg) | ![](https://oss.gittoolsai.com/images/yizt_Grad-CAM.pytorch_readme_5dc5198d0301.jpg) | ![](https://oss.gittoolsai.com/images/yizt_Grad-CAM.pytorch_readme_47f4c0d1cba4.jpg) | ![](https://oss.gittoolsai.com/images/yizt_Grad-CAM.pytorch_readme_623a4ae25753.jpg) |

## Summary

- Grad-CAM on the VGG models does not fully cover the target object; ResNet and DenseNet cover it more completely, DenseNet in particular. This is indirect evidence that, in terms of generalization ability and robustness, DenseNet > ResNet > VGG.
- Compared with Grad-CAM, Grad-CAM++ also covers the target more completely, especially when several instances of the same class are present: Grad-CAM may cover only some of them, while Grad-CAM++ covers essentially all of them. This difference is only pronounced for VGG, however; with DenseNet, plain Grad-CAM already covers essentially all targets.
- Grad-CAM on MobileNet V2 likewise gives very complete coverage.
- The guided backpropagation maps of Inception V3 and MobileNet V2 have rather blurry contours, while those of ShuffleNet V2 are comparatively sharp.

## Object Detection: Faster R-CNN

[SHAOSIHAN](https://github.com/SHAOSIHAN) asked how to use Grad-CAM for object detection. The Grad-CAM and Grad-CAM++ papers do not describe how to generate CAMs for detectors, for what I believe are two main reasons:

a) Detection differs from classification. A classification network has a single classification loss, its last layer has the same structure across architectures (the number of neurons is determined by the number of classes), and the final output is a single distribution of class scores. Detector outputs are far more complex: networks such as Faster R-CNN, CornerNet, CenterNet, and FCOS model the problem differently and their outputs mean different things, so there is no single, unified way to generate a Grad-CAM map.

b) Classification is weakly supervised with respect to localization: the CAM shows which spatial locations the network attends to when predicting, i.e. "where it is looking", which has real analytical value. Detection is strongly supervised: the predicted bounding boxes already state directly "where to look".

Here we use the Faster R-CNN network from Detectron2 as an example of how to generate Grad-CAM maps. The core idea: take the bounding box with the highest predicted score, backpropagate the gradient of that box's class score onto the feature map of the proposal box it came from, and generate the CAM from that feature map.

### Installing Detectron2

a) Download

```shell
git clone https://github.com/facebookresearch/detectron2.git
```

b) Modify the `fast_rcnn_inference_single_image` function in `detectron2/modeling/roi_heads/fast_rcnn.py`, mainly to add an index tensor that records which proposal box each high-scoring prediction came from. The modified `fast_rcnn_inference_single_image` is:

```python
def fast_rcnn_inference_single_image(
        boxes, scores, image_shape, score_thresh, nms_thresh,
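        # NOTE: the signature is unchanged from stock Detectron2; the body below
        # additionally builds an `indices` tensor and threads it through score
        # filtering and NMS, so each kept detection can be traced back to the
        # proposal (and hence the proposal feature map) that produced it.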
        topk_per_image
):
    """
    Single-image inference. Return bounding-box detection results by thresholding
    on scores and applying non-maximum suppression (NMS).

    Args:
        Same as `fast_rcnn_inference`, but with boxes, scores, and image shape
        for a single image.

    Returns:
        Same as `fast_rcnn_inference`, but for only one image.
    """
    valid_mask = torch.isfinite(boxes).all(dim=1) & torch.isfinite(scores).all(dim=1)
    # Track, for every row of scores, the index of the proposal it came from
    indices = torch.arange(start=0, end=scores.shape[0], dtype=torch.int64)
    indices = indices.expand((scores.shape[1], scores.shape[0])).T
    if not valid_mask.all():
        boxes = boxes[valid_mask]
        scores = scores[valid_mask]
        indices = indices[valid_mask]
    scores = scores[:, :-1]
    indices = indices[:, :-1]

    num_bbox_reg_classes = boxes.shape[1] // 4
    # Convert to Boxes to use the `clip` function ...
    boxes = Boxes(boxes.reshape(-1, 4))
    boxes.clip(image_shape)
    boxes = boxes.tensor.view(-1, num_bbox_reg_classes, 4)  # R x C x 4

    # Filter results based on detection scores
    filter_mask = scores > score_thresh  # R x K
    # R' x 2. First column contains indices of the R predictions;
    # second column contains indices of the classes.
    filter_inds = filter_mask.nonzero()
    if num_bbox_reg_classes == 1:
        boxes = boxes[filter_inds[:, 0], 0]
    else:
        boxes = boxes[filter_mask]

    scores = scores[filter_mask]
    indices = indices[filter_mask]
    # Apply per-class NMS
    keep = batched_nms(boxes, scores, filter_inds[:, 1], nms_thresh)
    if topk_per_image >= 0:
        keep = keep[:topk_per_image]
    boxes, scores, filter_inds = boxes[keep], scores[keep], filter_inds[keep]
    indices = indices[keep]

    result = Instances(image_shape)
    result.pred_boxes = Boxes(boxes)
    result.scores = scores
    result.pred_classes = filter_inds[:, 1]
    result.indices = indices
    return result, filter_inds[:, 0]
```

c) Install. If you run into problems, see [detectron2](https://github.com/facebookresearch/detectron2); installation differs between operating systems.

```shell
cd detectron2
pip install -e .
```

### Test

a) Download the pretrained model

```shell
wget \
https://dl.fbaipublicfiles.com/detectron2/PascalVOC-Detection/faster_rcnn_R_50_C4/142202221/model_final_b1acc2.pkl
```

b) Test Grad-CAM image generation

Run the following command from the root of this project:

```shell
export KMP_DUPLICATE_LIB_OK=TRUE
python detection/demo.py --config-file detection/faster_rcnn_R_50_C4.yaml \
--input https://oss.gittoolsai.com/images/yizt_Grad-CAM.pytorch_readme_ccf4bb08e27b.jpg \
--opts MODEL.WEIGHTS /Users/yizuotian/pretrained_model/model_final_b1acc2.pkl MODEL.DEVICE cpu
```

### Grad-CAM results

| Original image | Detected box | Grad-CAM heatmap | Grad-CAM++ heatmap | Predicted class |
| -------------- | ------------ | ---------------- | ------------------ | --------------- |
| ![](https://oss.gittoolsai.com/images/yizt_Grad-CAM.pytorch_readme_ccf4bb08e27b.jpg) | ![](https://oss.gittoolsai.com/images/yizt_Grad-CAM.pytorch_readme_eed228f7b2b3.jpg) | ![](https://oss.gittoolsai.com/images/yizt_Grad-CAM.pytorch_readme_002173aa96d5.jpg) | ![](https://oss.gittoolsai.com/images/yizt_Grad-CAM.pytorch_readme_c0d0a8f5dae0.jpg) | Dog |
| ![](https://oss.gittoolsai.com/images/yizt_Grad-CAM.pytorch_readme_9225ec332111.jpg) | ![](https://oss.gittoolsai.com/images/yizt_Grad-CAM.pytorch_readme_b845da2472f0.jpg) | ![](https://oss.gittoolsai.com/images/yizt_Grad-CAM.pytorch_readme_2525b1547dc3.jpg) | ![](https://oss.gittoolsai.com/images/yizt_Grad-CAM.pytorch_readme_c837c50fdbe1.jpg) | Aeroplane |
| ![](https://oss.gittoolsai.com/images/yizt_Grad-CAM.pytorch_readme_4c54eafc1cd8.jpg) | 
![](https://oss.gittoolsai.com/images/yizt_Grad-CAM.pytorch_readme_8f7b6be23c1c.jpg) | ![](https://oss.gittoolsai.com/images/yizt_Grad-CAM.pytorch_readme_fc96b82a8684.jpg) | ![](https://oss.gittoolsai.com/images/yizt_Grad-CAM.pytorch_readme_a5099ebdcc7c.jpg) | Person |
| ![](https://oss.gittoolsai.com/images/yizt_Grad-CAM.pytorch_readme_dc41be35979b.jpg) | ![](https://oss.gittoolsai.com/images/yizt_Grad-CAM.pytorch_readme_4ecc541bb541.jpg) | ![](https://oss.gittoolsai.com/images/yizt_Grad-CAM.pytorch_readme_424ece8c4799.jpg) | ![](https://oss.gittoolsai.com/images/yizt_Grad-CAM.pytorch_readme_a87b1b87234a.jpg) | Horse |

### Summary

For object detection, Grad-CAM++ is no better than Grad-CAM. A likely explanation is that a predicted box already contains a single object, and Grad-CAM++ only has an advantage over Grad-CAM when multiple objects are present.

## Object Detection: RetinaNet

After Grad-CAM for Faster R-CNN was done, [abhigoku10](https://github.com/abhigoku10) and [wangzyon](https://github.com/wangzyon) asked how to implement Grad-CAM for RetinaNet. RetinaNet's network structure differs from Faster R-CNN's, so CAM generation also differs slightly. The detailed steps:

### Installing Detectron2

a) Download

```shell
git clone https://github.com/facebookresearch/detectron2.git
```

b) Modify the `inference_single_image` function in `detectron2/modeling/meta_arch/retinanet.py`, mainly to add a feature-level index that records which level of the feature map each high-scoring box came from. The modified `inference_single_image` is:

```python
    def inference_single_image(self, box_cls, box_delta, anchors, image_size):
        """
        Single-image inference. Return bounding-box detection results by thresholding
        on scores and applying non-maximum suppression (NMS).

        Arguments:
            box_cls (list[Tensor]): list of #feature levels.
                Each entry contains a tensor of size (H x W x A, K).
            box_delta (list[Tensor]): Same shape as 'box_cls' except that K becomes 4.
            anchors (list[Boxes]): list of #feature levels. Each entry contains
                a Boxes object, which contains all the anchors for that
                image in that feature level.
            image_size (tuple(H, W)): a tuple of the image height and width.

        Returns:
            Same as `inference`, but for only one image.
        """
        boxes_all = []
        scores_all = []
        class_idxs_all = []
        feature_level_all = []

        # Iterate over every feature level
        for i, (box_cls_i, box_reg_i, anchors_i) in enumerate(zip(box_cls, box_delta, anchors)):
            # (HxWxAxK,)
            box_cls_i = box_cls_i.flatten().sigmoid_()

            # Keep top k top scoring indices only.
            num_topk = min(self.topk_candidates, box_reg_i.size(0))
            # torch.sort is actually faster than .topk (at least on GPUs)
            predicted_prob, topk_idxs = box_cls_i.sort(descending=True)
            predicted_prob = predicted_prob[:num_topk]
            topk_idxs = topk_idxs[:num_topk]

            # filter out the proposals with low confidence score
            keep_idxs = predicted_prob > self.score_threshold
            predicted_prob = predicted_prob[keep_idxs]
            topk_idxs = topk_idxs[keep_idxs]

            anchor_idxs = topk_idxs // self.num_classes
            classes_idxs = topk_idxs % self.num_classes

            box_reg_i = box_reg_i[anchor_idxs]
            anchors_i = anchors_i[anchor_idxs]
            # predict boxes
            predicted_boxes = self.box2box_transform.apply_deltas(box_reg_i, anchors_i.tensor)

            boxes_all.append(predicted_boxes)
            scores_all.append(predicted_prob)
            class_idxs_all.append(classes_idxs)
            # Record which feature level each candidate came from
            feature_level_all.append(torch.ones_like(classes_idxs) * i)

        boxes_all, scores_all, class_idxs_all, feature_level_all = [
            cat(x) for x in [boxes_all, scores_all, class_idxs_all, feature_level_all]
        ]
        keep = batched_nms(boxes_all, scores_all, class_idxs_all, self.nms_threshold)
        keep = keep[: self.max_detections_per_image]

        result = Instances(image_size)
        result.pred_boxes = Boxes(boxes_all[keep])
        result.scores = scores_all[keep]
        result.pred_classes = class_idxs_all[keep]
        result.feature_levels = feature_level_all[keep]
        return result
```

c) Add a `predict` function to `detectron2/modeling/meta_arch/retinanet.py`, as follows:

```python
    def predict(self, batched_inputs):
        """
        Args:
            batched_inputs: a list, batched outputs of :class:`DatasetMapper` .
                Each item in the list contains the inputs for one image.
                For now, each item in the list is a dict that contains:

                * image: Tensor, image in (C, H, W) format.
                * instances: Instances

                Other information that's included in the original dicts, such as:

                * "height", "width" (int): the output resolution of the model, used in inference.
                  See :meth:`postprocess` for details.
        Returns:
            list[dict]: one dict per input image, each holding the post-processed
                detection results under the "instances" key.
        """
        images = self.preprocess_image(batched_inputs)

        features = self.backbone(images.tensor)
        features = [features[f] for f in self.in_features]
        box_cls, box_delta = self.head(features)
        anchors = self.anchor_generator(features)

        results = self.inference(box_cls, box_delta, anchors, images.image_sizes)
        processed_results = []
        for results_per_image, input_per_image, image_size in zip(
                results, batched_inputs, images.image_sizes
        ):
            height = input_per_image.get("height", image_size[0])
            width = input_per_image.get("width", image_size[1])
            r = detector_postprocess(results_per_image, height, width)
            processed_results.append({"instances": r})
        return processed_results
```

d) Install. If you run into problems, see [detectron2](https://github.com/facebookresearch/detectron2); installation differs between operating systems.

```shell
cd detectron2
pip install -e .
```

### Test

a) Download the pretrained model

```shell
wget https://dl.fbaipublicfiles.com/detectron2/COCO-Detection/retinanet_R_50_FPN_3x/137849486/model_final_4cafe0.pkl
```

b) Test Grad-CAM image generation

Run the following command from the root of this project:

```shell
export KMP_DUPLICATE_LIB_OK=TRUE
python detection/demo_retinanet.py --config-file detection/retinanet_R_50_FPN_3x.yaml \
      --input https://oss.gittoolsai.com/images/yizt_Grad-CAM.pytorch_readme_ccf4bb08e27b.jpg \
      --layer-name head.cls_subnet.0 \
      --opts MODEL.WEIGHTS /Users/yizuotian/pretrained_model/model_final_4cafe0.pkl MODEL.DEVICE cpu
```

### Grad-CAM results

|                        | Image 1 | Image 2 | Image 3 | Image 4
| ---------------------- | ------- | ------- | ------- | ------- |
| Original image | ![](https://oss.gittoolsai.com/images/yizt_Grad-CAM.pytorch_readme_ccf4bb08e27b.jpg) | ![](https://oss.gittoolsai.com/images/yizt_Grad-CAM.pytorch_readme_9225ec332111.jpg) | ![](https://oss.gittoolsai.com/images/yizt_Grad-CAM.pytorch_readme_4c54eafc1cd8.jpg) | ![](https://oss.gittoolsai.com/images/yizt_Grad-CAM.pytorch_readme_dc41be35979b.jpg) |
| Detected boxes | ![](https://oss.gittoolsai.com/images/yizt_Grad-CAM.pytorch_readme_a2c9a0845dc6.jpg) | ![](https://oss.gittoolsai.com/images/yizt_Grad-CAM.pytorch_readme_97258584d06a.jpg) | ![](https://oss.gittoolsai.com/images/yizt_Grad-CAM.pytorch_readme_b81bd2cefc53.jpg) | ![](https://oss.gittoolsai.com/images/yizt_Grad-CAM.pytorch_readme_c0a5fd37a634.jpg) |
| GradCAM-cls_subnet.0   | ![](https://oss.gittoolsai.com/images/yizt_Grad-CAM.pytorch_readme_a9a5388f8a09.jpg) | ![](https://oss.gittoolsai.com/images/yizt_Grad-CAM.pytorch_readme_4cbb5d8f14bf.jpg) | ![](https://oss.gittoolsai.com/images/yizt_Grad-CAM.pytorch_readme_39d9cfc6d880.jpg) | ![](https://oss.gittoolsai.com/images/yizt_Grad-CAM.pytorch_readme_4db14738fae0.jpg) |
| GradCAM-cls_subnet.1   | ![](https://oss.gittoolsai.com/images/yizt_Grad-CAM.pytorch_readme_01d6e2fe8c77.jpg) 
 | ![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fyizt_Grad-CAM.pytorch_readme_d53df4ae6727.jpg)  | ![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fyizt_Grad-CAM.pytorch_readme_ff11e51caae9.jpg)  | ![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fyizt_Grad-CAM.pytorch_readme_901c6bab4607.jpg)  |\n| GradCAM-cls_subnet.2   | ![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fyizt_Grad-CAM.pytorch_readme_96bb1116266b.jpg)  | ![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fyizt_Grad-CAM.pytorch_readme_cf7d224b1a13.jpg)  | ![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fyizt_Grad-CAM.pytorch_readme_24cd9dd2802d.jpg)  | ![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fyizt_Grad-CAM.pytorch_readme_ccd9d289e38d.jpg)  |\n| GradCAM-cls_subnet.3   | ![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fyizt_Grad-CAM.pytorch_readme_34e5868297a8.jpg)  | ![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fyizt_Grad-CAM.pytorch_readme_05e9a30099dc.jpg)  | ![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fyizt_Grad-CAM.pytorch_readme_df817c58d320.jpg)  | ![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fyizt_Grad-CAM.pytorch_readme_d9965d8afeff.jpg)  |\n| GradCAM-cls_subnet.4   | ![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fyizt_Grad-CAM.pytorch_readme_9d8c4a75951f.jpg)  | ![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fyizt_Grad-CAM.pytorch_readme_118ae38a970a.jpg)  | ![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fyizt_Grad-CAM.pytorch_readme_5faf261e5c50.jpg)  | ![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fyizt_Grad-CAM.pytorch_readme_9b076116f286.jpg)  |\n| GradCAM-cls_subnet.5   | ![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fyizt_Grad-CAM.pytorch_readme_660870b73cfe.jpg)  | ![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fyizt_Grad-CAM.pytorch_readme_7f31b71f41da.jpg)  | 
![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fyizt_Grad-CAM.pytorch_readme_5e4455a915c9.jpg)  | ![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fyizt_Grad-CAM.pytorch_readme_1e843f07aa74.jpg)  |\n| GradCAM-cls_subnet.6   | ![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fyizt_Grad-CAM.pytorch_readme_9e876ff9060f.jpg)  | ![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fyizt_Grad-CAM.pytorch_readme_7462ba4696d0.jpg)  | ![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fyizt_Grad-CAM.pytorch_readme_752c552c7bf1.jpg)  | ![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fyizt_Grad-CAM.pytorch_readme_59fe9ea5aaac.jpg)  |\n| GradCAM-cls_subnet.7   | ![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fyizt_Grad-CAM.pytorch_readme_052747f40d3b.jpg)  | ![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fyizt_Grad-CAM.pytorch_readme_1da0ebcdcec2.jpg)  | ![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fyizt_Grad-CAM.pytorch_readme_79c8b77b2b87.jpg)  | ![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fyizt_Grad-CAM.pytorch_readme_1451e5b37498.jpg)  |\n| GradCAM++-cls_subnet.0 | ![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fyizt_Grad-CAM.pytorch_readme_57add27842bb.jpg) | ![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fyizt_Grad-CAM.pytorch_readme_776ff8115619.jpg) | ![](.\u002Fresults\u002Fpic3-retinanet-head.cls_subnet.0-heatmap++..jpg) | ![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fyizt_Grad-CAM.pytorch_readme_44961a791d41.jpg) |\n| GradCAM++-cls_subnet.1 | ![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fyizt_Grad-CAM.pytorch_readme_15f2fa00c787.jpg) | ![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fyizt_Grad-CAM.pytorch_readme_063207040936.jpg) | ![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fyizt_Grad-CAM.pytorch_readme_2254b6989c4b.jpg) | 
![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fyizt_Grad-CAM.pytorch_readme_376c82f9ce56.jpg) |\n| GradCAM++-cls_subnet.2 | ![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fyizt_Grad-CAM.pytorch_readme_9bec54a616e4.jpg) | ![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fyizt_Grad-CAM.pytorch_readme_01cd8f8232d8.jpg) | ![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fyizt_Grad-CAM.pytorch_readme_cdf277035fa4.jpg) | ![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fyizt_Grad-CAM.pytorch_readme_2d41408f7b89.jpg) |\n| GradCAM++-cls_subnet.3 | ![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fyizt_Grad-CAM.pytorch_readme_cea337d15e98.jpg) | ![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fyizt_Grad-CAM.pytorch_readme_439d5cc49265.jpg) | ![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fyizt_Grad-CAM.pytorch_readme_732a5a5ad913.jpg) | ![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fyizt_Grad-CAM.pytorch_readme_29d06c97016d.jpg) |\n| GradCAM++-cls_subnet.4 | ![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fyizt_Grad-CAM.pytorch_readme_12e92db74a5e.jpg) | ![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fyizt_Grad-CAM.pytorch_readme_e341640aeae7.jpg) | ![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fyizt_Grad-CAM.pytorch_readme_bd807b98ba87.jpg) | ![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fyizt_Grad-CAM.pytorch_readme_e84b90197087.jpg) |\n| GradCAM++-cls_subnet.5 | ![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fyizt_Grad-CAM.pytorch_readme_776a5620da18.jpg) | ![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fyizt_Grad-CAM.pytorch_readme_7d2c3775352c.jpg) | ![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fyizt_Grad-CAM.pytorch_readme_0b5291736d0c.jpg) | ![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fyizt_Grad-CAM.pytorch_readme_8407ae67746c.jpg) |\n| GradCAM++-cls_subnet.6 | 
![](https://oss.gittoolsai.com/images/yizt_Grad-CAM.pytorch_readme_520fc1034359.jpg) | ![](https://oss.gittoolsai.com/images/yizt_Grad-CAM.pytorch_readme_1d8c0f5b4ea5.jpg) | ![](./results/pic3-retinanet-head.cls_subnet.6-heatmap++..jpg) | ![](https://oss.gittoolsai.com/images/yizt_Grad-CAM.pytorch_readme_7094155b39bf.jpg) |
| GradCAM++-cls_subnet.7 | ![](https://oss.gittoolsai.com/images/yizt_Grad-CAM.pytorch_readme_d82b908c438a.jpg) | ![](https://oss.gittoolsai.com/images/yizt_Grad-CAM.pytorch_readme_54c3b4dae2e5.jpg) | ![](https://oss.gittoolsai.com/images/yizt_Grad-CAM.pytorch_readme_b71750d14edc.jpg) | ![](https://oss.gittoolsai.com/images/yizt_Grad-CAM.pytorch_readme_6f0eaa53e9ed.jpg) |

Note: the Grad-CAM maps above are generated for the eight layers head.cls_subnet.0 through head.cls_subnet.7, which correspond to the four convolutional feature maps of the RetinaNet classification subnet and their ReLU-activated feature maps.

### Summary

a) None of the RetinaNet Grad-CAM maps look particularly good; the middle layers head.cls_subnet.2 through head.cls_subnet.4 are comparatively better.

b) I believe the reason RetinaNet performs poorly here is that its final classifier is a convolutional layer with a 3\*3 kernel, so when gradients are backpropagated onto the last convolutional feature map, only 3\*3 units receive gradients. In a classification network, or in the Faster R-CNN classifier, the final classifier is a fully connected layer with a global receptive field, so every unit of the last convolutional feature map receives gradients.

c) As gradients propagate back to shallower feature maps, the number of units with gradients gradually increases; but, as the Grad-CAM paper notes, shallower feature maps carry weaker semantic information, which is why the CAM for head.cls_subnet.0 looks so poor.

## Object Detection: FCOS

After Grad-CAM for Faster R-CNN and RetinaNet was done, [linsy-ai](https://github.com/linsy-ai) asked how to implement Grad-CAM for FCOS. FCOS is broadly similar to RetinaNet, since their overall network structures are alike; here we use the FCOS network from the [AdelaiDet](https://github.com/aim-uofa/AdelaiDet) project. The detailed steps:

### Installing AdelaiDet

a) Download

```shell
git clone https://github.com/aim-uofa/AdelaiDet.git
```

b) Install

```shell
cd AdelaiDet
python setup.py build develop
```

Note: 1. 
AdelaiDet depends on [detectron2](https://github.com/facebookresearch/detectron2.git), so install $\color{red}{detectron2}$ first.

2. FCOS does not support CPU, only GPU; make sure you install and test in a $\color{red}{GPU}$ environment.

### Test

a) Download the pretrained model

```shell
wget https://cloudstor.aarnet.edu.au/plus/s/glqFc13cCoEyHYy/download -O fcos_R_50_1x.pth
```

b) Test Grad-CAM image generation

Run the following command from the root of this project:

```shell
export CUDA_DEVICE_ORDER="PCI_BUS_ID"
export CUDA_VISIBLE_DEVICES="0"
python AdelaiDet/demo_fcos.py --config-file AdelaiDet/R_50_1x.yaml \
  --input https://oss.gittoolsai.com/images/yizt_Grad-CAM.pytorch_readme_ccf4bb08e27b.jpg \
  --layer-name proposal_generator.fcos_head.cls_tower.8 \
  --opts MODEL.WEIGHTS /path/to/fcos_R_50_1x.pth MODEL.DEVICE cuda
```

### Grad-CAM results

|                        | Image 1 | Image 2 | Image 3 | Image 4 |
| ---------------------- | ------- | ------- | ------- | ------- |
| Original image | ![](https://oss.gittoolsai.com/images/yizt_Grad-CAM.pytorch_readme_ccf4bb08e27b.jpg) | ![](https://oss.gittoolsai.com/images/yizt_Grad-CAM.pytorch_readme_9225ec332111.jpg) | ![](https://oss.gittoolsai.com/images/yizt_Grad-CAM.pytorch_readme_4c54eafc1cd8.jpg) | ![](https://oss.gittoolsai.com/images/yizt_Grad-CAM.pytorch_readme_dc41be35979b.jpg) 
| Detected boxes | ![](https://oss.gittoolsai.com/images/yizt_Grad-CAM.pytorch_readme_283631fe864d.jpg) | ![](https://oss.gittoolsai.com/images/yizt_Grad-CAM.pytorch_readme_153f23958476.jpg) | ![](https://oss.gittoolsai.com/images/yizt_Grad-CAM.pytorch_readme_ebfa268e24e1.jpg) | ![](https://oss.gittoolsai.com/images/yizt_Grad-CAM.pytorch_readme_23dac6ad9eae.jpg) |
| GradCAM-cls_tower.0   | ![](https://oss.gittoolsai.com/images/yizt_Grad-CAM.pytorch_readme_e60cc1807049.jpg) | ![](https://oss.gittoolsai.com/images/yizt_Grad-CAM.pytorch_readme_c8e79254876f.jpg) | ![](https://oss.gittoolsai.com/images/yizt_Grad-CAM.pytorch_readme_591cc91e4539.jpg) | ![](https://oss.gittoolsai.com/images/yizt_Grad-CAM.pytorch_readme_5b3f570f3703.jpg) |
| GradCAM-cls_tower.1   | ![](https://oss.gittoolsai.com/images/yizt_Grad-CAM.pytorch_readme_bed96ffc8535.jpg) | ![](https://oss.gittoolsai.com/images/yizt_Grad-CAM.pytorch_readme_281a4ef368c9.jpg) | ![](https://oss.gittoolsai.com/images/yizt_Grad-CAM.pytorch_readme_6db1574f597f.jpg) | ![](https://oss.gittoolsai.com/images/yizt_Grad-CAM.pytorch_readme_474ced3f9030.jpg) |
| GradCAM-cls_tower.2   | ![](https://oss.gittoolsai.com/images/yizt_Grad-CAM.pytorch_readme_11aa585c3d51.jpg) | ![](https://oss.gittoolsai.com/images/yizt_Grad-CAM.pytorch_readme_80cfaba238bb.jpg) | ![](https://oss.gittoolsai.com/images/yizt_Grad-CAM.pytorch_readme_e99e205fad8a.jpg) | ![](https://oss.gittoolsai.com/images/yizt_Grad-CAM.pytorch_readme_7faf6e1ff331.jpg) |
| GradCAM-cls_tower.3   | 
![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fyizt_Grad-CAM.pytorch_readme_b2c4cf9d4f90.jpg)  | ![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fyizt_Grad-CAM.pytorch_readme_3a2cd8eaec6e.jpg)  | ![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fyizt_Grad-CAM.pytorch_readme_3522c1d6c517.jpg)  | ![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fyizt_Grad-CAM.pytorch_readme_ba957072b8e2.jpg)  |\n| GradCAM-cls_tower.4   | ![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fyizt_Grad-CAM.pytorch_readme_b91b3aa82deb.jpg)  | ![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fyizt_Grad-CAM.pytorch_readme_4ae317ac2edf.jpg)  | ![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fyizt_Grad-CAM.pytorch_readme_60d58a0d3a14.jpg)  | ![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fyizt_Grad-CAM.pytorch_readme_cb47e510d94b.jpg)  |\n| GradCAM-cls_tower.5   | ![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fyizt_Grad-CAM.pytorch_readme_895c16671c0b.jpg)  | ![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fyizt_Grad-CAM.pytorch_readme_52a36fd85a04.jpg)  | ![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fyizt_Grad-CAM.pytorch_readme_a5f3e5dbf697.jpg)  | ![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fyizt_Grad-CAM.pytorch_readme_45cc36c5de7e.jpg)  |\n| GradCAM-cls_tower.6   | ![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fyizt_Grad-CAM.pytorch_readme_3ee75608e1f9.jpg)  | ![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fyizt_Grad-CAM.pytorch_readme_e25e90c9ee80.jpg)  | ![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fyizt_Grad-CAM.pytorch_readme_fb2dd685235c.jpg)  | ![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fyizt_Grad-CAM.pytorch_readme_a2ae866d0e21.jpg)  |\n| GradCAM-cls_tower.7   | ![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fyizt_Grad-CAM.pytorch_readme_01f7168ae0a8.jpg)  | 
![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fyizt_Grad-CAM.pytorch_readme_cc35dac1e2c5.jpg)  | ![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fyizt_Grad-CAM.pytorch_readme_e19cb7fb0377.jpg)  | ![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fyizt_Grad-CAM.pytorch_readme_f78deb461a78.jpg)  |\n| GradCAM-cls_tower.8   | ![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fyizt_Grad-CAM.pytorch_readme_843004faedae.jpg)  | ![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fyizt_Grad-CAM.pytorch_readme_6dc0573bbd8d.jpg)  | ![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fyizt_Grad-CAM.pytorch_readme_b0feb5beed6c.jpg)  | ![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fyizt_Grad-CAM.pytorch_readme_be0cf9eab767.jpg)  |\n| GradCAM-cls_tower.9   | ![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fyizt_Grad-CAM.pytorch_readme_67445c2e23ca.jpg)  | ![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fyizt_Grad-CAM.pytorch_readme_87ce8658a408.jpg)  | ![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fyizt_Grad-CAM.pytorch_readme_1301deef7143.jpg)  | ![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fyizt_Grad-CAM.pytorch_readme_d64892c4ac64.jpg)  |\n| GradCAM-cls_tower.10   | ![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fyizt_Grad-CAM.pytorch_readme_1733b3bbc92a.jpg)  | ![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fyizt_Grad-CAM.pytorch_readme_84a79934dc64.jpg)  | ![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fyizt_Grad-CAM.pytorch_readme_7dedf38b519c.jpg)  | ![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fyizt_Grad-CAM.pytorch_readme_2426bcd83a7b.jpg)  |\n| GradCAM-cls_tower.11   | ![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fyizt_Grad-CAM.pytorch_readme_4e980f68728e.jpg)  | ![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fyizt_Grad-CAM.pytorch_readme_5e3056d0b778.jpg)  | 
![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fyizt_Grad-CAM.pytorch_readme_005d35992ee5.jpg)  | ![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fyizt_Grad-CAM.pytorch_readme_739072b0b4b0.jpg)  |\n| GradCAM++-cls_tower.0 | ![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fyizt_Grad-CAM.pytorch_readme_7131d8119a89.jpg) | ![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fyizt_Grad-CAM.pytorch_readme_1eb6d8ffa0ac.jpg) | ![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fyizt_Grad-CAM.pytorch_readme_1871be5f7914.jpg) | ![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fyizt_Grad-CAM.pytorch_readme_683d39474049.jpg) |\n| GradCAM++-cls_tower.1 | ![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fyizt_Grad-CAM.pytorch_readme_0cc2edd04958.jpg) | ![](.\u002Fresults\u002Fpic2-fcos-proposal_generator.fcos_head.cls_tower.1-heatmap++..jpg) | ![](.\u002Fresults\u002Fpic3-fcos-proposal_generator.fcos_head.cls_tower.1-heatmap++..jpg) | ![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fyizt_Grad-CAM.pytorch_readme_cb54c99ff767.jpg) |\n| GradCAM++-cls_tower.2 | ![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fyizt_Grad-CAM.pytorch_readme_af850227cba9.jpg) | ![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fyizt_Grad-CAM.pytorch_readme_14e0e7e9d7cf.jpg) | ![](.\u002Fresults\u002Fpic3-fcos-proposal_generator.fcos_head.cls_tower.2-heatmap++..jpg) | ![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fyizt_Grad-CAM.pytorch_readme_1f4daa50ba71.jpg) |\n| GradCAM++-cls_tower.3 | ![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fyizt_Grad-CAM.pytorch_readme_e5962073a6a0.jpg) | ![](.\u002Fresults\u002Fpic2-fcos-proposal_generator.fcos_head.cls_tower.3-heatmap++..jpg) | ![](.\u002Fresults\u002Fpic3-fcos-proposal_generator.fcos_head.cls_tower.3-heatmap++..jpg) | ![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fyizt_Grad-CAM.pytorch_readme_a84e594288b9.jpg) |\n| GradCAM++-cls_tower.4 | 
![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fyizt_Grad-CAM.pytorch_readme_bd43ad65fafc.jpg) | ![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fyizt_Grad-CAM.pytorch_readme_452c702570cb.jpg) | ![](.\u002Fresults\u002Fpic3-fcos-proposal_generator.fcos_head.cls_tower.4-heatmap++..jpg) | ![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fyizt_Grad-CAM.pytorch_readme_9ebc27d5ef8f.jpg) |\n| GradCAM++-cls_tower.5 | ![](.\u002Fresults\u002Fpic1-fcos-proposal_generator.fcos_head.cls_tower.5-heatmap++..jpg) | ![](.\u002Fresults\u002Fpic2-fcos-proposal_generator.fcos_head.cls_tower.5-heatmap++..jpg) | ![](.\u002Fresults\u002Fpic3-fcos-proposal_generator.fcos_head.cls_tower.5-heatmap++..jpg) | ![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fyizt_Grad-CAM.pytorch_readme_64da79adc1ec.jpg) |\n| GradCAM++-cls_tower.6 | ![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fyizt_Grad-CAM.pytorch_readme_729191eb43d3.jpg) | ![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fyizt_Grad-CAM.pytorch_readme_6d657e62df6a.jpg) | ![](.\u002Fresults\u002Fpic3-fcos-proposal_generator.fcos_head.cls_tower.6-heatmap++..jpg) | ![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fyizt_Grad-CAM.pytorch_readme_ffba46642fec.jpg) |\n| GradCAM++-cls_tower.7 | ![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fyizt_Grad-CAM.pytorch_readme_d2c0facb6a06.jpg) | ![](.\u002Fresults\u002Fpic2-fcos-proposal_generator.fcos_head.cls_tower.7-heatmap++..jpg) | ![](.\u002Fresults\u002Fpic3-fcos-proposal_generator.fcos_head.cls_tower.7-heatmap++..jpg) | ![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fyizt_Grad-CAM.pytorch_readme_41aefd47a46a.jpg) |\n| GradCAM++-cls_tower.8 | ![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fyizt_Grad-CAM.pytorch_readme_d10f542bbdcb.jpg) | ![](.\u002Fresults\u002Fpic2-fcos-proposal_generator.fcos_head.cls_tower.8-heatmap++..jpg) | 
![](.\u002Fresults\u002Fpic3-fcos-proposal_generator.fcos_head.cls_tower.8-heatmap++..jpg) | ![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fyizt_Grad-CAM.pytorch_readme_c7fc44de762d.jpg) |\n| GradCAM++-cls_tower.9 | ![](.\u002Fresults\u002Fpic1-fcos-proposal_generator.fcos_head.cls_tower.9-heatmap++..jpg) | ![](.\u002Fresults\u002Fpic2-fcos-proposal_generator.fcos_head.cls_tower.9-heatmap++..jpg) | ![](.\u002Fresults\u002Fpic3-fcos-proposal_generator.fcos_head.cls_tower.9-heatmap++..jpg) | ![](.\u002Fresults\u002Fpic4-fcos-proposal_generator.fcos_head.cls_tower.9-heatmap++..jpg) |\n| GradCAM++-cls_tower.10 | ![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fyizt_Grad-CAM.pytorch_readme_6055e6fc8d1d.jpg) | ![](.\u002Fresults\u002Fpic2-fcos-proposal_generator.fcos_head.cls_tower.10-heatmap++..jpg) | ![](.\u002Fresults\u002Fpic3-fcos-proposal_generator.fcos_head.cls_tower.10-heatmap++..jpg) | ![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fyizt_Grad-CAM.pytorch_readme_3ce4acb185b7.jpg) |\n| GradCAM++-cls_tower.11 | ![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fyizt_Grad-CAM.pytorch_readme_1b553ef7bd19.jpg) | ![](.\u002Fresults\u002Fpic2-fcos-proposal_generator.fcos_head.cls_tower.11-heatmap++..jpg) | ![](.\u002Fresults\u002Fpic3-fcos-proposal_generator.fcos_head.cls_tower.11-heatmap++..jpg) | ![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fyizt_Grad-CAM.pytorch_readme_444d21f1166c.jpg) |\n\n注：以上分别对 proposal_generator.fcos_head.cls_tower.0～cls_tower.11 共 12 个层生成 Grad-CAM 图。这 12 层分别对应 FCOS 分类子网络的 4 层卷积 feature map、组标准化（GN）后的 feature map 及 ReLU 激活后的 feature map。\n\n### 总结\n\n不总结了，看图效果吧！","# Grad-CAM.pytorch 快速上手指南\n\n本指南帮助中国开发者快速部署并使用 Grad-CAM.pytorch，实现深度学习模型的可视化解释（支持 Grad-CAM 与 Grad-CAM++）。\n\n## 环境准备\n\n在开始之前，请确保您的系统满足以下要求：\n\n*   **操作系统**: Linux \u002F macOS \u002F Windows\n*   **Python 版本**: 3.6.x 或更高\n*   **核心依赖**:\n    *   PyTorch >= 1.0.1\n    *   torchvision >= 0.2.2\n*   **其他库**:\n    *   opencv-python\n    *  
 matplotlib\n    *   scikit-image\n    *   numpy\n\n> **国内加速建议**：安装 Python 依赖时，推荐使用清华或阿里镜像源以提升下载速度。\n> ```bash\n> pip install -r requirements.txt -i https:\u002F\u002Fpypi.tuna.tsinghua.edu.cn\u002Fsimple\n> ```\n\n## 安装步骤\n\n1.  **克隆项目代码**\n    ```bash\n    git clone \u003C项目仓库地址>\n    cd Grad-CAM.pytorch\n    ```\n\n2.  **安装依赖包**\n    若项目中没有提供 `requirements.txt`，也可以直接在命令行用 `pip` 一键安装所需库：\n    ```bash\n    pip install torch torchvision opencv-python matplotlib scikit-image numpy -i https:\u002F\u002Fpypi.tuna.tsinghua.edu.cn\u002Fsimple\n    ```\n\n## 基本使用\n\n运行 `main.py` 即可生成可视化热力图。以下是最简单的使用示例，使用默认的 ResNet50 网络处理示例图片：\n\n```bash\npython main.py --image-path examples\u002Fpic1.jpg \\\n               --network resnet50\n```\n\n### 常用参数说明\n\n您可以根据需求自定义以下参数：\n\n*   `--image-path`: 需要可视化的图像路径（默认：`.\u002Fexamples\u002Fpic1.jpg`）\n*   `--network`: 骨干网络名称，支持 `vgg16`, `resnet50`, `densenet121`, `mobilenet_v2` 等（默认：`resnet50`）\n*   `--weight-path`: 预训练权重文件路径（可选，若不填则自动从官方下载）\n*   `--layer-name`: 指定用于生成 Grad-CAM 的层名（可选，默认使用最后一个卷积层）\n*   `--class-id`: 指定目标类别 ID（可选，默认使用网络预测概率最高的类别）\n*   `--output-dir`: 结果保存目录（默认：`results`）\n\n**高级示例（指定网络和权重）：**\n\n```bash\npython main.py --image-path examples\u002Fpic1.jpg \\\n               --network densenet121 \\\n               --weight-path \u002Fopt\u002Fpretrained_model\u002Fdensenet121-a639ec97.pth\n```\n\n运行完成后，请在 `results` 目录下查看生成的 HeatMap、Grad-CAM 及 Guided Grad-CAM 等可视化结果。","某医疗 AI 研发团队正在开发基于深度学习的肺部 CT 影像辅助诊断系统，急需验证模型是否真正关注病灶区域而非背景噪声。\n\n### 没有 Grad-CAM.pytorch 时\n- **决策过程如“黑盒”**：医生无法理解模型为何将某张片子判定为“肺炎”，只能盲目信任或拒绝结果，难以建立临床信任。\n- **错误归因难排查**：当模型误判时，开发者无法确定是模型学到了错误的特征（如关注到了影像角落的标记文字），还是数据标注本身有误。\n- **模型优化无方向**：在调整网络结构（如从 ResNet50 切换到 DenseNet121）时，缺乏直观依据来判断哪种架构更能精准聚焦病灶。\n- **合规审查受阻**：医疗器械审批需要提供算法的可解释性证明，缺乏可视化证据导致项目无法通过伦理和安全审查。\n\n### 使用 Grad-CAM.pytorch 后\n- **可视化决策依据**：利用 Grad-CAM 生成热力图，清晰叠加显示模型高亮关注的肺部区域，让医生直观看到“模型看到了什么”，显著提升信任度。\n- **快速定位误判根源**：通过对比不同网络（如 VGG16 与 MobileNetV2）的热力图，迅速发现旧模型错误地关注了肋骨纹理而非炎症区域，从而针对性清洗数据。\n- 
**科学选型网络架构**：借助 Grad-CAM++ 对多类目标的精细定位能力，量化评估各主干网络对微小病灶的覆盖准确度，选出最优模型部署。\n- **满足监管合规要求**：直接输出带有热力图解释的诊断报告作为技术文档，有力证明了算法的逻辑合理性，加速产品上市进程。\n\nGrad-CAM.pytorch 将抽象的神经网络梯度转化为直观的视觉证据，彻底打破了深度学习在关键领域的“黑盒”壁垒。","https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fyizt_Grad-CAM.pytorch_e0a2ba6b.png","yizt","mick.yi","https:\u002F\u002Foss.gittoolsai.com\u002Favatars\u002Fyizt_bede3fa4.png","keyword:算法、机器人","zoomlion","ChangSha","csuyzt@163.com",null,"https:\u002F\u002Fgithub.com\u002Fyizt",[86],{"name":87,"color":88,"percentage":89},"Python","#3572A5",100,768,167,"2026-03-12T01:44:49","Apache-2.0","未说明","未说明 (支持 CPU 运行，示例命令中包含 MODEL.DEVICE cpu)",{"notes":97,"python":98,"dependencies":99},"该工具主要实现 Grad-CAM 和 Grad-CAM++ 算法。若需进行目标检测（如 Faster R-CNN）的可视化分析，需额外安装 detectron2 或 AdelaiDet 框架，并修改其源码以记录 proposal 索引。预训练权重默认从官网下载，也可本地指定路径。","3.6.x",[100,101,102,103,104,105],"pytorch>=1.0.1","torchvision>=0.2.2","opencv-python","matplotlib","scikit-image","numpy",[14,13],[108,109,110,111,112,113],"cam","grad-cam","guided-backpropagation","model-interpretability","faster-r-cnn-grad-cam","retinanet-grad-cam","2026-03-27T02:49:30.150509","2026-04-06T05:27:06.440078",[117,122,127,132,137,141],{"id":118,"question_zh":119,"answer_zh":120,"source_url":121},14812,"使用 GuidedBackPropagation 时出现 'NoneType' object is not subscriptable 错误怎么办？","该错误通常是因为输入张量未开启梯度追踪。解决方法是在调用模型前，确保对输入数据执行 `inputs.requires_grad=True`。例如：\n```python\ninputs.requires_grad = True\ngbp = GuidedBackPropagation(model)\nresult = gbp(inputs)\n```\n如果修改了模型结构（如改变卷积层通道数），请确保新层正确替换且权重已初始化。","https:\u002F\u002Fgithub.com\u002Fyizt\u002FGrad-CAM.pytorch\u002Fissues\u002F1",{"id":123,"question_zh":124,"answer_zh":125,"source_url":126},14813,"如何在目标检测模型（如 FCOS、YOLOv3）上实现 Grad-CAM 热力图？","项目已更新支持目标检测模型。对于 YOLOv3 等复杂架构，若使用 `register_forward_hook` 或 `register_backward_hook` 获取梯度全为 0，可能是因为钩子注册在了 Container 模块上。建议直接针对具体的张量（Tensor）使用 `register_hook` 方法。此外，需注意梯度经过 ReLU 后可能全为零的情况，这不代表原始梯度为零。具体实现可参考项目中已更新的 FCOS 
示例代码。","https:\u002F\u002Fgithub.com\u002Fyizt\u002FGrad-CAM.pytorch\u002Fissues\u002F41",{"id":128,"question_zh":129,"answer_zh":130,"source_url":131},14814,"运行代码时出现 AttributeError: 'NoneType' object has no attribute 'zero_' 错误如何解决？","此错误表明 `inputs.grad` 为 None，通常是因为在反向传播前未设置 `inputs.requires_grad=True`，或者在多次迭代中未正确保留计算图。请在输入数据进入模型前添加 `inputs.requires_grad = True`，并确保在计算梯度后才执行 `.zero_()` 操作。如果是在循环中使用，请检查是否在每次迭代前重新设置了 requires_grad。","https:\u002F\u002Fgithub.com\u002Fyizt\u002FGrad-CAM.pytorch\u002Fissues\u002F27",{"id":133,"question_zh":134,"answer_zh":135,"source_url":136},14815,"Grad-CAM 是否支持目标检测任务？如何修改代码以适应检测模型？","是的，Grad-CAM 可以应用于目标检测模型。维护者已在项目中更新了针对目标检测部分的实现代码。用户只需更新工程代码即可使用。对于自定义的检测模型，关键在于选择正确的特征层（通常是最后一个卷积层）并正确传递类别索引或边界框对应的梯度信息。","https:\u002F\u002Fgithub.com\u002Fyizt\u002FGrad-CAM.pytorch\u002Fissues\u002F5",{"id":138,"question_zh":139,"answer_zh":140,"source_url":126},14816,"PyTorch 中 register_forward_hook 和 register_backward_hook 的作用是什么？为什么我的梯度全是 0？","`register_forward_hook` 用于在前向传播时获取指定层的输出特征，`register_backward_hook` 用于在反向传播时获取指定层的梯度。\n- `output_grad`: 损失对当前层输出的梯度。\n- `input_grad`: 损失对当前层输入的梯度（由输出梯度与当前层权重计算得到）。\n如果获取的梯度全为 0，可能是因为：1. 对应位置的前向激活值为负，反向传播经过 ReLU 时梯度被置为 0；2. 钩子注册位置不当（如注册在 Container 模块而非具体算子上）。建议检查网络结构，尝试对具体张量使用 `register_hook`，并确认反向传播路径畅通。",{"id":142,"question_zh":143,"answer_zh":144,"source_url":145},14817,"如何在自定义分类模型中使用 Grad-CAM 生成热力图？","在自定义分类模型中使用 Grad-CAM 的步骤如下：\n1. 实例化 Grad-CAM 类并传入你的模型：`grad_cam = GradCAM(model)`。\n2. 准备输入图像并转换为 Tensor，务必执行 `inputs.requires_grad = True`。\n3. 调用 grad_cam 对象：`mask = grad_cam(inputs, target_class_idx)`，其中 `target_class_idx` 是你要可视化的类别索引。\n4. 将输出的 mask 与原图叠加即可得到热力图。确保模型最后有明确的分类输出层以便提取梯度。","https:\u002F\u002Fgithub.com\u002Fyizt\u002FGrad-CAM.pytorch\u002Fissues\u002F54",[]]