[{"data":1,"prerenderedAt":-1},["ShallowReactive",2],{"similar-oneTaken--awesome_deep_learning_interpretability":3,"tool-oneTaken--awesome_deep_learning_interpretability":64},[4,17,27,35,43,56],{"id":5,"name":6,"github_repo":7,"description_zh":8,"stars":9,"difficulty_score":10,"last_commit_at":11,"category_tags":12,"status":16},3808,"stable-diffusion-webui","AUTOMATIC1111\u002Fstable-diffusion-webui","stable-diffusion-webui 是一个基于 Gradio 构建的网页版操作界面，旨在让用户能够轻松地在本地运行和使用强大的 Stable Diffusion 图像生成模型。它解决了原始模型依赖命令行、操作门槛高且功能分散的痛点，将复杂的 AI 绘图流程整合进一个直观易用的图形化平台。\n\n无论是希望快速上手的普通创作者、需要精细控制画面细节的设计师，还是想要深入探索模型潜力的开发者与研究人员，都能从中获益。其核心亮点在于极高的功能丰富度：不仅支持文生图、图生图、局部重绘（Inpainting）和外绘（Outpainting）等基础模式，还独创了注意力机制调整、提示词矩阵、负向提示词以及“高清修复”等高级功能。此外，它内置了 GFPGAN 和 CodeFormer 等人脸修复工具，支持多种神经网络放大算法，并允许用户通过插件系统无限扩展能力。即使是显存有限的设备，stable-diffusion-webui 也提供了相应的优化选项，让高质量的 AI 艺术创作变得触手可及。",162132,3,"2026-04-05T11:01:52",[13,14,15],"开发框架","图像","Agent","ready",{"id":18,"name":19,"github_repo":20,"description_zh":21,"stars":22,"difficulty_score":23,"last_commit_at":24,"category_tags":25,"status":16},1381,"everything-claude-code","affaan-m\u002Feverything-claude-code","everything-claude-code 是一套专为 AI 编程助手（如 Claude Code、Codex、Cursor 等）打造的高性能优化系统。它不仅仅是一组配置文件，而是一个经过长期实战打磨的完整框架，旨在解决 AI 代理在实际开发中面临的效率低下、记忆丢失、安全隐患及缺乏持续学习能力等核心痛点。\n\n通过引入技能模块化、直觉增强、记忆持久化机制以及内置的安全扫描功能，everything-claude-code 能显著提升 AI 在复杂任务中的表现，帮助开发者构建更稳定、更智能的生产级 AI 代理。其独特的“研究优先”开发理念和针对 Token 消耗的优化策略，使得模型响应更快、成本更低，同时有效防御潜在的攻击向量。\n\n这套工具特别适合软件开发者、AI 研究人员以及希望深度定制 AI 工作流的技术团队使用。无论您是在构建大型代码库，还是需要 AI 协助进行安全审计与自动化测试，everything-claude-code 都能提供强大的底层支持。作为一个曾荣获 Anthropic 黑客大奖的开源项目，它融合了多语言支持与丰富的实战钩子（hooks），让 AI 真正成长为懂上",138956,2,"2026-04-05T11:33:21",[13,15,26],"语言模型",{"id":28,"name":29,"github_repo":30,"description_zh":31,"stars":32,"difficulty_score":23,"last_commit_at":33,"category_tags":34,"status":16},2271,"ComfyUI","Comfy-Org\u002FComfyUI","ComfyUI 是一款功能强大且高度模块化的视觉 AI 引擎，专为设计和执行复杂的 Stable Diffusion 图像生成流程而打造。它摒弃了传统的代码编写模式，采用直观的节点式流程图界面，让用户通过连接不同的功能模块即可构建个性化的生成管线。\n\n这一设计巧妙解决了高级 AI 绘图工作流配置复杂、灵活性不足的痛点。用户无需具备编程背景，也能自由组合模型、调整参数并实时预览效果，轻松实现从基础文生图到多步骤高清修复等各类复杂任务。ComfyUI 拥有极佳的兼容性，不仅支持 Windows、macOS 和 Linux 全平台，还广泛适配 NVIDIA、AMD、Intel 及苹果 Silicon 等多种硬件架构，并率先支持 SDXL、Flux、SD3 等前沿模型。\n\n无论是希望深入探索算法潜力的研究人员和开发者，还是追求极致创作自由度的设计师与资深 AI 绘画爱好者，ComfyUI 都能提供强大的支持。其独特的模块化架构允许社区不断扩展新功能，使其成为当前最灵活、生态最丰富的开源扩散模型工具之一，帮助用户将创意高效转化为现实。",107662,"2026-04-03T11:11:01",[13,14,15],{"id":36,"name":37,"github_repo":38,"description_zh":39,"stars":40,"difficulty_score":23,"last_commit_at":41,"category_tags":42,"status":16},3704,"NextChat","ChatGPTNextWeb\u002FNextChat","NextChat 是一款轻量且极速的 AI 助手，旨在为用户提供流畅、跨平台的大模型交互体验。它完美解决了用户在多设备间切换时难以保持对话连续性，以及面对众多 AI 模型不知如何统一管理的痛点。无论是日常办公、学习辅助还是创意激发，NextChat 都能让用户随时随地通过网页、iOS、Android、Windows、MacOS 或 Linux 端无缝接入智能服务。\n\n这款工具非常适合普通用户、学生、职场人士以及需要私有化部署的企业团队使用。对于开发者而言，它也提供了便捷的自托管方案，支持一键部署到 Vercel 或 Zeabur 等平台。\n\nNextChat 的核心亮点在于其广泛的模型兼容性，原生支持 Claude、DeepSeek、GPT-4 及 Gemini Pro 等主流大模型，让用户在一个界面即可自由切换不同 AI 能力。此外，它还率先支持 MCP（Model Context Protocol）协议，增强了上下文处理能力。针对企业用户，NextChat 提供专业版解决方案，具备品牌定制、细粒度权限控制、内部知识库整合及安全审计等功能，满足公司对数据隐私和个性化管理的高标准要求。",87618,"2026-04-05T07:20:52",[13,26],{"id":44,"name":45,"github_repo":46,"description_zh":47,"stars":48,"difficulty_score":23,"last_commit_at":49,"category_tags":50,"status":16},2268,"ML-For-Beginners","microsoft\u002FML-For-Beginners","ML-For-Beginners 是由微软推出的一套系统化机器学习入门课程，旨在帮助零基础用户轻松掌握经典机器学习知识。这套课程将学习路径规划为 12 周，包含 26 节精炼课程和 52 
道配套测验，内容涵盖从基础概念到实际应用的完整流程，有效解决了初学者面对庞大知识体系时无从下手、缺乏结构化指导的痛点。\n\n无论是希望转型的开发者、需要补充算法背景的研究人员，还是对人工智能充满好奇的普通爱好者，都能从中受益。课程不仅提供了清晰的理论讲解，还强调动手实践，让用户在循序渐进中建立扎实的技能基础。其独特的亮点在于强大的多语言支持，通过自动化机制提供了包括简体中文在内的 50 多种语言版本，极大地降低了全球不同背景用户的学习门槛。此外，项目采用开源协作模式，社区活跃且内容持续更新，确保学习者能获取前沿且准确的技术资讯。如果你正寻找一条清晰、友好且专业的机器学习入门之路，ML-For-Beginners 将是理想的起点。",84991,"2026-04-05T10:45:23",[14,51,52,53,15,54,26,13,55],"数据工具","视频","插件","其他","音频",{"id":57,"name":58,"github_repo":59,"description_zh":60,"stars":61,"difficulty_score":10,"last_commit_at":62,"category_tags":63,"status":16},3128,"ragflow","infiniflow\u002Fragflow","RAGFlow 是一款领先的开源检索增强生成（RAG）引擎，旨在为大语言模型构建更精准、可靠的上下文层。它巧妙地将前沿的 RAG 技术与智能体（Agent）能力相结合，不仅支持从各类文档中高效提取知识，还能让模型基于这些知识进行逻辑推理和任务执行。\n\n在大模型应用中，幻觉问题和知识滞后是常见痛点。RAGFlow 通过深度解析复杂文档结构（如表格、图表及混合排版），显著提升了信息检索的准确度，从而有效减少模型“胡编乱造”的现象，确保回答既有据可依又具备时效性。其内置的智能体机制更进一步，使系统不仅能回答问题，还能自主规划步骤解决复杂问题。\n\n这款工具特别适合开发者、企业技术团队以及 AI 研究人员使用。无论是希望快速搭建私有知识库问答系统，还是致力于探索大模型在垂直领域落地的创新者，都能从中受益。RAGFlow 提供了可视化的工作流编排界面和灵活的 API 接口，既降低了非算法背景用户的上手门槛，也满足了专业开发者对系统深度定制的需求。作为基于 Apache 2.0 协议开源的项目，它正成为连接通用大模型与行业专有知识之间的重要桥梁。",77062,"2026-04-04T04:44:48",[15,14,13,26,54],{"id":65,"github_repo":66,"name":67,"description_en":68,"description_zh":69,"ai_summary_zh":70,"readme_en":71,"readme_zh":72,"quickstart_zh":73,"use_case_zh":74,"hero_image_url":75,"owner_login":76,"owner_name":77,"owner_avatar_url":78,"owner_bio":79,"owner_company":80,"owner_location":81,"owner_email":77,"owner_twitter":77,"owner_website":77,"owner_url":82,"languages":77,"stars":83,"forks":84,"last_commit_at":85,"license":86,"difficulty_score":87,"env_os":88,"env_gpu":89,"env_ram":89,"env_deps":90,"category_tags":93,"github_topics":94,"view_count":23,"oss_zip_url":77,"oss_zip_packed_at":77,"status":16,"created_at":115,"updated_at":116,"faqs":117,"releases":118},2960,"oneTaken\u002Fawesome_deep_learning_interpretability","awesome_deep_learning_interpretability","深度学习近年来关于神经网络模型解释性的相关高引用\u002F顶会论文(附带代码)","awesome_deep_learning_interpretability 是一个专注于深度学习模型可解释性的开源资源库，旨在帮助开发者理解神经网络“黑盒”背后的决策逻辑。随着 AI 在医疗、金融等高风险领域的应用日益广泛，模型透明度成为关键痛点。该工具系统整理了近年来顶会（如 CVPR、NeurIPS、ICLR）中高引用的相关论文，涵盖视觉解释、因果推断、不确定性评估等前沿方向，并附带代码实现链接与 PDF 文献下载渠道，部分资源已整理至云端方便获取。\n\n它主要解决了研究人员和工程师在复现算法、对比方法或寻找灵感时资料分散、难以追踪最新进展的问题。通过按引用量排序和分类展示，用户能快速定位高影响力工作，例如 Score-CAM、ProtoPNet 等经典可视化技术，或关于解释忠实度与敏感性的理论分析。\n\n适合人工智能领域的研究人员、算法工程师及对模型透明性有需求的技术决策者使用。无论是希望提升模型可信度，还是探索可解释性新范式，awesome_deep_learning_interpretability 都提供了一条高效、结构化的学习路径。其持续更新的机制也确保了内容紧跟学术前沿，","awesome_deep_learning_interpretability 是一个专注于深度学习模型可解释性的开源资源库，旨在帮助开发者理解神经网络“黑盒”背后的决策逻辑。随着 AI 在医疗、金融等高风险领域的应用日益广泛，模型透明度成为关键痛点。该工具系统整理了近年来顶会（如 CVPR、NeurIPS、ICLR）中高引用的相关论文，涵盖视觉解释、因果推断、不确定性评估等前沿方向，并附带代码实现链接与 PDF 文献下载渠道，部分资源已整理至云端方便获取。\n\n它主要解决了研究人员和工程师在复现算法、对比方法或寻找灵感时资料分散、难以追踪最新进展的问题。通过按引用量排序和分类展示，用户能快速定位高影响力工作，例如 Score-CAM、ProtoPNet 等经典可视化技术，或关于解释忠实度与敏感性的理论分析。\n\n适合人工智能领域的研究人员、算法工程师及对模型透明性有需求的技术决策者使用。无论是希望提升模型可信度，还是探索可解释性新范式，awesome_deep_learning_interpretability 都提供了一条高效、结构化的学习路径。其持续更新的机制也确保了内容紧跟学术前沿，是深入理解深度学习内部机制的实用指南。","\n\n\n\n# awesome_deep_learning_interpretability\n深度学习近年来关于模型解释性的相关论文。\n\n按引用次数排序可见[引用排序](.\u002Fsort_cite.md)\n\n159篇论文pdf(有2篇需要上scihub找)上传到[腾讯微云](https:\u002F\u002Fshare.weiyun.com\u002F5ddB0EQ)。\n\n不定期更新。\n\n|Year|Publication|Paper|Citation|code|\n|:---:|:---:|:---:|:---:|:---:|\n|2020|CVPR|[Explaining Knowledge Distillation by Quantifying the Knowledge](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2003.03622.pdf)|81|\n|2020|CVPR|[High-frequency Component Helps Explain the Generalization of Convolutional Neural 
Networks](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_CVPR_2020\u002Fpapers\u002FWang_High-Frequency_Component_Helps_Explain_the_Generalization_of_Convolutional_Neural_Networks_CVPR_2020_paper.pdf)|289|\n|2020|CVPRW|[Score-CAM: Score-Weighted Visual Explanations for Convolutional Neural Networks](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_CVPRW_2020\u002Fpapers\u002Fw1\u002FWang_Score-CAM_Score-Weighted_Visual_Explanations_for_Convolutional_Neural_Networks_CVPRW_2020_paper.pdf)|414|[Pytorch](https:\u002F\u002Fgithub.com\u002Fhaofanwang\u002FScore-CAM)\n|2020|ICLR|[Knowledge consistency between neural networks and beyond](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1908.01581.pdf)|28|\n|2020|ICLR|[Interpretable Complex-Valued Neural Networks for Privacy Protection](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1901.09546.pdf)|23|\n|2019|AI|[Explanation in artificial intelligence: Insights from the social sciences](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1706.07269.pdf)|3248|\n|2019|NMI|[Stop Explaining Black Box Machine Learning Models for High Stakes Decisions and Use Interpretable Models Instead](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1811.10154.pdf)|3505|\n|2019|NeurIPS|[Can you trust your model's uncertainty? Evaluating predictive uncertainty under dataset shift](https:\u002F\u002Fpapers.nips.cc\u002Fpaper\u002F9547-can-you-trust-your-models-uncertainty-evaluating-predictive-uncertainty-under-dataset-shift.pdf)|1052|-|\n|2019|NeurIPS|[This looks like that: deep learning for interpretable image recognition](http:\u002F\u002Fpapers.nips.cc\u002Fpaper\u002F9095-this-looks-like-that-deep-learning-for-interpretable-image-recognition.pdf)|665|[Pytorch](https:\u002F\u002Fgithub.com\u002Fcfchen-duke\u002FProtoPNet)|\n|2019|NeurIPS|[A benchmark for interpretability methods in deep neural networks](https:\u002F\u002Fpapers.nips.cc\u002Fpaper\u002F9167-a-benchmark-for-interpretability-methods-in-deep-neural-networks.pdf)|413|\n|2019|NeurIPS|[Full-gradient representation for neural network visualization](http:\u002F\u002Fpapers.nips.cc\u002Fpaper\u002F8666-full-gradient-representation-for-neural-network-visualization.pdf)|155|\n|2019|NeurIPS|[On the (In) fidelity and Sensitivity of Explanations](https:\u002F\u002Fpapers.nips.cc\u002Fpaper\u002F9278-on-the-infidelity-and-sensitivity-of-explanations.pdf)|226|\n|2019|NeurIPS|[Towards Automatic Concept-based Explanations](http:\u002F\u002Fpapers.nips.cc\u002Fpaper\u002F9126-towards-automatic-concept-based-explanations.pdf)|342|[Tensorflow](https:\u002F\u002Fgithub.com\u002Famiratag\u002FACE)|\n|2019|NeurIPS|[CXPlain: Causal explanations for model interpretation under uncertainty](http:\u002F\u002Fpapers.nips.cc\u002Fpaper\u002F9211-cxplain-causal-explanations-for-model-interpretation-under-uncertainty.pdf)|133|\n|2019|CVPR|[Interpreting CNNs via Decision Trees](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_CVPR_2019\u002Fpapers\u002FZhang_Interpreting_CNNs_via_Decision_Trees_CVPR_2019_paper.pdf)|293|\n|2019|CVPR|[From Recognition to Cognition: Visual Commonsense Reasoning](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_CVPR_2019\u002Fpapers\u002FZellers_From_Recognition_to_Cognition_Visual_Commonsense_Reasoning_CVPR_2019_paper.pdf)|544|[Pytorch](https:\u002F\u002Fgithub.com\u002Frowanz\u002Fr2c)|\n|2019|CVPR|[Attention branch network: Learning of attention mechanism for visual 
explanation](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_CVPR_2019\u002Fpapers\u002FFukui_Attention_Branch_Network_Learning_of_Attention_Mechanism_for_Visual_Explanation_CVPR_2019_paper.pdf)|371|\n|2019|CVPR|[Interpretable and fine-grained visual explanations for convolutional neural networks](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_CVPR_2019\u002Fpapers\u002FWagner_Interpretable_and_Fine-Grained_Visual_Explanations_for_Convolutional_Neural_Networks_CVPR_2019_paper.pdf)|116|\n|2019|CVPR|[Learning to Explain with Complemental Examples](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_CVPR_2019\u002Fpapers\u002FKanehira_Learning_to_Explain_With_Complemental_Examples_CVPR_2019_paper.pdf)|36|\n|2019|CVPR|[Revealing Scenes by Inverting Structure from Motion Reconstructions](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_CVPR_2019\u002Fpapers\u002FPittaluga_Revealing_Scenes_by_Inverting_Structure_From_Motion_Reconstructions_CVPR_2019_paper.pdf)|84|[Tensorflow](https:\u002F\u002Fgithub.com\u002Ffrancescopittaluga\u002Finvsfm)|\n|2019|CVPR|[Multimodal Explanations by Predicting Counterfactuality in Videos](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_CVPR_2019\u002Fpapers\u002FKanehira_Multimodal_Explanations_by_Predicting_Counterfactuality_in_Videos_CVPR_2019_paper.pdf)|26|\n|2019|CVPR|[Visualizing the Resilience of Deep Convolutional Network Interpretations](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_CVPRW_2019\u002Fpapers\u002FExplainable%20AI\u002FVasu_Visualizing_the_Resilience_of_Deep_Convolutional_Network_Interpretations_CVPRW_2019_paper.pdf)|2|\n|2019|ICCV|[U-CAM: Visual Explanation using Uncertainty based Class Activation Maps](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_ICCV_2019\u002Fpapers\u002FPatro_U-CAM_Visual_Explanation_Using_Uncertainty_Based_Class_Activation_Maps_ICCV_2019_paper.pdf)|61|\n|2019|ICCV|[Towards Interpretable Face Recognition](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1805.00611.pdf)|66|\n|2019|ICCV|[Taking a HINT: Leveraging Explanations to Make Vision and Language Models More Grounded](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_ICCV_2019\u002Fpapers\u002FSelvaraju_Taking_a_HINT_Leveraging_Explanations_to_Make_Vision_and_Language_ICCV_2019_paper.pdf)|163|\n|2019|ICCV|[Understanding Deep Networks via Extremal Perturbations and Smooth Masks](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_ICCV_2019\u002Fpapers\u002FFong_Understanding_Deep_Networks_via_Extremal_Perturbations_and_Smooth_Masks_ICCV_2019_paper.pdf)|276|[Pytorch](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002FTorchRay)|\n|2019|ICCV|[Explaining Neural Networks Semantically and Quantitatively](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_ICCV_2019\u002Fpapers\u002FChen_Explaining_Neural_Networks_Semantically_and_Quantitatively_ICCV_2019_paper.pdf)|49|\n|2019|ICLR|[Hierarchical interpretations for neural network predictions](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1806.05337.pdf)|111|[Pytorch](https:\u002F\u002Fgithub.com\u002Fcsinva\u002Fhierarchical-dnn-interpretations)|\n|2019|ICLR|[How Important Is a Neuron?](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1805.12233.pdf)|101|\n|2019|ICLR|[Visual Explanation by Interpretation: Improving Visual Feedback Capabilities of Deep Neural Networks](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1712.06302.pdf)|56|\n|2018|ICML|[Extracting Automata from Recurrent Neural Networks Using Queries and 
Counterexamples](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1711.09576.pdf)|169|[Pytorch](https:\u002F\u002Fgithub.com\u002Ftech-srl\u002Flstar_extraction)|\n|2019|ICML|[Towards A Deep and Unified Understanding of Deep Neural Models in NLP](http:\u002F\u002Fproceedings.mlr.press\u002Fv97\u002Fguan19a\u002Fguan19a.pdf)|80|[Pytorch](https:\u002F\u002Fgithub.com\u002Ficml2019paper2428\u002FTowards-A-Deep-and-Unified-Understanding-of-Deep-Neural-Models-in-NLP)|\n|2019|ICAIS|[Interpreting black box predictions using fisher kernels](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1810.10118.pdf)|80|\n|2019|ACMFAT|[Explaining explanations in AI](https:\u002F\u002Fs3.amazonaws.com\u002Facademia.edu.documents\u002F57692790\u002FMittelstadt__Russell_and_Wachter_-_2019_-_Explaining_Explanations_in_AI.pdf?response-content-disposition=inline%3B%20filename%3DExplaining_Explanations_in_AI.pdf&X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=ASIATUSBJ6BAJW2TMFXG%2F20200528%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Date=20200528T052420Z&X-Amz-Expires=3600&X-Amz-SignedHeaders=host&X-Amz-Security-Token=IQoJb3JpZ2luX2VjEHUaCXVzLWVhc3QtMSJIMEYCIQDCCKV%2FpUmJZHn03yzTquQ%2FNMtaXW%2FC63WPmQd%2FhImmYAIhAMelsFwqb9IfV4W2xlfL%2FHk4qeovouLdYbXKf%2B1%2FMwvyKr0DCM7%2F%2F%2F%2F%2F%2F%2F%2F%2F%2FwEQABoMMjUwMzE4ODExMjAwIgytA%2BM6OWOGN4XLrlUqkQN2f8ywZT0AEUzKdbVDyGvZN%2B1repdgXrfgT2rAJiGacTK8IRCoyECvRgcgS%2BWJWYpjS7CjoL%2BlTm1c%2BWDWdo%2FYnVM0U6shk9OQivK089W064ZR64AQCCkBDutI3vYhP%2BOJ8AtEUDE%2B7W5EWVQ4zeUDG4ryxzdomFnrHpzA5fp05qWrOmPS0vd%2FFabC%2FPKXO34bpfgyRzz3PHrIsUC2%2BPB0EAo7CPKS0Ux%2FlxmiIOYOIj5u1ZKoP8NVLgOfueQe7%2F%2F3VJUnUXSAIsAThszDTnbi0AJEjvNvUHjm8E%2F7zqBApJ6YVd39NkKl8%2BTE7MRwKuITAOIq8jsyta%2FcmIY5igpHpVCkYcG395rHfScDu3CODXIAcKRLX%2F7brNz%2FRHuGhddK3Q2XuGTjQaeLTEYTmTj2e7VDDmEOt%2BpxvXx7UaImPakzpVZ1Ks6APy1JHupKgBhM6JJkeFprlK62e4sf09wqwxk9KsJSot3TMLVwM63yGr7VmXdg61ETsg0D%2BO1DOnnMprsFhEkb%2Bt%2FpCVafebolsjCN%2Frz2BTrqAZiqy6Obte6J%2BeHJ5bzB1sy1oF%2Fi7ueF56nd1C9ObB%2FXLx930j8wqmakO%2FnoaUiYM6gHh1jZbl8cCeLr8Xu0YSGecpe1J5HECU0A5%2Fq68zoBDfyY6UGNZJ%2B87Br6crqpfaHFkP5g4zXvuN2%2F0fp6S9m2iuSRBr%2B%2Bh2Z1rXmvb3Vequ2qgqeJBS2nHOX8pLp2LhJsVMqdl218jeQDsjYnbxJKq86peVGr66Cuv7TmNiimVl0c0dPr1jgjr25N9hvMnpX83n2Xa%2Fz%2BHUmaYfwFLrD0YLkUWaS2Khcpm0%2BwvrcYsQEyOmYkVG8x5Q%3D%3D&X-Amz-Signature=4fcca52f4ae92746068ea2164846aca05c2bb44e04c1330947ba70f75e676171)|558|\n|2019|AAAI|[Interpretation of neural networks is fragile](https:\u002F\u002Fmachine-learning-and-security.github.io\u002Fpapers\u002Fmlsec17_paper_18.pdf)|597|[Tensorflow](https:\u002F\u002Fgithub.com\u002Famiratag\u002FInterpretationFragility)|\n|2019|AAAI|[Classifier-agnostic saliency map extraction](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1805.08249.pdf)|23|\n|2019|AAAI|[Can You Explain That? 
Lucid Explanations Help Human-AI Collaborative Image Retrieval](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1904.03285.pdf)|11|\n|2019|AAAIW|[Unsupervised Learning of Neural Networks to Explain Neural Networks](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1805.07468.pdf)|28|\n|2019|AAAIW|[Network Transplanting](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1804.10272.pdf)|4|\n|2019|CSUR|[A Survey of Methods for Explaining Black Box Models](https:\u002F\u002Fkdd.isti.cnr.it\u002Fsites\u002Fkdd.isti.cnr.it\u002Ffiles\u002Fcsur2018survey.pdf)|3088|\n|2019|JVCIR|[Interpretable convolutional neural networks via feedforward design](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1810.02786)|134|[Keras](https:\u002F\u002Fgithub.com\u002Fdavidsonic\u002FInterpretable_CNNs_via_Feedforward_Design)|\n|2019|ExplainAI|[The (Un)reliability of saliency methods](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1711.00867.pdf)|515|\n|2019|ACL|[Attention is not Explanation](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1902.10186.pdf)|920|\n|2019|EMNLP|[Attention is not not Explanation](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1908.04626.pdf)|667|\n|2019|arxiv|[Attention Interpretability Across NLP Tasks](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1909.11218.pdf)|129|\n|2019|arxiv|[Interpretable CNNs](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1901.02413.pdf)|2|\n|2018|ICLR|[Towards better understanding of gradient-based attribution methods for deep neural networks](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1711.06104.pdf)|775|\n|2018|ICLR|[Learning how to explain neural networks: PatternNet and PatternAttribution](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1705.05598.pdf)|342|\n|2018|ICLR|[On the importance of single directions for generalization](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1803.06959.pdf)|282|[Pytorch](https:\u002F\u002Fgithub.com\u002F1Konny\u002Fclass_selectivity_index)|\n|2018|ICLR|[Detecting statistical interactions from neural network weights](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1705.04977.pdf)|148|[Pytorch](https:\u002F\u002Fgithub.com\u002Fmtsang\u002Fneural-interaction-detection)|\n|2018|ICLR|[Interpretable counting for visual question answering](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1712.08697.pdf)|55|[Pytorch](https:\u002F\u002Fgithub.com\u002Fsanyam5\u002Firlc-vqa-counting)|\n|2018|CVPR|[Interpretable Convolutional Neural Networks](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2018\u002Fpapers\u002FZhang_Interpretable_Convolutional_Neural_CVPR_2018_paper.pdf)|677|\n|2018|CVPR|[Tell me where to look: Guided attention inference network](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2018\u002Fpapers\u002FLi_Tell_Me_Where_CVPR_2018_paper.pdf)|454|[Chainer](https:\u002F\u002Fgithub.com\u002Falokwhitewolf\u002FGuided-Attention-Inference-Network)|\n|2018|CVPR|[Multimodal Explanations: Justifying Decisions and Pointing to the Evidence](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2018\u002Fpapers\u002FPark_Multimodal_Explanations_Justifying_CVPR_2018_paper.pdf)|349|[Caffe](https:\u002F\u002Fgithub.com\u002FSeth-Park\u002FMultimodalExplanations)|\n|2018|CVPR|[Transparency by design: Closing the gap between performance and interpretability in visual reasoning](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2018\u002Fpapers\u002FMascharka_Transparency_by_Design_CVPR_2018_paper.pdf)|180|[Pytorch](https:\u002F\u002Fgithub.com\u002Fdavidmascharka\u002Ftbd-nets)|\n|2018|CVPR|[Net2vec: Quantifying and explaining how concepts are encoded by filters in deep neural 
networks](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2018\u002Fpapers\u002FFong_Net2Vec_Quantifying_and_CVPR_2018_paper.pdf)|186|\n|2018|CVPR|[What have we learned from deep representations for action recognition?](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2018\u002Fpapers\u002FFeichtenhofer_What_Have_We_CVPR_2018_paper.pdf)|52|\n|2018|CVPR|[Learning to Act Properly: Predicting and Explaining Affordances from Images](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2018\u002Fpapers\u002FChuang_Learning_to_Act_CVPR_2018_paper.pdf)|57|\n|2018|CVPR|[Teaching Categories to Human Learners with Visual Explanations](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2018\u002Fpapers\u002FAodha_Teaching_Categories_to_CVPR_2018_paper.pdf)|64|[Pytorch](https:\u002F\u002Fgithub.com\u002Fmacaodha\u002Fexplain_teach)|\n|2018|CVPR|[What do deep networks like to see?](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2018\u002Fpapers\u002FPalacio_What_Do_Deep_CVPR_2018_paper.pdf)|36|\n|2018|CVPR|[Interpret Neural Networks by Identifying Critical Data Routing Paths](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2018\u002Fpapers\u002FWang_Interpret_Neural_Networks_CVPR_2018_paper.pdf)|73|[Tensorflow](https:\u002F\u002Fgithub.com\u002Flidongyue12138\u002FCriticalPathPruning)|\n|2018|ECCV|[Deep clustering for unsupervised learning of visual features](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_ECCV_2018\u002Fpapers\u002FMathilde_Caron_Deep_Clustering_for_ECCV_2018_paper.pdf)|2056|[Pytorch](https:\u002F\u002Fgithub.com\u002Fasanakoy\u002Fdeep_clustering)|\n|2018|ECCV|[Explainable neural computation via stack neural module networks](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_ECCV_2018\u002Fpapers\u002FRonghang_Hu_Explainable_Neural_Computation_ECCV_2018_paper.pdf)|164|[Tensorflow](https:\u002F\u002Fgithub.com\u002Fronghanghu\u002Fsnmn)|\n|2018|ECCV|[Grounding visual explanations](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_ECCV_2018\u002Fpapers\u002FLisa_Anne_Hendricks_Grounding_Visual_Explanations_ECCV_2018_paper.pdf)|184|\n|2018|ECCV|[Textual explanations for self-driving vehicles](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_ECCV_2018\u002Fpapers\u002FJinkyu_Kim_Textual_Explanations_for_ECCV_2018_paper.pdf)|196|\n|2018|ECCV|[Interpretable basis decomposition for visual explanation](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_ECCV_2018\u002Fpapers\u002FAntonio_Torralba_Interpretable_Basis_Decomposition_ECCV_2018_paper.pdf)|228|[Pytorch](https:\u002F\u002Fgithub.com\u002FCSAILVision\u002FIBD)|\n|2018|ECCV|[Convnets and imagenet beyond accuracy: Understanding mistakes and uncovering biases](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_ECCV_2018\u002Fpapers\u002FPierre_Stock_ConvNets_and_ImageNet_ECCV_2018_paper.pdf)|147|\n|2018|ECCV|[Vqa-e: Explaining, elaborating, and enhancing your answers for visual questions](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_ECCV_2018\u002Fpapers\u002FQing_Li_VQA-E_Explaining_Elaborating_ECCV_2018_paper.pdf)|71|\n|2018|ECCV|[Choose Your Neuron: Incorporating Domain Knowledge through Neuron-Importance](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_ECCV_2018\u002Fpapers\u002FRamprasaath_Ramasamy_Selvaraju_Choose_Your_Neuron_ECCV_2018_paper.pdf)|41|[Pytorch](https:\u002F\u002Fgithub.com\u002Framprs\u002Fneuron-importance-zsl)|\n|2018|ECCV|[Diverse feature visualizations reveal invariances in early layers of deep neural 
networks](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_ECCV_2018\u002Fpapers\u002FSantiago_Cadena_Diverse_feature_visualizations_ECCV_2018_paper.pdf)|23|[Tensorflow](https:\u002F\u002Fgithub.com\u002Fsacadena\u002Fdiverse_feature_vis)|\n|2018|ECCV|[ExplainGAN: Model Explanation via Decision Boundary Crossing Transformations](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_ECCV_2018\u002Fpapers\u002FNathan_Silberman_ExplainGAN_Model_Explanation_ECCV_2018_paper.pdf)|36|\n|2018|ICML|[Interpretability beyond feature attribution: Quantitative testing with concept activation vectors](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1711.11279.pdf)|1130|[Tensorflow](https:\u002F\u002Fgithub.com\u002Ffursovia\u002Ftcav_nlp)|\n|2018|ICML|[Learning to explain: An information-theoretic perspective on model interpretation](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1802.07814.pdf)|421|\n|2018|ACL|[Did the Model Understand the Question?](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1805.05492.pdf)|171|[Tensorflow](https:\u002F\u002Fgithub.com\u002Fpramodkaushik\u002Facl18_results)|\n|2018|FITEE|[Visual interpretability for deep learning: a survey](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1802.00614)|731|\n|2018|NeurIPS|[Sanity Checks for Saliency Maps](http:\u002F\u002Fpapers.nips.cc\u002Fpaper\u002F8160-sanity-checks-for-saliency-maps.pdf)|1353|\n|2018|NeurIPS|[Explanations based on the missing: Towards contrastive explanations with pertinent negatives](http:\u002F\u002Fpapers.nips.cc\u002Fpaper\u002F7340-explanations-based-on-the-missing-towards-contrastive-explanations-with-pertinent-negatives.pdf)|443|[Tensorflow](https:\u002F\u002Fgithub.com\u002FIBM\u002FContrastive-Explanation-Method)|\n|2018|NeurIPS|[Towards robust interpretability with self-explaining neural networks](http:\u002F\u002Fpapers.nips.cc\u002Fpaper\u002F8003-towards-robust-interpretability-with-self-explaining-neural-networks.pdf)|648|[Pytorch](https:\u002F\u002Fgithub.com\u002Fraj-shah\u002Fsenn)|\n|2018|NeurIPS|[Attacks meet interpretability: Attribute-steered detection of adversarial samples](https:\u002F\u002Fpapers.nips.cc\u002Fpaper\u002F7998-attacks-meet-interpretability-attribute-steered-detection-of-adversarial-samples.pdf)|142|\n|2018|NeurIPS|[DeepPINK: reproducible feature selection in deep neural networks](https:\u002F\u002Fpapers.nips.cc\u002Fpaper\u002F8085-deeppink-reproducible-feature-selection-in-deep-neural-networks.pdf)|125|[Keras](https:\u002F\u002Fgithub.com\u002Fyounglululu\u002FDeepPINK)|\n|2018|NeurIPS|[Representer point selection for explaining deep neural networks](https:\u002F\u002Fpapers.nips.cc\u002Fpaper\u002F8141-representer-point-selection-for-explaining-deep-neural-networks.pdf)|182|[Tensorflow](https:\u002F\u002Fgithub.com\u002Fchihkuanyeh\u002FRepresenter_Point_Selection)|\n|2018|NeurIPS Workshop|[Interpretable convolutional filters with sincNet](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1811.09725)|97|\n|2018|AAAI|[Anchors: High-precision model-agnostic explanations](https:\u002F\u002Fdm-gatech.github.io\u002FCS8803-Fall2018-DML-Papers\u002Fanchors.pdf)|1517|\n|2018|AAAI|[Improving the adversarial robustness and interpretability of deep neural networks by regularizing their input gradients](https:\u002F\u002Fasross.github.io\u002Fpublications\u002FRossDoshiVelez2018.pdf)|537|[Tensorflow](https:\u002F\u002Fgithub.com\u002Fdtak\u002Fadversarial-robustness-public)|\n|2018|AAAI|[Deep learning for case-based reasoning through prototypes: A neural network that explains its 
predictions](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1710.04806.pdf)|396|[Tensorflow](https:\u002F\u002Fgithub.com\u002FOscarcarLi\u002FPrototypeDL)|\n|2018|AAAI|[Interpreting CNN Knowledge via an Explanatory Graph](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1708.01785.pdf)|199|[Matlab](https:\u002F\u002Fgithub.com\u002Fzqs1022\u002FexplanatoryGraph)|\n|2018|AAAI|[Examining CNN Representations with respect to Dataset Bias](http:\u002F\u002Fwww.stat.ucla.edu\u002F~sczhu\u002Fpapers\u002FConf_2018\u002FAAAI_2018_DNN_Learning_Bias.pdf)|88|\n|2018|WACV|[Grad-cam++: Generalized gradient-based visual explanations for deep convolutional networks](https:\u002F\u002Fwww.researchgate.net\u002Fprofile\u002FAditya_Chattopadhyay2\u002Fpublication\u002F320727679_Grad-CAM_Generalized_Gradient-based_Visual_Explanations_for_Deep_Convolutional_Networks\u002Flinks\u002F5a3aa2e5a6fdcc3889bd04cb\u002FGrad-CAM-Generalized-Gradient-based-Visual-Explanations-for-Deep-Convolutional-Networks.pdf)|1459|\n|2018|IJCV|[Top-down neural attention by excitation backprop](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1608.00507)|778|\n|2018|TPAMI|[Interpreting deep visual representations via network dissection](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1711.05611)|252|\n|2018|DSP|[Methods for interpreting and understanding deep neural networks](http:\u002F\u002Fiphome.hhi.de\u002Fsamek\u002Fpdf\u002FMonDSP18.pdf)|2046|\n|2018|Access|[Peeking inside the black-box: A survey on Explainable Artificial Intelligence (XAI)](https:\u002F\u002Fieeexplore.ieee.org\u002Fstamp\u002Fstamp.jsp?arnumber=8466590)|3110|\n|2018|JAIR|[Learning Explanatory Rules from Noisy Data](https:\u002F\u002Fwww.ijcai.org\u002FProceedings\u002F2018\u002F0792.pdf)|440|[Tensorflow](https:\u002F\u002Fgithub.com\u002Fai-systems\u002FDILP-Core)|\n|2018|MIPRO|[Explainable artificial intelligence: A survey](https:\u002F\u002Fwww.researchgate.net\u002Fprofile\u002FMario_Brcic\u002Fpublication\u002F325398586_Explainable_Artificial_Intelligence_A_Survey\u002Flinks\u002F5b0bec90a6fdcc8c2534d673\u002FExplainable-Artificial-Intelligence-A-Survey.pdf)|794|\n|2018|BMVC|[Rise: Randomized input sampling for explanation of black-box models](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1806.07421.pdf)|657|\n|2018|arxiv|[Distill-and-Compare: Auditing Black-Box Models Using Transparent Model Distillation](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1710.06169.pdf)|194|\n|2018|arxiv|[Manipulating and measuring model interpretability](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1802.07810.pdf)|496|\n|2018|arxiv|[How convolutional neural network see the world-A survey of convolutional neural network visualization methods](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1804.11191.pdf)|211|\n|2018|arxiv|[Revisiting the importance of individual units in cnns via ablation](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1806.02891.pdf)|93|\n|2018|arxiv|[Computationally Efficient Measures of Internal Neuron Importance](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1807.09946.pdf)|10|\n|2017|ICML|[Understanding Black-box Predictions via Influence Functions](https:\u002F\u002Fdm-gatech.github.io\u002FCS8803-Fall2018-DML-Papers\u002Finfluence-functions.pdf)|2062|[Pytorch](https:\u002F\u002Fgithub.com\u002Fnimarb\u002Fpytorch_influence_functions)|\n|2017|ICML|[Axiomatic attribution for deep networks](https:\u002F\u002Fmit6874.github.io\u002Fassets\u002Fmisc\u002Fsundararajan.pdf)|3654|[Keras](https:\u002F\u002Fgithub.com\u002Fhiranumn\u002FIntegratedGradients)|\n|2017|ICML|[Learning Important Features Through 
Propagating Activation Differences](https:\u002F\u002Fmit6874.github.io\u002Fassets\u002Fmisc\u002Fshrikumar.pdf)|2835|\n|2017|ICLR|[Visualizing deep neural network decisions: Prediction difference analysis](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1702.04595.pdf)|674|[Caffe](https:\u002F\u002Fgithub.com\u002Flmzintgraf\u002FDeepVis-PredDiff)|\n|2017|ICLR|[Exploring LOTS in Deep Neural Networks](https:\u002F\u002Fopenreview.net\u002Fpdf?id=SkCILwqex)|34|\n|2017|NeurIPS|[A Unified Approach to Interpreting Model Predictions](http:\u002F\u002Fpapers.nips.cc\u002Fpaper\u002F7062-a-unified-approach-to-interpreting-model-predictions.pdf)|11511|\n|2017|NeurIPS|[Real time image saliency for black box classifiers](https:\u002F\u002Fpapers.nips.cc\u002Fpaper\u002F7272-real-time-image-saliency-for-black-box-classifiers.pdf)|483|[Pytorch](https:\u002F\u002Fgithub.com\u002Fkaranchahal\u002FSaliencyMapper)|\n|2017|NeurIPS|[SVCCA: Singular Vector Canonical Correlation Analysis for Deep Learning Dynamics and Interpretability](http:\u002F\u002Fpapers.nips.cc\u002Fpaper\u002F7188-svcca-singular-vector-canonical-correlation-analysis-for-deep-learning-dynamics-and-interpretability.pdf)|473|\n|2017|CVPR|[Mining Object Parts from CNNs via Active Question-Answering](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2017\u002Fpapers\u002FZhang_Mining_Object_Parts_CVPR_2017_paper.pdf)|29|\n|2017|CVPR|[Network dissection: Quantifying interpretability of deep visual representations](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2017\u002Fpapers\u002FBau_Network_Dissection_Quantifying_CVPR_2017_paper.pdf)|1254|\n|2017|CVPR|[Improving Interpretability of Deep Neural Networks with Semantic Information](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2017\u002Fpapers\u002FDong_Improving_Interpretability_of_CVPR_2017_paper.pdf)|118|\n|2017|CVPR|[MDNet: A Semantically and Visually Interpretable Medical Image Diagnosis Network](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2017\u002Fpapers\u002FZhang_MDNet_A_Semantically_CVPR_2017_paper.pdf)|307|[Torch](https:\u002F\u002Fgithub.com\u002Fzizhaozhang\u002Fmdnet-cvpr2017)|\n|2017|CVPR|[Making the V in VQA matter: Elevating the role of image understanding in Visual Question Answering](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2017\u002Fpapers\u002FGoyal_Making_the_v_CVPR_2017_paper.pdf)|1686|\n|2017|CVPR|[Knowing when to look: Adaptive attention via a visual sentinel for image captioning](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2017\u002Fpapers\u002FLu_Knowing_When_to_CVPR_2017_paper.pdf)|1392|[Torch](https:\u002F\u002Fgithub.com\u002Fjiasenlu\u002FAdaptiveAttention)|\n|2017|CVPRW|[Interpretable 3d human action analysis with temporal convolutional networks](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2017_workshops\u002Fw20\u002Fpapers\u002FKim_Interpretable_3D_Human_CVPR_2017_paper.pdf)|539|\n|2017|ICCV|[Grad-cam: Visual explanations from deep networks via gradient-based localization](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_ICCV_2017\u002Fpapers\u002FSelvaraju_Grad-CAM_Visual_Explanations_ICCV_2017_paper.pdf)|13006|[Pytorch](https:\u002F\u002Fgithub.com\u002Fleftthomas\u002FGradCAM)|\n|2017|ICCV|[Interpretable Explanations of Black Boxes by Meaningful 
Perturbation](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_ICCV_2017\u002Fpapers\u002FFong_Interpretable_Explanations_of_ICCV_2017_paper.pdf)|1293|[Pytorch](https:\u002F\u002Fgithub.com\u002Fjacobgil\u002Fpytorch-explain-black-box)|\n|2017|ICCV|[Interpretable Learning for Self-Driving Cars by Visualizing Causal Attention](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_ICCV_2017\u002Fpapers\u002FKim_Interpretable_Learning_for_ICCV_2017_paper.pdf)|323|\n|2017|ICCV|[Understanding and comparing deep neural networks for age and gender classification](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_ICCV_2017_workshops\u002Fpapers\u002Fw23\u002FLapuschkin_Understanding_and_Comparing_ICCV_2017_paper.pdf)|130|\n|2017|ICCV|[Learning to disambiguate by asking discriminative questions](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_ICCV_2017\u002Fpapers\u002FLi_Learning_to_Disambiguate_ICCV_2017_paper.pdf)|26|\n|2017|IJCAI|[Right for the right reasons: Training differentiable models by constraining their explanations](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1703.03717.pdf)|429|\n|2017|IJCAI|[Understanding and improving convolutional neural networks via concatenated rectified linear units](http:\u002F\u002Fwww.jmlr.org\u002Fproceedings\u002Fpapers\u002Fv48\u002Fshang16.pdf)|510|[Caffe](https:\u002F\u002Fgithub.com\u002Fchakkritte\u002FCReLU)|\n|2017|AAAI|[Growing Interpretable Part Graphs on ConvNets via Multi-Shot Learning](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1611.04246.pdf)|67|[Matlab](https:\u002F\u002Fgithub.com\u002Fzqs1022\u002FpartGraphForCNN)|\n|2017|ACL|[Visualizing and Understanding Neural Machine Translation](https:\u002F\u002Fwww.aclweb.org\u002Fanthology\u002FP17-1106.pdf)|179|\n|2017|EMNLP|[A causal framework for explaining the predictions of black-box sequence-to-sequence models](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1707.01943.pdf)|192|\n|2017|CVPR Workshop|[Looking under the hood: Deep neural network visualization to interpret whole-slide image analysis outcomes for colorectal polyps](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2017_workshops\u002Fw8\u002Fpapers\u002FKorbar_Looking_Under_the_CVPR_2017_paper.pdf)|47|\n|2017|survey|[Interpretability of deep learning models: a survey of results](https:\u002F\u002Fdiscovery.ucl.ac.uk\u002Fid\u002Feprint\u002F10059575\u002F1\u002FChakraborty_Interpretability%20of%20deep%20learning%20models.pdf)|345|\n|2017|arxiv|[SmoothGrad: removing noise by adding noise](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1706.03825.pdf)|1479|\n|2017|arxiv|[Interpretable & explorable approximations of black box models](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1707.01154.pdf)|259|\n|2017|arxiv|[Distilling a neural network into a soft decision tree](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1711.09784.pdf)|520|[Pytorch](https:\u002F\u002Fgithub.com\u002Fkimhc6028\u002Fsoft-decision-tree)|\n|2017|arxiv|[Towards interpretable deep neural networks by leveraging adversarial examples](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1708.05493.pdf)|111|\n|2017|arxiv|[Explainable artificial intelligence: Understanding, visualizing and interpreting deep learning models](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1708.08296.pdf)|1279|\n|2017|arxiv|[Contextual Explanation Networks](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1705.10301.pdf)|77|[Pytorch](https:\u002F\u002Fgithub.com\u002Falshedivat\u002Fcen)|\n|2017|arxiv|[Challenges for transparency](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1708.01870.pdf)|142|\n|2017|ACMSOPP|[Deepxplore: 
Automated whitebox testing of deep learning systems](https:\u002F\u002Fmachine-learning-and-security.github.io\u002Fpapers\u002Fmlsec17_paper_1.pdf)|1144|\n|2017|CEURW|[What does explainable AI really mean? A new conceptualization of perspectives](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1710.00794.pdf)|518|\n|2017|TVCG|[ActiVis: Visual Exploration of Industry-Scale Deep Neural Network Models](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1704.01942.pdf)|346|\n|2016|NeurIPS|[Synthesizing the preferred inputs for neurons in neural networks via deep generator networks](http:\u002F\u002Fpapers.nips.cc\u002Fpaper\u002F6519-synthesizing-the-preferred-inputs-for-neurons-in-neural-networks-via-deep-generator-networks.pdf)|659|[Caffe](https:\u002F\u002Fgithub.com\u002FEvolving-AI-Lab\u002Fsynthesizing)|\n|2016|NeurIPS|[Understanding the effective receptive field in deep convolutional neural networks](https:\u002F\u002Fpapers.nips.cc\u002Fpaper\u002F6203-understanding-the-effective-receptive-field-in-deep-convolutional-neural-networks.pdf)|1356|\n|2016|CVPR|[Inverting Visual Representations with Convolutional Networks](https:\u002F\u002Fwww.cv-foundation.org\u002Fopenaccess\u002Fcontent_cvpr_2016\u002Fpapers\u002FDosovitskiy_Inverting_Visual_Representations_CVPR_2016_paper.pdf)|626|\n|2016|CVPR|[Visualizing and Understanding Deep Texture Representations](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2016\u002Fpapers\u002FLin_Visualizing_and_Understanding_CVPR_2016_paper.pdf)|147|\n|2016|CVPR|[Analyzing Classifiers: Fisher Vectors and Deep Neural Networks](https:\u002F\u002Fwww.cv-foundation.org\u002Fopenaccess\u002Fcontent_cvpr_2016\u002Fpapers\u002FBach_Analyzing_Classifiers_Fisher_CVPR_2016_paper.pdf)|191|\n|2016|ECCV|[Generating Visual Explanations](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1603.08507)|613|[Caffe](https:\u002F\u002Fgithub.com\u002FLisaAnne\u002FECCV2016)|\n|2016|ECCV|[Design of kernels in convolutional neural networks for image classification](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1511.09231.pdf)|24|\n|2016|ICML|[Understanding and improving convolutional neural networks via concatenated rectified linear units](http:\u002F\u002Fwww.jmlr.org\u002Fproceedings\u002Fpapers\u002Fv48\u002Fshang16.pdf)|510|\n|2016|ICML|[Visualizing and comparing AlexNet and VGG using deconvolutional layers](https:\u002F\u002Ficmlviz.github.io\u002Ficmlviz2016\u002Fassets\u002Fpapers\u002F4.pdf)|126|\n|2016|EMNLP|[Rationalizing Neural Predictions](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1606.04155)|738|[Pytorch](https:\u002F\u002Fgithub.com\u002Fzhaopku\u002FRationale-Torch)|\n|2016|IJCV|[Visualizing deep convolutional neural networks using natural pre-images](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1512.02017)|508|[Matlab](https:\u002F\u002Fgithub.com\u002Faravindhm\u002Fnnpreimage)|\n|2016|IJCV|[Visualizing Object Detection Features](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1502.05461.pdf)|38|[Caffe](https:\u002F\u002Fgithub.com\u002Fcvondrick\u002Fihog)|\n|2016|KDD|[Why should i trust you?: Explaining the predictions of any classifier](https:\u002F\u002Fchu-data-lab.github.io\u002FCS8803Fall2018\u002FCS8803-Fall2018-DML-Papers\u002Flime.pdf)|11742|\n|2016|TVCG|[Visualizing the hidden activity of artificial neural 
networks](https:\u002F\u002Fwww.researchgate.net\u002Fprofile\u002FSamuel_Fadel\u002Fpublication\u002F306049229_Visualizing_the_Hidden_Activity_of_Artificial_Neural_Networks\u002Flinks\u002F5b13ffa7aca2723d9980083c\u002FVisualizing-the-Hidden-Activity-of-Artificial-Neural-Networks.pdf)|309|\n|2016|TVCG|[Towards better analysis of deep convolutional neural networks](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1604.07043.pdf)|474|\n|2016|NAACL|[Visualizing and understanding neural models in nlp](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1506.01066)|650|[Torch](https:\u002F\u002Fgithub.com\u002Fjiweil\u002FVisualizing-and-Understanding-Neural-Models-in-NLP)|\n|2016|arxiv|[Understanding neural networks through representation erasure](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1612.08220.pdf)|492|\n|2016|arxiv|[Grad-CAM: Why did you say that?](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1611.07450.pdf)|398|\n|2016|arxiv|[Investigating the influence of noise and distractors on the interpretation of neural networks](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1611.07270.pdf)|108|\n|2016|arxiv|[Attentive Explanations: Justifying Decisions and Pointing to the Evidence](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1612.04757)|88|\n|2016|arxiv|[The Mythos of Model Interpretability](http:\u002F\u002Fwww.zacklipton.com\u002Fmedia\u002Fpapers\u002Fmythos_model_interpretability_lipton2016.pdf)|3786|\n|2016|arxiv|[Multifaceted feature visualization: Uncovering the different types of features learned by each neuron in deep neural networks](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1602.03616)|317|\n|2015|ICLR|[Striving for Simplicity: The All Convolutional Net](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1412.6806.pdf)|4645|[Pytorch](https:\u002F\u002Fgithub.com\u002FStefOe\u002Fall-conv-pytorch)|\n|2015|CVPR|[Understanding deep image representations by inverting them](https:\u002F\u002Fwww.cv-foundation.org\u002Fopenaccess\u002Fcontent_cvpr_2015\u002Fpapers\u002FMahendran_Understanding_Deep_Image_2015_CVPR_paper.pdf)|1942|[Matlab](https:\u002F\u002Fgithub.com\u002Faravindhm\u002Fdeep-goggle)|\n|2015|ICCV|[Understanding deep features with computer-generated imagery](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_iccv_2015\u002Fpapers\u002FAubry_Understanding_Deep_Features_ICCV_2015_paper.pdf)|156|[Caffe](https:\u002F\u002Fgithub.com\u002Fmathieuaubry\u002Ffeatures_analysis)|\n|2015|ICML Workshop|[Understanding Neural Networks Through Deep Visualization](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1506.06579.pdf)|2038|[Tensorflow](https:\u002F\u002Fgithub.com\u002Fjiye-ML\u002FVisualizing-and-Understanding-Convolutional-Networks)|\n|2015|AAS|[Interpretable classifiers using rules and Bayesian analysis: Building a better stroke prediction model](https:\u002F\u002Fprojecteuclid.org\u002Fdownload\u002Fpdfview_1\u002Feuclid.aoas\u002F1446488742)|749|\n|2014|ECCV|[Visualizing and Understanding Convolutional Networks](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1311.2901.pdf)|18604|[Pytorch](https:\u002F\u002Fgithub.com\u002Fhuybery\u002FVisualizingCNN)|\n|2014|ICLR|[Deep Inside Convolutional Networks: Visualising Image Classification Models and Saliency Maps](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1312.6034.pdf)|6142|[Pytorch](https:\u002F\u002Fgithub.com\u002Fhuanghao-code\u002FVisCNN_ICLR_2014_Saliency)|\n|2013|ICCV|[Hoggles: Visualizing object detection 
features](https:\u002F\u002Fwww.cv-foundation.org\u002Fopenaccess\u002Fcontent_iccv_2013\u002Fpapers\u002FVondrick_HOGgles_Visualizing_Object_2013_ICCV_paper.pdf)|352|\n \n+ [ ] 论文talk\n","# 令人惊叹的深度学习可解释性\n近年来深度学习领域关于模型可解释性的相关论文。\n\n按引用次数排序，请参见[引用排序](.\u002Fsort_cite.md)。\n\n共159篇论文的PDF文件（其中2篇需通过Sci-Hub获取）已上传至[腾讯微云](https:\u002F\u002Fshare.weiyun.com\u002F5ddB0EQ)。\n\n不定期更新。\n\n|Year|Publication|Paper|Citation|code|\n|:---:|:---:|:---:|:---:|:---:|\n|2020|CVPR|[Explaining Knowledge Distillation by Quantifying the Knowledge](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2003.03622.pdf)|81|\n|2020|CVPR|[High-frequency Component Helps Explain the Generalization of Convolutional Neural Networks](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_CVPR_2020\u002Fpapers\u002FWang_High-Frequency_Component_Helps_Explain_the_Generalization_of_Convolutional_Neural_Networks_CVPR_2020_paper.pdf)|289|\n|2020|CVPRW|[Score-CAM: Score-Weighted Visual Explanations for Convolutional Neural Networks](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_CVPRW_2020\u002Fpapers\u002Fw1\u002FWang_Score-CAM_Score-Weighted_Visual_Explanations_for_Convolutional_Neural_Networks_CVPRW_2020_paper.pdf)|414|[Pytorch](https:\u002F\u002Fgithub.com\u002Fhaofanwang\u002FScore-CAM)\n|2020|ICLR|[Knowledge consistency between neural networks and beyond](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1908.01581.pdf)|28|\n|2020|ICLR|[Interpretable Complex-Valued Neural Networks for Privacy Protection](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1901.09546.pdf)|23|\n|2019|AI|[Explanation in artificial intelligence: Insights from the social sciences](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1706.07269.pdf)|3248|\n|2019|NMI|[Stop Explaining Black Box Machine Learning Models for High Stakes Decisions and Use Interpretable Models Instead](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1811.10154.pdf)|3505|\n|2019|NeurIPS|[Can you trust your model's uncertainty? 
Evaluating predictive uncertainty under dataset shift](https:\u002F\u002Fpapers.nips.cc\u002Fpaper\u002F9547-can-you-trust-your-models-uncertainty-evaluating-predictive-uncertainty-under-dataset-shift.pdf)|1052|-|\n|2019|NeurIPS|[This looks like that: deep learning for interpretable image recognition](http:\u002F\u002Fpapers.nips.cc\u002Fpaper\u002F9095-this-looks-like-that-deep-learning-for-interpretable-image-recognition.pdf)|665|[Pytorch](https:\u002F\u002Fgithub.com\u002Fcfchen-duke\u002FProtoPNet)|\n|2019|NeurIPS|[A benchmark for interpretability methods in deep neural networks](https:\u002F\u002Fpapers.nips.cc\u002Fpaper\u002F9167-a-benchmark-for-interpretability-methods-in-deep-neural-networks.pdf)|413|\n|2019|NeurIPS|[Full-gradient representation for neural network visualization](http:\u002F\u002Fpapers.nips.cc\u002Fpaper\u002F8666-full-gradient-representation-for-neural-network-visualization.pdf)|155|\n|2019|NeurIPS|[On the (In) fidelity and Sensitivity of Explanations](https:\u002F\u002Fpapers.nips.cc\u002Fpaper\u002F9278-on-the-infidelity-and-sensitivity-of-explanations.pdf)|226|\n|2019|NeurIPS|[Towards Automatic Concept-based Explanations](http:\u002F\u002Fpapers.nips.cc\u002Fpaper\u002F9126-towards-automatic-concept-based-explanations.pdf)|342|[Tensorflow](https:\u002F\u002Fgithub.com\u002Famiratag\u002FACE)|\n|2019|NeurIPS|[CXPlain: Causal explanations for model interpretation under uncertainty](http:\u002F\u002Fpapers.nips.cc\u002Fpaper\u002F9211-cxplain-causal-explanations-for-model-interpretation-under-uncertainty.pdf)|133|\n|2019|CVPR|[Interpreting CNNs via Decision Trees](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_CVPR_2019\u002Fpapers\u002FZhang_Interpreting_CNNs_via_Decision_Trees_CVPR_2019_paper.pdf)|293|\n|2019|CVPR|[From Recognition to Cognition: Visual Commonsense Reasoning](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_CVPR_2019\u002Fpapers\u002FZellers_From_Recognition_to_Cognition_Visual_Commonsense_Reasoning_CVPR_2019_paper.pdf)|544|[Pytorch](https:\u002F\u002Fgithub.com\u002Frowanz\u002Fr2c)|\n|2019|CVPR|[Attention branch network: Learning of attention mechanism for visual explanation](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_CVPR_2019\u002Fpapers\u002FFukui_Attention_Branch_Network_Learning_of_Attention_Mechanism_for_Visual_Explanation_CVPR_2019_paper.pdf)|371|\n|2019|CVPR|[Interpretable and fine-grained visual explanations for convolutional neural networks](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_CVPR_2019\u002Fpapers\u002FWagner_Interpretable_and_Fine-Grained_Visual_Explanations_for_Convolutional_Neural_Networks_CVPR_2019_paper.pdf)|116|\n|2019|CVPR|[Learning to Explain with Complemental Examples](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_CVPR_2019\u002Fpapers\u002FKanehira_Learning_to_Explain_With_Complemental_Examples_CVPR_2019_paper.pdf)|36|\n|2019|CVPR|[Revealing Scenes by Inverting Structure from Motion Reconstructions](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_CVPR_2019\u002Fpapers\u002FPittaluga_Revealing_Scenes_by_Inverting_Structure_From_Motion_Reconstructions_CVPR_2019_paper.pdf)|84|[Tensorflow](https:\u002F\u002Fgithub.com\u002Ffrancescopittaluga\u002Finvsfm)|\n|2019|CVPR|[Multimodal Explanations by Predicting Counterfactuality in Videos](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_CVPR_2019\u002Fpapers\u002FKanehira_Multimodal_Explanations_by_Predicting_Counterfactuality_in_Videos_CVPR_2019_paper.pdf)|26|\n|2019|CVPR|[Visualizing the Resilience of Deep Convolutional Network 
Interpretations](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_CVPRW_2019\u002Fpapers\u002FExplainable%20AI\u002FVasu_Visualizing_the_Resilience_of_Deep_Convolutional_Network_Interpretations_CVPRW_2019_paper.pdf)|2|\n|2019|ICCV|[U-CAM: Visual Explanation using Uncertainty based Class Activation Maps](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_ICCV_2019\u002Fpapers\u002FPatro_U-CAM_Visual_Explanation_Using_Uncertainty_Based_Class_Activation_Maps_ICCV_2019_paper.pdf)|61|\n|2019|ICCV|[Towards Interpretable Face Recognition](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1805.00611.pdf)|66|\n|2019|ICCV|[Taking a HINT: Leveraging Explanations to Make Vision and Language Models More Grounded](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_ICCV_2019\u002Fpapers\u002FSelvaraju_Taking_a_HINT_Leveraging_Explanations_to_Make_Vision_and_Language_ICCV_2019_paper.pdf)|163|\n|2019|ICCV|[Understanding Deep Networks via Extremal Perturbations and Smooth Masks](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_ICCV_2019\u002Fpapers\u002FFong_Understanding_Deep_Networks_via_Extremal_Perturbations_and_Smooth_Masks_ICCV_2019_paper.pdf)|276|[Pytorch](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002FTorchRay)|\n|2019|ICCV|[Explaining Neural Networks Semantically and Quantitatively](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_ICCV_2019\u002Fpapers\u002FChen_Explaining_Neural_Networks_Semantically_and_Quantitatively_ICCV_2019_paper.pdf)|49|\n|2019|ICLR|[Hierarchical interpretations for neural network predictions](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1806.05337.pdf)|111|[Pytorch](https:\u002F\u002Fgithub.com\u002Fcsinva\u002Fhierarchical-dnn-interpretations)|\n|2019|ICLR|[How Important Is a Neuron?](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1805.12233.pdf)|101|\n|2019|ICLR|[Visual Explanation by Interpretation: Improving Visual Feedback Capabilities of Deep Neural Networks](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1712.06302.pdf)|56|\n|2018|ICML|[Extracting Automata from Recurrent Neural Networks Using Queries and Counterexamples](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1711.09576.pdf)|169|[Pytorch](https:\u002F\u002Fgithub.com\u002Ftech-srl\u002Flstar_extraction)|\n|2019|ICML|[Towards A Deep and Unified Understanding of Deep Neural Models in NLP](http:\u002F\u002Fproceedings.mlr.press\u002Fv97\u002Fguan19a\u002Fguan19a.pdf)|80|[Pytorch](https:\u002F\u002Fgithub.com\u002Ficml2019paper2428\u002FTowards-A-Deep-and-Unified-Understanding-of-Deep-Neural-Models-in-NLP)|\n|2019|ICAIS|[Interpreting black box predictions using fisher kernels](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1810.10118.pdf)|80|\n|2019|ACMFAT|[Explaining explanations in 
AI](https:\u002F\u002Fs3.amazonaws.com\u002Facademia.edu.documents\u002F57692790\u002FMittelstadt__Russell_and_Wachter_-_2019_-_Explaining_Explanations_in_AI.pdf?response-content-disposition=inline%3B%20filename%3DExplaining_Explanations_in_AI.pdf&X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=ASIATUSBJ6BAJW2TMFXG%2F20200528%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Date=20200528T052420Z&X-Amz-Expires=3600&X-Amz-SignedHeaders=host&X-Amz-Security-Token=IQoJb3JpZ2luX2VjEHUaCXVzLWVhc3QtMSJIMEYCIQDCCKV%2FpUmJZHn03yzTquQ%2FNMtaXW%2FC63WPmQd%2FhImmYAIhAMelsFwqb9IfV4W2xlfL%2FHk4qeovouLdYbXKf%2B1%2FMwvyKr0DCM7%2F%2F%2F%2F%2F%2F%2F%2F%2F%2FwEQABoMMjUwMzE4ODExMjAwIgytA%2BM6OWOGN4XLrlUqkQN2f8ywZT0AEUzKdbVDyGvZN%2B1repdgXrfgT2rAJiGacTK8IRCoyECvRgcgS%2BWJWYpjS7CjoL%2BlTm1c%2BWDWdo%2FYnVM0U6shk9OQivK089W064ZR64AQCCkBDutI3vYhP%2BOJ8AtEUDE%2B7W5EWVQ4zeUDG4ryxzdomFnrHpzA5fp05qWrOmPS0vd%2FFabC%2FPKXO34bpfgyRzz3PHrIsUC2%2BPB0EAo7CPKS0Ux%2FlxmiIOYOIj5u1ZKoP8NVLgOfueQe7%2F%2F3VJUnUXSAIsAThszDTnbi0AJEjvNvUHjm8E%2F7zqBApJ6YVd39NkKl8%2BTE7MRwKuITAOIq8jsyta%2FcmIY5igpHpVCkYcG395rHfScDu3CODXIAcKRLX%2F7brNz%2FRHuGhddK3Q2XuGTjQaeLTEYTmTj2e7VDDmEOt%2BpxvXx7UaImPakzpVZ1Ks6APy1JHupKgBhM6JJkeFprlK62e4sf09wqwxk9KsJSot3TMLVwM63yGr7VmXdg61ETsg0D%2BO1DOnnMprsFhEkb%2Bt%2FpCVafebolsjCN%2Frz2BTrqAZiqy6Obte6J%2BeHJ5bzB1sy1oF%2Fi7ueF56nd1C9ObB%2FXLx930j8wqmakO%2FnoaUiYM6gHh1jZbl8cCeLr8Xu0YSGecpe1J5HECU0A5%2Fq68zoBDfyY6UGNZJ%2B87Br6crqpfaHFkP5g4zXvuN2%2F0fp6S9m2iuSRBr%2B%2Bh2Z1rXmvb3Vequ2qgqeJBS2nHOX8pLp2LhJsVMqdl218jeQDsjYnbxJKq86peVGr66Cuv7TmNiimVl0c0dPr1jgjr25N9hvMnpX83n2Xa%2Fz%2BHUmaYfwFLrD0YLkUWaS2Khcpm0%2BwvrcYsQEyOmYkVG8x5Q%3D%3D&X-Amz-Signature=4fcca52f4ae92746068ea2164846aca05c2bb44e04c1330947ba70f75e676171)|558|\n|2019|AAAI|[Interpretation of neural networks is fragile](https:\u002F\u002Fmachine-learning-and-security.github.io\u002Fpapers\u002Fmlsec17_paper_18.pdf)|597|[Tensorflow](https:\u002F\u002Fgithub.com\u002Famiratag\u002FInterpretationFragility)|\n|2019|AAAI|[Classifier-agnostic saliency map extraction](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1805.08249.pdf)|23|\n|2019|AAAI|[Can You Explain That? 
Lucid Explanations Help Human-AI Collaborative Image Retrieval](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1904.03285.pdf)|11|\n|2019|AAAIW|[Unsupervised Learning of Neural Networks to Explain Neural Networks](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1805.07468.pdf)|28|\n|2019|AAAIW|[Network Transplanting](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1804.10272.pdf)|4|\n|2019|CSUR|[A Survey of Methods for Explaining Black Box Models](https:\u002F\u002Fkdd.isti.cnr.it\u002Fsites\u002Fkdd.isti.cnr.it\u002Ffiles\u002Fcsur2018survey.pdf)|3088|\n|2019|JVCIR|[Interpretable convolutional neural networks via feedforward design](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1810.02786)|134|[Keras](https:\u002F\u002Fgithub.com\u002Fdavidsonic\u002FInterpretable_CNNs_via_Feedforward_Design)|\n|2019|ExplainAI|[The (Un)reliability of saliency methods](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1711.00867.pdf)|515|\n|2019|ACL|[Attention is not Explanation](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1902.10186.pdf)|920|\n|2019|EMNLP|[Attention is not not Explanation](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1908.04626.pdf)|667|\n|2019|arxiv|[Attention Interpretability Across NLP Tasks](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1909.11218.pdf)|129|\n|2019|arxiv|[Interpretable CNNs](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1901.02413.pdf)|2|\n|2018|ICLR|[Towards better understanding of gradient-based attribution methods for deep neural networks](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1711.06104.pdf)|775|\n|2018|ICLR|[Learning how to explain neural networks: PatternNet and PatternAttribution](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1705.05598.pdf)|342|\n|2018|ICLR|[On the importance of single directions for generalization](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1803.06959.pdf)|282|[Pytorch](https:\u002F\u002Fgithub.com\u002F1Konny\u002Fclass_selectivity_index)|\n|2018|ICLR|[Detecting statistical interactions from neural network weights](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1705.04977.pdf)|148|[Pytorch](https:\u002F\u002Fgithub.com\u002Fmtsang\u002Fneural-interaction-detection)|\n|2018|ICLR|[Interpretable counting for visual question answering](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1712.08697.pdf)|55|[Pytorch](https:\u002F\u002Fgithub.com\u002Fsanyam5\u002Firlc-vqa-counting)|\n|2018|CVPR|[Interpretable Convolutional Neural Networks](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2018\u002Fpapers\u002FZhang_Interpretable_Convolutional_Neural_CVPR_2018_paper.pdf)|677|\n|2018|CVPR|[Tell me where to look: Guided attention inference network](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2018\u002Fpapers\u002FLi_Tell_Me_Where_CVPR_2018_paper.pdf)|454|[Chainer](https:\u002F\u002Fgithub.com\u002Falokwhitewolf\u002FGuided-Attention-Inference-Network)|\n|2018|CVPR|[Multimodal Explanations: Justifying Decisions and Pointing to the Evidence](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2018\u002Fpapers\u002FPark_Multimodal_Explanations_Justifying_CVPR_2018_paper.pdf)|349|[Caffe](https:\u002F\u002Fgithub.com\u002FSeth-Park\u002FMultimodalExplanations)|\n|2018|CVPR|[Transparency by design: Closing the gap between performance and interpretability in visual reasoning](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2018\u002Fpapers\u002FMascharka_Transparency_by_Design_CVPR_2018_paper.pdf)|180|[Pytorch](https:\u002F\u002Fgithub.com\u002Fdavidmascharka\u002Ftbd-nets)|\n|2018|CVPR|[Net2vec: Quantifying and explaining how concepts are encoded by filters in deep neural 
networks](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2018\u002Fpapers\u002FFong_Net2Vec_Quantifying_and_CVPR_2018_paper.pdf)|186|\n|2018|CVPR|[What have we learned from deep representations for action recognition?](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2018\u002Fpapers\u002FFeichtenhofer_What_Have_We_CVPR_2018_paper.pdf)|52|\n|2018|CVPR|[Learning to Act Properly: Predicting and Explaining Affordances from Images](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2018\u002Fpapers\u002FChuang_Learning_to_Act_CVPR_2018_paper.pdf)|57|\n|2018|CVPR|[Teaching Categories to Human Learners with Visual Explanations](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2018\u002Fpapers\u002FAodha_Teaching_Categories_to_CVPR_2018_paper.pdf)|64|[Pytorch](https:\u002F\u002Fgithub.com\u002Fmacaodha\u002Fexplain_teach)|\n|2018|CVPR|[What do deep networks like to see?](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2018\u002Fpapers\u002FPalacio_What_Do_Deep_CVPR_2018_paper.pdf)|36|\n|2018|CVPR|[Interpret Neural Networks by Identifying Critical Data Routing Paths](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2018\u002Fpapers\u002FWang_Interpret_Neural_Networks_CVPR_2018_paper.pdf)|73|[Tensorflow](https:\u002F\u002Fgithub.com\u002Flidongyue12138\u002FCriticalPathPruning)|\n|2018|ECCV|[Deep clustering for unsupervised learning of visual features](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_ECCV_2018\u002Fpapers\u002FMathilde_Caron_Deep_Clustering_for_ECCV_2018_paper.pdf)|2056|[Pytorch](https:\u002F\u002Fgithub.com\u002Fasanakoy\u002Fdeep_clustering)|\n|2018|ECCV|[Explainable neural computation via stack neural module networks](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_ECCV_2018\u002Fpapers\u002FRonghang_Hu_Explainable_Neural_Computation_ECCV_2018_paper.pdf)|164|[Tensorflow](https:\u002F\u002Fgithub.com\u002Fronghanghu\u002Fsnmn)|\n|2018|ECCV|[Grounding visual explanations](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_ECCV_2018\u002Fpapers\u002FLisa_Anne_Hendricks_Grounding_Visual_Explanations_ECCV_2018_paper.pdf)|184|\n|2018|ECCV|[Textual explanations for self-driving vehicles](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_ECCV_2018\u002Fpapers\u002FJinkyu_Kim_Textual_Explanations_for_ECCV_2018_paper.pdf)|196|\n|2018|ECCV|[Interpretable basis decomposition for visual explanation](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_ECCV_2018\u002Fpapers\u002FAntonio_Torralba_Interpretable_Basis_Decomposition_ECCV_2018_paper.pdf)|228|[Pytorch](https:\u002F\u002Fgithub.com\u002FCSAILVision\u002FIBD)|\n|2018|ECCV|[Convnets and imagenet beyond accuracy: Understanding mistakes and uncovering biases](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_ECCV_2018\u002Fpapers\u002FPierre_Stock_ConvNets_and_ImageNet_ECCV_2018_paper.pdf)|147|\n|2018|ECCV|[Vqa-e: Explaining, elaborating, and enhancing your answers for visual questions](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_ECCV_2018\u002Fpapers\u002FQing_Li_VQA-E_Explaining_Elaborating_ECCV_2018_paper.pdf)|71|\n|2018|ECCV|[Choose Your Neuron: Incorporating Domain Knowledge through Neuron-Importance](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_ECCV_2018\u002Fpapers\u002FRamprasaath_Ramasamy_Selvaraju_Choose_Your_Neuron_ECCV_2018_paper.pdf)|41|[Pytorch](https:\u002F\u002Fgithub.com\u002Framprs\u002Fneuron-importance-zsl)|\n|2018|ECCV|[Diverse feature visualizations reveal invariances in early layers of deep neural 
networks](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_ECCV_2018\u002Fpapers\u002FSantiago_Cadena_Diverse_feature_visualizations_ECCV_2018_paper.pdf)|23|[Tensorflow](https:\u002F\u002Fgithub.com\u002Fsacadena\u002Fdiverse_feature_vis)|\n|2018|ECCV|[ExplainGAN: Model Explanation via Decision Boundary Crossing Transformations](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_ECCV_2018\u002Fpapers\u002FNathan_Silberman_ExplainGAN_Model_Explanation_ECCV_2018_paper.pdf)|36|\n|2018|ICML|[Interpretability beyond feature attribution: Quantitative testing with concept activation vectors](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1711.11279.pdf)|1130|[Tensorflow](https:\u002F\u002Fgithub.com\u002Ffursovia\u002Ftcav_nlp)|\n|2018|ICML|[Learning to explain: An information-theoretic perspective on model interpretation](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1802.07814.pdf)|421|\n|2018|ACL|[Did the Model Understand the Question?](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1805.05492.pdf)|171|[Tensorflow](https:\u002F\u002Fgithub.com\u002Fpramodkaushik\u002Facl18_results)|\n|2018|FITEE|[Visual interpretability for deep learning: a survey](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1802.00614)|731|\n|2018|NeurIPS|[Sanity Checks for Saliency Maps](http:\u002F\u002Fpapers.nips.cc\u002Fpaper\u002F8160-sanity-checks-for-saliency-maps.pdf)|1353|\n|2018|NeurIPS|[Explanations based on the missing: Towards contrastive explanations with pertinent negatives](http:\u002F\u002Fpapers.nips.cc\u002Fpaper\u002F7340-explanations-based-on-the-missing-towards-contrastive-explanations-with-pertinent-negatives.pdf)|443|[Tensorflow](https:\u002F\u002Fgithub.com\u002FIBM\u002FContrastive-Explanation-Method)|\n|2018|NeurIPS|[Towards robust interpretability with self-explaining neural networks](http:\u002F\u002Fpapers.nips.cc\u002Fpaper\u002F8003-towards-robust-interpretability-with-self-explaining-neural-networks.pdf)|648|[Pytorch](https:\u002F\u002Fgithub.com\u002Fraj-shah\u002Fsenn)|\n|2018|NeurIPS|[Attacks meet interpretability: Attribute-steered detection of adversarial samples](https:\u002F\u002Fpapers.nips.cc\u002Fpaper\u002F7998-attacks-meet-interpretability-attribute-steered-detection-of-adversarial-samples.pdf)|142|\n|2018|NeurIPS|[DeepPINK: reproducible feature selection in deep neural networks](https:\u002F\u002Fpapers.nips.cc\u002Fpaper\u002F8085-deeppink-reproducible-feature-selection-in-deep-neural-networks.pdf)|125|[Keras](https:\u002F\u002Fgithub.com\u002Fyounglululu\u002FDeepPINK)|\n|2018|NeurIPS|[Representer point selection for explaining deep neural networks](https:\u002F\u002Fpapers.nips.cc\u002Fpaper\u002F8141-representer-point-selection-for-explaining-deep-neural-networks.pdf)|182|[Tensorflow](https:\u002F\u002Fgithub.com\u002Fchihkuanyeh\u002FRepresenter_Point_Selection)|\n|2018|NeurIPS Workshop|[Interpretable convolutional filters with sincNet](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1811.09725)|97|\n|2018|AAAI|[Anchors: High-precision model-agnostic explanations](https:\u002F\u002Fdm-gatech.github.io\u002FCS8803-Fall2018-DML-Papers\u002Fanchors.pdf)|1517|\n|2018|AAAI|[Improving the adversarial robustness and interpretability of deep neural networks by regularizing their input gradients](https:\u002F\u002Fasross.github.io\u002Fpublications\u002FRossDoshiVelez2018.pdf)|537|[Tensorflow](https:\u002F\u002Fgithub.com\u002Fdtak\u002Fadversarial-robustness-public)|\n|2018|AAAI|[Deep learning for case-based reasoning through prototypes: A neural network that explains its 
predictions](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1710.04806.pdf)|396|[Tensorflow](https:\u002F\u002Fgithub.com\u002FOscarcarLi\u002FPrototypeDL)|\n|2018|AAAI|[Interpreting CNN Knowledge via an Explanatory Graph](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1708.01785.pdf)|199|[Matlab](https:\u002F\u002Fgithub.com\u002Fzqs1022\u002FexplanatoryGraph)|\n|2018|AAAI|[Examining CNN Representations with respect to Dataset Bias](http:\u002F\u002Fwww.stat.ucla.edu\u002F~sczhu\u002Fpapers\u002FConf_2018\u002FAAAI_2018_DNN_Learning_Bias.pdf)|88|\n|2018|WACV|[Grad-cam++: Generalized gradient-based visual explanations for deep convolutional networks](https:\u002F\u002Fwww.researchgate.net\u002Fprofile\u002FAditya_Chattopadhyay2\u002Fpublication\u002F320727679_Grad-CAM_Generalized_Gradient-based_Visual_Explanations_for_Deep_Convolutional_Networks\u002Flinks\u002F5a3aa2e5a6fdcc3889bd04cb\u002FGrad-CAM-Generalized-Gradient-based-Visual-Explanations-for-Deep-Convolutional-Networks.pdf)|1459|\n|2018|IJCV|[Top-down neural attention by excitation backprop](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1608.00507)|778|\n|2018|TPAMI|[Interpreting deep visual representations via network dissection](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1711.05611)|252|\n|2018|DSP|[Methods for interpreting and understanding deep neural networks](http:\u002F\u002Fiphome.hhi.de\u002Fsamek\u002Fpdf\u002FMonDSP18.pdf)|2046|\n|2018|Access|[Peeking inside the black-box: A survey on Explainable Artificial Intelligence (XAI)](https:\u002F\u002Fieeexplore.ieee.org\u002Fstamp\u002Fstamp.jsp?arnumber=8466590)|3110|\n|2018|JAIR|[Learning Explanatory Rules from Noisy Data](https:\u002F\u002Fwww.ijcai.org\u002FProceedings\u002F2018\u002F0792.pdf)|440|[Tensorflow](https:\u002F\u002Fgithub.com\u002Fai-systems\u002FDILP-Core)|\n|2018|MIPRO|[Explainable artificial intelligence: A survey](https:\u002F\u002Fwww.researchgate.net\u002Fprofile\u002FMario_Brcic\u002Fpublication\u002F325398586_Explainable_Artificial_Intelligence_A_Survey\u002Flinks\u002F5b0bec90a6fdcc8c2534d673\u002FExplainable-Artificial-Intelligence-A-Survey.pdf)|794|\n|2018|BMVC|[Rise: Randomized input sampling for explanation of black-box models](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1806.07421.pdf)|657|\n|2018|arxiv|[Distill-and-Compare: Auditing Black-Box Models Using Transparent Model Distillation](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1710.06169.pdf)|194|\n|2018|arxiv|[Manipulating and measuring model interpretability](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1802.07810.pdf)|496|\n|2018|arxiv|[How convolutional neural network see the world-A survey of convolutional neural network visualization methods](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1804.11191.pdf)|211|\n|2018|arxiv|[Revisiting the importance of individual units in cnns via ablation](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1806.02891.pdf)|93|\n|2018|arxiv|[Computationally Efficient Measures of Internal Neuron Importance](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1807.09946.pdf)|10|\n|2017|ICML|[Understanding Black-box Predictions via Influence Functions](https:\u002F\u002Fdm-gatech.github.io\u002FCS8803-Fall2018-DML-Papers\u002Finfluence-functions.pdf)|2062|[Pytorch](https:\u002F\u002Fgithub.com\u002Fnimarb\u002Fpytorch_influence_functions)|\n|2017|ICML|[Axiomatic attribution for deep networks](https:\u002F\u002Fmit6874.github.io\u002Fassets\u002Fmisc\u002Fsundararajan.pdf)|3654|[Keras](https:\u002F\u002Fgithub.com\u002Fhiranumn\u002FIntegratedGradients)|\n|2017|ICML|[Learning Important Features Through 
Propagating Activation Differences](https:\u002F\u002Fmit6874.github.io\u002Fassets\u002Fmisc\u002Fshrikumar.pdf)|2835|\n|2017|ICLR|[Visualizing deep neural network decisions: Prediction difference analysis](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1702.04595.pdf)|674|[Caffe](https:\u002F\u002Fgithub.com\u002Flmzintgraf\u002FDeepVis-PredDiff)|\n|2017|ICLR|[Exploring LOTS in Deep Neural Networks](https:\u002F\u002Fopenreview.net\u002Fpdf?id=SkCILwqex)|34|\n|2017|NeurIPS|[A Unified Approach to Interpreting Model Predictions](http:\u002F\u002Fpapers.nips.cc\u002Fpaper\u002F7062-a-unified-approach-to-interpreting-model-predictions.pdf)|11511|\n|2017|NeurIPS|[Real time image saliency for black box classifiers](https:\u002F\u002Fpapers.nips.cc\u002Fpaper\u002F7272-real-time-image-saliency-for-black-box-classifiers.pdf)|483|[Pytorch](https:\u002F\u002Fgithub.com\u002Fkaranchahal\u002FSaliencyMapper)|\n|2017|NeurIPS|[SVCCA: Singular Vector Canonical Correlation Analysis for Deep Learning Dynamics and Interpretability](http:\u002F\u002Fpapers.nips.cc\u002Fpaper\u002F7188-svcca-singular-vector-canonical-correlation-analysis-for-deep-learning-dynamics-and-interpretability.pdf)|473|\n|2017|CVPR|[Mining Object Parts from CNNs via Active Question-Answering](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2017\u002Fpapers\u002FZhang_Mining_Object_Parts_CVPR_2017_paper.pdf)|29|\n|2017|CVPR|[Network dissection: Quantifying interpretability of deep visual representations](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2017\u002Fpapers\u002FBau_Network_Dissection_Quantifying_CVPR_2017_paper.pdf)|1254|\n|2017|CVPR|[Improving Interpretability of Deep Neural Networks with Semantic Information](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2017\u002Fpapers\u002FDong_Improving_Interpretability_of_CVPR_2017_paper.pdf)|118|\n|2017|CVPR|[MDNet: A Semantically and Visually Interpretable Medical Image Diagnosis Network](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2017\u002Fpapers\u002FZhang_MDNet_A_Semantically_CVPR_2017_paper.pdf)|307|[Torch](https:\u002F\u002Fgithub.com\u002Fzizhaozhang\u002Fmdnet-cvpr2017)|\n|2017|CVPR|[Making the V in VQA matter: Elevating the role of image understanding in Visual Question Answering](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2017\u002Fpapers\u002FGoyal_Making_the_v_CVPR_2017_paper.pdf)|1686|\n|2017|CVPR|[Knowing when to look: Adaptive attention via a visual sentinel for image captioning](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2017\u002Fpapers\u002FLu_Knowing_When_to_CVPR_2017_paper.pdf)|1392|[Torch](https:\u002F\u002Fgithub.com\u002Fjiasenlu\u002FAdaptiveAttention)|\n|2017|CVPRW|[Interpretable 3d human action analysis with temporal convolutional networks](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2017_workshops\u002Fw20\u002Fpapers\u002FKim_Interpretable_3D_Human_CVPR_2017_paper.pdf)|539|\n|2017|ICCV|[Grad-cam: Visual explanations from deep networks via gradient-based localization](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_ICCV_2017\u002Fpapers\u002FSelvaraju_Grad-CAM_Visual_Explanations_ICCV_2017_paper.pdf)|13006|[Pytorch](https:\u002F\u002Fgithub.com\u002Fleftthomas\u002FGradCAM)|\n|2017|ICCV|[Interpretable Explanations of Black Boxes by Meaningful 
Perturbation](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_ICCV_2017\u002Fpapers\u002FFong_Interpretable_Explanations_of_ICCV_2017_paper.pdf)|1293|[Pytorch](https:\u002F\u002Fgithub.com\u002Fjacobgil\u002Fpytorch-explain-black-box)|\n|2017|ICCV|[Interpretable Learning for Self-Driving Cars by Visualizing Causal Attention](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_ICCV_2017\u002Fpapers\u002FKim_Interpretable_Learning_for_ICCV_2017_paper.pdf)|323|\n|2017|ICCV|[Understanding and comparing deep neural networks for age and gender classification](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_ICCV_2017_workshops\u002Fpapers\u002Fw23\u002FLapuschkin_Understanding_and_Comparing_ICCV_2017_paper.pdf)|130|\n|2017|ICCV|[Learning to disambiguate by asking discriminative questions](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_ICCV_2017\u002Fpapers\u002FLi_Learning_to_Disambiguate_ICCV_2017_paper.pdf)|26|\n|2017|IJCAI|[Right for the right reasons: Training differentiable models by constraining their explanations](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1703.03717.pdf)|429|\n|2017|IJCAI|[Understanding and improving convolutional neural networks via concatenated rectified linear units](http:\u002F\u002Fwww.jmlr.org\u002Fproceedings\u002Fpapers\u002Fv48\u002Fshang16.pdf)|510|[Caffe](https:\u002F\u002Fgithub.com\u002Fchakkritte\u002FCReLU)|\n|2017|AAAI|[Growing Interpretable Part Graphs on ConvNets via Multi-Shot Learning](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1611.04246.pdf)|67|[Matlab](https:\u002F\u002Fgithub.com\u002Fzqs1022\u002FpartGraphForCNN)|\n|2017|ACL|[Visualizing and Understanding Neural Machine Translation](https:\u002F\u002Fwww.aclweb.org\u002Fanthology\u002FP17-1106.pdf)|179|\n|2017|EMNLP|[A causal framework for explaining the predictions of black-box sequence-to-sequence models](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1707.01943.pdf)|192|\n|2017|CVPR Workshop|[Looking under the hood: Deep neural network visualization to interpret whole-slide image analysis outcomes for colorectal polyps](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2017_workshops\u002Fw8\u002Fpapers\u002FKorbar_Looking_Under_the_CVPR_2017_paper.pdf)|47|\n|2017|survey|[Interpretability of deep learning models: a survey of results](https:\u002F\u002Fdiscovery.ucl.ac.uk\u002Fid\u002Feprint\u002F10059575\u002F1\u002FChakraborty_Interpretability%20of%20deep%20learning%20models.pdf)|345|\n|2017|arxiv|[SmoothGrad: removing noise by adding noise](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1706.03825.pdf)|1479|\n|2017|arxiv|[Interpretable & explorable approximations of black box models](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1707.01154.pdf)|259|\n|2017|arxiv|[Distilling a neural network into a soft decision tree](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1711.09784.pdf)|520|[Pytorch](https:\u002F\u002Fgithub.com\u002Fkimhc6028\u002Fsoft-decision-tree)|\n|2017|arxiv|[Towards interpretable deep neural networks by leveraging adversarial examples](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1708.05493.pdf)|111|\n|2017|arxiv|[Explainable artificial intelligence: Understanding, visualizing and interpreting deep learning models](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1708.08296.pdf)|1279|\n|2017|arxiv|[Contextual Explanation Networks](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1705.10301.pdf)|77|[Pytorch](https:\u002F\u002Fgithub.com\u002Falshedivat\u002Fcen)|\n|2017|arxiv|[Challenges for transparency](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1708.01870.pdf)|142|\n|2017|ACMSOPP|[Deepxplore: 
Automated whitebox testing of deep learning systems](https:\u002F\u002Fmachine-learning-and-security.github.io\u002Fpapers\u002Fmlsec17_paper_1.pdf)|1144|\n|2017|CEURW|[What does explainable AI really mean? A new conceptualization of perspectives](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1710.00794.pdf)|518|\n|2017|TVCG|[ActiVis: Visual Exploration of Industry-Scale Deep Neural Network Models](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1704.01942.pdf)|346|\n|2016|NeurIPS|[Synthesizing the preferred inputs for neurons in neural networks via deep generator networks](http:\u002F\u002Fpapers.nips.cc\u002Fpaper\u002F6519-synthesizing-the-preferred-inputs-for-neurons-in-neural-networks-via-deep-generator-networks.pdf)|659|[Caffe](https:\u002F\u002Fgithub.com\u002FEvolving-AI-Lab\u002Fsynthesizing)|\n|2016|NeurIPS|[Understanding the effective receptive field in deep convolutional neural networks](https:\u002F\u002Fpapers.nips.cc\u002Fpaper\u002F6203-understanding-the-effective-receptive-field-in-deep-convolutional-neural-networks.pdf)|1356|\n|2016|CVPR|[Inverting Visual Representations with Convolutional Networks](https:\u002F\u002Fwww.cv-foundation.org\u002Fopenaccess\u002Fcontent_cvpr_2016\u002Fpapers\u002FDosovitskiy_Inverting_Visual_Representations_CVPR_2016_paper.pdf)|626|\n|2016|CVPR|[Visualizing and Understanding Deep Texture Representations](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2016\u002Fpapers\u002FLin_Visualizing_and_Understanding_CVPR_2016_paper.pdf)|147|\n|2016|CVPR|[Analyzing Classifiers: Fisher Vectors and Deep Neural Networks](https:\u002F\u002Fwww.cv-foundation.org\u002Fopenaccess\u002Fcontent_cvpr_2016\u002Fpapers\u002FBach_Analyzing_Classifiers_Fisher_CVPR_2016_paper.pdf)|191|\n|2016|ECCV|[Generating Visual Explanations](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1603.08507)|613|[Caffe](https:\u002F\u002Fgithub.com\u002FLisaAnne\u002FECCV2016)|\n|2016|ECCV|[Design of kernels in convolutional neural networks for image classification](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1511.09231.pdf)|24|\n|2016|ICML|[Understanding and improving convolutional neural networks via concatenated rectified linear units](http:\u002F\u002Fwww.jmlr.org\u002Fproceedings\u002Fpapers\u002Fv48\u002Fshang16.pdf)|510|\n|2016|ICML|[Visualizing and comparing AlexNet and VGG using deconvolutional layers](https:\u002F\u002Ficmlviz.github.io\u002Ficmlviz2016\u002Fassets\u002Fpapers\u002F4.pdf)|126|\n|2016|EMNLP|[Rationalizing Neural Predictions](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1606.04155)|738|[Pytorch](https:\u002F\u002Fgithub.com\u002Fzhaopku\u002FRationale-Torch)|\n|2016|IJCV|[Visualizing deep convolutional neural networks using natural pre-images](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1512.02017)|508|[Matlab](https:\u002F\u002Fgithub.com\u002Faravindhm\u002Fnnpreimage)|\n|2016|IJCV|[Visualizing Object Detection Features](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1502.05461.pdf)|38|[Caffe](https:\u002F\u002Fgithub.com\u002Fcvondrick\u002Fihog)|\n|2016|KDD|[Why should i trust you?: Explaining the predictions of any classifier](https:\u002F\u002Fchu-data-lab.github.io\u002FCS8803Fall2018\u002FCS8803-Fall2018-DML-Papers\u002Flime.pdf)|11742|\n|2016|TVCG|[Visualizing the hidden activity of artificial neural 
networks](https:\u002F\u002Fwww.researchgate.net\u002Fprofile\u002FSamuel_Fadel\u002Fpublication\u002F306049229_Visualizing_the_Hidden_Activity_of_Artificial_Neural_Networks\u002Flinks\u002F5b13ffa7aca2723d9980083c\u002FVisualizing-the-Hidden-Activity-of-Artificial-Neural-Networks.pdf)|309|\n|2016|TVCG|[Towards better analysis of deep convolutional neural networks](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1604.07043.pdf)|474|\n|2016|NAACL|[Visualizing and understanding neural models in nlp](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1506.01066)|650|[Torch](https:\u002F\u002Fgithub.com\u002Fjiweil\u002FVisualizing-and-Understanding-Neural-Models-in-NLP)|\n|2016|arxiv|[Understanding neural networks through representation erasure](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1612.08220.pdf)|492|\n|2016|arxiv|[Grad-CAM: Why did you say that?](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1611.07450.pdf)|398|\n|2016|arxiv|[Investigating the influence of noise and distractors on the interpretation of neural networks](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1611.07270.pdf)|108|\n|2016|arxiv|[Attentive Explanations: Justifying Decisions and Pointing to the Evidence](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1612.04757)|88|\n|2016|arxiv|[The Mythos of Model Interpretability](http:\u002F\u002Fwww.zacklipton.com\u002Fmedia\u002Fpapers\u002Fmythos_model_interpretability_lipton2016.pdf)|3786|\n|2016|arxiv|[Multifaceted feature visualization: Uncovering the different types of features learned by each neuron in deep neural networks](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1602.03616)|317|\n|2015|ICLR|[Striving for Simplicity: The All Convolutional Net](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1412.6806.pdf)|4645|[Pytorch](https:\u002F\u002Fgithub.com\u002FStefOe\u002Fall-conv-pytorch)|\n|2015|CVPR|[Understanding deep image representations by inverting them](https:\u002F\u002Fwww.cv-foundation.org\u002Fopenaccess\u002Fcontent_cvpr_2015\u002Fpapers\u002FMahendran_Understanding_Deep_Image_2015_CVPR_paper.pdf)|1942|[Matlab](https:\u002F\u002Fgithub.com\u002Faravindhm\u002Fdeep-goggle)|\n|2015|ICCV|[Understanding deep features with computer-generated imagery](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_iccv_2015\u002Fpapers\u002FAubry_Understanding_Deep_Features_ICCV_2015_paper.pdf)|156|[Caffe](https:\u002F\u002Fgithub.com\u002Fmathieuaubry\u002Ffeatures_analysis)|\n|2015|ICML Workshop|[Understanding Neural Networks Through Deep Visualization](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1506.06579.pdf)|2038|[Tensorflow](https:\u002F\u002Fgithub.com\u002Fjiye-ML\u002FVisualizing-and-Understanding-Convolutional-Networks)|\n|2015|AAS|[Interpretable classifiers using rules and Bayesian analysis: Building a better stroke prediction model](https:\u002F\u002Fprojecteuclid.org\u002Fdownload\u002Fpdfview_1\u002Feuclid.aoas\u002F1446488742)|749|\n|2014|ECCV|[Visualizing and Understanding Convolutional Networks](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1311.2901.pdf)|18604|[Pytorch](https:\u002F\u002Fgithub.com\u002Fhuybery\u002FVisualizingCNN)|\n|2014|ICLR|[Deep Inside Convolutional Networks: Visualising Image Classification Models and Saliency Maps](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1312.6034.pdf)|6142|[Pytorch](https:\u002F\u002Fgithub.com\u002Fhuanghao-code\u002FVisCNN_ICLR_2014_Saliency)|\n|2013|ICCV|[Hoggles: Visualizing object detection 
features](https:\u002F\u002Fwww.cv-foundation.org\u002Fopenaccess\u002Fcontent_iccv_2013\u002Fpapers\u002FVondrick_HOGgles_Visualizing_Object_2013_ICCV_paper.pdf)|352|\n \n+ [ ] 论文talk","# awesome_deep_learning_interpretability 快速上手指南\n\n本项目是一个深度学习模型解释性（Interpretability）的论文合集，收录了近年来关于如何理解和解释黑盒模型的重要研究。它主要提供论文列表、PDF 下载链接及部分论文的官方代码实现，**本身不是一个可直接安装的 Python 库**。\n\n本指南将指导你如何获取资源并运行其中带有代码实现的经典论文示例。\n\n## 环境准备\n\n由于本项目包含多篇不同论文的代码，每篇论文的具体依赖可能略有不同。建议先搭建一个通用的深度学习基础环境，再根据具体选择的论文安装额外依赖。\n\n*   **操作系统**: Linux (推荐 Ubuntu 18.04+), macOS, 或 Windows (需 WSL2)\n*   **Python**: 3.7 或更高版本\n*   **深度学习框架**: PyTorch 或 TensorFlow (根据目标论文选择，目前 PyTorch 实现居多)\n*   **硬件要求**: 推荐配备 NVIDIA GPU 以加速推理和可视化过程\n\n### 前置依赖安装\n\n建议使用 `conda` 创建独立虚拟环境。以下是一个基于 PyTorch 的通用环境配置示例：\n\n```bash\n# 创建虚拟环境\nconda create -n xai_env python=3.8\nconda activate xai_env\n\n# 安装 PyTorch (推荐使用国内清华源加速)\npip install torch torchvision torchaudio -i https:\u002F\u002Fpypi.tuna.tsinghua.edu.cn\u002Fsimple\n\n# 安装通用数据处理与可视化库\npip install numpy pandas matplotlib seaborn scikit-learn jupyter -i https:\u002F\u002Fpypi.tuna.tsinghua.edu.cn\u002Fsimple\n```\n\n## 安装步骤\n\n由于这是一个论文列表仓库，没有统一的 `pip install` 命令。你需要针对具体的论文代码进行“安装”。\n\n### 1. 获取论文代码\n在 README 表格中找到你感兴趣的论文（例如 **Score-CAM** 或 **ProtoPNet**），点击 `code` 列中的 GitHub 链接。\n\n以 **Score-CAM** 为例：\n```bash\n# 克隆具体论文的代码仓库\ngit clone https:\u002F\u002Fgithub.com\u002Fhaofanwang\u002FScore-CAM.git\ncd Score-CAM\n```\n\n### 2. 安装该论文特定依赖\n进入代码目录后，通常会有 `requirements.txt` 文件。\n\n```bash\n# 安装该论文所需的特定依赖 (同样推荐使用国内源)\npip install -r requirements.txt -i https:\u002F\u002Fpypi.tuna.tsinghua.edu.cn\u002Fsimple\n```\n\n*注：如果某篇论文没有提供 `requirements.txt`，请查阅其 README 中的 \"Installation\" 章节手动安装缺失包。*\n\n## 基本使用\n\n以下以 **Score-CAM** (CVPRW 2020) 为例，展示如何运行一个简单的可视化示例。大多数解释性工具的使用流程相似：**加载预训练模型 -> 输入图像 -> 生成热力图**。\n\n### 1. 准备测试图像\n确保当前目录下有测试图片，或下载一张示例图片：\n```bash\nwget https:\u002F\u002Fraw.githubusercontent.com\u002Fhaofanwang\u002FScore-CAM\u002Fmaster\u002Fexamples\u002Fdog.jpg -O dog.jpg\n```\n\n### 2. 运行可视化脚本\n大多数仓库会提供 `demo.py` 或 `test.py`。执行以下命令生成解释性热力图：\n\n```bash\npython demo.py --image_path dog.jpg --model resnet50\n```\n\n*   `--image_path`: 输入图像路径。\n*   `--model`: 指定使用的预训练模型架构（如 resnet50, vgg16 等）。\n\n### 3. 
查看结果\n运行结束后，程序通常会保存一张叠加了热力图的图片（如 `result.jpg` 或在 `output\u002F` 文件夹中）。这张图高亮显示了模型在做决策时关注的图像区域，从而实现了对模型行为的“解释”。\n\n---\n**提示**：若你想探索其他论文（如 *ProtoPNet*, *ACE*, *TorchRay* 等），请重复上述“克隆仓库 -> 安装依赖 -> 运行 Demo”的步骤。所有论文 PDF 可通过项目提供的 [腾讯微云链接](https:\u002F\u002Fshare.weiyun.com\u002F5ddB0EQ) 批量下载阅读。若想了解这类热力图的生成原理，可参考文末附带的最小化 Grad-CAM 示意代码。","某医疗 AI 团队正在开发基于卷积神经网络的肺结节筛查系统，急需向医院专家证明模型判断依据的可靠性以通过伦理审查。\n\n### 没有 awesome_deep_learning_interpretability 时\n- **文献调研效率极低**：团队成员需手动在 arXiv、Google Scholar 等平台大海捞针，难以快速定位如 Score-CAM 或 ProtoPNet 等兼具高引用率与开源代码的顶会论文。\n- **复现门槛过高**：找到的论文往往缺乏官方代码实现，或代码版本过旧无法运行，导致算法验证周期从几天拖延至数周。\n- **解释方案单一且不可信**：因缺乏对比基准，团队仅能使用基础的热力图方法，无法评估其在数据分布偏移下的不确定性，难以回应医生对“假阳性”原因的质疑。\n- **合规风险大**：由于无法提供符合最新学术标准的细粒度视觉解释，项目面临无法通过医疗器械审批的风险。\n\n### 使用 awesome_deep_learning_interpretability 后\n- **精准锁定前沿方案**：直接利用按引用排序的列表，迅速锁定 CVPR 和 NeurIPS 上关于“细粒度视觉解释”和“不确定性评估”的 159 篇核心论文及对应 PyTorch\u002FTensorFlow 代码。\n- **加速算法落地验证**：依托仓库提供的现成代码链接（如 ACE 或 CXPlain），团队在两天内成功复现了多种解释算法，并快速集成到现有管线中。\n- **构建多维可信报告**：结合列表中关于“概念级解释”和“因果推断”的研究，生成了不仅展示“哪里有问题”，还能说明“为什么像结节”的深度报告，有效消除了医生疑虑。\n- **顺利通过伦理审查**：引用列表中高权重的社会学洞察论文作为理论支撑，使模型的可解释性论证达到了顶级学术会议标准，大幅提升了审批通过率。\n\nawesome_deep_learning_interpretability 将原本耗时数月的黑盒模型“白盒化”探索过程，压缩为以天为单位的高效技术攻关，成为连接深度学习性能与行业信任的关键桥梁。","https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FoneTaken_awesome_deep_learning_interpretability_9dfdfa82.png","oneTaken",null,"https:\u002F\u002Foss.gittoolsai.com\u002Favatars\u002FoneTaken_f29fc40e.jpg","Game Pattern Hard！","Megvii Research","Beijing","https:\u002F\u002Fgithub.com\u002FoneTaken",765,125,"2026-03-17T11:26:27","MIT",1,"","未说明",{"notes":91,"python":89,"dependencies":92},"该仓库是一个深度学习可解释性论文的汇总列表（Awesome List），并非一个可直接运行的软件工具或代码库。它主要提供论文链接、引用次数以及部分论文对应的独立代码库地址（涵盖 PyTorch, TensorFlow, Keras, Caffe, Chainer 等多种框架）。因此，没有统一的运行环境、依赖库或硬件需求。用户需根据列表中具体想复现的某篇论文，前往其对应的独立代码仓库查看具体的环境配置要求。",[],[26,14,13],[95,96,97,98,99,100,101,102,103,104,105,106,107,108,109,110,111,112,113,114],"interpretability","deep-learning","computer-vision","awesome","awesome-list","papers","pytorch","tensorflow","keras","torch","chainer","matlab","nlp","neural-network","cvpr","iccv","eccv","neurips","icml","iclr","2026-03-27T02:49:30.150509","2026-04-06T05:36:45.320471",[],[]]
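
附：下面是一段与上文快速上手指南中“加载预训练模型 -> 输入图像 -> 生成热力图”流程对应的 Grad-CAM 风格示意脚本。它只是一个便于理解原理的草图，并非本仓库或 Score-CAM 等任何论文的官方实现；其中 `dog.jpg`、`cam_result.png` 等文件名均为假设示例，依赖 torch 与 torchvision（`ResNet50_Weights` 需要 torchvision 0.13 及以上版本）。

```python
# 最小化的 Grad-CAM 风格示意脚本（仅作原理说明，非任何论文的官方实现）
import torch
import torch.nn.functional as F
from PIL import Image
from torchvision import models, transforms

# 1. 加载 ImageNet 预训练的 ResNet-50，并用前向钩子保留最后一个卷积块的特征图
model = models.resnet50(weights=models.ResNet50_Weights.DEFAULT).eval()
feats = {}
model.layer4[-1].register_forward_hook(lambda m, i, o: feats.update(value=o))

# 2. 读取并预处理输入图像（dog.jpg 为假设的本地测试图片，ImageNet 标准归一化）
preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])
x = preprocess(Image.open("dog.jpg").convert("RGB")).unsqueeze(0)

# 3. 前向传播，取预测类别的得分，对保留的特征图求梯度
scores = model(x)
class_idx = scores.argmax(dim=1).item()
grads = torch.autograd.grad(scores[0, class_idx], feats["value"])[0]

# 4. Grad-CAM：梯度做全局平均得到通道权重，加权求和并 ReLU，再上采样、归一化
weights = grads.mean(dim=(2, 3), keepdim=True)                     # [1, C, 1, 1]
cam = F.relu((weights * feats["value"]).sum(dim=1, keepdim=True))  # [1, 1, h, w]
cam = F.interpolate(cam, size=(224, 224), mode="bilinear", align_corners=False)
cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)

# 5. 保存灰度热力图；实际使用时通常再与原图叠加，观察模型关注的区域
transforms.ToPILImage()(cam.squeeze(0).detach()).save("cam_result.png")
print(f"predicted class index: {class_idx}, heatmap saved to cam_result.png")
```

实际复现 Score-CAM、ProtoPNet 等论文时，仍应以各自代码仓库中的实现为准；上面的脚本只是把“特征图 + 梯度加权”这一通用思路压缩成一个可直接运行的小例子。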