[{"data":1,"prerenderedAt":-1},["ShallowReactive",2],{"similar-shap--shap":3,"tool-shap--shap":64},[4,17,27,35,43,56],{"id":5,"name":6,"github_repo":7,"description_zh":8,"stars":9,"difficulty_score":10,"last_commit_at":11,"category_tags":12,"status":16},3808,"stable-diffusion-webui","AUTOMATIC1111\u002Fstable-diffusion-webui","stable-diffusion-webui 是一个基于 Gradio 构建的网页版操作界面，旨在让用户能够轻松地在本地运行和使用强大的 Stable Diffusion 图像生成模型。它解决了原始模型依赖命令行、操作门槛高且功能分散的痛点，将复杂的 AI 绘图流程整合进一个直观易用的图形化平台。\n\n无论是希望快速上手的普通创作者、需要精细控制画面细节的设计师，还是想要深入探索模型潜力的开发者与研究人员，都能从中获益。其核心亮点在于极高的功能丰富度：不仅支持文生图、图生图、局部重绘（Inpainting）和外绘（Outpainting）等基础模式，还独创了注意力机制调整、提示词矩阵、负向提示词以及“高清修复”等高级功能。此外，它内置了 GFPGAN 和 CodeFormer 等人脸修复工具，支持多种神经网络放大算法，并允许用户通过插件系统无限扩展能力。即使是显存有限的设备，stable-diffusion-webui 也提供了相应的优化选项，让高质量的 AI 艺术创作变得触手可及。",162132,3,"2026-04-05T11:01:52",[13,14,15],"开发框架","图像","Agent","ready",{"id":18,"name":19,"github_repo":20,"description_zh":21,"stars":22,"difficulty_score":23,"last_commit_at":24,"category_tags":25,"status":16},1381,"everything-claude-code","affaan-m\u002Feverything-claude-code","everything-claude-code 是一套专为 AI 编程助手（如 Claude Code、Codex、Cursor 等）打造的高性能优化系统。它不仅仅是一组配置文件，而是一个经过长期实战打磨的完整框架，旨在解决 AI 代理在实际开发中面临的效率低下、记忆丢失、安全隐患及缺乏持续学习能力等核心痛点。\n\n通过引入技能模块化、直觉增强、记忆持久化机制以及内置的安全扫描功能，everything-claude-code 能显著提升 AI 在复杂任务中的表现，帮助开发者构建更稳定、更智能的生产级 AI 代理。其独特的“研究优先”开发理念和针对 Token 消耗的优化策略，使得模型响应更快、成本更低，同时有效防御潜在的攻击向量。\n\n这套工具特别适合软件开发者、AI 研究人员以及希望深度定制 AI 工作流的技术团队使用。无论您是在构建大型代码库，还是需要 AI 协助进行安全审计与自动化测试，everything-claude-code 都能提供强大的底层支持。作为一个曾荣获 Anthropic 黑客大奖的开源项目，它融合了多语言支持与丰富的实战钩子（hooks），让 AI 真正成长为懂上",140436,2,"2026-04-05T23:32:43",[13,15,26],"语言模型",{"id":28,"name":29,"github_repo":30,"description_zh":31,"stars":32,"difficulty_score":23,"last_commit_at":33,"category_tags":34,"status":16},2271,"ComfyUI","Comfy-Org\u002FComfyUI","ComfyUI 是一款功能强大且高度模块化的视觉 AI 引擎，专为设计和执行复杂的 Stable Diffusion 图像生成流程而打造。它摒弃了传统的代码编写模式，采用直观的节点式流程图界面，让用户通过连接不同的功能模块即可构建个性化的生成管线。\n\n这一设计巧妙解决了高级 AI 绘图工作流配置复杂、灵活性不足的痛点。用户无需具备编程背景，也能自由组合模型、调整参数并实时预览效果，轻松实现从基础文生图到多步骤高清修复等各类复杂任务。ComfyUI 拥有极佳的兼容性，不仅支持 Windows、macOS 和 Linux 全平台，还广泛适配 NVIDIA、AMD、Intel 及苹果 Silicon 等多种硬件架构，并率先支持 SDXL、Flux、SD3 等前沿模型。\n\n无论是希望深入探索算法潜力的研究人员和开发者，还是追求极致创作自由度的设计师与资深 AI 绘画爱好者，ComfyUI 都能提供强大的支持。其独特的模块化架构允许社区不断扩展新功能，使其成为当前最灵活、生态最丰富的开源扩散模型工具之一，帮助用户将创意高效转化为现实。",107662,"2026-04-03T11:11:01",[13,14,15],{"id":36,"name":37,"github_repo":38,"description_zh":39,"stars":40,"difficulty_score":23,"last_commit_at":41,"category_tags":42,"status":16},3704,"NextChat","ChatGPTNextWeb\u002FNextChat","NextChat 是一款轻量且极速的 AI 助手，旨在为用户提供流畅、跨平台的大模型交互体验。它完美解决了用户在多设备间切换时难以保持对话连续性，以及面对众多 AI 模型不知如何统一管理的痛点。无论是日常办公、学习辅助还是创意激发，NextChat 都能让用户随时随地通过网页、iOS、Android、Windows、MacOS 或 Linux 端无缝接入智能服务。\n\n这款工具非常适合普通用户、学生、职场人士以及需要私有化部署的企业团队使用。对于开发者而言，它也提供了便捷的自托管方案，支持一键部署到 Vercel 或 Zeabur 等平台。\n\nNextChat 的核心亮点在于其广泛的模型兼容性，原生支持 Claude、DeepSeek、GPT-4 及 Gemini Pro 等主流大模型，让用户在一个界面即可自由切换不同 AI 能力。此外，它还率先支持 MCP（Model Context Protocol）协议，增强了上下文处理能力。针对企业用户，NextChat 提供专业版解决方案，具备品牌定制、细粒度权限控制、内部知识库整合及安全审计等功能，满足公司对数据隐私和个性化管理的高标准要求。",87618,"2026-04-05T07:20:52",[13,26],{"id":44,"name":45,"github_repo":46,"description_zh":47,"stars":48,"difficulty_score":23,"last_commit_at":49,"category_tags":50,"status":16},2268,"ML-For-Beginners","microsoft\u002FML-For-Beginners","ML-For-Beginners 是由微软推出的一套系统化机器学习入门课程，旨在帮助零基础用户轻松掌握经典机器学习知识。这套课程将学习路径规划为 12 周，包含 26 节精炼课程和 52 道配套测验，内容涵盖从基础概念到实际应用的完整流程，有效解决了初学者面对庞大知识体系时无从下手、缺乏结构化指导的痛点。\n\n无论是希望转型的开发者、需要补充算法背景的研究人员，还是对人工智能充满好奇的普通爱好者，都能从中受益。课程不仅提供了清晰的理论讲解，还强调动手实践，让用户在循序渐进中建立扎实的技能基础。其独特的亮点在于强大的多语言支持，通过自动化机制提供了包括简体中文在内的 50 
多种语言版本，极大地降低了全球不同背景用户的学习门槛。此外，项目采用开源协作模式，社区活跃且内容持续更新，确保学习者能获取前沿且准确的技术资讯。如果你正寻找一条清晰、友好且专业的机器学习入门之路，ML-For-Beginners 将是理想的起点。",84991,"2026-04-05T10:45:23",[14,51,52,53,15,54,26,13,55],"数据工具","视频","插件","其他","音频",{"id":57,"name":58,"github_repo":59,"description_zh":60,"stars":61,"difficulty_score":10,"last_commit_at":62,"category_tags":63,"status":16},3128,"ragflow","infiniflow\u002Fragflow","RAGFlow 是一款领先的开源检索增强生成（RAG）引擎，旨在为大语言模型构建更精准、可靠的上下文层。它巧妙地将前沿的 RAG 技术与智能体（Agent）能力相结合，不仅支持从各类文档中高效提取知识，还能让模型基于这些知识进行逻辑推理和任务执行。\n\n在大模型应用中，幻觉问题和知识滞后是常见痛点。RAGFlow 通过深度解析复杂文档结构（如表格、图表及混合排版），显著提升了信息检索的准确度，从而有效减少模型“胡编乱造”的现象，确保回答既有据可依又具备时效性。其内置的智能体机制更进一步，使系统不仅能回答问题，还能自主规划步骤解决复杂问题。\n\n这款工具特别适合开发者、企业技术团队以及 AI 研究人员使用。无论是希望快速搭建私有知识库问答系统，还是致力于探索大模型在垂直领域落地的创新者，都能从中受益。RAGFlow 提供了可视化的工作流编排界面和灵活的 API 接口，既降低了非算法背景用户的上手门槛，也满足了专业开发者对系统深度定制的需求。作为基于 Apache 2.0 协议开源的项目，它正成为连接通用大模型与行业专有知识之间的重要桥梁。",77062,"2026-04-04T04:44:48",[15,14,13,26,54],{"id":65,"github_repo":66,"name":67,"description_en":68,"description_zh":69,"ai_summary_zh":69,"readme_en":70,"readme_zh":71,"quickstart_zh":72,"use_case_zh":73,"hero_image_url":74,"owner_login":67,"owner_name":67,"owner_avatar_url":75,"owner_bio":76,"owner_company":76,"owner_location":76,"owner_email":76,"owner_twitter":76,"owner_website":76,"owner_url":77,"languages":78,"stars":110,"forks":111,"last_commit_at":112,"license":113,"difficulty_score":114,"env_os":115,"env_gpu":116,"env_ram":117,"env_deps":118,"category_tags":132,"github_topics":133,"view_count":23,"oss_zip_url":76,"oss_zip_packed_at":76,"status":16,"created_at":140,"updated_at":141,"faqs":142,"releases":172},3109,"shap\u002Fshap","shap","A game theoretic approach to explain the output of any machine learning model.","SHAP（SHapley Additive exPlanations）是一款基于博弈论的开源工具，旨在为任何机器学习模型的预测结果提供清晰、可信的解释。在人工智能应用中，模型往往像“黑盒”一样难以理解，SHAP 有效解决了这一痛点，它能精准量化每个特征对最终预测结果的贡献度，回答“模型为何做出此判断”的关键问题。\n\n无论是数据科学家、算法工程师还是学术研究人员，都能利用 SHAP 深入洞察模型行为，从而提升模型的可信度、辅助调试优化或满足合规性要求。其核心亮点在于将博弈论中的经典\"Shapley 值”引入机器学习领域，从数学上保证了特征归因的一致性与局部准确性。此外，SHAP 针对 XGBoost、LightGBM、CatBoost 等主流树模型提供了高速精确算法，并支持丰富的可视化图表（如瀑布图、力导向图），让用户能直观地看到各个特征是如何推动预测值偏离基准线的。通过简洁的 Python 接口，SHAP 让复杂的模型解释变得触手可及，是连接高精度模型与人类理解之间的重要桥梁。","\n\n\u003Cp align=\"center\">\n  \u003Cimg src=\"https:\u002F\u002Fraw.githubusercontent.com\u002Fshap\u002Fshap\u002Fmaster\u002Fdocs\u002Fartwork\u002Fshap_header.svg\" width=\"800\" \u002F>\n\u003C\u002Fp>\n\n---\n[![PyPI](https:\u002F\u002Fimg.shields.io\u002Fpypi\u002Fv\u002Fshap)](https:\u002F\u002Fpypi.org\u002Fproject\u002Fshap\u002F)\n[![Conda](https:\u002F\u002Fimg.shields.io\u002Fconda\u002Fvn\u002Fconda-forge\u002Fshap)](https:\u002F\u002Fanaconda.org\u002Fconda-forge\u002Fshap)\n![License](https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Flicense\u002Fshap\u002Fshap)\n![Tests](https:\u002F\u002Fgithub.com\u002Fshap\u002Fshap\u002Factions\u002Fworkflows\u002Frun_tests.yml\u002Fbadge.svg)\n[![Binder](https:\u002F\u002Fmybinder.org\u002Fbadge_logo.svg)](https:\u002F\u002Fmybinder.org\u002Fv2\u002Fgh\u002Fshap\u002Fshap\u002Fmaster)\n[![Documentation Status](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fshap_shap_readme_13d664e1afd7.png)](https:\u002F\u002Fshap.readthedocs.io\u002Fen\u002Flatest\u002F?badge=latest)\n![Downloads](https:\u002F\u002Fimg.shields.io\u002Fpypi\u002Fdm\u002Fshap)\n[![PyPI pyversions](https:\u002F\u002Fimg.shields.io\u002Fpypi\u002Fpyversions\u002Fshap)](https:\u002F\u002Fpypi.org\u002Fpypi\u002Fshap\u002F)\n\n\n**SHAP (SHapley Additive exPlanations)** is a game theoretic approach to explain the output of any machine learning 
model. It connects optimal credit allocation with local explanations using the classic Shapley values from game theory and their related extensions (see [papers](#citations) for details and citations).\n\n\u003C!--**SHAP (SHapley Additive exPlanations)** is a unified approach to explain the output of any machine learning model. SHAP connects game theory with local explanations, uniting several previous methods [1-7] and representing the only possible consistent and locally accurate additive feature attribution method based on expectations (see our [papers](#citations) for details and citations).-->\n\n\n\n## Install\n\nSHAP can be installed from either [PyPI](https:\u002F\u002Fpypi.org\u002Fproject\u002Fshap) or [conda-forge](https:\u002F\u002Fanaconda.org\u002Fconda-forge\u002Fshap):\n\n\u003Cpre>\npip install shap\n\u003Ci>or\u003C\u002Fi>\nconda install -c conda-forge shap\n\u003C\u002Fpre>\n\n## Supported versions\n\nSHAP follows [SPEC 0](https:\u002F\u002Fscientific-python.org\u002Fspecs\u002Fspec-0000\u002F) for minimum supported dependency versions. We test against the versions specified there and may not fix bugs for older versions.\n\n## Contributing\n\nWe welcome contributions highly. Feel free to file an issue. Before opening a PR make sure you've read our [CONTRIBUTING.md](CONTRIBUTING.md) guideline.\n\n## Tree ensemble example (XGBoost\u002FLightGBM\u002FCatBoost\u002Fscikit-learn\u002Fpyspark models)\n\nWhile SHAP can explain the output of any machine learning model, we have developed a high-speed exact algorithm for tree ensemble methods (see our [Nature MI paper](https:\u002F\u002Frdcu.be\u002Fb0z70)). Fast C++ implementations are supported for *XGBoost*, *LightGBM*, *CatBoost*, *scikit-learn* and *pyspark* tree models:\n\n```python\nimport xgboost\nimport shap\n\n# train an XGBoost model\nX, y = shap.datasets.california()\nmodel = xgboost.XGBRegressor().fit(X, y)\n\n# explain the model's predictions using SHAP\n# (same syntax works for LightGBM, CatBoost, scikit-learn, transformers, Spark, etc.)\nexplainer = shap.Explainer(model)\nshap_values = explainer(X)\n\n# visualize the first prediction's explanation\nshap.plots.waterfall(shap_values[0])\n```\n\n\u003Cp align=\"center\">\n  \u003Cimg width=\"616\" src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fshap_shap_readme_46210954404b.png\" \u002F>\n\u003C\u002Fp>\n\nThe above explanation shows features each contributing to push the model output from the base value (the average model output over the training dataset we passed) to the model output. Features pushing the prediction higher are shown in red, those pushing the prediction lower are in blue. 
Another way to visualize the same explanation is to use a force plot (these are introduced in our [Nature BME paper](https:\u002F\u002Frdcu.be\u002FbaVbR)):\n\n```python\n# visualize the first prediction's explanation with a force plot\nshap.plots.force(shap_values[0])\n```\n\n\u003Cp align=\"center\">\n  \u003Cimg width=\"811\" src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fshap_shap_readme_95b1975c1057.png\" \u002F>\n\u003C\u002Fp>\n\nIf we take many force plot explanations such as the one shown above, rotate them 90 degrees, and then stack them horizontally, we can see explanations for an entire dataset (in the notebook this plot is interactive):\n\n```python\n# visualize all the training set predictions\nshap.plots.force(shap_values[:500])\n```\n\n\u003Cp align=\"center\">\n  \u003Cimg width=\"811\" src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fshap_shap_readme_bc0b72e5fe4d.png\" \u002F>\n\u003C\u002Fp>\n\nTo understand how a single feature affects the output of the model, we can plot the SHAP value of that feature vs. the value of the feature for all the examples in a dataset. Since SHAP values represent a feature's responsibility for a change in the model output, the plot below represents the change in predicted house price as the latitude changes. Vertical dispersion at a single value of latitude represents interaction effects with other features. To help reveal these interactions we can color by another feature. If we pass the whole explanation tensor to the `color` argument, the scatter plot will pick the best feature to color by. In this case it picks longitude.\n\n```python\n# create a dependence scatter plot to show the effect of a single feature across the whole dataset\nshap.plots.scatter(shap_values[:, \"Latitude\"], color=shap_values)\n```\n\n\u003Cp align=\"center\">\n  \u003Cimg width=\"544\" src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fshap_shap_readme_a28f4348b3d5.png\" \u002F>\n\u003C\u002Fp>\n\n\nTo get an overview of which features are most important for a model, we can plot the SHAP values of every feature for every sample. The plot below sorts features by the sum of SHAP value magnitudes over all samples, and uses SHAP values to show the distribution of the impacts each feature has on the model output. The color represents the feature value (red high, blue low). This reveals, for example, that higher median incomes increase the predicted home price.\n\n```python\n# summarize the effects of all the features\nshap.plots.beeswarm(shap_values)\n```\n\n\u003Cp align=\"center\">\n  \u003Cimg width=\"583\" src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fshap_shap_readme_ebb0c294910e.png\" \u002F>\n\u003C\u002Fp>\n\nWe can also just take the mean absolute value of the SHAP values for each feature to get a standard bar plot (produces stacked bars for multi-class outputs):\n\n```python\nshap.plots.bar(shap_values)\n```\n\n\u003Cp align=\"center\">\n  \u003Cimg width=\"570\" src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fshap_shap_readme_8f7a2e3f0df8.png\" \u002F>\n\u003C\u002Fp>\n\n## Natural language example (transformers)\n\nSHAP has specific support for natural language models like those in the Hugging Face transformers library. By adding coalitional rules to traditional Shapley values we can form games that explain large modern NLP models using very few function evaluations. 
Using this functionality is as simple as passing a supported transformers pipeline to SHAP:\n\n```python\nimport transformers\nimport shap\n\n# load a transformers pipeline model\nmodel = transformers.pipeline('sentiment-analysis', return_all_scores=True)\n\n# explain the model on two sample inputs\nexplainer = shap.Explainer(model)\nshap_values = explainer([\"What a great movie! ...if you have no taste.\"])\n\n# visualize the first prediction's explanation for the POSITIVE output class\nshap.plots.text(shap_values[0, :, \"POSITIVE\"])\n```\n\n\u003Cp align=\"center\">\n  \u003Cimg width=\"811\" src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fshap_shap_readme_f66127e6de16.png\" \u002F>\n\u003C\u002Fp>\n\n## Deep learning example with DeepExplainer (TensorFlow\u002FKeras models)\n\nDeep SHAP is a high-speed approximation algorithm for SHAP values in deep learning models that builds on a connection with [DeepLIFT](https:\u002F\u002Farxiv.org\u002Fabs\u002F1704.02685) described in the SHAP NIPS paper. The implementation here differs from the original DeepLIFT by using a distribution of background samples instead of a single reference value, and using Shapley equations to linearize components such as max, softmax, products, divisions, etc. Note that some of these enhancements have also been since integrated into DeepLIFT. TensorFlow models and Keras models using the TensorFlow backend are supported (there is also preliminary support for PyTorch):\n\n```python\n# ...include code from https:\u002F\u002Fgithub.com\u002Fkeras-team\u002Fkeras\u002Fblob\u002Fmaster\u002Fexamples\u002Fdemo_mnist_convnet.py\n\nimport shap\nimport numpy as np\n\n# select a set of background examples to take an expectation over\nbackground = x_train[np.random.choice(x_train.shape[0], 100, replace=False)]\n\n# explain predictions of the model on four images\ne = shap.DeepExplainer(model, background)\n# ...or pass tensors directly\n# e = shap.DeepExplainer((model.layers[0].input, model.layers[-1].output), background)\nshap_values = e.shap_values(x_test[1:5])\n\n# plot the feature attributions\nshap.image_plot(shap_values, -x_test[1:5])\n```\n\n\u003Cp align=\"center\">\n  \u003Cimg width=\"820\" src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fshap_shap_readme_0f1d7473ecf4.png\" \u002F>\n\u003C\u002Fp>\n\nThe plot above explains ten outputs (digits 0-9) for four different images. Red pixels increase the model's output while blue pixels decrease the output. The input images are shown on the left, and as nearly transparent grayscale backings behind each of the explanations. The sum of the SHAP values equals the difference between the expected model output (averaged over the background dataset) and the current model output. Note that for the 'zero' image the blank middle is important, while for the 'four' image the lack of a connection on top makes it a four instead of a nine.\n\n\n## Deep learning example with GradientExplainer (TensorFlow\u002FKeras\u002FPyTorch models)\n\nExpected gradients combines ideas from [Integrated Gradients](https:\u002F\u002Farxiv.org\u002Fabs\u002F1703.01365), SHAP, and [SmoothGrad](https:\u002F\u002Farxiv.org\u002Fabs\u002F1706.03825) into a single expected value equation. This allows an entire dataset to be used as the background distribution (as opposed to a single reference value) and allows local smoothing. 
If we approximate the model with a linear function between each background data sample and the current input to be explained, and we assume the input features are independent, then expected gradients will compute approximate SHAP values. In the example below we have explained how the 7th intermediate layer of the VGG16 ImageNet model impacts the output probabilities.\n\n```python\nfrom keras.applications.vgg16 import VGG16\nfrom keras.applications.vgg16 import preprocess_input\nimport keras.backend as K\nimport numpy as np\nimport json\nimport shap\n\n# load pre-trained model and choose two images to explain\nmodel = VGG16(weights='imagenet', include_top=True)\nX,y = shap.datasets.imagenet50()\nto_explain = X[[39,41]]\n\n# load the ImageNet class names\nurl = \"https:\u002F\u002Fs3.amazonaws.com\u002Fdeep-learning-models\u002Fimage-models\u002Fimagenet_class_index.json\"\nfname = shap.datasets.cache(url)\nwith open(fname) as f:\n    class_names = json.load(f)\n\n# explain how the input to the 7th layer of the model explains the top two classes\ndef map2layer(x, layer):\n    feed_dict = dict(zip([model.layers[0].input], [preprocess_input(x.copy())]))\n    return K.get_session().run(model.layers[layer].input, feed_dict)\ne = shap.GradientExplainer(\n    (model.layers[7].input, model.layers[-1].output),\n    map2layer(X, 7),\n    local_smoothing=0 # std dev of smoothing noise\n)\nshap_values,indexes = e.shap_values(map2layer(to_explain, 7), ranked_outputs=2)\n\n# get the names for the classes\nindex_names = np.vectorize(lambda x: class_names[str(x)][1])(indexes)\n\n# plot the explanations\nshap.image_plot(shap_values, to_explain, index_names)\n```\n\n\u003Cp align=\"center\">\n  \u003Cimg width=\"500\" src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fshap_shap_readme_3a75d9611c6d.png\" \u002F>\n\u003C\u002Fp>\n\nPredictions for two input images are explained in the plot above. Red pixels represent positive SHAP values that increase the probability of the class, while blue pixels represent negative SHAP values that reduce the probability of the class. By using `ranked_outputs=2` we explain only the two most likely classes for each input (this spares us from explaining all 1,000 classes).\n\n## Model agnostic example with KernelExplainer (explains any function)\n\nKernel SHAP uses a specially-weighted local linear regression to estimate SHAP values for any model. 
Below is a simple example for explaining a multi-class SVM on the classic iris dataset.\n\n```python\nimport sklearn\nimport shap\nfrom sklearn.model_selection import train_test_split\n\n# print the JS visualization code to the notebook\nshap.initjs()\n\n# train a SVM classifier\nX_train,X_test,Y_train,Y_test = train_test_split(*shap.datasets.iris(), test_size=0.2, random_state=0)\nsvm = sklearn.svm.SVC(kernel='rbf', probability=True)\nsvm.fit(X_train, Y_train)\n\n# use Kernel SHAP to explain test set predictions\nexplainer = shap.KernelExplainer(svm.predict_proba, X_train, link=\"logit\")\nshap_values = explainer.shap_values(X_test, nsamples=100)\n\n# plot the SHAP values for the Setosa output of the first instance\nshap.force_plot(explainer.expected_value[0], shap_values[0][0,:], X_test.iloc[0,:], link=\"logit\")\n```\n\u003Cp align=\"center\">\n  \u003Cimg width=\"810\" src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fshap_shap_readme_f9112fc41a8f.png\" \u002F>\n\u003C\u002Fp>\n\nThe above explanation shows four features each contributing to push the model output from the base value (the average model output over the training dataset we passed) towards zero. If there were any features pushing the class label higher they would be shown in red.\n\nIf we take many explanations such as the one shown above, rotate them 90 degrees, and then stack them horizontally, we can see explanations for an entire dataset. This is exactly what we do below for all the examples in the iris test set:\n\n```python\n# plot the SHAP values for the Setosa output of all instances\nshap.force_plot(explainer.expected_value[0], shap_values[0], X_test, link=\"logit\")\n```\n\u003Cp align=\"center\">\n  \u003Cimg width=\"813\" src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fshap_shap_readme_f50d90884d66.png\" \u002F>\n\u003C\u002Fp>\n\n## SHAP Interaction Values\n\nSHAP interaction values are a generalization of SHAP values to higher order interactions. Fast exact computation of pairwise interactions are implemented for tree models with `shap.TreeExplainer(model).shap_interaction_values(X)`. This returns a matrix for every prediction, where the main effects are on the diagonal and the interaction effects are off-diagonal. These values often reveal interesting hidden relationships, such as how the increased risk of death peaks for men at age 60 (see the NHANES notebook for details):\n\n\u003Cp align=\"center\">\n  \u003Cimg width=\"483\" src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fshap_shap_readme_4762fbdf8122.png\" \u002F>\n\u003C\u002Fp>\n\n## Sample notebooks\n\nThe notebooks below demonstrate different use cases for SHAP. 
Look inside the notebooks directory of the repository if you want to try playing with the original notebooks yourself.\n\n### TreeExplainer\n\nAn implementation of Tree SHAP, a fast and exact algorithm to compute SHAP values for trees and ensembles of trees.\n\n- [**NHANES survival model with XGBoost and SHAP interaction values**](https:\u002F\u002Fshap.github.io\u002Fshap\u002Fnotebooks\u002FNHANES%20I%20Survival%20Model.html) - Using mortality data from 20 years of followup this notebook demonstrates how to use XGBoost and `shap` to uncover complex risk factor relationships.\n\n- [**Census income classification with LightGBM**](https:\u002F\u002Fshap.github.io\u002Fshap\u002Fnotebooks\u002Ftree_explainer\u002FCensus%20income%20classification%20with%20LightGBM.html) - Using the standard adult census income dataset, this notebook trains a gradient boosting tree model with LightGBM and then explains predictions using `shap`.\n\n- [**League of Legends Win Prediction with XGBoost**](https:\u002F\u002Fshap.github.io\u002Fshap\u002Fnotebooks\u002FLeague%20of%20Legends%20Win%20Prediction%20with%20XGBoost.html) - Using a Kaggle dataset of 180,000 ranked matches from League of Legends we train and explain a gradient boosting tree model with XGBoost to predict if a player will win their match.\n\n### DeepExplainer\n\nAn implementation of Deep SHAP, a faster (but only approximate) algorithm to compute SHAP values for deep learning models that is based on connections between SHAP and the DeepLIFT algorithm.\n\n- [**MNIST Digit classification with Keras**](https:\u002F\u002Fshap.github.io\u002Fshap\u002Fnotebooks\u002Fdeep_explainer\u002FFront%20Page%20DeepExplainer%20MNIST%20Example.html) - Using the MNIST handwriting recognition dataset, this notebook trains a neural network with Keras and then explains predictions using `shap`.\n\n- [**Keras LSTM for IMDB Sentiment Classification**](https:\u002F\u002Fshap.github.io\u002Fshap\u002Fnotebooks\u002Fdeep_explainer\u002FKeras%20LSTM%20for%20IMDB%20Sentiment%20Classification.html) - This notebook trains an LSTM with Keras on the IMDB text sentiment analysis dataset and then explains predictions using `shap`.\n\n### GradientExplainer\n\nAn implementation of expected gradients to approximate SHAP values for deep learning models. It is based on connections between SHAP and the Integrated Gradients algorithm. GradientExplainer is slower than DeepExplainer and makes different approximation assumptions.\n\n- [**Explain an Intermediate Layer of VGG16 on ImageNet**](https:\u002F\u002Fshap.github.io\u002Fshap\u002Fnotebooks\u002Fgradient_explainer\u002FExplain%20an%20Intermediate%20Layer%20of%20VGG16%20on%20ImageNet.html) - This notebook demonstrates how to explain the output of a pre-trained VGG16 ImageNet model using an internal convolutional layer.\n\n### LinearExplainer\n\nFor a linear model with independent features we can analytically compute the exact SHAP values. We can also account for feature correlation if we are willing to estimate the feature covariance matrix. LinearExplainer supports both of these options.\n\n- [**Sentiment Analysis with Logistic Regression**](https:\u002F\u002Fshap.github.io\u002Fshap\u002Fnotebooks\u002Flinear_explainer\u002FSentiment%20Analysis%20with%20Logistic%20Regression.html) - This notebook demonstrates how to explain a linear logistic regression sentiment analysis model.\n\n### KernelExplainer\n\nAn implementation of Kernel SHAP, a model agnostic method to estimate SHAP values for any model. 
Because it makes no assumptions about the model type, KernelExplainer is slower than the other model type specific algorithms.\n\n- [**Census income classification with scikit-learn**](https:\u002F\u002Fshap.github.io\u002Fshap\u002Fnotebooks\u002FCensus%20income%20classification%20with%20scikit-learn.html) - Using the standard adult census income dataset, this notebook trains a k-nearest neighbors classifier using scikit-learn and then explains predictions using `shap`.\n\n- [**ImageNet VGG16 Model with Keras**](https:\u002F\u002Fshap.github.io\u002Fshap\u002Fnotebooks\u002FImageNet%20VGG16%20Model%20with%20Keras.html) - Explain the classic VGG16 convolutional neural network's predictions for an image. This works by applying the model agnostic Kernel SHAP method to a super-pixel segmented image.\n\n- [**Iris classification**](https:\u002F\u002Fshap.github.io\u002Fshap\u002Fnotebooks\u002FIris%20classification%20with%20scikit-learn.html) - A basic demonstration using the popular iris species dataset. It explains predictions from six different models in scikit-learn using `shap`.\n\n## Documentation notebooks\n\nThese notebooks comprehensively demonstrate how to use specific functions and objects.\n\n- [`shap.decision_plot` and `shap.multioutput_decision_plot`](https:\u002F\u002Fshap.github.io\u002Fshap\u002Fnotebooks\u002Fplots\u002Fdecision_plot.html)\n\n- [`shap.dependence_plot`](https:\u002F\u002Fshap.github.io\u002Fshap\u002Fnotebooks\u002Fplots\u002Fdependence_plot.html)\n\n## Methods Unified by SHAP\n\n1. *LIME:* Ribeiro, Marco Tulio, Sameer Singh, and Carlos Guestrin. \"Why should i trust you?: Explaining the predictions of any classifier.\" Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. ACM, 2016.\n\n2. *Shapley sampling values:* Strumbelj, Erik, and Igor Kononenko. \"Explaining prediction models and individual predictions with feature contributions.\" Knowledge and information systems 41.3 (2014): 647-665.\n\n3. *DeepLIFT:* Shrikumar, Avanti, Peyton Greenside, and Anshul Kundaje. \"Learning important features through propagating activation differences.\" arXiv preprint arXiv:1704.02685 (2017).\n\n4. *QII:* Datta, Anupam, Shayak Sen, and Yair Zick. \"Algorithmic transparency via quantitative input influence: Theory and experiments with learning systems.\" Security and Privacy (SP), 2016 IEEE Symposium on. IEEE, 2016.\n\n5. *Layer-wise relevance propagation:* Bach, Sebastian, et al. \"On pixel-wise explanations for non-linear classifier decisions by layer-wise relevance propagation.\" PloS one 10.7 (2015): e0130140.\n\n6. *Shapley regression values:* Lipovetsky, Stan, and Michael Conklin. \"Analysis of regression in game theory approach.\" Applied Stochastic Models in Business and Industry 17.4 (2001): 319-330.\n\n7. *Tree interpreter:* Saabas, Ando. Interpreting random forests. http:\u002F\u002Fblog.datadive.net\u002Finterpreting-random-forests\u002F\n\n## Citations\n\nThe algorithms and visualizations used in this package came primarily out of research in [Su-In Lee's lab](https:\u002F\u002Fsuinlee.cs.washington.edu) at the University of Washington, and Microsoft Research. 
If you use SHAP in your research we would appreciate a citation to the appropriate paper(s):\n\n- For general use of SHAP you can read\u002Fcite our [NeurIPS paper](http:\u002F\u002Fpapers.nips.cc\u002Fpaper\u002F7062-a-unified-approach-to-interpreting-model-predictions) ([bibtex](https:\u002F\u002Fraw.githubusercontent.com\u002Fshap\u002Fshap\u002Fmaster\u002Fdocs\u002Freferences\u002Fshap_nips.bib)).\n- For TreeExplainer you can read\u002Fcite our [Nature Machine Intelligence paper](https:\u002F\u002Fwww.nature.com\u002Farticles\u002Fs42256-019-0138-9) ([bibtex](https:\u002F\u002Fraw.githubusercontent.com\u002Fshap\u002Fshap\u002Fmaster\u002Fdocs\u002Freferences\u002Ftree_explainer.bib); [free access](https:\u002F\u002Frdcu.be\u002Fb0z70)).\n- For GPUTreeExplainer you can read\u002Fcite [this article](https:\u002F\u002Farxiv.org\u002Fabs\u002F2010.13972).\n- For `force_plot` visualizations and medical applications you can read\u002Fcite our [Nature Biomedical Engineering paper](https:\u002F\u002Fwww.nature.com\u002Farticles\u002Fs41551-018-0304-0) ([bibtex](https:\u002F\u002Fraw.githubusercontent.com\u002Fshap\u002Fshap\u002Fmaster\u002Fdocs\u002Freferences\u002Fnature_bme.bib); [free access](https:\u002F\u002Frdcu.be\u002FbaVbR)).\n\n\u003Cimg height=\"1\" width=\"1\" style=\"display:none\" src=\"https:\u002F\u002Fwww.facebook.com\u002Ftr?id=189147091855991&ev=PageView&noscript=1\" \u002F>\n","\u003Cp align=\"center\">\n  \u003Cimg src=\"https:\u002F\u002Fraw.githubusercontent.com\u002Fshap\u002Fshap\u002Fmaster\u002Fdocs\u002Fartwork\u002Fshap_header.svg\" width=\"800\" \u002F>\n\u003C\u002Fp>\n\n---\n[![PyPI](https:\u002F\u002Fimg.shields.io\u002Fpypi\u002Fv\u002Fshap)](https:\u002F\u002Fpypi.org\u002Fproject\u002Fshap\u002F)\n[![Conda](https:\u002F\u002Fimg.shields.io\u002Fconda\u002Fvn\u002Fconda-forge\u002Fshap)](https:\u002F\u002Fanaconda.org\u002Fconda-forge\u002Fshap)\n![License](https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Flicense\u002Fshap\u002Fshap)\n![Tests](https:\u002F\u002Fgithub.com\u002Fshap\u002Fshap\u002Factions\u002Fworkflows\u002Frun_tests.yml\u002Fbadge.svg)\n[![Binder](https:\u002F\u002Fmybinder.org\u002Fbadge_logo.svg)](https:\u002F\u002Fmybinder.org\u002Fv2\u002Fgh\u002Fshap\u002Fshap\u002Fmaster)\n[![Documentation Status](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fshap_shap_readme_13d664e1afd7.png)](https:\u002F\u002Fshap.readthedocs.io\u002Fen\u002Flatest\u002F?badge=latest)\n![Downloads](https:\u002F\u002Fimg.shields.io\u002Fpypi\u002Fdm\u002Fshap)\n[![PyPI pyversions](https:\u002F\u002Fimg.shields.io\u002Fpypi\u002Fpyversions\u002Fshap)](https:\u002F\u002Fpypi.org\u002Fpypi\u002Fshap\u002F)\n\n\n**SHAP (SHapley Additive exPlanations)** 是一种基于博弈论的方法，用于解释任何机器学习模型的输出。它通过使用经典的博弈论 Shapley 值及其相关扩展，将最优的信用分配与局部解释联系起来（详情和引用请参见 [论文](#citations)）。\n\n\u003C!--**SHAP (SHapley Additive exPlanations)** 是一种统一的方法，用于解释任何机器学习模型的输出。SHAP 将博弈论与局部解释相结合，整合了此前的多种方法 [1-7]，并代表了唯一可能的一致且在局部上准确的基于期望值的可加性特征归因方法（详情和引用请参见我们的 [论文](#citations)）。-->\n\n\n\n## 安装\n\nSHAP 可以从 [PyPI](https:\u002F\u002Fpypi.org\u002Fproject\u002Fshap) 或 [conda-forge](https:\u002F\u002Fanaconda.org\u002Fconda-forge\u002Fshap) 安装：\n\n\u003Cpre>\npip install shap\n\u003Ci>或\u003C\u002Fi>\nconda install -c conda-forge shap\n\u003C\u002Fpre>\n\n## 支持的版本\n\nSHAP 遵循 [SPEC 0](https:\u002F\u002Fscientific-python.org\u002Fspecs\u002Fspec-0000\u002F) 对最低支持依赖版本的要求。我们会在该规范中指定的版本上进行测试，对于更旧的版本可能不会修复问题。\n\n## 贡献\n\n我们非常欢迎贡献。请随时提交 issue。在打开 PR 之前，请确保您已阅读我们的 [CONTRIBUTING.md](CONTRIBUTING.md) 指南。\n\n## 
树模型示例（XGBoost\u002FLightGBM\u002FCatBoost\u002Fscikit-learn\u002Fpyspark 模型）\n\n虽然 SHAP 可以解释任何机器学习模型的输出，但我们为树集成方法开发了一种高速精确算法（详见我们的 [Nature MI 论文](https:\u002F\u002Frdcu.be\u002Fb0z70)）。针对 *XGBoost*、*LightGBM*、*CatBoost*、*scikit-learn* 和 *pyspark* 的树模型，提供了高效的 C++ 实现：\n\n```python\nimport xgboost\nimport shap\n\n# 训练一个 XGBoost 模型\nX, y = shap.datasets.california()\nmodel = xgboost.XGBRegressor().fit(X, y)\n\n# 使用 SHAP 解释模型的预测结果\n# （相同语法适用于 LightGBM、CatBoost、scikit-learn、transformers、Spark 等）\nexplainer = shap.Explainer(model)\nshap_values = explainer(X)\n\n# 可视化第一个预测的解释\nshap.plots.waterfall(shap_values[0])\n```\n\n\u003Cp align=\"center\">\n  \u003Cimg width=\"616\" src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fshap_shap_readme_46210954404b.png\" \u002F>\n\u003C\u002Fp>\n\n上述解释展示了各个特征如何从基线值（我们所用训练数据集上的平均模型输出）推动模型输出的变化。使预测结果增高的特征用红色表示，降低预测结果的特征用蓝色表示。另一种可视化方式是使用力图（这些在我们的 [Nature BME 论文](https:\u002F\u002Frdcu.be\u002FbaVbR) 中介绍）：\n\n```python\n# 使用力图可视化第一个预测的解释\nshap.plots.force(shap_values[0])\n```\n\n\u003Cp align=\"center\">\n  \u003Cimg width=\"811\" src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fshap_shap_readme_95b1975c1057.png\" \u002F>\n\u003C\u002Fp>\n\n如果我们收集许多像上面这样的力图，将其旋转 90 度并水平堆叠，就可以看到整个数据集的解释（在笔记本中，此图是交互式的）：\n\n```python\n# 可视化所有训练集样本的预测\nshap.plots.force(shap_values[:500])\n```\n\n\u003Cp align=\"center\">\n  \u003Cimg width=\"811\" src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fshap_shap_readme_bc0b72e5fe4d.png\" \u002F>\n\u003C\u002Fp>\n\n为了理解单个特征对模型输出的影响，我们可以绘制该特性的 SHAP 值与其在数据集中所有样本取值之间的关系图。由于 SHAP 值代表了某个特征对模型输出变化的责任，下图展示了随着纬度变化，预测房价的变化情况。在某一特定纬度上的垂直分散，反映了该特征与其他特征的交互效应。为了更好地揭示这些交互作用，我们可以按另一个特征着色。如果我们将整个解释张量传递给 `color` 参数，散点图会自动选择最佳的着色特征，在本例中选择了经度。\n\n```python\n# 创建依赖散点图，展示单个特征在整个数据集中的影响\nshap.plots.scatter(shap_values[:, \"Latitude\"], color=shap_values)\n```\n\n\u003Cp align=\"center\">\n  \u003Cimg width=\"544\" src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fshap_shap_readme_a28f4348b3d5.png\" \u002F>\n\u003C\u002Fp>\n\n\n要了解哪些特征对模型最重要，可以绘制每个样本中每个特征的 SHAP 值。下图按照所有样本中 SHAP 值绝对值之和对特征进行排序，并利用 SHAP 值展示每个特征对模型输出影响的分布情况。颜色表示特征的取值（红色较高，蓝色较低）。这表明例如，较高的家庭收入中位数会提高预测房价。\n\n```python\n# 总结所有特征的影响\nshap.plots.beeswarm(shap_values)\n```\n\n\u003Cp align=\"center\">\n  \u003Cimg width=\"583\" src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fshap_shap_readme_ebb0c294910e.png\" \u002F>\n\u003C\u002Fp>\n\n我们也可以只计算每个特征 SHAP 值的平均绝对值，得到标准的条形图（对于多分类输出会产生堆叠条形图）：\n\n```python\nshap.plots.bar(shap_values)\n```\n\n\u003Cp align=\"center\">\n  \u003Cimg width=\"570\" src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fshap_shap_readme_8f7a2e3f0df8.png\" \u002F>\n\u003C\u002Fp>\n\n## 自然语言示例（transformers）\nSHAP 对自然语言处理模型有专门的支持，例如 Hugging Face transformers 库中的模型。通过在传统 Shapley 值的基础上加入联盟规则，我们可以构建游戏来解释大型现代 NLP 模型，而只需极少的函数评估次数。使用这一功能非常简单，只需将受支持的 transformers 流水线模型传递给 SHAP 即可：\n\n```python\nimport transformers\nimport shap\n\n# 加载一个 transformers 流水线模型\nmodel = transformers.pipeline('sentiment-analysis', return_all_scores=True)\n\n# 对两个样本输入进行模型解释\nexplainer = shap.Explainer(model)\nshap_values = explainer([\"多好的电影啊！……如果你没有品味的话。\"])\n\n# 可视化第一个预测中 POSITIVE 输出类别的解释\nshap.plots.text(shap_values[0, :, \"POSITIVE\"])\n```\n\n\u003Cp align=\"center\">\n  \u003Cimg width=\"811\" src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fshap_shap_readme_f66127e6de16.png\" \u002F>\n\u003C\u002Fp>\n\n## 基于 DeepExplainer 的深度学习示例（TensorFlow\u002FKeras 模型）\n\nDeep SHAP 是一种用于深度学习模型中 SHAP 值的高速近似算法，它基于 SHAP NIPS 论文中描述的与 
[DeepLIFT](https:\u002F\u002Farxiv.org\u002Fabs\u002F1704.02685) 之间的联系。此处的实现与原始 DeepLIFT 不同，它使用背景样本的分布代替单个参考值，并利用 Shapley 方程对诸如最大值、softmax、乘法、除法等操作进行线性化处理。需要注意的是，其中一些改进后来也被整合到了 DeepLIFT 中。该实现支持 TensorFlow 模型以及使用 TensorFlow 后端的 Keras 模型（同时也有对 PyTorch 的初步支持）：\n\n```python\n# ...包含来自 https:\u002F\u002Fgithub.com\u002Fkeras-team\u002Fkeras\u002Fblob\u002Fmaster\u002Fexamples\u002Fdemo_mnist_convnet.py 的代码\n\nimport shap\nimport numpy as np\n\n# 选择一组背景样本以计算期望值\nbackground = x_train[np.random.choice(x_train.shape[0], 100, replace=False)]\n\n# 解释模型在四张图像上的预测结果\ne = shap.DeepExplainer(model, background)\n# 或者直接传递张量\n# e = shap.DeepExplainer((model.layers[0].input, model.layers[-1].output), background)\nshap_values = e.shap_values(x_test[1:5])\n\n# 绘制特征归因图\nshap.image_plot(shap_values, -x_test[1:5])\n```\n\n\u003Cp align=\"center\">\n  \u003Cimg width=\"820\" src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fshap_shap_readme_0f1d7473ecf4.png\" \u002F>\n\u003C\u002Fp>\n\n上图解释了四张不同图像的十个输出（数字 0–9）。红色像素会增加模型的输出，而蓝色像素则会降低输出。输入图像显示在左侧，作为每个解释背后的近乎透明的灰度背景。SHAP 值之和等于模型期望输出（基于背景数据集的平均值）与当前模型输出之间的差值。请注意，对于“零”图像，中间的空白区域非常重要；而对于“四”图像，顶部缺少连接使得它成为“四”而不是“九”。\n\n\n## 基于 GradientExplainer 的深度学习示例（TensorFlow\u002FKeras\u002FPyTorch 模型）\n\n期望梯度将 [Integrated Gradients](https:\u002F\u002Farxiv.org\u002Fabs\u002F1703.01365)、SHAP 和 [SmoothGrad](https:\u002F\u002Farxiv.org\u002Fabs\u002F1706.03825) 的思想结合成一个单一的期望值方程。这允许将整个数据集用作背景分布（而非单个参考值），并实现局部平滑化。如果我们假设在每个背景数据样本与当前待解释的输入之间用线性函数近似模型，并且假定输入特征相互独立，则期望梯度可以计算出近似的 SHAP 值。在下面的例子中，我们解释了 VGG16 ImageNet 模型的第 7 层中间层如何影响输出概率。\n\n```python\nfrom keras.applications.vgg16 import VGG16\nfrom keras.applications.vgg16 import preprocess_input\nimport keras.backend as K\nimport numpy as np\nimport json\nimport shap\n\n# 加载预训练模型并选择两张图像进行解释\nmodel = VGG16(weights='imagenet', include_top=True)\nX,y = shap.datasets.imagenet50()\nto_explain = X[[39,41]]\n\n# 加载 ImageNet 类别名称\nurl = \"https:\u002F\u002Fs3.amazonaws.com\u002Fdeep-learning-models\u002Fimage-models\u002Fimagenet_class_index.json\"\nfname = shap.datasets.cache(url)\nwith open(fname) as f:\n    class_names = json.load(f)\n\n# 解释模型第 7 层的输入如何影响前两个类别\ndef map2layer(x, layer):\n    feed_dict = dict(zip([model.layers[0].input], [preprocess_input(x.copy())]))\n    return K.get_session().run(model.layers[layer].input, feed_dict)\ne = shap.GradientExplainer(\n    (model.layers[7].input, model.layers[-1].output),\n    map2layer(X, 7),\n    local_smoothing=0 # 平滑噪声的标准差\n)\nshap_values,indexes = e.shap_values(map2layer(to_explain, 7), ranked_outputs=2)\n\n# 获取类别的名称\nindex_names = np.vectorize(lambda x: class_names[str(x)][1])(indexes)\n\n# 绘制解释图\nshap.image_plot(shap_values, to_explain, index_names)\n```\n\n\u003Cp align=\"center\">\n  \u003Cimg width=\"500\" src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fshap_shap_readme_3a75d9611c6d.png\" \u002F>\n\u003C\u002Fp>\n\n上图解释了两张输入图像的预测结果。红色像素表示正的 SHAP 值，会增加相应类别的概率；而蓝色像素则表示负的 SHAP 值，会降低该类别的概率。通过设置 `ranked_outputs=2`，我们只解释了每张输入中最可能的两个类别（这样就无需解释全部 1,000 个类别）。\n\n## 基于 KernelExplainer 的模型无关示例（可解释任意函数）\n\nKernel SHAP 使用加权局部线性回归来估计任何模型的 SHAP 值。以下是一个简单的例子，用于解释经典鸢尾花数据集上的多分类 SVM 模型。\n\n```python\nimport sklearn\nimport shap\nfrom sklearn.model_selection import train_test_split\n\n# 在笔记本中打印 JS 可视化代码\nshap.initjs()\n\n# 训练一个 SVM 分类器\nX_train,X_test,Y_train,Y_test = train_test_split(*shap.datasets.iris(), test_size=0.2, random_state=0)\nsvm = sklearn.svm.SVC(kernel='rbf', probability=True)\nsvm.fit(X_train, Y_train)\n\n# 使用 Kernel SHAP 解释测试集的预测结果\nexplainer = 
shap.KernelExplainer(svm.predict_proba, X_train, link=\"logit\")\nshap_values = explainer.shap_values(X_test, nsamples=100)\n\n# 绘制第一个实例 Setosa 输出的 SHAP 值\nshap.force_plot(explainer.expected_value[0], shap_values[0][0,:], X_test.iloc[0,:], link=\"logit\")\n```\n\u003Cp align=\"center\">\n  \u003Cimg width=\"810\" src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fshap_shap_readme_f9112fc41a8f.png\" \u002F>\n\u003C\u002Fp>\n\n上述解释展示了四个特征各自如何推动模型输出从基准值（我们所传入的训练数据集上的平均模型输出）向零方向移动。如果有任何特征使类别标签更高，它们将以红色显示。\n\n如果我们收集许多像上面这样的解释，将其旋转 90 度，然后水平堆叠起来，就可以看到整个数据集的解释。这正是我们在下面对鸢尾花测试集中所有样本所做的操作：\n\n```python\n# 绘制所有实例 Setosa 输出的 SHAP 值\nshap.force_plot(explainer.expected_value[0], shap_values[0], X_test, link=\"logit\")\n```\n\u003Cp align=\"center\">\n  \u003Cimg width=\"813\" src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fshap_shap_readme_f50d90884d66.png\" \u002F>\n\u003C\u002Fp>\n\n## SHAP 交互值\n\nSHAP 交互值是 SHAP 值在高阶交互作用上的推广。对于树模型，我们可以通过 `shap.TreeExplainer(model).shap_interaction_values(X)` 快速精确地计算两两交互作用。该方法会为每个预测返回一个矩阵，其中主效应位于对角线上，而交互效应则位于非对角线上。这些值通常能揭示出一些有趣的隐藏关系，例如男性死亡风险在60岁时达到峰值（详情请参阅 NHANES 笔记本）：\n\n\u003Cp align=\"center\">\n  \u003Cimg width=\"483\" src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fshap_shap_readme_4762fbdf8122.png\" \u002F>\n\u003C\u002Fp>\n\n## 示例笔记本\n\n以下笔记本展示了 SHAP 的不同使用场景。如果您想亲自尝试运行原始笔记本，请查看仓库中的 notebooks 目录。\n\n### TreeExplainer\n\nTree SHAP 是一种快速且精确的算法，用于计算树模型及树集成的 SHAP 值。\n\n- [**NHANES 生存模型：XGBoost 与 SHAP 交互值**](https:\u002F\u002Fshap.github.io\u002Fshap\u002Fnotebooks\u002FNHANES%20I%20Survival%20Model.html) - 本笔记本基于20年随访的死亡率数据，演示如何结合 XGBoost 和 `shap` 揭示复杂的危险因素关系。\n\n- [**Census 收入分类：LightGBM**](https:\u002F\u002Fshap.github.io\u002Fshap\u002Fnotebooks\u002Ftree_explainer\u002FCensus%20income%20classification%20with%20LightGBM.html) - 使用标准的成人人口普查收入数据集，本笔记本首先用 LightGBM 训练梯度提升树模型，随后利用 `shap` 解释预测结果。\n\n- [**英雄联盟胜率预测：XGBoost**](https:\u002F\u002Fshap.github.io\u002Fshap\u002Fnotebooks\u002FLeague%20of%20Legends%20Win%20Prediction%20with%20XGBoost.html) - 基于 Kaggle 上包含18万场英雄联盟排位赛的数据集，本笔记本训练并解释了一个使用 XGBoost 的梯度提升树模型，以预测玩家是否能够赢得比赛。\n\n### DeepExplainer\n\nDeep SHAP 是一种更快但仅近似的算法，用于计算深度学习模型的 SHAP 值。它基于 SHAP 与 DeepLIFT 算法之间的联系。\n\n- [**MNIST 数字分类：Keras**](https:\u002F\u002Fshap.github.io\u002Fshap\u002Fnotebooks\u002Fdeep_explainer\u002FFront%20Page%20DeepExplainer%20MNIST%20Example.html) - 利用 MNIST 手写数字识别数据集，本笔记本首先用 Keras 训练神经网络，然后使用 `shap` 解释预测结果。\n\n- [**IMDB 情感分类：Keras LSTM**](https:\u002F\u002Fshap.github.io\u002Fshap\u002Fnotebooks\u002Fdeep_explainer\u002FKeras%20LSTM%20for%20IMDB%20Sentiment%20Classification.html) - 本笔记本使用 Keras 在 IMDB 文本情感分析数据集上训练 LSTM 模型，并通过 `shap` 解释预测结果。\n\n### GradientExplainer\n\nGradientExplainer 实现了期望梯度方法，用于近似计算深度学习模型的 SHAP 值。它基于 SHAP 与积分梯度算法之间的联系。相比 DeepExplainer，GradientExplainer 的速度较慢，且其近似假设也有所不同。\n\n- [**VGG16 中间层在 ImageNet 上的解释**](https:\u002F\u002Fshap.github.io\u002Fshap\u002Fnotebooks\u002Fgradient_explainer\u002FExplain%20an%20Intermediate%20Layer%20of%20VGG16%20on%20ImageNet.html) - 本笔记本演示如何使用内部卷积层来解释预训练 VGG16 ImageNet 模型的输出。\n\n### LinearExplainer\n\n对于具有独立特征的线性模型，我们可以解析地计算出精确的 SHAP 值。如果我们愿意估计特征协方差矩阵，也可以考虑特征之间的相关性。LinearExplainer 同时支持这两种方式。\n\n- [**情感分析：逻辑回归**](https:\u002F\u002Fshap.github.io\u002Fshap\u002Fnotebooks\u002Flinear_explainer\u002FSentiment%20Analysis%20with%20Logistic%20Regression.html) - 本笔记本演示如何解释一个线性逻辑回归情感分析模型。\n\n### KernelExplainer\n\nKernel SHAP 是一种模型无关的方法，可用于估算任何模型的 SHAP 值。由于它不依赖于模型类型，因此 KernelExplainer 的速度比其他特定于模型类型的算法要慢。\n\n- [**Census 
收入分类：scikit-learn**](https:\u002F\u002Fshap.github.io\u002Fshap\u002Fnotebooks\u002FCensus%20income%20classification%20with%20scikit-learn.html) - 使用标准的成人人口普查收入数据集，本笔记本首先用 scikit-learn 训练 k 最近邻分类器，然后利用 `shap` 解释预测结果。\n\n- [**ImageNet VGG16 模型：Keras**](https:\u002F\u002Fshap.github.io\u002Fshap\u002Fnotebooks\u002FImageNet%20VGG16%20Model%20with%20Keras.html) - 解释经典的 VGG16 卷积神经网络对图像的预测结果。此过程采用模型无关的 Kernel SHAP 方法，并结合超像素分割技术进行分析。\n\n- [**鸢尾花分类**](https:\u002F\u002Fshap.github.io\u002Fshap\u002Fnotebooks\u002FIris%20classification%20with%20scikit-learn.html) - 这是一个基础示例，使用流行的鸢尾花物种数据集，通过 `shap` 解释 scikit-learn 中六种不同模型的预测结果。\n\n## 文档笔记本\n\n这些笔记本全面展示了如何使用特定的函数和对象。\n\n- [`shap.decision_plot` 和 `shap.multioutput_decision_plot`](https:\u002F\u002Fshap.github.io\u002Fshap\u002Fnotebooks\u002Fplots\u002Fdecision_plot.html)\n\n- [`shap.dependence_plot`](https:\u002F\u002Fshap.github.io\u002Fshap\u002Fnotebooks\u002Fplots\u002Fdependence_plot.html)\n\n## 由 SHAP 统一的方法\n\n1. *LIME:* Ribeiro, Marco Tulio, Sameer Singh, and Carlos Guestrin. \"Why should i trust you?: Explaining the predictions of any classifier.\" Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. ACM, 2016.\n\n2. *Shapley sampling values:* Strumbelj, Erik, and Igor Kononenko. \"Explaining prediction models and individual predictions with feature contributions.\" Knowledge and information systems 41.3 (2014): 647-665.\n\n3. *DeepLIFT:* Shrikumar, Avanti, Peyton Greenside, and Anshul Kundaje. \"Learning important features through propagating activation differences.\" arXiv preprint arXiv:1704.02685 (2017).\n\n4. *QII:* Datta, Anupam, Shayak Sen, and Yair Zick. \"Algorithmic transparency via quantitative input influence: Theory and experiments with learning systems.\" Security and Privacy (SP), 2016 IEEE Symposium on. IEEE, 2016.\n\n5. *Layer-wise relevance propagation:* Bach, Sebastian, et al. \"On pixel-wise explanations for non-linear classifier decisions by layer-wise relevance propagation.\" PloS one 10.7 (2015): e0130140.\n\n6. *Shapley regression values:* Lipovetsky, Stan, and Michael Conklin. \"Analysis of regression in game theory approach.\" Applied Stochastic Models in Business and Industry 17.4 (2001): 319-330.\n\n7. *Tree interpreter:* Saabas, Ando. Interpreting random forests. 
http:\u002F\u002Fblog.datadive.net\u002Finterpreting-random-forests\u002F\n\n## 引用\n\n本包中使用的算法和可视化主要源自于华盛顿大学[Su-In Lee 实验室](https:\u002F\u002Fsuinlee.cs.washington.edu)以及微软研究院的研究成果。如果您在研究中使用 SHAP，我们诚挚地希望您能引用相应的论文：\n\n- 对于 SHAP 的通用使用，您可以阅读并引用我们的 [NeurIPS 论文](http:\u002F\u002Fpapers.nips.cc\u002Fpaper\u002F7062-a-unified-approach-to-interpreting-model-predictions)（[BibTeX](https:\u002F\u002Fraw.githubusercontent.com\u002Fshap\u002Fshap\u002Fmaster\u002Fdocs\u002Freferences\u002Fshap_nips.bib)）。\n- 对于 TreeExplainer，您可以阅读并引用我们的 [Nature Machine Intelligence 论文](https:\u002F\u002Fwww.nature.com\u002Farticles\u002Fs42256-019-0138-9)（[BibTeX](https:\u002F\u002Fraw.githubusercontent.com\u002Fshap\u002Fshap\u002Fmaster\u002Fdocs\u002Freferences\u002Ftree_explainer.bib)；[免费获取](https:\u002F\u002Frdcu.be\u002Fb0z70)）。\n- 对于 GPUTreeExplainer，您可以阅读并引用 [这篇文章](https:\u002F\u002Farxiv.org\u002Fabs\u002F2010.13972)。\n- 对于 `force_plot` 可视化及医疗应用，您可以阅读并引用我们的 [Nature Biomedical Engineering 论文](https:\u002F\u002Fwww.nature.com\u002Farticles\u002Fs41551-018-0304-0)（[BibTeX](https:\u002F\u002Fraw.githubusercontent.com\u002Fshap\u002Fshap\u002Fmaster\u002Fdocs\u002Freferences\u002Fnature_bme.bib)；[免费获取](https:\u002F\u002Frdcu.be\u002FbaVbR)）。\n\n\u003Cimg height=\"1\" width=\"1\" style=\"display:none\" src=\"https:\u002F\u002Fwww.facebook.com\u002Ftr?id=189147091855991&ev=PageView&noscript=1\" \u002F>","# SHAP 快速上手指南\n\nSHAP (SHapley Additive exPlanations) 是一种基于博弈论的方法，用于解释任何机器学习模型的输出。它通过计算 Shapley 值，将最优信用分配与局部解释相结合，帮助用户理解特征如何影响模型预测。\n\n## 环境准备\n\n- **操作系统**：Windows, macOS, Linux\n- **Python 版本**：支持 Python 3.7+（具体版本请参考 [SPEC 0](https:\u002F\u002Fscientific-python.org\u002Fspecs\u002Fspec-0000\u002F)）\n- **前置依赖**：\n  - 基础使用无需特殊依赖。\n  - 若需解释特定模型，请预先安装对应库（如 `xgboost`, `lightgbm`, `catboost`, `scikit-learn`, `transformers`, `tensorflow` 或 `torch`）。\n\n## 安装步骤\n\n推荐使用国内镜像源加速安装。\n\n### 方式一：使用 pip 安装\n\n```bash\npip install shap -i https:\u002F\u002Fpypi.tuna.tsinghua.edu.cn\u002Fsimple\n```\n\n### 方式二：使用 conda 安装\n\n```bash\nconda install -c conda-forge shap\n```\n*注：conda-forge 源通常在全球范围内同步，国内用户若连接缓慢可尝试配置清华或中科大 conda 镜像源。*\n\n## 基本使用\n\n以下示例展示如何使用 SHAP 解释一个 **XGBoost** 回归模型。该流程同样适用于 LightGBM、CatBoost、scikit-learn 等树模型。\n\n### 1. 训练模型并生成解释\n\n```python\nimport xgboost\nimport shap\n\n# 加载示例数据集 (加州房价)\nX, y = shap.datasets.california()\n\n# 训练 XGBoost 模型\nmodel = xgboost.XGBRegressor().fit(X, y)\n\n# 创建解释器并计算 SHAP 值\n# 此语法同样适用于 LightGBM, CatBoost, scikit-learn 等\nexplainer = shap.Explainer(model)\nshap_values = explainer(X)\n```\n\n### 2. 
可视化解释结果\n\n**瀑布图 (Waterfall Plot)**：展示单个样本的预测值是如何从基准值（平均预测值）被各个特征推高或拉低的。红色表示推高预测值，蓝色表示降低预测值。\n\n```python\n# 可视化第一个样本的解释\nshap.plots.waterfall(shap_values[0])\n```\n\n**蜂群图 (Beeswarm Plot)**：全局概览，展示所有样本中每个特征的重要性及其对模型输出的影响分布。颜色代表特征值大小（红高蓝低）。\n\n```python\n# 总结所有特征的影响\nshap.plots.beeswarm(shap_values)\n```\n\n**条形图 (Bar Plot)**：按特征重要性排序的标准条形图。\n\n```python\n# 生成特征重要性条形图\nshap.plots.bar(shap_values)\n```\n\n> **提示**：对于深度学习模型（TensorFlow\u002FPyTorch），可使用 `shap.DeepExplainer` 或 `shap.GradientExplainer`；对于自然语言处理模型（Transformers），可直接传入 pipeline 对象进行解释。","某金融风控团队正在构建基于 XGBoost 的信贷违约预测模型，急需向监管机构和业务部门解释为何拒绝特定客户的贷款申请。\n\n### 没有 shap 时\n- **决策如“黑盒”**：模型虽然预测准确，但无法说明具体是“收入低”还是“负债高”导致了拒贷，业务人员只能盲目信任代码输出。\n- **排查困难**：当模型出现异常预测时，数据科学家难以定位是哪个特征引发了偏差，调试过程如同大海捞针。\n- **合规风险高**：面对监管询问“为何拒绝该用户”，团队无法提供符合法规要求的逐案特征归因报告，面临审计不通过的风险。\n- **信任缺失**：业务部门因不理解模型逻辑，对自动审批结果持怀疑态度，导致大量本可放款的案例被人工重复复核，效率低下。\n\n### 使用 shap 后\n- **透明化归因**：shap 利用博弈论中的 Shapley 值，精确计算出每个特征（如年龄、信用分）对当前预测结果的贡献度，将“黑盒”变为清晰的白盒。\n- **精准定位问题**：通过瀑布图（Waterfall Plot），团队能瞬间看到哪些因素将预测概率从基准值推高至拒贷线，快速发现数据或逻辑异常。\n- **满足合规要求**：shap 生成的力导向图（Force Plot）直观展示了单个案例的决策路径，轻松生成符合监管要求的可解释性报告。\n- **建立业务信任**：业务人员能看懂具体的拒贷理由（例如“虽收入高但近期查询次数过多”），从而放心地部署模型，大幅减少人工复核成本。\n\nshap 通过数学上严谨的特征归因机制，成功在保持模型高精度的同时，打破了算法黑盒，让机器学习决策变得可信、可控且合规。","https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fshap_shap_46210954.png","https:\u002F\u002Foss.gittoolsai.com\u002Favatars\u002Fshap_a4048f44.png",null,"https:\u002F\u002Fgithub.com\u002Fshap",[79,83,87,91,95,98,101,104,107],{"name":80,"color":81,"percentage":82},"Jupyter Notebook","#DA5B0B",98.7,{"name":84,"color":85,"percentage":86},"Python","#3572A5",1.1,{"name":88,"color":89,"percentage":90},"C++","#f34b7d",0.1,{"name":92,"color":93,"percentage":94},"JavaScript","#f1e05a",0,{"name":96,"color":97,"percentage":94},"Cuda","#3A4E3A",{"name":99,"color":100,"percentage":94},"PowerShell","#012456",{"name":102,"color":103,"percentage":94},"Batchfile","#C1F12E",{"name":105,"color":106,"percentage":94},"Cython","#fedf5b",{"name":108,"color":109,"percentage":94},"Makefile","#427819",25230,3519,"2026-04-04T03:14:40","MIT",1,"Linux, macOS, Windows","非必需。仅在解释深度学习模型（如使用 DeepExplainer 或 GradientExplainer）时，若底层框架（TensorFlow\u002FPyTorch）配置了 GPU 加速则可使用，无特定显卡型号或显存强制要求。","未说明（取决于数据集大小及被解释模型的复杂度；深度学习和大规模集成模型通常建议 16GB+）",{"notes":119,"python":120,"dependencies":121},"该工具通过 PyPI (pip install shap) 或 conda-forge (conda install -c conda-forge shap) 安装。它支持多种模型类型：针对树模型（XGBoost, LightGBM, CatBoost, scikit-learn, pyspark）有高速 C++ 实现；支持 Hugging Face transformers 进行自然语言处理解释；支持 TensorFlow\u002FKeras 及初步支持 PyTorch 进行深度学习解释（需额外安装对应深度学习框架）。对于未明确列出的依赖版本，库遵循 SPEC 0 标准，即支持科学 Python 生态系统中最近两年发布的版本。","遵循 SPEC 0 标准（通常支持当前及过去两年的 Python 版本，如 3.9+），具体版本需参考 scientific-python.org\u002Fspecs\u002Fspec-0000\u002F",[122,123,124,125,126,127,128,129,130,131],"numpy","scipy","scikit-learn","pandas","tqdm","cloudpickle","typing_extensions","xgboost (可选，用于树模型)","lightgbm (可选，用于树模型)","catboost (可选，用于树模型)",[13],[134,135,136,137,67,138,139],"interpretability","machine-learning","deep-learning","gradient-boosting","shapley","explainability","2026-03-27T02:49:30.150509","2026-04-06T09:46:03.389140",[143,148,153,158,163,167],{"id":144,"question_zh":145,"answer_zh":146,"source_url":147},14317,"如何保存 SHAP Explainer 对象？","目前官方尚未提供直接保存 Explainer 的标准方法。有用户尝试使用 joblib 保存但遇到错误。建议关注官方后续更新，或尝试将模型和背景数据分别保存，重新初始化 Explainer。如果问题持续，建议在 GitHub 上重新提交带有具体报错信息的 Issue。","https:\u002F\u002Fgithub.com\u002Fshap\u002Fshap\u002Fissues\u002F295",{"id":149,"question_zh":150,"answer_zh":151,"source_url":152},14318,"SHAP 是否支持 
PySpark 或计划集成 Spark？","目前 SHAP 官方尚未正式集成 PySpark。对于想在 Spark 中使用 SHAP 的用户，暂时没有官方推荐的直接方案。部分用户询问了关于 CategoricalSplit 等特定功能的实现情况，但尚未得到明确的原生支持确认。建议直接在本地环境导出数据进行解释，或关注社区是否有第三方扩展。","https:\u002F\u002Fgithub.com\u002Fshap\u002Fshap\u002Fissues\u002F38",{"id":154,"question_zh":155,"answer_zh":156,"source_url":157},14319,"使用 TreeExplainer 加载 XGBoost 模型时出现 'utf-8' codec 解码错误怎么办？","这是一个已知的版本兼容性问题。解决方案通常是升级或调整 SHAP 和 XGBoost 的版本。有用户反馈升级到 SHAP 0.45.1 配合 XGBoost 2.1.1 可以解决问题。如果升级无效，可以尝试降级 SHAP 到 v1.0.0（如果源可用），但更推荐保持两个库均为较新的稳定版本以确保兼容性。","https:\u002F\u002Fgithub.com\u002Fshap\u002Fshap\u002Fissues\u002F1215",{"id":159,"question_zh":160,"answer_zh":161,"source_url":162},14320,"为什么二分类任务的 SHAP 输出值（base value 和 output value）不在 [0, 1] 范围内？","这是预期行为。XGBoost 等模型在内部通常使用对数几率（log-odds\u002Fmargin）空间进行计算，因此原始的 SHAP 值和基准值可能超出 [0, 1] 范围。若需将其转换为概率形式，可以使用 scipy.special.expit 函数（即 sigmoid 函数）对基准值和累加后的结果进行转换。例如：base_value = expit(untransformed_base_value)。","https:\u002F\u002Fgithub.com\u002Fshap\u002Fshap\u002Fissues\u002F29",{"id":164,"question_zh":165,"answer_zh":166,"source_url":162},14321,"如何在多分类问题中绘制与 predict_proba 匹配的 SHAP 瀑布图（Waterfall Plot）？","针对多分类问题，需要手动转换 SHAP 值以匹配各类别的预测概率。可以通过编写自定义函数，利用 scipy.special.expit 处理基准值，并计算距离系数来缩放 SHAP 值。核心步骤包括：1. 提取未转换的基准值并应用 expit 函数；2. 计算原始解释距离；3. 计算模型预测值与转换后基准值的距离；4. 根据距离比率调整 SHAP 值以便正确可视化。",{"id":168,"question_zh":169,"answer_zh":170,"source_url":171},14322,"使用 SHAP 解释 PySpark 的 GBTClassifier 模型时为什么会报 NotImplementedError 错误？","SHAP 的 TreeExplainer 目前主要支持 sklearn、XGBoost、LightGBM 等特定格式的树模型，尚不直接支持 PySpark MLlib 中的 GBTClassifier 或 RandomForestClassifier 对象。直接传入 PySpark 模型对象会导致 NotImplementedError。变通方法是尝试将 PySpark 模型转换为支持的格式（如果可行），或者在本地使用 sklearn 等价模型进行解释分析。","https:\u002F\u002Fgithub.com\u002Fshap\u002Fshap\u002Fissues\u002F884",[173,178,183,188,193,198,203,208,213,218,223,228,233,238,243,248,253,258,263,268],{"id":174,"version":175,"summary_zh":176,"released_at":177},81074,"v0.51.0","\u003C!-- 使用 .github\u002Frelease.yml 中的配置在 master 分支上生成的发布说明 -->\n\n## 变更内容\n\n## 修复\n* 修复：首先检查特征是否不在叶节点中，由 @Far-naz 在 https:\u002F\u002Fgithub.com\u002Fshap\u002Fshap\u002Fpull\u002F4268 中完成\n* 修复 MAPLE 中缺失的数组到标量转换问题，由 @Scienfitz 在 https:\u002F\u002Fgithub.com\u002Fshap\u002Fshap\u002Fpull\u002F4285 中完成\n* 修复 Tree SHAP 笔记本的 Python 版本问题，由 @CloseChoice 在 https:\u002F\u002Fgithub.com\u002Fshap\u002Fshap\u002Fpull\u002F4289 中完成\n* 修复使用小型背景数据集时路径依赖型 SHAP 的 NaN 问题，由 @tudstudent 在 https:\u002F\u002Fgithub.com\u002Fshap\u002Fshap\u002Fpull\u002F4272 中完成\n* 修复 format_value() 处理空字符串时出现的 IndexError，由 @Copilot 在 https:\u002F\u002Fgithub.com\u002Fshap\u002Fshap\u002Fpull\u002F4238 中完成\n* 修复 test_scatter_categorical 以兼容 pandas 3.0，由 @Copilot 在 https:\u002F\u002Fgithub.com\u002Fshap\u002Fshap\u002Fpull\u002F4253 中完成\n* 修复 SamplingExplainer.explain 对于 Series 的处理问题，由 @ljw20180420 在 https:\u002F\u002Fgithub.com\u002Fshap\u002Fshap\u002Fpull\u002F4200 中完成\n\n### 其他变更\n* 添加针对已修复掩码器的测试，由 @CloseChoice 在 https:\u002F\u002Fgithub.com\u002Fshap\u002Fshap\u002Fpull\u002F4216 中完成\n* 为生产代码添加全面的类型注解，由 @CloseChoice 在 https:\u002F\u002Fgithub.com\u002Fshap\u002Fshap\u002Fpull\u002F4217 中完成\n* 解除版本锁定并固定 numba 版本，跳过 causalml 测试，由 @CloseChoice 在 https:\u002F\u002Fgithub.com\u002Fshap\u002Fshap\u002Fpull\u002F4235 中完成\n* 明确 TreeExplainer 在二分类任务中会根据模型返回不同形状的结果，由 @Copilot 在 https:\u002F\u002Fgithub.com\u002Fshap\u002Fshap\u002Fpull\u002F4254 中完成\n* 添加对解释器的测试，由 @CloseChoice 在 https:\u002F\u002Fgithub.com\u002Fshap\u002Fshap\u002Fpull\u002F4218 中完成\n* 添加 Colab 笔记本以测试 GPUTreeExplainer，由 @CloseChoice 在 
https:\u002F\u002Fgithub.com\u002Fshap\u002Fshap\u002Fpull\u002F4266 中完成\n* 修复文档中的拼写错误：将 perterbation 改为 perturbation，由 @laffertyryan0 在 https:\u002F\u002Fgithub.com\u002Fshap\u002Fshap\u002Fpull\u002F4275 中完成\n* 更新 test_scatter 使其与最新版 xgboost 兼容，由 @CloseChoice 在 https:\u002F\u002Fgithub.com\u002Fshap\u002Fshap\u002Fpull\u002F4278 中完成\n* 在贡献指南的适当位置添加 AI 使用政策，并在 README 中提及贡献事宜，由 @CloseChoice 在 https:\u002F\u002Fgithub.com\u002Fshap\u002Fshap\u002Fpull\u002F4279 中完成\n* 废弃在 macOS x64_86 上测试较新 llvmlite 版本的功能，由 @CloseChoice 在 https:\u002F\u002Fgithub.com\u002Fshap\u002Fshap\u002Fpull\u002F4286 中完成\n\n## 新贡献者\n* @ljw20180420 在 https:\u002F\u002Fgithub.com\u002Fshap\u002Fshap\u002Fpull\u002F4200 中完成了首次贡献\n* @Copilot 在 https:\u002F\u002Fgithub.com\u002Fshap\u002Fshap\u002Fpull\u002F4238 中完成了首次贡献\n* @laffertyryan0 在 https:\u002F\u002Fgithub.com\u002Fshap\u002Fshap\u002Fpull\u002F4275 中完成了首次贡献\n* @tudstudent 在 https:\u002F\u002Fgithub.com\u002Fshap\u002Fshap\u002Fpull\u002F4272 中完成了首次贡献\n* @Far-naz 在 https:\u002F\u002Fgithub.com\u002Fshap\u002Fshap\u002Fpull\u002F4268 中完成了首次贡献\n* @Scienfitz 在 https:\u002F\u002Fgithub.com\u002Fshap\u002Fshap\u002Fpull\u002F4285 中完成了首次贡献\n\n**完整变更日志**：https:\u002F\u002Fgithub.com\u002Fshap\u002Fshap\u002Fcompare\u002Fv0.50.0...v0.51.0","2026-03-04T09:04:18",{"id":179,"version":180,"summary_zh":181,"released_at":182},81075,"v0.50.0","\u003C!-- 使用 .github\u002Frelease.yml 中的配置生成的发布说明 -->\n\n## 变更内容\n* 将 threshold_types 交由 GPUTreeExplainer 处理，由 @CloseChoice 在 https:\u002F\u002Fgithub.com\u002Fshap\u002Fshap\u002Fpull\u002F4181 中完成\n* 改进 base_score 的赋值逻辑，由 @lsdxp 在 https:\u002F\u002Fgithub.com\u002Fshap\u002Fshap\u002Fpull\u002F4187 中完成\n* 针对 Python 3.14 进行测试，并移除对 Python 3.9 和 3.10 的支持，由 @CloseChoice 在 https:\u002F\u002Fgithub.com\u002Fshap\u002Fshap\u002Fpull\u002F4176 中完成\n* 始终将 transformers 的 label2id ID 强制转换为整数，由 @evamaxfield 在 https:\u002F\u002Fgithub.com\u002Fshap\u002Fshap\u002Fpull\u002F4192 中完成\n* 修复 GPU 树解释器的测试，由 @CloseChoice 在 https:\u002F\u002Fgithub.com\u002Fshap\u002Fshap\u002Fpull\u002F4199 中完成\n\n## 新贡献者\n* @lsdxp 在 https:\u002F\u002Fgithub.com\u002Fshap\u002Fshap\u002Fpull\u002F4187 中完成了首次贡献\n* @evamaxfield 在 https:\u002F\u002Fgithub.com\u002Fshap\u002Fshap\u002Fpull\u002F4192 中完成了首次贡献\n\n**完整变更日志**: https:\u002F\u002Fgithub.com\u002Fshap\u002Fshap\u002Fcompare\u002Fv0.49.1...v0.50.0","2025-11-11T18:24:12",{"id":184,"version":185,"summary_zh":186,"released_at":187},81076,"v0.49.1","### 变更内容\n修复 v0.49.0 版本的发布问题。\n\n由于 macOS 上的 HTTP 错误，之前的版本未能正确发布。","2025-10-14T09:19:03",{"id":189,"version":190,"summary_zh":191,"released_at":192},81077,"v0.49.0","\u003C!-- 使用 .github\u002Frelease.yml 中的配置在 master 分支上生成的发布说明 -->\n\n## 变更内容\n注意：这是最后一个同时支持 Python 3.9 和 Python 3.10 的版本。从 `v0.50.0` 开始，我们将仅支持 Python 3.11 及以上版本。\n\n* @nqviller 在 https:\u002F\u002Fgithub.com\u002Fshap\u002Fshap\u002Fpull\u002F4171 中为 C++ 库添加了对分类分割的支持。\n* @FanwangM 在 https:\u002F\u002Fgithub.com\u002Fshap\u002Fshap\u002Fpull\u002F4119 中增加了自定义小提琴图颜色条的选项。\n\n### 其他变更\n* 如果提供了标签，plots.image 将为每一行显示标签，由 @julesvanrie 在 https:\u002F\u002Fgithub.com\u002Fshap\u002Fshap\u002Fpull\u002F4113 中实现。\n* 使用 nbtest，由 @CloseChoice 在 https:\u002F\u002Fgithub.com\u002Fshap\u002Fshap\u002Fpull\u002F4143 中引入。\n* @RoyiAvital 在 https:\u002F\u002Fgithub.com\u002Fshap\u002Fshap\u002Fpull\u002F4148 中添加了对 PyTorch `Flaten` 的支持。\n* 更新 Explainer 和 Serializer 中的 `.save()` 和 `.load()` 方法，以移除 AttributeError，由 @oscarl77 在 https:\u002F\u002Fgithub.com\u002Fshap\u002Fshap\u002Fpull\u002F4155 中完成。\n* 修复 LGBMRegressor 缺失目标函数错误，由 
@imatiach-msft 和 @CloseChoice 在 https:\u002F\u002Fgithub.com\u002Fshap\u002Fshap\u002Fpull\u002F1063 中解决。\n* 通过为特征名称添加字符串转换，修复数值特征分支中潜在的 TypeError #4150，由 @YunyuG 在 https:\u002F\u002Fgithub.com\u002Fshap\u002Fshap\u002Fpull\u002F4159 中完成。\n* 改进 Coalition Explainer 用户引导并修复树构建问题，由 @EnzoFanAccount 在 https:\u002F\u002Fgithub.com\u002Fshap\u002Fshap\u002Fpull\u002F4116 中实现。\n\n## 新贡献者\n* @diego-pm 在 https:\u002F\u002Fgithub.com\u002Fshap\u002Fshap\u002Fpull\u002F4110 中做出了首次贡献。\n* @julesvanrie 在 https:\u002F\u002Fgithub.com\u002Fshap\u002Fshap\u002Fpull\u002F4113 中做出了首次贡献。\n* @Helias 在 https:\u002F\u002Fgithub.com\u002Fshap\u002Fshap\u002Fpull\u002F4141 中做出了首次贡献。\n* @oscarl77 在 https:\u002F\u002Fgithub.com\u002Fshap\u002Fshap\u002Fpull\u002F4155 中做出了首次贡献。\n* @FanwangM 在 https:\u002F\u002Fgithub.com\u002Fshap\u002Fshap\u002Fpull\u002F4119 中做出了首次贡献。\n* @YunyuG 在 https:\u002F\u002Fgithub.com\u002Fshap\u002Fshap\u002Fpull\u002F4159 中做出了首次贡献。\n* @EnzoFanAccount 在 https:\u002F\u002Fgithub.com\u002Fshap\u002Fshap\u002Fpull\u002F4116 中做出了首次贡献。\n* @nqviller 在 https:\u002F\u002Fgithub.com\u002Fshap\u002Fshap\u002Fpull\u002F4171 中做出了首次贡献。\n\n**完整变更日志**：https:\u002F\u002Fgithub.com\u002Fshap\u002Fshap\u002Fcompare\u002Fv0.48.0...v0.49.0","2025-10-14T09:06:34",{"id":194,"version":195,"summary_zh":196,"released_at":197},81078,"v0.48.0","\u003C!-- 使用 .github\u002Frelease.yml 中的配置在 master 分支上生成的发布说明 -->\n\n## 变更内容\n### 新增\n* 添加 CoalitionExplainer，并支持在 Partition Explainer 中使用 Winter Values，由 @CousinThrockmorton 在 https:\u002F\u002Fgithub.com\u002Fshap\u002Fshap\u002Fpull\u002F3666 中实现。\n* 支持并测试 Python 3.13，由 @connortann 在 https:\u002F\u002Fgithub.com\u002Fshap\u002Fshap\u002Fpull\u002F3861 和 @CloseChoice 在 https:\u002F\u002Fgithub.com\u002Fshap\u002Fshap\u002Fpull\u002F4104 中完成。\n* 添加对 PyTorch `Identity` 层的支持，由 @RoyiAvital 在 https:\u002F\u002Fgithub.com\u002Fshap\u002Fshap\u002Fpull\u002F4028 中实现。\n### 文档\n* 更新“解释使用标准化特征的模型”的相关内容，由 @randombenj 在 https:\u002F\u002Fgithub.com\u002Fshap\u002Fshap\u002Fpull\u002F3903 中完成。\n### 其他变更\n* 更改别名，由 @Ja-Tink 在 https:\u002F\u002Fgithub.com\u002Fshap\u002Fshap\u002Fpull\u002F4049 中完成。\n* 更新 JavaScript 依赖包，由 @CloseChoice 在 https:\u002F\u002Fgithub.com\u002Fshap\u002Fshap\u002Fpull\u002F4067 中完成。\n* 修复小 SHAP 值的视觉显示问题，由 @Ja-Tink 在 https:\u002F\u002Fgithub.com\u002Fshap\u002Fshap\u002Fpull\u002F4053 中解决。\n* 修复 summary_plot 只显示一个特征的问题，由 @CloseChoice 在 https:\u002F\u002Fgithub.com\u002Fshap\u002Fshap\u002Fpull\u002F4087 中完成。\n* 解决正则表达式库的线程安全警告，由 @emmanuel-ferdman 在 https:\u002F\u002Fgithub.com\u002Fshap\u002Fshap\u002Fpull\u002F4084 中修复。\n\n## 新贡献者\n* @RoyiAvital 在 https:\u002F\u002Fgithub.com\u002Fshap\u002Fshap\u002Fpull\u002F4028 中完成了首次贡献。\n* @Ja-Tink 在 https:\u002F\u002Fgithub.com\u002Fshap\u002Fshap\u002Fpull\u002F4049 中完成了首次贡献。\n* @emmanuel-ferdman 在 https:\u002F\u002Fgithub.com\u002Fshap\u002Fshap\u002Fpull\u002F4084 中完成了首次贡献。\n* @randombenj 在 https:\u002F\u002Fgithub.com\u002Fshap\u002Fshap\u002Fpull\u002F3903 中完成了首次贡献。\n* @CousinThrockmorton 在 https:\u002F\u002Fgithub.com\u002Fshap\u002Fshap\u002Fpull\u002F3666 中完成了首次贡献。\n\n**完整变更日志**：https:\u002F\u002Fgithub.com\u002Fshap\u002Fshap\u002Fcompare\u002Fv0.47.2...v0.48.0","2025-06-12T11:38:58",{"id":199,"version":200,"summary_zh":201,"released_at":202},81079,"v0.47.2","\u003C!-- 使用 .github\u002Frelease.yml 中的配置生成的发布说明 -->\n\n## 变更内容\n### 新增\n* @alexander-pv 在 https:\u002F\u002Fgithub.com\u002Fshap\u002Fshap\u002Fpull\u002F3273 中添加了实验性的 causalml 支持\n\n### 其他变更\n* 文档：@ethanknights 在 
https:\u002F\u002Fgithub.com\u002Fshap\u002Fshap\u002Fpull\u002F4027 中提升了 `Simple California Demo.ipynb` 中关于分区树解释的清晰度\n* 修复：@fabianliebig 在 https:\u002F\u002Fgithub.com\u002Fshap\u002Fshap\u002Fpull\u002F4041 中修复了唯一值抖动问题\n* 修复：@Hrafz 在 https:\u002F\u002Fgithub.com\u002Fshap\u002Fshap\u002Fpull\u002F4058 中修正了 SHAP 值描述中的中性用语\n* 修复：@CloseChoice 在 https:\u002F\u002Fgithub.com\u002Fshap\u002Fshap\u002Fpull\u002F4060 中修复了 JavaScript 绘图组件的回归测试，并新增了相关测试\n\n## 新贡献者\n* @ethanknights 在 https:\u002F\u002Fgithub.com\u002Fshap\u002Fshap\u002Fpull\u002F4027 中完成了首次贡献\n* @Hrafz 在 https:\u002F\u002Fgithub.com\u002Fshap\u002Fshap\u002Fpull\u002F4058 中完成了首次贡献\n\n**完整变更日志**：https:\u002F\u002Fgithub.com\u002Fshap\u002Fshap\u002Fcompare\u002Fv0.47.1...v0.47.2","2025-04-17T18:00:24",{"id":204,"version":205,"summary_zh":206,"released_at":207},81080,"v0.47.1","\u003C!-- 发布说明由 .github\u002Frelease.yml 中的配置在 master 分支上生成 -->\n\n## 修复\n* 由 @CloseChoice 在 https:\u002F\u002Fgithub.com\u002Fshap\u002Fshap\u002Fpull\u002F4033 中修复了摘要小提琴图中的回归问题。\n* 由 @sunruslan 在 https:\u002F\u002Fgithub.com\u002Fshap\u002Fshap\u002Fpull\u002F3998 中修复了新版本 scikit-learn 中树模型对缺失数据的 SHAP 值计算错误。\n* 由 @adamwitmer 在 https:\u002F\u002Fgithub.com\u002Fshap\u002Fshap\u002Fpull\u002F3993 中修复了调用 check_additivity 时精度要求过严的问题。\n* 由 @fabianliebig 在 https:\u002F\u002Fgithub.com\u002Fshap\u002Fshap\u002Fpull\u002F4017 中修复了提取颜色时出现的 AttributeError。\n* 由 @arhall0 在 https:\u002F\u002Fgithub.com\u002Fshap\u002Fshap\u002Fpull\u002F4006 中修复了 uint32 溢出导致的可加性检查失败问题。\n\n## 新贡献者\n* @adamwitmer 在 https:\u002F\u002Fgithub.com\u002Fshap\u002Fshap\u002Fpull\u002F3993 中完成了首次贡献。\n* @fabianliebig 在 https:\u002F\u002Fgithub.com\u002Fshap\u002Fshap\u002Fpull\u002F4017 中完成了首次贡献。\n* @sunruslan 在 https:\u002F\u002Fgithub.com\u002Fshap\u002Fshap\u002Fpull\u002F3998 中完成了首次贡献。\n* @arhall0 在 https:\u002F\u002Fgithub.com\u002Fshap\u002Fshap\u002Fpull\u002F4006 中完成了首次贡献。\n\n**完整变更日志**: https:\u002F\u002Fgithub.com\u002Fshap\u002Fshap\u002Fcompare\u002Fv0.47.0...v0.47.1","2025-03-22T20:53:30",{"id":209,"version":210,"summary_zh":211,"released_at":212},81081,"v0.47.0","\u003C!-- 发行说明由 .github\u002Frelease.yml 中的配置在 master 分支上生成 -->\n\n## 变更内容\n### 重大变更\n* 为旧版条形图添加弃用警告，并由 @connortann 在 https:\u002F\u002Fgithub.com\u002Fshap\u002Fshap\u002Fpull\u002F3739 中添加新的 Explainer API 迁移指南。\n### 新增功能\n* 由 @hypostulate 在 https:\u002F\u002Fgithub.com\u002Fshap\u002Fshap\u002Fpull\u002F3706 中为 shap.plots.scatter 添加分类支持。\n* 由 @sd3ntato 在 https:\u002F\u002Fgithub.com\u002Fshap\u002Fshap\u002Fpull\u002F2848 中引入图像图中的 vmax 参数。\n* 由 @chriscave 在 https:\u002F\u002Fgithub.com\u002Fshap\u002Fshap\u002Fpull\u002F3561 中推出可接受并返回坐标轴的新蜂群图绘图 API。\n* 由 @kalkairis 在 https:\u002F\u002Fgithub.com\u002Fshap\u002Fshap\u002Fpull\u002F2225 中实现无需其他特征之和即可创建蜂群图的功能。\n* 由 @connortann 在 https:\u002F\u002Fgithub.com\u002Fshap\u002Fshap\u002Fpull\u002F3788 中提供用于自定义可视化的新界面。\n* 由 @connortann 在 https:\u002F\u002Fgithub.com\u002Fshap\u002Fshap\u002Fpull\u002F3849 中允许为条形图设置自定义样式。\n* 由 @tylerjereddy 在 https:\u002F\u002Fgithub.com\u002Fshap\u002Fshap\u002Fpull\u002F3990 中实现 TreeExplainer 的数值敏感性。\n* 由 @tylerjereddy 在 https:\u002F\u002Fgithub.com\u002Fshap\u002Fshap\u002Fpull\u002F3944 中优化非树模型的 KernelExplainer 性能，使其运行更快。\n### 修复\n* 由 @CloseChoice 在 https:\u002F\u002Fgithub.com\u002Fshap\u002Fshap\u002Fpull\u002F3917 中修复 KernelExplainer 中的 logit 错误。\n* 由 @bedapisl 在 https:\u002F\u002Fgithub.com\u002Fshap\u002Fshap\u002Fpull\u002F3738 中修复 summary_plot 中的 TypeError。\n* 由 @connortann 在 
https:\u002F\u002Fgithub.com\u002Fshap\u002Fshap\u002Fpull\u002F3785 中重新引入 skimage 0.24.0 中的 colorconv 库。\n* 由 @SFatemehM 在 https:\u002F\u002Fgithub.com\u002Fshap\u002Fshap\u002Fpull\u002F3636 中修复 shap.plots.image 中多行标签选项的问题。\n* 由 @costrau 在 https:\u002F\u002Fgithub.com\u002Fshap\u002Fshap\u002Fpull\u002F3578 中修复 transformers 相关问题。\n* 由 @thatlittleboy 在 https:\u002F\u002Fgithub.com\u002Fshap\u002Fshap\u002Fpull\u002F3838 中修复仅含关键字参数操作的 OpChain repr 问题。\n* 由 @46319943 在 https:\u002F\u002Fgithub.com\u002Fshap\u002Fshap\u002Fpull\u002F3836 中修复多分类情况下 summary_plot 的错误图表显示。\n* 由 @CloseChoice 在 https:\u002F\u002Fgithub.com\u002Fshap\u002Fshap\u002Fpull\u002F3925 中修复多分类情况下的 summary_plot 问题。\n* 由 @CloseChoice 在 https:\u002F\u002Fgithub.com\u002Fshap\u002Fshap\u002Fpull\u002F3909 中修复颜色映射问题。\n### 文档\n* 由 @TommyGiak 在 https:\u002F\u002Fgithub.com\u002Fshap\u002Fshap\u002Fpull\u002F3846 中修复 shap.datasets.communitiesandcrime 文档中的错误。\n* 由 @operte 在 https:\u002F\u002Fgithub.com\u002Fshap\u002Fshap\u002Fpull\u002F3749 中修正 Understanding Tree SHAP 笔记本中的列索引。\n* 由 @Xovee 在 https:\u002F\u002Fgithub.com\u002Fshap\u002Fshap\u002Fpull\u002F3752 中重新格式化 API 示例中的散点图笔记本。\n* 由 @connortann 在 https:\u002F\u002Fgithub.com\u002Fshap\u002Fshap\u002Fpull\u002F3811 中改进散点图的类型注解和文档。\n* 由 @thatlittleboy 在 https:\u002F\u002Fgithub.com\u002Fshap\u002Fshap\u002Fpull\u002F3851 中使用 intersphinx 处理部分外部链接。\n* 由 @connortann 在 https:\u002F\u002Fgithub.com\u002Fshap\u002Fshap\u002Fpull\u002F3885 中锁定文档依赖项以提高可重复性。\n* 由 @davidefiocco 在 https:\u002F\u002Fgithub.com\u002Fshap\u002Fshap\u002Fpull\u002F3962 中修复带有 LightGBM 的 force plot 及其描述中的错别字。\n* 由 @CSantos01 在 https:\u002F\u002Fgithub.com\u002Fshap\u002Fshap\u002Fpull\u002F3957 中修复介绍部分某些章节标题的 Markdown 格式问题。\n* 由 @davidefiocco 在介绍笔记本中修复蜂群图的注释。","2025-03-05T06:49:38",{"id":214,"version":215,"summary_zh":216,"released_at":217},81082,"v0.46.0","## 变更内容\n此版本增加了对最新版本 NumPy 和 TensorFlow 的兼容性，并包含多项错误修复。\n\n### 新增功能\n* 添加了对 NumPy 2 的支持，由 @connortann 在 https:\u002F\u002Fgithub.com\u002Fshap\u002Fshap\u002Fpull\u002F3717 以及 @paulbkoch 在 https:\u002F\u002Fgithub.com\u002Fshap\u002Fshap\u002Fpull\u002F3704 中实现。\n* 添加了对 Keras 3 和 TensorFlow 2.16 的支持，由 @CloseChoice 在 https:\u002F\u002Fgithub.com\u002Fshap\u002Fshap\u002Fpull\u002F3677 中实现。\n\n### 变更\n* 移除了 `shap.summary_plot()` 中已弃用的 `auto_size_plot` 参数。\n\n### 修复\n* 修复了使用 `float16` 混合精度训练的模型解释问题，由 @CloseChoice 在 https:\u002F\u002Fgithub.com\u002Fshap\u002Fshap\u002Fpull\u002F3652 中完成。\n* 修复了 `XGBRegressor` 模型的反序列化 bug，由 @CloseChoice 在 https:\u002F\u002Fgithub.com\u002Fshap\u002Fshap\u002Fpull\u002F3669 中完成。\n\n此外，还进行了多项文档和代码质量方面的改进。\n\n## 新贡献者\n* @LetiP 在 https:\u002F\u002Fgithub.com\u002Fshap\u002Fshap\u002Fpull\u002F3685 中做出了首次贡献。\n* @paulbkoch 在 https:\u002F\u002Fgithub.com\u002Fshap\u002Fshap\u002Fpull\u002F3704 中做出了首次贡献。\n\n**完整变更日志**: https:\u002F\u002Fgithub.com\u002Fshap\u002Fshap\u002Fcompare\u002Fv0.45.1...v0.46.0","2024-06-27T10:06:50",{"id":219,"version":220,"summary_zh":221,"released_at":222},81083,"v0.45.1","这是一个补丁版本，包含若干错误修复。特别是修复了加载使用指数损失函数的 XGBoost 模型时出现的一个问题。\n\n## 变更内容\n### 新增\n* 由 @CloseChoice 在 https:\u002F\u002Fgithub.com\u002Fshap\u002Fshap\u002Fpull\u002F3617 中为 PyTorch Deep Explainer 添加了 SELU 激活函数。\n* 由 @sroener 在 https:\u002F\u002Fgithub.com\u002Fshap\u002Fshap\u002Fpull\u002F3571 中为热图绘制函数添加了 `ax` 参数。\n\n### 修改\n* 由 @LakshmanKishore 在 https:\u002F\u002Fgithub.com\u002Fshap\u002Fshap\u002Fpull\u002F3543 中移除了数据集函数中未使用的 `display` 参数。\n\n### 修复\n* 由 @CloseChoice 在 https:\u002F\u002Fgithub.com\u002Fshap\u002Fshap\u002Fpull\u002F3616 
中修复了加载使用指数损失函数的 XGBoost 模型的问题。\n* 由 @CloseChoice 在 https:\u002F\u002Fgithub.com\u002Fshap\u002Fshap\u002Fpull\u002F3558 中修复了 Deep Explainer 的调用接口。\n* 由 @CloseChoice 在 https:\u002F\u002Fgithub.com\u002Fshap\u002Fshap\u002Fpull\u002F3592 中修复了使用 Falcon 语言模型进行文本生成的问题。\n* 由 @bewygs 在 https:\u002F\u002Fgithub.com\u002Fshap\u002Fshap\u002Fpull\u002F3632 中修复了 LightGBM 的编译问题（macOS 工作流）。\n* 由 @CloseChoice 在 https:\u002F\u002Fgithub.com\u002Fshap\u002Fshap\u002Fpull\u002F3616 中再次修复了加载使用指数损失函数的 XGBoost 模型的问题。\n\n此外，还有若干文档和维护方面的更新，由 @bewygs、@CloseChoice 和 @Hugh-OBrien 完成。\n\n## 新贡献者\n* @sroener 在 https:\u002F\u002Fgithub.com\u002Fshap\u002Fshap\u002Fpull\u002F3571 中完成了首次贡献。\n* @Hugh-OBrien 在 https:\u002F\u002Fgithub.com\u002Fshap\u002Fshap\u002Fpull\u002F3604 中完成了首次贡献。\n* @bewygs 在 https:\u002F\u002Fgithub.com\u002Fshap\u002Fshap\u002Fpull\u002F3632 中完成了首次贡献。\n\n**完整变更日志**: https:\u002F\u002Fgithub.com\u002Fshap\u002Fshap\u002Fcompare\u002Fv0.45.0...v0.45.1","2024-05-07T11:32:42",{"id":224,"version":225,"summary_zh":226,"released_at":227},81084,"v0.45.0","This is a fairly significant release containing a number of breaking changes.\r\n\r\nThank you to a number of new contributors for their contributions to this release! We are eager to grow the pool of maintainers, so please do get in touch on #3559 if you are interested in being part of the team.\r\n\r\n## What's Changed\r\n\r\n### Breaking changes\r\n* Dropped support for 3.8 in https:\u002F\u002Fgithub.com\u002Fshap\u002Fshap\u002Fpull\u002F3414\r\n* Changed type and shape of returned SHAP values in some cases, to be consistent with model outputs. SHAP values for models with multiple outputs are now np.ndarray rather than list, by @CloseChoice in https:\u002F\u002Fgithub.com\u002Fshap\u002Fshap\u002Fpull\u002F3318\r\n* Removed deprecated `feature_dependence` parameters in TreeExplainer and LinearExplainer by @thatlittleboy in https:\u002F\u002Fgithub.com\u002Fshap\u002Fshap\u002Fpull\u002F3340\r\n* Removed deprecated alias for Coefficient by @connortann in https:\u002F\u002Fgithub.com\u002Fshap\u002Fshap\u002Fpull\u002F3511\r\n\r\n### Added\r\n* Added support for python 3.12 by @connortann in https:\u002F\u002Fgithub.com\u002Fshap\u002Fshap\u002Fpull\u002F3414\r\n* Added support for GPU build on recent CUDA versions by @trivialfis in https:\u002F\u002Fgithub.com\u002Fshap\u002Fshap\u002Fpull\u002F3462\r\n* 2x import time speedup via lazy importing of pytorch by @connortann in https:\u002F\u002Fgithub.com\u002Fshap\u002Fshap\u002Fpull\u002F3533\r\n* Added support returning the matplotlib figure in bar plots by @richarddli in https:\u002F\u002Fgithub.com\u002Fshap\u002Fshap\u002Fpull\u002F3494\r\n* Added selu activation for tensorflow deep explainer by @CloseChoice in https:\u002F\u002Fgithub.com\u002Fshap\u002Fshap\u002Fpull\u002F3504\r\n* Added support for special characters in catboost models by @CloseChoice in https:\u002F\u002Fgithub.com\u002Fshap\u002Fshap\u002Fpull\u002F3506\r\n* Added ability to control marker size in `beeswarm` plots by @MonoHue in https:\u002F\u002Fgithub.com\u002Fshap\u002Fshap\u002Fpull\u002F3530\r\n\r\n### Fixed\r\n* Fixed XGBoost model load by @trivialfis in https:\u002F\u002Fgithub.com\u002Fshap\u002Fshap\u002Fpull\u002F3462\r\n* Fixed text masking with certain tokenizers by @costrau in https:\u002F\u002Fgithub.com\u002Fshap\u002Fshap\u002Fpull\u002F3536\r\n* Fixed issue with KernelExplainer when explaining tensorflow models by @connortann in https:\u002F\u002Fgithub.com\u002Fshap\u002Fshap\u002Fpull\u002F3542\r\n* 
Fixed force_plot contribution threshold for negative contributions by @connortann in https:\u002F\u002Fgithub.com\u002Fshap\u002Fshap\u002Fpull\u002F3547\r\n* Removed overwrite of default warning filter or formatter by @connortann in https:\u002F\u002Fgithub.com\u002Fshap\u002Fshap\u002Fpull\u002F3514\r\n\r\n.. plus a large number of documentation, testing and other maintenance updates by @CloseChoice , @yuanx749 , @LakshmanKishore  and others.\r\n\r\n## New Contributors\r\n\r\n* @richarddli made their first contribution in https:\u002F\u002Fgithub.com\u002Fshap\u002Fshap\u002Fpull\u002F3494\r\n* @yuanx749 made their first contribution in https:\u002F\u002Fgithub.com\u002Fshap\u002Fshap\u002Fpull\u002F3458\r\n* @LakshmanKishore made their first contribution in https:\u002F\u002Fgithub.com\u002Fshap\u002Fshap\u002Fpull\u002F3393\r\n* @trivialfis made their first contribution in https:\u002F\u002Fgithub.com\u002Fshap\u002Fshap\u002Fpull\u002F3462\r\n* @DanGolding made their first contribution in https:\u002F\u002Fgithub.com\u002Fshap\u002Fshap\u002Fpull\u002F3526\r\n* @MonoHue made their first contribution in https:\u002F\u002Fgithub.com\u002Fshap\u002Fshap\u002Fpull\u002F3530\r\n* @costrau made their first contribution in https:\u002F\u002Fgithub.com\u002Fshap\u002Fshap\u002Fpull\u002F3536\r\n\r\n**Full Changelog**: https:\u002F\u002Fgithub.com\u002Fshap\u002Fshap\u002Fcompare\u002Fv0.44.1...v0.45.0","2024-03-08T11:43:15",{"id":229,"version":230,"summary_zh":231,"released_at":232},81085,"v0.44.1","\u003C!-- Release notes generated using configuration in .github\u002Frelease.yml at master -->\r\n\r\nPatch release to fix an issue with the display of force plots.\r\n\r\n### Fixed\r\n* Fixed HTML issue affecting display of force plots by @CloseChoice in https:\u002F\u002Fgithub.com\u002Fshap\u002Fshap\u002Fpull\u002F3464\r\n* Fixed calculation of interactions values for catboost regressors by @CloseChoice in https:\u002F\u002Fgithub.com\u002Fshap\u002Fshap\u002Fpull\u002F3459\r\n* Update XGBoost parsing to use ubjson format, replacing deprecated binary format by @CloseChoice in https:\u002F\u002Fgithub.com\u002Fshap\u002Fshap\u002Fpull\u002F3345\r\n### Other\r\n* Further improvements to documentation\r\n\r\n**Full Changelog**: https:\u002F\u002Fgithub.com\u002Fshap\u002Fshap\u002Fcompare\u002Fv0.44.0...v0.44.1","2024-01-25T13:08:31",{"id":234,"version":235,"summary_zh":236,"released_at":237},81086,"v0.44.0","This release contains a number enhancements and bug fixes.\r\n\r\n## What's Changed\r\n\r\n### Added\r\n* Faster and more stable linear solver in KernelShap by @lorentzenchr in https:\u002F\u002Fgithub.com\u002Fshap\u002Fshap\u002Fpull\u002F3271\r\n* Enabled passing of `ax` to `group_difference()` plot by @mtlulka in https:\u002F\u002Fgithub.com\u002Fshap\u002Fshap\u002Fpull\u002F3355\r\n* Added support for Explanation.display_data in bar plot by @fountaindive in https:\u002F\u002Fgithub.com\u002Fshap\u002Fshap\u002Fpull\u002F3386\r\n* Improved build messages when building from source by @connortann in https:\u002F\u002Fgithub.com\u002Fshap\u002Fshap\u002Fpull\u002F3403\r\n\r\n### Fixed\r\n* Fixed `CatboostClassifier` explanations with feature interactions on Windows by @CloseChoice in https:\u002F\u002Fgithub.com\u002Fshap\u002Fshap\u002Fpull\u002F3325\r\n* Fixed passing of Xgboost model parameters to xgboost.DMatrix by @vancromy in https:\u002F\u002Fgithub.com\u002Fshap\u002Fshap\u002Fpull\u002F3314\r\n* Explicit specification of xgboost>=1.4 constraint by @mtlulka in 
https:\u002F\u002Fgithub.com\u002Fshap\u002Fshap\u002Fpull\u002F3352\r\n* Fixed conversion of DMatrix to CSR matrix by @thatlittleboy in https:\u002F\u002Fgithub.com\u002Fshap\u002Fshap\u002Fpull\u002F3359\r\n* Removed deprecated `use_line_collection` in `dependence_plot` by @CloseChoice in https:\u002F\u002Fgithub.com\u002Fshap\u002Fshap\u002Fpull\u002F3369\r\n* Fixed bug relating to array reshaping in `scatter` plots by @SomeUserName1 in https:\u002F\u002Fgithub.com\u002Fshap\u002Fshap\u002Fpull\u002F2799\r\n\r\n### Documentation\r\n* A large number of [example notebooks](https:\u002F\u002Fshap.readthedocs.io\u002Fen\u002Flatest) fixed and updated by @connortann, @znacer , @thatlittleboy, @CloseChoice and @stompsjo\r\n\r\n## New Contributors\r\n* @vancromy made their first contribution in https:\u002F\u002Fgithub.com\u002Fshap\u002Fshap\u002Fpull\u002F3314\r\n* @lorentzenchr made their first contribution in https:\u002F\u002Fgithub.com\u002Fshap\u002Fshap\u002Fpull\u002F3271\r\n* @mtlulka made their first contribution in https:\u002F\u002Fgithub.com\u002Fshap\u002Fshap\u002Fpull\u002F3352\r\n* @fountaindive made their first contribution in https:\u002F\u002Fgithub.com\u002Fshap\u002Fshap\u002Fpull\u002F3386\r\n* @SomeUserName1 made their first contribution in https:\u002F\u002Fgithub.com\u002Fshap\u002Fshap\u002Fpull\u002F2799\r\n* @stompsjo made their first contribution in https:\u002F\u002Fgithub.com\u002Fshap\u002Fshap\u002Fpull\u002F3391\r\n\r\n**Full Changelog**: https:\u002F\u002Fgithub.com\u002Fshap\u002Fshap\u002Fcompare\u002Fv0.43.0...V0.44.0","2023-12-07T12:14:26",{"id":239,"version":240,"summary_zh":241,"released_at":242},81087,"v0.43.0","## What's Changed\r\n\r\nThis release contains a number of bug fixes and improvements.\r\n\r\nFollowing the [NEP 29 deprecation policy](https:\u002F\u002Fnumpy.org\u002Fneps\u002Fnep-0029-deprecation_policy.html), this release drops support for python 3.7.\r\n\r\n### Breaking changes\r\n\r\n* Removed the deprecated Boston dataset by @thatlittleboy in https:\u002F\u002Fgithub.com\u002Fshap\u002Fshap\u002Fpull\u002F3316\r\n* The shape of `Explanation.base_values` has been standardised between different TreeExplainer models to always be of shape `(N,)` and not `(N,1)`. By @thatlittleboy in https:\u002F\u002Fgithub.com\u002Fshap\u002Fshap\u002Fpull\u002F3121\r\n\r\n### Added\r\n* Added additivity check to Pytorch DeepExplainer (activated by default) by @noxthot in https:\u002F\u002Fgithub.com\u002Fshap\u002Fshap\u002Fpull\u002F3265\r\n* Added flag to allow the printing of the mean SHAP value in the legend of a multi-output bar plot. 
By @101AlexMartin in https:\u002F\u002Fgithub.com\u002Fshap\u002Fshap\u002Fpull\u002F3062\r\n* Added heatmap and violin plot to top-level API by @connortann in https:\u002F\u002Fgithub.com\u002Fshap\u002Fshap\u002Fpull\u002F3157\r\n* Replaced all tqdm imports with tqdm.auto by @owenlamont in https:\u002F\u002Fgithub.com\u002Fshap\u002Fshap\u002Fpull\u002F3199\r\n\r\n### Fixed\r\n* Fixed segmentation faults on MacOS with lightgbm tests (with newer libomp versions) by @thatlittleboy in https:\u002F\u002Fgithub.com\u002Fshap\u002Fshap\u002Fpull\u002F3093\r\n* Support LightGBM ensemble containing single leaf trees (stump) by @thatlittleboy in https:\u002F\u002Fgithub.com\u002Fshap\u002Fshap\u002Fpull\u002F3094\r\n* Fixed conversion DataFrame to ndarray for Explanation.data by @danieleongari in https:\u002F\u002Fgithub.com\u002Fshap\u002Fshap\u002Fpull\u002F3131\r\n* Fixed waterfall plot on explanations of sklearn tree models by @connortann in https:\u002F\u002Fgithub.com\u002Fshap\u002Fshap\u002Fpull\u002F3138\r\n* Fixed pandas input for gradient explainer by @Koen-Git in https:\u002F\u002Fgithub.com\u002Fshap\u002Fshap\u002Fpull\u002F3153\r\n* Fixed slicing of `feature_names` in Explanation objects with square `.values` by @thatlittleboy in https:\u002F\u002Fgithub.com\u002Fshap\u002Fshap\u002Fpull\u002F3126\r\n* Correct xlim in force_matplotlib in cases where the signs of force are all the same by @zaburo-ch in https:\u002F\u002Fgithub.com\u002Fshap\u002Fshap\u002Fpull\u002F2839\r\n* Fixed ngboost explanations when col_sample \u003C 1 by @CloseChoice in https:\u002F\u002Fgithub.com\u002Fshap\u002Fshap\u002Fpull\u002F3294\r\n* Fixed torch additivity check in PyTorch DeepExplainer  by @noxthot in https:\u002F\u002Fgithub.com\u002Fshap\u002Fshap\u002Fpull\u002F3281\r\n* Replaced print statements with warnings in DeepExplainer by @znacer in https:\u002F\u002Fgithub.com\u002Fshap\u002Fshap\u002Fpull\u002F3264\r\n* Replace deprecated `register_backward_hook()` by @noxthot in https:\u002F\u002Fgithub.com\u002Fshap\u002Fshap\u002Fpull\u002F3259\r\n* Fixed deprecated use of xgboost early_stopping_rounds by @CloseChoice in https:\u002F\u002Fgithub.com\u002Fshap\u002Fshap\u002Fpull\u002F3306\r\n* Fixed 3rd party deprecation warnings: numba, xgboost, typing, distutils by @connortann in https:\u002F\u002Fgithub.com\u002Fshap\u002Fshap\u002Fpull\u002F3084\r\n* Updated the Javascript bundle to update deprecated dependencies by @connortann in https:\u002F\u002Fgithub.com\u002Fshap\u002Fshap\u002Fpull\u002F2974\r\n\r\nThere have also been a large number of improvements to the tutorials and examples, by @connortann, @znacer, @arshiaar, @thatlittleboy, @dsgibbons, @owenlamont and @CloseChoice\r\n\r\n## New Contributors\r\n* @101AlexMartin made their first contribution in https:\u002F\u002Fgithub.com\u002Fshap\u002Fshap\u002Fpull\u002F3062\r\n* @znacer made their first contribution in https:\u002F\u002Fgithub.com\u002Fshap\u002Fshap\u002Fpull\u002F3112\r\n* @danieleongari made their first contribution in https:\u002F\u002Fgithub.com\u002Fshap\u002Fshap\u002Fpull\u002F3131\r\n* @Koen-Git made their first contribution in https:\u002F\u002Fgithub.com\u002Fshap\u002Fshap\u002Fpull\u002F3153\r\n* @pre-commit-ci made their first contribution in https:\u002F\u002Fgithub.com\u002Fshap\u002Fshap\u002Fpull\u002F3173\r\n* @owenlamont made their first contribution in https:\u002F\u002Fgithub.com\u002Fshap\u002Fshap\u002Fpull\u002F3199\r\n* @arshiaar made their first contribution in 
https:\u002F\u002Fgithub.com\u002Fshap\u002Fshap\u002Fpull\u002F3201\r\n* @dsgibbons made their first contribution in https:\u002F\u002Fgithub.com\u002Fshap\u002Fshap\u002Fpull\u002F3200\r\n* @noxthot made their first contribution in https:\u002F\u002Fgithub.com\u002Fshap\u002Fshap\u002Fpull\u002F3265\r\n* @zaburo-ch made their first contribution in https:\u002F\u002Fgithub.com\u002Fshap\u002Fshap\u002Fpull\u002F2839\r\n* @CloseChoice made their first contribution in https:\u002F\u002Fgithub.com\u002Fshap\u002Fshap\u002Fpull\u002F3282\r\n\r\n**Full Changelog**: https:\u002F\u002Fgithub.com\u002Fshap\u002Fshap\u002Fcompare\u002Fv0.42.1...v0.43.0","2023-10-09T09:41:17",{"id":244,"version":245,"summary_zh":246,"released_at":247},81088,"v0.42.1","Patch release to provide wheels for a broader range of architectures.\r\n\r\n### Added\r\n\r\n* Added wheels for linux:aarch64 and macos:arm64  by @PrimozGodec in https:\u002F\u002Fgithub.com\u002Fslundberg\u002Fshap\u002Fpull\u002F3078 and @connortann in https:\u002F\u002Fgithub.com\u002Fslundberg\u002Fshap\u002Fpull\u002F3083.\r\n\r\n### Fixed\r\n\r\n* Fixed circular import issues with shap.benchmark by @thatlittleboy in https:\u002F\u002Fgithub.com\u002Fslundberg\u002Fshap\u002Fpull\u002F3076.\r\n* Fixed TestPyPI releases workflow by @connortann in https:\u002F\u002Fgithub.com\u002Fslundberg\u002Fshap\u002Fpull\u002F3068\r\n* Fix further flaky tests by @thatlittleboy in https:\u002F\u002Fgithub.com\u002Fslundberg\u002Fshap\u002Fpull\u002F3073\r\n* Fix shap.summary_plot to work with matplotlib 3.6.0 by @jklaise in https:\u002F\u002Fgithub.com\u002Fslundberg\u002Fshap\u002Fpull\u002F2697\r\n* Fix benchmark top-level import by @thatlittleboy in https:\u002F\u002Fgithub.com\u002Fslundberg\u002Fshap\u002Fpull\u002F3076\r\n* Fix ipython import warning from top-level shap import by @connortann in https:\u002F\u002Fgithub.com\u002Fslundberg\u002Fshap\u002Fpull\u002F3090\r\n\r\n**Full Changelog**: https:\u002F\u002Fgithub.com\u002Fslundberg\u002Fshap\u002Fcompare\u002Fv0.42.0...v0.42.1","2023-07-15T11:15:28",{"id":249,"version":250,"summary_zh":251,"released_at":252},81089,"v0.42.0","This release incorporates many changes that were originally contributed by the SHAP community via @dsgibbons's [Community Fork][fork], which has now been merged into the main shap repository. 
PRs from this origin are labelled here as `fork#123`.\r\n\r\nThis will be the last release that supports python 3.7.\r\n\r\n[fork]: https:\u002F\u002Fgithub.com\u002Fslundberg\u002Fshap\u002Fdiscussions\u002F2942\r\n\r\n### Added\r\n\r\n- Added support for python 3.11 ([fork#72](https:\u002F\u002Fgithub.com\u002Fdsgibbons\u002Fshap\u002Fpull\u002F72) by @connortann).\r\n- Added `n_points` parameter to all functions in `shap.datasets` ([fork#39](https:\u002F\u002Fgithub.com\u002Fdsgibbons\u002Fshap\u002Fpull\u002F39) by @thatlittleboy).\r\n- Added `__call__` to `KernelExplainer` ([#2966](https:\u002F\u002Fgithub.com\u002Fslundberg\u002Fshap\u002Fpull\u002F2966) by @dwolfeu).\r\n- Added [contributing guidelines][contrib-guide]  ([#2996](https:\u002F\u002Fgithub.com\u002Fslundberg\u002Fshap\u002Fpull\u002F2996) by @connortann).\r\n\r\n[contrib-guide]: [https:\u002F\u002Fgithub.com\u002Fslundberg\u002Fshap\u002Fblob\u002Fmaster\u002FCONTRIBUTING.md]\r\n\r\n### Fixed\r\n\r\n- Fixed `plot.waterfall` to support yticklabels with boolean features ([fork#58](https:\u002F\u002Fgithub.com\u002Fdsgibbons\u002Fshap\u002Fpull\u002F58) by @dwolfeu).\r\n- Prevent `TreeExplainer.__call__` from throwing ValueError when passed a pandas DataFrame containing Categorical columns ([fork#88](https:\u002F\u002Fgithub.com\u002Fdsgibbons\u002Fshap\u002Fpull\u002F88) by @thatlittleboy).\r\n- Fixed sampling in `shap.datasets` to sample without replacement ([fork#36](https:\u002F\u002Fgithub.com\u002Fdsgibbons\u002Fshap\u002Fpull\u002F36) by @thatlittleboy).\r\n- Fixed an `UnboundLocalError` problem arising from passing a dictionary input to `shap.plots.bar` ([#3001](https:\u002F\u002Fgithub.com\u002Fslundberg\u002Fshap\u002Fpull\u002F3000) by @thatlittleboy).\r\n- Fixed tensorflow import issue with Pyspark when using `Gradient`  ([#2983](https:\u002F\u002Fgithub.com\u002Fslundberg\u002Fshap\u002Fpull\u002F2983) by @skamdar).\r\n- Fixed the aspect ratio of the colorbar in `shap.plots.heatmap`, and use the `ax` matplotlib API internally for plotting ([#3040](https:\u002F\u002Fgithub.com\u002Fslundberg\u002Fshap\u002Fpull\u002F3040) by @thatlittleboy).\r\n- Fixed deprecation warnings for `numba>=0.44` ([fork#9](https:\u002F\u002Fgithub.com\u002Fdsgibbons\u002Fshap\u002Fpull\u002F9) and [fork#68](https:\u002F\u002Fgithub.com\u002Fdsgibbons\u002Fshap\u002Fpull\u002F68) by @connortann).\r\n- Fixed deprecation warnings for `numpy>=1.24` from numpy types ([fork#7](https:\u002F\u002Fgithub.com\u002Fdsgibbons\u002Fshap\u002Fpull\u002F7) by @dsgibbons).\r\n- Fixed deprecation warnings for `Ipython>=8` from `Ipython.core.display` ([fork#13](https:\u002F\u002Fgithub.com\u002Fdsgibbons\u002Fshap\u002Fpull\u002F13) by @thatlittleboy).\r\n- Fixed deprecation warnings for `tensorflow>=2.11` from `tf.optimisers` ([fork#16](https:\u002F\u002Fgithub.com\u002Fdsgibbons\u002Fshap\u002Fpull\u002F16) by @simonangerbauer).\r\n- Fixed deprecation warnings for `sklearn>=1.2` from `sklearn.linear_model` ([fork#22](https:\u002F\u002Fgithub.com\u002Fdsgibbons\u002Fshap\u002Fpull\u002F22) by @dsgibbons).\r\n- Fixed deprecation warnings for `xgboost>=1.4` from `ntree_limit` in tree explainer ([#2987](https:\u002F\u002Fgithub.com\u002Fslundberg\u002Fshap\u002Fpull\u002F2987) by @adnene-guessoum).\r\n- Fixed build on Windows and MacOS ([#3015](https:\u002F\u002Fgithub.com\u002Fslundberg\u002Fshap\u002Fpull\u002F3015) by @PrimozGodec; [#3028](https:\u002F\u002Fgithub.com\u002Fslundberg\u002Fshap\u002Fpull\u002F3028),  
[#3029](https:\u002F\u002Fgithub.com\u002Fslundberg\u002Fshap\u002Fpull\u002F3029) and  [#3031](https:\u002F\u002Fgithub.com\u002Fslundberg\u002Fshap\u002Fpull\u002F3031) by @connortann).\r\n- Fixed creation of ragged arrays in `shap.explainers.Exact` ([#3064](https:\u002F\u002Fgithub.com\u002Fslundberg\u002Fshap\u002Fpull\u002F3064) by @connortann).\r\n\r\n### Changed\r\n\r\n- Updates to docstrings of several `shap.plots` functions  ([#3003](https:\u002F\u002Fgithub.com\u002Fslundberg\u002Fshap\u002Fpull\u002F3003),  [#3005](https:\u002F\u002Fgithub.com\u002Fslundberg\u002Fshap\u002Fpull\u002F3005) by @thatlittleboy).\r\n\r\n### Removed\r\n\r\n- Deprecated the Boston house price dataset ([fork#38](https:\u002F\u002Fgithub.com\u002Fdsgibbons\u002Fshap\u002Fpull\u002F38) by @thatlittleboy).\r\n- Removed the unused `mimic.py` file and `MimicExplainer` code ([fork#53](https:\u002F\u002Fgithub.com\u002Fdsgibbons\u002Fshap\u002Fpull\u002F53) by @thatlittleboy).\r\n\r\n### Maintenance\r\n\r\n- Fixed failing unit tests  ([fork#29](https:\u002F\u002Fgithub.com\u002Fdsgibbons\u002Fshap\u002Fpull\u002F29) by @dsgibbons,  [fork#20](https:\u002F\u002Fgithub.com\u002Fdsgibbons\u002Fshap\u002Fpull\u002F20) by @simonangerbauer, [#3044](https:\u002F\u002Fgithub.com\u002Fslundberg\u002Fshap\u002Fpull\u002F3044) and  [fork#24](https:\u002F\u002Fgithub.com\u002Fdsgibbons\u002Fshap\u002Fpull\u002F24) by @connortann).\r\n- Include CUDA GPU C extension files in the source distribution ([#3009](https:\u002F\u002Fgithub.com\u002Fslundberg\u002Fshap\u002Fpull\u002F3009) by @jklaise).\r\n- Fixed installation of package via setuptools ([fork#51](https:\u002F\u002Fgithub.com\u002Fdsgibbons\u002Fshap\u002Fpull\u002F51) by @thatlittleboy).\r\n- Introduced a minimal set of `ruff` linting  ([fork#25](https:\u002F\u002Fgithub.com\u002Fdsgibbons\u002Fshap\u002Fpull\u002F25), [fork#26](https:\u002F\u002Fgithub.com\u002Fdsgibbons\u002Fshap\u002Fpull\u002F26), [fork#27](https:\u002F\u002Fgithub.com\u002Fdsgibbons\u002Fshap\u002Fpull\u002F27), [#2973](https:\u002F\u002Fgithub.com\u002Fslundberg\u002Fshap\u002Fpull\u002F2973), [#2972](https:\u002F\u002Fgithub.com\u002Fslundberg\u002Fshap\u002Fpull\u002F2972) and [#2976](https:\u002F\u002Fgithub.com\u002Fslundberg\u002Fshap\u002Fpull\u002F2976) by @connortann;  [#2968](https:\u002F\u002Fgithub.com\u002Fslundberg\u002Fshap\u002Fpull\u002F2968), [#2986](https:\u002F\u002Fgithub.com\u002Fslundberg\u002Fshap\u002Fpull\u002F2986) by @thatlittleboy).\r\n- Updated project metadata to PEP 517 ([#3022](https:\u002F\u002Fgithub.com\u002Fslundberg\u002Fshap\u002Fpull\u002F3022) by @connortann).\r\n- Introduced more thorough testing ","2023-07-06T17:38:22",{"id":254,"version":255,"summary_zh":256,"released_at":257},81090,"v0.41.0","Lots of bugs fixes and API improvements.\r\n\r\n- Fixed rare bug with XGBoost model loading by @TheZL @lrjball \r\n- Fixed the beeswarm plot so it does not modify the passed explanation object, @ravwojdyla \r\n- Automatic wheel building using GH actions by @quantumtec \r\n- GC collection for memory in KernelExplainer by @Qingtian-Zou \r\n- Fixed max_evals params for PartitionExplainer\r\n- JIT optimize the PartitionExplainer\r\n- Fix colorbar formatting issues @SleepyPepperHead \r\n- New benchmark notebooks\r\n- Use display_data for plotting when possible @yuuuxt \r\n- Improved GPUTreeShap compilation and params @RAMitchell \r\n- Fix TF API change in DeepExplainer @filusn \r\n- Add torch tensor support for plots @alexander-pv \r\n- Switch to Github actions for 
testing instead of Travis\r\n- New California demo dataset @swalsh1123 \r\n- Fix waterfall plot bug @RichardScottOZ \r\n- Handle missing matplotlib installation @klieret \r\n- Add linearize link support for Additive explainer (Nandish Gupta)\r\n- Fix exceptions to be more specific @alexisdrakopoulos @collinb9\r\n- Add color map option for plotting @tlabarta \r\n- Release fixed numpy version requirement @rmehyde \r\n- And many other contributions kindly made by @WeichenXu123 @imatiach-msft @zeshengli @nkthiebaut @songololo @GiovannaNicora @joshzwiebel @Ashishbodla @navdeep-G @smathewmanuel @ycouble @anubhavmaity @adityasaini70 @ngupta20 @jckkvs @abs428 @JulesCollenne @Tiagosf00 @javirandor and @Thuener ","2022-06-16T00:31:04",{"id":259,"version":260,"summary_zh":261,"released_at":262},81091,"v0.40.0","This release contains many bugs fixes and lots of new functionality, specifically for transformer based NLP models. Some highlights include:\r\n- New plots, bug fixes, docs, and features for NLP model explanations (see docs for details).\r\n- important permutation explainer performance fix by @sander-sn \r\n- New joint scatter plots to plot many at once on the same y-scale\r\n- better tree model memory usage by @morriskurz \r\n- new docs by @coryroyce \r\n- new wheel building by @PrimozGodec \r\n- dark mode improvements for the docs by @gialmisi \r\n- api tweaks by @c56pony @nsorros @jebarb ","2021-10-20T18:36:57",{"id":264,"version":265,"summary_zh":266,"released_at":267},81092,"v0.39.0","Lots of new text explainer work courtesy of @ryserrao and serialization courtesy of @vivekchettiar! (will note all the other changes later)","2021-03-03T18:38:33",{"id":269,"version":270,"summary_zh":271,"released_at":272},81093,"v0.38.1","Fixes a version mismatch with the v0.38.0 release and serialization updates.","2021-01-15T18:02:54"]