[{"data":1,"prerenderedAt":-1},["ShallowReactive",2],{"similar-MAIF--shapash":3,"tool-MAIF--shapash":61},[4,18,26,36,44,53],{"id":5,"name":6,"github_repo":7,"description_zh":8,"stars":9,"difficulty_score":10,"last_commit_at":11,"category_tags":12,"status":17},4358,"openclaw","openclaw\u002Fopenclaw","OpenClaw 是一款专为个人打造的本地化 AI 助手，旨在让你在自己的设备上拥有完全可控的智能伙伴。它打破了传统 AI 助手局限于特定网页或应用的束缚，能够直接接入你日常使用的各类通讯渠道，包括微信、WhatsApp、Telegram、Discord、iMessage 等数十种平台。无论你在哪个聊天软件中发送消息，OpenClaw 都能即时响应，甚至支持在 macOS、iOS 和 Android 设备上进行语音交互，并提供实时的画布渲染功能供你操控。\n\n这款工具主要解决了用户对数据隐私、响应速度以及“始终在线”体验的需求。通过将 AI 部署在本地，用户无需依赖云端服务即可享受快速、私密的智能辅助，真正实现了“你的数据，你做主”。其独特的技术亮点在于强大的网关架构，将控制平面与核心助手分离，确保跨平台通信的流畅性与扩展性。\n\nOpenClaw 非常适合希望构建个性化工作流的技术爱好者、开发者，以及注重隐私保护且不愿被单一生态绑定的普通用户。只要具备基础的终端操作能力（支持 macOS、Linux 及 Windows WSL2），即可通过简单的命令行引导完成部署。如果你渴望拥有一个懂你",349277,3,"2026-04-06T06:32:30",[13,14,15,16],"Agent","开发框架","图像","数据工具","ready",{"id":19,"name":20,"github_repo":21,"description_zh":22,"stars":23,"difficulty_score":10,"last_commit_at":24,"category_tags":25,"status":17},3808,"stable-diffusion-webui","AUTOMATIC1111\u002Fstable-diffusion-webui","stable-diffusion-webui 是一个基于 Gradio 构建的网页版操作界面，旨在让用户能够轻松地在本地运行和使用强大的 Stable Diffusion 图像生成模型。它解决了原始模型依赖命令行、操作门槛高且功能分散的痛点，将复杂的 AI 绘图流程整合进一个直观易用的图形化平台。\n\n无论是希望快速上手的普通创作者、需要精细控制画面细节的设计师，还是想要深入探索模型潜力的开发者与研究人员，都能从中获益。其核心亮点在于极高的功能丰富度：不仅支持文生图、图生图、局部重绘（Inpainting）和外绘（Outpainting）等基础模式，还独创了注意力机制调整、提示词矩阵、负向提示词以及“高清修复”等高级功能。此外，它内置了 GFPGAN 和 CodeFormer 等人脸修复工具，支持多种神经网络放大算法，并允许用户通过插件系统无限扩展能力。即使是显存有限的设备，stable-diffusion-webui 也提供了相应的优化选项，让高质量的 AI 艺术创作变得触手可及。",162132,"2026-04-05T11:01:52",[14,15,13],{"id":27,"name":28,"github_repo":29,"description_zh":30,"stars":31,"difficulty_score":32,"last_commit_at":33,"category_tags":34,"status":17},1381,"everything-claude-code","affaan-m\u002Feverything-claude-code","everything-claude-code 是一套专为 AI 编程助手（如 Claude Code、Codex、Cursor 等）打造的高性能优化系统。它不仅仅是一组配置文件，而是一个经过长期实战打磨的完整框架，旨在解决 AI 
代理在实际开发中面临的效率低下、记忆丢失、安全隐患及缺乏持续学习能力等核心痛点。\n\n通过引入技能模块化、直觉增强、记忆持久化机制以及内置的安全扫描功能，everything-claude-code 能显著提升 AI 在复杂任务中的表现，帮助开发者构建更稳定、更智能的生产级 AI 代理。其独特的“研究优先”开发理念和针对 Token 消耗的优化策略，使得模型响应更快、成本更低，同时有效防御潜在的攻击向量。\n\n这套工具特别适合软件开发者、AI 研究人员以及希望深度定制 AI 工作流的技术团队使用。无论您是在构建大型代码库，还是需要 AI 协助进行安全审计与自动化测试，everything-claude-code 都能提供强大的底层支持。作为一个曾荣获 Anthropic 黑客大奖的开源项目，它融合了多语言支持与丰富的实战钩子（hooks），让 AI 真正成长为懂上",154349,2,"2026-04-13T23:32:16",[14,13,35],"语言模型",{"id":37,"name":38,"github_repo":39,"description_zh":40,"stars":41,"difficulty_score":32,"last_commit_at":42,"category_tags":43,"status":17},2271,"ComfyUI","Comfy-Org\u002FComfyUI","ComfyUI 是一款功能强大且高度模块化的视觉 AI 引擎，专为设计和执行复杂的 Stable Diffusion 图像生成流程而打造。它摒弃了传统的代码编写模式，采用直观的节点式流程图界面，让用户通过连接不同的功能模块即可构建个性化的生成管线。\n\n这一设计巧妙解决了高级 AI 绘图工作流配置复杂、灵活性不足的痛点。用户无需具备编程背景，也能自由组合模型、调整参数并实时预览效果，轻松实现从基础文生图到多步骤高清修复等各类复杂任务。ComfyUI 拥有极佳的兼容性，不仅支持 Windows、macOS 和 Linux 全平台，还广泛适配 NVIDIA、AMD、Intel 及苹果 Silicon 等多种硬件架构，并率先支持 SDXL、Flux、SD3 等前沿模型。\n\n无论是希望深入探索算法潜力的研究人员和开发者，还是追求极致创作自由度的设计师与资深 AI 绘画爱好者，ComfyUI 都能提供强大的支持。其独特的模块化架构允许社区不断扩展新功能，使其成为当前最灵活、生态最丰富的开源扩散模型工具之一，帮助用户将创意高效转化为现实。",108322,"2026-04-10T11:39:34",[14,15,13],{"id":45,"name":46,"github_repo":47,"description_zh":48,"stars":49,"difficulty_score":32,"last_commit_at":50,"category_tags":51,"status":17},6121,"gemini-cli","google-gemini\u002Fgemini-cli","gemini-cli 是一款由谷歌推出的开源 AI 命令行工具，它将强大的 Gemini 大模型能力直接集成到用户的终端环境中。对于习惯在命令行工作的开发者而言，它提供了一条从输入提示词到获取模型响应的最短路径，无需切换窗口即可享受智能辅助。\n\n这款工具主要解决了开发过程中频繁上下文切换的痛点，让用户能在熟悉的终端界面内直接完成代码理解、生成、调试以及自动化运维任务。无论是查询大型代码库、根据草图生成应用，还是执行复杂的 Git 操作，gemini-cli 都能通过自然语言指令高效处理。\n\n它特别适合广大软件工程师、DevOps 人员及技术研究人员使用。其核心亮点包括支持高达 100 万 token 的超长上下文窗口，具备出色的逻辑推理能力；内置 Google 搜索、文件操作及 Shell 命令执行等实用工具；更独特的是，它支持 MCP（模型上下文协议），允许用户灵活扩展自定义集成，连接如图像生成等外部能力。此外，个人谷歌账号即可享受免费的额度支持，且项目基于 Apache 2.0 
协议完全开源，是提升终端工作效率的理想助手。",100752,"2026-04-10T01:20:03",[52,13,15,14],"插件",{"id":54,"name":55,"github_repo":56,"description_zh":57,"stars":58,"difficulty_score":32,"last_commit_at":59,"category_tags":60,"status":17},4721,"markitdown","microsoft\u002Fmarkitdown","MarkItDown 是一款由微软 AutoGen 团队打造的轻量级 Python 工具，专为将各类文件高效转换为 Markdown 格式而设计。它支持 PDF、Word、Excel、PPT、图片（含 OCR）、音频（含语音转录）、HTML 乃至 YouTube 链接等多种格式的解析，能够精准提取文档中的标题、列表、表格和链接等关键结构信息。\n\n在人工智能应用日益普及的今天，大语言模型（LLM）虽擅长处理文本，却难以直接读取复杂的二进制办公文档。MarkItDown 恰好解决了这一痛点，它将非结构化或半结构化的文件转化为模型“原生理解”且 Token 效率极高的 Markdown 格式，成为连接本地文件与 AI 分析 pipeline 的理想桥梁。此外，它还提供了 MCP（模型上下文协议）服务器，可无缝集成到 Claude Desktop 等 LLM 应用中。\n\n这款工具特别适合开发者、数据科学家及 AI 研究人员使用，尤其是那些需要构建文档检索增强生成（RAG）系统、进行批量文本分析或希望让 AI 助手直接“阅读”本地文件的用户。虽然生成的内容也具备一定可读性，但其核心优势在于为机器",93400,"2026-04-06T19:52:38",[52,14],{"id":62,"github_repo":63,"name":64,"description_en":65,"description_zh":66,"ai_summary_zh":66,"readme_en":67,"readme_zh":68,"quickstart_zh":69,"use_case_zh":70,"hero_image_url":71,"owner_login":72,"owner_name":72,"owner_avatar_url":73,"owner_bio":74,"owner_company":75,"owner_location":75,"owner_email":76,"owner_twitter":75,"owner_website":77,"owner_url":78,"languages":79,"stars":105,"forks":106,"last_commit_at":107,"license":108,"difficulty_score":109,"env_os":110,"env_gpu":110,"env_ram":110,"env_deps":111,"category_tags":124,"github_topics":125,"view_count":32,"oss_zip_url":75,"oss_zip_packed_at":75,"status":17,"created_at":134,"updated_at":135,"faqs":136,"releases":162},7371,"MAIF\u002Fshapash","shapash","🔅 Shapash: User-friendly Explainability and Interpretability to Develop Reliable and Transparent Machine Learning Models","Shapash 是一款专为提升机器学习模型可解释性而设计的 Python 开源库，旨在让复杂的算法决策变得对所有人都清晰易懂。它主要解决了黑盒模型难以理解、技术结论难以向非专业人士传达的痛点，帮助团队构建更可靠、透明的 AI 系统。\n\n无论是数据科学家、分析师还是业务决策者，都能通过 Shapash 轻松上手。其核心亮点在于能够一键生成交互式 Web 应用，用户可以在其中直观地查看特征间的相互作用，并在“局部解释”（单个样本的预测原因）与“全局解释”（模型整体逻辑）之间无缝切换。此外，Shapash 还能自动生成包含关键信息的综合审计报告，极大便利了模型合规性审查。\n\n在技术兼容性方面，Shapash 表现卓越，广泛支持 Catboost、Xgboost、LightGBM、Sklearn 
集成模型、线性模型及 SVM 等多种主流算法，适用于回归、二分类及多分类等各类任务。通过将晦涩的技术指标转化为带有清晰标签的可视化图表，Shapash 架起了技术与业务之间的沟通桥梁，让模型结果不仅可信，更易被共享和理解。","\u003Cp align=\"center\">\n\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FMAIF_shapash_readme_e2b8c3deaed4.png\" width=\"300\" title=\"shapash-logo\">\n\u003C\u002Fp>\n\n\u003Cp align=\"center\">\n  \u003C!-- Tests -->\n  \u003Ca href=\"https:\u002F\u002Fgithub.com\u002FMAIF\u002Fshapash\u002Fworkflows\u002FBuild%20%26%20Test\u002Fbadge.svg\">\n    \u003Cimg src=\"https:\u002F\u002Fgithub.com\u002FMAIF\u002Fshapash\u002Fworkflows\u002FBuild%20%26%20Test\u002Fbadge.svg\" alt=\"tests\">\n  \u003C\u002Fa>\n  \u003C!-- PyPi -->\n  \u003Ca href=\"https:\u002F\u002Fimg.shields.io\u002Fpypi\u002Fv\u002Fshapash\">\n    \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fpypi\u002Fv\u002Fshapash\" alt=\"pypi\">\n  \u003C\u002Fa>\n  \u003C!-- Downloads -->\n  \u003Ca href=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FMAIF_shapash_readme_ed605e12a8dd.png\">\n    \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FMAIF_shapash_readme_ed605e12a8dd.png\" alt=\"downloads\">\n  \u003C\u002Fa>\n  \u003C!-- Python Version -->\n  \u003Ca href=\"https:\u002F\u002Fimg.shields.io\u002Fpypi\u002Fpyversions\u002Fshapash\">\n    \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fpypi\u002Fpyversions\u002Fshapash\" alt=\"pyversion\">\n  \u003C\u002Fa>\n  \u003C!-- License -->\n  \u003Ca href=\"https:\u002F\u002Fimg.shields.io\u002Fpypi\u002Fl\u002Fshapash\">\n    \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fpypi\u002Fl\u002Fshapash\" alt=\"license\">\n  \u003C\u002Fa>\n  \u003C!-- Doc -->\n  \u003Ca href=\"https:\u002F\u002Fshapash.readthedocs.io\u002Fen\u002Flatest\u002F\">\n    \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FMAIF_shapash_readme_13d664e1afd7.png\" alt=\"doc\">\n  \u003C\u002Fa>\n\u003C\u002Fp>\n\n## 🔍 Overview\n\nShapash is a Python library designed to **make machine learning 
interpretable and comprehensible for everyone**. It offers various visualizations with clear and explicit labels that are easily understood by all.\n\nWith Shapash, you can generate a **Webapp** that simplifies the comprehension of **interactions between the model's features**, and allows **seamless navigation between local and global explainability**. This Webapp enables Data Scientists to effortlessly understand their models and **share their results with both data scientists and non-data experts**.\n\nAdditionally, Shapash contributes to data science auditing by **presenting valuable information** about any model and data **in a comprehensive report**.\n\nShapash is suitable for Regression, Binary Classification and Multiclass problems. It is **compatible with numerous models**, including Catboost, Xgboost, LightGBM, Sklearn Ensemble, Linear models, and SVM. For other models, solutions to integrate Shapash are available; more details can be found [here](#how_shapash_works).\n\n> [!NOTE]\n> If you want to give us feedback : [Feedback form](https:\u002F\u002Fframaforms.org\u002Fshapash-collecting-your-feedback-and-use-cases-1687456776)\n\n[Shapash App Demo](https:\u002F\u002Fshapash-demo.ossbymaif.fr\u002F)\n\n\u003Cp align=\"center\">\n  \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FMAIF_shapash_readme_dcabdef7b223.gif\" width=\"800\">\n\u003C\u002Fp>\n\n## 🌱 Documentation and resources\n\n- Readthedocs: [![documentation badge](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FMAIF_shapash_readme_13d664e1afd7.png)](https:\u002F\u002Fshapash.readthedocs.io\u002Fen\u002Flatest\u002F)\n- [Video presentation for french speakers](https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=r1R_A9B9apk)\n- Medium:\n  - [Understand your model with Shapash - Towards AI](https:\u002F\u002Fpub.towardsai.net\u002Fshapash-making-ml-models-understandable-by-everyone-8f96ad469eb3)\n  - [Model auditability - Towards 
DS](https:\u002F\u002Ftowardsdatascience.com\u002Fshapash-1-3-2-announcing-new-features-for-more-auditable-ai-64a6db71c919)\n  - [Group of features - Towards AI](https:\u002F\u002Fpub.towardsai.net\u002Fmachine-learning-6011d5d9a444)\n  - [Building confidence on explainability - Towards DS](https:\u002F\u002Ftowardsdatascience.com\u002Fbuilding-confidence-on-explainability-methods-66b9ee575514)\n  - [Picking Examples to Understand Machine Learning Model](https:\u002F\u002Fwww.kdnuggets.com\u002F2022\u002F11\u002Fpicking-examples-understand-machine-learning-model.html)\n  - [Enhancing Webapp Built-In Features for Comprehensive Machine Learning Model Interpretation](https:\u002F\u002Fpub.towardsai.net\u002Fshapash-2-3-0-comprehensive-model-interpretation-40b50157c2fb)\n\n\n## 🎉 What's new ?\n\n| Version       | New Feature                                                                           | Description                                                                                                                            | Tutorial |\n|:-------------:|:-------------------------------------------------------------------------------------:|:--------------------------------------------------------------------------------------------------------------------------------------:|:--------:|\n| 2.3.x         |  Additional dataset columns \u003Cbr> [New demo](https:\u002F\u002Fshapash-demo.ossbymaif.fr\u002F) \u003Cbr> [Article](https:\u002F\u002Fpub.towardsai.net\u002Fshapash-2-3-0-comprehensive-model-interpretation-40b50157c2fb)                                                                | In Webapp: Target and error columns added to dataset and possibility to add features outside the model for more filtering options            |  [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FMAIF_shapash_readme_93fb6c5d5511.png\" width=\"50\" 
title=\"add_column\">](https:\u002F\u002Fgithub.com\u002FMAIF\u002Fshapash\u002Fblob\u002Fmaster\u002Ftutorial\u002Fgenerate_webapp\u002Ftuto-webapp01-additional-data.ipynb)\n| 2.3.x         |  Identity card \u003Cbr> [New demo](https:\u002F\u002Fshapash-demo.ossbymaif.fr\u002F) \u003Cbr> [Article](https:\u002F\u002Fpub.towardsai.net\u002Fshapash-2-3-0-comprehensive-model-interpretation-40b50157c2fb)                                                                  | In Webapp: New identity card to summarize the information of the selected sample                  |  [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FMAIF_shapash_readme_c141f85757b2.png\" width=\"50\" title=\"identity\">](https:\u002F\u002Fgithub.com\u002FMAIF\u002Fshapash\u002Fblob\u002Fmaster\u002Ftutorial\u002Fgenerate_webapp\u002Ftuto-webapp01-additional-data.ipynb)\n| 2.2.x         |  Picking samples \u003Cbr> [Article](https:\u002F\u002Fwww.kdnuggets.com\u002F2022\u002F11\u002Fpicking-examples-understand-machine-learning-model.html)                                                                | New tab in the webapp for picking samples. The graph represents the \"True Values Vs Predicted Values\"            |  [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FMAIF_shapash_readme_4944ab459b5b.png\" width=\"50\" title=\"picking\">](https:\u002F\u002Fgithub.com\u002FMAIF\u002Fshapash\u002Fblob\u002Fmaster\u002Ftutorial\u002Fplots_and_charts\u002Ftuto-plot06-prediction_plot.ipynb)\n| 2.2.x         |  Dataset Filter \u003Cbr>                                                              | New tab in the webapp to filter data. 
And several improvements in the webapp: subtitles, labels, screen adjustments                   |  [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FMAIF_shapash_readme_bd22e3201d70.png\" width=\"50\" title=\"webapp\">](https:\u002F\u002Fgithub.com\u002FMAIF\u002Fshapash\u002Fblob\u002Fmaster\u002Ftutorial\u002Ftutorial01-Shapash-Overview-Launch-WebApp.ipynb)\n| 2.0.x         |  Refactoring Shapash \u003Cbr>                                                                   | Refactoring attributes of compile methods and init. Refactoring implementation for new backends                   |  [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FMAIF_shapash_readme_1f10d457f651.png\" width=\"50\" title=\"modular\">](https:\u002F\u002Fgithub.com\u002FMAIF\u002Fshapash\u002Fblob\u002Fmaster\u002Ftutorial\u002Fexplainer_and_backend\u002Ftuto-expl06-Shapash-custom-backend.ipynb)\n| 1.7.x         |  Variabilize Colors \u003Cbr>                                                                   | Giving possibility to have your own colour palette for outputs adapted to your design                   |  [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FMAIF_shapash_readme_0335c3e0fb37.png\" width=\"50\" title=\"variabilize-colors\">](https:\u002F\u002Fgithub.com\u002FMAIF\u002Fshapash\u002Fblob\u002Fmaster\u002Ftutorial\u002Fcommon\u002Ftuto-common02-colors.ipynb)\n| 1.6.x         |  Explainability Quality Metrics \u003Cbr> [Article](https:\u002F\u002Ftowardsdatascience.com\u002Fbuilding-confidence-on-explainability-methods-66b9ee575514)                                                                   | To help increase confidence in explainability methods, you can evaluate the relevance of your explainability using 3 metrics: **Stability**, **Consistency** and **Compacity**                   |  [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FMAIF_shapash_readme_543438383809.png\" width=\"50\" 
title=\"quality-metrics\">](https:\u002F\u002Fgithub.com\u002FMAIF\u002Fshapash\u002Fblob\u002Fmaster\u002Ftutorial\u002Fexplainability_quality\u002Ftuto-quality01-Builing-confidence-explainability.ipynb)\n| 1.4.x         |  Groups of features \u003Cbr> [Demo](https:\u002F\u002Fshapash-demo2.ossbymaif.fr\u002F)                  | You can now regroup features that share common properties together. \u003Cbr>This option can be useful if your model has a lot of features. |  [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FMAIF_shapash_readme_85e5192ca75c.gif\" width=\"120\" title=\"groups-features\">](https:\u002F\u002Fgithub.com\u002FMAIF\u002Fshapash\u002Fblob\u002Fmaster\u002Ftutorial\u002Fcommon\u002Ftuto-common01-groups_of_features.ipynb)    |\n| 1.3.x         |  Shapash Report \u003Cbr> [Demo](https:\u002F\u002Fshapash.readthedocs.io\u002Fen\u002Flatest\u002Freport.html)     | A standalone HTML report that constitutes a basis of an audit document.                                                                |  [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FMAIF_shapash_readme_61b733ba3062.png\" width=\"50\" title=\"shapash-report\">](https:\u002F\u002Fgithub.com\u002FMAIF\u002Fshapash\u002Fblob\u002Fmaster\u002Ftutorial\u002Fgenerate_report\u002Ftuto-shapash-report01.ipynb)    |\n\n## 🔥 Features\n\n- Display clear and understandable results: plots and outputs use **explicit labels** for each feature and its values\n\n\u003Cp align=\"center\">\n  \u003Cimg align=\"left\" src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FMAIF_shapash_readme_94e8685ff01c.png\" width=\"28%\"\u002F>\n  \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FMAIF_shapash_readme_db880478d847.png\" width=\"28%\" \u002F>\n  \u003Cimg align=\"right\" src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FMAIF_shapash_readme_dc1132bc539c.png\" width=\"28%\" \u002F>\n\u003C\u002Fp>\n\n\u003Cp align=\"center\">\n  
\u003Cimg align=\"left\" src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FMAIF_shapash_readme_a1d1da602101.png\" width=\"28%\" \u002F>\n  \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FMAIF_shapash_readme_e2b8c3deaed4.png\" width=\"18%\" \u002F>\n  \u003Cimg align=\"right\" src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FMAIF_shapash_readme_3371d78b6ea0.png\" width=\"28%\" \u002F>\n\u003C\u002Fp>\n\n\u003Cp align=\"center\">\n  \u003Cimg align=\"left\" src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FMAIF_shapash_readme_964c964da120.png\" width=\"33%\" \u002F>\n  \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FMAIF_shapash_readme_6fa859679761.png\" width=\"28%\" \u002F>\n  \u003Cimg align=\"right\" src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FMAIF_shapash_readme_8f31723ddb11.png\" width=\"25%\" \u002F>\n\u003C\u002Fp>\n\n\n- Allow Data Scientists to quickly understand their models using a **webapp** to easily navigate between global and local explainability, and understand how the different features contribute: [Live Demo Shapash-Monitor](https:\u002F\u002Fshapash-demo.ossbymaif.fr\u002F)\n\n- **Summarize and export** local explanation\n> **Shapash** provides concise and clear local explanations, It allows each user, enabling users of any Data background to understand a local prediction of a supervised model through a summarized and explicit explanation\n\n\n- **Evaluate** the quality of your explainability with various metrics\n\n- Effortlessly share and discuss results with non-Data users\n\n- Select subsets for in-depth analysis of explainability by filtering based on explanatory and additional features, as well as correct or wrong predictions. 
[Picking Examples to Understand Machine Learning Model](https:\u002F\u002Fwww.kdnuggets.com\u002F2022\u002F11\u002Fpicking-examples-understand-machine-learning-model.html)\n\n- Deploy interpretability part of your project: From model training to deployment (API or Batch Mode)\n\n- Contribute to the **auditability of your model** by generating a **standalone HTML report** of your projects. [Report Example](https:\u002F\u002Fshapash.readthedocs.io\u002Fen\u002Flatest\u002Freport.html)\n>We believe that this report will offer valuable support for auditing models and data, leading to improved AI governance.\nData Scientists can now provide anyone interested in their project with **a document that captures various aspects of their work as the foundation for an audit report**.\nThis document can be easily shared among teams (internal audit, DPO, risk, compliance...).\n\n\u003Cp align=\"center\">\n  \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FMAIF_shapash_readme_1dd83c229a43.gif\" width=\"800\">\n\u003C\u002Fp>\n\n\u003Ca name=\"how_shapash_works\">\u003C\u002Fa>\n## ⚙️ How Shapash works\n**Shapash** is an overlay package for libraries focused on model interpretability. It uses Shap or Lime backend\nto compute contributions.\n**Shapash** builds upon the various steps required to create a machine learning model, making the results more understandable.\n\n\u003Cp align=\"center\">\n  \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FMAIF_shapash_readme_a9686ed048ca.png\" width=\"700\" title=\"diagram\">\n\u003C\u002Fp>\n\n**Shapash** is suitable for Regression, Binary Classification or Multiclass problem. \u003Cbr \u002F>\nIt is compatible with numerous models: *Catboost*, *Xgboost*, *LightGBM*, *Sklearn Ensemble*, *Linear models*, *SVM*. \u003Cbr \u002F>\n\nIf your model is not in the list of compatible models, it is possible to provide Shapash with local contributions calculated with shap or another method. 
[Here's](https:\u002F\u002Fgithub.com\u002FMAIF\u002Fshapash\u002Fblob\u002Fmaster\u002Ftutorial\u002Fexplainer_and_backend\u002Ftuto-expl05-Shapash-using-Fasttreeshap.ipynb) an example of how to provide contributions to Shapash. An [issue](https:\u002F\u002Fgithub.com\u002FMAIF\u002Fshapash\u002Fissues\u002F488) has been created to enhance this use case.\n\nShapash can use a category-encoders object, a sklearn ColumnTransformer, or simply a features dictionary. \u003Cbr \u002F>\n- Category_encoder: *OneHotEncoder*, *OrdinalEncoder*, *BaseNEncoder*, *BinaryEncoder*, *TargetEncoder*\n- Sklearn ColumnTransformer: *OneHotEncoder*, *OrdinalEncoder*, *StandardScaler*, *QuantileTransformer*, *PowerTransformer*\n\n## 🛠 Installation\n\nShapash is intended to work with Python versions 3.9 to 3.12. Installation can be done with pip:\n\n```bash\npip install shapash\n```\n\nIn order to generate the Shapash Report, some extra requirements are needed.\nYou can install these using the following command:\n```bash\npip install shapash[report]\n```\n\nIf you encounter **compatibility issues**, you may check the corresponding section in the Shapash documentation [here](https:\u002F\u002Fshapash.readthedocs.io\u002Fen\u002Flatest\u002Finstallation-instructions\u002Findex.html).\n\n## 🕐 Quickstart\n\nThe steps to display results:\n\n- Step 1: Declare the SmartExplainer object\n  > There is one mandatory parameter: the model\n  > You can declare a features dict here to specify the labels to display\n\n```python\nfrom shapash import SmartExplainer\n\nxpl = SmartExplainer(\n    model=regressor,\n    features_dict=house_dict,  # Optional parameter\n    preprocessing=encoder,  # Optional: compile step can use inverse_transform method\n    postprocessing=postprocess,  # Optional: see tutorial postprocessing\n)\n```\n\n- Step 2: Compile the dataset\n  > There is one mandatory parameter in the compile method: the dataset\n\n```python\nxpl.compile(\n    x=xtest,\n    y_pred=y_pred,  # Optional: for your 
own prediction (by default: model.predict)\n    y_target=yTest,  # Optional: displays True Values vs Predicted Values\n    additional_data=xadditional,  # Optional: additional dataset of features for the Webapp\n    additional_features_dict=features_dict_additional,  # Optional: features dict for the additional data\n)\n```\n\n- Step 3: Display output\n  > There are several outputs and plots available. For example, you can launch the web app:\n\n```python\napp = xpl.run_app()\n```\n\n[Live Demo Shapash-Monitor](https:\u002F\u002Fshapash-demo.ossbymaif.fr\u002F)\n\n- Step 4: Generate the Shapash Report\n  > This step generates a standalone HTML report of your project using the different splits\n  of your dataset and also the metrics you used:\n\n```python\nxpl.generate_report(\n    output_file=\"path\u002Fto\u002Foutput\u002Freport.html\",\n    project_info_file=\"path\u002Fto\u002Fproject_info.yml\",\n    x_train=xtrain,\n    y_train=ytrain,\n    y_test=ytest,\n    title_story=\"House prices report\",\n    title_description=\"\"\"This document is a data science report of the kaggle house prices tutorial project.\n        It was generated using the Shapash library.\"\"\",\n    metrics=[{\"name\": \"MSE\", \"path\": \"sklearn.metrics.mean_squared_error\"}],\n)\n```\n\n[Report Example](https:\u002F\u002Fshapash.readthedocs.io\u002Fen\u002Flatest\u002Freport.html)\n\n- Step 5: From training to deployment: the SmartPredictor object\n  > Shapash provides a SmartPredictor object to deploy the summary of local explanations for operational needs.\n  It is an object dedicated to deployment, lighter than SmartExplainer, with additional consistency checks.\n  SmartPredictor can be used with an API or in batch mode. 
It provides predictions, detailed or summarized local\n  explainability using appropriate wording.\n\n```python\npredictor = xpl.to_smartpredictor()\n```\nSee the tutorial part to know how to use the SmartPredictor object\n\n## 📖  Tutorials\nThis github repository offers many tutorials to allow you to easily get started with Shapash.\n\n\n\u003Cdetails>\u003Csummary>\u003Cb>Overview\u003C\u002Fb> \u003C\u002Fsummary>\n\n- [Launch the webapp with a concrete use case](tutorial\u002Ftutorial01-Shapash-Overview-Launch-WebApp.ipynb)\n- [Jupyter Overviews - The main outputs and methods available with the SmartExplainer object](tutorial\u002Ftutorial02-Shapash-overview-in-Jupyter.ipynb)\n- [Shapash in production: From model training to deployment (API or Batch Mode)](tutorial\u002Ftutorial03-Shapash-overview-model-in-production.ipynb)\n- [Use groups of features](tutorial\u002Fcommon\u002Ftuto-common01-groups_of_features.ipynb)\n- [Deploy local explainability in production with SmartPredictor](tutorial\u002Fpredictor_to_production\u002Ftuto-smartpredictor-introduction-to-SmartPredictor.ipynb)\n\n\u003C\u002Fdetails>\n\n\u003Cdetails>\u003Csummary>\u003Cb>Charts and plots\u003C\u002Fb> \u003C\u002Fsummary>\n\n- [**Shapash** Features Importance](tutorial\u002Fplots_and_charts\u002Ftuto-plot03-features-importance.ipynb)\n- [Contribution plot to understand how one feature affects a prediction](tutorial\u002Fplots_and_charts\u002Ftuto-plot02-contribution_plot.ipynb)\n- [Summarize, display and export local contribution using filter and local_plot method](tutorial\u002Fplots_and_charts\u002Ftuto-plot01-local_plot-and-to_pandas.ipynb)\n- [Contributions Comparing plot to understand why predictions on several individuals are different](tutorial\u002Fplots_and_charts\u002Ftuto-plot04-compare_plot.ipynb)\n- [Visualize interactions between couple of variables](tutorial\u002Fplots_and_charts\u002Ftuto-plot05-interactions-plot.ipynb)\n- [Display True Values Vs Predicted 
Values](tutorial\u002Fplots_and_charts\u002Ftuto-plot06-prediction_plot.ipynb)\n- [Customize colors in Webapp, plots and report](tutorial\u002Fcommon\u002Ftuto-common02-colors.ipynb)\n\n\u003C\u002Fdetails>\n\n\u003Cdetails>\u003Csummary>\u003Cb>Different ways to use Encoders and Dictionaries\u003C\u002Fb> \u003C\u002Fsummary>\n\n- [Use Category_Encoder & inverse transformation](tutorial\u002Fuse_encoders\u002Ftuto-encoder01-using-category_encoder.ipynb)\n- [Use ColumnTransformers](tutorial\u002Fuse_encoders\u002Ftuto-encoder02-using-columntransformer.ipynb)\n- [Use Simple Python Dictionnaries](tutorial\u002Fuse_encoders\u002Ftuto-encoder03-using-dict.ipynb)\n\n\u003C\u002Fdetails>\n\n\u003Cdetails>\u003Csummary>\u003Cb>Displaying data with postprocessing\u003C\u002Fb> \u003C\u002Fsummary>\n\n[Using postprocessing parameter in compile method](tutorial\u002Fpostprocess\u002Ftuto-postprocess01.ipynb)\n\n\u003C\u002Fdetails>\n\n\u003Cdetails>\u003Csummary>\u003Cb>Using different backends\u003C\u002Fb> \u003C\u002Fsummary>\n\n- [Compute Shapley Contributions using **Shap**](tutorial\u002Fexplainer_and_backend\u002Ftuto-expl01-Shapash-Viz-using-Shap-contributions.ipynb)\n- [Use **Lime** to compute local explanation, Summarize-it with **Shapash**](tutorial\u002Fexplainer_and_backend\u002Ftuto-expl02-Shapash-Viz-using-Lime-contributions.ipynb)\n- [Compile faster Lime and consistency of contributions](tutorial\u002Fexplainer_and_backend\u002Ftuto-expl04-Shapash-compute-Lime-faster.ipynb)\n- [Use **FastTreeSHAP** or add contributions from another backend](tutorial\u002Fexplainer_and_backend\u002Ftuto-expl05-Shapash-using-Fasttreeshap.ipynb)\n- [Use Class Shapash Backend](tutorial\u002Fexplainer_and_backend\u002Ftuto-expl06-Shapash-custom-backend.ipynb)\n\n\u003C\u002Fdetails>\n\n\u003Cdetails>\u003Csummary>\u003Cb>Evaluating the quality of your explainability\u003C\u002Fb> \u003C\u002Fsummary>\n\n- [Building confidence on explainability methods using **Stability**, 
**Consistency** and **Compacity** metrics](tutorial\u002Fexplainability_quality\u002Ftuto-quality01-Builing-confidence-explainability.ipynb)\n\n\u003C\u002Fdetails>\n\n\u003Cdetails>\u003Csummary>\u003Cb>Generate a report of your project\u003C\u002Fb> \u003C\u002Fsummary>\n\n- [Generate a standalone HTML report of your project with generate_report](tutorial\u002Fgenerate_report\u002Ftuto-shapash-report01.ipynb)\n\n\u003C\u002Fdetails>\n\n\u003Cdetails>\u003Csummary>\u003Cb>Analysing your model via Shapash WebApp\u003C\u002Fb> \u003C\u002Fsummary>\n\n- [Add features outside of the model for more exploration options](tutorial\u002Fgenerate_webapp\u002Ftuto-webapp01-additional-data.ipynb)\n\n\u003C\u002Fdetails>\n\n## 🤝 Contributors\n\n\u003Cdiv align=\"left\">\n  \u003Cdiv style=\"display: flex; align-items: flex-start;\">\n    \u003Ca href=\"https:\u002F\u002Fmaif.github.io\u002Fprojets.html\" >\n      \u003Cimg align=middle src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FMAIF_shapash_readme_24fd8219a905.png\" width=\"18%\"\u002F>\n    \u003C\u002Fa>\n    \u003Ca href=\"https:\u002F\u002Fwww.quantmetry.com\u002F\" >\n      \u003Cimg align=middle src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FMAIF_shapash_readme_e58a5fee7a49.png\" width=\"18%\"\u002F>\n    \u003C\u002Fa>\n    \u003Cimg align=middle src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FMAIF_shapash_readme_26372b3c3625.png\" width=\"18%\" \u002F>\n    \u003Cimg align=middle src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FMAIF_shapash_readme_f534a3d2a8ce.png\" width=\"18%\" \u002F>\n    \u003Ca href=\"https:\u002F\u002Fwww.sixfoissept.com\u002Fen\u002F\" >\n      \u003Cimg align=middle src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FMAIF_shapash_readme_d9cdd7a976fe.png\" width=\"18%\"\u002F>\n    \u003C\u002Fa>\n  \u003C\u002Fdiv>\n\u003C\u002Fdiv>\n\n\n## 🏆 Awards\n\n\u003Ca 
href=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FMAIF_shapash_readme_a80c60bf7e4c.png\">\n  \u003Cimg align=\"left\" src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FMAIF_shapash_readme_a80c60bf7e4c.png\" width=\"180\" \u002F>\n\u003C\u002Fa>\n\n\u003Ca href=\"https:\u002F\u002Fwww.kdnuggets.com\u002F2021\u002F04\u002Fshapash-machine-learning-models-understandable.html\">\n  \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FMAIF_shapash_readme_16e6077b9b46.png\" width=\"65\" \u002F>\n\u003C\u002Fa>\n","\u003Cp align=\"center\">\n\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FMAIF_shapash_readme_e2b8c3deaed4.png\" width=\"300\" title=\"shapash-logo\">\n\u003C\u002Fp>\n\n\u003Cp align=\"center\">\n  \u003C!-- 测试 -->\n  \u003Ca href=\"https:\u002F\u002Fgithub.com\u002FMAIF\u002Fshapash\u002Fworkflows\u002FBuild%20%26%20Test\u002Fbadge.svg\">\n    \u003Cimg src=\"https:\u002F\u002Fgithub.com\u002FMAIF\u002Fshapash\u002Fworkflows\u002FBuild%20%26%20Test\u002Fbadge.svg\" alt=\"测试\">\n  \u003C\u002Fa>\n  \u003C!-- PyPi -->\n  \u003Ca href=\"https:\u002F\u002Fimg.shields.io\u002Fpypi\u002Fv\u002Fshapash\">\n    \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fpypi\u002Fv\u002Fshapash\" alt=\"PyPI\">\n  \u003C\u002Fa>\n  \u003C!-- 下载量 -->\n  \u003Ca href=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FMAIF_shapash_readme_ed605e12a8dd.png\">\n    \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FMAIF_shapash_readme_ed605e12a8dd.png\" alt=\"下载量\">\n  \u003C\u002Fa>\n  \u003C!-- Python版本 -->\n  \u003Ca href=\"https:\u002F\u002Fimg.shields.io\u002Fpypi\u002Fpyversions\u002Fshapash\">\n    \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fpypi\u002Fpyversions\u002Fshapash\" alt=\"Python版本\">\n  \u003C\u002Fa>\n  \u003C!-- 许可证 -->\n  \u003Ca href=\"https:\u002F\u002Fimg.shields.io\u002Fpypi\u002Fl\u002Fshapash\">\n    \u003Cimg 
src=\"https:\u002F\u002Fimg.shields.io\u002Fpypi\u002Fl\u002Fshapash\" alt=\"许可证\">\n  \u003C\u002Fa>\n  \u003C!-- 文档 -->\n  \u003Ca href=\"https:\u002F\u002Fshapash.readthedocs.io\u002Fen\u002Flatest\u002F\">\n    \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FMAIF_shapash_readme_13d664e1afd7.png\" alt=\"文档\">\n  \u003C\u002Fa>\n\u003C\u002Fp>\n\n## 🔍 概述\n\nShapash 是一个 Python 库，旨在 **让机器学习对所有人来说都可解释、易理解**。它提供了多种可视化工具，配有清晰明确的标签，便于所有人快速掌握。\n\n借助 Shapash，您可以生成一个 **Web 应用程序**，帮助简化对 **模型特征之间交互关系** 的理解，并实现 **局部与全局解释性之间的无缝切换**。通过这个 Web 应用程序，数据科学家可以轻松地理解自己的模型，并将结果 **分享给数据专家和非专业人士**。\n\n此外，Shapash 还有助于数据科学审计工作，能够以 **全面报告的形式呈现** 关于任何模型和数据的 **有价值信息**。\n\nShapash 适用于回归、二分类和多分类问题。它 **兼容众多模型**，包括 Catboost、XGBoost、LightGBM、Scikit-learn 集成模型、线性模型以及 SVM 等。对于其他模型，也有集成 Shapash 的解决方案；更多详情请参阅 [这里](#how_shapash_works)。\n\n> [!NOTE]\n> 如果您想给我们反馈：[反馈表单](https:\u002F\u002Fframaforms.org\u002Fshapash-collecting-your-feedback-and-use-cases-1687456776)\n\n[Shapash 应用演示](https:\u002F\u002Fshapash-demo.ossbymaif.fr\u002F)\n\n\u003Cp align=\"center\">\n  \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FMAIF_shapash_readme_dcabdef7b223.gif\" width=\"800\">\n\u003C\u002Fp>\n\n## 🌱 文档与资源\n\n- ReadTheDocs: [![文档徽章](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FMAIF_shapash_readme_13d664e1afd7.png)](https:\u002F\u002Fshapash.readthedocs.io\u002Fen\u002Flatest\u002F)\n- [面向法语观众的视频介绍](https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=r1R_A9B9apk)\n- Medium:\n  - [用 Shapash 理解你的模型 - Towards AI](https:\u002F\u002Fpub.towardsai.net\u002Fshapash-making-ml-models-understandable-by-everyone-8f96ad469eb3)\n  - [模型可审计性 - Towards DS](https:\u002F\u002Ftowardsdatascience.com\u002Fshapash-1-3-2-announcing-new-features-for-more-auditable-ai-64a6db71c919)\n  - [特征组 - Towards AI](https:\u002F\u002Fpub.towardsai.net\u002Fmachine-learning-6011d5d9a444)\n  - [增强可解释性方法的信心 - Towards 
DS](https:\u002F\u002Ftowardsdatascience.com\u002Fbuilding-confidence-on-explainability-methods-66b9ee575514)\n  - [挑选示例来理解机器学习模型](https:\u002F\u002Fwww.kdnuggets.com\u002F2022\u002F11\u002Fpicking-examples-understand-machine-learning-model.html)\n  - [增强 Web 应用内置功能，实现全面的机器学习模型解释](https:\u002F\u002Fpub.towardsai.net\u002Fshapash-2-3-0-comprehensive-model-interpretation-40b50157c2fb)\n\n## 🎉 有什么新功能？\n\n| 版本       | 新特性                                                                           | 描述                                                                                                                            | 教程 |\n|:-------------:|:-------------------------------------------------------------------------------------:|:--------------------------------------------------------------------------------------------------------------------------------------:|:--------:|\n| 2.3.x         |  额外的数据集列 \u003Cbr> [新演示](https:\u002F\u002Fshapash-demo.ossbymaif.fr\u002F) \u003Cbr> [文章](https:\u002F\u002Fpub.towardsai.net\u002Fshapash-2-3-0-comprehensive-model-interpretation-40b50157c2fb)                                                                | 在Web应用中：向数据集中添加目标列和误差列，并可添加模型之外的特征，以提供更多筛选选项            |  [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FMAIF_shapash_readme_93fb6c5d5511.png\" width=\"50\" title=\"add_column\">](https:\u002F\u002Fgithub.com\u002FMAIF\u002Fshapash\u002Fblob\u002Fmaster\u002Ftutorial\u002Fgenerate_webapp\u002Ftuto-webapp01-additional-data.ipynb)\n| 2.3.x         |  身份卡 \u003Cbr> [新演示](https:\u002F\u002Fshapash-demo.ossbymaif.fr\u002F) \u003Cbr> [文章](https:\u002F\u002Fpub.towardsai.net\u002Fshapash-2-3-0-comprehensive-model-interpretation-40b50157c2fb)                                                                  | 在Web应用中：新增身份卡，用于汇总所选样本的信息                  |  [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FMAIF_shapash_readme_c141f85757b2.png\" width=\"50\" 
title=\"identity\">](https:\u002F\u002Fgithub.com\u002FMAIF\u002Fshapash\u002Fblob\u002Fmaster\u002Ftutorial\u002Fgenerate_webapp\u002Ftuto-webapp01-additional-data.ipynb)\n| 2.2.x         |  样本挑选 \u003Cbr> [文章](https:\u002F\u002Fwww.kdnuggets.com\u002F2022\u002F11\u002Fpicking-examples-understand-machine-learning-model.html)                                                                | Web应用中新增样本挑选标签页。图表展示了“真实值 vs 预测值”            |  [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FMAIF_shapash_readme_4944ab459b5b.png\" width=\"50\" title=\"picking\">](https:\u002F\u002Fgithub.com\u002FMAIF\u002Fshapash\u002Fblob\u002Fmaster\u002Ftutorial\u002Fplots_and_charts\u002Ftuto-plot06-prediction_plot.ipynb)\n| 2.2.x         |  数据集筛选 \u003Cbr>                                                              | Web应用中新增数据筛选标签页。此外，Web应用还进行了多项改进：添加副标题、标签以及屏幕布局调整                   |  [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FMAIF_shapash_readme_bd22e3201d70.png\" width=\"50\" title=\"webapp\">](https:\u002F\u002Fgithub.com\u002FMAIF\u002Fshapash\u002Fblob\u002Fmaster\u002Ftutorial\u002Ftutorial01-Shapash-Overview-Launch-WebApp.ipynb)\n| 2.0.x         |  Shapash重构 \u003Cbr>                                                                   | 重构了compile方法和init方法的属性。为新的后端实现了重构                   |  [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FMAIF_shapash_readme_1f10d457f651.png\" width=\"50\" title=\"modular\">](https:\u002F\u002Fgithub.com\u002FMAIF\u002Fshapash\u002Fblob\u002Fmaster\u002Ftutorial\u002Fexplainer_and_backend\u002Ftuto-expl06-Shapash-custom-backend.ipynb)\n| 1.7.x         |  颜色可变化 \u003Cbr>                                                                   | 允许用户使用自定义调色板来生成符合自身设计风格的输出                   |  [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FMAIF_shapash_readme_0335c3e0fb37.png\" width=\"50\" 
title=\"variabilize-colors\">](https:\u002F\u002Fgithub.com\u002FMAIF\u002Fshapash\u002Fblob\u002Fmaster\u002Ftutorial\u002Fcommon\u002Ftuto-common02-colors.ipynb)\n| 1.6.x         |  可解释性质量指标 \u003Cbr> [文章](https:\u002F\u002Ftowardsdatascience.com\u002Fbuilding-confidence-on-explainability-methods-66b9ee575514)                                                                   | 为了增强对可解释性方法的信心，您可以使用3个指标来评估可解释性的相关性：**稳定性**、**一致性**和**紧凑性**                   |  [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FMAIF_shapash_readme_543438383809.png\" width=\"50\" title=\"quality-metrics\">](https:\u002F\u002Fgithub.com\u002FMAIF\u002Fshapash\u002Fblob\u002Fmaster\u002Ftutorial\u002Fexplainability_quality\u002Ftuto-quality01-Builing-confidence-explainability.ipynb)\n| 1.4.x         |  特征分组 \u003Cbr> [演示](https:\u002F\u002Fshapash-demo2.ossbymaif.fr\u002F)                  | 现在可以将具有共同属性的特征归为一组。如果您的模型包含大量特征，此功能会非常有用。 |  [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FMAIF_shapash_readme_85e5192ca75c.gif\" width=\"120\" title=\"groups-features\">](https:\u002F\u002Fgithub.com\u002FMAIF\u002Fshapash\u002Fblob\u002Fmaster\u002Ftutorial\u002Fcommon\u002Ftuto-common01-groups_of_features.ipynb)    |\n| 1.3.x         |  Shapash报告 \u003Cbr> [演示](https:\u002F\u002Fshapash.readthedocs.io\u002Fen\u002Flatest\u002Freport.html)     | 一份独立的HTML报告，可作为审计文档的基础。                                                                |  [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FMAIF_shapash_readme_61b733ba3062.png\" width=\"50\" title=\"shapash-report\">](https:\u002F\u002Fgithub.com\u002FMAIF\u002Fshapash\u002Fblob\u002Fmaster\u002Ftutorial\u002Fgenerate_report\u002Ftuto-shapash-report01.ipynb)    |\n\n## 🔥 功能特性\n\n- 展示清晰易懂的结果：图表和输出对每个特征及其取值都使用了**明确的标签**\n\n\u003Cp align=\"center\">\n  \u003Cimg align=\"left\" src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FMAIF_shapash_readme_94e8685ff01c.png\" 
width=\"28%\"\u002F>\n  \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FMAIF_shapash_readme_db880478d847.png\" width=\"28%\" \u002F>\n  \u003Cimg align=\"right\" src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FMAIF_shapash_readme_dc1132bc539c.png\" width=\"28%\" \u002F>\n\u003C\u002Fp>\n\n\u003Cp align=\"center\">\n  \u003Cimg align=\"left\" src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FMAIF_shapash_readme_a1d1da602101.png\" width=\"28%\" \u002F>\n  \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FMAIF_shapash_readme_e2b8c3deaed4.png\" width=\"18%\" \u002F>\n  \u003Cimg align=\"right\" src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FMAIF_shapash_readme_3371d78b6ea0.png\" width=\"28%\" \u002F>\n\u003C\u002Fp>\n\n\u003Cp align=\"center\">\n  \u003Cimg align=\"left\" src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FMAIF_shapash_readme_964c964da120.png\" width=\"33%\" \u002F>\n  \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FMAIF_shapash_readme_6fa859679761.png\" width=\"28%\" \u002F>\n  \u003Cimg align=\"right\" src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FMAIF_shapash_readme_8f31723ddb11.png\" width=\"25%\" \u002F>\n\u003C\u002Fp>\n\n\n- 允许数据科学家通过一个**Web应用**快速理解其模型，轻松在全局与局部可解释性之间切换，并了解不同特征的贡献情况：[Shapash-Monitor在线演示](https:\u002F\u002Fshapash-demo.ossbymaif.fr\u002F)\n\n- **总结并导出**局部解释\n> **Shapash** 提供简洁明了的局部解释，使任何背景的数据用户都能通过总结性和明确的解释来理解监督模型的单个预测结果。\n\n\n- 使用多种指标**评估**可解释性的质量\n\n- 轻松与非数据领域的用户分享和讨论结果\n\n- 可根据解释性特征、附加特征以及正确或错误的预测结果进行筛选，选择子集以深入分析可解释性。[挑选示例以理解机器学习模型](https:\u002F\u002Fwww.kdnuggets.com\u002F2022\u002F11\u002Fpicking-examples-understand-machine-learning-model.html)\n\n- 部署项目中的可解释性部分：从模型训练到部署（API 或批处理模式）\n\n- 通过生成项目的**独立 HTML 报告**，为模型的**可审计性**做出贡献。[报告示例](https:\u002F\u002Fshapash.readthedocs.io\u002Fen\u002Flatest\u002Freport.html)\n我们相信，这份报告将为模型和数据的审计提供有力支持，从而提升 AI 
治理水平。\n数据科学家现在可以向任何对项目感兴趣的人提供**一份记录其工作各方面的文档，作为审计报告的基础**。\n该文档可在团队内部（内部审计、DPO、风险、合规等部门）轻松共享。\n\n\u003Cp align=\"center\">\n  \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FMAIF_shapash_readme_1dd83c229a43.gif\" width=\"800\">\n\u003C\u002Fp>\n\n\u003Ca name=\"how_shapash_works\">\u003C\u002Fa>\n## ⚙️ Shapash 的工作原理\n**Shapash** 是一个用于模型可解释性相关库的叠加型工具包。它使用 Shap 或 Lime 作为后端来计算特征贡献。\n**Shapash** 基于构建机器学习模型所需的各个步骤，使结果更加易于理解。\n\n\u003Cp align=\"center\">\n  \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FMAIF_shapash_readme_a9686ed048ca.png\" width=\"700\" title=\"diagram\">\n\u003C\u002Fp>\n\n**Shapash** 适用于回归、二分类或多分类问题。\u003Cbr \u002F>\n它兼容多种模型：*Catboost*、*Xgboost*、*LightGBM*、*Sklearn Ensemble*、*线性模型*、*SVM*。\u003Cbr \u002F>\n\n如果您的模型不在兼容列表中，也可以将使用 Shap 或其他方法计算出的局部贡献提供给 Shapash。[这里](https:\u002F\u002Fgithub.com\u002FMAIF\u002Fshapash\u002Fblob\u002Fmaster\u002Ftutorial\u002Fexplainer_and_backend\u002Ftuto-expl05-Shapash-using-Fasttreeshap.ipynb)有一个如何向 Shapash 提供贡献的示例。为了进一步完善这一用法，已创建了一个[问题](https:\u002F\u002Fgithub.com\u002FMAIF\u002Fshapash\u002Fissues\u002F488)。\n\nShapash 可以使用 category-encoders 对象、sklearn ColumnTransformer，或者直接使用特征字典。\u003Cbr \u002F>\n- Category_encoder：*OneHotEncoder*、*OrdinalEncoder*、*BaseNEncoder*、*BinaryEncoder*、*TargetEncoder*\n- Sklearn ColumnTransformer：*OneHotEncoder*、*OrdinalEncoder*、*StandardScaler*、*QuantileTransformer*、*PowerTransformer*\n\n## 🛠 安装说明\n\nShapash 适用于 Python 3.9 至 3.12 版本。可通过 pip 进行安装：\n\n```bash\npip install shapash\n```\n\n若需生成 Shapash 报告，则需要额外的依赖项。您可以通过以下命令安装这些依赖：\n```bash\npip install shapash[report]\n```\n\n如果您遇到**兼容性问题**，请参阅 Shapash 文档中的相应章节[此处](https:\u002F\u002Fshapash.readthedocs.io\u002Fen\u002Flatest\u002Finstallation-instructions\u002Findex.html)。\n\n## 🕐 快速入门\n\n展示结果的4个步骤：\n\n- 第1步：声明 SmartExplainer 对象\n  > 在 compile 方法中有一个必填参数：Model\n  > 您可以在此处声明 features_dict，以指定要显示的标签\n\n```python\nfrom shapash import SmartExplainer\n\nxpl = SmartExplainer(\n    model=regressor,\n    
features_dict=house_dict,  # 可选参数\n    preprocessing=encoder,  # 可选：compile 步骤可以使用 inverse_transform 方法\n    postprocessing=postprocess,  # 可选：参见教程中的后处理部分\n)\n```\n\n- 第2步：编译数据集，…\n  > 在 compile 方法中有一个必填参数：Dataset\n\n```python\nxpl.compile(\n    x=xtest,\n    y_pred=y_pred,  # 可选：用于您自己的预测（默认为 model.predict）\n    y_target=yTest,  # 可选：允许显示真实值与预测值的对比\n    additional_data=xadditional,  # 可选：Web 应用程序的附加特征数据集\n    additional_features_dict=features_dict_additional,  # 可选：附加数据的字典\n)\n```  \n\n- 第3步：展示输出\n  > 有多种输出和图表可供使用。例如，您可以启动 Web 应用程序：\n\n```python\napp = xpl.run_app()\n```\n\n[Shapash-Monitor 实时演示](https:\u002F\u002Fshapash-demo.ossbymaif.fr\u002F)\n\n- 第4步：生成 Shapash 报告\n  > 此步骤允许您使用数据集的不同划分以及所使用的指标，生成项目的独立 HTML 报告：\n\n```python\nxpl.generate_report(\n    output_file=\"path\u002Fto\u002Foutput\u002Freport.html\",\n    project_info_file=\"path\u002Fto\u002Fproject_info.yml\",\n    x_train=xtrain,\n    y_train=ytrain,\n    y_test=ytest,\n    title_story=\"房屋价格报告\",\n    title_description=\"\"\"本文档是 Kaggle 房屋价格教程项目的数据科学报告。\n        它使用 Shapash 库生成。\"\"\",\n    metrics=[{\"name\": \"MSE\", \"path\": \"sklearn.metrics.mean_squared_error\"}],\n)\n```\n\n[报告示例](https:\u002F\u002Fshapash.readthedocs.io\u002Fen\u002Flatest\u002Freport.html)\n\n- 第5步：从训练到部署：SmartPredictor 对象\n  > Shapash 提供了一个 SmartPredictor 对象，用于在运营需求中部署局部解释的摘要信息。\n  这是一个专门用于部署的对象，比 SmartExplainer 更轻量，并增加了额外的一致性检查。\n  SmartPredictor 可以通过 API 或批处理模式使用。它提供预测、详细或汇总的局部可解释性，并采用适当的措辞。\n\n```python\npredictor = xpl.to_smartpredictor()\n```\n请参阅教程部分，了解如何使用 SmartPredictor 对象。\n\n## 📖 教程\n此 GitHub 仓库提供了许多教程，帮助您轻松上手 Shapash。\n\n\n\u003Cdetails>\u003Csummary>\u003Cb>概述\u003C\u002Fb> \u003C\u002Fsummary>\n\n- [使用具体案例启动 Web 应用程序](tutorial\u002Ftutorial01-Shapash-Overview-Launch-WebApp.ipynb)\n- [Jupyter 概览 - SmartExplainer 对象提供的主要输出和方法](tutorial\u002Ftutorial02-Shapash-overview-in-Jupyter.ipynb)\n- [Shapash 在生产环境中的应用：从模型训练到部署（API 或批处理模式）](tutorial\u002Ftutorial03-Shapash-overview-model-in-production.ipynb)\n- 
[使用特征组](tutorial\u002Fcommon\u002Ftuto-common01-groups_of_features.ipynb)\n- [使用 SmartPredictor 在生产环境中部署局部可解释性](tutorial\u002Fpredictor_to_production\u002Ftuto-smartpredictor-introduction-to-SmartPredictor.ipynb)\n\n\u003C\u002Fdetails>\n\n\u003Cdetails>\u003Csummary>\u003Cb>图表和绘图\u003C\u002Fb> \u003C\u002Fsummary>\n\n- [**Shapash** 特征重要性](tutorial\u002Fplots_and_charts\u002Ftuto-plot03-features-importance.ipynb)\n- [贡献度图，用于理解单个特征如何影响预测](tutorial\u002Fplots_and_charts\u002Ftuto-plot02-contribution_plot.ipynb)\n- [使用 filter 和 local_plot 方法总结、展示和导出局部贡献](tutorial\u002Fplots_and_charts\u002Ftuto-plot01-local_plot-and-to_pandas.ipynb)\n- [比较贡献度图，用于理解为何不同个体的预测结果存在差异](tutorial\u002Fplots_and_charts\u002Ftuto-plot04-compare_plot.ipynb)\n- [可视化变量之间的交互作用](tutorial\u002Fplots_and_charts\u002Ftuto-plot05-interactions-plot.ipynb)\n- [展示真实值与预测值的对比](tutorial\u002Fplots_and_charts\u002Ftuto-plot06-prediction_plot.ipynb)\n- [自定义 Web 应用程序、图表和报告中的颜色](tutorial\u002Fcommon\u002Ftuto-common02-colors.ipynb)\n\n\u003C\u002Fdetails>\n\n\u003Cdetails>\u003Csummary>\u003Cb>使用编码器和字典的不同方式\u003C\u002Fb> \u003C\u002Fsummary>\n\n- [使用 Category_Encoder 及其逆变换](tutorial\u002Fuse_encoders\u002Ftuto-encoder01-using-category_encoder.ipynb)\n- [使用 ColumnTransformers](tutorial\u002Fuse_encoders\u002Ftuto-encoder02-using-columntransformer.ipynb)\n- [使用简单的 Python 字典](tutorial\u002Fuse_encoders\u002Ftuto-encoder03-using-dict.ipynb)\n\n\u003C\u002Fdetails>\n\n\u003Cdetails>\u003Csummary>\u003Cb>通过后处理展示数据\u003C\u002Fb> \u003C\u002Fsummary>\n\n[在 compile 方法中使用 postprocessing 参数](tutorial\u002Fpostprocess\u002Ftuto-postprocess01.ipynb)\n\n\u003C\u002Fdetails>\n\n\u003Cdetails>\u003Csummary>\u003Cb>使用不同的后端\u003C\u002Fb> \u003C\u002Fsummary>\n\n- [使用 **Shap** 计算 Shapley 贡献度](tutorial\u002Fexplainer_and_backend\u002Ftuto-expl01-Shapash-Viz-using-Shap-contributions.ipynb)\n- [使用 **Lime** 计算局部解释，并用 **Shapash** 进行总结](tutorial\u002Fexplainer_and_backend\u002Ftuto-expl02-Shapash-Viz-using-Lime-contributions.ipynb)\n- 
[更快地编译 Lime 结果并确保贡献度的一致性](tutorial\u002Fexplainer_and_backend\u002Ftuto-expl04-Shapash-compute-Lime-faster.ipynb)\n- [使用 **FastTreeSHAP** 或添加来自其他后端的贡献](tutorial\u002Fexplainer_and_backend\u002Ftuto-expl05-Shapash-using-Fasttreeshap.ipynb)\n- [使用 Class Shapash 后端](tutorial\u002Fexplainer_and_backend\u002Ftuto-expl06-Shapash-custom-backend.ipynb)\n\n\u003C\u002Fdetails>\n\n\u003Cdetails>\u003Csummary>\u003Cb>评估可解释性的质量\u003C\u002Fb> \u003C\u002Fsummary>\n\n- [利用 **稳定性**、**一致性** 和 **紧凑性** 指标建立对可解释性方法的信心](tutorial\u002Fexplainability_quality\u002Ftuto-quality01-Builing-confidence-explainability.ipynb)\n\n\u003C\u002Fdetails>\n\n\u003Cdetails>\u003Csummary>\u003Cb>生成项目报告\u003C\u002Fb> \u003C\u002Fsummary>\n\n- [使用 generate_report 生成项目的独立 HTML 报告](tutorial\u002Fgenerate_report\u002Ftuto-shapash-report01.ipynb)\n\n\u003C\u002Fdetails>\n\n\u003Cdetails>\u003Csummary>\u003Cb>通过 Shapash Web 应用程序分析您的模型\u003C\u002Fb> \u003C\u002Fsummary>\n\n- [添加模型之外的特征，以获得更多探索选项](tutorial\u002Fgenerate_webapp\u002Ftuto-webapp01-additional-data.ipynb)\n\n\u003C\u002Fdetails>\n\n## 🤝 贡献者\n\n\u003Cdiv align=\"left\">\n  \u003Cdiv style=\"display: flex; align-items: flex-start;\">\n    \u003Ca href=\"https:\u002F\u002Fmaif.github.io\u002Fprojets.html\" >\n      \u003Cimg align=middle src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FMAIF_shapash_readme_24fd8219a905.png\" width=\"18%\"\u002F>\n    \u003C\u002Fa>\n    \u003Ca href=\"https:\u002F\u002Fwww.quantmetry.com\u002F\" >\n      \u003Cimg align=middle src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FMAIF_shapash_readme_e58a5fee7a49.png\" width=\"18%\"\u002F>\n    \u003C\u002Fa>\n    \u003Cimg align=middle src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FMAIF_shapash_readme_26372b3c3625.png\" width=\"18%\" \u002F>\n    \u003Cimg align=middle src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FMAIF_shapash_readme_f534a3d2a8ce.png\" width=\"18%\" \u002F>\n    \u003Ca 
href=\"https:\u002F\u002Fwww.sixfoissept.com\u002Fen\u002F\" >\n      \u003Cimg align=middle src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FMAIF_shapash_readme_d9cdd7a976fe.png\" width=\"18%\"\u002F>\n    \u003C\u002Fa>\n  \u003C\u002Fdiv>\n\u003C\u002Fdiv>\n\n\n## 🏆 奖项\n\n\u003Ca href=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FMAIF_shapash_readme_a80c60bf7e4c.png\">\n  \u003Cimg align=\"left\" src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FMAIF_shapash_readme_a80c60bf7e4c.png\" width=\"180\" \u002F>\n\u003C\u002Fa>\n\n\u003Ca href=\"https:\u002F\u002Fwww.kdnuggets.com\u002F2021\u002F04\u002Fshapash-machine-learning-models-understandable.html\">\n  \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FMAIF_shapash_readme_16e6077b9b46.png\" width=\"65\" \u002F>\n\u003C\u002Fa>","# Shapash 快速上手指南\n\nShapash 是一个旨在让机器学习模型对所有人都可解释的 Python 库。它提供清晰的可视化图表和标签，支持生成 Web 应用以便在局部和全局可解释性之间无缝切换，并能为回归、二分类及多分类问题生成审计报告。\n\n## 环境准备\n\n*   **操作系统**: Linux, macOS, Windows\n*   **Python 版本**: 3.9 - 3.12\n*   **前置依赖**:\n    *   核心依赖：`pandas`, `numpy`, `scikit-learn`, `plotly`, `dash`\n    *   支持的模型库（按需安装）：`xgboost`, `lightgbm`, `catboost`, `shap`\n*   **浏览器**: 推荐使用 Chrome 或 Firefox 以获取最佳 WebApp 体验。\n\n## 安装步骤\n\n### 1. 基础安装\n使用 pip 安装最新稳定版：\n\n```bash\npip install shapash\n```\n\n### 2. 国内加速安装（推荐）\n如果您在中国大陆，建议使用清华源或阿里源以加快下载速度：\n\n```bash\npip install shapash -i https:\u002F\u002Fpypi.tuna.tsinghua.edu.cn\u002Fsimple\n```\n\n### 3. 安装报告依赖（可选）\n若需使用 generate_report 生成独立的 HTML 审计报告，需安装额外依赖；CatBoost、XGBoost 等模型库可按需单独安装：\n\n```bash\npip install \"shapash[report]\" -i https:\u002F\u002Fpypi.tuna.tsinghua.edu.cn\u002Fsimple\n```\n\n## 基本使用\n\n以下是最简单的使用流程：训练一个模型，初始化 Shapash，并启动交互式 Web 应用。\n\n### 1. 
准备数据与模型\n首先导入必要的库并训练一个简单的模型（以 sklearn 为例）：\n\n```python\nimport pandas as pd\nfrom sklearn.model_selection import train_test_split\nfrom sklearn.ensemble import RandomForestClassifier\nfrom shapash import SmartExplainer\n\n# 加载示例数据\nurl = \"https:\u002F\u002Fraw.githubusercontent.com\u002Fpymetrics\u002Faudit-ai\u002Fmaster\u002Fdata\u002Fcompas.csv\"\ndata = pd.read_csv(url)\n\n# 数据预处理（简化版）\nfeatures = ['age', 'juv_fel_count', 'juv_misd_count', 'juv_other_count', 'priors_count']\nX = data[features]\ny = data['two_year_recid']\n\n# 划分数据集\nX_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)\n\n# 训练模型\nmodel = RandomForestClassifier(n_estimators=10, random_state=42)\nmodel.fit(X_train, y_train)\n```\n\n### 2. 初始化 Shapash 解释器\n编译解释器以计算特征贡献度。`features_dict` 用于将列名映射为更易读的中文或描述性标签（可选但推荐）。\n\n```python\n# 定义特征字典（将英文列名映射为易读标签）\nfeatures_dict = {\n    'age': '年龄',\n    'juv_fel_count': '少年重罪记录数',\n    'juv_misd_count': '少年轻罪记录数',\n    'juv_other_count': '其他少年记录数',\n    'priors_count': '前科数量'\n}\n\n# 初始化 SmartExplainer\nxpl = SmartExplainer(model=model, features_dict=features_dict)\n\n# 编译解释器\n# x: 测试集原始特征数据\n# y_pred: 可选，模型预测结果（需为 pandas 对象；省略时默认调用 model.predict）\nxpl.compile(\n    x=X_test,\n    y_pred=pd.Series(model.predict(X_test), index=X_test.index)\n)\n```\n\n### 3. 启动 Web 应用\n运行以下命令启动本地 Web 服务器，然后在浏览器中打开交互式仪表盘：\n\n```python\nxpl.run_app()\n```\n\n**操作提示：**\n*   启动后，终端会显示访问地址（通常为 `http:\u002F\u002F127.0.0.1:8050\u002F`）。\n*   在 Web 界面中，您可以查看全局特征重要性、筛选特定样本、分析单个预测的原因（局部解释），以及导出分析报告。\n\n### 4. 
(可选) 生成静态审计报告\n如果无需启动 Web 应用，可使用 generate_report 生成独立的 HTML 报告用于审计（需先安装 shapash[report] 依赖，并准备一个描述项目信息的 YAML 文件）：\n\n```python\nxpl.generate_report(\n    output_file='shapash_report.html',\n    project_info_file='project_info.yml',  # 项目信息 YAML 文件\n    title_story='模型可解释性报告'\n)\n```","某金融风控团队正在开发一套信贷审批模型，需要向非技术背景的业务部门和合规审计人员解释为何拒绝特定客户的贷款申请。\n\n### 没有 shapash 时\n- 数据科学家只能输出复杂的 SHAP 数值矩阵或晦涩的代码图表，业务人员完全看不懂特征对结果的具体影响。\n- 面对“为什么拒绝这位客户”的质询，团队需手动编写大量临时代码来提取单个案例的解释，响应速度极慢且容易出错。\n- 缺乏统一的可视化界面，全局模型逻辑（如哪些特征整体最重要）与局部个案分析割裂，难以在会议中直观展示。\n- 合规审计报告需要人工拼凑截图和数据，格式不统一，难以证明模型决策的透明度和公平性。\n- 不同利益相关者（开发、业务、法务）对模型理解存在巨大鸿沟，导致模型上线审批流程反复受阻。\n\n### 使用 shapash 后\n- shapash 自动生成带有清晰中文标签的可视化图表，业务人员能直接看懂“收入”和“负债率”如何具体影响了审批结果。\n- 通过内置的 Webapp，团队成员可实时输入任意客户 ID，秒级获取该个案的详细决策依据，无需再写一行解释代码。\n- 在同一界面中无缝切换全局视角（整体模型行为）和局部视角（单个预测解释），让汇报演示流畅且具有说服力。\n- shapash 一键生成包含模型概览、特征贡献及稳定性分析的综合报告，直接满足合规审计对可解释性的严格要求。\n- 透明的交互界面消除了技术黑盒，让业务和法务团队建立信任，大幅缩短了模型从开发到投产的周期。\n\nshapash 将复杂的算法逻辑转化为每个人都能理解的语言，真正实现了机器学习模型的透明化与可信落地。","https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FMAIF_shapash_e2b8c3de.png","MAIF","https:\u002F\u002Foss.gittoolsai.com\u002Favatars\u002FMAIF_6e223a9e.png","#OSSbyMAIF",null,"oss@maif.fr","https:\u002F\u002Fmaif.github.io","https:\u002F\u002Fgithub.com\u002FMAIF",[80,84,88,92,95,99,102],{"name":81,"color":82,"percentage":83},"Jupyter Notebook","#DA5B0B",85,{"name":85,"color":86,"percentage":87},"Python","#3572A5",14.8,{"name":89,"color":90,"percentage":91},"CSS","#663399",0.1,{"name":93,"color":94,"percentage":91},"HTML","#e34c26",{"name":96,"color":97,"percentage":98},"Jinja","#a52a22",0,{"name":100,"color":101,"percentage":98},"Makefile","#427819",{"name":103,"color":104,"percentage":98},"JavaScript","#f1e05a",3183,374,"2026-04-13T21:00:49","Apache-2.0",1,"未说明",{"notes":112,"python":113,"dependencies":114},"该工具是一个用于解释机器学习模型的 Python 库，支持回归、二分类和多分类问题。兼容多种模型（如 Catboost, Xgboost, LightGBM, Sklearn 等）。主要功能包括生成交互式 Web 应用和独立的 HTML 审计报告。README 中未明确列出具体的操作系统、GPU、内存需求及依赖库的具体版本号，通常此类库在主流操作系统上均可运行，且主要依赖 CPU 
进行计算。","3.6+",[115,116,117,118,119,120,121,122,123],"shap","plotly","dash","pandas","numpy","scikit-learn","catboost","xgboost","lightgbm",[14],[126,127,128,129,130,131,115,132,133],"python","machine-learning","explainability","explainable-ml","transparency","ethical-artificial-intelligence","lime","interpretability","2026-03-27T02:49:30.150509","2026-04-14T12:26:52.478399",[137,142,147,152,157],{"id":138,"question_zh":139,"answer_zh":140,"source_url":141},33081,"interactions_plot 生成的图表中，Y 轴（Shap 交互值）是如何计算的？为什么数值看起来不对？","这是一个已知问题。Shapash 在计算两个变量之间的交互作用时，没有将交互矩阵乘以 2。因此，图表中显示的 Y 轴数值实际上是正确值的一半。例如，如果显示值为 -0.245，实际应为该值的两倍。维护者已确认此问题并计划修复。","https:\u002F\u002Fgithub.com\u002FMAIF\u002Fshapash\u002Fissues\u002F477",{"id":143,"question_zh":144,"answer_zh":145,"source_url":146},33082,"如何在 ColumnTransformer 中使用 Imputer（如 SimpleImputer）进行预处理？","Shapash 的 'preprocessing' 参数主要用于对输入数据 x 执行逆变换（inverse_transform），以便将编码后的数据还原为原始模式以便解释。目前官方暂不支持在 ColumnTransformer 中直接集成 Imputer 或其他 Scaler（如 RobustScaler, MinMaxScaler）。建议用户在调用 xpl.compile() 之前，在外部独立完成数据的填充（imputation）和编码步骤，并将处理后的数据传入。注意：为了避免数据泄露，填充操作应在 train_test_split 之后针对训练集拟合，再应用于测试集。","https:\u002F\u002Fgithub.com\u002FMAIF\u002Fshapash\u002Fissues\u002F366",{"id":148,"question_zh":149,"answer_zh":150,"source_url":151},33083,"Shapash 是否支持 Python 3.10？","是的，Shapash 现已支持 Python 3.10。维护者已移除对已结束生命周期（EOL）的 Python 3.6 的支持，并添加了针对 Python 3.10 的依赖检查、测试及 GitHub 工作流适配。用户现在可以在 Python 3.10 环境中正常使用 Shapash，但需注意 numba 和 numpy 版本的依赖兼容性。","https:\u002F\u002Fgithub.com\u002FMAIF\u002Fshapash\u002Fissues\u002F293",{"id":153,"question_zh":154,"answer_zh":155,"source_url":156},33084,"生成报告时遇到 \"ValueError: The condensed distance matrix must contain only finite values\" 错误怎么办？","该错误通常由数据集中存在常量列（constant values）或缺失值（NaNs）引起。当某列的标准差为零时，pandas 的 corr() 函数计算相关性会产生 NaN，进而导致距离矩阵计算失败。解决方法是检查输入数据，移除所有值都相同的常量特征列，或在生成报告前妥善处理缺失值，确保用于相关性分析的数据框中不包含 
NaN。","https:\u002F\u002Fgithub.com\u002FMAIF\u002Fshapash\u002Fissues\u002F472",{"id":158,"question_zh":159,"answer_zh":160,"source_url":161},33085,"在使用 XGBoost 的生存分析模型（objective='survival:cox'）时，为什么编译 SmartExplainer 会报错？","Shapash 底层依赖 SHAP 库来计算贡献值。对于 XGBoost 的特殊目标函数（如 'survival:cox'），SHAP 的某些后端可能无法直接自动计算贡献值，或者需要特定的解释器配置。如果遇到 ValueError 或相关计算错误，建议尝试手动计算 SHAP 值（使用 shap.TreeExplainer 或其他适用解释器），然后将计算好的 contributions 矩阵直接传递给 SmartExplainer 的 compile 方法，而不是让 Shapash 自动推导。","https:\u002F\u002Fgithub.com\u002FMAIF\u002Fshapash\u002Fissues\u002F127",[163,168,173,178,183,188,193,198,203,208,213,218,223,228,233,238,243,248,253,258],{"id":164,"version":165,"summary_zh":166,"released_at":167},251929,"v2.8.1","## 功能特性\n\n- 由 [@guillaume-vignal](https:\u002F\u002Fgithub.com\u002Fguillaume-vignal) 改进了 Web 应用的响应性和布局一致性（包括页眉、下拉菜单、表格和输入框）—— [#676](https:\u002F\u002Fgithub.com\u002FMAIF\u002Fshapash\u002Fpull\u002F676)\n- 由 [@guillaume-vignal](https:\u002F\u002Fgithub.com\u002Fguillaume-vignal) 采用固定单位与响应式单位相结合的方式优化了字体大小，以提升跨屏幕的可读性—— [#676](https:\u002F\u002Fgithub.com\u002FMAIF\u002Fshapash\u002Fpull\u002F676)\n\n## 修复内容\n\n- 由 [@guillaume-vignal](https:\u002F\u002Fgithub.com\u002Fguillaume-vignal) 调整了标题高度和对齐方式，以避免不同浏览器之间的布局不一致—— [#676](https:\u002F\u002Fgithub.com\u002FMAIF\u002Fshapash\u002Fpull\u002F676)\n- 由 [@guillaume-vignal](https:\u002F\u002Fgithub.com\u002Fguillaume-vignal) 强制要求 `pandas \u003C 3`，以防止破坏性变更并确保数据处理的稳定性—— [#676](https:\u002F\u002Fgithub.com\u002FMAIF\u002Fshapash\u002Fpull\u002F676)\n\n**完整更新日志**: [v2.8.0...v2.8.1](https:\u002F\u002Fgithub.com\u002FMAIF\u002Fshapash\u002Fcompare\u002F2.8.0...2.8.1)","2026-01-30T10:50:27",{"id":169,"version":170,"summary_zh":171,"released_at":172},251940,"v2.7.1","### What's Changed\r\n\r\n\u003Cdiv id=\"release-thumbnail\">\r\nMajor fix in this release:\r\n\u003Cul>\r\n\u003Cli>\u003Cstrong>Packaging Fix:\u003C\u002Fstrong> Resolved a critical issue in \u003Ccode>pyproject.toml\u003C\u002Fcode> that caused 
\u003Ccode>shapash\u003C\u002Fcode> to fail importing after installation. The module now installs and loads correctly in all environments, including Conda and Python 3.10 setups.\u003C\u002Fli>\r\n\u003C\u002Ful>\r\n\u003C\u002Fdiv>\r\n\r\n- **Fix pyproject.toml** by [@tbloron](https:\u002F\u002Fgithub.com\u002Ftbloron) in [#592](https:\u002F\u002Fgithub.com\u002FMAIF\u002Fshapash\u002Fpull\u002F592)\r\n  - This fix resolves an issue where the `shapash` module was not importable after installation, resulting in a `ModuleNotFoundError`. The problem was traced back to the `pyproject.toml` configuration file, which has now been corrected. The fix ensures that Shapash installs and imports correctly in all environments.\r\n\r\n### Bug Fix Details:\r\n\r\n- Users installing version 2.7.0 encountered a packaging issue that prevented `shapash` from being properly imported in Python environments, including Conda environments using Python 3.10. This version addresses that issue, making the module accessible after installation.\r\n\r\n### Installation Instructions:\r\n\r\nTo upgrade to the latest version, run:\r\n\r\n```bash\r\npip install --upgrade shapash\r\n```\r\n\r\n### Full Changelog:\r\n\r\n[Compare v2.7.0...v2.7.1](https:\u002F\u002Fgithub.com\u002FMAIF\u002Fshapash\u002Fcompare\u002Fv2.7.0...v2.7.1)","2024-10-11T08:50:59",{"id":174,"version":175,"summary_zh":176,"released_at":177},251941,"v2.7.0","## What's Changed\r\n\r\n\u003Cdiv id=\"release-thumbnail\">\r\nMajor features and improvements in this release are:\r\n\u003Cul>\r\n\u003Cli>\u003Cstrong>Feature Importance Pagination:\u003C\u002Fstrong> Navigate through feature importance results across multiple pages, enabling deeper exploration of models with many features.\u003C\u002Fli>\r\n\u003Cli>\u003Cstrong>Subpopulation-based Feature Importance Plots:\u003C\u002Fstrong> New visualizations to analyze feature importance divergence across subpopulations and track importance variation across the 
dataset.\u003C\u002Fli>\r\n\u003Cli>\u003Cstrong>SmartPlotter Refactoring:\u003C\u002Fstrong> Modularized plotting system for easier maintenance and future feature additions.\u003C\u002Fli>\r\n\u003Cli>\u003Cstrong>Flask Constraint Removal:\u003C\u002Fstrong> Shapash now supports the latest Flask versions, improving compatibility and performance.\u003C\u002Fli>\r\n\u003C\u002Ful>\r\n\u003C\u002Fdiv>\r\n\r\n### Documentation Updates\r\n- **Update Figures in Documentation** by [@guillaume-vignal](https:\u002F\u002Fgithub.com\u002Fguillaume-vignal) ([#568](https:\u002F\u002Fgithub.com\u002FMAIF\u002Fshapash\u002Fpull\u002F568))  \r\n  Updated the figures in the documentation to reflect changes introduced in version 2.6.0. Some unnecessary files were also removed.\r\n\r\n### New Features\r\n- **Feature Importance Pagination** by [@guillaume-vignal](https:\u002F\u002Fgithub.com\u002Fguillaume-vignal) ([#574](https:\u002F\u002Fgithub.com\u002FMAIF\u002Fshapash\u002Fpull\u002F574))  \r\n  Introduced pagination for the feature importance plot, allowing users to navigate through all features. 
This is especially useful for models with a large number of features, as users can now explore feature contributions beyond the top few with improved usability and dynamic page handling.\r\n  \r\n- **Subpopulation-based Feature Importance Plots** by [@guillaume-vignal](https:\u002F\u002Fgithub.com\u002Fguillaume-vignal) ([#579](https:\u002F\u002Fgithub.com\u002FMAIF\u002Fshapash\u002Fpull\u002F579))  \r\n  Added two new plots:\r\n  - *Local Importance Divergence Metric*: Highlights features with varying importance across different subpopulations.\r\n  - *Feature Importance Curve Plot*: Displays how feature importance fluctuates across the dataset, offering more granular insights.\r\n\r\n### Enhancements\r\n- **SmartPlotter Refactoring** by [@guillaume-vignal](https:\u002F\u002Fgithub.com\u002Fguillaume-vignal) ([#582](https:\u002F\u002Fgithub.com\u002FMAIF\u002Fshapash\u002Fpull\u002F582))  \r\n  Simplified the `SmartPlotter` class by decoupling each plot type into its own function file. This improves modularity, making the code more maintainable and testable. Future plots can now be easily added without altering the core structure.\r\n  \r\n- **Removed Flask Version Constraint** by [@guillaume-vignal](https:\u002F\u002Fgithub.com\u002Fguillaume-vignal) ([#584](https:\u002F\u002Fgithub.com\u002FMAIF\u002Fshapash\u002Fpull\u002F584))  \r\n  Lifted the Flask version constraint (\u003C2.3.0) as the compatibility issue with Dash has been resolved. 
Shapash now supports the latest versions of Flask, enhancing security, compatibility, and performance.\r\n\r\n- **Dataset Sorting** by [@sam94700](https:\u002F\u002Fgithub.com\u002Fsam94700) ([#575](https:\u002F\u002Fgithub.com\u002FMAIF\u002Fshapash\u002Fpull\u002F575))  \r\n  Added the ability to sort datasets by features, improving data management.\r\n\r\n### Bug Fixes\r\n- **Contribution Plot for Boolean Features** by [@sam94700](https:\u002F\u002Fgithub.com\u002Fsam94700) ([#586](https:\u002F\u002Fgithub.com\u002FMAIF\u002Fshapash\u002Fpull\u002F586))  \r\n  Fixed a bug affecting the contribution plot for boolean features, ensuring accurate visualizations.\r\n\r\n- **DataFrame Transformation Warning Fix** by [@guillaume-vignal](https:\u002F\u002Fgithub.com\u002Fguillaume-vignal) ([#589](https:\u002F\u002Fgithub.com\u002FMAIF\u002Fshapash\u002Fpull\u002F589))  \r\n  Refactored DataFrame column transformations to avoid future warnings from pandas regarding in-place modifications.\r\n\r\n### Development Tools\r\n- **Add Ruff Linter and Formatter** by [@tbloron](https:\u002F\u002Fgithub.com\u002Ftbloron) ([#585](https:\u002F\u002Fgithub.com\u002FMAIF\u002Fshapash\u002Fpull\u002F585))  \r\n  Integrated the `ruff` linter and code formatter into the project. 
This also includes updates to the GitHub workflow and the addition of a `pyproject.toml` configuration file.\r\n\r\n## New Contributors\r\n- **[@sam94700](https:\u002F\u002Fgithub.com\u002Fsam94700)** made their first contribution in [#575](https:\u002F\u002Fgithub.com\u002FMAIF\u002Fshapash\u002Fpull\u002F575)\r\n- **[@tbloron](https:\u002F\u002Fgithub.com\u002Ftbloron)** made their first contribution in [#585](https:\u002F\u002Fgithub.com\u002FMAIF\u002Fshapash\u002Fpull\u002F585)\r\n\r\n**Full Changelog**: [v2.6.0...v2.7.0](https:\u002F\u002Fgithub.com\u002FMAIF\u002Fshapash\u002Fcompare\u002Fv2.6.0...v2.7.0)","2024-10-10T11:57:22",{"id":179,"version":180,"summary_zh":181,"released_at":182},251942,"v2.6.0","## What's Changed\r\n\r\n\u003Cdiv id=\"release-thumbnail\">\r\nMajor features announcements in this release are:\r\n\u003Cul>\r\n\u003Cli>Contribution Plot Improvement: Enhanced the contribution plot to provide more insightful visualizations.\u003C\u002Fli>\r\n\u003Cli>Shapash Report Enhancement: Upgraded the Shapash report with new functionalities and optimizations.\u003C\u002Fli>\r\n\u003C\u002Ful>\r\n\u003C\u002Fdiv>\r\n\r\n### Added\r\n* Feature\u002Fcontribution plot improvement by @guillaume-vignal in https:\u002F\u002Fgithub.com\u002FMAIF\u002Fshapash\u002Fpull\u002F553\r\n* Feature\u002Fshapash report improvement by @guillaume-vignal in https:\u002F\u002Fgithub.com\u002FMAIF\u002Fshapash\u002Fpull\u002F564\r\n\r\n### Fixed\r\n* Fix color style. 
by @MLecardonnel in https:\u002F\u002Fgithub.com\u002FMAIF\u002Fshapash\u002Fpull\u002F561\r\n* fix documentation generation bug due to numpy 2.0 by @guillaume-vignal in https:\u002F\u002Fgithub.com\u002FMAIF\u002Fshapash\u002Fpull\u002F566\r\n* Fix interaction plot bug on labels by @guillaume-vignal in https:\u002F\u002Fgithub.com\u002FMAIF\u002Fshapash\u002Fpull\u002F563\r\n\r\n\r\n**Full Changelog**: https:\u002F\u002Fgithub.com\u002FMAIF\u002Fshapash\u002Fcompare\u002Fv2.5.1...v2.6.0","2024-07-04T10:10:33",{"id":184,"version":185,"summary_zh":186,"released_at":187},251943,"v2.5.1","## What's Changed\r\n* Temporary Fix for NumPy 2.0 Incompatibility by @guillaume-vignal in https:\u002F\u002Fgithub.com\u002FMAIF\u002Fshapash\u002Fpull\u002F559\r\n\r\n\r\n**Full Changelog**: https:\u002F\u002Fgithub.com\u002FMAIF\u002Fshapash\u002Fcompare\u002Fv2.5.0...v2.5.1","2024-06-24T08:51:34",{"id":189,"version":190,"summary_zh":191,"released_at":192},251930,"v2.8.0","### What's Changed\n\n\u003Cdiv id=\"release-thumbnail\">\nThe main updates in this release are:\n\u003Cul>\n\u003Cli>\u003Cstrong>WebApp explainability extensions:\u003C\u002Fstrong> several new visual explainability tools in the Shapash WebApp, including global-local feature importance and clustering by explainability, significantly improving model interpretation.\u003C\u002Fli>\n\u003Cli>\u003Cstrong>Modernized packaging:\u003C\u002Fstrong> full support for \u003Ccode>numpy&gt;=2.0.0\u003C\u002Fcode> and modernized dependencies, ensuring forward compatibility with the current scientific Python ecosystem.\u003C\u002Fli>\n\u003C\u002Ful>\n\u003C\u002Fdiv>\n\n---\n\n### 🚀 New Features & Enhancements\n\n* **Add a global-local feature importance plot to the WebApp**\n  by [@guillaume-vignal](https:\u002F\u002Fgithub.com\u002Fguillaume-vignal) in [#656](https:\u002F\u002Fgithub.com\u002FMAIF\u002Fshapash\u002Fpull\u002F656)\n\n  * Introduces a new visualization that combines global feature importance with local (instance-level) contributions, bridging model-level and individual explanations directly in the WebApp.\n\n* **Add clustering-by-explainability plots and integrate them into the WebApp**\n  by [@guillaume-vignal](https:\u002F\u002Fgithub.com\u002Fguillaume-vignal) and [@Yh-Cherif](https:\u002F\u002Fgithub.com\u002FYh-Cherif) in [#658](https:\u002F\u002Fgithub.com\u002FMAIF\u002Fshapash\u002Fpull\u002F658), [#671](https:\u002F\u002Fgithub.com\u002FMAIF\u002Fshapash\u002Fpull\u002F671) and [#632](https:\u002F\u002Fgithub.com\u002FMAIF\u002Fshapash\u002Fpull\u002F632)\n\n  * Provides a new map-based visualization for exploring and comparing explanation profiles across individuals.\n  * Projects Shapley allocation values into a low-dimensional space, helping identify explanation patterns and clusters among observations.\n\n* **Add column-order support for additional data in the WebApp**\n  by [@guillaume-vignal](https:\u002F\u002Fgithub.com\u002Fguillaume-vignal) in [#643](https:\u002F\u002Fgithub.com\u002FMAIF\u002Fshapash\u002Fpull\u002F643)\n\n  * Allows explicit control over the display order of additional data columns, improving readability and consistency in the WebApp.\n\n* **Add `_error_` column support for classification tasks**\n  by [@guillaume-vignal](https:\u002F\u002Fgithub.com\u002Fguillaume-vignal) in [#663](https:\u002F\u002Fgithub.com\u002FMAIF\u002Fshapash\u002Fpull\u002F663)\n\n  * Explicitly supports tracking classification errors in datasets and visualizations.\n\n---\n\n### 📊 Visualization & Projection Updates\n\n* **Add a `cat_num_threshold` parameter to `distribution_plot`**\n  by [@guillaume-vignal](https:\u002F\u002Fgithub.com\u002Fguillaume-vignal) in [#646](https:\u002F\u002Fgithub.com\u002FMAIF\u002Fshapash\u002Fpull\u002F646)\n\n  * Improves the automatic handling of categorical vs. numerical features in distribution plots.\n\n---\n\n### ⚙️ Technical Improvements & Performance\n\n* **Add support for `numpy>=2.0.0` and modernize dependencies**\n  by [@guillaume-vignal](https:\u002F\u002Fgithub.com\u002Fguillaume-vignal) in [#650](https:\u002F\u002Fgithub.com\u002FMAIF\u002Fshapash\u002Fpull\u002F650)\n\n  * Updates core dependencies to ensure NumPy 2.x compatibility and keep the library future-proof.\n\n* **Update the `pyproject.toml` file**\n  by [@guerinclement](https:\u002F\u002Fgithub.com\u002Fguerinclement) in [#651](https:\u002F\u002Fgithub.com\u002FMAIF\u002Fshapash\u002Fpull\u002F651)","2026-01-20T14:00:54",{"id":194,"version":195,"summary_zh":196,"released_at":197},251931,"v2.7.10","## Features\n\n* **Added support for Python 3.13** in `pyproject.toml`, by @guerinclement — [#635](https:\u002F\u002Fgithub.com\u002FMAIF\u002Fshapash\u002Fpull\u002F635)\n* **Refactored type annotations**: replaced `typing.List` and `Tuple` with the built-in `list` and `tuple` (PEP 585), by @guillaume-vignal — [#636](https:\u002F\u002Fgithub.com\u002FMAIF\u002Fshapash\u002Fpull\u002F636)\n\n## Fixes\n\n* **Pinned Dash below 3.0** due to breaking changes in Dash 3.x, by @jasperges — [#634](https:\u002F\u002Fgithub.com\u002FMAIF\u002Fshapash\u002Fpull\u002F634)\n* **Handled non-finite values** in the distance matrix of `SmartPlotter.correlations` to fix a clustering error, by @guillaume-vignal — [#638](https:\u002F\u002Fgithub.com\u002FMAIF\u002Fshapash\u002Fpull\u002F638)\n* **WebApp fixes**: corrected date filtering and improved boolean-type handling with pandas, by @ZakariaRida96 — [#640](https:\u002F\u002Fgithub.com\u002FMAIF\u002Fshapash\u002Fpull\u002F640)\n\n## New Contributors\n\n* @jasperges (Dash version fix)\n* @ZakariaRida96 (WebApp improvements)\n\n**Full Changelog**: https:\u002F\u002Fgithub.com\u002FMAIF\u002Fshapash\u002Fcompare\u002Fv2.7.9...v2.7.10","2025-07-24T09:25:23",{"id":199,"version":200,"summary_zh":201,"released_at":202},251932,"v2.7.9","## What's Changed\n* Fixed `display_model_analysis` in the Shapash report so it correctly retrieves the `sklearn` version, by @guillaume-vignal in https:\u002F\u002Fgithub.com\u002FMAIF\u002Fshapash\u002Fpull\u002F628.\n\n\n**Full Changelog**: https:\u002F\u002Fgithub.com\u002FMAIF\u002Fshapash\u002Fcompare\u002Fv2.7.8...v2.7.9","2025-03-20T15:04:41",{"id":204,"version":205,"summary_zh":206,"released_at":207},251933,"v2.7.8","## What's Changed\n\n- Fixed contribution plots not displaying in the `report`. Thanks to [@MLecardonnel](https:\u002F\u002Fgithub.com\u002FMLecardonnel) for the contribution in [PR #622](https:\u002F\u002Fgithub.com\u002FMAIF\u002Fshapash\u002Fpull\u002F622).\n\n**Full Changelog:** [v2.7.7...v2.7.8](https:\u002F\u002Fgithub.com\u002FMAIF\u002Fshapash\u002Fcompare\u002Fv2.7.7...v2.7.8)","2025-02-13T13:54:16",{"id":209,"version":210,"summary_zh":211,"released_at":212},251934,"v2.7.7","## What's Changed\n* Added an option to display by interaction plot, by @guillaume-vignal in https:\u002F\u002Fgithub.com\u002FMAIF\u002Fshapash\u002Fpull\u002F621\n* Pinned Plotly below 6.0.0 due to random unexpected errors in some Python environments, by @guillaume-vignal in https:\u002F\u002Fgithub.com\u002FMAIF\u002Fshapash\u002Fpull\u002F621\n\n\n**Full Changelog**: https:\u002F\u002Fgithub.com\u002FMAIF\u002Fshapash\u002Fcompare\u002Fv2.7.6...v2.7.7","2025-02-11T15:52:52",{"id":214,"version":215,"summary_zh":216,"released_at":217},251935,"v2.7.6","## 
What's Changed\n* Pinned the scikit-learn version to \u003C 1.6.0, by @MLecardonnel in https:\u002F\u002Fgithub.com\u002FMAIF\u002Fshapash\u002Fpull\u002F614\n\n**Full Changelog**: https:\u002F\u002Fgithub.com\u002FMAIF\u002Fshapash\u002Fcompare\u002Fv2.7.5...v2.7.6","2025-01-10T16:14:14",{"id":219,"version":220,"summary_zh":221,"released_at":222},251936,"v2.7.5","## What's Changed\n* Added customizable plot sizes, improved title alignment, and fixed color palette selection in the Shapash explainability quality plots, by @guillaume-vignal in https:\u002F\u002Fgithub.com\u002FMAIF\u002Fshapash\u002Fpull\u002F609.\n* Enhanced the functionality and consistency of other visualizations, by @guillaume-vignal in https:\u002F\u002Fgithub.com\u002FMAIF\u002Fshapash\u002Fpull\u002F612.\n\n\n**Full Changelog**: https:\u002F\u002Fgithub.com\u002FMAIF\u002Fshapash\u002Fcompare\u002Fv2.7.4...v2.7.5","2024-12-09T15:34:15",{"id":224,"version":225,"summary_zh":226,"released_at":227},251937,"v2.7.4","### What's Changed\n- **Fixed local feature importance plotting after a global feature importance plot**  \n  Resolved an error caused by a missing-value computation when plotting a local feature importance plot right after a global one. This fix ensures seamless switching between global and local plots, improving plotting stability and usability.  \n  **Contributed by [@guillaume-vignal](https:\u002F\u002Fgithub.com\u002Fguillaume-vignal) in [#605](https:\u002F\u002Fgithub.com\u002FMAIF\u002Fshapash\u002Fpull\u002F605)**\n\n- **Dynamic title-height adjustment for feature importance plots**  \n  Added a feature that dynamically adjusts the title position based on the plot height, so the title no longer overlaps the plot content. This update significantly improves readability and layout flexibility, especially for custom-sized plots.  \n  **Contributed by [@guillaume-vignal](https:\u002F\u002Fgithub.com\u002Fguillaume-vignal) in [#607](https:\u002F\u002Fgithub.com\u002FMAIF\u002Fshapash\u002Fpull\u002F607)**\n\n**Full Changelog**: [See the changes from v2.7.3 to v2.7.4...](https:\u002F\u002Fgithub.com\u002FMAIF\u002Fshapash\u002Fcompare\u002Fv2.7.3...v2.7.4)","2024-10-25T09:58:14",{"id":229,"version":230,"summary_zh":231,"released_at":232},251938,"v2.7.3","### **What's Changed**\n\u003Cdiv id=\"release-thumbnail\">\nThis release brings important fixes that improve the visualization capabilities and stability of the Shapash package.\n\n- **Fixed overlapping plot titles**: resolved an issue where a mispositioned title would overlap the plot content when users specified a different figure height.\n\n- **Fixed duplicate shortened labels**: when long feature names are programmatically shortened for better visualization, duplicate labels could cause missing lines in the correlation matrix.\n\n- **Restored missing files in the Shapash package**: important files such as `*.ipynb`, `*.html`, and `*.j2` were missing from previous package versions, which caused numerous issues and blocked report generation. These files are now correctly included, fully restoring the package's functionality.\n\u003C\u002Fdiv>\n\n### **Contributors**\n- 
@guillaume-vignal in [PR #603](https:\u002F\u002Fgithub.com\u002FMAIF\u002Fshapash\u002Fpull\u002F603)\n\n**Full Changelog**: [v2.7.2...v2.7.3](https:\u002F\u002Fgithub.com\u002FMAIF\u002Fshapash\u002Fcompare\u002Fv2.7.2...v2.7.3)","2024-10-24T09:36:10",{"id":234,"version":235,"summary_zh":236,"released_at":237},251939,"v2.7.2","## What's Changed\r\n\r\n\u003Cdiv id=\"release-thumbnail\">\r\nThis release focuses on bug fixes to improve stability and functionality:\r\n\r\n* **Fixed default color in Local explanation plot**: Resolved an issue where the default colors in the local explanation plot were incorrect.\r\n* **Improved pagination for large feature sets**: Addressed a bug where pagination would not work properly.\r\n* **Restored Shapash icon in the webapp**: Replaced the unintended dash icon with the correct Shapash icon in the web application.\r\n* **Removed unnecessary dataframe print in `plot_scatter_prediction.py`**: Eliminated unintended dataframe printing, which improves clarity in scatter plot generation.\r\n\u003C\u002Fdiv>\r\n\r\nContributed by [@guillaume-vignal](https:\u002F\u002Fgithub.com\u002Fguillaume-vignal) in [#597](https:\u002F\u002Fgithub.com\u002FMAIF\u002Fshapash\u002Fpull\u002F597).\r\n\r\n**Full Changelog**: https:\u002F\u002Fgithub.com\u002FMAIF\u002Fshapash\u002Fcompare\u002Fv2.7.1...v2.7.2","2024-10-17T13:52:56",{"id":239,"version":240,"summary_zh":241,"released_at":242},251944,"v2.5.0","## What's Changed\r\n\r\n\u003Cdiv id=\"release-thumbnail\">\r\nMajor changes in this release are:\r\n\u003Cul>\r\n\u003Cli>Dropped support for Python 3.8.\u003C\u002Fli>\r\n\u003Cli>Now requires Scikit-learn >= 1.4.0, pandas >= 2.1.0, and shap >= 0.45.0.\u003C\u002Fli>\r\n\u003Cli>Added support for Python 3.12.\u003C\u002Fli>\r\n\u003Cli>Optimized compile step to compute predictions and probabilities directly.\u003C\u002Fli>\r\n\u003Cli>Fixed multiple issues in the report demo.\u003C\u002Fli>\r\n\u003C\u002Ful>\r\n\u003C\u002Fdiv>\r\n\r\n### Breaking 
changes\r\n* Dropped support for Python 3.8 by @MLecardonnel in https:\u002F\u002Fgithub.com\u002FMAIF\u002Fshapash\u002Fpull\u002F538  \r\n* Support only scikit-learn>=1.4.0 by @MLecardonnel in https:\u002F\u002Fgithub.com\u002FMAIF\u002Fshapash\u002Fpull\u002F543  \r\n* Support only pandas>=2.1.0 by @guillaume-vignal in https:\u002F\u002Fgithub.com\u002FMAIF\u002Fshapash\u002Fpull\u002F551  \r\n* Support only shap>=0.45.0 by @guillaume-vignal in https:\u002F\u002Fgithub.com\u002FMAIF\u002Fshapash\u002Fpull\u002F552\r\n\r\n### Added\r\n* Feature python 3.12 support by @MLecardonnel in https:\u002F\u002Fgithub.com\u002FMAIF\u002Fshapash\u002Fpull\u002F538  \r\n* Optimization: compute predictions and probabilities directly in the compile step by @guillaume-vignal in https:\u002F\u002Fgithub.com\u002FMAIF\u002Fshapash\u002Fpull\u002F535, https:\u002F\u002Fgithub.com\u002FMAIF\u002Fshapash\u002Fpull\u002F544\r\n\r\n### Fixed\r\n* Fixed report demo by @MLecardonnel in https:\u002F\u002Fgithub.com\u002FMAIF\u002Fshapash\u002Fpull\u002F540, https:\u002F\u002Fgithub.com\u002FMAIF\u002Fshapash\u002Fpull\u002F541, https:\u002F\u002Fgithub.com\u002FMAIF\u002Fshapash\u002Fpull\u002F546, https:\u002F\u002Fgithub.com\u002FMAIF\u002Fshapash\u002Fpull\u002F547, https:\u002F\u002Fgithub.com\u002FMAIF\u002Fshapash\u002Fpull\u002F548, https:\u002F\u002Fgithub.com\u002FMAIF\u002Fshapash\u002Fpull\u002F549\r\n\r\n**Full Changelog**: https:\u002F\u002Fgithub.com\u002FMAIF\u002Fshapash\u002Fcompare\u002Fv2.4.3...v2.5.0\r\n","2024-05-06T15:21:57",{"id":244,"version":245,"summary_zh":246,"released_at":247},251945,"v2.4.3","## What's Changed\r\n* remove code for category_encoder\u003C=2.2.2 by @guillaume-vignal in https:\u002F\u002Fgithub.com\u002FMAIF\u002Fshapash\u002Fpull\u002F530\r\n* Hotfix shap 0.45.0 by @guillaume-vignal in https:\u002F\u002Fgithub.com\u002FMAIF\u002Fshapash\u002Fpull\u002F534\r\n* last release for: python 3.8, shap\u003C0.45.0, scikit-learn\u003C1.4\r\n\r\n**Full 
Changelog**: https:\u002F\u002Fgithub.com\u002FMAIF\u002Fshapash\u002Fcompare\u002Fv2.4.2...v2.4.3","2024-03-12T10:53:41",{"id":249,"version":250,"summary_zh":251,"released_at":252},251946,"v2.4.2","## What's Changed\r\n* Feature\u002Fcode quality by @guerinclement in https:\u002F\u002Fgithub.com\u002FMAIF\u002Fshapash\u002Fpull\u002F521\r\n* Bump dash from 1.9.1 to 2.15.0 by @dependabot in https:\u002F\u002Fgithub.com\u002FMAIF\u002Fshapash\u002Fpull\u002F526\r\n* Feature\u002Flint by @guerinclement in https:\u002F\u002Fgithub.com\u002FMAIF\u002Fshapash\u002Fpull\u002F522\r\n\r\n## New Contributors\r\n* @guerinclement made their first contribution in https:\u002F\u002Fgithub.com\u002FMAIF\u002Fshapash\u002Fpull\u002F521\r\n\r\n**Full Changelog**: https:\u002F\u002Fgithub.com\u002FMAIF\u002Fshapash\u002Fcompare\u002Fv2.4.1...v2.4.2","2024-02-08T10:24:50",{"id":254,"version":255,"summary_zh":256,"released_at":257},251947,"v2.4.1","Fix #514 BUG: with version 2.4.0, TreeExplainer is never used","2023-12-08T10:22:58",{"id":259,"version":260,"summary_zh":261,"released_at":262},251948,"v2.4.0","\u003Cdiv id=\"release-thumbnail\">\r\nMajor announcements in this release are:\u003Cbr \u002F>\r\n- Shapash supports Python 3.11 \u003Cbr \u002F>\r\n- Shapash can compute Shapley values through Shap for any model supported by Shap\r\n\u003C\u002Fdiv>\r\n\u003Cbr \u002F>\r\n\r\n**Features:**\r\n- Support for Python 3.11 #512 \r\n- Be able to use Shapash to compute Shapley values through Shap for any model supported by Shap #506\r\n\r\n**Breaking change:**\r\n- Removes ACV from shapash and fixes dependencies #482 \r\n\r\n**Fixes:**\r\n- #475 \r\n- #503 \r\n- #497 ","2023-12-01T08:36:46"]