[{"data":1,"prerenderedAt":-1},["ShallowReactive",2],{"similar-mmschlk--shapiq":3,"tool-mmschlk--shapiq":62},[4,18,26,36,46,54],{"id":5,"name":6,"github_repo":7,"description_zh":8,"stars":9,"difficulty_score":10,"last_commit_at":11,"category_tags":12,"status":17},4358,"openclaw","openclaw\u002Fopenclaw","OpenClaw 是一款专为个人打造的本地化 AI 助手，旨在让你在自己的设备上拥有完全可控的智能伙伴。它打破了传统 AI 助手局限于特定网页或应用的束缚，能够直接接入你日常使用的各类通讯渠道，包括微信、WhatsApp、Telegram、Discord、iMessage 等数十种平台。无论你在哪个聊天软件中发送消息，OpenClaw 都能即时响应，甚至支持在 macOS、iOS 和 Android 设备上进行语音交互，并提供实时的画布渲染功能供你操控。\n\n这款工具主要解决了用户对数据隐私、响应速度以及“始终在线”体验的需求。通过将 AI 部署在本地，用户无需依赖云端服务即可享受快速、私密的智能辅助，真正实现了“你的数据，你做主”。其独特的技术亮点在于强大的网关架构，将控制平面与核心助手分离，确保跨平台通信的流畅性与扩展性。\n\nOpenClaw 非常适合希望构建个性化工作流的技术爱好者、开发者，以及注重隐私保护且不愿被单一生态绑定的普通用户。只要具备基础的终端操作能力（支持 macOS、Linux 及 Windows WSL2），即可通过简单的命令行引导完成部署。如果你渴望拥有一个懂你",349277,3,"2026-04-06T06:32:30",[13,14,15,16],"Agent","开发框架","图像","数据工具","ready",{"id":19,"name":20,"github_repo":21,"description_zh":22,"stars":23,"difficulty_score":10,"last_commit_at":24,"category_tags":25,"status":17},3808,"stable-diffusion-webui","AUTOMATIC1111\u002Fstable-diffusion-webui","stable-diffusion-webui 是一个基于 Gradio 构建的网页版操作界面，旨在让用户能够轻松地在本地运行和使用强大的 Stable Diffusion 图像生成模型。它解决了原始模型依赖命令行、操作门槛高且功能分散的痛点，将复杂的 AI 绘图流程整合进一个直观易用的图形化平台。\n\n无论是希望快速上手的普通创作者、需要精细控制画面细节的设计师，还是想要深入探索模型潜力的开发者与研究人员，都能从中获益。其核心亮点在于极高的功能丰富度：不仅支持文生图、图生图、局部重绘（Inpainting）和外绘（Outpainting）等基础模式，还独创了注意力机制调整、提示词矩阵、负向提示词以及“高清修复”等高级功能。此外，它内置了 GFPGAN 和 CodeFormer 等人脸修复工具，支持多种神经网络放大算法，并允许用户通过插件系统无限扩展能力。即使是显存有限的设备，stable-diffusion-webui 也提供了相应的优化选项，让高质量的 AI 艺术创作变得触手可及。",162132,"2026-04-05T11:01:52",[14,15,13],{"id":27,"name":28,"github_repo":29,"description_zh":30,"stars":31,"difficulty_score":32,"last_commit_at":33,"category_tags":34,"status":17},1381,"everything-claude-code","affaan-m\u002Feverything-claude-code","everything-claude-code 是一套专为 AI 编程助手（如 Claude Code、Codex、Cursor 等）打造的高性能优化系统。它不仅仅是一组配置文件，而是一个经过长期实战打磨的完整框架，旨在解决 AI 
代理在实际开发中面临的效率低下、记忆丢失、安全隐患及缺乏持续学习能力等核心痛点。\n\n通过引入技能模块化、直觉增强、记忆持久化机制以及内置的安全扫描功能，everything-claude-code 能显著提升 AI 在复杂任务中的表现，帮助开发者构建更稳定、更智能的生产级 AI 代理。其独特的“研究优先”开发理念和针对 Token 消耗的优化策略，使得模型响应更快、成本更低，同时有效防御潜在的攻击向量。\n\n这套工具特别适合软件开发者、AI 研究人员以及希望深度定制 AI 工作流的技术团队使用。无论您是在构建大型代码库，还是需要 AI 协助进行安全审计与自动化测试，everything-claude-code 都能提供强大的底层支持。作为一个曾荣获 Anthropic 黑客大奖的开源项目，它融合了多语言支持与丰富的实战钩子（hooks），让 AI 真正成长为懂上",160784,2,"2026-04-19T11:32:54",[14,13,35],"语言模型",{"id":37,"name":38,"github_repo":39,"description_zh":40,"stars":41,"difficulty_score":42,"last_commit_at":43,"category_tags":44,"status":17},8272,"opencode","anomalyco\u002Fopencode","OpenCode 是一款开源的 AI 编程助手（Coding Agent），旨在像一位智能搭档一样融入您的开发流程。它不仅仅是一个代码补全插件，而是一个能够理解项目上下文、自主规划任务并执行复杂编码操作的智能体。无论是生成全新功能、重构现有代码，还是排查难以定位的 Bug，OpenCode 都能通过自然语言交互高效完成，显著减少开发者在重复性劳动和上下文切换上的时间消耗。\n\n这款工具专为软件开发者、工程师及技术研究人员设计，特别适合希望利用大模型能力来提升编码效率、加速原型开发或处理遗留代码维护的专业人群。其核心亮点在于完全开源的架构，这意味着用户可以审查代码逻辑、自定义行为策略，甚至私有化部署以保障数据安全，彻底打破了传统闭源 AI 助手的“黑盒”限制。\n\n在技术体验上，OpenCode 提供了灵活的终端界面（Terminal UI）和正在测试中的桌面应用程序，支持 macOS、Windows 及 Linux 全平台。它兼容多种包管理工具，安装便捷，并能无缝集成到现有的开发环境中。无论您是追求极致控制权的资深极客，还是渴望提升产出的独立开发者，OpenCode 都提供了一个透明、可信",144296,1,"2026-04-16T14:50:03",[13,45],"插件",{"id":47,"name":48,"github_repo":49,"description_zh":50,"stars":51,"difficulty_score":32,"last_commit_at":52,"category_tags":53,"status":17},2271,"ComfyUI","Comfy-Org\u002FComfyUI","ComfyUI 是一款功能强大且高度模块化的视觉 AI 引擎，专为设计和执行复杂的 Stable Diffusion 图像生成流程而打造。它摒弃了传统的代码编写模式，采用直观的节点式流程图界面，让用户通过连接不同的功能模块即可构建个性化的生成管线。\n\n这一设计巧妙解决了高级 AI 绘图工作流配置复杂、灵活性不足的痛点。用户无需具备编程背景，也能自由组合模型、调整参数并实时预览效果，轻松实现从基础文生图到多步骤高清修复等各类复杂任务。ComfyUI 拥有极佳的兼容性，不仅支持 Windows、macOS 和 Linux 全平台，还广泛适配 NVIDIA、AMD、Intel 及苹果 Silicon 等多种硬件架构，并率先支持 SDXL、Flux、SD3 等前沿模型。\n\n无论是希望深入探索算法潜力的研究人员和开发者，还是追求极致创作自由度的设计师与资深 AI 绘画爱好者，ComfyUI 
都能提供强大的支持。其独特的模块化架构允许社区不断扩展新功能，使其成为当前最灵活、生态最丰富的开源扩散模型工具之一，帮助用户将创意高效转化为现实。",109154,"2026-04-18T11:18:24",[14,15,13],{"id":55,"name":56,"github_repo":57,"description_zh":58,"stars":59,"difficulty_score":32,"last_commit_at":60,"category_tags":61,"status":17},6121,"gemini-cli","google-gemini\u002Fgemini-cli","gemini-cli 是一款由谷歌推出的开源 AI 命令行工具，它将强大的 Gemini 大模型能力直接集成到用户的终端环境中。对于习惯在命令行工作的开发者而言，它提供了一条从输入提示词到获取模型响应的最短路径，无需切换窗口即可享受智能辅助。\n\n这款工具主要解决了开发过程中频繁上下文切换的痛点，让用户能在熟悉的终端界面内直接完成代码理解、生成、调试以及自动化运维任务。无论是查询大型代码库、根据草图生成应用，还是执行复杂的 Git 操作，gemini-cli 都能通过自然语言指令高效处理。\n\n它特别适合广大软件工程师、DevOps 人员及技术研究人员使用。其核心亮点包括支持高达 100 万 token 的超长上下文窗口，具备出色的逻辑推理能力；内置 Google 搜索、文件操作及 Shell 命令执行等实用工具；更独特的是，它支持 MCP（模型上下文协议），允许用户灵活扩展自定义集成，连接如图像生成等外部能力。此外，个人谷歌账号即可享受免费的额度支持，且项目基于 Apache 2.0 协议完全开源，是提升终端工作效率的理想助手。",100752,"2026-04-10T01:20:03",[45,13,15,14],{"id":63,"github_repo":64,"name":65,"description_en":66,"description_zh":67,"ai_summary_zh":67,"readme_en":68,"readme_zh":69,"quickstart_zh":70,"use_case_zh":71,"hero_image_url":72,"owner_login":73,"owner_name":74,"owner_avatar_url":75,"owner_bio":76,"owner_company":77,"owner_location":78,"owner_email":79,"owner_twitter":79,"owner_website":80,"owner_url":81,"languages":82,"stars":91,"forks":92,"last_commit_at":93,"license":94,"difficulty_score":42,"env_os":95,"env_gpu":96,"env_ram":96,"env_deps":97,"category_tags":102,"github_topics":103,"view_count":32,"oss_zip_url":79,"oss_zip_packed_at":79,"status":17,"created_at":117,"updated_at":118,"faqs":119,"releases":120},9719,"mmschlk\u002Fshapiq","shapiq","Shapley Interactions and Shapley Values for Machine Learning","shapiq 是一款专为机器学习设计的 Python 开源库，旨在量化模型预测中的特征交互效应。传统的可解释性工具（如 SHAP）通常只关注单个特征的独立贡献，却忽略了特征之间协同作用产生的“合力”。shapiq 通过引入博弈论中的沙普利交互指数（Shapley Interaction Index），能够计算任意阶数的特征交互值，从而揭示多个特征组合在一起时如何共同影响模型结果，提供比单一归因更全面、深入的视角。\n\n这款工具特别适合机器学习研究人员、数据科学家以及需要深度分析模型行为的开发者使用。无论是探究复杂模型的决策逻辑，还是验证博弈论算法在 ML 领域的表现，shapiq 都能提供强有力的支持。其核心技术亮点在于不仅兼容现有的 SHAP 
工作流，还扩展了对高阶交互效应的近似计算能力，支持从成对交互到多特征协同的全面分析。用户只需几行代码即可加载数据、训练模型并可视化交互结果，同时底层模块也为算法研究者提供了灵活的基准测试环境。如果你希望超越表面归因，真正理解特征间复杂的协同机制，shapiq 是一个值得尝试的专业工具。","# shapiq: Shapley Interactions for Machine Learning \u003Cimg src=\"https:\u002F\u002Fraw.githubusercontent.com\u002Fmmschlk\u002Fshapiq\u002Fmain\u002Fdocs\u002Fsource\u002F_static\u002Flogo\u002Flogo_shapiq_light.svg\" alt=\"shapiq_logo\" align=\"right\" height=\"250px\"\u002F>\n\n[![PyPI version](https:\u002F\u002Fbadge.fury.io\u002Fpy\u002Fshapiq.svg)](https:\u002F\u002Fbadge.fury.io\u002Fpy\u002Fshapiq)\n[![License](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FLicense-MIT-brightgreen.svg)](https:\u002F\u002Fopensource.org\u002Flicenses\u002FMIT)\n[![codecov](https:\u002F\u002Fcodecov.io\u002Fgh\u002Fmmschlk\u002Fshapiq\u002Fbranch\u002Fmain\u002Fgraph\u002Fbadge.svg)](https:\u002F\u002Fcodecov.io\u002Fgh\u002Fmmschlk\u002Fshapiq)\n[![Tests](https:\u002F\u002Fgithub.com\u002Fmmschlk\u002Fshapiq\u002Factions\u002Fworkflows\u002Fci.yml\u002Fbadge.svg)](https:\u002F\u002Fgithub.com\u002Fmmschlk\u002Fshapiq\u002Fblob\u002Fmain\u002F.github\u002Fworkflows\u002Fci.yml)\n[![Read the Docs](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fmmschlk_shapiq_readme_13d664e1afd7.png)](https:\u002F\u002Fshapiq.readthedocs.io\u002Fen\u002Flatest\u002F?badge=latest)\n\n[![PyPI Version](https:\u002F\u002Fimg.shields.io\u002Fpypi\u002Fpyversions\u002Fshapiq.svg)](https:\u002F\u002Fpypi.org\u002Fproject\u002Fshapiq)\n[![PyPI status](https:\u002F\u002Fimg.shields.io\u002Fpypi\u002Fstatus\u002Fshapiq.svg?color=blue)](https:\u002F\u002Fpypi.org\u002Fproject\u002Fshapiq)\n[![PePy](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fmmschlk_shapiq_readme_764c0a45113d.png)](https:\u002F\u002Fpepy.tech\u002Fproject\u002Fshapiq)\n\n[![Code Style](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002Fcode%20style-black-000000.svg)](https:\u002F\u002Fgithub.com\u002Fpsf\u002Fblack)\n[![Contributions 
Welcome](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002Fcontributions-welcome-brightgreen)](https:\u002F\u002Fgithub.com\u002Fmmschlk\u002Fshapiq\u002Fissues)\n[![Last Commit](https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Flast-commit\u002Fmmschlk\u002Fshapiq)](https:\u002F\u002Fgithub.com\u002Fmmschlk\u002Fshapiq\u002Fcommits\u002Fmain)\n\n> An interaction may speak more than a thousand main effects.\n\nShapley Interaction Quantification (`shapiq`) is a Python package for (1) approximating any-order Shapley interactions, (2) benchmarking game-theoretical algorithms for machine learning, and (3) explaining feature interactions of model predictions. `shapiq` extends the well-known [shap](https:\u002F\u002Fgithub.com\u002Fshap\u002Fshap) package for both researchers working on game theory in machine learning and end-users explaining models. SHAP-IQ extends individual Shapley values by quantifying the **synergy** effect between entities (aka **players** in the jargon of game theory) such as explanatory features, data points, or weak learners in ensemble models. Synergies between players give a more comprehensive view of machine learning models.\n\n## 🛠️ Install\n`shapiq` is intended to work with **Python 3.12 and above**.\nInstallation can be done via `uv`:\n```sh\nuv add shapiq\n```\n\nor via `pip`:\n\n```sh\npip install shapiq\n```\n\n## 👀 Upcoming\nSee what’s on the horizon for the library in our [GitHub Project Board](https:\u002F\u002Fgithub.com\u002Fusers\u002Fmmschlk\u002Fprojects\u002F4). 
We plan and track upcoming features, improvements, and maintenance tasks there including new explainers, performance optimizations, and expanded model support.\n\n## ⭐ Quickstart\n\nYou can explain your model with `shapiq.explainer` and visualize Shapley interactions with `shapiq.plot`.\nIf you are interested in the underlying game theoretic algorithms, then check out the `shapiq.approximator` and `shapiq.games` modules.\n\n### Compute any-order feature interactions\n\nExplain your models with Shapley interactions:\nJust load your data and model, and then use a `shapiq.Explainer` to compute Shapley interactions.\n\n```python\nimport shapiq\n# load data\nX, y = shapiq.load_california_housing(to_numpy=True)\n# train a model\nfrom sklearn.ensemble import RandomForestRegressor\nmodel = RandomForestRegressor()\nmodel.fit(X, y)\n# set up an explainer with k-SII interaction values up to order 4\nexplainer = shapiq.TabularExplainer(\n    model=model,\n    data=X,\n    index=\"k-SII\",\n    max_order=4\n)\n# explain the model's prediction for the first sample\ninteraction_values = explainer.explain(X[0], budget=256)\n# analyse interaction values\nprint(interaction_values)\n\n>> InteractionValues(\n>>     index=k-SII, max_order=4, min_order=0, estimated=False,\n>>     estimation_budget=256, n_players=8, baseline_value=2.07282292,\n>>     Top 10 interactions:\n>>         (0,): 1.696969079  # attribution of feature 0\n>>         (0, 5): 0.4847876\n>>         (0, 1): 0.4494288  # interaction between features 0 & 1\n>>         (0, 6): 0.4477677\n>>         (1, 5): 0.3750034\n>>         (4, 5): 0.3468325\n>>         (0, 3, 6): -0.320  # interaction between features 0 & 3 & 6\n>>         (2, 3, 6): -0.329\n>>         (0, 1, 5): -0.363\n>>         (6,): -0.56358890\n>> )\n```\n\n### Compute Shapley values like you are used to with SHAP\n\nIf you are used to working with SHAP, you can also compute Shapley values with `shapiq` the same way:\nYou can load your data and model, and then 
use the `shapiq.Explainer` to compute Shapley values.\nIf you set the index to ``'SV'``, you will get the Shapley values as you know them from SHAP.\n\n```python\nimport shapiq\n\ndata, model = ...  # get your data and model\nexplainer = shapiq.Explainer(\n    model=model,\n    data=data,\n    index=\"SV\",  # Shapley values\n)\nshapley_values = explainer.explain(data[0])\nshapley_values.plot_force(feature_names=...)\n```\n\nOnce you have the Shapley values, you can easily compute Interaction values as well:\n\n```python\nexplainer = shapiq.Explainer(\n    model=model,\n    data=data,\n    index=\"k-SII\",  # k-SII interaction values\n    max_order=2     # specify any order you want\n)\ninteraction_values = explainer.explain(data[0])\ninteraction_values.plot_force(feature_names=...)\n```\n\n\u003Cp align=\"center\">\n  \u003Cimg width=\"800px\" src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fmmschlk_shapiq_readme_c5f9ec915bcb.png\" alt=\"An example Force Plot for the California Housing Dataset with Shapley Interactions\">\n\u003C\u002Fp>\n\n### Use ProxySPEX (Proxy SParse EXplainer) \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fmmschlk_shapiq_readme_30ba8ecb03f6.png\" alt=\"spex_logo\" align=\"right\" height=\"75px\"\u002F>\nFor large-scale use-cases you can also check out the [👓``ProxySPEX``](https:\u002F\u002Fshapiq.readthedocs.io\u002Fen\u002Flatest\u002Fapi\u002Fshapiq.approximator.sparse.html#shapiq.approximator.sparse.SPEX) approximator.\n\n```python\n# load your data and model with large number of features\ndata, model, n_features = ...\n\n# use the ProxySPEX approximator directly\napproximator = shapiq.ProxySPEX(n=n_features, index=\"FBII\", max_order=2)\nfbii_scores = approximator.approximate(budget=2000, game=model.predict)\n\n# or use ProxySPEX with an explainer\nexplainer = shapiq.Explainer(\n    model=model,\n    data=data,\n    index=\"FBII\",\n    max_order=2,\n    approximator=\"proxyspex\"  # specify ProxySPEX as 
approximator\n)\nexplanation = explainer.explain(data[0])\n```\n\n### Visualize feature interactions\n\nA handy way of visualizing interaction scores up to order 2 is a network plot.\nYou can see an example of such a plot below.\nThe nodes represent feature **attributions** and the edges represent the **interactions** between features.\nThe strength and size of the nodes and edges are proportional to the absolute value of attributions and interactions, respectively.\n\n```python\nshapiq.network_plot(\n    first_order_values=interaction_values.get_n_order_values(1),\n    second_order_values=interaction_values.get_n_order_values(2)\n)\n# or use\ninteraction_values.plot_network()\n```\n\nThe code above can produce the following plot:\n\n\u003Cp align=\"center\">\n  \u003Cimg width=\"500px\" src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fmmschlk_shapiq_readme_475305b0e797.png\" alt=\"network_plot_example\">\n\u003C\u002Fp>\n\n### Explain TabPFN\n\nWith ``shapiq`` you can also explain [``TabPFN``](https:\u002F\u002Fgithub.com\u002FPriorLabs\u002FTabPFN) by making use of the _remove-and-recontextualize_ explanation paradigm implemented in ``shapiq.TabPFNExplainer``.\n\n```python\nimport tabpfn, shapiq\ndata, labels = ...                    
# load your data\nmodel = tabpfn.TabPFNClassifier()     # get TabPFN\nmodel.fit(data, labels)               # \"fit\" TabPFN (optional)\nexplainer = shapiq.TabPFNExplainer(   # setup the explainer\n    model=model,\n    data=data,\n    labels=labels,\n    index=\"FSII\"\n)\nfsii_values = explainer.explain(data[0])  # explain with Faithful Shapley values\nfsii_values.plot_force()               # plot the force plot\n```\n\n\u003Cp align=\"center\">\n  \u003Cimg width=\"800px\" src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fmmschlk_shapiq_readme_aa036731a719.png\" alt=\"Force Plot of FSII values\">\n\u003C\u002Fp>\n\n\n## 📖 Documentation with tutorials\nThe documentation of ``shapiq`` can be found at https:\u002F\u002Fshapiq.readthedocs.io.\nIf you are new to Shapley values or Shapley interactions, we recommend starting with the [introduction](https:\u002F\u002Fshapiq.readthedocs.io\u002Fen\u002Flatest\u002Fintroduction\u002F) and the [examples & tutorials](https:\u002F\u002Fshapiq.readthedocs.io\u002Fen\u002Flatest\u002Fauto_examples\u002Findex.html).\nThere are many great resources available to get you started with Shapley values and interactions.\n\n## 💬 Citation\n\nIf you use ``shapiq`` and enjoy it, please consider citing our [NeurIPS paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2410.01649) or starring this repository.\n\n```bibtex\n@inproceedings{Muschalik.2024b,\n  title     = {shapiq: Shapley Interactions for Machine Learning},\n  author    = {Maximilian Muschalik and Hubert Baniecki and Fabian Fumagalli and\n               Patrick Kolpaczki and Barbara Hammer and Eyke H\\\"{u}llermeier},\n  booktitle = {The Thirty-eighth Conference on Neural Information Processing Systems Datasets and Benchmarks Track},\n  year      = {2024},\n  url       = {https:\u002F\u002Fopenreview.net\u002Fforum?id=knxGmi6SJi}\n}\n```\n\n## 📦 Contributing\nWe welcome any kind of contributions to `shapiq`!\nIf you are interested in contributing, please check 
out our [contributing guidelines](https:\u002F\u002Fgithub.com\u002Fmmschlk\u002Fshapiq\u002Fblob\u002Fmain\u002F.github\u002FCONTRIBUTING.md).\nIf you have any questions, feel free to reach out to us.\nWe are tracking our progress via a [project board](https:\u002F\u002Fgithub.com\u002Fusers\u002Fmmschlk\u002Fprojects\u002F4) and the [issues](https:\u002F\u002Fgithub.com\u002Fmmschlk\u002Fshapiq\u002Fissues) section.\nIf you find a bug or have a feature request, please open an issue or help us fix it by opening a pull request.\n\n## 📜 License\nThis project is licensed under the [MIT License](https:\u002F\u002Fgithub.com\u002Fmmschlk\u002Fshapiq\u002Fblob\u002Fmain\u002FLICENSE).\n\n## 💰 Funding\nThis work is openly available under the MIT license.\nSome authors acknowledge the financial support from the German Research Foundation (DFG) under grant number TRR 318\u002F1 2021 – 438445824.\n\n---\nBuilt with ❤️ by the shapiq team.\n","# shapiq: 用于机器学习的夏普利交互 \u003Cimg src=\"https:\u002F\u002Fraw.githubusercontent.com\u002Fmmschlk\u002Fshapiq\u002Fmain\u002Fdocs\u002Fsource\u002F_static\u002Flogo\u002Flogo_shapiq_light.svg\" alt=\"shapiq_logo\" align=\"right\" height=\"250px\"\u002F>\n\n[![PyPI版本](https:\u002F\u002Fbadge.fury.io\u002Fpy\u002Fshapiq.svg)](https:\u002F\u002Fbadge.fury.io\u002Fpy\u002Fshapiq)\n[![许可证](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FLicense-MIT-brightgreen.svg)](https:\u002F\u002Fopensource.org\u002Flicenses\u002FMIT)\n[![Codecov](https:\u002F\u002Fcodecov.io\u002Fgh\u002Fmmschlk\u002Fshapiq\u002Fbranch\u002Fmain\u002Fgraph\u002Fbadge.svg)](https:\u002F\u002Fcodecov.io\u002Fgh\u002Fmmschlk\u002Fshapiq)\n[![测试](https:\u002F\u002Fgithub.com\u002Fmmschlk\u002Fshapiq\u002Factions\u002Fworkflows\u002Fci.yml\u002Fbadge.svg)](https:\u002F\u002Fgithub.com\u002Fmmschlk\u002Fshapiq\u002Fblob\u002Fmain\u002F.github\u002Fworkflows\u002Fci.yml)\n[![Read the 
Docs](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fmmschlk_shapiq_readme_13d664e1afd7.png)](https:\u002F\u002Fshapiq.readthedocs.io\u002Fen\u002Flatest\u002F?badge=latest)\n\n[![PyPI版本](https:\u002F\u002Fimg.shields.io\u002Fpypi\u002Fpyversions\u002Fshapiq.svg)](https:\u002F\u002Fpypi.org\u002Fproject\u002Fshapiq)\n[![PyPI状态](https:\u002F\u002Fimg.shields.io\u002Fpypi\u002Fstatus\u002Fshapiq.svg?color=blue)](https:\u002F\u002Fpypi.org\u002Fproject\u002Fshapiq)\n[![PePy](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fmmschlk_shapiq_readme_764c0a45113d.png)](https:\u002F\u002Fpepy.tech\u002Fproject\u002Fshapiq)\n\n[![代码风格](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002Fcode%20style-black-000000.svg)](https:\u002F\u002Fgithub.com\u002Fpsf\u002Fblack)\n[![欢迎贡献](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002Fcontributions-welcome-brightgreen)](https:\u002F\u002Fgithub.com\u002Fmmschlk\u002Fshapiq\u002Fissues)\n[![最近提交](https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Flast-commit\u002Fmmschlk\u002Fshapiq)](https:\u002F\u002Fgithub.com\u002Fmmschlk\u002Fshapiq\u002Fcommits\u002Fmain)\n\n> 一次交互可能胜过千次主效应。\n\nShapley Interaction Quantification (`shapiq`) 是一个 Python 包，用于 (1) 近似任意阶的夏普利交互值，(2) 对机器学习中的博弈论算法进行基准测试，(3) 解释模型预测中的特征交互。`shapiq` 扩展了广为人知的 [shap](https:\u002F\u002Fgithub.com\u002Fshap\u002Fshap) 包，既适用于从事机器学习中博弈论研究的研究人员，也适用于需要解释模型的最终用户。SHAP-IQ 在个体夏普利值的基础上，进一步量化了实体（即博弈论术语中的“玩家”）之间的**协同效应**，例如解释性特征、数据点或集成模型中的弱学习器。玩家之间的协同作用能够提供对机器学习模型更为全面的理解。\n\n## 🛠️ 安装\n`shapiq` 旨在与 **Python 3.12 及以上版本** 一起使用。\n可以通过 `uv` 进行安装：\n```sh\nuv add shapiq\n```\n\n或者通过 `pip`：\n\n```sh\npip install shapiq\n```\n\n## 👀 即将推出的功能\n请在我们的 [GitHub 项目板](https:\u002F\u002Fgithub.com\u002Fusers\u002Fmmschlk\u002Fprojects\u002F4) 上查看该库的未来规划。我们在此处规划并跟踪即将推出的功能、改进和维护任务，包括新的解释器、性能优化以及更广泛的模型支持。\n\n## ⭐ 快速入门\n\n您可以使用 `shapiq.explainer` 解释您的模型，并用 `shapiq.plot` 可视化夏普利交互值。如果您对底层的博弈论算法感兴趣，请查看 `shapiq.approximator` 和 `shapiq.games` 模块。\n\n### 计算任意阶的特征交互\n\n使用夏普利交互来解释您的模型：\n只需加载数据和模型，然后使用 
`shapiq.Explainer` 计算夏普利交互。\n\n```python\nimport shapiq\n# 加载数据\nX, y = shapiq.load_california_housing(to_numpy=True)\n# 训练模型\nfrom sklearn.ensemble import RandomForestRegressor\nmodel = RandomForestRegressor()\nmodel.fit(X, y)\n# 设置一个计算 k-SII 交互值、最高阶数为 4 的解释器\nexplainer = shapiq.TabularExplainer(\n    model=model,\n    data=X,\n    index=\"k-SII\",\n    max_order=4\n)\n# 解释模型对第一个样本的预测\ninteraction_values = explainer.explain(X[0], budget=256)\n# 分析交互值\nprint(interaction_values)\n\n>> InteractionValues(\n>>     index=k-SII, max_order=4, min_order=0, estimated=False,\n>>     estimation_budget=256, n_players=8, baseline_value=2.07282292,\n>>     前 10 个交互：\n>>         (0,): 1.696969079  # 特征 0 的归因\n>>         (0, 5): 0.4847876\n>>         (0, 1): 0.4494288  # 特征 0 和 1 之间的交互\n>>         (0, 6): 0.4477677\n>>         (1, 5): 0.3750034\n>>         (4, 5): 0.3468325\n>>         (0, 3, 6): -0.320  # 特征 0、3 和 6 之间的交互\n>>         (2, 3, 6): -0.329\n>>         (0, 1, 5): -0.363\n>>         (6,): -0.56358890\n>> )\n```\n\n### 计算您熟悉的 SHAP 式夏普利值\n\n如果您习惯使用 SHAP，也可以用 `shapiq` 以相同的方式计算夏普利值：\n您可以加载数据和模型，然后使用 `shapiq.Explainer` 计算夏普利值。如果将索引设置为 ``'SV'``，您将得到与 SHAP 中相同的夏普利值。\n\n```python\nimport shapiq\n\ndata, model = ...  
# 获取您的数据和模型\nexplainer = shapiq.Explainer(\n    model=model,\n    data=data,\n    index=\"SV\",  # 夏普利值\n)\nshapley_values = explainer.explain(data[0])\nshapley_values.plot_force(feature_names=...)\n```\n\n一旦您获得了夏普利值，也可以轻松计算交互值：\n\n```python\nexplainer = shapiq.Explainer(\n    model=model,\n    data=data,\n    index=\"k-SII\",  # k-SII 交互值\n    max_order=2     # 指定您想要的任何阶数\n)\ninteraction_values = explainer.explain(data[0])\ninteraction_values.plot_force(feature_names=...)\n```\n\n\u003Cp align=\"center\">\n  \u003Cimg width=\"800px\" src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fmmschlk_shapiq_readme_c5f9ec915bcb.png\" alt=\"加利福尼亚住房数据集的夏普利交互力图示例\">\n\u003C\u002Fp>\n\n### 使用 ProxySPEX（代理稀疏解释器）\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fmmschlk_shapiq_readme_30ba8ecb03f6.png\" alt=\"spex_logo\" align=\"right\" height=\"75px\"\u002F>\n对于大规模应用场景，您还可以尝试 [👓``ProxySPEX``](https:\u002F\u002Fshapiq.readthedocs.io\u002Fen\u002Flatest\u002Fapi\u002Fshapiq.approximator.sparse.html#shapiq.approximator.sparse.SPEX) 近似器。\n\n```python\n# 加载具有大量特征的数据和模型\ndata, model, n_features = ...\n\n# 直接使用 ProxySPEX 近似器\napproximator = shapiq.ProxySPEX(n=n_features, index=\"FBII\", max_order=2)\nfbii_scores = approximator.approximate(budget=2000, game=model.predict)\n\n# 或者结合 ProxySPEX 与解释器使用\nexplainer = shapiq.Explainer(\n    model=model,\n    data=data,\n    index=\"FBII\",\n    max_order=2,\n    approximator=\"proxyspex\"  # 指定 ProxySPEX 作为近似器\n)\nexplanation = explainer.explain(data[0])\n```\n\n### 可视化特征交互\n\n可视化二阶及以下交互得分的一种便捷方式是使用网络图。下面是一个此类图的示例。节点代表特征的**归因值**，边则表示特征之间的**交互作用**。节点和边的强度与大小分别与归因值和交互作用的绝对值成正比。\n\n```python\nshapiq.network_plot(\n    first_order_values=interaction_values.get_n_order_values(1),\n    second_order_values=interaction_values.get_n_order_values(2)\n)\n# 或者使用\ninteraction_values.plot_network()\n```\n\n上述代码可以生成如下图表：\n\n\u003Cp align=\"center\">\n  \u003Cimg width=\"500px\" 
src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fmmschlk_shapiq_readme_475305b0e797.png\" alt=\"network_plot_example\">\n\u003C\u002Fp>\n\n### 解释 TabPFN\n\n借助 `shapiq`，您还可以通过使用 `shapiq.TabPFNExplainer` 中实现的“移除并重新上下文化”解释范式来解释 [``TabPFN``](https:\u002F\u002Fgithub.com\u002FPriorLabs\u002FTabPFN)。\n\n```python\nimport tabpfn, shapiq\ndata, labels = ...                    # 加载您的数据\nmodel = tabpfn.TabPFNClassifier()     # 获取 TabPFN\nmodel.fit(data, labels)               # “拟合” TabPFN（可选）\nexplainer = shapiq.TabPFNExplainer(   # 设置解释器\n    model=model,\n    data=data,\n    labels=labels,\n    index=\"FSII\"\n)\nfsii_values = explainer.explain(data[0])  # 使用忠实 Shapley 值进行解释\nfsii_values.plot_force()               # 绘制力图\n```\n\n\u003Cp align=\"center\">\n  \u003Cimg width=\"800px\" src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fmmschlk_shapiq_readme_aa036731a719.png\" alt=\"FSII 值的力图\">\n\u003C\u002Fp>\n\n\n## 📖 包含教程的文档\n`shapiq` 的文档可在 https:\u002F\u002Fshapiq.readthedocs.io 上找到。如果您对 Shapley 值或 Shapley 交互还不熟悉，建议从 [简介](https:\u002F\u002Fshapiq.readthedocs.io\u002Fen\u002Flatest\u002Fintroduction\u002F) 和 [示例与教程](https:\u002F\u002Fshapiq.readthedocs.io\u002Fen\u002Flatest\u002Fauto_examples\u002Findex.html) 开始阅读。这里有许多优质资源可以帮助您快速入门 Shapley 值和交互。\n\n## 💬 引用\n\n如果您使用了 `shapiq` 并对其感到满意，请考虑引用我们的 [NeurIPS 论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F2410.01649)，或者为本仓库点个赞。\n\n```bibtex\n@inproceedings{Muschalik.2024b,\n  title     = {shapiq: Shapley Interactions for Machine Learning},\n  author    = {Maximilian Muschalik and Hubert Baniecki and Fabian Fumagalli and\n               Patrick Kolpaczki and Barbara Hammer and Eyke H\\\"{u}llermeier},\n  booktitle = {The Thirty-eighth Conference on Neural Information Processing Systems Datasets and Benchmarks Track},\n  year      = {2024},\n  url       = {https:\u002F\u002Fopenreview.net\u002Fforum?id=knxGmi6SJi}\n}\n```\n\n## 📦 贡献\n我们欢迎对 `shapiq` 的任何形式的贡献！如果您有兴趣参与贡献，请查看我们的 [贡献指南](https:\u002F\u002Fgithub.com\u002Fmmschlk\u002Fshapiq\u002Fblob\u002Fmain\u002F.github\u002FCONTRIBUTING.md)。如有任何问题，欢迎随时与我们联系。我们通过 
[项目看板](https:\u002F\u002Fgithub.com\u002Fusers\u002Fmmschlk\u002Fprojects\u002F4) 和 [问题](https:\u002F\u002Fgithub.com\u002Fmmschlk\u002Fshapiq\u002Fissues) 部分跟踪进展。如果您发现错误或有功能请求，请提交一个问题，或者通过打开拉取请求来帮助我们修复。\n\n## 📜 许可证\n本项目采用 [MIT 许可证](https:\u002F\u002Fgithub.com\u002Fmmschlk\u002Fshapiq\u002Fblob\u002Fmain\u002FLICENSE) 许可。\n\n## 💰 资助\n这项工作以 MIT 许可证公开发布。部分作者感谢德国研究基金会 (DFG) 在编号 TRR 318\u002F1 2021 – 438445824 的资助下提供的财政支持。\n\n---\n由 shapiq 团队用心打造。","# shapiq 快速上手指南\n\n`shapiq` 是一个用于机器学习的 Shapley 交互量化工具包。它不仅扩展了经典的 SHAP 库，还能量化特征、数据点或弱学习器之间的**协同效应（Synergy）**，提供比单一 Shapley 值更全面的模型解释视角。\n\n## 环境准备\n\n*   **操作系统**：Linux, macOS, Windows\n*   **Python 版本**：要求 **Python 3.12** 及以上版本\n*   **前置依赖**：\n    *   推荐安装 `scikit-learn` 用于构建示例模型。\n    *   若需可视化网络图，建议安装 `networkx` 和 `matplotlib`。\n\n## 安装步骤\n\n你可以选择使用 `pip` 或现代化的 `uv` 工具进行安装。\n\n### 方式一：使用 pip (推荐)\n\n```bash\npip install shapiq\n```\n\n> **国内加速提示**：如果遇到下载速度慢的问题，可以使用清华或阿里云镜像源：\n> ```bash\n> pip install shapiq -i https:\u002F\u002Fpypi.tuna.tsinghua.edu.cn\u002Fsimple\n> ```\n\n### 方式二：使用 uv\n\n如果你已经安装了 `uv` 包管理器：\n\n```bash\nuv add shapiq\n```\n\n## 基本使用\n\n以下示例演示如何加载数据、训练模型，并计算高阶特征交互值（Shapley Interactions）。\n\n### 1. 计算任意阶特征交互\n\n此示例展示如何使用 `TabularExplainer` 计算高达 4 阶的 k-SII 交互值。\n\n```python\nimport shapiq\nfrom sklearn.ensemble import RandomForestRegressor\n\n# 1. 加载数据 (内置加州房价数据集)\nX, y = shapiq.load_california_housing(to_numpy=True)\n\n# 2. 训练模型\nmodel = RandomForestRegressor()\nmodel.fit(X, y)\n\n# 3. 设置解释器\n# index=\"k-SII\": 使用 k-Shapley Interaction Index\n# max_order=4: 计算最高 4 阶的交互作用\nexplainer = shapiq.TabularExplainer(\n    model=model,\n    data=X,\n    index=\"k-SII\",\n    max_order=4\n)\n\n# 4. 解释第一个样本的预测结果\n# budget=256: 限制评估次数以加快近似速度\ninteraction_values = explainer.explain(X[0], budget=256)\n\n# 5. 
查看结果\nprint(interaction_values)\n```\n\n**输出示例：**\n```text\nInteractionValues(\n    index=k-SII, max_order=4, min_order=0, estimated=False,\n    estimation_budget=256, n_players=8, baseline_value=2.07282292,\n    Top 10 interactions:\n        (0,): 1.696969079  # 特征 0 的独立贡献\n        (0, 5): 0.4847876\n        (0, 1): 0.4494288  # 特征 0 和 1 之间的交互作用\n        (0, 6): 0.4477677\n        (1, 5): 0.3750034\n        ...\n)\n```\n\n### 2. 兼容传统 SHAP 用法\n\n如果你习惯使用 SHAP，`shapiq` 也支持计算标准的 Shapley 值（SV），并可无缝切换至交互值计算。\n\n```python\nimport shapiq\n\n# 假设已有 data 和 model\n# data, model = ... \n\n# 计算标准 Shapley 值\nexplainer_sv = shapiq.Explainer(\n    model=model,\n    data=data,\n    index=\"SV\",  # 指定为 Shapley Values\n)\nshapley_values = explainer_sv.explain(data[0])\n# shapley_values.plot_force(feature_names=...)\n\n# 计算二阶交互值\nexplainer_si = shapiq.Explainer(\n    model=model,\n    data=data,\n    index=\"k-SII\", \n    max_order=2     # 指定计算 2 阶交互\n)\ninteraction_values = explainer_si.explain(data[0])\n# interaction_values.plot_force(feature_names=...)\n```\n\n### 3. 可视化交互关系\n\n对于 2 阶及以下的交互值，可以使用网络图直观展示特征归因（节点）与特征间交互（边）。\n\n```python\n# 方法一：直接使用 plot_network\ninteraction_values.plot_network()\n\n# 方法二：手动提取不同阶数的值绘图\nshapiq.network_plot(\n    first_order_values=interaction_values.get_n_order_values(1),\n    second_order_values=interaction_values.get_n_order_values(2)\n)\n```\n\n### 4. 
高级场景：大规模数据与 TabPFN\n\n*   **大规模数据**：对于特征数量巨大的场景，推荐使用 `ProxySPEX` 近似器以提高效率。\n    ```python\n    approximator = shapiq.ProxySPEX(n=n_features, index=\"FBII\", max_order=2)\n    fbii_scores = approximator.approximate(budget=2000, game=model.predict)\n    ```\n\n*   **解释 TabPFN 模型**：`shapiq` 原生支持解释 `TabPFN` 模型。\n    ```python\n    import tabpfn, shapiq\n    \n    model = tabpfn.TabPFNClassifier()\n    model.fit(data, labels)\n    \n    explainer = shapiq.TabPFNExplainer(\n        model=model,\n        data=data,\n        labels=labels,\n        index=\"FSII\"\n    )\n    fsii_values = explainer.explain(data[0])\n    fsii_values.plot_force()\n    ```\n\n更多详细教程和 API 文档请访问：[https:\u002F\u002Fshapiq.readthedocs.io](https:\u002F\u002Fshapiq.readthedocs.io)","某金融风控团队正在优化房贷违约预测模型，急需深入理解特征间复杂的非线性关系以提升模型可解释性。\n\n### 没有 shapiq 时\n- 只能依赖传统的 SHAP 值分析单个特征的贡献，完全忽略了“收入”与“负债率”等特征组合产生的协同效应。\n- 面对模型对特定高风险群体的误判，无法定位是哪几个特征相互作用导致了异常预测，排查如同大海捞针。\n- 试图手动计算高阶交互项时，面临指数级增长的计算复杂度，导致分析过程耗时数天且结果往往不收敛。\n- 向业务部门汇报时，仅能展示孤立的特征重要性图表，难以解释为何某些看似低风险的客户组合会被模型标记为高危。\n\n### 使用 shapiq 后\n- 利用 `TabularExplainer` 直接计算高达 4 阶的 k-SII 交互值，精准量化了“年龄”与“信用历史长度”之间的协同风险贡献。\n- 通过 `interaction_values` 快速锁定导致误判的具体特征组合（如“自由职业”叠加“短期居住”），将归因分析时间从数天缩短至分钟级。\n- 借助内置的近似算法，在有限的计算预算（budget）内高效获取任意阶次的交互指标，无需担心计算资源爆炸。\n- 结合 `shapiq.plot` 生成直观的交互热力图，向业务方清晰展示了多特征耦合如何共同推高违约概率，显著提升了信任度。\n\nshapiq 通过量化特征间的“协同效应”，让黑盒模型中隐藏的复杂逻辑变得透明且可操作。","https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fmmschlk_shapiq_c5f9ec91.png","mmschlk","Maximilian","https:\u002F\u002Foss.gittoolsai.com\u002Favatars\u002Fmmschlk_c27eadae.png","PhD student at LMU Munich","AIML@LMU","Germany",null,"https:\u002F\u002Fmaxmuschalik.com\u002F","https:\u002F\u002Fgithub.com\u002Fmmschlk",[83,87],{"name":84,"color":85,"percentage":86},"Python","#3572A5",89.9,{"name":88,"color":89,"percentage":90},"C++","#f34b7d",10.1,716,51,"2026-04-16T21:02:10","MIT","","未说明",{"notes":98,"python":99,"dependencies":100},"该工具是一个用于计算机器学习中 Shapley 交互值的 Python 包，扩展了知名的 shap 库。支持通过 uv 或 pip 
安装。文档中提到可解释 TabPFN 模型及使用 ProxySPEX 进行大规模场景的近似计算，但未明确列出具体的深度学习框架（如 PyTorch）作为硬性依赖，具体依赖可能随所解释的模型类型而变化。","3.12+",[101],"shap",[14,45],[104,105,106,107,108,101,109,65,110,111,112,113,114,115,116],"python","feature-interactions","explainability","interpretability","machine-learning","shapley","shapley-interactions","banzhaf-index","explainable-ai","feature-importance","game-theory","interpretable-machine-learning","feature-attribution","2026-03-27T02:49:30.150509","2026-04-20T04:04:29.132805",[],[121,126,131,136,141,146,151,156,161,166,171,176,181,186,191,196,201,206],{"id":122,"version":123,"summary_zh":124,"released_at":125},343265,"v1.4.1","## 修复 bug\n\n- 修复了 ProxySPEX 中的一个 bug，该 bug 导致 baseline_value 被设置为错误的 ID，而非空联盟的正确得分。https:\u002F\u002Fgithub.com\u002Fmmschlk\u002Fshapiq\u002Fissues\u002F469\n- 修复了 shapiq 的构建流程，使其能够正确地将所有测试、基准测试、文档等文件排除在构建包之外。https:\u002F\u002Fgithub.com\u002Fmmschlk\u002Fshapiq\u002Fissues\u002F464\n\n**完整变更日志**: https:\u002F\u002Fgithub.com\u002Fmmschlk\u002Fshapiq\u002Fcompare\u002Fv1.4.0...v1.4.1","2025-11-10T19:41:25",{"id":127,"version":128,"summary_zh":129,"released_at":130},343266,"v1.4.0","## 引入 ProxySPEX [#442](https:\u002F\u002Fgithub.com\u002Fmmschlk\u002Fshapiq\u002Fpull\u002F442)\n新增了 [`ProxySPEX`](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2505.17495) [近似器](https:\u002F\u002Fgithub.com\u002Fmmschlk\u002Fshapiq\u002Fblob\u002Fmain\u002Fsrc\u002Fshapiq\u002Fapproximator\u002Fsparse\u002Fproxyspex.py)，用于基于新的 ProxySPEX 算法高效计算稀疏交互值。ProxySPEX 是 [SPEX](https:\u002F\u002Fopenreview.net\u002Fpdf?id=UQpYmaBGwB) 算法的直接扩展，该算法利用对值函数的巧妙傅里叶表示与分析，识别出在 `Moebius` 系数意义上最相关的交互作用，并将其转化为汇总得分（Shapley 交互值）。与 SPEX 相比，ProxySPEX 的关键创新之一是使用一个代理模型来近似原始值函数（内部采用 LightGBM 模型）。**值得注意的是，** 要运行 ProxySPEX，用户必须在其环境中安装 `lightgbm` 包。更多细节请参阅论文，该论文将在 NeurIPS 2025 上发表：Butler, L., Kang, J.S., Agarwal, A., Erginbas, Y.E., Yu, Bin, Ramchandran, K. (2025). 
ProxySPEX: 基于大语言模型中稀疏特征交互的高效推理可解释性。[arxiv](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2505.17495)\n\n## 引入 ProductKernelExplainer [#431](https:\u002F\u002Fgithub.com\u002Fmmschlk\u002Fshapiq\u002Fpull\u002F431)\n`ProductKernelExplainer` 是一种针对使用乘积核的机器学习模型（如高斯过程和支持向量机）的新型模型特定解释方法。与 TreeExplainer 类似，它采用一种特殊的计算方案，利用底层乘积核的结构高效地计算精确的 Shapley 值。**请注意，** 此解释器目前仅能计算 Shapley 值（尚不支持高阶交互作用）。更多详情请参阅论文：Mohammadi, M., Chau, S.-L., Muandet, K. 乘积核方法在多项式时间内的精确 Shapley 值计算。[arxiv](https:\u002F\u002Farxiv.org\u002Fabs\u002F2505.16516)\n\n## 新的条件插补方法 [#435](https:\u002F\u002Fgithub.com\u002Fmmschlk\u002Fshapiq\u002Fpull\u002F435)\n我们基于传统统计方法，在 `shapiq.imputer` 模块中实现了两种新的条件插补方法，分别命名为 `GaussianImputer` 和 `GaussianCopulaImputer`。\n这两种插补方法旨在以尊重数据底层分布的方式处理缺失特征的插补问题，其假设是数据服从多元正态分布（`GaussianImputer`）或可用高斯 copula 表示（`GaussianCopulaImputer`）。\n在实际应用中，这一假设往往难以满足，但这些方法在许多场景下仍能提供合理的插补结果，并可作为有用的基准，从而促进 Shapley 值解释领域中条件插补研究的开展。\n\n## Shapiq 实现静态类型检查 [#430](https:\u002F\u002Fgithub.com\u002Fmmschlk\u002Fshapiq\u002Fpull\u002F430)\n我们已通过 [Pyright](https:\u002F\u002Fgithub.com\u002Fmicrosoft\u002Fpyright) 为 `shapiq` 引入了静态类型检查。","2025-10-31T10:07:42",{"id":132,"version":133,"summary_zh":134,"released_at":135},343267,"v1.3.2","## 修复：移除 override 导入  \n在表格解释器中，曾导入了 `overrides` 包，而该包并未列在 `shapiq` 的依赖项中。这导致用户无法安装和运行 `shapiq`。由于 `overrides` 语句仅被使用一次，且其作用仅为提升代码质量的小改进，我们决定删除该导入，并发布此修复版本。\n\n**完整变更日志**：https:\u002F\u002Fgithub.com\u002Fmmschlk\u002Fshapiq\u002Fcompare\u002Fv1.3.1...v1.3.2","2025-10-14T11:16:48",{"id":137,"version":138,"summary_zh":139,"released_at":140},343268,"v1.3.1","## 概述\n\n### 蜂群图\n\n此版本向 `shapiq` 添加了 `beeswarm_plot` 绘图功能，并扩展该功能以同时可视化特征交互。蜂群图有助于直观展示特征值与特征重要性之间的依赖关系。该绘图方法源自 SHAP 库，通过为每个交互项细分 y 轴来实现：\n- 一个交互项（例如“特征 A × 特征 B”）将在 y 轴上占据一个“区块”。\n- 该区块会进一步细分为多行：每行对应交互中的一个特征。\n- 对于交互 (A, B)：\n  * 将有一行专门用于特征 A。该行中每个点的 x 坐标表示该样本的交互值，而点的颜色则由特征 A 的取值决定。\n  * 另一行则用于特征 B。该行中各点的 x 坐标相同，但颜色由特征 B 的取值决定。\n\n此类图表的示例如下：\n\n\u003Cimg width=\"938\" height=\"780\" alt=\"image\" 
src=\"https:\u002F\u002Fgithub.com\u002Fuser-attachments\u002Fassets\u002Fc42c7b0b-daaf-4615-ae81-f17e58110549\" \u002F>\n\n### JSON 支持\n\n此版本还增加了对对象序列化的 JSON 支持。这对于保存和加载文件至关重要，因为 JSON 格式是人类可读的，即使以原始格式查看也能确保库的安全性。此外，这一改进还使我们能够在将来重构部分数据结构时，保留一个回退版本（即当前版本），以便将旧版值对象存储在新文件格式中。\n\n## 新贡献者\n* @annprzy 在 https:\u002F\u002Fgithub.com\u002Fmmschlk\u002Fshapiq\u002Fpull\u002F406 中做出了首次贡献。\n\n**完整变更日志**：https:\u002F\u002Fgithub.com\u002Fmmschlk\u002Fshapiq\u002Fcompare\u002Fv1.3.0...v1.3.1","2025-07-11T15:58:50",{"id":142,"version":143,"summary_zh":144,"released_at":145},343269,"v1.3.0","以下是 shapiq 1.3.0 版本的完整更新说明。\n\n## ✨亮点\n\n### SPEX（SParse EXplainer）由 @justinkang221 和 @landonbutler 提供 \u003Cimg src=\"https:\u002F\u002Fraw.githubusercontent.com\u002Fmmschlk\u002Fshapiq\u002Fmain\u002Fdocs\u002Fsource\u002F_static\u002Fimages\u002Fspex_logo.png\" alt=\"spex_logo\" align=\"right\" height=\"75px\"\u002F>\n`shapiq.SPEX`（稀疏精确）近似器，用于高效计算超大规模模型和博弈中的稀疏交互值。论文：[SPEX: 扩展大型语言模型的特征交互解释](https:\u002F\u002Farxiv.org\u002Fabs\u002F2502.13870)\n\n```python\n# 加载您的数据和具有大量特征的模型\ndata, model, n_features = ...\n\n# 直接使用 SPEX 近似器\napproximator = shapiq.SPEX(n=n_features, index=\"FBII\", max_order=2)\nfbii_scores = approximator.approximate(budget=2000, game=model.predict)\n\n# 或者将 SPEX 与解释器结合使用\nexplainer = shapiq.Explainer(\n    model=model,\n    data=data,\n    index=\"FBII\",\n    max_order=2,\n    approximator=\"spex\"  # 指定 SPEX 作为近似器\n)\nexplanation = explainer.explain(data[0])\n```\n\n\u003Cimg src=\"https:\u002F\u002Fgithub.com\u002Fuser-attachments\u002Fassets\u002Fdaabec50-eb2c-48c2-9d59-977c817129a2\" alt=\"spec_results\" width=\"400px\"\u002F>\n\n### AgnosticExplainer\n`shapiq.AgnosticExplainer` 是一个通用解释器，适用于任何值函数或 `shapiq.Game` 对象，从而提高了解释器的灵活性。\n\n```python\n# 获取您的博弈行为并将其传递给 AgnosticExplainer\nmy_logic = ...\n\ndef value_function(coalition: np.ndarray[bool]) -> np.ndarray[float]:\n    return my_logic(coalition)\n\nexplainer = shapiq.AgnosticExplainer(\n    game=value_function,\n    
index=\"FSII\",\n    max_order=2,\n    approximator=\"auto\"\n)\nexplanation = explainer.explain(budget=100)\n```\n\n## 完整变更日志\n\n### 新特性\n- 在 `approximator.sparse` 中新增 SPEX（稀疏精确）模块，用于高效计算稀疏交互值 [#379](https:\u002F\u002Fgithub.com\u002Fmmschlk\u002Fshapiq\u002Fpull\u002F379)\n- 新增 `shapiq.AgnosticExplainer`，这是一个通用解释器，可用于任何值函数或 `shapiq.Game` 对象。这使得解释器更加灵活。[#100](https:\u002F\u002Fgithub.com\u002Fmmschlk\u002Fshapiq\u002Fissues\u002F100), [#395](https:\u002F\u002Fgithub.com\u002Fmmschlk\u002Fshapiq\u002Fpull\u002F395)\n- 将 `budget` 改为必须传入 `TabularExplainer.explain()` 方法的参数 [#355](https:\u002F\u002Fgithub.com\u002Fmmschlk\u002Fshapiq\u002Fpull\u002F356)\n- 修改了 `InteractionValues.get_n_order()` 函数的逻辑，现在可以**仅使用** `order: int` 参数，并可选地指定 `min_order: int` 和 `max_order: int` 参数，**或者**直接使用 `min_order` 和 `max_order` 参数调用该函数 [#372](https:\u002F\u002Fgithub.com\u002Fmmschlk\u002Fshapiq\u002Fpull\u002F372)\n- 将力图中的 `min_percentage` 参数重命名为 `contribution_threshold`，以更准确地反映其用途 [#391](https:\u002F\u002Fgithub.com\u002Fmmschlk\u002Fshapiq\u002Fpull\u002F391)\n- 为 `Explainer` 的 `explain_X()` 方法新增了 `verbose` 参数，用于控制是否显示进度条，默认值为 `False`。","2025-06-17T08:59:59",{"id":147,"version":148,"summary_zh":149,"released_at":150},343270,"v.1.2.3","## 最酷的新功能\n- 回归计算速度大幅提升（之前我们实现的方式非常慢，但现在已经很高效了，详情见 https:\u002F\u002Fgithub.com\u002Fmmschlk\u002Fshapiq\u002Fissues\u002F340）\n- 引入 RegressionFBII 近似方法，现在也可以通过 `Regression` 近似器来计算 FBII 分数的值。\n\n## 变更内容\n* 由 @mmschlk 在 https:\u002F\u002Fgithub.com\u002Fmmschlk\u002Fshapiq\u002Fpull\u002F332 中改进了测试流程。\n* 由 @mmschlk 在 https:\u002F\u002Fgithub.com\u002Fmmschlk\u002Fshapiq\u002Fpull\u002F334 中添加了简单的 TreeExplainer 计算，并修复了 XGBoost 模型解析中的一个 bug。\n* 由 @Advueu963 在 https:\u002F\u002Fgithub.com\u002Fmmschlk\u002Fshapiq\u002Fpull\u002F335 中修复了 stacked_bar 图表的 bug。\n* 由 @mmschlk 在 https:\u002F\u002Fgithub.com\u002Fmmschlk\u002Fshapiq\u002Fpull\u002F337 中修复了 #336 问题。\n* 由 @Advueu963 在 https:\u002F\u002Fgithub.com\u002Fmmschlk\u002Fshapiq\u002Fpull\u002F333 中新增了 Faith 
Banzhaf 近似器。\n* 🚀 由 @mmschlk 在 https:\u002F\u002Fgithub.com\u002Fmmschlk\u002Fshapiq\u002Fpull\u002F341 中优化了 Regression 估计器的运行时性能。\n* 由 @hbaniecki 在 https:\u002F\u002Fgithub.com\u002Fmmschlk\u002Fshapiq\u002Fpull\u002F342 中修复了 regressionfbii 缺少的导入语句。\n* 由 @dependabot 在 https:\u002F\u002Fgithub.com\u002Fmmschlk\u002Fshapiq\u002Fpull\u002F246 中将 NumPy 从 1.26.4 升级到 2.1.2。\n* ⚒️ 由 @mmschlk 在 https:\u002F\u002Fgithub.com\u002Fmmschlk\u002Fshapiq\u002Fpull\u002F348 中减少了包依赖的冗余，并引入了 uv 工具。\n* 由 @mmschlk 在 https:\u002F\u002Fgithub.com\u002Fmmschlk\u002Fshapiq\u002Fpull\u002F350 中更新了测试结构，以使用 uv 工具。\n* 🏷️ 由 @mmschlk 在 https:\u002F\u002Fgithub.com\u002Fmmschlk\u002Fshapiq\u002Fpull\u002F351 中创建了 v1.2.3 版本发布。\n\n**完整变更日志**: https:\u002F\u002Fgithub.com\u002Fmmschlk\u002Fshapiq\u002Fcompare\u002Fv1.2.2...v1.2.3","2025-03-24T18:07:09",{"id":152,"version":153,"summary_zh":154,"released_at":155},343271,"v1.2.2","## 变更内容\n* 修复无写入权限时的包导入问题，由 @mmschlk 在 https:\u002F\u002Fgithub.com\u002Fmmschlk\u002Fshapiq\u002Fpull\u002F327 中完成\n* 将支持的 Python 版本更新为 3.10-3.13，由 @Advueu963 在 https:\u002F\u002Fgithub.com\u002Fmmschlk\u002Fshapiq\u002Fpull\u002F318 中完成\n* 将 Scikit-Learn 的 ExtraTreesRegressor 添加到 TreeExplainer 允许使用的模型列表中，由 @Deathn0t 在 https:\u002F\u002Fgithub.com\u002Fmmschlk\u002Fshapiq\u002Fpull\u002F309 中完成\n* 将网络图中的图例移开以避免重叠，由 @chenhao20241224 在 https:\u002F\u002Fgithub.com\u002Fmmschlk\u002Fshapiq\u002Fpull\u002F329 中完成\n* 使用 pip-all-updates 组合更新了 8 个依赖项，由 @dependabot 在 https:\u002F\u002Fgithub.com\u002Fmmschlk\u002Fshapiq\u002Fpull\u002F328 中完成\n\n## 新贡献者\n* @Deathn0t 在 https:\u002F\u002Fgithub.com\u002Fmmschlk\u002Fshapiq\u002Fpull\u002F309 中完成了首次贡献\n* @chenhao20241224 在 https:\u002F\u002Fgithub.com\u002Fmmschlk\u002Fshapiq\u002Fpull\u002F329 中完成了首次贡献\n\n**完整变更日志**: https:\u002F\u002Fgithub.com\u002Fmmschlk\u002Fshapiq\u002Fcompare\u002Fv1.2.1...v1.2.2","2025-03-11T13:32:50",{"id":157,"version":158,"summary_zh":159,"released_at":160},343272,"v1.2.1","本次发布包含若干错误修复和小幅改进。\n\n## 
变更内容\n* 修复当 LightGBM 分类模型中的某棵树满足 `n_features_in_tree = 1` 时，TreeExplainer 的实例化问题，由 @CharlesCousyn 在 https:\u002F\u002Fgithub.com\u002Fmmschlk\u002Fshapiq\u002Fpull\u002F310 中完成。\n* 将 pip-all-updates 组更新至 9 个新版本，由 @dependabot 在 https:\u002F\u002Fgithub.com\u002Fmmschlk\u002Fshapiq\u002Fpull\u002F312 中完成。\n* 修复条形图和力图的绘制问题，由 @Advueu963 在 https:\u002F\u002Fgithub.com\u002Fmmschlk\u002Fshapiq\u002Fpull\u002F317 中完成。\n* 将 pypa\u002Fgh-action-pypi-publish 从 1.12.3 升级到 1.12.4，由 @dependabot 在 https:\u002F\u002Fgithub.com\u002Fmmschlk\u002Fshapiq\u002Fpull\u002F311 中完成。\n* 修复 #324、#322 和 #319 相关问题，由 @hbaniecki 在 https:\u002F\u002Fgithub.com\u002Fmmschlk\u002Fshapiq\u002Fpull\u002F323 中完成。\n\n## 新贡献者\n* @CharlesCousyn 在 https:\u002F\u002Fgithub.com\u002Fmmschlk\u002Fshapiq\u002Fpull\u002F310 中完成了首次贡献。\n\n**完整变更日志**：https:\u002F\u002Fgithub.com\u002Fmmschlk\u002Fshapiq\u002Fcompare\u002Fv1.2.0...v1.2.1","2025-02-17T22:14:37",{"id":162,"version":163,"summary_zh":164,"released_at":165},343273,"v1.2.0","## 新特性\n- 添加了 ``shapiq.TabPFNExplainer``，作为 ``shapiq.TabularExplainer`` 的专用版本，为 TabPFN 模型提供了一种简化的解释器实现 [#301](https:\u002F\u002Fgithub.com\u002Fmmschlk\u002Fshapiq\u002Fissues\u002F301)\n- 在文档中新增了一个 TabPFN 示例笔记本\n- 在 `plot` 模块中添加了 `sentence_plot` 函数，用于以句子形式可视化单词对语言模型预测的贡献\n- 为解释器和树模型解释器增加了对 IsoForest 模型的支持 [#278](https:\u002F\u002Fgithub.com\u002Fmmschlk\u002Fshapiq\u002Fissues\u002F278)\n- 在 `plot` 模块中添加了 `upset_plot` 函数，用于可视化高阶交互作用 [#290](https:\u002F\u002Fgithub.com\u002Fmmschlk\u002Fshapiq\u002Fissues\u002F290)\n\n## API 改进\n- 现在所有解释器类都通过一个通用接口来处理 ``explainer.explain()`` 调用，这些类需要实现 ``explain_function()`` 方法\n- 当 ``min_order=0``（默认值）时，对于所有非 ``SII`` 的索引，在 ``()`` 交互项的 `InteractionValues` 对象中存储基线值（SII 具有单独的基线值），从而确保值的高效性（总和等于模型预测值），而无需再对 `baseline_value` 属性进行繁琐的处理\n- 将 ``shapiq.ExactComputer`` 中的 ``game_fun`` 参数重命名为 ``game`` [#297](https:\u002F\u002Fgithub.com\u002Fmmschlk\u002Fshapiq\u002Fissues\u002F297)\n- 对博弈论相关计算模块进行了重构，如 `ExactComputer`、`MoebiusConverter`、`core` 等，将其整合到 
`game_theory` 模块中，以提高模块化和灵活性 [#258](https:\u002F\u002Fgithub.com\u002Fmmschlk\u002Fshapiq\u002Fissues\u002F258)\n- 移除了在解释器中未提供 `class_index` 时的警告 [#298](https:\u002F\u002Fgithub.com\u002Fmmschlk\u002Fshapiq\u002Fissues\u002F298)\n- 使 `plot` 模块中的缩写选项变为可选 [#281](https:\u002F\u002Fgithub.com\u002Fmmschlk\u002Fshapiq\u002Fissues\u002F281)\n- 增加了对交互值数据类中玩家子集选择的支持 [#276](https:\u002F\u002Fgithub.com\u002Fmmschlk\u002Fshapiq\u002Fissues\u002F276)，允许仅获取部分玩家的交互值\n\n## 测试改进\n- 通过为不同的交互指数和计算方法添加更多语义测试，显著提升了测试质量 [#285](https:\u002F\u002Fgithub.com\u002Fmmschlk\u002Fshapiq\u002Fpull\u002F285)\n\n## 新贡献者\n* @JuliaHerbinger 在 https:\u002F\u002Fgithub.com\u002Fmmschlk\u002Fshapiq\u002Fpull\u002F287 中做出了首次贡献\n* @r-visser 在 https:\u002F\u002Fgithub.com\u002Fmmschlk\u002Fshapiq\u002Fpull\u002F289 中做出了首次贡献\n* @heinzll 在 https:\u002F\u002Fgithub.com\u002Fmmschlk\u002Fshapiq\u002Fpull\u002F285 中做出了首次贡献\n\n**完整变更日志**: https:\u002F\u002Fgithub.com\u002Fmmschlk\u002Fshapiq\u002Fcompare\u002Fv1.1.1...v1.2.0","2025-01-15T07:41:29",{"id":167,"version":168,"summary_zh":169,"released_at":170},343274,"v1.1.1","## 改进与易用性提升\n- 为 `TabularExplainer` 和 `Explainer` 添加了 `class_index` 参数，用于指定分类模型中要解释的类别索引 [#271](https:\u002F\u002Fgithub.com\u002Fmmschlk\u002Fshapiq\u002Fissues\u002F271)（将 `TreeExplainer` 中的 `class_label` 参数重命名为 `class_index`）\n- 为 `Explainer` 增加了对 `PyTorch` 模型的支持 [#272](https:\u002F\u002Fgithub.com\u002Fmmschlk\u002Fshapiq\u002Fissues\u002F272)\n- 新增测试，用于比较 `shapiq` 计算的 Shapley 值与 `shap` 库计算结果的一致性\n- 新增测试，用于验证 `shapiq` 解释器在不同类型模型上的表现\n\n## 错误修复\n- 修复了一个 bug：`RandomForestClassifier` 模型无法与 `TreeExplainer` 配合使用 [#273](https:\u002F\u002Fgithub.com\u002Fmmschlk\u002Fshapiq\u002Fissues\u002F273)","2024-11-13T11:35:34",{"id":172,"version":173,"summary_zh":174,"released_at":175},343275,"v1.1.0","## New Features and Improvements\r\n- adds computation of the Egalitarian Core (`EC`) and Egalitarian Least-Core (`ELC`) to the `ExactComputer` 
[#182](https:\u002F\u002Fgithub.com\u002Fmmschlk\u002Fshapiq\u002Fissues\u002F182)\r\n- adds `waterfall_plot` [#34](https:\u002F\u002Fgithub.com\u002Fmmschlk\u002Fshapiq\u002Fissues\u002F34) that visualizes the contributions of features to the model prediction\r\n- adds `BaselineImputer` [#107](https:\u002F\u002Fgithub.com\u002Fmmschlk\u002Fshapiq\u002Fissues\u002F107), which is now responsible for handling the `sample_replacements` parameter. Added a DeprecationWarning for the parameter in `MarginalImputer`, which will be removed in the next release.\r\n- adds `joint_marginal_distribution` parameter to `MarginalImputer` with default value `True` [#261](https:\u002F\u002Fgithub.com\u002Fmmschlk\u002Fshapiq\u002Fissues\u002F261)\r\n- renames explanation graph to `si_graph`\r\n- `get_n_order` now has optional lower\u002Fupper limits for the order\r\n- computing metrics for benchmarking now tries to resolve non-matching interaction indices and will throw a warning instead of a ValueError [#179](https:\u002F\u002Fgithub.com\u002Fmmschlk\u002Fshapiq\u002Fissues\u002F179)\r\n- adds a legend to benchmark plots [#170](https:\u002F\u002Fgithub.com\u002Fmmschlk\u002Fshapiq\u002Fissues\u002F170)\r\n- refactored the `shapiq.games.benchmark` module into a separate `shapiq.benchmark` module by moving all but the benchmark games into the new module. 
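The imputers listed above turn a model plus an "absent features" policy into a coalition value function, which is what the approximators then query. A minimal stdlib-only sketch of the baseline-imputation idea; the helper name `make_baseline_game` and the linear toy model are hypothetical, not the shapiq API:

```python
def make_baseline_game(predict, x, baseline):
    """Build a coalition value function (BaselineImputer idea): features
    outside the coalition are replaced by fixed baseline values before
    the model is called."""
    n = len(x)

    def value(coalition):
        z = [x[k] if k in coalition else baseline[k] for k in range(n)]
        return predict(z)

    return value

# Hypothetical linear "model" and instance, for illustration only.
predict = lambda z: 2 * z[0] + 3 * z[1] - z[2]
game = make_baseline_game(predict, x=[1.0, 2.0, 3.0], baseline=[0.0, 0.0, 0.0])

assert game(frozenset()) == 0.0           # all features at the baseline
assert game(frozenset({0, 1, 2})) == 5.0  # all features at the instance
```

The `MarginalImputer` variant mentioned above follows the same pattern but averages the model output over background samples for the absent features instead of using a single baseline point.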
This closes [#169](https:\u002F\u002Fgithub.com\u002Fmmschlk\u002Fshapiq\u002Fissues\u002F169) and makes benchmarking more flexible and convenient.\r\n- a `shapiq.Game` can now be called more intuitively with coalition data types (tuples of int or str) and also allows adding `player_names` to the game at initialization [#183](https:\u002F\u002Fgithub.com\u002Fmmschlk\u002Fshapiq\u002Fissues\u002F183)\r\n- improves tests across the package\r\n\r\n## Documentation\r\n- adds a notebook showing how to use custom tree models with the `TreeExplainer` [#66](https:\u002F\u002Fgithub.com\u002Fmmschlk\u002Fshapiq\u002Fissues\u002F66)\r\n- adds a notebook showing how to use the `shapiq.Game` API to create custom games [#184](https:\u002F\u002Fgithub.com\u002Fmmschlk\u002Fshapiq\u002Fissues\u002F184)\r\n- adds a notebook showing how to visualize interactions [#252](https:\u002F\u002Fgithub.com\u002Fmmschlk\u002Fshapiq\u002Fissues\u002F252)\r\n- adds a notebook showing how to compute Shapley values with `shapiq` [#193](https:\u002F\u002Fgithub.com\u002Fmmschlk\u002Fshapiq\u002Fissues\u002F197)\r\n- adds a notebook for conducting data valuation [#190](https:\u002F\u002Fgithub.com\u002Fmmschlk\u002Fshapiq\u002Fissues\u002F190)\r\n- adds a notebook introducing the Core and showing how to compute it with `shapiq` [#191](https:\u002F\u002Fgithub.com\u002Fmmschlk\u002Fshapiq\u002Fissues\u002F191)\r\n\r\n## Bug Fixes\r\n- fixes a bug with SIs not adding up to the model prediction because of wrong values in the empty set [#264](https:\u002F\u002Fgithub.com\u002Fmmschlk\u002Fshapiq\u002Fissues\u002F264)\r\n- fixes a bug where `TreeExplainer` did not have the correct baseline_value when using XGBoost models [#250](https:\u002F\u002Fgithub.com\u002Fmmschlk\u002Fshapiq\u002Fissues\u002F250)\r\n- fixes the force plot not showing and its baseline value\r\n\r\n## New Contributors\r\n* @Advueu963 made their first contribution in 
https:\u002F\u002Fgithub.com\u002Fmmschlk\u002Fshapiq\u002Fpull\u002F185\r\n* @dependabot made their first contribution in https:\u002F\u002Fgithub.com\u002Fmmschlk\u002Fshapiq\u002Fpull\u002F200\r\n\r\n**Full Changelog**: https:\u002F\u002Fgithub.com\u002Fmmschlk\u002Fshapiq\u002Fcompare\u002Fv1.0.1...v1.1.0","2024-11-08T09:44:52",{"id":177,"version":178,"summary_zh":179,"released_at":180},343276,"v1.0.1","- add `max_order=1` to `TabularExplainer` and `TreeExplainer`\r\n- fix `TreeExplainer.explain_X(..., n_jobs=2, random_state=0)`","2024-06-05T13:42:37",{"id":182,"version":183,"summary_zh":184,"released_at":185},343277,"v1.0.0","Major release of the `shapiq` Python package including (among others):\r\n\r\n- `approximator` module implements over 10 approximators of Shapley values and interaction indices.\r\n- `exact` module implements a computer for over 10 game theoretic concepts like interaction indices or generalized values.\r\n- `games` module implements over 10 application benchmarks for the approximators.\r\n- `explainer` module includes a `TabularExplainer` and `TreeExplainer` for any-order feature interactions of machine learning model predictions.\r\n- `interaction_values` module implements a data class to store and analyze interaction values.\r\n- `plot` module allows visualizing interaction values.\r\n- `datasets` module loads datasets for testing and examples.\r\n\r\nDocumentation of `shapiq` with tutorials and API reference is available at https:\u002F\u002Fshapiq.readthedocs.io","2024-06-04T14:30:39",{"id":187,"version":188,"summary_zh":189,"released_at":190},343278,"v.0.0.6-alpha","## Highlights\r\nThe highlights of this release are as follows.\r\n\r\n### Random Forest Support for TreeExplainer\r\nWe add support for `RandomForestClassifier` and `RandomForestRegressor` as provided by `sklearn.ensemble` in the `TreeExplainer`. 
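The Shapley values that such a tree explainer produces are defined as a weighted average of marginal contributions over all coalitions. A brute-force, stdlib-only sketch of that definition on a hypothetical toy game — illustrative only, since TreeSHAP-style explainers exist precisely to avoid this O(2^n) enumeration by exploiting the tree structure:

```python
from itertools import combinations
from math import factorial

def shapley_values(n: int, value) -> list[float]:
    """Exact Shapley values of an n-player game by full enumeration.
    `value` maps a frozenset of player indices to a real payoff."""
    phi = [0.0] * n
    for i in range(n):
        others = [p for p in range(n) if p != i]
        for size in range(len(others) + 1):
            for coalition in combinations(others, size):
                s = frozenset(coalition)
                # classic Shapley weight |S|! (n-|S|-1)! / n!
                weight = factorial(size) * factorial(n - size - 1) / factorial(n)
                phi[i] += weight * (value(s | {i}) - value(s))
    return phi

# Hypothetical 3-player game: additive payoffs plus one pairwise synergy.
def toy_game(coalition: frozenset) -> float:
    v = sum({0: 1.0, 1: 2.0, 2: 0.5}[p] for p in coalition)
    if {0, 1} <= coalition:  # players 0 and 1 interact
        v += 1.0
    return v

phi = shapley_values(3, toy_game)
# Efficiency axiom: the values sum to v(N) - v(empty set).
assert abs(sum(phi) - toy_game(frozenset({0, 1, 2}))) < 1e-9
```

Each player's additive payoff is recovered exactly, and the synergy between players 0 and 1 is split evenly between them.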
This is discussed in #55.\r\n\r\n### TreeExplainer Bugfix\r\nWe fix a package-breaking bug that made it impossible to use the `TreeExplainer` class.\r\n\r\n### Additional Unittests\r\nWe add additional unittests that fully cover the `TreeExplainer` in its current form.\r\n\r\n## What's Changed\r\n\r\n* Bugfix and Support for Random Forests in TreeExplainer by @mmschlk in https:\u002F\u002Fgithub.com\u002Fmmschlk\u002Fshapiq\u002Fpull\u002F59\r\n\r\n\r\n**Full Changelog**: https:\u002F\u002Fgithub.com\u002Fmmschlk\u002Fshapiq\u002Fcompare\u002Fv.0.0.5-alpha...v.0.0.6-alpha","2024-03-20T15:37:48",{"id":192,"version":193,"summary_zh":194,"released_at":195},343279,"v.0.0.5-alpha","## Highlights\r\nSince the last release was already some time back, a lot has changed. The highlights are as follows:\r\n\r\n### TreeExplainer based on TreeSHAP-IQ\r\n`shapiq` is now equipped with a `TreeExplainer`. The `TreeExplainer` is based on the TreeSHAP-IQ algorithm proposed in this [paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2401.12069). It is a model-specific method to compute Shapley interactions of all kinds and of **any order**. Since it is based on the _linear TreeSHAP_ algorithm, it is quite fast. Work on the TreeExplainer is not yet finished, since it only accepts very basic tree models from sklearn as input. More to follow on this front.\r\n\r\n\u003Cimg src=\"https:\u002F\u002Fgithub.com\u002Fmmschlk\u002Fshapiq\u002Fassets\u002F45100845\u002F5427df49-7630-449b-8e8c-29795a2e4b59\" width=\"350\">\r\n\u003Cimg src=\"https:\u002F\u002Fgithub.com\u002Fmmschlk\u002Fshapiq\u002Fassets\u002F45100845\u002F42f89f4b-4832-4c74-a7ec-828d04d6d6ce\" width=\"400\">\r\n\r\n### The first visualizations are added.\r\nWe added a new plot to the visualizations: the stacked bar chart (underwhelming name). 
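The interaction scores behind such charts are built from Shapley interaction indices, i.e. weighted discrete derivatives of the game. A stdlib-only sketch of the pairwise (order-2) SII by exact enumeration on a hypothetical toy game — a conceptual illustration, not the shapiq code path:

```python
from itertools import combinations
from math import factorial

def sii_pair(n, value, i, j):
    """Order-2 Shapley interaction index of players i, j by enumeration."""
    others = [p for p in range(n) if p not in (i, j)]
    total = 0.0
    for size in range(len(others) + 1):
        for s in combinations(others, size):
            s = frozenset(s)
            # discrete derivative of the game w.r.t. {i, j} at coalition s
            delta = (value(s | {i, j}) - value(s | {i})
                     - value(s | {j}) + value(s))
            # SII weight |S|! (n-|S|-2)! / (n-1)!
            weight = factorial(size) * factorial(n - size - 2) / factorial(n - 1)
            total += weight * delta
    return total

def toy_game(c):
    # hypothetical game: additive payoffs plus one synergy between 0 and 1
    v = sum({0: 1.0, 1: 2.0, 2: 0.5}[p] for p in c)
    return v + (1.0 if {0, 1} <= c else 0.0)

# the synergy pair carries the full interaction; the other pairs carry none
assert abs(sii_pair(3, toy_game, 0, 1) - 1.0) < 1e-9
assert abs(sii_pair(3, toy_game, 0, 2)) < 1e-9
```

For a purely additive game every pairwise SII is zero, so non-zero entries in the chart flag genuine feature interplay.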
This plot is illustrated in the TreeSHAP-IQ [paper](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2401.12069.pdf) and Sebastian Bordt and Ulrike von Luxburg's AISTATS [paper](https:\u002F\u002Fproceedings.mlr.press\u002Fv206\u002Fbordt23a\u002Fbordt23a.pdf). The stacked bar chart shows how much interaction is present in a specific instance and is based on the `k-SII` values.\r\n\r\n\u003Cimg src=\"https:\u002F\u002Fgithub.com\u002Fmmschlk\u002Fshapiq\u002Fassets\u002F45100845\u002F07a24e5c-b1f1-4ebb-8bba-8e64836a7fdd\" width=\"400\">\r\n\r\n### Major Refactoring\r\nThe codebase has changed quite drastically and many objects have been renamed (e.g. `InteractionExplainer` has been renamed to `TabularExplainer` to better fit the more specific intent of this class). The `InteractionValues` object (the main data structure of the package) has become much more powerful. You can now multiply the interaction scores by scalar values and even add two objects together.\r\n\r\n### Welcoming new Collaborators\r\nWe are still looking for all the help that we can get! If you want to contribute, please check out the [tutorial](https:\u002F\u002Fgithub.com\u002Fmmschlk\u002Fshapiq\u002Fblob\u002Fmain\u002FCONTRIBUTING.md) and our [project](https:\u002F\u002Fgithub.com\u002Fusers\u002Fmmschlk\u002Fprojects\u002F4).\r\n\r\n## What's Changed\r\n* Add nSII and SII Regression Estimator by @mmschlk in https:\u002F\u002Fgithub.com\u002Fmmschlk\u002Fshapiq\u002Fpull\u002F22\r\n* Merge Dev by @mmschlk in https:\u002F\u002Fgithub.com\u002Fmmschlk\u002Fshapiq\u002Fpull\u002F23\r\n* Add initial explainer. 
by @mmschlk in https:\u002F\u002Fgithub.com\u002Fmmschlk\u002Fshapiq\u002Fpull\u002F25\r\n* Development by @mmschlk in https:\u002F\u002Fgithub.com\u002Fmmschlk\u002Fshapiq\u002Fpull\u002F27\r\n* Merge by @mmschlk in https:\u002F\u002Fgithub.com\u002Fmmschlk\u002Fshapiq\u002Fpull\u002F28\r\n* Add additional Tests by @mmschlk in https:\u002F\u002Fgithub.com\u002Fmmschlk\u002Fshapiq\u002Fpull\u002F29\r\n* Adds Stacked Bar Plot and tidies up Network Plot. by @mmschlk in https:\u002F\u002Fgithub.com\u002Fmmschlk\u002Fshapiq\u002Fpull\u002F41\r\n* Renamed nSII to k-SII and refactors base approximators by @mmschlk in https:\u002F\u002Fgithub.com\u002Fmmschlk\u002Fshapiq\u002Fpull\u002F42\r\n* Add Collaborator Tutorial and Code of Conduct by @mmschlk in https:\u002F\u002Fgithub.com\u002Fmmschlk\u002Fshapiq\u002Fpull\u002F46\r\n* adds code-quality check and closes #45 by @mmschlk in https:\u002F\u002Fgithub.com\u002Fmmschlk\u002Fshapiq\u002Fpull\u002F47\r\n* adds TreeExplainer with TreeSHAP-IQ by @mmschlk in https:\u002F\u002Fgithub.com\u002Fmmschlk\u002Fshapiq\u002Fpull\u002F51\r\n\r\n\r\n**Full Changelog**: https:\u002F\u002Fgithub.com\u002Fmmschlk\u002Fshapiq\u002Fcompare\u002Fv.0.0.4-alpha...v.0.0.5-alpha","2024-03-18T14:23:51",{"id":197,"version":198,"summary_zh":199,"released_at":200},343280,"v.0.0.4-alpha","# New Approximators\r\nThis release adds new approximation methods to `shapiq` and makes all interaction calculations faster and more memory efficient.\r\n\r\n## ShapIQ approximator\r\nThis release adds the `shapiq.approximator.ShapIQ` approximator as proposed in this [paper (NeurIPS'23)](https:\u002F\u002Fopenreview.net\u002Fforum?id=IEMLNF4gK4).\r\n`ShapIQ` can approximate any cardinal interaction index (CII) like the Shapley Interaction Index (SII), the Shapley Taylor Index (STI), or the Faithful Shapley Interaction index (FSI).\r\n\r\n## Regression Estimator for FSI\r\nThe `shapiq.approximator.RegressionFSI` regression estimator, which is only available for 
FSI, was proposed in this [paper (JMLR'23)](https:\u002F\u002Fwww.jmlr.org\u002Fpapers\u002Fv24\u002F22-0202.html). It is similar to KernelSHAP in that it leverages a weighted least squares representation of the interaction index and estimates the index by solving this regression problem.\r\n\r\n## Permutation Sampling for STI\r\nThe permutation sampling currently implemented for the SII (`shapiq.approximator.PermutationSamplingSII`) can also be extended to the STI. The new `shapiq.approximator.PermutationSamplingSTI` uses the traditional permutation sampling approach to compute STI scores.\r\n\r\n# List of PRs\r\n* Add FSI and STI approximation methods by @mmschlk in https:\u002F\u002Fgithub.com\u002Fmmschlk\u002Fshapiq\u002Fpull\u002F7\r\n* Add SHAP-IQ approximator by @mmschlk in https:\u002F\u002Fgithub.com\u002Fmmschlk\u002Fshapiq\u002Fpull\u002F10\r\n* Tests and Efficiency by @mmschlk in https:\u002F\u002Fgithub.com\u002Fmmschlk\u002Fshapiq\u002Fpull\u002F18\r\n* Updates docs. by @mmschlk in https:\u002F\u002Fgithub.com\u002Fmmschlk\u002Fshapiq\u002Fpull\u002F20\r\n\r\n\r\n**Full Changelog**: https:\u002F\u002Fgithub.com\u002Fmmschlk\u002Fshapiq\u002Fcompare\u002Fv0.0.3-alpha...v.0.0.4-alpha","2023-11-29T16:27:39",{"id":202,"version":203,"summary_zh":204,"released_at":205},343281,"v0.0.3-alpha","## What's Changed\r\n* Development by @mmschlk in https:\u002F\u002Fgithub.com\u002Fmmschlk\u002Fshapiq\u002Fpull\u002F6\r\n* Development by @FFmgll in 7e5bc062a91b3a12b45d262761ffb663353a4bf4\r\n\r\n## New Contributors\r\n* @mmschlk made their first contribution in https:\u002F\u002Fgithub.com\u002Fmmschlk\u002Fshapiq\u002Fpull\u002F6\r\n* @FFmgll made their first contribution in 7e5bc062a91b3a12b45d262761ffb663353a4bf4\r\n\r\n**Full Changelog**: https:\u002F\u002Fgithub.com\u002Fmmschlk\u002Fshapiq\u002Fcompare\u002Fv0.0.2-alpha...v0.0.3-alpha","2023-11-17T20:05:03",{"id":207,"version":208,"summary_zh":209,"released_at":210},343282,"v0.0.2-alpha","Adds network plot 
functionality and updates docs. This release mainly serves to test the workflows, the publishing of docs, and the automatic upload to PyPI.","2023-10-23T09:39:53"]