[{"data":1,"prerenderedAt":-1},["ShallowReactive",2],{"similar-SeldonIO--alibi":3,"tool-SeldonIO--alibi":64},[4,17,27,35,43,56],{"id":5,"name":6,"github_repo":7,"description_zh":8,"stars":9,"difficulty_score":10,"last_commit_at":11,"category_tags":12,"status":16},3808,"stable-diffusion-webui","AUTOMATIC1111\u002Fstable-diffusion-webui","stable-diffusion-webui 是一个基于 Gradio 构建的网页版操作界面，旨在让用户能够轻松地在本地运行和使用强大的 Stable Diffusion 图像生成模型。它解决了原始模型依赖命令行、操作门槛高且功能分散的痛点，将复杂的 AI 绘图流程整合进一个直观易用的图形化平台。\n\n无论是希望快速上手的普通创作者、需要精细控制画面细节的设计师，还是想要深入探索模型潜力的开发者与研究人员，都能从中获益。其核心亮点在于极高的功能丰富度：不仅支持文生图、图生图、局部重绘（Inpainting）和外绘（Outpainting）等基础模式，还独创了注意力机制调整、提示词矩阵、负向提示词以及“高清修复”等高级功能。此外，它内置了 GFPGAN 和 CodeFormer 等人脸修复工具，支持多种神经网络放大算法，并允许用户通过插件系统无限扩展能力。即使是显存有限的设备，stable-diffusion-webui 也提供了相应的优化选项，让高质量的 AI 艺术创作变得触手可及。",162132,3,"2026-04-05T11:01:52",[13,14,15],"开发框架","图像","Agent","ready",{"id":18,"name":19,"github_repo":20,"description_zh":21,"stars":22,"difficulty_score":23,"last_commit_at":24,"category_tags":25,"status":16},1381,"everything-claude-code","affaan-m\u002Feverything-claude-code","everything-claude-code 是一套专为 AI 编程助手（如 Claude Code、Codex、Cursor 等）打造的高性能优化系统。它不仅仅是一组配置文件，而是一个经过长期实战打磨的完整框架，旨在解决 AI 代理在实际开发中面临的效率低下、记忆丢失、安全隐患及缺乏持续学习能力等核心痛点。\n\n通过引入技能模块化、直觉增强、记忆持久化机制以及内置的安全扫描功能，everything-claude-code 能显著提升 AI 在复杂任务中的表现，帮助开发者构建更稳定、更智能的生产级 AI 代理。其独特的“研究优先”开发理念和针对 Token 消耗的优化策略，使得模型响应更快、成本更低，同时有效防御潜在的攻击向量。\n\n这套工具特别适合软件开发者、AI 研究人员以及希望深度定制 AI 工作流的技术团队使用。无论您是在构建大型代码库，还是需要 AI 协助进行安全审计与自动化测试，everything-claude-code 都能提供强大的底层支持。作为一个曾荣获 Anthropic 黑客大奖的开源项目，它融合了多语言支持与丰富的实战钩子（hooks），让 AI 真正成长为懂上",138956,2,"2026-04-05T11:33:21",[13,15,26],"语言模型",{"id":28,"name":29,"github_repo":30,"description_zh":31,"stars":32,"difficulty_score":23,"last_commit_at":33,"category_tags":34,"status":16},2271,"ComfyUI","Comfy-Org\u002FComfyUI","ComfyUI 是一款功能强大且高度模块化的视觉 AI 引擎，专为设计和执行复杂的 Stable Diffusion 图像生成流程而打造。它摒弃了传统的代码编写模式，采用直观的节点式流程图界面，让用户通过连接不同的功能模块即可构建个性化的生成管线。\n\n这一设计巧妙解决了高级 AI 
绘图工作流配置复杂、灵活性不足的痛点。用户无需具备编程背景，也能自由组合模型、调整参数并实时预览效果，轻松实现从基础文生图到多步骤高清修复等各类复杂任务。ComfyUI 拥有极佳的兼容性，不仅支持 Windows、macOS 和 Linux 全平台，还广泛适配 NVIDIA、AMD、Intel 及苹果 Silicon 等多种硬件架构，并率先支持 SDXL、Flux、SD3 等前沿模型。\n\n无论是希望深入探索算法潜力的研究人员和开发者，还是追求极致创作自由度的设计师与资深 AI 绘画爱好者，ComfyUI 都能提供强大的支持。其独特的模块化架构允许社区不断扩展新功能，使其成为当前最灵活、生态最丰富的开源扩散模型工具之一，帮助用户将创意高效转化为现实。",107662,"2026-04-03T11:11:01",[13,14,15],{"id":36,"name":37,"github_repo":38,"description_zh":39,"stars":40,"difficulty_score":23,"last_commit_at":41,"category_tags":42,"status":16},3704,"NextChat","ChatGPTNextWeb\u002FNextChat","NextChat 是一款轻量且极速的 AI 助手，旨在为用户提供流畅、跨平台的大模型交互体验。它完美解决了用户在多设备间切换时难以保持对话连续性，以及面对众多 AI 模型不知如何统一管理的痛点。无论是日常办公、学习辅助还是创意激发，NextChat 都能让用户随时随地通过网页、iOS、Android、Windows、MacOS 或 Linux 端无缝接入智能服务。\n\n这款工具非常适合普通用户、学生、职场人士以及需要私有化部署的企业团队使用。对于开发者而言，它也提供了便捷的自托管方案，支持一键部署到 Vercel 或 Zeabur 等平台。\n\nNextChat 的核心亮点在于其广泛的模型兼容性，原生支持 Claude、DeepSeek、GPT-4 及 Gemini Pro 等主流大模型，让用户在一个界面即可自由切换不同 AI 能力。此外，它还率先支持 MCP（Model Context Protocol）协议，增强了上下文处理能力。针对企业用户，NextChat 提供专业版解决方案，具备品牌定制、细粒度权限控制、内部知识库整合及安全审计等功能，满足公司对数据隐私和个性化管理的高标准要求。",87618,"2026-04-05T07:20:52",[13,26],{"id":44,"name":45,"github_repo":46,"description_zh":47,"stars":48,"difficulty_score":23,"last_commit_at":49,"category_tags":50,"status":16},2268,"ML-For-Beginners","microsoft\u002FML-For-Beginners","ML-For-Beginners 是由微软推出的一套系统化机器学习入门课程，旨在帮助零基础用户轻松掌握经典机器学习知识。这套课程将学习路径规划为 12 周，包含 26 节精炼课程和 52 道配套测验，内容涵盖从基础概念到实际应用的完整流程，有效解决了初学者面对庞大知识体系时无从下手、缺乏结构化指导的痛点。\n\n无论是希望转型的开发者、需要补充算法背景的研究人员，还是对人工智能充满好奇的普通爱好者，都能从中受益。课程不仅提供了清晰的理论讲解，还强调动手实践，让用户在循序渐进中建立扎实的技能基础。其独特的亮点在于强大的多语言支持，通过自动化机制提供了包括简体中文在内的 50 多种语言版本，极大地降低了全球不同背景用户的学习门槛。此外，项目采用开源协作模式，社区活跃且内容持续更新，确保学习者能获取前沿且准确的技术资讯。如果你正寻找一条清晰、友好且专业的机器学习入门之路，ML-For-Beginners 将是理想的起点。",84991,"2026-04-05T10:45:23",[14,51,52,53,15,54,26,13,55],"数据工具","视频","插件","其他","音频",{"id":57,"name":58,"github_repo":59,"description_zh":60,"stars":61,"difficulty_score":10,"last_commit_at":62,"category_tags":63,"status":16},3128,"ragflow","infiniflow\u002Fragflow","RAGFlow 
是一款领先的开源检索增强生成（RAG）引擎，旨在为大语言模型构建更精准、可靠的上下文层。它巧妙地将前沿的 RAG 技术与智能体（Agent）能力相结合，不仅支持从各类文档中高效提取知识，还能让模型基于这些知识进行逻辑推理和任务执行。\n\n在大模型应用中，幻觉问题和知识滞后是常见痛点。RAGFlow 通过深度解析复杂文档结构（如表格、图表及混合排版），显著提升了信息检索的准确度，从而有效减少模型“胡编乱造”的现象，确保回答既有据可依又具备时效性。其内置的智能体机制更进一步，使系统不仅能回答问题，还能自主规划步骤解决复杂问题。\n\n这款工具特别适合开发者、企业技术团队以及 AI 研究人员使用。无论是希望快速搭建私有知识库问答系统，还是致力于探索大模型在垂直领域落地的创新者，都能从中受益。RAGFlow 提供了可视化的工作流编排界面和灵活的 API 接口，既降低了非算法背景用户的上手门槛，也满足了专业开发者对系统深度定制的需求。作为基于 Apache 2.0 协议开源的项目，它正成为连接通用大模型与行业专有知识之间的重要桥梁。",77062,"2026-04-04T04:44:48",[15,14,13,26,54],{"id":65,"github_repo":66,"name":67,"description_en":68,"description_zh":69,"ai_summary_zh":69,"readme_en":70,"readme_zh":71,"quickstart_zh":72,"use_case_zh":73,"hero_image_url":74,"owner_login":75,"owner_name":76,"owner_avatar_url":77,"owner_bio":78,"owner_company":79,"owner_location":79,"owner_email":80,"owner_twitter":79,"owner_website":81,"owner_url":82,"languages":83,"stars":92,"forks":93,"last_commit_at":94,"license":95,"difficulty_score":96,"env_os":97,"env_gpu":98,"env_ram":98,"env_deps":99,"category_tags":108,"github_topics":109,"view_count":10,"oss_zip_url":79,"oss_zip_packed_at":79,"status":16,"created_at":115,"updated_at":116,"faqs":117,"releases":146},729,"SeldonIO\u002Falibi","alibi","Algorithms for explaining machine learning models","Alibi 是一款专为机器学习模型设计的开源 Python 库，致力于实现模型的检查与解释。在实际应用中，许多复杂的算法如同“黑盒”，让人难以理解其决策逻辑。Alibi 正是为了解决这一透明度问题而生，它提供了一套高质量的方法，帮助用户深入洞察分类和回归模型的工作原理。\n\n无论是针对黑盒还是白盒模型，Alibi 都支持局部和全局的解释策略。其技术亮点丰富，涵盖了图像领域的锚点解释、文本分析的集成梯度、反事实示例生成以及累积局部效应分析等先进算法。这些功能让模型的可信度评估变得更加直观。\n\nAlibi 非常适合机器学习开发者、数据科学家及研究人员使用。通过 Alibi，团队可以更自信地部署模型，排查潜在偏差。此外，若需进行异常检测或对抗实例识别，还可结合其姊妹项目 alibi-detect 一同使用。完善的文档和社区支持也让上手过程更加顺畅。","\u003Cp align=\"center\">\n  \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FSeldonIO_alibi_readme_3aa13c096b99.png\" alt=\"Alibi Logo\" width=\"50%\">\n\u003C\u002Fp>\n\n\u003C!--- BADGES: START --->\n\n[![Build 
Status](https:\u002F\u002Fgithub.com\u002FSeldonIO\u002Falibi-detect\u002Fworkflows\u002FCI\u002Fbadge.svg?branch=master)][#build-status]\n[![Documentation Status](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FSeldonIO_alibi_readme_13d664e1afd7.png)][#docs-package]\n[![codecov](https:\u002F\u002Fcodecov.io\u002Fgh\u002FSeldonIO\u002Falibi\u002Fbranch\u002Fmaster\u002Fgraph\u002Fbadge.svg)](https:\u002F\u002Fcodecov.io\u002Fgh\u002FSeldonIO\u002Falibi)\n[![PyPI - Python Version](https:\u002F\u002Fimg.shields.io\u002Fpypi\u002Fpyversions\u002Falibi?logo=pypi&style=flat&color=blue)][#pypi-package]\n[![PyPI - Package Version](https:\u002F\u002Fimg.shields.io\u002Fpypi\u002Fv\u002Falibi?logo=pypi&style=flat&color=orange)][#pypi-package]\n[![Conda (channel only)](https:\u002F\u002Fimg.shields.io\u002Fconda\u002Fvn\u002Fconda-forge\u002Falibi?logo=anaconda&style=flat&color=orange)][#conda-forge-package]\n[![GitHub - License](https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Flicense\u002FSeldonIO\u002Falibi?logo=github&style=flat&color=green)][#github-license]\n[![Slack channel](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002Fchat-on%20slack-e51670.svg)][#slack-channel]\n\n\u003C!--- Hide platform for now as platform agnostic --->\n\u003C!--- [![Conda - Platform](https:\u002F\u002Fimg.shields.io\u002Fconda\u002Fpn\u002Fconda-forge\u002Falibi?logo=anaconda&style=flat)][#conda-forge-package]--->\n\n[#github-license]: https:\u002F\u002Fgithub.com\u002FSeldonIO\u002Falibi\u002Fblob\u002Fmaster\u002FLICENSE\n[#pypi-package]: https:\u002F\u002Fpypi.org\u002Fproject\u002Falibi\u002F\n[#conda-forge-package]: https:\u002F\u002Fanaconda.org\u002Fconda-forge\u002Falibi\n[#docs-package]: https:\u002F\u002Fdocs.seldon.io\u002Fprojects\u002Falibi\u002Fen\u002Fstable\u002F\n[#build-status]: https:\u002F\u002Fgithub.com\u002FSeldonIO\u002Falibi\u002Factions?query=workflow%3A%22CI%22\n[#slack-channel]: 
https:\u002F\u002Fjoin.slack.com\u002Ft\u002Fseldondev\u002Fshared_invite\u002Fzt-vejg6ttd-ksZiQs3O_HOtPQsen_labg\n\u003C!--- BADGES: END --->\n---\n\n[Alibi](https:\u002F\u002Fdocs.seldon.io\u002Fprojects\u002Falibi) is a source-available Python library aimed at machine learning model inspection and interpretation.\nThe focus of the library is to provide high-quality implementations of black-box, white-box, local and global\nexplanation methods for classification and regression models.\n*  [Documentation](https:\u002F\u002Fdocs.seldon.io\u002Fprojects\u002Falibi\u002Fen\u002Fstable\u002F)\n\nIf you're interested in outlier detection, concept drift or adversarial instance detection, check out our sister project [alibi-detect](https:\u002F\u002Fgithub.com\u002FSeldonIO\u002Falibi-detect).\n\n\u003Ctable>\n  \u003Ctr valign=\"top\">\n    \u003Ctd width=\"50%\" >\n        \u003Ca href=\"https:\u002F\u002Fdocs.seldon.io\u002Fprojects\u002Falibi\u002Fen\u002Fstable\u002Fexamples\u002Fanchor_image_imagenet.html\">\n            \u003Cbr>\n            \u003Cb>Anchor explanations for images\u003C\u002Fb>\n            \u003Cbr>\n            \u003Cbr>\n            \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FSeldonIO_alibi_readme_51906e61875d.png\">\n        \u003C\u002Fa>\n    \u003C\u002Ftd>\n    \u003Ctd width=\"50%\">\n        \u003Ca href=\"https:\u002F\u002Fdocs.seldon.io\u002Fprojects\u002Falibi\u002Fen\u002Fstable\u002Fexamples\u002Fintegrated_gradients_imdb.html\">\n            \u003Cbr>\n            \u003Cb>Integrated Gradients for text\u003C\u002Fb>\n            \u003Cbr>\n            \u003Cbr>\n            \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FSeldonIO_alibi_readme_393ace8b18df.png\">\n        \u003C\u002Fa>\n    \u003C\u002Ftd>\n  \u003C\u002Ftr>\n  \u003Ctr valign=\"top\">\n    \u003Ctd width=\"50%\">\n        \u003Ca 
href=\"https:\u002F\u002Fdocs.seldon.io\u002Fprojects\u002Falibi\u002Fen\u002Fstable\u002Fmethods\u002FCFProto.html\">\n            \u003Cbr>\n            \u003Cb>Counterfactual examples\u003C\u002Fb>\n            \u003Cbr>\n            \u003Cbr>\n            \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FSeldonIO_alibi_readme_a3b01a44f96d.png\">\n        \u003C\u002Fa>\n    \u003C\u002Ftd>\n    \u003Ctd width=\"50%\">\n        \u003Ca href=\"https:\u002F\u002Fdocs.seldon.io\u002Fprojects\u002Falibi\u002Fen\u002Fstable\u002Fmethods\u002FALE.html\">\n            \u003Cbr>\n            \u003Cb>Accumulated Local Effects\u003C\u002Fb>\n            \u003Cbr>\n            \u003Cbr>\n            \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FSeldonIO_alibi_readme_b853830a3307.png\">\n        \u003C\u002Fa>\n    \u003C\u002Ftd>\n  \u003C\u002Ftr>\n\u003C\u002Ftable>\n\n## Table of Contents\n\n* [Installation and Usage](#installation-and-usage)\n* [Supported Methods](#supported-methods)\n  * [Model Explanations](#model-explanations)\n  * [Model Confidence](#model-confidence)\n  * [Prototypes](#prototypes)\n  * [References and Examples](#references-and-examples)\n* [Citations](#citations)\n\n## Installation and Usage\nAlibi can be installed from:\n\n- PyPI or GitHub source (with `pip`)\n- Anaconda (with `conda`\u002F`mamba`)\n\n### With pip\n\n- Alibi can be installed from [PyPI](https:\u002F\u002Fpypi.org\u002Fproject\u002Falibi):\n\n  ```bash\n  pip install alibi\n  ```\n  \n- Alternatively, the development version can be installed:\n  ```bash\n  pip install git+https:\u002F\u002Fgithub.com\u002FSeldonIO\u002Falibi.git \n  ```\n\n- To take advantage of distributed computation of explanations, install `alibi` with `ray`:\n  ```bash\n  pip install alibi[ray]\n  ```\n\n- For SHAP support, install `alibi` as follows:\n  ```bash\n  pip install alibi[shap]\n  ```\n\n### With conda \n\nTo install from 
[conda-forge](https:\u002F\u002Fconda-forge.org\u002F) it is recommended to use [mamba](https:\u002F\u002Fmamba.readthedocs.io\u002Fen\u002Fstable\u002F), \nwhich can be installed to the *base* conda environment with:\n\n```bash\nconda install mamba -n base -c conda-forge\n```\n\n- For the standard Alibi install:\n  ```bash\n  mamba install -c conda-forge alibi\n  ```\n\n- For distributed computing support:\n  ```bash\n  mamba install -c conda-forge alibi ray\n  ```\n\n- For SHAP support:\n  ```bash\n  mamba install -c conda-forge alibi shap\n  ```\n\n### Usage\nThe alibi explanation API takes inspiration from `scikit-learn`, consisting of distinct initialize,\nfit and explain steps. We will use the [AnchorTabular](https:\u002F\u002Fdocs.seldon.io\u002Fprojects\u002Falibi\u002Fen\u002Fstable\u002Fmethods\u002FAnchors.html)\nexplainer to illustrate the API:\n\n```python\nfrom alibi.explainers import AnchorTabular\n\n# initialize and fit explainer by passing a prediction function and any other required arguments\nexplainer = AnchorTabular(predict_fn, feature_names=feature_names, category_map=category_map)\nexplainer.fit(X_train)\n\n# explain an instance\nexplanation = explainer.explain(x)\n```\n\nThe explanation returned is an `Explanation` object with attributes `meta` and `data`. `meta` is a dictionary\ncontaining the explainer metadata and any hyperparameters, and `data` is a dictionary containing everything\nrelated to the computed explanation. For example, for the Anchor algorithm the explanation can be accessed\nvia `explanation.data['anchor']` (or `explanation.anchor`). 
The exact details of available fields vary\nfrom method to method, so we encourage the reader to become familiar with the\n[types of methods supported](https:\u002F\u002Fdocs.seldon.io\u002Fprojects\u002Falibi\u002Fen\u002Fstable\u002Foverview\u002Falgorithms.html).\n\n## Supported Methods\nThe following tables summarize the possible use cases for each method.\n\n### Model Explanations\n| Method                                                                                                       |    Models    |     Explanations      | Classification | Regression | Tabular | Text | Images | Categorical features | Train set required | Distributed |\n|:-------------------------------------------------------------------------------------------------------------|:------------:|:---------------------:|:--------------:|:----------:|:-------:|:----:|:------:|:--------------------:|:------------------:|:-----------:|\n| [ALE](https:\u002F\u002Fdocs.seldon.io\u002Fprojects\u002Falibi\u002Fen\u002Fstable\u002Fmethods\u002FALE.html)                                      |      BB      |        global         |       ✔        |     ✔      |    ✔    |      |        |                      |                    |             |\n| [Partial Dependence](https:\u002F\u002Fdocs.seldon.io\u002Fprojects\u002Falibi\u002Fen\u002Fstable\u002Fmethods\u002FPartialDependence.html)         |    BB WB     |        global         |       ✔        |     ✔      |    ✔    |      |        |          ✔           |                    |             |\n| [PD Variance](https:\u002F\u002Fdocs.seldon.io\u002Fprojects\u002Falibi\u002Fen\u002Fstable\u002Fmethods\u002FPartialDependenceVariance.html)        |    BB WB     |        global         |       ✔        |     ✔      |    ✔    |      |        |          ✔           |                    |             |\n| [Permutation Importance](https:\u002F\u002Fdocs.seldon.io\u002Fprojects\u002Falibi\u002Fen\u002Fstable\u002Fmethods\u002FPermutationImportance.html) 
|      BB      |        global         |       ✔        |     ✔      |    ✔    |      |        |          ✔           |                    |             |\n| [Anchors](https:\u002F\u002Fdocs.seldon.io\u002Fprojects\u002Falibi\u002Fen\u002Fstable\u002Fmethods\u002FAnchors.html)                              |      BB      |         local         |       ✔        |            |    ✔    |  ✔   |   ✔    |          ✔           |    For Tabular     |             |\n| [CEM](https:\u002F\u002Fdocs.seldon.io\u002Fprojects\u002Falibi\u002Fen\u002Fstable\u002Fmethods\u002FCEM.html)                                      | BB* TF\u002FKeras |         local         |       ✔        |            |    ✔    |      |   ✔    |                      |      Optional      |             |\n| [Counterfactuals](https:\u002F\u002Fdocs.seldon.io\u002Fprojects\u002Falibi\u002Fen\u002Fstable\u002Fmethods\u002FCF.html)                           | BB* TF\u002FKeras |         local         |       ✔        |            |    ✔    |      |   ✔    |                      |         No         |             |\n| [Prototype Counterfactuals](https:\u002F\u002Fdocs.seldon.io\u002Fprojects\u002Falibi\u002Fen\u002Fstable\u002Fmethods\u002FCFProto.html)            | BB* TF\u002FKeras |         local         |       ✔        |            |    ✔    |      |   ✔    |          ✔           |      Optional      |             |\n| [Counterfactuals with RL](https:\u002F\u002Fdocs.seldon.io\u002Fprojects\u002Falibi\u002Fen\u002Fstable\u002Fmethods\u002FCFRL.html)                 |      BB      |         local         |       ✔        |            |    ✔    |      |   ✔    |          ✔           |         ✔          |             |\n| [Integrated Gradients](https:\u002F\u002Fdocs.seldon.io\u002Fprojects\u002Falibi\u002Fen\u002Fstable\u002Fmethods\u002FIntegratedGradients.html)     |   TF\u002FKeras   |         local         |       ✔        |     ✔      |    ✔    |  ✔   |   ✔    |          ✔           |      Optional     
 |             |\n| [Kernel SHAP](https:\u002F\u002Fdocs.seldon.io\u002Fprojects\u002Falibi\u002Fen\u002Fstable\u002Fmethods\u002FKernelSHAP.html)                       |      BB      | local \u003Cbr>\u003C\u002Fbr>global |       ✔        |     ✔      |    ✔    |      |        |          ✔           |         ✔          |      ✔      |\n| [Tree SHAP](https:\u002F\u002Fdocs.seldon.io\u002Fprojects\u002Falibi\u002Fen\u002Fstable\u002Fmethods\u002FTreeSHAP.html)                           |      WB      | local \u003Cbr>\u003C\u002Fbr>global |       ✔        |     ✔      |    ✔    |      |        |          ✔           |      Optional      |             |\n| [Similarity explanations](https:\u002F\u002Fdocs.seldon.io\u002Fprojects\u002Falibi\u002Fen\u002Fstable\u002Fmethods\u002FSimilarity.html)           |      WB      |         local         |       ✔        |     ✔      |    ✔    |  ✔   |   ✔    |          ✔           |         ✔          |             |\n\n### Model Confidence\nThese algorithms provide **instance-specific** scores measuring the model confidence for making a\nparticular prediction.\n\n|Method|Models|Classification|Regression|Tabular|Text|Images|Categorical Features|Train set required|\n|:---|:---|:---:|:---:|:---:|:---:|:---:|:---:|:---|\n|[Trust Scores](https:\u002F\u002Fdocs.seldon.io\u002Fprojects\u002Falibi\u002Fen\u002Fstable\u002Fmethods\u002FTrustScores.html)|BB|✔| |✔|✔(1)|✔(2)| |Yes|\n|[Linearity Measure](https:\u002F\u002Fdocs.seldon.io\u002Fprojects\u002Falibi\u002Fen\u002Fstable\u002Fmethods\u002FLinearityMeasure.html)|BB|✔|✔|✔| |✔| |Optional|\n\nKey:\n - **BB** - black-box (only require a prediction function)\n - **BB\\*** - black-box but assume model is differentiable\n - **WB** - requires white-box model access. 
There may be limitations on models supported\n - **TF\u002FKeras** - TensorFlow models via the Keras API\n - **Local** - instance specific explanation, why was this prediction made?\n - **Global** - explains the model with respect to a set of instances\n - **(1)** -  depending on model\n - **(2)** -  may require dimensionality reduction\n\n### Prototypes\nThese algorithms provide a **distilled** view of the dataset and help construct a 1-KNN **interpretable** classifier.\n\n|Method|Classification|Regression|Tabular|Text|Images|Categorical Features|Train set labels|\n|:-----|:-------------|:---------|:------|:---|:-----|:-------------------|:---------------|\n|[ProtoSelect](https:\u002F\u002Fdocs.seldon.io\u002Fprojects\u002Falibi\u002Fen\u002Flatest\u002Fmethods\u002FProtoSelect.html)|✔| |✔|✔|✔|✔| Optional       |\n\n\n## References and Examples\n- Accumulated Local Effects (ALE, [Apley and Zhu, 2016](https:\u002F\u002Farxiv.org\u002Fabs\u002F1612.08468))\n  - [Documentation](https:\u002F\u002Fdocs.seldon.io\u002Fprojects\u002Falibi\u002Fen\u002Fstable\u002Fmethods\u002FALE.html)\n  - Examples:\n    [California housing dataset](https:\u002F\u002Fdocs.seldon.io\u002Fprojects\u002Falibi\u002Fen\u002Fstable\u002Fexamples\u002Fale_regression_california.html),\n    [Iris dataset](https:\u002F\u002Fdocs.seldon.io\u002Fprojects\u002Falibi\u002Fen\u002Fstable\u002Fexamples\u002Fale_classification.html)\n\n- Partial Dependence ([J.H. 
Friedman, 2001](https:\u002F\u002Fprojecteuclid.org\u002Fjournals\u002Fannals-of-statistics\u002Fvolume-29\u002Fissue-5\u002FGreedy-function-approximation-A-gradient-boostingmachine\u002F10.1214\u002Faos\u002F1013203451.full))\n  - [Documentation](https:\u002F\u002Fdocs.seldon.io\u002Fprojects\u002Falibi\u002Fen\u002Fstable\u002Fmethods\u002FPartialDependence.html)\n  - Examples:\n    [Bike rental](https:\u002F\u002Fdocs.seldon.io\u002Fprojects\u002Falibi\u002Fen\u002Fstable\u002Fexamples\u002Fpdp_regression_bike.html)\n\n- Partial Dependence Variance([Greenwell et al., 2018](https:\u002F\u002Farxiv.org\u002Fabs\u002F1805.04755))\n  - [Documentation](https:\u002F\u002Fdocs.seldon.io\u002Fprojects\u002Falibi\u002Fen\u002Fstable\u002Fmethods\u002FPartialDependenceVariance.html)\n  - Examples:\n    [Friedman’s regression problem](https:\u002F\u002Fdocs.seldon.io\u002Fprojects\u002Falibi\u002Fen\u002Fstable\u002Fexamples\u002Fpd_variance_regression_friedman.html)\n\n- Permutation Importance([Breiman, 2001](https:\u002F\u002Flink.springer.com\u002Farticle\u002F10.1023\u002FA:1010933404324); [Fisher et al., 2018](https:\u002F\u002Farxiv.org\u002Fabs\u002F1801.01489))\n  - [Documentation](https:\u002F\u002Fdocs.seldon.io\u002Fprojects\u002Falibi\u002Fen\u002Fstable\u002Fmethods\u002FPermutationImportance.html)\n  - Examples:\n    [Who's Going to Leave Next?](https:\u002F\u002Fdocs.seldon.io\u002Fprojects\u002Falibi\u002Fen\u002Fstable\u002Fexamples\u002Fpermutation_importance_classification_leave.html)\n\n- Anchor explanations ([Ribeiro et al., 2018](https:\u002F\u002Fhomes.cs.washington.edu\u002F~marcotcr\u002Faaai18.pdf))\n  - [Documentation](https:\u002F\u002Fdocs.seldon.io\u002Fprojects\u002Falibi\u002Fen\u002Fstable\u002Fmethods\u002FAnchors.html)\n  - Examples:\n    [income prediction](https:\u002F\u002Fdocs.seldon.io\u002Fprojects\u002Falibi\u002Fen\u002Fstable\u002Fexamples\u002Fanchor_tabular_adult.html),\n    [Iris 
dataset](https:\u002F\u002Fdocs.seldon.io\u002Fprojects\u002Falibi\u002Fen\u002Fstable\u002Fexamples\u002Fanchor_tabular_iris.html),\n    [movie sentiment classification](https:\u002F\u002Fdocs.seldon.io\u002Fprojects\u002Falibi\u002Fen\u002Fstable\u002Fexamples\u002Fanchor_text_movie.html),\n    [ImageNet](https:\u002F\u002Fdocs.seldon.io\u002Fprojects\u002Falibi\u002Fen\u002Fstable\u002Fexamples\u002Fanchor_image_imagenet.html),\n    [fashion MNIST](https:\u002F\u002Fdocs.seldon.io\u002Fprojects\u002Falibi\u002Fen\u002Fstable\u002Fexamples\u002Fanchor_image_fashion_mnist.html)\n\n- Contrastive Explanation Method (CEM, [Dhurandhar et al., 2018](https:\u002F\u002Fpapers.nips.cc\u002Fpaper\u002F7340-explanations-based-on-the-missing-towards-contrastive-explanations-with-pertinent-negatives))\n  - [Documentation](https:\u002F\u002Fdocs.seldon.io\u002Fprojects\u002Falibi\u002Fen\u002Fstable\u002Fmethods\u002FCEM.html)\n  - Examples: [MNIST](https:\u002F\u002Fdocs.seldon.io\u002Fprojects\u002Falibi\u002Fen\u002Fstable\u002Fexamples\u002Fcem_mnist.html),\n    [Iris dataset](https:\u002F\u002Fdocs.seldon.io\u002Fprojects\u002Falibi\u002Fen\u002Fstable\u002Fexamples\u002Fcem_iris.html)\n\n- Counterfactual Explanations (extension of\n  [Wachter et al., 2017](https:\u002F\u002Farxiv.org\u002Fabs\u002F1711.00399))\n  - [Documentation](https:\u002F\u002Fdocs.seldon.io\u002Fprojects\u002Falibi\u002Fen\u002Fstable\u002Fmethods\u002FCF.html)\n  - Examples: \n    [MNIST](https:\u002F\u002Fdocs.seldon.io\u002Fprojects\u002Falibi\u002Fen\u002Fstable\u002Fexamples\u002Fcf_mnist.html)\n\n- Counterfactual Explanations Guided by Prototypes ([Van Looveren and Klaise, 2019](https:\u002F\u002Farxiv.org\u002Fabs\u002F1907.02584))\n  - [Documentation](https:\u002F\u002Fdocs.seldon.io\u002Fprojects\u002Falibi\u002Fen\u002Fstable\u002Fmethods\u002FCFProto.html)\n  - Examples:\n    
[MNIST](https:\u002F\u002Fdocs.seldon.io\u002Fprojects\u002Falibi\u002Fen\u002Fstable\u002Fexamples\u002Fcfproto_mnist.html),\n    [California housing dataset](https:\u002F\u002Fdocs.seldon.io\u002Fprojects\u002Falibi\u002Fen\u002Fstable\u002Fexamples\u002Fcfproto_housing.html),\n    [Adult income (one-hot)](https:\u002F\u002Fdocs.seldon.io\u002Fprojects\u002Falibi\u002Fen\u002Fstable\u002Fexamples\u002Fcfproto_cat_adult_ohe.html),\n    [Adult income (ordinal)](https:\u002F\u002Fdocs.seldon.io\u002Fprojects\u002Falibi\u002Fen\u002Fstable\u002Fexamples\u002Fcfproto_cat_adult_ord.html)\n\n- Model-agnostic Counterfactual Explanations via RL([Samoilescu et al., 2021](https:\u002F\u002Farxiv.org\u002Fabs\u002F2106.02597))\n  - [Documentation](https:\u002F\u002Fdocs.seldon.io\u002Fprojects\u002Falibi\u002Fen\u002Fstable\u002Fmethods\u002FCFRL.html)\n  - Examples:\n    [MNIST](https:\u002F\u002Fdocs.seldon.io\u002Fprojects\u002Falibi\u002Fen\u002Fstable\u002Fexamples\u002Fcfrl_mnist.html),\n    [Adult income](https:\u002F\u002Fdocs.seldon.io\u002Fprojects\u002Falibi\u002Fen\u002Fstable\u002Fexamples\u002Fcfrl_adult.html)\n\n- Integrated Gradients ([Sundararajan et al., 2017](https:\u002F\u002Farxiv.org\u002Fabs\u002F1703.01365))\n  - [Documentation](https:\u002F\u002Fdocs.seldon.io\u002Fprojects\u002Falibi\u002Fen\u002Fstable\u002Fmethods\u002FIntegratedGradients.html),\n  - Examples:\n    [MNIST example](https:\u002F\u002Fdocs.seldon.io\u002Fprojects\u002Falibi\u002Fen\u002Fstable\u002Fexamples\u002Fintegrated_gradients_mnist.html),\n    [Imagenet example](https:\u002F\u002Fdocs.seldon.io\u002Fprojects\u002Falibi\u002Fen\u002Fstable\u002Fexamples\u002Fintegrated_gradients_imagenet.html),\n    [IMDB example](https:\u002F\u002Fdocs.seldon.io\u002Fprojects\u002Falibi\u002Fen\u002Fstable\u002Fexamples\u002Fintegrated_gradients_imdb.html).\n\n- Kernel Shapley Additive Explanations ([Lundberg et al., 
2017](https:\u002F\u002Fpapers.nips.cc\u002Fpaper\u002F7062-a-unified-approach-to-interpreting-model-predictions))\n  - [Documentation](https:\u002F\u002Fdocs.seldon.io\u002Fprojects\u002Falibi\u002Fen\u002Fstable\u002Fmethods\u002FKernelSHAP.html)\n  - Examples:\n    [SVM with continuous data](https:\u002F\u002Fdocs.seldon.io\u002Fprojects\u002Falibi\u002Fen\u002Fstable\u002Fexamples\u002Fkernel_shap_wine_intro.html),\n    [multinomial logistic regression with continuous data](https:\u002F\u002Fdocs.seldon.io\u002Fprojects\u002Falibi\u002Fen\u002Fstable\u002Fexamples\u002Fkernel_shap_wine_lr.html),\n    [handling categorical variables](https:\u002F\u002Fdocs.seldon.io\u002Fprojects\u002Falibi\u002Fen\u002Fstable\u002Fexamples\u002Fkernel_shap_adult_lr.html)\n\n- Tree Shapley Additive Explanations ([Lundberg et al., 2020](https:\u002F\u002Fwww.nature.com\u002Farticles\u002Fs42256-019-0138-9))\n  - [Documentation](https:\u002F\u002Fdocs.seldon.io\u002Fprojects\u002Falibi\u002Fen\u002Fstable\u002Fmethods\u002FTreeSHAP.html)\n  - Examples:\n    [Interventional (adult income, xgboost)](https:\u002F\u002Fdocs.seldon.io\u002Fprojects\u002Falibi\u002Fen\u002Fstable\u002Fexamples\u002Finterventional_tree_shap_adult_xgb.html),\n    [Path-dependent (adult income, xgboost)](https:\u002F\u002Fdocs.seldon.io\u002Fprojects\u002Falibi\u002Fen\u002Fstable\u002Fexamples\u002Fpath_dependent_tree_shap_adult_xgb.html)\n\n- Trust Scores ([Jiang et al., 2018](https:\u002F\u002Farxiv.org\u002Fabs\u002F1805.11783))\n  - [Documentation](https:\u002F\u002Fdocs.seldon.io\u002Fprojects\u002Falibi\u002Fen\u002Fstable\u002Fmethods\u002FTrustScores.html)\n  - Examples:\n    [MNIST](https:\u002F\u002Fdocs.seldon.io\u002Fprojects\u002Falibi\u002Fen\u002Fstable\u002Fexamples\u002Ftrustscore_mnist.html),\n    [Iris dataset](https:\u002F\u002Fdocs.seldon.io\u002Fprojects\u002Falibi\u002Fen\u002Fstable\u002Fexamples\u002Ftrustscore_mnist.html)\n\n- Linearity Measure\n  - 
[Documentation](https:\u002F\u002Fdocs.seldon.io\u002Fprojects\u002Falibi\u002Fen\u002Fstable\u002Fmethods\u002FLinearityMeasure.html)\n  - Examples:\n    [Iris dataset](https:\u002F\u002Fdocs.seldon.io\u002Fprojects\u002Falibi\u002Fen\u002Fstable\u002Fexamples\u002Flinearity_measure_iris.html),\n    [fashion MNIST](https:\u002F\u002Fdocs.seldon.io\u002Fprojects\u002Falibi\u002Fen\u002Fstable\u002Fexamples\u002Flinearity_measure_fashion_mnist.html)\n\n- ProtoSelect\n  - [Documentation](https:\u002F\u002Fdocs.seldon.io\u002Fprojects\u002Falibi\u002Fen\u002Flatest\u002Fmethods\u002FProtoSelect.html)\n  - Examples:\n    [Adult Census & CIFAR10](https:\u002F\u002Fdocs.seldon.io\u002Fprojects\u002Falibi\u002Fen\u002Flatest\u002Fexamples\u002Fprotoselect_adult_cifar10.html)\n\n- Similarity explanations\n  - [Documentation](https:\u002F\u002Fdocs.seldon.io\u002Fprojects\u002Falibi\u002Fen\u002Fstable\u002Fmethods\u002FSimilarity.html)\n  - Examples:\n    [20 news groups dataset](https:\u002F\u002Fdocs.seldon.io\u002Fprojects\u002Falibi\u002Fen\u002Fstable\u002Fexamples\u002Fsimilarity_explanations_20ng.html),\n    [ImageNet dataset](https:\u002F\u002Fdocs.seldon.io\u002Fprojects\u002Falibi\u002Fen\u002Fstable\u002Fexamples\u002Fsimilarity_explanations_imagenet.html),\n    [MNIST dataset](https:\u002F\u002Fdocs.seldon.io\u002Fprojects\u002Falibi\u002Fen\u002Fstable\u002Fexamples\u002Fsimilarity_explanations_mnist.html)\n\n## Citations\nIf you use alibi in your research, please consider citing it.\n\nBibTeX entry:\n\n```\n@article{JMLR:v22:21-0017,\n  author  = {Janis Klaise and Arnaud Van Looveren and Giovanni Vacanti and Alexandru Coca},\n  title   = {Alibi Explain: Algorithms for Explaining Machine Learning Models},\n  journal = {Journal of Machine Learning Research},\n  year    = {2021},\n  volume  = {22},\n  number  = {181},\n  pages   = {1-7},\n  url     = {http:\u002F\u002Fjmlr.org\u002Fpapers\u002Fv22\u002F21-0017.html}\n}\n```\n","\u003Cp align=\"center\">\n  
\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FSeldonIO_alibi_readme_3aa13c096b99.png\" alt=\"Alibi Logo\" width=\"50%\">\n\u003C\u002Fp>\n\n\u003C!--- BADGES: START --->\n\n[![构建状态](https:\u002F\u002Fgithub.com\u002FSeldonIO\u002Falibi-detect\u002Fworkflows\u002FCI\u002Fbadge.svg?branch=master)][#build-status]\n[![文档状态](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FSeldonIO_alibi_readme_13d664e1afd7.png)][#docs-package]\n[![codecov](https:\u002F\u002Fcodecov.io\u002Fgh\u002FSeldonIO\u002Falibi\u002Fbranch\u002Fmaster\u002Fgraph\u002Fbadge.svg)](https:\u002F\u002Fcodecov.io\u002Fgh\u002FSeldonIO\u002Falibi)\n[![PyPI - Python 版本](https:\u002F\u002Fimg.shields.io\u002Fpypi\u002Fpyversions\u002Falibi?logo=pypi&style=flat&color=blue)][#pypi-package]\n[![PyPI - 包版本](https:\u002F\u002Fimg.shields.io\u002Fpypi\u002Fv\u002Falibi?logo=pypi&style=flat&color=orange)][#pypi-package]\n[![Conda (仅渠道)](https:\u002F\u002Fimg.shields.io\u002Fconda\u002Fvn\u002Fconda-forge\u002Falibi?logo=anaconda&style=flat&color=orange)][#conda-forge-package]\n[![GitHub - 许可证](https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Flicense\u002FSeldonIO\u002Falibi?logo=github&style=flat&color=green)][#github-license]\n[![Slack 频道](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002Fchat-on%20slack-e51670.svg)][#slack-channel]\n\n\u003C!--- Hide platform for now as platform agnostic --->\n\u003C!--- [![Conda - Platform](https:\u002F\u002Fimg.shields.io\u002Fconda\u002Fpn\u002Fconda-forge\u002Falibi?logo=anaconda&style=flat)][#conda-forge-package]--->\n\n[#github-license]: https:\u002F\u002Fgithub.com\u002FSeldonIO\u002Falibi\u002Fblob\u002Fmaster\u002FLICENSE\n[#pypi-package]: https:\u002F\u002Fpypi.org\u002Fproject\u002Falibi\u002F\n[#conda-forge-package]: https:\u002F\u002Fanaconda.org\u002Fconda-forge\u002Falibi\n[#docs-package]: https:\u002F\u002Fdocs.seldon.io\u002Fprojects\u002Falibi\u002Fen\u002Fstable\u002F\n[#build-status]: 
https:\u002F\u002Fgithub.com\u002FSeldonIO\u002Falibi\u002Factions?query=workflow%3A%22CI%22\n[#slack-channel]: https:\u002F\u002Fjoin.slack.com\u002Ft\u002Fseldondev\u002Fshared_invite\u002Fzt-vejg6ttd-ksZiQs3O_HOtPQsen_labg\n\u003C!--- BADGES: END --->\n---\n\n[Alibi](https:\u002F\u002Fdocs.seldon.io\u002Fprojects\u002Falibi) 是一个源代码可用的 Python 库，旨在用于机器学习模型检查和解释。\n该库的重点是为分类和回归模型提供高质量的黑盒、白盒、局部和全局解释方法的实现。\n*  [文档](https:\u002F\u002Fdocs.seldon.io\u002Fprojects\u002Falibi\u002Fen\u002Fstable\u002F)\n\n如果您对离群点（异常值）检测、概念漂移或对抗实例检测感兴趣，请查看我们的姊妹项目 [alibi-detect](https:\u002F\u002Fgithub.com\u002FSeldonIO\u002Falibi-detect)。\n\n\u003Ctable>\n  \u003Ctr valign=\"top\">\n    \u003Ctd width=\"50%\" >\n        \u003Ca href=\"https:\u002F\u002Fdocs.seldon.io\u002Fprojects\u002Falibi\u002Fen\u002Fstable\u002Fexamples\u002Fanchor_image_imagenet.html\">\n            \u003Cbr>\n            \u003Cb>图像的 Anchor 解释\u003C\u002Fb>\n            \u003Cbr>\n            \u003Cbr>\n            \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FSeldonIO_alibi_readme_51906e61875d.png\">\n        \u003C\u002Fa>\n    \u003C\u002Ftd>\n    \u003Ctd width=\"50%\">\n        \u003Ca href=\"https:\u002F\u002Fdocs.seldon.io\u002Fprojects\u002Falibi\u002Fen\u002Fstable\u002Fexamples\u002Fintegrated_gradients_imdb.html\">\n            \u003Cbr>\n            \u003Cb>文本的集成梯度\u003C\u002Fb>\n            \u003Cbr>\n            \u003Cbr>\n            \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FSeldonIO_alibi_readme_393ace8b18df.png\">\n        \u003C\u002Fa>\n    \u003C\u002Ftd>\n  \u003C\u002Ftr>\n  \u003Ctr valign=\"top\">\n    \u003Ctd width=\"50%\">\n        \u003Ca href=\"https:\u002F\u002Fdocs.seldon.io\u002Fprojects\u002Falibi\u002Fen\u002Fstable\u002Fmethods\u002FCFProto.html\">\n            \u003Cbr>\n            \u003Cb>反事实示例\u003C\u002Fb>\n            \u003Cbr>\n            \u003Cbr>\n            \u003Cimg 
src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FSeldonIO_alibi_readme_a3b01a44f96d.png\">\n        \u003C\u002Fa>\n    \u003C\u002Ftd>\n    \u003Ctd width=\"50%\">\n        \u003Ca href=\"https:\u002F\u002Fdocs.seldon.io\u002Fprojects\u002Falibi\u002Fen\u002Fstable\u002Fmethods\u002FALE.html\">\n            \u003Cbr>\n            \u003Cb>累积局部效应\u003C\u002Fb>\n            \u003Cbr>\n            \u003Cbr>\n            \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FSeldonIO_alibi_readme_b853830a3307.png\">\n        \u003C\u002Fa>\n    \u003C\u002Ftd>\n  \u003C\u002Ftr>\n\u003C\u002Ftable>\n\n## 目录\n\n* [安装与使用](#installation-and-usage)\n* [支持的方法](#supported-methods)\n  * [模型解释](#model-explanations)\n  * [模型置信度](#model-confidence)\n  * [原型](#prototypes)\n  * [参考文献与示例](#references-and-examples)\n* [引用](#citations)\n\n## 安装与使用\nAlibi 可以从以下位置安装：\n\n- PyPI 或 GitHub 源码（使用 `pip`）\n- Anaconda（使用 `conda`\u002F`mamba`）\n\n### 使用 pip\n\n- Alibi 可以从 [PyPI](https:\u002F\u002Fpypi.org\u002Fproject\u002Falibi) 安装：\n\n  ```bash\n  pip install alibi\n  ```\n  \n- 或者，可以安装开发版本：\n  ```bash\n  pip install git+https:\u002F\u002Fgithub.com\u002FSeldonIO\u002Falibi.git \n  ```\n\n- 为了利用解释的分布式计算，请使用 `ray` 安装 `alibi`：\n  ```bash\n  pip install alibi[ray]\n  ```\n\n- 如需支持 SHAP，请按如下方式安装 `alibi`：\n  ```bash\n  pip install alibi[shap]\n  ```\n\n### 使用 conda \n\n要从 [conda-forge](https:\u002F\u002Fconda-forge.org\u002F) 安装，建议使用 [mamba](https:\u002F\u002Fmamba.readthedocs.io\u002Fen\u002Fstable\u002F)， \n它可以安装在 *base* conda 环境中，命令如下：\n\n```bash\nconda install mamba -n base -c conda-forge\n```\n\n- 对于标准 Alibi 安装：\n  ```bash\n  mamba install -c conda-forge alibi\n  ```\n\n- 对于分布式计算支持：\n  ```bash\n  mamba install -c conda-forge alibi ray\n  ```\n\n- 对于 SHAP 支持：\n  ```bash\n  mamba install -c conda-forge alibi shap\n  ```\n\n### 使用方式\nAlibi 解释 API 借鉴了 `scikit-learn`（一个流行的机器学习库），包含独立的初始化、\n拟合和解释步骤。我们将使用 
[AnchorTabular](https:\u002F\u002Fdocs.seldon.io\u002Fprojects\u002Falibi\u002Fen\u002Fstable\u002Fmethods\u002FAnchors.html)\n解释器来说明该 API：\n\n```python\nfrom alibi.explainers import AnchorTabular\n\n# initialize and fit explainer by passing a prediction function and any other required arguments\nexplainer = AnchorTabular(predict_fn, feature_names=feature_names, category_map=category_map)\nexplainer.fit(X_train)\n\n# explain an instance\nexplanation = explainer.explain(x)\n```\n\n返回的解释是一个 `Explanation`（解释对象），具有 `meta` 和 `data` 属性。`meta` 是一个包含解释器元数据和任何超参数的字典，`data` 是一个包含所有与计算出的解释相关内容的字典。例如，对于 Anchor 算法，可以通过 `explanation.data['anchor']`（或 `explanation.anchor`）访问解释。可用字段的详细信息因方法而异，因此我们鼓励读者熟悉\n[支持的方法类型](https:\u002F\u002Fdocs.seldon.io\u002Fprojects\u002Falibi\u002Fen\u002Fstable\u002Foverview\u002Falgorithms.html)。\n \n\n## 支持的方法\n下表总结了每种方法的潜在用例。\n\n### 模型解释\n| 方法                                                                                                       |    模型类型     |     解释类型      | 分类 | 回归 | 表格数据 | 文本 | 图像 | 类别特征 | 是否需要训练集 | 分布式 |\n|:-------------------------------------------------------------------------------------------------------------|:------------:|:---------------------:|:--------------:|:----------:|:-------:|:----:|:------:|:--------------------:|:------------------:|:-----------:|\n| [ALE](https:\u002F\u002Fdocs.seldon.io\u002Fprojects\u002Falibi\u002Fen\u002Fstable\u002Fmethods\u002FALE.html)                                      |      BB      |        全局         |       ✔        |     ✔      |    ✔    |      |        |                      |                    |             |\n| [部分依赖](https:\u002F\u002Fdocs.seldon.io\u002Fprojects\u002Falibi\u002Fen\u002Fstable\u002Fmethods\u002FPartialDependence.html)         |    BB WB     |        全局         |       ✔        |     ✔      |    ✔    |      |        |          ✔           |                    |             |\n| 
[部分依赖方差](https:\u002F\u002Fdocs.seldon.io\u002Fprojects\u002Falibi\u002Fen\u002Fstable\u002Fmethods\u002FPartialDependenceVariance.html)        |    BB WB     |        全局         |       ✔        |     ✔      |    ✔    |      |        |          ✔           |                    |             |\n| [排列重要性](https:\u002F\u002Fdocs.seldon.io\u002Fprojects\u002Falibi\u002Fen\u002Fstable\u002Fmethods\u002FPermutationImportance.html) |      BB      |        全局         |       ✔        |     ✔      |    ✔    |      |        |          ✔           |                    |             |\n| [锚点](https:\u002F\u002Fdocs.seldon.io\u002Fprojects\u002Falibi\u002Fen\u002Fstable\u002Fmethods\u002FAnchors.html)                              |      BB      |         局部         |       ✔        |            |    ✔    |  ✔   |   ✔    |          ✔           |    针对表格数据     |             |\n| [CEM](https:\u002F\u002Fdocs.seldon.io\u002Fprojects\u002Falibi\u002Fen\u002Fstable\u002Fmethods\u002FCEM.html)                                      | BB* TF\u002FKeras |         局部         |       ✔        |            |    ✔    |      |   ✔    |                      |      可选      |             |\n| [反事实](https:\u002F\u002Fdocs.seldon.io\u002Fprojects\u002Falibi\u002Fen\u002Fstable\u002Fmethods\u002FCF.html)                           | BB* TF\u002FKeras |         局部         |       ✔        |            |    ✔    |      |   ✔    |                      |         否         |             |\n| [原型反事实](https:\u002F\u002Fdocs.seldon.io\u002Fprojects\u002Falibi\u002Fen\u002Fstable\u002Fmethods\u002FCFProto.html)            | BB* TF\u002FKeras |         局部         |       ✔        |            |    ✔    |      |   ✔    |          ✔           |      可选      |             |\n| [基于强化学习的反事实](https:\u002F\u002Fdocs.seldon.io\u002Fprojects\u002Falibi\u002Fen\u002Fstable\u002Fmethods\u002FCFRL.html)                 |      BB      |         局部         |       ✔        |            |    ✔    |      |   ✔    |          
✔           |         ✔          |             |\n| [集成梯度](https:\u002F\u002Fdocs.seldon.io\u002Fprojects\u002Falibi\u002Fen\u002Fstable\u002Fmethods\u002FIntegratedGradients.html)     |   TF\u002FKeras   |         局部         |       ✔        |     ✔      |    ✔    |  ✔   |   ✔    |          ✔           |      可选      |             |\n| [Kernel SHAP](https:\u002F\u002Fdocs.seldon.io\u002Fprojects\u002Falibi\u002Fen\u002Fstable\u002Fmethods\u002FKernelSHAP.html)                       |      BB      | 局部 \u003Cbr>\u003C\u002Fbr>全局 |       ✔        |     ✔      |    ✔    |      |        |          ✔           |         ✔          |      ✔      |\n| [Tree SHAP](https:\u002F\u002Fdocs.seldon.io\u002Fprojects\u002Falibi\u002Fen\u002Fstable\u002Fmethods\u002FTreeSHAP.html)                           |      WB      | 局部 \u003Cbr>\u003C\u002Fbr>全局 |       ✔        |     ✔      |    ✔    |      |        |          ✔           |      可选      |             |\n| [相似性解释](https:\u002F\u002Fdocs.seldon.io\u002Fprojects\u002Falibi\u002Fen\u002Fstable\u002Fmethods\u002FSimilarity.html)           |      WB      |         局部         |       ✔        |     ✔      |    ✔    |  ✔   |   ✔    |          ✔           |         ✔          |             |\n\n### 模型置信度\n这些算法提供**实例特定 (instance-specific)** 的评分，用于衡量模型进行特定预测时的置信度。\n\n| 方法 | 模型类型 | 分类 | 回归 | 表格数据 | 文本 | 图像 | 类别特征 | 是否需要训练集 |\n|:---|:---|:---:|:---:|:---:|:---:|:---:|:---:|:---|\n|[信任分数](https:\u002F\u002Fdocs.seldon.io\u002Fprojects\u002Falibi\u002Fen\u002Fstable\u002Fmethods\u002FTrustScores.html)|BB|✔| |✔|✔(1)|✔(2)| |是|\n|[线性度量](https:\u002F\u002Fdocs.seldon.io\u002Fprojects\u002Falibi\u002Fen\u002Fstable\u002Fmethods\u002FLinearityMeasure.html)|BB|✔|✔|✔| |✔| |可选|\n\n关键术语：\n - **BB** - 黑盒 (Black-box)（仅需预测函数）\n - **BB\\*** - 黑盒但假设模型是可微分的 (differentiable)\n - **WB** - 需要白盒 (White-box) 模型访问权限。支持的模型可能存在限制\n - **TF\u002FKeras** - 通过 Keras API 的 TensorFlow 模型\n - **局部 (Local)** - 实例特定的解释，为何做出此预测？\n - **全局 (Global)** - 相对于一组实例解释模型\n - 
**(1)** - 取决于模型\n - **(2)** - 可能需要降维 (dimensionality reduction)\n\n### 原型\n这些算法提供数据集的**提炼 (distilled)** 视图，并帮助构建一个 1-KNN **可解释 (interpretable)** 分类器。\n\n| 方法 | 分类 | 回归 | 表格数据 | 文本 | 图像 | 类别特征 | 训练集标签 |\n|:-----|:-------------|:---------|:------|:---|:-----|:-------------------|:---------------|\n|[原型选择](https:\u002F\u002Fdocs.seldon.io\u002Fprojects\u002Falibi\u002Fen\u002Flatest\u002Fmethods\u002FProtoSelect.html)|✔| |✔|✔|✔|✔| 可选       |\n\n\n## 引用与示例\n- 累积局部效应 (ALE, [Apley and Zhu, 2016](https:\u002F\u002Farxiv.org\u002Fabs\u002F1612.08468))\n  - [文档](https:\u002F\u002Fdocs.seldon.io\u002Fprojects\u002Falibi\u002Fen\u002Fstable\u002Fmethods\u002FALE.html)\n  - 示例：\n    [加州住房数据集](https:\u002F\u002Fdocs.seldon.io\u002Fprojects\u002Falibi\u002Fen\u002Fstable\u002Fexamples\u002Fale_regression_california.html),\n    [鸢尾花数据集](https:\u002F\u002Fdocs.seldon.io\u002Fprojects\u002Falibi\u002Fen\u002Fstable\u002Fexamples\u002Fale_classification.html)\n\n- 部分依赖 ([J.H. Friedman, 2001](https:\u002F\u002Fprojecteuclid.org\u002Fjournals\u002Fannals-of-statistics\u002Fvolume-29\u002Fissue-5\u002FGreedy-function-approximation-A-gradient-boostingmachine\u002F10.1214\u002Faos\u002F1013203451.full))\n  - [文档](https:\u002F\u002Fdocs.seldon.io\u002Fprojects\u002Falibi\u002Fen\u002Fstable\u002Fmethods\u002FPartialDependence.html)\n  - 示例：\n    [自行车租赁](https:\u002F\u002Fdocs.seldon.io\u002Fprojects\u002Falibi\u002Fen\u002Fstable\u002Fexamples\u002Fpdp_regression_bike.html)\n\n- 部分依赖方差 ([Greenwell et al., 2018](https:\u002F\u002Farxiv.org\u002Fabs\u002F1805.04755))\n  - [文档](https:\u002F\u002Fdocs.seldon.io\u002Fprojects\u002Falibi\u002Fen\u002Fstable\u002Fmethods\u002FPartialDependenceVariance.html)\n  - 示例：\n    [弗里德曼回归问题](https:\u002F\u002Fdocs.seldon.io\u002Fprojects\u002Falibi\u002Fen\u002Fstable\u002Fexamples\u002Fpd_variance_regression_friedman.html)\n\n- 排列重要性 (Permutation Importance) ([Breiman, 
2001](https:\u002F\u002Flink.springer.com\u002Farticle\u002F10.1023\u002FA:1010933404324); [Fisher et al., 2018](https:\u002F\u002Farxiv.org\u002Fabs\u002F1801.01489))\n  - [文档](https:\u002F\u002Fdocs.seldon.io\u002Fprojects\u002Falibi\u002Fen\u002Fstable\u002Fmethods\u002FPermutationImportance.html)\n  - 示例：\n    [谁将离开？](https:\u002F\u002Fdocs.seldon.io\u002Fprojects\u002Falibi\u002Fen\u002Fstable\u002Fexamples\u002Fpermutation_importance_classification_leave.html)\n\n- 锚点解释 (Anchor explanations) ([Ribeiro et al., 2018](https:\u002F\u002Fhomes.cs.washington.edu\u002F~marcotcr\u002Faaai18.pdf))\n  - [文档](https:\u002F\u002Fdocs.seldon.io\u002Fprojects\u002Falibi\u002Fen\u002Fstable\u002Fmethods\u002FAnchors.html)\n  - 示例：\n    [收入预测](https:\u002F\u002Fdocs.seldon.io\u002Fprojects\u002Falibi\u002Fen\u002Fstable\u002Fexamples\u002Fanchor_tabular_adult.html),\n    [Iris 数据集](https:\u002F\u002Fdocs.seldon.io\u002Fprojects\u002Falibi\u002Fen\u002Fstable\u002Fexamples\u002Fanchor_tabular_iris.html),\n    [电影情感分类](https:\u002F\u002Fdocs.seldon.io\u002Fprojects\u002Falibi\u002Fen\u002Fstable\u002Fexamples\u002Fanchor_text_movie.html),\n    [ImageNet](https:\u002F\u002Fdocs.seldon.io\u002Fprojects\u002Falibi\u002Fen\u002Fstable\u002Fexamples\u002Fanchor_image_imagenet.html),\n    [Fashion MNIST](https:\u002F\u002Fdocs.seldon.io\u002Fprojects\u002Falibi\u002Fen\u002Fstable\u002Fexamples\u002Fanchor_image_fashion_mnist.html)\n\n- 对比解释方法 (CEM, [Dhurandhar et al., 2018](https:\u002F\u002Fpapers.nips.cc\u002Fpaper\u002F7340-explanations-based-on-the-missing-towards-contrastive-explanations-with-pertinent-negatives))\n  - [文档](https:\u002F\u002Fdocs.seldon.io\u002Fprojects\u002Falibi\u002Fen\u002Fstable\u002Fmethods\u002FCEM.html)\n  - 示例：[MNIST](https:\u002F\u002Fdocs.seldon.io\u002Fprojects\u002Falibi\u002Fen\u002Fstable\u002Fexamples\u002Fcem_mnist.html),\n    [Iris 
数据集](https:\u002F\u002Fdocs.seldon.io\u002Fprojects\u002Falibi\u002Fen\u002Fstable\u002Fexamples\u002Fcem_iris.html)\n\n- 反事实解释 (Counterfactual Explanations) (扩展自\n  [Wachter et al., 2017](https:\u002F\u002Farxiv.org\u002Fabs\u002F1711.00399))\n  - [文档](https:\u002F\u002Fdocs.seldon.io\u002Fprojects\u002Falibi\u002Fen\u002Fstable\u002Fmethods\u002FCF.html)\n  - 示例： \n    [MNIST](https:\u002F\u002Fdocs.seldon.io\u002Fprojects\u002Falibi\u002Fen\u002Fstable\u002Fexamples\u002Fcf_mnist.html)\n\n- 原型引导的反事实解释 (Counterfactual Explanations Guided by Prototypes) ([Van Looveren and Klaise, 2019](https:\u002F\u002Farxiv.org\u002Fabs\u002F1907.02584))\n  - [文档](https:\u002F\u002Fdocs.seldon.io\u002Fprojects\u002Falibi\u002Fen\u002Fstable\u002Fmethods\u002FCFProto.html)\n  - 示例：\n    [MNIST](https:\u002F\u002Fdocs.seldon.io\u002Fprojects\u002Falibi\u002Fen\u002Fstable\u002Fexamples\u002Fcfproto_mnist.html),\n    [加州住房数据集](https:\u002F\u002Fdocs.seldon.io\u002Fprojects\u002Falibi\u002Fen\u002Fstable\u002Fexamples\u002Fcfproto_housing.html),\n    [成人收入 (one-hot)](https:\u002F\u002Fdocs.seldon.io\u002Fprojects\u002Falibi\u002Fen\u002Fstable\u002Fexamples\u002Fcfproto_cat_adult_ohe.html),\n    [成人收入 (ordinal)](https:\u002F\u002Fdocs.seldon.io\u002Fprojects\u002Falibi\u002Fen\u002Fstable\u002Fexamples\u002Fcfproto_cat_adult_ord.html)\n\n- 基于强化学习 (RL) 的模型无关反事实解释 (Model-agnostic Counterfactual Explanations via RL) ([Samoilescu et al., 2021](https:\u002F\u002Farxiv.org\u002Fabs\u002F2106.02597))\n  - [文档](https:\u002F\u002Fdocs.seldon.io\u002Fprojects\u002Falibi\u002Fen\u002Fstable\u002Fmethods\u002FCFRL.html)\n  - 示例：\n    [MNIST](https:\u002F\u002Fdocs.seldon.io\u002Fprojects\u002Falibi\u002Fen\u002Fstable\u002Fexamples\u002Fcfrl_mnist.html),\n    [成人收入](https:\u002F\u002Fdocs.seldon.io\u002Fprojects\u002Falibi\u002Fen\u002Fstable\u002Fexamples\u002Fcfrl_adult.html)\n\n- 集成梯度 (Integrated Gradients) ([Sundararajan et al., 2017](https:\u002F\u002Farxiv.org\u002Fabs\u002F1703.01365))\n 
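 （补充说明：下面是一段与 alibi 的 `IntegratedGradients` 类无关的极简示意代码，仅用 NumPy 演示集成梯度的核心思想——沿从基线到输入的直线路径对梯度做黎曼和近似。其中的玩具模型 `f` 与解析梯度 `grad_f` 均为本文假设的示例，并非库中 API；实际使用时梯度由 TensorFlow 等框架自动计算。）

```python
import numpy as np

# 玩具模型：f(x) = x0^2 + 3*x1（可微，便于手工验证归因结果）
def f(x):
    return x[0] ** 2 + 3.0 * x[1]

def grad_f(x):
    # 解析梯度；实际应用中由自动微分框架给出
    return np.array([2.0 * x[0], 3.0])

def integrated_gradients(x, baseline, steps=200):
    # 沿基线 -> 输入的直线路径，用中点法黎曼和近似路径积分
    alphas = (np.arange(steps) + 0.5) / steps
    total = np.zeros_like(x)
    for a in alphas:
        total += grad_f(baseline + a * (x - baseline))
    return (x - baseline) * total / steps

x = np.array([1.0, 2.0])
baseline = np.zeros(2)
attr = integrated_gradients(x, baseline)
print(attr, attr.sum())  # attr ≈ [1. 6.]，各特征归因之和 ≈ 7.0
```

完备性公理保证各特征归因之和等于 f(x) 与 f(baseline) 之差，可据此检验数值近似的精度。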
 - [文档](https:\u002F\u002Fdocs.seldon.io\u002Fprojects\u002Falibi\u002Fen\u002Fstable\u002Fmethods\u002FIntegratedGradients.html),\n  - 示例：\n    [MNIST 示例](https:\u002F\u002Fdocs.seldon.io\u002Fprojects\u002Falibi\u002Fen\u002Fstable\u002Fexamples\u002Fintegrated_gradients_mnist.html),\n    [Imagenet 示例](https:\u002F\u002Fdocs.seldon.io\u002Fprojects\u002Falibi\u002Fen\u002Fstable\u002Fexamples\u002Fintegrated_gradients_imagenet.html),\n    [IMDB 示例](https:\u002F\u002Fdocs.seldon.io\u002Fprojects\u002Falibi\u002Fen\u002Fstable\u002Fexamples\u002Fintegrated_gradients_imdb.html).\n\n- 核 SHAP 可加性解释 (Kernel Shapley Additive Explanations) ([Lundberg et al., 2017](https:\u002F\u002Fpapers.nips.cc\u002Fpaper\u002F7062-a-unified-approach-to-interpreting-model-predictions))\n  - [文档](https:\u002F\u002Fdocs.seldon.io\u002Fprojects\u002Falibi\u002Fen\u002Fstable\u002Fmethods\u002FKernelSHAP.html)\n  - 示例：\n    [连续数据的 SVM](https:\u002F\u002Fdocs.seldon.io\u002Fprojects\u002Falibi\u002Fen\u002Fstable\u002Fexamples\u002Fkernel_shap_wine_intro.html),\n    [连续数据的多项逻辑回归](https:\u002F\u002Fdocs.seldon.io\u002Fprojects\u002Falibi\u002Fen\u002Fstable\u002Fexamples\u002Fkernel_shap_wine_lr.html),\n    [处理分类变量](https:\u002F\u002Fdocs.seldon.io\u002Fprojects\u002Falibi\u002Fen\u002Fstable\u002Fexamples\u002Fkernel_shap_adult_lr.html)\n    \n- 树 SHAP 可加性解释 (Tree Shapley Additive Explanations) ([Lundberg et al., 2020](https:\u002F\u002Fwww.nature.com\u002Farticles\u002Fs42256-019-0138-9))\n  - [文档](https:\u002F\u002Fdocs.seldon.io\u002Fprojects\u002Falibi\u002Fen\u002Fstable\u002Fmethods\u002FTreeSHAP.html)\n  - 示例：\n    [干预式 (成人收入，xgboost)](https:\u002F\u002Fdocs.seldon.io\u002Fprojects\u002Falibi\u002Fen\u002Fstable\u002Fexamples\u002Finterventional_tree_shap_adult_xgb.html),\n    [路径依赖式 (成人收入，xgboost)](https:\u002F\u002Fdocs.seldon.io\u002Fprojects\u002Falibi\u002Fen\u002Fstable\u002Fexamples\u002Fpath_dependent_tree_shap_adult_xgb.html)\n    \n- 信任分数 (Trust Scores) ([Jiang et al., 
2018](https:\u002F\u002Farxiv.org\u002Fabs\u002F1805.11783))\n  - [文档](https:\u002F\u002Fdocs.seldon.io\u002Fprojects\u002Falibi\u002Fen\u002Fstable\u002Fmethods\u002FTrustScores.html)\n  - 示例：\n    [MNIST](https:\u002F\u002Fdocs.seldon.io\u002Fprojects\u002Falibi\u002Fen\u002Fstable\u002Fexamples\u002Ftrustscore_mnist.html),\n    [Iris 数据集](https:\u002F\u002Fdocs.seldon.io\u002Fprojects\u002Falibi\u002Fen\u002Fstable\u002Fexamples\u002Ftrustscore_iris.html)\n\n- 线性度量 (Linearity Measure)\n  - [文档](https:\u002F\u002Fdocs.seldon.io\u002Fprojects\u002Falibi\u002Fen\u002Fstable\u002Fmethods\u002FLinearityMeasure.html)\n  - 示例：\n    [Iris 数据集](https:\u002F\u002Fdocs.seldon.io\u002Fprojects\u002Falibi\u002Fen\u002Fstable\u002Fexamples\u002Flinearity_measure_iris.html),\n    [Fashion MNIST](https:\u002F\u002Fdocs.seldon.io\u002Fprojects\u002Falibi\u002Fen\u002Fstable\u002Fexamples\u002Flinearity_measure_fashion_mnist.html)\n\n- ProtoSelect\n  - [文档](https:\u002F\u002Fdocs.seldon.io\u002Fprojects\u002Falibi\u002Fen\u002Flatest\u002Fmethods\u002FProtoSelect.html)\n  - 示例：\n    [成人人口普查 & CIFAR10](https:\u002F\u002Fdocs.seldon.io\u002Fprojects\u002Falibi\u002Fen\u002Flatest\u002Fexamples\u002Fprotoselect_adult_cifar10.html)\n\n- 相似性解释 (Similarity explanations)\n  - [文档](https:\u002F\u002Fdocs.seldon.io\u002Fprojects\u002Falibi\u002Fen\u002Fstable\u002Fmethods\u002FSimilarity.html)\n  - 示例：\n    [20 个新闻组数据集](https:\u002F\u002Fdocs.seldon.io\u002Fprojects\u002Falibi\u002Fen\u002Fstable\u002Fexamples\u002Fsimilarity_explanations_20ng.html),\n    [ImageNet 数据集](https:\u002F\u002Fdocs.seldon.io\u002Fprojects\u002Falibi\u002Fen\u002Fstable\u002Fexamples\u002Fsimilarity_explanations_imagenet.html),\n    [MNIST 数据集](https:\u002F\u002Fdocs.seldon.io\u002Fprojects\u002Falibi\u002Fen\u002Fstable\u002Fexamples\u002Fsimilarity_explanations_mnist.html)\n\n## 引用\n如果您在研究中使用 Alibi，请考虑引用它。\n\nBibTeX 条目：\n\n```\n@article{JMLR:v22:21-0017,\n  author  = {Janis Klaise and Arnaud Van Looveren 
and Giovanni Vacanti and Alexandru Coca},\n  title   = {Alibi Explain: Algorithms for Explaining Machine Learning Models},\n  journal = {Journal of Machine Learning Research},\n  year    = {2021},\n  volume  = {22},\n  number  = {181},\n  pages   = {1-7},\n  url     = {http:\u002F\u002Fjmlr.org\u002Fpapers\u002Fv22\u002F21-0017.html}\n}\n```","# Alibi 快速上手指南\n\n**Alibi** 是一个面向机器学习模型的检查与解释（XAI）开源 Python 库。它提供了高质量的黑盒、白盒、局部和全局解释方法，适用于分类和回归模型。\n\n## 1. 环境准备\n\n*   **操作系统**：跨平台支持（Linux, macOS, Windows）。\n*   **Python 版本**：请确保已安装兼容的 Python 版本（参考 PyPI 徽章支持的版本）。\n*   **依赖管理**：推荐使用 `pip` 或 `conda`\u002F`mamba`。\n\n## 2. 安装步骤\n\n### 使用 pip 安装\n\n安装标准版本：\n```bash\npip install alibi\n```\n\n安装开发版本：\n```bash\npip install git+https:\u002F\u002Fgithub.com\u002FSeldonIO\u002Falibi.git \n```\n\n**可选扩展依赖：**\n\n*   启用分布式计算支持（需安装 Ray）：\n    ```bash\n    pip install alibi[ray]\n    ```\n*   启用 SHAP 支持：\n    ```bash\n    pip install alibi[shap]\n    ```\n\n### 使用 conda 安装\n\n建议先安装 `mamba` 以加速依赖解析：\n```bash\nconda install mamba -n base -c conda-forge\n```\n\n安装 Alibi：\n```bash\nmamba install -c conda-forge alibi\n```\n\n如需分布式计算或 SHAP 支持，可添加相应包：\n```bash\nmamba install -c conda-forge alibi ray\nmamba install -c conda-forge alibi shap\n```\n\n> **注意**：如果您需要异常检测、概念漂移或对抗实例检测功能，请参考姊妹项目 [alibi-detect](https:\u002F\u002Fgithub.com\u002FSeldonIO\u002Falibi-detect)。\n\n## 3. 基本使用\n\nAlibi 的解释器 API 设计灵感来源于 `scikit-learn`，主要包含三个步骤：**初始化 (Initialize)**、**拟合 (Fit)** 和 **解释 (Explain)**。\n\n以下以 `AnchorTabular` 解释器为例：\n\n```python\nfrom alibi.explainers import AnchorTabular\n\n# 1. 初始化并拟合解释器\n# 传入预测函数及其他必要参数（如特征名称、类别映射等）\nexplainer = AnchorTabular(predict_fn, feature_names=feature_names, category_map=category_map)\nexplainer.fit(X_train)\n\n# 2. 
解释单个样本\nexplanation = explainer.explain(x)\n```\n\n### 结果说明\n\n返回的 `explanation` 是一个 `Explanation` 对象，包含以下主要属性：\n\n*   **`meta`**：字典，包含解释器的元数据及超参数。\n*   **`data`**：字典，包含计算出的解释相关数据。\n\n不同方法的字段略有差异。例如，对于 Anchor 算法，可通过以下方式访问解释内容：\n```python\nanchor_value = explanation.data['anchor']  # 或 explanation.anchor\n```\n\n更多详细的方法类型和用法，请参阅官方文档。","某金融科技公司的数据科学团队正在部署一款高风险信贷审批模型，业务方对部分优质客户的自动拒贷决定提出强烈质疑，急需透明化解释。\n\n### 没有 alibi 时\n- 模型决策如同黑盒，无法直观量化各特征对最终结果的贡献权重\n- 排查特定样本的错误预测需手动编写脚本分析，排查效率极其低下\n- 面对金融监管审查，难以提供具体、可解释的拒贷依据与证据\n- 模型上线后若出现性能波动，无法快速定位是数据分布漂移还是算法问题\n\n### 使用 alibi 后\n- 利用内置的 SHAP 和 LIME 算法，一键生成可视化的特征归因报告\n- 通过反事实示例功能，模拟调整关键指标（如收入）后的预测变化路径\n- 自动生成符合合规要求的解释文档，大幅降低与法务及监管部门的沟通成本\n- 结合全局累积效应分析，精准识别模型在不同客群中的潜在偏差与公平性问题\n\nalibi 将复杂的深度学习决策转化为可理解的业务逻辑，显著提升了模型的可信度与落地效率。","https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FSeldonIO_alibi_3aa13c09.png","SeldonIO","Seldon","https:\u002F\u002Foss.gittoolsai.com\u002Favatars\u002FSeldonIO_17a4841a.png","Machine Learning Deployment for Kubernetes",null,"hello@seldon.io","https:\u002F\u002Fseldon.io","https:\u002F\u002Fgithub.com\u002FSeldonIO",[84,88],{"name":85,"color":86,"percentage":87},"Python","#3572A5",99.9,{"name":89,"color":90,"percentage":91},"Makefile","#427819",0.1,2622,264,"2026-04-02T23:24:32","NOASSERTION",1,"Linux, macOS, Windows","未说明",{"notes":100,"python":98,"dependencies":101},"支持通过 pip 或 conda 安装；可通过 extras 安装 ray 以支持分布式计算，安装 shap 以支持 SHAP 解释；部分算法（如 Integrated Gradients、CEM）需 TensorFlow\u002FKeras 模型支持；API 设计参考 scikit-learn 风格。",[102,103,104,105,106,107],"numpy","scikit-learn","tensorflow","keras","shap","ray",[13],[110,111,112,113,114],"machine-learning","explanations","interpretability","counterfactual","xai","2026-03-27T02:49:30.150509","2026-04-06T06:44:28.265539",[118,123,127,132,137,141],{"id":119,"question_zh":120,"answer_zh":121,"source_url":122},3085,"如何使用 Conda 安装 Alibi 库？","Alibi 现已支持通过 conda-forge 频道安装。请使用以下命令：\nconda install -c conda-forge 
alibi\n安装完成后即可使用。","https:\u002F\u002Fgithub.com\u002FSeldonIO\u002Falibi\u002Fissues\u002F579",{"id":124,"question_zh":125,"answer_zh":126,"source_url":122},3086,"如何检查 Conda 中 Alibi 的版本和可用性？","可以使用 conda search 命令在 conda-forge 频道中搜索并查看版本信息。示例命令如下：\n$ conda search -c conda-forge alibi\n这将显示可用的版本号和构建信息。",{"id":128,"question_zh":129,"answer_zh":130,"source_url":131},3087,"AnchorTabular 在处理分类特征时报错怎么办？","建议对分类特征进行标签编码（label encoding）。如果直接传递未编码的分类数据或预处理后的 pandas dataframe，可能会导致错误。当前库中的 gen_category_map 功能尚未完全支持，因此先进行标签编码是一个有效的解决方案。","https:\u002F\u002Fgithub.com\u002FSeldonIO\u002Falibi\u002Fissues\u002F221",{"id":133,"question_zh":134,"answer_zh":135,"source_url":136},3088,"AnchorText 出现 IndexError (dimension mismatch) 如何解决？","主要问题通常在于预测函数的定义方式。Anchor 内部期望预测函数能够处理批量实例（batches of instances），而不是单个实例。请确保你的预测函数接收批量输入并返回 np.ndarray 类型的输出。","https:\u002F\u002Fgithub.com\u002FSeldonIO\u002Falibi\u002Fissues\u002F240",{"id":138,"question_zh":139,"answer_zh":140,"source_url":136},3089,"在 AnchorText 中使用 FastText 模型时有什么注意事项？","FastText 模型的预测标签可能会按概率降序排列，导致最可能的类始终位于输出开头。如果直接使用 argmax 操作，可能会总是预测标签 0。请务必查阅 FastText 官方文档以确认其输出顺序，并在解释器中正确处理。",{"id":142,"question_zh":143,"answer_zh":144,"source_url":145},3090,"使用 TensorFlow 模型配合 Alibi 时如何确保结果可复现？","如果在每次调用 explain 之前都重新定义相同的模型结构，权重会随机初始化，导致结果不可复现。建议遵循 TensorFlow 的可复现性指南（如 NVIDIA framework-determinism）来固定权重初始化，以确保实验的一致性。","https:\u002F\u002Fgithub.com\u002FSeldonIO\u002Falibi\u002Fissues\u002F627",[147,152,157,162,167,172,177,182,187,192,197,202,207,212,217,222,227,232,237,242],{"id":148,"version":149,"summary_zh":150,"released_at":151},102617,"v0.9.3","## [v0.9.3](https:\u002F\u002Fgithub.com\u002FSeldonIO\u002Falibi\u002Ftree\u002Fv0.9.3) (2023-06-21)\r\n[Full Changelog](https:\u002F\u002Fgithub.com\u002FSeldonIO\u002Falibi\u002Fcompare\u002Fv0.9.2...v0.9.3)\r\n\r\nThis is a patch release to officially enable support for Python 3.11.\r\nThis is the last release with official support for Python 3.7.\r\n\r\n### Development\r\n- Test 
library on Python 3.11 ([#932](https:\u002F\u002Fgithub.com\u002FSeldonIO\u002Falibi\u002Fpull\u002F932)).\r\n- Separate code quality into its own Github Action and only run against the main development version of Python, currently Python 3.10 ([#925](https:\u002F\u002Fgithub.com\u002FSeldonIO\u002Falibi\u002Fpull\u002F925)).\r\n- Check and remove stale `mypy` ignore commands ([#874](https:\u002F\u002Fgithub.com\u002FSeldonIO\u002Falibi\u002Fpull\u002F874)).\r\n- Bump `torch` version to `2.x` ([#893](https:\u002F\u002Fgithub.com\u002FSeldonIO\u002Falibi\u002Fpull\u002F893)).\r\n- Bump `scikit-image` version to `0.21.x` ([#928](https:\u002F\u002Fgithub.com\u002FSeldonIO\u002Falibi\u002Fpull\u002F928)).\r\n- Bump `numba` version to `0.57.x` ([#922](https:\u002F\u002Fgithub.com\u002FSeldonIO\u002Falibi\u002Fpull\u002F922)).\r\n- Bump `sphinx` version to `7.x` ([#921](https:\u002F\u002Fgithub.com\u002FSeldonIO\u002Falibi\u002Fpull\u002F921)).\r\n","2023-06-21T13:56:40",{"id":153,"version":154,"summary_zh":155,"released_at":156},102618,"v0.9.2","## [v0.9.2](https:\u002F\u002Fgithub.com\u002FSeldonIO\u002Falibi\u002Ftree\u002Fv0.9.2) (2023-04-28)\r\n[Full Changelog](https:\u002F\u002Fgithub.com\u002FSeldonIO\u002Falibi\u002Fcompare\u002Fv0.9.1...v0.9.2)\r\n\r\nThis is a patch release fixing several bugs, updating dependencies and adding some small extensions.\r\n\r\n### Added\r\n- Allow `IntegratedGradients` layer selection to be specified with a custom callable ([#894](https:\u002F\u002Fgithub.com\u002FSeldonIO\u002Falibi\u002Fpull\u002F894)).\r\n- Implement `reset_predictor` method for `PartialDependence` explainer ([#897](https:\u002F\u002Fgithub.com\u002FSeldonIO\u002Falibi\u002Fpull\u002F897)).\r\n- Extend `GradientSimilarity` explainer to allow models of any input type ([#912](https:\u002F\u002Fgithub.com\u002FSeldonIO\u002Falibi\u002Fpull\u002F912)).\r\n\r\n### Fixed\r\n - `AnchorText` auto-regressive language model sampler updating `input_ids` tensor 
([#895](https:\u002F\u002Fgithub.com\u002FSeldonIO\u002Falibi\u002Fpull\u002F895)).\r\n - `AnchorTabular` length discrepancy between `feature` and `names` fields ([#902](https:\u002F\u002Fgithub.com\u002FSeldonIO\u002Falibi\u002Fpull\u002F902)).\r\n - `AnchorBaseBeam` unintended coverage update during the multi-armed bandit run ([#919](https:\u002F\u002Fgithub.com\u002FSeldonIO\u002Falibi\u002Fpull\u002F919), [#914](https:\u002F\u002Fgithub.com\u002FSeldonIO\u002Falibi\u002Fissues\u002F914)).\r\n\r\n### Changed\r\n - Maximum supported version of `tensorflow` bumped to `2.12.x` ([#896](https:\u002F\u002Fgithub.com\u002FSeldonIO\u002Falibi\u002Fpull\u002F896)).\r\n - Supported version of `pandas` bumped to `>1.0.0, \u003C3.0.0` ([#899](https:\u002F\u002Fgithub.com\u002FSeldonIO\u002Falibi\u002Fpull\u002F899)).\r\n - Update notebooks to account for `pandas` version `2.x` deprecations ([#908](https:\u002F\u002Fgithub.com\u002FSeldonIO\u002Falibi\u002Fpull\u002F908), [#910](https:\u002F\u002Fgithub.com\u002FSeldonIO\u002Falibi\u002Fpull\u002F910)).\r\n - Maximum supported version of `scikit-image` bumped to `0.20.x` ([#882](https:\u002F\u002Fgithub.com\u002FSeldonIO\u002Falibi\u002Fpull\u002F882)).\r\n - Maximum supported version of `attrs` bumped to `23.x` ([#905](https:\u002F\u002Fgithub.com\u002FSeldonIO\u002Falibi\u002Fpull\u002F905)).\r\n\r\n### Development\r\n - Migrate `codecov` to use Github Actions and don't fail CI on coverage report upload failure due to rate limiting ([#901](https:\u002F\u002Fgithub.com\u002FSeldonIO\u002Falibi\u002Fpull\u002F901), [#913](https:\u002F\u002Fgithub.com\u002FSeldonIO\u002Falibi\u002Fpull\u002F913)).\r\n - Bumpy `mypy` version to `>=1.0, \u003C2.0` ([#886](https:\u002F\u002Fgithub.com\u002FSeldonIO\u002Falibi\u002Fpull\u002F886)).\r\n - Bump `sphinx` version to `6.x` ([#852](https:\u002F\u002Fgithub.com\u002FSeldonIO\u002Falibi\u002Fpull\u002F852)).\r\n - Bump `sphinx-design` version to `0.4.1` 
([#904](https:\u002F\u002Fgithub.com\u002FSeldonIO\u002Falibi\u002Fpull\u002F904)).\r\n - Bump `nbsphinx` version to `0.9.x` ([#889](https:\u002F\u002Fgithub.com\u002FSeldonIO\u002Falibi\u002Fpull\u002F889)).\r\n - Bump `myst-parser` version to `>=1.0, \u003C2.0` ([#887](https:\u002F\u002Fgithub.com\u002FSeldonIO\u002Falibi\u002Fpull\u002F887)).\r\n - Bump `twine` version to `4.x` ([#620](https:\u002F\u002Fgithub.com\u002FSeldonIO\u002Falibi\u002Fpull\u002F620)).\r\n - Bump `pre-commit` version to `3.x` and update the config ([#866](https:\u002F\u002Fgithub.com\u002FSeldonIO\u002Falibi\u002Fpull\u002F866)).","2023-04-28T14:12:49",{"id":158,"version":159,"summary_zh":160,"released_at":161},102619,"v0.9.1","## [v0.9.1](https:\u002F\u002Fgithub.com\u002FSeldonIO\u002Falibi\u002Ftree\u002Fv0.9.1) (2023-03-13)\r\n[Full Changelog](https:\u002F\u002Fgithub.com\u002FSeldonIO\u002Falibi\u002Fcompare\u002Fv0.9.0...v0.9.1)\r\n\r\nThis is a patch release fixing several bugs.\r\n\r\n### Fixed\r\n - Replace deprecated usage of `np.object` in the codebase which was causing errors with `numpy>=1.24` ([#872](https:\u002F\u002Fgithub.com\u002FSeldonIO\u002Falibi\u002Fpull\u002F872), [#890](https:\u002F\u002Fgithub.com\u002FSeldonIO\u002Falibi\u002Fpull\u002F890)).\r\n - Fix a bug\u002Ftypo in `cfrl_base.py` of the `tensorflow` backend ([#891](https:\u002F\u002Fgithub.com\u002FSeldonIO\u002Falibi\u002Fpull\u002F891)).\r\n - Correctly handle calls to `.reset_predictor` for `KernelShap` and `TreeShap` explainers ([#880](https:\u002F\u002Fgithub.com\u002FSeldonIO\u002Falibi\u002Fpull\u002F880)).\r\n - Update saving of `KernelShap` to avoid saving the internal `_explainer` object ([#881](https:\u002F\u002Fgithub.com\u002FSeldonIO\u002Falibi\u002Fpull\u002F881)).\r\n\r\n### Development\r\n - Show text execution times ([#849](https:\u002F\u002Fgithub.com\u002FSeldonIO\u002Falibi\u002Fpull\u002F849)).\r\n - Replace Python 2 style type comments with type annotations 
([#870](https:\u002F\u002Fgithub.com\u002FSeldonIO\u002Falibi\u002Fpull\u002F870)).\r\n - Bump the `mypy` version to `~1.0` ([#871](https:\u002F\u002Fgithub.com\u002FSeldonIO\u002Falibi\u002Fpull\u002F871)).\r\n - Bump the default development version of Python to 3.10 ([#876](https:\u002F\u002Fgithub.com\u002FSeldonIO\u002Falibi\u002Fpull\u002F876), [#877](https:\u002F\u002Fgithub.com\u002FSeldonIO\u002Falibi\u002Fpull\u002F877)).","2023-03-13T15:40:56",{"id":163,"version":164,"summary_zh":165,"released_at":166},102615,"v0.9.5","## [v0.9.5](https:\u002F\u002Fgithub.com\u002FSeldonIO\u002Falibi\u002Ftree\u002Fv0.9.5) (2024-01-22)\r\n[Full Changelog](https:\u002F\u002Fgithub.com\u002FSeldonIO\u002Falibi\u002Fcompare\u002Fv0.9.4...v0.9.5)\r\n\r\nThis is a patch release fixing several bugs, updating dependencies and a change of license.\r\n\r\n### Fixed\r\n- Fix torch version bound in setup.py extras_require ([#950](https:\u002F\u002Fgithub.com\u002FSeldonIO\u002Falibi\u002Fpull\u002F950))\r\n- Fix DistributedExplainer import errors that arise when ray absent([#951](https:\u002F\u002Fgithub.com\u002FSeldonIO\u002Falibi\u002Fpull\u002F951))\r\n- Fix memory limit issue in tox ci jobs ([#956](https:\u002F\u002Fgithub.com\u002FSeldonIO\u002Falibi\u002Fpull\u002F956))\r\n- Fix E721 linting errors ([#958](https:\u002F\u002Fgithub.com\u002FSeldonIO\u002Falibi\u002Fpull\u002F958))\r\n- Fix plot_pd function to work with matplotlib 3.8.0 changes ([#965](https:\u002F\u002Fgithub.com\u002FSeldonIO\u002Falibi\u002Fpull\u002F965))\r\n- Fix typechecking with matplotlib 3.8.0 ([#969](https:\u002F\u002Fgithub.com\u002FSeldonIO\u002Falibi\u002Fpull\u002F969))\r\n- fix typechecking for matplotlib 3.8.1 ([#981](https:\u002F\u002Fgithub.com\u002FSeldonIO\u002Falibi\u002Fpull\u002F981))\r\n- Fix typechecking for mypy 1.7.0 ([#983](https:\u002F\u002Fgithub.com\u002FSeldonIO\u002Falibi\u002Fpull\u002F983))\r\n- Fix test models to output logits and work with default loss functions 
([#975](https:\u002F\u002Fgithub.com\u002FSeldonIO\u002Falibi\u002Fpull\u002F975))\r\n- Fix dtype type in helper method for AnchorText samplers ([#980](https:\u002F\u002Fgithub.com\u002FSeldonIO\u002Falibi\u002Fpull\u002F980))\r\n\r\n### Changed\r\n- Alibi License change from Apache to Business Source License 1.1 ([#995](https:\u002F\u002Fgithub.com\u002FSeldonIO\u002Falibi\u002Fpull\u002F995))\r\n\r\n### Development\r\n- Update myst-parser requirement upper bound from 2.0 to 3.0 ([#931](https:\u002F\u002Fgithub.com\u002FSeldonIO\u002Falibi\u002Fpull\u002F931))\r\n- Update pillow requirement upper bound from 10.0 to 11.0 ([#939](https:\u002F\u002Fgithub.com\u002FSeldonIO\u002Falibi\u002Fpull\u002F939))\r\n- Add notebooks tests for python 3.11 ([#948](https:\u002F\u002Fgithub.com\u002FSeldonIO\u002Falibi\u002Fpull\u002F948)) & ([#949](https:\u002F\u002Fgithub.com\u002FSeldonIO\u002Falibi\u002Fpull\u002F949))\r\n- Update sphinxcontrib-apidoc requirement upper bound from 0.4.0 to 0.5.0 ([#962](https:\u002F\u002Fgithub.com\u002FSeldonIO\u002Falibi\u002Fpull\u002F962))\r\n- Update numba requirement upper bound from 0.58.0 to 0.59.0 ([#967](https:\u002F\u002Fgithub.com\u002FSeldonIO\u002Falibi\u002Fpull\u002F967))\r\n- Update shap requirement upper bound from 0.43.0 to 0.44.0 ([#974](https:\u002F\u002Fgithub.com\u002FSeldonIO\u002Falibi\u002Fpull\u002F974))\r\n- Update tensorflow requirement upper bound from 2.14.0 to 2.15.0 ([#968](https:\u002F\u002Fgithub.com\u002FSeldonIO\u002Falibi\u002Fpull\u002F968))\r\n- Update Alibi_Explain_Logo_rgb image with white stroked letters ([#979](https:\u002F\u002Fgithub.com\u002FSeldonIO\u002Falibi\u002Fpull\u002F979))\r\n- Remove macos from ci ([#995](https:\u002F\u002Fgithub.com\u002FSeldonIO\u002Falibi\u002Fpull\u002F995))\r\n- Add security scans to CI 
([#995](https:\u002F\u002Fgithub.com\u002FSeldonIO\u002Falibi\u002Fpull\u002F995))\r\n","2024-01-22T16:23:03",{"id":168,"version":169,"summary_zh":170,"released_at":171},102616,"v0.9.4","## [v0.9.4](https:\u002F\u002Fgithub.com\u002FSeldonIO\u002Falibi\u002Ftree\u002Fv0.9.4) (2023-07-07)\r\n[Full Changelog](https:\u002F\u002Fgithub.com\u002FSeldonIO\u002Falibi\u002Fcompare\u002Fv0.9.3...v0.9.4)\r\n\r\nThis is a patch release to support `numpy >= 1.24` and `scikit-learn >= 1.3.0`, and drop official support for Python 3.7.\r\n\r\n### Development\r\n- Drop official support for Python 3.7 ([#942](https:\u002F\u002Fgithub.com\u002FSeldonIO\u002Falibi\u002Fpull\u002F942)).\r\n- Maximum supported version of `shap` bumped to `0.42.x`, to give `numpy >= 1.24` support ([#944](https:\u002F\u002Fgithub.com\u002FSeldonIO\u002Falibi\u002Fpull\u002F944)).\r\n- Fix handling of `scikit-learn` `partial_dependence` kwarg's in tests, for compatibility with `scikit-learn 1.3.0` ([#940](https:\u002F\u002Fgithub.com\u002FSeldonIO\u002Falibi-detect\u002Fpull\u002F940)).\r\n","2023-07-07T14:10:27",{"id":173,"version":174,"summary_zh":175,"released_at":176},102614,"v0.9.6","## [v0.9.6](https:\u002F\u002Fgithub.com\u002FSeldonIO\u002Falibi\u002Ftree\u002Fv0.9.6) (2024-04-18)\r\n[Full Changelog](https:\u002F\u002Fgithub.com\u002FSeldonIO\u002Falibi\u002Fcompare\u002Fv0.9.5...v0.9.6)\r\n\r\nThis is a minor release.\r\n\r\n### Changed\r\n- Removed explicit dependency on Pydantic ([#1002](https:\u002F\u002Fgithub.com\u002FSeldonIO\u002Falibi\u002Fpull\u002F1002))\r\n\r\n### Development\r\n- Bump tj-actions\u002Fchanged-files from 1.1.2 to 41.0.0 in \u002F.github\u002Fworkflows ([#992](https:\u002F\u002Fgithub.com\u002FSeldonIO\u002Falibi\u002Fpull\u002F992))\r\n- Update README.md ([#996](https:\u002F\u002Fgithub.com\u002FSeldonIO\u002Falibi\u002Fpull\u002F996))\r\n- Update numba requirement from !=0.54.0,\u003C0.59.0,>=0.50.0 to >=0.50.0,!=0.54.0,\u003C0.60.0 
([#999](https:\u002F\u002Fgithub.com\u002FSeldonIO\u002Falibi\u002Fpull\u002F999))\r\n- Update twine requirement from \u003C5.0.0,>3.2.0 to >3.2.0,\u003C6.0.0 ([#1001](https:\u002F\u002Fgithub.com\u002FSeldonIO\u002Falibi\u002Fpull\u002F1001))\r\n- build(ci): Migrate actions to later Node version ([#1003](https:\u002F\u002Fgithub.com\u002FSeldonIO\u002Falibi\u002Fpull\u002F1003))","2024-04-18T15:30:25",{"id":178,"version":179,"summary_zh":180,"released_at":181},102620,"v0.9.0","## [v0.9.0](https:\u002F\u002Fgithub.com\u002FSeldonIO\u002Falibi\u002Ftree\u002Fv0.9.0) (2023-01-11)\r\n[Full Changelog](https:\u002F\u002Fgithub.com\u002FSeldonIO\u002Falibi\u002Fcompare\u002Fv0.8.0...v0.9.0)\r\n\r\n### Added\r\n- **New feature** `PermutationImportance` explainer implementing the permutation feature importance global explanations. Also included is a `plot_permutation_importance` utility function for flexible plotting of the resulting feature importance scores ([docs](https:\u002F\u002Fdocs.seldon.io\u002Fprojects\u002Falibi\u002Fen\u002Fstable\u002Fmethods\u002FPermutationImportance.html),  [#798](https:\u002F\u002Fgithub.com\u002FSeldonIO\u002Falibi\u002Fpull\u002F798)). \r\n- **New feature** `PartialDependenceVariance` explainer implementing partial dependence variance global explanations. Also included is a `plot_pd_variance` utility function for flexible plotting of the resulting PD variance plots ([docs](https:\u002F\u002Fdocs.seldon.io\u002Fprojects\u002Falibi\u002Fen\u002Fstable\u002Fmethods\u002FPartialDependenceVariance.html), [#758](https:\u002F\u002Fgithub.com\u002FSeldonIO\u002Falibi\u002Fpull\u002F758)).\r\n\r\n### Fixed\r\n- `GradientSimilarity` explainer now automatically handles sparse tensors in the model by converting the gradient tensors to dense ones before calculating similarity. This used to be a source of bugs when calculating similarity for models with embedding layers for which gradients tensors are sparse by default. 
Additionally, it now filters any non-trainable parameters and doesn't consider those in the calculation as no gradients exist. A warning is raised if any non-trainable layers or parameters are detected ([#829](https:\u002F\u002Fgithub.com\u002FSeldonIO\u002Falibi\u002Fpull\u002F829)).\r\n- Updated the discussion of the interpretation of `ALE`. The previous examples and documentation had some misleading claims; these have been removed and reworked with an emphasis on the mostly qualitative interpretation of `ALE` plots ([#838](https:\u002F\u002Fgithub.com\u002FSeldonIO\u002Falibi\u002Fpull\u002F838), [#846](https:\u002F\u002Fgithub.com\u002FSeldonIO\u002Falibi\u002Fpull\u002F846)).\r\n\r\n### Changed\r\n- Deprecated the use of the legacy Boston housing dataset in examples and testing. The new examples now use the California housing dataset ([#838](https:\u002F\u002Fgithub.com\u002FSeldonIO\u002Falibi\u002Fpull\u002F838), [#834](https:\u002F\u002Fgithub.com\u002FSeldonIO\u002Falibi\u002Fpull\u002F834)).\r\n- Modularized the computation of prototype importances and plotting for `ProtoSelect`, allowing greater flexibility to the end user ([#826](https:\u002F\u002Fgithub.com\u002FSeldonIO\u002Falibi\u002Fpull\u002F826)).\r\n- Roadmap documentation page removed due to going out of date ([#842](https:\u002F\u002Fgithub.com\u002FSeldonIO\u002Falibi\u002Fpull\u002F842)).\r\n\r\n### Development\r\n- Tests added for `tensorflow` models used in `CounterfactualRL` ([#793](https:\u002F\u002Fgithub.com\u002FSeldonIO\u002Falibi\u002Fpull\u002F793)).\r\n- Tests added for `pytorch` models used in `CounterfactualRL` ([#799](https:\u002F\u002Fgithub.com\u002FSeldonIO\u002Falibi\u002Fpull\u002F799)).\r\n- Tests added for `ALE` plotting functionality ([#816](https:\u002F\u002Fgithub.com\u002FSeldonIO\u002Falibi\u002Fpull\u002F816)).\r\n- Tests added for `PartialDependence` plotting functionality ([#819](https:\u002F\u002Fgithub.com\u002FSeldonIO\u002Falibi\u002Fpull\u002F819)).\r\n- 
Tests added for `PartialDependenceVariance` plotting functionality ([#820](https:\u002F\u002Fgithub.com\u002FSeldonIO\u002Falibi\u002Fpull\u002F820)).\r\n- Tests added for `PermutationImportance` plotting functionality ([#824](https:\u002F\u002Fgithub.com\u002FSeldonIO\u002Falibi\u002Fpull\u002F824)).\r\n- Tests added for `ProtoSelect` plotting functionality ([#841](https:\u002F\u002Fgithub.com\u002FSeldonIO\u002Falibi\u002Fpull\u002F841)).\r\n- Tests added for the `datasets` subpackage ([#814](https:\u002F\u002Fgithub.com\u002FSeldonIO\u002Falibi\u002Fpull\u002F814)).\r\n- Fixed optional dependency installation during CI to make sure dependencies are consistent ([#817](https:\u002F\u002Fgithub.com\u002FSeldonIO\u002Falibi\u002Fpull\u002F817)).\r\n- Synchronize notebook CI workflow with the main CI workflow ([#818](https:\u002F\u002Fgithub.com\u002FSeldonIO\u002Falibi\u002Fpull\u002F818)).\r\n- Version of `pytest-cov` bumped to `4.x` ([#794](https:\u002F\u002Fgithub.com\u002FSeldonIO\u002Falibi\u002Fpull\u002F794)).\r\n- Version of `pytest-xdist` bumped to `3.x` ([#808](https:\u002F\u002Fgithub.com\u002FSeldonIO\u002Falibi\u002Fpull\u002F808)).\r\n- Version of `tox` bumped to `4.x` ([#832](https:\u002F\u002Fgithub.com\u002FSeldonIO\u002Falibi\u002Fpull\u002F832)).","2023-01-11T16:52:55",{"id":183,"version":184,"summary_zh":185,"released_at":186},102621,"v0.8.0","## [v0.8.0](https:\u002F\u002Fgithub.com\u002FSeldonIO\u002Falibi\u002Ftree\u002Fv0.8.0) (2022-09-26)\r\n[Full Changelog](https:\u002F\u002Fgithub.com\u002FSeldonIO\u002Falibi\u002Fcompare\u002Fv0.7.0...v0.8.0)\r\n\r\n### Added\r\n- **New feature** `PartialDependence` and `TreePartialDependence` explainers implementing partial dependence (PD) global explanations. 
Also included is a `plot_pd` utility function for flexible plotting of the resulting PD plots ([docs](https:\u002F\u002Fdocs.seldon.io\u002Fprojects\u002Falibi\u002Fen\u002Fstable\u002Fmethods\u002FPartialDependence.html), [#721](https:\u002F\u002Fgithub.com\u002FSeldonIO\u002Falibi\u002Fpull\u002F721)).\r\n- New `exceptions.NotFittedError` exception which is raised whenever a compulsory call to a `fit` method has not been carried out. Specifically, this is now raised in `AnchorTabular.explain` when `AnchorTabular.fit` has been skipped ([#732](https:\u002F\u002Fgithub.com\u002FSeldonIO\u002Falibi\u002Fpull\u002F732)).\r\n- Various improvements to docs and examples ([#695](https:\u002F\u002Fgithub.com\u002FSeldonIO\u002Falibi\u002Fpull\u002F695), [#701](https:\u002F\u002Fgithub.com\u002FSeldonIO\u002Falibi\u002Fpull\u002F701), [#698](https:\u002F\u002Fgithub.com\u002FSeldonIO\u002Falibi\u002Fpull\u002F698), [#703](https:\u002F\u002Fgithub.com\u002FSeldonIO\u002Falibi\u002Fpull\u002F703), [#717](https:\u002F\u002Fgithub.com\u002FSeldonIO\u002Falibi\u002Fpull\u002F717), [#711](https:\u002F\u002Fgithub.com\u002FSeldonIO\u002Falibi\u002Fpull\u002F711), [#750](https:\u002F\u002Fgithub.com\u002FSeldonIO\u002Falibi\u002Fpull\u002F750), [#784](https:\u002F\u002Fgithub.com\u002FSeldonIO\u002Falibi\u002Fpull\u002F784)).\r\n\r\n### Fixed\r\n- Edge case in `AnchorTabular` where an error is raised during an `explain` call if the instance contains a categorical feature value not seen in the training data ([#742](https:\u002F\u002Fgithub.com\u002FSeldonIO\u002Falibi\u002Fpull\u002F742)).\r\n\r\n### Changed\r\n- Improved handling of custom `grid_points` for the `ALE` explainer ([#731](https:\u002F\u002Fgithub.com\u002FSeldonIO\u002Falibi\u002Fpull\u002F731)).\r\n- Renamed our custom exception classes to remove the verbose `Alibi*` prefix and standardised the `*Error` suffix. 
Concretely:\r\n  - `exceptions.AlibiPredictorCallException` is now `exceptions.PredictorCallError`\r\n  - `exceptions.AlibiPredictorReturnTypeError` is now `exceptions.PredictorReturnTypeError`. Backwards compatibility has been maintained by subclassing the new exception classes by the old ones, **but these will likely be removed in a future version** ([#733](https:\u002F\u002Fgithub.com\u002FSeldonIO\u002Falibi\u002Fpull\u002F733)).\r\n- Warn users when `TreeShap` is used with more than 100 samples in the background dataset which is due to a limitation in the upstream `shap` package ([#710](https:\u002F\u002Fgithub.com\u002FSeldonIO\u002Falibi\u002Fpull\u002F710)).\r\n- Minimum version of `scikit-learn` bumped to `1.0.0` mainly due to upcoming deprecations ([#776](https:\u002F\u002Fgithub.com\u002FSeldonIO\u002Falibi\u002Fpull\u002F776)).\r\n- Minimum version of `scikit-image` bumped to `0.17.2` to fix a possible bug when using the `slic` segmentation function with `AnchorImage` ([#753](https:\u002F\u002Fgithub.com\u002FSeldonIO\u002Falibi\u002Fpull\u002F753)).\r\n- Maximum supported version of `attrs` bumped to `22.x` ([#727](https:\u002F\u002Fgithub.com\u002FSeldonIO\u002Falibi\u002Fpull\u002F727)).\r\n- Maximum supported version of `tensorflow` bumped to `2.10.x` ([#745](https:\u002F\u002Fgithub.com\u002FSeldonIO\u002Falibi\u002Fpull\u002F745)).\r\n- Maximum supported version of `ray` bumped to `2.x` ([#740](https:\u002F\u002Fgithub.com\u002FSeldonIO\u002Falibi\u002Fpull\u002F740)).\r\n- Maximum supported version of `numba` bumped to `0.56.x` ([#724](https:\u002F\u002Fgithub.com\u002FSeldonIO\u002Falibi\u002Fpull\u002F724)).\r\n- Maximum supported version of `shap` bumped to `0.41.x` ([#702](https:\u002F\u002Fgithub.com\u002FSeldonIO\u002Falibi\u002Fpull\u002F702)).\r\n- Updated `shap` example notebooks to recommend installing `matplotlib==3.5.3` due to failure of `shap` plotting functions with `matplotlib==3.6.0` 
([#776](https:\u002F\u002Fgithub.com\u002FSeldonIO\u002Falibi\u002Fpull\u002F776)).\r\n\r\n### Development\r\n- Extend optional dependency checks to ensure the correct submodules are present ([#714](https:\u002F\u002Fgithub.com\u002FSeldonIO\u002Falibi\u002Fpull\u002F714)). \r\n- Introduce `pytest-custom_exit_code` to let notebook CI pass when no notebooks are selected for tests ([#728](https:\u002F\u002Fgithub.com\u002FSeldonIO\u002Falibi\u002Fpull\u002F728)).\r\n- Use UTF-8 encoding when loading `README.md` in `setup.py` to avoid a possible failure of installation for some users ([#744](https:\u002F\u002Fgithub.com\u002FSeldonIO\u002Falibi\u002Fpull\u002F744)).\r\n- Updated guidance for class docstrings ([#743](https:\u002F\u002Fgithub.com\u002FSeldonIO\u002Falibi\u002Fpull\u002F743)).\r\n- Reinstate `ray` tests ([#756](https:\u002F\u002Fgithub.com\u002FSeldonIO\u002Falibi\u002Fpull\u002F756)).\r\n- We now exclude test files from test coverage for a more accurate representation of coverage ([#751](https:\u002F\u002Fgithub.com\u002FSeldonIO\u002Falibi\u002Fpull\u002F751)). 
Note that this has led to a drop in reported code coverage which will be addressed in due course ([#760](https:\u002F\u002Fgithub.com\u002FSeldonIO\u002Falibi\u002Fissues\u002F760)).\r\n- The Python `3.10.x` version on CI has been pinned to `3.10.6` due to typechecking failures, pending a new release of `mypy` ([#761](https:\u002F\u002Fgithub.com\u002FSeldonIO\u002Falibi\u002Fpull\u002F761)).\r\n- The `test_changed_notebooks` workflow can now be triggered manually and is run on push\u002FPR for any branch ([#762](https:\u002F\u002Fgithub.com\u002FSeldonIO\u002Falibi\u002Fcommit\u002F98e962b32c31e7ee670147a44af032b593950b5d)).\r\n- Use `codecov` flags for more granular reporting of code coverage ([#759](https:\u002F\u002Fgithub.com\u002FSeldonIO\u002Falibi\u002Fpull\u002F759)).\r\n- Option to ssh into Github Actions runs for remote debugging","2022-09-26T10:27:18",{"id":188,"version":189,"summary_zh":190,"released_at":191},102622,"v0.7.0","## [v0.7.0](https:\u002F\u002Fgithub.com\u002FSeldonIO\u002Falibi\u002Ftree\u002Fv0.7.0) (2022-05-18)\r\n[Full Changelog](https:\u002F\u002Fgithub.com\u002FSeldonIO\u002Falibi\u002Fcompare\u002Fv0.6.5...v0.7.0)\r\n\r\nThis release introduces two new methods, a `GradientSimilarity` explainer and a `ProtoSelect` data summarisation algorithm.\r\n\r\n## Added\r\n- **New feature** `GradientSimilarity` explainer for explaining predictions of gradient-based (PyTorch and TensorFlow) models by returning the most similar training data points from the point of view of the model ([docs](https:\u002F\u002Fdocs.seldon.io\u002Fprojects\u002Falibi\u002Fen\u002Fstable\u002Fmethods\u002FSimilarity.html)).\r\n- **New feature** We have introduced a new subpackage `alibi.prototypes` which contains the `ProtoSelect` algorithm for summarising datasets with a representative set of \"prototypes\" ([docs](https:\u002F\u002Fdocs.seldon.io\u002Fprojects\u002Falibi\u002Fen\u002Fstable\u002Fmethods\u002FProtoSelect.html)).\r\n- `ALE` explainer can now take a custom 
grid-point per feature to evaluate the `ALE` on. This can help in certain situations when grid-points defined by quantiles might not be the best choice ([docs](https:\u002F\u002Fdocs.seldon.io\u002Fprojects\u002Falibi\u002Fen\u002Fstable\u002Fmethods\u002FALE.html#Usage)).\r\n- Extended the `IntegratedGradients` method target selection to handle explaining any scalar dimension of tensors of any rank (previously only rank-1 and rank-2 were supported). See [#635](https:\u002F\u002Fgithub.com\u002FSeldonIO\u002Falibi\u002Fpull\u002F635).\r\n- Python 3.10 support. Note that `PyTorch` at the time of writing doesn't support Python 3.10 on Windows.\r\n\r\n## Fixed\r\n- Fixed a bug which incorrectly handled multi-dimensional scaling in `CounterfactualProto` ([#646](https:\u002F\u002Fgithub.com\u002FSeldonIO\u002Falibi\u002Fpull\u002F646)).\r\n- Fixed a bug in the example using `CounterfactualRLTabular` ([#651](https:\u002F\u002Fgithub.com\u002FSeldonIO\u002Falibi\u002Fpull\u002F651)).\r\n\r\n## Changed\r\n- `tensorflow` is now an optional dependency. To use methods that require `tensorflow` you can install `alibi` using `pip install alibi[tensorflow]` which will pull in a supported version. For full instructions for the recommended way of installing optional dependencies please refer to [Installation docs](https:\u002F\u002Fdocs.seldon.io\u002Fprojects\u002Falibi\u002Fen\u002Fstable\u002Foverview\u002Fgetting_started.html#installation).\r\n- Updated `sklearn` version bounds to `scikit-learn>=0.22.0, \u003C2.0.0`.\r\n- Updated `tensorflow` maximum allowed version to `2.9.x`.\r\n\r\n## Development\r\n- This release introduces a way to manage the absence of optional dependencies. 
In short, the design is such that if an optional dependency is required for an algorithm but missing, at import time the corresponding public (or private in the case of the optional dependency being required for a subset of the functionality of a private class) algorithm class will be replaced by a `MissingDependency` object. For full details on developing `alibi` with optional dependencies see [Contributing: Optional Dependencies](https:\u002F\u002Fgithub.com\u002FSeldonIO\u002Falibi\u002Fblob\u002Fmaster\u002FCONTRIBUTING.md#optional-dependencies).\r\n- The [CONTRIBUTING.md](https:\u002F\u002Fgithub.com\u002FSeldonIO\u002Falibi\u002Fblob\u002Fmaster\u002FCONTRIBUTING.md) has been updated with further instructions for managing optional dependencies (see point above) and more conventions around docstrings.\r\n- We have split the `Explainer` base class into `Base` and `Explainer` to facilitate reusability and better class hierarchy semantics when introducing methods that are not explainers ([#649](https:\u002F\u002Fgithub.com\u002FSeldonIO\u002Falibi\u002Fpull\u002F649)).\r\n- `mypy` has been updated to `~=0.900` which requires additional development dependencies for type stubs; currently only `types-requests` has been necessary to add to `requirements\u002Fdev.txt`.\r\n- From this release onwards we exclude the directories `doc\u002F` and `examples\u002F` from the source distribution (by adding `prune` directives in `MANIFEST.in`). 
This results in considerably smaller file sizes for the source distribution.\r\n","2022-05-18T11:35:00",{"id":193,"version":194,"summary_zh":195,"released_at":196},102623,"v0.6.5","## [v0.6.5](https:\u002F\u002Fgithub.com\u002FSeldonIO\u002Falibi\u002Ftree\u002Fv0.6.5) (2022-03-18)\r\n[Full Changelog](https:\u002F\u002Fgithub.com\u002FSeldonIO\u002Falibi\u002Fcompare\u002Fv0.6.4...v0.6.5)\r\n\r\nThis is a patch release to correct a regression in `CounterfactualProto` introduced in `v0.6.3`.\r\n\r\n### Added\r\n- Added a [Frequently Asked Questions](https:\u002F\u002Fdocs.seldon.io\u002Fprojects\u002Falibi\u002Fen\u002Flatest\u002Foverview\u002Ffaq.html) page to the docs.\r\n\r\n### Fixed\r\n- Fix a bug introduced in `v0.6.3` which prevented `CounterfactualProto` working with categorical features ([#612](https:\u002F\u002Fgithub.com\u002FSeldonIO\u002Falibi\u002Fpull\u002F612)).\r\n- Fix an issue with the `LanguageModelSampler` where it would sometimes sample punctuation ([#585](https:\u002F\u002Fgithub.com\u002FSeldonIO\u002Falibi\u002Fpull\u002F585)). 
\r\n\r\n### Development\r\n- The maximum `tensorflow` version has been bumped from 2.7 to 2.8 ([#588](https:\u002F\u002Fgithub.com\u002FSeldonIO\u002Falibi\u002Fpull\u002F588)).","2022-03-18T15:26:19",{"id":198,"version":199,"summary_zh":200,"released_at":201},102624,"v0.6.4","## [v0.6.4](https:\u002F\u002Fgithub.com\u002FSeldonIO\u002Falibi\u002Ftree\u002Fv0.6.4) (2022-01-28)\r\n[Full Changelog](https:\u002F\u002Fgithub.com\u002FSeldonIO\u002Falibi\u002Fcompare\u002Fv0.6.3...v0.6.4)\r\n\r\nThis is a patch release to correct a regression in `AnchorImage` introduced in `v0.6.3`.\r\n\r\n### Fixed\r\n- Fix a bug introduced in `v0.6.3` where `AnchorImage` would ignore user `segmentation_kwargs` ([#581](https:\u002F\u002Fgithub.com\u002FSeldonIO\u002Falibi\u002Fpull\u002F581)).\r\n\r\n### Development\r\n- The maximum versions of `Pillow` and `scikit-image` have been bumped to 9.x and 0.19.x respectively.","2022-01-28T18:03:00",{"id":203,"version":204,"summary_zh":205,"released_at":206},102625,"v0.6.3","## [v0.6.3](https:\u002F\u002Fgithub.com\u002FSeldonIO\u002Falibi\u002Ftree\u002Fv0.6.3) (2022-01-18)\r\n[Full Changelog](https:\u002F\u002Fgithub.com\u002FSeldonIO\u002Falibi\u002Fcompare\u002Fv0.6.2...v0.6.3)\r\n\r\n### Added\r\n- **New feature** A callback can now be passed to `IntegratedGradients` via the `target_fn` argument, in order to calculate the scalar target dimension from the model output. This is to bypass the requirement of passing `target` directly to `explain` when the `target` of interest may depend on the prediction output. See the example in the [docs](https:\u002F\u002Fdocs.seldon.io\u002Fprojects\u002Falibi\u002Fen\u002Flatest\u002Fmethods\u002FIntegratedGradients.html). 
([#523](https:\u002F\u002Fgithub.com\u002FSeldonIO\u002Falibi\u002Fpull\u002F523)).\r\n- A new comprehensive [Introduction](https:\u002F\u002Fdocs.seldon.io\u002Fprojects\u002Falibi\u002Fen\u002Flatest\u002Foverview\u002Fhigh_level.html) to explainability added to the documentation ([#510](https:\u002F\u002Fgithub.com\u002FSeldonIO\u002Falibi\u002Fpull\u002F510)).\r\n\r\n### Changed\r\n- Python 3.6 has been deprecated from the supported versions as it has reached end-of-life. \r\n\r\n### Fixed\r\n- Fix a bug with passing background images to `AnchorImage` leading to an error ([#542](https:\u002F\u002Fgithub.com\u002FSeldonIO\u002Falibi\u002Fpull\u002F542)).\r\n- Fix a bug with rounding errors being introduced in `CounterfactualRLTabular` ([#550](https:\u002F\u002Fgithub.com\u002FSeldonIO\u002Falibi\u002Fpull\u002F550)).\r\n\r\n### Development\r\n- Docstrings have been updated and consolidated ([#548](https:\u002F\u002Fgithub.com\u002FSeldonIO\u002Falibi\u002Fpull\u002F548)). For developers, docstring conventions have been documented in [CONTRIBUTING.md](https:\u002F\u002Fgithub.com\u002FSeldonIO\u002Falibi\u002Fblob\u002Fmaster\u002FCONTRIBUTING.md#docstrings).\r\n- `numpy` typing has been updated to be compatible with `numpy 1.22` ([#543](https:\u002F\u002Fgithub.com\u002FSeldonIO\u002Falibi\u002Fpull\u002F543)). This is a prerequisite for upgrading to `tensorflow 2.7`. 
\r\n- To further improve reliability, strict `Optional` type-checking with `mypy` has been reinstated ([#541](https:\u002F\u002Fgithub.com\u002FSeldonIO\u002Falibi\u002Fpull\u002F541)).\r\n- The Alibi CI tests now include Windows and MacOS platforms ([#575](https:\u002F\u002Fgithub.com\u002FSeldonIO\u002Falibi\u002Fpull\u002F575)).\r\n- The maximum `tensorflow` version has been bumped from 2.6 to 2.7 ([#377](https:\u002F\u002Fgithub.com\u002FSeldonIO\u002Falibi-detect\u002Fpull\u002F377)).","2022-01-18T18:12:07",{"id":208,"version":209,"summary_zh":210,"released_at":211},102626,"v0.6.2","## [v0.6.2](https:\u002F\u002Fgithub.com\u002FSeldonIO\u002Falibi\u002Ftree\u002Fv0.6.2) (2021-11-18)\r\n[Full Changelog](https:\u002F\u002Fgithub.com\u002FSeldonIO\u002Falibi\u002Fcompare\u002Fv0.6.1...v0.6.2)\r\n\r\n### Added\r\n- Documentation on using black-box and white-box models in the context of alibi, [see here](https:\u002F\u002Fdocs.seldon.io\u002Fprojects\u002Falibi\u002Fen\u002Flatest\u002Foverview\u002Fwhite_box_black_box.html).\r\n- `AnchorTabular`, `AnchorImage` and `AnchorText` now expose an additional `dtype` keyword argument with a default value of `np.float32`. This is to ensure that whenever a user `predictor` is called internally with dummy data a correct data type can be ensured ([#506](https:\u002F\u002Fgithub.com\u002FSeldonIO\u002Falibi\u002Fpull\u002F506)).\r\n- Custom exceptions. A new public module `alibi.exceptions` defining the `alibi` exception hierarchy. This introduces two exceptions, `AlibiPredictorCallException` and `AlibiPredictorReturnTypeError`. See [#520](https:\u002F\u002Fgithub.com\u002FSeldonIO\u002Falibi\u002Fpull\u002F520) for more details.\r\n\r\n### Changed\r\n- For `AnchorImage`, coerce `image_shape` argument into a tuple to implicitly allow passing a list input which eases use of configuration files. 
In the future the typing will be improved to be more explicit about allowed types with runtime type checking.\r\n- Updated the minimum `shap` version to the latest `0.40.0` as this fixes an installation issue if `alibi` and `shap` are installed with the same command.\r\n\r\n### Fixed\r\n- Fix a bug with version saving being overwritten on subsequent saves ([#481](https:\u002F\u002Fgithub.com\u002FSeldonIO\u002Falibi\u002Fpull\u002F481)).\r\n- Fix a bug in the Integrated Gradients notebook with transformer models due to a regression in the upstream `transformers` library ([#528](https:\u002F\u002Fgithub.com\u002FSeldonIO\u002Falibi\u002Fpull\u002F528)).\r\n- Fix a bug in `IntegratedGradients` with `forward_kwargs` not always being correctly passed ([#525](https:\u002F\u002Fgithub.com\u002FSeldonIO\u002Falibi\u002Fpull\u002F525)).\r\n- Fix a bug resetting `TreeShap` predictor ([#534](https:\u002F\u002Fgithub.com\u002FSeldonIO\u002Falibi\u002Fpull\u002F534)).\r\n\r\n\r\n### Development\r\n- Now using `readthedocs` Docker image in our CI to replicate the doc building environment exactly. Also enabled `readthedocs` build on PR feature which allows browsing the built docs on every PR.\r\n- New notebook execution testing framework via Github Actions. There are two new GA workflows, [test_all_notebooks](https:\u002F\u002Fgithub.com\u002FSeldonIO\u002Falibi\u002Factions\u002Fworkflows\u002Ftest_all_notebooks.yml) which is run once a week and can be triggered manually, and [test_changed_notebooks](https:\u002F\u002Fgithub.com\u002FSeldonIO\u002Falibi\u002Factions\u002Fworkflows\u002Ftest_changed_notebooks.yml) which detects if any notebooks have been modified in a PR and executes only those. Not all notebooks are amenable to be tested automatically due to long running times or complex software\u002Fhardware dependencies. 
We maintain a list of notebooks to be excluded in the testing script under [testing\u002Ftest_notebooks.py](testing\u002Ftest_notebooks.py).\r\n- Now using `myst` (a markdown superset) for more flexible documentation ([#482](https:\u002F\u002Fgithub.com\u002FSeldonIO\u002Falibi\u002Fpull\u002F482)).\r\n- Added a [CITATION.cff](CITATION.cff) file.","2021-11-18T13:42:16",{"id":213,"version":214,"summary_zh":215,"released_at":216},102627,"v0.6.1","## [v0.6.1](https:\u002F\u002Fgithub.com\u002FSeldonIO\u002Falibi\u002Ftree\u002Fv0.6.1) (2021-09-02)\r\n[Full Changelog](https:\u002F\u002Fgithub.com\u002FSeldonIO\u002Falibi\u002Fcompare\u002Fv0.6.0...v0.6.1)\r\n\r\n### Added\r\n- **New feature** An implementation of [Model-agnostic and Scalable Counterfactual Explanations via Reinforcement Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2106.02597) is now available via `alibi.explainers.CounterfactualRL` and `alibi.explainers.CounterfactualRLTabular` classes. The method is model-agnostic and the implementation is written in both PyTorch and TensorFlow. See [docs](https:\u002F\u002Fdocs.seldon.io\u002Fprojects\u002Falibi\u002Fen\u002Flatest\u002Fmethods\u002FCFRL.html) for more information.\r\n\r\n### Changed\r\n- **Future breaking change** The names of `CounterFactual` and `CounterFactualProto` classes have been changed to `Counterfactual` and `CounterfactualProto` respectively for consistency and correctness. The old class names continue working for now but emit a deprecation warning message and will be removed in an upcoming version.\r\n- `dill` behaviour was changed to not extend the `pickle` protocol so that standard usage of `pickle` in a session with `alibi` does not change expected `pickle` behaviour. 
See [discussion](https:\u002F\u002Fgithub.com\u002FSeldonIO\u002Falibi\u002Fissues\u002F447).\r\n- `AnchorImage` internals refactored to avoid persistent state between `explain` calls.\r\n\r\n### Development\r\n- A PR checklist is available under [CONTRIBUTING.md](..\u002FCONTRIBUTING.md#pr-checklist). In the future many of these may be turned into automated checks.\r\n- `pandoc` version for docs building updated to `1.19.2` which is what is used on `readthedocs`.\r\n- Citation updated to the JMLR paper.","2021-09-02T14:14:42",{"id":218,"version":219,"summary_zh":220,"released_at":221},102628,"v0.6.0","## [v0.6.0](https:\u002F\u002Fgithub.com\u002FSeldonIO\u002Falibi\u002Ftree\u002Fv0.6.0) (2021-07-08)\r\n[Full Changelog](https:\u002F\u002Fgithub.com\u002FSeldonIO\u002Falibi\u002Fcompare\u002Fv0.5.8...v0.6.0)\r\n\r\n### Added\r\n- **New feature** `AnchorText` now supports sampling according to masked language models via the `transformers` library. See [docs](https:\u002F\u002Fdocs.seldon.io\u002Fprojects\u002Falibi\u002Fen\u002Flatest\u002Fmethods\u002FAnchors.html#id2) and the [example](https:\u002F\u002Fdocs.seldon.io\u002Fprojects\u002Falibi\u002Fen\u002Flatest\u002Fexamples\u002Fanchor_text_movie.html) for using the new functionality.\r\n- **Breaking change** due to the new masked language model sampling for `AnchorText` the public API for the constructor has changed. See [docs](https:\u002F\u002Fdocs.seldon.io\u002Fprojects\u002Falibi\u002Fen\u002Flatest\u002Fmethods\u002FAnchors.html#id2) for a full description of the new API.\r\n- `AnchorTabular` now supports one-hot encoded categorical variables in addition to the default ordinal\u002Flabel encoded representation of categorical variables.\r\n- `IntegratedGradients` changes to allow explaining a wider variety of models. 
In particular, a new `forward_kwargs` argument to `explain` allows passing additional arguments to the model and `attribute_to_layer_inputs` flag to allow calculating attributions with respect to layer input instead of output if set to `True`. The API and capabilities now track more closely to the [captum.ai](https:\u002F\u002Fcaptum.ai\u002Fapi\u002F) `PyTorch` implementation.\r\n- [Example](https:\u002F\u002Fdocs.seldon.io\u002Fprojects\u002Falibi\u002Fen\u002Flatest\u002Fexamples\u002Fintegrated_gradients_transformers.html) of using `IntegratedGradients` to explain `transformer` models.\r\n- Python 3.9 support.\r\n\r\n\r\n### Fixed\r\n- `IntegratedGradients` - fix the path definition for attributions calculated with respect to an internal layer. Previously the paths were defined in terms of the inputs and baselines, now they are correctly defined in terms of the corresponding layer input\u002Foutput. ","2021-07-08T14:26:57",{"id":223,"version":224,"summary_zh":225,"released_at":226},102629,"v0.5.8","## [v0.5.8](https:\u002F\u002Fgithub.com\u002FSeldonIO\u002Falibi\u002Ftree\u002Fv0.5.8) (2021-04-29)\r\n[Full Changelog](https:\u002F\u002Fgithub.com\u002FSeldonIO\u002Falibi\u002Fcompare\u002Fv0.5.7...v0.5.8)\r\n\r\n### Added\r\n- Experimental explainer serialization support using `dill`. 
See the [docs](https://docs.seldon.io/projects/alibi/en/latest/overview/saving.html) for more details.

### Fixed
- Handle layers which are not part of `model.layers` for `IntegratedGradients`.

### Development
- Update type hints to be compatible with `numpy` 1.20.
- Separate licence build step in CI; only check licences against the latest Python version.

## [v0.5.7](https://github.com/SeldonIO/alibi/tree/v0.5.7) (2021-03-31)
[Full Changelog](https://github.com/SeldonIO/alibi/compare/v0.5.6...v0.5.7)

### Changed
- Support for `KernelShap` and `TreeShap` now requires installing the `shap` dependency explicitly after installing `alibi`. This can be achieved by running `pip install alibi && pip install alibi[shap]`. The reason for this is that the build process for the upstream `shap` package is not well configured, resulting in broken installations as detailed in https://github.com/SeldonIO/alibi/pull/376 and https://github.com/slundberg/shap/pull/1802. We expect this to be a temporary change until changes are made upstream.

### Added
- A `reset_predictor` method for black-box explainers.
The intended use case for this is deploying an already configured explainer to work with a remote predictor endpoint instead of the local predictor used in development.
- `alibi.datasets.load_cats` function which loads a small sample of cat images shipped with the library to be used in examples.

### Fixed
- Deprecated the `alibi.datasets.fetch_imagenet` function as the ImageNet API is no longer available.
- `IntegratedGradients` now works with subclassed TensorFlow models.
- Removed support for calculating attributions with respect to multiple layers in `IntegratedGradients`, as this was not working properly and is difficult to do in the general case.

### Development
- Fixed an issue with `AnchorTabular` tests not being picked up due to a name change of test data fixtures.

## [v0.5.6](https://github.com/SeldonIO/alibi/tree/v0.5.6) (2021-02-18)
[Full Changelog](https://github.com/SeldonIO/alibi/compare/v0.5.5...v0.5.6)

### Added
- **Breaking change** `IntegratedGradients` now supports models with multiple inputs. For each input of the model, attributions are calculated and returned in a list. Also extends the method to allow calculating attributions for multiple internal layers. If a list of layers is passed, a list of attributions is returned. See https://github.com/SeldonIO/alibi/pull/321.
- `ALE` now supports selecting a subset of features to explain.
This can be useful to reduce runtime if only some features are of interest, and it also indirectly helps with categorical variables by allowing them to be excluded (as `ALE` does not support categorical variables).

### Fixed
- `AnchorTabular` coverage calculation was incorrect, caused by incorrectly indexing a list; this is now resolved.
- `ALE` was raising an error when a constant feature was present. This is now handled explicitly and the user has control over how to handle these features. See https://docs.seldon.io/projects/alibi/en/latest/api/alibi.explainers.ale.html#alibi.explainers.ale.ALE for more details.
- The release of spaCy 3.0 broke the `AnchorText` functionality as the way `lexeme_prob` tables are loaded was changed. This is now fixed by explicitly handling the loading depending on the `spacy` version.
- Fixed documentation to refer to the `Explanation` object instead of the old `dict` object.
- Added warning boxes to the `CounterFactual`, `CounterFactualProto` and `CEM` docs to explain the necessity of clearing the TensorFlow graph when switching to a new model in the same session.

### Development
- Introduced lower and upper bounds for library and development dependencies to limit the potential for breaking functionality upon new releases of dependencies.
- Added dependabot support to automatically monitor new releases of dependencies (both library and development).
- Switched from Travis CI to GitHub Actions as the former limited their free tier.
- Removed unused CI provider configs from the repo to reduce clutter.
- Simplified development dependencies to just two files, `requirements/dev.txt` and `requirements/docs.txt`.
- Split out the docs building stage as a separate step on CI as it doesn't need to run on every Python version, thus saving time.
- Added `.readthedocs.yml` to control how user-facing docs are built directly from the repo.
- Removed testing-related entries from `setup.py` as the workflow is both unused and outdated.
- Avoid `shap==0.38.1` as a dependency as it assumes `IPython` is installed and breaks the installation.

## [v0.5.5](https://github.com/SeldonIO/alibi/tree/v0.5.5) (2020-10-20)
[Full Changelog](https://github.com/SeldonIO/alibi/compare/v0.5.4...v0.5.5)

### Added
- **New feature** Distributed backend using `ray`. To use, install `ray` via `pip install alibi[ray]`.
- **New feature** `KernelShap` distributed version using the new distributed backend.
- For anchor methods, added an explanation field `data['raw']['instances']` which is a batch-wise version of the existing `data['raw']['instance']`. This is in preparation for eventual batch support for anchor methods.
- Pre-commit hook for `pyupgrade` via `nbqa` for formatting example notebooks using Python 3.6+ syntax.

### Fixed
- Flaky test for distributed anchors (note: this is the old non-batch-wise implementation) by dropping the precision threshold.
- Notebook string formatting upgraded to Python 3.6+ f-strings.

### Changed
- **Breaking change** For anchor methods, the returned explanation field `data['raw']['prediction']` is now batch-wise, i.e. for `AnchorTabular` and `AnchorImage` it is a 1-dimensional `numpy` array whilst for `AnchorText` it is a list of strings. This is in preparation for eventual batch support for anchor methods.
- Removed the dependency on `prettyprinter` and replaced it with a slightly modified standard library version of `PrettyPrinter`.
This is to prepare for a `conda` release, which requires all dependencies to also be published on `conda`.

## [v0.5.4](https://github.com/SeldonIO/alibi/tree/v0.5.4) (2020-09-03)
[Full Changelog](https://github.com/SeldonIO/alibi/compare/v0.5.3...v0.5.4)

### Added
- `update_metadata` method for any `Explainer` object to enable easy book-keeping of algorithm parameters

### Fixed
- Updated `KernelShap` wrapper to work with the newest `shap>=0.36` library
- Fix some missing metadata parameters in `KernelShap` and `TreeShap`
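The `update_metadata` book-keeping described in the v0.5.4 entry can be sketched roughly as follows. This is a minimal illustration of the pattern, not the actual alibi implementation; the `params` keyword argument and the `meta` dict layout shown here are assumptions for the sake of the example.

```python
import copy


class Explainer:
    """Toy base class sketching metadata book-keeping (hypothetical, not alibi's code)."""

    def __init__(self):
        # Top-level metadata plus a nested dict grouping algorithm parameters.
        self.meta = {"name": self.__class__.__name__, "params": {}}

    def update_metadata(self, data_dict: dict, params: bool = False) -> None:
        """Record key/value pairs in `meta`; with params=True, store them
        under meta['params'] so algorithm parameters stay grouped."""
        data_dict = copy.deepcopy(data_dict)  # avoid aliasing caller-owned state
        if params:
            self.meta["params"].update(data_dict)
        else:
            self.meta.update(data_dict)


class MyExplainer(Explainer):
    def __init__(self, threshold: float):
        super().__init__()
        self.update_metadata({"threshold": threshold}, params=True)


exp = MyExplainer(threshold=0.95)
print(exp.meta)
# {'name': 'MyExplainer', 'params': {'threshold': 0.95}}
```

Keeping algorithm parameters in a dedicated sub-dict makes it easy to serialize or log an explainer's configuration separately from its identifying metadata.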