[{"data":1,"prerenderedAt":-1},["ShallowReactive",2],{"similar-SelfExplainML--PiML-Toolbox":3,"tool-SelfExplainML--PiML-Toolbox":61},[4,18,26,36,44,53],{"id":5,"name":6,"github_repo":7,"description_zh":8,"stars":9,"difficulty_score":10,"last_commit_at":11,"category_tags":12,"status":17},4358,"openclaw","openclaw\u002Fopenclaw","OpenClaw 是一款专为个人打造的本地化 AI 助手，旨在让你在自己的设备上拥有完全可控的智能伙伴。它打破了传统 AI 助手局限于特定网页或应用的束缚，能够直接接入你日常使用的各类通讯渠道，包括微信、WhatsApp、Telegram、Discord、iMessage 等数十种平台。无论你在哪个聊天软件中发送消息，OpenClaw 都能即时响应，甚至支持在 macOS、iOS 和 Android 设备上进行语音交互，并提供实时的画布渲染功能供你操控。\n\n这款工具主要解决了用户对数据隐私、响应速度以及“始终在线”体验的需求。通过将 AI 部署在本地，用户无需依赖云端服务即可享受快速、私密的智能辅助，真正实现了“你的数据，你做主”。其独特的技术亮点在于强大的网关架构，将控制平面与核心助手分离，确保跨平台通信的流畅性与扩展性。\n\nOpenClaw 非常适合希望构建个性化工作流的技术爱好者、开发者，以及注重隐私保护且不愿被单一生态绑定的普通用户。只要具备基础的终端操作能力（支持 macOS、Linux 及 Windows WSL2），即可通过简单的命令行引导完成部署。如果你渴望拥有一个懂你",349277,3,"2026-04-06T06:32:30",[13,14,15,16],"Agent","开发框架","图像","数据工具","ready",{"id":19,"name":20,"github_repo":21,"description_zh":22,"stars":23,"difficulty_score":10,"last_commit_at":24,"category_tags":25,"status":17},3808,"stable-diffusion-webui","AUTOMATIC1111\u002Fstable-diffusion-webui","stable-diffusion-webui 是一个基于 Gradio 构建的网页版操作界面，旨在让用户能够轻松地在本地运行和使用强大的 Stable Diffusion 图像生成模型。它解决了原始模型依赖命令行、操作门槛高且功能分散的痛点，将复杂的 AI 绘图流程整合进一个直观易用的图形化平台。\n\n无论是希望快速上手的普通创作者、需要精细控制画面细节的设计师，还是想要深入探索模型潜力的开发者与研究人员，都能从中获益。其核心亮点在于极高的功能丰富度：不仅支持文生图、图生图、局部重绘（Inpainting）和外绘（Outpainting）等基础模式，还独创了注意力机制调整、提示词矩阵、负向提示词以及“高清修复”等高级功能。此外，它内置了 GFPGAN 和 CodeFormer 等人脸修复工具，支持多种神经网络放大算法，并允许用户通过插件系统无限扩展能力。即使是显存有限的设备，stable-diffusion-webui 也提供了相应的优化选项，让高质量的 AI 艺术创作变得触手可及。",162132,"2026-04-05T11:01:52",[14,15,13],{"id":27,"name":28,"github_repo":29,"description_zh":30,"stars":31,"difficulty_score":32,"last_commit_at":33,"category_tags":34,"status":17},1381,"everything-claude-code","affaan-m\u002Feverything-claude-code","everything-claude-code 是一套专为 AI 编程助手（如 Claude Code、Codex、Cursor 等）打造的高性能优化系统。它不仅仅是一组配置文件，而是一个经过长期实战打磨的完整框架，旨在解决 AI 
代理在实际开发中面临的效率低下、记忆丢失、安全隐患及缺乏持续学习能力等核心痛点。\n\n通过引入技能模块化、直觉增强、记忆持久化机制以及内置的安全扫描功能，everything-claude-code 能显著提升 AI 在复杂任务中的表现，帮助开发者构建更稳定、更智能的生产级 AI 代理。其独特的“研究优先”开发理念和针对 Token 消耗的优化策略，使得模型响应更快、成本更低，同时有效防御潜在的攻击向量。\n\n这套工具特别适合软件开发者、AI 研究人员以及希望深度定制 AI 工作流的技术团队使用。无论您是在构建大型代码库，还是需要 AI 协助进行安全审计与自动化测试，everything-claude-code 都能提供强大的底层支持。作为一个曾荣获 Anthropic 黑客大奖的开源项目，它融合了多语言支持与丰富的实战钩子（hooks），让 AI 真正成长为懂上",146793,2,"2026-04-08T23:32:35",[14,13,35],"语言模型",{"id":37,"name":38,"github_repo":39,"description_zh":40,"stars":41,"difficulty_score":32,"last_commit_at":42,"category_tags":43,"status":17},2271,"ComfyUI","Comfy-Org\u002FComfyUI","ComfyUI 是一款功能强大且高度模块化的视觉 AI 引擎，专为设计和执行复杂的 Stable Diffusion 图像生成流程而打造。它摒弃了传统的代码编写模式，采用直观的节点式流程图界面，让用户通过连接不同的功能模块即可构建个性化的生成管线。\n\n这一设计巧妙解决了高级 AI 绘图工作流配置复杂、灵活性不足的痛点。用户无需具备编程背景，也能自由组合模型、调整参数并实时预览效果，轻松实现从基础文生图到多步骤高清修复等各类复杂任务。ComfyUI 拥有极佳的兼容性，不仅支持 Windows、macOS 和 Linux 全平台，还广泛适配 NVIDIA、AMD、Intel 及苹果 Silicon 等多种硬件架构，并率先支持 SDXL、Flux、SD3 等前沿模型。\n\n无论是希望深入探索算法潜力的研究人员和开发者，还是追求极致创作自由度的设计师与资深 AI 绘画爱好者，ComfyUI 都能提供强大的支持。其独特的模块化架构允许社区不断扩展新功能，使其成为当前最灵活、生态最丰富的开源扩散模型工具之一，帮助用户将创意高效转化为现实。",108111,"2026-04-08T11:23:26",[14,15,13],{"id":45,"name":46,"github_repo":47,"description_zh":48,"stars":49,"difficulty_score":32,"last_commit_at":50,"category_tags":51,"status":17},4721,"markitdown","microsoft\u002Fmarkitdown","MarkItDown 是一款由微软 AutoGen 团队打造的轻量级 Python 工具，专为将各类文件高效转换为 Markdown 格式而设计。它支持 PDF、Word、Excel、PPT、图片（含 OCR）、音频（含语音转录）、HTML 乃至 YouTube 链接等多种格式的解析，能够精准提取文档中的标题、列表、表格和链接等关键结构信息。\n\n在人工智能应用日益普及的今天，大语言模型（LLM）虽擅长处理文本，却难以直接读取复杂的二进制办公文档。MarkItDown 恰好解决了这一痛点，它将非结构化或半结构化的文件转化为模型“原生理解”且 Token 效率极高的 Markdown 格式，成为连接本地文件与 AI 分析 pipeline 的理想桥梁。此外，它还提供了 MCP（模型上下文协议）服务器，可无缝集成到 Claude Desktop 等 LLM 应用中。\n\n这款工具特别适合开发者、数据科学家及 AI 研究人员使用，尤其是那些需要构建文档检索增强生成（RAG）系统、进行批量文本分析或希望让 AI 
助手直接“阅读”本地文件的用户。虽然生成的内容也具备一定可读性，但其核心优势在于为机器",93400,"2026-04-06T19:52:38",[52,14],"插件",{"id":54,"name":55,"github_repo":56,"description_zh":57,"stars":58,"difficulty_score":10,"last_commit_at":59,"category_tags":60,"status":17},4487,"LLMs-from-scratch","rasbt\u002FLLMs-from-scratch","LLMs-from-scratch 是一个基于 PyTorch 的开源教育项目，旨在引导用户从零开始一步步构建一个类似 ChatGPT 的大型语言模型（LLM）。它不仅是同名技术著作的官方代码库，更提供了一套完整的实践方案，涵盖模型开发、预训练及微调的全过程。\n\n该项目主要解决了大模型领域“黑盒化”的学习痛点。许多开发者虽能调用现成模型，却难以深入理解其内部架构与训练机制。通过亲手编写每一行核心代码，用户能够透彻掌握 Transformer 架构、注意力机制等关键原理，从而真正理解大模型是如何“思考”的。此外，项目还包含了加载大型预训练权重进行微调的代码，帮助用户将理论知识延伸至实际应用。\n\nLLMs-from-scratch 特别适合希望深入底层原理的 AI 开发者、研究人员以及计算机专业的学生。对于不满足于仅使用 API，而是渴望探究模型构建细节的技术人员而言，这是极佳的学习资源。其独特的技术亮点在于“循序渐进”的教学设计：将复杂的系统工程拆解为清晰的步骤，配合详细的图表与示例，让构建一个虽小但功能完备的大模型变得触手可及。无论你是想夯实理论基础，还是为未来研发更大规模的模型做准备",90106,"2026-04-06T11:19:32",[35,15,13,14],{"id":62,"github_repo":63,"name":64,"description_en":65,"description_zh":66,"ai_summary_zh":66,"readme_en":67,"readme_zh":68,"quickstart_zh":69,"use_case_zh":70,"hero_image_url":71,"owner_login":72,"owner_name":73,"owner_avatar_url":74,"owner_bio":75,"owner_company":76,"owner_location":76,"owner_email":76,"owner_twitter":76,"owner_website":77,"owner_url":77,"languages":78,"stars":83,"forks":84,"last_commit_at":85,"license":86,"difficulty_score":87,"env_os":88,"env_gpu":88,"env_ram":88,"env_deps":89,"category_tags":95,"github_topics":96,"view_count":10,"oss_zip_url":76,"oss_zip_packed_at":76,"status":17,"created_at":101,"updated_at":102,"faqs":103,"releases":133},3405,"SelfExplainML\u002FPiML-Toolbox","PiML-Toolbox","PiML (Python Interpretable Machine Learning) toolbox for model development & diagnostics","PiML-Toolbox 是一款专为可解释机器学习打造的 Python 集成工具箱，旨在让模型开发与诊断过程更加透明、可靠。它主要解决了传统机器学习模型“黑箱”操作难以理解、缺乏信任度的问题，帮助开发者深入洞察模型决策逻辑，并有效识别过拟合、公平性偏差及鲁棒性不足等潜在风险。\n\n这款工具非常适合数据科学家、机器学习工程师以及需要应对高监管要求的研究人员使用。无论是构建新模型还是验证外部“黑箱”模型，PiML-Toolbox 都能提供强力支持。其独特亮点在于兼顾了低代码图形界面与高代码 API 两种使用方式，既方便快速上手，又满足深度定制需求。工具内置了多种天生具备可解释性的先进算法（如 EBM、GAMI-Net 及特定深度的梯度提升树），并提供了从全局到局部的全方位解释器（如 
SHAP、LIME）。此外，它还涵盖了准确性评估、公平性测试、弱区识别及抗干扰能力分析等丰富的诊断功能，让用户能全面掌握模型性能，从而做出更自信的决策。","\u003Cdiv align=\"center\">\n  \n\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FSelfExplainML_PiML-Toolbox_readme_be32a8eae543.png\" alt=\"drawing\" width=\"314.15926\"\u002F>\n\n**An integrated Python toolbox for interpretable machine learning** \n\n\u003C\u002Fdiv>\n\n\u003Cdiv>\n\n  ---\n\n**March 30, 2025 by Dr. Agus Sudjianto**: **Farewell PiML, Hello MoDeVa!**\n\nAfter three impactful years of empowering model developers and validators, we’re thrilled to introduce the next evolution: MoDeVa – MOdel DEvelopment & VAlidation.\n\nMoDeVa builds on the success of PiML, taking transparency, interpretability, and robustness in machine learning to a whole new level. Whether you’re in a high-stakes regulatory setting or exploring cutting-edge model architectures, MoDeVa is built to support your journey.\n\nWhy MoDeVa?\n\n •\tNext-Gen Models: Interpretable ML models like Boosted Trees, Mixture of Experts, and Neural Trees—built for confident decision-making.\n\n •\tModel Hacking Redefined: Tools to uncover failure modes, analyze robustness, reliability and resilience. 
\n\n •\tInteractive Statistical Visualizations: Bring models to life with dynamic graphs that go beyond static charts.\n\n •\tSeamless Validation: Effortlessly validate external black-box models using flexible wrappers.\n\nCheck it out here: https:\u002F\u002Fmodeva.ai\u002F\n\n--- \n\n\u003Cdiv align=\"center\">\n\n  `pip install PiML`\n  \n🎄 **Dec 1, 2023:**  V0.6.0 is released with enhanced data handling and model analytics.\n\n:rocket: **May 4, 2023:**  V0.5.0 is released together with PiML user guide.\n\n:rocket: **October 31, 2022:**  V0.4.0 is released with enriched models and enhanced diagnostics.\n\n:rocket: **July 26, 2022:**  V0.3.0 is released with classic statistical models.\n\n:rocket: **June 26, 2022:** V0.2.0 is released with high-code APIs.\n\n:loudspeaker: **May 4, 2022:**  V0.1.0 is launched with low-code UI\u002FUX.\n\u003C\u002Fdiv>\n\nPiML (or π-ML, \u002Fˈpaɪ·ˈem·ˈel\u002F) is a new Python toolbox for interpretable machine learning model development and validation. Through low-code interface and high-code APIs, PiML supports a growing list of inherently interpretable ML models:\n\n1. **GLM**: Linear\u002FLogistic Regression with L1 ∨ L2 Regularization\n2. **GAM**: Generalized Additive Models using B-splines\n3. **Tree**: Decision Tree for Classification and Regression\n4. **FIGS**: Fast Interpretable Greedy-Tree Sums (Tan, et al. 2022)\n5. **XGB1**: Extreme Gradient Boosted Trees of Depth 1, with optimal binning (Chen and Guestrin, 2016; Navas-Palencia, 2020)\n6. **XGB2**: Extreme Gradient Boosted Trees of Depth 2, with effect purification (Chen and Guestrin, 2016; Lengerich, et al. 2020)\n7. **EBM**: Explainable Boosting Machine (Nori, et al. 2019; Lou, et al. 2013)\n8. **GAMI-Net**: Generalized Additive Model with Structured Interactions (Yang, Zhang and Sudjianto, 2021)\n9. **ReLU-DNN**: Deep ReLU Networks using Aletheia Unwrapper and Sparsification (Sudjianto, et al. 
2020)\n\nPiML also works for arbitrary supervised ML models under regression and binary classification settings. It supports a whole spectrum of outcome testing, including but not limited to the following:\n\n1. **Accuracy**: popular metrics like MSE, MAE for regression tasks and ACC, AUC, Recall, Precision, F1-score for binary classification tasks. \n2. **Explainability**: post-hoc global explainers (PFI, PDP, ALE) and local explainers (LIME, SHAP).\n3. **Fairness**: disparity test and segmented analysis by integrating the solas-ai package.\n4. **WeakSpot**: identification of weak regions with high residuals by slicing techniques.\n5. **Overfit**: identification of overfitting regions according to train-test performance gap.\n6. **Reliability**: assessment of prediction uncertainty by split conformal prediction techniques.\n7. **Robustness**: evaluation of performance degradation under covariate noise perturbation.\n8. **Resilience**: evaluation of performance degradation under different out-of-distribution scenarios.\n\n[Installation](#Install) | [Examples](#Example) | [Usage](#Usage) | [Citations](#Cite)\n\n\n## Installation\u003Ca name=\"Install\">\u003C\u002Fa>  \n\n```bash\npip install PiML  \n```\n\n## Low-code Examples\u003Ca name=\"Example\">\u003C\u002Fa>   \nClick the ipynb links to run examples in Google Colab:  \n1. BikeSharing data: \u003Ca style=\"text-align: center\" target=\"_blank\" href=\"https:\u002F\u002Fcolab.research.google.com\u002Fgithub\u002FSelfExplainML\u002FPiML-Toolbox\u002Fblob\u002Fmain\u002Fexamples\u002FExample_BikeSharing.ipynb\">\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FSelfExplainML_PiML-Toolbox_readme_9da794830ac9.png\" width=\"20\">  ipynb\u003C\u002Fa>  \n2. 
CaliforniaHousing data: \u003Ca style=\"text-align: center\" target=\"_blank\" href=\"https:\u002F\u002Fcolab.research.google.com\u002Fgithub\u002FSelfExplainML\u002FPiML-Toolbox\u002Fblob\u002Fmain\u002Fexamples\u002FExample_CaliforniaHousing.ipynb\">\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FSelfExplainML_PiML-Toolbox_readme_9da794830ac9.png\" width=\"20\">  ipynb\u003C\u002Fa>  \n3. TaiwanCredit data: \u003Ca style=\"text-align: center\" target=\"_blank\" href=\"https:\u002F\u002Fcolab.research.google.com\u002Fgithub\u002FSelfExplainML\u002FPiML-Toolbox\u002Fblob\u002Fmain\u002Fexamples\u002FExample_TaiwanCredit.ipynb\">\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FSelfExplainML_PiML-Toolbox_readme_9da794830ac9.png\" width=\"20\">  ipynb\u003C\u002Fa>   \n4. Fairness_SimuStudy1 data: \u003Ca style=\"text-align: center\" target=\"_blank\" href=\"https:\u002F\u002Fcolab.research.google.com\u002Fgithub\u002FSelfExplainML\u002FPiML-Toolbox\u002Fblob\u002Fmain\u002Fexamples\u002FExample_Fairness_SimuStudy1.ipynb\">\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FSelfExplainML_PiML-Toolbox_readme_9da794830ac9.png\" width=\"20\">  ipynb\u003C\u002Fa>   \n5. Fairness_SimuStudy2 data: \u003Ca style=\"text-align: center\" target=\"_blank\" href=\"https:\u002F\u002Fcolab.research.google.com\u002Fgithub\u002FSelfExplainML\u002FPiML-Toolbox\u002Fblob\u002Fmain\u002Fexamples\u002FExample_Fairness_SimuStudy2.ipynb\">\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FSelfExplainML_PiML-Toolbox_readme_9da794830ac9.png\" width=\"20\">  ipynb\u003C\u002Fa>   \n6. 
Upload custom data in two ways: \u003Ca style=\"text-align: center\" target=\"_blank\" href=\"https:\u002F\u002Fcolab.research.google.com\u002Fgithub\u002FSelfExplainML\u002FPiML-Toolbox\u002Fblob\u002Fmain\u002Fexamples\u002FExample_CustomDataLoad_Two_Ways.ipynb\">\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FSelfExplainML_PiML-Toolbox_readme_9da794830ac9.png\" width=\"20\">  ipynb\u003C\u002Fa>    \n7. Deal with external models: \u003Ca style=\"text-align: center\" target=\"_blank\" href=\"https:\u002F\u002Fcolab.research.google.com\u002Fgithub\u002FSelfExplainML\u002FPiML-Toolbox\u002Fblob\u002Fmain\u002Fexamples\u002FExample_ExternalModels.ipynb\">\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FSelfExplainML_PiML-Toolbox_readme_9da794830ac9.png\" width=\"20\">  ipynb\u003C\u002Fa>    \n\nBegin your own PiML journey with \u003Ca style=\"text-align: center\" target=\"_blank\" href=\"https:\u002F\u002Fcolab.research.google.com\u002Fgithub\u002FSelfExplainML\u002FPiML-Toolbox\u002Fblob\u002Fmain\u002FPiML%20Low-code%20Example%20Run.ipynb\">this demo notebook\u003C\u002Fa>. \n\n\n## High-code Examples\u003Ca name=\"Example\">\u003C\u002Fa>   \nThe same examples can also be run by high-code APIs:  \n1. BikeSharing data: \u003Ca style=\"text-align: center\" target=\"_blank\" href=\"https:\u002F\u002Fcolab.research.google.com\u002Fgithub\u002FSelfExplainML\u002FPiML-Toolbox\u002Fblob\u002Fmain\u002Fexamples\u002FExample_BikeSharing_HighCode.ipynb\">\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FSelfExplainML_PiML-Toolbox_readme_9da794830ac9.png\" width=\"20\">  ipynb\u003C\u002Fa>  \n2. 
CaliforniaHousing data: \u003Ca style=\"text-align: center\" target=\"_blank\" href=\"https:\u002F\u002Fcolab.research.google.com\u002Fgithub\u002FSelfExplainML\u002FPiML-Toolbox\u002Fblob\u002Fmain\u002Fexamples\u002FExample_CaliforniaHousing_HighCode.ipynb\">\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FSelfExplainML_PiML-Toolbox_readme_9da794830ac9.png\" width=\"20\">  ipynb\u003C\u002Fa>  \n3. TaiwanCredit data: \u003Ca style=\"text-align: center\" target=\"_blank\" href=\"https:\u002F\u002Fcolab.research.google.com\u002Fgithub\u002FSelfExplainML\u002FPiML-Toolbox\u002Fblob\u002Fmain\u002Fexamples\u002FExample_TaiwanCredit_HighCode.ipynb\">\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FSelfExplainML_PiML-Toolbox_readme_9da794830ac9.png\" width=\"20\">  ipynb\u003C\u002Fa>   \n4. Model saving: \u003Ca style=\"text-align: center\" target=\"_blank\" href=\"https:\u002F\u002Fcolab.research.google.com\u002Fgithub\u002FSelfExplainML\u002FPiML-Toolbox\u002Fblob\u002Fmain\u002Fexamples\u002FExample_ModelSaving.ipynb\">\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FSelfExplainML_PiML-Toolbox_readme_9da794830ac9.png\" width=\"20\">  ipynb\u003C\u002Fa>   \n5. 
Results return: \u003Ca style=\"text-align: center\" target=\"_blank\" href=\"https:\u002F\u002Fcolab.research.google.com\u002Fgithub\u002FSelfExplainML\u002FPiML-Toolbox\u002Fblob\u002Fmain\u002Fexamples\u002FExample_ResultsReturn.ipynb\">\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FSelfExplainML_PiML-Toolbox_readme_9da794830ac9.png\" width=\"20\">  ipynb\u003C\u002Fa>   \n## Low-code Usage on Google Colab\u003Ca name=\"Usage\">\u003C\u002Fa>  \n\n### Stage 1:  Initialize an experiment, Load and Prepare data\n\n```python\nfrom piml import Experiment\nexp = Experiment()\n```\n\n```python\nexp.data_loader()\n```\n\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FSelfExplainML_PiML-Toolbox_readme_a85732b5fc58.png\">\n\n```python\nexp.data_summary()\n```\n\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FSelfExplainML_PiML-Toolbox_readme_d29ab6043e16.png\">\n\n```python\nexp.data_prepare()\n```\n\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FSelfExplainML_PiML-Toolbox_readme_a4e39fd72e3d.png\">\n\n```python\nexp.data_quality()\n```\n\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FSelfExplainML_PiML-Toolbox_readme_d54a28b613e7.png\">\n\n```python\nexp.feature_select()\n```\n\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FSelfExplainML_PiML-Toolbox_readme_58e442c69cd4.png\">\n\n```python\nexp.eda()\n```\n\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FSelfExplainML_PiML-Toolbox_readme_6d89bcc410ef.png\">\n\n### Stage 2:  Train interpretable models\n```python\nexp.model_train()\n```\n\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FSelfExplainML_PiML-Toolbox_readme_5c7c3dbbf0c9.png\">\n\n### Stage 3. 
Explain and Interpret\n```python\nexp.model_explain()\n```\n\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FSelfExplainML_PiML-Toolbox_readme_c513cbda4b13.png\">\n\n```python\nexp.model_interpret() \n```\n\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FSelfExplainML_PiML-Toolbox_readme_37297f6f96ae.png\">\n\n### Stage 4. Diagnose and Compare\n```python\nexp.model_diagnose()\n```\n\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FSelfExplainML_PiML-Toolbox_readme_301477ea33e5.png\">\n\n```python\nexp.model_compare()\n```\n\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FSelfExplainML_PiML-Toolbox_readme_e0cf4894fafd.png\">\n\n```python\nexp.model_fairness()\n```\n\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FSelfExplainML_PiML-Toolbox_readme_c0b9af370973.png\">\n\n```python\nexp.model_fairness_compare()\n```\n\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FSelfExplainML_PiML-Toolbox_readme_2584afe5b199.png\">\n\n\n## Arbitrary Black-Box Modeling\nFor example, train a complex LightGBM with depth 7 and register it to the experiment: \n\n```python\nfrom lightgbm import LGBMClassifier\nexp.model_train(LGBMClassifier(max_depth=7), name='LGBM-7')\n```\n\nThen, compare it to inherently interpretable models (e.g. XGB2 and GAMI-Net): \n```python\nexp.model_compare()\n```\n\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FSelfExplainML_PiML-Toolbox_readme_ac8c84b096ec.png\">\n\n\n\n## Citations\u003Ca name=\"Cite\">\u003C\u002Fa>  \n\n\u003Cdetails open>\n  \u003Csummary>\u003Cstrong>PiML, ReLU-DNN Aletheia and GAMI-Net\u003C\u002Fstrong>\u003C\u002Fsummary>\u003Chr\u002F>\n\n  \"PiML Toolbox for Interpretable Machine Learning Model Development and Diagnostics\" (A. Sudjianto, A. Zhang, Z. Yang, Y. Su and N. 
Zeng, 2023) \u003Ca href=\"https:\u002F\u002Farxiv.org\u002Fabs\u002F2305.04214\">arXiv link\u003C\u002Fa>  \n\n  ```latex\n  @article{sudjianto2023piml,\n  title={PiML Toolbox for Interpretable Machine Learning Model Development and Diagnostics},\n  author={Sudjianto, Agus and Zhang, Aijun and Yang, Zebin and Su, Yu and Zeng, Ningzhou},\n  year={2023}\n  }\n  ```\n  \n  \n  \"Designing Inherently Interpretable Machine Learning Models\" (A. Sudjianto and A. Zhang, 2021)  \u003Ca href=\"https:\u002F\u002Farxiv.org\u002Fabs\u002F2111.01743\">arXiv link\u003C\u002Fa>  \n  \n  ```latex\n  @article{sudjianto2021designing,\n  title={Designing Inherently Interpretable Machine Learning Models},\n  author={Sudjianto, Agus and Zhang, Aijun},\n  journal={arXiv preprint:2111.01743},\n  year={2021}\n  }\n  ```\n\n  \"Unwrapping The Black Box of Deep ReLU Networks: Interpretability, Diagnostics, and Simplification\" (A. Sudjianto, W. Knauth, R. Singh, Z. Yang and A. Zhang, 2020) \u003Ca href=\"https:\u002F\u002Farxiv.org\u002Fabs\u002F2011.04041\">arXiv link\u003C\u002Fa>  \n  \n  ```latex\n  @article{sudjianto2020unwrapping,\n  title={Unwrapping the black box of deep ReLU networks: interpretability, diagnostics, and simplification},\n  author={Sudjianto, Agus and Knauth, William and Singh, Rahul and Yang, Zebin and Zhang, Aijun},\n  journal={arXiv preprint:2011.04041},\n  year={2020}\n  }\n  ```\n\n  \"GAMI-Net: An Explainable Neural Network based on Generalized Additive Models with Structured Interactions\" (Z. Yang, A. Zhang, and A. 
Sudjianto, 2021) \u003Ca href=\"https:\u002F\u002Farxiv.org\u002Fabs\u002F2003.07132\">arXiv link\u003C\u002Fa>  \n\n  ```latex\n  @article{yang2021gami,\n  title={GAMI-Net: An explainable neural network based on generalized additive models with structured interactions},\n  author={Yang, Zebin and Zhang, Aijun and Sudjianto, Agus},\n  journal={Pattern Recognition},\n  volume={120},\n  pages={108192},\n  year={2021}\n  }\n  ```\n\u003C\u002Fdetails>  \n\n\n\u003Cdetails open>\n  \u003Csummary>\u003Cstrong>Other Interpretable ML Models\u003C\u002Fstrong>\u003C\u002Fsummary>\u003Chr\u002F>\n  \n  \"Fast Interpretable Greedy-Tree Sums (FIGS)\" (Tan, Y.S., Singh, C., Nasseri, K., Agarwal, A. and Yu, B., 2022)  \n  \n  ```latex\n  @article{tan2022fast,\n  title={Fast interpretable greedy-tree sums (FIGS)},\n  author={Tan, Yan Shuo and Singh, Chandan and Nasseri, Keyan and Agarwal, Abhineet and Yu, Bin},\n  journal={arXiv preprint arXiv:2201.11931},\n  year={2022}\n  }\n  ```\n\n  \"Accurate intelligible models with pairwise interactions\" (Y. Lou, R. Caruana, J. Gehrke, and G. Hooker, 2013)   \n  \n  ```latex\n  @inproceedings{lou2013accurate,\n  title={Accurate intelligible models with pairwise interactions},\n  author={Lou, Yin and Caruana, Rich and Gehrke, Johannes and Hooker, Giles},\n  booktitle={Proceedings of the 19th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining},\n  pages={623--631},\n  year={2013},\n  organization={ACM}\n  }  \n  ```\n  \n  \"Purifying Interaction Effects with the Functional ANOVA: An Efficient Algorithm for Recovering Identifiable Additive Models\" (Lengerich, B., Tan, S., Chang, C.H., Hooker, G. 
and Caruana, R., 2020)  \n  \n  ```latex\n  @inproceedings{lengerich2020purifying,\n  title={Purifying interaction effects with the functional anova: An efficient algorithm for recovering identifiable additive models},\n  author={Lengerich, Benjamin and Tan, Sarah and Chang, Chun-Hao and Hooker, Giles and Caruana, Rich},\n  booktitle={International Conference on Artificial Intelligence and Statistics},\n  pages={2402--2412},\n  year={2020},\n  organization={PMLR}\n  }\n  ```\n  \n  \n  \"InterpretML: A Unified Framework for Machine Learning Interpretability\" (H. Nori, S. Jenkins, P. Koch, and R. Caruana, 2019)  \n  \n  ```latex\n  @article{nori2019interpretml,\n  title={InterpretML: A Unified Framework for Machine Learning Interpretability},\n  author={Nori, Harsha and Jenkins, Samuel and Koch, Paul and Caruana, Rich},\n  journal={arXiv preprint:1909.09223},\n  year={2019}\n  }\n  ```  \n\u003C\u002Fdetails>\n","\u003Cdiv align=\"center\">\n  \n\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FSelfExplainML_PiML-Toolbox_readme_be32a8eae543.png\" alt=\"drawing\" width=\"314.15926\"\u002F>\n\n**用于可解释机器学习的集成式Python工具箱**\n\n\u003C\u002Fdiv>\n\n\u003Cdiv>\n\n  ---\n\n**2025年3月30日，由苏吉安托博士撰写**：**告别PiML，迎接MoDeVa！**\n\n在赋能模型开发者与验证者长达三年并取得显著成效之后，我们非常高兴地推出下一代工具——MoDeVa——MOdel DEvelopment & VAlidation（模型开发与验证）。\n\nMoDeVa基于PiML的成功经验，将机器学习中的透明性、可解释性和稳健性提升至全新水平。无论您身处高风险的监管环境，还是正在探索前沿的模型架构，MoDeVa都能为您的工作提供有力支持。\n\n为什么选择MoDeVa？\n\n •\t新一代模型：可解释的机器学习模型，如梯度提升树、专家混合模型和神经树，专为自信决策而设计。\n\n •\t模型分析新境界：帮助揭示模型的失效模式，分析其稳健性、可靠性和韧性。\n\n •\t交互式统计可视化：通过动态图表让模型“活”起来，超越传统的静态图表。\n\n •\t无缝验证：利用灵活的封装器轻松验证外部黑盒模型。\n\n立即访问：https:\u002F\u002Fmodeva.ai\u002F\n\n---\n\n\u003Cdiv align=\"center\">\n\n  `pip install PiML`\n  \n🎄 **2023年12月1日：** 发布V0.6.0版本，增强了数据处理与模型分析功能。\n\n:rocket: **2023年5月4日：** 发布V0.5.0版本，并附带PiML用户指南。\n\n:rocket: **2022年10月31日：** 发布V0.4.0版本，新增更多模型并强化诊断功能。\n\n:rocket: **2022年7月26日：** 发布V0.3.0版本，包含经典统计模型。\n\n:rocket: **2022年6月26日：** 
发布V0.2.0版本，提供高代码API接口。\n\n:loudspeaker: **2022年5月4日：** 正式发布V0.1.0版本，配备低代码UI\u002FUX界面。\n\u003C\u002Fdiv>\n\nPiML（或π-ML，\u002Fˈpaɪ·ˈem·ˈel\u002F）是一款全新的Python工具箱，用于可解释机器学习模型的开发与验证。通过低代码界面和高代码API，PiML支持一系列日益丰富的内在可解释性机器学习模型：\n\n1. **GLM**：带有L1或L2正则化的线性\u002F逻辑回归\n2. **GAM**：使用B样条的广义加性模型\n3. **Tree**：用于分类与回归的决策树\n4. **FIGS**：快速可解释的贪婪树求和模型（Tan等，2022）\n5. **XGB1**：深度为1的极端梯度提升树，采用最优分箱技术（Chen和Guestrin，2016；Navas-Palencia，2020）\n6. **XGB2**：深度为2的极端梯度提升树，具备效应净化功能（Chen和Guestrin，2016；Lengerich等，2020）\n7. **EBM**：可解释梯度提升机（Nori等，2019；Lou等，2013）\n8. **GAMI-Net**：具有结构化交互作用的广义加性模型（Yang、Zhang和Sudjianto，2021）\n9. **ReLU-DNN**：利用Aletheia解包器和稀疏化技术的深层ReLU网络（Sudjianto等，2020）\n\nPiML同样适用于回归和二分类任务下的任意监督学习模型。它支持全方位的结果评估，包括但不限于以下内容：\n\n1. **准确性**：流行的指标，如回归任务中的MSE、MAE，以及二分类任务中的ACC、AUC、召回率、精确率和F1分数。\n2. **可解释性**：事后全局解释器（PFI、PDP、ALE）和局部解释器（LIME、SHAP）。\n3. **公平性**：通过集成solas-ai软件包进行差异性测试和分段分析。\n4. **薄弱点**：利用切片技术识别残差较大的薄弱区域。\n5. **过拟合**：根据训练集与测试集表现差距识别过拟合区域。\n6. **可靠性**：通过分割共形预测技术评估预测不确定性。\n7. **稳健性**：评估在协变量噪声扰动下的性能下降情况。\n8. **韧性**：评估在不同分布外场景下的性能退化情况。\n\n[安装](#Install) | [示例](#Example) | [使用方法](#Usage) | [引用](#Cite)\n\n\n## 安装\u003Ca name=\"Install\">\u003C\u002Fa>  \n\n```bash\npip install PiML  \n```\n\n## 低代码示例\u003Ca name=\"Example\">\u003C\u002Fa>   \n点击ipynb链接，在Google Colab中运行示例：  \n1. 自行车共享数据： \u003Ca style=\"text-align: center\" target=\"_blank\" href=\"https:\u002F\u002Fcolab.research.google.com\u002Fgithub\u002FSelfExplainML\u002FPiML-Toolbox\u002Fblob\u002Fmain\u002Fexamples\u002FExample_BikeSharing.ipynb\">\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FSelfExplainML_PiML-Toolbox_readme_9da794830ac9.png\" width=\"20\">  ipynb\u003C\u002Fa>  \n2. 
加州住房数据： \u003Ca style=\"text-align: center\" target=\"_blank\" href=\"https:\u002F\u002Fcolab.research.google.com\u002Fgithub\u002FSelfExplainML\u002FPiML-Toolbox\u002Fblob\u002Fmain\u002Fexamples\u002FExample_CaliforniaHousing.ipynb\">\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FSelfExplainML_PiML-Toolbox_readme_9da794830ac9.png\" width=\"20\">  ipynb\u003C\u002Fa>  \n3. 台湾信贷数据： \u003Ca style=\"text-align: center\" target=\"_blank\" href=\"https:\u002F\u002Fcolab.research.google.com\u002Fgithub\u002FSelfExplainML\u002FPiML-Toolbox\u002Fblob\u002Fmain\u002Fexamples\u002FExample_TaiwanCredit.ipynb\">\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FSelfExplainML_PiML-Toolbox_readme_9da794830ac9.png\" width=\"20\">  ipynb\u003C\u002Fa>   \n4. 公平性模拟研究1数据： \u003Ca style=\"text-align: center\" target=\"_blank\" href=\"https:\u002F\u002Fcolab.research.google.com\u002Fgithub\u002FSelfExplainML\u002FPiML-Toolbox\u002Fblob\u002Fmain\u002Fexamples\u002FExample_Fairness_SimuStudy1.ipynb\">\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FSelfExplainML_PiML-Toolbox_readme_9da794830ac9.png\" width=\"20\">  ipynb\u003C\u002Fa>   \n5. 公平性模拟研究2数据： \u003Ca style=\"text-align: center\" target=\"_blank\" href=\"https:\u002F\u002Fcolab.research.google.com\u002Fgithub\u002FSelfExplainML\u002FPiML-Toolbox\u002Fblob\u002Fmain\u002Fexamples\u002FExample_Fairness_SimuStudy2.ipynb\">\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FSelfExplainML_PiML-Toolbox_readme_9da794830ac9.png\" width=\"20\">  ipynb\u003C\u002Fa>   \n6. 
以两种方式上传自定义数据： \u003Ca style=\"text-align: center\" target=\"_blank\" href=\"https:\u002F\u002Fcolab.research.google.com\u002Fgithub\u002FSelfExplainML\u002FPiML-Toolbox\u002Fblob\u002Fmain\u002Fexamples\u002FExample_CustomDataLoad_Two_Ways.ipynb\">\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FSelfExplainML_PiML-Toolbox_readme_9da794830ac9.png\" width=\"20\">  ipynb\u003C\u002Fa>    \n7. 处理外部模型： \u003Ca style=\"text-align: center\" target=\"_blank\" href=\"https:\u002F\u002Fcolab.research.google.com\u002Fgithub\u002FSelfExplainML\u002FPiML-Toolbox\u002Fblob\u002Fmain\u002Fexamples\u002FExample_ExternalModels.ipynb\">\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FSelfExplainML_PiML-Toolbox_readme_9da794830ac9.png\" width=\"20\">  ipynb\u003C\u002Fa>    \n\n从\u003Ca style=\"text-align: center\" target=\"_blank\" href=\"https:\u002F\u002Fcolab.research.google.com\u002Fgithub\u002FSelfExplainML\u002FPiML-Toolbox\u002Fblob\u002Fmain\u002FPiML%20Low-code%20Example%20Run.ipynb\">此演示笔记本\u003C\u002Fa>开始您的PiML之旅。\n\n## 高代码示例\u003Ca name=\"Example\">\u003C\u002Fa>   \n同样的示例也可以通过高代码 API 运行：  \n1. BikeSharing 数据： \u003Ca style=\"text-align: center\" target=\"_blank\" href=\"https:\u002F\u002Fcolab.research.google.com\u002Fgithub\u002FSelfExplainML\u002FPiML-Toolbox\u002Fblob\u002Fmain\u002Fexamples\u002FExample_BikeSharing_HighCode.ipynb\">\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FSelfExplainML_PiML-Toolbox_readme_9da794830ac9.png\" width=\"20\">  ipynb\u003C\u002Fa>  \n2. CaliforniaHousing 数据： \u003Ca style=\"text-align: center\" target=\"_blank\" href=\"https:\u002F\u002Fcolab.research.google.com\u002Fgithub\u002FSelfExplainML\u002FPiML-Toolbox\u002Fblob\u002Fmain\u002Fexamples\u002FExample_CaliforniaHousing_HighCode.ipynb\">\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FSelfExplainML_PiML-Toolbox_readme_9da794830ac9.png\" width=\"20\">  ipynb\u003C\u002Fa>  \n3. 
TaiwanCredit 数据： \u003Ca style=\"text-align: center\" target=\"_blank\" href=\"https:\u002F\u002Fcolab.research.google.com\u002Fgithub\u002FSelfExplainML\u002FPiML-Toolbox\u002Fblob\u002Fmain\u002Fexamples\u002FExample_TaiwanCredit_HighCode.ipynb\">\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FSelfExplainML_PiML-Toolbox_readme_9da794830ac9.png\" width=\"20\">  ipynb\u003C\u002Fa>   \n4. 模型保存： \u003Ca style=\"text-align: center\" target=\"_blank\" href=\"https:\u002F\u002Fcolab.research.google.com\u002Fgithub\u002FSelfExplainML\u002FPiML-Toolbox\u002Fblob\u002Fmain\u002Fexamples\u002FExample_ModelSaving.ipynb\">\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FSelfExplainML_PiML-Toolbox_readme_9da794830ac9.png\" width=\"20\">  ipynb\u003C\u002Fa>   \n5. 结果返回： \u003Ca style=\"text-align: center\" target=\"_blank\" href=\"https:\u002F\u002Fcolab.research.google.com\u002Fgithub\u002FSelfExplainML\u002FPiML-Toolbox\u002Fblob\u002Fmain\u002Fexamples\u002FExample_ResultsReturn.ipynb\">\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FSelfExplainML_PiML-Toolbox_readme_9da794830ac9.png\" width=\"20\">  ipynb\u003C\u002Fa>   \n## 在 Google Colab 上的低代码使用\u003Ca name=\"Usage\">\u003C\u002Fa>  \n\n### 第一阶段：初始化实验、加载并准备数据\n\n```python\nfrom piml import Experiment\nexp = Experiment()\n```\n\n```python\nexp.data_loader()\n```\n\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FSelfExplainML_PiML-Toolbox_readme_a85732b5fc58.png\">\n\n```python\nexp.data_summary()\n```\n\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FSelfExplainML_PiML-Toolbox_readme_d29ab6043e16.png\">\n\n```python\nexp.data_prepare()\n```\n\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FSelfExplainML_PiML-Toolbox_readme_a4e39fd72e3d.png\">\n\n```python\nexp.data_quality()\n```\n\u003Cimg 
src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FSelfExplainML_PiML-Toolbox_readme_d54a28b613e7.png\">\n\n```python\nexp.feature_select()\n```\n\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FSelfExplainML_PiML-Toolbox_readme_58e442c69cd4.png\">\n\n```python\nexp.eda()\n```\n\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FSelfExplainML_PiML-Toolbox_readme_6d89bcc410ef.png\">\n\n### 第二阶段：训练可解释模型\n```python\nexp.model_train()\n```\n\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FSelfExplainML_PiML-Toolbox_readme_5c7c3dbbf0c9.png\">\n\n### 第三阶段：解释与解读\n```python\nexp.model_explain()\n```\n\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FSelfExplainML_PiML-Toolbox_readme_c513cbda4b13.png\">\n\n```python\nexp.model_interpret() \n```\n\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FSelfExplainML_PiML-Toolbox_readme_37297f6f96ae.png\">\n\n### 第四阶段：诊断与比较\n```python\nexp.model_diagnose()\n```\n\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FSelfExplainML_PiML-Toolbox_readme_301477ea33e5.png\">\n\n```python\nexp.model_compare()\n```\n\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FSelfExplainML_PiML-Toolbox_readme_e0cf4894fafd.png\">\n\n```python\nexp.model_fairness()\n```\n\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FSelfExplainML_PiML-Toolbox_readme_c0b9af370973.png\">\n\n```python\nexp.model_fairness_compare()\n```\n\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FSelfExplainML_PiML-Toolbox_readme_2584afe5b199.png\">\n\n\n## 任意黑盒建模\n例如，训练一个深度为 7 的复杂 LightGBM 模型，并将其注册到实验中： \n\n```python\nfrom lightgbm import LGBMClassifier\nexp.model_train(LGBMClassifier(max_depth=7), name='LGBM-7')\n```\n\n然后，将其与固有可解释模型（如 XGB2 和 GAMI-Net）进行比较： \n```python\nexp.model_compare()\n```\n\u003Cimg 
src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FSelfExplainML_PiML-Toolbox_readme_ac8c84b096ec.png\">\n\n## 引用文献\u003Ca name=\"Cite\">\u003C\u002Fa>  \n\n\u003Cdetails open>\n  \u003Csummary>\u003Cstrong>PiML、ReLU-DNN Aletheia 和 GAMI-Net\u003C\u002Fstrong>\u003C\u002Fsummary>\u003Chr\u002F>\n\n  “用于可解释机器学习模型开发与诊断的 PiML 工具箱”（A. Sudjianto、A. Zhang、Z. Yang、Y. Su 和 N. Zeng，2023 年） \u003Ca href=\"https:\u002F\u002Farxiv.org\u002Fabs\u002F2305.04214\">arXiv 链接\u003C\u002Fa>  \n\n  ```latex\n  @article{sudjianto2023piml,\n  title={PiML Toolbox for Interpretable Machine Learning Model Development and Diagnostics},\n  author={Sudjianto, Agus and Zhang, Aijun and Yang, Zebin and Su, Yu and Zeng, Ningzhou},\n  journal={arXiv preprint:2305.04214},\n  year={2023}\n  }\n  ```\n  \n  \n  “设计内在可解释的机器学习模型”（A. Sudjianto 和 A. Zhang，2021 年） \u003Ca href=\"https:\u002F\u002Farxiv.org\u002Fabs\u002F2111.01743\">arXiv 链接\u003C\u002Fa>  \n  \n  ```latex\n  @article{sudjianto2021designing,\n  title={Designing Inherently Interpretable Machine Learning Models},\n  author={Sudjianto, Agus and Zhang, Aijun},\n  journal={arXiv preprint:2111.01743},\n  year={2021}\n  }\n  ```\n\n  “揭开深度 ReLU 网络的黑箱：可解释性、诊断与简化”（A. Sudjianto、W. Knauth、R. Singh、Z. Yang 和 A. Zhang，2020 年） \u003Ca href=\"https:\u002F\u002Farxiv.org\u002Fabs\u002F2011.04041\">arXiv 链接\u003C\u002Fa>  \n  \n  ```latex\n  @article{sudjianto2020unwrapping,\n  title={Unwrapping the black box of deep ReLU networks: interpretability, diagnostics, and simplification},\n  author={Sudjianto, Agus and Knauth, William and Singh, Rahul and Yang, Zebin and Zhang, Aijun},\n  journal={arXiv preprint:2011.04041},\n  year={2020}\n  }\n  ```\n\n  “GAMI-Net：基于具有结构化交互作用的广义加性模型的可解释神经网络”（Z. Yang、A. Zhang 和 A. 
Sudjianto，2021 年） \u003Ca href=\"https:\u002F\u002Farxiv.org\u002Fabs\u002F2003.07132\">arXiv 链接\u003C\u002Fa>  \n\n  ```latex\n  @article{yang2021gami,\n  title={GAMI-Net: An explainable neural network based on generalized additive models with structured interactions},\n  author={Yang, Zebin and Zhang, Aijun and Sudjianto, Agus},\n  journal={Pattern Recognition},\n  volume={120},\n  pages={108192},\n  year={2021}\n  }\n  ```\n\u003C\u002Fdetails>  \n\n\n\u003Cdetails open>\n  \u003Csummary>\u003Cstrong>其他可解释机器学习模型\u003C\u002Fstrong>\u003C\u002Fsummary>\u003Chr\u002F>\n  \n  “快速可解释的贪婪树求和模型 (FIGS)”（Tan, Y.S., Singh, C., Nasseri, K., Agarwal, A. 和 Yu, B., 2022 年）  \n  \n  ```latex\n  @article{tan2022fast,\n  title={Fast interpretable greedy-tree sums (FIGS)},\n  author={Tan, Yan Shuo and Singh, Chandan and Nasseri, Keyan and Agarwal, Abhineet and Yu, Bin},\n  journal={arXiv preprint arXiv:2201.11931},\n  year={2022}\n  }\n  ```\n\n  “具有成对交互作用的准确且易懂的模型”（Y. Lou、R. Caruana、J. Gehrke 和 G. Hooker，2013 年）   \n  \n  ```latex\n  @inproceedings{lou2013accurate,\n  title={Accurate intelligible models with pairwise interactions},\n  author={Lou, Yin and Caruana, Rich and Gehrke, Johannes and Hooker, Giles},\n  booktitle={Proceedings of the 19th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining},\n  pages={623--631},\n  year={2013},\n  organization={ACM}\n  }  \n  ```\n  \n  “利用函数型方差分析净化交互效应：一种恢复可识别加性模型的有效算法”（Lengerich, B., Tan, S., Chang, C.H., Hooker, G. 
和 Caruana, R., 2020 年）  \n  \n  ```latex\n  @inproceedings{lengerich2020purifying,\n  title={Purifying interaction effects with the functional anova: An efficient algorithm for recovering identifiable additive models},\n  author={Lengerich, Benjamin and Tan, Sarah and Chang, Chun-Hao and Hooker, Giles and Caruana, Rich},\n  booktitle={International Conference on Artificial Intelligence and Statistics},\n  pages={2402--2412},\n  year={2020},\n  organization={PMLR}\n  }\n  ```\n  \n  \n  “InterpretML：机器学习可解释性的统一框架”（H. Nori、S. Jenkins、P. Koch 和 R. Caruana，2019 年）  \n  \n  ```latex\n  @article{nori2019interpretml,\n  title={InterpretML: A Unified Framework for Machine Learning Interpretability},\n  author={Nori, Harsha and Jenkins, Samuel and Koch, Paul and Caruana, Rich},\n  journal={arXiv preprint:1909.09223},\n  year={2019}\n  }\n  ```  \n\u003C\u002Fdetails>","# PiML-Toolbox 快速上手指南\n\nPiML (π-ML) 是一个用于可解释机器学习模型开发与验证的集成 Python 工具箱。它支持低代码界面和高代码 API，内置多种原生可解释模型（如 GLM, GAM, EBM, GAMI-Net 等），并提供全面的模型诊断、公平性分析及黑盒模型解释功能。\n\n> **注意**：根据官方最新公告，PiML 项目已演进为 **MoDeVa** (Model Development & Validation)。虽然 PiML 仍可使用，但建议新用户关注 [MoDeVa](https:\u002F\u002Fmodeva.ai\u002F) 以获取下一代可解释性工具支持。\n\n## 环境准备\n\n在开始之前，请确保您的开发环境满足以下要求：\n\n*   **操作系统**：Windows, macOS 或 Linux\n*   **Python 版本**：推荐 Python 3.8 - 3.10（与官方 FAQ 中列出的受支持版本一致）\n*   **前置依赖**：\n    *   `pip` (Python 包管理工具)\n    *   基础科学计算库（安装 PiML 时会自动处理大部分依赖，如 `pandas`, `numpy`, `scikit-learn`, `matplotlib` 等）\n*   **可选环境**：推荐使用 Jupyter Notebook 或 Google Colab 以获得最佳的低代码交互体验。\n\n## 安装步骤\n\n### 1. 标准安装\n使用 pip 直接安装最新稳定版：\n\n```bash\npip install PiML\n```\n\n### 2. 国内加速安装（推荐）\n如果您在中国大陆地区，建议使用国内镜像源以加快下载速度：\n\n```bash\npip install PiML -i https:\u002F\u002Fpypi.tuna.tsinghua.edu.cn\u002Fsimple\n```\n\n### 3. 
验证安装\n安装完成后，可在 Python 环境中运行以下命令验证：\n\n```python\nimport piml\nprint(piml.__version__)\n```\n\n## 基本使用\n\nPiML 的核心优势在于其**低代码工作流**。只需几行代码即可完成从数据加载、预处理、模型训练到解释和诊断的全过程。\n\n以下是一个基于低代码 API 的最小化使用示例：\n\n### 步骤 1: 初始化实验\n导入 `Experiment` 类并创建实验实例。\n\n```python\nfrom piml import Experiment\nexp = Experiment()\n```\n\n### 步骤 2: 数据加载与准备\n加载内置示例数据（如自行车共享数据），并进行自动摘要、预处理和质量检查。\n\n```python\n# 加载数据\nexp.data_loader()\n\n# 查看数据摘要\nexp.data_summary()\n\n# 数据预处理（处理缺失值、编码等）\nexp.data_prepare()\n\n# 数据质量检查\nexp.data_quality()\n\n# 特征选择\nexp.feature_select()\n\n# 探索性数据分析 (EDA)\nexp.eda()\n```\n\n### 步骤 3: 训练可解释模型\n一键训练内置的可解释模型列表（包括 GLM, GAM, Tree, EBM, GAMI-Net 等）。\n\n```python\nexp.model_train()\n```\n\n### 步骤 4: 模型解释与诊断\n生成全局\u002F局部解释图，进行模型诊断、对比及公平性分析。\n\n```python\n# 模型解释 (全局与局部)\nexp.model_explain()\nexp.model_interpret()\n\n# 模型诊断 (过拟合、弱区域识别等)\nexp.model_diagnose()\n\n# 模型对比\nexp.model_compare()\n\n# 公平性分析\nexp.model_fairness()\n```\n\n### 进阶：处理外部黑盒模型\nPiML 同样支持对任意黑盒模型（如 LightGBM, XGBoost 深度模型等）进行注册和解释。\n\n```python\nfrom lightgbm import LGBMClassifier\n\n# 训练并注册一个复杂的黑盒模型\nexp.model_train(LGBMClassifier(max_depth=7), name='LGBM-Deep')\n\n# 将其与原生可解释模型进行对比\nexp.model_compare()\n```\n\n---\n*更多详细示例（包括自定义数据上传、高代码 API 用法等），请访问官方 GitHub 仓库中的 `examples` 目录或在 Google Colab 中运行官方提供的 Demo Notebook。*","某金融风控团队正在开发一款用于审批小微企业贷款的信用评分模型，面临严格的监管审查要求，必须清晰解释每一个拒贷决定的依据。\n\n### 没有 PiML-Toolbox 时\n- **模型如“黑盒”**：团队使用复杂的集成算法虽提升了精度，但无法向监管机构解释为何特定特征（如现金流波动）导致拒贷，合规报告难以撰写。\n- **盲区排查困难**：缺乏系统化工具定位模型在特定细分群体（如初创科技企业）中的表现弱点，只能靠人工经验猜测，容易遗漏高风险区域。\n- **验证流程割裂**：评估准确性、公平性和鲁棒性需要拼接多个独立库，代码繁琐且结果不一致，导致模型上线前的验证周期长达数周。\n- **过拟合难发现**：难以直观区分模型是学到了真实规律还是在训练数据上“死记硬背”，常在部署后因环境变化导致性能骤降。\n\n### 使用 PiML-Toolbox 后\n- **白盒化决策**：利用内置的 EBM 或 GAMI-Net 等原生可解释模型，直接生成可视化的特征贡献图，让每个评分理由都有据可查，轻松通过合规审计。\n- **精准定位弱项**：通过 WeakSpot 功能自动切片分析，迅速锁定模型在“高负债初创企业”群体的预测偏差，针对性地补充数据或调整策略。\n- **一站式诊断**：在一个框架内完成从准确率、公平性差异测试到抗噪声鲁棒性评估的全流程，将验证时间从数周缩短至几天。\n- **防过拟合预警**：利用训练集与测试集的性能差距分析，自动识别过拟合区域，并结合保形预测技术量化不确定性，确保模型在真实环境中稳健运行。\n\nPiML-Toolbox 
将晦涩的机器学习模型转化为透明、可信且易于诊断的决策工具，让高风险场景下的 AI 落地不再因“不可解释”而受阻。","https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FSelfExplainML_PiML-Toolbox_7171cc40.png","SelfExplainML","Self-Explanatory Machine Learning","https:\u002F\u002Foss.gittoolsai.com\u002Favatars\u002FSelfExplainML_99533dc7.png","High performance ML models that are inherently interpretable",null,"https:\u002F\u002Fgithub.com\u002FSelfExplainML",[79],{"name":80,"color":81,"percentage":82},"Jupyter Notebook","#DA5B0B",100,1284,136,"2026-03-17T10:31:45","Apache-2.0",1,"未说明",{"notes":90,"python":88,"dependencies":91},"该工具是一个用于可解释机器学习的 Python 工具箱，支持低代码界面和高代码 API。它内置了多种可解释模型（如 GLM, GAM, EBM, GAMI-Net 等），并支持对任意黑盒模型进行诊断。虽然 README 未明确列出系统要求，但提供了 Google Colab 示例链接，表明其兼容云端 Notebook 环境。安装命令为 `pip install PiML`。注意：项目已于 2025 年 3 月宣布演进为 MoDeVa。",[92,93,94],"piml","solas-ai","lightgbm",[14,13],[97,98,99,100],"interpretable-machine-learning","low-code","ml-workflow","model-diagnostics","2026-03-27T02:49:30.150509","2026-04-09T19:33:24.815758",[104,109,114,119,124,129],{"id":105,"question_zh":106,"answer_zh":107,"source_url":108},15656,"如何在 Mac M1 (Apple Silicon) 架构上安装 PiML？","从 PiML 0.5.1 版本开始，已支持 Mac M1 架构（需使用 Python 3.8, 3.9 或 3.10）。如果遇到安装问题，请尝试以下步骤：\n1. 升级 pip 到最新版本：pip install pip -U\n2. 如果是 conda 环境且构建 wheel 失败（特别是涉及 cmake 的依赖如 qdldl），请先安装 cmake：conda install anaconda::cmake\n3. 然后运行：pip install piml\n4. 
若遇到 lightgbm 相关错误，可参考 StackOverflow 上的 MacOS 安装解决方案重新安装 lightgbm。","https:\u002F\u002Fgithub.com\u002FSelfExplainML\u002FPiML-Toolbox\u002Fissues\u002F4",{"id":110,"question_zh":111,"answer_zh":112,"source_url":113},15657,"如何保存训练好的模型以便下次直接加载使用，而无需重新训练？","可以使用 dill 库来保存和加载拟合后的模型（GLMRegressor 除外）。示例代码如下：\nimport dill\n# 获取模型并标记为已拟合\nclf = exp.get_model(\"GAM\").estimator \nclf.__sklearn_is_fitted__ = lambda : True\n# 保存模型\nwith open('name_model.pkl', 'wb') as file:\n    dill.dump(clf, file)\n# 加载模型\nwith open('name_model.pkl', 'rb') as file:\n    clf_load = dill.load(file)\n# 验证加载后的模型\ndata = exp.get_model(\"GAM\").get_data(train=True)[0]\nclf_load.predict(data)\n加载后，你可以将其注册到 PiML 工作流中进行解释：\npipeline = exp.make_pipeline(model=clf_load)\nexp.register(pipeline, \"loaded_model\")\nexp.model_interpret()","https:\u002F\u002Fgithub.com\u002FSelfExplainML\u002FPiML-Toolbox\u002Fissues\u002F14",{"id":115,"question_zh":116,"answer_zh":117,"source_url":118},15658,"为什么在二分类任务中报错提示不支持多分类，或者目标变量识别错误？","这是因为 PiML 默认将数据的最后一列视为目标变量。如果该列被识别为“类别型”（categorical）且有多个唯一值（例如 4 个不同的速度限制值），系统会自动将其判定为多分类任务，而当前版本可能尚不支持或多分类逻辑未触发。\n解决方法有两种：\n1. 明确指定目标变量列名：exp.data_prepare(target=\"你的目标列名\")\n2. 在 exp.data_summary() 中手动将误判的特征类型改为数值型（numerical），然后在 exp.data_prepare() 中选择正确的目标和任务类型。","https:\u002F\u002Fgithub.com\u002FSelfExplainML\u002FPiML-Toolbox\u002Fissues\u002F9",{"id":120,"question_zh":121,"answer_zh":122,"source_url":123},15659,"注册 TensorFlow\u002FKeras 模型时出现 'generator' object has no attribute 'shape' 错误怎么办？","该错误通常由以下两个原因导致：\n1. 响应变量（y）只包含一个类别（单类），导致模型无法进行有效训练或形状推断。请检查数据标签是否包含至少两个类别。\n2. 对于神经网络模型，通常在训练前需要对输入特征 x 进行标准化处理。\n请确保数据平衡且进行了适当的预处理（如标准化），然后再尝试将 Keras 模型转换为 Estimator 并注册到 PiML 中。","https:\u002F\u002Fgithub.com\u002FSelfExplainML\u002FPiML-Toolbox\u002Fissues\u002F32",{"id":125,"question_zh":126,"answer_zh":127,"source_url":128},15660,"pip install PiML 提示找不到匹配的版本（No matching distribution found）如何解决？","这通常是因为缺少必要的构建工具或环境配置问题，特别是在 ARM 架构（如 Mac M1\u002FM2）上。解决步骤如下：\n1. 
确保使用的是受支持的 Python 版本（3.8, 3.9, 3.10）。\n2. 在 conda 环境中，先安装 cmake：conda install anaconda::cmake\n3. 升级 pip：pip install --upgrade pip\n4. 再次尝试安装：pip install piml\n如果仍然失败，请检查错误日志中是否涉及特定包（如 qdldl）的编译错误，并确保系统已安装完整的编译环境。","https:\u002F\u002Fgithub.com\u002FSelfExplainML\u002FPiML-Toolbox\u002Fissues\u002F29",{"id":130,"question_zh":131,"answer_zh":132,"source_url":113},15661,"如何加载外部模型并在 PiML 中查看其可解释性指标？","加载外部模型（如 .pkl 文件）后，需要将其封装进 PiML 的 pipeline 并注册。步骤如下：\n1. 使用 dill 或 pickle 加载模型对象。\n2. 确保数据已加载并完成 summary 和 preparation 步骤，或者在注册时显式指定数据信息。\n3. 创建 pipeline 并注册：\n   pipeline = exp.make_pipeline(model=加载后的模型对象)\n   exp.register(pipeline, \"自定义模型名称\")\n4. 运行解释命令：\n   exp.model_interpret()\n详细用法可参考官方示例笔记：https:\u002F\u002Fcolab.research.google.com\u002Fgithub\u002FSelfExplainML\u002FPiML-Toolbox\u002Fblob\u002Fmain\u002Fexamples\u002FExample_ExternalModels.ipynb",[134,139,144,149,154,159,164,169,174,179,184,189,194,199,204,209,214,219,224],{"id":135,"version":136,"summary_zh":137,"released_at":138},90261,"V0.6.0","1. 支持使用 Spark 加载数据  \n2. 支持 H2O 模型注册  \n3. 支持基于数据的模型解释与说明  \n4. 在事后解释中新增弗里德曼 H 统计量  \n5. 增加数据完整性检查及更多异常值检测方法  \n6. 在二分类评估指标中新增对数损失和布里尔评分  \n7. 增加分段式性能诊断功能  \n8. 增加基于网格搜索和随机搜索的超参数调优功能  \n9. 流水线优化、代码清理及错误修复","2023-11-30T06:53:56",{"id":140,"version":141,"summary_zh":142,"released_at":143},90262,"V0.5.1","1. 添加无模型诊断测试 API。 2. 增加对更广泛计算环境的支持，包括 Mac ARM 架构。 3. 优化过拟合诊断模块，支持 AUC 指标。 4. 新增切片一维图，作为二维热力图的替代方案。 5. 代码重构及缺陷修复。","2023-09-26T15:45:15",{"id":145,"version":146,"summary_zh":147,"released_at":148},90263,"V0.5.0","1. 发布 PiML 用户指南。 2. 新增数据质量模块。 3. 新增可解释的 XGB1 模型。 4. 在 data_prepare 中新增训练集与测试集划分方法。 5. 改进 model_fairness 中的分箱搜索功能。 6. 修复部分 bug 并进行其他小的调整。","2023-05-03T11:55:26",{"id":150,"version":151,"summary_zh":152,"released_at":153},90264,"V0.4.3","1. 将模型保存为 pickle 文件，并将其与管道注册关联。\n2. 为所有高代码 API 添加结果返回功能。\n3. 为高代码 API 的使用添加错误提示信息。\n4. 在分布漂移功能中添加 KS 统计量和 WS 统计量。\n5. 改进模型公平性模块的用户界面。\n6. 
其他杂项改进和错误修复。","2023-03-17T05:53:53",{"id":155,"version":156,"summary_zh":157,"released_at":158},90265,"V0.4.2","1. 重构特征选择模块：   a) 新增基于皮尔逊相关性的策略；   b) 新增基于距离相关的策略；   c) 改进特征重要性评估策略；   d) 改进随机化条件独立性检验策略。 2. 重构模型公平性模块：   a) 新增不同模型之间的公平性对比功能；   b) 新增通过特征分箱提升公平性的方法；   c) 改进用于提升公平性的特征剔除策略；   d) 新增演示数据集“FairnessCreditSimu”。 3. 其他杂项改进及缺陷修复。","2022-12-15T09:41:36",{"id":160,"version":161,"summary_zh":162,"released_at":163},90266,"V0.4.1","1. 增加基于 RCIT 和卡方条件独立性检验的特征选择功能；\r\n2. 增加通过特征删除和阈值调整来缓解不公平性的功能；\r\n3. 完善高覆盖率的代码注释，以支持后续文档编写；\r\n4. 其他杂项改进及错误修复。","2022-11-15T16:06:37",{"id":165,"version":166,"summary_zh":167,"released_at":168},90267,"V0.4.0","1. 新增两个可解释的机器学习模型（FIGS 和 XGB2）；\r\n2. 新增集成 solas-ai 包的公平性测试低代码界面；\r\n3. 新增 model_diagnose 模块中的自定义指标低代码界面；\r\n4. 新增用于自定义训练集与测试集划分的高代码 API；\r\n5. 采用扰动离散特征的策略改进鲁棒性测试；\r\n6. 增加三种分布外样本选项，进一步完善韧性测试；\r\n7. 其他各项优化改进。","2022-10-31T01:00:51",{"id":170,"version":171,"summary_zh":172,"released_at":173},90268,"V0.3.3","1. 增加在原始尺度上展示协变量的支持。  \n2. 为 model_train、_interpret、_diagnose 和 _compare 添加 sample_weight 参数选项。  \n3. 增加对鲁棒性和韧性测试中指标进行更改的支持。  \n4. 通过直方图切片法添加基于 AUC 的 WeakSpot 功能。  \n5. 增加在终端中使用 PiML 高代码的支持。  \n6. 在 data_loader 中为演示数据集添加缓存功能。  \n7. 其他杂项更改。","2022-09-19T08:46:38",{"id":175,"version":176,"summary_zh":177,"released_at":178},90269,"V0.3.2","1. 增加对 scikit-learn≥0.24.2、xgboost≥1.4.2 和 matplotlib≥3.1.2 的支持。 2. 增加数据分布漂移的两样本检验。 3. 增加基于 GBM 特征重要性的特征选择功能。 4. 修改用于鲁棒性测试的噪声扰动方法。 5. 进行了一些其他小的改动。","2022-08-19T14:42:38",{"id":180,"version":181,"summary_zh":182,"released_at":183},90270,"V0.3.1","1. 为 LIME 局部解释添加居中选项  \n2. 为 GLM 和 ReLU-DNN 的局部精确可解释性添加居中选项  \n3. 改进二分类器和分类特征的残差图  \n4. 优化高阶 API 的命名，并提供详细的文档字符串  \n5. 修复 EBM 模型解释中的一个错误  \n6. 其他一些小的改动。","2022-08-01T03:53:08",{"id":185,"version":186,"summary_zh":187,"released_at":188},90271,"V0.3.0","1. Add classic statistical models (GLM, GAM, Tree).\r\n2. Add interpret methods for newly added models.\r\n3. Add overfit for model comparison.\r\n4. 
Improve reliability diagrams for binary classification.\r\n5. Improve accuracy plots for model comparison.\r\n6. Some miscellaneous changes.\r\n\r\n","2022-07-25T11:32:50",{"id":190,"version":191,"summary_zh":192,"released_at":193},90272,"V0.2.2","1. Update WeakSpot with training and testing data option\r\n2. Remove underfit analysis from the overfit tab\r\n3. Update reliability testing for binary classification\r\n4. Add cv tuning for quantile regression used by conformal regression.\r\n5. Some miscellaneous changes.","2022-07-06T05:46:10",{"id":195,"version":196,"summary_zh":197,"released_at":198},90273,"V0.2.1","1. Clean up deprecated APIs and dependency packages;\r\n2. Allow data_prepare with skipped data_summary.\r\n3. Improve the model_diagnose reliability method.\r\n4. Fix visualization bugs in model_explain and model_interpret.","2022-06-29T09:32:19",{"id":200,"version":201,"summary_zh":202,"released_at":203},90274,"V0.2.0","1. Major release of high-code APIs.\r\n2. Support registration of pre-trained models.\r\n3. Provide examples for 1 and 2.\r\n4. Provide wrappers for SHAP plots.\r\n5. Improve model_diagnose reliability testing. \r\n6. Some miscellaneous changes.","2022-06-26T11:34:08",{"id":205,"version":206,"summary_zh":207,"released_at":208},90275,"V0.1.4","1. Add the 'dropout_rate' hyperparameter for ReLU-DNN training.\r\n2. Add both feature and effect plots for local interpretability.\r\n3. Add the PSI plot to model_diagnose reliability testing.\r\n4. Improve the WeakSpot and Overfit\u002FUnderfit plots.\r\n5. Improve the robustness plot w.r.t. worst-sample ratios.\r\n6. Improve the LIME plot with local weights and effects.\r\n7. Resolve the ylabel truncation bug in the SHAP waterfall plot.\r\n8. Some miscellaneous changes.","2022-05-27T12:02:52",{"id":210,"version":211,"summary_zh":212,"released_at":213},90276,"V0.1.3","1. Add a violin plot of LLM coefficients for ReLU-DNN.\r\n2. 
Add a density plot for visualizing distribution shift in the resilience test.\r\n3. Update the LIME plot from coefficients to marginal effects.\r\n4. Add the segmented bandwidth plot for the reliability test.\r\n5. Fix the revert button for “exclude attribute” in data_summary.\r\n6. Some miscellaneous changes.","2022-05-24T14:13:08",{"id":215,"version":216,"summary_zh":217,"released_at":218},90277,"V0.1.2","1. Improved data loader for CSV data w\u002Fwo header. \r\n2. Improved data processing for categorical attributes with strings. \r\n3. Add a warning message for multiclass response data (not currently supported). \r\n4. Upgrade the shap library from 0.35 (which causes compilation issues for windows) to 0.40. \r\n5. Speed up Data summary (only calculate specific type summary info).","2022-05-16T14:09:21",{"id":220,"version":221,"summary_zh":222,"released_at":223},90278,"V0.1.1","1. Support manually uploading datasets from the user end, with examples to be provided.\r\n2. Add notification and error messages for the data loader widget (file size limit 10MB).\r\n3. Truncate feature names if needed for the plots if the string length is longer than 10.\r\n4. Add top3 (value, count) pairs for categorical data summary.\r\n5. Speed up the calculation of the robustness metric.\r\n6. Use predict_proba to calculate residuals for the classification tasks.","2022-05-12T08:19:46",{"id":225,"version":226,"summary_zh":76,"released_at":227},90279,"V0.1.0","2022-05-03T18:25:52"]