[{"data":1,"prerenderedAt":-1},["ShallowReactive",2],{"similar-PAIR-code--what-if-tool":3,"tool-PAIR-code--what-if-tool":61},[4,18,26,36,44,53],{"id":5,"name":6,"github_repo":7,"description_zh":8,"stars":9,"difficulty_score":10,"last_commit_at":11,"category_tags":12,"status":17},4358,"openclaw","openclaw\u002Fopenclaw","OpenClaw 是一款专为个人打造的本地化 AI 助手，旨在让你在自己的设备上拥有完全可控的智能伙伴。它打破了传统 AI 助手局限于特定网页或应用的束缚，能够直接接入你日常使用的各类通讯渠道，包括微信、WhatsApp、Telegram、Discord、iMessage 等数十种平台。无论你在哪个聊天软件中发送消息，OpenClaw 都能即时响应，甚至支持在 macOS、iOS 和 Android 设备上进行语音交互，并提供实时的画布渲染功能供你操控。\n\n这款工具主要解决了用户对数据隐私、响应速度以及“始终在线”体验的需求。通过将 AI 部署在本地，用户无需依赖云端服务即可享受快速、私密的智能辅助，真正实现了“你的数据，你做主”。其独特的技术亮点在于强大的网关架构，将控制平面与核心助手分离，确保跨平台通信的流畅性与扩展性。\n\nOpenClaw 非常适合希望构建个性化工作流的技术爱好者、开发者，以及注重隐私保护且不愿被单一生态绑定的普通用户。只要具备基础的终端操作能力（支持 macOS、Linux 及 Windows WSL2），即可通过简单的命令行引导完成部署。如果你渴望拥有一个懂你",349277,3,"2026-04-06T06:32:30",[13,14,15,16],"Agent","开发框架","图像","数据工具","ready",{"id":19,"name":20,"github_repo":21,"description_zh":22,"stars":23,"difficulty_score":10,"last_commit_at":24,"category_tags":25,"status":17},3808,"stable-diffusion-webui","AUTOMATIC1111\u002Fstable-diffusion-webui","stable-diffusion-webui 是一个基于 Gradio 构建的网页版操作界面，旨在让用户能够轻松地在本地运行和使用强大的 Stable Diffusion 图像生成模型。它解决了原始模型依赖命令行、操作门槛高且功能分散的痛点，将复杂的 AI 绘图流程整合进一个直观易用的图形化平台。\n\n无论是希望快速上手的普通创作者、需要精细控制画面细节的设计师，还是想要深入探索模型潜力的开发者与研究人员，都能从中获益。其核心亮点在于极高的功能丰富度：不仅支持文生图、图生图、局部重绘（Inpainting）和外绘（Outpainting）等基础模式，还独创了注意力机制调整、提示词矩阵、负向提示词以及“高清修复”等高级功能。此外，它内置了 GFPGAN 和 CodeFormer 等人脸修复工具，支持多种神经网络放大算法，并允许用户通过插件系统无限扩展能力。即使是显存有限的设备，stable-diffusion-webui 也提供了相应的优化选项，让高质量的 AI 艺术创作变得触手可及。",162132,"2026-04-05T11:01:52",[14,15,13],{"id":27,"name":28,"github_repo":29,"description_zh":30,"stars":31,"difficulty_score":32,"last_commit_at":33,"category_tags":34,"status":17},1381,"everything-claude-code","affaan-m\u002Feverything-claude-code","everything-claude-code 是一套专为 AI 编程助手（如 Claude Code、Codex、Cursor 等）打造的高性能优化系统。它不仅仅是一组配置文件，而是一个经过长期实战打磨的完整框架，旨在解决 AI 
代理在实际开发中面临的效率低下、记忆丢失、安全隐患及缺乏持续学习能力等核心痛点。\n\n通过引入技能模块化、直觉增强、记忆持久化机制以及内置的安全扫描功能，everything-claude-code 能显著提升 AI 在复杂任务中的表现，帮助开发者构建更稳定、更智能的生产级 AI 代理。其独特的“研究优先”开发理念和针对 Token 消耗的优化策略，使得模型响应更快、成本更低，同时有效防御潜在的攻击向量。\n\n这套工具特别适合软件开发者、AI 研究人员以及希望深度定制 AI 工作流的技术团队使用。无论您是在构建大型代码库，还是需要 AI 协助进行安全审计与自动化测试，everything-claude-code 都能提供强大的底层支持。作为一个曾荣获 Anthropic 黑客大奖的开源项目，它融合了多语言支持与丰富的实战钩子（hooks），让 AI 真正成长为懂上",144730,2,"2026-04-07T23:26:32",[14,13,35],"语言模型",{"id":37,"name":38,"github_repo":39,"description_zh":40,"stars":41,"difficulty_score":32,"last_commit_at":42,"category_tags":43,"status":17},2271,"ComfyUI","Comfy-Org\u002FComfyUI","ComfyUI 是一款功能强大且高度模块化的视觉 AI 引擎，专为设计和执行复杂的 Stable Diffusion 图像生成流程而打造。它摒弃了传统的代码编写模式，采用直观的节点式流程图界面，让用户通过连接不同的功能模块即可构建个性化的生成管线。\n\n这一设计巧妙解决了高级 AI 绘图工作流配置复杂、灵活性不足的痛点。用户无需具备编程背景，也能自由组合模型、调整参数并实时预览效果，轻松实现从基础文生图到多步骤高清修复等各类复杂任务。ComfyUI 拥有极佳的兼容性，不仅支持 Windows、macOS 和 Linux 全平台，还广泛适配 NVIDIA、AMD、Intel 及苹果 Silicon 等多种硬件架构，并率先支持 SDXL、Flux、SD3 等前沿模型。\n\n无论是希望深入探索算法潜力的研究人员和开发者，还是追求极致创作自由度的设计师与资深 AI 绘画爱好者，ComfyUI 都能提供强大的支持。其独特的模块化架构允许社区不断扩展新功能，使其成为当前最灵活、生态最丰富的开源扩散模型工具之一，帮助用户将创意高效转化为现实。",107888,"2026-04-06T11:32:50",[14,15,13],{"id":45,"name":46,"github_repo":47,"description_zh":48,"stars":49,"difficulty_score":32,"last_commit_at":50,"category_tags":51,"status":17},4721,"markitdown","microsoft\u002Fmarkitdown","MarkItDown 是一款由微软 AutoGen 团队打造的轻量级 Python 工具，专为将各类文件高效转换为 Markdown 格式而设计。它支持 PDF、Word、Excel、PPT、图片（含 OCR）、音频（含语音转录）、HTML 乃至 YouTube 链接等多种格式的解析，能够精准提取文档中的标题、列表、表格和链接等关键结构信息。\n\n在人工智能应用日益普及的今天，大语言模型（LLM）虽擅长处理文本，却难以直接读取复杂的二进制办公文档。MarkItDown 恰好解决了这一痛点，它将非结构化或半结构化的文件转化为模型“原生理解”且 Token 效率极高的 Markdown 格式，成为连接本地文件与 AI 分析 pipeline 的理想桥梁。此外，它还提供了 MCP（模型上下文协议）服务器，可无缝集成到 Claude Desktop 等 LLM 应用中。\n\n这款工具特别适合开发者、数据科学家及 AI 研究人员使用，尤其是那些需要构建文档检索增强生成（RAG）系统、进行批量文本分析或希望让 AI 
助手直接“阅读”本地文件的用户。虽然生成的内容也具备一定可读性，但其核心优势在于为机器",93400,"2026-04-06T19:52:38",[52,14],"插件",{"id":54,"name":55,"github_repo":56,"description_zh":57,"stars":58,"difficulty_score":10,"last_commit_at":59,"category_tags":60,"status":17},4487,"LLMs-from-scratch","rasbt\u002FLLMs-from-scratch","LLMs-from-scratch 是一个基于 PyTorch 的开源教育项目，旨在引导用户从零开始一步步构建一个类似 ChatGPT 的大型语言模型（LLM）。它不仅是同名技术著作的官方代码库，更提供了一套完整的实践方案，涵盖模型开发、预训练及微调的全过程。\n\n该项目主要解决了大模型领域“黑盒化”的学习痛点。许多开发者虽能调用现成模型，却难以深入理解其内部架构与训练机制。通过亲手编写每一行核心代码，用户能够透彻掌握 Transformer 架构、注意力机制等关键原理，从而真正理解大模型是如何“思考”的。此外，项目还包含了加载大型预训练权重进行微调的代码，帮助用户将理论知识延伸至实际应用。\n\nLLMs-from-scratch 特别适合希望深入底层原理的 AI 开发者、研究人员以及计算机专业的学生。对于不满足于仅使用 API，而是渴望探究模型构建细节的技术人员而言，这是极佳的学习资源。其独特的技术亮点在于“循序渐进”的教学设计：将复杂的系统工程拆解为清晰的步骤，配合详细的图表与示例，让构建一个虽小但功能完备的大模型变得触手可及。无论你是想夯实理论基础，还是为未来研发更大规模的模型做准备",90106,"2026-04-06T11:19:32",[35,15,13,14],{"id":62,"github_repo":63,"name":64,"description_en":65,"description_zh":66,"ai_summary_zh":66,"readme_en":67,"readme_zh":68,"quickstart_zh":69,"use_case_zh":70,"hero_image_url":71,"owner_login":72,"owner_name":73,"owner_avatar_url":74,"owner_bio":75,"owner_company":76,"owner_location":76,"owner_email":76,"owner_twitter":76,"owner_website":77,"owner_url":78,"languages":79,"stars":113,"forks":114,"last_commit_at":115,"license":116,"difficulty_score":32,"env_os":117,"env_gpu":118,"env_ram":118,"env_deps":119,"category_tags":127,"github_topics":128,"view_count":10,"oss_zip_url":76,"oss_zip_packed_at":76,"status":17,"created_at":135,"updated_at":136,"faqs":137,"releases":167},3458,"PAIR-code\u002Fwhat-if-tool","what-if-tool","Source code\u002Fwebpage\u002Fdemos for the What-If Tool","What-If Tool 是一款由 Google PAIR 团队开发的可视化交互工具，旨在帮助用户轻松探索和理解“黑盒”机器学习模型（包括分类和回归任务）。它解决了传统模型分析中依赖代码、难以直观洞察模型行为及公平性问题的痛点。通过图形界面，用户无需编写任何代码，即可对大量数据样本进行推理测试，直观查看预测结果分布，并手动或编程修改样本特征，实时观察模型输出的变化，从而深入探究模型决策逻辑。\n\n该工具特别适合机器学习开发者、数据科学家及研究人员使用，同时也为希望向非技术利益相关者展示模型行为的团队提供了便利。其独特亮点在于支持在 TensorBoard、Jupyter Notebook 或 Google Colab 中无缝集成，不仅能分析 TensorFlow Estimator 模型，还能连接云端 AI Platform 
托管的模型甚至自定义预测函数。此外，它还内置了强大的子集分析功能，可辅助检测模型在不同数据群体中的性能差异与潜在偏见，让模型调试与公平性评估变得更加简单高效。","# What-If Tool\n\n![What-If Tool Screenshot](\u002Fimg\u002Fwit-smile-intro.png \"What-If Tool Screenshot\")\n\nThe [What-If Tool](https:\u002F\u002Fpair-code.github.io\u002Fwhat-if-tool) (WIT) provides an easy-to-use interface for expanding\nunderstanding of a black-box classification or regression ML model.\nWith the plugin, you can perform inference on a large set of examples and\nimmediately visualize the results in a variety of ways.\nAdditionally, examples can be edited manually or programmatically and re-run\nthrough the model in order to see the results of the changes.\nIt contains tooling for investigating model performance and fairness over\nsubsets of a dataset.\n\nThe purpose of the tool is to give people a simple, intuitive, and powerful\nway to play with a trained ML model on a set of data through a visual interface\nwith absolutely no code required.\n\nThe tool can be accessed through TensorBoard or as an extension in a Jupyter\nor\n[Colab](https:\u002F\u002Fcolab.research.google.com\u002Fgithub\u002Fpair-code\u002Fwhat-if-tool\u002Fblob\u002Fmaster\u002FWhat_If_Tool_Notebook_Usage.ipynb)\nnotebook.\n\n## I don’t want to read this document. 
Can I just play with a demo?\n\nCheck out the large set of web and colab demos in the\n[demo section of the What-If Tool website](https:\u002F\u002Fpair-code.github.io\u002Fwhat-if-tool\u002Findex.html#demos).\n\nTo build the web demos yourself:\n* [Binary classifier for UCI Census dataset salary prediction](https:\u002F\u002Fpair-code.github.io\u002Fwhat-if-tool\u002Fuci.html)\n  * Dataset: [UCI Census](https:\u002F\u002Farchive.ics.uci.edu\u002Fml\u002Fdatasets\u002Fcensus+income)\n  * Task: Predict whether a person earns more or less than $50k based on their\n    census information\n  * To build and run the demo from code:\n    `bazel run wit_dashboard\u002Fdemo:demoserver`\n    then navigate to `http:\u002F\u002Flocalhost:6006\u002Fwit-dashboard\u002Fdemo.html`\n* [Binary classifier for smile detection in images](https:\u002F\u002Fpair-code.github.io\u002Fwhat-if-tool\u002Fimage.html)\n  * Dataset: [CelebA](http:\u002F\u002Fmmlab.ie.cuhk.edu.hk\u002Fprojects\u002FCelebA.html)\n  * Task: Predict whether the person in an image is smiling\n  * To build and run the demo from code:\n    `bazel run wit_dashboard\u002Fdemo:imagedemoserver`\n    then navigate to `http:\u002F\u002Flocalhost:6006\u002Fwit-dashboard\u002Fimage_demo.html`\n* [Multiclass classifier for Iris dataset](https:\u002F\u002Fpair-code.github.io\u002Fwhat-if-tool\u002Firis.html)\n  * Dataset: [UCI Iris](https:\u002F\u002Farchive.ics.uci.edu\u002Fml\u002Fdatasets\u002Firis)\n  * Task: Predict which of three classes of iris flowers a flower falls\n    into based on 4 measurements of the flower\n  * To build and run the demo from code:\n    `bazel run wit_dashboard\u002Fdemo:irisdemoserver`\n    then navigate to `http:\u002F\u002Flocalhost:6006\u002Fwit-dashboard\u002Firis_demo.html`\n* [Regression model for UCI Census dataset age prediction](https:\u002F\u002Fpair-code.github.io\u002Fwhat-if-tool\u002Fage.html)\n  * Dataset: [UCI 
Census](https:\u002F\u002Farchive.ics.uci.edu\u002Fml\u002Fdatasets\u002Fcensus+income)\n  * Task: Predict the age of a person based on their census information\n  * To build and run the demo from code:\n    `bazel run wit_dashboard\u002Fdemo:agedemoserver`\n    then navigate to `http:\u002F\u002Flocalhost:6006\u002Fwit-dashboard\u002Fage_demo.html`\n  * This demo model returns attribution values in addition to predictions (through the use of vanilla gradients)\n    in order to demonstrate how the tool can display attribution values from predictions.\n\n## What do I need to use it in a jupyter or colab notebook?\n\nYou can use the What-If Tool to analyze a classification or regression\n[TensorFlow Estimator](https:\u002F\u002Fwww.tensorflow.org\u002Fapi_docs\u002Fpython\u002Ftf\u002Festimator\u002FEstimator)\nthat takes TensorFlow Example or SequenceExample protos\n(data points) as inputs directly in a jupyter or colab notebook.\n\nAdditionally, the What-If Tool can analyze\n[AI Platform Prediction-hosted](https:\u002F\u002Fcloud.google.com\u002Fml-engine\u002F) classification\nor regression models that take TensorFlow Example protos, SequenceExample protos,\nor raw JSON objects as inputs.\n\nYou can also use What-If Tool with a custom prediction function that takes\nTensorFlow examples and produces predictions. In this mode, you can load any model\n(including non-TensorFlow models that don't use Example protos as inputs) as\nlong as your custom function's input and output specifications are correct.\n\nWith either AI Platform models or a custom prediction function, the What-If Tool can\ndisplay and make use of attribution values for each input feature in relation to each\nprediction. 
See the below section on attribution values for more information.\n\nIf you want to train an ML model from a dataset and explore the dataset and\nmodel, check out the [What_If_Tool_Notebook_Usage.ipynb notebook](https:\u002F\u002Fcolab.research.google.com\u002Fgithub\u002Fpair-code\u002Fwhat-if-tool\u002Fblob\u002Fmaster\u002FWhat_If_Tool_Notebook_Usage.ipynb) in colab, which starts from a CSV file,\nconverts the data to tf.Example protos, trains a classifier, and then uses the\nWhat-If Tool to show the classifier performance on the data.\n\n## What do I need to use it in TensorBoard?\n\nA walkthrough of using the tool in TensorBoard, including a pretrained model and\ntest dataset, can be found on the\n[What-If Tool page on the TensorBoard website](https:\u002F\u002Fwww.tensorflow.org\u002Ftensorboard\u002Fr2\u002Fwhat_if_tool).\n\nTo use the tool in TensorBoard, only the following information needs to be provided:\n\n* The model server host and port, served using\n  [TensorFlow Serving](https:\u002F\u002Fgithub.com\u002Ftensorflow\u002Fserving). 
The model can\n  use the TensorFlow Serving Classification, Regression, or Predict API.\n    * Information on how to create a saved model with the `Estimator` API that\n      will use the appropriate TensorFlow Serving Classification or Regression\n      APIs can be found in the [saved model documentation](https:\u002F\u002Fwww.tensorflow.org\u002Fguide\u002Fsaved_model#using_savedmodel_with_estimators)\n      and in this [helpful tutorial](http:\u002F\u002Fshzhangji.com\u002Fblog\u002F2018\u002F05\u002F14\u002Fserve-tensorflow-estimator-with-savedmodel\u002F).\n      Models that use these APIs are the simplest to use with the What-If Tool\n      as they require no set-up in the tool beyond setting the model type.\n    * If the model uses the Predict API, the input must be serialized tf.Example\n      or tf.SequenceExample protos and the output must be the following:\n        * For classification models, the output must include a 2D float tensor\n          containing a list of class probabilities for all possible class\n          indices for each inferred example.\n        * For regression models, the output must include a float tensor\n          containing a single regression score for each inferred example.\n    * The What-If Tool queries the served model using the gRPC API, not the\n      RESTful API. See the TensorFlow Serving\n      [docker documentation](https:\u002F\u002Fwww.tensorflow.org\u002Fserving\u002Fdocker) for\n      more information on the two APIs. 
The docker image uses port 8500 for the\n      gRPC API, so if using the docker approach, the port to specify in the\n      What-If Tool will be 8500.\n    * Alternatively, instead of querying a model hosted by TensorFlow Serving,\n      you can provide a python function for model prediction to the tool through\n      the \"--whatif-use-unsafe-custom-prediction\" runtime argument as\n      described in more detail below.\n* A TFRecord file of tf.Examples or tf.SequenceExamples to perform inference on\n  and the number of examples to load from the file.\n    * Can handle up to tens of thousands of examples. The exact amount depends\n      on the size of each example (how many features there are and how large the\n      feature values are).\n    * The file must be in the logdir provided to TensorBoard on startup.\n      Alternatively, you can provide another directory to allow file loading\n      from, through use of the --whatif-data-dir=PATH runtime parameter.\n* An indication of whether the model is a regression, binary classification or\n  multi-class classification model.\n* An optional vocab file for the labels for a classification model. This file\n  maps the predicted class indices returned from the model prediction into class\n  labels. The text file contains one label per line, corresponding to the class\n  indices returned by the model, starting with index 0.\n    * If this file is provided, then the dashboard will show the predicted\n      labels for a classification model. If not, it will show the predicted\n      class indices.\n\nAlternatively, the What-If Tool can be used to explore a dataset directly from\na CSV file. 
See the next section for details.\n\nThe information can be provided in the settings dialog screen, which pops up\nautomatically upon opening this tool and is accessible through the settings\nicon button in the top-right of the tool.\nThe information can also be provided directly through URL parameters.\nChanging the settings through the controls automatically updates the URL so that\nit can be shared with others for them to view the same data in the What-If Tool.\n\n### All I have is a dataset. What can I do in TensorBoard? Where do I start?\n\nIf you just want to explore the information in a CSV file using the What-If Tool\nin TensorBoard, just set the path to the examples to the file (with a \".csv\"\nextension) and leave the inference address and model name fields blank.\nThe first line of the CSV file must contain column names. Each line after that\ncontains one example from the dataset, with values for each of the columns\ndefined on the first line. The pipe character (\"|\") delimits separate feature\nvalues in a list of feature values for a given feature.\n\nIn order to make use of the model understanding features of the tool, you can\nhave columns in your dataset that contain the output from an ML model. If your\nfile has a column named \"predictions__probabilities\" with a pipe-delimited (\"|\") list of\nprobability scores (between 0 and 1), then the tool will treat those as the\noutput scores of a classification model. If your file has a numeric column named\n\"predictions\" then the tool will treat those as the output of a regression model. In\nthis way, the tool can be used to analyze any dataset and the results of any\nmodel run offline against the dataset. 
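For illustration, a CSV in the format just described might look like the following sketch; the column names and values are made up, and the parsing shown only makes the pipe-delimiting concrete (it is not part of the tool):

```python
# Illustrative CSV in the format described above: a header row of column
# names, one example per line, "|" separating multiple values within a single
# feature, and a "predictions__probabilities" column of classifier scores.
csv_text = """age,hobbies,predictions__probabilities
39,reading|hiking,0.82|0.18
25,chess,0.35|0.65
"""

header, *rows = csv_text.strip().splitlines()
columns = header.split(",")
examples = [dict(zip(columns, row.split(","))) for row in rows]

# Pipe-delimited values decompose into lists of feature values / class scores.
scores = [float(s) for s in examples[0]["predictions__probabilities"].split("|")]
print(scores)  # [0.82, 0.18]
```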
Note that in this mode, the examples\naren't editable as there is no way to get new inference results when an example\nchanges.\n\n## What can it do?\n\nDetails on the capabilities of the tool, including a guided walkthrough, can be\nfound on the [What-If Tool website](https:\u002F\u002Fpair-code.github.io\u002Fwhat-if-tool).\nHere is a basic rundown of what it can do:\n\n* Visualize a dataset of TensorFlow Example protos.\n  * The main panel shows the dataset using [Facets Dive](https:\u002F\u002Fpair-code.github.io\u002Ffacets),\n    where the examples can be organized\u002Fsliced\u002Fpositioned\u002Fcolored by any of the\n    dataset’s features.\n    * The examples can also be organized by the results of their inferences.\n      * For classification models, this includes inferred label, confidence of\n        inferred label, and inference correctness.\n      * For regression models, this includes inferred score and amount of error\n        (including absolute or squared error) in the inference.\n  * A selected example can be viewed in detail in the side panel, showing all\n    feature values for all features of that example.\n  * For examples that contain an encoded image in a bytes feature named\n    \"image\u002Fencoded\", Facets Dive will create a thumbnail of the image to display\n    the point, and the full-size image is visible in the side panel for a\n    selected example.\n  * Aggregate statistics for all loaded examples can be viewed in the side panel\n    using [Facets Overview](https:\u002F\u002Fpair-code.github.io\u002Ffacets\u002F).\n\n* Visualize the results of the inference\n  * By default, examples in the main panel are colored by their inference\n    results.\n  * The examples in the main panel can be organized into confusion matrices and\n    other custom layouts to show the inference results faceted by a number of\n    different features, or faceted\u002Fpositioned by inference result, allowing the\n    creation of small multiples of 1D and 
2D histograms and scatter plots.\n  * For a selected example, detailed inference results (e.g. predicted classes\n    and their confidence scores) are shown in the side panel.\n  * If the model returns attribution values in addition to predictions, they\n    are displayed for each selected example, and the attribution values can be\n    used to control custom layouts and as dimensions to slice the dataset on\n    for performance analysis.\n\n* Explore counterfactual examples\n  * For classification models, for any selected example, with one click you can\n    compare the example to the example most-similar to it but which is\n    classified as a different class.\n  * Similarity is calculated based on the distribution of feature values across\n    all loaded examples and similarity can be calculated using either L1 or L2\n    distance.\n    * Distance is normalized between features by:\n      * For numeric features, the distance between values is divided by the\n        standard deviation of the values across all examples.\n      * For categorical features, the distance is 0 if the values are the same,\n        otherwise the distance is the probability that any two examples have\n        the same value for that feature across all examples.\n  * In notebook mode, the tool also allows you to set a custom distance function\n    using set_custom_distance_fn in WitConfigBuilder, where that function is\n    used to compute closest counterfactuals instead. 
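As a rough illustration of that normalization (a sketch under assumed feature names and an assumed categorical mismatch cost, not WIT's exact implementation), a custom distance function might look like:

```python
import statistics

def make_distance_fn(dataset, numeric_features, mismatch_cost=1.0):
    # Precompute per-feature standard deviations across all loaded examples;
    # `or 1.0` guards against a zero std for constant features.
    stds = {f: statistics.pstdev(ex[f] for ex in dataset) or 1.0
            for f in numeric_features}

    def distance(a, b):
        total = 0.0
        for f in a:
            if f in stds:
                # Numeric: absolute difference scaled by the feature's std dev.
                total += abs(a[f] - b[f]) / stds[f]
            else:
                # Categorical: zero when equal, a fixed cost otherwise
                # (a placeholder for the probability-based term above).
                total += 0.0 if a[f] == b[f] else mismatch_cost
        return total

    return distance

data = [{"age": 30, "job": "chef"}, {"age": 40, "job": "pilot"}]
dist = make_distance_fn(data, numeric_features=["age"])
print(dist(data[0], data[1]))  # 3.0 = |30-40|/5.0 + 1.0
```

The exact signature expected by set_custom_distance_fn is documented in WitConfigBuilder.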
As in the case with\n    custom_predict_fn, the custom distance function can be any python function.\n\n* Edit a selected example in the browser and re-run inference and visualize the\n  difference in the inference results.\n  * See auto-generated partial dependence plots, which are plots that for every\n    feature show the change in inference results as that feature has its value\n    changed to different valid values for that feature.\n  * Edit\u002Fadd\u002Fremove any feature or feature value in the side panel and re-run\n    inference on the edited datapoint. A history of the inference values of that\n    point as it is edited and re-inferred is shown.\n  * For examples that contain encoded images, upload your own image and re-run\n    inference.\n  * Clone an existing example for editing\u002Fcomparison.\n  * Revert edits to an edited example.\n\n* Compare the results of two models on the same input data.\n  * If you provide two models to the tool during setup, it will run inference\n    with the provided data on both models and you can compare the results\n    between the two models using all the features defined above.\n\n* If using a binary classification model and your examples include a feature\n  that describes the true label, you can do the following:\n  * See the ROC curve and numeric confusion matrices in the side panel,\n    including the point on the curve where your model lives, given the current\n    positive classification threshold value.\n  * See separate ROC curves and numeric confusion matrices split out for subsets\n    of the dataset, sliced on any feature or features of your dataset (e.g. by\n    gender).\n  * Manually adjust the positive classification threshold (or thresholds, if\n    slicing the dataset by a feature) and see the difference in inference\n    results, ROC curve position and confusion matrices immediately.\n  * Set the positive classification thresholds with one click based on concepts\n    such as the cost of a false positive vs a false negative and satisfying\n    fairness measures such as equality of opportunity or demographic parity.\n\n* If using a multi-class classification model and your examples include a\n  feature that describes the true label, you can do the following:\n  * See a confusion matrix in the side panel for all classifications and all\n    classes.\n  * See separate confusion matrices split out for subsets of the dataset, sliced\n    on any feature or features of your dataset.\n* If using a regression model and your examples include a feature that describes\n  the true label, you can do the following:\n  * See the mean error, mean absolute error and mean squared error across the\n    dataset.\n  * See those same error calculations split out for\n    subsets of the dataset, sliced on any feature or features of your dataset.\n\n## Who is it for?\nWe imagine WIT to be useful for a wide variety of users.\n* ML researchers and model developers - Investigate your datasets and models and\n  explore inference results. Poke at the data and model to gain insights, for\n  tasks such as debugging strange results and looking into ML fairness.\n* Non-technical stakeholders - Gain an understanding of the performance of a\n  model on a dataset. 
Try it out with your own data.\n* Lay users - Learn about machine learning by interactively playing with\n  datasets and models.\n\n## Notebook mode details\n\nAs seen in the [example notebook](https:\u002F\u002Fcolab.research.google.com\u002Fgithub\u002Fpair-code\u002Fwhat-if-tool\u002Fblob\u002Fmaster\u002FWhat_If_Tool_Notebook_Usage.ipynb),\ncreating the `WitWidget` object is what causes the What-If Tool to be displayed\nin an output cell. The `WitWidget` object takes a `WitConfigBuilder` object as a\nconstructor argument. The `WitConfigBuilder` object specifies the data and model\ninformation that the What-If Tool will use.\n\nThe WitConfigBuilder object takes a list of tf.Example or tf.SequenceExample\nprotos as a constructor argument. These protos will be shown in the tool and\ninferred in the specified model.\n\nThe model to be used for inference by the tool can be specified in many ways:\n- As a TensorFlow [Estimator](https:\u002F\u002Fwww.tensorflow.org\u002Fguide\u002Festimators)\n  object that is provided through the `set_estimator_and_feature_spec` method.\n  In this case the inference will be done inside the notebook using the\n  provided estimator.\n- As a model hosted by [AI Platform Prediction](https:\u002F\u002Fcloud.google.com\u002Fml-engine\u002F)\n  through the `set_ai_platform_model` method.\n- As a custom prediction function provided through `set_custom_predict_fn` method.\n  In this case WIT will directly call the function for inference.\n- As an endpoint for a model being served by [TensorFlow Serving](https:\u002F\u002Fgithub.com\u002Ftensorflow\u002Fserving),\n  through the `set_inference_address` and `set_model_name` methods. In this case\n  the inference will be done on the model server specified. 
To query a model served\n  on host \"localhost\" on port 8888, named \"my_model\", you would set on your\n  builder\n  `builder.set_inference_address('localhost:8888').set_model_name('my_model')`.\n\nSee the documentation of [WitConfigBuilder](https:\u002F\u002Fgithub.com\u002Fpair-code\u002Fwhat-if-tool\u002Fblob\u002Fmaster\u002Fwitwidget\u002Fnotebook\u002Fvisualization.py)\nfor all options you can provide, including how to specify other model types\n(defaults to binary classification) and how to specify an optional second model\nto compare to the first model.\n\n### How can the What-If Tool use attribution values and other prediction-time information?\n\nFeature attribution values are numeric values for each input feature to an ML model that\nindicate how much impact that feature value had on the model's prediction.\nThere are a variety of approaches to get feature attribution values for a prediction from an ML model,\nincluding [SHAP](https:\u002F\u002Fgithub.com\u002Fslundberg\u002Fshap),\n[Integrated Gradients](https:\u002F\u002Fgithub.com\u002Fankurtaly\u002FIntegrated-Gradients),\n[SmoothGrad](https:\u002F\u002Fpair-code.github.io\u002Fsaliency\u002F), and more.\n\nThey can be a powerful way of analyzing how a model reacts to certain input values beyond\nsimply studying the effect that changing individual feature values has on model\npredictions as is done with partial dependence plots.\nSome attribution techniques require access to a model's internals, such as the gradient-based methods,\nwhereas others can be performed on black-box models. Regardless, the What-If Tool can visualize the\nresults of attribution methods in addition to the standard model prediction results.\n\nThere are two ways to use the What-If Tool to visualize attribution values. 
If you have deployed a\nmodel to Cloud AI Platform with the explainability feature enabled, and provide this model to\nthe tool through the standard `set_ai_platform_model` method, then attribution values will\nautomatically be generated and visualized by the tool with no additional setup needed.\nIf you wish to view attribution values for a different model setup, this can be accomplished through\nuse of the custom prediction function.\n\nAs described in the `set_custom_predict_fn` documentation in\n[WitConfigBuilder](https:\u002F\u002Fgithub.com\u002Fpair-code\u002Fwhat-if-tool\u002Fblob\u002Fmaster\u002Fwitwidget\u002Fnotebook\u002Fvisualization.py), this method must return a list of the same size as the number of examples\nprovided to it, with each list entry representing the prediction-time information for that example.\nIn the case of a standard model with no attribution information, the list entry is just a number\n(in the case of a regression model), or a list of class probabilities (in the case of a classification model).\n\nHowever, if there is attribution or other prediction-time information, then the list entry\ncan instead be a dictionary, with the standard model prediction output under the `predictions` key. Attribution\ninformation can be returned under the `attributions` key and any other supplemental information under its own\ndescriptive key. The exact format of the attributions and other supplemental information can be found in the code\ndocumentation linked above.\n\nIf attribution values are provided to the What-If Tool, they can be used in a number of ways. First, when selecting\na datapoint in the Datapoint Editor tab, the attribution values are displayed next to each feature value and the\nfeatures can be ordered by their attribution strength instead of alphabetically. 
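A minimal sketch of that dictionary form for a custom prediction function (the feature names, attribution values, and example inputs are placeholders):

```python
import random

def custom_predict_fn(examples):
    # One entry per example; the dict form carries the standard prediction
    # under "predictions" plus attributions and any other supplemental keys.
    results = []
    for _ in examples:
        p = random.random()
        results.append({
            "predictions": [p, 1 - p],                    # class probabilities
            "attributions": {"age": 0.4, "hours": -0.1},  # per-feature impact (placeholders)
        })
    return results

out = custom_predict_fn(["ex1", "ex2"])
print(len(out), sorted(out[0]))  # 2 ['attributions', 'predictions']
```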
Additionally, the feature values\nare colored by their attribution values for quick interpretation of attribution strengths.\n\nBeyond displaying the attribution values for the selected datapoint, the attribution values for each feature can be\nused in the tool in the same ways as any other feature of the datapoints. They can be selected in the\ndatapoints visualization controls to create custom scatter plots and histograms.\nFor example, you can create a scatterplot showing the relationship between the attribution values of two different features,\nwith the datapoints colored by the predicted result from the model.\nThey can also be used in the Performance tab as a way to slice a dataset for comparing performance statistics of different slices.\nFor example, you can quickly compare the aggregate performance of a model on datapoints with low attribution of a\nspecified feature, against the datapoints with high attribution of that feature.\n\nAny other supplemental information returned from a custom prediction function will appear in the tool as a\nfeature named after its key in the dictionary.\nThese values can also be used in the same way, driving custom visualizations and as a dimension to slice when analyzing\naggregate model performance.\n\nWhen a datapoint is edited and then re-inferred through the model with the \"Run inference\" button, the attributions and\nother supplemental information are recalculated and updated in the tool.\n\nFor an example of returning attribution values from a custom prediction function (in this case using the SHAP library to\nget attributions), see the [WIT COMPAS with SHAP notebook](https:\u002F\u002Fcolab.research.google.com\u002Fgithub\u002FPAIR-code\u002Fwhat-if-tool\u002Fblob\u002Fmaster\u002FWIT_COMPAS_with_SHAP.ipynb).\n\n### How do I enable it for use in a Jupyter notebook?\nFirst, install and enable WIT for Jupyter through the following commands:\n```sh\npip install witwidget\njupyter nbextension install --py 
--symlink --sys-prefix witwidget\njupyter nbextension enable --py --sys-prefix witwidget\n```\n\nThen, use it as seen at the bottom of the\n[What_If_Tool_Notebook_Usage.ipynb notebook](.\u002FWhat_If_Tool_Notebook_Usage.ipynb).\n\n### How do I enable it for use in a Colab notebook?\nInstall the widget into the runtime of the notebook kernel by running a cell\ncontaining:\n```\n!pip install witwidget\n```\n\nThen, use it as seen at the bottom of the\n[What_If_Tool_Notebook_Usage.ipynb notebook](https:\u002F\u002Fcolab.research.google.com\u002Fgithub\u002Fpair-code\u002Fwhat-if-tool\u002Fblob\u002Fmaster\u002FWhat_If_Tool_Notebook_Usage.ipynb).\n\n### How do I enable it for use in a JupyterLab or Cloud AI Platform notebook?\nWIT has been tested in JupyterLab versions 1.x, 2.x, and 3.x.\n\nInstall and enable WIT for JupyterLab 3.x by running a cell containing:\n```\n!pip install witwidget\n!jupyter labextension install wit-widget\n!jupyter labextension install @jupyter-widgets\u002Fjupyterlab-manager\n```\nNote that you may need to specify the correct version of jupyterlab-manager for\nyour JupyterLab version as per\nhttps:\u002F\u002Fwww.npmjs.com\u002Fpackage\u002F@jupyter-widgets\u002Fjupyterlab-manager.\n\nNote that you may need to run `!sudo jupyter labextension ...` commands depending on your notebook setup.\n\nUse of WIT after installation is the same as with the other notebook installations.\n\n## Can I use a custom prediction function in TensorBoard?\nYes. You can do this by defining a python function named `custom_predict_fn`\nwhich takes two arguments: a list of examples to perform inference on, and the\nserving bundle object which contains information about the model to query.\nThe function should return a list of results, one entry per example provided.\nFor regression models, the result is just a number. 
For classification models,\nthe result is a list of numbers, representing the class scores for each possible\nclass.\nHere is a minimal example that just returns random results:\n\n```python\nimport random\n\n# The function name \"custom_predict_fn\" must be exact.\ndef custom_predict_fn(examples, serving_bundle):\n  # Examples are a list of TFRecord objects, each of which contains the features of one datapoint.\n  # serving_bundle is a dictionary that contains the setup information provided to the tool,\n  # such as server address, model name, model version, etc.\n\n  number_of_examples = len(examples)\n  results = []\n  for _ in range(number_of_examples):\n    score = random.random()\n    results.append([score, 1 - score]) # For binary classification\n    # results.append(score) # For regression\n  return results\n```\n\nDefine this function in a file you save to disk. For this example, let's assume\nthe file is saved as `\u002Ftmp\u002Fmy_custom_predict_function.py`.\nThen launch the TensorBoard server with `tensorboard --whatif-use-unsafe-custom-prediction \u002Ftmp\u002Fmy_custom_predict_function.py`,\nand the function will be invoked once you have set up your data and model in\nthe What-If Tool setup dialog.\nThe `unsafe` means that the function is not sandboxed, so make sure that\nyour function doesn't do anything destructive, such as accidentally deleting your\nexperiment data.\n\n## How can I help develop it?\n\nCheck out the [development guide](.\u002FDEVELOPMENT.md).\n\n## What's new in WIT?\n\nCheck out the [release notes](.\u002FRELEASE.md).\n","# 假设分析工具\n\n![假设分析工具截图](\u002Fimg\u002Fwit-smile-intro.png \"假设分析工具截图\")\n\n[假设分析工具](https:\u002F\u002Fpair-code.github.io\u002Fwhat-if-tool)（WIT）提供了一个易于使用的界面，用于深入理解黑盒分类或回归机器学习模型。通过该插件，您可以对大量示例进行推理，并以多种方式立即可视化结果。此外，还可以手动或通过编程方式编辑示例，然后重新输入模型以查看更改的效果。它还包含用于在数据集的子集上评估模型性能和公平性的工具。\n\n该工具的目的是为用户提供一种简单、直观且强大的方式，通过可视化界面在不编写任何代码的情况下，对一组数据上的训练好的机器学习模型进行探索和实验。\n\n该工具可以通过 TensorBoard 访问，也可以作为 Jupyter 或 
[Colab](https:\u002F\u002Fcolab.research.google.com\u002Fgithub\u002Fpair-code\u002Fwhat-if-tool\u002Fblob\u002Fmaster\u002FWhat_If_Tool_Notebook_Usage.ipynb) 笔记本中的扩展使用。\n\n## 我不想阅读这份文档，可以直接体验演示吗？\n\n请访问 [假设分析工具官网的演示部分](https:\u002F\u002Fpair-code.github.io\u002Fwhat-if-tool\u002Findex.html#demos)，那里提供了大量的网页版和 Colab 演示。\n\n要自行构建网页版演示：\n* [UCI Census 数据集薪资预测二分类器](https:\u002F\u002Fpair-code.github.io\u002Fwhat-if-tool\u002Fuci.html)\n  * 数据集：[UCI Census](https:\u002F\u002Farchive.ics.uci.edu\u002Fml\u002Fdatasets\u002Fcensus+income)\n  * 任务：根据人口普查信息预测个人收入是否超过 5 万美元\n  * 从代码构建并运行演示：\n    `bazel run wit_dashboard\u002Fdemo:demoserver`\n    然后访问 `http:\u002F\u002Flocalhost:6006\u002Fwit-dashboard\u002Fdemo.html`\n* [图像微笑检测二分类器](https:\u002F\u002Fpair-code.github.io\u002Fwhat-if-tool\u002Fimage.html)\n  * 数据集：[CelebA](http:\u002F\u002Fmmlab.ie.cuhk.edu.hk\u002Fprojects\u002FCelebA.html)\n  * 任务：预测图像中的人是否在微笑\n  * 从代码构建并运行演示：\n    `bazel run wit_dashboard\u002Fdemo:imagedemoserver`\n    然后访问 `http:\u002F\u002Flocalhost:6006\u002Fwit-dashboard\u002Fimage_demo.html`\n* [Iris 数据集多分类器](https:\u002F\u002Fpair-code.github.io\u002Fwhat-if-tool\u002Firis.html)\n  * 数据集：[UCI Iris](https:\u002F\u002Farchive.ics.uci.edu\u002Fml\u002Fdatasets\u002Firis)\n  * 任务：根据花朵的四维测量值，预测花朵属于三种鸢尾花类别中的哪一类\n  * 从代码构建并运行演示：\n    `bazel run wit_dashboard\u002Fdemo:irisdemoserver`\n    然后访问 `http:\u002F\u002Flocalhost:6006\u002Fwit-dashboard\u002Firis_demo.html`\n* [UCI Census 数据集年龄预测回归模型](https:\u002F\u002Fpair-code.github.io\u002Fwhat-if-tool\u002Fage.html)\n  * 数据集：[UCI Census](https:\u002F\u002Farchive.ics.uci.edu\u002Fml\u002Fdatasets\u002Fcensus+income)\n  * 任务：根据人口普查信息预测个人年龄\n  * 从代码构建并运行演示：\n    `bazel run wit_dashboard\u002Fdemo:agedemoserver`\n    然后访问 `http:\u002F\u002Flocalhost:6006\u002Fwit-dashboard\u002Fage_demo.html`\n  * 该演示模型除了预测之外，还会返回归因值（通过使用普通梯度），以展示该工具如何显示预测的归因值。\n\n## 在 Jupyter 或 Colab 笔记本中使用它需要什么？\n\n您可以在 Jupyter 或 Colab 笔记本中直接使用假设分析工具来分析以 TensorFlow Example 或 SequenceExample 
协议缓冲区（数据点）为输入的 [TensorFlow Estimator](https:\u002F\u002Fwww.tensorflow.org\u002Fapi_docs\u002Fpython\u002Ftf\u002Festimator\u002FEstimator) 分类或回归模型。\n\n此外，假设分析工具还可以分析由 [AI Platform Prediction](https:\u002F\u002Fcloud.google.com\u002Fml-engine\u002F) 托管的分类或回归模型，这些模型接受 TensorFlow Example 协议缓冲区、SequenceExample 协议缓冲区或原始 JSON 对象作为输入。\n\n您还可以将假设分析工具与自定义预测函数一起使用，该函数接收 TensorFlow 示例并生成预测。在这种模式下，只要您的自定义函数的输入和输出规范正确，就可以加载任何模型（包括不使用 Example 协议缓冲区作为输入的非 TensorFlow 模型）。\n\n无论是 AI Platform 模型还是自定义预测函数，假设分析工具都可以显示并利用每个输入特征相对于每项预测的归因值。有关更多信息，请参阅下面关于归因值的部分。\n\n如果您想从数据集训练一个机器学习模型，并探索该数据集和模型，可以参考 Colab 中的 [What_If_Tool_Notebook_Usage.ipynb 笔记本](https:\u002F\u002Fcolab.research.google.com\u002Fgithub\u002Fpair-code\u002Fwhat-if-tool\u002Fblob\u002Fmaster\u002FWhat_If_Tool_Notebook_Usage.ipynb)，该笔记本从 CSV 文件开始，将数据转换为 tf.Example 协议缓冲区，训练一个分类器，然后使用假设分析工具展示分类器在该数据上的表现。\n\n## 在 TensorBoard 中使用它需要哪些内容？\n\n关于如何在 TensorBoard 中使用该工具的完整指南，包括预训练模型和测试数据集，可以在\n[TensorBoard 官网的 What-If 工具页面](https:\u002F\u002Fwww.tensorflow.org\u002Ftensorboard\u002Fr2\u002Fwhat_if_tool) 上找到。\n\n要在 TensorBoard 中使用该工具，只需提供以下信息：\n\n* 模型服务器的主机名和端口，该服务器需使用\n  [TensorFlow Serving](https:\u002F\u002Fgithub.com\u002Ftensorflow\u002Fserving) 提供服务。模型可以使用 TensorFlow Serving 的分类、回归或预测 API。\n    * 关于如何使用 `Estimator` API 创建保存模型，并使其能够与适当的 TensorFlow Serving 分类或回归 API 配合使用的说明，可在\n      [保存模型文档](https:\u002F\u002Fwww.tensorflow.org\u002Fguide\u002Fsaved_model#using_savedmodel_with_estimators)\n      和这篇[实用教程](http:\u002F\u002Fshzhangji.com\u002Fblog\u002F2018\u002F05\u002F14\u002Fserve-tensorflow-estimator-with-savedmodel\u002F)中找到。\n      使用这些 API 的模型是最容易与 What-If 工具配合使用的，因为除了设置模型类型外，无需在工具中进行额外配置。\n    * 如果模型使用预测 API，则输入必须是序列化的 `tf.Example` 或 `tf.SequenceExample` 协议缓冲区消息，且输出应符合以下要求：\n        * 对于分类模型，输出必须包含一个二维浮点张量，其中为每个推断示例列出所有可能类别索引对应的概率值。\n        * 对于回归模型，输出必须包含一个浮点张量，其中为每个推断示例提供单个回归分数。\n    * What-If 工具通过 gRPC API 而不是 RESTful API 查询已部署的模型。有关这两种 API 的更多信息，请参阅 TensorFlow Serving 的\n      [Docker 
文档](https:\u002F\u002Fwww.tensorflow.org\u002Fserving\u002Fdocker)。Docker 镜像默认使用 8500 端口提供 gRPC API，因此如果采用 Docker 方式部署，那么在 What-If 工具中指定的端口应为 8500。\n    * 或者，您也可以不查询由 TensorFlow Serving 托管的模型，而是通过 `--whatif-use-unsafe-custom-prediction` 运行时参数向工具提供自定义的 Python 预测函数，具体说明见下文。\n* 用于执行推理的 `tf.Example` 或 `tf.SequenceExample` 格式的 TFRecord 文件，以及从该文件中加载的示例数量。\n    * 最多可处理数万个示例。具体数量取决于每个示例的大小（特征数量及特征值的大小）。\n    * 该文件必须位于启动 TensorBoard 时提供的日志目录中。此外，您还可以通过 `--whatif-data-dir=PATH` 运行时参数指定其他目录，以允许从该目录加载文件。\n* 指明模型是回归模型、二分类模型还是多分类模型。\n* 可选的分类模型标签词汇表文件。该文件将模型预测返回的类别索引映射为类别标签。文本文件每行对应一个标签，按模型返回的类别索引顺序排列，从索引 0 开始。\n    * 如果提供了此文件，仪表板将显示分类模型的预测标签；否则，将显示预测的类别索引。\n\n或者，您也可以直接使用 What-If 工具从 CSV 文件中探索数据集。详情请参阅下一节。\n\n上述信息可以在设置对话框中提供，该对话框会在打开此工具时自动弹出，也可通过工具右上角的设置图标按钮访问。此外，这些信息也可以直接通过 URL 参数传递。通过控件更改设置会自动更新 URL，以便与其他用户共享，使他们能够在 What-If 工具中查看相同的数据。\n\n### 我只有数据集，在 TensorBoard 中能做什么？从哪里开始？\n\n如果您只想使用 TensorBoard 中的 What-If 工具探索 CSV 文件中的信息，只需将示例路径设置为该文件（扩展名为 `.csv`），并将推理地址和模型名称字段留空即可。CSV 文件的第一行必须包含列名。此后每一行代表数据集中的一条记录，其值对应第一行定义的各列。对于某一特征的多个取值，各值之间用竖线字符 (`|`) 分隔。\n\n为了充分利用该工具对模型的理解功能，您的数据集可以包含来自机器学习模型的输出列。例如，如果您的文件中有一列名为 `predictions__probabilities`，其值是以竖线分隔的概率分数列表（介于 0 和 1 之间），则工具会将其视为分类模型的输出概率。如果您的文件中有一列名为 `predictions` 的数值列，则工具会将其视为回归模型的输出。通过这种方式，该工具可用于分析任何数据集，以及针对该数据集离线运行的任何模型结果。请注意，在此模式下，示例不可编辑，因为当示例发生变化时无法获取新的推理结果。\n\n## 它能做什么？\n\n有关该工具功能的详细信息，包括引导式操作指南，可在 [What-If Tool 官网](https:\u002F\u002Fpair-code.github.io\u002Fwhat-if-tool) 上找到。以下是其主要功能的简要概述：\n\n* 可视化 TensorFlow Example 协议缓冲区数据集。\n  * 主面板使用 [Facets Dive](https:\u002F\u002Fpair-code.github.io\u002Ffacets) 展示数据集，其中样本可按数据集中的任意特征进行组织、切片、定位或着色。\n    * 样本也可按其推理结果进行组织。\n      * 对于分类模型，这包括推断标签、推断标签的置信度以及推理是否正确。\n      * 对于回归模型，这包括推断分数及推理误差（包括绝对误差或平方误差）。\n  * 在侧边栏中可以详细查看选定的样本，显示该样本所有特征的特征值。\n  * 对于在名为“image\u002Fencoded”的字节型特征中包含编码图像的样本，Facets Dive 会生成该图像的缩略图以直观展示，而完整尺寸的图像则会在侧边栏中针对选定样本显示。\n  * 使用 [Facets Overview](https:\u002F\u002Fpair-code.github.io\u002Ffacets\u002F)，可以在侧边栏中查看所有已加载样本的汇总统计信息。\n\n* 可视化推理结果\n  * 默认情况下，主面板中的样本会根据其推理结果着色。\n  * 
主面板中的样本可以组织成混淆矩阵和其他自定义布局，以按多种不同特征分面展示推理结果，或按推理结果分面\u002F定位，从而创建一维和二维直方图及散点图的小多图。\n  * 对于选定的样本，侧边栏会显示详细的推理结果（例如预测类别及其置信度分数）。\n  * 如果模型除了预测之外还返回了归因值，则会为每个选定样本显示这些归因值，并且这些归因值可用于控制自定义布局，以及作为切分数据集的维度，以便进行性能分析。\n\n* 探索反事实样本\n  * 对于分类模型，对于任何选定的样本，只需单击一下即可将其与最相似但被分类为不同类别的样本进行比较。\n  * 相似性是基于所有已加载样本中特征值的分布来计算的，可以使用 L1 或 L2 距离来衡量。\n    * 特征之间的距离会进行归一化处理：\n      * 对于数值特征，使用各样本之间特征值之差除以所有样本特征值的标准差。\n      * 对于分类特征，如果两个样本的特征值相同，则距离为 0；否则，距离即为所有样本中任意两个样本在该特征上取相同值的概率。\n  * 在笔记本模式下，该工具还允许您使用 WitConfigBuilder 中的 set_custom_distance_fn 设置自定义距离函数，用以计算最近的反事实样本。与自定义 predict_fn 的情况类似，自定义距离函数可以是任何 Python 函数。\n\n* 在浏览器中编辑选定的样本并重新运行推理，可视化推理结果的变化。\n  * 查看自动生成的部分依赖图，这些图表会针对每个特征展示当该特征取不同的有效值时推理结果的变化情况。\n  * 在侧边栏中编辑、添加或删除任意特征或特征值，并对编辑后的数据点重新运行推理。系统会显示该数据点在编辑和重新推理过程中的推理值历史记录。\n  * 对于包含编码图像的样本，您可以上传自己的图像并重新运行推理。\n  * 克隆现有样本以进行编辑或比较。\n  * 撤销对已编辑样本的更改。\n\n* 比较两个模型在同一输入数据上的推理结果。\n  * 如果在设置过程中向该工具提供两个模型，它将使用提供的数据对这两个模型分别运行推理，然后您可以利用上述所有功能比较两个模型的推理结果。\n\n* 如果使用二分类模型且您的样本包含描述真实标签的特征，您可以执行以下操作：\n  * 在侧边栏中查看 ROC 曲线和数值混淆矩阵，包括在当前阳性分类阈值下您的模型所处的曲线上对应点。\n  * 查看按数据集子集划分的独立 ROC 曲线和数值混淆矩阵，这些子集可根据数据集中的任意特征进行切片（例如按性别划分）。\n  * 手动调整阳性分类阈值（或多个阈值，如果按某个特征切分数据集），并立即查看推理结果、ROC 曲线位置和混淆矩阵的变化。\n  * 根据假阳性和假阴性的成本权衡，以及公平性指标（如机会均等或人口统计学平等）的要求，一键设置阳性分类阈值。\n\n* 如果使用多分类模型且您的样本包含描述真实标签的特征，您可以执行以下操作：\n  * 在侧边栏中查看所有分类和所有类别的混淆矩阵。\n  * 查看按数据集子集划分的独立混淆矩阵，这些子集可根据数据集中的任意特征进行切片。\n\n* 如果使用回归模型且您的样本包含描述真实标签的特征，您可以执行以下操作：\n  * 查看整个数据集的平均误差、平均绝对误差和平均平方误差。\n  * 查看按数据集子集划分的这些平均误差计算结果，这些子集可根据数据集中的任意特征进行切片。\n\n## 适用于哪些人群？\n我们设想 WIT 对各类用户都非常有用。\n* 机器学习研究人员和模型开发者——研究您的数据集和模型，探索推理结果。通过操作数据和模型来获得洞察，例如调试异常结果或研究机器学习的公平性问题。\n* 非技术利益相关者——了解模型在特定数据集上的表现。尝试使用您自己的数据进行体验。\n* 普通用户——通过交互式地玩转数据集和模型来学习机器学习知识。\n\n## 笔记本模式详情\n\n如[示例笔记本](https:\u002F\u002Fcolab.research.google.com\u002Fgithub\u002Fpair-code\u002Fwhat-if-tool\u002Fblob\u002Fmaster\u002FWhat_If_Tool_Notebook_Usage.ipynb)所示，创建 `WitWidget` 对象是使“What-If 工具”在输出单元格中显示的关键步骤。`WitWidget` 对象以一个 `WitConfigBuilder` 对象作为构造函数参数。`WitConfigBuilder` 对象指定了“What-If 工具”将使用的数据和模型信息。\n\n`WitConfigBuilder` 对象以一组 
`tf.Example` 或 `tf.SequenceExample` 协议缓冲区作为构造函数参数。这些协议缓冲区将在工具中显示，并用于推断指定的模型。\n\n工具用于推理的模型可以通过多种方式指定：\n- 作为通过 `set_estimator_and_feature_spec` 方法提供的 TensorFlow [Estimator](https:\u002F\u002Fwww.tensorflow.org\u002Fguide\u002Festimators) 对象。在这种情况下，推理将在笔记本内部使用提供的 Estimator 完成。\n- 作为通过 `set_ai_platform_model` 方法指定的由 [AI Platform Prediction](https:\u002F\u002Fcloud.google.com\u002Fml-engine\u002F) 托管的模型。\n- 作为通过 `set_custom_predict_fn` 方法提供的自定义预测函数。在这种情况下，WIT 将直接调用该函数进行推理。\n- 作为由 [TensorFlow Serving](https:\u002F\u002Fgithub.com\u002Ftensorflow\u002Fserving) 提供服务的模型端点，通过 `set_inference_address` 和 `set_model_name` 方法指定。在这种情况下，推理将在指定的模型服务器上进行。例如，要查询在主机 “localhost” 的 8888 端口上运行、名为 “my_model” 的模型，您可以在构建器上设置：`builder.set_inference_address('localhost:8888').set_model_name('my_model')`。\n\n请参阅 [WitConfigBuilder](https:\u002F\u002Fgithub.com\u002Fpair-code\u002Fwhat-if-tool\u002Fblob\u002Fmaster\u002Fwitwidget\u002Fnotebook\u002Fvisualization.py) 的文档，了解您可以提供的所有选项，包括如何指定其他模型类型（默认为二分类）以及如何指定一个可选的第二个模型与第一个模型进行比较。\n\n### “What-If 工具”如何利用特征归因值和其他预测时信息？\n\n特征归因值是针对机器学习模型每个输入特征的数值，用于指示该特征值对模型预测的影响程度。获取机器学习模型预测结果的特征归因值有多种方法，包括 [SHAP](https:\u002F\u002Fgithub.com\u002Fslundberg\u002Fshap)、[集成梯度](https:\u002F\u002Fgithub.com\u002Fankurtaly\u002FIntegrated-Gradients)、[SmoothGrad](https:\u002F\u002Fpair-code.github.io\u002Fsaliency\u002F) 等。\n\n它们可以作为一种强大的分析手段，帮助我们理解模型对特定输入值的反应，而不仅仅是像部分依赖图那样简单地研究单个特征值变化对模型预测的影响。一些归因技术需要访问模型内部结构，例如基于梯度的方法；而另一些则可以应用于黑盒模型。无论哪种情况，“What-If 工具”都可以在标准模型预测结果之外，可视化归因方法的结果。\n\n有两种方法可以使用“What-If 工具”来可视化归因值。如果您已将启用了可解释性功能的模型部署到 Cloud AI Platform，并通过标准的 `set_ai_platform_model` 方法将其提供给工具，则工具会自动生成并可视化归因值，无需额外设置。如果您希望查看其他模型设置下的归因值，可以通过使用自定义预测函数来实现。\n\n正如 [WitConfigBuilder](https:\u002F\u002Fgithub.com\u002Fpair-code\u002Fwhat-if-tool\u002Fblob\u002Fmaster\u002Fwitwidget\u002Fnotebook\u002Fvisualization.py) 中 `set_custom_predict_fn` 文档所述，此方法必须返回一个与传入示例数量相同大小的列表，其中每个列表条目代表对应示例的预测时信息。对于没有归因信息的标准模型，列表条目通常只是一个数字（回归模型）或一个类概率列表（分类模型）。\n\n然而，如果存在归因或其他预测时信息，则列表条目可以改为一个字典，其中 
`predictions` 键下包含标准模型的预测输出，`attributions` 键下包含归因信息，其他补充信息则放在各自的描述性键下。归因及其他补充信息的具体格式可在上述代码文档中找到。\n\n如果向“What-If 工具”提供了归因值，它们可以以多种方式使用。首先，在“数据点编辑器”选项卡中选择某个数据点时，归因值会显示在每个特征值旁边，且特征可以按归因强度而非字母顺序排列。此外，特征值还会根据其归因值进行着色，以便快速直观地判断归因强度。\n\n除了显示所选数据点的归因值外，每个特征的归因值还可以像数据点的其他特征一样在工具中使用。它们可以在数据点可视化控件中被选中，用于创建自定义散点图和直方图。例如，您可以创建一个散点图，展示两个不同特征的归因值之间的关系，并根据模型的预测结果对数据点进行着色。它们也可以在“性能”选项卡中用作切片依据，以比较不同切片的性能统计。例如，您可以快速比较模型在某特征低归因数据点上的整体表现，与该特征高归因数据点上的表现。\n\n自定义预测函数返回的任何其他补充信息也会在工具中以字典键名命名的特征形式出现。它们同样可以用于驱动自定义可视化，或作为分析模型整体性能时的切片维度。\n\n当编辑某个数据点并使用“运行推理”按钮重新通过模型进行推理时，归因值和其他补充信息会在工具中重新计算并更新。\n\n有关从自定义预测函数返回归因值的示例（本例使用 SHAP 库获取归因值），请参阅[WIT COMPAS with SHAP 笔记本](https:\u002F\u002Fcolab.research.google.com\u002Fgithub\u002FPAIR-code\u002Fwhat-if-tool\u002Fblob\u002Fmaster\u002FWIT_COMPAS_with_SHAP.ipynb)。\n\n### 如何在 Jupyter Notebook 中启用它？\n首先，通过以下命令安装并启用适用于 Jupyter 的 WIT：\n```sh\npip install witwidget\njupyter nbextension install --py --symlink --sys-prefix witwidget\njupyter nbextension enable --py --sys-prefix witwidget\n```\n\n然后，按照\n[What_If_Tool_Notebook_Usage.ipynb 笔记本](.\u002FWhat_If_Tool_Notebook_Usage.ipynb) 底部所示的方式使用。\n\n### 如何在 Colab Notebook 中启用它？\n通过运行包含以下内容的单元格，将该小部件安装到笔记本内核的运行环境中：\n```\n!pip install witwidget\n```\n\n之后，同样按照\n[What_If_Tool_Notebook_Usage.ipynb 笔记本](https:\u002F\u002Fcolab.research.google.com\u002Fgithub\u002Fpair-code\u002Fwhat-if-tool\u002Fblob\u002Fmaster\u002FWhat_If_Tool_Notebook_Usage.ipynb) 底部所示的方式使用。\n\n### 如何在 JupyterLab 或 Cloud AI Platform Notebook 中启用它？\nWIT 已在 JupyterLab 1.x、2.x 和 3.x 版本中进行了测试。\n\n要为 JupyterLab 3.x 安装并启用 WIT，请运行包含以下内容的单元格：\n```\n!pip install witwidget\n!jupyter labextension install wit-widget\n!jupyter labextension install @jupyter-widgets\u002Fjupyterlab-manager\n```\n\n请注意，您可能需要根据自己的 JupyterLab 版本，在\nhttps:\u002F\u002Fwww.npmjs.com\u002Fpackage\u002F@jupyter-widgets\u002Fjupyterlab-manager 上指定正确的 `@jupyter-widgets\u002Fjupyterlab-manager` 版本。\n\n此外，根据您的笔记本设置，您可能还需要运行 `!sudo jupyter labextension ...` 命令。\n\n安装完成后，WIT 
的使用方式与其他笔记本环境中的安装方法相同。\n\n## 是否可以在 TensorBoard 中使用自定义预测函数？\n可以。您可以通过定义一个名为 `custom_predict_fn` 的 Python 函数来实现，该函数接受两个参数：待进行推理的示例列表，以及包含模型查询相关信息的服务包对象。函数应返回一个结果列表，每个提供的示例对应一个条目。对于回归模型，结果仅为一个数字；而对于分类模型，则返回一个数字列表，表示每个可能类别的得分。\n\n以下是一个仅返回随机结果的最小示例：\n\n```python\nimport random\n\n# 函数名“custom_predict_fn”必须完全一致。\ndef custom_predict_fn(examples, serving_bundle):\n  # 示例是 TFRecord 对象的列表，每个对象包含每个数据点的特征。\n  # serving_bundle 是一个字典，包含传递给工具的配置信息，例如服务器地址、模型名称、模型版本等。\n\n  number_of_examples = len(examples)\n  results = []\n  for _ in range(number_of_examples):\n    score = random.random()\n    results.append([score, 1 - score]) # 用于二分类\n    # results.append(score) # 用于回归\n  return results\n```\n\n将此函数定义在一个保存到磁盘的文件中。以本示例为例，假设文件保存为 `\u002Ftmp\u002Fmy_custom_predict_function.py`。然后，使用 `tensorboard --whatif-use-unsafe-custom-prediction \u002Ftmp\u002Fmy_custom_predict_function.py` 启动 TensorBoard 服务器，并在 What-If 工具的设置对话框中完成数据和模型的配置后，该函数便会自动调用。需要注意的是，“unsafe” 表示该函数未经过沙箱隔离，因此请确保您的函数不会执行任何破坏性操作，例如意外删除实验数据。\n\n## 我如何参与开发？\n请查看[开发指南](.\u002FDEVELOPMENT.md)。\n\n## WIT 的最新动态有哪些？\n请参阅[发布说明](.\u002FRELEASE.md)。","# What-If Tool 快速上手指南\n\nWhat-If Tool (WIT) 是一款由 Google PAIR 团队开发的可视化工具，旨在帮助用户无需编写代码即可直观地探索和分析黑盒机器学习模型（分类或回归）。它支持在 TensorBoard、Jupyter Notebook 或 Google Colab 中运行，提供数据切片、性能对比、公平性分析及反事实推理等功能。\n\n## 环境准备\n\n### 系统要求\n- **操作系统**：Linux, macOS, Windows\n- **Python 版本**：Python 3.6 或更高版本\n- **浏览器**：推荐使用最新版的 Chrome 或 Firefox\n\n### 前置依赖\n根据使用场景不同，需准备以下环境之一：\n\n1.  **Jupyter\u002FColab 环境**（推荐初学者）：\n    -   已安装 `jupyter notebook` 或可直接访问 [Google Colab](https:\u002F\u002Fcolab.research.google.com\u002F)。\n    -   TensorFlow 2.x (可选，若需训练模型)。\n\n2.  
**TensorBoard 环境**：\n    -   已安装 `tensorboard`。\n    -   **模型服务**：需有一个通过 [TensorFlow Serving](https:\u002F\u002Fgithub.com\u002Ftensorflow\u002Fserving) 运行的模型服务（gRPC 接口），或者准备好包含预测结果的 CSV\u002FTFRecord 数据集。\n    -   **数据格式**：`tf.Example` \u002F `tf.SequenceExample` (TFRecord 格式) 或带有特定列名的 CSV 文件。\n\n> **国内加速提示**：\n> 在安装 Python 包时，建议使用清华或阿里镜像源以提升下载速度：\n> `-i https:\u002F\u002Fpypi.tuna.tsinghua.edu.cn\u002Fsimple`\n\n## 安装步骤\n\n### 场景一：在 Jupyter Notebook 或 Colab 中使用\n\n这是最简单的使用方式，无需配置复杂的服务器环境。\n\n1.  **安装库**\n    在 Notebook 单元格中运行以下命令：\n    ```bash\n    pip install witwidget -i https:\u002F\u002Fpypi.tuna.tsinghua.edu.cn\u002Fsimple\n    ```\n    *注：如果在 Colab 中运行，安装后可能需要重启运行时 (Runtime -> Restart runtime)。*\n\n2.  **验证安装**\n    导入库以确保无报错：\n    ```python\n    import witwidget\n    ```\n\n### 场景二：在 TensorBoard 中使用\n\n如果你已经在使用 TensorFlow 生态，WIT 通常随 TensorBoard 一起安装。\n\n1.  **安装\u002F更新 TensorBoard**\n    ```bash\n    pip install tensorboard -i https:\u002F\u002Fpypi.tuna.tsinghua.edu.cn\u002Fsimple\n    ```\n\n2.  **准备数据与模型**\n    -   **模式 A (连接在线模型)**：启动一个 TensorFlow Serving 服务，记下 host 和 port (gRPC 默认端口为 8500)。\n    -   **模式 B (仅分析数据)**：准备一个 `.csv` 文件或 `.tfrecord` 文件。若文件中包含名为 `predictions` (回归) 或 `predictions__probabilities` (分类) 的列，WIT 可直接分析离线预测结果。\n\n## 基本使用\n\n### 1. 在 Jupyter\u002FColab 中的最简单示例\n\n以下示例演示如何加载一个预训练的 TensorFlow Estimator 模型并启动 WIT 界面。\n\n```python\nfrom witwidget.notebook.visualization import WitConfigBuilder\nfrom witwidget.notebook.visualization import WitWidget\n\n# 假设你已经有以下对象：\n# examples: 一个包含 tf.Example 协议的列表 (测试数据集)\n# estimator: 一个训练好的 TensorFlow Estimator 模型\n# feature_spec: 该 Estimator 对应的特征规格 (feature spec)\n\n# 1. 构建配置\n# 通过链式 setter 指定模型；默认模型类型为二分类，\n# 如需回归可调用 .set_model_type('regression')\nconfig = (WitConfigBuilder(examples)\n          .set_estimator_and_feature_spec(estimator, feature_spec))\n\n# 2. 启动可视化组件\nWitWidget(config)\n```\n\n**自定义预测函数模式**（适用于非 TensorFlow 模型）：\n如果你的模型不是 TF Estimator，可以提供一个自定义预测函数：\n\n```python\ndef custom_predict(examples):\n    # 这里编写你的模型推理逻辑\n    # 输入：examples (list of tf.Example)\n    # 输出：predictions (list of scores or probabilities)\n    return my_model.predict(examples)\n\nconfig = (WitConfigBuilder(examples)\n          .set_custom_predict_fn(custom_predict)\n          .set_model_type('regression'))\n\nWitWidget(config)\n```\n\n### 2. 在 TensorBoard 中的使用步骤\n\n1.  **启动 TensorBoard**\n    指向包含数据文件（`.csv` 或 `.tfrecord`）的目录：\n    ```bash\n    tensorboard --logdir=\u002Fpath\u002Fto\u002Fyour\u002Fdata_directory\n    ```\n    *若数据目录与日志目录不同，可添加参数：`--whatif-data-dir=\u002Fpath\u002Fto\u002Fdata`*\n\n2.  **访问界面**\n    在浏览器打开 TensorBoard 地址（通常为 `http:\u002F\u002Flocalhost:6006`），点击顶部的 **\"What-If Tool\"** 标签页。\n\n3.  **配置面板**\n    首次进入会弹出设置对话框，根据情况填写：\n    -   **Inference Address**: 填入 TensorFlow Serving 的地址（如 `localhost:8500`）。若仅分析 CSV 中的离线结果，留空即可。\n    -   **Model Name**: 填入 TF Serving 中的模型名称。\n    -   **Path to examples**: 选择你的 `.csv` 或 `.tfrecord` 文件。\n    -   **Model Type**: 选择 `Binary classification`, `Multi-class classification` 或 `Regression`。\n\n4.  
**开始探索**\n    配置完成后，你将看到数据分布图（Facets Dive）。你可以：\n    -   点击任意数据点查看详细信息。\n    -   手动修改特征值（Edit），实时观察预测结果变化（反事实分析）。\n    -   使用 \"Performance & Fairness\" 面板对比不同数据子集的表现。","某金融科技团队正在优化一个基于人口普查数据预测用户年收入是否超过 5 万美元的黑盒分类模型，急需排查模型是否存在性别或种族歧视。\n\n### 没有 what-if-tool 时\n- 开发人员必须编写大量 Python 代码来手动切片数据集，才能单独分析特定群体（如“亚裔女性”）的预测准确率，效率极低。\n- 难以直观发现不公平现象，往往需要反复运行脚本并绘制静态图表，才能偶然察觉到模型对某些敏感特征的过度依赖。\n- 无法实时验证假设，若想测试“如果将用户的学历从高中改为本科，预测结果会如何变化”，必须修改数据源并重新触发整个推理流程，耗时漫长。\n- 模型决策逻辑如同黑盒，业务方难以理解为何某个具体样本被判定为低收入，缺乏可解释性导致信任度不足。\n\n### 使用 what-if-tool 后\n- 团队成员直接在 Jupyter Notebook 的可视化界面中点击筛选，即可瞬间查看不同人口统计学子集的性能指标和公平性对比，无需编写额外代码。\n- 通过内置的性能与公平性分析视图，系统自动高亮显示模型在特定群体上的偏差，让歧视问题一目了然。\n- 支持手动编辑单个样本特征（如滑动条调整年龄、下拉菜单更改职业），并能即时看到模型输出的变化，快速验证“反事实”假设。\n- 提供直观的归因分析，清晰展示每个特征对具体预测结果的贡献度，帮助非技术人员轻松理解模型的决策依据。\n\nwhat-if-tool 将原本需要数天代码调试的模型诊断工作，转化为几分钟内的交互式探索，极大地提升了机器学习模型的可解释性与公平性审查效率。","https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FPAIR-code_what-if-tool_10b236c4.png","PAIR-code","PAIR code","https:\u002F\u002Foss.gittoolsai.com\u002Favatars\u002FPAIR-code_8b8800c7.png","Code repositories for projects from the People+AI Research (PAIR) Initiative",null,"https:\u002F\u002Fpair.withgoogle.com","https:\u002F\u002Fgithub.com\u002FPAIR-code",[80,84,88,92,96,99,103,106,110],{"name":81,"color":82,"percentage":83},"HTML","#e34c26",89,{"name":85,"color":86,"percentage":87},"Jupyter Notebook","#DA5B0B",9.9,{"name":89,"color":90,"percentage":91},"Python","#3572A5",0.6,{"name":93,"color":94,"percentage":95},"TypeScript","#3178c6",0.2,{"name":97,"color":98,"percentage":95},"JavaScript","#f1e05a",{"name":100,"color":101,"percentage":102},"Starlark","#76d275",0.1,{"name":104,"color":105,"percentage":102},"CSS","#663399",{"name":107,"color":108,"percentage":109},"Liquid","#67b8de",0,{"name":111,"color":112,"percentage":109},"Shell","#89e051",998,183,"2026-04-04T13:07:26","Apache-2.0","","未说明",{"notes":120,"python":118,"dependencies":121},"该工具主要作为 TensorBoard 插件、Jupyter 扩展或 Colab 笔记本运行。核心依赖为 TensorFlow 生态系统。若在本地构建演示需安装 
Bazel。模型服务通常通过 TensorFlow Serving (gRPC 接口，默认端口 8500) 提供，也可使用自定义 Python 预测函数。支持直接加载 CSV 文件或 TFRecord (tf.Example\u002FSequenceExample) 格式数据。文档未明确指定具体的操作系统、GPU、内存或 Python 版本要求，通常取决于所安装的 TensorFlow 和 TensorBoard 版本的系统需求。",[122,123,85,124,125,126],"TensorFlow","TensorBoard","Google Colab","TensorFlow Serving","Bazel",[52,14],[129,130,131,132,133,134],"ml-fairness","visualization","machine-learning","jupyterlab-extension","colaboratory","tensorboard","2026-03-27T02:49:30.150509","2026-04-08T18:46:55.526374",[138,143,148,153,158,163],{"id":139,"question_zh":140,"answer_zh":141,"source_url":142},15872,"在使用 What-If Tool 的性能标签页时，遇到 'TypeError: unhashable type: list' 错误该如何解决？","该错误通常是因为自定义预测函数假设输入是 DataFrame，但 WIT 实际传入的是原始列表（例如 `[[27, 'White'], [45, 'White']]`）。\n\n解决方法是将输入列表先转换为 pandas DataFrame，再进行后续处理。代码示例如下：\n```python\nz_df = pd.DataFrame(z, columns=['age', 'race'])\ntesting_data = pd.get_dummies(z_df, columns=categorical, drop_first=True)\npred = lr.predict_proba(testing_data)\nreturn pred\n```\n确保在函数开头完成从 list 到 DataFrame 的转换即可修复此问题。","https:\u002F\u002Fgithub.com\u002FPAIR-code\u002Fwhat-if-tool\u002Fissues\u002F150",{"id":144,"question_zh":145,"answer_zh":146,"source_url":147},15873,"按照开发文档构建 pip 包后，找不到生成的 .whl 文件在哪里？","如果在运行 `bazel run witwidget\u002Fpip_package:build_pip_package` 后找不到 `.whl` 文件，通常是因为脚本中使用的 `pushd` 命令在当前 shell 中不可用（常见于默认使用 dash 而非 bash 的系统）。\n\n解决方法是将 shell 切换为 bash。如果是 Ubuntu\u002FDebian 系统，可以执行以下命令：\n1. 检查当前 sh 指向：`ls -lh \u002Fbin\u002Fsh`（如果显示 `sh -> dash` 则需修改）。\n2. 重新配置 dash：`sudo dpkg-reconfigure dash`。\n3. 
在提示中选择 \"NO\"，将 sh 链接回 bash。\n\n之后重新运行构建命令，`.whl` 文件通常会生成在 `bazel-bin\u002Fwitwidget\u002Fpip_package\u002F` 目录下。","https:\u002F\u002Fgithub.com\u002FPAIR-code\u002Fwhat-if-tool\u002Fissues\u002F34",{"id":149,"question_zh":150,"answer_zh":151,"source_url":152},15874,"What-If Tool 是否支持非 TensorFlow 模型（如 XGBoost、PyTorch）？","是的，What-If Tool (WIT) 不仅限于 TensorFlow 模型。它支持任何可以通过自定义预测函数（custom prediction function）进行调用的模型，包括 XGBoost 和 PyTorch。\n\n使用方法是通过 `WitConfigBuilder` 设置 `.set_custom_predict_fn(your_function)`。你的函数需要接收原始数据列表作为输入，并返回模型的预测结果。这样你就可以在 Jupyter Notebook 或 TensorBoard 中使用 WIT 来分析任意框架训练的模型。","https:\u002F\u002Fgithub.com\u002FPAIR-code\u002Fwhat-if-tool\u002Fissues\u002F2",{"id":154,"question_zh":155,"answer_zh":156,"source_url":157},15875,"如何在 Jupyter Notebook（非 Colab 环境）中正确渲染 What-If Tool 组件？","如果在本地 Jupyter Notebook 中使用 What-If Tool 时发现组件无法显示，可能是因为缺少显式的渲染调用。\n\n除了常规的初始化代码外，你需要在创建 widget 对象后调用 `.render()` 方法。例如：\n```python\nwv = WitWidget(config_builder, height=tool_height_in_px)\nwv.render()\n```\n加上 `wv.render()` 后，组件即可在本地 Notebook 中正常显示。","https:\u002F\u002Fgithub.com\u002FPAIR-code\u002Fwhat-if-tool\u002Fissues\u002F143",{"id":159,"question_zh":160,"answer_zh":161,"source_url":162},15876,"在 TensorBoard 中使用 Keras 模型时遇到 'Expects arg[0] to be float but string is provided' 错误怎么办？","这个错误通常发生在通过 Docker 部署 TensorFlow Serving 并在 WIT 中调用时，输入数据类型不匹配（例如模型期望 float 但收到了 string）。\n\n如果你使用的是自定义预测函数（custom predict function），请注意输入是一个 `tf.Example` proto 对象列表。你需要手动解析这些对象来获取特征值。例如，提取名为 'age' 的浮点特征：\n```python\n# examples 是输入的 tf.Example 列表\nage_value = examples[0].features.feature['age'].float_list.value[0]\n```\n建议在将函数集成到 TensorBoard\u002FWIT 之前，先在外部脚本中调试该函数，确保能正确解析 `tf.Example` 并输出正确的数据类型。","https:\u002F\u002Fgithub.com\u002FPAIR-code\u002Fwhat-if-tool\u002Fissues\u002F126",{"id":164,"question_zh":165,"answer_zh":166,"source_url":152},15877,"What-If Tool 的 Datapoint Editor 如何处理图像分类任务的特征？","Datapoint Editor 
会显示模型直接接收的输入特征。对于图像分类任务，如果模型直接以图像像素作为输入，编辑器中将显示图像数据。\n\n如果模型还接受其他辅助输入（如坐标信息），这些可以表示为浮点数或整数特征。虽然 WIT 主要用于表格数据，但也支持简单的图像二分类场景（如笑脸检测）。对于复杂的图像特征（如边缘、角点等动态计算的特征），通常需要将其作为预处理步骤转化为固定特征输入模型，或者通过自定义预测函数在内部处理图像逻辑，WIT 本身主要展示最终输入给模型的数据形式。",[168,173,177],{"id":169,"version":170,"summary_zh":171,"released_at":172},90580,"v1.8.1","新增对 JupyterLab 3.x 的 witwidget 支持。","2021-10-12T17:36:36",{"id":174,"version":175,"summary_zh":76,"released_at":176},90581,"v1.8.0","2021-01-19T14:38:47",{"id":178,"version":179,"summary_zh":76,"released_at":180},90582,"v1.7.0","2020-06-26T16:29:17"]