[{"data":1,"prerenderedAt":-1},["ShallowReactive",2],{"similar-aerdem4--lofo-importance":3,"tool-aerdem4--lofo-importance":64},[4,17,27,35,43,56],{"id":5,"name":6,"github_repo":7,"description_zh":8,"stars":9,"difficulty_score":10,"last_commit_at":11,"category_tags":12,"status":16},3808,"stable-diffusion-webui","AUTOMATIC1111\u002Fstable-diffusion-webui","stable-diffusion-webui 是一个基于 Gradio 构建的网页版操作界面，旨在让用户能够轻松地在本地运行和使用强大的 Stable Diffusion 图像生成模型。它解决了原始模型依赖命令行、操作门槛高且功能分散的痛点，将复杂的 AI 绘图流程整合进一个直观易用的图形化平台。\n\n无论是希望快速上手的普通创作者、需要精细控制画面细节的设计师，还是想要深入探索模型潜力的开发者与研究人员，都能从中获益。其核心亮点在于极高的功能丰富度：不仅支持文生图、图生图、局部重绘（Inpainting）和外绘（Outpainting）等基础模式，还独创了注意力机制调整、提示词矩阵、负向提示词以及“高清修复”等高级功能。此外，它内置了 GFPGAN 和 CodeFormer 等人脸修复工具，支持多种神经网络放大算法，并允许用户通过插件系统无限扩展能力。即使是显存有限的设备，stable-diffusion-webui 也提供了相应的优化选项，让高质量的 AI 艺术创作变得触手可及。",162132,3,"2026-04-05T11:01:52",[13,14,15],"开发框架","图像","Agent","ready",{"id":18,"name":19,"github_repo":20,"description_zh":21,"stars":22,"difficulty_score":23,"last_commit_at":24,"category_tags":25,"status":16},1381,"everything-claude-code","affaan-m\u002Feverything-claude-code","everything-claude-code 是一套专为 AI 编程助手（如 Claude Code、Codex、Cursor 等）打造的高性能优化系统。它不仅仅是一组配置文件，而是一个经过长期实战打磨的完整框架，旨在解决 AI 代理在实际开发中面临的效率低下、记忆丢失、安全隐患及缺乏持续学习能力等核心痛点。\n\n通过引入技能模块化、直觉增强、记忆持久化机制以及内置的安全扫描功能，everything-claude-code 能显著提升 AI 在复杂任务中的表现，帮助开发者构建更稳定、更智能的生产级 AI 代理。其独特的“研究优先”开发理念和针对 Token 消耗的优化策略，使得模型响应更快、成本更低，同时有效防御潜在的攻击向量。\n\n这套工具特别适合软件开发者、AI 研究人员以及希望深度定制 AI 工作流的技术团队使用。无论您是在构建大型代码库，还是需要 AI 协助进行安全审计与自动化测试，everything-claude-code 都能提供强大的底层支持。作为一个曾荣获 Anthropic 黑客大奖的开源项目，它融合了多语言支持与丰富的实战钩子（hooks），让 AI 真正成长为懂上",140436,2,"2026-04-05T23:32:43",[13,15,26],"语言模型",{"id":28,"name":29,"github_repo":30,"description_zh":31,"stars":32,"difficulty_score":23,"last_commit_at":33,"category_tags":34,"status":16},2271,"ComfyUI","Comfy-Org\u002FComfyUI","ComfyUI 是一款功能强大且高度模块化的视觉 AI 引擎，专为设计和执行复杂的 Stable Diffusion 图像生成流程而打造。它摒弃了传统的代码编写模式，采用直观的节点式流程图界面，让用户通过连接不同的功能模块即可构建个性化的生成管线。\n\n这一设计巧妙解决了高级 AI 
绘图工作流配置复杂、灵活性不足的痛点。用户无需具备编程背景，也能自由组合模型、调整参数并实时预览效果，轻松实现从基础文生图到多步骤高清修复等各类复杂任务。ComfyUI 拥有极佳的兼容性，不仅支持 Windows、macOS 和 Linux 全平台，还广泛适配 NVIDIA、AMD、Intel 及苹果 Silicon 等多种硬件架构，并率先支持 SDXL、Flux、SD3 等前沿模型。\n\n无论是希望深入探索算法潜力的研究人员和开发者，还是追求极致创作自由度的设计师与资深 AI 绘画爱好者，ComfyUI 都能提供强大的支持。其独特的模块化架构允许社区不断扩展新功能，使其成为当前最灵活、生态最丰富的开源扩散模型工具之一，帮助用户将创意高效转化为现实。",107662,"2026-04-03T11:11:01",[13,14,15],{"id":36,"name":37,"github_repo":38,"description_zh":39,"stars":40,"difficulty_score":23,"last_commit_at":41,"category_tags":42,"status":16},3704,"NextChat","ChatGPTNextWeb\u002FNextChat","NextChat 是一款轻量且极速的 AI 助手，旨在为用户提供流畅、跨平台的大模型交互体验。它完美解决了用户在多设备间切换时难以保持对话连续性，以及面对众多 AI 模型不知如何统一管理的痛点。无论是日常办公、学习辅助还是创意激发，NextChat 都能让用户随时随地通过网页、iOS、Android、Windows、MacOS 或 Linux 端无缝接入智能服务。\n\n这款工具非常适合普通用户、学生、职场人士以及需要私有化部署的企业团队使用。对于开发者而言，它也提供了便捷的自托管方案，支持一键部署到 Vercel 或 Zeabur 等平台。\n\nNextChat 的核心亮点在于其广泛的模型兼容性，原生支持 Claude、DeepSeek、GPT-4 及 Gemini Pro 等主流大模型，让用户在一个界面即可自由切换不同 AI 能力。此外，它还率先支持 MCP（Model Context Protocol）协议，增强了上下文处理能力。针对企业用户，NextChat 提供专业版解决方案，具备品牌定制、细粒度权限控制、内部知识库整合及安全审计等功能，满足公司对数据隐私和个性化管理的高标准要求。",87618,"2026-04-05T07:20:52",[13,26],{"id":44,"name":45,"github_repo":46,"description_zh":47,"stars":48,"difficulty_score":23,"last_commit_at":49,"category_tags":50,"status":16},2268,"ML-For-Beginners","microsoft\u002FML-For-Beginners","ML-For-Beginners 是由微软推出的一套系统化机器学习入门课程，旨在帮助零基础用户轻松掌握经典机器学习知识。这套课程将学习路径规划为 12 周，包含 26 节精炼课程和 52 道配套测验，内容涵盖从基础概念到实际应用的完整流程，有效解决了初学者面对庞大知识体系时无从下手、缺乏结构化指导的痛点。\n\n无论是希望转型的开发者、需要补充算法背景的研究人员，还是对人工智能充满好奇的普通爱好者，都能从中受益。课程不仅提供了清晰的理论讲解，还强调动手实践，让用户在循序渐进中建立扎实的技能基础。其独特的亮点在于强大的多语言支持，通过自动化机制提供了包括简体中文在内的 50 多种语言版本，极大地降低了全球不同背景用户的学习门槛。此外，项目采用开源协作模式，社区活跃且内容持续更新，确保学习者能获取前沿且准确的技术资讯。如果你正寻找一条清晰、友好且专业的机器学习入门之路，ML-For-Beginners 将是理想的起点。",84991,"2026-04-05T10:45:23",[14,51,52,53,15,54,26,13,55],"数据工具","视频","插件","其他","音频",{"id":57,"name":58,"github_repo":59,"description_zh":60,"stars":61,"difficulty_score":10,"last_commit_at":62,"category_tags":63,"status":16},3128,"ragflow","infiniflow\u002Fragflow","RAGFlow 
是一款领先的开源检索增强生成（RAG）引擎，旨在为大语言模型构建更精准、可靠的上下文层。它巧妙地将前沿的 RAG 技术与智能体（Agent）能力相结合，不仅支持从各类文档中高效提取知识，还能让模型基于这些知识进行逻辑推理和任务执行。\n\n在大模型应用中，幻觉问题和知识滞后是常见痛点。RAGFlow 通过深度解析复杂文档结构（如表格、图表及混合排版），显著提升了信息检索的准确度，从而有效减少模型“胡编乱造”的现象，确保回答既有据可依又具备时效性。其内置的智能体机制更进一步，使系统不仅能回答问题，还能自主规划步骤解决复杂问题。\n\n这款工具特别适合开发者、企业技术团队以及 AI 研究人员使用。无论是希望快速搭建私有知识库问答系统，还是致力于探索大模型在垂直领域落地的创新者，都能从中受益。RAGFlow 提供了可视化的工作流编排界面和灵活的 API 接口，既降低了非算法背景用户的上手门槛，也满足了专业开发者对系统深度定制的需求。作为基于 Apache 2.0 协议开源的项目，它正成为连接通用大模型与行业专有知识之间的重要桥梁。",77062,"2026-04-04T04:44:48",[15,14,13,26,54],{"id":65,"github_repo":66,"name":67,"description_en":68,"description_zh":69,"ai_summary_zh":69,"readme_en":70,"readme_zh":71,"quickstart_zh":72,"use_case_zh":73,"hero_image_url":74,"owner_login":75,"owner_name":76,"owner_avatar_url":77,"owner_bio":78,"owner_company":78,"owner_location":79,"owner_email":78,"owner_twitter":80,"owner_website":78,"owner_url":81,"languages":82,"stars":87,"forks":88,"last_commit_at":89,"license":90,"difficulty_score":91,"env_os":92,"env_gpu":92,"env_ram":92,"env_deps":93,"category_tags":99,"github_topics":100,"view_count":23,"oss_zip_url":78,"oss_zip_packed_at":78,"status":16,"created_at":106,"updated_at":107,"faqs":108,"releases":144},3612,"aerdem4\u002Flofo-importance","lofo-importance","Leave One Feature Out Importance","lofo-importance 是一款用于评估机器学习模型特征重要性的开源工具。它采用“留一法”（Leave One Feature Out）策略，通过依次移除单个特征并重新训练模型，对比验证集上的性能变化，从而量化每个特征对预测结果的实际贡献。\n\n在传统方法中，模型往往容易高估那些在训练集中表现良好但在未知数据上泛化能力差的特征（例如具有时间泄漏风险的变量）。lofo-importance 通过引入灵活的验证方案，有效解决了这一痛点。它不仅能识别出真正提升模型性能的特征，还能给那些反而降低模型表现的特征赋予负重要性评分，帮助用户更准确地筛选变量。\n\n该工具特别适合数据科学家、机器学习工程师及研究人员使用，尤其是在处理高维数据（如文本 TF-IDF 或独热编码特征）时表现出色。其独特亮点在于支持特征分组功能，可自动将高度相关的特征合并评估，避免重要性被低估；同时它是模型无关的，默认使用 LightGBM，但也支持传入任意自定义模型。只需几行代码，用户即可获得包含均值与标准差的重要性报告及可视化图表，让特征选择过程更加科学、透明且高效。","![alt text](docs\u002Flofo_logo.png?raw=true \"Title\")\n\nLOFO (Leave One Feature Out) Importance calculates the importances of a set of features based on a metric of choice, for a model of choice, by iteratively removing each feature from 
the set, and evaluating the performance of the model, with a validation scheme of choice, based on the chosen metric.\n\nLOFO first evaluates the performance of the model with all the input features included, then iteratively removes one feature at a time, retrains the model, and evaluates its performance on a validation set. The mean and standard deviation (across the folds) of the importance of each feature is then reported.\n\nIf a model is not passed as an argument to LOFO Importance, it will run LightGBM as a default model.\n\n## Install\n\nLOFO Importance can be installed using\n\n```\npip install lofo-importance\n```\n\n## Advantages of LOFO Importance\n\nLOFO has several advantages compared to other importance types:\n\n* It does not favor granular features\n* It generalises well to unseen test sets\n* It is model agnostic\n* It gives negative importance to features that hurt performance upon inclusion\n* It can group the features. Especially useful for high dimensional features like TFIDF or OHE features.\n* It can automatically group highly correlated features to avoid underestimating their importance.\n\n## Example on Kaggle's Microsoft Malware Prediction Competition\n\nIn this Kaggle competition, Microsoft provides a malware dataset to predict whether or not a machine will soon be hit with malware. One of the features, Census_OSVersion, is very predictive on the training set, since some OS versions are probably more prone to bugs and failures than others. However, upon splitting the data out of time, we obtain validation sets with OS versions that have not occurred in the training set. Therefore, the model will not have learned the relationship between the target and this seasonal feature. By evaluating this feature's importance using other importance types, Census_OSVersion seems to have high importance, because its importance was evaluated using only the training set. 
However, LOFO Importance depends on a validation scheme, so it will not only give this feature low importance, but even negative importance.\n\n```python\nimport pandas as pd\nfrom sklearn.model_selection import KFold\nfrom lofo import LOFOImportance, Dataset, plot_importance\n%matplotlib inline\n\n# import data\ntrain_df = pd.read_csv(\"..\u002Finput\u002Ftrain.csv\", dtype=dtypes)\n\n# extract a sample of the data\nsample_df = train_df.sample(frac=0.01, random_state=0)\nsample_df.sort_values(\"AvSigVersion\", inplace=True) # Sort by time for time split validation\n\n# define the validation scheme\ncv = KFold(n_splits=4, shuffle=False, random_state=None) # Don't shuffle to keep the time split validation\n\n# define the binary target and the features\ndataset = Dataset(df=sample_df, target=\"HasDetections\", features=[col for col in train_df.columns if col != \"HasDetections\"])\n\n# define the validation scheme and scorer. The default model is LightGBM\nlofo_imp = LOFOImportance(dataset, cv=cv, scoring=\"roc_auc\")\n\n# get the mean and standard deviation of the importances in pandas format\nimportance_df = lofo_imp.get_importance()\n\n# plot the means and standard deviations of the importances\nplot_importance(importance_df, figsize=(12, 20))\n```\n\n![alt text](docs\u002Fplot_importance.png?raw=true \"Title\")\n\n## Another Example: Kaggle's TReNDS Competition\n\nIn this Kaggle competition, participants are asked to predict some cognitive properties of patients.\nIndependent component features (IC) from sMRI and very high dimensional correlation features (FNC) from 3D fMRIs are provided.\nLOFO can group the fMRI correlation features into one.\n\n```python\nfrom sklearn.linear_model import Ridge\n\ndef get_lofo_importance(target):\n    cv = KFold(n_splits=7, shuffle=True, random_state=17)\n\n    dataset = Dataset(df=df[df[target].notnull()], target=target, features=loading_features,\n                      feature_groups={\"fnc\": df[df[target].notnull()][fnc_features].values\n                     
 })\n\n    model = Ridge(alpha=0.01)\n    lofo_imp = LOFOImportance(dataset, cv=cv, scoring=\"neg_mean_absolute_error\", model=model)\n\n    return lofo_imp.get_importance()\n\nplot_importance(get_lofo_importance(target=\"domain1_var1\"), figsize=(8, 8), kind=\"box\")\n```\n\n![alt text](docs\u002Fplot_importance_box.png?raw=true \"Title\")\n\n## FLOFO Importance\n\nIf running the LOFO Importance package is too time-costly for you, you can use Fast LOFO. Fast LOFO, or FLOFO, takes as inputs an already trained model and a validation set, and does a pseudo-random permutation on the values of each feature, one by one, then uses the trained model to make predictions on the validation set. The FLOFO importance of a feature is then the mean difference in the model's performance on the validation set across several randomised permutations.\nThe difference between FLOFO importance and permutation importance is that the permutations on a feature's values are done within groups, where groups are obtained by grouping the validation set by k=2 features. These k features are chosen at random n=10 times, and the mean and standard deviation of the FLOFO importance are calculated based on these n runs.\nThe reason this grouping makes the measure of importance better is that permuting a feature's value is no longer completely random. In fact, the permutations are done within groups of similar samples, so the permutations are equivalent to noising the samples. This ensures that:\n\n* The permuted feature values are very unlikely to be replaced by unrealistic values.\n* A feature that is predictable by features among the chosen n*k features will be replaced by very similar values during permutation. Therefore, it will only slightly affect the model performance (and will yield a small FLOFO importance). 
This solves the correlated feature overestimation problem.\n","![alt text](docs\u002Flofo_logo.png?raw=true \"Title\")\n\nLOFO（Leave One Feature Out）重要性分析是一种基于所选指标和模型的方法，用于计算一组特征的重要性。其核心思想是通过迭代地从特征集中移除每个特征，并在选定的验证方案下，根据所选指标评估模型性能，从而得出各特征的重要性。\n\nLOFO首先评估包含所有输入特征时模型的性能，然后逐个移除特征、重新训练模型，并在验证集上再次评估模型性能。最后，报告每个特征重要性的均值和标准差（跨折数）。\n\n如果未向LOFO重要性分析传递模型，则默认使用LightGBM作为基线模型。\n\n## 安装\n\n可以通过以下命令安装LOFO重要性分析工具：\n\n```\npip install lofo-importance\n```\n\n## LOFO重要性分析的优势\n\n与其它特征重要性方法相比，LOFO具有以下优势：\n\n* 不偏向于细粒度特征；\n* 对未见过的测试集泛化能力更强；\n* 模型无关，适用于任何类型的模型；\n* 能够为加入后会降低模型性能的特征赋予负重要性；\n* 支持对特征进行分组，尤其适用于高维特征，如TFIDF或独热编码特征；\n* 可自动将高度相关的特征归为一组，避免低估其重要性。\n\n## Kaggle微软恶意软件预测竞赛示例\n\n在该Kaggle竞赛中，微软提供了一个恶意软件数据集，目标是预测某台机器是否即将遭受恶意软件攻击。其中一个特征“Census_OSVersion”在训练集上具有很强的预测能力，因为某些操作系统版本可能更容易出现漏洞和故障。然而，当按时间顺序划分数据时，验证集中的操作系统版本并未出现在训练集中，因此模型无法学习到目标变量与该季节性特征之间的关系。若使用其他特征重要性方法评估此特征的重要性，由于仅基于训练集计算，可能会得出其重要性较高的结论。而LOFO重要性分析依赖于验证方案，因此不仅会给出较低的重要性评分，甚至可能为负值。\n\n```python\nimport pandas as pd\nfrom sklearn.model_selection import KFold\nfrom lofo import LOFOImportance, Dataset, plot_importance\n%matplotlib inline\n\n# 导入数据\ntrain_df = pd.read_csv(\"..\u002Finput\u002Ftrain.csv\", dtype=dtypes)\n\n# 抽取数据样本\nsample_df = train_df.sample(frac=0.01, random_state=0)\nsample_df.sort_values(\"AvSigVersion\", inplace=True) # 按时间排序，以便进行时间序列交叉验证\n\n# 定义验证方案\ncv = KFold(n_splits=4, shuffle=False, random_state=None) # 不打乱顺序，以保持时间序列验证\n\n# 定义二分类目标及特征\ndataset = Dataset(df=sample_df, target=\"HasDetections\", features=[col for col in train_df.columns if col != \"HasDetections\"])\n\n# 定义验证方案和评分函数。默认模型为LightGBM\nlofo_imp = LOFOImportance(dataset, cv=cv, scoring=\"roc_auc\")\n\n# 获取重要性均值和标准差，并以Pandas格式展示\nimportance_df = lofo_imp.get_importance()\n\n# 绘制重要性均值和标准差\nplot_importance(importance_df, figsize=(12, 20))\n```\n\n![alt text](docs\u002Fplot_importance.png?raw=true \"Title\")\n\n## 另一个示例：Kaggle 
TReNDS竞赛\n\n在该Kaggle竞赛中，参赛者需要预测患者的某些认知属性。比赛提供了来自结构磁共振成像（sMRI）的独立成分特征（IC），以及来自三维功能磁共振成像（fMRI）的超高维相关性特征（FNC）。LOFO可以将这些fMRI相关性特征合并为一组。\n\n```python\nfrom sklearn.linear_model import Ridge\n\ndef get_lofo_importance(target):\n    cv = KFold(n_splits=7, shuffle=True, random_state=17)\n\n    dataset = Dataset(df=df[df[target].notnull()], target=target, features=loading_features,\n                      feature_groups={\"fnc\": df[df[target].notnull()][fnc_features].values\n                      })\n\n    model = Ridge(alpha=0.01)\n    lofo_imp = LOFOImportance(dataset, cv=cv, scoring=\"neg_mean_absolute_error\", model=model)\n\n    return lofo_imp.get_importance()\n\nplot_importance(get_lofo_importance(target=\"domain1_var1\"), figsize=(8, 8), kind=\"box\")\n```\n\n![alt text](docs\u002Fplot_importance_box.png?raw=true \"Title\")\n\n## FLOFO重要性分析\n\n如果您觉得运行LOFO重要性分析工具耗时过长，可以尝试使用Fast LOFO。Fast LOFO（简称FLOFO）接受已训练好的模型和验证集作为输入，对每个特征的取值进行伪随机排列，然后利用训练好的模型在验证集上进行预测。最终，FLOFO重要性即为模型在多次随机排列后的验证集表现差异的平均值。\n\nFLOFO重要性与置换重要性的区别在于：FLOFO的特征取值排列是在分组内进行的，分组方式是将验证集按每k=2个特征划分为若干组。这k个特征会随机选取n=10次，基于这n次运行的结果计算FLOFO重要性的均值和标准差。这种分组方式能够提升重要性度量的准确性，原因在于特征取值的排列不再是完全随机的，而是发生在相似样本的组内，相当于对样本进行了噪声扰动。这样可以确保：\n\n* 特征取值不太可能被替换为不合理的值；\n* 如果某个特征可以由所选n×k个特征中的其他特征预测出来，那么在排列过程中它会被替换成非常相似的值，因此对模型性能的影响较小，从而得到较小的FLOFO重要性值。这有效解决了相关特征被高估的问题。","# LOFO Importance 快速上手指南\n\nLOFO (Leave One Feature Out) Importance 是一种模型无关的特征重要性评估工具。它通过迭代移除每个特征并重新训练模型，基于选定的验证方案和指标来评估特征对模型性能的实际贡献。相比传统方法，它能有效避免高维特征偏差、处理特征相关性，并能识别出对模型性能有负面影响的特征。\n\n## 环境准备\n\n*   **系统要求**：支持 Python 3.6+ 的操作系统（Linux, macOS, Windows）。\n*   **前置依赖**：\n    *   Python 包管理工具 `pip`\n    *   核心依赖库：`pandas`, `scikit-learn`, `lightgbm` (若使用默认模型), `matplotlib` (若需绘图)\n    *   建议预先安装基础数据科学栈，例如通过 `conda` 或 `pip` 安装 `pandas` 和 `scikit-learn`。\n\n## 安装步骤\n\n推荐使用国内镜像源加速安装过程。\n\n**使用 pip 安装（推荐阿里云镜像）：**\n\n```bash\npip install lofo-importance -i https:\u002F\u002Fmirrors.aliyun.com\u002Fpypi\u002Fsimple\u002F\n```\n\n**或者使用官方源安装：**\n\n```bash\npip install lofo-importance\n```\n\n## 基本使用\n\n以下是一个基于 Kaggle 
数据集的最简使用示例，展示如何定义数据集、设置时间序列验证方案并计算特征重要性。\n\n### 1. 导入依赖与数据准备\n\n```python\nimport pandas as pd\nfrom sklearn.model_selection import KFold\nfrom lofo import LOFOImportance, Dataset, plot_importance\n\n# 导入数据 (此处以读取 CSV 为例)\n# train_df = pd.read_csv(\"your_data.csv\")\n\n# 为了演示，假设已有 DataFrame: train_df\n# 提取样本并按时间排序（针对时间序列验证场景）\nsample_df = train_df.sample(frac=0.01, random_state=0)\nsample_df.sort_values(\"AvSigVersion\", inplace=True) \n```\n\n### 2. 定义验证方案与数据集\n\nLOFO 的核心优势在于其灵活的验证机制。以下示例展示了如何设置不洗牌（保持时间顺序）的 K 折交叉验证。\n\n```python\n# 定义验证方案：4 折交叉验证，不打乱顺序以保留时间分割特性\ncv = KFold(n_splits=4, shuffle=False, random_state=None)\n\n# 定义目标列和特征列\ntarget_col = \"HasDetections\"\nfeature_cols = [col for col in train_df.columns if col != target_col]\n\n# 创建 Dataset 对象\ndataset = Dataset(\n    df=sample_df, \n    target=target_col, \n    features=feature_cols\n)\n```\n\n### 3. 计算并可视化重要性\n\n如果不指定模型，LOFO 默认使用 LightGBM。你可以直接获取重要性评分并绘图。\n\n```python\n# 初始化 LOFOImportance\n# scoring 参数可根据任务类型调整，如 'roc_auc', 'neg_mean_absolute_error' 等\nlofo_imp = LOFOImportance(dataset, cv=cv, scoring=\"roc_auc\")\n\n# 获取重要性结果 (返回包含均值和标准差的 DataFrame)\nimportance_df = lofo_imp.get_importance()\n\n# 打印前几行查看结果\nprint(importance_df.head())\n\n# 绘制特征重要性图\nplot_importance(importance_df, figsize=(12, 20))\n```\n\n### 进阶提示：特征分组\n\n对于高维特征（如 TF-IDF 或 One-Hot 编码产生的大量特征），可以通过 `feature_groups` 参数将它们分组评估，避免低估整体重要性。\n\n```python\n# 示例：将多个相关特征归为一组 \"group_name\"\ndataset = Dataset(\n    df=df, \n    target=\"target_col\", \n    features=[\"single_feature_1\", \"single_feature_2\"],\n    feature_groups={\n        \"high_dim_group\": df[[\"fnc_feat_1\", \"fnc_feat_2\", ...]].values\n    }\n)\n```","某金融风控团队正在构建信用卡欺诈检测模型，面对包含数千个高维独热编码（OHE）特征和强相关性时间序列变量的复杂数据集，急需筛选出真正具备泛化能力的核心指标。\n\n### 没有 lofo-importance 时\n- 传统重要性评估倾向于给粒度细碎的独热编码特征打分过高，导致模型被大量噪声特征干扰，难以捕捉核心规律。\n- 无法识别“负向特征”，某些在训练集表现好但在验证集严重过拟合的变量（如特定季节的操作系统版本）被误判为重要特征保留下来。\n- 高度相关的特征组（如同一用户的多个行为衍生变量）因共线性问题被单独评估，导致整体重要性被低估，关键业务逻辑被错误剔除。\n- 
模型在离线测试集表现优异，但上线后面对未见过的数据分布时性能急剧下降，缺乏可靠的验证机制来预判泛化能力。\n\n### 使用 lofo-importance 后\n- 通过逐个移除特征并重新验证的策略，自动将高维稀疏特征分组评估，精准识别出真正驱动预测结果的变量组合，不再偏袒细碎特征。\n- 能够计算并暴露“负重要性”特征，直接标记出那些引入后反而降低模型泛化性能的变量，帮助团队果断剔除过拟合源头。\n- 智能聚合高度相关特征的重要性得分，还原了特征组的真实贡献度，确保关键的业务逻辑链条不被误删。\n- 基于自定义验证方案（如时间序列切分）输出带标准差的重要性报告，让团队在模型部署前就能清晰预判其在新数据上的稳定性。\n\nlofo-importance 通过模拟“移除 - 重训 - 验证”的闭环，将特征重要性评估从单纯的统计数值升级为可信赖的泛化能力试金石。","https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Faerdem4_lofo-importance_8681a546.png","aerdem4","Ahmet Erdem","https:\u002F\u002Foss.gittoolsai.com\u002Favatars\u002Faerdem4_908ad91b.png",null,"Istanbul","a_erdem4","https:\u002F\u002Fgithub.com\u002Faerdem4",[83],{"name":84,"color":85,"percentage":86},"Python","#3572A5",100,862,83,"2026-04-01T17:12:38","MIT",1,"未说明",{"notes":94,"python":92,"dependencies":95},"该工具默认使用 LightGBM 作为模型，若未指定模型则自动调用。支持自定义验证策略（如时间序列分割）和特征分组功能。若计算耗时过长，可使用其变体 FLOFO（Fast LOFO），它基于已训练模型和伪随机置换进行快速评估，无需重新训练模型。",[96,97,98],"pandas","scikit-learn","lightgbm",[54,51,13],[101,102,103,104,105],"feature-importance","machine-learning","data-science","feature-selection","explainable-ai","2026-03-27T02:49:30.150509","2026-04-06T08:46:03.268651",[109,114,119,124,129,134,139],{"id":110,"question_zh":111,"answer_zh":112,"source_url":113},16556,"遇到 sklearn 1.5 或 1.6 版本导致的 _get_cv_score 错误或延迟怎么办？","该问题已在 lofo-importance 0.3.5 版本中修复。请运行以下命令更新包：\npip install --upgrade lofo-importance\n更新后，代码运行速度也会显著提升（从几小时缩短到几分钟）。","https:\u002F\u002Fgithub.com\u002Faerdem4\u002Flofo-importance\u002Fissues\u002F64",{"id":115,"question_zh":116,"answer_zh":117,"source_url":118},16555,"如何在 LOFOImportance 中使用 GroupKFold？","新版本的 scikit-learn 在 cross_validate 中对可迭代对象支持存在问题。解决方法是将 GroupKFold 生成的可迭代对象转换为列表。代码示例如下：\nlofo_imp = LOFOImportance(dataset, cv=list(GroupKFold(n_splits=4).split(X=tr, y=tr['pressure'], groups=tr['breath_id'])), 
scoring=\"neg_mean_absolute_error\")","https:\u002F\u002Fgithub.com\u002Faerdem4\u002Flofo-importance\u002Fissues\u002F28",{"id":120,"question_zh":121,"answer_zh":122,"source_url":123},16557,"为什么传入自定义模型（如 RandomForestClassifier）时变量分组功能会报错？","LOFO 的变量分组功能不会自动进行特征预处理（如标签编码）。如果您使用自定义模型，必须预先处理特征列：\n1. 将字符串类型的列转换为类别类型（category dtype）。\n2. 或者手动进行标签编码（Label Encoding）。\n如果直接传入包含字符串 dtypes 的列，模型将会报错。","https:\u002F\u002Fgithub.com\u002Faerdem4\u002Flofo-importance\u002Fissues\u002F56",{"id":125,"question_zh":126,"answer_zh":127,"source_url":128},16558,"在 Pandas 2.0.x 版本下导入 lofo 时出现 'StringMethods' AttributeError 怎么办？","该错误通常不是由 LOFO 直接引起的，而是由于依赖库 lightgbm 与 Pandas 2.0 的兼容性问题。建议尝试升级 dask 库，这通常能解决底层依赖冲突：\npip install --upgrade dask\n如果问题依旧，请检查 lightgbm 的版本兼容性。","https:\u002F\u002Fgithub.com\u002Faerdem4\u002Flofo-importance\u002Fissues\u002F55",{"id":130,"question_zh":131,"answer_zh":132,"source_url":133},16559,"如何获取更稳健的特征重要性统计量（如中位数和 IQR）而非均值和标准差？","LOFO 现已支持箱线图（box plot）功能，可以更直观地展示重要性的分布（包括中位数和四分位距），特别适用于非正态分布的数据。在使用 FLOFO 或绘图时，确保交叉验证的折数（folds）至少为 3，否则统计结果可能不准确并导致图表显示异常。","https:\u002F\u002Fgithub.com\u002Faerdem4\u002Flofo-importance\u002Fissues\u002F32",{"id":135,"question_zh":136,"answer_zh":137,"source_url":138},16560,"如何处理分类特征（categorical features）以避免 'Could not convert to float' 错误？","如果您的分类特征是 pandas 的 'category' 数据类型而不是 'object'，可能会触发类型转换错误。LOFO 已更新以更好地处理这种情况。确保您的数据集中分类列正确设置为 'category' 类型，或者在创建 Dataset 之前手动处理这些列。此问题已在相关 PR 中解决。","https:\u002F\u002Fgithub.com\u002Faerdem4\u002Flofo-importance\u002Fissues\u002F23",{"id":140,"question_zh":141,"answer_zh":142,"source_url":143},16561,"为什么无法直接从 lofo 导入 Dataset (from lofo import Dataset)？","这通常是因为安装了过旧的版本（如 0.2.0）。在新版本中，Dataset 已正确导出到主命名空间。请尝试在干净的虚拟环境中安装或升级最新版本的 lofo-importance：\npip install --upgrade lofo-importance\n升级后即可正常使用 from lofo import 
Dataset。","https:\u002F\u002Fgithub.com\u002Faerdem4\u002Flofo-importance\u002Fissues\u002F26",[145,150],{"id":146,"version":147,"summary_zh":148,"released_at":149},98868,"v0.3.5","## 变更内容\n* 修复了由 @aerdem4 在 https:\u002F\u002Fgithub.com\u002Faerdem4\u002Flofo-importance\u002Fpull\u002F65 中提出的关于 sklearn 参数名变更的问题\n\n\n**完整变更日志**: https:\u002F\u002Fgithub.com\u002Faerdem4\u002Flofo-importance\u002Fcompare\u002Fv0.3.4...v0.3.5","2025-02-14T12:14:55",{"id":151,"version":152,"summary_zh":78,"released_at":153},98869,"v0.3.4","2024-01-16T09:15:43"]