[{"data":1,"prerenderedAt":-1},["ShallowReactive",2],{"similar-shankarpandala--lazypredict":3,"tool-shankarpandala--lazypredict":61},[4,18,26,36,44,53],{"id":5,"name":6,"github_repo":7,"description_zh":8,"stars":9,"difficulty_score":10,"last_commit_at":11,"category_tags":12,"status":17},4358,"openclaw","openclaw\u002Fopenclaw","OpenClaw 是一款专为个人打造的本地化 AI 助手，旨在让你在自己的设备上拥有完全可控的智能伙伴。它打破了传统 AI 助手局限于特定网页或应用的束缚，能够直接接入你日常使用的各类通讯渠道，包括微信、WhatsApp、Telegram、Discord、iMessage 等数十种平台。无论你在哪个聊天软件中发送消息，OpenClaw 都能即时响应，甚至支持在 macOS、iOS 和 Android 设备上进行语音交互，并提供实时的画布渲染功能供你操控。\n\n这款工具主要解决了用户对数据隐私、响应速度以及“始终在线”体验的需求。通过将 AI 部署在本地，用户无需依赖云端服务即可享受快速、私密的智能辅助，真正实现了“你的数据，你做主”。其独特的技术亮点在于强大的网关架构，将控制平面与核心助手分离，确保跨平台通信的流畅性与扩展性。\n\nOpenClaw 非常适合希望构建个性化工作流的技术爱好者、开发者，以及注重隐私保护且不愿被单一生态绑定的普通用户。只要具备基础的终端操作能力（支持 macOS、Linux 及 Windows WSL2），即可通过简单的命令行引导完成部署。如果你渴望拥有一个懂你",349277,3,"2026-04-06T06:32:30",[13,14,15,16],"Agent","开发框架","图像","数据工具","ready",{"id":19,"name":20,"github_repo":21,"description_zh":22,"stars":23,"difficulty_score":10,"last_commit_at":24,"category_tags":25,"status":17},3808,"stable-diffusion-webui","AUTOMATIC1111\u002Fstable-diffusion-webui","stable-diffusion-webui 是一个基于 Gradio 构建的网页版操作界面，旨在让用户能够轻松地在本地运行和使用强大的 Stable Diffusion 图像生成模型。它解决了原始模型依赖命令行、操作门槛高且功能分散的痛点，将复杂的 AI 绘图流程整合进一个直观易用的图形化平台。\n\n无论是希望快速上手的普通创作者、需要精细控制画面细节的设计师，还是想要深入探索模型潜力的开发者与研究人员，都能从中获益。其核心亮点在于极高的功能丰富度：不仅支持文生图、图生图、局部重绘（Inpainting）和外绘（Outpainting）等基础模式，还独创了注意力机制调整、提示词矩阵、负向提示词以及“高清修复”等高级功能。此外，它内置了 GFPGAN 和 CodeFormer 等人脸修复工具，支持多种神经网络放大算法，并允许用户通过插件系统无限扩展能力。即使是显存有限的设备，stable-diffusion-webui 也提供了相应的优化选项，让高质量的 AI 艺术创作变得触手可及。",162132,"2026-04-05T11:01:52",[14,15,13],{"id":27,"name":28,"github_repo":29,"description_zh":30,"stars":31,"difficulty_score":32,"last_commit_at":33,"category_tags":34,"status":17},1381,"everything-claude-code","affaan-m\u002Feverything-claude-code","everything-claude-code 是一套专为 AI 编程助手（如 Claude Code、Codex、Cursor 等）打造的高性能优化系统。它不仅仅是一组配置文件，而是一个经过长期实战打磨的完整框架，旨在解决 AI 
代理在实际开发中面临的效率低下、记忆丢失、安全隐患及缺乏持续学习能力等核心痛点。\n\n通过引入技能模块化、直觉增强、记忆持久化机制以及内置的安全扫描功能，everything-claude-code 能显著提升 AI 在复杂任务中的表现，帮助开发者构建更稳定、更智能的生产级 AI 代理。其独特的“研究优先”开发理念和针对 Token 消耗的优化策略，使得模型响应更快、成本更低，同时有效防御潜在的攻击向量。\n\n这套工具特别适合软件开发者、AI 研究人员以及希望深度定制 AI 工作流的技术团队使用。无论您是在构建大型代码库，还是需要 AI 协助进行安全审计与自动化测试，everything-claude-code 都能提供强大的底层支持。作为一个曾荣获 Anthropic 黑客大奖的开源项目，它融合了多语言支持与丰富的实战钩子（hooks），让 AI 真正成长为懂上",150037,2,"2026-04-10T23:33:47",[14,13,35],"语言模型",{"id":37,"name":38,"github_repo":39,"description_zh":40,"stars":41,"difficulty_score":32,"last_commit_at":42,"category_tags":43,"status":17},2271,"ComfyUI","Comfy-Org\u002FComfyUI","ComfyUI 是一款功能强大且高度模块化的视觉 AI 引擎，专为设计和执行复杂的 Stable Diffusion 图像生成流程而打造。它摒弃了传统的代码编写模式，采用直观的节点式流程图界面，让用户通过连接不同的功能模块即可构建个性化的生成管线。\n\n这一设计巧妙解决了高级 AI 绘图工作流配置复杂、灵活性不足的痛点。用户无需具备编程背景，也能自由组合模型、调整参数并实时预览效果，轻松实现从基础文生图到多步骤高清修复等各类复杂任务。ComfyUI 拥有极佳的兼容性，不仅支持 Windows、macOS 和 Linux 全平台，还广泛适配 NVIDIA、AMD、Intel 及苹果 Silicon 等多种硬件架构，并率先支持 SDXL、Flux、SD3 等前沿模型。\n\n无论是希望深入探索算法潜力的研究人员和开发者，还是追求极致创作自由度的设计师与资深 AI 绘画爱好者，ComfyUI 都能提供强大的支持。其独特的模块化架构允许社区不断扩展新功能，使其成为当前最灵活、生态最丰富的开源扩散模型工具之一，帮助用户将创意高效转化为现实。",108322,"2026-04-10T11:39:34",[14,15,13],{"id":45,"name":46,"github_repo":47,"description_zh":48,"stars":49,"difficulty_score":32,"last_commit_at":50,"category_tags":51,"status":17},6121,"gemini-cli","google-gemini\u002Fgemini-cli","gemini-cli 是一款由谷歌推出的开源 AI 命令行工具，它将强大的 Gemini 大模型能力直接集成到用户的终端环境中。对于习惯在命令行工作的开发者而言，它提供了一条从输入提示词到获取模型响应的最短路径，无需切换窗口即可享受智能辅助。\n\n这款工具主要解决了开发过程中频繁上下文切换的痛点，让用户能在熟悉的终端界面内直接完成代码理解、生成、调试以及自动化运维任务。无论是查询大型代码库、根据草图生成应用，还是执行复杂的 Git 操作，gemini-cli 都能通过自然语言指令高效处理。\n\n它特别适合广大软件工程师、DevOps 人员及技术研究人员使用。其核心亮点包括支持高达 100 万 token 的超长上下文窗口，具备出色的逻辑推理能力；内置 Google 搜索、文件操作及 Shell 命令执行等实用工具；更独特的是，它支持 MCP（模型上下文协议），允许用户灵活扩展自定义集成，连接如图像生成等外部能力。此外，个人谷歌账号即可享受免费的额度支持，且项目基于 Apache 2.0 
协议完全开源，是提升终端工作效率的理想助手。",100752,"2026-04-10T01:20:03",[52,13,15,14],"插件",{"id":54,"name":55,"github_repo":56,"description_zh":57,"stars":58,"difficulty_score":32,"last_commit_at":59,"category_tags":60,"status":17},4721,"markitdown","microsoft\u002Fmarkitdown","MarkItDown 是一款由微软 AutoGen 团队打造的轻量级 Python 工具，专为将各类文件高效转换为 Markdown 格式而设计。它支持 PDF、Word、Excel、PPT、图片（含 OCR）、音频（含语音转录）、HTML 乃至 YouTube 链接等多种格式的解析，能够精准提取文档中的标题、列表、表格和链接等关键结构信息。\n\n在人工智能应用日益普及的今天，大语言模型（LLM）虽擅长处理文本，却难以直接读取复杂的二进制办公文档。MarkItDown 恰好解决了这一痛点，它将非结构化或半结构化的文件转化为模型“原生理解”且 Token 效率极高的 Markdown 格式，成为连接本地文件与 AI 分析 pipeline 的理想桥梁。此外，它还提供了 MCP（模型上下文协议）服务器，可无缝集成到 Claude Desktop 等 LLM 应用中。\n\n这款工具特别适合开发者、数据科学家及 AI 研究人员使用，尤其是那些需要构建文档检索增强生成（RAG）系统、进行批量文本分析或希望让 AI 助手直接“阅读”本地文件的用户。虽然生成的内容也具备一定可读性，但其核心优势在于为机器",93400,"2026-04-06T19:52:38",[52,14],
{"id":62,"github_repo":63,"name":64,"description_en":65,"description_zh":66,"ai_summary_zh":66,"readme_en":67,"readme_zh":68,"quickstart_zh":69,"use_case_zh":70,"hero_image_url":71,"owner_login":72,"owner_name":73,"owner_avatar_url":74,"owner_bio":75,"owner_company":76,"owner_location":77,"owner_email":78,"owner_twitter":78,"owner_website":79,"owner_url":80,"languages":81,"stars":90,"forks":91,"last_commit_at":92,"license":93,"difficulty_score":94,"env_os":95,"env_gpu":96,"env_ram":95,"env_deps":97,"category_tags":111,"github_topics":112,"view_count":10,"oss_zip_url":78,"oss_zip_packed_at":78,"status":17,"created_at":117,"updated_at":118,"faqs":119,"releases":149},3182,"shankarpandala\u002Flazypredict","lazypredict","Lazy Predict helps build a lot of basic models without much code and helps understand which models work better without any parameter tuning",
"lazypredict 是一款专为快速验证机器学习模型而设计的开源工具，旨在帮助开发者用最少的代码构建并对比大量基础模型。在数据科学项目中，初学者或资深工程师往往需要花费大量时间编写重复代码来测试不同算法，且难以在不调整参数的情况下直观判断哪种模型表现更佳。lazypredict 恰好解决了这一痛点，它能自动完成模型训练与评估，让用户迅速锁定最适合当前数据的算法方向。\n\n这款工具非常适合希望提高原型开发效率的数据科学家、机器学习工程师以及正在学习建模的学生。其核心亮点在于内置了超过 40 种机器学习模型，不仅覆盖传统的分类与回归任务，还全面支持时间序列预测，包含从统计模型（如 ARIMA）到深度学习（如 
LSTM）乃至预训练基础模型（TimesFM）等 20 多种前沿算法。此外，lazypredict 具备自动季节性检测、多种分类编码策略、原生 MLflow 实验追踪集成以及 GPU 加速能力，支持灵活配置超时与交叉验证。无论是处理结构化数据还是复杂的时间序列，它都能让模型筛选过程变得简单高效，是探索性数据分析阶段的得力助手。","# Lazy Predict\n\n[![image](https:\u002F\u002Fimg.shields.io\u002Fpypi\u002Fv\u002Flazypredict.svg)](https:\u002F\u002Fpypi.python.org\u002Fpypi\u002Flazypredict)\n[![Publish](https:\u002F\u002Fgithub.com\u002Fshankarpandala\u002Flazypredict\u002Factions\u002Fworkflows\u002Fpublish.yml\u002Fbadge.svg)](https:\u002F\u002Fgithub.com\u002Fshankarpandala\u002Flazypredict\u002Factions\u002Fworkflows\u002Fpublish.yml)\n[![Documentation Status](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fshankarpandala_lazypredict_readme_13d664e1afd7.png)](https:\u002F\u002Flazypredict.readthedocs.io\u002Fen\u002Flatest\u002F?badge=latest)\n[![Downloads](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fshankarpandala_lazypredict_readme_f617c87ed07f.png)](https:\u002F\u002Fpepy.tech\u002Fproject\u002Flazypredict)\n[![CodeFactor](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fshankarpandala_lazypredict_readme_9ee0cb95ac54.png)](https:\u002F\u002Fwww.codefactor.io\u002Frepository\u002Fgithub\u002Fshankarpandala\u002Flazypredict)\n[![Citations](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FCitations-45-blue)](https:\u002F\u002Fscholar.google.com\u002Fscholar?oi=bibs&hl=en&cites=4325808232671020176,16284230108871951652&as_sdt=5)\n\nLazy Predict helps build a lot of basic models without much code and helps understand which models work better without any parameter tuning.\n\n- Free software: MIT license\n- Documentation: \u003Chttps:\u002F\u002Flazypredict.readthedocs.io>\n\n## Features\n- Over 40 built-in machine learning models\n- Automatic model selection for classification, regression, and **time series forecasting**\n- **20+ forecasting models**: statistical (ETS, ARIMA, Theta), ML (Random Forest, XGBoost, etc.), deep learning (LSTM, GRU), and pretrained foundation models (TimesFM)\n- Automatic 
seasonal period detection via ACF\n- Multiple categorical encoding strategies (OneHot, Ordinal, Target, Binary)\n- Built-in MLflow integration for experiment tracking\n- **GPU acceleration**: XGBoost, LightGBM, CatBoost, cuML (RAPIDS), LSTM\u002FGRU, TimesFM\n- Support for Python 3.9 through 3.13\n- Custom metric evaluation support\n- Configurable timeout and cross-validation\n- Intel Extension for Scikit-learn acceleration support\n\n## Installation\n\n### pip (PyPI)\n\n```bash\npip install lazypredict\n```\n\n### conda (conda-forge)\n\n```bash\nconda install -c conda-forge lazypredict\n```\n\n### Optional extras (pip only)\n\nInstall with boosting libraries (XGBoost, LightGBM, CatBoost):\n\n```bash\npip install lazypredict[boost]\n```\n\nInstall with time series forecasting support:\n\n```bash\npip install lazypredict[timeseries]          # statsmodels + pmdarima\npip install lazypredict[timeseries,deeplearning]  # + LSTM\u002FGRU via PyTorch\npip install lazypredict[timeseries,foundation]    # + Google TimesFM (Python 3.10-3.11)\n```\n\nInstall with all optional dependencies:\n\n```bash\npip install lazypredict[all]\n```\n\n## Usage\n\nTo use Lazy Predict in a project:\n\n```python\nimport lazypredict\n```\n\n## Classification\n\nExample:\n\n```python\nfrom lazypredict.Supervised import LazyClassifier\nfrom sklearn.datasets import load_breast_cancer\nfrom sklearn.model_selection import train_test_split\n\ndata = load_breast_cancer()\nX = data.data\ny = data.target\n\nX_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.5, random_state=123)\n\nclf = LazyClassifier(verbose=0, ignore_warnings=True, custom_metric=None)\nmodels, predictions = clf.fit(X_train, X_test, y_train, y_test)\n\nprint(models)\n```\n\n### Advanced Options\n\n```python\n# With categorical encoding, timeout, cross-validation, and GPU\nclf = LazyClassifier(\n    verbose=1,                          # Show progress\n    ignore_warnings=True,               # Suppress warnings\n    
custom_metric=None,                 # Use default metrics\n    predictions=True,                   # Return predictions\n    classifiers='all',                  # Use all available classifiers\n    categorical_encoder='onehot',       # Encoding: 'onehot', 'ordinal', 'target', 'binary'\n    timeout=60,                         # Max time per model in seconds\n    cv=5,                               # Cross-validation folds (optional)\n    use_gpu=True                        # Enable GPU acceleration\n)\nmodels, predictions = clf.fit(X_train, X_test, y_train, y_test)\n```\n\n**Parameters:**\n- `verbose` (int): 0 for silent, 1 for progress display\n- `ignore_warnings` (bool): Suppress scikit-learn warnings\n- `custom_metric` (callable): Custom evaluation metric\n- `predictions` (bool): Return prediction DataFrame\n- `classifiers` (str\u002Flist): 'all' or list of classifier names\n- `categorical_encoder` (str): Encoding strategy for categorical features\n  - `'onehot'`: One-hot encoding (default)\n  - `'ordinal'`: Ordinal encoding\n  - `'target'`: Target encoding (requires `category-encoders`)\n  - `'binary'`: Binary encoding (requires `category-encoders`)\n- `timeout` (int): Maximum seconds per model (None for no limit)\n- `cv` (int): Number of cross-validation folds (None to disable)\n- `use_gpu` (bool): Enable GPU acceleration for supported models (default False)\n\n| Model                          |   Accuracy |   Balanced Accuracy |   ROC AUC |   F1 Score |   Time Taken |\n|:-------------------------------|-----------:|--------------------:|----------:|-----------:|-------------:|\n| LinearSVC                      |   0.989474 |            0.987544 |  0.987544 |   0.989462 |    0.0150008 |\n| SGDClassifier                  |   0.989474 |            0.987544 |  0.987544 |   0.989462 |    0.0109992 |\n| MLPClassifier                  |   0.985965 |            0.986904 |  0.986904 |   0.985994 |    0.426     |\n| Perceptron                     |   0.985965 |          
  0.984797 |  0.984797 |   0.985965 |    0.0120046 |\n| LogisticRegression             |   0.985965 |            0.98269  |  0.98269  |   0.985934 |    0.0200036 |\n| LogisticRegressionCV           |   0.985965 |            0.98269  |  0.98269  |   0.985934 |    0.262997  |\n| SVC                            |   0.982456 |            0.979942 |  0.979942 |   0.982437 |    0.0140011 |\n| CalibratedClassifierCV         |   0.982456 |            0.975728 |  0.975728 |   0.982357 |    0.0350015 |\n| PassiveAggressiveClassifier    |   0.975439 |            0.974448 |  0.974448 |   0.975464 |    0.0130005 |\n| LabelPropagation               |   0.975439 |            0.974448 |  0.974448 |   0.975464 |    0.0429988 |\n| LabelSpreading                 |   0.975439 |            0.974448 |  0.974448 |   0.975464 |    0.0310006 |\n| RandomForestClassifier         |   0.97193  |            0.969594 |  0.969594 |   0.97193  |    0.033     |\n| GradientBoostingClassifier     |   0.97193  |            0.967486 |  0.967486 |   0.971869 |    0.166998  |\n| QuadraticDiscriminantAnalysis  |   0.964912 |            0.966206 |  0.966206 |   0.965052 |    0.0119994 |\n| HistGradientBoostingClassifier |   0.968421 |            0.964739 |  0.964739 |   0.968387 |    0.682003  |\n| RidgeClassifierCV              |   0.97193  |            0.963272 |  0.963272 |   0.971736 |    0.0130029 |\n| RidgeClassifier                |   0.968421 |            0.960525 |  0.960525 |   0.968242 |    0.0119977 |\n| AdaBoostClassifier             |   0.961404 |            0.959245 |  0.959245 |   0.961444 |    0.204998  |\n| ExtraTreesClassifier           |   0.961404 |            0.957138 |  0.957138 |   0.961362 |    0.0270066 |\n| KNeighborsClassifier           |   0.961404 |            0.95503  |  0.95503  |   0.961276 |    0.0560005 |\n| BaggingClassifier              |   0.947368 |            0.954577 |  0.954577 |   0.947882 |    0.0559971 |\n| BernoulliNB                    |   0.950877 |            
0.951003 |  0.951003 |   0.951072 |    0.0169988 |\n| LinearDiscriminantAnalysis     |   0.961404 |            0.950816 |  0.950816 |   0.961089 |    0.0199995 |\n| GaussianNB                     |   0.954386 |            0.949536 |  0.949536 |   0.954337 |    0.0139935 |\n| NuSVC                          |   0.954386 |            0.943215 |  0.943215 |   0.954014 |    0.019989  |\n| DecisionTreeClassifier         |   0.936842 |            0.933693 |  0.933693 |   0.936971 |    0.0170023 |\n| NearestCentroid                |   0.947368 |            0.933506 |  0.933506 |   0.946801 |    0.0160074 |\n| ExtraTreeClassifier            |   0.922807 |            0.912168 |  0.912168 |   0.922462 |    0.0109999 |\n| CheckingClassifier             |   0.361404 |            0.5      |  0.5      |   0.191879 |    0.0170043 |\n| DummyClassifier                |   0.512281 |            0.489598 |  0.489598 |   0.518924 |    0.0119965 |\n\n## Regression\n\nExample:\n\n```python\nfrom lazypredict.Supervised import LazyRegressor\nfrom sklearn import datasets\nfrom sklearn.utils import shuffle\nimport numpy as np\n\ndiabetes  = datasets.load_diabetes()\nX, y = shuffle(diabetes.data, diabetes.target, random_state=13)\nX = X.astype(np.float32)\n\noffset = int(X.shape[0] * 0.9)\n\nX_train, y_train = X[:offset], y[:offset]\nX_test, y_test = X[offset:], y[offset:]\n\nreg = LazyRegressor(verbose=0, ignore_warnings=False, custom_metric=None)\nmodels, predictions = reg.fit(X_train, X_test, y_train, y_test)\n\nprint(models)\n```\n\n### Advanced Options\n\n```python\n# With categorical encoding, timeout, and GPU\nreg = LazyRegressor(\n    verbose=1,                          # Show progress\n    ignore_warnings=True,               # Suppress warnings\n    custom_metric=None,                 # Use default metrics\n    predictions=True,                   # Return predictions\n    regressors='all',                   # Use all available regressors\n    categorical_encoder='ordinal',      # 
Encoding: 'onehot', 'ordinal', 'target', 'binary'\n    timeout=120,                        # Max time per model in seconds\n    use_gpu=True                        # Enable GPU acceleration\n)\nmodels, predictions = reg.fit(X_train, X_test, y_train, y_test)\n```\n\n**Parameters:**\n- `verbose` (int): 0 for silent, 1 for progress display\n- `ignore_warnings` (bool): Suppress scikit-learn warnings\n- `custom_metric` (callable): Custom evaluation metric\n- `predictions` (bool): Return prediction DataFrame\n- `regressors` (str\u002Flist): 'all' or list of regressor names\n- `categorical_encoder` (str): Encoding strategy for categorical features\n  - `'onehot'`: One-hot encoding (default)\n  - `'ordinal'`: Ordinal encoding\n  - `'target'`: Target encoding (requires `category-encoders`)\n  - `'binary'`: Binary encoding (requires `category-encoders`)\n- `timeout` (int): Maximum seconds per model (None for no limit)\n- `use_gpu` (bool): Enable GPU acceleration for supported models (default False)\n\n| Model                         |   Adjusted R-Squared |   R-Squared |     RMSE |   Time Taken |\n|:------------------------------|---------------------:|------------:|---------:|-------------:|\n| ExtraTreesRegressor           |           0.378921   |  0.520076   |  54.2202 |   0.121466   |\n| OrthogonalMatchingPursuitCV   |           0.374947   |  0.517004   |  54.3934 |   0.0111742  |\n| Lasso                         |           0.373483   |  0.515873   |  54.457  |   0.00620174 |\n| LassoLars                     |           0.373474   |  0.515866   |  54.4575 |   0.0087235  |\n| LarsCV                        |           0.3715     |  0.514341   |  54.5432 |   0.0160234  |\n| LassoCV                       |           0.370413   |  0.513501   |  54.5903 |   0.0624897  |\n| PassiveAggressiveRegressor    |           0.366958   |  0.510831   |  54.7399 |   0.00689793 |\n| LassoLarsIC                   |           0.364984   |  0.509306   |  54.8252 |   0.0108321  |\n| 
SGDRegressor                  |           0.364307   |  0.508783   |  54.8544 |   0.0055306  |\n| RidgeCV                       |           0.363002   |  0.507774   |  54.9107 |   0.00728202 |\n| Ridge                         |           0.363002   |  0.507774   |  54.9107 |   0.00556874 |\n| BayesianRidge                 |           0.362296   |  0.507229   |  54.9411 |   0.0122972  |\n| LassoLarsCV                   |           0.361749   |  0.506806   |  54.9646 |   0.0175984  |\n| TransformedTargetRegressor    |           0.361749   |  0.506806   |  54.9646 |   0.00604773 |\n| LinearRegression              |           0.361749   |  0.506806   |  54.9646 |   0.00677514 |\n| Lars                          |           0.358828   |  0.504549   |  55.0903 |   0.00935149 |\n| ElasticNetCV                  |           0.356159   |  0.502486   |  55.2048 |   0.0478678  |\n| HuberRegressor                |           0.355251   |  0.501785   |  55.2437 |   0.0129263  |\n| RandomForestRegressor         |           0.349621   |  0.497434   |  55.4844 |   0.2331     |\n| AdaBoostRegressor             |           0.340416   |  0.490322   |  55.8757 |   0.0512381  |\n| LGBMRegressor                 |           0.339239   |  0.489412   |  55.9255 |   0.0396187  |\n| HistGradientBoostingRegressor |           0.335632   |  0.486625   |  56.0779 |   0.0897055  |\n| PoissonRegressor              |           0.323033   |  0.476889   |  56.6072 |   0.00953603 |\n| ElasticNet                    |           0.301755   |  0.460447   |  57.4899 |   0.00604224 |\n| KNeighborsRegressor           |           0.299855   |  0.458979   |  57.5681 |   0.00757337 |\n| OrthogonalMatchingPursuit     |           0.292421   |  0.453235   |  57.8729 |   0.00709486 |\n| BaggingRegressor              |           0.291213   |  0.452301   |  57.9223 |   0.0302746  |\n| GradientBoostingRegressor     |           0.247009   |  0.418143   |  59.7011 |   0.136803   |\n| TweedieRegressor              |         
  0.244215   |  0.415984   |  59.8118 |   0.00633955 |\n| XGBRegressor                  |           0.224263   |  0.400567   |  60.5961 |   0.339694   |\n| GammaRegressor                |           0.223895   |  0.400283   |  60.6105 |   0.0235181  |\n| RANSACRegressor               |           0.203535   |  0.38455    |  61.4004 |   0.0653253  |\n| LinearSVR                     |           0.116707   |  0.317455   |  64.6607 |   0.0077076  |\n| ExtraTreeRegressor            |           0.00201902 |  0.228833   |  68.7304 |   0.00626636 |\n| NuSVR                         |          -0.0667043  |  0.175728   |  71.0575 |   0.0143399  |\n| SVR                           |          -0.0964128  |  0.152772   |  72.0402 |   0.0114729  |\n| DummyRegressor                |          -0.297553   | -0.00265478 |  78.3701 |   0.00592971 |\n| DecisionTreeRegressor         |          -0.470263   | -0.136112   |  83.4229 |   0.00749898 |\n| GaussianProcessRegressor      |          -0.769174   | -0.367089   |  91.5109 |   0.0770502  |\n| MLPRegressor                  |          -1.86772    | -1.21597    | 116.508  |   0.235267   |\n| KernelRidge                   |          -5.03822    | -3.6659     | 169.061  |   0.0243919  |\n\n## Time Series Forecasting\n\nLazyForecaster benchmarks 20+ forecasting models on your time series in a single call:\n\n```python\nimport numpy as np\nfrom lazypredict.TimeSeriesForecasting import LazyForecaster\n\n# Generate sample data (or use your own)\nnp.random.seed(42)\nt = np.arange(200)\ny = 10 + 0.05 * t + 3 * np.sin(2 * np.pi * t \u002F 12) + np.random.normal(0, 1, 200)\n\ny_train, y_test = y[:180], y[180:]\n\nfcst = LazyForecaster(verbose=0, ignore_warnings=True)\nscores, predictions = fcst.fit(y_train, y_test)\nprint(scores)\n```\n\n| Model                         |     MAE |    RMSE |   MAPE |   SMAPE |    MASE | R-Squared | Time Taken 
|\n|:------------------------------|--------:|--------:|-------:|--------:|--------:|----------:|-----------:|\n| Holt                          | 0.8532  | 1.0285  | 6.3241 | 6.1758  | 0.6993  |  0.7218   |     0.03   |\n| SARIMAX                       | 0.8791  | 1.0601  | 6.5012 | 6.3414  | 0.7205  |  0.7045   |     0.12   |\n| Ridge_TS                      | 0.9124  | 1.0843  | 6.7523 | 6.5721  | 0.7478  |  0.6912   |     0.01   |\n| ...                           |   ...   |   ...   |  ...   |   ...   |   ...   |    ...    |     ...    |\n\n### With Exogenous Variables\n\n```python\n# Optional exogenous features\nX_train = np.column_stack([np.sin(t[:180]), np.cos(t[:180])])\nX_test = np.column_stack([np.sin(t[180:]), np.cos(t[180:])])\n\nscores, predictions = fcst.fit(y_train, y_test, X_train, X_test)\n```\n\n### Advanced Options\n\n```python\nfcst = LazyForecaster(\n    verbose=1,                          # Show progress\n    ignore_warnings=True,               # Suppress model errors\n    predictions=True,                   # Return forecast values\n    seasonal_period=12,                 # Override auto-detection\n    cv=3,                               # Time series cross-validation\n    timeout=30,                         # Max seconds per model\n    sort_by=\"RMSE\",                     # Sort metric (MAE, MAPE, SMAPE, MASE, R-Squared)\n    forecasters=\"all\",                  # Or list: [\"Holt\", \"AutoARIMA\", \"LSTM_TS\"]\n    max_models=10,                      # Limit number of models\n    use_gpu=True,                       # GPU acceleration for supported models\n    foundation_model_path=\"\u002Fpath\u002Fto\u002Ftimesfm-weights\",  # Local model weights (offline)\n)\nscores, predictions = fcst.fit(y_train, y_test)\n```\n\n**Parameters:**\n- `verbose` (int): 0 for silent, 1 for progress display\n- `ignore_warnings` (bool): Suppress per-model exceptions\n- `predictions` (bool): Return a second DataFrame of forecasted values\n- `seasonal_period` 
(int\u002FNone): Seasonal cycle length; ``None`` auto-detects via ACF\n- `cv` (int\u002FNone): Number of ``TimeSeriesSplit`` folds for cross-validation\n- `timeout` (int\u002Ffloat\u002FNone): Maximum training seconds per model\n- `sort_by` (str): Metric to sort by (``\"RMSE\"``, ``\"MAE\"``, ``\"MAPE\"``, ``\"SMAPE\"``, ``\"MASE\"``, ``\"R-Squared\"``)\n- `forecasters` (str\u002Flist): ``\"all\"`` or a list of model names\n- `n_lags` (int): Number of lag features for ML\u002FDL models (default 10)\n- `n_rolling` (tuple): Rolling-window sizes for feature engineering (default (3, 7))\n- `max_models` (int\u002FNone): Limit total models to train\n- `custom_metric` (callable): Additional metric ``f(y_true, y_pred) -> float``\n- `use_gpu` (bool): Enable GPU acceleration for supported models (default False)\n- `foundation_model_path` (str): Local path to pre-downloaded foundation model weights (e.g. TimesFM)\n\n**Available model categories:**\n- **Baselines:** Naive, SeasonalNaive\n- **Statistical (statsmodels):** SimpleExpSmoothing, Holt, HoltWinters_Add, HoltWinters_Mul, Theta, SARIMAX\n- **Statistical (pmdarima):** AutoARIMA\n- **ML (sklearn):** LinearRegression_TS, Ridge_TS, Lasso_TS, ElasticNet_TS, KNeighborsRegressor_TS, DecisionTreeRegressor_TS, RandomForestRegressor_TS, GradientBoostingRegressor_TS, AdaBoostRegressor_TS, ExtraTreesRegressor_TS, BaggingRegressor_TS, SVR_TS, XGBRegressor_TS, LGBMRegressor_TS, CatBoostRegressor_TS\n- **Deep Learning (torch):** LSTM_TS, GRU_TS\n- **Foundation (timesfm):** TimesFM\n\n## GPU Acceleration\n\nEnable GPU acceleration for supported models with `use_gpu=True`:\n\n```python\nfrom lazypredict.Supervised import LazyClassifier, LazyRegressor\n\n# Classification with GPU\nclf = LazyClassifier(use_gpu=True, verbose=0, ignore_warnings=True)\nmodels, predictions = clf.fit(X_train, X_test, y_train, y_test)\n\n# Regression with GPU\nreg = LazyRegressor(use_gpu=True, verbose=0, ignore_warnings=True)\nmodels, predictions = 
reg.fit(X_train, X_test, y_train, y_test)\n\n# Time Series with GPU\nfrom lazypredict.TimeSeriesForecasting import LazyForecaster\nfcst = LazyForecaster(use_gpu=True, verbose=0, ignore_warnings=True)\nscores, predictions = fcst.fit(y_train, y_test)\n```\n\n**Supported GPU backends:**\n- **XGBoost** — `device=\"cuda\"`\n- **LightGBM** — `device=\"gpu\"`\n- **CatBoost** — `task_type=\"GPU\"`\n- **cuML (RAPIDS)** — GPU-native scikit-learn replacements (auto-discovered when installed)\n- **LSTM \u002F GRU** — PyTorch CUDA\n- **TimesFM** — PyTorch CUDA\n\nFalls back to CPU automatically if no CUDA GPU is available.\n\n## Categorical Encoding\n\nLazy Predict supports multiple categorical encoding strategies:\n\n```python\nfrom lazypredict.Supervised import LazyClassifier\nimport pandas as pd\nfrom sklearn.model_selection import train_test_split\n\n# Example with categorical features\ndf = pd.read_csv('data_with_categories.csv')\nX = df.drop('target', axis=1)\ny = df['target']\n\nX_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3)\n\n# Try different encoders\nfor encoder in ['onehot', 'ordinal', 'target', 'binary']:\n    clf = LazyClassifier(\n        categorical_encoder=encoder,\n        verbose=0,\n        ignore_warnings=True\n    )\n    models, predictions = clf.fit(X_train, X_test, y_train, y_test)\n    print(f\"\\n{encoder.upper()} Encoding Results:\")\n    print(models.head())\n```\n\n**Note:** Target and binary encoders require the `category-encoders` package:\n```bash\npip install category-encoders\n```\n\n## Intel Extension Acceleration\n\nFor improved performance on Intel CPUs, install Intel Extension for Scikit-learn:\n\n```bash\npip install scikit-learn-intelex\n```\n\nLazy Predict will automatically detect and use it for acceleration.\n\n## MLflow Integration\n\nLazy Predict includes built-in MLflow integration. 
Enable it by setting the MLflow tracking URI:\n\n```python\nimport os\nos.environ['MLFLOW_TRACKING_URI'] = 'sqlite:\u002F\u002F\u002Fmlflow.db'\n\n# MLflow tracking will be automatically enabled\nreg = LazyRegressor(verbose=0, ignore_warnings=True)\nmodels, predictions = reg.fit(X_train, X_test, y_train, y_test)\n```\n\nAutomatically tracks:\n- Model metrics (R-squared, RMSE, etc.)\n- Training time\n- Model parameters\n- Model artifacts","# 懒人预测\n\n[![image](https:\u002F\u002Fimg.shields.io\u002Fpypi\u002Fv\u002Flazypredict.svg)](https:\u002F\u002Fpypi.python.org\u002Fpypi\u002Flazypredict)\n[![发布](https:\u002F\u002Fgithub.com\u002Fshankarpandala\u002Flazypredict\u002Factions\u002Fworkflows\u002Fpublish.yml\u002Fbadge.svg)](https:\u002F\u002Fgithub.com\u002Fshankarpandala\u002Flazypredict\u002Factions\u002Fworkflows\u002Fpublish.yml)\n[![文档状态](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fshankarpandala_lazypredict_readme_13d664e1afd7.png)](https:\u002F\u002Flazypredict.readthedocs.io\u002Fen\u002Flatest\u002F?badge=latest)\n[![下载量](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fshankarpandala_lazypredict_readme_f617c87ed07f.png)](https:\u002F\u002Fpepy.tech\u002Fproject\u002Flazypredict)\n[![CodeFactor](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fshankarpandala_lazypredict_readme_9ee0cb95ac54.png)](https:\u002F\u002Fwww.codefactor.io\u002Frepository\u002Fgithub\u002Fshankarpandala\u002Flazypredict)\n[![引用次数](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F引用次数-45-blue)](https:\u002F\u002Fscholar.google.com\u002Fscholar?oi=bibs&hl=en&cites=4325808232671020176,16284230108871951652&as_sdt=5)\n\n懒人预测可以帮助你在无需编写大量代码的情况下构建许多基础模型，并且无需进行任何超参数调优就能了解哪些模型效果更好。\n\n- 自由软件：MIT 许可证\n- 文档：\u003Chttps:\u002F\u002Flazypredict.readthedocs.io>\n\n## 特性\n- 内置超过40种机器学习模型\n- 自动选择分类、回归以及**时间序列预测**的模型\n- **20多种预测模型**：统计类（ETS、ARIMA、Theta）、机器学习类（随机森林、XGBoost等）、深度学习类（LSTM、GRU）以及预训练的基础模型（TimesFM）\n- 通过自相关函数（ACF）自动检测季节性周期\n- 多种类别编码策略（OneHot、Ordinal、Target、Binary）\n- 
内置MLflow集成，用于实验跟踪\n- **GPU加速**：XGBoost、LightGBM、CatBoost、cuML（RAPIDS）、LSTM\u002FGRU、TimesFM\n- 支持Python 3.9至3.13版本\n- 支持自定义指标评估\n- 可配置的超时时间和交叉验证\n- 支持Intel Extension for Scikit-learn加速\n\n## 安装\n\n### pip (PyPI)\n\n```bash\npip install lazypredict\n```\n\n### conda (conda-forge)\n\n```bash\nconda install -c conda-forge lazypredict\n```\n\n### 可选扩展（仅限pip）\n\n安装包含提升库（XGBoost、LightGBM、CatBoost）的版本：\n\n```bash\npip install lazypredict[boost]\n```\n\n安装支持时间序列预测的版本：\n\n```bash\npip install lazypredict[timeseries]          # statsmodels + pmdarima\npip install lazypredict[timeseries,deeplearning]  # + LSTM\u002FGRU via PyTorch\npip install lazypredict[timeseries,foundation]    # + Google TimesFM (Python 3.10-3.11)\n```\n\n安装所有可选依赖的版本：\n\n```bash\npip install lazypredict[all]\n```\n\n## 使用方法\n\n在项目中使用懒人预测：\n\n```python\nimport lazypredict\n```\n\n## 分类\n\n示例：\n\n```python\nfrom lazypredict.Supervised import LazyClassifier\nfrom sklearn.datasets import load_breast_cancer\nfrom sklearn.model_selection import train_test_split\n\ndata = load_breast_cancer()\nX = data.data\ny = data.target\n\nX_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.5, random_state=123)\n\nclf = LazyClassifier(verbose=0, ignore_warnings=True, custom_metric=None)\nmodels, predictions = clf.fit(X_train, X_test, y_train, y_test)\n\nprint(models)\n```\n\n### 高级选项\n\n```bash\n\n# 使用分类编码、超时、交叉验证和 GPU\nclf = LazyClassifier(\n    verbose=1,                          # 显示进度\n    ignore_warnings=True,               # 抑制警告\n    custom_metric=None,                 # 使用默认指标\n    predictions=True,                   # 返回预测结果\n    classifiers='all',                  # 使用所有可用的分类器\n    categorical_encoder='onehot',       # 编码方式：'onehot'、'ordinal'、'target'、'binary'\n    timeout=60,                         # 每个模型的最大运行时间（秒）\n    cv=5,                               # 交叉验证折数（可选）\n    use_gpu=True                        # 启用 GPU 加速\n)\nmodels, predictions = clf.fit(X_train, X_test, y_train, 
y_test)\n```\n\n**参数说明：**\n- `verbose` (int): 0 表示静默，1 表示显示进度。\n- `ignore_warnings` (bool): 抑制 scikit-learn 的警告信息。\n- `custom_metric` (callable): 自定义评估指标。\n- `predictions` (bool): 是否返回预测结果 DataFrame。\n- `classifiers` (str\u002Flist): 可以是 'all' 或者指定的分类器名称列表。\n- `categorical_encoder` (str): 处理分类特征的编码策略。\n  - `'onehot'`: one-hot 编码（默认）。\n  - `'ordinal'`: 序号编码。\n  - `'target'`: 目标编码（需要 `category-encoders` 库）。\n  - `'binary'`: 二进制编码（需要 `category-encoders` 库）。\n- `timeout` (int): 每个模型的最大运行时间（秒），设置为 None 表示无限制。\n- `cv` (int): 交叉验证的折数，设置为 None 表示不进行交叉验证。\n- `use_gpu` (bool): 对支持的模型启用 GPU 加速（默认为 False）。\n\n| 模型                          |   精确率 |   平衡准确率 |   ROC AUC |   F1 分数 |   耗时 |\n|:-------------------------------|-----------:|--------------------:|----------:|-----------:|-------------:|\n| LinearSVC                      |   0.989474 |            0.987544 |  0.987544 |   0.989462 |    0.0150008 |\n| SGDClassifier                  |   0.989474 |            0.987544 |  0.987544 |   0.989462 |    0.0109992 |\n| MLPClassifier                  |   0.985965 |            0.986904 |  0.986904 |   0.985994 |    0.426     |\n| Perceptron                     |   0.985965 |            0.984797 |  0.984797 |   0.985965 |    0.0120046 |\n| LogisticRegression             |   0.985965 |            0.98269  |  0.98269  |   0.985934 |    0.0200036 |\n| LogisticRegressionCV           |   0.985965 |            0.98269  |  0.98269  |   0.985934 |    0.262997  |\n| SVC                            |   0.982456 |            0.979942 |  0.979942 |   0.982437 |    0.0140011 |\n| CalibratedClassifierCV         |   0.982456 |            0.975728 |  0.975728 |   0.982357 |    0.0350015 |\n| PassiveAggressiveClassifier    |   0.975439 |            0.974448 |  0.974448 |   0.975464 |    0.0130005 |\n| LabelPropagation               |   0.975439 |            0.974448 |  0.974448 |   0.975464 |    0.0429988 |\n| LabelSpreading                 |   0.975439 |            0.974448 |  0.974448 |   0.975464 
|    0.0310006 |\n| RandomForestClassifier         |   0.97193  |            0.969594 |  0.969594 |   0.97193  |    0.033     |\n| GradientBoostingClassifier     |   0.97193  |            0.967486 |  0.967486 |   0.971869 |    0.166998  |\n| QuadraticDiscriminantAnalysis  |   0.964912 |            0.966206 |  0.966206 |   0.965052 |    0.0119994 |\n| HistGradientBoostingClassifier |   0.968421 |            0.964739 |  0.964739 |   0.968387 |    0.682003  |\n| RidgeClassifierCV              |   0.97193  |            0.963272 |  0.963272 |   0.971736 |    0.0130029 |\n| RidgeClassifier                |   0.968421 |            0.960525 |  0.960525 |   0.968242 |    0.0119977 |\n| AdaBoostClassifier             |   0.961404 |            0.959245 |  0.959245 |   0.961444 |    0.204998  |\n| ExtraTreesClassifier           |   0.961404 |            0.957138 |  0.957138 |   0.961362 |    0.0270066 |\n| KNeighborsClassifier           |   0.961404 |            0.95503  |  0.95503  |   0.961276 |    0.0560005 |\n| BaggingClassifier              |   0.947368 |            0.954577 |  0.954577 |   0.947882 |    0.0559971 |\n| BernoulliNB                    |   0.950877 |            0.951003 |  0.951003 |   0.951072 |    0.0169988 |\n| LinearDiscriminantAnalysis     |   0.961404 |            0.950816 |  0.950816 |   0.961089 |    0.0199995 |\n| GaussianNB                     |   0.954386 |            0.949536 |  0.949536 |   0.954337 |    0.0139935 |\n| NuSVC                          |   0.954386 |            0.943215 |  0.943215 |   0.954014 |    0.019989  |\n| DecisionTreeClassifier         |   0.936842 |            0.933693 |  0.933693 |   0.936971 |    0.0170043 |\n| NearestCentroid                |   0.947368 |            0.933506 |  0.933506 |   0.946801 |    0.0160074 |\n| ExtraTreeClassifier            |   0.922807 |            0.912168 |  0.912168 |   0.922462 |    0.01099    |\n| DummyClassifier                |   0.512281 |    
        0.489598 |  0.489598 |   0.518924 |    0.01199    |\n\n## 回归\n\n示例：\n\n```python\nfrom lazypredict.Supervised import LazyRegressor\nfrom sklearn import datasets\nfrom sklearn.utils import shuffle\nimport numpy as np\n\ndiabetes = datasets.load_diabetes()\nX, y = shuffle(diabetes.data, diabetes.target, random_state=13)\nX = X.astype(np.float32)\n\noffset = int(X.shape[0] * 0.9)\n\nX_train, y_train = X[:offset], y[:offset]\nX_test, y_test = X[offset:], y[offset:]\n\nreg = LazyRegressor(verbose=0, ignore_warnings=False, custom_metric=None)\nmodels, predictions = reg.fit(X_train, X_test, y_train, y_test)\n\nprint(models)\n```\n\n### 高级选项\n\n```python\n# 使用分类编码、超时设置和 GPU\nreg = LazyRegressor(\n    verbose=1,                          # 显示进度\n    ignore_warnings=True,               # 抑制警告\n    custom_metric=None,                 # 使用默认指标\n    predictions=True,                   # 返回预测结果\n    regressors='all',                   # 使用所有可用回归模型\n    categorical_encoder='ordinal',      # 编码方式：'onehot'、'ordinal'、'target'、'binary'\n    timeout=120,                        # 每个模型的最大运行时间（秒）\n    use_gpu=True                        # 启用 GPU 加速\n)\nmodels, predictions = reg.fit(X_train, X_test, y_train, y_test)\n```\n\n**参数说明：**\n- `verbose` (int): 0 表示静默，1 表示显示进度。\n- `ignore_warnings` (bool): 是否抑制 scikit-learn 的警告信息。\n- `custom_metric` (callable): 自定义评估指标。\n- `predictions` (bool): 是否返回预测结果的 DataFrame。\n- `regressors` (str\u002Flist): 可以是 'all' 或者指定回归模型名称的列表。\n- `categorical_encoder` (str): 处理分类特征的编码策略：\n  - `'onehot'`: 独热编码（默认）。\n  - `'ordinal'`: 序号编码。\n  - `'target'`: 目标编码（需要 `category-encoders` 库）。\n  - `'binary'`: 二进制编码（需要 `category-encoders` 库）。\n- `timeout` (int): 每个模型的最大运行时间（秒），设置为 None 表示无限制。\n- `use_gpu` (bool): 是否启用 GPU 加速（默认为 False），仅支持部分模型。\n\n| 模型                         |   调整后 R-Squared |   R-Squared |     RMSE |   耗时 
|\n|:------------------------------|---------------------:|------------:|---------:|-------------:|\n| ExtraTreesRegressor           |           0.378921   |  0.520076   |  54.2202 |   0.121466   |\n| OrthogonalMatchingPursuitCV   |           0.374947   |  0.517004   |  54.3934 |   0.0111742  |\n| Lasso                         |           0.373483   |  0.515873   |  54.457  |   0.00620174 |\n| LassoLars                     |           0.373474   |  0.515866   |  54.4575 |   0.0087235  |\n| LarsCV                        |           0.3715     |  0.514341   |  54.5432 |   0.0160234  |\n| LassoCV                       |           0.370413   |  0.513501   |  54.5903 |   0.0624897  |\n| PassiveAggressiveRegressor    |           0.366958   |  0.510831   |  54.7399 |   0.00689793 |\n| LassoLarsIC                   |           0.364984   |  0.509306   |  54.8252 |   0.0108321  |\n| SGDRegressor                  |           0.364307   |  0.508783   |  54.8544 |   0.0055306  |\n| RidgeCV                       |           0.363002   |  0.507774   |  54.9107 |   0.00728202 |\n| Ridge                         |           0.363002   |  0.507774   |  54.9107 |   0.00556874 |\n| BayesianRidge                 |           0.362296   |  0.507229   |  54.9411 |   0.0122972  |\n| LassoLarsCV                   |           0.361749   |  0.506806   |  54.9646 |   0.0175984  |\n| TransformedTargetRegressor    |           0.361749   |  0.506806   |  54.9646 |   0.00604773 |\n| LinearRegression              |           0.361749   |  0.506806   |  54.9646 |   0.00677514 |\n| Lars                          |           0.358828   |  0.504549   |  55.0903 |   0.00935149 |\n| ElasticNetCV                  |           0.356159   |  0.502486   |  55.2048 |   0.0478678  |\n| HuberRegressor                |           0.355251   |  0.501785   |  55.2437 |   0.0129263  |\n| RandomForestRegressor         |           0.349621   |  0.497434   |  55.4844 |   0.2331     |\n| AdaBoostRegressor             |    
       0.340416   |  0.490322   |  55.8757 |   0.0512381  |\n| LGBMRegressor                 |           0.339239   |  0.489412   |  55.9255 |   0.0396187  |\n| HistGradientBoostingRegressor |           0.335632   |  0.486625   |  56.0779 |   0.0897055  |\n| PoissonRegressor              |           0.323033   |  0.476889   |  56.6072 |   0.00953603 |\n| ElasticNet                    |           0.301755   |  0.460447   |  57.4899 |   0.00604224 |\n| KNeighborsRegressor           |           0.299855   |  0.458979   |  57.5681 |   0.00757337 |\n| OrthogonalMatchingPursuit     |           0.292421   |  0.453235   |  57.8729 |   0.00709486 |\n| BaggingRegressor              |           0.291213   |  0.452301   |  57.9223 |   0.0302746  |\n| GradientBoostingRegressor     |           0.247009   |  0.418143   |  59.7011 |   0.136803   |\n| TweedieRegressor              |           0.244215   |  0.415984   |  59.8118 |   0.00633955 |\n| XGBRegressor                  |           0.224263   |  0.400567   |  60.5961 |   0.339694   |\n| GammaRegressor                |           0.223895   |  0.400283   |  60.6105 |   0.0235181  |\n| RANSACRegressor               |           0.203535   |  0.38455    |  61.4004 |   0.0653253  |\n| LinearSVR                     |           0.116707   |  0.317455   |  64.6607 |   0.0077076  |\n| ExtraTreeRegressor            |           0.00201902 |  0.228833   |  68.7304 |   0.00626636 |\n| NuSVR                         |          -0.0667043  |  0.175728   |  71.0575 |   0.0143399  |\n| SVR                           |          -0.0964128  |  0.152772   |  72.0402 |   0.0114729  |\n| DummyRegressor                |          -0.297553   | -0.00265478 |  78.3701 |   0.00592971 |\n| DecisionTreeRegressor         |          -0.470263   | -0.136112   |  83.4229 |   0.00749898 |\n| GaussianProcessRegressor      |          -0.769174   | -0.367089   |  91.5109 |   0.0770502  |\n| MLPRegressor                  |          -1.86772    | -1.21597    | 
116.508  |   0.235267   |\n| KernelRidge                   |          -5.03822    |          -3.6659    | 169.061  |   0.0243919  |\n\n## 时间序列预测\n\nLazyForecaster 可以在一次调用中用 20 多种预测模型对您的时间序列数据进行基准测试：\n\n```python\nimport numpy as np\nfrom lazypredict.TimeSeriesForecasting import LazyForecaster\n\n# 生成示例数据（或使用您自己的数据）\nnp.random.seed(42)\nt = np.arange(200)\ny = 10 + 0.05 * t + 3 * np.sin(2 * np.pi * t \u002F 12) + np.random.normal(0, 1, 200)\n\ny_train, y_test = y[:180], y[180:]\n\nfcst = LazyForecaster(verbose=0, ignore_warnings=True)\nscores, predictions = fcst.fit(y_train, y_test)\nprint(scores)\n```\n\n| 模型                         |     MAE |    RMSE |   MAPE |   SMAPE |    MASE | R-Squared | 耗时 |\n|:------------------------------|--------:|--------:|-------:|--------:|--------:|----------:|-----------:|\n| Holt                          | 0.8532  | 1.0285  | 6.3241 | 6.1758  | 0.6993  |  0.7218   |     0.03   |\n| SARIMAX                       | 0.8791  | 1.0601  | 6.5012 | 6.3414  | 0.7205  |  0.7045   |     0.12   |\n| Ridge_TS                      | 0.9124  | 1.0843  | 6.7523 | 6.5721  | 0.7478  |  0.6912   |     0.01   |\n| ...                           |   ...   |   ...   |  ...   |   ...   |   ...   |    ...    |     ...    
|\n\n### 带有外生变量的情况\n\n```python\n# 可选的外生特征\nX_train = np.column_stack([np.sin(t[:180]), np.cos(t[:180])])\nX_test = np.column_stack([np.sin(t[180:]), np.cos(t[180:])])\n\nscores, predictions = fcst.fit(y_train, y_test, X_train, X_test)\n```\n\n### 高级选项\n\n```python\nfcst = LazyForecaster(\n    verbose=1,                          # 显示进度\n    ignore_warnings=True,               # 抑制模型错误\n    predictions=True,                   # 返回预测值\n    seasonal_period=12,                 # 覆盖自动检测\n    cv=3,                               # 时间序列交叉验证\n    timeout=30,                         # 每个模型的最大训练秒数\n    sort_by=\"RMSE\",                     # 排序指标（MAE、MAPE、SMAPE、MASE、R-Squared）\n    forecasters=\"all\",                  # 或者列出：[\"Holt\", \"AutoARIMA\", \"LSTM_TS\"]\n    max_models=10,                      # 限制模型数量\n    use_gpu=True,                       # 对支持的模型启用 GPU 加速\n    foundation_model_path=\"\u002Fpath\u002Fto\u002Ftimesfm-weights\",  # 本地模型权重（离线）\n)\nscores, predictions = fcst.fit(y_train, y_test)\n```\n\n**参数：**\n- `verbose` (int): 0 表示静默，1 表示显示进度\n- `ignore_warnings` (bool): 抑制每个模型的异常\n- `predictions` (bool): 返回包含预测值的第二个 DataFrame\n- `seasonal_period` (int\u002FNone): 季节性周期长度；设置为 ``None`` 时会通过 ACF 自动检测\n- `cv` (int\u002FNone): 用于交叉验证的 ``TimeSeriesSplit`` 折数\n- `timeout` (int\u002Ffloat\u002FNone): 每个模型的最大训练时间（秒）\n- `sort_by` (str): 用于排序的指标（``\"RMSE\"``, ``\"MAE\"``, ``\"MAPE\"``, ``\"SMAPE\"``, ``\"MASE\"``, ``\"R-Squared\"``）\n- `forecasters` (str\u002Flist): ``\"all\"`` 或模型名称列表\n- `n_lags` (int): 用于 ML\u002FDL 模型的滞后特征数量（默认 10）\n- `n_rolling` (tuple): 用于特征工程的滑动窗口大小（默认 (3, 7)）\n- `max_models` (int\u002FNone): 限制训练的总模型数量\n- `custom_metric` (callable): 自定义评估指标 ``f(y_true, y_pred) -> float``\n- `use_gpu` (bool): 对支持的模型启用 GPU 加速（默认 False）\n- `foundation_model_path` (str): 预先下载的基础模型权重的本地路径（例如 TimesFM）\n\n**可用的模型类别：**\n- **基准模型:** Naive, SeasonalNaive\n- **统计模型 (statsmodels):** SimpleExpSmoothing, Holt, HoltWinters_Add, HoltWinters_Mul, Theta, SARIMAX\n- **统计模型 
(pmdarima):** AutoARIMA\n- **机器学习 (sklearn):** LinearRegression_TS, Ridge_TS, Lasso_TS, ElasticNet_TS, KNeighborsRegressor_TS, DecisionTreeRegressor_TS, RandomForestRegressor_TS, GradientBoostingRegressor_TS, AdaBoostRegressor_TS, ExtraTreesRegressor_TS, BaggingRegressor_TS, SVR_TS, XGBRegressor_TS, LGBMRegressor_TS, CatBoostRegressor_TS\n- **深度学习 (torch):** LSTM_TS, GRU_TS\n- **基础模型 (timesfm):** TimesFM\n\n## GPU 加速\n\n通过设置 `use_gpu=True`，可以为支持的模型启用 GPU 加速：\n\n```python\nfrom lazypredict.Supervised import LazyClassifier, LazyRegressor\n\n# 分类任务并使用 GPU\nclf = LazyClassifier(use_gpu=True, verbose=0, ignore_warnings=True)\nmodels, predictions = clf.fit(X_train, X_test, y_train, y_test)\n\n# 回归任务并使用 GPU\nreg = LazyRegressor(use_gpu=True, verbose=0, ignore_warnings=True)\nmodels, predictions = reg.fit(X_train, X_test, y_train, y_test)\n\n# 时间序列预测并使用 GPU\nfrom lazypredict.TimeSeriesForecasting import LazyForecaster\nfcst = LazyForecaster(use_gpu=True, verbose=0, ignore_warnings=True)\nscores, predictions = fcst.fit(y_train, y_test)\n```\n\n**支持的 GPU 后端：**\n- **XGBoost** — `device=\"cuda\"`\n- **LightGBM** — `device=\"gpu\"`\n- **CatBoost** — `task_type=\"GPU\"`\n- **cuML (RAPIDS)** — 基于 GPU 的 scikit-learn 替代品（安装后自动识别）\n- **LSTM \u002F GRU** — PyTorch CUDA\n- **TimesFM** — PyTorch CUDA\n\n如果没有可用的 CUDA GPU，则会自动回退到 CPU。\n\n## 分类编码\n\nLazy Predict 支持多种分类编码策略：\n\n```python\nfrom lazypredict.Supervised import LazyClassifier\nimport pandas as pd\nfrom sklearn.model_selection import train_test_split\n\n# 包含分类特征的示例\ndf = pd.read_csv('data_with_categories.csv')\nX = df.drop('target', axis=1)\ny = df['target']\n\nX_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3)\n\n# 尝试不同的编码方法\nfor encoder in ['onehot', 'ordinal', 'target', 'binary']:\n    clf = LazyClassifier(\n        categorical_encoder=encoder,\n        verbose=0,\n        ignore_warnings=True\n    )\n    models, predictions = clf.fit(X_train, X_test, y_train, y_test)\n    print(f\"\\n{encoder.upper()} 
编码结果:\")\n    print(models.head())\n```\n\n**注意：** 目标编码和二进制编码需要安装 `category-encoders` 包：\n```bash\npip install category-encoders\n```\n\n## Intel 扩展加速\n\n为了在 Intel CPU 上获得更好的性能，可以安装 Intel Extension for Scikit-learn：\n\n```bash\npip install scikit-learn-intelex\n```\n\nLazy Predict 会自动检测并利用它来加速计算。\n\n## MLflow 集成\n\nLazy Predict 内置了 MLflow 集成。可以通过设置 MLflow 追踪 URI 来启用：\n\n```python\nimport os\nos.environ['MLFLOW_TRACKING_URI'] = 'sqlite:\u002F\u002F\u002Fmlflow.db'\n\n# MLflow 追踪将自动启用\nreg = LazyRegressor(verbose=0, ignore_warnings=True)\nmodels, predictions = reg.fit(X_train, X_test, y_train, y_test)\n```\n\n自动跟踪的内容包括：\n- 模型指标（R-squared、RMSE 等）\n- 训练时间\n- 模型参数\n- 模型工件","# LazyPredict 快速上手指南\n\nLazyPredict 是一个高效的 Python 库，旨在通过极少的代码自动构建和评估超过 40 种机器学习模型（涵盖分类、回归及时间序列预测），帮助开发者快速筛选出表现最佳的算法，无需手动调参。\n\n## 1. 环境准备\n\n在开始之前，请确保您的开发环境满足以下要求：\n\n*   **操作系统**：Windows, macOS, 或 Linux\n*   **Python 版本**：3.9 - 3.13\n*   **前置依赖**：建议预先安装 `scikit-learn`, `pandas`, `numpy`。\n    *   若需使用 GPU 加速或时间序列深度模型，请确保已安装对应的 CUDA 驱动及 PyTorch\u002FTensorFlow 环境。\n\n## 2. 
安装步骤\n\n您可以选择 `pip` 或 `conda` 进行安装。国内用户推荐使用清华源或阿里源以加速下载。\n\n### 基础安装\n仅安装核心功能（包含基础分类与回归模型）：\n\n```bash\n# 使用 pip (推荐国内镜像)\npip install lazypredict -i https:\u002F\u002Fpypi.tuna.tsinghua.edu.cn\u002Fsimple\n\n# 或使用 conda\nconda install -c conda-forge lazypredict\n```\n\n### 可选扩展安装\n根据需求安装额外功能模块：\n\n**安装提升树模型支持 (XGBoost, LightGBM, CatBoost):**\n```bash\npip install lazypredict[boost] -i https:\u002F\u002Fpypi.tuna.tsinghua.edu.cn\u002Fsimple\n```\n\n**安装时间序列预测支持:**\n```bash\n# 统计模型 (ARIMA, ETS 等)\npip install lazypredict[timeseries] -i https:\u002F\u002Fpypi.tuna.tsinghua.edu.cn\u002Fsimple\n\n# 深度学习模型 (LSTM, GRU)\npip install lazypredict[timeseries,deeplearning] -i https:\u002F\u002Fpypi.tuna.tsinghua.edu.cn\u002Fsimple\n\n# 基础大模型 (Google TimesFM, 仅限 Python 3.10-3.11)\npip install lazypredict[timeseries,foundation] -i https:\u002F\u002Fpypi.tuna.tsinghua.edu.cn\u002Fsimple\n```\n\n**安装所有依赖:**\n```bash\npip install lazypredict[all] -i https:\u002F\u002Fpypi.tuna.tsinghua.edu.cn\u002Fsimple\n```\n\n## 3. 
基本使用\n\nLazyPredict 的核心用法非常简洁，只需几行代码即可对比多种模型的效果。\n\n### 分类任务示例\n\n以下示例展示了如何使用 `LazyClassifier` 在乳腺癌数据集上自动训练并评估所有可用的分类器。\n\n```python\nfrom lazypredict.Supervised import LazyClassifier\nfrom sklearn.datasets import load_breast_cancer\nfrom sklearn.model_selection import train_test_split\n\n# 加载数据\ndata = load_breast_cancer()\nX = data.data\ny = data.target\n\n# 划分训练集和测试集\nX_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.5, random_state=123)\n\n# 初始化分类器\nclf = LazyClassifier(verbose=0, ignore_warnings=True, custom_metric=None)\n\n# 拟合模型并获取结果\nmodels, predictions = clf.fit(X_train, X_test, y_train, y_test)\n\n# 打印模型性能排名\nprint(models)\n```\n\n**输出说明：**\n程序将输出一个 DataFrame，包含每个模型的准确率 (Accuracy)、平衡准确率、ROC AUC、F1 分数以及训练耗时，按性能自动排序。\n\n### 回归任务示例\n\n以下示例展示了如何使用 `LazyRegressor` 处理糖尿病数据集的回归问题。\n\n```python\nfrom lazypredict.Supervised import LazyRegressor\nfrom sklearn import datasets\nfrom sklearn.utils import shuffle\nimport numpy as np\n\n# 加载并打乱数据\ndiabetes = datasets.load_diabetes()\nX, y = shuffle(diabetes.data, diabetes.target, random_state=13)\nX = X.astype(np.float32)\n\n# 划分数据集 (90% 训练，10% 测试)\noffset = int(X.shape[0] * 0.9)\nX_train, y_train = X[:offset], y[:offset]\nX_test, y_test = X[offset:], y[offset:]\n\n# 初始化回归器\nreg = LazyRegressor(verbose=0, ignore_warnings=False, custom_metric=None)\n\n# 拟合模型并获取结果\nmodels, predictions = reg.fit(X_train, X_test, y_train, y_test)\n\n# 打印模型性能排名\nprint(models)\n```\n\n**输出说明：**\n结果将展示各模型的调整后 R 平方 (Adjusted R-Squared)、R 平方、RMSE (均方根误差) 及训练耗时。\n\n### 进阶配置提示\n\n若需启用 GPU 加速、设置超时限制或使用特定编码策略，可在初始化时传入参数：\n\n```python\nclf = LazyClassifier(\n    verbose=1,                   # 显示进度条\n    categorical_encoder='onehot',# 分类特征编码方式: 'onehot', 'ordinal', 'target', 'binary'\n    timeout=60,                  # 单个模型最大运行时间 (秒)\n    use_gpu=True                 # 启用支持的模型的 GPU 加速\n)\n```","某电商数据团队需要在半天内为“双十一”促销预测各类商品的销量，以快速制定备货策略。\n\n### 没有 lazypredict 时\n- 数据科学家需手动编写代码逐一实例化几十种模型（如线性回归、随机森林、XGBoost 等），重复性工作繁重且容易出错。\n- 
面对时间序列数据，难以快速判断该用传统统计模型（ARIMA）还是深度学习模型（LSTM），往往凭经验盲目尝试，效率低下。\n- 缺乏统一的评估框架，对比不同模型效果时需要分别计算指标并整理表格，耗时耗力，难以在紧迫期限内产出结论。\n- 若要利用 GPU 加速训练或尝试复杂的分类编码策略，需要额外配置大量环境依赖和底层代码，技术门槛高。\n\n### 使用 lazypredict 后\n- 仅需几行代码即可自动运行内置的 40 多种机器学习模型及 20 多种时间序列预测模型，瞬间完成批量建模。\n- 工具自动检测数据季节性特征并匹配最佳算法，同时支持从统计模型到 TimesFM 基础模型的无缝切换，让选型科学直观。\n- 自动生成包含各项性能指标的对比报表，直接呈现表现最优的模型，让团队能立即锁定最佳预测方案并投入业务使用。\n- 通过简单参数配置（如 `use_gpu=True`）即可启用 GPU 加速和自动类别编码，大幅缩短训练时间并降低工程复杂度。\n\nlazypredict 将原本需要数天的模型筛选与基准测试工作压缩至分钟级，让数据团队能专注于业务洞察而非重复编码。","https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fshankarpandala_lazypredict_a1d7728a.png","shankarpandala","Shankar Rao Pandala","https:\u002F\u002Foss.gittoolsai.com\u002Favatars\u002Fshankarpandala_35d3797c.jpg","Sr. Data Scientist","Applied Materials","Bengaluru",null,"https:\u002F\u002Fwww.pandala.in\u002F","https:\u002F\u002Fgithub.com\u002Fshankarpandala",[82,86],{"name":83,"color":84,"percentage":85},"Python","#3572A5",99.3,{"name":87,"color":88,"percentage":89},"Makefile","#427819",0.7,3312,368,"2026-04-03T23:30:54","MIT",1,"未说明","非必需。可选支持：启用 use_gpu=True 时，需 NVIDIA GPU 以加速 XGBoost, LightGBM, CatBoost, cuML (RAPIDS), LSTM\u002FGRU, TimesFM 模型。具体显存和 CUDA 版本取决于所选后端库（如 PyTorch 或 RAPIDS）的要求。",{"notes":98,"python":99,"dependencies":100},"1. 基础功能可通过 'pip install lazypredict' 安装；2. 时间序列预测需安装额外依赖 'lazypredict[timeseries]'；3. 深度学习模型 (LSTM\u002FGRU) 需安装 'lazypredict[timeseries,deeplearning]'；4. Google TimesFM 基础模型仅支持 Python 3.10-3.11，需安装 'lazypredict[timeseries,foundation]'；5. 若使用 Target 或 Binary 编码策略，需确保安装了 'category-encoders' 库；6. 
支持 Intel Extension 以加速 Scikit-learn 模型。","3.9 - 3.13",[101,102,103,104,105,106,107,108,109,110],"scikit-learn","pandas","numpy","xgboost","lightgbm","catboost","torch","statsmodels","pmdarima","category-encoders",[14],[113,114,115,116],"machine-learning","automl","regression","classification","2026-03-27T02:49:30.150509","2026-04-11T18:33:07.105193",[120,125,130,134,139,144],{"id":121,"question_zh":122,"answer_zh":123,"source_url":124},14658,"导入包时出现 'ModuleNotFoundError: No module named sklearn.utils.testing' 错误怎么办？","这通常是由于 scikit-learn 版本不兼容导致的。虽然依赖管理已在 setup.py 中配置，但建议尝试以下方法：\n1. 安装包含所有可选依赖的版本：`pip install lazypredict[all]`\n2. 或者手动安装依赖：`pip install -r requirements.txt`\n3. 确保您的 scikit-learn 版本与 lazypredict 要求的版本一致（该错误常见于 sklearn 架构变更后的版本）。\n如果问题依旧，可能需要检查环境中是否存在冲突的 sklearn 版本。","https:\u002F\u002Fgithub.com\u002Fshankarpandala\u002Flazypredict\u002Fissues\u002F325",{"id":126,"question_zh":127,"answer_zh":128,"source_url":129},14659,"LazyPredict 运行时间过长或卡在某个进度（如 60%）如何处理？","可以使用新引入的 `timeout` 参数来防止模型运行时间过长。示例代码如下：\n```python\nfrom lazypredict.Supervised import LazyClassifier\nclf = LazyClassifier(timeout=60)  # 限制每个模型最多运行 60 秒\nmodels, predictions = clf.fit(X_train, X_test, y_train, y_test)\n```\n此外，还可以采取以下优化措施：\n1. 排除已知运行缓慢的模型（如 SVC），自定义分类器列表。\n2. 对数据集进行采样以减少数据量。\n3. 
如果可用，启用 Intel Extension 以获得约 1.5 倍的速度提升。","https:\u002F\u002Fgithub.com\u002Fshankarpandala\u002Flazypredict\u002Fissues\u002F317",{"id":131,"question_zh":132,"answer_zh":133,"source_url":129},14660,"如何排除特定的慢速模型（如 SVC）以加快运行速度？","您可以手动构建一个不包含慢速模型的分类器列表。参考代码如下：\n```python\nfrom sklearn.utils import all_estimators\nfrom sklearn.base import ClassifierMixin\n\n# 定义要排除的慢速或不需要的模型名称\nremoved_classifiers = [\n    \"SVC\", \"LabelPropagation\", \"LabelSpreading\", \"NuSVC\",\n    \"GradientBoostingClassifier\", \"GaussianProcessClassifier\",\n    \"MLPClassifier\", \"LogisticRegressionCV\"\n]\n\n# 生成过滤后的分类器列表\nclassifiers_list = [est for est in all_estimators() if (issubclass(est[1], ClassifierMixin) and (est[0] not in removed_classifiers))]\n\n# 将 classifiers_list 传递给 LazyClassifier (具体参数名视版本而定，或通过修改源码实现)\n```\n通过移除如 SVC 等在实际数据上运行极慢的模型，可以显著缩短整体运行时间。",{"id":135,"question_zh":136,"answer_zh":137,"source_url":138},14661,"安装时提示找不到特定版本的 scipy (如 scipy==1.6.0) 怎么办？","这通常是因为您的 Python 版本或操作系统环境不支持该特定版本的 scipy。解决方法包括：\n1. 升级 pip 和 setuptools：`pip install --upgrade pip setuptools`\n2. 尝试安装预编译的二进制包（特别是在 Windows 上）。\n3. 如果不需要严格版本约束，可以尝试先安装兼容版本的 scipy，再安装 lazypredict，或者使用 `pip install lazypredict --no-deps` 然后手动安装其他依赖。\n4. 检查 Python 版本是否过旧（如 Python 3.6 可能不再支持最新的 scipy 版本），建议升级到更新的 Python 环境。","https:\u002F\u002Fgithub.com\u002Fshankarpandala\u002Flazypredict\u002Fissues\u002F329",{"id":140,"question_zh":141,"answer_zh":142,"source_url":143},14662,"安装后导入时报错 'ModuleNotFoundError: No module named tqdm' 或其他依赖缺失如何解决？","这表明安装过程中未自动安装所有必要的依赖项。请执行以下步骤：\n1. 确保使用完整安装命令：`pip install lazypredict[all]`，这将安装包括 tqdm, xgboost, lightgbm 在内的所有可选依赖。\n2. 如果上述命令无效，请手动安装缺失的库：`pip install tqdm xgboost lightgbm pytest`。\n3. 
在某些情况下，可能需要重新创建虚拟环境并重新安装，以确保依赖关系解析正确。","https:\u002F\u002Fgithub.com\u002Fshankarpandala\u002Flazypredict\u002Fissues\u002F284",{"id":145,"question_zh":146,"answer_zh":147,"source_url":148},14663,"遇到 'ModuleNotFoundError: No module named xgboost' 错误该如何修复？","这是因为 xgboost 未安装在当前环境中。虽然它是可选依赖，但在使用相关功能时必须安装。解决方法如下：\n1. 直接安装 xgboost：`pip install xgboost`\n   - Windows 用户如果遇到编译错误，建议下载预编译的 wheel 文件或使用 conda 安装：`conda install -c conda-forge xgboost`\n2. 安装完成后重启 Jupyter Notebook 或 Python 内核再次尝试导入。\n3. 为避免未来出现类似问题，建议使用 `pip install lazypredict[all]` 进行安装，它会自动包含 xgboost。","https:\u002F\u002Fgithub.com\u002Fshankarpandala\u002Flazypredict\u002Fissues\u002F171",[150,155,160,165,170,175,180,185,190,195,200],{"id":151,"version":152,"summary_zh":153,"released_at":154},81550,"v0.3.0","## 变更内容\n* 修复：由 @shankarpandala 在 https:\u002F\u002Fgithub.com\u002Fshankarpandala\u002Flazypredict\u002Fpull\u002F533 中实现，使用预测概率而非类别标签来计算 ROC-AUC。\n* 新特性：由 @shankarpandala 在 https:\u002F\u002Fgithub.com\u002Fshankarpandala\u002Flazypredict\u002Fpull\u002F534 中实现，添加 K 折交叉验证和内置的 predict 函数。\n* 修复 #324：由 @shankarpandala 在 https:\u002F\u002Fgithub.com\u002Fshankarpandala\u002Flazypredict\u002Fpull\u002F535 中实现，防止自定义指标因数组长度不匹配而崩溃。\n* 向 LazyClassifier 添加精确率和召回率指标，由 @shankarpandala 在 https:\u002F\u002Fgithub.com\u002Fshankarpandala\u002Flazypredict\u002Fpull\u002F536 中实现。\n* 修复 #438：由 @shankarpandala 在 https:\u002F\u002Fgithub.com\u002Fshankarpandala\u002Flazypredict\u002Fpull\u002F537 中实现，通过 tqdm 的 disable 参数改进输出详细程度控制。\n* 修复 #455：由 @shankarpandala 在 https:\u002F\u002Fgithub.com\u002Fshankarpandala\u002Flazypredict\u002Fpull\u002F538 中实现，添加 PerpetualBooster 支持。\n* 修复 #440：由 @shankarpandala 在 https:\u002F\u002Fgithub.com\u002Fshankarpandala\u002Flazypredict\u002Fpull\u002F539 中实现，添加超时参数以停止运行缓慢的算法。\n* 修复 #436：由 @shankarpandala 在 https:\u002F\u002Fgithub.com\u002Fshankarpandala\u002Flazypredict\u002Fpull\u002F540 中实现，正确处理布尔型 DataFrame 特征。\n* 添加对 Intel Extension for Scikit-learn 的支持（#382），由 @shankarpandala 在 
https:\u002F\u002Fgithub.com\u002Fshankarpandala\u002Flazypredict\u002Fpull\u002F541 中实现。\n* 新特性：向 LazyClassifier 和 LazyRegressor 添加 categorical_encoder 参数，由 @shankarpandala 在 https:\u002F\u002Fgithub.com\u002Fshankarpandala\u002Flazypredict\u002Fpull\u002F542 中实现。\n* 重构 Supervised.py：添加类型提示、日志记录、输入验证，并改进代码组织结构，由 @shankarpandala 在 https:\u002F\u002Fgithub.com\u002Fshankarpandala\u002Flazypredict\u002Fpull\u002F547 中实现。\n* 新特性：添加 LazyForecaster 时间序列预测功能 — v0.3.0a1，由 @shankarpandala 在 https:\u002F\u002Fgithub.com\u002Fshankarpandala\u002Flazypredict\u002Fpull\u002F548 中实现。\n* 新特性：为支持 GPU 加速的模型启用 GPU 支持，由 @shankarpandala 在 https:\u002F\u002Fgithub.com\u002Fshankarpandala\u002Flazypredict\u002Fpull\u002F550 中实现。\n* 0.3.0a2，由 @shankarpandala 在 https:\u002F\u002Fgithub.com\u002Fshankarpandala\u002Flazypredict\u002Fpull\u002F551 中发布。\n* 合并 pull request #551 from shankarpandala\u002Fdev，由 @shankarpandala 在 https:\u002F\u002Fgithub.com\u002Fshankarpandala\u002Flazypredict\u002Fpull\u002F552 中完成。\n* 修复：解决所有 GitHub Actions CI 失败问题（lint、文档、conda-build），由 @shankarpandala 在 https:\u002F\u002Fgithub.com\u002Fshankarpandala\u002Flazypredict\u002Fpull\u002F553 中实现。\n* 修复：解决 0.3.0a2 版本在 PyPI 发布中的阻塞问题，由 @shankarpandala 在 https:\u002F\u002Fgithub.com\u002Fshankarpandala\u002Flazypredict\u002Fpull\u002F556 中实现。\n* Claude\u002Frelease blockers 0.3.0a2 4 i4 dp，由 @shankarpandala 在 https:\u002F\u002Fgithub.com\u002Fshankarpandala\u002Flazypredict\u002Fpull\u002F558 中实现。\n* Claude\u002Frelease blockers 0.3.0a2 4 i4 dp，由 @shankarpandala 在 https:\u002F\u002Fgithub.com\u002Fshankarpandala\u002Flazypredict\u002Fpull\u002F560 中实现。\n* Claude\u002Frelease blockers 0.3.0a2 4 i4 dp，由 @shankarpandala 在 https:\u002F\u002Fgithub.com\u002Fshankarpandala\u002Flazypredict\u002Fpull\u002F561 中实现。\n* Claude\u002Frelease blockers 0.3.0a2 4 i4 dp，由 @shankarpandala 在 https:\u002F\u002Fgithub.com\u002Fshankarpandala\u002Flazypredict\u002Fpull\u002F562 中实现。\n* Claude\u002Frelease blockers 0.3.0a2 4 i4 dp，由 
@shankarpandala 在 https:\u002F\u002Fgithub.com\u002Fshankarpandala\u002Flazypredict\u002Fpull\u002F563 中实现。\n* 添加时间序列 v","2026-03-15T17:58:52",{"id":156,"version":157,"summary_zh":158,"released_at":159},81551,"v0.3.0a5","## 变更内容\n* 由 @shankarpandala 在 https:\u002F\u002Fgithub.com\u002Fshankarpandala\u002Flazypredict\u002Fpull\u002F564 中添加了时间序列可视化和诊断模块\n\n\n**完整变更日志**: https:\u002F\u002Fgithub.com\u002Fshankarpandala\u002Flazypredict\u002Fcompare\u002Fv0.3.0a4...v0.3.0a5","2026-03-14T15:43:34",{"id":161,"version":162,"summary_zh":163,"released_at":164},81552,"v0.3.0a4","## 变更内容\n* Claude\u002F发布阻塞项 0.3.0a2 4 i4 dp 由 @shankarpandala 在 https:\u002F\u002Fgithub.com\u002Fshankarpandala\u002Flazypredict\u002Fpull\u002F563 中提交\n\n\n**完整变更日志**: https:\u002F\u002Fgithub.com\u002Fshankarpandala\u002Flazypredict\u002Fcompare\u002Fv0.3.0a3...v0.3.0a4","2026-03-10T17:24:09",{"id":166,"version":167,"summary_zh":168,"released_at":169},81553,"v0.3.0a3","## 变更内容\n* Claude\u002F发布阻塞项 0.3.0a2 4 i4 dp 由 @shankarpandala 在 https:\u002F\u002Fgithub.com\u002Fshankarpandala\u002Flazypredict\u002Fpull\u002F560 中提出\n* Claude\u002F发布阻塞项 0.3.0a2 4 i4 dp 由 @shankarpandala 在 https:\u002F\u002Fgithub.com\u002Fshankarpandala\u002Flazypredict\u002Fpull\u002F561 中提出\n* Claude\u002F发布阻塞项 0.3.0a2 4 i4 dp 由 @shankarpandala 在 https:\u002F\u002Fgithub.com\u002Fshankarpandala\u002Flazypredict\u002Fpull\u002F562 中提出\n\n\n**完整变更日志**: https:\u002F\u002Fgithub.com\u002Fshankarpandala\u002Flazypredict\u002Fcompare\u002Fv0.3.0a2...v0.3.0a3","2026-03-10T10:57:40",{"id":171,"version":172,"summary_zh":173,"released_at":174},81554,"v0.3.0a2","## 变更内容\n* 功能：支持 GPU 加速的模型现已启用 GPU 支持，由 @shankarpandala 在 https:\u002F\u002Fgithub.com\u002Fshankarpandala\u002Flazypredict\u002Fpull\u002F550 中实现。\n* 0.3.0a2 版本，由 @shankarpandala 在 https:\u002F\u002Fgithub.com\u002Fshankarpandala\u002Flazypredict\u002Fpull\u002F551 中发布。\n* 合并拉取请求 #551（来自 shankarpandala 的 dev 分支），由 @shankarpandala 在 
https:\u002F\u002Fgithub.com\u002Fshankarpandala\u002Flazypredict\u002Fpull\u002F552 中完成。\n* 修复：解决所有 GitHub Actions CI 流水线失败问题（代码风格检查、文档构建、conda 构建），由 @shankarpandala 在 https:\u002F\u002Fgithub.com\u002Fshankarpandala\u002Flazypredict\u002Fpull\u002F553 中完成。\n* 修复：解除 0.3.0a2 版本在 PyPI 发布中的阻塞问题，由 @shankarpandala 在 https:\u002F\u002Fgithub.com\u002Fshankarpandala\u002Flazypredict\u002Fpull\u002F556 中完成。\n* Claude\u002F解除 0.3.0a2 发布阻塞问题 4 i4 dp，由 @shankarpandala 在 https:\u002F\u002Fgithub.com\u002Fshankarpandala\u002Flazypredict\u002Fpull\u002F558 中完成。\n\n\n**完整变更日志**：https:\u002F\u002Fgithub.com\u002Fshankarpandala\u002Flazypredict\u002Fcompare\u002Fv0.3.0a1...v0.3.0a2","2026-03-10T09:26:10",{"id":176,"version":177,"summary_zh":178,"released_at":179},81555,"v0.3.0a1","## 新功能\n\n### 时间序列预测 — `LazyForecaster`\n只需一次调用即可基准测试 **26 种预测模型**：\n\n- **统计模型**: Naive、SeasonalNaive、SimpleExpSmoothing、Holt、HoltWinters（加法\u002F乘法）、Theta、SARIMAX、AutoARIMA\n- **机器学习模型**: LinearRegression、Ridge、Lasso、ElasticNet、KNN、SVR、DecisionTree、RandomForest、GradientBoosting、AdaBoost、Bagging、ExtraTrees、XGBoost、LightGBM\n- **深度学习模型**: LSTM、GRU（通过 PyTorch 实现）\n- **基础模型**: Google TimesFM 2.5（2 亿参数的零样本预训练 Transformer）\n\n### 特性\n- 通过自相关函数 (ACF) 自动检测季节性周期\n- 支持 SARIMAX、AutoARIMA 及机器学习模型中的外生变量\n- 使用扩展窗口进行交叉验证（TimeSeriesSplit）\n- 新增预测指标：MAPE、SMAPE、MASE\n- 新增安装选项：`pip install lazypredict[timeseries]`、`[deeplearning]`、`[foundation]`\n- 为 LazyClassifier 和 LazyRegressor 添加了 `categorical_encoder` 参数\n- 重构了 Supervised.py，增加了类型提示、日志记录和输入验证\n\n### 安装\n```bash\npip install lazypredict==0.3.0a1\n```","2026-03-09T19:30:51",{"id":181,"version":182,"summary_zh":183,"released_at":184},81556,"v0.2.16","## 变更内容\n* 由 @shankarpandala 在 https:\u002F\u002Fgithub.com\u002Fshankarpandala\u002Flazypredict\u002Fpull\u002F494 中添加了用于文档依赖的 requirements.txt 文件\n* 由 @shankarpandala 在 https:\u002F\u002Fgithub.com\u002Fshankarpandala\u002Flazypredict\u002Fpull\u002F495 中添加了 Lazy Predict 的文档结构和示例\n* 0.2.16 版本，由 @shankarpandala 在 
https:\u002F\u002Fgithub.com\u002Fshankarpandala\u002Flazypredict\u002Fpull\u002F496 中发布\n\n\n**完整变更日志**: https:\u002F\u002Fgithub.com\u002Fshankarpandala\u002Flazypredict\u002Fcompare\u002Fv0.2.15...v0.2.16","2025-04-05T19:47:18",{"id":186,"version":187,"summary_zh":188,"released_at":189},81557,"v0.2.15","## 变更内容\n* 由 @shankarpandala 在 https:\u002F\u002Fgithub.com\u002Fshankarpandala\u002Flazypredict\u002Fpull\u002F471 中更改了软件包的发布方式\n* 由 @shankarpandala 在 https:\u002F\u002Fgithub.com\u002Fshankarpandala\u002Flazypredict\u002Fpull\u002F472 中更新了发布方式\n* 由 @shankarpandala 在 https:\u002F\u002Fgithub.com\u002Fshankarpandala\u002Flazypredict\u002Fpull\u002F491 中实现了 Mlflow 集成\n* 将 setup.py 和 __init__.py 中的版本号升级至 2.13，并添加了 .bumpversion.cf… 文件；由 @shankarpandala 在 https:\u002F\u002Fgithub.com\u002Fshankarpandala\u002Flazypredict\u002Fpull\u002F492 中完成\n* 将 setup.py、__init__.py 和 .bumpversion.cfg 中的版本号升级至 2.15；由 @shankarpandala 在 https:\u002F\u002Fgithub.com\u002Fshankarpandala\u002Flazypredict\u002Fpull\u002F493 中完成\n\n\n**完整变更日志**: https:\u002F\u002Fgithub.com\u002Fshankarpandala\u002Flazypredict\u002Fcompare\u002Fv0.2.13...v0.2.15","2025-04-01T19:46:46",{"id":191,"version":192,"summary_zh":193,"released_at":194},81558,"v0.2.14-alpha.1","**完整变更日志**: https:\u002F\u002Fgithub.com\u002Fshankarpandala\u002Flazypredict\u002Fcompare\u002Fv0.2.14-alpha...v0.2.14-beta\n\n**完整变更日志**: https:\u002F\u002Fgithub.com\u002Fshankarpandala\u002Flazypredict\u002Fcompare\u002Fv0.2.14-alpha...v0.2.14-alpha.1","2024-11-02T17:52:58",{"id":196,"version":197,"summary_zh":198,"released_at":199},81559,"v0.2.14-alpha","**完整更新日志**: https:\u002F\u002Fgithub.com\u002Fshankarpandala\u002Flazypredict\u002Fcompare\u002Fv0.2.13...v0.2.14-alpha","2024-11-02T16:02:06",{"id":201,"version":202,"summary_zh":203,"released_at":204},81560,"v0.2.13","## What's Changed\r\n* Updated to version 0.2.10 to support python versions 3.9 and 3.10 by @shankarpandala in 
https:\u002F\u002Fgithub.com\u002Fshankarpandala\u002Flazypredict\u002Fpull\u002F381\r\n* Fix #441 and #442, changed sparse keyword in OneHotEncoder by @JSchoeck in https:\u002F\u002Fgithub.com\u002Fshankarpandala\u002Flazypredict\u002Fpull\u002F444\r\n* Merge pull request #381 from shankarpandala\u002Fdev by @shankarpandala in https:\u002F\u002Fgithub.com\u002Fshankarpandala\u002Flazypredict\u002Fpull\u002F458\r\n* Sync to dev by @shankarpandala in https:\u002F\u002Fgithub.com\u002Fshankarpandala\u002Flazypredict\u002Fpull\u002F459\r\n* Merge pull request #459 from shankarpandala\u002Fdev by @shankarpandala in https:\u002F\u002Fgithub.com\u002Fshankarpandala\u002Flazypredict\u002Fpull\u002F460\r\n* update build matrix to include Python 3.11, 3.12, and 3.13 by @shankarpandala in https:\u002F\u002Fgithub.com\u002Fshankarpandala\u002Flazypredict\u002Fpull\u002F461\r\n* Merge pull request #461 from shankarpandala\u002Fdev by @shankarpandala in https:\u002F\u002Fgithub.com\u002Fshankarpandala\u002Flazypredict\u002Fpull\u002F462\r\n* setup-python@v5 by @shankarpandala in https:\u002F\u002Fgithub.com\u002Fshankarpandala\u002Flazypredict\u002Fpull\u002F463\r\n* removed build.yml for because of duplication by @shankarpandala in https:\u002F\u002Fgithub.com\u002Fshankarpandala\u002Flazypredict\u002Fpull\u002F464\r\n* fix sync issues by @shankarpandala in https:\u002F\u002Fgithub.com\u002Fshankarpandala\u002Flazypredict\u002Fpull\u002F465\r\n* updated actions version to actions\u002Fupload-artifact@v3 by @shankarpandala in https:\u002F\u002Fgithub.com\u002Fshankarpandala\u002Flazypredict\u002Fpull\u002F466\r\n* updated CI workflow to include build to anaconda as well by @shankarpandala in https:\u002F\u002Fgithub.com\u002Fshankarpandala\u002Flazypredict\u002Fpull\u002F467\r\n* Master by @shankarpandala in https:\u002F\u002Fgithub.com\u002Fshankarpandala\u002Flazypredict\u002Fpull\u002F468\r\n* Merge pull request #468 from shankarpandala\u002Fmaster by @shankarpandala in 
https:\u002F\u002Fgithub.com\u002Fshankarpandala\u002Flazypredict\u002Fpull\u002F469\r\n* Merge pull request #469 from shankarpandala\u002Fdev by @shankarpandala in https:\u002F\u002Fgithub.com\u002Fshankarpandala\u002Flazypredict\u002Fpull\u002F470\r\n\r\n## New Contributors\r\n* @JSchoeck made their first contribution in https:\u002F\u002Fgithub.com\u002Fshankarpandala\u002Flazypredict\u002Fpull\u002F444\r\n\r\n**Full Changelog**: https:\u002F\u002Fgithub.com\u002Fshankarpandala\u002Flazypredict\u002Fcompare\u002Fv0.2.12...v0.2.13","2024-11-02T11:24:46"]