[{"data":1,"prerenderedAt":-1},["ShallowReactive",2],{"similar-yzhao062--combo":3,"tool-yzhao062--combo":61},[4,18,26,36,44,53],{"id":5,"name":6,"github_repo":7,"description_zh":8,"stars":9,"difficulty_score":10,"last_commit_at":11,"category_tags":12,"status":17},4358,"openclaw","openclaw\u002Fopenclaw","OpenClaw 是一款专为个人打造的本地化 AI 助手，旨在让你在自己的设备上拥有完全可控的智能伙伴。它打破了传统 AI 助手局限于特定网页或应用的束缚，能够直接接入你日常使用的各类通讯渠道，包括微信、WhatsApp、Telegram、Discord、iMessage 等数十种平台。无论你在哪个聊天软件中发送消息，OpenClaw 都能即时响应，甚至支持在 macOS、iOS 和 Android 设备上进行语音交互，并提供实时的画布渲染功能供你操控。\n\n这款工具主要解决了用户对数据隐私、响应速度以及“始终在线”体验的需求。通过将 AI 部署在本地，用户无需依赖云端服务即可享受快速、私密的智能辅助，真正实现了“你的数据，你做主”。其独特的技术亮点在于强大的网关架构，将控制平面与核心助手分离，确保跨平台通信的流畅性与扩展性。\n\nOpenClaw 非常适合希望构建个性化工作流的技术爱好者、开发者，以及注重隐私保护且不愿被单一生态绑定的普通用户。只要具备基础的终端操作能力（支持 macOS、Linux 及 Windows WSL2），即可通过简单的命令行引导完成部署。如果你渴望拥有一个懂你",349277,3,"2026-04-06T06:32:30",[13,14,15,16],"Agent","开发框架","图像","数据工具","ready",{"id":19,"name":20,"github_repo":21,"description_zh":22,"stars":23,"difficulty_score":10,"last_commit_at":24,"category_tags":25,"status":17},3808,"stable-diffusion-webui","AUTOMATIC1111\u002Fstable-diffusion-webui","stable-diffusion-webui 是一个基于 Gradio 构建的网页版操作界面，旨在让用户能够轻松地在本地运行和使用强大的 Stable Diffusion 图像生成模型。它解决了原始模型依赖命令行、操作门槛高且功能分散的痛点，将复杂的 AI 绘图流程整合进一个直观易用的图形化平台。\n\n无论是希望快速上手的普通创作者、需要精细控制画面细节的设计师，还是想要深入探索模型潜力的开发者与研究人员，都能从中获益。其核心亮点在于极高的功能丰富度：不仅支持文生图、图生图、局部重绘（Inpainting）和外绘（Outpainting）等基础模式，还独创了注意力机制调整、提示词矩阵、负向提示词以及“高清修复”等高级功能。此外，它内置了 GFPGAN 和 CodeFormer 等人脸修复工具，支持多种神经网络放大算法，并允许用户通过插件系统无限扩展能力。即使是显存有限的设备，stable-diffusion-webui 也提供了相应的优化选项，让高质量的 AI 艺术创作变得触手可及。",162132,"2026-04-05T11:01:52",[14,15,13],{"id":27,"name":28,"github_repo":29,"description_zh":30,"stars":31,"difficulty_score":32,"last_commit_at":33,"category_tags":34,"status":17},1381,"everything-claude-code","affaan-m\u002Feverything-claude-code","everything-claude-code 是一套专为 AI 编程助手（如 Claude Code、Codex、Cursor 等）打造的高性能优化系统。它不仅仅是一组配置文件，而是一个经过长期实战打磨的完整框架，旨在解决 AI 
代理在实际开发中面临的效率低下、记忆丢失、安全隐患及缺乏持续学习能力等核心痛点。\n\n通过引入技能模块化、直觉增强、记忆持久化机制以及内置的安全扫描功能，everything-claude-code 能显著提升 AI 在复杂任务中的表现，帮助开发者构建更稳定、更智能的生产级 AI 代理。其独特的“研究优先”开发理念和针对 Token 消耗的优化策略，使得模型响应更快、成本更低，同时有效防御潜在的攻击向量。\n\n这套工具特别适合软件开发者、AI 研究人员以及希望深度定制 AI 工作流的技术团队使用。无论您是在构建大型代码库，还是需要 AI 协助进行安全审计与自动化测试，everything-claude-code 都能提供强大的底层支持。作为一个曾荣获 Anthropic 黑客大奖的开源项目，它融合了多语言支持与丰富的实战钩子（hooks），让 AI 真正成长为懂上",150720,2,"2026-04-11T11:33:10",[14,13,35],"语言模型",{"id":37,"name":38,"github_repo":39,"description_zh":40,"stars":41,"difficulty_score":32,"last_commit_at":42,"category_tags":43,"status":17},2271,"ComfyUI","Comfy-Org\u002FComfyUI","ComfyUI 是一款功能强大且高度模块化的视觉 AI 引擎，专为设计和执行复杂的 Stable Diffusion 图像生成流程而打造。它摒弃了传统的代码编写模式，采用直观的节点式流程图界面，让用户通过连接不同的功能模块即可构建个性化的生成管线。\n\n这一设计巧妙解决了高级 AI 绘图工作流配置复杂、灵活性不足的痛点。用户无需具备编程背景，也能自由组合模型、调整参数并实时预览效果，轻松实现从基础文生图到多步骤高清修复等各类复杂任务。ComfyUI 拥有极佳的兼容性，不仅支持 Windows、macOS 和 Linux 全平台，还广泛适配 NVIDIA、AMD、Intel 及苹果 Silicon 等多种硬件架构，并率先支持 SDXL、Flux、SD3 等前沿模型。\n\n无论是希望深入探索算法潜力的研究人员和开发者，还是追求极致创作自由度的设计师与资深 AI 绘画爱好者，ComfyUI 都能提供强大的支持。其独特的模块化架构允许社区不断扩展新功能，使其成为当前最灵活、生态最丰富的开源扩散模型工具之一，帮助用户将创意高效转化为现实。",108322,"2026-04-10T11:39:34",[14,15,13],{"id":45,"name":46,"github_repo":47,"description_zh":48,"stars":49,"difficulty_score":32,"last_commit_at":50,"category_tags":51,"status":17},6121,"gemini-cli","google-gemini\u002Fgemini-cli","gemini-cli 是一款由谷歌推出的开源 AI 命令行工具，它将强大的 Gemini 大模型能力直接集成到用户的终端环境中。对于习惯在命令行工作的开发者而言，它提供了一条从输入提示词到获取模型响应的最短路径，无需切换窗口即可享受智能辅助。\n\n这款工具主要解决了开发过程中频繁上下文切换的痛点，让用户能在熟悉的终端界面内直接完成代码理解、生成、调试以及自动化运维任务。无论是查询大型代码库、根据草图生成应用，还是执行复杂的 Git 操作，gemini-cli 都能通过自然语言指令高效处理。\n\n它特别适合广大软件工程师、DevOps 人员及技术研究人员使用。其核心亮点包括支持高达 100 万 token 的超长上下文窗口，具备出色的逻辑推理能力；内置 Google 搜索、文件操作及 Shell 命令执行等实用工具；更独特的是，它支持 MCP（模型上下文协议），允许用户灵活扩展自定义集成，连接如图像生成等外部能力。此外，个人谷歌账号即可享受免费的额度支持，且项目基于 Apache 2.0 
协议完全开源，是提升终端工作效率的理想助手。",100752,"2026-04-10T01:20:03",[52,13,15,14],"插件",{"id":54,"name":55,"github_repo":56,"description_zh":57,"stars":58,"difficulty_score":32,"last_commit_at":59,"category_tags":60,"status":17},4721,"markitdown","microsoft\u002Fmarkitdown","MarkItDown 是一款由微软 AutoGen 团队打造的轻量级 Python 工具，专为将各类文件高效转换为 Markdown 格式而设计。它支持 PDF、Word、Excel、PPT、图片（含 OCR）、音频（含语音转录）、HTML 乃至 YouTube 链接等多种格式的解析，能够精准提取文档中的标题、列表、表格和链接等关键结构信息。\n\n在人工智能应用日益普及的今天，大语言模型（LLM）虽擅长处理文本，却难以直接读取复杂的二进制办公文档。MarkItDown 恰好解决了这一痛点，它将非结构化或半结构化的文件转化为模型“原生理解”且 Token 效率极高的 Markdown 格式，成为连接本地文件与 AI 分析 pipeline 的理想桥梁。此外，它还提供了 MCP（模型上下文协议）服务器，可无缝集成到 Claude Desktop 等 LLM 应用中。\n\n这款工具特别适合开发者、数据科学家及 AI 研究人员使用，尤其是那些需要构建文档检索增强生成（RAG）系统、进行批量文本分析或希望让 AI 助手直接“阅读”本地文件的用户。虽然生成的内容也具备一定可读性，但其核心优势在于为机器",93400,"2026-04-06T19:52:38",[52,14],{"id":62,"github_repo":63,"name":64,"description_en":65,"description_zh":66,"ai_summary_zh":66,"readme_en":67,"readme_zh":68,"quickstart_zh":69,"use_case_zh":70,"hero_image_url":71,"owner_login":72,"owner_name":73,"owner_avatar_url":74,"owner_bio":75,"owner_company":76,"owner_location":77,"owner_email":78,"owner_twitter":78,"owner_website":79,"owner_url":80,"languages":81,"stars":86,"forks":87,"last_commit_at":88,"license":89,"difficulty_score":90,"env_os":91,"env_gpu":91,"env_ram":91,"env_deps":92,"category_tags":103,"github_topics":105,"view_count":32,"oss_zip_url":78,"oss_zip_packed_at":78,"status":17,"created_at":115,"updated_at":116,"faqs":117,"releases":133},6602,"yzhao062\u002Fcombo","combo","(AAAI' 20) A Python Toolbox for Machine Learning Model Combination","combo 是一个专为机器学习模型融合设计的 Python 工具箱，旨在帮助开发者轻松整合多个模型的预测结果以提升整体性能。在数据科学竞赛和实际业务中，单一模型往往难以达到最优效果，而通过集成学习（Ensemble Learning）将不同模型的优势结合，是解决这一痛点的关键策略。combo 正是为此而生，它统一了来自 scikit-learn、XGBoost 和 LightGBM 等主流库的模型接口，广泛支持分类、聚类及异常检测等核心任务。\n\n这款工具特别适合机器学习工程师、数据科学家以及研究人员使用。无论是需要快速验证融合算法效果的参赛者，还是致力于探索前沿集成技术的学术研究者，都能从中获益。combo 的独特亮点在于其高度统一的 API 设计和丰富的算法覆盖，不仅包含了经典的堆叠（Stacking）方法，还集成了 DCS、DES、LSCP 等较新的动态选择算法。此外，底层利用 numba 和 joblib 
进行了即时编译与并行化优化，确保了在处理大规模数据时的高效运行。配合详尽的文档与交互式示例，combo 让复杂的模型组合变得简单可控，是提升模型表现力的得力助手。","combo: A Python Toolbox for Machine Learning Model Combination\n==============================================================\n\n\n**Deployment & Documentation & Stats**\n\n.. image:: https:\u002F\u002Fimg.shields.io\u002Fpypi\u002Fv\u002Fcombo.svg?color=brightgreen\n   :target: https:\u002F\u002Fpypi.org\u002Fproject\u002Fcombo\u002F\n   :alt: PyPI version\n\n\n.. image:: https:\u002F\u002Freadthedocs.org\u002Fprojects\u002Fpycombo\u002Fbadge\u002F?version=latest\n   :target: https:\u002F\u002Fpycombo.readthedocs.io\u002Fen\u002Flatest\u002F?badge=latest\n   :alt: Documentation Status\n\n\n.. image:: https:\u002F\u002Fmybinder.org\u002Fbadge_logo.svg\n   :target: https:\u002F\u002Fmybinder.org\u002Fv2\u002Fgh\u002Fyzhao062\u002Fcombo\u002Fmaster\n   :alt: Binder\n\n\n.. image:: https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Fstars\u002Fyzhao062\u002Fcombo.svg\n   :target: https:\u002F\u002Fgithub.com\u002Fyzhao062\u002Fcombo\u002Fstargazers\n   :alt: GitHub stars\n\n\n.. image:: https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Fforks\u002Fyzhao062\u002Fcombo.svg?color=blue\n   :target: https:\u002F\u002Fgithub.com\u002Fyzhao062\u002Fcombo\u002Fnetwork\n   :alt: GitHub forks\n\n\n.. image:: https:\u002F\u002Fpepy.tech\u002Fbadge\u002Fcombo\n   :target: https:\u002F\u002Fpepy.tech\u002Fproject\u002Fcombo\n   :alt: Downloads\n\n\n.. image:: https:\u002F\u002Fpepy.tech\u002Fbadge\u002Fcombo\u002Fmonth\n   :target: https:\u002F\u002Fpepy.tech\u002Fproject\u002Fcombo\n   :alt: Downloads\n\n\n----\n\n\n**Build Status & Coverage & Maintainability & License**\n\n.. image:: https:\u002F\u002Fgithub.com\u002Fyzhao062\u002Fcombo\u002Factions\u002Fworkflows\u002Ftesting.yml\u002Fbadge.svg\n   :target: https:\u002F\u002Fgithub.com\u002Fyzhao062\u002Fcombo\u002Factions\u002Fworkflows\u002Ftesting.yml\n   :alt: testing\n\n\n.. 
image:: https:\u002F\u002Fcircleci.com\u002Fgh\u002Fyzhao062\u002Fcombo.svg?style=svg\n   :target: https:\u002F\u002Fcircleci.com\u002Fgh\u002Fyzhao062\u002Fcombo\n   :alt: Circle CI\n\n\n.. image:: https:\u002F\u002Fci.appveyor.com\u002Fapi\u002Fprojects\u002Fstatus\u002Fte7uieha87305ike\u002Fbranch\u002Fmaster?svg=true\n   :target: https:\u002F\u002Fci.appveyor.com\u002Fproject\u002Fyzhao062\u002Fcombo\u002Fbranch\u002Fmaster\n   :alt: Build status\n\n\n.. image:: https:\u002F\u002Fcoveralls.io\u002Frepos\u002Fgithub\u002Fyzhao062\u002Fcombo\u002Fbadge.svg\n   :target: https:\u002F\u002Fcoveralls.io\u002Fgithub\u002Fyzhao062\u002Fcombo\n   :alt: Coverage Status\n\n\n.. image:: https:\u002F\u002Fapi.codeclimate.com\u002Fv1\u002Fbadges\u002F465ebba81e990abb357b\u002Fmaintainability\n   :target: https:\u002F\u002Fcodeclimate.com\u002Fgithub\u002Fyzhao062\u002Fcombo\u002Fmaintainability\n   :alt: Maintainability\n\n\n.. image:: https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Flicense\u002Fyzhao062\u002Fcombo.svg\n   :target: https:\u002F\u002Fgithub.com\u002Fyzhao062\u002Fcombo\u002Fblob\u002Fmaster\u002FLICENSE\n   :alt: License\n\n\n----\n\n\n**combo** is a comprehensive Python toolbox for **combining machine learning (ML) models and scores**.\n**Model combination** can be considered as a subtask of `ensemble learning \u003Chttps:\u002F\u002Fen.wikipedia.org\u002Fwiki\u002FEnsemble_learning>`_,\nand has been widely used in real-world tasks and data science competitions like Kaggle [#Bell2007Lessons]_.\n**combo** has been used\u002Fintroduced in various research works since its inception [#Raschka2020Machine]_ [#Zhao2019PyOD]_.\n\n**combo** library supports the combination of models and score from\nkey ML libraries such as `scikit-learn \u003Chttps:\u002F\u002Fscikit-learn.org\u002Fstable\u002Findex.html>`_,\n`xgboost \u003Chttps:\u002F\u002Fxgboost.ai\u002F>`_, and `LightGBM \u003Chttps:\u002F\u002Fgithub.com\u002Fmicrosoft\u002FLightGBM>`_,\nfor crucial tasks 
including classification, clustering, and anomaly detection.\nSee the figure below for some representative combination approaches.\n\n.. image:: https:\u002F\u002Fraw.githubusercontent.com\u002Fyzhao062\u002Fcombo\u002Fmaster\u002Fdocs\u002Ffigs\u002Fframework_demo.png\n   :target: https:\u002F\u002Fraw.githubusercontent.com\u002Fyzhao062\u002Fcombo\u002Fmaster\u002Fdocs\u002Ffigs\u002Fframework_demo.png\n   :alt: Combination Framework Demo\n\n\n**combo** is featured for:\n\n* **Unified APIs, detailed documentation, and interactive examples** across various algorithms.\n* **Advanced and latest models**, such as Stacking\u002FDCS\u002FDES\u002FEAC\u002FLSCP.\n* **Comprehensive coverage** of classification, clustering, anomaly detection, and raw score combination.\n* **Optimized performance with JIT and parallelization** when possible, using `numba \u003Chttps:\u002F\u002Fgithub.com\u002Fnumba\u002Fnumba>`_ and `joblib \u003Chttps:\u002F\u002Fgithub.com\u002Fjoblib\u002Fjoblib>`_.\n\n\n**API Demo**\\ :\n\n.. 
code-block:: python\n\n\n   from combo.models.classifier_stacking import Stacking\n\n   # base classifiers come from scikit-learn\n   from sklearn.tree import DecisionTreeClassifier\n   from sklearn.linear_model import LogisticRegression\n   from sklearn.neighbors import KNeighborsClassifier\n   from sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier\n\n   # initialize a group of base classifiers\n   classifiers = [DecisionTreeClassifier(), LogisticRegression(),\n                  KNeighborsClassifier(), RandomForestClassifier(),\n                  GradientBoostingClassifier()]\n\n   clf = Stacking(base_estimators=classifiers)  # initialize a Stacking model\n   clf.fit(X_train, y_train)  # fit the model\n\n   # predict on unseen data\n   y_test_labels = clf.predict(X_test)  # label prediction\n   y_test_proba = clf.predict_proba(X_test)  # probability prediction\n\n\n**Citing combo**\\ :\n\n`combo paper \u003Chttp:\u002F\u002Fwww.andrew.cmu.edu\u002Fuser\u002Fyuezhao2\u002Fpapers\u002F20-aaai-combo.pdf>`_ is published in\n`AAAI 2020 \u003Chttps:\u002F\u002Faaai.org\u002FConferences\u002FAAAI-20\u002F>`_ (demo track).\nIf you use combo in a scientific publication, we would appreciate citations to the following paper::\n\n    @inproceedings{zhao2020combo,\n      title={Combining Machine Learning Models and Scores using combo library},\n      author={Zhao, Yue and Wang, Xuejian and Cheng, Cheng and Ding, Xueying},\n      booktitle={Thirty-Fourth AAAI Conference on Artificial Intelligence},\n      month = {Feb},\n      year={2020},\n      address = {New York, USA}\n    }\n\nor::\n\n    Zhao, Y., Wang, X., Cheng, C. and Ding, X., 2020. Combining Machine Learning Models and Scores using combo library. 
Thirty-Fourth AAAI Conference on Artificial Intelligence.\n\n\n**Key Links and Resources**\\ :\n\n\n* `awesome-ensemble-learning \u003Chttps:\u002F\u002Fgithub.com\u002Fyzhao062\u002Fawesome-ensemble-learning>`_ (ensemble learning related books, papers, and more)\n* `View the latest code on GitHub \u003Chttps:\u002F\u002Fgithub.com\u002Fyzhao062\u002Fcombo>`_\n* `View the documentation & API \u003Chttps:\u002F\u002Fpycombo.readthedocs.io\u002F>`_\n* `View all examples \u003Chttps:\u002F\u002Fgithub.com\u002Fyzhao062\u002Fcombo\u002Ftree\u002Fmaster\u002Fexamples>`_\n* `View the demo video for AAAI 2020 \u003Chttps:\u002F\u002Fyoutu.be\u002FPaSJ49Ij7w4>`_\n* `Execute Interactive Jupyter Notebooks \u003Chttps:\u002F\u002Fmybinder.org\u002Fv2\u002Fgh\u002Fyzhao062\u002Fcombo\u002Fmaster>`_\n\n\n**Table of Contents**\\ :\n\n\n* `Installation \u003C#installation>`_\n* `API Cheatsheet & Reference \u003C#api-cheatsheet--reference>`_\n* `Implemented Algorithms \u003C#implemented-algorithms>`_\n* `Example 1: Classifier Combination with Stacking\u002FDCS\u002FDES \u003C#example-of-stackingdcsdes>`_\n* `Example 2: Simple Classifier Combination \u003C#example-of-classifier-combination>`_\n* `Example 3: Clustering Combination \u003C#example-of-clustering-combination>`_\n* `Example 4: Outlier Detector Combination \u003C#example-of-outlier-detector-combination>`_\n* `Development Status \u003C#development-status>`_\n* `Inclusion Criteria \u003C#inclusion-criteria>`_\n\n\n----\n\n\nInstallation\n^^^^^^^^^^^^\n\nIt is recommended to use **pip** for installation. Please make sure\n**the latest version** is installed, as combo is updated frequently:\n\n.. code-block:: bash\n\n   pip install combo            # normal install\n   pip install --upgrade combo  # or update if needed\n   pip install --pre combo      # or include pre-release version for new features\n\nAlternatively, you can clone the repository and install from source:\n\n.. 
code-block:: bash\n\n   git clone https:\u002F\u002Fgithub.com\u002Fyzhao062\u002Fcombo.git\n   cd combo\n   pip install .\n\n\n**Required Dependencies**\\ :\n\n\n* Python 3.5, 3.6, or 3.7\n* joblib\n* matplotlib (**optional for running examples**)\n* numpy>=1.13\n* numba>=0.35\n* pyod\n* scipy>=0.19.1\n* scikit_learn>=0.20\n\n\n**Note on Python 2**\\ :\nThe maintenance of Python 2.7 will be stopped by January 1, 2020 (see `official announcement \u003Chttps:\u002F\u002Fgithub.com\u002Fpython\u002Fdevguide\u002Fpull\u002F344>`_).\nTo be consistent with the Python change and combo's dependent libraries, e.g., scikit-learn,\n**combo only supports Python 3.5+** and we encourage you to use\nPython 3.5 or newer for the latest functions and bug fixes. More information can\nbe found at `Moving to require Python 3 \u003Chttps:\u002F\u002Fpython3statement.org\u002F>`_.\n\n\n----\n\n\nAPI Cheatsheet & Reference\n^^^^^^^^^^^^^^^^^^^^^^^^^^\n\nFull API Reference: (https:\u002F\u002Fpycombo.readthedocs.io\u002Fen\u002Flatest\u002Fapi.html).\nThe following APIs are consistent for most of the models\n(API Cheatsheet: https:\u002F\u002Fpycombo.readthedocs.io\u002Fen\u002Flatest\u002Fapi_cc.html).\n\n* **fit(X, y)**\\ : Fit estimator. y is optional for unsupervised methods.\n* **predict(X)**\\ : Predict on a particular sample once the estimator is fitted.\n* **predict_proba(X)**\\ : Predict the probability of a sample belonging to each class once the estimator is fitted.\n* **fit_predict(X, y)**\\ : Fit estimator and predict on X. 
y is optional for unsupervised methods.\n\nFor raw score combination (after the score matrix is generated),\nuse individual methods from\n`\"score_comb.py\" \u003Chttps:\u002F\u002Fgithub.com\u002Fyzhao062\u002Fcombo\u002Fblob\u002Fmaster\u002Fcombo\u002Fmodels\u002Fscore_comb.py>`_ directly.\nRaw score combination API: (https:\u002F\u002Fpycombo.readthedocs.io\u002Fen\u002Flatest\u002Fapi.html#score-combination).\n\n\n----\n\n\nImplemented Algorithms\n^^^^^^^^^^^^^^^^^^^^^^\n\n**combo** groups combination frameworks by tasks. General purpose methods are\nfundamental ones which can be applied to various tasks.\n\n===================  ======================================================================================================  =====  ===========================================\nTask                 Algorithm                                                                                               Year   Ref\n===================  ======================================================================================================  =====  ===========================================\nGeneral Purpose      Average & Weighted Average: average across all scores\u002Fprediction results, maybe with weights            N\u002FA    [#Zhou2012Ensemble]_\nGeneral Purpose      Maximization: simple combination by taking the maximum scores                                           N\u002FA    [#Zhou2012Ensemble]_\nGeneral Purpose      Median: take the median value across all scores\u002Fprediction results                                      N\u002FA    [#Zhou2012Ensemble]_\nGeneral Purpose      Majority Vote & Weighted Majority Vote                                                                  N\u002FA    [#Zhou2012Ensemble]_\nClassification       SimpleClassifierAggregator: combining classifiers by general purpose methods above                      N\u002FA    N\u002FA\nClassification       DCS: Dynamic Classifier Selection (Combination of multiple classifiers 
using local accuracy estimates)  1997   [#Woods1997Combination]_\nClassification       DES: Dynamic Ensemble Selection (From dynamic classifier selection to dynamic ensemble selection)       2008   [#Ko2008From]_\nClassification       Stacking (meta ensembling): use a meta learner to learn the base classifier results                     N\u002FA    [#Gorman2016Kaggle]_\nClustering           Clusterer Ensemble: combine the results of multiple clustering results by relabeling                    2006   [#Zhou2006Clusterer]_\nClustering           Combining multiple clusterings using evidence accumulation (EAC)                                        2002   [#Fred2005Combining]_\nAnomaly Detection    SimpleDetectorCombination: combining outlier detectors by general purpose methods above                 N\u002FA    [#Aggarwal2017Outlier]_\nAnomaly Detection    Average of Maximum (AOM): divide base detectors into subgroups to take the maximum, and then average    2015   [#Aggarwal2015Theoretical]_\nAnomaly Detection    Maximum of Average (MOA): divide base detectors into subgroups to take the average, and then maximize   2015   [#Aggarwal2015Theoretical]_\nAnomaly Detection    XGBOD: a semi-supervised combination framework for outlier detection                                    2018   [#Zhao2018XGBOD]_\nAnomaly Detection    Locally Selective Combination (LSCP)                                                                    2019   [#Zhao2019LSCP]_\n===================  ======================================================================================================  =====  ===========================================\n\n\n**The comparison among selected implemented models** is made available below\n(\\ `Figure \u003Chttps:\u002F\u002Fraw.githubusercontent.com\u002Fyzhao062\u002Fcombo\u002Fmaster\u002Fexamples\u002Fcompare_selected_classifiers.png>`_\\ ,\n`compare_selected_classifiers.py 
\u003Chttps:\u002F\u002Fgithub.com\u002Fyzhao062\u002Fcombo\u002Fblob\u002Fmaster\u002Fexamples\u002Fcompare_selected_classifiers.py>`_\, `Interactive Jupyter Notebooks \u003Chttps:\u002F\u002Fmybinder.org\u002Fv2\u002Fgh\u002Fyzhao062\u002Fcombo\u002Fmaster>`_\ ).\nFor Jupyter Notebooks, please navigate to **\"\u002Fnotebooks\u002Fcompare_selected_classifiers.ipynb\"**.\n\n\n.. image:: https:\u002F\u002Fraw.githubusercontent.com\u002Fyzhao062\u002Fcombo\u002Fmaster\u002Fexamples\u002Fcompare_selected_classifiers.png\n   :target: https:\u002F\u002Fraw.githubusercontent.com\u002Fyzhao062\u002Fcombo\u002Fmaster\u002Fexamples\u002Fcompare_selected_classifiers.png\n   :alt: Comparison of Selected Models\n\n\n----\n\n\n**All implemented models** are associated with examples; check\n`\"combo examples\" \u003Chttps:\u002F\u002Fgithub.com\u002Fyzhao062\u002Fcombo\u002Fblob\u002Fmaster\u002Fexamples>`_\nfor more information.\n\n\nExample of Stacking\u002FDCS\u002FDES\n^^^^^^^^^^^^^^^^^^^^^^^^^^^\n\n\n`\"examples\u002Fclassifier_stacking_example.py\" \u003Chttps:\u002F\u002Fgithub.com\u002Fyzhao062\u002Fcombo\u002Fblob\u002Fmaster\u002Fexamples\u002Fclassifier_stacking_example.py>`_\ndemonstrates the basic API of stacking (meta ensembling). `\"examples\u002Fclassifier_dcs_la_example.py\" \u003Chttps:\u002F\u002Fgithub.com\u002Fyzhao062\u002Fcombo\u002Fblob\u002Fmaster\u002Fexamples\u002Fclassifier_dcs_la_example.py>`_\ndemonstrates the basic API of Dynamic Classifier Selection by Local Accuracy. `\"examples\u002Fclassifier_des_la_example.py\" \u003Chttps:\u002F\u002Fgithub.com\u002Fyzhao062\u002Fcombo\u002Fblob\u002Fmaster\u002Fexamples\u002Fclassifier_des_la_example.py>`_\ndemonstrates the basic API of Dynamic Ensemble Selection by Local Accuracy.\n\nNote that **the basic API is consistent across all these models**.\n\n\n#. Initialize a group of classifiers as base estimators\n\n   .. 
code-block:: python\n\n\n      # initialize a group of classifiers\n      classifiers = [DecisionTreeClassifier(random_state=random_state),\n                     LogisticRegression(random_state=random_state),\n                     KNeighborsClassifier(),\n                     RandomForestClassifier(random_state=random_state),\n                     GradientBoostingClassifier(random_state=random_state)]\n\n\n#. Initialize, fit, predict, and evaluate with Stacking\n\n   .. code-block:: python\n\n\n      from combo.models.classifier_stacking import Stacking\n\n      clf = Stacking(base_estimators=classifiers, n_folds=4, shuffle_data=False,\n                     keep_original=True, use_proba=False, random_state=random_state)\n\n      clf.fit(X_train, y_train)\n      y_test_predict = clf.predict(X_test)\n      evaluate_print('Stacking | ', y_test, y_test_predict)\n\n\n#. See a sample output of classifier_stacking_example.py\n\n   .. code-block:: bash\n\n\n      Decision Tree        | Accuracy:0.9386, ROC:0.9383, F1:0.9521\n      Logistic Regression  | Accuracy:0.9649, ROC:0.9615, F1:0.973\n      K Neighbors          | Accuracy:0.9561, ROC:0.9519, F1:0.9662\n      Gradient Boosting    | Accuracy:0.9605, ROC:0.9524, F1:0.9699\n      Random Forest        | Accuracy:0.9605, ROC:0.961, F1:0.9693\n\n      Stacking             | Accuracy:0.9868, ROC:0.9841, F1:0.9899\n\n\n----\n\n\nExample of Classifier Combination\n^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n\n\n`\"examples\u002Fclassifier_comb_example.py\" \u003Chttps:\u002F\u002Fgithub.com\u002Fyzhao062\u002Fcombo\u002Fblob\u002Fmaster\u002Fexamples\u002Fclassifier_comb_example.py>`_\ndemonstrates the basic API of predicting with multiple classifiers. **Note that the API is consistent\u002Fsimilar across all other algorithms**.\n\n#. Initialize a group of classifiers as base estimators\n\n   .. 
code-block:: python\n\n\n      # initialize a group of classifiers\n      classifiers = [DecisionTreeClassifier(random_state=random_state),\n                     LogisticRegression(random_state=random_state),\n                     KNeighborsClassifier(),\n                     RandomForestClassifier(random_state=random_state),\n                     GradientBoostingClassifier(random_state=random_state)]\n\n\n#. Initialize, fit, predict, and evaluate with a simple aggregator (average)\n\n   .. code-block:: python\n\n\n      from combo.models.classifier_comb import SimpleClassifierAggregator\n\n      clf = SimpleClassifierAggregator(classifiers, method='average')\n      clf.fit(X_train, y_train)\n      y_test_predicted = clf.predict(X_test)\n      evaluate_print('Combination by avg   |', y_test, y_test_predicted)\n\n\n\n#. See a sample output of classifier_comb_example.py\n\n   .. code-block:: bash\n\n\n      Decision Tree        | Accuracy:0.9386, ROC:0.9383, F1:0.9521\n      Logistic Regression  | Accuracy:0.9649, ROC:0.9615, F1:0.973\n      K Neighbors          | Accuracy:0.9561, ROC:0.9519, F1:0.9662\n      Gradient Boosting    | Accuracy:0.9605, ROC:0.9524, F1:0.9699\n      Random Forest        | Accuracy:0.9605, ROC:0.961, F1:0.9693\n\n      Combination by avg   | Accuracy:0.9693, ROC:0.9677, F1:0.9763\n      Combination by w_avg | Accuracy:0.9781, ROC:0.9716, F1:0.9833\n      Combination by max   | Accuracy:0.9518, ROC:0.9312, F1:0.9642\n      Combination by w_vote| Accuracy:0.9649, ROC:0.9644, F1:0.9728\n      Combination by median| Accuracy:0.9693, ROC:0.9677, F1:0.9763\n\n\n----\n\n\nExample of Clustering Combination\n^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n\n\n`\"examples\u002Fcluster_comb_example.py\" \u003Chttps:\u002F\u002Fgithub.com\u002Fyzhao062\u002Fcombo\u002Fblob\u002Fmaster\u002Fexamples\u002Fcluster_comb_example.py>`_\ndemonstrates the basic API of combining multiple base clustering estimators. 
`\"examples\u002Fcluster_eac_example.py\" \u003Chttps:\u002F\u002Fgithub.com\u002Fyzhao062\u002Fcombo\u002Fblob\u002Fmaster\u002Fexamples\u002Fcluster_eac_example.py>`_\ndemonstrates the basic API of Combining multiple clusterings using evidence accumulation (EAC).\n\n#. Initialize a group of clustering methods as base estimators\n\n   .. code-block:: python\n\n\n      # Initialize a set of estimators\n      estimators = [KMeans(n_clusters=n_clusters),\n                    MiniBatchKMeans(n_clusters=n_clusters),\n                    AgglomerativeClustering(n_clusters=n_clusters)]\n\n\n#. Initialize a Clusterer Ensemble class and fit the model\n\n   .. code-block:: python\n\n\n      from combo.models.cluster_comb import ClustererEnsemble\n      # combine by Clusterer Ensemble\n      clf = ClustererEnsemble(estimators, n_clusters=n_clusters)\n      clf.fit(X)\n\n\n#. Get the aligned results\n\n   .. code-block:: python\n\n\n      # generate the labels on X\n      aligned_labels = clf.aligned_labels_\n      predicted_labels = clf.labels_\n\n\n\nExample of Outlier Detector Combination\n^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n\n\n`\"examples\u002Fdetector_comb_example.py\" \u003Chttps:\u002F\u002Fgithub.com\u002Fyzhao062\u002Fcombo\u002Fblob\u002Fmaster\u002Fexamples\u002Fdetector_comb_example.py>`_\ndemonstrates the basic API of combining multiple base outlier detectors.\n\n#. Initialize a group of outlier detection methods as base estimators\n\n   .. code-block:: python\n\n\n      # Initialize a set of estimators\n      detectors = [KNN(), LOF(), OCSVM()]\n\n\n#. Initialize a simple averaging aggregator, fit the model, and make\n   the prediction.\n\n   .. 
code-block:: python\n\n\n      from combo.models.detector_comb import SimpleDetectorAggregator\n      clf = SimpleDetectorAggregator(base_estimators=detectors)\n      clf_name = 'Aggregation by Averaging'\n      clf.fit(X_train)\n\n      y_train_pred = clf.labels_  # binary labels (0: inliers, 1: outliers)\n      y_train_scores = clf.decision_scores_  # raw outlier scores\n\n      # get the prediction on the test data\n      y_test_pred = clf.predict(X_test)  # outlier labels (0 or 1)\n      y_test_scores = clf.decision_function(X_test)  # outlier scores\n\n\n#. Evaluate the prediction using ROC and Precision @ Rank n.\n\n   .. code-block:: python\n\n      # evaluate and print the results\n      print(\"\\nOn Training Data:\")\n      evaluate_print(clf_name, y_train, y_train_scores)\n      print(\"\\nOn Test Data:\")\n      evaluate_print(clf_name, y_test, y_test_scores)\n\n#. See sample outputs on both training and test data.\n\n   .. code-block:: bash\n\n      On Training Data:\n      Aggregation by Averaging ROC:0.9994, precision @ rank n:0.95\n\n      On Test Data:\n      Aggregation by Averaging ROC:1.0, precision @ rank n:1.0\n\n\n----\n\n\nDevelopment Status\n^^^^^^^^^^^^^^^^^^\n\n**combo** is currently **under development** as of Feb 2020. A concrete plan has\nbeen laid out and will be implemented in the next few months.\n\nSimilar to other libraries built by us, e.g., Python Outlier Detection Toolbox\n(`pyod \u003Chttps:\u002F\u002Fgithub.com\u002Fyzhao062\u002Fpyod>`_),\n**combo** is also targeted to be published in *Journal of Machine Learning Research (JMLR)*,\n`open-source software track \u003Chttp:\u002F\u002Fwww.jmlr.org\u002Fmloss\u002F>`_. A demo paper has been presented at\n*AAAI 2020* as a progress update.\n\n**Watch & Star** to get the latest update! 
Also feel free to send me an email (zhaoy@cmu.edu)\nfor suggestions and ideas.\n\n\n----\n\n\nInclusion Criteria\n^^^^^^^^^^^^^^^^^^\n\nSimilar to scikit-learn, we mainly consider well-established algorithms for inclusion.\nA rule of thumb is at least two years since publication, 50+ citations, and demonstrated usefulness.\n\nHowever, we encourage the author(s) of newly proposed models to share and add their implementations to combo,\nboosting ML accessibility and reproducibility.\nThis exception only applies if the author(s) can commit to maintaining the model for at least a two-year period.\n\n\n----\n\n\nReference\n^^^^^^^^^\n\n.. [#Aggarwal2015Theoretical] Aggarwal, C.C. and Sathe, S., 2015. Theoretical foundations and algorithms for outlier ensembles. *ACM SIGKDD Explorations Newsletter*, 17(1), pp.24-47.\n\n.. [#Aggarwal2017Outlier] Aggarwal, C.C. and Sathe, S., 2017. Outlier ensembles: An introduction. Springer.\n\n.. [#Bell2007Lessons] Bell, R.M. and Koren, Y., 2007. Lessons from the Netflix prize challenge. *SIGKDD Explorations*, 9(2), pp.75-79.\n\n.. [#Fred2005Combining] Fred, A.L.N. and Jain, A.K., 2005. Combining multiple clusterings using evidence accumulation. *IEEE Transactions on Pattern Analysis and Machine Intelligence*, 27(6), pp.835-850. https:\u002F\u002Fdoi.org\u002F10.1109\u002FTPAMI.2005.113\n\n.. [#Gorman2016Kaggle] Gorman, B., 2016. A Kaggler's Guide to Model Stacking in Practice. The Official Blog of Kaggle.com. Available at: http:\u002F\u002Fblog.kaggle.com\u002F2016\u002F12\u002F27\u002Fa-kagglers-guide-to-model-stacking-in-practice [Accessed 26 Jul. 2019].\n\n.. [#Ko2008From] Ko, A.H., Sabourin, R. and Britto Jr, A.S., 2008. From dynamic classifier selection to dynamic ensemble selection. *Pattern Recognition*, 41(5), pp.1718-1731.\n\n.. [#Raschka2020Machine] Raschka, S., Patterson, J. and Nolet, C., 2020. Machine Learning in Python: Main developments and technology trends in data science, machine learning, and artificial intelligence. 
arXiv preprint arXiv:2002.04803.\n\n.. [#Woods1997Combination] Woods, K., Kegelmeyer, W.P. and Bowyer, K., 1997. Combination of multiple classifiers using local accuracy estimates. *IEEE transactions on pattern analysis and machine intelligence*, 19(4), pp.405-410.\n\n.. [#Zhao2018XGBOD] Zhao, Y. and Hryniewicki, M.K. XGBOD: Improving Supervised Outlier Detection with Unsupervised Representation Learning. *IEEE International Joint Conference on Neural Networks*, 2018.\n\n.. [#Zhao2019LSCP] Zhao, Y., Nasrullah, Z., Hryniewicki, M.K. and Li, Z., 2019, May. LSCP: Locally selective combination in parallel outlier ensembles. In *Proceedings of the 2019 SIAM International Conference on Data Mining (SDM)*, pp. 585-593. Society for Industrial and Applied Mathematics.\n\n.. [#Zhao2019PyOD] Zhao, Y., Nasrullah, Z. and Li, Z., 2019. PyOD: A Python Toolbox for Scalable Outlier Detection. *Journal of Machine Learning Research*, 20, pp.1-7.\n\n.. [#Zhou2006Clusterer] Zhou, Z.H. and Tang, W., 2006. Clusterer ensemble. *Knowledge-Based Systems*, 19(1), pp.77-83.\n\n.. [#Zhou2012Ensemble] Zhou, Z.H., 2012. Ensemble methods: foundations and algorithms. Chapman and Hall\u002FCRC.","combo：用于机器学习模型组合的 Python 工具箱\n==============================================================\n\n\n**部署、文档与统计**\n\n.. image:: https:\u002F\u002Fimg.shields.io\u002Fpypi\u002Fv\u002Fcombo.svg?color=brightgreen\n   :target: https:\u002F\u002Fpypi.org\u002Fproject\u002Fcombo\u002F\n   :alt: PyPI 版本\n\n\n.. image:: https:\u002F\u002Freadthedocs.org\u002Fprojects\u002Fpycombo\u002Fbadge\u002F?version=latest\n   :target: https:\u002F\u002Fpycombo.readthedocs.io\u002Fen\u002Flatest\u002F?badge=latest\n   :alt: 文档状态\n\n\n.. image:: https:\u002F\u002Fmybinder.org\u002Fbadge_logo.svg\n   :target: https:\u002F\u002Fmybinder.org\u002Fv2\u002Fgh\u002Fyzhao062\u002Fcombo\u002Fmaster\n   :alt: Binder\n\n\n.. 
image:: https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Fstars\u002Fyzhao062\u002Fcombo.svg\n   :target: https:\u002F\u002Fgithub.com\u002Fyzhao062\u002Fcombo\u002Fstargazers\n   :alt: GitHub 星标\n\n\n.. image:: https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Fforks\u002Fyzhao062\u002Fcombo.svg?color=blue\n   :target: https:\u002F\u002Fgithub.com\u002Fyzhao062\u002Fcombo\u002Fnetwork\n   :alt: GitHub 分支\n\n\n.. image:: https:\u002F\u002Fpepy.tech\u002Fbadge\u002Fcombo\n   :target: https:\u002F\u002Fpepy.tech\u002Fproject\u002Fcombo\n   :alt: 下载量\n\n\n.. image:: https:\u002F\u002Fpepy.tech\u002Fbadge\u002Fcombo\u002Fmonth\n   :target: https:\u002F\u002Fpepy.tech\u002Fproject\u002Fcombo\n   :alt: 月度下载量\n\n\n----\n\n\n**构建状态、覆盖率、可维护性与许可证**\n\n.. image:: https:\u002F\u002Fgithub.com\u002Fyzhao062\u002Fcombo\u002Factions\u002Fworkflows\u002Ftesting.yml\u002Fbadge.svg\n   :target: https:\u002F\u002Fgithub.com\u002Fyzhao062\u002Fcombo\u002Factions\u002Fworkflows\u002Ftesting.yml\n   :alt: 测试\n\n\n.. image:: https:\u002F\u002Fcircleci.com\u002Fgh\u002Fyzhao062\u002Fcombo.svg?style=svg\n   :target: https:\u002F\u002Fcircleci.com\u002Fgh\u002Fyzhao062\u002Fcombo\n   :alt: Circle CI\n\n\n.. image:: https:\u002F\u002Fci.appveyor.com\u002Fapi\u002Fprojects\u002Fstatus\u002Fte7uieha87305ike\u002Fbranch\u002Fmaster?svg=true\n   :target: https:\u002F\u002Fci.appveyor.com\u002Fproject\u002Fyzhao062\u002Fcombo\u002Fbranch\u002Fmaster\n   :alt: 构建状态\n\n\n.. image:: https:\u002F\u002Fcoveralls.io\u002Frepos\u002Fgithub\u002Fyzhao062\u002Fcombo\u002Fbadge.svg\n   :target: https:\u002F\u002Fcoveralls.io\u002Fgithub\u002Fyzhao062\u002Fcombo\n   :alt: 覆盖率状态\n\n\n.. image:: https:\u002F\u002Fapi.codeclimate.com\u002Fv1\u002Fbadges\u002F465ebba81e990abb357b\u002Fmaintainability\n   :target: https:\u002F\u002Fcodeclimate.com\u002Fgithub\u002Fyzhao062\u002Fcombo\u002Fmaintainability\n   :alt: 可维护性\n\n\n.. 
image:: https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Flicense\u002Fyzhao062\u002Fcombo.svg\n   :target: https:\u002F\u002Fgithub.com\u002Fyzhao062\u002Fcombo\u002Fblob\u002Fmaster\u002FLICENSE\n   :alt: 许可证\n\n\n----\n\n\n**combo** 是一个全面的 Python 工具箱，用于 **组合机器学习（ML）模型和得分**。\n**模型组合**可以被视为 `集成学习 \u003Chttps:\u002F\u002Fen.wikipedia.org\u002Fwiki\u002FEnsemble_learning>`_ 的一个子任务，\n并且已广泛应用于实际任务和 Kaggle 等数据科学竞赛中 [#Bell2007Lessons]_。\n自推出以来，**combo** 已被用于或介绍于多项研究工作中 [#Raschka2020Machine]_ [#Zhao2019PyOD]_。\n\n**combo** 库支持来自关键 ML 库（如 `scikit-learn \u003Chttps:\u002F\u002Fscikit-learn.org\u002Fstable\u002Findex.html>`_、\n`xgboost \u003Chttps:\u002F\u002Fxgboost.ai\u002F>`_ 和 `LightGBM \u003Chttps:\u002F\u002Fgithub.com\u002Fmicrosoft\u002FLightGBM>`_）的模型和得分组合，\n适用于分类、聚类、异常检测等关键任务。下图展示了一些具有代表性的组合方法。\n\n.. image:: https:\u002F\u002Fraw.githubusercontent.com\u002Fyzhao062\u002Fcombo\u002Fmaster\u002Fdocs\u002Ffigs\u002Fframework_demo.png\n   :target: https:\u002F\u002Fraw.githubusercontent.com\u002Fyzhao062\u002Fcombo\u002Fmaster\u002Fdocs\u002Ffigs\u002Fframework_demo.png\n   :alt: 组合框架演示\n\n\n**combo** 的特点包括：\n\n* **统一的 API、详尽的文档和交互式示例**，覆盖多种算法。\n* **先进且最新的模型**，如 Stacking\u002FDCS\u002FDES\u002FEAC\u002FLSCP。\n* **全面的覆盖范围**，涵盖分类、聚类、异常检测及原始得分。\n* **在可能的情况下，通过 JIT 和并行化优化性能**，使用 `numba \u003Chttps:\u002F\u002Fgithub.com\u002Fnumba\u002Fnumba>`_ 和 `joblib \u003Chttps:\u002F\u002Fgithub.com\u002Fjoblib\u002Fjoblib>`_。\n\n\n**API 演示**\\ :\n\n.. 
code-block:: python\n\n\n   from sklearn.tree import DecisionTreeClassifier\n   from sklearn.linear_model import LogisticRegression\n   from sklearn.neighbors import KNeighborsClassifier\n   from sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier\n\n   from combo.models.classifier_stacking import Stacking\n\n   # 初始化一组基础分类器\n   classifiers = [DecisionTreeClassifier(), LogisticRegression(),\n                  KNeighborsClassifier(), RandomForestClassifier(),\n                  GradientBoostingClassifier()]\n\n   clf = Stacking(base_estimators=classifiers)  # 初始化 Stacking 模型\n   clf.fit(X_train, y_train)  # 拟合模型\n\n   # 对未见数据进行预测\n   y_test_labels = clf.predict(X_test)  # 标签预测\n   y_test_proba = clf.predict_proba(X_test)  # 概率预测\n\n\n**引用 combo**\\ :\n\n`combo 论文 \u003Chttp:\u002F\u002Fwww.andrew.cmu.edu\u002Fuser\u002Fyuezhao2\u002Fpapers\u002F20-aaai-combo.pdf>`_ 已发表于\n`AAAI 2020 \u003Chttps:\u002F\u002Faaai.org\u002FConferences\u002FAAAI-20\u002F>`_（演示环节）。\n如果您在科学出版物中使用 combo，我们非常感谢您引用以下论文::\n\n    @inproceedings{zhao2020combo,\n      title={Combining Machine Learning Models and Scores using combo library},\n      author={Zhao, Yue and Wang, Xuejian and Cheng, Cheng and Ding, Xueying},\n      booktitle={Thirty-Fourth AAAI Conference on Artificial Intelligence},\n      month = {Feb},\n      year={2020},\n      address = {New York, USA}\n    }\n\n或者::\n\n    Zhao, Y., Wang, X., Cheng, C. and Ding, X., 2020. Combining Machine Learning Models and Scores using combo library. 
Thirty-Fourth AAAI Conference on Artificial Intelligence.\n\n\n**重要链接与资源**\\ :\n\n\n* `awesome-ensemble-learning \u003Chttps:\u002F\u002Fgithub.com\u002Fyzhao062\u002Fawesome-ensemble-learning>`_（集成学习相关书籍、论文等）\n* `在 Github 上查看最新代码 \u003Chttps:\u002F\u002Fgithub.com\u002Fyzhao062\u002Fcombo>`_\n* `查看文档与 API \u003Chttps:\u002F\u002Fpycombo.readthedocs.io\u002F>`_\n* `查看所有示例 \u003Chttps:\u002F\u002Fgithub.com\u002Fyzhao062\u002Fcombo\u002Ftree\u002Fmaster\u002Fexamples>`_\n* `观看 AAAI 2020 的演示视频 \u003Chttps:\u002F\u002Fyoutu.be\u002FPaSJ49Ij7w4>`_\n* `运行交互式 Jupyter 笔记本 \u003Chttps:\u002F\u002Fmybinder.org\u002Fv2\u002Fgh\u002Fyzhao062\u002Fcombo\u002Fmaster>`_\n\n\n**目录**\\ :\n\n\n* `安装 \u003C#installation>`_\n* `API 备忘录与参考 \u003C#api-cheatsheet--reference>`_\n* `已实现的算法 \u003C#implemented-algorithms>`_\n* `示例 1：使用 Stacking\u002FDCS\u002FDES 进行分类器组合 \u003C#example-of-stackingdcsdes>`_\n* `示例 2：简单的分类器组合 \u003C#example-of-classifier-combination>`_\n* `示例 3：聚类组合 \u003C#example-of-clustering-combination>`_\n* `示例 4：异常检测器组合 \u003C#example-of-outlier-detector-combination>`_\n* `开发状态 \u003C#development-status>`_\n* `纳入标准 \u003C#inclusion-criteria>`_\n\n\n----\n\n\n安装\n^^^^^^^^^^^^\n\n建议使用 **pip** 进行安装。请确保安装的是 **最新版本**，因为 combo 更新频繁：\n\n.. code-block:: bash\n\n   pip install combo            # 普通安装\n   pip install --upgrade combo  # 或根据需要更新\n   pip install --pre combo      # 或包含预发布版本以获取新功能\n\n此外，您也可以克隆并运行 setup.py 文件：\n\n.. 
code-block:: bash\n\n   git clone https:\u002F\u002Fgithub.com\u002Fyzhao062\u002Fcombo.git\n   cd combo\n   pip install .\n\n\n**所需依赖**\\ :\n\n* Python 3.5、3.6 或 3.7\n* joblib\n* matplotlib（运行示例时**可选**）\n* numpy>=1.13\n* numba>=0.35\n* pyod\n* scipy>=0.19.1\n* scikit_learn>=0.20\n\n\n**关于 Python 2 的说明**：\nPython 2.7 的维护将于 2020 年 1 月 1 日停止（参见 `官方公告 \u003Chttps:\u002F\u002Fgithub.com\u002Fpython\u002Fdevguide\u002Fpull\u002F344>`_）。\n为与 Python 社区的变更以及 combo 所依赖的库（如 scikit-learn）保持一致，\n**combo 仅支持 Python 3.5 及以上版本**，我们建议您使用 Python 3.5 或更高版本以获得最新的功能和错误修复。更多信息请参阅 `迁移到要求 Python 3 \u003Chttps:\u002F\u002Fpython3statement.org\u002F>`_。\n\n\n----\n\n\nAPI 备忘录与参考\n^^^^^^^^^^^^^^^^^^\n\n完整 API 参考：(https:\u002F\u002Fpycombo.readthedocs.io\u002Fen\u002Flatest\u002Fapi.html)。\n以下 API 在大多数模型中保持一致（API 备忘录：https:\u002F\u002Fpycombo.readthedocs.io\u002Fen\u002Flatest\u002Fapi_cc.html)。\n\n* **fit(X, y)**：拟合估计器。对于无监督方法，y 是可选的。\n* **predict(X)**：在估计器拟合完成后，对特定样本进行预测。\n* **predict_proba(X)**：在估计器拟合完成后，预测样本属于每个类别的概率。\n* **fit_predict(X, y)**：拟合估计器并在 X 上进行预测。对于无监督方法，y 是可选的。\n\n对于原始分数组合（在生成分数矩阵之后），\n可以直接使用 `\"score_comb.py\" \u003Chttps:\u002F\u002Fgithub.com\u002Fyzhao062\u002Fcombo\u002Fblob\u002Fmaster\u002Fcombo\u002Fmodels\u002Fscore_comb.py>`_ 中的各个方法。\n原始分数组合 API：(https:\u002F\u002Fpycombo.readthedocs.io\u002Fen\u002Flatest\u002Fapi.html#score-combination)。\n\n\n----\n\n\n已实现的算法\n^^^^^^^^^^^^^^\n\n**combo** 按任务将组合框架分组。通用方法是基础方法，可以应用于各种任务。\n\n===================  ======================================================================================================  =====  ===========================================\n任务                 算法                                                                                               年份   参考\n===================  ======================================================================================================  =====  ===========================================\n通用                 平均与加权平均：对所有分数或预测结果求平均，可带权重                            
                无    [#Zhou2012Ensemble]_\n通用                 最大化：通过取最大分数进行简单组合                                           无    [#Zhou2012Ensemble]_\n通用                 中位数：对所有分数或预测结果取中位数                                      无    [#Zhou2012Ensemble]_\n通用                 多数投票与加权多数投票                                                                  无    [#Zhou2012Ensemble]_\n分类               SimpleClassifierAggregator：使用上述通用方法组合分类器                      无    无\n分类               DCS：动态分类器选择（利用局部准确率估计组合多个分类器）                       1997   [#Woods1997Combination]_\n分类               DES：动态集成选择（从动态分类器选择到动态集成选择）                             2008   [#Ko2008From]_\n分类               Stacking（元集成）：使用元学习器学习基分类器的结果                     无    [#Gorman2016Kaggle]_\n聚类               Clusterer Ensemble：通过重新标记组合多个聚类结果                            2006   [#Zhou2006Clusterer]_\n聚类               使用证据积累（EAC）组合多个聚类结果                                        2002   [#Fred2005Combining]_\n异常检测           SimpleDetectorCombination：使用上述通用方法组合异常检测器                 无    [#Aggarwal2017Outlier]_\n异常检测           最大值平均（AOM）：将基检测器分成子组取最大值后再求平均                    2015   [#Aggarwal2015Theoretical]_\n异常检测           平均值最大（MOA）：将基检测器分成子组取平均值后再取最大值                   2015   [#Aggarwal2015Theoretical]_\n异常检测           XGBOD：一种用于异常检测的半监督组合框架                                    2018   [#Zhao2018XGBOD]_\n异常检测           局部选择性组合（LSCP）                                                                    2019   [#Zhao2019LSCP]_\n===================  ======================================================================================================  =====  ===========================================\n\n\n**所选已实现模型的比较**如下所示\n(\\ `图 \u003Chttps:\u002F\u002Fraw.githubusercontent.com\u002Fyzhao062\u002Fcombo\u002Fmaster\u002Fexamples\u002Fcompare_selected_classifiers.png>`_\\ ,\n`compare_selected_classifiers.py 
\u003Chttps:\u002F\u002Fgithub.com\u002Fyzhao062\u002Fcombo\u002Fblob\u002Fmaster\u002Fexamples\u002Fcompare_selected_classifiers.py>`_\\, `交互式 Jupyter 笔记本 \u003Chttps:\u002F\u002Fmybinder.org\u002Fv2\u002Fgh\u002Fyzhao062\u002Fcombo\u002Fmaster>`_\\ )。\n对于 Jupyter 笔记本，请导航至 **\"\u002Fnotebooks\u002Fcompare_selected_classifiers.ipynb\"**。\n\n\n.. image:: https:\u002F\u002Fraw.githubusercontent.com\u002Fyzhao062\u002Fcombo\u002Fmaster\u002Fexamples\u002Fcompare_selected_classifiers.png\n   :target: https:\u002F\u002Fraw.githubusercontent.com\u002Fyzhao062\u002Fcombo\u002Fmaster\u002Fexamples\u002Fcompare_selected_classifiers.png\n   :alt: 所选模型比较\n\n\n----\n\n\n**所有已实现的模式**都配有示例，更多信息请查看\n`\"combo 示例\" \u003Chttps:\u002F\u002Fgithub.com\u002Fyzhao062\u002Fcombo\u002Fblob\u002Fmaster\u002Fexamples>`_。\n\n\nStacking\u002FDCS\u002FDES 示例\n^^^^^^^^^^^^^^^^^^^^^\n\n\n`\"examples\u002Fclassifier_stacking_example.py\" \u003Chttps:\u002F\u002Fgithub.com\u002Fyzhao062\u002Fcombo\u002Fblob\u002Fmaster\u002Fexamples\u002Fclassifier_stacking_example.py>`_\n展示了 stacking（元集成）的基本 API。`\"examples\u002Fclassifier_dcs_la_example.py\" \u003Chttps:\u002F\u002Fgithub.com\u002Fyzhao062\u002Fcombo\u002Fblob\u002Fmaster\u002Fexamples\u002Fclassifier_dcs_la_example.py>`_\n展示了基于局部准确率的动态分类器选择的基本 API。`\"examples\u002Fclassifier_des_la_example.py\" \u003Chttps:\u002F\u002Fgithub.com\u002Fyzhao062\u002Fcombo\u002Fblob\u002Fmaster\u002Fexamples\u002Fclassifier_des_la_example.py>`_\n展示了基于局部准确率的动态集成选择的基本 API。\n\n需要注意的是，**这些模型的基本 API 是一致的**。\n\n\n#. 初始化一组分类器作为基估计器\n\n   .. code-block:: python\n\n\n      # 初始化一组分类器\n      classifiers = [DecisionTreeClassifier(random_state=random_state),\n                     LogisticRegression(random_state=random_state),\n                     KNeighborsClassifier(),\n                     RandomForestClassifier(random_state=random_state),\n                     GradientBoostingClassifier(random_state=random_state)]\n\n\n#. 使用 Stacking 进行初始化、拟合、预测和评估\n\n   .. 
code-block:: python\n\n\n      from combo.models.classifier_stacking import Stacking\n\n      clf = Stacking(base_estimators=classifiers, n_folds=4, shuffle_data=False,\n                     keep_original=True, use_proba=False, random_state=random_state)\n\n      clf.fit(X_train, y_train)\n      y_test_predict = clf.predict(X_test)\n      evaluate_print('Stacking | ', y_test, y_test_predict)\n\n\n#. 查看 classifier_stacking_example.py 的示例输出\n\n   .. code-block:: bash\n\n\n      决策树        | 准确率:0.9386, ROC:0.9383, F1:0.9521\n      逻辑回归  | 准确率:0.9649, ROC:0.9615, F1:0.973\n      K近邻          | 准确率:0.9561, ROC:0.9519, F1:0.9662\n      梯度提升    | 准确率:0.9605, ROC:0.9524, F1:0.9699\n      随机森林        | 准确率:0.9605, ROC:0.961, F1:0.9693\n\n      Stacking             | 准确率:0.9868, ROC:0.9841, F1:0.9899\n\n\n----\n\n\n分类器组合示例\n^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n\n\n`\"examples\u002Fclassifier_comb_example.py\" \u003Chttps:\u002F\u002Fgithub.com\u002Fyzhao062\u002Fcombo\u002Fblob\u002Fmaster\u002Fexamples\u002Fclassifier_comb_example.py>`_\n演示了使用多个分类器进行预测的基本 API。**需要注意的是，所有其他算法的 API 都是统一或相似的**。\n\n#. 初始化一组分类器作为基估计器\n\n   .. code-block:: python\n\n\n      # 初始化一组分类器\n      classifiers = [DecisionTreeClassifier(random_state=random_state),\n                     LogisticRegression(random_state=random_state),\n                     KNeighborsClassifier(),\n                     RandomForestClassifier(random_state=random_state),\n                     GradientBoostingClassifier(random_state=random_state)]\n\n\n#. 使用简单的聚合器（平均）初始化、拟合、预测并评估\n\n   .. code-block:: python\n\n\n      from combo.models.classifier_comb import SimpleClassifierAggregator\n\n      clf = SimpleClassifierAggregator(classifiers, method='average')\n      clf.fit(X_train, y_train)\n      y_test_predicted = clf.predict(X_test)\n      evaluate_print('通过平均组合   |', y_test, y_test_predicted)\n\n\n\n#. 查看 classifier_comb_example.py 的示例输出\n\n   ..
code-block:: bash\n\n\n      决策树        | 准确率:0.9386, ROC:0.9383, F1:0.9521\n      逻辑回归  | 准确率:0.9649, ROC:0.9615, F1:0.973\n      K近邻          | 准确率:0.9561, ROC:0.9519, F1:0.9662\n      梯度提升    | 准确率:0.9605, ROC:0.9524, F1:0.9699\n      随机森林        | 准确率:0.9605, ROC:0.961, F1:0.9693\n\n      通过平均组合   | 准确率:0.9693, ROC:0.9677, F1:0.9763\n      通过加权平均组合 | 准确率:0.9781, ROC:0.9716, F1:0.9833\n      通过最大值组合   | 准确率:0.9518, ROC:0.9312, F1:0.9642\n      通过加权投票组合| 准确率:0.9649, ROC:0.9644, F1:0.9728\n      通过中位数组合| 准确率:0.9693, ROC:0.9677, F1:0.9763\n\n\n----\n\n\n聚类组合示例\n^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n\n\n`\"examples\u002Fcluster_comb_example.py\" \u003Chttps:\u002F\u002Fgithub.com\u002Fyzhao062\u002Fcombo\u002Fblob\u002Fmaster\u002Fexamples\u002Fcluster_comb_example.py>`_\n演示了组合多个基本聚类估计器的基本 API。`\"examples\u002Fcluster_eac_example.py\" \u003Chttps:\u002F\u002Fgithub.com\u002Fyzhao062\u002Fcombo\u002Fblob\u002Fmaster\u002Fexamples\u002Fcluster_eac_example.py>`_\n演示了使用证据积累（EAC）组合多个聚类结果的基本 API。\n\n#. 初始化一组聚类方法作为基估计器\n\n   .. code-block:: python\n\n\n      # 初始化一组估计器\n      estimators = [KMeans(n_clusters=n_clusters),\n                    MiniBatchKMeans(n_clusters=n_clusters),\n                    AgglomerativeClustering(n_clusters=n_clusters)]\n\n\n#. 初始化 Clusterer Ensemble 类并拟合模型\n\n   .. code-block:: python\n\n\n      from combo.models.cluster_comb import ClustererEnsemble\n      # 通过 Clusterer Ensemble 组合\n      clf = ClustererEnsemble(estimators, n_clusters=n_clusters)\n      clf.fit(X)\n\n\n#. 获取对齐后的结果\n\n   .. code-block:: python\n\n\n      # 在 X 上生成标签\n      aligned_labels = clf.aligned_labels_\n      predicted_labels = clf.labels_\n\n\n\n异常检测器组合示例\n^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n\n\n`\"examples\u002Fdetector_comb_example.py\" \u003Chttps:\u002F\u002Fgithub.com\u002Fyzhao062\u002Fcombo\u002Fblob\u002Fmaster\u002Fexamples\u002Fdetector_comb_example.py>`_\n演示了组合多个基本异常检测器的基本 API。\n\n#. 初始化一组异常检测方法作为基估计器\n\n   .. 
code-block:: python\n\n\n      from pyod.models.knn import KNN\n      from pyod.models.lof import LOF\n      from pyod.models.ocsvm import OCSVM\n\n      # 初始化一组估计器\n      detectors = [KNN(), LOF(), OCSVM()]\n\n\n#. 初始化一个简单的平均聚合器，拟合模型并进行预测。\n\n   .. code-block:: python\n\n\n      from combo.models.detector_comb import SimpleDetectorAggregator\n      clf = SimpleDetectorAggregator(base_estimators=detectors)\n      clf_name = '通过平均聚合'\n      clf.fit(X_train)\n\n      y_train_pred = clf.labels_  # 二元标签（0：内点，1：异常点）\n      y_train_scores = clf.decision_scores_  # 原始异常分数\n\n      # 对测试数据进行预测\n      y_test_pred = clf.predict(X_test)  # 异常标签（0 或 1）\n      y_test_scores = clf.decision_function(X_test)  # 异常分数\n\n\n#. 使用 ROC 曲线和排名 n 的精确率评估预测结果\n\n   .. code-block:: python\n\n      # 评估并打印结果\n      print(\"\\n在训练数据上:\")\n      evaluate_print(clf_name, y_train, y_train_scores)\n      print(\"\\n在测试数据上:\")\n      evaluate_print(clf_name, y_test, y_test_scores)\n\n#. 查看训练和测试数据上的示例输出。\n\n   .. code-block:: bash\n\n      在训练数据上:\n      通过平均聚合 ROC:0.9994, 排名 n 的精确率:0.95\n\n      在测试数据上:\n      通过平均聚合 ROC:1.0, 排名 n 的精确率:1.0\n\n\n----\n\n\n开发状态\n^^^^^^^^^^^^^^^^^^\n\n截至 2020 年 2 月，**combo** 目前仍处于 **开发阶段**。我们已经制定了具体的计划，并将在接下来的几个月内逐步实施。\n\n与我们构建的其他库类似，例如 Python 异常检测工具箱（`pyod \u003Chttps:\u002F\u002Fgithub.com\u002Fyzhao062\u002Fpyod>`_），**combo** 也计划发表在《机器学习研究期刊》（JMLR）的开源软件专栏（`http:\u002F\u002Fwww.jmlr.org\u002Fmloss\u002F`）。我们已在 *AAAI 2020* 上提交了一篇演示论文，以汇报项目进展。\n\n请关注并点赞以获取最新更新！如有任何建议或想法，欢迎随时发送邮件至 zhaoy@cmu.edu。\n\n\n----\n\n\n纳入标准\n^^^^^^^^^^^^^^\n\n与 scikit-learn 类似，我们主要考虑将经过充分验证的算法纳入其中。一般而言，这些算法应至少发表两年以上，被引用超过 50 次，并且具有实用性。\n\n然而，我们也鼓励新提出的模型的作者分享并将其实现添加到 combo 中，以提高机器学习的可访问性和可重复性。这一例外仅适用于那些能够承诺至少维护其模型两年以上的作者。\n\n\n----\n\n\n参考文献\n^^^^^^^^^\n\n.. [#Aggarwal2015Theoretical] Aggarwal, C.C. 和 Sathe, S., 2015. 异常值集成的理论基础与算法. *ACM SIGKDD 探索通讯*, 17(1), 页24–47.\n\n.. [#Aggarwal2017Outlier] Aggarwal, C.C. 和 Sathe, S., 2017. 异常值集成：导论. Springer.\n\n.. [#Bell2007Lessons] Bell, R.M. 和 Koren, Y., 2007. Netflix 奖项挑战赛的经验教训. *SIGKDD 探索*, 9(2), 页75–79.\n\n.. [#Gorman2016Kaggle] Gorman, B. (2016). 
Kaggle 用户实践模型堆叠指南. [在线] Kaggle.com 官方博客. 可获取于: http:\u002F\u002Fblog.kaggle.com\u002F2016\u002F12\u002F27\u002Fa-kagglers-guide-to-model-stacking-in-practice [访问日期: 2019年7月26日].\n\n.. [#Ko2008From] Ko, A.H., Sabourin, R. 和 Britto Jr, A.S., 2008. 从动态分类器选择到动态集成选择. *模式识别*, 41(5), 页1718–1731.\n\n.. [#Fred2005Combining] Fred, A. L. N. 和 Jain, A. K. (2005). 利用证据积累结合多个聚类方法. *IEEE 模式分析与机器智能汇刊*, 27(6), 835–850. https:\u002F\u002Fdoi.org\u002F10.1109\u002FTPAMI.2005.113\n\n.. [#Raschka2020Machine] Raschka, S., Patterson, J. 和 Nolet, C., 2020. Python 中的机器学习：数据科学、机器学习和人工智能领域的主要进展与技术趋势. arXiv 预印本 arXiv:2002.04803.\n\n.. [#Woods1997Combination] Woods, K., Kegelmeyer, W.P. 和 Bowyer, K., 1997. 基于局部准确率估计的多分类器组合. *IEEE 模式分析与机器智能汇刊*, 19(4), 页405–410.\n\n.. [#Zhao2018XGBOD] Zhao, Y. 和 Hryniewicki, M.K. XGBOD：利用无监督表示学习改进有监督异常检测. *IEEE 国际神经网络联合会议*, 2018年.\n\n.. [#Zhao2019LSCP] Zhao, Y., Nasrullah, Z., Hryniewicki, M.K. 和 Li, Z., 2019年5月. LSCP：并行异常值集成中的局部选择性组合. 载于 *2019年 SIAM 国际数据挖掘会议（SDM）论文集*, 页585–593. 工业与应用数学学会.\n\n.. [#Zhao2019PyOD] Zhao, Y., Nasrullah, Z. 和 Li, Z., 2019. PyOD：用于可扩展异常检测的 Python 工具箱. *机器学习研究期刊*, 20, 页1–7.\n\n.. [#Zhou2006Clusterer] Zhou, Z.H. 和 Tang, W., 2006. 聚类器集成. *基于知识的系统*, 19(1), 页77–83.\n\n.. [#Zhou2012Ensemble] Zhou, Z.H., 2012. 集成方法：基础与算法. Chapman and Hall\u002FCRC.","# combo 快速上手指南\n\n**combo** 是一个功能全面的 Python 工具箱，专为**机器学习模型与分数的组合（Model Combination）**而设计。它支持分类、聚类和异常检测等任务，集成了 Stacking、DCS、DES、LSCP 等先进算法，并提供统一的 API 接口。\n\n## 1. 环境准备\n\n在开始之前，请确保您的开发环境满足以下要求：\n\n*   **操作系统**：Linux, macOS, Windows\n*   **Python 版本**：3.5, 3.6, 3.7 或更高版本（**不支持 Python 2**）\n*   **核心依赖**：\n    *   `numpy` (>=1.13)\n    *   `scipy` (>=0.19.1)\n    *   `scikit-learn` (>=0.20)\n    *   `numba` (>=0.35)\n    *   `joblib`\n    *   `pyod`\n*   **可选依赖**：`matplotlib`（用于运行示例绘图）\n\n> **提示**：国内用户建议使用清华或阿里镜像源加速安装过程。\n\n## 2. 
安装步骤\n\n推荐使用 `pip` 进行安装。\n\n### 标准安装\n```bash\npip install combo\n```\n\n### 使用国内镜像源加速安装（推荐）\n```bash\npip install combo -i https:\u002F\u002Fpypi.tuna.tsinghua.edu.cn\u002Fsimple\n```\n\n### 升级至最新版本\n```bash\npip install --upgrade combo\n```\n\n### 从源码安装\n如果您需要最新的功能或参与开发，可以克隆仓库安装：\n```bash\ngit clone https:\u002F\u002Fgithub.com\u002Fyzhao062\u002Fcombo.git\ncd combo\npip install .\n```\n\n## 3. 基本使用\n\n`combo` 提供了与 `scikit-learn` 风格一致的统一 API（`fit`, `predict`, `predict_proba`）。以下是一个使用 **Stacking** 算法组合多个分类器的最简单示例：\n\n```python\nfrom sklearn.tree import DecisionTreeClassifier\nfrom sklearn.linear_model import LogisticRegression\nfrom sklearn.neighbors import KNeighborsClassifier\nfrom sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier\nfrom combo.models.classifier_stacking import Stacking\n\n# 1. 初始化一组基分类器\nclassifiers = [\n    DecisionTreeClassifier(), \n    LogisticRegression(),\n    KNeighborsClassifier(), \n    RandomForestClassifier(),\n    GradientBoostingClassifier()\n]\n\n# 2. 初始化 Stacking 模型\nclf = Stacking(base_estimators=classifiers)\n\n# 3. 训练模型 (X_train, y_train 为您的训练数据)\nclf.fit(X_train, y_train)\n\n# 4. 
对未见数据进行预测\n# 预测类别标签\ny_test_labels = clf.predict(X_test)\n\n# 预测概率\ny_test_proba = clf.predict_proba(X_test)\n```\n\n该库同样支持聚类组合（如 `Clusterer Ensemble`）和异常检测组合（如 `LSCP`, `XGBOD`），使用方式类似，只需导入对应的模型类即可。","某金融风控团队正在构建信用卡欺诈检测系统，需要整合逻辑回归、随机森林和 XGBoost 等多个异构模型的预测结果，以最大化识别准确率并降低误报率。\n\n### 没有 combo 时\n- **代码重复繁琐**：工程师需手动编写大量胶水代码来对齐不同库（如 scikit-learn 与 XGBoost）的输出格式，才能进行简单的加权平均或投票。\n- **高级算法缺失**：难以快速落地 DCS（动态分类器选择）或 LSCP 等前沿集成策略，只能依赖基础的静态加权，导致模型上限受限。\n- **性能瓶颈明显**：在处理百万级交易数据时，串行的模型组合逻辑耗时过长，且缺乏原生的并行加速支持，影响实时拦截效率。\n- **维护成本高昂**：每新增一个基模型或调整组合策略，都需要重构底层数据流转逻辑，测试与调试周期漫长。\n\n### 使用 combo 后\n- **统一接口调用**：通过 combo 标准化的 API，仅需几行代码即可无缝接入各类基模型，自动处理分数对齐与格式转换。\n- **前沿策略即享**：直接调用内置的 Stacking、DES 及 EAC 等高级算法模块，无需复现论文公式，瞬间提升模型泛化能力。\n- **运行效率飞跃**：利用工具内建的 Numba JIT 编译与 Joblib 并行化特性，大规模数据的组合预测速度提升数倍，满足实时风控要求。\n- **扩展灵活便捷**：新增模型或切换组合范式只需修改配置参数，大幅降低了实验迭代门槛与维护复杂度。\n\ncombo 将复杂的模型集成工程简化为标准化流程，让数据科学家能专注于策略创新而非底层代码实现。","https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fyzhao062_combo_800fce56.png","yzhao062","Yue Zhao","https:\u002F\u002Foss.gittoolsai.com\u002Favatars\u002Fyzhao062_b7512e00.jpg","Assistant Professor at USC | AI Auditing |  Making AI agents and systems inspectable, safe, and accountable","University of Southern California","Los Angeles, CA, USA",null,"https:\u002F\u002Fviterbi-web.usc.edu\u002F~yzhao010\u002F","https:\u002F\u002Fgithub.com\u002Fyzhao062",[82],{"name":83,"color":84,"percentage":85},"Python","#3572A5",100,661,105,"2026-04-06T03:05:49","BSD-2-Clause",1,"未说明",{"notes":93,"python":94,"dependencies":95},"该工具不支持 Python 2.7（已于 2020 年停止维护），建议使用 Python 3.5 或更高版本以获取最新功能和修复。安装推荐使用 pip，也可通过克隆源码运行 setup.py 安装。","3.5, 3.6, 或 3.7",[96,97,98,99,100,101,102],"joblib","numpy>=1.13","numba>=0.35","pyod","scipy>=0.19.1","scikit_learn>=0.20","matplotlib 
(可选)",[16,104,14],"其他",[106,107,108,109,110,111,112,113,114],"machine-learning","data-mining","data-science","ensemble-learning","python","model-combination","aggregation","pipeline-framework","machine-learning-pipelines","2026-03-27T02:49:30.150509","2026-04-11T21:39:48.018148",[118,123,128],{"id":119,"question_zh":120,"answer_zh":121,"source_url":122},29822,"导入 stacking 模块时提示 'No module named combo.models.stacking' 怎么办？","这是因为文档未及时更新导致的包名变更。stacking 包实际位于 classifier_stacking 名下，请使用正确的导入语句：from combo.models.classifier_stacking import Stacking。请忽略 README 文档中旧的导入方式。","https:\u002F\u002Fgithub.com\u002Fyzhao062\u002Fcombo\u002Fissues\u002F1",{"id":124,"question_zh":125,"answer_zh":126,"source_url":127},29823,"combo 库是否支持 LightGBM 的 LGBMRanker 模型？","目前仅支持来自 scikit-learn、xgboost 和 lightGBM 的分类模型（classification）进行组合。LGBMRanker 是排序模型而非有效的分类器，因此不支持。您可以参考官方示例代码了解支持的用法：https:\u002F\u002Fgithub.com\u002Fyzhao062\u002Fcombo\u002Fblob\u002Fmaster\u002Fexamples\u002Fclassifier_multiple_libs.py","https:\u002F\u002Fgithub.com\u002Fyzhao062\u002Fcombo\u002Fissues\u002F4",{"id":129,"question_zh":130,"answer_zh":131,"source_url":132},29824,"如何实现主图中提到的“顺序组合”（sequential combination）高级算法？","截至该问题提出时，库中尚未提供实现“顺序组合”算法的具体 API 或示例代码。建议关注项目后续版本更新或查看最新文档以确认是否已添加此功能。","https:\u002F\u002Fgithub.com\u002Fyzhao062\u002Fcombo\u002Fissues\u002F9",[134],{"id":135,"version":136,"summary_zh":137,"released_at":138},206411,"V0.1.0","这是一个稳定版本。\n\nv\u003C0.0.0>, \u003C2019年7月14日> -- 初始发布。\nv\u003C0.0.1>, \u003C2019年7月15日> -- 添加基本功能和示例。\nv\u003C0.0.1>, \u003C2019年7月15日> -- 添加聚类集成器。\nv\u003C0.0.2>, \u003C2019年7月16日> -- 修复紧急问题。\nv\u003C0.0.3>, \u003C2019年7月16日> -- 添加文档。\nv\u003C0.0.3>, \u003C2019年7月17日> -- 添加Travis-CI集成。\nv\u003C0.0.4>, \u003C2019年7月17日> -- 更新单元测试和聚类算法。\nv\u003C0.0.4>, \u003C2019年7月17日> -- 更新文档。\nv\u003C0.0.4>, \u003C2019年7月21日> -- 提高代码可维护性。\nv\u003C0.0.5>, \u003C2019年7月27日> -- 添加中位数组合及score_to_proba函数。\nv\u003C0.0.5>, \u003C2019年7月28日> -- 添加堆叠（元集成）。\nv\u003C0.0.6>, \u003C2019年7月29日> -- 
启用AppVeyor集成。\nv\u003C0.0.6>, \u003C2019年7月29日> -- 更新依赖文件。\nv\u003C0.0.6>, \u003C2019年7月29日> -- 添加简单的异常检测器组合方法。\nv\u003C0.0.6>, \u003C2019年7月30日> -- 添加LSCP。\nv\u003C0.0.7>, \u003C2019年8月2日> -- 添加DCS_LA。\nv\u003C0.0.7>, \u003C2019年8月3日> -- 重构Base类中设置权重的代码。\nv\u003C0.0.7>, \u003C2019年8月4日> -- 添加DES_LA。\nv\u003C0.0.8>, \u003C2019年8月5日> -- 添加fit_predict作为核心API。\nv\u003C0.0.8>, \u003C2019年8月6日> -- 添加EAC模型。\nv\u003C0.0.8>, \u003C2019年8月8日> -- 更新聚类示例，增加可视化展示。\nv\u003C0.0.9>, \u003C2019年9月1日> -- 添加classifier_multiple_libs.py，支持多库集成。\nv\u003C0.1.0>, \u003C2019年12月30日> -- 引入优化。\nv\u003C0.1.0>, \u003C2020年2月17日> -- 更新文档。\nv\u003C0.1.0>, \u003C2020年2月17日> -- 代码清理。","2020-02-19T02:11:55"]
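The generic combination rules the README lists (averaging, weighted averaging, maximization, median over a score matrix of shape n_samples x n_estimators) can be sketched in a few lines of plain Python. This is a conceptual illustration only: `combine_scores` is a hypothetical helper written for this note, not combo's own `score_comb` API.

```python
# Conceptual sketch of generic score combination (NOT the combo API):
# given one row of scores per sample (one column per base estimator),
# reduce each row to a single combined score.
from statistics import median


def combine_scores(scores, method="average", weights=None):
    """Combine per-estimator scores row by row.

    scores  : list of rows; each row holds one score per base estimator
    method  : "average", "maximization", or "median"
    weights : optional per-estimator weights, used only by "average"
    """
    combined = []
    for row in scores:
        if method == "average":
            # Weighted mean; unit weights give the plain average.
            w = weights or [1.0] * len(row)
            combined.append(sum(s * wi for s, wi in zip(row, w)) / sum(w))
        elif method == "maximization":
            # Keep the single largest score per sample.
            combined.append(max(row))
        elif method == "median":
            # Robust middle value per sample.
            combined.append(median(row))
        else:
            raise ValueError(f"unknown method: {method}")
    return combined


# Three base estimators scoring two samples
scores = [[0.2, 0.4, 0.9],
          [0.1, 0.3, 0.2]]
print(combine_scores(scores, "average"))
print(combine_scores(scores, "maximization"))  # [0.9, 0.3]
```

In combo itself, the equivalent reductions are applied to the raw score matrix produced by the base estimators; the point of the sketch is only that each rule is an independent row-wise reduction, which is also why these methods need no training step.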