[{"data":1,"prerenderedAt":-1},["ShallowReactive",2],{"similar-hyperactive-project--Hyperactive":3,"tool-hyperactive-project--Hyperactive":61},[4,18,26,36,44,53],{"id":5,"name":6,"github_repo":7,"description_zh":8,"stars":9,"difficulty_score":10,"last_commit_at":11,"category_tags":12,"status":17},4358,"openclaw","openclaw\u002Fopenclaw","OpenClaw 是一款专为个人打造的本地化 AI 助手，旨在让你在自己的设备上拥有完全可控的智能伙伴。它打破了传统 AI 助手局限于特定网页或应用的束缚，能够直接接入你日常使用的各类通讯渠道，包括微信、WhatsApp、Telegram、Discord、iMessage 等数十种平台。无论你在哪个聊天软件中发送消息，OpenClaw 都能即时响应，甚至支持在 macOS、iOS 和 Android 设备上进行语音交互，并提供实时的画布渲染功能供你操控。\n\n这款工具主要解决了用户对数据隐私、响应速度以及“始终在线”体验的需求。通过将 AI 部署在本地，用户无需依赖云端服务即可享受快速、私密的智能辅助，真正实现了“你的数据，你做主”。其独特的技术亮点在于强大的网关架构，将控制平面与核心助手分离，确保跨平台通信的流畅性与扩展性。\n\nOpenClaw 非常适合希望构建个性化工作流的技术爱好者、开发者，以及注重隐私保护且不愿被单一生态绑定的普通用户。只要具备基础的终端操作能力（支持 macOS、Linux 及 Windows WSL2），即可通过简单的命令行引导完成部署。如果你渴望拥有一个懂你",349277,3,"2026-04-06T06:32:30",[13,14,15,16],"Agent","开发框架","图像","数据工具","ready",{"id":19,"name":20,"github_repo":21,"description_zh":22,"stars":23,"difficulty_score":10,"last_commit_at":24,"category_tags":25,"status":17},3808,"stable-diffusion-webui","AUTOMATIC1111\u002Fstable-diffusion-webui","stable-diffusion-webui 是一个基于 Gradio 构建的网页版操作界面，旨在让用户能够轻松地在本地运行和使用强大的 Stable Diffusion 图像生成模型。它解决了原始模型依赖命令行、操作门槛高且功能分散的痛点，将复杂的 AI 绘图流程整合进一个直观易用的图形化平台。\n\n无论是希望快速上手的普通创作者、需要精细控制画面细节的设计师，还是想要深入探索模型潜力的开发者与研究人员，都能从中获益。其核心亮点在于极高的功能丰富度：不仅支持文生图、图生图、局部重绘（Inpainting）和外绘（Outpainting）等基础模式，还独创了注意力机制调整、提示词矩阵、负向提示词以及“高清修复”等高级功能。此外，它内置了 GFPGAN 和 CodeFormer 等人脸修复工具，支持多种神经网络放大算法，并允许用户通过插件系统无限扩展能力。即使是显存有限的设备，stable-diffusion-webui 也提供了相应的优化选项，让高质量的 AI 艺术创作变得触手可及。",162132,"2026-04-05T11:01:52",[14,15,13],{"id":27,"name":28,"github_repo":29,"description_zh":30,"stars":31,"difficulty_score":32,"last_commit_at":33,"category_tags":34,"status":17},1381,"everything-claude-code","affaan-m\u002Feverything-claude-code","everything-claude-code 是一套专为 AI 编程助手（如 Claude Code、Codex、Cursor 等）打造的高性能优化系统。它不仅仅是一组配置文件，而是一个经过长期实战打磨的完整框架，旨在解决 AI 
代理在实际开发中面临的效率低下、记忆丢失、安全隐患及缺乏持续学习能力等核心痛点。\n\n通过引入技能模块化、直觉增强、记忆持久化机制以及内置的安全扫描功能，everything-claude-code 能显著提升 AI 在复杂任务中的表现，帮助开发者构建更稳定、更智能的生产级 AI 代理。其独特的“研究优先”开发理念和针对 Token 消耗的优化策略，使得模型响应更快、成本更低，同时有效防御潜在的攻击向量。\n\n这套工具特别适合软件开发者、AI 研究人员以及希望深度定制 AI 工作流的技术团队使用。无论您是在构建大型代码库，还是需要 AI 协助进行安全审计与自动化测试，everything-claude-code 都能提供强大的底层支持。作为一个曾荣获 Anthropic 黑客大奖的开源项目，它融合了多语言支持与丰富的实战钩子（hooks），让 AI 真正成长为懂上",160015,2,"2026-04-18T11:30:52",[14,13,35],"语言模型",{"id":37,"name":38,"github_repo":39,"description_zh":40,"stars":41,"difficulty_score":32,"last_commit_at":42,"category_tags":43,"status":17},2271,"ComfyUI","Comfy-Org\u002FComfyUI","ComfyUI 是一款功能强大且高度模块化的视觉 AI 引擎，专为设计和执行复杂的 Stable Diffusion 图像生成流程而打造。它摒弃了传统的代码编写模式，采用直观的节点式流程图界面，让用户通过连接不同的功能模块即可构建个性化的生成管线。\n\n这一设计巧妙解决了高级 AI 绘图工作流配置复杂、灵活性不足的痛点。用户无需具备编程背景，也能自由组合模型、调整参数并实时预览效果，轻松实现从基础文生图到多步骤高清修复等各类复杂任务。ComfyUI 拥有极佳的兼容性，不仅支持 Windows、macOS 和 Linux 全平台，还广泛适配 NVIDIA、AMD、Intel 及苹果 Silicon 等多种硬件架构，并率先支持 SDXL、Flux、SD3 等前沿模型。\n\n无论是希望深入探索算法潜力的研究人员和开发者，还是追求极致创作自由度的设计师与资深 AI 绘画爱好者，ComfyUI 都能提供强大的支持。其独特的模块化架构允许社区不断扩展新功能，使其成为当前最灵活、生态最丰富的开源扩散模型工具之一，帮助用户将创意高效转化为现实。",109154,"2026-04-18T11:18:24",[14,15,13],{"id":45,"name":46,"github_repo":47,"description_zh":48,"stars":49,"difficulty_score":32,"last_commit_at":50,"category_tags":51,"status":17},6121,"gemini-cli","google-gemini\u002Fgemini-cli","gemini-cli 是一款由谷歌推出的开源 AI 命令行工具，它将强大的 Gemini 大模型能力直接集成到用户的终端环境中。对于习惯在命令行工作的开发者而言，它提供了一条从输入提示词到获取模型响应的最短路径，无需切换窗口即可享受智能辅助。\n\n这款工具主要解决了开发过程中频繁上下文切换的痛点，让用户能在熟悉的终端界面内直接完成代码理解、生成、调试以及自动化运维任务。无论是查询大型代码库、根据草图生成应用，还是执行复杂的 Git 操作，gemini-cli 都能通过自然语言指令高效处理。\n\n它特别适合广大软件工程师、DevOps 人员及技术研究人员使用。其核心亮点包括支持高达 100 万 token 的超长上下文窗口，具备出色的逻辑推理能力；内置 Google 搜索、文件操作及 Shell 命令执行等实用工具；更独特的是，它支持 MCP（模型上下文协议），允许用户灵活扩展自定义集成，连接如图像生成等外部能力。此外，个人谷歌账号即可享受免费的额度支持，且项目基于 Apache 2.0 
协议完全开源，是提升终端工作效率的理想助手。",100752,"2026-04-10T01:20:03",[52,13,15,14],"插件",{"id":54,"name":55,"github_repo":56,"description_zh":57,"stars":58,"difficulty_score":32,"last_commit_at":59,"category_tags":60,"status":17},4721,"markitdown","microsoft\u002Fmarkitdown","MarkItDown 是一款由微软 AutoGen 团队打造的轻量级 Python 工具，专为将各类文件高效转换为 Markdown 格式而设计。它支持 PDF、Word、Excel、PPT、图片（含 OCR）、音频（含语音转录）、HTML 乃至 YouTube 链接等多种格式的解析，能够精准提取文档中的标题、列表、表格和链接等关键结构信息。\n\n在人工智能应用日益普及的今天，大语言模型（LLM）虽擅长处理文本，却难以直接读取复杂的二进制办公文档。MarkItDown 恰好解决了这一痛点，它将非结构化或半结构化的文件转化为模型“原生理解”且 Token 效率极高的 Markdown 格式，成为连接本地文件与 AI 分析 pipeline 的理想桥梁。此外，它还提供了 MCP（模型上下文协议）服务器，可无缝集成到 Claude Desktop 等 LLM 应用中。\n\n这款工具特别适合开发者、数据科学家及 AI 研究人员使用，尤其是那些需要构建文档检索增强生成（RAG）系统、进行批量文本分析或希望让 AI 助手直接“阅读”本地文件的用户。虽然生成的内容也具备一定可读性，但其核心优势在于为机器",93400,"2026-04-06T19:52:38",[52,14],{"id":62,"github_repo":63,"name":64,"description_en":65,"description_zh":66,"ai_summary_zh":66,"readme_en":67,"readme_zh":68,"quickstart_zh":69,"use_case_zh":70,"hero_image_url":71,"owner_login":72,"owner_name":73,"owner_avatar_url":74,"owner_bio":75,"owner_company":76,"owner_location":76,"owner_email":76,"owner_twitter":76,"owner_website":76,"owner_url":77,"languages":78,"stars":87,"forks":88,"last_commit_at":89,"license":90,"difficulty_score":91,"env_os":92,"env_gpu":92,"env_ram":92,"env_deps":93,"category_tags":103,"github_topics":105,"view_count":32,"oss_zip_url":76,"oss_zip_packed_at":76,"status":17,"created_at":123,"updated_at":124,"faqs":125,"releases":155},9128,"hyperactive-project\u002FHyperactive","Hyperactive","A unified interface for optimization algorithms and experiments","Hyperactive 是一个专为 Python 设计的优化算法统一接口工具，旨在简化超参数调优、模型选择及黑盒优化实验的复杂流程。它巧妙地将“优化问题定义”与“具体算法执行”分离开来，让用户无需修改实验代码即可自由切换不同的优化策略，从而高效对比多种算法效果。\n\n该工具主要解决了开发者在面对众多优化库时接口不统一、迁移成本高以及重复编写样板代码的痛点。无论是需要快速验证想法的机器学习工程师，还是致力于算法对比的研究人员，都能通过 Hyperactive 轻松上手。只需定义目标函数和搜索空间，即可立即运行实验。\n\n其核心亮点在于集成了高达 31 种优化算法，涵盖局部搜索、全局搜索、基于种群及基于模型的各类方法，并统一封装了 GFO、Optuna 和 scikit-learn 三大主流后端。此外，Hyperactive 原生支持 
scikit-learn、PyTorch、sktime 等流行框架，能够以极低的配置成本直接对机器学习模型进行调优。如果你希望在实验中灵活尝试不同优化器并关注结果而非底层实现细节，Hyperactive 将是一个得力的助手。","\u003Cp align=\"center\">\n  \u003Ca href=\"https:\u002F\u002Fgithub.com\u002FSimonBlanke\u002FHyperactive\">\n    \u003Cpicture>\n      \u003Csource media=\"(prefers-color-scheme: dark)\" srcset=\".\u002Fdocs\u002Fimages\u002Fhyperactive_logo_ink_dark.svg\">\n      \u003Csource media=\"(prefers-color-scheme: light)\" srcset=\".\u002Fdocs\u002Fimages\u002Fhyperactive_logo_ink.svg\">\n      \u003Cimg src=\".\u002Fdocs\u002Fimages\u002Fhyperactive_logo_ink.svg\" width=\"400\" alt=\"Hyperactive Logo\">\n    \u003C\u002Fpicture>\n  \u003C\u002Fa>\n\u003C\u002Fp>\n\n---\n\n\u003Ch3 align=\"center\">\nA unified interface for optimization algorithms and experiments in Python.\n\u003C\u002Fh3>\n\n\u003Cp align=\"center\">\n  \u003Ca href=\"https:\u002F\u002Fgithub.com\u002FSimonBlanke\u002FHyperactive\u002Factions\">\u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Factions\u002Fworkflow\u002Fstatus\u002FSimonBlanke\u002FHyperactive\u002Ftest.yml?style=for-the-badge&logo=githubactions&logoColor=white&label=tests\" alt=\"Tests\">\u003C\u002Fa>\n  \u003Ca href=\"https:\u002F\u002Fcodecov.io\u002Fgh\u002FSimonBlanke\u002FHyperactive\">\u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fcodecov\u002Fc\u002Fgithub\u002FSimonBlanke\u002FHyperactive?style=for-the-badge&logo=codecov&logoColor=white\" alt=\"Coverage\">\u003C\u002Fa>\n\u003C\u002Fp>\n\n\u003Cbr>\n\n\u003Ctable align=\"center\">\n  \u003Ctr>\n    \u003Ctd align=\"right\">\u003Cb>Documentation\u003C\u002Fb>\u003C\u002Ftd>\n    \u003Ctd align=\"center\">&#9656;\u003C\u002Ftd>\n    \u003Ctd>\n      \u003Ca href=\"https:\u002F\u002Fhyperactive.readthedocs.io\u002Fen\u002Flatest\u002F\">Homepage\u003C\u002Fa> &#183;\n      \u003Ca href=\"https:\u002F\u002Fhyperactive.readthedocs.io\u002Fen\u002Flatest\u002Fuser_guide.html\">User Guide\u003C\u002Fa> &#183;\n      \u003Ca 
href=\"https:\u002F\u002Fhyperactive.readthedocs.io\u002Fen\u002Flatest\u002Fapi_reference.html\">API Reference\u003C\u002Fa> &#183;\n      \u003Ca href=\"https:\u002F\u002Fhyperactive.readthedocs.io\u002Fen\u002Flatest\u002Fexamples.html\">Examples\u003C\u002Fa>\n    \u003C\u002Ftd>\n  \u003C\u002Ftr>\n  \u003Ctr>\n    \u003Ctd align=\"right\">\u003Cb>On this page\u003C\u002Fb>\u003C\u002Ftd>\n    \u003Ctd align=\"center\">&#9656;\u003C\u002Ftd>\n    \u003Ctd>\n      \u003Ca href=\"#key-features\">Features\u003C\u002Fa> &#183;\n      \u003Ca href=\"#examples\">Examples\u003C\u002Fa> &#183;\n      \u003Ca href=\"#core-concepts\">Concepts\u003C\u002Fa> &#183;\n      \u003Ca href=\"#citation\">Citation\u003C\u002Fa>\n    \u003C\u002Ftd>\n  \u003C\u002Ftr>\n\u003C\u002Ftable>\n\n\u003Cbr>\n\n---\n\n\u003Ca href=\"https:\u002F\u002Fgithub.com\u002FSimonBlanke\u002FHyperactive\">\n  \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fhyperactive-project_Hyperactive_readme_457a27388e35.gif\" width=\"240\" align=\"right\" alt=\"Bayesian Optimization on Ackley Function\">\n\u003C\u002Fa>\n\n**Hyperactive** provides 31 optimization algorithms across 3 backends (GFO, Optuna, scikit-learn), accessible through a unified experiment-based interface. The library separates optimization problems from algorithms, enabling you to swap optimizers without changing your experiment code.\n\nDesigned for hyperparameter tuning, model selection, and black-box optimization. Native integrations with scikit-learn, sktime, skpro, and PyTorch allow tuning ML models with minimal setup. 
Define your objective, specify a search space, and run.\n\n\u003Cp>\n  \u003Ca href=\"https:\u002F\u002Fwww.linkedin.com\u002Fcompany\u002Fgerman-center-for-open-source-ai\">\u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FLinkedIn-Follow-0A66C2?style=flat-square&logo=linkedin\" alt=\"LinkedIn\">\u003C\u002Fa>\n  \u003Ca href=\"https:\u002F\u002Fdiscord.gg\u002F7uKdHfdcJG\">\u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FDiscord-Chat-5865F2?style=flat-square&logo=discord&logoColor=white\" alt=\"Discord\">\u003C\u002Fa>\n\u003C\u002Fp>\n\n\u003Cbr>\n\n## Installation\n\n```bash\npip install hyperactive\n```\n\n\u003Cp>\n  \u003Ca href=\"https:\u002F\u002Fpypi.org\u002Fproject\u002Fhyperactive\u002F\">\u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fpypi\u002Fv\u002Fhyperactive?style=flat-square&color=blue\" alt=\"PyPI\">\u003C\u002Fa>\n  \u003Ca href=\"https:\u002F\u002Fpypi.org\u002Fproject\u002Fhyperactive\u002F\">\u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fpypi\u002Fpyversions\u002Fhyperactive?style=flat-square\" alt=\"Python\">\u003C\u002Fa>\n\u003C\u002Fp>\n\n\u003Cdetails>\n\u003Csummary>Optional dependencies\u003C\u002Fsummary>\n\n```bash\npip install hyperactive[sklearn-integration]  # scikit-learn integration\npip install hyperactive[sktime-integration]   # sktime\u002Fskpro integration\npip install hyperactive[all_extras]           # Everything including Optuna\n```\n\n\u003C\u002Fdetails>\n\n\u003Cbr>\n\n## Key Features\n\n\u003Ctable>\n  \u003Ctr>\n    \u003Ctd width=\"33%\">\n      \u003Ca href=\"https:\u002F\u002Fhyperactive.readthedocs.io\u002Fen\u002Flatest\u002Fuser_guide\u002Foptimizers\u002Findex.html\">\u003Cb>31 Optimization Algorithms\u003C\u002Fb>\u003C\u002Fa>\u003Cbr>\n      \u003Csub>Local, global, population-based, and model-based methods across 3 backends (GFO, Optuna, sklearn).\u003C\u002Fsub>\n    \u003C\u002Ftd>\n    \u003Ctd width=\"33%\">\n      \u003Ca 
href=\"https:\u002F\u002Fhyperactive.readthedocs.io\u002Fen\u002Flatest\u002Fuser_guide\u002Fexperiments.html\">\u003Cb>Experiment Abstraction\u003C\u002Fb>\u003C\u002Fa>\u003Cbr>\n      \u003Csub>Clean separation between what to optimize (experiments) and how to optimize (algorithms).\u003C\u002Fsub>\n    \u003C\u002Ftd>\n    \u003Ctd width=\"33%\">\n      \u003Ca href=\"https:\u002F\u002Fhyperactive.readthedocs.io\u002Fen\u002Flatest\u002Fuser_guide\u002Fsearch_spaces.html\">\u003Cb>Flexible Search Spaces\u003C\u002Fb>\u003C\u002Fa>\u003Cbr>\n      \u003Csub>Discrete, continuous, and mixed parameter types. Define spaces with NumPy arrays or lists.\u003C\u002Fsub>\n    \u003C\u002Ftd>\n  \u003C\u002Ftr>\n  \u003Ctr>\n    \u003Ctd width=\"33%\">\n      \u003Ca href=\"https:\u002F\u002Fhyperactive.readthedocs.io\u002Fen\u002Flatest\u002Fuser_guide\u002Fintegrations.html\">\u003Cb>ML Framework Integrations\u003C\u002Fb>\u003C\u002Fa>\u003Cbr>\n      \u003Csub>Native support for scikit-learn, sktime, skpro, and PyTorch with minimal code changes.\u003C\u002Fsub>\n    \u003C\u002Ftd>\n    \u003Ctd width=\"33%\">\n      \u003Ca href=\"https:\u002F\u002Fhyperactive.readthedocs.io\u002Fen\u002Flatest\u002Fuser_guide\u002Foptimizers\u002Foptuna.html\">\u003Cb>Multiple Backends\u003C\u002Fb>\u003C\u002Fa>\u003Cbr>\n      \u003Csub>GFO algorithms, Optuna samplers, and sklearn search methods through one unified API.\u003C\u002Fsub>\n    \u003C\u002Ftd>\n    \u003Ctd width=\"33%\">\n      \u003Ca href=\"https:\u002F\u002Fhyperactive.readthedocs.io\u002Fen\u002Flatest\u002Fapi_reference.html\">\u003Cb>Stable & Tested\u003C\u002Fb>\u003C\u002Fa>\u003Cbr>\n      \u003Csub>5+ years of development, comprehensive test coverage, and active maintenance since 2019.\u003C\u002Fsub>\n    \u003C\u002Ftd>\n  \u003C\u002Ftr>\n\u003C\u002Ftable>\n\n\u003Cbr>\n\n## Quick Start\n\n```python\nimport numpy as np\nfrom hyperactive.opt.gfo import HillClimbing\n\n# Define objective function 
(maximize)\ndef objective(params):\n    x, y = params[\"x\"], params[\"y\"]\n    return -(x**2 + y**2)  # Negative paraboloid, optimum at (0, 0)\n\n# Define search space\nsearch_space = {\n    \"x\": np.arange(-5, 5, 0.1),\n    \"y\": np.arange(-5, 5, 0.1),\n}\n\n# Run optimization\noptimizer = HillClimbing(\n    search_space=search_space,\n    n_iter=100,\n    experiment=objective,\n)\nbest_params = optimizer.solve()\n\nprint(f\"Best params: {best_params}\")\n```\n\n**Output:**\n```\nBest params: {'x': 0.0, 'y': 0.0}\n```\n\n\u003Cbr>\n\n## Core Concepts\n\nHyperactive separates **what** you optimize from **how** you optimize. Define your experiment (objective function) and search space once, then swap optimizers freely without changing your code. The unified interface abstracts away backend differences, letting you focus on your optimization problem.\n\n```mermaid\nflowchart TB\n    subgraph USER[\"Your Code\"]\n        direction LR\n        F[\"def objective(params):\u003Cbr\u002F>    return score\"]\n        SP[\"search_space = {\u003Cbr\u002F>    'x': np.arange(...),\u003Cbr\u002F>    'y': [1, 2, 3]\u003Cbr\u002F>}\"]\n    end\n\n    subgraph HYPER[\"Hyperactive\"]\n        direction TB\n        OPT[\"Optimizer\"]\n\n        subgraph BACKENDS[\"Backends\"]\n            GFO[\"GFO\u003Cbr\u002F>21 algorithms\"]\n            OPTUNA[\"Optuna\u003Cbr\u002F>8 algorithms\"]\n            SKL[\"sklearn\u003Cbr\u002F>2 algorithms\"]\n            MORE[\"...\u003Cbr\u002F>more to come\"]\n        end\n\n        OPT --> GFO\n        OPT --> OPTUNA\n        OPT --> SKL\n        OPT --> MORE\n    end\n\n    subgraph OUT[\"Output\"]\n        BEST[\"best_params\"]\n    end\n\n    F --> OPT\n    SP --> OPT\n    HYPER --> OUT\n```\n\n**Optimizer**: Implements the search strategy (Hill Climbing, Bayesian, Particle Swarm, etc.).\n\n**Search Space**: Defines valid parameter combinations as NumPy arrays or lists.\n\n**Experiment**: Your objective function or a built-in experiment 
(SklearnCvExperiment, etc.).\n\n**Best Parameters**: The optimizer returns the parameters that maximize the objective.\n\n\u003Cbr>\n\n## Examples\n\n\u003Cdetails open>\n\u003Csummary>\u003Cb>Scikit-learn Hyperparameter Tuning\u003C\u002Fb>\u003C\u002Fsummary>\n\n```python\nfrom sklearn.svm import SVC\nfrom sklearn.datasets import load_iris\nfrom sklearn.model_selection import train_test_split\n\nfrom hyperactive.integrations.sklearn import OptCV\nfrom hyperactive.opt.gfo import HillClimbing\n\n# Load data\nX, y = load_iris(return_X_y=True)\nX_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2)\n\n# Define search space and optimizer\nsearch_space = {\"kernel\": [\"linear\", \"rbf\"], \"C\": [1, 10, 100]}\noptimizer = HillClimbing(search_space=search_space, n_iter=20)\n\n# Create tuned estimator\ntuned_svc = OptCV(SVC(), optimizer)\ntuned_svc.fit(X_train, y_train)\n\nprint(f\"Best params: {tuned_svc.best_params_}\")\nprint(f\"Test accuracy: {tuned_svc.score(X_test, y_test):.3f}\")\n```\n\n\u003C\u002Fdetails>\n\n\n\n\u003Cdetails>\n\u003Csummary>\u003Cb>Bayesian Optimization\u003C\u002Fb>\u003C\u002Fsummary>\n\n```python\nimport numpy as np\nfrom hyperactive.opt.gfo import BayesianOptimizer\n\ndef ackley(params):\n    x, y = params[\"x\"], params[\"y\"]\n    return -(\n        -20 * np.exp(-0.2 * np.sqrt(0.5 * (x**2 + y**2)))\n        - np.exp(0.5 * (np.cos(2 * np.pi * x) + np.cos(2 * np.pi * y)))\n        + np.e + 20\n    )\n\nsearch_space = {\n    \"x\": np.arange(-5, 5, 0.01),\n    \"y\": np.arange(-5, 5, 0.01),\n}\n\noptimizer = BayesianOptimizer(\n    search_space=search_space,\n    n_iter=50,\n    experiment=ackley,\n)\nbest_params = optimizer.solve()\n```\n\n\u003C\u002Fdetails>\n\n\n\n\u003Cdetails>\n\u003Csummary>\u003Cb>Particle Swarm Optimization\u003C\u002Fb>\u003C\u002Fsummary>\n\n```python\nimport numpy as np\nfrom hyperactive.opt.gfo import ParticleSwarmOptimizer\n\ndef rastrigin(params):\n    A = 10\n    values = 
[params[f\"x{i}\"] for i in range(5)]\n    return -sum(v**2 - A * np.cos(2 * np.pi * v) + A for v in values)\n\nsearch_space = {f\"x{i}\": np.arange(-5.12, 5.12, 0.1) for i in range(5)}\n\noptimizer = ParticleSwarmOptimizer(\n    search_space=search_space,\n    n_iter=500,\n    experiment=rastrigin,\n    population_size=20,\n)\nbest_params = optimizer.solve()\n```\n\n\u003C\u002Fdetails>\n\n\n\n\u003Cdetails>\n\u003Csummary>\u003Cb>Experiment Abstraction with SklearnCvExperiment\u003C\u002Fb>\u003C\u002Fsummary>\n\n```python\nimport numpy as np\nfrom sklearn.svm import SVC\nfrom sklearn.datasets import load_iris\nfrom sklearn.metrics import accuracy_score\nfrom sklearn.model_selection import KFold\n\nfrom hyperactive.experiment.integrations import SklearnCvExperiment\nfrom hyperactive.opt.gfo import HillClimbing\n\nX, y = load_iris(return_X_y=True)\n\n# Create reusable experiment\nsklearn_exp = SklearnCvExperiment(\n    estimator=SVC(),\n    scoring=accuracy_score,\n    cv=KFold(n_splits=3, shuffle=True),\n    X=X,\n    y=y,\n)\n\nsearch_space = {\n    \"C\": np.logspace(-2, 2, num=10),\n    \"kernel\": [\"linear\", \"rbf\"],\n}\n\noptimizer = HillClimbing(\n    search_space=search_space,\n    n_iter=100,\n    experiment=sklearn_exp,\n)\nbest_params = optimizer.solve()\n```\n\n\u003C\u002Fdetails>\n\n\n\n\u003Cdetails>\n\u003Csummary>\u003Cb>Optuna Backend (TPE)\u003C\u002Fb>\u003C\u002Fsummary>\n\n```python\nimport numpy as np\nfrom hyperactive.opt.optuna import TPEOptimizer\n\ndef objective(params):\n    x, y = params[\"x\"], params[\"y\"]\n    return -(x**2 + y**2)\n\nsearch_space = {\n    \"x\": np.arange(-5, 5, 0.1),\n    \"y\": np.arange(-5, 5, 0.1),\n}\n\noptimizer = TPEOptimizer(\n    search_space=search_space,\n    n_iter=100,\n    experiment=objective,\n)\nbest_params = optimizer.solve()\n```\n\n\u003C\u002Fdetails>\n\n\n\n\u003Cdetails>\n\u003Csummary>\u003Cb>Time Series Forecasting with sktime\u003C\u002Fb>\u003C\u002Fsummary>\n\n```python\nfrom 
sktime.forecasting.naive import NaiveForecaster\nfrom sktime.datasets import load_airline\n\nfrom hyperactive.integrations.sktime import ForecastingOptCV\nfrom hyperactive.opt.gfo import RandomSearch\n\ny = load_airline()\n\nsearch_space = {\n    \"strategy\": [\"last\", \"mean\", \"drift\"],\n    \"sp\": [1, 12],\n}\n\noptimizer = RandomSearch(search_space=search_space, n_iter=10)\ntuned_forecaster = ForecastingOptCV(NaiveForecaster(), optimizer)\ntuned_forecaster.fit(y)\n\nprint(f\"Best params: {tuned_forecaster.best_params_}\")\n```\n\n\u003C\u002Fdetails>\n\n\n\n\u003Cdetails>\n\u003Csummary>\u003Cb>PyTorch Neural Network Tuning\u003C\u002Fb>\u003C\u002Fsummary>\n\n```python\nimport numpy as np\nimport torch\nimport torch.nn as nn\nfrom torch.utils.data import DataLoader, TensorDataset\nfrom hyperactive.opt.gfo import BayesianOptimizer\n\n# Example data\nX_train = torch.randn(1000, 10)\ny_train = torch.randint(0, 2, (1000,))\n\ndef train_model(params):\n    learning_rate = params[\"learning_rate\"]\n    batch_size = params[\"batch_size\"]\n    hidden_size = params[\"hidden_size\"]\n\n    model = nn.Sequential(\n        nn.Linear(10, hidden_size),\n        nn.ReLU(),\n        nn.Linear(hidden_size, 2),\n    )\n\n    optimizer = torch.optim.Adam(model.parameters(), lr=learning_rate)\n    criterion = nn.CrossEntropyLoss()\n    loader = DataLoader(TensorDataset(X_train, y_train), batch_size=batch_size)\n\n    model.train()\n    for epoch in range(10):\n        for X_batch, y_batch in loader:\n            optimizer.zero_grad()\n            loss = criterion(model(X_batch), y_batch)\n            loss.backward()\n            optimizer.step()\n\n    # Return validation accuracy\n    model.eval()\n    with torch.no_grad():\n        predictions = model(X_train).argmax(dim=1)\n        accuracy = (predictions == y_train).float().mean().item()\n\n    return accuracy\n\nsearch_space = {\n    \"learning_rate\": np.logspace(-5, -1, 20),\n    \"batch_size\": [16, 32, 64, 128],\n 
   \"hidden_size\": [64, 128, 256, 512],\n}\n\noptimizer = BayesianOptimizer(\n    search_space=search_space,\n    n_iter=30,\n    experiment=train_model,\n)\nbest_params = optimizer.solve()\n```\n\n\u003C\u002Fdetails>\n\n\u003Cbr>\n\n## Ecosystem\n\nThis library is part of a suite of optimization and machine learning tools. For updates on these packages, [follow on GitHub](https:\u002F\u002Fgithub.com\u002FSimonBlanke).\n\n| Package | Description |\n|---------|-------------|\n| [Hyperactive](https:\u002F\u002Fgithub.com\u002FSimonBlanke\u002FHyperactive) | Hyperparameter optimization framework with experiment abstraction and ML integrations |\n| [Gradient-Free-Optimizers](https:\u002F\u002Fgithub.com\u002FSimonBlanke\u002FGradient-Free-Optimizers) | Core optimization algorithms for black-box function optimization |\n| [Surfaces](https:\u002F\u002Fgithub.com\u002FSimonBlanke\u002FSurfaces) | Test functions and benchmark surfaces for optimization algorithm evaluation |\n\n\n\u003Cbr>\n\n## Documentation\n\n| Resource | Description |\n|----------|-------------|\n| [User Guide](https:\u002F\u002Fhyperactive.readthedocs.io\u002Fen\u002Flatest\u002Fuser_guide.html) | Comprehensive tutorials and explanations |\n| [API Reference](https:\u002F\u002Fhyperactive.readthedocs.io\u002Fen\u002Flatest\u002Fapi_reference.html) | Complete API documentation |\n| [Examples](https:\u002F\u002Fhyperactive.readthedocs.io\u002Fen\u002Flatest\u002Fexamples.html) | Jupyter notebooks with use cases |\n| [FAQ](https:\u002F\u002Fhyperactive.readthedocs.io\u002Fen\u002Flatest\u002Ffaq.html) | Common questions and troubleshooting |\n\n\u003Cbr>\n\n## Contributing\n\nContributions welcome! 
See [CONTRIBUTING.md](.\u002FCONTRIBUTING.md) for guidelines.\n\n- **Bug reports**: [GitHub Issues](https:\u002F\u002Fgithub.com\u002FSimonBlanke\u002FHyperactive\u002Fissues)\n- **Feature requests**: [GitHub Discussions](https:\u002F\u002Fgithub.com\u002FSimonBlanke\u002FHyperactive\u002Fdiscussions)\n- **Questions**: [Discord](https:\u002F\u002Fdiscord.gg\u002F7uKdHfdcJG)\n\n\u003Cbr>\n\n## Citation\n\nIf you use this software in your research, please cite:\n\n```bibtex\n@software{hyperactive2019,\n  author = {Simon Blanke},\n  title = {Hyperactive: A hyperparameter optimization and meta-learning toolbox},\n  year = {2019},\n  url = {https:\u002F\u002Fgithub.com\u002FSimonBlanke\u002FHyperactive},\n}\n```\n\n\u003Cbr>\n\n## License\n\n[MIT License](.\u002FLICENSE) - Free for commercial and academic use.\n","\u003Cp align=\"center\">\n  \u003Ca href=\"https:\u002F\u002Fgithub.com\u002FSimonBlanke\u002FHyperactive\">\n    \u003Cpicture>\n      \u003Csource media=\"(prefers-color-scheme: dark)\" srcset=\".\u002Fdocs\u002Fimages\u002Fhyperactive_logo_ink_dark.svg\">\n      \u003Csource media=\"(prefers-color-scheme: light)\" srcset=\".\u002Fdocs\u002Fimages\u002Fhyperactive_logo_ink.svg\">\n      \u003Cimg src=\".\u002Fdocs\u002Fimages\u002Fhyperactive_logo_ink.svg\" width=\"400\" alt=\"Hyperactive Logo\">\n    \u003C\u002Fpicture>\n  \u003C\u002Fa>\n\u003C\u002Fp>\n\n---\n\n\u003Ch3 align=\"center\">\nPython中用于优化算法和实验的统一接口。\n\u003C\u002Fh3>\n\n\u003Cp align=\"center\">\n  \u003Ca href=\"https:\u002F\u002Fgithub.com\u002FSimonBlanke\u002FHyperactive\u002Factions\">\u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Factions\u002Fworkflow\u002Fstatus\u002FSimonBlanke\u002FHyperactive\u002Ftest.yml?style=for-the-badge&logo=githubactions&logoColor=white&label=tests\" alt=\"测试\">\u003C\u002Fa>\n  \u003Ca href=\"https:\u002F\u002Fcodecov.io\u002Fgh\u002FSimonBlanke\u002FHyperactive\">\u003Cimg 
src=\"https:\u002F\u002Fimg.shields.io\u002Fcodecov\u002Fc\u002Fgithub\u002FSimonBlanke\u002FHyperactive?style=for-the-badge&logo=codecov&logoColor=white\" alt=\"覆盖率\">\u003C\u002Fa>\n\u003C\u002Fp>\n\n\u003Cbr>\n\n\u003Ctable align=\"center\">\n  \u003Ctr>\n    \u003Ctd align=\"right\">\u003Cb>文档\u003C\u002Fb>\u003C\u002Ftd>\n    \u003Ctd align=\"center\">&#9656;\u003C\u002Ftd>\n    \u003Ctd>\n      \u003Ca href=\"https:\u002F\u002Fhyperactive.readthedocs.io\u002Fen\u002Flatest\u002F\">主页\u003C\u002Fa> &#183;\n      \u003Ca href=\"https:\u002F\u002Fhyperactive.readthedocs.io\u002Fen\u002Flatest\u002Fuser_guide.html\">用户指南\u003C\u002Fa> &#183;\n      \u003Ca href=\"https:\u002F\u002Fhyperactive.readthedocs.io\u002Fen\u002Flatest\u002Fapi_reference.html\">API参考\u003C\u002Fa> &#183;\n      \u003Ca href=\"https:\u002F\u002Fhyperactive.readthedocs.io\u002Fen\u002Flatest\u002Fexamples.html\">示例\u003C\u002Fa>\n    \u003C\u002Ftd>\n  \u003C\u002Ftr>\n  \u003Ctr>\n    \u003Ctd align=\"right\">\u003Cb>本页内容\u003C\u002Fb>\u003C\u002Ftd>\n    \u003Ctd align=\"center\">&#9656;\u003C\u002Ftd>\n    \u003Ctd>\n      \u003Ca href=\"#key-features\">功能特性\u003C\u002Fa> &#183;\n      \u003Ca href=\"#examples\">示例\u003C\u002Fa> &#183;\n      \u003Ca href=\"#core-concepts\">核心概念\u003C\u002Fa> &#183;\n      \u003Ca href=\"#citation\">引用\u003C\u002Fa>\n    \u003C\u002Ftd>\n  \u003C\u002Ftr>\n\u003C\u002Ftable>\n\n\u003Cbr>\n\n---\n\n\u003Ca href=\"https:\u002F\u002Fgithub.com\u002FSimonBlanke\u002FHyperactive\">\n  \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fhyperactive-project_Hyperactive_readme_457a27388e35.gif\" width=\"240\" align=\"right\" alt=\"贝叶斯优化在Ackley函数上的应用\">\n\u003C\u002Fa>\n\n**Hyperactive** 提供了跨3个后端（GFO、Optuna、scikit-learn）的31种优化算法，可通过统一的基于实验的接口访问。该库将优化问题与算法分离，使您无需更改实验代码即可切换优化器。\n\n专为超参数调优、模型选择和黑箱优化设计。与scikit-learn、sktime、skpro和PyTorch的原生集成，让您只需最少的设置即可调优机器学习模型。只需定义目标函数、指定搜索空间并运行即可。\n\n\u003Cp>\n  \u003Ca 
href=\"https:\u002F\u002Fwww.linkedin.com\u002Fcompany\u002Fgerman-center-for-open-source-ai\">\u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FLinkedIn-Follow-0A66C2?style=flat-square&logo=linkedin\" alt=\"LinkedIn\">\u003C\u002Fa>\n  \u003Ca href=\"https:\u002F\u002Fdiscord.gg\u002F7uKdHfdcJG\">\u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FDiscord-Chat-5865F2?style=flat-square&logo=discord&logoColor=white\" alt=\"Discord\">\u003C\u002Fa>\n\u003C\u002Fp>\n\n\u003Cbr>\n\n## 安装\n\n```bash\npip install hyperactive\n```\n\n\u003Cp>\n  \u003Ca href=\"https:\u002F\u002Fpypi.org\u002Fproject\u002Fhyperactive\u002F\">\u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fpypi\u002Fv\u002Fhyperactive?style=flat-square&color=blue\" alt=\"PyPI\">\u003C\u002Fa>\n  \u003Ca href=\"https:\u002F\u002Fpypi.org\u002Fproject\u002Fhyperactive\u002F\">\u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fpypi\u002Fpyversions\u002Fhyperactive?style=flat-square\" alt=\"Python版本\">\u003C\u002Fa>\n\u003C\u002Fp>\n\n\u003Cdetails>\n\u003Csummary>可选依赖\u003C\u002Fsummary>\n\n```bash\npip install hyperactive[sklearn-integration]  # scikit-learn集成\npip install hyperactive[sktime-integration]   # sktime\u002Fskpro集成\npip install hyperactive[all_extras]           # 包括Optuna在内的所有附加功能\n```\n\n\u003C\u002Fdetails>\n\n\u003Cbr>\n\n## 核心特性\n\n\u003Ctable>\n  \u003Ctr>\n    \u003Ctd width=\"33%\">\n      \u003Ca href=\"https:\u002F\u002Fhyperactive.readthedocs.io\u002Fen\u002Flatest\u002Fuser_guide\u002Foptimizers\u002Findex.html\">\u003Cb>31种优化算法\u003C\u002Fb>\u003C\u002Fa>\u003Cbr>\n      \u003Csub>包括局部、全局、群体和基于模型的方法，覆盖3个后端（GFO、Optuna、sklearn）。\u003C\u002Fsub>\n    \u003C\u002Ftd>\n    \u003Ctd width=\"33%\">\n      \u003Ca href=\"https:\u002F\u002Fhyperactive.readthedocs.io\u002Fen\u002Flatest\u002Fuser_guide\u002Fexperiments.html\">\u003Cb>实验抽象\u003C\u002Fb>\u003C\u002Fa>\u003Cbr>\n      \u003Csub>清晰地分离优化对象（实验）和优化方法（算法）。\u003C\u002Fsub>\n    \u003C\u002Ftd>\n    
\u003Ctd width=\"33%\">\n      \u003Ca href=\"https:\u002F\u002Fhyperactive.readthedocs.io\u002Fen\u002Flatest\u002Fuser_guide\u002Fsearch_spaces.html\">\u003Cb>灵活的搜索空间\u003C\u002Fb>\u003C\u002Fa>\u003Cbr>\n      \u003Csub>支持离散、连续及混合类型的参数。可用NumPy数组或列表定义搜索空间。\u003C\u002Fsub>\n    \u003C\u002Ftd>\n  \u003C\u002Ftr>\n  \u003Ctr>\n    \u003Ctd width=\"33%\">\n      \u003Ca href=\"https:\u002F\u002Fhyperactive.readthedocs.io\u002Fen\u002Flatest\u002Fuser_guide\u002Fintegrations.html\">\u003Cb>机器学习框架集成\u003C\u002Fb>\u003C\u002Fa>\u003Cbr>\n      \u003Csub>原生支持scikit-learn、sktime、skpro和PyTorch，只需极少的代码改动。\u003C\u002Fsub>\n    \u003C\u002Ftd>\n    \u003Ctd width=\"33%\">\n      \u003Ca href=\"https:\u002F\u002Fhyperactive.readthedocs.io\u002Fen\u002Flatest\u002Fuser_guide\u002Foptimizers\u002Foptuna.html\">\u003Cb>多后端支持\u003C\u002Fb>\u003C\u002Fa>\u003Cbr>\n      \u003Csub>通过统一的API使用GFO算法、Optuna采样器以及sklearn搜索方法。\u003C\u002Fsub>\n    \u003C\u002Ftd>\n    \u003Ctd width=\"33%\">\n      \u003Ca href=\"https:\u002F\u002Fhyperactive.readthedocs.io\u002Fen\u002Flatest\u002Fapi_reference.html\">\u003Cb>稳定且经过测试\u003C\u002Fb>\u003C\u002Fa>\u003Cbr>\n      \u003Csub>超过5年的开发历程，全面的测试覆盖，自2019年以来持续维护。\u003C\u002Fsub>\n    \u003C\u002Ftd>\n  \u003C\u002Ftr>\n\u003C\u002Ftable>\n\n\u003Cbr>\n\n## 快速入门\n\n```python\nimport numpy as np\nfrom hyperactive.opt.gfo import HillClimbing\n\n# 定义目标函数（最大化）\ndef objective(params):\n    x, y = params[\"x\"], params[\"y\"]\n    return -(x**2 + y**2)  # 负抛物面，最优解在(0, 0)\n\n# 定义搜索空间\nsearch_space = {\n    \"x\": np.arange(-5, 5, 0.1),\n    \"y\": np.arange(-5, 5, 0.1),\n}\n\n# 运行优化\noptimizer = HillClimbing(\n    search_space=search_space,\n    n_iter=100,\n    experiment=objective,\n)\nbest_params = optimizer.solve()\n\nprint(f\"最佳参数：{best_params}\")\n```\n\n**输出：**\n```\n最佳参数：{'x': 0.0, 'y': 0.0}\n```\n\n\u003Cbr>\n\n## 核心概念\n\nHyperactive 将你优化的 **内容** 与 **方式** 
分离。只需定义一次你的实验（目标函数）和搜索空间，即可在不修改代码的情况下自由切换优化器。统一的接口屏蔽了后端差异，使你可以专注于优化问题本身。\n\n```mermaid\nflowchart TB\n    subgraph USER[\"你的代码\"]\n        direction LR\n        F[\"def objective(params):\u003Cbr\u002F>    return score\"]\n        SP[\"search_space = {\u003Cbr\u002F>    'x': np.arange(...),\u003Cbr\u002F>    'y': [1, 2, 3]\u003Cbr\u002F>}\"]\n    end\n\n    subgraph HYPER[\"Hyperactive\"]\n        direction TB\n        OPT[\"优化器\"]\n\n        subgraph BACKENDS[\"后端\"]\n            GFO[\"GFO\u003Cbr\u002F>21种算法\"]\n            OPTUNA[\"Optuna\u003Cbr\u002F>8种算法\"]\n            SKL[\"sklearn\u003Cbr\u002F>2种算法\"]\n            MORE[\"...\u003Cbr\u002F>更多即将推出\"]\n        end\n\n        OPT --> GFO\n        OPT --> OPTUNA\n        OPT --> SKL\n        OPT --> MORE\n    end\n\n    subgraph OUT[\"输出\"]\n        BEST[\"best_params\"]\n    end\n\n    F --> OPT\n    SP --> OPT\n    HYPER --> OUT\n```\n\n**优化器**：实现搜索策略（爬山法、贝叶斯优化、粒子群优化等）。\n\n**搜索空间**：以 NumPy 数组或列表的形式定义有效的参数组合。\n\n**实验**：你的目标函数，或内置的实验（如 SklearnCvExperiment 等）。\n\n**最佳参数**：优化器返回使目标函数最大化的参数。\n\n\u003Cbr>\n\n## 示例\n\n\u003Cdetails open>\n\u003Csummary>\u003Cb>Scikit-learn 超参数调优\u003C\u002Fb>\u003C\u002Fsummary>\n\n```python\nfrom sklearn.svm import SVC\nfrom sklearn.datasets import load_iris\nfrom sklearn.model_selection import train_test_split\n\nfrom hyperactive.integrations.sklearn import OptCV\nfrom hyperactive.opt.gfo import HillClimbing\n\n# 加载数据\nX, y = load_iris(return_X_y=True)\nX_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2)\n\n# 定义搜索空间和优化器\nsearch_space = {\"kernel\": [\"linear\", \"rbf\"], \"C\": [1, 10, 100]}\noptimizer = HillClimbing(search_space=search_space, n_iter=20)\n\n# 创建调优后的估计器\ntuned_svc = OptCV(SVC(), optimizer)\ntuned_svc.fit(X_train, y_train)\n\nprint(f\"最佳参数: {tuned_svc.best_params_}\")\nprint(f\"测试准确率: {tuned_svc.score(X_test, 
y_test):.3f}\")\n```\n\n\u003C\u002Fdetails>\n\n\n\n\u003Cdetails>\n\u003Csummary>\u003Cb>贝叶斯优化\u003C\u002Fb>\u003C\u002Fsummary>\n\n```python\nimport numpy as np\nfrom hyperactive.opt.gfo import BayesianOptimizer\n\ndef ackley(params):\n    x, y = params[\"x\"], params[\"y\"]\n    return -(\n        -20 * np.exp(-0.2 * np.sqrt(0.5 * (x**2 + y**2)))\n        - np.exp(0.5 * (np.cos(2 * np.pi * x) + np.cos(2 * np.pi * y)))\n        + np.e + 20\n    )\n\nsearch_space = {\n    \"x\": np.arange(-5, 5, 0.01),\n    \"y\": np.arange(-5, 5, 0.01),\n}\n\noptimizer = BayesianOptimizer(\n    search_space=search_space,\n    n_iter=50,\n    experiment=ackley,\n)\nbest_params = optimizer.solve()\n```\n\n\u003C\u002Fdetails>\n\n\n\n\u003Cdetails>\n\u003Csummary>\u003Cb>粒子群优化\u003C\u002Fb>\u003C\u002Fsummary>\n\n```python\nimport numpy as np\nfrom hyperactive.opt.gfo import ParticleSwarmOptimizer\n\ndef rastrigin(params):\n    A = 10\n    values = [params[f\"x{i}\"] for i in range(5)]\n    return -sum(v**2 - A * np.cos(2 * np.pi * v) + A for v in values)\n\nsearch_space = {f\"x{i}\": np.arange(-5.12, 5.12, 0.1) for i in range(5)}\n\noptimizer = ParticleSwarmOptimizer(\n    search_space=search_space,\n    n_iter=500,\n    experiment=rastrigin,\n    population_size=20,\n)\nbest_params = optimizer.solve()\n```\n\n\u003C\u002Fdetails>\n\n\n\n\u003Cdetails>\n\u003Csummary>\u003Cb>使用 SklearnCvExperiment 进行实验抽象\u003C\u002Fb>\u003C\u002Fsummary>\n\n```python\nimport numpy as np\nfrom sklearn.svm import SVC\nfrom sklearn.datasets import load_iris\nfrom sklearn.metrics import accuracy_score\nfrom sklearn.model_selection import KFold\n\nfrom hyperactive.experiment.integrations import SklearnCvExperiment\nfrom hyperactive.opt.gfo import HillClimbing\n\nX, y = load_iris(return_X_y=True)\n\n# 创建可重用的实验\nsklearn_exp = SklearnCvExperiment(\n    estimator=SVC(),\n    scoring=accuracy_score,\n    cv=KFold(n_splits=3, shuffle=True),\n    X=X,\n    y=y,\n)\n\nsearch_space = {\n    \"C\": 
np.logspace(-2, 2, num=10),\n    \"kernel\": [\"linear\", \"rbf\"],\n}\n\noptimizer = HillClimbing(\n    search_space=search_space,\n    n_iter=100,\n    experiment=sklearn_exp,\n)\nbest_params = optimizer.solve()\n```\n\n\u003C\u002Fdetails>\n\n\n\n\u003Cdetails>\n\u003Csummary>\u003Cb>Optuna 后端（TPE）\u003C\u002Fb>\u003C\u002Fsummary>\n\n```python\nimport numpy as np\nfrom hyperactive.opt.optuna import TPEOptimizer\n\ndef objective(params):\n    x, y = params[\"x\"], params[\"y\"]\n    return -(x**2 + y**2)\n\nsearch_space = {\n    \"x\": np.arange(-5, 5, 0.1),\n    \"y\": np.arange(-5, 5, 0.1),\n}\n\noptimizer = TPEOptimizer(\n    search_space=search_space,\n    n_iter=100,\n    experiment=objective,\n)\nbest_params = optimizer.solve()\n```\n\n\u003C\u002Fdetails>\n\n\n\n\u003Cdetails>\n\u003Csummary>\u003Cb>使用 sktime 进行时间序列预测\u003C\u002Fb>\u003C\u002Fsummary>\n\n```python\nfrom sktime.forecasting.naive import NaiveForecaster\nfrom sktime.datasets import load_airline\n\nfrom hyperactive.integrations.sktime import ForecastingOptCV\nfrom hyperactive.opt.gfo import RandomSearch\n\ny = load_airline()\n\nsearch_space = {\n    \"strategy\": [\"last\", \"mean\", \"drift\"],\n    \"sp\": [1, 12],\n}\n\noptimizer = RandomSearch(search_space=search_space, n_iter=10)\ntuned_forecaster = ForecastingOptCV(NaiveForecaster(), optimizer)\ntuned_forecaster.fit(y)\n\nprint(f\"最佳参数: {tuned_forecaster.best_params_}\")\n```\n\n\u003C\u002Fdetails>\n\n\n\n\u003Cdetails>\n\u003Csummary>\u003Cb>PyTorch 神经网络调优\u003C\u002Fb>\u003C\u002Fsummary>\n\n```python\nimport numpy as np\nimport torch\nimport torch.nn as nn\nfrom torch.utils.data import DataLoader, TensorDataset\nfrom hyperactive.opt.gfo import BayesianOptimizer\n\n# 示例数据\nX_train = torch.randn(1000, 10)\ny_train = torch.randint(0, 2, (1000,))\n\ndef train_model(params):\n    learning_rate = params[\"learning_rate\"]\n    batch_size = params[\"batch_size\"]\n    hidden_size = params[\"hidden_size\"]\n\n    model = nn.Sequential(\n    
    nn.Linear(10, hidden_size),\n        nn.ReLU(),\n        nn.Linear(hidden_size, 2),\n    )\n\n    optimizer = torch.optim.Adam(model.parameters(), lr=learning_rate)\n    criterion = nn.CrossEntropyLoss()\n    loader = DataLoader(TensorDataset(X_train, y_train), batch_size=batch_size)\n\n    model.train()\n    for epoch in range(10):\n        for X_batch, y_batch in loader:\n            optimizer.zero_grad()\n            loss = criterion(model(X_batch), y_batch)\n            loss.backward()\n            optimizer.step()\n\n    # 在训练数据上评估准确率（示例从简，实际调优应使用独立的验证集）\n    model.eval()\n    with torch.no_grad():\n        predictions = model(X_train).argmax(dim=1)\n        accuracy = (predictions == y_train).float().mean().item()\n\n    return accuracy\n\nsearch_space = {\n    \"learning_rate\": np.logspace(-5, -1, 20),\n    \"batch_size\": [16, 32, 64, 128],\n    \"hidden_size\": [64, 128, 256, 512],\n}\n\noptimizer = BayesianOptimizer(\n    search_space=search_space,\n    n_iter=30,\n    experiment=train_model,\n)\nbest_params = optimizer.solve()\n```\n\n\u003C\u002Fdetails>\n\n\u003Cbr>\n\n## 生态系统\n\n本库是优化与机器学习工具套件的一部分。如需了解这些包的最新动态，请在 [GitHub](https:\u002F\u002Fgithub.com\u002FSimonBlanke) 上关注。\n\n| 包名 | 描述 |\n|---------|-------------|\n| [Hyperactive](https:\u002F\u002Fgithub.com\u002FSimonBlanke\u002FHyperactive) | 带有实验抽象和机器学习集成的超参数优化框架 |\n| [Gradient-Free-Optimizers](https:\u002F\u002Fgithub.com\u002FSimonBlanke\u002FGradient-Free-Optimizers) | 用于黑箱函数优化的核心优化算法 |\n| [Surfaces](https:\u002F\u002Fgithub.com\u002FSimonBlanke\u002FSurfaces) | 用于评估优化算法的测试函数和基准曲面 |\n\n\n\u003Cbr>\n\n## 文档\n\n| 资源 | 描述 |\n|----------|-------------|\n| [用户指南](https:\u002F\u002Fhyperactive.readthedocs.io\u002Fen\u002Flatest\u002Fuser_guide.html) | 全面的教程和说明 |\n| [API 参考](https:\u002F\u002Fhyperactive.readthedocs.io\u002Fen\u002Flatest\u002Fapi_reference.html) | 完整的 API 文档 |\n| [示例](https:\u002F\u002Fhyperactive.readthedocs.io\u002Fen\u002Flatest\u002Fexamples.html) | 包含用例的 Jupyter 笔记本 |\n| 
[常见问题解答](https:\u002F\u002Fhyperactive.readthedocs.io\u002Fen\u002Flatest\u002Ffaq.html) | 常见问题及故障排除 |\n\n\u003Cbr>\n\n## 贡献\n\n欢迎贡献！请参阅 [CONTRIBUTING.md](.\u002FCONTRIBUTING.md) 获取相关指南。\n\n- **Bug 报告**: [GitHub Issues](https:\u002F\u002Fgithub.com\u002FSimonBlanke\u002FHyperactive\u002Fissues)\n- **功能请求**: [GitHub Discussions](https:\u002F\u002Fgithub.com\u002FSimonBlanke\u002FHyperactive\u002Fdiscussions)\n- **问题咨询**: [Discord](https:\u002F\u002Fdiscord.gg\u002F7uKdHfdcJG)\n\n\u003Cbr>\n\n## 引用\n\n如果您在研究中使用本软件，请引用以下内容：\n\n```bibtex\n@software{hyperactive2019,\n  author = {Simon Blanke},\n  title = {Hyperactive: A hyperparameter optimization and meta-learning toolbox},\n  year = {2019},\n  url = {https:\u002F\u002Fgithub.com\u002FSimonBlanke\u002FHyperactive},\n}\n```\n\n\u003Cbr>\n\n## 许可证\n\n[MIT 许可证](.\u002FLICENSE) - 可免费用于商业和学术用途。","# Hyperactive 快速上手指南\n\nHyperactive 是一个用于 Python 优化算法和实验的统一接口库。它支持 31 种优化算法（涵盖 GFO、Optuna、scikit-learn 三大后端），专为超参数调优、模型选择和黑盒优化设计。其核心优势在于将“优化目标”与“优化算法”分离，允许你在不修改实验代码的情况下自由切换优化器。\n\n## 环境准备\n\n*   **操作系统**：Linux, macOS, Windows\n*   **Python 版本**：3.8 及以上\n*   **前置依赖**：\n    *   核心功能仅需 `numpy`。\n    *   若需使用机器学习集成（如 scikit-learn, PyTorch, sktime），请确保已安装相应库。\n\n## 安装步骤\n\n### 1. 基础安装\n使用 pip 安装核心库：\n\n```bash\npip install hyperactive\n```\n\n> **国内加速建议**：如果遇到下载速度慢的问题，推荐使用清华或阿里镜像源：\n> ```bash\n> pip install hyperactive -i https:\u002F\u002Fpypi.tuna.tsinghua.edu.cn\u002Fsimple\n> ```\n\n### 2. 可选扩展安装\n根据需求安装特定集成包：\n\n```bash\n# 安装 scikit-learn 集成\npip install hyperactive[sklearn-integration]\n\n# 安装 sktime\u002Fskpro (时间序列) 集成\npip install hyperactive[sktime-integration]\n\n# 安装所有额外依赖（包含 Optuna 后端等）\npip install hyperactive[all_extras]\n```\n\n## 基本使用\n\nHyperactive 的使用流程分为三步：**定义目标函数** -> **定义搜索空间** -> **运行优化器**。\n\n以下是最简单的入门示例，使用爬山算法（Hill Climbing）寻找函数最大值：\n\n```python\nimport numpy as np\nfrom hyperactive.opt.gfo import HillClimbing\n\n# 1. 
定义目标函数 (最大化返回值)\ndef objective(params):\n    x, y = params[\"x\"], params[\"y\"]\n    # 负抛物面函数，最优解在 (0, 0)\n    return -(x**2 + y**2)\n\n# 2. 定义搜索空间\n# 使用 NumPy 数组或列表定义参数的取值范围\nsearch_space = {\n    \"x\": np.arange(-5, 5, 0.1),\n    \"y\": np.arange(-5, 5, 0.1),\n}\n\n# 3. 运行优化\noptimizer = HillClimbing(\n    search_space=search_space,\n    n_iter=100,       # 迭代次数\n    experiment=objective,\n)\n\n# 获取最佳参数\nbest_params = optimizer.solve()\n\nprint(f\"Best params: {best_params}\")\n```\n\n**输出示例：**\n```text\nBest params: {'x': 0.0, 'y': 0.0}\n```\n\n### 核心概念说明\n*   **Experiment (实验)**: 你的目标函数 `objective`，接收参数字典并返回评分。\n*   **Search Space (搜索空间)**: 字典格式，键为参数名，值为该参数所有可能取值的列表或数组。\n*   **Optimizer (优化器)**: 执行搜索策略的对象（如 `HillClimbing`, `BayesianOptimizer`, `ParticleSwarmOptimizer` 等）。\n*   **统一接口**: 更换算法只需更改导入的优化器类（例如从 `HillClimbing` 改为 `BayesianOptimizer`），无需修改目标函数或搜索空间代码。","某机器学习工程师正在为一家电商公司的销量预测模型寻找最优超参数组合，以应对复杂的非线性数据特征。\n\n### 没有 Hyperactive 时\n- **算法切换成本极高**：想对比随机搜索、贝叶斯优化和遗传算法的效果，需要分别学习 Optuna、scikit-optimize 等不同库的 API，反复重写实验代码。\n- **代码耦合严重**：目标函数（模型训练逻辑）与具体的优化算法深度绑定，一旦更换策略，必须大幅修改核心业务逻辑，容易引入 Bug。\n- **试错效率低下**：面对 31 种可用算法，手动集成和测试不同后端（如 GFO 或 sklearn）耗时数天，导致模型迭代周期被拉长。\n- **黑盒优化困难**：对于非标准机器学习库的自定义目标函数，缺乏统一的接口来快速部署高效的全局搜索策略。\n\n### 使用 Hyperactive 后\n- **统一接口一键切换**：通过 Hyperactive 标准化的实验接口，仅需修改一行配置即可在 31 种算法间自由切换，无需触碰底层实验逻辑。\n- **关注点完全分离**：将“定义搜索空间”与“选择优化器”解耦，工程师只需专注构建高质量的目标函数，算法调度交给 Hyperactive 自动处理。\n- **快速验证最佳策略**：利用其内置的 GFO、Optuna 等三大后端支持，能在几小时内完成多算法并行基准测试，迅速锁定当前数据集的最优解法。\n- **无缝集成主流生态**：直接调用原生集成的 scikit-learn 或 PyTorch 模块，极简配置即可启动复杂的黑盒优化任务，大幅降低接入门槛。\n\nHyperactive 通过统一接口消除了算法实现的碎片化，让开发者能以最低成本探索最广泛的优化策略，从而显著提升模型调优的效率与上限。","https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fhyperactive-project_Hyperactive_5334a49a.png","hyperactive-project","hyperactive","https:\u002F\u002Foss.gittoolsai.com\u002Favatars\u002Fhyperactive-project_af12f10b.png","A unified framework for optimization in 
python",null,"https:\u002F\u002Fgithub.com\u002Fhyperactive-project",[79,83],{"name":80,"color":81,"percentage":82},"Python","#3572A5",99.1,{"name":84,"color":85,"percentage":86},"Makefile","#427819",0.9,549,76,"2026-04-18T03:14:11","MIT",1,"未说明",{"notes":94,"python":95,"dependencies":96},"该工具是一个纯 Python 优化库，核心功能无特殊硬件要求。若使用 PyTorch、Optuna 等可选后端或集成，需自行安装对应依赖。支持通过 pip 安装额外组件（如 hyperactive[all_extras]）。","3.8+",[97,98,99,100,101,102],"numpy","scikit-learn (可选)","sktime (可选)","skpro (可选)","torch (可选)","optuna (可选)",[14,104,16],"其他",[106,107,108,109,110,111,112,113,114,115,116,117,118,119,120,121,73,122],"hyperparameter-optimization","scikit-learn","machine-learning","python","data-science","parameter-tuning","xgboost","keras","deep-learning","bayesian-optimization","parallel-computing","neural-architecture-search","automated-machine-learning","optimization","pytorch","feature-engineering","model-selection","2026-03-27T02:49:30.150509","2026-04-18T22:31:39.451384",[126,131,136,141,145,150],{"id":127,"question_zh":128,"answer_zh":129,"source_url":130},40966,"如何向目标函数传递不需要在搜索空间中的额外参数（如 DataFrame 或类实例）？","可以使用 `pass_through` 参数传递任何数据类型（如数字、列表、函数、DataFrame 等）。这些参数不会被视为搜索空间的一部分，因此不会被优化器遍历，也不会影响内存缓存机制。您可以在优化运行期间动态更新 `pass_through` 中的数据。这是传递大型对象（如 DataFrame）而不将其放入搜索空间的首选方法。","https:\u002F\u002Fgithub.com\u002Fhyperactive-project\u002FHyperactive\u002Fissues\u002F42",{"id":132,"question_zh":133,"answer_zh":134,"source_url":135},40967,"为什么在搜索空间中直接放入 DataFrame、类实例或列表会导致 TypeError 或内存问题？","Hyperactive 需要为搜索空间中的每个参数提供一个唯一的“标识符”以便进行记忆（memory）和热启动（warm-start）比较。数字和字符串可以直接比较，但复杂对象（如 DataFrame、类实例）难以直接比较。解决方法是：不要直接将对象放入搜索空间，而是将其封装在一个函数中，然后将该包装函数放入搜索空间。Hyperactive 会通过函数的 `.__name__` 属性获取其名称作为标识符，从而正确识别和处理参数。","https:\u002F\u002Fgithub.com\u002Fhyperactive-project\u002FHyperactive\u002Fissues\u002F39",{"id":137,"question_zh":138,"answer_zh":139,"source_url":140},40968,"遇到 'NoneType' object is not subscriptable 错误该怎么办？","该错误通常发生在某些优化过程中，可能是由于内部状态未正确初始化导致的间歇性问题。虽然具体复现步骤较难追踪，但建议确保您的 
Hyperactive 版本已更新至最新（该问题在后续版本中已得到修复）。如果问题持续存在，请检查是否在搜索空间或初始化参数中传入了不兼容的对象类型，并尝试使用函数包装复杂对象。","https:\u002F\u002Fgithub.com\u002Fhyperactive-project\u002FHyperactive\u002Fissues\u002F25",{"id":142,"question_zh":143,"answer_zh":144,"source_url":135},40969,"如何在搜索空间中正确使用自定义对象（如类实例）以避免序列化错误？","避免直接将自定义对象（如类实例）放入搜索空间，因为这可能导致 pickle 序列化错误（例如：AttributeError: Can't pickle local object）。正确的做法是将这些对象封装在函数中，返回该对象，然后将函数本身放入搜索空间。Hyperactive 能够识别函数名称作为标识符，从而绕过序列化复杂对象的问题。",{"id":146,"question_zh":147,"answer_zh":148,"source_url":149},40970,"Hyperactive 是否支持 PyTorch 或 Lightning 的自动超参数调优？","社区正在讨论并计划集成 PyTorch 和 Lightning 的支持。未来的实现可能包括一个名为 `TorchExperiment` 的类，它接受 `DataLoader`、`LightningModule` 和 `Trainer`，执行训练并返回验证损失。此外，可能会提供类似 `tune_lightning` 的辅助函数来生成调优后的网络参数或直接返回初始化好的模型。请关注官方发布的最新版本以获取此功能。","https:\u002F\u002Fgithub.com\u002Fhyperactive-project\u002FHyperactive\u002Fissues\u002F195",{"id":151,"question_zh":152,"answer_zh":153,"source_url":154},40971,"优化算法的文件夹结构和分类方式是如何组织的？","在 V5 版本中，已经确定了优化算法的文件夹结构和分类方案。主要考虑了两种分类维度：一是基于语义或算法类型（例如将所有的网格搜索算法归为一类），二是基于依赖包（例如将来自 `gfo` 包的所有算法归为一类）。具体的实现细节已在相关 Pull Request 中完成并合并，用户可以参考最新的代码库结构了解具体分类。","https:\u002F\u002Fgithub.com\u002Fhyperactive-project\u002FHyperactive\u002Fissues\u002F126",[156,161,166,171,176,181,186,191,196,201,206,210,215,220,225,230,235,240,245,250],{"id":157,"version":158,"summary_zh":159,"released_at":160},324525,"v5.0.4","## 变更内容\n* [BUG] 修复 _score_params + 添加测试 + 修复被 'try... 
except' 块掩盖的错误，由 @SimonBlanke 在 https:\u002F\u002Fgithub.com\u002Fhyperactive-project\u002FHyperactive\u002Fpull\u002F237 中完成\n* [DOC] 在 README 和文档中添加缺失的 PyTorch 条目，由 @Abhishek9639 在 https:\u002F\u002Fgithub.com\u002Fhyperactive-project\u002FHyperactive\u002Fpull\u002F239 中完成\n\n## 新贡献者\n* @Abhishek9639 在 https:\u002F\u002Fgithub.com\u002Fhyperactive-project\u002FHyperactive\u002Fpull\u002F239 中完成了首次贡献\n\n**完整变更日志**: https:\u002F\u002Fgithub.com\u002Fhyperactive-project\u002FHyperactive\u002Fcompare\u002Fv5.0.3...v5.0.4","2026-03-06T06:42:27",{"id":162,"version":163,"summary_zh":164,"released_at":165},324526,"v5.0.3","## 变更内容\n* [DOC] 由 @fkiraly 在 https:\u002F\u002Fgithub.com\u002Fhyperactive-project\u002FHyperactive\u002Fpull\u002F193 中添加的估算器与 AI 工具箱集成示例\n* [ENH] 由 @fkiraly 在 https:\u002F\u002Fgithub.com\u002Fhyperactive-project\u002FHyperactive\u002Fpull\u002F200 中实现的 `skpro` 集成\n* 由 @amitsubhashchejara 在 https:\u002F\u002Fgithub.com\u002Fhyperactive-project\u002FHyperactive\u002Fpull\u002F203 中添加的 PyTorch Lightning 集成\n* [MNT] 由 @fkiraly 在 https:\u002F\u002Fgithub.com\u002Fhyperactive-project\u002FHyperactive\u002Fpull\u002F202 中完成的 Python 3.14 兼容性更新及对 Python 3.9 的生命周期结束支持\n* 由 @dependabot[bot] 在 https:\u002F\u002Fgithub.com\u002Fhyperactive-project\u002FHyperactive\u002Fpull\u002F206 中将 pytest 从 8.4.2 升级至 9.0.1\n* 由 @dependabot[bot] 在 https:\u002F\u002Fgithub.com\u002Fhyperactive-project\u002FHyperactive\u002Fpull\u002F218 中将 pytest 从 9.0.1 升级至 9.0.2\n\n## 新贡献者\n* @amitsubhashchejara 在 https:\u002F\u002Fgithub.com\u002Fhyperactive-project\u002FHyperactive\u002Fpull\u002F203 中完成了首次贡献\n\n**完整变更日志**: https:\u002F\u002Fgithub.com\u002Fhyperactive-project\u002FHyperactive\u002Fcompare\u002Fv5.0.2...v5.0.3","2025-12-15T06:05:54",{"id":167,"version":168,"summary_zh":169,"released_at":170},324527,"v5.0.2","## 变更内容\n* [文档] 对 README 的小幅改进：logo 背景改为白色，链接表格格式化，由 @fkiraly 在 https:\u002F\u002Fgithub.com\u002FSimonBlanke\u002FHyperactive\u002Fpull\u002F186 中完成\n* 添加 Optuna 
的可选导入，由 @SimonBlanke 在 https:\u002F\u002Fgithub.com\u002FSimonBlanke\u002FHyperactive\u002Fpull\u002F187 中完成\n* [修复] 修复 `TSCOptCV` 与指标函数输入的集成问题，由 @fkiraly 在 https:\u002F\u002Fgithub.com\u002FSimonBlanke\u002FHyperactive\u002Fpull\u002F190 中完成\n\n\n**完整变更日志**：https:\u002F\u002Fgithub.com\u002FSimonBlanke\u002FHyperactive\u002Fcompare\u002Fv5.0.1...v5.0.2","2025-09-21T06:34:51",{"id":172,"version":173,"summary_zh":174,"released_at":175},324528,"v5.0.0","## 变更内容\n* 由 @dependabot[bot] 在 https:\u002F\u002Fgithub.com\u002FSimonBlanke\u002FHyperactive\u002Fpull\u002F84 中将 pytest 从 7.4.4 升级至 8.3.2\n* @SimonBlanke 在 https:\u002F\u002Fgithub.com\u002FSimonBlanke\u002FHyperactive\u002Fpull\u002F87 中添加了 sklearn 集成的原型\n* @SimonBlanke 在 https:\u002F\u002Fgithub.com\u002FSimonBlanke\u002FHyperactive\u002Fpull\u002F89 中实现了 sklearn 最优参数得分的功能与修复\n* 由 @dependabot[bot] 在 https:\u002F\u002Fgithub.com\u002FSimonBlanke\u002FHyperactive\u002Fpull\u002F91 中将 pytest 从 8.3.2 升级至 8.3.3\n* [BUG] @fkiraly 在 https:\u002F\u002Fgithub.com\u002FSimonBlanke\u002FHyperactive\u002Fpull\u002F97 中修复了 `sklearn` 适配器目标函数中未使用 `params` 的问题\n* 由 @dependabot[bot] 在 https:\u002F\u002Fgithub.com\u002FSimonBlanke\u002FHyperactive\u002Fpull\u002F98 中将 pytest 从 8.3.3 升级至 8.3.4\n* 由 @dependabot[bot] 在 https:\u002F\u002Fgithub.com\u002FSimonBlanke\u002FHyperactive\u002Fpull\u002F99 中将 scikit-learn 从 1.5.2 升级至 1.6.0\n* 由 @dependabot[bot] 在 https:\u002F\u002Fgithub.com\u002FSimonBlanke\u002FHyperactive\u002Fpull\u002F100 中将 scikit-learn 从 1.6.0 升级至 1.6.1\n* 由 @dependabot[bot] 在 https:\u002F\u002Fgithub.com\u002FSimonBlanke\u002FHyperactive\u002Fpull\u002F104 中将 pytest 从 8.3.4 升级至 8.3.5\n* @MekongDelta-mind 在 https:\u002F\u002Fgithub.com\u002FSimonBlanke\u002FHyperactive\u002Fpull\u002F109 中更新了 README.md\n* ⚡️ @misrasaurabh1 在 https:\u002F\u002Fgithub.com\u002FSimonBlanke\u002FHyperactive\u002Fpull\u002F106 中将函数 `run_search` 的速度提升了 8%\n* [MNT] @fkiraly 在 
https:\u002F\u002Fgithub.com\u002FSimonBlanke\u002FHyperactive\u002Fpull\u002F111 中设置了 CI 作业的最大并发数\n* [MNT] @fkiraly 在 https:\u002F\u002Fgithub.com\u002FSimonBlanke\u002FHyperactive\u002Fpull\u002F114 中简化了 CI 规范\n* [DOC] @fkiraly 在 https:\u002F\u002Fgithub.com\u002FSimonBlanke\u002FHyperactive\u002Fpull\u002F118 中完成了 `Hyperactive.search_data` 的文档字符串\n* @SimonBlanke 在 https:\u002F\u002Fgithub.com\u002FSimonBlanke\u002FHyperactive\u002Fpull\u002F119 中跳过了多进程处理\n* [MNT] @fkiraly 在 https:\u002F\u002Fgithub.com\u002FSimonBlanke\u002FHyperactive\u002Fpull\u002F116 中跳过了在 Windows 上会挂起的 `multiprocessing` 测试\n* @fkiraly 在 https:\u002F\u002Fgithub.com\u002FSimonBlanke\u002FHyperactive\u002Fpull\u002F110 中对 V5 API 进行了重构，统一了优化器和实验的 API\n* [DOC] @fkiraly 在 https:\u002F\u002Fgithub.com\u002FSimonBlanke\u002FHyperactive\u002Fpull\u002F124 中为包含 TeX 反斜杠的文档字符串添加了 `r` 前缀\n* ⚡️ @misrasaurabh1 在 https:\u002F\u002Fgithub.com\u002FSimonBlanke\u002FHyperactive\u002Fpull\u002F105 中将函数 `gfo2hyper` 的速度提升了 15%\n* [ENH] @fkiraly 在 https:\u002F\u002Fgithub.com\u002FSimonBlanke\u002FHyperactive\u002Fpull\u002F121 中提供了 `gfo` 适配器的原型，并增加了更多爬山类算法\n* [MNT] @fkiraly 在 https:\u002F\u002Fgithub.com\u002FSimonBlanke\u002FHyperactive\u002Fpull\u002F133 中将 `scikit-learn` 的版本限制在 1.7.0 以下\n* [MNT] @fkiraly 在 https:\u002F\u002Fgithub.com\u002FSimonBlanke\u002FHyperactive\u002Fpull\u002F132 中进行了不带额外依赖的测试\n* [ENH] @fkiraly 在 https:\u002F\u002Fgithub.com\u002FSimonBlanke\u002FHyperactive\u002Fpull\u002F130 中添加了快速测试工具 `check_estimator`\n* 由 @dependabot[bot] 在 https:\u002F\u002Fgithub.com\u002FSimonBlanke\u002FHyperactive\u002Fpull\u002F129 中将 pytest 从 8.3.5 升级至 8.4.0\n* [DOC] @fkiraly 在 https:\u002F\u002Fgithub.com\u002FSimonBlanke\u002FHyperactive\u002Fpull\u002F120 中修复了 README 中损坏的徽章\n* @SimonBlanke 在 https:\u002F\u002Fgithub.com\u002FSimonBlanke\u002FHyperactive\u002Fpull\u002F127 中添加了来自 GFO 的优化算法\n* 由 
@dependabot[","2025-09-21T06:34:07",{"id":177,"version":178,"summary_zh":179,"released_at":180},324529,"v4.8.0","  - 添加对 NumPy 2.x 的支持\n  - 添加对 Pandas 2.x 的支持\n  - 添加对 Python 3.12 的支持\n  - 将 setup.py 迁移到 pyproject.toml\n  - 将项目结构改为 src 目录布局","2024-08-14T15:06:05",{"id":182,"version":183,"summary_zh":184,"released_at":185},324530,"v4.7.0","- 添加遗传算法优化器\r\n- 添加差分进化优化器","2024-07-29T13:03:18",{"id":187,"version":188,"summary_zh":189,"released_at":190},324531,"v4.6.0","添加对约束优化的支持","2023-11-01T10:40:28",{"id":192,"version":193,"summary_zh":194,"released_at":195},324532,"v4.5.0","- 为自定义优化策略添加早停功能\n- 在命令行结果中显示目标函数的额外输出\n- 为 Hyperactive API 添加类型提示\n- 为新功能添加测试\n- 添加 verbosity=False 的测试","2023-08-27T10:59:47",{"id":197,"version":198,"summary_zh":199,"released_at":200},324533,"v4.4.0","- 添加新功能：“优化策略”\n- 重新设计进度条","2023-03-01T11:44:32",{"id":202,"version":203,"summary_zh":204,"released_at":205},324534,"4.3","- 添加来自 [GFO](https:\u002F\u002Fgithub.com\u002FSimonBlanke\u002FGradient-Free-Optimizers) 的新功能\n  - 添加螺旋优化算法\n  - 添加利普希茨优化算法\n  - 添加 DIRECT 优化算法\n  - 打印随机种子以保证结果可复现","2022-11-18T15:49:51",{"id":207,"version":208,"summary_zh":208,"released_at":209},324535,"v4.0.0","2021-12-01T14:54:47",{"id":211,"version":212,"summary_zh":213,"released_at":214},324536,"v3.2.4","Changes from v3.0.0 -> v3.2.4:\r\n\r\n- Decouple number of runs from active processes (Thanks to [PartiallyTyped](https:\u002F\u002Fgithub.com\u002FPartiallyTyped)). 
This reduces memory load if number of jobs is huge\r\n- New feature: The progress board enables the user to monitor the optimization progress during the run.\r\n  - Display trend of best score\r\n  - Plot parameters and score in parallel coordinates\r\n  - Generate filter file to define an upper and\u002For lower bound for all parameters and the score in the parallel coordinate plot\r\n  - List parameters of 5 best scores\r\n- add Python 3.8 to tests\r\n- add warnings if search space values do not contain lists\r\n- improve stability of result-methods\r\n- add tests for hyperactive-memory + search spaces\r\n","2021-07-07T12:40:57",{"id":216,"version":217,"summary_zh":218,"released_at":219},324537,"v2.3.0","- add Tree-structured optimization algorithm (idea from Hyperopt)\r\n- add Decision-tree optimization algorithm (idea from sklearn)\r\n- enable new optimization parameters for bayes-opt:\r\n  - max_sample_size: maximum number of samples for the gaussian-process-reg to train on. Sampling done by random choice.\r\n  - skip_retrain: skips the retraining of the gaussian-process-reg sometimes during the optimization run. Basically returns multiple predictions for the next output (which should be far apart from one another)","2020-07-16T10:23:46",{"id":221,"version":222,"summary_zh":223,"released_at":224},324538,"v2.1.0","- first stable implementation of \"long-term-memory\" to save\u002Fload search positions\u002Fparameters and results.\r\n- enable warm start of sequence based optimizers (bayesian opt, ...) with results from \"long-term-memory\"\r\n- enable the usage of other gaussian-process-regressors than from sklearn. GPR-class (from gpy, GPflow, ...) can be passed to \"optimizer\"-kwarg ","2020-07-16T10:15:10",{"id":226,"version":227,"summary_zh":228,"released_at":229},324539,"v2.0.0","API-change to improve usage. Class accepts training data. 
\"search\"-method accepts search_config and other optimization-run specific arguments like n_iter, n_jobs, optimizer.","2020-07-16T10:09:25",{"id":231,"version":232,"summary_zh":233,"released_at":234},324540,"v1.1.1","- small api-change\r\n- extend progress bar information\r\n- re-enable multiprocessing for new api","2019-10-08T17:31:37",{"id":236,"version":237,"summary_zh":238,"released_at":239},324541,"v1.0.0","- new API that creates model by function and search space by dict\r\n- enables more flexible usage (e.g. free use of framework, ensembles, nn-structure)\r\n- 100% test coverage\r\n","2019-09-25T07:11:48",{"id":241,"version":242,"summary_zh":243,"released_at":244},324542,"v0.4.2","- performance fixes for bayesian optimization and parallel tempering\r\n- better default parameter for most optimizers\r\n- better implementation for metrics\r\n- add support for catboost\r\n- integration of meta-learn code into hyperactive\r\n- cleanup to avoid similar code","2019-09-09T13:24:58",{"id":246,"version":247,"summary_zh":248,"released_at":249},324543,"v0.4.1.2","- k-fold-cross validation works with keras models\r\n- a cv of \u003C 1 trains the model on a fraction of the training data and tests on the rest\r\n- better testing and code-coverage\r\n- fix of score and predict method\r\n","2019-07-31T06:41:38",{"id":251,"version":252,"summary_zh":253,"released_at":254},324544,"v0.4.0","- improvement of optimizer class structure\r\n- lower memory usage\r\n- add testing of optimization process\r\n- a lot of clean up und several bug fixes (mostly parallel tempering)","2019-07-23T17:12:53"]