[{"data":1,"prerenderedAt":-1},["ShallowReactive",2],{"similar-optuna--optuna":3,"tool-optuna--optuna":61},[4,18,26,36,44,53],{"id":5,"name":6,"github_repo":7,"description_zh":8,"stars":9,"difficulty_score":10,"last_commit_at":11,"category_tags":12,"status":17},4358,"openclaw","openclaw\u002Fopenclaw","OpenClaw 是一款专为个人打造的本地化 AI 助手，旨在让你在自己的设备上拥有完全可控的智能伙伴。它打破了传统 AI 助手局限于特定网页或应用的束缚，能够直接接入你日常使用的各类通讯渠道，包括微信、WhatsApp、Telegram、Discord、iMessage 等数十种平台。无论你在哪个聊天软件中发送消息，OpenClaw 都能即时响应，甚至支持在 macOS、iOS 和 Android 设备上进行语音交互，并提供实时的画布渲染功能供你操控。\n\n这款工具主要解决了用户对数据隐私、响应速度以及“始终在线”体验的需求。通过将 AI 部署在本地，用户无需依赖云端服务即可享受快速、私密的智能辅助，真正实现了“你的数据，你做主”。其独特的技术亮点在于强大的网关架构，将控制平面与核心助手分离，确保跨平台通信的流畅性与扩展性。\n\nOpenClaw 非常适合希望构建个性化工作流的技术爱好者、开发者，以及注重隐私保护且不愿被单一生态绑定的普通用户。只要具备基础的终端操作能力（支持 macOS、Linux 及 Windows WSL2），即可通过简单的命令行引导完成部署。如果你渴望拥有一个懂你",349277,3,"2026-04-06T06:32:30",[13,14,15,16],"Agent","开发框架","图像","数据工具","ready",{"id":19,"name":20,"github_repo":21,"description_zh":22,"stars":23,"difficulty_score":10,"last_commit_at":24,"category_tags":25,"status":17},3808,"stable-diffusion-webui","AUTOMATIC1111\u002Fstable-diffusion-webui","stable-diffusion-webui 是一个基于 Gradio 构建的网页版操作界面，旨在让用户能够轻松地在本地运行和使用强大的 Stable Diffusion 图像生成模型。它解决了原始模型依赖命令行、操作门槛高且功能分散的痛点，将复杂的 AI 绘图流程整合进一个直观易用的图形化平台。\n\n无论是希望快速上手的普通创作者、需要精细控制画面细节的设计师，还是想要深入探索模型潜力的开发者与研究人员，都能从中获益。其核心亮点在于极高的功能丰富度：不仅支持文生图、图生图、局部重绘（Inpainting）和外绘（Outpainting）等基础模式，还独创了注意力机制调整、提示词矩阵、负向提示词以及“高清修复”等高级功能。此外，它内置了 GFPGAN 和 CodeFormer 等人脸修复工具，支持多种神经网络放大算法，并允许用户通过插件系统无限扩展能力。即使是显存有限的设备，stable-diffusion-webui 也提供了相应的优化选项，让高质量的 AI 艺术创作变得触手可及。",162132,"2026-04-05T11:01:52",[14,15,13],{"id":27,"name":28,"github_repo":29,"description_zh":30,"stars":31,"difficulty_score":32,"last_commit_at":33,"category_tags":34,"status":17},1381,"everything-claude-code","affaan-m\u002Feverything-claude-code","everything-claude-code 是一套专为 AI 编程助手（如 Claude Code、Codex、Cursor 等）打造的高性能优化系统。它不仅仅是一组配置文件，而是一个经过长期实战打磨的完整框架，旨在解决 AI 
代理在实际开发中面临的效率低下、记忆丢失、安全隐患及缺乏持续学习能力等核心痛点。\n\n通过引入技能模块化、直觉增强、记忆持久化机制以及内置的安全扫描功能，everything-claude-code 能显著提升 AI 在复杂任务中的表现，帮助开发者构建更稳定、更智能的生产级 AI 代理。其独特的“研究优先”开发理念和针对 Token 消耗的优化策略，使得模型响应更快、成本更低，同时有效防御潜在的攻击向量。\n\n这套工具特别适合软件开发者、AI 研究人员以及希望深度定制 AI 工作流的技术团队使用。无论您是在构建大型代码库，还是需要 AI 协助进行安全审计与自动化测试，everything-claude-code 都能提供强大的底层支持。作为一个曾荣获 Anthropic 黑客大奖的开源项目，它融合了多语言支持与丰富的实战钩子（hooks），让 AI 真正成长为懂上",159636,2,"2026-04-17T23:33:34",[14,13,35],"语言模型",{"id":37,"name":38,"github_repo":39,"description_zh":40,"stars":41,"difficulty_score":32,"last_commit_at":42,"category_tags":43,"status":17},2271,"ComfyUI","Comfy-Org\u002FComfyUI","ComfyUI 是一款功能强大且高度模块化的视觉 AI 引擎，专为设计和执行复杂的 Stable Diffusion 图像生成流程而打造。它摒弃了传统的代码编写模式，采用直观的节点式流程图界面，让用户通过连接不同的功能模块即可构建个性化的生成管线。\n\n这一设计巧妙解决了高级 AI 绘图工作流配置复杂、灵活性不足的痛点。用户无需具备编程背景，也能自由组合模型、调整参数并实时预览效果，轻松实现从基础文生图到多步骤高清修复等各类复杂任务。ComfyUI 拥有极佳的兼容性，不仅支持 Windows、macOS 和 Linux 全平台，还广泛适配 NVIDIA、AMD、Intel 及苹果 Silicon 等多种硬件架构，并率先支持 SDXL、Flux、SD3 等前沿模型。\n\n无论是希望深入探索算法潜力的研究人员和开发者，还是追求极致创作自由度的设计师与资深 AI 绘画爱好者，ComfyUI 都能提供强大的支持。其独特的模块化架构允许社区不断扩展新功能，使其成为当前最灵活、生态最丰富的开源扩散模型工具之一，帮助用户将创意高效转化为现实。",108322,"2026-04-10T11:39:34",[14,15,13],{"id":45,"name":46,"github_repo":47,"description_zh":48,"stars":49,"difficulty_score":32,"last_commit_at":50,"category_tags":51,"status":17},6121,"gemini-cli","google-gemini\u002Fgemini-cli","gemini-cli 是一款由谷歌推出的开源 AI 命令行工具，它将强大的 Gemini 大模型能力直接集成到用户的终端环境中。对于习惯在命令行工作的开发者而言，它提供了一条从输入提示词到获取模型响应的最短路径，无需切换窗口即可享受智能辅助。\n\n这款工具主要解决了开发过程中频繁上下文切换的痛点，让用户能在熟悉的终端界面内直接完成代码理解、生成、调试以及自动化运维任务。无论是查询大型代码库、根据草图生成应用，还是执行复杂的 Git 操作，gemini-cli 都能通过自然语言指令高效处理。\n\n它特别适合广大软件工程师、DevOps 人员及技术研究人员使用。其核心亮点包括支持高达 100 万 token 的超长上下文窗口，具备出色的逻辑推理能力；内置 Google 搜索、文件操作及 Shell 命令执行等实用工具；更独特的是，它支持 MCP（模型上下文协议），允许用户灵活扩展自定义集成，连接如图像生成等外部能力。此外，个人谷歌账号即可享受免费的额度支持，且项目基于 Apache 2.0 
协议完全开源，是提升终端工作效率的理想助手。",100752,"2026-04-10T01:20:03",[52,13,15,14],"插件",{"id":54,"name":55,"github_repo":56,"description_zh":57,"stars":58,"difficulty_score":32,"last_commit_at":59,"category_tags":60,"status":17},4721,"markitdown","microsoft\u002Fmarkitdown","MarkItDown 是一款由微软 AutoGen 团队打造的轻量级 Python 工具，专为将各类文件高效转换为 Markdown 格式而设计。它支持 PDF、Word、Excel、PPT、图片（含 OCR）、音频（含语音转录）、HTML 乃至 YouTube 链接等多种格式的解析，能够精准提取文档中的标题、列表、表格和链接等关键结构信息。\n\n在人工智能应用日益普及的今天，大语言模型（LLM）虽擅长处理文本，却难以直接读取复杂的二进制办公文档。MarkItDown 恰好解决了这一痛点，它将非结构化或半结构化的文件转化为模型“原生理解”且 Token 效率极高的 Markdown 格式，成为连接本地文件与 AI 分析 pipeline 的理想桥梁。此外，它还提供了 MCP（模型上下文协议）服务器，可无缝集成到 Claude Desktop 等 LLM 应用中。\n\n这款工具特别适合开发者、数据科学家及 AI 研究人员使用，尤其是那些需要构建文档检索增强生成（RAG）系统、进行批量文本分析或希望让 AI 助手直接“阅读”本地文件的用户。虽然生成的内容也具备一定可读性，但其核心优势在于为机器",93400,"2026-04-06T19:52:38",[52,14],{"id":62,"github_repo":63,"name":64,"description_en":65,"description_zh":66,"ai_summary_zh":66,"readme_en":67,"readme_zh":68,"quickstart_zh":69,"use_case_zh":70,"hero_image_url":71,"owner_login":64,"owner_name":64,"owner_avatar_url":72,"owner_bio":73,"owner_company":74,"owner_location":74,"owner_email":74,"owner_twitter":74,"owner_website":74,"owner_url":75,"languages":76,"stars":85,"forks":86,"last_commit_at":87,"license":88,"difficulty_score":89,"env_os":90,"env_gpu":91,"env_ram":90,"env_deps":92,"category_tags":102,"github_topics":103,"view_count":32,"oss_zip_url":74,"oss_zip_packed_at":74,"status":17,"created_at":109,"updated_at":110,"faqs":111,"releases":142},8776,"optuna\u002Foptuna","optuna","A hyperparameter optimization framework","Optuna 是一款专为机器学习打造的自动超参数优化框架。在训练 AI 模型时，手动调整学习率、网络层数等超参数不仅耗时费力，还难以找到最优组合，而 Optuna 正是为了解决这一痛点而生。它能自动搜索并锁定最佳参数配置，显著提升模型性能与开发效率。\n\n这款工具特别适合机器学习开发者、数据科学家及算法研究人员使用。无论是初学者还是资深专家，都能通过 Optuna 简化繁琐的调参工作，将更多精力投入到模型架构设计与业务逻辑中。\n\nOptuna 的核心亮点在于其独特的“定义即运行”（define-by-run）API 设计。不同于传统静态配置方式，它允许用户使用标准的 Python 代码（包括条件判断和循环）动态构建搜索空间，极大地提高了代码的模块化程度与灵活性。此外，Optuna 架构轻量、跨平台兼容性强，并内置了多种前沿的高效优化算法，支持多目标优化与约束处理。只需简单安装，即可轻松集成到现有的 Python 项目中，帮助团队快速实现自动化调优。","\u003Cdiv 
align=\"center\">\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Foptuna_optuna_readme_78924362ffd9.png\" width=\"800\"\u002F>\u003C\u002Fdiv>\n\n# Optuna: A hyperparameter optimization framework\n\n[![Python](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002Fpython-3.9%20%7C%203.10%20%7C%203.11%20%7C%203.12%20%7C%203.13%20%7C%203.14-blue)](https:\u002F\u002Fwww.python.org)\n[![pypi](https:\u002F\u002Fimg.shields.io\u002Fpypi\u002Fv\u002Foptuna.svg)](https:\u002F\u002Fpypi.python.org\u002Fpypi\u002Foptuna)\n[![conda](https:\u002F\u002Fimg.shields.io\u002Fconda\u002Fvn\u002Fconda-forge\u002Foptuna.svg)](https:\u002F\u002Fanaconda.org\u002Fconda-forge\u002Foptuna)\n[![GitHub license](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002Flicense-MIT-blue.svg)](https:\u002F\u002Fgithub.com\u002Foptuna\u002Foptuna)\n[![Read the Docs](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Foptuna_optuna_readme_13d664e1afd7.png)](https:\u002F\u002Foptuna.readthedocs.io\u002Fen\u002Fstable\u002F)\n\n:link: [**Website**](https:\u002F\u002Foptuna.org\u002F)\n| :page_with_curl: [**Docs**](https:\u002F\u002Foptuna.readthedocs.io\u002Fen\u002Fstable\u002F)\n| :gear: [**Install Guide**](https:\u002F\u002Foptuna.readthedocs.io\u002Fen\u002Fstable\u002Finstallation.html)\n| :pencil: [**Tutorial**](https:\u002F\u002Foptuna.readthedocs.io\u002Fen\u002Fstable\u002Ftutorial\u002Findex.html)\n| :bulb: [**Examples**](https:\u002F\u002Fgithub.com\u002Foptuna\u002Foptuna-examples)\n| [**Twitter**](https:\u002F\u002Ftwitter.com\u002FOptunaAutoML)\n| [**LinkedIn**](https:\u002F\u002Fwww.linkedin.com\u002Fshowcase\u002Foptuna\u002F)\n| [**Medium**](https:\u002F\u002Fmedium.com\u002Foptuna)\n\n*Optuna* is an automatic hyperparameter optimization software framework, particularly designed\nfor machine learning. It features an imperative, *define-by-run* style user API. 
Thanks to our\n*define-by-run* API, the code written with Optuna enjoys high modularity, and the user of\nOptuna can dynamically construct the search spaces for the hyperparameters.\n\n## :loudspeaker: News\nHelp us create the next version of Optuna!\n\nOptuna 5.0 Roadmap published for review. Please take a look at [the planned improvements to Optuna](https:\u002F\u002Fmedium.com\u002Foptuna\u002Foptuna-v5-roadmap-ac7d6935a878), and share your feedback in [the github issues](https:\u002F\u002Fgithub.com\u002Foptuna\u002Foptuna\u002Flabels\u002Fv5). PR contributions also welcome!\n\nPlease take a few minutes to fill in [this survey](https:\u002F\u002Fforms.gle\u002FwVwLCQ9g6st6AXuq9), and let us know how you use Optuna now and what improvements you'd like.🤔\nAll questions are optional. 🙇‍♂️\n\n\u003C!-- TODO: when you add a new line, please delete the oldest line -->\n* **Mar 16, 2026**: Optuna 4.8.0 is out! Check out [the release note](https:\u002F\u002Fgithub.com\u002Foptuna\u002Foptuna\u002Freleases\u002Ftag\u002Fv4.8.0) for details.\n* **Jan 19, 2026**: Optuna 4.7.0 is out! Check out [the release note](https:\u002F\u002Fgithub.com\u002Foptuna\u002Foptuna\u002Freleases\u002Ftag\u002Fv4.7.0) for details.\n* **Nov 10, 2025**: A new article [Announcing Optuna 4.6](https:\u002F\u002Fmedium.com\u002Foptuna\u002Fannouncing-optuna-4-6-a9e82183ab07) has been published.\n* **Oct 28, 2025**: A new article [AutoSampler: Full Support for Multi-Objective & Constrained Optimization](https:\u002F\u002Fmedium.com\u002Foptuna\u002Fautosampler-full-support-for-multi-objective-constrained-optimization-c1c4fc957ba2) has been published.\n* **Sep 22, 2025**: A new article [[Optuna v4.5] Gaussian Process-Based Sampler (GPSampler) Can Now Perform Constrained Multi-Objective Optimization](https:\u002F\u002Fmedium.com\u002Foptuna\u002Foptuna-v4-5-81e78d8e077a) has been published.\n* **Jun 16, 2025**: Optuna 4.4.0 has been released! 
Check out [the release blog](https:\u002F\u002Fmedium.com\u002Foptuna\u002Fannouncing-optuna-4-4-ece661493126).\n\n## :fire: Key Features\n\nOptuna has modern functionalities as follows:\n\n- [Lightweight, versatile, and platform agnostic architecture](https:\u002F\u002Foptuna.readthedocs.io\u002Fen\u002Fstable\u002Ftutorial\u002F10_key_features\u002F001_first.html)\n  - Handle a wide variety of tasks with a simple installation that has few requirements.\n- [Pythonic search spaces](https:\u002F\u002Foptuna.readthedocs.io\u002Fen\u002Fstable\u002Ftutorial\u002F10_key_features\u002F002_configurations.html)\n  - Define search spaces using familiar Python syntax including conditionals and loops.\n- [Efficient optimization algorithms](https:\u002F\u002Foptuna.readthedocs.io\u002Fen\u002Fstable\u002Ftutorial\u002F10_key_features\u002F003_efficient_optimization_algorithms.html)\n  - Adopt state-of-the-art algorithms for sampling hyperparameters and efficiently pruning unpromising trials.\n- [Easy parallelization](https:\u002F\u002Foptuna.readthedocs.io\u002Fen\u002Fstable\u002Ftutorial\u002F10_key_features\u002F004_distributed.html)\n  - Scale studies to tens or hundreds of workers with little or no changes to the code.\n- [Quick visualization](https:\u002F\u002Foptuna.readthedocs.io\u002Fen\u002Fstable\u002Ftutorial\u002F10_key_features\u002F005_visualization.html)\n  - Inspect optimization histories from a variety of plotting functions.\n\n\n## Basic Concepts\n\nWe use the terms *study* and *trial* as follows:\n\n- Study: optimization based on an objective function\n- Trial: a single execution of the objective function\n\nPlease refer to the sample code below. The goal of a *study* is to find out the optimal set of\nhyperparameter values (e.g., `regressor` and `svr_c`) through multiple *trials* (e.g.,\n`n_trials=100`). 
Optuna is a framework designed for automation and acceleration of\noptimization *studies*.\n\n\u003Cdetails open>\n\u003Csummary>Sample code with scikit-learn\u003C\u002Fsummary>\n\n[![Open in Colab](https:\u002F\u002Fcolab.research.google.com\u002Fassets\u002Fcolab-badge.svg)](http:\u002F\u002Fcolab.research.google.com\u002Fgithub\u002Foptuna\u002Foptuna-examples\u002Fblob\u002Fmain\u002Fquickstart.ipynb)\n\n```python\nimport optuna\nimport sklearn\n\n\n# Define an objective function to be minimized.\ndef objective(trial):\n\n    # Invoke suggest methods of a Trial object to generate hyperparameters.\n    regressor_name = trial.suggest_categorical(\"regressor\", [\"SVR\", \"RandomForest\"])\n    if regressor_name == \"SVR\":\n        svr_c = trial.suggest_float(\"svr_c\", 1e-10, 1e10, log=True)\n        regressor_obj = sklearn.svm.SVR(C=svr_c)\n    else:\n        rf_max_depth = trial.suggest_int(\"rf_max_depth\", 2, 32)\n        regressor_obj = sklearn.ensemble.RandomForestRegressor(max_depth=rf_max_depth)\n\n    X, y = sklearn.datasets.fetch_california_housing(return_X_y=True)\n    X_train, X_val, y_train, y_val = sklearn.model_selection.train_test_split(X, y, random_state=0)\n\n    regressor_obj.fit(X_train, y_train)\n    y_pred = regressor_obj.predict(X_val)\n\n    error = sklearn.metrics.mean_squared_error(y_val, y_pred)\n\n    return error  # An objective value linked with the Trial object.\n\n\nstudy = optuna.create_study()  # Create a new study.\nstudy.optimize(objective, n_trials=100)  # Invoke optimization of the objective function.\n```\n\u003C\u002Fdetails>\n\n> [!NOTE]\n> More examples can be found in [optuna\u002Foptuna-examples](https:\u002F\u002Fgithub.com\u002Foptuna\u002Foptuna-examples).\n>\n> The examples cover diverse problem setups such as multi-objective optimization, constrained optimization, pruning, and distributed optimization.\n\n## Installation\n\nOptuna is available at [the Python Package 
Index](https:\u002F\u002Fpypi.org\u002Fproject\u002Foptuna\u002F) and on [Anaconda Cloud](https:\u002F\u002Fanaconda.org\u002Fconda-forge\u002Foptuna).\n\n```bash\n# PyPI\n$ pip install optuna\n```\n\n```bash\n# Anaconda Cloud\n$ conda install -c conda-forge optuna\n```\n\n> [!IMPORTANT]\n> Optuna supports Python 3.9 or newer.\n\n## Integrations\n\nOptuna has integration features with various third-party libraries. Integrations can be found in [optuna\u002Foptuna-integration](https:\u002F\u002Fgithub.com\u002Foptuna\u002Foptuna-integration) and the document is available [here](https:\u002F\u002Foptuna-integration.readthedocs.io\u002Fen\u002Fstable\u002Findex.html).\n\n\u003Cdetails>\n\u003Csummary>Supported integration libraries\u003C\u002Fsummary>\n\n* [Catboost](https:\u002F\u002Fgithub.com\u002Foptuna\u002Foptuna-examples\u002Ftree\u002Fmain\u002Fcatboost\u002Fcatboost_pruning.py)\n* [Dask](https:\u002F\u002Fgithub.com\u002Foptuna\u002Foptuna-examples\u002Ftree\u002Fmain\u002Fdask\u002Fdask_simple.py)\n* [fastai](https:\u002F\u002Fgithub.com\u002Foptuna\u002Foptuna-examples\u002Ftree\u002Fmain\u002Ffastai\u002Ffastai_simple.py)\n* [Keras](https:\u002F\u002Fgithub.com\u002Foptuna\u002Foptuna-examples\u002Ftree\u002Fmain\u002Fkeras\u002Fkeras_integration.py)\n* [LightGBM](https:\u002F\u002Fgithub.com\u002Foptuna\u002Foptuna-examples\u002Ftree\u002Fmain\u002Flightgbm\u002Flightgbm_integration.py)\n* [MLflow](https:\u002F\u002Fgithub.com\u002Foptuna\u002Foptuna-examples\u002Ftree\u002Fmain\u002Fmlflow\u002Fkeras_mlflow.py)\n* [PyTorch](https:\u002F\u002Fgithub.com\u002Foptuna\u002Foptuna-examples\u002Ftree\u002Fmain\u002Fpytorch\u002Fpytorch_simple.py)\n* [PyTorch Ignite](https:\u002F\u002Fgithub.com\u002Foptuna\u002Foptuna-examples\u002Ftree\u002Fmain\u002Fpytorch\u002Fpytorch_ignite_simple.py)\n* [PyTorch Lightning](https:\u002F\u002Fgithub.com\u002Foptuna\u002Foptuna-examples\u002Ftree\u002Fmain\u002Fpytorch\u002Fpytorch_lightning_simple.py)\n* 
[TensorBoard](https:\u002F\u002Fgithub.com\u002Foptuna\u002Foptuna-examples\u002Ftree\u002Fmain\u002Ftensorboard\u002Ftensorboard_simple.py)\n* [TensorFlow](https:\u002F\u002Fgithub.com\u002Foptuna\u002Foptuna-examples\u002Ftree\u002Fmain\u002Ftensorflow\u002Ftensorflow_estimator_integration.py)\n* [tf.keras](https:\u002F\u002Fgithub.com\u002Foptuna\u002Foptuna-examples\u002Ftree\u002Fmain\u002Ftfkeras\u002Ftfkeras_integration.py)\n* [Weights & Biases](https:\u002F\u002Fgithub.com\u002Foptuna\u002Foptuna-examples\u002Ftree\u002Fmain\u002Fwandb\u002Fwandb_integration.py)\n* [XGBoost](https:\u002F\u002Fgithub.com\u002Foptuna\u002Foptuna-examples\u002Ftree\u002Fmain\u002Fxgboost\u002Fxgboost_integration.py)\n\u003C\u002Fdetails>\n\n## Web Dashboard\n\n[Optuna Dashboard](https:\u002F\u002Fgithub.com\u002Foptuna\u002Foptuna-dashboard) is a real-time web dashboard for Optuna.\nYou can check the optimization history, hyperparameter importance, etc. in graphs and tables.\nYou don't need to create a Python script to call [Optuna's visualization](https:\u002F\u002Foptuna.readthedocs.io\u002Fen\u002Fstable\u002Freference\u002Fvisualization\u002Findex.html) functions.\nFeature requests and bug reports are welcome!\n\n![optuna-dashboard](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Foptuna_optuna_readme_afdc8e41d027.gif)\n\n`optuna-dashboard` can be installed via pip:\n\n```shell\n$ pip install optuna-dashboard\n```\n\n> [!TIP]\n> Please check out the convenience of Optuna Dashboard using the sample code below.\n\n\u003Cdetails>\n\u003Csummary>Sample code to launch Optuna Dashboard\u003C\u002Fsummary>\n\nSave the following code as `optimize_toy.py`.\n\n```python\nimport optuna\n\n\ndef objective(trial):\n    x1 = trial.suggest_float(\"x1\", -100, 100)\n    x2 = trial.suggest_float(\"x2\", -100, 100)\n    return x1**2 + 0.01 * x2**2\n\n\nstudy = optuna.create_study(storage=\"sqlite:\u002F\u002F\u002Fdb.sqlite3\")  # Create a new study with 
database.\nstudy.optimize(objective, n_trials=100)\n```\n\nThen try the commands below:\n\n```shell\n# Run the study specified above\n$ python optimize_toy.py\n\n# Launch the dashboard based on the storage `sqlite:\u002F\u002F\u002Fdb.sqlite3`\n$ optuna-dashboard sqlite:\u002F\u002F\u002Fdb.sqlite3\n...\nListening on http:\u002F\u002Flocalhost:8080\u002F\nHit Ctrl-C to quit.\n```\n\n\u003C\u002Fdetails>\n\n\n## OptunaHub\n\n[OptunaHub](https:\u002F\u002Fhub.optuna.org\u002F) is a feature-sharing platform for Optuna.\nYou can use the registered features and publish your packages.\n\n### Use registered features\n\n`optunahub` can be installed via pip:\n\n```shell\n$ pip install optunahub\n# Install AutoSampler dependencies (CPU only is sufficient for PyTorch)\n$ pip install cmaes scipy torch --extra-index-url https:\u002F\u002Fdownload.pytorch.org\u002Fwhl\u002Fcpu\n```\n\nYou can load registered module with `optunahub.load_module`.\n\n```python\nimport optuna\nimport optunahub\n\n\ndef objective(trial: optuna.Trial) -> float:\n    x = trial.suggest_float(\"x\", -5, 5)\n    y = trial.suggest_float(\"y\", -5, 5)\n    return x**2 + y**2\n\n\nmodule = optunahub.load_module(package=\"samplers\u002Fauto_sampler\")\nstudy = optuna.create_study(sampler=module.AutoSampler())\nstudy.optimize(objective, n_trials=10)\n\nprint(study.best_trial.value, study.best_trial.params)\n```\n\nFor more details, please refer to [the optunahub documentation](https:\u002F\u002Foptuna.github.io\u002Foptunahub\u002F).\n\n### Publish your packages\n\nYou can publish your package via [optunahub-registry](https:\u002F\u002Fgithub.com\u002Foptuna\u002Foptunahub-registry).\nSee the [Tutorials for Contributors](https:\u002F\u002Foptuna.github.io\u002Foptunahub\u002Ftutorials_for_contributors.html) in OptunaHub.\n\n\n## Communication\n\n- [GitHub Discussions] for questions.\n- [GitHub Issues] for bug reports and feature requests.\n\n[GitHub Discussions]: 
https:\u002F\u002Fgithub.com\u002Foptuna\u002Foptuna\u002Fdiscussions\n[GitHub issues]: https:\u002F\u002Fgithub.com\u002Foptuna\u002Foptuna\u002Fissues\n\n\n## Contribution\n\nAny contributions to Optuna are more than welcome!\n\nIf you are new to Optuna, please check the [good first issues](https:\u002F\u002Fgithub.com\u002Foptuna\u002Foptuna\u002Flabels\u002Fgood%20first%20issue). They are relatively simple, well-defined, and often good starting points for you to get familiar with the contribution workflow and other developers.\n\nIf you already have contributed to Optuna, we recommend the other [contribution-welcome issues](https:\u002F\u002Fgithub.com\u002Foptuna\u002Foptuna\u002Flabels\u002Fcontribution-welcome).\n\nFor general guidelines on how to contribute to the project, take a look at [CONTRIBUTING.md](.\u002FCONTRIBUTING.md).\n\n\n## Reference\n\nIf you use Optuna in one of your research projects, please cite [our KDD paper](https:\u002F\u002Fdoi.org\u002F10.1145\u002F3292500.3330701) \"Optuna: A Next-generation Hyperparameter Optimization Framework\":\n\n\u003Cdetails open>\n\u003Csummary>BibTeX\u003C\u002Fsummary>\n\n```bibtex\n@inproceedings{akiba2019optuna,\n  title={{O}ptuna: A Next-Generation Hyperparameter Optimization Framework},\n  author={Akiba, Takuya and Sano, Shotaro and Yanase, Toshihiko and Ohta, Takeru and Koyama, Masanori},\n  booktitle={The 25th ACM SIGKDD International Conference on Knowledge Discovery \\& Data Mining},\n  pages={2623--2631},\n  year={2019}\n}\n```\n\u003C\u002Fdetails>\n\n\n## License\n\nMIT License (see [LICENSE](.\u002FLICENSE)).\n\nOptuna uses the codes from SciPy and fdlibm projects (see [LICENSE_THIRD_PARTY](.\u002FLICENSE_THIRD_PARTY)).\n","\u003Cdiv align=\"center\">\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Foptuna_optuna_readme_78924362ffd9.png\" width=\"800\"\u002F>\u003C\u002Fdiv>\n\n# 
Optuna：超参数优化框架\n\n[![Python](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002Fpython-3.9%20%7C%203.10%20%7C%203.11%20%7C%203.12%20%7C%203.13%20%7C%203.14-blue)](https:\u002F\u002Fwww.python.org)\n[![pypi](https:\u002F\u002Fimg.shields.io\u002Fpypi\u002Fv\u002Foptuna.svg)](https:\u002F\u002Fpypi.python.org\u002Fpypi\u002Foptuna)\n[![conda](https:\u002F\u002Fimg.shields.io\u002Fconda\u002Fvn\u002Fconda-forge\u002Foptuna.svg)](https:\u002F\u002Fanaconda.org\u002Fconda-forge\u002Foptuna)\n[![GitHub license](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002Flicense-MIT-blue.svg)](https:\u002F\u002Fgithub.com\u002Foptuna\u002Foptuna)\n[![Read the Docs](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Foptuna_optuna_readme_13d664e1afd7.png)](https:\u002F\u002Foptuna.readthedocs.io\u002Fen\u002Fstable\u002F)\n\n:link: [**官网**](https:\u002F\u002Foptuna.org\u002F)\n| :page_with_curl: [**文档**](https:\u002F\u002Foptuna.readthedocs.io\u002Fen\u002Fstable\u002F)\n| :gear: [**安装指南**](https:\u002F\u002Foptuna.readthedocs.io\u002Fen\u002Fstable\u002Finstallation.html)\n| :pencil: [**教程**](https:\u002F\u002Foptuna.readthedocs.io\u002Fen\u002Fstable\u002Ftutorial\u002Findex.html)\n| :bulb: [**示例**](https:\u002F\u002Fgithub.com\u002Foptuna\u002Foptuna-examples)\n| [**Twitter**](https:\u002F\u002Ftwitter.com\u002FOptunaAutoML)\n| [**LinkedIn**](https:\u002F\u002Fwww.linkedin.com\u002Fshowcase\u002Foptuna\u002F)\n| [**Medium**](https:\u002F\u002Fmedium.com\u002Foptuna)\n\n*Optuna* 是一个自动超参数优化软件框架，专为机器学习设计。它采用命令式的“定义即运行”（define-by-run）风格用户 API。得益于这种“定义即运行”的 API，使用 Optuna 编写的代码具有很高的模块化特性，用户可以动态地构建超参数的搜索空间。\n\n## :loudspeaker: 最新消息\n帮助我们打造 Optuna 的下一个版本！\n\nOptuna 5.0 路线图已发布，欢迎审阅。请查看 [Optuna 计划中的改进内容](https:\u002F\u002Fmedium.com\u002Foptuna\u002Foptuna-v5-roadmap-ac7d6935a878)，并在 [GitHub 问题页面](https:\u002F\u002Fgithub.com\u002Foptuna\u002Foptuna\u002Flabels\u002Fv5)上分享您的反馈。我们也欢迎 PR 贡献！\n\n请您抽出几分钟时间填写 [这份调查问卷](https:\u002F\u002Fforms.gle\u002FwVwLCQ9g6st6AXuq9)，告诉我们您目前如何使用 Optuna，以及您希望看到哪些改进。🤔 
全部问题均为可选。🙇‍♂️\n\n\u003C!-- TODO: 当添加新条目时，请删除最旧的一条 -->\n* **2026年3月16日**：Optuna 4.8.0 发布！详情请参阅 [发布说明](https:\u002F\u002Fgithub.com\u002Foptuna\u002Foptuna\u002Freleases\u002Ftag\u002Fv4.8.0)。\n* **2026年1月19日**：Optuna 4.7.0 发布！详情请参阅 [发布说明](https:\u002F\u002Fgithub.com\u002Foptuna\u002Foptuna\u002Freleases\u002Ftag\u002Fv4.7.0)。\n* **2025年11月10日**：一篇新文章 [宣布 Optuna 4.6](https:\u002F\u002Fmedium.com\u002Foptuna\u002Fannouncing-optuna-4-6-a9e82183ab07) 已发布。\n* **2025年10月28日**：一篇新文章 [AutoSampler：全面支持多目标与约束优化](https:\u002F\u002Fmedium.com\u002Foptuna\u002Fautosampler-full-support-for-multi-objective-constrained-optimization-c1c4fc957ba2) 已发布。\n* **2025年9月22日**：一篇新文章 [[Optuna v4.5] 基于高斯过程的采样器 (GPSampler) 现在可以进行约束型多目标优化](https:\u002F\u002Fmedium.com\u002Foptuna\u002Foptuna-v4-5-81e78d8e077a) 已发布。\n* **2025年6月16日**：Optuna 4.4.0 已发布！详情请参阅 [发布博客](https:\u002F\u002Fmedium.com\u002Foptuna\u002Fannouncing-optuna-4-4-ece661493126)。\n\n## :fire: 核心功能\n\nOptuna 具备以下现代化功能：\n\n- [轻量级、通用且平台无关的架构](https:\u002F\u002Foptuna.readthedocs.io\u002Fen\u002Fstable\u002Ftutorial\u002F10_key_features\u002F001_first.html)\n  - 通过简单的安装即可处理各种任务，依赖项极少。\n- [符合 Python 风格的搜索空间](https:\u002F\u002Foptuna.readthedocs.io\u002Fen\u002Fstable\u002Ftutorial\u002F10_key_features\u002F002_configurations.html)\n  - 使用熟悉的 Python 语法（包括条件语句和循环）定义搜索空间。\n- [高效的优化算法](https:\u002F\u002Foptuna.readthedocs.io\u002Fen\u002Fstable\u002Ftutorial\u002F10_key_features\u002F003_efficient_optimization_algorithms.html)\n  - 采用最先进的算法来采样超参数，并高效地剪枝无望的试验。\n- [易于并行化](https:\u002F\u002Foptuna.readthedocs.io\u002Fen\u002Fstable\u002Ftutorial\u002F10_key_features\u002F004_distributed.html)\n  - 只需对代码进行少量或无需修改，即可将研究扩展到数十甚至数百个工作者。\n- [快速可视化](https:\u002F\u002Foptuna.readthedocs.io\u002Fen\u002Fstable\u002Ftutorial\u002F10_key_features\u002F005_visualization.html)\n  - 通过多种绘图函数检查优化历史。\n\n## 基本概念\n\n我们使用以下术语：*study* 和 *trial*：\n\n- Study：基于目标函数的优化\n- Trial：目标函数的一次执行\n\n请参考下面的示例代码。*Study* 的目标是通过多次 *Trial*（例如 `n_trials=100`）找到最优的超参数组合（如 
`regressor` 和 `svr_c`）。Optuna 是一个旨在自动化和加速优化 *Study* 的框架。\n\n\u003Cdetails open>\n\u003Csummary>使用 scikit-learn 的示例代码\u003C\u002Fsummary>\n\n[![在 Colab 中打开](https:\u002F\u002Fcolab.research.google.com\u002Fassets\u002Fcolab-badge.svg)](http:\u002F\u002Fcolab.research.google.com\u002Fgithub\u002Foptuna\u002Foptuna-examples\u002Fblob\u002Fmain\u002Fquickstart.ipynb)\n\n```python\nimport optuna\nimport sklearn\n\n\n# 定义一个需要最小化的目标函数。\ndef objective(trial):\n\n    # 调用 Trial 对象的建议方法生成超参数。\n    regressor_name = trial.suggest_categorical(\"regressor\", [\"SVR\", \"RandomForest\"])\n    if regressor_name == \"SVR\":\n        svr_c = trial.suggest_float(\"svr_c\", 1e-10, 1e10, log=True)\n        regressor_obj = sklearn.svm.SVR(C=svr_c)\n    else:\n        rf_max_depth = trial.suggest_int(\"rf_max_depth\", 2, 32)\n        regressor_obj = sklearn.ensemble.RandomForestRegressor(max_depth=rf_max_depth)\n\n    X, y = sklearn.datasets.fetch_california_housing(return_X_y=True)\n    X_train, X_val, y_train, y_val = sklearn.model_selection.train_test_split(X, y, random_state=0)\n\n    regressor_obj.fit(X_train, y_train)\n    y_pred = regressor_obj.predict(X_val)\n\n    error = sklearn.metrics.mean_squared_error(y_val, y_pred)\n\n    return error  # 与 Trial 对象关联的目标值.\n\n\nstudy = optuna.create_study()  # 创建一个新的 study。\nstudy.optimize(objective, n_trials=100)  # 调用目标函数的优化。\n```\n\u003C\u002Fdetails>\n\n> [!NOTE]\n> 更多示例可在 [optuna\u002Foptuna-examples](https:\u002F\u002Fgithub.com\u002Foptuna\u002Foptuna-examples) 中找到。\n>\n> 这些示例涵盖了多种问题设置，如多目标优化、约束优化、剪枝以及分布式优化。\n\n## 安装\n\nOptuna 可在 [Python 包索引](https:\u002F\u002Fpypi.org\u002Fproject\u002Foptuna\u002F) 和 [Anaconda Cloud](https:\u002F\u002Fanaconda.org\u002Fconda-forge\u002Foptuna) 上获取。\n\n```bash\n# PyPI\n$ pip install optuna\n```\n\n```bash\n# Anaconda Cloud\n$ conda install -c conda-forge optuna\n```\n\n> [!IMPORTANT]\n> Optuna 支持 Python 3.9 及以上版本。\n\n## 集成\n\nOptuna 提供了与多种第三方库的集成功能。这些集成可以在 
[optuna\u002Foptuna-integration](https:\u002F\u002Fgithub.com\u002Foptuna\u002Foptuna-integration) 中找到，相关文档请参见 [这里](https:\u002F\u002Foptuna-integration.readthedocs.io\u002Fen\u002Fstable\u002Findex.html)。\n\n\u003Cdetails>\n\u003Csummary>支持的集成库\u003C\u002Fsummary>\n\n* [Catboost](https:\u002F\u002Fgithub.com\u002Foptuna\u002Foptuna-examples\u002Ftree\u002Fmain\u002Fcatboost\u002Fcatboost_pruning.py)\n* [Dask](https:\u002F\u002Fgithub.com\u002Foptuna\u002Foptuna-examples\u002Ftree\u002Fmain\u002Fdask\u002Fdask_simple.py)\n* [fastai](https:\u002F\u002Fgithub.com\u002Foptuna\u002Foptuna-examples\u002Ftree\u002Fmain\u002Ffastai\u002Ffastai_simple.py)\n* [Keras](https:\u002F\u002Fgithub.com\u002Foptuna\u002Foptuna-examples\u002Ftree\u002Fmain\u002Fkeras\u002Fkeras_integration.py)\n* [LightGBM](https:\u002F\u002Fgithub.com\u002Foptuna\u002Foptuna-examples\u002Ftree\u002Fmain\u002Flightgbm\u002Flightgbm_integration.py)\n* [MLflow](https:\u002F\u002Fgithub.com\u002Foptuna\u002Foptuna-examples\u002Ftree\u002Fmain\u002Fmlflow\u002Fkeras_mlflow.py)\n* [PyTorch](https:\u002F\u002Fgithub.com\u002Foptuna\u002Foptuna-examples\u002Ftree\u002Fmain\u002Fpytorch\u002Fpytorch_simple.py)\n* [PyTorch Ignite](https:\u002F\u002Fgithub.com\u002Foptuna\u002Foptuna-examples\u002Ftree\u002Fmain\u002Fpytorch\u002Fpytorch_ignite_simple.py)\n* [PyTorch Lightning](https:\u002F\u002Fgithub.com\u002Foptuna\u002Foptuna-examples\u002Ftree\u002Fmain\u002Fpytorch\u002Fpytorch_lightning_simple.py)\n* [TensorBoard](https:\u002F\u002Fgithub.com\u002Foptuna\u002Foptuna-examples\u002Ftree\u002Fmain\u002Ftensorboard\u002Ftensorboard_simple.py)\n* [TensorFlow](https:\u002F\u002Fgithub.com\u002Foptuna\u002Foptuna-examples\u002Ftree\u002Fmain\u002Ftensorflow\u002Ftensorflow_estimator_integration.py)\n* [tf.keras](https:\u002F\u002Fgithub.com\u002Foptuna\u002Foptuna-examples\u002Ftree\u002Fmain\u002Ftfkeras\u002Ftfkeras_integration.py)\n* [Weights & 
Biases](https:\u002F\u002Fgithub.com\u002Foptuna\u002Foptuna-examples\u002Ftree\u002Fmain\u002Fwandb\u002Fwandb_integration.py)\n* [XGBoost](https:\u002F\u002Fgithub.com\u002Foptuna\u002Foptuna-examples\u002Ftree\u002Fmain\u002Fxgboost\u002Fxgboost_integration.py)\n\u003C\u002Fdetails>\n\n## Web 仪表盘\n\n[Optuna Dashboard](https:\u002F\u002Fgithub.com\u002Foptuna\u002Foptuna-dashboard) 是一个用于 Optuna 的实时 Web 仪表盘。\n您可以通过图表和表格查看优化历史、超参数重要性等信息。\n无需编写 Python 脚本来调用 [Optuna 的可视化](https:\u002F\u002Foptuna.readthedocs.io\u002Fen\u002Fstable\u002Freference\u002Fvisualization\u002Findex.html) 函数。\n欢迎提出功能请求和报告 bug！\n\n![optuna-dashboard](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Foptuna_optuna_readme_afdc8e41d027.gif)\n\n`optuna-dashboard` 可以通过 pip 安装：\n\n```shell\n$ pip install optuna-dashboard\n```\n\n> [!TIP]\n> 请使用下面的示例代码体验 Optuna Dashboard 的便捷性。\n\n\u003Cdetails>\n\u003Csummary>启动 Optuna Dashboard 的示例代码\u003C\u002Fsummary>\n\n将以下代码保存为 `optimize_toy.py`。\n\n```python\nimport optuna\n\n\ndef objective(trial):\n    x1 = trial.suggest_float(\"x1\", -100, 100)\n    x2 = trial.suggest_float(\"x2\", -100, 100)\n    return x1**2 + 0.01 * x2**2\n\n\nstudy = optuna.create_study(storage=\"sqlite:\u002F\u002F\u002Fdb.sqlite3\")  # 创建一个使用数据库存储的新 study。\nstudy.optimize(objective, n_trials=100)\n```\n\n然后尝试运行以下命令：\n\n```shell\n# 运行上述研究\n$ python optimize_toy.py\n\n# 基于存储 `sqlite:\u002F\u002F\u002Fdb.sqlite3` 启动仪表盘\n$ optuna-dashboard sqlite:\u002F\u002F\u002Fdb.sqlite3\n...\nListening on http:\u002F\u002Flocalhost:8080\u002F\nHit Ctrl-C to quit.\n```\n\n\u003C\u002Fdetails>\n\n\n## OptunaHub\n\n[OptunaHub](https:\u002F\u002Fhub.optuna.org\u002F) 是一个用于分享 Optuna 功能的平台。\n您可以使用已注册的功能，并发布自己的软件包。\n\n### 使用已注册的功能\n\n`optunahub` 可以通过 pip 安装：\n\n```shell\n$ pip install optunahub\n# 安装 AutoSampler 的依赖（对于 PyTorch 来说，仅 CPU 版本就足够了）\n$ pip install cmaes scipy torch --extra-index-url https:\u002F\u002Fdownload.pytorch.org\u002Fwhl\u002Fcpu\n```\n\n您可以使用 `optunahub.load_module` 加载已注册的模块。\n\n```python\nimport 
optuna\nimport optunahub\n\n\ndef objective(trial: optuna.Trial) -> float:\n    x = trial.suggest_float(\"x\", -5, 5)\n    y = trial.suggest_float(\"y\", -5, 5)\n    return x**2 + y**2\n\n\nmodule = optunahub.load_module(package=\"samplers\u002Fauto_sampler\")\nstudy = optuna.create_study(sampler=module.AutoSampler())\nstudy.optimize(objective, n_trials=10)\n\nprint(study.best_trial.value, study.best_trial.params)\n```\n\n更多详情，请参阅 [optunahub 文档](https:\u002F\u002Foptuna.github.io\u002Foptunahub\u002F)。\n\n### 发布您的软件包\n\n您可以通过 [optunahub-registry](https:\u002F\u002Fgithub.com\u002Foptuna\u002Foptunahub-registry) 发布您的软件包。\n请参阅 OptunaHub 中的 [贡献者教程](https:\u002F\u002Foptuna.github.io\u002Foptunahub\u002Ftutorials_for_contributors.html)。\n\n\n## 沟通\n\n- [GitHub Discussions] 用于提问。\n- [GitHub Issues] 用于报告 bug 和提出功能请求。\n\n[GitHub Discussions]: https:\u002F\u002Fgithub.com\u002Foptuna\u002Foptuna\u002Fdiscussions\n[GitHub issues]: https:\u002F\u002Fgithub.com\u002Foptuna\u002Foptuna\u002Fissues\n\n\n## 贡献\n\n我们非常欢迎对 Optuna 的任何贡献！\n\n如果您是 Optuna 的新手，请查看 [good first issues](https:\u002F\u002Fgithub.com\u002Foptuna\u002Foptuna\u002Flabels\u002Fgood%20first%20issue)。这些问题相对简单、定义明确，通常是您熟悉贡献流程和其他开发人员的好起点。\n\n如果您已经为 Optuna 做过贡献，我们建议您关注其他 [contribution-welcome issues](https:\u002F\u002Fgithub.com\u002Foptuna\u002Foptuna\u002Flabels\u002Fcontribution-welcome)。\n\n有关如何为项目做出贡献的通用指南，请参阅 [CONTRIBUTING.md](.\u002FCONTRIBUTING.md)。\n\n\n## 参考文献\n\n如果您在某个研究项目中使用了 Optuna，请引用我们的 KDD 论文 [“Optuna：下一代超参数优化框架”](https:\u002F\u002Fdoi.org\u002F10.1145\u002F3292500.3330701)：\n\n\u003Cdetails open>\n\u003Csummary>BibTeX\u003C\u002Fsummary>\n\n```bibtex\n@inproceedings{akiba2019optuna,\n  title={{O}ptuna: A Next-Generation Hyperparameter Optimization Framework},\n  author={Akiba, Takuya and Sano, Shotaro and Yanase, Toshihiko and Ohta, Takeru and Koyama, Masanori},\n  booktitle={The 25th ACM SIGKDD International Conference on Knowledge Discovery \\& Data Mining},\n  pages={2623--2631},\n  
year={2019}\n}\n```\n\u003C\u002Fdetails>\n\n\n## 许可证\n\nMIT 许可证（详见 [LICENSE](.\u002FLICENSE)）。\n\nOptuna 使用了 SciPy 和 fdlibm 项目的代码（详见 [LICENSE_THIRD_PARTY](.\u002FLICENSE_THIRD_PARTY)）。","# Optuna 快速上手指南\n\nOptuna 是一个专为机器学习设计的自动超参数优化框架。它采用“定义即运行（define-by-run）”的 API 风格，允许用户使用原生 Python 代码（包括条件判断和循环）动态构建搜索空间，具有轻量、高效且易于并行的特点。\n\n## 环境准备\n\n*   **操作系统**：Linux, macOS, Windows\n*   **Python 版本**：3.9, 3.10, 3.11, 3.12, 3.13 或 3.14\n*   **前置依赖**：无特殊强制依赖，基础安装仅需 `pip` 或 `conda`。若需使用可视化仪表盘或特定深度学习框架集成，请参考后续可选安装部分。\n\n## 安装步骤\n\n### 方式一：使用 pip 安装（推荐）\n\n可以使用国内镜像源加速安装：\n\n```bash\npip install optuna -i https:\u002F\u002Fpypi.tuna.tsinghua.edu.cn\u002Fsimple\n```\n\n### 方式二：使用 Conda 安装\n\n```bash\nconda install -c conda-forge optuna\n```\n\n### 可选组件安装\n\n*   **安装可视化仪表盘 (Optuna Dashboard)**：用于实时查看优化历史和超参数重要性。\n    ```bash\n    pip install optuna-dashboard -i https:\u002F\u002Fpypi.tuna.tsinghua.edu.cn\u002Fsimple\n    ```\n*   **安装功能共享平台客户端 (OptunaHub)**：用于加载社区分享的采样器等功能模块。\n    ```bash\n    pip install optunahub -i https:\u002F\u002Fpypi.tuna.tsinghua.edu.cn\u002Fsimple\n    ```\n\n## 基本使用\n\n以下示例展示了如何使用 Optuna 结合 `scikit-learn` 进行最简单的超参数优化。该脚本会自动尝试不同的回归模型（SVR 或 RandomForest）及其对应的超参数，以最小化均方误差。\n\n```python\nimport optuna\n\n# 仅 `import sklearn` 并不会加载子模块，需显式导入示例用到的各个子模块\nimport sklearn.datasets\nimport sklearn.ensemble\nimport sklearn.metrics\nimport sklearn.model_selection\nimport sklearn.svm\n\n\n# 定义需要最小化的目标函数\ndef objective(trial):\n\n    # 使用 trial 对象的 suggest 方法生成超参数\n    # 动态选择回归器类型\n    regressor_name = trial.suggest_categorical(\"regressor\", [\"SVR\", \"RandomForest\"])\n    \n    if regressor_name == \"SVR\":\n        # 如果是 SVR，搜索 C 参数 (对数空间)\n        svr_c = trial.suggest_float(\"svr_c\", 1e-10, 1e10, log=True)\n        regressor_obj = sklearn.svm.SVR(C=svr_c)\n    else:\n        # 如果是随机森林，搜索最大深度\n        rf_max_depth = trial.suggest_int(\"rf_max_depth\", 2, 32)\n        regressor_obj = sklearn.ensemble.RandomForestRegressor(max_depth=rf_max_depth)\n\n    # 准备数据\n    X, y = sklearn.datasets.fetch_california_housing(return_X_y=True)\n    X_train, X_val, y_train, y_val = sklearn.model_selection.train_test_split(X, y, 
random_state=0)\n\n    # 训练与预测\n    regressor_obj.fit(X_train, y_train)\n    y_pred = regressor_obj.predict(X_val)\n\n    # 计算误差并返回\n    error = sklearn.metrics.mean_squared_error(y_val, y_pred)\n\n    return error  # 返回与 Trial 对象关联的目标值\n\n\n# 创建一个新的研究 (Study)\nstudy = optuna.create_study()\n\n# 执行优化，运行 100 次试验 (Trials)\nstudy.optimize(objective, n_trials=100)\n\n# 输出最佳结果\nprint(f\"最佳试验值：{study.best_trial.value}\")\nprint(f\"最佳超参数：{study.best_trial.params}\")\n```\n\n### 核心概念说明\n*   **Study (研究)**：基于目标函数进行的优化过程整体。\n*   **Trial (试验)**：目标函数的单次执行。Optuna 通过多次 Trial 寻找最优的超参数组合。\n*   **suggest 方法**：`trial.suggest_*` 系列方法用于在定义的范围内采样超参数，支持分类 (`categorical`)、浮点数 (`float`) 和整数 (`int`) 类型。","某电商数据团队正在构建用户流失预测模型，急需通过调整随机森林算法的超参数来提升准确率。\n\n### 没有 optuna 时\n- **人工试错效率极低**：数据科学家只能凭经验手动修改树的数量、最大深度等参数，每次调整都需重新运行耗时数小时的训练脚本。\n- **搜索空间僵化**：难以动态处理参数间的依赖关系（例如仅当“分裂策略”为特定值时才调整“最大特征数”），导致大量无效组合被重复测试。\n- **资源浪费严重**：缺乏智能剪枝机制，即使某些参数组合在早期迭代中表现极差，程序仍会固执地跑完全部流程，白白消耗算力。\n- **结果不可复现**：手工记录的参数表格混乱且易出错，难以追溯哪组配置产生了最佳模型，协作沟通成本高昂。\n\n### 使用 optuna 后\n- **自动化高效寻优**：optuna 自动调度数百次试验，利用贝叶斯优化算法在短短几小时内就找到了比人工调优准确率高出 3% 的最佳参数组合。\n- **动态定义搜索空间**：借助 Python 原生语法，团队轻松构建了包含条件判断的灵活搜索空间，确保只生成逻辑合法的参数组合。\n- **智能提前终止**：通过内置的剪枝器，optuna 实时监测中间结果，自动杀掉表现不佳的试验，将整体计算资源消耗降低了 60%。\n- **可视化与可追溯**：optuna 自动生成参数重要性图表和历史轨迹，团队能清晰看到决策路径，快速锁定关键影响因子并复用最佳配置。\n\noptuna 将原本需要数周的人工调参工作压缩至一天内完成，让数据团队能从繁琐的实验中解放出来，专注于业务逻辑的创新。","https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Foptuna_optuna_afdc8e41.gif","https:\u002F\u002Foss.gittoolsai.com\u002Favatars\u002Foptuna_f41f0caf.png","",null,"https:\u002F\u002Fgithub.com\u002Foptuna",[77,81],{"name":78,"color":79,"percentage":80},"Python","#3572A5",100,{"name":82,"color":83,"percentage":84},"Mako","#7e858d",0,13987,1309,"2026-04-17T17:26:19","MIT",1,"未说明","非必需（框架本身为纯 Python 实现，不依赖特定 GPU；若配合 PyTorch\u002FTensorFlow 等深度学习库使用，则需遵循相应库的 GPU 要求）",{"notes":93,"python":94,"dependencies":95},"Optuna 是一个轻量级、跨平台的超参数优化框架，核心安装无特殊硬件要求。若使用 AutoSampler 功能，需额外安装 cmaes、scipy 和 torch（CPU 版本即可满足基本需求）。支持通过 
SQLite 等存储后端实现分布式优化和 Web 仪表盘监控。","3.9, 3.10, 3.11, 3.12, 3.13, 3.14",[96,97,98,99,100,101],"scikit-learn (示例依赖)","cmaes (AutoSampler 依赖)","scipy (AutoSampler 依赖)","torch (可选，用于 AutoSampler 及深度学习集成)","optuna-dashboard (可选，可视化面板)","optunahub (可选，功能共享平台)",[14],[104,105,106,107,108],"python","machine-learning","parallel","distributed","hyperparameter-optimization","2026-03-27T02:49:30.150509","2026-04-18T09:20:49.930715",[112,117,122,127,132,137],{"id":113,"question_zh":114,"answer_zh":115,"source_url":116},39365,"为什么 Optuna 的 TPESampler 或 RandomSampler 会在多次试验中建议相同的参数值？","这是采样器的正常行为，尤其是当搜索空间较小或试验次数较多时，随机采样或基于概率的采样可能会重复选择相同的值。目前 Optuna 没有内置机制强制所有试验参数唯一。若需避免重复，用户需在外部逻辑中自行记录已尝试的参数组合并在目标函数中跳过，或者增大参数的搜索范围以减少碰撞概率。","https:\u002F\u002Fgithub.com\u002Foptuna\u002Foptuna\u002Fissues\u002F2021",{"id":118,"question_zh":119,"answer_zh":120,"source_url":121},39366,"当目标函数存在波动（jitter）时，Optuna 过早剪枝（pruning）怎么办？","Optuna 已提供 `PatientPruner` 来解决此问题。它允许设置一个耐心值（patience），只有当目标函数在连续多个检查点内都没有改善时才会执行剪枝，从而容忍短期的波动。该功能已在主分支（master branch）可用，用户可以通过安装开发版本或等待下一个正式发布版本来使用。","https:\u002F\u002Fgithub.com\u002Foptuna\u002Foptuna\u002Fissues\u002F1447",{"id":123,"question_zh":124,"answer_zh":125,"source_url":126},39367,"如何让 suggest_int 支持步长（step）或对数分布（log）？","Optuna v2.0.0 及以上版本已更新 Suggest API。现在 `suggest_int` 支持 `step` 参数用于指定整数步长（例如每隔 50 取值），也支持 `log` 参数用于对数分布采样。此外，还引入了 `suggest_float` 函数来处理浮点数的步长和对数分布需求。请确保升级到最新版本并使用新 API。","https:\u002F\u002Fgithub.com\u002Foptuna\u002Foptuna\u002Fissues\u002F510",{"id":128,"question_zh":129,"answer_zh":130,"source_url":131},39368,"Optuna 是否支持基于时间的剪枝（time-based pruning）？","原生 Optuna 目前没有直接的时间剪枝器，但可以通过变通方法实现。用户可以在报告中间结果时，将“步骤（step）”设置为累积运行时间（例如秒或毫秒的整数值），而不是传统的 epoch 数。配合 `MedianPruner` 等使用，即可实现基于耗时的剪枝逻辑。注意可能需要对时间进行取整以避免精度问题导致匹配失败。","https:\u002F\u002Fgithub.com\u002Foptuna\u002Foptuna\u002Fissues\u002F2873",{"id":133,"question_zh":134,"answer_zh":135,"source_url":136},39369,"为什么 get_param_importances 
有时会忽略某些参数？","当某些参数在所有试验中取值完全相同（常数），或者该参数对目标值的变化没有贡献（方差为零）时，重要性评估算法（如 ANOVA 或随机森林）无法计算其重要性，因此会将其跳过。此外，如果试验被过早剪枝导致部分参数未充分采样，也可能影响结果。建议检查参数是否在试验中有变化，并增加试验次数或调整剪枝策略。","https:\u002F\u002Fgithub.com\u002Foptuna\u002Foptuna\u002Fissues\u002F1856",{"id":138,"question_zh":139,"answer_zh":140,"source_url":141},39370,"如何修复因类型检查导入导致的循环导入错误？","对于仅用于类型注解的模块导入，应使用 `typing.TYPE_CHECKING` 进行条件导入。例如：\n```python\nfrom typing import TYPE_CHECKING\nif TYPE_CHECKING:\n    from some_module import SomeClass\n```\n这样可以避免运行时导入引发的循环依赖问题。维护者建议使用 `ruff check . --select TCH` 命令自动检测需要修改的文件。","https:\u002F\u002Fgithub.com\u002Foptuna\u002Foptuna\u002Fissues\u002F6029",[143,148,153,158,163,168,173,178,183,188,193,198,203,208,213,218,223,228,233,238],{"id":144,"version":145,"summary_zh":146,"released_at":147},315293,"v4.8.0","这是 [v4.8.0](https:\u002F\u002Fgithub.com\u002Foptuna\u002Foptuna\u002Fmilestone\u002F73?closed=1) 的发布说明。\n\n# 亮点\n\n## GPSampler 支持常数谎言（constant liar）策略\n\n@sawa3030 为 GPSampler 引入了用于高效并行化的常数谎言策略。从图中可以看出（左：v4.7.0，右：v4.8.0），搜索点的重叠有所减少，探索到的解也更加多样化。实验使用了 `n_jobs = 10` 和 `n_trials = 100`。目前，该功能仅支持单目标、无约束优化问题。未来在 v4.9.0 中还将进一步扩展。\n\n| v4.7.0 | v4.8.0 |\n| ------ | ------ |\n| \u003Cimg width=\"704\" height=\"763\" alt=\"image60\" src=\"https:\u002F\u002Fgithub.com\u002Fuser-attachments\u002Fassets\u002Fe5822aac-4ca0-4b74-a3aa-f779b5502f72\" \u002F> | \u003Cimg width=\"704\" height=\"763\" alt=\"image26\" src=\"https:\u002F\u002Fgithub.com\u002Fuser-attachments\u002Fassets\u002F3c39b9e5-2957-4cb2-b518-c27ba77af35e\" \u002F> |\n\n## 类 SHAP 的蜂群图可视化\n\n\u003Cimg width=\"1395\" height=\"508\" alt=\"image52\" src=\"https:\u002F\u002Fgithub.com\u002Fuser-attachments\u002Fassets\u002F7950ba58-363e-49dd-a117-dea68fc8f182\" \u002F>\n\n@yasumorishima 在 OptunaHub 中引入了这一新的可视化功能。详情请参阅 https:\u002F\u002Fhub.optuna.org\u002Fvisualization\u002Fplot_beeswarm\u002F。\n\n# 新特性\n\n- 为 Optuna 添加 Trackio 集成（https:\u002F\u002Fgithub.com\u002Foptuna\u002Foptuna-integration\u002Fpull\u002F259，感谢 @ParagEkbote！）\n- 向 `GPSampler` 
添加常数谎言（constant liar）策略 (#6430)\n\n# 功能增强\n\n- 在 FileSystemArtifactStore 中验证 artifact_id，以防止路径遍历攻击 (#6432，感谢 @RinZ27！)\n- 修正 Pareto 前沿图中警告信息颠倒的问题 (#6498，感谢 @aerosta！)\n\n# 错误修复\n\n- 修复 LightGBM 并行 OptunaSearchCV 中的共享回调状态问题（https:\u002F\u002Fgithub.com\u002Foptuna\u002Foptuna-integration\u002Fpull\u002F260，感谢 @Quant-Quasar！）\n- 修复当 PyTorch 默认设备为 CUDA 时 GPSampler 崩溃的问题 (#6418，感谢 @VedantMadane！)\n- 修复 `PartialFixedSampler` 与分组分解搜索空间结合使用时的兼容性问题 (#6428)\n- 修复 `TPESampler` 在使用 `multivariate` 和 `constant_liar` 策略时的问题 (#6505)\n\n# 文档更新\n\n- 添加文档说明 `WilcoxonPruner` 需要 `scipy` 库 (#6477)\n- 移除文档侧边栏中的版本和语言选择器 (#6482)\n\n# 示例代码\n\n- 应用 black 26.1.0 格式化工具（https:\u002F\u002Fgithub.com\u002Foptuna\u002Foptuna-examples\u002Fpull\u002F348）\n- 移除对不再维护的 allennlp 的 CI 工作流（https:\u002F\u002Fgithub.com\u002Foptuna\u002Foptuna-examples\u002Fpull\u002F351）\n- 降低计划性 CI 触发的频率（https:\u002F\u002Fgithub.com\u002Foptuna\u002Foptuna-examples\u002Fpull\u002F352）\n- 移除针对 `aim` 的计划性 CI 触发（https:\u002F\u002Fgithub.com\u002Foptuna\u002Foptuna-examples\u002Fpull\u002F353）\n- 为 `transformers` 示例添加约束条件（https:\u002F\u002Fgithub.com\u002Foptuna\u002Foptuna-examples\u002Fpull\u002F355）\n\n# 测试改进\n\n- 在 `optuna.testing` 包中添加 `SamplerTestCase` 类 (#6424)\n- 将 `test_before_trial` 和 `test_after_trial_*` 分别移动到 `test_trial.py` 和 `test_study.py` 文件中 (#6429)\n\n# 代码修复\n\n- 将仅用于类型注解的导入移至 `_para","2026-03-16T04:59:17",{"id":149,"version":150,"summary_zh":151,"released_at":152},315294,"v4.7.0","这是 [v4.7.0](https:\u002F\u002Fgithub.com\u002Foptuna\u002Foptuna\u002Fmilestone\u002F72?closed=1) 的发布说明。\n\n# 亮点\n\n## OptunaHub 新增两款多目标采样器！\n\n\u003Cimg width=\"1487\" height=\"946\" alt=\"hype-sampler\" src=\"https:\u002F\u002Fgithub.com\u002Fuser-attachments\u002Fassets\u002F717752e7-4f55-4519-a407-70b1e0502052\" \u002F>\n\n@hrntsm 向 OptunaHub 引入了两款新的多目标采样器——SPEA-II（强度帕累托进化算法 2）和 HypE（超体积估计算法）。SPEA-II 是一种改进的多目标进化算法，其选择机制与 NSGA-II 不同。HypE 则是一种基于超体积的快速进化算法，专为多目标优化问题设计。更多详细信息请参阅以下页面：\n\n* 
SPEA-II：https:\u002F\u002Fhub.optuna.org\u002Fsamplers\u002Fspeaii\u002F\n* HypE：https:\u002F\u002Fhub.optuna.org\u002Fsamplers\u002Fhype\u002F\n\n## `PedAnovaImportanceEvaluator` 现已支持局部超参数重要性计算\n\n在 [`PedAnovaImportanceEvaluator`](https:\u002F\u002Foptuna.readthedocs.io\u002Fen\u002Flatest\u002Freference\u002Fgenerated\u002Foptuna.importance.PedAnovaImportanceEvaluator.html) 中新增了 `target_quantile` 和 `region_quantile` 参数。这一改动允许您通过设置 `region_quantile \u003C 1.0` 来研究局部超参数重要性，而非全局重要性。技术细节请参阅[原始论文](https:\u002F\u002Fwww.ijcai.org\u002Fproceedings\u002F2023\u002F488)。\n\n# 功能增强\n\n- 引入可感知堆栈级别的自定义警告 (#6293)\n- 缓存分布以跳过一致性检查 (#6301)\n- 当 `JournalStorage` 锁获取延迟时添加警告 (#6361)\n- 在 PED-ANOVA 中增加对局部 HPI 的支持 (#6362)\n\n# 错误修复\n\n- 修复 `TPESampler` 中离散截断对数正态分布的对数概率密度（log PDF）计算错误 (#6258)\n- 修正 PED-ANOVA 中的系数 (#6358)\n- 修复默认 PyTorch 设备为 CUDA 时 GPSampler 崩溃的问题 (#6397，感谢 @Quant-Quasar！)\n\n# 文档更新\n\n- 添加 `SECURITY.md` 文件 (#6317)\n- 添加关于未来开发独占式超体积的注释 (#6318)\n- 更新 GPSampler 文档，加入 D-BE 优化细节 (#6347，感谢 @Kaichi-Irie！)\n- 撤销 PR #6354，以启用 Sphinx 构建中的 `-W` 选项 (#6373)\n\n# 示例代码\n\n- 暂时禁用 PyTorch 和可视化相关的定时运行 (https:\u002F\u002Fgithub.com\u002Foptuna\u002Foptuna-examples\u002Fpull\u002F337)\n- 修复 skorch 示例：替换已不可用的 OpenML MNIST 数据集 (https:\u002F\u002Fgithub.com\u002Foptuna\u002Foptuna-examples\u002Fpull\u002F338，感谢 @sotagg！)\n- 将 `minio` 版本固定为 `\u003C=7.2.18`，以修复 CI 问题并停止每日 CI 运行 (https:\u002F\u002Fgithub.com\u002Foptuna\u002Foptuna-examples\u002Fpull\u002F339)\n- 修复 Spark 示例 (https:\u002F\u002Fgithub.com\u002Foptuna\u002Foptuna-examples\u002Fpull\u002F342，感谢 @fritshermans！)\n- 为 lightgbm 将 scikit-learn 版本固定在 \u003C 1.6.0 (https:\u002F\u002Fgithub.com\u002Foptuna\u002Foptuna-examples\u002Fpull\u002F343)\n- 因 Python 3.9 已停止维护而将其移除 (https:\u002F\u002Fgithub.com\u002Foptuna\u002Foptuna-examples\u002Fpull\u002F344，感谢 @ParagEkbote！)\n- 为 fastai 示例添加 IPython 作为依赖项 (https:\u002F\u002Fgithub.com\u002Foptuna\u002Foptuna-examples\u002Fpull\u002F347)\n\n# 测试\n\n- 修复 tests\u002Fvisualization 中的 TC006 
违规问题","2026-01-19T05:45:13",{"id":154,"version":155,"summary_zh":156,"released_at":157},315312,"v3.2.0","This is the release note of [v3.2.0](https:\u002F\u002Fgithub.com\u002Foptuna\u002Foptuna\u002Fmilestone\u002F54?closed=1).\r\n\r\n# Highlights\r\n\r\n## Human-in-the-loop optimization\r\n\r\nWith the latest release, we have incorporated support for human-in-the-loop optimization. It enables an interactive optimization process between users and the optimization algorithm. As a result, it opens up new opportunities for the application of Optuna in tuning Generative AI. For further details, please check out [our human-in-the-loop optimization tutorial](https:\u002F\u002Foptuna-dashboard.readthedocs.io\u002Fen\u002Flatest\u002Ftutorials\u002Fhitl.html).\r\n\r\n\u003Cimg width=\"826\" alt=\"human-in-the-loop-optimization\" src=\"https:\u002F\u002Fgithub.com\u002Foptuna\u002Foptuna\u002Fassets\u002F3255979\u002Fcb03dd4d-2521-499c-bbe6-06dd7144fb4b\">\r\n\r\n_Overview of human-in-the-loop optimization. Generated images and sounds are displayed on [Optuna Dashboard](https:\u002F\u002Fgithub.com\u002Foptuna\u002Foptuna-dashboard), and users can directly evaluate them there._\r\n\r\n## Automatic optimization terminator(Optuna Terminator)\r\n\r\nOptuna Terminator is a new feature that quantitatively estimates room for optimization and automatically stops the optimization process. It is designed to alleviate the burden of figuring out an appropriate value for the number of trials (`n_trials`), or unnecessarily consuming computational resources by indefinitely running the optimization loop. 
See [#4398](https:\u002F\u002Fgithub.com\u002Foptuna\u002Foptuna\u002Fissues\u002F4398) and [optuna-examples#190](https:\u002F\u002Fgithub.com\u002Foptuna\u002Foptuna-examples\u002Fpull\u002F190).\r\n\r\n![b5b752f2-5d2a-410b-a756-53f3d24acd82](https:\u002F\u002Fgithub.com\u002Foptuna\u002Foptuna\u002Fassets\u002F3255979\u002F74c8833e-fe30-406e-90fc-ce9fece591f2)\r\n\r\n_Transition of estimated room for improvement.  It steadily decreases towards the level of cross-validation errors._\r\n\r\n## New sampling algorithms\r\n\r\n### NSGA-III for many-objective optimization\r\n\r\nWe've introduced the NSGAIIISampler as a new multi-objective optimization sampler. It implements NSGA-III, which is an extended variant of NSGA-II, designed to efficiently optimize even when the dimensionality of the objective values is large (especially when it's four or more). NSGA-II had an issue where the search would become biased towards specific regions when the dimensionality of the objective values exceeded four. In NSGA-III, the algorithm is designed to distribute the points more uniformly. This feature was introduced by #4436. \r\n\r\n![219599007-8dc7a435-10e8-45cd-8b95-2b386b4642d5](https:\u002F\u002Fgithub.com\u002Foptuna\u002Foptuna\u002Fassets\u002F3255979\u002F10b9c21d-bc24-4673-803b-a2fe9a8b401e)\r\n\r\n_Objective value space for multi-objective optimization (minimization problem). Red points represent Pareto solutions found by NSGA-II. Blue points represent those found by NSGA-III. NSGA-II shows a tendency for points to concentrate towards each axis (corresponding to the ends of the Pareto Front). On the other hand, NSGA-III displays a wider distribution across the Pareto Front._\r\n\r\n\r\n### BI-population CMA-ES\r\n\r\nContinuing from v3.1, significant improvements have been made to the CMA-ES Sampler. As a new feature, we've added the BI-population CMA-ES algorithm, a kind of restart strategy that mitigates the problem of falling into local optima. 
Whether the IPOP CMA-ES, which we've been providing so far, or the new BI-population CMA-ES is better depends on the problems. If you're struggling with local optima, please try BI-population CMA-ES as well. For more details, please see #4464.\r\n\r\n![221167904-809a1a17-7248-4f81-84fc-396d783e6548](https:\u002F\u002Fgithub.com\u002Foptuna\u002Foptuna\u002Fassets\u002F3255979\u002Fd211a0b9-ea61-4d4c-9c81-bea7987212e4)\r\n\r\n## New visualization functions\r\n\r\n### Timeline plot for trial life cycle\r\n\r\nThe timeline plot visualizes the progress (status, start and end times) of each trial. In this plot, the horizontal axis represents time, and trials are plotted in the vertical direction. Each trial is represented as a horizontal bar, drawn from the start to the end of the trial. With this plot, you can quickly get an understanding of the overall progress of the optimization experiment, such as whether parallel optimization is progressing properly or if there are any trials taking an unusually long time.\r\n\r\nSimilar to other plot functions, all you need to do is pass the study object to `plot_timeline`. For more details, please refer to #4470 and #4538.\r\n![221496175-3f1b286a-ebdc-48d3-9cd7-2a01284e415a](https:\u002F\u002Fgithub.com\u002Foptuna\u002Foptuna\u002Fassets\u002F3255979\u002F9183a6fe-4114-4daa-b0fc-0623981905c1)\r\n\r\n\r\n### Rank plot to understand input-output relationship\r\n\r\nA new visualization feature, `plot_rank`, has been introduced. This plot provides valuable insights into landscapes of objective functions, i.e., relationship between parameters and objective values. In this plot, the vertical and horizontal axes represent the parameter values, and each point represents a single trial. The points are colored according to their ranks.\r\n\r\nSimilar to other plot functions, all you need to do is pass the study object to plot_rank. 
For more details, please refer to #4427 and #4541.\r\n\r\n![b","2023-05-30T05:59:24",{"id":159,"version":160,"summary_zh":161,"released_at":162},315295,"v4.6.0","这是 [v4.6.0](https:\u002F\u002Fgithub.com\u002Foptuna\u002Foptuna\u002Fmilestone\u002F71?closed=1) 的发布说明。\n\n# 亮点\n\n## Optuna Dashboard 集成大模型\n\n[Optuna Dashboard](https:\u002F\u002Fgithub.com\u002Foptuna\u002Foptuna-dashboard) 是一款基于 Web 的工具，可帮助您轻松探索和可视化 Optuna 优化历史。最新版本 v0.20.0 引入了大模型集成功能，支持基于自然语言的试验筛选以及 Plotly 图表的自动生成。更多详情请参阅发布博客。\n\n\u003Cimg width=\"900\" alt=\"image55\" src=\"https:\u002F\u002Fgithub.com\u002Fuser-attachments\u002Fassets\u002Fa2a2d6d2-e782-48ad-a4f7-3a33ca8599eb\" \u002F>\n\n## `GPSampler` 进一步提速\n\n得益于通过 PyTorch 批处理实现的并行化多起点采集函数优化，以及对 NumPy 操作的优化，`GPSampler` 的运行速度显著提升。\n\n\u003Cimg width=\"900\" alt=\"image15\" src=\"https:\u002F\u002Fgithub.com\u002Fuser-attachments\u002Fassets\u002Fd72c1fc7-f9c5-4dac-baf7-a88167bbcd27\" \u002F>\n\n## AutoSampler 完全支持多目标与约束优化\n\n我们已在 [AutoSampler](https:\u002F\u002Fhub.optuna.org\u002Fsamplers\u002Fauto_sampler\u002F) 中完整实现了多目标和约束优化的采样器选择规则。更多详情请参阅我们的博客文章《AutoSampler：全面支持多目标与约束优化》。[链接](https:\u002F\u002Fmedium.com\u002Foptuna\u002Fautosampler-full-support-for-multi-objective-constrained-optimization-c1c4fc957ba2)\n\n\u003Cimg width=\"900\" alt=\"optuna-blog-autosampler-multi-constrained\" src=\"https:\u002F\u002Fgithub.com\u002Fuser-attachments\u002Fassets\u002F75c768b1-8d22-464b-90e8-b44700a044e8\" \u002F>\n\n## 新增鲁棒贝叶斯优化包\n\nOptunaHub 新增了鲁棒贝叶斯优化方法。该方法能够在输入扰动下建议更为稳健的超参数，尤其适用于 Sim2Real 转移场景。\n\n\u003Cimg width=\"600\" alt=\"image25\" src=\"https:\u002F\u002Fgithub.com\u002Fuser-attachments\u002Fassets\u002F9ad6de92-1c84-4716-a06e-c21ffcdb0233\" \u002F>\n\n# 破坏性变更\n\n- 放弃对 Python 3.8 的支持，新增对 Python 3.13 的支持（https:\u002F\u002Fgithub.com\u002Foptuna\u002Foptuna-integration\u002Fpull\u002F253）\n- 修改 `TrialState.__repr__` 和 `TrialState.__str__` (#6281，感谢 @ktns！)\n- 放弃对 Python 3.8 的支持 (#6302)\n\n# 功能增强\n\n- 在日志存储的 `read_logs` 中使用迭代器进行惰性求值 (#6144)\n- 
缓存两两距离以加速 `GPSampler` (#6244)\n- 加速 LogEI 实现 (#6248)\n- 通过优化张量运算顺序提升 EHVI 性能 (#6257)\n- 在超体积贡献计算中采用递减法 (#6264)\n- 在 `TPESampler` 的 `sample_relative` 中使用缓存试验数据 (#6265)\n- 移除 `_set_trial_value_without_commit` 中的 `find_or_raise_by_id` (#6266)\n- 通过批处理采集函数评估进一步加速 `GPSampler` (#6268，感谢 @Kaichi-Irie！)\n- 在 `_CachedStorage` 的 `get_best_trial` 中使用缓存的研究方向和试验数据 (#6270)\n- 为 PostgreSQL 的 `_set_trial_attr_without_commit` 添加 upsert 功能 (#6282，","2025-11-10T05:13:52",{"id":164,"version":165,"summary_zh":166,"released_at":167},315296,"v4.5.0","这是 [v4.5.0](https:\u002F\u002Fgithub.com\u002Foptuna\u002Foptuna\u002Fmilestone\u002F70?closed=1) 的发布说明。\n\n# 亮点\n\n## 用于约束多目标优化的 `GPSampler`\n`GPSampler` 现在能够使用新引入的约束型 LogEHVI 采集函数，同时处理多个目标和约束条件。\n\n下图展示了 `GPSampler`（LogEHVI，无约束）与 `GPSampler`（约束型 LogEHVI，新功能）之间的差异。我们使用的 C2DTLZ2 基准问题的三维版本是一个这样的问题：原 DTLZ2 问题的帕累托前沿中某些区域因约束而不可行。因此，即使不考虑约束，也有可能找到帕累托前沿。实验结果表明，LogEHVI 和约束型 LogEHVI 都能近似帕累托前沿，但后者产生的不可行解明显更少，从而显示出其更高的效率。\n\n| Optuna v4.4 (LogEHVI) | Optuna v4.5 (约束型 LogEHVI) |\n|:--:|:--:|\n|\u003Cimg width=\"320\" height=\"240\" alt=\"Log EHVI\" src=\"https:\u002F\u002Fgithub.com\u002Fuser-attachments\u002Fassets\u002Ff8f302c3-4f43-4afd-8033-3c631985861b\" \u002F>|\u003Cimg width=\"320\" height=\"240\" alt=\"约束型 LogEHVI\" src=\"https:\u002F\u002Fgithub.com\u002Fuser-attachments\u002Fassets\u002F8bb70c11-b9c7-412a-9367-d6d832855a73\" \u002F>|\n\n\n## `TPESampler` 的显著提速\n`TPESampler` 的速度大幅提升（如下表所示，约提升了 5 倍）！这使得每次研究可以进行更多次试验。提速得益于对常数因子的一系列优化。\n\n下表展示了 v4.4.0 和 v4.5.0 版本中 `TPESampler` 的速度对比。实验是在包含 3 个连续参数和 3 个数值离散参数的搜索空间上，使用 `multivariate=True` 进行的。每一行表示不同目标数量下的运行时间，每一列则表示要评估的不同试验次数。每个运行时间都附带了基于 3 个随机种子的标准误差。括号中的数字表示相对于 v4.4.0 的提速倍数。例如，(5.1x) 表示 v4.5.0 的运行速度是 v4.4.0 的 5.1 倍。\n\n|`n_objectives`\u002F`n_trials`|500|1000|1500|2000|\n|:--:|:--:|:--:|:--:|:--:|\n|1|1.4 $\\pm$ 0.03 (5.1x)|3.9 $\\pm$ 0.07 (5.3x)|7.3 $\\pm$ 0.09 (5.4x)|11.9 $\\pm$ 0.10 (5.4x)|\n|2|1.8 $\\pm$ 0.01 (4.7x)|4.7 $\\pm$ 0.02 (4.8x)|8.7 $\\pm$ 0.03 (4.8x)|13.9 $\\pm$ 0.04 
(4.9x)|\n|3|2.0 $\\pm$ 0.01 (4.2x)|5.4 $\\pm$ 0.03 (4.4x)|10.0 $\\pm$ 0.03 (4.6x)|15.9 $\\pm$ 0.03 (4.7x)|\n|4|4.2 $\\pm$ 0.11 (3.2x)|12.1 $\\pm$ 0.14 (3.9x)|20.9 $\\pm$ 0.23 (4.2x)|31.3 $\\pm$ 0.05 (4.4x)|\n|5|12.1 $\\pm$ 0.59 (4.7x)|30.8 $\\pm$ 0.16 (5.8x)|50.7 $\\pm$ 0.46 (6.5x)|72.8 $\\pm$ 1.13 (7.1x)|\n\n## `plot_hypervolume_history` 的显著提速\n`plot_hypervolume_history` 是评估多目标优化性能的重要工具，但在针对多目标问题（目标数量 > 3）进行大量试验时，其运行速度却极其缓慢。v4.5","2025-08-18T06:48:44",{"id":169,"version":170,"summary_zh":171,"released_at":172},315297,"v4.4.0","这是 [v4.4.0](https:\u002F\u002Fgithub.com\u002Foptuna\u002Foptuna\u002Fmilestone\u002F69?closed=1) 的发布说明。\n\n# 亮点\n\n除了新功能、错误修复以及文档和测试方面的改进之外，4.4 版还引入了一个名为 [Optuna MCP 服务器](https:\u002F\u002Fgithub.com\u002Foptuna\u002Foptuna-mcp) 的新工具。\n\n## Optuna MCP 服务器\n任何 MCP 客户端都可以通过 uv 访问 Optuna MCP 服务器——例如，在 Claude Desktop 中，只需将以下配置添加到您的 MCP 服务器设置文件中即可。当然，其他 LLM 客户端，如 VSCode 或 Cline，也可以采用类似的方式进行访问。您还可以通过 Docker 来使用它。如果希望持久化结果，可以使用 `--storage` 选项。有关详细信息，请参阅 [该仓库](https:\u002F\u002Fgithub.com\u002Foptuna\u002Foptuna-mcp)。\n\n```json\n{\n  \"mcpServers\": {\n    … (其他 MCP 服务器的设置)\n    \"Optuna\": {\n      \"command\": \"uvx\",\n      \"args\": [\n        \"optuna-mcp\"\n      ]\n    }\n  }\n}\n```\n\n![image3](https:\u002F\u002Fgithub.com\u002Fuser-attachments\u002Fassets\u002F231165f7-5df6-4e11-8c20-c89f1da7af26)\n\n## 基于高斯过程的多目标优化\n\nOptuna 在 3.6 版中引入的 GPSampler 相较于现有的贝叶斯优化框架，在速度和性能上都更为出色，尤其是在处理包含离散变量的目标函数时。在 Optuna 4.4 版中，我们进一步扩展了 GPSampler 的功能，使其能够支持多目标优化问题。多目标优化的应用非常广泛，而此次引入的多目标能力有望在材料设计、实验设计问题以及高成本超参数优化等领域得到应用。\n\nGPSampler 可以轻松集成到您的程序中，并且在与现有 BoTorchSampler 的对比中表现出色。我们鼓励您将其应用于自己的多目标优化问题中。\n\n```python\nsampler = optuna.samplers.GPSampler()\nstudy = optuna.create_study(directions=[\"minimize\", \"minimize\"], sampler=sampler)\n```\n\n![image2](https:\u002F\u002Fgithub.com\u002Fuser-attachments\u002Fassets\u002F24586f21-fd9c-4d8c-86db-2507f3a2c12f)\n\n## OptunaHub 的新功能\n\n在 Optuna 4.4 版的开发期间，OptunaHub——Optuna 的功能共享平台——也推出了一些新功能：\n\n- 现已新增一个[利用 Google 
Vizier 的采样器](https:\u002F\u002Fhub.optuna.org\u002Fsamplers\u002Fvizier\u002F)。\n- 此前属于 Optuna 核心部分的[带有重启策略的基于 CMA-ES 的采样器](https:\u002F\u002Fhub.optuna.org\u002Fsamplers\u002Frestart_cmaes\u002F)现已迁移到 OptunaHub，使得其使用更加简单便捷。\n- 新增了一个作为黑箱优化任务的[飞机设计基准问题](https:\u002F\u002Fhub.optuna.org\u002Fbenchmarks\u002Fhpa\u002F)，进一步提升了使用 OptunaHub 进行算法开发的便利性。\n- 此外还增加了一项可视化功能，用户可以查看[默认 TPESampler 的采集函数如何随着试验的推进而演变](https:\u002F\u002Fhub.optuna.","2025-06-16T05:12:20",{"id":174,"version":175,"summary_zh":176,"released_at":177},315298,"v4.3.0","这是 [v4.3.0](https:\u002F\u002Fgithub.com\u002Foptuna\u002Foptuna\u002Fmilestone\u002F66?closed=1) 的发布说明。\n\n# 亮点\n\n本次更新包含多项 bug 修复、文档改进等内容。\n\n# 破坏性变更\n\n- [修复] 兼容 LightGBM 4.6.0（https:\u002F\u002Fgithub.com\u002Foptuna\u002Foptuna-integration\u002Fpull\u002F207，感谢 @ffineis！）\n\n# 功能增强\n\n- 在 `LightGBMTuner` 中支持自定义目标函数（https:\u002F\u002Fgithub.com\u002Foptuna\u002Foptuna-integration\u002Fpull\u002F203，感谢 @sawa3030！）\n- 改进 `IntersectionSearchSpace` 的时间复杂度（#5982，感谢 @GittyHarsha！）\n- 在 `InMemoryStorage` 中添加 `_prev_waiting_trial_number`，以提升 `_pop_waiting_trial_id` 的效率（#5993，感谢 @sawa3030！）\n- 向 `convert_positional_args` 添加版本参数（#6009，感谢 @fusawa-yugo！）\n- 在 GrpcStorageProxy 中添加 `wait_server_ready` 方法（#6010，感谢 @hitsgub！）\n- 移除基于 Matplotlib 的 `plot_contour` 和 `plot_rank` 的警告信息（#6011）\n- 修复 `optuna._callbacks.py` 中的类型检查问题（#6030）\n- 增强 `SBXCrossover` 功能（#6008，感谢 @hrntsm！）\n\n# Bug 修复\n\n- 在复制到本地之前，先将存储转换为 `InMemoryStorage`（https:\u002F\u002Fgithub.com\u002Foptuna\u002Foptuna-integration\u002Fpull\u002F213）\n- 修复 Matplotlib 的等高线图问题（#5892，感谢 @fusawa-yugo！）\n- 修复线程锁逻辑（#5922）\n- 对 grpcio 包使用 `_LazyImport`（#5954）\n- 通过为 `JournalStorage` 添加超时机制，防止锁阻塞（#5971，感谢 @sawa3030！）\n- 修复 GPSampler 中针对返回 `inf` 的目标函数的小 bug（#5995）\n- 修复 gRPC 服务器无法与 JournalStorage 配合使用的问题（#6004，感谢 @fusawa-yugo！）\n- 修复已完成试验的 `_pop_waiting_trial_id` 问题（#6012）\n- 解决 `BruteForceSampler` 无法建议所有组合的问题（#5893）\n\n# 文档\n\n- 按照 `optuna\u002Foptuna` 文档 Sphinx 
配置的最新更改进行调整（https:\u002F\u002Fgithub.com\u002Foptuna\u002Foptuna-integration\u002Fpull\u002F197）\n- 修复对外部模块的链接（https:\u002F\u002Fgithub.com\u002Foptuna\u002Foptuna-integration\u002Fpull\u002F198）\n- 更新 `CONTRIBUTING.md` 文件（https:\u002F\u002Fgithub.com\u002Foptuna\u002Foptuna-integration\u002Fpull\u002F200，感谢 @sawa3030！）\n- 更新 `.readthedocs.yml` 中的注释（#5976）\n- 为 `HyperBandPruner` 的可重复性添加注释（#6018）\n\n# 示例\n\n- [热修复] 为 `dask` 添加版本约束（https:\u002F\u002Fgithub.com\u002Foptuna\u002Foptuna-examples\u002Fpull\u002F296）\n- [热修复] 为 `dask-ml` 的 `dask` 添加版本约束（https:\u002F\u002Fgithub.com\u002Foptuna\u002Foptuna-examples\u002Fpull\u002F297）\n- 扩展 `hiplot` 和 `sklearn` 的执行范围（https:\u002F\u002Fgithub.com\u002Foptuna\u002Foptuna-examples\u002Fpull\u002F298，感谢 @fusawa-yugo！）\n- 使用 black 格式化代码以修复 CI 问题（https:\u002F\u002Fgithub.com\u002Foptuna\u002Foptuna-examples\u002Fpull\u002F300）\n- 将 CI 的 Python 版本升级至 3.12（https:\u002F\u002Fgithub.com\u002Foptuna\u002Foptuna-examples\u002Fpull\u002F301）\n- [热修复] 为 `lightgbm` 添加版本约束（https:\u002F\u002Fgithub.com\u002Foptuna\u002Foptuna-examples\u002Fpull\u002F302）\n- 修复 Skorch 示例（https:\u002F\u002Fgithub.com\u002Foptuna\u002Foptuna-examples\u002Fpull\u002F303，感谢 @ParagEkbote！）\n- 为 TensorFlow 相关的 CI 添加版本约束（https:\u002F\u002Fgithub.com\u002Foptuna\u002Fop","2025-04-14T05:06:59",{"id":179,"version":180,"summary_zh":181,"released_at":182},315299,"v4.2.1","这是 v4.2.1 的发布说明。本次发布包含一个 bug 修复，解决了在安装了旧版本 grpcio 包的情况下，Optuna 无法正常导入的问题。\n\n## Bug\n\n- [后向移植] 对 grpcio 包使用 `_LazyImport` (#5965)\n\n## 其他\n\n- 将版本号提升至 v4.2.1 (#5964)\n\n## 感谢所有贡献者！\n\n本次发布离不开作者以及参与评审和讨论的各位。  \n@c-bata @HideakiImamura @nabenabe0928\n","2025-02-12T07:56:39",{"id":184,"version":185,"summary_zh":186,"released_at":187},315300,"v3.5.1","这是 v3.5.1 的发布说明。\n\n# 错误修复\n\n- [后向移植] 修复 `load_study` 函数的默认采样器问题。\n\n# 其他\n\n- 将版本号提升至 v3.5.1。\n","2025-01-27T07:13:18",{"id":189,"version":190,"summary_zh":191,"released_at":192},315301,"v3.6.2","这是 v3.6.2 的发布说明。\n\n# Bug 修复\n\n- [后向移植] 修复 `load_study` 
函数的默认采样器问题。\n\n# 其他\n\n- 将版本号提升至 v3.6.2\n","2025-01-27T07:13:38",{"id":194,"version":195,"summary_zh":196,"released_at":197},315302,"v3.4.1","这是 v3.4.1 的发布说明。\n\n# 错误修复\n\n- [后向移植] 修复 `load_study` 函数的默认采样器问题。\n\n# 其他\n\n- 将版本号提升至 v3.4.1。","2025-01-27T06:51:19",{"id":199,"version":200,"summary_zh":201,"released_at":202},315303,"v4.2.0","This is the release note of [v4.2.0](https:\u002F\u002Fgithub.com\u002Foptuna\u002Foptuna\u002Freleases\u002Ftag\u002Fv4.2.0). In conjunction with the Optuna release, OptunaHub 0.2.0 is released. Please refer to [the release note of OptunaHub 0.2.0](https:\u002F\u002Fgithub.com\u002Foptuna\u002Foptunahub\u002Freleases\u002Ftag\u002Fv0.2.0) for more details.\r\n\r\nHighlights of this release include:\r\n\r\n- 🚀gRPC Storage Proxy for Scalable Hyperparameter Optimization\r\n- 🤖 SMAC3: Support for New State-of-the-art Optimization Algorithm by AutoML.org (@automl)\r\n- 📁 OptunaHub Now Supports Benchmark Functions\r\n- 🧑‍💻 Gaussian Process-Based Bayesian Optimization with Inequality Constraints\r\n- 🧑‍💻 c-TPE: Support Constrained TPESampler\r\n\r\n# Highlights\r\n\r\n## gRPC Storage Proxy for Scalable Hyperparameter Optimization\r\n\r\n\r\nThe gRPC storage proxy is a feature designed to support large-scale distributed optimization. As shown in the diagram below, gRPC storage proxy sits between the optimization workers and the database server, proxying the calls of Optuna’s storage APIs.\r\n\r\n\u003Cimg width=\"1241\" alt=\"grpc-proxy\" src=\"https:\u002F\u002Fgithub.com\u002Fuser-attachments\u002Fassets\u002F00220bc6-75da-4ee3-b9ff-a2b3440ac207\" \u002F>\r\n\r\n\r\nIn large-scale distributed optimization settings where hundreds to thousands of workers are operating, placing a gRPC storage proxy for every few tens can significantly reduce the load on the RDB server which would otherwise be a single point of failure. The gRPC storage proxy enables sharing the cache about Optuna studies and trials, which can further mitigate load. 
Please refer to [the official documentation](https:\u002F\u002Foptuna.readthedocs.io\u002Fen\u002Flatest\u002Freference\u002Fgenerated\u002Foptuna.storages.GrpcStorageProxy.html#optuna.storages.GrpcStorageProxy) for further details on how to utilize gRPC storage proxy.\r\n\r\n## SMAC3: Random Forest-Based Bayesian Optimization Developed by AutoML.org\r\n[SMAC3](https:\u002F\u002Fgithub.com\u002Fautoml\u002FSMAC3) is a hyperparameter optimization framework developed by [AutoML.org](http:\u002F\u002Fautoml.org\u002F), one of the most influential AutoML research groups. The Optuna-compatible SMAC3 sampler is now available thanks to the contribution to OptunaHub by Difan Deng (@dengdifan), one of the core members of AutoML.org. We can now use the method widely used in AutoML research and real-world applications from Optuna.\r\n\r\n```python\r\n# pip install optunahub smac\r\nimport optuna\r\nimport optunahub\r\nfrom optuna.distributions import FloatDistribution\r\n\r\ndef objective(trial: optuna.Trial) -> float:\r\n    x = trial.suggest_float(\"x\", -10, 10)\r\n    y = trial.suggest_float(\"y\", -10, 10)\r\n    return x**2 + y**2\r\n\r\nsmac_mod = optunahub.load_module(\"samplers\u002Fsmac_sampler\")\r\nn_trials = 100\r\nsampler = smac_mod.SMACSampler(\r\n    {\"x\": FloatDistribution(-10, 10), \"y\": FloatDistribution(-10, 10)},\r\n    n_trials=n_trials,\r\n)\r\nstudy = optuna.create_study(sampler=sampler)\r\nstudy.optimize(objective, n_trials=n_trials)\r\n```\r\n\r\nPlease refer to https:\u002F\u002Fhub.optuna.org\u002Fsamplers\u002Fsmac_sampler\u002F for more details.\r\n\r\n## OptunaHub Now Supports Benchmark Functions\r\nBenchmarking the performance of optimization algorithms is an essential process indispensable to the research and development of algorithms. 
The newly added OptunaHub Benchmarks in the latest version v0.2.0 of [optunahub](https:\u002F\u002Fhub.optuna.org\u002F) is a new feature for Optuna users to conduct benchmarks conveniently.\r\n\r\n```python\r\n# pip install optunahub>=4.2.0 scipy torch\r\nimport optuna\r\nimport optunahub\r\n\r\nbbob_mod = optunahub.load_module(\"benchmarks\u002Fbbob\")\r\nsmac_mod = optunahub.load_module(\"samplers\u002Fsmac_sampler\")\r\nsphere2d = bbob_mod.Problem(function_id=1, dimension=2)\r\n\r\nn_trials = 100\r\nstudies = []\r\nfor study_name, sampler in [\r\n    (\"random\", optuna.samplers.RandomSampler(seed=1)),\r\n    (\"tpe\", optuna.samplers.TPESampler(seed=1)),\r\n    (\"cmaes\", optuna.samplers.CmaEsSampler(seed=1)),\r\n    (\"smac\", smac_mod.SMACSampler(sphere2d.search_space, n_trials, seed=1)),\r\n]:\r\n    study = optuna.create_study(directions=sphere2d.directions,\r\n        sampler=sampler, study_name=study_name)\r\n    study.optimize(sphere2d, n_trials=n_trials)\r\n    studies.append(study)\r\n\r\noptuna.visualization.plot_optimization_history(studies).show()\r\n```\r\n\r\nIn the above sample code, we compare and display the performance of the four kinds of samplers using a two-dimensional Sphere function, which is part of a group of benchmark functions widely used in the black-box optimization research community known as [Blackbox Optimization Benchmarking (BBOB)](https:\u002F\u002Fhub.optuna.org\u002Fbenchmarks\u002Fbbob\u002F).\r\n\r\n\u003Cimg width=\"1250\" alt=\"bbob\" src=\"https:\u002F\u002Fgithub.com\u002Fuser-attachments\u002Fassets\u002F89b09090-dd37-4679-9fa8-af987f138dcc\" \u002F>\r\n\r\n\r\n## Gaussian Process-Based Bayesian Optimization with Inequality Constraints\r\nWe worked on its extension and adapted `GPSampler` to constrained optimization in Optuna v4.2.0 since Gaussian process-based Bayesian optimization is a very popular method in various research fields such as aircraft engineering and materials science. 
We show the basic usage below.

```python
# pip install optuna>=4.2.0 scip
```

*(v4.2.0 released 2025-01-20)*

---

# Optuna v4.1.0

This is the release note of [v4.1.0](https://github.com/optuna/optuna/milestone/64?closed=1). Highlights of this release include:
- 🤖 AutoSampler: Automatic Selection of Optimization Algorithms
- 🚀 More scalable RDB Storage Backend
- 🧑‍💻 Five New Algorithms in OptunaHub (MO-CMA-ES, MOEA/D, etc.)
- 🐍 Support Python 3.13

The updated list of tested and supported Python releases is as follows:
- [Optuna 4.1](https://github.com/optuna/optuna/releases/tag/v4.1.0): supported by Python 3.8 - 3.13
- [Optuna Integration 4.1](https://github.com/optuna/optuna-integration/releases/tag/v4.1.0): supported by Python 3.8 - 3.12
- [Optuna Dashboard 0.17.0](https://github.com/optuna/optuna-dashboard/releases/tag/v0.17.0): supported by Python 3.8 - 3.13

# Highlights

## AutoSampler: Automatic Selection of Optimization Algorithms

<img width="750" alt="Blog-1" src="https://github.com/user-attachments/assets/f3ecd366-7e7b-49d6-ae38-f35301fa7d11">

[AutoSampler](https://hub.optuna.org/samplers/auto_sampler/) automatically selects a sampler from those implemented in Optuna, depending on the situation.
Using AutoSampler, as in the code example below, users can achieve optimization performance equal to or better than Optuna's default without being aware of which optimization algorithm to use.\r\n\r\n```\r\n$ pip install optunahub cmaes torch scipy\r\n```\r\n\r\n```python\r\nimport optuna\r\nimport optunahub\r\n\r\nauto_sampler_module = optunahub.load_module(\"samplers\u002Fauto_sampler\")\r\nstudy = optuna.create_study(sampler=auto_sampler_module.AutoSampler())\r\n```\r\n\r\nSee the [Medium blog post](https:\u002F\u002Fmedium.com\u002Foptuna\u002Fautosampler-automatic-selection-of-optimization-algorithms-in-optuna-1443875fd8f9) for details.\r\n\r\n\r\n## Enhanced RDB Storage Backend\r\n\r\nThis release incorporates comprehensive performance tuning on Optuna’s [RDBStorage](https:\u002F\u002Foptuna.readthedocs.io\u002Fen\u002Fstable\u002Freference\u002Fgenerated\u002Foptuna.storages.RDBStorage.html), leading to significant performance improvements. The table below shows the comparison results of execution times between versions 4.0 and 4.1.\r\n\r\n| # trials | v4.0.0 | v4.1.0 | Diff |\r\n| --- |  --- |  --- |  --- |\r\n| 1000 | 72.461 sec (±1.026) | 59.706 sec (±1.216) | **-17.60%** |\r\n| 10000 | 1153.690 sec (±91.311) | 664.830 sec (±9.951) | **-42.37%** |\r\n| 50000 | 12118.413 sec (±254.870) | 4435.961 sec (±190.582) | **-63.39%** |\r\n\r\nFor fair comparison, all experiments were repeated 10 times, and the mean execution time was compared. 
Additional detailed benchmark settings include the following:\r\n- Objective Function: Each trial consists of 10 parameters and 10 user attributes\r\n- Storage: MySQL 8.0 (with PyMySQL)\r\n- Sampler: [RandomSampler](https:\u002F\u002Foptuna.readthedocs.io\u002Fen\u002Fstable\u002Freference\u002Fsamplers\u002Fgenerated\u002Foptuna.samplers.RandomSampler.html#optuna.samplers.RandomSampler)\r\n- Execution Environment: Kubernetes Pod with 5 cpus and 8Gi RAM\r\n\r\nPlease note, due to extensive execution time, the figure for v4.0.0 with 50,000 trials represents the average of 7 runs instead of 10.\r\n\r\n\u003Cdetails>\r\n\u003Csummary>Benchmark Script\u003C\u002Fsummary>\r\n\r\n```python\r\nimport optuna\r\nimport time\r\nimport os\r\nimport numpy as np\r\n\r\noptuna.logging.set_verbosity(optuna.logging.ERROR)\r\nstorage_url = \"mysql+pymysql:\u002F\u002Fuser:password@\u003Cipaddr>:\u003Cport>\u002F\u003Cdbname>\"\r\nn_repeat = 10\r\n\r\ndef objective(trial: optuna.Trial) -> float:\r\n    s = 0\r\n    for i in range(10):\r\n        trial.set_user_attr(f\"attr{i}\", \"dummy user attribute\")\r\n        s += trial.suggest_float(f\"x{i}\", -10, 10) ** 2\r\n    return s\r\n\r\n\r\ndef bench(n_trials):\r\n    elapsed = []\r\n    for i in range(n_repeat):\r\n        start = time.time()\r\n        study = optuna.create_study(\r\n            storage=storage_url,\r\n            sampler=optuna.samplers.RandomSampler()\r\n        )\r\n        study.optimize(objective, n_trials=n_trials, n_jobs=10)\r\n        elapsed.append(time.time() - start)\r\n        optuna.delete_study(study_name=study.study_name, storage=storage_url)\r\n    print(f\"{np.mean(elapsed)=} {np.std(elapsed)=}\")\r\n\r\n\r\nfor n_trials in [1000, 10000, 50000]:\r\n    bench(n_trials)\r\n```\r\n\r\n\u003C\u002Fdetails>\r\n\r\n\r\n## Five New Algorithms in OptunaHub (MO-CMA-ES, MOEA\u002FD, etc.)\r\n\r\nThe following five new algorithms were added to OptunaHub!\r\n\r\n- [Multi-objective CMA-ES 
(MO-CMA-ES)](https://hub.optuna.org/samplers/mocma/) by @y0z
- [MOEA/D sampler](https://hub.optuna.org/samplers/moead/) by @hrntsm
- [MAB Epsilon-Greedy Sampler](https://hub.optuna.org/samplers/mab_epsilon_greedy/) by @ryota717
- [NSGAII sampler with Initial Trials](https://hub.optuna.org/samplers/nsgaii_with_initial_trials/) by @hrntsm
- [CMA-ES with User Prior](https://hub.optuna.org/samplers/user_prior_cmaes/) by @nabenabe0928

[MO-CMA-ES](https://hub.optuna.org/samplers/mocma/) is an extension of CMA-ES for multi-objective optimization. Its search mechanism is based on multiple (1+1)-CMA-ES instances and inherits good invariance properties from CMA-ES, such as invariance against rotation of the search space.

<img width="400" alt="mocmaes" src="https://github.com/u

*(v4.1.0 released 2024-11-11)*

---

# Optuna v4.0.0

Here is the release note of [v4.0.0](https://github.com/optuna/optuna/milestone/63?closed=1). Please also check out the [release blog post](https://medium.com/optuna/announcing-optuna-4-0-3325a8420d10).

If you want to update the Optuna version of your existing projects to v4.0, please see the [migration guide](https://github.com/optuna/optuna/discussions/5573).

We have also published blog posts about the development items.
Please check them out!\r\n- [OptunaHub, a Feature-Sharing Platform for Optuna, Now Available in Official Release!](https:\u002F\u002Fmedium.com\u002Foptuna\u002Foptunahub-a-feature-sharing-platform-for-optuna-now-available-in-official-release-4b99efe9934d)\r\n- [File Management during LLM (Large Language Model) Trainings by Optuna v4.0.0 Artifact Store](https:\u002F\u002Fmedium.com\u002Foptuna\u002Ffile-management-during-llm-large-language-model-trainings-by-optuna-v4-0-0-artifact-store-5bdd5112f3c7)\r\n- [Significant Speed Up of Multi-Objective TPESampler in Optuna v4.0.0](https:\u002F\u002Fmedium.com\u002Foptuna\u002Fsignificant-speed-up-of-multi-objective-tpesampler-in-optuna-v4-0-0-2bacdcd1d99b)\r\n\r\n# Highlights\r\n\r\n## Official Release of Feature-Sharing Platform OptunaHub\r\nWe officially released [OptunaHub](https:\u002F\u002Fhub.optuna.org\u002F), a feature-sharing platform for Optuna. A large number of optimization and visualization algorithms are available in OptunaHub. Contributors can easily register their methods and deliver them to Optuna users around the world.\r\n\r\nPlease also read the [OptunaHub release blog post](https:\u002F\u002Fmedium.com\u002Foptuna\u002Foptunahub-a-feature-sharing-platform-for-optuna-now-available-in-official-release-4b99efe9934d).\r\n\r\n![optunahub](https:\u002F\u002Fgithub.com\u002Fuser-attachments\u002Fassets\u002F7995c15b-89e6-474a-ba75-56888ae67230)\r\n\r\n\r\n## Enhanced Experiment Management Feature: Official Support of Artifact Store\r\nArtifact Store is a file management feature for files generated during optimization, dubbed artifacts. In Optuna v4.0, we stabilized the existing file upload API and further enhanced the usability of Artifact Store by adding some APIs such as the artifact download API. We also added features to show JSONL and CSV files on [Optuna Dashboard](https:\u002F\u002Fgithub.com\u002Foptuna\u002Foptuna-dashboard) in addition to the existing support for images, audio, and video. 
With this official support, backward compatibility of the API is now guaranteed.

For more details, please check the [blog post](https://medium.com/optuna/file-management-during-llm-large-language-model-trainings-by-optuna-v4-0-0-artifact-store-5bdd5112f3c7).

![artifact](https://github.com/user-attachments/assets/1556ef5a-852a-4256-b2e7-32c7d2eef253)


## `JournalStorage`: Official Support of Distributed Optimization via Network File System
`JournalStorage` is a storage class experimentally introduced in Optuna v3.1 (see the [blog post](https://medium.com/optuna/distributed-optimization-via-nfs-using-optunas-new-operation-based-logging-storage-9815f9c3f932) for details). Optuna provides `JournalFileBackend`, a storage backend for various file systems. It can be used on NFS, allowing Optuna to scale to multiple nodes.

In Optuna v4.0, the API for `JournalStorage` has been reorganized, and `JournalStorage` is officially supported. This official support guarantees its backward compatibility from v4.0 onward. For details on the API changes, please refer to the [Optuna v4.0 Migration Guide](https://github.com/optuna/optuna/discussions/5573).

```python
import optuna
from optuna.storages import JournalStorage
from optuna.storages.journal import JournalFileBackend

def objective(trial: optuna.Trial) -> float:
    ...

# Trials are persisted to an append-only log file, which also works on NFS.
storage = JournalStorage(JournalFileBackend("./optuna_journal_storage.log"))
study = optuna.create_study(storage=storage)
study.optimize(objective)
```


## Significant Speedup of Multi-Objective `TPESampler`
Before v4.0, the multi-objective `TPESampler` itself sometimes became the bottleneck after a few hundred trials, effectively limiting the number of trials during optimization.
Optuna v4.0 drastically improves the sampling speed, e.g., 300 times faster for three objectives with 200 trials, and enables users to handle many more trials. Please check the [blog post](https://medium.com/optuna/significant-speed-up-of-multi-objective-tpesampler-in-optuna-v4-0-0-2bacdcd1d99b) for details.

## Introduction of a New `Terminator` Algorithm
Optuna `Terminator` was originally introduced for hyperparameter optimization of machine learning algorithms using cross-validation. To support broader use cases, Optuna v4.0 introduces the Expected Minimum Model Regret (EMMR) algorithm. Please refer to the [`EMMREvaluator` document](https://optuna.readthedocs.io/en/v4.0.0/reference/generated/optuna.terminator.EMMREvaluator.html) for details.

## Enhancements of Constrained Optimization
We have gradually expanded the support for constrained optimization. In v4.0, [`study.best_trial`](https://optuna.readthedocs.io/en/v4.0.0/reference/generated/optuna.study.Study.html#optuna.study.Study.best_trial) a

*(v4.0.0 released 2024-09-02)*

---

# Optuna v4.0.0-b0

This is the release note of [v4.0.0-b0](https://github.com/optuna/optuna/milestone/61?closed=1).

If you want to update your existing projects from Optuna v3.x to Optuna v4, please see the [migration guide](https://github.com/optuna/optuna/discussions/5573) and try out Optuna v4.

# Highlights

## OptunaHub Beta Release

The Optuna team has released the beta version of OptunaHub, the feature-sharing platform for Optuna. Registered features can easily be used in user code, and contributors can register the features they implement. The beta version of OptunaHub is now ready to accept contributions from all over the world.
Visit [hub.optuna.org](https://hub.optuna.org)!

The following code shows an example of using a sampler registered on OptunaHub.

```bash
% pip install optunahub
```

```python
import optunahub
import optuna

def objective(trial):
    x = trial.suggest_float("x", 0, 1)
    return x

mod = optunahub.load_module("samplers/simulated_annealing")

sampler = mod.SimulatedAnnealingSampler()
study = optuna.create_study(sampler=sampler)
study.optimize(objective, n_trials=20)
```


## Stabilization of Artifact

The stable version of the artifact module is available, now equipped with several new APIs. This module manages relatively large files generated during hyperparameter tuning, such as model snapshots and training/validation datasets. Compared to third-party experiment-tracking libraries, the advantage of Optuna's artifact module is its tight integration with [Optuna Dashboard](https://github.com/optuna/optuna-dashboard), which allows users to see the artifacts (files) associated with an Optuna trial or study.

Here is a list of new APIs:
- [`download_artifact`](https://optuna.readthedocs.io/en/v4.0.0-b0/reference/artifacts.html#optuna.artifacts.download_artifact): Download an artifact from the artifact store.
- [`get_all_artifact_meta`](https://optuna.readthedocs.io/en/v4.0.0-b0/reference/artifacts.html#optuna.artifacts.get_all_artifact_meta): List the artifact information associated with the provided trial or study.


## Stabilization of `JournalStorage`

The stable version of `JournalStorage` is available.
This means we have decided to maintain backward compatibility of the log format in future releases.

Please note that this release introduces the following API changes to improve the clarity of class names and the module structure.

| Deprecated APIs | Corresponding active APIs |
| --- | --- |
| `optuna.storages.JournalFileStorage` | `optuna.storages.journal.JournalFileBackend` |
| `optuna.storages.JournalFileSymlinkLock` | `optuna.storages.journal.JournalFileSymlinkLock` |
| `optuna.storages.JournalFileOpenLock` | `optuna.storages.journal.JournalFileOpenLock` |
| `optuna.storages.JournalRedisStorage` | `optuna.storages.journal.JournalRedisBackend` |


# Breaking Changes

- Delete deprecated three integrations, `skopt`, `catalyst`, and `fastaiv1` (https://github.com/optuna/optuna-integration/pull/114)
- Remove deprecated `CmaEsSampler` from integration (https://github.com/optuna/optuna-integration/pull/116)
- Remove verbosity of `LightGBMTuner` (https://github.com/optuna/optuna-integration/pull/136)
- Move positional args of LightGBM tuner (https://github.com/optuna/optuna-integration/pull/138)
- Remove `multi_objective` (#5390)
- Delete deprecated `_ask` and `_tell` (#5398)
- Delete deprecated `--direction(s)` arguments in the `ask` command (#5405)
- Delete deprecated three integrations, `skopt`, `catalyst`, and `fastaiv1` (#5407)
- Remove the default normalization of importance in f-ANOVA (#5411)
- Remove `samplers.intersection` (#5414)
- Drop implicit create-study in `ask` command (#5415)
- Remove deprecated `study optimize` CLI command (#5416)
- Remove deprecated `CmaEsSampler` from integration (#5417)
- Support constrained optimization in `best_trial` (#5426)
- Drop `--study` in `cli.py` (#5430)
- Deprecate `constraints_func` in `plot_pareto_front` function
(#5455)
- Rename some classnames related to `JournalStorage` (#5539)

# New Features

- Add Comet ML integration (https://github.com/optuna/optuna-integration/pull/63, thanks @caleb-kaiser!)
- Add Knowledge Gradient candidates functions (https://github.com/optuna/optuna-integration/pull/125, thanks @alxhslm!)
- Add `is_exhausted()` function in the `GridSampler` class (#5306, thanks @aaravm!)
- Remove experimental from plot (#5413)
- Implement `download_artifact` (#5448)
- Add a function to list linked artifact information (#5467)
- Stabilize artifact APIs (#5567)
- Stabilize `JournalStorage` (#5568)

# Enhancements

- Pass two arguments to the forward of `ConstrainedMCObjective` to support `botorch=0.10.0` (https://github.com/optuna/optuna-integration/pull/106)
- Speed up non-dominated sort (#5302)
- Make 2d hypervolume computation twice faster (#5303)
- Reduce the time complexity of HSSP 2d from `O(NK^2 log K)` to `O

*(v4.0.0-b0 released 2024-07-16)*

---

# Optuna v3.6.1

This is the release note of [v3.6.1](https://github.com/optuna/optuna/milestone/62?closed=1).

# Bug Fixes

- [Backport] Fix Wilcoxon pruner bug when best_trial has no intermediate value (#5370)
- [Backport] Address issue #5358 (#5371)
- [Backport] Fix `average_is_best` implementation in `WilcoxonPruner` (#5373)

# Other

- Bump up version number to v3.6.1 (#5372)

# Thanks to All the Contributors!

This release was made possible by the authors and the people who participated in the reviews and discussions.

@HideakiImamura, @eukaryo, @nabenabe0928

*(v3.6.1 released 2024-04-01)*

---

# Optuna v3.6.0

This is the release note of
[v3.6.0](https:\u002F\u002Fgithub.com\u002Foptuna\u002Foptuna\u002Fmilestone\u002F60?closed=1).\r\n\r\n# Highlights\r\n\r\nOptuna 3.6 newly supports the following new features. See [our release blog](https:\u002F\u002Fmedium.com\u002Foptuna\u002Fannouncing-optuna-3-6-f5d7efeb5620) for more detailed information.\r\n- Wilcoxon Pruner: New Pruner Based on Wilcoxon Signed-Rank Test\r\n- Lightweight Gaussian Process (GP)-Based Sampler\r\n- Speeding up Importance Evaluation with PED-ANOVA\r\n- Stricter Verification Logic for FrozenTrial\r\n- Refactoring the Optuna Dashboard\r\n- Migration to Optuna Integration\r\n\r\n# Breaking Changes\r\n\r\n- Implement `optuna.terminator` using `optuna._gp` (#5241)\r\n\r\nThese migration-related PRs do not break the backward compatibility as long as optuna-integration v3.6.0 or later is installed in your environment.\r\n\r\n- Move TensorBoard Integration (https:\u002F\u002Fgithub.com\u002Foptuna\u002Foptuna-integration\u002Fpull\u002F56, thanks @dheemantha-bhat!)\r\n- Delete TensorBoard integration for migration to `optuna-integration` (#5161, thanks @dheemantha-bhat!)\r\n- Remove CatBoost integration for isolation (#5198)\r\n- Remove PyTorch integration (#5213)\r\n- Remove Dask integration (#5222)\r\n- Migrate the `sklearn` integration (#5225)\r\n- Remove BoTorch integration (#5230)\r\n- Remove `SkoptSampler` (#5234)\r\n- Remove the `cma` integration (#5236)\r\n- Remove the `wandb` integration (#5237)\r\n- Remove XGBoost Integration (#5239)\r\n- Remove MLflow integration (#5246)\r\n- Migrate LightGBM integration (#5249)\r\n- Add CatBoost integration (https:\u002F\u002Fgithub.com\u002Foptuna\u002Foptuna-integration\u002Fpull\u002F61)\r\n- Add PyTorch integration (https:\u002F\u002Fgithub.com\u002Foptuna\u002Foptuna-integration\u002Fpull\u002F62)\r\n- Add XGBoost integration (https:\u002F\u002Fgithub.com\u002Foptuna\u002Foptuna-integration\u002Fpull\u002F65, thanks @buruzaemon!)\r\n- Add `sklearn` integration 
(https:\u002F\u002Fgithub.com\u002Foptuna\u002Foptuna-integration\u002Fpull\u002F66)\r\n- Move Dask integration (https:\u002F\u002Fgithub.com\u002Foptuna\u002Foptuna-integration\u002Fpull\u002F67)\r\n- Migrate BoTorch integration (https:\u002F\u002Fgithub.com\u002Foptuna\u002Foptuna-integration\u002Fpull\u002F72)\r\n- Move `SkoptSampler` (https:\u002F\u002Fgithub.com\u002Foptuna\u002Foptuna-integration\u002Fpull\u002F74)\r\n- Migrate `pycma` integration (https:\u002F\u002Fgithub.com\u002Foptuna\u002Foptuna-integration\u002Fpull\u002F77)\r\n- Migrate the Weights & Biases integration (https:\u002F\u002Fgithub.com\u002Foptuna\u002Foptuna-integration\u002Fpull\u002F79)\r\n- Add LightGBM integration (https:\u002F\u002Fgithub.com\u002Foptuna\u002Foptuna-integration\u002Fpull\u002F81, thanks @DanielAvdar!)\r\n- Migrate `MLflow` integration (https:\u002F\u002Fgithub.com\u002Foptuna\u002Foptuna-integration\u002Fpull\u002F84)\r\n\r\n# New Features\r\n\r\n- Backport the change of the timeline plot in Optuna Dashboard (#5168)\r\n- Wilcoxon pruner (#5181)\r\n- Add `GPSampler` (#5185)\r\n- Add a super quick f-ANOVA algorithm named PED-ANOVA (#5212)\r\n\r\n# Enhancements\r\n\r\n- Add `formats.sh` based on `optuna\u002Fmaster` (https:\u002F\u002Fgithub.com\u002Foptuna\u002Foptuna-integration\u002Fpull\u002F75)\r\n- Use vectorization for categorical distance (#5147)\r\n- Unify implementation of fast non-dominated sort (#5160)\r\n- Raise `TypeError` if `params` is not a `dict` in `enqueue_trial` (#5164, thanks @adjeiv!)\r\n- Upgrade `FrozenTrial._validate()` (#5211)\r\n- Import SQLAlchemy lazily (#5215)\r\n- Add UCB for `optuna._gp` (#5224)\r\n- Enhance performance of `GPSampler` (#5274)\r\n- Fix inconsistencies between terminator and its visualization (#5276, thanks @SimonPop!)\r\n- Enhance `GPSampler` performance other than introducing local search (#5279)\r\n\r\n# Bug Fixes\r\n\r\n- Fix import path 
(https:\u002F\u002Fgithub.com\u002Foptuna\u002Foptuna-integration\u002Fpull\u002F83)\r\n- Fix `README.md` (https:\u002F\u002Fgithub.com\u002Foptuna\u002Foptuna-integration\u002Fpull\u002F88)\r\n- Fix `LightGBMTuner` test (https:\u002F\u002Fgithub.com\u002Foptuna\u002Foptuna-integration\u002Fpull\u002F89)\r\n- Fix `JSONDecodeError` in `JournalStorage` (#5195)\r\n- Fix trial validation (#5229)\r\n- Make `gp.fit_kernel_params` more robust (#5247)\r\n- Fix checking value in `study.tell`  (#5269, thanks @ryota717!)\r\n- Fix `_split_trials` of `TPESampler` for constrained optimization with constant liar (#5298)\r\n- Make each importance evaluator compatible with doc (#5311)\r\n\r\n# Documentation\r\n\r\n- Remove `study optimize` from CLI tutorial page (#5152)\r\n- Clarify the `GridSampler` with ask-and-tell interface (#5153)\r\n- Clean-up `faq.rst` (#5170)\r\n- Make Methods section hidden from Artifact Docs (#5188)\r\n- Enhance README (#5189)\r\n- Add a new section explaing how to customize figures (#5194)\r\n- Replace legacy `plotly.graph_objs` with `plotly.graph_objects` (#5223)\r\n- Add a note section to explain that reseed affects reproducibility (#5233)\r\n- Update links to papers (#5235)\r\n- adding link for module's example to documetation for the `optuna.terminator` module (#5243, thanks @HarshitNagpal29!)\r\n- Replace the old example directory (#5244)\r\n- Add Optuna Dashboard section to docs (#5250, thanks @porink0424!)\r\n- Add a safety guard to Wilcoxon pruner, and modify the docstring (#5256)\r\n- Replace LightGBM with PyTorch-based example to remove `lightgbm` dependency in visualization tutorial (#5257)\r\n- Remove unnecessary comment in `Specify Hyperparameters Manually` tutorial page (#5258)\r\n- Add a tutorial of Wilcoxon pruner (#5266)\r\n- Clarify that pruners","2024-03-18T06:01:11",{"id":229,"version":230,"summary_zh":231,"released_at":232},315309,"v3.5.0","This is the release note of 
[v3.5.0](https:\u002F\u002Fgithub.com\u002Foptuna\u002Foptuna\u002Fmilestone\u002F59?closed=1).\r\n\r\n# Highlights\r\n\r\nThis is a maintenance release with various bug fixes and improvements to the documentation and more.\r\n\r\n# Breaking Changes\r\n\r\n- Isolate the fast.ai module from optuna (https:\u002F\u002Fgithub.com\u002Foptuna\u002Foptuna-integration\u002Fpull\u002F49, thanks @sousu4!)\r\n- Change `n_objectives` condition to be greater than 4 in candidates functions (#5121, thanks @adjeiv!)\r\n\r\n# New Features\r\n\r\n- Support constraints in plot contour (#4975, thanks @y-kamiya!)\r\n- Support infeasible coloring for plot_timeline (#5014)\r\n- Support `constant_liar` in multi-objective `TPESampler` (#5021)\r\n- Add `optuna study-names` cli (#5029)\r\n- Use `ExpectedHypervolumeImprovement` candidates function for `BotorchSampler` (#5065, thanks @adjeiv!)\r\n- Fix logei_candidates_func in `botorch.py` (#5094, thanks @sousu4!)\r\n- Report CV scores from within `OptunaSearchCV` (#5098, thanks @adjeiv!)\r\n\r\n# Enhancements\r\n\r\n- Support `constant_liar` in multi-objective `TPESampler` (#5021)\r\n- Make positional args to kwargs in suggest_int (#5044)\r\n- Ensure n_below is never negative in TPESampler (#5074, thanks @p1kit!)\r\n- Improve visibility of infeasible trials in `plot_contour` (#5107)\r\n\r\n# Bug Fixes\r\n\r\n- Fix random number generator of `NSGAIIChildGenerationStrategy` (#5003)\r\n- Return `trials` for above in MO split when `n_below=0` (#5079)\r\n- Enable loading of read-only files (#5103, thanks @Guillaume227!)\r\n- Fix `logpdf` for scaled `truncnorm` (#5110)\r\n- Fix the bug of matplotlib's plot_rank function (#5133)\r\n\r\n# Documentation\r\n\r\n- Add the table of dependencies in each integration module (#5005)\r\n- Enhance the documentation of `LightGBM` tuner and separate `train()` from `__init__.py` (#5010)\r\n- Update link to reference (#5064)\r\n- Update the FAQ on reproducible optimization results to remove note on 
`HyperbandPruner` (#5075, thanks @felix-cw!)\r\n- Remove `MOTPESampler` from `index.rst` file (#5084, thanks @Ashhar-24!)\r\n- Add a note about the deprecation of `MOTPESampler` to the doc (#5086)\r\n- Add the TPE tutorial paper to the doc-string (#5096)\r\n- Update `README.md` to fix the installation and integration (#5126)\r\n- Clarify that `Recommended budgets` include `n_startup_trials` (#5137)\r\n\r\n# Examples\r\n\r\n- Update version syntax for PyTorch and PyTorch Lightning examples (https:\u002F\u002Fgithub.com\u002Foptuna\u002Foptuna-examples\u002Fpull\u002F205, thanks @JustinGoheen!)\r\n- Update import path (https:\u002F\u002Fgithub.com\u002Foptuna\u002Foptuna-examples\u002Fpull\u002F213)\r\n- Bump up python versions (https:\u002F\u002Fgithub.com\u002Foptuna\u002Foptuna-examples\u002Fpull\u002F214)\r\n- Add the simplest example directly to README (https:\u002F\u002Fgithub.com\u002Foptuna\u002Foptuna-examples\u002Fpull\u002F215)\r\n- Add simples examples for multi-objective and constrained optimizations (https:\u002F\u002Fgithub.com\u002Foptuna\u002Foptuna-examples\u002Fpull\u002F216)\r\n- Revise the comment to describe the problem (https:\u002F\u002Fgithub.com\u002Foptuna\u002Foptuna-examples\u002Fpull\u002F217)\r\n- Modify simple examples based on the Optuna code conventions (https:\u002F\u002Fgithub.com\u002Foptuna\u002Foptuna-examples\u002Fpull\u002F218)\r\n- Remove version specification of `jax` and `jaxlib` (https:\u002F\u002Fgithub.com\u002Foptuna\u002Foptuna-examples\u002Fpull\u002F223)\r\n- Import examples from `optuna\u002Foptuna-dashboard` (https:\u002F\u002Fgithub.com\u002Foptuna\u002Foptuna-examples\u002Fpull\u002F224)\r\n- Add `OptunaSearchCV` with terminator (https:\u002F\u002Fgithub.com\u002Foptuna\u002Foptuna-examples\u002Fpull\u002F225)\r\n- Drop python 3.8 from haiku test (https:\u002F\u002Fgithub.com\u002Foptuna\u002Foptuna-examples\u002Fpull\u002F227)\r\n- Run MXNet in Python 3.11 
(https:\u002F\u002Fgithub.com\u002Foptuna\u002Foptuna-examples\u002Fpull\u002F228)\r\n\r\n# Tests\r\n\r\n- Remove tests for allennlp and chainer (https:\u002F\u002Fgithub.com\u002Foptuna\u002Foptuna-integration\u002Fpull\u002F47)\r\n- Reduce the warning in `tests\u002Fstudy_tests\u002Ftest_study.py` (#5070, thanks @sousu4!)\r\n\r\n# Code Fixes\r\n\r\n- Implement NSGA-III elite population selection strategy (#5027)\r\n- Fix import path of `PyTorchLightning` (#5028)\r\n- Fix `Any` with `float` in `_TreeNode.children` (#5040, thanks @aanghelidi!)\r\n- Fix future annotation in `typing.py` (#5054, thanks @jot-s-bindra!)\r\n- Add future annotations to callback and terminator files inside terminator folder (#5055, thanks @jot-s-bindra!)\r\n- Fix future annotations to edf python file (#5056, thanks @Vaibhav101203!)\r\n- Fix future annotations in _hypervolume_history.py (#5057, thanks @Vaibhav101203!)\r\n- Reduce the warning in `tests\u002Fstorages_tests\u002Ftest_heartbeat.py` (#5066, thanks @sousu4!)\r\n- Fix future annotation to `frozen.py` (#5080, thanks @Vaibhav101203!)\r\n- Fix annotation for `dataframe.py` (#5081, thanks @Vaibhav101203!)\r\n- Fix future annotation (#5083, thanks @Vaibhav101203!)\r\n- Fix type annotation (#5105)\r\n- Fix mypy error in CI (#5106)\r\n- Isolate the fast.ai module (#5120, thanks @sousu4!)\r\n- Clean up workflow file (#5122)\r\n\r\n# Continuous Integration\r\n\r\n- Run `test_tensorflow` in Python 3.11 (https:\u002F\u002Fgithub.com\u002Foptuna\u002Foptuna-integration\u002Fpull\u002F46)\r\n- Exclude mypy checks for chainer (https:\u002F\u002Fgithub.com\u002Foptuna\u002Foptuna-integration\u002Fpull\u002F48)\r\n- Support Python 3.12 on tests for core modules (#5018)\r\n- Fix the issue where formats.sh does not","2023-12-11T05:04:29",{"id":234,"version":235,"summary_zh":236,"released_at":237},315310,"v3.4.0","\r\nThis is the release note of [v3.4.0](https:\u002F\u002Fgithub.com\u002Foptuna\u002Foptuna\u002Fmilestone\u002F58?closed=1).\r\n\r\n# 
Highlights\r\n\r\nOptuna 3.4 newly supports the following new features. See [our release blog](https:\u002F\u002Fmedium.com\u002Foptuna\u002Fannouncing-optuna-3-4-0087644c92fa) for more detailed information.\r\n\r\n* Preferential Optimization (Optuna Dashboard)\r\n* Optuna Artifact\r\n* Jupyter Lab Extension\r\n* VS Code Extension\r\n* User-defined Distance for Categorical Parameters in TPE\r\n* Constrained Optimization Support for Visualization Functions\r\n* User-Defined Plotly’s Figure Support (Optuna Dashboard)\r\n* 3D Model Viewer Support (Optuna Dashboard)\r\n\r\n# Breaking Changes\r\n\r\n- Remove deprecated arguments with regard to `LightGBM>=4.0` (#4844)\r\n- Deprecate `SkoptSampler` (#4913)\r\n\r\n# New Features\r\n\r\n- Support constraints for intermediate values plot (#4851, thanks @adjeiv!)\r\n- Display all objectives on hyperparameter importances plot (#4871)\r\n- Implement `get_all_study_names()` (#4898)\r\n- Support constraints `plot_rank` (#4899, thanks @ryota717!)\r\n- Support Study Artifacts (#4905)\r\n- Support specifying distance between categorical choices in `TPESampler` (#4926)\r\n- Add `metric_names` getter to study (#4930)\r\n- Add artifact middleware for exponential backoff retries (#4956)\r\n- Add `GCSArtifactStore` (#4967, thanks @semiexp!)\r\n- Add `BestValueStagnationEvaluator` (#4974, thanks @smygw72!)\r\n- Allow user-defined objective names in hyperparameter importance plots (#4986)\r\n\r\n# Enhancements\r\n\r\n- CHG constrained param displayed in #cccccc (#4877, thanks @louis-she!)\r\n- Faster implementation of fANOVA (#4897)\r\n- Support constraint in plot slice (#4906, thanks @hrntsm!)\r\n- Add mimetype input (#4910, thanks @hrntsm!)\r\n- Show all ticks in `_parallel_coordinate.py` when log scale (#4911)\r\n- Speed up multi-objective TPE (#5017)\r\n\r\n# Bug Fixes\r\n\r\n- Fix numpy indexing bugs and named tuple comparing (#4874, thanks @ryota717!)\r\n- Fix `fail_stale_trials` with race condition (#4886)\r\n- Fix alias handler 
(#4887)\r\n- Add lazy random state and use it in `RandomSampler` (#4970, thanks @shu65!)\r\n- Fix TensorBoard error on categorical choices of mixed types (#4973, thanks @ciffelia!)\r\n- Use lazy random state in samplers (#4976, thanks @shu65!)\r\n- Fix an error that does not consider `min_child_samples` (#5007)\r\n- Fix `BruteForceSampler` in parallel optimization (#5022)\r\n\r\n# Documentation\r\n\r\n- Fix typo in `_filesystem.py` (#4909)\r\n- Mention a pruner instance is not stored in a storage in resuming tutorial (#4927)\r\n- Add introduction of `optuna-fast-fanova` in documents (#4943)\r\n- Add artifact tutorial (#4954)\r\n- Fix an example code in `Boto3ArtifactStore`'s docstring (#4957)\r\n- Add tutorial for `JournalStorage` (#4980, thanks @semiexp!)\r\n- Fix document regarding `ArtifactNotFound` (#4982, thanks @smygw72!)\r\n- Add the workaround for duplicated samples to FAQ (#5006)\r\n\r\n# Examples\r\n\r\n- Add huggingface's link to external projects (https:\u002F\u002Fgithub.com\u002Foptuna\u002Foptuna-examples\u002Fpull\u002F201)\r\n- Fix samplers CI (https:\u002F\u002Fgithub.com\u002Foptuna\u002Foptuna-examples\u002Fpull\u002F202)\r\n- Set version constraint on aim (https:\u002F\u002Fgithub.com\u002Foptuna\u002Foptuna-examples\u002Fpull\u002F206)\r\n- Add an example of Optuna Terminator for LightGBM (https:\u002F\u002Fgithub.com\u002Foptuna\u002Foptuna-examples\u002Fpull\u002F210, thanks @hamster-86!)\r\n\r\n# Tests\r\n\r\n- Reduce `n_trials` in `test_combination_of_different_distributions_objective` (#4950)\r\n- Replaces California housing dataset with iris dataset (#4953)\r\n- Fix numpy duplication warning (#4978, thanks @torotoki!)\r\n- Make test order deterministic for `pytest-xdist` (#4999)\r\n\r\n# Code Fixes\r\n\r\n- Move shap (https:\u002F\u002Fgithub.com\u002Foptuna\u002Foptuna-integration\u002Fpull\u002F32)\r\n- Remove shap (#4791)\r\n- Use `isinstance` instead of `if type() is ...` (#4896)\r\n- Make `cmaes` dependency optional (#4901)\r\n- 
Call internal sampler's `before_trial` (#4914)
- Refactor `_grid.py` (#4918)
- Fix the `checks-integration` errors on LightGBMTuner (#4923)
- Replace deprecated `botorch` method to remove warning (#4940)
- Fix type annotation (#4941)
- Add `_split_trials` instead of `_get_observation_pairs` and `_split_observation_pairs` (#4947)
- Use `__future__.annotations` in `optuna/visualization/_optimization_history.py` (#4964, thanks @YuigaWada!)
- Fix #4508 for `optuna/visualization/_hypervolume_history.py` (#4965, thanks @RuTiO2le!)
- Use future annotation in `optuna/_convert_positional_args.py` (#4966, thanks @hamster-86!)
- Fix type annotation of `SQLAlchemy` (#4968)
- Use `collections.abc` in `optuna/visualization/_edf.py` (#4969, thanks @g-tamaki!)
- Use `collections.abc` in plot pareto front (#4971)
- Remove `experimental_func` from `metric_names` property (#4983, thanks @semiexp!)
- Add `__future__.annotations` to `progress_bar.py` (#4992)
- Fix annotations in `optuna/optuna/visualization/matplotlib/_optimization_history.py` (#5015, thanks @sousu4!)

# Continuous Integration

- Fix checks integration (#4869)
- Remove fakeredis version constraint (#4873)
- Support `asv` 0.6.0 (#4882)
- Fix speed-benchmarks

---

This is the release note of [v3.3.0](https://github.com/optuna/optuna/milestone/57?closed=1).

# Highlights

## CMA-ES with Learning Rate Adaptation

A new variant of CMA-ES has been added. By setting the `lr_adapt` argument to `True` in `CmaEsSampler`, you can utilize it. For multimodal and/or noisy problems, adapting the learning rate can help avoid getting trapped in local optima. For more details, please refer to #4817.
We want to thank @nomuramasahir0, one of the authors of LRA-CMA-ES, for his great work and the development of the [cmaes](https://github.com/CyberAgentAILab/cmaes) library.

<img width="513" alt="256118903-6796d0c4-3278-4d99-bdb2-00b6fe0fa13b" src="https://github.com/optuna/optuna/assets/5564044/50ed3200-2e02-4b10-8ad1-1f237cb3f3ea">

## Hypervolume History Plot for Multiobjective Optimization

In multiobjective optimization, the history of hypervolume is commonly used as an indicator of performance. Optuna now supports this feature in the visualization module. Thanks to @y0z for your great work!

![246094447-f17d5961-216a-44b3-b9ce-715c105445a7](https://github.com/optuna/optuna/assets/5564044/36350c77-87e1-44e2-83e4-4a4a9480bfde)

## Constrained Optimization Support for Visualization Functions

| Plotly | matplotlib |
| --- | --- |
| ![constrained-optimization-history-plot (1)](https://github.com/optuna/optuna/assets/5564044/942316ac-0e04-4ff8-97a9-dea02fd45f9c) | <img width="1056" alt="254270811-e85c3c5e-44e5-4a04-ba8a-f6ea2c53611f (1)" src="https://github.com/optuna/optuna/assets/5564044/c043c79b-a6ad-46bc-92f5-fd54ee61f995"> |

Some samplers support constrained optimization; however, many other features cannot handle it. We are continuously enhancing support for constraints. In this release, `plot_optimization_history` starts to consider constraint violations.
Thanks to @hrntsm for your great work!

```python
import optuna

def objective(trial):
    x = trial.suggest_float("x", -15, 30)
    y = trial.suggest_float("y", -15, 30)
    v0 = 4 * x**2 + 4 * y**2
    trial.set_user_attr("constraint", [1000 - v0])
    return v0

def constraints_func(trial):
    return trial.user_attrs["constraint"]

sampler = optuna.samplers.TPESampler(constraints_func=constraints_func)
study = optuna.create_study(sampler=sampler)
study.optimize(objective, n_trials=100)
fig = optuna.visualization.plot_optimization_history(study)
fig.show()
```

## Streamlit Integration for Human-in-the-loop Optimization

<img width="1127" alt="streamlit_integration" src="https://github.com/optuna/optuna/assets/5564044/e8ea5d13-c834-4ed3-8c7b-24ab07c37105">

[Optuna Dashboard v0.11.0](https://github.com/optuna/optuna-dashboard/releases/tag/v0.11.0) provides tight integration with the [Streamlit](https://streamlit.io/) framework. By using this feature, you can create your own application for human-in-the-loop optimization.
Please check out [the documentation](https://optuna-dashboard.readthedocs.io/en/latest/api.html#streamlit) and [the example](https://github.com/optuna/optuna-dashboard/tree/main/examples/streamlit_plugin) for details.

# Breaking Changes

- Move mxnet (https://github.com/optuna/optuna-integration/pull/31)
- Remove mxnet (#4790)
- Remove `ordered_dict` argument from `IntersectionSearchSpace` (#4846)

# New Features

- Add `logei_candidate_func` and make it default when available (#4667)
- Support `JournalFileStorage` and `JournalRedisStorage` on CLI (#4696)
- Implement hypervolume history plot for matplotlib backend (#4748, thanks @y0z!)
- Add `cv_results_` to `OptunaSearchCV` (#4751, thanks @jckkvs!)
- Add `optuna.integration.botorch.qnei_candidates_func` (#4753, thanks @kstoneriv3!)
- Add hypervolume history plot for `plotly` backend (#4757, thanks @y0z!)
- Add `FileSystemArtifactStore` (#4763)
- Sort params on fetch (#4775)
- Add constraints support to `_optimization_history_plot` (#4793, thanks @hrntsm!)
- Bump up `LightGBM` version to v4.0.0 (#4810)
- Add constraints support to `matplotlib._optimization_history_plot` (#4816, thanks @hrntsm!)
- Introduce CMA-ES with Learning Rate Adaptation (#4817)
- Add `upload_artifact` api (#4823)
- Add `before_trial` (#4825)
- Add `Boto3ArtifactStore` (#4840)
- Display best objective value in contour plot for a given param pair, not the value from the most recent trial (#4848)

# Enhancements

- Speed up `logpdf` in `_truncnorm.py` (#4712)
- Speed up `erf` (#4713)
- Speed up `get_all_trials` in `InMemoryStorage` (#4716)
- Add a warning for a progress bar not being displayed #4679 (#4728, thanks @rishabsinghh!)
- Make `BruteForceSampler` consider failed trials (#4747)
- Use shallow copy in `_get_latest_trial` (#4774)
- Speed
up `plot_hypervolume_history` (#4776)

# Bug Fixes

- Solve issue #4557 - error_score (#4642, thanks @jckkvs!)
- Fix `BruteForceSampler` for pruned trials (#4720)
- Fix `plot_slice` bug when some of the choices are numeric (#4724)
- Make `LightGBMTuner`