[{"data":1,"prerenderedAt":-1},["ShallowReactive",2],{"similar-keras-team--keras-tuner":3,"tool-keras-team--keras-tuner":61},[4,18,26,36,44,53],{"id":5,"name":6,"github_repo":7,"description_zh":8,"stars":9,"difficulty_score":10,"last_commit_at":11,"category_tags":12,"status":17},4358,"openclaw","openclaw\u002Fopenclaw","OpenClaw 是一款专为个人打造的本地化 AI 助手，旨在让你在自己的设备上拥有完全可控的智能伙伴。它打破了传统 AI 助手局限于特定网页或应用的束缚，能够直接接入你日常使用的各类通讯渠道，包括微信、WhatsApp、Telegram、Discord、iMessage 等数十种平台。无论你在哪个聊天软件中发送消息，OpenClaw 都能即时响应，甚至支持在 macOS、iOS 和 Android 设备上进行语音交互，并提供实时的画布渲染功能供你操控。\n\n这款工具主要解决了用户对数据隐私、响应速度以及“始终在线”体验的需求。通过将 AI 部署在本地，用户无需依赖云端服务即可享受快速、私密的智能辅助，真正实现了“你的数据，你做主”。其独特的技术亮点在于强大的网关架构，将控制平面与核心助手分离，确保跨平台通信的流畅性与扩展性。\n\nOpenClaw 非常适合希望构建个性化工作流的技术爱好者、开发者，以及注重隐私保护且不愿被单一生态绑定的普通用户。只要具备基础的终端操作能力（支持 macOS、Linux 及 Windows WSL2），即可通过简单的命令行引导完成部署。如果你渴望拥有一个懂你",349277,3,"2026-04-06T06:32:30",[13,14,15,16],"Agent","开发框架","图像","数据工具","ready",{"id":19,"name":20,"github_repo":21,"description_zh":22,"stars":23,"difficulty_score":10,"last_commit_at":24,"category_tags":25,"status":17},3808,"stable-diffusion-webui","AUTOMATIC1111\u002Fstable-diffusion-webui","stable-diffusion-webui 是一个基于 Gradio 构建的网页版操作界面，旨在让用户能够轻松地在本地运行和使用强大的 Stable Diffusion 图像生成模型。它解决了原始模型依赖命令行、操作门槛高且功能分散的痛点，将复杂的 AI 绘图流程整合进一个直观易用的图形化平台。\n\n无论是希望快速上手的普通创作者、需要精细控制画面细节的设计师，还是想要深入探索模型潜力的开发者与研究人员，都能从中获益。其核心亮点在于极高的功能丰富度：不仅支持文生图、图生图、局部重绘（Inpainting）和外绘（Outpainting）等基础模式，还独创了注意力机制调整、提示词矩阵、负向提示词以及“高清修复”等高级功能。此外，它内置了 GFPGAN 和 CodeFormer 等人脸修复工具，支持多种神经网络放大算法，并允许用户通过插件系统无限扩展能力。即使是显存有限的设备，stable-diffusion-webui 也提供了相应的优化选项，让高质量的 AI 艺术创作变得触手可及。",162132,"2026-04-05T11:01:52",[14,15,13],{"id":27,"name":28,"github_repo":29,"description_zh":30,"stars":31,"difficulty_score":32,"last_commit_at":33,"category_tags":34,"status":17},1381,"everything-claude-code","affaan-m\u002Feverything-claude-code","everything-claude-code 是一套专为 AI 编程助手（如 Claude Code、Codex、Cursor 等）打造的高性能优化系统。它不仅仅是一组配置文件，而是一个经过长期实战打磨的完整框架，旨在解决 AI 代理在实际开发中面临的效率低下、记忆丢失、安全隐患及缺乏持续学习能力等核心痛点。\n\n通过引入技能模块化、直觉增强、记忆持久化机制以及内置的安全扫描功能，everything-claude-code 能显著提升 AI 在复杂任务中的表现，帮助开发者构建更稳定、更智能的生产级 AI 代理。其独特的“研究优先”开发理念和针对 Token 消耗的优化策略，使得模型响应更快、成本更低，同时有效防御潜在的攻击向量。\n\n这套工具特别适合软件开发者、AI 研究人员以及希望深度定制 AI 工作流的技术团队使用。无论您是在构建大型代码库，还是需要 AI 协助进行安全审计与自动化测试，everything-claude-code 都能提供强大的底层支持。作为一个曾荣获 Anthropic 黑客大奖的开源项目，它融合了多语言支持与丰富的实战钩子（hooks），让 AI 真正成长为懂上",160411,2,"2026-04-18T23:33:24",[14,13,35],"语言模型",{"id":37,"name":38,"github_repo":39,"description_zh":40,"stars":41,"difficulty_score":32,"last_commit_at":42,"category_tags":43,"status":17},2271,"ComfyUI","Comfy-Org\u002FComfyUI","ComfyUI 是一款功能强大且高度模块化的视觉 AI 引擎，专为设计和执行复杂的 Stable Diffusion 图像生成流程而打造。它摒弃了传统的代码编写模式，采用直观的节点式流程图界面，让用户通过连接不同的功能模块即可构建个性化的生成管线。\n\n这一设计巧妙解决了高级 AI 绘图工作流配置复杂、灵活性不足的痛点。用户无需具备编程背景，也能自由组合模型、调整参数并实时预览效果，轻松实现从基础文生图到多步骤高清修复等各类复杂任务。ComfyUI 拥有极佳的兼容性，不仅支持 Windows、macOS 和 Linux 全平台，还广泛适配 NVIDIA、AMD、Intel 及苹果 Silicon 等多种硬件架构，并率先支持 SDXL、Flux、SD3 等前沿模型。\n\n无论是希望深入探索算法潜力的研究人员和开发者，还是追求极致创作自由度的设计师与资深 AI 绘画爱好者，ComfyUI 都能提供强大的支持。其独特的模块化架构允许社区不断扩展新功能，使其成为当前最灵活、生态最丰富的开源扩散模型工具之一，帮助用户将创意高效转化为现实。",109154,"2026-04-18T11:18:24",[14,15,13],{"id":45,"name":46,"github_repo":47,"description_zh":48,"stars":49,"difficulty_score":32,"last_commit_at":50,"category_tags":51,"status":17},6121,"gemini-cli","google-gemini\u002Fgemini-cli","gemini-cli 是一款由谷歌推出的开源 AI 命令行工具，它将强大的 Gemini 大模型能力直接集成到用户的终端环境中。对于习惯在命令行工作的开发者而言，它提供了一条从输入提示词到获取模型响应的最短路径，无需切换窗口即可享受智能辅助。\n\n这款工具主要解决了开发过程中频繁上下文切换的痛点，让用户能在熟悉的终端界面内直接完成代码理解、生成、调试以及自动化运维任务。无论是查询大型代码库、根据草图生成应用，还是执行复杂的 Git 操作，gemini-cli 都能通过自然语言指令高效处理。\n\n它特别适合广大软件工程师、DevOps 
人员及技术研究人员使用。其核心亮点包括支持高达 100 万 token 的超长上下文窗口，具备出色的逻辑推理能力；内置 Google 搜索、文件操作及 Shell 命令执行等实用工具；更独特的是，它支持 MCP（模型上下文协议），允许用户灵活扩展自定义集成，连接如图像生成等外部能力。此外，个人谷歌账号即可享受免费的额度支持，且项目基于 Apache 2.0 协议完全开源，是提升终端工作效率的理想助手。",100752,"2026-04-10T01:20:03",[52,13,15,14],"插件",{"id":54,"name":55,"github_repo":56,"description_zh":57,"stars":58,"difficulty_score":32,"last_commit_at":59,"category_tags":60,"status":17},4721,"markitdown","microsoft\u002Fmarkitdown","MarkItDown 是一款由微软 AutoGen 团队打造的轻量级 Python 工具，专为将各类文件高效转换为 Markdown 格式而设计。它支持 PDF、Word、Excel、PPT、图片（含 OCR）、音频（含语音转录）、HTML 乃至 YouTube 链接等多种格式的解析，能够精准提取文档中的标题、列表、表格和链接等关键结构信息。\n\n在人工智能应用日益普及的今天，大语言模型（LLM）虽擅长处理文本，却难以直接读取复杂的二进制办公文档。MarkItDown 恰好解决了这一痛点，它将非结构化或半结构化的文件转化为模型“原生理解”且 Token 效率极高的 Markdown 格式，成为连接本地文件与 AI 分析 pipeline 的理想桥梁。此外，它还提供了 MCP（模型上下文协议）服务器，可无缝集成到 Claude Desktop 等 LLM 应用中。\n\n这款工具特别适合开发者、数据科学家及 AI 研究人员使用，尤其是那些需要构建文档检索增强生成（RAG）系统、进行批量文本分析或希望让 AI 助手直接“阅读”本地文件的用户。虽然生成的内容也具备一定可读性，但其核心优势在于为机器",93400,"2026-04-06T19:52:38",[52,14],{"id":62,"github_repo":63,"name":64,"description_en":65,"description_zh":66,"ai_summary_zh":66,"readme_en":67,"readme_zh":68,"quickstart_zh":69,"use_case_zh":70,"hero_image_url":71,"owner_login":72,"owner_name":73,"owner_avatar_url":74,"owner_bio":75,"owner_company":76,"owner_location":76,"owner_email":77,"owner_twitter":76,"owner_website":78,"owner_url":79,"languages":80,"stars":93,"forks":94,"last_commit_at":95,"license":96,"difficulty_score":97,"env_os":98,"env_gpu":99,"env_ram":99,"env_deps":100,"category_tags":105,"github_topics":106,"view_count":32,"oss_zip_url":76,"oss_zip_packed_at":76,"status":17,"created_at":113,"updated_at":114,"faqs":115,"releases":145},9343,"keras-team\u002Fkeras-tuner","keras-tuner","A Hyperparameter Tuning Library for Keras","KerasTuner 是一款专为 Keras 和 TensorFlow 打造的超参数优化库，旨在帮助开发者轻松找到机器学习模型的最佳配置。在深度学习训练中，手动调整学习率、网络层数等超参数既耗时又依赖经验，KerasTuner 有效解决了这一痛点，将繁琐的搜索过程自动化且规模化。\n\n这款工具非常适合希望提升模型性能的 AI 开发者、数据科学家以及需要验证新算法的研究人员。其核心亮点在于采用了“定义即运行”（define-by-run）的语法，允许用户在构建模型的代码中直接灵活地定义搜索空间，无需复杂的额外配置。KerasTuner 内置了贝叶斯优化、Hyperband 和随机搜索等多种高效搜索算法，能智能地筛选出验证损失最低的模型组合。此外，它的架构设计极具扩展性，方便研究人员快速集成和实验自定义的搜索策略。只需几行代码，用户即可启动搜索并自动获取最优模型，让模型调优变得更加简单高效。","# KerasTuner\n\n[![](https:\u002F\u002Fgithub.com\u002Fkeras-team\u002Fkeras-tuner\u002Fworkflows\u002FTests\u002Fbadge.svg)](https:\u002F\u002Fgithub.com\u002Fkeras-team\u002Fkeras-tuner\u002Factions?query=workflow%3ATests+branch%3Amaster)\n[![codecov](https:\u002F\u002Fcodecov.io\u002Fgh\u002Fkeras-team\u002Fkeras-tuner\u002Fbranch\u002Fmaster\u002Fgraph\u002Fbadge.svg)](https:\u002F\u002Fcodecov.io\u002Fgh\u002Fkeras-team\u002Fkeras-tuner)\n[![PyPI version](https:\u002F\u002Fbadge.fury.io\u002Fpy\u002Fkeras-tuner.svg)](https:\u002F\u002Fbadge.fury.io\u002Fpy\u002Fkeras-tuner)\n\nKerasTuner is an easy-to-use, scalable hyperparameter optimization framework\nthat solves the pain points of hyperparameter search. 
Easily configure your\nsearch space with a define-by-run syntax, then leverage one of the available\nsearch algorithms to find the best hyperparameter values for your models.\nKerasTuner comes with Bayesian Optimization, Hyperband, and Random Search algorithms\nbuilt-in, and is also designed to be easy for researchers to extend in order to\nexperiment with new search algorithms.\n\nOfficial Website: [https:\u002F\u002Fkeras.io\u002Fkeras_tuner\u002F](https:\u002F\u002Fkeras.io\u002Fkeras_tuner\u002F)\n\n## Quick links\n\n* [Getting started with KerasTuner](https:\u002F\u002Fkeras.io\u002Fguides\u002Fkeras_tuner\u002Fgetting_started)\n* [KerasTuner developer guides](https:\u002F\u002Fkeras.io\u002Fguides\u002Fkeras_tuner\u002F)\n* [KerasTuner API reference](https:\u002F\u002Fkeras.io\u002Fapi\u002Fkeras_tuner\u002F)\n\n\n## Installation\n\nKerasTuner requires **Python 3.8+** and **TensorFlow 2.0+**.\n\nInstall the latest release:\n\n```\npip install keras-tuner\n```\n\nYou can also check out other versions in our\n[GitHub repository](https:\u002F\u002Fgithub.com\u002Fkeras-team\u002Fkeras-tuner).\n\n\n## Quick introduction\n\nImport KerasTuner and TensorFlow:\n\n```python\nimport keras_tuner\nfrom tensorflow import keras\n```\n\nWrite a function that creates and returns a Keras model.\nUse the `hp` argument to define the hyperparameters during model creation.\n\n```python\ndef build_model(hp):\n  model = keras.Sequential()\n  model.add(keras.layers.Dense(\n      hp.Choice('units', [8, 16, 32]),\n      activation='relu'))\n  model.add(keras.layers.Dense(1, activation='relu'))\n  model.compile(loss='mse')\n  return model\n```\n\nInitialize a tuner (here, `RandomSearch`).\nWe use `objective` to specify the objective to select the best models,\nand we use `max_trials` to specify the number of different models to try.\n\n```python\ntuner = keras_tuner.RandomSearch(\n    build_model,\n    objective='val_loss',\n    max_trials=5)\n```\n\nStart the search and get the best model:\n\n```python\ntuner.search(x_train, y_train, epochs=5, validation_data=(x_val, y_val))\nbest_model = tuner.get_best_models()[0]\n```\n\nTo learn more about KerasTuner, check out [this starter guide](https:\u002F\u002Fkeras.io\u002Fguides\u002Fkeras_tuner\u002Fgetting_started\u002F).\n\n## Contributing Guide\n\nPlease refer to the [CONTRIBUTING.md](https:\u002F\u002Fgithub.com\u002Fkeras-team\u002Fkeras-tuner\u002Fblob\u002Fmaster\u002FCONTRIBUTING.md) for the contributing guide.\n\nThank all the contributors!\n\n[![The contributors](https:\u002F\u002Fraw.githubusercontent.com\u002Fkeras-team\u002Fkeras-tuner\u002Fmaster\u002Fdocs\u002Fcontributors.svg)](https:\u002F\u002Fgithub.com\u002Fkeras-team\u002Fkeras-tuner\u002Fgraphs\u002Fcontributors)\n\n## Community\n\nAsk your questions on our [GitHub Discussions](https:\u002F\u002Fgithub.com\u002Fkeras-team\u002Fkeras-tuner\u002Fdiscussions).\n\n## Citing KerasTuner\n\nIf KerasTuner helps your research, we appreciate your citations.\nHere is the BibTeX entry:\n\n```bibtex\n@misc{omalley2019kerastuner,\n\ttitle        = {KerasTuner},\n\tauthor       = {O'Malley, Tom and Bursztein, Elie and Long, James and Chollet, Fran\\c{c}ois and Jin, Haifeng and Invernizzi, Luca and others},\n\tyear         = 2019,\n\thowpublished = {\\url{https:\u002F\u002Fgithub.com\u002Fkeras-team\u002Fkeras-tuner}}\n}\n```\n","# 
KerasTuner\n\n[![](https:\u002F\u002Fgithub.com\u002Fkeras-team\u002Fkeras-tuner\u002Fworkflows\u002FTests\u002Fbadge.svg)](https:\u002F\u002Fgithub.com\u002Fkeras-team\u002Fkeras-tuner\u002Factions?query=workflow%3ATests+branch%3Amaster)\n[![codecov](https:\u002F\u002Fcodecov.io\u002Fgh\u002Fkeras-team\u002Fkeras-tuner\u002Fbranch\u002Fmaster\u002Fgraph\u002Fbadge.svg)](https:\u002F\u002Fcodecov.io\u002Fgh\u002Fkeras-team\u002Fkeras-tuner)\n[![PyPI version](https:\u002F\u002Fbadge.fury.io\u002Fpy\u002Fkeras-tuner.svg)](https:\u002F\u002Fbadge.fury.io\u002Fpy\u002Fkeras-tuner)\n\nKerasTuner 是一个易于使用、可扩展的超参数优化框架，旨在解决超参数搜索中的痛点。您可以使用“边运行边定义”的语法轻松配置搜索空间，然后利用内置的搜索算法为您的模型找到最佳的超参数值。KerasTuner 内置了贝叶斯优化、Hyperband 和随机搜索等算法，并且设计得便于研究人员扩展，以尝试新的搜索算法。\n\n官方网站：[https:\u002F\u002Fkeras.io\u002Fkeras_tuner\u002F](https:\u002F\u002Fkeras.io\u002Fkeras_tuner\u002F)\n\n## 快速链接\n\n* [KerasTuner 入门指南](https:\u002F\u002Fkeras.io\u002Fguides\u002Fkeras_tuner\u002Fgetting_started)\n* [KerasTuner 开发者指南](https:\u002F\u002Fkeras.io\u002Fguides\u002Fkeras_tuner\u002F)\n* [KerasTuner API 参考](https:\u002F\u002Fkeras.io\u002Fapi\u002Fkeras_tuner\u002F)\n\n\n## 安装\n\nKerasTuner 需要 **Python 3.8+** 和 **TensorFlow 2.0+**。\n\n安装最新版本：\n\n```\npip install keras-tuner\n```\n\n您也可以在我们的\n[GitHub 仓库](https:\u002F\u002Fgithub.com\u002Fkeras-team\u002Fkeras-tuner) 中查看其他版本。\n\n\n## 快速入门\n\n导入 KerasTuner 和 TensorFlow：\n\n```python\nimport keras_tuner\nfrom tensorflow import keras\n```\n\n编写一个创建并返回 Keras 模型的函数。在模型创建过程中，使用 `hp` 参数来定义超参数。\n\n```python\ndef build_model(hp):\n  model = keras.Sequential()\n  model.add(keras.layers.Dense(\n      hp.Choice('units', [8, 16, 32]),\n      activation='relu'))\n  model.add(keras.layers.Dense(1, activation='relu'))\n  model.compile(loss='mse')\n  return model\n```\n\n初始化一个调优器（这里使用 `RandomSearch`）。我们通过 `objective` 指定用于选择最佳模型的目标，并通过 `max_trials` 指定要尝试的不同模型数量。\n\n```python\ntuner = keras_tuner.RandomSearch(\n    build_model,\n    objective='val_loss',\n    max_trials=5)\n```\n\n开始搜索并获取最佳模型：\n\n```python\ntuner.search(x_train, y_train, epochs=5, validation_data=(x_val, y_val))\nbest_model = tuner.get_best_models()[0]\n```\n\n如需了解更多关于 KerasTuner 的信息，请参阅 [这篇入门指南](https:\u002F\u002Fkeras.io\u002Fguides\u002Fkeras_tuner\u002Fgetting_started\u002F)。\n\n## 贡献指南\n\n请参阅 [CONTRIBUTING.md](https:\u002F\u002Fgithub.com\u002Fkeras-team\u002Fkeras-tuner\u002Fblob\u002Fmaster\u002FCONTRIBUTING.md) 以获取贡献指南。\n\n感谢所有贡献者！\n\n[![贡献者](https:\u002F\u002Fraw.githubusercontent.com\u002Fkeras-team\u002Fkeras-tuner\u002Fmaster\u002Fdocs\u002Fcontributors.svg)](https:\u002F\u002Fgithub.com\u002Fkeras-team\u002Fkeras-tuner\u002Fgraphs\u002Fcontributors)\n\n## 社区\n\n您可以在我们的 [GitHub Discussions](https:\u002F\u002Fgithub.com\u002Fkeras-team\u002Fkeras-tuner\u002Fdiscussions) 上提出问题。\n\n## 引用 KerasTuner\n\n如果 KerasTuner 对您的研究有所帮助，我们非常感谢您的引用。以下是 BibTeX 条目：\n\n```bibtex\n@misc{omalley2019kerastuner,\n\ttitle        = {KerasTuner},\n\tauthor       = {O'Malley, Tom and Bursztein, Elie and Long, James and Chollet, Fran\\c{c}ois and Jin, Haifeng and Invernizzi, Luca and others},\n\tyear         = 2019,\n\thowpublished = {\\url{https:\u002F\u002Fgithub.com\u002Fkeras-team\u002Fkeras-tuner}}\n}\n```","# KerasTuner 快速上手指南\n\nKerasTuner 是一个易用且可扩展的超参数优化框架，旨在解决模型调参痛点。它支持“定义即运行”的语法来配置搜索空间，并内置了贝叶斯优化、Hyperband 和随机搜索等算法，帮助开发者快速找到模型的最佳超参数组合。\n\n## 环境准备\n\n在开始之前，请确保您的开发环境满足以下要求：\n\n*   **Python 版本**：3.8 或更高版本\n*   **深度学习框架**：TensorFlow 2.0 或更高版本\n\n## 安装步骤\n\n您可以使用 pip 直接安装最新稳定版。国内用户推荐使用清华或阿里镜像源以加速下载。\n\n**标准安装：**\n```bash\npip install 
keras-tuner\n```\n\n**使用国内镜像源加速安装（推荐）：**\n```bash\npip install keras-tuner -i https:\u002F\u002Fpypi.tuna.tsinghua.edu.cn\u002Fsimple\n```\n\n## 基本使用\n\n以下是使用 KerasTuner 进行随机搜索（Random Search）的最简流程：\n\n### 1. 导入依赖\n```python\nimport keras_tuner\nfrom tensorflow import keras\n```\n\n### 2. 构建模型函数\n编写一个返回 Keras 模型的函数。利用 `hp` 参数在模型构建过程中定义超参数搜索空间（例如选择神经元数量）。\n\n```python\ndef build_model(hp):\n  model = keras.Sequential()\n  model.add(keras.layers.Dense(\n      hp.Choice('units', [8, 16, 32]),\n      activation='relu'))\n  model.add(keras.layers.Dense(1, activation='relu'))\n  model.compile(loss='mse')\n  return model\n```\n\n### 3. 初始化 Tuner\n实例化一个搜索器（此处使用 `RandomSearch`）。\n*   `objective`: 指定优化目标（如验证集损失 `val_loss`）。\n*   `max_trials`: 指定尝试的不同模型组合总数。\n\n```python\ntuner = keras_tuner.RandomSearch(\n    build_model,\n    objective='val_loss',\n    max_trials=5)\n```\n\n### 4. 开始搜索并获取最佳模型\n调用 `search` 方法启动超参数搜索，完成后提取表现最好的模型。\n\n```python\ntuner.search(x_train, y_train, epochs=5, validation_data=(x_val, y_val))\nbest_model = tuner.get_best_models()[0]\n```\n\n现在，`best_model` 即为经过超参数优化后的最佳模型，可直接用于评估或预测。","某电商数据团队正在构建用户流失预测模型，急需在有限时间内找到最优神经网络结构以提升准确率。\n\n### 没有 keras-tuner 时\n- 工程师只能依靠经验手动猜测隐藏层节点数和学习率，反复修改代码并重新运行训练，效率极低。\n- 为了测试不同参数组合，需要编写大量重复的脚本逻辑，导致代码臃肿且难以维护。\n- 由于计算资源有限，无法系统性地遍历所有潜在的优秀参数组合，往往陷入局部最优解而不自知。\n- 实验过程缺乏统一记录，难以回溯哪组参数对应哪个验证集得分，协作复盘困难。\n\n### 使用 keras-tuner 后\n- 通过定义 `hp.Choice` 等搜索空间，keras-tuner 自动利用贝叶斯优化算法智能推荐下一组最佳参数，无需人工干预。\n- 仅需编写一次模型构建函数，keras-tuner 即可自动管理数百次试验的调度与执行，大幅精简代码。\n- 内置的 Hyperband 等算法能提前终止表现不佳的试验，将宝贵的 GPU 资源集中投入到更有潜力的参数组合上。\n- 自动保存每次试验的详细日志和最佳模型权重，随时可调用 `get_best_models` 获取经过验证的最优解。\n\nkeras-tuner 将原本耗时数周的人工“猜参”过程转化为自动化、智能化的搜索流程，显著提升了模型迭代效率与最终性能。","https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fkeras-team_keras-tuner_439b3ff7.png","keras-team","Keras","https:\u002F\u002Foss.gittoolsai.com\u002Favatars\u002Fkeras-team_dd76ba2a.jpg","Deep Learning for humans",null,"keras-users@googlegroups.com","https:\u002F\u002Fkeras.io\u002F","https:\u002F\u002Fgithub.com\u002Fkeras-team",[81,85,89],{"name":82,"color":83,"percentage":84},"Python","#3572A5",99.6,{"name":86,"color":87,"percentage":88},"Shell","#89e051",0.4,{"name":90,"color":91,"percentage":92},"Dockerfile","#384d54",0,2925,403,"2026-04-09T01:43:49","Apache-2.0",1,"","未说明",{"notes":101,"python":102,"dependencies":103},"该工具是用于 Keras\u002FTensorFlow 的超参数优化框架，内置贝叶斯优化、Hyperband 和随机搜索算法。具体 GPU 和内存需求取决于所运行的 TensorFlow 模型及数据集规模，README 中未给出硬性指标。","3.8+",[104],"tensorflow>=2.0",[14],[107,108,109,110,111,112],"automl","deep-learning","hyperparameter-optimization","keras","machine-learning","tensorflow","2026-03-27T02:49:30.150509","2026-04-19T09:17:57.000203",[116,121,126,131,136,141],{"id":117,"question_zh":118,"answer_zh":119,"source_url":120},41910,"如何在 Keras Tuner 中调整 epoch 数量和 batch_size？","官方推荐的方法是自定义 HyperModel 类并重写 fit 方法。在 build 方法中定义模型结构，在 fit 方法中使用 hp.Choice 或 hp.Int 来动态设置 batch_size。示例代码如下：\n\nclass MyHyperModel(kt.HyperModel):\n    def build(self, hp):\n        model = keras.Sequential()\n        model.add(layers.Flatten())\n        model.add(layers.Dense(units=hp.Int(\"units\", min_value=32, max_value=512, step=32), activation=\"relu\"))\n        model.add(layers.Dense(10, activation=\"softmax\"))\n        model.compile(optimizer=\"adam\", loss=\"categorical_crossentropy\", metrics=[\"accuracy\"])\n        return model\n\n    def fit(self, hp, model, *args, **kwargs):\n        return model.fit(\n            *args,\n            batch_size=hp.Choice(\"batch_size\", [32, 64, 128]),\n            
epochs=hp.Int(\"epochs\", 10, 50),\n            **kwargs\n        )","https:\u002F\u002Fgithub.com\u002Fkeras-team\u002Fkeras-tuner\u002Fissues\u002F122",{"id":122,"question_zh":123,"answer_zh":124,"source_url":125},41911,"如何在 Keras Tuner 中使用自定义目标函数（如特定类别的精确率）？","可以定义自定义指标函数并在初始化 Tuner 时通过 kt.Objective 指定。需要注意优化方向（direction），对于精确率（Precision）或 AUC 等指标，方向应设为 \"max\"（越大越好）。示例：\n\ndef prec_class1(y_true, y_pred):\n    # 计算逻辑...\n    return precision_value\n\ntuner = Hyperband(\n    hypermodel,\n    objective=kt.Objective(\"prec_class1\", direction=\"max\"),\n    max_epochs=30,\n    executions_per_trial=1,\n    directory=root_dir\n)","https:\u002F\u002Fgithub.com\u002Fkeras-team\u002Fkeras-tuner\u002Fissues\u002F263",{"id":127,"question_zh":128,"answer_zh":129,"source_url":130},41912,"使用条件超参数（parent_name\u002Fparent_values）时遇到 KeyError 怎么办？","这是一个在 keras-tuner 1.0.4 及后续部分版本中出现的已知 Bug。当使用 parent_name 和 parent_values 定义条件超参数时会抛出 KeyError。解决方案是将 keras-tuner 升级到 1.2.0 或更高版本，该问题已在 1.2.0 中修复。如果无法升级，可暂时降级到 1.0.3 版本。","https:\u002F\u002Fgithub.com\u002Fkeras-team\u002Fkeras-tuner\u002Fissues\u002F614",{"id":132,"question_zh":133,"answer_zh":134,"source_url":135},41913,"Keras Tuner 是否支持数据增强（Data Augmentation）的超参数调优？","支持。虽然库中没有内置专门的数据增强调优接口，但可以通过重写 Tuner 类的 run_trial 方法来实现。用户可以在 trial 运行过程中动态应用不同的数据增强策略。相关讨论和更高级的自动化功能建议在 AutoKeras 项目中查看。","https:\u002F\u002Fgithub.com\u002Fkeras-team\u002Fkeras-tuner\u002Fissues\u002F153",{"id":137,"question_zh":138,"answer_zh":139,"source_url":140},41914,"调用 get_best_models() 获取 Sequential 模型时报错 \"Weights have not yet been created\" 如何解决？","该错误通常发生在加载权重时模型尚未构建（即未调用 build 或未通过输入推断形状）。在使用 Sequential 模型时，确保在保存模型前已经用虚拟数据或实际输入调用了 model.build(input_shape) 或者让模型至少运行过一次 forward pass。这样权重会被正确初始化，get_best_models() 才能成功加载权重。","https:\u002F\u002Fgithub.com\u002Fkeras-team\u002Fkeras-tuner\u002Fissues\u002F75",{"id":142,"question_zh":143,"answer_zh":144,"source_url":125},41915,"如何确定优化目标（如 AUC、Precision）的方向是 \"min\" 还是 \"max\"？","对于性能指标如 AUC（曲线下面积）、Precision（精确率）、Accuracy（准确率）等，数值越大代表模型越好，因此 direction 应设置为 \"max\"。例如，AUC 为 0.9 的模型优于 0.6 的模型。反之，对于 Loss（损失函数）等误差指标，数值越小越好，direction 应设为 \"min\"。",[146,151,156,161,166,171,176,181,186,191,196,201,206,211,216,221,226,231,236,241],{"id":147,"version":148,"summary_zh":149,"released_at":150},333915,"v1.4.8","## 变更内容\n\n* 添加了 `grpcio` 和 `protobuf` 作为依赖项。\n* 对增强功能进行了小幅修复。\n\n\n**完整变更日志**: https:\u002F\u002Fgithub.com\u002Fkeras-team\u002Fkeras-tuner\u002Fcompare\u002Fv1.4.7...v1.4.8","2025-11-11T05:22:28",{"id":152,"version":153,"summary_zh":154,"released_at":155},333916,"v1.4.7","## Bug 修复\n* 默认将主服务器在关闭前的等待时间更改为 60 分钟。\n\n## 新贡献者\n* @sfo 在 https:\u002F\u002Fgithub.com\u002Fkeras-team\u002Fkeras-tuner\u002Fpull\u002F977 中完成了他们的首次贡献。\n* @onponomarev 在 https:\u002F\u002Fgithub.com\u002Fkeras-team\u002Fkeras-tuner\u002Fpull\u002F991 中完成了他们的首次贡献。\n\n**完整变更日志**: https:\u002F\u002Fgithub.com\u002Fkeras-team\u002Fkeras-tuner\u002Fcompare\u002Fv1.4.6...v1.4.7","2024-03-04T19:29:02",{"id":157,"version":158,"summary_zh":159,"released_at":160},333917,"v1.4.6","## Bug 修复\n* 在并行运行时，主控进程可能会在某些客户端尚未请求下一次试验之前就退出，从而导致这些客户端也被通知退出。现已修复此问题。\n\n## 新特性\n* 将依赖项从 `keras-core` 更新至 Keras 3 及以上版本。同时为保持向后兼容性，仍支持 Keras 2 版本。\n\n\n## 新贡献者\n* @AniketP04 在 https:\u002F\u002Fgithub.com\u002Fkeras-team\u002Fkeras-tuner\u002Fpull\u002F962 中完成了首次贡献\n\n**完整变更日志**: https:\u002F\u002Fgithub.com\u002Fkeras-team\u002Fkeras-tuner\u002Fcompare\u002Fv1.4.5...v1.4.6","2023-11-07T19:21:59",{"id":162,"version":163,"summary_zh":164,"released_at":165},333918,"v1.4.5","## 错误修复\n* 在并行运行时，客户端 
Oracle 会在首席 Oracle 无响应时无限等待。现已修复。\n* 在并行运行时，客户端会在调用 `oracle.end_trial()` 后再次调用首席 Oracle，而此时首席 Oracle 已经结束。现已修复。\n* 在并行运行时，首席 Oracle 曾在 `tuner.__init__()` 中开始阻塞。然而，更合理的情况是在调用 `tuner.search()` 时才进行阻塞。现已修复。\n* 无法执行 `from keras_tuner.engine.hypermodel import HyperModel`。现已修复。\n* 无法执行 `from keras_tuner.engine.hyperparameters import HyperParameters`。现已修复。\n* 无法执行 `from keras_tuner.engine.metrics_tracking import infer_metric_direction`。现已修复。\n* 无法执行 `from keras_tuner.engine.oracle import Objective`。现已修复。\n* 无法执行 `from keras_tuner.engine.oracle import Oracle`。现已修复。\n\n**完整变更日志**: https:\u002F\u002Fgithub.com\u002Fkeras-team\u002Fkeras-tuner\u002Fcompare\u002Fv1.4.4...v1.4.5","2023-10-12T21:49:31",{"id":167,"version":168,"summary_zh":169,"released_at":170},333919,"v1.4.4","## 错误修复\n* 无法执行 `from keras_tuner.engine.hyperparameters import serialize`。现已修复。\n* 无法执行 `from keras_tuner.engine.hyperparameters import deserialize`。现已修复。\n* 无法执行 `from keras_tuner.engine.tuner import maybe_distribute`。现已修复。\n\n\n**完整变更日志**: https:\u002F\u002Fgithub.com\u002Fkeras-team\u002Fkeras-tuner\u002Fcompare\u002Fv1.4.3...v1.4.4","2023-10-02T21:02:37",{"id":172,"version":173,"summary_zh":174,"released_at":175},333920,"v1.4.3","## Bug 修复\n* 无法执行 `from keras_tuner.engine.tuner import Tuner`。现已修复。\n* 当 TensorFlow 版本较低时，使用 Keras 模型会因没有名为 `get_build_config` 的属性而报错。现已修复。\n\n**完整变更日志**：https:\u002F\u002Fgithub.com\u002Fkeras-team\u002Fkeras-tuner\u002Fcompare\u002Fv1.4.2...v1.4.3","2023-09-29T20:54:22",{"id":177,"version":178,"summary_zh":179,"released_at":180},333921,"v1.4.2","## 错误修复\n* 无法执行 `from keras_tuner.engine import trial`。现已修复。\n\n\n**完整变更日志**: https:\u002F\u002Fgithub.com\u002Fkeras-team\u002Fkeras-tuner\u002Fcompare\u002Fv1.4.1...v1.4.2","2023-09-26T16:55:53",{"id":182,"version":183,"summary_zh":184,"released_at":185},333922,"v1.4.1","## 错误修复\n* 无法执行 `from keras_tuner.engine import base_tuner`。现已修复。\n\n\n**完整变更日志**：https:\u002F\u002Fgithub.com\u002Fkeras-team\u002Fkeras-tuner\u002Fcompare\u002Fv1.4.0...v1.4.1","2023-09-25T17:18:57",{"id":187,"version":188,"summary_zh":189,"released_at":190},333923,"v1.4.0","## 破坏性变更\n* 所有私有 API 都被移至 `keras_tuner.src.*` 命名空间下。例如，如果您之前使用 `keras_tuner.some_private_api`，现在则需要改为 `keras_tuner.src.some_private_api`。\n\n## 新特性\n* 支持多后端的 Keras Core。\n\n## 新贡献者\n* @airvzxf 在 https:\u002F\u002Fgithub.com\u002Fkeras-team\u002Fkeras-tuner\u002Fpull\u002F906 中做出了首次贡献。\n* @pnacht 在 https:\u002F\u002Fgithub.com\u002Fkeras-team\u002Fkeras-tuner\u002Fpull\u002F930 中做出了首次贡献。\n* @fhausmann 在 https:\u002F\u002Fgithub.com\u002Fkeras-team\u002Fkeras-tuner\u002Fpull\u002F926 中做出了首次贡献。\n\n**完整变更日志**: https:\u002F\u002Fgithub.com\u002Fkeras-team\u002Fkeras-tuner\u002Fcompare\u002Fv1.3.5...v1.4.0rc","2023-09-22T21:31:54",{"id":192,"version":193,"summary_zh":194,"released_at":195},333924,"v1.4.0rc0","## 破坏性变更\n* 所有私有 API 都被移至 `keras_tuner.src.*` 命名空间下。例如，如果您之前使用 `keras_tuner.some_private_api`，现在则需要改为 `keras_tuner.src.some_private_api`。\n\n## 新特性\n* 支持多后端的 Keras Core。\n\n## 新贡献者\n* @airvzxf 在 https:\u002F\u002Fgithub.com\u002Fkeras-team\u002Fkeras-tuner\u002Fpull\u002F906 中完成了首次贡献。\n* @pnacht 在 https:\u002F\u002Fgithub.com\u002Fkeras-team\u002Fkeras-tuner\u002Fpull\u002F930 中完成了首次贡献。\n* @fhausmann 在 https:\u002F\u002Fgithub.com\u002Fkeras-team\u002Fkeras-tuner\u002Fpull\u002F926 中完成了首次贡献。\n\n**完整更新日志**: https:\u002F\u002Fgithub.com\u002Fkeras-team\u002Fkeras-tuner\u002Fcompare\u002Fv1.3.5...v1.4.0rc0","2023-09-22T19:50:42",{"id":197,"version":198,"summary_zh":199,"released_at":200},333925,"v1.3.5","## Breaking 
changes\r\n* Removed TensorFlow from the required dependencies of KerasTuner. The user needs\r\n  to install TensorFlow either separately with KerasTuner or with\r\n  `pip install keras_tuner[tensorflow]`. This change is because some people may\r\n  want to use KerasTuner with `tensorflow-cpu` instead of `tensorflow`.\r\n\r\n## Bug fixes\r\n* KerasTuner used to require the protobuf version to be under 3.20. The limit is\r\n  removed. Now, it supports both protobuf 3 and 4.\r\n\r\n**Full Changelog**: https:\u002F\u002Fgithub.com\u002Fkeras-team\u002Fkeras-tuner\u002Fcompare\u002Fv1.3.4...v1.3.5","2023-04-13T04:21:57",{"id":202,"version":203,"summary_zh":204,"released_at":205},333926,"v1.3.4","# Bug fixes\r\n* If you have a protobuf version > 3.20, it would throw an error when importing KerasTuner. It is now fixed.","2023-04-02T22:17:45",{"id":207,"version":208,"summary_zh":209,"released_at":210},333927,"v1.3.3","* KerasTuner would install protobuf 3.19 with `protobuf\u003C=3.20`. We want to install `3.20.3`, so we changed it to `protobuf\u003C=3.20.3`. It is now fixed.\r\n\r\n\r\n**Full Changelog**: https:\u002F\u002Fgithub.com\u002Fkeras-team\u002Fkeras-tuner\u002Fcompare\u002Fv1.3.2...v1.3.3","2023-03-27T19:08:38",{"id":212,"version":213,"summary_zh":214,"released_at":215},333928,"v1.3.2","# Bug fixes\r\n* It used to install `protobuf` 4.22.1 when installed with TensorFlow 2.12, which is not compatible with KerasTuner. We limited the `protobuf` version to \u003C=3.20, which is compatible with all TensorFlow versions so far.\r\n\r\n**Full Changelog**: https:\u002F\u002Fgithub.com\u002Fkeras-team\u002Fkeras-tuner\u002Fcompare\u002Fv1.3.1...v1.3.2","2023-03-27T17:58:28",{"id":217,"version":218,"summary_zh":219,"released_at":220},333929,"v1.3.1","## Bug fixes\r\n* The `Tuner.results_summary()` did not print error messages for failed trials and did not display `Objective` information correctly. It is now fixed.\r\n* The `BayesianOptimization` would break when not specifying the `num_initial_points` and overriding `.run_trial()`. It is now fixed.\r\n* TensorFlow 2.12 would break because of the different protobuf version. It is now fixed.\r\n\r\n## New Contributors\r\n* @jkittner made their first contribution in https:\u002F\u002Fgithub.com\u002Fkeras-team\u002Fkeras-tuner\u002Fpull\u002F860\r\n\r\n**Full Changelog**: https:\u002F\u002Fgithub.com\u002Fkeras-team\u002Fkeras-tuner\u002Fcompare\u002Fv1.3.0...v1.3.1","2023-03-27T02:59:28",{"id":222,"version":223,"summary_zh":224,"released_at":225},333930,"v1.3.0","## Breaking changes\r\n* Removed `Logger` and `CloudLogger` and the related arguments in `BaseTuner.__init__(logger=...)`.\r\n* Removed `keras_tuner.oracles.BayesianOptimization`, `keras_tuner.oracles.Hyperband`, `keras_tuner.oracles.RandomSearch`, which were actually `Oracle`s instead of `Tuner`s. Please use `keras_tuner.oracles.BayesianOptimizationOracle`, `keras_tuner.oracles.HyperbandOracle`, `keras_tuner.oracles.RandomSearchOracle` instead.\r\n* Removed `keras_tuner.Sklearn`. Please use `keras_tuner.SklearnTuner` instead.\r\n\r\n## New features\r\n* `keras_tuner.oracles.GridSearchOracle` is now available as a standalone `Oracle` to be used with custom tuners.\r\n\r\n\r\n**Full Changelog**: https:\u002F\u002Fgithub.com\u002Fkeras-team\u002Fkeras-tuner\u002Fcompare\u002F1.2.1...v1.3.0","2023-02-23T01:48:07",{"id":227,"version":228,"summary_zh":229,"released_at":230},333931,"1.2.1","## Bug fixes\r\n* The resume feature (`overwrite=False`) would crash in 1.2.0. 
This is now fixed.\r\n\r\n## New Contributors\r\n* @nkovela1 made their first contribution in https:\u002F\u002Fgithub.com\u002Fkeras-team\u002Fkeras-tuner\u002Fpull\u002F834\r\n\r\n**Full Changelog**: https:\u002F\u002Fgithub.com\u002Fkeras-team\u002Fkeras-tuner\u002Fcompare\u002F1.2.0...1.2.1","2023-02-10T01:06:53",{"id":232,"version":233,"summary_zh":234,"released_at":235},333932,"1.2.0","# Release v1.2.0\r\n\r\n## Breaking changes\r\n* If you implemented your own `Tuner`, the old use case of reporting results with `Oracle.update_trial()` in `Tuner.run_trial()` is deprecated. Please return the metrics in `Tuner.run_trial()` instead.\r\n* If you implemented your own `Oracle` and overrode `Oracle.end_trial()`, you need to change the signature of the function from `Oracle.end_trial(trial.trial_id, trial.status)` to `Oracle.end_trial(trial)`.\r\n* The default value of the `step` argument in `keras_tuner.HyperParameters.Int()` is changed to `None`, which was `1` before. No change in default behavior.\r\n* The default value of the `sampling` argument in `keras_tuner.HyperParameters.Int()` is changed to `\"linear\"`, which was `None`\r\n  before. No change in default behavior.\r\n* The default value of the `sampling` argument in `keras_tuner.HyperParameters.Float()` is changed to `\"linear\"`, which was\r\n  `None` before. No change in default behavior.\r\n* If you explicitly rely on protobuf values, the new protobuf bug fix may affect you.\r\n* Changed the mechanism of how a random sample is drawn for a hyperparameter. They now all start from a random value between 0 and 1, and convert the value to a random sample.\r\n\r\n## New features\r\n* A new tuner is added, `keras_tuner.GridSearch`, which can exhaust all the possible hyperparameter combinations.\r\n* Better fault tolerance during the search. Added two new arguments to `Tuner` and `Oracle` initializers, `max_retries_per_trial` and `max_consecutive_failed_trials`.\r\n* You can now mark a `Trial` as failed by `raise keras_tuner.FailedTrialError(\"error message.\")` in `HyperModel.build()`, `HyperModel.fit()`, or your model build function.\r\n* Provides better error messages for invalid configs for `Int` and `Float` type hyperparameters.\r\n* A decorator `@keras_tuner.synchronized` is added to decorate the methods in `Oracle` and its subclasses to synchronize the concurrent calls to ensure thread safety in parallel tuning.\r\n\r\n## Bug fixes\r\n* Protobuf was not converting Boolean type hyperparameters correctly. This is now fixed.\r\n* Hyperband was not loading the weights correctly for half-trained models. This is now fixed.\r\n* `KeyError` may occur if using `hp.conditional_scope()`, or the `parent` argument for hyperparameters. This is now fixed.\r\n* `num_initial_points` of the `BayesianOptimization` should default to `3 * dimension`, but it defaults to 2. This is now fixed.\r\n* It would throw an error when using a concrete Keras optimizer object to override the `HyperModel` compile arg. This is now fixed.\r\n* Workers might crash due to `Oracle` reloading when running in parallel. 
This is now fixed.\r\n\r\n## New Contributors\r\n* @Firas-RHIMI made their first contribution in https:\u002F\u002Fgithub.com\u002Fkeras-team\u002Fkeras-tuner\u002Fpull\u002F711\r\n* @HanxiaoLyu made their first contribution in https:\u002F\u002Fgithub.com\u002Fkeras-team\u002Fkeras-tuner\u002Fpull\u002F746\r\n* @leleogere made their first contribution in https:\u002F\u002Fgithub.com\u002Fkeras-team\u002Fkeras-tuner\u002Fpull\u002F794\r\n* @LuNoX made their first contribution in https:\u002F\u002Fgithub.com\u002Fkeras-team\u002Fkeras-tuner\u002Fpull\u002F815\r\n\r\n**Full Changelog**: https:\u002F\u002Fgithub.com\u002Fkeras-team\u002Fkeras-tuner\u002Fcompare\u002F1.1.3...1.2.0","2023-01-28T18:25:25",{"id":237,"version":238,"summary_zh":239,"released_at":240},333933,"1.2.0rc0","## Breaking changes\r\n* If you implemented your own `Tuner`, the old use case of reporting results with `Oracle.update_trial()` in `Tuner.run_trial()` is deprecated. Please return the metrics in `Tuner.run_trial()` instead.\r\n* If you implemented your own `Oracle` and overrode `Oracle.end_trial()`, you need to change the signature of the function from `Oracle.end_trial(trial.trial_id, trial.status)` to `Oracle.end_trial(trial)`.\r\n* The default value of the `step` argument in `keras_tuner.HyperParameters.Int()` is changed to `None`, which was `1` before. No change in default behavior.\r\n* The default value of the `sampling` argument in `keras_tuner.HyperParameters.Int()` is changed to `\"linear\"`, which was `None` before. No change in default behavior.\r\n* The default value of the `sampling` argument in `keras_tuner.HyperParameters.Float()` is changed to `\"linear\"`, which was `None` before. No change in default behavior.\r\n* If you explicitly rely on protobuf values, the new protobuf bug fix may affect you.\r\n* Changed the mechanism of how a random sample is drawn for a hyperparameter. They now all start from a random value between 0 and 1, and convert the value to a random sample.\r\n\r\n## New features\r\n* A new tuner is added, `keras_tuner.GridSearch`, which can exhaust all the possible hyperparameter combinations.\r\n* Better fault tolerance during the search. Added two new arguments to `Tuner` and `Oracle` initializers, `max_retries_per_trial` and `max_consecutive_failed_trials`.\r\n* Provides better error messages for invalid configs for `Int` and `Float` type hyperparameters.\r\n* A decorator `@keras_tuner.synchronized` is added to decorate the methods in `Oracle` and its subclasses to synchronize the concurrent calls to ensure thread safety in parallel tuning.\r\n\r\n## Bug fixes\r\n* Protobuf was not converting Boolean type hyperparameters correctly. This is now fixed.\r\n* Hyperband was not loading the weights correctly for half-trained models. This is now fixed.\r\n* `KeyError` may occur if using `hp.conditional_scope()`, or the `parent` argument for hyperparameters. This is now fixed.\r\n* `num_initial_points` of the `BayesianOptimization` should default to `3 * dimension`, but it defaults to 2. 
This is now fixed.\r\n\r\n## New Contributors\r\n* @Firas-RHIMI made their first contribution in https:\u002F\u002Fgithub.com\u002Fkeras-team\u002Fkeras-tuner\u002Fpull\u002F711\r\n* @HanxiaoLyu made their first contribution in https:\u002F\u002Fgithub.com\u002Fkeras-team\u002Fkeras-tuner\u002Fpull\u002F746\r\n* @leleogere made their first contribution in https:\u002F\u002Fgithub.com\u002Fkeras-team\u002Fkeras-tuner\u002Fpull\u002F794\r\n* @LuNoX made their first contribution in https:\u002F\u002Fgithub.com\u002Fkeras-team\u002Fkeras-tuner\u002Fpull\u002F815\r\n\r\n**Full Changelog**: https:\u002F\u002Fgithub.com\u002Fkeras-team\u002Fkeras-tuner\u002Fcompare\u002F1.1.3...1.2.0rc0","2023-01-13T01:03:38",{"id":242,"version":243,"summary_zh":244,"released_at":245},333934,"1.1.3","## Summary\r\nBug fixes to better support AutoKeras.\r\n\r\n## What's Changed\r\n* Fixed issue #677 by @Anselmoo in https:\u002F\u002Fgithub.com\u002Fkeras-team\u002Fkeras-tuner\u002Fpull\u002F678\r\n* Adopt safe model and trial saving practices in the multi-worker setting by @jamesmullenbach in https:\u002F\u002Fgithub.com\u002Fkeras-team\u002Fkeras-tuner\u002Fpull\u002F684\r\n* tuner_utils: use datetime to calculate elapsed time by @mebeim in https:\u002F\u002Fgithub.com\u002Fkeras-team\u002Fkeras-tuner\u002Fpull\u002F690\r\n* Add pre_create_trial callback by @jamesmullenbach in https:\u002F\u002Fgithub.com\u002Fkeras-team\u002Fkeras-tuner\u002Fpull\u002F695\r\n* Multi-worker file writing checks by @jamesmullenbach in https:\u002F\u002Fgithub.com\u002Fkeras-team\u002Fkeras-tuner\u002Fpull\u002F694\r\n* Update actions.yml by @haifeng-jin in https:\u002F\u002Fgithub.com\u002Fkeras-team\u002Fkeras-tuner\u002Fpull\u002F698\r\n* Add \"declare_hyperparameters\" to HyperModel by @jamesmullenbach in https:\u002F\u002Fgithub.com\u002Fkeras-team\u002Fkeras-tuner\u002Fpull\u002F696\r\n* Record best epoch info with update_trial by @haifeng-jin in https:\u002F\u002Fgithub.com\u002Fkeras-team\u002Fkeras-tuner\u002Fpull\u002F706\r\n\r\n## New Contributors\r\n* @Anselmoo made their first contribution in https:\u002F\u002Fgithub.com\u002Fkeras-team\u002Fkeras-tuner\u002Fpull\u002F678\r\n* @jamesmullenbach made their first contribution in https:\u002F\u002Fgithub.com\u002Fkeras-team\u002Fkeras-tuner\u002Fpull\u002F684\r\n* @mebeim made their first contribution in https:\u002F\u002Fgithub.com\u002Fkeras-team\u002Fkeras-tuner\u002Fpull\u002F690\r\n\r\n**Full Changelog**: https:\u002F\u002Fgithub.com\u002Fkeras-team\u002Fkeras-tuner\u002Fcompare\u002F1.1.2...1.1.3","2022-07-16T04:22:02"]