[{"data":1,"prerenderedAt":-1},["ShallowReactive",2],{"similar-kedro-org--kedro":3,"tool-kedro-org--kedro":61},[4,18,26,36,44,53],{"id":5,"name":6,"github_repo":7,"description_zh":8,"stars":9,"difficulty_score":10,"last_commit_at":11,"category_tags":12,"status":17},4358,"openclaw","openclaw\u002Fopenclaw","OpenClaw 是一款专为个人打造的本地化 AI 助手，旨在让你在自己的设备上拥有完全可控的智能伙伴。它打破了传统 AI 助手局限于特定网页或应用的束缚，能够直接接入你日常使用的各类通讯渠道，包括微信、WhatsApp、Telegram、Discord、iMessage 等数十种平台。无论你在哪个聊天软件中发送消息，OpenClaw 都能即时响应，甚至支持在 macOS、iOS 和 Android 设备上进行语音交互，并提供实时的画布渲染功能供你操控。\n\n这款工具主要解决了用户对数据隐私、响应速度以及“始终在线”体验的需求。通过将 AI 部署在本地，用户无需依赖云端服务即可享受快速、私密的智能辅助，真正实现了“你的数据，你做主”。其独特的技术亮点在于强大的网关架构，将控制平面与核心助手分离，确保跨平台通信的流畅性与扩展性。\n\nOpenClaw 非常适合希望构建个性化工作流的技术爱好者、开发者，以及注重隐私保护且不愿被单一生态绑定的普通用户。只要具备基础的终端操作能力（支持 macOS、Linux 及 Windows WSL2），即可通过简单的命令行引导完成部署。如果你渴望拥有一个懂你",349277,3,"2026-04-06T06:32:30",[13,14,15,16],"Agent","开发框架","图像","数据工具","ready",{"id":19,"name":20,"github_repo":21,"description_zh":22,"stars":23,"difficulty_score":10,"last_commit_at":24,"category_tags":25,"status":17},3808,"stable-diffusion-webui","AUTOMATIC1111\u002Fstable-diffusion-webui","stable-diffusion-webui 是一个基于 Gradio 构建的网页版操作界面，旨在让用户能够轻松地在本地运行和使用强大的 Stable Diffusion 图像生成模型。它解决了原始模型依赖命令行、操作门槛高且功能分散的痛点，将复杂的 AI 绘图流程整合进一个直观易用的图形化平台。\n\n无论是希望快速上手的普通创作者、需要精细控制画面细节的设计师，还是想要深入探索模型潜力的开发者与研究人员，都能从中获益。其核心亮点在于极高的功能丰富度：不仅支持文生图、图生图、局部重绘（Inpainting）和外绘（Outpainting）等基础模式，还独创了注意力机制调整、提示词矩阵、负向提示词以及“高清修复”等高级功能。此外，它内置了 GFPGAN 和 CodeFormer 等人脸修复工具，支持多种神经网络放大算法，并允许用户通过插件系统无限扩展能力。即使是显存有限的设备，stable-diffusion-webui 也提供了相应的优化选项，让高质量的 AI 艺术创作变得触手可及。",162132,"2026-04-05T11:01:52",[14,15,13],{"id":27,"name":28,"github_repo":29,"description_zh":30,"stars":31,"difficulty_score":32,"last_commit_at":33,"category_tags":34,"status":17},1381,"everything-claude-code","affaan-m\u002Feverything-claude-code","everything-claude-code 是一套专为 AI 编程助手（如 Claude Code、Codex、Cursor 等）打造的高性能优化系统。它不仅仅是一组配置文件，而是一个经过长期实战打磨的完整框架，旨在解决 AI 
代理在实际开发中面临的效率低下、记忆丢失、安全隐患及缺乏持续学习能力等核心痛点。\n\n通过引入技能模块化、直觉增强、记忆持久化机制以及内置的安全扫描功能，everything-claude-code 能显著提升 AI 在复杂任务中的表现，帮助开发者构建更稳定、更智能的生产级 AI 代理。其独特的“研究优先”开发理念和针对 Token 消耗的优化策略，使得模型响应更快、成本更低，同时有效防御潜在的攻击向量。\n\n这套工具特别适合软件开发者、AI 研究人员以及希望深度定制 AI 工作流的技术团队使用。无论您是在构建大型代码库，还是需要 AI 协助进行安全审计与自动化测试，everything-claude-code 都能提供强大的底层支持。作为一个曾荣获 Anthropic 黑客大奖的开源项目，它融合了多语言支持与丰富的实战钩子（hooks），让 AI 真正成长为懂上",160784,2,"2026-04-19T11:32:54",[14,13,35],"语言模型",{"id":37,"name":38,"github_repo":39,"description_zh":40,"stars":41,"difficulty_score":32,"last_commit_at":42,"category_tags":43,"status":17},2271,"ComfyUI","Comfy-Org\u002FComfyUI","ComfyUI 是一款功能强大且高度模块化的视觉 AI 引擎，专为设计和执行复杂的 Stable Diffusion 图像生成流程而打造。它摒弃了传统的代码编写模式，采用直观的节点式流程图界面，让用户通过连接不同的功能模块即可构建个性化的生成管线。\n\n这一设计巧妙解决了高级 AI 绘图工作流配置复杂、灵活性不足的痛点。用户无需具备编程背景，也能自由组合模型、调整参数并实时预览效果，轻松实现从基础文生图到多步骤高清修复等各类复杂任务。ComfyUI 拥有极佳的兼容性，不仅支持 Windows、macOS 和 Linux 全平台，还广泛适配 NVIDIA、AMD、Intel 及苹果 Silicon 等多种硬件架构，并率先支持 SDXL、Flux、SD3 等前沿模型。\n\n无论是希望深入探索算法潜力的研究人员和开发者，还是追求极致创作自由度的设计师与资深 AI 绘画爱好者，ComfyUI 都能提供强大的支持。其独特的模块化架构允许社区不断扩展新功能，使其成为当前最灵活、生态最丰富的开源扩散模型工具之一，帮助用户将创意高效转化为现实。",109154,"2026-04-18T11:18:24",[14,15,13],{"id":45,"name":46,"github_repo":47,"description_zh":48,"stars":49,"difficulty_score":32,"last_commit_at":50,"category_tags":51,"status":17},6121,"gemini-cli","google-gemini\u002Fgemini-cli","gemini-cli 是一款由谷歌推出的开源 AI 命令行工具，它将强大的 Gemini 大模型能力直接集成到用户的终端环境中。对于习惯在命令行工作的开发者而言，它提供了一条从输入提示词到获取模型响应的最短路径，无需切换窗口即可享受智能辅助。\n\n这款工具主要解决了开发过程中频繁上下文切换的痛点，让用户能在熟悉的终端界面内直接完成代码理解、生成、调试以及自动化运维任务。无论是查询大型代码库、根据草图生成应用，还是执行复杂的 Git 操作，gemini-cli 都能通过自然语言指令高效处理。\n\n它特别适合广大软件工程师、DevOps 人员及技术研究人员使用。其核心亮点包括支持高达 100 万 token 的超长上下文窗口，具备出色的逻辑推理能力；内置 Google 搜索、文件操作及 Shell 命令执行等实用工具；更独特的是，它支持 MCP（模型上下文协议），允许用户灵活扩展自定义集成，连接如图像生成等外部能力。此外，个人谷歌账号即可享受免费的额度支持，且项目基于 Apache 2.0 
协议完全开源，是提升终端工作效率的理想助手。",100752,"2026-04-10T01:20:03",[52,13,15,14],"插件",{"id":54,"name":55,"github_repo":56,"description_zh":57,"stars":58,"difficulty_score":32,"last_commit_at":59,"category_tags":60,"status":17},4721,"markitdown","microsoft\u002Fmarkitdown","MarkItDown 是一款由微软 AutoGen 团队打造的轻量级 Python 工具，专为将各类文件高效转换为 Markdown 格式而设计。它支持 PDF、Word、Excel、PPT、图片（含 OCR）、音频（含语音转录）、HTML 乃至 YouTube 链接等多种格式的解析，能够精准提取文档中的标题、列表、表格和链接等关键结构信息。\n\n在人工智能应用日益普及的今天，大语言模型（LLM）虽擅长处理文本，却难以直接读取复杂的二进制办公文档。MarkItDown 恰好解决了这一痛点，它将非结构化或半结构化的文件转化为模型“原生理解”且 Token 效率极高的 Markdown 格式，成为连接本地文件与 AI 分析 pipeline 的理想桥梁。此外，它还提供了 MCP（模型上下文协议）服务器，可无缝集成到 Claude Desktop 等 LLM 应用中。\n\n这款工具特别适合开发者、数据科学家及 AI 研究人员使用，尤其是那些需要构建文档检索增强生成（RAG）系统、进行批量文本分析或希望让 AI 助手直接“阅读”本地文件的用户。虽然生成的内容也具备一定可读性，但其核心优势在于为机器",93400,"2026-04-06T19:52:38",[52,14],{"id":62,"github_repo":63,"name":64,"description_en":65,"description_zh":66,"ai_summary_zh":66,"readme_en":67,"readme_zh":68,"quickstart_zh":69,"use_case_zh":70,"hero_image_url":71,"owner_login":72,"owner_name":73,"owner_avatar_url":74,"owner_bio":75,"owner_company":76,"owner_location":76,"owner_email":77,"owner_twitter":76,"owner_website":78,"owner_url":79,"languages":80,"stars":97,"forks":98,"last_commit_at":99,"license":100,"difficulty_score":101,"env_os":102,"env_gpu":102,"env_ram":102,"env_deps":103,"category_tags":111,"github_topics":112,"view_count":32,"oss_zip_url":76,"oss_zip_packed_at":76,"status":17,"created_at":120,"updated_at":121,"faqs":122,"releases":151},9786,"kedro-org\u002Fkedro","kedro","Kedro is a toolbox for production-ready data science. 
It uses software engineering best practices to help you create data engineering and data science pipelines that are reproducible, maintainable, and modular.","Kedro 是一款专为生产环境打造的开源数据科学与数据工程工具箱。它致力于解决数据项目中常见的代码混乱、难以复现和维护成本高等痛点，通过引入成熟的软件工程最佳实践，帮助团队构建可复现、易维护且模块化的数据处理流水线。\n\n无论是正在从实验性脚本向生产系统过渡的数据科学家，还是希望规范工作流的数据工程师，Kedro 都能提供极大的便利。它不再让数据分析停留在零散的 Jupyter 笔记本中，而是引导用户将代码组织成结构清晰的标准项目。\n\nKedro 的核心亮点在于其强大的项目模板和独特的“数据目录”（Data Catalog）机制。这一设计巧妙地将数据处理逻辑与具体的数据存储位置解耦，让用户无需修改核心代码即可灵活切换本地文件、数据库或云存储等不同数据源。此外，其内置的流水线可视化功能，能直观展示数据流转过程，极大提升了调试效率与协作透明度。作为一个由 LF AI & Data 基金会托管的 Python 框架，Kedro 以友好的上手体验和严谨的工程规范，成为构建可靠数据产品的得力助手。","\u003Cp align=\"center\">\n  \u003Cpicture>\n    \u003Csource media=\"(prefers-color-scheme: light)\" srcset=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fkedro-org_kedro_readme_20d0e0ec7edb.png\">\n    \u003Csource media=\"(prefers-color-scheme: dark)\" srcset=\"https:\u002F\u002Fraw.githubusercontent.com\u002Fkedro-org\u002Fkedro\u002Fmain\u002F.github\u002Fdemo-dark.png\">\n    \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fkedro-org_kedro_readme_20d0e0ec7edb.png\" alt=\"Kedro\">\n  \u003C\u002Fpicture>\n\u003C\u002Fp>\n\n[![Python version](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002Fpython-3.10%20%7C%203.11%20%7C%203.12%20%7C%203.13-blue.svg)](https:\u002F\u002Fpypi.org\u002Fproject\u002Fkedro\u002F)\n[![PyPI version](https:\u002F\u002Fbadge.fury.io\u002Fpy\u002Fkedro.svg)](https:\u002F\u002Fpypi.org\u002Fproject\u002Fkedro\u002F)\n[![Conda version](https:\u002F\u002Fimg.shields.io\u002Fconda\u002Fvn\u002Fconda-forge\u002Fkedro.svg)](https:\u002F\u002Fanaconda.org\u002Fconda-forge\u002Fkedro)\n[![License](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002Flicense-Apache%202.0-blue.svg)](https:\u002F\u002Fgithub.com\u002Fkedro-org\u002Fkedro\u002Fblob\u002Fmain\u002FLICENSE.md)\n[![Slack 
Organisation](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002Fslack-chat-blueviolet.svg?label=Kedro%20Slack&logo=slack)](https:\u002F\u002Fslack.kedro.org)\n[![Slack Archive](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002Fslack-archive-blueviolet.svg?label=Kedro%20Slack%20)](https:\u002F\u002Flinen-slack.kedro.org\u002F)\n![GitHub Actions Workflow Status - Main](https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Factions\u002Fworkflow\u002Fstatus\u002Fkedro-org\u002Fkedro\u002Fall-checks.yml?label=main)\n![GitHub Actions Workflow Status - Develop](https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Factions\u002Fworkflow\u002Fstatus\u002Fkedro-org\u002Fkedro\u002Fall-checks.yml?branch=develop&label=develop)\n[![Documentation](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fkedro-org_kedro_readme_13d664e1afd7.png)](https:\u002F\u002Fdocs.kedro.org\u002F)\n[![OpenSSF Best Practices](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fkedro-org_kedro_readme_50ccde67b228.png)](https:\u002F\u002Fbestpractices.coreinfrastructure.org\u002Fprojects\u002F6711)\n[![Monthly downloads](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fkedro-org_kedro_readme_6c2704c1a640.png)](https:\u002F\u002Fpepy.tech\u002Fproject\u002Fkedro)\n[![Total downloads](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fkedro-org_kedro_readme_f4ec9727814f.png)](https:\u002F\u002Fpepy.tech\u002Fproject\u002Fkedro)\n\n[![Powered by Kedro](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002Fpowered_by-kedro-ffc900?logo=kedro)](https:\u002F\u002Fkedro.org)\n\n## What is Kedro?\n\nKedro is a toolbox for production-ready data engineering and data science pipelines. It uses software engineering best practices to help you create data engineering and data science pipelines that are reproducible, maintainable, and modular. 
You can find out more at [kedro.org](https:\u002F\u002Fkedro.org).\n\nKedro is an open-source Python framework hosted by the [LF AI & Data Foundation](https:\u002F\u002Flfaidata.foundation\u002F).\n\n## How do I install Kedro?\n\nTo install Kedro from the Python Package Index (PyPI) run:\n\n```\nuv pip install kedro\n```\n\nIt is also possible to install Kedro using `conda`:\n\n```\nconda install -c conda-forge kedro\n```\n\nOur [Get Started guide](https:\u002F\u002Fdocs.kedro.org\u002Fen\u002Fstable\u002Fgetting-started\u002Finstall\u002F) contains full installation instructions, and includes how to set up Python virtual environments.\n\n### Installation from source\nTo access the latest Kedro version before its official release, install it from the `main` branch.\n```\nuv pip install git+https:\u002F\u002Fgithub.com\u002Fkedro-org\u002Fkedro@main\n```\n\n## What are the main features of Kedro?\n\n| Feature              | What is this?                                                                                                                                                                                                                                                                                                                                                                                      |\n| -------------------- |----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|\n| Project Template     | A standard, modifiable and easy-to-use project template based on [Cookiecutter Data Science](https:\u002F\u002Fgithub.com\u002Fdrivendata\u002Fcookiecutter-data-science\u002F).                                                         
                                                                                                                                                                                   |\n| Data Catalog         | A series of lightweight data connectors used to save and load data across many different file formats and file systems, including local and network file systems, cloud object stores, and HDFS. The Data Catalog also includes data and model versioning for file-based systems.                                                                                                                  |\n| Pipeline Abstraction | Automatic resolution of dependencies between pure Python functions and data pipeline visualisation using [Kedro-Viz](https:\u002F\u002Fgithub.com\u002Fkedro-org\u002Fkedro-viz).                                                                                                                                                                                                                                      |\n| Coding Standards     | Test-driven development using [`pytest`](https:\u002F\u002Fgithub.com\u002Fpytest-dev\u002Fpytest), produce well-documented code using [Sphinx](http:\u002F\u002Fwww.sphinx-doc.org\u002Fen\u002Fmaster\u002F), create linted code with support for [`ruff`](https:\u002F\u002Fgithub.com\u002Fastral-sh\u002Fruff) and make use of the standard Python logging library. |\n| Flexible Deployment  | Deployment strategies that include single or distributed-machine deployment as well as additional support for deploying on Argo, Prefect, Kubeflow, AWS Batch, and Databricks.                                                                                                                                                                                                                      
|\n\n## How do I use Kedro?\n\nThe [Kedro documentation](https:\u002F\u002Fdocs.kedro.org\u002Fen\u002Fstable\u002F) first explains [how to install Kedro](https:\u002F\u002Fdocs.kedro.org\u002Fen\u002Fstable\u002Fgetting-started\u002Finstall\u002F) and then introduces [key Kedro concepts](https:\u002F\u002Fdocs.kedro.org\u002Fen\u002Fstable\u002Fgetting-started\u002Fkedro_concepts\u002F).\n\nYou can then review the [spaceflights tutorial](https:\u002F\u002Fdocs.kedro.org\u002Fen\u002Fstable\u002Ftutorials\u002Fspaceflights_tutorial\u002F) to build a Kedro project for hands-on experience.\n\nFor new and intermediate Kedro users, there's a comprehensive section on [how to visualise Kedro projects using Kedro-Viz](https:\u002F\u002Fdocs.kedro.org\u002Fprojects\u002Fkedro-viz\u002F).\n\n\n\u003Cp align=\"center\">\n    \u003Cimg src=\"https:\u002F\u002Fraw.githubusercontent.com\u002Fkedro-org\u002Fkedro-viz\u002Fmain\u002F.github\u002Fimg\u002Fbanner.png\" alt>\n    \u003Cem>A pipeline visualisation generated using Kedro-Viz\u003C\u002Fem>\n\u003C\u002Fp>\n\nAdditional documentation explains [how to work with Kedro and Jupyter notebooks](https:\u002F\u002Fdocs.kedro.org\u002Fen\u002Fstable\u002Fintegrations-and-plugins\u002Fnotebooks_and_ipython\u002F), and there is a set of advanced user guides for key Kedro features. We also recommend the [API reference documentation](\u002Fkedro) for further information.\n\n\n## Why does Kedro exist?\n\nKedro is built upon our collective best-practice (and mistakes) trying to deliver real-world ML applications that have vast amounts of raw unvetted data. 
We developed Kedro to achieve the following:\n\n- To address the main shortcomings of Jupyter notebooks, one-off scripts, and glue-code because there is a focus on\n  creating **maintainable data engineering and data science code**\n- To enhance **team collaboration** when different team members have varied exposure to software engineering concepts\n- To increase efficiency, because applied concepts like modularity and separation of concerns inspire the creation of\n  **reusable analytics code**\n\nFind out more about how Kedro can answer your use cases from the [product FAQs on the Kedro website](https:\u002F\u002Fkedro.org\u002F#faq).\n\n## The humans behind Kedro\n\nThe [Kedro product team](https:\u002F\u002Fdocs.kedro.org\u002Fen\u002Fstable\u002Fabout\u002Ftechnical_steering_committee\u002F#current-maintainers) and a number of [open source contributors from across the world](https:\u002F\u002Fgithub.com\u002Fkedro-org\u002Fkedro\u002Freleases) maintain Kedro.\n\n## Can I contribute?\n\nYes! We welcome all kinds of contributions. Check out our [guide to contributing to Kedro](https:\u002F\u002Fgithub.com\u002Fkedro-org\u002Fkedro\u002Fwiki\u002FContribute-to-Kedro).\n\n## Where can I learn more?\n\nThere is a growing community around Kedro. We encourage you to ask and answer technical questions on [Slack](https:\u002F\u002Fslack.kedro.org\u002F) and bookmark the [Linen archive of past discussions](https:\u002F\u002Flinen-slack.kedro.org\u002F).\n\nWe keep a list of [technical FAQs in the Kedro documentation](https:\u002F\u002Fdocs.kedro.org\u002Fen\u002Fstable\u002Fgetting-started\u002Ffaq\u002F) and you can find a growing list of blog posts, videos and projects that use Kedro over on the [`awesome-kedro` GitHub repository](https:\u002F\u002Fgithub.com\u002Fkedro-org\u002Fawesome-kedro). If you have created anything with Kedro we'd love to include it on the list. 
Just make a PR to add it!\n\n## How can I cite Kedro?\n\nIf you're an academic, Kedro can also help you, for example, as a tool to solve the problem of reproducible research. Use the \"Cite this repository\" button on [our repository](https:\u002F\u002Fgithub.com\u002Fkedro-org\u002Fkedro) to generate a citation from the [CITATION.cff file](https:\u002F\u002Fdocs.github.com\u002Fen\u002Frepositories\u002Fmanaging-your-repositorys-settings-and-features\u002Fcustomizing-your-repository\u002Fabout-citation-files).\n\n## Python version support policy\n* The core [Kedro Framework](https:\u002F\u002Fgithub.com\u002Fkedro-org\u002Fkedro) supports all Python versions that are actively maintained by the CPython core team. When a [Python version reaches end of life](https:\u002F\u002Fdevguide.python.org\u002Fversions\u002F#versions), support for that version is dropped from Kedro. This is not considered a breaking change.\n* The [Kedro Datasets](https:\u002F\u002Fgithub.com\u002Fkedro-org\u002Fkedro-plugins\u002Ftree\u002Fmain\u002Fkedro-datasets) package follows the [NEP 29](https:\u002F\u002Fnumpy.org\u002Fneps\u002Fnep-0029-deprecation_policy.html) Python version support policy. This means that `kedro-datasets` generally drops Python version support before `kedro`. This is because `kedro-datasets` has a lot of dependencies that follow NEP 29 and the more conservative version support approach of the Kedro Framework makes it hard to manage those dependencies properly.\n\n\n## ☕️ Kedro Coffee Chat 🔶\n\nWe appreciate our community and want to stay connected. 
For that, we offer a public Coffee Chat format where we share updates and cool stuff around Kedro once every two weeks and give you time to ask your questions live.\n\nCheck out the upcoming demo topics and dates at the [Kedro Coffee Chat wiki page](https:\u002F\u002Fgithub.com\u002Fkedro-org\u002Fkedro\u002Fwiki\u002FKedro-Coffee-Chat).\n\nFollow our Slack [announcement channel](https:\u002F\u002Fkedro-org.slack.com\u002Farchives\u002FC03RKAQ0MGQ) to see Kedro Coffee Chat announcements and access demo recordings.\n","\u003Cp align=\"center\">\n  \u003Cpicture>\n    \u003Csource media=\"(prefers-color-scheme: light)\" srcset=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fkedro-org_kedro_readme_20d0e0ec7edb.png\">\n    \u003Csource media=\"(prefers-color-scheme: dark)\" srcset=\"https:\u002F\u002Fraw.githubusercontent.com\u002Fkedro-org\u002Fkedro\u002Fmain\u002F.github\u002Fdemo-dark.png\">\n    \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fkedro-org_kedro_readme_20d0e0ec7edb.png\" alt=\"Kedro\">\n  
\u003C\u002Fpicture>\n\u003C\u002Fp>\n\n[![Python版本](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002Fpython-3.10%20%7C%203.11%20%7C%203.12%20%7C%203.13%20%7C%203.14-blue.svg)](https:\u002F\u002Fpypi.org\u002Fproject\u002Fkedro\u002F)\n[![PyPI版本](https:\u002F\u002Fbadge.fury.io\u002Fpy\u002Fkedro.svg)](https:\u002F\u002Fpypi.org\u002Fproject\u002Fkedro\u002F)\n[![Conda版本](https:\u002F\u002Fimg.shields.io\u002Fconda\u002Fvn\u002Fconda-forge\u002Fkedro.svg)](https:\u002F\u002Fanaconda.org\u002Fconda-forge\u002Fkedro)\n[![许可证](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002Flicense-Apache%202.0-blue.svg)](https:\u002F\u002Fgithub.com\u002Fkedro-org\u002Fkedro\u002Fblob\u002Fmain\u002FLICENSE.md)\n[![Slack组织](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002Fslack-chat-blueviolet.svg?label=Kedro%20Slack&logo=slack)](https:\u002F\u002Fslack.kedro.org)\n[![Slack存档](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002Fslack-archive-blueviolet.svg?label=Kedro%20Slack%20存档)](https:\u002F\u002Flinen-slack.kedro.org\u002F)\n![GitHub Actions工作流状态 - Main分支](https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Factions\u002Fworkflow\u002Fstatus\u002Fkedro-org\u002Fkedro\u002Fall-checks.yml?label=main)\n![GitHub Actions工作流状态 - 
Develop分支](https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Factions\u002Fworkflow\u002Fstatus\u002Fkedro-org\u002Fkedro\u002Fall-checks.yml?branch=develop&label=develop)\n[![文档](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fkedro-org_kedro_readme_13d664e1afd7.png)](https:\u002F\u002Fdocs.kedro.org\u002F)\n[![OpenSSF最佳实践](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fkedro-org_kedro_readme_50ccde67b228.png)](https:\u002F\u002Fbestpractices.coreinfrastructure.org\u002Fprojects\u002F6711)\n[![月度下载量](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fkedro-org_kedro_readme_6c2704c1a640.png)](https:\u002F\u002Fpepy.tech\u002Fproject\u002Fkedro)\n[![总下载量](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fkedro-org_kedro_readme_f4ec9727814f.png)](https:\u002F\u002Fpepy.tech\u002Fproject\u002Fkedro)\n\n[![由Kedro提供支持](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002Fpowered_by-kedro-ffc900?logo=kedro)](https:\u002F\u002Fkedro.org)\n\n## 什么是Kedro？\n\nKedro是一个用于构建生产级数据工程和数据科学流水线的工具箱。它采用软件工程的最佳实践，帮助您创建可复现、易于维护且模块化的数据工程和数据科学流水线。您可以在[kedro.org](https:\u002F\u002Fkedro.org)上了解更多信息。\n\nKedro是一个开源的Python框架，由[LF AI & Data Foundation](https:\u002F\u002Flfaidata.foundation\u002F)托管。\n\n## 如何安装Kedro？\n\n要从Python包索引（PyPI）安装Kedro，请运行以下命令：\n\n```\nuv pip install kedro\n```\n\n您也可以使用`conda`来安装Kedro：\n\n```\nconda install -c conda-forge kedro\n```\n\n我们的[入门指南](https:\u002F\u002Fdocs.kedro.org\u002Fen\u002Fstable\u002Fgetting-started\u002Finstall\u002F)提供了完整的安装说明，还包括如何设置Python虚拟环境的内容。\n\n### 从源代码安装\n如果您想在正式发布之前使用最新的Kedro版本，可以从`main`分支安装：\n```\nuv pip install git+https:\u002F\u002Fgithub.com\u002Fkedro-org\u002Fkedro@main\n```\n\n## Kedro的主要特性有哪些？\n\n| 特性              | 是什么？                                                                                                                                                                                                                                                                                                             
                                                                         |\n| -------------------- |----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|\n| 项目模板     | 基于[Cookiecutter Data Science](https:\u002F\u002Fgithub.com\u002Fdrivendata\u002Fcookiecutter-data-science\u002F)的标准、可修改且易于使用的项目模板。                                                                                                                                                                                                                                            |\n| 数据目录         | 一系列轻量级的数据连接器，用于在多种不同的文件格式和文件系统之间保存和加载数据，包括本地和网络文件系统、云对象存储以及HDFS。数据目录还为基于文件的系统提供了数据和模型版本控制功能。                                                                                                                  |\n| 流水线抽象       | 自动解析纯Python函数之间的依赖关系，并使用[Kedro-Viz](https:\u002F\u002Fgithub.com\u002Fkedro-org\u002Fkedro-viz)进行数据流水线可视化。                                                                                                                                                                                                                                      |\n| 编码标准     | 使用[`pytest`](https:\u002F\u002Fgithub.com\u002Fpytest-dev\u002Fpytest)进行测试驱动开发，利用[Sphinx](http:\u002F\u002Fwww.sphinx-doc.org\u002Fen\u002Fmaster\u002F)编写文档齐全的代码，借助[`ruff`](https:\u002F\u002Fgithub.com\u002Fastral-sh\u002Fruff)的支持生成经过静态检查的代码，并使用标准的Python日志记录库。 |\n| 灵活的部署      | 支持单机或分布式机器部署的策略，同时还支持在Argo、Prefect、Kubeflow、AWS Batch和Databricks等平台上部署。                                                                                                                                                                                       
                               |\n\n## 我如何使用 Kedro？\n\n[Kedro 文档](https:\u002F\u002Fdocs.kedro.org\u002Fen\u002Fstable\u002F) 首先介绍了[如何安装 Kedro](https:\u002F\u002Fdocs.kedro.org\u002Fen\u002Fstable\u002Fgetting-started\u002Finstall\u002F)，随后讲解了[Kedro 的核心概念](https:\u002F\u002Fdocs.kedro.org\u002Fen\u002Fstable\u002Fgetting-started\u002Fkedro_concepts\u002F)。\n\n接下来，您可以参考 [spaceflights 教程](https:\u002F\u002Fdocs.kedro.org\u002Fen\u002Fstable\u002Ftutorials\u002Fspaceflights_tutorial\u002F) 来构建一个 Kedro 项目，以获得实际操作经验。\n\n对于初学者和中级用户，文档中还有一节全面的内容，介绍如何使用 [Kedro-Viz 可视化 Kedro 项目](https:\u002F\u002Fdocs.kedro.org\u002Fprojects\u002Fkedro-viz\u002F)。\n\n\n\u003Cp align=\"center\">\n    \u003Cimg src=\"https:\u002F\u002Fraw.githubusercontent.com\u002Fkedro-org\u002Fkedro-viz\u002Fmain\u002F.github\u002Fimg\u002Fbanner.png\" alt>\n    \u003Cem>使用 Kedro-Viz 生成的管道可视化图\u003C\u002Fem>\n\u003C\u002Fp>\n\n此外，文档还说明了[如何将 Kedro 与 Jupyter Notebook 结合使用](https:\u002F\u002Fdocs.kedro.org\u002Fen\u002Fstable\u002Fintegrations-and-plugins\u002Fnotebooks_and_ipython\u002F)，并提供了一系列针对 Kedro 核心功能的高级用户指南。我们还建议您查阅 [API 参考文档](\u002Fkedro)，以获取更多详细信息。\n\n\n## 为什么会有 Kedro？\n\nKedro 是基于我们在交付涉及大量未经验证原始数据的真实世界机器学习应用过程中积累的最佳实践（以及教训）而构建的。我们开发 Kedro 的目标是：\n\n- 解决 Jupyter Notebook、一次性脚本和胶水代码的主要缺陷，因为其重点在于\n  创建**可维护的数据工程和数据科学代码**\n- 提升**团队协作效率**，尤其是在团队成员对软件工程概念的熟悉程度各不相同的情况下\n- 提高效率，因为模块化和关注点分离等实践理念能够促进**可重用分析代码**的产生\n\n如需了解更多关于 Kedro 如何满足您的使用场景，请参阅 [Kedro 官网上的产品常见问题解答](https:\u002F\u002Fkedro.org\u002F#faq)。\n\n## Kedro 背后的团队\n\n[Kedro 产品团队](https:\u002F\u002Fdocs.kedro.org\u002Fen\u002Fstable\u002Fabout\u002Ftechnical_steering_committee\u002F#current-maintainers) 以及来自全球的众多[开源贡献者](https:\u002F\u002Fgithub.com\u002Fkedro-org\u002Fkedro\u002Freleases)共同维护着 Kedro。\n\n## 我可以参与贡献吗？\n\n当然可以！我们欢迎各种形式的贡献。请查看我们的[参与 Kedro 贡献指南](https:\u002F\u002Fgithub.com\u002Fkedro-org\u002Fkedro\u002Fwiki\u002FContribute-to-Kedro)。\n\n## 我还能在哪里了解更多？\n\n围绕 Kedro 的社区正在不断壮大。我们鼓励您在 [Slack](https:\u002F\u002Fslack.kedro.org\u002F) 
上提出和回答技术问题，并将 [Linen 历史讨论存档](https:\u002F\u002Flinen-slack.kedro.org\u002F) 收藏起来。\n\n我们在 [Kedro 文档中的技术 FAQ 列表](https:\u002F\u002Fdocs.kedro.org\u002Fen\u002Fstable\u002Fgetting-started\u002Ffaq\u002F) 中整理了常见问题，同时在 [`awesome-kedro` GitHub 仓库](https:\u002F\u002Fgithub.com\u002Fkedro-org\u002Fawesome-kedro) 上也汇集了越来越多使用 Kedro 的博客文章、视频和项目。如果您使用 Kedro 开发了任何内容，我们非常乐意将其加入该列表——只需提交一个 PR 即可！\n\n## 我该如何引用 Kedro？\n\n如果您是学术界人士，Kedro 同样可以帮助您，例如作为解决可重复性研究问题的工具。请使用我们[仓库](https:\u002F\u002Fgithub.com\u002Fkedro-org\u002Fkedro)上的“引用此仓库”按钮，从 [CITATION.cff 文件](https:\u002F\u002Fdocs.github.com\u002Fen\u002Frepositories\u002Fmanaging-your-repositorys-settings-and-features\u002Fcustomizing-your-repository\u002Fabout-citation-files)中生成引用格式。\n\n## Python 版本支持政策\n* 核心 [Kedro 框架](https:\u002F\u002Fgithub.com\u002Fkedro-org\u002Fkedro) 支持由 CPython 核心团队积极维护的所有 Python 版本。当某个 [Python 版本达到生命周期结束](https:\u002F\u002Fdevguide.python.org\u002Fversions\u002F#versions)时，Kedro 将停止对该版本的支持。这并不被视为破坏性变更。\n* [Kedro Datasets](https:\u002F\u002Fgithub.com\u002Fkedro-org\u002Fkedro-plugins\u002Ftree\u002Fmain\u002Fkedro-datasets) 包遵循 [NEP 29](https:\u002F\u002Fnumpy.org\u002Fneps\u002Fnep-0029-deprecation_policy.html) Python 版本支持政策。这意味着 `kedro-datasets` 通常会比 `kedro` 更早停止对某些 Python 版本的支持。这是因为 `kedro-datasets` 依赖较多遵循 NEP 29 政策的库，而 Kedro 框架更为保守的版本支持策略使得管理这些依赖变得较为困难。\n\n\n## ☕️ Kedro Coffee Chat 🔶\n\n我们珍视社区，并希望保持紧密联系。为此，我们定期举办公开的 Coffee Chat 活动，每两周分享 Kedro 的最新动态和精彩内容，并留出时间让您现场提问。\n\n请访问 [Kedro Coffee Chat 维基页面](https:\u002F\u002Fgithub.com\u002Fkedro-org\u002Fkedro\u002Fwiki\u002FKedro-Coffee-Chat)，了解即将举行的演示主题和日期。\n\n关注我们的 Slack [公告频道](https:\u002F\u002Fkedro-org.slack.com\u002Farchives\u002FC03RKAQ0MGQ)，以获取 Kedro Coffee Chat 的最新消息及演示录像。","# Kedro 快速上手指南\n\nKedro 是一个用于构建生产级数据工程和数据科学流水线的 Python 框架。它通过软件工程最佳实践，帮助你创建可复现、易维护且模块化的数据流水线。\n\n## 环境准备\n\n在开始之前，请确保你的开发环境满足以下要求：\n\n*   **操作系统**：Linux, macOS 或 Windows\n*   **Python 版本**：3.10, 3.11, 3.12 或 3.13\n*   **包管理工具**：推荐使用 `uv` (高速替代 pip) 或 `conda`\n*   **前置知识**：具备基础的 
Python 编程能力\n\n> **提示**：强烈建议在虚拟环境中安装 Kedro 以避免依赖冲突。\n\n## 安装步骤\n\n你可以选择以下任意一种方式进行安装：\n\n### 方式一：使用 uv (推荐，速度更快)\n\n```bash\nuv pip install kedro\n```\n\n### 方式二：使用 conda\n\n```bash\nconda install -c conda-forge kedro\n```\n\n### 方式三：安装最新开发版 (从源码)\n\n如果你需要体验尚未正式发布的最新功能：\n\n```bash\nuv pip install git+https:\u002F\u002Fgithub.com\u002Fkedro-org\u002Fkedro@main\n```\n\n## 基本使用\n\n以下是创建一个新项目并运行流水线的最小化流程：\n\n### 1. 创建新项目\n\n使用 `kedro new` 命令初始化一个基于标准模板的项目。系统会交互式地询问项目名称、仓库名称和 Python 包名。\n\n```bash\nkedro new --starter=spaceflights\n```\n*(注：`--starter=spaceflights` 会直接加载官方提供的示例项目，适合新手快速体验)*\n\n### 2. 进入项目目录\n\n```bash\ncd \u003Cyour-project-name>\n```\n\n### 3. 查看流水线依赖\n\nKedro 会自动解析节点之间的依赖关系。你可以使用以下命令查看项目结构：\n\n```bash\nkedro viz\n```\n*(这将启动 Kedro-Viz 可视化界面，在浏览器中展示数据流水线图)*\n\n### 4. 运行流水线\n\n执行整个项目的默认流水线：\n\n```bash\nkedro run\n```\n\n运行成功后，Kedro 将根据 `catalog.yml` 配置自动加载数据、执行处理函数并保存结果。\n\n### 5. 下一步\n\n*   **开发节点**：在 `src\u002F\u003Cpackage_name>\u002Fnodes\u002F` 目录下编写具体的数据处理逻辑。\n*   **配置数据**：在 `conf\u002Fbase\u002Fcatalog.yml` 中定义数据的输入输出路径和格式。\n*   **测试代码**：使用内置的 `pytest` 配置运行测试：`kedro test`。","某电商公司的数据团队正在构建一个每日更新的用户流失预测模型，需要处理从原始日志到特征工程再到模型训练的复杂流程。\n\n### 没有 kedro 时\n- **代码如“意大利面”**：数据处理、特征提取和训练逻辑全部堆砌在几个巨大的 Jupyter Notebook 中，变量依赖混乱，新人接手几乎无法理清执行顺序。\n- **复现噩梦**：当业务方质疑模型结果时，数据科学家很难重新运行完全一致的中间步骤，因为临时修改的代码未版本化，且中间数据丢失。\n- **协作冲突频繁**：多位工程师同时修改同一个脚本的不同部分，缺乏模块化隔离，导致合并代码时频繁出现冲突且难以调试。\n- **部署困难**：实验室环境的代码充满硬编码路径和全局变量，无法直接迁移到生产服务器，每次上线都需要花费数天手动重构。\n\n### 使用 kedro 后\n- **流水线可视化与标准化**：kedro 将项目拆分为独立的节点（Node）和数据集（DataSet），通过有向无环图（DAG）清晰展示数据流向，任何人一眼就能看懂逻辑结构。\n- **一键复现与缓存**：借助 kedro 的数据抽象层，团队可以随时重跑任意历史版本的管道，自动加载或跳过未变动的中间数据，确保实验结果严格可复现。\n- **模块化并行开发**：不同成员负责不同的功能模块（如清洗、特征、训练），通过标准接口对接，互不干扰，大幅降低了代码合并的难度。\n- **无缝切换环境**：通过配置文件管理数据路径和参数，同一套代码只需切换配置即可从本地开发平滑过渡到生产环境，消除了“在我机器上能跑”的问题。\n\nkedro 
通过引入软件工程的最佳实践，将原本混乱的实验性代码转化为可维护、可信赖的生产级数据流水线。","https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fkedro-org_kedro_20d0e0ec.png","kedro-org","Kedro","https:\u002F\u002Foss.gittoolsai.com\u002Favatars\u002Fkedro-org_4f0fd19f.jpg","Kedro is an open-source Python framework for creating reproducible, maintainable and modular data science code. It is hosted in incubation in LF AI & Data.",null,"info@lfaidata.foundation","https:\u002F\u002Fkedro.org\u002F","https:\u002F\u002Fgithub.com\u002Fkedro-org",[81,85,89,93],{"name":82,"color":83,"percentage":84},"Python","#3572A5",98.5,{"name":86,"color":87,"percentage":88},"Gherkin","#5B2063",0.8,{"name":90,"color":91,"percentage":92},"Shell","#89e051",0.5,{"name":94,"color":95,"percentage":96},"Makefile","#427819",0.1,10834,1025,"2026-04-18T20:49:08","Apache-2.0",1,"未说明",{"notes":104,"python":105,"dependencies":106},"Kedro 是一个用于数据工程和数据科学管道的 Python 框架，支持通过 PyPI (uv pip) 或 Conda 安装。它依赖于标准 Python 日志库，并支持多种部署策略（如 Argo, Prefect, Kubeflow, AWS Batch, Databricks）。核心框架支持所有 CPython 团队积极维护的 Python 版本，而 kedro-datasets 包遵循 NEP 29 版本支持政策。","3.10 | 3.11 | 3.12 | 3.13",[107,108,109,110],"pytest","Sphinx","ruff","kedro-viz",[14,16],[113,64,114,115,116,117,118,119],"pipeline","hacktoberfest","mlops","experiment-tracking","python","machine-learning","machine-learning-engineering","2026-03-27T02:49:30.150509","2026-04-20T07:18:28.224643",[123,128,133,138,143,147],{"id":124,"question_zh":125,"answer_zh":126,"source_url":127},43952,"为什么搜索引擎中会出现过多过时的 Kedro 文档版本，如何只索引最新的稳定版？","为了解决搜索结果中出现过多旧版本的问题，项目方已调整策略：仅对 `stable`（稳定版）文档进行索引，其他所有编号版本（包括当前最新版）均添加 `noindex, nofollow` 标签。同时，通过 Read the Docs (RTD) 控制台暂时取消隐藏更多版本，让 Google 抓取更新后，再重新隐藏它们，以确保搜索引擎只保留最新稳定版的记录。","https:\u002F\u002Fgithub.com\u002Fkedro-org\u002Fkedro\u002Fissues\u002F3741",{"id":129,"question_zh":130,"answer_zh":131,"source_url":132},43953,"Databricks CLI 工具弃用导致文档链接失效，目前有什么解决方案？","由于 Databricks 弃用了其 CLI 工具，导致原有文档链接断裂。目前的临时修复方案是在链接中添加 `\u002Farchive\u002F` 路径。长远来看，团队正在重新评估 Kedro 与 Azure 
Databricks 生态系统的集成方式（如 Feature Store 和 Databricks Connect），计划发布更全面的新文档而非简单修补旧内容。","https:\u002F\u002Fgithub.com\u002Fkedro-org\u002Fkedro\u002Fissues\u002F3360",{"id":134,"question_zh":135,"answer_zh":136,"source_url":137},43954,"在 Jupyter Notebook 中使用 Dataset Factory 时，为什么 `catalog.list()` 无法显示完整的数据集列表？","这是因为使用 Dataset Factory 时，数据集的定义直到管道运行前都是未知的。当前的变通方法是在会话创建时显式检查数据集是否存在。你可以运行以下代码来触发检查并列出 `__default__` 管道中的所有数据集：\n```python\nfor dataset in pipeline[\"__default__\"].data_sets():\n    catalog.exists(dataset)\n```\n这将强制实例化工厂定义的数据集，使其能被 `catalog.list()` 识别。","https:\u002F\u002Fgithub.com\u002Fkedro-org\u002Fkedro\u002Fissues\u002F3312",{"id":139,"question_zh":140,"answer_zh":141,"source_url":142},43955,"设置了 PYTHONPATH 环境变量后，为什么 `kedro ipython` 和 `kedro jupyter` 命令会失败？","这是一个已知问题，原因是内部函数 `_add_src_to_path` 在检测到已存在 `PYTHONPATH` 环境变量时无法正确工作。该问题已在版本 0.17.4 中修复。如果你使用的是 0.17.3 或更早版本且无法升级，请尝试临时取消设置 `PYTHONPATH` 后再运行相关命令。","https:\u002F\u002Fgithub.com\u002Fkedro-org\u002Fkedro\u002Fissues\u002F727",{"id":144,"question_zh":145,"answer_zh":146,"source_url":127},43956,"如何确保旧版本的 Kedro 文档从搜索引擎结果中彻底移除？","仅仅阻止爬虫访问（robots.txt）是不够的，因为这会导致爬虫无法读取页面上的 `noindex` 标签。正确的做法是：允许爬虫访问旧版本页面，但在页面的 HTTP 响应头或 HTML meta 标签中明确设置 `noindex, nofollow`。这样爬虫在抓取页面后会将其从索引中删除。项目方已对所有非 `stable` 版本实施了此策略。",{"id":148,"question_zh":149,"answer_zh":150,"source_url":132},43957,"Kedro 与 Databricks Connect 是如何集成的，它与传统的打包部署有何不同？","Kedro 与 Databricks 的集成可以通过 Databricks Connect 实现，它允许你在本地执行代码，而将计算任务远程发送到 Databricks 集群。与传统需要将代码打包、上传和部署到集群的方式不同，这种方式更简单，无需处理复杂的资源创建和包管理过程。但这主要适用于执行逻辑，对于非 Spark 逻辑可能仍需在本地运行。",[152,157,162,167,172,177,182,187,192,197,202,207,212,217,222,227,232,237,242,247],{"id":153,"version":154,"summary_zh":155,"released_at":156},351412,"1.3.1","## 错误修复及其他更改\n* 修复了当节点函数的 `params:` 输入带有非 Pydantic 或 dataclass 类型提示时出现的 `AttributeError`。参数验证框架现在能够正确跳过无法验证的类型。\n\n## 文档更新\n* 添加了关于参数验证中 `Optional[Model]` 支持以及多类型联合限制的文档。\n* 改进了深色模式下 Mermaid 图表的可见性。\n\n## 社区贡献\n* 
[SayantanDutt](https:\u002F\u002Fgithub.com\u002FSayantanDutt)","2026-04-07T12:54:59",{"id":158,"version":159,"summary_zh":160,"released_at":161},351413,"1.3.0","## 主要功能与改进\n* 添加了可选的参数校验功能，利用参数输入的类型提示自动校验并实例化 Pydantic 模型或数据类，且不会影响未指定类型的参数。\n* 为版本化数据集添加了 `list_versions()` 方法，用于列出可用的数据集版本。\n* 在 `find_pipelines()` 中新增了 `pipelines_to_find` 参数，允许用户通过修改流水线注册表来有选择地运行现有流水线的子集。\n* 现在可以在基于默认模板的新 Kedro 项目中使用 CLI 的 `--checkout` 标志，而无需启动器。\n* 新增了可配置的项目设置 `SESSION_CLASS`，允许用户定义自定义的 KedroSession 子类。\n\n## 错误修复及其他变更\n* `DataCatalog.load()` 和 `DataCatalog.save()` 现在会抛出包含数据集名称的 `DatasetError`，以便于调试。\n* 将传递给 `before_pipeline_run`、`after_pipeline_run` 和 `on_pipeline_error` 的运行数据与钩子规范中指定的模式对齐。\n* 修复了版本化数据集加载中的路径遍历漏洞，该漏洞可能因未清理的版本字符串而导致未经授权的文件访问。\n* 修复了日志配置中的远程代码执行漏洞。\n* 移除了 `cachetools` 依赖，并用轻量级的内部缓存实现替代。\n* 当节点返回值但被定义为 `outputs=None` 时，会发出警告，明确指出返回值将被忽略。\n* 在 `configure_project()` 中新增了 `preserve_logging` 标志，以防止在自定义日志处理器已附加后调用 `configure_project()` 时覆盖这些处理器（例如在 FastAPI 等长时间运行的服务器进程中）。\n* 添加了实用方法 `find_config_file()`，用于处理不同的配置文件扩展名（.yml、.yaml）。\n* 在使用 `kedro run` 时，增加了针对拼写错误的流水线名称的可重用建议功能。\n* 修复了 `CatalogConfigResolver` 在模式解析过程中分割 SQLAlchemy URL 的问题。\n\n## 文档变更\n* 添加了参数校验文档，涵盖对类型化参数的 Pydantic 模型和数据类支持。\n\n## 社区贡献\n* [aziq](https:\u002F\u002Fgithub.com\u002Faziq)\n* [zhubaobao2024](https:\u002F\u002Fgithub.com\u002Fzhubaobao2024)\n* [Camille Coeurjoly](https:\u002F\u002Fgithub.com\u002FCamille1992)\n* [sinanpl](https:\u002F\u002Fgithub.com\u002Fsinanpl)\n* [Mr-Neutr0n](https:\u002F\u002Fgithub.com\u002FMr-Neutr0n)\n* [mvhensbergen](https:\u002F\u002Fgithub.com\u002Fmvhensbergen)","2026-03-31T14:27:37",{"id":163,"version":164,"summary_zh":165,"released_at":166},351414,"1.2.0","## 主要功能与改进\n* 添加了 `@experimental` 装饰器，用于标记不稳定的或处于早期阶段的公共 API。\n* 支持在单次 Kedro 会话中运行多个管道，可通过 `--pipelines` CLI 选项以及 `KedroSession.run()` 方法中的 `pipeline_names` 参数实现。\n* 更新了 `spaceflights-pyspark` 入门模板，使其使用新的 `SparkDatasetV2` 集成，从而支持本地、Databricks 原生及远程 Spark 执行工作流。\n\n## 实验性功能\n* 添加了实验性的 `llm_context_node` 和 
`LLMContextNode`，用于在 Kedro 管道中将 LLM、提示和工具组装成运行时的 `LLMContext`。\n* 为 `Node` 类添加了实验性的 `preview_fn` 参数，以支持用户可注入的节点预览函数。\n* 新增了实验性的 `support-agent-langgraph` 入门模板，该模板支持上述实验性功能。此模板包含利用 LangGraph 实现代理式工作流，并借助 Langfuse 或 Opik 进行提示管理和追踪的管道。\n\n## 错误修复及其他变更\n* 在项目模板的 `pipeline_registry.py` 中，将 `find_pipelines()` 调用中的 `raise_errors` 设置为 `True`，以确保在项目运行期间抛出管道发现错误。\n* 修复了打包运行时记录当前工作目录名称的问题；现在改为记录已安装的包名（或项目路径）。\n\n## 文档变更\n* 添加了针对初学者的 `uvx` 安装说明。\n* 更新了 Databricks 部署文档，新增对 `Spark Connect` 和 `Unity Catalog` 的介绍——包括首次工作流以及本地到远程的开发流程。\n\n## 社区贡献\n衷心感谢以下 Kedroid 为本次发布提交了 PR：\n* [Mohmn](https:\u002F\u002Fgithub.com\u002FMohmn)","2026-01-29T14:25:50",{"id":168,"version":169,"summary_zh":170,"released_at":171},351415,"1.1.1","## 错误修复及其他更改\n* 修复了项目版本不匹配错误。现在，只有当项目与 Kedro 包的**主版本**不一致时才会抛出该错误，而次版本和修订版本的差异则不会导致不必要的失败。","2025-11-26T14:53:47",{"id":173,"version":174,"summary_zh":175,"released_at":176},351416,"1.1.0","## 主要功能与改进\n* 为 `OmegaConfigLoader` 添加了 `ignore_hidden` 参数。\n* 停止对 Python 3.9 的支持（EOL 2025年10月）。最低支持版本现为 3.10。\n\n## 错误修复及其他变更\n* 升级 `click` 依赖，以支持 8.2.0 及以上版本。\n* 修复了文档和 docstring 中的拼写错误。\n* 修复了 Kedro 文档页面上 `deindex-old-docs.js` 脚本重复执行的问题，从而避免在 MkDocs 迁移后重复注入 `noindex` 元标签。\n\n## 文档变更\n* 在延迟保存 `PartitionedDataset` 时，添加了关于以编程方式创建 lambda 函数的说明。\n* 将 Dagster 新增为受支持的部署平台，并将 `kedro-dagster` 插件加入社区插件列表。\n* 新增数据验证文档，介绍如何在 Kedro 中使用 Pandera 和 Great Expectations。\n\n## 社区贡献\n衷心感谢以下 Kedroid 为本次发布提交的 PR：\n* [Aseem Sangalay](https:\u002F\u002Fgithub.com\u002Faseemsangalay)\n* [Chris Schopp](https:\u002F\u002Fgithub.com\u002Fchrisschopp)\n* [Yaroslav Halchenko](https:\u002F\u002Fgithub.com\u002Fyarikoptic)","2025-11-25T15:53:12",{"id":178,"version":179,"summary_zh":180,"released_at":181},351417,"0.19.15","## 错误修复及其他更改\n* 提升了命名空间验证的效率，以防止在创建大型流水线时出现严重性能下降。\n* 仅在使用已弃用的 `--namespace` 标志时才会显示弃用警告。","2025-12-16T15:19:17",{"id":183,"version":184,"summary_zh":185,"released_at":186},351418,"1.0.0","## 主要功能与改进\n\n### 数据目录\n* 之前处于实验阶段的 `KedroDataCatalog` 已更名为 `DataCatalog`，现已成为默认的数据目录实现。\n* 
它保留了类似字典的接口，支持惰性数据集初始化，并带来了性能提升。\n* 对于遵循标准 Kedro 工作流的用户来说，这一变化是无缝的；但对于程序化使用场景，它引入了一个更丰富的 API：\n  * 新增的管道感知命令，可通过 CLI 和交互式环境访问。\n  * 简化的数据集工厂处理机制。\n  * 通过 `CatalogConfigResolver` 属性实现模式的集中解析。\n  * 能够将数据目录序列化为配置文件，并从中重新构建。\n\n更多内容请参阅 [Kedro 文档](https:\u002F\u002Fdocs.kedro.org\u002Fen\u002Fstable\u002Fcatalog-data\u002Fadvanced_data_catalog_usage\u002F)。\n\n\n### 命名空间\n* 添加了对在单个会话中运行多个命名空间的支持，可通过 `--namespaces` CLI 选项以及 `KedroSession.run()` 方法中的 `namespaces` 参数实现。\n* 提高了命名空间验证的效率，避免在创建大型管道时出现显著的性能下降。\n* 对 `Node` 类中的数据集名称进行了更严格的验证，确保 `.` 字符仅用于表示命名空间的一部分。\n* 向 `Pipeline` 类添加了 `prefix_datasets_with_namespace` 参数，允许用户启用或禁用在节点输入、输出和参数前添加命名空间前缀的功能。\n* 修改了按命名空间进行的管道过滤逻辑，改为返回精确匹配的命名空间，而非部分匹配。\n\n\n### 其他功能与改进\n* 将默认节点名称更改为由节点所使用的函数名后缀一个基于该函数、输入和输出的 SHA-256 安全哈希值，从而确保唯一性和更好的可读性。\n* 在 `ParallelRunner` 中，新增了一个通过 `KEDRO_MP_CONTEXT` 环境变量选择多进程启动方式的选项。\n* 向 `kedro run` 增加了 `--only-missing-outputs` CLI 标志。当节点的所有持久化输出都已存在时，此标志会跳过这些节点。\n* 更新了 `kedro registry describe` 命令，使其返回节点的名称属性，而不是为节点生成自定义名称。\n* 移除了新项目创建时对 `pre-commit-hooks` 的依赖。\n\n\n## API 的破坏性变更\n### CLI\n* 删除了 `kedro catalog create` 命令。\n* 将 `kedro catalog list`、`kedro catalog rank` 和 `kedro catalog resolve` 命令分别替换为 `kedro catalog describe-datasets`、`kedro catalog list-patterns` 和 `kedro catalog resolve-patterns` 命令。\n* 删除了 `kedro run` 的 `--namespace` 选项，并将其替换为 `--namespaces`。\n* 作为微打包功能弃用的一部分，删除了 `kedro micropkg` CLI 命令。\n\n### API\n* 将私有方法 `_is_project` 和 `_find_kedro_project` 改为 `is_kedro_project` 和 `find_kedro_project`。\n* 将 `extra_params` 和 `_extra_params` 的实例重命名为 `r","2025-07-22T15:32:35",{"id":188,"version":189,"summary_zh":190,"released_at":191},351419,"1.0.0rc3","## 主要功能和改进\n将 `DataCatalog.__getitem__` 改为在数据集不存在时抛出 `DatasetNotFoundError`，以符合字典的预期行为。\n\n## 错误修复及其他更改\n## API 的破坏性变更\n## Kedro 1.0.0 即将弃用的功能\n## 文档更新\n## 社区贡献","2025-07-21T13:54:58",{"id":193,"version":194,"summary_zh":195,"released_at":196},351420,"1.0.0rc2","## 主要功能与改进\n* 为 `kedro run` 添加了 `--only-missing-outputs` CLI 标志。该标志会在所有持久化输出都已存在时跳过相应节点。\n* 移除了 
`AbstractRunner.run_only_missing()` 方法，这是一个较旧且不常用的用于部分运行的 API。请改用 `--only-missing-outputs` CLI。\n\n## 错误修复及其他变更\n* 提升了命名空间验证效率，以防止在创建大型流水线时出现显著性能下降。\n\n## API 的破坏性变更\n## Kedro 1.0.0 即将废弃的功能\n## 文档变更\n## 社区贡献","2025-07-18T22:31:12",{"id":198,"version":199,"summary_zh":200,"released_at":201},351421,"1.0.0rc1","## 主要功能与改进\n* 在 `Node` 类中对数据集名称添加了更严格的验证，确保 `.` 字符保留用于命名空间的一部分。\n* 为 `Pipeline` 类添加了 `prefix_datasets_with_namespace` 参数，允许用户启用或禁用在节点输入、输出和参数前添加命名空间前缀的功能。\n* 将默认节点名称改为由节点所使用的函数名后接基于该函数、输入和输出计算出的 SHA-256 安全哈希值组成，以确保唯一性并提高可读性。\n* 通过 `KEDRO_MP_CONTEXT` 环境变量，新增了在 `ParallelRunner` 上选择多进程启动方式的选项。\n\n## 错误修复及其他变更\n* 修改了按命名空间筛选管道的行为，使其返回精确匹配的命名空间，而非部分匹配。\n* 增加了在同一会话中运行多个命名空间的支持。\n* 更新了 `kedro registry describe` 命令，使其返回节点的名称属性，而不是为节点自动生成名称。\n\n## 文档变更\n* 更新了 `DataCatalog` 文档，优化了结构并详细介绍了新功能。\n\n## 社区贡献\n\n## API 的破坏性变更\n* 私有方法 `_is_project` 和 `_find_kedro_project` 已更改为 `is_kedro_project` 和 `find_kedro_project`。\n* 将 `extra_params` 和 `_extra_params` 的实例重命名为 `runtime_params`。\n* 移除了 `modular_pipeline` 模块，并将其功能迁移到 `pipeline` 模块中。\n* 将 `ModularPipelineError` 重命名为 `PipelineError`。\n* `Pipeline.grouped_nodes_by_namespace()` 被替换为 `group_nodes_by(group_by)` 方法，该方法支持多种分组策略并返回 `GroupedNodes` 列表，从而提高了类型安全性，并使部署插件集成更加一致。\n* 移除了微打包功能及相应的 `micropkg` CLI 命令。\n* 为提升 API 清晰度并为未来支持多运行会话做准备，在所有运行器方法和钩子中将 `session_id` 参数重命名为 `run_id`。\n* 删除了以下 `DataCatalog` 方法：`_get_dataset()`、`add_all()`、`add_feed_dict()`、`list()` 和 `shallow_copy()`。\n* 删除了 `kedro catalog create` CLI 命令。\n* 改变了 `runner.run()` 的输出——现在无论目录配置如何，它都会始终返回所有管道输出。\n\n## 从 Kedro 0.19.* 迁移到 1.* 的迁移指南\n[请参阅 Kedro 文档中的 1.0.0 迁移指南](https:\u002F\u002Fdocs.kedro.org\u002Fen\u002Flatest\u002Fresources\u002Fmigration.html)。","2025-06-20T13:33:46",{"id":203,"version":204,"summary_zh":205,"released_at":206},351422,"0.19.14","## Major features and improvements\n* Added execution time to pipeline completion log.\n## Bug fixes and other changes\n* Fixed a recursion error in custom datasets when `_describe()` accessed `self.__dict__`.\n## 
Community contributions\nMany thanks to the following Kedroids for contributing PRs to this release:\n* [Yury Fedotov](https:\u002F\u002Fgithub.com\u002Fyury-fedotov)","2025-06-17T10:01:26",{"id":208,"version":209,"summary_zh":210,"released_at":211},351423,"0.19.13","## Major features and improvements\n* Unified `pipeline()` and `Pipeline` into a single module (`kedro.pipeline`), aligning with the `node()`\u002F`Node` design pattern and improving namespace handling.\n\n## Bug fixes and other changes\n* Fixed a bug where the project creation workflow would use the `main` branch version of `kedro-starters` instead of the respective release version.\n* Fixed namespacing for `confirms` during pipeline creation to support `IncrementalDataset`.\n* Fixed a bug where `OmegaConf` caused an error during config resolution with runtime parameters.\n* Cached `inputs` in `Node` when created from a dictionary for better performance.\n* Enabled pluggy tracing only when the logging level is set to `DEBUG` to speed up the execution of project runs.\n\n## Upcoming deprecations for Kedro 1.0.0\n* Added a deprecation warning for catalog CLI commands. The `kedro catalog rank`, `kedro catalog list` and `kedro catalog resolve` commands will be replaced with their alternatives, and the `kedro catalog create` command will be removed.\n* Added a deprecation warning for `KedroDataCatalog`, which will replace `DataCatalog` while adopting the original `DataCatalog` name.\n* Added a deprecation warning for the `--namespace` option of `kedro run`. It will be replaced with the `--namespaces` option, which will allow running multiple namespaces together.\n* The `modular_pipeline` module is deprecated and will be removed in Kedro 1.0.0. Use the `pipeline` module instead.\n\n**Note**: On March 20th, a security vulnerability, CVE-2024-12215, was identified in Kedro. This issue stems from the deprecated micropackaging functionality, which is scheduled for removal in the upcoming Kedro 1.0 release. 
While we agree with the CVE assigned, this vulnerability only poses a risk if you pull a malicious micropackage from an untrusted source. If you're concerned, we recommend avoiding the micropackaging feature for now and upgrading to Kedro 1.0 once it's released.\n\n## Documentation changes\n* Updated Dask deployment docs.\n* Added non-jupyter environment integration page (for example Marimo) with dynamic Kedro session loading.\n\n## Community contributions\nMany thanks to the following Kedroids for contributing PRs to this release:\n* [Arnout Verboven](https:\u002F\u002Fgithub.com\u002FArnoutVerboven)\n* [gabohc](https:\u002F\u002Fgithub.com\u002Fgabohc)\n* [Luis Chaves Rodriguez](https:\u002F\u002Fgithub.com\u002Flucharo)","2025-05-22T13:51:18",{"id":213,"version":214,"summary_zh":215,"released_at":216},351424,"0.19.12","## Major features and improvements\n* Added `KedroDataCatalog.filter()` to filter datasets by name and type.\n* Added `Pipeline.grouped_nodes_by_namespace` property which returns a dictionary of nodes grouped by namespace, intended to be used by plugins to facilitate deployment of namespaced nodes together.\n* Added support for cloud storage protocols in `--conf-source`, allowing configuration to be loaded from remote locations such as S3.\n\n## Bug fixes and other changes\n* Added `DataCatalog` deprecation warning.\n* Updated `_LazyDataset` representation when printing `KedroDataCatalog`.\n* Fixed `MemoryDataset` to infer `assign` copy mode for Ibis Tables, which previously would be inferred as `deepcopy`.\n* Fixed pipeline packaging issue by ensuring `pipelines\u002F__init__.py` exists when creating new pipelines.\n* Changed the execution of `SequentialRunner` to not use an executor pool to ensure it's single threaded.\n* Fixed `%load_node` magic command to work with Jupyter Notebook `>=7.2.0`.\n* Remove `7: Kedro Viz` from Kedro tools.\n* Updated node grouping API to only group on first level of namespace.\n\n## Documentation changes\n* Added 
documentation for Kedro's support for Delta Lake versioning.\n* Added documentation for Kedro's support for Iceberg versioning.\n* Added documentation for Kedro's node grouping in deployment.\n* Fixed a minor grammatical error in Kedro-Viz installation instructions to improve documentation clarity.\n* Improved the Kedro VSCode extension documentation.\n* Updated the recommendations for nesting namespaces.\n\n## Community contributions\nMany thanks to the following Kedroids for contributing PRs to this release:\n* [Jacob Pieniazek](https:\u002F\u002Fgithub.com\u002Fjakepenzak)\n* [Lucas Vittor](https:\u002F\u002Fgithub.com\u002Flvvittor)\n* [Ean Jimenez](https:\u002F\u002Fgithub.com\u002FPrometean)\n* [Toran Sahu](https:\u002F\u002Fgithub.com\u002Ftoransahu)","2025-03-20T09:14:01",{"id":218,"version":219,"summary_zh":220,"released_at":221},351425,"0.19.11","## Major features and improvements\n* Implemented the `KedroDataCatalog.to_config()` method, which converts the catalog instance into a configuration format suitable for serialization.\n* Improved `OmegaConfigLoader` performance.\n* Replaced `trufflehog` with `detect-secrets` for detecting secrets within a code base.\n* Added support for `%load_ext kedro`.\n\n## Bug fixes and other changes\n* Added validation to ensure dataset version consistency across the catalog.\n* Fixed a bug in project creation when using a custom starter template offline.\n* Added the `node` import to the pipeline template.\n* Updated the error message when executing `kedro run` without a pipeline.\n* Safeguarded hooks when a user incorrectly registers a hook class in `settings.py`.\n* Fixed parsing of paths with a query and fragment.\n* Removed the lowercase transformation in regex validation.\n* Moved the `kedro-catalog` JSON schema to `kedro-datasets`.\n* Updated the `Partitioned dataset lazy saving` docs page.\n* Fixed `KedroDataCatalog` mutation after a pipeline run.\n* Made `KedroDataCatalog._datasets` compatible with `DataCatalog._datasets`.\n\n## Community contributions\nMany 
thanks to the following Kedroids for contributing PRs to this release:\n* [Hendrik Scherner](https:\u002F\u002Fgithub.com\u002FSchernHe)\n* [Chris Schopp](https:\u002F\u002Fgithub.com\u002Fchrisschopp)","2025-01-29T15:01:28",{"id":223,"version":224,"summary_zh":225,"released_at":226},351426,"0.19.10","## Major features and improvements\n* Added official support for Python 3.13.\n* Implemented a dict-like interface for `KedroDataCatalog`.\n* Implemented lazy dataset initialization for `KedroDataCatalog`.\n* Project dependencies on both the default template and starter templates are now explicitly declared in the `pyproject.toml` file, allowing Kedro projects to work with project management tools like `uv`, `pdm`, and `rye`.\n\n**Note:** ``KedroDataCatalog`` is an experimental feature and is under active development. Therefore, it is possible we'll introduce breaking changes to this class, so be mindful of that if you decide to use it already. Let us know if you have any feedback about the ``KedroDataCatalog`` or ideas for new features.\n\n## Bug fixes and other changes\n* Added I\u002FO support for the Oracle Cloud Infrastructure (OCI) Object Storage filesystem.\n* Fixed `DatasetAlreadyExistsError` for `ThreadRunner` when running a Kedro project and when using the runner separately.\n\n## Breaking changes to the API\n## Documentation changes\n* Added a Databricks Asset Bundles deployment guide.\n* Added a new minimal Kedro project creation guide.\n* Added an example to explain how dataset factories work.\n* Updated CLI autocompletion docs with the new Click syntax.\n* Standardised the `.parquet` suffix in docs and tests.\n\n## Community contributions\nMany thanks to the following Kedroids for contributing PRs to this release:\n* [G. D. 
McBain](https:\u002F\u002Fgithub.com\u002Fgdmcbain)\n* [Greg Vaslowski](https:\u002F\u002Fgithub.com\u002FVaslo)\n* [Hyewon Choi](https:\u002F\u002Fgithub.com\u002Fhyew0nChoi)\n* [Pedro Antonacio](https:\u002F\u002Fgithub.com\u002Fantonacio)","2024-11-26T17:44:50",{"id":228,"version":229,"summary_zh":230,"released_at":231},351427,"0.19.9","## Major features and improvements\n* Dropped Python 3.8 support.\n* Implemented `KedroDataCatalog`, which repeats `DataCatalog` functionality with a few API enhancements:\n  * Removed `_FrozenDatasets`; datasets are now accessed as properties;\n  * Added a get-dataset-by-name feature;\n  * `add_feed_dict()` was simplified to only add raw data;\n  * Datasets' initialisation was moved out of the `from_config()` method into the constructor.\n* Moved development requirements from `requirements.txt` to the dedicated section in `pyproject.toml` for the project template.\n* Implemented a `Protocol` abstraction for the current `DataCatalog` and for adding new catalog implementations.\n* Refactored the `kedro run` and `kedro catalog` commands.\n* Moved pattern resolution logic from `DataCatalog` to a separate component, `CatalogConfigResolver`. Updated `DataCatalog` to use `CatalogConfigResolver` internally.\n* Made packaged Kedro projects return the `session.run()` output so it can be used when running them in an interactive environment.\n* Enhanced `OmegaConfigLoader` configuration validation to detect duplicate keys at all parameter levels, ensuring comprehensive nested key checking.\n\n**Note:** ``KedroDataCatalog`` is an experimental feature and is under active development. Therefore, it is possible we'll introduce breaking changes to this class, so be mindful of that if you decide to use it already. 
Let us know if you have any feedback about the ``KedroDataCatalog`` or ideas for new features.\n\n## Bug fixes and other changes\n* Fixed a bug where using dataset factories broke with `ThreadRunner`.\n* Fixed a bug where `SharedMemoryDataset.exists` would not call the underlying `MemoryDataset`.\n* Fixed template projects example tests.\n* Made credentials loading consistent between `KedroContext._get_catalog()` and `resolve_patterns` so that both use `_get_config_credentials()`.\n\n## Breaking changes to the API\n* Removed `ShelveStore` to address a security vulnerability.\n\n## Documentation changes\n* Fixed the logo on the PyPI page.\n* Minor language\u002Fstyling updates.\n\n## Community contributions\n* [Puneet](https:\u002F\u002Fgithub.com\u002Fpuneeter)\n* [ethanknights](https:\u002F\u002Fgithub.com\u002Fethanknights)\n* [Manezki](https:\u002F\u002Fgithub.com\u002FManezki)\n* [MigQ2](https:\u002F\u002Fgithub.com\u002FMigQ2)\n* [Felix Scherz](https:\u002F\u002Fgithub.com\u002Ffelixscherz)\n* [Yu-Sheng Li](https:\u002F\u002Fgithub.com\u002Fkevin1kevin1k)","2024-10-10T19:13:09",{"id":233,"version":234,"summary_zh":235,"released_at":236},351428,"0.19.8","## Major features and improvements\n* Made the default run entrypoint in `__main__.py` work in interactive environments such as IPython and Databricks.\n\n## Bug fixes and other changes\n* Fixed a bug that caused tracebacks to disappear from CLI runs.\n* Moved `_find_run_command()` and `_find_run_command_in_plugins()` from `__main__.py` in the project template to the framework itself.\n* Fixed a bug where `%load_node` broke with multi-line import statements.\n* Fixed a regression where `rich` markup in logs stopped showing since 0.19.7.\n\n## Breaking changes to the API\n\n## Documentation changes\n* Added clarifications in the docs explaining how runtime parameter resolution works.\n\n## Community contributions\nMany thanks to the following Kedroids for contributing PRs to this release:\n* 
[cclauss](https:\u002F\u002Fgithub.com\u002Fcclauss)\n* [eltociear](https:\u002F\u002Fgithub.com\u002Feltociear)\n* [ltalirz](https:\u002F\u002Fgithub.com\u002Fltalirz)","2024-08-22T13:46:12",{"id":238,"version":239,"summary_zh":240,"released_at":241},351429,"0.19.7","## Major features and improvements\n* Exposed `load` and `save` publicly for each dataset in the core `kedro` library, and enabled other datasets to do the same. If a dataset doesn't expose `load` or `save` publicly, Kedro will fall back to using `_load` or `_save`, respectively.\n* Kedro commands are now lazily loaded for performance gains when running Kedro commands.\n* Implemented key completion support for accessing datasets in the `DataCatalog`.\n* Implemented dataset pretty printing.\n* Implemented `DataCatalog` pretty printing.\n* Moved to an opt-out model for telemetry, enabling it by default without requiring prior consent.\n\n## Bug fixes and other changes\n* Updated the error message for invalid catalog entries.\n* Updated the error message shown when a catalog entry's dataset class is not found, with hints on how to resolve the issue.\n* Fixed a bug in the `DataCatalog` `shallow_copy()` method to ensure it returns the type of the used catalog and doesn't cast it to `DataCatalog`.\n* Made [kedro-telemetry](https:\u002F\u002Fgithub.com\u002Fkedro-org\u002Fkedro-plugins\u002Ftree\u002Fmain\u002Fkedro-telemetry) a core dependency.\n* Fixed a bug where a few arguments were missing when `OmegaConfigLoader` was printed.\n* Fixed a bug where iterating over `OmegaConfigLoader`'s `keys` returned an empty dictionary.\n\n## Breaking changes to the API\n\n## Upcoming deprecations for Kedro 0.20.0\n* The utility method `get_pkg_version()` is deprecated and will be removed in Kedro 0.20.0.\n\n## Documentation changes\n* Improved documentation for configuring dataset parameters in the data catalog.\n* Extended documentation with an example of logging customisation at runtime.\n\n## Community contributions\nMany thanks to 
the following Kedroids for contributing PRs to this release:\n* [nickolasrm](https:\u002F\u002Fgithub.com\u002Fnickolasrm)\n* [yury-fedotov](https:\u002F\u002Fgithub.com\u002Fyury-fedotov)","2024-08-01T18:53:09",{"id":243,"version":244,"summary_zh":245,"released_at":246},351430,"0.19.6","## Major features and improvements\n* Added `raise_errors` argument to `find_pipelines`. If `True`, the first pipeline for which autodiscovery fails will cause an error to be raised. The default behaviour is still to raise a warning for each failing pipeline.\n* It is now possible to use Kedro without having `rich` installed.\n* Updated custom logging behavior: `conf\u002Flogging.yml` will be used if it exists and `KEDRO_LOGGING_CONFIG` is not set; otherwise, `default_logging.yml` will be used.\n\n## Bug fixes and other changes\n* User defined catch-all dataset factory patterns now override the default pattern provided by the runner.\n\n## Breaking changes to the API\n\n## Upcoming deprecations for Kedro 0.20.0\n* All micro-packaging commands (`kedro micropkg pull`, `kedro micropkg package`) are deprecated and will be removed in Kedro 0.20.0.\n\n## Documentation changes\n* Improved documentation for custom starters\n* Added a new docs section on deploying Kedro project on AWS Airflow MWAA\n* Detailed instructions on using `globals` and `runtime_params` with the `OmegaConfigLoader`\n\n## Community contributions\nMany thanks to the following Kedroids for contributing PRs to this release:\n* [doxenix](https:\u002F\u002Fgithub.com\u002Fdoxenix)\n* [cleeeks](https:\u002F\u002Fgithub.com\u002Fcleeeks)","2024-05-27T16:32:40",{"id":248,"version":249,"summary_zh":250,"released_at":251},351431,"0.19.5","## Bug fixes and other changes\n* Fixed breaking import issue when working on a project with `kedro-viz` on python 3.8.\n\n## Documentation changes\n* Updated the documentation for deploying a Kedro project with Astronomer Airflow.\n* Used `kedro-sphinx-theme` for 
documentation.","2024-04-22T14:50:04"]