[{"data":1,"prerenderedAt":-1},["ShallowReactive",2],{"similar-amazon-science--chronos-forecasting":3,"tool-amazon-science--chronos-forecasting":61},[4,18,26,36,44,53],{"id":5,"name":6,"github_repo":7,"description_zh":8,"stars":9,"difficulty_score":10,"last_commit_at":11,"category_tags":12,"status":17},4358,"openclaw","openclaw\u002Fopenclaw","OpenClaw 是一款专为个人打造的本地化 AI 助手，旨在让你在自己的设备上拥有完全可控的智能伙伴。它打破了传统 AI 助手局限于特定网页或应用的束缚，能够直接接入你日常使用的各类通讯渠道，包括微信、WhatsApp、Telegram、Discord、iMessage 等数十种平台。无论你在哪个聊天软件中发送消息，OpenClaw 都能即时响应，甚至支持在 macOS、iOS 和 Android 设备上进行语音交互，并提供实时的画布渲染功能供你操控。\n\n这款工具主要解决了用户对数据隐私、响应速度以及“始终在线”体验的需求。通过将 AI 部署在本地，用户无需依赖云端服务即可享受快速、私密的智能辅助，真正实现了“你的数据，你做主”。其独特的技术亮点在于强大的网关架构，将控制平面与核心助手分离，确保跨平台通信的流畅性与扩展性。\n\nOpenClaw 非常适合希望构建个性化工作流的技术爱好者、开发者，以及注重隐私保护且不愿被单一生态绑定的普通用户。只要具备基础的终端操作能力（支持 macOS、Linux 及 Windows WSL2），即可通过简单的命令行引导完成部署。如果你渴望拥有一个懂你",349277,3,"2026-04-06T06:32:30",[13,14,15,16],"Agent","开发框架","图像","数据工具","ready",{"id":19,"name":20,"github_repo":21,"description_zh":22,"stars":23,"difficulty_score":10,"last_commit_at":24,"category_tags":25,"status":17},3808,"stable-diffusion-webui","AUTOMATIC1111\u002Fstable-diffusion-webui","stable-diffusion-webui 是一个基于 Gradio 构建的网页版操作界面，旨在让用户能够轻松地在本地运行和使用强大的 Stable Diffusion 图像生成模型。它解决了原始模型依赖命令行、操作门槛高且功能分散的痛点，将复杂的 AI 绘图流程整合进一个直观易用的图形化平台。\n\n无论是希望快速上手的普通创作者、需要精细控制画面细节的设计师，还是想要深入探索模型潜力的开发者与研究人员，都能从中获益。其核心亮点在于极高的功能丰富度：不仅支持文生图、图生图、局部重绘（Inpainting）和外绘（Outpainting）等基础模式，还独创了注意力机制调整、提示词矩阵、负向提示词以及“高清修复”等高级功能。此外，它内置了 GFPGAN 和 CodeFormer 等人脸修复工具，支持多种神经网络放大算法，并允许用户通过插件系统无限扩展能力。即使是显存有限的设备，stable-diffusion-webui 也提供了相应的优化选项，让高质量的 AI 艺术创作变得触手可及。",162132,"2026-04-05T11:01:52",[14,15,13],{"id":27,"name":28,"github_repo":29,"description_zh":30,"stars":31,"difficulty_score":32,"last_commit_at":33,"category_tags":34,"status":17},1381,"everything-claude-code","affaan-m\u002Feverything-claude-code","everything-claude-code 是一套专为 AI 编程助手（如 Claude Code、Codex、Cursor 等）打造的高性能优化系统。它不仅仅是一组配置文件，而是一个经过长期实战打磨的完整框架，旨在解决 AI 
代理在实际开发中面临的效率低下、记忆丢失、安全隐患及缺乏持续学习能力等核心痛点。\n\n通过引入技能模块化、直觉增强、记忆持久化机制以及内置的安全扫描功能，everything-claude-code 能显著提升 AI 在复杂任务中的表现，帮助开发者构建更稳定、更智能的生产级 AI 代理。其独特的“研究优先”开发理念和针对 Token 消耗的优化策略，使得模型响应更快、成本更低，同时有效防御潜在的攻击向量。\n\n这套工具特别适合软件开发者、AI 研究人员以及希望深度定制 AI 工作流的技术团队使用。无论您是在构建大型代码库，还是需要 AI 协助进行安全审计与自动化测试，everything-claude-code 都能提供强大的底层支持。作为一个曾荣获 Anthropic 黑客大奖的开源项目，它融合了多语言支持与丰富的实战钩子（hooks），让 AI 真正成长为懂上",148568,2,"2026-04-09T23:34:24",[14,13,35],"语言模型",{"id":37,"name":38,"github_repo":39,"description_zh":40,"stars":41,"difficulty_score":32,"last_commit_at":42,"category_tags":43,"status":17},2271,"ComfyUI","Comfy-Org\u002FComfyUI","ComfyUI 是一款功能强大且高度模块化的视觉 AI 引擎，专为设计和执行复杂的 Stable Diffusion 图像生成流程而打造。它摒弃了传统的代码编写模式，采用直观的节点式流程图界面，让用户通过连接不同的功能模块即可构建个性化的生成管线。\n\n这一设计巧妙解决了高级 AI 绘图工作流配置复杂、灵活性不足的痛点。用户无需具备编程背景，也能自由组合模型、调整参数并实时预览效果，轻松实现从基础文生图到多步骤高清修复等各类复杂任务。ComfyUI 拥有极佳的兼容性，不仅支持 Windows、macOS 和 Linux 全平台，还广泛适配 NVIDIA、AMD、Intel 及苹果 Silicon 等多种硬件架构，并率先支持 SDXL、Flux、SD3 等前沿模型。\n\n无论是希望深入探索算法潜力的研究人员和开发者，还是追求极致创作自由度的设计师与资深 AI 绘画爱好者，ComfyUI 都能提供强大的支持。其独特的模块化架构允许社区不断扩展新功能，使其成为当前最灵活、生态最丰富的开源扩散模型工具之一，帮助用户将创意高效转化为现实。",108111,"2026-04-08T11:23:26",[14,15,13],{"id":45,"name":46,"github_repo":47,"description_zh":48,"stars":49,"difficulty_score":32,"last_commit_at":50,"category_tags":51,"status":17},4721,"markitdown","microsoft\u002Fmarkitdown","MarkItDown 是一款由微软 AutoGen 团队打造的轻量级 Python 工具，专为将各类文件高效转换为 Markdown 格式而设计。它支持 PDF、Word、Excel、PPT、图片（含 OCR）、音频（含语音转录）、HTML 乃至 YouTube 链接等多种格式的解析，能够精准提取文档中的标题、列表、表格和链接等关键结构信息。\n\n在人工智能应用日益普及的今天，大语言模型（LLM）虽擅长处理文本，却难以直接读取复杂的二进制办公文档。MarkItDown 恰好解决了这一痛点，它将非结构化或半结构化的文件转化为模型“原生理解”且 Token 效率极高的 Markdown 格式，成为连接本地文件与 AI 分析 pipeline 的理想桥梁。此外，它还提供了 MCP（模型上下文协议）服务器，可无缝集成到 Claude Desktop 等 LLM 应用中。\n\n这款工具特别适合开发者、数据科学家及 AI 研究人员使用，尤其是那些需要构建文档检索增强生成（RAG）系统、进行批量文本分析或希望让 AI 
助手直接“阅读”本地文件的用户。虽然生成的内容也具备一定可读性，但其核心优势在于为机器",93400,"2026-04-06T19:52:38",[52,14],"插件",{"id":54,"name":55,"github_repo":56,"description_zh":57,"stars":58,"difficulty_score":10,"last_commit_at":59,"category_tags":60,"status":17},4487,"LLMs-from-scratch","rasbt\u002FLLMs-from-scratch","LLMs-from-scratch 是一个基于 PyTorch 的开源教育项目，旨在引导用户从零开始一步步构建一个类似 ChatGPT 的大型语言模型（LLM）。它不仅是同名技术著作的官方代码库，更提供了一套完整的实践方案，涵盖模型开发、预训练及微调的全过程。\n\n该项目主要解决了大模型领域“黑盒化”的学习痛点。许多开发者虽能调用现成模型，却难以深入理解其内部架构与训练机制。通过亲手编写每一行核心代码，用户能够透彻掌握 Transformer 架构、注意力机制等关键原理，从而真正理解大模型是如何“思考”的。此外，项目还包含了加载大型预训练权重进行微调的代码，帮助用户将理论知识延伸至实际应用。\n\nLLMs-from-scratch 特别适合希望深入底层原理的 AI 开发者、研究人员以及计算机专业的学生。对于不满足于仅使用 API，而是渴望探究模型构建细节的技术人员而言，这是极佳的学习资源。其独特的技术亮点在于“循序渐进”的教学设计：将复杂的系统工程拆解为清晰的步骤，配合详细的图表与示例，让构建一个虽小但功能完备的大模型变得触手可及。无论你是想夯实理论基础，还是为未来研发更大规模的模型做准备",90106,"2026-04-06T11:19:32",[35,15,13,14],{"id":62,"github_repo":63,"name":64,"description_en":65,"description_zh":66,"ai_summary_zh":66,"readme_en":67,"readme_zh":68,"quickstart_zh":69,"use_case_zh":70,"hero_image_url":71,"owner_login":72,"owner_name":73,"owner_avatar_url":74,"owner_bio":75,"owner_company":76,"owner_location":76,"owner_email":76,"owner_twitter":77,"owner_website":78,"owner_url":79,"languages":80,"stars":85,"forks":86,"last_commit_at":87,"license":88,"difficulty_score":10,"env_os":89,"env_gpu":90,"env_ram":89,"env_deps":91,"category_tags":96,"github_topics":97,"view_count":32,"oss_zip_url":76,"oss_zip_packed_at":76,"status":17,"created_at":111,"updated_at":112,"faqs":113,"releases":141},6049,"amazon-science\u002Fchronos-forecasting","chronos-forecasting","Chronos: Pretrained Models for Time Series Forecasting","Chronos 是一系列专为时间序列预测打造的预训练 AI 模型，旨在帮助用户无需从头训练即可快速获得高精度的预测结果。它有效解决了传统预测方法依赖大量历史数据、建模周期长以及难以应对新场景（零样本）的痛点，特别适用于缺乏标注数据或需要快速部署的场景。\n\n无论是从事数据科学的研究人员、需要集成预测功能的开发者，还是希望优化库存、销量或资源规划的业务分析师，都能从中受益。Chronos 的核心亮点在于其强大的“零样本”泛化能力，支持单变量、多变量及含外部特征的复杂预测任务。其最新迭代版本 Chronos-2 在多项权威基准测试中表现卓越；而 Chronos-Bolt 变体则通过创新的补丁机制，在保持高精度的同时，将推理速度提升高达 250 倍，内存占用降低 20 倍，极大降低了大规模应用的门槛。配合完善的 Hugging 
Face 模型库与 AWS 部署指南，Chronos 让时间序列预测变得像使用通用大模型一样简单高效。","\u003Cdiv align=\"center\">\n\n# Chronos: Pretrained Models for Time Series Forecasting\n\n[![preprint](https:\u002F\u002Fimg.shields.io\u002Fstatic\u002Fv1?label=Chronos-Paper&message=2403.07815&color=B31B1B&logo=arXiv)](https:\u002F\u002Farxiv.org\u002Fabs\u002F2403.07815)\n[![preprint](https:\u002F\u002Fimg.shields.io\u002Fstatic\u002Fv1?label=Chronos-2-Report&message=2510.15821&color=B31B1B&logo=arXiv)](https:\u002F\u002Farxiv.org\u002Fabs\u002F2510.15821)\n[![huggingface](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F%F0%9F%A4%97%20HF-Datasets-FFD21E)](https:\u002F\u002Fhuggingface.co\u002Fdatasets\u002Fautogluon\u002Fchronos_datasets)\n[![huggingface](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F%F0%9F%A4%97%20HF-Models-FFD21E)](https:\u002F\u002Fhuggingface.co\u002Fcollections\u002Famazon\u002Fchronos-models-65f1791d630a8d57cb718444)\n[![fev](https:\u002F\u002Fimg.shields.io\u002Fstatic\u002Fv1?label=fev&message=Benchmark&color=B31B1B&logo=github)](https:\u002F\u002Fgithub.com\u002Fautogluon\u002Ffev)\n[![aws](https:\u002F\u002Fimg.shields.io\u002Fstatic\u002Fv1?label=SageMaker&message=Deploy&color=FF9900&logo=amazon-web-services)](notebooks\u002Fdeploy-chronos-to-amazon-sagemaker.ipynb)\n[![faq](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FFAQ-Questions%3F-blue)](https:\u002F\u002Fgithub.com\u002Famazon-science\u002Fchronos-forecasting\u002Fissues?q=is%3Aissue+label%3AFAQ)\n[![License: Apache-2.0](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FLicense-Apache--2.0-green.svg)](https:\u002F\u002Fopensource.org\u002Flicenses\u002FApache-2.0)\n\n\u003C\u002Fdiv>\n\n\n## 🚀 News\n- **30 Dec 2025**: ☁️ Deploy Chronos-2 to AWS with Amazon SageMaker: new guide covers real-time inference (GPU\u002FCPU), serverless endpoints with automatic scaling, and batch transform for large-scale forecasting. 
See the [deployment tutorial](notebooks\u002Fdeploy-chronos-to-amazon-sagemaker.ipynb).\n- **20 Oct 2025**: 🚀 [Chronos-2](https:\u002F\u002Fhuggingface.co\u002Famazon\u002Fchronos-2) released. It offers _zero-shot_ support for univariate, multivariate, and covariate-informed forecasting tasks. Chronos-2 achieves the best performance on fev-bench, GIFT-Eval and Chronos Benchmark II amongst pretrained models. Check out [this notebook](notebooks\u002Fchronos-2-quickstart.ipynb) to get started with Chronos-2.\n- **12 Dec 2024**: 📊 We released [`fev`](https:\u002F\u002Fgithub.com\u002Fautogluon\u002Ffev), a lightweight package for benchmarking time series forecasting models based on the [Hugging Face `datasets`](https:\u002F\u002Fhuggingface.co\u002Fdocs\u002Fdatasets\u002Fen\u002Findex) library.\n- **26 Nov 2024**: ⚡️ Chronos-Bolt models released [on HuggingFace](https:\u002F\u002Fhuggingface.co\u002Fcollections\u002Famazon\u002Fchronos-models-65f1791d630a8d57cb718444). Chronos-Bolt models are more accurate (5% lower error), up to 250x faster and 20x more memory efficient than the original Chronos models of the same size!\n- **13 Mar 2024**: 🚀 Chronos [paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2403.07815) and inference code released.\n\n## ✨ Introduction\n\nThis package provides an interface to the Chronos family of **pretrained time series forecasting models**. The following model types are supported.\n\n- **Chronos-2**: Our latest model with significantly enhanced capabilities. It offers zero-shot support for univariate, multivariate, and covariate-informed forecasting tasks. Chronos-2 delivers state-of-the-art zero-shot performance across multiple benchmarks (including fev-bench and GIFT-Eval), with the largest improvements observed on tasks that include exogenous features. It also achieves a win rate of over 90% against Chronos-Bolt in head-to-head comparisons. 
To learn more about Chronos, check out the [technical report](https:\u002F\u002Farxiv.org\u002Fabs\u002F2510.15821).\n- **Chronos-Bolt**: A patch-based variant of Chronos. It chunks the historical time series context into patches of multiple observations, which are then input into the encoder. The decoder then uses these representations to directly generate quantile forecasts across multiple future steps—a method known as direct multi-step forecasting. Chronos-Bolt models are up to 250 times faster and 20 times more memory-efficient than the original Chronos models of the same size. To learn more about Chronos-Bolt, check out this [blog post](https:\u002F\u002Faws.amazon.com\u002Fblogs\u002Fmachine-learning\u002Ffast-and-accurate-zero-shot-forecasting-with-chronos-bolt-and-autogluon\u002F).\n- **Chronos**: The original Chronos family which is based on language model architectures. A time series is transformed into a sequence of tokens via scaling and quantization, and a language model is trained on these tokens using the cross-entropy loss. Once trained, probabilistic forecasts are obtained by sampling multiple future trajectories given the historical context. 
To learn more about Chronos, check out the [publication](https:\u002F\u002Fopenreview.net\u002Fforum?id=gerNCVqqtR).\n\n### Available Models\n\n\u003Cdiv align=\"center\">\n\n| Model ID                                                               | Parameters |\n| ---------------------------------------------------------------------- | ---------- |\n| [`amazon\u002Fchronos-2`](https:\u002F\u002Fhuggingface.co\u002Famazon\u002Fchronos-2)   | 120M         |\n| [`autogluon\u002Fchronos-2-synth`](https:\u002F\u002Fhuggingface.co\u002Fautogluon\u002Fchronos-2-synth)   | 120M         |\n| [`autogluon\u002Fchronos-2-small`](https:\u002F\u002Fhuggingface.co\u002Fautogluon\u002Fchronos-2-small)   | 28M         |\n| [`amazon\u002Fchronos-bolt-tiny`](https:\u002F\u002Fhuggingface.co\u002Famazon\u002Fchronos-bolt-tiny)   | 9M         |\n| [`amazon\u002Fchronos-bolt-mini`](https:\u002F\u002Fhuggingface.co\u002Famazon\u002Fchronos-bolt-mini)   | 21M        |\n| [`amazon\u002Fchronos-bolt-small`](https:\u002F\u002Fhuggingface.co\u002Famazon\u002Fchronos-bolt-small) | 48M        |\n| [`amazon\u002Fchronos-bolt-base`](https:\u002F\u002Fhuggingface.co\u002Famazon\u002Fchronos-bolt-base)   | 205M       |\n| [`amazon\u002Fchronos-t5-tiny`](https:\u002F\u002Fhuggingface.co\u002Famazon\u002Fchronos-t5-tiny)   | 8M         |\n| [`amazon\u002Fchronos-t5-mini`](https:\u002F\u002Fhuggingface.co\u002Famazon\u002Fchronos-t5-mini)   | 20M        |\n| [`amazon\u002Fchronos-t5-small`](https:\u002F\u002Fhuggingface.co\u002Famazon\u002Fchronos-t5-small) | 46M        |\n| [`amazon\u002Fchronos-t5-base`](https:\u002F\u002Fhuggingface.co\u002Famazon\u002Fchronos-t5-base)   | 200M       |\n| [`amazon\u002Fchronos-t5-large`](https:\u002F\u002Fhuggingface.co\u002Famazon\u002Fchronos-t5-large) | 710M       |\n\n\u003C\u002Fdiv>\n\n## 📈 Usage\n\nTo perform inference with Chronos, the easiest way is to install this package through `pip`:\n\n```sh\npip install chronos-forecasting\n```\n\n> [!TIP]\n> For 
reliable production use, we recommend using Chronos-2 models through [Amazon SageMaker JumpStart](https:\u002F\u002Faws.amazon.com\u002Fsagemaker\u002Fai\u002Fjumpstart\u002F). Check out [this tutorial](notebooks\u002Fdeploy-chronos-to-amazon-sagemaker.ipynb) to learn how to deploy Chronos-2 inference endpoints to AWS with just a few lines of code.\n\n\n### Forecasting\n\nA minimal example showing how to perform forecasting using Chronos-2:\n\n```python\nimport pandas as pd  # requires: pip install 'pandas[pyarrow]'\nfrom chronos import Chronos2Pipeline\n\npipeline = Chronos2Pipeline.from_pretrained(\"amazon\u002Fchronos-2\", device_map=\"cuda\")\n\n# Load historical target values and past values of covariates\ncontext_df = pd.read_parquet(\"https:\u002F\u002Fautogluon.s3.amazonaws.com\u002Fdatasets\u002Ftimeseries\u002Felectricity_price\u002Ftrain.parquet\")\n\n# (Optional) Load future values of covariates\ntest_df = pd.read_parquet(\"https:\u002F\u002Fautogluon.s3.amazonaws.com\u002Fdatasets\u002Ftimeseries\u002Felectricity_price\u002Ftest.parquet\")\nfuture_df = test_df.drop(columns=\"target\")\n\n# Generate predictions with covariates\npred_df = pipeline.predict_df(\n    context_df,\n    future_df=future_df,\n    prediction_length=24,  # Number of steps to forecast\n    quantile_levels=[0.1, 0.5, 0.9],  # Quantile for probabilistic forecast\n    id_column=\"id\",  # Column identifying different time series\n    timestamp_column=\"timestamp\",  # Column with datetime information\n    target=\"target\",  # Column(s) with time series values to predict\n)\n```\n\nWe can now visualize the forecast:\n\n```python\nimport matplotlib.pyplot as plt  # requires: pip install matplotlib\n\nts_context = context_df.set_index(\"timestamp\")[\"target\"].tail(256)\nts_pred = pred_df.set_index(\"timestamp\")\nts_ground_truth = test_df.set_index(\"timestamp\")[\"target\"]\n\nts_context.plot(label=\"historical data\", color=\"xkcd:azure\", figsize=(12, 
3))\nts_ground_truth.plot(label=\"future data (ground truth)\", color=\"xkcd:grass green\")\nts_pred[\"predictions\"].plot(label=\"forecast\", color=\"xkcd:violet\")\nplt.fill_between(\n    ts_pred.index,\n    ts_pred[\"0.1\"],\n    ts_pred[\"0.9\"],\n    alpha=0.7,\n    label=\"prediction interval\",\n    color=\"xkcd:light lavender\",\n)\nplt.legend()\n```\n\n## Example Notebooks\n\n- [Chronos-2 Quick Start](notebooks\u002Fchronos-2-quickstart.ipynb)\n  &nbsp;\n  \u003Ca href=\"https:\u002F\u002Fstudiolab.sagemaker.aws\u002Fimport\u002Fgithub\u002Famazon-science\u002Fchronos-forecasting\u002Fblob\u002Fmain\u002Fnotebooks\u002Fchronos-2-quickstart.ipynb\">\n    \u003Cimg src=\"https:\u002F\u002Fstudiolab.sagemaker.aws\u002Fstudiolab.svg\" alt=\"Open In SageMaker Studio Lab\" height=\"18\" align=\"absmiddle\">\n  \u003C\u002Fa>\n  &nbsp;\n  \u003Ca href=\"https:\u002F\u002Fcolab.research.google.com\u002Fgithub\u002Famazon-science\u002Fchronos-forecasting\u002Fblob\u002Fmain\u002Fnotebooks\u002Fchronos-2-quickstart.ipynb\">\n    \u003Cimg src=\"https:\u002F\u002Fcolab.research.google.com\u002Fassets\u002Fcolab-badge.svg\" alt=\"Open In Colab\" height=\"18\" align=\"absmiddle\">\n  \u003C\u002Fa>\n- [Deploy Chronos-2 on Amazon SageMaker](notebooks\u002Fdeploy-chronos-to-amazon-sagemaker.ipynb)\n\n## 📝 Citation\n\nIf you find Chronos models useful for your research, please consider citing the associated papers:\n\n```\n@article{ansari2024chronos,\n  title={Chronos: Learning the Language of Time Series},\n  author={Ansari, Abdul Fatir and Stella, Lorenzo and Turkmen, Caner and Zhang, Xiyuan and Mercado, Pedro and Shen, Huibin and Shchur, Oleksandr and Rangapuram, Syama Sundar and Pineda Arango, Sebastian and Kapoor, Shubham and Zschiegner, Jasper and Maddix, Danielle C. and Mahoney, Michael W. 
and Torkkola, Kari and Gordon Wilson, Andrew and Bohlke-Schneider, Michael and Wang, Yuyang},\n  journal={Transactions on Machine Learning Research},\n  issn={2835-8856},\n  year={2024},\n  url={https:\u002F\u002Fopenreview.net\u002Fforum?id=gerNCVqqtR}\n}\n\n@article{ansari2025chronos2,\n  title        = {Chronos-2: From Univariate to Universal Forecasting},\n  author       = {Abdul Fatir Ansari and Oleksandr Shchur and Jaris Küken and Andreas Auer and Boran Han and Pedro Mercado and Syama Sundar Rangapuram and Huibin Shen and Lorenzo Stella and Xiyuan Zhang and Mononito Goswami and Shubham Kapoor and Danielle C. Maddix and Pablo Guerron and Tony Hu and Junming Yin and Nick Erickson and Prateek Mutalik Desai and Hao Wang and Huzefa Rangwala and George Karypis and Yuyang Wang and Michael Bohlke-Schneider},\n  journal      = {arXiv preprint arXiv:2510.15821},\n  year         = {2025},\n  url          = {https:\u002F\u002Farxiv.org\u002Fabs\u002F2510.15821}\n}\n```\n\n## 🛡️ Security\n\nSee [CONTRIBUTING](CONTRIBUTING.md#security-issue-notifications) for more information.\n\n## 📃 License\n\nThis project is licensed under the Apache-2.0 License.\n","\u003Cdiv align=\"center\">\n\n# Chronos：用于时间序列预测的预训练模型\n\n[![预印本](https:\u002F\u002Fimg.shields.io\u002Fstatic\u002Fv1?label=Chronos-Paper&message=2403.07815&color=B31B1B&logo=arXiv)](https:\u002F\u002Farxiv.org\u002Fabs\u002F2403.07815)\n[![预印本](https:\u002F\u002Fimg.shields.io\u002Fstatic\u002Fv1?label=Chronos-2-Report&message=2510.15821&color=B31B1B&logo=arXiv)](https:\u002F\u002Farxiv.org\u002Fabs\u002F2510.15821)\n[![Hugging Face](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F%F0%9F%A4%97%20HF-Datasets-FFD21E)](https:\u002F\u002Fhuggingface.co\u002Fdatasets\u002Fautogluon\u002Fchronos_datasets)\n[![Hugging 
Face](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F%F0%9F%A4%97%20HF-Models-FFD21E)](https:\u002F\u002Fhuggingface.co\u002Fcollections\u002Famazon\u002Fchronos-models-65f1791d630a8d57cb718444)\n[![fev](https:\u002F\u002Fimg.shields.io\u002Fstatic\u002Fv1?label=fev&message=Benchmark&color=B31B1B&logo=github)](https:\u002F\u002Fgithub.com\u002Fautogluon\u002Ffev)\n[![AWS](https:\u002F\u002Fimg.shields.io\u002Fstatic\u002Fv1?label=SageMaker&message=Deploy&color=FF9900&logo=amazon-web-services)](notebooks\u002Fdeploy-chronos-to-amazon-sagemaker.ipynb)\n[![FAQ](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FFAQ-Questions%3F-blue)](https:\u002F\u002Fgithub.com\u002Famazon-science\u002Fchronos-forecasting\u002Fissues?q=is%3Aissue+label%3AFAQ)\n[![许可证：Apache-2.0](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FLicense-Apache--2.0-green.svg)](https:\u002F\u002Fopensource.org\u002Flicenses\u002FApache-2.0)\n\n\u003C\u002Fdiv>\n\n\n## 🚀 最新消息\n- **2025年12月30日**：☁️ 使用 Amazon SageMaker 将 Chronos-2 部署到 AWS：新指南涵盖了实时推理（GPU\u002FCPU）、具有自动扩展功能的无服务器端点，以及用于大规模预测的批量转换。请参阅 [部署教程](notebooks\u002Fdeploy-chronos-to-amazon-sagemaker.ipynb)。\n- **2025年10月20日**：🚀 [Chronos-2](https:\u002F\u002Fhuggingface.co\u002Famazon\u002Fchronos-2) 发布。它为单变量、多变量和协变量驱动的预测任务提供零样本支持。在 fev-bench、GIFT-Eval 和 Chronos Benchmark II 上，Chronos-2 在所有预训练模型中表现最佳。请查看 [此笔记本](notebooks\u002Fchronos-2-quickstart.ipynb) 以开始使用 Chronos-2。\n- **2024年12月12日**：📊 我们发布了 [`fev`](https:\u002F\u002Fgithub.com\u002Fautogluon\u002Ffev)，这是一个基于 Hugging Face `datasets` 库的轻量级时间序列预测模型基准测试工具包。\n- **2024年11月26日**：⚡️ Chronos-Bolt 模型已在 [HuggingFace](https:\u002F\u002Fhuggingface.co\u002Fcollections\u002Famazon\u002Fchronos-models-65f1791d630a8d57cb718444) 上发布。与相同规模的原始 Chronos 模型相比，Chronos-Bolt 模型精度更高（误差降低 5%），速度最高可快 250 倍，内存效率提高 20 倍！\n- **2024年3月13日**：🚀 Chronos 论文 ([arXiv:2403.07815](https:\u002F\u002Farxiv.org\u002Fabs\u002F2403.07815)) 和推理代码发布。\n\n## ✨ 简介\n\n本软件包提供了对 Chronos 系列 **预训练时间序列预测模型** 的接口。支持以下模型类型：\n\n- 
**Chronos-2**：我们最新的模型，功能显著增强。它为单变量、多变量和协变量驱动的预测任务提供零样本支持。Chronos-2 在多个基准测试中（包括 fev-bench 和 GIFT-Eval）均表现出最先进的零样本性能，尤其是在包含外生特征的任务上表现尤为突出。此外，在与 Chronos-Bolt 的直接对比中，其胜率超过 90%。要了解更多关于 Chronos 的信息，请参阅 [技术报告](https:\u002F\u002Farxiv.org\u002Fabs\u002F2510.15821)。\n- **Chronos-Bolt**：Chronos 的一种基于分块的变体。它将历史时间序列上下文分割成包含多个观测值的片段，并将其输入编码器。解码器随后利用这些表示直接生成未来多个时间步的分位数预测——这种方法被称为直接多步预测。与相同规模的原始 Chronos 模型相比，Chronos-Bolt 模型的速度最高可快 250 倍，内存效率提高 20 倍。要了解更多关于 Chronos-Bolt 的信息，请参阅这篇 [博客文章](https:\u002F\u002Faws.amazon.com\u002Fblogs\u002Fmachine-learning\u002Ffast-and-accurate-zero-shot-forecasting-with-chronos-bolt-and-autogluon\u002F)。\n- **Chronos**：最初的 Chronos 系列，基于语言模型架构。通过缩放和量化，时间序列被转换为一系列标记，然后使用交叉熵损失函数在这些标记上训练语言模型。训练完成后，给定历史上下文，通过采样多种未来轨迹即可获得概率性预测。要了解更多关于 Chronos 的信息，请参阅 [论文](https:\u002F\u002Fopenreview.net\u002Fforum?id=gerNCVqqtR)。\n\n### 可用模型\n\n\u003Cdiv align=\"center\">\n\n| 模型 ID                                                               | 参数 |\n| ---------------------------------------------------------------------- | ---------- |\n| [`amazon\u002Fchronos-2`](https:\u002F\u002Fhuggingface.co\u002Famazon\u002Fchronos-2)   | 1.2亿         |\n| [`autogluon\u002Fchronos-2-synth`](https:\u002F\u002Fhuggingface.co\u002Fautogluon\u002Fchronos-2-synth)   | 1.2亿         |\n| [`autogluon\u002Fchronos-2-small`](https:\u002F\u002Fhuggingface.co\u002Fautogluon\u002Fchronos-2-small)   | 2,800万         |\n| [`amazon\u002Fchronos-bolt-tiny`](https:\u002F\u002Fhuggingface.co\u002Famazon\u002Fchronos-bolt-tiny)   | 900万         |\n| [`amazon\u002Fchronos-bolt-mini`](https:\u002F\u002Fhuggingface.co\u002Famazon\u002Fchronos-bolt-mini)   | 2,100万        |\n| [`amazon\u002Fchronos-bolt-small`](https:\u002F\u002Fhuggingface.co\u002Famazon\u002Fchronos-bolt-small) | 4,800万        |\n| [`amazon\u002Fchronos-bolt-base`](https:\u002F\u002Fhuggingface.co\u002Famazon\u002Fchronos-bolt-base)   | 2.05亿       |\n| 
[`amazon\u002Fchronos-t5-tiny`](https:\u002F\u002Fhuggingface.co\u002Famazon\u002Fchronos-t5-tiny)   | 800万         |\n| [`amazon\u002Fchronos-t5-mini`](https:\u002F\u002Fhuggingface.co\u002Famazon\u002Fchronos-t5-mini)   | 2,000万        |\n| [`amazon\u002Fchronos-t5-small`](https:\u002F\u002Fhuggingface.co\u002Famazon\u002Fchronos-t5-small) | 4,600万        |\n| [`amazon\u002Fchronos-t5-base`](https:\u002F\u002Fhuggingface.co\u002Famazon\u002Fchronos-t5-base)   | 2亿       |\n| [`amazon\u002Fchronos-t5-large`](https:\u002F\u002Fhuggingface.co\u002Famazon\u002Fchronos-t5-large) | 7.1亿       |\n\n\u003C\u002Fdiv>\n\n## 📈 使用方法\n\n要使用 Chronos 进行推理，最简单的方式是通过 `pip` 安装本软件包：\n\n```sh\npip install chronos-forecasting\n```\n\n> [!TIP]\n> 对于可靠的生产环境使用，我们建议通过 [Amazon SageMaker JumpStart](https:\u002F\u002Faws.amazon.com\u002Fsagemaker\u002Fai\u002Fjumpstart\u002F) 使用 Chronos-2 模型。请参阅 [此教程](notebooks\u002Fdeploy-chronos-to-amazon-sagemaker.ipynb)，了解如何仅用几行代码将 Chronos-2 推理端点部署到 AWS。\n\n\n### 预测\n\n一个展示如何使用 Chronos-2 进行预测的最小示例：\n\n```python\nimport pandas as pd  # 需要：pip install 'pandas[pyarrow]'\nfrom chronos import Chronos2Pipeline\n\npipeline = Chronos2Pipeline.from_pretrained(\"amazon\u002Fchronos-2\", device_map=\"cuda\")\n\n# 加载历史目标值和协变量的过去值\ncontext_df = pd.read_parquet(\"https:\u002F\u002Fautogluon.s3.amazonaws.com\u002Fdatasets\u002Ftimeseries\u002Felectricity_price\u002Ftrain.parquet\")\n\n# （可选）加载协变量的未来值\ntest_df = pd.read_parquet(\"https:\u002F\u002Fautogluon.s3.amazonaws.com\u002Fdatasets\u002Ftimeseries\u002Felectricity_price\u002Ftest.parquet\")\nfuture_df = test_df.drop(columns=\"target\")\n\n# 使用协变量生成预测\npred_df = pipeline.predict_df(\n    context_df,\n    future_df=future_df,\n    prediction_length=24,  # 预测的步数\n    quantile_levels=[0.1, 0.5, 0.9],  # 概率预测的分位数\n    id_column=\"id\",  # 用于标识不同时间序列的列\n    timestamp_column=\"timestamp\",  # 包含日期时间信息的列\n    target=\"target\",  # 用于预测的时间序列值的列\n)\n```\n\n现在我们可以可视化预测结果：\n\n```python\nimport matplotlib.pyplot as plt  # 
需要：pip install matplotlib\n\nts_context = context_df.set_index(\"timestamp\")[\"target\"].tail(256)\nts_pred = pred_df.set_index(\"timestamp\")\nts_ground_truth = test_df.set_index(\"timestamp\")[\"target\"]\n\nts_context.plot(label=\"历史数据\", color=\"xkcd:azure\", figsize=(12, 3))\nts_ground_truth.plot(label=\"未来数据（真实值）\", color=\"xkcd:grass green\")\nts_pred[\"predictions\"].plot(label=\"预测\", color=\"xkcd:violet\")\nplt.fill_between(\n    ts_pred.index,\n    ts_pred[\"0.1\"],\n    ts_pred[\"0.9\"],\n    alpha=0.7,\n    label=\"预测区间\",\n    color=\"xkcd:light lavender\",\n)\nplt.legend()\n```\n\n## 示例笔记本\n\n- [Chronos-2 快速入门](notebooks\u002Fchronos-2-quickstart.ipynb)\n  &nbsp;\n  \u003Ca href=\"https:\u002F\u002Fstudiolab.sagemaker.aws\u002Fimport\u002Fgithub\u002Famazon-science\u002Fchronos-forecasting\u002Fblob\u002Fmain\u002Fnotebooks\u002Fchronos-2-quickstart.ipynb\">\n    \u003Cimg src=\"https:\u002F\u002Fstudiolab.sagemaker.aws\u002Fstudiolab.svg\" alt=\"在 SageMaker Studio Lab 中打开\" height=\"18\" align=\"absmiddle\">\n  \u003C\u002Fa>\n  &nbsp;\n  \u003Ca href=\"https:\u002F\u002Fcolab.research.google.com\u002Fgithub\u002Famazon-science\u002Fchronos-forecasting\u002Fblob\u002Fmain\u002Fnotebooks\u002Fchronos-2-quickstart.ipynb\">\n    \u003Cimg src=\"https:\u002F\u002Fcolab.research.google.com\u002Fassets\u002Fcolab-badge.svg\" alt=\"在 Colab 中打开\" height=\"18\" align=\"absmiddle\">\n  \u003C\u002Fa>\n- [在 Amazon SageMaker 上部署 Chronos-2](notebooks\u002Fdeploy-chronos-to-amazon-sagemaker.ipynb)\n\n## 📝 引用\n\n如果您发现 Chronos 模型对您的研究有所帮助，请考虑引用相关论文：\n\n```\n@article{ansari2024chronos,\n  title={Chronos: Learning the Language of Time Series},\n  author={Ansari, Abdul Fatir and Stella, Lorenzo and Turkmen, Caner and Zhang, Xiyuan and Mercado, Pedro and Shen, Huibin and Shchur, Oleksandr and Rangapuram, Syama Sundar and Pineda Arango, Sebastian and Kapoor, Shubham and Zschiegner, Jasper and Maddix, Danielle C. and Mahoney, Michael W. 
and Torkkola, Kari and Gordon Wilson, Andrew and Bohlke-Schneider, Michael and Wang, Yuyang},\n  journal={Transactions on Machine Learning Research},\n  issn={2835-8856},\n  year={2024},\n  url={https:\u002F\u002Fopenreview.net\u002Fforum?id=gerNCVqqtR}\n}\n\n@article{ansari2025chronos2,\n  title        = {Chronos-2: From Univariate to Universal Forecasting},\n  author       = {Abdul Fatir Ansari and Oleksandr Shchur and Jaris Küken and Andreas Auer and Boran Han and Pedro Mercado and Syama Sundar Rangapuram and Huibin Shen and Lorenzo Stella and Xiyuan Zhang and Mononito Goswami and Shubham Kapoor and Danielle C. Maddix and Pablo Guerron and Tony Hu and Junming Yin and Nick Erickson and Prateek Mutalik Desai and Hao Wang and Huzefa Rangwala and George Karypis and Yuyang Wang and Michael Bohlke-Schneider},\n  journal      = {arXiv preprint arXiv:2510.15821},\n  year         = {2025},\n  url          = {https:\u002F\u002Farxiv.org\u002Fabs\u002F2510.15821}\n}\n```\n\n## 🛡️ 安全\n\n更多信息请参阅 [CONTRIBUTING](CONTRIBUTING.md#security-issue-notifications)。\n\n## 📃 许可证\n\n本项目采用 Apache-2.0 许可证。","# Chronos 时间序列预测快速上手指南\n\nChronos 是亚马逊推出的一系列预训练时间序列预测模型，支持零样本（zero-shot）推理。本指南将帮助你快速在本地环境中部署并使用最新的 **Chronos-2** 模型进行预测。\n\n## 1. 环境准备\n\n在开始之前，请确保你的开发环境满足以下要求：\n\n*   **操作系统**: Linux, macOS 或 Windows\n*   **Python 版本**: 3.8 及以上\n*   **硬件建议**: \n    *   **GPU**: 推荐使用 NVIDIA GPU (需安装 CUDA) 以获得最佳推理速度。\n    *   **CPU**: 支持 CPU 推理，但速度较慢。\n*   **前置依赖**: \n    *   `pandas` (建议安装 pyarrow 支持以加速数据读取)\n    *   `matplotlib` (用于可视化结果，可选)\n\n## 2. 安装步骤\n\n通过 `pip` 安装官方包。国内用户建议使用清华或阿里镜像源以加速下载。\n\n```bash\n# 使用默认源安装\npip install chronos-forecasting\n\n# 或者使用国内镜像源加速安装 (推荐)\npip install chronos-forecasting -i https:\u002F\u002Fpypi.tuna.tsinghua.edu.cn\u002Fsimple\n```\n\n> **提示**: 如果需要可视化功能，请额外安装 matplotlib：\n> ```bash\n> pip install matplotlib -i https:\u002F\u002Fpypi.tuna.tsinghua.edu.cn\u002Fsimple\n> ```\n\n## 3. 
基本使用\n\n以下是一个最小化的示例，展示如何使用 **Chronos-2** 模型加载历史数据并生成未来预测。该示例支持单变量及多变量（含协变量）预测。\n\n### 代码示例\n\n```python\nimport pandas as pd\nfrom chronos import Chronos2Pipeline\n\n# 1. 初始化模型管道\n# device_map=\"cuda\" 表示使用 GPU，若无 GPU 可改为 \"cpu\"\npipeline = Chronos2Pipeline.from_pretrained(\"amazon\u002Fchronos-2\", device_map=\"cuda\")\n\n# 2. 加载历史数据 (上下文)\n# 这里以远程 parquet 文件为例，实际使用中可替换为本地路径 pd.read_parquet(\"local_file.parquet\")\ncontext_df = pd.read_parquet(\"https:\u002F\u002Fautogluon.s3.amazonaws.com\u002Fdatasets\u002Ftimeseries\u002Felectricity_price\u002Ftrain.parquet\")\n\n# 3. (可选) 加载未来的协变量数据\n# 如果任务需要外生特征（如节假日、天气等），需提供未来时间段的协变量\ntest_df = pd.read_parquet(\"https:\u002F\u002Fautogluon.s3.amazonaws.com\u002Fdatasets\u002Ftimeseries\u002Felectricity_price\u002Ftest.parquet\")\nfuture_df = test_df.drop(columns=\"target\")\n\n# 4. 生成预测\npred_df = pipeline.predict_df(\n    context_df,\n    future_df=future_df,       # 若无需协变量，可忽略此参数\n    prediction_length=24,      # 预测未来 24 个时间步\n    quantile_levels=[0.1, 0.5, 0.9], # 输出分位数，用于概率预测区间\n    id_column=\"id\",            # 标识不同时间序列的列名\n    timestamp_column=\"timestamp\", # 时间戳列名\n    target=\"target\",           # 目标值列名\n)\n\n# 查看预测结果前几行\nprint(pred_df.head())\n```\n\n### 结果可视化 (可选)\n\n如果你已安装 `matplotlib`，可以使用以下代码绘制预测结果与真实值的对比图：\n\n```python\nimport matplotlib.pyplot as plt\n\n# 准备数据\nts_context = context_df.set_index(\"timestamp\")[\"target\"].tail(256)\nts_pred = pred_df.set_index(\"timestamp\")\nts_ground_truth = test_df.set_index(\"timestamp\")[\"target\"]\n\n# 绘图\nplt.figure(figsize=(12, 3))\nts_context.plot(label=\"historical data\", color=\"xkcd:azure\")\nts_ground_truth.plot(label=\"future data (ground truth)\", color=\"xkcd:grass green\")\nts_pred[\"predictions\"].plot(label=\"forecast\", color=\"xkcd:violet\")\n\n# 绘制预测区间\nplt.fill_between(\n    ts_pred.index,\n    ts_pred[\"0.1\"],\n    ts_pred[\"0.9\"],\n    alpha=0.7,\n    label=\"prediction interval\",\n    color=\"xkcd:light 
lavender\",\n)\n\nplt.legend()\nplt.show()\n```\n\n### 可用模型列表\n除了默认的 `amazon\u002Fchronos-2` (120M 参数)，你还可以根据资源情况选择其他模型：\n*   **轻量级**: `autogluon\u002Fchronos-2-small` (28M)\n*   **高速版**: `amazon\u002Fchronos-bolt-base` (比原版快 250 倍)\n*   **超大版**: `amazon\u002Fchronos-t5-large` (710M)\n\n只需将 `from_pretrained` 中的模型 ID 替换为上述名称即可。","某大型连锁零售企业的供应链团队正面临黑五促销前的库存预测挑战，需为数千个 SKU 快速生成精准销量预估以优化备货。\n\n### 没有 chronos-forecasting 时\n- **冷启动困难**：面对新开门店或新上架商品缺乏历史数据，传统统计模型无法训练，只能依赖人工经验估算，误差极大。\n- **开发周期漫长**：数据科学家需为不同品类单独特征工程并训练定制化模型（如 Prophet 或 LSTM），耗时数周才能覆盖全量 SKU。\n- **多变量支持薄弱**：难以有效整合促销活动、节假日、天气等外部协变量，导致在复杂市场环境下预测失准。\n- **资源消耗巨大**：维护数千个独立模型需要庞大的计算集群和存储资源，推理速度慢，无法支持实时调优。\n\n### 使用 chronos-forecasting 后\n- **零样本即时预测**：利用 Chronos-2 的零样本（zero-shot）能力，即使无历史数据的新品也能直接基于预训练知识输出高质量预测，消除冷启动盲区。\n- **统一模型高效部署**：单个预训练模型即可泛化处理所有 SKU 的单变量、多变量及协变量任务，将建模周期从数周缩短至几小时。\n- **协变量深度融合**：原生支持外生特征输入，自动捕捉促销与季节性波动对销量的非线性影响，显著提升复杂场景下的准确率。\n- **极致性能表现**：采用 Chronos-Bolt 架构后，推理速度提升高达 250 倍且内存占用降低 20 倍，轻松在普通实例上实现大规模实时 forecasts。\n\nchronos-forecasting 通过预训练大模型的泛化能力，将时间序列预测从繁琐的定制开发转变为高效的零样本推理，彻底重构了企业决策响应速度。","https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Famazon-science_chronos-forecasting_78366a9f.png","amazon-science","Amazon Science","https:\u002F\u002Foss.gittoolsai.com\u002Favatars\u002Famazon-science_b9b0901f.png","",null,"AmazonScience","https:\u002F\u002Famazon.science","https:\u002F\u002Fgithub.com\u002Famazon-science",[81],{"name":82,"color":83,"percentage":84},"Python","#3572A5",100,5095,606,"2026-04-09T19:57:53","Apache-2.0","未说明","可选（支持 CPU 和 GPU）。示例代码显示支持 CUDA (device_map=\"cuda\")，AWS SageMaker 部署指南提及支持 GPU\u002FCPU 推理。具体显存大小和 CUDA 版本未在文中明确说明，但提到 Chronos-Bolt 模型比原版节省 20 倍内存。",{"notes":92,"python":89,"dependencies":93},"1. 可通过 pip install chronos-forecasting 安装。2. 支持多种预训练模型（Chronos-2, Chronos-Bolt, Chronos-T5），参数量从 8M 到 710M 不等。3. 生产环境推荐使用 Amazon SageMaker JumpStart 部署。4. 模型托管在 Hugging Face，首次运行需下载模型文件。5. 
输入数据需为 pandas DataFrame 格式，包含时间戳和目标列。",[94,95,64],"pandas","matplotlib",[14,35,15],[98,99,100,101,102,103,104,105,106,107,108,109,110],"forecasting","large-language-models","llm","machine-learning","time-series","foundation-models","pretrained-models","time-series-forecasting","timeseries","artificial-intelligence","huggingface","huggingface-transformers","transformers","2026-03-27T02:49:30.150509","2026-04-10T09:02:46.630850",[114,119,124,128,133,137],{"id":115,"question_zh":116,"answer_zh":117,"source_url":118},27390,"如何复现论文中展示的评估结果（例如图表数据）？","虽然部分代码片段可在相关讨论中找到，但需注意 GluonTS 中的许多数据集名称虽与论文中使用的相同，但在关键方面（如预测长度和滚动次数）可能存在差异。目前官方建议关注专门的复现议题（如 #150）。此外，对于预训练任务，官方推荐使用大内存机器以获得更好效果。","https:\u002F\u002Fgithub.com\u002Famazon-science\u002Fchronos-forecasting\u002Fissues\u002F102",{"id":120,"question_zh":121,"answer_zh":122,"source_url":123},27391,"在使用 `convert_to_arrow` 函数时将数据转换为 Arrow 格式报错，应该如何正确传入时间序列数据？","错误通常是因为传入的数据格式不符合函数签名要求。`convert_to_arrow` 期望 `time_series` 参数是一个一维 numpy 数组的列表（即多个时间序列的列表），而不是单个数组。即使只有一个时间序列，也必须将其放入列表中，例如 `[time_series_data]`。同时，`start_times` 应为对应每个时间序列起始时间的 `np.datetime64` 列表。如果只处理单个序列，请确保调用方式为：`convert_to_arrow(path=path, time_series=[data_array])`。","https:\u002F\u002Fgithub.com\u002Famazon-science\u002Fchronos-forecasting\u002Fissues\u002F149",{"id":125,"question_zh":126,"answer_zh":127,"source_url":118},27392,"如何将 Hugging Face 数据集高效地写入 Arrow 文件而不占用过多内存？","`ArrowWriter.write_to_file` 方法支持直接接收生成器（generator）作为输入，无需先将数据实例化为列表。你可以编写一个函数，通过 `yield` 逐个从 Hugging Face 数据集中产出数据项，并将该生成器直接传递给 `write_to_file`，从而避免内存溢出问题。",{"id":129,"question_zh":130,"answer_zh":131,"source_url":132},27393,"Chronos Base 模型在单独处理与批量处理时间序列时预测结果不一致，原因是什么？","该问题通常并非模型本身的随机性导致（即使在固定随机种子下复现），而是由于数据处理逻辑中的排序不一致引起的。例如，Pandas 可能按字母顺序对系列 ID 进行了排序，而原始上下文列表保持了用户定义的顺序，导致预测结果与实际序列错位。解决方法是检查并统一数据框（DataFrame）中 `unique_id` 和真实值 `y` 
的排序逻辑，确保其与模型输出的索引顺序严格一致。","https:\u002F\u002Fgithub.com\u002Famazon-science\u002Fchronos-forecasting\u002Fissues\u002F262",{"id":134,"question_zh":135,"answer_zh":136,"source_url":123},27394,"在 Windows 上运行训练脚本时遇到 `ForkingPickler` 或 `EOFError` 相关的 pickle 错误，如何解决？","这类错误在 Windows 上常因多进程启动方式（spawn）与数据加载逻辑不兼容导致。首先确认传入训练函数的数据格式是否正确（必须是时间序列列表）。其次，检查是否错误地传递了单个数组而非列表。若问题依旧，尝试减少 `num_workers` 设置为 0 以禁用多进程数据加载，或确保所有自定义数据集转换逻辑在主进程中执行。",{"id":138,"question_zh":139,"answer_zh":140,"source_url":132},27395,"为什么我的预测结果中数值看起来一致，但绘图显示的曲线却完全不同？","这通常是由于元数据（如 `unique_id` 或时间戳 `ds`）与预测值 `fcst` 之间的对齐错误造成的。虽然预测数值本身可能是正确的，但如果用于绘图的 Pandas DataFrame 中索引顺序混乱，会导致错误的序列被绘制在一起。请打印预测结果的前几行（使用 `.head()`），仔细核对 `unique_id`、`y`（真实值）和 `fcst`（预测值）列是否属于同一个时间序列，并修正数据合并或排序的逻辑。",[142,147,152,157,162,167,172,177,182,187,192,197,202,207,212,217,222,227,232,237],{"id":143,"version":144,"summary_zh":145,"released_at":146},180541,"v2.2.2","## 变更内容\n* Chronos-2：@abdulfatir 在 https:\u002F\u002Fgithub.com\u002Famazon-science\u002Fchronos-forecasting\u002Fpull\u002F436 中添加了 after_batch 回调函数\n* @shchur 在 https:\u002F\u002Fgithub.com\u002Famazon-science\u002Fchronos-forecasting\u002Fpull\u002F437 中优化了 predict_df 的速度\n* @abdulfatir 在 https:\u002F\u002Fgithub.com\u002Famazon-science\u002Fchronos-forecasting\u002Fpull\u002F439 中将版本从 2.2.1 升级至 2.2.2\n\n\n**完整变更日志**：https:\u002F\u002Fgithub.com\u002Famazon-science\u002Fchronos-forecasting\u002Fcompare\u002Fv2.2.1...v2.2.2","2025-12-17T18:13:25",{"id":148,"version":149,"summary_zh":150,"released_at":151},180542,"v2.2.1","## 变更内容\n* Chronos-2：@abdulfatir 在 https:\u002F\u002Fgithub.com\u002Famazon-science\u002Fchronos-forecasting\u002Fpull\u002F415 中更新了快速入门笔记本，添加了 LoRA 示例。\n* @abdulfatir 在 https:\u002F\u002Fgithub.com\u002Famazon-science\u002Fchronos-forecasting\u002Fpull\u002F414 中为 `df_utils` 添加了单元测试。\n* Chronos-2：仅在设备类型为 CUDA 时启用 `pin_memory`，由 @abdulfatir 在 https:\u002F\u002Fgithub.com\u002Famazon-science\u002Fchronos-forecasting\u002Fpull\u002F431 中实现。\n* Chronos-2：添加了禁用 
`DataParallel` 的选项，由 @abdulfatir 在 https:\u002F\u002Fgithub.com\u002Famazon-science\u002Fchronos-forecasting\u002Fpull\u002F434 中完成。\n* 将版本更新至 2.2.1，由 @abdulfatir 在 https:\u002F\u002Fgithub.com\u002Famazon-science\u002Fchronos-forecasting\u002Fpull\u002F435 中完成。\n\n\n**完整变更日志**：https:\u002F\u002Fgithub.com\u002Famazon-science\u002Fchronos-forecasting\u002Fcompare\u002Fv2.2.0...v2.2.1","2025-12-16T12:08:18",{"id":153,"version":154,"summary_zh":155,"released_at":156},180543,"v2.2.0","## 🚀 新功能？\n\n- 支持低秩适应（LoRA）微调\n- 优化了 DataFrame 操作，使 `predict_df` 的运行速度比之前大幅提升\n\n## 🐛 错误修复与改进\n\n- 移除了多 GPU 环境下对批量大小的断言\n- 在 `predict_df` 中新增了跳过 DataFrame 验证的选项\n- Chronos-2：为更清晰地表达含义，将 `predict_batches_jointly` 重命名为 `cross_learning`\n- Chronos-2：在 `fit` 方法中增加了指定回调函数的选项\n- Chronos-2：处理了在提供 future_df 时，prediction_length 小于 3 的情况\n\n## 变更内容\n\n* Chronos-2：@abdulfatir 在 https:\u002F\u002Fgithub.com\u002Famazon-science\u002Fchronos-forecasting\u002Fpull\u002F392 中将关于批量大小的断言改为警告\n* 修复 Chronos-Bolt 教程笔记本条目中的链接错误，由 @shchur 在 https:\u002F\u002Fgithub.com\u002Famazon-science\u002Fchronos-forecasting\u002Fpull\u002F396 中完成\n* 优化了 `convert_df_input_to_list_of_dicts_input` 和 `validate_df_inputs` 函数，由 @shchur 在 https:\u002F\u002Fgithub.com\u002Famazon-science\u002Fchronos-forecasting\u002Fpull\u002F395 中完成\n* Chronos-2：增加 LoRA 微调支持，由 @abdulfatir 在 https:\u002F\u002Fgithub.com\u002Famazon-science\u002Fchronos-forecasting\u002Fpull\u002F393 中实现\n* Chronos-2：在 `predict_df` 中添加跳过 DataFrame 验证的选项，由 @abdulfatir 在 https:\u002F\u002Fgithub.com\u002Famazon-science\u002Fchronos-forecasting\u002Fpull\u002F400 中完成\n* 修复 df_utils 中因未绑定局部变量可能导致的崩溃问题，由 @abdulfatir 在 https:\u002F\u002Fgithub.com\u002Famazon-science\u002Fchronos-forecasting\u002Fpull\u002F404 中解决\n* Chronos-2：在 `fit` 方法中增加指定回调函数的选项，由 @abdulfatir 在 https:\u002F\u002Fgithub.com\u002Famazon-science\u002Fchronos-forecasting\u002Fpull\u002F405 中实现\n* 处理 prediction_length 小于 3 的情况，由 @shchur 在 
https:\u002F\u002Fgithub.com\u002Famazon-science\u002Fchronos-forecasting\u002Fpull\u002F407 中完成\n* Chronos-2：在 Trainer 中将默认 dataloader_num_workers 设置为 0，由 @abdulfatir 在 https:\u002F\u002Fgithub.com\u002Famazon-science\u002Fchronos-forecasting\u002Fpull\u002F409 中完成\n* Chronos-2：增加移除 PrinterCallback 的选项，由 @abdulfatir 在 https:\u002F\u002Fgithub.com\u002Famazon-science\u002Fchronos-forecasting\u002Fpull\u002F410 中实现\n* 版本号升级至 2.2.0rc3，由 @abdulfatir 在 https:\u002F\u002Fgithub.com\u002Famazon-science\u002Fchronos-forecasting\u002Fpull\u002F411 中完成\n* Chronos-2：在微调时，predict_fev 不再捕获异常，由 @abdulfatir 在 https:\u002F\u002Fgithub.com\u002Famazon-science\u002Fchronos-forecasting\u002Fpull\u002F417 中完成\n* Chronos-2：将 predict_batches_jointly 重命名为 cross_learning，由 @abdulfatir 在 https:\u002F\u002Fgithub.com\u002Famazon-science\u002Fchronos-forecasting\u002Fpull\u002F418 中完成\n* Chronos-2：如果使用比默认值更长的上下文长度进行微调，则更新 context_length，由 @abdulfatir 在 https:\u002F\u002Fgithub.com\u002Famazon-science\u002Fchronos-forecasting\u002Fpull\u002F419 中实现\n* Chronos-2：确保微调后保存更新后的 chronos_config，由 @abdulfatir 在 https:\u002F\u002Fgithub.com\u002Famazon-science\u002Fchronos-forecasting\u002Fpull\u002F420 中完成\n* Chronos-2：处理缺少 'peft' 和 lora_config 的情况，由 @abdulfatir 在 https:\u002F\u002Fgithub.com\u002Famazon-science\u002Fchronos-forecasting\u002Fpull\u002F421 中完成\n* 版本号从 2.2.0rc3 升级至 2.2.0rc4，由 @abdulfatir 在 https:\u002F\u002Fgithub.com\u002Famazon-science\u002Fchronos-forecasting\u002Fpull\u002F422 中完成\n* 版本号从 2.2.0rc4 升级至 2.2.0，由 @abdulfatir 在 https:\u002F\u002Fgithub.com\u002Famazon-science\u002Fchronos-forecasti","2025-12-08T14:16:34",{"id":158,"version":159,"summary_zh":160,"released_at":161},180544,"v2.2.0rc4","## 变更内容\n* Chronos-2：在微调时，不要在 predict_fev 中捕获异常，由 @abdulfatir 在 https:\u002F\u002Fgithub.com\u002Famazon-science\u002Fchronos-forecasting\u002Fpull\u002F417 中实现。\n* Chronos-2：将 predict_batches_jointly 重命名为 cross_learning，由 @abdulfatir 在 
https:\u002F\u002Fgithub.com\u002Famazon-science\u002Fchronos-forecasting\u002Fpull\u002F418 中实现。\n* Chronos-2：如果使用比默认值更长的上下文长度进行微调，则更新 context_length，由 @abdulfatir 在 https:\u002F\u002Fgithub.com\u002Famazon-science\u002Fchronos-forecasting\u002Fpull\u002F419 中实现。\n* Chronos-2：确保在微调后保存更新后的 chronos_config，由 @abdulfatir 在 https:\u002F\u002Fgithub.com\u002Famazon-science\u002Fchronos-forecasting\u002Fpull\u002F420 中实现。\n* Chronos-2：处理缺失的 'peft' 和 lora_config，由 @abdulfatir 在 https:\u002F\u002Fgithub.com\u002Famazon-science\u002Fchronos-forecasting\u002Fpull\u002F421 中实现。\n* 版本从 2.2.0rc3 升级到 2.2.0rc4，由 @abdulfatir 在 https:\u002F\u002Fgithub.com\u002Famazon-science\u002Fchronos-forecasting\u002Fpull\u002F422 中完成。\n\n\n**完整变更日志**：https:\u002F\u002Fgithub.com\u002Famazon-science\u002Fchronos-forecasting\u002Fcompare\u002Fv2.2.0rc3...v2.2.0rc4","2025-12-04T12:25:48",{"id":163,"version":164,"summary_zh":165,"released_at":166},180545,"v2.2.0rc3","## 变更内容\n* 由 @shchur 在 https:\u002F\u002Fgithub.com\u002Famazon-science\u002Fchronos-forecasting\u002Fpull\u002F407 中处理 `prediction_length` 小于 3 的情况\n* Chronos-2：由 @abdulfatir 在 https:\u002F\u002Fgithub.com\u002Famazon-science\u002Fchronos-forecasting\u002Fpull\u002F409 中将 Trainer 的默认 `dataloader_num_workers` 设置为 0\n* Chronos-2：由 @abdulfatir 在 https:\u002F\u002Fgithub.com\u002Famazon-science\u002Fchronos-forecasting\u002Fpull\u002F410 中添加移除 `PrinterCallback` 的选项\n* 由 @abdulfatir 在 https:\u002F\u002Fgithub.com\u002Famazon-science\u002Fchronos-forecasting\u002Fpull\u002F411 中将版本号提升至 2.2.0rc3\n\n\n**完整变更日志**：https:\u002F\u002Fgithub.com\u002Famazon-science\u002Fchronos-forecasting\u002Fcompare\u002Fv2.2.0rc2...v2.2.0rc3","2025-12-01T18:22:24",{"id":168,"version":169,"summary_zh":170,"released_at":171},180546,"v2.2.0rc2","## 变更内容\n* 修复由 `df_utils` 中未绑定的局部变量导致的潜在崩溃，由 @abdulfatir 在 https:\u002F\u002Fgithub.com\u002Famazon-science\u002Fchronos-forecasting\u002Fpull\u002F404 中完成。\n* Chronos-2：新增在 `fit` 方法中指定回调函数的选项，由 @abdulfatir 在 
https:\u002F\u002Fgithub.com\u002Famazon-science\u002Fchronos-forecasting\u002Fpull\u002F405 中完成。\n\n\n**完整变更日志**：https:\u002F\u002Fgithub.com\u002Famazon-science\u002Fchronos-forecasting\u002Fcompare\u002Fv2.2.0rc1...v2.2.0rc2","2025-11-30T15:55:13",{"id":173,"version":174,"summary_zh":175,"released_at":176},180547,"v2.2.0rc1","## 变更内容\n* Chronos-2：将关于批次大小的断言改为警告，由 @abdulfatir 在 https:\u002F\u002Fgithub.com\u002Famazon-science\u002Fchronos-forecasting\u002Fpull\u002F392 中完成\n* 修复 Chronos-Bolt 教程笔记本条目中的链接，由 @shchur 在 https:\u002F\u002Fgithub.com\u002Famazon-science\u002Fchronos-forecasting\u002Fpull\u002F396 中完成\n* 优化 `convert_df_input_to_list_of_dicts_input` 和 `validate_df_inputs` 函数，由 @shchur 在 https:\u002F\u002Fgithub.com\u002Famazon-science\u002Fchronos-forecasting\u002Fpull\u002F395 中完成\n* Chronos-2：添加 LoRA 微调支持，由 @abdulfatir 在 https:\u002F\u002Fgithub.com\u002Famazon-science\u002Fchronos-forecasting\u002Fpull\u002F393 中完成\n* Chronos-2：在 `predict_df` 中添加跳过数据框验证的选项，由 @abdulfatir 在 https:\u002F\u002Fgithub.com\u002Famazon-science\u002Fchronos-forecasting\u002Fpull\u002F400 中完成\n\n\n**完整变更日志**：https:\u002F\u002Fgithub.com\u002Famazon-science\u002Fchronos-forecasting\u002Fcompare\u002Fv2.1.0...v2.2.0rc1","2025-11-27T23:14:47",{"id":178,"version":179,"summary_zh":180,"released_at":181},180548,"v2.1.0","## 🚀 新功能？\n\n- Chronos-2 现在可以通过 SageMaker JumpStart 部署到 AWS 上。详情请参阅 [此笔记本](https:\u002F\u002Fgithub.com\u002Famazon-science\u002Fchronos-forecasting\u002Fblob\u002Fmain\u002Fnotebooks\u002Fdeploy-chronos-to-amazon-sagemaker.ipynb)。\n- [缩放点积注意力 (SDPA)](https:\u002F\u002Fdocs.pytorch.org\u002Fdocs\u002Fstable\u002Fgenerated\u002Ftorch.nn.functional.scaled_dot_product_attention.html) 现已成为 Chronos-2 中默认的注意力实现方式。如果您需要使用之前的 eager 实现，请通过 `Chronos2Pipeline.from_pretrained(..., attn_implementation=\"eager\")` 加载模型。\n- 为旧版 Chronos 和 Chronos-Bolt 模型添加了 `predict_df` 支持。现在，所有模型（Chronos-2、Chronos-Bolt、Chronos）都提供统一的 pandas DataFrame API。注意：只有 Chronos-2 支持多变量和协变量驱动的预测。\n- 新增了 
`Chronos2Pipeline.embed` 方法，允许用户从 Chronos-2 编码器的 _最后一层_ 提取嵌入。\n\n## 🐛 错误修复\n\n- 修复了在 Chronos-2 微调过程中仅使用过去协变量时的相关问题。**如果您正在微调 Chronos-2 模型，我们强烈建议升级到 `chronos-forecasting==2.1.0`**。\n- 修复了 Windows 系统上多工作进程相关的问题。\n\n\n## 所有变更\n* 由 @abdulfatir 在 https:\u002F\u002Fgithub.com\u002Famazon-science\u002Fchronos-forecasting\u002Fpull\u002F326 中修复 pandas 安装说明。\n* 由 @AdnaneKhan 在 https:\u002F\u002Fgithub.com\u002Famazon-science\u002Fchronos-forecasting\u002Fpull\u002F328 中缩小 GitHub token 权限范围。\n* 由 @abdulfatir 在 https:\u002F\u002Fgithub.com\u002Famazon-science\u002Fchronos-forecasting\u002Fpull\u002F333 中更新 Chronos-2 笔记本的安装说明。\n* 由 @abdulfatir 在 https:\u002F\u002Fgithub.com\u002Famazon-science\u002Fchronos-forecasting\u002Fpull\u002F334 中在微调示例中添加日志记录步骤。\n* [chronos-2] 由 @kashif 在 https:\u002F\u002Fgithub.com\u002Famazon-science\u002Fchronos-forecasting\u002Fpull\u002F331 中添加对 SDPA 的支持。\n* 由 @abdulfatir 在 https:\u002F\u002Fgithub.com\u002Famazon-science\u002Fchronos-forecasting\u002Fpull\u002F338 中将笔记本视为文档处理。\n* 由 @abdulfatir 在 https:\u002F\u002Fgithub.com\u002Famazon-science\u002Fchronos-forecasting\u002Fpull\u002F347 中将笔记本计入 Python 语言统计。\n* 由 @shchur 在 https:\u002F\u002Fgithub.com\u002Famazon-science\u002Fchronos-forecasting\u002Fpull\u002F348 中将 SageMaker 笔记本移动到新路径。\n* 由 @shchur 在 https:\u002F\u002Fgithub.com\u002Famazon-science\u002Fchronos-forecasting\u002Fpull\u002F349 中撤销“将 SageMaker 笔记本移动到新路径”的更改。\n* 由 @HarvestStars 在 https:\u002F\u002Fgithub.com\u002Famazon-science\u002Fchronos-forecasting\u002Fpull\u002F344 中通过分离“仅过去”和“未来已知”协变量键来修复 `validate_and_prepare_single_dict_task` 函数。\n* 由 @shchur 在 https:\u002F\u002Fgithub.com\u002Famazon-science\u002Fchronos-forecasting\u002Fpull\u002F350 中添加 Chronos-2 SageMaker 笔记本。\n* 由 @shchur 在 https:\u002F\u002Fgithub.com\u002Famazon-science\u002Fchronos-forecasting\u002Fpull\u002F353 中更新指向 Hugging Face 的模型链接。\n* 由 @shchur 在 https:\u002F\u002Fgithub.com\u002Famazon-science\u002Fchronos-forecasting\u002Fpull\u002F355 中移除 README 中的 
logo。\n* 由 @abdulfatir 在 https:\u002F\u002Fgithub.com\u002Famazon-science\u002Fchronos-forecasting\u002Fpull\u002F35 中更新笔记本和 README。","2025-11-21T10:58:04",{"id":183,"version":184,"summary_zh":185,"released_at":186},180549,"v2.1.0rc1","## 变更内容\n* 修复 pandas 安装说明，由 @abdulfatir 在 https:\u002F\u002Fgithub.com\u002Famazon-science\u002Fchronos-forecasting\u002Fpull\u002F326 中完成\n* 缩小 GitHub token 权限范围，由 @AdnaneKhan 在 https:\u002F\u002Fgithub.com\u002Famazon-science\u002Fchronos-forecasting\u002Fpull\u002F328 中完成\n* 更新 Chronos-2 笔记本的安装说明，由 @abdulfatir 在 https:\u002F\u002Fgithub.com\u002Famazon-science\u002Fchronos-forecasting\u002Fpull\u002F333 中完成\n* 在微调示例中添加日志记录步骤，由 @abdulfatir 在 https:\u002F\u002Fgithub.com\u002Famazon-science\u002Fchronos-forecasting\u002Fpull\u002F334 中完成\n* [chronos-2] 添加对 SDPA 的支持，由 @kashif 在 https:\u002F\u002Fgithub.com\u002Famazon-science\u002Fchronos-forecasting\u002Fpull\u002F331 中完成\n* 将笔记本视为文档处理，由 @abdulfatir 在 https:\u002F\u002Fgithub.com\u002Famazon-science\u002Fchronos-forecasting\u002Fpull\u002F338 中完成\n* 在语言统计中将笔记本计为 Python 文件，由 @abdulfatir 在 https:\u002F\u002Fgithub.com\u002Famazon-science\u002Fchronos-forecasting\u002Fpull\u002F347 中完成\n* 将 SageMaker 笔记本移动到新路径，由 @shchur 在 https:\u002F\u002Fgithub.com\u002Famazon-science\u002Fchronos-forecasting\u002Fpull\u002F348 中完成\n* 撤销“将 SageMaker 笔记本移动到新路径”的更改，由 @shchur 在 https:\u002F\u002Fgithub.com\u002Famazon-science\u002Fchronos-forecasting\u002Fpull\u002F349 中完成\n* 修复 validate_and_prepare_single_dict_task 函数，通过分离“仅过去”和“未来已知”协变量键来实现，由 @HarvestStars 在 https:\u002F\u002Fgithub.com\u002Famazon-science\u002Fchronos-forecasting\u002Fpull\u002F344 中完成\n* 添加 Chronos-2 SageMaker 笔记本，由 @shchur 在 https:\u002F\u002Fgithub.com\u002Famazon-science\u002Fchronos-forecasting\u002Fpull\u002F350 中完成\n* 更新模型在 Hugging Face 上的链接，由 @shchur 在 https:\u002F\u002Fgithub.com\u002Famazon-science\u002Fchronos-forecasting\u002Fpull\u002F353 中完成\n* 从 README 中移除 logo，由 @shchur 在 
https:\u002F\u002Fgithub.com\u002Famazon-science\u002Fchronos-forecasting\u002Fpull\u002F355 中完成\n* 更新笔记本和 README，由 @abdulfatir 在 https:\u002F\u002Fgithub.com\u002Famazon-science\u002Fchronos-forecasting\u002Fpull\u002F356 中完成\n* 为 CloudFront 添加 FutureWarning 警告，由 @abdulfatir 在 https:\u002F\u002Fgithub.com\u002Famazon-science\u002Fchronos-forecasting\u002Fpull\u002F357 中完成\n* 放宽 `transformers` 的最低版本要求至 >=4.41，由 @abdulfatir 在 https:\u002F\u002Fgithub.com\u002Famazon-science\u002Fchronos-forecasting\u002Fpull\u002F364 中完成\n* 在 Chronos-2 测试数据加载器中设置 `num_workers=0`，由 @IliasAarab 在 https:\u002F\u002Fgithub.com\u002Famazon-science\u002Fchronos-forecasting\u002Fpull\u002F365 中完成\n* 更新 Chronos-Bolt 的长 horizon 启发式方法，由 @abdulfatir 在 https:\u002F\u002Fgithub.com\u002Famazon-science\u002Fchronos-forecasting\u002Fpull\u002F366 中完成\n* 为 Chronos 和 Chronos-Bolt 添加 predict_df 支持，由 @abdulfatir 在 https:\u002F\u002Fgithub.com\u002Famazon-science\u002Fchronos-forecasting\u002Fpull\u002F371 中完成\n* 添加 Chronos2Pipeline.embed 方法，由 @abdulfatir 在 https:\u002F\u002Fgithub.com\u002Famazon-science\u002Fchronos-forecasting\u002Fpull\u002F361 中完成\n* 在损失计算过程中屏蔽仅过去的协变量，由 @abdulfatir 在 https:\u002F\u002Fgithub.com\u002Famazon-science\u002Fchronos-forecasting\u002Fpull\u002F379 中完成\n* Chronos-2：更改默认微调学习率并移除实验性标签，由 @abdulfatir 在 https:\u002F\u002Fgithub.com\u002Famazon-science\u002Fchronos-forecasting\u002Fpull\u002F381 中完成\n* 将版本更新为 2.1.0rc1 用于预发布，由 @abdulfatir 在 https:\u002F\u002Fgithub.com\u002Famazon-science\u002Fchronos-forecasting\u002Fpull\u002F383 中完成\n\n## 新贡献者\n* @AdnaneKhan 完成了他们的首次贡献","2025-11-18T09:54:21",{"id":188,"version":189,"summary_zh":190,"released_at":191},180550,"v2.0.1","## 变更内容\n* 由 @abdulfatir 在 https:\u002F\u002Fgithub.com\u002Famazon-science\u002Fchronos-forecasting\u002Fpull\u002F364 中将 `transformers` 的最低版本约束放宽至 >=4.41\n* 由 @abdulfatir 在 https:\u002F\u002Fgithub.com\u002Famazon-science\u002Fchronos-forecasting\u002Fpull\u002F366 中更新了 Chronos-Bolt 的长 horizon 
启发式方法\n\n\n**完整变更日志**: https:\u002F\u002Fgithub.com\u002Famazon-science\u002Fchronos-forecasting\u002Fcompare\u002Fv2.0.0...v2.0.1","2025-11-06T12:25:00",{"id":193,"version":194,"summary_zh":195,"released_at":196},180551,"v2.0.0","## 🚀 Introducing Chronos-2: From univariate to universal forecasting\r\n\r\nThis release adds support for [**Chronos-2**](https:\u002F\u002Fwww.arxiv.org\u002Fabs\u002F2510.15821). It is a 120M-parameter time series foundation model that offers zero-shot support for univariate, multivariate, and covariate-informed forecasting tasks. Chronos-2 delivers state-of-the-art zero-shot performance across multiple benchmarks (including fev-bench and GIFT-Eval), with the largest improvements observed on tasks that include exogenous features. In head-to-head comparisons, it outperforms its predecessor, Chronos-Bolt, over 90% of the time.\r\n\r\n📌 Get started with Chronos-2: [Chronos-2 Quick Start](https:\u002F\u002Fgithub.com\u002Famazon-science\u002Fchronos-forecasting\u002Fblob\u002Fmain\u002Fnotebooks\u002Fchronos-2-quickstart.ipynb)\r\n\r\nChronos-2 offers significant improvements in capabilities and can handle diverse forecasting scenarios not supported by earlier models.\r\n\r\n| Capability | Chronos | Chronos-Bolt | Chronos-2 |\r\n|------------|---------|--------------|-----------|\r\n| Univariate Forecasting | ✅ | ✅ | ✅ |\r\n| Cross-learning across items | ❌ | ❌ | ✅ |\r\n| Multivariate Forecasting | ❌ | ❌ | ✅ |\r\n| Past-only (real\u002Fcategorical) covariates | ❌ | ❌ | ✅ |\r\n| Known future (real\u002Fcategorical) covariates | 🧩 | 🧩 | ✅ |\r\n| Fine-tuning support | ✅ | ✅ | ✅ |\r\n| Max. Context Length | 512 | 2048 | 8192 |\r\n\r\n🧩 Chronos\u002FChronos-Bolt do not natively support future covariates, but they can be combined with external covariate regressors (see [AutoGluon tutorial](https:\u002F\u002Fauto.gluon.ai\u002Fstable\u002Ftutorials\u002Ftimeseries\u002Fforecasting-chronos.html#incorporating-the-covariates)). 
This only models per-timestep effects, not effects across time. In contrast, Chronos-2 supports all covariate types natively.\r\n\r\n![fig1](https:\u002F\u002Fgithub.com\u002Fuser-attachments\u002Fassets\u002F8ab82fce-45ad-431e-a58c-189ed0d3359f)\r\n*Figure 1: The complete Chronos-2 pipeline. Input time series (targets and covariates) are first normalized using a robust scaling scheme, after which a time index and mask meta features are added. The resulting sequences are split into non-overlapping patches and mapped to high-dimensional embeddings via a residual network. The core transformer stack operates on these patch embeddings and produces multi-patch quantile outputs corresponding to the future patches masked out in the input. Each transformer block alternates between time and group attention layers: the time attention layer aggregates information across patches within a single time series, while the group attention layer aggregates information across all series within a group at each patch index. The figure illustrates two multivariate time series with one known covariate each, with corresponding groups highlighted in blue and red. This example is for illustration purposes only; Chronos-2 supports arbitrary numbers of targets and optional covariates.*\r\n\r\n![fig2](https:\u002F\u002Fgithub.com\u002Fuser-attachments\u002Fassets\u002F994d47c7-e197-4644-b16f-14f7aa853627)\r\n*Figure 2: Results of experiments on the fev-bench time series benchmark. The average win rate and skill score are computed with respect to the scaled quantile loss (SQL) metric, which evaluates probabilistic forecasting performance. Higher values are better for both. 
Chronos-2 outperforms all existing pretrained models by a substantial margin on this comprehensive benchmark, which includes univariate, multivariate, and covariate-informed forecasting tasks.*\r\n\r\n![fig3](https:\u002F\u002Fgithub.com\u002Fuser-attachments\u002Fassets\u002F9046081b-2d9d-4588-a274-8dc70ad47a56)\r\n*Figure 3: Chronos-2 results in univariate mode and the corresponding gains from in-context learning (ICL), shown as stacked bars on the covariates subset of fev-bench. ICL delivers large gains on tasks with covariates, demonstrating Chronos-2’s ability to effectively use covariates through ICL. Besides Chronos-2, only TabPFN-TS and COSMIC support covariates, and Chronos-2 outperforms all baselines (including TabPFN-TS and COSMIC) by a wide margin.*\r\n\r\n![fig4](https:\u002F\u002Fgithub.com\u002Fuser-attachments\u002Fassets\u002F4373e257-08af-4a8e-9700-1b77d2cc3a4b)\r\n*Figure 4: Results on the GIFT-Eval time series benchmark. The average win rate and skill score with respect to the (a) probabilistic and (b) point forecasting metrics. Higher values are better for both win rate and skill score. 
Chronos-2 outperforms the previously best-performing models, TimesFM-2.5 and TiRex.*\r\n\r\n## What's Changed\r\n* Add Chronos-2 by @abdulfatir in https:\u002F\u002Fgithub.com\u002Famazon-science\u002Fchronos-forecasting\u002Fpull\u002F319\r\n* Remove ALWAYS_DOWNLOAD from CF and S3 by @abdulfatir in https:\u002F\u002Fgithub.com\u002Famazon-science\u002Fchronos-forecasting\u002Fpull\u002F322\r\n* Use dynamic versioning and bump version by @abdulfatir in https:\u002F\u002Fgithub.com\u002Famazon-science\u002Fchronos-forecasting\u002Fpull\u002F320\r\n* Add example notebook for Chronos-2 by @shchur in https:\u002F\u002Fgithub.com\u002Famazon-science\u002Fchronos-forecasting\u002Fpull\u002F325\r\n* Update README for Chronos-2 by @abdulfatir in https:\u002F\u002Fgithub.com\u002Famazon-science\u002Fchronos-forecasting\u002Fpull\u002F324\r\n* Bump version to 2.0.0 by @abdulfatir in https:","2025-10-20T13:48:29",{"id":198,"version":199,"summary_zh":200,"released_at":201},180552,"v2.0.0rc1","## Chronos-2 Pre-release\r\n\r\n## What's Changed\r\n* Add Chronos-2 by @abdulfatir in https:\u002F\u002Fgithub.com\u002Famazon-science\u002Fchronos-forecasting\u002Fpull\u002F319\r\n* Remove ALWAYS_DOWNLOAD from CF and S3 by @abdulfatir in https:\u002F\u002Fgithub.com\u002Famazon-science\u002Fchronos-forecasting\u002Fpull\u002F322\r\n* Use dynamic versioning and bump version by @abdulfatir in https:\u002F\u002Fgithub.com\u002Famazon-science\u002Fchronos-forecasting\u002Fpull\u002F320\r\n\r\n\r\n**Full Changelog**: https:\u002F\u002Fgithub.com\u002Famazon-science\u002Fchronos-forecasting\u002Fcompare\u002Fv1.5.3...v2.0.0rc1","2025-10-20T10:06:06",{"id":203,"version":204,"summary_zh":205,"released_at":206},180553,"v1.5.3","## What's Changed\r\n* Fix issue with new caching mechanism in transformers and bump versions by @abdulfatir in https:\u002F\u002Fgithub.com\u002Famazon-science\u002Fchronos-forecasting\u002Fpull\u002F313\r\n\r\n\r\n**Full Changelog**: 
https:\u002F\u002Fgithub.com\u002Famazon-science\u002Fchronos-forecasting\u002Fcompare\u002Fv1.5.2...v1.5.3","2025-08-05T08:50:30",{"id":208,"version":209,"summary_zh":210,"released_at":211},180554,"v1.5.2","v1.5.2 relaxes the upper bound on accelerate to `\u003C2`.\r\n\r\n## What's Changed\r\n* Bump `accelerate>=0.32,\u003C2` by @Tyler-Hardin in https:\u002F\u002Fgithub.com\u002Famazon-science\u002Fchronos-forecasting\u002Fpull\u002F298\r\n* Bump version to 1.5.2 by @abdulfatir in https:\u002F\u002Fgithub.com\u002Famazon-science\u002Fchronos-forecasting\u002Fpull\u002F299\r\n\r\n## New Contributors\r\n* @Tyler-Hardin made their first contribution in https:\u002F\u002Fgithub.com\u002Famazon-science\u002Fchronos-forecasting\u002Fpull\u002F298\r\n\r\n**Full Changelog**: https:\u002F\u002Fgithub.com\u002Famazon-science\u002Fchronos-forecasting\u002Fcompare\u002Fv1.5.1...v1.5.2","2025-05-06T08:22:41",{"id":213,"version":214,"summary_zh":215,"released_at":216},180555,"v1.5.1","🐛 Fixed an issue with forecasting constant series for Chronos-Bolt. 
See #294.\r\n\r\n## What's Changed\r\n* Bump transformers to >=4.48 by @lostella in https:\u002F\u002Fgithub.com\u002Famazon-science\u002Fchronos-forecasting\u002Fpull\u002F280\r\n* Add example notebook for SageMaker JumpStart by @shchur in https:\u002F\u002Fgithub.com\u002Famazon-science\u002Fchronos-forecasting\u002Fpull\u002F281\r\n* Fix date in readme by @lostella in https:\u002F\u002Fgithub.com\u002Famazon-science\u002Fchronos-forecasting\u002Fpull\u002F284\r\n* Fix scaling that affects constant series by @abdulfatir in https:\u002F\u002Fgithub.com\u002Famazon-science\u002Fchronos-forecasting\u002Fpull\u002F294\r\n* Fix type-checking issues by @lostella in https:\u002F\u002Fgithub.com\u002Famazon-science\u002Fchronos-forecasting\u002Fpull\u002F295\r\n\r\n\r\n**Full Changelog**: https:\u002F\u002Fgithub.com\u002Famazon-science\u002Fchronos-forecasting\u002Fcompare\u002Fv1.5.0...v1.5.1","2025-04-10T15:26:11",{"id":218,"version":219,"summary_zh":220,"released_at":221},180556,"v1.5.0","## What's Changed\r\n* Fix training install instructions by @abdulfatir in https:\u002F\u002Fgithub.com\u002Famazon-science\u002Fchronos-forecasting\u002Fpull\u002F236\r\n* remove eval-pr-comment workflow by @canerturkmen in https:\u002F\u002Fgithub.com\u002Famazon-science\u002Fchronos-forecasting\u002Fpull\u002F239\r\n* Add pipeline.embed support for Chronos-Bolt by @abdulfatir in https:\u002F\u002Fgithub.com\u002Famazon-science\u002Fchronos-forecasting\u002Fpull\u002F247\r\n* Update issue templates  by @lostella in https:\u002F\u002Fgithub.com\u002Famazon-science\u002Fchronos-forecasting\u002Fpull\u002F269\r\n* Relax torch compatibility to \u003C3 by @lostella in https:\u002F\u002Fgithub.com\u002Famazon-science\u002Fchronos-forecasting\u002Fpull\u002F277\r\n* Bump package version to 1.5.0 by @lostella in https:\u002F\u002Fgithub.com\u002Famazon-science\u002Fchronos-forecasting\u002Fpull\u002F278\r\n\r\n\r\n**Full Changelog**: 
https:\u002F\u002Fgithub.com\u002Famazon-science\u002Fchronos-forecasting\u002Fcompare\u002Fv1.4.1...v1.5.0","2025-02-06T15:38:36",{"id":223,"version":224,"summary_zh":225,"released_at":226},180557,"v1.4.1","## What's Changed\r\n* Fix padding for int contexts by @lostella in https:\u002F\u002Fgithub.com\u002Famazon-science\u002Fchronos-forecasting\u002Fpull\u002F227\r\n* Bump version number to 1.4.1 by @lostella in https:\u002F\u002Fgithub.com\u002Famazon-science\u002Fchronos-forecasting\u002Fpull\u002F228\r\n\r\n\r\n**Full Changelog**: https:\u002F\u002Fgithub.com\u002Famazon-science\u002Fchronos-forecasting\u002Fcompare\u002Fv1.4.0...v1.4.1","2024-12-04T17:38:40",{"id":228,"version":229,"summary_zh":230,"released_at":231},180558,"v1.4.0","## Key Changes\r\n- `predict` and `predict_quantiles` will return predictions on `cpu` in `float32`.\r\n\r\n## What's Changed\r\n* Remove reference to MPS by @abdulfatir in https:\u002F\u002Fgithub.com\u002Famazon-science\u002Fchronos-forecasting\u002Fpull\u002F216\r\n* Run type checks on Python 3.11 only by @lostella in https:\u002F\u002Fgithub.com\u002Famazon-science\u002Fchronos-forecasting\u002Fpull\u002F217\r\n* Clean up evaluation script by @abdulfatir in https:\u002F\u002Fgithub.com\u002Famazon-science\u002Fchronos-forecasting\u002Fpull\u002F218\r\n* Return predictions in fp32 on CPU by @abdulfatir in https:\u002F\u002Fgithub.com\u002Famazon-science\u002Fchronos-forecasting\u002Fpull\u002F219\r\n* Fix README example to use `predict_quantiles` by @abdulfatir in https:\u002F\u002Fgithub.com\u002Famazon-science\u002Fchronos-forecasting\u002Fpull\u002F220\r\n* Add workflow to run evaluation on a subset of datasets by @abdulfatir in https:\u002F\u002Fgithub.com\u002Famazon-science\u002Fchronos-forecasting\u002Fpull\u002F222\r\n* Fix auto eval workflow by @abdulfatir in https:\u002F\u002Fgithub.com\u002Famazon-science\u002Fchronos-forecasting\u002Fpull\u002F224\r\n* Use absolute link to images in the README by @abdulfatir in 
https:\u002F\u002Fgithub.com\u002Famazon-science\u002Fchronos-forecasting\u002Fpull\u002F223\r\n* Bump version to 1.4.0 by @abdulfatir in https:\u002F\u002Fgithub.com\u002Famazon-science\u002Fchronos-forecasting\u002Fpull\u002F225\r\n\r\n\r\n**Full Changelog**: https:\u002F\u002Fgithub.com\u002Famazon-science\u002Fchronos-forecasting\u002Fcompare\u002Fv1.3.0...v1.4.0","2024-12-02T11:09:46",{"id":233,"version":234,"summary_zh":235,"released_at":236},180559,"v1.3.0","## Highlight\r\n\r\n### Chronos-Bolt⚡: a 250x faster, more accurate Chronos model\r\n\r\nChronos-Bolt is our latest foundation model for forecasting. It is based on the T5 encoder-decoder architecture and has been trained on nearly 100 billion time series observations. It chunks the historical time series context into patches of multiple observations, which are then input into the encoder. The decoder then uses these representations to directly generate quantile forecasts across multiple future steps—_a method known as direct multi-step forecasting_. Chronos-Bolt models are up to 250 times faster and 20 times more memory-efficient than the original Chronos models of the same size.\r\n\r\nThe following plot compares the inference time of Chronos-Bolt against the original Chronos models for forecasting 1024 time series with a context length of 512 observations and a prediction horizon of 64 steps.\r\n\r\n\u003Ccenter>\r\n\u003Cimg src=\"https:\u002F\u002Fautogluon.s3.amazonaws.com\u002Fimages\u002Fchronos_bolt_speed.svg\" width=\"60%\"\u002F>\r\n\u003C\u002Fcenter>\r\n\r\nChronos-Bolt models are not only significantly faster but also more accurate than the original Chronos models. 
The following plot reports the probabilistic and point forecasting performance of Chronos-Bolt in terms of the [Weighted Quantile Loss (WQL)](https:\u002F\u002Fauto.gluon.ai\u002Fstable\u002Ftutorials\u002Ftimeseries\u002Fforecasting-metrics.html#autogluon.timeseries.metrics.WQL) and the [Mean Absolute Scaled Error (MASE)](https:\u002F\u002Fauto.gluon.ai\u002Fstable\u002Ftutorials\u002Ftimeseries\u002Fforecasting-metrics.html#autogluon.timeseries.metrics.MASE), respectively, aggregated over 27 datasets (see the [Chronos paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2403.07815) for details on this benchmark). Remarkably, despite having no prior exposure to these datasets during training, the zero-shot Chronos-Bolt models outperform commonly used statistical models and deep learning models that have been trained on these datasets (highlighted by *). Furthermore, they also perform better than other FMs, denoted by a +, which indicates that these models were pretrained on certain datasets in our benchmark and are not entirely zero-shot. Notably, Chronos-Bolt (Base) also surpasses the original Chronos (Large) model in terms of the forecasting accuracy while being over 600 times faster.\r\n\r\n\u003Ccenter>\r\n\u003Cimg src=\"https:\u002F\u002Fautogluon.s3.amazonaws.com\u002Fimages\u002Fchronos_bolt_accuracy.svg\" width=\"80%\"\u002F>\r\n\u003C\u002Fcenter>\r\n\r\nChronos-Bolt models are now available [on HuggingFace🤗](https:\u002F\u002Fhuggingface.co\u002Fcollections\u002Famazon\u002Fchronos-models-and-datasets-65f1791d630a8d57cb718444) in four sizes—Tiny (9M), Mini (21M), Small (48M), and Base (205M)—and can also be used on the CPU. Check out the example in the [README](https:\u002F\u002Fgithub.com\u002Famazon-science\u002Fchronos-forecasting) to learn how to use Chronos-Bolt models. 
You can use Chronos-Bolt models for forecasting in just a few lines of code.\r\n\r\n```py\r\nimport pandas as pd  # requires: pip install pandas\r\nimport torch\r\nfrom chronos import BaseChronosPipeline\r\n\r\npipeline = BaseChronosPipeline.from_pretrained(\r\n    \"amazon\u002Fchronos-bolt-base\", \r\n    device_map=\"cuda\",  # use \"cpu\" for CPU inference\r\n    torch_dtype=torch.bfloat16,\r\n)\r\n\r\ndf = pd.read_csv(\r\n    \"https:\u002F\u002Fraw.githubusercontent.com\u002FAileenNielsen\u002FTimeSeriesAnalysisWithPython\u002Fmaster\u002Fdata\u002FAirPassengers.csv\"\r\n)\r\n\r\n# context must be either a 1D tensor, a list of 1D tensors,\r\n# or a left-padded 2D tensor with batch as the first dimension\r\n# Chronos-Bolt models generate quantile forecasts, so forecast has shape\r\n# [num_series, num_quantiles, prediction_length].\r\nforecast = pipeline.predict(\r\n    context=torch.tensor(df[\"#Passengers\"]), prediction_length=12\r\n)\r\n```\r\n\r\n> [!NOTE]\r\n> We have also integrated Chronos-Bolt models into [AutoGluon](https:\u002F\u002Fauto.gluon.ai\u002Fstable\u002Ftutorials\u002Ftimeseries\u002Findex.html), which is a more feature-complete way of using Chronos models for production use cases. With the addition of Chronos-Bolt models and other enhancements, **AutoGluon v1.2 achieves a 70%+ win rate against AutoGluon v1.1**! In addition to the new Chronos-Bolt models, AutoGluon 1.2 also enables effortless fine-tuning of Chronos and Chronos-Bolt models. 
Check out the updated [Chronos AutoGluon tutorial](https://auto.gluon.ai/stable/tutorials/timeseries/forecasting-chronos.html) to learn how to use and fine-tune Chronos-Bolt models with AutoGluon.

## What's Changed
* Cap transformers <4.41 by @lostella in https://github.com/amazon-science/chronos-forecasting/pull/77
* Save training job info by @abdulfatir in https://github.com/amazon-science/chronos-forecasting/pull/80
* Relax torch and transformers versions by @abdulfatir in https://github.com/amazon-science/chronos-forecasting/pull/81
* Split `input_transform` into `context_input_transform` and `label_input_transform` by @abdulfatir in https://github.com/amazon-science/chronos-forecasting/pull/82
* Fix citation by @abdulfatir in https://github.com/amazon-science/chronos-forecasting/pull/86
* Enhance training script: auto tf32 detection and reorder default seed setting

---

# v1.2.0 (2024-05-17)

## What's Changed
* Remove Unnecessary F-strings by @pixeeai in https://github.com/amazon-science/chronos-forecasting/pull/34
* Fix types, add mypy to workflow by @lostella in https://github.com/amazon-science/chronos-forecasting/pull/42
* Speed up workflow by @lostella in https://github.com/amazon-science/chronos-forecasting/pull/43
* Simplify tokenizer creation by @lostella in https://github.com/amazon-science/chronos-forecasting/pull/44
* Update README.md by @abdulfatir in https://github.com/amazon-science/chronos-forecasting/pull/46
* Add CITATION.cff by @abdulfatir in https://github.com/amazon-science/chronos-forecasting/pull/48
* Revamp README: Add News, Coverage, Logo, Shields, Emojis, Zero-Shot results by @abdulfatir in https://github.com/amazon-science/chronos-forecasting/pull/56
* Add AGv1.1 announcement to README by @canerturkmen in https://github.com/amazon-science/chronos-forecasting/pull/58
* Add training script by @lostella in https://github.com/amazon-science/chronos-forecasting/pull/63
* Add KernelSynth script by @abdulfatir in https://github.com/amazon-science/chronos-forecasting/pull/64
* Add missing headers by @lostella in https://github.com/amazon-science/chronos-forecasting/pull/65
* Merge kernel-synth extra into training by @abdulfatir in https://github.com/amazon-science/chronos-forecasting/pull/66
* Add a README file for the scripts by @abdulfatir in https://github.com/amazon-science/chronos-forecasting/pull/67
* Update README examples by @lostella in https://github.com/amazon-science/chronos-forecasting/pull/68
* Add details on pushing model to huggingface hub by @abdulfatir in https://github.com/amazon-science/chronos-forecasting/pull/69
* Add one space after --config in training readme by @huibinshen in https://github.com/amazon-science/chronos-forecasting/pull/71
* Use logo with transparent background by @abdulfatir in https://github.com/amazon-science/chronos-forecasting/pull/72
* Fix output transform, add test to enforce tokenizer consistency by @HugoSenetaire in https://github.com/amazon-science/chronos-forecasting/pull/73
* Update README and bump version by @abdulfatir in https://github.com/amazon-science/chronos-forecasting/pull/74

## New Contributors
* @pixeeai made their first contribution in https://github.com/amazon-science/chronos-forecasting/pull/34
* @canerturkmen made their first contribution in https://github.com/amazon-science/chronos-forecasting/pull/58
* @huibinshen made their first contribution in https://github.com/amazon-science/chronos-forecasting/pull/71
* @HugoSenetaire made their first contribution in https://github.com/amazon-science/chronos-forecasting/pull/73

**Full Changelog**: https://github.com/amazon-science/chronos-forecasting/compare/v1.1.0...v1.2.0