[{"data":1,"prerenderedAt":-1},["ShallowReactive",2],{"similar-sktime--pytorch-forecasting":3,"tool-sktime--pytorch-forecasting":64},[4,17,27,35,43,56],{"id":5,"name":6,"github_repo":7,"description_zh":8,"stars":9,"difficulty_score":10,"last_commit_at":11,"category_tags":12,"status":16},3808,"stable-diffusion-webui","AUTOMATIC1111\u002Fstable-diffusion-webui","stable-diffusion-webui 是一个基于 Gradio 构建的网页版操作界面，旨在让用户能够轻松地在本地运行和使用强大的 Stable Diffusion 图像生成模型。它解决了原始模型依赖命令行、操作门槛高且功能分散的痛点，将复杂的 AI 绘图流程整合进一个直观易用的图形化平台。\n\n无论是希望快速上手的普通创作者、需要精细控制画面细节的设计师，还是想要深入探索模型潜力的开发者与研究人员，都能从中获益。其核心亮点在于极高的功能丰富度：不仅支持文生图、图生图、局部重绘（Inpainting）和外绘（Outpainting）等基础模式，还独创了注意力机制调整、提示词矩阵、负向提示词以及“高清修复”等高级功能。此外，它内置了 GFPGAN 和 CodeFormer 等人脸修复工具，支持多种神经网络放大算法，并允许用户通过插件系统无限扩展能力。即使是显存有限的设备，stable-diffusion-webui 也提供了相应的优化选项，让高质量的 AI 艺术创作变得触手可及。",162132,3,"2026-04-05T11:01:52",[13,14,15],"开发框架","图像","Agent","ready",{"id":18,"name":19,"github_repo":20,"description_zh":21,"stars":22,"difficulty_score":23,"last_commit_at":24,"category_tags":25,"status":16},1381,"everything-claude-code","affaan-m\u002Feverything-claude-code","everything-claude-code 是一套专为 AI 编程助手（如 Claude Code、Codex、Cursor 等）打造的高性能优化系统。它不仅仅是一组配置文件，而是一个经过长期实战打磨的完整框架，旨在解决 AI 代理在实际开发中面临的效率低下、记忆丢失、安全隐患及缺乏持续学习能力等核心痛点。\n\n通过引入技能模块化、直觉增强、记忆持久化机制以及内置的安全扫描功能，everything-claude-code 能显著提升 AI 在复杂任务中的表现，帮助开发者构建更稳定、更智能的生产级 AI 代理。其独特的“研究优先”开发理念和针对 Token 消耗的优化策略，使得模型响应更快、成本更低，同时有效防御潜在的攻击向量。\n\n这套工具特别适合软件开发者、AI 研究人员以及希望深度定制 AI 工作流的技术团队使用。无论您是在构建大型代码库，还是需要 AI 协助进行安全审计与自动化测试，everything-claude-code 都能提供强大的底层支持。作为一个曾荣获 Anthropic 黑客大奖的开源项目，它融合了多语言支持与丰富的实战钩子（hooks），让 AI 真正成长为懂上",140436,2,"2026-04-05T23:32:43",[13,15,26],"语言模型",{"id":28,"name":29,"github_repo":30,"description_zh":31,"stars":32,"difficulty_score":23,"last_commit_at":33,"category_tags":34,"status":16},2271,"ComfyUI","Comfy-Org\u002FComfyUI","ComfyUI 是一款功能强大且高度模块化的视觉 AI 引擎，专为设计和执行复杂的 Stable Diffusion 图像生成流程而打造。它摒弃了传统的代码编写模式，采用直观的节点式流程图界面，让用户通过连接不同的功能模块即可构建个性化的生成管线。\n\n这一设计巧妙解决了高级 AI 
绘图工作流配置复杂、灵活性不足的痛点。用户无需具备编程背景，也能自由组合模型、调整参数并实时预览效果，轻松实现从基础文生图到多步骤高清修复等各类复杂任务。ComfyUI 拥有极佳的兼容性，不仅支持 Windows、macOS 和 Linux 全平台，还广泛适配 NVIDIA、AMD、Intel 及苹果 Silicon 等多种硬件架构，并率先支持 SDXL、Flux、SD3 等前沿模型。\n\n无论是希望深入探索算法潜力的研究人员和开发者，还是追求极致创作自由度的设计师与资深 AI 绘画爱好者，ComfyUI 都能提供强大的支持。其独特的模块化架构允许社区不断扩展新功能，使其成为当前最灵活、生态最丰富的开源扩散模型工具之一，帮助用户将创意高效转化为现实。",107662,"2026-04-03T11:11:01",[13,14,15],{"id":36,"name":37,"github_repo":38,"description_zh":39,"stars":40,"difficulty_score":23,"last_commit_at":41,"category_tags":42,"status":16},3704,"NextChat","ChatGPTNextWeb\u002FNextChat","NextChat 是一款轻量且极速的 AI 助手，旨在为用户提供流畅、跨平台的大模型交互体验。它完美解决了用户在多设备间切换时难以保持对话连续性，以及面对众多 AI 模型不知如何统一管理的痛点。无论是日常办公、学习辅助还是创意激发，NextChat 都能让用户随时随地通过网页、iOS、Android、Windows、MacOS 或 Linux 端无缝接入智能服务。\n\n这款工具非常适合普通用户、学生、职场人士以及需要私有化部署的企业团队使用。对于开发者而言，它也提供了便捷的自托管方案，支持一键部署到 Vercel 或 Zeabur 等平台。\n\nNextChat 的核心亮点在于其广泛的模型兼容性，原生支持 Claude、DeepSeek、GPT-4 及 Gemini Pro 等主流大模型，让用户在一个界面即可自由切换不同 AI 能力。此外，它还率先支持 MCP（Model Context Protocol）协议，增强了上下文处理能力。针对企业用户，NextChat 提供专业版解决方案，具备品牌定制、细粒度权限控制、内部知识库整合及安全审计等功能，满足公司对数据隐私和个性化管理的高标准要求。",87618,"2026-04-05T07:20:52",[13,26],{"id":44,"name":45,"github_repo":46,"description_zh":47,"stars":48,"difficulty_score":23,"last_commit_at":49,"category_tags":50,"status":16},2268,"ML-For-Beginners","microsoft\u002FML-For-Beginners","ML-For-Beginners 是由微软推出的一套系统化机器学习入门课程，旨在帮助零基础用户轻松掌握经典机器学习知识。这套课程将学习路径规划为 12 周，包含 26 节精炼课程和 52 道配套测验，内容涵盖从基础概念到实际应用的完整流程，有效解决了初学者面对庞大知识体系时无从下手、缺乏结构化指导的痛点。\n\n无论是希望转型的开发者、需要补充算法背景的研究人员，还是对人工智能充满好奇的普通爱好者，都能从中受益。课程不仅提供了清晰的理论讲解，还强调动手实践，让用户在循序渐进中建立扎实的技能基础。其独特的亮点在于强大的多语言支持，通过自动化机制提供了包括简体中文在内的 50 多种语言版本，极大地降低了全球不同背景用户的学习门槛。此外，项目采用开源协作模式，社区活跃且内容持续更新，确保学习者能获取前沿且准确的技术资讯。如果你正寻找一条清晰、友好且专业的机器学习入门之路，ML-For-Beginners 将是理想的起点。",84991,"2026-04-05T10:45:23",[14,51,52,53,15,54,26,13,55],"数据工具","视频","插件","其他","音频",{"id":57,"name":58,"github_repo":59,"description_zh":60,"stars":61,"difficulty_score":10,"last_commit_at":62,"category_tags":63,"status":16},3128,"ragflow","infiniflow\u002Fragflow","RAGFlow 
是一款领先的开源检索增强生成（RAG）引擎，旨在为大语言模型构建更精准、可靠的上下文层。它巧妙地将前沿的 RAG 技术与智能体（Agent）能力相结合，不仅支持从各类文档中高效提取知识，还能让模型基于这些知识进行逻辑推理和任务执行。\n\n在大模型应用中，幻觉问题和知识滞后是常见痛点。RAGFlow 通过深度解析复杂文档结构（如表格、图表及混合排版），显著提升了信息检索的准确度，从而有效减少模型“胡编乱造”的现象，确保回答既有据可依又具备时效性。其内置的智能体机制更进一步，使系统不仅能回答问题，还能自主规划步骤解决复杂问题。\n\n这款工具特别适合开发者、企业技术团队以及 AI 研究人员使用。无论是希望快速搭建私有知识库问答系统，还是致力于探索大模型在垂直领域落地的创新者，都能从中受益。RAGFlow 提供了可视化的工作流编排界面和灵活的 API 接口，既降低了非算法背景用户的上手门槛，也满足了专业开发者对系统深度定制的需求。作为基于 Apache 2.0 协议开源的项目，它正成为连接通用大模型与行业专有知识之间的重要桥梁。",77062,"2026-04-04T04:44:48",[15,14,13,26,54],{"id":65,"github_repo":66,"name":67,"description_en":68,"description_zh":69,"ai_summary_zh":70,"readme_en":71,"readme_zh":72,"quickstart_zh":73,"use_case_zh":74,"hero_image_url":75,"owner_login":76,"owner_name":76,"owner_avatar_url":77,"owner_bio":78,"owner_company":79,"owner_location":79,"owner_email":80,"owner_twitter":79,"owner_website":81,"owner_url":82,"languages":83,"stars":92,"forks":93,"last_commit_at":94,"license":95,"difficulty_score":23,"env_os":96,"env_gpu":97,"env_ram":98,"env_deps":99,"category_tags":107,"github_topics":108,"view_count":23,"oss_zip_url":79,"oss_zip_packed_at":79,"status":16,"created_at":124,"updated_at":125,"faqs":126,"releases":155},3304,"sktime\u002Fpytorch-forecasting","pytorch-forecasting","Time series forecasting with PyTorch","pytorch-forecasting 是一个基于 PyTorch 构建的开源库，专为利用前沿深度学习架构进行时间序列预测而设计。它致力于解决传统方法在处理复杂现实场景数据时的痛点，通过高度抽象的数据集类，自动处理变量转换、缺失值填补及多历史长度采样等繁琐预处理工作，让开发者能更专注于模型本身。\n\n该工具特别适合数据科学家、机器学习工程师及研究人员使用。无论是希望快速上手的新手，还是需要高度定制化方案的专业人士，pytorch-forecasting 都能提供兼顾灵活性与易用性的高层 API。其内置了多种针对实际部署优化的神经网络架构（如 TFT、N-BEATS 等），并原生支持模型可解释性分析，帮助用户不仅知道“预测结果是什么”，还能理解“为什么这样预测”。\n\n技术亮点方面，pytorch-forecasting 深度集成了 PyTorch Lightning，实现了在 GPU 或 CPU 上的高效规模化训练与自动日志记录；同时提供多步长预测指标评估，并支持与 Optuna 结合进行自动化超参数调优。凭借这些特性，它成为了连接学术研究与工业级应用的有力桥梁，让高精度时间序列预测变得更加触手可及。","pytorch-forecasting 是一个基于 PyTorch 
构建的开源库，专为利用前沿深度学习架构进行时间序列预测而设计。它致力于解决传统方法在处理复杂现实场景数据时的痛点，通过高度抽象的数据集类，自动处理变量转换、缺失值填补及多历史长度采样等繁琐预处理工作，让开发者能更专注于模型本身。\n\n该工具特别适合数据科学家、机器学习工程师及研究人员使用。无论是希望快速上手的新手，还是需要高度定制化方案的专业人士，pytorch-forecasting 都能提供兼顾灵活性与易用性的高层 API。其内置了多种针对实际部署优化的神经网络架构（如 TFT、N-BEATS 等），并原生支持模型可解释性分析，帮助用户不仅知道“预测结果是什么”，还能理解“为什么这样预测”。\n\n技术亮点方面，pytorch-forecasting 深度集成了 PyTorch Lightning，实现了在 GPU 或 CPU 上的高效规模化训练与自动日志记录；同时提供多步长预测指标评估，并支持与 Optuna 结合进行自动化超参数调优。凭借这些特性，它成为了连接学术研究与工业级应用的有力桥梁，让高精度时间序列预测变得更加触手可及。","![PyTorch Forecasting](.\u002Fdocs\u002Fsource\u002F_static\u002Flogo.svg)\n\n_PyTorch Forecasting_ is a PyTorch-based package for forecasting with state-of-the-art deep learning architectures. It provides a high-level API and uses [PyTorch Lightning](https:\u002F\u002Fpytorch-lightning.readthedocs.io\u002F) to scale training on GPU or CPU, with automatic logging.\n\n\n|  | **[Documentation](https:\u002F\u002Fpytorch-forecasting.readthedocs.io)** · **[Tutorials](https:\u002F\u002Fpytorch-forecasting.readthedocs.io\u002Fen\u002Flatest\u002Ftutorials.html)** · **[Release Notes](https:\u002F\u002Fpytorch-forecasting.readthedocs.io\u002Fen\u002Flatest\u002FCHANGELOG.html)** |\n|---|---|\n| **Open&#160;Source** | [![MIT](https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Flicense\u002Fsktime\u002Fpytorch-forecasting)](https:\u002F\u002Fgithub.com\u002Fsktime\u002Fpytorch-forecasting\u002Fblob\u002Fmaster\u002FLICENSE) [![GC.OS Sponsored](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FGC.OS-Sponsored%20Project-orange.svg?style=flat&colorA=0eac92&colorB=2077b4)](https:\u002F\u002Fgc-os-ai.github.io\u002F) | |\n| **Community** | [![!discord](https:\u002F\u002Fimg.shields.io\u002Fstatic\u002Fv1?logo=discord&label=discord&message=chat&color=lightgreen)](https:\u002F\u002Fdiscord.com\u002Finvite\u002F54ACzaFsn7) [![!slack](https:\u002F\u002Fimg.shields.io\u002Fstatic\u002Fv1?logo=linkedin&label=LinkedIn&message=news&color=lightblue)](https:\u002F\u002Fwww.linkedin.com\u002Fcompany\u002Fscikit-time\u002F) 
|\n| **CI\u002FCD** | [![github-actions](https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Factions\u002Fworkflow\u002Fstatus\u002Fsktime\u002Fpytorch-forecasting\u002Fpypi_release.yml?logo=github)](https:\u002F\u002Fgithub.com\u002Fsktime\u002Fpytorch-forecasting\u002Factions\u002Fworkflows\u002Fpypi_release.yml) [![readthedocs](https:\u002F\u002Fimg.shields.io\u002Freadthedocs\u002Fpytorch-forecasting?logo=readthedocs)](https:\u002F\u002Fpytorch-forecasting.readthedocs.io) [![platform](https:\u002F\u002Fimg.shields.io\u002Fconda\u002Fpn\u002Fconda-forge\u002Fpytorch-forecasting)](https:\u002F\u002Fgithub.com\u002Fsktime\u002Fpytorch-forecasting) [![Code Coverage][coverage-image]][coverage-url] |\n| **Code** | [![!pypi](https:\u002F\u002Fimg.shields.io\u002Fpypi\u002Fv\u002Fpytorch-forecasting?color=orange)](https:\u002F\u002Fpypi.org\u002Fproject\u002Fpytorch-forecasting\u002F) [![!conda](https:\u002F\u002Fimg.shields.io\u002Fconda\u002Fvn\u002Fconda-forge\u002Fpytorch-forecasting)](https:\u002F\u002Fanaconda.org\u002Fconda-forge\u002Fpytorch-forecasting) [![!python-versions](https:\u002F\u002Fimg.shields.io\u002Fpypi\u002Fpyversions\u002Fpytorch-forecasting)](https:\u002F\u002Fwww.python.org\u002F) [![!black](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002Fcode%20style-black-000000.svg)](https:\u002F\u002Fgithub.com\u002Fpsf\u002Fblack)  |\n| **Downloads** | ![PyPI - Downloads](https:\u002F\u002Fimg.shields.io\u002Fpypi\u002Fdw\u002Fpytorch-forecasting) ![PyPI - Downloads](https:\u002F\u002Fimg.shields.io\u002Fpypi\u002Fdm\u002Fpytorch-forecasting) [![Downloads](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fsktime_pytorch-forecasting_readme_dd8fdf1c860f.png)](https:\u002F\u002Fpepy.tech\u002Fproject\u002Fpytorch-forecasting) |\n\n[coverage-image]: https:\u002F\u002Fcodecov.io\u002Fgh\u002Fsktime\u002Fpytorch-forecasting\u002Fbranch\u002Fmain\u002Fgraph\u002Fbadge.svg\n[coverage-url]: 
https:\u002F\u002Fcodecov.io\u002Fgithub\u002Fsktime\u002Fpytorch-forecasting?branch=main\n\n---\n\nOur article on [Towards Data Science](https:\u002F\u002Ftowardsdatascience.com\u002Fintroducing-pytorch-forecasting-64de99b9ef46) introduces the package and provides background information.\n\nPyTorch Forecasting aims to ease state-of-the-art timeseries forecasting with neural networks for real-world cases and research alike. The goal is to provide a high-level API with maximum flexibility for professionals and reasonable defaults for beginners.\nSpecifically, the package provides\n\n- A timeseries dataset class which abstracts handling variable transformations, missing values,\n  randomized subsampling, multiple history lengths, etc.\n- A base model class which provides basic training of timeseries models along with logging in TensorBoard\n  and generic visualizations such as actual vs predictions and dependency plots\n- Multiple neural network architectures for timeseries forecasting that have been enhanced\n  for real-world deployment and come with in-built interpretation capabilities\n- Multi-horizon timeseries metrics\n- Hyperparameter tuning with [optuna](https:\u002F\u002Foptuna.readthedocs.io\u002F)\n\nThe package is built on [pytorch-lightning](https:\u002F\u002Fpytorch-lightning.readthedocs.io\u002F) to allow training on CPUs, single and multiple GPUs out-of-the-box.\n\n# Installation\n\nIf you are working on Windows, you need to first install PyTorch with\n\n`pip install torch -f https:\u002F\u002Fdownload.pytorch.org\u002Fwhl\u002Ftorch_stable.html`.\n\nOtherwise, you can proceed with\n\n`pip install pytorch-forecasting`\n\nAlternatively, you can install the package via conda\n\n`conda install pytorch-forecasting pytorch -c pytorch>=1.7 -c conda-forge`\n\nPyTorch Forecasting is now installed from the conda-forge channel while PyTorch is installed from the pytorch channel.\n\nTo use the MQF2 loss (multivariate quantile loss), also install\n`pip install 
pytorch-forecasting[mqf2]`\n\n# Documentation\n\nVisit [https:\u002F\u002Fpytorch-forecasting.readthedocs.io](https:\u002F\u002Fpytorch-forecasting.readthedocs.io) to read the\ndocumentation with detailed tutorials.\n\n# Available models\n\nThe documentation provides a [comparison of available models](https:\u002F\u002Fpytorch-forecasting.readthedocs.io\u002Fen\u002Flatest\u002Fmodels.html).\n\n- [Temporal Fusion Transformers for Interpretable Multi-horizon Time Series Forecasting](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1912.09363.pdf)\n  which outperforms DeepAR by Amazon by 36-69% in benchmarks\n- [N-BEATS: Neural basis expansion analysis for interpretable time series forecasting](http:\u002F\u002Farxiv.org\u002Fabs\u002F1905.10437)\n  which has (if used as ensemble) outperformed all other methods including ensembles of traditional statistical\n  methods in the M4 competition. The M4 competition is arguably the most important benchmark for univariate time series forecasting.\n- [N-HiTS: Neural Hierarchical Interpolation for Time Series Forecasting](http:\u002F\u002Farxiv.org\u002Fabs\u002F2201.12886) which supports covariates and has consistently beaten N-BEATS. It is also particularly well-suited for long-horizon forecasting.\n- [DeepAR: Probabilistic forecasting with autoregressive recurrent networks](https:\u002F\u002Fwww.sciencedirect.com\u002Fscience\u002Farticle\u002Fpii\u002FS0169207019301888)\n  which is one of the most popular forecasting algorithms and is often used as a baseline\n- Simple standard networks for baselining: LSTM and GRU networks as well as an MLP on the decoder\n- A baseline model that always predicts the latest known value\n\nTo implement new models or other custom components, see the [How to implement new models tutorial](https:\u002F\u002Fpytorch-forecasting.readthedocs.io\u002Fen\u002Flatest\u002Ftutorials\u002Fbuilding.html). 
It covers basic as well as advanced architectures.\n\n# Usage example\n\nNetworks can be trained with the [PyTorch Lightning Trainer](https:\u002F\u002Fpytorch-lightning.readthedocs.io\u002Fen\u002Flatest\u002Fcommon\u002Ftrainer.html) on [pandas Dataframes](https:\u002F\u002Fpandas.pydata.org\u002Fpandas-docs\u002Fstable\u002Fuser_guide\u002Fdsintro.html#dataframe) which are first converted to a [TimeSeriesDataSet](https:\u002F\u002Fpytorch-forecasting.readthedocs.io\u002Fen\u002Flatest\u002Fdata.html).\n\n```python\n# imports for training\nimport lightning.pytorch as pl\nfrom lightning.pytorch.loggers import TensorBoardLogger\nfrom lightning.pytorch.callbacks import EarlyStopping, LearningRateMonitor\n# import dataset, network to train and metric to optimize\nfrom pytorch_forecasting import TimeSeriesDataSet, TemporalFusionTransformer, QuantileLoss\nfrom lightning.pytorch.tuner import Tuner\n\n# load data: this is pandas dataframe with at least a column for\n# * the target (what you want to predict)\n# * the timeseries ID (which should be a unique string to identify each timeseries)\n# * the time of the observation (which should be a monotonically increasing integer)\ndata = ...\n\n# define the dataset, i.e. add metadata to pandas dataframe for the model to understand it\nmax_encoder_length = 36\nmax_prediction_length = 6\ntraining_cutoff = \"YYYY-MM-DD\"  # day for cutoff\n\ntraining = TimeSeriesDataSet(\n    data[lambda x: x.date \u003C= training_cutoff],\n    time_idx= ...,  # column name of time of observation\n    target= ...,  # column name of target to predict\n    group_ids=[ ... ],  # column name(s) for timeseries IDs\n    max_encoder_length=max_encoder_length,  # how much history to use\n    max_prediction_length=max_prediction_length,  # how far to predict into future\n    # covariates static for a timeseries ID\n    static_categoricals=[ ... ],\n    static_reals=[ ... 
],\n    # covariates known and unknown in the future to inform prediction\n    time_varying_known_categoricals=[ ... ],\n    time_varying_known_reals=[ ... ],\n    time_varying_unknown_categoricals=[ ... ],\n    time_varying_unknown_reals=[ ... ],\n)\n\n# create validation dataset using the same normalization techniques as for the training dataset\nvalidation = TimeSeriesDataSet.from_dataset(training, data, min_prediction_idx=training.index.time.max() + 1, stop_randomization=True)\n\n# convert datasets to dataloaders for training\nbatch_size = 128\ntrain_dataloader = training.to_dataloader(train=True, batch_size=batch_size, num_workers=2)\nval_dataloader = validation.to_dataloader(train=False, batch_size=batch_size, num_workers=2)\n\n# create PyTorch Lightning Trainer with early stopping\nearly_stop_callback = EarlyStopping(monitor=\"val_loss\", min_delta=1e-4, patience=1, verbose=False, mode=\"min\")\nlr_logger = LearningRateMonitor()\ntrainer = pl.Trainer(\n    max_epochs=100,\n    accelerator=\"auto\",  # run on CPU, if on multiple GPUs, use strategy=\"ddp\"\n    gradient_clip_val=0.1,\n    limit_train_batches=30,  # 30 batches per epoch\n    callbacks=[lr_logger, early_stop_callback],\n    logger=TensorBoardLogger(\"lightning_logs\")\n)\n\n# define network to train - the architecture is mostly inferred from the dataset, so that only a few hyperparameters have to be set by the user\ntft = TemporalFusionTransformer.from_dataset(\n    # dataset\n    training,\n    # architecture hyperparameters\n    hidden_size=32,\n    attention_head_size=1,\n    dropout=0.1,\n    hidden_continuous_size=16,\n    # loss metric to optimize\n    loss=QuantileLoss(),\n    # logging frequency\n    log_interval=2,\n    # optimizer parameters\n    learning_rate=0.03,\n    reduce_on_plateau_patience=4\n)\nprint(f\"Number of parameters in network: {tft.size()\u002F1e3:.1f}k\")\n\n# find the optimal learning rate\nres = Tuner(trainer).lr_find(\n    tft, train_dataloaders=train_dataloader, 
val_dataloaders=val_dataloader, early_stop_threshold=1000.0, max_lr=0.3,\n)\n# and plot the result - always visually confirm that the suggested learning rate makes sense\nprint(f\"suggested learning rate: {res.suggestion()}\")\nfig = res.plot(show=True, suggest=True)\nfig.show()\n\n# fit the model on the data - redefine the model with the correct learning rate if necessary\ntrainer.fit(\n    tft, train_dataloaders=train_dataloader, val_dataloaders=val_dataloader,\n)\n```\n","![PyTorch Forecasting](.\u002Fdocs\u002Fsource\u002F_static\u002Flogo.svg)\n\n_PyTorch Forecasting_ 是一个基于 PyTorch 的包，用于使用最先进的深度学习架构进行预测。它提供了一个高级 API，并利用 [PyTorch Lightning](https:\u002F\u002Fpytorch-lightning.readthedocs.io\u002F) 在 GPU 或 CPU 上扩展训练，同时支持自动日志记录。\n\n\n|  | **[文档](https:\u002F\u002Fpytorch-forecasting.readthedocs.io)** · **[教程](https:\u002F\u002Fpytorch-forecasting.readthedocs.io\u002Fen\u002Flatest\u002Ftutorials.html)** · **[发布说明](https:\u002F\u002Fpytorch-forecasting.readthedocs.io\u002Fen\u002Flatest\u002FCHANGELOG.html)** |\n|---|---|\n| **开源** | [![MIT](https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Flicense\u002Fsktime\u002Fpytorch-forecasting)](https:\u002F\u002Fgithub.com\u002Fsktime\u002Fpytorch-forecasting\u002Fblob\u002Fmaster\u002FLICENSE) [![GC.OS 赞助](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FGC.OS-赞助项目-orange.svg?style=flat&colorA=0eac92&colorB=2077b4)](https:\u002F\u002Fgc-os-ai.github.io\u002F) | |\n| **社区** | [![!discord](https:\u002F\u002Fimg.shields.io\u002Fstatic\u002Fv1?logo=discord&label=discord&message=chat&color=lightgreen)](https:\u002F\u002Fdiscord.com\u002Finvite\u002F54ACzaFsn7) [![!slack](https:\u002F\u002Fimg.shields.io\u002Fstatic\u002Fv1?logo=linkedin&label=LinkedIn&message=news&color=lightblue)](https:\u002F\u002Fwww.linkedin.com\u002Fcompany\u002Fscikit-time\u002F) |\n| **CI\u002FCD** | 
[![github-actions](https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Factions\u002Fworkflow\u002Fstatus\u002Fsktime\u002Fpytorch-forecasting\u002Fpypi_release.yml?logo=github)](https:\u002F\u002Fgithub.com\u002Fsktime\u002Fpytorch-forecasting\u002Factions\u002Fworkflows\u002Fpypi_release.yml) [![readthedocs](https:\u002F\u002Fimg.shields.io\u002Freadthedocs\u002Fpytorch-forecasting?logo=readthedocs)](https:\u002F\u002Fpytorch-forecasting.readthedocs.io) [![platform](https:\u002F\u002Fimg.shields.io\u002Fconda\u002Fpn\u002Fconda-forge\u002Fpytorch-forecasting)](https:\u002F\u002Fgithub.com\u002Fsktime\u002Fpytorch-forecasting) [![代码覆盖率][coverage-image]][coverage-url] |\n| **代码** | [![!pypi](https:\u002F\u002Fimg.shields.io\u002Fpypi\u002Fv\u002Fpytorch-forecasting?color=orange)](https:\u002F\u002Fpypi.org\u002Fproject\u002Fpytorch-forecasting\u002F) [![!conda](https:\u002F\u002Fimg.shields.io\u002Fconda\u002Fvn\u002Fconda-forge\u002Fpytorch-forecasting)](https:\u002F\u002Fanaconda.org\u002Fconda-forge\u002Fpytorch-forecasting) [![!python-versions](https:\u002F\u002Fimg.shields.io\u002Fpypi\u002Fpyversions\u002Fpytorch-forecasting)](https:\u002F\u002Fwww.python.org\u002F) [![!black](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002Fcode%20style-black-000000.svg)](https:\u002F\u002Fgithub.com\u002Fpsf\u002Fblack)  |\n| **下载量** | ![PyPI - 下载量](https:\u002F\u002Fimg.shields.io\u002Fpypi\u002Fdw\u002Fpytorch-forecasting) ![PyPI - 下载量](https:\u002F\u002Fimg.shields.io\u002Fpypi\u002Fdm\u002Fpytorch-forecasting) [![下载量](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fsktime_pytorch-forecasting_readme_dd8fdf1c860f.png)](https:\u002F\u002Fpepy.tech\u002Fproject\u002Fpytorch-forecasting) |\n\n[coverage-image]: https:\u002F\u002Fcodecov.io\u002Fgh\u002Fsktime\u002Fpytorch-forecasting\u002Fbranch\u002Fmain\u002Fgraph\u002Fbadge.svg\n[coverage-url]: https:\u002F\u002Fcodecov.io\u002Fgithub\u002Fsktime\u002Fpytorch-forecasting?branch=main\n\n---\n\n我们在 [Towards Data 
Science](https:\u002F\u002Ftowardsdatascience.com\u002Fintroducing-pytorch-forecasting-64de99b9ef46) 上发表的文章介绍了该包并提供了背景信息。\n\nPyTorch Forecasting 旨在为实际应用和研究领域简化使用神经网络进行的最先进的时间序列预测。其目标是为专业人士提供具有最大灵活性的高级 API，并为初学者提供合理的默认设置。\n具体来说，该包提供了：\n\n- 一个时间序列数据集类，抽象了变量转换、缺失值处理、随机子采样、多种历史长度等操作。\n- 一个基础模型类，提供时间序列模型的基本训练功能，以及 TensorBoard 日志记录和通用可视化功能，例如实际值与预测值对比图和依赖性图。\n- 多种针对时间序列预测优化的神经网络架构，这些架构经过改进以适应实际部署，并内置了解释能力。\n- 多步长时间序列指标。\n- 使用 [optuna](https:\u002F\u002Foptuna.readthedocs.io\u002F) 进行超参数调优。\n\n该包基于 [pytorch-lightning](https:\u002F\u002Fpytorch-lightning.readthedocs.io\u002F) 构建，因此可以开箱即用地在 CPU、单个或多个 GPU 上进行训练。\n\n# 安装\n\n如果您使用的是 Windows 系统，需要先安装 PyTorch：\n\n`pip install torch -f https:\u002F\u002Fdownload.pytorch.org\u002Fwhl\u002Ftorch_stable.html`。\n\n否则，您可以直接运行：\n\n`pip install pytorch-forecasting`。\n\n或者，您也可以通过 conda 安装：\n\n`conda install pytorch-forecasting pytorch -c pytorch>=1.7 -c conda-forge`。\n\n这样，PyTorch Forecasting 将从 conda-forge 频道安装，而 PyTorch 则从 pytorch 频道安装。\n\n如果需要使用 MQF2 损失函数（多变量分位数损失），还需额外安装：\n\n`pip install pytorch-forecasting[mqf2]`。\n\n# 文档\n\n请访问 [https:\u002F\u002Fpytorch-forecasting.readthedocs.io](https:\u002F\u002Fpytorch-forecasting.readthedocs.io)，阅读包含详细教程的文档。\n\n# 可用模型\n\n文档中提供了 [可用模型的比较](https:\u002F\u002Fpytorch-forecasting.readthedocs.io\u002Fen\u002Flatest\u002Fmodels.html)。\n\n- [用于可解释多步长时间序列预测的 Temporal Fusion Transformers](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1912.09363.pdf)，在基准测试中比亚马逊的 DeepAR 性能高出 36% 至 69%。\n- [N-BEATS：用于可解释时间序列预测的神经基底展开分析](http:\u002F\u002Farxiv.org\u002Fabs\u002F1905.10437)，如果作为集成模型使用，已在 M4 竞赛中超越了所有其他方法，包括传统统计方法的集成。M4 竞赛被认为是单变量时间序列预测领域最重要的基准测试之一。\n- [N-HiTS：用于时间序列预测的神经层次插值](http:\u002F\u002Farxiv.org\u002Fabs\u002F2201.12886)，支持协变量，并且一直优于 N-BEATS。它尤其适合长期预测。\n- [DeepAR：基于自回归循环网络的概率预测](https:\u002F\u002Fwww.sciencedirect.com\u002Fscience\u002Farticle\u002Fpii\u002FS0169207019301888)，是最受欢迎的预测算法之一，通常被用作基准。\n- 用于基准测试的简单标准网络：LSTM 和 GRU 网络，以及解码器上的 MLP。\n- 一个始终预测最新已知值的基准模型。\n\n要实现新模型或其他自定义组件，请参阅 
[如何实现新模型的教程](https:\u002F\u002Fpytorch-forecasting.readthedocs.io\u002Fen\u002Flatest\u002Ftutorials\u002Fbuilding.html)。该教程涵盖了基础和高级架构。\n\n# 使用示例\n\n网络可以使用 [PyTorch Lightning Trainer](https:\u002F\u002Fpytorch-lightning.readthedocs.io\u002Fen\u002Flatest\u002Fcommon\u002Ftrainer.html) 在 [pandas Dataframes](https:\u002F\u002Fpandas.pydata.org\u002Fpandas-docs\u002Fstable\u002Fuser_guide\u002Fdsintro.html#dataframe) 上进行训练，这些数据框首先会被转换为 [TimeSeriesDataSet](https:\u002F\u002Fpytorch-forecasting.readthedocs.io\u002Fen\u002Flatest\u002Fdata.html)。\n\n```python\n# 训练所需的导入\nimport lightning.pytorch as pl\nfrom lightning.pytorch.loggers import TensorBoardLogger\nfrom lightning.pytorch.callbacks import EarlyStopping, LearningRateMonitor\n\n# 导入数据集、要训练的网络以及优化的目标函数\nfrom pytorch_forecasting import TimeSeriesDataSet, TemporalFusionTransformer, QuantileLoss\nfrom lightning.pytorch.tuner import Tuner\n\n# 加载数据：这是一个 Pandas 数据框，至少包含以下几列：\n# * 目标变量（即你要预测的内容）\n# * 时间序列 ID（应为唯一字符串，用于标识每个时间序列）\n# * 观测时间（应为单调递增的整数）\ndata = ...\n\n# 定义数据集，即为 Pandas 数据框添加元数据，以便模型能够理解数据\nmax_encoder_length = 36\nmax_prediction_length = 6\ntraining_cutoff = \"YYYY-MM-DD\"  # 截止日期\n\ntraining = TimeSeriesDataSet(\n    data[lambda x: x.date \u003C= training_cutoff],\n    time_idx= ...,  # 观测时间的列名\n    target= ...,  # 预测目标的列名\n    group_ids=[ ... ],  # 时间序列 ID 的列名\n    max_encoder_length=max_encoder_length,  # 使用的历史长度\n    max_prediction_length=max_prediction_length,  # 预测未来的时间长度\n    # 针对每个时间序列 ID 的静态协变量\n    static_categoricals=[ ... ],\n    static_reals=[ ... ],\n    # 已知和未知的时变协变量，用于辅助预测\n    time_varying_known_categoricals=[ ... ],\n    time_varying_known_reals=[ ... ],\n    time_varying_unknown_categoricals=[ ... ],\n    time_varying_unknown_reals=[ ... 
],\n)\n\n# 使用与训练集相同的归一化方法创建验证集\nvalidation = TimeSeriesDataSet.from_dataset(training, data, min_prediction_idx=training.index.time.max() + 1, stop_randomization=True)\n\n# 将数据集转换为用于训练的数据加载器\nbatch_size = 128\ntrain_dataloader = training.to_dataloader(train=True, batch_size=batch_size, num_workers=2)\nval_dataloader = validation.to_dataloader(train=False, batch_size=batch_size, num_workers=2)\n\n# 创建带有早停机制的 PyTorch Lightning Trainer\nearly_stop_callback = EarlyStopping(monitor=\"val_loss\", min_delta=1e-4, patience=1, verbose=False, mode=\"min\")\nlr_logger = LearningRateMonitor()\ntrainer = pl.Trainer(\n    max_epochs=100,\n    accelerator=\"auto\",  # 如果有多块 GPU，则使用 strategy=\"ddp\"；否则在 CPU 上运行\n    gradient_clip_val=0.1,\n    limit_train_batches=30,  # 每个 epoch 训练 30 个批次\n    callbacks=[lr_logger, early_stop_callback],\n    logger=TensorBoardLogger(\"lightning_logs\")\n)\n\n# 定义要训练的网络——其架构主要由数据集推断得出，因此用户只需设置少数几个超参数\ntft = TemporalFusionTransformer.from_dataset(\n    # 数据集\n    training,\n    # 架构超参数\n    hidden_size=32,\n    attention_head_size=1,\n    dropout=0.1,\n    hidden_continuous_size=16,\n    # 优化的损失函数\n    loss=QuantileLoss(),\n    # 日志记录频率\n    log_interval=2,\n    # 优化器参数\n    learning_rate=0.03,\n    reduce_on_plateau_patience=4\n)\nprint(f\"网络中的参数数量：{tft.size()\u002F1e3:.1f}k\")\n\n# 寻找最优学习率\nres = Tuner(trainer).lr_find(\n    tft, train_dataloaders=train_dataloader, val_dataloaders=val_dataloader, early_stop_threshold=1000.0, max_lr=0.3,\n)\n# 并绘制结果——务必通过可视化确认建议的学习率是否合理\nprint(f\"建议的学习率：{res.suggestion()}\")\nfig = res.plot(show=True, suggest=True)\nfig.show()\n\n# 在数据上拟合模型——如有必要，使用正确的学习率重新定义模型\ntrainer.fit(\n    tft, train_dataloaders=train_dataloader, val_dataloaders=val_dataloader,\n)\n```","# PyTorch Forecasting 快速上手指南\n\nPyTorch Forecasting 是一个基于 PyTorch 的时间序列预测包，集成了 Temporal Fusion Transformers (TFT)、N-BEATS、DeepAR 等最先进的深度学习架构。它利用 PyTorch Lightning 实现高效的 GPU\u002FCPU 训练，并提供高级 API 以简化数据预处理、模型训练及超参数调优流程。\n\n## 环境准备\n\n在开始之前，请确保您的开发环境满足以下要求：\n\n* 
  **操作系统**: Linux, macOS 或 Windows\n*   **Python 版本**: 3.8 - 3.11 (推荐最新稳定版)\n*   **核心依赖**:\n    *   PyTorch (>= 1.7)\n    *   PyTorch Lightning\n    *   pandas, numpy, scikit-learn\n*   **硬件建议**: 支持 CUDA 的 NVIDIA GPU 可显著加速训练（可选，CPU 亦可运行）\n\n## 安装步骤\n\n### 1. 安装 PyTorch (Windows 用户必读)\n如果您使用的是 **Windows** 系统，请先单独安装 PyTorch：\n```bash\npip install torch -f https:\u002F\u002Fdownload.pytorch.org\u002Fwhl\u002Ftorch_stable.html\n```\n*Linux\u002FmacOS 用户通常可直接通过下一步安装，或访问 PyTorch 官网获取对应命令。*\n\n### 2. 安装 PyTorch Forecasting\n推荐使用 pip 进行安装。国内用户可使用清华源或阿里源加速下载：\n\n**标准安装：**\n```bash\npip install pytorch-forecasting -i https:\u002F\u002Fpypi.tuna.tsinghua.edu.cn\u002Fsimple\n```\n\n**Conda 安装方式：**\n```bash\nconda install pytorch-forecasting pytorch -c pytorch -c conda-forge\n```\n\n**可选组件：**\n如果您需要使用多变量分位数损失 (MQF2 loss)，请额外安装：\n```bash\npip install pytorch-forecasting[mqf2]\n```\n\n## 基本使用\n\n以下是使用 **Temporal Fusion Transformer (TFT)** 模型进行训练的最小化工作流示例。主要步骤包括：数据加载 -> 构建数据集 -> 创建 DataLoader -> 初始化模型 -> 训练。\n\n```python\n# 导入必要的库\nimport lightning.pytorch as pl\nfrom lightning.pytorch.loggers import TensorBoardLogger\nfrom lightning.pytorch.callbacks import EarlyStopping, LearningRateMonitor\nfrom pytorch_forecasting import TimeSeriesDataSet, TemporalFusionTransformer, QuantileLoss\nfrom lightning.pytorch.tuner import Tuner\n\n# 1. 加载数据\n# 数据应为 pandas DataFrame，必须包含：\n# * target: 预测目标列\n# * time_idx: 时间索引列 (单调递增整数)\n# * group_ids: 时间序列 ID 列 (用于区分不同序列)\ndata = ... \n\n# 2. 
定义数据集\nmax_encoder_length = 36       # 历史输入长度\nmax_prediction_length = 6     # 未来预测长度\ntraining_cutoff = \"YYYY-MM-DD\" # 训练集截止日期\n\ntraining = TimeSeriesDataSet(\n    data[lambda x: x.date \u003C= training_cutoff],\n    time_idx=\"time_idx\",           # 替换为实际列名\n    target=\"target\",               # 替换为实际列名\n    group_ids=[\"series_id\"],       # 替换为实际列名\n    max_encoder_length=max_encoder_length,\n    max_prediction_length=max_prediction_length,\n    # 定义协变量 (根据实际数据填写)\n    static_categoricals=[\"static_cat\"],\n    static_reals=[\"static_real\"],\n    time_varying_known_categoricals=[\"known_cat\"],\n    time_varying_known_reals=[\"known_real\"],\n    time_varying_unknown_categoricals=[\"unknown_cat\"],\n    time_varying_unknown_reals=[\"unknown_real\"],\n)\n\n# 创建验证集 (复用训练集的归一化统计量)\nvalidation = TimeSeriesDataSet.from_dataset(\n    training, \n    data, \n    min_prediction_idx=training.index.time.max() + 1, \n    stop_randomization=True\n)\n\n# 3. 转换为 DataLoader\nbatch_size = 128\ntrain_dataloader = training.to_dataloader(train=True, batch_size=batch_size, num_workers=2)\nval_dataloader = validation.to_dataloader(train=False, batch_size=batch_size, num_workers=2)\n\n# 4. 配置 Trainer (含早停机制)\nearly_stop_callback = EarlyStopping(monitor=\"val_loss\", min_delta=1e-4, patience=1, verbose=False, mode=\"min\")\nlr_logger = LearningRateMonitor()\n\ntrainer = pl.Trainer(\n    max_epochs=100,\n    accelerator=\"auto\",  # 自动检测 GPU\u002FCPU\n    gradient_clip_val=0.1,\n    limit_train_batches=30,\n    callbacks=[lr_logger, early_stop_callback],\n    logger=TensorBoardLogger(\"lightning_logs\")\n)\n\n# 5. 初始化模型\n# 大部分架构参数会自动从数据集中推断\ntft = TemporalFusionTransformer.from_dataset(\n    training,\n    hidden_size=32,\n    attention_head_size=1,\n    dropout=0.1,\n    hidden_continuous_size=16,\n    loss=QuantileLoss(),\n    log_interval=2,\n    learning_rate=0.03,\n    reduce_on_plateau_patience=4\n)\n\nprint(f\"网络参数量：{tft.size()\u002F1e3:.1f}k\")\n\n# 6. 
(可选) 寻找最佳学习率\nres = Tuner(trainer).lr_find(\n    tft, \n    train_dataloaders=train_dataloader, \n    val_dataloaders=val_dataloader, \n    early_stop_threshold=1000.0, \n    max_lr=0.3,\n)\nprint(f\"建议学习率：{res.suggestion()}\")\n# res.plot(show=True, suggest=True) # 取消注释以查看图表\n\n# 7. 开始训练\ntrainer.fit(\n    tft, \n    train_dataloaders=train_dataloader, \n    val_dataloaders=val_dataloader,\n)\n```\n\n训练完成后，您可以使用该模型进行预测、可视化结果或导出模型。更多高级功能（如自定义模型、详细调参）请参考官方文档。","某大型零售连锁公司的数据科学团队正致力于构建一个能预测未来 12 周各门店商品销量的系统，以优化库存管理并减少缺货损失。\n\n### 没有 pytorch-forecasting 时\n- **数据处理繁琐**：工程师需手动编写大量代码处理时间序列特有的缺失值、变量标准化及多尺度历史窗口对齐，极易出错且难以复用。\n- **模型复现困难**：尝试部署 TFT（Temporal Fusion Transformer）等先进架构时，需从零搭建复杂的网络结构，缺乏内置的可解释性模块，导致业务方不信任黑盒预测。\n- **训练监控缺失**：缺乏统一的训练框架，无法自动记录 GPU 利用率或可视化“实际值 vs 预测值”对比图，调参过程如同盲人摸象。\n- **评估指标单一**：难以同时评估多个预测步长（Multi-horizon）的准确性，往往只能优化单点误差，导致长期预测效果崩塌。\n\n### 使用 pytorch-forecasting 后\n- **数据加载自动化**：利用其专用的 `TimeSeriesDataSet` 类，自动完成变量转换、随机子采样及历史长度处理，将数据准备时间缩短 70%。\n- **开箱即用的先进模型**：直接调用内置的 TFT 等架构，无需重写底层逻辑，并直接使用自带的依赖关系图向业务部门解释“促销”和“季节”如何影响销量。\n- **无缝集成 Lightning**：基于 PyTorch Lightning 自动实现 GPU 加速训练与 TensorBoard 日志记录，实时生成直观的预测对比可视化图表。\n- **专业多维评估**：原生支持多步长预测指标，配合 Optuna 自动进行超参数调优，显著提升了长周期预测的稳定性与准确度。\n\npytorch-forecasting 通过提供高阶 API 和工业级架构，让团队能从繁琐的工程细节中解脱，专注于利用深度学习解决真实的商业预测难题。","https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fsktime_pytorch-forecasting_c48cb5a7.png","sktime","https:\u002F\u002Foss.gittoolsai.com\u002Favatars\u002Fsktime_7e57e590.jpg","A unified framework for machine learning with time series",null,"info@sktime.net","https:\u002F\u002Fwww.sktime.net","https:\u002F\u002Fgithub.com\u002Fsktime",[84,88],{"name":85,"color":86,"percentage":87},"Python","#3572A5",100,{"name":89,"color":90,"percentage":91},"Shell","#89e051",0,4856,862,"2026-04-04T06:22:01","MIT","Linux, macOS, Windows","非必需。支持 CPU、单 GPU 或多 GPU 训练。若使用 GPU，需安装对应版本的 PyTorch（README 未指定具体型号、显存大小或 CUDA 版本）。","未说明",{"notes":100,"python":101,"dependencies":102},"Windows 用户需先单独安装 
PyTorch（使用特定索引链接），然后再安装本包。可通过 conda 或 pip 安装。若需使用多变量分位数损失 (MQF2 loss)，需额外安装可选依赖。该库基于 PyTorch Lightning，可自动适配 CPU 或 GPU 环境。","未说明 (徽章显示支持多个 Python 版本，具体需参考 PyPI)",[103,104,105,106],"torch>=1.7","pytorch-lightning","pandas","optuna",[15,51,14,13,54],[109,110,111,112,113,104,114,115,116,117,105,118,119,120,121,122,123],"pytorch","forecasting","gpu","uncertainty","timeseries-forecasting","deep-learning","neural-networks","timeseries","machine-learning","python","ai","data-science","temporal","artificial-intelligence","hacktoberfest","2026-03-27T02:49:30.150509","2026-04-06T09:44:32.072574",[127,132,137,142,147,151],{"id":128,"question_zh":129,"answer_zh":130,"source_url":131},15182,"使用 Temporal Fusion Transformer (TFT) 设置多个目标变量（list of strings）时报错怎么办？","当将 target 参数设置为字符串列表以预测多个变量时，如果遇到 TypeError 或初始化崩溃，请尝试不要手动设置 target_normalizer。根据文档，TimeSeriesDataSet 默认会自动选择合适的归一化器。移除手动的 normalizer 配置通常能解决此问题。此外，确保使用的是最新的教程代码，因为多目标支持的实现可能随版本变化。","https:\u002F\u002Fgithub.com\u002Fsktime\u002Fpytorch-forecasting\u002Fissues\u002F542",{"id":133,"question_zh":134,"answer_zh":135,"source_url":136},15183,"运行模型训练时遇到 'AttributeError: ExperimentWriter object has no attribute add_figure' 错误如何解决？","该错误通常是由于 tensorboard 版本不兼容或安装不完整导致的。解决方法是重新安装相关依赖包，执行命令：pip install torch torchvision matplotlib tensorboard。这将确保 logger 接口与当前 PyTorch Lightning 版本兼容。","https:\u002F\u002Fgithub.com\u002Fsktime\u002Fpytorch-forecasting\u002Fissues\u002F1256",{"id":138,"question_zh":139,"answer_zh":140,"source_url":141},15184,"运行教程代码时出现 'dictionary update sequence element #0 has length 1; 2 is required' 错误怎么办？","这是由于 torchmetrics 或 pandas 版本过高导致的兼容性问题。解决方案是降级这两个包到特定版本。建议创建一个新环境并安装：torchmetrics==0.5.0 和 pandas==1.2.5。安装后重新运行代码即可恢复正常。","https:\u002F\u002Fgithub.com\u002Fsktime\u002Fpytorch-forecasting\u002Fissues\u002F665",{"id":143,"question_zh":144,"answer_zh":145,"source_url":146},15185,"在多 GPU 环境下训练 Baseline 或 TFT 模型时出现 'ProcessExitedException ... 
SIGSEGV' 错误如何处理？","在 Jupyter Notebook 中运行多 GPU 训练常会导致此进程退出错误，因为 Notebook 环境难以正确地在多进程间共享数据集。解决方案有两个关键点：1. 将代码保存为 Python 脚本 (.py 文件) 并在终端运行，而不是在 Notebook 中直接运行；2. 在 Trainer 初始化时显式设置策略参数 strategy=\"auto\"。","https:\u002F\u002Fgithub.com\u002Fsktime\u002Fpytorch-forecasting\u002Fissues\u002F1351",{"id":148,"question_zh":149,"answer_zh":150,"source_url":131},15186,"使用 encoderNormalizer 且 min_encoder_length=1 时遇到 'element 0 of tensors does not require grad' 错误的原因是什么？","当设置 min_encoder_length = 1 并使用 encoderNormalizer 时，可能会触发梯度计算相关的错误。这通常是因为输入张量长度过短导致无法正确构建计算图。建议尝试增加 min_encoder_length 的值（例如设为 2 或更大），或者检查数据预处理步骤，确保输入到模型的时间序列长度满足模型内部操作的最小要求。",{"id":152,"question_zh":153,"answer_zh":154,"source_url":136},15187,"DeepAR 教程代码运行失败，除了 add_figure 错误外还有 KeyError: 'radam_buffer' 怎么办？","KeyError: 'radam_buffer' 通常与优化器状态或 PyTorch\u002FLightning 版本不匹配有关。如果在 from_dataset 初始化模型时遇到此问题，可以尝试显式指定 optimizer 参数（如 optimizer='Adam'）而不是使用默认的 RAdam，或者检查是否安装了与当前 PyTorch Forecasting 版本完全匹配的 PyTorch Lightning 版本。有时降级 Lightning 版本也能解决此类缓冲区键值错误。",[156,161,166,171,176,181,186,191,196,201,206,211,216,221,226,231,236,241,246,251],{"id":157,"version":158,"summary_zh":159,"released_at":160},89850,"v1.6.1","## 变更内容\n此补丁版本主要关注以下内容：\n* 修复了一个持续存在的 bug：在 `lightning \u003C2.6` 版本中，调用 `load_from_checkpoint` 时传递 `weights_only` 参数会导致问题。\n* 修复了由 `pandas` 的写时复制行为引起的编码器不可写问题。\n\n[查看完整变更日志](https:\u002F\u002Fgithub.com\u002Fsktime\u002Fpytorch-forecasting\u002Fblob\u002Fmain\u002FCHANGELOG.md)\n### 全体贡献者\n@cngmid、@phoeenniixx\n\n**完整变更日志**：https:\u002F\u002Fgithub.com\u002Fsktime\u002Fpytorch-forecasting\u002Fcompare\u002Fv1.6.0...v1.6.1\n","2026-01-23T17:20:53",{"id":162,"version":163,"summary_zh":164,"released_at":165},89851,"v1.6.0","## 变更内容\n本次发布重点关注以下方面：\n\n* Python 3.14 支持\n* 解决权重加载中的反序列化错误\n* 与 `scikit-base` 合并重复的工具函数\n* 新增 **Beta v2** 的 `predict` 接口\n* 模型后端的改进\n\n### API 变更\n\n* 由于 Lightning 的重大变更，Tuner 的导入方式已更改。Lightning v2.6 在检查点加载行为上引入了破坏性变更，导致 `pytorch-forecasting` 在加载权重时出现反序列化错误（参见 
#2000）。\n为解决此问题，`pytorch-forecasting` 现在提供了自己的 `Tuner` 包装器，在调用 `lr_find()` 时会暴露所需的 `weights_only` 参数。\n\n  * 当您使用 `pytorch-forecasting > 1.5.0` 和 `lightning > 2.5` 时，请使用 `pytorch_forecasting.tuning.Tuner` 替代 `lightning.pytorch.tuner.Tuner`。详情请参阅 #2000。\n\n\n[查看完整变更日志](https:\u002F\u002Fgithub.com\u002Fsktime\u002Fpytorch-forecasting\u002Fblob\u002Fmain\u002FCHANGELOG.md)\n\n## 新贡献者\n* @anasashb 在 https:\u002F\u002Fgithub.com\u002Fsktime\u002Fpytorch-forecasting\u002Fpull\u002F1997 中做出了首次贡献\n* @ahmedkansulum 在 https:\u002F\u002Fgithub.com\u002Fsktime\u002Fpytorch-forecasting\u002Fpull\u002F2009 中做出了首次贡献\n* @khenm 在 https:\u002F\u002Fgithub.com\u002Fsktime\u002Fpytorch-forecasting\u002Fpull\u002F2012 中做出了首次贡献\n\n### 全体贡献者\n@agobbifbk、@ahmedkansulum、@anasashb、@dependabot[bot]、@fkiraly、@khenm、@phoeenniixx、@PranavBhatP、@szepeviktor\n\n**完整变更日志**: https:\u002F\u002Fgithub.com\u002Fsktime\u002Fpytorch-forecasting\u002Fcompare\u002Fv1.5.0...v1.6.0","2026-01-16T09:28:08",{"id":167,"version":168,"summary_zh":169,"released_at":170},89852,"v1.5.0","## 变更内容\n本次发布重点关注：\n\n* Python 3.9 生命周期结束\n* 测试框架的变更\n* `pytorch-forecasting` v1 和 beta v2 中的新估计器\n\n[查看完整变更日志](https:\u002F\u002Fgithub.com\u002Fsktime\u002Fpytorch-forecasting\u002Fblob\u002Fmain\u002FCHANGELOG.md)\n\n## 新贡献者\n* @Vishnu-Rangiah 在 https:\u002F\u002Fgithub.com\u002Fsktime\u002Fpytorch-forecasting\u002Fpull\u002F1905 中做出了首次贡献\n* @Pinaka07 在 https:\u002F\u002Fgithub.com\u002Fsktime\u002Fpytorch-forecasting\u002Fpull\u002F1914 中做出了首次贡献\n* @hubkrieb 在 https:\u002F\u002Fgithub.com\u002Fsktime\u002Fpytorch-forecasting\u002Fpull\u002F1910 中做出了首次贡献\n* @Himanshu-Verma-ds 在 https:\u002F\u002Fgithub.com\u002Fsktime\u002Fpytorch-forecasting\u002Fpull\u002F1488 中做出了首次贡献\n* @lohraspco 在 https:\u002F\u002Fgithub.com\u002Fsktime\u002Fpytorch-forecasting\u002Fpull\u002F1926 中做出了首次贡献\n* @zju-ys 在 https:\u002F\u002Fgithub.com\u002Fsktime\u002Fpytorch-forecasting\u002Fpull\u002F1924 中做出了首次贡献\n* @caph1993 在 
https:\u002F\u002Fgithub.com\u002Fsktime\u002Fpytorch-forecasting\u002Fpull\u002F1944 中做出了首次贡献\n* @szepeviktor 在 https:\u002F\u002Fgithub.com\u002Fsktime\u002Fpytorch-forecasting\u002Fpull\u002F1948 中做出了首次贡献\n* @sanskarmodi8 在 https:\u002F\u002Fgithub.com\u002Fsktime\u002Fpytorch-forecasting\u002Fpull\u002F1978 中做出了首次贡献\n\n## 所有贡献者\n@agobbifbk,\n@caph1993,\n@cngmid,\n@fkiraly,\n@fnhirwa,\n@Himanshu-Verma-ds,\n@hubkrieb,\n@jdb78,\n@lohraspco,\n@phoeenniixx,\n@Pinaka07,\n@PranavBhatP,\n@sanskarmodi8,\n@Sohaib-Ahmed21,\n@szepeviktor\n@Vishnu-Rangiah,\n@zju-ys\n\n**完整变更日志**: https:\u002F\u002Fgithub.com\u002Fsktime\u002Fpytorch-forecasting\u002Fcompare\u002Fv1.4.0...v1.5.0","2025-10-10T13:27:01",{"id":172,"version":173,"summary_zh":174,"released_at":175},89853,"v1.4.0","## 变更内容\n\n功能与维护更新。\n\n[查看完整变更日志](https:\u002F\u002Fgithub.com\u002Fsktime\u002Fpytorch-forecasting\u002Fblob\u002Fmain\u002FCHANGELOG.md)\n\n## 新贡献者\n* @gbilleyPeco 在 https:\u002F\u002Fgithub.com\u002Fsktime\u002Fpytorch-forecasting\u002Fpull\u002F1750 中完成了首次贡献\n* @pietsjoh 在 https:\u002F\u002Fgithub.com\u002Fsktime\u002Fpytorch-forecasting\u002Fpull\u002F1399 中完成了首次贡献\n* @MartinoMensio 在 https:\u002F\u002Fgithub.com\u002Fsktime\u002Fpytorch-forecasting\u002Fpull\u002F1579 中完成了首次贡献\n* @phoeenniixx 在 https:\u002F\u002Fgithub.com\u002Fsktime\u002Fpytorch-forecasting\u002Fpull\u002F1811 中完成了首次贡献\n* @cngmid 在 https:\u002F\u002Fgithub.com\u002Fsktime\u002Fpytorch-forecasting\u002Fpull\u002F1827 中完成了首次贡献\n* @Marcrb2 在 https:\u002F\u002Fgithub.com\u002Fsktime\u002Fpytorch-forecasting\u002Fpull\u002F1518 中完成了首次贡献\n* @jobs-git 在 https:\u002F\u002Fgithub.com\u002Fsktime\u002Fpytorch-forecasting\u002Fpull\u002F1864 中完成了首次贡献\n\n## 全体贡献者\n\n@agobbifbk,\n@Borda,\n@cngmid,\n@fkiraly,\n@fnhirwa,\n@gbilleyPeco,\n@jobs-git,\n@Marcrb2,\n@MartinoMensio,\n@phoeenniixx,\n@pietsjoh,\n@PranavBhatP\n\n**完整变更日志**: 
https:\u002F\u002Fgithub.com\u002Fsktime\u002Fpytorch-forecasting\u002Fcompare\u002Fv1.3.0...v1.4.0","2025-06-13T20:43:02",{"id":177,"version":178,"summary_zh":179,"released_at":180},89854,"v1.3.0","## 变更内容\n\n功能与维护更新。\n\n* 支持 `Python 3.13`\n* `tide` 模型\n* TFT 的错误修复\n\n[查看完整变更日志](https:\u002F\u002Fgithub.com\u002Fsktime\u002Fpytorch-forecasting\u002Fblob\u002Fmain\u002FCHANGELOG.md)\n\n## 新贡献者\n* @xiaokongkong 在 https:\u002F\u002Fgithub.com\u002Fsktime\u002Fpytorch-forecasting\u002Fpull\u002F1719 中完成了首次贡献\n* @madprogramer 在 https:\u002F\u002Fgithub.com\u002Fsktime\u002Fpytorch-forecasting\u002Fpull\u002F1720 中完成了首次贡献\n* @julian-fong 在 https:\u002F\u002Fgithub.com\u002Fsktime\u002Fpytorch-forecasting\u002Fpull\u002F1705 中完成了首次贡献\n* @Sohaib-Ahmed21 在 https:\u002F\u002Fgithub.com\u002Fsktime\u002Fpytorch-forecasting\u002Fpull\u002F1734 中完成了首次贡献\n* @d-schmitt 在 https:\u002F\u002Fgithub.com\u002Fsktime\u002Fpytorch-forecasting\u002Fpull\u002F1580 中完成了首次贡献\n* @Luke-Chesley 在 https:\u002F\u002Fgithub.com\u002Fsktime\u002Fpytorch-forecasting\u002Fpull\u002F1516 中完成了首次贡献\n* @PranavBhatP 在 https:\u002F\u002Fgithub.com\u002Fsktime\u002Fpytorch-forecasting\u002Fpull\u002F1762 中完成了首次贡献\n\n## 全体贡献者\n\n@d-schmitt,\n@fkiraly,\n@fnhirwa,\n@julian-fong,\n@Luke-Chesley,\n@madprogramer,\n@PranavBhatP,\n@Sohaib-Ahmed21,\n@xiaokongkong,\n@XinyuWuu\n\n**完整变更日志**: https:\u002F\u002Fgithub.com\u002Fsktime\u002Fpytorch-forecasting\u002Fcompare\u002Fv1.2.0...v1.3.0","2025-02-06T17:46:54",{"id":182,"version":183,"summary_zh":184,"released_at":185},89855,"v1.2.0","## 变更内容\n\n维护更新，包含少量功能新增和错误修复。\n\n* 支持 `numpy 2.X`\n* 结束对 `python 3.8` 的支持\n* 修复文档构建问题\n* 错误修复\n\n[查看完整变更日志](https:\u002F\u002Fgithub.com\u002Fsktime\u002Fpytorch-forecasting\u002Fblob\u002Fmain\u002FCHANGELOG.md)\n\n## 新贡献者\n* @ewth 在 https:\u002F\u002Fgithub.com\u002Fsktime\u002Fpytorch-forecasting\u002Fpull\u002F1696 中完成了首次贡献\n* @airookie17 在 https:\u002F\u002Fgithub.com\u002Fsktime\u002Fpytorch-forecasting\u002Fpull\u002F1692 
中完成了首次贡献\n* @benHeid 在 https:\u002F\u002Fgithub.com\u002Fsktime\u002Fpytorch-forecasting\u002Fpull\u002F1704 中完成了首次贡献\n* @eugenio-mercuriali 在 https:\u002F\u002Fgithub.com\u002Fsktime\u002Fpytorch-forecasting\u002Fpull\u002F1699 中完成了首次贡献\n\n## 全体贡献者\n\n@airookie17,\n@benHeid,\n@eugenio-mercuriali,\n@ewth,\n@fkiraly,\n@fnhirwa,\n@XinyuWuu,\n@yarnabrina\n\n**完整变更日志**: https:\u002F\u002Fgithub.com\u002Fsktime\u002Fpytorch-forecasting\u002Fcompare\u002Fv1.1.1...v1.2.0","2024-11-19T16:39:09",{"id":187,"version":188,"summary_zh":189,"released_at":190},89856,"v1.1.1","## 变更内容\n\n热修复版本，用于修正 `pyproject.toml` 文件中包名的拼写错误，从而正确设置 `pytorch-forecasting` 的 PEP 440 标识符。\n\n除此之外，与 [1.1.0](https:\u002F\u002Fgithub.com\u002Fjdb78\u002Fpytorch-forecasting\u002Freleases\u002Ftag\u002Fv1.1.0) 完全相同。\n\n**完整变更日志**: https:\u002F\u002Fgithub.com\u002Fjdb78\u002Fpytorch-forecasting\u002Fcompare\u002Fv1.1.0...v1.1.1","2024-09-09T09:08:32",{"id":192,"version":193,"summary_zh":194,"released_at":195},89857,"v1.1.0","## 变更内容\n\n维护更新扩大了兼容性范围，并整合了依赖项：\n\n* 支持 Python 3.11 和 3.12，新增 CI 测试\n* 支持 macOS，新增 CI 测试\n* 核心依赖已精简至 `numpy`、`torch`、`lightning`、`scipy`、`pandas` 和 `scikit-learn`。\n* 软依赖被归入软依赖集合中：`all_extras` 包含所有软依赖，`tuning` 则包含基于 `optuna` 的优化相关依赖。\n\n### 依赖变更\n\n* 以下包已不再是核心依赖，改为可选依赖：`optuna`、`statsmodels`、`pytorch-optimizer`、`matplotlib`。依赖这些包功能的环境需要更新，显式安装这些包。\n* `optuna` 的版本约束已更新为 `optuna >=3.1.0,\u003C4.0.0`。\n* 如果使用 `optuna >=3.3.0`，则 `optuna-integration` 现在成为额外的软依赖。\n\n### 弃用与移除\n\n* 从 1.2.0 版本开始，默认优化器将由 `\"ranger\"` 更改为 `\"adam\"`，以避免默认配置中引入非 `torch` 依赖。用户仍可使用 `pytorch-optimizer` 提供的优化器。若需继续使用 `\"ranger\"`，请显式指定优化器。\n* 从 1.1.0 版本开始，如果未安装软依赖 `matplotlib`，日志记录器将不再记录图表，但也不会抛出异常。如需记录图表，请确保已安装 `matplotlib`。\n\n## 全体贡献者\n\n@andre-marcos-perez,\n@avirsaha,\n@bendavidsteel,\n@benheid,\n@bohdan-safoniuk,\n@Borda,\n@CahidArda,\n@fkiraly,\n@fnhirwa,\n@germanKoch,\n@jacktang,\n@jdb78,\n@jurgispods,\n@maartensukel,\n@MBelniak,\n@orangehe,\n@pavelzw,\n@sfalkena,\n@tmct,\n@XinyuWuu,\n@yarnabrina,\n\n## 新贡献者\n* 
@jurgispods 在 https:\u002F\u002Fgithub.com\u002Fjdb78\u002Fpytorch-forecasting\u002Fpull\u002F1366 中完成了首次贡献。\n* @jacktang 在 https:\u002F\u002Fgithub.com\u002Fjdb78\u002Fpytorch-forecasting\u002Fpull\u002F1353 中完成了首次贡献。\n* @andre-marcos-perez 在 https:\u002F\u002Fgithub.com\u002Fjdb78\u002Fpytorch-forecasting\u002Fpull\u002F1346 中完成了首次贡献。\n* @tmct 在 https:\u002F\u002Fgithub.com\u002Fjdb78\u002Fpytorch-forecasting\u002Fpull\u002F1340 中完成了首次贡献。\n* @bohdan-safoniuk 在 https:\u002F\u002Fgithub.com\u002Fjdb78\u002Fpytorch-forecasting\u002Fpull\u002F1318 中完成了首次贡献。\n* @MBelniak 在 https:\u002F\u002Fgithub.com\u002Fjdb78\u002Fpytorch-forecasting\u002Fpull\u002F1230 中完成了首次贡献。\n* @CahidArda 在 https:\u002F\u002Fgithub.com\u002Fjdb78\u002Fpytorch-forecasting\u002Fpull\u002F1175 中完成了首次贡献。\n* @bendavidsteel 在 https:\u002F\u002Fgithub.com\u002Fjdb78\u002Fpytorch-forecasting\u002Fpull\u002F1359 中完成了首次贡献。\n* @Borda 在 https:\u002F\u002Fgithub.com\u002Fjdb78\u002Fpytorch-forecasting\u002Fpull\u002F1498 中完成了首次贡献。\n* @fkiraly 在 https:\u002F\u002Fgithub.com\u002Fjdb78\u002Fpytorch-forecasting\u002Fpull\u002F1598 中完成了首次贡献。\n* @XinyuWuu 在 https:\u002F\u002Fgithub.com\u002Fjdb78\u002Fpytorch-forecasting\u002Fpull\u002F1599 中完成了首次贡献。\n* @pavelzw 在 https:\u002F\u002Fgithub.com\u002Fjdb78\u002Fpytorch-forecasting\u002Fpull\u002F1407 中完成了首次贡献。\n* @yarnabrina 在 https:\u002F\u002Fgithub.com\u002F","2024-09-08T18:54:08",{"id":197,"version":198,"summary_zh":199,"released_at":200},89858,"v1.0.0","### 重大变更\n\n- 升级到 PyTorch 2.0 和 Lightning 2.0。这带来了一些变化，例如训练器的配置方式。请参阅 [Lightning 升级指南](https:\u002F\u002Flightning.ai\u002Fdocs\u002Fpytorch\u002Flatest\u002Fupgrade\u002Fmigration_guide.html)。对于 PyTorch Forecasting 而言，尤其需要注意的是：如果您正在开发自定义模型，类方法 `epoch_end` 已被重命名为 `on_epoch_end`；同时，需将 `model.summarize()` 替换为 `ModelSummary(model, max_depth=-1)`，并且 `Tuner(trainer)` 现在是一个独立的类，因此需要替换 `trainer.tuner`。(#1280)\n- 修改了 `predict()` 接口，使其返回命名元组——详情请参阅教程。\n\n### 变更\n\n- `predict` 方法现在使用 Lightning 的预测功能，并支持将结果写入磁盘 
(#1280)。\n\n### 修复\n\n- 修复了当分位数为 0.0 和 1.0（即最小值和最大值）时鲁棒缩放器的行为 (#1142)。","2023-04-10T19:56:20",{"id":202,"version":203,"summary_zh":204,"released_at":205},89859,"v0.10.3","### 修复\n\n- 从依赖中移除了 pandoc，以解决 poetry 安装时出现的问题 (#1126)\n- 为 torchmetrics 添加了指标属性，从而提升了多 GPU 性能 (#1126)\n\n### 新增\n\n- “robust” 编码器方法可通过设置“center”、“lower”和“upper”分位数来自定义 (#1126)","2022-09-07T11:54:32",{"id":207,"version":208,"summary_zh":209,"released_at":210},89860,"v0.10.2","### Added\r\n\r\n- DeepVar network (#923)\r\n- Enable quantile loss for N-HiTS (#926)\r\n- MQF2 loss (multivariate quantile loss) (#949)\r\n- Non-causal attention for TFT (#949)\r\n- Tweedie loss (#949)\r\n- ImplicitQuantileNetworkDistributionLoss (#995)\r\n\r\n### Fixed\r\n\r\n- Fix learning scale schedule (#912)\r\n- Fix TFT list\u002Ftuple issue at interpretation (#924)\r\n- Allowed encoder length down to zero for EncoderNormalizer if transformation is not needed (#949)\r\n- Fix Aggregation and CompositeMetric resets (#949)\r\n\r\n### Changed\r\n\r\n- Dropping Python 3.6 support, adding 3.10 support (#479)\r\n- Refactored dataloader sampling - moved samplers to pytorch_forecasting.data.samplers module (#479)\r\n- Changed transformation format for Encoders to dict from tuple (#949)\r\n\r\n### Contributors\r\n\r\n- jdb78","2022-05-23T11:53:09",{"id":212,"version":213,"summary_zh":214,"released_at":215},89861,"v0.10.1","### Fixed\r\n\r\n- Fix with creating tensors on correct devices (#908)\r\n- Fix with MultiLoss when calculating gradient (#908)\r\n\r\n### Contributors\r\n- jdb78","2022-03-24T21:27:16",{"id":217,"version":218,"summary_zh":219,"released_at":220},89862,"v0.10.0","### Added\r\n\r\n- Added new `N-HiTS` network that has consistently beaten `N-BEATS` (#890)\r\n- Allow using [torchmetrics](https:\u002F\u002Ftorchmetrics.readthedocs.io\u002F) as loss metrics (#776)\r\n- Enable fitting `EncoderNormalizer()` with limited data history using `max_length` argument (#782)\r\n- More flexible `MultiEmbedding()` with 
convenience `output_size` and `input_size` properties (#829)\r\n- Fix concatenation of attention (#902)\r\n\r\n### Fixed\r\n\r\n- Fix pip install via github (#798)\r\n\r\n### Contributors\r\n\r\n- jdb78\r\n- christy\r\n- lukemerrick\r\n- Seon82","2022-03-23T12:52:06",{"id":222,"version":223,"summary_zh":224,"released_at":225},89863,"v0.9.2","### Added\r\n\r\n- Added support for running `pytorch_lightning.trainer.test` (#759)\r\n\r\n### Fixed\r\n\r\n- Fix unintentional mutation of `x_cont` (#732).\r\n- Compatibility with pytorch-lightning 1.5 (#758)\r\n\r\n### Contributors\r\n\r\n- eavae\r\n- danielgafni\r\n- jdb78","2021-11-29T19:54:39",{"id":227,"version":228,"summary_zh":229,"released_at":230},89864,"v0.9.1","### Added\r\n\r\n- Use target name instead of target number for logging metrics (#588)\r\n- Optimizer can be initialized by passing string, class or function (#602)\r\n- Add support for multiple outputs in Baseline model (#603)\r\n- Added Optuna pruner as optional parameter in `TemporalFusionTransformer.optimize_hyperparameters` (#619)\r\n- Dropping support for Python 3.6 and starting support for Python 3.9 (#639)\r\n\r\n### Fixed\r\n\r\n- Initialization of TemporalFusionTransformer with multiple targets but loss for only one target (#550)\r\n- Added missing transformation of prediction for MLP (#602)\r\n- Fixed logging hyperparameters (#688)\r\n- Ensure MultiNormalizer fit state is detected (#681)\r\n- Fix infinite loop in TimeDistributedEmbeddingBag (#672)\r\n\r\n### Contributors\r\n\r\n- jdb78\r\n- TKlerx\r\n- chefPony\r\n- eavae\r\n- L0Z1K","2021-09-26T11:16:08",{"id":232,"version":233,"summary_zh":234,"released_at":235},89865,"v0.9.0","### Breaking changes\r\n\r\n- Removed `dropout_categoricals` parameter from `TimeSeriesDataSet`.\r\n  Use `categorical_encoders=dict(\u003Cvariable_name>=NaNLabelEncoder(add_nan=True))` instead (#518)\r\n- Rename parameter `allow_missings` for `TimeSeriesDataSet` to `allow_missing_timesteps` (#518)\r\n- Transparent 
handling of transformations. Forward methods should now call two new methods (#518):\r\n\r\n  - `transform_output` to explicitly rescale the network outputs into the de-normalized space\r\n  - `to_network_output` to create a dict-like named tuple. This allows tracing the modules with PyTorch's JIT. Only `prediction` is still required which is the main network output.\r\n\r\n  Example:\r\n\r\n  ```python\r\n  def forward(self, x):\r\n      normalized_prediction = self.module(x)\r\n      prediction = self.transform_output(prediction=normalized_prediction, target_scale=x[\"target_scale\"])\r\n      return self.to_network_output(prediction=prediction)\r\n  ```\r\n\r\n### Added\r\n\r\n- Improved validation of input parameters of TimeSeriesDataSet (#518)\r\n\r\n### Fixed\r\n\r\n- Fix quantile prediction for tensors on GPUs for distribution losses (#491)\r\n- Fix hyperparameter update for RecurrentNetwork.from_dataset method (#497)\r\n","2021-06-04T17:48:22",{"id":237,"version":238,"summary_zh":239,"released_at":240},89866,"v0.8.5","### Added\r\n\r\n- Allow lists for multiple losses and normalizers (#405)\r\n- Warn if normalization is with scale `\u003C 1e-7` (#429)\r\n- Allow usage of distribution losses in all settings (#434)\r\n\r\n### Fixed\r\n\r\n- Fix issue when predicting and data is on different devices (#402)\r\n- Fix non-iterable output (#404)\r\n- Fix problem with moving data to CPU for multiple targets (#434)\r\n\r\n### Contributors\r\n\r\n- jdb78\r\n- domplexity","2021-04-27T20:38:26",{"id":242,"version":243,"summary_zh":244,"released_at":245},89867,"v0.8.4","### Added\r\n\r\n- Adding a filter functionality to the timeseries dataset (#329)\r\n- Add simple models such as LSTM, GRU and an MLP on the decoder (#380)\r\n- Allow usage of any torch optimizer such as SGD (#380)\r\n\r\n### Fixed\r\n\r\n- Moving predictions to CPU to avoid running out of memory (#329)\r\n- Correct determination of `output_size` for multi-target forecasting with the 
TemporalFusionTransformer (#328)\r\n- Tqdm autonotebook fix to work outside of Jupyter (#338)\r\n- Fix issue with yaml serialization for TensorboardLogger (#379)\r\n\r\n### Contributors\r\n\r\n- jdb78\r\n- JakeForsey\r\n- vakker","2021-03-07T14:34:42",{"id":247,"version":248,"summary_zh":249,"released_at":250},89868,"v0.8.3","### Added\r\n\r\n- Make tuning trainer kwargs overwritable (#300)\r\n- Allow adding categories to NaNEncoder (#303)\r\n\r\n### Fixed\r\n\r\n- Underlying data is copied if modified. Original data is not modified inplace (#263)\r\n- Allow plotting of interpretation on passed figure for NBEATS (#280)\r\n- Fix memory leak for plotting and logging interpretation (#311)\r\n- Correct shape of `predict()` method output for multi-targets (#268)\r\n- Remove cloudpickle to allow GPU trained models to be loaded on CPU devices from checkpoints (#314)\r\n\r\n### Contributors\r\n\r\n- jdb78\r\n- kigawas\r\n- snumumrik","2021-01-31T22:53:41",{"id":252,"version":253,"summary_zh":254,"released_at":255},89869,"v0.8.2","- Added missing output transformation which was switched off by default (#260)","2021-01-12T21:09:42"]