[{"data":1,"prerenderedAt":-1},["ShallowReactive",2],{"similar-LongxingTan--Time-series-prediction":3,"tool-LongxingTan--Time-series-prediction":61},[4,18,26,36,44,53],{"id":5,"name":6,"github_repo":7,"description_zh":8,"stars":9,"difficulty_score":10,"last_commit_at":11,"category_tags":12,"status":17},4358,"openclaw","openclaw\u002Fopenclaw","OpenClaw 是一款专为个人打造的本地化 AI 助手，旨在让你在自己的设备上拥有完全可控的智能伙伴。它打破了传统 AI 助手局限于特定网页或应用的束缚，能够直接接入你日常使用的各类通讯渠道，包括微信、WhatsApp、Telegram、Discord、iMessage 等数十种平台。无论你在哪个聊天软件中发送消息，OpenClaw 都能即时响应，甚至支持在 macOS、iOS 和 Android 设备上进行语音交互，并提供实时的画布渲染功能供你操控。\n\n这款工具主要解决了用户对数据隐私、响应速度以及“始终在线”体验的需求。通过将 AI 部署在本地，用户无需依赖云端服务即可享受快速、私密的智能辅助，真正实现了“你的数据，你做主”。其独特的技术亮点在于强大的网关架构，将控制平面与核心助手分离，确保跨平台通信的流畅性与扩展性。\n\nOpenClaw 非常适合希望构建个性化工作流的技术爱好者、开发者，以及注重隐私保护且不愿被单一生态绑定的普通用户。只要具备基础的终端操作能力（支持 macOS、Linux 及 Windows WSL2），即可通过简单的命令行引导完成部署。如果你渴望拥有一个懂你",349277,3,"2026-04-06T06:32:30",[13,14,15,16],"Agent","开发框架","图像","数据工具","ready",{"id":19,"name":20,"github_repo":21,"description_zh":22,"stars":23,"difficulty_score":10,"last_commit_at":24,"category_tags":25,"status":17},3808,"stable-diffusion-webui","AUTOMATIC1111\u002Fstable-diffusion-webui","stable-diffusion-webui 是一个基于 Gradio 构建的网页版操作界面，旨在让用户能够轻松地在本地运行和使用强大的 Stable Diffusion 图像生成模型。它解决了原始模型依赖命令行、操作门槛高且功能分散的痛点，将复杂的 AI 绘图流程整合进一个直观易用的图形化平台。\n\n无论是希望快速上手的普通创作者、需要精细控制画面细节的设计师，还是想要深入探索模型潜力的开发者与研究人员，都能从中获益。其核心亮点在于极高的功能丰富度：不仅支持文生图、图生图、局部重绘（Inpainting）和外绘（Outpainting）等基础模式，还独创了注意力机制调整、提示词矩阵、负向提示词以及“高清修复”等高级功能。此外，它内置了 GFPGAN 和 CodeFormer 等人脸修复工具，支持多种神经网络放大算法，并允许用户通过插件系统无限扩展能力。即使是显存有限的设备，stable-diffusion-webui 也提供了相应的优化选项，让高质量的 AI 艺术创作变得触手可及。",162132,"2026-04-05T11:01:52",[14,15,13],{"id":27,"name":28,"github_repo":29,"description_zh":30,"stars":31,"difficulty_score":32,"last_commit_at":33,"category_tags":34,"status":17},1381,"everything-claude-code","affaan-m\u002Feverything-claude-code","everything-claude-code 是一套专为 AI 编程助手（如 Claude Code、Codex、Cursor 等）打造的高性能优化系统。它不仅仅是一组配置文件，而是一个经过长期实战打磨的完整框架，旨在解决 AI 代理在实际开发中面临的效率低下、记忆丢失、安全隐患及缺乏持续学习能力等核心痛点。\n\n通过引入技能模块化、直觉增强、记忆持久化机制以及内置的安全扫描功能，everything-claude-code 能显著提升 AI 在复杂任务中的表现，帮助开发者构建更稳定、更智能的生产级 AI 代理。其独特的“研究优先”开发理念和针对 Token 消耗的优化策略，使得模型响应更快、成本更低，同时有效防御潜在的攻击向量。\n\n这套工具特别适合软件开发者、AI 研究人员以及希望深度定制 AI 工作流的技术团队使用。无论您是在构建大型代码库，还是需要 AI 协助进行安全审计与自动化测试，everything-claude-code 都能提供强大的底层支持。作为一个曾荣获 Anthropic 黑客大奖的开源项目，它融合了多语言支持与丰富的实战钩子（hooks），让 AI 真正成长为懂上",143909,2,"2026-04-07T11:33:18",[14,13,35],"语言模型",{"id":37,"name":38,"github_repo":39,"description_zh":40,"stars":41,"difficulty_score":32,"last_commit_at":42,"category_tags":43,"status":17},2271,"ComfyUI","Comfy-Org\u002FComfyUI","ComfyUI 是一款功能强大且高度模块化的视觉 AI 引擎，专为设计和执行复杂的 Stable Diffusion 图像生成流程而打造。它摒弃了传统的代码编写模式，采用直观的节点式流程图界面，让用户通过连接不同的功能模块即可构建个性化的生成管线。\n\n这一设计巧妙解决了高级 AI 绘图工作流配置复杂、灵活性不足的痛点。用户无需具备编程背景，也能自由组合模型、调整参数并实时预览效果，轻松实现从基础文生图到多步骤高清修复等各类复杂任务。ComfyUI 拥有极佳的兼容性，不仅支持 Windows、macOS 和 Linux 全平台，还广泛适配 NVIDIA、AMD、Intel 及苹果 Silicon 等多种硬件架构，并率先支持 SDXL、Flux、SD3 等前沿模型。\n\n无论是希望深入探索算法潜力的研究人员和开发者，还是追求极致创作自由度的设计师与资深 AI 绘画爱好者，ComfyUI 都能提供强大的支持。其独特的模块化架构允许社区不断扩展新功能，使其成为当前最灵活、生态最丰富的开源扩散模型工具之一，帮助用户将创意高效转化为现实。",107888,"2026-04-06T11:32:50",[14,15,13],{"id":45,"name":46,"github_repo":47,"description_zh":48,"stars":49,"difficulty_score":32,"last_commit_at":50,"category_tags":51,"status":17},4721,"markitdown","microsoft\u002Fmarkitdown","MarkItDown 是一款由微软 AutoGen 团队打造的轻量级 Python 工具，专为将各类文件高效转换为 Markdown 格式而设计。它支持 PDF、Word、Excel、PPT、图片（含 OCR）、音频（含语音转录）、HTML 乃至 YouTube 链接等多种格式的解析，能够精准提取文档中的标题、列表、表格和链接等关键结构信息。\n\n在人工智能应用日益普及的今天，大语言模型（LLM）虽擅长处理文本，却难以直接读取复杂的二进制办公文档。MarkItDown 
恰好解决了这一痛点，它将非结构化或半结构化的文件转化为模型“原生理解”且 Token 效率极高的 Markdown 格式，成为连接本地文件与 AI 分析 pipeline 的理想桥梁。此外，它还提供了 MCP（模型上下文协议）服务器，可无缝集成到 Claude Desktop 等 LLM 应用中。\n\n这款工具特别适合开发者、数据科学家及 AI 研究人员使用，尤其是那些需要构建文档检索增强生成（RAG）系统、进行批量文本分析或希望让 AI 助手直接“阅读”本地文件的用户。虽然生成的内容也具备一定可读性，但其核心优势在于为机器",93400,"2026-04-06T19:52:38",[52,14],"插件",{"id":54,"name":55,"github_repo":56,"description_zh":57,"stars":58,"difficulty_score":10,"last_commit_at":59,"category_tags":60,"status":17},4487,"LLMs-from-scratch","rasbt\u002FLLMs-from-scratch","LLMs-from-scratch 是一个基于 PyTorch 的开源教育项目，旨在引导用户从零开始一步步构建一个类似 ChatGPT 的大型语言模型（LLM）。它不仅是同名技术著作的官方代码库，更提供了一套完整的实践方案，涵盖模型开发、预训练及微调的全过程。\n\n该项目主要解决了大模型领域“黑盒化”的学习痛点。许多开发者虽能调用现成模型，却难以深入理解其内部架构与训练机制。通过亲手编写每一行核心代码，用户能够透彻掌握 Transformer 架构、注意力机制等关键原理，从而真正理解大模型是如何“思考”的。此外，项目还包含了加载大型预训练权重进行微调的代码，帮助用户将理论知识延伸至实际应用。\n\nLLMs-from-scratch 特别适合希望深入底层原理的 AI 开发者、研究人员以及计算机专业的学生。对于不满足于仅使用 API，而是渴望探究模型构建细节的技术人员而言，这是极佳的学习资源。其独特的技术亮点在于“循序渐进”的教学设计：将复杂的系统工程拆解为清晰的步骤，配合详细的图表与示例，让构建一个虽小但功能完备的大模型变得触手可及。无论你是想夯实理论基础，还是为未来研发更大规模的模型做准备",90106,"2026-04-06T11:19:32",[35,15,13,14],{"id":62,"github_repo":63,"name":64,"description_en":65,"description_zh":66,"ai_summary_zh":66,"readme_en":67,"readme_zh":68,"quickstart_zh":69,"use_case_zh":70,"hero_image_url":71,"owner_login":72,"owner_name":73,"owner_avatar_url":74,"owner_bio":73,"owner_company":73,"owner_location":73,"owner_email":75,"owner_twitter":73,"owner_website":73,"owner_url":76,"languages":77,"stars":90,"forks":91,"last_commit_at":92,"license":93,"difficulty_score":94,"env_os":95,"env_gpu":96,"env_ram":96,"env_deps":97,"category_tags":103,"github_topics":104,"view_count":32,"oss_zip_url":73,"oss_zip_packed_at":73,"status":17,"created_at":122,"updated_at":123,"faqs":124,"releases":159},5176,"LongxingTan\u002FTime-series-prediction","Time-series-prediction","tfts: Time Series Deep Learning Models in TensorFlow","Time-series-prediction（简称 TFTS）是一个基于 TensorFlow 和 Keras 构建的易用型时间序列深度学习工具包。它旨在降低时间序列分析的技术门槛，帮助用户高效解决预测、分类及异常检测等核心问题。无论是处理股票走势、气象变化还是工业传感器数据，TFTS 都能提供从数据加载、模型构建到训练可视化的完整流程支持。\n\n该工具特别适合人工智能开发者、数据科学家以及科研人员使用。对于希望快速验证算法的研究者，或需要在工业场景中落地高精度模型的工程师，TFTS 都提供了极大的便利。其独特的技术亮点在于集成了大量经典与前沿的深度学习架构，如 Transformer、Informer、Autoformer、NBEATS 及 DLinear 等。通过简洁的\"AutoModel\"接口，用户仅需几行代码即可切换不同的 state-of-the-art（SOTA）模型进行实验，无需重复编写复杂的底层逻辑。此外，它还支持自定义数据输入，并兼容 Google Colab 和 Kaggle 等主流开发环境，让时间序列建模变得更加灵活高效。","[license-image]: https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FLicense-MIT-blue.svg\n[license-url]: https:\u002F\u002Fopensource.org\u002Flicenses\u002FMIT\n[pypi-image]: https:\u002F\u002Fbadge.fury.io\u002Fpy\u002Ftfts.svg\n[pypi-url]: https:\u002F\u002Fpypi.python.org\u002Fpypi\u002Ftfts\n[pepy-image]: https:\u002F\u002Fpepy.tech\u002Fbadge\u002Ftfts\u002Fmonth\n[pepy-url]: https:\u002F\u002Fpepy.tech\u002Fproject\u002Ftfts\n[build-image]: https:\u002F\u002Fgithub.com\u002FLongxingTan\u002FTime-series-prediction\u002Factions\u002Fworkflows\u002Ftest.yml\u002Fbadge.svg?branch=master\n[build-url]: https:\u002F\u002Fgithub.com\u002FLongxingTan\u002FTime-series-prediction\u002Factions\u002Fworkflows\u002Ftest.yml?query=branch%3Amaster\n[lint-image]: https:\u002F\u002Fgithub.com\u002FLongxingTan\u002FTime-series-prediction\u002Factions\u002Fworkflows\u002Flint.yml\u002Fbadge.svg?branch=master\n[lint-url]: https:\u002F\u002Fgithub.com\u002FLongxingTan\u002FTime-series-prediction\u002Factions\u002Fworkflows\u002Flint.yml?query=branch%3Amaster\n[docs-image]: https:\u002F\u002Freadthedocs.org\u002Fprojects\u002Ftime-series-prediction\u002Fbadge\u002F?version=latest\n[docs-url]: 
https:\u002F\u002Ftime-series-prediction.readthedocs.io\u002Fen\u002Flatest\u002F?version=latest\n[coverage-image]: https:\u002F\u002Fcodecov.io\u002Fgh\u002Flongxingtan\u002FTime-series-prediction\u002Fbranch\u002Fmaster\u002Fgraph\u002Fbadge.svg\n[coverage-url]: https:\u002F\u002Fcodecov.io\u002Fgithub\u002Flongxingtan\u002FTime-series-prediction?branch=master\n[contributing-image]: https:\u002F\u002Fimg.shields.io\u002Fbadge\u002Fcontributions-welcome-brightgreen.svg?style=flat\n[contributing-url]: https:\u002F\u002Fgithub.com\u002Flongxingtan\u002FTime-series-prediction\u002Fblob\u002Fmaster\u002FCONTRIBUTING.md\n[codeql-image]: https:\u002F\u002Fgithub.com\u002Flongxingtan\u002FTime-series-prediction\u002Factions\u002Fworkflows\u002Fcodeql-analysis.yml\u002Fbadge.svg\n[codeql-url]: https:\u002F\u002Fgithub.com\u002Flongxingtan\u002FTime-series-prediction\u002Factions\u002Fworkflows\u002Fcodeql-analysis.yml\n\n\u003Ch1 align=\"center\">\n\u003Cimg src=\".\u002Fdocs\u002Fsource\u002F_static\u002Flogo.svg\" width=\"400\" align=center\u002F>\n\u003C\u002Fh1>\u003Cbr>\n\n[![LICENSE][license-image]][license-url]\n[![PyPI Version][pypi-image]][pypi-url]\n[![Build Status][build-image]][build-url]\n[![Lint Status][lint-image]][lint-url]\n[![Docs Status][docs-image]][docs-url]\n[![Code Coverage][coverage-image]][coverage-url]\n[![Contributing][contributing-image]][contributing-url]\n\n**[Documentation](https:\u002F\u002Ftime-series-prediction.readthedocs.io)** | **[Tutorials](https:\u002F\u002Ftime-series-prediction.readthedocs.io\u002Fen\u002Flatest\u002Ftutorials.html)** | **[Release Notes](.\u002FCHANGELOG.md)** | **[中文](https:\u002F\u002Fgithub.com\u002FLongxingTan\u002FTime-series-prediction\u002Fblob\u002Fmaster\u002FREADME_CN.md)**\n\n**TFTS** (TensorFlow Time Series) is an easy-to-use time series package, supporting the classical and latest deep learning methods in TensorFlow or Keras.\n- Support sota models for time series tasks (prediction, classification, anomaly detection)\n- Provide advanced deep learning models for industry, research and competition\n- Documentation lives at [time-series-prediction.readthedocs.io](https:\u002F\u002Ftime-series-prediction.readthedocs.io)\n\n\n## Tutorial\n\n**Installation**\n\n- python >= 3.7\n- tensorflow >= 2.4\n\n```shell\npip install tfts\n```\n\n**Quick start**\n\n[![Open In Colab](https:\u002F\u002Fcolab.research.google.com\u002Fassets\u002Fcolab-badge.svg)](https:\u002F\u002Fcolab.research.google.com\u002Fdrive\u002F1LHdbrXmQGBSQuNTsbbM5-lAk5WENWF-Q?usp=sharing)\n[![Open in Kaggle](https:\u002F\u002Fkaggle.com\u002Fstatic\u002Fimages\u002Fopen-in-kaggle.svg)](https:\u002F\u002Fwww.kaggle.com\u002Fcode\u002Ftanlongxing\u002Ftensorflow-time-series-starter-tfts\u002Fnotebook)\n\n```python\nimport matplotlib.pyplot as plt\nimport tensorflow as tf\nimport tfts\nfrom tfts import AutoModel, AutoConfig, KerasTrainer\n\ntrain_length = 24\npredict_sequence_length = 8\n(x_train, y_train), (x_valid, y_valid) = tfts.get_data(\"sine\", train_length, predict_sequence_length, test_size=0.2)\n\nmodel_name_or_path = 'seq2seq'  # 'wavenet', 'transformer', 'rnn', 'tcn', 'bert', 'dlinear', 'nbeats', 'informer', 'autoformer'\nconfig = AutoConfig.for_model(model_name_or_path)\nmodel = AutoModel.from_config(config, predict_sequence_length=predict_sequence_length)\ntrainer = KerasTrainer(model, optimizer=tf.keras.optimizers.Adam(0.0007))\ntrainer.train((x_train, y_train), (x_valid, y_valid), epochs=30)\n\npred = 
trainer.predict(x_valid)\ntrainer.plot(history=x_valid, true=y_valid, pred=pred)\nplt.show()\n```\n\n**Prepare your own data**\n\nYou could train your own data by preparing 3D data as inputs, for both inputs and targets\n- option1 `np.ndarray`\n- option2 `tf.data.Dataset`\n- option3 `tf.keras.utils.Sequence`\n\nEncoder only model inputs\n\n```python\nimport numpy as np\nfrom tfts import AutoConfig, AutoModel, KerasTrainer\n\ntrain_length = 24\npredict_sequence_length = 8\nn_feature = 2\n\nx_train = np.random.rand(1, train_length, n_feature)  # inputs: (batch, train_length, feature)\ny_train = np.random.rand(1, predict_sequence_length, 1)  # target: (batch, predict_sequence_length, 1)\nx_valid = np.random.rand(1, train_length, n_feature)\ny_valid = np.random.rand(1, predict_sequence_length, 1)\n\nconfig = AutoConfig.for_model('rnn')\nmodel = AutoModel.from_config(config, predict_sequence_length=predict_sequence_length)\ntrainer = KerasTrainer(model)\ntrainer.train(train_dataset=(x_train, y_train), valid_dataset=(x_valid, y_valid), epochs=1)\n```\n\nEncoder-decoder model inputs\n\n```python\n# option1: np.ndarray\nimport numpy as np\nfrom tfts import AutoConfig, AutoModel, KerasTrainer\n\ntrain_length = 24\npredict_sequence_length = 8\nn_encoder_feature = 2\nn_decoder_feature = 3\n\nx_train = (\n    np.random.rand(1, train_length, 1),  # inputs: (batch, train_length, 1)\n    np.random.rand(1, train_length, n_encoder_feature),  # encoder_feature: (batch, train_length, encoder_features)\n    np.random.rand(1, predict_sequence_length, n_decoder_feature),  # decoder_feature: (batch, predict_sequence_length, decoder_features)\n)\ny_train = np.random.rand(1, predict_sequence_length, 1)  # target: (batch, predict_sequence_length, 1)\n\nx_valid = (\n    np.random.rand(1, train_length, 1),\n    np.random.rand(1, train_length, n_encoder_feature),\n    np.random.rand(1, predict_sequence_length, n_decoder_feature),\n)\ny_valid = np.random.rand(1, predict_sequence_length, 1)\n\nconfig = AutoConfig.for_model(\"seq2seq\")\nmodel = AutoModel.from_config(config, predict_sequence_length=predict_sequence_length)\ntrainer = KerasTrainer(model)\ntrainer.train((x_train, y_train), (x_valid, y_valid), epochs=1)\n```\n\n```python\n# option2: tf.data.Dataset\nimport numpy as np\nimport tensorflow as tf\nfrom tfts import AutoConfig, AutoModel, KerasTrainer\n\nclass FakeReader(object):\n    def __init__(self, predict_sequence_length):\n        train_length = 24\n        n_encoder_feature = 2\n        n_decoder_feature = 3\n        self.x = np.random.rand(15, train_length, 1)\n        self.encoder_feature = np.random.rand(15, train_length, n_encoder_feature)\n        self.decoder_feature = np.random.rand(15, predict_sequence_length, n_decoder_feature)\n        self.target = np.random.rand(15, predict_sequence_length, 1)\n\n    def __len__(self):\n        return len(self.x)\n\n    def __getitem__(self, idx):\n        return {\n            \"x\": self.x[idx],\n            \"encoder_feature\": self.encoder_feature[idx],\n            \"decoder_feature\": self.decoder_feature[idx],\n        }, self.target[idx]\n\n    def iter(self):\n        for i in range(len(self.x)):\n            yield self[i]\n\npredict_sequence_length = 10\ntrain_reader = FakeReader(predict_sequence_length=predict_sequence_length)\ntrain_loader = tf.data.Dataset.from_generator(\n    train_reader.iter,\n    ({\"x\": tf.float32, \"encoder_feature\": tf.float32, \"decoder_feature\": tf.float32}, tf.float32),\n)\ntrain_loader = 
train_loader.batch(batch_size=1)\nvalid_reader = FakeReader(predict_sequence_length=predict_sequence_length)\nvalid_loader = tf.data.Dataset.from_generator(\n    valid_reader.iter,\n    ({\"x\": tf.float32, \"encoder_feature\": tf.float32, \"decoder_feature\": tf.float32}, tf.float32),\n)\nvalid_loader = valid_loader.batch(batch_size=1)\n\nconfig = AutoConfig.for_model(\"seq2seq\")\nmodel = AutoModel.from_config(config, predict_sequence_length=predict_sequence_length)\ntrainer = KerasTrainer(model)\ntrainer.train(train_dataset=train_loader, valid_dataset=valid_loader, epochs=1)\n```\n\n**Prepare custom model config**\n\n```python\nfrom tfts import AutoModel, AutoConfig\n\nconfig = AutoConfig.for_model('rnn')\nprint(config)\nconfig.rnn_hidden_size = 128\n\nmodel = AutoModel.from_config(config, predict_sequence_length=7)\n```\n\n**Build your own model**\n\n\u003Cdetails>\u003Csummary> Full list of tfts AutoModel supported \u003C\u002Fsummary>\n\n- rnn\n- tcn\n- bert\n- nbeats\n- dlinear\n- seq2seq\n- wavenet\n- transformer\n- informer\n- autoformer\n\n\u003C\u002Fdetails>\n\nYou could build the custom model based on tfts, like\n- add custom-defined embeddings for categorical variables\n- add custom-defined head layers for classification or anomaly task\n\n```python\nimport tensorflow as tf\nfrom tensorflow.keras.layers import Input, Dense\nfrom tfts import AutoModel, AutoConfig\n\ntrain_length = 24\nnum_train_features = 15\npredict_sequence_length = 8\n\ndef build_model():\n    inputs = Input([train_length, num_train_features])\n    config = AutoConfig.for_model(\"seq2seq\")\n    backbone = AutoModel.from_config(config, predict_sequence_length=predict_sequence_length)\n    outputs = backbone(inputs)\n    outputs = Dense(1, activation=\"sigmoid\")(outputs)\n    model = tf.keras.Model(inputs=inputs, outputs=outputs)\n    model.compile(loss=\"mse\", optimizer=\"rmsprop\")\n    return model\n```\n\n\n## Examples\n\n- [TFTS-Bert](https:\u002F\u002Fgithub.com\u002FLongxingTan\u002FKDDCup2022-Baidu) wins the **3rd place** in KDD Cup 2022-wind power forecasting\n- [TFTS-Seq2seq](https:\u002F\u002Fgithub.com\u002FLongxingTan\u002FData-competitions\u002Ftree\u002Fmaster\u002Ftianchi-enso-prediction) wins the **4th place** in Tianchi-ENSO index prediction 2021\n- [More examples ...](.\u002Fexamples)\n\n\n\u003C!-- ### Performance\n\n[Time series prediction](.\u002Fexamples\u002Frun_prediction_simple.py) performance is evaluated by tfts implementation, not official\n\n| Performance | [web traffic\u003Csup>mape\u003C\u002Fsup>]() | [grocery sales\u003Csup>wrmse\u003C\u002Fsup>](https:\u002F\u002Fwww.kaggle.com\u002Fcompetitions\u002Ffavorita-grocery-sales-forecasting\u002Fdata) | [m5 sales\u003Csup>val\u003C\u002Fsup>]() | [ventilator\u003Csup>val\u003C\u002Fsup>]() |\n| :-- | :-: | :-: | :-: | :-: |\n| [RNN]() | 672 | 47.7% |52.6% | 61.4% |\n| [DeepAR]() | 672 | 47.7% |52.6% | 61.4% |\n| [Seq2seq]() | 672 | 47.7% |52.6% | 61.4% |\n| [TCN]() | 672 | 47.7% |52.6% | 61.4% |\n| [WaveNet]() | 672 | 47.7% |52.6% | 61.4% |\n| [Bert]() | 672 | 47.7% |52.6% | 61.4% |\n| [Transformer]() | 672 | 47.7% |52.6% | 61.4% |\n| [Temporal-fusion-transformer]() | 672 | 47.7% |52.6% | 61.4% |\n| [Informer]() | 672 | 47.7% |52.6% | 61.4% |\n| [AutoFormer]() | 672 | 47.7% |52.6% | 61.4% |\n| [N-beats]() | 672 | 47.7% |52.6% | 61.4% |\n| [U-Net]() | 672 | 47.7% |52.6% | 61.4% |\n\n### More demos\n- [More complex prediction task](.\u002Fnotebooks)\n- [Time series classification](.\u002Fexamples\u002Frun_classification.py)\n- 
[Anomaly detection](.\u002Fexamples\u002Frun_anomaly.py)\n- [Uncertainty prediction](examples\u002Frun_uncertainty.py)\n- [Parameters tuning by optuna](examples\u002Frun_optuna_tune.py)\n- [Serving by tf-serving](.\u002Fexamples) -->\n\n\n## Citation\n\nIf you find tfts project useful in your research, please consider cite:\n\n```\n@misc{tfts2020,\n  author = {Longxing Tan},\n  title = {TFTS: Time series prediction},\n  year = {2020},\n  publisher = {GitHub},\n  journal = {GitHub repository},\n  howpublished = {\\url{https:\u002F\u002Fgithub.com\u002Flongxingtan\u002Ftime-series-prediction}},\n}\n```\n","[license-image]: https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FLicense-MIT-blue.svg\n[license-url]: https:\u002F\u002Fopensource.org\u002Flicenses\u002FMIT\n[pypi-image]: https:\u002F\u002Fbadge.fury.io\u002Fpy\u002Ftfts.svg\n[pypi-url]: https:\u002F\u002Fpypi.python.org\u002Fpypi\u002Ftfts\n[pepy-image]: https:\u002F\u002Fpepy.tech\u002Fbadge\u002Ftfts\u002Fmonth\n[pepy-url]: https:\u002F\u002Fpepy.tech\u002Fproject\u002Ftfts\n[build-image]: https:\u002F\u002Fgithub.com\u002FLongxingTan\u002FTime-series-prediction\u002Factions\u002Fworkflows\u002Ftest.yml\u002Fbadge.svg?branch=master\n[build-url]: https:\u002F\u002Fgithub.com\u002FLongxingTan\u002FTime-series-prediction\u002Factions\u002Fworkflows\u002Ftest.yml?query=branch%3Amaster\n[lint-image]: https:\u002F\u002Fgithub.com\u002FLongxingTan\u002FTime-series-prediction\u002Factions\u002Fworkflows\u002Flint.yml\u002Fbadge.svg?branch=master\n[lint-url]: https:\u002F\u002Fgithub.com\u002FLongxingTan\u002FTime-series-prediction\u002Factions\u002Fworkflows\u002Flint.yml?query=branch%3Amaster\n[docs-image]: https:\u002F\u002Freadthedocs.org\u002Fprojects\u002Ftime-series-prediction\u002Fbadge\u002F?version=latest\n[docs-url]: https:\u002F\u002Ftime-series-prediction.readthedocs.io\u002Fen\u002Flatest\u002F?version=latest\n[coverage-image]: https:\u002F\u002Fcodecov.io\u002Fgh\u002Flongxingtan\u002FTime-series-prediction\u002Fbranch\u002Fmaster\u002Fgraph\u002Fbadge.svg\n[coverage-url]: https:\u002F\u002Fcodecov.io\u002Fgithub\u002Flongxingtan\u002FTime-series-prediction?branch=master\n[contributing-image]: https:\u002F\u002Fimg.shields.io\u002Fbadge\u002Fcontributions-welcome-brightgreen.svg?style=flat\n[contributing-url]: https:\u002F\u002Fgithub.com\u002Flongxingtan\u002FTime-series-prediction\u002Fblob\u002Fmaster\u002FCONTRIBUTING.md\n[codeql-image]: https:\u002F\u002Fgithub.com\u002Flongxingtan\u002FTime-series-prediction\u002Factions\u002Fworkflows\u002Fcodeql-analysis.yml\u002Fbadge.svg\n[codeql-url]: https:\u002F\u002Fgithub.com\u002Flongxingtan\u002FTime-series-prediction\u002Factions\u002Fworkflows\u002Fcodeql-analysis.yml\n\n\u003Ch1 align=\"center\">\n\u003Cimg src=\".\u002Fdocs\u002Fsource\u002F_static\u002Flogo.svg\" width=\"400\" align=center\u002F>\n\u003C\u002Fh1>\u003Cbr>\n\n[![许可证][license-image]][license-url]\n[![PyPI版本][pypi-image]][pypi-url]\n[![构建状态][build-image]][build-url]\n[![代码风格检查状态][lint-image]][lint-url]\n[![文档状态][docs-image]][docs-url]\n[![代码覆盖率][coverage-image]][coverage-url]\n[![贡献指南][contributing-image]][contributing-url]\n\n**[文档](https:\u002F\u002Ftime-series-prediction.readthedocs.io)** | **[教程](https:\u002F\u002Ftime-series-prediction.readthedocs.io\u002Fen\u002Flatest\u002Ftutorials.html)** | **[发布说明](.\u002FCHANGELOG.md)** | **[中文](https:\u002F\u002Fgithub.com\u002FLongxingTan\u002FTime-series-prediction\u002Fblob\u002Fmaster\u002FREADME_CN.md)**\n\n**TFTS**（TensorFlow 时间序列）是一个易于使用的时序包，支持 TensorFlow 或 
Keras 中的经典及最新的深度学习方法。\n- 支持用于时间序列任务（预测、分类、异常检测）的最先进模型\n- 提供适用于工业、研究和竞赛的高级深度学习模型\n- 文档位于 [time-series-prediction.readthedocs.io](https:\u002F\u002Ftime-series-prediction.readthedocs.io)\n\n\n## 教程\n\n**安装**\n\n- python >= 3.7\n- tensorflow >= 2.4\n\n```shell\npip install tfts\n```\n\n**快速入门**\n\n[![在 Colab 中打开](https:\u002F\u002Fcolab.research.google.com\u002Fassets\u002Fcolab-badge.svg)](https:\u002F\u002Fcolab.research.google.com\u002Fdrive\u002F1LHdbrXmQGBSQuNTsbbM5-lAk5WENWF-Q?usp=sharing)\n[![在 Kaggle 中打开](https:\u002F\u002Fkaggle.com\u002Fstatic\u002Fimages\u002Fopen-in-kaggle.svg)](https:\u002F\u002Fwww.kaggle.com\u002Fcode\u002Ftanlongxing\u002Ftensorflow-time-series-starter-tfts\u002Fnotebook)\n\n```python\nimport matplotlib.pyplot as plt\nimport tensorflow as tf\nimport tfts\nfrom tfts import AutoModel, AutoConfig, KerasTrainer\n\ntrain_length = 24\npredict_sequence_length = 8\n(x_train, y_train), (x_valid, y_valid) = tfts.get_data(\"sine\", train_length, predict_sequence_length, test_size=0.2)\n\nmodel_name_or_path = 'seq2seq'  # 'wavenet', 'transformer', 'rnn', 'tcn', 'bert', 'dlinear', 'nbeats', 'informer', 'autoformer'\nconfig = AutoConfig.for_model(model_name_or_path)\nmodel = AutoModel.from_config(config, predict_sequence_length=predict_sequence_length)\ntrainer = KerasTrainer(model, optimizer=tf.keras.optimizers.Adam(0.0007))\ntrainer.train((x_train, y_train), (x_valid, y_valid), epochs=30)\n\npred = trainer.predict(x_valid)\ntrainer.plot(history=x_valid, true=y_valid, pred=pred)\nplt.show()\n```\n\n**准备您自己的数据**\n\n您可以准备 3D 数据作为输入和目标来训练自己的数据：\n- 方案1 `np.ndarray`\n- 方案2 `tf.data.Dataset`\n- 方案3 `tf.keras.utils.Sequence`\n\n仅编码器模型的输入\n\n```python\nimport numpy as np\nfrom tfts import AutoConfig, AutoModel, KerasTrainer\n\ntrain_length = 24\npredict_sequence_length = 8\nn_feature = 2\n\nx_train = np.random.rand(1, train_length, n_feature)  # 输入：(batch, train_length, feature)\ny_train = np.random.rand(1, predict_sequence_length, 1)  # 目标：(batch, predict_sequence_length, 1)\nx_valid = np.random.rand(1, train_length, n_feature)\ny_valid = np.random.rand(1, predict_sequence_length, 1)\n\nconfig = AutoConfig.for_model('rnn')\nmodel = AutoModel.from_config(config, predict_sequence_length=predict_sequence_length)\ntrainer = KerasTrainer(model)\ntrainer.train(train_dataset=(x_train, y_train), valid_dataset=(x_valid, y_valid), epochs=1)\n```\n\n编码器-解码器模型的输入\n\n```python\n# 方案1：np.ndarray\nimport numpy as np\nfrom tfts import AutoConfig, AutoModel, KerasTrainer\n\ntrain_length = 24\npredict_sequence_length = 8\nn_encoder_feature = 2\nn_decoder_feature = 3\n\nx_train = (\n    np.random.rand(1, train_length, 1),  # 输入：(batch, train_length, 1)\n    np.random.rand(1, train_length, n_encoder_feature),  # 编码器特征：(batch, train_length, 编码器特征)\n    np.random.rand(1, predict_sequence_length, n_decoder_feature),  # 解码器特征：(batch, predict_sequence_length, 解码器特征)\n)\ny_train = np.random.rand(1, predict_sequence_length, 1)  # 目标：(batch, predict_sequence_length, 1)\n\nx_valid = (\n    np.random.rand(1, train_length, 1),\n    np.random.rand(1, train_length, n_encoder_feature),\n    np.random.rand(1, predict_sequence_length, n_decoder_feature),\n)\ny_valid = np.random.rand(1, predict_sequence_length, 1)\n\nconfig = AutoConfig.for_model(\"seq2seq\")\nmodel = AutoModel.from_config(config, predict_sequence_length=predict_sequence_length)\ntrainer = KerasTrainer(model)\ntrainer.train((x_train, y_train), (x_valid, y_valid), epochs=1)\n```\n\n```python\n# 方案2：tf.data.Dataset\nimport numpy as np\nimport tensorflow as tf\nfrom tfts 
import AutoConfig, AutoModel, KerasTrainer\n\nclass FakeReader(object):\n    def __init__(self, predict_sequence_length):\n        train_length = 24\n        n_encoder_feature = 2\n        n_decoder_feature = 3\n        self.x = np.random.rand(15, train_length, 1)\n        self.encoder_feature = np.random.rand(15, train_length, n_encoder_feature)\n        self.decoder_feature = np.random.rand(15, predict_sequence_length, n_decoder_feature)\n        self.target = np.random.rand(15, predict_sequence_length, 1)\n\n    def __len__(self):\n        return len(self.x)\n\n    def __getitem__(self, idx):\n        return {\n            \"x\": self.x[idx],\n            \"encoder_feature\": self.encoder_feature[idx],\n            \"decoder_feature\": self.decoder_feature[idx],\n        }, self.target[idx]\n\n    def iter(self):\n        for i in range(len(self.x)):\n            yield self[i]\n\npredict_sequence_length = 10\ntrain_reader = FakeReader(predict_sequence_length=predict_sequence_length)\ntrain_loader = tf.data.Dataset.from_generator(\n    train_reader.iter,\n    ({\"x\": tf.float32, \"encoder_feature\": tf.float32, \"decoder_feature\": tf.float32}, tf.float32),\n)\ntrain_loader = train_loader.batch(batch_size=1)\nvalid_reader = FakeReader(predict_sequence_length=predict_sequence_length)\nvalid_loader = tf.data.Dataset.from_generator(\n    valid_reader.iter,\n    ({\"x\": tf.float32, \"encoder_feature\": tf.float32, \"decoder_feature\": tf.float32}, tf.float32),\n)\nvalid_loader = valid_loader.batch(batch_size=1)\n\nconfig = AutoConfig.for_model(\"seq2seq\")\nmodel = AutoModel.from_config(config, predict_sequence_length=predict_sequence_length)\ntrainer = KerasTrainer(model)\ntrainer.train(train_dataset=train_loader, valid_dataset=valid_loader, epochs=1)\n```\n\n**准备自定义模型配置**\n\n```python\nfrom tfts import AutoModel, AutoConfig\n\nconfig = AutoConfig.for_model('rnn')\nprint(config)\nconfig.rnn_hidden_size = 128\n\nmodel = AutoModel.from_config(config, predict_sequence_length=7)\n```\n\n**构建您自己的模型**\n\n\u003Cdetails>\u003Csummary> tfts AutoModel 支持的完整列表 \u003C\u002Fsummary>\n\n- rnn\n- tcn\n- bert\n- nbeats\n- dlinear\n- seq2seq\n- wavenet\n- transformer\n- informer\n- autoformer\n\n\u003C\u002Fdetails>\n\n您可以基于 tfts 构建自定义模型，例如：\n- 为分类变量添加自定义嵌入层\n- 为分类或异常检测任务添加自定义头部层\n\n```python\nimport tensorflow as tf\nfrom tensorflow.keras.layers import Input, Dense\nfrom tfts import AutoModel, AutoConfig\n\ntrain_length = 24\nnum_train_features = 15\npredict_sequence_length = 8\n\ndef build_model():\n    inputs = Input([train_length, num_train_features])\n    config = AutoConfig.for_model(\"seq2seq\")\n    backbone = AutoModel.from_config(config, predict_sequence_length=predict_sequence_length)\n    outputs = backbone(inputs)\n    outputs = Dense(1, activation=\"sigmoid\")(outputs)\n    model = tf.keras.Model(inputs=inputs, outputs=outputs)\n    model.compile(loss=\"mse\", optimizer=\"rmsprop\")\n    return model\n```\n\n\n## 示例\n\n- [TFTS-Bert](https:\u002F\u002Fgithub.com\u002FLongxingTan\u002FKDDCup2022-Baidu) 在 KDD 杯 2022 年风电功率预测比赛中荣获 **第三名**\n- [TFTS-Seq2seq](https:\u002F\u002Fgithub.com\u002FLongxingTan\u002FData-competitions\u002Ftree\u002Fmaster\u002Ftianchi-enso-prediction) 在天池 ENSO 指数预测 2021 年比赛中荣获 **第四名**\n- [更多示例 ...](.\u002Fexamples)\n\n\n\u003C!-- ### 性能\n\n[时间序列预测](.\u002Fexamples\u002Frun_prediction_simple.py) 的性能由 tfts 实现评估，而非官方评估。\n\n| 性能 | [网络流量\u003Csup>mape\u003C\u002Fsup>]() | 
[杂货销售\u003Csup>wrmse\u003C\u002Fsup>](https:\u002F\u002Fwww.kaggle.com\u002Fcompetitions\u002Ffavorita-grocery-sales-forecasting\u002Fdata) | [m5 销售\u003Csup>val\u003C\u002Fsup>]() | [呼吸机\u003Csup>val\u003C\u002Fsup>]() |\n| :-- | :-: | :-: | :-: | :-: |\n| [RNN]() | 672 | 47.7% | 52.6% | 61.4% |\n| [DeepAR]() | 672 | 47.7% | 52.6% | 61.4% |\n| [Seq2seq]() | 672 | 47.7% | 52.6% | 61.4% |\n| [TCN]() | 672 | 47.7% | 52.6% | 61.4% |\n| [WaveNet]() | 672 | 47.7% | 52.6% | 61.4% |\n| [Bert]() | 672 | 47.7% | 52.6% | 61.4% |\n| [Transformer]() | 672 | 47.7% | 52.6% | 61.4% |\n| [Temporal-fusion-transformer]() | 672 | 47.7% | 52.6% | 61.4% |\n| [Informer]() | 672 | 47.7% | 52.6% | 61.4% |\n| [AutoFormer]() | 672 | 47.7% | 52.6% | 61.4% |\n| [N-beats]() | 672 | 47.7% | 52.6% | 61.4% |\n| [U-Net]() | 672 | 47.7% | 52.6% | 61.4% |\n\n### 更多演示\n- [更复杂的预测任务](.\u002Fnotebooks)\n- [时间序列分类](.\u002Fexamples\u002Frun_classification.py)\n- [异常检测](.\u002Fexamples\u002Frun_anomaly.py)\n- [不确定性预测](examples\u002Frun_uncertainty.py)\n- [使用 optuna 调参](examples\u002Frun_optuna_tune.py)\n- [使用 tf-serving 提供服务](.\u002Fexamples) -->\n\n\n## 引用\n\n如果您在研究中发现 tfts 项目很有用，请考虑引用：\n\n```\n@misc{tfts2020,\n  author = {Longxing Tan},\n  title = {TFTS: 时间序列预测},\n  year = {2020},\n  publisher = {GitHub},\n  journal = {GitHub 仓库},\n  howpublished = {\\url{https:\u002F\u002Fgithub.com\u002Flongxingtan\u002Ftime-series-prediction}},\n}\n```","# Time-series-prediction (TFTS) 快速上手指南\n\nTFTS (TensorFlow Time Series) 是一个易于使用的时间序列预测包，基于 TensorFlow\u002FKeras 构建，支持多种经典及最新的深度学习模型（如 Transformer, Informer, N-BEATS 等），适用于工业界、科研及竞赛场景。\n\n## 1. 环境准备\n\n在开始之前，请确保您的开发环境满足以下要求：\n\n*   **操作系统**：Linux, macOS 或 Windows\n*   **Python 版本**：>= 3.7\n*   **核心依赖**：TensorFlow >= 2.4\n\n## 2. 安装步骤\n\n推荐使用 pip 进行安装。国内用户建议使用清华源或阿里源以加速下载。\n\n**通用安装命令：**\n```shell\npip install tfts\n```\n\n**国内加速安装（推荐）：**\n```shell\npip install tfts -i https:\u002F\u002Fpypi.tuna.tsinghua.edu.cn\u002Fsimple\n```\n\n## 3. 基本使用\n\n以下是最简单的快速入门示例，演示了如何加载内置数据、配置模型、训练并进行预测可视化。\n\n### 快速开始示例\n\n```python\nimport matplotlib.pyplot as plt\nimport tensorflow as tf\nimport tfts\nfrom tfts import AutoModel, AutoConfig, KerasTrainer\n\n# 1. 准备数据 (使用内置的正弦波数据)\ntrain_length = 24\npredict_sequence_length = 8\n(x_train, y_train), (x_valid, y_valid) = tfts.get_data(\"sine\", train_length, predict_sequence_length, test_size=0.2)\n\n# 2. 配置模型\n# 支持的模型包括：'seq2seq', 'wavenet', 'transformer', 'rnn', 'tcn', 'bert', 'dlinear', 'nbeats', 'informer', 'autoformer'\nmodel_name_or_path = 'seq2seq' \nconfig = AutoConfig.for_model(model_name_or_path)\nmodel = AutoModel.from_config(config, predict_sequence_length=predict_sequence_length)\n\n# 3. 初始化训练器并训练\ntrainer = KerasTrainer(model, optimizer=tf.keras.optimizers.Adam(0.0007))\ntrainer.train((x_train, y_train), (x_valid, y_valid), epochs=30)\n\n# 4. 
预测与可视化\npred = trainer.predict(x_valid)\ntrainer.plot(history=x_valid, true=y_valid, pred=pred)\nplt.show()\n```\n\n### 使用自定义数据\n\n如果您需要使用自己的数据进行训练，只需将数据整理为 3D 数组格式 `(batch, time_steps, features)` 即可。\n\n**仅编码器模型输入示例：**\n```python\nimport numpy as np\nfrom tfts import AutoConfig, AutoModel, KerasTrainer\n\ntrain_length = 24\npredict_sequence_length = 8\nn_feature = 2\n\n# 构造随机数据作为示例\nx_train = np.random.rand(100, train_length, n_feature)  # 输入: (batch, 时间步，特征数)\ny_train = np.random.rand(100, predict_sequence_length, 1)  # 目标: (batch, 预测长度，1)\nx_valid = np.random.rand(20, train_length, n_feature)\ny_valid = np.random.rand(20, predict_sequence_length, 1)\n\n# 构建并训练\nconfig = AutoConfig.for_model('rnn')\nmodel = AutoModel.from_config(config, predict_sequence_length=predict_sequence_length)\ntrainer = KerasTrainer(model)\ntrainer.train(train_dataset=(x_train, y_train), valid_dataset=(x_valid, y_valid), epochs=10)\n```\n\n更多高级用法（如自定义模型结构、Encoder-Decoder 架构数据准备等）请参考官方文档。","某大型连锁零售企业的供应链团队正面临节假日销量预测难题，急需优化库存管理以避免缺货或积压。\n\n### 没有 Time-series-prediction 时\n- **模型选型困难**：面对 ARIMA、LSTM、Transformer 等数十种算法，数据科学家需手动逐个编写底层代码进行验证，耗时数周仍难确定最优方案。\n- **开发门槛高**：团队缺乏深度学习专家，从零搭建复杂的时序网络（如 Informer 或 NBeats）极易出错，且难以在 TensorFlow 中高效调试。\n- **迭代周期漫长**：每次调整超参数或更换模型结构都需要重写大量训练逻辑，导致无法快速响应市场变化进行多轮实验。\n- **可视化缺失**：预测结果仅能输出枯燥的数字表格，缺乏直观的趋势对比图，业务部门难以理解并信任预测数据。\n\n### 使用 Time-series-prediction 后\n- **一键自动建模**：利用 `AutoModel` 接口，只需一行代码即可自动加载并切换 Seq2Seq、Transformer 等 SOTA 模型，半天内完成全量算法筛选。\n- **低代码高效开发**：内置标准化的 `KerasTrainer` 和预配置模板，非深度学习背景的工程师也能轻松调用工业级模型，大幅降低试错成本。\n- **敏捷实验迭代**：通过 `AutoConfig` 灵活调整序列长度与参数，将原本数天的模型重构工作缩短至分钟级，迅速锁定最佳预测策略。\n- **直观结果呈现**：调用内置 `plot` 方法即可生成真实值与预测值的对比趋势图，让业务方一目了然地看到节假日销量波动的精准捕捉。\n\nTime-series-prediction 将复杂的时序深度学习工程化，让企业能以最低成本快速落地高精度的销量预测，显著降低库存成本并提升周转效率。","https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FLongxingTan_Time-series-prediction_aafc1f31.png","LongxingTan",null,"https:\u002F\u002Foss.gittoolsai.com\u002Favatars\u002FLongxingTan_f13d79d5.jpg","tanlongxing888@163.com","https:\u002F\u002Fgithub.com\u002FLongxingTan",[78,82,86],{"name":79,"color":80,"percentage":81},"Python","#3572A5",99.8,{"name":83,"color":84,"percentage":85},"Makefile","#427819",0.1,{"name":87,"color":88,"percentage":89},"Dockerfile","#384d54",0,887,171,"2026-03-30T13:15:20","MIT",1,"","未说明",{"notes":98,"python":99,"dependencies":100},"该工具基于 TensorFlow\u002FKeras 构建，支持多种时间序列模型（如 RNN, Transformer, Informer 等）。数据输入支持 numpy 数组、tf.data.Dataset 或 Keras Sequence 格式。官方提供了 Colab 和 Kaggle 的快速启动示例，表明其对云端 GPU 环境兼容，但本地运行未强制要求 GPU。",">=3.7",[101,102],"tensorflow>=2.4","matplotlib",[35,14],[105,106,107,108,109,110,111,112,113,114,115,116,117,118,119,120,121],"tensorflow","seq2seq","wavenet","tf2","time-series","transformer","deep-learning","machine-learning","forecasting","prediction","time-series-forecasting","timeseries","neural-network","time-series-forecast","keras-forecasting","keras-prediction","keras-time-series","2026-03-27T02:49:30.150509","2026-04-08T02:01:16.629989",[125,130,135,140,145,150,154],{"id":126,"question_zh":127,"answer_zh":128,"source_url":129},23463,"如何在只有一张表的情况下划分训练集和验证集，并正确设置 valid_dataset？","在使用 seq2seq 模型时，如果直接传入形状不匹配的数据会报错（如 Incompatible shapes）。确保输入数据的维度正确扩展，例如使用 np.expand_dims。对于 valid_dataset，不能简单复用训练数据或测试数据导致长度不一致。建议检查模型配置：\n1. 查看默认配置：AutoConfig('rnn').print_config()\n2. 自定义参数创建模型：model = AutoModel('rnn', predict_length=1, custom_model_params={...})\n3. 
训练前固定随机种子以保证复现性：\n   def set_seed(seed):\n       random.seed(seed)\n       np.random.seed(seed)\n       os.environ['PYTHONHASHSEED'] = str(seed)\n       tf.random.set_seed(seed)\n4. 如果进行了归一化，需调整学习率等参数，避免学习率过大导致模型在极值附近震荡。","https:\u002F\u002Fgithub.com\u002FLongxingTan\u002FTime-series-prediction\u002Fissues\u002F25",{"id":131,"question_zh":132,"answer_zh":133,"source_url":134},23464,"遇到 FileNotFoundError: No such file or directory: '..\u002Fmodels\u002Fscaler.pkl' 错误怎么办？","这通常是由于相对路径配置问题导致的。请检查代码中的目录路径设置。\n解决方案：\n在 \u002Fexample\u002Fdata\u002Fread_data.py 文件的第 102 行左右，找到 get_examples 函数，确认 model_dir 参数是否正确指向权重目录。例如：\ndef get_examples(self, data_dir, sample=1, start_date=None, plot=False, model_dir='..\u002Fweights'):\n确保运行脚本时的当前工作目录结构符合相对路径预期，或者尝试在运行前切换到 examples 目录（cd examples）。","https:\u002F\u002Fgithub.com\u002FLongxingTan\u002FTime-series-prediction\u002Fissues\u002F7",{"id":136,"question_zh":137,"answer_zh":138,"source_url":139},23465,"使用 seq2seq 模型时报错 TypeError: tf__call() missing 1 required positional argument: 'v' 如何解决？","这是由于 Attention 层的 call 函数签名与 TensorFlow 版本不兼容导致的。需要修改 tfts\u002Flayers\u002Fattention_layer.py 文件中的 Attention 类。\n具体修改代码如下：\ndef call(self, q, v, k=None, mask=None):\n    if k is None:\n        k = v\n    q = self.dense_q(q)\n    k = self.dense_k(k)\n    v = self.dense_v(v)\n    # 后续多头注意力计算逻辑保持不变...\n    q_ = tf.concat(tf.split(q, self.num_heads, axis=2), axis=0)\n    k_ = tf.concat(tf.split(k, self.num_heads, axis=2), axis=0)\n    v_ = tf.concat(tf.split(v, self.num_heads, axis=2), axis=0)\n    score = tf.linalg.matmul(q_, k_, transpose_b=True)\n    score \u002F= tf.cast(tf.shape(q_)[-1], tf.float32) ** 0.5\n    if mask is not None:\n        score = score * tf.cast(mask, tf.float32)\n    score = tf.nn.softmax(score)\n    score = self.dropout(score)\n    outputs = tf.linalg.matmul(score, v_)\n    outputs = tf.concat(tf.split(outputs, self.num_heads, axis=0), axis=2)\n    return outputs","https:\u002F\u002Fgithub.com\u002FLongxingTan\u002FTime-series-prediction\u002Fissues\u002F12",{"id":141,"question_zh":142,"answer_zh":143,"source_url":144},23466,"运行时遇到 ModuleNotFoundError: No module named 'data.load_data' 或路径导入错误怎么办？","这通常是因为在非标准目录下运行脚本导致相对导入失败（常见于 Google Colab 环境）。\n解决方案有两种：\n1. 切换目录：在运行脚本前，先执行 cd examples 进入示例目录，然后再运行 python run_train.py。\n2. 
重新克隆仓库：维护者已修改为相对路径导入，尝试重新 git clone 最新代码。\n如果是 Colab 用户，建议手动打开脚本，将导入路径调整为绝对路径或根据 Colab 的文件结构调整 import 语句。","https:\u002F\u002Fgithub.com\u002FLongxingTan\u002FTime-series-prediction\u002Fissues\u002F8",{"id":146,"question_zh":147,"answer_zh":148,"source_url":149},23467,"Google Colab 中 pandas 版本冲突（tfts 要求 pandas\u003C2.0.0 但 Colab 需要 2.0.3+）如何解决？","这是由于旧版本的 tfts 库对 pandas 版本限制过严导致的。\n解决方案：升级 tfts 库到最新版本以兼容新的 pandas 版本。\n执行命令：pip install tfts==0.0.11\n新版本已放宽对 pandas 的版本限制，可解决此依赖冲突问题。","https:\u002F\u002Fgithub.com\u002FLongxingTan\u002FTime-series-prediction\u002Fissues\u002F51",{"id":151,"question_zh":152,"answer_zh":153,"source_url":129},23468,"custom_model_params 有哪些可用参数？如何自定义模型参数？","可以通过 AutoConfig 查看特定模型的默认配置参数。\n查看方法：\nfrom tfts import AutoConfig, AutoModel\nAutoConfig('rnn').print_config()\n\n自定义模型时，通过 custom_model_params 字典传递参数。目前已知的参数包括：\ncustom_model_params = {\"rnn_size\": 128, \"dense_size\": 128}\n实例化模型示例：\nmodel = AutoModel('rnn', predict_length=1, custom_model_params={\"rnn_size\": 256, \"dense_size\": 256})\n注意：部分细粒度参数的修改可能尚未经过全面测试，使用时需留意潜在报错。",{"id":155,"question_zh":156,"answer_zh":157,"source_url":158},23469,"执行 run_train.py 时提示 No module named 'tfts.dataset' 是什么原因？","这通常是因为安装不完整或代码版本不匹配导致的。tfts 包的 __init__.py 文件中导入了 .dataset 模块，但如果本地 tfts 目录下缺少 dataset 文件夹，就会报错。\n解决方法：\n1. 确保从官方源完整安装了 tfts 库（pip install -U tfts）。\n2. 如果是从源码运行，请确认是否漏掉了某些子模块文件夹，或者尝试重新克隆仓库以确保文件完整性。\n3. 检查是否混用了不同版本的代码和安装包。","https:\u002F\u002Fgithub.com\u002FLongxingTan\u002FTime-series-prediction\u002Fissues\u002F10",[160,164,168,172,176],{"id":161,"version":162,"summary_zh":73,"released_at":163},144964,"v0.0.19","2025-07-27T04:23:31",{"id":165,"version":166,"summary_zh":73,"released_at":167},144965,"v0.0.18","2025-07-18T01:52:06",{"id":169,"version":170,"summary_zh":73,"released_at":171},144966,"v0.0.17","2025-05-18T02:20:07",{"id":173,"version":174,"summary_zh":73,"released_at":175},144967,"v0.0.16","2025-05-08T05:06:08",{"id":177,"version":178,"summary_zh":179,"released_at":180},144968,"v0.0.15","- 修复训练器的分发镜像","2025-04-27T17:20:46"]