[{"data":1,"prerenderedAt":-1},["ShallowReactive",2],{"similar-WillianFuks--tfcausalimpact":3,"tool-WillianFuks--tfcausalimpact":61},[4,18,26,36,44,53],{"id":5,"name":6,"github_repo":7,"description_zh":8,"stars":9,"difficulty_score":10,"last_commit_at":11,"category_tags":12,"status":17},4358,"openclaw","openclaw\u002Fopenclaw","OpenClaw 是一款专为个人打造的本地化 AI 助手，旨在让你在自己的设备上拥有完全可控的智能伙伴。它打破了传统 AI 助手局限于特定网页或应用的束缚，能够直接接入你日常使用的各类通讯渠道，包括微信、WhatsApp、Telegram、Discord、iMessage 等数十种平台。无论你在哪个聊天软件中发送消息，OpenClaw 都能即时响应，甚至支持在 macOS、iOS 和 Android 设备上进行语音交互，并提供实时的画布渲染功能供你操控。\n\n这款工具主要解决了用户对数据隐私、响应速度以及“始终在线”体验的需求。通过将 AI 部署在本地，用户无需依赖云端服务即可享受快速、私密的智能辅助，真正实现了“你的数据，你做主”。其独特的技术亮点在于强大的网关架构，将控制平面与核心助手分离，确保跨平台通信的流畅性与扩展性。\n\nOpenClaw 非常适合希望构建个性化工作流的技术爱好者、开发者，以及注重隐私保护且不愿被单一生态绑定的普通用户。只要具备基础的终端操作能力（支持 macOS、Linux 及 Windows WSL2），即可通过简单的命令行引导完成部署。如果你渴望拥有一个懂你",349277,3,"2026-04-06T06:32:30",[13,14,15,16],"Agent","开发框架","图像","数据工具","ready",{"id":19,"name":20,"github_repo":21,"description_zh":22,"stars":23,"difficulty_score":10,"last_commit_at":24,"category_tags":25,"status":17},3808,"stable-diffusion-webui","AUTOMATIC1111\u002Fstable-diffusion-webui","stable-diffusion-webui 是一个基于 Gradio 构建的网页版操作界面，旨在让用户能够轻松地在本地运行和使用强大的 Stable Diffusion 图像生成模型。它解决了原始模型依赖命令行、操作门槛高且功能分散的痛点，将复杂的 AI 绘图流程整合进一个直观易用的图形化平台。\n\n无论是希望快速上手的普通创作者、需要精细控制画面细节的设计师，还是想要深入探索模型潜力的开发者与研究人员，都能从中获益。其核心亮点在于极高的功能丰富度：不仅支持文生图、图生图、局部重绘（Inpainting）和外绘（Outpainting）等基础模式，还独创了注意力机制调整、提示词矩阵、负向提示词以及“高清修复”等高级功能。此外，它内置了 GFPGAN 和 CodeFormer 等人脸修复工具，支持多种神经网络放大算法，并允许用户通过插件系统无限扩展能力。即使是显存有限的设备，stable-diffusion-webui 也提供了相应的优化选项，让高质量的 AI 艺术创作变得触手可及。",162132,"2026-04-05T11:01:52",[14,15,13],{"id":27,"name":28,"github_repo":29,"description_zh":30,"stars":31,"difficulty_score":32,"last_commit_at":33,"category_tags":34,"status":17},1381,"everything-claude-code","affaan-m\u002Feverything-claude-code","everything-claude-code 是一套专为 AI 编程助手（如 Claude Code、Codex、Cursor 等）打造的高性能优化系统。它不仅仅是一组配置文件，而是一个经过长期实战打磨的完整框架，旨在解决 AI 
代理在实际开发中面临的效率低下、记忆丢失、安全隐患及缺乏持续学习能力等核心痛点。\n\n通过引入技能模块化、直觉增强、记忆持久化机制以及内置的安全扫描功能，everything-claude-code 能显著提升 AI 在复杂任务中的表现，帮助开发者构建更稳定、更智能的生产级 AI 代理。其独特的“研究优先”开发理念和针对 Token 消耗的优化策略，使得模型响应更快、成本更低，同时有效防御潜在的攻击向量。\n\n这套工具特别适合软件开发者、AI 研究人员以及希望深度定制 AI 工作流的技术团队使用。无论您是在构建大型代码库，还是需要 AI 协助进行安全审计与自动化测试，everything-claude-code 都能提供强大的底层支持。作为一个曾荣获 Anthropic 黑客大奖的开源项目，它融合了多语言支持与丰富的实战钩子（hooks），让 AI 真正成长为懂上",151918,2,"2026-04-12T11:33:05",[14,13,35],"语言模型",{"id":37,"name":38,"github_repo":39,"description_zh":40,"stars":41,"difficulty_score":32,"last_commit_at":42,"category_tags":43,"status":17},2271,"ComfyUI","Comfy-Org\u002FComfyUI","ComfyUI 是一款功能强大且高度模块化的视觉 AI 引擎，专为设计和执行复杂的 Stable Diffusion 图像生成流程而打造。它摒弃了传统的代码编写模式，采用直观的节点式流程图界面，让用户通过连接不同的功能模块即可构建个性化的生成管线。\n\n这一设计巧妙解决了高级 AI 绘图工作流配置复杂、灵活性不足的痛点。用户无需具备编程背景，也能自由组合模型、调整参数并实时预览效果，轻松实现从基础文生图到多步骤高清修复等各类复杂任务。ComfyUI 拥有极佳的兼容性，不仅支持 Windows、macOS 和 Linux 全平台，还广泛适配 NVIDIA、AMD、Intel 及苹果 Silicon 等多种硬件架构，并率先支持 SDXL、Flux、SD3 等前沿模型。\n\n无论是希望深入探索算法潜力的研究人员和开发者，还是追求极致创作自由度的设计师与资深 AI 绘画爱好者，ComfyUI 都能提供强大的支持。其独特的模块化架构允许社区不断扩展新功能，使其成为当前最灵活、生态最丰富的开源扩散模型工具之一，帮助用户将创意高效转化为现实。",108322,"2026-04-10T11:39:34",[14,15,13],{"id":45,"name":46,"github_repo":47,"description_zh":48,"stars":49,"difficulty_score":32,"last_commit_at":50,"category_tags":51,"status":17},6121,"gemini-cli","google-gemini\u002Fgemini-cli","gemini-cli 是一款由谷歌推出的开源 AI 命令行工具，它将强大的 Gemini 大模型能力直接集成到用户的终端环境中。对于习惯在命令行工作的开发者而言，它提供了一条从输入提示词到获取模型响应的最短路径，无需切换窗口即可享受智能辅助。\n\n这款工具主要解决了开发过程中频繁上下文切换的痛点，让用户能在熟悉的终端界面内直接完成代码理解、生成、调试以及自动化运维任务。无论是查询大型代码库、根据草图生成应用，还是执行复杂的 Git 操作，gemini-cli 都能通过自然语言指令高效处理。\n\n它特别适合广大软件工程师、DevOps 人员及技术研究人员使用。其核心亮点包括支持高达 100 万 token 的超长上下文窗口，具备出色的逻辑推理能力；内置 Google 搜索、文件操作及 Shell 命令执行等实用工具；更独特的是，它支持 MCP（模型上下文协议），允许用户灵活扩展自定义集成，连接如图像生成等外部能力。此外，个人谷歌账号即可享受免费的额度支持，且项目基于 Apache 2.0 
协议完全开源，是提升终端工作效率的理想助手。",100752,"2026-04-10T01:20:03",[52,13,15,14],"插件",{"id":54,"name":55,"github_repo":56,"description_zh":57,"stars":58,"difficulty_score":32,"last_commit_at":59,"category_tags":60,"status":17},4721,"markitdown","microsoft\u002Fmarkitdown","MarkItDown 是一款由微软 AutoGen 团队打造的轻量级 Python 工具，专为将各类文件高效转换为 Markdown 格式而设计。它支持 PDF、Word、Excel、PPT、图片（含 OCR）、音频（含语音转录）、HTML 乃至 YouTube 链接等多种格式的解析，能够精准提取文档中的标题、列表、表格和链接等关键结构信息。\n\n在人工智能应用日益普及的今天，大语言模型（LLM）虽擅长处理文本，却难以直接读取复杂的二进制办公文档。MarkItDown 恰好解决了这一痛点，它将非结构化或半结构化的文件转化为模型“原生理解”且 Token 效率极高的 Markdown 格式，成为连接本地文件与 AI 分析 pipeline 的理想桥梁。此外，它还提供了 MCP（模型上下文协议）服务器，可无缝集成到 Claude Desktop 等 LLM 应用中。\n\n这款工具特别适合开发者、数据科学家及 AI 研究人员使用，尤其是那些需要构建文档检索增强生成（RAG）系统、进行批量文本分析或希望让 AI 助手直接“阅读”本地文件的用户。虽然生成的内容也具备一定可读性，但其核心优势在于为机器",93400,"2026-04-06T19:52:38",[52,14],{"id":62,"github_repo":63,"name":64,"description_en":65,"description_zh":66,"ai_summary_zh":66,"readme_en":67,"readme_zh":68,"quickstart_zh":69,"use_case_zh":70,"hero_image_url":71,"owner_login":72,"owner_name":73,"owner_avatar_url":74,"owner_bio":75,"owner_company":76,"owner_location":75,"owner_email":77,"owner_twitter":75,"owner_website":78,"owner_url":79,"languages":80,"stars":89,"forks":90,"last_commit_at":91,"license":92,"difficulty_score":32,"env_os":93,"env_gpu":94,"env_ram":95,"env_deps":96,"category_tags":105,"github_topics":106,"view_count":32,"oss_zip_url":75,"oss_zip_packed_at":75,"status":17,"created_at":111,"updated_at":112,"faqs":113,"releases":144},6902,"WillianFuks\u002Ftfcausalimpact","tfcausalimpact","Python Causal Impact Implementation Based on Google's R Package. 
Built using TensorFlow Probability.","tfcausalimpact 是一款基于 TensorFlow Probability 构建的 Python 开源库，旨在帮助开发者量化特定事件（如营销活动上线或政策调整）对时间序列数据的真实因果影响。它解决了传统数据分析中难以区分“自然波动”与“干预效果”的痛点：通过利用干预前的历史数据训练贝叶斯结构时间序列模型，该工具能精准预测若无干预发生时的“反事实”趋势，并将其与实际观测数据进行对比，从而科学地计算出干预带来的绝对及相对效应。\n\n这款工具特别适合数据科学家、分析师及研究人员使用，尤其是那些已经熟悉 Google 原版 R 语言 CausalImpact 包，但希望在 Python 生态系统中进行高效建模的用户。其核心技术亮点在于将成熟的因果推断算法移植到了强大的深度学习框架之上，不仅支持灵活的协变量输入以提升预测精度，还能提供详尽的后验概率统计和直观的可视化报告。只需几行代码，用户即可输入前后周期数据，获得包含置信区间和显著性检验的专业分析报告，让因果效应的评估过程变得既严谨又便捷。","# tfcausalimpact\n[![Build Status](https:\u002F\u002Ftravis-ci.com\u002FWillianFuks\u002Ftfcausalimpact.svg?branch=master)](https:\u002F\u002Ftravis-ci.com\u002FWillianFuks\u002Ftfcausalimpact) [![Coverage Status](https:\u002F\u002Fcoveralls.io\u002Frepos\u002Fgithub\u002FWillianFuks\u002Ftfcausalimpact\u002Fbadge.svg?branch=master)](https:\u002F\u002Fcoveralls.io\u002Fgithub\u002FWillianFuks\u002Ftfcausalimpact?branch=master) [![GitHub license](https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Flicense\u002FWillianFuks\u002Ftfcausalimpact.svg)](https:\u002F\u002Fgithub.com\u002FWillianFuks\u002Ftfcausalimpact\u002Fblob\u002Fmaster\u002FLICENSE) [![PyPI version](https:\u002F\u002Fbadge.fury.io\u002Fpy\u002Ftfcausalimpact.svg)](https:\u002F\u002Fbadge.fury.io\u002Fpy\u002Ftfcausalimpact) [![Pyversions](https:\u002F\u002Fimg.shields.io\u002Fpypi\u002Fpyversions\u002Ftfcausalimpact.svg)](https:\u002F\u002Fpypi.python.org\u002Fpypi\u002Ftfcausalimpact)\n\nGoogle's [Causal Impact](https:\u002F\u002Fgithub.com\u002Fgoogle\u002FCausalImpact) Algorithm Implemented on Top of [TensorFlow Probability](https:\u002F\u002Fgithub.com\u002Ftensorflow\u002Fprobability).\n\n## How It Works\nThe algorithm basically fits a [Bayesian structural](https:\u002F\u002Fen.wikipedia.org\u002Fwiki\u002FBayesian_structural_time_series) model on past observed data to make predictions on what future data would look like. 
Past data comprises everything that happened before an intervention (usually a change in whether some variable is present, such as a marketing campaign that starts running at a given point). It then compares the counter-factual (predicted) series against what was really observed in order to extract statistical conclusions.\n\nRunning the model is quite straightforward: it requires the observed data `y`, covariates `X` that help the model through a linear regression, a `pre-period` interval that selects everything that happened before the intervention, and a `post-period` with data after the \"impact\" happened.\n\nPlease refer to this Medium [post](https:\u002F\u002Ftowardsdatascience.com\u002Fimplementing-causal-impact-on-top-of-tensorflow-probability-c837ea18b126) for more on this subject.\n\n## Installation\n\n    pip install tfcausalimpact\n\n## Requirements\n\n - python{3.7, 3.8, 3.9, 3.10, 3.11}\n - matplotlib\n - jinja2\n - tensorflow>=2.10.0\n - tensorflow_probability>=0.18.0\n - pandas >= 1.3.5\n\n\n## Getting Started\n\nWe recommend this [presentation](https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=GTgZfCltMm8) by Kay Brodersen (one of the creators of Causal Impact in R).\n\nWe also created this introductory [ipython notebook](https:\u002F\u002Fgithub.com\u002FWillianFuks\u002Ftfcausalimpact\u002Fblob\u002Fmaster\u002Fnotebooks\u002Fgetting_started.ipynb) with examples of how to use this package.\n\nThis Medium [article](https:\u002F\u002Ftowardsdatascience.com\u002Fimplementing-causal-impact-on-top-of-tensorflow-probability-c837ea18b126) also offers some ideas and concepts behind the library.\n\n### Example\n\nHere's a simple example (which can also be found in Google's original R implementation) running in Python:\n\n```python\nimport pandas as pd\nfrom causalimpact import CausalImpact\n\n\ndata = 
pd.read_csv('https:\u002F\u002Fraw.githubusercontent.com\u002FWillianFuks\u002Ftfcausalimpact\u002Fmaster\u002Ftests\u002Ffixtures\u002Farma_data.csv')[['y', 'X']]\ndata.iloc[70:, 0] += 5\n\npre_period = [0, 69]\npost_period = [70, 99]\n\nci = CausalImpact(data, pre_period, post_period)\nprint(ci.summary())\nprint(ci.summary(output='report'))\nci.plot()\n```\n\nSummary should look like this:\n\n```\nPosterior Inference {Causal Impact}\n                          Average            Cumulative\nActual                    125.23             3756.86\nPrediction (s.d.)         120.34 (0.31)      3610.28 (9.28)\n95% CI                    [119.76, 120.97]   [3592.67, 3629.06]\n\nAbsolute effect (s.d.)    4.89 (0.31)        146.58 (9.28)\n95% CI                    [4.26, 5.47]       [127.8, 164.19]\n\nRelative effect (s.d.)    4.06% (0.26%)      4.06% (0.26%)\n95% CI                    [3.54%, 4.55%]     [3.54%, 4.55%]\n\nPosterior tail-area probability p: 0.0\nPosterior prob. of a causal effect: 100.0%\n\nFor more details run the command: print(impact.summary('report'))\n```\n\nAnd here's the plot graphic:\n\n![alt text](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FWillianFuks_tfcausalimpact_readme_ba79ed7ac84e.png)\n\n## Google R Package vs TensorFlow Python\n\nBoth packages should give equivalent results. Here's an example using the `comparison_data.csv` dataset available in the `fixtures` folder. When running CausalImpact in the original R package, this is the result:\n\n### R\n\n```{r}\ndata = read.csv.zoo('comparison_data.csv', header=TRUE)\npre.period \u003C- c(as.Date(\"2019-04-16\"), as.Date(\"2019-07-14\"))\npost.period \u003C- c(as.Date(\"2019-07-15\"), as.Date(\"2019-08-01\"))\nci = CausalImpact(data, pre.period, post.period)\n```\n\nSummary results:\n\n```\nPosterior inference {CausalImpact}\n\n                         Average          Cumulative\nActual                   78574            1414340\nPrediction (s.d.)        
79232 (736)      1426171 (13253)\n95% CI                   [77743, 80651]   [1399368, 1451711]\n\nAbsolute effect (s.d.)   -657 (736)       -11831 (13253)\n95% CI                   [-2076, 832]     [-37371, 14971]\n\nRelative effect (s.d.)   -0.83% (0.93%)   -0.83% (0.93%)\n95% CI                   [-2.6%, 1%]      [-2.6%, 1%]\n\nPosterior tail-area probability p:   0.20061\nPosterior prob. of a causal effect:  80%\n\nFor more details, type: summary(impact, \"report\")\n```\n\nAnd the corresponding plot:\n\n![alt text](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FWillianFuks_tfcausalimpact_readme_c34ac5619679.png)\n\n### Python\n\n```python\nimport pandas as pd\nfrom causalimpact import CausalImpact\n\n\ndata = pd.read_csv('https:\u002F\u002Fraw.githubusercontent.com\u002FWillianFuks\u002Ftfcausalimpact\u002Fmaster\u002Ftests\u002Ffixtures\u002Fcomparison_data.csv', index_col=['DATE'])\npre_period = ['2019-04-16', '2019-07-14']\npost_period = ['2019-07-15', '2019-08-01']\nci = CausalImpact(data, pre_period, post_period, model_args={'fit_method': 'hmc'})\n```\n\nSummary is:\n\n```\nPosterior Inference {Causal Impact}\n                          Average            Cumulative\nActual                    78574.42           1414339.5\nPrediction (s.d.)         79282.92 (727.48)  1427092.62 (13094.72)\n95% CI                    [77849.5, 80701.18][1401290.94, 1452621.31]\n\nAbsolute effect (s.d.)    -708.51 (727.48)   -12753.12 (13094.72)\n95% CI                    [-2126.77, 724.92] [-38281.81, 13048.56]\n\nRelative effect (s.d.)    -0.89% (0.92%)     -0.89% (0.92%)\n95% CI                    [-2.68%, 0.91%]    [-2.68%, 0.91%]\n\nPosterior tail-area probability p: 0.16\nPosterior prob. 
of a causal effect: 84.12%\n\nFor more details run the command: print(impact.summary('report'))\n```\n\nAnd plot:\n\n![alt text](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FWillianFuks_tfcausalimpact_readme_9bc86d8dcd30.png)\n\nBoth results are equivalent.\n\n## Performance\n\nBy default, this package uses the [`Variational Inference`](https:\u002F\u002Fen.wikipedia.org\u002Fwiki\u002FVariational_Bayesian_methods) method from `TensorFlow Probability`, which is faster and should work well for most cases. Convergence can take somewhere between 2~3 minutes on more complex time series. You could also try running the package on top of GPUs to see if results improve.\n\nIf, on the other hand, precision is the top requirement when running causal impact analyses, it's possible to switch algorithms by manipulating the input arguments like so:\n\n```python\nci = CausalImpact(data, pre_period, post_period, model_args={'fit_method': 'hmc'})\n```\n\nThis will make use of the [`Hamiltonian Monte Carlo`](https:\u002F\u002Fen.wikipedia.org\u002Fwiki\u002FHamiltonian_Monte_Carlo) algorithm, which is state-of-the-art for finding the Bayesian posterior of distributions. Still, keep in mind that on complex time series with thousands of data points and complex modeling involving various seasonal components, this optimization can take 1 hour or even more to complete (on a GPU). 
Performance is sacrificed in exchange for better precision.\n\n## Bugs & Issues\n\nIf you find bugs or have any issues while running this library, please consider opening an [`Issue`](https:\u002F\u002Fgithub.com\u002FWillianFuks\u002Ftfcausalimpact\u002Fissues) with a complete description and a reproducible environment so we can better help you solve the problem.\n","# tfcausalimpact\n[![构建状态](https:\u002F\u002Ftravis-ci.com\u002FWillianFuks\u002Ftfcausalimpact.svg?branch=master)](https:\u002F\u002Ftravis-ci.com\u002FWillianFuks\u002Ftfcausalimpact) [![覆盖率状态](https:\u002F\u002Fcoveralls.io\u002Frepos\u002Fgithub\u002FWillianFuks\u002Ftfcausalimpact\u002Fbadge.svg?branch=master)](https:\u002F\u002Fcoveralls.io\u002Fgithub\u002FWillianFuks\u002Ftfcausalimpact?branch=master) [![GitHub 许可证](https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Flicense\u002FWillianFuks\u002Ftfcausalimpact.svg)](https:\u002F\u002Fgithub.com\u002FWillianFuks\u002Ftfcausalimpact\u002Fblob\u002Fmaster\u002FLICENSE) [![PyPI 版本](https:\u002F\u002Fbadge.fury.io\u002Fpy\u002Ftfcausalimpact.svg)](https:\u002F\u002Fbadge.fury.io\u002Fpy\u002Ftfcausalimpact) [![Python 版本兼容性](https:\u002F\u002Fimg.shields.io\u002Fpypi\u002Fpyversions\u002Ftfcausalimpact.svg)](https:\u002F\u002Fpypi.python.org\u002Fpypi\u002Ftfcausalimpact)\n\n基于 [TensorFlow Probability](https:\u002F\u002Fgithub.com\u002Ftensorflow\u002Fprobability) 实现的 Google [Causal Impact](https:\u002F\u002Fgithub.com\u002Fgoogle\u002FCausalImpact) 算法。\n\n## 工作原理\n该算法本质上是在过去观测数据上拟合一个 [贝叶斯结构化](https:\u002F\u002Fen.wikipedia.org\u002Fwiki\u002FBayesian_structural_time_series) 时间序列模型，以预测未来数据可能的样子。过去的数据包括干预发生之前的所有内容（通常是指某个变量的状态发生变化，例如在某一时间点开始执行的营销活动）。然后，它会将反事实（预测的）序列与实际观测到的数据进行比较，从而得出统计结论。\n\n运行该模型非常简单，只需要提供观测数据 `y`、用于线性回归的协变量 `X`，以及两个时间段：`pre-period` 表示干预前的时间段，`post-period` 表示“影响”发生之后的时间段。\n\n有关更多详细信息，请参阅这篇 Medium 文章：[Implementing Causal Impact on Top of TensorFlow 
Probability](https:\u002F\u002Ftowardsdatascience.com\u002Fimplementing-causal-impact-on-top-of-tensorflow-probability-c837ea18b126)。\n\n## 安装\n\n    pip install tfcausalimpact\n\n## 依赖项\n\n - python{3.7, 3.8, 3.9, 3.10, 3.11}\n - matplotlib\n - jinja2\n - tensorflow>=2.10.0\n - tensorflow_probability>=0.18.0\n - pandas >= 1.3.5\n\n\n## 快速入门\n\n我们推荐观看 Kay Brodersen（R 语言中 Causal Impact 的创建者之一）的这个 [演示文稿](https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=GTgZfCltMm8)。\n\n此外，我们还创建了一个介绍性的 [ipython 笔记本](https:\u002F\u002Fgithub.com\u002FWillianFuks\u002Ftfcausalimpact\u002Fblob\u002Fmaster\u002Fnotebooks\u002Fgetting_started.ipynb)，其中包含如何使用此包的示例。\n\n这篇 Medium 文章 [Implementing Causal Impact on Top of TensorFlow Probability](https:\u002F\u002Ftowardsdatascience.com\u002Fimplementing-causal-impact-on-top-of-tensorflow-probability-c837ea18b126) 也提供了关于该库的一些思路和概念。\n\n### 示例\n\n以下是一个简单的示例（也可以在 Google 原始的 R 实现中找到），在 Python 中运行：\n\n```python\nimport pandas as pd\nfrom causalimpact import CausalImpact\n\n\ndata = pd.read_csv('https:\u002F\u002Fraw.githubusercontent.com\u002FWillianFuks\u002Ftfcausalimpact\u002Fmaster\u002Ftests\u002Ffixtures\u002Farma_data.csv')[['y', 'X']]\ndata.iloc[70:, 0] += 5\n\npre_period = [0, 69]\npost_period = [70, 99]\n\nci = CausalImpact(data, pre_period, post_period)\nprint(ci.summary())\nprint(ci.summary(output='report'))\nci.plot()\n```\n\n摘要应如下所示：\n\n```\n后验推断 {Causal Impact}\n                          平均            累计\n实际                    125.23             3756.86\n预测（标准差）         120.34 (0.31)      3610.28 (9.28)\n95% 置信区间            [119.76, 120.97]   [3592.67, 3629.06]\n\n绝对效应（标准差）    4.89 (0.31)        146.58 (9.28)\n95% 置信区间            [4.26, 5.47]       [127.8, 164.19]\n\n相对效应（标准差）    4.06% (0.26%)      4.06% (0.26%)\n95% 置信区间            [3.54%, 4.55%]     [3.54%, 4.55%]\n\n后验尾部概率 p: 0.0\n后验因果效应概率: 100.0%\n\n有关更多详情，请运行命令：print(impact.summary('report'))\n```\n\n以下是图表：\n\n![alt 
text](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FWillianFuks_tfcausalimpact_readme_ba79ed7ac84e.png)\n\n## Google R 包 vs TensorFlow Python\n\n这两个包应该给出等效的结果。以下是一个使用 `fixtures` 文件夹中提供的 `comparison_data.csv` 数据集的示例。在原始 R 包中运行 CausalImpact 时，结果如下：\n\n### R\n\n```{r}\ndata = read.csv.zoo('comparison_data.csv', header=TRUE)\npre.period \u003C- c(as.Date(\"2019-04-16\"), as.Date(\"2019-07-14\"))\npost.period \u003C- c(as.Date(\"2019-07-15\"), as.Date(\"2019-08-01\"))\nci = CausalImpact(data, pre.period, post.period)\n```\n\n摘要结果：\n\n```\n后验推断 {CausalImpact}\n\n                         平均          累计\n实际                   78574            1414340\n预测（标准差）        79232 (736)      1426171 (13253)\n95% 置信区间           [77743, 80651]   [1399368, 1451711]\n\n绝对效应（标准差）   -657 (736)       -11831 (13253)\n95% 置信区间           [-2076, 832]     [-37371, 14971]\n\n相对效应（标准差）   -0.83% (0.93%)   -0.83% (0.93%)\n95% 置信区间           [-2.6%, 1%]      [-2.6%, 1%]\n\n后验尾部概率 p:   0.20061\n后验因果效应概率:  80%\n\n有关更多详情，请键入：summary(impact, \"report\")\n```\n\n对应的图表如下：\n\n![alt text](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FWillianFuks_tfcausalimpact_readme_c34ac5619679.png)\n\n### Python\n\n```python\nimport pandas as pd\nfrom causalimpact import CausalImpact\n\n\ndata = pd.read_csv('https:\u002F\u002Fraw.githubusercontent.com\u002FWillianFuks\u002Ftfcausalimpact\u002Fmaster\u002Ftests\u002Ffixtures\u002Fcomparison_data.csv', index_col=['DATE'])\npre_period = ['2019-04-16', '2019-07-14']\npost_period = ['2019-07-15', '2019-08-01']\nci = CausalImpact(data, pre_period, post_period, model_args={'fit_method': 'hmc'})\n```\n\n摘要如下：\n\n```\n后验推断 {Causal Impact}\n                          平均            累计\n实际                    78574.42           1414339.5\n预测（标准差）         79282.92 (727.48)  1427092.62 (13094.72)\n95% 置信区间            [77849.5, 80701.18][1401290.94, 1452621.31]\n\n绝对效应（标准差）    -708.51 (727.48)   -12753.12 (13094.72)\n95% 置信区间            [-2126.77, 724.92] [-38281.81, 
13048.56]\n\n相对效应（标准差）    -0.89% (0.92%)     -0.89% (0.92%)\n95% 置信区间            [-2.68%, 0.91%]    [-2.68%, 0.91%]\n\n后验尾部概率 p: 0.16\n后验因果效应概率: 84.12%\n\n有关更多详情，请运行命令：print(impact.summary('report'))\n```\n\n对应的图表如下：\n\n![alt text](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FWillianFuks_tfcausalimpact_readme_9bc86d8dcd30.png)\n\n两种结果是等价的。\n\n## 性能\n\n该包默认使用 `TensorFlow Probability` 中的 [`变分推断`](https:\u002F\u002Fen.wikipedia.org\u002Fwiki\u002FVariational_Bayesian_methods) 方法，这种方法速度较快，在大多数情况下都能正常工作。对于较为复杂的时间序列，收敛时间通常在 2 到 3 分钟之间。您也可以尝试在 GPU 上运行该包，以观察性能是否有所提升。\n\n另一方面，如果在进行因果效应分析时对精度有极高要求，可以通过调整输入参数来切换算法，如下所示：\n\n```python\nci = CausalImpact(data, pre_period, post_period, model_args={'fit_method': 'hmc'})\n```\n\n这将使用 [`哈密顿蒙特卡洛`](https:\u002F\u002Fen.wikipedia.org\u002Fwiki\u002FHamiltonian_Monte_Carlo) 算法，该算法是目前用于求解贝叶斯后验分布的最先进方法。不过，请注意，对于包含数千个数据点、且涉及多种季节性成分的复杂时间序列模型，这种优化可能需要 1 小时甚至更长时间才能完成（即使在 GPU 上）。此时会牺牲部分性能以换取更高的精度。\n\n## Bug 与问题\n\n如果您在使用本库时发现任何 bug 或遇到问题，请考虑在 [GitHub 仓库](https:\u002F\u002Fgithub.com\u002FWillianFuks\u002Ftfcausalimpact\u002Fissues) 上提交一个包含完整描述和可复现环境的 issue，以便我们更好地帮助您解决问题。","# tfcausalimpact 快速上手指南\n\n`tfcausalimpact` 是 Google Causal Impact 算法的 Python 实现，基于 TensorFlow Probability。它通过贝叶斯结构时间序列模型，评估干预措施（如营销活动、产品上线）对目标指标的实际因果影响。\n\n## 环境准备\n\n在开始之前，请确保您的开发环境满足以下要求：\n\n*   **操作系统**：Linux, macOS 或 Windows\n*   **Python 版本**：3.7, 3.8, 3.9, 3.10 或 3.11\n*   **核心依赖**：\n    *   `tensorflow` >= 2.10.0\n    *   `tensorflow_probability` >= 0.18.0\n    *   `pandas` >= 1.3.5\n    *   `matplotlib`\n    *   `jinja2`\n\n> **提示**：国内用户建议在安装前配置 pip 镜像源（如清华源或阿里源）以加速下载。\n> ```bash\n> pip install -i https:\u002F\u002Fpypi.tuna.tsinghua.edu.cn\u002Fsimple \u003Cpackage_name>\n> ```\n\n## 安装步骤\n\n使用 pip 直接安装最新稳定版：\n\n```bash\npip install tfcausalimpact\n```\n\n若需指定国内镜像源加速安装：\n\n```bash\npip install -i https:\u002F\u002Fpypi.tuna.tsinghua.edu.cn\u002Fsimple tfcausalimpact\n```\n\n## 基本使用\n\n以下是一个最简示例，演示如何加载数据、定义干预前后时间段并生成因果分析报告。\n\n### 1. 
准备数据与参数\n\n数据需要包含目标变量 `y` 和可选的协变量 `X`。你需要定义两个时间段：\n*   `pre_period`：干预发生前的时间段（用于训练模型）。\n*   `post_period`：干预发生后的时间段（用于评估影响）。\n\n### 2. 运行分析\n\n```python\nimport pandas as pd\nfrom causalimpact import CausalImpact\n\n# 加载示例数据 (实际使用时请替换为自己的 DataFrame)\ndata = pd.read_csv('https:\u002F\u002Fraw.githubusercontent.com\u002FWillianFuks\u002Ftfcausalimpact\u002Fmaster\u002Ftests\u002Ffixtures\u002Farma_data.csv')[['y', 'X']]\n\n# 模拟干预效果：将后段数据的 y 值增加 5\ndata.iloc[70:, 0] += 5\n\n# 定义干预前后区间 (索引范围)\npre_period = [0, 69]\npost_period = [70, 99]\n\n# 初始化并运行模型\nci = CausalImpact(data, pre_period, post_period)\n\n# 打印统计摘要\nprint(ci.summary())\n\n# 打印详细报告\nprint(ci.summary(output='report'))\n\n# 绘制因果影响图\nci.plot()\n```\n\n### 3. 结果解读\n\n运行上述代码后，你将获得类似以下的输出：\n\n*   **Summary 输出**：展示实际值、预测值（及标准差）、95% 置信区间，以及绝对效应和相对效应。\n*   **概率判断**：`Posterior prob. of a causal effect` 表示存在因果效应的后验概率（例如 100.0% 表示极大概率存在影响）。\n*   **可视化**：`ci.plot()` 会生成一张图表，直观对比“实际观测值”与“若无干预的预测值”，阴影部分通常代表置信区间。\n\n### 进阶提示：精度与性能平衡\n\n默认情况下，库使用 **变分推断 (Variational Inference)**，速度较快（通常几分钟内完成）。如果对精度有极高要求且愿意牺牲计算时间，可以通过 `model_args` 切换为 **哈密顿蒙特卡洛 (HMC)** 算法：\n\n```python\n# 使用 HMC 算法以提高精度（耗时可能长达数小时）\nci = CausalImpact(data, pre_period, post_period, model_args={'fit_method': 'hmc'})\n```","某电商数据团队在“双 11\"大促结束后，急需评估新上线的“智能推荐弹窗”功能对整体销售额的真实贡献，以决定下一季度的预算分配。\n\n### 没有 tfcausalimpact 时\n- **归因模糊不清**：只能简单对比活动前后的销售额均值，无法排除季节性波动、自然增长趋势或其他同期营销活动的干扰，导致效果被高估或低估。\n- **缺乏统计置信度**：无法计算干预效果的置信区间和概率，汇报时只能凭经验说“感觉有效果”，难以用严谨的数据说服管理层。\n- **建模门槛极高**：若要构建贝叶斯结构时间序列模型来模拟“如果没有上线功能会怎样”，需要手动编写复杂的 TensorFlow Probability 代码，开发周期长达数周。\n- **可视化缺失**：难以直观展示“实际值”与“预测反事实值”的差异曲线，汇报材料缺乏说服力强的图表支持。\n\n### 使用 tfcausalimpact 后\n- **精准剥离干扰**：利用贝叶斯结构时间序列模型，自动学习历史数据规律并控制协变量，精准模拟出“未上线功能”时的销售走势，真实量化净增量。\n- **输出严谨结论**：一键生成包含平均效应、累积效应及 95% 置信区间的统计报告，直接得出“有 100% 概率产生正向因果影响”的确切结论。\n- **开发效率飞跃**：基于 TensorFlow Probability 封装了简洁接口，仅需几行代码即可完成从数据输入到模型推断的全过程，将分析时间从数周缩短至几分钟。\n- **图表自动洞察**：自动绘制包含实际值、预测值及阴影置信区间的专业图表，直观展示干预点后的偏差，让非技术人员也能一眼看懂业务价值。\n\ntfcausalimpact 
将复杂的因果推断转化为标准化的代码流程，帮助团队在充满噪声的业务数据中，用统计学证据锁定真实的业务增长驱动力。","https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FWillianFuks_tfcausalimpact_ba79ed7a.png","WillianFuks","Willian Fuks","https:\u002F\u002Foss.gittoolsai.com\u002Favatars\u002FWillianFuks_d9d8906f.jpg",null,"Maiolnir","willian.fuks@gmail.com","maiolnir.com","https:\u002F\u002Fgithub.com\u002FWillianFuks",[81,85],{"name":82,"color":83,"percentage":84},"Python","#3572A5",99.8,{"name":86,"color":87,"percentage":88},"Shell","#89e051",0.2,670,77,"2026-04-08T10:30:22","Apache-2.0","","非必需。支持在 GPU 上运行以提升性能（特别是使用 HMC 算法处理复杂时间序列时），但未指定具体显卡型号、显存大小或 CUDA 版本。","未说明",{"notes":97,"python":98,"dependencies":99},"默认使用变分推断（Variational Inference）方法，收敛通常需 2-3 分钟；若追求更高精度可切换至哈密顿蒙特卡洛（HMC）算法，但在复杂场景下可能耗时 1 小时以上。该工具是 Google Causal Impact 算法基于 TensorFlow Probability 的 Python 实现。","3.7, 3.8, 3.9, 3.10, 3.11",[100,101,102,103,104],"tensorflow>=2.10.0","tensorflow_probability>=0.18.0","pandas>=1.3.5","matplotlib","jinja2",[14],[107,108,109,110],"causalimpact","tensorflow-probability","python","causal-inference","2026-03-27T02:49:30.150509","2026-04-13T00:24:14.574374",[114,119,124,129,134,139],{"id":115,"question_zh":116,"answer_zh":117,"source_url":118},31098,"导入库时出现 'AttributeError: CausalImpact object has no attribute inferences' 错误怎么办？","这通常是因为环境中同时安装了官方的 `causalimpact` 包和 `tfcausalimpact` 包，导致命名空间冲突。解决方法是卸载冲突的包：\n1. 运行 `pip uninstall causalimpact`（如果无法卸载，可能需要手动删除库文件夹）。\n2. 升级 tensorflow：`pip install --upgrade tensorflow`。\n3. 重新安装 tfcausalimpact，建议使用根目录安装方式：`python -m pip install tfcausalimpact`。","https:\u002F\u002Fgithub.com\u002FWillianFuks\u002Ftfcausalimpact\u002Fissues\u002F39",{"id":120,"question_zh":121,"answer_zh":122,"source_url":123},31099,"遇到 'tensorflow_probability.python.sts has no attribute regularize_series' 错误如何解决？","这是由于 TensorFlow Probability 版本更新导致的兼容性问题。可以通过降级相关包来解决：\n1. 安装特定版本的 tensorflow-probability：`pip install tensorflow-probability==0.18.0`。\n2. 
或者将 tensorflow 降级到 2.9.2 版本：`pip install tensorflow==2.9.2`。","https:\u002F\u002Fgithub.com\u002FWillianFuks\u002Ftfcausalimpact\u002Fissues\u002F61",{"id":125,"question_zh":126,"answer_zh":127,"source_url":128},31100,"使用默认拟合方法时报错 'ValueError: Argument name must be a string and cannot contain character \u002F' 怎么办？","该错误出现在较新版本的 TensorFlow (2.16+) 和 TensorFlow Probability (0.24+) 中，与默认的 'vi' 拟合方法有关。\n解决方案：\n1. 推荐升级 tfcausalimpact 到最新稳定版（0.0.15 或更高）：`pip install -U tfcausalimpact`。\n2. 临时方案是将 `fit_method` 参数显式设置为 'hmc'。\n3. 或者降级依赖包至 tensorflow 2.15.1 和 tensorflow-probability 0.23.0。","https:\u002F\u002Fgithub.com\u002FWillianFuks\u002Ftfcausalimpact\u002Fissues\u002F86",{"id":130,"question_zh":131,"answer_zh":132,"source_url":133},31101,"如何获取每个协变量（控制组）的权重以分析其重要性？","可以通过访问模型样本数据来计算协变量的权重。虽然直接调用可能只返回单一值，但用户可以检查 `ci.model_samples` 属性。具体而言，可以通过对模型样本进行均值计算来分析不同控制国（数据框中的列）对目标国重建的贡献比例。建议参考项目最新的 Notebook 示例代码来获取详细的权重提取方法。","https:\u002F\u002Fgithub.com\u002FWillianFuks\u002Ftfcausalimpact\u002Fissues\u002F43",{"id":135,"question_zh":136,"answer_zh":137,"source_url":138},31102,"当数据框中包含名为 '0' 的列时，为什么会导致结果不正确？","这是一个已修复的细微 Bug。如果数据框包含名为 '0' 的列且不是第一列，代码可能会错误地将其视为标签索引 (.loc) 而不是位置索引 (.iloc)，导致存储错误的 mu 和 sigma 值，从而使去标准化后的结果不正确。\n解决方法：请确保将 `tfcausalimpact` 升级到 0.0.5 或更高版本，该版本已修复此问题。","https:\u002F\u002Fgithub.com\u002FWillianFuks\u002Ftfcausalimpact\u002Fissues\u002F17",{"id":140,"question_zh":141,"answer_zh":142,"source_url":143},31103,"如何自定义后验推断的汇总方式（例如使用中位数和 HDI 代替均值和分位数区间）？","虽然默认实现使用均值和分位数区间，但用户可以编写自定义函数来汇总推断结果。例如，可以使用 `arviz` 库中的 `hdi` 函数来计算最高密度区间 (HDI) 并使用中位数作为点估计。\n核心逻辑包括：\n1. 从后验分布中采样 (`predictive.sample`)。\n2. 使用 `arviz.stats.hdi` 计算区间。\n3. 
根据需要取消标准化数据。\n维护者表示未来可能会考虑原生支持 HDI，但在非常偏斜的分布之外，差异可能不大。","https:\u002F\u002Fgithub.com\u002FWillianFuks\u002Ftfcausalimpact\u002Fissues\u002F11",[145,150,155,160,165,169,174,179,183,188,192,197,201,206,210,215,220,225,229,233],{"id":146,"version":147,"summary_zh":148,"released_at":149},223024,"v0.0.18","新增对 Tensorflow Probability==0.25 的支持","2025-01-13T14:42:12",{"id":151,"version":152,"summary_zh":153,"released_at":154},223025,"v0.0.17","添加 Python 3.12 支持","2024-10-19T19:51:33",{"id":156,"version":157,"summary_zh":158,"released_at":159},223026,"v0.0.16","- 移除已弃用的 Pandas 调用","2024-07-10T16:29:38",{"id":161,"version":162,"summary_zh":163,"released_at":164},223027,"v0.0.15","- 将 TFP 配置为使用 Keras 2.0 版本\n- 为软件包依赖项添加了上限版本","2024-05-07T19:20:28",{"id":166,"version":167,"summary_zh":75,"released_at":168},223028,"v0.0.15-rc.0","2024-05-06T21:38:47",{"id":170,"version":171,"summary_zh":172,"released_at":173},223029,"v0.0.14","- 增加对 Py3.11 的支持","2023-11-25T22:12:42",{"id":175,"version":176,"summary_zh":177,"released_at":178},223030,"v0.0.13","- 增加对 Python 3.10 的支持","2022-12-20T01:57:13",{"id":180,"version":181,"summary_zh":75,"released_at":182},223031,"v0.0.13-rc.0","2022-11-13T23:57:50",{"id":184,"version":185,"summary_zh":186,"released_at":187},223032,"v0.0.12","修复了 `tensorflow` 和 `tensorflow_probability` 之间的版本不兼容问题。","2022-10-07T13:31:59",{"id":189,"version":190,"summary_zh":75,"released_at":191},223033,"v0.0.12-rc.0","2022-10-07T03:58:46",{"id":193,"version":194,"summary_zh":195,"released_at":196},223034,"v0.0.11","* 修复了在执行 `tfp.sts.regularize_series` 操作后，数据中插入 `NaN` 值的相关问题。","2022-05-03T15:30:16",{"id":198,"version":199,"summary_zh":75,"released_at":200},223035,"v0.0.11rc0","2022-05-03T05:57:27",{"id":202,"version":203,"summary_zh":204,"released_at":205},223036,"v0.0.10","* 修复了数组索引处的 NumPy 输入 bug\n\n* 完善了单元测试\n\n* 为自定义模型提供了更完善的文档字符串。更新了 `getting_started.ipynb`，使用了最新代码。\n\n* 将 CI\u002FCD 迁移到 GitHub 
Actions","2022-04-11T16:51:17",{"id":207,"version":208,"summary_zh":75,"released_at":209},223037,"v0.0.10rc2","2022-04-11T16:45:34",{"id":211,"version":212,"summary_zh":213,"released_at":214},223038,"v0.0.9","* 修复了 Python 版本指定中的 bug（`3.10.*` 无效，已更正为仅使用 `3.10`）。","2021-10-08T22:12:55",{"id":216,"version":217,"summary_zh":218,"released_at":219},223039,"v0.0.8","* Adapt to newer TFP requirements (input time series must have valid frequency).","2021-10-01T14:32:10",{"id":221,"version":222,"summary_zh":223,"released_at":224},223040,"v0.0.5","Fixes bug #17 (Integer columns names)","2021-05-21T01:13:37",{"id":226,"version":227,"summary_zh":75,"released_at":228},223041,"v0.0.4","2021-03-18T14:51:29",{"id":230,"version":231,"summary_zh":75,"released_at":232},223042,"v0.0.3","2021-02-27T01:05:33",{"id":234,"version":235,"summary_zh":236,"released_at":237},223043,"v0.0.2","Updated `LinearRegression` operation to `SparseLinearRegression` which uses horseshoe distribution.","2021-02-05T01:02:25"]