[{"data":1,"prerenderedAt":-1},["ShallowReactive",2],{"similar-tensorflow--probability":3,"tool-tensorflow--probability":64},[4,17,27,35,43,56],{"id":5,"name":6,"github_repo":7,"description_zh":8,"stars":9,"difficulty_score":10,"last_commit_at":11,"category_tags":12,"status":16},3808,"stable-diffusion-webui","AUTOMATIC1111\u002Fstable-diffusion-webui","stable-diffusion-webui 是一个基于 Gradio 构建的网页版操作界面，旨在让用户能够轻松地在本地运行和使用强大的 Stable Diffusion 图像生成模型。它解决了原始模型依赖命令行、操作门槛高且功能分散的痛点，将复杂的 AI 绘图流程整合进一个直观易用的图形化平台。\n\n无论是希望快速上手的普通创作者、需要精细控制画面细节的设计师，还是想要深入探索模型潜力的开发者与研究人员，都能从中获益。其核心亮点在于极高的功能丰富度：不仅支持文生图、图生图、局部重绘（Inpainting）和外绘（Outpainting）等基础模式，还独创了注意力机制调整、提示词矩阵、负向提示词以及“高清修复”等高级功能。此外，它内置了 GFPGAN 和 CodeFormer 等人脸修复工具，支持多种神经网络放大算法，并允许用户通过插件系统无限扩展能力。即使是显存有限的设备，stable-diffusion-webui 也提供了相应的优化选项，让高质量的 AI 艺术创作变得触手可及。",162132,3,"2026-04-05T11:01:52",[13,14,15],"开发框架","图像","Agent","ready",{"id":18,"name":19,"github_repo":20,"description_zh":21,"stars":22,"difficulty_score":23,"last_commit_at":24,"category_tags":25,"status":16},1381,"everything-claude-code","affaan-m\u002Feverything-claude-code","everything-claude-code 是一套专为 AI 编程助手（如 Claude Code、Codex、Cursor 等）打造的高性能优化系统。它不仅仅是一组配置文件，而是一个经过长期实战打磨的完整框架，旨在解决 AI 代理在实际开发中面临的效率低下、记忆丢失、安全隐患及缺乏持续学习能力等核心痛点。\n\n通过引入技能模块化、直觉增强、记忆持久化机制以及内置的安全扫描功能，everything-claude-code 能显著提升 AI 在复杂任务中的表现，帮助开发者构建更稳定、更智能的生产级 AI 代理。其独特的“研究优先”开发理念和针对 Token 消耗的优化策略，使得模型响应更快、成本更低，同时有效防御潜在的攻击向量。\n\n这套工具特别适合软件开发者、AI 研究人员以及希望深度定制 AI 工作流的技术团队使用。无论您是在构建大型代码库，还是需要 AI 协助进行安全审计与自动化测试，everything-claude-code 都能提供强大的底层支持。作为一个曾荣获 Anthropic 黑客大奖的开源项目，它融合了多语言支持与丰富的实战钩子（hooks），让 AI 真正成长为懂上",138956,2,"2026-04-05T11:33:21",[13,15,26],"语言模型",{"id":28,"name":29,"github_repo":30,"description_zh":31,"stars":32,"difficulty_score":23,"last_commit_at":33,"category_tags":34,"status":16},2271,"ComfyUI","Comfy-Org\u002FComfyUI","ComfyUI 是一款功能强大且高度模块化的视觉 AI 引擎，专为设计和执行复杂的 Stable Diffusion 图像生成流程而打造。它摒弃了传统的代码编写模式，采用直观的节点式流程图界面，让用户通过连接不同的功能模块即可构建个性化的生成管线。\n\n这一设计巧妙解决了高级 AI 绘图工作流配置复杂、灵活性不足的痛点。用户无需具备编程背景，也能自由组合模型、调整参数并实时预览效果，轻松实现从基础文生图到多步骤高清修复等各类复杂任务。ComfyUI 拥有极佳的兼容性，不仅支持 Windows、macOS 和 Linux 全平台，还广泛适配 NVIDIA、AMD、Intel 及苹果 Silicon 等多种硬件架构，并率先支持 SDXL、Flux、SD3 等前沿模型。\n\n无论是希望深入探索算法潜力的研究人员和开发者，还是追求极致创作自由度的设计师与资深 AI 绘画爱好者，ComfyUI 都能提供强大的支持。其独特的模块化架构允许社区不断扩展新功能，使其成为当前最灵活、生态最丰富的开源扩散模型工具之一，帮助用户将创意高效转化为现实。",107662,"2026-04-03T11:11:01",[13,14,15],{"id":36,"name":37,"github_repo":38,"description_zh":39,"stars":40,"difficulty_score":23,"last_commit_at":41,"category_tags":42,"status":16},3704,"NextChat","ChatGPTNextWeb\u002FNextChat","NextChat 是一款轻量且极速的 AI 助手，旨在为用户提供流畅、跨平台的大模型交互体验。它完美解决了用户在多设备间切换时难以保持对话连续性，以及面对众多 AI 模型不知如何统一管理的痛点。无论是日常办公、学习辅助还是创意激发，NextChat 都能让用户随时随地通过网页、iOS、Android、Windows、MacOS 或 Linux 端无缝接入智能服务。\n\n这款工具非常适合普通用户、学生、职场人士以及需要私有化部署的企业团队使用。对于开发者而言，它也提供了便捷的自托管方案，支持一键部署到 Vercel 或 Zeabur 等平台。\n\nNextChat 的核心亮点在于其广泛的模型兼容性，原生支持 Claude、DeepSeek、GPT-4 及 Gemini Pro 等主流大模型，让用户在一个界面即可自由切换不同 AI 能力。此外，它还率先支持 MCP（Model Context Protocol）协议，增强了上下文处理能力。针对企业用户，NextChat 提供专业版解决方案，具备品牌定制、细粒度权限控制、内部知识库整合及安全审计等功能，满足公司对数据隐私和个性化管理的高标准要求。",87618,"2026-04-05T07:20:52",[13,26],{"id":44,"name":45,"github_repo":46,"description_zh":47,"stars":48,"difficulty_score":23,"last_commit_at":49,"category_tags":50,"status":16},2268,"ML-For-Beginners","microsoft\u002FML-For-Beginners","ML-For-Beginners 是由微软推出的一套系统化机器学习入门课程，旨在帮助零基础用户轻松掌握经典机器学习知识。这套课程将学习路径规划为 12 周，包含 26 节精炼课程和 52 
道配套测验，内容涵盖从基础概念到实际应用的完整流程，有效解决了初学者面对庞大知识体系时无从下手、缺乏结构化指导的痛点。\n\n无论是希望转型的开发者、需要补充算法背景的研究人员，还是对人工智能充满好奇的普通爱好者，都能从中受益。课程不仅提供了清晰的理论讲解，还强调动手实践，让用户在循序渐进中建立扎实的技能基础。其独特的亮点在于强大的多语言支持，通过自动化机制提供了包括简体中文在内的 50 多种语言版本，极大地降低了全球不同背景用户的学习门槛。此外，项目采用开源协作模式，社区活跃且内容持续更新，确保学习者能获取前沿且准确的技术资讯。如果你正寻找一条清晰、友好且专业的机器学习入门之路，ML-For-Beginners 将是理想的起点。",84991,"2026-04-05T10:45:23",[14,51,52,53,15,54,26,13,55],"数据工具","视频","插件","其他","音频",{"id":57,"name":58,"github_repo":59,"description_zh":60,"stars":61,"difficulty_score":10,"last_commit_at":62,"category_tags":63,"status":16},3128,"ragflow","infiniflow\u002Fragflow","RAGFlow 是一款领先的开源检索增强生成（RAG）引擎，旨在为大语言模型构建更精准、可靠的上下文层。它巧妙地将前沿的 RAG 技术与智能体（Agent）能力相结合，不仅支持从各类文档中高效提取知识，还能让模型基于这些知识进行逻辑推理和任务执行。\n\n在大模型应用中，幻觉问题和知识滞后是常见痛点。RAGFlow 通过深度解析复杂文档结构（如表格、图表及混合排版），显著提升了信息检索的准确度，从而有效减少模型“胡编乱造”的现象，确保回答既有据可依又具备时效性。其内置的智能体机制更进一步，使系统不仅能回答问题，还能自主规划步骤解决复杂问题。\n\n这款工具特别适合开发者、企业技术团队以及 AI 研究人员使用。无论是希望快速搭建私有知识库问答系统，还是致力于探索大模型在垂直领域落地的创新者，都能从中受益。RAGFlow 提供了可视化的工作流编排界面和灵活的 API 接口，既降低了非算法背景用户的上手门槛，也满足了专业开发者对系统深度定制的需求。作为基于 Apache 2.0 协议开源的项目，它正成为连接通用大模型与行业专有知识之间的重要桥梁。",77062,"2026-04-04T04:44:48",[15,14,13,26,54],{"id":65,"github_repo":66,"name":67,"description_en":68,"description_zh":69,"ai_summary_zh":69,"readme_en":70,"readme_zh":71,"quickstart_zh":72,"use_case_zh":73,"hero_image_url":74,"owner_login":75,"owner_name":75,"owner_avatar_url":76,"owner_bio":77,"owner_company":78,"owner_location":78,"owner_email":79,"owner_twitter":78,"owner_website":80,"owner_url":81,"languages":82,"stars":99,"forks":100,"last_commit_at":101,"license":102,"difficulty_score":23,"env_os":103,"env_gpu":104,"env_ram":105,"env_deps":106,"category_tags":113,"github_topics":114,"view_count":122,"oss_zip_url":78,"oss_zip_packed_at":78,"status":16,"created_at":123,"updated_at":124,"faqs":125,"releases":151},895,"tensorflow\u002Fprobability","probability","Probabilistic reasoning and statistical analysis in TensorFlow","TensorFlow Probability（简称 TFP）是 TensorFlow 生态系统中的概率推理与统计分析库，让开发者能够在深度学习框架中轻松处理不确定性问题。\n\n传统神经网络输出确定性的预测结果，而现实世界的数据往往充满噪声和未知因素。TFP 将概率方法深度整合进 TensorFlow，使模型能够量化预测的不确定性，表达\"我有 80% 的把握认为结果是 A\"而非简单的\"结果是 A\"。这对于风险评估、决策系统、科学模拟等场景至关重要。\n\nTFP 采用分层架构设计：底层提供丰富的概率分布和可逆变换（Bijectors），支持从基础正态分布到复杂流模型的构建；中层支持联合分布建模和概率神经网络层；顶层则集成 MCMC 采样、变分推断等高级推理算法。值得一提的是，TFP 同时支持 JAX 后端，让偏好纯函数式编程的研究人员也能无缝使用。\n\n这款工具主要面向机器学习研究人员、数据科学家和算法工程师，尤其适合需要构建贝叶斯神经网络、生成模型或进行因果推断的开发者。借助自动微分和 GPU 加速，TFP 既能处理学术研究中的复杂模型，也能支撑工业级的大规模应用。","# TensorFlow Probability\n\nTensorFlow Probability is a library for probabilistic reasoning and statistical\nanalysis in TensorFlow. As part of the TensorFlow ecosystem, TensorFlow\nProbability provides integration of probabilistic methods with deep networks,\ngradient-based inference via automatic differentiation, and scalability to\nlarge datasets and models via hardware acceleration (e.g., GPUs) and distributed\ncomputation.\n\n__TFP also works as \"Tensor-friendly Probability\" in pure JAX!__:\n`from tensorflow_probability.substrates import jax as tfp` --\nLearn more [here](https:\u002F\u002Fwww.tensorflow.org\u002Fprobability\u002Fexamples\u002FTensorFlow_Probability_on_JAX).\n\nOur probabilistic machine learning tools are structured as follows.\n\n__Layer 0: TensorFlow.__ Numerical operations. In particular, the LinearOperator\nclass enables matrix-free implementations that can exploit special structure\n(diagonal, low-rank, etc.) for efficient computation. 
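As a minimal sketch of what matrix-free structure buys you (using `tf.linalg.LinearOperatorDiag`, one of the structured operators that ships in core TensorFlow; the sizes here are arbitrary):

```python
import tensorflow as tf

# A 3x3 diagonal operator: only the 3 diagonal entries are stored;
# the dense matrix is never materialized.
diag_op = tf.linalg.LinearOperatorDiag([1., 2., 3.])

# Matrix-vector products and log-determinants exploit the diagonal
# structure, running in O(n) rather than O(n^2) or O(n^3).
y = diag_op.matvec([1., 1., 1.])         # [1., 2., 3.]
logdet = diag_op.log_abs_determinant()   # log(1 * 2 * 3)
```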
It is built and maintained\nby the TensorFlow Probability team and is now part of\n[`tf.linalg`](https:\u002F\u002Fgithub.com\u002Ftensorflow\u002Ftensorflow\u002Ftree\u002Fmaster\u002Ftensorflow\u002Fpython\u002Fops\u002Flinalg)\nin core TF.\n\n__Layer 1: Statistical Building Blocks__\n\n* Distributions ([`tfp.distributions`](https:\u002F\u002Fgithub.com\u002Ftensorflow\u002Fprobability\u002Ftree\u002Fmain\u002Ftensorflow_probability\u002Fpython\u002Fdistributions)):\n  A large collection of probability\n  distributions and related statistics with batch and\n  [broadcasting](https:\u002F\u002Fdocs.scipy.org\u002Fdoc\u002Fnumpy\u002Fuser\u002Fbasics.broadcasting.html)\n  semantics. See the\n  [Distributions Tutorial](https:\u002F\u002Fgithub.com\u002Ftensorflow\u002Fprobability\u002Fblob\u002Fmain\u002Ftensorflow_probability\u002Fexamples\u002Fjupyter_notebooks\u002FTensorFlow_Distributions_Tutorial.ipynb).\n* Bijectors ([`tfp.bijectors`](https:\u002F\u002Fgithub.com\u002Ftensorflow\u002Fprobability\u002Ftree\u002Fmain\u002Ftensorflow_probability\u002Fpython\u002Fbijectors)):\n  Reversible and composable transformations of random variables. Bijectors\n  provide a rich class of transformed distributions, from classical examples\n  like the\n  [log-normal distribution](https:\u002F\u002Fen.wikipedia.org\u002Fwiki\u002FLog-normal_distribution)\n  to sophisticated deep learning models such as\n  [masked autoregressive flows](https:\u002F\u002Farxiv.org\u002Fabs\u002F1705.07057).\n\n__Layer 2: Model Building__\n\n* Joint Distributions (e.g., [`tfp.distributions.JointDistributionSequential`](https:\u002F\u002Fgithub.com\u002Ftensorflow\u002Fprobability\u002Ftree\u002Fmain\u002Ftensorflow_probability\u002Fpython\u002Fdistributions\u002Fjoint_distribution_sequential.py)):\n    Joint distributions over one or more possibly-interdependent distributions.\n    For an introduction to modeling with TFP's `JointDistribution`s, check out\n    [this colab](https:\u002F\u002Fgithub.com\u002Ftensorflow\u002Fprobability\u002Fblob\u002Fmain\u002Ftensorflow_probability\u002Fexamples\u002Fjupyter_notebooks\u002FModeling_with_JointDistribution.ipynb)\n* Probabilistic Layers ([`tfp.layers`](https:\u002F\u002Fgithub.com\u002Ftensorflow\u002Fprobability\u002Ftree\u002Fmain\u002Ftensorflow_probability\u002Fpython\u002Flayers)):\n  Neural network layers with uncertainty over the functions they represent,\n  extending TensorFlow Layers.\n\n__Layer 3: Probabilistic Inference__\n\n* Markov chain Monte Carlo ([`tfp.mcmc`](https:\u002F\u002Fgithub.com\u002Ftensorflow\u002Fprobability\u002Ftree\u002Fmain\u002Ftensorflow_probability\u002Fpython\u002Fmcmc)):\n  Algorithms for approximating integrals via sampling. Includes\n  [Hamiltonian Monte Carlo](https:\u002F\u002Fen.wikipedia.org\u002Fwiki\u002FHamiltonian_Monte_Carlo),\n  random-walk Metropolis-Hastings, and the ability to build custom transition\n  kernels.\n* Variational Inference ([`tfp.vi`](https:\u002F\u002Fgithub.com\u002Ftensorflow\u002Fprobability\u002Ftree\u002Fmain\u002Ftensorflow_probability\u002Fpython\u002Fvi)):\n  Algorithms for approximating integrals via optimization.\n* Optimizers ([`tfp.optimizer`](https:\u002F\u002Fgithub.com\u002Ftensorflow\u002Fprobability\u002Ftree\u002Fmain\u002Ftensorflow_probability\u002Fpython\u002Foptimizer)):\n  Stochastic optimization methods, extending TensorFlow Optimizers. 
Includes\n  [Stochastic Gradient Langevin Dynamics](http:\u002F\u002Fwww.icml-2011.org\u002Fpapers\u002F398_icmlpaper.pdf).\n* Monte Carlo ([`tfp.monte_carlo`](https:\u002F\u002Fgithub.com\u002Ftensorflow\u002Fprobability\u002Fblob\u002Fmain\u002Ftensorflow_probability\u002Fpython\u002Fmonte_carlo)):\n  Tools for computing Monte Carlo expectations.\n\nTensorFlow Probability is under active development. Interfaces may change at any\ntime.\n\n## Examples\n\nSee [`tensorflow_probability\u002Fexamples\u002F`](https:\u002F\u002Fgithub.com\u002Ftensorflow\u002Fprobability\u002Ftree\u002Fmain\u002Ftensorflow_probability\u002Fexamples\u002F)\nfor end-to-end examples. It includes tutorial notebooks such as:\n\n* [Linear Mixed Effects Models](https:\u002F\u002Fgithub.com\u002Ftensorflow\u002Fprobability\u002Fblob\u002Fmain\u002Ftensorflow_probability\u002Fexamples\u002Fjupyter_notebooks\u002FLinear_Mixed_Effects_Models.ipynb).\n  A hierarchical linear model for sharing statistical strength across examples.\n* [Eight Schools](https:\u002F\u002Fgithub.com\u002Ftensorflow\u002Fprobability\u002Fblob\u002Fmain\u002Ftensorflow_probability\u002Fexamples\u002Fjupyter_notebooks\u002FEight_Schools.ipynb).\n  A hierarchical normal model for exchangeable treatment effects.\n* [Hierarchical Linear Models](https:\u002F\u002Fgithub.com\u002Ftensorflow\u002Fprobability\u002Fblob\u002Fmain\u002Ftensorflow_probability\u002Fexamples\u002Fjupyter_notebooks\u002FHLM_TFP_R_Stan.ipynb).\n  Hierarchical linear models compared among TensorFlow Probability, R, and Stan.\n* [Bayesian Gaussian Mixture Models](https:\u002F\u002Fgithub.com\u002Ftensorflow\u002Fprobability\u002Fblob\u002Fmain\u002Ftensorflow_probability\u002Fexamples\u002Fjupyter_notebooks\u002FBayesian_Gaussian_Mixture_Model.ipynb).\n  Clustering with a probabilistic generative model.\n* [Probabilistic Principal Components Analysis](https:\u002F\u002Fgithub.com\u002Ftensorflow\u002Fprobability\u002Fblob\u002Fmain\u002Ftensorflow_probability\u002Fexamples\u002Fjupyter_notebooks\u002FProbabilistic_PCA.ipynb).\n  Dimensionality reduction with latent variables.\n* [Gaussian Copulas](https:\u002F\u002Fgithub.com\u002Ftensorflow\u002Fprobability\u002Fblob\u002Fmain\u002Ftensorflow_probability\u002Fexamples\u002Fjupyter_notebooks\u002FGaussian_Copula.ipynb).\n  Probability distributions for capturing dependence across random variables.\n* [TensorFlow Distributions: A Gentle Introduction](https:\u002F\u002Fgithub.com\u002Ftensorflow\u002Fprobability\u002Fblob\u002Fmain\u002Ftensorflow_probability\u002Fexamples\u002Fjupyter_notebooks\u002FTensorFlow_Distributions_Tutorial.ipynb).\n  Introduction to TensorFlow Distributions.\n* [Understanding TensorFlow Distributions Shapes](https:\u002F\u002Fgithub.com\u002Ftensorflow\u002Fprobability\u002Fblob\u002Fmain\u002Ftensorflow_probability\u002Fexamples\u002Fjupyter_notebooks\u002FUnderstanding_TensorFlow_Distributions_Shapes.ipynb).\n  How to distinguish between samples, batches, and events for arbitrarily shaped\n  probabilistic computations.\n* [TensorFlow Probability Case Study: Covariance Estimation](https:\u002F\u002Fgithub.com\u002Ftensorflow\u002Fprobability\u002Fblob\u002Fmain\u002Ftensorflow_probability\u002Fexamples\u002Fjupyter_notebooks\u002FTensorFlow_Probability_Case_Study_Covariance_Estimation.ipynb).\n  A user's case study in applying TensorFlow Probability to estimate covariances.\n\nIt also includes example scripts such as:\n\n* [Variational Autoencoder](https:\u002F\u002Fgithub.com\u002Ftensorflow\u002Fprobability\u002Ftree\u002Fmain\u002Ftensorflow_probability\u002Fexamples\u002Fvae.py).\n  Representation learning with a latent code and variational inference.\n* 
[Vector-Quantized Autoencoder](https:\u002F\u002Fgithub.com\u002Ftensorflow\u002Fprobability\u002Ftree\u002Fmain\u002Ftensorflow_probability\u002Fexamples\u002Fvq_vae.py).\n  Discrete representation learning with vector quantization.\n* [Disentangled Sequential Variational Autoencoder](https:\u002F\u002Fgithub.com\u002Ftensorflow\u002Fprobability\u002Ftree\u002Fmain\u002Ftensorflow_probability\u002Fexamples\u002Fdisentangled_vae.py)\n  Disentangled representation learning over sequences with variational inference.\n* [Bayesian Neural Networks](https:\u002F\u002Fgithub.com\u002Ftensorflow\u002Fprobability\u002Ftree\u002Fmain\u002Ftensorflow_probability\u002Fexamples\u002Fbayesian_neural_network.py).\n  Neural networks with uncertainty over their weights.\n* [Bayesian Logistic Regression](https:\u002F\u002Fgithub.com\u002Ftensorflow\u002Fprobability\u002Ftree\u002Fmain\u002Ftensorflow_probability\u002Fexamples\u002Flogistic_regression.py).\n  Bayesian inference for binary classification.\n\n## Installation\n\nFor additional details on installing TensorFlow, guidance installing\nprerequisites, and (optionally) setting up virtual environments, see the\n[TensorFlow installation guide](https:\u002F\u002Fwww.tensorflow.org\u002Finstall).\n\n### Stable Builds\n\nTo install the latest stable version, run the following:\n\n```shell\n# Notes:\n\n# - The `--upgrade` flag ensures you'll get the latest version.\n# - The `--user` flag ensures the packages are installed to your user directory\n#   rather than the system directory.\n# - TensorFlow 2 packages require a pip >= 19.0\npython -m pip install --upgrade --user pip\npython -m pip install --upgrade --user tensorflow tensorflow_probability\n```\n\nFor CPU-only usage (and a smaller install), install with `tensorflow-cpu`.\n\nTo use a pre-2.0 version of TensorFlow, run:\n\n```shell\npython -m pip install --upgrade --user \"tensorflow\u003C2\" \"tensorflow_probability\u003C0.9\"\n```\n\nNote: Since [TensorFlow](https:\u002F\u002Fwww.tensorflow.org\u002Finstall) is *not* included\nas a dependency of the TensorFlow Probability package (in `setup.py`), you must\nexplicitly install the TensorFlow package (`tensorflow` or `tensorflow-cpu`).\nThis allows us to maintain one package instead of separate packages for CPU and\nGPU-enabled TensorFlow. See the\n[TFP release notes](https:\u002F\u002Fgithub.com\u002Ftensorflow\u002Fprobability\u002Freleases) for more\ndetails about dependencies between TensorFlow and TensorFlow Probability.\n\n\n### Nightly Builds\n\nThere are also nightly builds of TensorFlow Probability under the pip package\n`tfp-nightly`, which depends on one of `tf-nightly` or `tf-nightly-cpu`.\nNightly builds include newer features, but may be less stable than the\nversioned releases. Both stable and nightly docs are available\n[here](https:\u002F\u002Fwww.tensorflow.org\u002Fprobability\u002Fapi_docs\u002Fpython\u002Ftfp?version=nightly).\n\n```shell\npython -m pip install --upgrade --user tf-nightly tfp-nightly\n```\n\n### Installing from Source\n\nYou can also install from source. This requires the [Bazel](\nhttps:\u002F\u002Fbazel.build\u002F) build system. It is highly recommended that you install\nthe nightly build of TensorFlow (`tf-nightly`) before trying to build\nTensorFlow Probability from source. 
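Whichever install route you take, a quick smoke test (a minimal sketch; the version strings will vary with your install) confirms that the `tensorflow` and `tensorflow_probability` packages you ended up with load together, since TF is not pulled in automatically as a TFP dependency:

```python
import tensorflow as tf
import tensorflow_probability as tfp

# Report the installed pair; mismatched versions typically fail here.
print("tf:", tf.__version__, "tfp:", tfp.__version__)

# One end-to-end check: build a distribution, sample it, score the sample.
dist = tfp.distributions.Normal(loc=0., scale=1.)
x = dist.sample(3)
print(dist.log_prob(x))
```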
The most recent version of Bazel that TFP\ncurrently supports is 6.4.0; support for 7.0.0+ is WIP.\n\n```shell\n# sudo apt-get install bazel git python-pip  # Ubuntu; others, see above links.\npython -m pip install --upgrade --user tf-nightly\ngit clone https:\u002F\u002Fgithub.com\u002Ftensorflow\u002Fprobability.git\ncd probability\nbazel build --copt=-O3 --copt=-march=native :pip_pkg\nPKGDIR=$(mktemp -d)\n.\u002Fbazel-bin\u002Fpip_pkg $PKGDIR\npython -m pip install --upgrade --user $PKGDIR\u002F*.whl\n```\n\n## Community\n\nAs part of TensorFlow, we're committed to fostering an open and welcoming\nenvironment.\n\n* [Stack Overflow](https:\u002F\u002Fstackoverflow.com\u002Fquestions\u002Ftagged\u002Ftensorflow): Ask\n  or answer technical questions.\n* [GitHub](https:\u002F\u002Fgithub.com\u002Ftensorflow\u002Fprobability\u002Fissues): Report bugs or\n  make feature requests.\n* [TensorFlow Blog](https:\u002F\u002Fblog.tensorflow.org\u002F): Stay up to date on content\n  from the TensorFlow team and best articles from the community.\n* [Youtube Channel](http:\u002F\u002Fyoutube.com\u002Ftensorflow\u002F): Follow TensorFlow shows.\n* [tfprobability@tensorflow.org](https:\u002F\u002Fgroups.google.com\u002Fa\u002Ftensorflow.org\u002Fforum\u002F#!forum\u002Ftfprobability):\n  Open mailing list for discussion and questions.\n\nSee the [TensorFlow Community](https:\u002F\u002Fwww.tensorflow.org\u002Fcommunity\u002F) page for\nmore details. Check out our latest publicity here:\n\n+ [Coffee with a Googler: Probabilistic Machine Learning in TensorFlow](\n  https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=BjUkL8DFH5Q)\n+ [Introducing TensorFlow Probability](\n  https:\u002F\u002Fmedium.com\u002Ftensorflow\u002Fintroducing-tensorflow-probability-dca4c304e245)\n\n## Contributing\n\nWe're eager to collaborate with you! See [`CONTRIBUTING.md`](CONTRIBUTING.md)\nfor a guide on how to contribute. This project adheres to TensorFlow's\n[code of conduct](CODE_OF_CONDUCT.md). By participating, you are expected to\nuphold this code.\n\n## References\n\nIf you use TensorFlow Probability in a paper, please cite:\n\n+ _TensorFlow Distributions._ Joshua V. Dillon, Ian Langmore, Dustin Tran,\nEugene Brevdo, Srinivas Vasudevan, Dave Moore, Brian Patton, Alex Alemi, Matt\nHoffman, Rif A. 
Saurous.\n[arXiv preprint arXiv:1711.10604, 2017](https:\u002F\u002Farxiv.org\u002Fabs\u002F1711.10604).\n\n(We're aware there's a lot more to TensorFlow Probability than Distributions, but the Distributions paper lays out our vision and is a fine thing to cite for now.)\n","# TensorFlow Probability\n\nTensorFlow Probability 是一个用于概率推理（probabilistic reasoning）和统计分析的 TensorFlow 库。作为 TensorFlow 生态系统的一部分，TensorFlow Probability 提供了概率方法与深度网络的集成、通过自动微分实现的基于梯度的推断，以及通过硬件加速（如 GPU）和分布式计算实现对大规模数据集和模型的扩展能力。\n\n__TFP 也可以作为 \"Tensor-friendly Probability\" 在纯 JAX 环境中运行！__：\n`from tensorflow_probability.substrates import jax as tfp` --\n了解更多请访问[此处](https:\u002F\u002Fwww.tensorflow.org\u002Fprobability\u002Fexamples\u002FTensorFlow_Probability_on_JAX)。\n\n我们的概率机器学习工具结构如下：\n\n__第 0 层：TensorFlow__。数值运算。特别是，LinearOperator 类支持无矩阵（matrix-free）实现，可以利用特殊结构（对角、低秩等）进行高效计算。它由 TensorFlow Probability 团队构建和维护，现已成为核心 TF 中 [`tf.linalg`](https:\u002F\u002Fgithub.com\u002Ftensorflow\u002Ftensorflow\u002Ftree\u002Fmaster\u002Ftensorflow\u002Fpython\u002Fops\u002Flinalg) 的一部分。\n\n__第 1 层：统计构建模块__\n\n* 分布（Distributions，[`tfp.distributions`](https:\u002F\u002Fgithub.com\u002Ftensorflow\u002Fprobability\u002Ftree\u002Fmain\u002Ftensorflow_probability\u002Fpython\u002Fdistributions)）：\n  大量概率分布和相关统计量，支持批处理和[广播](https:\u002F\u002Fdocs.scipy.org\u002Fdoc\u002Fnumpy\u002Fuser\u002Fbasics.broadcasting.html)语义。请参阅\n  [分布教程](https:\u002F\u002Fgithub.com\u002Ftensorflow\u002Fprobability\u002Fblob\u002Fmain\u002Ftensorflow_probability\u002Fexamples\u002Fjupyter_notebooks\u002FTensorFlow_Distributions_Tutorial.ipynb)。\n* 双射器（Bijectors，[`tfp.bijectors`](https:\u002F\u002Fgithub.com\u002Ftensorflow\u002Fprobability\u002Ftree\u002Fmain\u002Ftensorflow_probability\u002Fpython\u002Fbijectors)）：\n  随机变量的可逆且可组合变换。双射器提供了丰富的变换分布类别，从经典的\n  [对数正态分布](https:\u002F\u002Fen.wikipedia.org\u002Fwiki\u002FLog-normal_distribution)\n  到复杂的深度学习模型如\n  [掩码自回归流](https:\u002F\u002Farxiv.org\u002Fabs\u002F1705.07057)。\n\n__第 2 层：模型构建__\n\n* 联合分布（Joint Distributions，例如 [`tfp.distributions.JointDistributionSequential`](https:\u002F\u002Fgithub.com\u002Ftensorflow\u002Fprobability\u002Ftree\u002Fmain\u002Ftensorflow_probability\u002Fpython\u002Fdistributions\u002Fjoint_distribution_sequential.py)）：\n    一个或多个可能相互依赖的分布之上的联合分布。\n    关于使用 TFP 的 `JointDistribution` 进行建模的入门介绍，请查看\n    [此 colab](https:\u002F\u002Fgithub.com\u002Ftensorflow\u002Fprobability\u002Fblob\u002Fmain\u002Ftensorflow_probability\u002Fexamples\u002Fjupyter_notebooks\u002FModeling_with_JointDistribution.ipynb)\n* 概率层（Probabilistic Layers，[`tfp.layers`](https:\u002F\u002Fgithub.com\u002Ftensorflow\u002Fprobability\u002Ftree\u002Fmain\u002Ftensorflow_probability\u002Fpython\u002Flayers)）：\n  对其所表示函数具有不确定性的神经网络层，扩展了 TensorFlow 层。\n\n__第 3 层：概率推断__\n\n* 马尔可夫链蒙特卡罗（Markov chain Monte Carlo，[`tfp.mcmc`](https:\u002F\u002Fgithub.com\u002Ftensorflow\u002Fprobability\u002Ftree\u002Fmain\u002Ftensorflow_probability\u002Fpython\u002Fmcmc)）：\n  通过采样近似积分的算法。包括\n  [哈密顿蒙特卡罗](https:\u002F\u002Fen.wikipedia.org\u002Fwiki\u002FHamiltonian_Monte_Carlo)、\n  随机游走 Metropolis-Hastings，以及构建自定义转移核的能力。\n* 变分推断（Variational Inference，[`tfp.vi`](https:\u002F\u002Fgithub.com\u002Ftensorflow\u002Fprobability\u002Ftree\u002Fmain\u002Ftensorflow_probability\u002Fpython\u002Fvi)）：\n  通过优化近似积分的算法。\n* 优化器（Optimizers，[`tfp.optimizer`](https:\u002F\u002Fgithub.com\u002Ftensorflow\u002Fprobability\u002Ftree\u002Fmain\u002Ftensorflow_probability\u002Fpython\u002Foptimizer)）：\n  随机优化方法，扩展了 TensorFlow 优化器。包括\n  
[随机梯度朗之万动力学](http:\u002F\u002Fwww.icml-2011.org\u002Fpapers\u002F398_icmlpaper.pdf)。\n* 蒙特卡罗（Monte Carlo，[`tfp.monte_carlo`](https:\u002F\u002Fgithub.com\u002Ftensorflow\u002Fprobability\u002Fblob\u002Fmain\u002Ftensorflow_probability\u002Fpython\u002Fmonte_carlo)）：\n  计算蒙特卡罗期望的工具。\n\nTensorFlow Probability 正在积极开发中。接口可能随时更改。\n\n## 示例\n\n查看 [`tensorflow_probability\u002Fexamples\u002F`](https:\u002F\u002Fgithub.com\u002Ftensorflow\u002Fprobability\u002Ftree\u002Fmain\u002Ftensorflow_probability\u002Fexamples\u002F)\n获取端到端示例。其中包括以下教程笔记本：\n\n* [线性混合效应模型（Linear Mixed Effects Models）](https:\u002F\u002Fgithub.com\u002Ftensorflow\u002Fprobability\u002Fblob\u002Fmain\u002Ftensorflow_probability\u002Fexamples\u002Fjupyter_notebooks\u002FLinear_Mixed_Effects_Models.ipynb)。\n  一种分层线性模型，用于在样本间共享统计强度。\n* [八所学校（Eight Schools）](https:\u002F\u002Fgithub.com\u002Ftensorflow\u002Fprobability\u002Fblob\u002Fmain\u002Ftensorflow_probability\u002Fexamples\u002Fjupyter_notebooks\u002FEight_Schools.ipynb)。\n  一种用于可交换处理效应的分层正态模型。\n* [分层线性模型（Hierarchical Linear Models）](https:\u002F\u002Fgithub.com\u002Ftensorflow\u002Fprobability\u002Fblob\u002Fmain\u002Ftensorflow_probability\u002Fexamples\u002Fjupyter_notebooks\u002FHLM_TFP_R_Stan.ipynb)。\n  在 TensorFlow Probability、R 和 Stan 之间进行比较的分层线性模型。\n* [贝叶斯高斯混合模型（Bayesian Gaussian Mixture Models）](https:\u002F\u002Fgithub.com\u002Ftensorflow\u002Fprobability\u002Fblob\u002Fmain\u002Ftensorflow_probability\u002Fexamples\u002Fjupyter_notebooks\u002FBayesian_Gaussian_Mixture_Model.ipynb)。\n  使用概率生成模型进行聚类。\n* [概率主成分分析（Probabilistic Principal Components Analysis）](https:\u002F\u002Fgithub.com\u002Ftensorflow\u002Fprobability\u002Fblob\u002Fmain\u002Ftensorflow_probability\u002Fexamples\u002Fjupyter_notebooks\u002FProbabilistic_PCA.ipynb)。\n  使用潜变量进行降维。\n* [高斯连接函数（Gaussian Copulas）](https:\u002F\u002Fgithub.com\u002Ftensorflow\u002Fprobability\u002Fblob\u002Fmain\u002Ftensorflow_probability\u002Fexamples\u002Fjupyter_notebooks\u002FGaussian_Copula.ipynb)。\n  用于捕捉随机变量间依赖关系的概率分布。\n* [TensorFlow Distributions：温和入门](https:\u002F\u002Fgithub.com\u002Ftensorflow\u002Fprobability\u002Fblob\u002Fmain\u002Ftensorflow_probability\u002Fexamples\u002Fjupyter_notebooks\u002FTensorFlow_Distributions_Tutorial.ipynb)。\n  TensorFlow Distributions 简介。\n* [理解 TensorFlow Distributions 的形状](https:\u002F\u002Fgithub.com\u002Ftensorflow\u002Fprobability\u002Fblob\u002Fmain\u002Ftensorflow_probability\u002Fexamples\u002Fjupyter_notebooks\u002FUnderstanding_TensorFlow_Distributions_Shapes.ipynb)。\n  如何区分任意形状概率计算中的样本（samples）、批次（batches）和事件（events）。\n* [TensorFlow Probability 案例研究：协方差估计](https:\u002F\u002Fgithub.com\u002Ftensorflow\u002Fprobability\u002Fblob\u002Fmain\u002Ftensorflow_probability\u002Fexamples\u002Fjupyter_notebooks\u002FTensorFlow_Probability_Case_Study_Covariance_Estimation.ipynb)。\n  用户应用 TensorFlow Probability 进行协方差估计的案例研究。\n\n还包括以下示例脚本：\n\n  使用潜码和变分推断（variational inference）进行表示学习。\n* [向量量化自编码器（Vector-Quantized Autoencoder）](https:\u002F\u002Fgithub.com\u002Ftensorflow\u002Fprobability\u002Ftree\u002Fmain\u002Ftensorflow_probability\u002Fexamples\u002Fvq_vae.py)。\n  使用向量量化进行离散表示学习。\n* [解耦序列变分自编码器（Disentangled Sequential Variational Autoencoder）](https:\u002F\u002Fgithub.com\u002Ftensorflow\u002Fprobability\u002Ftree\u002Fmain\u002Ftensorflow_probability\u002Fexamples\u002Fdisentangled_vae.py)\n  使用变分推断对序列进行解耦表示学习。\n* [贝叶斯神经网络（Bayesian Neural 
Networks）](https:\u002F\u002Fgithub.com\u002Ftensorflow\u002Fprobability\u002Ftree\u002Fmain\u002Ftensorflow_probability\u002Fexamples\u002Fbayesian_neural_network.py)。\n  对权重具有不确定性的神经网络。\n* [贝叶斯逻辑回归（Bayesian Logistic Regression）](https:\u002F\u002Fgithub.com\u002Ftensorflow\u002Fprobability\u002Ftree\u002Fmain\u002Ftensorflow_probability\u002Fexamples\u002Flogistic_regression.py)。\n  用于二分类的贝叶斯推断。\n\n## 安装\n\n有关安装 TensorFlow 的更多详细信息、安装先决条件的指导以及（可选）设置虚拟环境，请参阅\n[TensorFlow 安装指南](https:\u002F\u002Fwww.tensorflow.org\u002Finstall)。\n\n### 稳定版本\n\n要安装最新的稳定版本，请运行以下命令：\n\n```shell\n# 注意事项：\n\n# - `--upgrade` 标志确保您获得最新版本。\n# - `--user` 标志确保软件包安装到您的用户目录\n#   而不是系统目录。\n# - TensorFlow 2 软件包需要 pip >= 19.0\npython -m pip install --upgrade --user pip\npython -m pip install --upgrade --user tensorflow tensorflow_probability\n```\n\n对于仅 CPU 使用（以及更小的安装），请使用 `tensorflow-cpu` 安装。\n\n要使用 TensorFlow 2.0 之前的版本，请运行：\n\n```shell\npython -m pip install --upgrade --user \"tensorflow\u003C2\" \"tensorflow_probability\u003C0.9\"\n```\n\n注意：由于 [TensorFlow](https:\u002F\u002Fwww.tensorflow.org\u002Finstall) *未*作为 TensorFlow Probability 软件包的依赖项包含在 `setup.py` 中，您必须显式安装 TensorFlow 软件包（`tensorflow` 或 `tensorflow-cpu`）。这使我们能够维护一个软件包，而不是分别为 CPU 和 GPU 启用的 TensorFlow 维护单独的软件包。有关 TensorFlow 和 TensorFlow Probability 之间依赖关系的更多详细信息，请参阅\n[TFP 发布说明](https:\u002F\u002Fgithub.com\u002Ftensorflow\u002Fprobability\u002Freleases)。\n\n### 夜间构建\n\nTensorFlow Probability 还有夜间构建版本，pip 软件包名为 `tfp-nightly`，它依赖于 `tf-nightly` 或 `tf-nightly-cpu` 之一。夜间构建包含较新的功能，但可能比版本化发布更不稳定。稳定版本和夜间版本的文档均可\n[在此](https:\u002F\u002Fwww.tensorflow.org\u002Fprobability\u002Fapi_docs\u002Fpython\u002Ftfp?version=nightly)获取。\n\n```shell\npython -m pip install --upgrade --user tf-nightly tfp-nightly\n```\n\n### 从源代码安装\n\n您也可以从源代码安装。这需要 [Bazel](\nhttps:\u002F\u002Fbazel.build\u002F) 构建系统。强烈建议在尝试从源代码构建 TensorFlow Probability 之前安装 TensorFlow 的夜间构建版本（`tf-nightly`）。TFP 当前支持的最新的 Bazel 版本是 6.4.0；对 7.0.0+ 的支持正在进行中（WIP, Work In Progress）。\n\n```shell\n# sudo apt-get install bazel git python-pip  # Ubuntu；其他系统，请参阅上述链接。\npython -m pip install --upgrade --user tf-nightly\ngit clone https:\u002F\u002Fgithub.com\u002Ftensorflow\u002Fprobability.git\ncd probability\nbazel build --copt=-O3 --copt=-march=native :pip_pkg\nPKGDIR=$(mktemp -d)\n.\u002Fbazel-bin\u002Fpip_pkg $PKGDIR\npython -m pip install --upgrade --user $PKGDIR\u002F*.whl\n```\n\n## 社区\n\n作为 TensorFlow 的一部分，我们致力于营造一个开放且友好的环境。\n\n* [Stack Overflow](https:\u002F\u002Fstackoverflow.com\u002Fquestions\u002Ftagged\u002Ftensorflow)：提出或回答技术问题。\n* [GitHub](https:\u002F\u002Fgithub.com\u002Ftensorflow\u002Fprobability\u002Fissues)：报告 bug 或提出功能需求。\n* [TensorFlow 博客](https:\u002F\u002Fblog.tensorflow.org\u002F)：获取 TensorFlow 团队的最新内容和社区精选文章。\n* [YouTube 频道](http:\u002F\u002Fyoutube.com\u002Ftensorflow\u002F)：关注 TensorFlow 相关节目。\n* [tfprobability@tensorflow.org](https:\u002F\u002Fgroups.google.com\u002Fa\u002Ftensorflow.org\u002Fforum\u002F#!forum\u002Ftfprobability)：开放的邮件列表，用于讨论和提问。\n\n更多详情请参见 [TensorFlow 社区](https:\u002F\u002Fwww.tensorflow.org\u002Fcommunity\u002F)页面。查看我们最新的公开报道：\n\n+ [Coffee with a Googler: TensorFlow 中的概率机器学习](\n  https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=BjUkL8DFH5Q)\n+ [TensorFlow Probability 简介](\n  https:\u002F\u002Fmedium.com\u002Ftensorflow\u002Fintroducing-tensorflow-probability-dca4c304e245)\n\n## 贡献\n\n我们期待与您合作！请参阅 [`CONTRIBUTING.md`](CONTRIBUTING.md) 了解如何贡献。本项目遵循 TensorFlow 的[行为准则](CODE_OF_CONDUCT.md)。参与本项目即表示您同意遵守该准则。\n\n## 参考文献\n\n如果您在论文中使用了 TensorFlow Probability，请引用：\n\n+ _TensorFlow 
Distributions._ Joshua V. Dillon, Ian Langmore, Dustin Tran,\nEugene Brevdo, Srinivas Vasudevan, Dave Moore, Brian Patton, Alex Alemi, Matt\nHoffman, Rif A. Saurous.\n[arXiv preprint arXiv:1711.10604, 2017](https:\u002F\u002Farxiv.org\u002Fabs\u002F1711.10604).\n\n（我们深知 TensorFlow Probability 的内容远不止 Distributions，但 Distributions 论文阐述了我们的愿景，目前是一个合适的引用来源。）","# TensorFlow Probability 快速上手指南\n\n## 环境准备\n\n### 系统要求\n- Python 3.7+\n- pip 19.0+（TensorFlow 2 必需）\n- 支持的操作系统：Linux、macOS、Windows\n\n### 前置依赖\n- **必须安装**：TensorFlow（`tensorflow` 或 `tensorflow-cpu`）\n- **可选**：GPU 支持需安装 CUDA\u002FcuDNN（参考 [TensorFlow GPU 指南](https:\u002F\u002Fwww.tensorflow.org\u002Finstall\u002Fgpu)）\n\n> 注意：TensorFlow Probability 不将 TensorFlow 作为依赖项自动安装，需手动指定。\n\n---\n\n## 安装步骤\n\n### 方式一：稳定版安装（推荐）\n\n```shell\n# 升级 pip\npython -m pip install --upgrade --user pip\n\n# 安装 TensorFlow + TensorFlow Probability\npython -m pip install --upgrade --user tensorflow tensorflow_probability\n```\n\n**仅 CPU 版本**（体积更小）：\n```shell\npython -m pip install --upgrade --user tensorflow-cpu tensorflow_probability\n```\n\n**国内镜像加速**（清华源）：\n```shell\npython -m pip install --upgrade --user -i https:\u002F\u002Fpypi.tuna.tsinghua.edu.cn\u002Fsimple \\\n    tensorflow tensorflow_probability\n```\n\n### 方式二：每日构建版（最新功能）\n\n```shell\npython -m pip install --upgrade --user tf-nightly tfp-nightly\n```\n\n### 验证安装\n\n```python\nimport tensorflow as tf\nimport tensorflow_probability as tfp\n\nprint(tfp.__version__)  # 应输出版本号\n```\n\n---\n\n## 基本使用\n\n### 示例 1：创建分布并采样\n\n```python\nimport tensorflow as tf\nimport tensorflow_probability as tfp\n\ntfd = tfp.distributions\n\n# 创建正态分布\nnormal = tfd.Normal(loc=0., scale=1.)\n\n# 采样\nsamples = normal.sample(1000)  # 1000 个样本\n\n# 计算对数概率密度\nlog_prob = normal.log_prob(0.0)\n```\n\n### 示例 2：JAX 后端使用（无 TensorFlow）\n\n```python\nimport jax\nfrom tensorflow_probability.substrates import jax as tfp\n\ntfd = tfp.distributions\n\n# 同样 API，底层使用 JAX\nnormal = tfd.Normal(loc=0., scale=1.)\nsamples = normal.sample(seed=jax.random.PRNGKey(42))  # JAX 后端需显式传入 PRNGKey 形式的 seed\n```\n\n### 示例 3：简单贝叶斯推断（MCMC）\n\n```python\nimport tensorflow as tf\nimport tensorflow_probability as tfp\n\ntfd = tfp.distributions\n\n# 定义目标分布（标准正态的未归一化对数密度）\ntarget_log_prob = lambda x: -0.5 * tf.square(x)\n\n# HMC 采样器\nhmc = tfp.mcmc.HamiltonianMonteCarlo(\n    target_log_prob_fn=target_log_prob,\n    num_leapfrog_steps=3,\n    step_size=0.1\n)\n\n# 运行 MCMC（简化示例：从 0 初始化，预热 500 步后采 1000 个样本）\nsamples = tfp.mcmc.sample_chain(\n    num_results=1000,\n    num_burnin_steps=500,\n    current_state=tf.zeros([]),\n    kernel=hmc,\n    trace_fn=None\n)\n```\n\n---\n\n## 下一步\n\n| 资源 | 链接 |\n|:---|:---|\n| 官方教程 | [GitHub Examples](https:\u002F\u002Fgithub.com\u002Ftensorflow\u002Fprobability\u002Ftree\u002Fmain\u002Ftensorflow_probability\u002Fexamples\u002Fjupyter_notebooks) |\n| API 文档 | https:\u002F\u002Fwww.tensorflow.org\u002Fprobability\u002Fapi_docs\u002Fpython\u002Ftfp |\n| 分布形状指南 | [Understanding TensorFlow Distributions Shapes](https:\u002F\u002Fgithub.com\u002Ftensorflow\u002Fprobability\u002Fblob\u002Fmain\u002Ftensorflow_probability\u002Fexamples\u002Fjupyter_notebooks\u002FUnderstanding_TensorFlow_Distributions_Shapes.ipynb) |","某金融科技公司的风控团队正在开发一套**小微企业信贷违约预测模型**，需要在预测违约概率的同时，量化模型不确定性，为授信决策提供置信区间参考。\n\n### 没有 probability 时\n\n- 团队只能用传统神经网络输出\"点估计\"违约概率（如0.73），无法告知业务方\"这个预测有多可靠\"，导致高风险客户被误判为优质客户\n- 为获取不确定性估计，工程师被迫手动实现贝叶斯神经网络，需从零编写变分推断代码，开发周期长达2个月\n- 模型上线后，面对10万+企业客户的批量预测，Python原生采样代码运行缓慢，无法利用GPU加速，单次全量评估需6小时\n- 当业务要求\"如果企业营收下降30%，违约概率如何变化\"时，团队缺乏概率编程工具，无法便捷地做反事实推断\n\n### 使用 probability 后\n\n- 通过`tfp.layers.DenseVariational`快速构建贝叶斯神经网络层，模型天然输出预测分布，业务方可直接获取\"违约概率90%置信区间为[0.58, 0.88]\"的风险提示\n- 利用`tfp.distributions`和`tfp.vi`内置的变分推断算法，3天内完成模型搭建与训练，自动微分机制免除了手动推导梯度的繁琐工作\n- 
借助TensorFlow的GPU加速能力，结合`tfp.mcmc`的并行采样，全量评估时间从6小时压缩至15分钟，轻松支持日频风控决策\n- 通过`tfp.distributions.JointDistributionSequential`构建因果概率图模型，一行代码即可实现条件概率查询，快速响应业务方的敏感性分析需求\n\n**核心价值**：probability 让团队以生产级效率将贝叶斯方法融入深度学习流水线，把\"不确定性量化\"从学术难题转化为可落地的风控标准能力。","https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Ftensorflow_probability_f367d0a7.png","tensorflow","https:\u002F\u002Foss.gittoolsai.com\u002Favatars\u002Ftensorflow_07ed5093.png","",null,"github-admin@tensorflow.org","http:\u002F\u002Fwww.tensorflow.org","https:\u002F\u002Fgithub.com\u002Ftensorflow",[83,87,91,95],{"name":84,"color":85,"percentage":86},"Jupyter Notebook","#DA5B0B",74.6,{"name":88,"color":89,"percentage":90},"Python","#3572A5",24.4,{"name":92,"color":93,"percentage":94},"Starlark","#76d275",1,{"name":96,"color":97,"percentage":98},"Shell","#89e051",0,4414,1119,"2026-04-04T03:58:07","Apache-2.0","Linux, macOS, Windows","可选，支持 GPU 加速（如 NVIDIA GPU），需安装对应版本的 TensorFlow GPU 版本","未说明",{"notes":107,"python":105,"dependencies":108},"TensorFlow 需单独安装，TFP 不将其作为依赖项；支持纯 JAX 后端（tensorflow_probability.substrates.jax）；源码编译需要 Bazel 构建系统，最高支持 Bazel 6.4.0；夜间版（tfp-nightly）可能不够稳定",[109,110,111,112],"tensorflow>=2.0","tensorflow-cpu（CPU 版本可选）","tf-nightly（夜间版）","bazel\u003C=6.4.0（源码编译）",[54,13,51],[75,115,116,117,118,119,120,121],"bayesian-methods","deep-learning","machine-learning","data-science","neural-networks","statistics","probabilistic-programming",4,"2026-03-27T02:49:30.150509","2026-04-06T06:55:14.783828",[126,131,136,141,146],{"id":127,"question_zh":128,"answer_zh":129,"source_url":130},3883,"TFP 与 Stan 的 CholeskyLKJ 对数概率计算结果不一致，如何解决？","这种差异通常源于对数行列式雅可比（log_det_jacobian）的不同处理方式。在 TFP 中，使用 `tfb.CorrelationCholesky()` 作为 LKJ 相关矩阵的默认双射器（bijector）是安全的选择。如果需要 concentration \u003C 1 的情况，可以使用 `tfb.Chain([tfb.CorrelationCholesky(), tfb.Tanh()])` 作为替代方案。建议通过实际模型验证（如 McElreath 的咖啡馆模型）来确认实现正确性。","https:\u002F\u002Fgithub.com\u002Ftensorflow\u002Fprobability\u002Fissues\u002F694",{"id":132,"question_zh":133,"answer_zh":134,"source_url":135},3884,"如何将无约束向量转换为 LKJ 分布的相关矩阵（Cholesky 分解形式）？","TFP 已提供 `tfb.CorrelationCholesky` 双射器来实现这一转换。该双射器可以将无约束实数向量映射到有效的 Cholesky 分解相关矩阵。使用方法：\n```python\nimport tensorflow_probability as tfp\ntfb = tfp.bijectors\n# 创建双射器\nbijector = tfb.CorrelationCholesky()\n# 正向转换：无约束向量 -> Cholesky 矩阵\nL = bijector.forward(unconstrained_vector)\n# 逆向转换：Cholesky 矩阵 -> 无约束向量\nunconstrained = bijector.inverse(L)\n```\n该实现包含完整的正向\u002F逆向变换以及对数行列式雅可比计算。","https:\u002F\u002Fgithub.com\u002Ftensorflow\u002Fprobability\u002Fissues\u002F400",{"id":137,"question_zh":138,"answer_zh":139,"source_url":140},3885,"如何计算高斯混合模型（GMM）之间的 KL 散度？","TFP 原生不支持 MixtureSameFamily 之间的 KL 散度计算，但可以通过蒙特卡洛（MC）方法注册自定义 KL 实现。具体步骤：\n\n1. 使用 `tfd.Independent` 重新解释批次维度为事件维度：\n```python\nprior = tfd.Independent(your_prior, reinterpreted_batch_ndims=3)\n```\n\n2. 注册 MC KL 方法（参考实现）：https:\u002F\u002Fcolab.research.google.com\u002Fdrive\u002F1RHx23ViJL-1SIgVYbUjB8QjCeQ9b2fVS\n\n3. 
确保先验和后验的形状匹配：如果使用变分层，注意 `kernel_prior_fn` 返回的分布应使用 `batch_shape`，而默认后验（Independent）使用 `event_shape`，需要通过 `reinterpreted_batch_ndims` 调整。","https:\u002F\u002Fgithub.com\u002Ftensorflow\u002Fprobability\u002Fissues\u002F199",{"id":142,"question_zh":143,"answer_zh":144,"source_url":145},3886,"Gamma 分布如何使用隐式重参数化梯度（Implicit Reparameterization Gradients）？","TFP 的 Gamma 分布已支持可微分采样，可通过隐式重参数化计算梯度。对于截断 Gamma 分布，由于 Gamma 现已实现可微分位函数（quantile function），可以使用以下方法：\n\n```python\nimport tensorflow as tf\nimport tensorflow_probability as tfp\n\n# 标准 Gamma 重参数化采样（参数此处用示意值）\ngamma = tfp.distributions.Gamma(concentration=2.0, rate=1.0)\nsample = gamma.sample()  # 隐式重参数化，支持梯度传播\n\n# 截断 Gamma 的近似方法\n# 参考 Issue #1135 的方法，结合可微分位函数实现\n```\n\n注意：避免在梯度计算中使用 `persistent=True` 的 GradientTape，否则会导致二阶导数计算问题，触发 LookupError。","https:\u002F\u002Fgithub.com\u002Ftensorflow\u002Fprobability\u002Fissues\u002F51",{"id":147,"question_zh":148,"answer_zh":149,"source_url":150},3887,"TensorFlow 2.0 中遇到 \"Tensor is unhashable if Tensor equality is enabled\" 错误如何解决？","这是 TensorFlow 2.0 与早期 TFP 版本的兼容性问题。错误发生在启用张量相等性（Tensor equality）时，张量不可哈希。解决方案：\n\n1. **升级版本**：将 TensorFlow 升级到 2.0 正式版或更高版本，TFP 升级到 0.8 或更高版本。\n\n2. **临时禁用张量相等性**（不推荐长期使用）：\n```python\nimport tensorflow as tf\ntf.compat.v1.disable_tensor_equality()\n```\n\n3. **代码修改**：如果必须保持旧版本，避免在需要哈希的操作（如字典键、集合元素）中直接使用张量，改用 `tensor.experimental_ref()` 作为键：\n```python\n# 替代方案\nkey = tensor.experimental_ref()\n```","https:\u002F\u002Fgithub.com\u002Ftensorflow\u002Fprobability\u002Fissues\u002F540",[152,157,162,167,172,177,182,187,192,197,202,207,212,217,222,227,232,237,242,247],{"id":153,"version":154,"summary_zh":155,"released_at":156},103371,"v0.14.0","# Release notes\r\n\r\nThis is the 0.14 release of TensorFlow Probability. It is tested and stable against TensorFlow version 2.6.0 and JAX 0.2.20.\r\n\r\n\r\n## Change notes\r\n\r\nPlease see the release notes for TFP 0.14.1 at https:\u002F\u002Fgithub.com\u002Ftensorflow\u002Fprobability\u002Freleases\u002Fv0.14.1 .\r\n\r\n## Huge thanks to all the contributors to this release!\r\n\r\n  -  8bitmp3\r\n  -  adriencorenflos\r\n  -  allenl\r\n  -  axch\r\n  -  bjp\r\n  -  blamb\r\n  -  csuter\r\n  -  colcarroll\r\n  -  davmre\r\n  -  derifatives\r\n  -  emilyaf\r\n  -  europeanplaice\r\n  -  Frightera\r\n  -  fmuham\r\n  -  gcluo\r\n  -  GianluigiSilvestri\r\n  -  gisilvs\r\n  -  gjt\r\n  -  grisaitis\r\n  -  harahu\r\n  -  jburnim\r\n  -  langmore\r\n  -  leben\r\n  -  lukewood\r\n  -  mihaimaruseac\r\n  -  NeilGirdhar\r\n  -  phandu\r\n  -  phawkins\r\n  -  rechen\r\n  -  ronshapiro\r\n  -  scottzhu\r\n  -  sharadmv\r\n  -  siege\r\n  -  srvasude\r\n  -  ursk\r\n  -  vanderplas\r\n  -  xingyousong\r\n  -  yileiyang\r\n","2021-09-21T04:38:21",{"id":158,"version":159,"summary_zh":160,"released_at":161},103372,"v0.13.0","# Release notes\r\n\r\nThis is the 0.13 release of TensorFlow Probability. 
It is\r\ntested and stable against TensorFlow version 2.5.0.\r\n\r\nSee the visual release notebook in [colab](https:\u002F\u002Fcolab.research.google.com\u002Fgithub\u002Ftensorflow\u002Fprobability\u002Fblob\u002Fmaster\u002Ftensorflow_probability\u002Fexamples\u002Fjupyter_notebooks\u002FTFP_Release_Notebook_0_13_0.ipynb).\r\n\r\n## Change notes\r\n\r\n- Distributions\r\n  - Adds `tfd.BetaQuotient`\r\n  - Adds `tfd.DeterminantalPointProcess`\r\n  - Adds `tfd.ExponentiallyModifiedGaussian`\r\n  - Adds `tfd.MatrixNormal` and `tfd.MatrixT`\r\n  - Adds `tfd.NormalInverseGaussian`\r\n  - Adds `tfd.SigmoidBeta`\r\n  - Adds `tfp.experimental.distribute.Sharded`\r\n  - Adds `tfd.BatchBroadcast`\r\n  - Adds `tfd.Masked`\r\n  - Adds JAX support for `tfd.Zipf`\r\n  - Adds Implicit Reparameterization Gradients to `tfd.InverseGaussian`.\r\n  - Adds quantiles for `tfd.{Chi2,ExpGamma,Gamma,GeneralizedNormal,InverseGamma}`\r\n  - Derive `Distribution` batch shapes automatically from parameter annotations.\r\n  - Ensuring `Exponential.cdf(x)` is always 0 for `x \u003C 0`.\r\n  - `VectorExponentialLinearOperator` and `VectorExponentialDiag` distributions now return variance, covariance, and standard deviation of the correct shape.\r\n  - `Bates` distribution now returns mean of the correct shape.\r\n  - `GeneralizedPareto` now returns variance of the correct shape.\r\n  - `Deterministic` distribution now returns mean, mode, and variance of the correct shape.\r\n  - Ensure that `JointDistributionPinned`'s support bijectors respect autobatching.\r\n  - Now systematically testing log_probs of most distributions for numerical accuracy.\r\n  - `InverseGaussian` no longer emits negative samples for large `loc \u002F concentration`\r\n  - `GammaGamma`, `GeneralizedExtremeValue`, `LogLogistic`, `LogNormal`, `ProbitBernoulli` should no longer compute `nan` log_probs on their own samples.  `VonMisesFisher`, `Pareto`, and `GeneralizedExtremeValue` should no longer emit samples numerically outside their support.\r\n  - Improve numerical stability of `tfd.ContinuousBernoulli` and deprecate `lims` parameter.\r\n\r\n- Bijectors\r\n  - Add bijectors to mimic `tf.nest.flatten` (`tfb.tree_flatten`) and `tf.nest.pack_sequence_as` (`tfb.pack_sequence_as`).\r\n  - Adds `tfp.experimental.bijectors.Sharded`\r\n  - Remove deprecated `tfb.ScaleTrilL`.  Use `tfb.FillScaleTriL` instead.\r\n  - Adds `cls.parameter_properties()` annotations for Bijectors.\r\n  - Extend range `tfb.Power` to all reals for odd integer powers.\r\n  - Infer the log-deg-jacobian of scalar bijectors using autodiff, if not otherwise specified.\r\n\r\n- MCMC\r\n  - MCMC diagnostics support arbitrary structures of states, not just lists.\r\n  - `remc_thermodynamic_integrals` added to `tfp.experimental.mcmc`\r\n  - Adds `tfp.experimental.mcmc.windowed_adaptive_hmc`\r\n  - Adds an experimental API for initializing a Markov chain from a near-zero uniform distribution in unconstrained space. `tfp.experimental.mcmc.init_near_unconstrained_zero`\r\n  - Adds an experimental utility for retrying Markov Chain initialization until an acceptable point is found. 
`tfp.experimental.mcmc.retry_init`\r\n  - Shuffling experimental streaming MCMC API to slot into tfp.mcmc with a minimum of disruption.\r\n  - Adds `ThinningKernel` to `experimental.mcmc`.\r\n  - Adds `experimental.mcmc.run_kernel` driver as a candidate streaming-based replacement to `mcmc.sample_chain`\r\n\r\n- VI\r\n  - Adds `build_split_flow_surrogate_posterior` to `tfp.experimental.vi` to build structured VI surrogate posteriors from normalizing flows.\r\n  - Adds `build_affine_surrogate_posterior` to `tfp.experimental.vi` for construction of ADVI surrogate posteriors from an event shape.\r\n  - Adds `build_affine_surrogate_posterior_from_base_distribution` to `tfp.experimental.vi` to enable construction of ADVI surrogate posteriors with correlation structures induced by affine transformations.\r\n\r\n- MAP\u002FMLE\r\n  - Added convenience method `tfp.experimental.util.make_trainable(cls)` to create trainable instances of distributions and bijectors.\r\n\r\n- Math\u002Flinalg\r\n  - Add trapezoidal rule to tfp.math.\r\n  - Add `tfp.math.log_bessel_kve`.\r\n  - Add `no_pivot_ldl` to `experimental.linalg`.\r\n  - Add `marginal_fn` argument to `GaussianProcess` (see `no_pivot_ldl`).\r\n  - Added `tfp.math.atan_difference(x, y)`\r\n  - Add `tfp.math.erfcx`, `tfp.math.logerfc` and `tfp.math.logerfcx`\r\n  - Add `tfp.math.dawsn` for Dawson's Integral.\r\n  - Add `tfp.math.igammaincinv`, `tfp.math.igammacinv`.\r\n  - Add `tfp.math.sqrt1pm1`.\r\n  - Add `LogitNormal.stddev_approx` and `LogitNormal.variance_approx`\r\n  - Add `tfp.math.owens_t` for the Owen's T function.\r\n  - Add `bracket_root` method to automatically initialize bounds for a root search.\r\n  - Add Chandrupatla's method for finding roots of scalar functions.\r\n\r\n- Stats\r\n  - `tfp.stats.windowed_mean` efficiently computes windowed means.\r\n  - `tfp.stats.windowed_variance` efficiently and accurately computes windowed variances.\r\n  - `tfp.stats.cumulative_variance` efficiently and accurately computes cumulative v","2021-06-18T21:06:22",{"id":163,"version":164,"summary_zh":165,"released_at":166},103373,"0.13.0-rc0","This is the RC0 release candidate of the TensorFlow Probability 0.13 release.\r\n\r\nIt is tested against TensorFlow 2.5.0.","2021-05-24T14:04:59",{"id":168,"version":169,"summary_zh":170,"released_at":171},103374,"v0.12.2","This is the 0.12.2 release of TensorFlow Probability, a patch release to cap the JAX dependency to a compatible version. It is tested and stable against TensorFlow version 2.4.0.\r\n\r\nFor detailed change notes, please see the 0.12.1 release at https:\u002F\u002Fgithub.com\u002Ftensorflow\u002Fprobability\u002Freleases\u002Ftag\u002Fv0.12.1 .","2021-04-19T23:03:58",{"id":173,"version":174,"summary_zh":175,"released_at":176},103363,"v0.21.0","# Release notes\r\n\r\nThis is the 0.21.0 release of TensorFlow Probability. 
It is tested and stable against TensorFlow version 2.13 and JAX 0.4.14 .\r\n\r\n## Change notes\r\n[no major changes]\r\n\r\n\r\n## Huge thanks to all the contributors to this release!\r\n\r\n  -  bjp \r\n  -  chansoo \r\n  -  colcarroll \r\n  -  emilyaf \r\n  -  feyu \r\n  -  flang \r\n  -  Jacob Burnim \r\n  -  jburnim \r\n  -  jcater \r\n  -  juanantoniomc \r\n  -  Matthew Feickert \r\n  -  oskarfernlund \r\n  -  phawkins \r\n  -  schwartzedward \r\n  -  siege \r\n  -  Srinivas Vasudevan \r\n  -  ursk \r\n\r\n\r\n","2023-08-04T17:50:20",{"id":178,"version":179,"summary_zh":180,"released_at":181},103358,"v0.25.0","# Release notes\r\n\r\nThis is the 0.25 release of TensorFlow Probability. It is\r\ntested and stable against TensorFlow version 2.18 and JAX 0.4.35.\r\n\r\nNOTE: In TensorFlow 2.16+, tf.keras (and tf.initializers, tf.losses, and tf.optimizers) refers to Keras 3. TensorFlow Probability is not compatible with Keras 3 -- instead TFP is continuing to use Keras 2, which is now packaged as tf-keras and tf-keras-nightly and is imported as tf_keras. When using TensorFlow Probability with TensorFlow, you must explicitly install Keras 2 along with TensorFlow (or install tensorflow-probability[tf] or tfp-nightly[tf] to automatically install these dependencies.)\r\n\r\n## Change notes\r\n\r\n  - Add mean + variance to tfd.Categorical.\r\n\r\n## Huge thanks to all the contributors to this release!\r\n\r\n  -  bjp\r\n  -  Chris Jewell\r\n  -  Christopher Suter\r\n  -  colcarroll\r\n  -  emilyaf\r\n  -  feyu\r\n  -  jburnim\r\n  -  leben\r\n  -  lukes\r\n  -  mrry\r\n  -  phawkins\r\n  -  siege\r\n  -  Srinivas Vasudevan\r\n  -  swijaya\r\n  -  thomaswc\r\n  -  ursk\r\n  -  vanderplas","2024-11-08T16:30:13",{"id":183,"version":184,"summary_zh":185,"released_at":186},103359,"v0.24.0","# Release notes\r\n\r\nThis is the 0.24.0 release of TensorFlow Probability. It is tested and stable against TensorFlow 2.16.1 and JAX 0.4.25 .\r\n\r\nNOTE: In TensorFlow 2.16+, `tf.keras` (and `tf.initializers`, `tf.losses`, and `tf.optimizers`) refers to Keras 3.  TensorFlow Probability is not compatible with Keras 3 -- instead TFP is continuing to use Keras 2, which is now packaged as `tf-keras` and `tf-keras-nightly` and is imported as `tf_keras`.  When using TensorFlow Probability with TensorFlow, you must explicitly install Keras 2 along with TensorFlow (or install `tensorflow-probability[tf]` or `tfp-nightly[tf]` to automatically install these dependencies.)\r\n\r\n\r\n## Change notes\r\n\r\n   - TensorFlow Probability now supports Python 3.12.\r\n     * But note that many parts of `tfp.layers` and `tfp.experimental.nn` will raise errors because of a TensorFlow + wrapt bug (see https:\u002F\u002Fgithub.com\u002Ftensorflow\u002Ftensorflow\u002Fissues\u002F60687 ), which can be worked around by setting the environment variable `WRAPT_DISABLE_EXTENSIONS=true`.\r\n\r\n  - Added an experimental implementation of Chopin, Jacob, Papaspiliopoulos, \"SMC^2: an efficient algorithm for sequential analysis of state-space models\", Journal of the Royal Statistical Society Series B: Statistical Methodology 75.3 (2013).  See https:\u002F\u002Fgithub.com\u002Ftensorflow\u002Fprobability\u002Fblob\u002Fv0.24.0\u002Ftensorflow_probability\u002Fpython\u002Fexperimental\u002Fmcmc\u002Fparticle_filter.py#L766 .\r\n\r\n  - Added `tfp.experimental.fastgp`, a library for approximately training and evaluating Gaussian Processes in sub-O(n^3) time. 
\r\n See https:\u002F\u002Fgithub.com\u002Ftensorflow\u002Fprobability\u002Ftree\u002Fr0.24\u002Ftensorflow_probability\u002Fpython\u002Fexperimental\u002Ffastgp .\r\n\r\n\r\n## Huge thanks to all the contributors to this release!\r\n\r\n  -  Alessandro Slamitz\r\n  -  Christopher Suter\r\n  -  Colin Carroll\r\n  -  Emily Fertig\r\n  -  Gareth Williams\r\n  -  Jacob Burnim\r\n  -  Jake VanderPlas\r\n  -  Matthew Feickert\r\n  -  Pavel Sountsov\r\n  -  Richard Levasseur\r\n  -  Srinivas Vasudevan\r\n  -  Thomas Colthurst\r\n  -  Urs Köster\r\n ","2024-03-12T19:43:46",{"id":188,"version":189,"summary_zh":190,"released_at":191},103360,"v0.23.0","# Release notes\r\n\r\nThis is the 0.23.0 release of TensorFlow Probability. It is tested and stable against TensorFlow 2.15.0 and JAX 0.4.20 .\r\n\r\n\r\n## Change notes\r\n\r\n[coming soon]\r\n\r\n## Huge thanks to all the contributors to this release!\r\n\r\n  - Christopher Suter\r\n  - Colin Carroll\r\n  - Jacob Burnim\r\n  - Juan Martinez\r\n  - Sergei Lebedev\r\n  - Sophia Gu\r\n  - Srinivas Vasudevan\r\n","2023-11-20T23:32:05",{"id":193,"version":194,"summary_zh":195,"released_at":196},103361,"v0.22.1","# Release notes\r\n\r\nThis is the 0.22.1 release of TensorFlow Probability. It is tested and stable against TensorFlow 2.14.0 and JAX 0.4.16 and 0.4.19 .\r\n\r\n\r\n## Change notes\r\n\r\nSee the release note for TFP 0.22.0 at https:\u002F\u002Fgithub.com\u002Ftensorflow\u002Fprobability\u002Freleases\u002Ftag\u002Fv0.22.0 .\r\n\r\nFixes some NumPy deprecation warnings by no longer casting size-1 arrays to ints.\r\n\r\nDependency typing_extensions is no longer pinned to \u003C4.6.0.\r\n\r\nSupport for Python 3.8 has been removed starting with TensorFlow Probability 0.22.0.\r\n\r\n\r\n## Huge thanks to all the contributors to this release!\r\n\r\n  -  Brian Patton\r\n  -  Colin Carroll\r\n  -  Du Phan\r\n  -  Emily Fertig\r\n  -  Fiona Lang\r\n  -  Frederik Gossen\r\n  -  Gabriel Rasskin\r\n  -  Haotian Chen\r\n  -  Jacob Burnim\r\n  -  Jake VanderPlas\r\n  -  Mark McDonald\r\n  -  Oskar Fernlund\r\n  -  Pavel Sountsov\r\n  -  Richard Levasseur\r\n  -  Salman Faroz\r\n  -  Sergei Lebedev\r\n  -  Srinivas Vasudevan\r\n  -  Thomas Colthurst\r\n  -  Urs Köster\r\n  -  Yu Feng\r\n","2023-10-23T16:37:21",{"id":198,"version":199,"summary_zh":200,"released_at":201},103362,"v0.22.0","\r\n# Release notes\r\n\r\nThis is the 0.22 release of TensorFlow Probability. It is tested and stable against TensorFlow 2.14.0 and JAX 0.4.16 .\r\n\r\n\r\n## Change notes\r\n\r\nSupport for Python 3.8 has been removed starting with TensorFlow Probability 0.22.0.\r\n\r\n[Coming soon.]\r\n\r\n\r\n## Huge thanks to all the contributors to this release!\r\n\r\n  -  Brian Patton\r\n  -  Colin Carroll\r\n  -  Du Phan\r\n  -  Emily Fertig\r\n  -  Fiona Lang\r\n  -  Frederik Gossen\r\n  -  Gabriel Rasskin\r\n  -  Haotian Chen\r\n  -  Jacob Burnim\r\n  -  Jake VanderPlas\r\n  -  Mark McDonald\r\n  -  Oskar Fernlund\r\n  -  Pavel Sountsov\r\n  -  Richard Levasseur\r\n  -  Salman Faroz\r\n  -  Srinivas Vasudevan\r\n  -  Thomas Colthurst\r\n  -  Urs Köster\r\n  -  Yu Feng\r\n ","2023-10-02T16:03:27",{"id":203,"version":204,"summary_zh":205,"released_at":206},103364,"v0.20.0","# Release notes\r\n\r\nThis is the 0.20 release of TensorFlow Probability. 
It is\r\ntested and stable against TensorFlow version 2.12 and JAX 0.4.8 .\r\n\r\n## Change notes\r\n\r\n  - Add `LinearOperatorBasis` and `LinearOperatorRowBlock`.\r\n  - Ensure `Dirichlet` and `RelaxedOneHotCategorical` transform correctly under bijectors.\r\n  - Add `SphericalSpace` and use in all Spherical Distributions\r\n  - Add `GeneralSpace.transform_general`\r\n  - Fix guitar numpy rewrite_equivalence_test.\r\n  - BREAKING CHANGE: Ignore deprecated `always_yield_multivariate_normal` arg to `tfd.GaussianProcess` and `tfd.GaussianProcessRegressionModel` so that event shape is always [1] for a single index point.\r\n  - Create a `bayesopt` submodule of TFP experimental and add acquisition functions.\r\n  - Add the `FeatureScaledWithCategorical` kernel, a PSD kernel over structures of continuous and categorical data, to TFP experimental.\r\n  - [BREAKING] Remove deprecated arg BDF.use_pfor_to_compute_jacobian.\r\n\r\n## Huge thanks to all the contributors to this release!\r\n\r\n  -  ashishenoy \r\n  -  atondwal \r\n  -  bjp \r\n  -  Christopher Suter \r\n  -  colcarroll \r\n  -  Colin Carroll \r\n  -  emilyaf \r\n  -  fdtomasi \r\n  -  flang \r\n  -  Jacob Burnim \r\n  -  jburnim \r\n  -  jcater \r\n  -  juanantoniomc \r\n  -  langmore \r\n  -  Leandro Campos \r\n  -  leben \r\n  -  Matthew Feickert \r\n  -  mmladenov \r\n  -  nkovela \r\n  -  Pavel Sountsov \r\n  -  phandu \r\n  -  phawkins \r\n  -  power \r\n  -  S. Amin \r\n  -  siege \r\n  -  Srinivas Vasudevan \r\n  -  synandi \r\n  -  thomaswc \r\n  -  Tirumalesh \r\n  -  ujaved \r\n  -  ursk \r\n\r\n","2023-05-08T20:13:58",{"id":208,"version":209,"summary_zh":210,"released_at":211},103365,"v0.19.0","# Release notes\r\n\r\nThis is the 0.19.0 release of TensorFlow Probability. It is tested and stable against TensorFlow version 2.11 and JAX 0.3.25 .\r\n\r\n\r\n## Change notes\r\n\r\n\r\n* Bijectors\r\n  - Added `UnitVector` bijector to map to the unit sphere.\r\n\r\n\r\n* Distributions\r\n  - Added noncentral Chi2 distribution to TFP.\r\n  - Added differentiable quantile and cdf function approximation to NC2 distribution.\r\n  - Added quantiles to Student-T, Beta and SigmoidBeta, with efficient\r\n    implementations for Student-T quantile\u002Fcdf.\r\n  - Allow structured index points to `GaussianProcess*` classes.\r\n  - Improved efficiency of `GaussianProcess*` gradients through custom gradients\r\n    on `log_prob`.\r\n\r\n* Linear Algebra\r\n  - Added functions (with custom gradients) to handle Hermitian Symmetric Positive-definite matrices:\r\n    - `tfp.math.hpsd_logdet`\r\n    - `tfp.math.hpsd_quadratic_form_solve` and `tfp.math.hpsd_quadratic_form_solvevec`\r\n    - `tfp.math.hpsd_solve` and `tfp.math.hpsd_solvevec`\r\n\r\n* Optimizer\r\n  - BUGFIX: Prevent Hager-Zhang linesearch from terminating early.\r\n\r\n* PSD Kernels\r\n  - Added support for structured inputs in PSD Kernel.\r\n\r\n* STS\r\n  - Added seasonality support to STS Gibbs Sampler.\r\n\r\n* Other\r\n  - BUGFIX: Allow jnp.bfloat16 arrays to be correctly recognized as floats.\r\n\r\n\r\n## Huge thanks to all the contributors to this release!\r\n\r\n  -  Brian Patton\r\n  -  Chen Qian\r\n  -  Christopher Suter\r\n  -  Colin Carroll\r\n  -  Emily Fertig\r\n  -  Francois Chollet\r\n  -  Ian Langmore\r\n  -  Jacob Burnim\r\n  -  Jonas Eschle\r\n  -  Kyle Loveless\r\n  -  Leandro Campos\r\n  -  Du Phan\r\n  -  Pavel Sountsov\r\n  -  Sebastian Nowozin\r\n  -  Srinivas Vasudevan\r\n  -  Thomas Colthurst\r\n  -  Umer Javed\r\n  -  Urs Koster\r\n  -  Yash 
Katariya","2022-12-06T22:34:10",{"id":213,"version":214,"summary_zh":215,"released_at":216},103366,"v0.18.0","# Release notes\r\n\r\nThis is the 0.18.0 release of TensorFlow Probability. It is tested and stable against TensorFlow version 2.10 and JAX 0.3.17 .\r\n\r\n\r\n## Change notes\r\n\r\n[coming soon]\r\n\r\n\r\n## Huge thanks to all the contributors to this release!\r\n\r\n[coming soon]","2022-09-12T15:46:51",{"id":218,"version":219,"summary_zh":220,"released_at":221},103367,"v0.17.0","# Release notes\r\n\r\nThis is the 0.17.0 release of TensorFlow Probability. It is tested and stable against TensorFlow version 2.9.1 and JAX 0.3.13 .\r\n\r\n\r\n## Change notes\r\n\r\n* Distributions\r\n  - Discrete distributions transform correctly when a bijector is applied.\r\n  - Fix bug in Taylor approximation of log-normalizing constant for the\r\n    `ContinuousBernoulli`.\r\n  - Add `TwoPieceNormal` distribution and reparameterize its samples.\r\n  - Make `IncrementLogProb` a proper tfd.Distribution.\r\n  - Add quantiles to `Empirical` distribution.\r\n  - Add `tfp.experimental.distributions.MultiTaskGaussianProcessRegressionModel`\r\n  - Improve efficiency of `MultiTaskGaussianProcess` models in the presence of\r\n    observation noise: Reduce complexity from O((NT)^3) to O(N^3 + T^3) where N\r\n    is the number of data points and T is the number of tasks.\r\n  - Improve efficiency of `VariationalGaussianProcess`.\r\n  - Add `tfd.LogNormal.experimental_from_mean_variance`.\r\n\r\n* Bijectors\r\n  - Fix Softfloor bijector to act as the identity at high temperature, and floor\r\n      at low temperature.\r\n  - Remove `tfb.Ordered` bijector and `finite_nondiscrete` flags in Distributions.\r\n\r\n* Math\r\n  - Add tfp.math.betainc and gradients with respect to all parameters.\r\n\r\n* STS\r\n  - Several bug fixes and performance improvements to\r\n    `tfp.experimental.sts_gibbs` for Gibbs sampling Bayesian structural time\r\n    series models with sparse linear regression.\r\n  - Enable `tfp.experimental.sts_gibbs` under JAX\r\n\r\n* Experimental\r\n  - Ensemble Kalman filter is now efficient in the case of ensemble size \u003C\u003C observation size and an \"easy to invert\" modeled observation covariance.\r\n  - Add a `perturbed_observations` option to\r\n    `ensemble_kalman_filter_log_marginal_likelihood`.\r\n  - Add Experimental support for custom JAX PRNGs.\r\n\r\n* Other\r\n\r\n  - Add `assertAllMeansClose` to `tfp.TestCase` for testing sampling code.\r\n\r\n## Huge thanks to all the contributors to this release!\r\n\r\n  -  Adam Sorrenti\r\n  -  Alexey Radul\r\n  -  Christopher Suter\r\n  -  Colin Carroll\r\n  -  Du Phan\r\n  -  Emily Fertig\r\n  -  Fabien Hertschuh\r\n  -  Faizan Muhammad\r\n  -  Francois Chollet\r\n  -  Ian Langmore\r\n  -  Jacob Burnim\r\n  -  Jake VanderPlas\r\n  -  Kathy Wu\r\n  -  Kristian Hartikainen\r\n  -  Kyle Loveless\r\n  -  Leandro Campos\r\n  -  Xinle Sheila Liu\r\n  -  ltsaprounis\r\n  -  Matt Hoffman\r\n  -  Manas Mohanty\r\n  -  Max Jiang\r\n  -  Pavel Sountsov\r\n  -  Peter Hawkins\r\n  -  Praveen Narayan\r\n  -  Renu Patel\r\n  -  Ryan Russell\r\n  -  Scott Zhu\r\n  -  Sergey Lebedev\r\n  -  Sharad Vikram\r\n  -  Srinivas Vasudevan\r\n  -  tagoma\r\n  -  Urs Koster\r\n  -  Vaidotas Simkus\r\n  -  Vishnuvardhan Janapati\r\n  -  Yilei Yang","2022-06-07T18:01:34",{"id":223,"version":224,"summary_zh":225,"released_at":226},103368,"v0.16.0","# Release notes\r\n\r\nThis is the 0.16.0 release of TensorFlow Probability. 
\r\n* Optimizer\r\n  - BUGFIX: Prevent the Hager-Zhang line search from terminating early.\r\n\r\n* PSD Kernels\r\n  - Added support for structured inputs in PSD kernels.\r\n\r\n* STS\r\n  - Added seasonality support to the STS Gibbs sampler.\r\n\r\n* Other\r\n  - BUGFIX: Allow jnp.bfloat16 arrays to be correctly recognized as floats.\r\n\r\n\r\n## Huge thanks to all the contributors to this release!\r\n\r\n  -  Brian Patton\r\n  -  Chen Qian\r\n  -  Christopher Suter\r\n  -  Colin Carroll\r\n  -  Emily Fertig\r\n  -  François Chollet\r\n  -  Ian Langmore\r\n  -  Jacob Burnim\r\n  -  Jonas Eschle\r\n  -  Kyle Loveless\r\n  -  Leandro Campos\r\n  -  Du Phan\r\n  -  Pavel Sountsov\r\n  -  Sebastian Nowozin\r\n  -  Srinivas Vasudevan\r\n  -  Thomas Colthurst\r\n  -  Umer Javed\r\n  -  Urs Köster\r\n  -  Yash Katariya","2022-12-06T22:34:10",{"id":213,"version":214,"summary_zh":215,"released_at":216},103366,"v0.18.0","# Release notes\r\n\r\nThis is the 0.18.0 release of TensorFlow Probability. It is tested and stable against TensorFlow version 2.10 and JAX 0.3.17.\r\n\r\n\r\n## Change notes\r\n\r\n[coming soon]\r\n\r\n\r\n## Huge thanks to all the contributors to this release!\r\n\r\n[coming soon]","2022-09-12T15:46:51",{"id":218,"version":219,"summary_zh":220,"released_at":221},103367,"v0.17.0","# Release notes\r\n\r\nThis is the 0.17.0 release of TensorFlow Probability. It is tested and stable against TensorFlow version 2.9.1 and JAX 0.3.13.\r\n\r\n\r\n## Change notes\r\n\r\n* Distributions\r\n  - Discrete distributions transform correctly when a bijector is applied.\r\n  - Fix a bug in the Taylor approximation of the log-normalizing constant for the `ContinuousBernoulli`.\r\n  - Add the `TwoPieceNormal` distribution and reparameterize its samples.\r\n  - Make `IncrementLogProb` a proper `tfd.Distribution`.\r\n  - Add quantiles to the `Empirical` distribution.\r\n  - Add `tfp.experimental.distributions.MultiTaskGaussianProcessRegressionModel`.\r\n  - Improve efficiency of `MultiTaskGaussianProcess` in the presence of observation noise: reduce complexity from O((NT)^3) to O(N^3 + T^3), where N is the number of data points and T is the number of tasks.\r\n  - Improve efficiency of `VariationalGaussianProcess`.\r\n  - Add `tfd.LogNormal.experimental_from_mean_variance`.\r\n\r\n* Bijectors\r\n  - Fix the `Softfloor` bijector to act as the identity at high temperature and as the floor function at low temperature.\r\n  - Remove the `tfb.Ordered` bijector and `finite_nondiscrete` flags in Distributions.\r\n\r\n* Math\r\n  - Add `tfp.math.betainc`, with gradients with respect to all parameters (see the sketch below).\r\n
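A minimal sketch of differentiating `tfp.math.betainc` with respect to all three parameters; the constants are illustrative:

```python
import tensorflow as tf
import tensorflow_probability as tfp

a = tf.constant(2.0)
b = tf.constant(3.0)
x = tf.constant(0.4)

with tf.GradientTape() as tape:
  tape.watch([a, b, x])
  y = tfp.math.betainc(a, b, x)  # regularized incomplete beta I_x(a, b)

# Gradients now exist with respect to a and b, not only x.
print(y, tape.gradient(y, [a, b, x]))
```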
\r\n* STS\r\n  - Several bug fixes and performance improvements to `tfp.experimental.sts_gibbs` for Gibbs sampling Bayesian structural time series models with sparse linear regression.\r\n  - Enable `tfp.experimental.sts_gibbs` under JAX.\r\n\r\n* Experimental\r\n  - The ensemble Kalman filter is now efficient when ensemble size \u003C\u003C observation size and the modeled observation covariance is \"easy to invert\".\r\n  - Add a `perturbed_observations` option to `ensemble_kalman_filter_log_marginal_likelihood`.\r\n  - Add experimental support for custom JAX PRNGs.\r\n\r\n* Other\r\n  - Add `assertAllMeansClose` to `tfp.TestCase` for testing sampling code.\r\n\r\n## Huge thanks to all the contributors to this release!\r\n\r\n  -  Adam Sorrenti\r\n  -  Alexey Radul\r\n  -  Christopher Suter\r\n  -  Colin Carroll\r\n  -  Du Phan\r\n  -  Emily Fertig\r\n  -  Fabien Hertschuh\r\n  -  Faizan Muhammad\r\n  -  François Chollet\r\n  -  Ian Langmore\r\n  -  Jacob Burnim\r\n  -  Jake VanderPlas\r\n  -  Kathy Wu\r\n  -  Kristian Hartikainen\r\n  -  Kyle Loveless\r\n  -  Leandro Campos\r\n  -  Xinle Sheila Liu\r\n  -  ltsaprounis\r\n  -  Matt Hoffman\r\n  -  Manas Mohanty\r\n  -  Max Jiang\r\n  -  Pavel Sountsov\r\n  -  Peter Hawkins\r\n  -  Praveen Narayan\r\n  -  Renu Patel\r\n  -  Ryan Russell\r\n  -  Scott Zhu\r\n  -  Sergey Lebedev\r\n  -  Sharad Vikram\r\n  -  Srinivas Vasudevan\r\n  -  tagoma\r\n  -  Urs Köster\r\n  -  Vaidotas Simkus\r\n  -  Vishnuvardhan Janapati\r\n  -  Yilei Yang","2022-06-07T18:01:34",{"id":223,"version":224,"summary_zh":225,"released_at":226},103368,"v0.16.0","# Release notes\r\n\r\nThis is the 0.16.0 release of TensorFlow Probability. It is tested and stable against TensorFlow version 2.8.0 and JAX 0.3.0.\r\n\r\n\r\n## Change notes\r\n\r\n[coming soon]\r\n\r\n\r\n## Huge thanks to all the contributors to this release!\r\n\r\n  -  Alexey Radul\r\n  -  Ben Lee\r\n  -  Billy Lamberta\r\n  -  Brian Patton\r\n  -  Chansoo Lee\r\n  -  Christopher Suter\r\n  -  Colin Carroll\r\n  -  Dave Moore\r\n  -  Du Phan\r\n  -  Emily Fertig\r\n  -  François Chollet\r\n  -  Gianluigi Silvestri\r\n  -  Jacob Burnim\r\n  -  Jake Taylor\r\n  -  Junpeng Lao\r\n  -  Matthew Johnson\r\n  -  Michael Weiss\r\n  -  Pavel Sountsov\r\n  -  Peter Hawkins\r\n  -  Rebecca Chen\r\n  -  Sharad Vikram\r\n  -  Soo Sung\r\n  -  Srinivas Vasudevan\r\n  -  Urs Köster\r\n","2022-02-14T17:24:33",{"id":228,"version":229,"summary_zh":230,"released_at":231},103369,"v0.15.0","# Release notes\r\n\r\nThis is the 0.15 release of TensorFlow Probability. It is tested and stable against TensorFlow version 2.7.0.\r\n\r\n## Change notes\r\n\r\n- Distributions\r\n  - Add `tfd.StudentTProcessRegressionModel`.\r\n  - Distributions' statistics now all have batch shape matching the Distribution itself.\r\n  - `JointDistributionCoroutine` no longer requires `Root` when `sample_shape==()`.\r\n  - Support `sample_distributions` from autobatched joint distributions.\r\n  - Expose a `mask` argument to support missing observations in HMM log probs.\r\n  - `BetaBinomial.log_prob` is more accurate when all trials succeed.\r\n  - Support broadcast batch shapes in `MixtureSameFamily`.\r\n  - Add a `cholesky_fn` argument to `GaussianProcess`, `GaussianProcessRegressionModel`, and `SchurComplement`.\r\n  - Add a staticmethod for precomputing the `GaussianProcessRegressionModel` (GPRM) for more efficient inference in TensorFlow.\r\n  - Add `GaussianProcess.posterior_predictive`.\r\n\r\n- Bijectors\r\n  - Bijectors parameterized by distinct `tf.Variable`s no longer register as `==`.\r\n  - BREAKING CHANGE: Remove the deprecated `AffineScalar` bijector. Please use `tfb.Shift(shift)(tfb.Scale(scale))` instead (see the sketch after this list).\r\n  - BREAKING CHANGE: Remove the deprecated `Affine` and `AffineLinearOperator` bijectors.\r\n
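The suggested replacement composes two bijectors. A minimal sketch with illustrative shift and scale values:

```python
import tensorflow_probability as tfp

tfb = tfp.bijectors

# Calling a bijector on another bijector chains them, so this computes
# forward(x) = shift + scale * x, the old AffineScalar behavior.
affine = tfb.Shift(1.0)(tfb.Scale(2.0))

print(affine.forward(3.0))  # 1 + 2 * 3 = 7
print(affine.inverse(7.0))  # recovers 3
```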
\r\n- PSD kernels\r\n  - Add `tfp.math.psd_kernels.ChangePoint`.\r\n  - Add slicing support for `PositiveSemidefiniteKernel`.\r\n  - Add an `inverse_length_scale` parameter to kernels.\r\n  - Add `parameter_properties` to PSD kernels, along with automated batch shape inference.\r\n\r\n- VI\r\n  - Add support for importance-weighted variational objectives.\r\n  - Support arbitrary distribution types in `tfp.experimental.vi.build_factored_surrogate_posterior`.\r\n\r\n- STS\r\n  - Support `+` syntax for summing `StructuralTimeSeries` models (see the sketch below).\r\n
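A minimal sketch of the `+` syntax, using a hypothetical observed series; the component choices are illustrative:

```python
import numpy as np
import tensorflow_probability as tfp

# Hypothetical daily observations (placeholder data, not from the notes).
observed = np.random.randn(100).astype(np.float32)

trend = tfp.sts.LocalLinearTrend(observed_time_series=observed)
seasonal = tfp.sts.Seasonal(num_seasons=7, observed_time_series=observed)

# `+` now builds the additive model, analogous to tfp.sts.Sum([trend, seasonal]).
model = trend + seasonal
```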
\r\n- Math\r\n  - Enable JAX\u002FNumPy backends for `tfp.math.ode`.\r\n  - Allow returning auxiliary information from `tfp.math.value_and_gradient`.\r\n\r\n- Experimental\r\n  - Speed up `experimental.mcmc` windowed samplers.\r\n  - Support unbiased gradients through particle filtering via stop-gradient resampling.\r\n  - Add `ensemble_kalman_filter_log_marginal_likelihood` (log evidence) computation to `tfe.sequential`.\r\n  - Add an experimental joint-distribution layers library.\r\n  - Delete `tfp.experimental.distributions.JointDensityCoroutine`.\r\n  - Add experimental special functions for high-precision computation on a TPU.\r\n  - Add a custom log-prob ratio for `IncrementLogProb`.\r\n  - Use `foldl` in `no_pivot_ldl` instead of `while_loop`.\r\n\r\n- Other\r\n  - TFP should now support numpy 1.20+.\r\n  - BREAKING CHANGE: Stop unpacking seeds when splitting in JAX.\r\n\r\n## Huge thanks to all the contributors to this release!\r\n\r\n  -  8bitmp3\r\n  -  adriencorenflos\r\n  -  Alexey Radul\r\n  -  Allen Lavoie\r\n  -  Ben Lee\r\n  -  Billy Lamberta\r\n  -  Brian Patton\r\n  -  Christopher Suter\r\n  -  Colin Carroll\r\n  -  Dave Moore\r\n  -  Du Phan\r\n  -  Emily Fertig\r\n  -  Faizan Muhammad\r\n  -  George Necula\r\n  -  George Tucker\r\n  -  Grace Luo\r\n  -  Ian Langmore\r\n  -  Jacob Burnim\r\n  -  Jake VanderPlas\r\n  -  Jeremiah Liu\r\n  -  Junpeng Lao\r\n  -  Kaan\r\n  -  Luke Wood\r\n  -  Max Jiang\r\n  -  Mihai Maruseac\r\n  -  Neil Girdhar\r\n  -  Paul Chiang\r\n  -  Pavel Izmailov\r\n  -  Pavel Sountsov\r\n  -  Peter Hawkins\r\n  -  Rebecca Chen\r\n  -  Richard Song\r\n  -  Rif A. Saurous\r\n  -  Ron Shapiro\r\n  -  Roy Frostig\r\n  -  Sharad Vikram\r\n  -  Srinivas Vasudevan\r\n  -  Tomohiro Endo\r\n  -  Urs Köster\r\n  -  William C Grisaitis\r\n  -  Yilei Yang","2021-11-18T15:49:59",{"id":233,"version":234,"summary_zh":235,"released_at":236},103370,"v0.14.1","# Release notes\r\n\r\nThis is the 0.14.1 release of TensorFlow Probability. It is tested and stable against TensorFlow version 2.6.0 and JAX 0.2.21.\r\n\r\n\r\n## Change notes\r\n\r\n[coming soon]\r\n\r\n\r\n## Huge thanks to all the contributors to this release!\r\n\r\n  -  8bitmp3\r\n  -  adriencorenflos\r\n  -  allenl\r\n  -  axch\r\n  -  bjp\r\n  -  blamb\r\n  -  csuter\r\n  -  colcarroll\r\n  -  davmre\r\n  -  derifatives\r\n  -  emilyaf\r\n  -  europeanplaice\r\n  -  Frightera\r\n  -  fmuham\r\n  -  gcluo\r\n  -  GianluigiSilvestri\r\n  -  gisilvs\r\n  -  gjt\r\n  -  grisaitis\r\n  -  harahu\r\n  -  jburnim\r\n  -  langmore\r\n  -  leben\r\n  -  lukewood\r\n  -  mihaimaruseac\r\n  -  NeilGirdhar\r\n  -  phandu\r\n  -  phawkins\r\n  -  rechen\r\n  -  ronshapiro\r\n  -  scottzhu\r\n  -  sharadmv\r\n  -  siege\r\n  -  srvasude\r\n  -  ursk\r\n  -  vanderplas\r\n  -  xingyousong\r\n  -  yileiyang\r\n","2021-09-30T23:00:59",{"id":238,"version":239,"summary_zh":240,"released_at":241},103375,"v0.12.1","\r\n# Release notes\r\n\r\nThis is the 0.12.1 release of TensorFlow Probability. It is tested and stable against TensorFlow version 2.4.0.\r\n\r\n\r\n## Change notes\r\n\r\nNOTE: Links point to examples in the [TFP 0.12.1 release Colab](https:\u002F\u002Fcolab.research.google.com\u002Fgithub\u002Ftensorflow\u002Fprobability\u002Fblob\u002Fmaster\u002Ftensorflow_probability\u002Fexamples\u002Fjupyter_notebooks\u002FTFP_Release_Notebook_0_12_1.ipynb).\r\n\r\nBijectors:\r\n\r\n - Add an implementation of GLOW at [`tfp.bijectors.Glow`](https:\u002F\u002Fcolab.research.google.com\u002Fgithub\u002Ftensorflow\u002Fprobability\u002Fblob\u002Fmaster\u002Ftensorflow_probability\u002Fexamples\u002Fjupyter_notebooks\u002FTFP_Release_Notebook_0_12_1.ipynb#scrollTo=TMDJbHd1iBY8).\r\n - Add the [`RayleighCDF` bijector](https:\u002F\u002Fcolab.research.google.com\u002Fgithub\u002Ftensorflow\u002Fprobability\u002Fblob\u002Fmaster\u002Ftensorflow_probability\u002Fexamples\u002Fjupyter_notebooks\u002FTFP_Release_Notebook_0_12_1.ipynb#scrollTo=lk4QxaPAe7CJ).\r\n - Add the [`Ascending` bijector](https:\u002F\u002Fcolab.research.google.com\u002Fgithub\u002Ftensorflow\u002Fprobability\u002Fblob\u002Fmaster\u002Ftensorflow_probability\u002Fexamples\u002Fjupyter_notebooks\u002FTFP_Release_Notebook_0_12_1.ipynb#scrollTo=qZMD_bR-hJLc) and deprecate `Ordered` (see the sketch after this list).\r\n - Add an optional [`low` parameter](https:\u002F\u002Fcolab.research.google.com\u002Fgithub\u002Ftensorflow\u002Fprobability\u002Fblob\u002Fmaster\u002Ftensorflow_probability\u002Fexamples\u002Fjupyter_notebooks\u002FTFP_Release_Notebook_0_12_1.ipynb#scrollTo=JkLyCi2hkjF3) to the `Softplus` bijector.\r\n - Enable the `ScaleMatvecLinearOperator` bijector [to wrap blockwise LinearOperators](https:\u002F\u002Fcolab.research.google.com\u002Fgithub\u002Ftensorflow\u002Fprobability\u002Fblob\u002Fmaster\u002Ftensorflow_probability\u002Fexamples\u002Fjupyter_notebooks\u002FTFP_Release_Notebook_0_12_1.ipynb#scrollTo=3ifmOfQ4ISAc) to form multipart bijectors.\r\n - Allow passing kwargs to `Blockwise`.\r\n - Bijectors now share a global cache, keyed by the bijector parameters and the value being transformed.\r\n
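A minimal sketch of `Ascending`, the replacement for the deprecated `Ordered`; the input is illustrative, and the printed values assume the cumulative-exp parameterization described in the bijector's documentation:

```python
import tensorflow as tf
import tensorflow_probability as tfp

tfb = tfp.bijectors

# Forward maps an unconstrained vector to a strictly increasing one: the first
# entry passes through, and each later entry adds exp(x[k]) to its predecessor.
x = tf.constant([1.0, 0.0, 0.0])
print(tfb.Ascending().forward(x))  # approximately [1., 2., 3.]
```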
\r\nDistributions:\r\n\r\n - BREAKING: Remove the deprecated `HiddenMarkovModel.num_states` property.\r\n - BREAKING: Change the naming scheme of un-named variables in JointDistributions.\r\n - BREAKING: Remove the deprecated `batch_shape` and `event_shape` arguments of `TransformedDistribution`.\r\n - Add the [`Skellam` distribution](https:\u002F\u002Fcolab.research.google.com\u002Fgithub\u002Ftensorflow\u002Fprobability\u002Fblob\u002Fmaster\u002Ftensorflow_probability\u002Fexamples\u002Fjupyter_notebooks\u002FTFP_Release_Notebook_0_12_1.ipynb#scrollTo=oxrTNhcjIFdH).\r\n - `JointDistributionCoroutine{AutoBatched}` now [uses namedtuples as the sample dtype](https:\u002F\u002Fcolab.research.google.com\u002Fgithub\u002Ftensorflow\u002Fprobability\u002Fblob\u002Fmaster\u002Ftensorflow_probability\u002Fexamples\u002Fjupyter_notebooks\u002FTFP_Release_Notebook_0_12_1.ipynb#scrollTo=9BEZCVGdeqbS).\r\n - The von Mises-Fisher distribution now [works for dimensions > 5](https:\u002F\u002Fcolab.research.google.com\u002Fgithub\u002Ftensorflow\u002Fprobability\u002Fblob\u002Fmaster\u002Ftensorflow_probability\u002Fexamples\u002Fjupyter_notebooks\u002FTFP_Release_Notebook_0_12_1.ipynb#scrollTo=HQFqtfraeQPm) and implements [`VonMisesFisher.entropy`](https:\u002F\u002Fcolab.research.google.com\u002Fgithub\u002Ftensorflow\u002Fprobability\u002Fblob\u002Fmaster\u002Ftensorflow_probability\u002Fexamples\u002Fjupyter_notebooks\u002FTFP_Release_Notebook_0_12_1.ipynb#scrollTo=HQFqtfraeQPm).\r\n - Add the [`ExpGamma` and `ExpInverseGamma` distributions](https:\u002F\u002Fcolab.research.google.com\u002Fgithub\u002Ftensorflow\u002Fprobability\u002Fblob\u002Fmaster\u002Ftensorflow_probability\u002Fexamples\u002Fjupyter_notebooks\u002FTFP_Release_Notebook_0_12_1.ipynb#scrollTo=m7znpTpzeh1P).\r\n - `JointDistribution*AutoBatched` now [support (reproducible) tensor seeds](https:\u002F\u002Fcolab.research.google.com\u002Fgithub\u002Ftensorflow\u002Fprobability\u002Fblob\u002Fmaster\u002Ftensorflow_probability\u002Fexamples\u002Fjupyter_notebooks\u002FTFP_Release_Notebook_0_12_1.ipynb#scrollTo=dFxOLc-dICrZ).\r\n - Add [KL(VonMisesFisher || SphericalUniform)](https:\u002F\u002Fcolab.research.google.com\u002Fgithub\u002Ftensorflow\u002Fprobability\u002Fblob\u002Fmaster\u002Ftensorflow_probability\u002Fexamples\u002Fjupyter_notebooks\u002FTFP_Release_Notebook_0_12_1.ipynb#scrollTo=XC5X2xv5i1wH&line=1&uniqifier=1).\r\n - Add the [`Distribution.parameter_properties` method](https:\u002F\u002Fcolab.research.google.com\u002Fgithub\u002Ftensorflow\u002Fprobability\u002Fblob\u002Fmaster\u002Ftensorflow_probability\u002Fexamples\u002Fjupyter_notebooks\u002FTFP_Release_Notebook_0_12_1.ipynb#scrollTo=H8HHW-gaJC6i).\r\n - `experimental_default_event_space_bijector` now [accepts additional arguments to pin some distribution parts](https:\u002F\u002Fcolab.research.google.com\u002Fgithub\u002Ftensorflow\u002Fprobability\u002Fblob\u002Fmaster\u002Ftensorflow_probability\u002Fexamples\u002Fjupyter_notebooks\u002FTFP_Release_Notebook_0_12_1.ipynb#scrollTo=9qMds4htI1Nl).\r\n - Add [`JointDistribution.experimental_pin` and `JointDistributionPinned`](https:\u002F\u002Fcolab.research.google.com\u002Fgithub\u002Ftensorflow\u002Fprobability\u002Fblob\u002Fmaster\u002Ftensorflow_probability\u002Fexamples\u002Fjupyter_notebooks\u002FTFP_Release_Notebook_0_12_1.ipynb#scrollTo=ZIvOfH61I6cR).\r\n - Add the [`NegativeBinomial.experimental_from_mean_dispersion` method](https:\u002F\u002Fcolab.research.google.com\u002Fgithub\u002Ftensorflow\u002Fprobability\u002Fblob\u002Fmaster\u002Ftensorflow_probability\u002Fexamples\u002Fjupyter_notebooks\u002FTFP_Release_Notebook_0_12_1.ipynb#scrollTo=ep-jMPbcdpnT).\r\n - Add [`tfp.experimental.distribute`](https:\u002F\u002Fcolab.research.google.com\u002Fgithub\u002Ftensorflow\u002Fprobability\u002Fblob\u002Fmaster\u002Ftensorflow_probability\u002Fexamples\u002Fjupyter_noteb","2020-12-29T18:40:25",{"id":243,"version":244,"summary_zh":245,"released_at":246},103376,"v0.12.0","\r\nThis is the 0.12.0 release of TensorFlow Probability. It is tested and stable against TensorFlow version 2.4.0.\r\n\r\nFor detailed change notes, please see the 0.12.1 release at https:\u002F\u002Fgithub.com\u002Ftensorflow\u002Fprobability\u002Freleases\u002Ftag\u002Fv0.12.1 .\r\n\r\n","2020-12-29T18:40:22",{"id":248,"version":249,"summary_zh":250,"released_at":251},103377,"v0.12.0-rc4","This is RC4 of the TensorFlow Probability 0.12 release. It is tested against TensorFlow 2.4.0-rc4.","2020-12-09T16:35:36"]