[{"data":1,"prerenderedAt":-1},["ShallowReactive",2],{"similar-TuringLang--Turing.jl":3,"tool-TuringLang--Turing.jl":61},[4,18,26,36,44,53],{"id":5,"name":6,"github_repo":7,"description_zh":8,"stars":9,"difficulty_score":10,"last_commit_at":11,"category_tags":12,"status":17},4358,"openclaw","openclaw\u002Fopenclaw","OpenClaw 是一款专为个人打造的本地化 AI 助手，旨在让你在自己的设备上拥有完全可控的智能伙伴。它打破了传统 AI 助手局限于特定网页或应用的束缚，能够直接接入你日常使用的各类通讯渠道，包括微信、WhatsApp、Telegram、Discord、iMessage 等数十种平台。无论你在哪个聊天软件中发送消息，OpenClaw 都能即时响应，甚至支持在 macOS、iOS 和 Android 设备上进行语音交互，并提供实时的画布渲染功能供你操控。\n\n这款工具主要解决了用户对数据隐私、响应速度以及“始终在线”体验的需求。通过将 AI 部署在本地，用户无需依赖云端服务即可享受快速、私密的智能辅助，真正实现了“你的数据，你做主”。其独特的技术亮点在于强大的网关架构，将控制平面与核心助手分离，确保跨平台通信的流畅性与扩展性。\n\nOpenClaw 非常适合希望构建个性化工作流的技术爱好者、开发者，以及注重隐私保护且不愿被单一生态绑定的普通用户。只要具备基础的终端操作能力（支持 macOS、Linux 及 Windows WSL2），即可通过简单的命令行引导完成部署。如果你渴望拥有一个懂你",349277,3,"2026-04-06T06:32:30",[13,14,15,16],"Agent","开发框架","图像","数据工具","ready",{"id":19,"name":20,"github_repo":21,"description_zh":22,"stars":23,"difficulty_score":10,"last_commit_at":24,"category_tags":25,"status":17},3808,"stable-diffusion-webui","AUTOMATIC1111\u002Fstable-diffusion-webui","stable-diffusion-webui 是一个基于 Gradio 构建的网页版操作界面，旨在让用户能够轻松地在本地运行和使用强大的 Stable Diffusion 图像生成模型。它解决了原始模型依赖命令行、操作门槛高且功能分散的痛点，将复杂的 AI 绘图流程整合进一个直观易用的图形化平台。\n\n无论是希望快速上手的普通创作者、需要精细控制画面细节的设计师，还是想要深入探索模型潜力的开发者与研究人员，都能从中获益。其核心亮点在于极高的功能丰富度：不仅支持文生图、图生图、局部重绘（Inpainting）和外绘（Outpainting）等基础模式，还独创了注意力机制调整、提示词矩阵、负向提示词以及“高清修复”等高级功能。此外，它内置了 GFPGAN 和 CodeFormer 等人脸修复工具，支持多种神经网络放大算法，并允许用户通过插件系统无限扩展能力。即使是显存有限的设备，stable-diffusion-webui 也提供了相应的优化选项，让高质量的 AI 艺术创作变得触手可及。",162132,"2026-04-05T11:01:52",[14,15,13],{"id":27,"name":28,"github_repo":29,"description_zh":30,"stars":31,"difficulty_score":32,"last_commit_at":33,"category_tags":34,"status":17},1381,"everything-claude-code","affaan-m\u002Feverything-claude-code","everything-claude-code 是一套专为 AI 编程助手（如 Claude Code、Codex、Cursor 等）打造的高性能优化系统。它不仅仅是一组配置文件，而是一个经过长期实战打磨的完整框架，旨在解决 AI 
代理在实际开发中面临的效率低下、记忆丢失、安全隐患及缺乏持续学习能力等核心痛点。\n\n通过引入技能模块化、直觉增强、记忆持久化机制以及内置的安全扫描功能，everything-claude-code 能显著提升 AI 在复杂任务中的表现，帮助开发者构建更稳定、更智能的生产级 AI 代理。其独特的“研究优先”开发理念和针对 Token 消耗的优化策略，使得模型响应更快、成本更低，同时有效防御潜在的攻击向量。\n\n这套工具特别适合软件开发者、AI 研究人员以及希望深度定制 AI 工作流的技术团队使用。无论您是在构建大型代码库，还是需要 AI 协助进行安全审计与自动化测试，everything-claude-code 都能提供强大的底层支持。作为一个曾荣获 Anthropic 黑客大奖的开源项目，它融合了多语言支持与丰富的实战钩子（hooks），让 AI 真正成长为懂上",151314,2,"2026-04-11T23:32:58",[14,13,35],"语言模型",{"id":37,"name":38,"github_repo":39,"description_zh":40,"stars":41,"difficulty_score":32,"last_commit_at":42,"category_tags":43,"status":17},2271,"ComfyUI","Comfy-Org\u002FComfyUI","ComfyUI 是一款功能强大且高度模块化的视觉 AI 引擎，专为设计和执行复杂的 Stable Diffusion 图像生成流程而打造。它摒弃了传统的代码编写模式，采用直观的节点式流程图界面，让用户通过连接不同的功能模块即可构建个性化的生成管线。\n\n这一设计巧妙解决了高级 AI 绘图工作流配置复杂、灵活性不足的痛点。用户无需具备编程背景，也能自由组合模型、调整参数并实时预览效果，轻松实现从基础文生图到多步骤高清修复等各类复杂任务。ComfyUI 拥有极佳的兼容性，不仅支持 Windows、macOS 和 Linux 全平台，还广泛适配 NVIDIA、AMD、Intel 及苹果 Silicon 等多种硬件架构，并率先支持 SDXL、Flux、SD3 等前沿模型。\n\n无论是希望深入探索算法潜力的研究人员和开发者，还是追求极致创作自由度的设计师与资深 AI 绘画爱好者，ComfyUI 都能提供强大的支持。其独特的模块化架构允许社区不断扩展新功能，使其成为当前最灵活、生态最丰富的开源扩散模型工具之一，帮助用户将创意高效转化为现实。",108322,"2026-04-10T11:39:34",[14,15,13],{"id":45,"name":46,"github_repo":47,"description_zh":48,"stars":49,"difficulty_score":32,"last_commit_at":50,"category_tags":51,"status":17},6121,"gemini-cli","google-gemini\u002Fgemini-cli","gemini-cli 是一款由谷歌推出的开源 AI 命令行工具，它将强大的 Gemini 大模型能力直接集成到用户的终端环境中。对于习惯在命令行工作的开发者而言，它提供了一条从输入提示词到获取模型响应的最短路径，无需切换窗口即可享受智能辅助。\n\n这款工具主要解决了开发过程中频繁上下文切换的痛点，让用户能在熟悉的终端界面内直接完成代码理解、生成、调试以及自动化运维任务。无论是查询大型代码库、根据草图生成应用，还是执行复杂的 Git 操作，gemini-cli 都能通过自然语言指令高效处理。\n\n它特别适合广大软件工程师、DevOps 人员及技术研究人员使用。其核心亮点包括支持高达 100 万 token 的超长上下文窗口，具备出色的逻辑推理能力；内置 Google 搜索、文件操作及 Shell 命令执行等实用工具；更独特的是，它支持 MCP（模型上下文协议），允许用户灵活扩展自定义集成，连接如图像生成等外部能力。此外，个人谷歌账号即可享受免费的额度支持，且项目基于 Apache 2.0 
协议完全开源，是提升终端工作效率的理想助手。",100752,"2026-04-10T01:20:03",[52,13,15,14],"插件",{"id":54,"name":55,"github_repo":56,"description_zh":57,"stars":58,"difficulty_score":32,"last_commit_at":59,"category_tags":60,"status":17},4721,"markitdown","microsoft\u002Fmarkitdown","MarkItDown 是一款由微软 AutoGen 团队打造的轻量级 Python 工具，专为将各类文件高效转换为 Markdown 格式而设计。它支持 PDF、Word、Excel、PPT、图片（含 OCR）、音频（含语音转录）、HTML 乃至 YouTube 链接等多种格式的解析，能够精准提取文档中的标题、列表、表格和链接等关键结构信息。\n\n在人工智能应用日益普及的今天，大语言模型（LLM）虽擅长处理文本，却难以直接读取复杂的二进制办公文档。MarkItDown 恰好解决了这一痛点，它将非结构化或半结构化的文件转化为模型“原生理解”且 Token 效率极高的 Markdown 格式，成为连接本地文件与 AI 分析 pipeline 的理想桥梁。此外，它还提供了 MCP（模型上下文协议）服务器，可无缝集成到 Claude Desktop 等 LLM 应用中。\n\n这款工具特别适合开发者、数据科学家及 AI 研究人员使用，尤其是那些需要构建文档检索增强生成（RAG）系统、进行批量文本分析或希望让 AI 助手直接“阅读”本地文件的用户。虽然生成的内容也具备一定可读性，但其核心优势在于为机器",93400,"2026-04-06T19:52:38",[52,14],{"id":62,"github_repo":63,"name":64,"description_en":65,"description_zh":66,"ai_summary_zh":66,"readme_en":67,"readme_zh":68,"quickstart_zh":69,"use_case_zh":70,"hero_image_url":71,"owner_login":72,"owner_name":73,"owner_avatar_url":74,"owner_bio":75,"owner_company":76,"owner_location":76,"owner_email":76,"owner_twitter":76,"owner_website":77,"owner_url":78,"languages":79,"stars":84,"forks":85,"last_commit_at":86,"license":87,"difficulty_score":32,"env_os":88,"env_gpu":89,"env_ram":90,"env_deps":91,"category_tags":101,"github_topics":102,"view_count":32,"oss_zip_url":76,"oss_zip_packed_at":76,"status":17,"created_at":117,"updated_at":118,"faqs":119,"releases":148},6763,"TuringLang\u002FTuring.jl","Turing.jl","Bayesian inference with probabilistic programming.","Turing.jl 是一款基于 Julia 语言构建的开源概率编程库，旨在让贝叶斯推断变得简单高效。它主要解决了传统统计建模中算法实现复杂、代码复用性低以及计算效率难以平衡的痛点，让用户能够专注于模型逻辑本身，而无需手动推导繁琐的数学公式或编写底层采样代码。\n\n这款工具特别适合数据科学家、统计研究人员以及需要处理不确定性量化问题的开发者使用。无论是构建复杂的层次模型，还是进行机器学习中的参数估计，Turing.jl 都能提供灵活的支持。其核心亮点在于采用了直观的宏语法（如 `@model`），允许用户用接近自然数学表达的方式定义概率模型；同时，它内置了包括 NUTS 在内的多种先进马尔可夫链蒙特卡洛（MCMC）采样算法，并能自动利用 Julia 的高性能特性加速计算。此外，作为 Julia 生态的一部分，Turing.jl 
拥有活跃的社区支持和丰富的教程资源，帮助用户快速上手并解决实际问题。如果你希望在保持代码简洁的同时获得工业级的计算性能，Turing.jl 是一个值得尝试的专业选择。","\u003Cp align=\"center\">\n  \u003Cpicture>\n    \u003Csource media=\"(prefers-color-scheme: dark)\" srcset=\"https:\u002F\u002Fturinglang.org\u002Fassets\u002Flogo\u002Fturing-logo-dark.svg\">\n    \u003Cimg src=\"https:\u002F\u002Fturinglang.org\u002Fassets\u002Flogo\u002Fturing-logo-light.svg\" alt=\"Turing.jl logo\" width=\"300\">\n  \u003C\u002Fpicture>\n\u003C\u002Fp>\n\u003Cp align=\"center\">\u003Ci>Bayesian inference with probabilistic programming\u003C\u002Fi>\u003C\u002Fp>\n\u003Cp align=\"center\">\n\u003Ca href=\"https:\u002F\u002Fturinglang.org\u002F\">\u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002Fdocs-tutorials-blue.svg\" alt=\"Tutorials\" \u002F>\u003C\u002Fa>\n\u003Ca href=\"https:\u002F\u002Fturinglang.org\u002FTuring.jl\u002Fstable\">\u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002Fdocs-API-blue.svg\" alt=\"API docs\" \u002F>\u003C\u002Fa>\n\u003Ca href=\"https:\u002F\u002Fgithub.com\u002FTuringLang\u002FTuring.jl\u002Factions\u002Fworkflows\u002FTests.yml\">\u003Cimg src=\"https:\u002F\u002Fgithub.com\u002FTuringLang\u002FTuring.jl\u002Factions\u002Fworkflows\u002FTests.yml\u002Fbadge.svg\" alt=\"Tests\" \u002F>\u003C\u002Fa>\n\u003Ca href=\"https:\u002F\u002Fcodecov.io\u002Fgh\u002FTuringLang\u002FTuring.jl\">\u003Cimg src=\"https:\u002F\u002Fcodecov.io\u002Fgh\u002FTuringLang\u002FTuring.jl\u002Fbranch\u002Fmain\u002Fgraph\u002Fbadge.svg\" alt=\"Code Coverage\" \u002F>\u003C\u002Fa>\n\u003Ca href=\"https:\u002F\u002Fgithub.com\u002FSciML\u002FColPrac\">\u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FColPrac-Contributor%27s%20Guide-blueviolet\" alt=\"ColPrac: Contributor's Guide on Collaborative Practices for Community Packages\" \u002F>\u003C\u002Fa>\n\u003C\u002Fp>\n\n## Get started\n\nInstall Julia (see [the official Julia website](https:\u002F\u002Fjulialang.org\u002Finstall\u002F); you will need at least 
Julia 1.10.8 for the latest version of Turing.jl).\nThen, launch a Julia REPL and run:\n\n```julia\njulia> using Pkg; Pkg.add(\"Turing\")\n```\n\nYou can define models using the `@model` macro, and then perform Markov chain Monte Carlo sampling using the `sample` function:\n\n```julia\njulia> using Turing\n\njulia> @model function linear_regression(x)\n           # Priors\n           α ~ Normal(0, 1)\n           β ~ Normal(0, 1)\n           σ² ~ truncated(Cauchy(0, 3); lower=0)\n\n           # Likelihood\n           μ = α .+ β .* x\n           y ~ MvNormal(μ, σ² * I)\n       end\n\njulia> x, y = rand(10), rand(10)\n\njulia> posterior = linear_regression(x) | (; y = y)\n\njulia> chain = sample(posterior, NUTS(), 1000)\n```\n\nYou can find the main TuringLang documentation at [**https:\u002F\u002Fturinglang.org**](https:\u002F\u002Fturinglang.org), which contains general information about Turing.jl's features, as well as a variety of tutorials with examples of Turing.jl models.\n\nAPI documentation for Turing.jl is specifically available at [**https:\u002F\u002Fturinglang.org\u002FTuring.jl\u002Fstable**](https:\u002F\u002Fturinglang.org\u002FTuring.jl\u002Fstable\u002F).\n\n## Contributing\n\n### Issues\n\nIf you find any bugs or unintuitive behaviour when using Turing.jl, please do [open an issue](https:\u002F\u002Fgithub.com\u002FTuringLang\u002FTuring.jl\u002Fissues)!\nPlease don't worry about finding the correct repository for the issue; we can migrate the issue to the appropriate repository if we need to.\n\n### Pull requests\n\nWe are of course also very happy to receive pull requests.\nIf you are unsure about whether a particular feature would be welcome, you can open an issue for discussion first.\n\nWhen opening a PR, non-breaking releases (patch versions) should target the `main` branch.\nBreaking releases (minor version) should target the `breaking` branch.\n\nIf you have not received any feedback on an issue or PR for a while, please feel free to ping 
`@TuringLang\u002Fmaintainers` in a comment.\n\n## Other channels\n\nThe Turing.jl userbase tends to be most active on the [`#turing` channel of Julia Slack](https:\u002F\u002Fjulialang.slack.com\u002Farchives\u002FCCYDC34A0).\nIf you do not have an invitation to Julia's Slack, you can get one from [the official Julia website](https:\u002F\u002Fjulialang.org\u002Fslack\u002F).\n\nThere are also often threads on [Julia Discourse](https:\u002F\u002Fdiscourse.julialang.org) (you can search using, e.g., [the `turing` tag](https:\u002F\u002Fdiscourse.julialang.org\u002Ftag\u002Fturing)).\n\n## What's changed recently?\n\nWe publish a fortnightly newsletter summarising recent updates in the TuringLang ecosystem, which you can view on [our website](https:\u002F\u002Fturinglang.org\u002Fnews\u002F), [GitHub](https:\u002F\u002Fgithub.com\u002FTuringLang\u002FTuring.jl\u002Fissues\u002F2498), or [Julia Slack](https:\u002F\u002Fjulialang.slack.com\u002Farchives\u002FCCYDC34A0).\n\nFor Turing.jl specifically, you can see a full changelog in [`HISTORY.md`](https:\u002F\u002Fgithub.com\u002FTuringLang\u002FTuring.jl\u002Fblob\u002Fmain\u002FHISTORY.md) or [our GitHub releases](https:\u002F\u002Fgithub.com\u002FTuringLang\u002FTuring.jl\u002Freleases).\n\n## Where does Turing.jl sit in the TuringLang ecosystem?\n\nTuring.jl is the main entry point for users, and seeks to provide a unified, convenient interface to all of the functionality in the TuringLang (and broader Julia) ecosystem.\n\nIn particular, it takes the ability to specify probabilistic models with [DynamicPPL.jl](https:\u002F\u002Fgithub.com\u002FTuringLang\u002FDynamicPPL.jl), and combines it with a number of inference algorithms, such as:\n\n  - Markov Chain Monte Carlo (both an abstract interface: [AbstractMCMC.jl](https:\u002F\u002Fgithub.com\u002FTuringLang\u002FAbstractMCMC.jl), and individual samplers, such as [AdvancedMH.jl](https:\u002F\u002Fgithub.com\u002FTuringLang\u002FAdvancedMH.jl), 
[AdvancedHMC.jl](https:\u002F\u002Fgithub.com\u002FTuringLang\u002FAdvancedHMC.jl), and more).\n  - Variational inference using [AdvancedVI.jl](https:\u002F\u002Fgithub.com\u002FTuringLang\u002FAdvancedVI.jl).\n  - Maximum likelihood and maximum a posteriori estimation, which rely on SciML's [Optimization.jl interface](https:\u002F\u002Fgithub.com\u002FSciML\u002FOptimization.jl).\n\n## Citing Turing.jl\n\nIf you have used Turing.jl in your work, we would be very grateful if you could cite the following:\n\n[**Turing.jl: a general-purpose probabilistic programming language**](https:\u002F\u002Fdoi.org\u002F10.1145\u002F3711897)  \nTor Erlend Fjelde, Kai Xu, David Widmann, Mohamed Tarek, Cameron Pfiffer, Martin Trapp, Seth D. Axen, Xianda Sun, Markus Hauru, Penelope Yong, Will Tebbutt, Zoubin Ghahramani, Hong Ge  \nACM Transactions on Probabilistic Machine Learning, 2025 (_Just Accepted_)  \n\n[**Turing: A Language for Flexible Probabilistic Inference**](https:\u002F\u002Fproceedings.mlr.press\u002Fv84\u002Fge18b.html)  \nHong Ge, Kai Xu, Zoubin Ghahramani  \nProceedings of the Twenty-First International Conference on Artificial Intelligence and Statistics, PMLR 84:1682-1690, 2018.\n\n\u003Cdetails>\n\n\u003Csummary>Expand for BibTeX\u003C\u002Fsummary>\n\n```bibtex\n@article{10.1145\u002F3711897,\nauthor = {Fjelde, Tor Erlend and Xu, Kai and Widmann, David and Tarek, Mohamed and Pfiffer, Cameron and Trapp, Martin and Axen, Seth D. and Sun, Xianda and Hauru, Markus and Yong, Penelope and Tebbutt, Will and Ghahramani, Zoubin and Ge, Hong},\ntitle = {Turing.jl: a general-purpose probabilistic programming language},\nyear = {2025},\npublisher = {Association for Computing Machinery},\naddress = {New York, NY, USA},\nurl = {https:\u002F\u002Fdoi.org\u002F10.1145\u002F3711897},\ndoi = {10.1145\u002F3711897},\nnote = {Just Accepted},\njournal = {ACM Trans. Probab. Mach. 
Learn.},\nmonth = feb,\n}\n\n@InProceedings{pmlr-v84-ge18b,\n  title = \t {Turing: A Language for Flexible Probabilistic Inference},\n  author = \t {Ge, Hong and Xu, Kai and Ghahramani, Zoubin},\n  booktitle = \t {Proceedings of the Twenty-First International Conference on Artificial Intelligence and Statistics},\n  pages = \t {1682--1690},\n  year = \t {2018},\n  editor = \t {Storkey, Amos and Perez-Cruz, Fernando},\n  volume = \t {84},\n  series = \t {Proceedings of Machine Learning Research},\n  month = \t {09--11 Apr},\n  publisher =    {PMLR},\n  pdf = \t {http:\u002F\u002Fproceedings.mlr.press\u002Fv84\u002Fge18b\u002Fge18b.pdf},\n  url = \t {https:\u002F\u002Fproceedings.mlr.press\u002Fv84\u002Fge18b.html},\n}\n```\n\n\u003C\u002Fdetails>\n","\u003Cp align=\"center\">\n  \u003Cpicture>\n    \u003Csource media=\"(prefers-color-scheme: dark)\" srcset=\"https:\u002F\u002Fturinglang.org\u002Fassets\u002Flogo\u002Fturing-logo-dark.svg\">\n    \u003Cimg src=\"https:\u002F\u002Fturinglang.org\u002Fassets\u002Flogo\u002Fturing-logo-light.svg\" alt=\"Turing.jl logo\" width=\"300\">\n  \u003C\u002Fpicture>\n\u003C\u002Fp>\n\u003Cp align=\"center\">\u003Ci>基于概率编程的贝叶斯推断\u003C\u002Fi>\u003C\u002Fp>\n\u003Cp align=\"center\">\n\u003Ca href=\"https:\u002F\u002Fturinglang.org\u002F\">\u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002Fdocs-tutorials-blue.svg\" alt=\"教程\" \u002F>\u003C\u002Fa>\n\u003Ca href=\"https:\u002F\u002Fturinglang.org\u002FTuring.jl\u002Fstable\">\u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002Fdocs-API-blue.svg\" alt=\"API 文档\" \u002F>\u003C\u002Fa>\n\u003Ca href=\"https:\u002F\u002Fgithub.com\u002FTuringLang\u002FTuring.jl\u002Factions\u002Fworkflows\u002FTests.yml\">\u003Cimg src=\"https:\u002F\u002Fgithub.com\u002FTuringLang\u002FTuring.jl\u002Factions\u002Fworkflows\u002FTests.yml\u002Fbadge.svg\" alt=\"测试\" \u002F>\u003C\u002Fa>\n\u003Ca 
href=\"https:\u002F\u002Fcodecov.io\u002Fgh\u002FTuringLang\u002FTuring.jl\">\u003Cimg src=\"https:\u002F\u002Fcodecov.io\u002Fgh\u002FTuringLang\u002FTuring.jl\u002Fbranch\u002Fmain\u002Fgraph\u002Fbadge.svg\" alt=\"代码覆盖率\" \u002F>\u003C\u002Fa>\n\u003Ca href=\"https:\u002F\u002Fgithub.com\u002FSciML\u002FColPrac\">\u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FColPrac-Contributor%27s%20Guide-blueviolet\" alt=\"ColPrac：社区包协作实践贡献者指南\" \u002F>\u003C\u002Fa>\n\u003C\u002Fp>\n\n## 开始使用\n\n安装 Julia（请参阅 [Julia 官方网站](https:\u002F\u002Fjulialang.org\u002Finstall\u002F)；运行最新版 Turing.jl 至少需要 Julia 1.10.8）。然后，启动 Julia REPL 并运行：\n\n```julia\njulia> using Pkg; Pkg.add(\"Turing\")\n```\n\n您可以使用 `@model` 宏定义模型，然后通过 `sample` 函数进行马尔可夫链蒙特卡洛采样：\n\n```julia\njulia> using Turing\n\njulia> @model function linear_regression(x)\n           # 先验分布\n           α ~ Normal(0, 1)\n           β ~ Normal(0, 1)\n           σ² ~ truncated(Cauchy(0, 3); lower=0)\n\n           # 似然函数\n           μ = α .+ β .* x\n           y ~ MvNormal(μ, σ² * I)\n       end\n\njulia> x, y = rand(10), rand(10)\n\njulia> posterior = linear_regression(x) | (; y = y)\n\njulia> chain = sample(posterior, NUTS(), 1000)\n```\n\n您可以在 [**https:\u002F\u002Fturinglang.org**](https:\u002F\u002Fturinglang.org) 找到 TuringLang 的主要文档，其中包含关于 Turing.jl 功能的概述以及大量带有 Turing.jl 模型示例的教程。\n\nTuring.jl 的 API 文档则专门位于 [**https:\u002F\u002Fturinglang.org\u002FTuring.jl\u002Fstable**](https:\u002F\u002Fturinglang.org\u002FTuring.jl\u002Fstable\u002F)。\n\n## 贡献\n\n### 问题报告\n\n如果您在使用 Turing.jl 时发现任何错误或不直观的行为，请务必 [提交一个问题](https:\u002F\u002Fgithub.com\u002FTuringLang\u002FTuring.jl\u002Fissues)！无需担心将问题提交到正确的仓库；如有必要，我们会将其迁移到合适的仓库。\n\n### 拉取请求\n\n我们当然也非常欢迎拉取请求。如果您不确定某个功能是否受欢迎，可以先创建一个问题进行讨论。\n\n提交 PR 时，非破坏性更新（补丁版本）应针对 `main` 分支；破坏性更新（小版本）则应针对 `breaking` 分支。\n\n如果您的问题或 PR 一段时间内未收到回复，请随时在评论中 @TuringLang\u002Fmaintainers。\n\n## 其他交流渠道\n\nTuring.jl 用户群体通常在 Julia Slack 的 [`#turing` 
频道](https:\u002F\u002Fjulialang.slack.com\u002Farchives\u002FCCYDC34A0) 中最为活跃。如果您没有 Julia Slack 的邀请，可以从 [Julia 官方网站](https:\u002F\u002Fjulialang.org\u002Fslack\u002F) 获取。\n\n此外，在 [Julia Discourse](https:\u002F\u002Fdiscourse.julialang.org) 上也经常有关于 Turing.jl 的讨论主题（您可以通过例如 [“turing” 标签](https:\u002F\u002Fdiscourse.julialang.org\u002Ftag\u002Fturing) 进行搜索）。\n\n## 最近有哪些变化？\n\n我们每两周发布一期简报，总结 TuringLang 生态系统中的最新动态，您可以在我们的 [官网](https:\u002F\u002Fturinglang.org\u002Fnews\u002F)、[GitHub](https:\u002F\u002Fgithub.com\u002FTuringLang\u002FTuring.jl\u002Fissues\u002F2498) 或 [Julia Slack](https:\u002F\u002Fjulialang.slack.com\u002Farchives\u002FCCYDC34A0) 上查看。\n\n对于 Turing.jl 本身，完整的变更日志可在 [`HISTORY.md`](https:\u002F\u002Fgithub.com\u002FTuringLang\u002FTuring.jl\u002Fblob\u002Fmain\u002FHISTORY.md) 或 [我们的 GitHub 发布页面](https:\u002F\u002Fgithub.com\u002FTuringLang\u002FTuring.jl\u002Freleases) 中找到。\n\n## Turing.jl 在 TuringLang 生态系统中处于什么位置？\n\nTuring.jl 是用户的主要入口，旨在为 TuringLang（以及更广泛的 Julia 生态系统）中的所有功能提供统一且便捷的接口。\n\n具体而言，它结合了使用 [DynamicPPL.jl](https:\u002F\u002Fgithub.com\u002FTuringLang\u002FDynamicPPL.jl) 指定概率模型的能力，并与多种推理算法相结合，例如：\n\n  - 马尔可夫链蒙特卡洛（包括抽象接口 [AbstractMCMC.jl](https:\u002F\u002Fgithub.com\u002FTuringLang\u002FAbstractMCMC.jl)，以及具体的采样器，如 [AdvancedMH.jl](https:\u002F\u002Fgithub.com\u002FTuringLang\u002FAdvancedMH.jl)、[AdvancedHMC.jl](https:\u002F\u002Fgithub.com\u002FTuringLang\u002FAdvancedHMC.jl) 等）。\n  - 使用 [AdvancedVI.jl](https:\u002F\u002Fgithub.com\u002FTuringLang\u002FAdvancedVI.jl) 进行变分推断。\n  - 最大似然估计和最大后验估计，这些依赖于 SciML 的 [Optimization.jl 接口](https:\u002F\u002Fgithub.com\u002FSciML\u002FOptimization.jl)。\n\n## 引用 Turing.jl\n\n如果您在工作中使用了 Turing.jl，我们将会非常感激您能够引用以下文献：\n\n[**Turing.jl：一种通用的概率编程语言**](https:\u002F\u002Fdoi.org\u002F10.1145\u002F3711897)  \n托尔·埃尔伦德·菲耶尔德、许凯、大卫·维德曼、穆罕默德·塔雷克、卡梅隆·皮费尔、马丁·特拉普、塞思·D·阿克森、孙显达、马库斯·豪鲁、佩内洛普·永、威尔·泰布特、祖宾·加拉马尼、何虹  \nACM 概率机器学习汇刊，2025年（已接受）\n\n[**Turing：一种用于灵活概率推理的语言**](https:\u002F\u002Fproceedings.mlr.press\u002Fv84\u002Fge18b.html) 
 \n何虹、许凯、祖宾·加拉马尼  \n第二十一届国际人工智能与统计会议论文集，PMLR 84:1682-1690，2018年。\n\n\u003Cdetails>\n\n\u003Csummary>展开以查看 BibTeX 格式\u003C\u002Fsummary>\n\n```bibtex\n@article{10.1145\u002F3711897,\nauthor = {Fjelde, Tor Erlend and Xu, Kai and Widmann, David and Tarek, Mohamed and Pfiffer, Cameron and Trapp, Martin and Axen, Seth D. and Sun, Xianda and Hauru, Markus and Yong, Penelope and Tebbutt, Will and Ghahramani, Zoubin and Ge, Hong},\ntitle = {Turing.jl: a general-purpose probabilistic programming language},\nyear = {2025},\npublisher = {Association for Computing Machinery},\naddress = {New York, NY, USA},\nurl = {https:\u002F\u002Fdoi.org\u002F10.1145\u002F3711897},\ndoi = {10.1145\u002F3711897},\nnote = {Just Accepted},\njournal = {ACM Trans. Probab. Mach. Learn.},\nmonth = feb,\n}\n\n@InProceedings{pmlr-v84-ge18b,\n  title = \t {Turing: A Language for Flexible Probabilistic Inference},\n  author = \t {Ge, Hong and Xu, Kai and Ghahramani, Zoubin},\n  booktitle = \t {Proceedings of the Twenty-First International Conference on Artificial Intelligence and Statistics},\n  pages = \t {1682--1690},\n  year = \t {2018},\n  editor = \t {Storkey, Amos and Perez-Cruz, Fernando},\n  volume = \t {84},\n  series = \t {Proceedings of Machine Learning Research},\n  month = \t {09--11 Apr},\n  publisher =    {PMLR},\n  pdf = \t {http:\u002F\u002Fproceedings.mlr.press\u002Fv84\u002Fge18b\u002Fge18b.pdf},\n  url = \t {https:\u002F\u002Fproceedings.mlr.press\u002Fv84\u002Fge18b.html},\n}\n```\n\n\u003C\u002Fdetails>","# Turing.jl 快速上手指南\n\nTuring.jl 是 Julia 生态中用于**贝叶斯推断**的概率编程库。它允许用户使用直观的语法定义概率模型，并利用马尔可夫链蒙特卡洛（MCMC）、变分推断等算法进行高效采样。\n\n## 环境准备\n\n在开始之前，请确保您的系统满足以下要求：\n\n*   **操作系统**：支持 Windows、macOS 或 Linux。\n*   **Julia 版本**：需安装 **Julia 1.10.8** 或更高版本（这是运行最新版 Turing.jl 的最低要求）。\n    *   下载地址：[https:\u002F\u002Fjulialang.org\u002Finstall\u002F](https:\u002F\u002Fjulialang.org\u002Finstall\u002F)\n    *   *国内用户提示*：如果官网下载较慢，可尝试使用清华或中科大镜像站提供的 Julia 安装包。\n*   **前置依赖**：无需额外安装 Python 或其他语言环境，Turing.jl 
纯原生于 Julia。\n\n## 安装步骤\n\n启动 Julia REPL（交互式命令行），执行以下命令安装 Turing 包：\n\n```julia\njulia> using Pkg; Pkg.add(\"Turing\")\n```\n\n> **加速提示**：若下载速度较慢，建议在 Julia 中配置国内镜像源（如清华源）后再执行安装命令：\n> ```julia\n> julia> import Pkg; Pkg.Registry.add(Pkg.RegistrySpec(url=\"https:\u002F\u002Fmirrors.tuna.tsinghua.edu.cn\u002Fgit\u002Fjulia\u002FGeneral.git\"))\n> julia> ENV[\"JULIA_PKG_SERVER\"] = \"https:\u002F\u002Fmirrors.tuna.tsinghua.edu.cn\u002Fjulia\"\n> julia> using Pkg; Pkg.add(\"Turing\")\n> ```\n\n## 基本使用\n\nTuring.jl 的核心工作流分为三步：定义模型、准备数据、执行采样。以下是一个简单的**线性回归**示例：\n\n### 1. 加载库与定义模型\n使用 `@model` 宏定义概率模型，包括先验分布（Priors）和似然函数（Likelihood）。\n\n```julia\nusing Turing\n\n@model function linear_regression(x)\n    # 先验分布\n    α ~ Normal(0, 1)\n    β ~ Normal(0, 1)\n    σ² ~ truncated(Cauchy(0, 3); lower=0)\n\n    # 似然函数\n    μ = α .+ β .* x\n    y ~ MvNormal(μ, σ² * I)\nend\n```\n\n### 2. 生成模拟数据\n此处使用随机数生成简单的测试数据 `x` 和 `y`。\n\n```julia\nx, y = rand(10), rand(10)\n```\n\n### 3. 执行采样\n将观测数据传入模型，并使用 NUTS 算法（一种高效的 HMC 变体）进行 1000 次采样。\n\n```julia\n# 绑定观测数据 y\nposterior = linear_regression(x) | (; y = y)\n\n# 执行采样 (NUTS 算法，1000 次迭代)\nchain = sample(posterior, NUTS(), 1000)\n```\n\n采样完成后，`chain` 对象即包含参数的后验分布结果，您可以进一步使用 `summarize(chain)` 查看统计摘要或进行可视化分析。\n\n---\n*更多详细教程与 API 文档请访问：[https:\u002F\u002Fturinglang.org](https:\u002F\u002Fturinglang.org)*","某医疗数据科学家正在构建一个预测新药疗效的统计模型，需要处理小样本临床数据并量化参数不确定性。\n\n### 没有 Turing.jl 时\n- **建模门槛高**：必须手动推导复杂的后验分布公式，或依赖黑盒商业软件，难以灵活定制层级贝叶斯结构。\n- **不确定性量化困难**：传统点估计方法（如最小二乘法）只能给出单一预测值，无法提供置信区间，导致医生难以评估风险。\n- **迭代效率低下**：每次调整先验假设或模型结构都需要重写大量底层采样代码，开发周期长达数周。\n- **小样本表现差**：在仅有几十例患者数据时，常规机器学习模型容易过拟合，缺乏利用先验知识正则化的能力。\n\n### 使用 Turing.jl 后\n- **声明式建模**：利用 `@model` 宏直接用概率编程语言描述变量关系，像写数学公式一样定义层级模型，无需手动推导。\n- **完整不确定性输出**：内置 NUTS 等高级采样器自动运行 MCMC，直接生成参数的后验分布，清晰展示疗效预测的可信范围。\n- **快速原型验证**：修改先验分布（如将正态分布改为柯西分布）仅需改动一行代码，几分钟内即可重新采样并对比结果。\n- **小样本鲁棒性**：通过引入医学专家的先验知识（如药物半衰期范围），有效约束模型空间，在极少数据下仍能得出可靠结论。\n\nTuring.jl 
将繁琐的贝叶斯推断转化为直观的代码表达，让研究人员能专注于领域逻辑而非算法实现，显著提升了小数据场景下的决策可靠性。","https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FTuringLang_Turing.jl_eb01532f.png","TuringLang","The Turing Language","https:\u002F\u002Foss.gittoolsai.com\u002Favatars\u002FTuringLang_1c814f6a.png","Bayesian inference with probabilistic programming",null,"https:\u002F\u002Fturinglang.org","https:\u002F\u002Fgithub.com\u002FTuringLang",[80],{"name":81,"color":82,"percentage":83},"Julia","#a270ba",100,2229,236,"2026-04-09T14:17:38","MIT","Linux, macOS, Windows","未说明 (可选，取决于具体使用的推理算法后端)","未说明",{"notes":92,"python":93,"dependencies":94},"该工具是基于 Julia 语言的贝叶斯推断概率编程库，非 Python 项目。安装前需先安装至少 1.10.8 版本的 Julia。它整合了多种推断算法（如 MCMC、变分推断），具体硬件需求取决于所选用的后端算法及模型复杂度。","不适用 (基于 Julia 语言)",[95,96,97,98,99,100],"Julia >= 1.10.8","DynamicPPL.jl","AbstractMCMC.jl","AdvancedHMC.jl","AdvancedVI.jl","Optimization.jl",[14],[103,104,105,106,107,108,109,110,111,112,113,114,115,116],"machine-learning","probabilistic-programming","julia-language","artificial-intelligence","bayesian-inference","hamiltonian-monte-carlo","turing","bayesian-statistics","mcmc","hmc","probabilistic-graphical-models","probabilistic-models","probabilistic-inference","bayesian-neural-networks","2026-03-27T02:49:30.150509","2026-04-12T09:06:37.561190",[120,125,130,135,139,143],{"id":121,"question_zh":122,"answer_zh":123,"source_url":124},30502,"如何在 Turing.jl 的采样命令中指定目标接受率（target acceptance）和自适应步数（adaptation steps）？","在调用 `sample` 函数时，可以通过向采样器（如 `NUTS()`）传递参数来设置。例如：`sample(model, NUTS(0.98, 1000), 3000)`，其中第一个参数是目标接受率（默认通常为 0.8），第二个参数是自适应\u002F预热步数。如果遇到数值错误导致提案被拒绝（如 `isfinite` 检查失败），可能需要检查模型定义或数据缩放，或者考虑切换到其他后端。","https:\u002F\u002Fgithub.com\u002FTuringLang\u002FTuring.jl\u002Fissues\u002F1851",{"id":126,"question_zh":127,"answer_zh":128,"source_url":129},30503,"为什么使用 Zygote 作为自动微分后端时，随着模型中 `~` 语句数量增加，编译时间会急剧上升？","这是 Zygote 后端的一个已知限制，其编译时间随模型复杂度（`~` 语句数量）呈非线性增长（例如 14 个语句需 5 分钟，20 个需 23 分钟）。维护者指出，修复此问题需要对 Zygote 进行根本性的更改（如使用优化的中间表示 IR），目前暂无短期修复计划。建议对于大型模型，尝试使用 
`ReverseDiff` 或 `ForwardDiff` 等其他后端，或者等待未来后端架构的更新。","https:\u002F\u002Fgithub.com\u002FTuringLang\u002FTuring.jl\u002Fissues\u002F1754",{"id":131,"question_zh":132,"answer_zh":133,"source_url":134},30504,"遇到 `BernoulliLogit` 在 `ReverseDiff` 模式下性能严重回归怎么办？","该性能回退是由特定实现引起的，主要影响 `ReverseDiff` 后端。维护者表示这是一个 `ReverseDiff` 特有的问题，预计在切换到新的自动微分后端（如 Mooncake）后将不再相关。临时解决方案是避免在该特定配置下使用 `BernoulliLogit`，或者尝试使用 `ForwardDiff`（虽然速度较慢但可能更稳定），直到后端升级完成。","https:\u002F\u002Fgithub.com\u002FTuringLang\u002FTuring.jl\u002Fissues\u002F1934",{"id":136,"question_zh":137,"answer_zh":138,"source_url":134},30505,"如何高效地对包含泛型内部类型的数组操作（如 `Normal.(means, stds)`）进行自动微分？","目前标准的 `ForwardDiff` 或 `ReverseDiff` 对此类泛型广播操作的支持有限且效率不高。维护者指出，目前唯一能较好处理此类情况的 Julia AD 工具是 `Enzyme`（基于源码转换 SCT）。对于普通用户，建议尽量将操作展开或使用显式循环，或者关注 `Enzyme` 后端的集成进展。类型级编程或 `return_type` 可能被用于判断结构体是否可微，但这属于高级用法。",{"id":140,"question_zh":141,"answer_zh":142,"source_url":124},30506,"TuringGLM 采样速度远慢于 brms (RStan) 或 bambi (PyMC) 且出现数值错误，该如何解决？","这种情况通常由数值不稳定引起（日志概率为非有限值）。建议：1. 检查数据是否需要标准化或缩放；2. 确认先验分布是否合理；3. 尝试调整采样器参数（如减小步长）；4. 
如果问题依旧，可能是 `TuringGLM` 特定实现的局限性，维护者建议直接在 `TuringGLM` 仓库提交 Issue，因为底层高斯似然计算可能需要重写以兼容 `ReverseDiff` 或提高稳定性。",{"id":144,"question_zh":145,"answer_zh":146,"source_url":147},30507,"如何配置 TagBot 以自动触发注册表更新？","需要在项目的 `.github\u002Fworkflows\u002FTagBot.yml` 文件中添加 issue comment 触发器。具体步骤参考 Julia Discourse 论坛的相关公告（https:\u002F\u002Fdiscourse.julialang.org\u002Ft\u002Fann-required-updates-to-tagbot-yml\u002F49249）。配置完成后，可以通过在 Issue 中评论特定内容来手动触发 TagBot，或者在合并注册表 PR 后自动触发。","https:\u002F\u002Fgithub.com\u002FTuringLang\u002FTuring.jl\u002Fissues\u002F1455",[149,154,159,164,169,174,179,184,189,194,199,204,209,214,219,224,229,234,239,244],{"id":150,"version":151,"summary_zh":152,"released_at":153},214805,"v0.43.5","## Turing v0.43.5\n\n[自 v0.43.4 以来的差异](https:\u002F\u002Fgithub.com\u002FTuringLang\u002FTuring.jl\u002Fcompare\u002Fv0.43.4...v0.43.5)\n\n修复在使用 Gibbs 采样时，子模型内部对 VarNamedTuple 模板的错误处理。\n\n**已合并的拉取请求：**\n- 修复 Gibbs\u002F子模型\u002F模板处理错误 (#2799) (@penelopeysm)\n\n**已关闭的问题：**\n- Gibbs \u002F 子模型错误 (#2797)\n- Gibbs 采样器与带有向量赋值的子模型之间的交互问题 (#2798)","2026-04-01T23:48:10",{"id":155,"version":156,"summary_zh":157,"released_at":158},214806,"v0.43.4","## Turing v0.43.4\n\n[自 v0.43.3 以来的差异](https:\u002F\u002Fgithub.com\u002FTuringLang\u002FTuring.jl\u002Fcompare\u002Fv0.43.3...v0.43.4)\n\n修复了内部结构体中缺失的 `Base.copy` 实现。\n\n**已合并的拉取请求：**\n- 修复一些缺失的 `Base.copy` 实现 (#2796) (@penelopeysm)\n\n**已关闭的问题：**\n- 将 SSMProblems 和 GeneralisedFilters 集成到 Turing.jl 中 (#2428)\n- SMC 采样错误 (#2656)","2026-04-01T16:51:24",{"id":160,"version":161,"summary_zh":162,"released_at":163},214807,"v0.43.3","## Turing v0.43.3\n\n[与 v0.43.2 的差异](https:\u002F\u002Fgithub.com\u002FTuringLang\u002FTuring.jl\u002Fcompare\u002Fv0.43.2...v0.43.3)\n\n统一 HMC 和外部采样器的参数初始化。\n现在，外部采样器（如 HMC）会多次尝试生成有效的初始参数，而不再仅采用第一组参数。\n\n从 DynamicPPL 重新导出 `set_logprob_type!`，以便用户可以控制在评估 Turing 模型时使用的基底对数概率类型。\n例如，调用 `set_logprob_type!(Float32)` 将使 Turing 在进行对数概率计算时使用 `Float32` 类型，只有当模型中存在某些因素（如返回 `Float64` 
对数概率的分布）导致需要更高精度时，才会进行提升。\n请注意，这是一个编译时偏好设置：要使其生效，您需要在调用 `set_logprob_type!` 后重启 Julia 会话。\n\n此外，请注意，目前对非 `Float64` 对数概率的采样器支持仍然有限。\n尽管 DynamicPPL 承诺不会不必要地提升浮点类型，但许多采样器，包括 HMC 和 NUTS，仍在内部使用 `Float64`，因此即使模型本身使用 `Float32`，也会导致对数概率和参数被提升为 `Float64`。\n\n**已合并的拉取请求：**\n- 统一 HMC、外部采样器和 DynamicHMC 的参数初始化；重新导出 `DynamicPPL.set_logprob_type!` (#2794) (@penelopeysm)\n\n**已关闭的问题：**\n- 新的 Turing 官网：请访问 turinglang.org (#2041)\n- 支持 Float32 (#2212)\n- 在采样器状态等中包含 VarNames (#2511)\n- 作为结构体一部分修改数组输入并将其传递给 softmax 时，使用 ForwardDiff 会导致转换 MethodError (#2516)\n- 当某个变量没有采样器时的 Gibbs 方法 (#2536)\n- 从多元分布的数组分布中采样时，loglikelihood 上出现分派错误 (#2549)\n- 移除 MH，改用 externalsampler(MH) (#2593)\n- 测试 mcmc\u002FInference 时发生段错误 (#2655)\n- 是否停止使用压缩提交？(#2700)\n- 外部采样器应更努力地生成初始参数 (#2739)\n- 是否移除 `IS`“采样器”（也许还包括 SGLD + SGHMC）？(#2767)\n- 用初始化策略替换 ParticleMCMCContext (#2768)\n- 导出模型为 GraphViz、Mermaid、NetworkX，或许还有 Cytoscape.js 格式 (#2782)\n- 添加通用贝叶斯工作流方法的便捷功能 (#2785)\n- 是否加密 ModeResult 内部的 VNT？(#2786)\n- sample 中的 `initial_params` 不允许我更改 t[1] 等变量 (#2792)","2026-03-26T19:26:54",{"id":165,"version":166,"summary_zh":167,"released_at":168},214808,"v0.43.2","## Turing v0.43.2\n\n[与 v0.43.1 的差异](https:\u002F\u002Fgithub.com\u002FTuringLang\u002FTuring.jl\u002Fcompare\u002Fv0.43.1...v0.43.2)\n\n当 `Gibbs` 采样器缺少模型中任何变量的组件采样器时，抛出 `ArgumentError`。可以通过向 `sample` 传递 `check_model=false` 来绕过此检查。\n\n**已合并的拉取请求：**\n- 当 Gibbs 采样器缺少任何变量的组件时抛出错误 (#2788) (@hardik-xi11)\n\n**已关闭的问题：**\n- 是否应为每条链记录初始参数样本\u002F值 (#1282)\n- 当前的 MH 方法在按变量级别上未能正确支持链接函数\u002F逆链接函数 (#1583)\n- 使用小 α 值的 Beta 分布进行模型采样的问题 (#2159)\n- 变量缺少采样器时不报错 (#2361)\n- ForwardDiff 的 NaN 安全模式 (#2769)\n- 类型为 DynamicPPL.VarNamedTuples.VarNamedTuple 的结构没有 Tables.column 实现 (#2787)","2026-03-14T00:35:49",{"id":170,"version":171,"summary_zh":172,"released_at":173},214809,"v0.43.1","## Turing v0.43.1\n\n[与 v0.43.0 的差异](https:\u002F\u002Fgithub.com\u002FTuringLang\u002FTuring.jl\u002Fcompare\u002Fv0.43.0...v0.43.1)\n\n忽略 `SMC` 采样器中的 `discard_initial` 和 `thinning` 参数，以防止在提供这些参数时出现 
`BoundsError`。\n\n**已合并的拉取请求：**\n- 忽略 SMC 的 `discard_initial` 和 `thinning` 参数 (#2784) (@hardik-xi11)\n\n**已关闭的问题：**\n- 当 `discard_initial > 1` 或 `thinning > 1` 时，SMC 会抛出 `BoundsError` (#1811)","2026-03-09T00:08:21",{"id":175,"version":176,"summary_zh":177,"released_at":178},214810,"v0.43.0","## Turing v0.43.0\n\n[自 v0.42.9 以来的差异](https:\u002F\u002Fgithub.com\u002FTuringLang\u002FTuring.jl\u002Fcompare\u002Fv0.42.9...v0.43.0)\n\n## DynamicPPL 0.40 和 `VarNamedTuple`\n\nDynamicPPL v0.40 对 Turing 的内部数据结构进行了重大重构。最显著的变化是，过去可能使用 `Dict{VarName}` 或 `NamedTuple` 的场景，现在都统一替换为一种名为 `VarNamedTuple` 的单一数据结构。\n\n这一改进在鲁棒性和性能方面带来了显著提升。\n\n然而，这也对 Turing 模型施加了一些限制，并引入了一些用户接口层面的破坏性变更。具体来说，**可以包含随机变量的容器类型**现在更加受限：如果 `x[i] ~ dist` 是一个随机变量，那么 `x` 必须满足以下条件：\n\n- 它必须是 `AbstractArray` 类型。目前不支持字典或其他容器（我们已为此创建了一个 [问题跟踪单](https:\u002F\u002Fgithub.com\u002FTuringLang\u002FDynamicPPL.jl\u002Fissues\u002F1263)）。如果您确实需要此功能，请提交一个问题告诉我们，我们将尽力将其列为优先事项。\n\n  ```julia\n  @model function f()\n      # 允许\n      x = Array{Float64}(undef, 1)\n      x[1] ~ Normal()\n\n      # 禁止\n      x = Dict{Int,Float64}()\n      return x[1] ~ Normal()\n  end\n  ```\n\n- 在调用 `~` 运算符期间，其大小不得发生变化。以下用法是被禁止的（您应在循环之前将 `x` 初始化为正确的大小）：\n\n  ```julia\n  x = Float64[]\n  for i in 1:10\n      push!(x, 0.0)\n      x[i] ~ Normal()\n  end\n  ```\n\n不过，请注意，这些限制仅适用于**在波浪号语句左侧包含随机变量的容器**。通常情况下，对于*观测*数据的容器、未在波浪号语句中使用的容器，以及本身就是随机变量的容器（例如 `x ~ MvNormal(...)`），则没有此类限制。\n\n- 同样地，随机变量数组的理想情况是每次迭代时其大小保持不变。这意味着类似如下的模型有时会失败（但请参阅下文）：\n\n  ```julia\n  n ~ Poisson(2.0)\n  x = Vector{Float64}(undef, n)\n  for i in 1:n\n      x[i] ~ Normal()\n  end\n  ```\n\n  *从技术层面讲*：对该模型进行推断（例如 MCMC 采样）仍然可行，但如果您希望使用 `returned` 或 `predict` 功能，则必须同时满足以下两个条件：(1) 必须使用 FlexiChains.jl；(2) `x` 中的所有元素都必须是随机变量，即不能出现部分 `x[i]` 是随机变量而另一部分是观测值的情况。\n\n`VarNamedTuple` 和 `@vnt` 现在已直接从 Turing 中重新导出。Turing 提供了一篇文档页面，用于解释如何使用和创建 `VarNamedTuple`，您可以在此处找到相关信息：[点击此处](https:\u002F\u002Fturinglang.org\u002Fdocs\u002Fusage\u002Fvarnamedtuple\u002F)。\n\n## 条件化和固定\n\n在向 Turing 
模型提供条件化或固定的变量时，我们建议您使用 `VarNamedTuple` 来完成。这样做的主要优势在于它能够…","2026-03-05T02:12:43",{"id":180,"version":181,"summary_zh":182,"released_at":183},214811,"v0.42.9","## Turing v0.42.9\n\n[与 v0.42.8 的差异](https:\u002F\u002Fgithub.com\u002FTuringLang\u002FTuring.jl\u002Fcompare\u002Fv0.42.8...v0.42.9)\n\n改进了对使用 Libtask 的模型评估函数的处理。\n\n这意味着在带有关键字参数的模型上运行 SMC 或 PG 时，不再需要使用 @might_produce（有关此内容的更多详细信息，请参阅 v0.42.5 的补丁说明）。\n\n这也意味着，包含观测值的子模型现在能够被 SMC\u002FPG 采样器可靠地处理，而在之前则并非如此（只有当子模型被 Julia 编译器内联时，观测值才会被识别，这可能导致正确性问题）。\n\n**已合并的拉取请求：**\n- 在新的 DPPL \u002F VNT 上运行 CI (#2756) (@penelopeysm)\n- CompatHelper：为软件包测试将 Mooncake 的兼容版本提升至 0.5，同时保留现有兼容范围 (#2758) (@github-actions[bot])\n- 向用户告知 MH 中正在使用的提案 (#2774) (@penelopeysm)\n- 将所有带有 `DynamicPPL.Model` 的方法标记为可产生结果 (#2780) (@penelopeysm)\n\n**已关闭的问题：**\n- 时间 (#2763)\n- `data[i, \"γ (N\u002Fm)\"] ~ Normal(γ_pred, σ)` 在从 v0.41 升级到 v0.42 时无法正常工作 (#2771)\n- 在 Turing v0.42.8 中执行 `data[:, \"y\"] ~ MvNormal(...)` 时出现错误 (#2773)","2026-03-02T15:54:07",{"id":185,"version":186,"summary_zh":187,"released_at":188},214812,"v0.42.8","## Turing v0.42.8\n\n[与 v0.42.7 的差异](https:\u002F\u002Fgithub.com\u002FTuringLang\u002FTuring.jl\u002Fcompare\u002Fv0.42.7...v0.42.8)\n\n通过 `AbstractMCMC.mcmc_callback` 添加对 `TensorBoardLogger.jl` 的支持。更多详情请参阅 [AbstractMCMC 文档](https:\u002F\u002Fturinglang.org\u002FAbstractMCMC.jl\u002Fstable\u002Fcallbacks\u002F#TensorBoard-Logging)。","2026-01-28T18:15:08",{"id":190,"version":191,"summary_zh":192,"released_at":193},214813,"v0.42.7","## Turing v0.42.7\n\n[自 v0.42.6 以来的差异](https:\u002F\u002Fgithub.com\u002FTuringLang\u002FTuring.jl\u002Fcompare\u002Fv0.42.6...v0.42.7)\n\n避免在 MCMC 迭代中重新评估模型，这些迭代的转移状态不会被保存到链中（例如，在初始预烧期或使用稀疏抽样时）。同时，避免 Gibbs 抽样的每个分量采样器在每一轮迭代中不必要地对模型进行一次评估。\n\n**已合并的拉取请求：**\n- 避免在不必要的地方重复评估模型 (#2759) (@penelopeysm)\n\n**已关闭的问题：**\n- Gibbs 在每轮迭代中，每个分量采样器都会浪费性地对模型进行一次重新评估 (#2639)","2026-01-28T17:33:10",{"id":195,"version":196,"summary_zh":197,"released_at":198},214814,"v0.42.6","## Turing v0.42.6\n\n[自 v0.42.5 
以来的差异](https:\u002F\u002Fgithub.com\u002FTuringLang\u002FTuring.jl\u002Fcompare\u002Fv0.42.5...v0.42.6)\n\n修复了 SMC 和 PG 中的一个 bug：由于 `objectid` 检查不正确，结果并非总是能正确存储在 Libtask 跟踪中。","2026-01-26T16:50:09",{"id":200,"version":201,"summary_zh":202,"released_at":203},214815,"v0.42.5","## Turing v0.42.5\n\n[Diff since v0.42.4](https:\u002F\u002Fgithub.com\u002FTuringLang\u002FTuring.jl\u002Fcompare\u002Fv0.42.4...v0.42.5)\n\nSMC and PG can now be used for models with keyword arguments, albeit with one requirement: the user must mark the model function as being able to produce.\r\nFor example, if the model is\r\n\r\n```julia\r\n@model foo(x; y) = a ~ Normal(x, y)\r\n```\r\n\r\nthen before sampling from this with SMC or PG, you will have to run\r\n\r\n```julia\r\nusing Turing\r\n\r\n@might_produce(foo)\r\n```\n\n**Merged pull requests:**\n- Enable keyword arguments for particle methods (#2660) (@penelopeysm)\n\n**Closed issues:**\n- SMC samplers no longer support usage of keyword arguments in models (#2007)\n- PDMat on vector from arraydist fails with reversediff (#2101)\n- Consistent interface for specifying model parameter spaces (#2139)\n- Modernise the tracing data structure for DynamicPPL (#2423)\n- VonMises failed to find valid initial parameters in 1000 tries. 
(#2584)\n- Meta-Bayesian inference with Turing.jl (#2615)","2026-01-26T15:32:02",{"id":205,"version":206,"summary_zh":207,"released_at":208},214816,"v0.42.4","## Turing v0.42.4\n\n[Diff since v0.42.3](https:\u002F\u002Fgithub.com\u002FTuringLang\u002FTuring.jl\u002Fcompare\u002Fv0.42.3...v0.42.4)\n\nFixes a typo that caused NUTS to perform one less adaptation step than in versions prior to 0.41.\n\n**Merged pull requests:**\n- Fix off-by-one NUTS adaptations (#2751) (@penelopeysm)\n\n**Closed issues:**\n- HMC and NUTS treating prior as optional (#2749)\n- state should be 0 not 1 (#2750)","2026-01-09T17:28:03",{"id":210,"version":211,"summary_zh":212,"released_at":213},214817,"v0.42.3","## Turing v0.42.3\n\n[Diff since v0.42.2](https:\u002F\u002Fgithub.com\u002FTuringLang\u002FTuring.jl\u002Fcompare\u002Fv0.42.2...v0.42.3)\n\nRemoves some dead code.\n\n**Merged pull requests:**\n- Allow MLE test to fail (#2746) (@penelopeysm)\n- remove get_matching_type overload (#2748) (@penelopeysm)\n\n**Closed issues:**\n- Remove OptimizationOptimJL dep? 
(#2712)\n- optimisation CI failures (#2744)","2026-01-07T21:07:38",{"id":215,"version":216,"summary_zh":217,"released_at":218},214818,"v0.42.2","## Turing v0.42.2\n\n[Diff since v0.42.1](https:\u002F\u002Fgithub.com\u002FTuringLang\u002FTuring.jl\u002Fcompare\u002Fv0.42.1...v0.42.2)\n\n`InitFromParams(mode_estimate)`, where `mode_estimate` was obtained from an optimisation on a Turing model, now accepts a second optional argument which provides a fallback initialisation strategy if some parameters are missing from `mode_estimate`.\r\n\r\nThis also means that you can now invoke `returned(model, mode_estimate)` to calculate a model's return values given the parameters in `mode_estimate`.\n\n**Merged pull requests:**\n- Test against Enzyme (#2636) (@wsmoses)\n- Fix missing `fallback` argument for `InitFromParams(::ModeResult)` (#2736) (@penelopeysm)\n\n**Closed issues:**\n- create `returned()` method for results from mode estimation (#2607)","2026-01-05T19:12:25",{"id":220,"version":221,"summary_zh":222,"released_at":223},214819,"v0.42.1","## Turing v0.42.1\n\n[Diff since v0.42.0](https:\u002F\u002Fgithub.com\u002FTuringLang\u002FTuring.jl\u002Fcompare\u002Fv0.42.0...v0.42.1)\n\nAvoid passing a full VarInfo to `check_model`, which allows more models to be checked safely for validity.\n\n**Merged pull requests:**\n- Use OnlyAccsVarInfo in check_model (#2742) (@penelopeysm)\n\n**Closed issues:**\n- write better code for specification of initial params for optimisation (#2709)\n- BoundsError in VI (#2731)","2025-12-21T16:12:11",{"id":225,"version":226,"summary_zh":227,"released_at":228},214820,"v0.6.15","## Turing v0.6.15\n\n[Diff since v0.6.14](https:\u002F\u002Fgithub.com\u002FTuringLang\u002FTuring.jl\u002Fcompare\u002Fv0.6.14...v0.6.15)\n\n\nThis release has been identified as a backport.\nAutomated changelogs for backports tend to be wildly incorrect.\nTherefore, the list of issues and pull requests is hidden.\n\u003C!--\n**Merged pull requests:**\n- Test 
refactoring (#731) (@cpfiffer)\n- TS 1: reorganize VarName and VarInfo and rename it to UntypedVarInfo (#740) (@mohdibntarek)\n- Update README.md (#741) (@yebai)\n- Fix a bug in `find_good_eps` (#744) (@xukai92)\n- Add IMM tutorial to the nav menu (#745) (@cpfiffer)\n- Set Turing up for Registrator (#753) (@cpfiffer)\n\n**Closed issues:**\n- Re-organise `test` folder to improve clarity  (#711)\n\n-->","2025-12-21T16:11:58",{"id":230,"version":231,"summary_zh":232,"released_at":233},214821,"v0.42.0","## Turing v0.42.0\r\n\r\n[Diff since v0.41.4](https:\u002F\u002Fgithub.com\u002FTuringLang\u002FTuring.jl\u002Fcompare\u002Fv0.41.4...v0.42.0)\r\n\r\n## DynamicPPL 0.39\r\n\r\nTuring.jl v0.42 brings with it all the underlying changes in DynamicPPL 0.39.\r\nPlease see [the DynamicPPL changelog](https:\u002F\u002Fgithub.com\u002FTuringLang\u002FDynamicPPL.jl\u002Freleases\u002Ftag\u002Fv0.39.0) for full details; here we summarise only the changes that are most pertinent to end-users of Turing.jl.\r\n\r\n### Thread safety opt-in\r\n\r\nTuring.jl has supported threaded tilde-statements for a while now, as long as said tilde-statements are **observations** (i.e., likelihood terms).\r\nFor example:\r\n\r\n```julia\r\n@model function f(y)\r\n    x ~ Normal()\r\n    Threads.@threads for i in eachindex(y)\r\n        y[i] ~ Normal(x)\r\n    end\r\nend\r\n```\r\n\r\n**Models where tilde-statements or `@addlogprob!` are used in parallel require what we call 'threadsafe evaluation'.**\r\nIn previous releases of Turing.jl, threadsafe evaluation was enabled whenever Julia was launched with more than one thread.\r\nHowever, this was an imprecise way of determining whether threadsafe evaluation was really needed.\r\nIt caused performance degradation for models that did _not_ actually need threadsafe evaluation, and generally led to ill-defined behaviour in various parts of the Turing codebase.\r\nIn Turing.jl v0.42, **threadsafe evaluation is now opt-in.**\r\nTo enable threadsafe 
evaluation, after defining a model, you now need to call `setthreadsafe(model, true)` (note that this is not a mutating function, it returns a new model):\r\n\r\n```julia\r\ny = randn(100)\r\nmodel = f(y)\r\nmodel = setthreadsafe(model, true)\r\n```\r\n\r\nYou *only* need to do this if your model uses tilde-statements or `@addlogprob!` in parallel.\r\nYou do *not* need to do this if:\r\n\r\n  - your model has other kinds of parallelism but does not include tilde-statements inside;\r\n  - or you are using `MCMCThreads()` or `MCMCDistributed()` to sample multiple chains in parallel, but your model itself does not use parallelism.\r\n\r\nIf your model does include parallelised tilde-statements or `@addlogprob!` calls, and you evaluate it\u002Fsample from it without setting `setthreadsafe(model, true)`, then you may get statistically incorrect results without any warnings or errors.\r\n\r\n### Faster performance\r\n\r\nMany operations in DynamicPPL have been substantially sped up.\r\nYou should find that anything that uses LogDensityFunction (i.e., HMC\u002FNUTS samplers, optimisation) is faster in this release.\r\nPrior sampling should also be much faster than before.\r\n\r\n### `predict` improvements\r\n\r\nIf you have a model that requires threadsafe evaluation (i.e., parallel observations), you can now use this with `predict`.\r\nCarrying on from the previous example, you can do:\r\n\r\n```julia\r\nmodel = setthreadsafe(f(y), true)\r\nchain = sample(model, NUTS(), 1000)\r\n\r\npdn_model = f(fill(missing, length(y)))\r\npdn_model = setthreadsafe(pdn_model, true)  # set threadsafe\r\npredictions = predict(pdn_model, chain) # generate new predictions in parallel\r\n```\r\n\r\n### Log-density names in chains\r\n\r\nWhen sampling from a Turing model, the resulting `MCMCChains.Chains` object now contains the log-joint, log-prior, and log-likelihood under the names `:logjoint`, `:logprior`, and `:loglikelihood` respectively.\r\nPreviously, `:logjoint` would be stored 
under the name `:lp`.\r\n\r\n### Log-evidence in chains\r\n\r\nWhen sampling using MCMCChains, the chain object will no longer have its `chain.logevidence` field set.\r\nInstead, you can calculate this yourself from the log-likelihoods stored in the chain.\r\nFor SMC samplers, the log-evidence of the entire trajectory is stored in `chain[:logevidence]` (which is the same for every particle in the 'chain').\r\n\r\n### `Turing.Inference.Transition`\r\n\r\n`Turing.Inference.Transition(model, vi[, stats])` has been removed; you can directly replace this with `DynamicPPL.ParamsWithStats(vi, model[, stats])`.\r\n\r\n## AdvancedVI 0.6\r\n\r\nTuring.jl v0.42 updates `AdvancedVI.jl` compatibility to 0.6 (we skipped the breaking 0.5 update as it does not introduce new features).\r\n`AdvancedVI.jl@0.6` introduces major structural changes, including breaking changes to the interface, as well as multiple new features.\r\nThe changes summarised below are those that affect end-users of Turing.\r\nFor a more comprehensive list of changes, please refer to the [changelogs](https:\u002F\u002Fgithub.com\u002FTuringLang\u002FAdvancedVI.jl\u002Fblob\u002Fmain\u002FHISTORY.md) in `AdvancedVI`.\r\n\r\n### Breaking changes\r\n\r\nA new level of interface for defining different variational algorithms has been introduced in `AdvancedVI` v0.5. As a result, the function `Turing.vi` now receives a keyword argument `algorithm`. The object `algorithm \u003C: AdvancedVI.AbstractVariationalAlgorithm` should now contain all the algorithm-specific configurations. 
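As a hedged sketch of the new calling convention (the algorithm type `KLMinRepGradDescent`, the `q_meanfield_gaussian` helper, and all settings below are assumptions for illustration only; consult the AdvancedVI documentation for the actual exports):\r\n\r\n```julia\r\nusing Turing\r\n\r\n@model function demo(y)\r\n    μ ~ Normal(0, 1)\r\n    y ~ Normal(μ, 1)\r\nend\r\n\r\nmodel = demo(1.5)\r\nq0 = q_meanfield_gaussian(model)\r\n# All algorithm-specific configuration is now carried by a single object:\r\nalg = KLMinRepGradDescent(AutoForwardDiff())\r\nq = vi(model, q0, 1000; algorithm=alg)\r\n```\r\n\r\n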
Therefore, keyword arguments of `vi` that were algorithm-specific, such as `objective`, `operator`, `averager`, and so on, have been moved into fields of the relevant `\u003C: AdvancedVI.AbstractVariationalAlgorithm` structs.\r\n\r\nIn addition, the outputs als","2025-12-04T11:42:36",{"id":235,"version":236,"summary_zh":237,"released_at":238},214822,"v0.41.4","## Turing v0.41.4\n\n[Diff since v0.41.3](https:\u002F\u002Fgithub.com\u002FTuringLang\u002FTuring.jl\u002Fcompare\u002Fv0.41.3...v0.41.4)\n\nFixed a bug where the `check_model=false` keyword argument would not be respected when sampling with multiple threads or cores.\n\n**Merged pull requests:**\n- make MCMCThreads etc. respect `check_model=false` (#2721) (@penelopeysm)","2025-11-24T22:05:10",{"id":240,"version":241,"summary_zh":242,"released_at":243},214823,"v0.41.3","## Turing v0.41.3\n\n[Diff since v0.41.2](https:\u002F\u002Fgithub.com\u002FTuringLang\u002FTuring.jl\u002Fcompare\u002Fv0.41.2...v0.41.3)\n\nFixed NUTS not correctly specifying the number of adaptation steps when calling `AdvancedHMC.initialize!` (this bug led to mass matrix adaptation not actually happening).\n\n**Merged pull requests:**\n- Fix incorrect nadapts being passed to AdvancedHMC (#2718) (@penelopeysm)\n\n**Closed issues:**\n- Turing and AdvancedHMC give different adaptors for NUTS (#2717)","2025-11-21T23:08:55",{"id":245,"version":246,"summary_zh":247,"released_at":248},214824,"v0.41.2","## Turing v0.41.2\n\n[Diff since v0.41.1](https:\u002F\u002Fgithub.com\u002FTuringLang\u002FTuring.jl\u002Fcompare\u002Fv0.41.1...v0.41.2)\n\nAdd `GibbsConditional`, a \"sampler\" that can be used to provide analytically known conditional posteriors in a Gibbs sampler.\r\n\r\nIn Gibbs sampling, some variables are sampled with a component sampler, while holding other variables conditioned to their current values. Usually one takes turns, sampling e.g. one variable with HMC and another with a particle sampler. 
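As a hedged sketch of such a setup (the model and sampler settings below are invented for illustration; the pair-based `Gibbs` constructor follows the interface of recent Turing releases):\r\n\r\n```julia\r\nusing Turing\r\n\r\n@model function mixed(y)\r\n    n ~ Poisson(3.0)   # discrete variable, handled by a particle sampler\r\n    μ ~ Normal(0, 1)   # continuous variable, handled by HMC\r\n    for i in eachindex(y)\r\n        y[i] ~ Normal(μ, 1 + n)\r\n    end\r\nend\r\n\r\n# Alternate between PG updates for n and HMC updates for μ:\r\nchain = sample(mixed(randn(10)), Gibbs(:n => PG(20), :μ => HMC(0.05, 10)), 500)\r\n```\r\n\r\n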
However, sometimes the posterior distribution of one variable is known analytically, given the conditioned values of other variables. `GibbsConditional` provides a way to implement these analytically known conditional posteriors and use them as component samplers for Gibbs. See the docstring of `GibbsConditional` for details.\r\n\r\nNote that `GibbsConditional` used to exist in Turing.jl until v0.36, at which point it was removed when the whole Gibbs sampler was rewritten. This reintroduces the same functionality, though with a slightly different interface.\n\n**Merged pull requests:**\n- Add dark\u002Flight mode logo support (#2714) (@shravanngoswamii)\n\n**Closed issues:**\n- Reenable CI tests on Julia v1 (#2686)\n- Error on running `using Turing` on latest version (#2711)","2025-11-21T18:32:49"]