[{"data":1,"prerenderedAt":-1},["ShallowReactive",2],{"similar-patrick-kidger--equinox":3,"tool-patrick-kidger--equinox":64},[4,17,27,35,43,56],{"id":5,"name":6,"github_repo":7,"description_zh":8,"stars":9,"difficulty_score":10,"last_commit_at":11,"category_tags":12,"status":16},3808,"stable-diffusion-webui","AUTOMATIC1111\u002Fstable-diffusion-webui","stable-diffusion-webui 是一个基于 Gradio 构建的网页版操作界面，旨在让用户能够轻松地在本地运行和使用强大的 Stable Diffusion 图像生成模型。它解决了原始模型依赖命令行、操作门槛高且功能分散的痛点，将复杂的 AI 绘图流程整合进一个直观易用的图形化平台。\n\n无论是希望快速上手的普通创作者、需要精细控制画面细节的设计师，还是想要深入探索模型潜力的开发者与研究人员，都能从中获益。其核心亮点在于极高的功能丰富度：不仅支持文生图、图生图、局部重绘（Inpainting）和外绘（Outpainting）等基础模式，还独创了注意力机制调整、提示词矩阵、负向提示词以及“高清修复”等高级功能。此外，它内置了 GFPGAN 和 CodeFormer 等人脸修复工具，支持多种神经网络放大算法，并允许用户通过插件系统无限扩展能力。即使是显存有限的设备，stable-diffusion-webui 也提供了相应的优化选项，让高质量的 AI 艺术创作变得触手可及。",162132,3,"2026-04-05T11:01:52",[13,14,15],"开发框架","图像","Agent","ready",{"id":18,"name":19,"github_repo":20,"description_zh":21,"stars":22,"difficulty_score":23,"last_commit_at":24,"category_tags":25,"status":16},1381,"everything-claude-code","affaan-m\u002Feverything-claude-code","everything-claude-code 是一套专为 AI 编程助手（如 Claude Code、Codex、Cursor 等）打造的高性能优化系统。它不仅仅是一组配置文件，而是一个经过长期实战打磨的完整框架，旨在解决 AI 代理在实际开发中面临的效率低下、记忆丢失、安全隐患及缺乏持续学习能力等核心痛点。\n\n通过引入技能模块化、直觉增强、记忆持久化机制以及内置的安全扫描功能，everything-claude-code 能显著提升 AI 在复杂任务中的表现，帮助开发者构建更稳定、更智能的生产级 AI 代理。其独特的“研究优先”开发理念和针对 Token 消耗的优化策略，使得模型响应更快、成本更低，同时有效防御潜在的攻击向量。\n\n这套工具特别适合软件开发者、AI 研究人员以及希望深度定制 AI 工作流的技术团队使用。无论您是在构建大型代码库，还是需要 AI 协助进行安全审计与自动化测试，everything-claude-code 都能提供强大的底层支持。作为一个曾荣获 Anthropic 黑客大奖的开源项目，它融合了多语言支持与丰富的实战钩子（hooks），让 AI 真正成长为懂上",138956,2,"2026-04-05T11:33:21",[13,15,26],"语言模型",{"id":28,"name":29,"github_repo":30,"description_zh":31,"stars":32,"difficulty_score":23,"last_commit_at":33,"category_tags":34,"status":16},2271,"ComfyUI","Comfy-Org\u002FComfyUI","ComfyUI 是一款功能强大且高度模块化的视觉 AI 引擎，专为设计和执行复杂的 Stable Diffusion 图像生成流程而打造。它摒弃了传统的代码编写模式，采用直观的节点式流程图界面，让用户通过连接不同的功能模块即可构建个性化的生成管线。\n\n这一设计巧妙解决了高级 AI 
绘图工作流配置复杂、灵活性不足的痛点。用户无需具备编程背景，也能自由组合模型、调整参数并实时预览效果，轻松实现从基础文生图到多步骤高清修复等各类复杂任务。ComfyUI 拥有极佳的兼容性，不仅支持 Windows、macOS 和 Linux 全平台，还广泛适配 NVIDIA、AMD、Intel 及苹果 Silicon 等多种硬件架构，并率先支持 SDXL、Flux、SD3 等前沿模型。\n\n无论是希望深入探索算法潜力的研究人员和开发者，还是追求极致创作自由度的设计师与资深 AI 绘画爱好者，ComfyUI 都能提供强大的支持。其独特的模块化架构允许社区不断扩展新功能，使其成为当前最灵活、生态最丰富的开源扩散模型工具之一，帮助用户将创意高效转化为现实。",107662,"2026-04-03T11:11:01",[13,14,15],{"id":36,"name":37,"github_repo":38,"description_zh":39,"stars":40,"difficulty_score":23,"last_commit_at":41,"category_tags":42,"status":16},3704,"NextChat","ChatGPTNextWeb\u002FNextChat","NextChat 是一款轻量且极速的 AI 助手，旨在为用户提供流畅、跨平台的大模型交互体验。它完美解决了用户在多设备间切换时难以保持对话连续性，以及面对众多 AI 模型不知如何统一管理的痛点。无论是日常办公、学习辅助还是创意激发，NextChat 都能让用户随时随地通过网页、iOS、Android、Windows、MacOS 或 Linux 端无缝接入智能服务。\n\n这款工具非常适合普通用户、学生、职场人士以及需要私有化部署的企业团队使用。对于开发者而言，它也提供了便捷的自托管方案，支持一键部署到 Vercel 或 Zeabur 等平台。\n\nNextChat 的核心亮点在于其广泛的模型兼容性，原生支持 Claude、DeepSeek、GPT-4 及 Gemini Pro 等主流大模型，让用户在一个界面即可自由切换不同 AI 能力。此外，它还率先支持 MCP（Model Context Protocol）协议，增强了上下文处理能力。针对企业用户，NextChat 提供专业版解决方案，具备品牌定制、细粒度权限控制、内部知识库整合及安全审计等功能，满足公司对数据隐私和个性化管理的高标准要求。",87618,"2026-04-05T07:20:52",[13,26],{"id":44,"name":45,"github_repo":46,"description_zh":47,"stars":48,"difficulty_score":23,"last_commit_at":49,"category_tags":50,"status":16},2268,"ML-For-Beginners","microsoft\u002FML-For-Beginners","ML-For-Beginners 是由微软推出的一套系统化机器学习入门课程，旨在帮助零基础用户轻松掌握经典机器学习知识。这套课程将学习路径规划为 12 周，包含 26 节精炼课程和 52 道配套测验，内容涵盖从基础概念到实际应用的完整流程，有效解决了初学者面对庞大知识体系时无从下手、缺乏结构化指导的痛点。\n\n无论是希望转型的开发者、需要补充算法背景的研究人员，还是对人工智能充满好奇的普通爱好者，都能从中受益。课程不仅提供了清晰的理论讲解，还强调动手实践，让用户在循序渐进中建立扎实的技能基础。其独特的亮点在于强大的多语言支持，通过自动化机制提供了包括简体中文在内的 50 多种语言版本，极大地降低了全球不同背景用户的学习门槛。此外，项目采用开源协作模式，社区活跃且内容持续更新，确保学习者能获取前沿且准确的技术资讯。如果你正寻找一条清晰、友好且专业的机器学习入门之路，ML-For-Beginners 将是理想的起点。",84991,"2026-04-05T10:45:23",[14,51,52,53,15,54,26,13,55],"数据工具","视频","插件","其他","音频",{"id":57,"name":58,"github_repo":59,"description_zh":60,"stars":61,"difficulty_score":10,"last_commit_at":62,"category_tags":63,"status":16},3128,"ragflow","infiniflow\u002Fragflow","RAGFlow 
是一款领先的开源检索增强生成（RAG）引擎，旨在为大语言模型构建更精准、可靠的上下文层。它巧妙地将前沿的 RAG 技术与智能体（Agent）能力相结合，不仅支持从各类文档中高效提取知识，还能让模型基于这些知识进行逻辑推理和任务执行。\n\n在大模型应用中，幻觉问题和知识滞后是常见痛点。RAGFlow 通过深度解析复杂文档结构（如表格、图表及混合排版），显著提升了信息检索的准确度，从而有效减少模型“胡编乱造”的现象，确保回答既有据可依又具备时效性。其内置的智能体机制更进一步，使系统不仅能回答问题，还能自主规划步骤解决复杂问题。\n\n这款工具特别适合开发者、企业技术团队以及 AI 研究人员使用。无论是希望快速搭建私有知识库问答系统，还是致力于探索大模型在垂直领域落地的创新者，都能从中受益。RAGFlow 提供了可视化的工作流编排界面和灵活的 API 接口，既降低了非算法背景用户的上手门槛，也满足了专业开发者对系统深度定制的需求。作为基于 Apache 2.0 协议开源的项目，它正成为连接通用大模型与行业专有知识之间的重要桥梁。",77062,"2026-04-04T04:44:48",[15,14,13,26,54],{"id":65,"github_repo":66,"name":67,"description_en":68,"description_zh":69,"ai_summary_zh":69,"readme_en":70,"readme_zh":71,"quickstart_zh":72,"use_case_zh":73,"hero_image_url":74,"owner_login":75,"owner_name":76,"owner_avatar_url":77,"owner_bio":78,"owner_company":79,"owner_location":80,"owner_email":81,"owner_twitter":82,"owner_website":83,"owner_url":84,"languages":85,"stars":90,"forks":91,"last_commit_at":92,"license":93,"difficulty_score":94,"env_os":95,"env_gpu":96,"env_ram":96,"env_deps":97,"category_tags":102,"github_topics":103,"view_count":23,"oss_zip_url":81,"oss_zip_packed_at":81,"status":16,"created_at":106,"updated_at":107,"faqs":108,"releases":124},3979,"patrick-kidger\u002Fequinox","equinox","Elegant easy-to-use neural networks + scientific computing in JAX. 
https:\u002F\u002Fdocs.kidger.site\u002Fequinox\u002F","Equinox 是一个专为 JAX 生态打造的轻量级库，旨在让神经网络构建与科学计算变得更加优雅便捷。它解决了原生 JAX 在定义复杂模型时语法较为繁琐、缺乏类似 PyTorch 那样直观面向对象体验的痛点，同时填补了核心库之外在模型管理和高级变换功能上的空白。\n\n这款工具非常适合希望利用 JAX 高性能特性的深度学习研究者、科学家以及开发者使用。如果你熟悉 PyTorch 但想转向 JAX，或者正在使用 Flax、Haiku 却渴望更灵活的模型定义方式，Equinox 将是理想选择。\n\n其核心技术亮点在于“无框架”设计理念：Equinox 并非强加一套新规则，而是通过简单的 `eqx.Module` 将用户定义的类注册为 JAX 原生的 PyTree 结构。这意味着你可以用熟悉的类语法编写模型，却能无缝兼容 JAX 的 JIT 编译、自动求导等所有功能。此外，它还提供了过滤式 API 以精细控制变换范围，支持运行时错误检查及丰富的 PyTree 操作例程。在 Equinox 中，一切皆透明可控，没有隐藏的魔法，让你的代码既简洁又强大。","\u003Ch1 align='center'>Equinox\u003C\u002Fh1>\n\nEquinox is your one-stop [JAX](https:\u002F\u002Fgithub.com\u002Fgoogle\u002Fjax) library, for everything you need that isn't already in core JAX:\n\n- neural networks (or more generally any model), with easy-to-use PyTorch-like syntax;\n- filtered APIs for transformations;\n- useful PyTree manipulation routines;\n- advanced features like runtime errors;\n\nand best of all, Equinox isn't a framework: everything you write in Equinox is compatible with anything else in JAX or the ecosystem.\n\nIf you're completely new to JAX, then start with this [CNN on MNIST example](https:\u002F\u002Fdocs.kidger.site\u002Fequinox\u002Fexamples\u002Fmnist\u002F).\n\n_Coming from [Flax](https:\u002F\u002Fgithub.com\u002Fgoogle\u002Fflax) or [Haiku](https:\u002F\u002Fgithub.com\u002Fdeepmind\u002Fhaiku)? The main difference is that Equinox (a) offers a lot of advanced features not found in these libraries, like PyTree manipulation or runtime errors; (b) has a simpler way of building models: they're just PyTrees, so they can pass across JIT\u002Fgrad\u002Fetc. 
boundaries smoothly._\n\n## Installation\n\nRequires Python 3.10+.\n\n```bash\npip install equinox\n```\n\nEquinox is also available through a community-supported build on [conda-forge](https:\u002F\u002Fgithub.com\u002Fconda-forge\u002Fequinox-feedstock).\n\n## Documentation\n\nAvailable at [https:\u002F\u002Fdocs.kidger.site\u002Fequinox](https:\u002F\u002Fdocs.kidger.site\u002Fequinox).\n\n## Quick example\n\nModels are defined using PyTorch-like syntax:\n\n```python\nimport equinox as eqx\nimport jax\n\nclass Linear(eqx.Module):\n    weight: jax.Array\n    bias: jax.Array\n\n    def __init__(self, in_size, out_size, key):\n        wkey, bkey = jax.random.split(key)\n        self.weight = jax.random.normal(wkey, (out_size, in_size))\n        self.bias = jax.random.normal(bkey, (out_size,))\n\n    def __call__(self, x):\n        return self.weight @ x + self.bias\n```\n\nand are fully compatible with normal JAX operations:\n\n```python\n@jax.jit\n@jax.grad\ndef loss_fn(model, x, y):\n    pred_y = jax.vmap(model)(x)\n    return jax.numpy.mean((y - pred_y) ** 2)\n\nbatch_size, in_size, out_size = 32, 2, 3\nmodel = Linear(in_size, out_size, key=jax.random.PRNGKey(0))\nx = jax.numpy.zeros((batch_size, in_size))\ny = jax.numpy.zeros((batch_size, out_size))\ngrads = loss_fn(model, x, y)\n```\n\nFinally, there's no magic behind the scenes. All `eqx.Module` does is register your class as a PyTree. 
From that point onwards, JAX already knows how to work with PyTrees.\n\n## Citation\n\nIf you found this library to be useful in academic work, then please cite: ([arXiv link](https:\u002F\u002Farxiv.org\u002Fabs\u002F2111.00254))\n\n```bibtex\n@article{kidger2021equinox,\n    author={Patrick Kidger and Cristian Garcia},\n    title={{E}quinox: neural networks in {JAX} via callable {P}y{T}rees and filtered transformations},\n    year={2021},\n    journal={Differentiable Programming workshop at Neural Information Processing Systems 2021}\n}\n```\n\n(Also consider starring the project on GitHub.)\n\n## See also: other libraries in the JAX ecosystem\n\n**Always useful**  \n[jaxtyping](https:\u002F\u002Fgithub.com\u002Fpatrick-kidger\u002Fjaxtyping): type annotations for shape\u002Fdtype of arrays.  \n\n**Deep learning**  \n[Optax](https:\u002F\u002Fgithub.com\u002Fdeepmind\u002Foptax): first-order gradient (SGD, Adam, ...) optimisers.  \n[Orbax](https:\u002F\u002Fgithub.com\u002Fgoogle\u002Forbax): checkpointing (async\u002Fmulti-host\u002Fmulti-device).  \n[Levanter](https:\u002F\u002Fgithub.com\u002Fstanford-crfm\u002Flevanter): scalable+reliable training of foundation models (e.g. LLMs).  \n[paramax](https:\u002F\u002Fgithub.com\u002Fdanielward27\u002Fparamax): parameterizations and constraints for PyTrees.\n\n**Scientific computing**  \n[Diffrax](https:\u002F\u002Fgithub.com\u002Fpatrick-kidger\u002Fdiffrax): numerical differential equation solvers.  \n[Optimistix](https:\u002F\u002Fgithub.com\u002Fpatrick-kidger\u002Foptimistix): root finding, minimisation, fixed points, and least squares.  \n[Lineax](https:\u002F\u002Fgithub.com\u002Fpatrick-kidger\u002Flineax): linear solvers.  \n[BlackJAX](https:\u002F\u002Fgithub.com\u002Fblackjax-devs\u002Fblackjax): probabilistic+Bayesian sampling.  \n[sympy2jax](https:\u002F\u002Fgithub.com\u002Fpatrick-kidger\u002Fsympy2jax): SymPy\u003C->JAX conversion; train symbolic expressions via gradient descent.  
\n[PySR](https:\u002F\u002Fgithub.com\u002FmilesCranmer\u002FPySR): symbolic regression. (Non-JAX honourable mention!)  \n\n**Awesome JAX**  \n[Awesome Equinox](https:\u002F\u002Fdocs.kidger.site\u002Fequinox\u002Fawesome-list\u002F)  \n[Awesome JAX](https:\u002F\u002Fgithub.com\u002Flockwo\u002Fawesome-jax): a longer list of other JAX projects.  \n","\u003Ch1 align='center'>Equinox\u003C\u002Fh1>\n\nEquinox 是你的 JAX 一站式库，提供了核心 JAX 中尚未包含的所有功能：\n\n- 神经网络（或更一般地，任何模型），采用易于使用的类似 PyTorch 的语法；\n- 针对变换的过滤 API；\n- 有用的 PyTree 操作工具；\n- 运行时错误等高级特性；\n\n最重要的是，Equinox 并非一个框架：你在 Equinox 中编写的所有代码都与 JAX 或其生态系统中的其他组件完全兼容。\n\n如果你是 JAX 的新手，请从这个 [MNIST 数据集上的 CNN 示例](https:\u002F\u002Fdocs.kidger.site\u002Fequinox\u002Fexamples\u002Fmnist\u002F) 开始。\n\n_你来自 Flax 或 Haiku 吗？主要区别在于，Equinox (a) 提供了许多这些库中没有的高级特性，比如 PyTree 操作或运行时错误；(b) 使用更简单的方式构建模型：它们只是 PyTrees，因此可以顺畅地跨越 JIT\u002Fgrad 等边界。_\n\n## 安装\n\n需要 Python 3.10 或更高版本。\n\n```bash\npip install equinox\n```\n\nEquinox 也可以通过 [conda-forge](https:\u002F\u002Fgithub.com\u002Fconda-forge\u002Fequinox-feedstock) 上社区支持的构建进行安装。\n\n## 文档\n\n可在 [https:\u002F\u002Fdocs.kidger.site\u002Fequinox](https:\u002F\u002Fdocs.kidger.site\u002Fequinox) 查阅。\n\n## 快速示例\n\n模型使用类似 PyTorch 的语法定义：\n\n```python\nimport equinox as eqx\nimport jax\n\nclass Linear(eqx.Module):\n    weight: jax.Array\n    bias: jax.Array\n\n    def __init__(self, in_size, out_size, key):\n        wkey, bkey = jax.random.split(key)\n        self.weight = jax.random.normal(wkey, (out_size, in_size))\n        self.bias = jax.random.normal(bkey, (out_size,))\n\n    def __call__(self, x):\n        return self.weight @ x + self.bias\n```\n\n并且与常规 JAX 操作完全兼容：\n\n```python\n@jax.jit\n@jax.grad\ndef loss_fn(model, x, y):\n    pred_y = jax.vmap(model)(x)\n    return jax.numpy.mean((y - pred_y) ** 2)\n\nbatch_size, in_size, out_size = 32, 2, 3\nmodel = Linear(in_size, out_size, key=jax.random.PRNGKey(0))\nx = jax.numpy.zeros((batch_size, in_size))\ny = jax.numpy.zeros((batch_size, out_size))\ngrads 
= loss_fn(model, x, y)\n```\n\n最后，背后没有任何魔法。`eqx.Module` 做的只是将你的类注册为一个 PyTree。从那以后，JAX 就已经知道如何处理 PyTrees 了。\n\n## 引用\n\n如果你在学术工作中发现这个库很有用，请引用以下内容：([arXiv 链接](https:\u002F\u002Farxiv.org\u002Fabs\u002F2111.00254))\n\n```bibtex\n@article{kidger2021equinox,\n    author={Patrick Kidger and Cristian Garcia},\n    title={{E}quinox: neural networks in {JAX} via callable {P}y{T}rees and filtered transformations},\n    year={2021},\n    journal={Differentiable Programming workshop at Neural Information Processing Systems 2021}\n}\n```\n\n（同时也可以考虑在 GitHub 上给该项目加星。）\n\n## 另请参阅：JAX 生态系统中的其他库\n\n**始终有用**  \n[jaxtyping](https:\u002F\u002Fgithub.com\u002Fpatrick-kidger\u002Fjaxtyping)：用于数组形状和数据类型的类型注解。\n\n**深度学习**  \n[Optax](https:\u002F\u002Fgithub.com\u002Fdeepmind\u002Foptax)：一阶梯度优化器（SGD、Adam 等）。  \n[Orbax](https:\u002F\u002Fgithub.com\u002Fgoogle\u002Forbax)：检查点保存（异步\u002F多主机\u002F多设备）。  \n[Levanter](https:\u002F\u002Fgithub.com\u002Fstanford-crfm\u002Flevanter)：基础模型（如 LLM）的可扩展且可靠的训练。  \n[paramax](https:\u002F\u002Fgithub.com\u002Fdanielward27\u002Fparamax)：PyTrees 的参数化和约束。\n\n**科学计算**  \n[Diffrax](https:\u002F\u002Fgithub.com\u002Fpatrick-kidger\u002Fdiffrax)：数值微分方程求解器。  \n[Optimistix](https:\u002F\u002Fgithub.com\u002Fpatrick-kidger\u002Foptimistix)：根查找、最小化、不动点和最小二乘法。  \n[Lineax](https:\u002F\u002Fgithub.com\u002Fpatrick-kidger\u002Flineax)：线性方程组求解器。  \n[BlackJAX](https:\u002F\u002Fgithub.com\u002Fblackjax-devs\u002Fblackjax)：概率和贝叶斯采样。  \n[sympy2jax](https:\u002F\u002Fgithub.com\u002Fpatrick-kidger\u002Fsympy2jax)：SymPy 到 JAX 的转换；可通过梯度下降训练符号表达式。  \n[PySR](https:\u002F\u002Fgithub.com\u002FmilesCranmer\u002FPySR)：符号回归。（非 JAX 的荣誉提及！）\n\n**精彩的 JAX 资源**  \n[Awesome Equinox](https:\u002F\u002Fdocs.kidger.site\u002Fequinox\u002Fawesome-list\u002F)  \n[Awesome JAX](https:\u002F\u002Fgithub.com\u002Flockwo\u002Fawesome-jax)：一份更长的其他 JAX 项目列表。","# Equinox 快速上手指南\n\nEquinox 是一个基于 JAX 的一站式深度学习库，提供类似 PyTorch 的模型定义语法，同时完全兼容 JAX 生态。它不是独立的框架，而是对核心 JAX 功能的增强，支持神经网络构建、过滤变换 API、PyTree 
操作及运行时错误检查等高级特性。\n\n## 环境准备\n\n- **Python 版本**：3.10 或更高\n- **前置依赖**：需先安装 [JAX](https:\u002F\u002Fgithub.com\u002Fgoogle\u002Fjax)（Equinox 安装时通常会自动处理，但建议确保 JAX 可用）\n- **系统要求**：支持 Linux、macOS 和 Windows（需配置适当后端）\n\n> 💡 国内用户若遇到 JAX 安装缓慢，可尝试使用清华或阿里镜像源加速 pip 安装。\n\n## 安装步骤\n\n通过 pip 安装（推荐）：\n\n```bash\npip install equinox\n```\n\n如需使用 conda（社区维护版本）：\n\n```bash\nconda install -c conda-forge equinox\n```\n\n> 🚀 国内加速建议：  \n> ```bash\n> pip install equinox -i https:\u002F\u002Fpypi.tuna.tsinghua.edu.cn\u002Fsimple\n> ```\n\n## 基本使用\n\nEquinox 允许你用类定义模型，语法贴近 PyTorch，且天然支持 JAX 的 `jit`、`grad`、`vmap` 等变换。\n\n### 1. 定义一个简单的线性模型\n\n```python\nimport equinox as eqx\nimport jax\n\nclass Linear(eqx.Module):\n    weight: jax.Array\n    bias: jax.Array\n\n    def __init__(self, in_size, out_size, key):\n        wkey, bkey = jax.random.split(key)\n        self.weight = jax.random.normal(wkey, (out_size, in_size))\n        self.bias = jax.random.normal(bkey, (out_size,))\n\n    def __call__(self, x):\n        return self.weight @ x + self.bias\n```\n\n### 2. 
结合 JAX 进行训练（计算损失与梯度）\n\n```python\n@jax.jit\n@jax.grad\ndef loss_fn(model, x, y):\n    pred_y = jax.vmap(model)(x)\n    return jax.numpy.mean((y - pred_y) ** 2)\n\n# 初始化数据与模型\nbatch_size, in_size, out_size = 32, 2, 3\nmodel = Linear(in_size, out_size, key=jax.random.PRNGKey(0))\nx = jax.numpy.zeros((batch_size, in_size))\ny = jax.numpy.zeros((batch_size, out_size))\n\n# 计算梯度\ngrads = loss_fn(model, x, y)\n```\n\n> ✅ 关键点：`eqx.Module` 本质是将你的类注册为 PyTree，因此无需特殊处理即可无缝融入 JAX 工作流。\n\n更多示例请参考官方文档：[https:\u002F\u002Fdocs.kidger.site\u002Fequinox](https:\u002F\u002Fdocs.kidger.site\u002Fequinox)","某科研团队正在利用 JAX 开发一套用于预测流体动力学的自定义神经网络模型，需要在保持高性能计算的同时快速迭代网络结构。\n\n### 没有 equinox 时\n- **模型定义繁琐**：开发者必须手动将模型参数注册为 PyTree，或使用 Haiku\u002FFlax 等框架特有的复杂宏和状态管理机制，代码可读性差。\n- **调试困难**：在 JIT 编译或自动求导过程中出现形状不匹配错误时，往往只能看到模糊的底层报错，难以定位具体是哪个层的参数出了问题。\n- **生态割裂**：若想结合 JAX 生态中其他科学计算库（如微分方程求解器 Diffrax），常因模型格式不兼容而需要编写大量胶水代码进行转换。\n- **变换控制粗糙**：在使用 `jax.grad` 或 `jax.jit` 时，难以精细控制哪些参数参与梯度更新或编译，缺乏灵活的过滤 API。\n\n### 使用 equinox 后\n- **类 PyTorch 式建模**：直接继承 `eqx.Module` 即可定义模型，参数自动成为 PyTree，无需样板代码，语法简洁且符合直觉。\n- **运行时错误捕获**：equinox 提供了增强的运行时检查，能在 JIT 边界清晰报出具体的数组形状或类型错误，大幅缩短调试时间。\n- **无缝生态集成**：定义的模型本质就是标准 PyTree，可直接传入 Diffrax 求解器或 Optax 优化器，无需任何格式转换，实现“一次定义，到处运行”。\n- **精细化变换控制**：利用内置的过滤 API（filtered APIs），可以轻松指定仅对权重部分求梯度或对偏置部分进行 JIT 编译，灵活掌控计算流程。\n\nequinox 通过消除框架壁垒并提供优雅的建模语法，让研究人员能专注于算法创新而非底层适配，真正释放了 JAX 在科学计算领域的潜力。","https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fpatrick-kidger_equinox_467333ec.png","patrick-kidger","Patrick Kidger","https:\u002F\u002Foss.gittoolsai.com\u002Favatars\u002Fpatrick-kidger_a5d56fa4.png","ML+proteins, sciML, numerics, neural ODEs ╱ building 'scipy w\u002F autodiff+GPU' in JAX: Equinox, Diffrax, Lineax, etc ╱ solo traveller, martial artist, scuba 
diver","Cradle.bio","Zürich",null,"PatrickKidger","https:\u002F\u002Fkidger.site","https:\u002F\u002Fgithub.com\u002Fpatrick-kidger",[86],{"name":87,"color":88,"percentage":89},"Python","#3572A5",100,2842,183,"2026-04-05T10:33:01","Apache-2.0",1,"","未说明",{"notes":98,"python":99,"dependencies":100},"Equinox 是构建在 JAX 之上的库，本身不强制要求特定操作系统或 GPU 配置，具体硬件需求取决于底层 JAX 的安装环境及用户运行的模型规模。可通过 pip 或 conda-forge 安装。","3.10+",[101],"jax",[13],[104,101,105,67],"deep-learning","neural-networks","2026-03-27T02:49:30.150509","2026-04-06T06:46:04.764015",[109,114,119],{"id":110,"question_zh":111,"answer_zh":112,"source_url":113},18165,"如何在 Equinox 中实现权重共享（Weight Tying\u002FSharing）？","在 Equinox 中实现权重共享的最佳实践是使用 `eqx.tree_at`。具体步骤如下：\n1. **初始化时**：创建包含重复权重的模型结构，然后使用 `eqx.tree_at` 将其中一个权重路径设置为 `None` 以去除内存中的冗余副本。\n2. **调用时（__call__）**：在模型前向传播过程中，再次使用 `eqx.tree_at` 将之前设为 `None` 的位置替换为实际的共享权重引用。\n\n示例代码逻辑：\n```python\n# 初始化阶段\ndef init_model():\n    linear = init_linear() # 包含 weight, bias\n    embedding = init_embedding() # 包含 weight\n    # 移除 embedding 中的权重引用，避免内存重复\n    embedding = eqx.tree_at(lambda e: e[0], embedding, None)\n    return (linear, embedding)\n\n# 调用阶段\ndef call(model, ...):\n    linear, embedding = model\n    (weight, _) = linear\n    # 将共享权重重新注入到 embedding 结构中\n    embedding = eqx.tree_at(lambda e: e[0], embedding, weight)\n    # 继续前向传播...\n```\n这种方法既保证了内存效率（只存储一份权重），又符合 JAX 的函数式编程范式。","https:\u002F\u002Fgithub.com\u002Fpatrick-kidger\u002Fequinox\u002Fissues\u002F469",{"id":115,"question_zh":116,"answer_zh":117,"source_url":118},18166,"在使用 float16 或 bfloat16 进行 RMSNorm 计算时遇到数值不稳定怎么办？","当使用低精度（如 float16\u002Fbfloat16）初始化 RMSNorm 时，直接计算均方根会导致数值不稳定。正确的做法是**在计算范数时临时提升到 float32 精度**，计算完成后再转回目标精度。\n\n错误写法：\n```python\ninv_rms = jax.lax.rsqrt(jnp.mean(x**2) + self.eps)\n```\n\n正确写法：\n```python\n# 先将输入转换为 float32 进行计算\nupscaled_x = x.astype(jnp.float32)\ninv_rms = jax.lax.rsqrt(jnp.mean(upscaled_x**2) + self.eps)\n# 
后续输出可根据需要转换回原精度\n```\n这能确保中间计算的精度足够，避免因低精度截断导致的模型训练发散或输出错误。","https:\u002F\u002Fgithub.com\u002Fpatrick-kidger\u002Fequinox\u002Fissues\u002F673",{"id":120,"question_zh":121,"answer_zh":122,"source_url":123},18167,"在使用 `eqx.filter_shard` 进行自动并行化时，是否需要在 jit 函数内部和外部都使用？","通常情况下，**不需要**在 `jit` 编译的函数内部和外部同时使用 `eqx.filter_shard`。\n\n如果在 `train_step` 等被 `jit` 装饰的函数内部移除分片相关调用后代码仍能正常运行，说明外部的分片设置已经足够。文档中的示例有时为了展示完整的输入\u002F输出分片控制（in\u002Fout sharding）而显得冗余，但在实际应用中，只要保证进入 jit 函数的数据布局正确即可。JAX 的文档在某些版本中可能存在过度复杂的示例，实际操作中以代码能否运行为准。","https:\u002F\u002Fgithub.com\u002Fpatrick-kidger\u002Fequinox\u002Fissues\u002F1076",[125,130,135,140,145,150,155,160,165,170,175,180,185,190,195,200,205,210,215,220],{"id":126,"version":127,"summary_zh":128,"released_at":129},108663,"v0.13.6","错误修复：使用具体实例字段覆盖 `AbstractClassVar` 不再会在尝试实例化该模块时导致程序崩溃。（#1198）\n\n**完整变更日志**：https:\u002F\u002Fgithub.com\u002Fpatrick-kidger\u002Fequinox\u002Fcompare\u002Fv0.13.5...v0.13.6","2026-03-09T19:03:41",{"id":131,"version":132,"summary_zh":133,"released_at":134},108664,"v0.13.5","**错误修复：**\n\n- 使用类属性覆盖 `eqx.Module` 字段时，在展平或 JIT 编译过程中不再崩溃。（#1191、#1192）\n\n**其他**\n\n- 添加了 `eqx.nn.MLP(..., scan=True)`，它将使用逐层扫描来提升编译速度。（感谢 @nstarman！#1179、#1182）\n- 现在可以传递编译器选项 `eqx.filter_jit(...).lower().compile(...)`（感谢 @knyazer！#1178）\n- 文档更新（感谢 @RunnersNum40！#1175）\n- 类型提示改进（感谢 @mishmish66！#1190）\n\n## 新贡献者\n* @mishmish66 在 https:\u002F\u002Fgithub.com\u002Fpatrick-kidger\u002Fequinox\u002Fpull\u002F1190 中做出了首次贡献。\n* @Ceyron 在 https:\u002F\u002Fgithub.com\u002Fpatrick-kidger\u002Fequinox\u002Fpull\u002F1189 中做出了首次贡献。\n\n**完整变更日志**：https:\u002F\u002Fgithub.com\u002Fpatrick-kidger\u002Fequinox\u002Fcompare\u002Fv0.13.4...v0.13.5","2026-03-04T12:20:05",{"id":136,"version":137,"summary_zh":138,"released_at":139},108665,"v0.13.4","- 修复了因扫描包含默认字段的模块的 vmap 
而导致的崩溃问题。（#1172、#1174）\n\n**完整更新日志**：https:\u002F\u002Fgithub.com\u002Fpatrick-kidger\u002Fequinox\u002Fcompare\u002Fv0.13.3...v0.13.4","2026-01-26T22:29:21",{"id":141,"version":142,"summary_zh":143,"released_at":144},108666,"v0.13.3","- 性能：各种模块操作（setattr、初始化、flatten、unflatten）现在应该更快了。特别是，我们现在会通过代码生成来实现 flatten\u002Funflatten 函数。（感谢 @nstarman！#1119、#1124、#1138、#1140）\n- 功能：添加了 `EQX_ON_ERROR=off` 作为有效选项。（感谢 @michael-0brien！#1134）\n- 兼容性：\n    - JAX >=0.8.2 将 `jax.core.get_aval` 重命名为 `jax.typeof` (#1157)。\n    - JAX >=0.8.2 废弃了 `jax.core.{subjaxprs, mapped_aval, unmapped_aval}` (#1157)。\n    - JAX >=0.7.2 有时会将 `jax.pure_callback` 中的错误转换为 `ValueError`；我们现在会适当地捕获这些错误 (#1156、#1157)。\n- 杂项：增加了对 `eqx.Enumeration` 的美观打印支持。\n- 杂项：测试现在可以在 GPU 上运行（感谢 @Artur-Galstyan！#1146）\n\n**完整变更日志**：https:\u002F\u002Fgithub.com\u002Fpatrick-kidger\u002Fequinox\u002Fcompare\u002Fv0.13.2...v0.13.3","2026-01-25T00:45:37",{"id":146,"version":147,"summary_zh":148,"released_at":149},108667,"v0.13.2","兼容性版本发布：\n\n- 更多针对 JAX 0.7.2 的兼容性修复（感谢 @nstarman！#1112）\n- 更多针对 Python 3.14 的兼容性修复 (#1111)\n\n## 新贡献者\n* @EtaoinWu 在 https:\u002F\u002Fgithub.com\u002Fpatrick-kidger\u002Fequinox\u002Fpull\u002F1108 中完成了首次贡献\n\n**完整变更日志**：https:\u002F\u002Fgithub.com\u002Fpatrick-kidger\u002Fequinox\u002Fcompare\u002Fv0.13.1...v0.13.2","2025-10-09T10:15:42",{"id":151,"version":152,"summary_zh":153,"released_at":154},108668,"v0.13.1","本次发布的主要亮点是与 JAX 0.7.2 和 Python 3.14 的兼容性。\n\n### 兼容性\n\n- 与 JAX 0.7.2 兼容。\n  - 请注意，JAX 0.7.{0,1} 存在已知的 bug，不建议使用（https:\u002F\u002Fgithub.com\u002Fjax-ml\u002Fjax\u002Fissues\u002F30517、https:\u002F\u002Fgithub.com\u002Fjax-ml\u002Fjax\u002Fissues\u002F31284）。\n  - JAX 现在要求绑定参数必须是可哈希的（感谢 @johannahaffner！#1054、#1081、#1082）。实际上，这会导致在追踪时发生崩溃。\n  - JAX 已弃用 `jax.interpreters.batching.{not_mapped, NotMapped}`。（感谢 @RunnersNum40！#1086）\n  - JAX 现在引入了一种新的“数组类似类型”，即 `jax._src.literals.{LiteralArray, TypedNdArray}`（前者是 JAX 0.7.2 中的名称，后者是 JAX 夜间版本中的名称）。这意味着在所有之前接受 `np.ndarray` 或 `jax.Array` 
的地方，现在都需要进行相应的处理。（感谢 @johannahaffner！#1099）\n\n- 与 Python 3.14 兼容。（#1100）\n  - 具体来说，`cls.__annotations__['__dict__']` 现已被弃用，取而代之的是 `inspect.get_annotations(cls)`（https:\u002F\u002Fgithub.com\u002Fpython\u002Fcpython\u002Fissues\u002F139140）。\n  - 这种变化导致在使用 Lineax 时出现崩溃（https:\u002F\u002Fgithub.com\u002Fpatrick-kidger\u002Foptimistix\u002Fissues\u002F168）。\n\n### 功能改进\n\n- 将 `eqx.nn.State` 中的键从短随机字符串改为可读的路径，以明确状态在 pytree 中的位置。（感谢 @fhchl！#1050、#1052）\n\n### Bug 修复\n\n- `eqx.field` 现在与 pyright 的严格模式兼容。（感谢 @HGangloff！#1053）\n- 静态类型检查器曾将访问 `some_module.foo` 视为返回 `Any` 类型。现已修复。（感谢 @nisheethlahoti！#1072）\n- 修复了 `eqx.nn.ConvTranspose` 未能正确验证其 `stride` 和 `output_padding` 是否一致的问题。（#1045）\n- 修复了 `eqx.nn.ConvTranspose3d` 忽视其 `dtype` 参数的问题。（感谢 @ZagButNoZig！#1044）\n- 修复了当 `eqx.Enumeration.__repr__` 是 vmap 函数的输出时发生的崩溃问题。（感谢 @LennartGevers！#1102）\n\n### 文档更新\n\n- 自动并行化教程已更新至最新的 JAX 版本。（感谢 @mjo22！#1067、#1078、#1103）\n- `eqx.tree_at` 的文档得到了改进。（感谢 @jeertmans！#1065）\n\n## 新贡献者\n* @fhchl 在 https:\u002F\u002Fgithub.com\u002Fpatrick-kidger\u002Fequinox\u002Fpull\u002F1052 中做出了首次贡献。\n* @nisheethlahoti 在 https:\u002F\u002Fgithub.com\u002Fpatrick-kidger\u002Fequinox\u002Fpull\u002F1072 中做出了首次贡献。\n* @johannahaffner 在 https:\u002F\u002Fgithub.com\u002Fpatrick-kidger\u002Fequinox\u002Fpull\u002F1082 中做出了首次贡献。\n* @RunnersNum40 在 https:\u002F\u002Fgithub.com\u002Fpatrick-kidger\u002Fequinox\u002Fpull\u002F1086 中做出了首次贡献。\n* @gerlero 在 https:\u002F\u002Fgithub.com\u002Fpatrick-kidger\u002Fequinox\u002Fpull\u002F1101 中做出了首次贡献。\n* @LennartGevers 在 https:\u002F\u002Fgithub.com\u002Fpatrick-kidger\u002Fequinox\u002Fpull\u002F1102 中做出了首次贡献。\n\n**完整变更日志**：https:\u002F\u002Fgithub.com\u002Fpatrick-kidger\u002Fequinox\u002Fcompare\u002Fv0.13.0...v0.13.1","2025-09-20T15:21:42",{"id":156,"version":157,"summary_zh":158,"released_at":159},108669,"v0.13.0","- 简化并整理了 `equinox.Module` 的定义。原则上这应该是向后兼容的，但为了应对可能存在的边缘情况，我们还是进行了一个小版本号的升级。（#1028）\n- `equinox.nn.BatchNorm` 现在同时支持 `ema` 和 `batch` 
两种模式。出于向后兼容性的考虑，默认使用前者，并会发出关于显式选择模式的警告。不过，后者似乎能带来更好的性能。（感谢 @lockwo！#659、#948）\n- 实现了与即将发布的 JAX v0.7.0 的向前兼容性，该版本现在要求 jaxpr 参数必须是可哈希的。（感谢 @hawkinsp！#1039）\n- 改进了向 `equinox.{filter,partition}` 传递 tracer 时的错误信息提示。（感谢 @HGangloff！#1038）\n- 当将数组赋值给带有 `field(init=False)` 的数据类字段时，现在会打印警告，因为这可能导致意外的行为。（#1038）\n- 文档更新（感谢 @teddykoker、@vyeevani！#1029、#1033）\n\n## 新贡献者\n* @teddykoker 在 https:\u002F\u002Fgithub.com\u002Fpatrick-kidger\u002Fequinox\u002Fpull\u002F1029 中完成了首次贡献。\n* @vyeevani 在 https:\u002F\u002Fgithub.com\u002Fpatrick-kidger\u002Fequinox\u002Fpull\u002F1033 中完成了首次贡献。\n\n**完整变更日志**：https:\u002F\u002Fgithub.com\u002Fpatrick-kidger\u002Fequinox\u002Fcompare\u002Fv0.12.2...v0.13.0","2025-07-07T20:44:26",{"id":161,"version":162,"summary_zh":163,"released_at":164},108670,"v0.12.2","- 与即将发布的 JAX 版本向前兼容，因为该版本移除了 `jaxlib.xla_extension.Device`。（https:\u002F\u002Fgithub.com\u002Fpatrick-kidger\u002Fequinox\u002Fpull\u002F1023）\n- 提升了模块展平和反展平的速度。（感谢 @nstarman！#994）\n- 提升了跨 `filter_jit` 边界的执行速度。（感谢 @ZagButNoZig！https:\u002F\u002Fgithub.com\u002Fpatrick-kidger\u002Fequinox\u002Fpull\u002F989）\n- 现在兼容 `doctest`。（感谢 @jeertmans！https:\u002F\u002Fgithub.com\u002Fpatrick-kidger\u002Fequinox\u002Fpull\u002F1018）\n- 文档改进（感谢 @adonath、@emmanuel-ferdman、@nboyd！https:\u002F\u002Fgithub.com\u002Fpatrick-kidger\u002Fequinox\u002Fpull\u002F1009）\n\n## 新贡献者\n* @adonath 在 https:\u002F\u002Fgithub.com\u002Fpatrick-kidger\u002Fequinox\u002Fpull\u002F1009 中完成了首次贡献。\n* @emmanuel-ferdman 在 https:\u002F\u002Fgithub.com\u002Fpatrick-kidger\u002Fequinox\u002Fpull\u002F1017 中完成了首次贡献。\n* @jeertmans 在 https:\u002F\u002Fgithub.com\u002Fpatrick-kidger\u002Fequinox\u002Fpull\u002F1018 中完成了首次贡献。\n* @nboyd 在 https:\u002F\u002Fgithub.com\u002Fpatrick-kidger\u002Fequinox\u002Fpull\u002F1020 
中完成了首次贡献。\n\n**完整变更日志**：https:\u002F\u002Fgithub.com\u002Fpatrick-kidger\u002Fequinox\u002Fcompare\u002Fv0.12.1...v0.12.2","2025-05-14T09:31:02",{"id":166,"version":167,"summary_zh":168,"released_at":169},108671,"v0.12.1","用于绕过 JAX 漏洞的热修复：https:\u002F\u002Fgithub.com\u002Fjax-ml\u002Fjax\u002Fissues\u002F27545 (#988)\n\nEquinox v0.12.0 发行说明请见：https:\u002F\u002Fgithub.com\u002Fpatrick-kidger\u002Fequinox\u002Freleases\u002Ftag\u002Fv0.12.0\n\n**完整变更日志**：https:\u002F\u002Fgithub.com\u002Fpatrick-kidger\u002Fequinox\u002Fcompare\u002Fv0.12.0...v0.12.1","2025-03-27T22:12:35",{"id":171,"version":172,"summary_zh":173,"released_at":174},108672,"v0.12.0","- **重大变更**: `eqx.field(converter=...)` 现在会在 `__post_init__` *之后* 执行，而不是之前。这大大简化了部分内部实现，并提升了与其他库的兼容性。（#969、#975）\n- 修复了在 JAX 0.5.3 中使用 `eqx.filter_closure_convert` 时出现的警告。（#979、#981）\n- 降低了 `eqx.filter_jit` 的开销。（感谢 @ZagButNoZig！#973、#980、#983）\n- 全新文档上线！\n\n## 新贡献者\n* @ZagButNoZig 在 https:\u002F\u002Fgithub.com\u002Fpatrick-kidger\u002Fequinox\u002Fpull\u002F980 中完成了首次贡献。\n\n**完整更新日志**：https:\u002F\u002Fgithub.com\u002Fpatrick-kidger\u002Fequinox\u002Fcompare\u002Fv0.11.12...v0.12.0","2025-03-27T15:03:58",{"id":176,"version":177,"summary_zh":178,"released_at":179},108673,"v0.11.12","This is primarily a compatibility release.\r\n\r\n- Fixes for compatibility with JAX 0.5.1 (#959, #960).\r\n- Fixes for compatibility with pyright 1.1.394 (#956, #960).\r\n- `eqx.nn.Linear(0, ...)` no longer crashes runs (propagating zero-size tensors). (Thanks @aseyboldt! #950)\r\n- Pretty-printing (`eqx.tree_pformat` and `eqx.tree_pprint`) now uses the new [Wadler-Lindig](https:\u002F\u002Fgithub.com\u002Fpatrick-kidger\u002Fwadler_lindig) pretty-printing library. (#924)\r\n- Many doc improvements (Thanks @matthewfeickert @TugdualKerjan @struan-robertson @danielward27! 
#930, #937, #941, #950)\r\n\r\n## New Contributors\r\n* @matthewfeickert made their first contribution in https:\u002F\u002Fgithub.com\u002Fpatrick-kidger\u002Fequinox\u002Fpull\u002F930\r\n* @struan-robertson made their first contribution in https:\u002F\u002Fgithub.com\u002Fpatrick-kidger\u002Fequinox\u002Fpull\u002F941\r\n* @aseyboldt made their first contribution in https:\u002F\u002Fgithub.com\u002Fpatrick-kidger\u002Fequinox\u002Fpull\u002F950\r\n* @TugdualKerjan made their first contribution in https:\u002F\u002Fgithub.com\u002Fpatrick-kidger\u002Fequinox\u002Fpull\u002F947\r\n\r\n**Full Changelog**: https:\u002F\u002Fgithub.com\u002Fpatrick-kidger\u002Fequinox\u002Fcompare\u002Fv0.11.11...v0.11.12","2025-02-25T18:57:56",{"id":181,"version":182,"summary_zh":183,"released_at":184},108674,"v0.11.11","JAX 0.4.38 moved a number of APIs with a deprecation warning, e.g. `jax.core.Jaxpr -> jax.extend.core.Jaxpr`. With this release we've updated and are back to being warning-free under this JAX release! (Thanks @FFroehlich @DrJessop, #913, #915, #917)\r\n\r\n## New Contributors\r\n* @DrJessop made their first contribution in https:\u002F\u002Fgithub.com\u002Fpatrick-kidger\u002Fequinox\u002Fpull\u002F915\r\n\r\n**Full Changelog**: https:\u002F\u002Fgithub.com\u002Fpatrick-kidger\u002Fequinox\u002Fcompare\u002Fv0.11.10...v0.11.11","2024-12-24T12:08:00",{"id":186,"version":187,"summary_zh":188,"released_at":189},108675,"v0.11.10","This is a JAX 0.4.36 compatibility release.\r\n\r\nWith this release, JAX changed how custom primitive rules are called (they are always called, instead of only when the data requires them to be). That requires some updates in Equinox to avoid crashes in the downstream ecosystem. 
(https:\u002F\u002Fgithub.com\u002Fpatrick-kidger\u002Fdiffrax\u002Fissues\u002F532, https:\u002F\u002Fgithub.com\u002Fjax-ml\u002Fjax\u002Fissues\u002F25289 + links therein.)\r\n\r\n\r\n**Full Changelog**: https:\u002F\u002Fgithub.com\u002Fpatrick-kidger\u002Fequinox\u002Fcompare\u002Fv0.11.9...v0.11.10","2024-12-08T02:44:42",{"id":191,"version":192,"summary_zh":193,"released_at":194},108676,"v0.11.9","This is a (important) bugfix release.\r\n\r\n* Fix filter_vmap with out_axes!=0,1 producing outputs with the wrong axis order. (Thanks @remifan! #900, #901)\r\n\r\n**Full Changelog**: https:\u002F\u002Fgithub.com\u002Fpatrick-kidger\u002Fequinox\u002Fcompare\u002Fv0.11.8...v0.11.9","2024-11-24T15:01:37",{"id":196,"version":197,"summary_zh":198,"released_at":199},108677,"v0.11.8","The main thing for this release is JAX 0.4.34 compatibility -- JAX introduced breaking changes in this release that we are now compatible with. (#871)\r\n\r\n## Bugfixes\r\n\r\n* Accessing the concrete implementation of an abstract class attribute within `__init_subclass__` should no longer crash. (Plus probably better-behaved `__init_subclass__` overall.)\r\n\r\n## Miscellaneous\r\n\r\n* JAX 0.4.33 introduced a change that broke `eqx.error_if`'s nice displaying of error message. With this release then we are back to having nice error messages again!\r\n* `eqx.nn.StateIndex` can now be passed through `jax.jit` (and not just `eqx.filter_jit`). (Thanks @NeilGirdhar! #843)\r\n* Normalization layers now upcast to at least 32-bit precision. (Thanks @AakashKumarNain! #876)\r\n* Poetry has a bug in its interpretation of `~=` version constraints. We now work around that for better compatibility with certain kinds of Poetry installations. (Thanks @norpadon! #878)\r\n\r\n## Documentation\r\n\r\n* Updated CNN example to work with recent JAX versions. (Thanks @pasq-cat! #880, #881)\r\n* Update `eqx.tree_at` documentation for clarity. (Thanks @jeertmans! 
#872, #874, #877)

## New Contributors
* @norpadon made their first contribution in https://github.com/patrick-kidger/equinox/pull/878

**Full Changelog**: https://github.com/patrick-kidger/equinox/compare/v0.11.7...v0.11.8

*Released: 2024-10-18*

# v0.11.7

Quick release. JAX 0.4.32 / 0.4.33 just introduced a breaking change; this release ensures Equinox is compatible with it. (#856)

**Full Changelog**: https://github.com/patrick-kidger/equinox/compare/v0.11.6...v0.11.7

*Released: 2024-09-18*

# v0.11.6

This is primarily a bug fix release.

- Runtime error messages (those from `eqx.error_if`, in particular when wrapped with `eqx.filter_jit`) should now be compatible with PyCharm's debugger, and with certain multithreaded contexts. (Thanks @adam-hartshorne, @dlwh! #828, #844, #849)

- Marking a `jax.Array` or `np.ndarray` as an `eqx.field(static=True)` will now raise a warning. This was *technically* okay as long as you used it in certain very narrow contexts (e.g. to smuggle an array into a JIT'd region without it being traced), but in practice it was nearly always just a common new-user footgun. (Thanks @lockwo! #800)

- Using `eqx.tree_at` to replace empty tuples is improved. (Thanks @danielward27! #818, #819)

- `eqx.nn.RotaryEmbedding` no longer promotes input dtypes to at least float32. (Thanks @knyazer! #836)

- Mypy now understands that `eqx.Module`s are dataclasses. (Pyright always did, but mypy needed a slightly different approach to appreciate this fact.) (Thanks @NeilGirdhar!
#822)

- Multiple `eqx.Module`s participating in co-operative multiple inheritance (at least five classes inheriting from each other seem to be necessary to trigger this), with some of them overriding the `__post_init__`s of others, should now follow their expected resolution order. (Thanks @NeilGirdhar! #832, #834)

- We now have a `.editorconfig` file. (Thanks @NeilGirdhar! #821)

- Doc improvements. (Thanks @garymm, @ColCarroll! #804, #805)

## New Contributors
* @garymm made their first contribution in https://github.com/patrick-kidger/equinox/pull/804
* @ColCarroll made their first contribution in https://github.com/patrick-kidger/equinox/pull/805
* @NeilGirdhar made their first contribution in https://github.com/patrick-kidger/equinox/pull/823

**Full Changelog**: https://github.com/patrick-kidger/equinox/compare/v0.11.5...v0.11.6

*Released: 2024-09-14*

# v0.11.5

## JAX compatibility

Recent versions of JAX (0.4.28+) have made some changes to:

- Hashing of tracers;
- Tree-map'ing over Nones;
- Callbacks;
- Pretty-printing.

With this update, we should now be compatible with both old and new versions of JAX: this fixes both some new crashes and some new warnings. (#719, #724, #753, #758, thanks @jakevdp, @hawkinsp!)

## Better errors

- The error messages from `eqx.error_if` are now substantially more informative: they include traceback information (including the stack), and mention the availability of the `EQX_ON_ERROR` variable. We also do a much better job of hiding the large, unhelpful printouts that XLA gives by default. (#785, #803)

- The default value of `EQX_ON_ERROR_BREAKPOINT_FRAMES` is now `1`.
(#777) The impact of this is that using `eqx.error_if` alongside `EQX_ON_ERROR=breakpoint` will now:
    - reliably always open a debugger, rather than sometimes crashing at trace time due to upstream JAX bug [#16732](https://github.com/google/jax/issues/16732);
    - however, by default the debugger will no longer include any additional stack frames above it (accessed via `u`);
    - much of the above is now explained in an informative message printed out before the debugger opens.

## Bugfixes

- `eqx.filter_{jacfwd, jacrev}` now only apply filtering to their inputs, not their outputs. Previously this was problematic, as there was no way to represent static-input-by-static-output in the returned Jacobian, so pieces were silently dropped. (#734, thanks @lockwo!)

- `eqx.tree_at` can now be used to replace empty tuples. (#715, #717, #722, thanks @lockwo!)

- `eqx.filter_custom_jvp` no longer raises a trace-time crash in some scenarios in which its `**kwargs` were erroneously counted as having tangents. (https://github.com/patrick-kidger/equinox/issues/745#issuecomment-2148560546, #749)

- No longer getting a trace-time crash when doing a particular combination of vmap + autodiff + checkpointed while loops. This occurred when using `optimistix.BFGS` around `diffrax.diffeqsolve`. (#777)

- Fixed a trace-time crash when:
    - using a checkpointed while loop...
    - ...with a body function that has a closed-over tracer...
    - ...and that closed-over tracer is differentiated...
    - ...and there are no other closed-over tracers that are differentiated...
    - ...and the dependency on that tracer is only linear.
    - (https://github.com/patrick-kidger/diffrax/pull/387#issuecomment-2132472392, #752, thanks @dkweiss31!)

- Fixed a trace-time crash when composing the grad of vmap of `lineax.linear_solve`.
(https://github.com/patrick-kidger/lineax/issues/101, #795, thanks @rhacking!)

- `eqx.nn.RMSNorm` now uses at least 32-bit precision, for numerical stability. (#723, thanks @AakashKumarNain!)

## New features

- `eqx.nn.{Linear, Conv, GRUCell, LSTMCell}` now support complex dtypes. (#765, thanks @ChenAo-Phys!)

- Added `eqx.nn.RotaryEmbedding(..., theta=...)`. (#735, thanks @Artur-Galstyan!)

## Other changes

- Several doc fixes. (#708, #731, #733, #747, #750, #757 + several other PRs, thanks @Artur-Galstyan, @matteoguarrera, @lockwo, @nasyxx!)

- Several internal test fixes, as downstream libraries have changed slightly. (#740, #742 + several other PRs; big thanks to @GaetanLepage for reporting many of these!)

- There is now a Mistral 7B implementation using JAX+Equinox available over in [AakashKumarNain/mistral_jax](https://github.com/AakashKumarNain/mistral_jax)!

## New Contributors
* @nasyxx made their first contribution in https://github.com/patrick-kidger/equinox/pull/708
* @jakevdp made their first contribution in https://github.com/patrick-kidger/equinox/pull/724
* @matteoguarrera made their first contribution in https://github.com/patrick-kidger/equinox/pull/739

**Full Changelog**: https://github.com/patrick-kidger/equinox/compare/v0.11.4...v0.11.5

*Released: 2024-08-18*

# v0.11.4

## Features

- Added `eqx.filter_shard`. This lowers to `jax.lax.with_sharding_constraint` as a single way to transfer or reshard data, both inside and outside of JIT! (No more `jax.device_put`.) In addition, the parallelism example has been updated to use this simpler new functionality. (Thanks @homerjed and @dlwh!
#688, #691)

- Added `eqx.filter_{jacfwd, jacrev, hessian}`. These do what you expect! (Thanks @lockwo! #677)

- Added `eqx.nn.RotaryPositionalEmbedding`. This is designed to be used in conjunction with the existing `eqx.nn.MultiheadAttention`. (Thanks @Artur-Galstyan! #568)

- Added support for `padding='VALID'`, `padding='SAME'`, and `padding='SAME_LOWER'` in the convolutional layers `eqx.nn.{Conv, ...}`. (Thanks @ChenAo-Phys! #658)

- Added support for `padding_mode='ZEROS'`, `padding_mode='REFLECT'`, `padding_mode='REPLICATE'`, and `padding_mode='CIRCULAR'` in the convolutional layers `eqx.nn.{Conv, ...}`. (Thanks @ChenAo-Phys! #658)

- Added a `dtype` argument to `eqx.nn.{MultiheadAttention, Linear, Conv, ...}` for specifying the dtype of their parameters. In addition, `eqx.nn.BatchNorm` will now also use its `dtype` argument to determine the dtype of its weights and bias, not just the dtype of its moving statistics. (Thanks @Artur-Galstyan and @AakashKumarNain! #680, #689)

## Compatibility

- `eqx.error_if` is now compatible with JAX 0.4.26, which changed JAX's own reporting of error messages slightly. (Thanks @hawkinsp! #670)

- Added a warning that checks for doing something like:

    ```python
    class MyModule(eqx.Module):
        fn: Callable

        def __init__(self, ...):
            self.fn = jax.vmap(some_fn)
    ```

    as this is an easy source of bugs. (The vmap'd function is not a PyTree, so it will not propagate anything in the PyTree structure of `some_fn`.)

## Technical internal stuff

- `eqx.internal.while_loop(..., kind="checkpointed")` will now only propagate forward JVP tracers for those outputs which are perturbed due to the input to the loop being perturbed. (Rather than all of them.) This change just means that later calls to a nondifferentiable operation, like `jax.pure_callback` or `eqx.internal.nondifferentiable`, will no longer crash at trace time.
(See https://github.com/patrick-kidger/diffrax/issues/396.)

- `eqx.internal.while_loop(..., kind="bounded")` will now handle certain vmap+grad combinations without crashing. (It seems like JAX is adding some spurious batch tracers.) (See https://github.com/patrick-kidger/optimistix/issues/48#issuecomment-2009221739.)

- The transpose rule for `eqx.internal.create_vprim` now understands symbolic zeros, fixing a crash for grad-of-vmap-of-`lineax.linear_solve` when only some of its outputs are used. (See https://github.com/patrick-kidger/optimistix/issues/48.)

- The type annotation for the input of any converter function used in `eqx.field(converter=...)` will now be used as the type annotation in any `dataclass`-autogenerated `__init__` functions. In particular, this should mean such functions are now compatible with runtime type checkers like beartype. (jaxtyping users, you were already covered: it checks the assigned annotations instead.)

## New Contributors
* @ChenAo-Phys made their first contribution in https://github.com/patrick-kidger/equinox/pull/658
* @hawkinsp made their first contribution in https://github.com/patrick-kidger/equinox/pull/670
* @AakashKumarNain made their first contribution in https://github.com/patrick-kidger/equinox/pull/680
* @imilas made their first contribution in https://github.com/patrick-kidger/equinox/pull/699

**Full Changelog**: https://github.com/patrick-kidger/equinox/compare/v0.11.3...v0.11.4

*Released: 2024-04-14*

# v0.11.3

## Features

- Added `equinox.nn.RMSNorm`.
- Added `equinox.nn.WeightNorm`.
- `equinox.tree_deserialise_leaves` now treats
`jax.ShapeDtypeStruct`s in the same way as arrays. This makes it possible to avoid instantiating the initial model parameters only to throw them away again, by using `equinox.filter_eval_shape`:

    ```python
    model = eqx.filter_eval_shape(Model, ...hyperparameters...)
    model = eqx.tree_deserialise_leaves(load_path, model)
    ```

    (#259)

## Bugfixes

- `equinox.internal.noinline` no longer initialises the JAX backend on use.
- `equinox.filter_jit(...).lower(..., some_kwarg=...)` no longer crashes. (#625, #627)
- The state of `equinox.nn.BatchNorm` now uses the default floating-point dtype, rather than always using `float32`.
- `equinox.nn.MultiheadAttention` should now perform the softmax in `float32` even when the input is of lower dtype. (This is important for numerical stability.)

## Refactor

- All the layers in `equinox.nn.{Linear, MLP, ...}` now standardise on accepting extra `**kwargs` and not calling `super().__init__`. The intention is that these layers be treated as final, i.e. not subclassable. (Previously things were inconsistent: some did this and some did not.)
- Should now be compatible with `JAX_NUMPY_DTYPE_PROMOTION=strict` and `JAX_NUMPY_RANK_PROMOTION=raise`; this is checked in tests.
- Better error message when no kwargs are passed to `filter_grad`. (Thanks @knyazer! #589)

## Internal features
_These are undocumented internal features, that may be changed at any time._

- Added `EQX_GETKEY_SEED` for use with `equinox.internal.GetKey`.
- `equinox.internal.while_loop` now has its runtime errors removed. This should help with compatibility on TPUs. (#628)

## New Contributors
* @haydn-jones made their first contribution in https://github.com/patrick-kidger/equinox/pull/608

**Full Changelog**: https://github.com/patrick-kidger/equinox/compare/v0.11.2...v0.11.3

*Released: 2024-01-10*