[{"data":1,"prerenderedAt":-1},["ShallowReactive",2],{"similar-jax-ml--jax":3,"tool-jax-ml--jax":64},[4,17,27,35,43,56],{"id":5,"name":6,"github_repo":7,"description_zh":8,"stars":9,"difficulty_score":10,"last_commit_at":11,"category_tags":12,"status":16},3808,"stable-diffusion-webui","AUTOMATIC1111\u002Fstable-diffusion-webui","stable-diffusion-webui 是一个基于 Gradio 构建的网页版操作界面，旨在让用户能够轻松地在本地运行和使用强大的 Stable Diffusion 图像生成模型。它解决了原始模型依赖命令行、操作门槛高且功能分散的痛点，将复杂的 AI 绘图流程整合进一个直观易用的图形化平台。\n\n无论是希望快速上手的普通创作者、需要精细控制画面细节的设计师，还是想要深入探索模型潜力的开发者与研究人员，都能从中获益。其核心亮点在于极高的功能丰富度：不仅支持文生图、图生图、局部重绘（Inpainting）和外绘（Outpainting）等基础模式，还独创了注意力机制调整、提示词矩阵、负向提示词以及“高清修复”等高级功能。此外，它内置了 GFPGAN 和 CodeFormer 等人脸修复工具，支持多种神经网络放大算法，并允许用户通过插件系统无限扩展能力。即使是显存有限的设备，stable-diffusion-webui 也提供了相应的优化选项，让高质量的 AI 艺术创作变得触手可及。",162132,3,"2026-04-05T11:01:52",[13,14,15],"开发框架","图像","Agent","ready",{"id":18,"name":19,"github_repo":20,"description_zh":21,"stars":22,"difficulty_score":23,"last_commit_at":24,"category_tags":25,"status":16},1381,"everything-claude-code","affaan-m\u002Feverything-claude-code","everything-claude-code 是一套专为 AI 编程助手（如 Claude Code、Codex、Cursor 等）打造的高性能优化系统。它不仅仅是一组配置文件，而是一个经过长期实战打磨的完整框架，旨在解决 AI 代理在实际开发中面临的效率低下、记忆丢失、安全隐患及缺乏持续学习能力等核心痛点。\n\n通过引入技能模块化、直觉增强、记忆持久化机制以及内置的安全扫描功能，everything-claude-code 能显著提升 AI 在复杂任务中的表现，帮助开发者构建更稳定、更智能的生产级 AI 代理。其独特的“研究优先”开发理念和针对 Token 消耗的优化策略，使得模型响应更快、成本更低，同时有效防御潜在的攻击向量。\n\n这套工具特别适合软件开发者、AI 研究人员以及希望深度定制 AI 工作流的技术团队使用。无论您是在构建大型代码库，还是需要 AI 协助进行安全审计与自动化测试，everything-claude-code 都能提供强大的底层支持。作为一个曾荣获 Anthropic 黑客大奖的开源项目，它融合了多语言支持与丰富的实战钩子（hooks），让 AI 真正成长为懂上",140436,2,"2026-04-05T23:32:43",[13,15,26],"语言模型",{"id":28,"name":29,"github_repo":30,"description_zh":31,"stars":32,"difficulty_score":23,"last_commit_at":33,"category_tags":34,"status":16},2271,"ComfyUI","Comfy-Org\u002FComfyUI","ComfyUI 是一款功能强大且高度模块化的视觉 AI 引擎，专为设计和执行复杂的 Stable Diffusion 图像生成流程而打造。它摒弃了传统的代码编写模式，采用直观的节点式流程图界面，让用户通过连接不同的功能模块即可构建个性化的生成管线。\n\n这一设计巧妙解决了高级 AI 绘图工作流配置复杂、灵活性不足的痛点。用户无需具备编程背景，也能自由组合模型、调整参数并实时预览效果，轻松实现从基础文生图到多步骤高清修复等各类复杂任务。ComfyUI 拥有极佳的兼容性，不仅支持 Windows、macOS 和 Linux 全平台，还广泛适配 NVIDIA、AMD、Intel 及苹果 Silicon 等多种硬件架构，并率先支持 SDXL、Flux、SD3 等前沿模型。\n\n无论是希望深入探索算法潜力的研究人员和开发者，还是追求极致创作自由度的设计师与资深 AI 绘画爱好者，ComfyUI 都能提供强大的支持。其独特的模块化架构允许社区不断扩展新功能，使其成为当前最灵活、生态最丰富的开源扩散模型工具之一，帮助用户将创意高效转化为现实。",107662,"2026-04-03T11:11:01",[13,14,15],{"id":36,"name":37,"github_repo":38,"description_zh":39,"stars":40,"difficulty_score":23,"last_commit_at":41,"category_tags":42,"status":16},3704,"NextChat","ChatGPTNextWeb\u002FNextChat","NextChat 是一款轻量且极速的 AI 助手，旨在为用户提供流畅、跨平台的大模型交互体验。它完美解决了用户在多设备间切换时难以保持对话连续性，以及面对众多 AI 模型不知如何统一管理的痛点。无论是日常办公、学习辅助还是创意激发，NextChat 都能让用户随时随地通过网页、iOS、Android、Windows、MacOS 或 Linux 端无缝接入智能服务。\n\n这款工具非常适合普通用户、学生、职场人士以及需要私有化部署的企业团队使用。对于开发者而言，它也提供了便捷的自托管方案，支持一键部署到 Vercel 或 Zeabur 等平台。\n\nNextChat 的核心亮点在于其广泛的模型兼容性，原生支持 Claude、DeepSeek、GPT-4 及 Gemini Pro 等主流大模型，让用户在一个界面即可自由切换不同 AI 能力。此外，它还率先支持 MCP（Model Context Protocol）协议，增强了上下文处理能力。针对企业用户，NextChat 提供专业版解决方案，具备品牌定制、细粒度权限控制、内部知识库整合及安全审计等功能，满足公司对数据隐私和个性化管理的高标准要求。",87618,"2026-04-05T07:20:52",[13,26],{"id":44,"name":45,"github_repo":46,"description_zh":47,"stars":48,"difficulty_score":23,"last_commit_at":49,"category_tags":50,"status":16},2268,"ML-For-Beginners","microsoft\u002FML-For-Beginners","ML-For-Beginners 是由微软推出的一套系统化机器学习入门课程，旨在帮助零基础用户轻松掌握经典机器学习知识。这套课程将学习路径规划为 12 周，包含 26 节精炼课程和 52 道配套测验，内容涵盖从基础概念到实际应用的完整流程，有效解决了初学者面对庞大知识体系时无从下手、缺乏结构化指导的痛点。\n\n无论是希望转型的开发者、需要补充算法背景的研究人员，还是对人工智能充满好奇的普通爱好者，都能从中受益。课程不仅提供了清晰的理论讲解，还强调动手实践，让用户在循序渐进中建立扎实的技能基础。其独特的亮点在于强大的多语言支持，通过自动化机制提供了包括简体中文在内的 50 
多种语言版本，极大地降低了全球不同背景用户的学习门槛。此外，项目采用开源协作模式，社区活跃且内容持续更新，确保学习者能获取前沿且准确的技术资讯。如果你正寻找一条清晰、友好且专业的机器学习入门之路，ML-For-Beginners 将是理想的起点。",84991,"2026-04-05T10:45:23",[14,51,52,53,15,54,26,13,55],"数据工具","视频","插件","其他","音频",{"id":57,"name":58,"github_repo":59,"description_zh":60,"stars":61,"difficulty_score":10,"last_commit_at":62,"category_tags":63,"status":16},3128,"ragflow","infiniflow\u002Fragflow","RAGFlow 是一款领先的开源检索增强生成（RAG）引擎，旨在为大语言模型构建更精准、可靠的上下文层。它巧妙地将前沿的 RAG 技术与智能体（Agent）能力相结合，不仅支持从各类文档中高效提取知识，还能让模型基于这些知识进行逻辑推理和任务执行。\n\n在大模型应用中，幻觉问题和知识滞后是常见痛点。RAGFlow 通过深度解析复杂文档结构（如表格、图表及混合排版），显著提升了信息检索的准确度，从而有效减少模型“胡编乱造”的现象，确保回答既有据可依又具备时效性。其内置的智能体机制更进一步，使系统不仅能回答问题，还能自主规划步骤解决复杂问题。\n\n这款工具特别适合开发者、企业技术团队以及 AI 研究人员使用。无论是希望快速搭建私有知识库问答系统，还是致力于探索大模型在垂直领域落地的创新者，都能从中受益。RAGFlow 提供了可视化的工作流编排界面和灵活的 API 接口，既降低了非算法背景用户的上手门槛，也满足了专业开发者对系统深度定制的需求。作为基于 Apache 2.0 协议开源的项目，它正成为连接通用大模型与行业专有知识之间的重要桥梁。",77062,"2026-04-04T04:44:48",[15,14,13,26,54],{"id":65,"github_repo":66,"name":67,"description_en":68,"description_zh":69,"ai_summary_zh":69,"readme_en":70,"readme_zh":71,"quickstart_zh":72,"use_case_zh":73,"hero_image_url":74,"owner_login":75,"owner_name":75,"owner_avatar_url":76,"owner_bio":77,"owner_company":78,"owner_location":78,"owner_email":78,"owner_twitter":78,"owner_website":79,"owner_url":80,"languages":81,"stars":114,"forks":115,"last_commit_at":116,"license":117,"difficulty_score":10,"env_os":118,"env_gpu":119,"env_ram":120,"env_deps":121,"category_tags":125,"github_topics":126,"view_count":127,"oss_zip_url":78,"oss_zip_packed_at":78,"status":16,"created_at":128,"updated_at":129,"faqs":130,"releases":161},393,"jax-ml\u002Fjax","jax","Composable transformations of Python+NumPy programs: differentiate, vectorize, JIT to GPU\u002FTPU, and more","JAX 是一个基于 Python 的高性能数值计算库，专为大规模机器学习和科学计算设计。它让你能够用熟悉的 NumPy 风格编写代码，却能自动利用 GPU 或 TPU 等加速器大幅提升运算速度。\n\n面对传统框架在灵活性与性能之间的权衡难题，JAX 提供了一套可组合的程序转换系统。它不仅能对原生 Python 函数进行自动微分，支持高阶导数和反向传播，还能通过 XLA 编译器将代码编译为针对特定硬件优化的执行计划。这意味着开发者可以轻松地将求导、编译和优化等操作自由组合，无需手动处理底层细节。\n\nJAX 非常适合机器学习研究人员、算法工程师以及对计算性能有严格要求的开发者。作为开源研究项目，它在保持强大可扩展性的同时，也鼓励社区共同完善。通过 JAX，你可以更专注于模型创新，而非陷入性能调优的泥潭。","\u003Cdiv align=\"center\">\n\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fjax-ml_jax_readme_165be0d1697a.png\" alt=\"logo\">\u003C\u002Fimg>\n\u003C\u002Fdiv>\n\n# Transformable numerical computing at scale\n\n[![Continuous integration](https:\u002F\u002Fgithub.com\u002Fjax-ml\u002Fjax\u002Factions\u002Fworkflows\u002Fci-build.yaml\u002Fbadge.svg)](https:\u002F\u002Fgithub.com\u002Fjax-ml\u002Fjax\u002Factions\u002Fworkflows\u002Fci-build.yaml)\n[![PyPI version](https:\u002F\u002Fimg.shields.io\u002Fpypi\u002Fv\u002Fjax)](https:\u002F\u002Fpypi.org\u002Fproject\u002Fjax\u002F)\n\n[**Transformations**](#transformations)\n| [**Scaling**](#scaling)\n| [**Install guide**](#installation)\n| [**Change logs**](https:\u002F\u002Fdocs.jax.dev\u002Fen\u002Flatest\u002Fchangelog.html)\n| [**Reference docs**](https:\u002F\u002Fdocs.jax.dev\u002Fen\u002Flatest\u002F)\n\n\n## What is JAX?\n\nJAX is a Python library for accelerator-oriented array computation and program transformation,\ndesigned for high-performance numerical computing and large-scale machine learning.\n\nJAX can automatically differentiate native\nPython and NumPy functions. It can differentiate through loops, branches,\nrecursion, and closures, and it can take derivatives of derivatives of\nderivatives. It supports reverse-mode differentiation (a.k.a. 
backpropagation)\nvia [`jax.grad`](#automatic-differentiation-with-grad) as well as forward-mode differentiation,\nand the two can be composed arbitrarily to any order.\n\nJAX uses [XLA](https:\u002F\u002Fwww.openxla.org\u002Fxla)\nto compile and scale your NumPy programs on TPUs, GPUs, and other hardware accelerators.\nYou can compile your own pure functions with [`jax.jit`](#compilation-with-jit).\nCompilation and automatic differentiation can be composed arbitrarily.\n\nDig a little deeper, and you'll see that JAX is really an extensible system for\n[composable function transformations](#transformations) at [scale](#scaling).\n\nThis is a research project, not an official Google product. Expect\n[sharp edges](https:\u002F\u002Fdocs.jax.dev\u002Fen\u002Flatest\u002Fnotebooks\u002FCommon_Gotchas_in_JAX.html).\nPlease help by trying it out, [reporting bugs](https:\u002F\u002Fgithub.com\u002Fjax-ml\u002Fjax\u002Fissues),\nand letting us know what you think!\n\n```python\nimport jax\nimport jax.numpy as jnp\n\ndef predict(params, inputs):\n  for W, b in params:\n    outputs = jnp.dot(inputs, W) + b\n    inputs = jnp.tanh(outputs)  # inputs to the next layer\n  return outputs                # no activation on last layer\n\ndef loss(params, inputs, targets):\n  preds = predict(params, inputs)\n  return jnp.sum((preds - targets)**2)\n\ngrad_loss = jax.jit(jax.grad(loss))  # compiled gradient evaluation function\nperex_grads = jax.jit(jax.vmap(grad_loss, in_axes=(None, 0, 0)))  # fast per-example grads\n```\n\n### Contents\n* [Transformations](#transformations)\n* [Scaling](#scaling)\n* [Current gotchas](#gotchas-and-sharp-bits)\n* [Installation](#installation)\n* [Citing JAX](#citing-jax)\n* [Reference documentation](#reference-documentation)\n\n## Transformations\n\nAt its core, JAX is an extensible system for transforming numerical functions.\nHere are three: `jax.grad`, `jax.jit`, and `jax.vmap`.\n\n### Automatic differentiation with `grad`\n\nUse [`jax.grad`](https:\u002F\u002Fdocs.jax.dev\u002Fen\u002Flatest\u002Fjax.html#jax.grad)\nto efficiently compute reverse-mode gradients:\n\n```python\nimport jax\nimport jax.numpy as jnp\n\ndef tanh(x):\n  y = jnp.exp(-2.0 * x)\n  return (1.0 - y) \u002F (1.0 + y)\n\ngrad_tanh = jax.grad(tanh)\nprint(grad_tanh(1.0))\n# prints 0.4199743\n```\n\nYou can differentiate to any order with `grad`:\n\n```python\nprint(jax.grad(jax.grad(jax.grad(tanh)))(1.0))\n# prints 0.62162673\n```\n\nYou're free to use differentiation with Python control flow:\n\n```python\ndef abs_val(x):\n  if x > 0:\n    return x\n  else:\n    return -x\n\nabs_val_grad = jax.grad(abs_val)\nprint(abs_val_grad(1.0))   # prints 1.0\nprint(abs_val_grad(-1.0))  # prints -1.0 (abs_val is re-evaluated)\n```\n\nSee the [JAX Autodiff\nCookbook](https:\u002F\u002Fdocs.jax.dev\u002Fen\u002Flatest\u002Fnotebooks\u002Fautodiff_cookbook.html)\nand the [reference docs on automatic\ndifferentiation](https:\u002F\u002Fdocs.jax.dev\u002Fen\u002Flatest\u002Fjax.html#automatic-differentiation)\nfor more.\n\n### Compilation with `jit`\n\nUse XLA to compile your functions end-to-end with\n[`jit`](https:\u002F\u002Fdocs.jax.dev\u002Fen\u002Flatest\u002Fjax.html#just-in-time-compilation-jit),\nused either as an `@jit` decorator or as a higher-order function.\n\n```python\nimport jax\nimport jax.numpy as jnp\n\ndef slow_f(x):\n  # Element-wise ops see a large benefit from fusion\n  return x * x + x * 2.0\n\nx = jnp.ones((5000, 5000))\nfast_f = jax.jit(slow_f)\n%timeit -n10 -r3 fast_f(x)\n%timeit -n10 -r3 
slow_f(x)\n```\n\nUsing `jax.jit` constrains the kind of Python control flow\nthe function can use; see\nthe tutorial on [Control Flow and Logical Operators with JIT](https:\u002F\u002Fdocs.jax.dev\u002Fen\u002Flatest\u002Fcontrol-flow.html)\nfor more.\n\n### Auto-vectorization with `vmap`\n\n[`vmap`](https:\u002F\u002Fdocs.jax.dev\u002Fen\u002Flatest\u002Fjax.html#vectorization-vmap) maps\na function along array axes.\nBut instead of just looping over function applications, it pushes the loop down\nonto the function’s primitive operations, e.g. turning matrix-vector multiplies into\nmatrix-matrix multiplies for better performance.\n\nUsing `vmap` can save you from having to carry around batch dimensions in your\ncode:\n\n```python\nimport jax\nimport jax.numpy as jnp\n\ndef l1_distance(x, y):\n  assert x.ndim == y.ndim == 1  # only works on 1D inputs\n  return jnp.sum(jnp.abs(x - y))\n\ndef pairwise_distances(dist1D, xs):\n  return jax.vmap(jax.vmap(dist1D, (0, None)), (None, 0))(xs, xs)\n\nxs = jax.random.normal(jax.random.key(0), (100, 3))\ndists = pairwise_distances(l1_distance, xs)\ndists.shape  # (100, 100)\n```\n\nBy composing `jax.vmap` with `jax.grad` and `jax.jit`, we can get efficient\nJacobian matrices, or per-example gradients:\n\n```python\nper_example_grads = jax.jit(jax.vmap(jax.grad(loss), in_axes=(None, 0, 0)))\n```\n\n## Scaling\n\nTo scale your computations across thousands of devices, you can use any\ncomposition of these:\n* [**Compiler-based automatic parallelization**](https:\u002F\u002Fdocs.jax.dev\u002Fen\u002Flatest\u002Fnotebooks\u002FDistributed_arrays_and_automatic_parallelization.html)\nwhere you program as if using a single global machine, and the compiler chooses\nhow to shard data and partition computation (with some user-provided constraints);\n* [**Explicit sharding and automatic partitioning**](https:\u002F\u002Fdocs.jax.dev\u002Fen\u002Flatest\u002Fnotebooks\u002Fexplicit-sharding.html)\nwhere you still have a global view but data shardings are\nexplicit in JAX types, inspectable using `jax.typeof`;\n* [**Manual per-device programming**](https:\u002F\u002Fdocs.jax.dev\u002Fen\u002Flatest\u002Fnotebooks\u002Fshard_map.html)\nwhere you have a per-device view of data\nand computation, and can communicate with explicit collectives.\n\n| Mode | View? | Explicit sharding? | Explicit Collectives? 
|\n|---|---|---|---|\n| Auto | Global | ❌ | ❌ |\n| Explicit | Global | ✅ | ❌ |\n| Manual | Per-device | ✅ | ✅ |\n\n```python\nfrom jax.sharding import set_mesh, AxisType, PartitionSpec as P\nmesh = jax.make_mesh((8,), ('data',), axis_types=(AxisType.Explicit,))\nset_mesh(mesh)\n\n# parameters are sharded for FSDP:\nfor W, b in params:\n  print(f'{jax.typeof(W)}')  # f32[512@data,512]\n  print(f'{jax.typeof(b)}')  # f32[512]\n\n# shard data for batch parallelism:\ninputs, targets = jax.device_put((inputs, targets), P('data'))\n\n# evaluate gradients, automatically parallelized!\ngradfun = jax.jit(jax.grad(loss))\nparam_grads = gradfun(params, (inputs, targets))\n```\n\nSee the [tutorial](https:\u002F\u002Fdocs.jax.dev\u002Fen\u002Flatest\u002Fsharded-computation.html) and\n[advanced guides](https:\u002F\u002Fdocs.jax.dev\u002Fen\u002Flatest\u002Fadvanced_guide.html) for more.\n\n## Gotchas and sharp bits\n\nSee the [Gotchas\nNotebook](https:\u002F\u002Fdocs.jax.dev\u002Fen\u002Flatest\u002Fnotebooks\u002FCommon_Gotchas_in_JAX.html).\n\n## Installation\n\n### Supported platforms\n\n|            | Linux x86_64 | Linux aarch64 | Mac aarch64  | Windows x86_64 | Windows WSL2 x86_64 |\n|------------|--------------|---------------|--------------|----------------|---------------------|\n| CPU        | yes          | yes           | yes          | yes            | yes                 |\n| NVIDIA GPU | yes          | yes           | n\u002Fa          | no             | experimental        |\n| Google TPU | yes          | n\u002Fa           | n\u002Fa          | n\u002Fa            | n\u002Fa                 |\n| AMD GPU    | yes          | no            | n\u002Fa          | no             | experimental        |\n| Apple GPU  | n\u002Fa          | no            | experimental | n\u002Fa            | n\u002Fa                 |\n| Intel GPU  | experimental | n\u002Fa           | n\u002Fa          | no             | no                  |\n\n\n### Instructions\n\n| Platform        | Instructions                                                                                                    |\n|-----------------|-----------------------------------------------------------------------------------------------------------------|\n| CPU             | `pip install -U jax`                                                                                            |\n| NVIDIA GPU      | `pip install -U \"jax[cuda13]\"`                                                                                  |\n| Google TPU      | `pip install -U \"jax[tpu]\"`                                                                                     |\n| AMD GPU (Linux) | Follow [AMD's instructions](https:\u002F\u002Fgithub.com\u002Fjax-ml\u002Fjax\u002Fblob\u002Fmain\u002Fbuild\u002Frocm\u002FREADME.md).                      |\n| Intel GPU       | Follow [Intel's instructions](https:\u002F\u002Fgithub.com\u002Fintel\u002Fintel-extension-for-openxla\u002Fblob\u002Fmain\u002Fdocs\u002Facc_jax.md).  |\n\nSee [the documentation](https:\u002F\u002Fdocs.jax.dev\u002Fen\u002Flatest\u002Finstallation.html)\nfor information on alternative installation strategies. 
These include compiling\nfrom source, installing with Docker, using other versions of CUDA, a\ncommunity-supported conda build, and answers to some frequently-asked questions.\n\n## Citing JAX\n\nTo cite this repository:\n\n```\n@software{jax2018github,\n  author = {James Bradbury and Roy Frostig and Peter Hawkins and Matthew James Johnson and Chris Leary and Dougal Maclaurin and George Necula and Adam Paszke and Jake Vander{P}las and Skye Wanderman-{M}ilne and Qiao Zhang},\n  title = {{JAX}: composable transformations of {P}ython+{N}um{P}y programs},\n  url = {http:\u002F\u002Fgithub.com\u002Fjax-ml\u002Fjax},\n  version = {0.3.13},\n  year = {2018},\n}\n```\n\nIn the above bibtex entry, names are in alphabetical order, the version number\nis intended to be that from [jax\u002Fversion.py](..\u002Fmain\u002Fjax\u002Fversion.py), and\nthe year corresponds to the project's open-source release.\n\nA nascent version of JAX, supporting only automatic differentiation and\ncompilation to XLA, was described in a [paper that appeared at SysML\n2018](https:\u002F\u002Fmlsys.org\u002FConferences\u002F2019\u002Fdoc\u002F2018\u002F146.pdf). We're currently working on\ncovering JAX's ideas and capabilities in a more comprehensive and up-to-date\npaper.\n\n## Reference documentation\n\nFor details about the JAX API, see the\n[reference documentation](https:\u002F\u002Fdocs.jax.dev\u002F).\n\nFor getting started as a JAX developer, see the\n[developer documentation](https:\u002F\u002Fdocs.jax.dev\u002Fen\u002Flatest\u002Fdeveloper.html).\n","\u003Cdiv align=\"center\">\n\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fjax-ml_jax_readme_165be0d1697a.png\" alt=\"logo\">\u003C\u002Fimg>\n\u003C\u002Fdiv>\n\n# 大规模可转换数值计算\n\n[![Continuous integration](https:\u002F\u002Fgithub.com\u002Fjax-ml\u002Fjax\u002Factions\u002Fworkflows\u002Fci-build.yaml\u002Fbadge.svg)](https:\u002F\u002Fgithub.com\u002Fjax-ml\u002Fjax\u002Factions\u002Fworkflows\u002Fci-build.yaml)\n[![PyPI version](https:\u002F\u002Fimg.shields.io\u002Fpypi\u002Fv\u002Fjax)](https:\u002F\u002Fpypi.org\u002Fproject\u002Fjax\u002F)\n\n[**转换**](#transformations)\n| [**扩展性**](#scaling)\n| [**安装指南**](#installation)\n| [**变更日志**](https:\u002F\u002Fdocs.jax.dev\u002Fen\u002Flatest\u002Fchangelog.html)\n| [**参考文档**](https:\u002F\u002Fdocs.jax.dev\u002Fen\u002Flatest\u002F)\n\n\n## 什么是 JAX？\n\nJAX 是一个面向加速器的数组计算和程序转换的 Python 库，专为高性能数值计算和大规模机器学习设计。\n\nJAX 可以自动对原生 Python 和 NumPy 函数进行**自动微分**（Automatic Differentiation）。它可以处理循环、分支、递归和闭包的微分，并且可以对导数求导。它支持通过 [`jax.grad`](#automatic-differentiation-with-grad) 进行**反向模式微分**（即**反向传播** Backpropagation），同时也支持**前向模式微分**，两者可以以任意顺序组合。\n\nJAX 使用 [XLA](https:\u002F\u002Fwww.openxla.org\u002Fxla)（一种用于加速线性代数运算的编译器）在 TPU、GPU 和其他硬件加速器上编译并扩展您的 NumPy 程序。您可以使用 [`jax.jit`](#compilation-with-jit) 编译您自己的纯函数。编译和自动微分也可以任意组合。\n\n深入探究，你会发现 JAX 实际上是一个用于 [可组合的函数转换](#transformations) 且具备 [扩展性](#scaling) 的可扩展系统。\n\n这是一个研究项目，并非官方 Google 产品。请预期会遇到一些 [尖锐之处](https:\u002F\u002Fdocs.jax.dev\u002Fen\u002Flatest\u002Fnotebooks\u002FCommon_Gotchas_in_JAX.html)。请通过试用、[报告错误](https:\u002F\u002Fgithub.com\u002Fjax-ml\u002Fjax\u002Fissues) 并提供反馈来帮助我们！\n\n```python\nimport jax\nimport jax.numpy as jnp\n\ndef predict(params, inputs):\n  for W, b in params:\n    outputs = jnp.dot(inputs, W) + b\n    inputs = jnp.tanh(outputs)  # inputs to the next layer\n  return outputs                # no activation on last layer\n\ndef loss(params, inputs, targets):\n  preds = predict(params, inputs)\n  return jnp.sum((preds - targets)**2)\n\ngrad_loss 
= jax.jit(jax.grad(loss))  # compiled gradient evaluation function\nperex_grads = jax.jit(jax.vmap(grad_loss, in_axes=(None, 0, 0)))  # fast per-example grads\n```\n\n### 目录\n* [转换](#transformations)\n* [扩展性](#scaling)\n* [当前注意事项](#gotchas-and-sharp-bits)\n* [安装](#installation)\n* [引用 JAX](#citing-jax)\n* [参考文档](#reference-documentation)\n\n## 转换\n\n核心而言，JAX 是一个用于转换数值函数的可扩展系统。这里有三个主要的转换：`jax.grad`、`jax.jit` 和 `jax.vmap`。\n\n### 使用 `grad` 进行自动微分\n\n使用 [`jax.grad`](https:\u002F\u002Fdocs.jax.dev\u002Fen\u002Flatest\u002Fjax.html#jax.grad) 高效计算反向模式梯度：\n\n```python\nimport jax\nimport jax.numpy as jnp\n\ndef tanh(x):\n  y = jnp.exp(-2.0 * x)\n  return (1.0 - y) \u002F (1.0 + y)\n\ngrad_tanh = jax.grad(tanh)\nprint(grad_tanh(1.0))\n# prints 0.4199743\n```\n\n您可以使用 `grad` 进行任意阶的微分：\n\n```python\nprint(jax.grad(jax.grad(jax.grad(tanh)))(1.0))\n# prints 0.62162673\n```\n\n您可以自由地将微分与 Python 控制流结合使用：\n\n```python\ndef abs_val(x):\n  if x > 0:\n    return x\n  else:\n    return -x\n\nabs_val_grad = jax.grad(abs_val)\nprint(abs_val_grad(1.0))   # prints 1.0\nprint(abs_val_grad(-1.0))  # prints -1.0 (abs_val is re-evaluated)\n```\n\n更多详情请参见 [JAX 自动微分指南](https:\u002F\u002Fdocs.jax.dev\u002Fen\u002Flatest\u002Fnotebooks\u002Fautodiff_cookbook.html) 和 [关于自动微分的参考文档](https:\u002F\u002Fdocs.jax.dev\u002Fen\u002Flatest\u002Fjax.html#automatic-differentiation)。\n\n### 使用 `jit` 进行编译\n\n使用 XLA 通过 [`jit`](https:\u002F\u002Fdocs.jax.dev\u002Fen\u002Flatest\u002Fjax.html#just-in-time-compilation-jit) 将您的函数进行端到端编译，既可以作为 `@jit` 装饰器使用，也可以作为高阶函数使用。\n\n```python\nimport jax\nimport jax.numpy as jnp\n\ndef slow_f(x):\n  # Element-wise ops see a large benefit from fusion\n  return x * x + x * 2.0\n\nx = jnp.ones((5000, 5000))\nfast_f = jax.jit(slow_f)\n%timeit -n10 -r3 fast_f(x)\n%timeit -n10 -r3 slow_f(x)\n```\n\n使用 `jax.jit` 会限制函数可以使用的 Python 控制流类型；更多详情请参见关于 [JIT 的控制流和逻辑运算符](https:\u002F\u002Fdocs.jax.dev\u002Fen\u002Flatest\u002Fcontrol-flow.html) 的教程。\n\n### 使用 `vmap` 进行自动向量化\n\n[`vmap`](https:\u002F\u002Fdocs.jax.dev\u002Fen\u002Flatest\u002Fjax.html#vectorization-vmap) 沿数组轴映射一个函数。但它不仅仅是循环遍历函数应用，而是将循环下推至函数的原始操作中，例如将矩阵 - 向量乘法转换为矩阵 - 矩阵乘法以获得更好的性能。\n\n使用 `vmap` 可以避免在代码中携带批量维度（batch dimensions）：\n\n```python\nimport jax\nimport jax.numpy as jnp\n\ndef l1_distance(x, y):\n  assert x.ndim == y.ndim == 1  # only works on 1D inputs\n  return jnp.sum(jnp.abs(x - y))\n\ndef pairwise_distances(dist1D, xs):\n  return jax.vmap(jax.vmap(dist1D, (0, None)), (None, 0))(xs, xs)\n\nxs = jax.random.normal(jax.random.key(0), (100, 3))\ndists = pairwise_distances(l1_distance, xs)\ndists.shape  # (100, 100)\n```\n\n通过组合 `jax.vmap` 与 `jax.grad` 和 `jax.jit`，我们可以获得高效的雅可比矩阵或每个样本的梯度：\n\n```python\nper_example_grads = jax.jit(jax.vmap(jax.grad(loss), in_axes=(None, 0, 0)))\n```\n\n## 扩展性\n\n要在数千个设备上扩展您的计算，您可以使用以下任意组合：\n* [**基于编译器的自动并行化**](https:\u002F\u002Fdocs.jax.dev\u002Fen\u002Flatest\u002Fnotebooks\u002FDistributed_arrays_and_automatic_parallelization.html)，您可以像使用单个全局机器一样编程，编译器选择如何切分数据和划分计算（带有一些用户提供的约束）；\n* [**显式切分和自动分区**](https:\u002F\u002Fdocs.jax.dev\u002Fen\u002Flatest\u002Fnotebooks\u002Fexplicit-sharding.html)，您仍然拥有全局视图，但数据切分在 JAX 类型中是显式的，可以使用 `jax.typeof` 进行检查；\n* [**手动每设备编程**](https:\u002F\u002Fdocs.jax.dev\u002Fen\u002Flatest\u002Fnotebooks\u002Fshard_map.html)，您拥有数据的每设备视图和计算，并可以通过显式的集合通信进行交流。\n\n| 模式 | 视图？ | 显式切分？ | 显式集合通信？ |\n|---|---|---|---|\n| 自动 | 全局 | ❌ | ❌ |\n| 显式 | 全局 | ✅ | ❌ |\n| 手动 | 每设备 | ✅ | ✅ |\n\n```python\nfrom jax.sharding import set_mesh, AxisType, PartitionSpec as P\nmesh = jax.make_mesh((8,), ('data',), 
axis_types=(AxisType.Explicit,))\nset_mesh(mesh)\n\n# parameters are sharded for FSDP:\nfor W, b in params:\n  print(f'{jax.typeof(W)}')  # f32[512@data,512]\n  print(f'{jax.typeof(b)}')  # f32[512]\n\n# shard data for batch parallelism:\ninputs, targets = jax.device_put((inputs, targets), P('data'))\n\n# evaluate gradients, automatically parallelized!\ngradfun = jax.jit(jax.grad(loss))\nparam_grads = gradfun(params, (inputs, targets))\n```\n\n有关更多信息，请参阅 [教程](https:\u002F\u002Fdocs.jax.dev\u002Fen\u002Flatest\u002Fsharded-computation.html) 和\n[高级指南](https:\u002F\u002Fdocs.jax.dev\u002Fen\u002Flatest\u002Fadvanced_guide.html)。\n\n## 注意事项与易错点\n\n请参阅 [注意事项笔记本](https:\u002F\u002Fdocs.jax.dev\u002Fen\u002Flatest\u002Fnotebooks\u002FCommon_Gotchas_in_JAX.html)。\n\n## 安装\n\n### 支持的平台\n\n|            | Linux x86_64 | Linux aarch64 | Mac aarch64  | Windows x86_64 | Windows WSL2 x86_64 |\n|------------|--------------|---------------|--------------|----------------|---------------------|\n| CPU        | 支持         | 支持          | 支持         | 支持           | 支持                |\n| NVIDIA GPU | 支持         | 支持          | 不适用       | 不支持         | 实验性              |\n| Google TPU | 支持         | 不适用        | 不适用       | 不适用         | 不适用              |\n| AMD GPU    | 支持         | 不支持        | 不适用       | 不支持         | 实验性              |\n| Apple GPU  | 不适用       | 不支持        | 实验性       | 不适用         | 不适用              |\n| Intel GPU  | 实验性       | 不适用        | 不适用       | 不支持         | 不支持              |\n\n\n### 安装说明\n\n| 平台             | 说明                                                                                                    |\n|------------------|-------------------------------------------------------------------------------------------------------|\n| CPU              | `pip install -U jax`                                                                                  |\n| NVIDIA GPU       | `pip install -U \"jax[cuda13]\"`                                                                        |\n| Google TPU       | `pip install -U \"jax[tpu]\"`                                                                           |\n| AMD GPU (Linux)  | 遵循 [AMD 的说明](https:\u002F\u002Fgithub.com\u002Fjax-ml\u002Fjax\u002Fblob\u002Fmain\u002Fbuild\u002Frocm\u002FREADME.md)。                      |\n| Intel GPU        | 遵循 [Intel 的说明](https:\u002F\u002Fgithub.com\u002Fintel\u002Fintel-extension-for-openxla\u002Fblob\u002Fmain\u002Fdocs\u002Facc_jax.md)。  |\n\n有关替代安装策略的信息，请参阅 [文档](https:\u002F\u002Fdocs.jax.dev\u002Fen\u002Flatest\u002Finstallation.html)。\n这些包括从源码编译、使用 Docker 安装、使用其他版本的 CUDA、社区支持的 conda 构建，以及一些常见问题解答。\n\n## 引用 JAX\n\n引用此仓库：\n\n```\n@software{jax2018github,\n  author = {James Bradbury and Roy Frostig and Peter Hawkins and Matthew James Johnson and Chris Leary and Dougal Maclaurin and George Necula and Adam Paszke and Jake Vander{P}las and Skye Wanderman-{M}ilne and Qiao Zhang},\n  title = {{JAX}: composable transformations of {P}ython+{N}um{P}y programs},\n  url = {http:\u002F\u002Fgithub.com\u002Fjax-ml\u002Fjax},\n  version = {0.3.13},\n  year = {2018},\n}\n```\n\n在上述 bibtex 条目中，姓名按字母顺序排列，版本号应取自 [jax\u002Fversion.py](..\u002Fmain\u002Fjax\u002Fversion.py)，年份对应项目的开源发布时间。\n\nJAX 的早期版本仅支持自动微分和编译至 XLA，描述于 [发表于 SysML 2018 的论文](https:\u002F\u002Fmlsys.org\u002FConferences\u002F2019\u002Fdoc\u002F2018\u002F146.pdf) 中。我们目前正在撰写一篇更全面、更符合最新进展的论文来涵盖 JAX 的理念和能力。\n\n## 参考文档\n\n关于 JAX API 的详细信息，请参阅\n[参考文档](https:\u002F\u002Fdocs.jax.dev\u002F)。\n\n作为 JAX 
开发人员的入门指南，请参阅\n[开发者文档](https:\u002F\u002Fdocs.jax.dev\u002Fen\u002Flatest\u002Fdeveloper.html)。","# JAX 快速上手指南\n\nJAX 是一个用于加速器导向的数组计算和程序转换的 Python 库，专为高性能数值计算和大规模机器学习设计。它支持自动微分、XLA 编译以及多设备扩展。\n\n## 1. 环境准备\n\n### 系统要求\nJAX 支持以下操作系统架构：\n- **Linux**: x86_64, aarch64\n- **macOS**: aarch64 (Apple Silicon)\n- **Windows**: x86_64, WSL2 x86_64\n\n### 硬件支持\n| 硬件类型 | 支持情况 |\n| :--- | :--- |\n| **CPU** | 所有平台均支持 |\n| **NVIDIA GPU** | Linux x86_64 \u002F aarch64 (需 CUDA) |\n| **Google TPU** | Linux x86_64 |\n| **AMD GPU** | Linux x86_64 (需 ROCm) |\n| **Intel GPU** | Linux x86_64 (实验性) |\n\n### 前置依赖\n- Python 3.x\n- `pip` 包管理工具\n\n> **提示**：在中国大陆网络环境下，建议配置国内 PyPI 镜像源以加快下载速度（例如清华源或阿里源）。\n\n## 2. 安装步骤\n\n根据目标硬件选择对应的安装命令：\n\n### CPU 版本\n适用于所有平台的基础安装：\n```bash\npip install -U jax\n```\n\n### GPU 版本 (NVIDIA)\n适用于 Linux 平台，需要安装对应版本的 CUDA 驱动：\n```bash\npip install -U \"jax[cuda13]\"\n```\n\n### TPU 版本 (Google Cloud)\n适用于 TPU 环境：\n```bash\npip install -U \"jax[tpu]\"\n```\n\n### 其他硬件\n- **AMD GPU (Linux)**: 请参考 [AMD 官方说明](https:\u002F\u002Fgithub.com\u002Fjax-ml\u002Fjax\u002Fblob\u002Fmain\u002Fbuild\u002Frocm\u002FREADME.md)。\n- **Intel GPU**: 请参考 [Intel 官方说明](https:\u002F\u002Fgithub.com\u002Fintel\u002Fintel-extension-for-openxla\u002Fblob\u002Fmain\u002Fdocs\u002Facc_jax.md)。\n\n> **注意**：如需更详细的安装策略（如 Docker、源码编译等），请查阅 [官方文档](https:\u002F\u002Fdocs.jax.dev\u002Fen\u002Flatest\u002Finstallation.html)。\n\n## 3. 基本使用\n\nJAX 的核心功能包括自动微分 (`grad`)、即时编译 (`jit`) 和向量化 (`vmap`)。以下是最简单的自动微分示例：\n\n```python\nimport jax\nimport jax.numpy as jnp\n\ndef tanh(x):\n  y = jnp.exp(-2.0 * x)\n  return (1.0 - y) \u002F (1.0 + y)\n\ngrad_tanh = jax.grad(tanh)\nprint(grad_tanh(1.0))\n# prints 0.4199743\n```\n\n### 进阶组合\n您可以将 `grad`、`jit` 和 `vmap` 任意组合以实现高性能计算：\n\n```python\nimport jax\nimport jax.numpy as jnp\n\ndef loss(params, inputs, targets):\n  preds = predict(params, inputs)\n  return jnp.sum((preds - targets)**2)\n\n# 编译梯度评估函数\ngrad_loss = jax.jit(jax.grad(loss))\n# 批量计算每个样本的梯度\nperex_grads = jax.jit(jax.vmap(grad_loss, in_axes=(None, 0, 0)))\n```\n\n更多详细 API 和使用指南，请访问 [JAX 参考文档](https:\u002F\u002Fdocs.jax.dev\u002F)。","某金融风控团队正在构建实时欺诈检测模型，需针对复杂的非线性特征工程进行高频次的梯度更新与大规模数据训练。\n\n### 没有 jax 时\n- 手动推导高阶梯度公式极易出错，每次调整网络层数都需重新计算反向传播路径，维护成本极高。\n- 传统 NumPy 循环处理批次数据时受限于 GIL 锁，CPU 满载但 GPU 处于等待状态，资源浪费严重。\n- 不同硬件间的代码移植成本高，从本地调试切换到云端加速需重写底层算子，阻碍实验迭代。\n- 缺乏即时编译机制，Python 动态特性导致推理延迟波动大，难以满足线上服务的低延迟要求。\n\n### 使用 jax 后\n- `jax.grad` 自动追踪计算图，支持任意阶导数，彻底解放了数学公式的推导负担，减少人工错误。\n- `jax.jit` 将 Python 函数编译为机器码，消除解释器开销，使训练速度提升数倍，响应更稳定。\n- `jax.vmap` 实现隐式向量化，轻松应对百万级样本的批量梯度计算，最大化硬件吞吐与并行效率。\n- 统一接口直接调度 GPU 或 TPU，无需关心底层内存管理，实现无缝的高性能部署与跨设备迁移。\n\njax 凭借可组合的转换系统与 XLA 编译器，让研究人员能专注于算法创新而非底层优化。","https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fjax-ml_jax_165be0d1.png","jax-ml","https:\u002F\u002Foss.gittoolsai.com\u002Favatars\u002Fjax-ml_a00cddcf.png","Pushing back the limits on numerical computing.",null,"https:\u002F\u002Fdocs.jax.dev","https:\u002F\u002Fgithub.com\u002Fjax-ml",[82,86,90,94,98,101,105,109,112],{"name":83,"color":84,"percentage":85},"Python","#3572A5",86.6,{"name":87,"color":88,"percentage":89},"C++","#f34b7d",10.4,{"name":91,"color":92,"percentage":93},"Starlark","#76d275",2.1,{"name":95,"color":96,"percentage":97},"Shell","#89e051",0.4,{"name":99,"color":100,"percentage":97},"Jupyter 
Notebook","#DA5B0B",{"name":102,"color":103,"percentage":104},"C","#555555",0.1,{"name":106,"color":107,"percentage":108},"MAXScript","#00a6a6",0,{"name":110,"color":111,"percentage":108},"Makefile","#427819",{"name":113,"color":78,"percentage":108},"Linker Script",35306,3505,"2026-04-05T21:18:58","Apache-2.0","Linux, macOS, Windows","Linux 支持 NVIDIA GPU，Windows 不支持原生 NVIDIA，WSL2 实验性支持；CUDA 版本通过 pip 标签指定（如 cuda13），显存大小未说明","未说明",{"notes":122,"python":120,"dependencies":123},"支持多种硬件加速（NVIDIA GPU、Google TPU、AMD GPU、Intel GPU、Apple GPU）；Windows 仅支持 CPU 模式；推荐使用 pip 安装特定硬件版本的 jax（如 jax[cuda13]）",[124],"numpy",[13],[67],101,"2026-03-27T02:49:30.150509","2026-04-06T08:40:44.129703",[131,136,141,146,151,156],{"id":132,"question_zh":133,"answer_zh":134,"source_url":135},1449,"如何在 Apple Silicon (M1\u002FM2) 上启用 JAX 的 Metal GPU 加速？","需要安装 Apple Metal 插件并确保 macOS 版本为 13.4+ (Ventura)。请按以下步骤操作：\n1. 遵循 Apple 官方教程配置 Metal。\n2. 激活虚拟环境（venv）。\n3. 运行 `python -c 'import jax; print(jax.devices())'` 验证。\n如果仍显示 CPU，尝试重新激活环境或升级系统至 Ventura 13.4 以上。成功识别后应显示 MetalDevice。","https:\u002F\u002Fgithub.com\u002Fjax-ml\u002Fjax\u002Fissues\u002F8074",{"id":137,"question_zh":138,"answer_zh":139,"source_url":140},1450,"在 macOS ARM (Apple Silicon) 上如何安装 JAX？","推荐使用 conda-forge 直接安装，避免混合使用 Anaconda\u002FMiniconda\u002FHomebrew 导致的环境冲突。建议安装 arm64 版本的 Miniforge，然后执行以下命令：\n```\nconda install -c conda-forge jaxlib\nconda install -c conda-forge jax\n```\n这种方式通常能解决大部分兼容性问题。","https:\u002F\u002Fgithub.com\u002Fjax-ml\u002Fjax\u002Fissues\u002F5501",{"id":142,"question_zh":143,"answer_zh":144,"source_url":145},1451,"JAX 是否支持 NVIDIA 以外的 GPU（如 AMD 或 Intel）？","支持。JAX 目前已支持 NVIDIA、AMD、Apple 和 Intel 等多个厂商。对于 AMD GPU，推荐使用 ROCm 4.5+ 配合 Docker 环境：\n1. 使用受支持的 Ubuntu LTS 并安装 HKE addon。\n2. 使用安装程序完整安装 ROCm。\n3. 安装启用硬件扩展的 Docker。\n4. 在容器内构建 rocm + jax，并参考 AMD ROCm 指南配置设备通信。","https:\u002F\u002Fgithub.com\u002Fjax-ml\u002Fjax\u002Fissues\u002F2012",{"id":147,"question_zh":148,"answer_zh":149,"source_url":150},1452,"通过 conda 安装 JAX 时遇到 `INTERNAL: Failed to launch ptxas` 错误怎么办？","此错误通常与 conda-forge 的包打包有关。建议优先尝试通过 pip 安装（遵循 README 官方说明），通常不会出错。如果必须使用 conda，请收集 `conda info` 和 `conda list` 的输出，并在 `conda-forge\u002Fjaxlib-feedstock` 仓库提交问题以获取专门帮助。","https:\u002F\u002Fgithub.com\u002Fjax-ml\u002Fjax\u002Fissues\u002F189",{"id":152,"question_zh":153,"answer_zh":154,"source_url":155},1453,"JAX 核心项目是否包含 BFGS 或拟牛顿优化器？","不包含。根据 JEP 18137 决定，优化器功能已不在 JAX 核心项目的范围内。如果需要此类功能，推荐使用专门的第三方库，如 Optax (https:\u002F\u002Foptax.readthedocs.io\u002F)。","https:\u002F\u002Fgithub.com\u002Fjax-ml\u002Fjax\u002Fissues\u002F1400",{"id":157,"question_zh":158,"answer_zh":159,"source_url":160},1454,"在新硬件（如 Jetson Thor, Blackwell）上从源码构建 JAX 遇到问题怎么办？","如果遇到 LLVM 版本或新 CUDA 架构（如 sm_110, CUDA 13.0）不支持的问题，可能需要等待上游支持或使用特定容器。社区有维护的 Jetson 容器镜像可用（参考 dusty-nv\u002Fjetson-containers）。构建前请确认 jaxlib 是否支持对应的 CUDA 版本和计算能力。","https:\u002F\u002Fgithub.com\u002Fjax-ml\u002Fjax\u002Fissues\u002F31399",[162,167,172,177,182,187,192,197,202,207,212,217,222,227,232,237,242,247,252,257],{"id":163,"version":164,"summary_zh":165,"released_at":166},110618,"jax-v0.9.2","## JAX 0.9.2 (March 2, 2026)\r\n\r\n* Changes:\r\n  * The semi-private type `jax._src.literals.TypedNdArray` is now a subclass of\r\n    `np.ndarray`, rather than a duck type of it.\r\n  * `jax.numpy.arange` with `step` specified no longer generates the array\r\n    on host. The benefit is more efficient code, though this can lead to less\r\n    precise outputs for narrow-width floats (e.g. bfloat16). 
To recover the\r\n    previous behavior in this case, use `jnp.array(np.arange(...))`.","2026-03-18T23:40:31",{"id":168,"version":169,"summary_zh":170,"released_at":171},110619,"jax-v0.9.1","* Changes:\r\n  * JAX tracers that are not of `Array` type (e.g., of `Ref` type) will no\r\n    longer report themselves to be instances of `Array`.\r\n  * Using `jax.shard_map` in Explicit mode will raise an error\r\n    if the PartitionSpec of input does not match the PartitionSpec specified in\r\n    `in_specs`. In other words, it will act like an assert instead of an\r\n    implicit reshard.\r\n    `in_specs` is an optional argument so you can omit specifying it\r\n    and `shard_map` will infer the `PartitionSpec` from the argument. If you\r\n    want to reshard your inputs, you can use `jax.reshard` on the arguments and\r\n    then pass those args to shard_map.\r\n\r\n* New features:\r\n  * Added a debug config `jax_compilation_cache_check_contents`. If set, we miss\r\n    when `get()` is called on a value that has not been `put()` by the current\r\n    process, even if the value is actually in the disk cache. When a value is\r\n    `put()`, we verify that its contents match.","2026-03-02T11:13:44",{"id":173,"version":174,"summary_zh":175,"released_at":176},110620,"jax-v0.9.0.1","JAX v0.9.0.1 is identical to v0.9.0 with the commits from the following four PRs patched in:\r\n\r\n- https:\u002F\u002Fgithub.com\u002Fopenxla\u002Fxla\u002Fpull\u002F36579\r\n- https:\u002F\u002Fgithub.com\u002Fopenxla\u002Fxla\u002Fpull\u002F36345\r\n- https:\u002F\u002Fgithub.com\u002Fopenxla\u002Fxla\u002Fpull\u002F36755\r\n- https:\u002F\u002Fgithub.com\u002Fopenxla\u002Fxla\u002Fpull\u002F36696\r\n","2026-02-05T18:51:19",{"id":178,"version":179,"summary_zh":180,"released_at":181},110621,"jax-v0.8.3","JAX v0.8.3 is identical to v0.8.2 with the following two bug fixes patched in:\r\n\r\n- https:\u002F\u002Fgithub.com\u002Fopenxla\u002Fxla\u002Fcommit\u002F4bc723da9766d784920f4e566f87afc6ffbf6a5b\r\n- https:\u002F\u002Fgithub.com\u002Fopenxla\u002Fxla\u002Fcommit\u002F21552fd83ec0f02ec3f418f4ac356bfa1d91ce4d\r\n","2026-01-29T23:10:00",{"id":183,"version":184,"summary_zh":185,"released_at":186},110622,"jax-v0.9.0","* New features:\r\n\r\n  * Added `jax.thread_guard`, a context manager that detects when devices\r\n    are used by multiple threads in multi-controller JAX.\r\n\r\n* Bug fixes:\r\n  * Fixed a workspace size calculation error for pivoted QR (`magma_zgeqp3_gpu`)\r\n    in MAGMA 2.9.0 when using `use_magma=True` and `pivoting=True`.\r\n    (#34145).\r\n\r\n* Deprecations:\r\n  * The flag `jax_collectives_common_channel_id` was removed.\r\n  * The `jax_pmap_no_rank_reduction` config state has been removed. The\r\n    no-rank-reduction behavior is now the only supported behavior: a\r\n    `jax.pmap`ped function `f` sees inputs of the same rank as the input to\r\n    `jax.pmap(f)`. For example, if `jax.pmap(f)` receives shape `(8, 128)` on\r\n    8 devices, then `f` receives shape `(1, 128)`.\r\n  * Setting the `jax_pmap_shmap_merge` config state is deprecated in JAX v0.9.0\r\n    and will be removed in JAX v0.10.0.\r\n  * `jax.numpy.fix` is deprecated, anticipating the deprecation of\r\n    `numpy.fix` in NumPy v2.5.0. `jax.numpy.trunc` is a drop-in\r\n    replacement.\r\n\r\n* Changes:\r\n  * `jax.export` now supports explicit sharding. This required a new\r\n    export serialization format version that includes the NamedSharding,\r\n    including the abstract mesh, and the partition spec. 
As part of this\r\n    change we have added a restriction in the use of exported modules: when\r\n    calling them the abstract mesh must match the one used at export time,\r\n    including the axis names. Previously, only the number of the devices\r\n    mattered.\r\n","2026-01-20T23:23:57",{"id":188,"version":189,"summary_zh":190,"released_at":191},110623,"jax-v0.8.2","* Deprecations\r\n  * `jax.lax.pvary` has been deprecated.\r\n    Please use `jax.lax.pcast(..., to='varying')` as the replacement.\r\n  * Complex arguments passed to `jax.numpy.arange` now result in a\r\n    deprecation warning, because the output is poorly-defined.\r\n  * From `jax.core` a number of symbols are newly deprecated including:\r\n    `call_impl`, `get_aval`, `mapped_aval`, `subjaxprs`, `set_current_trace`,\r\n    `take_current_trace`, `traverse_jaxpr_params`, `unmapped_aval`,\r\n    `AbstractToken`,  and `TraceTag`.\r\n  * All symbols in `jax.interpreters.pxla` are deprecated. These are\r\n    primarily JAX internal APIs, and users should not rely on them.\r\n\r\n* Changes:\r\n  * jax's `Tracer` no longer inherits from `jax.Array` at runtime. However,\r\n    `jax.Array` now uses a custom metaclass such `isinstance(x, Array)` is true\r\n    if an object `x` represents a traced `Array`. Only some `Tracer`s represent\r\n    `Array`s, so it is not correct for `Tracer` to inherit from `Array`.\r\n\r\n    For the moment, during Python type checking, we continue to declare `Tracer`\r\n    as a subclass of `Array`, however we expect to remove this in a future\r\n    release.\r\n  * `jax.experimental.si_vjp` has been deleted.\r\n    `jax.vjp` subsumes it's functionality.","2025-12-18T18:50:06",{"id":193,"version":194,"summary_zh":195,"released_at":196},110630,"jax-v0.6.1","* New features:\r\n  * Added `jax.lax.axis_size` which returns the size of the mapped axis\r\n    given its name.\r\n\r\n* Changes\r\n  * Additional checking for the versions of CUDA package dependencies was\r\n    reenabled, having been accidentally disabled in a previous release.\r\n  * JAX nightly packages are now published to artifact registry. To install\r\n    these packages, see the [JAX installation guide](https:\u002F\u002Fdocs.jax.dev\u002Fen\u002Flatest\u002Finstallation.html#jax-nightly-installation).\r\n  * `jax.sharding.PartitionSpec` no longer inherits from a tuple.\r\n  * `jax.ShapeDtypeStruct` is immutable now. Please use `.update` method to\r\n    update your `ShapeDtypeStruct` instead of doing in-place updates.\r\n\r\n* Deprecations\r\n  * `jax.custom_derivatives.custom_jvp_call_jaxpr_p` is deprecated, and will be\r\n    removed in JAX v0.7.0.\r\n","2025-05-21T18:30:19",{"id":198,"version":199,"summary_zh":200,"released_at":201},110631,"jax-v0.6.0","\r\n* Breaking changes\r\n\r\n  * `jax.numpy.array` no longer accepts `None`. 
This behavior was\r\n    deprecated since November 2023 and is now removed.\r\n  * Removed the `config.jax_data_dependent_tracing_fallback` config option,\r\n    which was added temporarily in v0.4.36 to allow users to opt out of the\r\n    new \"stackless\" tracing machinery.\r\n  * Removed the `config.jax_eager_pmap` config option.\r\n  * Disallow the calling of `lower` and `trace` AOT APIs on the result\r\n    of `jax.jit` if there have been subsequent wrappers applied.\r\n    Previously this worked, but silently ignored the wrappers.\r\n    The workaround is to apply `jax.jit` last among the wrappers,\r\n    and similarly for `jax.pmap`.\r\n    See `#27873`.\r\n  * The `cuda12_pip` extra for `jax` has been removed; use `pip install jax[cuda12]`\r\n    instead.\r\n\r\n* Changes\r\n  * The minimum CuDNN version is v9.8.\r\n  * JAX is now built using CUDA 12.8. All versions of CUDA 12.1 or newer remain\r\n    supported.\r\n  * JAX package extras are now updated to use dash instead of underscore to\r\n    align with PEP 685. For instance, if you were previously using `pip install jax[cuda12_local]`\r\n    to install JAX, run `pip install jax[cuda12-local]` instead.\r\n  * `jax.jit` now requires `fun` to be passed by position, and additional\r\n    arguments to be passed by keyword. Doing otherwise will result in a\r\n    DeprecationWarning in v0.6.X, and an error in starting in v0.7.X.\r\n\r\n* Deprecations\r\n\r\n  * `jax.tree_util.build_tree` is deprecated. Use `jax.tree.unflatten`\r\n    instead.\r\n  * Implemented host callback handlers for CPU and GPU devices using XLA's FFI\r\n    and removed existing CPU\u002FGPU handlers using XLA's custom call.\r\n  * All APIs in `jax.lib.xla_extension` are now deprecated.\r\n  * `jax.interpreters.mlir.hlo` and `jax.interpreters.mlir.func_dialect`,\r\n    which were accidental exports, have been removed. If needed, they are\r\n    available from `jax.extend.mlir`.\r\n  * `jax.interpreters.mlir.custom_call` is deprecated. The APIs provided by\r\n    `jax.ffi` should be used instead.\r\n  * The deprecated use of `jax.ffi.ffi_call` with inline arguments is no\r\n    longer supported. `jax.ffi.ffi_call` now unconditionally returns a\r\n    callable.\r\n  * The following exports in `jax.lib.xla_client` are deprecated:\r\n    `get_topology_for_devices`, `heap_profile`, `mlir_api_version`, `Client`,\r\n    `CompileOptions`, `DeviceAssignment`, `Frame`, `HloSharding`, `OpSharding`,\r\n    `Traceback`.\r\n  * The following internal APIs in `jax.util` are deprecated:\r\n    `HashableFunction`, `as_hashable_function`, `cache`, `safe_map`, `safe_zip`,\r\n    `split_dict`, `split_list`, `split_list_checked`, `split_merge`, `subvals`,\r\n    `toposort`, `unzip2`, `wrap_name`, and `wraps`.\r\n  * `jax.dlpack.to_dlpack` has been deprecated. You can usually pass a JAX\r\n    `Array` directly to the `from_dlpack` function of another framework. 
If you\r\n    need the functionality of `to_dlpack`, use the `__dlpack__` attribute of an\r\n    array.\r\n  * `jax.lax.infeed`, `jax.lax.infeed_p`, `jax.lax.outfeed`, and\r\n    `jax.lax.outfeed_p` are deprecated and will be removed in JAX v0.7.0.\r\n  * Several previously-deprecated APIs have been removed, including:\r\n    * From `jax.lib.xla_client`: `ArrayImpl`, `FftType`, `PaddingType`,\r\n      `PrimitiveType`, `XlaBuilder`, `dtype_to_etype`,\r\n      `ops`, `register_custom_call_target`, `shape_from_pyval`, `Shape`,\r\n      `XlaComputation`.\r\n    * From `jax.lib.xla_extension`: `ArrayImpl`, `XlaRuntimeError`.\r\n    * From `jax`: `jax.treedef_is_leaf`, `jax.tree_flatten`, `jax.tree_map`,\r\n      `jax.tree_leaves`, `jax.tree_structure`, `jax.tree_transpose`, and\r\n      `jax.tree_unflatten`. Replacements can be found in `jax.tree` or\r\n      `jax.tree_util`.\r\n    * From `jax.core`: `AxisSize`, `ClosedJaxpr`, `EvalTrace`, `InDBIdx`, `InputType`,\r\n      `Jaxpr`, `JaxprEqn`, `Literal`, `MapPrimitive`, `OpaqueTraceState`, `OutDBIdx`,\r\n      `Primitive`, `Token`, `TRACER_LEAK_DEBUGGER_WARNING`, `Var`, `concrete_aval`,\r\n      `dedup_referents`, `escaped_tracer_error`, `extend_axis_env_nd`, `full_lower`,  `get_referent`, `jaxpr_as_fun`, `join_effects`, `lattice_join`,\r\n      `leaked_tracer_error`, `maybe_find_leaked_tracers`, `raise_to_shaped`,\r\n      `raise_to_shaped_mappings`, `reset_trace_state`, `str_eqn_compact`,\r\n      `substitute_vars_in_output_ty`, `typecompat`, and `used_axis_names_jaxpr`. Most\r\n      have no public replacement, though a few are available at `jax.extend.core`.\r\n    * The `vectorized` argument to `jax.pure_callback` and\r\n      `jax.ffi.ffi_call`. Use the `vmap_method` parameter instead.\r\n","2025-04-17T00:04:02",{"id":203,"version":204,"summary_zh":205,"released_at":206},110632,"jax-v0.5.3","* New Features\r\n\r\n  * Added a `allow_negative_indices` option to `jax.lax.dynamic_slice`,\r\n    `jax.lax.dynamic_update_slice` and related functions. The default is\r\n    true, matching the current behavior. If set to false, JAX does not need to\r\n    emit code clamping negative indices, which improves code size.\r\n  * Added a `replace` option to `jax.random.categorical` to enable sampling\r\n    without replacement.","2025-03-19T18:20:32",{"id":208,"version":209,"summary_zh":210,"released_at":211},110633,"jax-v0.5.2","Patch release of 0.5.1\r\n\r\n* Bug fixes\r\n  * Fixes TPU metric logging and `tpu-info`, which was broken in 0.5.1\r\n","2025-03-05T02:36:02",{"id":213,"version":214,"summary_zh":215,"released_at":216},110624,"jax-v0.8.1","* New features:\r\n\r\n  * `jax.jit` now supports the decorator factory pattern; i.e instead of\r\n    writing\r\n    ```python\r\n    @functools.partial(jax.jit, static_argnames=['n'])\r\n    def f(x, n):\r\n      ...\r\n    ```\r\n    you may write\r\n    ```python\r\n    @jax.jit(static_argnames=['n'])\r\n    def f(x, n):\r\n      ...\r\n    ```\r\n\r\n* Changes:\r\n\r\n  * `jax.lax.linalg.eigh` now accepts an `implementation` argument to\r\n    select between QR (CPU\u002FGPU), Jacobi (GPU\u002FTPU), and QDWH (TPU)\r\n    implementations. The `EighImplementation` enum is publicly exported from\r\n    `jax.lax.linalg`.\r\n\r\n  * `jax.lax.linalg.svd` now implements an `algorithm` that uses the polar\r\n    decomposition on CUDA GPUs. 
This is also an alias for the existing algorithm\r\n    on TPUs.\r\n\r\n* Bug fixes:\r\n\r\n  * Fixed a bug introduced in JAX 0.7.2 where eigh failed for large matrices on\r\n    GPU (#33062).\r\n\r\n* Deprecations:\r\n  * `jax.sharding.PmapSharding` is now deprecated. Please use\r\n    `jax.NamedSharding` instead.\r\n  * `jx.device_put_replicated` is now deprecated. Please use `jax.device_put`\r\n    with the appropriate sharding instead.\r\n  * `jax.device_put_sharded` is now deprecated. Please use `jax.device_put` with\r\n    the appropriate sharding instead.\r\n  * Default `axis_types` of `jax.make_mesh` will change in JAX v0.9.0 to return\r\n  `jax.sharding.AxisType.Explicit`. Leaving axis_types unspecified will raise a\r\n  `DeprecationWarning`.\r\n  * `jax.cloud_tpu_init` and its contents were deprecated. There is no reason for a user to import or use the contents of this module; JAX handles this for you automatically if needed.","2025-11-18T18:45:35",{"id":218,"version":219,"summary_zh":220,"released_at":221},110625,"jax-v0.8.0","* Breaking changes:\r\n\r\n  * JAX is changing the default `jax.pmap` implementation to one implemented in\r\n    terms of `jax.jit` and `jax.shard_map`. `jax.pmap` is in maintenance mode\r\n    and we encourage all new code to use `jax.shard_map` directly. See the\r\n    [migration guide](https:\u002F\u002Fdocs.jax.dev\u002Fen\u002Flatest\u002Fmigrate_pmap.html) for\r\n    more information.\r\n  * The `auto=` parameter of `jax.experimental.shard_map.shard_map` has been\r\n    removed. This means that `jax.experimental.shard_map.shard_map` no longer\r\n    supports nesting. If you want to nest shard_map calls, please use\r\n    `jax.shard_map`.\r\n  * JAX no longer allows passing objects that support `__jax_array__` directly\r\n    to, e.g. `jit`-ed functions. Call `jax.numpy.asarray` on them first.\r\n  * `jax.numpy.cov` is now returns NaN for empty arrays ({jax-issue}`#32305`),\r\n    and matches NumPy 2.2 behavior for single-row design matrices ({jax-issue}`#32308`).\r\n  * JAX no longer accepts `Array` values where a `dtype` value is expected. Call\r\n    `.dtype` on these values first.\r\n  * The deprecated function `jax.interpreters.mlir.custom_call` was\r\n    removed.\r\n  * The `jax.util`, `jax.extend.ffi`, and `jax.experimental.host_callback`\r\n    modules have been removed. All public APIs within these modules were\r\n    deprecated and removed in v0.7.0 or earlier.\r\n  * The deprecated symbol `jax.custom_derivatives.custom_jvp_call_jaxpr_p`\r\n    was removed.\r\n  * `jax.experimental.multihost_utils.process_allgather` raises an error when\r\n    the input is a jax.Array and not fully-addressable and `tiled=False`. To fix\r\n    this, pass `tiled=True` to your `process_allgather` invocation.\r\n  * from `jax.experimental.compilation_cache`, the deprecated symbols\r\n    `is_initialized` and `initialize_cache` were removed.\r\n  * The deprecated function `jax.interpreters.xla.canonicalize_dtype`\r\n    was removed.\r\n  * `jaxlib.hlo_helpers` has been removed. Use `jax.ffi` instead.\r\n  * The option `jax_cpu_enable_gloo_collectives` has been removed. Use\r\n    `jax_cpu_collectives_implementation` instead.\r\n  * The previously-deprecated `interpolation` argument to\r\n    `jax.numpy.percentile` and `jax.numpy.quantile` has been\r\n    removed; use `method` instead.\r\n  * The JAX-internal `for_loop` primitive was removed. 
Its functionality,\r\n    reading from and writing to refs in the loop body, is now directly\r\n    supported by `jax.lax.fori_loop`. If you need help updating your\r\n    code, please file a bug.\r\n  * `jax.numpy.trimzeros` now errors for non-1D input.\r\n  * The `where` argument to `jax.numpy.sum` and other reductions is now\r\n    required to be boolean. Non-boolean values have resulted in a\r\n    `DeprecationWarning` since JAX v0.5.0.\r\n  * The deprecated functions in  `jax.dlpack`,  `jax.errors`, \r\n    `jax.lib.xla_bridge`,  `jax.lib.xla_client`, and \r\n    `jax.lib.xla_extension` were removed.\r\n  * `jax.interpreters.mlir.dense_bool_array` was removed. Use MLIR APIs to\r\n    construct attributes instead.\r\n\r\n* Changes\r\n  * `jax.numpy.linalg.eig` now returns a namedtuple (with attributes\r\n    `eigenvalues` and `eigenvectors`) instead of a plain tuple.\r\n  * `jax.grad` and `jax.vjp` will now round always primals to\r\n    `float32` if `float64` mode is not enabled.\r\n  * `jax.dlpack.from_dlpack` now accepts arrays with non-default layouts,\r\n    for example, transposed.\r\n  * The default nonsymmetric eigendecomposition on NVIDIA GPUs now uses\r\n    cusolver. The magma and LAPACK implementations are still available via the\r\n    new `implementation` argument to `jax.lax.linalg.eig`\r\n    ({jax-issue}`#27265`). The `use_magma` argument is now deprecated in favor\r\n    of `implementation`.\r\n  * `jax.numpy.trim_zeros` now follows NumPy 2.2 in supporting\r\n    multi-dimensional inputs.\r\n\r\n* Deprecations\r\n  * `jax.experimental.enable_x64` and `jax.experimental.disable_x64`\r\n    are deprecated in favor of the new non-experimental context manager\r\n    `jax.enable_x64`.\r\n  * `jax.experimental.shard_map.shard_map` is deprecated; going forward use\r\n    `jax.shard_map`.\r\n  * `jax.experimental.pjit.pjit` is deprecated; going forward use\r\n    `jax.jit`.\r\n\r\n","2025-10-15T23:38:34",{"id":223,"version":224,"summary_zh":225,"released_at":226},110626,"jax-v0.7.2","* Breaking changes:\r\n\r\n  * `jax.dlpack.from_dlpack` no longer accepts a DLPack capsule. This\r\n    behavior was deprecated and is now removed. The function must be called\r\n    with an array implementing `__dlpack__` and `__dlpack_device__`.\r\n\r\n* Changes\r\n  * The minimum supported NumPy version is now 2.0. Since SciPy 1.13 is required\r\n    for NumPy 2.0 support, the minimum supported SciPy version is now 1.13.\r\n\r\n  * JAX now represents constants in its internal jaxpr representation as a\r\n    `LiteralArray`, which is a private JAX type that duck types as a\r\n    `numpy.ndarray`. This type may be exposed to users via `custom_jvp` rules,\r\n    for example, and may break code that uses `isinstance(x, np.ndarray)`. If\r\n    this breaks your code, you may convert these arrays to classic NumPy arrays\r\n    using `np.asarray(x)`.\r\n\r\n* Bug fixes\r\n  * `arr.view(dtype=None)` now returns the array unchanged, matching NumPy's\r\n    semantics. Previously it returned the array with a float dtype.\r\n  * `jax.random.randint` now produces a less-biased distribution for 8-bit and\r\n    16-bit integer types ({jax-issue}`#27742`). 
To restore the previous biased\r\n    behavior, you may temporarily set the `jax_safer_randint` configuration to\r\n    `False`, but note this is a temporary config that will be removed in a\r\n    future release.\r\n\r\n* Deprecations:\r\n  * The parameters `enable_xla` and `native_serialization` for `jax2tf.convert`\r\n    are deprecated and will be removed in a future version of JAX. These were\r\n    used for jax2tf with non-native serialization, which has been now removed.\r\n  * Setting the config state `jax_pmap_no_rank_reduction` to `False` is\r\n    deprecated. By default, `jax_pmap_no_rank_reduction` will be set to `True`\r\n    and `jax.pmap` shards will not have their rank reduced, keeping the same\r\n    rank as their enclosing array.","2025-09-16T17:19:35",{"id":228,"version":229,"summary_zh":230,"released_at":231},110627,"jax-v0.7.1","* New features\r\n  * JAX now ships Python 3.14 and 3.14t wheels.\r\n  * JAX now ships Python 3.13t and 3.14t wheels on Mac. Previously we only\r\n    offered free-threading builds on Linux.\r\n\r\n* Changes\r\n  * Exposed `jax.set_mesh` which acts as a global setter and a context manager.\r\n    Removed `jax.sharding.use_mesh` in favor of `jax.set_mesh`.\r\n  * JAX is now built using CUDA 12.9. All versions of CUDA 12.1 or newer remain\r\n    supported.\r\n  * `jax.lax.dot` now implements the general dot product via the optional\r\n    ``dimension_numbers`` argument.\r\n\r\n* Deprecations:\r\n\r\n  * `jax.lax.zeros_like_array` is deprecated. Please use\r\n    `jax.numpy.zeros_like` instead.\r\n  * Attempting to import `jax.experimental.host_callback` now results in\r\n    a `DeprecationWarning`, and will result in an `ImportError` starting in JAX\r\n    v0.8.0. Its APIs have raised `NotImplementedError` since JAX version 0.4.35.\r\n  * In `jax.lax.dot`, passing the ``precision`` and ``preferred_element_type``\r\n    arguments by position is deprecated. Pass them by explicit keyword instead.\r\n  * Several dozen internal APIs have been deprecated from `jax.interpreters.ad`,\r\n    `jax.interpreters.batching`, and `jax.interpreters.partial_eval`; they\r\n    are used rarely if ever outside JAX itself, and most are deprecated without any\r\n    public replacement.\r\n\r\n","2025-08-20T16:04:56",{"id":233,"version":234,"summary_zh":235,"released_at":236},110628,"jax-v0.7.0","\r\n* New features:\r\n  * Added `jax.P` which is an alias for `jax.sharding.PartitionSpec`.\r\n  * Added `jax.tree.reduce_associative`.\r\n\r\n* Breaking changes:\r\n  * JAX is migrating from GSPMD to Shardy by default. See the\r\n    [migration guide](https:\u002F\u002Fdocs.jax.dev\u002Fen\u002Flatest\u002Fshardy_jax_migration.html)\r\n    for more information.\r\n  * JAX autodiff is switching to using direct linearization by default (instead of\r\n    implementing linearization via JVP and partial eval).\r\n    See [migration guide](https:\u002F\u002Fdocs.jax.dev\u002Fen\u002Flatest\u002Fdirect_linearize_migration.html)\r\n    for more information.\r\n  * `jax.stages.OutInfo` has been replaced with `jax.ShapeDtypeStruct`.\r\n  * `jax.jit` now requires `fun` to be passed by position, and additional\r\n    arguments to be passed by keyword. Doing otherwise will result in an error\r\n    starting in v0.7.x. This raised a DeprecationWarning in v0.6.x.\r\n  * The minimum Python version is now 3.11. 
3.11 will remain the minimum\r\n    supported version until July 2026.\r\n  * Layout API renames:\r\n    * `Layout`, `.layout`, `.input_layouts` and `.output_layouts` have been\r\n      renamed to `Format`, `.format`, `.input_formats` and `.output_formats`\r\n    * `DeviceLocalLayout`, `.device_local_layout` have been renamed to `Layout`\r\n      and `.layout`\r\n  * `jax.experimental.shard` module has been deleted and all the APIs have been\r\n    moved to the `jax.sharding` endpoint. So use `jax.sharding.reshard`,\r\n    `jax.sharding.auto_axes` and `jax.sharding.explicit_axes` instead of their\r\n    experimental endpoints.\r\n  * `lax.infeed` and `lax.outfeed` were removed, after being deprecated in\r\n    JAX 0.6. The `transfer_to_infeed` and `transfer_from_outfeed` methods were\r\n    also removed the `Device` objects.\r\n  * The `jax.extend.core.primitives.pjit_p` primitive has been renamed to\r\n    `jit_p`, and its `name` attribute has changed from `\"pjit\"` to `\"jit\"`.\r\n    This affects the string representations of jaxprs. The same primitive is no\r\n    longer exported from the `jax.experimental.pjit` module.\r\n  * The (undocumented) function `jax.extend.backend.add_clear_backends_callback`\r\n    has been removed. Users should use `jax.extend.backend.register_backend_cache`\r\n    instead.\r\n\r\n* Deprecations:\r\n  * {obj}`jax.dlpack.SUPPORTED_DTYPES` is deprecated; please use the new\r\n    `jax.dlpack.is_supported_dtype` function.\r\n  * `jax.scipy.special.sph_harm` has been deprecated following a similar\r\n    deprecation in SciPy; use `jax.scipy.special.sph_harm_y` instead.\r\n  * From {mod}`jax.interpreters.xla`, the previously deprecated symbols\r\n    `abstractify` and `pytype_aval_mappings` have been removed.\r\n  * `jax.interpreters.xla.canonicalize_dtype` is deprecated. For\r\n    canonicalizing dtypes, prefer `jax.dtypes.canonicalize_dtype`.\r\n    For checking whether an object is a valid jax input, prefer\r\n    `jax.core.valid_jaxtype`.\r\n  * From {mod}`jax.core`, the previously deprecated symbols `AxisName`,\r\n    `ConcretizationTypeError`, `axis_frame`, `call_p`, `closed_call_p`,\r\n    `get_type`, `trace_state_clean`, `typematch`, and `typecheck` have been\r\n    removed.\r\n  * From {mod}`jax.lib.xla_client`, the previously deprecated symbols\r\n    `DeviceAssignment`, `get_topology_for_devices`, and `mlir_api_version`\r\n    have been removed.\r\n  * `jax.extend.ffi` was removed after being deprecated in v0.5.0.\r\n    Use {mod}`jax.ffi` instead.\r\n  * `jax.lib.xla_bridge.get_compile_options` is deprecated, and replaced by\r\n    `jax.extend.backend.get_compile_options`.\r\n","2025-07-22T20:33:48",{"id":238,"version":239,"summary_zh":240,"released_at":241},110629,"jax-v0.6.2","* New features:\r\n  * Added `jax.tree.broadcast` which implements a pytree prefix broadcasting helper.\r\n\r\n* Changes\r\n  * The minimum NumPy version is 1.26 and the minimum SciPy version is 1.12.","2025-06-17T23:06:20",{"id":243,"version":244,"summary_zh":245,"released_at":246},110634,"jax-v0.5.1","* New Features\r\n  * Added an experimental `jax.experimental.custom_dce.custom_dce`\r\n    decorator to support customizing the behavior of opaque functions under\r\n    JAX-level dead code elimination (DCE). 
See `#25956` for more\r\n    details.\r\n  * Added low-level reduction APIs in {mod}`jax.lax`: `jax.lax.reduce_sum`,\r\n    `jax.lax.reduce_prod`, `jax.lax.reduce_max`, `jax.lax.reduce_min`,\r\n    `jax.lax.reduce_and`, `jax.lax.reduce_or`, and `jax.lax.reduce_xor`.\r\n  * `jax.lax.linalg.qr` and `jax.scipy.linalg.qr` now support\r\n    column-pivoting on CPU and GPU. See #20282 and\r\n    #25955 for more details.\r\n\r\n* Changes\r\n  * `JAX_CPU_COLLECTIVES_IMPLEMENTATION` and `JAX_NUM_CPU_DEVICES` now work as\r\n    env vars. Previously, they could only be specified via jax.config or flags.\r\n  * `JAX_CPU_COLLECTIVES_IMPLEMENTATION` now defaults to `'gloo'`, meaning\r\n    multi-process CPU communication works out-of-the-box.\r\n  * The `jax[tpu]` TPU extra no longer depends on the `libtpu-nightly` package.\r\n    This package may safely be removed if it is present on your machine; JAX now\r\n    uses `libtpu` instead.\r\n\r\n* Deprecations\r\n  * The internal function `linear_util.wrap_init` and the constructor\r\n    `core.Jaxpr` now must take a non-empty `core.DebugInfo` kwarg. For\r\n    a limited time, a `DeprecationWarning` is printed if\r\n    `jax.extend.linear_util.wrap_init` is used without debugging info.\r\n    A downstream effect of this is that several other internal functions need\r\n    debug info. This change does not affect public APIs.\r\n    See https:\u002F\u002Fgithub.com\u002Fjax-ml\u002Fjax\u002Fissues\u002F26480 for more detail.\r\n\r\n* Bug fixes\r\n  * TPU runtime startup and shutdown time should be significantly improved on\r\n    TPU v5e and newer (from around 17s to around 8s). If not already set, you may\r\n    need to enable transparent hugepages in your VM image\r\n    (`sudo sh -c 'echo always > \u002Fsys\u002Fkernel\u002Fmm\u002Ftransparent_hugepage\u002Fenabled'`).\r\n    We hope to improve this further in future releases.\r\n  * The persistent compilation cache no longer writes an access time file if\r\n    `JAX_COMPILATION_CACHE_MAX_SIZE` is unset or set to -1, i.e. if the LRU\r\n    eviction policy isn't enabled. This should improve performance when using\r\n    the cache with large-scale network storage.","2025-02-24T21:03:58",{"id":248,"version":249,"summary_zh":250,"released_at":251},110635,"jax-v0.5.0","As of this release, JAX now uses [effort-based versioning](https:\u002F\u002Fjax.readthedocs.io\u002Fen\u002Flatest\u002Fjep\u002F25516-effver.html).\r\nSince this release makes a breaking change to PRNG key semantics that\r\nmay require users to update their code, we are bumping the \"meso\" version of JAX\r\nto signify this.\r\n\r\n* Breaking changes\r\n  * Enable `jax_threefry_partitionable` by default (see\r\n    [the update note](https:\u002F\u002Fgithub.com\u002Fjax-ml\u002Fjax\u002Fdiscussions\u002F18480)).\r\n\r\n  * This release drops support for Mac x86 wheels. Mac ARM of course remains\r\n    supported. For a recent discussion, see https:\u002F\u002Fgithub.com\u002Fjax-ml\u002Fjax\u002Fdiscussions\u002F22936.\r\n\r\n    Two key factors motivated this decision:\r\n    * The Mac x86 build (only) has a number of test failures and crashes. We\r\n      would prefer to ship no release rather than a broken release.\r\n    * Mac x86 hardware is end-of-life and cannot be easily obtained for\r\n      developers at this point. 
So it is difficult for us to fix this kind of\r\n      problem even if we wanted to.\r\n\r\n    We are open to re-adding support for Mac x86 if the community is willing\r\n    to help support that platform: in particular, we would need the JAX test\r\n    suite to pass cleanly on Mac x86 before we could ship releases again.\r\n\r\n* Changes:\r\n  * The minimum NumPy version is now 1.25. NumPy 1.25 will remain the minimum\r\n    supported version until June 2025.\r\n  * The minimum SciPy version is now 1.11. SciPy 1.11 will remain the minimum\r\n    supported version until June 2025.\r\n  * `jax.numpy.einsum` now defaults to `optimize='auto'` rather than\r\n    `optimize='optimal'`. This avoids exponentially-scaling trace-time in\r\n    the case of many arguments (`#25214`).\r\n  * `jax.numpy.linalg.solve` no longer supports batched 1D arguments\r\n    on the right hand side. To recover the previous behavior in these cases,\r\n    use `solve(a, b[..., None]).squeeze(-1)`.\r\n\r\n* New Features\r\n  * `jax.numpy.fft.fftn`, `jax.numpy.fft.rfftn`,\r\n    `jax.numpy.fft.ifftn`, and `jax.numpy.fft.irfftn` now support\r\n    transforms in more than 3 dimensions, which was previously the limit. See\r\n    `#25606` for more details.\r\n  * Support added for user-defined state in the FFI via the new\r\n    `jax.ffi.register_ffi_type_id` function.\r\n  * The AOT lowering `.as_text()` method now supports the `debug_info` option\r\n    to include debugging information, e.g., source location, in the output.\r\n\r\n* Deprecations\r\n  * From `jax.interpreters.xla`, `abstractify` and `pytype_aval_mappings`\r\n    are now deprecated, having been replaced by symbols of the same name\r\n    in `jax.core`.\r\n  * `jax.scipy.special.lpmn` and `jax.scipy.special.lpmn_values`\r\n    are deprecated, following their deprecation in SciPy v1.15.0. There are\r\n    no plans to replace these deprecated functions with new APIs.\r\n  * The `jax.extend.ffi` submodule was moved to `jax.ffi`, and the\r\n    previous import path is deprecated.\r\n\r\n* Deletions\r\n  * The `jax_enable_memories` flag has been deleted; the behavior it controlled\r\n    is now on by default.\r\n  * From `jax.lib.xla_client`, the previously-deprecated `Device` and\r\n    `XlaRuntimeError` symbols have been removed; instead use `jax.Device`\r\n    and `jax.errors.JaxRuntimeError` respectively.\r\n  * The `jax.experimental.array_api` module has been removed after being\r\n    deprecated in JAX v0.4.32. 
Since that release, `jax.numpy` supports\r\n    the array API directly.\r\n","2025-01-17T18:27:26",{"id":253,"version":254,"summary_zh":255,"released_at":256},110636,"jax-v0.4.38","* Changes:\r\n  * `jax.tree.flatten_with_path` and `jax.tree.map_with_path` are added\r\n    as shortcuts for the corresponding `tree_util` functions.\r\n\r\n* Deprecations\r\n  * A number of APIs in the internal `jax.core` namespace have been deprecated.\r\n    Most were no-ops, were little-used, or can be replaced by APIs of the same\r\n    name in `jax.extend.core`; see the documentation for {mod}`jax.extend`\r\n    for information on the compatibility guarantees of these semi-public extensions.\r\n  * Several previously-deprecated APIs have been removed, including:\r\n    * from `jax.core`: `check_eqn`, `check_type`, `check_valid_jaxtype`, and\r\n      `non_negative_dim`.\r\n    * from `jax.lib.xla_bridge`: `xla_client` and `default_backend`.\r\n    * from `jax.lib.xla_client`: `_xla` and `bfloat16`.\r\n    * from `jax.numpy`: `round_`.\r\n\r\n* New Features\r\n  * `jax.export.export` can be used for device-polymorphic export with\r\n    shardings constructed with {func}`jax.sharding.AbstractMesh`.\r\n    See the [jax.export documentation](https:\u002F\u002Fjax.readthedocs.io\u002Fen\u002Flatest\u002Fexport\u002Fexport.html#device-polymorphic-export).\r\n  * Added `jax.lax.split`. This is a primitive version of\r\n    `jax.numpy.split`, added because it yields a more compact\r\n    transpose during automatic differentiation.","2024-12-17T23:00:55",{"id":258,"version":259,"summary_zh":260,"released_at":261},110637,"jax-v0.4.37","This is a patch release of jax 0.4.36. Only \"jax\" was released at this version.\r\n\r\n* Bug fixes\r\n  * Fixed a bug where `jit` would error if an argument was named `f` (#25329).\r\n  * Fixed a bug that threw an `index out of range` error in\r\n    `jax.lax.while_loop` if the user registered a pytree node class with\r\n    different aux data for `flatten` and `flatten_with_path`.\r\n  * Pinned a new libtpu release (0.0.6) that fixes a compiler bug on TPU v6e.\r\n","2024-12-10T01:17:59"]