[{"data":1,"prerenderedAt":-1},["ShallowReactive",2],{"similar-LuxDL--Lux.jl":3,"tool-LuxDL--Lux.jl":61},[4,18,26,36,44,53],{"id":5,"name":6,"github_repo":7,"description_zh":8,"stars":9,"difficulty_score":10,"last_commit_at":11,"category_tags":12,"status":17},4358,"openclaw","openclaw\u002Fopenclaw","OpenClaw 是一款专为个人打造的本地化 AI 助手，旨在让你在自己的设备上拥有完全可控的智能伙伴。它打破了传统 AI 助手局限于特定网页或应用的束缚，能够直接接入你日常使用的各类通讯渠道，包括微信、WhatsApp、Telegram、Discord、iMessage 等数十种平台。无论你在哪个聊天软件中发送消息，OpenClaw 都能即时响应，甚至支持在 macOS、iOS 和 Android 设备上进行语音交互，并提供实时的画布渲染功能供你操控。\n\n这款工具主要解决了用户对数据隐私、响应速度以及“始终在线”体验的需求。通过将 AI 部署在本地，用户无需依赖云端服务即可享受快速、私密的智能辅助，真正实现了“你的数据，你做主”。其独特的技术亮点在于强大的网关架构，将控制平面与核心助手分离，确保跨平台通信的流畅性与扩展性。\n\nOpenClaw 非常适合希望构建个性化工作流的技术爱好者、开发者，以及注重隐私保护且不愿被单一生态绑定的普通用户。只要具备基础的终端操作能力（支持 macOS、Linux 及 Windows WSL2），即可通过简单的命令行引导完成部署。如果你渴望拥有一个懂你",349277,3,"2026-04-06T06:32:30",[13,14,15,16],"Agent","开发框架","图像","数据工具","ready",{"id":19,"name":20,"github_repo":21,"description_zh":22,"stars":23,"difficulty_score":10,"last_commit_at":24,"category_tags":25,"status":17},3808,"stable-diffusion-webui","AUTOMATIC1111\u002Fstable-diffusion-webui","stable-diffusion-webui 是一个基于 Gradio 构建的网页版操作界面，旨在让用户能够轻松地在本地运行和使用强大的 Stable Diffusion 图像生成模型。它解决了原始模型依赖命令行、操作门槛高且功能分散的痛点，将复杂的 AI 绘图流程整合进一个直观易用的图形化平台。\n\n无论是希望快速上手的普通创作者、需要精细控制画面细节的设计师，还是想要深入探索模型潜力的开发者与研究人员，都能从中获益。其核心亮点在于极高的功能丰富度：不仅支持文生图、图生图、局部重绘（Inpainting）和外绘（Outpainting）等基础模式，还独创了注意力机制调整、提示词矩阵、负向提示词以及“高清修复”等高级功能。此外，它内置了 GFPGAN 和 CodeFormer 等人脸修复工具，支持多种神经网络放大算法，并允许用户通过插件系统无限扩展能力。即使是显存有限的设备，stable-diffusion-webui 也提供了相应的优化选项，让高质量的 AI 艺术创作变得触手可及。",162132,"2026-04-05T11:01:52",[14,15,13],{"id":27,"name":28,"github_repo":29,"description_zh":30,"stars":31,"difficulty_score":32,"last_commit_at":33,"category_tags":34,"status":17},1381,"everything-claude-code","affaan-m\u002Feverything-claude-code","everything-claude-code 是一套专为 AI 编程助手（如 Claude Code、Codex、Cursor 等）打造的高性能优化系统。它不仅仅是一组配置文件，而是一个经过长期实战打磨的完整框架，旨在解决 AI 代理在实际开发中面临的效率低下、记忆丢失、安全隐患及缺乏持续学习能力等核心痛点。\n\n通过引入技能模块化、直觉增强、记忆持久化机制以及内置的安全扫描功能，everything-claude-code 能显著提升 AI 在复杂任务中的表现，帮助开发者构建更稳定、更智能的生产级 AI 代理。其独特的“研究优先”开发理念和针对 Token 消耗的优化策略，使得模型响应更快、成本更低，同时有效防御潜在的攻击向量。\n\n这套工具特别适合软件开发者、AI 研究人员以及希望深度定制 AI 工作流的技术团队使用。无论您是在构建大型代码库，还是需要 AI 协助进行安全审计与自动化测试，everything-claude-code 都能提供强大的底层支持。作为一个曾荣获 Anthropic 黑客大奖的开源项目，它融合了多语言支持与丰富的实战钩子（hooks），让 AI 真正成长为懂上",141543,2,"2026-04-06T11:32:54",[14,13,35],"语言模型",{"id":37,"name":38,"github_repo":39,"description_zh":40,"stars":41,"difficulty_score":32,"last_commit_at":42,"category_tags":43,"status":17},2271,"ComfyUI","Comfy-Org\u002FComfyUI","ComfyUI 是一款功能强大且高度模块化的视觉 AI 引擎，专为设计和执行复杂的 Stable Diffusion 图像生成流程而打造。它摒弃了传统的代码编写模式，采用直观的节点式流程图界面，让用户通过连接不同的功能模块即可构建个性化的生成管线。\n\n这一设计巧妙解决了高级 AI 绘图工作流配置复杂、灵活性不足的痛点。用户无需具备编程背景，也能自由组合模型、调整参数并实时预览效果，轻松实现从基础文生图到多步骤高清修复等各类复杂任务。ComfyUI 拥有极佳的兼容性，不仅支持 Windows、macOS 和 Linux 全平台，还广泛适配 NVIDIA、AMD、Intel 及苹果 Silicon 等多种硬件架构，并率先支持 SDXL、Flux、SD3 等前沿模型。\n\n无论是希望深入探索算法潜力的研究人员和开发者，还是追求极致创作自由度的设计师与资深 AI 绘画爱好者，ComfyUI 都能提供强大的支持。其独特的模块化架构允许社区不断扩展新功能，使其成为当前最灵活、生态最丰富的开源扩散模型工具之一，帮助用户将创意高效转化为现实。",107888,"2026-04-06T11:32:50",[14,15,13],{"id":45,"name":46,"github_repo":47,"description_zh":48,"stars":49,"difficulty_score":32,"last_commit_at":50,"category_tags":51,"status":17},4721,"markitdown","microsoft\u002Fmarkitdown","MarkItDown 是一款由微软 AutoGen 团队打造的轻量级 Python 工具，专为将各类文件高效转换为 Markdown 格式而设计。它支持 PDF、Word、Excel、PPT、图片（含 OCR）、音频（含语音转录）、HTML 乃至 YouTube 链接等多种格式的解析，能够精准提取文档中的标题、列表、表格和链接等关键结构信息。\n\n在人工智能应用日益普及的今天，大语言模型（LLM）虽擅长处理文本，却难以直接读取复杂的二进制办公文档。MarkItDown 恰好解决了这一痛点，它将非结构化或半结构化的文件转化为模型“原生理解”且 Token 效率极高的 Markdown 格式，成为连接本地文件与 
AI 分析 pipeline 的理想桥梁。此外，它还提供了 MCP（模型上下文协议）服务器，可无缝集成到 Claude Desktop 等 LLM 应用中。\n\n这款工具特别适合开发者、数据科学家及 AI 研究人员使用，尤其是那些需要构建文档检索增强生成（RAG）系统、进行批量文本分析或希望让 AI 助手直接“阅读”本地文件的用户。虽然生成的内容也具备一定可读性，但其核心优势在于为机器",93400,"2026-04-06T19:52:38",[52,14],"插件",{"id":54,"name":55,"github_repo":56,"description_zh":57,"stars":58,"difficulty_score":10,"last_commit_at":59,"category_tags":60,"status":17},4487,"LLMs-from-scratch","rasbt\u002FLLMs-from-scratch","LLMs-from-scratch 是一个基于 PyTorch 的开源教育项目，旨在引导用户从零开始一步步构建一个类似 ChatGPT 的大型语言模型（LLM）。它不仅是同名技术著作的官方代码库，更提供了一套完整的实践方案，涵盖模型开发、预训练及微调的全过程。\n\n该项目主要解决了大模型领域“黑盒化”的学习痛点。许多开发者虽能调用现成模型，却难以深入理解其内部架构与训练机制。通过亲手编写每一行核心代码，用户能够透彻掌握 Transformer 架构、注意力机制等关键原理，从而真正理解大模型是如何“思考”的。此外，项目还包含了加载大型预训练权重进行微调的代码，帮助用户将理论知识延伸至实际应用。\n\nLLMs-from-scratch 特别适合希望深入底层原理的 AI 开发者、研究人员以及计算机专业的学生。对于不满足于仅使用 API，而是渴望探究模型构建细节的技术人员而言，这是极佳的学习资源。其独特的技术亮点在于“循序渐进”的教学设计：将复杂的系统工程拆解为清晰的步骤，配合详细的图表与示例，让构建一个虽小但功能完备的大模型变得触手可及。无论你是想夯实理论基础，还是为未来研发更大规模的模型做准备",90106,"2026-04-06T11:19:32",[35,15,13,14],{"id":62,"github_repo":63,"name":64,"description_en":65,"description_zh":66,"ai_summary_zh":66,"readme_en":67,"readme_zh":68,"quickstart_zh":69,"use_case_zh":70,"hero_image_url":71,"owner_login":72,"owner_name":72,"owner_avatar_url":73,"owner_bio":74,"owner_company":75,"owner_location":75,"owner_email":75,"owner_twitter":75,"owner_website":76,"owner_url":77,"languages":78,"stars":91,"forks":92,"last_commit_at":93,"license":94,"difficulty_score":10,"env_os":95,"env_gpu":96,"env_ram":97,"env_deps":98,"category_tags":108,"github_topics":109,"view_count":32,"oss_zip_url":75,"oss_zip_packed_at":75,"status":17,"created_at":117,"updated_at":118,"faqs":119,"releases":153},4653,"LuxDL\u002FLux.jl","Lux.jl","Elegant and Performant Deep Learning","Lux.jl 是一个专为 Julia 语言打造的深度学习库，旨在将代码的优雅性与极致的运行性能完美结合。它主要解决了传统深度学习框架在灵活性与执行效率之间难以兼顾的痛点，让研究人员和开发者无需在“易读的科研代码”与“高效的生产部署”之间做妥协。\n\n这款工具特别适合熟悉或希望使用 Julia 进行科学计算的研究人员、算法工程师以及高性能计算开发者。如果你正在探索需要快速原型验证且对计算速度有严苛要求的 AI 模型，Lux.jl 是理想的选择。\n\n其核心技术亮点在于独特的架构设计：既保留了 Julia 语言本身简洁、直观的语法风格，便于构建复杂的神经网络结构；又在底层深度集成了 XLA（加速线性代数）编译器技术。这意味着用户编写的代码不仅能像数学公式一样清晰易读，还能自动编译优化，获得媲美底层硬件加速的执行速度。此外，Lux.jl 拥有活跃的社区支持和完善的文档体系，并支持在 Google Colab 等云端环境中直接运行，极大地降低了上手门槛，帮助用户更专注于算法创新而非工程调优。","\u003Cp align=\"center\">\n    \u003Cimg width=\"400px\" src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FLuxDL_Lux.jl_readme_0ba8dfea3256.png\"\u002F>\n    \u003Cimg width=\"400px\" src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FLuxDL_Lux.jl_readme_0dda33202156.png\"\u002F>\n\u003C\u002Fp>\n\n\u003Cdiv align=\"center\">\n\n[![GitHub Discussions](https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Fdiscussions\u002FLuxDL\u002FLux.jl?color=white&logo=github&label=Discussions)](https:\u002F\u002Fgithub.com\u002FLuxDL\u002FLux.jl\u002Fdiscussions)\n[![Latest Docs](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002Fdocs-latest-blue.svg)](http:\u002F\u002Flux.csail.mit.edu\u002Fdev\u002F)\n[![Stable Docs](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002Fdocs-stable-blue.svg)](http:\u002F\u002Flux.csail.mit.edu\u002Fstable\u002F)\n\n[![CI](https:\u002F\u002Fgithub.com\u002FLuxDL\u002FLux.jl\u002Factions\u002Fworkflows\u002FCI.yml\u002Fbadge.svg?branch=main)](https:\u002F\u002Fgithub.com\u002FLuxDL\u002FLux.jl\u002Factions\u002Fworkflows\u002FCI.yml)\n[![CI 
(pre-release)](\u003Chttps:\u002F\u002Fimg.shields.io\u002Fgithub\u002Factions\u002Fworkflow\u002Fstatus\u002FLuxDL\u002FLux.jl\u002FCIPreRelease.yml?branch=main&label=CI%20(pre-release)&logo=github>)](https:\u002F\u002Fgithub.com\u002FLuxDL\u002FLux.jl\u002Factions\u002Fworkflows\u002FCIPreRelease.yml)\n[![Build status](https:\u002F\u002Fimg.shields.io\u002Fbuildkite\u002Fba1f9622add5978c2d7b194563fd9327113c9c21e5734be20e\u002Fmain.svg?label=gpu&branch=main&logo=buildkite)](https:\u002F\u002Fbuildkite.com\u002Fjulialang\u002Flux-dot-jl)\n[![codecov](https:\u002F\u002Fcodecov.io\u002Fgh\u002FLuxDL\u002FLux.jl\u002Fbranch\u002Fmain\u002Fgraph\u002Fbadge.svg?token=IMqBM1e3hz)](https:\u002F\u002Fcodecov.io\u002Fgh\u002FLuxDL\u002FLux.jl)\n\u003C!-- [![Benchmarks](https:\u002F\u002Fgithub.com\u002FLuxDL\u002FLux.jl\u002Factions\u002Fworkflows\u002FBenchmark.yml\u002Fbadge.svg?branch=main)](https:\u002F\u002Flux.csail.mit.edu\u002Fbenchmarks\u002F) -->\n\n[![Downloads](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002Fdynamic\u002Fjson?url=http%3A%2F%2Fjuliapkgstats.com%2Fapi%2Fv1%2Fmonthly_downloads%2FLux&query=total_requests&suffix=%2Fmonth&label=Downloads)](https:\u002F\u002Fjuliapkgstats.com\u002Fpkg\u002FLux)\n[![Downloads](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002Fdynamic\u002Fjson?url=http%3A%2F%2Fjuliapkgstats.com%2Fapi%2Fv1%2Ftotal_downloads%2FLux&query=total_requests&&label=Total%20Downloads)](https:\u002F\u002Fjuliapkgstats.com\u002Fpkg\u002FLux)\n\n[![JET Testing](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F%F0%9F%9B%A9%EF%B8%8F_tested_with-JET.jl-233f9a)](https:\u002F\u002Fgithub.com\u002Faviatesk\u002FJET.jl)\n[![Aqua QA](https:\u002F\u002Fraw.githubusercontent.com\u002FJuliaTesting\u002FAqua.jl\u002Fmaster\u002Fbadge.svg)](https:\u002F\u002Fgithub.com\u002FJuliaTesting\u002FAqua.jl)\n[![ColPrac: Contributor's Guide on Collaborative Practices for Community Packages](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FColPrac-Contributor's%20Guide-blueviolet)](https:\u002F\u002Fgithub.com\u002FSciML\u002FColPrac)\n[![Code Style: Blue](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002Fcode%20style-blue-4495d1.svg)](https:\u002F\u002Fgithub.com\u002FJuliaDiff\u002FBlueStyle)\n\n\u003C\u002Fdiv>\n\n\u003Cdiv align=\"center\">\n    \u003Ch2>Elegant & Performant Deep Learning in JuliaLang\u003C\u002Fh2>\n    \u003Ch3>Model with the elegance of Julia, and the performance of XLA.\u003C\u002Fh3>\n\u003C\u002Fdiv>\n\n## 💻 Installation\n\n```julia\nimport Pkg\nPkg.add(\"Lux\")\n```\n\n> [!TIP]\n> To use Lux online, use [Google Colab](https:\u002F\u002Fcolab.research.google.com\u002F). 
The Julia Runtime comes pre-installed with Lux and Reactant!\n\n\u003Cdiv align=\"center\">\n\n| **Packages**                                           | **Stable Version**                                             | **Monthly Downloads**                                                 | **Total Downloads**                                                         | **Build Status**                                                                                                                                |\n| :----------------------------------------------------- | :------------------------------------------------------------- | :-------------------------------------------------------------------- | :-------------------------------------------------------------------------- | :---------------------------------------------------------------------------------------------------------------------------------------------- |\n| 📦 [Lux.jl](.\u002Fsrc)                                     | [![][lux-version]][lux-juliahub]                               | [![][downloads-lux]][downloads-lux-url]                               | [![][total-downloads-lux]][downloads-lux-url]                               | [![][gh-actions-lux]][gh-actions-lux-url] [![][gh-actions-lux-prerelease]][gh-actions-lux-prerelease-url] [![][buildkite-badge]][buildkite-url] |\n| └ 📦 [LuxLib.jl](.\u002Flib\u002FLuxLib)                         | [![][luxlib-version]][luxlib-juliahub]                         | [![][downloads-luxlib]][downloads-luxlib-url]                         | [![][total-downloads-luxlib]][downloads-luxlib-url]                         | [![][gh-actions-luxlib]][gh-actions-luxlib-url]                                                                                                 |\n| └ 📦 [LuxCore.jl](.\u002Flib\u002FLuxCore)                       | [![][luxcore-version]][luxcore-juliahub]                       | [![][downloads-luxcore]][downloads-luxcore-url]                       | [![][total-downloads-luxcore]][downloads-luxcore-url]                       | [![][gh-actions-luxcore]][gh-actions-luxcore-url]                                                                                               |\n| └ 📦 [MLDataDevices.jl](.\u002Flib\u002FMLDataDevices)           | [![][mldatadevices-version]][mldatadevices-juliahub]           | [![][downloads-mldatadevices]][downloads-mldatadevices-url]           | [![][total-downloads-mldatadevices]][downloads-mldatadevices-url]           | [![][gh-actions-mldatadevices]][gh-actions-mldatadevices-url]                                                                                   |\n| └ 📦 [WeightInitializers.jl](.\u002Flib\u002FWeightInitializers) | [![][weightinitializers-version]][weightinitializers-juliahub] | [![][downloads-weightinitializers]][downloads-weightinitializers-url] | [![][total-downloads-weightinitializers]][downloads-weightinitializers-url] | [![][gh-actions-weightinitializers]][gh-actions-weightinitializers-url]                                                                         |\n| └ 📦 [LuxTestUtils.jl](.\u002Flib\u002FLuxTestUtils)             | [![][luxtestutils-version]][luxtestutils-juliahub]             | [![][downloads-luxtestutils]][downloads-luxtestutils-url]             | [![][total-downloads-luxtestutils]][downloads-luxtestutils-url]             | [![][gh-actions-luxtestutils]][gh-actions-luxtestutils-url]                                                                                     |\n| └ 📦 
[LuxCUDA.jl](.\u002Flib\u002FLuxCUDA)                       | [![][luxcuda-version]][luxcuda-juliahub]                       | [![][downloads-luxcuda]][downloads-luxcuda-url]                       | [![][total-downloads-luxcuda]][downloads-luxcuda-url]                       | [![][gh-actions-luxcuda]][gh-actions-luxcuda-url]                                                                                               |\n\n\u003C\u002Fdiv>\n\n\u003C!-- VARIABLES -->\n\n\u003C!-- Package -->\n\n[lux-version]: https:\u002F\u002Fjuliahub.com\u002Fdocs\u002FGeneral\u002FLux\u002Fstable\u002Fversion.svg?color=blue\n[luxlib-version]: https:\u002F\u002Fjuliahub.com\u002Fdocs\u002FGeneral\u002FLuxLib\u002Fstable\u002Fversion.svg?color=blue\n[luxcore-version]: https:\u002F\u002Fjuliahub.com\u002Fdocs\u002FGeneral\u002FLuxCore\u002Fstable\u002Fversion.svg?color=blue\n[mldatadevices-version]: https:\u002F\u002Fjuliahub.com\u002Fdocs\u002FGeneral\u002FMLDataDevices\u002Fstable\u002Fversion.svg?color=blue\n[weightinitializers-version]: https:\u002F\u002Fjuliahub.com\u002Fdocs\u002FGeneral\u002FWeightInitializers\u002Fstable\u002Fversion.svg?color=blue\n[luxtestutils-version]: https:\u002F\u002Fjuliahub.com\u002Fdocs\u002FGeneral\u002FLuxTestUtils\u002Fstable\u002Fversion.svg?color=blue\n[luxcuda-version]: https:\u002F\u002Fjuliahub.com\u002Fdocs\u002FGeneral\u002FLuxCUDA\u002Fstable\u002Fversion.svg?color=blue\n[lux-juliahub]: https:\u002F\u002Fjuliahub.com\u002Fui\u002FPackages\u002FGeneral\u002FLux\n[luxlib-juliahub]: https:\u002F\u002Fjuliahub.com\u002Fui\u002FPackages\u002FGeneral\u002FLuxLib\n[luxcore-juliahub]: https:\u002F\u002Fjuliahub.com\u002Fui\u002FPackages\u002FGeneral\u002FLuxCore\n[mldatadevices-juliahub]: https:\u002F\u002Fjuliahub.com\u002Fui\u002FPackages\u002FGeneral\u002FMLDataDevices\n[weightinitializers-juliahub]: https:\u002F\u002Fjuliahub.com\u002Fui\u002FPackages\u002FGeneral\u002FWeightInitializers\n[luxtestutils-juliahub]: https:\u002F\u002Fjuliahub.com\u002Fui\u002FPackages\u002FGeneral\u002FLuxTestUtils\n[luxcuda-juliahub]: https:\u002F\u002Fjuliahub.com\u002Fui\u002FPackages\u002FGeneral\u002FLuxCUDA\n\n\u003C!-- Documentation -->\n\n[docr-img]: https:\u002F\u002Fimg.shields.io\u002Fbadge\u002Fdocs-stable-blue.svg\n[docd-img]: https:\u002F\u002Fimg.shields.io\u002Fbadge\u002Fdocs-dev-blue.svg\n[docr-url]: https:\u002F\u002Flux.csail.mit.edu\u002Fstable\u002F\n[docd-url]: https:\u002F\u002Flux.csail.mit.edu\u002Fdev\u002F\n\n\u003C!-- Buildkite -->\n\n[buildkite-badge]: https:\u002F\u002Fimg.shields.io\u002Fbuildkite\u002Fba1f9622add5978c2d7b194563fd9327113c9c21e5734be20e\u002Fmain.svg?label=gpu&branch=main&logo=buildkite]\n\n[buildkite-url]: https:\u002F\u002Fbuildkite.com\u002Fjulialang\u002Flux-dot-jl\u002Fbuilds?branch=main\n\n\u003C!-- CI -->\n\n[gh-actions-lux]: https:\u002F\u002Fgithub.com\u002FLuxDL\u002FLux.jl\u002Fworkflows\u002FCI%20(Lux)\u002Fbadge.svg\n[gh-actions-lux-prerelease]: https:\u002F\u002Fgithub.com\u002FLuxDL\u002FLux.jl\u002Fworkflows\u002FCIPreRelease%20(Lux)\u002Fbadge.svg\n[gh-actions-luxlib]: https:\u002F\u002Fgithub.com\u002FLuxDL\u002FLux.jl\u002Fworkflows\u002FCI%20(LuxLib)\u002Fbadge.svg\n[gh-actions-luxcore]: https:\u002F\u002Fgithub.com\u002FLuxDL\u002FLux.jl\u002Fworkflows\u002FCI%20(LuxCore)\u002Fbadge.svg\n[gh-actions-mldatadevices]: https:\u002F\u002Fgithub.com\u002FLuxDL\u002FLux.jl\u002Fworkflows\u002FCI%20(MLDataDevices)\u002Fbadge.svg\n[gh-actions-weightinitializers]: 
https:\u002F\u002Fgithub.com\u002FLuxDL\u002FLux.jl\u002Fworkflows\u002FCI%20(WeightInitializers)\u002Fbadge.svg\n[gh-actions-luxtestutils]: https:\u002F\u002Fgithub.com\u002FLuxDL\u002FLux.jl\u002Fworkflows\u002FCI%20(LuxTestUtils)\u002Fbadge.svg\n[gh-actions-luxcuda]: https:\u002F\u002Fgithub.com\u002FLuxDL\u002FLux.jl\u002Fworkflows\u002FCI%20(LuxCUDA)\u002Fbadge.svg\n[gh-actions-lux-url]: https:\u002F\u002Fgithub.com\u002FLuxDL\u002FLux.jl\u002Factions\u002Fworkflows\u002FCI.yml\n[gh-actions-lux-prerelease-url]: https:\u002F\u002Fgithub.com\u002FLuxDL\u002FLux.jl\u002Factions\u002Fworkflows\u002FCIPreRelease.yml\n[gh-actions-luxlib-url]: https:\u002F\u002Fgithub.com\u002FLuxDL\u002FLux.jl\u002Factions\u002Fworkflows\u002FCI_LuxLib.yml\n[gh-actions-luxcore-url]: https:\u002F\u002Fgithub.com\u002FLuxDL\u002FLux.jl\u002Factions\u002Fworkflows\u002FCI_LuxCore.yml\n[gh-actions-mldatadevices-url]: https:\u002F\u002Fgithub.com\u002FLuxDL\u002FLux.jl\u002Factions\u002Fworkflows\u002FCI_MLDataDevices.yml\n[gh-actions-weightinitializers-url]: https:\u002F\u002Fgithub.com\u002FLuxDL\u002FLux.jl\u002Factions\u002Fworkflows\u002FCI_WeightInitializers.yml\n[gh-actions-luxtestutils-url]: https:\u002F\u002Fgithub.com\u002FLuxDL\u002FLux.jl\u002Factions\u002Fworkflows\u002FCI_LuxTestUtils.yml\n[gh-actions-luxcuda-url]: https:\u002F\u002Fgithub.com\u002FLuxDL\u002FLux.jl\u002Factions\u002Fworkflows\u002FCI_LuxCUDA.yml\n\n\u003C!-- Downloads -->\n\n[total-downloads-lux]: https:\u002F\u002Fimg.shields.io\u002Fbadge\u002Fdynamic\u002Fjson?url=http%3A%2F%2Fjuliapkgstats.com%2Fapi%2Fv1%2Ftotal_downloads%2FLux&query=total_requests&label=Downloads\n[total-downloads-luxlib]: https:\u002F\u002Fimg.shields.io\u002Fbadge\u002Fdynamic\u002Fjson?url=http%3A%2F%2Fjuliapkgstats.com%2Fapi%2Fv1%2Ftotal_downloads%2FLuxLib&query=total_requests&label=Downloads\n[total-downloads-luxcore]: https:\u002F\u002Fimg.shields.io\u002Fbadge\u002Fdynamic\u002Fjson?url=http%3A%2F%2Fjuliapkgstats.com%2Fapi%2Fv1%2Ftotal_downloads%2FLuxCore&query=total_requests&label=Downloads\n[total-downloads-mldatadevices]: https:\u002F\u002Fimg.shields.io\u002Fbadge\u002Fdynamic\u002Fjson?url=http%3A%2F%2Fjuliapkgstats.com%2Fapi%2Fv1%2Ftotal_downloads%2FMLDataDevices&query=total_requests&label=Downloads\n[total-downloads-weightinitializers]: https:\u002F\u002Fimg.shields.io\u002Fbadge\u002Fdynamic\u002Fjson?url=http%3A%2F%2Fjuliapkgstats.com%2Fapi%2Fv1%2Ftotal_downloads%2FWeightInitializers&query=total_requests&label=Downloads\n[total-downloads-luxtestutils]: https:\u002F\u002Fimg.shields.io\u002Fbadge\u002Fdynamic\u002Fjson?url=http%3A%2F%2Fjuliapkgstats.com%2Fapi%2Fv1%2Ftotal_downloads%2FLuxTestUtils&query=total_requests&label=Downloads\n[total-downloads-luxcuda]: https:\u002F\u002Fimg.shields.io\u002Fbadge\u002Fdynamic\u002Fjson?url=http%3A%2F%2Fjuliapkgstats.com%2Fapi%2Fv1%2Ftotal_downloads%2FLuxCUDA&query=total_requests&label=Downloads\n[downloads-lux]: https:\u002F\u002Fimg.shields.io\u002Fbadge\u002Fdynamic\u002Fjson?url=http%3A%2F%2Fjuliapkgstats.com%2Fapi%2Fv1%2Fmonthly_downloads%2FLux&query=total_requests&suffix=%2Fmonth&label=Downloads\n[downloads-luxlib]: https:\u002F\u002Fimg.shields.io\u002Fbadge\u002Fdynamic\u002Fjson?url=http%3A%2F%2Fjuliapkgstats.com%2Fapi%2Fv1%2Fmonthly_downloads%2FLuxLib&query=total_requests&suffix=%2Fmonth&label=Downloads\n[downloads-luxcore]: 
https:\u002F\u002Fimg.shields.io\u002Fbadge\u002Fdynamic\u002Fjson?url=http%3A%2F%2Fjuliapkgstats.com%2Fapi%2Fv1%2Fmonthly_downloads%2FLuxCore&query=total_requests&suffix=%2Fmonth&label=Downloads\n[downloads-mldatadevices]: https:\u002F\u002Fimg.shields.io\u002Fbadge\u002Fdynamic\u002Fjson?url=http%3A%2F%2Fjuliapkgstats.com%2Fapi%2Fv1%2Fmonthly_downloads%2FMLDataDevices&query=total_requests&suffix=%2Fmonth&label=Downloads\n[downloads-weightinitializers]: https:\u002F\u002Fimg.shields.io\u002Fbadge\u002Fdynamic\u002Fjson?url=http%3A%2F%2Fjuliapkgstats.com%2Fapi%2Fv1%2Fmonthly_downloads%2FWeightInitializers&query=total_requests&suffix=%2Fmonth&label=Downloads\n[downloads-luxtestutils]: https:\u002F\u002Fimg.shields.io\u002Fbadge\u002Fdynamic\u002Fjson?url=http%3A%2F%2Fjuliapkgstats.com%2Fapi%2Fv1%2Fmonthly_downloads%2FLuxTestUtils&query=total_requests&suffix=%2Fmonth&label=Downloads\n[downloads-luxcuda]: https:\u002F\u002Fimg.shields.io\u002Fbadge\u002Fdynamic\u002Fjson?url=http%3A%2F%2Fjuliapkgstats.com%2Fapi%2Fv1%2Fmonthly_downloads%2FLuxCUDA&query=total_requests&suffix=%2Fmonth&label=Downloads\n[downloads-lux-url]: http:\u002F\u002Fjuliapkgstats.com\u002Fpkg\u002FLux\n[downloads-luxlib-url]: http:\u002F\u002Fjuliapkgstats.com\u002Fpkg\u002FLuxLib\n[downloads-luxcore-url]: http:\u002F\u002Fjuliapkgstats.com\u002Fpkg\u002FLuxCore\n[downloads-mldatadevices-url]: http:\u002F\u002Fjuliapkgstats.com\u002Fpkg\u002FMLDataDevices\n[downloads-weightinitializers-url]: http:\u002F\u002Fjuliapkgstats.com\u002Fpkg\u002FWeightInitializers\n[downloads-luxtestutils-url]: http:\u002F\u002Fjuliapkgstats.com\u002Fpkg\u002FLuxTestUtils\n[downloads-luxcuda-url]: http:\u002F\u002Fjuliapkgstats.com\u002Fpkg\u002FLuxCUDA\n\n## 🚀 Benchmarks\n\nCurrently, benchmarks are scattered across a few places:\n\n  1. For comparison with other Julia packages like CUDA.jl, take a look\n     at [Lux.jl\u002Fperf](.\u002Fperf\u002FREADME.md).\n  2. \u003Chttps:\u002F\u002Fenzymead.github.io\u002FEnzyme-JAX\u002Fbenchmarks\u002F> highlights\n     performance of EnzymeJAX (backend for Reactant.jl) against JAX.\n  3. 
\u003Chttps:\u002F\u002Fenzymead.github.io\u002FReactant.jl\u002Fbenchmarks\u002F> highlights\n     performance of Reactant.jl against default XLA and base Julia\n     compilation.\n\n## 🤸 Quickstart\n\n### Reactant & Enzyme\n\n```julia\nusing Lux, Random, Optimisers, Reactant, Enzyme\n\nrng = Random.default_rng()\nRandom.seed!(rng, 0)\n\nmodel = Chain(Dense(128, 256, tanh), Chain(Dense(256, 1, tanh), Dense(1, 10)))\n\ndev = reactant_device()\n\nps, st = Lux.setup(rng, model) |> dev\n\nx = rand(rng, Float32, 128, 2) |> dev\n\n# We need to compile the model before we can use it.\nmodel_forward = @compile model(x, ps, Lux.testmode(st))\nmodel_forward(x, ps, Lux.testmode(st))\n\n# Gradients can be computed using Enzyme\n@jit Enzyme.gradient(Reverse, sum ∘ first ∘ Lux.apply, Const(model), x, ps, Const(st))\n\n# All of this can be automated using the TrainState API\ntrain_state = Training.TrainState(model, ps, st, Adam(0.001f0))\n\ngs, loss, stats, train_state = Training.single_train_step!(\n    AutoEnzyme(), MSELoss(),\n    (x, dev(rand(rng, Float32, 10, 2))), train_state\n)\n```\n\n### Native Julia & Zygote\n\n```julia\nusing Lux, Random, Optimisers, Zygote\n# using LuxCUDA, AMDGPU, Metal, oneAPI # Optional packages for GPU support\n\n# Seeding\nrng = Random.default_rng()\nRandom.seed!(rng, 0)\n\n# Construct the layer\nmodel = Chain(Dense(128, 256, tanh), Chain(Dense(256, 1, tanh), Dense(1, 10)))\n\n# Get the device determined by Lux\ndev = gpu_device()\n\n# Parameter and State Variables\nps, st = Lux.setup(rng, model) |> dev\n\n# Dummy Input\nx = rand(rng, Float32, 128, 2) |> dev\n\n# Run the model\ny, st = Lux.apply(model, x, ps, st)\n\n# Gradients\n## First construct a TrainState\ntrain_state = Lux.Training.TrainState(model, ps, st, Adam(0.0001f0))\n\n## We can compute the gradients using Training.compute_gradients\ngs, loss, stats, train_state = Lux.Training.compute_gradients(AutoZygote(), MSELoss(),\n    (x, dev(rand(rng, Float32, 10, 2))), train_state)\n\n## Optimization\ntrain_state = Training.apply_gradients!(train_state, gs) # or Training.apply_gradients (no `!` at the end)\n\n# Both these steps can be combined into a single call\ngs, loss, stats, train_state = Training.single_train_step!(AutoZygote(), MSELoss(),\n    (x, dev(rand(rng, Float32, 10, 2))), train_state)\n```\n\n## 📚 Examples\n\nLook in the [examples](\u002Fexamples\u002F) directory for self-contained usage examples. The [documentation](https:\u002F\u002Flux.csail.mit.edu) has examples sorted into proper categories.\n\n## 🆘 Getting Help\n\nFor usage related questions, please use [Github Discussions](https:\u002F\u002Fgithub.com\u002Forgs\u002FLuxDL\u002Fdiscussions) which allows questions and answers to be indexed. 
To report bugs, use [GitHub issues](https:\u002F\u002Fgithub.com\u002FLuxDL\u002FLux.jl\u002Fissues) or, even better, send in a [pull request](https:\u002F\u002Fgithub.com\u002FLuxDL\u002FLux.jl\u002Fpulls).\n\n## 🧑‍🔬 Citation\n\nIf you found this library to be useful in academic work, then please cite:\n\n```bibtex\n@software{pal2023lux,\n  author    = {Pal, Avik},\n  title     = {{Lux: Explicit Parameterization of Deep Neural Networks in Julia}},\n  month     = apr,\n  year      = 2023,\n  note      = {If you use this software, please cite it as below.},\n  publisher = {Zenodo},\n  version   = {v1.4.2},\n  doi       = {10.5281\u002Fzenodo.7808903},\n  url       = {https:\u002F\u002Fdoi.org\u002F10.5281\u002Fzenodo.7808903},\n  swhid     = {swh:1:dir:1a304ec3243961314a1cc7c1481a31c4386c4a34;origin=https:\u002F\u002Fdoi.org\u002F10.5281\u002Fzenodo.7808903;visit=swh:1:snp:e2bbe43b14bde47c4ddf7e637eb7fc7bd10db8c7;anchor=swh:1:rel:2c0c0ff927e7bfe8fc8bc43fd553ab392a6eb403;path=\u002F}\n}\n\n@thesis{pal2023efficient,\n  title     = {{On Efficient Training \\& Inference of Neural Differential Equations}},\n  author    = {Pal, Avik},\n  year      = {2023},\n  school    = {Massachusetts Institute of Technology}\n}\n```\n\nAlso consider starring [our GitHub repo](https:\u002F\u002Fgithub.com\u002FLuxDL\u002FLux.jl\u002F).\n\n## 🧑‍💻 Contributing\n\nThis section is somewhat incomplete. You can contribute by helping to finish this\nsection 😜.\n\n### 💎 Formatting (JuliaFormatter)\n\n> [!NOTE]\n> Pin JuliaFormatter to v1 until upstream issues with v2 are resolved.\n\n```julia\nusing JuliaFormatter\nformat(\".\")\n```\n\n### 🧪 Testing\n\nThe full test suite of `Lux.jl` takes a long time; here's how to test a portion of the code.\n\nTests are organized by directories, where each directory contains test files with `@testset`\nblocks. For example, tests for `SkipConnection` are in `test\u002Fcore_layers\u002Fcontainers_tests.jl`.\n\n#### Running a Specific Test File\n\nThe easiest way to run a specific test is to directly activate the test directory and\ninclude the test file:\n\n```julia\n# From the Lux.jl root directory\nusing Pkg\nPkg.activate(\"test\")\n\n# Run a specific test file\ninclude(\"test\u002Fcore_layers\u002Fcontainers_tests.jl\")\n```\n\nThis approach allows you to quickly iterate on specific tests without running the entire\ntest suite.\n\nSee [ParallelTestRunner.jl](https:\u002F\u002Fgithub.com\u002FJuliaTesting\u002FParallelTestRunner.jl) for\ndetails on executing specific groups of tests.\n\n#### Running Test Groups via CI\n\nTo run a specific group of tests via the test runner, you can pass the directory name as a\npositional argument:\n\n```shell\njulia --project -e 'using Pkg; Pkg.test(test_args=[\"core_layers\"])'\n```\n\n#### Running All Tests\n\nTo run the full test suite:\n\n```shell\njulia --project -e 'using Pkg; Pkg.test()'\n```\n\n### 📖 Documentation\n\nLux builds a bunch of tutorials as part of its documentation. This can be time-consuming and\nrequires a lot of compute. To speed up the build, you can set\n`LUX_DOCS_DRAFT_BUILD=true`.\n\n```shell\nLUX_DOCS_DRAFT_BUILD=true julia --threads=auto --startup=no --project=docs docs\u002Fmake.jl\n```\n\nWhen writing tutorials (anything under `examples\u002F`), include the tutorial in\n`docs\u002Ftutorials.jl`. If the tutorial is time-consuming, set `should_run` to `false`.\n\nAdditionally, for a new page to be included in the navigation and sidebar, these need to be\nadded to `docs\u002Fsrc\u002F.vitepress\u002Fconfig.mts`. 
Specifically these need to be added under\n`sidebar` and\u002For `nav` based on the type of page.\n\nTo use LiveServer to preview the docs locally, checkout\n[DocumenterVitepress.jl](https:\u002F\u002Fluxdl.github.io\u002FDocumenterVitepress.jl\u002Fdev\u002Fmanual\u002Fget_started#Preview-Documentation-Development-Instantly)\ndocumentation.\n","\u003Cp align=\"center\">\n    \u003Cimg width=\"400px\" src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FLuxDL_Lux.jl_readme_0ba8dfea3256.png\"\u002F>\n    \u003Cimg width=\"400px\" src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FLuxDL_Lux.jl_readme_0dda33202156.png\"\u002F>\n\u003C\u002Fp>\n\n\u003Cdiv align=\"center\">\n\n[![GitHub Discussions](https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Fdiscussions\u002FLuxDL\u002FLux.jl?color=white&logo=github&label=Discussions)](https:\u002F\u002Fgithub.com\u002FLuxDL\u002FLux.jl\u002Fdiscussions)\n[![Latest Docs](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002Fdocs-latest-blue.svg)](http:\u002F\u002Flux.csail.mit.edu\u002Fdev\u002F)\n[![Stable Docs](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002Fdocs-stable-blue.svg)](http:\u002F\u002Flux.csail.mit.edu\u002Fstable\u002F)\n\n[![CI](https:\u002F\u002Fgithub.com\u002FLuxDL\u002FLux.jl\u002Factions\u002Fworkflows\u002FCI.yml\u002Fbadge.svg?branch=main)](https:\u002F\u002Fgithub.com\u002FLuxDL\u002FLux.jl\u002Factions\u002Fworkflows\u002FCI.yml)\n[![CI (pre-release)](\u003Chttps:\u002F\u002Fimg.shields.io\u002Fgithub\u002Factions\u002Fworkflow\u002Fstatus\u002FLuxDL\u002FLux.jl\u002FCIPreRelease.yml?branch=main&label=CI%20(pre-release)&logo=github>)](https:\u002F\u002Fgithub.com\u002FLuxDL\u002FLux.jl\u002Factions\u002Fworkflows\u002FCIPreRelease.yml)\n[![Build status](https:\u002F\u002Fimg.shields.io\u002Fbuildkite\u002Fba1f9622add5978c2d7b194563fd9327113c9c21e5734be20e\u002Fmain.svg?label=gpu&branch=main&logo=buildkite)](https:\u002F\u002Fbuildkite.com\u002Fjulialang\u002Flux-dot-jl)\n[![codecov](https:\u002F\u002Fcodecov.io\u002Fgh\u002FLuxDL\u002FLux.jl\u002Fbranch\u002Fmain\u002Fgraph\u002Fbadge.svg?token=IMqBM1e3hz)](https:\u002F\u002Fcodecov.io\u002Fgh\u002FLuxDL\u002FLux.jl)\n\u003C!-- [![Benchmarks](https:\u002F\u002Fgithub.com\u002FLuxDL\u002FLux.jl\u002Factions\u002Fworkflows\u002FBenchmark.yml\u002Fbadge.svg?branch=main)](https:\u002F\u002Flux.csail.mit.edu\u002Fbenchmarks\u002F) -->\n\n[![Downloads](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002Fdynamic\u002Fjson?url=http%3A%2F%2Fjuliapkgstats.com%2Fapi%2Fv1%2Fmonthly_downloads%2FLux&query=total_requests&suffix=%2Fmonth&label=Downloads)](https:\u002F\u002Fjuliapkgstats.com\u002Fpkg\u002FLux)\n[![Downloads](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002Fdynamic\u002Fjson?url=http%3A%2F%2Fjuliapkgstats.com%2Fapi%2Fv1%2Ftotal_downloads%2FLux&query=total_requests&&label=Total%20Downloads)](https:\u002F\u002Fjuliapkgstats.com\u002Fpkg\u002FLux)\n\n[![JET Testing](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F%F0%9F%9B%A9%EF%B8%8F_tested_with-JET.jl-233f9a)](https:\u002F\u002Fgithub.com\u002Faviatesk\u002FJET.jl)\n[![Aqua QA](https:\u002F\u002Fraw.githubusercontent.com\u002FJuliaTesting\u002FAqua.jl\u002Fmaster\u002Fbadge.svg)](https:\u002F\u002Fgithub.com\u002FJuliaTesting\u002FAqua.jl)\n[![ColPrac: Contributor's Guide on Collaborative Practices for Community Packages](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FColPrac-Contributor's%20Guide-blueviolet)](https:\u002F\u002Fgithub.com\u002FSciML\u002FColPrac)\n[![Code Style: 
Blue](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002Fcode%20 style-blue-4495d1.svg)](https:\u002F\u002Fgithub.com\u002FJuliaDiff\u002FBlueStyle)\n\n\u003C\u002Fdiv>\n\n\u003Cdiv align=\"center\">\n    \u003Ch2>优雅且高性能的 Julia 语言深度学习框架\u003C\u002Fh2>\n    \u003Ch3>以 Julia 的优雅语法构建模型，同时享受 XLA 带来的卓越性能。\u003C\u002Fh3>\n\u003C\u002Fdiv>\n\n## 💻 安装\n\n```julia\nimport Pkg\nPkg.add(\"Lux\")\n```\n\n> [!TIP]\n> 如果想在线使用 Lux，可以使用 [Google Colab](https:\u002F\u002Fcolab.research.google.com\u002F)。Colab 已预装了 Julia 运行时环境以及 Lux 和 Reactant！\n\n\u003Cdiv align=\"center\">\n\n| **软件包**                                           | **稳定版本**                                             | **月下载量**                                                 | **总下载量**                                                         | **构建状态**                                                                                                                                |\n| :----------------------------------------------------- | :------------------------------------------------------------- | :-------------------------------------------------------------------- | :-------------------------------------------------------------------------- | :---------------------------------------------------------------------------------------------------------------------------------------------- |\n| 📦 [Lux.jl](.\u002Fsrc)                                     | [![][lux-version]][lux-juliahub]                               | [![][downloads-lux]][downloads-lux-url]                               | [![][total-downloads-lux]][downloads-lux-url]                               | [![][gh-actions-lux]][gh-actions-lux-url] [![][gh-actions-lux-prerelease]][gh-actions-lux-prerelease-url] [![][buildkite-badge]][buildkite-url] |\n| └ 📦 [LuxLib.jl](.\u002Flib\u002FLuxLib)                         | [![][luxlib-version]][luxlib-juliahub]                         | [![][downloads-luxlib]][downloads-luxlib-url]                         | [![][total-downloads-luxlib]][downloads-luxlib-url]                         | [![][gh-actions-luxlib]][gh-actions-luxlib-url]                                                                                                 |\n| └ 📦 [LuxCore.jl](.\u002Flib\u002FLuxCore)                       | [![][luxcore-version]][luxcore-juliahub]                       | [![][downloads-luxcore]][downloads-luxcore-url]                       | [![][total-downloads-luxcore]][downloads-luxcore-url]                       | [![][gh-actions-luxcore]][gh-actions-luxcore-url]                                                                                               |\n| └ 📦 [MLDataDevices.jl](.\u002Flib\u002FMLDataDevices)           | [![][mldatadevices-version]][mldatadevices-juliahub]           | [![][downloads-mldatadevices]][downloads-mldatadevices-url]           | [![][total-downloads-mldatadevices]][downloads-mldatadevices-url]           | [![][gh-actions-mldatadevices]][gh-actions-mldatadevices-url]                                                                                   |\n| └ 📦 [WeightInitializers.jl](.\u002Flib\u002FWeightInitializers) | [![][weightinitializers-version]][weightinitializers-juliahub] | [![][downloads-weightinitializers]][downloads-weightinitializers-url] | [![][total-downloads-weightinitializers]][downloads-weightinitializers-url] | [![][gh-actions-weightinitializers]][gh-actions-weightinitializers-url]                                                                         |\n| └ 📦 
[LuxTestUtils.jl](.\u002Flib\u002FLuxTestUtils)             | [![][luxtestutils-version]][luxtestutils-juliahub]             | [![][downloads-luxtestutils]][downloads-luxtestutils-url]             | [![][total-downloads-luxtestutils]][downloads-luxtestutils-url]             | [![][gh-actions-luxtestutils]][gh-actions-luxtestutils-url]                                                                                     |\n| └ 📦 [LuxCUDA.jl](.\u002Flib\u002FLuxCUDA)                       | [![][luxcuda-version]][luxcuda-juliahub]                       | [![][downloads-luxcuda]][downloads-luxcuda-url]                       | [![][total-downloads-luxcuda]][downloads-luxcuda-url]                       | [![][gh-actions-luxcuda]][gh-actions-luxcuda-url]                                                                                               |\n\n\u003C\u002Fdiv>\n\n\u003C!-- VARIABLES -->\n\n\u003C!-- Package -->\n\n[lux版本]: https:\u002F\u002Fjuliahub.com\u002Fdocs\u002FGeneral\u002FLux\u002Fstable\u002Fversion.svg?color=blue\n[luxlib版本]: https:\u002F\u002Fjuliahub.com\u002Fdocs\u002FGeneral\u002FLuxLib\u002Fstable\u002Fversion.svg?color=blue\n[luxcore版本]: https:\u002F\u002Fjuliahub.com\u002Fdocs\u002FGeneral\u002FLuxCore\u002Fstable\u002Fversion.svg?color=blue\n[mldatadevices版本]: https:\u002F\u002Fjuliahub.com\u002Fdocs\u002FGeneral\u002FMLDataDevices\u002Fstable\u002Fversion.svg?color=blue\n[weightinitializers版本]: https:\u002F\u002Fjuliahub.com\u002Fdocs\u002FGeneral\u002FWeightInitializers\u002Fstable\u002Fversion.svg?color=blue\n[luxtestutils版本]: https:\u002F\u002Fjuliahub.com\u002Fdocs\u002FGeneral\u002FLuxTestUtils\u002Fstable\u002Fversion.svg?color=blue\n[luxcuda版本]: https:\u002F\u002Fjuliahub.com\u002Fdocs\u002FGeneral\u002FLuxCUDA\u002Fstable\u002Fversion.svg?color=blue\n[lux-juliahub]: https:\u002F\u002Fjuliahub.com\u002Fui\u002FPackages\u002FGeneral\u002FLux\n[luxlib-juliahub]: https:\u002F\u002Fjuliahub.com\u002Fui\u002FPackages\u002FGeneral\u002FLuxLib\n[luxcore-juliahub]: https:\u002F\u002Fjuliahub.com\u002Fui\u002FPackages\u002FGeneral\u002FLuxCore\n[mldatadevices-juliahub]: https:\u002F\u002Fjuliahub.com\u002Fui\u002FPackages\u002FGeneral\u002FMLDataDevices\n[weightinitializers-juliahub]: https:\u002F\u002Fjuliahub.com\u002Fui\u002FPackages\u002FGeneral\u002FWeightInitializers\n[luxtestutils-juliahub]: https:\u002F\u002Fjuliahub.com\u002Fui\u002FPackages\u002FGeneral\u002FLuxTestUtils\n[luxcuda-juliahub]: https:\u002F\u002Fjuliahub.com\u002Fui\u002FPackages\u002FGeneral\u002FLuxCUDA\n\n\u003C!-- 文档 -->\n\n[docr-img]: https:\u002F\u002Fimg.shields.io\u002Fbadge\u002Fdocs-stable-blue.svg\n[docd-img]: https:\u002F\u002Fimg.shields.io\u002Fbadge\u002Fdocs-dev-blue.svg\n[docr-url]: https:\u002F\u002Flux.csail.mit.edu\u002Fstable\u002F\n[docd-url]: https:\u002F\u002Flux.csail.mit.edu\u002Fdev\u002F\n\n\u003C!-- Buildkite -->\n\n[buildkite-badge]: https:\u002F\u002Fimg.shields.io\u002Fbuildkite\u002Fba1f9622add5978c2d7b194563fd9327113c9c21e5734be20e\u002Fmain.svg?label=gpu&branch=main&logo=buildkite]\n\n[buildkite-url]: https:\u002F\u002Fbuildkite.com\u002Fjulialang\u002Flux-dot-jl\u002Fbuilds?branch=main\n\n\u003C!-- CI -->\n\n[gh-actions-lux]: https:\u002F\u002Fgithub.com\u002FLuxDL\u002FLux.jl\u002Fworkflows\u002FCI%20(Lux)\u002Fbadge.svg\n[gh-actions-lux-prerelease]: https:\u002F\u002Fgithub.com\u002FLuxDL\u002FLux.jl\u002Fworkflows\u002FCIPreRelease%20(Lux)\u002Fbadge.svg\n[gh-actions-luxlib]: 
https:\u002F\u002Fgithub.com\u002FLuxDL\u002FLux.jl\u002Fworkflows\u002FCI%20(LuxLib)\u002Fbadge.svg\n[gh-actions-luxcore]: https:\u002F\u002Fgithub.com\u002FLuxDL\u002FLux.jl\u002Fworkflows\u002FCI%20(LuxCore)\u002Fbadge.svg\n[gh-actions-mldatadevices]: https:\u002F\u002Fgithub.com\u002FLuxDL\u002FLux.jl\u002Fworkflows\u002FCI%20(MLDataDevices)\u002Fbadge.svg\n[gh-actions-weightinitializers]: https:\u002F\u002Fgithub.com\u002FLuxDL\u002FLux.jl\u002Fworkflows\u002FCI%20(WeightInitializers)\u002Fbadge.svg\n[gh-actions-luxtestutils]: https:\u002F\u002Fgithub.com\u002FLuxDL\u002FLux.jl\u002Fworkflows\u002FCI%20(LuxTestUtils)\u002Fbadge.svg\n[gh-actions-luxcuda]: https:\u002F\u002Fgithub.com\u002FLuxDL\u002FLux.jl\u002Fworkflows\u002FCI%20(LuxCUDA)\u002Fbadge.svg\n[gh-actions-lux-url]: https:\u002F\u002Fgithub.com\u002FLuxDL\u002FLux.jl\u002Factions\u002Fworkflows\u002FCI.yml\n[gh-actions-lux-prerelease-url]: https:\u002F\u002Fgithub.com\u002FLuxDL\u002FLux.jl\u002Factions\u002Fworkflows\u002FCIPreRelease.yml\n[gh-actions-luxlib-url]: https:\u002F\u002Fgithub.com\u002FLuxDL\u002FLux.jl\u002Factions\u002Fworkflows\u002FCI_LuxLib.yml\n[gh-actions-luxcore-url]: https:\u002F\u002Fgithub.com\u002FLuxDL\u002FLux.jl\u002Factions\u002Fworkflows\u002FCI_LuxCore.yml\n[gh-actions-mldatadevices-url]: https:\u002F\u002Fgithub.com\u002FLuxDL\u002FLux.jl\u002Factions\u002Fworkflows\u002FCI_MLDataDevices.yml\n[gh-actions-weightinitializers-url]: https:\u002F\u002Fgithub.com\u002FLuxDL\u002FLux.jl\u002Factions\u002Fworkflows\u002FCI_WeightInitializers.yml\n[gh-actions-luxtestutils-url]: https:\u002F\u002Fgithub.com\u002FLuxDL\u002FLux.jl\u002Factions\u002Fworkflows\u002FCI_LuxTestUtils.yml\n[gh-actions-luxcuda-url]: https:\u002F\u002Fgithub.com\u002FLuxDL\u002FLux.jl\u002Factions\u002Fworkflows\u002FCI_LuxCUDA.yml\n\n\u003C!-- 下载量 -->\n\n[总下载量-lux]: https:\u002F\u002Fimg.shields.io\u002Fbadge\u002Fdynamic\u002Fjson?url=http%3A%2F%2Fjuliapkgstats.com%2Fapi%2Fv1%2Ftotal_downloads%2FLux&query=total_requests&label=Downloads\n[总下载量-luxlib]: https:\u002F\u002Fimg.shields.io\u002Fbadge\u002Fdynamic\u002Fjson?url=http%3A%2F%2Fjuliapkgstats.com%2Fapi%2Fv1%2Ftotal_downloads%2FLuxLib&query=total_requests&label=Downloads\n[总下载量-luxcore]: https:\u002F\u002Fimg.shields.io\u002Fbadge\u002Fdynamic\u002Fjson?url=http%3A%2F%2Fjuliapkgstats.com%2Fapi%2Fv1%2Ftotal_downloads%2FLuxCore&query=total_requests&label=Downloads\n[总下载量-mldatadevices]: https:\u002F\u002Fimg.shields.io\u002Fbadge\u002Fdynamic\u002Fjson?url=http%3A%2F%2Fjuliapkgstats.com%2Fapi%2Fv1%2Ftotal_downloads%2FMLDataDevices&query=total_requests&label=Downloads\n[总下载量-weightinitializers]: https:\u002F\u002Fimg.shields.io\u002Fbadge\u002Fdynamic\u002Fjson?url=http%3A%2F%2Fjuliapkgstats.com%2Fapi%2Fv1%2Ftotal_downloads%2FWeightInitializers&query=total_requests&label=Downloads\n[总下载量-luxtestutils]: https:\u002F\u002Fimg.shields.io\u002Fbadge\u002Fdynamic\u002Fjson?url=http%3A%2F%2Fjuliapkgstats.com%2Fapi%2Fv1%2Ftotal_downloads%2FLuxTestUtils&query=total_requests&label=Downloads\n[总下载量-luxcuda]: https:\u002F\u002Fimg.shields.io\u002Fbadge\u002Fdynamic\u002Fjson?url=http%3A%2F%2Fjuliapkgstats.com%2Fapi%2Fv1%2Ftotal_downloads%2FLuxCUDA&query=total_requests&label=Downloads\n[月下载量-lux]: https:\u002F\u002Fimg.shields.io\u002Fbadge\u002Fdynamic\u002Fjson?url=http%3A%2F%2Fjuliapkgstats.com%2Fapi%2Fv1%2Fmonthly_downloads%2FLux&query=total_requests&suffix=%2Fmonth&label=Downloads\n[月下载量-luxlib]: 
https:\u002F\u002Fimg.shields.io\u002Fbadge\u002Fdynamic\u002Fjson?url=http%3A%2F%2Fjuliapkgstats.com%2Fapi%2Fv1%2Fmonthly_downloads%2FLuxLib&query=total_requests&suffix=%2Fmonth&label=Downloads\n[月下载量-luxcore]: https:\u002F\u002Fimg.shields.io\u002Fbadge\u002Fdynamic\u002Fjson?url=http%3A%2F%2Fjuliapkgstats.com%2Fapi%2Fv1%2Fmonthly_downloads%2FLuxCore&query=total_requests&suffix=%2Fmonth&label=Downloads\n[月下载量-mldatadevices]: https:\u002F\u002Fimg.shields.io\u002Fbadge\u002Fdynamic\u002Fjson?url=http%3A%2F%2Fjuliapkgstats.com%2Fapi%2Fv1%2Fmonthly_downloads%2FMLDataDevices&query=total_requests&suffix=%2Fmonth&label=Downloads\n[月下载量-weightinitializers]: https:\u002F\u002Fimg.shields.io\u002Fbadge\u002Fdynamic\u002Fjson?url=http%3A%2F%2Fjuliapkgstats.com%2Fapi%2Fv1%2Fmonthly_downloads%2FWeightInitializers&query=total_requests&suffix=%2Fmonth&label=Downloads\n[月下载量-luxtestutils]: https:\u002F\u002Fimg.shields.io\u002Fbadge\u002Fdynamic\u002Fjson?url=http%3A%2F%2Fjuliapkgstats.com%2Fapi%2Fv1%2Fmonthly_downloads%2FLuxTestUtils&query=total_requests&suffix=%2Fmonth&label=Downloads\n[月下载量-luxcuda]: https:\u002F\u002Fimg.shields.io\u002Fbadge\u002Fdynamic\u002Fjson?url=http%3A%2F%2Fjuliapkgstats.com%2Fapi%2Fv1%2Fmonthly_downloads%2FLuxCUDA&query=total_requests&suffix=%2Fmonth&label=Downloads\n[下载量-lux-url]: http:\u002F\u002Fjuliapkgstats.com\u002Fpkg\u002FLux\n[下载量-luxlib-url]: http:\u002F\u002Fjuliapkgstats.com\u002Fpkg\u002FLuxLib\n[下载量-luxcore-url]: http:\u002F\u002Fjuliapkgstats.com\u002Fpkg\u002FLuxCore\n[下载量-mldatadevices-url]: http:\u002F\u002Fjuliapkgstats.com\u002Fpkg\u002FMLDataDevices\n[下载量-weightinitializers-url]: http:\u002F\u002Fjuliapkgstats.com\u002Fpkg\u002FWeightInitializers\n[下载量-luxtestutils-url]: http:\u002F\u002Fjuliapkgstats.com\u002Fpkg\u002FLuxTestUtils\n[下载量-luxcuda-url]: http:\u002F\u002Fjuliapkgstats.com\u002Fpkg\u002FLuxCUDA\n\n\n\n## 🚀 基准测试\n\n目前基准测试分散在几个地方：\n\n  1. 若要与其他 Julia 包（如 CUDA.jl）进行比较，请查看\n     [Lux.jl\u002Fperf](.\u002Fperf\u002FREADME.md)。\n  2. \u003Chttps:\u002F\u002Fenzymead.github.io\u002FEnzyme-JAX\u002Fbenchmarks\u002F> 展示了\n     EnzymeJAX（Reactant.jl 的后端）与 JAX 的性能对比。\n  3. 
\u003Chttps:\u002F\u002Fenzymead.github.io\u002FReactant.jl\u002Fbenchmarks\u002F> 展示了\n     Reactant.jl 与默认 XLA 和基础 Julia 编译之间的性能对比。\n\n## 🤸 快速入门\n\n### Reactant 与 Enzyme\n\n```julia\nusing Lux, Random, Optimisers, Reactant, Enzyme\n\nrng = Random.default_rng()\nRandom.seed!(rng, 0)\n\nmodel = Chain(Dense(128, 256, tanh), Chain(Dense(256, 1, tanh), Dense(1, 10)))\n\ndev = reactant_device()\n\nps, st = Lux.setup(rng, model) |> dev\n\nx = rand(rng, Float32, 128, 2) |> dev\n\n# 我们需要编译模型才能使用它。\nmodel_forward = @compile model(x, ps, Lux.testmode(st))\nmodel_forward(x, ps, Lux.testmode(st))\n\n# 梯度可以使用Enzyme计算\n@jit Enzyme.gradient(Reverse, sum ∘ first ∘ Lux.apply, Const(model), x, ps, Const(st))\n\n# 所有这些都可以通过TrainState API自动化\ntrain_state = Training.TrainState(model, ps, st, Adam(0.001f0))\n\ngs, loss, stats, train_state = Training.single_train_step!(\n    AutoEnzyme(), MSELoss(),\n    (x, dev(rand(rng, Float32, 10, 2))), train_state\n)\n```\n\n### 原生 Julia 与 Zygote\n\n```julia\nusing Lux, Random, Optimisers, Zygote\n# using LuxCUDA, AMDGPU, Metal, oneAPI # GPU支持的可选包\n\n# 种子设置\nrng = Random.default_rng()\nRandom.seed!(rng, 0)\n\n# 构建层\nmodel = Chain(Dense(128, 256, tanh), Chain(Dense(256, 1, tanh), Dense(1, 10)))\n\n# 获取由Lux决定的设备\ndev = gpu_device()\n\n# 参数和状态变量\nps, st = Lux.setup(rng, model) |> dev\n\n# 虚拟输入\nx = rand(rng, Float32, 128, 2) |> dev\n\n# 运行模型\ny, st = Lux.apply(model, x, ps, st)\n\n# 梯度\n## 首先构建一个TrainState\ntrain_state = Lux.Training.TrainState(model, ps, st, Adam(0.0001f0))\n\n## 我们可以使用Training.compute_gradients计算梯度\ngs, loss, stats, train_state = Lux.Training.compute_gradients(AutoZygote(), MSELoss(),\n    (x, dev(rand(rng, Float32, 10, 2))), train_state)\n\n## 优化\ntrain_state = Training.apply_gradients!(train_state, gs) # 或者Training.apply_gradients（没有`!`结尾）\n\n# 这两个步骤可以合并为一次调用\ngs, loss, stats, train_state = Training.single_train_step!(AutoZygote(), MSELoss(),\n    (x, dev(rand(rng, Float32, 10, 2))), train_state)\n```\n\n## 📚 示例\n\n请查看[examples](\u002Fexamples\u002F)目录中的独立使用示例。[文档](https:\u002F\u002Flux.csail.mit.edu)中也有按类别整理的示例。\n\n## 🆘 寻求帮助\n\n如有关于使用的问题，请使用[GitHub Discussions](https:\u002F\u002Fgithub.com\u002Forgs\u002FLuxDL\u002Fdiscussions)，这样问题和答案都能被索引。如需报告 bug，请使用[GitHub Issues](https:\u002F\u002Fgithub.com\u002FLuxDL\u002FLux.jl\u002Fissues)，或者更好的是提交一个[Pull Request](https:\u002F\u002Fgithub.com\u002FLuxDL\u002FLux.jl\u002Fpulls)。\n\n## 🧑‍🔬 引用\n\n如果您在学术工作中发现本库很有用，请引用以下内容：\n\n```bibtex\n@software{pal2023lux,\n  author    = {Pal, Avik},\n  title     = {{Lux: Explicit Parameterization of Deep Neural Networks in Julia}},\n  month     = apr,\n  year      = 2023,\n  note      = {If you use this software, please cite it as below.},\n  publisher = {Zenodo},\n  version   = {v1.4.2},\n  doi       = {10.5281\u002Fzenodo.7808903},\n  url       = {https:\u002F\u002Fdoi.org\u002F10.5281\u002Fzenodo.7808903},\n  swhid     = {swh:1:dir:1a304ec3243961314a1cc7c1481a31c4386c4a34;origin=https:\u002F\u002Fdoi.org\u002F10.5281\u002Fzenodo.7808903;visit=swh:1:snp:e2bbe43b14bde47c4ddf7e637eb7fc7bd10db8c7;anchor=swh:1:rel:2c0c0ff927e7bfe8fc8bc43fd553ab392a6eb403;path=\u002F}\n}\n\n@thesis{pal2023efficient,\n  title     = {{On Efficient Training & Inference of Neural Differential Equations}},\n  author    = {Pal, Avik},\n  year      = {2023},\n  school    = {Massachusetts Institute of Technology}\n}\n```\n\n同时，也请给我们的[GitHub 仓库](https:\u002F\u002Fgithub.com\u002FLuxDL\u002FLux.jl\u002F)点个 Star。\n\n## 🧑‍💻 贡献\n\n本节内容尚不完整。您可以通过完善本节来做出贡献 😜。\n\n### 💎 格式化（JuliaFormatter）\n\n> [!NOTE]\n> 
在v2的上游问题解决之前，请将JuliaFormatter固定在v1版本。\n\n```julia\nusing JuliaFormatter\nformat(\".\")\n```\n\n### 🧪 测试\n\nLux.jl的完整测试耗时较长，以下是测试部分代码的方法。\n\n测试按目录组织，每个目录包含带有`@testset`块的测试文件。例如，`SkipConnection`的测试位于`test\u002Fcore_layers\u002Fcontainers_tests.jl`中。\n\n#### 运行特定测试文件\n\n运行特定测试最简单的方式是直接激活测试目录并包含该测试文件：\n\n```julia\n# 从Lux.jl根目录\nusing Pkg\nPkg.activate(\"test\")\n\n# 运行特定测试文件\ninclude(\"test\u002Fcore_layers\u002Fcontainers_tests.jl\")\n```\n\n这种方法允许您快速迭代特定测试，而无需运行整个测试套件。\n\n有关执行特定测试组的详细信息，请参阅[ParallelTestRunners.jl](https:\u002F\u002Fgithub.com\u002FJuliaTesting\u002FParallelTestRunner.jl)。\n\n#### 通过CI运行测试组\n\n要通过测试运行程序运行特定测试组，可以将目录名称作为位置参数传递：\n\n```shell\njulia --project -e 'using Pkg; Pkg.test(test_args=[\"core_layers\"])'\n```\n\n#### 运行所有测试\n\n要运行完整的测试套件：\n\n```shell\njulia --project -e 'using Pkg; Pkg.test()'\n```\n\n### 📖 文档\n\nLux在其文档中创建了许多教程。这可能非常耗时且需要大量计算资源。为了加快构建速度，您可以设置`LUX_DOCS_DRAFT_BUILD=true`。\n\n```shell\nLUX_DOCS_DRAFT_BUILD=true julia --threads=auto --startup=no --project=docs docs\u002Fmake.jl\n```\n\n在编写教程（任何位于`examples\u002F`下的内容）时，请将其包含在`docs\u002Ftutorials.jl`中。如果教程耗时较长，可以将`should_run`设置为`false`。\n\n此外，若要将新页面纳入导航和侧边栏，需要将其添加到`docs\u002Fsrc\u002F.vitepress\u002Fconfig.mts`中。具体来说，需要根据页面类型将其添加到`sidebar`和\u002F或`nav`中。\n\n要使用LiveServer在本地预览文档，请参阅[DocumenterVitepress.jl](https:\u002F\u002Fluxdl.github.io\u002FDocumenterVitepress.jl\u002Fdev\u002Fmanual\u002Fget_started#Preview-Documentation-Development-Instantly)的文档。","# Lux.jl 快速上手指南\n\nLux.jl 是一个用 Julia 语言编写的优雅且高性能的深度学习库，旨在结合 Julia 的编程灵活性与 XLA 的计算性能。\n\n## 1. 环境准备\n\n在开始之前，请确保您的开发环境满足以下要求：\n\n*   **操作系统**：支持 Linux、macOS 或 Windows。\n*   **Julia 版本**：建议安装最新稳定版的 Julia (v1.9 或更高版本)。\n    *   下载地址：[https:\u002F\u002Fjulialang.org\u002Fdownloads\u002F](https:\u002F\u002Fjulialang.org\u002Fdownloads\u002F)\n    *   *国内加速*：清华大学开源软件镜像站提供 Julia 下载加速 ([https:\u002F\u002Fmirrors.tuna.tsinghua.edu.cn\u002Fjulia-releases\u002F](https:\u002F\u002Fmirrors.tuna.tsinghua.edu.cn\u002Fjulia-releases\u002F))。\n*   **前置依赖**：无需额外安装 Python 或其他深度学习框架，Lux.jl 是纯 Julia 实现。但若要使用 GPU 加速，需确保已安装对应的 NVIDIA 驱动和 CUDA Toolkit（Lux 会自动处理大部分 CUDA 依赖）。\n\n> **提示**：如果您不想在本地配置环境，可以直接使用 [Google Colab](https:\u002F\u002Fcolab.research.google.com\u002F)，其 Julia 运行时已预装了 Lux 和相关组件。\n\n## 2. 安装步骤\n\n启动 Julia REPL（交互式命令行），执行以下命令安装 Lux 主包：\n\n```julia\nimport Pkg\nPkg.add(\"Lux\")\n```\n\n安装完成后，建议在代码中通过以下方式加载库：\n\n```julia\nusing Lux, Random\n```\n\n> **注意**：Lux 生态系统包含多个子包（如 `LuxLib`, `LuxCore`, `MLDataDevices` 等），安装主包 `Lux` 时通常会自动解析并安装必要的核心依赖。\n\n## 3. 基本使用\n\n以下是一个最简单的示例，展示如何定义一个多层感知机（MLP）模型，初始化参数，并进行一次前向传播。\n\n### 示例代码\n\n```julia\nusing Lux, Random, ComponentArrays\n\n# 1. 定义模型结构\n# 输入层 784 -> 隐藏层 64 (ReLU) -> 输出层 10\nmodel = Chain(\n    Dense(784, 64, relu),\n    Dense(64, 10)\n)\n\n# 2. 初始化随机数生成器和参数\nrng = Random.default_rng()\n# Lux 需要显式初始化参数和状态\nps, st = Lux.setup(rng, model)\n\n# 将参数转换为组件数组 (可选，但推荐用于优化器兼容)\nps_comp = ComponentArray(ps)\n\n# 3. 准备输入数据 (模拟一个批次的数据)\n# 假设输入维度为 (特征数，批次大小)\nx = rand(Float32, 784, 32)\n\n# 4. 
执行前向传播\n# 返回值包括：预测结果 (y) 和更新后的状态 (st_new)\ny, st_new = model(x, ps, st)\n\n# 输出结果维度检查\nprintln(\"输入形状: \", size(x))\nprintln(\"输出形状: \", size(y))\n```\n\n### 代码说明\n*   **模型定义**：使用 `Chain` 串联网络层，`Dense` 定义全连接层。\n*   **参数初始化**：Lux 采用函数式编程风格，参数 (`ps`) 和状态 (`st`) 与模型结构分离，需通过 `Lux.setup` 初始化。\n*   **前向传播**：调用 `model(x, ps, st)` 即可得到预测结果，无需像传统 OOP 框架那样调用 `.forward()` 方法。\n\n现在您已经成功运行了第一个 Lux.jl 模型！接下来您可以结合 `Optimisers.jl` 进行训练，或使用 `LuxCUDA.jl` 启用 GPU 加速。","某生物计算实验室的研究团队正试图利用 Julia 语言构建高精度的蛋白质折叠预测模型，以加速新药研发过程中的分子动力学模拟。\n\n### 没有 Lux.jl 时\n- **性能瓶颈严重**：直接使用基础 Julia 数组操作或调用外部 Python 库（如 PyTorch）导致数据在内存间频繁拷贝，训练大规模神经网络时速度缓慢，难以利用 XLA 加速。\n- **代码风格割裂**：为了追求性能被迫混合使用 Python 和 Julia 两种语言，破坏了 Julia 原生代码的优雅性与类型稳定性，调试和维护成本极高。\n- **生态集成困难**：现有的深度学习框架缺乏对 Julia 科学计算生态（如 DifferentialEquations.jl）的原生支持，导致将物理约束融入神经网络时接口复杂且易出错。\n- **硬件适配繁琐**：在不同 GPU 后端切换时需要重写大量底层代码，缺乏统一的抽象层来自动处理硬件加速细节。\n\n### 使用 Lux.jl 后\n- **极致运行效率**：Lux.jl 原生集成 XLA 编译技术，无需离开 Julia 环境即可实现接近硬件极限的训练速度，大幅缩短了模型迭代周期。\n- **纯 Julia 优雅开发**：团队完全使用纯 Julia 编写模型，既保留了语言的数学表达力与类型安全，又享受到了高性能深度学习框架的全部功能。\n- **无缝科算融合**：凭借原生兼容性，研究人员轻松将微分方程求解器与神经网络结合，快速实现了“物理信息神经网络”（PINN）的复杂架构。\n- **灵活后端切换**：通过简单的参数配置即可在 CPU、GPU 及不同加速器之间无缝切换，底层自动优化算子，让开发者专注于算法逻辑而非硬件适配。\n\nLux.jl 成功打破了高性能计算与优雅编程之间的壁垒，让科研人员能在单一语言环境中同时获得工业级的训练速度与科学级的开发体验。","https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FLuxDL_Lux.jl_2a0e9e19.png","LuxDL","https:\u002F\u002Foss.gittoolsai.com\u002Favatars\u002FLuxDL_12ac6f81.png","",null,"https:\u002F\u002Flux.csail.mit.edu\u002Fstable\u002F","https:\u002F\u002Fgithub.com\u002FLuxDL",[79,83,87],{"name":80,"color":81,"percentage":82},"Julia","#a270ba",99.5,{"name":84,"color":85,"percentage":86},"Python","#3572A5",0.5,{"name":88,"color":89,"percentage":90},"Shell","#89e051",0,696,84,"2026-04-03T00:05:33","MIT","Linux, macOS, Windows","非必需。支持通过 LuxCUDA.jl 进行 GPU 加速（通常需 NVIDIA GPU 及对应 CUDA 驱动），具体显存和 CUDA 版本取决于底层后端（如 XLA\u002FCUDA）的配置，文中未明确限定最低要求。","未说明",{"notes":99,"python":100,"dependencies":101},"1. 该工具是基于 Julia 语言的深度学习库，不需要 Python 环境。2. 安装只需在 Julia REPL 中运行 `Pkg.add(\"Lux\")`。3. 提供 Google Colab 在线使用支持（预装 Julia 运行时）。4. 核心特性是结合 Julia 的优雅语法与 XLA 的高性能。5. 
包含多个子包（如 LuxLib, LuxCore 等）作为内部依赖自动管理。","不需要 (基于 Julia 语言)",[102,103,104,105,106,107],"Julia (最新稳定版)","LuxLib.jl","LuxCore.jl","MLDataDevices.jl","WeightInitializers.jl","LuxCUDA.jl (可选，用于 GPU 支持)",[14],[110,111,112,113,114,115,116],"deep-learning","machine-learning","neural-networks","gpu","scientific-machine-learning","tpu","xla","2026-03-27T02:49:30.150509","2026-04-07T06:15:18.170793",[120,125,129,134,139,144,149],{"id":121,"question_zh":122,"answer_zh":123,"source_url":124},21170,"Reactant 报错提示 CUDA 版本过低（StreamBeginCaptureToGraph not implemented），但系统已安装高版本 CUDA，如何解决？","这是因为 Reactant 默认可能未抓取到正确的 CUDA 版本。你可以使用 EnzymeAD\u002FReactant.jl PR #589 中暴露的选项来指定或调整配置（该功能尚未正式发布）。此外，如果遇到内存池问题，目前无法直接从池中回收内存，但可以通过禁用预分配（disable preallocation）并手动调用 `GC.gc()` 来清理缓存。详见：https:\u002F\u002Fenzymead.github.io\u002FReactant.jl\u002Fstable\u002Fintroduction\u002F#Empty-Cache","https:\u002F\u002Fgithub.com\u002FLuxDL\u002FLux.jl\u002Fissues\u002F1225",{"id":126,"question_zh":127,"answer_zh":128,"source_url":124},21171,"使用 Reactant 训练大型模型时编译时间过长且显存占用固定为 GPU 容量的 75%，可以调整吗？","目前 XLA 内存池大小默认固定为 GPU 容量的 75%，暂时无法直接修改该比例。对于大型模型（如参数量增加到 80k），编译时间可能需要约 130 秒，这是正常现象。关于内存回收，编译后的模型不会自动被 GC 回收，也无法直接使用 `unsafe_free!` 释放。建议通过禁用预分配并手动触发垃圾回收（`GC.gc()`）来缓解内存压力。相关配置选项正在开发中（参考 Reactant.jl PR #589）。",{"id":130,"question_zh":131,"answer_zh":132,"source_url":133},21172,"Lux 最近版本出现混合精度矩阵乘法性能回归，推理速度变慢且内存分配增加，是预期行为吗？","这并非预期的最终状态，但与后端调度策略有关。维护者指出，后端（loopvec 和 octavian）通常能智能地不使用过多线程，但在某些小模型场景下可能导致性能下降。用户可以尝试调整线程数（thread count）进行测试，但在大多数情况下无需手动干预。如果性能影响严重，可暂时通过分布式计算绕过该问题。团队正在关注此性能回归问题。","https:\u002F\u002Fgithub.com\u002FLuxDL\u002FLux.jl\u002Fissues\u002F847",{"id":135,"question_zh":136,"answer_zh":137,"source_url":138},21173,"如何在 Julia 包注册后自动创建 Git Tag？","推荐安装并使用 [Julia TagBot GitHub Action](https:\u002F\u002Fgithub.com\u002Fmarketplace\u002Factions\u002Fjulia-tagbot)。安装后，当注册 Pull Request 合并时，TagBot 会自动创建对应的版本标签。如果未安装，也可以手动执行以下命令创建标签：\n```\ngit tag -a v0.4.18 -m \"\u003Cversion description>\" \u003Ccommit_hash>\ngit push origin v0.4.18\n```\n请确保将 `v0.4.18` 替换为实际版本号，`\u003Ccommit_hash>` 替换为对应的提交哈希值。","https:\u002F\u002Fgithub.com\u002FLuxDL\u002FLux.jl\u002Fissues\u002F36",{"id":140,"question_zh":141,"answer_zh":142,"source_url":143},21174,"TagBot 触发失败或未生成标签，应该如何排查？","如果 TagBot 未自动生成标签，请检查以下几点：\n1. 确认已更新 `TagBot.yml` 配置文件，包含 issue comment 触发器（参考 Discourse 教程：https:\u002F\u002Fdiscourse.julialang.org\u002Ft\u002Fann-required-updates-to-tagbot-yml\u002F49249）。\n2. 检查 GitHub Actions 日志，确认 TagBot 是否运行且无报错。\n3. 如果需要自动修复配置，可以在该 Issue 下评论 `TagBot fix`，维护者会在几小时内提交修复 PR。\n常见错误包括配置缺失或权限不足导致无法打标签。","https:\u002F\u002Fgithub.com\u002FLuxDL\u002FLux.jl\u002Fissues\u002F6",{"id":145,"question_zh":146,"answer_zh":147,"source_url":148},21175,"Lux 对 Enzyme.jl 自动微分的支持情况如何？哪些层可以使用？","截至 Lux v0.4.7 和 Enzyme v0.10.4，以下组件已支持 Enzyme 自动微分：\n1. `Lux.Dense`：完全支持，可直接用于反向模式微分（Reverse Mode AD）。\n2. `Lux.BatchNorm`：已验证可用，但在编译时可能会出现目标三元组（target triple）不匹配的警告（如 'x86_64-unknown-linux-gnu' vs 'x86_64-pc-linux-gnu'），通常不影响运行。\n使用时需将模型和输入设为 `Const`，参数设为 `Duplicated`，例如：\n`Enzyme.autodiff(Reverse, loss_function, Const(model), Const(x), Duplicated(ps, dps), Const(st))`","https:\u002F\u002Fgithub.com\u002FLuxDL\u002FLux.jl\u002Fissues\u002F81",{"id":150,"question_zh":151,"answer_zh":152,"source_url":138},21176,"注册 Julia 包时提示“标签已存在但指向不同提交”（Tag already exists and points to a different commit），如何解决？","该错误表示当前仓库中已存在同名标签（如 `v0.4.17`），但其指向的提交哈希与注册系统记录的不一致。解决方法：\n1. 本地删除旧标签：`git tag -d v0.4.17`\n2. 远程删除旧标签：`git push origin :refs\u002Ftags\u002Fv0.4.17`\n3. 
重新运行注册命令：`@JuliaRegistrator register`\n确保在重新注册前，本地代码已提交并推送到最新状态，避免再次冲突。",[154,159,164,169,174,179,184,189,194,199,204,209,214,219,224,229,234,239,244,249],{"id":155,"version":156,"summary_zh":157,"released_at":158},127177,"LuxLib-v1.15.6","## LuxLib LuxLib-v1.15.6\n\n[与 LuxLib-v1.15.5 的差异](https:\u002F\u002Fgithub.com\u002FLuxDL\u002FLux.jl\u002Fcompare\u002FLuxLib-v1.15.5...LuxLib-v1.15.6)\n\n\n**已合并的拉取请求：**\n- 修复：cublaslt 实现中的主机内存泄漏 (#1689) (@avik-pal)\n\n**已关闭的问题：**\n- 在 GPU 上进行大量 epoch 时内存使用不规则 (#872)\n- 使用 CUDA 时 Dense 层存在内存泄漏 (#1230)","2026-03-28T04:08:15",{"id":160,"version":161,"summary_zh":162,"released_at":163},127178,"LuxLib-v1.15.5","## LuxLib LuxLib-v1.15.5\n\n[与 LuxLib-v1.15.4 的差异](https:\u002F\u002Fgithub.com\u002FLuxDL\u002FLux.jl\u002Fcompare\u002FLuxLib-v1.15.4...LuxLib-v1.15.5)\n\n\n**已合并的拉取请求：**\n- chore: 将 crate-ci\u002Ftypos 从 1.42.3 升级到 1.43.3 (#1659) (@dependabot[bot])\n- chore: 更新 \u002Fdocs 中 DocumenterVitepress 的依赖版本，从 0.2 更新为 0.2 和 0.3 (#1660) (@dependabot[bot])\n- chore: 在 \u002Ftest 中将 LuxCUDA 的依赖更新至 0.3.4 (#1661) (@dependabot[bot])\n- chore: 在 \u002Ftest 中将 cuDNN 的依赖更新至 1.4.6 (#1662) (@dependabot[bot])\n- chore: 在 \u002Ftest 中将 CUDA 的依赖更新至 5.9.6 (#1663) (@dependabot[bot])\n- 为 `LuxTestUtils`、`LuxLib` 和 `Lux` 的通用测试引入 `Mooncake` 测试框架，用于当前的测试用例。(#1664) (@AstitvaAggarwal)\n- chore: 将 crate-ci\u002Ftypos 从 1.43.3 升级到 1.43.4 (#1665) (@dependabot[bot])\n- chore: 将 crate-ci\u002Ftypos 从 1.43.4 升级到 1.43.5 (#1667) (@dependabot[bot])\n- chore: 将 actions\u002Fdownload-artifact 从 7 升级到 8 (#1670) (@dependabot[bot])\n- chore: 将 crate-ci\u002Ftypos 从 1.43.5 升级到 1.44.0 (#1671) (@dependabot[bot])\n- chore: 将 actions\u002Fupload-artifact 从 6 升级到 7 (#1672) (@dependabot[bot])\n- chore: 将 julia-actions\u002Fcache 从 2 升级到 3 (#1675) (@dependabot[bot])\n- fix(MLDataDevices): 修正 AMDGPUExt 中 `amdgpu_array_adapt` 的签名 (#1677) (@Copilot)\n- test: 移除训练 API 中的 mooncake 测试 (#1680) (@avik-pal)\n- fix: 在 ReactantExt 的 MultiHeadAttention 中支持 N 维掩码和偏置 (#1682) (@Copilot)\n- test: 重新启用训练 API 的 mooncake 测试 (#1683) (@avik-pal)\n- fix: 修正 `cublasLt_fused_dense!` 通用回退实现中的双重激活问题 (#1685) (@Copilot)\n- docs: 在 LuxTestUtils 的 API 文档页面中添加缺失的 `mooncake_gradient_function` 文档注释 (#1687) (@Copilot)\n\n**已关闭的问题：**\n- Reactant 编译 ConvolutionalVAE 时出现故障 (#1673)\n- [MLDataDevices] 将数组移动到 AMDGPU 时发生错误 (#1676)\n- 在最近的版本中，mooncake 完全失效 (#1679)\n- 使用 Reactant 时，Attention 默认假设使用 2D 掩码 (#1681)\n- `LuxLib.Impl.cublasLt_fused_dense!` 中可能存在双重激活问题 (#1684)\n- 文档构建失败 (#1686)","2026-03-26T02:08:46",{"id":165,"version":166,"summary_zh":167,"released_at":168},127179,"LuxTestUtils-v2.3.0","## LuxTestUtils LuxTestUtils-v2.3.0\n\n[自 LuxTestUtils-v2.2.0 以来的差异](https:\u002F\u002Fgithub.com\u002FLuxDL\u002FLux.jl\u002Fcompare\u002FLuxTestUtils-v2.2.0...LuxTestUtils-v2.3.0)\n\n\n**已合并的拉取请求：**\n- chore: 将 crate-ci\u002Ftypos 从 1.42.0 升级到 1.42.1 (#1642) (@dependabot[bot])\n- test: 在 1.12 上运行 Enzyme.jl (#1644) (@avik-pal)\n- 修复 PolynomialFitting 文档页面中曲线的图例标签 (#1645) (@JamieMair)\n- ci: 在 1.12 上运行文档构建 (#1646) (@avik-pal)\n- feat: 支持 AutoReactant (#1647) (@avik-pal)\n- fix: 不再依赖反应物中的自动推断 (#1648) (@avik-pal)\n- fix: 在 get_device 中绕过反应物 (#1649) (@avik-pal)\n- test: 使用 ParallelTestRunner (#1650) (@avik-pal)\n- chore: 将 Mooncake 的依赖项从 0.4.148 更新为 0.4.148 和 0.5 (#1651) (@dependabot[bot])\n- chore: 在 \u002Ftest 目录下将 Mooncake 的依赖项从 0.4.138 更新为 0.4.138 和 0.5 (#1652) (@dependabot[bot])\n- chore: 将 crate-ci\u002Ftypos 从 1.42.1 升级到 1.42.3 (#1653) (@dependabot[bot])\n- test(LuxLib): 迁移到 ParallelTestRunners (#1654) (@avik-pal)\n- fix: 修复 CPU matmul 缺失的 
OneHotArrays 分派问题 (#1655) (@avik-pal)\n- test(Lux): 迁移到 ParallelTestRunner (#1656) (@avik-pal)\n- chore: 将 crate-ci\u002Ftypos 从 1.42.3 升级到 1.43.3 (#1659) (@dependabot[bot])\n- chore: 在 \u002Fdocs 目录下将 DocumenterVitepress 的依赖项从 0.2 更新为 0.2 和 0.3 (#1660) (@dependabot[bot])\n- chore: 在 \u002Ftest 目录下将 LuxCUDA 的依赖项更新为 0.3.4 (#1661) (@dependabot[bot])\n- chore: 在 \u002Ftest 目录下将 cuDNN 的依赖项更新为 1.4.6 (#1662) (@dependabot[bot])\n- chore: 在 \u002Ftest 目录下将 CUDA 的依赖项更新为 5.9.6 (#1663) (@dependabot[bot])\n- 将 `Mooncake` 用于 `LuxTestUtils`、`LuxLib`、`Lux` 的通用测试以及当前的测试用例。(#1664) (@AstitvaAggarwal)\n- chore: 将 crate-ci\u002Ftypos 从 1.43.3 升级到 1.43.4 (#1665) (@dependabot[bot])\n- chore: 将 crate-ci\u002Ftypos 从 1.43.4 升级到 1.43.5 (#1667) (@dependabot[bot])\n- chore: 将 actions\u002Fdownload-artifact 从 7 升级到 8 (#1670) (@dependabot[bot])\n- chore: 将 crate-ci\u002Ftypos 从 1.43.5 升级到 1.44.0 (#1671) (@dependabot[bot])\n- chore: 将 actions\u002Fupload-artifact 从 6 升级到 7 (#1672) (@dependabot[bot])\n- chore: 将 julia-actions\u002Fcache 从 2 升级到 3 (#1675) (@dependabot[bot])\n- fix(MLDataDevices): 修正 AMDGPUExt 中 `amdgpu_array_adapt` 的签名 (#1677) (@Copilot)\n- test: 移除训练 API 中的 mooncake 测试 (#1680) (@avik-pal)\n\n**已关闭的问题：**\n- 使用 OneHotArrays 时，LuxLib 关于混合精度的警告 (#1197)\n- 向 1.12 支持迈进 (#1532)\n- 在本地运行 Lux 和 Reactant 测试 (#1577)\n- 隐式使用 RMSNorm 会导致 Reactant 和 Enzyme 的自动微分失效 (#1640)\n- ConvolutionalVAE 的 Reactant 编译失败 (#1673)\n- [MLDataDevices] 将数组移动到 AMDGPU 时出现错误 (#1676)","2026-03-25T02:24:28",{"id":170,"version":171,"summary_zh":172,"released_at":173},127180,"MLDataDevices-v1.17.5","## MLDataDevices MLDataDevices-v1.17.5\n\n[与 MLDataDevices-v1.17.4 的差异](https:\u002F\u002Fgithub.com\u002FLuxDL\u002FLux.jl\u002Fcompare\u002FMLDataDevices-v1.17.4...MLDataDevices-v1.17.5)\n\n\n**已合并的拉取请求：**\n- chore: 将 crate-ci\u002Ftypos 从 1.42.1 升级到 1.42.3 (#1653) (@dependabot[bot])\n- test(LuxLib): 迁移到 ParallelTestRunners (#1654) (@avik-pal)\n- fix: 缺少针对 CPU 矩阵乘法的 OneHotArrays 分派 (#1655) (@avik-pal)\n- test(Lux): 迁移到 ParallelTestRunner (#1656) (@avik-pal)\n- chore: 将 crate-ci\u002Ftypos 从 1.42.3 升级到 1.43.3 (#1659) (@dependabot[bot])\n- chore: 更新 \u002Fdocs 中 DocumenterVitepress 的依赖版本，从 0.2 更新为 0.2 和 0.3 (#1660) (@dependabot[bot])\n- chore: 在 \u002Ftest 中将 LuxCUDA 的版本更新至 0.3.4 (#1661) (@dependabot[bot])\n- chore: 在 \u002Ftest 中将 cuDNN 的版本更新至 1.4.6 (#1662) (@dependabot[bot])\n- chore: 在 \u002Ftest 中将 CUDA 的版本更新至 5.9.6 (#1663) (@dependabot[bot])\n- chore: 将 crate-ci\u002Ftypos 从 1.43.3 升级到 1.43.4 (#1665) (@dependabot[bot])\n- chore: 将 crate-ci\u002Ftypos 从 1.43.4 升级到 1.43.5 (#1667) (@dependabot[bot])\n- chore: 将 actions\u002Fdownload-artifact 从 7 升级到 8 (#1670) (@dependabot[bot])\n- chore: 将 crate-ci\u002Ftypos 从 1.43.5 升级到 1.44.0 (#1671) (@dependabot[bot])\n- chore: 将 actions\u002Fupload-artifact 从 6 升级到 7 (#1672) (@dependabot[bot])\n- chore: 将 julia-actions\u002Fcache 从 2 升级到 3 (#1675) (@dependabot[bot])\n- fix(MLDataDevices): 修正 AMDGPUExt 中 `amdgpu_array_adapt` 的签名 (#1677) (@Copilot)\n\n**已关闭的问题：**\n- 使用 OneHotArrays 时，LuxLib 关于混合精度的警告 (#1197)\n- ConvolutionalVAE 的 Reactant 编译失败 (#1673)\n- [MLDataDevices] 将数组移动到 AMDGPU 时出现错误 (#1676)","2026-03-19T00:26:45",{"id":175,"version":176,"summary_zh":177,"released_at":178},127181,"v1.31.3","## Lux v1.31.3\n\n[自 v1.31.2 以来的差异](https:\u002F\u002Fgithub.com\u002FLuxDL\u002FLux.jl\u002Fcompare\u002Fv1.31.2...v1.31.3)\n\n\n**已合并的拉取请求：**\n- 测试(Lux)：迁移到 ParallelTestRunner (#1656) (@avik-pal)","2026-02-03T17:45:00",{"id":180,"version":181,"summary_zh":182,"released_at":183},127182,"LuxLib-v1.15.4","## LuxLib 
LuxLib-v1.15.4\n\n[与 LuxLib-v1.15.3 的差异](https:\u002F\u002Fgithub.com\u002FLuxDL\u002FLux.jl\u002Fcompare\u002FLuxLib-v1.15.3...LuxLib-v1.15.4)\n\n\n**已合并的拉取请求：**\n- 测试(Lux)：迁移到 ParallelTestRunner (#1656) (@avik-pal)","2026-02-03T16:55:27",{"id":185,"version":186,"summary_zh":187,"released_at":188},127183,"LuxLib-v1.15.3","## LuxLib LuxLib-v1.15.3\n\n[自 LuxLib-v1.15.2 以来的差异](https:\u002F\u002Fgithub.com\u002FLuxDL\u002FLux.jl\u002Fcompare\u002FLuxLib-v1.15.2...LuxLib-v1.15.3)\n\n\n**已合并的拉取请求：**\n- 修复：Reactant 预编译 + 抛出错误 (#1631) (@avik-pal)\n- 测试 (Lux)：将测试迁移到 v1.12 (#1634) (@avik-pal)\n- 修复：更新追踪以适配新的 Reactant API (#1636) (@avik-pal)\n- 新特性：为 Reactant 工作负载启用并行预编译 (#1638) (@avik-pal)\n- 文档：修复越界大小访问问题 (#1639) (@avik-pal)\n- 修复：在 1.12 上跳过 Zygote (#1641) (@avik-pal)\n- 杂项：将 crate-ci\u002Ftypos 从 1.42.0 升级到 1.42.1 (#1642) (@dependabot[bot])\n- 测试：Enzyme.jl 在 1.12 上运行 (#1644) (@avik-pal)\n- 修复了 PolynomialFitting 文档页面中曲线的图例标签 (#1645) (@JamieMair)\n- CI：在 1.12 上运行文档构建 (#1646) (@avik-pal)\n- 新特性：支持 AutoReactant (#1647) (@avik-pal)\n- 修复：不要依赖 Reactant 中的自动推断 (#1648) (@avik-pal)\n- 修复：在 get_device 中绕过 Reactant (#1649) (@avik-pal)\n- 测试：使用 ParallelTestRunner (#1650) (@avik-pal)\n- 杂项：将 Mooncake 的依赖版本从 0.4.148 更新至 0.4.148 和 0.5 (#1651) (@dependabot[bot])\n- 杂项：在 \u002Ftest 目录下将 Mooncake 的依赖版本从 0.4.138 更新至 0.4.138 和 0.5 (#1652) (@dependabot[bot])\n- 杂项：将 crate-ci\u002Ftypos 从 1.42.1 升级到 1.42.3 (#1653) (@dependabot[bot])\n- 测试 (LuxLib)：迁移到 ParallelTestRunners (#1654) (@avik-pal)\n- 修复：CPU 矩阵乘法缺少 OneHotArrays 的分派 (#1655) (@avik-pal)\n\n**已关闭的问题：**\n- 使用 OneHotArrays 时，LuxLib 关于混合精度的警告 (#1197)\n- 向 1.12 支持迈进 (#1532)\n- 在本地运行 Lux 和 Reactant 测试 (#1577)\n- RMSNorm 无法与 Reactant 一起编译 (#1637)\n- 使用 RMSNorm 会静默地破坏 Reactant 和 Enzyme 的自动微分功能 (#1640)","2026-02-03T01:53:09",{"id":190,"version":191,"summary_zh":192,"released_at":193},127184,"v1.31.2","## Lux v1.31.2\n\n[与 v1.31.1 的差异](https:\u002F\u002Fgithub.com\u002FLuxDL\u002FLux.jl\u002Fcompare\u002Fv1.31.1...v1.31.2)\n\n\n**已合并的拉取请求：**\n- 修复：在 get_device 中绕过 Reactant (#1649) (@avik-pal)\n- 测试：使用 ParallelTestRunner (#1650) (@avik-pal)\n- 杂项：将 Mooncake 的依赖版本从 0.4.148 更新为 0.4.148、0.5 (#1651) (@dependabot[bot])\n- 杂项：在 \u002Ftest 目录中将 Mooncake 的依赖版本从 0.4.138 更新为 0.4.138、0.5 (#1652) (@dependabot[bot])\n- 杂项：将 crate-ci\u002Ftypos 从 1.42.1 升级到 1.42.3 (#1653) (@dependabot[bot])\n- 测试（LuxLib）：迁移到 ParallelTestRunners (#1654) (@avik-pal)\n- 修复：CPU 矩阵乘法缺少 OneHotArrays 的分派 (#1655) (@avik-pal)\n\n**已关闭的问题：**\n- 使用 OneHotArrays 时，LuxLib 关于混合精度的警告 (#1197)\n- 在本地运行 Lux 和 Reactant 的测试 (#1577)","2026-02-03T01:46:45",{"id":195,"version":196,"summary_zh":197,"released_at":198},127185,"MLDataDevices-v1.17.4","## MLDataDevices MLDataDevices-v1.17.4\n\n[与 MLDataDevices-v1.17.3 的差异](https:\u002F\u002Fgithub.com\u002FLuxDL\u002FLux.jl\u002Fcompare\u002FMLDataDevices-v1.17.3...MLDataDevices-v1.17.4)\n\n\n**已合并的拉取请求：**\n- 测试：使用 ParallelTestRunner (#1650) (@avik-pal)\n- 构建工具：将 Mooncake 依赖从 0.4.148 更新为 0.4.148 和 0.5 (#1651) (@dependabot[bot])\n- 构建工具：在 \u002Ftest 中将 Mooncake 依赖从 0.4.138 更新为 0.4.138 和 0.5 (#1652) (@dependabot[bot])\n\n**已关闭的问题：**\n- 在本地运行 Lux 和 Reactant 测试 (#1577)","2026-02-02T01:41:36",{"id":200,"version":201,"summary_zh":202,"released_at":203},127186,"MLDataDevices-v1.17.3","## MLDataDevices MLDataDevices-v1.17.3\n\n[与 MLDataDevices-v1.17.2 的差异](https:\u002F\u002Fgithub.com\u002FLuxDL\u002FLux.jl\u002Fcompare\u002FMLDataDevices-v1.17.2...MLDataDevices-v1.17.3)\n\n\n**已合并的拉取请求：**\n- 修复：Reactant 预编译 + 抛出错误 (#1631) (@avik-pal)\n- 测试（Lux）：将测试迁移到 v1.12 (#1634) (@avik-pal)\n- 新特性：为 Reactant 工作负载实现并行预编译 
(#1638) (@avik-pal)\n- 文档：修复越界大小访问问题 (#1639) (@avik-pal)\n- 修复：在 1.12 上跳过 Zygote (#1641) (@avik-pal)\n- 杂项：将 crate-ci\u002Ftypos 从 1.42.0 升级到 1.42.1 (#1642) (@dependabot[bot])\n- 测试：Enzyme.jl 在 1.12 上运行 (#1644) (@avik-pal)\n- 修复了 PolynomialFitting 文档页面中曲线的图例标签 (#1645) (@JamieMair)\n- CI：在 1.12 上运行文档构建 (#1646) (@avik-pal)\n- 新特性：支持 AutoReactant (#1647) (@avik-pal)\n- 修复：不要依赖 Reactant 中的自动推断 (#1648) (@avik-pal)\n- 修复：在 get_device 中绕过 Reactant (#1649) (@avik-pal)\n\n**已关闭的问题：**\n- 向 1.12 支持迈进 (#1532)\n- RMSNorm 在使用 Reactant 时无法编译 (#1637)\n- 使用 RMSNorm 会静默地破坏 Reactant 和 Enzyme 的自动微分功能 (#1640)","2026-01-30T16:44:09",{"id":205,"version":206,"summary_zh":207,"released_at":208},127187,"v1.31.1","## Lux v1.31.1\n\n[Diff since v1.31.0](https:\u002F\u002Fgithub.com\u002FLuxDL\u002FLux.jl\u002Fcompare\u002Fv1.31.0...v1.31.1)\n\n\n**Merged pull requests:**\n- fix: dont rely on auto inference in reactant (#1648) (@avik-pal)\n\n**Closed issues:**\n- Using RMSNorm silently breaks AD with Reactant and Enzyme (#1640)","2026-01-29T19:12:53",{"id":210,"version":211,"summary_zh":212,"released_at":213},127188,"v1.31.0","## Lux v1.31.0\n\n[Diff since v1.30.0](https:\u002F\u002Fgithub.com\u002FLuxDL\u002FLux.jl\u002Fcompare\u002Fv1.30.0...v1.31.0)\n\n\n**Merged pull requests:**\n- fix: skip zygote on 1.12 (#1641) (@avik-pal)\n- chore: bump crate-ci\u002Ftypos from 1.42.0 to 1.42.1 (#1642) (@dependabot[bot])\n- test: Enzyme.jl on 1.12 (#1644) (@avik-pal)\n- Fixed legend labels for the curves in the PolynomialFitting documentation page (#1645) (@JamieMair)\n- ci: run docs on 1.12 (#1646) (@avik-pal)\n- feat: support AutoReactant (#1647) (@avik-pal)\n\n**Closed issues:**\n- Towards 1.12 support (#1532)","2026-01-29T14:59:05",{"id":215,"version":216,"summary_zh":217,"released_at":218},127189,"LuxTestUtils-v2.2.0","## LuxTestUtils LuxTestUtils-v2.2.0\n\n[Diff since LuxTestUtils-v2.1.0](https:\u002F\u002Fgithub.com\u002FLuxDL\u002FLux.jl\u002Fcompare\u002FLuxTestUtils-v2.1.0...LuxTestUtils-v2.2.0)\n\n\n**Merged pull requests:**\n- fix: reactant precompilation + throw error (#1631) (@avik-pal)\n- test(LuxLib): migrate testing to v1.12 (#1633) (@avik-pal)\n- test(Lux): migrate testing to v1.12 (#1634) (@avik-pal)\n- fix: update tracing to new Reactant API (#1636) (@avik-pal)\n- feat: parallel precompile for reactant workloads (#1638) (@avik-pal)\n- docs: fix out of bounds size access (#1639) (@avik-pal)\n- fix: skip zygote on 1.12 (#1641) (@avik-pal)\n\n**Closed issues:**\n- RMSNorm fails to compile with Reactant (#1637)","2026-01-25T19:46:56",{"id":220,"version":221,"summary_zh":222,"released_at":223},127190,"v1.30.0","## Lux v1.30.0\n\n[Diff since v1.29.6](https:\u002F\u002Fgithub.com\u002FLuxDL\u002FLux.jl\u002Fcompare\u002Fv1.29.6...v1.30.0)\n\n\n**Merged pull requests:**\n- test(Lux): migrate testing to v1.12 (#1634) (@avik-pal)","2026-01-24T04:34:19",{"id":225,"version":226,"summary_zh":227,"released_at":228},127191,"v1.29.6","## Lux v1.29.6\n\n[Diff since v1.29.5](https:\u002F\u002Fgithub.com\u002FLuxDL\u002FLux.jl\u002Fcompare\u002Fv1.29.5...v1.29.6)\n\n\n**Merged pull requests:**\n- feat: parallel precompile for reactant workloads (#1638) (@avik-pal)","2026-01-23T22:20:48",{"id":230,"version":231,"summary_zh":232,"released_at":233},127192,"v1.29.5","## Lux v1.29.5\n\n[Diff since v1.29.4](https:\u002F\u002Fgithub.com\u002FLuxDL\u002FLux.jl\u002Fcompare\u002Fv1.29.4...v1.29.5)\n\n\n**Merged pull requests:**\n- fix: reactant precompilation + throw error (#1631) (@avik-pal)\n\n**Closed issues:**\n- RMSNorm fails to 
compile with Reactant (#1637)","2026-01-23T19:17:13",{"id":235,"version":236,"summary_zh":237,"released_at":238},127193,"MLDataDevices-v1.17.2","## MLDataDevices MLDataDevices-v1.17.2\n\n[Diff since MLDataDevices-v1.17.1](https:\u002F\u002Fgithub.com\u002FLuxDL\u002FLux.jl\u002Fcompare\u002FMLDataDevices-v1.17.1...MLDataDevices-v1.17.2)\n\n\n**Merged pull requests:**\n- feat: update LuxTestUtils to support 1.12 (#1535) (@avik-pal)\n- Allow for empty Chains. (#1623) (@ispielma)\n- chore: bump crate-ci\u002Ftypos from 1.41.0 to 1.42.0 (#1625) (@dependabot[bot])\n- fix: explicit imports failures from 0.14.2 (#1626) (@avik-pal)\n- fix: temporarily disable precompile workloads for reactant (#1628) (@avik-pal)\n- feat: various fixups for nicer JETLS interaction (#1630) (@avik-pal)\n- test(LuxLib): migrate testing to v1.12 (#1633) (@avik-pal)\n- fix: update tracing to new Reactant API (#1636) (@avik-pal)\n\n**Closed issues:**\n- Layer summary page of docs is weirdly formatted (#1627)\n- Precompilation fails when Flux is in the project (#1629)","2026-01-20T00:53:49",{"id":240,"version":241,"summary_zh":242,"released_at":243},127194,"v1.29.4","## Lux v1.29.4\n\n[Diff since v1.29.3](https:\u002F\u002Fgithub.com\u002FLuxDL\u002FLux.jl\u002Fcompare\u002Fv1.29.3...v1.29.4)\n\n\n**Merged pull requests:**\n- feat: update LuxTestUtils to support 1.12 (#1535) (@avik-pal)\n- test(LuxLib): migrate testing to v1.12 (#1633) (@avik-pal)\n- fix: update tracing to new Reactant API (#1636) (@avik-pal)","2026-01-20T00:23:11",{"id":245,"version":246,"summary_zh":247,"released_at":248},127195,"LuxCore-v1.5.3","## LuxCore LuxCore-v1.5.3\n\n[Diff since LuxCore-v1.5.2](https:\u002F\u002Fgithub.com\u002FLuxDL\u002FLux.jl\u002Fcompare\u002FLuxCore-v1.5.2...LuxCore-v1.5.3)\n\n\n**Merged pull requests:**\n- feat: update LuxTestUtils to support 1.12 (#1535) (@avik-pal)\n- test(LuxLib): migrate testing to v1.12 (#1633) (@avik-pal)","2026-01-20T00:11:11",{"id":250,"version":251,"summary_zh":252,"released_at":253},127196,"LuxLib-v1.15.2","## LuxLib LuxLib-v1.15.2\n\n[Diff since LuxLib-v1.15.1](https:\u002F\u002Fgithub.com\u002FLuxDL\u002FLux.jl\u002Fcompare\u002FLuxLib-v1.15.1...LuxLib-v1.15.2)\n\n\n**Merged pull requests:**\n- feat: update LuxTestUtils to support 1.12 (#1535) (@avik-pal)\n- test(LuxLib): migrate testing to v1.12 (#1633) (@avik-pal)","2026-01-17T17:12:13"]
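下面补充一个最小调用示例草图，对应前文 FAQ 中关于 Enzyme.jl 支持的回答：模型、输入与状态包装为 `Const`，参数包装为 `Duplicated`。示例中的模型结构、损失函数与输入尺寸均为示意性假设，并非 Lux 官方示例代码。

```julia
# 最小示例草图（示意性假设，非 Lux 官方代码）：
# 按 FAQ 给出的 Const/Duplicated 模式，用 Enzyme 反向模式对 Lux.Dense 的参数求梯度。
using Lux, Enzyme, Random

model = Dense(4 => 2)                             # FAQ 中提到 Lux.Dense 已完全支持
ps, st = Lux.setup(Random.default_rng(), model)   # 初始化参数与状态
x = randn(Float32, 4, 8)                          # 假设的一批输入（特征维 4，批大小 8）

# 标量损失：对层输出求平方和；Enzyme 将对 ps 求导
function loss_function(model, x, ps, st)
    y, _ = model(x, ps, st)
    return sum(abs2, y)
end

dps = Enzyme.make_zero(ps)   # 与 ps 结构相同的梯度缓冲区（需较新版本的 Enzyme）
# 调用形式沿用 FAQ 中给出的写法（Enzyme v0.10 时期）；新版 Enzyme 可能需要显式指定返回激活类型
Enzyme.autodiff(Reverse, loss_function,
    Const(model), Const(x), Duplicated(ps, dps), Const(st))
# 运行后 dps 中即为损失对各参数的梯度
```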