[{"data":1,"prerenderedAt":-1},["ShallowReactive",2],{"similar-google-deepmind--rlax":3,"tool-google-deepmind--rlax":64},[4,17,27,35,43,56],{"id":5,"name":6,"github_repo":7,"description_zh":8,"stars":9,"difficulty_score":10,"last_commit_at":11,"category_tags":12,"status":16},3808,"stable-diffusion-webui","AUTOMATIC1111\u002Fstable-diffusion-webui","stable-diffusion-webui 是一个基于 Gradio 构建的网页版操作界面，旨在让用户能够轻松地在本地运行和使用强大的 Stable Diffusion 图像生成模型。它解决了原始模型依赖命令行、操作门槛高且功能分散的痛点，将复杂的 AI 绘图流程整合进一个直观易用的图形化平台。\n\n无论是希望快速上手的普通创作者、需要精细控制画面细节的设计师，还是想要深入探索模型潜力的开发者与研究人员，都能从中获益。其核心亮点在于极高的功能丰富度：不仅支持文生图、图生图、局部重绘（Inpainting）和外绘（Outpainting）等基础模式，还独创了注意力机制调整、提示词矩阵、负向提示词以及“高清修复”等高级功能。此外，它内置了 GFPGAN 和 CodeFormer 等人脸修复工具，支持多种神经网络放大算法，并允许用户通过插件系统无限扩展能力。即使是显存有限的设备，stable-diffusion-webui 也提供了相应的优化选项，让高质量的 AI 艺术创作变得触手可及。",162132,3,"2026-04-05T11:01:52",[13,14,15],"开发框架","图像","Agent","ready",{"id":18,"name":19,"github_repo":20,"description_zh":21,"stars":22,"difficulty_score":23,"last_commit_at":24,"category_tags":25,"status":16},1381,"everything-claude-code","affaan-m\u002Feverything-claude-code","everything-claude-code 是一套专为 AI 编程助手（如 Claude Code、Codex、Cursor 等）打造的高性能优化系统。它不仅仅是一组配置文件，而是一个经过长期实战打磨的完整框架，旨在解决 AI 代理在实际开发中面临的效率低下、记忆丢失、安全隐患及缺乏持续学习能力等核心痛点。\n\n通过引入技能模块化、直觉增强、记忆持久化机制以及内置的安全扫描功能，everything-claude-code 能显著提升 AI 在复杂任务中的表现，帮助开发者构建更稳定、更智能的生产级 AI 代理。其独特的“研究优先”开发理念和针对 Token 消耗的优化策略，使得模型响应更快、成本更低，同时有效防御潜在的攻击向量。\n\n这套工具特别适合软件开发者、AI 研究人员以及希望深度定制 AI 工作流的技术团队使用。无论您是在构建大型代码库，还是需要 AI 协助进行安全审计与自动化测试，everything-claude-code 都能提供强大的底层支持。作为一个曾荣获 Anthropic 黑客大奖的开源项目，它融合了多语言支持与丰富的实战钩子（hooks），让 AI 真正成长为懂上",140436,2,"2026-04-05T23:32:43",[13,15,26],"语言模型",{"id":28,"name":29,"github_repo":30,"description_zh":31,"stars":32,"difficulty_score":23,"last_commit_at":33,"category_tags":34,"status":16},2271,"ComfyUI","Comfy-Org\u002FComfyUI","ComfyUI 是一款功能强大且高度模块化的视觉 AI 引擎，专为设计和执行复杂的 Stable Diffusion 图像生成流程而打造。它摒弃了传统的代码编写模式，采用直观的节点式流程图界面，让用户通过连接不同的功能模块即可构建个性化的生成管线。\n\n这一设计巧妙解决了高级 AI 绘图工作流配置复杂、灵活性不足的痛点。用户无需具备编程背景，也能自由组合模型、调整参数并实时预览效果，轻松实现从基础文生图到多步骤高清修复等各类复杂任务。ComfyUI 拥有极佳的兼容性，不仅支持 Windows、macOS 和 Linux 全平台，还广泛适配 NVIDIA、AMD、Intel 及苹果 Silicon 等多种硬件架构，并率先支持 SDXL、Flux、SD3 等前沿模型。\n\n无论是希望深入探索算法潜力的研究人员和开发者，还是追求极致创作自由度的设计师与资深 AI 绘画爱好者，ComfyUI 都能提供强大的支持。其独特的模块化架构允许社区不断扩展新功能，使其成为当前最灵活、生态最丰富的开源扩散模型工具之一，帮助用户将创意高效转化为现实。",107662,"2026-04-03T11:11:01",[13,14,15],{"id":36,"name":37,"github_repo":38,"description_zh":39,"stars":40,"difficulty_score":23,"last_commit_at":41,"category_tags":42,"status":16},3704,"NextChat","ChatGPTNextWeb\u002FNextChat","NextChat 是一款轻量且极速的 AI 助手，旨在为用户提供流畅、跨平台的大模型交互体验。它完美解决了用户在多设备间切换时难以保持对话连续性，以及面对众多 AI 模型不知如何统一管理的痛点。无论是日常办公、学习辅助还是创意激发，NextChat 都能让用户随时随地通过网页、iOS、Android、Windows、MacOS 或 Linux 端无缝接入智能服务。\n\n这款工具非常适合普通用户、学生、职场人士以及需要私有化部署的企业团队使用。对于开发者而言，它也提供了便捷的自托管方案，支持一键部署到 Vercel 或 Zeabur 等平台。\n\nNextChat 的核心亮点在于其广泛的模型兼容性，原生支持 Claude、DeepSeek、GPT-4 及 Gemini Pro 等主流大模型，让用户在一个界面即可自由切换不同 AI 能力。此外，它还率先支持 MCP（Model Context Protocol）协议，增强了上下文处理能力。针对企业用户，NextChat 提供专业版解决方案，具备品牌定制、细粒度权限控制、内部知识库整合及安全审计等功能，满足公司对数据隐私和个性化管理的高标准要求。",87618,"2026-04-05T07:20:52",[13,26],{"id":44,"name":45,"github_repo":46,"description_zh":47,"stars":48,"difficulty_score":23,"last_commit_at":49,"category_tags":50,"status":16},2268,"ML-For-Beginners","microsoft\u002FML-For-Beginners","ML-For-Beginners 是由微软推出的一套系统化机器学习入门课程，旨在帮助零基础用户轻松掌握经典机器学习知识。这套课程将学习路径规划为 12 周，包含 26 节精炼课程和 52 道配套测验，内容涵盖从基础概念到实际应用的完整流程，有效解决了初学者面对庞大知识体系时无从下手、缺乏结构化指导的痛点。\n\n无论是希望转型的开发者、需要补充算法背景的研究人员，还是对人工智能充满好奇的普通爱好者，都能从中受益。课程不仅提供了清晰的理论讲解，还强调动手实践，让用户在循序渐进中建立扎实的技能基础。其独特的亮点在于强大的多语言支持，通过自动化机制提供了包括简体中文在内的 
50 多种语言版本，极大地降低了全球不同背景用户的学习门槛。此外，项目采用开源协作模式，社区活跃且内容持续更新，确保学习者能获取前沿且准确的技术资讯。如果你正寻找一条清晰、友好且专业的机器学习入门之路，ML-For-Beginners 将是理想的起点。",84991,"2026-04-05T10:45:23",[14,51,52,53,15,54,26,13,55],"数据工具","视频","插件","其他","音频",{"id":57,"name":58,"github_repo":59,"description_zh":60,"stars":61,"difficulty_score":10,"last_commit_at":62,"category_tags":63,"status":16},3128,"ragflow","infiniflow\u002Fragflow","RAGFlow 是一款领先的开源检索增强生成（RAG）引擎，旨在为大语言模型构建更精准、可靠的上下文层。它巧妙地将前沿的 RAG 技术与智能体（Agent）能力相结合，不仅支持从各类文档中高效提取知识，还能让模型基于这些知识进行逻辑推理和任务执行。\n\n在大模型应用中，幻觉问题和知识滞后是常见痛点。RAGFlow 通过深度解析复杂文档结构（如表格、图表及混合排版），显著提升了信息检索的准确度，从而有效减少模型“胡编乱造”的现象，确保回答既有据可依又具备时效性。其内置的智能体机制更进一步，使系统不仅能回答问题，还能自主规划步骤解决复杂问题。\n\n这款工具特别适合开发者、企业技术团队以及 AI 研究人员使用。无论是希望快速搭建私有知识库问答系统，还是致力于探索大模型在垂直领域落地的创新者，都能从中受益。RAGFlow 提供了可视化的工作流编排界面和灵活的 API 接口，既降低了非算法背景用户的上手门槛，也满足了专业开发者对系统深度定制的需求。作为基于 Apache 2.0 协议开源的项目，它正成为连接通用大模型与行业专有知识之间的重要桥梁。",77062,"2026-04-04T04:44:48",[15,14,13,26,54],{"id":65,"github_repo":66,"name":67,"description_en":68,"description_zh":69,"ai_summary_zh":69,"readme_en":70,"readme_zh":71,"quickstart_zh":72,"use_case_zh":73,"hero_image_url":74,"owner_login":75,"owner_name":76,"owner_avatar_url":77,"owner_bio":78,"owner_company":68,"owner_location":68,"owner_email":68,"owner_twitter":68,"owner_website":79,"owner_url":80,"languages":81,"stars":90,"forks":85,"last_commit_at":91,"license":92,"difficulty_score":89,"env_os":93,"env_gpu":94,"env_ram":93,"env_deps":95,"category_tags":102,"github_topics":68,"view_count":10,"oss_zip_url":68,"oss_zip_packed_at":68,"status":16,"created_at":103,"updated_at":104,"faqs":105,"releases":141},249,"google-deepmind\u002Frlax","rlax",null,"rlax（发音为\"relax\"）是由 DeepMind 开发的强化学习库，基于 JAX 框架构建。它提供了一系列强化学习所需的数学运算和函数组件，帮助开发者快速构建能够学习的智能体。\n\nrlax 主要解决了强化学习算法实现中的重复劳动问题。强化学习涉及大量特定的数学计算，如价值函数估计、Bellman 方程计算、策略梯度等，这些基础操作在实现不同算法时往往大同小异。rlax 将这些通用组件提取出来，开发者可以直接调用，无需从头编写。\n\n这个库特别适合两类用户：一是强化学习研究人员，可以快速原型验证新算法想法；二是深度学习开发者，想在实际项目中使用强化学习但不想从零实现基础组件。对于学生和学习者来说，通过阅读 rlax 的源码也是理解强化学习数学原理的好途径。\n\nrlax 的技术亮点在于充分利用了 JAX 的优势：所有代码都可以通过 JIT 编译在不同硬件（CPU、GPU、TPU）上高效运行；同时支持 on-policy 和 off-policy 两种学习范式；提供了包括状态价值函数、动作价值函数、分布式价值函数、策略梯度等多种强化学习核心组件。安装简单，通过 pip 即可快速上手。","# RLax\n\n![CI status](https:\u002F\u002Fgithub.com\u002Fdeepmind\u002Frlax\u002Fworkflows\u002Fci\u002Fbadge.svg)\n![docs](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fgoogle-deepmind_rlax_readme_13d664e1afd7.png)\n![pypi](https:\u002F\u002Fimg.shields.io\u002Fpypi\u002Fv\u002Frlax)\n\nRLax (pronounced \"relax\") is a library built on top of JAX that exposes\nuseful building blocks for implementing reinforcement learning agents. Full\ndocumentation can be found at\n [rlax.readthedocs.io](https:\u002F\u002Frlax.readthedocs.io\u002Fen\u002Flatest\u002Findex.html).\n\n## Installation\n\nYou can install the latest released version of RLax from PyPI via:\n\n```sh\npip install rlax\n```\n\nor you can install the latest development version from GitHub:\n\n```sh\npip install git+https:\u002F\u002Fgithub.com\u002Fdeepmind\u002Frlax.git\n```\n\nAll RLax code may then be just in time compiled for different hardware\n(e.g. 
CPU, GPU, TPU) using `jax.jit`.\n\nIn order to run the `examples\u002F` you will also need to clone the repo and\ninstall the additional requirements:\n[optax](https:\u002F\u002Fgithub.com\u002Fdeepmind\u002Foptax),\n[haiku](https:\u002F\u002Fgithub.com\u002Fdeepmind\u002Fhaiku), and\n[bsuite](https:\u002F\u002Fgithub.com\u002Fdeepmind\u002Fbsuite).\n\n## Content\n\nThe operations and functions provided are not complete algorithms, but\nimplementations of reinforcement-learning-specific mathematical operations that\nare needed when building fully-functional agents capable of learning:\n\n* Values, including both state and action-values;\n* Values for non-linear generalizations of the Bellman equations;\n* Return Distributions, aka distributional value functions;\n* General Value Functions, for cumulants other than the main reward;\n* Policies, via policy-gradients in both continuous and discrete action spaces.\n\nThe library supports both on-policy and off-policy learning (i.e. learning from\ndata sampled from a policy different from the agent's policy).\n\nSee file-level and function-level doc-strings for the documentation of these\nfunctions and for references to the papers that introduced and\u002For used them.\n\n## Usage\n\nSee `examples\u002F` for examples of using some of the functions in RLax to\nimplement a few simple reinforcement learning agents, and demonstrate learning\non BSuite's version of the Catch environment (a common unit-test for\nagent development in the reinforcement learning literature).\n\nOther examples of JAX reinforcement learning agents using `rlax` can be found in\n[bsuite](https:\u002F\u002Fgithub.com\u002Fdeepmind\u002Fbsuite\u002Ftree\u002Fmaster\u002Fbsuite\u002Fbaselines).\n\n## Background\n\nReinforcement learning studies the problem of a learning system (the *agent*),\nwhich must learn to interact with the universe it is embedded in (the\n*environment*).\n\nThe agent and environment interact in discrete steps. On each step the agent selects\nan *action*, and is provided in return a (partial) snapshot of the state of the\nenvironment (the *observation*), and a scalar feedback signal (the *reward*).\n\nThe behaviour of the agent is characterized by a probability distribution over\nactions, conditioned on past observations of the environment (the *policy*). The\nagent seeks a policy that, from any given step, maximises the discounted\ncumulative reward that will be collected from that point onwards (the *return*).\n\nOften the agent's policy or the environment dynamics are themselves stochastic. In\nthis case the return is a random variable, and the optimal policy is\ntypically more precisely specified as one that maximises the expectation of\nthe return (the *value*), under the agent's and environment's stochasticity.\n\n## Reinforcement Learning Algorithms\n\nThere are three prototypical families of reinforcement learning algorithms:\n\n1.  those that estimate the value of states and actions, and infer a policy by\n    *inspection* (e.g. by selecting the action with the highest estimated value);\n2.  those that learn a model of the environment (capable of predicting the\n    observations and rewards) and infer a policy via *planning*;\n3.  those that parameterize a policy that can be directly *executed*.\n\nIn any case, policies, values or models are just functions. 
In deep\nreinforcement learning such functions are represented by a neural network.\nIn this setting, it is common to formulate reinforcement learning updates as\ndifferentiable pseudo-loss functions (analogously to (un-)supervised learning).\nUnder automatic differentiation, the original update rule is recovered.\n\nNote, however, that the updates are only valid if the input data\nis sampled in the correct manner. For example, a policy gradient loss is only\nvalid if the input trajectory is an unbiased sample from the current policy;\ni.e. the data are on-policy. The library cannot check or enforce such\nconstraints. Links to papers describing how each operation is used are, however,\nprovided in the functions' doc-strings.\n\n## Naming Conventions and Developer Guidelines\n\nWe define functions and operations for agents interacting with a single stream\nof experience. The JAX construct `vmap` can be used to apply these same\nfunctions to batches (e.g. to support *replay* and *parallel* data generation).\n\nMany functions consider policies, actions, rewards and values at consecutive\ntimesteps in order to compute their outputs. In this case the suffixes `_t` and\n`_tm1` are often used to clarify at which step each input was generated, e.g.:\n\n*   `q_tm1`: the action value in the `source` state of a transition.\n*   `a_tm1`: the action that was selected in the `source` state.\n*   `r_t`: the resulting reward collected in the `destination` state.\n*   `discount_t`: the `discount` associated with a transition.\n*   `q_t`: the action values in the `destination` state.\n
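\nFor instance, a minimal sketch of this convention (applying `rlax.q_learning` to a single transition, with arbitrary placeholder values):\n\n```python\nimport jax.numpy as jnp\nimport rlax\n\nq_tm1 = jnp.array([1.0, 2.0])   # action values in the source state\na_tm1 = jnp.int32(0)            # action selected in the source state\nr_t = jnp.float32(0.5)          # reward collected in the destination state\ndiscount_t = jnp.float32(0.99)  # discount associated with the transition\nq_t = jnp.array([1.5, 0.0])     # action values in the destination state\n\n# Q-learning temporal-difference error for this single transition.\ntd_error = rlax.q_learning(q_tm1, a_tm1, r_t, discount_t, q_t)\n```\n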
\nExtensive testing is provided for each function. All tests should also verify\nthe output of `rlax` functions when compiled to XLA using `jax.jit` and when\nperforming batch operations using `jax.vmap`.\n\n## Citing RLax\n\nThis repository is part of the DeepMind JAX Ecosystem; to cite RLax,\nplease use the following citation:\n\n```bibtex\n@software{deepmind2020jax,\n  title = {The {D}eep{M}ind {JAX} {E}cosystem},\n  author = {DeepMind and Babuschkin, Igor and Baumli, Kate and Bell, Alison and Bhupatiraju, Surya and Bruce, Jake and Buchlovsky, Peter and Budden, David and Cai, Trevor and Clark, Aidan and Danihelka, Ivo and Dedieu, Antoine and Fantacci, Claudio and Godwin, Jonathan and Jones, Chris and Hemsley, Ross and Hennigan, Tom and Hessel, Matteo and Hou, Shaobo and Kapturowski, Steven and Keck, Thomas and Kemaev, Iurii and King, Michael and Kunesch, Markus and Martens, Lena and Merzic, Hamza and Mikulik, Vladimir and Norman, Tamara and Papamakarios, George and Quan, John and Ring, Roman and Ruiz, Francisco and Sanchez, Alvaro and Sartran, Laurent and Schneider, Rosalia and Sezener, Eren and Spencer, Stephen and Srinivasan, Srivatsan and Stanojevi\\'{c}, Milo\\v{s} and Stokowiec, Wojciech and Wang, Luyu and Zhou, Guangyao and Viola, Fabio},\n  url = {http:\u002F\u002Fgithub.com\u002Fdeepmind},\n  year = {2020},\n}\n```\n","# RLax\n\n![CI status](https:\u002F\u002Fgithub.com\u002Fdeepmind\u002Frlax\u002Fworkflows\u002Fci\u002Fbadge.svg)\n![docs](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fgoogle-deepmind_rlax_readme_13d664e1afd7.png)\n![pypi](https:\u002F\u002Fimg.shields.io\u002Fpypi\u002Fv\u002Frlax)\n\nRLax（发音为\"relax\"）是一个构建在 JAX 之上的库，提供了用于实现强化学习智能体的有用构建块。完整文档可在 [rlax.readthedocs.io](https:\u002F\u002Frlax.readthedocs.io\u002Fen\u002Flatest\u002Findex.html) 查阅。\n\n## 安装\n\n您可以通过 PyPI 安装 RLax 的最新发布版本：\n\n```sh\npip install rlax\n```\n\n或者您可以从 GitHub 安装最新的开发版本：\n\n```sh\npip install git+https:\u002F\u002Fgithub.com\u002Fdeepmind\u002Frlax.git\n```\n\n所有 RLax 代码都可以使用 `jax.jit` 进行即时编译，以在不同硬件（如 CPU、GPU、TPU）上运行。\n\n为了运行 `examples\u002F`，您还需要克隆仓库并安装额外的依赖项：\n[optax](https:\u002F\u002Fgithub.com\u002Fdeepmind\u002Foptax)、\n[haiku](https:\u002F\u002Fgithub.com\u002Fdeepmind\u002Fhaiku) 和\n[bsuite](https:\u002F\u002Fgithub.com\u002Fdeepmind\u002Fbsuite)。\n\n## 内容\n\n提供的操作和函数并非完整算法，而是实现了强化学习特有的数学运算，这些运算在构建功能完整、能够学习的智能体时是必需的：\n\n* 价值函数（Values），包括状态价值和动作价值；\n* 贝尔曼方程（Bellman equations）非线性推广对应的价值函数；\n* 回报分布（Return Distributions），又称分布式价值函数；\n* 广义价值函数（General Value Functions），用于除主要奖励之外的累积量；\n* 策略（Policies），通过连续和离散动作空间中的策略梯度（policy-gradients）实现。\n\n该库同时支持在线策略学习（on-policy learning）和离线策略学习（off-policy learning，即从与智能体当前策略不同的策略所采样的数据中学习）。\n\n请参阅文件级和函数级的文档字符串，以获取这些函数的文档以及引入和使用这些函数的论文引用。\n\n## 用法\n\n请参阅 `examples\u002F` 中的示例，了解如何使用 RLax 中的一些函数实现简单的强化学习智能体，并在 BSuite 的 Catch 环境（强化学习文献中常见的智能体开发单元测试）上展示学习效果。\n\n其他使用 `rlax` 的 JAX 强化学习智能体示例可在\n[bsuite](https:\u002F\u002Fgithub.com\u002Fdeepmind\u002Fbsuite\u002Ftree\u002Fmaster\u002Fbsuite\u002Fbaselines) 中找到。\n\n## 背景\n\n强化学习研究的是学习系统（*智能体*）的问题，该系统必须学会与其所嵌入的世界（*环境*）进行交互。\n\n智能体和环境以离散的步骤进行交互。在每一步，智能体选择一个*动作*，并获得环境状态的（部分）快照（*观测*）以及一个标量反馈信号（*奖励*）。\n\n智能体的行为由以过去环境观测为条件的动作概率分布来表征（*策略*）。智能体寻求这样一种策略：从任何给定的步骤开始，它都能最大化从该时刻起将获得的折扣累积奖励（*回报*）。\n\n通常，智能体的策略或环境动力学本身是随机的。在这种情况下，回报是一个随机变量，最优策略通常更精确地定义为：在智能体和环境的随机性下，最大化回报期望（*价值*）的策略。\n\n## 强化学习算法\n\n强化学习算法主要有三类原型：\n\n1.  估计状态和动作的价值，并通过*检查*推断策略（例如，选择估计价值最高的动作）；\n2.  学习环境模型（能够预测观测和奖励），并通过*规划*推断策略；\n3.  对可以直接*执行*的策略进行参数化。\n\n无论哪一类，策略、价值或模型都只是函数。在深度强化学习中，这些函数由神经网络表示。在这种情况下，通常将强化学习更新表述为可微分的伪损失函数（类似于（无）监督学习）。在自动微分下，即可恢复原始更新规则。\n\n但请注意，只有当输入数据以正确的方式采样时，这些更新才是有效的。例如，策略梯度损失仅在输入轨迹是当前策略的无偏样本时才有效，即数据是在线策略（on-policy）的。该库无法检查或强制执行此类约束，但函数文档字符串中提供了描述每个操作如何使用的论文链接。\n\n## 命名约定和开发指南\n\n我们为与单一经验流交互的智能体定义函数和操作。JAX 的 `vmap` 构造可用于将这些函数应用于批次（例如，支持*回放*和*并行*数据生成）。\n\n许多函数会考虑连续时间步中的策略、动作、奖励和价值，以计算其输出。在这种情况下，通常使用后缀 `_t` 和 `_tm1` 来标明每个输入是在哪一步生成的，例如：\n\n*   `q_tm1`：转换*源*状态中的动作价值。\n*   `a_tm1`：在*源*状态中选择的动作。\n*   `r_t`：在*目标*状态中获得的奖励。\n*   `discount_t`：与转换相关的*折扣*。\n*   `q_t`：*目标*状态中的动作价值。\n\n每个函数都配有广泛的测试。所有测试还应验证 `rlax` 函数在使用 `jax.jit` 编译为 XLA 时，以及使用 `jax.vmap` 执行批处理操作时的输出。\n\n## 引用 RLax\n\n该仓库是 DeepMind JAX 生态系统的一部分，引用 RLax 请使用以下引用格式：\n\n```bibtex\n@software{deepmind2020jax,\n  title = {The {D}eep{M}ind {JAX} {E}cosystem},\n  author = {DeepMind and Babuschkin, Igor and Baumli, Kate and Bell, Alison and Bhupatiraju, Surya and Bruce, Jake and Buchlovsky, Peter and Budden, David and Cai, Trevor and Clark, Aidan and Danihelka, Ivo and Dedieu, Antoine and Fantacci, Claudio and Godwin, Jonathan and Jones, Chris and Hemsley, Ross and Hennigan, Tom and Hessel, Matteo and Hou, Shaobo and Kapturowski, Steven and Keck, Thomas and Kemaev, Iurii and King, Michael and Kunesch, Markus and Martens, Lena and Merzic, Hamza and Mikulik, Vladimir and Norman, Tamara and Papamakarios, George and Quan, John and Ring, Roman and Ruiz, Francisco and Sanchez, Alvaro and Sartran, Laurent and Schneider, Rosalia and Sezener, Eren and Spencer, Stephen and Srinivasan, Srivatsan and Stanojevi\\'{c}, Milo\\v{s} and Stokowiec, Wojciech and Wang, Luyu and Zhou, Guangyao and Viola, Fabio},\n  url = {http:\u002F\u002Fgithub.com\u002Fdeepmind},\n  year = {2020},\n}\n```","# RLax 快速上手指南\n\n## 环境准备\n\n**系统要求**\n- Python 3.9+（rlax v0.1.6 起已不再支持 Python 3.7\u002F3.8）\n- JAX 支持的硬件平台（CPU、GPU 或 TPU）\n\n**前置依赖**\n- JAX：RLax 基于 JAX 构建，需提前安装\n- NumPy\n\n```sh\npip install jax jaxlib numpy\n```\n\n## 安装步骤\n\n**方式一：从 PyPI 安装稳定版**\n\n```sh\npip install rlax\n```\n\n**方式二：安装最新开发版**\n\n```sh\npip install 
git+https:\u002F\u002Fgithub.com\u002Fdeepmind\u002Frlax.git\n```\n\n**国内加速（可选）**\n\n若访问 GitHub 较慢，可使用国内镜像：\n\n```sh\npip install git+https:\u002F\u002Fgitee.com\u002Fmirrors\u002Frlax.git\n```\n\n**运行示例所需额外依赖**\n\n如需运行 `examples\u002F` 目录下的示例，还需安装以下库：\n\n```sh\npip install optax haiku bsuite\n```\n\n## 基本使用\n\nRLax 提供了强化学习常用的数学运算操作，可与 JAX 的 `jit` 编译和 `vmap` 批处理配合使用。注意：RLax 的函数（如 `rlax.q_learning`）是直接调用的普通函数而非工厂函数，且针对单条转移定义，处理批量数据时需配合 `jax.vmap`。\n\n**简单示例：计算 Q-learning 的 TD 误差**\n\n```python\nimport jax\nimport jax.numpy as jnp\nimport rlax\n\n# 假设有如下数据（batch_size=2）\nq_tm1 = jnp.array([[0.5, 0.3], [0.2, 0.8]])  # 当前 Q 值\na_tm1 = jnp.array([0, 1])                     # 采取的动作\nr_t = jnp.array([1.0, 0.5])                   # 即时奖励\ndiscount_t = jnp.array([0.99, 0.99])          # 折扣因子\nq_t = jnp.array([[0.4, 0.6], [0.3, 0.7]])     # 目标 Q 值\n\n# rlax.q_learning 针对单条转移定义，用 jax.vmap 映射到整个批次\ntd_error = jax.vmap(rlax.q_learning)(q_tm1, a_tm1, r_t, discount_t, q_t)\nprint(td_error)\n```\n\n**关键命名约定**\n\nRLax 函数通常使用以下后缀区分不同时间步的数据：\n\n- `_t`：当前时间步的值\n- `_tm1`：上一时间步的值\n\n例如：\n- `q_tm1`：转换源状态的动作价值\n- `a_tm1`：转换源状态选择的动作\n- `r_t`：转换目标状态获得的奖励\n- `discount_t`：转换对应的折扣因子\n- `q_t`：转换目标状态的动作价值\n\n**编译与批处理**\n\n```python\nimport jax\nimport rlax\n\n# 使用 vmap 将针对单条转移定义的函数映射到批次维度\nbatch_q_learning = jax.vmap(rlax.q_learning, in_axes=(0, 0, 0, 0, 0))\n\n# JIT 编译以获得最佳性能\nbatch_q_learning_jit = jax.jit(batch_q_learning)\n```\n
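\n**延伸示例：状态价值的 TD 学习（示意）**\n\n下面是一个遵循同样命名约定的最小示意，假设使用 `rlax.td_learning` 计算单条转移的状态价值 TD 误差（数值为随意构造的占位值）：\n\n```python\nimport jax.numpy as jnp\nimport rlax\n\nv_tm1 = jnp.float32(0.8)        # 源状态的状态价值\nr_t = jnp.float32(1.0)          # 目标状态获得的奖励\ndiscount_t = jnp.float32(0.99)  # 转换对应的折扣因子\nv_t = jnp.float32(0.5)          # 目标状态的状态价值\n\n# 单条转移的 TD 误差；批量数据同样可用 jax.vmap 映射\ntd_error = rlax.td_learning(v_tm1, r_t, discount_t, v_t)\nprint(td_error)\n```\n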
\n更多示例请参考 `examples\u002F` 目录及 [官方文档](https:\u002F\u002Frlax.readthedocs.io\u002Fen\u002Flatest\u002Findex.html)。","# rlax 实际使用场景案例\n\n## 场景背景\n\n某 AI 实验室的研究员小张正在开发一个游戏 AI 代理，需要基于 Q-learning 算法实现一个能在复杂环境中自主学习的智能体。在开发过程中，他需要频繁实现各种强化学习数学操作。\n\n### 没有 rlax 时\n\n- **重复造轮子**：每次实现新的强化学习算法，都需要手动编写 Q 值更新、贝尔曼最优算子等基础数学公式，花费大量时间在代码调试而非算法研究上\n- **数值计算不稳定**：自己实现的 TD 误差和值函数更新容易出现数值溢出或梯度爆炸问题，导致训练过程不稳定\n- **硬件迁移困难**：针对 GPU 编写的张量操作，在切换到 TPU 或 CPU 时需要重写大量代码，无法实现一次编写、多平台运行\n- **缺乏分布式价值函数支持**：想实现 C51 等分布式强化学习算法时，需要从零实现回报分布的数学推导和代码实现，开发周期大幅延长\n- **代码维护成本高**：不同项目中的强化学习基础操作代码重复且分散，难以复用和统一维护\n\n### 使用 rlax 后\n\n- **直接调用成熟实现**：通过 `rlax.q_learning`、`rlax.td_learning` 等函数直接获得经过验证的更新逻辑，样板代码明显减少\n- **经过充分测试的数值实现**：rlax 的每个函数都配有广泛的测试（包括在 `jax.jit` 编译和 `jax.vmap` 批处理下校验输出），实现更加稳定可靠\n- **JIT 编译跨平台运行**：利用 `jax.jit`，代码可以无缝在 CPU、GPU、TPU 间切换，无需修改任何业务逻辑\n- **开箱即用的分布式价值函数**：直接使用 `rlax.categorical_double_q_learning` 等函数实现分布式强化学习，大幅降低算法实现门槛\n- **模块化代码易维护**：所有强化学习基础操作集中在 rlax 中管理，不同项目间可以方便地共享和复用同一套底层实现\n\n### 核心价值\n\nrlax 将强化学习领域多年积累的数学操作封装为高性能、可复用的组件，让研究人员能够专注于算法创新而非底层实现，显著提升开发效率。","https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fgoogle-deepmind_rlax_563c1f71.png","google-deepmind","Google DeepMind","https:\u002F\u002Foss.gittoolsai.com\u002Favatars\u002Fgoogle-deepmind_06b1dd17.png","","https:\u002F\u002Fwww.deepmind.com\u002F","https:\u002F\u002Fgithub.com\u002Fgoogle-deepmind",[82,86],{"name":83,"color":84,"percentage":85},"Python","#3572A5",99,{"name":87,"color":88,"percentage":89},"Shell","#89e051",1,1414,"2026-03-31T14:48:22","Apache-2.0","未说明","支持 GPU 和 TPU，但未说明具体型号、显存及 CUDA 版本要求",{"notes":96,"python":93,"dependencies":97},"rlax 是基于 JAX 的强化学习库，支持 CPU、GPU、TPU 硬件加速。运行 examples 需要额外安装 optax、haiku 和 bsuite。JAX 的 GPU\u002FTPU 支持需要单独配置 CUDA 和 cuDNN。",[98,99,100,101],"jax","optax","haiku","bsuite",[13],"2026-03-27T02:49:30.150509","2026-04-06T09:43:39.250442",[106,111,116,121,126,131,136],{"id":107,"question_zh":108,"answer_zh":109,"source_url":110},773,"RLax 库是否有文档和示例？","是的，RLax 已有完整的文档和示例。现在库中包含完整的在线 Q-learning 和 DQN 智能体示例，README 中也添加了指向 BSuite 示例的链接。文档可以在 https:\u002F\u002Frlax.readthedocs.io\u002F 查看。","https:\u002F\u002Fgithub.com\u002Fgoogle-deepmind\u002Frlax\u002Fissues\u002F5",{"id":112,"question_zh":113,"answer_zh":114,"source_url":115},774,"如何解决 ImportError: cannot import name 'tree_multimap' from 'jax.tree_util' 错误？","这是因为使用了旧版本的 rlax。该问题已在版本 0.1.4 中修复。请升级到最新版本的 rlax：pip install --upgrade rlax","https:\u002F\u002Fgithub.com\u002Fgoogle-deepmind\u002Frlax\u002Fissues\u002F100",{"id":117,"question_zh":118,"answer_zh":119,"source_url":120},775,"为什么 vtrace 使用 Python for 循环而不是 lax.scan？","主要原因是代码可读性。由于编译只发生一次，开发团队在收到反馈后选择使用 for 循环版本，因为它更容易阅读和修改。虽然 lax.scan 版本编译更快，但团队优先考虑了代码的可维护性。","https:\u002F\u002Fgithub.com\u002Fgoogle-deepmind\u002Frlax\u002Fissues\u002F16",{"id":122,"question_zh":123,"answer_zh":124,"source_url":125},776,"如何解决 rlax 与新版 jax 或 numpy 的兼容性问题？","该问题已在 PR #114 中修复。之前的版本对 numpy 有 \u003C 1.23 的限制，但随着新版 jax 的发布，这个限制已不再需要。请更新到最新版本的 rlax。","https:\u002F\u002Fgithub.com\u002Fgoogle-deepmind\u002Frlax\u002Fissues\u002F110",{"id":127,"question_zh":128,"answer_zh":129,"source_url":130},777,"rlax 是否有中心化的文档？","是的，RLax 已有中心化文档，访问地址为：https:\u002F\u002Frlax.readthedocs.io\u002Fen\u002Flatest\u002F","https:\u002F\u002Fgithub.com\u002Fgoogle-deepmind\u002Frlax\u002Fissues\u002F37",{"id":132,"question_zh":133,"answer_zh":134,"source_url":135},778,"如何解决 rlax 与新版 dm-haiku 的兼容性问题？","该问题已在新版本 0.1.2 中解决。新版本放宽了 jax 版本约束，现在可以与最新版本的 dm-haiku (v0.0.6) 兼容。请更新到最新版本：pip install --upgrade rlax","https:\u002F\u002Fgithub.com\u002Fgoogle-deepmind\u002Frlax\u002Fissues\u002F60",{"id":137,"question_zh":138,"answer_zh":139,"source_url":140},779,"discount = 0 是否意味着终止状态？","是的，在 rlax 中 discount = 0 用于表示终止状态。通过在 discount 数组中适当设置零，可以将多个 episode 无缝地拼接成一个序列来处理。","https:\u002F\u002Fgithub.com\u002Fgoogle-deepmind\u002Frlax\u002Fissues\u002F17",[142,147,152,157,162,167,172,177,182],{"id":143,"version":144,"summary_zh":145,"released_at":146},110005,"v0.1.8","## What's Changed\r\n* Migrate to pyproject.toml by @copybara-service[bot] in https:\u002F\u002Fgithub.com\u002Fgoogle-deepmind\u002Frlax\u002Fpull\u002F147\r\n\r\n\r\n**Full Changelog**: https:\u002F\u002Fgithub.com\u002Fgoogle-deepmind\u002Frlax\u002Fcompare\u002Fv0.1.7...v0.1.8","2025-09-01T23:39:10",{"id":148,"version":149,"summary_zh":150,"released_at":151},110006,"v0.1.7","## What's Changed\r\n* Replace deprecated `jax.tree_*` functions with `jax.tree.*` by @copybara-service in https:\u002F\u002Fgithub.com\u002Fgoogle-deepmind\u002Frlax\u002Fpull\u002F135\r\n* jax.numpy.clip: update use of deprecated arguments. by @copybara-service in https:\u002F\u002Fgithub.com\u002Fgoogle-deepmind\u002Frlax\u002Fpull\u002F138\r\n* Configure tests to explicitly use jax_threefry_partitionable=False. by @copybara-service in https:\u002F\u002Fgithub.com\u002Fgoogle-deepmind\u002Frlax\u002Fpull\u002F141\r\n* Move to python 3.12 and 3.13, part 2. by @copybara-service in https:\u002F\u002Fgithub.com\u002Fgoogle-deepmind\u002Frlax\u002Fpull\u002F144\r\n* Update pypi workflow. by @copybara-service in https:\u002F\u002Fgithub.com\u002Fgoogle-deepmind\u002Frlax\u002Fpull\u002F143\r\n\r\n\r\n**Full Changelog**: https:\u002F\u002Fgithub.com\u002Fgoogle-deepmind\u002Frlax\u002Fcompare\u002Fv0.1.6...v0.1.7","2025-05-08T14:48:03",{"id":153,"version":154,"summary_zh":155,"released_at":156},110007,"v0.1.6","## What's Changed\r\n* Bump ipython from 7.16.1 to 8.10.0 in \u002Frequirements by @dependabot in https:\u002F\u002Fgithub.com\u002Fdeepmind\u002Frlax\u002Fpull\u002F116\r\n* Fix KL constraint loss to ensure lagrange multiplier is always positive. 
by @copybara-service in https:\u002F\u002Fgithub.com\u002Fdeepmind\u002Frlax\u002Fpull\u002F123\r\n* Drop python 3.7 and 3.8\r\n\r\n## New Contributors\r\n* @dependabot made their first contribution in https:\u002F\u002Fgithub.com\u002Fdeepmind\u002Frlax\u002Fpull\u002F116\r\n\r\n**Full Changelog**: https:\u002F\u002Fgithub.com\u002Fdeepmind\u002Frlax\u002Fcompare\u002Fv0.1.5...v0.1.6","2023-06-29T15:03:07",{"id":158,"version":159,"summary_zh":160,"released_at":161},110008,"v0.1.5","## What's Changed\r\n* Replace for-loop in extract_subsequences with single indexing operation. by @copybara-service in https:\u002F\u002Fgithub.com\u002Fdeepmind\u002Frlax\u002Fpull\u002F98\r\n* Replace O(n^2) iterative insert with linear append + reverse. by @copybara-service in https:\u002F\u002Fgithub.com\u002Fdeepmind\u002Frlax\u002Fpull\u002F99\r\n* Expose utilities for constructing and learning from policy targets. by @copybara-service in https:\u002F\u002Fgithub.com\u002Fdeepmind\u002Frlax\u002Fpull\u002F106\r\n* Add support for disabling stop_gradients on targets (as in other rlax losses). by @copybara-service in https:\u002F\u002Fgithub.com\u002Fdeepmind\u002Frlax\u002Fpull\u002F107\r\n* [rlax] Update jax and numpy requirements for RLax. by @copybara-service in https:\u002F\u002Fgithub.com\u002Fdeepmind\u002Frlax\u002Fpull\u002F114\r\n* Release new RLax version. by @copybara-service in https:\u002F\u002Fgithub.com\u002Fdeepmind\u002Frlax\u002Fpull\u002F115\r\n\r\n\r\n**Full Changelog**: https:\u002F\u002Fgithub.com\u002Fdeepmind\u002Frlax\u002Fcompare\u002Fv0.1.4...v0.1.5","2023-01-09T13:23:23",{"id":163,"version":164,"summary_zh":165,"released_at":166},110009,"v0.1.4","## What's Changed\r\n* rlax: Replace rlax categorical cross entropy computation with distrax components. by @copybara-service in https:\u002F\u002Fgithub.com\u002Fdeepmind\u002Frlax\u002Fpull\u002F57\r\n* Bugfix to quantile_expected_sarsa. by @copybara-service in https:\u002F\u002Fgithub.com\u002Fdeepmind\u002Frlax\u002Fpull\u002F63\r\n* Update Jinja2 versioning to avoid Sphinx failures. by @copybara-service in https:\u002F\u002Fgithub.com\u002Fdeepmind\u002Frlax\u002Fpull\u002F66\r\n* Add test for squashed gaussian in rlax distributions. by @copybara-service in https:\u002F\u002Fgithub.com\u002Fdeepmind\u002Frlax\u002Fpull\u002F68\r\n* Update squashed gaussian distribution in rlax for prob and logprob to numerically match distrax's implementation. by @copybara-service in https:\u002F\u002Fgithub.com\u002Fdeepmind\u002Frlax\u002Fpull\u002F69\r\n* Migrate RLax squashed gaussian to use Distrax. Explicitly broadcast shapes in Distrax scalar affine to avoid rank promotion errors. by @copybara-service in https:\u002F\u002Fgithub.com\u002Fdeepmind\u002Frlax\u002Fpull\u002F70\r\n* Add a particular pair of transforms used by muzero that combine a non linear squashing function with a reparametrisation of the scalar as linear combination of two hot values in a discrete suppport. by @copybara-service in https:\u002F\u002Fgithub.com\u002Fdeepmind\u002Frlax\u002Fpull\u002F73\r\n* Support Array lambda_ in Vtrace. by @copybara-service in https:\u002F\u002Fgithub.com\u002Fdeepmind\u002Frlax\u002Fpull\u002F71\r\n* Send deprecation warning for rlax.distributions in favor of using distrax. by @copybara-service in https:\u002F\u002Fgithub.com\u002Fdeepmind\u002Frlax\u002Fpull\u002F74\r\n* Send deprecation warning for rlax nested_updates in favor of using optax. 
by @copybara-service in https:\u002F\u002Fgithub.com\u002Fdeepmind\u002Frlax\u002Fpull\u002F75\r\n* Move usages of soon to be deprecated rlax.periodic_update to optax.periodic_update. by @copybara-service in https:\u002F\u002Fgithub.com\u002Fdeepmind\u002Frlax\u002Fpull\u002F77\r\n* Add a pair of transforms where the scalar values are reparametrised as the linear combination of two-hot values on a non-linearly spaced discrete support. by @copybara-service in https:\u002F\u002Fgithub.com\u002Fdeepmind\u002Frlax\u002Fpull\u002F78\r\n* Add moving averages helpers to rlax. by @copybara-service in https:\u002F\u002Fgithub.com\u002Fdeepmind\u002Frlax\u002Fpull\u002F79\r\n* Update .pylintrc by @copybara-service in https:\u002F\u002Fgithub.com\u002Fdeepmind\u002Frlax\u002Fpull\u002F80\r\n* Add utilities to extract overlapping subsequences from trajectories. by @copybara-service in https:\u002F\u002Fgithub.com\u002Fdeepmind\u002Frlax\u002Fpull\u002F81\r\n* Minor edits to moving averages. by @copybara-service in https:\u002F\u002Fgithub.com\u002Fdeepmind\u002Frlax\u002Fpull\u002F83\r\n* Add utilities to support interruptions. by @copybara-service in https:\u002F\u002Fgithub.com\u002Fdeepmind\u002Frlax\u002Fpull\u002F84\r\n* Create new version 0.1.3 of RLax. by @copybara-service in https:\u002F\u002Fgithub.com\u002Fdeepmind\u002Frlax\u002Fpull\u002F86\r\n* Remove incremental_update from rlax: all usages ported to optax.incremental_update by @copybara-service in https:\u002F\u002Fgithub.com\u002Fdeepmind\u002Frlax\u002Fpull\u002F85\r\n* Pin numpy version \u003C1.23 until new jax version is released, fixing bug that makes mpo_ops_test fail. by @copybara-service in https:\u002F\u002Fgithub.com\u002Fdeepmind\u002Frlax\u002Fpull\u002F92\r\n* Fix a bug in tree_split_leaves(): squeeze the right axis in case of keepdim=False. by @copybara-service in https:\u002F\u002Fgithub.com\u002Fdeepmind\u002Frlax\u002Fpull\u002F94\r\n* Fix max_start_idx argument. by @copybara-service in https:\u002F\u002Fgithub.com\u002Fdeepmind\u002Frlax\u002Fpull\u002F87\r\n* Release a new rlax verison. by @copybara-service in https:\u002F\u002Fgithub.com\u002Fdeepmind\u002Frlax\u002Fpull\u002F96\r\n\r\n\r\n**Full Changelog**: https:\u002F\u002Fgithub.com\u002Fdeepmind\u002Frlax\u002Fcompare\u002Fv0.1.2...v0.1.4","2022-08-15T07:29:33",{"id":168,"version":169,"summary_zh":170,"released_at":171},110010,"v0.1.2","## What's Changed\r\n* Fix arg docstring for rho_tm1 and internal computations based on it to reflect time tm1 instead of t. by @copybara-service in https:\u002F\u002Fgithub.com\u002Fdeepmind\u002Frlax\u002Fpull\u002F43\r\n* Add Sphinx build to CI test, point to documentation in README, and fix issues in doc strings that were causing CI test to fail. by @copybara-service in https:\u002F\u002Fgithub.com\u002Fdeepmind\u002Frlax\u002Fpull\u002F46\r\n* Remove usages of apply_rng=True from Haiku code. by @copybara-service in https:\u002F\u002Fgithub.com\u002Fdeepmind\u002Frlax\u002Fpull\u002F47\r\n* Add KNN Query to RLax public API. by @copybara-service in https:\u002F\u002Fgithub.com\u002Fdeepmind\u002Frlax\u002Fpull\u002F44\r\n* Change RLax citation to Jax Ecosystem citation. by @copybara-service in https:\u002F\u002Fgithub.com\u002Fdeepmind\u002Frlax\u002Fpull\u002F48\r\n* Update requirements and allow new versions of JAX. by @copybara-service in https:\u002F\u002Fgithub.com\u002Fdeepmind\u002Frlax\u002Fpull\u002F50\r\n* Remove the old venv directory before testing the package. 
by @copybara-service in https:\u002F\u002Fgithub.com\u002Fdeepmind\u002Frlax\u002Fpull\u002F52\r\n* Move decoupled_multivariate_normal_kl_divergence out of distributions.py by @copybara-service in https:\u002F\u002Fgithub.com\u002Fdeepmind\u002Frlax\u002Fpull\u002F55\r\n* Use distrax distributions in epsilon_softmax. by @copybara-service in https:\u002F\u002Fgithub.com\u002Fdeepmind\u002Frlax\u002Fpull\u002F59\r\n\r\n\r\n**Full Changelog**: https:\u002F\u002Fgithub.com\u002Fdeepmind\u002Frlax\u002Fcompare\u002Fv0.1.1...v0.1.2","2022-02-24T15:27:34",{"id":173,"version":174,"summary_zh":175,"released_at":176},110011,"v0.1.1","## What's Changed\r\n* Drop python 3.6 support and release a new version. by @copybara-service in https:\u002F\u002Fgithub.com\u002Fdeepmind\u002Frlax\u002Fpull\u002F42\r\n\r\n\r\n**Full Changelog**: https:\u002F\u002Fgithub.com\u002Fdeepmind\u002Frlax\u002Fcompare\u002Fv0.1.0...v0.1.1","2021-11-19T12:34:03",{"id":178,"version":179,"summary_zh":180,"released_at":181},110012,"v0.0.5","## What's Changed\r\n* Fix failing copybara lint errors. by @copybara-service in https:\u002F\u002Fgithub.com\u002Fdeepmind\u002Frlax\u002Fpull\u002F24\r\n* Add tests for clipped_entropy_softmax distribution and fix improperly negated clipped entropy. by @copybara-service in https:\u002F\u002Fgithub.com\u002Fdeepmind\u002Frlax\u002Fpull\u002F21\r\n* Add tests for multivariate_normal_kl_divergence & kl functions in gaussian_diagonal. by @copybara-service in https:\u002F\u002Fgithub.com\u002Fdeepmind\u002Frlax\u002Fpull\u002F22\r\n* Migrate RLax distributions to use distrax. by @copybara-service in https:\u002F\u002Fgithub.com\u002Fdeepmind\u002Frlax\u002Fpull\u002F20\r\n* Re-allow rlax gaussian diagonal to work with scalar sigma. by @copybara-service in https:\u002F\u002Fgithub.com\u002Fdeepmind\u002Frlax\u002Fpull\u002F25\r\n* Fixes bug in kl calculation of gaussian_diagonal by @copybara-service in https:\u002F\u002Fgithub.com\u002Fdeepmind\u002Frlax\u002Fpull\u002F26\r\n* [JAX] Replace uses of deprecated `jax.ops.index_update(x, idx, y)` APIs with their up-to-date, more succinct equivalent `x.at[idx].set(y)`. by @copybara-service in https:\u002F\u002Fgithub.com\u002Fdeepmind\u002Frlax\u002Fpull\u002F33\r\n* [JAX] Increase numerical tolerances of tests in preparation for an XLA:CPU vectorization change. by @copybara-service in https:\u002F\u002Fgithub.com\u002Fdeepmind\u002Frlax\u002Fpull\u002F34\r\n* Fix performance issue in simple DQN example. by @copybara-service in https:\u002F\u002Fgithub.com\u002Fdeepmind\u002Frlax\u002Fpull\u002F32\r\n* Add test.sh for launching CI tests on a local machine. by @copybara-service in https:\u002F\u002Fgithub.com\u002Fdeepmind\u002Frlax\u002Fpull\u002F35\r\n* Iterate over Python range instead jnp.arange. by @copybara-service in https:\u002F\u002Fgithub.com\u002Fdeepmind\u002Frlax\u002Fpull\u002F29\r\n* fix kl argument order for gaussians by @akssri-sai in https:\u002F\u002Fgithub.com\u002Fdeepmind\u002Frlax\u002Fpull\u002F15\r\n* Freeze the latest compatible JAX version. by @copybara-service in https:\u002F\u002Fgithub.com\u002Fdeepmind\u002Frlax\u002Fpull\u002F36\r\n* Internal change. by @copybara-service in https:\u002F\u002Fgithub.com\u002Fdeepmind\u002Frlax\u002Fpull\u002F38\r\n\r\n**Full Changelog**: https:\u002F\u002Fgithub.com\u002Fdeepmind\u002Frlax\u002Fcompare\u002Fv0.0.4...v0.0.5","2021-11-18T19:47:38",{"id":183,"version":184,"summary_zh":185,"released_at":186},110013,"v0.0.4","_Note: this is a first GitHub release of RLax. 
It includes all changes since the repo was created._\r\n\r\n# Changelog\r\n\r\n## [Unreleased](https:\u002F\u002Fgithub.com\u002Fdeepmind\u002Frlax\u002Ftree\u002FHEAD)\r\n\r\n[Full Changelog](https:\u002F\u002Fgithub.com\u002Fdeepmind\u002Frlax\u002Fcompare\u002F5e7007f0ec2a08d39ea67d957afddad2ba4137a6...HEAD)\r\n\r\n**Fixed bugs:**\r\n\r\n- can not find setup.py for pip install [\\#2](https:\u002F\u002Fgithub.com\u002Fdeepmind\u002Frlax\u002Fissues\u002F2)\r\n\r\n**Closed issues:**\r\n\r\n- Does discount = 0 mean \"terminal\" state by design? [\\#17](https:\u002F\u002Fgithub.com\u002Fdeepmind\u002Frlax\u002Fissues\u002F17)\r\n- vtrace uses `lax.scan`?  [\\#16](https:\u002F\u002Fgithub.com\u002Fdeepmind\u002Frlax\u002Fissues\u002F16)\r\n- rlax is broken on Python 3.9 [\\#13](https:\u002F\u002Fgithub.com\u002Fdeepmind\u002Frlax\u002Fissues\u002F13)\r\n- missing library: `import optax` [\\#8](https:\u002F\u002Fgithub.com\u002Fdeepmind\u002Frlax\u002Fissues\u002F8)\r\n- Documentation and Examples [\\#5](https:\u002F\u002Fgithub.com\u002Fdeepmind\u002Frlax\u002Fissues\u002F5)\r\n\r\n**Merged pull requests:**\r\n\r\n- Add PyPI release workflow. [\\#18](https:\u002F\u002Fgithub.com\u002Fdeepmind\u002Frlax\u002Fpull\u002F18) ([copybara-service[bot]](https:\u002F\u002Fgithub.com\u002Fapps\u002Fcopybara-service))\r\n\r\n\r\n\r\n\\* *This Changelog was automatically generated by [github_changelog_generator](https:\u002F\u002Fgithub.com\u002Fgithub-changelog-generator\u002Fgithub-changelog-generator)*","2021-07-08T18:31:19"]