[{"data":1,"prerenderedAt":-1},["ShallowReactive",2],{"similar-cpnota--autonomous-learning-library":3,"tool-cpnota--autonomous-learning-library":61},[4,18,26,36,44,53],{"id":5,"name":6,"github_repo":7,"description_zh":8,"stars":9,"difficulty_score":10,"last_commit_at":11,"category_tags":12,"status":17},4358,"openclaw","openclaw\u002Fopenclaw","OpenClaw 是一款专为个人打造的本地化 AI 助手，旨在让你在自己的设备上拥有完全可控的智能伙伴。它打破了传统 AI 助手局限于特定网页或应用的束缚，能够直接接入你日常使用的各类通讯渠道，包括微信、WhatsApp、Telegram、Discord、iMessage 等数十种平台。无论你在哪个聊天软件中发送消息，OpenClaw 都能即时响应，甚至支持在 macOS、iOS 和 Android 设备上进行语音交互，并提供实时的画布渲染功能供你操控。\n\n这款工具主要解决了用户对数据隐私、响应速度以及“始终在线”体验的需求。通过将 AI 部署在本地，用户无需依赖云端服务即可享受快速、私密的智能辅助，真正实现了“你的数据，你做主”。其独特的技术亮点在于强大的网关架构，将控制平面与核心助手分离，确保跨平台通信的流畅性与扩展性。\n\nOpenClaw 非常适合希望构建个性化工作流的技术爱好者、开发者，以及注重隐私保护且不愿被单一生态绑定的普通用户。只要具备基础的终端操作能力（支持 macOS、Linux 及 Windows WSL2），即可通过简单的命令行引导完成部署。如果你渴望拥有一个懂你",349277,3,"2026-04-06T06:32:30",[13,14,15,16],"Agent","开发框架","图像","数据工具","ready",{"id":19,"name":20,"github_repo":21,"description_zh":22,"stars":23,"difficulty_score":10,"last_commit_at":24,"category_tags":25,"status":17},3808,"stable-diffusion-webui","AUTOMATIC1111\u002Fstable-diffusion-webui","stable-diffusion-webui 是一个基于 Gradio 构建的网页版操作界面，旨在让用户能够轻松地在本地运行和使用强大的 Stable Diffusion 图像生成模型。它解决了原始模型依赖命令行、操作门槛高且功能分散的痛点，将复杂的 AI 绘图流程整合进一个直观易用的图形化平台。\n\n无论是希望快速上手的普通创作者、需要精细控制画面细节的设计师，还是想要深入探索模型潜力的开发者与研究人员，都能从中获益。其核心亮点在于极高的功能丰富度：不仅支持文生图、图生图、局部重绘（Inpainting）和外绘（Outpainting）等基础模式，还独创了注意力机制调整、提示词矩阵、负向提示词以及“高清修复”等高级功能。此外，它内置了 GFPGAN 和 CodeFormer 等人脸修复工具，支持多种神经网络放大算法，并允许用户通过插件系统无限扩展能力。即使是显存有限的设备，stable-diffusion-webui 也提供了相应的优化选项，让高质量的 AI 艺术创作变得触手可及。",162132,"2026-04-05T11:01:52",[14,15,13],{"id":27,"name":28,"github_repo":29,"description_zh":30,"stars":31,"difficulty_score":32,"last_commit_at":33,"category_tags":34,"status":17},1381,"everything-claude-code","affaan-m\u002Feverything-claude-code","everything-claude-code 是一套专为 AI 编程助手（如 Claude Code、Codex、Cursor 等）打造的高性能优化系统。它不仅仅是一组配置文件，而是一个经过长期实战打磨的完整框架，旨在解决 AI 代理在实际开发中面临的效率低下、记忆丢失、安全隐患及缺乏持续学习能力等核心痛点。\n\n通过引入技能模块化、直觉增强、记忆持久化机制以及内置的安全扫描功能，everything-claude-code 能显著提升 AI 在复杂任务中的表现，帮助开发者构建更稳定、更智能的生产级 AI 代理。其独特的“研究优先”开发理念和针对 Token 消耗的优化策略，使得模型响应更快、成本更低，同时有效防御潜在的攻击向量。\n\n这套工具特别适合软件开发者、AI 研究人员以及希望深度定制 AI 工作流的技术团队使用。无论您是在构建大型代码库，还是需要 AI 协助进行安全审计与自动化测试，everything-claude-code 都能提供强大的底层支持。作为一个曾荣获 Anthropic 黑客大奖的开源项目，它融合了多语言支持与丰富的实战钩子（hooks），让 AI 真正成长为懂上",147882,2,"2026-04-09T11:32:47",[14,13,35],"语言模型",{"id":37,"name":38,"github_repo":39,"description_zh":40,"stars":41,"difficulty_score":32,"last_commit_at":42,"category_tags":43,"status":17},2271,"ComfyUI","Comfy-Org\u002FComfyUI","ComfyUI 是一款功能强大且高度模块化的视觉 AI 引擎，专为设计和执行复杂的 Stable Diffusion 图像生成流程而打造。它摒弃了传统的代码编写模式，采用直观的节点式流程图界面，让用户通过连接不同的功能模块即可构建个性化的生成管线。\n\n这一设计巧妙解决了高级 AI 绘图工作流配置复杂、灵活性不足的痛点。用户无需具备编程背景，也能自由组合模型、调整参数并实时预览效果，轻松实现从基础文生图到多步骤高清修复等各类复杂任务。ComfyUI 拥有极佳的兼容性，不仅支持 Windows、macOS 和 Linux 全平台，还广泛适配 NVIDIA、AMD、Intel 及苹果 Silicon 等多种硬件架构，并率先支持 SDXL、Flux、SD3 等前沿模型。\n\n无论是希望深入探索算法潜力的研究人员和开发者，还是追求极致创作自由度的设计师与资深 AI 绘画爱好者，ComfyUI 都能提供强大的支持。其独特的模块化架构允许社区不断扩展新功能，使其成为当前最灵活、生态最丰富的开源扩散模型工具之一，帮助用户将创意高效转化为现实。",108111,"2026-04-08T11:23:26",[14,15,13],{"id":45,"name":46,"github_repo":47,"description_zh":48,"stars":49,"difficulty_score":32,"last_commit_at":50,"category_tags":51,"status":17},4721,"markitdown","microsoft\u002Fmarkitdown","MarkItDown 是一款由微软 AutoGen 团队打造的轻量级 Python 工具，专为将各类文件高效转换为 Markdown 格式而设计。它支持 PDF、Word、Excel、PPT、图片（含 OCR）、音频（含语音转录）、HTML 乃至 YouTube 链接等多种格式的解析，能够精准提取文档中的标题、列表、表格和链接等关键结构信息。\n\n在人工智能应用日益普及的今天，大语言模型（LLM）虽擅长处理文本，却难以直接读取复杂的二进制办公文档。MarkItDown 
恰好解决了这一痛点，它将非结构化或半结构化的文件转化为模型“原生理解”且 Token 效率极高的 Markdown 格式，成为连接本地文件与 AI 分析 pipeline 的理想桥梁。此外，它还提供了 MCP（模型上下文协议）服务器，可无缝集成到 Claude Desktop 等 LLM 应用中。\n\n这款工具特别适合开发者、数据科学家及 AI 研究人员使用，尤其是那些需要构建文档检索增强生成（RAG）系统、进行批量文本分析或希望让 AI 助手直接“阅读”本地文件的用户。虽然生成的内容也具备一定可读性，但其核心优势在于为机器",93400,"2026-04-06T19:52:38",[52,14],"插件",{"id":54,"name":55,"github_repo":56,"description_zh":57,"stars":58,"difficulty_score":10,"last_commit_at":59,"category_tags":60,"status":17},4487,"LLMs-from-scratch","rasbt\u002FLLMs-from-scratch","LLMs-from-scratch 是一个基于 PyTorch 的开源教育项目，旨在引导用户从零开始一步步构建一个类似 ChatGPT 的大型语言模型（LLM）。它不仅是同名技术著作的官方代码库，更提供了一套完整的实践方案，涵盖模型开发、预训练及微调的全过程。\n\n该项目主要解决了大模型领域“黑盒化”的学习痛点。许多开发者虽能调用现成模型，却难以深入理解其内部架构与训练机制。通过亲手编写每一行核心代码，用户能够透彻掌握 Transformer 架构、注意力机制等关键原理，从而真正理解大模型是如何“思考”的。此外，项目还包含了加载大型预训练权重进行微调的代码，帮助用户将理论知识延伸至实际应用。\n\nLLMs-from-scratch 特别适合希望深入底层原理的 AI 开发者、研究人员以及计算机专业的学生。对于不满足于仅使用 API，而是渴望探究模型构建细节的技术人员而言，这是极佳的学习资源。其独特的技术亮点在于“循序渐进”的教学设计：将复杂的系统工程拆解为清晰的步骤，配合详细的图表与示例，让构建一个虽小但功能完备的大模型变得触手可及。无论你是想夯实理论基础，还是为未来研发更大规模的模型做准备",90106,"2026-04-06T11:19:32",[35,15,13,14],{"id":62,"github_repo":63,"name":64,"description_en":65,"description_zh":66,"ai_summary_zh":67,"readme_en":68,"readme_zh":69,"quickstart_zh":70,"use_case_zh":71,"hero_image_url":72,"owner_login":73,"owner_name":74,"owner_avatar_url":75,"owner_bio":76,"owner_company":77,"owner_location":78,"owner_email":79,"owner_twitter":78,"owner_website":78,"owner_url":80,"languages":81,"stars":90,"forks":91,"last_commit_at":92,"license":93,"difficulty_score":32,"env_os":94,"env_gpu":95,"env_ram":96,"env_deps":97,"category_tags":103,"github_topics":104,"view_count":32,"oss_zip_url":78,"oss_zip_packed_at":78,"status":17,"created_at":119,"updated_at":120,"faqs":121,"releases":152},5918,"cpnota\u002Fautonomous-learning-library","autonomous-learning-library","A PyTorch library for building deep reinforcement learning agents.","autonomous-learning-library 是一个基于 PyTorch 构建的面向对象深度强化学习库，旨在帮助开发者快速搭建、评估新型智能体，并提供现代主流算法的高质量参考实现。它有效解决了传统强化学习开发中环境接口繁琐、组件复用困难以及实验复现成本高等痛点，让研究者能更专注于算法创新而非底层工程细节。\n\n该工具特别适合从事人工智能研究的学者、算法工程师以及希望深入探索强化学习的开发者使用。其核心技术亮点包括：灵活的函数近似（Approximation）API，原生集成目标网络、梯度裁剪及多头网络等特性；多样化的记忆缓冲区支持优先经验回放（PER）和广义优势估计（GAE）；以及基于 Torch 的环境接口，摒弃了 NumPy 中间层，使代码更加简洁高效。此外，它还内置了 A2C、PPO、SAC、Rainbow 等七种主流深度强化学习算法的成熟实现，并提供了针对 Atari、MuJoCo 等经典基准环境的预配置版本，配合 Slurm 集群支持与 TensorBoard 可视化功能，能够轻松开展大规模实验验证。无论是复现论文结果还是研发新算法，autonomous-learni","autonomous-learning-library 是一个基于 PyTorch 构建的面向对象深度强化学习库，旨在帮助开发者快速搭建、评估新型智能体，并提供现代主流算法的高质量参考实现。它有效解决了传统强化学习开发中环境接口繁琐、组件复用困难以及实验复现成本高等痛点，让研究者能更专注于算法创新而非底层工程细节。\n\n该工具特别适合从事人工智能研究的学者、算法工程师以及希望深入探索强化学习的开发者使用。其核心技术亮点包括：灵活的函数近似（Approximation）API，原生集成目标网络、梯度裁剪及多头网络等特性；多样化的记忆缓冲区支持优先经验回放（PER）和广义优势估计（GAE）；以及基于 Torch 的环境接口，摒弃了 NumPy 中间层，使代码更加简洁高效。此外，它还内置了 A2C、PPO、SAC、Rainbow 等七种主流深度强化学习算法的成熟实现，并提供了针对 Atari、MuJoCo 等经典基准环境的预配置版本，配合 Slurm 集群支持与 TensorBoard 可视化功能，能够轻松开展大规模实验验证。无论是复现论文结果还是研发新算法，autonomous-learning-library 都能提供坚实的技术支撑。","# The Autonomous Learning Library: A PyTorch Library for Building Reinforcement Learning Agents\n\nThe `autonomous-learning-library` is an object-oriented deep reinforcement learning (DRL) library for PyTorch.\nThe goal of the library is to provide the necessary components for quickly building and evaluating novel reinforcement learning agents,\nas well as providing high-quality reference implementations of modern DRL algorithms.\nThe full documentation can be found at the following URL: [https:\u002F\u002Fautonomous-learning-library.readthedocs.io](https:\u002F\u002Fautonomous-learning-library.readthedocs.io).\n\n## Tools for Building 
New Agents\n\nThe primary goal of the `autonomous-learning-library` is to facilitate the rapid development of new reinforcement learning agents by providing common tools for building and evaluating agents, such as:\n\n* A flexible function `Approximation` API that integrates features such as target networks, gradient clipping, learning rate schedules, model checkpointing, multi-headed networks, loss scaling, logging, and more.\n* Various memory buffers, including prioritized experience replay (PER), generalized advantage estimation (GAE), and more.\n* A `torch`-based `Environment` interface that simplifies agent implementations by cutting out the `numpy` middleman.\n* Common wrappers and agent enhancements for replicating standard benchmarks.\n* [Slurm](https:\u002F\u002Fslurm.schedmd.com\u002Fdocumentation.html) integration for running large-scale experiments.\n* Plotting and logging utilities including `tensorboard` integration and utilities for generating common plots.\n\nSee the [documentation](https:\u002F\u002Fautonomous-learning-library.readthedocs.io) guide for a full description of the functionality provided by the `autonomous-learning-library`.\nAdditionally, we provide an [example project](https:\u002F\u002Fgithub.com\u002Fcpnota\u002Fall-example-project) which demonstrates the best practices for building new agents.\n\n## High-Quality Reference Implementations\n\nThe `autonomous-learning-library` separates reinforcement learning agents into two modules: `all.agents`, which provides flexible, high-level implementations of many common algorithms which can be adapted to new problems and environments, and `all.presets` which provides specific instantiations of these agents tuned for particular sets of environments, including Atari games, classic control tasks, and MuJoCo\u002FPybullet robotics simulations. 
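For readers who want to see what the `Approximation` API is abstracting away, here is a minimal plain-PyTorch sketch of two of the features named in the list above, target networks and gradient clipping. It is illustrative only: it does not use the library's own classes, and terminal-state masking is omitted for brevity.

```python
import copy
import torch
import torch.nn as nn

# Illustrative sketch only: a hand-rolled version of two features that the
# Approximation API bundles for you -- a target network and gradient clipping.
q = nn.Sequential(nn.Linear(4, 64), nn.ReLU(), nn.Linear(64, 2))  # online Q-network
q_target = copy.deepcopy(q)                                        # frozen target copy
optimizer = torch.optim.Adam(q.parameters(), lr=1e-3)

def td_update(states, actions, rewards, next_states, gamma=0.99, max_norm=0.5):
    with torch.no_grad():                                  # bootstrap from the target network
        target = rewards + gamma * q_target(next_states).max(1).values
    pred = q(states).gather(1, actions.unsqueeze(1)).squeeze(1)
    loss = nn.functional.mse_loss(pred, target)
    optimizer.zero_grad()
    loss.backward()
    nn.utils.clip_grad_norm_(q.parameters(), max_norm)     # gradient clipping
    optimizer.step()

def sync_target():
    q_target.load_state_dict(q.state_dict())               # periodic hard update
```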
Some benchmark results showing results on-par with published results can be found below:\n\n![atari40](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fcpnota_autonomous-learning-library_readme_b65f5b378e8a.png)\n![atari40](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fcpnota_autonomous-learning-library_readme_9624715df76e.png)\n![pybullet](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fcpnota_autonomous-learning-library_readme_8ed412b0320d.png)\n\nAs of today, `all` contains implementations of the following deep RL algorithms:\n\n- [x] Advantage Actor-Critic (A2C)\n- [x] Categorical DQN (C51)\n- [x] Deep Deterministic Policy Gradient (DDPG)\n- [x] Deep Q-Learning (DQN) + extensions\n- [x] Proximal Policy Optimization (PPO)\n- [x] Rainbow (Rainbow)\n- [x] Soft Actor-Critic (SAC)\n\nIt also contains implementations of the following \"vanilla\" agents, which provide useful baselines and perform better than you may expect:\n\n- [x] Vanilla Actor-Critic\n- [x] Vanilla Policy Gradient\n- [x] Vanilla Q-Learning\n- [x] Vanilla Sarsa\n\n## Installation\n\nFirst, you will need a new version of [PyTorch](https:\u002F\u002Fpytorch.org) (>1.3), as well as [Tensorboard](https:\u002F\u002Fpypi.org\u002Fproject\u002Ftensorboard\u002F).\nThen, you can install the core `autonomous-learning-library` through PyPi:\n\n```\npip install autonomous-learning-library\n```\n\nYou can also install all of the extras (such as Gym environments) using:\n\n```\npip install autonomous-learning-library[all]\n```\n\nFinally, you can install directly from this repository including the dev dependencies using:\n\n```\ngit clone https:\u002F\u002Fgithub.com\u002Fcpnota\u002Fautonomous-learning-library.git\ncd autonomous-learning-library\npip install -e .[dev]\n```\n\n## Running the Presets\n\nIf you just want to test out some cool agents, the library includes several scripts for doing so:\n\n```\nall-atari Breakout a2c\n```\n\nYou can watch the training progress using:\n\n```\ntensorboard --logdir runs\n```\n\nand opening your browser to http:\u002F\u002Flocalhost:6006.\nOnce the model is fully trained, you can watch the trained model play using:\n\n```\nall-watch-atari Breakout \"runs\u002Fa2c_[id]\u002Fpreset.pt\"\n```\n\nwhere `id` is the ID of your particular run. 
You should be able to find it using tab completion or by looking in the `runs` directory.\nThe `autonomous-learning-library` also contains presets and scripts for classic control and PyBullet environments.\n\nIf you want to test out your own agents, you will need to define your own scripts.\nSome examples can be found in the `examples` folder.\nSee the [docs](https:\u002F\u002Fautonomous-learning-library.readthedocs.io) for information on building your own agents!\n\n## Note\n\nThis library was built in the [Autonomous Learning Laboratory](http:\u002F\u002Fall.cs.umass.edu) (ALL) at the [University of Massachusetts, Amherst](https:\u002F\u002Fwww.umass.edu).\nIt was written and is currently maintained by Chris Nota (@cpnota).\nThe views expressed or implied in this repository do not necessarily reflect the views of the ALL.\n\n## Citing the Autonomous Learning Library\n\nWe recommend the following citation:\n\n```\n@misc{nota2020autonomous,\n  author = {Nota, Chris},\n  title = {The Autonomous Learning Library},\n  year = {2020},\n  publisher = {GitHub},\n  journal = {GitHub repository},\n  howpublished = {\\url{https:\u002F\u002Fgithub.com\u002Fcpnota\u002Fautonomous-learning-library}},\n}\n```\n","# 自主学习库：用于构建强化学习智能体的 PyTorch 库\n\n`autonomous-learning-library` 是一个面向对象的深度强化学习（DRL）库，专为 PyTorch 设计。\n该库的目标是提供快速构建和评估新型强化学习智能体所需的组件，同时为现代 DRL 算法提供高质量的参考实现。\n完整的文档可在以下网址找到：[https:\u002F\u002Fautonomous-learning-library.readthedocs.io](https:\u002F\u002Fautonomous-learning-library.readthedocs.io)。\n\n## 用于构建新智能体的工具\n\n`autonomous-learning-library` 的主要目标是通过提供通用的构建和评估工具，促进新型强化学习智能体的快速开发，例如：\n\n* 一个灵活的 `Approximation` 函数 API，集成了目标网络、梯度裁剪、学习率调度、模型检查点保存、多头网络、损失缩放、日志记录等功能。\n* 多种经验回放缓冲区，包括优先级经验回放（PER）、广义优势估计（GAE）等。\n* 基于 `torch` 的 `Environment` 接口，通过省去 `numpy` 中间层来简化智能体的实现。\n* 用于复现标准基准测试的常用包装器和智能体增强功能。\n* 集成 [Slurm](https:\u002F\u002Fslurm.schedmd.com\u002Fdocumentation.html)，以运行大规模实验。\n* 绘图和日志记录工具，包括与 `tensorboard` 的集成以及生成常见图表的实用程序。\n\n有关 `autonomous-learning-library` 提供的功能的完整描述，请参阅 [文档](https:\u002F\u002Fautonomous-learning-library.readthedocs.io) 指南。\n此外，我们还提供了一个 [示例项目](https:\u002F\u002Fgithub.com\u002Fcpnota\u002Fall-example-project)，展示了构建新智能体的最佳实践。\n\n## 高质量的参考实现\n\n`autonomous-learning-library` 将强化学习智能体分为两个模块：`all.agents` 提供了许多常见算法的灵活、高层次实现，这些实现可以适应新的问题和环境；而 `all.presets` 则提供了针对特定环境集合优化的这些智能体的具体实例，包括 Atari 游戏、经典控制任务以及 MuJoCo\u002FPybullet 机器人仿真。以下是一些基准测试结果，显示其性能与已发表的结果相当：\n\n![atari40](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fcpnota_autonomous-learning-library_readme_b65f5b378e8a.png)\n![atari40](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fcpnota_autonomous-learning-library_readme_9624715df76e.png)\n![pybullet](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fcpnota_autonomous-learning-library_readme_8ed412b0320d.png)\n\n截至今日，`all` 包含以下深度强化学习算法的实现：\n\n- [x] 优势演员-评论家（A2C）\n- [x] 分类 DQN（C51）\n- [x] 深度确定性策略梯度（DDPG）\n- [x] 深度 Q 学习（DQN）及其扩展\n- [x] 近端策略优化（PPO）\n- [x] 彩虹（Rainbow）\n- [x] 软演员-评论家（SAC）\n\n它还包含以下“纯”智能体的实现，这些智能体可作为有用的基线，并且表现可能超出预期：\n\n- [x] 纯演员-评论家\n- [x] 纯策略梯度\n- [x] 纯 Q 学习\n- [x] 纯 Sarsa\n\n## 安装\n\n首先，您需要安装最新版本的 [PyTorch](https:\u002F\u002Fpytorch.org)（>1.3），以及 [Tensorboard](https:\u002F\u002Fpypi.org\u002Fproject\u002Ftensorboard\u002F)。\n然后，您可以使用 PyPI 安装核心 `autonomous-learning-library`：\n\n```\npip install autonomous-learning-library\n```\n\n您还可以通过以下命令安装所有附加组件（如 Gym 环境）：\n\n```\npip install autonomous-learning-library[all]\n```\n\n最后，您也可以直接从本仓库安装，并包含开发依赖项：\n\n```\ngit clone 
https:\u002F\u002Fgithub.com\u002Fcpnota\u002Fautonomous-learning-library.git\ncd autonomous-learning-library\npip install -e .[dev]\n```\n\n## 运行预设\n\n如果您只想试用一些酷炫的智能体，该库包含若干脚本来实现这一目的：\n\n```\nall-atari Breakout a2c\n```\n\n您可以使用以下命令查看训练进度：\n\n```\ntensorboard --logdir runs\n```\n\n然后在浏览器中打开 http:\u002F\u002Flocalhost:6006。\n模型完全训练完成后，您可以使用以下命令观看训练好的模型进行游戏：\n\n```\nall-watch-atari Breakout \"runs\u002Fa2c_[id]\u002Fpreset.pt\"\n```\n\n其中 `id` 是您特定运行的标识符，您可以通过 Tab 键补全或查看 `runs` 目录来找到它。\n`autonomous-learning-library` 还包含适用于经典控制和 PyBullet 环境的预设及脚本。\n\n如果您想测试自己的智能体，则需要定义自己的脚本。\n一些示例可以在 `examples` 文件夹中找到。\n有关如何构建您自己的智能体的信息，请参阅 [文档](https:\u002F\u002Fautonomous-learning-library.readthedocs.io)！\n\n## 注释\n\n此库由位于 [马萨诸塞大学阿默斯特分校](https:\u002F\u002Fwww.umass.edu) 的 [自主学习实验室](http:\u002F\u002Fall.cs.umass.edu)（ALL）开发。\n它由 Chris Nota (@cpnota) 编写，目前仍由他维护。\n本仓库中表达或暗示的观点并不一定反映 ALL 的观点。\n\n## 引用自主学习库\n\n我们建议采用以下引用格式：\n\n```\n@misc{nota2020autonomous,\n  author = {Nota, Chris},\n  title = {The Autonomous Learning Library},\n  year = {2020},\n  publisher = {GitHub},\n  journal = {GitHub repository},\n  howpublished = {\\url{https:\u002F\u002Fgithub.com\u002Fcpnota\u002Fautonomous-learning-library}},\n}\n```","# Autonomous Learning Library 快速上手指南\n\n`autonomous-learning-library` (ALL) 是一个基于 PyTorch 的面向对象深度强化学习（DRL）库。它旨在帮助开发者快速构建和评估新型强化学习智能体，并提供现代 DRL 算法的高质量参考实现（如 A2C, PPO, SAC, Rainbow 等）。\n\n## 环境准备\n\n在开始之前，请确保您的系统满足以下要求：\n\n*   **操作系统**：Linux \u002F macOS \u002F Windows\n*   **Python 版本**：建议 Python 3.6+\n*   **核心依赖**：\n    *   [PyTorch](https:\u002F\u002Fpytorch.org) (版本 > 1.3)\n    *   [Tensorboard](https:\u002F\u002Fpypi.org\u002Fproject\u002Ftensorboard\u002F) (用于可视化训练过程)\n*   **可选依赖**：如需运行 Atari、MuJoCo 或 PyBullet 等基准测试，需安装相应的 Gym 环境包。\n\n> **国内加速提示**：建议使用国内镜像源加速 PyTorch 和 pip 包的安装，以获得更快的下载速度。\n\n## 安装步骤\n\n您可以选择通过 PyPI 直接安装，或从源码安装以获取最新开发功能。\n\n### 方式一：通过 PyPI 安装（推荐）\n\n安装核心库：\n```bash\npip install autonomous-learning-library -i https:\u002F\u002Fpypi.tuna.tsinghua.edu.cn\u002Fsimple\n```\n\n安装包含所有额外依赖（如 Gym 环境）的完整版本：\n```bash\npip install autonomous-learning-library[all] -i https:\u002F\u002Fpypi.tuna.tsinghua.edu.cn\u002Fsimple\n```\n\n### 方式二：从源码安装（适合开发者）\n\n如果您需要修改源码或使用开发版依赖：\n\n```bash\ngit clone https:\u002F\u002Fgithub.com\u002Fcpnota\u002Fautonomous-learning-library.git\ncd autonomous-learning-library\npip install -e .[dev] -i https:\u002F\u002Fpypi.tuna.tsinghua.edu.cn\u002Fsimple\n```\n\n## 基本使用\n\n### 1. 运行预置算法示例\n\n库中内置了针对 Atari、经典控制任务和机器人仿真环境的预置脚本。以下是在 Atari 的 `Breakout` 游戏中运行 **A2C** 算法的示例：\n\n```bash\nall-atari Breakout a2c\n```\n\n### 2. 监控训练进度\n\n在另一个终端窗口中启动 TensorBoard 来实时查看训练曲线和日志：\n\n```bash\ntensorboard --logdir runs\n```\n\n随后在浏览器中访问 `http:\u002F\u002Flocalhost:6006` 即可看到可视化界面。\n\n### 3. 观看训练好的模型\n\n当模型训练完成后（检查点文件位于 `runs` 目录下），可以使用以下命令观看智能体玩游戏：\n\n```bash\nall-watch-atari Breakout \"runs\u002Fa2c_[id]\u002Fpreset.pt\"\n```\n\n*注意：请将 `[id]` 替换为您实际运行的实验 ID（可通过 Tab 键自动补全或在 `runs` 目录中查找）。*\n\n### 4. 
开发自定义智能体\n\n若要构建自己的智能体，请参考 `examples` 文件夹中的示例脚本，并查阅 [官方文档](https:\u002F\u002Fautonomous-learning-library.readthedocs.io) 了解 `Approximation` API、内存缓冲区及环境接口的详细用法。","某机器人实验室的研究团队正致力于开发一套能在复杂动态环境中自主导航的智能无人机控制系统。\n\n### 没有 autonomous-learning-library 时\n- **重复造轮子耗时**：研究人员需手动编写深度确定性策略梯度（DDPG）或软演员 - 评论家（SAC）等算法的基础架构，包括目标网络更新和梯度裁剪，耗费大量时间在非核心逻辑上。\n- **环境交互繁琐**：在 PyTorch 模型与 Gym 环境之间频繁进行 NumPy 数组转换，代码冗长且容易因维度不匹配引发隐蔽的 Bug。\n- **实验管理混乱**：缺乏统一的优先经验回放（PER）缓冲区和标准化的日志记录模块，导致不同算法的实验结果难以复现和横向对比。\n- **调参验证困难**：缺少内置的学习率调度器和模型检查点机制，长时间训练一旦中断往往需要从头开始，严重拖慢迭代速度。\n\n### 使用 autonomous-learning-library 后\n- **快速构建代理**：直接调用库中模块化的高层 API，几分钟内即可组装出带有目标网络和多头网络结构的新型强化学习智能体，专注核心算法创新。\n- **原生流畅交互**：利用基于 Torch 的环境接口，消除了 NumPy 中间层，使得数据流在神经网络与环境间无缝传递，代码简洁且运行高效。\n- **标准化实验流程**：内置多种高质量记忆缓冲区（如 GAE、PER）和 TensorBoard 集成工具，轻松实现大规模实验的自动化监控与可视化分析。\n- **稳健训练保障**：借助自带的 Slurm 集群支持和学习率自动调度功能，即使面对长达数天的训练任务，也能确保断点续训和超参数优化的稳定性。\n\nautonomous-learning-library 通过将通用的深度学习组件标准化，让研究人员从繁琐的工程实现中解放出来，真正专注于强化学习策略本身的突破。","https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fcpnota_autonomous-learning-library_b65f5b37.png","cpnota","Chris Nota","https:\u002F\u002Foss.gittoolsai.com\u002Favatars\u002Fcpnota_dbdd853c.jpg","Reinforcement learning (RL) researcher.","UMass Amherst",null,"cpnota@gmail.com","https:\u002F\u002Fgithub.com\u002Fcpnota",[82,86],{"name":83,"color":84,"percentage":85},"Python","#3572A5",99.8,{"name":87,"color":88,"percentage":89},"Makefile","#427819",0.2,655,74,"2026-03-31T01:41:59","MIT","","未说明（基于 PyTorch，通常建议配备 NVIDIA GPU 以加速深度学习训练）","未说明",{"notes":98,"python":96,"dependencies":99},"该库主要用于构建和评估深度强化学习智能体。支持通过 Slurm 进行大规模实验管理。安装时可选择仅安装核心库或包含 Gym 环境等额外依赖的完整版本。官方提供了 Atari、经典控制任务及 MuJoCo\u002FPyBullet 机器人仿真环境的预配置算法实现。",[100,101,102],"torch>1.3","tensorboard","gym (可选，通过 extras 安装)",[14],[105,106,107,108,109,110,111,112,113,114,115,116,117,118],"reinforcement-learning","reinforcement-learning-algorithms","deep-reinforcement-learning","soft-actor-critic","proximal-policy-optimization","deep-q-learning","advantage-actor-critic","deep-deterministic-policy-gradient","sac","a2c","ddpg","ppo","dqn","dqn-pytorch","2026-03-27T02:49:30.150509","2026-04-09T23:55:47.995641",[122,127,132,137,142,147],{"id":123,"question_zh":124,"answer_zh":125,"source_url":126},26852,"为什么在训练 Atari 游戏（如 Breakout）时会出现 CUDA 显存不足（Out of Memory）的错误？","这是因为 ReplayBuffer（经验回放缓冲区）默认存储在显存中。每个帧是 84x84 的图像，若存储 100 万帧，仅原始数据就需要约 7GB 显存，加上 PyTorch 的额外开销，很容易占满显存。目前库中没有内置方法将 ReplayBuffer 放在 CPU 上并在需要时动态传输到 GPU。如果需要此功能，用户必须创建自定义的 ReplayBuffer 并修改其 `sample()` 方法来实现 CPU 存储和按需传输。","https:\u002F\u002Fgithub.com\u002Fcpnota\u002Fautonomous-learning-library\u002Fissues\u002F144",{"id":128,"question_zh":129,"answer_zh":130,"source_url":131},26853,"如何修改实验日志和输出文件的保存路径？","默认情况下，输出路径由 `ExperimentWriter` 类内部的 `log_path` 定义，且没有直接的重写方法。解决方案是在调用 `run_experiment` API 时添加一个可选参数来指定自定义路径。维护者已确认这是一个合理的改进点，并建议通过修改 API 支持传入自定义路径参数，以避免修改库的内部代码。","https:\u002F\u002Fgithub.com\u002Fcpnota\u002Fautonomous-learning-library\u002Fissues\u002F174",{"id":133,"question_zh":134,"answer_zh":135,"source_url":136},26854,"在使用 PPO 算法时，如何正确构建随机策略（Stochastic Policy）并解决输入形状错误？","构建 PPO 时，`GaussianPolicy` 的输入形状应匹配 `FeatureNetwork` 的输出形状。注意网络的最后一层通常是 `Linear` 层，而不是 ReLU 或 BatchNorm。此外，`SingleEnvExperiment` 的第一个参数必须是一个接受 `env` 和 `writer` 的函数（用于依赖注入）。正确的代码结构如下：\n```python\ndef ppo(env, writer):\n    return PPO(\n        feature_network,\n        value_network,\n        policy_network,\n        n_envs=1,\n        n_steps=10000,\n        writer=writer\n    )\n\nexperiment = SingleEnvExperiment(ppo, 
env)\nexperiment.train(frames=100000)\n```","https:\u002F\u002Fgithub.com\u002Fcpnota\u002Fautonomous-learning-library\u002Fissues\u002F136",{"id":138,"question_zh":139,"answer_zh":140,"source_url":141},26855,"运行 Atari PPO 预设时出现 'zero-dimensional tensor cannot be concatenated' 错误是什么原因？","该错误通常发生在尝试混合使用不同代理（例如一个用 PPO 训练，另一个用 DQN）的非标准多智能体场景中，而 PPO 本身并未完全支持这种多智能体配置。维护者指出这是非常规用法，官方暂未正式支持。如果遇到此问题，可以参考社区提供的临时解决方案，即使用自定义的 Preset 包装器和 Agent 包装器来处理特定的多智能体交互逻辑。","https:\u002F\u002Fgithub.com\u002Fcpnota\u002Fautonomous-learning-library\u002Fissues\u002F244",{"id":143,"question_zh":144,"answer_zh":145,"source_url":146},26856,"GaussianPolicy 中为什么使用了两次 tanh 激活函数？这是 Bug 吗？","是的，这被确认为一个 Bug。在 `gaussian.py` 和 `_squash` 方法中重复使用 tanh 是不正确的。维护者已确认该问题并将通过 PR 进行修复。对于需要正确实现的用户，可以参考 `soft_deterministic.py` 中的实现方式，那里已经正确处理了相关逻辑。","https:\u002F\u002Fgithub.com\u002Fcpnota\u002Fautonomous-learning-library\u002Fissues\u002F154",{"id":148,"question_zh":149,"answer_zh":150,"source_url":151},26857,"库是否支持向量环境（Vector Environments）？","是的，该功能已被支持。此前关于支持向量环境的请求（Issue #220）已通过后续的 Pull Request (#240) 完成并合并。现在用户可以利用向量环境来并行运行多个环境实例以提高训练效率。","https:\u002F\u002Fgithub.com\u002Fcpnota\u002Fautonomous-learning-library\u002Fissues\u002F220",[153,158,163,168,173,178,183,188,193,198,203,208,213,218,223,228,233,238,243,248],{"id":154,"version":155,"summary_zh":156,"released_at":157},172082,"v0.9.1","版本 0.9.1。包含以下更新：\n\n* 添加了 Gymnasium 支持\n* 添加了 Mujoco 支持\n* 在运行实验后添加了超参数日志记录\n* 对日志记录进行了一些其他小改进\n* 调整了 SAC\u002FDDPG 的超参数及实现\n* 小幅改进了工作流程\n* 升级了一些其他依赖项，包括将 torch 升级至 ~=2.2\n* 修复了一些 minor bug\n\n## 变更内容\n* Release\u002F0.8.1 由 @cpnota 在 https:\u002F\u002Fgithub.com\u002Fcpnota\u002Fautonomous-learning-library\u002Fpull\u002F274 中完成\n* 将 opencv 依赖改为无头模式，并升级至版本 4，由 @cpnota 在 https:\u002F\u002Fgithub.com\u002Fcpnota\u002Fautonomous-learning-library\u002Fpull\u002F275 中完成\n* Release\u002F0.8.2 由 @cpnota 在 https:\u002F\u002Fgithub.com\u002Fcpnota\u002Fautonomous-learning-library\u002Fpull\u002F276 中完成\n* Feature\u002Fgymnasium 由 @cpnota 在 https:\u002F\u002Fgithub.com\u002Fcpnota\u002Fautonomous-learning-library\u002Fpull\u002F278 中完成\n* Feature\u002Fmujoco 由 @cpnota 在 https:\u002F\u002Fgithub.com\u002Fcpnota\u002Fautonomous-learning-library\u002Fpull\u002F279 中完成\n* Refactor\u002Fscripts-folder 由 @cpnota 在 https:\u002F\u002Fgithub.com\u002Fcpnota\u002Fautonomous-learning-library\u002Fpull\u002F286 中完成\n* 为 Builder API 添加 __call__ 方法及单元测试，由 @cpnota 在 https:\u002F\u002Fgithub.com\u002Fcpnota\u002Fautonomous-learning-library\u002Fpull\u002F287 中完成\n* Feature\u002Fepisode length 由 @cpnota 在 https:\u002F\u002Fgithub.com\u002Fcpnota\u002Fautonomous-learning-library\u002Fpull\u002F289 中完成\n* 为 SAC 添加 entropy_backups 超参数，由 @cpnota 在 https:\u002F\u002Fgithub.com\u002Fcpnota\u002Fautonomous-learning-library\u002Fpull\u002F296 中完成\n* Refactor\u002Fformatting 由 @cpnota 在 https:\u002F\u002Fgithub.com\u002Fcpnota\u002Fautonomous-learning-library\u002Fpull\u002F299 中完成\n* 修复 key error 警告，由 @cpnota 在 https:\u002F\u002Fgithub.com\u002Fcpnota\u002Fautonomous-learning-library\u002Fpull\u002F300 中完成\n* 完成 nn 聚合的 docstring，由 @cpnota 在 https:\u002F\u002Fgithub.com\u002Fcpnota\u002Fautonomous-learning-library\u002Fpull\u002F301 中完成\n* Bugfix\u002Fpublish workflow 由 @cpnota 在 https:\u002F\u002Fgithub.com\u002Fcpnota\u002Fautonomous-learning-library\u002Fpull\u002F303 中完成\n* 添加 save_freq 参数并重构脚本，由 @cpnota 在 https:\u002F\u002Fgithub.com\u002Fcpnota\u002Fautonomous-learning-library\u002Fpull\u002F305 中完成\n* 超参数日志记录，由 @cpnota 在 
https:\u002F\u002Fgithub.com\u002Fcpnota\u002Fautonomous-learning-library\u002Fpull\u002F308 中完成\n* 从 hparams 标签中移除 env 名称，由 @cpnota 在 https:\u002F\u002Fgithub.com\u002Fcpnota\u002Fautonomous-learning-library\u002Fpull\u002F309 中完成\n* SAC\u002FDDPG 调整，由 @cpnota 在 https:\u002F\u002Fgithub.com\u002Fcpnota\u002Fautonomous-learning-library\u002Fpull\u002F312 中完成\n* 修复重复环境处理问题，由 @cpnota 在 https:\u002F\u002Fgithub.com\u002Fcpnota\u002Fautonomous-learning-library\u002Fpull\u002F314 中完成\n* 升级依赖项，由 @cpnota 在 https:\u002F\u002Fgithub.com\u002Fcpnota\u002Fautonomous-learning-library\u002Fpull\u002F315 中完成\n* 修复 plotter 并在训练结束时记录最终摘要，由 @cpnota 在 https:\u002F\u002Fgithub.com\u002Fcpnota\u002Fautonomous-learning-library\u002Fpull\u002F320 中完成\n* 添加 swig setup 依赖，并从 GitHub 脚本中移除 unrar 和 swig，由 @cpnota 在 https:\u002F\u002Fgithub.com\u002Fcpnota\u002Fautonomous-learning-library\u002Fpull\u002F321 中完成\n* Feature\u002Fbenchmarks 由 @cpnota 在 https:\u002F\u002Fgithub.com\u002Fcpnota\u002Fautonomous-learning-library\u002Fpull\u002F317 中完成\n* 更新文档，由 @cpnota 在 https:\u002F\u002Fgithub.com\u002Fcpnota\u002Fautonomous-learning-library\u002Fpull\u002F323 中完成\n* v0.9.1 - Gymnasium 和 Mujoco，由 @cpnota 在 https:\u002F\u002Fgithub.com\u002Fcpn 中完成","2024-03-17T21:46:51",{"id":159,"version":160,"summary_zh":161,"released_at":162},172083,"v0.9.1-alpha.3","发布工作流测试","2024-02-25T17:18:10",{"id":164,"version":165,"summary_zh":166,"released_at":167},172084,"v0.8.0","本次发布包含多项改进：\n\n* 更新了依赖项。\n* 简化了 FeatureNetwork 的逻辑。\n* 将软 Actor-Critic 算法升级到新版本，该版本不再使用单独的状态值函数。\n* 改进了日志记录接口。","2022-06-27T19:36:11",{"id":169,"version":170,"summary_zh":171,"released_at":172},172085,"v0.7.2","* 将 PyTorch 更新至 1.9.0 版本 (#255)\n* 如果启用了 `clip_grad` 且范数为非有限值，则抛出 `RuntimeError` (#255)\n* 修复软策略中 `log_prob` 缩放的 bug (#256)","2021-08-05T22:52:07",{"id":174,"version":175,"summary_zh":176,"released_at":177},172086,"v0.7.1","一些小的内部优化和修复：\n\n* 创建了 `VectorEnvironment` 类，并重构了并行环境的工作方式 #239\n* 在 `ParallelPreset` 中添加了 `parallel_test_agent` 方法 #240\n* 修复了一个 bug：在 Atari 预设中使用 `n_envs=1` 时，会错误地使用 `Body` 而不是 `ParallelBody` #241\n* 修正 `DeepmindAtariBody`，使其仅在 `frame_stack > 1` 时才使用 `FrameStackBody` #245\n* 修复 `GreedyPolicy` 不尊重 `test_exploration` 设置的 bug #246\n* 通过防止温度降至 0 以下，提升了 SAC 的稳定性 #247\n* 更新 PettingZoo 的版本，并在 CI 中使用新环境 #351\n* 修复优先级回放的 `store_device` 问题 #249","2021-06-14T13:35:17",{"id":179,"version":180,"summary_zh":181,"released_at":182},172087,"v0.7.0","本次发布包含多项新功能、重构以及错误修复。\n\n## 功能\n\n* 保存\u002F加载智能体。#185\n* **（实验性）** 使用 [PettingZoo](https:\u002F\u002Fwww.pettingzoo.ml) 支持多智能体 Atari 游戏。#201\n* 可选将回放缓冲区存储在不同设备上。#187\n* 使用 cloudpickle 实现更完善的环境复制。#200\n* 内置 `Identity` 特征网络 #202\n* 支持 Comet.ml #215\n\n## 重构\n\n* 智能体拆分为三种类型：`Agent`、`ParallelAgent` 和 `Multiagent` #221\n* 调整预设的工作方式，以更好地支持保存\u002F加载 #185\n* 将工作流\u002FCI 从 Travis 切换至 GitHub Actions #235\n* 改进 `Environment` 的导入 #236\n* 支持最新版本的 PyTorch #235\n\n## 错误修复\n* 修复了 ParallelGreedyPolicy 中的一个 bug #233\n* 修复了 Atari 游戏中 life_lost 相关的问题 #\n* 文档澄清 #192 #216","2021-04-12T16:51:39",{"id":184,"version":185,"summary_zh":186,"released_at":187},172088,"v0.6.0","本次发布包含一些底层优化和错误修复，其中最显著的是对 `State` 类的重构。`State` 现在支持添加任意键值对，从而能够表示更复杂的状态空间。此外，还新增了一个 `StateArray` 类，它可以自动以多种方式对状态进行堆叠或切片操作，使得处理批量数据、多时间步以及其他场景变得更加便捷。以下是完整的变更列表：\n\n* 重构了 `State` 类，并新增了 `StateArray` 对象（#160 和 #167）\n* 在所有现有预设下增加了指定自定义模型的支持。感谢 @michalgregor 的贡献！（#163）\n* 修复了 SAC 评估模式下的一个 bug。感谢 @michalgregor 发现并提出了修复方案！（#169）\n* 修复了一个问题：预先构建的 Gym 环境名称未能正确处理。感谢 @mctigger 的报告！（#169）同时感谢 @michalgregor 提供的修复方案！（#165，通过 #170 合并）\n* 修复了一个 bug：Atari 的 `FireReset` 
包装器被错误地应用到了没有“Fire”动作的游戏上，导致这些游戏无法正常运行。感谢 @andrewsmike 报告并修复了该 bug！（#168）","2020-09-29T15:41:49",{"id":189,"version":190,"summary_zh":191,"released_at":192},172089,"v0.5.3","本次发布包含一个热修复补丁 #155，可提升 PPO 连续预设的性能。","2020-07-04T19:10:55",{"id":194,"version":195,"summary_zh":196,"released_at":197},172090,"v0.5.2","仅包含一些 minor bug 修复和文档改进：\n\n* Windows 系统的日期时间兼容性 #137 #142\n* 持续集成相关修复 #138\n* SoftDeterministicPolicy 缩放问题修复 #140\n* 修复并行实验中测试回合计数错误的问题 #143\n* 移除尾部逗号 #146\n* 在测试模式下，首次动作是通过 act() 而非 eval() 选择的 #150\n* 文档改进 #151","2020-06-08T13:34:48",{"id":199,"version":200,"summary_zh":201,"released_at":202},172091,"v0.5.1","之前的版本遗漏了 #132 提交中的更改，该更改修复了并行环境每秒帧数的计算问题。","2020-04-18T19:00:37",{"id":204,"version":205,"summary_zh":206,"released_at":207},172092,"v0.5.0","This release contains some minor changes to several key APIs.\r\n\r\n## Agent Evaluation Mode\r\n\r\nWe added a new method to the `Agent` interface called `eval`. `eval` is the same as `act`, except the agent does not perform any training updates. This is useful for measure the performance of an agent at the end of a training run. Speaking of which...\r\n\r\n## Experiment Refactoring: Train\u002FTest\r\n\r\nWe completely refactored the `all.experiments` module. First of all, the primary public entry point is now a function called `run_experiment`. Under the hood, there is a new `Experiment` interface:\r\n\r\n```python\r\nclass Experiment(ABC):\r\n    '''An Experiment manages the basic train\u002Ftest loop and logs results.'''\r\n\r\n    @abstractmethod\r\n    def frame(self):\r\n        '''The index of the current training frame.'''\r\n\r\n    @property\r\n    @abstractmethod\r\n    def episode(self):\r\n        '''The index of the current training episode'''\r\n\r\n    @abstractmethod\r\n    def train(self, frames=np.inf, episodes=np.inf):\r\n        '''\r\n        Train the agent for a certain number of frames or episodes.\r\n        If both frames and episodes are specified, then the training loop will exit\r\n        when either condition is satisfied.\r\n\r\n        Args:\r\n                frames (int): The maximum number of training frames.\r\n                episodes (bool): The maximum number of training episodes.\r\n        '''\r\n\r\n    @abstractmethod\r\n    def test(self, episodes=100):\r\n        '''\r\n        Test the agent in eval mode for a certain number of episodes.\r\n\r\n        Args:\r\n            episodes (int): The number of test epsiodes.\r\n\r\n        Returns:\r\n            list(float): A list of all returns received during testing.\r\n        '''\r\n```\r\n\r\nNotice the new method, `experiment.test()`. This method runs the agent in `eval` mode for a certain number of episodes and logs summary statistics (the mean and std of the returns).\r\n\r\n ## Approximation: no_grad vs. eval\r\n\r\nFinally, we clarified the usage of `Approximation.eval(*inputs)` by adding an additional method, `Approximation.no_grad(*inputs)`. `eval()` both puts the network in evaluation mode *and* runs the forward pass with `torch.no_grad()`. `no_grad()` simply runs a forward pass in the current mode. The various `Policy` implementations were also adjusted to correctly execute the greedy behavior in `eval` mode.","2020-04-18T18:39:48",{"id":209,"version":210,"summary_zh":211,"released_at":212},172093,"v0.4.0","The first public release of the library!","2020-01-20T17:51:58",{"id":214,"version":215,"summary_zh":216,"released_at":217},172094,"v0.3.3","Small but important update!\r\n\r\n1. 
Added `all.experiments.plot` module, with `plot_returns_100` function that accepts a `runs` directory and plots contained results.\r\n2. Tweaked the `a2c` Atari preset to match the configuration of the other algorithms better\r\n","2019-09-17T22:18:49",{"id":219,"version":220,"summary_zh":221,"released_at":222},172095,"v0.3.1","1. Add C51, a distributional RL agent\r\n2. Add double-dqn agent (ddqn)\r\n3. Update the Atari wrappers to exactly match deepmind","2019-09-09T16:02:18",{"id":224,"version":225,"summary_zh":226,"released_at":227},172096,"v0.3.0","This release contains several usability enhancements! The biggest change, however, is a refactor. The policy classes now extend from `Approximation`. This means that things like target networks, learning rate schedulers, and model saving are all handled in one place! \r\n\r\nThe full list of changes is:\r\n\r\n* Refactored experiment API (#88)\r\n* Policies inherit from `Approximation` (#89)\r\n* Models now save themselves automatically every 200 updates. Also, you can load models and watch them play in each environment! (#90)\r\n* Automatically set the temperature in SAC (#91)\r\n* Schedule learning rates and other parameters (#92)\r\n* SAC bugfix\r\n* Refactor usage of target networks. Now there is a difference between `eval()` and `target()`: the former runs a forward pass of the current network, the latter does so on the target network, each without creating a computation graph. (#94)\r\n* Tweak `AdvantageBuffer` API. Also fix a minor bug in A2C (#95)\r\n* Report the best returns so far in a separate metric (#96)\r\n","2019-08-02T20:51:20",{"id":229,"version":230,"summary_zh":231,"released_at":232},172097,"v0.2.4","A bug in SoftDeterministicPolicy was slowing learning and causing numerical instability in some cases. This fixes that.","2019-07-30T16:22:02",{"id":234,"version":235,"summary_zh":236,"released_at":237},172098,"v0.2.3","Added [Soft Actor-Critic](https:\u002F\u002Farxiv.org\u002Fabs\u002F1801.01290) (SAC). SAC is a state-of-the-art algorithm for continuous control based on the max-entropy RL framework.","2019-07-23T20:55:07",{"id":239,"version":240,"summary_zh":241,"released_at":242},172099,"v0.2.2","`PPO` and `Vanilla` release!\r\n\r\n1. Add [PPO](https:\u002F\u002Farxiv.org\u002Fabs\u002F1707.06347), one of the most popular modern RL algorithms.\r\n2. Add `Vanilla` series agents: \"vanilla\" implementations of actor-critic, sarsa, q-learning, and REINFORCE. These algorithms are all prefixed with the letter \"v\" in the `agents` folder.","2019-07-20T17:04:00",{"id":244,"version":245,"summary_zh":246,"released_at":247},172100,"v0.2.1","This release introduces `continuous` policies and agents, including `DDPG`. Also includes a number of quality-of-life improvements:\r\n\r\n* Add `continuous` agent suite\r\n* Add `Gaussian` policy\r\n* Add `DeterministicPolicy`\r\n* Introduce `Approximation` base class from which `QNetwork`, `VNetwork`, etc. are derived\r\n* Convert `layers` module to `all.nn`. Extend from `torch.nn` with custom layers added, to make crafting unique networks easier.\r\n* Introduce `DDPG` agent","2019-07-12T19:45:56",{"id":249,"version":250,"summary_zh":251,"released_at":252},172101,"v0.2.0","The release contains a bunch of changes under the hood. The `agent` API was simplified down to a single method, `action = agent.act(state, reward)`. To accompany this change, `State` was added as a first class object. 
Terminal states now have the `state.mask` set to 0, whereas before terminal states were represented by `None`.\r\n\r\nAnother major addition is `slurm` support. This is in particular to aid in running on `gypsum`. The `SlurmExperiment` API handles the creation of the appropriate `.sh` files, output, etc., so experiments can be run on `slurm` by writing a single python script! No more writing `.sh` files by hand! Examples can be found in the `demos` folder.\r\n\r\nThere were a few other minor changes as well.\r\n\r\nChange log:\r\n* Simplified agent API to only include `act` #56 \r\n* Added State object #51 \r\n* Added SlurmExperiment for running on gypsum #53 \r\n* Updated the local and release scripts, and added slurm demos #54 \r\n* Tweaked parameter order in replay buffers #59 \r\n* Improved shared feature handling #63 \r\n* Made `write_loss` togglable #64 \r\n* Tweaked default hyperparameters\r\n","2019-06-07T22:20:25"]
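A note on the Python entry points: the "Running the Presets" section above only shows the CLI scripts (`all-atari`, `all-watch-atari`), while the v0.5.0 release notes and the PPO FAQ show the underlying `run_experiment` / `SingleEnvExperiment` interfaces. A rough sketch of driving a preset from Python is given below; the import paths and the builder-style preset call are assumptions, and they have changed between releases (see the Builder API changes noted in v0.9.1), so check the documentation for your installed version.

```python
# Rough sketch only -- import paths and preset signatures differ between releases.
from all.environments import GymEnvironment       # assumed environment wrapper location
from all.experiments import run_experiment        # entry point named in the v0.5.0 notes
from all.presets.classic_control import dqn       # assumed preset module

env = GymEnvironment("CartPole-v1", device="cpu")  # torch-based environment interface
agent = dqn.device("cpu")                          # assumed builder-style configuration

# run_experiment manages the train/test loop and writes logs under runs/ for TensorBoard.
run_experiment(agent, env, frames=100_000)
```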
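The first FAQ above notes that the Atari presets keep the replay buffer in GPU memory and suggests, as a workaround, a custom buffer that stores frames on the CPU and moves only the sampled minibatch to the GPU. The following is a generic, library-agnostic sketch of that idea; it does not subclass the library's own ReplayBuffer, whose exact interface is not shown in this page.

```python
import random
import torch

class CPUReplayBuffer:
    """Uniform replay buffer that keeps transitions in CPU memory (uint8 frames)
    and transfers only the sampled minibatch to the training device."""

    def __init__(self, capacity, device="cuda"):
        self.capacity = capacity
        self.device = device
        self.buffer = []
        self.pos = 0

    def store(self, state, action, reward, next_state, done):
        item = (state.to("cpu", torch.uint8), action, reward,
                next_state.to("cpu", torch.uint8), done)
        if len(self.buffer) < self.capacity:
            self.buffer.append(item)
        else:
            self.buffer[self.pos] = item          # overwrite the oldest entry when full
        self.pos = (self.pos + 1) % self.capacity

    def sample(self, batch_size):
        batch = random.sample(self.buffer, batch_size)
        states, actions, rewards, next_states, dones = zip(*batch)

        def to_device(frames):
            # Move to GPU and rescale from uint8 to [0, 1] floats only at sample time.
            return torch.stack(frames).to(self.device).float() / 255.0

        return (to_device(states),
                torch.tensor(actions, device=self.device),
                torch.tensor(rewards, device=self.device, dtype=torch.float32),
                to_device(next_states),
                torch.tensor(dones, device=self.device, dtype=torch.float32))
```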
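The feature list and the use-case section both mention generalized advantage estimation (GAE) buffers. For reference, and independent of the library's own buffer classes, the standard GAE-lambda recursion over a single rollout can be written as the short sketch below.

```python
import torch

def compute_gae(rewards, values, next_value, dones, gamma=0.99, lam=0.95):
    """Standard GAE-lambda advantages for one rollout.

    rewards, values, dones: 1-D tensors of length T (values[t] = V(s_t));
    next_value: V(s_T), used to bootstrap the final step.
    Returns (advantages, returns), where returns = advantages + values.
    """
    T = rewards.shape[0]
    advantages = torch.zeros(T)
    gae = 0.0
    for t in reversed(range(T)):
        v_next = next_value if t == T - 1 else values[t + 1]
        nonterminal = 1.0 - dones[t].float()
        delta = rewards[t] + gamma * v_next * nonterminal - values[t]  # one-step TD error
        gae = delta + gamma * lam * nonterminal * gae                   # recursive accumulation
        advantages[t] = gae
    return advantages, advantages + values
```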