[{"data":1,"prerenderedAt":-1},["ShallowReactive",2],{"similar-openai--procgen":3,"tool-openai--procgen":64},[4,17,27,35,43,56],{"id":5,"name":6,"github_repo":7,"description_zh":8,"stars":9,"difficulty_score":10,"last_commit_at":11,"category_tags":12,"status":16},3808,"stable-diffusion-webui","AUTOMATIC1111\u002Fstable-diffusion-webui","stable-diffusion-webui 是一个基于 Gradio 构建的网页版操作界面，旨在让用户能够轻松地在本地运行和使用强大的 Stable Diffusion 图像生成模型。它解决了原始模型依赖命令行、操作门槛高且功能分散的痛点，将复杂的 AI 绘图流程整合进一个直观易用的图形化平台。\n\n无论是希望快速上手的普通创作者、需要精细控制画面细节的设计师，还是想要深入探索模型潜力的开发者与研究人员，都能从中获益。其核心亮点在于极高的功能丰富度：不仅支持文生图、图生图、局部重绘（Inpainting）和外绘（Outpainting）等基础模式，还独创了注意力机制调整、提示词矩阵、负向提示词以及“高清修复”等高级功能。此外，它内置了 GFPGAN 和 CodeFormer 等人脸修复工具，支持多种神经网络放大算法，并允许用户通过插件系统无限扩展能力。即使是显存有限的设备，stable-diffusion-webui 也提供了相应的优化选项，让高质量的 AI 艺术创作变得触手可及。",162132,3,"2026-04-05T11:01:52",[13,14,15],"开发框架","图像","Agent","ready",{"id":18,"name":19,"github_repo":20,"description_zh":21,"stars":22,"difficulty_score":23,"last_commit_at":24,"category_tags":25,"status":16},1381,"everything-claude-code","affaan-m\u002Feverything-claude-code","everything-claude-code 是一套专为 AI 编程助手（如 Claude Code、Codex、Cursor 等）打造的高性能优化系统。它不仅仅是一组配置文件，而是一个经过长期实战打磨的完整框架，旨在解决 AI 代理在实际开发中面临的效率低下、记忆丢失、安全隐患及缺乏持续学习能力等核心痛点。\n\n通过引入技能模块化、直觉增强、记忆持久化机制以及内置的安全扫描功能，everything-claude-code 能显著提升 AI 在复杂任务中的表现，帮助开发者构建更稳定、更智能的生产级 AI 代理。其独特的“研究优先”开发理念和针对 Token 消耗的优化策略，使得模型响应更快、成本更低，同时有效防御潜在的攻击向量。\n\n这套工具特别适合软件开发者、AI 研究人员以及希望深度定制 AI 工作流的技术团队使用。无论您是在构建大型代码库，还是需要 AI 协助进行安全审计与自动化测试，everything-claude-code 都能提供强大的底层支持。作为一个曾荣获 Anthropic 黑客大奖的开源项目，它融合了多语言支持与丰富的实战钩子（hooks），让 AI 真正成长为懂上",138956,2,"2026-04-05T11:33:21",[13,15,26],"语言模型",{"id":28,"name":29,"github_repo":30,"description_zh":31,"stars":32,"difficulty_score":23,"last_commit_at":33,"category_tags":34,"status":16},2271,"ComfyUI","Comfy-Org\u002FComfyUI","ComfyUI 是一款功能强大且高度模块化的视觉 AI 引擎，专为设计和执行复杂的 Stable Diffusion 图像生成流程而打造。它摒弃了传统的代码编写模式，采用直观的节点式流程图界面，让用户通过连接不同的功能模块即可构建个性化的生成管线。\n\n这一设计巧妙解决了高级 AI 
绘图工作流配置复杂、灵活性不足的痛点。用户无需具备编程背景，也能自由组合模型、调整参数并实时预览效果，轻松实现从基础文生图到多步骤高清修复等各类复杂任务。ComfyUI 拥有极佳的兼容性，不仅支持 Windows、macOS 和 Linux 全平台，还广泛适配 NVIDIA、AMD、Intel 及苹果 Silicon 等多种硬件架构，并率先支持 SDXL、Flux、SD3 等前沿模型。\n\n无论是希望深入探索算法潜力的研究人员和开发者，还是追求极致创作自由度的设计师与资深 AI 绘画爱好者，ComfyUI 都能提供强大的支持。其独特的模块化架构允许社区不断扩展新功能，使其成为当前最灵活、生态最丰富的开源扩散模型工具之一，帮助用户将创意高效转化为现实。",107662,"2026-04-03T11:11:01",[13,14,15],{"id":36,"name":37,"github_repo":38,"description_zh":39,"stars":40,"difficulty_score":23,"last_commit_at":41,"category_tags":42,"status":16},3704,"NextChat","ChatGPTNextWeb\u002FNextChat","NextChat 是一款轻量且极速的 AI 助手，旨在为用户提供流畅、跨平台的大模型交互体验。它完美解决了用户在多设备间切换时难以保持对话连续性，以及面对众多 AI 模型不知如何统一管理的痛点。无论是日常办公、学习辅助还是创意激发，NextChat 都能让用户随时随地通过网页、iOS、Android、Windows、MacOS 或 Linux 端无缝接入智能服务。\n\n这款工具非常适合普通用户、学生、职场人士以及需要私有化部署的企业团队使用。对于开发者而言，它也提供了便捷的自托管方案，支持一键部署到 Vercel 或 Zeabur 等平台。\n\nNextChat 的核心亮点在于其广泛的模型兼容性，原生支持 Claude、DeepSeek、GPT-4 及 Gemini Pro 等主流大模型，让用户在一个界面即可自由切换不同 AI 能力。此外，它还率先支持 MCP（Model Context Protocol）协议，增强了上下文处理能力。针对企业用户，NextChat 提供专业版解决方案，具备品牌定制、细粒度权限控制、内部知识库整合及安全审计等功能，满足公司对数据隐私和个性化管理的高标准要求。",87618,"2026-04-05T07:20:52",[13,26],{"id":44,"name":45,"github_repo":46,"description_zh":47,"stars":48,"difficulty_score":23,"last_commit_at":49,"category_tags":50,"status":16},2268,"ML-For-Beginners","microsoft\u002FML-For-Beginners","ML-For-Beginners 是由微软推出的一套系统化机器学习入门课程，旨在帮助零基础用户轻松掌握经典机器学习知识。这套课程将学习路径规划为 12 周，包含 26 节精炼课程和 52 道配套测验，内容涵盖从基础概念到实际应用的完整流程，有效解决了初学者面对庞大知识体系时无从下手、缺乏结构化指导的痛点。\n\n无论是希望转型的开发者、需要补充算法背景的研究人员，还是对人工智能充满好奇的普通爱好者，都能从中受益。课程不仅提供了清晰的理论讲解，还强调动手实践，让用户在循序渐进中建立扎实的技能基础。其独特的亮点在于强大的多语言支持，通过自动化机制提供了包括简体中文在内的 50 多种语言版本，极大地降低了全球不同背景用户的学习门槛。此外，项目采用开源协作模式，社区活跃且内容持续更新，确保学习者能获取前沿且准确的技术资讯。如果你正寻找一条清晰、友好且专业的机器学习入门之路，ML-For-Beginners 将是理想的起点。",84991,"2026-04-05T10:45:23",[14,51,52,53,15,54,26,13,55],"数据工具","视频","插件","其他","音频",{"id":57,"name":58,"github_repo":59,"description_zh":60,"stars":61,"difficulty_score":10,"last_commit_at":62,"category_tags":63,"status":16},3128,"ragflow","infiniflow\u002Fragflow","RAGFlow 
是一款领先的开源检索增强生成（RAG）引擎，旨在为大语言模型构建更精准、可靠的上下文层。它巧妙地将前沿的 RAG 技术与智能体（Agent）能力相结合，不仅支持从各类文档中高效提取知识，还能让模型基于这些知识进行逻辑推理和任务执行。\n\n在大模型应用中，幻觉问题和知识滞后是常见痛点。RAGFlow 通过深度解析复杂文档结构（如表格、图表及混合排版），显著提升了信息检索的准确度，从而有效减少模型“胡编乱造”的现象，确保回答既有据可依又具备时效性。其内置的智能体机制更进一步，使系统不仅能回答问题，还能自主规划步骤解决复杂问题。\n\n这款工具特别适合开发者、企业技术团队以及 AI 研究人员使用。无论是希望快速搭建私有知识库问答系统，还是致力于探索大模型在垂直领域落地的创新者，都能从中受益。RAGFlow 提供了可视化的工作流编排界面和灵活的 API 接口，既降低了非算法背景用户的上手门槛，也满足了专业开发者对系统深度定制的需求。作为基于 Apache 2.0 协议开源的项目，它正成为连接通用大模型与行业专有知识之间的重要桥梁。",77062,"2026-04-04T04:44:48",[15,14,13,26,54],{"id":65,"github_repo":66,"name":67,"description_en":68,"description_zh":69,"ai_summary_zh":70,"readme_en":71,"readme_zh":72,"quickstart_zh":73,"use_case_zh":74,"hero_image_url":75,"owner_login":76,"owner_name":77,"owner_avatar_url":78,"owner_bio":79,"owner_company":80,"owner_location":80,"owner_email":80,"owner_twitter":80,"owner_website":81,"owner_url":82,"languages":83,"stars":104,"forks":105,"last_commit_at":106,"license":107,"difficulty_score":23,"env_os":108,"env_gpu":109,"env_ram":109,"env_deps":110,"category_tags":117,"github_topics":80,"view_count":23,"oss_zip_url":80,"oss_zip_packed_at":80,"status":16,"created_at":118,"updated_at":119,"faqs":120,"releases":150},1410,"openai\u002Fprocgen","procgen","Procgen Benchmark: Procedurally-Generated Game-Like Gym-Environments","Procgen 是一个由 OpenAI 开源的强化学习基准测试工具包，包含 16 个程序化生成的类游戏环境。它的核心目标是帮助研究人员快速评估智能体（Agent）的学习速度与泛化能力，即检验模型能否在面对从未见过的新关卡时，依然灵活运用已学到的技能，而非死记硬背固定的操作序列。\n\n传统游戏测试环境往往关卡固定，容易导致算法“过拟合”；而 Gym Retro 虽经典但运行速度较慢且难以定制。Procgen 巧妙解决了这些痛点：它利用程序化生成技术，确保每次运行的关卡布局、敌人位置等要素都是随机变化的，从根本上杜绝了记忆式作弊。同时，其运行效率极高，在单核 CPU 上即可达到每秒数千步的速度，比同类工具快四倍以上，大幅降低了实验的时间成本。此外，每个环境的代码量极少（通常少于 300 行），结构清晰，非常便于开发者修改逻辑或构建自定义环境。\n\nProcgen 主要面向强化学习领域的研究人员、算法工程师及高校师生。如果你正在探索如何让 AI 更聪明地适应未知场景，或者需要高效的大规模训练基准，Procgen 提供了一个轻量、高速且高度可定制的绝佳平台。它支持主流操作系统及多种 Python ","Procgen 是一个由 OpenAI 开源的强化学习基准测试工具包，包含 16 个程序化生成的类游戏环境。它的核心目标是帮助研究人员快速评估智能体（Agent）的学习速度与泛化能力，即检验模型能否在面对从未见过的新关卡时，依然灵活运用已学到的技能，而非死记硬背固定的操作序列。\n\n传统游戏测试环境往往关卡固定，容易导致算法“过拟合”；而 Gym Retro 
虽经典但运行速度较慢且难以定制。Procgen 巧妙解决了这些痛点：它利用程序化生成技术，确保每次运行的关卡布局、敌人位置等要素都是随机变化的，从根本上杜绝了记忆式作弊。同时，其运行效率极高，在单核 CPU 上即可达到每秒数千步的速度，比同类工具快四倍以上，大幅降低了实验的时间成本。此外，每个环境的代码量极少（通常少于 300 行），结构清晰，非常便于开发者修改逻辑或构建自定义环境。\n\nProcgen 主要面向强化学习领域的研究人员、算法工程师及高校师生。如果你正在探索如何让 AI 更聪明地适应未知场景，或者需要高效的大规模训练基准，Procgen 提供了一个轻量、高速且高度可定制的绝佳平台。它支持主流操作系统及多种 Python 版本，并能无缝对接 Gym 和 gym3 接口，让实验部署变得简单快捷。","**Status:** Maintenance (expect bug fixes and minor updates)\n\n# Procgen Benchmark\n\n#### [[Blog Post]](https:\u002F\u002Fopenai.com\u002Fblog\u002Fprocgen-benchmark\u002F) [[Paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F1912.01588)\n\n16 simple-to-use procedurally-generated [gym](https:\u002F\u002Fgithub.com\u002Fopenai\u002Fgym) environments which provide a direct measure of how quickly a reinforcement learning agent learns generalizable skills.  The environments run at high speed (thousands of steps per second) on a single core.\n\nWe ran a competition in 2020 which used these environments to measure sample efficiency and generalization in RL. You can learn more [here](https:\u002F\u002Fwww.aicrowd.com\u002Fchallenges\u002Fneurips-2020-procgen-competition).\n\n\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fopenai_procgen_readme_9a36f7ca75ee.gif\">\n\nThese environments are associated with the paper [Leveraging Procedural Generation to Benchmark Reinforcement Learning](https:\u002F\u002Fcdn.openai.com\u002Fprocgen.pdf) [(citation)](#citation).  The code for running some experiments from the paper is in the [train-procgen](https:\u002F\u002Fgithub.com\u002Fopenai\u002Ftrain-procgen) repo.  
For those familiar with the original [CoinRun environment](https:\u002F\u002Fgithub.com\u002Fopenai\u002Fcoinrun), be sure to read the updated CoinRun description below as there have been subtle changes to the environment.\n\nCompared to [Gym Retro](https:\u002F\u002Fgithub.com\u002Fopenai\u002Fretro), these environments are:\n\n* Faster: Gym Retro environments are already fast, but Procgen environments can run >4x faster.\n* Randomized: Gym Retro environments are always the same, so you can memorize a sequence of actions that will get the highest reward.  Procgen environments are randomized so this is not possible.\n* Customizable: If you install from source, you can perform experiments where you change the environments, or build your own environments.  The environment-specific code for each environment is often less than 300 lines.  This is almost impossible with Gym Retro.\n\nSupported platforms:\n\n- Windows 10\n- macOS 10.14 (Mojave), 10.15 (Catalina)\n- Linux (manylinux2010)\n\nSupported Pythons:\n\n- 3.7 64-bit\n- 3.8 64-bit\n- 3.9 64-bit\n- 3.10 64-bit\n\nSupported CPUs:\n\n- Must have at least AVX\n\n## Installation\n\nFirst make sure you have a supported version of python:\n\n```\n# run these commands to check for the correct python version\npython -c \"import sys; assert (3,7,0) \u003C= sys.version_info \u003C (3,11,0), 'python is incorrect version'; print('ok')\"\npython -c \"import platform; assert platform.architecture()[0] == '64bit', 'python is not 64-bit'; print('ok')\"\n```\n\nTo install the wheel:\n\n```\npip install procgen\n```\n\nIf you get an error like `\"Could not find a version that satisfies the requirement procgen\"`, please upgrade pip: `pip install --upgrade pip`.\n\nTo try an environment out interactively:\n\n```\npython -m procgen.interactive --env-name coinrun\n```\n\nThe keys are: left\u002Fright\u002Fup\u002Fdown + q, w, e, a, s, d for the different (environment-dependent) actions.  
Your score is displayed as \"episode_return\" in the lower left.  At the end of an episode, you can see your final \"episode_return\" as well as \"prev_level_complete\" which will be `1` if you successfully completed the level.\n\nTo create an instance of the [gym](https:\u002F\u002Fgithub.com\u002Fopenai\u002Fgym) environment:\n\n```\nimport gym\nenv = gym.make(\"procgen:procgen-coinrun-v0\")\n```\n\nTo create an instance of the [gym3](https:\u002F\u002Fgithub.com\u002Fopenai\u002Fgym3) (vectorized) environment:\n\n```\nfrom procgen import ProcgenGym3Env\nenv = ProcgenGym3Env(num=1, env_name=\"coinrun\")\n```\n\n### Docker\n\nA [`Dockerfile`](docker\u002FDockerfile) is included to demonstrate a minimal Docker-based setup that works for running a random agent.\n\n```\ndocker build docker --tag procgen\ndocker run --rm -it procgen python3 -m procgen.examples.random_agent_gym\n```\n\nThere is a second `Dockerfile` to demonstrate installing from source:\n\n```\ndocker build . --tag procgen --file docker\u002FDockerfile.dev\ndocker run --rm -it procgen python -c \"from procgen import ProcgenGym3Env; env = ProcgenGym3Env(num=1, env_name='coinrun'); print(env.observe())\"\n```\n\n## Environments\n\nThe observation space is a box space with the RGB pixels the agent sees in a numpy array of shape (64, 64, 3).  The expected step rate for a human player is 15 Hz.\n\nThe action space is `Discrete(15)` for which button combo to press.  The button combos are defined in [`env.py`](procgen\u002Fenv.py).\n\nIf you are using the vectorized environment, the observation space is a dictionary space where the pixels are under the key \"rgb\".\n\nHere are the 16 environments:\n\n| Image | Name | Description |\n| --- | --- | --- |\n| \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fopenai_procgen_readme_3e1f7e56f45e.png\" width=\"200px\"> | `bigfish` | The player starts as a small fish and becomes bigger by eating other fish. 
The player may only eat fish smaller than itself, as determined solely by width. If the player comes in contact with a larger fish, the player is eaten and the episode ends. The player receives a small reward for eating a smaller fish and a large reward for becoming bigger than all other fish, at which point the episode ends. \n| \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fopenai_procgen_readme_913d14e21d81.png\" width=\"200px\"> | `bossfight` | The player controls a small starship and must destroy a much bigger boss starship. The boss randomly selects from a set of possible attacks when engaging the player. The player must dodge the incoming projectiles or be destroyed. The player can also use randomly scattered meteors for cover. After a set timeout, the boss becomes vulnerable and its shields go down. At this point, the player's projectile attacks will damage the boss. Once the boss receives a certain amount of damage, the player receives a reward, and the boss re-raises its shields. If the player damages the boss several times in this way, the boss is destroyed, the player receives a large reward, and the episode ends.\n| \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fopenai_procgen_readme_22e9f5bd66ec.png\" width=\"200px\"> | `caveflyer` | The player must navigate a network of caves to reach the exit. Player movement mimics the Atari game “Asteroids”: the ship can rotate and travel forward or backward along the current axis. The majority of the reward comes from successfully reaching the end of the level, though additional reward can be collected by destroying target objects along the way with the ship's lasers. There are stationary and moving lethal obstacles throughout the level.\n| \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fopenai_procgen_readme_479390adf7c6.png\" width=\"200px\"> | `chaser` | Inspired by the Atari game “MsPacman”. 
Maze layouts are generated using Kruskal’s algorithm, and then walls are removed until no dead-ends remain in the maze. The player must collect all the green orbs. 3 large stars spawn that will make enemies vulnerable for a short time when collected. A collision with an enemy that isn’t vulnerable results in the player’s death. When a vulnerable enemy is eaten, an egg spawns somewhere on the map that will hatch into a new enemy after a short time, keeping the total number of enemies constant. The player receives a small reward for collecting each orb and a large reward for completing the level.\n| \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fopenai_procgen_readme_3fe0b2fd3bb8.png\" width=\"200px\"> | `climber` | A simple platformer. The player must climb a sequence of platforms, collecting stars along the way. A small reward is given for collecting a star, and a larger reward is given for collecting all stars in a level. If all stars are collected, the episode ends. There are lethal flying monsters scattered throughout the level.\n| \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fopenai_procgen_readme_567bae33a05b.png\" width=\"200px\"> | `coinrun` | A simple platformer. The goal is to collect the coin at the far right of the level, and the player spawns on the far left. The agent must dodge stationary saw obstacles, enemies that pace back and forth, and chasms that lead to death. Note that while the previously released version of CoinRun painted velocity information directly onto observations, the current version does not. This makes the environment significantly more difficult.\n| \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fopenai_procgen_readme_87303bfce754.png\" width=\"200px\"> | `dodgeball` | Loosely inspired by the Atari game “Berzerk”. The player spawns in a room with a random configuration of walls and enemies. Touching a wall loses the game and ends the episode. 
The player moves relatively slowly and can navigate throughout the room. There are enemies which also move slowly and which will occasionally throw balls at the player. The player can also throw balls, but only in the direction they are facing. If all enemies are hit, the player can move to the unlocked platform and earn a significant level completion bonus.\n| \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fopenai_procgen_readme_19df7a0c5e9e.png\" width=\"200px\"> | `fruitbot` | A scrolling game where the player controls a robot that must navigate between gaps in walls and collect fruit along the way. The player receives a positive reward for collecting a piece of fruit, and a larger negative reward for mistakenly collecting a non-fruit object. Half of the spawned objects are fruit (positive reward) and half are non-fruit (negative reward). The player receives a large reward if they reach the end of the level. Occasionally the player must use a key to unlock gates which block the way.\n| \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fopenai_procgen_readme_ef954b549fe0.png\" width=\"200px\"> | `heist` | The player must steal the gem hidden behind a network of locks. Each lock comes in one of three colors, and the necessary keys to open these locks are scattered throughout the level. The level layout takes the form of a maze, again generated by Kruskal's algorithm. Once the player collects a key of a certain color, the player may open the lock of that color. All keys in the player's possession are shown in the top right corner of the screen.\n| \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fopenai_procgen_readme_54da3a805b0f.png\" width=\"200px\"> | `jumper` | A platformer with an open world layout. The player, a bunny, must navigate through the world to find the carrot. It might be necessary to ascend or descend the level to do so. 
The player is capable of “double jumping”, allowing it to navigate tricky layouts and reach high platforms. There are spike obstacles which will destroy the player on contact. The screen includes a compass which displays direction and distance to the carrot. The only reward in the game comes from collecting the carrot, at which point the episode ends. Due to a bug that permits the player to spawn on top of critical objects (an obstacle or the goal), ~7% of levels will terminate after a single action, the vast majority of which will have 0 reward.\n| \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fopenai_procgen_readme_ebfe2e43e242.png\" width=\"200px\"> | `leaper` | Inspired by the classic game “Frogger”. The player must cross several lanes to reach the finish line and earn a reward. The first group of lanes contains cars which must be avoided. The second group of lanes contains logs on a river. The player must hop from log to log to cross the river. If the player falls in the river, the episode ends.\n| \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fopenai_procgen_readme_6887a2639d6f.png\" width=\"200px\"> | `maze` | The player, a mouse, must navigate a maze to find the sole piece of cheese and earn a reward. Mazes are generated by Kruskal's algorithm and range in size from 3x3 to 25x25. The maze dimensions are uniformly sampled over this range. The player may move up, down, left or right to navigate the maze.\n| \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fopenai_procgen_readme_72acb206e968.png\" width=\"200px\"> | `miner` | Inspired by the classic game “BoulderDash”. The player, a robot, can dig through dirt to move throughout the world. The world has gravity, and dirt supports boulders and diamonds. Boulders and diamonds will fall through free space and roll off each other. If a boulder or a diamond falls on the player, the game is over. 
The goal is to collect all the diamonds in the level and then proceed through the exit. The player receives a small reward for collecting a diamond and a larger reward for completing the level.\n| \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fopenai_procgen_readme_b15904b2b507.png\" width=\"200px\"> | `ninja` | A simple platformer. The player, a ninja, must jump across narrow ledges while avoiding bomb obstacles. The player can toss throwing stars at several angles in order to clear bombs, if necessary. The player's jump can be charged over several timesteps to increase its effect. The player receives a reward for collecting the mushroom at the end of the level, at which point the episode terminates.\n| \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fopenai_procgen_readme_3f983ec24d21.png\" width=\"200px\"> | `plunder` | The player must destroy enemy pirate ships by firing cannonballs from its own ship at the bottom of the screen. An on-screen timer slowly counts down. If this timer runs out, the episode ends. Whenever the player fires, the timer skips forward a few steps, encouraging the player to conserve ammunition. The player must take care to avoid hitting friendly ships. The player receives a positive reward for hitting an enemy ship and a large timer penalty for hitting a friendly ship. A target in the bottom left corner identifies the color of the enemy ships to target.\n| \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fopenai_procgen_readme_88b9aa0f6316.png\" width=\"200px\"> | `starpilot` | A simple side scrolling shooter game. Relatively challenging for humans to play since all enemies fire projectiles that directly target the player. An inability to dodge quickly leads to the player's demise. 
There are fast and slow enemies, stationary turrets with high health, clouds which obscure player vision, and impassable meteors.\n\n## Known Issues\n\n* `bigfish` - It is possible for the player to occasionally become trapped along the borders of the environment.\n* `caveflyer` - In ~0.5% of levels, the player spawns next to an enemy and will die in a single step regardless of which action is taken.\n* `jumper` - In ~7% of levels, the player will spawn on top of an enemy or the goal, resulting in the episode terminating after a single step regardless of which action is taken.\n* `miner` - There is a low probability of unsolvable level configurations, with either a diamond or the exit being unreachable.\n\nRather than patch these issues, we plan to keep the environments in their originally released form, in order to ease the reproducibility of results that are already published.\n\n## Environment Options\n\n* `env_name` - Name of environment, or comma-separated list of environment names to instantiate as each env in the VecEnv.\n* `num_levels=0` - The number of unique levels that can be generated. Set to 0 to use unlimited levels.\n* `start_level=0` - The lowest seed that will be used to generate levels. 'start_level' and 'num_levels' fully specify the set of possible levels.\n* `paint_vel_info=False` - Paint player velocity info in the top left corner. Only supported by certain games.\n* `use_generated_assets=False` - Use randomly generated assets in place of human designed assets.\n* `debug=False` - Set to `True` to use the debug build if building from source.\n* `debug_mode=0` - A useful flag that's passed through to procgen envs. Use however you want during debugging.\n* `center_agent=True` - Determines whether observations are centered on the agent or display the full level. Override at your own risk.\n* `use_sequential_levels=False` - When you reach the end of a level, the episode is ended and a new level is selected.  
If `use_sequential_levels` is set to `True`, reaching the end of a level does not end the episode, and the seed for the new level is derived from the current level seed.  If you combine this with `start_level=\u003Csome seed>` and `num_levels=1`, you can have a single linear series of levels similar to a gym-retro or ALE game.\n* `distribution_mode=\"hard\"` - What variant of the levels to use, the options are `\"easy\", \"hard\", \"extreme\", \"memory\", \"exploration\"`.  All games support `\"easy\"` and `\"hard\"`, while other options are game-specific.  The default is `\"hard\"`.  Switching to `\"easy\"` will reduce the number of timesteps required to solve each game and is useful for testing or when working with limited compute resources.\n* `use_backgrounds=True` - Normally games use human designed backgrounds, if this flag is set to `False`, games will use pure black backgrounds.\n* `restrict_themes=False` - Some games select assets from multiple themes, if this flag is set to `True`, those games will only use a single theme.\n* `use_monochrome_assets=False` - If set to `True`, games will use monochromatic rectangles instead of human designed assets. best used with `restrict_themes=True`.\n\nHere's how to set the options:\n\n```\nimport gym\nenv = gym.make(\"procgen:procgen-coinrun-v0\", start_level=0, num_levels=1)\n```\n\nSince the gym environment is adapted from a gym3 environment, early calls to `reset()` are disallowed and the `render()` method does not do anything.  To render the environment, pass `render_mode=\"human\"` to the constructor, which will send `render_mode=\"rgb_array\"` to the environment constructor and wrap it in a `gym3.ViewerWrapper`.  
If you just want the frames instead of the window, pass `render_mode=\"rgb_array\"`.\n\nFor the gym3 vectorized environment:\n\n```\nfrom procgen import ProcgenGym3Env\nenv = ProcgenGym3Env(num=1, env_name=\"coinrun\", start_level=0, num_levels=1)\n```\n\nTo render with the gym3 environment, pass `render_mode=\"rgb_array\"`.  If you wish to view the output, use a `gym3.ViewerWrapper`.\n\n## Saving and loading the environment state\n\nIf you are using the gym3 interface, you can save and load the environment state:\n\n```\nfrom procgen import ProcgenGym3Env\nenv = ProcgenGym3Env(num=1, env_name=\"coinrun\", start_level=0, num_levels=1)\nstates = env.callmethod(\"get_state\")\nenv.callmethod(\"set_state\", states)\n```\n\nThis returns a list of byte strings representing the state of each game in the vectorized environment.\n\n## Notes\n\n* You should depend on a specific version of this library (using `==`) for your experiments to ensure they are reproducible.  You can get the current installed version with `pip show procgen`.\n* This library does not require or make use of GPUs.\n* While the library should be thread safe, each individual environment instance should only be used from a single thread.  The library is not fork safe unless you set `num_threads=0`.  Even if you do that, `Qt` is not guaranteed to be fork safe, so you should probably create the environment after forking or not use fork at all.\n\n# Install from Source\n\nIf you want to change the environments or create new ones, you should build from source.  You can get miniconda from https:\u002F\u002Fdocs.conda.io\u002Fen\u002Flatest\u002Fminiconda.html if you don't have it, or install the dependencies from [`environment.yml`](environment.yml) manually.  
On Windows you will also need \"Visual Studio 16 2019\" installed.\n\n```\ngit clone git@github.com:openai\u002Fprocgen.git\ncd procgen\nconda env update --name procgen --file environment.yml\nconda activate procgen\npip install -e .\n# this should say \"building procgen...done\"\npython -c \"from procgen import ProcgenGym3Env; ProcgenGym3Env(num=1, env_name='coinrun')\"\n# this should create a window where you can play the coinrun environment\npython -m procgen.interactive\n```\n\nThe environment code is in C++ and is compiled into a shared library exposing the [`gym3.libenv`](https:\u002F\u002Fgithub.com\u002Fopenai\u002Fgym3\u002Fblob\u002Fmaster\u002Fgym3\u002Flibenv.h) C interface that is then loaded by python.  The C++ code uses [Qt](https:\u002F\u002Fwww.qt.io\u002F) for drawing.\n\n# Create a new environment\n\nOnce you have installed from source, you can customize an existing environment or make a new environment of your own.  If you want to create a fast C++ 2D environment, you can fork this repo and do the following:\n\n* Copy [`src\u002Fgames\u002Fbigfish.cpp`](procgen\u002Fsrc\u002Fgames\u002Fbigfish.cpp) to `src\u002Fgames\u002F\u003Cname>.cpp`\n* Replace `BigFish` with `\u003Cname>` and `\"bigfish\"` with `\"\u003Cname>\"` in your cpp file\n* Add `src\u002Fgames\u002F\u003Cname>.cpp` to [`CMakeLists.txt`](procgen\u002FCMakeLists.txt)\n* Run `python -m procgen.interactive --env-name \u003Cname>` to test it out\n\nThis repo includes a travis configuration that will compile your environment and build python wheels for easy installation.  
In order to have this build more quickly by caching the Qt compilation, you will want to configure a GCS bucket in [common.py](https:\u002F\u002Fgithub.com\u002Fopenai\u002Fprocgen\u002Fblob\u002Fmaster\u002Fprocgen-build\u002Fprocgen_build\u002Fcommon.py#L5) and [setup service account credentials](https:\u002F\u002Fgithub.com\u002Fopenai\u002Fprocgen\u002Fblob\u002Fmaster\u002Fprocgen-build\u002Fprocgen_build\u002Fbuild_package.py#L41).\n\n# Add information to the info dictionary\n\nTo export game information from the C++ game code to Python, you can define a new `info_type`.  `info_type`s appear in the `info` dict returned by the gym environment, or in `get_info()` from the gym3 environment.\n\nTo define a new one, add the following code to the `VecGame` constructor here: [vecgame.cpp](https:\u002F\u002Fgithub.com\u002Fopenai\u002Fprocgen\u002Fblob\u002Fmaster\u002Fprocgen\u002Fsrc\u002Fvecgame.cpp#L290)\n\n```\n{\n    struct libenv_tensortype s;\n    strcpy(s.name, \"heist_key_count\");\n    s.scalar_type = LIBENV_SCALAR_TYPE_DISCRETE;\n    s.dtype = LIBENV_DTYPE_INT32;\n    s.ndim = 0;\n    s.low.int32 = 0;\n    s.high.int32 = INT32_MAX;\n    info_types.push_back(s);\n}\n```\n\nThis lets the Python code know to expect a single integer and expose it in the `info` dict.\n\nAfter adding that, you can add the following code to [heist.cpp](https:\u002F\u002Fgithub.com\u002Fopenai\u002Fprocgen\u002Fblob\u002Fmaster\u002Fprocgen\u002Fsrc\u002Fgames\u002Fheist.cpp#L93):\n\n```\nvoid observe() override {\n    Game::observe();\n    int32_t key_count = 0;\n    for (const auto& has_key : has_keys) {\n        if (has_key) {\n            key_count++;\n        }\n    }\n    *(int32_t *)(info_bufs[info_name_to_offset.at(\"heist_key_count\")]) = key_count;\n}\n```\n\nThis populates the `heist_key_count` info value each time the environment is observed.\n\nIf you run the interactive script (making sure that you installed from source), the new keys should appear in the bottom 
left hand corner:\n\n`python -m procgen.interactive --env-name heist`\n\n# Changelog\n\nSee [CHANGES](CHANGES.md) for changes present in each release.\n\n# Contributing\n\nSee [CONTRIBUTING](CONTRIBUTING.md) for information on contributing.\n\n# Assets\n\nSee [ASSET_LICENSES](ASSET_LICENSES.md) for asset license information.\n\n# Citation\n\nPlease cite using the following bibtex entry:\n\n```\n@article{cobbe2019procgen,\n  title={Leveraging Procedural Generation to Benchmark Reinforcement Learning},\n  author={Cobbe, Karl and Hesse, Christopher and Hilton, Jacob and Schulman, John},\n  journal={arXiv preprint arXiv:1912.01588},\n  year={2019}\n}\n```\n","**状态：** 维护中（预计会修复漏洞并进行少量更新）\n\n# Procgen 基准测试\n\n#### [[博客文章]](https:\u002F\u002Fopenai.com\u002Fblog\u002Fprocgen-benchmark\u002F) [[论文]](https:\u002F\u002Farxiv.org\u002Fabs\u002F1912.01588)\n\n16个简单易用的程序化生成的 [Gym](https:\u002F\u002Fgithub.com\u002Fopenai\u002Fgym) 环境，能够直接衡量强化学习智能体习得可泛化技能的速度。这些环境以极高的运行速度（每秒数千步）运行于单核处理器之上。\n\n我们在2020年举办了一场竞赛，利用这些环境来评估强化学习中的样本效率与泛化能力。您可以通过[这里](https:\u002F\u002Fwww.aicrowd.com\u002Fchallenges\u002Fneurips-2020-procgen-competition)了解更多详情。\n\n\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fopenai_procgen_readme_9a36f7ca75ee.gif\">\n\n这些环境与论文[《利用程序化生成来对强化学习进行基准测试》](https:\u002F\u002Fcdn.openai.com\u002Fprocgen.pdf)（[引用](#citation)）相关联。论文中部分实验的代码已收录在 [train-procgen](https:\u002F\u002Fgithub.com\u002Fopenai\u002Ftrain-procgen) 仓库中。对于熟悉原始 [CoinRun 环境](https:\u002F\u002Fgithub.com\u002Fopenai\u002Fcoinrun) 的用户，请务必阅读下方更新后的 CoinRun 描述，因为该环境已发生了细微变化。\n\n与 [Gym Retro](https:\u002F\u002Fgithub.com\u002Fopenai\u002Fretro) 相比，这些环境具有以下特点：\n\n* 更快：Gym Retro 环境本身已经非常高效，而 Procgen 环境的运行速度可提升至 4 倍以上。\n* 随机化：Gym Retro 环境始终一成不变，因此你只需记住一系列动作序列，就能获得最高奖励；而 Procgen 环境则采用随机化设计，因此无法做到这一点。\n* 可定制化：如果你从源码安装，可以自由地对环境进行实验，甚至自定义环境。每个环境的环境特定代码往往少于 300 行。相比之下，在 Gym Retro 中几乎不可能实现这一点。\n\n支持平台：\n\n- Windows 10\n- macOS 10.14（Mojave）、10.15（Catalina）\n- Linux（manylinux2010）\n\n支持 Python 版本：\n\n- 3.7 64 位\n- 3.8 64 位\n- 3.9 64 位\n- 3.10 64 位\n\n支持 
CPU：\n\n- 至少需要支持 AVX 指令集\n\n## 安装\n\n首先，请确保你已安装支持的 Python 版本：\n\n```\n# 运行这些命令以检查 Python 版本是否正确\npython -c \"import sys; assert (3,7,0) \u003C= sys.version_info \u003C (3,11,0), 'Python 版本不正确'; print('ok')\"\npython -c \"import platform; assert platform.architecture()[0] == '64bit', 'Python 不是 64 位'; print('ok')\"\n```\n\n要安装 wheel 包：\n\n```\npip install procgen\n```\n\n若出现类似“无法找到满足需求的 procgen 版本”的错误，请升级 pip：`pip install --upgrade pip`。\n\n若想交互式地试用某个环境：\n\n```\npython -m procgen.interactive --env-name coinrun\n```\n\n按键包括：左\u002F右\u002F上\u002F下 + Q、W、E、A、S、D，用于执行不同的动作（具体取决于环境）。你的得分会显示在左下角，标注为“episode_return”。在某一回合结束时，你还可以查看最终的“episode_return”，以及“prev_level_complete”——如果成功完成关卡，则该值为 `1`。\n\n要创建一个 [Gym](https:\u002F\u002Fgithub.com\u002Fopenai\u002Fgym) 环境的实例：\n\n```\nimport gym\nenv = gym.make(\"procgen:procgen-coinrun-v0\")\n```\n\n要创建一个 [Gym3](https:\u002F\u002Fgithub.com\u002Fopenai\u002Fgym3)（向量化的）环境的实例：\n\n```\nfrom procgen import ProcgenGym3Env\nenv = ProcgenGym3Env(num=1, env_name=\"coinrun\")\n```\n\n### Docker\n\n我们附带了一个 [`Dockerfile`](docker\u002FDockerfile)，用于演示一种基于 Docker 的最小化设置，适用于运行随机代理。\n\n```\ndocker build docker --tag procgen\ndocker run --rm -it procgen python3 -m procgen.examples.random_agent_gym\n```\n\n此外，还有一个用于从源码安装的 Dockerfile：\n\n```\ndocker build . 
--tag procgen --file docker\u002FDockerfile.dev\ndocker run --rm -it procgen python -c \"from procgen import ProcgenGym3Env; env = ProcgenGym3Env(num=1, env_name='coinrun'); print(env.observe())\"\n```\n\n## 环境\n\n观测空间是一个 Box 空间，其中的 RGB 像素以 numpy 数组的形式呈现，形状为 (64, 64, 3)。人类玩家的预期步进速率为 15 赫兹。\n\n动作空间为 `Discrete(15)`，对应要按下的按钮组合。按钮组合已在 [`env.py`](procgen\u002Fenv.py) 中定义。\n\n如果你使用的是向量化环境，观测空间将转换为一个字典形式的空间，像素则存储在键 “rgb” 下。\n\n以下是这 16 个环境：\n\n| 图片 | 名称 | 描述 |\n| --- | --- | --- |\n| \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fopenai_procgen_readme_3e1f7e56f45e.png\" width=\"200px\"> | `bigfish` | 玩家以一条小鱼的身份开始游戏，通过食用其他鱼类逐渐变大。玩家只能食用比自己体型更小的鱼类，且这种大小的判定完全依据鱼体宽度。若玩家与一条体型更大的鱼类接触，便会遭到吞噬，游戏随即结束。玩家在食用较小的鱼类时可获得少量奖励；而当玩家的体型超过所有其他鱼类时，将获得丰厚奖励，此时游戏亦告结束。\n| \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fopenai_procgen_readme_913d14e21d81.png\" width=\"200px\"> | `bossfight` | 玩家操控一艘小型星际飞船，必须摧毁一只体型远大于自己的Boss星际飞船。在与玩家交战时，Boss会随机从一组可能的攻击方式中选择一种。玩家需巧妙躲避来袭的炮弹，否则将被摧毁。玩家还可以利用随机散布的流星作为掩护。经过一段预设时间后，Boss会变得脆弱，其护盾也会随之降低。此时，玩家发射的炮弹将对Boss造成伤害。一旦Boss受到一定数量的伤害，玩家将获得奖励，同时Boss会重新提升护盾强度。若玩家多次以这种方式对Boss造成伤害，Boss将被彻底摧毁，玩家将获得丰厚奖励，游戏也随之结束。\n| \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fopenai_procgen_readme_22e9f5bd66ec.png\" width=\"200px\"> | `caveflyer` | 玩家需在一片复杂的洞穴网络中航行，最终抵达出口。玩家的移动方式借鉴了Atari游戏《爆破彗星》（Asteroids）：飞船可以旋转，并沿当前轴向向前或向后移动。大部分奖励都来自成功抵达关卡终点，不过玩家也可以通过使用飞船的激光武器摧毁沿途的目标物体来获取额外奖励。关卡中遍布着静止和移动的致命障碍物。\n| \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fopenai_procgen_readme_479390adf7c6.png\" width=\"200px\"> | `chaser` | 该游戏灵感源自Atari游戏《MsPacman》。关卡布局采用Kruskal算法生成，随后逐步移除墙壁，直至迷宫中不再存在死胡同。玩家需收集所有绿色的球状物。迷宫中散布着 3 颗大型星星，收集其中一颗会使所有敌人在短时间内变得脆弱。若与未处于脆弱状态的敌人发生碰撞，玩家将被击败。当玩家吃掉一个处于脆弱状态的敌人时，地图上会生成一颗蛋，过一段时间后会孵化成新的敌人，从而保持敌人总数不变。玩家每收集一个球状物可获得少量奖励，而完成关卡则可获得丰厚奖励。\n| \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fopenai_procgen_readme_3fe0b2fd3bb8.png\" width=\"200px\"> | `climber` | 
这是一款简单的平台跳跃游戏。玩家需攀爬一串平台，在途中收集星星。收集一颗星星可获得小额奖励，而收集全部星星则可获得更高奖励。若玩家成功收集所有星星，游戏即告结束。关卡中还散布着致命的飞行怪物。\n| \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fopenai_procgen_readme_567bae33a05b.png\" width=\"200px\"> | `coinrun` | 这是一款简单的平台跳跃游戏。玩家的目标是收集关卡最右侧的硬币，而玩家则从关卡最左侧出发。代理需躲避静止的锯齿状障碍物、来回踱步的敌人，以及通往死亡的深谷。需要注意的是，此前发布的CoinRun版本会直接将速度信息绘制在观察数据上，而当前版本则未作此处理。这使得游戏环境变得更加艰难。\n| \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fopenai_procgen_readme_87303bfce754.png\" width=\"200px\"> | `dodgeball` | 该游戏大致受Atari游戏《Berzerk》的启发。玩家出生在一个房间内，房间的墙壁和敌人分布随机。触碰墙壁会直接导致游戏结束，关卡就此终止。玩家的移动速度相对缓慢，可在整个房间内自由穿梭。游戏中还有缓慢移动的敌人，它们偶尔会向玩家投掷球状物。玩家也可以投掷球状物，但只能朝自己所面向的方向投掷。若所有敌人均被击中，玩家可前往解锁的平台，从而获得显著的关卡通关奖励。\n| \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fopenai_procgen_readme_19df7a0c5e9e.png\" width=\"200px\"> | `fruitbot` | 这是一款滚动式游戏，玩家需操控一台机器人，在墙缝间穿梭，并沿途收集水果。玩家每收集到一颗水果可获得正向奖励，而若误捡到非水果物品，则会受到更大的负向奖励。关卡中产生的物体有一半为水果（正向奖励），另一半为非水果（负向奖励）。如果玩家成功抵达关卡终点，将获得丰厚奖励。有时，玩家还需使用钥匙解锁那些阻挡道路的门。\n| \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fopenai_procgen_readme_ef954b549fe0.png\" width=\"200px\"> | `heist` | 玩家需盗取隐藏在重重门锁背后的宝石。每把锁为三种颜色之一，而开启这些锁所需的钥匙则散落在关卡各处。关卡布局呈迷宫形态，同样由Kruskal算法生成。当玩家收集到某一颜色的钥匙后，便可打开对应颜色的锁。玩家手中的所有钥匙都会显示在屏幕右上角。\n| \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fopenai_procgen_readme_54da3a805b0f.png\" width=\"200px\"> | `jumper` | 这是一款开放世界布局的平台跳跃游戏。玩家扮演一只兔子，需在广阔的世界中寻找胡萝卜。为了达成这一目标，玩家可能需要在关卡中向上或向下移动。玩家具备“双跳”能力，能够灵活穿梭于复杂多变的关卡布局，轻松抵达高处的平台。关卡中还布满了尖刺障碍物，一旦与之接触，玩家将被摧毁。屏幕上设有指南针，可显示玩家与胡萝卜之间的方向及距离。游戏唯一的奖励来自于收集胡萝卜，此时游戏即告结束。由于存在一个漏洞——玩家可能直接生成在关键物体（如障碍物或目标）上方——约7%的关卡会在一次操作后立即结束，而其中绝大多数关卡的奖励均为零。\n| \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fopenai_procgen_readme_ebfe2e43e242.png\" width=\"200px\"> | `leaper` | 该游戏灵感源自经典游戏《青蛙过河》。玩家需穿越数条车道，最终抵达终点并获得奖励。第一条车道上布满了汽车，玩家需小心避开这些车辆。第二条车道上则铺满了漂浮在河面上的原木。玩家需从一根原木跳到另一根原木，顺利渡过河流。若玩家不慎跌入河中，游戏即告结束。\n| \u003Cimg 
src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fopenai_procgen_readme_6887a2639d6f.png\" width=\"200px\"> | `maze` | 玩家扮演一只老鼠，需在迷宫中穿梭，找到唯一一块奶酪并获得奖励。迷宫由Kruskal算法生成，尺寸范围从3×3到25×25不等。迷宫的尺寸在该范围内均匀采样。玩家可通过上下左右移动，探索迷宫的各个角落。\n| \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fopenai_procgen_readme_72acb206e968.png\" width=\"200px\"> | `miner` | 该游戏灵感源自经典游戏《BoulderDash》。玩家扮演一只机器人，可挖掘泥土，从而在广阔的世界中自由穿梭。这个世界拥有重力，而泥土则支撑着巨石和钻石。巨石和钻石会从空隙中坠落，并相互滚落。若巨石或钻石砸中玩家，游戏即告结束。玩家的目标是收集关卡中的所有钻石，然后顺利驶出关卡。玩家每收集到一颗钻石可获得少量奖励，而完成关卡则可获得丰厚奖励。\n| \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fopenai_procgen_readme_b15904b2b507.png\" width=\"200px\"> | `ninja` | 这是一款简单的平台跳跃游戏。玩家扮演一名忍者，需在狭窄的岩壁间跳跃，同时避开炸弹障碍物。玩家可从多个角度投掷飞星，以清除炸弹，必要时还可采取此策略。玩家的跳跃动作可在多个时间步中进行充能，从而增强跳跃效果。玩家在关卡终点收集到蘑菇后，即可获得奖励，游戏随即结束。\n| \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fopenai_procgen_readme_3f983ec24d21.png\" width=\"200px\"> | `plunder` | 玩家需通过发射炮弹，从屏幕底部的己方舰船对敌方海盗舰进行攻击，将其摧毁。屏幕上的计时器会缓缓倒计时。若计时器归零，游戏即告结束。每当玩家开火时，计时器会提前几步推进，鼓励玩家合理分配弹药。玩家需注意避免击中友军舰船。击中敌方舰船可获得正向奖励，而击中友军舰船则会面临高额计时器惩罚。左下角的目标会提示玩家要瞄准的敌方舰船颜色。\n| \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fopenai_procgen_readme_88b9aa0f6316.png\" width=\"200px\"> | `starpilot` | 这是一款简单的横版射击游戏。对于人类玩家来说，游戏难度较高，因为所有敌人皆会发射直接瞄准玩家的弹道。若无法快速闪避，玩家将很快遭遇失败。游戏中既有快速行动的敌人，也有行动迟缓的敌人；既有生命值较高的固定炮塔，也有遮蔽玩家视线的云层，甚至还有难以逾越的流星。\n\n## 已知问题\n\n* `bigfish`——玩家有时可能会被困在环境的边界附近。\n* `caveflyer`——在约 0.5% 的关卡中，玩家会在敌人的身旁生成，并且无论采取何种操作，都会在一步之内死亡。\n* `jumper`——在约 7% 的关卡中，玩家会生成在敌人的上方或目标的上方，导致无论采取何种操作，关卡都会在一步之内结束。\n* `miner`——存在较低的概率出现无法解决的关卡配置，要么钻石无法到达，要么出口无法抵达。\n\n我们并不打算针对这些问题进行补丁修复，而是计划保持环境原样，以方便已发布结果的可重复性。\n\n## 环境选项\n\n* `env_name`——环境的名称，或者用逗号分隔的环境名称列表，用于在 VecEnv 中分别实例化每个环境。\n* `num_levels=0`——可生成的唯一关卡数量。设置为 0 时，将使用无限多的关卡。\n* `start_level=0`——用于生成关卡的最低种子值。`start_level` 和 `num_levels` 共同决定了可能的关卡集合。\n* `paint_vel_info=False`——在左上角显示玩家的移动速度信息。仅在特定游戏中支持此功能。\n* `use_generated_assets=False`——使用随机生成的资源，而非人工设计的资源。\n* 
`debug=False`——设置为 `True` 可使用调试构建（需从源码编译）。\n* `debug_mode=0`——一个有用的标志，会传递给 Procgen 环境。在调试过程中可根据需要自行调整使用。\n* `center_agent=True`——决定观察值是聚焦于代理，还是展示整个关卡。请自行承担风险进行覆盖。\n* `use_sequential_levels=False`——当玩家到达关卡终点时，回合将结束，同时选择新的关卡。如果将 `use_sequential_levels` 设置为 `True`，则到达关卡终点并不会终止整个回合，新关卡的种子将基于当前关卡的种子生成。若将此设置与 `start_level=\u003C某个种子>` 和 `num_levels=1` 结合使用，即可实现一条线性排列的关卡序列，类似于 gym-retro 或 ALE 游戏。\n* `distribution_mode=\"hard\"`——关卡的变体类型，可用选项包括 `\"easy\"`、`\"hard\"`、`\"extreme\"`、`\"memory\"`、`\"exploration\"`。所有游戏均支持 `\"easy\"` 和 `\"hard\"`，而其他选项则取决于具体游戏。默认为 `\"hard\"`。切换至 `\"easy\"` 可以减少每局游戏的步数，非常适合测试或在计算资源有限的情况下使用。\n* `use_backgrounds=True`——通常情况下，游戏会使用人工设计的背景；若将此标志设置为 `False`，游戏将采用纯黑色背景。\n* `restrict_themes=False`——部分游戏会从多个主题中选取资源；若将此标志设置为 `True`，这些游戏将只使用单一主题。\n* `use_monochrome_assets=False`——若将此标志设置为 `True`，游戏将使用单色矩形，而非人工设计的资源。建议与 `restrict_themes=True` 一起使用。\n\n以下是设置相关选项的方法：\n\n```\nimport gym\nenv = gym.make(\"procgen:procgen-coinrun-v0\", start_level=0, num_levels=1)\n```\n\n由于 gym 环境源自 gym3 环境，因此禁止过早调用 `reset()` 方法，且 `render()` 方法也不会执行任何操作。要渲染环境，请在构造函数中传入 `render_mode=\"human\"`，这会将 `render_mode=\"rgb_array\"` 传递给环境构造函数，并将其包装在 `gym3.ViewerWrapper` 中。如果您只想获取帧数据，而不希望看到窗口，则可传入 `render_mode=\"rgb_array\"`。\n\n对于 gym3 向量化的环境：\n\n```\nfrom procgen import ProcgenGym3Env\nenv = ProcgenGym3Env(num=1, env_name=\"coinrun\", start_level=0, num_levels=1)\n```\n\n要使用 gym3 环境进行渲染，请传入 `render_mode=\"rgb_array\"`。若您希望查看输出结果，可使用 `gym3.ViewerWrapper`。\n\n## 保存与加载环境状态\n\n如果您使用的是 gym3 接口，可以保存并加载环境状态：\n\n```\nfrom procgen import ProcgenGym3Env\nenv = ProcgenGym3Env(num=1, env_name=\"coinrun\", start_level=0, num_levels=1)\nstates = env.callmethod(\"get_state\")\nenv.callmethod(\"set_state\", states)\n```\n\n`get_state` 会返回一个字节字符串列表，用于表示向量化环境中每局游戏的状态。\n\n## 注意事项\n\n* 为了确保实验结果的可重复性，您应依赖特定版本的该库（通过 `==` 进行比较）。您可以使用 `pip show procgen` 来获取当前安装的版本。\n* 本库不需要也不会使用 GPU。\n* 虽然该库应具备线程安全特性，但每个独立的环境实例只能由单个线程使用。该库不是 fork 安全的，除非将 `num_threads` 设置为 `0`；即便如此，Qt 也不保证 fork 安全，因此建议在 fork 之后再创建环境，或者完全避免使用 fork。\n\n# 从源代码安装\n\n如果您想自定义环境或创建新环境，建议从源代码构建。如果您尚未安装 Miniconda，可前往 https:\u002F\u002Fdocs.conda.io\u002Fen\u002Flatest\u002Fminiconda.html 获取 Miniconda；或者手动从 [`environment.yml`](environment.yml) 中安装所需依赖项。在 Windows 系统上，您还需要安装 “Visual Studio 16 2019”。\n\n```\ngit clone git@github.com:openai\u002Fprocgen.git\ncd procgen\nconda env update --name procgen --file environment.yml\nconda activate procgen\npip install -e .\n# 应该显示“正在构建 procgen...完成”\npython -c \"from procgen import ProcgenGym3Env; ProcgenGym3Env(num=1, env_name='coinrun')\"\n# 应该会弹出一个窗口，让您能够体验 coinrun 环境\npython -m procgen.interactive\n```\n\n环境代码采用 C++ 编写，并编译成共享库，公开了 [`gym3.libenv`](https:\u002F\u002Fgithub.com\u002Fopenai\u002Fgym3\u002Fblob\u002Fmaster\u002Fgym3\u002Flibenv.h) C 接口，随后由 Python 加载。C++ 代码使用 Qt 进行绘图。\n\n# 创建新环境\n\n在从源代码安装完成后，您可以自定义现有环境，或创建属于自己的新环境。若想打造一款快速的 C++ 2D 环境，您可以 fork 这个仓库并按照以下步骤操作：\n\n* 将 [`src\u002Fgames\u002Fbigfish.cpp`](procgen\u002Fsrc\u002Fgames\u002Fbigfish.cpp) 复制到 `src\u002Fgames\u002F\u003Cname>.cpp`。\n* 在您的 C++ 文件中，将 `BigFish` 替换为 `\u003Cname>`，并将 `\"bigfish\"` 替换为 `\"\u003Cname>\"`。\n* 将 `src\u002Fgames\u002F\u003Cname>.cpp` 添加到 [`CMakeLists.txt`](procgen\u002FCMakeLists.txt) 中。\n* 运行 `python -m procgen.interactive --env-name \u003Cname>` 进行测试。\n\n该仓库包含 Travis CI 配置，可编译您的环境并构建 Python wheel 包，以便轻松安装。若想通过缓存 Qt 编译过程来加快构建速度，您需要在 [common.py](https:\u002F\u002Fgithub.com\u002Fopenai\u002Fprocgen\u002Fblob\u002Fmaster\u002Fprocgen-build\u002Fprocgen_build\u002Fcommon.py#L5) 中配置 GCS 存储桶，并[设置服务账号凭据](https:\u002F\u002Fgithub.com\u002Fopenai\u002Fprocgen\u002Fblob\u002Fmaster\u002Fprocgen-build\u002Fprocgen_build\u002Fbuild_package.py#L41)。\n\n# 向信息字典中添加信息\n\n要将 C++ 游戏代码中的游戏信息导出至 Python，您可以定义一个新的 `info_type`。`info_type` 会出现在 gym 环境返回的 `info` 字典中，或出现在 gym3 环境的 `get_info()` 返回值中。\n\n要定义一个新的 `info_type`，只需在 `VecGame` 
构造函数中添加以下代码：[vecgame.cpp](https:\u002F\u002Fgithub.com\u002Fopenai\u002Fprocgen\u002Fblob\u002Fmaster\u002Fprocgen\u002Fsrc\u002Fvecgame.cpp#L290)\n\n```\n{\n    struct libenv_tensortype s;\n    strcpy(s.name, \"heist_key_count\");\n    s.scalar_type = LIBENV_SCALAR_TYPE_DISCRETE;\n    s.dtype = LIBENV_DTYPE_INT32;\n    s.ndim = 0;\n    s.low.int32 = 0;\n    s.high.int32 = INT32_MAX;\n    info_types.push_back(s);\n}\n```\n\n这样，Python 代码便可以预期接收一个整数，并将其暴露在 `info` 字典中。\n\n在添加上述代码后，您可以在 [heist.cpp](https:\u002F\u002Fgithub.com\u002Fopenai\u002Fprocgen\u002Fblob\u002Fmaster\u002Fprocgen\u002Fsrc\u002Fgames\u002Fheist.cpp#L93) 中添加以下代码：\n\n```\nvoid observe() override {\n    Game::observe();\n    int32_t key_count = 0;\n    for (const auto& has_key : has_keys) {\n        if (has_key) {\n            key_count++;\n        }\n    }\n    *(int32_t *)(info_bufs[info_name_to_offset.at(\"heist_key_count\")]) = key_count;\n}\n```\n\n每次环境被观察时，该代码都会更新 `heist_key_count` 信息值。\n\n如果您运行交互式脚本（确保已从源码安装），新键应该会显示在左下角：\n\n`python -m procgen.interactive --env-name heist`\n\n# 变更记录\n\n请参阅 [CHANGES](CHANGES.md) 以了解各版本中的变更内容。\n\n# 贡献方式\n\n有关贡献方式的信息，请参阅 [CONTRIBUTING](CONTRIBUTING.md)。\n\n# 资产\n\n请参阅 [ASSET_LICENSES](ASSET_LICENSES.md) 以获取资产许可证的相关信息。\n\n# 引用格式\n\n请使用以下 BibTeX 条目进行引用：\n\n```\n@article{cobbe2019procgen,\n  title={Leveraging Procedural Generation to Benchmark Reinforcement Learning},\n  author={Cobbe, Karl and Hesse, Christopher and Hilton, Jacob and Schulman, John},\n  journal={arXiv preprint arXiv:1912.01588},\n  year={2019}\n}\n```","# Procgen 快速上手指南\n\nProcgen 是一个包含 16 个程序化生成环境的基准测试工具，专为评估强化学习（RL）智能体的泛化能力和样本效率而设计。这些环境基于 Gym 接口，单核运行速度极快（每秒数千步），且每次生成的关卡均随机变化，防止智能体死记硬背动作序列。\n\n## 环境准备\n\n在开始之前，请确保您的系统满足以下要求：\n\n*   **操作系统**：Windows 10、macOS (10.14+) 或 Linux (manylinux2010)。\n*   **CPU 要求**：必须支持 **AVX** 指令集。\n*   **Python 版本**：仅支持 64 位 Python，版本需在 **3.7 到 3.10** 之间。\n\n您可以运行以下命令验证环境是否合规：\n\n```bash\n# 检查 Python 版本是否在 3.7 - 3.10 之间\npython -c \"import sys; assert (3,7,0) \u003C= sys.version_info \u003C (3,11,0), 'python is incorrect 
version'; print('ok')\"\n\n# 检查是否为 64 位架构\npython -c \"import platform; assert platform.architecture()[0] == '64bit', 'python is not 64-bit'; print('ok')\"\n```\n\n## 安装步骤\n\n推荐使用国内镜像源（如清华源或阿里源）以加速安装过程。\n\n1.  **升级 pip**（避免找不到对应版本的问题）：\n    ```bash\n    pip install --upgrade pip -i https:\u002F\u002Fpypi.tuna.tsinghua.edu.cn\u002Fsimple\n    ```\n\n2.  **安装 procgen**：\n    ```bash\n    pip install procgen -i https:\u002F\u002Fpypi.tuna.tsinghua.edu.cn\u002Fsimple\n    ```\n\n## 基本使用\n\n### 1. 交互式体验\n安装完成后，您可以直接启动一个交互式窗口来试玩某个环境（例如 `coinrun`）。\n\n```bash\npython -m procgen.interactive --env-name coinrun\n```\n\n*   **操作方式**：使用方向键 (`left\u002Fright\u002Fup\u002Fdown`) 或 `q,w,e,a,s,d` 进行控制（具体按键取决于环境）。\n*   **界面信息**：左下角显示当前回合得分 (`episode_return`)；回合结束时显示最终得分及是否通关 (`prev_level_complete`)。\n\n### 2. 代码调用 (Gym 接口)\n在 Python 代码中创建标准的 Gym 环境实例：\n\n```python\nimport gym\n\n# 创建 CoinRun 环境\nenv = gym.make(\"procgen:procgen-coinrun-v0\")\n\nobs = env.reset()\n# 开始你的强化学习训练循环...\n```\n\n### 3. 代码调用 (向量化 Gym3 接口)\nProcgen 原生支持高效的向量化环境（适合并行训练）：\n\n```python\nfrom procgen import ProcgenGym3Env\n\n# 创建包含 1 个环境的向量化实例\nenv = ProcgenGym3Env(num=1, env_name=\"coinrun\")\n\n# observe() 返回 (rew, obs, first) 三元组；obs 是字典，像素数据在 \"rgb\" 键下\nrew, obs, first = env.observe()\nprint(obs[\"rgb\"].shape)  # 输出形状应为 (1, 64, 64, 3)\n```\n\n### 4. 
Docker 快速验证\n如果您希望在不污染本地环境的情况下测试，可以使用 Docker：\n\n```bash\n# 构建镜像\ndocker build docker --tag procgen\n\n# 运行随机智能体示例\ndocker run --rm -it procgen python3 -m procgen.examples.random_agent_gym\n```","某 AI 实验室的研究团队正在开发一种能适应未知地图的通用游戏机器人，急需验证其强化学习算法是否真正学会了“举一反三”，而非死记硬背关卡流程。\n\n### 没有 procgen 时\n- **过拟合严重**：使用固定关卡（如 Gym Retro）训练时，智能体往往记住了特定动作序列来刷分，一旦地图微调就彻底失效，无法证明其具备泛化能力。\n- **训练效率低下**：传统环境运行速度较慢，难以在单核 CPU 上实现每秒数千步的高频交互，导致大规模采样实验耗时数天甚至数周。\n- **定制开发困难**：若想修改游戏规则或创建新场景，需要深入理解庞大的遗留代码库，修改成本极高且容易引入错误。\n- **评估标准模糊**：缺乏统一的程序化生成基准，不同团队使用的随机化程度不一，导致算法性能难以横向对比。\n\n### 使用 procgen 后\n- **强制泛化学习**：procgen 提供的 16 种环境每次重启都会自动生成全新地图，迫使智能体必须掌握通用的移动与决策技能，彻底杜绝了“背板”现象。\n- **极速迭代验证**：得益于高度优化的底层代码，环境在单核上的运行速度提升超过 4 倍，原本需要一周的采样实验现在仅需一天即可完成。\n- **灵活定制场景**：研究人员可以轻松读取少于 300 行的环境源码，快速修改物理规则或构建专属测试场，极大加速了算法创新周期。\n- **权威基准对标**：直接复用 NeurIPS 竞赛采用的标准基准，确保实验数据具有行业公信力，便于与全球顶尖算法进行公平比对。\n\nprocgen 通过高速、随机且可定制的程序化环境，将强化学习的研究焦点从“记忆关卡”真正转向了“习得通用智能”。","https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fopenai_procgen_9a36f7ca.gif","openai","OpenAI","https:\u002F\u002Foss.gittoolsai.com\u002Favatars\u002Fopenai_1960bbf4.png","",null,"https:\u002F\u002Fopenai.com\u002F","https:\u002F\u002Fgithub.com\u002Fopenai",[84,88,92,96,100],{"name":85,"color":86,"percentage":87},"C++","#f34b7d",88.7,{"name":89,"color":90,"percentage":91},"Python","#3572A5",10.4,{"name":93,"color":94,"percentage":95},"CMake","#DA3434",0.6,{"name":97,"color":98,"percentage":99},"C","#555555",0.3,{"name":101,"color":102,"percentage":103},"Dockerfile","#384d54",0.1,1153,218,"2026-03-29T21:43:58","MIT","Windows 10, macOS 10.14+, Linux","未说明",{"notes":111,"python":112,"dependencies":113},"CPU 必须支持 AVX 指令集。该工具主要用于强化学习基准测试，环境运行速度快（单核每秒数千步）。可通过 pip 直接安装，也提供了用于随机代理测试和源码安装的 Docker 镜像示例。","3.7, 3.8, 3.9, 3.10 (必须为 64 位)",[114,115,116],"gym","gym3","numpy",[13,54],"2026-03-27T02:49:30.150509","2026-04-06T06:51:57.229827",[121,126,131,136,141,146],{"id":122,"question_zh":123,"answer_zh":124,"source_url":125},6471,"如何在 Procgen 环境中实现 reset() 
函数以重置环境？","Procgen 环境可以通过发送动作值 **-1** 来强制重置。根据源码检查，发送该特定动作会触发环境重置逻辑，无需重新创建整个环境实例，从而显著提高运行效率。","https:\u002F\u002Fgithub.com\u002Fopenai\u002Fprocgen\u002Fissues\u002F40",{"id":127,"question_zh":128,"answer_zh":129,"source_url":130},6472,"在自定义环境中向 info 字典添加新字段时出现 'std::out_of_range' 或 'map::at' 错误怎么办？","这是一个执行顺序问题。必须确保在 `vecgame.cpp` 中，将新的 info 类型添加到 `info_types` 列表的操作，发生在填充 `info_name_to_offset` 映射表的代码**之前**。如果在映射表生成后才添加新字段，程序在 `heist.cpp` 等具体游戏逻辑中尝试通过 `info_name_to_offset.at()` 查找该键时会因为键不存在而崩溃。","https:\u002F\u002Fgithub.com\u002Fopenai\u002Fprocgen\u002Fissues\u002F49",{"id":132,"question_zh":133,"answer_zh":134,"source_url":135},6473,"在 Windows 上运行 procgen 时遇到 'FileNotFoundError: [WinError 2]' 构建错误如何解决？","这通常是由于 Conda 环境配置损坏或系统 PATH 环境变量缺失导致的。建议尝试以下步骤：\n1. 重新安装或修复 Conda（参考官方 Windows 安装指南或使用 Chocolatey 安装器）。\n2. 检查是否在激活 Conda 环境时出现 \"The system cannot find the path specified\" 警告。\n3. 确保 Conda 的 Qt DLLs 路径已正确添加到系统的 PATH 环境变量中，因为程序可能依赖这些库。","https:\u002F\u002Fgithub.com\u002Fopenai\u002Fprocgen\u002Fissues\u002F4",{"id":137,"question_zh":138,"answer_zh":139,"source_url":140},6474,"如何使用 gym3 API 正确设置渲染模式并录制视频？","应避免使用混淆的 `render=True` 参数，改用 `render_mode` 参数。例如，使用 `render_mode=\"rgb_array\"` 获取图像数组。若需录制视频，可结合 `VideoRecorderWrapper`。示例代码如下：\n```python\nfrom gym3 import types_np, VideoRecorderWrapper\nfrom procgen import ProcgenGym3Env\n\nenv = ProcgenGym3Env(num=1, env_name=\"coinrun\", render_mode=\"rgb_array\")\nenv = VideoRecorderWrapper(env=env, directory=\".\", info_key=\"rgb\")\n\nstep = 0\nwhile True:\n    env.act(types_np.sample(env.ac_space, bshape=(env.num,)))\n    rew, obs, first = env.observe()\n    if step > 0 and first:\n        break\n    step += 1\n```","https:\u002F\u002Fgithub.com\u002Fopenai\u002Fprocgen\u002Fissues\u002F48",{"id":142,"question_zh":143,"answer_zh":144,"source_url":145},6475,"是否可以在游戏开始前获取当前的关卡 ID (level id)？","在原始版本中，在采取第一个动作之前无法直接获取正在玩的关卡 ID。虽然社区曾提出此需求并尝试通过 fork 版本实现变通方案，但在标准 API 
中，通常需要执行一步或多步后才能确定具体的关卡信息，或者需要依赖内部状态的非公开访问（不推荐）。","https:\u002F\u002Fgithub.com\u002Fopenai\u002Fprocgen\u002Fissues\u002F42",{"id":147,"question_zh":148,"answer_zh":149,"source_url":145},6476,"如何指定特定的关卡 ID 或自定义初始状态进行训练？","原生 Procgen 不支持直接设置特定的关卡 ID（除非环境只有一个关卡），也不支持指定除默认随机状态以外的初始状态。如果需要针对特定关卡或特定初始状态进行测试，目前的官方建议较少，通常需要创建新的环境实例或使用社区 fork 版本中实现的变通方法（workaround）来强行注入初始观察值。",[151,156,161,166,171,176,181,186,191,196,201,206],{"id":152,"version":153,"summary_zh":154,"released_at":155},106054,"0.10.7","* Custom `ToBaselinesVecEnv` to support `VecVideoRecorder` from @bragajj: https:\u002F\u002Fgithub.com\u002Fopenai\u002Fprocgen\u002Fpull\u002F62","2022-01-24T17:31:28",{"id":157,"version":158,"summary_zh":159,"released_at":160},106055,"0.10.6","* Change supported pythons to be 3.7-3.10","2022-01-15T04:25:20",{"id":162,"version":163,"summary_zh":164,"released_at":165},106056,"0.10.5","test release","2022-01-15T02:51:43",{"id":167,"version":168,"summary_zh":169,"released_at":170},106057,"0.10.4","* More fixes to gym rendering","2020-07-10T22:00:19",{"id":172,"version":173,"summary_zh":174,"released_at":175},106058,"0.10.3","* fix render option for gym environment","2020-06-15T18:36:58",{"id":177,"version":178,"summary_zh":179,"released_at":180},106059,"0.10.2","* fix interactive script","2020-06-03T16:46:18",{"id":182,"version":183,"summary_zh":184,"released_at":185},106060,"0.10.1","* build fixes\r\n* save action during libenv_act","2020-06-03T15:50:04",{"id":187,"version":188,"summary_zh":189,"released_at":190},106061,"0.10.0","* add `set_state`, `get_state` methods to save\u002Frestore environment state\r\n* new flags: `use_backgrounds`, `restrict_themes`, `use_monocrhome_assets`\r\n* switch to use `gym3` instead of `libenv` + `Scalarize`, `gym` and `baselines.VecEnv` interfaces are still available with the same names, the `gym3` environment is called `ProcgenGym3Env`\r\n* zero initialize more member variables\r\n* changed `info` dict to have more clear keys, 
`prev_level_complete` tells you if the level was complete on the previous timestep, since the `info` dict corresponds to the current timestep, and the current timestep is never on a complete level due to automatic resetting.  Similarly, `prev_level_seed` is the level seed from the previous timestep.\r\n* environment creation should be slightly faster","2020-06-03T15:38:11",{"id":192,"version":193,"summary_zh":194,"released_at":195},106062,"0.9.4","* add random agent script\r\n* add example Dockerfile","2020-01-10T23:29:29",{"id":197,"version":198,"summary_zh":199,"released_at":200},106063,"0.9.3","* changed pyglet dependency to `pyglet~=1.4.8`\r\n* fix issue with procgen thinking it was installed in development mode and attempting to build when installed from a pypi package\r\n* make procgen more fork safe when `num_threads=0`","2019-12-29T08:07:44",{"id":202,"version":203,"summary_zh":204,"released_at":205},106064,"0.9.2","* fixed type bug in interactive script that would sometimes cause the script to not start","2019-12-03T19:14:42",{"id":207,"version":208,"summary_zh":80,"released_at":209},106065,"0.9.1","2019-12-03T02:11:57"]