[{"data":1,"prerenderedAt":-1},["ShallowReactive",2],{"similar-Farama-Foundation--Miniworld":3,"tool-Farama-Foundation--Miniworld":65},[4,23,32,40,49,57],{"id":5,"name":6,"github_repo":7,"description_zh":8,"stars":9,"difficulty_score":10,"last_commit_at":11,"category_tags":12,"status":22},2268,"ML-For-Beginners","microsoft\u002FML-For-Beginners","ML-For-Beginners 是由微软推出的一套系统化机器学习入门课程，旨在帮助零基础用户轻松掌握经典机器学习知识。这套课程将学习路径规划为 12 周，包含 26 节精炼课程和 52 道配套测验，内容涵盖从基础概念到实际应用的完整流程，有效解决了初学者面对庞大知识体系时无从下手、缺乏结构化指导的痛点。\n\n无论是希望转型的开发者、需要补充算法背景的研究人员，还是对人工智能充满好奇的普通爱好者，都能从中受益。课程不仅提供了清晰的理论讲解，还强调动手实践，让用户在循序渐进中建立扎实的技能基础。其独特的亮点在于强大的多语言支持，通过自动化机制提供了包括简体中文在内的 50 多种语言版本，极大地降低了全球不同背景用户的学习门槛。此外，项目采用开源协作模式，社区活跃且内容持续更新，确保学习者能获取前沿且准确的技术资讯。如果你正寻找一条清晰、友好且专业的机器学习入门之路，ML-For-Beginners 将是理想的起点。",84991,2,"2026-04-05T10:45:23",[13,14,15,16,17,18,19,20,21],"图像","数据工具","视频","插件","Agent","其他","语言模型","开发框架","音频","ready",{"id":24,"name":25,"github_repo":26,"description_zh":27,"stars":28,"difficulty_score":29,"last_commit_at":30,"category_tags":31,"status":22},3128,"ragflow","infiniflow\u002Fragflow","RAGFlow 是一款领先的开源检索增强生成（RAG）引擎，旨在为大语言模型构建更精准、可靠的上下文层。它巧妙地将前沿的 RAG 技术与智能体（Agent）能力相结合，不仅支持从各类文档中高效提取知识，还能让模型基于这些知识进行逻辑推理和任务执行。\n\n在大模型应用中，幻觉问题和知识滞后是常见痛点。RAGFlow 通过深度解析复杂文档结构（如表格、图表及混合排版），显著提升了信息检索的准确度，从而有效减少模型“胡编乱造”的现象，确保回答既有据可依又具备时效性。其内置的智能体机制更进一步，使系统不仅能回答问题，还能自主规划步骤解决复杂问题。\n\n这款工具特别适合开发者、企业技术团队以及 AI 研究人员使用。无论是希望快速搭建私有知识库问答系统，还是致力于探索大模型在垂直领域落地的创新者，都能从中受益。RAGFlow 提供了可视化的工作流编排界面和灵活的 API 接口，既降低了非算法背景用户的上手门槛，也满足了专业开发者对系统深度定制的需求。作为基于 Apache 2.0 协议开源的项目，它正成为连接通用大模型与行业专有知识之间的重要桥梁。",77062,3,"2026-04-04T04:44:48",[17,13,20,19,18],{"id":33,"name":34,"github_repo":35,"description_zh":36,"stars":37,"difficulty_score":29,"last_commit_at":38,"category_tags":39,"status":22},519,"PaddleOCR","PaddlePaddle\u002FPaddleOCR","PaddleOCR 是一款基于百度飞桨框架开发的高性能开源光学字符识别工具包。它的核心能力是将图片、PDF 等文档中的文字提取出来，转换成计算机可读取的结构化数据，让机器真正“看懂”图文内容。\n\n面对海量纸质或电子文档，PaddleOCR 
解决了人工录入效率低、数字化成本高的问题。尤其在人工智能领域，它扮演着连接图像与大型语言模型（LLM）的桥梁角色，能将视觉信息直接转化为文本输入，助力智能问答、文档分析等应用场景落地。\n\nPaddleOCR 适合开发者、算法研究人员以及有文档自动化需求的普通用户。其技术优势十分明显：不仅支持全球 100 多种语言的识别，还能在 Windows、Linux、macOS 等多个系统上运行，并灵活适配 CPU、GPU、NPU 等各类硬件。作为一个轻量级且社区活跃的开源项目，PaddleOCR 既能满足快速集成的需求，也能支撑前沿的视觉语言研究，是处理文字识别任务的理想选择。",74913,"2026-04-05T10:44:17",[19,13,20,18],{"id":41,"name":42,"github_repo":43,"description_zh":44,"stars":45,"difficulty_score":46,"last_commit_at":47,"category_tags":48,"status":22},3215,"awesome-machine-learning","josephmisiti\u002Fawesome-machine-learning","awesome-machine-learning 是一份精心整理的机器学习资源清单，汇集了全球优秀的机器学习框架、库和软件工具。面对机器学习领域技术迭代快、资源分散且难以甄选的痛点，这份清单按编程语言（如 Python、C++、Go 等）和应用场景（如计算机视觉、自然语言处理、深度学习等）进行了系统化分类，帮助使用者快速定位高质量项目。\n\n它特别适合开发者、数据科学家及研究人员使用。无论是初学者寻找入门库，还是资深工程师对比不同语言的技术选型，都能从中获得极具价值的参考。此外，清单还延伸提供了免费书籍、在线课程、行业会议、技术博客及线下聚会等丰富资源，构建了从学习到实践的全链路支持体系。\n\n其独特亮点在于严格的维护标准：明确标记已停止维护或长期未更新的项目，确保推荐内容的时效性与可靠性。作为机器学习领域的“导航图”，awesome-machine-learning 以开源协作的方式持续更新，旨在降低技术探索门槛，让每一位从业者都能高效地站在巨人的肩膀上创新。",72149,1,"2026-04-03T21:50:24",[20,18],{"id":50,"name":51,"github_repo":52,"description_zh":53,"stars":54,"difficulty_score":46,"last_commit_at":55,"category_tags":56,"status":22},2234,"scikit-learn","scikit-learn\u002Fscikit-learn","scikit-learn 是一个基于 Python 构建的开源机器学习库，依托于 SciPy、NumPy 等科学计算生态，旨在让机器学习变得简单高效。它提供了一套统一且简洁的接口，涵盖了从数据预处理、特征工程到模型训练、评估及选择的全流程工具，内置了包括线性回归、支持向量机、随机森林、聚类等在内的丰富经典算法。\n\n对于希望快速验证想法或构建原型的数据科学家、研究人员以及 Python 开发者而言，scikit-learn 是不可或缺的基础设施。它有效解决了机器学习入门门槛高、算法实现复杂以及不同模型间调用方式不统一的痛点，让用户无需重复造轮子，只需几行代码即可调用成熟的算法解决分类、回归、聚类等实际问题。\n\n其核心技术亮点在于高度一致的 API 设计风格，所有估算器（Estimator）均遵循相同的调用逻辑，极大地降低了学习成本并提升了代码的可读性与可维护性。此外，它还提供了强大的模型选择与评估工具，如交叉验证和网格搜索，帮助用户系统地优化模型性能。作为一个由全球志愿者共同维护的成熟项目，scikit-learn 以其稳定性、详尽的文档和活跃的社区支持，成为连接理论学习与工业级应用的最",65628,"2026-04-05T10:10:46",[20,18,14],{"id":58,"name":59,"github_repo":60,"description_zh":61,"stars":62,"difficulty_score":10,"last_commit_at":63,"category_tags":64,"status":22},3364,"keras","keras-team\u002Fkeras","Keras 
是一个专为人类设计的深度学习框架，旨在让构建和训练神经网络变得简单直观。它解决了开发者在不同深度学习后端之间切换困难、模型开发效率低以及难以兼顾调试便捷性与运行性能的痛点。\n\n无论是刚入门的学生、专注算法的研究人员，还是需要快速落地产品的工程师，都能通过 Keras 轻松上手。它支持计算机视觉、自然语言处理、音频分析及时间序列预测等多种任务。\n\nKeras 3 的核心亮点在于其独特的“多后端”架构。用户只需编写一套代码，即可灵活选择 TensorFlow、JAX、PyTorch 或 OpenVINO 作为底层运行引擎。这一特性不仅保留了 Keras 一贯的高层易用性，还允许开发者根据需求自由选择：利用 JAX 或 PyTorch 的即时执行模式进行高效调试，或切换至速度最快的后端以获得最高 350% 的性能提升。此外，Keras 具备强大的扩展能力，能无缝从本地笔记本电脑扩展至大规模 GPU 或 TPU 集群，是连接原型开发与生产部署的理想桥梁。",63927,"2026-04-04T15:24:37",[20,14,18],{"id":66,"github_repo":67,"name":68,"description_en":69,"description_zh":70,"ai_summary_zh":71,"readme_en":72,"readme_zh":73,"quickstart_zh":74,"use_case_zh":75,"hero_image_url":76,"owner_login":77,"owner_name":78,"owner_avatar_url":79,"owner_bio":80,"owner_company":81,"owner_location":81,"owner_email":82,"owner_twitter":83,"owner_website":84,"owner_url":85,"languages":86,"stars":91,"forks":92,"last_commit_at":93,"license":94,"difficulty_score":10,"env_os":95,"env_gpu":96,"env_ram":97,"env_deps":98,"category_tags":105,"github_topics":81,"view_count":29,"oss_zip_url":81,"oss_zip_packed_at":81,"status":22,"created_at":106,"updated_at":107,"faqs":108,"releases":137},907,"Farama-Foundation\u002FMiniworld","Miniworld","Simple and easily configurable 3D FPS-game-like environments for reinforcement learning","Miniworld 是一个轻量化的 3D 仿真环境，专门为强化学习和机器人研究设计。它提供了类似第一人称射击游戏的简单三维场景，用户可以通过编程让智能体在模拟的室内空间（如房间、走廊、迷宫等）中学习导航、探索与决策。Miniworld 主要解决了传统 3D 仿真平台（如 VizDoom 或 DMLab）配置复杂、依赖繁重、难以定制的问题，让研究人员能够快速搭建实验环境，专注于算法开发而非底层模拟。\n\n这个工具非常适合强化学习领域的研究人员、高校学生以及相关领域的开发者使用。它采用纯 Python 实现，结构清晰且易于修改，即使初学者也能基于它创建自定义场景或调整现有环境。Miniworld 内置多种 3D 模型与纹理，支持多进程运行与领域随机化，有助于提升从仿真到现实场景的迁移能力。此外，它还提供完全可观测的俯视视角、深度图生成以及墙面文字显示等实用功能。\n\n需要注意的是，Miniworld 已于 2025 年 8 月停止更新，但其代码保持开源，仍可作为轻量化实验平台或教学工具使用。它的图形表现相对基础，不适合追求高真实感或复杂物理交互的研究，但在需要快速原型验证、算法对比与轻量仿真的场景中，依然是一个友好而高效的选择。","Miniworld 是一个轻量化的 3D 仿真环境，专门为强化学习和机器人研究设计。它提供了类似第一人称射击游戏的简单三维场景，用户可以通过编程让智能体在模拟的室内空间（如房间、走廊、迷宫等）中学习导航、探索与决策。Miniworld 主要解决了传统 3D 仿真平台（如 VizDoom 或 
DMLab）配置复杂、依赖繁重、难以定制的问题，让研究人员能够快速搭建实验环境，专注于算法开发而非底层模拟。\n\n这个工具非常适合强化学习领域的研究人员、高校学生以及相关领域的开发者使用。它采用纯 Python 实现，结构清晰且易于修改，即使初学者也能基于它创建自定义场景或调整现有环境。Miniworld 内置多种 3D 模型与纹理，支持多进程运行与领域随机化，有助于提升从仿真到现实场景的迁移能力。此外，它还提供完全可观测的俯视视角、深度图生成以及墙面文字显示等实用功能。\n\n需要注意的是，Miniworld 已于 2025 年 8 月停止更新，但其代码保持开源，仍可作为轻量化实验平台或教学工具使用。它的图形表现相对基础，不适合追求高真实感或复杂物理交互的研究，但在需要快速原型验证、算法对比与轻量仿真的场景中，依然是一个友好而高效的选择。","\u003Cp align=\"center\">\n    \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FFarama-Foundation_Miniworld_readme_b2f4a1a942dc.png\" width=\"500px\"\u002F>\n\u003C\u002Fp>\n\n**Aug 11, 2025: This project has been deprecated due to a lack of wide spread community use, and is no longer planned to receive any additional updates or support.**\n\n[![Build Status](https:\u002F\u002Ftravis-ci.org\u002Fmaximecb\u002Fgym-miniworld.svg?branch=master)](https:\u002F\u002Ftravis-ci.org\u002Fmaximecb\u002Fgym-miniworld)\n\nContents:\n- [Introduction](#introduction)\n- [Installation](#installation)\n- [Usage](#usage)\n- [Environments](https:\u002F\u002Fminiworld.farama.org\u002Fcontent\u002Fenv_list\u002F)\n- [Design and Customization](https:\u002F\u002Fminiworld.farama.org\u002Fcontent\u002Fdesign\u002F)\n- [Troubleshooting](https:\u002F\u002Fminiworld.farama.org\u002Fcontent\u002Ftroubleshooting\u002F)\n\n## Introduction\n\nMiniWorld is a minimalistic 3D interior environment simulator for reinforcement\nlearning &amp; robotics research. It can be used to simulate environments with\nrooms, doors, hallways and various objects (eg: office and home environments, mazes).\nMiniWorld can be seen as a simpler alternative to VizDoom or DMLab. 
It is written\n100% in Python and designed to be easily modified or extended by students.\n\n\u003Cp align=\"center\">\n    \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FFarama-Foundation_Miniworld_readme_0fb6e2318bc8.jpg\" width=260 alt=\"Figure of Maze environment from top view\">\n    \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FFarama-Foundation_Miniworld_readme_201bb7e88ddc.jpg\" width=260 alt=\"Figure of Sidewalk environment\">\n    \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FFarama-Foundation_Miniworld_readme_043755c5af2d.jpg\" width=260 alt=\"Figure of Collect Health environment\">\n\u003C\u002Fp>\n\nFeatures:\n- Few dependencies, less likely to break, easy to install\n- Easy to create your own levels, or modify existing ones\n- Good performance, high frame rate, support for multiple processes\n- Lightweight, small download, low memory requirements\n- Provided under a permissive MIT license\n- Comes with a variety of free 3D models and textures\n- Fully observable [top-down\u002Foverhead view](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FFarama-Foundation_Miniworld_readme_0fb6e2318bc8.jpg) available\n- [Domain randomization](https:\u002F\u002Fblog.openai.com\u002Fgeneralizing-from-simulation\u002F) support, for sim-to-real transfer\n- Ability to [display alphanumeric strings](images\u002Ftextframe.jpg) on walls\n- Ability to produce depth maps matching camera images (RGB-D)\n\nLimitations:\n- Graphics are basic, nowhere near photorealism\n- Physics are very basic, not sufficient for robot arms or manipulation\n\nList of publications & submissions using MiniWorld (please open a pull request to add missing entries):\n- [Towards real-world navigation with deep differentiable planners](https:\u002F\u002Farxiv.org\u002Fabs\u002F2108.05713) (VGG, Oxford, CVPR 2022)\n- [Decoupling Exploration and Exploitation for Meta-Reinforcement Learning without 
Sacrifices](https:\u002F\u002Farxiv.org\u002Fabs\u002F2008.02790) (Stanford University, ICML 2021)\n- [Rank the Episodes: A Simple Approach for Exploration in Procedurally-Generated Environments](https:\u002F\u002Fopenreview.net\u002Fforum?id=MtEE0CktZht) (Texas A&M University, Kuai Inc., ICLR 2021)\n- [DeepAveragers: Offline Reinforcement Learning by Solving Derived Non-Parametric MDPs](https:\u002F\u002Farxiv.org\u002Fabs\u002F2010.08891) (NeurIPS Offline RL Workshop, Oct 2020)\n- [Pre-trained Word Embeddings for Goal-conditional Transfer Learning in Reinforcement Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2007.05196) (University of Antwerp, Jul 2020, ICML 2020 LaReL Workshop)\n- [Temporal Abstraction with Interest Functions](https:\u002F\u002Farxiv.org\u002Fabs\u002F2001.00271) (Mila, Feb 2020, AAAI 2020)\n- [Addressing Sample Complexity in Visual Tasks Using Hindsight Experience Replay and Hallucinatory GANs](https:\u002F\u002Fopenreview.net\u002Fforum?id=H1xSXdV0i4) (Offworld Inc, Georgia Tech, UC Berkeley, ICML 2019 Workshop RL4RealLife)\n- [Avoidance Learning Using Observational Reinforcement Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F1909.11228) (Mila, McGill, Sept 2019)\n- [Visual Hindsight Experience Replay](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1901.11529.pdf) (Georgia Tech, UC Berkeley, Jan 2019)\n\nThis simulator was created as part of work done at [Mila](https:\u002F\u002Fmila.quebec\u002F).\n\n## Installation\n\nRequirements:\n- Python 3.7+\n- Gymnasium\n- NumPy\n- Pyglet (OpenGL 3D graphics)\n- GPU for 3D graphics acceleration (optional)\n\nYou can install it from `PyPI` using:\n\n```console\npython3 -m pip install miniworld\n```\n\nYou can also install from source:\n\n```console\ngit clone https:\u002F\u002Fgithub.com\u002FFarama-Foundation\u002FMiniworld.git\ncd Miniworld\npython3 -m pip install -e .\n```\n\nIf you run into any problems, please take a look at the [troubleshooting 
guide](docs\u002Fcontent\u002Ftroubleshooting.md).\n\n## Usage\n\nThere is a simple UI application which allows you to control the simulation or real robot manually.\nThe `manual_control.py` application will launch the Gym environment, display camera images and send actions\n(keyboard commands) back to the simulator or robot. The `--env-name` argument specifies which environment to load.\nSee the list of [available environments](docs\u002Fenvironments.md) for more information.\n\n```\n.\u002Fmanual_control.py --env-name MiniWorld-Hallway-v0\n\n# Display an overhead view of the environment\n.\u002Fmanual_control.py --env-name MiniWorld-Hallway-v0 --top_view\n```\n\nThere is also a script to run automated tests (`run_tests.py`) and a script to gather performance metrics (`benchmark.py`).\n\n### Offscreen Rendering (Clusters and Colab)\n\nWhen running MiniWorld on a cluster or in a Colab environment, you need to render to an offscreen display. You can\nrun `gym-miniworld` offscreen by setting the environment variable `PYOPENGL_PLATFORM` to `egl` before running MiniWorld, e.g.\n\n```\nPYOPENGL_PLATFORM=egl python3 your_script.py\n```\n\nAlternatively, if this doesn't work, you can also try running MiniWorld with `xvfb`, e.g.\n\n```\nxvfb-run -a -s \"-screen 0 1024x768x24 -ac +extension GLX +render -noreset\" python3 your_script.py\n```\n\n# Citation\n\nTo cite this project please use:\n\n```bibtex\n@article{MinigridMiniworld23,\n  author       = {Maxime Chevalier-Boisvert and Bolun Dai and Mark Towers and Rodrigo de Lazcano and Lucas Willems and Salem Lahlou and Suman Pal and Pablo Samuel Castro and Jordan Terry},\n  title        = {Minigrid \\& Miniworld: Modular \\& Customizable Reinforcement Learning Environments for Goal-Oriented Tasks},\n  journal      = {CoRR},\n  volume       = {abs\u002F2306.13831},\n  year         = {2023},\n}\n```\n","\u003Cp align=\"center\">\n    \u003Cimg 
src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FFarama-Foundation_Miniworld_readme_b2f4a1a942dc.png\" width=\"500px\"\u002F>\n\u003C\u002Fp>\n\n**2025年8月11日：由于缺乏广泛的社区使用，此项目已被弃用，不再计划接收任何额外的更新或支持。**\n\n[![构建状态](https:\u002F\u002Ftravis-ci.org\u002Fmaximecb\u002Fgym-miniworld.svg?branch=master)](https:\u002F\u002Ftravis-ci.org\u002Fmaximecb\u002Fgym-miniworld)\n\n目录：\n- [简介](#introduction)\n- [安装](#installation)\n- [使用](#usage)\n- [环境](https:\u002F\u002Fminiworld.farama.org\u002Fcontent\u002Fenv_list\u002F)\n- [设计与定制](https:\u002F\u002Fminiworld.farama.org\u002Fcontent\u002Fdesign\u002F)\n- [故障排除](https:\u002F\u002Fminiworld.farama.org\u002Fcontent\u002Ftroubleshooting\u002F)\n\n## 简介\n\nMiniWorld 是一个简约的 3D 室内环境模拟器，用于强化学习与机器人研究。它可以用来模拟包含房间、门、走廊和各种对象（例如：办公室和家庭环境、迷宫）的环境。MiniWorld 可以被视为 VizDoom 或 DMLab 的更简单替代品。它完全使用 Python 编写，旨在方便学生修改或扩展。\n\n\u003Cp align=\"center\">\n    \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FFarama-Foundation_Miniworld_readme_0fb6e2318bc8.jpg\" width=260 alt=\"迷宫环境俯视图\">\n    \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FFarama-Foundation_Miniworld_readme_201bb7e88ddc.jpg\" width=260 alt=\"人行道环境图\">\n    \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FFarama-Foundation_Miniworld_readme_043755c5af2d.jpg\" width=260 alt=\"收集生命环境图\">\n\u003C\u002Fp>\n\n特性：\n- 依赖少，不易出错，易于安装\n- 易于创建自己的关卡或修改现有关卡\n- 性能良好，帧率高，支持多进程\n- 轻量级，下载体积小，内存需求低\n- 采用宽松的 MIT 许可证提供\n- 附带各种免费的 3D 模型和纹理\n- 提供完全可观测的[俯视\u002F鸟瞰视图](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FFarama-Foundation_Miniworld_readme_0fb6e2318bc8.jpg)\n- 支持[领域随机化](https:\u002F\u002Fblog.openai.com\u002Fgeneralizing-from-simulation\u002F)，用于从模拟到现实的迁移\n- 能够在墙壁上[显示字母数字字符串](images\u002Ftextframe.jpg)\n- 能够生成与相机图像（RGB-D）匹配的深度图\n\n局限性：\n- 图形基础，远未达到照片级真实感\n- 物理模拟非常基础，不足以支持机器人手臂或操作任务\n\n使用 MiniWorld 的出版物和投稿列表（请提交 Pull Request 以添加缺失条目）：\n- [Towards real-world navigation with deep differentiable 
planners](https:\u002F\u002Farxiv.org\u002Fabs\u002F2108.05713) (VGG, Oxford, CVPR 2022)\n- [Decoupling Exploration and Exploitation for Meta-Reinforcement Learning without Sacrifices](https:\u002F\u002Farxiv.org\u002Fabs\u002F2008.02790) (Stanford University, ICML 2021)\n- [Rank the Episodes: A Simple Approach for Exploration in Procedurally-Generated Environments](https:\u002F\u002Fopenreview.net\u002Fforum?id=MtEE0CktZht) (Texas A&M University, Kuai Inc., ICLR 2021)\n- [DeepAveragers: Offline Reinforcement Learning by Solving Derived Non-Parametric MDPs](https:\u002F\u002Farxiv.org\u002Fabs\u002F2010.08891) (NeurIPS Offline RL Workshop, Oct 2020)\n- [Pre-trained Word Embeddings for Goal-conditional Transfer Learning in Reinforcement Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2007.05196) (University of Antwerp, Jul 2020, ICML 2020 LaReL Workshop)\n- [Temporal Abstraction with Interest Functions](https:\u002F\u002Farxiv.org\u002Fabs\u002F2001.00271) (Mila, Feb 2020, AAAI 2020)\n- [Addressing Sample Complexity in Visual Tasks Using Hindsight Experience Replay and Hallucinatory GANs](https:\u002F\u002Fopenreview.net\u002Fforum?id=H1xSXdV0i4) (Offworld Inc, Georgia Tech, UC Berkeley, ICML 2019 Workshop RL4RealLife)\n- [Avoidance Learning Using Observational Reinforcement Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F1909.11228) (Mila, McGill, Sept 2019)\n- [Visual Hindsight Experience Replay](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1901.11529.pdf) (Georgia Tech, UC Berkeley, Jan 2019)\n\n此模拟器是在 [Mila](https:\u002F\u002Fmila.quebec\u002F) 工作期间创建的。\n\n## 安装\n\n要求：\n- Python 3.7+\n- Gymnasium\n- NumPy\n- Pyglet (OpenGL 3D 图形库)\n- 用于 3D 图形加速的 GPU（可选）\n\n你可以通过 `PyPI` 安装：\n\n```console\npython3 -m pip install miniworld\n```\n\n你也可以从源码安装：\n\n```console\ngit clone https:\u002F\u002Fgithub.com\u002FFarama-Foundation\u002FMiniworld.git\ncd Miniworld\npython3 -m pip install -e .\n```\n\n如果遇到任何问题，请查看[故障排除指南](docs\u002Fcontent\u002Ftroubleshooting.md)。\n\n## 
使用\n\n有一个简单的 UI 应用程序，允许你手动控制模拟或真实机器人。`manual_control.py` 应用程序将启动 Gym 环境，显示相机图像并将动作（键盘命令）发送回模拟器或机器人。`--env-name` 参数指定要加载的环境。有关更多信息，请参阅[可用环境列表](docs\u002Fenvironments.md)。\n\n```\n.\u002Fmanual_control.py --env-name MiniWorld-Hallway-v0\n\n# 显示环境的俯视图\n.\u002Fmanual_control.py --env-name MiniWorld-Hallway-v0 --top_view\n```\n\n还有一个用于运行自动化测试的脚本 (`run_tests.py`) 和一个用于收集性能指标的脚本 (`benchmark.py`)。\n\n### 离屏渲染（集群和 Colab）\n\n在集群或 Colab 环境中运行 MiniWorld 时，你需要渲染到离屏显示器。你可以通过设置环境变量 `PYOPENGL_PLATFORM` 为 `egl` 来离屏运行 `gym-miniworld`，例如：\n\n```\nPYOPENGL_PLATFORM=egl python3 your_script.py\n```\n\n或者，如果这不起作用，你也可以尝试使用 `xvfb` 运行 MiniWorld，例如：\n\n```\nxvfb-run -a -s \"-screen 0 1024x768x24 -ac +extension GLX +render -noreset\" python3 your_script.py\n```\n\n# 引用\n\n如需引用此项目，请使用：\n\n```bibtex\n@article{MinigridMiniworld23,\n  author       = {Maxime Chevalier-Boisvert and Bolun Dai and Mark Towers and Rodrigo de Lazcano and Lucas Willems and Salem Lahlou and Suman Pal and Pablo Samuel Castro and Jordan Terry},\n  title        = {Minigrid \\& Miniworld: Modular \\& Customizable Reinforcement Learning Environments for Goal-Oriented Tasks},\n  journal      = {CoRR},\n  volume       = {abs\u002F2306.13831},\n  year         = {2023},\n}\n```","# MiniWorld 快速上手指南\n\nMiniWorld 是一个轻量级的 3D 室内环境模拟器，专为强化学习和机器人研究设计。它使用 Python 编写，易于修改和扩展。\n\n## 环境准备\n\n**系统要求：**\n- Python 3.7 或更高版本\n- 支持 OpenGL 的显卡（可选，用于 3D 图形加速）\n\n**前置依赖：**\n- Gymnasium\n- NumPy\n- Pyglet (用于 OpenGL 3D 图形渲染)\n\n## 安装步骤\n\n推荐使用 `pip` 从 PyPI 安装：\n\n```console\npython3 -m pip install miniworld\n```\n\n或者，你也可以从源代码安装：\n\n```console\ngit clone https:\u002F\u002Fgithub.com\u002FFarama-Foundation\u002FMiniworld.git\ncd Miniworld\npython3 -m pip install -e .\n```\n\n**注意：** 由于项目已停止维护，若安装过程遇到网络问题，可尝试使用 `-i https:\u002F\u002Fpypi.tuna.tsinghua.edu.cn\u002Fsimple` 参数指定国内 PyPI 镜像源。\n\n## 基本使用\n\n安装完成后，你可以通过一个简单的 UI 应用程序手动控制模拟环境。\n\n1.  
**运行手动控制程序：**\n    以下命令将启动 Gym 环境，显示摄像头图像，并允许你通过键盘命令控制智能体。\n\n    ```console\n    .\u002Fmanual_control.py --env-name MiniWorld-Hallway-v0\n    ```\n\n2.  **显示俯视视角：**\n    你可以添加 `--top_view` 参数来获得环境的俯视视角。\n\n    ```console\n    .\u002Fmanual_control.py --env-name MiniWorld-Hallway-v0 --top_view\n    ```\n\n**在无头服务器或 Colab 中运行：**\n如果你在集群或 Colab 等没有显示器的环境中运行，需要设置离屏渲染。在运行脚本前设置环境变量：\n\n```console\nPYOPENGL_PLATFORM=egl python3 your_script.py\n```\n\n如果上述方法无效，可以尝试使用 `xvfb` 虚拟显示：\n\n```console\nxvfb-run -a -s \"-screen 0 1024x768x24 -ac +extension GLX +render -noreset\" python3 your_script.py\n```\n\n现在，你已经可以开始使用 MiniWorld 创建和探索 3D 环境了。更多环境列表和自定义方法，请参考项目文档。","一名强化学习方向的研究生正在开发一个智能体，希望它能学习在室内环境中（如办公室或家庭）高效导航并找到目标物品。他需要一个轻量、可定制的 3D 模拟环境来训练和测试算法。\n\n### 没有 Miniworld 时\n- **环境搭建困难**：需要从零开始编写 3D 模拟器，或使用 VizDoom 等复杂平台，配置和接口学习成本极高，耗费数周时间。\n- **定制化门槛高**：想调整房间布局或增加特定物体（如一把椅子）时，需要深入修改底层图形和物理引擎代码，过程繁琐且易出错。\n- **实验迭代缓慢**：环境运行依赖重型引擎，启动慢、占用内存大，导致算法训练和调试的反馈周期很长，一天只能进行少数几次实验。\n- **依赖与环境问题**：复杂的依赖链（如特定版本的深度学习框架与模拟器绑定）常导致环境崩溃，兼容性问题消耗了大量调试时间。\n- **功能局限**：难以快速实现研究所需的高级功能，如域随机化（用于提升模型泛化能力）或生成与图像对齐的深度图（RGB-D数据）。\n\n### 使用 Miniworld 后\n- **快速启动实验**：通过 pip 简单安装，依赖极少，几分钟内就能导入并运行第一个导航环境，当天即可开始核心算法开发。\n- **轻松自定义场景**：使用清晰的 Python API，通过修改几行代码就能创建新的房间、放置物体或改变纹理，专注于研究逻辑而非底层实现。\n- **高效训练与调试**：轻量级设计带来高帧率运行，支持多进程，大幅缩短了每次训练迭代的时间，允许一天内进行大量超参数测试。\n- **环境稳定兼容**：纯 Python 实现，与主流强化学习库（如 Gymnasium）无缝集成，避免了复杂的依赖冲突，保证了实验环境的可复现性。\n- **内置高级特性支持**：直接支持域随机化、顶部俯视图、在墙面显示文字以及生成深度图等功能，方便直接开展前沿的仿真到现实迁移等研究。\n\nMiniworld 以其极简、易用和高度可定制的特性，显著降低了强化学习研究中 3D 视觉环境构建的门槛，让研究者能更专注于算法创新本身。","https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FFarama-Foundation_Miniworld_0fb6e231.jpg","Farama-Foundation","Farama Foundation","https:\u002F\u002Foss.gittoolsai.com\u002Favatars\u002FFarama-Foundation_61834603.png","The Farama foundation is a nonprofit organization working to develop and maintain open source reinforcement learning 
tools.",null,"contact@farama.org","FaramaFound","farama.org","https:\u002F\u002Fgithub.com\u002FFarama-Foundation",[87],{"name":88,"color":89,"percentage":90},"Python","#3572A5",100,756,145,"2026-03-24T06:10:11","Apache-2.0","Linux, macOS, Windows","可选，用于3D图形加速，未指定具体型号和显存要求","未说明",{"notes":99,"python":100,"dependencies":101},"项目已废弃，不再更新。支持离屏渲染（集群\u002FColab），需设置环境变量PYOPENGL_PLATFORM=egl或使用xvfb。","3.7+",[102,103,104],"Gymnasium","NumPy","Pyglet",[18],"2026-03-27T02:49:30.150509","2026-04-06T06:46:13.209007",[109,114,119,124,128,132],{"id":110,"question_zh":111,"answer_zh":112,"source_url":113},3962,"运行环境时遇到 `assert res == GL_FRAMEBUFFER_COMPLETE` 错误怎么办？","此错误通常与无头环境（如通过 SSH 连接服务器）中的 OpenGL 初始化有关。解决方案是使用 `xvfb-run` 来虚拟显示。例如：`xvfb-run -a -s \"-screen 0 1024x768x24 -ac +extension GLX +render -noreset\" python your_script.py`。另外，可以尝试将 pyglet 升级到 `1.5.11` 并使用 headless 模式：在导入 `gym_miniworld` 前，先执行 `import pyglet; pyglet.options['headless'] = True`。","https:\u002F\u002Fgithub.com\u002FFarama-Foundation\u002FMiniworld\u002Fissues\u002F4",{"id":115,"question_zh":116,"answer_zh":117,"source_url":118},3963,"如何在远程服务器（无图形界面）上运行 MiniWorld？","在远程服务器或没有显示器的环境中运行时，需要使用虚拟帧缓冲器。安装 `xvfb` 后，通过 `xvfb-run` 命令运行你的脚本。例如：`xvfb-run -a python your_program.py`。这可以避免与 OpenGL 和显示相关的初始化错误。","https:\u002F\u002Fgithub.com\u002FFarama-Foundation\u002FMiniworld\u002Fissues\u002F13",{"id":120,"question_zh":121,"answer_zh":122,"source_url":123},3964,"运行代码时遇到 `AssertionError: failed to load textures for name \"brick_wall\"` 错误如何解决？","这个错误可能与纹理加载或 pyglet 版本不兼容有关。一个可行的解决方法是安装特定版本的 `gym-miniworld`（例如通过 pip 安装），而不是从源码安装当前的 `miniworld`。此外，确保你的 pyglet 版本是兼容的，有用户反馈版本 `1.5.27` 可以正常工作，而 `2.0.9` 则不行。","https:\u002F\u002Fgithub.com\u002FFarama-Foundation\u002FMiniworld\u002Fissues\u002F82",{"id":125,"question_zh":126,"answer_zh":127,"source_url":113},3965,"在多进程或并行训练环境中遇到问题怎么办？","在使用像 `pytorch-a2c-ppo` 这类多进程训练代码时，可能会遇到进程创建问题。一个已知的解决方法是修改子进程的启动方式。在 `subproc_vec_env.py` 文件中，将 `fork` 方法改为 `'forkserver'`。具体位置在文件开头的 `mp 
= multiprocessing.get_context('forkserver')` 和 `process.start()` 调用处。这改变了子进程间资源共享的方式。",{"id":129,"question_zh":130,"answer_zh":131,"source_url":118},3966,"在 Jupyter Notebook 中导入 `gym_miniworld` 失败怎么办？","在 Jupyter Notebook 中直接导入 `gym_miniworld` 可能会因为缺少显示环境而失败。建议在服务器端运行 Notebook 时，也使用 `xvfb-run` 命令来启动 Jupyter 内核或整个 Notebook 服务器。例如：`xvfb-run -a jupyter notebook`。这样可以为其提供一个虚拟的显示环境。",{"id":133,"question_zh":134,"answer_zh":135,"source_url":136},3967,"如何确认我的图形环境（如 Mesa）设置是否正确？","在尝试运行 MiniWorld 前，建议先安装并检查基础的图形工具。可以安装 `mesa-utils` 包（例如在 Ubuntu 上使用 `sudo apt-get install mesa-utils`）来提供必要的 OpenGL 软件实现。运行 `glxinfo` 命令可以检查当前的 OpenGL 信息。虽然 MiniWorld 在无头服务器上主要依赖 `xvfb`，但确保这些基础库存在有助于排除其他依赖问题。","https:\u002F\u002Fgithub.com\u002FFarama-Foundation\u002FMiniworld\u002Fissues\u002F24",[138,143],{"id":139,"version":140,"summary_zh":141,"released_at":142},103407,"2.1.0","# v2.1.0 Release notes\r\n\r\nIn this release, Miniworld has been updated to support Gymnasium 0.29.1 and 1.0.0+, along with minor changes to the website. \r\n\r\nIn addition, a `ManualControl` class has been added for easy testing by users in `miniworld.manual_control` (https:\u002F\u002Fgithub.com\u002FFarama-Foundation\u002FMiniworld\u002Fpull\u002F94). The `*.mtl` loading was more robust, and position specification is now feasible for `place_agent`.  Add a `StochasticAction` wrapper for users who wish to take random actions for a percentage of the time. \r\n\r\n**Full Changelog**: https:\u002F\u002Fgithub.com\u002FFarama-Foundation\u002FMiniworld\u002Fcompare\u002F2.0.0...2.1.0","2025-01-12T11:38:29",{"id":144,"version":145,"summary_zh":146,"released_at":147},103408,"2.0.1","### Release Notes\r\nMiniWorld is a minimalistic 3D interior environment simulator for reinforcement learning & robotics research that allows environments to be easily edited like Minigrid meets DM Lab. It can simulate environments with rooms, doors, hallways, and various objects (e.g., office and home environments, mazes). 
Miniworld 2.0.0 is the first mature release within Farama. This version transitions from gym to gymnasium. Additionally, this release adds CI testing, code standardization pipeline, tests for environments, and documentation for each environment. \r\n\r\nFurthermore, we have a website (https:\u002F\u002Fminiworld.farama.org\u002F) that has documentation that covers all the relevant details to start implementing a reinforcement learning agent. In future releases, we plan to add tutorials and more detailed documentation.\r\n\r\n### New Features and Improvements\r\n- Added Sign environment from the DREAM paper by @ezliu in https:\u002F\u002Fgithub.com\u002FFarama-Foundation\u002FMiniworld\u002Fpull\u002F47\r\n- Updated to `Gymnasium v0.26.2` by @BolunDai0216 in https:\u002F\u002Fgithub.com\u002FFarama-Foundation\u002FMiniworld\u002Fpull\u002F72\r\n- Replaced random number generator from random.py with np_random from gymnasium by @hh2564 in https:\u002F\u002Fgithub.com\u002FFarama-Foundation\u002FMiniworld\u002Fpull\u002F74\r\n- Added `EzPickle` inheritance to the environments which enables pickling and unpickling objects via their constructor arguments by @BolunDai0216 in https:\u002F\u002Fgithub.com\u002FFarama-Foundation\u002FMiniworld\u002Fpull\u002F76\r\n\r\n### Bug Fixes and Documentation Updates\t\r\n\r\n- Made offscreen `gym-miniworld` work without updating NVIDIA drivers by @ptigas in https:\u002F\u002Fgithub.com\u002FFarama-Foundation\u002FMiniworld\u002Fpull\u002F40\t\t\r\n- Added docstrings for the environments and updated `manual_control.py` by @BolunDai0216 in https:\u002F\u002Fgithub.com\u002FFarama-Foundation\u002FMiniworld\u002Fpull\u002F73\r\n- Updated README and `manual_control.py` by @BolunDai0216 in https:\u002F\u002Fgithub.com\u002FFarama-Foundation\u002FMiniworld\u002Fpull\u002F77\r\n- Added docs website by @mgoulao in https:\u002F\u002Fgithub.com\u002FFarama-Foundation\u002FMiniworld\u002Fpull\u002F78\r\n- Updated docstrings for all of the 
environments by @BolunDai0216 in https:\u002F\u002Fgithub.com\u002FFarama-Foundation\u002FMiniworld\u002Fpull\u002F81\r\n","2023-02-14T16:16:49"]