[{"data":1,"prerenderedAt":-1},["ShallowReactive",2],{"similar-leofan90--Awesome-World-Models":3,"tool-leofan90--Awesome-World-Models":61},[4,18,26,36,44,53],{"id":5,"name":6,"github_repo":7,"description_zh":8,"stars":9,"difficulty_score":10,"last_commit_at":11,"category_tags":12,"status":17},4358,"openclaw","openclaw\u002Fopenclaw","OpenClaw 是一款专为个人打造的本地化 AI 助手，旨在让你在自己的设备上拥有完全可控的智能伙伴。它打破了传统 AI 助手局限于特定网页或应用的束缚，能够直接接入你日常使用的各类通讯渠道，包括微信、WhatsApp、Telegram、Discord、iMessage 等数十种平台。无论你在哪个聊天软件中发送消息，OpenClaw 都能即时响应，甚至支持在 macOS、iOS 和 Android 设备上进行语音交互，并提供实时的画布渲染功能供你操控。\n\n这款工具主要解决了用户对数据隐私、响应速度以及“始终在线”体验的需求。通过将 AI 部署在本地，用户无需依赖云端服务即可享受快速、私密的智能辅助，真正实现了“你的数据，你做主”。其独特的技术亮点在于强大的网关架构，将控制平面与核心助手分离，确保跨平台通信的流畅性与扩展性。\n\nOpenClaw 非常适合希望构建个性化工作流的技术爱好者、开发者，以及注重隐私保护且不愿被单一生态绑定的普通用户。只要具备基础的终端操作能力（支持 macOS、Linux 及 Windows WSL2），即可通过简单的命令行引导完成部署。如果你渴望拥有一个懂你",349277,3,"2026-04-06T06:32:30",[13,14,15,16],"Agent","开发框架","图像","数据工具","ready",{"id":19,"name":20,"github_repo":21,"description_zh":22,"stars":23,"difficulty_score":10,"last_commit_at":24,"category_tags":25,"status":17},3808,"stable-diffusion-webui","AUTOMATIC1111\u002Fstable-diffusion-webui","stable-diffusion-webui 是一个基于 Gradio 构建的网页版操作界面，旨在让用户能够轻松地在本地运行和使用强大的 Stable Diffusion 图像生成模型。它解决了原始模型依赖命令行、操作门槛高且功能分散的痛点，将复杂的 AI 绘图流程整合进一个直观易用的图形化平台。\n\n无论是希望快速上手的普通创作者、需要精细控制画面细节的设计师，还是想要深入探索模型潜力的开发者与研究人员，都能从中获益。其核心亮点在于极高的功能丰富度：不仅支持文生图、图生图、局部重绘（Inpainting）和外绘（Outpainting）等基础模式，还独创了注意力机制调整、提示词矩阵、负向提示词以及“高清修复”等高级功能。此外，它内置了 GFPGAN 和 CodeFormer 等人脸修复工具，支持多种神经网络放大算法，并允许用户通过插件系统无限扩展能力。即使是显存有限的设备，stable-diffusion-webui 也提供了相应的优化选项，让高质量的 AI 艺术创作变得触手可及。",162132,"2026-04-05T11:01:52",[14,15,13],{"id":27,"name":28,"github_repo":29,"description_zh":30,"stars":31,"difficulty_score":32,"last_commit_at":33,"category_tags":34,"status":17},1381,"everything-claude-code","affaan-m\u002Feverything-claude-code","everything-claude-code 是一套专为 AI 编程助手（如 Claude Code、Codex、Cursor 等）打造的高性能优化系统。它不仅仅是一组配置文件，而是一个经过长期实战打磨的完整框架，旨在解决 AI 
代理在实际开发中面临的效率低下、记忆丢失、安全隐患及缺乏持续学习能力等核心痛点。\n\n通过引入技能模块化、直觉增强、记忆持久化机制以及内置的安全扫描功能，everything-claude-code 能显著提升 AI 在复杂任务中的表现，帮助开发者构建更稳定、更智能的生产级 AI 代理。其独特的“研究优先”开发理念和针对 Token 消耗的优化策略，使得模型响应更快、成本更低，同时有效防御潜在的攻击向量。\n\n这套工具特别适合软件开发者、AI 研究人员以及希望深度定制 AI 工作流的技术团队使用。无论您是在构建大型代码库，还是需要 AI 协助进行安全审计与自动化测试，everything-claude-code 都能提供强大的底层支持。作为一个曾荣获 Anthropic 黑客大奖的开源项目，它融合了多语言支持与丰富的实战钩子（hooks），让 AI 真正成长为懂上",146793,2,"2026-04-08T23:32:35",[14,13,35],"语言模型",{"id":37,"name":38,"github_repo":39,"description_zh":40,"stars":41,"difficulty_score":32,"last_commit_at":42,"category_tags":43,"status":17},2271,"ComfyUI","Comfy-Org\u002FComfyUI","ComfyUI 是一款功能强大且高度模块化的视觉 AI 引擎，专为设计和执行复杂的 Stable Diffusion 图像生成流程而打造。它摒弃了传统的代码编写模式，采用直观的节点式流程图界面，让用户通过连接不同的功能模块即可构建个性化的生成管线。\n\n这一设计巧妙解决了高级 AI 绘图工作流配置复杂、灵活性不足的痛点。用户无需具备编程背景，也能自由组合模型、调整参数并实时预览效果，轻松实现从基础文生图到多步骤高清修复等各类复杂任务。ComfyUI 拥有极佳的兼容性，不仅支持 Windows、macOS 和 Linux 全平台，还广泛适配 NVIDIA、AMD、Intel 及苹果 Silicon 等多种硬件架构，并率先支持 SDXL、Flux、SD3 等前沿模型。\n\n无论是希望深入探索算法潜力的研究人员和开发者，还是追求极致创作自由度的设计师与资深 AI 绘画爱好者，ComfyUI 都能提供强大的支持。其独特的模块化架构允许社区不断扩展新功能，使其成为当前最灵活、生态最丰富的开源扩散模型工具之一，帮助用户将创意高效转化为现实。",108111,"2026-04-08T11:23:26",[14,15,13],{"id":45,"name":46,"github_repo":47,"description_zh":48,"stars":49,"difficulty_score":32,"last_commit_at":50,"category_tags":51,"status":17},4721,"markitdown","microsoft\u002Fmarkitdown","MarkItDown 是一款由微软 AutoGen 团队打造的轻量级 Python 工具，专为将各类文件高效转换为 Markdown 格式而设计。它支持 PDF、Word、Excel、PPT、图片（含 OCR）、音频（含语音转录）、HTML 乃至 YouTube 链接等多种格式的解析，能够精准提取文档中的标题、列表、表格和链接等关键结构信息。\n\n在人工智能应用日益普及的今天，大语言模型（LLM）虽擅长处理文本，却难以直接读取复杂的二进制办公文档。MarkItDown 恰好解决了这一痛点，它将非结构化或半结构化的文件转化为模型“原生理解”且 Token 效率极高的 Markdown 格式，成为连接本地文件与 AI 分析 pipeline 的理想桥梁。此外，它还提供了 MCP（模型上下文协议）服务器，可无缝集成到 Claude Desktop 等 LLM 应用中。\n\n这款工具特别适合开发者、数据科学家及 AI 研究人员使用，尤其是那些需要构建文档检索增强生成（RAG）系统、进行批量文本分析或希望让 AI 
助手直接“阅读”本地文件的用户。虽然生成的内容也具备一定可读性，但其核心优势在于为机器",93400,"2026-04-06T19:52:38",[52,14],"插件",{"id":54,"name":55,"github_repo":56,"description_zh":57,"stars":58,"difficulty_score":10,"last_commit_at":59,"category_tags":60,"status":17},4487,"LLMs-from-scratch","rasbt\u002FLLMs-from-scratch","LLMs-from-scratch 是一个基于 PyTorch 的开源教育项目，旨在引导用户从零开始一步步构建一个类似 ChatGPT 的大型语言模型（LLM）。它不仅是同名技术著作的官方代码库，更提供了一套完整的实践方案，涵盖模型开发、预训练及微调的全过程。\n\n该项目主要解决了大模型领域“黑盒化”的学习痛点。许多开发者虽能调用现成模型，却难以深入理解其内部架构与训练机制。通过亲手编写每一行核心代码，用户能够透彻掌握 Transformer 架构、注意力机制等关键原理，从而真正理解大模型是如何“思考”的。此外，项目还包含了加载大型预训练权重进行微调的代码，帮助用户将理论知识延伸至实际应用。\n\nLLMs-from-scratch 特别适合希望深入底层原理的 AI 开发者、研究人员以及计算机专业的学生。对于不满足于仅使用 API，而是渴望探究模型构建细节的技术人员而言，这是极佳的学习资源。其独特的技术亮点在于“循序渐进”的教学设计：将复杂的系统工程拆解为清晰的步骤，配合详细的图表与示例，让构建一个虽小但功能完备的大模型变得触手可及。无论你是想夯实理论基础，还是为未来研发更大规模的模型做准备",90106,"2026-04-06T11:19:32",[35,15,13,14],{"id":62,"github_repo":63,"name":64,"description_en":65,"description_zh":66,"ai_summary_zh":66,"readme_en":67,"readme_zh":68,"quickstart_zh":69,"use_case_zh":70,"hero_image_url":71,"owner_login":72,"owner_name":73,"owner_avatar_url":74,"owner_bio":75,"owner_company":76,"owner_location":75,"owner_email":75,"owner_twitter":75,"owner_website":75,"owner_url":77,"languages":75,"stars":78,"forks":79,"last_commit_at":80,"license":81,"difficulty_score":82,"env_os":83,"env_gpu":84,"env_ram":84,"env_deps":85,"category_tags":88,"github_topics":90,"view_count":32,"oss_zip_url":75,"oss_zip_packed_at":75,"status":17,"created_at":99,"updated_at":100,"faqs":101,"releases":102},5780,"leofan90\u002FAwesome-World-Models","Awesome-World-Models","A comprehensive list of papers for the definition of World Models and using World Models for General Video Generation, Embodied AI, and Autonomous Driving, including papers, codes, and related websites.","Awesome-World-Models 是一个专为人工智能领域打造的开源资源库，致力于系统性地整理与“世界模型”相关的顶尖学术论文、代码实现及技术博客。它聚焦于通用视频生成、具身智能（Embodied AI）以及自动驾驶三大核心场景，旨在解决该领域研究分散、定义模糊以及复现困难的问题，为开发者提供一站式的知识导航。\n\n无论是希望追踪前沿动态的科研人员，还是寻求高质量基线模型的算法工程师，都能从中获益。该资源库不仅收录了如 2018 
年奠基性论文《World Models》等经典文献，还持续更新包括 OpenWorldLib、DreamDojo、SIMA 2 在内的最新技术报告与开源项目。其独特亮点在于构建了从理论基础到实际基准测试（Benchmarks）的完整生态，涵盖了因果建模、物理对齐及大规模人类操作学习等前沿方向。通过清晰的分类索引，Awesome-World-Models 帮助用户快速定位所需资源，极大地降低了探索世界模型技术的门槛，是推动机器人感知与决策能力进化的重要助手。","# Awesome World Models for Robotics [![Awesome](https:\u002F\u002Fcdn.rawgit.com\u002Fsindresorhus\u002Fawesome\u002Fd7305f38d29fed78fa85652e3a63e154dd8e8829\u002Fmedia\u002Fbadge.svg)](https:\u002F\u002Fgithub.com\u002Fsindresorhus\u002Fawesome)\r\n\r\nThis repository provides a curated list of **papers for World Models for General Video Generation, Embodied AI, and Autonomous Driving**. Template from [Awesome-LLM-Robotics](https:\u002F\u002Fgithub.com\u002FGT-RIPL\u002FAwesome-LLM-Robotics) and [Awesome-World-Model](https:\u002F\u002Fgithub.com\u002FLMD0311\u002FAwesome-World-Model)\u003Cbr>\r\n\r\n#### Contributions are welcome! Please feel free to submit [pull requests](https:\u002F\u002Fgithub.com\u002Fleofan90\u002FAwesome-World-Models\u002Fblob\u002Fmain\u002Fhow-to-PR.md) or reach out via [email](mailto:chunkaifan-changetoat-stu-changetodot-pku--changetodot-changetoedu-changetocn) to add papers! \u003Cbr>\r\n\r\nIf you find this repository useful, please consider [citing](#citation) and giving this list a star ⭐. Feel free to share it with others!\r\n\r\n---\r\n## Overview\r\n\r\n- [Awesome World Models for Robotics ](#awesome-world-models-for-robotics-)\r\n      - [Contributions are welcome! Please feel free to submit pull requests or reach out via email to add papers! 
](#contributions-are-welcome-please-feel-free-to-submit-pull-requests-or-reach-out-via-email-to-add-papers-)\r\n  - [Overview](#overview)\r\n  - [Foundation paper of World Model](#foundation-paper-of-world-model)\r\n  - [Blog or Technical Report](#blog-or-technical-report)\r\n  - [Surveys](#surveys)\r\n  - [Benchmarks \\& Evaluation](#benchmarks--evaluation)\r\n  - [General World Models](#general-world-models)\r\n  - [World Models for Embodied AI](#world-models-for-embodied-ai)\r\n  - [World Models for VLA](#world-models-for-vla)\r\n  - [World Models for Visual Understanding](#world-models-for-visual-understanding)\r\n  - [World Models for Autonomous Driving](#world-models-for-autonomous-driving)\r\n    - [Refer to https:\u002F\u002Fgithub.com\u002FLMD0311\u002FAwesome-World-Model](#refer-to-httpsgithubcomlmd0311awesome-world-model)\r\n  - [Citation](#citation)\r\n\r\n---\r\n## Foundation paper of World Model\r\n* World Models, **`NIPS 2018 Oral`**. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F1803.10122)] [[Website](https:\u002F\u002Fworldmodels.github.io\u002F)] \r\n\r\n## Blog or Technical Report\r\n* **`OpenWorldLib`**, OpenWorldLib: A Unified Codebase and Definition of Advanced World Models. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2604.04707)]\r\n* **`ABot-PhysWorld`**, ABot-PhysWorld: Interactive World Foundation Model for Robotic Manipulation with Physics Alignment. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2603.23376)]\r\n* **`GigaWorld-Policy`**, GigaWorld-Policy: An Efficient Action-Centered World--Action Model. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2603.17240)]\r\n* **`GigaBrain-0.5M*`**, GigaBrain-0.5M*: a VLA That Learns From World Model-Based Reinforcement Learning. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2602.12099)] [[Website](https:\u002F\u002Fgigabrain05m.github.io\u002F)] \r\n* **`ALIVE`**, ALIVE: Animate Your World with Lifelike Audio-Video Generation. 
[[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2602.08682)] [[Website](https:\u002F\u002Ffoundationvision.github.io\u002FAlive\u002F)] \r\n* **`DreamDojo`**, DreamDojo: A Generalist Robot World Model from Large-Scale Human Videos. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2602.06949)] [[Website](https:\u002F\u002Fdreamdojo-world.github.io\u002F)] \r\n* **`lingbot-va`**, Causal World Modeling for Robot Control. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2601.21998)] [[Website](https:\u002F\u002Ftechnology.robbyant.com\u002Flingbot-va)] [[Code](https:\u002F\u002Fgithub.com\u002Frobbyant\u002Flingbot-va)]\r\n* **`lingbot-world`**, Advancing Open-source World Models. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2601.20540)] [[Website](https:\u002F\u002Ftechnology.robbyant.com\u002Flingbot-world)] [[Code](https:\u002F\u002Fgithub.com\u002Frobbyant\u002Flingbot-world)]\r\n* **`TARS`**, World In Your Hands: A Large-Scale and Open-source Ecosystem for Learning Human-centric Manipulation in the Wild. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2512.24310)] [[Website](https:\u002F\u002Fwiyh.tars-ai.com\u002F)] \r\n* **`SIMA 2`**, SIMA 2: A Generalist Embodied Agent for Virtual Worlds. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2512.04797)]\r\n* **`SimWorld`**, SimWorld: An Open-ended Realistic Simulator for Autonomous Agents in Physical and Social Worlds. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2512.01078)] [[Website](https:\u002F\u002Fsimworld.org\u002F)] \r\n* **`Hunyuan-GameCraft-2`**, Hunyuan-GameCraft-2: Instruction-following Interactive Game World Model. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2511.23429)] [[Website](https:\u002F\u002Fhunyuan-gamecraft-2.github.io\u002F)] \r\n* **`GigaWorld-0`**, GigaWorld-0: World Models as Data Engine to Empower Embodied AI. 
[[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2511.19861)] [[Website](https:\u002F\u002Fgigaworld0.github.io\u002F)] \r\n* **`PAN`**, PAN: A World Model for General, Interactable, and Long-Horizon World Simulation. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2511.09057)] \r\n* **`Cosmos-Predict2.5`**, World Simulation with Video Foundation Models for Physical AI. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2511.00062)] [[Code](https:\u002F\u002Fgithub.com\u002Fnvidia-cosmos\u002Fcosmos-predict2.5)]\r\n* **`Emu3.5`**, Emu3.5: Native Multimodal Models are World Learners. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2510.26583)] [[Website](https:\u002F\u002Femu.world)] [[Code](https:\u002F\u002Fgithub.com\u002Fbaaivision\u002FEmu3.5)]\r\n* **`ODesign`**, ODesign: A World Model for Biomolecular Interaction Design. [[Paper](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2510.22304)] [[Website](https:\u002F\u002Fodesign.lglab.ac.cn)]\r\n* **`GigaBrain-0`**, GigaBrain-0: A World Model-Powered Vision-Language-Action Model. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2510.19430)] [[Website](https:\u002F\u002Fgigabrain0.github.io\u002F)]\r\n* **`CWM`**, CWM: An Open-Weights LLM for Research on Code Generation with World Models. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2510.02387)] [[Website](https:\u002F\u002Fai.meta.com\u002Fresources\u002Fmodels-and-libraries\u002Fcwm-downloads)] [[Code](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fcwm)]\r\n* **`WoW`**, WoW: Towards a World omniscient World model Through Embodied Interaction. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2509.22642)] [[Website](https:\u002F\u002Fwow-world-model.github.io\u002F)]\r\n* **`Matrix-Game 2.0`**, Matrix-Game 2.0: An Open-Source, Real-Time, and Streaming Interactive World Model. 
[[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2508.13009)] [[Website](https:\u002F\u002Fmatrix-game-v2.github.io\u002F)]\r\n* **`Matrix-3D`**, Matrix-3D: Omnidirectional Explorable 3D World Generation. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2508.08086)] [[Website](https:\u002F\u002Fmatrix-3d.github.io)]\r\n* **`HunyuanWorld 1.0`**, HunyuanWorld 1.0: Generating Immersive, Explorable, and Interactive 3D Worlds from Words or Pixels. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2507.21809)] [[Website](https:\u002F\u002F3d-models.hunyuan.tencent.com\u002Fworld\u002F)] [[Code](https:\u002F\u002Fgithub.com\u002FTencent-Hunyuan\u002FHunyuanWorld-1.0)] \r\n* What Does it Mean for a Neural Network to Learn a \"World Model\"?. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2507.21513)]\r\n* **`Matrix-Game`**, Matrix-Game: Interactive World Foundation Model. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2506.18701)] [[Code](https:\u002F\u002Fgithub.com\u002FSkyworkAI\u002FMatrix-Game)] \r\n* **`Cosmos-Drive-Dreams`**, Cosmos-Drive-Dreams: Scalable Synthetic Driving Data Generation with World Foundation Models. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2506.09042)] [[Website](https:\u002F\u002Fresearch.nvidia.com\u002Flabs\u002Ftoronto-ai\u002Fcosmos_drive_dreams)] \r\n* **`GAIA-2`**, GAIA-2: A Controllable Multi-View Generative World Model for Autonomous Driving. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2503.20523)] [[Website](https:\u002F\u002Fwayve.ai\u002Fthinking\u002Fgaia-2)]\r\n* **`Cosmos`**, Cosmos World Foundation Model Platform for Physical AI. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2501.03575)] [[Website](https:\u002F\u002Fwww.nvidia.com\u002Fen-us\u002Fai\u002Fcosmos\u002F)] [[Code](https:\u002F\u002Fgithub.com\u002FNVIDIA\u002FCosmos)]\r\n* **`1X Technologies`**, 1X World Model. 
[[Blog](https:\u002F\u002Fwww.1x.tech\u002Fdiscover\u002F1x-world-model)]\r\n* **`Runway`**, Introducing General World Models. [[Blog](https:\u002F\u002Frunwayml.com\u002Fresearch\u002Fintroducing-general-world-models)]\r\n* **`Wayve`**, Introducing GAIA-1: A Cutting-Edge Generative AI Model for Autonomy. [[Paper](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2309.17080)] [[Blog](https:\u002F\u002Fwayve.ai\u002Fthinking\u002Fintroducing-gaia1\u002F)] \r\n* **`Yann LeCun`**, A Path Towards Autonomous Machine Intelligence. [[Paper](https:\u002F\u002Fopenreview.net\u002Fpdf?id=BZ5a1r-kVsf)]\r\n\r\n## Surveys\r\n* \"Video Generation Models as World Models: Efficient Paradigms, Architectures and Algorithms\" **`arXiv 2026.03`**. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2603.28489)]\r\n* \"From Digital Twins to World Models: Opportunities, Challenges, and Applications for Mobile Edge General Intelligence\" **`arXiv 2026.03`**. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2603.17420)]\r\n* \"The Trinity of Consistency as a Defining Principle for General World Models\" **`arXiv 2026.02`**. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2602.23152)] [[Code](https:\u002F\u002Fgithub.com\u002Fopenraiser\u002Fawesome-world-model-evolution)]\r\n* \"A Mechanistic View on Video Generation as World Models: State and Dynamics\", **`arXiv 2026.01`**. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2601.17067)] \r\n* \"From Generative Engines to Actionable Simulators: The Imperative of Physical Grounding in World Models\", **`arXiv 2026.01`**. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2601.15533)] \r\n* \"Modeling the Mental World for Embodied AI: A Comprehensive Review\", **`arXiv 2026.01`**. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2601.02378)] \r\n* \"Digital Twin AI: Opportunities and Challenges from Large Language Models to World Models\", **`arXiv 2026.01`**. 
[[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2601.01321)] \r\n* \"Beyond World Models: Rethinking Understanding in AI Models\", **`AAAI 2026`**. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2511.12239)] \r\n* \"Simulating the Visual World with Artificial Intelligence: A Roadmap\", **`arXiv 2025.11`**. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2511.08585)] [[Website](https:\u002F\u002Fworld-model-roadmap.github.io\u002F)] [[Code](https:\u002F\u002Fgithub.com\u002Fziqihuangg\u002FAwesome-From-Video-Generation-to-World-Model)]\r\n* \"A Step Toward World Models: A Survey on Robotic Manipulation\", **`arXiv 2025.11`**. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2511.02097)]\r\n* \"World Models Should Prioritize the Unification of Physical and Social Dynamics\", **`NIPS 2025`**. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2510.21219)] [[Website](https:\u002F\u002Fsites.google.com\u002Fview\u002Fworld-model-position)]\r\n* \"From Masks to Worlds: A Hitchhiker's Guide to World Models\", **`arXiv 2025.10`**. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2510.20668)] [[Website](https:\u002F\u002Fgithub.com\u002FM-E-AGI-Lab\u002FAwesome-World-Models)]\r\n* \"A Comprehensive Survey on World Models for Embodied AI\", **`arXiv 2025.10`**. [[Paper](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2510.16732)] [[Website](https:\u002F\u002Fgithub.com\u002FLi-Zn-H\u002FAwesomeWorldModels)]\r\n* \"The Safety Challenge of World Models for Embodied AI Agents: A Review\", **`arXiv 2025.10`**. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2510.05865)]\r\n* \"Embodied AI: From LLMs to World Models\", **`IEEE CASM`**. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2509.20021)]\r\n* \"3D and 4D World Modeling: A Survey\", **`arXiv 2025.09`**. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2509.07996)]\r\n* \"Edge General Intelligence Through World Models and Agentic AI: Fundamentals, Solutions, and Challenges\", **`arXiv 2025.08`**. 
[[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2508.09561)]\r\n* \"A Survey: Learning Embodied Intelligence from Physical Simulators and World Models\", **`arXiv 2025.07`**. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2507.00917)] [[Code](https:\u002F\u002Fgithub.com\u002FNJU3DV-LoongGroup\u002FEmbodied-World-Models-Survey)]\r\n* \"Embodied AI Agents: Modeling the World\", **`arXiv 2025.06`**. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2506.22355)] \r\n* \"From 2D to 3D Cognition: A Brief Survey of General World Models\", **`arXiv 2025.06`**. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2506.20134)] \r\n* \"A Survey on World Models Grounded in Acoustic Physical Information\", **`arXiv 2025.06`**. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2506.13833)] \r\n* \"Exploring the Evolution of Physics Cognition in Video Generation: A Survey\", **`arXiv 2025.03`**. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2503.21765)] [[Code](https:\u002F\u002Fgithub.com\u002Fminnie-lin\u002FAwesome-Physics-Cognition-based-Video-Generation)]\r\n* \"World Models in Artificial Intelligence: Sensing, Learning, and Reasoning Like a Child\", **`arXiv 2025.03`**. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2503.15168)] \r\n* \"Simulating the Real World: A Unified Survey of Multimodal Generative Models\", **`arXiv 2025.03`**. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2503.04641)] [[Code](https:\u002F\u002Fgithub.com\u002FALEEEHU\u002FWorld-Simulator)]\r\n* \"Four Principles for Physically Interpretable World Models\", **`arXiv 2025.03`**. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2503.02143)]\r\n* \"The Role of World Models in Shaping Autonomous Driving: A Comprehensive Survey\", **`arXiv 2025.02`**. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2502.10498)] [[Code](https:\u002F\u002Fgithub.com\u002FLMD0311\u002FAwesome-World-Model)]\r\n* \"A Survey of World Models for Autonomous Driving\", **`TPAMI`**. 
[[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2501.11260)]\r\n* \"Understanding World or Predicting Future? A Comprehensive Survey of World Models\", **`arXiv 2024.11`**. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2411.14499)]\r\n* \"World Models: The Safety Perspective\", **`ISSRE WDMD`**. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2411.07690)]\r\n* \"Exploring the Interplay Between Video Generation and World Models in Autonomous Driving: A Survey\", **`arXiv 2024.11`**. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2411.02914)]\r\n* \"From Efficient Multimodal Models to World Models: A Survey\", **`arXiv 2024.07`**. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2407.00118)]\r\n* \"Aligning Cyber Space with Physical World: A Comprehensive Survey on Embodied AI\", **`arXiv 2024.07`**. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2407.06886)] [[Code](https:\u002F\u002Fgithub.com\u002FHCPLab-SYSU\u002FEmbodied_AI_Paper_List)]\r\n* \"Is Sora a World Simulator? A Comprehensive Survey on General World Models and Beyond\", **`arXiv 2024.05`**. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2405.03520)] [[Code](https:\u002F\u002Fgithub.com\u002FGigaAI-research\u002FGeneral-World-Models-Survey)]\r\n* \"World Models for Autonomous Driving: An Initial Survey\", **`TIV`**. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2403.02622)]\r\n* \"A survey on multimodal large language models for autonomous driving\", **`WACVW 2024`**. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2311.12320)] [[Code](https:\u002F\u002Fgithub.com\u002FIrohXu\u002FAwesome-Multimodal-LLM-Autonomous-Driving)]\r\n\r\n---\r\n## Benchmarks & Evaluation\r\n* \"World Reasoning Arena\", **`arxiv 2026.03`**. 
[[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2603.25887)] [[Code](https:\u002F\u002Fgithub.com\u002FMBZUAI-IFM\u002FWR-Arena)] \r\n* **Omni-WorldBench**: \"Omni-WorldBench: Towards a Comprehensive Interaction-Centric Evaluation for World Models\", **`arxiv 2026.03`**. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2603.22212)] \r\n* \"Out of Sight, Out of Mind? Evaluating State Evolution in Video World Models\", **`arxiv 2026.03`**. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2603.13215)] [[Website](https:\u002F\u002Fglab-caltech.github.io\u002FSTEVOBench\u002F)]\r\n* **MicroVerse**: \"MicroVerse: A Preliminary Exploration Toward a Micro-World Simulation\", **`ICLR 2026`**. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2603.00585)] [[Code](https:\u002F\u002Fgithub.com\u002FFreedomIntelligence\u002FMicroVerse)]\r\n* **WorldArena**: \"WorldArena: A Unified Benchmark for Evaluating Perception and Functional Utility of Embodied World Models\" **`arXiv 2026.02`**. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2602.08971)] [[Code]( https:\u002F\u002Fworld-arena.ai)]\r\n* **MIND**: \"MIND: Benchmarking Memory Consistency and Action Control in World Models\" **`arXiv 2026.02`**. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2602.08025)] [[Code](https:\u002F\u002Fgithub.com\u002FCSU-JPG\u002FMIND)]\r\n* **WoW-bench**: \"World of Workflows: a Benchmark for Bringing World Models to Enterprise Systems\" **`arXiv 2026.01`**. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2601.22130)]\r\n* **WorldBench**: \"WorldBench: Disambiguating Physics for Diagnostic Evaluation of World Models\" **`arXiv 2026.01`**. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2601.21282)] [[Website](https:\u002F\u002Fworld-bench.github.io\u002F)] \r\n* **PhysicsMind**: \"PhysicsMind: Sim and Real Mechanics Benchmarking for Physical Reasoning and Prediction in Foundational VLMs and World Models\" **`arXiv 2026.01`**. 
[[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2601.16007)] \r\n* **RBench**: \"Rethinking Video Generation Model for the Embodied World\" **`arXiv 2026.01`**. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2601.15282)] [[Website](https:\u002F\u002Fdagroup-pku.github.io\u002FReVidgen.github.io\u002F)] [[Code](https:\u002F\u002Fgithub.com\u002FDAGroup-PKU\u002FReVidgen\u002F)] \r\n* **Wow, wo, val!**: \"Wow, wo, val! A Comprehensive Embodied World Model Evaluation Turing Test\" **`arXiv 2026.01`**. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2601.04137)] \r\n* **DrivingGen**: \"DrivingGen: A Comprehensive Benchmark for Generative Video World Models in Autonomous Driving\" **`arXiv 2026.01`**. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2601.01528)] [[Website](https:\u002F\u002Fdrivinggen-bench.github.io\u002F)]\r\n* \"A Unified Definition of Hallucination, Or: It's the World Model, Stupid\" **`arXiv 2025.12`**. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2512.21577)] \r\n* \"Active Intelligence in Video Avatars via Closed-loop World Modeling\" **`arXiv 2025.12`**. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2512.20615)] [[Code](https:\u002F\u002Fxuanhuahe.github.io\u002FORCA\u002F)]\r\n* **MobileWorldBench**: \"MobileWorldBench: Towards Semantic World Modeling For Mobile Agents\" **`arXiv 2025.12`**. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2512.14014)] [[Code](https:\u002F\u002Fgithub.com\u002Fjacklishufan\u002FMobileWorld)]\r\n* **WorldLens**: \"WorldLens: Full-Spectrum Evaluations of Driving World Models in Real World\" **`arXiv 2025.12`**. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2512.10958)] [[Website](https:\u002F\u002Fworldbench.github.io\u002Fworldlens)]\r\n* \"Evaluating Gemini Robotics Policies in a Veo World Simulator\" **`arXiv 2025.12`**. 
[[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2512.10675)]\r\n* **On Memory**: \"On Memory: A comparison of memory mechanisms in world models\" **`World Modeling Workshop 2026`**. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2512.06983)]\r\n* **SmallWorlds**: \"SmallWorlds: Assessing Dynamics Understanding of World Models in Isolated Environments\", **`arXiv 2025.11`**. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2511.23465)] \r\n* **4DWorldBench**: \"4DWorldBench: A Comprehensive Evaluation Framework for 3D\u002F4D World Generation Models\", **`arXiv 2025.11`**. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2511.19836)] \r\n* **Target-Bench**: \"Target-Bench: Can World Models Achieve Mapless Path Planning with Semantic Targets?\", **`arXiv 2025.11`**. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2511.17792)] \r\n* **PragWorld**: \"PragWorld: A Benchmark Evaluating LLMs' Local World Model under Minimal Linguistic Alterations and Conversational Dynamics\", **`AAAI 2026`**. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2511.13021)] \r\n* \"Can World Simulators Reason? Gen-ViRe: A Generative Visual Reasoning Benchmark\", **`arXiv 2025.11`**. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2511.13853)] [[Code](https:\u002F\u002Fgithub.com\u002FL-CodingSpace\u002FGVR)]\r\n* \"Scalable Policy Evaluation with Video World Models\", **`arXiv 2025.11`**. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2511.11520)]\r\n* \"Expert Evaluation of LLM World Models: A High-Tc Superconductivity Case Study\", **`ICML 2025 workshop on Assessing World Models and the Explorations in AI Today`**. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2511.03782)]\r\n* \"Benchmarking World-Model Learning\", **`arXiv 2025.10`**. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2510.19788)]\r\n* **LikePhys**: \"LikePhys: Evaluating Intuitive Physics Understanding in Video Diffusion Models via Likelihood Preference\", **`ICLR 2026`**. 
[[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2510.11512)] [[Website](https:\u002F\u002Fyuanjianhao508.github.io\u002FLikePhys\u002F)]\r\n* **World-in-World**: \"World-in-World: World Models in a Closed-Loop World\", **`arXiv 2025.10`**. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2510.18135)] [[Website](https:\u002F\u002Fgithub.com\u002FWorld-In-World\u002Fworld-in-world)] \r\n* **VideoVerse**: \"VideoVerse: How Far is Your T2V Generator from a World Model?\", **`arXiv 2025.10`**. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2510.08398)] \r\n* **OmniWorld**: \"OmniWorld: A Multi-Domain and Multi-Modal Dataset for 4D World Modeling\", **`arXiv 2025.09`**. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2509.12201)] [[Website](https:\u002F\u002Fyangzhou24.github.io\u002FOmniWorld\u002F)]\r\n* \"Beyond Simulation: Benchmarking World Models for Planning and Causality in Autonomous Driving\", **`ICRA 2025`**. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2508.01922)] \r\n* **WM-ABench**: \"Do Vision-Language Models Have Internal World Models? Towards an Atomic Evaluation\", **`ACL 2025 (Findings)`**. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2506.21876)] [[Website](https:\u002F\u002Fwm-abench.maitrix.org\u002F)]\r\n* **UNIVERSE**: \"Adapting Vision-Language Models for Evaluating World Models\", **`arxiv 2025.06`**. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2506.17967)]\r\n* **WorldPrediction**: \"WorldPrediction: A Benchmark for High-level World Modeling and Long-horizon Procedural Planning\", **`arxiv 2025.06`**. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2506.04363)]\r\n* \"Toward Memory-Aided World Models: Benchmarking via Spatial Consistency\", **`arxiv 2025.05`**. 
[[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2505.22976)] [[Datasets](https:\u002F\u002Fhuggingface.co\u002Fdatasets\u002FkevinLian\u002FLoopNav)] [[Code](https:\u002F\u002Fgithub.com\u002FKevin-lkw\u002FLoopNav)]\r\n* **SimWorld**: \"SimWorld: A Unified Benchmark for Simulator-Conditioned Scene Generation via World Model\", **`arxiv 2025.05`**. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2503.13952)] [[Code](https:\u002F\u002Fgithub.com\u002FLi-Zn-H\u002FSimWorld)]\r\n* **EWMBench**: \"EWMBench: Evaluating Scene, Motion, and Semantic Quality in Embodied World Models\", **`arxiv 2025.05`**. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2505.09694)] [[Code](https:\u002F\u002Fgithub.com\u002FAgibotTech\u002FEWMBench)]\r\n* \"Toward Stable World Models: Measuring and Addressing World Instability in Generative Environments\", **`arxiv 2025.03`**. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2503.08122)] \r\n* **WorldModelBench**: \"WorldModelBench: Judging Video Generation Models As World Models\", **`CVPR 2025`**. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2502.20694)] [[Website](https:\u002F\u002Fworldmodelbench-team.github.io\u002F)]\r\n* **Text2World**: \"Text2World: Benchmarking Large Language Models for Symbolic World Model Generation\", **`arxiv 2025.02`**. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2502.13092)] [[Website](https:\u002F\u002Ftext-to-world.github.io\u002F)] \r\n* **ACT-Bench**: \"ACT-Bench: Towards Action Controllable World Models for Autonomous Driving\", **`arxiv 2024.12`**. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2412.05337)]\r\n* **WorldSimBench**: \"WorldSimBench: Towards Video Generation Models as World Simulators\", **`arxiv 2024.10`**. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2410.18072)] [[Website](https:\u002F\u002Firanqin.github.io\u002FWorldSimBench.github.io\u002F)] \r\n* **EVA**: \"EVA: An Embodied World Model for Future Video Anticipation\", **`ICML 2025`**. 
[[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2410.15461)] [[Website](https:\u002F\u002Fsites.google.com\u002Fview\u002Feva-publi)] \r\n* **AeroVerse**: \"AeroVerse: UAV-Agent Benchmark Suite for Simulating, Pre-training, Finetuning, and Evaluating Aerospace Embodied World Models\", **`arxiv 2024.08`**. [[Paper](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2408.15511)]\r\n* **CityBench**: \"CityBench: Evaluating the Capabilities of Large Language Model as World Model\", **`arXiv 2024.06`**. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2406.13945)] [[Code](https:\u002F\u002Fgithub.com\u002Ftsinghua-fib-lab\u002FCityBench)]\r\n* \"Imagine the Unseen World: A Benchmark for Systematic Generalization in Visual World Models\", **`NIPS 2023`**. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2311.09064)]\r\n\r\n---\r\n## General World Models\r\n* **InCoder-32B-Thinking**: \"InCoder-32B-Thinking: Industrial Code World Model for Thinking\", **`arxiv 2026.04`**. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2604.03144)] \r\n* **Learn2Fold**: \"Learn2Fold: Structured Origami Generation with World Model Planning\", **`arxiv 2026.03`**. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2603.29585)] \r\n* **WorldFlow3D**: \"WorldFlow3D: Flowing Through 3D Distributions for Unbounded World Generation\", **`arxiv 2026.03`**. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2603.29089)] [[Website](https:\u002F\u002Flight.princeton.edu\u002Fworldflow3d)] \r\n* **LOME**: \"LOME: Learning Human-Object Manipulation with Action-Conditioned Egocentric World Model\", **`arxiv 2026.03`**. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2603.27449)] \r\n* **VGGRPO**: \"VGGRPO: Towards World-Consistent Video Generation with 4D Latent Reward\", **`arxiv 2026.03`**. 
[[Paper](https://arxiv.org/abs/2603.26599)] [[Website](https://zhaochongan.github.io/projects/VGGRPO)]
* **PiJEPA**: "Policy-Guided World Model Planning for Language-Conditioned Visual Navigation", **`arXiv 2026.03`**. [[Paper](https://arxiv.org/abs/2603.25981)]
* "Out of Sight but Not Out of Mind: Hybrid Memory for Dynamic Video World Models", **`arXiv 2026.03`**. [[Paper](https://arxiv.org/abs/2603.25716)]
* **Lingshu-Cell**: "Lingshu-Cell: A generative cellular world model for transcriptome modeling toward virtual cells", **`arXiv 2026.03`**. [[Paper](https://arxiv.org/abs/2603.25240)]
* **AI-Supervisor**: "AI-Supervisor: Autonomous AI Research Supervision via a Persistent Research World Model", **`arXiv 2026.03`**. [[Paper](https://arxiv.org/abs/2603.24402)]
* **WildWorld**: "WildWorld: A Large-Scale Dataset for Dynamic World Modeling with Actions and Explicit State toward Generative ARPG", **`arXiv 2026.03`**. [[Paper](https://arxiv.org/abs/2603.23497)] [[Website](https://shandaai.github.io/wildworld-project/)] [[Code](https://github.com/ShandaAI/WildWorld)]
* "Model Predictive Control with Differentiable World Models for Offline Reinforcement Learning", **`arXiv 2026.03`**. [[Paper](https://arxiv.org/abs/2603.22430)]
* **WorldCache**: "WorldCache: Content-Aware Caching for Accelerated Video World Models", **`arXiv 2026.03`**. [[Paper](https://arxiv.org/abs/2603.22286)] [[Website](https://umair1221.github.io/World-Cache/)]
* "From Part to Whole: 3D Generative World Model with an Adaptive Structural Hierarchy", **`ICME 2026`**. [[Paper](https://arxiv.org/abs/2603.21557)]
* **EgoForge**: "EgoForge: Goal-Directed Egocentric World Simulator", **`arXiv 2026.03`**.
[[Paper](https://arxiv.org/abs/2603.20169)]
* "Structured Latent Dynamics in Wireless CSI via Homomorphic World Models", **`IEEE ICC`**. [[Paper](https://arxiv.org/abs/2603.20048)]
* **WorldAgents**: "WorldAgents: Can Foundation Image Models be Agents for 3D World Models?", **`arXiv 2026.03`**. [[Paper](https://arxiv.org/abs/2603.19708)] [[Website](https://ziyaerkoc.com/worldagents/)]
* **R2-Dreamer**: "R2-Dreamer: Redundancy-Reduced World Models without Decoders or Augmentation", **`ICLR 2026`**. [[Paper](https://arxiv.org/abs/2603.18202)] [[Code](https://github.com/NM512/r2dreamer)]
* **StereoWorld**: "Stereo World Model: Camera-Guided Stereo Video Generation", **`arXiv 2026.03`**. [[Paper](https://arxiv.org/abs/2603.17375)] [[Website](https://sunyangtian.github.io/StereoWorld-web/)]
* **MosaicMem**: "MosaicMem: Hybrid Spatial Memory for Controllable Video World Models", **`arXiv 2026.03`**. [[Paper](https://arxiv.org/abs/2603.17117)] [[Website](https://mosaicmem.github.io/mosaicmem/)]
* **WorldCam**: "WorldCam: Interactive Autoregressive 3D Gaming Worlds with Camera Pose as a Unifying Geometric Representation", **`arXiv 2026.03`**. [[Paper](https://arxiv.org/abs/2603.16871)] [[Website](https://cvlab-kaist.github.io/WorldCam/)]
* **SWM**: "Grounding World Simulation Models in a Real-World Metropolis", **`arXiv 2026.03`**. [[Paper](https://arxiv.org/abs/2603.15583)] [[Website](https://seoul-world-model.github.io/)]
* **NavThinker**: "NavThinker: Action-Conditioned World Models for Coupled Prediction and Planning in Social Navigation", **`arXiv 2026.03`**.
[[Paper](https://arxiv.org/abs/2603.15359)] [[Website](https://hutslib.github.io/NavThinker)]
* **EyeWorld**: "EyeWorld: A Generative World Model of Ocular State and Dynamics", **`arXiv 2026.03`**. [[Paper](https://arxiv.org/abs/2603.14039)]
* **CtrlAttack**: "CtrlAttack: A Unified Attack on World-Model Control in Diffusion Models", **`arXiv 2026.03`**. [[Paper](https://arxiv.org/abs/2603.13435)]
* **SAW**: "SAW: Toward a Surgical Action World Model via Controllable and Scalable Video Generation", **`arXiv 2026.03`**. [[Paper](https://arxiv.org/abs/2603.13024)]
* **VGGT-World**: "VGGT-World: Transforming VGGT into an Autoregressive Geometry World Model", **`arXiv 2026.03`**. [[Paper](https://arxiv.org/abs/2603.12655)]
* **ARROW**: "ARROW: Augmented Replay for RObust World models", **`arXiv 2026.03`**. [[Paper](https://arxiv.org/abs/2603.11395)]
* **RAE-NWM**: "RAE-NWM: Navigation World Model in Dense Visual Representation Space", **`arXiv 2026.03`**. [[Paper](https://arxiv.org/abs/2603.09241)] [[Code](https://github.com/20robo/raenwm)]
* **SPIRAL**: "SPIRAL: A Closed-Loop Framework for Self-Improving Action World Models via Reflective Planning Agents", **`arXiv 2026.03`**. [[Paper](https://arxiv.org/abs/2603.08403)]
* **MWM**: "MWM: Mobile World Models for Action-Conditioned Consistent Prediction", **`arXiv 2026.03`**. [[Paper](https://arxiv.org/abs/2603.07799)] [[Website](https://aigeeksgroup.github.io/MWM)] [[Code](https://github.com/AIGeeksGroup/MWM)]
* **Brain-WM**: "Brain-WM: Brain Glioblastoma World Model", **`arXiv 2026.03`**.
[[Paper](https://arxiv.org/abs/2603.07562)] [[Code](https://github.com/thibault-wch/Brain-GBM-world-model)]
* **DreamSAC**: "DreamSAC: Learning Hamiltonian World Models via Symmetry Exploration", **`arXiv 2026.03`**. [[Paper](https://arxiv.org/abs/2603.07545)]
* **LiveWorld**: "LiveWorld: Simulating Out-of-Sight Dynamics in Generative Video World Models", **`arXiv 2026.03`**. [[Paper](https://arxiv.org/abs/2603.07145)] [[Website](https://zichengduan.github.io/LiveWorld/index.html)]
* "What if? Emulative Simulation with World Models for Situated Reasoning", **`arXiv 2026.03`**. [[Paper](https://arxiv.org/abs/2603.06445)]
* **WorldCache**: "WorldCache: Accelerating World Models for Free via Heterogeneous Token Caching", **`arXiv 2026.03`**. [[Paper](https://arxiv.org/abs/2603.06331)] [[Code](https://github.com/FofGofx/WorldCache)]
* "Planning in 8 Tokens: A Compact Discrete Tokenizer for Latent World Model", **`CVPR 2026`**. [[Paper](https://arxiv.org/abs/2603.05438)]
* "Beyond Pixel Histories: World Models with Persistent 3D State", **`arXiv 2026.03`**. [[Paper](https://arxiv.org/abs/2603.03482)] [[Website](https://francelico.github.io/persist.github.io)]
* "Contextual Latent World Models for Offline Meta Reinforcement Learning", **`arXiv 2026.03`**. [[Paper](https://arxiv.org/abs/2603.02935)]
* "Next Embedding Prediction Makes World Models Stronger", **`arXiv 2026.03`**. [[Paper](https://arxiv.org/abs/2603.02765)]
* **COMBAT**: "COMBAT: Conditional World Models for Behavioral Agent Training", **`arXiv 2026.03`**.
[[Paper](https://arxiv.org/abs/2603.00825)]
* **DreamWorld**: "DreamWorld: Unified World Modeling in Video Generation", **`arXiv 2026.03`**. [[Paper](https://arxiv.org/abs/2603.00466)] [[Code](https://github.com/ABU121111/DreamWorld)]
* **MetaOthello**: "MetaOthello: A Controlled Study of Multiple World Models in Transformers", **`arXiv 2026.02`**. [[Paper](https://arxiv.org/abs/2602.23164)]
* **GeoWorld**: "GeoWorld: Geometric World Models", **`CVPR 2026`**. [[Paper](https://arxiv.org/abs/2602.23058)] [[Website](https://steve-zeyu-zhang.github.io/GeoWorld)]
* **UCM**: "UCM: Unifying Camera Control and Memory with Time-aware Positional Encoding Warping for World Models", **`arXiv 2026.02`**. [[Paper](https://arxiv.org/abs/2602.22960)] [[Website](https://humanaigc.github.io/ucm-webpage/)]
* "Code World Models for Parameter Control in Evolutionary Algorithms", **`arXiv 2026.02`**. [[Paper](https://arxiv.org/abs/2602.22260)]
* **Solaris**: "Solaris: Building a Multiplayer Video World Model in Minecraft", **`arXiv 2026.02`**. [[Paper](https://arxiv.org/abs/2602.22208)] [[Website](https://solaris-wm.github.io/)]
* "MRI Contrast Enhancement Kinetics World Model", **`CVPR 2026`**. [[Paper](https://arxiv.org/abs/2602.19285)]
* "Neural Fields as World Models", **`arXiv 2026.02`**. [[Paper](https://arxiv.org/abs/2602.18690)]
* "Learning Invariant Visual Representations for Planning with Joint-Embedding Predictive World Models", **`arXiv 2026.02`**. [[Paper](https://arxiv.org/abs/2602.18639)]
* **Generated Reality**: "Generated Reality: Human-centric World Simulation using Interactive Video Generation with Hand and Camera Control", **`arXiv 2026.02`**.
[[Paper](https://arxiv.org/abs/2602.18422)]
* **VLM-DEWM**: "VLM-DEWM: Dynamic External World Model for Verifiable and Resilient Vision-Language Planning in Manufacturing", **`arXiv 2026.02`**. [[Paper](https://arxiv.org/abs/2602.15549)]
* "World-Model-Augmented Web Agents with Action Correction", **`arXiv 2026.02`**. [[Paper](https://arxiv.org/abs/2602.15384)]
* "Cold-Start Personalization via Training-Free Priors from Structured World Models", **`arXiv 2026.02`**. [[Paper](https://arxiv.org/abs/2602.15012)] [[Code](https://github.com/Avinandan22/PEP)]
* "World Models for Policy Refinement in StarCraft II", **`arXiv 2026.02`**. [[Paper](https://arxiv.org/abs/2602.14857)]
* **WebWorld**: "WebWorld: A Large-Scale World Model for Web Agent Training", **`arXiv 2026.02`**. [[Paper](https://arxiv.org/abs/2602.14721)]
* **WIMLE**: "WIMLE: Uncertainty-Aware World Models with IMLE for Sample-Efficient Continuous Control", **`ICLR 2026`**. [[Paper](https://arxiv.org/abs/2602.14351)]
* **Causal-JEPA**: "Causal-JEPA: Learning World Models through Object-Level Latent Interventions", **`arXiv 2026.02`**. [[Paper](https://arxiv.org/abs/2602.11389)] [[Website](https://hazel-heejeong-nam.github.io/cjepa/)] [[Code](https://github.com/galilai-group/cjepa)]
* **Olaf-World**: "Olaf-World: Orienting Latent Actions for Video World Modeling", **`arXiv 2026.02`**. [[Paper](https://arxiv.org/abs/2602.10104)] [[Website](https://showlab.github.io/Olaf-World/)] [[Code](https://github.com/showlab/Olaf-World)]
* **Agent World Model**: "Agent World Model: Infinity Synthetic Environments for Agentic Reinforcement Learning", **`arXiv 2026.02`**.
[[Paper](https://arxiv.org/abs/2602.10090)] [[Code](https://github.com/Snowflake-Labs/agent-world-model)]
* **WorldCompass**: "WorldCompass: Reinforcement Learning for Long-Horizon World Models", **`arXiv 2026.02`**. [[Paper](https://arxiv.org/abs/2602.09022)] [[Website](https://3d-models.hunyuan.tencent.com/world/)]
* "Horizon Imagination: Efficient On-Policy Training in Diffusion World Models", **`ICLR 2026`**. [[Paper](https://arxiv.org/abs/2602.08032)] [[Code](https://github.com/leor-c/horizon-imagination)]
* "Geometry-Aware Rotary Position Embedding for Consistent Video World Model", **`arXiv 2026.02`**. [[Paper](https://arxiv.org/abs/2602.07854)]
* "Debugging code world models", **`arXiv 2026.02`**. [[Paper](https://arxiv.org/abs/2602.07672)]
* "Cross-View World Models", **`arXiv 2026.02`**. [[Paper](https://arxiv.org/abs/2602.07277)]
* "Interpreting Physics in Video World Models", **`arXiv 2026.02`**. [[Paper](https://arxiv.org/abs/2602.07050)]
* "Neural Sabermetrics with World Model: Play-by-play Predictive Modeling with Large Language Model", **`arXiv 2026.02`**. [[Paper](https://arxiv.org/abs/2602.07030)]
* "From Kepler to Newton: Inductive Biases Guide Learned World Models in Transformers", **`arXiv 2026.02`**. [[Paper](https://arxiv.org/abs/2602.06923)]
* "Self-Improving World Modelling with Latent Actions", **`arXiv 2026.02`**. [[Paper](https://arxiv.org/abs/2602.06130)]
* "Reinforcement World Model Learning for LLM-based Agents", **`arXiv 2026.02`**. [[Paper](https://arxiv.org/abs/2602.05842)]
* **LIVE**: "LIVE: Long-horizon Interactive Video World Modeling", **`arXiv 2026.02`**.
[[Paper](https://arxiv.org/abs/2602.03747)] [[Website](https://junchao-cs.github.io/LIVE-demo/)]
* **EHRWorld**: "EHRWorld: A Patient-Centric Medical World Model for Long-Horizon Clinical Trajectories", **`arXiv 2026.02`**. [[Paper](https://arxiv.org/abs/2602.03569)]
* "Joint Learning of Hierarchical Neural Options and Abstract World Model", **`arXiv 2026.02`**. [[Paper](https://arxiv.org/abs/2602.02799)]
* "Test-Time Mixture of World Models for Embodied Agents in Dynamic Environments", **`ICLR 2026`**. [[Paper](https://arxiv.org/abs/2601.22647)] [[Code](https://github.com/doldam0/tmow)]
* "The Patient is not a Moving Document: A World Model Training Paradigm for Longitudinal EHR", **`arXiv 2026.01`**. [[Paper](https://arxiv.org/abs/2601.22128)]
* **PathWise**: "PathWise: Planning through World Model for Automated Heuristic Design via Self-Evolving LLMs", **`arXiv 2026.01`**. [[Paper](https://arxiv.org/abs/2601.20539)]
* "From Observations to Events: Event-Aware World Model for Reinforcement Learning", **`arXiv 2026.01`**. [[Paper](https://arxiv.org/abs/2601.19336)]
* **NuiWorld**: "NuiWorld: Exploring a Scalable Framework for End-to-End Controllable World Generation", **`arXiv 2026.01`**. [[Paper](https://arxiv.org/abs/2601.19048)]
* "“Just in Time” World Modeling Supports Human Planning and Reasoning", **`arXiv 2026.01`**. [[Paper](https://arxiv.org/abs/2601.14514)]
* **Action Shapley**: "Action Shapley: A Training Data Selection Metric for World Model in Reinforcement Learning", **`arXiv 2026.01`**. [[Paper](https://arxiv.org/abs/2601.10905)]
* "Inference-time Physics Alignment of Video Generative Models with Latent World Models", **`CVPR 2026`**.
[[Paper](https://arxiv.org/abs/2601.10553)] [[Code](https://github.com/facebookresearch/WMReward)]
* **Imagine-then-Plan**: "Imagine-then-Plan: Agent Learning from Adaptive Lookahead with World Models", **`arXiv 2026.01`**. [[Paper](https://arxiv.org/abs/2601.08955)]
* **Puzzle it Out**: "Puzzle it Out: Local-to-Global World Model for Offline Multi-Agent Reinforcement Learning", **`arXiv 2026.01`**. [[Paper](https://arxiv.org/abs/2601.07463)]
* "Object-Centric World Models Meet Monte Carlo Tree Search", **`arXiv 2026.01`**. [[Paper](https://arxiv.org/abs/2601.06604)]
* "Learning Latent Action World Models In The Wild", **`arXiv 2026.01`**. [[Paper](https://arxiv.org/abs/2601.05230)]
* **VerseCrafter**: "VerseCrafter: Dynamic Realistic Video World Model with 4D Geometric Control", **`arXiv 2026.01`**. [[Paper](https://arxiv.org/abs/2601.05138)] [[Website](https://sixiaozheng.github.io/VerseCrafter_page/)]
* "Choreographing a World of Dynamic Objects", **`arXiv 2026.01`**. [[Paper](https://arxiv.org/abs/2601.04194)] [[Website](https://yanzhelyu.github.io/chord)]
* **MobileDreamer**: "MobileDreamer: Generative Sketch World Model for GUI Agent", **`arXiv 2026.01`**. [[Paper](https://arxiv.org/abs/2601.04035)]
* "Current Agents Fail to Leverage World Model as Tool for Foresight", **`arXiv 2026.01`**. [[Paper](https://arxiv.org/abs/2601.03905)]
* "Flow Equivariant World Models: Memory for Partially Observed Dynamic Environments", **`arXiv 2026.01`**. [[Paper](https://arxiv.org/abs/2601.01075)] [[Website](https://flowequivariantworldmodels.github.io)]
* "Value-guided action planning with JEPA world models", **`ICLR 2026 World Modeling Workshop`**.
[[Paper](https://arxiv.org/abs/2601.00844)]
* **NeoVerse**: "NeoVerse: Enhancing 4D World Model with in-the-wild Monocular Videos", **`arXiv 2026.01`**. [[Paper](https://arxiv.org/abs/2601.00393)] [[Website](https://neoverse-4d.github.io)]
* **TeleWorld**: "TeleWorld: Towards Dynamic Multimodal Synthesis with a 4D World Model", **`arXiv 2026.01`**. [[Paper](https://arxiv.org/abs/2601.00051)]
* "World model inspired sarcasm reasoning with large language model agents", **`arXiv 2025.12`**. [[Paper](https://arxiv.org/abs/2512.24329)]
* **LEWM**: "Large Emotional World Model", **`arXiv 2025.12`**. [[Paper](https://arxiv.org/abs/2512.24149)]
* **WWM**: "Web World Models", **`arXiv 2025.12`**. [[Paper](https://arxiv.org/abs/2512.23676)] [[Code](https://github.com/Princeton-AI2-Lab/Web-World-Models)]
* **SurgWorld**: "SurgWorld: Learning Surgical Robot Policies from Videos via World Modeling", **`arXiv 2025.12`**. [[Paper](https://arxiv.org/abs/2512.23162)]
* **Agent2World**: "Agent2World: Learning to Generate Symbolic World Models via Adaptive Multi-Agent Feedback", **`arXiv 2025.12`**. [[Paper](https://arxiv.org/abs/2512.22336)] [[Website](https://agent2world.github.io)]
* **Yume-1.5**: "Yume-1.5: A Text-Controlled Interactive World Generation Model", **`arXiv 2025.12`**. [[Paper](https://arxiv.org/abs/2512.22096)] [[Website](https://stdstu12.github.io/YUME-Project)] [[Code](https://github.com/stdstu12/YUME)]
* "Aerial World Model for Long-horizon Visual Generation and Navigation in 3D Space", **`arXiv 2025.12`**.
[[Paper](https://arxiv.org/abs/2512.21887)]
* "From Word to World: Can Large Language Models be Implicit Text-based World Models?", **`arXiv 2025.12`**. [[Paper](https://arxiv.org/abs/2512.18832)]
* "Dexterous World Models", **`arXiv 2025.12`**. [[Paper](https://arxiv.org/abs/2512.17907)] [[Website](https://snuvclab.github.io/dwm)]
* **PhysFire-WM**: "PhysFire-WM: A Physics-Informed World Model for Emulating Fire Spread Dynamics", **`arXiv 2025.12`**. [[Paper](https://arxiv.org/abs/2512.17152)]
* **WorldPlay**: "WorldPlay: Towards Long-Term Geometric Consistency for Real-Time Interactive World Modeling", **`arXiv 2025.12`**. [[Paper](https://arxiv.org/abs/2512.14614)] [[Website](https://3d-models.hunyuan.tencent.com/world/)]
* "The Double Life of Code World Models: Provably Unmasking Malicious Behavior Through Execution Traces", **`arXiv 2025.12`**. [[Paper](https://arxiv.org/abs/2512.13821)]
* **LongVie 2**: "LongVie 2: Multimodal Controllable Ultra-Long Video World Model", **`arXiv 2025.12`**. [[Paper](https://arxiv.org/abs/2512.13604)] [[Website](https://vchitect.github.io/LongVie2-project/)]
* **VFMF**: "VFMF: World Modeling by Forecasting Vision Foundation Model Features", **`arXiv 2025.12`**. [[Paper](https://arxiv.org/abs/2512.11225)] [[Website](https://vfmf.gabrijel-boduljak.com/)] [[Code](https://github.com/gboduljak/vfmf)]
* **VDAWorld**: "VDAWorld: World Modelling via VLM-Directed Abstraction and Simulation", **`arXiv 2025.12`**. [[Paper](https://arxiv.org/abs/2512.11061)] [[Website](https://felixomahony.github.io/vdaworld/)]
* "Closing the Train-Test Gap in World Models for Gradient-Based Planning", **`arXiv 2025.12`**.
[[Paper](https://arxiv.org/abs/2512.09929)]
* **WonderZoom**: "WonderZoom: Multi-Scale 3D World Generation", **`arXiv 2025.12`**. [[Paper](https://arxiv.org/abs/2512.09164)] [[Website](https://wonderzoom.github.io/)]
* **Astra**: "Astra: General Interactive World Model with Autoregressive Denoising", **`arXiv 2025.12`**. [[Paper](https://arxiv.org/abs/2512.08931)] [[Website](https://eternalevan.github.io/Astra-project/)] [[Code](https://github.com/EternalEvan/Astra)]
* **Visionary**: "Visionary: The World Model Carrier Built on WebGPU-Powered Gaussian Splatting Platform", **`arXiv 2025.12`**. [[Paper](https://arxiv.org/abs/2512.08478)] [[Website](https://visionary-laboratory.github.io/visionary)]
* **CLARITY**: "CLARITY: Medical World Model for Guiding Treatment Decisions by Modeling Context-Aware Disease Trajectories in Latent Space", **`arXiv 2025.12`**. [[Paper](https://arxiv.org/abs/2512.08029)]
* **UnityVideo**: "UnityVideo: Unified Multi-Modal Multi-Task Learning for Enhancing World-Aware Video Generation", **`arXiv 2025.12`**. [[Paper](https://arxiv.org/abs/2512.07831)] [[Website](https://jackailab.github.io/Projects/UnityVideo)] [[Code](https://github.com/dvlab-research/UnityVideo)]
* "Speech World Model: Causal State-Action Planning with Explicit Reasoning for Speech", **`arXiv 2025.12`**. [[Paper](https://arxiv.org/abs/2512.05933)]
* "Probing the effectiveness of World Models for Spatial Reasoning through Test-time Scaling", **`arXiv 2025.12`**.
[[Paper](https://arxiv.org/abs/2512.05809)] [[Code](https://github.com/chandar-lab/visa-for-mindjourney)]
* **ProPhy**: "ProPhy: Progressive Physical Alignment for Dynamic World Simulation", **`arXiv 2025.12`**. [[Paper](https://arxiv.org/abs/2512.05564)]
* **BiTAgent**: "BiTAgent: A Task-Aware Modular Framework for Bidirectional Coupling between Multimodal Large Language Models and World Models", **`arXiv 2025.12`**. [[Paper](https://arxiv.org/abs/2512.04513)]
* **RELIC**: "RELIC: Interactive Video World Model with Long-Horizon Memory", **`arXiv 2025.12`**. [[Paper](https://arxiv.org/abs/2512.04040)] [[Website](https://relic-worldmodel.github.io/)]
* "Better World Models Can Lead to Better Post-Training Performance", **`arXiv 2025.12`**. [[Paper](https://arxiv.org/abs/2512.03400)]
* **SeeU**: "SeeU: Seeing the Unseen World via 4D Dynamics-aware Generation", **`arXiv 2025.12`**. [[Paper](https://arxiv.org/abs/2512.03350)] [[Website](https://yuyuanspace.com/SeeU/)]
* **DynamicVerse**: "DynamicVerse: A Physically-Aware Multimodal Framework for 4D World Modeling", **`arXiv 2025.12`**. [[Paper](https://arxiv.org/abs/2512.03000)]
* **IC-World**: "IC-World: In-Context Generation for Shared World Modeling", **`arXiv 2025.12`**. [[Paper](https://arxiv.org/abs/2512.02793)] [[Code](https://github.com/wufan-cse/IC-World)]
* **WorldPack**: "WorldPack: Compressed Memory Improves Spatial Consistency in Video World Modeling", **`arXiv 2025.12`**. [[Paper](https://arxiv.org/abs/2512.02473)]
* **GrndCtrl**: "GrndCtrl: Grounding World Models via Self-Supervised Reward Alignment", **`arXiv 2025.12`**.
[[Paper](https://arxiv.org/abs/2512.01952)]
* **ChronosObserver**: "ChronosObserver: Taming 4D World with Hyperspace Diffusion Sampling", **`arXiv 2025.12`**. [[Paper](https://arxiv.org/abs/2512.01481)]
* **AVWM**: "Audio-Visual World Models: Towards Multisensory Imagination in Sight and Sound", **`arXiv 2025.12`**. [[Paper](https://arxiv.org/abs/2512.00883)]
* **VCWorld**: "VCWorld: A Biological World Model for Virtual Cell Simulation", **`arXiv 2025.12`**. [[Paper](https://arxiv.org/abs/2512.00306)] [[Code](https://github.com/GENTEL-lab/VCWorld)]
* **VISTAv2**: "VISTAv2: World Imagination for Indoor Vision-and-Language Navigation", **`arXiv 2025.12`**. [[Paper](https://arxiv.org/abs/2512.00041)] [[Website](https://taco-group.github.io/)]
* **Captain Safari**: "Captain Safari: A World Engine", **`arXiv 2025.11`**. [[Paper](https://arxiv.org/abs/2511.22815)] [[Website](https://johnson111788.github.io/open-safari/)]
* **WorldWander**: "WorldWander: Bridging Egocentric and Exocentric Worlds in Video Generation", **`arXiv 2025.11`**. [[Paper](https://arxiv.org/abs/2511.22098)] [[Code](https://github.com/showlab/WorldWander)]
* **Inferix**: "Inferix: A Block-Diffusion based Next-Generation Inference Engine for World Simulation", **`arXiv 2025.11`**. [[Paper](https://arxiv.org/abs/2511.20714)] [[Code](https://github.com/alibaba-damo-academy/Inferix)]
* **MagicWorld**: "MagicWorld: Towards Long-Horizon Stability for Interactive Video World Exploration", **`arXiv 2025.11`**.
[[Paper](https://arxiv.org/html/2511.18886v2)] [[Website](https://vivocameraresearch.github.io/magicworld/)] [[Code](https://github.com/vivoCameraResearch/Magic-World)]
* "Counterfactual World Models via Digital Twin-conditioned Video Diffusion", **`arXiv 2025.11`**. [[Paper](https://arxiv.org/abs/2511.17481)]
* **WorldGen**: "WorldGen: From Text to Traversable and Interactive 3D Worlds", **`arXiv 2025.11`**. [[Paper](https://arxiv.org/abs/2511.16825)] [[Website](https://www.meta.com/blog/worldgen-3d-world-generation-reality-labs-generative-ai-research/)]
* **X-WIN**: "X-WIN: Building Chest Radiograph World Model via Predictive Sensing", **`arXiv 2025.11`**. [[Paper](https://arxiv.org/abs/2511.14918)]
* "Object-Centric World Models for Causality-Aware Reinforcement Learning", **`AAAI 2026`**. [[Paper](https://arxiv.org/abs/2511.14262)]
* "Latent-Space Autoregressive World Model for Efficient and Robust Image-Goal Navigation", **`arXiv 2025.11`**. [[Paper](https://arxiv.org/abs/2511.11011)]
* **Dynamic Sparsity**: "Dynamic Sparsity: Challenging Common Sparsity Assumptions for Learning World Models in Robotic Reinforcement Learning Benchmarks", **`AAAI 2026`**. [[Paper](https://arxiv.org/abs/2511.08086)]
* **MrCoM**: "MrCoM: A Meta-Regularized World-Model Generalizing Across Multi-Scenarios", **`AAAI 2026`**. [[Paper](https://arxiv.org/abs/2511.06252)]
* "Next-Latent Prediction Transformers Learn Compact World Models", **`arXiv 2025.11`**. [[Paper](https://arxiv.org/abs/2511.05963)]
* **DR. WELL**: "DR.
WELL: Dynamic Reasoning and Learning with Symbolic World Model for Embodied LLM-Based Multi-Agent Collaboration", **`NeurIPS 2025 Workshop: Bridging Language, Agent, and World Models for Reasoning and Planning (LAW)`**. [[Paper](https://arxiv.org/abs/2511.04646)] [[Website](https://narjesno.github.io/DR.WELL/)]
* "How Far Are Surgeons from Surgical World Models? A Pilot Study on Zero-shot Surgical Video Generation with Expert Assessment", **`arXiv 2025.11`**. [[Paper](https://arxiv.org/abs/2511.01775)]
* "From Pixels to Cooperation: Multi-Agent Reinforcement Learning based on Multimodal World Models", **`arXiv 2025.11`**. [[Paper](https://arxiv.org/abs/2511.01310)]
* "Bootstrap Off-policy with World Model", **`NeurIPS 2025`**. [[Paper](https://arxiv.org/abs/2511.00423)]
* "Clone Deterministic 3D Worlds with Geometrically-Regularized World Models", **`arXiv 2025.10`**. [[Paper](https://arxiv.org/abs/2510.26782)]
* "Semantic Communications with World Models", **`arXiv 2025.10`**. [[Paper](https://arxiv.org/abs/2510.24785)]
* **TRELLISWorld**: "TRELLISWorld: Training-Free World Generation from Object Generators", **`arXiv 2025.10`**. [[Paper](https://arxiv.org/abs/2510.23880)]
* **WorldGrow**: "WorldGrow: Generating Infinite 3D World", **`arXiv 2025.10`**. [[Paper](https://arxiv.org/abs/2510.21682)] [[Code](https://github.com/world-grow/WorldGrow)]
* **PhysWorld**: "PhysWorld: From Real Videos to World Models of Deformable Objects via Physics-Aware Demonstration Synthesis", **`arXiv 2025.10`**. [[Paper](https://arxiv.org/abs/2510.21447)]
* "How Hard is it to Confuse a World Model?", **`arXiv 2025.10`**.
[[Paper](https://arxiv.org/abs/2510.21232)]
* "Social World Model-Augmented Mechanism Design Policy Learning", **`NeurIPS 2025`**. [[Paper](https://arxiv.org/abs/2510.19270)]
* **VAGEN**: "VAGEN: Reinforcing World Model Reasoning for Multi-Turn VLM Agents", **`NeurIPS 2025`**. [[Paper](https://arxiv.org/abs/2510.16907)] [[Website](http://mll.lab.northwestern.edu/VAGEN/)]
* **Cosmos-Surg-dVRK**: "Cosmos-Surg-dVRK: World Foundation Model-based Automated Online Evaluation of Surgical Robot Policy Learning", **`arXiv 2025.10`**. [[Paper](https://arxiv.org/abs/2510.16240)]
* "Zero-shot World Models via Search in Memory", **`arXiv 2025.10`**. [[Paper](https://arxiv.org/abs/2510.16123)]
* "Vector Quantization in the Brain: Grid-like Codes in World Models", **`NeurIPS 2025`**. [[Paper](https://arxiv.org/abs/2510.16039)]
* **Terra**: "Terra: Explorable Native 3D World Model with Point Latents", **`arXiv 2025.10`**. [[Paper](https://arxiv.org/abs/2510.14977)] [[Website](https://huang-yh.github.io/terra/)]
* **Deep SPI**: "Deep SPI: Safe Policy Improvement via World Models", **`arXiv 2025.10`**. [[Paper](https://arxiv.org/abs/2510.12312)]
* "One Life to Learn: Inferring Symbolic World Models for Stochastic Environments from Unguided Exploration", **`arXiv 2025.10`**. [[Paper](https://arxiv.org/abs/2510.12088)] [[Website](https://onelife-worldmodel.github.io/)]
* **R-WoM**: "R-WoM: Retrieval-augmented World Model For Computer-use Agents", **`arXiv 2025.10`**. [[Paper](https://arxiv.org/abs/2510.11892)]
* **WorldMirror**: "WorldMirror: Universal 3D World Reconstruction with Any-Prior Prompting", **`arXiv 2025.10`**.
[[Paper](https://arxiv.org/abs/2510.10726)]
* **Unified World Models**: "Unified World Models: Memory-Augmented Planning and Foresight for Visual Navigation", **`arXiv 2025.10`**. [[Paper](https://arxiv.org/abs/2510.08713)] [[Code](https://github.com/F1y1113/UniWM)]
* "Code World Models for General Game Playing", **`arXiv 2025.10`**. [[Paper](https://arxiv.org/abs/2510.04542)]
* **MorphoSim**: "MorphoSim: An Interactive, Controllable, and Editable Language-guided 4D World Simulator", **`arXiv 2025.10`**. [[Paper](https://arxiv.org/abs/2510.04390)] [[Code](https://github.com/eric-ai-lab/Morph4D)]
* **ChronoEdit**: "ChronoEdit: Towards Temporal Reasoning for Image Editing and World Simulation", **`arXiv 2025.10`**. [[Paper](https://arxiv.org/abs/2510.04290)] [[Website](https://research.nvidia.com/labs/toronto-ai/chronoedit)]
* **SFP**: "Spatiotemporal Forecasting as Planning: A Model-Based Reinforcement Learning Approach with Generative World Models", **`arXiv 2025.10`**. [[Paper](https://arxiv.org/abs/2510.04020)]
* **EvoWorld**: "EvoWorld: Evolving Panoramic World Generation with Explicit 3D Memory", **`arXiv 2025.10`**. [[Paper](https://arxiv.org/abs/2510.01183)] [[Code](https://github.com/JiahaoPlus/EvoWorld)]
* "World Model for AI Autonomous Navigation in Mechanical Thrombectomy", **`MICCAI 2025 (LNCS)`**. [[Paper](https://arxiv.org/abs/2509.25518)]
* **DyMoDreamer**: "DyMoDreamer: World Modeling with Dynamic Modulation", **`NeurIPS 2025`**. [[Paper](https://arxiv.org/abs/2509.24804)] [[Code](https://github.com/Ultraman-Tiga1/DyMoDreamer)]
* **Dreamer4**: "Training Agents Inside of Scalable World Models", **`arXiv 2025.09`**. [[Paper](https://arxiv.org/abs/2509.24527)] [[Website](https://danijar.com/dreamer4/)]
* "Reinforcement Learning with Inverse Rewards for World Model Post-training", **`arXiv 2025.09`**. [[Paper](https://arxiv.org/abs/2509.23958)]
* "Context and Diversity Matter: The Emergence of In-Context Learning in World Models", **`arXiv 2025.09`**. [[Paper](https://arxiv.org/abs/2509.22353)]
* **FantasyWorld**: "FantasyWorld: Geometry-Consistent World Modeling via Unified Video and 3D Prediction", **`arXiv 2025.09`**. [[Paper](https://arxiv.org/abs/2509.21657)]
* "Remote Sensing-Oriented World Model", **`arXiv 2025.09`**. [[Paper](https://arxiv.org/abs/2509.17808)]
* "World Modeling with Probabilistic Structure Integration", **`arXiv 2025.09`**. [[Paper](https://arxiv.org/abs/2509.09737)]
* "One Model for All Tasks: Leveraging Efficient World Models in Multi-Task Planning", **`arXiv 2025.09`**. [[Paper](https://arxiv.org/abs/2509.07945)] [[Code](https://github.com/opendilab/LightZero)]
* **LatticeWorld**: "LatticeWorld: A Multimodal Large Language Model-Empowered Framework for Interactive Complex World Generation", **`arXiv 2025.09`**. [[Paper](https://arxiv.org/abs/2509.05263)]
* "Planning with Reasoning using Vision Language World Model", **`arXiv 2025.09`**. [[Paper](https://arxiv.org/abs/2509.02722)]
* "Social World Models", **`arXiv 2025.09`**.
[[Paper](https://arxiv.org/abs/2509.00559)]
* "Dynamics-Aligned Latent Imagination in Contextual World Models for Zero-Shot Generalization", **`arXiv 2025.08`**. [[Paper](https://arxiv.org/abs/2508.20294)]
* **HERO**: "HERO: Hierarchical Extrapolation and Refresh for Efficient World Models", **`arXiv 2025.08`**. [[Paper](https://arxiv.org/abs/2508.17588)]
* "Scalable RF Simulation in Generative 4D Worlds", **`arXiv 2025.08`**. [[Paper](https://arxiv.org/abs/2508.12176)]
* "Finite Automata Extraction: Low-data World Model Learning as Programs from Gameplay Video", **`arXiv 2025.08`**. [[Paper](https://arxiv.org/abs/2508.11836)]
* "Visuomotor Grasping with World Models for Surgical Robots", **`arXiv 2025.08`**. [[Paper](https://arxiv.org/abs/2508.11200)]
* "In-Context Reinforcement Learning via Communicative World Models", **`arXiv 2025.08`**. [[Paper](https://arxiv.org/abs/2508.06659)] [[Code](https://github.com/fernando-ml/CORAL)]
* **PIGDreamer**: "PIGDreamer: Privileged Information Guided World Models for Safe Partially Observable Reinforcement Learning", **`ICML 2025`**. [[Paper](https://arxiv.org/abs/2508.02159)]
* **SimuRA**: "SimuRA: Towards General Goal-Oriented Agent via Simulative Reasoning Architecture with LLM-Based World Model", **`arXiv 2025.07`**. [[Paper](https://arxiv.org/abs/2507.23773)]
* "Back to the Features: DINO as a Foundation for Video World Models", **`arXiv 2025.07`**. [[Paper](https://arxiv.org/abs/2507.19468)]
* **Yume**: "Yume: An Interactive World Generation Model", **`arXiv 2025.07`**. [[Paper](https://arxiv.org/abs/2507.17744)] [[Website](https://stdstu12.github.io/YUME-Project/)] [[Code](https://github.com/stdstu12/YUME)]
* "LLM world models are mental: Output layer evidence of brittle world model use in LLM mechanical reasoning", **`arXiv 2025.07`**. [[Paper](https://arxiv.org/abs/2507.15521)]
* "Safety Certification in the Latent space using Control Barrier Functions and World Models", **`arXiv 2025.07`**. [[Paper](https://arxiv.org/abs/2507.13871)]
* "Assessing adaptive world models in machines with novel games", **`arXiv 2025.07`**. [[Paper](https://arxiv.org/abs/2507.12821)]
* "Graph World Model", **`ICML 2025`**. [[Paper](https://arxiv.org/abs/2507.10539)] [[Code](https://github.com/ulab-uiuc/GWM)]
* **MobiWorld**: "MobiWorld: World Models for Mobile Wireless Network", **`arXiv 2025.07`**. [[Paper](https://arxiv.org/abs/2507.09462)]
* "Continual Reinforcement Learning by Planning with Online World Models", **`ICML 2025 Spotlight`**. [[Paper](https://arxiv.org/abs/2507.09177)]
* **AirScape**: "AirScape: An Aerial Generative World Model with Motion Controllability", **`arXiv 2025.07`**. [[Paper](https://arxiv.org/abs/2507.08885)] [[Website](https://embodiedcity.github.io/AirScape/)]
* **Geometry Forcing**: "Geometry Forcing: Marrying Video Diffusion and 3D Representation for Consistent World Modeling", **`arXiv 2025.07`**. [[Paper](https://arxiv.org/abs/2507.07982)] [[Website](https://GeometryForcing.github.io)]
* **Martian World Models**: "Martian World Models: Controllable Video Synthesis with Physically Accurate 3D Reconstructions", **`arXiv 2025.07`**.
[[Paper](https://arxiv.org/abs/2507.07978)] [[Website](https://marsgenai.github.io)]
* "What Has a Foundation Model Found? Using Inductive Bias to Probe for World Models", **`ICML 2025`**. [[Paper](https://arxiv.org/abs/2507.06952)]
* "Critiques of World Models", **`arXiv 2025.07`**. [[Paper](https://arxiv.org/abs/2507.05169)]
* "When do World Models Successfully Learn Dynamical Systems?", **`arXiv 2025.07`**. [[Paper](https://arxiv.org/abs/2507.04898)]
* **WebSynthesis**: "WebSynthesis: World-Model-Guided MCTS for Efficient WebUI-Trajectory Synthesis", **`arXiv 2025.07`**. [[Paper](https://arxiv.org/abs/2507.04370)]
* "Accurate and Efficient World Modeling with Masked Latent Transformers", **`arXiv 2025.07`**. [[Paper](https://arxiv.org/abs/2507.04075)]
* **Dyn-O**: "Dyn-O: Building Structured World Models with Object-Centric Representations", **`arXiv 2025.07`**. [[Paper](https://arxiv.org/abs/2507.03298)]
* **NavMorph**: "NavMorph: A Self-Evolving World Model for Vision-and-Language Navigation in Continuous Environments", **`ICCV 2025`**. [[Paper](https://arxiv.org/abs/2506.19055)] [[Code](https://github.com/Feliciaxyao/NavMorph)]
* "A “Good” Regulator May Provide a World Model for Intelligent Systems", **`arXiv 2025.06`**. [[Paper](https://arxiv.org/abs/2506.23032)]
* **Xray2Xray**: "Xray2Xray: World Model from Chest X-rays with Volumetric Context", **`arXiv 2025.06`**. [[Paper](https://arxiv.org/abs/2506.19055)]
* **MATWM**: "Transformer World Model for Sample Efficient Multi-Agent Reinforcement Learning", **`arXiv 2025.06`**. [[Paper](https://arxiv.org/abs/2506.18537)]
* "Measuring (a Sufficient) World Model in LLMs: A Variance Decomposition Framework", **`arXiv 2025.06`**. [[Paper](https://arxiv.org/abs/2506.16584)]
* "Efficient Generation of Diverse Cooperative Agents with World Models", **`arXiv 2025.06`**. [[Paper](https://arxiv.org/abs/2506.07450)]
* **WorldLLM**: "WorldLLM: Improving LLMs' world modeling using curiosity-driven theory-making", **`arXiv 2025.06`**. [[Paper](https://arxiv.org/abs/2506.06725)]
* "LLMs as World Models: Data-Driven and Human-Centered Pre-Event Simulation for Disaster Impact Assessment", **`arXiv 2025.06`**. [[Paper](https://arxiv.org/abs/2506.06355)]
* "Bootstrapping World Models from Dynamics Models in Multimodal Foundation Models", **`arXiv 2025.06`**. [[Paper](https://arxiv.org/abs/2506.06006)]
* "Video World Models with Long-term Spatial Memory", **`arXiv 2025.06`**. [[Paper](https://arxiv.org/abs/2506.05284)] [[Website](https://spmem.github.io/)]
* **DSG-World**: "DSG-World: Learning a 3D Gaussian World Model from Dual State Videos", **`arXiv 2025.06`**. [[Paper](https://arxiv.org/abs/2506.05217)]
* "Safe Planning and Policy Optimization via World Model Learning", **`arXiv 2025.06`**. [[Paper](https://arxiv.org/abs/2506.04828)]
* **FOLIAGE**: "FOLIAGE: Towards Physical Intelligence World Models Via Unbounded Surface Evolution", **`arXiv 2025.06`**. [[Paper](https://arxiv.org/abs/2506.03173)]
* "Linear Spatial World Models Emerge in Large Language Models", **`arXiv 2025.06`**.
[[Paper](https://arxiv.org/abs/2506.02996)] [[Code](https://github.com/matthieu-perso/spatial_world_models)]
* **Simple, Good, Fast**: "Simple, Good, Fast: Self-Supervised World Models Free of Baggage", **`ICLR 2025`**. [[Paper](https://arxiv.org/abs/2506.02612)] [[Code](https://github.com/jrobine/sgf)]
* **Medical World Model**: "Medical World Model: Generative Simulation of Tumor Evolution for Treatment Planning", **`arXiv 2025.06`**. [[Paper](https://arxiv.org/abs/2506.02327)]
* "General agents need world models", **`ICML 2025`**. [[Paper](https://arxiv.org/abs/2506.01622)]
* "Learning Abstract World Models with a Group-Structured Latent Space", **`arXiv 2025.06`**. [[Paper](https://arxiv.org/abs/2506.01529)]
* **DeepVerse**: "DeepVerse: 4D Autoregressive Video Generation as a World Model", **`arXiv 2025.06`**. [[Paper](https://arxiv.org/abs/2506.01103)]
* "World Models for Cognitive Agents: Transforming Edge Intelligence in Future Networks", **`arXiv 2025.06`**. [[Paper](https://arxiv.org/abs/2506.00417)]
* **Dyna-Think**: "Dyna-Think: Synergizing Reasoning, Acting, and World Model Simulation in AI Agents", **`arXiv 2025.06`**. [[Paper](https://arxiv.org/abs/2506.00320)]
* **StateSpaceDiffuser**: "StateSpaceDiffuser: Bringing Long Context to Diffusion World Models", **`arXiv 2025.05`**. [[Paper](https://arxiv.org/abs/2505.22246)]
* "Learning World Models for Interactive Video Generation", **`arXiv 2025.05`**. [[Paper](https://arxiv.org/abs/2505.21996)]
* "Revisiting Multi-Agent World Modeling from a Diffusion-Inspired Perspective", **`arXiv 2025.05`**. [[Paper](https://arxiv.org/abs/2505.21906)]
* "Long-Context State-Space Video World Models", **`arXiv 2025.05`**. [[Paper](https://arxiv.org/abs/2505.20171)] [[Website](https://ryanpo.com/ssm_wm)]
* "Unlocking Smarter Device Control: Foresighted Planning with a World Model-Driven Code Execution Approach", **`arXiv 2025.05`**. [[Paper](https://arxiv.org/abs/2505.16422)]
* "World Models as Reference Trajectories for Rapid Motor Adaptation", **`arXiv 2025.05`**. [[Paper](https://arxiv.org/abs/2505.15589)]
* "Policy-Driven World Model Adaptation for Robust Offline Model-based Reinforcement Learning", **`arXiv 2025.05`**. [[Paper](https://arxiv.org/abs/2505.13709)]
* "Building spatial world models from sparse transitional episodic memories", **`arXiv 2025.05`**. [[Paper](https://arxiv.org/abs/2505.13696)]
* **PoE-World**: "PoE-World: Compositional World Modeling with Products of Programmatic Experts", **`arXiv 2025.05`**. [[Paper](https://arxiv.org/abs/2505.10819)] [[Website](https://topwasu.github.io/poe-world)]
* "Explainable Reinforcement Learning Agents Using World Models", **`arXiv 2025.05`**. [[Paper](https://arxiv.org/abs/2505.08073)]
* **seq-JEPA**: "seq-JEPA: Autoregressive Predictive Learning of Invariant-Equivariant World Models", **`arXiv 2025.05`**. [[Paper](https://arxiv.org/abs/2505.03176)]
* "Coupled Distributional Random Expert Distillation for World Model Online Imitation Learning", **`arXiv 2025.05`**. [[Paper](https://arxiv.org/abs/2505.02228)]
* "Learning Local Causal World Models with State Space Models and Attention", **`arXiv 2025.05`**.
[[Paper](https://arxiv.org/abs/2505.02074)]
* **WebEvolver**: "WebEvolver: Enhancing Web Agent Self-Improvement with Coevolving World Model", **`arXiv 2025.04`**. [[Paper](https://arxiv.org/abs/2504.21024)]
* **WALL-E 2.0**: "WALL-E 2.0: World Alignment by NeuroSymbolic Learning improves World Model-based LLM Agents", **`arXiv 2025.04`**. [[Paper](https://arxiv.org/abs/2504.15785)] [[Code](https://github.com/elated-sawyer/WALL-E)]
* **ViMo**: "ViMo: A Generative Visual GUI World Model for App Agent", **`arXiv 2025.04`**. [[Paper](https://arxiv.org/abs/2504.13936)]
* "Simulating Before Planning: Constructing Intrinsic User World Model for User-Tailored Dialogue Policy Planning", **`SIGIR 2025`**. [[Paper](https://arxiv.org/abs/2504.13643)]
* **CheXWorld**: "CheXWorld: Exploring Image World Modeling for Radiograph Representation Learning", **`CVPR 2025`**. [[Paper](https://arxiv.org/abs/2504.13820)] [[Code](https://github.com/LeapLabTHU/CheXWorld)]
* **EchoWorld**: "EchoWorld: Learning Motion-Aware World Models for Echocardiography Probe Guidance", **`CVPR 2025`**. [[Paper](https://arxiv.org/abs/2504.13065)] [[Code](https://github.com/LeapLabTHU/EchoWorld)]
* "Adapting a World Model for Trajectory Following in a 3D Game", **`ICLR 2025 Workshop on World Models`**. [[Paper](https://arxiv.org/abs/2504.12299)]
* **MineWorld**: "MineWorld: a Real-Time and Open-Source Interactive World Model on Minecraft", **`arXiv 2025.04`**. [[Paper](https://arxiv.org/abs/2504.07257)] [[Website](https://aka.ms/mineworld)]
* **MoSim**: "Neural Motion Simulator Pushing the Limit of World Models in Reinforcement Learning", **`CVPR 2025`**. [[Paper](https://arxiv.org/abs/2504.07095)]
* "Improving World Models using Deep Supervision with Linear Probes", **`ICLR 2025 Workshop on World Models`**. [[Paper](https://arxiv.org/abs/2504.03861)]
* "Decentralized Collective World Model for Emergent Communication and Coordination", **`arXiv 2025.04`**. [[Paper](https://arxiv.org/abs/2504.03353)]
* "Adapting World Models with Latent-State Dynamics Residuals", **`arXiv 2025.04`**. [[Paper](https://arxiv.org/abs/2504.02252)]
* "Can Test-Time Scaling Improve World Foundation Model?", **`arXiv 2025.03`**. [[Paper](https://arxiv.org/abs/2503.24320)] [[Code](https://github.com/Mia-Cong/SWIFT.git)]
* "Synthesizing world models for bilevel planning", **`arXiv 2025.03`**. [[Paper](https://arxiv.org/abs/2503.20124)]
* "Long-context autoregressive video modeling with next-frame prediction", **`arXiv 2025.03`**. [[Paper](https://arxiv.org/abs/2503.19325)] [[Code](https://github.com/showlab/FAR)] [[Website](https://farlongctx.github.io/)]
* **Aether**: "Aether: Geometric-Aware Unified World Modeling", **`arXiv 2025.03`**. [[Paper](https://arxiv.org/abs/2503.18945)] [[Website](https://aether-world.github.io/)]
* **FUSDREAMER**: "FUSDREAMER: Label-efficient Remote Sensing World Model for Multimodal Data Classification", **`arXiv 2025.03`**. [[Paper](https://arxiv.org/abs/2503.13814)] [[Code](https://github.com/Cimy-wang/FusDreamer)]
* "Inter-environmental world modeling for continuous and compositional dynamics", **`arXiv 2025.03`**.
[[Paper](https://arxiv.org/abs/2503.09911)]
* **Disentangled World Models**: "Disentangled World Models: Learning to Transfer Semantic Knowledge from Distracting Videos for Reinforcement Learning", **`arXiv 2025.03`**. [[Paper](https://arxiv.org/abs/2503.08751)]
* "Revisiting the Othello World Model Hypothesis", **`ICLR 2025 Workshop on World Models`**. [[Paper](https://arxiv.org/abs/2503.04421)]
* "Learning Transformer-based World Models with Contrastive Predictive Coding", **`arXiv 2025.03`**. [[Paper](https://arxiv.org/abs/2503.04416)]
* "Surgical Vision World Model", **`arXiv 2025.03`**. [[Paper](https://arxiv.org/abs/2503.02904)]
* "World Models for Anomaly Detection during Model-Based Reinforcement Learning Inference", **`arXiv 2025.03`**. [[Paper](https://arxiv.org/abs/2503.02552)]
* **WMNav**: "WMNav: Integrating Vision-Language Models into World Models for Object Goal Navigation", **`arXiv 2025.03`**. [[Paper](https://arxiv.org/abs/2503.02247)] [[Website](https://b0b8k1ng.github.io/WMNav/)]
* **SENSEI**: "SENSEI: Semantic Exploration Guided by Foundation Models to Learn Versatile World Models", **`arXiv 2025.03`**. [[Paper](https://arxiv.org/abs/2503.01584)] [[Website](https://sites.google.com/view/sensei-paper)]
* "Learning Actionable World Models for Industrial Process Control", **`arXiv 2025.03`**. [[Paper](https://arxiv.org/abs/2503.00713)]
* "Implementing Spiking World Model with Multi-Compartment Neurons for Model-based Reinforcement Learning", **`arXiv 2025.03`**. [[Paper](https://arxiv.org/abs/2503.00713)]
* "Discrete Codebook World Models for Continuous Control", **`ICLR 2025`**. [[Paper](https://arxiv.org/abs/2503.00653)]
* **Multimodal Dreaming**: "Multimodal Dreaming: A Global Workspace Approach to World Model-Based Reinforcement Learning", **`arXiv 2025.02`**. [[Paper](https://arxiv.org/abs/2502.21142)]
* "Generalist World Model Pre-Training for Efficient Reinforcement Learning", **`arXiv 2025.02`**. [[Paper](https://arxiv.org/abs/2502.19544)]
* "Learning To Explore With Predictive World Model Via Self-Supervised Learning", **`arXiv 2025.02`**. [[Paper](https://arxiv.org/abs/2502.13200)]
* **M^3**: "M^3: A Modular World Model over Streams of Tokens", **`arXiv 2025.02`**. [[Paper](https://arxiv.org/abs/2502.11537)]
* "When do neural networks learn world models?", **`arXiv 2025.02`**. [[Paper](https://arxiv.org/abs/2502.09297)]
* "Pre-Trained Video Generative Models as World Simulators", **`arXiv 2025.02`**. [[Paper](https://arxiv.org/abs/2502.07825)]
* **DMWM**: "DMWM: Dual-Mind World Model with Long-Term Imagination", **`arXiv 2025.02`**. [[Paper](https://arxiv.org/abs/2502.07591)]
* **EvoAgent**: "EvoAgent: Agent Autonomous Evolution with Continual World Model for Long-Horizon Tasks", **`arXiv 2025.02`**. [[Paper](https://arxiv.org/abs/2502.05907)]
* "Acquisition through My Eyes and Steps: A Joint Predictive Agent Model in Egocentric Worlds", **`arXiv 2025.02`**. [[Paper](https://arxiv.org/abs/2502.05857)]
* "Generating Symbolic World Models via Test-time Scaling of Large Language Models", **`arXiv 2025.02`**. [[Paper](https://arxiv.org/abs/2502.04728)] [[Website](https://vmlpddl.github.io/)]
* "Improving Transformer World Models for Data-Efficient RL", **`arXiv 2025.02`**.
[[Paper](https://arxiv.org/abs/2502.01591)]
* "Trajectory World Models for Heterogeneous Environments", **`arXiv 2025.02`**. [[Paper](https://arxiv.org/abs/2502.01366)]
* "Enhancing Memory and Imagination Consistency in Diffusion-based World Models via Linear-Time Sequence Modeling", **`arXiv 2025.02`**. [[Paper](https://arxiv.org/abs/2502.00466)]
* "Objects matter: object-centric world models improve reinforcement learning in visually complex environments", **`arXiv 2025.01`**. [[Paper](https://arxiv.org/abs/2501.16443)]
* **GLAM**: "GLAM: Global-Local Variation Awareness in Mamba-based World Model", **`arXiv 2025.01`**. [[Paper](https://arxiv.org/abs/2501.11949)]
* **GAWM**: "GAWM: Global-Aware World Model for Multi-Agent Reinforcement Learning", **`arXiv 2025.01`**. [[Paper](https://arxiv.org/abs/2501.10116)]
* "Generative Emergent Communication: Large Language Model is a Collective World Model", **`arXiv 2025.01`**. [[Paper](https://arxiv.org/abs/2501.00226)]
* "Towards Unraveling and Improving Generalization in World Models", **`arXiv 2025.01`**. [[Paper](https://arxiv.org/abs/2501.00195)]
* "Towards Physically Interpretable World Models: Meaningful Weakly Supervised Representations for Visual Trajectory Prediction", **`arXiv 2024.12`**. [[Paper](https://arxiv.org/abs/2412.12870)]
* "Transformers Use Causal World Models in Maze-Solving Tasks", **`arXiv 2024.12`**. [[Paper](https://arxiv.org/abs/2412.11867)]
* "Causal World Representation in the GPT Model", **`NeurIPS 2024 Workshop`**. [[Paper](https://arxiv.org/abs/2412.07446)]
* **Owl-1**: "Owl-1: Omni World Model for Consistent Long Video Generation", **`arXiv 2024.12`**. [[Paper](https://arxiv.org/abs/2412.09600)]
* "Navigation World Models", **`arXiv 2024.12`**. [[Paper](https://arxiv.org/abs/2412.03572)] [[Website](https://www.amirbar.net/nwm/)]
* "Evaluating World Models with LLM for Decision Making", **`arXiv 2024.11`**. [[Paper](https://arxiv.org/abs/2411.08794)]
* **LLMPhy**: "LLMPhy: Complex Physical Reasoning Using Large Language Models and World Models", **`arXiv 2024.11`**. [[Paper](https://arxiv.org/abs/2411.08027)]
* **WebDreamer**: "Is Your LLM Secretly a World Model of the Internet? Model-Based Planning for Web Agents", **`arXiv 2024.11`**. [[Paper](https://arxiv.org/abs/2411.06559)] [[Code](https://github.com/OSU-NLP-Group/WebDreamer)]
* "Scaling Laws for Pre-training Agents and World Models", **`arXiv 2024.11`**. [[Paper](https://arxiv.org/abs/2411.04434)]
* **DINO-WM**: "DINO-WM: World Models on Pre-trained Visual Features enable Zero-shot Planning", **`arXiv 2024.11`**. [[Paper](https://arxiv.org/abs/2411.04983)] [[Website](https://dino-wm.github.io/)]
* "Learning World Models for Unconstrained Goal Navigation", **`NeurIPS 2024`**. [[Paper](https://arxiv.org/abs/2411.02446)]
* "How Far is Video Generation from World Model: A Physical Law Perspective", **`arXiv 2024.11`**. [[Paper](https://arxiv.org/abs/2411.02385)] [[Website](https://phyworld.github.io/)] [[Code](https://github.com/phyworld/phyworld)]
* **Adaptive World Models**: "Adaptive World Models: Learning Behaviors by Latent Imagination Under Non-Stationarity", **`NeurIPS 2024 Workshop on Adaptive Foundation Models`**.
[[Paper](https://arxiv.org/abs/2411.01342)]
* **LLMCWM**: "Language Agents Meet Causality -- Bridging LLMs and Causal World Models", **`arXiv 2024.10`**. [[Paper](https://arxiv.org/abs/2410.19923)] [[Code](https://github.com/j0hngou/LLMCWM/)]
* "Reward-free World Models for Online Imitation Learning", **`arXiv 2024.10`**. [[Paper](https://arxiv.org/abs/2410.14081)]
* "Web Agents with World Models: Learning and Leveraging Environment Dynamics in Web Navigation", **`arXiv 2024.10`**. [[Paper](https://arxiv.org/abs/2410.13232)]
* **AVID**: "AVID: Adapting Video Diffusion Models to World Models", **`arXiv 2024.10`**. [[Paper](https://arxiv.org/abs/2410.12822)] [[Code](https://github.com/microsoft/causica/tree/main/research_experiments/avid)]
* **SMAC**: "Grounded Answers for Multi-agent Decision-making Problem through Generative World Model", **`NeurIPS 2024`**. [[Paper](https://arxiv.org/abs/2410.02664)]
* **OSWM**: "One-shot World Models Using a Transformer Trained on a Synthetic Prior", **`arXiv 2024.09`**. [[Paper](https://arxiv.org/abs/2409.14084)]
* "Making Large Language Models into World Models with Precondition and Effect Knowledge", **`arXiv 2024.09`**. [[Paper](https://arxiv.org/abs/2409.12278)]
* "Efficient Exploration and Discriminative World Model Learning with an Object-Centric Abstraction", **`arXiv 2024.08`**. [[Paper](https://arxiv.org/abs/2408.11816)]
* **MoReFree**: "World Models Increase Autonomy in Reinforcement Learning", **`arXiv 2024.08`**. [[Paper](https://arxiv.org/abs/2408.09807)] [[Website](https://sites.google.com/view/morefree)]
* **UrbanWorld**: "UrbanWorld: An Urban World Model for 3D City Generation", **`arXiv 2024.07`**. [[Paper](https://arxiv.org/abs/2407.11965)]
* **PWM**: "PWM: Policy Learning with Large World Models", **`arXiv 2024.07`**. [[Paper](https://arxiv.org/abs/2407.02466)] [[Website](https://www.imgeorgiev.com/pwm/)]
* "Predicting vs. Acting: A Trade-off Between World Modeling & Agent Modeling", **`arXiv 2024.07`**. [[Paper](https://arxiv.org/abs/2407.02446)]
* **GenRL**: "GenRL: Multimodal foundation world models for generalist embodied agents", **`arXiv 2024.06`**. [[Paper](https://arxiv.org/abs/2406.18043)] [[Code](https://github.com/mazpie/genrl)]
* **DLLM**: "World Models with Hints of Large Language Models for Goal Achieving", **`arXiv 2024.06`**. [[Paper](https://arxiv.org/abs/2406.07381)]
* "Cognitive Map for Language Models: Optimal Planning via Verbally Representing the World Model", **`arXiv 2024.06`**. [[Paper](https://arxiv.org/abs/2406.15275)]
* **CoDreamer**: "CoDreamer: Communication-Based Decentralised World Models", **`arXiv 2024.06`**. [[Paper](https://arxiv.org/abs/2406.13600)]
* **Pandora**: "Pandora: Towards General World Model with Natural Language Actions and Video States", **`arXiv 2024.06`**. [[Paper](https://arxiv.org/abs/2406.09455)] [[Code](https://github.com/maitrix-org/Pandora)]
* **EBWM**: "Cognitively Inspired Energy-Based World Models", **`arXiv 2024.06`**. [[Paper](https://arxiv.org/abs/2406.08862)]
* "Evaluating the World Model Implicit in a Generative Model", **`arXiv 2024.06`**.
[[Paper](https://arxiv.org/abs/2406.03689)] [[Code](https://github.com/keyonvafa/world-model-evaluation)]
* "Transformers and Slot Encoding for Sample Efficient Physical World Modelling", **`arXiv 2024.05`**. [[Paper](https://arxiv.org/abs/2405.20180)] [[Code](https://github.com/torchipeppo/transformers-and-slot-encoding-for-wm)]
* **Puppeteer**: "Hierarchical World Models as Visual Whole-Body Humanoid Controllers", **`arXiv 2024.05`**. [[Paper](https://arxiv.org/abs/2405.18418)] [[Website](https://nicklashansen.com/rlpuppeteer)]
* **BWArea Model**: "BWArea Model: Learning World Model, Inverse Dynamics, and Policy for Controllable Language Generation", **`arXiv 2024.05`**. [[Paper](https://arxiv.org/abs/2405.17039)]
* **WKM**: "Agent Planning with World Knowledge Model", **`arXiv 2024.05`**. [[Paper](https://arxiv.org/abs/2405.14205)] [[Code](https://github.com/zjunlp/WKM)]
* **Diamond**: "Diffusion for World Modeling: Visual Details Matter in Atari", **`arXiv 2024.05`**. [[Paper](https://arxiv.org/abs/2405.12399)] [[Code](https://github.com/eloialonso/diamond)]
* "Compete and Compose: Learning Independent Mechanisms for Modular World Models", **`arXiv 2024.04`**. [[Paper](https://arxiv.org/abs/2404.15109)]
* "Dreaming of Many Worlds: Learning Contextual World Models Aids Zero-Shot Generalization", **`arXiv 2024.03`**. [[Paper](https://arxiv.org/abs/2403.10967)] [[Code](https://github.com/sai-prasanna/dreaming_of_many_worlds)]
* **V-JEPA**: "V-JEPA: Video Joint Embedding Predictive Architecture", **`Meta AI`**. [[Blog](https://ai.meta.com/blog/v-jepa-yann-lecun-ai-model-video-joint-embedding-predictive-architecture/)] [[Paper](https://ai.meta.com/research/publications/revisiting-feature-prediction-for-learning-visual-representations-from-video/)] [[Code](https://github.com/facebookresearch/jepa)]
* **IWM**: "Learning and Leveraging World Models in Visual Representation Learning", **`Meta AI`**. [[Paper](https://arxiv.org/abs/2403.00504)]
* **Genie**: "Genie: Generative Interactive Environments", **`DeepMind`**. [[Paper](https://arxiv.org/abs/2402.15391)] [[Blog](https://sites.google.com/view/genie-2024/home)]
* **Sora**: "Video generation models as world simulators", **`OpenAI`**. [[Technical report](https://openai.com/research/video-generation-models-as-world-simulators)]
* **LWM**: "World Model on Million-Length Video And Language With RingAttention", **`arXiv 2024.02`**. [[Paper](https://arxiv.org/abs/2402.08268)] [[Code](https://github.com/LargeWorldModel/LWM)]
* "Planning with an Ensemble of World Models", **`OpenReview`**. [[Paper](https://openreview.net/forum?id=cvGdPXaydP)]
* **WorldDreamer**: "WorldDreamer: Towards General World Models for Video Generation via Predicting Masked Tokens", **`arXiv 2024.01`**. [[Paper](https://arxiv.org/abs/2401.09985)] [[Code](https://github.com/JeffWang987/WorldDreamer)]
* **CWM**: "Understanding Physical Dynamics with Counterfactual World Modeling", **`ECCV 2024`**. [[Paper](https://www.ecva.net/papers/eccv_2024/papers_ECCV/papers/03523.pdf)] [[Code](https://neuroailab.github.io/cwm-physics/)]
* **Δ-IRIS**: "Efficient World Models with Context-Aware Tokenization", **`ICML 2024`**.
[[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2406.19320)] [[Code](https:\u002F\u002Fgithub.com\u002Fvmicheli\u002Fdelta-iris)]\r\n* **LLM-Sim**: \"Can Language Models Serve as Text-Based World Simulators?\", **`ACL`**. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2406.06485)] [[Code](https:\u002F\u002Fgithub.com\u002Fcognitiveailab\u002FGPT-simulator)]\r\n* **AD3**: \"AD3: Implicit Action is the Key for World Models to Distinguish the Diverse Visual Distractors\", **`ICML 2024`**. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2403.09976)]\r\n* **MAMBA**: \"MAMBA: an Effective World Model Approach for Meta-Reinforcement Learning\", **`ICLR 2024`**. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2403.09859)] [[Code](https:\u002F\u002Fgithub.com\u002Fzoharri\u002Fmamba)]\r\n* **R2I**: \"Mastering Memory Tasks with World Models\", **`ICLR 2024`**. [[Paper](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2403.04253)] [[Website](https:\u002F\u002Frecall2imagine.github.io\u002F)] [[Code](https:\u002F\u002Fgithub.com\u002Fchandar-lab\u002FRecall2Imagine)]\r\n* **HarmonyDream**: \"HarmonyDream: Task Harmonization Inside World Models\", **`ICML 2024`**. [[Paper](https:\u002F\u002Fopenreview.net\u002Fforum?id=x0yIaw2fgk)] [[Code](https:\u002F\u002Fgithub.com\u002Fthuml\u002FHarmonyDream)]\r\n* **REM**: \"Improving Token-Based World Models with Parallel Observation Prediction\", **`ICML 2024`**. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2402.05643)] [[Code](https:\u002F\u002Fgithub.com\u002Fleor-c\u002FREM)]\r\n* \"Do Transformer World Models Give Better Policy Gradients?\", **`ICML 2024`**. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2402.05290)]\r\n* **DreamSmooth**: \"DreamSmooth: Improving Model-based Reinforcement Learning via Reward Smoothing\", **`ICLR 2024`**. [[Paper](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2311.01450)]\r\n* **TD-MPC2**: \"TD-MPC2: Scalable, Robust World Models for Continuous Control\", **`ICLR 2024`**. 
[[Paper](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2310.16828)] [[Torch Code](https:\u002F\u002Fgithub.com\u002Fnicklashansen\u002Ftdmpc2)]\r\n* **Hieros**: \"Hieros: Hierarchical Imagination on Structured State Space Sequence World Models\", **`ICML 2024`**. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2310.05167)]\r\n* **CoWorld**: \"Making Offline RL Online: Collaborative World Models for Offline Visual Reinforcement Learning\", **`NeurIPS 2024`**. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2305.15260)]\r\n\r\n---\r\n## World Models for Embodied AI\r\n* **JailWAM**: \"JailWAM: Jailbreaking World Action Models in Robot Control\", **`arXiv 2026.04`**. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2604.05498)]\r\n* \"Hierarchical Planning with Latent World Models\", **`arXiv 2026.04`**. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2604.03208)]\r\n* \"World Action Verifier: Self-Improving World Models via Forward-Inverse Asymmetry\", **`arXiv 2026.04`**. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2604.01985)] [[Website](https:\u002F\u002Fworld-action-verifier.github.io)]\r\n* **EgoSim**: \"EgoSim: Egocentric World Simulator for Embodied Interaction Generation\", **`arXiv 2026.04`**. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2604.01001)] [[Website](https:\u002F\u002Fegosimulator.github.io\u002F)]\r\n* \"Enhancing Policy Learning with World-Action Model\", **`arXiv 2026.03`**. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2603.28955)]\r\n* \"Persistent Robot World Models: Stabilizing Multi-Step Rollouts via Reinforcement Learning\", **`arXiv 2026.03`**. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2603.25685)]\r\n* **ThinkJEPA**: \"ThinkJEPA: Empowering Latent World Models with Large Vision-Language Reasoning Model\", **`arXiv 2026.03`**. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2603.22281)]\r\n* \"Do World Action Models Generalize Better than VLAs? A Robustness Study\", **`arXiv 2026.03`**. 
[[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2603.22078)]\r\n* **DDP**: \"Dreaming the Unseen: World Model-regularized Diffusion Policy for Out-of-Distribution Robustness\", **`arXiv 2026.03`**. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2603.21017)]\r\n* **OmniVTA**: \"OmniVTA: Visuo-Tactile World Modeling for Contact-Rich Robotic Manipulation\", **`arXiv 2026.03`**. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2603.19201)] [[Website](https:\u002F\u002Fmrsecant.github.io\u002FOmniVTA)]\r\n* **EVA**: \"EVA: Aligning Video World Models with Executable Robot Actions via Inverse Dynamics Rewards\", **`arXiv 2026.03`**. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2603.17808)] [[Website](https:\u002F\u002Feva-project-page.github.io\u002F)]\r\n* **DreamPlan**: \"DreamPlan: Efficient Reinforcement Fine-Tuning of Vision-Language Planners via Video World Models\", **`arXiv 2026.03`**. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2603.16860)] [[Website](https:\u002F\u002Fpsi-lab.ai\u002FDreamPlan\u002F)]\r\n* **Kinema4D**: \"Kinema4D: Kinematic 4D World Modeling for Spatiotemporal Embodied Simulation\", **`arXiv 2026.03`**. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2603.16669)] [[Website](https:\u002F\u002Fmutianxu.github.io\u002FKinema4D-project-page\u002F)]\r\n* \"Simulation Distillation: Pretraining World Models in Simulation for Rapid Real-World Adaptation\", **`arXiv 2026.03`**. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2603.15759)] [[Website](https:\u002F\u002Fsim-dist.github.io\u002F)]\r\n* **WestWorld**: \"WestWorld: A Knowledge-Encoded Scalable Trajectory World Model for Diverse Robotic Systems\", **`arXiv 2026.03`**. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2603.14392)] [[Website](https:\u002F\u002Fwestworldrobot.github.io\u002F)]\r\n* \"Building Explicit World Model for Zero-Shot Open-World Object Manipulation\", **`arXiv 2026.03`**. 
[[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2603.13825)] [[Website](https:\u002F\u002Fbojack-bj.github.io\u002Fprojects\u002Fthesis\u002F)]\r\n* \"Egocentric World Model for Photorealistic Hand-Object Interaction Synthesis\", **`arXiv 2026.03`**. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2603.13615)] [[Website](https:\u002F\u002Fegohoi.github.io\u002F)]\r\n* **RoboStereo**: \"RoboStereo: Dual-Tower 4D Embodied World Models for Unified Policy Optimization\", **`arXiv 2026.03`**. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2603.12639)]\r\n* **ResWM**: \"ResWM: Residual-Action World Model for Visual RL\", **`arXiv 2026.03`**. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2603.11110)]\r\n* **PlayWorld**: \"PlayWorld: Learning Robot World Models from Autonomous Play\", **`arXiv 2026.03`**. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2603.09030)] [[Website](https:\u002F\u002Frobot-playworld.github.io\u002F)]\r\n* **MetaWorld-X**: \"MetaWorld-X: Hierarchical World Modeling via VLM-Orchestrated Experts for Humanoid Loco-Manipulation\", **`arXiv 2026.03`**. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2603.08572)] [[Website](https:\u002F\u002Fsyt2004.github.io\u002FmetaworldX\u002F)]\r\n* \"Interactive World Simulator for Robot Policy Training and Evaluation\", **`arXiv 2026.03`**. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2603.08546)] [[Website](https:\u002F\u002Fyixuanwang.me\u002Finteractive_world_sim)]\r\n* \"Foundational World Models Accurately Detect Bimanual Manipulator Failures\", **`ICRA 2026`**. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2603.06987)]\r\n* **LPWM**: \"Latent Particle World Models: Self-supervised Object-centric Stochastic Dynamics Modeling\", **`ICLR 2026`**. 
[[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2603.04553)] [[Website](https:\u002F\u002Ftaldatech.github.io\u002Flpwm-web)]\r\n* **AdaWorldPolicy**: \"AdaWorldPolicy: World-Model-Driven Diffusion Policy with Online Adaptive Learning for Robotic Manipulation\", **`arXiv 2026.02`**. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2602.20057)] [[Website](https:\u002F\u002FAdaWorldPolicy.github.io)]\r\n* **FRAPPE**: \"FRAPPE: Infusing World Modeling into Generalist Policies via Multiple Future Representation Alignment\", **`arXiv 2026.02`**. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2602.17259)] [[Website](https:\u002F\u002Fh-zhao1997.github.io\u002Ffrappe)]\r\n* \"Learning to unfold cloth: Scaling up world models to deformable object manipulation\", **`arXiv 2026.02`**. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2602.16675)] \r\n* \"World Model Failure Classification and Anomaly Detection for Autonomous Inspection\", **`arXiv 2026.02`**. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2602.16182)] [[Website](https:\u002F\u002Fautoinspection-classification.github.io)] \r\n* **DreamZero**: \"World Action Models are Zero-shot Policies\", **`arXiv 2026.02`**. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2602.15922)] [[Website](https:\u002F\u002Fdreamzero0.github.io\u002F)] \r\n* **WoVR**: \"WoVR: World Models as Reliable Simulators for Post-Training VLA Policies with RL\", **`arXiv 2026.02`**. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2602.13977)] \r\n* \"Visual Foresight for Robotic Stow: A Diffusion-Based World Model from Sparse Snapshots\", **`arXiv 2026.02`**. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2602.13347)] \r\n* **VLAW**: \"VLAW: Iterative Co-Improvement of Vision-Language-Action Policy and World Model\", **`arXiv 2026.02`**. 
[[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2602.12063)] [[Website](https:\u002F\u002Fsites.google.com\u002Fview\u002Fvla-w)]\r\n* **HAIC**: \"HAIC: Humanoid Agile Object Interaction Control via Dynamics-Aware World Model\", **`arXiv 2026.02`**. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2602.11758)] [[Website](https:\u002F\u002Fhaic-humanoid.github.io\u002F)]\r\n* **H-WM**: \"H-WM: Robotic Task and Motion Planning Guided by Hierarchical World Model\", **`arXiv 2026.02`**. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2602.11291)]\r\n* **RISE**: \"RISE: Self-Improving Robot Policy with Compositional World Model\", **`arXiv 2026.02`**. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2602.11075)] [[Website](https:\u002F\u002Fopendrivelab.com\u002Fkai0-rl\u002F)] \r\n* \"ContactGaussian-WM: Learning Physics-Grounded World Model from Videos\", **`arXiv 2026.02`**. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2602.11021)] \r\n* \"Scaling World Model for Hierarchical Manipulation Policies\", **`arXiv 2026.02`**. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2602.10983)] [[Website](https:\u002F\u002Fvista-wm.github.io)]\r\n* **Say, Dream, and Act**: \"Say, Dream, and Act: Learning Video World Models for Instruction-Driven Robot Manipulation\", **`arXiv 2026.02`**. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2602.10717)]\r\n* \"Affordances Enable Partial World Modeling with LLMs\", **`arXiv 2026.02`**. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2602.10390)]\r\n* **VLA-JEPA**: \"VLA-JEPA: Enhancing Vision-Language-Action Model with Latent World Model\", **`arXiv 2026.02`**. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2602.10098)]\r\n* **MVISTA-4D**: \"MVISTA-4D: View-Consistent 4D World Model with Test-Time Action Inference for Robotic Manipulation\", **`arXiv 2026.02`**. 
[[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2602.09878)]\r\n* **Hand2World**: \"Hand2World: Autoregressive Egocentric Interaction Generation via Free-Space Hand Gestures\", **`arXiv 2026.02`**. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2602.09600)] [[Website](https:\u002F\u002Fhand2world.github.io\u002F)]\r\n* **World-VLA-Loop**: \"World-VLA-Loop: Closed-Loop Learning of Video World Model and VLA Policy\", **`arXiv 2026.02`**. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2602.06508)] [[Website](https:\u002F\u002Fshowlab.github.io\u002FWorld-VLA-Loop\u002F)]\r\n* \"Coupled Local and Global World Models for Efficient First Order RL\", **`arXiv 2026.02`**. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2602.06219)]\r\n* \"Visuo-Tactile World Models\", **`arXiv 2026.02`**. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2602.06001)]\r\n* **BridgeV2W**: \"BridgeV2W: Bridging Video Generation Models to Embodied World Models via Embodiment Masks\", **`arXiv 2026.02`**. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2602.03793)] [[Website](https:\u002F\u002FBridgeV2W.github.io)]\r\n* **World-Gymnast**: \"World-Gymnast: Training Robots with Reinforcement Learning in a World Model\", **`arXiv 2026.02`**. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2602.02454)] [[Website](https:\u002F\u002Fworld-gymnast.github.io\u002F)]\r\n* **MetaWorld**: \"MetaWorld: Skill Transfer and Composition in a Hierarchical World Model for Grounding High-Level Instructions\", **`arXiv 2026.01`**. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2601.17507)] [[Code](https:\u002F\u002Fanonymous.4open.science\u002Fr\u002Fmetaworld-2BF4\u002F)]\r\n* \"Walk through Paintings: Egocentric World Models from Internet Priors\", **`arXiv 2026.01`**. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2601.15284)] \r\n* \"Aligning Agentic World Models via Knowledgeable Experience Learning\", **`arXiv 2026.01`**. 
[[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2601.13247)] \r\n* **ReWorld**: \"ReWorld: Multi-Dimensional Reward Modeling for Embodied World Models\", **`arXiv 2026.01`**. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2601.12428)] \r\n* \"An Efficient and Multi-Modal Navigation System with One-Step World Model\", **`arXiv 2026.01`**. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2601.12277)] \r\n* **PointWorld**: \"PointWorld: Scaling 3D World Models for In-The-Wild Robotic Manipulation\", **`arXiv 2026.01`**. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2601.03782)] [[Website](https:\u002F\u002Fpoint-world.github.io\u002F)]\r\n* **Dream2Flow**: \"Dream2Flow: Bridging Video Generation and Open-World Manipulation with 3D Object Flow\", **`arXiv 2025.12`**. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2512.24766)] [[Website](https:\u002F\u002Fdream2flow.github.io\u002F)]\r\n* \"What Drives Success in Physical Planning with Joint-Embedding Predictive World Models?\", **`arXiv 2025.12`**. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2512.24497)] [[Code](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fjepa-wms)]\r\n* **Act2Goal**: \"Act2Goal: From World Model To General Goal-conditioned Policy\", **`arXiv 2025.12`**. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2512.23541)] [[Website](https:\u002F\u002Fact2goal.github.io\u002F)]\r\n* **AstraNav-World**: \"AstraNav-World: World Model for Foresight Control and Consistency\", **`arXiv 2025.12`**. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2512.21714)]\r\n* **ChronoDreamer**: \"ChronoDreamer: Action-Conditioned World Model as an Online Simulator for Robotic Planning\", **`arXiv 2025.12`**. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2512.18619)]\r\n* **STORM**: \"STORM: Search-Guided Generative World Models for Robotic Manipulation\", **`arXiv 2025.12`**. 
[[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2512.18477)]\r\n* \"World Models Can Leverage Human Videos for Dexterous Manipulation\", **`arXiv 2025.12`**. [[Paper](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2512.13644)]\r\n* \"Latent Action World Models for Control with Unlabeled Trajectories\", **`arXiv 2025.12`**. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2512.10016)]\r\n* **PRISM-WM**: \"Prismatic World Model: Learning Compositional Dynamics for Planning in Hybrid Systems\", **`arXiv 2025.12`**. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2512.08411)]\r\n* \"Learning Robot Manipulation from Audio World Models\", **`arXiv 2025.12`**. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2512.08405)]\r\n* \"Embodied Tree of Thoughts: Deliberate Manipulation Planning with Embodied World Model\", **`arXiv 2025.12`**. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2512.08188)] [[Website](https:\u002F\u002Fembodied-tree-of-thoughts.github.io)]\r\n* \"World Models That Know When They Don't Know: Controllable Video Generation with Calibrated Uncertainty\", **`arXiv 2025.12`**. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2512.05927)]\r\n* \"Real-World Robot Control by Deep Active Inference With a Temporally Hierarchical World Model\", **`IEEE Robotics and Automation Letters`**. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2512.01924)]\r\n* \"Seeing through Imagination: Learning Scene Geometry via Implicit Spatial World Modeling\", **`arXiv 2025.12`**. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2512.01821)]\r\n* **IGen**: \"IGen: Scalable Data Generation for Robot Learning from Open-World Images\", **`arXiv 2025.12`**. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2512.01773)] [[Website](https:\u002F\u002Fchenghaogu.github.io\u002FIGen\u002F)]\r\n* **NavForesee**: \"NavForesee: A Unified Vision-Language World Model for Hierarchical Planning and Dual-Horizon Navigation Prediction\", **`arXiv 2025.12`**. 
[[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2512.01550)] \r\n* **TraceGen**: \"TraceGen: World Modeling in 3D Trace Space Enables Learning from Cross-Embodiment Videos\", **`arXiv 2025.11`**. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2511.21690)] [[Website](https:\u002F\u002Ftracegen.github.io\u002F)]\r\n* **ENACT**: \"ENACT: Evaluating Embodied Cognition with World Modeling of Egocentric Interaction\", **`arXiv 2025.11`**. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2511.20937)] [[Website](https:\u002F\u002Fenact-embodied-cognition.github.io\u002F)] [[Code](https:\u002F\u002Fgithub.com\u002Fmll-lab-nu\u002FENACT)]\r\n* \"Learning Massively Multitask World Models for Continuous Control\", **`arXiv 2025.11`**. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2511.19584)] [[Website](https:\u002F\u002Fwww.nicklashansen.com\u002FNewtWM)]\r\n* **UNeMo**: \"UNeMo: Collaborative Visual-Language Reasoning and Navigation via a Multimodal World Model\", **`arXiv 2025.11`**. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2511.18845)]\r\n* \"MindForge: Empowering Embodied Agents with Theory of Mind for Lifelong Cultural Learning\", **`NeurIPS 2025`**. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2411.12977)]\r\n* \"Towards High-Consistency Embodied World Model with Multi-View Trajectory Videos\", **`arXiv 2025.11`**. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2511.12882)]\r\n* **WMPO**: \"WMPO: World Model-based Policy Optimization for Vision-Language-Action Models\", **`arXiv 2025.11`**. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2511.09515)] [[Website](https:\u002F\u002Fwm-po.github.io)]\r\n* \"Robot Learning from a Physical World Model\", **`arXiv 2025.11`**. 
[[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2511.07416)] [[Website](https:\u002F\u002Fpointscoder.github.io\u002FPhysWorld_Web\u002F)]\r\n* \"When Object-Centric World Models Meet Policy Learning: From Pixels to Policies, and Where It Breaks\", **`arXiv 2025.11`**. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2511.06136)]\r\n* **WorldPlanner**: \"WorldPlanner: Monte Carlo Tree Search and MPC with Action-Conditioned Visual World Models\", **`arXiv 2025.11`**. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2511.03077)]\r\n* \"Learning Interactive World Model for Object-Centric Reinforcement Learning\", **`NeurIPS 2025`**. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2511.02225)]\r\n* \"Scaling Cross-Embodiment World Models for Dexterous Manipulation\", **`arXiv 2025.11`**. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2511.01177)]\r\n* \"Co-Evolving Latent Action World Models\", **`arXiv 2025.10`**. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2510.26433)]\r\n* \"Deductive Chain-of-Thought Augmented Socially-aware Robot Navigation World Model\", **`arXiv 2025.10`**. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2510.23509)] [[Website](https:\u002F\u002Fsites.google.com\u002Fview\u002FNaviWM)]\r\n* \"Deep Active Inference with Diffusion Policy and Multiple Timescale World Model for Real-World Exploration and Navigation\", **`arXiv 2025.10`**. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2510.23258)]\r\n* **ProTerrain**: \"ProTerrain: Probabilistic Physics-Informed Rough Terrain World Modeling\", **`arXiv 2025.10`**. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2510.19364)]\r\n* \"Ego-Vision World Model for Humanoid Contact Planning\", **`arXiv 2025.10`**. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2510.11682)] [[Website](https:\u002F\u002Fego-vcp.github.io\u002F)]\r\n* **Ctrl-World**: \"Ctrl-World: A Controllable Generative World Model for Robot Manipulation\", **`arXiv 2025.10`**. 
[[Paper](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2510.10125)] [[Website](https:\u002F\u002Fctrl-world.github.io\u002F)] [[Code](https:\u002F\u002Fgithub.com\u002FRobert-gyj\u002FCtrl-World)]\r\n* **iMoWM**: \"iMoWM: Taming Interactive Multi-Modal World Model for Robotic Manipulation\", **`arXiv 2025.10`**. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2510.07313)] [[Website](https:\u002F\u002Fxingyoujun.github.io\u002Fimowm\u002F)]\r\n* **WristWorld**: \"WristWorld: Generating Wrist-Views via 4D World Models for Robotic Manipulation\", **`arXiv 2025.10`**. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2510.07313)]\r\n* \"A Recipe for Efficient Sim-to-Real Transfer in Manipulation with Online Imitation-Pretrained World Models\", **`arXiv 2025.10`**. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2510.02538)]\r\n* \"Kinodynamic Motion Planning for Mobile Robot Navigation across Inconsistent World Models\", **`RSS 2025 Workshop on Resilient Off-road Autonomous Robotics (ROAR)`**. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2509.26339)]\r\n* **EMMA**: \"EMMA: Generalizing Real-World Robot Manipulation via Generative Visual Transfer\", **`arXiv 2025.09`**. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2509.22407)]\r\n* **LongScape**: \"LongScape: Advancing Long-Horizon Embodied World Models with Context-Aware MoE\", **`arXiv 2025.09`**. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2509.21790)]\r\n* **KeyWorld**: \"KeyWorld: Key Frame Reasoning Enables Effective and Efficient World Models\", **`arXiv 2025.09`**. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2509.21027)]\r\n* **DAWM**: \"DAWM: Diffusion Action World Models for Offline Reinforcement Learning via Action-Inferred Transitions\", **`ICML 2025 Workshop`**. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2509.19538)]\r\n* **World4RL**: \"World4RL: Diffusion World Models for Policy Refinement with Reinforcement Learning for Robotic Manipulation\", **`arXiv 2025.09`**. 
[[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2509.19080)]\r\n* **SAMPO**: \"SAMPO: Scale-wise Autoregression with Motion PrOmpt for generative world models\", **`arXiv 2025.09`**. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2509.15536)]\r\n* **PhysicalAgent**: \"PhysicalAgent: Towards General Cognitive Robotics with Foundation World Models\", **`arXiv 2025.09`**. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2509.13903)]\r\n* \"Empowering Multi-Robot Cooperation via Sequential World Models\", **`arXiv 2025.09`**. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2509.13095)]\r\n* \"World Model Implanting for Test-time Adaptation of Embodied Agents\", **`ICML 2025`**. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2509.03956)]\r\n* \"Learning Primitive Embodied World Models: Towards Scalable Robotic Learning\", **`arXiv 2025.08`**. [[Paper](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2508.20840)] [[Website](https:\u002F\u002Fqiaosun22.github.io\u002FPrimitiveWorld\u002F)]\r\n* **GWM**: \"GWM: Towards Scalable Gaussian World Models for Robotic Manipulation\", **`ICCV 2025`**. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2508.17600)] [[Website](https:\u002F\u002Fgaussian-world-model.github.io\u002F)]\r\n* \"Imaginative World Modeling with Scene Graphs for Embodied Agent Navigation\", **`arXiv 2025.08`**. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2508.06990)]\r\n* \"Bounding Distributional Shifts in World Modeling through Novelty Detection\", **`arXiv 2025.08`**. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2508.06096)]\r\n* **Genie Envisioner**: \"Genie Envisioner: A Unified World Foundation Platform for Robotic Manipulation\", **`arXiv 2025.08`**. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2508.05635)] [[Website](https:\u002F\u002Fgenie-envisioner.github.io\u002F)]\r\n* **DiWA**: \"DiWA: Diffusion Policy Adaptation with World Models\", **`CoRL 2025`**. 
[[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2508.03645)] [[Code](https:\u002F\u002Fdiwa.cs.uni-freiburg.de)]\r\n* **CoEx**: \"CoEx -- Co-evolving World-model and Exploration\", **`arXiv 2025.07`**. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2507.22281)]\r\n* \"Latent Policy Steering with Embodiment-Agnostic Pretrained World Models\", **`arXiv 2025.07`**. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2507.13340)]\r\n* **MindJourney**: \"MindJourney: Test-Time Scaling with World Models for Spatial Reasoning\", **`arXiv 2025.07`**. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2507.12508)] [[Website](https:\u002F\u002Fumass-embodied-agi.github.io\u002FMindJourney)]\r\n* **FOUNDER**: \"FOUNDER: Grounding Foundation Models in World Models for Open-Ended Embodied Decision Making\", **`ICML 2025`**. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2507.12496)] [[Website](https:\u002F\u002Fsites.google.com\u002Fview\u002Ffounder-rl)]\r\n* **EmbodieDreamer**: \"EmbodieDreamer: Advancing Real2Sim2Real Transfer for Policy Training via Embodied World Modeling\", **`arXiv 2025.07`**. [[Paper](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2507.05198)] [[Website](https:\u002F\u002Fembodiedreamer.github.io\u002F)]\r\n* **World4Omni**: \"World4Omni: A Zero-Shot Framework from Image Generation World Model to Robotic Manipulation\", **`arXiv 2025.06`**. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2506.23919)] [[Website](https:\u002F\u002Fworld4omni.github.io\u002F)]\r\n* **RoboScape**: \"RoboScape: Physics-informed Embodied World Model\", **`arXiv 2025.06`**. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2506.23135)] [[Code](https:\u002F\u002Fgithub.com\u002Ftsinghua-fib-lab\u002FRoboScape)]\r\n* **ParticleFormer**: \"ParticleFormer: A 3D Point Cloud World Model for Multi-Object, Multi-Material Robotic Manipulation\", **`arXiv 2025.06`**. 
[[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2506.23126)] [[Website](https:\u002F\u002Fparticleformer.github.io\u002F)]\r\n* **ManiGaussian++**: \"ManiGaussian++: General Robotic Bimanual Manipulation with Hierarchical Gaussian World Model\", **`arXiv 2025.06`**. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2506.19842)] [[Code](https:\u002F\u002Fgithub.com\u002FApril-Yz\u002FManiGaussian_Bimanual)]\r\n* **ReOI**: \"Reimagination with Test-time Observation Interventions: Distractor-Robust World Model Predictions for Visual Model Predictive Control\", **`arXiv 2025.06`**. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2506.16565)]\r\n* **GAF**: \"GAF: Gaussian Action Field as a Dynamic World Model for Robotic Manipulation\", **`arXiv 2025.06`**. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2506.14135)] [[Website](https:\u002F\u002Fchaiying1.github.io\u002FGAF.github.io\u002Fproject_page\u002F)]\r\n* \"Prompting with the Future: Open-World Model Predictive Control with Interactive Digital Twins\", **`RSS 2025`**. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2506.13761)] [[Website](https:\u002F\u002Fprompting-with-the-future.github.io\u002F)]\r\n* **V-JEPA 2 and V-JEPA 2-AC**: \"V-JEPA 2: Self-Supervised Video Models Enable Understanding, Prediction and Planning\", **`arXiv 2025.06`**. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2506.09985)] [[Website](https:\u002F\u002Fai.meta.com\u002Fblog\u002Fv-jepa-2-world-model-benchmarks\u002F)] [[Code](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fvjepa2)]\r\n* \"Time-Aware World Model for Adaptive Prediction and Control\", **`ICML 2025`**. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2506.08441)]\r\n* **3DFlowAction**: \"3DFlowAction: Learning Cross-Embodiment Manipulation from 3D Flow World Model\", **`arXiv 2025.06`**. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2506.06199)]\r\n* **ORV**: \"ORV: 4D Occupancy-centric Robot Video Generation\", **`arXiv 2025.06`**. 
[[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2506.03079)] [[Code](https:\u002F\u002Fgithub.com\u002FOrangeSodahub\u002FORV)] [[Website](https:\u002F\u002Forangesodahub.github.io\u002FORV\u002F)]\r\n* **WoMAP**: \"WoMAP: World Models For Embodied Open-Vocabulary Object Localization\", **`arXiv 2025.06`**. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2506.01600)]\r\n* \"Sparse Imagination for Efficient Visual World Model Planning\", **`arXiv 2025.06`**. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2506.01392)]\r\n* **Humanoid World Models**: \"Humanoid World Models: Open World Foundation Models for Humanoid Robotics\", **`arXiv 2025.06`**. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2506.01182)]\r\n* \"Evaluating Robot Policies in a World Model\", **`arXiv 2025.06`**. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2506.00613)] [[Website](https:\u002F\u002Fworld-model-eval.github.io)]\r\n* **OSVI-WM**: \"OSVI-WM: One-Shot Visual Imitation for Unseen Tasks using World-Model-Guided Trajectory Generation\", **`arXiv 2025.05`**. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2505.20425)]\r\n* **WorldEval**: \"WorldEval: World Model as Real-World Robot Policies Evaluator\", **`arXiv 2025.05`**. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2505.19017)] [[Website](https:\u002F\u002Fworldeval.github.io)]\r\n* \"Consistent World Models via Foresight Diffusion\", **`arXiv 2025.05`**. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2505.16474)]\r\n* **Vid2World**: \"Vid2World: Crafting Video Diffusion Models to Interactive World Models\", **`arXiv 2025.05`**. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2505.14357)] [[Website](https:\u002F\u002Fknightnemo.github.io\u002Fvid2world\u002F)]\r\n* **RLVR-World**: \"RLVR-World: Training World Models with Reinforcement Learning\", **`arXiv 2025.05`**. 
[[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2505.13934)] [[Website](https:\u002F\u002Fthuml.github.io\u002FRLVR-World\u002F)] [[Code](https:\u002F\u002Fgithub.com\u002Fthuml\u002FRLVR-World)]\r\n* **LaDi-WM**: \"LaDi-WM: A Latent Diffusion-based World Model for Predictive Manipulation\", **`arXiv 2025.05`**. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2505.11528)]\r\n* **FlowDreamer**: \"FlowDreamer: A RGB-D World Model with Flow-based Motion Representations for Robot Manipulation\", **`arXiv 2025.05`**. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2505.10075)] [[Website](https:\u002F\u002Fsharinka0715.github.io\u002FFlowDreamer\u002F)]\r\n* \"Occupancy World Model for Robots\", **`arXiv 2025.05`**. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2505.05512)]\r\n* \"Learning 3D Persistent Embodied World Models\", **`arXiv 2025.05`**. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2505.05495)]\r\n* **TesserAct**: \"TesserAct: Learning 4D Embodied World Models\", **`arXiv 2025.04`**. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2504.20995)] [[Website](https:\u002F\u002Ftesseractworld.github.io\u002F)]\r\n* **PIN-WM**: \"PIN-WM: Learning Physics-INformed World Models for Non-Prehensile Manipulation\", **`arXiv 2025.04`**. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2504.16693)]\r\n* \"Offline Robotic World Model: Learning Robotic Policies without a Physics Simulator\", **`arXiv 2025.04`**. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2504.16680)]\r\n* **ManipDreamer**: \"ManipDreamer: Boosting Robotic Manipulation World Model with Action Tree and Visual Guidance\", **`arXiv 2025.04`**. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2504.16464)]\r\n* **UWM**: \"Unified World Models: Coupling Video and Action Diffusion for Pretraining on Large Robotic Datasets\", **`arXiv 2025.04`**. 
[[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2504.02792)] [[Website](https:\u002F\u002Fweirdlabuw.github.io\u002Fuwm\u002F)]\r\n* \"Perspective-Shifted Neuro-Symbolic World Models: A Framework for Socially-Aware Robot Navigation\", **`arXiv 2025.03`**. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2503.20425)] \r\n* **AdaWorld**: \"AdaWorld: Learning Adaptable World Models with Latent Actions\", **`arXiv 2025.03`**. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2503.18938)] [[Website](https:\u002F\u002Fadaptable-world-model.github.io\u002F)] \r\n* **DyWA**: \"DyWA: Dynamics-adaptive World Action Model for Generalizable Non-prehensile Manipulation\", **`arXiv 2025.03`**. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2503.16806)] [[Website](https:\u002F\u002Fpku-epic.github.io\u002FDyWA\u002F)] \r\n* \"Towards Suturing World Models: Learning Predictive Models for Robotic Surgical Tasks\", **`arXiv 2025.03`**. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2503.12531)] [[Website](https:\u002F\u002Fmkturkcan.github.io\u002Fsuturingmodels\u002F)] \r\n* \"World Modeling Makes a Better Planner: Dual Preference Optimization for Embodied Task Planning\", **`arXiv 2025.03`**. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2503.10480)] \r\n* **LUMOS**: \"LUMOS: Language-Conditioned Imitation Learning with World Models\", **`ICRA 2025`**. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2503.10370)] [[Website](http:\u002F\u002Flumos.cs.uni-freiburg.de\u002F)] \r\n* \"Object-Centric World Model for Language-Guided Manipulation\", **`arXiv 2025.03`**. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2503.06170)] \r\n* **DEMO^3**: \"Multi-Stage Manipulation with Demonstration-Augmented Reward, Policy, and World Model Learning\", **`arXiv 2025.03`**. 
[[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2503.01837)] [[Website](https:\u002F\u002Fadrialopezescoriza.github.io\u002Fdemo3\u002F)] \r\n* \"Accelerating Model-Based Reinforcement Learning with State-Space World Models\", **`arXiv 2025.02`**. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2502.20168)] \r\n* \"Learning Humanoid Locomotion with World Model Reconstruction\", **`arXiv 2025.02`**. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2502.16230)] \r\n* \"Strengthening Generative Robot Policies through Predictive World Modeling\", **`arXiv 2025.02`**. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2502.00622)] [[Website](https:\u002F\u002Fcomputationalrobotics.seas.harvard.edu\u002FGPC)] \r\n* **Robotic World Model**: \"Robotic World Model: A Neural Network Simulator for Robust Policy Optimization in Robotics\", **`arXiv 2025.01`**. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2501.10100)]\r\n* **RoboHorizon**: \"RoboHorizon: An LLM-Assisted Multi-View World Model for Long-Horizon Robotic Manipulation\", **`arXiv 2025.01`**. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2501.06605)] \r\n* **Dream to Manipulate**: \"Dream to Manipulate: Compositional World Models Empowering Robot Imitation Learning with Imagination\", **`arXiv 2024.12`**. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2412.14957)] [[Website](https:\u002F\u002Fleobarcellona.github.io\u002FDreamToManipulate\u002F)] \r\n* **WHALE**: \"WHALE: Towards Generalizable and Scalable World Models for Embodied Decision-making\", **`arXiv 2024.11`**. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2411.05619)]\r\n* **VisualPredicator**: \"VisualPredicator: Learning Abstract World Models with Neuro-Symbolic Predicates for Robot Planning\", **`arXiv 2024.10`**. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2410.23156)] \r\n* \"Multi-Task Interactive Robot Fleet Learning with Visual World Models\", **`CoRL 2024`**. 
[[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2410.22689)] [[Code](https:\u002F\u002Fut-austin-rpl.github.io\u002Fsirius-fleet\u002F)]\r\n* **X-MOBILITY**: "X-MOBILITY: End-To-End Generalizable Navigation via World Modeling", **`arXiv 2024.10`**. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2410.17491)]\r\n* **PIVOT-R**: "PIVOT-R: Primitive-Driven Waypoint-Aware World Model for Robotic Manipulation", **`NeurIPS 2024`**. [[Paper](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2410.10394)]\r\n* **GLIMO**: "Grounding Large Language Models In Embodied Environment With Imperfect World Models", **`arXiv 2024.10`**. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2410.02664)]\r\n* **EVA**: "EVA: An Embodied World Model for Future Video Anticipation", **`arXiv 2024.10`**. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2410.15461)] [[Website](https:\u002F\u002Fsites.google.com\u002Fview\u002Feva-publi)] \r\n* **PreLAR**: "PreLAR: World Model Pre-training with Learnable Action Representation", **`ECCV 2024`**. [[Paper](https:\u002F\u002Fwww.ecva.net\u002Fpapers\u002Feccv_2024\u002Fpapers_ECCV\u002Fpapers\u002F03363.pdf)] [[Code](https:\u002F\u002Fgithub.com\u002Fzhanglixuan0720\u002FPreLAR)]\r\n* **WMP**: "World Model-based Perception for Visual Legged Locomotion", **`arXiv 2024.09`**. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2409.16784)] [[Project](https:\u002F\u002Fwmp-loco.github.io\u002F)]\r\n* **R-AIF**: "R-AIF: Solving Sparse-Reward Robotic Tasks from Pixels with Active Inference and World Models", **`arXiv 2024.09`**. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2409.14216)]\r\n* "Representing Positional Information in Generative World Models for Object Manipulation", **`arXiv 2024.09`**. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2409.12005)]\r\n* **DexSim2Real$^2$**: "DexSim2Real$^2$: Building Explicit World Model for Precise Articulated Object Dexterous Manipulation", **`arXiv 2024.09`**. 
[[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2409.08750)]\r\n* **DWL**: \"Advancing Humanoid Locomotion: Mastering Challenging Terrains with Denoising World Model Learning\", **`RSS 2024 (Best Paper Award Finalist)`**. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2408.14472)]\r\n* \"Physically Embodied Gaussian Splatting: A Realtime Correctable World Model for Robotics\", **`arXiv 2024.06`**. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2406.10788)] [[Website](https:\u002F\u002Fembodied-gaussians.github.io\u002F)]\r\n* **HRSSM**: \"Learning Latent Dynamic Robust Representations for World Models\", **`ICML 2024`**. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2405.06263)] [[Code](https:\u002F\u002Fgithub.com\u002Fbit1029public\u002FHRSSM)]\r\n* **RoboDreamer**: \"RoboDreamer: Learning Compositional World Models for Robot Imagination\", **`ICML 2024`**. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2404.12377)] [[Code](https:\u002F\u002Frobovideo.github.io\u002F)]\r\n* **COMBO**: \"COMBO: Compositional World Models for Embodied Multi-Agent Cooperation\", **`ECCV 2024`**. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2404.10775)] [[Website](https:\u002F\u002Fvis-www.cs.umass.edu\u002Fcombo\u002F)] [[Code](https:\u002F\u002Fgithub.com\u002FUMass-Foundation-Model\u002FCOMBO)]\r\n* **ManiGaussian**: \"ManiGaussian: Dynamic Gaussian Splatting for Multi-task Robotic Manipulation\", **`arXiv 2024.03`**.  [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2403.08321)] [[Code](https:\u002F\u002Fguanxinglu.github.io\u002FManiGaussian\u002F)]\r\n\r\n---\r\n## World Models for VLA\r\n* **DIAL**: \"DIAL: Decoupling Intent and Action via Latent World Modeling for End-to-End VLA\",  **`arxiv 2026.03`**. 
[[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2603.29844)] [[Website](https:\u002F\u002Fxpeng-robotics.github.io\u002Fdial)]\r\n* \"Towards Practical World Model-based Reinforcement Learning for Vision-Language-Action Models\",  **`arxiv 2026.03`**. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2603.20607)]\r\n* \"Scaling Sim-to-Real Reinforcement Learning for Robot VLAs with Generative 3D Worlds\",  **`arxiv 2026.03`**. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2603.18532)]\r\n* **Fast-WAM**: \"Fast-WAM: Do World Action Models Need Test-time Future Imagination?\",  **`arxiv 2026.03`**. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2603.16666)] [[Website](https:\u002F\u002Fyuantianyuan01.github.io\u002FFastWAM\u002F)] \r\n* **StructVLA**: \"Beyond Dense Futures: World Models as Structured Planners for Robotic Manipulation\",  **`arxiv 2026.03`**. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2603.12553)]\r\n* **World2Act**: \"World2Act: Latent Action Post-Training via Skill-Compositional World Models\",  **`arxiv 2026.03`**. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2603.10422)] [[Website](https:\u002F\u002Fwm2act.github.io\u002F)]\r\n* **AtomVLA**: \"AtomVLA: Scalable Post-Training for Robotic Manipulation via Predictive Latent World Models\",  **`arxiv 2026.03`**. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2603.08519)] \r\n* **Chain of World**: \"Chain of World: World Model Thinking in Latent Motion\",  **`CVPR 2026`**. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2603.03195)] [[Website](https:\u002F\u002Ffx-hit.github.io\u002Fcowvla-io\u002F)] \r\n* \"Learning Physics from Pretrained Video Models: A Multimodal Continuous and Sequential World Interaction Models for Robotic Manipulation\",  **`arxiv 2026.03`**. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2603.00110)] \r\n* **World Guidance**: \"World Guidance: World Modeling in Condition Space for Action Generation\",  **`arxiv 2026.02`**. 
[[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2602.22010)] [[Website](https:\u002F\u002Fselen-suyue.github.io\u002FWoGNet\u002F)] \r\n* **SC-VLA**: \"Self-Correcting VLA: Online Action Refinement via Sparse World Imagination\",  **`arxiv 2026.02`**. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2602.21633)] [[Website](https:\u002F\u002Fgithub.com\u002FKisaragi0\u002FSC-VLA)]\r\n* **Motus**: \"Motus: A Unified Latent Action World Model\",  **`arxiv 2025.12`**. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2512.13030)] [[Website](https:\u002F\u002Fmotus-robotics.github.io\u002Fmotus)] [[Code](https:\u002F\u002Fgithub.com\u002Fthu-ml\u002FMotus)]\r\n* **RoboScape-R**: \"RoboScape-R: Unified Reward-Observation World Models for Generalizable Robotics Training via RL\",  **`arxiv 2025.12`**. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2512.03556)]\r\n* **AdaPower**: \"AdaPower: Specializing World Foundation Models for Predictive Manipulation\",  **`arxiv 2025.12`**. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2512.03538)]\r\n* **RynnVLA-002**: \"RynnVLA-002: A Unified Vision-Language-Action and World Model\",  **`arxiv 2025.11`**. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2511.17502)] [[Code](https:\u002F\u002Fgithub.com\u002Falibaba-damo-academy\u002FRynnVLA-002)] \r\n* **NORA-1.5**: \"NORA-1.5: A Vision-Language-Action Model Trained using World Model- and Action-based Preference Rewards\",  **`arxiv 2025.11`**. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2511.14659)] [[Website](https:\u002F\u002Fdeclare-lab.github.io\u002Fnora-1.5)] [[Code](https:\u002F\u002Fgithub.com\u002Fdeclare-lab\u002Fnora-1.5)] \r\n* \"Dual-Stream Diffusion for World-Model Augmented Vision-Language-Action Model\",  **`arxiv 2025.10`**. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2510.27607)] \r\n* **VLA-RFT**: \"VLA-RFT: Vision-Language-Action Reinforcement Fine-tuning with Verified Rewards in World Simulators\",  **`arxiv 2025.10`**. 
[[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2510.00406)] \r\n* **World-Env**: \"World-Env: Leveraging World Model as a Virtual Environment for VLA Post-Training\",  **`arxiv 2025.09`**. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2509.24948)] \r\n* **MoWM**: \"MoWM: Mixture-of-World-Models for Embodied Planning via Latent-to-Pixel Feature Modulation\",  **`arxiv 2025.09`**. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2509.21797)] \r\n* **LAWM**: \"Latent Action Pretraining Through World Modeling\",  **`arxiv 2025.09`**. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2509.18428)] [[Code](https:\u002F\u002Fgithub.com\u002Fbaheytharwat\u002Flawm)]\r\n* **PAR**: \"Physical Autoregressive Model for Robotic Manipulation without Action Pretraining\",  **`arxiv 2025.08`**. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2508.09822)] [[Website](https:\u002F\u002Fsongzijian1999.github.io\u002FPAR_ProjectPage\u002F)]\r\n* **DreamVLA**: \"DreamVLA: A Vision-Language-Action Model Dreamed with Comprehensive World Knowledge\", **`arxiv 2025.07`**. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2507.04447)] [[Code](https:\u002F\u002Fgithub.com\u002FZhangwenyao1\u002FDreamVLA)] [[Website](https:\u002F\u002Fzhangwenyao1.github.io\u002FDreamVLA\u002F)]\r\n* **WorldVLA**: \"WorldVLA: Towards Autoregressive Action World Model\", **`arxiv 2025.06`**. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2506.21539)] [[Code](https:\u002F\u002Fgithub.com\u002Falibaba-damo-academy\u002FWorldVLA)]\r\n* **UniVLA**: \"UniVLA: Unified Vision-Language-Action Model\",  **`arxiv 2025.06`**. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2506.19850)] [[Code](https:\u002F\u002Frobertwyq.github.io\u002Funivla.github.)]\r\n* **MinD**: \"MinD: Unified Visual Imagination and Control via Hierarchical World Models\", **`arxiv 2025.06`**. 
[[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2506.18897)] [[Website](https:\u002F\u002Fmanipulate-in-dream.github.io\u002F)]\r\n* **FLARE**: \"FLARE: Robot Learning with Implicit World Modeling\",  **`arxiv 2025.05`**. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2505.15659)] [[Code](https:\u002F\u002Fgithub.com\u002FNVIDIA\u002FIsaac-GR00T)] [[Website](https:\u002F\u002Fresearch.nvidia.com\u002Flabs\u002Fgear\u002Fflare)]\r\n* **DreamGen**: \"DreamGen: Unlocking Generalization in Robot Learning through Video World Models\",  **`arxiv 2025.06`**. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2505.12705)] [[Code](https:\u002F\u002Fgithub.com\u002Fnvidia\u002FGR00T-dreams)]\r\n* **CoT-VLA**: \"CoT-VLA: Visual Chain-of-Thought Reasoning for Vision-Language-Action Models\",  **`CVPR 2025`**. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2501.18867)]\r\n* **UP-VLA**: \"UP-VLA: A Unified Understanding and Prediction Model for Embodied Agent\",  **`ICML 2025`**. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2503.22020)] [[Code](https:\u002F\u002Fgithub.com\u002FCladernyJorn\u002FUP-VLA)]\r\n* **3D-VLA**: \"3D-VLA: A 3D Vision-Language-Action Generative World Model\",  **`ICML 2024`**. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2403.09631)]\r\n\r\n---\r\n## World Models for Visual Understanding\r\n* **DILLO**: \"Describe-Then-Act: Proactive Agent Steering via Distilled Language-Action World Models\", **`arxiv 2026.03`**. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2603.23149)] \r\n* **WorldVLM**: \"WorldVLM: Combining World Model Forecasting and Vision-Language Reasoning\", **`arxiv 2026.03`**. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2603.14497)] \r\n* \"When and How Much to Imagine: Adaptive Test-Time Scaling with World Models for Visual Spatial Reasoning\", **`arxiv 2026.01`**. 
[[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2602.08236)] [[Website](https:\u002F\u002Fadaptive-visual-tts.github.io\u002F)] \r\n* "Visual Generation Unlocks Human-Like Reasoning through Multimodal World Models", **`arxiv 2026.01`**. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2601.19834)] [[Website](https:\u002F\u002Fthuml.github.io\u002FReasoning-Visual-World)] \r\n* "Semantic World Models", **`arxiv 2025.10`**. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2510.19818)] [[Website](https:\u002F\u002Fweirdlabuw.github.io\u002Fswm)] \r\n* **DyVA**: "Can World Models Benefit VLMs for World Dynamics?", **`arxiv 2025.10`**. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2510.00855)] [[Website](https:\u002F\u002Fdyva-worldlm.github.io)] \r\n* "Video models are zero-shot learners and reasoners", **`arxiv 2025.09`**. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2509.20328)]\r\n* "From Generation to Generalization: Emergent Few-Shot Learning in Video Diffusion Models", **`arxiv 2025.06`**. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2506.07280)]\r\n\r\n---\r\n## World Models for Autonomous Driving\r\n### Refer to https:\u002F\u002Fgithub.com\u002FLMD0311\u002FAwesome-World-Model\r\n* **DeltaWorld**: "A Frame is Worth One Token: Efficient Generative World Modeling with Delta Tokens", **`CVPR 2026`**. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2604.04913)] [[Code](https:\u002F\u002Fdeltatok.github.io)] \r\n* **DriveDreamer-Policy**: "DriveDreamer-Policy: A Geometry-Grounded World-Action Model for Unified Generation and Planning", **`arXiv 2026.04`**. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2604.01765)] [[Website](https:\u002F\u002Fdrivedreamer-policy.github.io\u002F)] \r\n* **DLWM**: "DLWM: Dual Latent World Models enable Holistic Gaussian-centric Pre-training in Autonomous Driving", **`CVPR 2026`**. 
[[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2604.00969)] \r\n* **AutoWorld**: "AutoWorld: Scaling Multi-Agent Traffic Simulation with Self-Supervised World Models", **`arXiv 2026.03`**. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2603.28963)] \r\n* **OccSim**: "OccSim: Multi-kilometer Simulation with Long-horizon Occupancy World Models", **`arXiv 2026.03`**. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2603.28887)] \r\n* **Uni-World VLA**: "Uni-World VLA: Interleaved World Modeling and Planning for Autonomous Driving", **`arXiv 2026.03`**. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2603.27287)] \r\n* **DreamerAD**: "DreamerAD: Efficient Reinforcement Learning via Latent World Model for Autonomous Driving", **`arXiv 2026.03`**. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2603.24587)] \r\n* **Latent-WAM**: "Latent-WAM: Latent World Action Modeling for End-to-End Autonomous Driving", **`arXiv 2026.03`**. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2603.24581)] \r\n* "Toward Physically Consistent Driving Video World Models under Challenging Trajectories", **`arXiv 2026.03`**. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2603.24506)] [[Website](https:\u002F\u002Fwm-research.github.io\u002FPhyGenesis\u002F)]\r\n* **CounterScene**: "CounterScene: Counterfactual Causal Reasoning in Generative World Models for Safety-Critical Closed-Loop Evaluation", **`arXiv 2026.03`**. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2603.21104)]\r\n* **X-World**: "X-World: Controllable Ego-Centric Multi-Camera World Models for Scalable End-to-End Driving", **`arXiv 2026.03`**. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2603.19979)]\r\n* **DynFlowDrive**: "DynFlowDrive: Flow-Based Dynamic World Modeling for Autonomous Driving", **`arXiv 2026.03`**. 
[[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2603.19675)] [[Code](https:\u002F\u002Fgithub.com\u002Fxiaolul2\u002FDynFlowDrive)]\r\n* **Enactor**: "Enactor: From Traffic Simulators to Surrogate World Models", **`arXiv 2026.03`**. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2603.18266)]\r\n* **VectorWorld**: "VectorWorld: Efficient Streaming World Model via Diffusion Flow on Vector Graphs", **`arXiv 2026.03`**. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2603.17652)] [[Code](https:\u002F\u002Fgithub.com\u002Fjiangchaokang\u002FVectorWorld)]\r\n* "Bridging Scene Generation and Planning: Driving with World Model via Unifying Vision and Motion Representation", **`arXiv 2026.03`**. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2603.14948)] [[Code](https:\u002F\u002Fgithub.com\u002FTabGuigui\u002FWorldDrive)]\r\n* "Latent World Models for Automated Driving: A Unified Taxonomy, Evaluation Framework, and Open Challenges", **`arXiv 2026.03`**. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2603.09086)]\r\n* "Kinematics-Aware Latent World Models for Data-Efficient Autonomous Driving", **`arXiv 2026.03`**. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2603.07264)]\r\n* **ShareVerse**: "ShareVerse: Multi-Agent Consistent Video Generation for Shared World Modeling", **`arXiv 2026.03`**. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2603.02697)]\r\n* "Risk-Aware World Model Predictive Control for Generalizable End-to-End Autonomous Driving", **`arXiv 2026.02`**. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2602.23259)]\r\n* **RAYNOVA**: "RAYNOVA: Scale-Temporal Autoregressive World Modeling in Ray Space", **`CVPR 2026`**. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2602.20685)] [[Website](https:\u002F\u002Fraynova-ai.github.io\u002F)]\r\n* "When World Models Dream Wrong: Physical-Conditioned Adversarial Attacks against World Models", **`arXiv 2026.02`**. 
[[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2602.18739)] \r\n* "Factored Latent Action World Models", **`arXiv 2026.02`**. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2602.16229)] \r\n* **ResWorld**: "ResWorld: Temporal Residual World Model for End-to-End Autonomous Driving", **`arXiv 2026.02`**. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2602.10884)] [[Code](https:\u002F\u002Fgithub.com\u002Fmengtan00\u002FResWorld.git)]\r\n* **DriveWorld-VLA**: "DriveWorld-VLA: Unified Latent-Space World Modeling with Vision-Language-Action for Autonomous Driving", **`arXiv 2026.02`**. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2602.06521)] [[Code](https:\u002F\u002Fgithub.com\u002Fliulin815\u002FDriveWorld-VLA.git)]\r\n* "Safe Urban Traffic Control via Uncertainty-Aware Conformal Prediction and World-Model Reinforcement Learning", **`arXiv 2026.02`**. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2602.04821)]\r\n* **InstaDrive**: "InstaDrive: Instance-Aware Driving World Models for Realistic and Consistent Video Generation", **`arXiv 2026.02`**. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2602.03242)] [[Website](https:\u002F\u002Fshanpoyang654.github.io\u002FInstaDrive\u002Fpage.html)] \r\n* **ConsisDrive**: "ConsisDrive: Identity-Preserving Driving World Models for Video Generation by Instance Mask", **`arXiv 2026.02`**. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2602.03213)] [[Website](https:\u002F\u002Fshanpoyang654.github.io\u002FConsisDrive\u002Fpage.html)] \r\n* **MAD**: "MAD: Motion Appearance Decoupling for efficient Driving World Models", **`arXiv 2026.01`**. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2601.09452)] [[Website](https:\u002F\u002Fvita-epfl.github.io\u002FMAD-World-Model\u002F)] \r\n* **UniDrive-WM**: "UniDrive-WM: Unified Understanding, Planning and Generation World Model For Autonomous Driving", **`arXiv 2026.01`**. 
[[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2601.04453)] [[Website](https:\u002F\u002Funidrive-wm.github.io\u002FUniDrive-WM)] \r\n* **DriveLaW**: "DriveLaW: Unifying Planning and Video Generation in a Latent Driving World", **`arXiv 2025.12`**. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2512.23421)] \r\n* **GaussianDWM**: "GaussianDWM: 3D Gaussian Driving World Model for Unified Scene Understanding and Multi-Modal Generation", **`arXiv 2025.12`**. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2512.23180)] [[Code](https:\u002F\u002Fgithub.com\u002Fdtc111111\u002FGaussianDWM)]\r\n* **WorldRFT**: "WorldRFT: Latent World Model Planning with Reinforcement Fine-Tuning for Autonomous Driving", **`AAAI 2026`**. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2512.19133)] \r\n* **InDRiVE**: "InDRiVE: Reward-Free World-Model Pretraining for Autonomous Driving via Latent Disagreement", **`arXiv 2025.12`**. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2512.18850)] \r\n* **GenieDrive**: "GenieDrive: Towards Physics-Aware Driving World Model with 4D Occupancy Guided Video Generation", **`arXiv 2025.12`**. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2512.12751)] [[Website](https:\u002F\u002Fhuster-yzy.github.io\u002Fgeniedrive_project_page\u002F)]\r\n* **FutureX**: "FutureX: Enhance End-to-End Autonomous Driving via Latent Chain-of-Thought World Model", **`arXiv 2025.12`**. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2512.11226)]\r\n* "Latent Chain-of-Thought World Modeling for End-to-End Driving", **`arXiv 2025.12`**. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2512.10226)]\r\n* **MindDrive**: "MindDrive: An All-in-One Framework Bridging World Models and Vision-Language Model for End-to-End Autonomous Driving", **`arXiv 2025.12`**. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2512.04441)]\r\n* "Think Before You Drive: World Model-Inspired Multimodal Grounding for Autonomous Vehicles", **`arXiv 2025.12`**. 
[[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2512.03454)]\r\n* **U4D**: "U4D: Uncertainty-Aware 4D World Modeling from LiDAR Sequences", **`arXiv 2025.12`**. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2512.02982)]\r\n* "Vehicle Dynamics Embedded World Models for Autonomous Driving", **`arXiv 2025.12`**. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2512.02417)]\r\n* "World Model Robustness via Surprise Recognition", **`arXiv 2025.12`**. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2512.01119)]\r\n* **SparseWorld-TC**: "SparseWorld-TC: Trajectory-Conditioned Sparse Occupancy World Model", **`arXiv 2025.11`**. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2511.22039)]\r\n* **AD-R1**: "AD-R1: Closed-Loop Reinforcement Learning for End-to-End Autonomous Driving with Impartial World Models", **`arXiv 2025.11`**. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2511.20325)]\r\n* **Map-World**: "Map-World: Masked Action planning and Path-Integral World Model for Autonomous Driving", **`arXiv 2025.11`**. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2511.20156)]\r\n* **WPT**: "WPT: World-to-Policy Transfer via Online World Model Distillation", **`arXiv 2025.11`**. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2511.20095)]\r\n* **Percept-WAM**: "Percept-WAM: Perception-Enhanced World-Awareness-Action Model for Robust End-to-End Autonomous Driving", **`arXiv 2025.11`**. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2511.19221)]\r\n* **Thinking Ahead**: "Thinking Ahead: Foresight Intelligence in MLLMs and World Models", **`arXiv 2025.11`**. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2511.18735)]\r\n* **LiSTAR**: "LiSTAR: Ray-Centric World Models for 4D LiDAR Sequences in Autonomous Driving", **`arXiv 2025.11`**. 
[[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2511.16049)] [[Website](https:\u002F\u002Focean-luna.github.io\u002FLiSTAR.gitub.io)]\r\n* "Dual-Mind World Models: A General Framework for Learning in Dynamic Wireless Networks", **`arXiv 2025.10`**. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2510.24546)]\r\n* "Addressing Corner Cases in Autonomous Driving: A World Model-based Approach with Mixture of Experts and LLMs", **`arXiv 2025.10`**. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2510.21867)]\r\n* "From Forecasting to Planning: Policy World Model for Collaborative State-Action Prediction", **`NeurIPS 2025`**. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2510.19654)] [[Code](https:\u002F\u002Fgithub.com\u002F6550Zhao\u002FPolicy-World-Model)] \r\n* "Rethinking Driving World Model as Synthetic Data Generator for Perception Tasks", **`arXiv 2025.10`**. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2510.19195)] [[Website](https:\u002F\u002Fwm-research.github.io\u002FDream4Drive\u002F)] \r\n* **OmniNWM**: "OmniNWM: Omniscient Driving Navigation World Models", **`arXiv 2025.10`**. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2510.18313)] [[Website](https:\u002F\u002Farlo0o.github.io\u002FOmniNWM\u002F)] \r\n* **SparseWorld**: "SparseWorld: A Flexible, Adaptive, and Efficient 4D Occupancy World Model Powered by Sparse and Dynamic Queries", **`arXiv 2025.10`**. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2510.17482)] [[Code](https:\u002F\u002Fgithub.com\u002FMSunDYY\u002FSparseWorld)] \r\n* "Vision-Centric 4D Occupancy Forecasting and Planning via Implicit Residual World Models", **`arXiv 2025.10`**. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2510.16729)] \r\n* **DriveVLA-W0**: "DriveVLA-W0: World Models Amplify Data Scaling Law in Autonomous Driving", **`arXiv 2025.10`**. 
[[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2510.12560)] [[Code](https:\u002F\u002Fgithub.com\u002FBraveGroup\u002FDriveVLA-W0)] \r\n* **CoIRL-AD**: \"CoIRL-AD: Collaborative-Competitive Imitation-Reinforcement Learning in Latent World Models for Autonomous Driving\", **`arXiv 2025.10`**. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2510.12560)] [[Code](https:\u002F\u002Fgithub.com\u002FSEU-zxj\u002FCoIRL-AD)] \r\n* **TeraSim-World**: \"TeraSim-World: Worldwide Safety-Critical Data Synthesis for End-to-End Autonomous Driving\", **`arXiv 2025.09`**. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2509.13164)] [[Website](https:\u002F\u002Fwjiawei.com\u002Fterasim-world-web\u002F)] \r\n* \"Enhancing Physical Consistency in Lightweight World Models\", **`arXiv 2025.09`**. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2509.12437)]\r\n* **OccTENS**: \"OccTENS: 3D Occupancy World Model via Temporal Next-Scale Prediction\", **`arXiv 2025.09`**. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2509.03887)]\r\n* **IRL-VLA**: \"IRL-VLA: Training an Vision-Language-Action Policy via Reward World Model\", **`arXiv 2025.08`**. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2508.06571)] [[Website](https:\u002F\u002Flidarcrafter.github.io)] [[Code](https:\u002F\u002Fgithub.com\u002Flidarcrafter\u002Ftoolkit)]\r\n* **LiDARCrafter**: \"LiDARCrafter: Dynamic 4D World Modeling from LiDAR Sequences\", **`arXiv 2025.08`**. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2508.03692)] [[Website](https:\u002F\u002Flidarcrafter.github.io)] [[Code](https:\u002F\u002Fgithub.com\u002Flidarcrafter\u002Ftoolkit)]\r\n* **FASTopoWM**: \"FASTopoWM: Fast-Slow Lane Segment Topology Reasoning with Latent World Models\", **`arXiv 2025.07`**. 
[[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2507.23325)] [[Code](https:\u002F\u002Fgithub.com\u002FYimingYang23\u002FFASTopoWM)]\r\n* **Orbis**: "Orbis: Overcoming Challenges of Long-Horizon Prediction in Driving World Models", **`arXiv 2025.07`**. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2507.13162)] [[Code](https:\u002F\u002Flmb-freiburg.github.io\u002Forbis.github.io\u002F)]\r\n* "World Model-Based End-to-End Scene Generation for Accident Anticipation in Autonomous Driving", **`arXiv 2025.07`**. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2507.12762)]\r\n* **NRSeg**: "NRSeg: Noise-Resilient Learning for BEV Semantic Segmentation via Driving World Models", **`arXiv 2025.07`**. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2507.04002)] [[Code](https:\u002F\u002Fgithub.com\u002Flynn-yu\u002FNRSeg)]\r\n* **World4Drive**: "World4Drive: End-to-End Autonomous Driving via Intention-aware Physical Latent World Model", **`ICCV 2025`**. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2507.00603)] [[Code](https:\u002F\u002Fgithub.com\u002Fucaszyp\u002FWorld4Drive)]\r\n* **Epona**: "Epona: Autoregressive Diffusion World Model for Autonomous Driving", **`ICCV 2025`**. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2506.24113)] [[Code](https:\u002F\u002Fkevin-thu.github.io\u002FEpona\u002F)]\r\n* "Towards foundational LiDAR world models with efficient latent flow matching", **`arXiv 2025.06`**. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2506.23434)]\r\n* **SceneDiffuser++**: "SceneDiffuser++: City-Scale Traffic Simulation via a Generative World Model", **`CVPR 2025`**. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2506.21976)]\r\n* **COME**: "COME: Adding Scene-Centric Forecasting Control to Occupancy World Model", **`arXiv 2025.06`**. 
[[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2506.13260)] [[Code](https:\u002F\u002Fgithub.com\u002Fsynsin0\u002FCOME)]\r\n* **STAGE**: \"STAGE: A Stream-Centric Generative World Model for Long-Horizon Driving-Scene Simulation\", **`arXiv 2025.06`**. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2506.13138)] \r\n* **ReSim**: \"ReSim: Reliable World Simulation for Autonomous Driving\", **`arXiv 2025.06`**. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2506.09981)] [[Code](https:\u002F\u002Fgithub.com\u002FOpenDriveLab\u002FReSim)] [[Project Page](https:\u002F\u002Fopendrivelab.com\u002FReSim)]\r\n* \"Ego-centric Learning of Communicative World Models for Autonomous Driving\", **`arXiv 2025.06`**. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2506.08149)] \r\n* **Dreamland**: \"Dreamland: Controllable World Creation with Simulator and Generative Models\", **`arXiv 2025.06`**. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2506.08006)] [[Project Page](https:\u002F\u002Fmetadriverse.github.io\u002Fdreamland\u002F)] \r\n* **LongDWM**: \"LongDWM: Cross-Granularity Distillation for Building a Long-Term Driving World Model\", **`arXiv 2025.06`**. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2506.01546)] [[Project Page](https:\u002F\u002Fwang-xiaodong1899.github.io\u002Flongdwm\u002F)] \r\n* **GeoDrive**: \"GeoDrive: 3D Geometry-Informed Driving World Model with Precise Action Control\", **`arXiv 2025.05`**. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2505.22421)] [[Code](https:\u002F\u002Fgithub.com\u002Fantonioo-c\u002FGeoDrive)] \r\n* **FutureSightDrive**: \"FutureSightDrive: Thinking Visually with Spatio-Temporal CoT for Autonomous Driving\", **`NeurIPS 2025`**. 
[[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2505.17685)] [[Code](https:\u002F\u002Fgithub.com\u002FMIV-XJTU\u002FFSDrive)] \r\n* **Raw2Drive**: \"Raw2Drive: Reinforcement Learning with Aligned World Models for End-to-End Autonomous Driving (in CARLA v2)\", **`arXiv 2025.05`**. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2505.16394)]\r\n* **VL-SAFE**: \"VL-SAFE: Vision-Language Guided Safety-Aware Reinforcement Learning with World Models for Autonomous Driving\", **`arXiv 2025.05`**. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2505.16377)] [[Project Page](https:\u002F\u002Fys-qu.github.io\u002Fvlsafe-website\u002F)] \r\n* **PosePilot**: \"PosePilot: Steering Camera Pose for Generative World Models with Self-supervised Depth\", **`arXiv 2025.05`**. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2505.01729)]\r\n* \"World Model-Based Learning for Long-Term Age of Information Minimization in Vehicular Networks\", **`arXiv 2025.05`**. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2505.01712)]\r\n* \"Learning to Drive from a World Model\", **`arXiv 2025.04`**. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2504.19077)]\r\n* **DriVerse**: \"DriVerse: Navigation World Model for Driving Simulation via Multimodal Trajectory Prompting and Motion Alignment\", **`arXiv 2025.04`**. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2504.18576)] \r\n* \"End-to-End Driving with Online Trajectory Evaluation via BEV World Model\", **`arXiv 2025.04`**. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2504.01941)] [[Code](https:\u002F\u002Fgithub.com\u002FliyingyanUCAS\u002FWoTE)] \r\n* \"Knowledge Graphs as World Models for Semantic Material-Aware Obstacle Handling in Autonomous Vehicles\", **`arXiv 2025.03`**. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2503.21232)]\r\n* **MiLA**: \"MiLA: Multi-view Intensive-fidelity Long-term Video Generation World Model for Autonomous Driving\", **`arXiv 2025.03`**. 
[[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2503.15875)] [[Project Page](https:\u002F\u002Fgithub.com\u002Fxiaomi-mlab\u002Fmila.github.io)] \r\n* **SimWorld**: \"SimWorld: A Unified Benchmark for Simulator-Conditioned Scene Generation via World Model\", **`arXiv 2025.03`**. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2503.13952)] [[Project Page](https:\u002F\u002Fgithub.com\u002FLi-Zn-H\u002FSimWorld)] \r\n* **UniFuture**: \"Seeing the Future, Perceiving the Future: A Unified Driving World Model for Future Generation and Perception\", **`arXiv 2025.03`**. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2503.13587)] [[Project Page](https:\u002F\u002Fgithub.com\u002Fdk-liang\u002FUniFuture)] \r\n* **EOT-WM**: \"Other Vehicle Trajectories Are Also Needed: A Driving World Model Unifies Ego-Other Vehicle Trajectories in Video Latent Space\", **`arXiv 2025.03`**. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2503.09215)]\r\n* \"Temporal Triplane Transformers as Occupancy World Models\", **`arXiv 2025.03`**. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2503.07338)]\r\n* **InDRiVE**: \"InDRiVE: Intrinsic Disagreement based Reinforcement for Vehicle Exploration through Curiosity Driven Generalized World Model\", **`arXiv 2025.02`**. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2503.05573)]\r\n* **MaskGWM**: \"MaskGWM: A Generalizable Driving World Model with Video Mask Reconstruction\", **`arXiv 2025.02`**. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2502.11663)]\r\n* **Dream to Drive**: \"Dream to Drive: Model-Based Vehicle Control Using Analytic World Models\", **`arXiv 2025.02`**. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2502.10012)]\r\n* \"Semi-Supervised Vision-Centric 3D Occupancy World Model for Autonomous Driving\", **`ICLR 2025`**. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2502.07309)]\r\n* \"Dream to Drive with Predictive Individual World Model\", **`IEEE TIV`**. 
[[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2501.16733)] [[Code](https:\u002F\u002Fgithub.com\u002Fgaoyinfeng\u002FPIWM)]\r\n* **HERMES**: \"HERMES: A Unified Self-Driving World Model for Simultaneous 3D Scene Understanding and Generation\", **`arXiv 2025.01`**. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2501.14729)] \r\n* **AdaWM**: \"AdaWM: Adaptive World Model based Planning for Autonomous Driving\", **`ICLR 2025`**. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2501.13072)] \r\n* **AD-L-JEPA**: \"AD-L-JEPA: Self-Supervised Spatial World Models with Joint Embedding Predictive Architecture for Autonomous Driving with LiDAR Data\", **`arXiv 2025.01`**. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2501.04969)]  \r\n* **DrivingWorld**: \"DrivingWorld: Constructing World Model for Autonomous Driving via Video GPT\", **`arXiv 2024.12`**. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2412.19505)] [[Code](https:\u002F\u002Fgithub.com\u002FYvanYin\u002FDrivingWorld)] [[Project Page](https:\u002F\u002Fhuxiaotaostasy.github.io\u002FDrivingWorld\u002Findex.html)] \r\n* **DrivingGPT**: \"DrivingGPT: Unifying Driving World Modeling and Planning with Multi-modal Autoregressive Transformers\", **`arXiv 2024.12`**. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2412.18607)] [[Project Page](https:\u002F\u002Frogerchern.github.io\u002FDrivingGPT\u002F)]\r\n* \"An Efficient Occupancy World Model via Decoupled Dynamic Flow and Image-assisted Training\", **`arXiv 2024.12`**. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2412.13772)]\r\n* **GEM**: \"GEM: A Generalizable Ego-Vision Multimodal World Model for Fine-Grained Ego-Motion, Object Dynamics, and Scene Composition Control\", **`arXiv 2024.12`**. 
[[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2412.11198)] [[Project Page](https:\u002F\u002Fvita-epfl.github.io\u002FGEM.github.io\u002F)]\r\n* **GaussianWorld**: \"GaussianWorld: Gaussian World Model for Streaming 3D Occupancy Prediction\", **`arXiv 2024.12`**. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2412.04380)] [[Code](https:\u002F\u002Fgithub.com\u002Fzuosc19\u002FGaussianWorld)]\r\n* **Doe-1**: \"Doe-1: Closed-Loop Autonomous Driving with Large World Model\", **`arXiv 2024.12`**. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2412.09627)] [[Project Page](https:\u002F\u002Fwzzheng.net\u002FDoe\u002F)] [[Code](https:\u002F\u002Fgithub.com\u002Fwzzheng\u002FDoe)]\r\n* \"Physical Informed Driving World Model\", **`arXiv 2024.12`**. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2412.08410)] [[Project Page](https:\u002F\u002Fmetadrivescape.github.io\u002Fpapers_project\u002FDrivePhysica\u002Fpage.html)]\r\n* **InfiniCube**: \"InfiniCube: Unbounded and Controllable Dynamic 3D Driving Scene Generation with World-Guided Video Models\", **`arXiv 2024.12`**. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2412.03934)] [[Project Page](https:\u002F\u002Fresearch.nvidia.com\u002Flabs\u002Ftoronto-ai\u002Finfinicube\u002F)]\r\n* **InfinityDrive**: \"InfinityDrive: Breaking Time Limits in Driving World Models\", **`arXiv 2024.12`**. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2412.01522)] [[Project Page](https:\u002F\u002Fmetadrivescape.github.io\u002Fpapers_project\u002FInfinityDrive\u002Fpage.html)]\r\n* **ReconDreamer**: \"ReconDreamer: Crafting World Models for Driving Scene Reconstruction via Online Restoration\", **`arXiv 2024.11`**. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2411.19548)] [[Project Page](https:\u002F\u002Frecondreamer.github.io\u002F)]\r\n* **Imagine-2-Drive**: \"Imagine-2-Drive: High-Fidelity World Modeling in CARLA for Autonomous Vehicles\", **`ICRA 2025`**. 
[[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2411.10171)] [[Project Page](https:\u002F\u002Fanantagrg.github.io\u002FImagine-2-Drive.github.io\u002F)]\r\n* **DynamicCity**: \"DynamicCity: Large-Scale 4D Occupancy Generation from Dynamic Scenes\", **`ICLR 2025 Spotlight`**. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2410.18084)] [[Project Page](https:\u002F\u002Fdynamic-city.github.io)] [[Code](https:\u002F\u002Fgithub.com\u002F3DTopia\u002FDynamicCity)]\r\n* **DriveDreamer4D**: \"World Models Are Effective Data Machines for 4D Driving Scene Representation\", **`arXiv 2024.10`**. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2410.13571)] [[Project Page](https:\u002F\u002Fdrivedreamer4d.github.io\u002F)]\r\n* **DOME**: \"Taming Diffusion Model into High-Fidelity Controllable Occupancy World Model\", **`arXiv 2024.10`**. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2410.10429)] [[Project Page](https:\u002F\u002Fgusongen.github.io\u002FDOME)]\r\n* **SSR**: \"Does End-to-End Autonomous Driving Really Need Perception Tasks?\", **`arXiv 2024.09`**. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2409.18341)] [[Code](https:\u002F\u002Fgithub.com\u002FPeidongLi\u002FSSR)]\r\n* \"Mitigating Covariate Shift in Imitation Learning for Autonomous Vehicles Using Latent Space Generative World Models\", **`arXiv 2024.09`**. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2409.16663)]\r\n* **LatentDriver**: \"Learning Multiple Probabilistic Decisions from Latent World Model in Autonomous Driving\", **`arXiv 2024.09`**. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2409.15730)] [[Code](https:\u002F\u002Fgithub.com\u002FSephirex-X\u002FLatentDriver)]\r\n* **RenderWorld**: \"World Model with Self-Supervised 3D Label\", **`arXiv 2024.09`**. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2409.11356)]\r\n* **OccLLaMA**: \"An Occupancy-Language-Action Generative World Model for Autonomous Driving\", **`arXiv 2024.09`**. 
[[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2409.03272)]\r\n* **DriveGenVLM**: \"Real-world Video Generation for Vision Language Model based Autonomous Driving\", **`arXiv 2024.08`**. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2408.16647)]\r\n* **Drive-OccWorld**: \"Driving in the Occupancy World: Vision-Centric 4D Occupancy Forecasting and Planning via World Models for Autonomous Driving\", **`arXiv 2024.08`**. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2408.14197)]\r\n* **CarFormer**: \"Self-Driving with Learned Object-Centric Representations\", **`ECCV 2024`**. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2407.15843)] [[Code](https:\u002F\u002Fkuis-ai.github.io\u002FCarFormer\u002F)]\r\n* **BEVWorld**: \"A Multimodal World Model for Autonomous Driving via Unified BEV Latent Space\", **`arXiv 2024.07`**. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2407.05679)] [[Code](https:\u002F\u002Fgithub.com\u002Fzympsyche\u002FBevWorld)]\r\n* **TOKEN**: \"Tokenize the World into Object-level Knowledge to Address Long-tail Events in Autonomous Driving\", **`arXiv 2024.07`**. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2407.00959)]\r\n* **UMAD**: \"Unsupervised Mask-Level Anomaly Detection for Autonomous Driving\", **`arXiv 2024.06`**. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2406.06370)]\r\n* **SimGen**: \"Simulator-conditioned Driving Scene Generation\", **`arXiv 2024.06`**. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2406.09386)] [[Code](https:\u002F\u002Fmetadriverse.github.io\u002Fsimgen\u002F)]\r\n* **AdaptiveDriver**: \"Planning with Adaptive World Models for Autonomous Driving\", **`arXiv 2024.06`**. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2406.10714)] [[Code](https:\u002F\u002Farunbalajeev.github.io\u002Fworld_models_planning\u002Fworld_model_paper.html)]\r\n* **UnO**: \"Unsupervised Occupancy Fields for Perception and Forecasting\", **`CVPR 2024`**. 
[[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2406.08691)] [[Code](https:\u002F\u002Fwaabi.ai\u002Fresearch\u002Funo)]\r\n* **LAW**: \"Enhancing End-to-End Autonomous Driving with Latent World Model\", **`arXiv 2024.06`**. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2406.08481)] [[Code](https:\u002F\u002Fgithub.com\u002FBraveGroup\u002FLAW)]\r\n* **Delphi**: \"Unleashing Generalization of End-to-End Autonomous Driving with Controllable Long Video Generation\", **`arXiv 2024.06`**. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2406.01349)] [[Code](https:\u002F\u002Fgithub.com\u002Fwestlake-autolab\u002FDelphi)]\r\n* **OccSora**: \"4D Occupancy Generation Models as World Simulators for Autonomous Driving\", **`arXiv 2024.05`**. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2405.20337)] [[Code](https:\u002F\u002Fgithub.com\u002Fwzzheng\u002FOccSora)]\r\n* **MagicDrive3D**: \"Controllable 3D Generation for Any-View Rendering in Street Scenes\", **`arXiv 2024.05`**. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2405.14475)] [[Code](https:\u002F\u002Fgaoruiyuan.com\u002Fmagicdrive3d\u002F)]\r\n* **Vista**: \"A Generalizable Driving World Model with High Fidelity and Versatile Controllability\", **`NeurIPS 2024`**. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2405.17398)] [[Code](https:\u002F\u002Fgithub.com\u002FOpenDriveLab\u002FVista)]\r\n* **CarDreamer**: \"Open-Source Learning Platform for World Model based Autonomous Driving\", **`arXiv 2024.05`**. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2405.09111)] [[Code](https:\u002F\u002Fgithub.com\u002Fucd-dare\u002FCarDreamer)]\r\n* **DriveSim**: \"Probing Multimodal LLMs as World Models for Driving\", **`arXiv 2024.05`**. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2405.05956)] [[Code](https:\u002F\u002Fgithub.com\u002Fsreeramsa\u002FDriveSim)]\r\n* **DriveWorld**: \"4D Pre-trained Scene Understanding via World Models for Autonomous Driving\", **`CVPR 2024`**. 
[[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2405.04390)]\r\n* **LidarDM**: \"Generative LiDAR Simulation in a Generated World\", **`arXiv 2024.04`**. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2404.02903)] [[Code](https:\u002F\u002Fgithub.com\u002Fvzyrianov\u002Flidardm)]\r\n* **SubjectDrive**: \"Scaling Generative Data in Autonomous Driving via Subject Control\", **`arXiv 2024.03`**. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2403.19438)] [[Project](https:\u002F\u002Fsubjectdrive.github.io\u002F)]\r\n* **DriveDreamer-2**: \"LLM-Enhanced World Models for Diverse Driving Video Generation\", **`arXiv 2024.03`**. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2403.06845)] [[Code](https:\u002F\u002Fdrivedreamer2.github.io\u002F)]\r\n* **Think2Drive**: \"Efficient Reinforcement Learning by Thinking in Latent World Model for Quasi-Realistic Autonomous Driving\", **`ECCV 2024`**. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2402.16720)]\r\n* **MARL-CCE**: \"Modelling Competitive Behaviors in Autonomous Driving Under Generative World Model\", **`ECCV 2024`**. [[Paper](https:\u002F\u002Fwww.ecva.net\u002Fpapers\u002Feccv_2024\u002Fpapers_ECCV\u002Fpapers\u002F05085.pdf)] [[Code](https:\u002F\u002Fgithub.com\u002Fqiaoguanren\u002FMARL-CCE)]\r\n* **GenAD**: \"Generalized Predictive Model for Autonomous Driving\", **`CVPR 2024`**. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2403.09630)] [[Data](https:\u002F\u002Fgithub.com\u002FOpenDriveLab\u002FDriveAGI?tab=readme-ov-file#genad-dataset-opendv-youtube)]\r\n* **GenAD**: \"Generative End-to-End Autonomous Driving\", **`ECCV 2024`**. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2402.11502)] [[Code](https:\u002F\u002Fgithub.com\u002Fwzzheng\u002FGenAD)]\r\n* **NeMo**: \"Neural Volumetric World Models for Autonomous Driving\", **`ECCV 2024`**. 
[[Paper](https:\u002F\u002Fwww.ecva.net\u002Fpapers\u002Feccv_2024\u002Fpapers_ECCV\u002Fpapers\u002F02571.pdf)]\r\n* **ViDAR**: \"Visual Point Cloud Forecasting enables Scalable Autonomous Driving\", **`CVPR 2024`**. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2312.17655)] [[Code](https:\u002F\u002Fgithub.com\u002FOpenDriveLab\u002FViDAR)]\r\n* **Drive-WM**: \"Driving into the Future: Multiview Visual Forecasting and Planning with World Model for Autonomous Driving\", **`CVPR 2024`**. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2311.17918)] [[Code](https:\u002F\u002Fgithub.com\u002FBraveGroup\u002FDrive-WM)]\r\n* **Cam4DOCC**: \"Benchmark for Camera-Only 4D Occupancy Forecasting in Autonomous Driving Applications\", **`CVPR 2024`**. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2311.17663)] [[Code](https:\u002F\u002Fgithub.com\u002Fhaomo-ai\u002FCam4DOcc)]\r\n* **Panacea**: \"Panoramic and Controllable Video Generation for Autonomous Driving\", **`CVPR 2024`**. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2311.16813)] [[Code](https:\u002F\u002Fpanacea-ad.github.io\u002F)]\r\n* **OccWorld**: \"Learning a 3D Occupancy World Model for Autonomous Driving\", **`ECCV 2024`**. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2311.16038)] [[Code](https:\u002F\u002Fgithub.com\u002Fwzzheng\u002FOccWorld)]\r\n* **Copilot4D**: \"Learning Unsupervised World Models for Autonomous Driving via Discrete Diffusion\", **`ICLR 2024`**. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2311.01017)]\r\n* **DrivingDiffusion**: \"Layout-Guided multi-view driving scene video generation with latent diffusion model\", **`ECCV 2024`**. 
[[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2310.07771)] [[Code](https:\u002F\u002Fgithub.com\u002Fshalfun\u002FDrivingDiffusion)]\r\n* **SafeDreamer**: \"Safe Reinforcement Learning with World Models\", **`ICLR 2024`**. [[Paper](https:\u002F\u002Fopenreview.net\u002Fforum?id=tsE5HLYtYg)] [[Code](https:\u002F\u002Fgithub.com\u002FPKU-Alignment\u002FSafeDreamer)]\r\n* **MagicDrive**: \"Street View Generation with Diverse 3D Geometry Control\", **`ICLR 2024`**. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2310.02601)] [[Code](https:\u002F\u002Fgithub.com\u002Fcure-lab\u002FMagicDrive)]\r\n* **DriveDreamer**: \"Towards Real-world-driven World Models for Autonomous Driving\", **`ECCV 2024`**. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2309.09777)] [[Code](https:\u002F\u002Fgithub.com\u002FJeffWang987\u002FDriveDreamer)]\r\n* **SEM2**: \"Enhance Sample Efficiency and Robustness of End-to-end Urban Autonomous Driving via Semantic Masked World Model\", **`TITS`**. [[Paper](https:\u002F\u002Fieeexplore.ieee.org\u002Fabstract\u002Fdocument\u002F10538211\u002F)]\r\n\r\n\r\n----\r\n## Citation\r\nIf you find this repository useful, please consider citing this list:\r\n```\r\n@misc{leo2024worldmodelspaperslist,\r\n    title = {Awesome-World-Models},\r\n    author = {Leo Fan},\r\n    journal = {GitHub repository},\r\n    url = {https:\u002F\u002Fgithub.com\u002Fleofan90\u002FAwesome-World-Models},\r\n    year = {2024},\r\n}\r\n```\r\n\r\n\r\n\r\n\r\n","# Awesome World Models for Robotics [![Awesome](https:\u002F\u002Fcdn.rawgit.com\u002Fsindresorhus\u002Fawesome\u002Fd7305f38d29fed78fa85652e3a63e154dd8e8829\u002Fmedia\u002Fbadge.svg)](https:\u002F\u002Fgithub.com\u002Fsindresorhus\u002Fawesome)\n\nThis repository provides a curated list of **papers on world models for general video generation, embodied AI, and autonomous driving**. The template is adapted from [Awesome-LLM-Robotics](https:\u002F\u002Fgithub.com\u002FGT-RIPL\u002FAwesome-LLM-Robotics) and [Awesome-World-Model](https:\u002F\u002Fgithub.com\u002FLMD0311\u002FAwesome-World-Model)\u003Cbr>\n\n#### 
Contributions are welcome! Please feel free to submit a [pull request](https:\u002F\u002Fgithub.com\u002Fleofan90\u002FAwesome-World-Models\u002Fblob\u002Fmain\u002Fhow-to-PR.md) or reach out via [email](mailto:chunkaifan-changetoat-stu-changetodot-pku--changetodot-changetocn) to add new papers! \u003Cbr>\n\nIf you find this repository useful, please consider [citing](#citation) and starring⭐ this list. You are also welcome to share it with others!\n\n---\n## Overview\n\n- [Awesome World Models for Robotics ](#awesome-world-models-for-robotics-)\n      - [Contributions are welcome! Please feel free to submit pull requests or reach out via email to add papers! ](#contributions-are-welcome-please-feel-free-to-submit-pull-requests-or-reach-out-via-email-to-add-papers-)\n  - [Overview](#overview)\n  - [Foundation Paper of World Model](#foundation-paper-of-world-model)\n  - [Blog or Technical Report](#blog-or-technical-report)\n  - [Surveys](#surveys)\n  - [Benchmarks & Evaluation](#benchmarks--evaluation)\n  - [General World Models](#general-world-models)\n  - [World Models for Embodied AI](#world-models-for-embodied-ai)\n  - [World Models for VLA](#world-models-for-vla)\n  - [World Models for Visual Understanding](#world-models-for-visual-understanding)\n  - [World Models for Autonomous Driving](#world-models-for-autonomous-driving)\n    - [Refer to https:\u002F\u002Fgithub.com\u002FLMD0311\u002FAwesome-World-Model](#refer-to-httpsgithubcomlmd0311awesome-world-model)\n  - [Citation](#citation)\n\n---\n## Foundation Paper of World Model\n* World Models, **`NIPS 2018 Oral`**. 
[[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F1803.10122)] [[Website](https:\u002F\u002Fworldmodels.github.io\u002F)]\n\n## Blog or Technical Report\n* **`OpenWorldLib`**, OpenWorldLib: A Unified Codebase and Definition for Advanced World Models. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2604.04707)]\n* **`ABot-PhysWorld`**, ABot-PhysWorld: A Physics-Aligned Interactive World Foundation Model for Robotic Manipulation. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2603.23376)]\n* **`GigaWorld-Policy`**, GigaWorld-Policy: An Efficient Action-Centric World-Action Model. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2603.17240)]\n* **`GigaBrain-0.5M*`**, GigaBrain-0.5M*: A VLA Trained with World-Model-Based Reinforcement Learning. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2602.12099)] [[Website](https:\u002F\u002Fgigabrain05m.github.io\u002F)]\n* **`ALIVE`**, ALIVE: Bringing Your World to Life with Realistic Audio-Video Generation. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2602.08682)] [[Website](https:\u002F\u002Ffoundationvision.github.io\u002FAlive\u002F)]\n* **`DreamDojo`**, DreamDojo: A Generalist Robot World Model from Large-Scale Human Videos. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2602.06949)] [[Website](https:\u002F\u002Fdreamdojo-world.github.io\u002F)]\n* **`lingbot-va`**, Causal World Modeling for Robot Control. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2601.21998)] [[Website](https:\u002F\u002Ftechnology.robbyant.com\u002Flingbot-va)] [[Code](https:\u002F\u002Fgithub.com\u002Frobbyant\u002Flingbot-va)]\n* **`lingbot-world`**, Advancing Open-Source World Models. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2601.20540)] [[Website](https:\u002F\u002Ftechnology.robbyant.com\u002Flingbot-world)] [[Code](https:\u002F\u002Fgithub.com\u002Frobbyant\u002Flingbot-world)]\n* **`TARS`**, The World in Your Hands: A Large-Scale, Open-Source, Human-Centric Ecosystem for In-the-Wild Manipulation Learning. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2512.24310)] [[Website](https:\u002F\u002Fwiyh.tars-ai.com\u002F)]\n* **`SIMA 2`**, SIMA 2: A Generalist Embodied Agent for Virtual Worlds. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2512.04797)]\n* **`SimWorld`**, SimWorld: An Open-Ended Realistic Simulator for Autonomous Agents in Physical and Social Worlds. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2512.01078)] [[Website](https:\u002F\u002Fsimworld.org\u002F)]\n* **`Hunyuan-GameCraft-2`**, 
Hunyuan-GameCraft-2: An Instruction-Following Interactive Game World Model. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2511.23429)] [[Website](https:\u002F\u002Fhunyuan-gamecraft-2.github.io\u002F)]\n* **`GigaWorld-0`**, GigaWorld-0: World Models as Data Engine to Empower Embodied AI. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2511.19861)] [[Website](https:\u002F\u002Fgigaworld0.github.io\u002F)]\n* **`PAN`**, PAN: A World Model for General, Interactable, and Long-Horizon World Simulation. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2511.09057)]\n* **`Cosmos-Predict2.5`**, World Simulation for Physical AI with Video Foundation Models. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2511.00062)] [[Code](https:\u002F\u002Fgithub.com\u002Fnvidia-cosmos\u002Fcosmos-predict2.5)]\n* **`Emu3.5`**, Emu3.5: Native Multimodal Models are World Learners. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2510.26583)] [[Website](https:\u002F\u002Femu.world)] [[Code](https:\u002F\u002Fgithub.com\u002Fbaaivision\u002FEmu3.5)]\n* **`ODesign`**, ODesign: A World Model for Biomolecular Interaction Design. [[Paper](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2510.22304)] [[Website](https:\u002F\u002Fodesign.lglab.ac.cn)]\n* **`GigaBrain-0`**, GigaBrain-0: A World Model-Powered Vision-Language-Action Model. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2510.19430)] [[Website](https:\u002F\u002Fgigabrain0.github.io\u002F)]\n* **`CWM`**, CWM: An Open-Weights LLM for Research on Code Generation with World Models. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2510.02387)] [[Website](https:\u002F\u002Fai.meta.com\u002Fresources\u002Fmodels-and-libraries\u002Fcwm-downloads)] [[Code](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fcwm)]\n* **`WoW`**, WoW: Towards an Omniscient World Model through Embodied Interaction. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2509.22642)] [[Website](https:\u002F\u002Fwow-world-model.github.io\u002F)]\n* **`Matrix-Game 2.0`**, Matrix-Game 2.0: An Open-Source, Real-Time, and Streaming Interactive World Model. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2508.13009)] [[Website](https:\u002F\u002Fmatrix-game-v2.github.io\u002F)]\n* **`Matrix-3D`**, Matrix-3D: Omnidirectional Explorable 3D World Generation. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2508.08086)] [[Website](https:\u002F\u002Fmatrix-3d.github.io)]\n* **`HunyuanWorld 1.0`**, HunyuanWorld 
1.0: Generating Immersive, Explorable, and Interactive 3D Worlds from Words or Pixels. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2507.21809)] [[Website](https:\u002F\u002F3d-models.hunyuan.tencent.com\u002Fworld\u002F)] [[Code](https:\u002F\u002Fgithub.com\u002FTencent-Hunyuan\u002FHunyuanWorld-1.0)]\n* What does it mean for a neural network to \"learn a world model\"? [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2507.21513)]\n* **`Matrix-Game`**, Matrix-Game: Interactive World Foundation Model. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2506.18701)] [[Code](https:\u002F\u002Fgithub.com\u002FSkyworkAI\u002FMatrix-Game)]\n* **`Cosmos-Drive-Dreams`**, Cosmos-Drive-Dreams: Scalable Synthetic Driving Data Generation with World Foundation Models. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2506.09042)] [[Website](https:\u002F\u002Fresearch.nvidia.com\u002Flabs\u002Ftoronto-ai\u002Fcosmos_drive_dreams)]\n* **`GAIA-2`**, GAIA-2: A Controllable Multi-View Generative World Model for Autonomous Driving. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2503.20523)] [[Website](https:\u002F\u002Fwayve.ai\u002Fthinking\u002Fgaia-2)]\n* **`Cosmos`**, Cosmos World Foundation Model Platform for Physical AI. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2501.03575)] [[Website](https:\u002F\u002Fwww.nvidia.com\u002Fen-us\u002Fai\u002Fcosmos\u002F)] [[Code](https:\u002F\u002Fgithub.com\u002FNVIDIA\u002FCosmos)]\n* **`1X Technologies`**, 1X World Model. [[Blog](https:\u002F\u002Fwww.1x.tech\u002Fdiscover\u002F1x-world-model)]\n* **`Runway`**, Introducing General World Models. [[Blog](https:\u002F\u002Frunwayml.com\u002Fresearch\u002Fintroducing-general-world-models)]\n* **`Wayve`**, Introducing GAIA-1: A Cutting-Edge Generative AI Model for Autonomy. [[Paper](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2309.17080)] [[Blog](https:\u002F\u002Fwayve.ai\u002Fthinking\u002Fintroducing-gaia1\u002F)]\n* **`Yann LeCun`**, A Path Towards Autonomous Machine Intelligence. [[Paper](https:\u002F\u002Fopenreview.net\u002Fpdf?id=BZ5a1r-kVsf)]\n\n## Surveys\n* \"Video Generation Models as World Models: Efficient Paradigms, Architectures, and Algorithms\", **`arXiv 2026.03`**. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2603.28489)]\n* \"From Digital Twins to World Models: Opportunities, Challenges, and Applications for Mobile Edge General Intelligence\", **`arXiv 2026.03`**. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2603.17420)]\n* \"The Consistency Triad as a Defining Principle of General World Models\", **`arXiv 2026.02`**. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2602.23152)] [[Code]( 
https:\u002F\u002Fgithub.com\u002Fopenraiser\u002Fawesome-world-model-evolution)]\n* \"A Mechanistic View of Video Generation as World Models: State and Dynamics\", **`arXiv 2026.01`**. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2601.17067)] \n* \"From Generative Engines to Actionable Simulators: The Need for Physical Grounding in World Models\", **`arXiv 2026.01`**. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2601.15533)] \n* \"Modeling the Mental World for Embodied AI: A Comprehensive Survey\", **`arXiv 2026.01`**. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2601.02378)] \n* \"Digital Twin AI: Opportunities and Challenges from Large Language Models to World Models\", **`arXiv 2026.01`**. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2601.01321)] \n* \"Beyond World Models: Rethinking Understanding in AI Models\", **`AAAI 2026`**. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2511.12239)] \n* \"Simulating the Visual World with Artificial Intelligence: A Roadmap\", **`arXiv 2025.11`**. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2511.08585)] [[Website](https:\u002F\u002Fworld-model-roadmap.github.io\u002F)] [[Code](https:\u002F\u002Fgithub.com\u002Fziqihuangg\u002FAwesome-From-Video-Generation-to-World-Model)]\n* \"Towards World Models: A Survey of Robotic Manipulation\", **`arXiv 2025.11`**. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2511.02097)]\n* \"World Models Should Prioritize Unifying Physical and Social Dynamics\", **`NIPS 2025`**. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2510.21219)] [[Website](https:\u002F\u002Fsites.google.com\u002Fview\u002Fworld-model-position)]\n* \"From Masks to Worlds: A Hitchhiker's Guide to World Models\", **`arXiv 2025.10`**. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2510.20668)] [[Website](https:\u002F\u002Fgithub.com\u002FM-E-AGI-Lab\u002FAwesome-World-Models)]\n* \"A Comprehensive Survey of World Models for Embodied AI\", **`arXiv 2025.10`**. [[Paper](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2510.16732)] [[Website](https:\u002F\u002Fgithub.com\u002FLi-Zn-H\u002FAwesomeWorldModels)]\n* \"Security Challenges of World Models for Embodied AI Agents: A Survey\", **`arXiv 2025.10`**. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2510.05865)]\n* \"Embodied AI: From LLMs to World Models\", **`IEEE CASM`**. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2509.20021)]\n* \"3D and 4D World Modeling: A Survey\", **`arXiv 2025.09`**. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2509.07996)]\n* \"Toward Edge General Intelligence with World Models and Agentic AI: Foundations, Solutions, and Challenges\", **`arXiv 2025.08`**. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2508.09561)]\n* \"A Survey: Learning Embodied Intelligence from Physical Simulators and World Models\", **`arXiv 
2025.07`**. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2507.00917)] [[Code](https:\u002F\u002Fgithub.com\u002FNJU3DV-LoongGroup\u002FEmbodied-World-Models-Survey)]\n* \"Embodied AI Agents: Modeling the World\", **`arXiv 2025.06`**. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2506.22355)] \n* \"From 2D to 3D Cognition: A Brief Survey of General World Models\", **`arXiv 2025.06`**. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2506.20134)] \n* \"A Survey of World Models Grounded in Acoustic Physical Information\", **`arXiv 2025.06`**. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2506.13833)] \n* \"Exploring the Evolution of Physics Cognition in Video Generation: A Survey\", **`arXiv 2025.03`**. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2503.21765)] [[Code](https:\u002F\u002Fgithub.com\u002Fminnie-lin\u002FAwesome-Physics-Cognition-based-Video-Generation)]\n* \"World Models in Artificial Intelligence: Sensing, Learning, and Reasoning Like a Child\", **`arXiv 2025.03`**. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2503.15168)] \n* \"Simulating the Real World: A Unified Survey of Multimodal Generative Models\", **`arXiv 2025.03`**. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2503.04641)] [[Code](https:\u002F\u002Fgithub.com\u002FALEEEHU\u002FWorld-Simulator)]\n* \"Four Principles for Physically Interpretable World Models\", **`arXiv 2025.03`**. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2503.02143)]\n* \"The Role of World Models in Shaping Autonomous Driving: A Comprehensive Survey\", **`arXiv 2025.02`**. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2502.10498)] [[Code](https:\u002F\u002Fgithub.com\u002FLMD0311\u002FAwesome-World-Model)]\n* \"A Survey of World Models for Autonomous Driving\", **`TPAMI`**. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2501.11260)]\n* \"Understanding World or Predicting Future? A Comprehensive Survey of World Models\", **`arXiv 2024.11`**. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2411.14499)]\n* \"World Models: The Safety Perspective\", **`ISSRE WDMD`**. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2411.07690)]\n* \"Exploring the Interplay Between Video Generation and World Models in Autonomous Driving: A Survey\", **`arXiv 2024.11`**. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2411.02914)]\n* \"From Efficient Multimodal Models to World Models: A Survey\", **`arXiv 2024.07`**. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2407.00118)]\n* \"Aligning Cyber Space with Physical World: A Comprehensive Survey on Embodied AI\", **`arXiv 2024.07`**. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2407.06886)] [[Code](https:\u002F\u002Fgithub.com\u002FHCPLab-SYSU\u002FEmbodied_AI_Paper_List)]\n* \"Is Sora a World Simulator? A Comprehensive Survey on General World Models and Beyond\", **`arXiv 
2024.05`**. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2405.03520)] [[Code](https:\u002F\u002Fgithub.com\u002FGigaAI-research\u002FGeneral-World-Models-Survey)]\n* \"World Models for Autonomous Driving: An Initial Survey\", **`TIV`**. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2403.02622)]\n* \"A Survey on Multimodal Large Language Models for Autonomous Driving\", **`WACVW 2024`**. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2311.12320)] [[Code](https:\u002F\u002Fgithub.com\u002FIrohXu\u002FAwesome-Multimodal-LLM-Autonomous-Driving)]\n\n---\n\n## Benchmarks & Evaluation\n* \"World Reasoning Arena\", **`arxiv 2026.03`**. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2603.25887)] [[Code](https:\u002F\u002Fgithub.com\u002FMBZUAI-IFM\u002FWR-Arena)] \n* **Omni-WorldBench**: \"Omni-WorldBench: Towards Comprehensive Evaluation of Interaction-Centric World Models\", **`arxiv 2026.03`**. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2603.22212)] \n* \"Out of Sight, Out of Mind? Evaluating State Evolution in Video World Models\", **`arxiv 2026.03`**. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2603.13215)] [[Website](https:\u002F\u002Fglab-caltech.github.io\u002FSTEVOBench\u002F)]\n* **MicroVerse**: \"MicroVerse: An Initial Exploration toward Microscopic World Simulation\", **`ICLR 2026`**. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2603.00585)] [[Code](https:\u002F\u002Fgithub.com\u002FFreedomIntelligence\u002FMicroVerse)]\n* **WorldArena**: \"WorldArena: A Unified Benchmark for Evaluating the Perception and Functional Utility of Embodied World Models\", **`arXiv 2026.02`**. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2602.08971)] [[Code]( https:\u002F\u002Fworld-arena.ai)]\n* **MIND**: \"MIND: Benchmarking Memory Consistency and Action Control in World Models\", **`arXiv 2026.02`**. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2602.08025)] [[Code](https:\u002F\u002Fgithub.com\u002FCSU-JPG\u002FMIND)]\n* **WoW-bench**: \"Workflow Worlds: A Benchmark for Bringing World Models into Enterprise Systems\", **`arXiv 2026.01`**. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2601.22130)]\n* **WorldBench**: \"WorldBench: Disambiguating Physics for Diagnostic Evaluation of World Models\", **`arXiv 2026.01`**. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2601.21282)] [[Website](https:\u002F\u002Fworld-bench.github.io\u002F)] \n* **PhysicsMind**: \"PhysicsMind: Sim-and-Real Mechanics Benchmarks for Physical Reasoning and Prediction in Foundation VLMs and World Models\", **`arXiv 2026.01`**. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2601.16007)] \n* 
**RBench**: "Rethinking Video Generation Models for the Embodied World", **`arXiv 2026.01`**. [[Paper](https://arxiv.org/abs/2601.15282)] [[Website](https://dagroup-pku.github.io/ReVidgen.github.io/)] [[Code](https://github.com/DAGroup-PKU/ReVidgen/)]
* **Wow, wo, val!**: "Wow, wo, val! A Comprehensive Turing Test for Embodied World Model Evaluation", **`arXiv 2026.01`**. [[Paper](https://arxiv.org/abs/2601.04137)]
* **DrivingGen**: "DrivingGen: A Comprehensive Benchmark for Generative Video World Models in Autonomous Driving", **`arXiv 2026.01`**. [[Paper](https://arxiv.org/abs/2601.01528)] [[Website](https://drivinggen-bench.github.io/)]
* "A Unified Definition of Hallucination, or: It's the World Model, Stupid!", **`arXiv 2025.12`**. [[Paper](https://arxiv.org/abs/2512.21577)]
* "Proactive Intelligence in Video Avatars via Closed-Loop World Modeling", **`arXiv 2025.12`**. [[Paper](https://arxiv.org/abs/2512.20615)] [[Website](https://xuanhuahe.github.io/ORCA/)]
* **MobileWorldBench**: "MobileWorldBench: Towards Semantic World Modeling for Mobile Agents", **`arXiv 2025.12`**. [[Paper](https://arxiv.org/abs/2512.14014)] [[Code](https://github.com/jacklishufan/MobileWorld)]
* **WorldLens**: "WorldLens: Full-Spectrum Evaluation of Driving World Models in the Real World", **`arXiv 2025.12`**. [[Paper](https://arxiv.org/abs/2512.10958)] [[Website](https://worldbench.github.io/worldlens)]
* "Evaluating Gemini Robotics Policies in the Veo World Simulator", **`arXiv 2025.12`**. [[Paper](https://arxiv.org/abs/2512.10675)]
* **On Memory**: "On Memory: A Comparison of Memory Mechanisms in World Models", **`World Modeling Workshop 2026`**. [[Paper](https://arxiv.org/abs/2512.06983)]
* **SmallWorlds**: "SmallWorlds: Evaluating the Dynamics Understanding of World Models in Isolated Environments", **`arXiv 2025.11`**. [[Paper](https://arxiv.org/abs/2511.23465)]
* **4DWorldBench**: "4DWorldBench: A Comprehensive Evaluation Framework for 3D/4D World Generation Models", **`arXiv 2025.11`**. [[Paper](https://arxiv.org/abs/2511.19836)]
* **Target-Bench**: "Target-Bench: Can World Models Achieve Mapless Path Planning via Semantic Targets?", **`arXiv 2025.11`**. [[Paper](https://arxiv.org/abs/2511.17792)]
* **PragWorld**: "PragWorld: A Benchmark for Evaluating the Local World Models of LLMs under Minimal Linguistic Perturbations and Conversational Dynamics", **`AAAI 
2026`**. [[Paper](https://arxiv.org/abs/2511.13021)]
* "Can World Simulators Reason? Gen-ViRe: A Generative Visual Reasoning Benchmark", **`arXiv 2025.11`**. [[Paper](https://arxiv.org/abs/2511.13853)] [[Code](https://github.com/L-CodingSpace/GVR)]
* "Scalable Policy Evaluation with Video World Models", **`arXiv 2025.11`**. [[Paper](https://arxiv.org/abs/2511.11520)]
* "Expert Evaluation of LLM World Models: A High-Temperature Superconductivity Case Study", **`ICML 2025 Workshop on Assessing World Models and Explorations in Current AI`**. [[Paper](https://arxiv.org/abs/2511.03782)]
* "Benchmarking World-Model Learning", **`arXiv 2025.10`**. [[Paper](https://arxiv.org/abs/2510.19788)]
* **LikePhys**: "LikePhys: Evaluating Intuitive Physics Understanding in Video Diffusion Models via Likelihood Preference", **`ICLR 2026`**. [[Paper](https://arxiv.org/abs/2510.11512)] [[Website](https://yuanjianhao508.github.io/LikePhys/)]
* **World-in-World**: "World-in-World: World Models in a Closed-Loop World", **`arXiv 2025.10`**. [[Paper](https://arxiv.org/abs/2510.18135)] [[Website](https://github.com/World-In-World/world-in-world)]
* **VideoVerse**: "VideoVerse: How Far is Your T2V Generator from a World Model?", **`arXiv 2025.10`**. [[Paper](https://arxiv.org/abs/2510.08398)]
* **OmniWorld**: "OmniWorld: A Multi-Domain and Multi-Modal Dataset for 4D World Modeling", **`arXiv 2025.09`**. [[Paper](https://arxiv.org/abs/2509.12201)] [[Website](https://yangzhou24.github.io/OmniWorld/)]
* "Beyond Simulation: Benchmarking World Models for Planning and Causality in Autonomous Driving", **`ICRA 2025`**. [[Paper](https://arxiv.org/abs/2508.01922)]
* **WM-ABench**: "Do Vision-Language Models Have Internal World Models? Towards an Atomic Evaluation", **`ACL 2025 (Findings)`**. [[Paper](https://arxiv.org/abs/2506.21876)] [[Website](https://wm-abench.maitrix.org/)]
* **UNIVERSE**: "Adapting Vision-Language Models for Evaluating World Models", **`arXiv 2025.06`**. [[Paper](https://arxiv.org/abs/2506.17967)]
* **WorldPrediction**: "WorldPrediction: A Benchmark for High-level World Modeling and Long-horizon Procedural Planning", **`arXiv 2025.06`**. [[Paper](https://arxiv.org/abs/2506.04363)]
* "Towards Memory-Assisted World Models: Benchmarking via Spatial Consistency", **`arXiv 2025.05`**. [[Paper](https://arxiv.org/abs/2505.22976)] 
[[Dataset](https://huggingface.co/datasets/kevinLian/LoopNav)] [[Code](https://github.com/Kevin-lkw/LoopNav)]
* **SimWorld**: "SimWorld: A Unified Benchmark for Simulator-Conditioned Scene Generation with World Models", **`arXiv 2025.05`**. [[Paper](https://arxiv.org/abs/2503.13952)] [[Code](https://github.com/Li-Zn-H/SimWorld)]
* **EWMBench**: "EWMBench: Evaluating Scene, Motion, and Semantic Quality in Embodied World Models", **`arXiv 2025.05`**. [[Paper](https://arxiv.org/abs/2505.09694)] [[Code](https://github.com/AgibotTech/EWMBench)]
* "Towards a Stable World Model: Measuring and Addressing World Instability in Generative Environments", **`arXiv 2025.03`**. [[Paper](https://arxiv.org/abs/2503.08122)]
* **WorldModelBench**: "WorldModelBench: Judging Video Generation Models as World Models", **`CVPR 2025`**. [[Paper](https://arxiv.org/abs/2502.20694)] [[Website](https://worldmodelbench-team.github.io/)]
* **Text2World**: "Text2World: Benchmarking Large Language Models for Symbolic World Model Generation", **`arXiv 2025.02`**. [[Paper](https://arxiv.org/abs/2502.13092)] [[Website](https://text-to-world.github.io/)]
* **ACT-Bench**: "ACT-Bench: Towards Action-Controllable World Models for Autonomous Driving", **`arXiv 2024.12`**. [[Paper](https://arxiv.org/abs/2412.05337)]
* **WorldSimBench**: "WorldSimBench: Towards Video Generation Models as World Simulators", **`arXiv 2024.10`**. [[Paper](https://arxiv.org/abs/2410.18072)] [[Website](https://iranqin.github.io/WorldSimBench.github.io/)]
* **EVA**: "EVA: An Embodied World Model for Future Video Anticipation", **`ICML 2025`**. [[Paper](https://arxiv.org/abs/2410.15461)] [[Website](https://sites.google.com/view/eva-publi)]
* **AeroVerse**: "AeroVerse: A UAV-Agent Benchmark Suite for Simulating, Pre-training, Fine-tuning, and Evaluating Aerospace Embodied World Models", **`arXiv 2024.08`**. [[Paper](https://arxiv.org/pdf/2408.15511)]
* **CityBench**: "CityBench: Evaluating the Capabilities of Large Language Models as World Models", **`arXiv 2024.06`**. [[Paper](https://arxiv.org/abs/2406.13945)] [[Code](https://github.com/tsinghua-fib-lab/CityBench)]
* "Imagine the Unseen World: A Benchmark for Systematic Generalization in Visual World Models", **`NeurIPS 
2023`**. [[Paper](https://arxiv.org/abs/2311.09064)]

---

## General World Models
* **InCoder-32B-Thinking**: "InCoder-32B-Thinking: Industrial Code World Model for Thinking", **`arXiv 2026.04`**. [[Paper](https://arxiv.org/abs/2604.03144)]
* **Learn2Fold**: "Learn2Fold: Structured Origami Generation with World Model Planning", **`arXiv 2026.03`**. [[Paper](https://arxiv.org/abs/2603.29585)]
* **WorldFlow3D**: "WorldFlow3D: Flowing Through 3D Distributions for Unbounded World Generation", **`arXiv 2026.03`**. [[Paper](https://arxiv.org/abs/2603.29089)] [[Website](https://light.princeton.edu/worldflow3d)]
* **LOME**: "LOME: Learning Human-Object Manipulation with Action-Conditioned Egocentric World Model", **`arXiv 2026.03`**. [[Paper](https://arxiv.org/abs/2603.27449)]
* **VGGRPO**: "VGGRPO: Towards World-Consistent Video Generation with 4D Latent Reward", **`arXiv 2026.03`**. [[Paper](https://arxiv.org/abs/2603.26599)] [[Website](https://zhaochongan.github.io/projects/VGGRPO)]
* **PiJEPA**: "Policy-Guided World Model Planning for Language-Conditioned Visual Navigation", **`arXiv 2026.03`**. [[Paper](https://arxiv.org/abs/2603.25981)]
* "Out of Sight but Not Out of Mind: Hybrid Memory for Dynamic Video World Models", **`arXiv 2026.03`**. [[Paper](https://arxiv.org/abs/2603.25716)]
* **Lingshu-Cell**: "Lingshu-Cell: A generative cellular world model for transcriptome modeling toward virtual cells", **`arXiv 2026.03`**. [[Paper](https://arxiv.org/abs/2603.25240)]
* **AI-Supervisor**: "AI-Supervisor: Autonomous AI Research Supervision via a Persistent Research World Model", **`arXiv 2026.03`**. 
[[Paper](https://arxiv.org/abs/2603.24402)]
* **WildWorld**: "WildWorld: A Large-Scale Dataset for Dynamic World Modeling with Actions and Explicit State toward Generative ARPG", **`arXiv 2026.03`**. [[Paper](https://arxiv.org/abs/2603.23497)] [[Website](https://shandaai.github.io/wildworld-project/)] [[Code](https://github.com/ShandaAI/WildWorld)]
* "Model Predictive Control with Differentiable World Models for Offline Reinforcement Learning", **`arXiv 2026.03`**. [[Paper](https://arxiv.org/abs/2603.22430)]
* **WorldCache**: "WorldCache: Content-Aware Caching for Accelerated Video World Models", **`arXiv 2026.03`**. [[Paper](https://arxiv.org/abs/2603.22286)] [[Website](https://umair1221.github.io/World-Cache/)]
* "From Part to Whole: 3D Generative World Model with an Adaptive Structural Hierarchy", **`ICME 2026`**. [[Paper](https://arxiv.org/abs/2603.21557)]
* **EgoForge**: "EgoForge: Goal-Directed Egocentric World Simulator", **`arXiv 2026.03`**. [[Paper](https://arxiv.org/abs/2603.20169)]
* "Structured Latent Dynamics in Wireless CSI via Homomorphic World Models", **`IEEE ICC`**. [[Paper](https://arxiv.org/abs/2603.20048)]
* **WorldAgents**: "WorldAgents: Can Foundation Image Models be Agents for 3D World Models?", **`arXiv 2026.03`**. [[Paper](https://arxiv.org/abs/2603.19708)] [[Website](https://ziyaerkoc.com/worldagents/)]
* **R2-Dreamer**: "R2-Dreamer: Redundancy-Reduced World Models without Decoders or Augmentation", **`ICLR 2026`**. [[Paper](https://arxiv.org/abs/2603.18202)] [[Code](https://github.com/NM512/r2dreamer)]
* **StereoWorld**: "Stereo World Model: Camera-Guided Stereo Video Generation", **`arXiv 2026.03`**. 
[[Paper](https://arxiv.org/abs/2603.17375)] [[Website](https://sunyangtian.github.io/StereoWorld-web/)]
* **MosaicMem**: "MosaicMem: Hybrid Spatial Memory for Controllable Video World Models", **`arXiv 2026.03`**. [[Paper](https://arxiv.org/abs/2603.17117)] [[Website](https://mosaicmem.github.io/mosaicmem/)]
* **WorldCam**: "WorldCam: Interactive Autoregressive 3D Gaming Worlds with Camera Pose as a Unifying Geometric Representation", **`arXiv 2026.03`**. [[Paper](https://arxiv.org/abs/2603.16871)] [[Website](https://cvlab-kaist.github.io/WorldCam/)]
* **SWM**: "Grounding World Simulation Models in a Real-World Metropolis", **`arXiv 2026.03`**. [[Paper](https://arxiv.org/abs/2603.15583)] [[Website](https://seoul-world-model.github.io/)]
* **NavThinker**: "NavThinker: Action-Conditioned World Models for Coupled Prediction and Planning in Social Navigation", **`arXiv 2026.03`**. [[Paper](https://arxiv.org/abs/2603.15359)] [[Website](https://hutslib.github.io/NavThinker)]
* **EyeWorld**: "EyeWorld: A Generative World Model of Ocular State and Dynamics", **`arXiv 2026.03`**. [[Paper](https://arxiv.org/abs/2603.14039)]
* **CtrlAttack**: "CtrlAttack: A Unified Attack on World-Model Control in Diffusion Models", **`arXiv 2026.03`**. [[Paper](https://arxiv.org/abs/2603.13435)]
* **SAW**: "SAW: Toward a Surgical Action World Model via Controllable and Scalable Video Generation", **`arXiv 2026.03`**. [[Paper](https://arxiv.org/abs/2603.13024)]
* **VGGT-World**: "VGGT-World: Transforming VGGT into an Autoregressive Geometry World Model", **`arXiv 2026.03`**. 
[[Paper](https://arxiv.org/abs/2603.12655)]
* **ARROW**: "ARROW: Augmented Replay for RObust World models", **`arXiv 2026.03`**. [[Paper](https://arxiv.org/abs/2603.11395)]
* **RAE-NWM**: "RAE-NWM: Navigation World Model in Dense Visual Representation Space", **`arXiv 2026.03`**. [[Paper](https://arxiv.org/abs/2603.09241)] [[Code](https://github.com/20robo/raenwm)]
* **SPIRAL**: "SPIRAL: A Closed-Loop Framework for Self-Improving Action World Models via Reflective Planning Agents", **`arXiv 2026.03`**. [[Paper](https://arxiv.org/abs/2603.08403)]
* **MWM**: "MWM: Mobile World Models for Action-Conditioned Consistent Prediction", **`arXiv 2026.03`**. [[Paper](https://arxiv.org/abs/2603.07799)] [[Website](https://aigeeksgroup.github.io/MWM)] [[Code](https://github.com/AIGeeksGroup/MWM)]
* **Brain-WM**: "Brain-WM: Brain Glioblastoma World Model", **`arXiv 2026.03`**. [[Paper](https://arxiv.org/abs/2603.07562)] [[Code](https://github.com/thibault-wch/Brain-GBM-world-model)]
* **DreamSAC**: "DreamSAC: Learning Hamiltonian World Models via Symmetry Exploration", **`arXiv 2026.03`**. [[Paper](https://arxiv.org/abs/2603.07545)]
* **LiveWorld**: "LiveWorld: Simulating Out-of-Sight Dynamics in Generative Video World Models", **`arXiv 2026.03`**. [[Paper](https://arxiv.org/abs/2603.07145)] [[Website](https://zichengduan.github.io/LiveWorld/index.html)]
* "What if? Emulative Simulation with World Models for Situated Reasoning", **`arXiv 2026.03`**. [[Paper](https://arxiv.org/abs/2603.06445)]
* **WorldCache**: "WorldCache: Accelerating World Models for Free via Heterogeneous Token Caching", **`arXiv 2026.03`**. 
[[Paper](https://arxiv.org/abs/2603.06331)] [[Code](https://github.com/FofGofx/WorldCache)]
* "Planning in 8 Tokens: A Compact Discrete Tokenizer for Latent World Model", **`CVPR 2026`**. [[Paper](https://arxiv.org/abs/2603.05438)]
* "Beyond Pixel Histories: World Models with Persistent 3D State", **`arXiv 2026.03`**. [[Paper](https://arxiv.org/abs/2603.03482)] [[Website](https://francelico.github.io/persist.github.io)]
* "Contextual Latent World Models for Offline Meta Reinforcement Learning", **`arXiv 2026.03`**. [[Paper](https://arxiv.org/abs/2603.02935)]
* "Next Embedding Prediction Makes World Models Stronger", **`arXiv 2026.03`**. [[Paper](https://arxiv.org/abs/2603.02765)]
* **COMBAT**: "COMBAT: Conditional World Models for Behavioral Agent Training", **`arXiv 2026.03`**. [[Paper](https://arxiv.org/abs/2603.00825)]
* **DreamWorld**: "DreamWorld: Unified World Modeling in Video Generation", **`arXiv 2026.03`**. [[Paper](https://arxiv.org/abs/2603.00466)] [[Code](https://github.com/ABU121111/DreamWorld)]
* **MetaOthello**: "MetaOthello: A Controlled Study of Multiple World Models in Transformers", **`arXiv 2026.02`**. [[Paper](https://arxiv.org/abs/2602.23164)]
* **GeoWorld**: "GeoWorld: Geometric World Models", **`CVPR 2026`**. [[Paper](https://arxiv.org/abs/2602.23058)] [[Website](https://steve-zeyu-zhang.github.io/GeoWorld)]
* **UCM**: "UCM: Unifying Camera Control and Memory with Time-aware Positional Encoding Warping for World Models", **`arXiv 2026.02`**. 
[[Paper](https://arxiv.org/abs/2602.22960)] [[Website](https://humanaigc.github.io/ucm-webpage/)]
* "Code World Models for Parameter Control in Evolutionary Algorithms", **`arXiv 2026.02`**. [[Paper](https://arxiv.org/abs/2602.22260)]
* **Solaris**: "Solaris: Building a Multiplayer Video World Model in Minecraft", **`arXiv 2026.02`**. [[Paper](https://arxiv.org/abs/2602.22208)] [[Website](https://solaris-wm.github.io/)]
* "MRI Contrast Enhancement Kinetics World Model", **`CVPR 2026`**. [[Paper](https://arxiv.org/abs/2602.19285)]
* "Neural Fields as World Models", **`arXiv 2026.02`**. [[Paper](https://arxiv.org/abs/2602.18690)]
* "Learning Invariant Visual Representations for Planning with Joint-Embedding Predictive World Models", **`arXiv 2026.02`**. [[Paper](https://arxiv.org/abs/2602.18639)]
* **Generated Reality**: "Generated Reality: Human-centric World Simulation using Interactive Video Generation with Hand and Camera Control", **`arXiv 2026.02`**. [[Paper](https://arxiv.org/abs/2602.18422)]
* **VLM-DEWM**: "VLM-DEWM: Dynamic External World Model for Verifiable and Resilient Vision-Language Planning in Manufacturing", **`arXiv 2026.02`**. [[Paper](https://arxiv.org/abs/2602.15549)]
* "World-Model-Augmented Web Agents with Action Correction", **`arXiv 2026.02`**. [[Paper](https://arxiv.org/abs/2602.15384)]
* "Cold-Start Personalization via Training-Free Priors from Structured World Models", **`arXiv 2026.02`**. [[Paper](https://arxiv.org/abs/2602.15012)] [[Code](https://github.com/Avinandan22/PEP)]
* "World Models for Policy Refinement in StarCraft II", **`arXiv 2026.02`**. 
[[Paper](https://arxiv.org/abs/2602.14857)]
* **WebWorld**: "WebWorld: A Large-Scale World Model for Web Agent Training", **`arXiv 2026.02`**. [[Paper](https://arxiv.org/abs/2602.14721)]
* **WIMLE**: "WIMLE: Uncertainty-Aware World Models with IMLE for Sample-Efficient Continuous Control", **`ICLR 2026`**. [[Paper](https://arxiv.org/abs/2602.14351)]
* **Causal-JEPA**: "Causal-JEPA: Learning World Models through Object-Level Latent Interventions", **`arXiv 2026.02`**. [[Paper](https://arxiv.org/abs/2602.11389)] [[Website](https://hazel-heejeong-nam.github.io/cjepa/)] [[Code](https://github.com/galilai-group/cjepa)]
* **Olaf-World**: "Olaf-World: Orienting Latent Actions for Video World Modeling", **`arXiv 2026.02`**. [[Paper](https://arxiv.org/abs/2602.10104)] [[Website](https://showlab.github.io/Olaf-World/)] [[Code](https://github.com/showlab/Olaf-World)]
* **Agent World Model**: "Agent World Model: Infinity Synthetic Environments for Agentic Reinforcement Learning", **`arXiv 2026.02`**. [[Paper](https://arxiv.org/abs/2602.10090)] [[Code](https://github.com/Snowflake-Labs/agent-world-model)]
* **WorldCompass**: "WorldCompass: Reinforcement Learning for Long-Horizon World Models", **`arXiv 2026.02`**. [[Paper](https://arxiv.org/abs/2602.09022)] [[Website](https://3d-models.hunyuan.tencent.com/world/)]
* "Horizon Imagination: Efficient On-Policy Training in Diffusion World Models", **`ICLR 2026`**. [[Paper](https://arxiv.org/abs/2602.08032)] [[Code](https://github.com/leor-c/horizon-imagination)]
* "Geometry-Aware Rotary Position Embedding for Consistent Video World Model", **`arXiv 2026.02`**. 
[[Paper](https://arxiv.org/abs/2602.07854)]
* "Debugging code world models", **`arXiv 2026.02`**. [[Paper](https://arxiv.org/abs/2602.07672)]
* "Cross-View World Models", **`arXiv 2026.02`**. [[Paper](https://arxiv.org/abs/2602.07277)]
* "Interpreting Physics in Video World Models", **`arXiv 2026.02`**. [[Paper](https://arxiv.org/abs/2602.07050)]
* "Neural Sabermetrics with World Model: Play-by-play Predictive Modeling with Large Language Model", **`arXiv 2026.02`**. [[Paper](https://arxiv.org/abs/2602.07030)]
* "From Kepler to Newton: Inductive Biases Guide Learned World Models in Transformers", **`arXiv 2026.02`**. [[Paper](https://arxiv.org/abs/2602.06923)]
* "Self-Improving World Modelling with Latent Actions", **`arXiv 2026.02`**. [[Paper](https://arxiv.org/abs/2602.06130)]
* "Reinforcement World Model Learning for LLM-based Agents", **`arXiv 2026.02`**. [[Paper](https://arxiv.org/abs/2602.05842)]
* **LIVE**: "LIVE: Long-horizon Interactive Video World Modeling", **`arXiv 2026.02`**. [[Paper](https://arxiv.org/abs/2602.03747)] [[Website](https://junchao-cs.github.io/LIVE-demo/)]
* **EHRWorld**: "EHRWorld: A Patient-Centric Medical World Model for Long-Horizon Clinical Trajectories", **`arXiv 2026.02`**. [[Paper](https://arxiv.org/abs/2602.03569)]
* "Joint Learning of Hierarchical Neural Options and Abstract World Model", **`arXiv 2026.02`**. [[Paper](https://arxiv.org/abs/2602.02799)]
* "Test-Time Mixture of World Models for Embodied Agents in Dynamic Environments", **`ICLR 2026`**. 
[[Paper](https://arxiv.org/abs/2601.22647)] [[Code](https://github.com/doldam0/tmow)]
* "The Patient is not a Moving Document: A World Model Training Paradigm for Longitudinal EHR", **`arXiv 2026.01`**. [[Paper](https://arxiv.org/abs/2601.22128)]
* **PathWise**: "PathWise: Planning through World Model for Automated Heuristic Design via Self-Evolving LLMs", **`arXiv 2026.01`**. [[Paper](https://arxiv.org/abs/2601.20539)]
* "From Observations to Events: Event-Aware World Model for Reinforcement Learning", **`arXiv 2026.01`**. [[Paper](https://arxiv.org/abs/2601.19336)]
* **NuiWorld**: "NuiWorld: Exploring a Scalable Framework for End-to-End Controllable World Generation", **`arXiv 2026.01`**. [[Paper](https://arxiv.org/abs/2601.19048)]
* "'Just in Time' World Modeling Supports Human Planning and Reasoning", **`arXiv 2026.01`**. [[Paper](https://arxiv.org/abs/2601.14514)]
* **Action Shapley**: "Action Shapley: A Training Data Selection Metric for World Model in Reinforcement Learning", **`arXiv 2026.01`**. [[Paper](https://arxiv.org/abs/2601.10905)]
* "Inference-time Physics Alignment of Video Generative Models with Latent World Models", **`CVPR 2026`**. [[Paper](https://arxiv.org/abs/2601.10553)] [[Code](https://github.com/facebookresearch/WMReward)]
* **Imagine-then-Plan**: "Imagine-then-Plan: Agent Learning from Adaptive Lookahead with World Models", **`arXiv 2026.01`**. [[Paper](https://arxiv.org/abs/2601.08955)]
* **Puzzle it Out**: "Puzzle it Out: Local-to-Global World Model for Offline Multi-Agent Reinforcement Learning", **`arXiv 2026.01`**. 
[[Paper](https://arxiv.org/abs/2601.07463)]
* "Object-Centric World Models Meet Monte Carlo Tree Search", **`arXiv 2026.01`**. [[Paper](https://arxiv.org/abs/2601.06604)]
* "Learning Latent Action World Models In The Wild", **`arXiv 2026.01`**. [[Paper](https://arxiv.org/abs/2601.05230)]
* **VerseCrafter**: "VerseCrafter: Dynamic Realistic Video World Model with 4D Geometric Control", **`arXiv 2026.01`**. [[Paper](https://arxiv.org/abs/2601.05138)] [[Website](https://sixiaozheng.github.io/VerseCrafter_page/)]
* "Choreographing a World of Dynamic Objects", **`arXiv 2026.01`**. [[Paper](https://arxiv.org/abs/2601.04194)] [[Website](https://yanzhelyu.github.io/chord)]
* **MobileDreamer**: "MobileDreamer: Generative Sketch World Model for GUI Agent", **`arXiv 2026.01`**. [[Paper](https://arxiv.org/abs/2601.04035)]
* "Current Agents Fail to Leverage World Model as Tool for Foresight", **`arXiv 2026.01`**. [[Paper](https://arxiv.org/abs/2601.03905)]
* "Flow Equivariant World Models: Memory for Partially Observed Dynamic Environments", **`arXiv 2026.01`**. [[Paper](https://arxiv.org/abs/2601.01075)] [[Website](https://flowequivariantworldmodels.github.io)]
* "Value-guided action planning with JEPA world models", **`ICLR 2026 World Modeling Workshop`**. [[Paper](https://arxiv.org/abs/2601.00844)]
* **NeoVerse**: "NeoVerse: Enhancing 4D World Model with in-the-wild Monocular Videos", **`arXiv 2026.01`**. [[Paper](https://arxiv.org/abs/2601.00393)] [[Website](https://neoverse-4d.github.io)]
* **TeleWorld**: "TeleWorld: Towards Dynamic Multimodal Synthesis with a 4D World Model", **`arXiv 2026.01`**. 
[[Paper](https://arxiv.org/abs/2601.00051)]
* "World model inspired sarcasm reasoning with large language model agents", **`arXiv 2025.12`**. [[Paper](https://arxiv.org/abs/2512.24329)]
* **LEWM**: "Large Emotional World Model", **`arXiv 2025.12`**. [[Paper](https://arxiv.org/abs/2512.24149)]
* **WWM**: "Web World Models", **`arXiv 2025.12`**. [[Paper](https://arxiv.org/abs/2512.23676)] [[Website](https://github.com/Princeton-AI2-Lab/Web-World-Models)]
* **SurgWorld**: "SurgWorld: Learning Surgical Robot Policies from Videos via World Modeling", **`arXiv 2025.12`**. [[Paper](https://arxiv.org/abs/2512.23162)]
* **Agent2World**: "Agent2World: Learning to Generate Symbolic World Models via Adaptive Multi-Agent Feedback", **`arXiv 2025.12`**. [[Paper](https://arxiv.org/abs/2512.22336)] [[Website](https://agent2world.github.io)]
* **Yume-1.5**: "Yume-1.5: A Text-Controlled Interactive World Generation Model", **`arXiv 2025.12`**. [[Paper](https://arxiv.org/abs/2512.22096)] [[Website](https://stdstu12.github.io/YUME-Project)] [[Code](https://github.com/stdstu12/YUME)]
* "Aerial World Model for Long-horizon Visual Generation and Navigation in 3D Space", **`arXiv 2025.12`**. [[Paper](https://arxiv.org/abs/2512.21887)]
* "From Word to World: Can Large Language Models be Implicit Text-based World Models?", **`arXiv 2025.12`**. [[Paper](https://arxiv.org/abs/2512.18832)]
* "Dexterous World Models", **`arXiv 2025.12`**. [[Paper](https://arxiv.org/abs/2512.17907)] [[Website](https://snuvclab.github.io/dwm)]
* **PhysFire-WM**: "PhysFire-WM: A Physics-Informed World Model for Emulating Fire Spread Dynamics", **`arXiv 2025.12`**. 
[[Paper](https://arxiv.org/abs/2512.17152)]
* **WorldPlay**: "WorldPlay: Towards Long-Term Geometric Consistency for Real-Time Interactive World Modeling", **`arXiv 2025.12`**. [[Paper](https://arxiv.org/abs/2512.14614)] [[Website](https://3d-models.hunyuan.tencent.com/world/)]
* "The Double Life of Code World Models: Provably Unmasking Malicious Behavior Through Execution Traces", **`arXiv 2025.12`**. [[Paper](https://arxiv.org/abs/2512.13821)]
* **LongVie 2**: "LongVie 2: Multimodal Controllable Ultra-Long Video World Model", **`arXiv 2025.12`**. [[Paper](https://arxiv.org/abs/2512.13604)] [[Website](https://vchitect.github.io/LongVie2-project/)]
* **VFMF**: "VFMF: World Modeling by Forecasting Vision Foundation Model Features", **`arXiv 2025.12`**. [[Paper](https://arxiv.org/abs/2512.11225)] [[Website](https://vfmf.gabrijel-boduljak.com/)] [[Code](https://github.com/gboduljak/vfmf)]
* **VDAWorld**: "VDAWorld: World Modelling via VLM-Directed Abstraction and Simulation", **`arXiv 2025.12`**. [[Paper](https://arxiv.org/abs/2512.11061)] [[Website](https://felixomahony.github.io/vdaworld/)]
* "Closing the Train-Test Gap in World Models for Gradient-Based Planning", **`arXiv 2025.12`**. [[Paper](https://arxiv.org/abs/2512.09929)]
* **WonderZoom**: "WonderZoom: Multi-Scale 3D World Generation", **`arXiv 2025.12`**. [[Paper](https://arxiv.org/abs/2512.09164)] [[Website](https://wonderzoom.github.io/)]
* **Astra**: "Astra: General Interactive World Model with Autoregressive Denoising", **`arXiv 2025.12`**. 
[[Paper](https://arxiv.org/abs/2512.08931)] [[Website](https://eternalevan.github.io/Astra-project/)] [[Code](https://github.com/EternalEvan/Astra)]
* **Visionary**: "Visionary: The World Model Carrier Built on WebGPU-Powered Gaussian Splatting Platform", **`arXiv 2025.12`**. [[Paper](https://arxiv.org/abs/2512.08478)] [[Website](https://visionary-laboratory.github.io/visionary)]
* **CLARITY**: "CLARITY: Medical World Model for Guiding Treatment Decisions by Modeling Context-Aware Disease Trajectories in Latent Space", **`arXiv 2025.12`**. [[Paper](https://arxiv.org/abs/2512.08029)]
* **UnityVideo**: "UnityVideo: Unified Multi-Modal Multi-Task Learning for Enhancing World-Aware Video Generation", **`arXiv 2025.12`**. [[Paper](https://arxiv.org/abs/2512.07831)] [[Website](https://jackailab.github.io/Projects/UnityVideo)] [[Code](https://github.com/dvlab-research/UnityVideo)]
* "Speech World Model: Causal State-Action Planning with Explicit Reasoning for Speech", **`arXiv 2025.12`**. [[Paper](https://arxiv.org/abs/2512.05933)]
* "Probing the effectiveness of World Models for Spatial Reasoning through Test-time Scaling", **`arXiv 2025.12`**. [[Paper](https://arxiv.org/abs/2512.05809)] [[Code](https://github.com/chandar-lab/visa-for-mindjourney)]
* **ProPhy**: "ProPhy: Progressive Physical Alignment for Dynamic World Simulation", **`arXiv 2025.12`**. [[Paper](https://arxiv.org/abs/2512.05564)]
* **BiTAgent**: "BiTAgent: A Task-Aware Modular Framework for Bidirectional Coupling between Multimodal Large Language Models and World Models", **`arXiv 2025.12`**. 
[[Paper](https://arxiv.org/abs/2512.04513)]
* **RELIC**: "RELIC: Interactive Video World Model with Long-Horizon Memory", **`arXiv 2025.12`**. [[Paper](https://arxiv.org/abs/2512.04040)] [[Website](https://relic-worldmodel.github.io/)]
* "Better World Models Can Lead to Better Post-Training Performance", **`arXiv 2025.12`**. [[Paper](https://arxiv.org/abs/2512.03400)]
* **SeeU**: "SeeU: Seeing the Unseen World via 4D Dynamics-aware Generation", **`arXiv 2025.12`**. [[Paper](https://arxiv.org/abs/2512.03350)] [[Website](https://yuyuanspace.com/SeeU/)]
* **DynamicVerse**: "DynamicVerse: A Physically-Aware Multimodal Framework for 4D World Modeling", **`arXiv 2025.12`**. [[Paper](https://arxiv.org/abs/2512.03000)]
* **IC-World**: "IC-World: In-Context Generation for Shared World Modeling", **`arXiv 2025.12`**. [[Paper](https://arxiv.org/abs/2512.02793)] [[Code](https://github.com/wufan-cse/IC-World)]
* **WorldPack**: "WorldPack: Compressed Memory Improves Spatial Consistency in Video World Modeling", **`arXiv 2025.12`**. [[Paper](https://arxiv.org/abs/2512.02473)]
* **GrndCtrl**: "GrndCtrl: Grounding World Models via Self-Supervised Reward Alignment", **`arXiv 2025.12`**. [[Paper](https://arxiv.org/abs/2512.01952)]
* **ChronosObserver**: "ChronosObserver: Taming 4D World with Hyperspace Diffusion Sampling", **`arXiv 2025.12`**. [[Paper](https://arxiv.org/abs/2512.01481)]
* **AVWM**: "Audio-Visual World Models: Towards Multisensory Imagination in Sight and Sound", **`arXiv 2025.12`**. [[Paper](https://arxiv.org/abs/2512.00883)]
* **VCWorld**: "VCWorld: A Biological World Model for Virtual Cell Simulation", **`arXiv 2025.12`**. 
[[Paper](https://arxiv.org/abs/2512.00306)] [[Code](https://github.com/GENTEL-lab/VCWorld)]
* **VISTAv2**: "VISTAv2: World Imagination for Indoor Vision-and-Language Navigation", **`arXiv 2025.12`**. [[Paper](https://arxiv.org/abs/2512.00041)] [[Website](https://taco-group.github.io/)]
* **Captain Safari**: "Captain Safari: A World Engine", **`arXiv 2025.11`**. [[Paper](https://arxiv.org/abs/2511.22815)] [[Website](https://johnson111788.github.io/open-safari/)]
* **WorldWander**: "WorldWander: Bridging Egocentric and Exocentric Worlds in Video Generation", **`arXiv 2025.11`**. [[Paper](https://arxiv.org/abs/2511.22098)] [[Code](https://github.com/showlab/WorldWander)]
* **Inferix**: "Inferix: A Block-Diffusion based Next-Generation Inference Engine for World Simulation", **`arXiv 2025.11`**. [[Paper](https://arxiv.org/abs/2511.20714)] [[Code](https://github.com/alibaba-damo-academy/Inferix)]
* **MagicWorld**: "MagicWorld: Towards Long-Horizon Stability for Interactive Video World Exploration", **`arXiv 2025.11`**. [[Paper](https://arxiv.org/abs/2511.18886)] [[Website](https://vivocameraresearch.github.io/magicworld/)] [[Code](https://github.com/vivoCameraResearch/Magic-World)]
* "Counterfactual World Models via Digital Twin-conditioned Video Diffusion", **`arXiv 2025.11`**. [[Paper](https://arxiv.org/abs/2511.17481)]
* **WorldGen**: "WorldGen: From Text to Traversable and Interactive 3D Worlds", **`arXiv 2025.11`**.
[[Paper](https://arxiv.org/abs/2511.16825)] [[Website](https://www.meta.com/blog/worldgen-3d-world-generation-reality-labs-generative-ai-research/)]
* **X-WIN**: "X-WIN: Building Chest Radiograph World Model via Predictive Sensing", **`arXiv 2025.11`**. [[Paper](https://arxiv.org/abs/2511.14918)]
* "Object-Centric World Models for Causality-Aware Reinforcement Learning", **`AAAI 2026`**. [[Paper](https://arxiv.org/abs/2511.14262)]
* "Latent-Space Autoregressive World Model for Efficient and Robust Image-Goal Navigation", **`arXiv 2025.11`**. [[Paper](https://arxiv.org/abs/2511.11011)]
* **Dynamic Sparsity**: "Dynamic Sparsity: Challenging Common Sparsity Assumptions for Learning World Models in Robotic Reinforcement Learning Benchmarks", **`AAAI 2026`**. [[Paper](https://arxiv.org/abs/2511.08086)]
* **MrCoM**: "MrCoM: A Meta-Regularized World-Model Generalizing Across Multi-Scenarios", **`AAAI 2026`**. [[Paper](https://arxiv.org/abs/2511.06252)]
* "Next-Latent Prediction Transformers Learn Compact World Models", **`arXiv 2025.11`**. [[Paper](https://arxiv.org/abs/2511.05963)]
* **DR. WELL**: "DR. WELL: Dynamic Reasoning and Learning with Symbolic World Model for Embodied LLM-Based Multi-Agent Collaboration", **`NeurIPS 2025 Workshop: Bridging Language, Agent, and World Models for Reasoning and Planning (LAW)`**. [[Paper](https://arxiv.org/abs/2511.04646)] [[Website](https://narjesno.github.io/DR.WELL/)]
* "How Far Are Surgeons from Surgical World Models? A Pilot Study on Zero-shot Surgical Video Generation with Expert Assessment", **`arXiv 2025.11`**.
[[Paper](https://arxiv.org/abs/2511.01775)]
* "From Pixels to Cooperation Multi Agent Reinforcement Learning based on Multimodal World Models", **`arXiv 2025.11`**. [[Paper](https://arxiv.org/abs/2511.01310)]
* "Bootstrap Off-policy with World Model", **`NeurIPS 2025`**. [[Paper](https://arxiv.org/abs/2511.00423)]
* "Clone Deterministic 3D Worlds with Geometrically-Regularized World Models", **`arXiv 2025.10`**. [[Paper](https://arxiv.org/abs/2510.26782)]
* "Semantic Communications with World Models", **`arXiv 2025.10`**. [[Paper](https://arxiv.org/abs/2510.24785)]
* **TRELLISWorld**: "TRELLISWorld: Training-Free World Generation from Object Generators", **`arXiv 2025.10`**. [[Paper](https://arxiv.org/abs/2510.23880)]
* **WorldGrow**: "WorldGrow: Generating Infinite 3D World", **`arXiv 2025.10`**. [[Paper](https://arxiv.org/abs/2510.21682)] [[Code](https://github.com/world-grow/WorldGrow)]
* **PhysWorld**: "PhysWorld: From Real Videos to World Models of Deformable Objects via Physics-Aware Demonstration Synthesis", **`arXiv 2025.10`**. [[Paper](https://arxiv.org/abs/2510.21447)]
* "How Hard is it to Confuse a World Model?", **`arXiv 2025.10`**. [[Paper](https://arxiv.org/abs/2510.21232)]
* "Social World Model-Augmented Mechanism Design Policy Learning", **`NeurIPS 2025`**. [[Paper](https://arxiv.org/abs/2510.19270)]
* **VAGEN**: "VAGEN: Reinforcing World Model Reasoning for Multi-Turn VLM Agents", **`NeurIPS 2025`**.
[[Paper](https://arxiv.org/abs/2510.16907)] [[Website](http://mll.lab.northwestern.edu/VAGEN/)]
* **Cosmos-Surg-dVRK**: "Cosmos-Surg-dVRK: World Foundation Model-based Automated Online Evaluation of Surgical Robot Policy Learning", **`arXiv 2025.10`**. [[Paper](https://arxiv.org/abs/2510.16240)]
* "Zero-shot World Models via Search in Memory", **`arXiv 2025.10`**. [[Paper](https://arxiv.org/abs/2510.16123)]
* "Vector Quantization in the Brain: Grid-like Codes in World Models", **`NeurIPS 2025`**. [[Paper](https://arxiv.org/abs/2510.16039)]
* **Terra**: "Terra: Explorable Native 3D World Model with Point Latents", **`arXiv 2025.10`**. [[Paper](https://arxiv.org/abs/2510.14977)] [[Website](https://huang-yh.github.io/terra/)]
* **Deep SPI**: "Deep SPI: Safe Policy Improvement via World Models", **`arXiv 2025.10`**. [[Paper](https://arxiv.org/abs/2510.12312)]
* "One Life to Learn: Inferring Symbolic World Models for Stochastic Environments from Unguided Exploration", **`arXiv 2025.10`**. [[Paper](https://arxiv.org/abs/2510.12088)] [[Website](https://onelife-worldmodel.github.io/)]
* **R-WoM**: "R-WoM: Retrieval-augmented World Model For Computer-use Agents", **`arXiv 2025.10`**. [[Paper](https://arxiv.org/abs/2510.11892)]
* **WorldMirror**: "WorldMirror: Universal 3D World Reconstruction with Any-Prior Prompting", **`arXiv 2025.10`**. [[Paper](https://arxiv.org/abs/2510.10726)]
* **Unified World Models**: "Unified World Models: Memory-Augmented Planning and Foresight for Visual Navigation", **`arXiv 2025.10`**.
[[Paper](https://arxiv.org/abs/2510.08713)] [[Code](https://github.com/F1y1113/UniWM)]
* "Code World Models for General Game Playing", **`arXiv 2025.10`**. [[Paper](https://arxiv.org/abs/2510.04542)]
* **MorphoSim**: "MorphoSim: An Interactive, Controllable, and Editable Language-guided 4D World Simulator", **`arXiv 2025.10`**. [[Paper](https://arxiv.org/abs/2510.04390)] [[Code](https://github.com/eric-ai-lab/Morph4D)]
* **ChronoEdit**: "ChronoEdit: Towards Temporal Reasoning for Image Editing and World Simulation", **`arXiv 2025.10`**. [[Paper](https://arxiv.org/abs/2510.04290)] [[Website](https://research.nvidia.com/labs/toronto-ai/chronoedit)]
* **SFP**: "Spatiotemporal Forecasting as Planning: A Model-Based Reinforcement Learning Approach with Generative World Models", **`arXiv 2025.10`**. [[Paper](https://arxiv.org/abs/2510.04020)]
* **EvoWorld**: "EvoWorld: Evolving Panoramic World Generation with Explicit 3D Memory", **`arXiv 2025.10`**. [[Paper](https://arxiv.org/abs/2510.01183)] [[Code](https://github.com/JiahaoPlus/EvoWorld)]
* "World Model for AI Autonomous Navigation in Mechanical Thrombectomy", **`MICCAI 2025 (Lecture Notes in Computer Science)`**. [[Paper](https://arxiv.org/abs/2509.25518)]
* **DyMoDreamer**: "DyMoDreamer: World Modeling with Dynamic Modulation", **`NeurIPS 2025`**. [[Paper](https://arxiv.org/abs/2509.24804)] [[Code](https://github.com/Ultraman-Tiga1/DyMoDreamer)]
* **Dreamer4**: "Training Agents Inside of Scalable World Models", **`arXiv 2025.09`**.
[[Paper](https://arxiv.org/abs/2509.24527)] [[Website](https://danijar.com/dreamer4/)]
* "Reinforcement Learning with Inverse Rewards for World Model Post-training", **`arXiv 2025.09`**. [[Paper](https://arxiv.org/abs/2509.23958)]
* "Context and Diversity Matter: The Emergence of In-Context Learning in World Models", **`arXiv 2025.09`**. [[Paper](https://arxiv.org/abs/2509.22353)]
* **FantasyWorld**: "FantasyWorld: Geometry-Consistent World Modeling via Unified Video and 3D Prediction", **`arXiv 2025.09`**. [[Paper](https://arxiv.org/abs/2509.21657)]
* "Remote Sensing-Oriented World Model", **`arXiv 2025.09`**. [[Paper](https://arxiv.org/abs/2509.17808)]
* "World Modeling with Probabilistic Structure Integration", **`arXiv 2025.09`**. [[Paper](https://arxiv.org/abs/2509.09737)]
* "One Model for All Tasks: Leveraging Efficient World Models in Multi-Task Planning", **`arXiv 2025.09`**. [[Paper](https://arxiv.org/abs/2509.07945)] [[Code](https://github.com/opendilab/LightZero)]
* **LatticeWorld**: "LatticeWorld: A Multimodal Large Language Model-Empowered Framework for Interactive Complex World Generation", **`arXiv 2025.09`**. [[Paper](https://arxiv.org/abs/2509.05263)]
* "Planning with Reasoning using Vision Language World Model", **`arXiv 2025.09`**. [[Paper](https://arxiv.org/abs/2509.02722)]
* "Social World Models", **`arXiv 2025.09`**. [[Paper](https://arxiv.org/abs/2509.00559)]
* "Dynamics-Aligned Latent Imagination in Contextual World Models for Zero-Shot Generalization", **`arXiv 2025.08`**. [[Paper](https://arxiv.org/abs/2508.20294)]
* **HERO**: "HERO: Hierarchical Extrapolation and Refresh for Efficient World Models", **`arXiv 2025.08`**.
[[Paper](https://arxiv.org/abs/2508.17588)]
* "Scalable RF Simulation in Generative 4D Worlds", **`arXiv 2025.08`**. [[Paper](https://arxiv.org/abs/2508.12176)]
* "Finite Automata Extraction: Low-data World Model Learning as Programs from Gameplay Video", **`arXiv 2025.08`**. [[Paper](https://arxiv.org/abs/2508.11836)]
* "Visuomotor Grasping with World Models for Surgical Robots", **`arXiv 2025.08`**. [[Paper](https://arxiv.org/abs/2508.11200)]
* "In-Context Reinforcement Learning via Communicative World Models", **`arXiv 2025.08`**. [[Paper](https://arxiv.org/abs/2508.06659)] [[Code](https://github.com/fernando-ml/CORAL)]
* **PIGDreamer**: "PIGDreamer: Privileged Information Guided World Models for Safe Partially Observable Reinforcement Learning", **`ICML 2025`**. [[Paper](https://arxiv.org/abs/2508.02159)]
* **SimuRA**: "SimuRA: Towards General Goal-Oriented Agent via Simulative Reasoning Architecture with LLM-Based World Model", **`arXiv 2025.07`**. [[Paper](https://arxiv.org/abs/2507.23773)]
* "Back to the Features: DINO as a Foundation for Video World Models", **`arXiv 2025.07`**. [[Paper](https://arxiv.org/abs/2507.19468)]
* **Yume**: "Yume: An Interactive World Generation Model", **`arXiv 2025.07`**. [[Paper](https://arxiv.org/abs/2507.17744)] [[Website](https://stdstu12.github.io/YUME-Project/)] [[Code](https://github.com/stdstu12/YUME)]
* "LLM world models are mental: Output layer evidence of brittle world model use in LLM mechanical reasoning", **`arXiv 2025.07`**. [[Paper](https://arxiv.org/abs/2507.15521)]
* "Safety Certification in the Latent space using Control Barrier Functions and World Models", **`arXiv 2025.07`**.
[[Paper](https://arxiv.org/abs/2507.13871)]
* "Assessing adaptive world models in machines with novel games", **`arXiv 2025.07`**. [[Paper](https://arxiv.org/abs/2507.12821)]
* "Graph World Model", **`ICML 2025`**. [[Paper](https://arxiv.org/abs/2507.10539)] [[Code](https://github.com/ulab-uiuc/GWM)]
* **MobiWorld**: "MobiWorld: World Models for Mobile Wireless Network", **`arXiv 2025.07`**. [[Paper](https://arxiv.org/abs/2507.09462)]
* "Continual Reinforcement Learning by Planning with Online World Models", **`ICML 2025 Spotlight`**. [[Paper](https://arxiv.org/abs/2507.09177)]
* **AirScape**: "AirScape: An Aerial Generative World Model with Motion Controllability", **`arXiv 2025.07`**. [[Paper](https://arxiv.org/abs/2507.08885)] [[Website](https://embodiedcity.github.io/AirScape/)]
* **Geometry Forcing**: "Geometry Forcing: Marrying Video Diffusion and 3D Representation for Consistent World Modeling", **`arXiv 2025.07`**. [[Paper](https://arxiv.org/abs/2507.07982)] [[Website](https://GeometryForcing.github.io)]
* **Martian World Models**: "Martian World Models: Controllable Video Synthesis with Physically Accurate 3D Reconstructions", **`arXiv 2025.07`**. [[Paper](https://arxiv.org/abs/2507.07978)] [[Website](https://marsgenai.github.io)]
* "What Has a Foundation Model Found? Using Inductive Bias to Probe for World Models", **`ICML 2025`**. [[Paper](https://arxiv.org/abs/2507.06952)]
* "Critiques of World Models", **`arXiv 2025.07`**. [[Paper](https://arxiv.org/abs/2507.05169)]
* "When do World Models Successfully Learn Dynamical Systems?", **`arXiv 2025.07`**.
[[Paper](https://arxiv.org/abs/2507.04898)]
* **WebSynthesis**: "WebSynthesis: World-Model-Guided MCTS for Efficient WebUI-Trajectory Synthesis", **`arXiv 2025.07`**. [[Paper](https://arxiv.org/abs/2507.04370)]
* "Accurate and Efficient World Modeling with Masked Latent Transformers", **`arXiv 2025.07`**. [[Paper](https://arxiv.org/abs/2507.04075)]
* **Dyn-O**: "Dyn-O: Building Structured World Models with Object-Centric Representations", **`arXiv 2025.07`**. [[Paper](https://arxiv.org/abs/2507.03298)]
* **NavMorph**: "NavMorph: A Self-Evolving World Model for Vision-and-Language Navigation in Continuous Environments", **`ICCV 2025`**. [[Paper](https://arxiv.org/abs/2506.19055)] [[Code](https://github.com/Feliciaxyao/NavMorph)]
* "A “Good” Regulator May Provide a World Model for Intelligent Systems", **`arXiv 2025.06`**. [[Paper](https://arxiv.org/abs/2506.23032)]
* **Xray2Xray**: "Xray2Xray: World Model from Chest X-rays with Volumetric Context", **`arXiv 2025.06`**. [[Paper](https://arxiv.org/abs/2506.19055)]
* **MATWM**: "Transformer World Model for Sample Efficient Multi-Agent Reinforcement Learning", **`arXiv 2025.06`**. [[Paper](https://arxiv.org/abs/2506.18537)]
* "Measuring (a Sufficient) World Model in LLMs: A Variance Decomposition Framework", **`arXiv 2025.06`**. [[Paper](https://arxiv.org/abs/2506.16584)]
* "Efficient Generation of Diverse Cooperative Agents with World Models", **`arXiv 2025.06`**. [[Paper](https://arxiv.org/abs/2506.07450)]
* **WorldLLM**: "WorldLLM: Improving LLMs' world modeling using curiosity-driven theory-making", **`arXiv 2025.06`**.
[[Paper](https://arxiv.org/abs/2506.06725)]
* "LLMs as World Models: Data-Driven and Human-Centered Pre-Event Simulation for Disaster Impact Assessment", **`arXiv 2025.06`**. [[Paper](https://arxiv.org/abs/2506.06355)]
* "Bootstrapping World Models from Dynamics Models in Multimodal Foundation Models", **`arXiv 2025.06`**. [[Paper](https://arxiv.org/abs/2506.06006)]
* "Video World Models with Long-term Spatial Memory", **`arXiv 2025.06`**. [[Paper](https://arxiv.org/abs/2506.05284)] [[Website](https://spmem.github.io/)]
* **DSG-World**: "DSG-World: Learning a 3D Gaussian World Model from Dual State Videos", **`arXiv 2025.06`**. [[Paper](https://arxiv.org/abs/2506.05217)]
* "Safe Planning and Policy Optimization via World Model Learning", **`arXiv 2025.06`**. [[Paper](https://arxiv.org/abs/2506.04828)]
* **FOLIAGE**: "FOLIAGE: Towards Physical Intelligence World Models Via Unbounded Surface Evolution", **`arXiv 2025.06`**. [[Paper](https://arxiv.org/abs/2506.03173)]
* "Linear Spatial World Models Emerge in Large Language Models", **`arXiv 2025.06`**. [[Paper](https://arxiv.org/abs/2506.02996)] [[Code](https://github.com/matthieu-perso/spatial_world_models)]
* **Simple, Good, Fast**: "Simple, Good, Fast: Self-Supervised World Models Free of Baggage", **`ICLR 2025`**. [[Paper](https://arxiv.org/abs/2506.02612)] [[Code](https://github.com/jrobine/sgf)]
* **Medical World Model**: "Medical World Model: Generative Simulation of Tumor Evolution for Treatment Planning", **`arXiv 2025.06`**. [[Paper](https://arxiv.org/abs/2506.02327)]
* "General agents need world models", **`ICML 2025`**.
[[Paper](https://arxiv.org/abs/2506.01622)]
* "Learning Abstract World Models with a Group-Structured Latent Space", **`arXiv 2025.06`**. [[Paper](https://arxiv.org/abs/2506.01529)]
* **DeepVerse**: "DeepVerse: 4D Autoregressive Video Generation as a World Model", **`arXiv 2025.06`**. [[Paper](https://arxiv.org/abs/2506.01103)]
* "World Models for Cognitive Agents: Transforming Edge Intelligence in Future Networks", **`arXiv 2025.06`**. [[Paper](https://arxiv.org/abs/2506.00417)]
* **Dyna-Think**: "Dyna-Think: Synergizing Reasoning, Acting, and World Model Simulation in AI Agents", **`arXiv 2025.06`**. [[Paper](https://arxiv.org/abs/2506.00320)]
* **StateSpaceDiffuser**: "StateSpaceDiffuser: Bringing Long Context to Diffusion World Models", **`arXiv 2025.05`**. [[Paper](https://arxiv.org/abs/2505.22246)]
* "Learning World Models for Interactive Video Generation", **`arXiv 2025.05`**. [[Paper](https://arxiv.org/abs/2505.21996)]
* "Revisiting Multi-Agent World Modeling from a Diffusion-Inspired Perspective", **`arXiv 2025.05`**. [[Paper](https://arxiv.org/abs/2505.21906)]
* "Long-Context State-Space Video World Models", **`arXiv 2025.05`**. [[Paper](https://arxiv.org/abs/2505.20171)] [[Website](https://ryanpo.com/ssm_wm)]
* "Unlocking Smarter Device Control: Foresighted Planning with a World Model-Driven Code Execution Approach", **`arXiv 2025.05`**. [[Paper](https://arxiv.org/abs/2505.16422)]
* "World Models as Reference Trajectories for Rapid Motor Adaptation", **`arXiv 2025.05`**. [[Paper](https://arxiv.org/abs/2505.15589)]
* "Policy-Driven World Model Adaptation for Robust Offline Model-based Reinforcement Learning", **`arXiv 2025.05`**.
[[Paper](https://arxiv.org/abs/2505.13709)]
* "Building spatial world models from sparse transitional episodic memories", **`arXiv 2025.05`**. [[Paper](https://arxiv.org/abs/2505.13696)]
* **PoE-World**: "PoE-World: Compositional World Modeling with Products of Programmatic Experts", **`arXiv 2025.05`**. [[Paper](https://arxiv.org/abs/2505.10819)] [[Website](https://topwasu.github.io/poe-world)]
* "Explainable Reinforcement Learning Agents Using World Models", **`arXiv 2025.05`**. [[Paper](https://arxiv.org/abs/2505.08073)]
* **seq-JEPA**: "seq-JEPA: Autoregressive Predictive Learning of Invariant-Equivariant World Models", **`arXiv 2025.05`**. [[Paper](https://arxiv.org/abs/2505.03176)]
* "Coupled Distributional Random Expert Distillation for World Model Online Imitation Learning", **`arXiv 2025.05`**. [[Paper](https://arxiv.org/abs/2505.02228)]
* "Learning Local Causal World Models with State Space Models and Attention", **`arXiv 2025.05`**. [[Paper](https://arxiv.org/abs/2505.02074)]
* **WebEvolver**: "WebEvolver: Enhancing Web Agent Self-Improvement with Coevolving World Model", **`arXiv 2025.04`**. [[Paper](https://arxiv.org/abs/2504.21024)]
* **WALL-E 2.0**: "WALL-E 2.0: World Alignment by NeuroSymbolic Learning improves World Model-based LLM Agents", **`arXiv 2025.04`**. [[Paper](https://arxiv.org/abs/2504.15785)] [[Code](https://github.com/elated-sawyer/WALL-E)]
* **ViMo**: "ViMo: A Generative Visual GUI World Model for App Agent", **`arXiv 2025.04`**. [[Paper](https://arxiv.org/abs/2504.13936)]
* "Simulating Before Planning: Constructing Intrinsic User World Model for User-Tailored Dialogue Policy Planning", **`SIGIR 2025`**.
[[Paper](https://arxiv.org/abs/2504.13643)]
* **CheXWorld**: "CheXWorld: Exploring Image World Modeling for Radiograph Representation Learning", **`CVPR 2025`**. [[Paper](https://arxiv.org/abs/2504.13820)] [[Code](https://github.com/LeapLabTHU/CheXWorld)]
* **EchoWorld**: "EchoWorld: Learning Motion-Aware World Models for Echocardiography Probe Guidance", **`CVPR 2025`**. [[Paper](https://arxiv.org/abs/2504.13065)] [[Code](https://github.com/LeapLabTHU/EchoWorld)]
* "Adapting a World Model for Trajectory Following in a 3D Game", **`ICLR 2025 Workshop on World Models`**. [[Paper](https://arxiv.org/abs/2504.12299)]
* **MineWorld**: "MineWorld: a Real-Time and Open-Source Interactive World Model on Minecraft", **`arXiv 2025.04`**. [[Paper](https://arxiv.org/abs/2504.07257)] [[Website](https://aka.ms/mineworld)]
* **MoSim**: "Neural Motion Simulator: Pushing the Limit of World Models in Reinforcement Learning", **`CVPR 2025`**. [[Paper](https://arxiv.org/abs/2504.07095)]
* "Improving World Models using Deep Supervision with Linear Probes", **`ICLR 2025 Workshop on World Models`**. [[Paper](https://arxiv.org/abs/2504.03861)]
* "Decentralized Collective World Model for Emergent Communication and Coordination", **`arXiv 2025.04`**. [[Paper](https://arxiv.org/abs/2504.03353)]
* "Adapting World Models with Latent-State Dynamics Residuals", **`arXiv 2025.04`**. [[Paper](https://arxiv.org/abs/2504.02252)]
* "Can Test-Time Scaling Improve World Foundation Model?", **`arXiv 2025.03`**. [[Paper](https://arxiv.org/abs/2503.24320)] [[Code](https://github.com/Mia-Cong/SWIFT.git)]
* "Synthesizing world models for bilevel planning", **`arXiv 2025.03`**.
[[Paper](https://arxiv.org/abs/2503.20124)]
* "Long-context autoregressive video modeling with next-frame prediction", **`arXiv 2025.03`**. [[Paper](https://arxiv.org/abs/2503.19325)] [[Code](https://github.com/showlab/FAR)] [[Website](https://farlongctx.github.io/)]
* **Aether**: "Aether: Geometric-Aware Unified World Modeling", **`arXiv 2025.03`**. [[Paper](https://arxiv.org/abs/2503.18945)] [[Website](https://aether-world.github.io/)]
* **FusDreamer**: "FusDreamer: Label-efficient Remote Sensing World Model for Multimodal Data Classification", **`arXiv 2025.03`**. [[Paper](https://arxiv.org/abs/2503.13814)] [[Code](https://github.com/Cimy-wang/FusDreamer)]
* "Inter-environmental world modeling for continuous and compositional dynamics", **`arXiv 2025.03`**. [[Paper](https://arxiv.org/abs/2503.09911)]
* **Disentangled World Models**: "Disentangled World Models: Learning to Transfer Semantic Knowledge from Distracting Videos for Reinforcement Learning", **`arXiv 2025.03`**. [[Paper](https://arxiv.org/abs/2503.08751)]
* "Revisiting the Othello World Model Hypothesis", **`ICLR 2025 Workshop on World Models`**. [[Paper](https://arxiv.org/abs/2503.04421)]
* "Learning Transformer-based World Models with Contrastive Predictive Coding", **`arXiv 2025.03`**. [[Paper](https://arxiv.org/abs/2503.04416)]
* "Surgical Vision World Model", **`arXiv 2025.03`**. [[Paper](https://arxiv.org/abs/2503.02904)]
* "World Models for Anomaly Detection during Model-Based Reinforcement Learning Inference", **`arXiv 2025.03`**.
[[Paper](https://arxiv.org/abs/2503.02552)]
* **WMNav**: "WMNav: Integrating Vision-Language Models into World Models for Object Goal Navigation", **`arXiv 2025.03`**. [[Paper](https://arxiv.org/abs/2503.02247)] [[Website](https://b0b8k1ng.github.io/WMNav/)]
* **SENSEI**: "SENSEI: Semantic Exploration Guided by Foundation Models to Learn Versatile World Models", **`arXiv 2025.03`**. [[Paper](https://arxiv.org/abs/2503.01584)] [[Website](https://sites.google.com/view/sensei-paper)]
* "Learning Actionable World Models for Industrial Process Control", **`arXiv 2025.03`**. [[Paper](https://arxiv.org/abs/2503.00713)]
* "Implementing Spiking World Model with Multi-Compartment Neurons for Model-based Reinforcement Learning", **`arXiv 2025.03`**. [[Paper](https://arxiv.org/abs/2503.00713)]
* "Discrete Codebook World Models for Continuous Control", **`ICLR 2025`**. [[Paper](https://arxiv.org/abs/2503.00653)]
* **Multimodal Dreaming**: "Multimodal Dreaming: A Global Workspace Approach to World Model-Based Reinforcement Learning", **`arXiv 2025.02`**. [[Paper](https://arxiv.org/abs/2502.21142)]
* "Generalist World Model Pre-Training for Efficient Reinforcement Learning", **`arXiv 2025.02`**. [[Paper](https://arxiv.org/abs/2502.19544)]
* "Learning To Explore With Predictive World Model Via Self-Supervised Learning", **`arXiv 2025.02`**. [[Paper](https://arxiv.org/abs/2502.13200)]
* **M^3**: "M^3: A Modular World Model over Streams of Tokens", **`arXiv 2025.02`**. [[Paper](https://arxiv.org/abs/2502.11537)]
* "When do neural networks learn world models?", **`arXiv 2025.02`**.
[[Paper](https://arxiv.org/abs/2502.09297)]
* "Pre-Trained Video Generative Models as World Simulators", **`arXiv 2025.02`**. [[Paper](https://arxiv.org/abs/2502.07825)]
* **DMWM**: "DMWM: Dual-Mind World Model with Long-Term Imagination", **`arXiv 2025.02`**. [[Paper](https://arxiv.org/abs/2502.07591)]
* **EvoAgent**: "EvoAgent: Agent Autonomous Evolution with Continual World Model for Long-Horizon Tasks", **`arXiv 2025.02`**. [[Paper](https://arxiv.org/abs/2502.05907)]
* "Acquisition through My Eyes and Steps: A Joint Predictive Agent Model in Egocentric Worlds", **`arXiv 2025.02`**. [[Paper](https://arxiv.org/abs/2502.05857)]
* "Generating Symbolic World Models via Test-time Scaling of Large Language Models", **`arXiv 2025.02`**. [[Paper](https://arxiv.org/abs/2502.04728)] [[Website](https://vmlpddl.github.io/)]
* "Improving Transformer World Models for Data-Efficient RL", **`arXiv 2025.02`**. [[Paper](https://arxiv.org/abs/2502.01591)]
* "Trajectory World Models for Heterogeneous Environments", **`arXiv 2025.02`**. [[Paper](https://arxiv.org/abs/2502.01366)]
* "Enhancing Memory and Imagination Consistency in Diffusion-based World Models via Linear-Time Sequence Modeling", **`arXiv 2025.02`**. [[Paper](https://arxiv.org/abs/2502.00466)]
* "Objects matter: object-centric world models improve reinforcement learning in visually complex environments", **`arXiv 2025.01`**. [[Paper](https://arxiv.org/abs/2501.16443)]
* **GLAM**: "GLAM: Global-Local Variation Awareness in Mamba-based World Model", **`arXiv 2025.01`**. [[Paper](https://arxiv.org/abs/2501.11949)]
* **GAWM**: "GAWM: Global-Aware World Model for Multi-Agent Reinforcement Learning", **`arXiv 2025.01`**.
[[Paper](https://arxiv.org/abs/2501.10116)]
* "Generative Emergent Communication: Large Language Model is a Collective World Model", **`arXiv 2025.01`**. [[Paper](https://arxiv.org/abs/2501.00226)]
* "Towards Unraveling and Improving Generalization in World Models", **`arXiv 2025.01`**. [[Paper](https://arxiv.org/abs/2501.00195)]
* "Towards Physically Interpretable World Models: Meaningful Weakly Supervised Representations for Visual Trajectory Prediction", **`arXiv 2024.12`**. [[Paper](https://arxiv.org/abs/2412.12870)]
* "Transformers Use Causal World Models in Maze-Solving Tasks", **`arXiv 2024.12`**. [[Paper](https://arxiv.org/abs/2412.11867)]
* "Causal World Representation in the GPT Model", **`NeurIPS 2024 Workshop`**. [[Paper](https://arxiv.org/abs/2412.07446)]
* **Owl-1**: "Owl-1: Omni World Model for Consistent Long Video Generation", **`arXiv 2024.12`**. [[Paper](https://arxiv.org/abs/2412.09600)]
* "Navigation World Models", **`arXiv 2024.12`**. [[Paper](https://arxiv.org/abs/2412.03572)] [[Website](https://www.amirbar.net/nwm/)]
* "Evaluating World Models with LLM for Decision Making", **`arXiv 2024.11`**. [[Paper](https://arxiv.org/abs/2411.08794)]
* **LLMPhy**: "LLMPhy: Complex Physical Reasoning Using Large Language Models and World Models", **`arXiv 2024.11`**. [[Paper](https://arxiv.org/abs/2411.08027)]
* **WebDreamer**: "Is Your LLM Secretly a World Model of the Internet? Model-Based Planning for Web Agents", **`arXiv 2024.11`**. [[Paper](https://arxiv.org/abs/2411.06559)] [[Code](https://github.com/OSU-NLP-Group/WebDreamer)]
* "Scaling Laws for Pre-training Agents and World Models", **`arXiv 2024.11`**.
[[Paper](https://arxiv.org/abs/2411.04434)]
* **DINO-WM**: "DINO-WM: World Models on Pre-trained Visual Features enable Zero-shot Planning", **`arXiv 2024.11`**. [[Paper](https://arxiv.org/abs/2411.04983)] [[Website](https://dino-wm.github.io/)]
* "Learning World Models for Unconstrained Goal Navigation", **`NeurIPS 2024`**. [[Paper](https://arxiv.org/abs/2411.02446)]
* "How Far is Video Generation from World Model: A Physical Law Perspective", **`arXiv 2024.11`**. [[Paper](https://arxiv.org/abs/2411.02385)] [[Website](https://phyworld.github.io/)] [[Code](https://github.com/phyworld/phyworld)]
* **Adaptive World Models**: "Adaptive World Models: Learning Behaviors by Latent Imagination Under Non-Stationarity", **`NeurIPS 2024 Workshop Adaptive Foundation Models`**. [[Paper](https://arxiv.org/abs/2411.01342)]
* **LLMCWM**: "Language Agents Meet Causality -- Bridging LLMs and Causal World Models", **`arXiv 2024.10`**. [[Paper](https://arxiv.org/abs/2410.19923)] [[Code](https://github.com/j0hngou/LLMCWM/)]
* "Reward-free World Models for Online Imitation Learning", **`arXiv 2024.10`**. [[Paper](https://arxiv.org/abs/2410.14081)]
* "Web Agents with World Models: Learning and Leveraging Environment Dynamics in Web Navigation", **`arXiv 2024.10`**. [[Paper](https://arxiv.org/abs/2410.13232)]
* **AVID**: "AVID: Adapting Video Diffusion Models to World Models", **`arXiv 2024.10`**. [[Paper](https://arxiv.org/abs/2410.12822)] [[Code](https://github.com/microsoft/causica/tree/main/research_experiments/avid)]
* **SMAC**: "Grounded Answers for Multi-agent Decision-making Problem through Generative World Model", **`NeurIPS 2024`**.
[[Paper](https://arxiv.org/abs/2410.02664)]
* **OSWM**: "One-shot World Models Using a Transformer Trained on a Synthetic Prior", **`arXiv 2024.09`**. [[Paper](https://arxiv.org/abs/2409.14084)]
* "Making Large Language Models into World Models with Precondition and Effect Knowledge", **`arXiv 2024.09`**. [[Paper](https://arxiv.org/abs/2409.12278)]
* "Efficient Exploration and Discriminative World Model Learning with an Object-Centric Abstraction", **`arXiv 2024.08`**. [[Paper](https://arxiv.org/abs/2408.11816)]
* **MoReFree**: "World Models Increase Autonomy in Reinforcement Learning", **`arXiv 2024.08`**. [[Paper](https://arxiv.org/abs/2408.09807)] [[Project](https://sites.google.com/view/morefree)]
* **UrbanWorld**: "UrbanWorld: An Urban World Model for 3D City Generation", **`arXiv 2024.07`**. [[Paper](https://arxiv.org/abs/2407.11965)]
* **PWM**: "PWM: Policy Learning with Large World Models", **`arXiv 2024.07`**. [[Paper](https://arxiv.org/abs/2407.02466)] [[Website](https://www.imgeorgiev.com/pwm/)]
* "Predicting vs. Acting: A Trade-off Between World Modeling & Agent Modeling", **`arXiv 2024.07`**. [[Paper](https://arxiv.org/abs/2407.02446)]
* **GenRL**: "GenRL: Multimodal foundation world models for generalist embodied agents", **`arXiv 2024.06`**. [[Paper](https://arxiv.org/abs/2406.18043)] [[Code](https://github.com/mazpie/genrl)]
* **DLLM**: "World Models with Hints of Large Language Models for Goal Achieving", **`arXiv 2024.06`**. [[Paper](http://arxiv.org/pdf/2406.07381)]
* "Cognitive Map for Language Models: Optimal Planning via Verbally Representing the World Model", **`arXiv 2024.06`**.
[[Paper](https://arxiv.org/abs/2406.15275)]
* **CoDreamer**: "CoDreamer: Communication-Based Decentralised World Models", **`arXiv 2024.06`**. [[Paper](https://arxiv.org/abs/2406.13600)]
* **Pandora**: "Pandora: Towards General World Model with Natural Language Actions and Video States", **`arXiv 2024.06`**. [[Paper](https://arxiv.org/abs/2406.09455)] [[Code](https://github.com/maitrix-org/Pandora)]
* **EBWM**: "Cognitively Inspired Energy-Based World Models", **`arXiv 2024.06`**. [[Paper](https://arxiv.org/abs/2406.08862)]
* "Evaluating the World Model Implicit in a Generative Model", **`arXiv 2024.06`**. [[Paper](https://arxiv.org/abs/2406.03689)] [[Code](https://github.com/keyonvafa/world-model-evaluation)]
* "Transformers and Slot Encoding for Sample Efficient Physical World Modelling", **`arXiv 2024.05`**. [[Paper](https://arxiv.org/abs/2405.20180)] [[Code](https://github.com/torchipeppo/transformers-and-slot-encoding-for-wm)]
* **Puppeteer**: "Hierarchical World Models as Visual Whole-Body Humanoid Controllers", **`arXiv 2024.05`**. [[Paper](https://arxiv.org/abs/2405.18418)] [[Website](https://nicklashansen.com/rlpuppeteer)]
* **BWArea Model**: "BWArea Model: Learning World Model, Inverse Dynamics, and Policy for Controllable Language Generation", **`arXiv 2024.05`**. [[Paper](https://arxiv.org/abs/2405.17039)]
* **WKM**: "Agent Planning with World Knowledge Model", **`arXiv 2024.05`**. [[Paper](https://arxiv.org/abs/2405.14205)] [[Code](https://github.com/zjunlp/WKM)]
* **Diamond**: "Diffusion for World Modeling: Visual Details Matter in Atari", **`arXiv 2024.05`**.
[[Paper](https://arxiv.org/abs/2405.12399)] [[Code](https://github.com/eloialonso/diamond)]
* "Compete and Compose: Learning Independent Mechanisms for Modular World Models", **`arXiv 2024.04`**. [[Paper](https://arxiv.org/abs/2404.15109)]
* "Dreaming of Many Worlds: Learning Contextual World Models Aids Zero-Shot Generalization", **`arXiv 2024.03`**. [[Paper](https://arxiv.org/abs/2403.10967)] [[Code](https://github.com/sai-prasanna/dreaming_of_many_worlds)]
* **V-JEPA**: "V-JEPA: Video Joint Embedding Predictive Architecture", **`Meta AI`**. [[Blog](https://ai.meta.com/blog/v-jepa-yann-lecun-ai-model-video-joint-embedding-predictive-architecture/)] [[Paper](https://ai.meta.com/research/publications/revisiting-feature-prediction-for-learning-visual-representations-from-video/)] [[Code](https://github.com/facebookresearch/jepa)]
* **IWM**: "Learning and Leveraging World Models in Visual Representation Learning", **`Meta AI`**. [[Paper](https://arxiv.org/abs/2403.00504)]
* **Genie**: "Genie: Generative Interactive Environments", **`DeepMind`**. [[Paper](https://arxiv.org/abs/2402.15391)] [[Blog](https://sites.google.com/view/genie-2024/home)]
* **Sora**: "Video generation models as world simulators", **`OpenAI`**. [[Technical report](https://openai.com/research/video-generation-models-as-world-simulators)]
* **LWM**: "World Model on Million-Length Video And Language With RingAttention", **`arXiv 2024.02`**. [[Paper](https://arxiv.org/abs/2402.08268)] [[Code](https://github.com/LargeWorldModel/LWM)]
* "Planning with an Ensemble of World Models", **`OpenReview`**.
[[Paper](https://openreview.net/forum?id=cvGdPXaydP)]
* **WorldDreamer**: "WorldDreamer: Towards General World Models for Video Generation via Predicting Masked Tokens", **`arXiv 2024.01`**. [[Paper](https://arxiv.org/abs/2401.09985)] [[Code](https://github.com/JeffWang987/WorldDreamer)]
* **CWM**: "Understanding Physical Dynamics with Counterfactual World Modeling", **`ECCV 2024`**. [[Paper](https://www.ecva.net/papers/eccv_2024/papers_ECCV/papers/03523.pdf)] [[Website](https://neuroailab.github.io/cwm-physics/)]
* **Δ-IRIS**: "Efficient World Models with Context-Aware Tokenization", **`ICML 2024`**. [[Paper](https://arxiv.org/abs/2406.19320)] [[Code](https://github.com/vmicheli/delta-iris)]
* **LLM-Sim**: "Can Language Models Serve as Text-Based World Simulators?", **`ACL`**. [[Paper](https://arxiv.org/abs/2406.06485)] [[Code](https://github.com/cognitiveailab/GPT-simulator)]
* **AD3**: "AD3: Implicit Action is the Key for World Models to Distinguish the Diverse Visual Distractors", **`ICML 2024`**. [[Paper](https://arxiv.org/abs/2403.09976)]
* **MAMBA**: "MAMBA: an Effective World Model Approach for Meta-Reinforcement Learning", **`ICLR 2024`**. [[Paper](https://arxiv.org/abs/2403.09859)] [[Code](https://github.com/zoharri/mamba)]
* **R2I**: "Mastering Memory Tasks with World Models", **`ICLR 2024`**. [[Paper](http://arxiv.org/pdf/2403.04253)] [[Website](https://recall2imagine.github.io/)] [[Code](https://github.com/chandar-lab/Recall2Imagine)]
* **HarmonyDream**: "HarmonyDream: Task Harmonization Inside World Models", **`ICML 2024`**.
[[Paper](https://openreview.net/forum?id=x0yIaw2fgk)] [[Code](https://github.com/thuml/HarmonyDream)]
* **REM**: "Improving Token-Based World Models with Parallel Observation Prediction", **`ICML 2024`**. [[Paper](https://arxiv.org/abs/2402.05643)] [[Code](https://github.com/leor-c/REM)]
* "Do Transformer World Models Give Better Policy Gradients?", **`ICML 2024`**. [[Paper](https://arxiv.org/abs/2402.05290)]
* **DreamSmooth**: "DreamSmooth: Improving Model-based Reinforcement Learning via Reward Smoothing", **`ICLR 2024`**. [[Paper](https://arxiv.org/pdf/2311.01450)]
* **TD-MPC2**: "TD-MPC2: Scalable, Robust World Models for Continuous Control", **`ICLR 2024`**. [[Paper](https://arxiv.org/pdf/2310.16828)] [[Torch Code](https://github.com/nicklashansen/tdmpc2)]
* **Hieros**: "Hieros: Hierarchical Imagination on Structured State Space Sequence World Models", **`ICML 2024`**. [[Paper](https://arxiv.org/abs/2310.05167)]
* **CoWorld**: "Making Offline RL Online: Collaborative World Models for Offline Visual Reinforcement Learning", **`NeurIPS 2024`**. [[Paper](https://arxiv.org/abs/2305.15260)]

---

## World Models for Embodied AI
* **JailWAM**: "JailWAM: Jailbreaking World Action Models in Robot Control", **`arXiv 2026.04`**. [[Paper](https://arxiv.org/abs/2604.05498)]
* "Hierarchical Planning with Latent World Models", **`arXiv 2026.04`**. [[Paper](https://arxiv.org/abs/2604.03208)]
* "World Action Verifier: Self-Improving World Models via Forward-Inverse Asymmetry", **`arXiv 2026.04`**.
[[Paper](https://arxiv.org/abs/2604.01985)] [[Website](https://world-action-verifier.github.io)]
* **EgoSim**: "EgoSim: Egocentric World Simulator for Embodied Interaction Generation", **`arXiv 2026.04`**. [[Paper](https://arxiv.org/abs/2604.01001)] [[Website](http://egosimulator.github.io/)]
* "Enhancing Policy Learning with World-Action Model", **`arXiv 2026.03`**. [[Paper](https://arxiv.org/abs/2603.28955)]
* "Persistent Robot World Models: Stabilizing Multi-Step Rollouts via Reinforcement Learning", **`arXiv 2026.03`**. [[Paper](https://arxiv.org/abs/2603.25685)]
* **ThinkJEPA**: "ThinkJEPA: Empowering Latent World Models with Large Vision-Language Reasoning Model", **`arXiv 2026.03`**. [[Paper](https://arxiv.org/abs/2603.22281)]
* "Do World Action Models Generalize Better than VLAs? A Robustness Study", **`arXiv 2026.03`**. [[Paper](https://arxiv.org/abs/2603.22078)]
* **DDP**: "Dreaming the Unseen: World Model-regularized Diffusion Policy for Out-of-Distribution Robustness", **`arXiv 2026.03`**. [[Paper](https://arxiv.org/abs/2603.21017)]
* **OmniVTA**: "OmniVTA: Visuo-Tactile World Modeling for Contact-Rich Robotic Manipulation", **`arXiv 2026.03`**. [[Paper](https://arxiv.org/abs/2603.19201)] [[Website](https://mrsecant.github.io/OmniVTA)]
* **EVA**: "EVA: Aligning Video World Models with Executable Robot Actions via Inverse Dynamics Rewards", **`arXiv 2026.03`**. [[Paper](https://arxiv.org/abs/2603.17808)] [[Website](https://eva-project-page.github.io/)]
* **DreamPlan**: "DreamPlan: Efficient Reinforcement Fine-Tuning of Vision-Language Planners via Video World Models", **`arXiv 2026.03`**.
[[Paper](https://arxiv.org/abs/2603.16860)] [[Website](https://psi-lab.ai/DreamPlan/)]
* **Kinema4D**: "Kinema4D: Kinematic 4D World Modeling for Spatiotemporal Embodied Simulation", **`arXiv 2026.03`**. [[Paper](https://arxiv.org/abs/2603.16669)] [[Website](https://mutianxu.github.io/Kinema4D-project-page/)]
* "Simulation Distillation: Pretraining World Models in Simulation for Rapid Real-World Adaptation", **`arXiv 2026.03`**. [[Paper](https://arxiv.org/abs/2603.15759)] [[Website](https://sim-dist.github.io/)]
* **WestWorld**: "WestWorld: A Knowledge-Encoded Scalable Trajectory World Model for Diverse Robotic Systems", **`arXiv 2026.03`**. [[Paper](https://arxiv.org/abs/2603.14392)] [[Website](https://westworldrobot.github.io/)]
* "Building Explicit World Model for Zero-Shot Open-World Object Manipulation", **`arXiv 2026.03`**. [[Paper](https://arxiv.org/abs/2603.13825)] [[Website](https://bojack-bj.github.io/projects/thesis/)]
* "Egocentric World Model for Photorealistic Hand-Object Interaction Synthesis", **`arXiv 2026.03`**. [[Paper](https://arxiv.org/abs/2603.13615)] [[Website](https://egohoi.github.io/)]
* **RoboStereo**: "RoboStereo: Dual-Tower 4D Embodied World Models for Unified Policy Optimization", **`arXiv 2026.03`**. [[Paper](https://arxiv.org/abs/2603.12639)]
* **ResWM**: "ResWM: Residual-Action World Model for Visual RL", **`arXiv 2026.03`**. [[Paper](https://arxiv.org/abs/2603.11110)]
* **PlayWorld**: "PlayWorld: Learning Robot World Models from Autonomous Play", **`arXiv 2026.03`**.
[[Paper](https://arxiv.org/abs/2603.09030)] [[Website](https://robot-playworld.github.io/)]
* **MetaWorld-X**: "MetaWorld-X: Hierarchical World Modeling via VLM-Orchestrated Experts for Humanoid Loco-Manipulation", **`arXiv 2026.03`**. [[Paper](https://arxiv.org/abs/2603.08572)] [[Website](https://syt2004.github.io/metaworldX/)]
* "Interactive World Simulator for Robot Policy Training and Evaluation", **`arXiv 2026.03`**. [[Paper](https://arxiv.org/abs/2603.08546)] [[Website](https://yixuanwang.me/interactive_world_sim)]
* "Foundational World Models Accurately Detect Bimanual Manipulator Failures", **`ICRA 2026`**. [[Paper](https://arxiv.org/abs/2603.06987)]
* **LPWM**: "Latent Particle World Models: Self-supervised Object-centric Stochastic Dynamics Modeling", **`ICLR 2026`**. [[Paper](https://arxiv.org/abs/2603.04553)] [[Website](https://taldatech.github.io/lpwm-web)]
* **AdaWorldPolicy**: "AdaWorldPolicy: World-Model-Driven Diffusion Policy with Online Adaptive Learning for Robotic Manipulation", **`arXiv 2026.02`**. [[Paper](https://arxiv.org/abs/2602.20057)] [[Website](https://AdaWorldPolicy.github.io)]
* **FRAPPE**: "FRAPPE: Infusing World Modeling into Generalist Policies via Multiple Future Representation Alignment", **`arXiv 2026.02`**. [[Paper](https://arxiv.org/abs/2602.17259)] [[Website](https://h-zhao1997.github.io/frappe)]
* "Learning to unfold cloth: Scaling up world models to deformable object manipulation", **`arXiv 2026.02`**. [[Paper](https://arxiv.org/abs/2602.16675)]
* "World Model Failure Classification and Anomaly Detection for Autonomous Inspection", **`arXiv 2026.02`**.
[[Paper](https://arxiv.org/abs/2602.16182)] [[Website](https://autoinspection-classification.github.io)]
* **DreamZero**: "World Action Models are Zero-shot Policies", **`arXiv 2026.02`**. [[Paper](https://arxiv.org/abs/2602.15922)] [[Website](https://dreamzero0.github.io/)]
* **WoVR**: "WoVR: World Models as Reliable Simulators for Post-Training VLA Policies with RL", **`arXiv 2026.02`**. [[Paper](https://arxiv.org/abs/2602.13977)]
* "Visual Foresight for Robotic Stow: A Diffusion-Based World Model from Sparse Snapshots", **`arXiv 2026.02`**. [[Paper](https://arxiv.org/abs/2602.13347)]
* **VLAW**: "VLAW: Iterative Co-Improvement of Vision-Language-Action Policy and World Model", **`arXiv 2026.02`**. [[Paper](https://arxiv.org/abs/2602.12063)] [[Website](https://sites.google.com/view/vla-w)]
* **HAIC**: "HAIC: Humanoid Agile Object Interaction Control via Dynamics-Aware World Model", **`arXiv 2026.02`**. [[Paper](https://arxiv.org/abs/2602.11758)] [[Website](https://haic-humanoid.github.io/)]
* **H-WM**: "H-WM: Robotic Task and Motion Planning Guided by Hierarchical World Model", **`arXiv 2026.02`**. [[Paper](https://arxiv.org/abs/2602.11291)]
* **RISE**: "RISE: Self-Improving Robot Policy with Compositional World Model", **`arXiv 2026.02`**. [[Paper](https://arxiv.org/abs/2602.11075)] [[Website](https://opendrivelab.com/kai0-rl/)]
* "ContactGaussian-WM: Learning Physics-Grounded World Model from Videos", **`arXiv 2026.02`**. [[Paper](https://arxiv.org/abs/2602.11021)]
* "Scaling World Model for Hierarchical Manipulation Policies", **`arXiv 2026.02`**.
[[Paper](https://arxiv.org/abs/2602.10983)] [[Website](https://vista-wm.github.io)]
* **Say, Dream, and Act**: "Say, Dream, and Act: Learning Video World Models for Instruction-Driven Robot Manipulation", **`arXiv 2026.02`**. [[Paper](https://arxiv.org/abs/2602.10717)]
* "Affordances Enable Partial World Modeling with LLMs", **`arXiv 2026.02`**. [[Paper](https://arxiv.org/abs/2602.10390)]
* **VLA-JEPA**: "VLA-JEPA: Enhancing Vision-Language-Action Model with Latent World Model", **`arXiv 2026.02`**. [[Paper](https://arxiv.org/abs/2602.10098)]
* **MVISTA-4D**: "MVISTA-4D: View-Consistent 4D World Model with Test-Time Action Inference for Robotic Manipulation", **`arXiv 2026.02`**. [[Paper](https://arxiv.org/abs/2602.09878)]
* **Hand2World**: "Hand2World: Autoregressive Egocentric Interaction Generation via Free-Space Hand Gestures", **`arXiv 2026.02`**. [[Paper](https://arxiv.org/abs/2602.09600)] [[Website](https://hand2world.github.io/)]
* **World-VLA-Loop**: "World-VLA-Loop: Closed-Loop Learning of Video World Model and VLA Policy", **`arXiv 2026.02`**. [[Paper](https://arxiv.org/abs/2602.06508)] [[Website](https://showlab.github.io/World-VLA-Loop/)]
* "Coupled Local and Global World Models for Efficient First Order RL", **`arXiv 2026.02`**. [[Paper](https://arxiv.org/abs/2602.06219)]
* "Visuo-Tactile World Models", **`arXiv 2026.02`**. [[Paper](https://arxiv.org/abs/2602.06001)]
* **BridgeV2W**: "BridgeV2W: Bridging Video Generation Models to Embodied World Models via Embodiment Masks", **`arXiv 2026.02`**.
[[Paper](https://arxiv.org/abs/2602.03793)] [[Website](https://BridgeV2W.github.io)]
* **World-Gymnast**: "World-Gymnast: Training Robots with Reinforcement Learning in a World Model", **`arXiv 2026.02`**. [[Paper](https://arxiv.org/abs/2602.02454)] [[Website](https://world-gymnast.github.io/)]
* **MetaWorld**: "MetaWorld: Skill Transfer and Composition in a Hierarchical World Model for Grounding High-Level Instructions", **`arXiv 2026.01`**. [[Paper](https://arxiv.org/abs/2601.17507)] [[Code](https://anonymous.4open.science/r/metaworld-2BF4/)]
* "Walk through Paintings: Egocentric World Models from Internet Priors", **`arXiv 2026.01`**. [[Paper](https://arxiv.org/abs/2601.15284)]
* "Aligning Agentic World Models via Knowledgeable Experience Learning", **`arXiv 2026.01`**. [[Paper](https://arxiv.org/abs/2601.13247)]
* **ReWorld**: "ReWorld: Multi-Dimensional Reward Modeling for Embodied World Models", **`arXiv 2026.01`**. [[Paper](https://arxiv.org/abs/2601.12428)]
* "An Efficient and Multi-Modal Navigation System with One-Step World Model", **`arXiv 2026.01`**. [[Paper](https://arxiv.org/abs/2601.12277)]
* **PointWorld**: "PointWorld: Scaling 3D World Models for In-The-Wild Robotic Manipulation", **`arXiv 2026.01`**. [[Paper](https://arxiv.org/abs/2601.03782)] [[Website](https://point-world.github.io/)]
* **Dream2Flow**: "Dream2Flow: Bridging Video Generation and Open-World Manipulation with 3D Object Flow", **`arXiv 2025.12`**. [[Paper](https://arxiv.org/abs/2512.24766)] [[Website](https://dream2flow.github.io/)]
* "What Drives Success in Physical Planning with Joint-Embedding Predictive World Models?", **`arXiv 2025.12`**.
[[Paper](https://arxiv.org/abs/2512.24497)] [[Code](https://github.com/facebookresearch/jepa-wms)]
* **Act2Goal**: "Act2Goal: From World Model To General Goal-conditioned Policy", **`arXiv 2025.12`**. [[Paper](https://arxiv.org/abs/2512.23541)] [[Website](https://act2goal.github.io/)]
* **AstraNav-World**: "AstraNav-World: World Model for Foresight Control and Consistency", **`arXiv 2025.12`**. [[Paper](https://arxiv.org/abs/2512.21714)]
* **ChronoDreamer**: "ChronoDreamer: Action-Conditioned World Model as an Online Simulator for Robotic Planning", **`arXiv 2025.12`**. [[Paper](https://arxiv.org/abs/2512.18619)]
* **STORM**: "STORM: Search-Guided Generative World Models for Robotic Manipulation", **`arXiv 2025.12`**. [[Paper](https://arxiv.org/abs/2512.18477)]
* "World Models Can Leverage Human Videos for Dexterous Manipulation", **`arXiv 2025.12`**. [[Paper](https://arxiv.org/pdf/2512.13644)]
* "Latent Action World Models for Control with Unlabeled Trajectories", **`arXiv 2025.12`**. [[Paper](https://arxiv.org/abs/2512.10016)]
* **PRISM-WM**: "Prismatic World Model: Learning Compositional Dynamics for Planning in Hybrid Systems", **`arXiv 2025.12`**. [[Paper](https://arxiv.org/abs/2512.08411)]
* "Learning Robot Manipulation from Audio World Models", **`arXiv 2025.12`**. [[Paper](https://arxiv.org/abs/2512.08405)]
* "Embodied Tree of Thoughts: Deliberate Manipulation Planning with Embodied World Model", **`arXiv 2025.12`**. [[Paper](https://arxiv.org/abs/2512.08188)] [[Website](https://embodied-tree-of-thoughts.github.io)]
* "World Models That Know When They Don't Know: Controllable Video Generation with Calibrated Uncertainty", **`arXiv 2025.12`**.
[[Paper](https://arxiv.org/abs/2512.05927)]
* "Real-World Robot Control by Deep Active Inference With a Temporally Hierarchical World Model", **`IEEE Robotics and Automation Letters`**. [[Paper](https://arxiv.org/abs/2512.01924)]
* "Seeing through Imagination: Learning Scene Geometry via Implicit Spatial World Modeling", **`arXiv 2025.12`**. [[Paper](https://arxiv.org/abs/2512.01821)]
* **IGen**: "IGen: Scalable Data Generation for Robot Learning from Open-World Images", **`arXiv 2025.12`**. [[Paper](https://arxiv.org/abs/2512.01773)] [[Website](https://chenghaogu.github.io/IGen/)]
* **NavForesee**: "NavForesee: A Unified Vision-Language World Model for Hierarchical Planning and Dual-Horizon Navigation Prediction", **`arXiv 2025.12`**. [[Paper](https://arxiv.org/abs/2512.01550)]
* **TraceGen**: "TraceGen: World Modeling in 3D Trace Space Enables Learning from Cross-Embodiment Videos", **`arXiv 2025.11`**. [[Paper](https://arxiv.org/abs/2511.21690)] [[Website](https://tracegen.github.io/)]
* **ENACT**: "ENACT: Evaluating Embodied Cognition with World Modeling of Egocentric Interaction", **`arXiv 2025.11`**. [[Paper](https://arxiv.org/abs/2511.20937)] [[Website](https://enact-embodied-cognition.github.io/)] [[Code](https://github.com/mll-lab-nu/ENACT)]
* "Learning Massively Multitask World Models for Continuous Control", **`arXiv 2025.11`**. [[Paper](https://arxiv.org/abs/2511.19584)] [[Website](https://www.nicklashansen.com/NewtWM)]
* **UNeMo**: "UNeMo: Collaborative Visual-Language Reasoning and Navigation via a Multimodal World Model", **`arXiv 2025.11`**.
[[Paper](https://arxiv.org/abs/2511.18845)]
* "MindForge: Empowering Embodied Agents with Theory of Mind for Lifelong Cultural Learning", **`NeurIPS 2025`**. [[Paper](https://arxiv.org/abs/2411.12977)]
* "Towards High-Consistency Embodied World Model with Multi-View Trajectory Videos", **`arXiv 2025.11`**. [[Paper](https://arxiv.org/abs/2511.12882)]
* **WMPO**: "WMPO: World Model-based Policy Optimization for Vision-Language-Action Models", **`arXiv 2025.11`**. [[Paper](https://arxiv.org/abs/2511.09515)] [[Website](https://wm-po.github.io)]
* "Robot Learning from a Physical World Model", **`arXiv 2025.11`**. [[Paper](https://arxiv.org/abs/2511.07416)] [[Website](https://pointscoder.github.io/PhysWorld_Web/)]
* "When Object-Centric World Models Meet Policy Learning: From Pixels to Policies, and Where It Breaks", **`arXiv 2025.11`**. [[Paper](https://arxiv.org/abs/2511.06136)]
* **WorldPlanner**: "WorldPlanner: Monte Carlo Tree Search and MPC with Action-Conditioned Visual World Models", **`arXiv 2025.11`**. [[Paper](https://arxiv.org/abs/2511.03077)]
* "Learning Interactive World Model for Object-Centric Reinforcement Learning", **`NeurIPS 2025`**. [[Paper](https://arxiv.org/abs/2511.02225)]
* "Scaling Cross-Embodiment World Models for Dexterous Manipulation", **`arXiv 2025.11`**. [[Paper](https://arxiv.org/abs/2511.01177)]
* "Co-Evolving Latent Action World Models", **`arXiv 2025.10`**. [[Paper](https://arxiv.org/abs/2510.26433)]
* "Deductive Chain-of-Thought Augmented Socially-aware Robot Navigation World Model", **`arXiv 2025.10`**.
[[Paper](https://arxiv.org/abs/2510.23509)] [[Website](https://sites.google.com/view/NaviWM)]
* "Deep Active Inference with Diffusion Policy and Multiple Timescale World Model for Real-World Exploration and Navigation", **`arXiv 2025.10`**. [[Paper](https://arxiv.org/abs/2510.23258)]
* **ProTerrain**: "ProTerrain: Probabilistic Physics-Informed Rough Terrain World Modeling", **`arXiv 2025.10`**. [[Paper](https://arxiv.org/abs/2510.19364)]
* "Ego-Vision World Model for Humanoid Contact Planning", **`arXiv 2025.10`**. [[Paper](https://arxiv.org/abs/2510.11682)] [[Website](https://ego-vcp.github.io/)]
* **Ctrl-World**: "Ctrl-World: A Controllable Generative World Model for Robot Manipulation", **`arXiv 2025.10`**. [[Paper](https://arxiv.org/pdf/2510.10125)] [[Website](https://ctrl-world.github.io/)] [[Code](https://github.com/Robert-gyj/Ctrl-World)]
* **iMoWM**: "iMoWM: Taming Interactive Multi-Modal World Model for Robotic Manipulation", **`arXiv 2025.10`**. [[Paper](https://arxiv.org/abs/2510.07313)] [[Website](https://xingyoujun.github.io/imowm/)]
* **WristWorld**: "WristWorld: Generating Wrist-Views via 4D World Models for Robotic Manipulation", **`arXiv 2025.10`**. [[Paper](https://arxiv.org/abs/2510.07313)]
* "A Recipe for Efficient Sim-to-Real Transfer in Manipulation with Online Imitation-Pretrained World Models", **`arXiv 2025.10`**. [[Paper](https://arxiv.org/abs/2510.02538)]
* "Kinodynamic Motion Planning for Mobile Robot Navigation across Inconsistent World Models", **`RSS 2025 Workshop on Resilient Off-road Autonomous Robotics (ROAR)`**.
[[Paper](https://arxiv.org/abs/2509.26339)]
* **EMMA**: "EMMA: Generalizing Real-World Robot Manipulation via Generative Visual Transfer", **`arXiv 2025.09`**. [[Paper](https://arxiv.org/abs/2509.22407)]
* **LongScape**: "LongScape: Advancing Long-Horizon Embodied World Models with Context-Aware MoE", **`arXiv 2025.09`**. [[Paper](https://arxiv.org/abs/2509.21790)]
* **KeyWorld**: "KeyWorld: Key Frame Reasoning Enables Effective and Efficient World Models", **`arXiv 2025.09`**. [[Paper](https://arxiv.org/abs/2509.21027)]
* **DAWM**: "DAWM: Diffusion Action World Models for Offline Reinforcement Learning via Action-Inferred Transitions", **`ICML 2025 Workshop`**. [[Paper](https://arxiv.org/abs/2509.19538)]
* **World4RL**: "World4RL: Diffusion World Models for Policy Refinement with Reinforcement Learning for Robotic Manipulation", **`arXiv 2025.09`**. [[Paper](https://arxiv.org/abs/2509.19080)]
* **SAMPO**: "SAMPO: Scale-wise Autoregression with Motion PrOmpt for generative world models", **`arXiv 2025.09`**. [[Paper](https://arxiv.org/abs/2509.15536)]
* **PhysicalAgent**: "PhysicalAgent: Towards General Cognitive Robotics with Foundation World Models", **`arXiv 2025.09`**. [[Paper](https://arxiv.org/abs/2509.13903)]
* "Empowering Multi-Robot Cooperation via Sequential World Models", **`arXiv 2025.09`**. [[Paper](https://arxiv.org/abs/2509.13095)]
* "World Model Implanting for Test-time Adaptation of Embodied Agents", **`ICML 2025`**. [[Paper](https://arxiv.org/abs/2509.03956)]
* "Learning Primitive Embodied World Models: Towards Scalable Robotic Learning", **`arXiv 2025.08`**.
[[Paper](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2508.20840)] [[Website](https:\u002F\u002Fqiaosun22.github.io\u002FPrimitiveWorld\u002F)]\r\n* **GWM**: \"GWM: Towards Scalable Gaussian World Models for Robotic Manipulation\", **`ICCV 2025`**. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2508.17600)] [[Website](https:\u002F\u002Fgaussian-world-model.github.io\u002F)]\r\n* \"Imaginative World Modeling with Scene Graphs for Embodied Agent Navigation\", **`arxiv 2025.08`**. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2508.06990)]\r\n* \"Bounding Distributional Shifts in World Modeling through Novelty Detection\", **`arxiv 2025.08`**. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2508.06096)]\r\n* **Genie Envisioner**: \"Genie Envisioner: A Unified World Foundation Platform for Robotic Manipulation\", **`arxiv 2025.08`**. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2508.05635)] [[Website](https:\u002F\u002Fgenie-envisioner.github.io\u002F)]\r\n* **DiWA**: \"DiWA: Diffusion Policy Adaptation with World Models\", **`CoRL 2025`**. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2508.03645)] [[Code](https:\u002F\u002Fdiwa.cs.uni-freiburg.de)]\r\n* **CoEx**: \"CoEx -- Co-evolving World-model and Exploration\", **`arxiv 2025.07`**. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2507.22281)]\r\n* \"Latent Policy Steering with Embodiment-Agnostic Pretrained World Models\", **`arxiv 2025.07`**. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2507.13340)]\r\n* **MindJourney**: \"MindJourney: Test-Time Scaling with World Models for Spatial Reasoning\", **`arxiv 2025.07`**. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2507.12508)] [[Website](https:\u002F\u002Fumass-embodied-agi.github.io\u002FMindJourney)]\r\n* **FOUNDER**: \"FOUNDER: Grounding Foundation Models in World Models for Open-Ended Embodied Decision Making\", **`ICML 2025`**. 
[[Paper](https://arxiv.org/abs/2507.12496)] [[Website](https://sites.google.com/view/founder-rl)]
* **EmbodieDreamer**: "EmbodieDreamer: Advancing Real2Sim2Real Transfer for Policy Training via Embodied World Modeling", **`arXiv 2025.07`**. [[Paper](https://arxiv.org/pdf/2507.05198)] [[Website](https://embodiedreamer.github.io/)]
* **World4Omni**: "World4Omni: A Zero-Shot Framework from Image Generation World Model to Robotic Manipulation", **`arXiv 2025.06`**. [[Paper](https://arxiv.org/abs/2506.23919)] [[Website](https://world4omni.github.io/)]
* **RoboScape**: "RoboScape: Physics-informed Embodied World Model", **`arXiv 2025.06`**. [[Paper](https://arxiv.org/abs/2506.23135)] [[Code](https://github.com/tsinghua-fib-lab/RoboScape)]
* **ParticleFormer**: "ParticleFormer: A 3D Point Cloud World Model for Multi-Object, Multi-Material Robotic Manipulation", **`arXiv 2025.06`**. [[Paper](https://arxiv.org/abs/2506.23126)] [[Website](https://particleformer.github.io/)]
* **ManiGaussian++**: "ManiGaussian++: General Robotic Bimanual Manipulation with Hierarchical Gaussian World Model", **`arXiv 2025.06`**. [[Paper](https://arxiv.org/abs/2506.19842)] [[Code](https://github.com/April-Yz/ManiGaussian_Bimanual)]
* **ReOI**: "Reimagination with Test-time Observation Interventions: Distractor-Robust World Model Predictions for Visual Model Predictive Control", **`arXiv 2025.06`**. [[Paper](https://arxiv.org/abs/2506.16565)]
* **GAF**: "GAF: Gaussian Action Field as a Dynamic World Model for Robotic Manipulation", **`arXiv 2025.06`**. [[Paper](https://arxiv.org/abs/2506.14135)] [[Website](http://chaiying1.github.io/GAF.github.io/project_page/)]
* "Prompting with the Future: Open-World Model Predictive Control with Interactive Digital Twins", **`RSS 2025`**. [[Paper](https://arxiv.org/abs/2506.13761)] [[Website](https://prompting-with-the-future.github.io/)]
* **V-JEPA 2 and V-JEPA 2-AC**: "V-JEPA 2: Self-Supervised Video Models Enable Understanding, Prediction and Planning", **`arXiv 2025.06`**. [[Paper](https://arxiv.org/abs/2506.09985)] [[Website](https://ai.meta.com/blog/v-jepa-2-world-model-benchmarks/)] [[Code](https://github.com/facebookresearch/vjepa2)]
* "Time-Aware World Model for Adaptive Prediction and Control", **`ICML 2025`**. [[Paper](https://arxiv.org/abs/2506.08441)]
* **3DFlowAction**: "3DFlowAction: Learning Cross-Embodiment Manipulation from 3D Flow World Model", **`arXiv 2025.06`**. [[Paper](https://arxiv.org/abs/2506.06199)]
* **ORV**: "ORV: 4D Occupancy-centric Robot Video Generation", **`arXiv 2025.06`**. [[Paper](https://arxiv.org/abs/2506.03079)] [[Code](https://github.com/OrangeSodahub/ORV)] [[Website](https://orangesodahub.github.io/ORV/)]
* **WoMAP**: "WoMAP: World Models For Embodied Open-Vocabulary Object Localization", **`arXiv 2025.06`**. [[Paper](https://arxiv.org/abs/2506.01600)]
* "Sparse Imagination for Efficient Visual World Model Planning", **`arXiv 2025.06`**. [[Paper](https://arxiv.org/abs/2506.01392)]
* **Humanoid World Models**: "Humanoid World Models: Open World Foundation Models for Humanoid Robotics", **`arXiv 2025.06`**.
[[Paper](https://arxiv.org/abs/2506.01182)]
* "Evaluating Robot Policies in a World Model", **`arXiv 2025.06`**. [[Paper](https://arxiv.org/abs/2506.00613)] [[Website](https://world-model-eval.github.io)]
* **OSVI-WM**: "OSVI-WM: One-Shot Visual Imitation for Unseen Tasks using World-Model-Guided Trajectory Generation", **`arXiv 2025.05`**. [[Paper](https://arxiv.org/abs/2505.20425)]
* **WorldEval**: "WorldEval: World Model as Real-World Robot Policies Evaluator", **`arXiv 2025.05`**. [[Paper](https://arxiv.org/abs/2505.19017)] [[Website](https://worldeval.github.io)]
* "Consistent World Models via Foresight Diffusion", **`arXiv 2025.05`**. [[Paper](https://arxiv.org/abs/2505.16474)]
* **Vid2World**: "Vid2World: Crafting Video Diffusion Models to Interactive World Models", **`arXiv 2025.05`**. [[Paper](https://arxiv.org/abs/2505.14357)] [[Website](http://knightnemo.github.io/vid2world/)]
* **RLVR-World**: "RLVR-World: Training World Models with Reinforcement Learning", **`arXiv 2025.05`**. [[Paper](https://arxiv.org/abs/2505.13934)] [[Website](https://thuml.github.io/RLVR-World/)] [[Code](https://github.com/thuml/RLVR-World)]
* **LaDi-WM**: "LaDi-WM: A Latent Diffusion-based World Model for Predictive Manipulation", **`arXiv 2025.05`**. [[Paper](https://arxiv.org/abs/2505.11528)]
* **FlowDreamer**: "FlowDreamer: A RGB-D World Model with Flow-based Motion Representations for Robot Manipulation", **`arXiv 2025.05`**. [[Paper](https://arxiv.org/abs/2505.10075)] [[Website](https://sharinka0715.github.io/FlowDreamer/)]
* "Occupancy World Model for Robots", **`arXiv 2025.05`**. [[Paper](https://arxiv.org/abs/2505.05512)]
* "Learning 3D Persistent Embodied World Models", **`arXiv 2025.05`**. [[Paper](https://arxiv.org/abs/2505.05495)]
* **TesserAct**: "TesserAct: Learning 4D Embodied World Models", **`arXiv 2025.04`**. [[Paper](https://arxiv.org/abs/2504.20995)] [[Website](https://tesseractworld.github.io/)]
* **PIN-WM**: "PIN-WM: Learning Physics-INformed World Models for Non-Prehensile Manipulation", **`arXiv 2025.04`**. [[Paper](https://arxiv.org/abs/2504.16693)]
* "Offline Robotic World Model: Learning Robotic Policies without a Physics Simulator", **`arXiv 2025.04`**. [[Paper](https://arxiv.org/abs/2504.16680)]
* **ManipDreamer**: "ManipDreamer: Boosting Robotic Manipulation World Model with Action Tree and Visual Guidance", **`arXiv 2025.04`**. [[Paper](https://arxiv.org/abs/2504.16464)]
* **UWM**: "Unified World Models: Coupling Video and Action Diffusion for Pretraining on Large Robotic Datasets", **`arXiv 2025.04`**. [[Paper](https://arxiv.org/abs/2504.02792)] [[Website](https://weirdlabuw.github.io/uwm/)]
* "Perspective-Shifted Neuro-Symbolic World Models: A Framework for Socially-Aware Robot Navigation", **`arXiv 2025.03`**. [[Paper](https://arxiv.org/abs/2503.20425)]
* **AdaWorld**: "AdaWorld: Learning Adaptable World Models with Latent Actions", **`arXiv 2025.03`**. [[Paper](https://arxiv.org/abs/2503.18938)] [[Website](https://adaptable-world-model.github.io/)]
* **DyWA**: "DyWA: Dynamics-adaptive World Action Model for Generalizable Non-prehensile Manipulation", **`arXiv 2025.03`**. [[Paper](https://arxiv.org/abs/2503.16806)] [[Website](https://pku-epic.github.io/DyWA/)]
* "Towards Suturing World Models: Learning Predictive Models for Robotic Surgical Tasks", **`arXiv 2025.03`**. [[Paper](https://arxiv.org/abs/2503.12531)] [[Website](https://mkturkcan.github.io/suturingmodels/)]
* "World Modeling Makes a Better Planner: Dual Preference Optimization for Embodied Task Planning", **`arXiv 2025.03`**. [[Paper](https://arxiv.org/abs/2503.10480)]
* **LUMOS**: "LUMOS: Language-Conditioned Imitation Learning with World Models", **`ICRA 2025`**. [[Paper](https://arxiv.org/abs/2503.10370)] [[Website](http://lumos.cs.uni-freiburg.de/)]
* "Object-Centric World Model for Language-Guided Manipulation", **`arXiv 2025.03`**. [[Paper](https://arxiv.org/abs/2503.06170)]
* **DEMO^3**: "Multi-Stage Manipulation with Demonstration-Augmented Reward, Policy, and World Model Learning", **`arXiv 2025.03`**. [[Paper](https://arxiv.org/abs/2503.01837)] [[Website](https://adrialopezescoriza.github.io/demo3/)]
* "Accelerating Model-Based Reinforcement Learning with State-Space World Models", **`arXiv 2025.02`**. [[Paper](https://arxiv.org/abs/2502.20168)]
* "Learning Humanoid Locomotion with World Model Reconstruction", **`arXiv 2025.02`**. [[Paper](https://arxiv.org/abs/2502.16230)]
* "Strengthening Generative Robot Policies through Predictive World Modeling", **`arXiv 2025.02`**. [[Paper](https://arxiv.org/abs/2502.00622)] [[Website](https://computationalrobotics.seas.harvard.edu/GPC)]
* **Robotic World Model**: "Robotic World Model: A Neural Network Simulator for Robust Policy Optimization in Robotics", **`arXiv 2025.01`**. [[Paper](https://arxiv.org/abs/2501.10100)]
* **RoboHorizon**: "RoboHorizon: An LLM-Assisted Multi-View World Model for Long-Horizon Robotic Manipulation", **`arXiv 2025.01`**. [[Paper](https://arxiv.org/abs/2501.06605)]
* **Dream to Manipulate**: "Dream to Manipulate: Compositional World Models Empowering Robot Imitation Learning with Imagination", **`arXiv 2024.12`**. [[Paper](https://arxiv.org/abs/2412.14957)] [[Website](https://leobarcellona.github.io/DreamToManipulate/)]
* **WHALE**: "WHALE: Towards Generalizable and Scalable World Models for Embodied Decision-making", **`arXiv 2024.11`**. [[Paper](https://arxiv.org/abs/2411.05619)]
* **VisualPredicator**: "VisualPredicator: Learning Abstract World Models with Neuro-Symbolic Predicates for Robot Planning", **`arXiv 2024.10`**. [[Paper](https://arxiv.org/abs/2410.23156)]
* "Multi-Task Interactive Robot Fleet Learning with Visual World Models", **`CoRL 2024`**. [[Paper](https://arxiv.org/abs/2410.22689)] [[Code](https://ut-austin-rpl.github.io/sirius-fleet/)]
* **X-MOBILITY**: "X-MOBILITY: End-To-End Generalizable Navigation via World Modeling", **`arXiv 2024.10`**. [[Paper](https://arxiv.org/abs/2410.17491)]
* **PIVOT-R**: "PIVOT-R: Primitive-Driven Waypoint-Aware World Model for Robotic Manipulation", **`NeurIPS 2024`**. [[Paper](https://arxiv.org/pdf/2410.10394)]
* **GLIMO**: "Grounding Large Language Models In Embodied Environment With Imperfect World Models", **`arXiv 2024.10`**. [[Paper](https://arxiv.org/abs/2410.02664)]
* **EVA**: "EVA: An Embodied World Model for Future Video Anticipation", **`arXiv 2024.10`**. [[Paper](https://arxiv.org/abs/2410.15461)] [[Website](https://sites.google.com/view/eva-publi)]
* **PreLAR**: "PreLAR: World Model Pre-training with Learnable Action Representation", **`ECCV 2024`**. [[Paper](https://www.ecva.net/papers/eccv_2024/papers_ECCV/papers/03363.pdf)] [[Code](https://github.com/zhanglixuan0720/PreLAR)]
* **WMP**: "World Model-based Perception for Visual Legged Locomotion", **`arXiv 2024.09`**. [[Paper](https://arxiv.org/abs/2409.16784)] [[Project](https://wmp-loco.github.io/)]
* **R-AIF**: "R-AIF: Solving Sparse-Reward Robotic Tasks from Pixels with Active Inference and World Models", **`arXiv 2024.09`**. [[Paper](https://arxiv.org/abs/2409.14216)]
* "Representing Positional Information in Generative World Models for Object Manipulation", **`arXiv 2024.09`**. [[Paper](https://arxiv.org/abs/2409.12005)]
* **DexSim2Real$^2$**: "DexSim2Real$^2$: Building Explicit World Model for Precise Articulated Object Dexterous Manipulation", **`arXiv 2024.09`**. [[Paper](https://arxiv.org/abs/2409.08750)]
* **DWL**: "Advancing Humanoid Locomotion: Mastering Challenging Terrains with Denoising World Model Learning", **`RSS 2024 (Best Paper Award Finalist)`**. [[Paper](https://arxiv.org/abs/2408.14472)]
* "Physically Embodied Gaussian Splatting: A Realtime Correctable World Model for Robotics", **`arXiv 2024.06`**. [[Paper](https://arxiv.org/abs/2406.10788)] [[Website](https://embodied-gaussians.github.io/)]
* **HRSSM**: "Learning Latent Dynamic Robust Representations for World Models", **`ICML 2024`**.
[[Paper](https://arxiv.org/abs/2405.06263)] [[Code](https://github.com/bit1029public/HRSSM)]
* **RoboDreamer**: "RoboDreamer: Learning Compositional World Models for Robot Imagination", **`ICML 2024`**. [[Paper](https://arxiv.org/abs/2404.12377)] [[Code](https://robovideo.github.io/)]
* **COMBO**: "COMBO: Compositional World Models for Embodied Multi-Agent Cooperation", **`ECCV 2024`**. [[Paper](https://arxiv.org/abs/2404.10775)] [[Website](https://vis-www.cs.umass.edu/combo/)] [[Code](https://github.com/UMass-Foundation-Model/COMBO)]
* **ManiGaussian**: "ManiGaussian: Dynamic Gaussian Splatting for Multi-task Robotic Manipulation", **`arXiv 2024.03`**. [[Paper](https://arxiv.org/abs/2403.08321)] [[Code](https://guanxinglu.github.io/ManiGaussian/)]

---

## World Models for VLA
* **DIAL**: "DIAL: Decoupling Intention and Action via Latent World Modeling for End-to-End VLA", **`arXiv 2026.03`**. [[Paper](https://arxiv.org/abs/2603.29844)] [[Website](https://xpeng-robotics.github.io/dial)]
* "Practical World-Model-Based Reinforcement Learning for Vision-Language-Action Models", **`arXiv 2026.03`**. [[Paper](https://arxiv.org/abs/2603.20607)]
* "Scaling Sim-to-Real Reinforcement Learning for Robot VLAs with Generative 3D Worlds", **`arXiv 2026.03`**. [[Paper](https://arxiv.org/abs/2603.18532)]
* **Fast-WAM**: "Fast-WAM: Do World Action Models Need Test-Time Future Imagination?", **`arXiv 2026.03`**. [[Paper](https://arxiv.org/abs/2603.16666)] [[Website](https://yuantianyuan01.github.io/FastWAM/)]
* **StructVLA**: "Beyond Dense Futures: World Models as Structured Planners for Robotic Manipulation", **`arXiv 2026.03`**. [[Paper](https://arxiv.org/abs/2603.12553)]
* **World2Act**: "World2Act: Post-Training Latent Actions via Skill-Compositional World Models", **`arXiv 2026.03`**. [[Paper](https://arxiv.org/abs/2603.10422)] [[Website](https://wm2act.github.io/)]
* **AtomVLA**: "AtomVLA: Scalable Post-Training for Robotic Manipulation via Predictive Latent World Models", **`arXiv 2026.03`**. [[Paper](https://arxiv.org/abs/2603.08519)]
* **Chain of World**: "Chain of World: World-Model Thinking in Latent Motion", **`CVPR 2026`**. [[Paper](https://arxiv.org/abs/2603.03195)] [[Website](https://fx-hit.github.io/cowvla-io/)]
* "Learning Physics from Pretrained Video Models: A Multimodal, Continuous, and Sequential World Interaction Model for Robotic Manipulation", **`arXiv 2026.03`**. [[Paper](https://arxiv.org/abs/2603.00110)]
* **World Guidance**: "World Guidance: World Modeling in Condition Space for Action Generation", **`arXiv 2026.02`**. [[Paper](https://arxiv.org/abs/2602.22010)] [[Website](https://selen-suyue.github.io/WoGNet/)]
* **SC-VLA**: "Self-Correcting VLA: Online Action Refinement via Sparse World Imagination", **`arXiv 2026.02`**. [[Paper](https://arxiv.org/abs/2602.21633)] [[Website](https://github.com/Kisaragi0/SC-VLA)]
* **Motus**: "Motus: A Unified Latent Action World Model", **`arXiv 2025.12`**. [[Paper](https://arxiv.org/abs/2512.13030)] [[Website](https://motus-robotics.github.io/motus)] [[Code](https://github.com/thu-ml/Motus)]
* **RoboScape-R**: "RoboScape-R: A Unified Reward-Observation World Model for Generalizable Robot Training via RL", **`arXiv 2025.12`**. [[Paper](https://arxiv.org/abs/2512.03556)]
* **AdaPower**: "AdaPower: Specializing World Foundation Models for Predictive Manipulation", **`arXiv 2025.12`**. [[Paper](https://arxiv.org/abs/2512.03538)]
* **RynnVLA-002**: "RynnVLA-002: A Unified Vision-Language-Action and World Model", **`arXiv 2025.11`**. [[Paper](https://arxiv.org/abs/2511.17502)] [[Code](https://github.com/alibaba-damo-academy/RynnVLA-002)]
* **NORA-1.5**: "NORA-1.5: A Vision-Language-Action Model Trained with World-Model- and Action-Based Preference Rewards", **`arXiv 2025.11`**. [[Paper](https://arxiv.org/abs/2511.14659)] [[Website](https://declare-lab.github.io/nora-1.5)] [[Code](https://github.com/declare-lab/nora-1.5)]
* "Dual-Stream Diffusion for World-Model-Augmented Vision-Language-Action Models", **`arXiv 2025.10`**. [[Paper](https://arxiv.org/abs/2510.27607)]
* **VLA-RFT**: "VLA-RFT: Vision-Language-Action Reinforcement Fine-Tuning with Verified Rewards in World Simulators", **`arXiv 2025.10`**. [[Paper](https://arxiv.org/abs/2510.00406)]
* **World-Env**: "World-Env: Leveraging World Models as Virtual Environments for VLA Post-Training", **`arXiv 2025.09`**. [[Paper](https://arxiv.org/abs/2509.24948)]
* **MoWM**: "MoWM: Mixture of World Models for Embodied Planning via Latent-to-Pixel Feature Modulation", **`arXiv 2025.09`**. [[Paper](https://arxiv.org/abs/2509.21797)]
* **LAWM**: "Latent Action Pretraining Through World Modeling", **`arXiv 2025.09`**. [[Paper](https://arxiv.org/abs/2509.18428)] [[Code](https://github.com/baheytharwat/lawm)]
* **PAR**: "Physical Autoregressive Model for Robotic Manipulation without Action Pretraining", **`arXiv 2025.08`**. [[Paper](https://arxiv.org/abs/2508.09822)] [[Website](https://songzijian1999.github.io/PAR_ProjectPage/)]
* **DreamVLA**: "DreamVLA: A Vision-Language-Action Model Dreamed with Comprehensive World Knowledge", **`arXiv 2025.07`**. [[Paper](https://arxiv.org/abs/2507.04447)] [[Code](https://github.com/Zhangwenyao1/DreamVLA)] [[Website](https://zhangwenyao1.github.io/DreamVLA/)]
* **WorldVLA**: "WorldVLA: Towards Autoregressive Action World Model", **`arXiv 2025.06`**. [[Paper](https://arxiv.org/abs/2506.21539)] [[Code](https://github.com/alibaba-damo-academy/WorldVLA)]
* **UniVLA**: "UniVLA: Unified Vision-Language-Action Model", **`arXiv 2025.06`**. [[Paper](https://arxiv.org/abs/2506.19850)] [[Code](https://robertwyq.github.io/univla.github.)]
* **MinD**: "MinD: Unified Visual Imagination and Control via Hierarchical World Models", **`arXiv 2025.06`**. [[Paper](https://arxiv.org/abs/2506.18897)] [[Website](https://manipulate-in-dream.github.io/)]
* **FLARE**: "FLARE: Robot Learning with Implicit World Modeling", **`arXiv 2025.05`**. [[Paper](https://arxiv.org/abs/2505.15659)] [[Code](https://github.com/NVIDIA/Isaac-GR00T)] [[Website](https://research.nvidia.com/labs/gear/flare)]
* **DreamGen**: "DreamGen: Unlocking Generalization in Robot Learning through Video World Models", **`arXiv 2025.06`**. [[Paper](https://arxiv.org/abs/2505.12705)] [[Code](https://github.com/nvidia/GR00T-dreams)]
* **CoT-VLA**: "CoT-VLA: Visual Chain-of-Thought Reasoning for Vision-Language-Action Models", **`CVPR 2025`**. [[Paper](https://arxiv.org/abs/2501.18867)]
* **UP-VLA**: "UP-VLA: A Unified Understanding and Prediction Model for Embodied Agent", **`ICML 2025`**. [[Paper](https://arxiv.org/abs/2503.22020)] [[Code](https://github.com/CladernyJorn/UP-VLA)]
* **3D-VLA**: "3D-VLA: A 3D Vision-Language-Action Generative World Model", **`ICML 2024`**. [[Paper](https://arxiv.org/abs/2403.09631)]

---

## World Models for Visual Understanding
* **DILLO**: "Describe Before You Act: Proactive Agent Guidance via Distilled Language-Action World Models", **`arXiv 2026.03`**. [[Paper](https://arxiv.org/abs/2603.23149)]
* **WorldVLM**: "WorldVLM: Combining World-Model Prediction with Vision-Language Reasoning", **`arXiv 2026.03`**. [[Paper](https://arxiv.org/abs/2603.14497)]
* "When and How Much to Imagine: Adaptive World-Model-Based Test-Time Scaling for Visual Spatial Reasoning", **`arXiv 2026.01`**. [[Paper](https://arxiv.org/abs/2602.08236)] [[Website](https://adaptive-visual-tts.github.io/)]
* "Visual Generation Unlocks Human-Like Reasoning via Multimodal World Models", **`arXiv 2026.01`**. [[Paper](https://arxiv.org/abs/2601.19834)] [[Website](https://thuml.github.io/Reasoning-Visual-World)]
* "Semantic World Models", **`arXiv 2025.10`**. [[Paper](https://arxiv.org/abs/2510.19818)] [[Website](https://weirdlabuw.github.io/swm)]
* **DyVA**: "Can World Models Help Vision-Language Models Understand World Dynamics?", **`arXiv 2025.10`**. [[Paper](https://arxiv.org/abs/2510.00855)] [[Website](https://dyva-worldlm.github.io)]
* "Video Models Are Zero-Shot Learners and Reasoners", **`arXiv 2025.09`**. [[Paper](https://arxiv.org/abs/2509.20328)]
* "From Generation to Generalization: Emergent Few-Shot Learning in Video Diffusion Models", **`arXiv 2025.06`**. [[Paper](https://arxiv.org/abs/2506.07280)]

---

## World Models for Autonomous Driving

### Refer to https://github.com/LMD0311/Awesome-World-Model
* **DeltaWorld**: "A Frame is Worth One Token: Efficient Generative World Modeling with Delta Tokens", **`CVPR 2026`**.
[[Paper](https://arxiv.org/abs/2604.04913)] [[Code](https://deltatok.github.io)]
* **DriveDreamer-Policy**: "DriveDreamer-Policy: A Geometry-Grounded World-Action Model for Unified Generation and Planning", **`arXiv 2026.04`**. [[Paper](https://arxiv.org/abs/2604.01765)] [[Website](https://drivedreamer-policy.github.io/)]
* **DLWM**: "DLWM: Dual Latent World Models enable Holistic Gaussian-centric Pre-training in Autonomous Driving", **`CVPR 2026`**. [[Paper](https://arxiv.org/abs/2604.00969)]
* **AutoWorld**: "AutoWorld: Scaling Multi-Agent Traffic Simulation with Self-Supervised World Models", **`arXiv 2026.03`**. [[Paper](https://arxiv.org/abs/2603.28963)]
* **OccSim**: "OccSim: Multi-kilometer Simulation with Long-horizon Occupancy World Models", **`arXiv 2026.03`**. [[Paper](https://arxiv.org/abs/2603.28887)]
* **Uni-World VLA**: "Uni-World VLA: Interleaved World Modeling and Planning for Autonomous Driving", **`arXiv 2026.03`**. [[Paper](https://arxiv.org/abs/2603.27287)]
* **DreamerAD**: "DreamerAD: Efficient Reinforcement Learning via Latent World Model for Autonomous Driving", **`arXiv 2026.03`**. [[Paper](https://arxiv.org/abs/2603.24587)]
* **Latent-WAM**: "Latent-WAM: Latent World Action Modeling for End-to-End Autonomous Driving", **`arXiv 2026.03`**. [[Paper](https://arxiv.org/abs/2603.24581)]
* "Toward Physically Consistent Driving Video World Models under Challenging Trajectories", **`arXiv 2026.03`**. [[Paper](https://arxiv.org/abs/2603.24506)] [[Website](https://wm-research.github.io/PhyGenesis/)]
* **CounterScene**: "CounterScene: Counterfactual Causal Reasoning in Generative World Models for Safety-Critical Closed-Loop Evaluation", **`arXiv 2026.03`**. [[Paper](https://arxiv.org/abs/2603.21104)]
* **X-World**: "X-World: Controllable Ego-Centric Multi-Camera World Models for Scalable End-to-End Driving", **`arXiv 2026.03`**. [[Paper](https://arxiv.org/abs/2603.19979)]
* **DynFlowDrive**: "DynFlowDrive: Flow-Based Dynamic World Modeling for Autonomous Driving", **`arXiv 2026.03`**. [[Paper](https://arxiv.org/abs/2603.19675)] [[Code](https://github.com/xiaolul2/DynFlowDrive)]
* **Enactor**: "Enactor: From Traffic Simulators to Surrogate World Models", **`arXiv 2026.03`**. [[Paper](https://arxiv.org/abs/2603.18266)]
* **VectorWorld**: "VectorWorld: Efficient Streaming World Model via Diffusion Flow on Vector Graphs", **`arXiv 2026.03`**. [[Paper](https://arxiv.org/abs/2603.17652)] [[Code](https://github.com/jiangchaokang/VectorWorld)]
* "Bridging Scene Generation and Planning: Driving with World Model via Unifying Vision and Motion Representation", **`arXiv 2026.03`**. [[Paper](https://arxiv.org/abs/2603.14948)] [[Code](https://github.com/TabGuigui/WorldDrive)]
* "Latent World Models for Automated Driving: A Unified Taxonomy, Evaluation Framework, and Open Challenges", **`arXiv 2026.03`**. [[Paper](https://arxiv.org/abs/2603.09086)]
* "Kinematics-Aware Latent World Models for Data-Efficient Autonomous Driving", **`arXiv 2026.03`**. [[Paper](https://arxiv.org/abs/2603.07264)]
* **ShareVerse**: "ShareVerse: Multi-Agent Consistent Video Generation for Shared World Modeling", **`arXiv 2026.03`**. [[Paper](https://arxiv.org/abs/2603.02697)]
* "Risk-Aware World Model Predictive Control for Generalizable End-to-End Autonomous Driving", **`arXiv 2026.02`**. [[Paper](https://arxiv.org/abs/2602.23259)]
* **RAYNOVA**: "RAYNOVA: Scale-Temporal Autoregressive World Modeling in Ray Space", **`CVPR 2026`**. [[Paper](https://arxiv.org/abs/2602.20685)] [[Website](https://raynova-ai.github.io/)]
* "When World Models Dream Wrong: Physical-Conditioned Adversarial Attacks against World Models", **`arXiv 2026.02`**. [[Paper](https://arxiv.org/abs/2602.18739)]
* "Factored Latent Action World Models", **`arXiv 2026.02`**. [[Paper](https://arxiv.org/abs/2602.16229)]
* **ResWorld**: "ResWorld: Temporal Residual World Model for End-to-End Autonomous Driving", **`arXiv 2026.02`**. [[Paper](https://arxiv.org/abs/2602.10884)] [[Code](https://github.com/mengtan00/ResWorld.git)]
* **DriveWorld-VLA**: "DriveWorld-VLA: Unified Latent-Space World Modeling with Vision-Language-Action for Autonomous Driving", **`arXiv 2026.02`**. [[Paper](https://arxiv.org/abs/2602.06521)] [[Code](https://github.com/liulin815/DriveWorld-VLA.git)]
* "Safe Urban Traffic Control via Uncertainty-Aware Conformal Prediction and World-Model Reinforcement Learning", **`arXiv 2026.02`**. [[Paper](https://arxiv.org/abs/2602.04821)]
* **InstaDrive**: "InstaDrive: Instance-Aware Driving World Models for Realistic and Consistent Video Generation", **`arXiv 2026.02`**. [[Paper](https://arxiv.org/abs/2602.03242)] [[Website](https://shanpoyang654.github.io/InstaDrive/page.html)]
* **ConsisDrive**: "ConsisDrive: Identity-Preserving Driving World Models for Video Generation by Instance Mask", **`arXiv 2026.02`**. [[Paper](https://arxiv.org/abs/2602.03213)] [[Website](https://shanpoyang654.github.io/ConsisDrive/page.html)]
* **MAD**: "MAD: Motion Appearance Decoupling for Efficient Driving World Models", **`arXiv 2026.01`**. [[Paper](https://arxiv.org/abs/2601.09452)] [[Website](https://vita-epfl.github.io/MAD-World-Model/)]
* **UniDrive-WM**: "UniDrive-WM: Unified Understanding, Planning and Generation World Model For Autonomous Driving", **`arXiv 2026.01`**. [[Paper](https://arxiv.org/abs/2601.04453)] [[Website](https://unidrive-wm.github.io/UniDrive-WM)]
* **DriveLaW**: "DriveLaW: Unifying Planning and Video Generation in a Latent Driving World", **`arXiv 2025.12`**. [[Paper](https://arxiv.org/abs/2512.23421)]
* **GaussianDWM**: "GaussianDWM: 3D Gaussian Driving World Model for Unified Scene Understanding and Multi-Modal Generation", **`arXiv 2025.12`**. [[Paper](https://arxiv.org/abs/2512.23180)] [[Code](https://github.com/dtc111111/GaussianDWM)]
* **WorldRFT**: "WorldRFT: Latent World Model Planning with Reinforcement Fine-Tuning for Autonomous Driving", **`AAAI 2026`**. [[Paper](https://arxiv.org/abs/2512.19133)]
* **InDRiVE**: "InDRiVE: Reward-Free World-Model Pretraining for Autonomous Driving via Latent Disagreement", **`arXiv 2025.12`**. [[Paper](https://arxiv.org/abs/2512.18850)]
* **GenieDrive**: "GenieDrive: Towards Physics-Aware Driving World Model with 4D Occupancy Guided Video Generation", **`arXiv 2025.12`**. [[Paper](https://arxiv.org/abs/2512.12751)] [[Website](https://huster-yzy.github.io/geniedrive_project_page/)]
* **FutureX**: "FutureX: Enhance End-to-End Autonomous Driving via Latent Chain-of-Thought World Model", **`arXiv 2025.12`**. [[Paper](https://arxiv.org/abs/2512.11226)]
* "Latent Chain-of-Thought World Modeling for End-to-End Driving", **`arXiv 2025.12`**. [[Paper](https://arxiv.org/abs/2512.10226)]
* **MindDrive**: "MindDrive: An All-in-One Framework Bridging World Models and Vision-Language Model for End-to-End Autonomous Driving", **`arXiv 2025.12`**. [[Paper](https://arxiv.org/abs/2512.04441)]
* "Think Before You Drive: World Model-Inspired Multimodal Grounding for Autonomous Vehicles", **`arXiv 2025.12`**. [[Paper](https://arxiv.org/abs/2512.03454)]
* **U4D**: "U4D: Uncertainty-Aware 4D World Modeling from LiDAR Sequences", **`arXiv 2025.12`**. [[Paper](https://arxiv.org/abs/2512.02982)]
* "Vehicle Dynamics Embedded World Models for Autonomous Driving", **`arXiv 2025.12`**. [[Paper](https://arxiv.org/abs/2512.02417)]
* "World Model Robustness via Surprise Recognition", **`arXiv 2025.12`**. [[Paper](https://arxiv.org/abs/2512.01119)]
* **SparseWorld-TC**: "SparseWorld-TC: Trajectory-Conditioned Sparse Occupancy World Model", **`arXiv 2025.11`**. [[Paper](https://arxiv.org/abs/2511.22039)]
* **AD-R1**: "AD-R1: Closed-Loop Reinforcement Learning for End-to-End Autonomous Driving with Impartial World Models", **`arXiv 2025.11`**. [[Paper](https://arxiv.org/abs/2511.20325)]
* **Map-World**: "Map-World: Masked Action planning and Path-Integral World Model for Autonomous Driving", **`arXiv 2025.11`**. [[Paper](https://arxiv.org/abs/2511.20156)]
* **WPT**: "WPT: World-to-Policy Transfer via Online World Model Distillation", **`arXiv 2025.11`**.
[[Paper](https://arxiv.org/abs/2511.20095)]
* **Percept-WAM**: "Percept-WAM: Perception-Enhanced World-Awareness-Action Model for Robust End-to-End Autonomous Driving", **`arXiv 2025.11`**. [[Paper](https://arxiv.org/abs/2511.19221)]
* **Thinking Ahead**: "Thinking Ahead: Foresight Intelligence in MLLMs and World Models", **`arXiv 2025.11`**. [[Paper](https://arxiv.org/abs/2511.18735)]
* **LiSTAR**: "LiSTAR: Ray-Centric World Models for 4D LiDAR Sequences in Autonomous Driving", **`arXiv 2025.11`**. [[Paper](https://arxiv.org/abs/2511.16049)] [[Website](https://ocean-luna.github.io/LiSTAR.gitub.io)]
* "Dual-Mind World Models: A General Framework for Learning in Dynamic Wireless Networks", **`arXiv 2025.10`**. [[Paper](https://arxiv.org/abs/2510.24546)]
* "Addressing Corner Cases in Autonomous Driving: A World Model-based Approach with Mixture of Experts and LLMs", **`arXiv 2025.10`**. [[Paper](https://arxiv.org/abs/2510.21867)]
* "From Forecasting to Planning: Policy World Model for Collaborative State-Action Prediction", **`NeurIPS 2025`**. [[Paper](https://arxiv.org/abs/2510.19654)] [[Code](https://github.com/6550Zhao/Policy-World-Model)]
* "Rethinking Driving World Model as Synthetic Data Generator for Perception Tasks", **`arXiv 2025.10`**. [[Paper](https://arxiv.org/abs/2510.19195)] [[Website](https://wm-research.github.io/Dream4Drive/)]
* **OmniNWM**: "OmniNWM: Omniscient Driving Navigation World Models", **`arXiv 2025.10`**. 
[[Paper](https://arxiv.org/abs/2510.18313)] [[Website](https://arlo0o.github.io/OmniNWM/)]
* **SparseWorld**: "SparseWorld: A Flexible, Adaptive, and Efficient 4D Occupancy World Model Powered by Sparse and Dynamic Queries", **`arXiv 2025.10`**. [[Paper](https://arxiv.org/abs/2510.17482)] [[Code](https://github.com/MSunDYY/SparseWorld)]
* "Vision-Centric 4D Occupancy Forecasting and Planning via Implicit Residual World Models", **`arXiv 2025.10`**. [[Paper](https://arxiv.org/abs/2510.16729)]
* **DriveVLA-W0**: "DriveVLA-W0: World Models Amplify Data Scaling Law in Autonomous Driving", **`arXiv 2025.10`**. [[Paper](https://arxiv.org/abs/2510.12560)] [[Code](https://github.com/BraveGroup/DriveVLA-W0)]
* **CoIRL-AD**: "CoIRL-AD: Collaborative-Competitive Imitation-Reinforcement Learning in Latent World Models for Autonomous Driving", **`arXiv 2025.10`**. [[Paper](https://arxiv.org/abs/2510.12560)] [[Code](https://github.com/SEU-zxj/CoIRL-AD)]
* **TeraSim-World**: "TeraSim-World: Worldwide Safety-Critical Data Synthesis for End-to-End Autonomous Driving", **`arXiv 2025.09`**. [[Paper](https://arxiv.org/abs/2509.13164)] [[Website](https://wjiawei.com/terasim-world-web/)]
* "Enhancing Physical Consistency in Lightweight World Models", **`arXiv 2025.09`**. [[Paper](https://arxiv.org/abs/2509.12437)]
* **OccTENS**: "OccTENS: 3D Occupancy World Model via Temporal Next-Scale Prediction", **`arXiv 2025.09`**. [[Paper](https://arxiv.org/abs/2509.03887)]
* **IRL-VLA**: "IRL-VLA: Training an Vision-Language-Action Policy via Reward World Model", **`arXiv 2025.08`**. 
[[Paper](https://arxiv.org/abs/2508.06571)]
* **LiDARCrafter**: "LiDARCrafter: Dynamic 4D World Modeling from LiDAR Sequences", **`arXiv 2025.08`**. [[Paper](https://arxiv.org/abs/2508.03692)] [[Website](https://lidarcrafter.github.io)] [[Code](https://github.com/lidarcrafter/toolkit)]
* **FASTopoWM**: "FASTopoWM: Fast-Slow Lane Segment Topology Reasoning with Latent World Models", **`arXiv 2025.07`**. [[Paper](https://arxiv.org/abs/2507.23325)] [[Code](https://github.com/YimingYang23/FASTopoWM)]
* **Orbis**: "Orbis: Overcoming Challenges of Long-Horizon Prediction in Driving World Models", **`arXiv 2025.07`**. [[Paper](https://arxiv.org/abs/2507.13162)] [[Code](https://lmb-freiburg.github.io/orbis.github.io/)]
* "World Model-Based End-to-End Scene Generation for Accident Anticipation in Autonomous Driving", **`arXiv 2025.07`**. [[Paper](https://arxiv.org/abs/2507.12762)]
* **NRSeg**: "NRSeg: Noise-Resilient Learning for BEV Semantic Segmentation via Driving World Models", **`arXiv 2025.07`**. [[Paper](https://arxiv.org/abs/2507.04002)] [[Code](https://github.com/lynn-yu/NRSeg)]
* **World4Drive**: "World4Drive: End-to-End Autonomous Driving via Intention-aware Physical Latent World Model", **`ICCV 2025`**. [[Paper](https://arxiv.org/abs/2507.00603)] [[Code](https://github.com/ucaszyp/World4Drive)]
* **Epona**: "Epona: Autoregressive Diffusion World Model for Autonomous Driving", **`ICCV 2025`**. 
[[Paper](https://arxiv.org/abs/2506.24113)] [[Code](https://kevin-thu.github.io/Epona/)]
* "Towards foundational LiDAR world models with efficient latent flow matching", **`arXiv 2025.06`**. [[Paper](https://arxiv.org/abs/2506.23434)]
* **SceneDiffuser++**: "SceneDiffuser++: City-Scale Traffic Simulation via a Generative World Model", **`CVPR 2025`**. [[Paper](https://arxiv.org/abs/2506.21976)]
* **COME**: "COME: Adding Scene-Centric Forecasting Control to Occupancy World Model", **`arXiv 2025.06`**. [[Paper](https://arxiv.org/abs/2506.13260)] [[Code](https://github.com/synsin0/COME)]
* **STAGE**: "STAGE: A Stream-Centric Generative World Model for Long-Horizon Driving-Scene Simulation", **`arXiv 2025.06`**. [[Paper](https://arxiv.org/abs/2506.13138)]
* **ReSim**: "ReSim: Reliable World Simulation for Autonomous Driving", **`arXiv 2025.06`**. [[Paper](https://arxiv.org/abs/2506.09981)] [[Code](https://github.com/OpenDriveLab/ReSim)] [[Project Page](https://opendrivelab.com/ReSim)]
* "Ego-centric Learning of Communicative World Models for Autonomous Driving", **`arXiv 2025.06`**. [[Paper](https://arxiv.org/abs/2506.08149)]
* **Dreamland**: "Dreamland: Controllable World Creation with Simulator and Generative Models", **`arXiv 2025.06`**. [[Paper](https://arxiv.org/abs/2506.08006)] [[Project Page](https://metadriverse.github.io/dreamland/)]
* **LongDWM**: "LongDWM: Cross-Granularity Distillation for Building a Long-Term Driving World Model", **`arXiv 2025.06`**. 
[[Paper](https://arxiv.org/abs/2506.01546)] [[Project Page](https://wang-xiaodong1899.github.io/longdwm/)]
* **GeoDrive**: "GeoDrive: 3D Geometry-Informed Driving World Model with Precise Action Control", **`arXiv 2025.05`**. [[Paper](https://arxiv.org/abs/2505.22421)] [[Code](https://github.com/antonioo-c/GeoDrive)]
* **FutureSightDrive**: "FutureSightDrive: Thinking Visually with Spatio-Temporal CoT for Autonomous Driving", **`NeurIPS 2025`**. [[Paper](https://arxiv.org/abs/2505.17685)] [[Code](https://github.com/MIV-XJTU/FSDrive)]
* **Raw2Drive**: "Raw2Drive: Reinforcement Learning with Aligned World Models for End-to-End Autonomous Driving (in CARLA v2)", **`arXiv 2025.05`**. [[Paper](https://arxiv.org/abs/2505.16394)]
* **VL-SAFE**: "VL-SAFE: Vision-Language Guided Safety-Aware Reinforcement Learning with World Models for Autonomous Driving", **`arXiv 2025.05`**. [[Paper](https://arxiv.org/abs/2505.16377)] [[Project Page](https://ys-qu.github.io/vlsafe-website/)]
* **PosePilot**: "PosePilot: Steering Camera Pose for Generative World Models with Self-supervised Depth", **`arXiv 2025.05`**. [[Paper](https://arxiv.org/abs/2505.01729)]
* "World Model-Based Learning for Long-Term Age of Information Minimization in Vehicular Networks", **`arXiv 2025.05`**. [[Paper](https://arxiv.org/abs/2505.01712)]
* "Learning to Drive from a World Model", **`arXiv 2025.04`**. [[Paper](https://arxiv.org/abs/2504.19077)]
* **DriVerse**: "DriVerse: Navigation World Model for Driving Simulation via Multimodal Trajectory Prompting and Motion Alignment", **`arXiv 2025.04`**. 
[[Paper](https://arxiv.org/abs/2504.18576)]
* "End-to-End Driving with Online Trajectory Evaluation via BEV World Model", **`arXiv 2025.04`**. [[Paper](https://arxiv.org/abs/2504.01941)] [[Code](https://github.com/liyingyanUCAS/WoTE)]
* "Knowledge Graphs as World Models for Semantic Material-Aware Obstacle Handling in Autonomous Vehicles", **`arXiv 2025.03`**. [[Paper](https://arxiv.org/abs/2503.21232)]
* **MiLA**: "MiLA: Multi-view Intensive-fidelity Long-term Video Generation World Model for Autonomous Driving", **`arXiv 2025.03`**. [[Paper](https://arxiv.org/abs/2503.15875)] [[Project Page](https://github.com/xiaomi-mlab/mila.github.io)]
* **SimWorld**: "SimWorld: A Unified Benchmark for Simulator-Conditioned Scene Generation via World Model", **`arXiv 2025.03`**. [[Paper](https://arxiv.org/abs/2503.13952)] [[Project Page](https://github.com/Li-Zn-H/SimWorld)]
* **UniFuture**: "Seeing the Future, Perceiving the Future: A Unified Driving World Model for Future Generation and Perception", **`arXiv 2025.03`**. [[Paper](https://arxiv.org/abs/2503.13587)] [[Project Page](https://github.com/dk-liang/UniFuture)]
* **EOT-WM**: "Other Vehicle Trajectories Are Also Needed: A Driving World Model Unifies Ego-Other Vehicle Trajectories in Video Latent Space", **`arXiv 2025.03`**. [[Paper](https://arxiv.org/abs/2503.09215)]
* "Temporal Triplane Transformers as Occupancy World Models", **`arXiv 2025.03`**. [[Paper](https://arxiv.org/abs/2503.07338)]
* **InDRiVE**: "InDRiVE: Intrinsic Disagreement based Reinforcement for Vehicle Exploration through Curiosity Driven Generalized World Model", **`arXiv 2025.02`**. 
[[Paper](https://arxiv.org/abs/2503.05573)]
* **MaskGWM**: "MaskGWM: A Generalizable Driving World Model with Video Mask Reconstruction", **`arXiv 2025.02`**. [[Paper](https://arxiv.org/abs/2502.11663)]
* **Dream to Drive**: "Dream to Drive: Model-Based Vehicle Control Using Analytic World Models", **`arXiv 2025.02`**. [[Paper](https://arxiv.org/abs/2502.10012)]
* "Semi-Supervised Vision-Centric 3D Occupancy World Model for Autonomous Driving", **`ICLR 2025`**. [[Paper](https://arxiv.org/abs/2502.07309)]
* "Dream to Drive with Predictive Individual World Model", **`IEEE TIV`**. [[Paper](https://arxiv.org/abs/2501.16733)] [[Code](https://github.com/gaoyinfeng/PIWM)]
* **HERMES**: "HERMES: A Unified Self-Driving World Model for Simultaneous 3D Scene Understanding and Generation", **`arXiv 2025.01`**. [[Paper](https://arxiv.org/abs/2501.14729)]
* **AdaWM**: "AdaWM: Adaptive World Model based Planning for Autonomous Driving", **`ICLR 2025`**. [[Paper](https://arxiv.org/abs/2501.13072)]
* **AD-L-JEPA**: "AD-L-JEPA: Self-Supervised Spatial World Models with Joint Embedding Predictive Architecture for Autonomous Driving with LiDAR Data", **`arXiv 2025.01`**. [[Paper](https://arxiv.org/abs/2501.04969)]
* **DrivingWorld**: "DrivingWorld: Constructing World Model for Autonomous Driving via Video GPT", **`arXiv 2024.12`**. [[Paper](https://arxiv.org/abs/2412.19505)] [[Code](https://github.com/YvanYin/DrivingWorld)] [[Project Page](https://huxiaotaostasy.github.io/DrivingWorld/index.html)]
* **DrivingGPT**: "DrivingGPT: Unifying Driving World Modeling and Planning with Multi-modal Autoregressive Transformers", **`arXiv 2024.12`**. 
[[Paper](https://arxiv.org/abs/2412.18607)] [[Project Page](https://rogerchern.github.io/DrivingGPT/)]
* "An Efficient Occupancy World Model via Decoupled Dynamic Flow and Image-assisted Training", **`arXiv 2024.12`**. [[Paper](https://arxiv.org/abs/2412.13772)]
* **GEM**: "GEM: A Generalizable Ego-Vision Multimodal World Model for Fine-Grained Ego-Motion, Object Dynamics, and Scene Composition Control", **`arXiv 2024.12`**. [[Paper](https://arxiv.org/abs/2412.11198)] [[Project Page](https://vita-epfl.github.io/GEM.github.io/)]
* **GaussianWorld**: "GaussianWorld: Gaussian World Model for Streaming 3D Occupancy Prediction", **`arXiv 2024.12`**. [[Paper](https://arxiv.org/abs/2412.04380)] [[Code](https://github.com/zuosc19/GaussianWorld)]
* **Doe-1**: "Doe-1: Closed-Loop Autonomous Driving with Large World Model", **`arXiv 2024.12`**. [[Paper](https://arxiv.org/abs/2412.09627)] [[Project Page](https://wzzheng.net/Doe/)] [[Code](https://github.com/wzzheng/Doe)]
* "Physical Informed Driving World Model", **`arXiv 2024.12`**. [[Paper](https://arxiv.org/abs/2412.08410)] [[Project Page](https://metadrivescape.github.io/papers_project/DrivePhysica/page.html)]
* **InfiniCube**: "InfiniCube: Unbounded and Controllable Dynamic 3D Driving Scene Generation with World-Guided Video Models", **`arXiv 2024.12`**. [[Paper](https://arxiv.org/abs/2412.03934)] [[Project Page](https://research.nvidia.com/labs/toronto-ai/infinicube/)]
* **InfinityDrive**: "InfinityDrive: Breaking Time Limits in Driving World Models", **`arXiv 2024.12`**. 
[[Paper](https://arxiv.org/abs/2412.01522)] [[Project Page](https://metadrivescape.github.io/papers_project/InfinityDrive/page.html)]
* **ReconDreamer**: "ReconDreamer: Crafting World Models for Driving Scene Reconstruction via Online Restoration", **`arXiv 2024.11`**. [[Paper](https://arxiv.org/abs/2411.19548)] [[Project Page](https://recondreamer.github.io/)]
* **Imagine-2-Drive**: "Imagine-2-Drive: High-Fidelity World Modeling in CARLA for Autonomous Vehicles", **`ICRA 2025`**. [[Paper](https://arxiv.org/abs/2411.10171)] [[Project Page](https://anantagrg.github.io/Imagine-2-Drive.github.io/)]
* **DynamicCity**: "DynamicCity: Large-Scale 4D Occupancy Generation from Dynamic Scenes", **`ICLR 2025 Spotlight`**. [[Paper](https://arxiv.org/abs/2410.18084)] [[Project Page](https://dynamic-city.github.io)] [[Code](https://github.com/3DTopia/DynamicCity)]
* **DriveDreamer4D**: "World Models Are Effective Data Machines for 4D Driving Scene Representation", **`arXiv 2024.10`**. [[Paper](https://arxiv.org/abs/2410.13571)] [[Project Page](https://drivedreamer4d.github.io/)]
* **DOME**: "Taming Diffusion Model into High-Fidelity Controllable Occupancy World Model", **`arXiv 2024.10`**. [[Paper](https://arxiv.org/abs/2410.10429)] [[Project Page](https://gusongen.github.io/DOME)]
* **SSR**: "Does End-to-End Autonomous Driving Really Need Perception Tasks?", **`arXiv 2024.09`**. [[Paper](https://arxiv.org/abs/2409.18341)] [[Code](https://github.com/PeidongLi/SSR)]
* "Mitigating Covariate Shift in Imitation Learning for Autonomous Vehicles Using Latent Space Generative World Models", **`arXiv 2024.09`**. 
[[Paper](https://arxiv.org/abs/2409.16663)]
* **LatentDriver**: "Learning Multiple Probabilistic Decisions from Latent World Model in Autonomous Driving", **`arXiv 2024.09`**. [[Paper](https://arxiv.org/abs/2409.15730)] [[Code](https://github.com/Sephirex-X/LatentDriver)]
* **RenderWorld**: "World Model with Self-Supervised 3D Label", **`arXiv 2024.09`**. [[Paper](https://arxiv.org/abs/2409.11356)]
* **OccLLaMA**: "An Occupancy-Language-Action Generative World Model for Autonomous Driving", **`arXiv 2024.09`**. [[Paper](https://arxiv.org/abs/2409.03272)]
* **DriveGenVLM**: "Real-world Video Generation for Vision Language Model based Autonomous Driving", **`arXiv 2024.08`**. [[Paper](https://arxiv.org/abs/2408.16647)]
* **Drive-OccWorld**: "Driving in the Occupancy World: Vision-Centric 4D Occupancy Forecasting and Planning via World Models for Autonomous Driving", **`arXiv 2024.08`**. [[Paper](https://arxiv.org/abs/2408.14197)]
* **CarFormer**: "Self-Driving with Learned Object-Centric Representations", **`ECCV 2024`**. [[Paper](https://arxiv.org/abs/2407.15843)] [[Code](https://kuis-ai.github.io/CarFormer/)]
* **BEVWorld**: "A Multimodal World Model for Autonomous Driving via Unified BEV Latent Space", **`arXiv 2024.07`**. [[Paper](https://arxiv.org/abs/2407.05679)] [[Code](https://github.com/zympsyche/BevWorld)]
* **TOKEN**: "Tokenize the World into Object-level Knowledge to Address Long-tail Events in Autonomous Driving", **`arXiv 2024.07`**. [[Paper](https://arxiv.org/abs/2407.00959)]
* **UMAD**: "Unsupervised Mask-Level Anomaly Detection for Autonomous Driving", **`arXiv 2024.06`**. 
[[Paper](https://arxiv.org/abs/2406.06370)]
* **SimGen**: "Simulator-conditioned Driving Scene Generation", **`arXiv 2024.06`**. [[Paper](https://arxiv.org/abs/2406.09386)] [[Code](https://metadriverse.github.io/simgen/)]
* **AdaptiveDriver**: "Planning with Adaptive World Models for Autonomous Driving", **`arXiv 2024.06`**. [[Paper](https://arxiv.org/abs/2406.10714)] [[Code](https://arunbalajeev.github.io/world_models_planning/world_model_paper.html)]
* **UnO**: "Unsupervised Occupancy Fields for Perception and Forecasting", **`CVPR 2024`**. [[Paper](https://arxiv.org/abs/2406.08691)] [[Code](https://waabi.ai/research/uno)]
* **LAW**: "Enhancing End-to-End Autonomous Driving with Latent World Model", **`arXiv 2024.06`**. [[Paper](https://arxiv.org/abs/2406.08481)] [[Code](https://github.com/BraveGroup/LAW)]
* **Delphi**: "Unleashing Generalization of End-to-End Autonomous Driving with Controllable Long Video Generation", **`arXiv 2024.06`**. [[Paper](https://arxiv.org/abs/2406.01349)] [[Code](https://github.com/westlake-autolab/Delphi)]
* **OccSora**: "4D Occupancy Generation Models as World Simulators for Autonomous Driving", **`arXiv 2024.05`**. [[Paper](https://arxiv.org/abs/2405.20337)] [[Code](https://github.com/wzzheng/OccSora)]
* **MagicDrive3D**: "Controllable 3D Generation for Any-View Rendering in Street Scenes", **`arXiv 2024.05`**. [[Paper](https://arxiv.org/abs/2405.14475)] [[Code](https://gaoruiyuan.com/magicdrive3d/)]
* **Vista**: "A Generalizable Driving World Model with High Fidelity and Versatile Controllability", **`NeurIPS 2024`**. 
[[Paper](https://arxiv.org/abs/2405.17398)] [[Code](https://github.com/OpenDriveLab/Vista)]
* **CarDreamer**: "Open-Source Learning Platform for World Model based Autonomous Driving", **`arXiv 2024.05`**. [[Paper](https://arxiv.org/abs/2405.09111)] [[Code](https://github.com/ucd-dare/CarDreamer)]
* **DriveSim**: "Probing Multimodal LLMs as World Models for Driving", **`arXiv 2024.05`**. [[Paper](https://arxiv.org/abs/2405.05956)] [[Code](https://github.com/sreeramsa/DriveSim)]
* **DriveWorld**: "4D Pre-trained Scene Understanding via World Models for Autonomous Driving", **`CVPR 2024`**. [[Paper](https://arxiv.org/abs/2405.04390)]
* **LidarDM**: "Generative LiDAR Simulation in a Generated World", **`arXiv 2024.04`**. [[Paper](https://arxiv.org/abs/2404.02903)] [[Code](https://github.com/vzyrianov/lidardm)]
* **SubjectDrive**: "Scaling Generative Data in Autonomous Driving via Subject Control", **`arXiv 2024.03`**. [[Paper](https://arxiv.org/abs/2403.19438)] [[Project](https://subjectdrive.github.io/)]
* **DriveDreamer-2**: "LLM-Enhanced World Models for Diverse Driving Video Generation", **`arXiv 2024.03`**. [[Paper](https://arxiv.org/abs/2403.06845)] [[Code](https://drivedreamer2.github.io/)]
* **Think2Drive**: "Efficient Reinforcement Learning by Thinking in Latent World Model for Quasi-Realistic Autonomous Driving", **`ECCV 2024`**. [[Paper](https://arxiv.org/abs/2402.16720)]
* **MARL-CCE**: "Modelling Competitive Behaviors in Autonomous Driving Under Generative World Model", **`ECCV 2024`**. 
[[Paper](https://www.ecva.net/papers/eccv_2024/papers_ECCV/papers/05085.pdf)] [[Code](https://github.com/qiaoguanren/MARL-CCE)]
* **GenAD**: "Generalized Predictive Model for Autonomous Driving", **`CVPR 2024`**. [[Paper](https://arxiv.org/abs/2403.09630)] [[Data](https://github.com/OpenDriveLab/DriveAGI?tab=readme-ov-file#genad-dataset-opendv-youtube)]
* **GenAD**: "Generative End-to-End Autonomous Driving", **`ECCV 2024`**. [[Paper](https://arxiv.org/abs/2402.11502)] [[Code](https://github.com/wzzheng/GenAD)]
* **NeMo**: "Neural Volumetric World Models for Autonomous Driving", **`ECCV 2024`**. [[Paper](https://www.ecva.net/papers/eccv_2024/papers_ECCV/papers/02571.pdf)]
* **ViDAR**: "Visual Point Cloud Forecasting enables Scalable Autonomous Driving", **`CVPR 2024`**. [[Paper](https://arxiv.org/abs/2312.17655)] [[Code](https://github.com/OpenDriveLab/ViDAR)]
* **Drive-WM**: "Driving into the Future: Multiview Visual Forecasting and Planning with World Model for Autonomous Driving", **`CVPR 2024`**. [[Paper](https://arxiv.org/abs/2311.17918)] [[Code](https://github.com/BraveGroup/Drive-WM)]
* **Cam4DOcc**: "Benchmark for Camera-Only 4D Occupancy Forecasting in Autonomous Driving Applications", **`CVPR 2024`**. [[Paper](https://arxiv.org/abs/2311.17663)] [[Code](https://github.com/haomo-ai/Cam4DOcc)]
* **Panacea**: "Panoramic and Controllable Video Generation for Autonomous Driving", **`CVPR 2024`**. 
[[Paper](https://arxiv.org/abs/2311.16813)] [[Code](https://panacea-ad.github.io/)]
* **OccWorld**: "Learning a 3D Occupancy World Model for Autonomous Driving", **`ECCV 2024`**. [[Paper](https://arxiv.org/abs/2311.16038)] [[Code](https://github.com/wzzheng/OccWorld)]
* **Copilot4D**: "Learning Unsupervised World Models for Autonomous Driving via Discrete Diffusion", **`ICLR 2024`**. [[Paper](https://arxiv.org/abs/2311.01017)]
* **DrivingDiffusion**: "Layout-Guided multi-view driving scene video generation with latent diffusion model", **`ECCV 2024`**. [[Paper](https://arxiv.org/abs/2310.07771)] [[Code](https://github.com/shalfun/DrivingDiffusion)]
* **SafeDreamer**: "Safe Reinforcement Learning with World Models", **`ICLR 2024`**. [[Paper](https://openreview.net/forum?id=tsE5HLYtYg)] [[Code](https://github.com/PKU-Alignment/SafeDreamer)]
* **MagicDrive**: "Street View Generation with Diverse 3D Geometry Control", **`ICLR 2024`**. [[Paper](https://arxiv.org/abs/2310.02601)] [[Code](https://github.com/cure-lab/MagicDrive)]
* **DriveDreamer**: "Towards Real-world-driven World Models for Autonomous Driving", **`ECCV 2024`**. [[Paper](https://arxiv.org/abs/2309.09777)] [[Code](https://github.com/JeffWang987/DriveDreamer)]
* **SEM2**: "Enhance Sample Efficiency and Robustness of End-to-end Urban Autonomous Driving via Semantic Masked World Model", **`TITS`**. 
[[Paper](https://ieeexplore.ieee.org/abstract/document/10538211/)]

----

## Citation
If you find this repository useful, please consider citing this list:
```bib
@misc{leo2024worldmodelspaperslist,
    title = {Awesome-World-Models},
    author = {Leo Fan},
    journal = {GitHub repository},
    url = {https://github.com/leofan90/Awesome-World-Models},
    year = {2024},
}
```

----

# Awesome-World-Models Quick Start Guide

**Project overview**:
`Awesome-World-Models` is not a single installable library or toolkit; it is a **curated list of papers and resources**. It gathers cutting-edge research, technical reports, surveys, and benchmarks on "world models" across general video generation, embodied AI, and autonomous driving.

This guide shows how to use the list to quickly locate the model you need and then obtain that project's code and environment setup.

---

## 1. Environment Setup

Because this project is a resource index, **there is no need to install the repository itself as a dependency**. Prepare the environment required by whichever model in the list interests you (e.g. `Cosmos`, `GAIA-2`, `HunyuanWorld`).

### General system requirements (typical of mainstream world-model projects)
Most listed projects have demanding hardware requirements; a recommended baseline:
*   **OS**: Linux (Ubuntu 20.04/22.04 recommended) or macOS (supported by some lightweight models).
*   **GPU**: NVIDIA GPU with 16 GB+ VRAM (large generative models usually need 24 GB+ or multiple cards).
*   **Driver**: NVIDIA driver >= 535, CUDA >= 11.8 or 12.1.
*   **Base software**:
    *   Python >= 3.9 (exact version depends on the target project)
    *   Git
    *   Conda or Mamba (recommended for environment management)

### Prerequisites
Before cloning a model repository, make sure the basic tooling is installed:
```bash
# Refresh the package index
sudo apt update

# Install Git and basic build tools
sudo apt install -y git build-essential wget curl

# Install Miniconda (if not already installed)
wget https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh
bash Miniconda3-latest-Linux-x86_64.sh
```

---

## 2. Finding Resources and Installing a Specific Model

Usage is a two-step process: first browse the list to find a target project, then install from that project's repository.

### Step 1: Browse and select
Open the `Awesome-World-Models` repository page and look under categories such as `Blog or Technical Report` or `General World Models` for the model you need.
*   For example, if NVIDIA's physical AI platform interests you, find **`Cosmos`**.
*   For Tencent's 3D world generation, find **`HunyuanWorld 1.0`**.

### Step 2: Clone and install (Cosmos as an example)
Click the `[Code]` link in the list to reach the corresponding GitHub repository. A typical installation flow:

```bash
# 1. Clone the target project (NVIDIA Cosmos as an example)
git clone https://github.com/NVIDIA/Cosmos.git
cd Cosmos

# 2. Create a virtual environment (conda recommended)
conda create -n cosmos-env python=3.10 -y
conda activate cosmos-env

# 3. Install PyTorch (choose the CUDA build the project officially requires)
# Users in mainland China may prefer the Tsinghua mirror for faster downloads
pip install torch torchvision torchaudio --index-url https://pypi.tuna.tsinghua.edu.cn/simple

# 4. Install project dependencies
# Note: the dependency file differs between projects (requirements.txt, setup.py, pyproject.toml)
pip install -r requirements.txt
# or
pip install -e .
```

> **Tip**: For entries with a `[Website]` link, visit the official site first; it usually offers a more detailed demo experience and project-specific install instructions.

---

## 3. Basic Usage

Every model exposes a different input/output interface, so the following illustrates the generic workflow of a typical world model (video generation/prediction). Always defer to the target project's `README` or `examples` folder.

### Typical workflow
1.  **Prepare input data**: usually an initial frame, a text prompt, or an action sequence.
2.  **Load model weights**: the first run often downloads weights automatically, or you may need to fetch them manually from HuggingFace.
3.  **Run inference**: execute the provided inference script.

### Usage example (pseudo-commands)
Assuming you have installed such a project:

```bash
# Activate the environment
conda activate cosmos-env

# Run the inference script (see the project's docs for the actual flags)
# Example: generate a world-simulation video from a text prompt
python infer.py \
    --prompt "A robot arm picking up a cube on a table" \
    --output_dir ./results \
    --num_frames 128 \
    --resolution 720p

# Inspect the generated videos
ls ./results
```

### Using the list for literature research
If you mainly need the list for a literature survey rather than running code:
1.  Clone this repository to browse the latest papers locally:
    ```bash
    git clone https://github.com/leofan90/Awesome-World-Models.git
    cd Awesome-World-Models
    ```
2.  Read the categorized links in `README.md` and click `[Paper]` to jump to arXiv and download the PDF.
3.  Check the `Surveys` section and read the survey papers to build a knowledge framework quickly.

---

**Notes**:
*   Some entries carry future timestamps such as `arXiv 2026`; these may be pre-release versions or placeholders, so rely on the actual arXiv page.
*   Access to HuggingFace or GitHub can be slow in some regions; configure mirror acceleration in `.bashrc` or use a proxy.
*   World-model training and inference are extremely resource-hungry; for a first attempt, prefer the official Colab demo or online demo, if one exists.

---

The algorithm team at an autonomous-driving startup is building an end-to-end driving system based on world models and needs to sift deployable, state-of-the-art approaches out of a flood of literature.

### Without Awesome-World-Models
- **Slow retrieval**: researchers bounce between arXiv, GitHub, and conference sites, manually combining keywords such as "World Model" and "Autonomous Driving"; after days of searching, coverage remains incomplete.
- **Hard to reproduce**: papers often lack an official code link, or the repository is archived, so algorithms cannot be quickly validated on real driving scenes.
- **Narrow horizons**: transferable techniques from adjacent fields (embodied AI, general video generation) are easily missed, losing the chance to improve the perception-prediction module with recent work such as `DreamDojo` or `SIMA 2`.
- **Messy benchmarking**: without a unified list of evaluation standards, the team struggles to compare models head-to-head on core metrics such as dynamic-obstacle prediction.

### With Awesome-World-Models
- **One-stop aggregation**: the team goes straight to the "World Models for Autonomous Driving" section and within minutes has a complete inventory, from foundational theory to the latest policy models such as `GigaWorld-Policy`.
- **Clear path to deployment**: every entry links the paper, code repository, and project page, so engineers can quickly clone open-source projects such as `lingbot-va` for local debugging and fine-tuning.
- **Cross-domain inspiration**: browsing the "Embodied AI" and "General Video Generation" sections, the team borrowed physics-alignment techniques to improve the vehicle's rollout of complex road situations.
- **Standardized evaluation**: the built-in "Benchmarks & Evaluation" section helped the team set up an industry-standard test pipeline, making model selection reliable.

Awesome-World-Models weaves scattered research threads into an efficient knowledge map, dramatically shortening the path from theoretical exploration to engineering deployment.
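For programmatic literature triage, the bullet format used throughout this list (short name, quoted title, backticked venue tag, then `[[Label](url)]` links) can be parsed with a few regular expressions. A minimal sketch — the `parse_entry` helper and the sample line are illustrative, not part of the repository:

```python
import re

# One bullet in the format used throughout this list (sample entry).
ENTRY = ('* **Vista**: "A Generalizable Driving World Model with High Fidelity '
         'and Versatile Controllability", **`NeurIPS 2024`**. '
         '[[Paper](https://arxiv.org/abs/2405.17398)] '
         '[[Code](https://github.com/OpenDriveLab/Vista)]')

def parse_entry(line):
    """Extract the short name, venue tag, and labeled links from one bullet."""
    name = re.search(r"\*\*(.+?)\*\*:", line)          # leading **Name**:
    venue = re.search(r"\*\*`(.+?)`\*\*", line)        # **`venue`** tag
    links = dict(re.findall(r"\[\[(\w+(?: \w+)*)\]\((\S+?)\)\]", line))
    return {
        "name": name.group(1) if name else None,
        "venue": venue.group(1) if venue else None,
        "links": links,
    }

info = parse_entry(ENTRY)
print(info["name"], info["venue"], sorted(info["links"]))
```

Running this over each `* ` line of `README.md` yields a machine-readable table of every entry's venue and links; unnamed entries (those without a bold prefix) simply come back with `name=None`.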
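Since arXiv identifiers encode year and month (`YYMM.NNNNN`), the `[Paper]` links alone are enough to sort or bucket entries chronologically, independent of the venue tags. A small sketch under that assumption — `arxiv_month` is a hypothetical helper, and the URLs are sampled from the entries above:

```python
import re
from collections import Counter

# A few Paper URLs as they appear in this list.
URLS = [
    "https://arxiv.org/abs/2512.02417",
    "https://arxiv.org/abs/2511.20325",
    "https://arxiv.org/abs/2405.17398",
]

def arxiv_month(url):
    """arXiv IDs are YYMM.NNNNN; recover a sortable '20YY.MM' string."""
    m = re.search(r"abs/(\d{2})(\d{2})\.\d{4,5}", url)
    if not m:
        return None  # non-arXiv link (e.g. ECVA or OpenReview PDF)
    return f"20{m.group(1)}.{m.group(2)}"

counts = Counter(arxiv_month(u) for u in URLS)
print(sorted(counts.items(), reverse=True))
```

This makes it easy to see which months the list covers most densely, or to read the newest preprints first.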