[{"data":1,"prerenderedAt":-1},["ShallowReactive",2],{"similar-YanjieZe--Paper-List":3,"tool-YanjieZe--Paper-List":61},[4,18,26,36,44,53],{"id":5,"name":6,"github_repo":7,"description_zh":8,"stars":9,"difficulty_score":10,"last_commit_at":11,"category_tags":12,"status":17},4358,"openclaw","openclaw\u002Fopenclaw","OpenClaw 是一款专为个人打造的本地化 AI 助手，旨在让你在自己的设备上拥有完全可控的智能伙伴。它打破了传统 AI 助手局限于特定网页或应用的束缚，能够直接接入你日常使用的各类通讯渠道，包括微信、WhatsApp、Telegram、Discord、iMessage 等数十种平台。无论你在哪个聊天软件中发送消息，OpenClaw 都能即时响应，甚至支持在 macOS、iOS 和 Android 设备上进行语音交互，并提供实时的画布渲染功能供你操控。\n\n这款工具主要解决了用户对数据隐私、响应速度以及“始终在线”体验的需求。通过将 AI 部署在本地，用户无需依赖云端服务即可享受快速、私密的智能辅助，真正实现了“你的数据，你做主”。其独特的技术亮点在于强大的网关架构，将控制平面与核心助手分离，确保跨平台通信的流畅性与扩展性。\n\nOpenClaw 非常适合希望构建个性化工作流的技术爱好者、开发者，以及注重隐私保护且不愿被单一生态绑定的普通用户。只要具备基础的终端操作能力（支持 macOS、Linux 及 Windows WSL2），即可通过简单的命令行引导完成部署。如果你渴望拥有一个懂你",349277,3,"2026-04-06T06:32:30",[13,14,15,16],"Agent","开发框架","图像","数据工具","ready",{"id":19,"name":20,"github_repo":21,"description_zh":22,"stars":23,"difficulty_score":10,"last_commit_at":24,"category_tags":25,"status":17},3808,"stable-diffusion-webui","AUTOMATIC1111\u002Fstable-diffusion-webui","stable-diffusion-webui 是一个基于 Gradio 构建的网页版操作界面，旨在让用户能够轻松地在本地运行和使用强大的 Stable Diffusion 图像生成模型。它解决了原始模型依赖命令行、操作门槛高且功能分散的痛点，将复杂的 AI 绘图流程整合进一个直观易用的图形化平台。\n\n无论是希望快速上手的普通创作者、需要精细控制画面细节的设计师，还是想要深入探索模型潜力的开发者与研究人员，都能从中获益。其核心亮点在于极高的功能丰富度：不仅支持文生图、图生图、局部重绘（Inpainting）和外绘（Outpainting）等基础模式，还独创了注意力机制调整、提示词矩阵、负向提示词以及“高清修复”等高级功能。此外，它内置了 GFPGAN 和 CodeFormer 等人脸修复工具，支持多种神经网络放大算法，并允许用户通过插件系统无限扩展能力。即使是显存有限的设备，stable-diffusion-webui 也提供了相应的优化选项，让高质量的 AI 艺术创作变得触手可及。",162132,"2026-04-05T11:01:52",[14,15,13],{"id":27,"name":28,"github_repo":29,"description_zh":30,"stars":31,"difficulty_score":32,"last_commit_at":33,"category_tags":34,"status":17},1381,"everything-claude-code","affaan-m\u002Feverything-claude-code","everything-claude-code 是一套专为 AI 编程助手（如 Claude Code、Codex、Cursor 等）打造的高性能优化系统。它不仅仅是一组配置文件，而是一个经过长期实战打磨的完整框架，旨在解决 AI 代理在实际开发中面临的效率低下、记忆丢失、安全隐患及缺乏持续学习能力等核心痛点。\n\n通过引入技能模块化、直觉增强、记忆持久化机制以及内置的安全扫描功能，everything-claude-code 能显著提升 AI 在复杂任务中的表现，帮助开发者构建更稳定、更智能的生产级 AI 代理。其独特的“研究优先”开发理念和针对 Token 消耗的优化策略，使得模型响应更快、成本更低，同时有效防御潜在的攻击向量。\n\n这套工具特别适合软件开发者、AI 研究人员以及希望深度定制 AI 工作流的技术团队使用。无论您是在构建大型代码库，还是需要 AI 协助进行安全审计与自动化测试，everything-claude-code 都能提供强大的底层支持。作为一个曾荣获 Anthropic 黑客大奖的开源项目，它融合了多语言支持与丰富的实战钩子（hooks），让 AI 真正成长为懂上",156033,2,"2026-04-14T23:32:00",[14,13,35],"语言模型",{"id":37,"name":38,"github_repo":39,"description_zh":40,"stars":41,"difficulty_score":32,"last_commit_at":42,"category_tags":43,"status":17},2271,"ComfyUI","Comfy-Org\u002FComfyUI","ComfyUI 是一款功能强大且高度模块化的视觉 AI 引擎，专为设计和执行复杂的 Stable Diffusion 图像生成流程而打造。它摒弃了传统的代码编写模式，采用直观的节点式流程图界面，让用户通过连接不同的功能模块即可构建个性化的生成管线。\n\n这一设计巧妙解决了高级 AI 绘图工作流配置复杂、灵活性不足的痛点。用户无需具备编程背景，也能自由组合模型、调整参数并实时预览效果，轻松实现从基础文生图到多步骤高清修复等各类复杂任务。ComfyUI 拥有极佳的兼容性，不仅支持 Windows、macOS 和 Linux 全平台，还广泛适配 NVIDIA、AMD、Intel 及苹果 Silicon 等多种硬件架构，并率先支持 SDXL、Flux、SD3 等前沿模型。\n\n无论是希望深入探索算法潜力的研究人员和开发者，还是追求极致创作自由度的设计师与资深 AI 绘画爱好者，ComfyUI 都能提供强大的支持。其独特的模块化架构允许社区不断扩展新功能，使其成为当前最灵活、生态最丰富的开源扩散模型工具之一，帮助用户将创意高效转化为现实。",108322,"2026-04-10T11:39:34",[14,15,13],{"id":45,"name":46,"github_repo":47,"description_zh":48,"stars":49,"difficulty_score":32,"last_commit_at":50,"category_tags":51,"status":17},6121,"gemini-cli","google-gemini\u002Fgemini-cli","gemini-cli 是一款由谷歌推出的开源 AI 命令行工具，它将强大的 Gemini 大模型能力直接集成到用户的终端环境中。对于习惯在命令行工作的开发者而言，它提供了一条从输入提示词到获取模型响应的最短路径，无需切换窗口即可享受智能辅助。\n\n这款工具主要解决了开发过程中频繁上下文切换的痛点，让用户能在熟悉的终端界面内直接完成代码理解、生成、调试以及自动化运维任务。无论是查询大型代码库、根据草图生成应用，还是执行复杂的 Git 操作，gemini-cli 都能通过自然语言指令高效处理。\n\n它特别适合广大软件工程师、DevOps 
人员及技术研究人员使用。其核心亮点包括支持高达 100 万 token 的超长上下文窗口，具备出色的逻辑推理能力；内置 Google 搜索、文件操作及 Shell 命令执行等实用工具；更独特的是，它支持 MCP（模型上下文协议），允许用户灵活扩展自定义集成，连接如图像生成等外部能力。此外，个人谷歌账号即可享受免费的额度支持，且项目基于 Apache 2.0 协议完全开源，是提升终端工作效率的理想助手。",100752,"2026-04-10T01:20:03",[52,13,15,14],"插件",{"id":54,"name":55,"github_repo":56,"description_zh":57,"stars":58,"difficulty_score":32,"last_commit_at":59,"category_tags":60,"status":17},4721,"markitdown","microsoft\u002Fmarkitdown","MarkItDown 是一款由微软 AutoGen 团队打造的轻量级 Python 工具，专为将各类文件高效转换为 Markdown 格式而设计。它支持 PDF、Word、Excel、PPT、图片（含 OCR）、音频（含语音转录）、HTML 乃至 YouTube 链接等多种格式的解析，能够精准提取文档中的标题、列表、表格和链接等关键结构信息。\n\n在人工智能应用日益普及的今天，大语言模型（LLM）虽擅长处理文本，却难以直接读取复杂的二进制办公文档。MarkItDown 恰好解决了这一痛点，它将非结构化或半结构化的文件转化为模型“原生理解”且 Token 效率极高的 Markdown 格式，成为连接本地文件与 AI 分析 pipeline 的理想桥梁。此外，它还提供了 MCP（模型上下文协议）服务器，可无缝集成到 Claude Desktop 等 LLM 应用中。\n\n这款工具特别适合开发者、数据科学家及 AI 研究人员使用，尤其是那些需要构建文档检索增强生成（RAG）系统、进行批量文本分析或希望让 AI 助手直接“阅读”本地文件的用户。虽然生成的内容也具备一定可读性，但其核心优势在于为机器",93400,"2026-04-06T19:52:38",[52,14],{"id":62,"github_repo":63,"name":64,"description_en":65,"description_zh":66,"ai_summary_zh":66,"readme_en":67,"readme_zh":68,"quickstart_zh":69,"use_case_zh":70,"hero_image_url":71,"owner_login":72,"owner_name":73,"owner_avatar_url":74,"owner_bio":75,"owner_company":76,"owner_location":77,"owner_email":77,"owner_twitter":78,"owner_website":79,"owner_url":80,"languages":77,"stars":81,"forks":82,"last_commit_at":83,"license":77,"difficulty_score":84,"env_os":85,"env_gpu":86,"env_ram":86,"env_deps":87,"category_tags":90,"github_topics":92,"view_count":32,"oss_zip_url":77,"oss_zip_packed_at":77,"status":17,"created_at":97,"updated_at":98,"faqs":99,"releases":100},7651,"YanjieZe\u002FPaper-List","Paper-List","A paper list of my history reading. Robotics, Learning, Vision.","Paper-List 是由研究者 Yanjie Ze 维护的一份高质量学术论文清单，专注于机器人学、机器学习与计算机视觉三大前沿领域。面对海量且分散的顶会论文，科研人员往往难以高效追踪最新进展，Paper-List 通过系统性地整理历年（2022-2025）RSS、CVPR、ICLR、CoRL 等顶级会议录用论文及精选预印本，有效解决了信息过载与检索困难的问题。\n\n这份清单不仅按“人形机器人”、“灵巧操作”、\"3D 机器人学习”及“机器人基础模型”等热门主题分类，还特别收录了具有代表性的近期随机论文，帮助读者快速捕捉如 3D 扩散策略、实时立体匹配等创新方向。其独特亮点在于兼顾了经典会议归档与前沿 arXiv 动态，并提供了直达论文官网或代码库的链接，极大提升了文献调研效率。\n\nPaper-List 非常适合高校研究人员、AI 开发者以及希望深入了解具身智能技术的学生使用。它摒弃了复杂的算法包装，回归学术分享的本质，以清晰的结构和持续的更新，成为探索机器人学习与视觉感知领域不可或缺的知识导航。","# A Paper List of [Yanjie Ze](https:\u002F\u002Fyanjieze.com\u002F)\n\nTopics:\n- [Humanoid Robots](https:\u002F\u002Fgithub.com\u002FYanjieZe\u002Fawesome-humanoid-robot-learning)\n- [Dexterous Manipulation](topics\u002Fdex_manipulation.md)\n- [3D Robot Learning](topics\u002F3d_robotic_learning.md)\n- [Robot Foundation Models](topics\u002Frobot_foundation_models.md)\n- [Best Papers](topics\u002Fbest_papers.md)\n\nPapers:\n- 2025\n  - [RSS 2025](https:\u002F\u002Froboticsconference.org\u002Fprogram\u002Fpapers\u002F)\n  - [CVPR 2025](https:\u002F\u002Fcvpr.thecvf.com\u002FConferences\u002F2025\u002FAcceptedPapers)\n  - [ICLR 2025 scores](https:\u002F\u002Fpapercopilot.com\u002Fstatistics\u002Ficlr-statistics\u002Ficlr-2025-statistics\u002F)\n- 2024\n  - [CoRL 2024](https:\u002F\u002Fopenreview.net\u002Fgroup?id=robot-learning.org\u002FCoRL\u002F2024\u002FConference#tab-accept) \u002F [statistics](https:\u002F\u002Fpapercopilot.com\u002Fstatistics\u002Fcorl-statistics\u002Fcorl-2024-statistics\u002F)\n  - [RSS 2024](https:\u002F\u002Froboticsconference.org\u002Fprogram\u002Fpapers\u002F)\n  - [3DV 2024](https:\u002F\u002F3dvconf.github.io\u002F2024\u002Faccepted-papers\u002F)\n  - [CVPR 2024](https:\u002F\u002Fcvpr.thecvf.com\u002FConferences\u002F2024\u002FAcceptedPapers) \u002F [interactive 
overview](https:\u002F\u002Fpublic.tableau.com\u002Fviews\u002FCVPR2024\u002FPaperList?%3AshowVizHome=no)\n  - [ICLR 2024](https:\u002F\u002Fopenreview.net\u002Fgroup?id=ICLR.cc\u002F2024\u002FConference) \u002F [scores](https:\u002F\u002Fguoqiangwei.xyz\u002Ficlr2024_stats\u002Ficlr2024_submissions.html)\n- 2023\n  - [NeurIPS 2023](https:\u002F\u002Fneurips.cc\u002Fvirtual\u002F2023\u002Fpapers.html)\n  - [CoRL 2023](https:\u002F\u002Fopenreview.net\u002Fgroup?id=robot-learning.org\u002FCoRL\u002F2023\u002FConference#accept--oral-)\n  - [ICCV 2023](https:\u002F\u002Fopenaccess.thecvf.com\u002FICCV2023?day=all)\n  - [ICML 2023](https:\u002F\u002Ficml.cc\u002Fvirtual\u002F2023\u002Fpapers.html?filter=titles)\n  - [SIGGRAPH 2023](https:\u002F\u002Fkesen.realtimerendering.com\u002Fsig2023.html)\n  - [RSS 2023](https:\u002F\u002Froboticsconference.org\u002F2023\u002Fprogram\u002Fpapers\u002F)\n  - [CVPR 2023](https:\u002F\u002Fcvpr2023.thecvf.com\u002FConferences\u002F2023\u002FAcceptedPapers)\n  - [ICLR 2023](https:\u002F\u002Ficlr.cc\u002Fvirtual\u002F2023\u002Fpapers.html?filter=titles)\n- 2022\n  - [NeurIPS 2022](https:\u002F\u002Fneurips.cc\u002Fvirtual\u002F2022\u002Fpapers.html?filter=titles)\n\n\n# Recent Random Papers\n- RSS 2024, **3D Diffusion Policy**: Generalizable Visuomotor Policy Learning via Simple 3D Representations, [Website](https:\u002F\u002F3d-diffusion-policy.github.io\u002F)\n- [arXiv 2025.12](https:\u002F\u002Farxiv.org\u002Fabs\u002F2512.11130), Fast-FoundationStereo: Real-Time Zero-Shot Stereo Matching\n- [website](https:\u002F\u002Fopenreview.net\u002Fforum?id=Jtjurj7oIJ), Position: Scaling Simulation is Neither Necessary Nor Sufficient for In-the-Wild Robot Manipulation\n- [website](https:\u002F\u002Fwww.mitchel.computer\u002Fxfactor\u002F), True Self-Supervised Novel View Synthesis is Transferable\n- [arXiv 2025.09](https:\u002F\u002Farxiv.org\u002Fabs\u002F2509.04343), Psychologically Enhanced AI Agents\n- [arXiv 2024.08](https:\u002F\u002Farxiv.org\u002Fabs\u002F2408.07855), Complementarity-Free Multi-Contact Modeling and Optimization for Dexterous Manipulation\n- [website](https:\u002F\u002Ftoyotaresearchinstitute.github.io\u002Flbm1\u002F), A Careful Examination of Large Behavior Models for Multitask Dexterous Manipulation\n- [arXiv 2025.07](https:\u002F\u002Farxiv.org\u002Fabs\u002F2507.07969), Reinforcement Learning with Action Chunking\n- arXiv 2025.06, DexWrist: A Robotic Wrist for Constrained and Dynamic Manipulation, [website](https:\u002F\u002Fdexwrist.csail.mit.edu\u002F)\n- arXiv 2025.06, Versatile Loco-Manipulation through Flexible Interlimb Coordination, [website](https:\u002F\u002Frelic-locoman.github.io\u002F)\n- [arXiv 2025.06](https:\u002F\u002Farxiv.org\u002Fabs\u002F2506.01944), Feel the Force: Contact-Driven Learning from Humans\n- SIGGRAPH 2025, RenderFormer: Transformer-based Neural Rendering of Triangle Meshes with Global Illumination, [website](https:\u002F\u002Fmicrosoft.github.io\u002Frenderformer\u002F)\n- [Science Robotics](https:\u002F\u002Fwww.science.org\u002Fdoi\u002F10.1126\u002Fscirobotics.ads6192?utm_campaign=SciRobotics&utm_medium=ownedSocial&utm_source=twitter), High-speed control and navigation for quadrupedal robots on complex and discrete terrain\n- [Science Robotics](https:\u002F\u002Fwww.science.org\u002Fdoi\u002F10.1126\u002Fscirobotics.adu3922?utm_campaign=SciRobotics&utm_medium=ownedSocial&utm_source=twitter), Learning coordinated badminton skills for legged manipulators\n- arXiv 2025.04, DexSinGrasp: Learning a 
Unified Policy for Dexterous Object Singulation and Grasping in Cluttered Environments, [website](https:\u002F\u002Fnus-lins-lab.github.io\u002Fdexsingweb\u002F)\n- arXiv 2025.04, ORCA: Open-Source, Reliable, Cost-Effective, Anthropomorphic Robotic Hand for Uninterrupted Dexterous Task Learning, [website](https:\u002F\u002Fwww.orcahand.com\u002F)\n- arXiv 2025.04, One-Minute Video Generation with Test-Time Training, [website](https:\u002F\u002Ftest-time-training.github.io\u002Fvideo-dit\u002F)\n- arXiv 2025.01, Vid2Sim: Realistic and Interactive Simulation from Video for Urban Navigation, [website](https:\u002F\u002Fmetadriverse.github.io\u002Fvid2sim\u002F)\n- arXiv 2023.05, Data-Free Learning of Reduced-Order Kinematics, [arXiv](https:\u002F\u002Farxiv.org\u002Fabs\u002F2305.03846)\n- arXiv 2024.04, ManipTrans: Efficient Dexterous Bimanual Manipulation Transfer via Residual Learning, [website](https:\u002F\u002Fmaniptrans.github.io\u002F)\n- arXiv 2021.09, Geometric Fabrics: Generalizing Classical Mechanics to Capture the Physics of Behavior, [arXiv](https:\u002F\u002Farxiv.org\u002Fabs\u002F2109.10443)\n- arXiv 2023.10, Grasp Multiple Objects with One Hand, [website](https:\u002F\u002Fmultigrasp.github.io\u002F)\n- arXiv 2025.03, Learning to Play Piano in the Real World, [website](https:\u002F\u002Flasr.org\u002Fresearch\u002Flearning-to-play-piano)\n- arXiv 2025.03, MotionStreamer: Streaming Motion Generation via Diffusion-based Autoregressive Model in Causal Latent Space, [website](https:\u002F\u002Fzju3dv.github.io\u002FMotionStreamer\u002F)\n- arXiv 2025.03, Unified Video Action Model, [website](https:\u002F\u002Funified-video-action-model.github.io\u002F)\n- arXiv 2025.03, Scalable Real2Sim: Physics-Aware Asset Generation Via Robotic Pick-and-Place Setups, [website](https:\u002F\u002Fscalable-real2sim.github.io\u002F)\n- arXiv 2025.03, Discrete-Time Hybrid Automata Learning: Legged Locomotion Meets Skateboarding, [arXiv](https:\u002F\u002Farxiv.org\u002Fabs\u002F2503.01842)\n- arXiv 2025.02, InterMimic: Towards Universal Whole-Body Control for Physics-Based Human-Object Interactions, [arXiv](https:\u002F\u002Farxiv.org\u002Fabs\u002F2502.20390)\n- arXiv 2025.02, LiDAR Registration with Visual Foundation Models, [website](https:\u002F\u002Fvfm-registration.cs.uni-freiburg.de\u002F)\n- arXiv 2025.02, FACTR: Force-Attending Curriculum Training for Contact-Rich Policy Learning, [website](https:\u002F\u002Fjasonjzliu.com\u002Ffactr\u002F)\n- arXiv 2025.02, DemoGen: Synthetic Demonstration Generation for Data-Efficient Visuomotor Policy Learning, [website](https:\u002F\u002Fdemo-generation.github.io\u002F)\n- arXiv 2025.02, AnyDexGrasp: Learning General Dexterous Grasping for Any Hands with Human-level Learning Efficiency, [website](https:\u002F\u002Fgraspnet.net\u002Fanydexgrasp\u002F)\n- IEEE Transactions on Human-Machine Systems 2015, The GRASP Taxonomy of Human Grasp Types, [website](https:\u002F\u002Fieeexplore.ieee.org\u002Fdocument\u002F7243327)\n- Science Robotics, Intrinsic sense of touch for intuitive physical human-robot interaction, [website](https:\u002F\u002Fwww.science.org\u002Fstoken\u002Fauthor-tokens\u002FST-2065\u002Ffull)\n- arXiv 2025.02, Bridging the Sim-to-Real Gap for Athletic Loco-Manipulation, [website](https:\u002F\u002Fuan.csail.mit.edu\u002F)\n- arXiv 2025.02, **RigAnything**: Template-Free Autoregressive Rigging for Diverse 3D Assets, [arXiv](https:\u002F\u002Farxiv.org\u002Fabs\u002F2502.09615)\n- arXiv 2025.02, Robot Data Curation with Mutual Information 
Estimators, [website](http:\u002F\u002Fjoeyhejna.com\u002Fdemonstration-info\u002F)\n- arXiv 2024.05, Learning Force Control for Legged Manipulation, [arXiv](https:\u002F\u002Farxiv.org\u002Fabs\u002F2405.01402)\n- arXiv 2025.02, **TD-M(PC)2**: Improving Temporal Difference MPC Through Policy Constraint, [website](https:\u002F\u002Fdarthutopian.github.io\u002Ftdmpc_square\u002F)\n- arXiv 2025.02, **DexterityGen**: Foundation Controller for Unprecedented Dexterity, [website](https:\u002F\u002Fzhaohengyin.github.io\u002Fdexteritygen\u002F)\n- arXiv 2025.02, Strengthening Generative Robot Policies through Predictive World Modeling, [website](https:\u002F\u002Fcomputationalrobotics.seas.harvard.edu\u002FGPC\u002F)\n- arXiv 2025.01, **CuriousBot**: Interactive Mobile Exploration via Actionable 3D Relational Object Graph, [website](https:\u002F\u002Fbdaiinstitute.github.io\u002Fcuriousbot\u002F)\n- arXiv 2025.01, Improving Vision-Language-Action Model with Online Reinforcement Learning, [arXiv](https:\u002F\u002Farxiv.org\u002Fabs\u002F2501.16664)\n- arXiv 2025.01, **Physics IQ Benchmark**: Do generative video models learn physical principles from watching videos?, [website](https:\u002F\u002Fphysics-iq.github.io\u002F)\n- arXiv 2025.01, **FAST**: Efficient Robot Action Tokenization, [website](https:\u002F\u002Fwww.pi.website\u002Fresearch\u002Ffast)\n- arXiv 2025.01, **DAViD**: Modeling Dynamic Affordance of 3D Objects using Pre-trained Video Diffusion Models, [website](https:\u002F\u002Fsnuvclab.github.io\u002Fdavid\u002F)\n- arXiv 2025.01, Predicting 4D Hand Trajectory from Monocular Videos, [website](https:\u002F\u002Fjudyye.github.io\u002Fhaptic-www\u002F)\n- arXiv 2025.01, Learning to Transfer Human Hand Skills for Robot Manipulations, [website](https:\u002F\u002Frureadyo.github.io\u002FMocapRobot\u002F)\n- arXiv 2025.01, **Beyond Sight**: Finetuning Generalist Robot Policies with Heterogeneous Sensors via Language Grounding, [website](https:\u002F\u002Ffuse-model.github.io\u002F)\n- arXiv 2025.01, **Depth Any Camera**: Zero-Shot Metric Depth Estimation from Any Camera, [website](https:\u002F\u002Fyuliangguo.github.io\u002Fdepth-any-camera\u002F)\n- arXiv 2024.04, **Metric3Dv2**: A Versatile Monocular Geometric Foundation Model for Zero-shot Metric Depth and Surface Normal Estimation, [website](https:\u002F\u002Fjugghm.github.io\u002FMetric3Dv2\u002F)\n- **Cosmos**: World Foundation Model Platform for Physical AI, [website](https:\u002F\u002Fresearch.nvidia.com\u002Flabs\u002Fdir\u002Fcosmos1\u002F) \u002F [github](https:\u002F\u002Fgithub.com\u002FNVIDIA\u002FCosmos)\n- arXiv 2024.12, **ManiBox**: Enhancing Spatial Grasping Generalization via Scalable Simulation Data Generation, [arXiv](https:\u002F\u002Farxiv.org\u002Fabs\u002F2411.01850)\n- arXiv 2024.12, **VLABench**: A Large-Scale Benchmark for Language-Conditioned Robotics Manipulation with Long-Horizon Reasoning Tasks, [website](https:\u002F\u002Fvlabench.github.io\u002F)\n- arXiv 2024.12, **Video Prediction Policy**: A Generalist Robot Policy with Predictive Visual Representations, [website](https:\u002F\u002Fvideo-prediction-policy.github.io\u002F)\n- arXiv 2024.12, **RoboMIND**: Benchmark on Multi-embodiment Intelligence Normative Data for Robot Manipulation, [arXiv](https:\u002F\u002Farxiv.org\u002Fabs\u002F2412.13877)\n- arXiv 2024.12, **Towards Generalist Robot Policies**: What Matters in Building Vision-Language-Action Models, [website](https:\u002F\u002Frobovlms.github.io\u002F)\n- arXiv 2024.12, **Genesis**: A Generative 
and Universal Physics Engine for Robotics and Beyond, [website](https:\u002F\u002Fgenesis-embodied-ai.github.io\u002F)\n- arXiv 2024.10, **articulate-anything**: Automatic Modeling of Articulated Objects via a Vision-Language Foundation Model, [arXiv](https:\u002F\u002Farxiv.org\u002Fabs\u002F2410.13882) \u002F [website](https:\u002F\u002Farticulate-anything.github.io\u002F)\n- arXiv 2024.12, **HandsOnVLM**: Vision-Language Models for Hand-Object Interaction Prediction, [website](https:\u002F\u002Fwww.chenbao.tech\u002Fhandsonvlm\u002F)\n- arXiv 2024.12, **Meta Motivo**: Zero-Shot Whole-Body Humanoid Control via Behavioral Foundation Models, [github](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fmetamotivo)\n- arXiv 2024.12, **illusion3d**: 3D Multiview Illusion with 2D Diffusion Priors, [website](https:\u002F\u002F3d-multiview-illusion.github.io\u002F)\n- arXiv 2024.12, **RLDG**: Robotic Generalist Policy Distillation via Reinforcement Learning, [website](https:\u002F\u002Fgeneralist-distillation.github.io\u002F)\n- arXiv 2024.12, **SOLAMI**: Social Vision-Language-Action Modeling for Immersive Interaction with 3D Autonomous Characters, [website](https:\u002F\u002Fsolami-ai.github.io\u002F)\n- arXiv 2024.12, Reinforcement Learning from **Wild Animal Videos**, [website](https:\u002F\u002Felliotchanesane31.github.io\u002FRLWAV\u002F)\n- arXiv 2024.12, **NaVILA**: Legged Robot Vision-Language-Action Model for Navigation, [arXiv](https:\u002F\u002Farxiv.org\u002Fabs\u002F2412.04453)\n- arXiv 2024.12, **Motion Prompting**: Controlling Video Generation with Motion Trajectories, [website](https:\u002F\u002Fmotion-prompting.github.io\u002F)\n- arXiv 2024.12, **CASHER**: Robot Learning with Super-Linear Scaling, [website](https:\u002F\u002Fcasher-robot-learning.github.io\u002FCASHER\u002F)\n- arXiv 2024.12, **CogACT**: A Foundational Vision-Language-Action Model for Synergizing Cognition and Action in Robotic Manipulation, [website](https:\u002F\u002Fcogact.github.io\u002F)\n- arXiv 2024.11, **CAT4D**: Create Anything in 4D with Multi-View Video Diffusion Models, [website](https:\u002F\u002Fcat-4d.github.io\u002F)\n- arXiv 2024.11, **Generative Omnimatte**: Learning to Decompose Video into Layers, [website](https:\u002F\u002Fgen-omnimatte.github.io\u002F)\n- arXiv 2024.11, Inference-Time Policy Steering through Human Interactions, [arXiv](https:\u002F\u002Farxiv.org\u002Fabs\u002F2411.16627)\n- arXiv 2024.11, **The Matrix**: Infinite-Horizon World Generation with Real-Time Moving Control, [website](https:\u002F\u002Fthematrix1999.github.io\u002F)\n- arXiv 2024.11, Learning-based Trajectory Tracking for Bird-inspired **Flapping-Wing Robots**, [arXiv](https:\u002F\u002Farxiv.org\u002Fabs\u002F2411.15130)\n- arXiv 2024.11, **WildLMA**: Long Horizon Loco-MAnipulation in the Wild, [website](https:\u002F\u002Fwildlma.github.io\u002F)\n- SIGGRAPH ASIA 2024, **CBIL**: Collective Behavior Imitation Learning for Fish from Real Videos, [website](https:\u002F\u002Ffrank-zy-dou.github.io\u002Fprojects\u002FCBIL\u002Findex.html)\n- arXiv 2024.11, Learning Time-Optimal and Speed-Adjustable Tactile In-Hand Manipulation, [website](https:\u002F\u002Faidx-lab.org\u002Fmanipulation\u002Fhumanoids24)\n- arXiv 2024.11, Soft Robotic **Dynamic In-Hand Pen Spinning**, [website](https:\u002F\u002Fsoft-spin.github.io\u002F)\n- arXiv 2024.11, **Generative World Explorer**, [website](https:\u002F\u002Fgenerative-world-explorer.github.io\u002F)\n- arXiv 2024.11, **RoboGSim**: A Real2Sim2Real Robotic Gaussian 
Splatting Simulator, [website](https:\u002F\u002Frobogsim.github.io\u002F)\n- arXiv 2024.11, **Moving Off-the-Grid**: Scene-Grounded Video Representations, [arXiv](https:\u002F\u002Farxiv.org\u002Fabs\u002F2411.05927) \u002F [website](https:\u002F\u002Fmoog-paper.github.io\u002F)\n- arXiv 2024.07, **From Imitation to Refinement** -- Residual RL for Precise Assembly, [arXiv](https:\u002F\u002Farxiv.org\u002Fabs\u002F2407.16677) \u002F [website](https:\u002F\u002Fresidual-assembly.github.io\u002F)\n- arXiv 2024.10, **HIL-SERL**: Precise and Dexterous Robotic Manipulation via Human-in-the-Loop Reinforcement Learning, [website](https:\u002F\u002Fhil-serl.github.io\u002F) \u002F [arXiv](https:\u002F\u002Farxiv.org\u002Fabs\u002F2410.21845)\n- arXiv 2024.10, **DELTA**: Dense Efficient Long-range 3D Tracking for any video, [website](https:\u002F\u002Fsnap-research.github.io\u002FDELTA\u002F)\n- arXiv 2024.10, **π0**: A Vision-Language-Action Flow Model for General Robot Control, [arXiv](https:\u002F\u002Farxiv.org\u002Fabs\u002F2410.24164) \u002F [website](https:\u002F\u002Fwww.physicalintelligence.company\u002Fblog\u002Fpi0)\n- arXiv 2024.10, One Step Diffusion via **Shortcut Models**, [arXiv](https:\u002F\u002Farxiv.org\u002Fabs\u002F2410.12557)\n- arXiv 2024.10, **BUMBLE**: Unifying Reasoning and Acting with Vision-Language Models for Building-wide Mobile Manipulation, [Website](https:\u002F\u002Frobin-lab.cs.utexas.edu\u002FBUMBLE\u002F)\n- arXiv 2024.10, **Run-time Observation Interventions**: Make Vision-Language-Action Models More Visually Robust, [Website](https:\u002F\u002Faasherh.github.io\u002Fbyovla\u002F)\n- arXiv 2024.10, **MonST3R**: A Simple Approach for Estimating Geometry in the Presence of Motion, [Website](https:\u002F\u002Fmonst3r-project.github.io\u002F)\n- arXiv 2024.10, Learning Humanoid Locomotion over Challenging Terrain, [arXiv](https:\u002F\u002Farxiv.org\u002Fabs\u002F2410.03654)\n- arXiv 2024.10, Estimating Body and Hand Motion in an Ego-sensed World, [arXiv](https:\u002F\u002Farxiv.org\u002Fabs\u002F2410.03665)\n- arXiv 2024.09, **Opt2Skill**: Imitating Dynamically-feasible Whole-Body Trajectories for Versatile Humanoid Loco-Manipulation, [arXiv](https:\u002F\u002Farxiv.org\u002Fabs\u002F2409.20514)\n- arXiv 2024.09, **Helpful DoggyBot**: Open-World Object Fetching using Legged Robots and Vision-Language Models, [Website](https:\u002F\u002Fhelpful-doggybot.github.io\u002F)\n- arXiv 2024.09 \u002F CoRL 2024 Oral, **Robot See Robot Do**: Imitating Articulated Object Manipulation with Monocular 4D Reconstruction, [Website](https:\u002F\u002Frobot-see-robot-do.github.io\u002F)\n- arXiv 2024.09, **Full-Order Sampling-Based MPC** for Torque-Level Locomotion Control via Diffusion-Style Annealing, [Website](https:\u002F\u002Flecar-lab.github.io\u002Fdial-mpc\u002F)\n- arXiv 2024.09, **Blox-Net**: Generative Design-for-Robot-Assembly using VLM Supervision, Physics Simulation, and A Robot with Reset, [Website](https:\u002F\u002Fbloxnet.org\u002F)\n- arXiv 2024.06, **DigiRL**: Training In-The-Wild Device-Control Agents with Autonomous Reinforcement Learning, [Website](https:\u002F\u002Fdigirl-agent.github.io\u002F)\n- arXiv 2024.09, **ClearDepth**: Enhanced Stereo Perception of Transparent Objects for Robotic Manipulation, [arXiv](https:\u002F\u002Farxiv.org\u002Fabs\u002F2409.08926)\n- arXiv 2024.09, **HOP**: Hand-object interaction pretraining from videos, [Website](https:\u002F\u002Fhgaurav2k.github.io\u002Fhop\u002F)\n- arXiv 2024.09, **AnySkin**: Plug-and-play Skin 
Sensing for Robotic Touch, [Website](https:\u002F\u002Fany-skin.github.io\u002F)\n- CoRL 2024, Continuously Improving Mobile Manipulation with **Autonomous Real-World RL**, [Website](https:\u002F\u002Fcontinual-mobile-manip.github.io\u002F)\n- arXiv 2024.09, **Neural MP**: A Generalist Neural Motion Planner, [Website](https:\u002F\u002Fmihdalal.github.io\u002Fneuralmotionplanner\u002F)\n- IROS 2024, Learning to **Walk and Fly** with Adversarial Motion Priors, [arXiv](https:\u002F\u002Farxiv.org\u002Fabs\u002F2309.12784)\n- arXiv 2024.09, **Robot Utility Models**: General Policies for Zero-Shot Deployment in New Environments, [Website](https:\u002F\u002Frobotutilitymodels.com\u002F)\n- CoRL 2024, **LucidSim**: Learning Agile Visual Locomotion from Generated Images, [OpenReview](https:\u002F\u002Fopenreview.net\u002Fforum?id=cGswIOxHcN)\n- CoRL 2024, **OKAMI**: Teaching Humanoid Robots Manipulation Skills through Single Video Imitation, [Website](https:\u002F\u002Fopenreview.net\u002Fforum?id=URj5TQTAXM&referrer=%5Bthe%20profile%20of%20Yuke%20Zhu%5D(%2Fprofile%3Fid%3D~Yuke_Zhu1))\n- CoRL 2024, Learning Robotic Locomotion Affordances and **Photorealistic Simulators from Human-Captured Data**, [OpenReview](https:\u002F\u002Fopenreview.net\u002Fforum?id=1TEZ1hiY5m)\n- CoRL 2024, **Object-Centric Dexterous Manipulation** from Human Motion Data, [OpenReview](https:\u002F\u002Fopenreview.net\u002Fforum?id=KAzku0Uyh1)\n- CoRL 2024, **ALOHA Unleashed**: A Simple Recipe for Robot Dexterity, [OpenReview](https:\u002F\u002Fopenreview.net\u002Fforum?id=gvdXE7ikHI)\n- CoRL 2024, **GenDP**: 3D Semantic Fields for Category-Level Generalizable Diffusion Policy, [OpenReview](https:\u002F\u002Fopenreview.net\u002Fforum?id=7wMlwhCvjS)\n- CoRL 2024, **Dynamic 3D Gaussian Tracking** for Graph-Based Neural Dynamics Modeling, [OpenReview](https:\u002F\u002Fopenreview.net\u002Fforum?id=itKJ5uu1gW)\n- CoRL 2024, **D3RoMa**: Disparity Diffusion-based Depth Sensing for Material-Agnostic Robotic Manipulation, [OpenReview](https:\u002F\u002Fopenreview.net\u002Fforum?id=7E3JAys1xO)\n- CoRL 2024, **Action Space Design** in Reinforcement Learning for Robot Motor Skills, [OpenReview](https:\u002F\u002Fopenreview.net\u002Fforum?id=GGuNkjQSrk)\n- CoRL 2024, So You Think You Can Scale Up **Autonomous Robot Data Collection**?, [OpenReview](https:\u002F\u002Fopenreview.net\u002Fforum?id=XrxLGzF0lJ)\n- CoRL 2024, **VISTA**: View-Invariant Policy Learning via Zero-Shot Novel View Synthesis, [arXiv](https:\u002F\u002Farxiv.org\u002Fabs\u002F2409.03685)\n- arXiv 2024.08, **Bidirectional Decoding**: Improving Action Chunking via Closed-Loop Resampling, [Website](https:\u002F\u002Fbid-robot.github.io\u002F)\n- arXiv 2024.08, **Unsupervised-to-Online** Reinforcement Learning, [arXiv](https:\u002F\u002Fwww.arxiv.org\u002Fabs\u002F2408.14785)\n- arXiv 2024.08, **SkillMimic**: Learning Reusable Basketball Skills from Demonstrations, [Website](https:\u002F\u002Fingrid789.github.io\u002FSkillMimic\u002F)\n- arXiv 2024.08, **ReKep**: Spatio-Temporal Reasoning of Relational Keypoint Constraints for Robotic Manipulation, [Website](https:\u002F\u002Frekep-robot.github.io\u002F)\n- arXiv 2024.08, In-Context Imitation Learning via Next-Token Prediction, [arXiv](https:\u002F\u002Farxiv.org\u002Fabs\u002F2408.15980)\n- arXiv 2024.08, **GameNGen**: Diffusion Models Are Real-Time Game Engines, [Website](https:\u002F\u002Fgamengen.github.io\u002F)\n- arXiv 2024.08, **Splatt3R**: Zero-shot Gaussian Splatting from Uncalibrated Image Pairs, 
[Website](https:\u002F\u002Fsplatt3r.active.vision\u002F)\n- arXiv 2024.08, **CrossFormer**: Scaling Cross-Embodied Learning for Manipulation, Navigation, Locomotion, and Aviation, [Website](https:\u002F\u002Fcrossformer-model.github.io\u002F)\n- arXiv 2024.08, **UniT**: Unified Tactile Representation for Robot Learning, [Website](https:\u002F\u002Fzhengtongxu.github.io\u002Funifiedtactile.github.io\u002F)\n- arXiv 2024.08, **Body Transformer**: Leveraging Robot Embodiment for Policy Learning, [arXiv](https:\u002F\u002Farxiv.org\u002Fabs\u002F2408.06316)\n- ICML 2024 oral, **SAPG**: Split and Aggregate Policy Gradients, [Website](https:\u002F\u002Fsapg-rl.github.io\u002F)\n- Humanoid 2006, Dynamic Pen Spinning Using a High-speed Multifingered Hand with High-speed Tactile Sensor\n- IROS 2024, Radiance Fields for Robotic Teleoperation, [arXiv](https:\u002F\u002Farxiv.org\u002Fabs\u002F2407.20194)\n- arXiv 2024.07, Lessons from Learning to **Spin “Pens”**, [Website](https:\u002F\u002Fpenspin.github.io\u002F)\n- arXiv 2024.07, **Flow** as the Cross-Domain Manipulation Interface, [arXiv](https:\u002F\u002Farxiv.org\u002Fabs\u002F2407.15208)\n- RSS 2024, Offline Imitation Learning Through **Graph Search and Retrieval**, [arXiv](https:\u002F\u002Farxiv.org\u002Fabs\u002F2407.15403)\n- CVPR 2024, **HOLD**: Category-agnostic 3D Reconstruction of Interacting Hands and Objects from Video, [Website](https:\u002F\u002Fzc-alexfan.github.io\u002Fhold)\n- CVPR 2023, **ARCTIC**: A Dataset for Dexterous Bimanual Hand-Object Manipulation, [Website](https:\u002F\u002Farctic.is.tue.mpg.de\u002F)\n- SIGGRAPH 2024, **Neural Gaussian Scale-Space Fields**, [Website](https:\u002F\u002Fneural-gaussian-scale-space-fields.mpi-inf.mpg.de\u002F)\n- arXiv 2024.07, A Simulation Benchmark for **Autonomous Racing** with Large-Scale Human Data, [Website](https:\u002F\u002Fassetto-corsa-gym.github.io\u002F)\n- arXiv 2024.07, **From Imitation to Refinement**: Residual RL for Precise Visual Assembly, [Website](https:\u002F\u002Fresidual-assembly.github.io\u002F)\n- arXiv 2024.07, **Shape of Motion**: 4D Reconstruction from a Single Video, [Website](https:\u002F\u002Fshape-of-motion.github.io\u002F)\n- arXiv 2024.07, Unifying 3D Representation and Control of Diverse Robots with a Single Camera, [arXiv](https:\u002F\u002Farxiv.org\u002Fabs\u002F2407.08722)\n- arXiv 2024.07, Continuous Control with **Coarse-to-fine Reinforcement Learning**, [Website](https:\u002F\u002Fyounggyo.me\u002Fcqn\u002F)\n- arXiv 2024.07, **BiGym**: A Demo-Driven Mobile Bi-Manual Manipulation Benchmark, [Website](https:\u002F\u002Fchernyadev.github.io\u002Fbigym\u002F)\n- arXiv 2024.07, **Generative Image as Action Models**, [Website](https:\u002F\u002Fgenima-robot.github.io\u002F)\n- arXiv 2024.07, **Omnigrasp**: Grasping Diverse Objects with Simulated Humanoids, [Website](https:\u002F\u002Fwww.zhengyiluo.com\u002FOmnigrasp-Site\u002F)\n- RSS 2024, **RoboPack**: Learning Tactile-Informed Dynamics Models for Dense Packing, [Website](https:\u002F\u002Frobo-pack.github.io\u002F)\n- arXiv 2024.07, **EquiBot**: SIM(3)-Equivariant Diffusion Policy for Generalizable and Data Efficient Learning, [Website](https:\u002F\u002Fequi-bot.github.io\u002F)\n- arXiv 2024.07, **Sparse Diffusion Policy**: A Sparse, Reusable, and Flexible Policy for Robot Learning, [arXiv](https:\u002F\u002Farxiv.org\u002Fabs\u002F2407.01531) \u002F [Website](https:\u002F\u002Fforrest-110.github.io\u002Fsparse_diffusion_policy\u002F)\n- arXiv 2024.07, **Open-TeleVision**: Teleoperation with 
Immersive Active Visual Feedback, [Website](https:\u002F\u002Frobot-tv.github.io\u002F)\n- SIGGRAPH 2023, **3DShape2VecSet**: A 3D Shape Representation for Neural Fields and Generative Diffusion Models, [arXiv](https:\u002F\u002Farxiv.org\u002Fabs\u002F2301.11445)\n- SIGGRAPH 2024 Best Paper Honorable Mention, CLAY: A Controllable Large-scale Generative Model for Creating High-quality 3D Assets, [Website](https:\u002F\u002Fsites.google.com\u002Fview\u002Fclay-3dlm)\n- arXiv 2024.07, **UnSAM**: Segment Anything without Supervision, [Github](https:\u002F\u002Fgithub.com\u002Ffrank-xwang\u002FUnSAM)\n- arXiv 2024.06, **Dreamitate**: Real-World Visuomotor Policy Learning via Video Generation, [Website](https:\u002F\u002Fdreamitate.cs.columbia.edu\u002F)\n- CVPR 2024 best paper, Generative Image Dynamics, [Website](https:\u002F\u002Fgenerative-dynamics.github.io\u002F)\n- CVPR 2024 highlight, **XCube**: Large-Scale 3D Generative Modeling using Sparse Voxel Hierarchies, [Website](https:\u002F\u002Fresearch.nvidia.com\u002Flabs\u002Ftoronto-ai\u002Fxcube\u002F)\n- CVPR 2024 oral, **DSINE**: Rethinking Inductive Biases for Surface Normal Estimation, [Website](https:\u002F\u002Fbaegwangbin.github.io\u002FDSINE\u002F)\n- arXiv 2024.06, **An Image is Worth More Than 16x16 Patches**: Exploring Transformers on Individual Pixels, [arXiv](https:\u002F\u002Farxiv.org\u002Fabs\u002F2406.09415)\n- arXiv 2024.06, **BAKU**: An Efficient Transformer for Multi-Task Policy Learning, [Website](https:\u002F\u002Fbaku-robot.github.io\u002F)\n- CVPR 2024 highlight, Image Neural Field Diffusion Models, [Website](https:\u002F\u002Fyinboc.github.io\u002Finfd\u002F)\n- RSS 2024, **MPI**: Learning Manipulation by Predicting Interaction, [Website](https:\u002F\u002Fopendrivelab.github.io\u002Fmpi.github.io\u002F)\n- arXiv 2024.05, **Model-based Diffusion** for Trajectory Optimization, [Website](https:\u002F\u002Flecar-lab.github.io\u002Fmbd\u002F)\n- RSS 2024, **RoboCasa**: Large-Scale Simulation of Everyday Tasks for Generalist Robots, [Website](https:\u002F\u002Frobocasa.ai\u002F)\n- CVPR 2024, **OmniGlue**: Generalizable Feature Matching with Foundation Model Guidance, [Website](https:\u002F\u002Fhwjiang1510.github.io\u002FOmniGlue\u002F)\n- arXiv 2024.05, **Pandora**: Towards General World Model with Natural Language Actions and Video States, [Website](https:\u002F\u002Fworld-model.maitrix.org\u002F)\n- arXiv 2024.05, **Images that Sound**: Composing Images and Sounds on a Single Canvas, [Website](https:\u002F\u002Fificl.github.io\u002Fimages-that-sound\u002F)\n- SIGGRAPH 2024, **Text-to-Vector Generation** with Neural Path Representation, [Website](https:\u002F\u002Fintchous.github.io\u002FT2V-NPR\u002F)\n- arXiv 2024.03, **GeoWizard**: Unleashing the Diffusion Priors for 3D Geometry Estimation from a Single Image, [Website](https:\u002F\u002Ffuxiao0719.github.io\u002Fprojects\u002Fgeowizard\u002F)\n- arXiv 2024.05, **Toon3D**: Seeing Cartoons from a New Perspective, [Website](https:\u002F\u002Ftoon3d.studio\u002F)\n- arXiv 2024.05, **TRANSIC**: Sim-to-Real Policy Transfer by Learning from Online Correction, [Website](https:\u002F\u002Ftransic-robot.github.io\u002F)\n- RSS 2024, **Natural Language** Can Help Bridge the Sim2Real Gap, [arXiv](https:\u002F\u002Farxiv.org\u002Fabs\u002F2405.10020)\n- ICML 2024, The **Platonic Representation** Hypothesis, [arXiv](https:\u002F\u002Farxiv.org\u002Fabs\u002F2405.07987)\n- arXiv 2024.05, **SPIN**: Simultaneous Perception, Interaction and Navigation, 
[Website](https:\u002F\u002Fspin-robot.github.io\u002F)\n- RSS 2024, **Consistency Policy**: Accelerated Visuomotor Policies via Consistency Distillation, [Website](https:\u002F\u002Fconsistency-policy.github.io\u002F)\n- arXiv 2024.05, **Humanoid Parkour** Learning, [Website](https:\u002F\u002Fhumanoid4parkour.github.io\u002F)\n- arXiv 2024.05, Evaluating Real-World Robot Manipulation Policies in Simulation, [Website](https:\u002F\u002Fsimpler-env.github.io\u002F)\n- arXiv 2024.05, **ScrewMimic**: Bimanual Imitation from Human Videos with Screw Space Projection, [Website](https:\u002F\u002Frobin-lab.cs.utexas.edu\u002FScrewMimic\u002F)\n- arXiv 2024.04, **DiffuseLoco**: Real-Time Legged Locomotion Control with Diffusion from Offline Datasets, [arXiv](https:\u002F\u002Farxiv.org\u002Fabs\u002F2404.19264)\n- arXiv 2024.05, **DrEureka**: Language Model Guided Sim-To-Real Transfer, [Website](https:\u002F\u002Feureka-research.github.io\u002Fdr-eureka\u002F)\n- arXiv 2024.05, Customizing Text-to-Image Models with a Single Image Pair, [Website](https:\u002F\u002Fpaircustomization.github.io\u002F)\n- arXiv 2024.05, **SATO**: Stable Text-to-Motion Framework, [arXiv](https:\u002F\u002Farxiv.org\u002Fabs\u002F2405.01461)\n- ICRA 2024, Learning Force Control for Legged Manipulation, [arXiv](https:\u002F\u002Farxiv.org\u002Fabs\u002F2405.01402)\n- arXiv 2024.05, **IntervenGen**: Interventional Data Generation for Robust and Data-Efficient Robot Imitation Learning, [arXiv](https:\u002F\u002Farxiv.org\u002Fabs\u002F2405.01472)\n- arXiv 2024.05, **Track2Act**: Predicting Point Tracks from Internet Videos enables Diverse Zero-shot Robot Manipulation, [arXiv](https:\u002F\u002Farxiv.org\u002Fabs\u002F2405.01527)\n- arXiv 2024.04, **KAN**: Kolmogorov-Arnold Networks, [arXiv](https:\u002F\u002Farxiv.org\u002Fabs\u002F2404.19756)\n- RSS 2023, **IndustReal**: Transferring Contact-Rich Assembly Tasks from Simulation to Reality, [Website](https:\u002F\u002Fsites.google.com\u002Fnvidia.com\u002Findustreal)\n- arXiv 2024.04, Editable Image Elements for Controllable Synthesis, [Website](https:\u002F\u002Fjitengmu.github.io\u002FEditable_Image_Elements\u002F)\n- arXiv 2024.04, **EgoPet**: Egomotion and Interaction Data from an Animal's Perspective, [Website](https:\u002F\u002Fwww.amirbar.net\u002Fegopet\u002F)\n- SIGGRAPH 2023, **OctFormer**: Octree-based Transformers for 3D Point Clouds, [Website](https:\u002F\u002Fwang-ps.github.io\u002Foctformer.html)\n- arXiv 2024.04, **Clio**: Real-time Task-Driven Open-Set 3D Scene Graphs, [arXiv](https:\u002F\u002Farxiv.org\u002Fabs\u002F2404.13696)\n- ICCV 2023, Canonical Factors for Hybrid Neural Fields, [Website](https:\u002F\u002Fbrentyi.github.io\u002Ftilted\u002F)\n- arXiv 2024.04, **HATO**: Learning Visuotactile Skills with Two Multifingered Hands, [Website](https:\u002F\u002Ftoruowo.github.io\u002Fhato\u002F)\n- arXiv 2024.04, **SpringGrasp**: Synthesizing Compliant Dexterous Grasps under Shape Uncertainty, [Website](https:\u002F\u002Fstanford-tml.github.io\u002FSpringGrasp\u002F)\n- ICRA 2024 workshop, Object-Aware **Gaussian Splatting for Robotic Manipulation**, [OpenReview](https:\u002F\u002Fopenreview.net\u002Fforum?id=gdRI43hDgo)\n- arXiv 2024.04, **PhysDreamer**: Physics-Based Interaction with 3D Objects via Video Generation, [Website](https:\u002F\u002Fphysdreamer.github.io\u002F)\n- arXiv 2015.09, **MPPI**: Model Predictive Path Integral Control using Covariance Variable Importance Sampling, [arXiv](https:\u002F\u002Farxiv.org\u002Fabs\u002F1509.01149)\n- 
arXiv 2023.07, Sampling-based Model Predictive Control Leveraging Parallelizable Physics Simulations, [arXiv](https:\u002F\u002Farxiv.org\u002Fabs\u002F2307.09105) \u002F [Github](https:\u002F\u002Fgithub.com\u002Ftud-airlab\u002Fmppi-isaac)\n- arXiv 2024.04, **BLINK**: Multimodal Large Language Models Can See but Not Perceive, [Website](https:\u002F\u002Fzeyofu.github.io\u002Fblink\u002F)\n- arXiv 2024.04, **Factorized Diffusion**: Perceptual Illusions by Noise Decomposition, [Website](https:\u002F\u002Fdangeng.github.io\u002Ffactorized_diffusion\u002F)\n- CVPR 2024, Probing the 3D Awareness of Visual Foundation Models, [arXiv](https:\u002F\u002Farxiv.org\u002Fabs\u002F2404.08636)\n- ICCV 2019, **Neural-Guided RANSAC**: Learning Where to Sample Model Hypotheses, [arXiv](https:\u002F\u002Farxiv.org\u002Fabs\u002F1905.04132)\n- arXiv 2024.04, **QuasiSim**: Parameterized Quasi-Physical Simulators for Dexterous Manipulations Transfer, [Website](https:\u002F\u002Fmeowuu7.github.io\u002FQuasiSim\u002F)\n- arXiv 2024.04, **Policy-Guided Diffusion**, [arXiv](https:\u002F\u002Farxiv.org\u002Fabs\u002F2404.06356) \u002F [Github](https:\u002F\u002Fgithub.com\u002FEmptyJackson\u002Fpolicy-guided-diffusion)\n- RoboSoft 2024, Body Design and Gait Generation of **Chair-Type Asymmetrical Tripedal** Low-rigidity Robot, [Website](https:\u002F\u002Fshin0805.github.io\u002Fchair-type-tripedal-robot\u002F)\n- CVPR 2024 oral, **MicKey**: Matching 2D Images in 3D: Metric Relative Pose from Metric Correspondences, [Website](https:\u002F\u002Fnianticlabs.github.io\u002Fmickey\u002F)\n- arXiv 2024.04, **ZeST**: Zero-Shot Material Transfer from a Single Image, [Website](https:\u002F\u002Fttchengab.github.io\u002Fzest\u002F)\n- arXiv 2024.03, **Keypoint Action Tokens** Enable In-Context Imitation Learning in Robotics, [Website](https:\u002F\u002Fwww.robot-learning.uk\u002Fkeypoint-action-tokens)\n- arXiv 2024.04, Reconstructing **Hand-Held Objects** in 3D, [arXiv](https:\u002F\u002Farxiv.org\u002Fabs\u002F2404.06507)\n- ICRA 2024, **Actor-Critic Model Predictive Control**, [arXiv](https:\u002F\u002Farxiv.org\u002Fabs\u002F2306.09852)\n- arXiv 2024.04, Finding Visual Task Vectors, [arXiv](https:\u002F\u002Farxiv.org\u002Fabs\u002F2404.05729)\n- NeurIPS 2022, Visual Prompting via **Image Inpainting**, [arXiv](https:\u002F\u002Farxiv.org\u002Fabs\u002F2209.00647)\n- CVPR 2024 highlight, **SpatialTracker**: Tracking Any 2D Pixels in 3D Space, [Website](https:\u002F\u002Fhenry123-boy.github.io\u002FSpaTracker\u002F)\n- CVPR 2024, **NeRF2Physics**: Physical Property Understanding from Language-Embedded Feature Fields, [Website](https:\u002F\u002Fajzhai.github.io\u002FNeRF2Physics\u002F)\n- CVPR 2024, **Scaling Laws of Synthetic Images** for Model Training ... 
for Now, [arXiv](https:\u002F\u002Farxiv.org\u002Fabs\u002F2312.04567)\n- CVPR 2024, A Vision Check-up for Language Models, [arXiv](https:\u002F\u002Farxiv.org\u002Fabs\u002F2401.01862)\n- CVPR 2024, **GenH2R**: Learning Generalizable Human-to-Robot Handover via Scalable Simulation, Demonstration, and Imitation, [Website](https:\u002F\u002Fgenh2r.github.io\u002F)\n- arXiv 2024.04, **PreAfford**: Universal Affordance-Based Pre-Grasping for Diverse Objects and Environments, [Website](https:\u002F\u002Fair-discover.github.io\u002FPreAfford\u002F)\n- CVPR 2024, **Lift3D**: Zero-Shot Lifting of Any 2D Vision Model to 3D, [Website](https:\u002F\u002Fmukundvarmat.github.io\u002FLift3D\u002F)\n- arXiv 2024.03, **LocoMan**: Advancing Versatile Quadrupedal Dexterity with Lightweight Loco-Manipulators, [arXiv](https:\u002F\u002Farxiv.org\u002Fabs\u002F2403.18197)\n- arXiv 2024.03, Leveraging **Symmetry** in RL-based Legged Locomotion Control, [arXiv](https:\u002F\u002Farxiv.org\u002Fabs\u002F2403.17320)\n- arXiv 2024.03, **RoboDuet**: A Framework Affording Mobile-Manipulation and Cross-Embodiment, [arXiv](https:\u002F\u002Farxiv.org\u002Fabs\u002F2403.17367)\n- arXiv 2024.03, Imitation Bootstrapped Reinforcement Learning, [arXiv](https:\u002F\u002Farxiv.org\u002Fabs\u002F2311.02198)\n- arXiv 2024.03, **Visual Whole-Body Control** for Legged Loco-Manipulation, [arXiv](https:\u002F\u002Farxiv.org\u002Fabs\u002F2403.16967)\n- arXiv 2024.03, **S2**: When Do We Not Need Larger Vision Models? [arXiv](https:\u002F\u002Farxiv.org\u002Fabs\u002F2403.13043)\n- ICCV 2021, **DPT**: Vision Transformers for Dense Prediction, [arXiv](https:\u002F\u002Farxiv.org\u002Fabs\u002F2103.13413)\n- arXiv 2024.03, **GRM**: Large Gaussian Reconstruction Model for Efficient 3D Reconstruction and Generation, [Website](https:\u002F\u002Fjustimyhxu.github.io\u002Fprojects\u002Fgrm\u002F)\n- arXiv 2024.03, **MVSplat**: Efficient 3D Gaussian Splatting from Sparse Multi-View Images, [Website](https:\u002F\u002Fdonydchen.github.io\u002Fmvsplat\u002F)\n- arXiv 2024.03, **LiFT**: A Surprisingly Simple Lightweight Feature Transform for Dense ViT Descriptors, [Website](https:\u002F\u002Fwww.cs.umd.edu\u002F~sakshams\u002FLiFT\u002F)\n- SIGGRAPH 2023, **VET**: Visual Error Tomography for Point Cloud Completion and High-Quality Neural Rendering, [Github](https:\u002F\u002Fgithub.com\u002Flfranke\u002Fvet)\n- arXiv 2024.03, On **Pretraining Data Diversity** for Self-Supervised Learning, [arXiv](https:\u002F\u002Farxiv.org\u002Fabs\u002F2403.13808)\n- arXiv 2024.03, **FeatUp**: A Model-Agnostic Framework for Features at Any Resolution, [arXiv](https:\u002F\u002Farxiv.org\u002Fabs\u002F2403.10516) \u002F [Github](https:\u002F\u002Fgithub.com\u002Fmhamilton723\u002FFeatUp)\n- arXiv 2024.03, **Vid2Robot**: End-to-end Video-conditioned Policy Learning with Cross-Attention Transformers, [arXiv](https:\u002F\u002Farxiv.org\u002Fabs\u002F2403.12943)\n- arXiv 2024.03, **Yell At Your Robot**: Improving On-the-Fly from Language Corrections, [arXiv](https:\u002F\u002Farxiv.org\u002Fabs\u002F2403.12910)\n- arXiv 2024.03, **DROID**: A Large-Scale In-the-Wild Robot Manipulation Dataset, [Website](https:\u002F\u002Fdroid-dataset.github.io\u002F)\n- ICLR 2024 oral, **Ghost on the Shell**: An Expressive Representation of General 3D Shapes, [Website](https:\u002F\u002Fgshell3d.github.io\u002F)\n- arXiv 2024.03, **HumanoidBench**: Simulated Humanoid Benchmark for Whole-Body Locomotion and Manipulation, 
[Website](https:\u002F\u002Fsferrazza.cc\u002Fhumanoidbench_site\u002F)\n- arXiv 2024.03, **PaperBot**: Learning to Design Real-World Tools Using Paper, [arXiv](https:\u002F\u002Farxiv.org\u002Fabs\u002F2403.09566)\n- arXiv 2024.03, **GaussianGrasper**: 3D Language Gaussian Splatting for Open-vocabulary Robotic Grasping, [arXiv](https:\u002F\u002Farxiv.org\u002Fabs\u002F2403.09637)\n- arXiv 2024.03, A Decade's Battle on **Dataset Bias**: Are We There Yet? [arXiv](https:\u002F\u002Farxiv.org\u002Fabs\u002F2403.08632)\n- arXiv 2024.03, **ManiGaussian**: Dynamic Gaussian Splatting for Multi-task Robotic Manipulation, [arXiv](https:\u002F\u002Farxiv.org\u002Fabs\u002F2403.08321)\n- arXiv 2024.03, Learning **Generalizable Feature Fields** for Mobile Manipulation, [arXiv](https:\u002F\u002Farxiv.org\u002Fabs\u002F2403.07563)\n- arXiv 2024.03, **DexCap**: Scalable and Portable Mocap Data Collection System for Dexterous Manipulation, [arXiv](https:\u002F\u002Farxiv.org\u002Fabs\u002F2403.07788)\n- arXiv 2024.03, **TeleMoMa**: A Modular and Versatile Teleoperation System for Mobile Manipulation, [arXiv](https:\u002F\u002Farxiv.org\u002Fabs\u002F2403.07869)\n- arXiv 2024.03, **OPEN TEACH**: A Versatile Teleoperation System for Robotic Manipulation, [arXiv](https:\u002F\u002Farxiv.org\u002Fabs\u002F2403.07870)\n- CVPR 2020 oral, **SuperGlue**: Learning Feature Matching with Graph Neural Networks, [Github](https:\u002F\u002Fgithub.com\u002Fmagicleap\u002FSuperGluePretrainedNetwork)\n- ICRA 2024, Learning to walk in confined spaces using 3D representation, [arXiv](https:\u002F\u002Farxiv.org\u002Fabs\u002F2403.00187)\n- CVPR 2024, **Hierarchical Diffusion Policy** for Kinematics-Aware Multi-Task Robotic Manipulation, [arXiv](https:\u002F\u002Farxiv.org\u002Fabs\u002F2403.03890) \u002F [Website](https:\u002F\u002Fyusufma03.github.io\u002Fprojects\u002Fhdp\u002F)\n- arXiv 2024.03, Reconciling Reality through Simulation: A **Real-to-Sim-to-Real** Approach for Robust Manipulation, [Website](https:\u002F\u002Freal-to-sim-to-real.github.io\u002FRialTo\u002F)\n- ICRA 2024, **Dexterous Legged Locomotion** in Confined 3D Spaces with Reinforcement Learning, [arXiv](https:\u002F\u002Farxiv.org\u002Fabs\u002F2403.03848)\n- arXiv 2024.03, **MOKA**: Open-Vocabulary Robotic Manipulation through Mark-Based Visual Prompting, [Website](https:\u002F\u002Fmoka-manipulation.github.io\u002F)\n- arXiv 2024.03, **VQ-BeT**: Behavior Generation with Latent Actions, [arXiv](https:\u002F\u002Farxiv.org\u002Fabs\u002F2403.03181) \u002F [Website](https:\u002F\u002Fsjlee.cc\u002Fvq-bet\u002F)\n- **Humanoid-Gym**: Reinforcement Learning for Humanoid Robot with Zero-Shot Sim2Real Transfer, [Website](https:\u002F\u002Fsites.google.com\u002Fview\u002Fhumanoid-gym\u002F)\n- arXiv 2024.03, Twisting Lids Off with Two Hands, [Website](https:\u002F\u002Ftoruowo.github.io\u002Fbimanual-twist\u002F)\n- ICLR 2023 spotlight, Multi-skill Mobile Manipulation for Object Rearrangement, [Github](https:\u002F\u002Fgithub.com\u002FJiayuan-Gu\u002Fhab-mobile-manipulation)\n- CVPR 2024, **Gaussian Splatting SLAM**, [Github](https:\u002F\u002Fgithub.com\u002Fmuskie82\u002FMonoGS)\n- arXiv 2024.03, **TripoSR**: Fast 3D Object Reconstruction from a Single Image, [Github](https:\u002F\u002Fgithub.com\u002FVAST-AI-Research\u002FTripoSR)\n- arXiv 2024.03, **Point Cloud Mamba**: Point Cloud Learning via State Space Model, [arXiv](https:\u002F\u002Farxiv.org\u002Fabs\u002F2403.00762)\n- CVPR 2024, Rethinking Few-shot 3D Point Cloud Semantic Segmentation, 
[arXiv](https:\u002F\u002Farxiv.org\u002Fabs\u002F2403.00592)\n- ICLR 2024, Can Transformers Capture Spatial Relations between Objects? [arXiv](https:\u002F\u002Farxiv.org\u002Fabs\u002F2403.00729) \u002F [Website](https:\u002F\u002Fsites.google.com\u002Fview\u002Fspatial-relation)\n- SIGGRAPH Asia 2023, **CamP**: Camera Preconditioning for Neural Radiance Fields, [Website](https:\u002F\u002Fcamp-nerf.github.io\u002F) \u002F [Github](https:\u002F\u002Fgithub.com\u002Fjonbarron\u002Fcamp_zipnerf)\n- arXiv 2024.02, **Extreme Cross-Embodiment Learning** for Manipulation and Navigation, [Website](https:\u002F\u002Fextreme-cross-embodiment.github.io\u002F)\n- CVPR 2024, **DUSt3R**: Geometric 3D Vision Made Easy, [Github](https:\u002F\u002Fgithub.com\u002Fnaver\u002Fdust3r)\n- CVPR 2018 best paper, **TASKONOMY**: Disentangling Task Transfer Learning, [Website](http:\u002F\u002Ftaskonomy.stanford.edu\u002F)\n- arXiv 2024.02, **Mirage**: Cross-Embodiment Zero-Shot Policy Transfer with Cross-Painting, [Website](https:\u002F\u002Frobot-mirage.github.io\u002F)\n- CVPR 2024, **Diffusion 3D Features (Diff3F)**: Decorating Untextured Shapes with Distilled Semantic Features, [Website](https:\u002F\u002Fdiff3f.github.io\u002F)\n- arXiv 2024.02, **Disentangled 3D Scene Generation** with Layout Learning, [Website](https:\u002F\u002Fdave.ml\u002Flayoutlearning\u002F)\n- arXiv 2024.02, **Transparent Image Layer Diffusion** using Latent Transparency, [arXiv](https:\u002F\u002Farxiv.org\u002Fabs\u002F2402.17113)\n- arXiv 2024.02, **Diffusion Meets DAgger**: Supercharging Eye-in-hand Imitation Learning, [Website](https:\u002F\u002Fsites.google.com\u002Fview\u002Fdiffusion-meets-dagger)\n- arXiv 2024.02, Massive Activations in Large Language Models, [Website](https:\u002F\u002Feric-mingjie.github.io\u002Fmassive-activations\u002Findex.html)\n- arXiv 2024.02, Dynamics-Guided Diffusion Model for **Robot Manipulator Design**, [Website](https:\u002F\u002Fdgdm-robot.github.io\u002F)\n- arXiv 2024.02, **Genie**: Generative Interactive Environments, [arXiv](https:\u002F\u002Farxiv.org\u002Fabs\u002F2402.15391) \u002F [Website](https:\u002F\u002Fsites.google.com\u002Fview\u002Fgenie-2024\u002F)\n- arXiv 2024.02, **CyberDemo**: Augmenting Simulated Human Demonstration for Real-World Dexterous Manipulation, [arXiv](https:\u002F\u002Farxiv.org\u002Fabs\u002F2402.14795) \u002F [Website](https:\u002F\u002Fcyber-demo.github.io\u002F)\n- CoRL 2020, **DSR**: Learning 3D Dynamic Scene Representations for Robot Manipulation, [Website](https:\u002F\u002Fdsr-net.cs.columbia.edu\u002F)\n- ICLR 2024 oral, Cameras as Rays: Pose Estimation via **Ray Diffusion**, [Website](https:\u002F\u002Fjasonyzhang.com\u002FRayDiffusion\u002F)\n- arXiv 2024.02, **Pedipulate**: Enabling Manipulation Skills using a Quadruped Robot's Leg, [arXiv](https:\u002F\u002Farxiv.org\u002Fabs\u002F2402.10837)\n- arXiv 2024.02, **LMPC**: Learning to Learn Faster from Human Feedback with Language Model Predictive Control, [Website](https:\u002F\u002Frobot-teaching.github.io\u002F)\n- arXiv 2023.12, **W.A.L.T**: Photorealistic Video Generation with Diffusion Models, [Website](https:\u002F\u002Fwalt-video-diffusion.github.io\u002F)\n- arXiv 2024.02, **Universal Manipulation Interface**: In-The-Wild Robot Teaching Without In-The-Wild Robots, [Website](https:\u002F\u002Fumi-gripper.github.io\u002F)\n- ICCV 2023 oral, **DiT**: Scalable Diffusion Models with Transformers, [Website](https:\u002F\u002Fwww.wpeebles.com\u002FDiT)\n- arXiv 2023.07, Diffusion Models Beat 
GANs on Image Classification, [arXiv](https:\u002F\u002Farxiv.org\u002Fabs\u002F2307.08702)\n- ICCV 2023 oral, **DDAE**: Denoising Diffusion Autoencoders are Unified Self-supervised Learners, [arXiv](https:\u002F\u002Farxiv.org\u002Fabs\u002F2303.09769)\n- arXiv 2023.12, **Mosaic-SDF** for 3D Generative Models, [arXiv](https:\u002F\u002Farxiv.org\u002Fabs\u002F2312.09222) \u002F [Website](https:\u002F\u002Flioryariv.github.io\u002Fmsdf\u002F)\n- arXiv 2024.02, **POCO**: Policy Composition From and For Heterogeneous Robot Learning, [Website](https:\u002F\u002Fliruiw.github.io\u002Fpolicycomp\u002F)\n- ICML 2024 submission, **Latent Graph Diffusion**: A Unified Framework for Generation and Prediction on Graphs, [arXiv](https:\u002F\u002Farxiv.org\u002Fabs\u002F2402.02518)\n- ICLR 2024 spotlight, **AMAGO**: Scalable In-Context Reinforcement Learning for Adaptive Agents, [arXiv](https:\u002F\u002Farxiv.org\u002Fabs\u002F2310.09971)\n- arXiv 2024.02, Offline Actor-Critic Reinforcement Learning Scales to Large Models, [arXiv](https:\u002F\u002Farxiv.org\u002Fabs\u002F2402.05546)\n- arXiv 2024.02, **V-IRL**: Grounding Virtual Intelligence in Real Life, [Website](https:\u002F\u002Fvirl-platform.github.io\u002F)\n- ICRA 2024, **SERL**: A Software Suite for Sample-Efficient Robotic Reinforcement Learning, [Website](https:\u002F\u002Fserl-robot.github.io\u002F)\n- arXiv 2024.01, Generative Expressive Robot Behaviors using Large Language Models, [arXiv](https:\u002F\u002Farxiv.org\u002Fabs\u002F2401.14673)\n- arXiv 2024.01, **pix2gestalt**: Amodal Segmentation by Synthesizing Wholes, [Website](https:\u002F\u002Fgestalt.cs.columbia.edu\u002F)\n- arXiv 2024.01, **DAE**: Deconstructing Denoising Diffusion Models for Self-Supervised Learning, [arXiv](https:\u002F\u002Farxiv.org\u002Fabs\u002F2401.14404)\n- ICLR 2024, **DittoGym**: Learning to Control Soft Shape-Shifting Robots, [Website](https:\u002F\u002Fdittogym.github.io\u002F)\n- arXiv 2024.01, **WildRGB-D**: RGBD Objects in the Wild: Scaling Real-World 3D Object Learning from RGB-D Videos, [Website](https:\u002F\u002Fwildrgbd.github.io\u002F)\n- arXiv 2024.01, **Spatial VLM**: Endowing Vision-Language Models with Spatial Reasoning Capabilities, [Website](https:\u002F\u002Fspatial-vlm.github.io\u002F)\n- arXiv 2024.01, Multimodal **Visual-Tactile Representation** Learning through Self-Supervised Contrastive Pre-Training, [arXiv](https:\u002F\u002Farxiv.org\u002Fabs\u002F2401.12024)\n- arXiv 2024.01, **OK-Robot**: What Really Matters in Integrating Open-Knowledge Models for Robotics, [Website](https:\u002F\u002Fok-robot.github.io\u002F)\n- L4DC 2023, **Agile Catching** with Whole-Body MPC and Blackbox Policy Learning, [arXiv](https:\u002F\u002Farxiv.org\u002Fabs\u002F2306.08205)\n- arXiv 2024.01, **Depth Anything**: Unleashing the Power of Large-Scale Unlabeled Data, [Github](https:\u002F\u002Fgithub.com\u002FLiheYoung\u002FDepth-Anything)\n- arXiv 2024.01, **WorldDreamer**: Towards General World Models for Video Generation via Predicting Masked Tokens, [Website](https:\u002F\u002Fworld-dreamer.github.io\u002F)\n- arXiv 2024.01, **VMamba**: Visual State Space Model, [Github](https:\u002F\u002Fgithub.com\u002FMzeroMiko\u002FVMamba)\n- arXiv 2024.01, **DiffusionGPT**: LLM-Driven Text-to-Image Generation System, [arXiv](https:\u002F\u002Farxiv.org\u002Fabs\u002F2401.10061) \u002F [Website](https:\u002F\u002Fdiffusiongpt.github.io\u002F)\n- arXiv 2023.12, **PhysHOI**: Physics-Based Imitation of Dynamic Human-Object Interaction, 
[Website](https:\u002F\u002Fwyhuai.github.io\u002Fphyshoi-page\u002F)\n- ICLR 2024 oral, **UniSim**: Learning Interactive Real-World Simulators, [OpenReview](https:\u002F\u002Fopenreview.net\u002Fforum?id=sFyTZEqmUY)\n- ICLR 2024 oral, **ASID**: Active Exploration for System Identification and Reconstruction in Robotic Manipulation, [OpenReview](https:\u002F\u002Fopenreview.net\u002Fforum?id=jNR6s6OSBT)\n- ICLR 2024 oral, Mastering **Memory Tasks** with World Models, [OpenReview](https:\u002F\u002Fopenreview.net\u002Fforum?id=1vDArHJ68h)\n- ICLR 2024 oral, Predictive auxiliary objectives in deep RL mimic learning in the brain, [OpenReview](https:\u002F\u002Fopenreview.net\u002Fforum?id=agPpmEgf8C)\n- ICLR 2024 oral, **Is ImageNet worth 1 video?** Learning strong image encoders from 1 long unlabelled video, [arXiv](https:\u002F\u002Farxiv.org\u002Fabs\u002F2310.08584) \u002F [OpenReview](https:\u002F\u002Fopenreview.net\u002Fforum?id=Yen1lGns2o)\n- arXiv 2024.01, **URHand**: Universal Relightable Hands, [Website](https:\u002F\u002Ffrozenburning.github.io\u002Fprojects\u002Furhand\u002F)\n- arXiv 2023.12, **Mamba**: Linear-Time Sequence Modeling with Selective State Spaces, [arXiv](https:\u002F\u002Farxiv.org\u002Fabs\u002F2312.00752) \u002F [Github](https:\u002F\u002Fgithub.com\u002Fstate-spaces\u002Fmamba)\n- ICLR 2022, **S4**: Efficiently Modeling Long Sequences with Structured State Spaces, [arXiv](https:\u002F\u002Farxiv.org\u002Fabs\u002F2111.00396)\n- arXiv 2024.01, **Dr2Net**: Dynamic Reversible Dual-Residual Networks for Memory-Efficient Finetuning, [arXiv](https:\u002F\u002Farxiv.org\u002Fabs\u002F2401.04105)\n- arXiv 2023.12, **3D-LFM**: Lifting Foundation Model, [Website](https:\u002F\u002F3dlfm.github.io\u002F)\n- arXiv 2024.01, **DVT**: Denoising Vision Transformers, [Website](https:\u002F\u002Fjiawei-yang.github.io\u002FDenoisingViT\u002F)\n- arXiv 2024.01, **Open-Vocabulary SAM**: Segment and Recognize Twenty-thousand Classes Interactively, [Website](https:\u002F\u002Fwww.mmlab-ntu.com\u002Fproject\u002Fovsam\u002F) \u002F [Code](https:\u002F\u002Fgithub.com\u002FHarborYuan\u002Fovsam)\n- arXiv 2024.01, **ATM**: Any-point Trajectory Modeling for Policy Learning, [Website](https:\u002F\u002Fxingyu-lin.github.io\u002Fatm\u002F)\n- CVPR 2024 submission, **Learning Vision from Models** Rivals Learning Vision from Data, [arXiv](https:\u002F\u002Farxiv.org\u002Fabs\u002F2312.17742) \u002F [Github](https:\u002F\u002Fgithub.com\u002Fgoogle-research\u002Fsyn-rep-learn)\n- CVPR 2024 submission, Visual Point Cloud Forecasting enables **Scalable Autonomous Driving**, [arXiv](https:\u002F\u002Farxiv.org\u002Fabs\u002F2312.17655) \u002F [Github](https:\u002F\u002Fgithub.com\u002FOpenDriveLab\u002FViDAR)\n- CVPR 2024 submission, **Ponymation**: Learning 3D Animal Motions from Unlabeled Online Videos, [arXiv](https:\u002F\u002Farxiv.org\u002Fabs\u002F2312.13604) \u002F [Website](https:\u002F\u002Fkeqiangsun.github.io\u002Fprojects\u002Fponymation\u002F)\n- CVPR 2024 submission, **V\\***: Guided Visual Search as a Core Mechanism in Multimodal LLMs, [Website](https:\u002F\u002Fvstar-seal.github.io\u002F)\n- NIPS 2021 outstanding paper, Deep Reinforcement Learning at the Edge of the Statistical Precipice, [arXiv](https:\u002F\u002Farxiv.org\u002Fabs\u002F2108.13264) \u002F [Website](https:\u002F\u002Fagarwl.github.io\u002Frliable\u002F)\n- CVPR 2024 submission, Zero-Shot **Metric Depth** with a Field-of-View Conditioned Diffusion Model, 
[Website](https:\u002F\u002Fdiffusion-vision.github.io\u002Fdmd\u002F)\n- ICLR 2023, **Deep Learning on 3D Neural Fields**, [arXiv](https:\u002F\u002Farxiv.org\u002Fabs\u002F2312.13277)\n- CVPR 2024 submission, **Tracking Any Object Amodally**, [Website](https:\u002F\u002Ftao-amodal.github.io\u002F)\n- CVPR 2024 submission, **MobileSAMv2**: Faster Segment Anything to Everything, [Github](https:\u002F\u002Fgithub.com\u002FChaoningZhang\u002FMobileSAM)\n- CVPR 2024 submission, **AnyDoor**: Zero-shot Object-level Image Customization, [Github](https:\u002F\u002Fgithub.com\u002Fdamo-vilab\u002FAnyDoor)\n- CVPR 2024 submission, **Point Transformer V3**: Simpler, Faster, Stronger, [arXiv](https:\u002F\u002Farxiv.org\u002Fabs\u002F2312.10035) \u002F [Github](https:\u002F\u002Fgithub.com\u002FPointcept\u002FPointTransformerV3)\n- CVPR 2024 submission, **Alchemist**: Parametric Control of Material Properties with Diffusion Models, [Website](https:\u002F\u002Fprafullsharma.net\u002Falchemist\u002F)\n- CVPR 2024 submission, **Reconstructing Hands in 3D** with Transformers, [Website](https:\u002F\u002Fgeopavlakos.github.io\u002Fhamer\u002F)\n- CVPR 2024 submission, Language-Informed Visual Concept Learning, [Website](https:\u002F\u002Fai.stanford.edu\u002F~yzzhang\u002Fprojects\u002Fconcept-axes\u002F)\n- CVPR 2024 submission, **RCG**: Self-conditioned Image Generation via Generating Representations, [arXiv](https:\u002F\u002Farxiv.org\u002Fabs\u002F2312.03701) \u002F [Github](https:\u002F\u002Fgithub.com\u002FLTH14\u002Frcg)\n- CVPR 2024 submission, **Describing Differences in Image Sets** with Natural Language, [Website](https:\u002F\u002Funderstanding-visual-datasets.github.io\u002FVisDiff-website\u002F)\n- CVPR 2024 submission, **FaceStudio**: Put Your Face Everywhere in Seconds, [Website](https:\u002F\u002Ficoz69.github.io\u002Ffacestudio\u002F)\n- CVPR 2024 submission, **ImageDream**: Image-Prompt Multi-view Diffusion for 3D Generation, [Website](https:\u002F\u002FImage-Dream.github.io)\n- CVPR 2024 submission, **Fine-grained Controllable Video Generation** via Object Appearance and Context, [Website](https:\u002F\u002Fhhsinping.github.io\u002Ffactor\u002F)\n- CVPR 2024 submission, **AmbiGen**: Generating Ambigrams from Pre-trained Diffusion Model, [Website](https:\u002F\u002Fraymond-yeh.com\u002FAmbiGen\u002F)\n- CVPR 2024 submission, **ReconFusion**: 3D Reconstruction with Diffusion Priors, [Website](https:\u002F\u002Freconfusion.github.io\u002F)\n- CVPR 2024 submission, **Ego-Exo4D**: Understanding Skilled Human Activity from First- and Third-Person Perspectives, [arXiv](https:\u002F\u002Farxiv.org\u002Fabs\u002F2311.18259) \u002F [Website](https:\u002F\u002Fego-exo4d-data.org\u002F)\n- CVPR 2024 submission, **MagicAnimate**: Temporally Consistent Human Image Animation using Diffusion Model, [Github](https:\u002F\u002Fgithub.com\u002Fmagic-research\u002Fmagic-animate)\n- CVPR 2024 submission, **VideoSwap**: Customized Video Subject Swapping with Interactive Semantic Point Correspondence, [Website](https:\u002F\u002Fvideoswap.github.io\u002F)\n- CVPR 2024 submission, **IMProv**: Inpainting-based Multimodal Prompting for Computer Vision Tasks, [Website](https:\u002F\u002Fjerryxu.net\u002FIMProv\u002F)\n- CVPR 2024 submission, Generative **Powers of Ten**, [Website](https:\u002F\u002Fpowers-of-10.github.io\u002F)\n- CVPR 2024 submission, **DiffiT**: Diffusion Vision Transformers for Image Generation, [arXiv](https:\u002F\u002Farxiv.org\u002Fabs\u002F2312.02139)\n- CVPR 2024 submission, Learning 
from **One Continuous Video Stream**, [arXiv](https:\u002F\u002Farxiv.org\u002Fabs\u002F2312.00598)\n- CVPR 2024 submission, **EvE**: Exploiting Generative Priors for Radiance Field Enrichment, [Website](https:\u002F\u002Feve-nvs.github.io\u002F)\n- CVPR 2024 submission, **Oryon**: Open-Vocabulary Object 6D Pose Estimation, [Website](https:\u002F\u002Fjcorsetti.github.io\u002Foryon-website\u002F)\n- CVPR 2024 submission, **Dense Optical Tracking**: Connecting the Dots, [Website](https:\u002F\u002F16lemoing.github.io\u002Fdot\u002F)\n- CVPR 2024 submission, Sequential Modeling Enables Scalable Learning for **Large Vision Models**, [Website](https:\u002F\u002Fyutongbai.com\u002Flvm.html)\n- CVPR 2024 submission, **VideoBooth**: Diffusion-based Video Generation with Image Prompts, [Website](https:\u002F\u002Fvchitect.github.io\u002FVideoBooth-project\u002F)\n- CVPR 2024 submission, **SODA**: Bottleneck Diffusion Models for Representation Learning, [Website](https:\u002F\u002Fsoda-diffusion.github.io\u002F)\n- CVPR 2024 submission, Exploiting **Diffusion Prior** for Generalizable Pixel-Level Semantic Prediction, [Website](https:\u002F\u002Fshinying.github.io\u002Fdmp\u002F)\n- arXiv 2023.11, Initializing Models with Larger Ones, [arXiv](https:\u002F\u002Farxiv.org\u002Fabs\u002F2311.18823)\n- CVPR 2024 submission, **Animate Anyone**: Consistent and Controllable Image-to-Video Synthesis for Character Animation, [Website](https:\u002F\u002Fhumanaigc.github.io\u002Fanimate-anyone\u002F) \u002F [Github](https:\u002F\u002Fgithub.com\u002FHumanAIGC\u002FAnimateAnyone)\n- CVPR 2023 best demo award, **Diffusion Illusions**: Hiding Images in Plain Sight, [Website](https:\u002F\u002Fdiffusionillusions.com\u002F)\n- CVPR 2024 submission, Do text-free diffusion models learn discriminative visual representations? 
[Website](https:\u002F\u002Fmgwillia.github.io\u002Fdiffssl\u002F)\n- CVPR 2024 submission, **Visual Anagrams**: Synthesizing Multi-View Optical Illusions with Diffusion Models, [Website](https:\u002F\u002Fdangeng.github.io\u002Fvisual_anagrams\u002F)\n- NIPS 2023, **Provable Guarantees for Generative Behavior Cloning**: Bridging Low-Level Stability and High-Level Behavior, [OpenReview](https:\u002F\u002Fopenreview.net\u002Fforum?id=PhFVF0gwid)\n- CoRL 2023 best paper, **Distilled Feature Fields** Enable Few-Shot Language-Guided Manipulation, [Website](https:\u002F\u002Ff3rm.github.io\u002F)\n- ICLR 2024 submission, **RLIF**: Interactive Imitation Learning as Reinforcement Learning, [Website](https:\u002F\u002Frlif-page.github.io\u002F) \u002F [arXiv](https:\u002F\u002Farxiv.org\u002Fabs\u002F2311.12996)\n- CVPR 2024 submission, **PIE-NeRF**: Physics-based Interactive Elastodynamics with NeRF, [arXiv](https:\u002F\u002Farxiv.org\u002Fabs\u002F2311.13099)\n- RSS 2018, **Asymmetric Actor Critic** for Image-Based Robot Learning, [arXiv](https:\u002F\u002Farxiv.org\u002Fabs\u002F1710.06542)\n- ICLR 2022, **RvS**: What is Essential for Offline RL via Supervised Learning?, [arXiv](https:\u002F\u002Farxiv.org\u002Fabs\u002F2112.10751)\n- NIPS 2021, **Stochastic Solutions** for Linear Inverse Problems using the Prior Implicit in a Denoiser, [arXiv](https:\u002F\u002Farxiv.org\u002Fabs\u002F2007.13640)\n- ICLR 2024 submission, Consistency Models as a Rich and Efficient Policy Class for Reinforcement Learning, [OpenReview](https:\u002F\u002Fopenreview.net\u002Fforum?id=v8jdwkUNXb) \u002F [arXiv](https:\u002F\u002Farxiv.org\u002Fabs\u002F2309.16984)\n- ICLR 2024 submission, Improved Techniques for Training Consistency Models, [OpenReview](https:\u002F\u002Fopenreview.net\u002Fforum?id=WNzy9bRDvG) \u002F [arXiv](https:\u002F\u002Farxiv.org\u002Fabs\u002F2310.14189)\n- ICLR 2024 submission, **Privileged Sensing** Scaffolds Reinforcement Learning, [OpenReview](https:\u002F\u002Fopenreview.net\u002Fforum?id=EpVe8jAjdx)\n- ICLR 2024 submission, **SafeDiffuser**: Safe Planning with Diffusion Probabilistic Models, [arXiv](https:\u002F\u002Farxiv.org\u002Fabs\u002F2306.00148) \u002F [Website](https:\u002F\u002Fsafediffuser.github.io\u002Fsafediffuser\u002F)\n- NIPS 2023 workshop, Vision-Language Models Provide Promptable Representations for Reinforcement Learning, [OpenReview](https:\u002F\u002Fopenreview.net\u002Fforum?id=AVg8WnI5ba)\n- ICLR 2023 oral, **Extreme Q-Learning**: MaxEnt RL without Entropy, [Website](https:\u002F\u002Fdiv99.github.io\u002FXQL\u002F)\n- ICLR 2024 submission, Generalization in diffusion models arises from geometry-adaptive harmonic representation, [OpenReview](https:\u002F\u002Fopenreview.net\u002Fforum?id=ANvmVS2Yr0)\n- ICLR 2024 submission, **DiffTOP**: Differentiable Trajectory Optimization as a Policy Class for Reinforcement and Imitation Learning, [OpenReview](https:\u002F\u002Fopenreview.net\u002Fforum?id=HL5P4H8eO2)\n- CoRL 2023 best system paper, **RoboCook**: Long-Horizon Elasto-Plastic Object Manipulation with Diverse Tools, [Website](https:\u002F\u002Fhshi74.github.io\u002Frobocook\u002F)\n- CoRL 2023, Learning to Design and Use Tools for Robotic Manipulation, [Website](https:\u002F\u002Frobotic-tool-design.github.io\u002F)\n- arXiv 2023.10, Learning to (Learn at Test Time), [arXiv](https:\u002F\u002Farxiv.org\u002Fabs\u002F2310.13807) \u002F [Github](https:\u002F\u002Fgithub.com\u002Ftest-time-training\u002Fmttt)\n- CoRL 2023 workshop, **FMB**: a Functional 
Manipulation Benchmark for Generalizable Robotic Learning, [OpenReview](https:\u002F\u002Fopenreview.net\u002Fpdf?id=055oRimwls) \u002F [Website](https:\u002F\u002Fsites.google.com\u002Fview\u002Fmanipulationbenchmark)\n- 2023.10, Non-parametric regression for robot learning on manifolds, [arXiv](https:\u002F\u002Farxiv.org\u002Fabs\u002F2310.19561)\n- IROS 2021, Explaining the Decisions of Deep Policy Networks for Robotic Manipulations, [arXiv](https:\u002F\u002Farxiv.org\u002Fabs\u002F2310.19432)\n- ICML 2022, The **primacy bias** in deep reinforcement learning, [arXiv](https:\u002F\u002Farxiv.org\u002Fabs\u002F2205.07802)\n- ICML 2023 oral, The **dormant neuron** phenomenon in deep reinforcement learning, [arXiv](https:\u002F\u002Farxiv.org\u002Fabs\u002F2302.12902)\n- arXiv 2022.04, Simplicial Embeddings in Self-Supervised Learning and Downstream Classification, [arXiv](https:\u002F\u002Farxiv.org\u002Fabs\u002F2204.00616)\n- arXiv 2023.10, **SparseDFF**: Sparse-View Feature Distillation for One-Shot Dexterous Manipulation, [arXiv](https:\u002F\u002Farxiv.org\u002Fabs\u002F2310.16838)\n- arXiv 2023.10, **SAM-CLIP**: Merging Vision Foundation Models towards Semantic and Spatial Understanding, [arXiv](https:\u002F\u002Farxiv.org\u002Fabs\u002F2310.15308)\n- arXiv 2023.10, **TD-MPC2**: Scalable, Robust World Models for Continuous Control, [arXiv](https:\u002F\u002Farxiv.org\u002Fabs\u002F2310.16828) \u002F [Github](https:\u002F\u002Fgithub.com\u002Fnicklashansen\u002Ftdmpc2)\n- arXiv 2023.10, **EquivAct**: SIM(3)-Equivariant Visuomotor Policies beyond Rigid Object Manipulation, [Website](https:\u002F\u002Fequivact.github.io\u002F)\n- NeurIPS 2022, **CodeRL**: Mastering Code Generation through Pretrained Models and Deep Reinforcement Learning, [arXiv](https:\u002F\u002Farxiv.org\u002Fabs\u002F2207.01780) \u002F [Github](https:\u002F\u002Fgithub.com\u002Fsalesforce\u002FCodeRL)\n- arXiv 2023.10, **Robot Fine-Tuning Made Easy**: Pre-Training Rewards and Policies for Autonomous Real-World Reinforcement Learning, [Website](https:\u002F\u002Frobofume.github.io\u002F)\n- CoRL 2023, **SAQ**: Action-Quantized Offline Reinforcement Learning for Robotic Skill Learning, [Website](https:\u002F\u002Fsaqrl.github.io\u002F)\n- arXiv 2023.10, Mastering Robot Manipulation with Multimodal Prompts through Pretraining and Multi-task Fine-tuning, [arXiv](https:\u002F\u002Farxiv.org\u002Fabs\u002F2310.09676)\n- arXiv 2023.03, **PLEX**: Making the Most of the Available Data for Robotic Manipulation Pretraining, [arXiv](https:\u002F\u002Farxiv.org\u002Fabs\u002F2303.08789)\n- arXiv 2023.10, **LAMP**: Learn A Motion Pattern for Few-Shot-Based Video Generation, [Website](https:\u002F\u002Frq-wu.github.io\u002Fprojects\u002FLAMP\u002F)\n- arXiv 2023.10, **4K4D**: Real-Time 4D View Synthesis at 4K Resolution, [Website](https:\u002F\u002Fzju3dv.github.io\u002F4k4d\u002F)\n- arXiv 2023.10, **SuSIE**: Subgoal Synthesis via Image Editing, [Website](https:\u002F\u002Frail-berkeley.github.io\u002Fsusie\u002F)\n- arXiv 2023.10, **Universal Visual Decomposer**: Long-Horizon Manipulation Made Easy, [Website](https:\u002F\u002Fzcczhang.github.io\u002FUVD\u002F)\n- arXiv 2023.10, Learning to Act from Actionless Video through Dense Correspondences, [Website](https:\u002F\u002Fflow-diffusion.github.io\u002F)\n- NIPS 2023, **CEC**: Cross-Episodic Curriculum for Transformer Agents, [Website](https:\u002F\u002Fcec-agent.github.io\u002F)\n- ICLR 2024 submission, **TD-MPC2**: Scalable, Robust World Models for Continuous Control, 
[OpenReview](https:\u002F\u002Fopenreview.net\u002Fforum?id=Oxh5CstDJU)\n- ICLR 2024 submission, **3D Diffuser Actor**: Multi-task 3D Robot Manipulation with Iterative Error Feedback, [Openreview](https:\u002F\u002Fopenreview.net\u002Fforum?id=UnsLGUCynE)\n- ICLR 2024 submission, **NeRFuser**: Diffusion Guided Multi-Task 3D Policy Learning, [Openreview](https:\u002F\u002Fopenreview.net\u002Fforum?id=8GmPLkO0oR)\n- arXiv 2023.10, **Foundation Reinforcement Learning**: towards Embodied Generalist Agents with Foundation Prior Assistance, [arXiv](https:\u002F\u002Farxiv.org\u002Fabs\u002F2310.02635)\n- ICCV 2023, **S3IM**: Stochastic Structural SIMilarity and Its Unreasonable Effectiveness for Neural Fields, [Website](https:\u002F\u002Fmadaoer.github.io\u002Fs3im_nerf\u002F)\n- arXiv 2023.09, **Text2Reward**: Automated Dense Reward Function Generation for Reinforcement Learning, [Website](https:\u002F\u002Ftext-to-reward.github.io\u002F) \u002F [arXiv](https:\u002F\u002Farxiv.org\u002Fabs\u002F2309.11489)\n- ICCV 2023, End2End Multi-View Feature Matching with Differentiable Pose Optimization, [Website](https:\u002F\u002Fbarbararoessle.github.io\u002Fe2e_multi_view_matching\u002F)\n- arXiv 2023.10, Aligning Text-to-Image Diffusion Models with Reward Backpropagation, [Website](https:\u002F\u002Falign-prop.github.io\u002F) \u002F [Github](https:\u002F\u002Fgithub.com\u002Fmihirp1998\u002FAlignProp\u002F)\n- NeurIPS 2023, **EDP**: Efficient Diffusion Policies for Offline Reinforcement Learning, [arXiv](https:\u002F\u002Farxiv.org\u002Fabs\u002F2305.20081) \u002F [Github](https:\u002F\u002Fgithub.com\u002Fsail-sg\u002Fedp)\n- arXiv 2023.09, **See to Touch**: Learning Tactile Dexterity through Visual Incentives, [arXiv](https:\u002F\u002Farxiv.org\u002Fabs\u002F2309.12300) \u002F [Website](https:\u002F\u002Fsee-to-touch.github.io\u002F)\n- RSS 2023, **SAM-RL**: Sensing-Aware Model-Based Reinforcement Learning via Differentiable Physics-Based Simulation and Rendering, [arXiv](https:\u002F\u002Farxiv.org\u002Fabs\u002F2210.15185) \u002F [Website](https:\u002F\u002Fsites.google.com\u002Fview\u002Frss-sam-rl)\n- arXiv 2023.09, **MoDem-V2**: Visuo-Motor World Models for Real-World Robot Manipulation, [arXiv](https:\u002F\u002Farxiv.org\u002Fabs\u002F2309.14236) \u002F [Website](https:\u002F\u002Fsites.google.com\u002Fview\u002Fmodem-v2)\n- arXiv 2023.09, **DreamGaussian**: Generative Gaussian Splatting for Efficient 3D Content Creation, [Website](https:\u002F\u002Fgithub.com\u002Fdreamgaussian\u002Fdreamgaussian) \u002F [Github](https:\u002F\u002Fgithub.com\u002Fdreamgaussian\u002Fdreamgaussian)\n- arXiv 2023.09, **D3Fields**: Dynamic 3D Descriptor Fields for Zero-Shot Generalizable Robotic Manipulation, [Website](https:\u002F\u002Frobopil.github.io\u002Fd3fields\u002F) \u002F [Github](https:\u002F\u002Fgithub.com\u002FWangYixuan12\u002Fd3fields)\n- arXiv 2023.09, **GELLO**: A General, Low-Cost, and Intuitive Teleoperation Framework for Robot Manipulators, [Website](https:\u002F\u002Fwuphilipp.github.io\u002Fgello_site\u002F) \u002F [arXiv](https:\u002F\u002Farxiv.org\u002Fabs\u002F2309.13037)\n- arXiv 2023.09, Human-Assisted Continual Robot Learning with Foundation Models, [Website](https:\u002F\u002Fsites.google.com\u002Fmit.edu\u002Fhalp-robot-learning) \u002F [arXiv](https:\u002F\u002Farxiv.org\u002Fabs\u002F2309.14321)\n- arXiv 2023.09, Robotic Offline RL from Internet Videos via Value-Function Pre-Training, [arXiv](https:\u002F\u002Farxiv.org\u002Fabs\u002F2309.13041) \u002F 
[Website](https:\u002F\u002Fdibyaghosh.com\u002Fvptr\u002F)\n- ICCV 2023, **PointOdyssey**: A Large-Scale Synthetic Dataset for Long-Term Point Tracking, [arXiv](https:\u002F\u002Farxiv.org\u002Fabs\u002F2307.15055) \u002F [Github](https:\u002F\u002Fgithub.com\u002Faharley\u002Fpips2)\n- arXiv 2023, Compositional Foundation Models for Hierarchical Planning, [Website](https:\u002F\u002Fhierarchical-planning-foundation-model.github.io\u002F)\n- RSS 2022 Best Student Paper Award Finalist, **ACID**: Action-Conditional Implicit Visual Dynamics for Deformable Object Manipulation, [Website](https:\u002F\u002Fb0ku1.github.io\u002Facid\u002F)\n- CoRL 2023, **REBOOT**: Reuse Data for Bootstrapping Efficient Real-World Dexterous Manipulation, [arXiv](https:\u002F\u002Farxiv.org\u002Fabs\u002F2309.03322) \u002F [Website](https:\u002F\u002Fsites.google.com\u002Fview\u002Freboot-dexterous)\n- CoRL 2023, An Unbiased Look at Datasets for Visuo-Motor Pre-Training, [OpenReview](https:\u002F\u002Fopenreview.net\u002Fpdf?id=qVc7NWYTRZ6)\n- CoRL 2023, **Q-Transformer**: Scalable Offline Reinforcement Learning via Autoregressive Q-Functions, [OpenReview](https:\u002F\u002Fopenreview.net\u002Fpdf?id=0I3su3mkuL)\n- ICCV 2023 oral, Tracking Everything Everywhere All at Once, [Website](https:\u002F\u002Fomnimotion.github.io\u002F)\n- arXiv 2023.08, **RoboTAP**: Tracking Arbitrary Points for Few-Shot Visual Imitation, [arXiv](https:\u002F\u002Farxiv.org\u002Fabs\u002F2308.15975) \u002F [Website](https:\u002F\u002Farxiv.org\u002Fabs\u002F2308.15975)\n- arXiv 2023.06, **DreamSim**: Learning New Dimensions of Human Visual Similarity using Synthetic Data, [arXiv](https:\u002F\u002Farxiv.org\u002Fabs\u002F2306.09344) \u002F [Website](https:\u002F\u002Fdreamsim-nights.github.io\u002F)\n- ICLR 2023 spotlight, **FluidLab**: A Differentiable Environment for Benchmarking Complex Fluid Manipulation, [Website](https:\u002F\u002Ffluidlab2023.github.io\u002F)\n- arXiv 2023.06, **Seal**: Segment Any Point Cloud Sequences by Distilling Vision Foundation Models, [arXiv](https:\u002F\u002Farxiv.org\u002Fabs\u002F2306.09347) \u002F [Website](https:\u002F\u002Fldkong.com\u002FSeal) \u002F [Github](https:\u002F\u002Fgithub.com\u002Fyouquanl\u002FSegment-Any-Point-Cloud)\n- arXiv 2023.08, **BridgeData V2**: A Dataset for Robot Learning at Scale, [arXiv](https:\u002F\u002Farxiv.org\u002Fabs\u002F2308.12952) \u002F [Website](https:\u002F\u002Frail-berkeley.github.io\u002Fbridgedata\u002F)\n- arXiv 2023.08, **Diffusion with Forward Models**: Solving Stochastic Inverse Problems Without Direct Supervision, [Website](https:\u002F\u002Fdiffusion-with-forward-models.github.io\u002F)\n- ICML 2023, **QRL**: Optimal Goal-Reaching Reinforcement Learning via Quasimetric Learning, [Website](https:\u002F\u002Fwww.tongzhouwang.info\u002Fquasimetric_rl\u002F) \u002F [Github](https:\u002F\u002Fgithub.com\u002Fquasimetric-learning\u002Fquasimetric-rl)\n- arXiv 2023.08, **Dynamic 3D Gaussians**: Tracking by Persistent Dynamic View Synthesis, [Website](https:\u002F\u002Fdynamic3dgaussians.github.io\u002F)\n- SIGGRAPH 2023 best paper, 3D Gaussian Splatting for Real-Time Radiance Field Rendering, [Website](https:\u002F\u002Frepo-sam.inria.fr\u002Ffungraph\u002F3d-gaussian-splatting\u002F)\n- CoRL 2022, In-Hand Object Rotation via Rapid Motor Adaptation, [arXiv](https:\u002F\u002Farxiv.org\u002Fabs\u002F2210.04887) \u002F [Website](https:\u002F\u002Fhaozhi.io\u002Fhora\u002F)\n- ICLR 2019, **DPI-Net**: Learning Particle Dynamics for Manipulating Rigid Bodies, 
Deformable Objects, and Fluids, [Website](http:\u002F\u002Fdpi.csail.mit.edu\u002F)\n- ICLR 2019, **Plan Online, Learn Offline**: Efficient Learning and Exploration via Model-Based Control, [arXiv](https:\u002F\u002Farxiv.org\u002Fabs\u002F1811.01848) \u002F [Website](https:\u002F\u002Fsites.google.com\u002Fview\u002Fpolo-mpc)\n- NeurIPS 2021 spotlight, **NeuS**: Learning Neural Implicit Surfaces by Volume Rendering for Multi-view Reconstruction, [Website](https:\u002F\u002Flingjie0206.github.io\u002Fpapers\u002FNeuS\u002F)\n- ICCV 2023, Unsupervised Compositional Concepts Discovery with Text-to-Image Generative Models, [Website](https:\u002F\u002Fenergy-based-model.github.io\u002Funsupervised-concept-discovery\u002F)\n- AAAI 2018, **FiLM**: Visual Reasoning with a General Conditioning Layer, [arXiv](https:\u002F\u002Farxiv.org\u002Fabs\u002F1709.07871)\n- arXiv 2023.08, **RoboAgent**: Towards Sample Efficient Robot Manipulation with Semantic Augmentations and Action Chunking, [Website](https:\u002F\u002Frobopen.github.io\u002F)\n- ICRA 2000, **RRT-Connect**: An Efficient Approach to Single-Query Path Planning, [PDF](http:\u002F\u002Fwww.cs.cmu.edu\u002Fafs\u002Fandrew\u002Fscs\u002Fcs\u002F15-494-sp13\u002Fnslobody\u002FClass\u002Freadings\u002Fkuffner_icra2000.pdf)\n- CVPR 2017 oral, **Network Dissection**: Quantifying Interpretability of Deep Visual Representations, [arXiv](https:\u002F\u002Farxiv.org\u002Fabs\u002F1704.05796) \u002F [Website](http:\u002F\u002Fnetdissect.csail.mit.edu\u002F)\n- NIPS 2020 (spotlight), Fourier Features Let Networks Learn High Frequency Functions in Low Dimensional Domains, [Website](https:\u002F\u002Fbmild.github.io\u002Ffourfeat\u002Findex.html)\n- ICRA 1992, Planning optimal grasps, [PDF](https:\u002F\u002Fpeople.eecs.berkeley.edu\u002F~jfc\u002Fpapers\u002F92\u002FFCicra92.pdf)\n- RSS 2021, **GIGA**: Synergies Between Affordance and Geometry: 6-DoF Grasp Detection via Implicit Representations, [arXiv](https:\u002F\u002Farxiv.org\u002Fabs\u002F2104.01542) \u002F [Website](https:\u002F\u002Fsites.google.com\u002Fview\u002Frpl-giga2021)\n- ECCV 2022, **StARformer**: Transformer with State-Action-Reward Representations for Visual Reinforcement Learning, [arXiv](https:\u002F\u002Farxiv.org\u002Fabs\u002F2110.06206) \u002F [Github](https:\u002F\u002Fgithub.com\u002Felicassion\u002FStARformer)\n- ICML 2023, **Parallel Q-Learning**: Scaling Off-policy Reinforcement Learning under Massively Parallel Simulation, [arXiv](https:\u002F\u002Farxiv.org\u002Fabs\u002F2307.12983v1) \u002F [Github](https:\u002F\u002Fgithub.com\u002FImprobable-AI\u002Fpql)\n- ECCV 2022, **SeedFormer**: Patch Seeds based Point Cloud Completion with Upsample Transformer, [arXiv](https:\u002F\u002Farxiv.org\u002Fabs\u002F2207.10315) \u002F [Github](https:\u002F\u002Fgithub.com\u002Fhrzhou2\u002Fseedformer)\n- arXiv 2023.07, Waypoint-Based Imitation Learning for Robotic Manipulation, [Website](https:\u002F\u002Flucys0.github.io\u002Fawe\u002F)\n- ICML 2022, **Prompt-DT**: Prompting Decision Transformer for Few-Shot Policy Generalization, [Website](https:\u002F\u002Fmxu34.github.io\u002FPromptDT\u002F)\n- arXiv 2023, Reinforcement Learning from Passive Data via Latent Intentions, [Website](https:\u002F\u002Fdibyaghosh.com\u002Ficvf\u002F)\n- ICML 2023, **RPG**: Reparameterized Policy Learning for Multimodal Trajectory Optimization, [Website](https:\u002F\u002Fhaosulab.github.io\u002FRPG\u002F)\n- ICML 2023, **TGRL**: An Algorithm for Teacher Guided Reinforcement Learning, 
[Website](https:\u002F\u002Fsites.google.com\u002Fview\u002Ftgrl-paper)\n- arXiv 2023.07, **XSkill**: Cross Embodiment Skill Discovery, [Website](https:\u002F\u002Fxskillcorl.github.io\u002F) \u002F [arXiv](https:\u002F\u002Farxiv.org\u002Fabs\u002F2307.09955)\n- ICML 2023, Learning Neural Constitutive Laws: From Motion Observations for Generalizable PDE Dynamics, [Website](https:\u002F\u002Fsites.google.com\u002Fview\u002Fnclaw) \u002F [Github](https:\u002F\u002Fgithub.com\u002FPingchuanMa\u002FNCLaw)\n- arXiv 2023.07, **TokenFlow**: Consistent Diffusion Features for Consistent Video Editing, [Website](https:\u002F\u002Fdiffusion-tokenflow.github.io\u002F)\n- arXiv 2023.07, **PAPR**: Proximity Attention Point Rendering, [Website](https:\u002F\u002Fzvict.github.io\u002Fpapr\u002F) \u002F [arXiv](https:\u002F\u002Farxiv.org\u002Fabs\u002F2307.11086)\n- ICCV 2023, **DreamTeacher**: Pretraining Image Backbones with Deep Generative Models, [Website](https:\u002F\u002Fresearch.nvidia.com\u002Flabs\u002Ftoronto-ai\u002FDreamTeacher\u002F) \u002F [arXiv](https:\u002F\u002Farxiv.org\u002Fabs\u002F2307.07487)\n- RSS 2023, Robust and Versatile Bipedal Jumping Control through Reinforcement Learning, [arXiv](https:\u002F\u002Farxiv.org\u002Fabs\u002F2302.09450)\n- arXiv 2023.07, **Differentiable Blocks World**: Qualitative 3D Decomposition by Rendering Primitives, [Website](https:\u002F\u002Fwww.tmonnier.com\u002FDBW\u002F) \u002F [arXiv](https:\u002F\u002Farxiv.org\u002Fabs\u002F2307.05473)\n- ICLR 2023, **DexDeform**: Dexterous Deformable Object Manipulation with Human Demonstrations and Differentiable Physics, [Website](https:\u002F\u002Fsites.google.com\u002Fview\u002Fdexdeform\u002F) \u002F [Github](https:\u002F\u002Fgithub.com\u002Fsizhe-li\u002FDexDeform)\n- arXiv 2023.07, **RPDiff**: Shelving, Stacking, Hanging: Relational Pose Diffusion for Multi-modal Rearrangement, [Website](https:\u002F\u002Fanthonysimeonov.github.io\u002Frpdiff-multi-modal\u002F) \u002F [Github](https:\u002F\u002Fgithub.com\u002Fanthonysimeonov\u002Frpdiff)\n- arXiv 2023.07, **SpawnNet**: Learning Generalizable Visuomotor Skills from Pre-trained Networks, [Website](https:\u002F\u002Fxingyu-lin.github.io\u002Fspawnnet\u002F) \u002F [Github](https:\u002F\u002Fgithub.com\u002Fjohnrso\u002Fspawnnet)\n- RSS 2023, **DexPBT**: Scaling up Dexterous Manipulation for Hand-Arm Systems with Population Based Training, [Website](https:\u002F\u002Fsites.google.com\u002Fview\u002Fdexpbt) \u002F [arXiv](https:\u002F\u002Farxiv.org\u002Fabs\u002F2305.12127)\n- arXiv 2023.07, **KITE**: Keypoint-Conditioned Policies for Semantic Manipulation, [Website](https:\u002F\u002Fsites.google.com\u002Fview\u002Fkite-website\u002Fhome) \u002F [arXiv](https:\u002F\u002Farxiv.org\u002Fabs\u002F2306.16605)\n- arXiv 2023.06, Detector-Free Structure from Motion, [Website](https:\u002F\u002Fzju3dv.github.io\u002FDetectorFreeSfM\u002F) \u002F [arXiv](https:\u002F\u002Farxiv.org\u002Fabs\u002F2306.15669)\n- arXiv 2023.06, **REFLECT**: Summarizing Robot Experiences for FaiLure Explanation and CorrecTion, [arXiv](https:\u002F\u002Farxiv.org\u002Fabs\u002F2306.15724) \u002F [Website](https:\u002F\u002Froboreflect.github.io\u002F)\n- arXiv 2023.06, **ViNT**: A Foundation Model for Visual Navigation, [Website](https:\u002F\u002Fvisualnav-transformer.github.io\u002F)\n- AAAI 2023, Improving Long-Horizon Imitation Through Instruction Prediction, [arXiv](https:\u002F\u002Farxiv.org\u002Fabs\u002F2306.12554) \u002F 
[Github](https:\u002F\u002Fgithub.com\u002Fjhejna\u002Finstruction-prediction)\n- arXiv 2023.06, **RVT**: Robotic View Transformer for 3D Object Manipulation, [Website](https:\u002F\u002Frobotic-view-transformer.github.io\u002F)\n- arXiv 2023.01, **Ponder**: Point Cloud Pre-training via Neural Rendering, [arXiv](https:\u002F\u002Farxiv.org\u002Fabs\u002F2301.00157)\n- arXiv 2023.06, **SGR**: A Universal Semantic-Geometric Representation for Robotic Manipulation, [arXiv](https:\u002F\u002Farxiv.org\u002Fabs\u002F2306.10474) \u002F [Website](https:\u002F\u002Fsemantic-geometric-representation.github.io\u002F)\n- arXiv 2023.06, Robot Learning with Sensorimotor Pre-training, [arXiv](https:\u002F\u002Farxiv.org\u002Fabs\u002F2306.10007) \u002F [Website](https:\u002F\u002Frobotic-pretrained-transformer.github.io\u002F)\n- arXiv 2023.06, For SALE: State-Action Representation Learning for Deep Reinforcement Learning, [arXiv](https:\u002F\u002Farxiv.org\u002Fabs\u002F2306.02451) \u002F [Github](https:\u002F\u002Fgithub.com\u002Fsfujim\u002FTD7)\n- arXiv 2023.06, **HomeRobot**: Open Vocabulary Mobile Manipulation, [Website](https:\u002F\u002Fovmm.github.io\u002F)\n- arXiv 2023.06, Lifelike Agility and Play on Quadrupedal Robots using Reinforcement Learning and Deep Pre-trained Models, [Website](https:\u002F\u002Ftencent-roboticsx.github.io\u002Flifelike-agility-and-play\u002F)\n- arXiv 2023.06, **TAPIR**: Tracking Any Point with per-frame Initialization and temporal Refinement, [Website](https:\u002F\u002Fdeepmind-tapir.github.io\u002F)\n- CVPR 2017, **I3D**: Quo Vadis, Action Recognition? A New Model and the Kinetics Dataset, [arXiv](https:\u002F\u002Farxiv.org\u002Fabs\u002F1705.07750)\n- arXiv 2023.06, Diffusion Models for Zero-Shot Open-Vocabulary Segmentation, [Website](https:\u002F\u002Fwww.robots.ox.ac.uk\u002F~vgg\u002Fresearch\u002Fovdiff\u002F)\n- arXiv 2023.06, **R-MAE**: Regions Meet Masked Autoencoders, [arXiv](https:\u002F\u002Farxiv.org\u002Fabs\u002F2306.05411) \u002F [Github](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fr-mae)\n- arXiv 2023.05, **Optimus**: Imitating Task and Motion Planning with Visuomotor Transformers, [Website](https:\u002F\u002Fmihdalal.github.io\u002Foptimus\u002F)\n- arXiv 2023.05, Video Prediction Models as Rewards for Reinforcement Learning, [arXiv](https:\u002F\u002Farxiv.org\u002Fabs\u002F2305.14343) \u002F [Website](https:\u002F\u002Fwww.escontrela.me\u002Fviper\u002F)\n- ICML 2023, **VIMA**: General Robot Manipulation with Multimodal Prompts, [Website](https:\u002F\u002Fvimalabs.github.io\u002F) \u002F [Github](https:\u002F\u002Fgithub.com\u002Fvimalabs\u002FVIMA)\n- arXiv 2023.05, **SPRING**: GPT-4 Out-performs RL Algorithms by Studying Papers and Reasoning, [arXiv](https:\u002F\u002Farxiv.org\u002Fabs\u002F2305.15486)\n- arXiv 2023.05, Training Diffusion Models with Reinforcement Learning, [Website](https:\u002F\u002Frl-diffusion.github.io\u002F)\n- arXiv 2023.03, Foundation Models for Decision Making: Problems, Methods, and Opportunities, [arXiv](https:\u002F\u002Farxiv.org\u002Fabs\u002F2303.04129)\n- ICLR 2017, Third-Person Imitation Learning, [arXiv](https:\u002F\u002Farxiv.org\u002Fabs\u002F1703.01703)\n- arXiv 2023.04, **CoTPC**: Chain-of-Thought Predictive Control, [Website](https:\u002F\u002Fzjia.eng.ucsd.edu\u002Fcotpc)\n- CVPR 2023 highlight, **ImageBind**: One embedding to bind them all, [Website](https:\u002F\u002Fimagebind.metademolab.com\u002F) \u002F 
[Github](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002FImageBind)\n- arXiv 2023.05, **Shap-E**: Generating Conditional 3D Implicit Functions, [Github](https:\u002F\u002Fgithub.com\u002Fopenai\u002Fshap-e)\n- arXiv 2023.04, **Track Anything**: Segment Anything Meets Videos, [Github](https:\u002F\u002Fgithub.com\u002Fgaomingqi\u002Ftrack-anything)\n- CVPR 2023, **GLaD**: Generalizing Dataset Distillation via Deep Generative Prior, [Website](https:\u002F\u002Fgeorgecazenavette.github.io\u002Fglad\u002F)\n- CVPR 2022 oral, **RegNeRF**: Regularizing Neural Radiance Fields for View Synthesis from Sparse Inputs, [Website](https:\u002F\u002Fm-niemeyer.github.io\u002Fregnerf\u002F)\n- CVPR 2023, **FreeNeRF**: Improving Few-shot Neural Rendering with Free Frequency Regularization, [Website](https:\u002F\u002Fjiawei-yang.github.io\u002FFreeNeRF\u002F) \u002F [Github](https:\u002F\u002Fgithub.com\u002FJiawei-Yang\u002FFreeNeRF)\n- ICLR 2023 oral, **Decision-Diffuser**: Is Conditional Generative Modeling all you need for Decision-Making?, [Website](https:\u002F\u002Fanuragajay.github.io\u002Fdecision-diffuser\u002F)\n- CVPR 2022, **Depth-supervised NeRF**: Fewer Views and Faster Training for Free, [Website](http:\u002F\u002Fwww.cs.cmu.edu\u002F~dsnerf\u002F)\n- SIGGRAPH Asia 2022, **ENeRF**: Efficient Neural Radiance Fields for Interactive Free-viewpoint Video, [Website](https:\u002F\u002Fzju3dv.github.io\u002Fenerf\u002F)\n- ICML 2023, On the power of foundation models, [arXiv](https:\u002F\u002Farxiv.org\u002Fabs\u002F2211.16327)\n- ICML 2023, **SNeRL**: Semantic-aware Neural Radiance Fields for Reinforcement Learning, [Website](https:\u002F\u002Fsjlee.cc\u002Fsnerl\u002F)\n- ICLR 2023 outstanding paper, Emergence of Maps in the Memories of Blind Navigation Agents, [Openreview](https:\u002F\u002Fopenreview.net\u002Fforum?id=lTt4KjHSsyl)\n- ICLR 2023 outstanding paper honorable mentions, Disentanglement with Biological Constraints: A Theory of Functional Cell Types, [Openreview](https:\u002F\u002Fopenreview.net\u002Fforum?id=9Z_GfhZnGH)\n- CVPR 2023 award candidate, Data-driven Feature Tracking for Event Cameras, [arXiv](https:\u002F\u002Farxiv.org\u002Fabs\u002F2211.12826)\n- CVPR 2023 award candidate, What Can Human Sketches Do for Object Detection?, [Website](http:\u002F\u002Fwww.pinakinathc.me\u002Fsketch-detect\u002F)\n- CVPR 2023 award candidate, Visual Programming for Compositional Visual Reasoning, [Website](https:\u002F\u002Fprior.allenai.org\u002Fprojects\u002Fvisprog)\n- CVPR 2023 award candidate, On Distillation of Guided Diffusion Models, [arXiv](https:\u002F\u002Farxiv.org\u002Fabs\u002F2210.03142)\n- CVPR 2023 award candidate, **DreamBooth**: Fine Tuning Text-to-Image Diffusion Models for Subject-Driven Generation, [Website](https:\u002F\u002Fdreambooth.github.io\u002F)\n- CVPR 2023 award candidate, Planning-oriented Autonomous Driving, [Github](https:\u002F\u002Fgithub.com\u002FOpenDriveLab\u002FUniAD)\n- CVPR 2023 award candidate, Neural Dynamic Image-Based Rendering, [Website](https:\u002F\u002Fdynibar.github.io\u002F)\n- CVPR 2023 award candidate, **MobileNeRF**: Exploiting the Polygon Rasterization Pipeline for Efficient Neural Field Rendering on Mobile Architectures, [Website](https:\u002F\u002Fmobile-nerf.github.io\u002F)\n- CVPR 2023 award candidate, **OmniObject3D**: Large Vocabulary 3D Object Dataset for Realistic Perception, Reconstruction and Generation, [Website](https:\u002F\u002Fomniobject3d.github.io\u002F)\n- CVPR 2023 award candidate, Ego-Body Pose Estimation 
via Ego-Head Pose Estimation, [Website](https:\u002F\u002Flijiaman.github.io\u002Fprojects\u002Fegoego\u002F)\n- CVPR 2023, Affordances from Human Videos as a Versatile Representation for Robotics, [Website](https:\u002F\u002Frobo-affordances.github.io\u002F)\n- CVPR 2022, Neural 3D Video Synthesis from Multi-view Video, [Website](https:\u002F\u002Fneural-3d-video.github.io\u002F)\n- ICCV 2021, **Nerfies**: Deformable Neural Radiance Fields, [Website](https:\u002F\u002Fnerfies.github.io\u002F) \u002F [Github](https:\u002F\u002Fgithub.com\u002Fgoogle\u002Fnerfies)\n- CVPR 2023 highlight, **HyperReel**: High-Fidelity 6-DoF Video with Ray-Conditioned Sampling, [Website](https:\u002F\u002Fhyperreel.github.io\u002F) \u002F [Github](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fhyperreel)\n- arXiv 2022.05, **FlashAttention**: Fast and Memory-Efficient Exact Attention with IO-Awareness, [arXiv](https:\u002F\u002Farxiv.org\u002Fabs\u002F2205.14135) \u002F [Github](https:\u002F\u002Fgithub.com\u002FHazyResearch\u002Fflash-attention)\n- CVPR 2023, **CLIP^2**: Contrastive Language-Image-Point Pretraining from Real-World Point Cloud Data, [arXiv](https:\u002F\u002Farxiv.org\u002Fabs\u002F2303.12417)\n- CVPR 2023, **ULIP**: Learning a Unified Representation of Language, Images, and Point Clouds for 3D Understanding, [arXiv](https:\u002F\u002Farxiv.org\u002Fabs\u002F2212.05171) \u002F [Github](https:\u002F\u002Fgithub.com\u002Fsalesforce\u002FULIP)\n- CVPR 2023, Learning Video Representations from Large Language Models, [Website](https:\u002F\u002Ffacebookresearch.github.io\u002FLaViLa\u002F) \u002F [Github](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002FLaViLa)\n- CVPR 2023, **PLA**: Language-Driven Open-Vocabulary 3D Scene Understanding, [Website](https:\u002F\u002Fdingry.github.io\u002Fprojects\u002FPLA)\n- CVPR 2023, **PartSLIP**: Low-Shot Part Segmentation for 3D Point Clouds via Pretrained Image-Language Models, [arXiv](https:\u002F\u002Farxiv.org\u002Fabs\u002F2212.01558)\n- CVPR 2023, Mask-Free Video Instance Segmentation, [Website](https:\u002F\u002Fwww.vis.xyz\u002Fpub\u002Fmaskfreevis\u002F) \u002F [Github](https:\u002F\u002Fgithub.com\u002FSysCV\u002Fmaskfreevis)\n- arXiv 2023.04, **DINOv2**: Learning Robust Visual Features without Supervision, [arXiv](https:\u002F\u002Farxiv.org\u002Fabs\u002F2304.07193) \u002F [Github](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fdinov2)\n- arXiv 2023.04, Zip-NeRF: Anti-Aliased Grid-Based Neural Radiance Fields, [Website](https:\u002F\u002Fjonbarron.info\u002Fzipnerf\u002F)\n- arXiv 2023.04, SEEM: Segment Everything Everywhere All at Once, [arXiv](https:\u002F\u002Farxiv.org\u002Fabs\u002F2304.06718) \u002F [code](https:\u002F\u002Fgithub.com\u002FUX-Decoder\u002FSegment-Everything-Everywhere-All-At-Once)\n- arXiv 2023.04, Internet Explorer: Targeted Representation Learning on the Open Web, [page](https:\u002F\u002Finternet-explorer-ssl.github.io\u002F) \u002F [code](https:\u002F\u002Fgithub.com\u002Finternet-explorer-ssl\u002Finternet-explorer)\n- arXiv 2023.03, Consistency Models, [code](https:\u002F\u002Fgithub.com\u002Fopenai\u002Fconsistency_models) \u002F [arXiv](https:\u002F\u002Farxiv.org\u002Fabs\u002F2303.01469)\n- arXiv 2023.02, SceneDreamer: Unbounded 3D Scene Generation from 2D Image Collections, [code](https:\u002F\u002Fgithub.com\u002FFrozenBurning\u002FSceneDreamer) \u002F [page](https:\u002F\u002Fscene-dreamer.github.io\u002F)\n- arXiv 2023.04, Generative Agents: Interactive Simulacra of Human Behavior, 
[arXiv](https:\u002F\u002Farxiv.org\u002Fabs\u002F2304.03442)\n- ICLR 2023 notable, NTFields: Neural Time Fields for Physics-Informed Robot Motion Planning, [OpenReview](https:\u002F\u002Fopenreview.net\u002Fforum?id=ApF0dmi1_9K)\n- arXiv 2023, For Pre-Trained Vision Models in Motor Control, Not All Policy Learning Methods are Created Equal, [arXiv](https:\u002F\u002Farxiv.org\u002Fabs\u002F2304.04591)\n- code, Instruct-NeRF2NeRF: Editing 3D Scenes with Instructions, [GitHub](https:\u002F\u002Fgithub.com\u002Fayaanzhaque\u002Finstruct-nerf2nerf)\n- arXiv 2023, Grounding DINO: Marrying DINO with Grounded Pre-Training for Open-Set Object Detection, [arXiv](https:\u002F\u002Farxiv.org\u002Fabs\u002F2303.05499) \u002F [GitHub](https:\u002F\u002Fgithub.com\u002FIDEA-Research\u002FGroundingDINO)\n- arXiv 2023, Zero-1-to-3: Zero-shot One Image to 3D Object, [arXiv](https:\u002F\u002Farxiv.org\u002Fabs\u002F2303.11328)\n- ICLR 2023, Towards Stable Test-Time Adaptation in Dynamic Wild World, [arXiv](https:\u002F\u002Farxiv.org\u002Fabs\u002F2302.12400)\n- CVPR 2023 highlight, Neural Volumetric Memory for Visual Locomotion Control, [Website](https:\u002F\u002Frchalyang.github.io\u002FNVM\u002F)\n- arXiv 2023, Segment Anything, [Website](https:\u002F\u002Fsegment-anything.com\u002F)\n- ICRA 2023, DribbleBot: Dynamic Legged Manipulation in the Wild, [Website](https:\u002F\u002Fgmargo11.github.io\u002Fdribblebot\u002F)\n- arXiv 2023, Alpaca: A Strong, Replicable Instruction-Following Model, [Website](https:\u002F\u002Fcrfm.stanford.edu\u002F2023\u002F03\u002F13\u002Falpaca.html)\n- arXiv 2023, VC-1: Where are we in the search for an Artificial Visual Cortex for Embodied Intelligence?, [Website](https:\u002F\u002Feai-vc.github.io\u002F)\n- ICLR 2022, DroQ: Dropout Q-Functions for Doubly Efficient Reinforcement Learning, [arXiv](https:\u002F\u002Farxiv.org\u002Fabs\u002F2110.02034)\n- arXiv 2023, RoboPianist: A Benchmark for High-Dimensional Robot Control, [Website](https:\u002F\u002Fkzakka.com\u002Frobopianist\u002F)\n- ICLR 2021, DDIM: Denoising Diffusion Implicit Models, [arXiv](https:\u002F\u002Farxiv.org\u002Fabs\u002F2010.02502)\n- arXiv 2023, Your Diffusion Model is Secretly a Zero-Shot Classifier, [Website](https:\u002F\u002Fdiffusion-classifier.github.io\u002F)\n- CVPR 2023 highlight, F2-NeRF: Fast Neural Radiance Field Training with Free Camera Trajectories\n- arXiv 2023, Learning Fine-Grained Bimanual Manipulation with Low-Cost Hardware, [Website](https:\u002F\u002Ftonyzhaozh.github.io\u002Faloha\u002F)\n- RSS 2021, RMA: Rapid Motor Adaptation for Legged Robots, [Website](https:\u002F\u002Fashish-kmr.github.io\u002Frma-legged-robots\u002F)\n- ICCV 2021, Where2Act: From Pixels to Actions for Articulated 3D Objects, [Website](https:\u002F\u002Fcs.stanford.edu\u002F~kaichun\u002Fwhere2act\u002F)\n- CVPR 2019 oral, Semantic Image Synthesis with Spatially-Adaptive Normalization, [GitHub](https:\u002F\u002Fgithub.com\u002FNVlabs\u002FSPADE)\n\n\n# Contact\nIf you have any questions or suggestions, please feel free to contact me at lastyanjieze@gmail.com.\n","# [Yanjie Ze](https:\u002F\u002Fyanjieze.com\u002F) 的论文列表\n\n主题：\n- [人形机器人](https:\u002F\u002Fgithub.com\u002FYanjieZe\u002Fawesome-humanoid-robot-learning)\n- [灵巧操作](topics\u002Fdex_manipulation.md)\n- [3D 机器人学习](topics\u002F3d_robotic_learning.md)\n- [机器人基础模型](topics\u002Frobot_foundation_models.md)\n- [最佳论文](topics\u002Fbest_papers.md)\n\n论文：\n- 2025 年\n  - [RSS 2025](https:\u002F\u002Froboticsconference.org\u002Fprogram\u002Fpapers\u002F)\n  - [CVPR 
2025](https:\u002F\u002Fcvpr.thecvf.com\u002FConferences\u002F2025\u002FAcceptedPapers)\n  - [ICLR 2025 分数](https:\u002F\u002Fpapercopilot.com\u002Fstatistics\u002Ficlr-statistics\u002Ficlr-2025-statistics\u002F)\n- 2024 年\n  - [CoRL 2024](https:\u002F\u002Fopenreview.net\u002Fgroup?id=robot-learning.org\u002FCoRL\u002F2024\u002FConference#tab-accept) \u002F [统计数据](https:\u002F\u002Fpapercopilot.com\u002Fstatistics\u002Fcorl-statistics\u002Fcorl-2024-statistics\u002F)\n  - [RSS 2024](https:\u002F\u002Froboticsconference.org\u002Fprogram\u002Fpapers\u002F)\n  - [3DV 2024](https:\u002F\u002F3dvconf.github.io\u002F2024\u002Faccepted-papers\u002F)\n  - [CVPR 2024](https:\u002F\u002Fcvpr.thecvf.com\u002FConferences\u002F2024\u002FAcceptedPapers) \u002F [交互式概览](https:\u002F\u002Fpublic.tableau.com\u002Fviews\u002FCVPR2024\u002FPaperList?%3AshowVizHome=no)\n  - [ICLR 2024](https:\u002F\u002Fopenreview.net\u002Fgroup?id=ICLR.cc\u002F2024\u002FConference) \u002F [分数](https:\u002F\u002Fguoqiangwei.xyz\u002Ficlr2024_stats\u002Ficlr2024_submissions.html)\n- 2023 年\n  - [NeurIPS 2023](https:\u002F\u002Fneurips.cc\u002Fvirtual\u002F2023\u002Fpapers.html)\n  - [CoRL 2023](https:\u002F\u002Fopenreview.net\u002Fgroup?id=robot-learning.org\u002FCoRL\u002F2023\u002FConference#accept--oral-)\n  - [ICCV 2023](https:\u002F\u002Fopenaccess.thecvf.com\u002FICCV2023?day=all)\n  - [ICML 2023](https:\u002F\u002Ficml.cc\u002Fvirtual\u002F2023\u002Fpapers.html?filter=titles)\n  - [SIGGRAPH 2023](https:\u002F\u002Fkesen.realtimerendering.com\u002Fsig2023.html)\n  - [RSS 2023](https:\u002F\u002Froboticsconference.org\u002F2023\u002Fprogram\u002Fpapers\u002F)\n  - [CVPR 2023](https:\u002F\u002Fcvpr2023.thecvf.com\u002FConferences\u002F2023\u002FAcceptedPapers)\n  - [ICLR 2023](https:\u002F\u002Ficlr.cc\u002Fvirtual\u002F2023\u002Fpapers.html?filter=titles)\n- 2022 年\n  - [NeurIPS 2022](https:\u002F\u002Fneurips.cc\u002Fvirtual\u002F2022\u002Fpapers.html?filter=titles)\n\n# Recent Random Papers\n- RSS 2024, **3D Diffusion Policy**: Generalizable Visuomotor Policy Learning via Simple 3D Representations, [Website](https:\u002F\u002F3d-diffusion-policy.github.io\u002F)\n- [arXiv 2025.12](https:\u002F\u002Farxiv.org\u002Fabs\u002F2512.11130), Fast-FoundationStereo: Real-Time Zero-Shot Stereo Matching\n- [website](https:\u002F\u002Fopenreview.net\u002Fforum?id=Jtjurj7oIJ), Position: Scaling Simulation is Neither Necessary Nor Sufficient for In-the-Wild Robot Manipulation\n- [website](https:\u002F\u002Fwww.mitchel.computer\u002Fxfactor\u002F), True Self-Supervised Novel View Synthesis is Transferable\n- [arXiv 2025.09](https:\u002F\u002Farxiv.org\u002Fabs\u002F2509.04343), Psychologically Enhanced AI Agents\n- [arXiv 2024.08](https:\u002F\u002Farxiv.org\u002Fabs\u002F2408.07855), Complementarity-Free Multi-Contact Modeling and Optimization for Dexterous Manipulation\n- [website](https:\u002F\u002Ftoyotaresearchinstitute.github.io\u002Flbm1\u002F), A Careful Examination of Large Behavior Models for Multitask Dexterous Manipulation\n- [arXiv 2025.07](https:\u002F\u002Farxiv.org\u002Fabs\u002F2507.07969), Reinforcement Learning with Action Chunking\n- arXiv 2025.06, DexWrist: A Robotic Wrist for Constrained and Dynamic Manipulation, [website](https:\u002F\u002Fdexwrist.csail.mit.edu\u002F)\n- arXiv 2025.06, Versatile Loco-Manipulation through Flexible Interlimb Coordination, [website](https:\u002F\u002Frelic-locoman.github.io\u002F)\n- [arXiv 2025.06](https:\u002F\u002Farxiv.org\u002Fabs\u002F2506.01944), Feel the Force: 
Contact-Driven Learning from Humans\n- SIGGRAPH 2025, RenderFormer: Transformer-based Neural Rendering of Triangle Meshes with Global Illumination, [website](https:\u002F\u002Fmicrosoft.github.io\u002Frenderformer\u002F)\n- [Science Robotics](https:\u002F\u002Fwww.science.org\u002Fdoi\u002F10.1126\u002Fscirobotics.ads6192?utm_campaign=SciRobotics&utm_medium=ownedSocial&utm_source=twitter), High-speed control and navigation for quadrupedal robots on complex and discrete terrain\n- [Science Robotics](https:\u002F\u002Fwww.science.org\u002Fdoi\u002F10.1126\u002Fscirobotics.adu3922?utm_campaign=SciRobotics&utm_medium=ownedSocial&utm_source=twitter), Learning coordinated badminton skills for legged manipulators\n- arXiv 2025.04, DexSinGrasp: Learning a Unified Policy for Dexterous Object Singulation and Grasping in Cluttered Environments, [website](https:\u002F\u002Fnus-lins-lab.github.io\u002Fdexsingweb\u002F)\n- arXiv 2025.04, ORCA: Open-Source, Reliable, Cost-Effective, Anthropomorphic Robotic Hand for Uninterrupted Dexterous Task Learning, [website](https:\u002F\u002Fwww.orcahand.com\u002F)\n- arXiv 2025.04, One-Minute Video Generation with Test-Time Training, [website](https:\u002F\u002Ftest-time-training.github.io\u002Fvideo-dit\u002F)\n- arXiv 2025.01, Vid2Sim: Realistic and Interactive Simulation from Video for Urban Navigation, [website](https:\u002F\u002Fmetadriverse.github.io\u002Fvid2sim\u002F)\n- arXiv 2023.05, Data-Free Learning of Reduced-Order Kinematics, [arXiv](https:\u002F\u002Farxiv.org\u002Fabs\u002F2305.03846)\n- arXiv 2024.04, ManipTrans: Efficient Dexterous Bimanual Manipulation Transfer via Residual Learning, [website](https:\u002F\u002Fmaniptrans.github.io\u002F)\n- arXiv 2021.09, Geometric Fabrics: Generalizing Classical Mechanics to Capture the Physics of Behavior, [arXiv](https:\u002F\u002Farxiv.org\u002Fabs\u002F2109.10443)\n- arXiv 2023.10, Grasp Multiple Objects with One Hand, [website](https:\u002F\u002Fmultigrasp.github.io\u002F)\n- arXiv 2025.03, Learning to Play Piano in the Real World, [website](https:\u002F\u002Flasr.org\u002Fresearch\u002Flearning-to-play-piano)\n- arXiv 2025.03, MotionStreamer: Streaming Motion Generation via Diffusion-based Autoregressive Model in Causal Latent Space, [website](https:\u002F\u002Fzju3dv.github.io\u002FMotionStreamer\u002F)\n- arXiv 2025.03, Unified Video Action Model, [website](https:\u002F\u002Funified-video-action-model.github.io\u002F)\n- arXiv 2025.03, Scalable Real2Sim: Physics-Aware Asset Generation Via Robotic Pick-and-Place Setups, [website](https:\u002F\u002Fscalable-real2sim.github.io\u002F)\n- arXiv 2025.03, Discrete-Time Hybrid Automata Learning: Legged Locomotion Meets Skateboarding, [arXiv](https:\u002F\u002Farxiv.org\u002Fabs\u002F2503.01842)\n- arXiv 2025.02, InterMimic: Towards Universal Whole-Body Control for Physics-Based Human-Object Interactions, [arXiv](https:\u002F\u002Farxiv.org\u002Fabs\u002F2502.20390)\n- arXiv 2025.02, LiDAR Registration with Visual Foundation Models, [website](https:\u002F\u002Fvfm-registration.cs.uni-freiburg.de\u002F)\n- arXiv 2025.02, FACTR: Force-Attending Curriculum Training for Contact-Rich Policy Learning, [website](https:\u002F\u002Fjasonjzliu.com\u002Ffactr\u002F)\n- arXiv 2025.02, DemoGen: Synthetic Demonstration Generation for Data-Efficient Visuomotor Policy Learning, [website](https:\u002F\u002Fdemo-generation.github.io\u002F)\n- arXiv 2025.02, AnyDexGrasp: Learning General Dexterous Grasping for Any Hands with Human-level Learning Efficiency, 
[website](https:\u002F\u002Fgraspnet.net\u002Fanydexgrasp\u002F)\n- IEEE Transactions on Human-Machine Systems 2015, The GRASP Taxonomy of Human Grasp Types, [website](https:\u002F\u002Fieeexplore.ieee.org\u002Fdocument\u002F7243327)\n- Science Robotics, Intrinsic sense of touch for intuitive physical human-robot interaction, [website](https:\u002F\u002Fwww.science.org\u002Fstoken\u002Fauthor-tokens\u002FST-2065\u002Ffull)\n- arXiv 2025.02, Bridging the Sim-to-Real Gap for Athletic Loco-Manipulation, [website](https:\u002F\u002Fuan.csail.mit.edu\u002F)\n- arXiv 2025.02, **RigAnything**: Template-Free Autoregressive Rigging for Diverse 3D Assets, [arXiv](https:\u002F\u002Farxiv.org\u002Fabs\u002F2502.09615)\n- arXiv 2025.02, Robot Data Curation with Mutual Information Estimators, [website](http:\u002F\u002Fjoeyhejna.com\u002Fdemonstration-info\u002F)\n- arXiv 2024.05, Learning Force Control for Legged Manipulation, [arXiv](https:\u002F\u002Farxiv.org\u002Fabs\u002F2405.01402)\n- arXiv 2025.02, **TD-M(PC)2**: Improving Temporal Difference MPC Through Policy Constraint, [website](https:\u002F\u002Fdarthutopian.github.io\u002Ftdmpc_square\u002F)\n- arXiv 2025.02, **DexterityGen**: Foundation Controller for Unprecedented Dexterity, [website](https:\u002F\u002Fzhaohengyin.github.io\u002Fdexteritygen\u002F)\n- arXiv 2025.02, Strengthening Generative Robot Policies through Predictive World Modeling, [website](https:\u002F\u002Fcomputationalrobotics.seas.harvard.edu\u002FGPC\u002F)\n- arXiv 2025.01, **CuriousBot**: Interactive Mobile Exploration via Actionable 3D Relational Object Graph, [website](https:\u002F\u002Fbdaiinstitute.github.io\u002Fcuriousbot\u002F)\n- arXiv 2025.01, Improving Vision-Language-Action Model with Online Reinforcement Learning, [arXiv](https:\u002F\u002Farxiv.org\u002Fabs\u002F2501.16664)\n- arXiv 2025.01, **Physics IQ Benchmark**: Do generative video models learn physical principles from watching videos?, [website](https:\u002F\u002Fphysics-iq.github.io\u002F)\n- arXiv 2025.01, **FAST**: Efficient Robot Action Tokenization, [website](https:\u002F\u002Fwww.pi.website\u002Fresearch\u002Ffast)\n- arXiv 2025.01, **DAViD**: Modeling Dynamic Affordance of 3D Objects using Pre-trained Video Diffusion Models, [website](https:\u002F\u002Fsnuvclab.github.io\u002Fdavid\u002F)\n- arXiv 2025.01, Predicting 4D Hand Trajectory from Monocular Videos, [website](https:\u002F\u002Fjudyye.github.io\u002Fhaptic-www\u002F)\n- arXiv 2025.01, Learning to Transfer Human Hand Skills for Robot Manipulations, [website](https:\u002F\u002Frureadyo.github.io\u002FMocapRobot\u002F)\n- arXiv 2025.01, **Beyond Sight**: Finetuning Generalist Robot Policies with Heterogeneous Sensors via Language Grounding, [website](https:\u002F\u002Ffuse-model.github.io\u002F)\n- arXiv 2025.01, **Depth Any Camera**: Zero-Shot Metric Depth Estimation from Any Camera, [website](https:\u002F\u002Fyuliangguo.github.io\u002Fdepth-any-camera\u002F)\n- arXiv 2024.04, **Metric3Dv2**: A Versatile Monocular Geometric Foundation Model for Zero-shot Metric Depth and Surface Normal Estimation, [website](https:\u002F\u002Fjugghm.github.io\u002FMetric3Dv2\u002F)\n- **Cosmos**: World Foundation Model Platform for Physical AI, [website](https:\u002F\u002Fresearch.nvidia.com\u002Flabs\u002Fdir\u002Fcosmos1\u002F) \u002F [github](https:\u002F\u002Fgithub.com\u002FNVIDIA\u002FCosmos)\n- arXiv 2024.12, **ManiBox**: Enhancing Spatial Grasping Generalization via Scalable Simulation Data Generation, 
[arXiv](https:\u002F\u002Farxiv.org\u002Fabs\u002F2411.01850)\n- arXiv 2024.12, **VLABench**: A Large-Scale Benchmark for Language-Conditioned Robotics Manipulation with Long-Horizon Reasoning Tasks, [website](https:\u002F\u002Fvlabench.github.io\u002F)\n- arXiv 2024.12, **Video Prediction Policy**: A Generalist Robot Policy with Predictive Visual Representations, [website](https:\u002F\u002Fvideo-prediction-policy.github.io\u002F)\n- arXiv 2024.12, **RoboMIND**: Benchmark on Multi-embodiment Intelligence Normative Data for Robot Manipulation, [arXiv](https:\u002F\u002Farxiv.org\u002Fabs\u002F2412.13877)\n- arXiv 2024.12, **Towards Generalist Robot Policies**: What Matters in Building Vision-Language-Action Models, [website](https:\u002F\u002Frobovlms.github.io\u002F)\n- arXiv 2024.12, **Genesis**: A Generative and Universal Physics Engine for Robotics and Beyond, [website](https:\u002F\u002Fgenesis-embodied-ai.github.io\u002F)\n- arXiv 2024.10, **articulate-anything**: Automatic Modeling of Articulated Objects via a Vision-Language Foundation Model, [arXiv](https:\u002F\u002Farxiv.org\u002Fabs\u002F2410.13882) \u002F [website](https:\u002F\u002Farticulate-anything.github.io\u002F)\n- arXiv 2024.12, **HandsOnVLM**: Vision-Language Models for Hand-Object Interaction Prediction, [website](https:\u002F\u002Fwww.chenbao.tech\u002Fhandsonvlm\u002F)\n- arXiv 2024.12, **Meta Motivo**: Zero-Shot Whole-Body Humanoid Control via Behavioral Foundation Models, [github](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fmetamotivo)\n- arXiv 2024.12, **illusion3d**: 3D Multiview Illusion with 2D Diffusion Priors, [website](https:\u002F\u002F3d-multiview-illusion.github.io\u002F)\n- arXiv 2024.12, **RLDG**: Robotic Generalist Policy Distillation via Reinforcement Learning, [website](https:\u002F\u002Fgeneralist-distillation.github.io\u002F)\n- arXiv 2024.12, **SOLAMI**: Social Vision-Language-Action Modeling for Immersive Interaction with 3D Autonomous Characters, [website](https:\u002F\u002Fsolami-ai.github.io\u002F)\n- arXiv 2024.12, Reinforcement Learning from **Wild Animal Videos**, [website](https:\u002F\u002Felliotchanesane31.github.io\u002FRLWAV\u002F)\n- arXiv 2024.12, **NaVILA**: Legged Robot Vision-Language-Action Model for Navigation, [arXiv](https:\u002F\u002Farxiv.org\u002Fabs\u002F2412.04453)\n- arXiv 2024.12, **Motion Prompting**: Controlling Video Generation with Motion Trajectories, [website](https:\u002F\u002Fmotion-prompting.github.io\u002F)\n- arXiv 2024.12, **CASHER**: Robot Learning with Super-Linear Scaling, [website](https:\u002F\u002Fcasher-robot-learning.github.io\u002FCASHER\u002F)\n- arXiv 2024.12, **CogACT**: A Foundational Vision-Language-Action Model for Synergizing Cognition and Action in Robotic Manipulation, [website](https:\u002F\u002Fcogact.github.io\u002F)\n- arXiv 2024.11, **CAT4D**: Create Anything in 4D with Multi-View Video Diffusion Models, [website](https:\u002F\u002Fcat-4d.github.io\u002F)\n- arXiv 2024.11, **Generative Omnimatte**: Learning to Decompose Video into Layers, [website](https:\u002F\u002Fgen-omnimatte.github.io\u002F)\n- arXiv 2024.11, Inference-Time Policy Steering through Human Interactions, [arXiv](https:\u002F\u002Farxiv.org\u002Fabs\u002F2411.16627)\n- arXiv 2024.11, **The Matrix**: Infinite-Horizon World Generation with Real-Time Moving Control, [website](https:\u002F\u002Fthematrix1999.github.io\u002F)\n- arXiv 2024.11, Learning-based Trajectory Tracking for Bird-inspired **Flapping-Wing Robots**, 
[arXiv](https:\u002F\u002Farxiv.org\u002Fabs\u002F2411.15130)\n- arXiv 2024.11, **WildLMA**: Long Horizon Loco-MAnipulation in the Wild, [website](https:\u002F\u002Fwildlma.github.io\u002F)\n- SIGGRAPH ASIA 2024, **CBIL**: Collective Behavior Imitation Learning for Fish from Real Videos, [website](https:\u002F\u002Ffrank-zy-dou.github.io\u002Fprojects\u002FCBIL\u002Findex.html)\n- arXiv 2024.11, Learning Time-Optimal and Speed-Adjustable Tactile In-Hand Manipulation, [website](https:\u002F\u002Faidx-lab.org\u002Fmanipulation\u002Fhumanoids24)\n- arXiv 2024.11, Soft Robotic **Dynamic In-Hand Pen Spinning**, [website](https:\u002F\u002Fsoft-spin.github.io\u002F)\n- arXiv 2024.11, **Generative World Explorer**, [website](https:\u002F\u002Fgenerative-world-explorer.github.io\u002F)\n- arXiv 2024.11, **RoboGSim**: A Real2Sim2Real Robotic Gaussian Splatting Simulator, [website](https:\u002F\u002Frobogsim.github.io\u002F)\n- arXiv 2024.11, **Moving Off-the-Grid**: Scene-Grounded Video Representations, [arXiv](https:\u002F\u002Farxiv.org\u002Fabs\u002F2411.05927) \u002F [website](https:\u002F\u002Fmoog-paper.github.io\u002F)\n- arXiv 2024.07, **From Imitation to Refinement** -- Residual RL for Precise Assembly, [arXiv](https:\u002F\u002Farxiv.org\u002Fabs\u002F2407.16677) \u002F [website](https:\u002F\u002Fresidual-assembly.github.io\u002F)\n- arXiv 2024.10, **HIL-SERL**: Precise and Dexterous Robotic Manipulation via Human-in-the-Loop Reinforcement Learning, [website](https:\u002F\u002Fhil-serl.github.io\u002F) \u002F [arXiv](https:\u002F\u002Farxiv.org\u002Fabs\u002F2410.21845)\n- arXiv 2024.10, **DELTA**: Dense Efficient Long-range 3D Tracking for any video, [website](https:\u002F\u002Fsnap-research.github.io\u002FDELTA\u002F)\n- arXiv 2024.10, **π0**: A Vision-Language-Action Flow Model for General Robot Control, [arXiv](https:\u002F\u002Farxiv.org\u002Fabs\u002F2410.24164) \u002F [website](https:\u002F\u002Fwww.physicalintelligence.company\u002Fblog\u002Fpi0)\n- arXiv 2024.10, One Step Diffusion via **Shortcut Models**, [arXiv](https:\u002F\u002Farxiv.org\u002Fabs\u002F2410.12557)\n- arXiv 2024.10, **BUMBLE**: Unifying Reasoning and Acting with Vision-Language Models for Building-wide Mobile Manipulation, [Website](https:\u002F\u002Frobin-lab.cs.utexas.edu\u002FBUMBLE\u002F)\n- arXiv 2024.10, **Run-time Observation Interventions**: Make Vision-Language-Action Models More Visually Robust, [Website](https:\u002F\u002Faasherh.github.io\u002Fbyovla\u002F)\n- arXiv 2024.10, **MonST3R**: A Simple Approach for Estimating Geometry in the Presence of Motion, [Website](https:\u002F\u002Fmonst3r-project.github.io\u002F)\n- arXiv 2024.10, Learning Humanoid Locomotion over Challenging Terrain, [arXiv](https:\u002F\u002Farxiv.org\u002Fabs\u002F2410.03654)\n- arXiv 2024.10, Estimating Body and Hand Motion in an Ego-sensed World, [arXiv](https:\u002F\u002Farxiv.org\u002Fabs\u002F2410.03665)\n- arXiv 2024.09, **Opt2Skill**: Imitating Dynamically-feasible Whole-Body Trajectories for Versatile Humanoid Loco-Manipulation, [arXiv](https:\u002F\u002Farxiv.org\u002Fabs\u002F2409.20514)\n- arXiv 2024.09, **Helpful DoggyBot**: Open-World Object Fetching using Legged Robots and Vision-Language Models, [Website](https:\u002F\u002Fhelpful-doggybot.github.io\u002F)\n- arXiv 2024.09 \u002F CoRL 2024 Oral, **Robot See Robot Do**: Imitating Articulated Object Manipulation with Monocular 4D Reconstruction, [Website](https:\u002F\u002Frobot-see-robot-do.github.io\u002F)\n- arXiv 2024.09, **Full-Order Sampling-Based MPC** for 
Torque-Level Locomotion Control via Diffusion-Style Annealing, [Website](https:\u002F\u002Flecar-lab.github.io\u002Fdial-mpc\u002F)\n- arXiv 2024.09, **Blox-Net**: Generative Design-for-Robot-Assembly using VLM Supervision, Physics Simulation, and A Robot with Reset, [Website](https:\u002F\u002Fbloxnet.org\u002F)\n- arXiv 2024.06, **DigiRL**: Training In-The-Wild Device-Control Agents with Autonomous Reinforcement Learning, [Website](https:\u002F\u002Fdigirl-agent.github.io\u002F)\n- arXiv 2024.09, **ClearDepth**: Enhanced Stereo Perception of Transparent Objects for Robotic Manipulation, [arXiv](https:\u002F\u002Farxiv.org\u002Fabs\u002F2409.08926)\n- arXiv 2024.09, **HOP**: Hand-object interaction pretraining from videos, [Website](https:\u002F\u002Fhgaurav2k.github.io\u002Fhop\u002F)\n- arXiv 2024.09, **AnySkin**: Plug-and-play Skin Sensing for Robotic Touch, [Website](https:\u002F\u002Fany-skin.github.io\u002F)\n- CoRL 2024, Continuously Improving Mobile Manipulation with **Autonomous Real-World RL**, [Website](https:\u002F\u002Fcontinual-mobile-manip.github.io\u002F)\n- arXiv 2024.09, **Neural MP**: A Generalist Neural Motion Planner, [Website](https:\u002F\u002Fmihdalal.github.io\u002Fneuralmotionplanner\u002F)\n- IROS 2024, Learning to **Walk and Fly** with Adversarial Motion Priors, [arXiv](https:\u002F\u002Farxiv.org\u002Fabs\u002F2309.12784)\n- arXiv 2024.09, **Robot Utility Models**: General Policies for Zero-Shot Deployment in New Environments, [Website](https:\u002F\u002Frobotutilitymodels.com\u002F)\n- CoRL 2024, **LucidSim**: Learning Agile Visual Locomotion from Generated Images, [OpenReview](https:\u002F\u002Fopenreview.net\u002Fforum?id=cGswIOxHcN)\n- CoRL 2024, **OKAMI**: Teaching Humanoid Robots Manipulation Skills through Single Video Imitation, [Website](https:\u002F\u002Fopenreview.net\u002Fforum?id=URj5TQTAXM&referrer=%5Bthe%20profile%20of%20Yuke%20Zhu%5D(%2Fprofile%3Fid%3D~Yuke_Zhu1))\n- CoRL 2024, Learning Robotic Locomotion Affordances and **Photorealistic Simulators from Human-Captured Data**, [OpenReview](https:\u002F\u002Fopenreview.net\u002Fforum?id=1TEZ1hiY5m)\n- CoRL 2024, **Object-Centric Dexterous Manipulation** from Human Motion Data, [OpenReview](https:\u002F\u002Fopenreview.net\u002Fforum?id=KAzku0Uyh1)\n- CoRL 2024, **ALOHA Unleashed**: A Simple Recipe for Robot Dexterity, [OpenReview](https:\u002F\u002Fopenreview.net\u002Fforum?id=gvdXE7ikHI)\n- CoRL 2024, **GenDP**: 3D Semantic Fields for Category-Level Generalizable Diffusion Policy, [OpenReview](https:\u002F\u002Fopenreview.net\u002Fforum?id=7wMlwhCvjS)\n- CoRL 2024, **Dynamic 3D Gaussian Tracking** for Graph-Based Neural Dynamics Modeling, [OpenReview](https:\u002F\u002Fopenreview.net\u002Fforum?id=itKJ5uu1gW)\n- CoRL 2024, **D3RoMa**: Disparity Diffusion-based Depth Sensing for Material-Agnostic Robotic Manipulation, [OpenReview](https:\u002F\u002Fopenreview.net\u002Fforum?id=7E3JAys1xO)\n- CoRL 2024, **Action Space Design** in Reinforcement Learning for Robot Motor Skills, [OpenReview](https:\u002F\u002Fopenreview.net\u002Fforum?id=GGuNkjQSrk)\n- CoRL 2024, So You Think You Can Scale Up **Autonomous Robot Data Collection**?, [OpenReview](https:\u002F\u002Fopenreview.net\u002Fforum?id=XrxLGzF0lJ)\n- CoRL 2024, **VISTA**: View-Invariant Policy Learning via Zero-Shot Novel View Synthesis, [arXiv](https:\u002F\u002Farxiv.org\u002Fabs\u002F2409.03685)\n- arXiv 2024.08, **Bidirectional Decoding**: Improving Action Chunking via Closed-Loop Resampling, 
[Website](https:\u002F\u002Fbid-robot.github.io\u002F)\n- arXiv 2024.08, **Unsupervised-to-Online** Reinforcement Learning, [arXiv](https:\u002F\u002Fwww.arxiv.org\u002Fabs\u002F2408.14785)\n- arXiv 2024.08, **SkillMimic**: Learning Reusable Basketball Skills from Demonstrations, [Website](https:\u002F\u002Fingrid789.github.io\u002FSkillMimic\u002F)\n- arXiv 2024.08, **ReKep**: Spatio-Temporal Reasoning of Relational Keypoint Constraints for Robotic Manipulation, [Website](https:\u002F\u002Frekep-robot.github.io\u002F)\n- arXiv 2024.08, In-Context Imitation Learning via Next-Token Prediction, [arXiv](https:\u002F\u002Farxiv.org\u002Fabs\u002F2408.15980)\n- arXiv 2024.08, **GameNGen**: Diffusion Models Are Real-Time Game Engines, [Website](https:\u002F\u002Fgamengen.github.io\u002F)\n- arXiv 2024.08, **Splatt3R**: Zero-shot Gaussian Splatting from Uncalibrated Image Pairs, [Website](https:\u002F\u002Fsplatt3r.active.vision\u002F)\n- arXiv 2024.08, **CrossFormer**: Scaling Cross-Embodied Learning for Manipulation, Navigation, Locomotion, and Aviation, [Website](https:\u002F\u002Fcrossformer-model.github.io\u002F)\n- arXiv 2024.08, **UniT**: Unified Tactile Representation for Robot Learning, [Website](https:\u002F\u002Fzhengtongxu.github.io\u002Funifiedtactile.github.io\u002F)\n- arXiv 2024.08, **Body Transformer**: Leveraging Robot Embodiment for Policy Learning, [arXiv](https:\u002F\u002Farxiv.org\u002Fabs\u002F2408.06316)\n- ICML 2024 oral, **SAPG**: Split and Aggregate Policy Gradients, [Website](https:\u002F\u002Fsapg-rl.github.io\u002F)\n- Humanoids 2006, Dynamic Pen Spinning Using a High-speed Multifingered Hand with High-speed Tactile Sensor\n- IROS 2024, Radiance Fields for Robotic Teleoperation, [arXiv](https:\u002F\u002Farxiv.org\u002Fabs\u002F2407.20194)\n- arXiv 2024.07, Lessons from Learning to **Spin “Pens”**, [Website](https:\u002F\u002Fpenspin.github.io\u002F)\n- arXiv 2024.07, **Flow** as the Cross-Domain Manipulation Interface, [arXiv](https:\u002F\u002Farxiv.org\u002Fabs\u002F2407.15208)\n- RSS 2024, Offline Imitation Learning Through **Graph Search and Retrieval**, [arXiv](https:\u002F\u002Farxiv.org\u002Fabs\u002F2407.15403)\n- CVPR 2024, **HOLD**: Category-agnostic 3D Reconstruction of Interacting Hands and Objects from Video, [Website](https:\u002F\u002Fzc-alexfan.github.io\u002Fhold)\n- CVPR 2023, **ARCTIC**: A Dataset for Dexterous Bimanual Hand-Object Manipulation, [Website](https:\u002F\u002Farctic.is.tue.mpg.de\u002F)\n- SIGGRAPH 2024, **Neural Gaussian Scale-Space Fields**, [Website](https:\u002F\u002Fneural-gaussian-scale-space-fields.mpi-inf.mpg.de\u002F)\n- arXiv 2024.07, A Simulation Benchmark for **Autonomous Racing** with Large-Scale Human Data, [Website](https:\u002F\u002Fassetto-corsa-gym.github.io\u002F)\n- arXiv 2024.07, **From Imitation to Refinement**: Residual RL for Precise Visual Assembly, [Website](https:\u002F\u002Fresidual-assembly.github.io\u002F)\n- arXiv 2024.07, **Shape of Motion**: 4D Reconstruction from a Single Video, [Website](https:\u002F\u002Fshape-of-motion.github.io\u002F)\n- arXiv 2024.07, Unifying 3D Representation and Control of Diverse Robots with a Single Camera, [arXiv](https:\u002F\u002Farxiv.org\u002Fabs\u002F2407.08722)\n- arXiv 2024.07, Continuous Control with **Coarse-to-fine Reinforcement Learning**, [Website](https:\u002F\u002Fyounggyo.me\u002Fcqn\u002F)\n- arXiv 2024.07, **BiGym**: A Demo-Driven Mobile Bi-Manual Manipulation Benchmark, [Website](https:\u002F\u002Fchernyadev.github.io\u002Fbigym\u002F)\n- arXiv 2024.07, 
**Generative Image as Action Models**, [Website](https:\u002F\u002Fgenima-robot.github.io\u002F)\n- arXiv 2024.07, **Omnigrasp**: Grasping Diverse Objects with Simulated Humanoids, [Website](https:\u002F\u002Fwww.zhengyiluo.com\u002FOmnigrasp-Site\u002F)\n- RSS 2024, **RoboPack**: Learning Tactile-Informed Dynamics Models for Dense Packing, [Website](https:\u002F\u002Frobo-pack.github.io\u002F)\n- arXiv 2024.07, **EquiBot**: SIM(3)-Equivariant Diffusion Policy for Generalizable and Data Efficient Learning, [Website](https:\u002F\u002Fequi-bot.github.io\u002F)\n- arXiv 2024.07, **Sparse Diffusion Policy**: A Sparse, Reusable, and Flexible Policy for Robot Learning, [arXiv](https:\u002F\u002Farxiv.org\u002Fabs\u002F2407.01531) \u002F [Website](https:\u002F\u002Fforrest-110.github.io\u002Fsparse_diffusion_policy\u002F)\n- arXiv 2024.07, **Open-TeleVision**: Teleoperation with Immersive Active Visual Feedback, [Website](https:\u002F\u002Frobot-tv.github.io\u002F)\n- SIGGRAPH 2023, **3DShape2VecSet**: A 3D Shape Representation for Neural Fields and Generative Diffusion Models, [arXiv](https:\u002F\u002Farxiv.org\u002Fabs\u002F2301.11445)\n- SIGGRAPH 2024 Best Paper Honorable Mention, CLAY: A Controllable Large-scale Generative Model for Creating High-quality 3D Assets, [Website](https:\u002F\u002Fsites.google.com\u002Fview\u002Fclay-3dlm)\n- arXiv 2024.07, **UnSAM**: Segment Anything without Supervision, [Github](https:\u002F\u002Fgithub.com\u002Ffrank-xwang\u002FUnSAM)\n- arXiv 2024.06, **Dreamitate**: Real-World Visuomotor Policy Learning via Video Generation, [Website](https:\u002F\u002Fdreamitate.cs.columbia.edu\u002F)\n- CVPR 2024 best paper, Generative Image Dynamics, [Website](https:\u002F\u002Fgenerative-dynamics.github.io\u002F)\n- CVPR 2024 highlight, **XCube**: Large-Scale 3D Generative Modeling using Sparse Voxel Hierarchies, [Website](https:\u002F\u002Fresearch.nvidia.com\u002Flabs\u002Ftoronto-ai\u002Fxcube\u002F)\n- CVPR 2024 oral, **DSINE**: Rethinking Inductive Biases for Surface Normal Estimation, [Website](https:\u002F\u002Fbaegwangbin.github.io\u002FDSINE\u002F)\n- arXiv 2024.06, **An Image is Worth More Than 16x16 Patches**: Exploring Transformers on Individual Pixels, [arXiv](https:\u002F\u002Farxiv.org\u002Fabs\u002F2406.09415)\n- arXiv 2024.06, **BAKU**: An Efficient Transformer for Multi-Task Policy Learning, [Website](https:\u002F\u002Fbaku-robot.github.io\u002F)\n- CVPR 2024 highlight, Image Neural Field Diffusion Models, [Website](https:\u002F\u002Fyinboc.github.io\u002Finfd\u002F)\n- RSS 2024, **MPI**: Learning Manipulation by Predicting Interaction, [Website](https:\u002F\u002Fopendrivelab.github.io\u002Fmpi.github.io\u002F)\n- arXiv 2024.05, **Model-based Diffusion** for Trajectory Optimization, [Website](https:\u002F\u002Flecar-lab.github.io\u002Fmbd\u002F)\n- RSS 2024, **RoboCasa**: Large-Scale Simulation of Everyday Tasks for Generalist Robots, [Website](https:\u002F\u002Frobocasa.ai\u002F)\n- CVPR 2024, **OmniGlue**: Generalizable Feature Matching with Foundation Model Guidance, [Website](https:\u002F\u002Fhwjiang1510.github.io\u002FOmniGlue\u002F)\n- arXiv 2024.05, **Pandora**: Towards General World Model with Natural Language Actions and Video States, [Website](https:\u002F\u002Fworld-model.maitrix.org\u002F)\n- arXiv 2024.05, **Images that Sound**: Composing Images and Sounds on a Single Canvas, [Website](https:\u002F\u002Fificl.github.io\u002Fimages-that-sound\u002F)\n- SIGGRAPH 2024, **Text-to-Vector Generation** with Neural Path Representation, 
[Website](https:\u002F\u002Fintchous.github.io\u002FT2V-NPR\u002F)\n- arXiv 2024.03, **GeoWizard**: Unleashing the Diffusion Priors for 3D Geometry Estimation from a Single Image, [Website](https:\u002F\u002Ffuxiao0719.github.io\u002Fprojects\u002Fgeowizard\u002F)\n- arXiv 2024.05, **Toon3D**: Seeing Cartoons from a New Perspective, [Website](https:\u002F\u002Ftoon3d.studio\u002F)\n- arXiv 2024.05, **TRANSIC**: Sim-to-Real Policy Transfer by Learning from Online Correction, [Website](https:\u002F\u002Ftransic-robot.github.io\u002F)\n- RSS 2024, **Natural Language** Can Help Bridge the Sim2Real Gap, [arXiv](https:\u002F\u002Farxiv.org\u002Fabs\u002F2405.10020)\n- ICML 2024, The **Platonic Representation** Hypothesis, [arXiv](https:\u002F\u002Farxiv.org\u002Fabs\u002F2405.07987)\n- arXiv 2024.05, **SPIN**: Simultaneous Perception, Interaction and Navigation, [Website](https:\u002F\u002Fspin-robot.github.io\u002F)\n- RSS 2024, **Consistency Policy**: Accelerated Visuomotor Policies via Consistency Distillation, [Website](https:\u002F\u002Fconsistency-policy.github.io\u002F)\n- arXiv 2024.05, **Humanoid Parkour** Learning, [Website](https:\u002F\u002Fhumanoid4parkour.github.io\u002F)\n- arXiv 2024.05, Evaluating Real-World Robot Manipulation Policies in Simulation, [Website](https:\u002F\u002Fsimpler-env.github.io\u002F)\n- arXiv 2024.05, **ScrewMimic**: Bimanual Imitation from Human Videos with Screw Space Projection, [Website](https:\u002F\u002Frobin-lab.cs.utexas.edu\u002FScrewMimic\u002F)\n- arXiv 2024.04, **DiffuseLoco**: Real-Time Legged Locomotion Control with Diffusion from Offline Datasets, [arXiv](https:\u002F\u002Farxiv.org\u002Fabs\u002F2404.19264)\n- arXiv 2024.05, **DrEureka**: Language Model Guided Sim-To-Real Transfer, [Website](https:\u002F\u002Feureka-research.github.io\u002Fdr-eureka\u002F)\n- arXiv 2024.05, Customizing Text-to-Image Models with a Single Image Pair, [Website](https:\u002F\u002Fpaircustomization.github.io\u002F)\n- arXiv 2024.05, **SATO**: Stable Text-to-Motion Framework, [arXiv](https:\u002F\u002Farxiv.org\u002Fabs\u002F2405.01461)\n- ICRA 2024, Learning Force Control for Legged Manipulation, [arXiv](https:\u002F\u002Farxiv.org\u002Fabs\u002F2405.01402)\n- arXiv 2024.05, **IntervenGen**: Interventional Data Generation for Robust and Data-Efficient Robot Imitation Learning, [arXiv](https:\u002F\u002Farxiv.org\u002Fabs\u002F2405.01472)\n- arXiv 2024.05, **Track2Act**: Predicting Point Tracks from Internet Videos enables Diverse Zero-shot Robot Manipulation, [arXiv](https:\u002F\u002Farxiv.org\u002Fabs\u002F2405.01527)\n- arXiv 2024.04, **KAN**: Kolmogorov-Arnold Networks, [arXiv](https:\u002F\u002Farxiv.org\u002Fabs\u002F2404.19756)\n- RSS 2023, **IndustReal**: Transferring Contact-Rich Assembly Tasks from Simulation to Reality, [Website](https:\u002F\u002Fsites.google.com\u002Fnvidia.com\u002Findustreal)\n- arXiv 2024.04, Editable Image Elements for Controllable Synthesis, [Website](https:\u002F\u002Fjitengmu.github.io\u002FEditable_Image_Elements\u002F)\n- arXiv 2024.04, **EgoPet**: Egomotion and Interaction Data from an Animal's Perspective, [Website](https:\u002F\u002Fwww.amirbar.net\u002Fegopet\u002F)\n- SIGGRAPH 2023, **OctFormer**: Octree-based Transformers for 3D Point Clouds, [Website](https:\u002F\u002Fwang-ps.github.io\u002Foctformer.html)\n- arXiv 2024.04, **Clio**: Real-time Task-Driven Open-Set 3D Scene Graphs, [arXiv](https:\u002F\u002Farxiv.org\u002Fabs\u002F2404.13696)\n- ICCV 2023, Canonical Factors for Hybrid Neural Fields, 
[Website](https:\u002F\u002Fbrentyi.github.io\u002Ftilted\u002F)\n- arXiv 2024.04, **HATO**: Learning Visuotactile Skills with Two Multifingered Hands, [Website](https:\u002F\u002Ftoruowo.github.io\u002Fhato\u002F)\n- arXiv 2024.04, **SpringGrasp**: Synthesizing Compliant Dexterous Grasps under Shape Uncertainty, [Website](https:\u002F\u002Fstanford-tml.github.io\u002FSpringGrasp\u002F)\n- ICRA 2024 workshop, Object-Aware **Gaussian Splatting for Robotic Manipulation**, [OpenReview](https:\u002F\u002Fopenreview.net\u002Fforum?id=gdRI43hDgo)\n- arXiv 2024.04, **PhysDreamer**: Physics-Based Interaction with 3D Objects via Video Generation, [Website](https:\u002F\u002Fphysdreamer.github.io\u002F)\n- arXiv 2015.09, **MPPI**: Model Predictive Path Integral Control using Covariance Variable Importance Sampling, [arXiv](https:\u002F\u002Farxiv.org\u002Fabs\u002F1509.01149)\n- arXiv 2023.07, Sampling-based Model Predictive Control Leveraging Parallelizable Physics Simulations, [arXiv](https:\u002F\u002Farxiv.org\u002Fabs\u002F2307.09105) \u002F [Github](https:\u002F\u002Fgithub.com\u002Ftud-airlab\u002Fmppi-isaac)\n- arXiv 2024.04, **BLINK**: Multimodal Large Language Models Can See but Not Perceive, [Website](https:\u002F\u002Fzeyofu.github.io\u002Fblink\u002F)\n- arXiv 2024.04, **Factorized Diffusion**: Perceptual Illusions by Noise Decomposition, [Website](https:\u002F\u002Fdangeng.github.io\u002Ffactorized_diffusion\u002F)\n- CVPR 2024, Probing the 3D Awareness of Visual Foundation Models, [arXiv](https:\u002F\u002Farxiv.org\u002Fabs\u002F2404.08636)\n- ICCV 2019, **Neural-Guided RANSAC**: Learning Where to Sample Model Hypotheses, [arXiv](https:\u002F\u002Farxiv.org\u002Fabs\u002F1905.04132)\n- arXiv 2024.04, **QuasiSim**: Parameterized Quasi-Physical Simulators for Dexterous Manipulations Transfer, [Website](https:\u002F\u002Fmeowuu7.github.io\u002FQuasiSim\u002F)\n- arXiv 2024.04, **Policy-Guided Diffusion**, [arXiv](https:\u002F\u002Farxiv.org\u002Fabs\u002F2404.06356) \u002F [Github](https:\u002F\u002Fgithub.com\u002FEmptyJackson\u002Fpolicy-guided-diffusion)\n- RoboSoft 2024, Body Design and Gait Generation of **Chair-Type Asymmetrical Tripedal** Low-rigidity Robot, [Website](https:\u002F\u002Fshin0805.github.io\u002Fchair-type-tripedal-robot\u002F)\n- CVPR 2024 oral, **MicKey**: Matching 2D Images in 3D: Metric Relative Pose from Metric Correspondences, [Website](https:\u002F\u002Fnianticlabs.github.io\u002Fmickey\u002F)\n- arXiv 2024.04, **ZeST**: Zero-Shot Material Transfer from a Single Image, [Website](https:\u002F\u002Fttchengab.github.io\u002Fzest\u002F)\n- arXiv 2024.03, **Keypoint Action Tokens** Enable In-Context Imitation Learning in Robotics, [Website](https:\u002F\u002Fwww.robot-learning.uk\u002Fkeypoint-action-tokens)\n- arXiv 2024.04, Reconstructing **Hand-Held Objects** in 3D, [arXiv](https:\u002F\u002Farxiv.org\u002Fabs\u002F2404.06507)\n- ICRA 2024, **Actor-Critic Model Predictive Control**, [arXiv](https:\u002F\u002Farxiv.org\u002Fabs\u002F2306.09852)\n- arXiv 2024.04, Finding Visual Task Vectors, [arXiv](https:\u002F\u002Farxiv.org\u002Fabs\u002F2404.05729)\n- NeurIPS 2022, Visual Prompting via **Image Inpainting**, [arXiv](https:\u002F\u002Farxiv.org\u002Fabs\u002F2209.00647)\n- CVPR 2024 highlight, **SpatialTracker**: Tracking Any 2D Pixels in 3D Space, [Website](https:\u002F\u002Fhenry123-boy.github.io\u002FSpaTracker\u002F)\n- CVPR 2024, **NeRF2Physics**: Physical Property Understanding from Language-Embedded Feature Fields, 
[Website](https:\u002F\u002Fajzhai.github.io\u002FNeRF2Physics\u002F)\n- CVPR 2024, **Scaling Laws of Synthetic Images** for Model Training ... for Now, [arXiv](https:\u002F\u002Farxiv.org\u002Fabs\u002F2312.04567)\n- CVPR 2024, A Vision Check-up for Language Models, [arXiv](https:\u002F\u002Farxiv.org\u002Fabs\u002F2401.01862)\n- CVPR 2024, **GenH2R**: Learning Generalizable Human-to-Robot Handover via Scalable Simulation, Demonstration, and Imitation, [Website](https:\u002F\u002Fgenh2r.github.io\u002F)\n- arXiv 2024.04, **PreAfford**: Universal Affordance-Based Pre-Grasping for Diverse Objects and Environments, [Website](https:\u002F\u002Fair-discover.github.io\u002FPreAfford\u002F)\n- CVPR 2024, **Lift3D**: Zero-Shot Lifting of Any 2D Vision Model to 3D, [Website](https:\u002F\u002Fmukundvarmat.github.io\u002FLift3D\u002F)\n- arXiv 2024.03, **LocoMan**: Advancing Versatile Quadrupedal Dexterity with Lightweight Loco-Manipulators, [arXiv](https:\u002F\u002Farxiv.org\u002Fabs\u002F2403.18197)\n- arXiv 2024.03, Leveraging **Symmetry** in RL-based Legged Locomotion Control, [arXiv](https:\u002F\u002Farxiv.org\u002Fabs\u002F2403.17320)\n- arXiv 2024.03, **RoboDuet**: A Framework Affording Mobile-Manipulation and Cross-Embodiment, [arXiv](https:\u002F\u002Farxiv.org\u002Fabs\u002F2403.17367)\n- arXiv 2024.03, Imitation Bootstrapped Reinforcement Learning, [arXiv](https:\u002F\u002Farxiv.org\u002Fabs\u002F2311.02198)\n- arXiv 2024.03, **Visual Whole-Body Control** for Legged Loco-Manipulation, [arXiv](https:\u002F\u002Farxiv.org\u002Fabs\u002F2403.16967)\n- arXiv 2024.03, **S2**: When Do We Not Need Larger Vision Models? [arXiv](https:\u002F\u002Farxiv.org\u002Fabs\u002F2403.13043)\n- ICCV 2021, **DPT**: Vision Transformers for Dense Prediction, [arXiv](https:\u002F\u002Farxiv.org\u002Fabs\u002F2103.13413)\n- arXiv 2024.03, **GRM**: Large Gaussian Reconstruction Model for Efficient 3D Reconstruction and Generation, [Website](https:\u002F\u002Fjustimyhxu.github.io\u002Fprojects\u002Fgrm\u002F)\n- arXiv 2024.03, **MVSplat**: Efficient 3D Gaussian Splatting from Sparse Multi-View Images, [Website](https:\u002F\u002Fdonydchen.github.io\u002Fmvsplat\u002F)\n- arXiv 2024.03, **LiFT**: A Surprisingly Simple Lightweight Feature Transform for Dense ViT Descriptors, [Website](https:\u002F\u002Fwww.cs.umd.edu\u002F~sakshams\u002FLiFT\u002F)\n- SIGGRAPH 2023, **VET**: Visual Error Tomography for Point Cloud Completion and High-Quality Neural Rendering, [Github](https:\u002F\u002Fgithub.com\u002Flfranke\u002Fvet)\n- arXiv 2024.03, On **Pretraining Data Diversity** for Self-Supervised Learning, [arXiv](https:\u002F\u002Farxiv.org\u002Fabs\u002F2403.13808)\n- arXiv 2024.03, **FeatUp**: A Model-Agnostic Framework for Features at Any Resolution, [arXiv](https:\u002F\u002Farxiv.org\u002Fabs\u002F2403.10516) \u002F [Github](https:\u002F\u002Fgithub.com\u002Fmhamilton723\u002FFeatUp)\n- arXiv 2024.03, **Vid2Robot**: End-to-end Video-conditioned Policy Learning with Cross-Attention Transformers, [arXiv](https:\u002F\u002Farxiv.org\u002Fabs\u002F2403.12943)\n- arXiv 2024.03, **Yell At Your Robot**: Improving On-the-Fly from Language Corrections, [arXiv](https:\u002F\u002Farxiv.org\u002Fabs\u002F2403.12910)\n- arXiv 2024.03, **DROID**: A Large-Scale In-the-Wild Robot Manipulation Dataset, [Website](https:\u002F\u002Fdroid-dataset.github.io\u002F)\n- ICLR 2024 oral, **Ghost on the Shell**: An Expressive Representation of General 3D Shapes, [Website](https:\u002F\u002Fgshell3d.github.io\u002F)\n- arXiv 2024.03, 
**HumanoidBench**: Simulated Humanoid Benchmark for Whole-Body Locomotion and Manipulation, [Website](https:\u002F\u002Fsferrazza.cc\u002Fhumanoidbench_site\u002F)\n- arXiv 2024.03, **PaperBot**: Learning to Design Real-World Tools Using Paper, [arXiv](https:\u002F\u002Farxiv.org\u002Fabs\u002F2403.09566)\n- arXiv 2024.03, **GaussianGrasper**: 3D Language Gaussian Splatting for Open-vocabulary Robotic Grasping, [arXiv](https:\u002F\u002Farxiv.org\u002Fabs\u002F2403.09637)\n- arXiv 2024.03, A Decade's Battle on **Dataset Bias**: Are We There Yet? [arXiv](https:\u002F\u002Farxiv.org\u002Fabs\u002F2403.08632)\n- arXiv 2024.03, **ManiGaussian**: Dynamic Gaussian Splatting for Multi-task Robotic Manipulation, [arXiv](https:\u002F\u002Farxiv.org\u002Fabs\u002F2403.08321)\n- arXiv 2024.03, Learning **Generalizable Feature Fields** for Mobile Manipulation, [arXiv](https:\u002F\u002Farxiv.org\u002Fabs\u002F2403.07563)\n- arXiv 2024.03, **DexCap**: Scalable and Portable Mocap Data Collection System for Dexterous Manipulation, [arXiv](https:\u002F\u002Farxiv.org\u002Fabs\u002F2403.07788)\n- arXiv 2024.03, **TeleMoMa**: A Modular and Versatile Teleoperation System for Mobile Manipulation, [arXiv](https:\u002F\u002Farxiv.org\u002Fabs\u002F2403.07869)\n- arXiv 2024.03, **OPEN TEACH**: A Versatile Teleoperation System for Robotic Manipulation, [arXiv](https:\u002F\u002Farxiv.org\u002Fabs\u002F2403.07870)\n- CVPR 2020 oral, **SuperGlue**: Learning Feature Matching with Graph Neural Networks, [Github](https:\u002F\u002Fgithub.com\u002Fmagicleap\u002FSuperGluePretrainedNetwork)\n- ICRA 2024, Learning to walk in confined spaces using 3D representation, [arXiv](https:\u002F\u002Farxiv.org\u002Fabs\u002F2403.00187)\n- CVPR 2024, **Hierarchical Diffusion Policy** for Kinematics-Aware Multi-Task Robotic Manipulation, [arXiv](https:\u002F\u002Farxiv.org\u002Fabs\u002F2403.03890) \u002F [Website](https:\u002F\u002Fyusufma03.github.io\u002Fprojects\u002Fhdp\u002F)\n- arXiv 2024.03, Reconciling Reality through Simulation: A **Real-to-Sim-to-Real** Approach for Robust Manipulation, [Website](https:\u002F\u002Freal-to-sim-to-real.github.io\u002FRialTo\u002F)\n- ICRA 2024, **Dexterous Legged Locomotion** in Confined 3D Spaces with Reinforcement Learning, [arXiv](https:\u002F\u002Farxiv.org\u002Fabs\u002F2403.03848)\n- arXiv 2024.03, **MOKA**: Open-Vocabulary Robotic Manipulation through Mark-Based Visual Prompting, [Website](https:\u002F\u002Fmoka-manipulation.github.io\u002F)\n- arXiv 2024.03, **VQ-BeT**: Behavior Generation with Latent Actions, [arXiv](https:\u002F\u002Farxiv.org\u002Fabs\u002F2403.03181) \u002F [Website](https:\u002F\u002Fsjlee.cc\u002Fvq-bet\u002F)\n- **Humanoid-Gym**: Reinforcement Learning for Humanoid Robot with Zero-Shot Sim2Real Transfer, [Website](https:\u002F\u002Fsites.google.com\u002Fview\u002Fhumanoid-gym\u002F)\n- arXiv 2024.03, Twisting Lids Off with Two Hands, [Website](https:\u002F\u002Ftoruowo.github.io\u002Fbimanual-twist\u002F)\n- ICLR 2023 spotlight, Multi-skill Mobile Manipulation for Object Rearrangement, [Github](https:\u002F\u002Fgithub.com\u002FJiayuan-Gu\u002Fhab-mobile-manipulation)\n- CVPR 2024, **Gaussian Splatting SLAM**, [Github](https:\u002F\u002Fgithub.com\u002Fmuskie82\u002FMonoGS)\n- arXiv 2024.03, **TripoSR**: Fast 3D Object Reconstruction from a Single Image, [Github](https:\u002F\u002Fgithub.com\u002FVAST-AI-Research\u002FTripoSR)\n- arXiv 2024.03, **Point Cloud Mamba**: Point Cloud Learning via State Space Model, 
[arXiv](https:\u002F\u002Farxiv.org\u002Fabs\u002F2403.00762)\n- CVPR 2024, Rethinking Few-shot 3D Point Cloud Semantic Segmentation, [arXiv](https:\u002F\u002Farxiv.org\u002Fabs\u002F2403.00592)\n- ICLR 2024, Can Transformers Capture Spatial Relations between Objects? [arXiv](https:\u002F\u002Farxiv.org\u002Fabs\u002F2403.00729) \u002F [Website](https:\u002F\u002Fsites.google.com\u002Fview\u002Fspatial-relation)\n- SIGGRAPH Asia 2023, **CamP**: Camera Preconditioning for Neural Radiance Fields, [Website](https:\u002F\u002Fcamp-nerf.github.io\u002F) \u002F [Github](https:\u002F\u002Fgithub.com\u002Fjonbarron\u002Fcamp_zipnerf)\n- arXiv 2024.02, **Extreme Cross-Embodiment Learning** for Manipulation and Navigation, [Website](https:\u002F\u002Fextreme-cross-embodiment.github.io\u002F)\n- CVPR 2024, **DUSt3R**: Geometric 3D Vision Made Easy, [Github](https:\u002F\u002Fgithub.com\u002Fnaver\u002Fdust3r)\n- CVPR 2018 best paper, **TASKONOMY**: Disentangling Task Transfer Learning, [Website](http:\u002F\u002Ftaskonomy.stanford.edu\u002F)\n- arXiv 2024.02, **Mirage**: Cross-Embodiment Zero-Shot Policy Transfer with Cross-Painting, [Website](https:\u002F\u002Frobot-mirage.github.io\u002F)\n- CVPR 2024, **Diffusion 3D Features (Diff3F)**: Decorating Untextured Shapes with Distilled Semantic Features, [Website](https:\u002F\u002Fdiff3f.github.io\u002F)\n- arXiv 2024.02, **Disentangled 3D Scene Generation** with Layout Learning, [Website](https:\u002F\u002Fdave.ml\u002Flayoutlearning\u002F)\n- arXiv 2024.02, **Transparent Image Layer Diffusion** using Latent Transparency, [arXiv](https:\u002F\u002Farxiv.org\u002Fabs\u002F2402.17113)\n- arXiv 2024.02, **Diffusion Meets DAgger**: Supercharging Eye-in-hand Imitation Learning, [Website](https:\u002F\u002Fsites.google.com\u002Fview\u002Fdiffusion-meets-dagger)\n- arXiv 2024.02, Massive Activations in Large Language Models, [Website](https:\u002F\u002Feric-mingjie.github.io\u002Fmassive-activations\u002Findex.html)\n- arXiv 2024.02, Dynamics-Guided Diffusion Model for **Robot Manipulator Design**, [Website](https:\u002F\u002Fdgdm-robot.github.io\u002F)\n- arXiv 2024.02, **Genie**: Generative Interactive Environments, [arXiv](https:\u002F\u002Farxiv.org\u002Fabs\u002F2402.15391) \u002F [Website](https:\u002F\u002Fsites.google.com\u002Fview\u002Fgenie-2024\u002F)\n- arXiv 2024.02, **CyberDemo**: Augmenting Simulated Human Demonstration for Real-World Dexterous Manipulation, [arXiv](https:\u002F\u002Farxiv.org\u002Fabs\u002F2402.14795) \u002F [Website](https:\u002F\u002Fcyber-demo.github.io\u002F)\n- CoRL 2020, **DSR**: Learning 3D Dynamic Scene Representations for Robot Manipulation, [Website](https:\u002F\u002Fdsr-net.cs.columbia.edu\u002F)\n- ICLR 2024 oral, Cameras as Rays: Pose Estimation via **Ray Diffusion**, [Website](https:\u002F\u002Fjasonyzhang.com\u002FRayDiffusion\u002F)\n- arXiv 2024.02, **Pedipulate**: Enabling Manipulation Skills using a Quadruped Robot's Leg, [arXiv](https:\u002F\u002Farxiv.org\u002Fabs\u002F2402.10837)\n- arXiv 2024.02, **LMPC**: Learning to Learn Faster from Human Feedback with Language Model Predictive Control, [Website](https:\u002F\u002Frobot-teaching.github.io\u002F)\n- arXiv 2023.12, **W.A.L.T**: Photorealistic Video Generation with Diffusion Models, [Website](https:\u002F\u002Fwalt-video-diffusion.github.io\u002F)\n- arXiv 2024.02, **Universal Manipulation Interface**: In-The-Wild Robot Teaching Without In-The-Wild Robots, [Website](https:\u002F\u002Fumi-gripper.github.io\u002F)\n- ICCV 2023 oral, **DiT**: Scalable 
Diffusion Models with Transformers, [Website](https:\u002F\u002Fwww.wpeebles.com\u002FDiT)\n- arXiv 2023.07, Diffusion Models Beat GANs on Image Classification, [arXiv](https:\u002F\u002Farxiv.org\u002Fabs\u002F2307.08702)\n- ICCV 2023 oral, **DDAE**: Denoising Diffusion Autoencoders are Unified Self-supervised Learners, [arXiv](https:\u002F\u002Farxiv.org\u002Fabs\u002F2303.09769)\n- arXiv 2023.12, **Mosaic-SDF** for 3D Generative Models, [arXiv](https:\u002F\u002Farxiv.org\u002Fabs\u002F2312.09222) \u002F [Website](https:\u002F\u002Flioryariv.github.io\u002Fmsdf\u002F)\n- arXiv 2024.02, **POCO**: Policy Composition From and For Heterogeneous Robot Learning, [Website](https:\u002F\u002Fliruiw.github.io\u002Fpolicycomp\u002F)\n- ICML 2024 submission, **Latent Graph Diffusion**: A Unified Framework for Generation and Prediction on Graphs, [arXiv](https:\u002F\u002Farxiv.org\u002Fabs\u002F2402.02518)\n- ICLR 2024 spotlight, **AMAGO**: Scalable In-Context Reinforcement Learning for Adaptive Agents, [arXiv](https:\u002F\u002Farxiv.org\u002Fabs\u002F2310.09971)\n- arXiv 2024.02, Offline Actor-Critic Reinforcement Learning Scales to Large Models, [arXiv](https:\u002F\u002Farxiv.org\u002Fabs\u002F2402.05546)\n- arXiv 2024.02, **V-IRL**: Grounding Virtual Intelligence in Real Life, [Website](https:\u002F\u002Fvirl-platform.github.io\u002F)\n- ICRA 2024, **SERL**: A Software Suite for Sample-Efficient Robotic Reinforcement Learning, [Website](https:\u002F\u002Fserl-robot.github.io\u002F)\n- arXiv 2024.01, Generative Expressive Robot Behaviors using Large Language Models, [arXiv](https:\u002F\u002Farxiv.org\u002Fabs\u002F2401.14673)\n- arXiv 2024.01, **pix2gestalt**: Amodal Segmentation by Synthesizing Wholes, [Website](https:\u002F\u002Fgestalt.cs.columbia.edu\u002F)\n- arXiv 2024.01, **DAE**: Deconstructing Denoising Diffusion Models for Self-Supervised Learning, [arXiv](https:\u002F\u002Farxiv.org\u002Fabs\u002F2401.14404)\n- ICLR 2024, **DittoGym**: Learning to Control Soft Shape-Shifting Robots, [Website](https:\u002F\u002Fdittogym.github.io\u002F)\n- arXiv 2024.01, **WildRGB-D**: RGBD Objects in the Wild: Scaling Real-World 3D Object Learning from RGB-D Videos, [Website](https:\u002F\u002Fwildrgbd.github.io\u002F)\n- arXiv 2024.01, **Spatial VLM**: Endowing Vision-Language Models with Spatial Reasoning Capabilities, [Website](https:\u002F\u002Fspatial-vlm.github.io\u002F)\n- arXiv 2024.01, Multimodal **Visual-Tactile Representation** Learning through Self-Supervised Contrastive Pre-Training, [arXiv](https:\u002F\u002Farxiv.org\u002Fabs\u002F2401.12024)\n- arXiv 2024.01, **OK-Robot**: What Really Matters in Integrating Open-Knowledge Models for Robotics, [Website](https:\u002F\u002Fok-robot.github.io\u002F)\n- L4DC 2023, **Agile Catching** with Whole-Body MPC and Blackbox Policy Learning, [arXiv](https:\u002F\u002Farxiv.org\u002Fabs\u002F2306.08205)\n- arXiv 2024.01, **Depth Anything**: Unleashing the Power of Large-Scale Unlabeled Data, [Github](https:\u002F\u002Fgithub.com\u002FLiheYoung\u002FDepth-Anything?tab=readme-ov-file)\n- arXiv 2024.01, **WorldDreamer**: Towards General World Models for Video Generation via Predicting Masked Tokens, [Website](https:\u002F\u002Fworld-dreamer.github.io\u002F)\n- arXiv 2024.01, **VMamba**: Visual State Space Model, [Github](https:\u002F\u002Fgithub.com\u002FMzeroMiko\u002FVMamba)\n- arXiv 2024.01, **DiffusionGPT**: LLM-Driven Text-to-Image Generation System, [arXiv](https:\u002F\u002Farxiv.org\u002Fabs\u002F2401.10061) 
\u002F[Website](https:\u002F\u002Fdiffusiongpt.github.io\u002F)\n- arXiv 2023.12, **PhysHOI**: Physics-Based Imitation of Dynamic Human-Object Interaction, [Website](https:\u002F\u002Fwyhuai.github.io\u002Fphyshoi-page\u002F)\n- ICLR 2024 oral, **UniSim**: Learning Interactive Real-World Simulators, [OpenReview](https:\u002F\u002Fopenreview.net\u002Fforum?id=sFyTZEqmUY)\n- ICLR 2024 oral, **ASID**: Active Exploration for System Identification and Reconstruction in Robotic Manipulation, [OpenReview](https:\u002F\u002Fopenreview.net\u002Fforum?id=jNR6s6OSBT)\n- ICLR 2024 oral, Mastering **Memory Tasks** with World Models, [OpenReview](https:\u002F\u002Fopenreview.net\u002Fforum?id=1vDArHJ68h)\n- ICLR 2024 oral, Predictive auxiliary objectives in deep RL mimic learning in the brain, [OpenReview](https:\u002F\u002Fopenreview.net\u002Fforum?id=agPpmEgf8C)\n- ICLR 2024 oral, **Is ImageNet worth 1 video?** Learning strong image encoders from 1 long unlabelled video, [arXiv](https:\u002F\u002Farxiv.org\u002Fabs\u002F2310.08584) \u002F [OpenReview](https:\u002F\u002Fopenreview.net\u002Fforum?id=Yen1lGns2o)\n- arXiv 2024.01, **URHand**: Universal Relightable Hands, [Website](https:\u002F\u002Ffrozenburning.github.io\u002Fprojects\u002Furhand\u002F)\n- arXiv 2023.12, **Mamba**: Linear-Time Sequence Modeling with Selective State Spaces, [arXiv](https:\u002F\u002Farxiv.org\u002Fabs\u002F2312.00752) \u002F [Github](https:\u002F\u002Fgithub.com\u002Fstate-spaces\u002Fmamba)\n- ICLR 2022, **S4**: Efficiently Modeling Long Sequences with Structured State Spaces, [arXiv](https:\u002F\u002Farxiv.org\u002Fabs\u002F2111.00396)\n- arXiv 2024.01, **Dr2Net**: Dynamic Reversible Dual-Residual Networks for Memory-Efficient Finetuning, [arXiv](https:\u002F\u002Farxiv.org\u002Fabs\u002F2401.04105)\n- arXiv 2023.12, **3D-LFM**: Lifting Foundation Model, [Website](https:\u002F\u002F3dlfm.github.io\u002F)\n- arXiv 2024.01, **DVT**: Denoising Vision Transformers, [Website](https:\u002F\u002Fjiawei-yang.github.io\u002FDenoisingViT\u002F)\n- arXiv 2024.01, **Open-Vocabulary SAM**: Segment and Recognize Twenty-thousand Classes Interactively, [Website](https:\u002F\u002Fwww.mmlab-ntu.com\u002Fproject\u002Fovsam\u002F) \u002F [Code](https:\u002F\u002Fgithub.com\u002FHarborYuan\u002Fovsam)\n- arXiv 2024.01, **ATM**: Any-point Trajectory Modeling for Policy Learning, [Website](https:\u002F\u002Fxingyu-lin.github.io\u002Fatm\u002F)\n- CVPR 2024 submission, **Learning Vision from Models** Rivals Learning Vision from Data, [arXiv](https:\u002F\u002Farxiv.org\u002Fabs\u002F2312.17742) \u002F [Github](https:\u002F\u002Fgithub.com\u002Fgoogle-research\u002Fsyn-rep-learn)\n- CVPR 2024 submission, Visual Point Cloud Forecasting enables **Scalable Autonomous Driving**, [arXiv](https:\u002F\u002Farxiv.org\u002Fabs\u002F2312.17655) \u002F [Github](https:\u002F\u002Fgithub.com\u002FOpenDriveLab\u002FViDAR)\n- CVPR 2024 submission, **Ponymation**: Learning 3D Animal Motions from Unlabeled Online Videos, [arXiv](https:\u002F\u002Farxiv.org\u002Fabs\u002F2312.13604) \u002F [Website](https:\u002F\u002Fkeqiangsun.github.io\u002Fprojects\u002Fponymation\u002F)\n- CVPR 2024 submission, **V\\***: Guided Visual Search as a Core Mechanism in Multimodal LLMs, [Website](https:\u002F\u002Fvstar-seal.github.io\u002F)\n- NIPS 2021 outstanding paper, Deep Reinforcement Learning at the Edge of the Statistical Precipice, [arXiv](https:\u002F\u002Farxiv.org\u002Fabs\u002F2108.13264) \u002F [Website](https:\u002F\u002Fagarwl.github.io\u002Frliable\u002F)\n- 
CVPR 2024 submission, Zero-Shot **Metric Depth** with a Field-of-View Conditioned Diffusion Model, [Website](https:\u002F\u002Fdiffusion-vision.github.io\u002Fdmd\u002F)\n- ICLR 2023, **Deep Learning on 3D Neural Fields**, [arXiv](https:\u002F\u002Farxiv.org\u002Fabs\u002F2312.13277)\n- CVPR 2024 submission, **Tracking Any Object Amodally**, [Website](https:\u002F\u002Ftao-amodal.github.io\u002F)\n- CVPR 2024 submission, **MobileSAMv2**: Faster Segment Anything to Everything, [Github](https:\u002F\u002Fgithub.com\u002FChaoningZhang\u002FMobileSAM)\n- CVPR 2024 submission, **AnyDoor**: Zero-shot Object-level Image Customization, [Github](https:\u002F\u002Fgithub.com\u002Fdamo-vilab\u002FAnyDoor)\n- CVPR 2024 submission, **Point Transformer V3**: Simpler, Faster, Stronger, [arXiv](https:\u002F\u002Farxiv.org\u002Fabs\u002F2312.10035) \u002F [Github](https:\u002F\u002Fgithub.com\u002FPointcept\u002FPointTransformerV3)\n- CVPR 2024 submission, **Alchemist**: Parametric Control of Material Properties with Diffusion Models, [Website](https:\u002F\u002Fprafullsharma.net\u002Falchemist\u002F)\n- CVPR 2024 submission, **Reconstructing Hands in 3D** with Transformers, [Website](https:\u002F\u002Fgeopavlakos.github.io\u002Fhamer\u002F)\n- CVPR 2024 submission, Language-Informed Visual Concept Learning, [Website](https:\u002F\u002Fai.stanford.edu\u002F~yzzhang\u002Fprojects\u002Fconcept-axes\u002F)\n- CVPR 2024 submission, **RCG**: Self-conditioned Image Generation via Generating Representations, [arXiv](https:\u002F\u002Farxiv.org\u002Fabs\u002F2312.03701) \u002F [Github](https:\u002F\u002Fgithub.com\u002FLTH14\u002Frcg)\n- CVPR 2024 submission, **Describing Differences in Image Sets** with Natural Language, [Website](https:\u002F\u002Funderstanding-visual-datasets.github.io\u002FVisDiff-website\u002F)\n- CVPR 2024 submission, **FaceStudio**: Put Your Face Everywhere in Seconds, [Website](https:\u002F\u002Ficoz69.github.io\u002Ffacestudio\u002F)\n- CVPR 2024 submission, **ImageDream**: Image-Prompt Multi-view Diffusion for 3D Generation, [Website](https:\u002F\u002FImage-Dream.github.io)\n- CVPR 2024 submission, **Fine-grained Controllable Video Generation** via Object Appearance and Context, [Website](https:\u002F\u002Fhhsinping.github.io\u002Ffactor\u002F)\n- CVPR 2024 submission, **AmbiGen**: Generating Ambigrams from Pre-trained Diffusion Model, [Website](https:\u002F\u002Fraymond-yeh.com\u002FAmbiGen\u002F)\n- CVPR 2024 submission, **ReconFusion**: 3D Reconstruction with Diffusion Priors, [Website](https:\u002F\u002Freconfusion.github.io\u002F)\n- CVPR 2024 submission, **Ego-Exo4D**: Understanding Skilled Human Activity from First- and Third-Person Perspectives, [arXiv](https:\u002F\u002Farxiv.org\u002Fabs\u002F2311.18259) \u002F [Website](https:\u002F\u002Fego-exo4d-data.org\u002F)\n- CVPR 2024 submission, **MagicAnimate**: Temporally Consistent Human Image Animation using Diffusion Model, [Github](https:\u002F\u002Fgithub.com\u002Fmagic-research\u002Fmagic-animate)\n- CVPR 2024 submission, **VideoSwap**: Customized Video Subject Swapping with Interactive Semantic Point Correspondence, [Website](https:\u002F\u002Fvideoswap.github.io\u002F)\n- CVPR 2024 submission, **IMProv**: Inpainting-based Multimodal Prompting for Computer Vision Tasks, [Website](https:\u002F\u002Fjerryxu.net\u002FIMProv\u002F)\n- CVPR 2024 submission, Generative **Powers of Ten**, [Website](https:\u002F\u002Fpowers-of-10.github.io\u002F)\n- CVPR 2024 submission, **DiffiT**: Diffusion Vision Transformers for Image 
Generation, [arXiv](https:\u002F\u002Farxiv.org\u002Fabs\u002F2312.02139)\n- CVPR 2024 submission, Learning from **One Continuous Video Stream**, [arXiv](https:\u002F\u002Farxiv.org\u002Fabs\u002F2312.00598)\n- CVPR 2024 submission, **EvE**: Exploiting Generative Priors for Radiance Field Enrichment, [Website](https:\u002F\u002Feve-nvs.github.io\u002F)\n- CVPR 2024 submission, **Oryon**: Open-Vocabulary Object 6D Pose Estimation, [Website](https:\u002F\u002Fjcorsetti.github.io\u002Foryon-website\u002F)\n- CVPR 2024 submission, **Dense Optical Tracking**: Connecting the Dots, [Website](https:\u002F\u002F16lemoing.github.io\u002Fdot\u002F)\n- CVPR 2024 submission, Sequential Modeling Enables Scalable Learning for **Large Vision Models**, [Website](https:\u002F\u002Fyutongbai.com\u002Flvm.html)\n- CVPR 2024 submission, **VideoBooth**: Diffusion-based Video Generation with Image Prompts, [Website](https:\u002F\u002Fvchitect.github.io\u002FVideoBooth-project\u002F)\n- CVPR 2024 submission, **SODA**: Bottleneck Diffusion Models for Representation Learning, [Website](https:\u002F\u002Fsoda-diffusion.github.io\u002F)\n- CVPR 2024 submission, Exploiting **Diffusion Prior** for Generalizable Pixel-Level Semantic Prediction, [Website](https:\u002F\u002Fshinying.github.io\u002Fdmp\u002F)\n- arXiv 2023.11, Initializing Models with Larger Ones, [arXiv](https:\u002F\u002Farxiv.org\u002Fabs\u002F2311.18823)\n- CVPR 2024 submission, **Animate Anyone**: Consistent and Controllable Image-to-Video Synthesis for Character Animation, [Website](https:\u002F\u002Fhumanaigc.github.io\u002Fanimate-anyone\u002F) \u002F [Github](https:\u002F\u002Fgithub.com\u002FHumanAIGC\u002FAnimateAnyone)\n- CVPR 2023 best demo award, **Diffusion Illusions**: Hiding Images in Plain Sight, [Website](https:\u002F\u002Fdiffusionillusions.com\u002F)\n- CVPR 2024 submission, Do text-free diffusion models learn discriminative visual representations? 
[Website](https:\u002F\u002Fmgwillia.github.io\u002Fdiffssl\u002F)\n- CVPR 2024 submission, **Visual Anagrams**: Synthesizing Multi-View Optical Illusions with Diffusion Models, [Website](https:\u002F\u002Fdangeng.github.io\u002Fvisual_anagrams\u002F)\n- NIPS 2023, **Provable Guarantees for Generative Behavior Cloning**: Bridging Low-Level Stability and High-Level Behavior, [OpenReview](https:\u002F\u002Fopenreview.net\u002Fforum?id=PhFVF0gwid)\n- CoRL 2023 best paper, **Distilled Feature Fields** Enable Few-Shot Language-Guided Manipulation, [Website](https:\u002F\u002Ff3rm.github.io\u002F)\n- ICLR 2024 submission, **RLIF**: Interactive Imitation Learning as Reinforcement Learning, [Website](https:\u002F\u002Frlif-page.github.io\u002F) \u002F [arXiv](https:\u002F\u002Farxiv.org\u002Fabs\u002F2311.12996)\n- CVPR 2024 submission, **PIE-NeRF**: Physics-based Interactive Elastodynamics with NeRF, [arXiv](https:\u002F\u002Farxiv.org\u002Fabs\u002F2311.13099)\n- RSS 2018, **Asymmetric Actor Critic** for Image-Based Robot Learning, [arXiv](https:\u002F\u002Farxiv.org\u002Fabs\u002F1710.06542)\n- ICLR 2022, **RvS**: What is Essential for Offline RL via Supervised Learning?, [arXiv](https:\u002F\u002Farxiv.org\u002Fabs\u002F2112.10751)\n- NIPS 2021, **Stochastic Solutions** for Linear Inverse Problems using the Prior Implicit in a Denoiser, [arXiv](https:\u002F\u002Farxiv.org\u002Fabs\u002F2007.13640)\n- ICLR 2024 submission, Consistency Models as a Rich and Efficient Policy Class for Reinforcement Learning, [OpenReview](https:\u002F\u002Fopenreview.net\u002Fforum?id=v8jdwkUNXb) \u002F [arXiv](https:\u002F\u002Farxiv.org\u002Fabs\u002F2309.16984)\n- ICLR 2024 submission, Improved Techniques for Training Consistency Models, [OpenReview](https:\u002F\u002Fopenreview.net\u002Fforum?id=WNzy9bRDvG) \u002F [arXiv](https:\u002F\u002Farxiv.org\u002Fabs\u002F2310.14189)\n- ICLR 2024 submission, **Privileged Sensing** Scaffolds Reinforcement Learning, [OpenReview](https:\u002F\u002Fopenreview.net\u002Fforum?id=EpVe8jAjdx)\n- ICLR 2024 submission, **SafeDiffuser**: Safe Planning with Diffusion Probabilistic Models, [arXiv](https:\u002F\u002Farxiv.org\u002Fabs\u002F2306.00148) \u002F [Website](https:\u002F\u002Fsafediffuser.github.io\u002Fsafediffuser\u002F)\n- NIPS 2023 workshop, Vision-Language Models Provide Promptable Representations for Reinforcement Learning, [OpenReview](https:\u002F\u002Fopenreview.net\u002Fforum?id=AVg8WnI5ba)\n- ICLR 2023 oral, **Extreme Q-Learning**: MaxEnt RL without Entropy, [Website](https:\u002F\u002Fdiv99.github.io\u002FXQL\u002F)\n- ICLR 2024 submission, Generalization in diffusion models arises from geometry-adaptive harmonic representation, [OpenReview](https:\u002F\u002Fopenreview.net\u002Fforum?id=ANvmVS2Yr0)\n- ICLR 2024 submission, **DiffTOP**: Differentiable Trajectory Optimization as a Policy Class for Reinforcement and Imitation Learning, [OpenReview](https:\u002F\u002Fopenreview.net\u002Fforum?id=HL5P4H8eO2)\n- CoRL 2023 best system paper, **RoboCook**: Long-Horizon Elasto-Plastic Object Manipulation with Diverse Tools, [Website](https:\u002F\u002Fhshi74.github.io\u002Frobocook\u002F)\n- CoRL 2023, Learning to Design and Use Tools for Robotic Manipulation, [Website](https:\u002F\u002Frobotic-tool-design.github.io\u002F)\n- arXiv 2023.10, Learning to (Learn at Test Time), [arXiv](https:\u002F\u002Farxiv.org\u002Fabs\u002F2310.13807) \u002F [Github](https:\u002F\u002Fgithub.com\u002Ftest-time-training\u002Fmttt)\n- CoRL 2023 workshop, **FMB**: a Functional 
Manipulation Benchmark for Generalizable Robotic Learning, [OpenReview](https:\u002F\u002Fopenreview.net\u002Fpdf?id=055oRimwls) \u002F [Website](https:\u002F\u002Fsites.google.com\u002Fview\u002Fmanipulationbenchmark)\n- 2023.10, Non-parametric regression for robot learning on manifolds, [arXiv](https:\u002F\u002Farxiv.org\u002Fabs\u002F2310.19561)\n- IROS 2021, Explaining the Decisions of Deep Policy Networks for Robotic Manipulations, [arXiv](https:\u002F\u002Farxiv.org\u002Fabs\u002F2310.19432)\n- ICML 2022, The **primacy bias** in deep reinforcement learning, [arXiv](https:\u002F\u002Farxiv.org\u002Fabs\u002F2205.07802)\n- ICML 2023 oral, The **dormant neuron** phenomenon in deep reinforcement learning, [arXiv](https:\u002F\u002Farxiv.org\u002Fabs\u002F2302.12902)\n- arXiv 2022.04, Simplicial Embeddings in Self-Supervised Learning and Downstream Classification, [arXiv](https:\u002F\u002Farxiv.org\u002Fabs\u002F2204.00616)\n- arXiv 2023.10, **SparseDFF**: Sparse-View Feature Distillation for One-Shot Dexterous Manipulation, [arXiv](https:\u002F\u002Farxiv.org\u002Fabs\u002F2310.16838)\n- arXiv 2023.10, **SAM-CLIP**: Merging Vision Foundation Models towards Semantic and Spatial Understanding, [arXiv](https:\u002F\u002Farxiv.org\u002Fabs\u002F2310.15308)\n- arXiv 2023.10, **TD-MPC2**: Scalable, Robust World Models for Continuous Control, [arXiv](https:\u002F\u002Farxiv.org\u002Fabs\u002F2310.16828) \u002F [Github](https:\u002F\u002Fgithub.com\u002Fnicklashansen\u002Ftdmpc2)\n- arXiv 2023.10, **EquivAct**: SIM(3)-Equivariant Visuomotor Policies beyond Rigid Object Manipulation, [Website](https:\u002F\u002Fequivact.github.io\u002F)\n- NeurIPS 2022, **CodeRL**: Mastering Code Generation through Pretrained Models and Deep Reinforcement Learning, [arXiv](https:\u002F\u002Farxiv.org\u002Fabs\u002F2207.01780) \u002F [Github](https:\u002F\u002Fgithub.com\u002Fsalesforce\u002FCodeRL)\n- arXiv 2023.10, **Robot Fine-Tuning Made Easy**: Pre-Training Rewards and Policies for Autonomous Real-World Reinforcement Learning, [Website](https:\u002F\u002Frobofume.github.io\u002F)\n- CoRL 2023, **SAQ**: Action-Quantized Offline Reinforcement Learning for Robotic Skill Learning, [Website](https:\u002F\u002Fsaqrl.github.io\u002F)\n- arXiv 2023.10, Mastering Robot Manipulation with Multimodal Prompts through Pretraining and Multi-task Fine-tuning, [arXiv](https:\u002F\u002Farxiv.org\u002Fabs\u002F2310.09676)\n- arXiv 2023.03, **PLEX**: Making the Most of the Available Data for Robotic Manipulation Pretraining, [arXiv](https:\u002F\u002Farxiv.org\u002Fabs\u002F2303.08789)\n- arXiv 2023.10, **LAMP**: Learn A Motion Pattern for Few-Shot-Based Video Generation, [Website](https:\u002F\u002Frq-wu.github.io\u002Fprojects\u002FLAMP\u002F)\n- arXiv 2023.10, **4K4D**: Real-Time 4D View Synthesis at 4K Resolution, [Website](https:\u002F\u002Fzju3dv.github.io\u002F4k4d\u002F)\n- arXiv 2023.10, **SuSIE**: Subgoal Synthesis via Image Editing, [Website](https:\u002F\u002Frail-berkeley.github.io\u002Fsusie\u002F)\n- arXiv 2023.10, **Universal Visual Decomposer**: Long-Horizon Manipulation Made Easy, [Website](https:\u002F\u002Fzcczhang.github.io\u002FUVD\u002F)\n- arXiv 2023.10, Learning to Act from Actionless Video through Dense Correspondences, [Website](https:\u002F\u002Fflow-diffusion.github.io\u002F)\n- NIPS 2023, **CEC**: Cross-Episodic Curriculum for Transformer Agents, [Website](https:\u002F\u002Fcec-agent.github.io\u002F)\n- ICLR 2024 submission, **TD-MPC2**: Scalable, Robust World Models for Continuous Control, 
[OpenReview](https:\u002F\u002Fopenreview.net\u002Fforum?id=Oxh5CstDJU)\n- ICLR 2024 submission, **3D Diffuser Actor**: Multi-task 3D Robot Manipulation with Iterative Error Feedback, [OpenReview](https:\u002F\u002Fopenreview.net\u002Fforum?id=UnsLGUCynE)\n- ICLR 2024 submission, **NeRFuser**: Diffusion Guided Multi-Task 3D Policy Learning, [OpenReview](https:\u002F\u002Fopenreview.net\u002Fforum?id=8GmPLkO0oR)\n- arXiv 2023.10, **Foundation Reinforcement Learning**: towards Embodied Generalist Agents with Foundation Prior Assistance, [arXiv](https:\u002F\u002Farxiv.org\u002Fabs\u002F2310.02635)\n- ICCV 2023, **S3IM**: Stochastic Structural SIMilarity and Its Unreasonable Effectiveness for Neural Fields, [Website](https:\u002F\u002Fmadaoer.github.io\u002Fs3im_nerf\u002F)\n- arXiv 2023.09, **Text2Reward**: Automated Dense Reward Function Generation for Reinforcement Learning, [Website](https:\u002F\u002Ftext-to-reward.github.io\u002F) \u002F [arXiv](https:\u002F\u002Farxiv.org\u002Fabs\u002F2309.11489)\n- ICCV 2023, End2End Multi-View Feature Matching with Differentiable Pose Optimization, [Website](https:\u002F\u002Fbarbararoessle.github.io\u002Fe2e_multi_view_matching\u002F)\n- arXiv 2023.10, Aligning Text-to-Image Diffusion Models with Reward Backpropagation, [Website](https:\u002F\u002Falign-prop.github.io\u002F) \u002F [Github](https:\u002F\u002Fgithub.com\u002Fmihirp1998\u002FAlignProp\u002F)\n- NeurIPS 2023, **EDP**: Efficient Diffusion Policies for Offline Reinforcement Learning, [arXiv](https:\u002F\u002Farxiv.org\u002Fabs\u002F2305.20081) \u002F [Github](https:\u002F\u002Fgithub.com\u002Fsail-sg\u002Fedp)\n- arXiv 2023.09, **See to Touch**: Learning Tactile Dexterity through Visual Incentives, [arXiv](https:\u002F\u002Farxiv.org\u002Fabs\u002F2309.12300) \u002F [Website](https:\u002F\u002Fsee-to-touch.github.io\u002F)\n- RSS 2023, **SAM-RL**: Sensing-Aware Model-Based Reinforcement Learning via Differentiable Physics-Based Simulation and Rendering, [arXiv](https:\u002F\u002Farxiv.org\u002Fabs\u002F2210.15185) \u002F [Website](https:\u002F\u002Fsites.google.com\u002Fview\u002Frss-sam-rl)\n- arXiv 2023.09, **MoDem-V2**: Visuo-Motor World Models for Real-World Robot Manipulation, [arXiv](https:\u002F\u002Farxiv.org\u002Fabs\u002F2309.14236) \u002F [Website](https:\u002F\u002Fsites.google.com\u002Fview\u002Fmodem-v2)\n- arXiv 2023.09, **DreamGaussian**: Generative Gaussian Splatting for Efficient 3D Content Creation, [Github](https:\u002F\u002Fgithub.com\u002Fdreamgaussian\u002Fdreamgaussian)\n- arXiv 2023.09, **D3Fields**: Dynamic 3D Descriptor Fields for Zero-Shot Generalizable Robotic Manipulation, [Website](https:\u002F\u002Frobopil.github.io\u002Fd3fields\u002F) \u002F [Github](https:\u002F\u002Fgithub.com\u002FWangYixuan12\u002Fd3fields)\n- arXiv 2023.09, **GELLO**: A General, Low-Cost, and Intuitive Teleoperation Framework for Robot Manipulators, [Website](https:\u002F\u002Fwuphilipp.github.io\u002Fgello_site\u002F) \u002F [arXiv](https:\u002F\u002Farxiv.org\u002Fabs\u002F2309.13037)\n- arXiv 2023.09, Human-Assisted Continual Robot Learning with Foundation Models, [Website](https:\u002F\u002Fsites.google.com\u002Fmit.edu\u002Fhalp-robot-learning) \u002F [arXiv](https:\u002F\u002Farxiv.org\u002Fabs\u002F2309.14321)\n- arXiv 2023.09, Robotic Offline RL from Internet Videos via Value-Function Pre-Training, [arXiv](https:\u002F\u002Farxiv.org\u002Fabs\u002F2309.13041) \u002F 
[Website](https:\u002F\u002Fdibyaghosh.com\u002Fvptr\u002F)\n- ICCV 2023, **PointOdyssey**: A Large-Scale Synthetic Dataset for Long-Term Point Tracking, [arXiv](https:\u002F\u002Farxiv.org\u002Fabs\u002F2307.15055) \u002F [Github](https:\u002F\u002Fgithub.com\u002Faharley\u002Fpips2)\n- arXiv 2023, Compositional Foundation Models for Hierarchical Planning, [Website](https:\u002F\u002Fhierarchical-planning-foundation-model.github.io\u002F)\n- RSS 2022 Best Student Paper Award Finalist, **ACID**: Action-Conditional Implicit Visual Dynamics for Deformable Object Manipulation, [Website](https:\u002F\u002Fb0ku1.github.io\u002Facid\u002F)\n- CoRL 2023, **REBOOT**: Reuse Data for Bootstrapping Efficient Real-World Dexterous Manipulation, [arXiv](https:\u002F\u002Farxiv.org\u002Fabs\u002F2309.03322) \u002F [Website](https:\u002F\u002Fsites.google.com\u002Fview\u002Freboot-dexterous)\n- CoRL 2023, An Unbiased Look at Datasets for Visuo-Motor Pre-Training, [OpenReview](https:\u002F\u002Fopenreview.net\u002Fpdf?id=qVc7NWYTRZ6)\n- CoRL 2023, **Q-Transformer**: Scalable Offline Reinforcement Learning via Autoregressive Q-Functions, [OpenReview](https:\u002F\u002Fopenreview.net\u002Fpdf?id=0I3su3mkuL)\n- ICCV 2023 oral, Tracking Everything Everywhere All at Once, [Website](https:\u002F\u002Fomnimotion.github.io\u002F)\n- arXiv 2023.08, **RoboTAP**: Tracking Arbitrary Points for Few-Shot Visual Imitation, [arXiv](https:\u002F\u002Farxiv.org\u002Fabs\u002F2308.15975)\n- arXiv 2023.06, **DreamSim**: Learning New Dimensions of Human Visual Similarity using Synthetic Data, [arXiv](https:\u002F\u002Farxiv.org\u002Fabs\u002F2306.09344) \u002F [Website](https:\u002F\u002Fdreamsim-nights.github.io\u002F)\n- ICLR 2023 spotlight, **FluidLab**: A Differentiable Environment for Benchmarking Complex Fluid Manipulation, [Website](https:\u002F\u002Ffluidlab2023.github.io\u002F)\n- arXiv 2023.06, **Seal**: Segment Any Point Cloud Sequences by Distilling Vision Foundation Models, [arXiv](https:\u002F\u002Farxiv.org\u002Fabs\u002F2306.09347) \u002F [Website](https:\u002F\u002Fldkong.com\u002FSeal) \u002F [Github](https:\u002F\u002Fgithub.com\u002Fyouquanl\u002FSegment-Any-Point-Cloud)\n- arXiv 2023.08, **BridgeData V2**: A Dataset for Robot Learning at Scale, [arXiv](https:\u002F\u002Farxiv.org\u002Fabs\u002F2308.12952) \u002F [Website](https:\u002F\u002Frail-berkeley.github.io\u002Fbridgedata\u002F)\n- arXiv 2023.08, **Diffusion with Forward Models**: Solving Stochastic Inverse Problems Without Direct Supervision, [Website](https:\u002F\u002Fdiffusion-with-forward-models.github.io\u002F)\n- ICML 2023, **QRL**: Optimal Goal-Reaching Reinforcement Learning via Quasimetric Learning, [Website](https:\u002F\u002Fwww.tongzhouwang.info\u002Fquasimetric_rl\u002F) \u002F [Github](https:\u002F\u002Fgithub.com\u002Fquasimetric-learning\u002Fquasimetric-rl)\n- arXiv 2023.08, **Dynamic 3D Gaussians**: Tracking by Persistent Dynamic View Synthesis, [Website](https:\u002F\u002Fdynamic3dgaussians.github.io\u002F)\n- SIGGRAPH 2023 best paper, 3D Gaussian Splatting for Real-Time Radiance Field Rendering, [Website](https:\u002F\u002Frepo-sam.inria.fr\u002Ffungraph\u002F3d-gaussian-splatting\u002F)\n- CoRL 2022, In-Hand Object Rotation via Rapid Motor Adaptation, [arXiv](https:\u002F\u002Farxiv.org\u002Fabs\u002F2210.04887) \u002F [Website](https:\u002F\u002Fhaozhi.io\u002Fhora\u002F)\n- ICLR 2019, **DPI-Net**: Learning Particle Dynamics for Manipulating Rigid Bodies, 
Deformable Objects, and Fluids, [Website](http:\u002F\u002Fdpi.csail.mit.edu\u002F)\n- ICLR 2019, **Plan Online, Learn Offline**: Efficient Learning and Exploration via Model-Based Control, [arXiv](https:\u002F\u002Farxiv.org\u002Fabs\u002F1811.01848) \u002F [Website](https:\u002F\u002Fsites.google.com\u002Fview\u002Fpolo-mpc)\n- NeurIPS 2021 spotlight, **NeuS**: Learning Neural Implicit Surfaces by Volume Rendering for Multi-view Reconstruction, [Website](https:\u002F\u002Flingjie0206.github.io\u002Fpapers\u002FNeuS\u002F)\n- ICCV 2023, Unsupervised Compositional Concepts Discovery with Text-to-Image Generative Models, [Website](https:\u002F\u002Fenergy-based-model.github.io\u002Funsupervised-concept-discovery\u002F)\n- AAAI 2018, **FiLM**: Visual Reasoning with a General Conditioning Layer, [arXiv](https:\u002F\u002Farxiv.org\u002Fabs\u002F1709.07871)\n- arXiv 2023.08, **RoboAgent**: Towards Sample Efficient Robot Manipulation with Semantic Augmentations and Action Chunking, [Website](https:\u002F\u002Frobopen.github.io\u002F)\n- ICRA 2000, **RRT-Connect**: An Efficient Approach to Single-Query Path Planning, [PDF](http:\u002F\u002Fwww.cs.cmu.edu\u002Fafs\u002Fandrew\u002Fscs\u002Fcs\u002F15-494-sp13\u002Fnslobody\u002FClass\u002Freadings\u002Fkuffner_icra2000.pdf)\n- CVPR 2017 oral, **Network Dissection**: Quantifying Interpretability of Deep Visual Representations, [arXiv](https:\u002F\u002Farxiv.org\u002Fabs\u002F1704.05796) \u002F [Website](http:\u002F\u002Fnetdissect.csail.mit.edu\u002F)\n- NIPS 2020 (spotlight), Fourier Features Let Networks Learn High Frequency Functions in Low Dimensional Domains, [Website](https:\u002F\u002Fbmild.github.io\u002Ffourfeat\u002Findex.html)\n- ICRA 1992, Planning optimal grasps, [PDF](https:\u002F\u002Fpeople.eecs.berkeley.edu\u002F~jfc\u002Fpapers\u002F92\u002FFCicra92.pdf)\n- RSS 2021, **GIGA**: Synergies Between Affordance and Geometry: 6-DoF Grasp Detection via Implicit Representations, [arXiv](https:\u002F\u002Farxiv.org\u002Fabs\u002F2104.01542) \u002F [Website](https:\u002F\u002Fsites.google.com\u002Fview\u002Frpl-giga2021)\n- ECCV 2022, **StARformer**: Transformer with State-Action-Reward Representations for Visual Reinforcement Learning, [arXiv](https:\u002F\u002Farxiv.org\u002Fabs\u002F2110.06206) \u002F [Github](https:\u002F\u002Fgithub.com\u002Felicassion\u002FStARformer)\n- ICML 2023, **Parallel Q-Learning**: Scaling Off-policy Reinforcement Learning under Massively Parallel Simulation, [arXiv](https:\u002F\u002Farxiv.org\u002Fabs\u002F2307.12983v1) \u002F [Github](https:\u002F\u002Fgithub.com\u002FImprobable-AI\u002Fpql)\n- ECCV 2022, **SeedFormer**: Patch Seeds based Point Cloud Completion with Upsample Transformer, [arXiv](https:\u002F\u002Farxiv.org\u002Fabs\u002F2207.10315) \u002F [Github](https:\u002F\u002Fgithub.com\u002Fhrzhou2\u002Fseedformer)\n- arXiv 2023.07, Waypoint-Based Imitation Learning for Robotic Manipulation, [Website](https:\u002F\u002Flucys0.github.io\u002Fawe\u002F)\n- ICML 2022, **Prompt-DT**: Prompting Decision Transformer for Few-Shot Policy Generalization, [Website](https:\u002F\u002Fmxu34.github.io\u002FPromptDT\u002F)\n- arXiv 2023, Reinforcement Learning from Passive Data via Latent Intentions, [Website](https:\u002F\u002Fdibyaghosh.com\u002Ficvf\u002F)\n- ICML 2023, **RPG**: Reparameterized Policy Learning for Multimodal Trajectory Optimization, [Website](https:\u002F\u002Fhaosulab.github.io\u002FRPG\u002F)\n- ICML 2023, **TGRL**: An Algorithm for Teacher Guided Reinforcement Learning, 
[Website](https:\u002F\u002Fsites.google.com\u002Fview\u002Ftgrl-paper)\n- arXiv 2023.07, **XSkill**: Cross Embodiment Skill Discovery, [Website](https:\u002F\u002Fxskillcorl.github.io\u002F) \u002F [arXiv](https:\u002F\u002Farxiv.org\u002Fabs\u002F2307.09955)\n- ICML 2023, Learning Neural Constitutive Laws: From Motion Observations for Generalizable PDE Dynamics, [Website](https:\u002F\u002Fsites.google.com\u002Fview\u002Fnclaw) \u002F [Github](https:\u002F\u002Fgithub.com\u002FPingchuanMa\u002FNCLaw)\n- arXiv 2023.07, **TokenFlow**: Consistent Diffusion Features for Consistent Video Editing, [Website](https:\u002F\u002Fdiffusion-tokenflow.github.io\u002F)\n- arXiv 2023.07, **PAPR**: Proximity Attention Point Rendering, [Website](https:\u002F\u002Fzvict.github.io\u002Fpapr\u002F) \u002F [arXiv](https:\u002F\u002Farxiv.org\u002Fabs\u002F2307.11086)\n- ICCV 2023, **DreamTeacher**: Pretraining Image Backbones with Deep Generative Models, [Website](https:\u002F\u002Fresearch.nvidia.com\u002Flabs\u002Ftoronto-ai\u002FDreamTeacher\u002F) \u002F [arXiv](https:\u002F\u002Farxiv.org\u002Fabs\u002F2307.07487)\n- RSS 2023, Robust and Versatile Bipedal Jumping Control through Reinforcement Learning, [arXiv](https:\u002F\u002Farxiv.org\u002Fabs\u002F2302.09450)\n- arXiv 2023.07, **Differentiable Blocks World**: Qualitative 3D Decomposition by Rendering Primitives, [Website](https:\u002F\u002Fwww.tmonnier.com\u002FDBW\u002F) \u002F [arXiv](https:\u002F\u002Farxiv.org\u002Fabs\u002F2307.05473)\n- ICLR 2023, **DexDeform**: Dexterous Deformable Object Manipulation with Human Demonstrations and Differentiable Physics, [Website](https:\u002F\u002Fsites.google.com\u002Fview\u002Fdexdeform\u002F) \u002F [Github](https:\u002F\u002Fgithub.com\u002Fsizhe-li\u002FDexDeform)\n- arXiv 2023.07, **RPDiff**: Shelving, Stacking, Hanging: Relational Pose Diffusion for Multi-modal Rearrangement, [Website](https:\u002F\u002Fanthonysimeonov.github.io\u002Frpdiff-multi-modal\u002F) \u002F [Github](https:\u002F\u002Fgithub.com\u002Fanthonysimeonov\u002Frpdiff)\n- arXiv 2023.07, **SpawnNet**: Learning Generalizable Visuomotor Skills from Pre-trained Networks, [Website](https:\u002F\u002Fxingyu-lin.github.io\u002Fspawnnet\u002F) \u002F [Github](https:\u002F\u002Fgithub.com\u002Fjohnrso\u002Fspawnnet)\n- RSS 2023, **DexPBT**: Scaling up Dexterous Manipulation for Hand-Arm Systems with Population Based Training, [Website](https:\u002F\u002Fsites.google.com\u002Fview\u002Fdexpbt) \u002F [arXiv](https:\u002F\u002Farxiv.org\u002Fabs\u002F2305.12127)\n- arXiv 2023.07, **KITE**: Keypoint-Conditioned Policies for Semantic Manipulation, [Website](https:\u002F\u002Fsites.google.com\u002Fview\u002Fkite-website\u002Fhome) \u002F [arXiv](https:\u002F\u002Farxiv.org\u002Fabs\u002F2306.16605)\n- arXiv 2023.06, Detector-Free Structure from Motion, [Website](https:\u002F\u002Fzju3dv.github.io\u002FDetectorFreeSfM\u002F) \u002F [arXiv](https:\u002F\u002Farxiv.org\u002Fabs\u002F2306.15669)\n- arXiv 2023.06, **REFLECT**: Summarizing Robot Experiences for FaiLure Explanation and CorrecTion, [arXiv](https:\u002F\u002Farxiv.org\u002Fabs\u002F2306.15724) \u002F [Website](https:\u002F\u002Froboreflect.github.io\u002F)\n- arXiv 2023.06, **ViNT**: A Foundation Model for Visual Navigation, [Website](https:\u002F\u002Fvisualnav-transformer.github.io\u002F)\n- AAAI 2023, Improving Long-Horizon Imitation Through Instruction Prediction, [arXiv](https:\u002F\u002Farxiv.org\u002Fabs\u002F2306.12554) \u002F 
[Github](https:\u002F\u002Fgithub.com\u002Fjhejna\u002Finstruction-prediction)\n- arXiv 2023.06, **RVT**: Robotic View Transformer for 3D Object Manipulation, [Website](https:\u002F\u002Frobotic-view-transformer.github.io\u002F)\n- arXiv 2023.01, **Ponder**: Point Cloud Pre-training via Neural Rendering, [arXiv](https:\u002F\u002Farxiv.org\u002Fabs\u002F2301.00157)\n- arXiv 2023.06, **SGR**: A Universal Semantic-Geometric Representation for Robotic Manipulation, [arXiv](https:\u002F\u002Farxiv.org\u002Fabs\u002F2306.10474) \u002F [Website](https:\u002F\u002Fsemantic-geometric-representation.github.io\u002F)\n- arXiv 2023.06, Robot Learning with Sensorimotor Pre-training, [arXiv](https:\u002F\u002Farxiv.org\u002Fabs\u002F2306.10007) \u002F [Website](https:\u002F\u002Frobotic-pretrained-transformer.github.io\u002F)\n- arXiv 2023.06, For SALE: State-Action Representation Learning for Deep Reinforcement Learning, [arXiv](https:\u002F\u002Farxiv.org\u002Fabs\u002F2306.02451) \u002F [Github](https:\u002F\u002Fgithub.com\u002Fsfujim\u002FTD7)\n- arXiv 2023.06, **HomeRobot**: Open Vocabulary Mobile Manipulation, [Website](https:\u002F\u002Fovmm.github.io\u002F)\n- arXiv 2023.06, Lifelike Agility and Play on Quadrupedal Robots using Reinforcement Learning and Deep Pre-trained Models, [Website](https:\u002F\u002Ftencent-roboticsx.github.io\u002Flifelike-agility-and-play\u002F)\n- arXiv 2023.06, **TAPIR**: Tracking Any Point with per-frame Initialization and temporal Refinement, [Website](https:\u002F\u002Fdeepmind-tapir.github.io\u002F)\n- CVPR 2017, **I3D**: Quo Vadis, Action Recognition? A New Model and the Kinetics Dataset, [arXiv](https:\u002F\u002Farxiv.org\u002Fabs\u002F1705.07750)\n- arXiv 2023.06, Diffusion Models for Zero-Shot Open-Vocabulary Segmentation, [Website](https:\u002F\u002Fwww.robots.ox.ac.uk\u002F~vgg\u002Fresearch\u002Fovdiff\u002F)\n- arXiv 2023.06, **R-MAE**: Regions Meet Masked Autoencoders, [arXiv](https:\u002F\u002Farxiv.org\u002Fabs\u002F2306.05411) \u002F [Github](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fr-mae)\n- arXiv 2023.05, **Optimus**: Imitating Task and Motion Planning with Visuomotor Transformers, [Website](https:\u002F\u002Fmihdalal.github.io\u002Foptimus\u002F)\n- arXiv 2023.05, Video Prediction Models as Rewards for Reinforcement Learning, [arXiv](https:\u002F\u002Farxiv.org\u002Fabs\u002F2305.14343) \u002F [Website](https:\u002F\u002Fwww.escontrela.me\u002Fviper\u002F)\n- ICML 2023, **VIMA**: General Robot Manipulation with Multimodal Prompts, [Website](https:\u002F\u002Fvimalabs.github.io\u002F) \u002F [Github](https:\u002F\u002Fgithub.com\u002Fvimalabs\u002FVIMA)\n- arXiv 2023.05, **SPRING**: GPT-4 Out-performs RL Algorithms by Studying Papers and Reasoning, [arXiv](https:\u002F\u002Farxiv.org\u002Fabs\u002F2305.15486)\n- arXiv 2023.05, Training Diffusion Models with Reinforcement Learning, [Website](https:\u002F\u002Frl-diffusion.github.io\u002F)\n- arXiv 2023.03, Foundation Models for Decision Making: Problems, Methods, and Opportunities, [arXiv](https:\u002F\u002Farxiv.org\u002Fabs\u002F2303.04129)\n- ICLR 2017, Third-Person Imitation Learning, [arXiv](https:\u002F\u002Farxiv.org\u002Fabs\u002F1703.01703)\n- arXiv 2023.04, **CoTPC**: Chain-of-Thought Predictive Control, [Website](https:\u002F\u002Fzjia.eng.ucsd.edu\u002Fcotpc)\n- CVPR 2023 highlight, **ImageBind**: One embedding to bind them all, [Website](https:\u002F\u002Fimagebind.metademolab.com\u002F) \u002F 
[Github](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002FImageBind)\n- arXiv 2023.05, **Shap-E**: Generating Conditional 3D Implicit Functions, [Github](https:\u002F\u002Fgithub.com\u002Fopenai\u002Fshap-e)\n- arXiv 2023.04, **Track Anything**: Segment Anything Meets Videos, [Github](https:\u002F\u002Fgithub.com\u002Fgaomingqi\u002Ftrack-anything)\n- CVPR 2023, **GLaD**: Generalizing Dataset Distillation via Deep Generative Prior, [Website](https:\u002F\u002Fgeorgecazenavette.github.io\u002Fglad\u002F)\n- CVPR 2022 oral, **RegNeRF**: Regularizing Neural Radiance Fields for View Synthesis from Sparse Inputs, [Website](https:\u002F\u002Fm-niemeyer.github.io\u002Fregnerf\u002F)\n- CVPR 2023, **FreeNeRF**: Improving Few-shot Neural Rendering with Free Frequency Regularization, [Website](https:\u002F\u002Fjiawei-yang.github.io\u002FFreeNeRF\u002F) \u002F [Github](https:\u002F\u002Fgithub.com\u002FJiawei-Yang\u002FFreeNeRF)\n- ICLR 2023 oral, **Decision-Diffuser**: Is Conditional Generative Modeling all you need for Decision-Making?, [Website](https:\u002F\u002Fanuragajay.github.io\u002Fdecision-diffuser\u002F)\n- CVPR 2022, **Depth-supervised NeRF**: Fewer Views and Faster Training for Free, [Website](http:\u002F\u002Fwww.cs.cmu.edu\u002F~dsnerf\u002F)\n- SIGGRAPH Asia 2022, **ENeRF**: Efficient Neural Radiance Fields for Interactive Free-viewpoint Video, [Website](https:\u002F\u002Fzju3dv.github.io\u002Fenerf\u002F)\n- ICML 2023, On the power of foundation models, [arXiv](https:\u002F\u002Farxiv.org\u002Fabs\u002F2211.16327)\n- ICML 2023, **SNeRL**: Semantic-aware Neural Radiance Fields for Reinforcement Learning, [Website](https:\u002F\u002Fsjlee.cc\u002Fsnerl\u002F)\n- ICLR 2023 outstanding paper, Emergence of Maps in the Memories of Blind Navigation Agents, [Openreview](https:\u002F\u002Fopenreview.net\u002Fforum?id=lTt4KjHSsyl)\n- ICLR 2023 outstanding paper honorable mentions, Disentanglement with Biological Constraints: A Theory of Functional Cell Types, [Openreview](https:\u002F\u002Fopenreview.net\u002Fforum?id=9Z_GfhZnGH)\n- CVPR 2023 award candidate, Data-driven Feature Tracking for Event Cameras, [arXiv](https:\u002F\u002Farxiv.org\u002Fabs\u002F2211.12826)\n- CVPR 2023 award candidate, What Can Human Sketches Do for Object Detection?, [Website](http:\u002F\u002Fwww.pinakinathc.me\u002Fsketch-detect\u002F)\n- CVPR 2023 award candidate, Visual Programming for Compositional Visual Reasoning, [Website](https:\u002F\u002Fprior.allenai.org\u002Fprojects\u002Fvisprog)\n- CVPR 2023 award candidate, On Distillation of Guided Diffusion Models, [arXiv](https:\u002F\u002Farxiv.org\u002Fabs\u002F2210.03142)\n- CVPR 2023 award candidate, **DreamBooth**: Fine Tuning Text-to-Image Diffusion Models for Subject-Driven Generation, [Website](https:\u002F\u002Fdreambooth.github.io\u002F)\n- CVPR 2023 award candidate, Planning-oriented Autonomous Driving, [Github](https:\u002F\u002Fgithub.com\u002FOpenDriveLab\u002FUniAD)\n- CVPR 2023 award candidate, Neural Dynamic Image-Based Rendering, [Website](https:\u002F\u002Fdynibar.github.io\u002F)\n- CVPR 2023 award candidate, **MobileNeRF**: Exploiting the Polygon Rasterization Pipeline for Efficient Neural Field Rendering on Mobile Architectures, [Website](https:\u002F\u002Fmobile-nerf.github.io\u002F)\n- CVPR 2023 award candidate, **OmniObject3D**: Large Vocabulary 3D Object Dataset for Realistic Perception, Reconstruction and Generation, [Website](https:\u002F\u002Fomniobject3d.github.io\u002F)\n- CVPR 2023 award candidate, Ego-Body Pose Estimation 
via Ego-Head Pose Estimation, [Website](https:\u002F\u002Flijiaman.github.io\u002Fprojects\u002Fegoego\u002F)\n- CVPR 2023, Affordances from Human Videos as a Versatile Representation for Robotics, [Website](https:\u002F\u002Frobo-affordances.github.io\u002F)\n- CVPR 2022, Neural 3D Video Synthesis from Multi-view Video, [Website](https:\u002F\u002Fneural-3d-video.github.io\u002F)\n- ICCV 2021, **Nerfies**: Deformable Neural Radiance Fields, [Website](https:\u002F\u002Fnerfies.github.io\u002F) \u002F [Github](https:\u002F\u002Fgithub.com\u002Fgoogle\u002Fnerfies)\n- CVPR 2023 highlight, **HyperReel**: High-Fidelity 6-DoF Video with Ray-Conditioned Sampling, [Website](https:\u002F\u002Fhyperreel.github.io\u002F) \u002F [Github](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fhyperreel)\n- arXiv 2022.05, **FlashAttention**: Fast and Memory-Efficient Exact Attention with IO-Awareness, [arXiv](https:\u002F\u002Farxiv.org\u002Fabs\u002F2205.14135) \u002F [Github](https:\u002F\u002Fgithub.com\u002FHazyResearch\u002Fflash-attention)\n- CVPR 2023, **CLIP^2**: Contrastive Language-Image-Point Pretraining from Real-World Point Cloud Data, [arXiv](https:\u002F\u002Farxiv.org\u002Fabs\u002F2303.12417)\n- CVPR 2023, **ULIP**: Learning a Unified Representation of Language, Images, and Point Clouds for 3D Understanding, [arXiv](https:\u002F\u002Farxiv.org\u002Fabs\u002F2212.05171) \u002F [Github](https:\u002F\u002Fgithub.com\u002Fsalesforce\u002FULIP)\n- CVPR 2023, Learning Video Representations from Large Language Models, [Website](https:\u002F\u002Ffacebookresearch.github.io\u002FLaViLa\u002F) \u002F [Github](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002FLaViLa)\n- CVPR 2023, **PLA**: Language-Driven Open-Vocabulary 3D Scene Understanding, [Website](https:\u002F\u002Fdingry.github.io\u002Fprojects\u002FPLA)\n- CVPR 2023, **PartSLIP**: Low-Shot Part Segmentation for 3D Point Clouds via Pretrained Image-Language Models, [arXiv](https:\u002F\u002Farxiv.org\u002Fabs\u002F2212.01558)\n- CVPR 2023, Mask-Free Video Instance Segmentation, [Website](https:\u002F\u002Fwww.vis.xyz\u002Fpub\u002Fmaskfreevis\u002F) \u002F [Github](https:\u002F\u002Fgithub.com\u002FSysCV\u002Fmaskfreevis)\n- arXiv 2023.04, **DINOv2**: Learning Robust Visual Features without Supervision, [arXiv](https:\u002F\u002Farxiv.org\u002Fabs\u002F2304.07193) \u002F [Github](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fdinov2)\n- arXiv 2023.04, Zip-NeRF: Anti-Aliased Grid-Based Neural Radiance Fields, [Website](https:\u002F\u002Fjonbarron.info\u002Fzipnerf\u002F)\n- arXiv 2023.04, SEEM: Segment Everything Everywhere All at Once, [arXiv](https:\u002F\u002Farxiv.org\u002Fabs\u002F2304.06718) \u002F [code](https:\u002F\u002Fgithub.com\u002FUX-Decoder\u002FSegment-Everything-Everywhere-All-At-Once)\n- arXiv 2023.04, Internet Explorer: Targeted Representation Learning on the Open Web, [page](https:\u002F\u002Finternet-explorer-ssl.github.io\u002F) \u002F [code](https:\u002F\u002Fgithub.com\u002Finternet-explorer-ssl\u002Finternet-explorer)\n- arXiv 2023.03, Consistency Models, [code](https:\u002F\u002Fgithub.com\u002Fopenai\u002Fconsistency_models) \u002F [arXiv](https:\u002F\u002Farxiv.org\u002Fabs\u002F2303.01469)\n- arXiv 2023.02, SceneDreamer: Unbounded 3D Scene Generation from 2D Image Collections, [code](https:\u002F\u002Fgithub.com\u002FFrozenBurning\u002FSceneDreamer) \u002F [page](https:\u002F\u002Fscene-dreamer.github.io\u002F)\n- arXiv 2023.04, Generative Agents: Interactive Simulacra of Human Behavior, 
[arXiv](https:\u002F\u002Farxiv.org\u002Fabs\u002F2304.03442)\n- ICLR 2023 notable, NTFields: Neural Time Fields for Physics-Informed Robot Motion Planning, [OpenReview](https:\u002F\u002Fopenreview.net\u002Fforum?id=ApF0dmi1_9K)\n- arXiv 2023, For Pre-Trained Vision Models in Motor Control, Not All Policy Learning Methods are Created Equal, [arXiv](https:\u002F\u002Farxiv.org\u002Fabs\u002F2304.04591)\n- code, Instruct-NeRF2NeRF: Editing 3D Scenes with Instructions, [GitHub](https:\u002F\u002Fgithub.com\u002Fayaanzhaque\u002Finstruct-nerf2nerf)\n- arXiv 2023, Grounding DINO: Marrying DINO with Grounded Pre-Training for Open-Set Object Detection, [arXiv](https:\u002F\u002Farxiv.org\u002Fabs\u002F2303.05499) \u002F [GitHub](https:\u002F\u002Fgithub.com\u002FIDEA-Research\u002FGroundingDINO)\n- arXiv 2023, Zero-1-to-3: Zero-shot One Image to 3D Object, [arXiv](https:\u002F\u002Farxiv.org\u002Fabs\u002F2303.11328)\n- ICLR 2023, Towards Stable Test-Time Adaptation in Dynamic Wild World, [arXiv](https:\u002F\u002Farxiv.org\u002Fabs\u002F2302.12400)\n- CVPR 2023 highlight, Neural Volumetric Memory for Visual Locomotion Control, [Website](https:\u002F\u002Frchalyang.github.io\u002FNVM\u002F)\n- arXiv 2023, Segment Anything, [Website](https:\u002F\u002Fsegment-anything.com\u002F)\n- ICRA 2023, DribbleBot: Dynamic Legged Manipulation in the Wild, [Website](https:\u002F\u002Fgmargo11.github.io\u002Fdribblebot\u002F)\n- arXiv 2023, Alpaca: A Strong, Replicable Instruction-Following Model, [Website](https:\u002F\u002Fcrfm.stanford.edu\u002F2023\u002F03\u002F13\u002Falpaca.html)\n- arXiv 2023, VC-1: Where are we in the search for an Artificial Visual Cortex for Embodied Intelligence?, [Website](https:\u002F\u002Feai-vc.github.io\u002F)\n- ICLR 2022, DroQ: Dropout Q-Functions for Doubly Efficient Reinforcement Learning, [arXiv](https:\u002F\u002Farxiv.org\u002Fabs\u002F2110.02034)\n- arXiv 2023, RoboPianist: A Benchmark for High-Dimensional Robot Control, [Website](https:\u002F\u002Fkzakka.com\u002Frobopianist\u002F)\n- ICLR 2021, DDIM: Denoising Diffusion Implicit Models, [arXiv](https:\u002F\u002Farxiv.org\u002Fabs\u002F2010.02502)\n- arXiv 2023, Your Diffusion Model is Secretly a Zero-Shot Classifier, [Website](https:\u002F\u002Fdiffusion-classifier.github.io\u002F)\n- CVPR 2023 highlight, F2-NeRF: Fast Neural Radiance Field Training with Free Camera Trajectories\n- arXiv 2023, Learning Fine-Grained Bimanual Manipulation with Low-Cost Hardware, [Website](https:\u002F\u002Ftonyzhaozh.github.io\u002Faloha\u002F)\n- RSS 2021, RMA: Rapid Motor Adaptation for Legged Robots, [Website](https:\u002F\u002Fashish-kmr.github.io\u002Frma-legged-robots\u002F)\n- ICCV 2021, Where2Act: From Pixels to Actions for Articulated 3D Objects, [Website](https:\u002F\u002Fcs.stanford.edu\u002F~kaichun\u002Fwhere2act\u002F)\n- CVPR 2019 oral, Semantic Image Synthesis with Spatially-Adaptive Normalization, [GitHub](https:\u002F\u002Fgithub.com\u002FNVlabs\u002FSPADE)\n\n# 联系方式\n如果您有任何问题或建议，请随时通过 lastyanjieze@gmail.com 与我联系。","# Paper-List 快速上手指南\n\nPaper-List 是由 Yanjie Ze 维护的机器人学与人工智能领域论文索引集合，涵盖人形机器人、灵巧操作、3D 机器人学习及基础模型等前沿方向。本项目主要为静态资源列表，无需复杂的环境配置或编译安装。\n\n## 环境准备\n\n本项目本质为 Markdown 文档与链接集合，对运行环境无特殊要求。\n\n*   **操作系统**：Windows \u002F macOS \u002F Linux 均可。\n*   **前置依赖**：\n    *   现代网页浏览器（推荐 Chrome, Edge 或 Firefox）用于查看整理好的论文列表。\n    *   （可选）Git：用于克隆仓库到本地以便离线浏览或贡献。\n    *   （可选）Markdown 编辑器（如 VS Code, Typora）：用于本地阅读 `.md` 文件。\n\n## 安装步骤\n\n由于项目不包含可执行代码，所谓的“安装”即为获取源代码。推荐使用 Git 克隆，或直接在线浏览。\n\n### 方式一：使用 Git 克隆（推荐）\n\n打开终端（Terminal 或 
PowerShell），执行以下命令：\n\n```bash\ngit clone https:\u002F\u002Fgithub.com\u002FYanjieZe\u002FPaper-List.git\ncd Paper-List\n```\n\n*国内用户若遇到连接缓慢，可使用 Gitee 镜像（如有）或通过代理加速，或直接下载 ZIP 包解压。*\n\n### 方式二：在线浏览\n\n直接访问 GitHub 仓库页面查看最新整理的分类目录：\n[https:\u002F\u002Fgithub.com\u002FYanjieZe\u002FPaper-List](https:\u002F\u002Fgithub.com\u002FYanjieZe\u002FPaper-List)\n\n## 基本使用\n\n获取项目后，您可以根据研究兴趣直接查阅对应的主题文件或会议列表。\n\n### 1. 按研究领域查阅\n\n进入 `topics` 目录，查看特定细分领域的论文汇总。例如，查看**灵巧操作**（Dexterous Manipulation）相关论文：\n\n```bash\n# 在本地仓库中查看\ncat topics\u002Fdex_manipulation.md\n\n# 或在 GitHub 在线查看\n# https:\u002F\u002Fgithub.com\u002FYanjieZe\u002FPaper-List\u002Fblob\u002Fmain\u002Ftopics\u002Fdex_manipulation.md\n```\n\n支持的主要主题包括：\n*   **人形机器人**: `topics\u002Fhumanoid_robots.md` (关联外部仓库)\n*   **3D 机器人学习**: `topics\u002F3d_robotic_learning.md`\n*   **机器人基础模型**: `topics\u002Frobot_foundation_models.md`\n*   **最佳论文**: `topics\u002Fbest_papers.md`\n\n### 2. 按会议年份查阅\n\n在根目录的 `README.md` 中，论文已按年份和顶级会议（如 CVPR, RSS, CoRL, ICLR, NeurIPS 等）分类。\n\n**使用示例**：\n若您想查找 **2024 年 CoRL 会议** 的录用论文及统计数据，直接在文档中定位到 `2024` -> `CoRL 2024` 部分，点击对应链接即可跳转至官方录用列表或统计分析页。\n\n### 3. 追踪最新随机论文\n\n关注 `README.md` 中的 `Recent Random Papers` 章节，这里汇集了最新的 arXiv 预印本和顶会亮点论文（如 *3D Diffusion Policy*, *DexterityGen*, *Cosmos* 等）。每个条目均附带论文标题、来源年份及项目网站\u002F代码链接，点击即可直达资源。\n\n> **提示**：该项目主要作为导航索引，点击列表中的 `[Website]`, `[arXiv]`, `[github]` 等链接即可获取论文的详细信息、代码实现或演示视频。","某机器人实验室的博士生正在为人形机器人抓取任务寻找最新的 3D 视觉策略，急需梳理近两年的顶会成果以确立研究方向。\n\n### 没有 Paper-List 时\n- **信息检索碎片化**：需要在 CVPR、CoRL、RSS 等多个会议官网间反复跳转，手动筛选与“灵巧操作”相关的论文，耗时且容易遗漏。\n- **缺乏领域聚焦**：通用论文列表中包含大量无关内容，难以快速定位到人形机器人或 3D 机器人学习等细分垂直领域的核心文章。\n- **前沿动态滞后**：无法及时获取如\"3D Diffusion Policy\"或\"DexWrist\"等刚发布的 arXiv 预印本及最新会议录用论文，导致研究起点落后于社区进度。\n- **历史脉络断裂**：难以系统性地回顾从 2022 年到 2025 年的技术演进路径，缺乏对最佳论文（Best Papers）的结构化整理。\n\n### 使用 Paper-List 后\n- **一站式聚合导航**：直接通过分类目录访问按年份和会议（如 ICML 2023、CVPR 2024）整理好的链接，瞬间锁定目标文献池。\n- **垂直主题精准直达**：利用\"Humanoid Robots\"和\"Dexterous Manipulation\"等专属主题索引，直接切入最相关的技术栈，过滤噪音。\n- **实时追踪前沿**：通过\"Recent Random Papers\"板块即时发现最新的零样本立体匹配或大行为模型研究，确保实验方案基于 SOTA（当前最先进）技术。\n- **知识体系结构化**：借助按时间轴排列的顶会清单和精选最佳论文，快速构建起从基础理论到最新突破的完整认知地图。\n\nPaper-List 将原本分散杂乱的学术情报转化为结构化的领域知识图谱，极大缩短了科研人员从“寻找问题”到“验证方案”的探索周期。","https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FYanjieZe_Paper-List_6102348d.png","YanjieZe","Yanjie Ze","https:\u002F\u002Foss.gittoolsai.com\u002Favatars\u002FYanjieZe_09d65cb8.png","A young roboticist","Stanford University",null,"ZeYanjie","https:\u002F\u002Fyanjieze.com","https:\u002F\u002Fgithub.com\u002FYanjieZe",528,23,"2026-04-14T01:42:30",1,"","未说明",{"notes":88,"python":86,"dependencies":89},"该工具（Paper-List）并非可执行的软件或模型代码库，而是一个静态的学术论文索引列表（Awesome List）。它主要包含指向外部会议网站、arXiv 论文页面以及其他开源项目仓库的超链接。因此，该工具本身没有操作系统、GPU、内存、Python 版本或依赖库的运行环境需求，用户只需通过浏览器访问即可查看内容。",[],[91,14,15],"其他",[93,94,95,96],"research","computer-vision","robotics","machine-learning","2026-03-27T02:49:30.150509","2026-04-15T10:58:56.853408",[],[]]