[{"data":1,"prerenderedAt":-1},["ShallowReactive",2],{"similar-facebookresearch--projectaria_tools":3,"tool-facebookresearch--projectaria_tools":62},[4,18,26,35,44,53],{"id":5,"name":6,"github_repo":7,"description_zh":8,"stars":9,"difficulty_score":10,"last_commit_at":11,"category_tags":12,"status":17},4358,"openclaw","openclaw\u002Fopenclaw","OpenClaw 是一款专为个人打造的本地化 AI 助手，旨在让你在自己的设备上拥有完全可控的智能伙伴。它打破了传统 AI 助手局限于特定网页或应用的束缚，能够直接接入你日常使用的各类通讯渠道，包括微信、WhatsApp、Telegram、Discord、iMessage 等数十种平台。无论你在哪个聊天软件中发送消息，OpenClaw 都能即时响应，甚至支持在 macOS、iOS 和 Android 设备上进行语音交互，并提供实时的画布渲染功能供你操控。\n\n这款工具主要解决了用户对数据隐私、响应速度以及“始终在线”体验的需求。通过将 AI 部署在本地，用户无需依赖云端服务即可享受快速、私密的智能辅助，真正实现了“你的数据，你做主”。其独特的技术亮点在于强大的网关架构，将控制平面与核心助手分离，确保跨平台通信的流畅性与扩展性。\n\nOpenClaw 非常适合希望构建个性化工作流的技术爱好者、开发者，以及注重隐私保护且不愿被单一生态绑定的普通用户。只要具备基础的终端操作能力（支持 macOS、Linux 及 Windows WSL2），即可通过简单的命令行引导完成部署。如果你渴望拥有一个懂你",349277,3,"2026-04-06T06:32:30",[13,14,15,16],"Agent","开发框架","图像","数据工具","ready",{"id":19,"name":20,"github_repo":21,"description_zh":22,"stars":23,"difficulty_score":10,"last_commit_at":24,"category_tags":25,"status":17},3808,"stable-diffusion-webui","AUTOMATIC1111\u002Fstable-diffusion-webui","stable-diffusion-webui 是一个基于 Gradio 构建的网页版操作界面，旨在让用户能够轻松地在本地运行和使用强大的 Stable Diffusion 图像生成模型。它解决了原始模型依赖命令行、操作门槛高且功能分散的痛点，将复杂的 AI 绘图流程整合进一个直观易用的图形化平台。\n\n无论是希望快速上手的普通创作者、需要精细控制画面细节的设计师，还是想要深入探索模型潜力的开发者与研究人员，都能从中获益。其核心亮点在于极高的功能丰富度：不仅支持文生图、图生图、局部重绘（Inpainting）和外绘（Outpainting）等基础模式，还独创了注意力机制调整、提示词矩阵、负向提示词以及“高清修复”等高级功能。此外，它内置了 GFPGAN 和 CodeFormer 等人脸修复工具，支持多种神经网络放大算法，并允许用户通过插件系统无限扩展能力。即使是显存有限的设备，stable-diffusion-webui 也提供了相应的优化选项，让高质量的 AI 艺术创作变得触手可及。",162132,"2026-04-05T11:01:52",[14,15,13],{"id":27,"name":28,"github_repo":29,"description_zh":30,"stars":31,"difficulty_score":32,"last_commit_at":33,"category_tags":34,"status":17},2271,"ComfyUI","Comfy-Org\u002FComfyUI","ComfyUI 是一款功能强大且高度模块化的视觉 AI 引擎，专为设计和执行复杂的 Stable Diffusion 图像生成流程而打造。它摒弃了传统的代码编写模式，采用直观的节点式流程图界面，让用户通过连接不同的功能模块即可构建个性化的生成管线。\n\n这一设计巧妙解决了高级 AI 
绘图工作流配置复杂、灵活性不足的痛点。用户无需具备编程背景，也能自由组合模型、调整参数并实时预览效果，轻松实现从基础文生图到多步骤高清修复等各类复杂任务。ComfyUI 拥有极佳的兼容性，不仅支持 Windows、macOS 和 Linux 全平台，还广泛适配 NVIDIA、AMD、Intel 及 Apple Silicon 等多种硬件架构，并率先支持 SDXL、Flux、SD3 等前沿模型。\n\n无论是希望深入探索算法潜力的研究人员和开发者，还是追求极致创作自由度的设计师与资深 AI 绘画爱好者，ComfyUI 都能提供强大的支持。其独特的模块化架构允许社区不断扩展新功能，使其成为当前最灵活、生态最丰富的开源扩散模型工具之一，帮助用户将创意高效转化为现实。",108322,2,"2026-04-10T11:39:34",[14,15,13],{"id":36,"name":37,"github_repo":38,"description_zh":39,"stars":40,"difficulty_score":32,"last_commit_at":41,"category_tags":42,"status":17},6121,"gemini-cli","google-gemini\u002Fgemini-cli","gemini-cli 是一款由谷歌推出的开源 AI 命令行工具，它将强大的 Gemini 大模型能力直接集成到用户的终端环境中。对于习惯在命令行工作的开发者而言，它提供了一条从输入提示词到获取模型响应的最短路径，无需切换窗口即可享受智能辅助。\n\n这款工具主要解决了开发过程中频繁上下文切换的痛点，让用户能在熟悉的终端界面内直接完成代码理解、生成、调试以及自动化运维任务。无论是查询大型代码库、根据草图生成应用，还是执行复杂的 Git 操作，gemini-cli 都能通过自然语言指令高效处理。\n\n它特别适合广大软件工程师、DevOps 人员及技术研究人员使用。其核心亮点包括支持高达 100 万 token 的超长上下文窗口，具备出色的逻辑推理能力；内置 Google 搜索、文件操作及 Shell 命令执行等实用工具；更独特的是，它支持 MCP（模型上下文协议），允许用户灵活扩展自定义集成，连接如图像生成等外部能力。此外，个人谷歌账号即可享受免费的额度支持，且项目基于 Apache 2.0 协议完全开源，是提升终端工作效率的理想助手。",100752,"2026-04-10T01:20:03",[43,13,15,14],"插件",{"id":45,"name":46,"github_repo":47,"description_zh":48,"stars":49,"difficulty_score":10,"last_commit_at":50,"category_tags":51,"status":17},4487,"LLMs-from-scratch","rasbt\u002FLLMs-from-scratch","LLMs-from-scratch 是一个基于 PyTorch 的开源教育项目，旨在引导用户从零开始一步步构建一个类似 ChatGPT 的大型语言模型（LLM）。它不仅是同名技术著作的官方代码库，更提供了一套完整的实践方案，涵盖模型开发、预训练及微调的全过程。\n\n该项目主要解决了大模型领域“黑盒化”的学习痛点。许多开发者虽能调用现成模型，却难以深入理解其内部架构与训练机制。通过亲手编写每一行核心代码，用户能够透彻掌握 Transformer 架构、注意力机制等关键原理，从而真正理解大模型是如何“思考”的。此外，项目还包含了加载大型预训练权重进行微调的代码，帮助用户将理论知识延伸至实际应用。\n\nLLMs-from-scratch 特别适合希望深入底层原理的 AI 开发者、研究人员以及计算机专业的学生。对于不满足于仅使用 
API，而是渴望探究模型构建细节的技术人员而言，这是极佳的学习资源。其独特的技术亮点在于“循序渐进”的教学设计：将复杂的系统工程拆解为清晰的步骤，配合详细的图表与示例，让构建一个虽小但功能完备的大模型变得触手可及。无论你是想夯实理论基础，还是为未来研发更大规模的模型做准备",90106,"2026-04-06T11:19:32",[52,15,13,14],"语言模型",{"id":54,"name":55,"github_repo":56,"description_zh":57,"stars":58,"difficulty_score":10,"last_commit_at":59,"category_tags":60,"status":17},4292,"Deep-Live-Cam","hacksider\u002FDeep-Live-Cam","Deep-Live-Cam 是一款专注于实时换脸与视频生成的开源工具，用户仅需一张静态照片，即可通过“一键操作”实现摄像头画面的即时变脸或制作深度伪造视频。它有效解决了传统换脸技术流程繁琐、对硬件配置要求极高以及难以实时预览的痛点，让高质量的数字内容创作变得触手可及。\n\n这款工具不仅适合开发者和技术研究人员探索算法边界，更因其极简的操作逻辑（仅需三步：选脸、选摄像头、启动），广泛适用于普通用户、内容创作者、设计师及直播主播。无论是为了动画角色定制、服装展示模特替换，还是制作趣味短视频和直播互动，Deep-Live-Cam 都能提供流畅的支持。\n\n其核心技术亮点在于强大的实时处理能力，支持口型遮罩（Mouth Mask）以保留使用者原始的嘴部动作，确保表情自然精准；同时具备“人脸映射”功能，可同时对画面中的多个主体应用不同面孔。此外，项目内置了严格的内容安全过滤机制，自动拦截涉及裸露、暴力等不当素材，并倡导用户在获得授权及明确标注的前提下合规使用，体现了技术发展与伦理责任的平衡。",88924,"2026-04-06T03:28:53",[14,15,13,61],"视频",{"id":63,"github_repo":64,"name":65,"description_en":66,"description_zh":67,"ai_summary_zh":68,"readme_en":69,"readme_zh":70,"quickstart_zh":71,"use_case_zh":72,"hero_image_url":73,"owner_login":74,"owner_name":75,"owner_avatar_url":76,"owner_bio":77,"owner_company":78,"owner_location":78,"owner_email":78,"owner_twitter":78,"owner_website":79,"owner_url":80,"languages":81,"stars":119,"forks":120,"last_commit_at":121,"license":122,"difficulty_score":32,"env_os":123,"env_gpu":123,"env_ram":123,"env_deps":124,"category_tags":127,"github_topics":78,"view_count":32,"oss_zip_url":78,"oss_zip_packed_at":78,"status":17,"created_at":129,"updated_at":130,"faqs":131,"releases":162},7389,"facebookresearch\u002Fprojectaria_tools","projectaria_tools","projectaria_tools is a C++\u002FPython open-source toolkit to interact with Project Aria data. 
","projectaria_tools 是一套专为处理 Project Aria 数据而设计的开源工具包，支持 C++ 和 Python 语言。它旨在帮助研究人员和开发者更轻松地访问、解析及可视化来自 Aria 眼镜的多模态传感器数据，从而推动增强现实（AR）、机器感知和人工智能领域的创新研究。\n\n面对 Aria 设备产生的海量且复杂的原始数据（如高清图像、眼动追踪、惯性测量等），直接处理往往门槛较高。projectaria_tools 通过提供统一的 API 接口，完美兼容第一代（Gen1）和第二代（Gen2）设备数据，有效解决了数据格式不统一和读取困难的问题。特别是针对最新的 Aria Gen2，该工具不仅支持新增的 1200 万像素摄像头、健康传感器等设备特性，还内置了眼动与手部追踪等端侧机器学习算法的直接调用能力。\n\n这套工具非常适合计算机视觉研究员、AR\u002FVR 开发者以及从事多模态感知研究的科研人员使用。其独特的技术亮点包括全新的 `aria_rerun_viewer` 交互式 3D 可视化工具，能够直观呈现空间数据；同时提供基于 Google Colab 的系列教程，帮助用户快","projectaria_tools 是一套专为处理 Project Aria 数据而设计的开源工具包，支持 C++ 和 Python 语言。它旨在帮助研究人员和开发者更轻松地访问、解析及可视化来自 Aria 眼镜的多模态传感器数据，从而推动增强现实（AR）、机器感知和人工智能领域的创新研究。\n\n面对 Aria 设备产生的海量且复杂的原始数据（如高清图像、眼动追踪、惯性测量等），直接处理往往门槛较高。projectaria_tools 通过提供统一的 API 接口，完美兼容第一代（Gen1）和第二代（Gen2）设备数据，有效解决了数据格式不统一和读取困难的问题。特别是针对最新的 Aria Gen2，该工具不仅支持新增的 1200 万像素摄像头、健康传感器等设备特性，还内置了眼动与手部追踪等端侧机器学习算法的直接调用能力。\n\n这套工具非常适合计算机视觉研究员、AR\u002FVR 开发者以及从事多模态感知研究的科研人员使用。其独特的技术亮点包括全新的 `aria_rerun_viewer` 交互式 3D 可视化工具，能够直观呈现空间数据；同时提供基于 Google Colab 的系列教程，帮助用户快速上手从基础数据加载到多传感器流式处理的各种场景。无论是进行算法验证还是构建新应用，projectaria_tools 都是探索真实世界感知数据的得力助手。","# Project Aria Tools\n\nProject Aria Tools is a suite of C++\u002FPython utilities to help researchers expand\nthe horizons of Augmented Reality, Machine Perception and Artificial\nIntelligence with [Project Aria](https:\u002F\u002Fprojectaria.com\u002F). It is designed to\nmake it easier to use Aria data and its open datasets. 
It supports **both Aria\nGen1 and Aria Gen2 data**.\n\n\u003Cdiv align=\"center\">\n  \u003Ca href=\"https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fprojectaria_tools\u002Freleases\">\u003Cimg alt=\"Latest Release\" src=\"https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Fv\u002Frelease\u002Ffacebookresearch\u002Fprojectaria_tools.svg\" \u002F>\u003C\u002Fa>\n  \u003Ca href=\"https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fprojectaria_tools\u002Fblob\u002Fmain\u002FLICENSE\">\n  \u003Cimg alt=\"license\" src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FLicense-Apache--2.0-blue.svg\"\u002F>\u003C\u002Fa>\n  \u003Ca href=\"https:\u002F\u002Fpepy.tech\u002Fproject\u002Fprojectaria_tools\">\n  \u003Cimg alt=\"Downloads\" src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Ffacebookresearch_projectaria_tools_readme_5d42cb9f32fd.png\">\u003C\u002Fa>\n\u003C\u002Fdiv>\n\n---\n\n## 🚀 What's New in Aria Gen2\n\n**Aria Gen2** introduces significant hardware and software improvements with\nfull API support in this 2.0.0 release.\n\n### **Hardware & Sensors**\n\n- **12MP RGB camera**, 4 CV cameras (wider FOV, HDR, front-facing stereo), 2 eye\n  tracking cameras\n- **New sensors**: Proximity, contact microphone, PPG health, ambient light,\n  GNSS\n- **6-8 hour battery life**, foldable form factor, direct interactivity with\n  open-air speakers\n\n### **On-Device Machine Perception**\n\nOn-device algorithms powered by a custom Meta co-processor:\n\n- **Eye Tracking**, **Hand Tracking** (21 keypoints), **VIO\u002FSLAM** (20Hz + 800Hz\n  high-freq trajectory)\n\n### **Software & Tools**\n\n- **Unified APIs**: Same Python\u002FC++ interface for both Gen1 and Gen2 data\n- **New Tools**: `aria_rerun_viewer` (interactive 3D visualization),\n  `gen2_mp_csv_exporter`, upgraded `vrs_health_check`\n- **Enhanced Streaming**: USB\u002Fwireless sensor streaming with on-device\n  perception signals\n\n---\n\n## 📖 Documentation\n\n### **Aria Gen2 Documentation** - 
NEW! ✨\n\n- **[Gen2 Documentation](https:\u002F\u002Ffacebookresearch.github.io\u002Fprojectaria_tools\u002Fgen2\u002F)** -\n  Complete guide for Aria Gen2 data and tools\n  - Research Tools APIs\n  - Python\u002FC++ examples\n  - Data formats and specifications\n  - On-device ML features\n\n### **Aria Gen1 Documentation**\n\n- **[Gen1 Documentation](https:\u002F\u002Ffacebookresearch.github.io\u002Fprojectaria_tools\u002Fdocs\u002Fintro)** -\n  Legacy documentation for Aria Gen1\n\n[![Documentation Status](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fprojectaria_tools\u002Factions\u002Fworkflows\u002Fpublish-website.yml\u002Fbadge.svg)](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fprojectaria_tools\u002Factions\u002Fworkflows\u002Fpublish-website.yml)\n\n---\n\n## 📚 Interactive Python Tutorials (Google Colab)\n\n### **Aria Gen2 Tutorials** - NEW! ✨\n\nComprehensive tutorials covering Aria Gen2 data processing:\n\n1. [![Tutorial 1](https:\u002F\u002Fcolab.research.google.com\u002Fassets\u002Fcolab-badge.svg)](https:\u002F\u002Fcolab.research.google.com\u002Fgithub\u002Ffacebookresearch\u002Fprojectaria_tools\u002Fblob\u002Fmain\u002Fexamples\u002FGen2\u002Fpython_notebooks\u002FTutorial_1_vrs_data_provider_basics.ipynb)\n   **VrsDataProvider Basics** - how to perform basic operations in loading and\n   accessing data in an Aria VRS file.\n\n2. [![Tutorial 2](https:\u002F\u002Fcolab.research.google.com\u002Fassets\u002Fcolab-badge.svg)](https:\u002F\u002Fcolab.research.google.com\u002Fgithub\u002Ffacebookresearch\u002Fprojectaria_tools\u002Fblob\u002Fmain\u002Fexamples\u002FGen2\u002Fpython_notebooks\u002FTutorial_2_device_calibration.ipynb)\n   **Device Calibration** - how to work with device calibration in Aria VRS.\n\n3. 
[![Tutorial 3](https:\u002F\u002Fcolab.research.google.com\u002Fassets\u002Fcolab-badge.svg)](https:\u002F\u002Fcolab.research.google.com\u002Fgithub\u002Ffacebookresearch\u002Fprojectaria_tools\u002Fblob\u002Fmain\u002Fexamples\u002FGen2\u002Fpython_notebooks\u002FTutorial_3_sequential_access_multi_sensor_data.ipynb)\n   **Sequential Multi-Sensor Access** - how to use the unified queued API to\n   efficiently “stream” multi-sensor data.\n\n4. [![Tutorial 4](https:\u002F\u002Fcolab.research.google.com\u002Fassets\u002Fcolab-badge.svg)](https:\u002F\u002Fcolab.research.google.com\u002Fgithub\u002Ffacebookresearch\u002Fprojectaria_tools\u002Fblob\u002Fmain\u002Fexamples\u002FGen2\u002Fpython_notebooks\u002FTutorial_4_on_device_eyetracking_handtracking.ipynb)\n   **Eye Tracking & Hand Tracking** - how to work with on-device-generated\n   EyeGaze and Hand-tracking signals from Aria Gen2 glasses.\n\n5. [![Tutorial 5](https:\u002F\u002Fcolab.research.google.com\u002Fassets\u002Fcolab-badge.svg)](https:\u002F\u002Fcolab.research.google.com\u002Fgithub\u002Ffacebookresearch\u002Fprojectaria_tools\u002Fblob\u002Fmain\u002Fexamples\u002FGen2\u002Fpython_notebooks\u002FTutorial_5_on_device_vio.ipynb)\n   **On-Device VIO** - how to work with on-device-generated VIO data from Aria\n   Gen2 glasses.\n\n6. [![Tutorial 6](https:\u002F\u002Fcolab.research.google.com\u002Fassets\u002Fcolab-badge.svg)](https:\u002F\u002Fcolab.research.google.com\u002Fgithub\u002Ffacebookresearch\u002Fprojectaria_tools\u002Fblob\u002Fmain\u002Fexamples\u002FGen2\u002Fpython_notebooks\u002FTutorial_6_timestamp_alignment_in_aria_gen2.ipynb)\n   **Time Synchronization** - understanding timestamp mapping in Aria data, and\n   how to use timestamp mapping in multi-device recording.\n\n7. 
[![Tutorial 7](https:\u002F\u002Fcolab.research.google.com\u002Fassets\u002Fcolab-badge.svg)](https:\u002F\u002Fcolab.research.google.com\u002Fgithub\u002Ffacebookresearch\u002Fprojectaria_tools\u002Fblob\u002Fmain\u002Fexamples\u002FGen2\u002Fpython_notebooks\u002FTutorial_7_mps_data_provider_basics.ipynb)\n   **MPS (Machine Perception Services)** - how to load and visualize output data\n   from\n   [Aria MP services](https:\u002F\u002Ffacebookresearch.github.io\u002Fprojectaria_tools\u002Fgen2\u002Fark\u002Fmps\u002Fstart).\n\n### **Aria Gen1 Tutorials**\n\n- [![Aria VRS Data Provider](https:\u002F\u002Fcolab.research.google.com\u002Fassets\u002Fcolab-badge.svg)](https:\u002F\u002Fcolab.research.google.com\u002Fgithub\u002Ffacebookresearch\u002Fprojectaria_tools\u002Fblob\u002F1.6.0\u002Fcore\u002Fexamples\u002Fdataprovider_quickstart_tutorial.ipynb)\n  Aria VRS Data Provider\n\n- [![Aria Machine Perception Services](https:\u002F\u002Fcolab.research.google.com\u002Fassets\u002Fcolab-badge.svg)](https:\u002F\u002Fcolab.research.google.com\u002Fgithub\u002Ffacebookresearch\u002Fprojectaria_tools\u002Fblob\u002F1.6.0\u002Fcore\u002Fexamples\u002Fmps_quickstart_tutorial.ipynb)\n  Reading and using Aria Machine Perception Services output (SLAM, Eye Tracking,\n  Hand Tracking data)\n\n---\n\n## 🗂️ Open Datasets\n\n### **Aria Gen2 Datasets**\n\n- **Aria Gen2 Pilot Dataset**:\n  [Dataset link](https:\u002F\u002Fwww.projectaria.com\u002Fdatasets\u002Fgen2pilot\u002F)\n  - Multi-participant indoor and outdoor recordings\n  - Full on-device outputs (eye tracking, hand tracking, VIO)\n  - SubGHz-synchronized multi-device captures\n  - High-quality MPS outputs (SLAM, point clouds, trajectories)\n\n### **Aria Gen1 Datasets**\n\n- **Aria Everyday Activities**:\n  - [Dataset link](https:\u002F\u002Fwww.projectaria.com\u002Fdatasets\u002Faea\u002F)\n  - [![Interactive python 
notebook](https:\u002F\u002Fcolab.research.google.com\u002Fassets\u002Fcolab-badge.svg)](https:\u002F\u002Fcolab.research.google.com\u002Fgithub\u002Ffacebookresearch\u002Fprojectaria_tools\u002Fblob\u002Fmain\u002Fprojects\u002FAriaEverydayActivities\u002Fexamples\u002Faea_quickstart_tutorial.ipynb)\n- **Aria Digital Twin**:\n  - [Dataset link](https:\u002F\u002Fwww.projectaria.com\u002Fdatasets\u002Fadt)\n  - [![Interactive python notebook](https:\u002F\u002Fcolab.research.google.com\u002Fassets\u002Fcolab-badge.svg)](https:\u002F\u002Fcolab.research.google.com\u002Fgithub\u002Ffacebookresearch\u002Fprojectaria_tools\u002Fblob\u002Fmain\u002Fprojects\u002FAriaDigitalTwinDatasetTools\u002Fexamples\u002Fadt_quickstart_tutorial.ipynb)\n- **Aria Synthetic Environments**:\n  - [Dataset link](https:\u002F\u002Fwww.projectaria.com\u002Fdatasets\u002Fase)\n\n---\n\n## How to Contribute\n\nWe welcome contributions! Go to\n[CONTRIBUTING](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fprojectaria_tools\u002Fblob\u002Fmain\u002F.github\u002FCONTRIBUTING.md)\nand our\n[CODE OF CONDUCT](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fprojectaria_tools\u002Fblob\u002Fmain\u002F.github\u002FCODE_OF_CONDUCT.md)\nfor how to get started.\n\n## License\n\nProject Aria Tools are released by Meta under the\n[Apache 2.0 license](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fprojectaria_tools\u002Fblob\u002Fmain\u002FLICENSE).\n","# Project Aria 工具\n\nProject Aria 工具是一套 C++\u002FPython 实用程序，旨在帮助研究人员借助 [Project Aria](https:\u002F\u002Fprojectaria.com\u002F) 拓展增强现实、机器感知和人工智能的研究边界。它专为简化 Aria 数据及其开放数据集的使用而设计，同时支持 **Aria 第一代和第二代数据**。\n\n\u003Cdiv align=\"center\">\n  \u003Ca href=\"https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fprojectaria_tools\u002Freleases\">\u003Cimg alt=\"最新版本\" src=\"https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Fv\u002Frelease\u002Ffacebookresearch\u002Fprojectaria_tools.svg\" \u002F>\u003C\u002Fa>\n  \u003Ca 
href=\"https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fprojectaria_tools\u002Fblob\u002Fmain\u002FLICENSE\">\n  \u003Cimg alt=\"许可证\" src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FLicense-Apache--2.0-blue.svg\"\u002F>\u003C\u002Fa>\n  \u003Ca href=\"https:\u002F\u002Fpepy.tech\u002Fproject\u002Fprojectaria_tools\">\n  \u003Cimg alt=\"下载量\" src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Ffacebookresearch_projectaria_tools_readme_5d42cb9f32fd.png\">\u003C\u002Fa>\n\u003C\u002Fdiv>\n\n---\n\n## 🚀 Aria 第二代的新特性\n\n**Aria 第二代** 在硬件和软件方面带来了显著提升，并在本次 2.0.0 版本中实现了完整的 API 支持。\n\n### **硬件与传感器**\n\n- **1200万像素 RGB 相机**，4 个计算机视觉相机（更广的视场角、HDR、前置立体相机），2 个眼动追踪相机\n- **新增传感器**：接近传感器、接触式麦克风、PPG 健康监测、环境光传感器、GNSS\n- **6–8 小时续航**，可折叠设计，配备开放式扬声器实现直接交互\n\n### **设备端机器感知**\n\n由 Meta 自定义协处理器驱动的设备端算法：\n\n- **眼动追踪**、**手势追踪**（21 个关键点）、**VIO\u002FSLAM**（20Hz + 800Hz 高频轨迹）\n\n### **软件与工具**\n\n- **统一的 API**：Gen1 和 Gen2 数据采用相同的 Python\u002FC++ 接口\n- **新工具**：`aria_rerun_viewer`（交互式 3D 可视化）、`gen2_mp_csv_exporter`、升级版 `vrs_health_check`\n- **增强的流式传输**：通过 USB 或无线方式传输传感器数据，并同步设备端感知信号\n\n---\n\n## 📖 文档\n\n### **Aria 第二代文档** - 新！✨\n\n- **[Gen2 文档](https:\u002F\u002Ffacebookresearch.github.io\u002Fprojectaria_tools\u002Fgen2\u002F)** - Aria 第二代数据及工具的完整指南\n  - 研究工具 API\n  - Python\u002FC++ 示例\n  - 数据格式与规格说明\n  - 设备端 ML 功能\n\n### **Aria 第一代文档**\n\n- **[Gen1 文档](https:\u002F\u002Ffacebookresearch.github.io\u002Fprojectaria_tools\u002Fdocs\u002Fintro)** - Aria 第一代的旧版文档\n\n[![文档状态](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fprojectaria_tools\u002Factions\u002Fworkflows\u002Fpublish-website.yml\u002Fbadge.svg)](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fprojectaria_tools\u002Factions\u002Fworkflows\u002Fpublish-website.yml)\n\n---\n\n## 📚 交互式 Python 教程（Google Colab）\n\n### **Aria 第二代教程** - 新！✨\n\n全面覆盖 Aria 第二代数据处理的教程：\n\n1. 
[![教程 1](https:\u002F\u002Fcolab.research.google.com\u002Fassets\u002Fcolab-badge.svg)](https:\u002F\u002Fcolab.research.google.com\u002Fgithub\u002Ffacebookresearch\u002Fprojectaria_tools\u002Fblob\u002Fmain\u002Fexamples\u002FGen2\u002Fpython_notebooks\u002FTutorial_1_vrs_data_provider_basics.ipynb)\n   **VrsDataProvider 基础** - 如何在加载和访问 Aria VRS 文件中的数据时执行基本操作。\n\n2. [![教程 2](https:\u002F\u002Fcolab.research.google.com\u002Fassets\u002Fcolab-badge.svg)](https:\u002F\u002Fcolab.research.google.com\u002Fgithub\u002Ffacebookresearch\u002Fprojectaria_tools\u002Fblob\u002Fmain\u002Fexamples\u002FGen2\u002Fpython_notebooks\u002FTutorial_2_device_calibration.ipynb)\n   **设备校准** - 如何处理 Aria VRS 中的设备校准信息。\n\n3. [![教程 3](https:\u002F\u002Fcolab.research.google.com\u002Fassets\u002Fcolab-badge.svg)](https:\u002F\u002Fcolab.research.google.com\u002Fgithub\u002Ffacebookresearch\u002Fprojectaria_tools\u002Fblob\u002Fmain\u002Fexamples\u002FGen2\u002Fpython_notebooks\u002FTutorial_3_sequential_access_multi_sensor_data.ipynb)\n   **多传感器顺序访问** - 如何使用统一的队列式 API 高效“流式”读取多传感器数据。\n\n4. [![教程 4](https:\u002F\u002Fcolab.research.google.com\u002Fassets\u002Fcolab-badge.svg)](https:\u002F\u002Fcolab.research.google.com\u002Fgithub\u002Ffacebookresearch\u002Fprojectaria_tools\u002Fblob\u002Fmain\u002Fexamples\u002FGen2\u002Fpython_notebooks\u002FTutorial_4_on_device_eyetracking_handtracking.ipynb)\n   **眼动追踪与手势追踪** - 如何处理 Aria 第二代眼镜上设备端生成的眼球注视和手势追踪信号。\n\n5. [![教程 5](https:\u002F\u002Fcolab.research.google.com\u002Fassets\u002Fcolab-badge.svg)](https:\u002F\u002Fcolab.research.google.com\u002Fgithub\u002Ffacebookresearch\u002Fprojectaria_tools\u002Fblob\u002Fmain\u002Fexamples\u002FGen2\u002Fpython_notebooks\u002FTutorial_5_on_device_vio.ipynb)\n   **设备端 VIO** - 如何处理 Aria 第二代眼镜上设备端生成的 VIO 数据。\n\n6. 
[![教程 6](https:\u002F\u002Fcolab.research.google.com\u002Fassets\u002Fcolab-badge.svg)](https:\u002F\u002Fcolab.research.google.com\u002Fgithub\u002Ffacebookresearch\u002Fprojectaria_tools\u002Fblob\u002Fmain\u002Fexamples\u002FGen2\u002Fpython_notebooks\u002FTutorial_6_timestamp_alignment_in_aria_gen2.ipynb)\n   **时间同步** - 理解 Aria 数据中的时间戳映射，以及如何在多设备录制中应用时间戳对齐。\n\n7. [![教程 7](https:\u002F\u002Fcolab.research.google.com\u002Fassets\u002Fcolab-badge.svg)](https:\u002F\u002Fcolab.research.google.com\u002Fgithub\u002Ffacebookresearch\u002Fprojectaria_tools\u002Fblob\u002Fmain\u002Fexamples\u002FGen2\u002Fpython_notebooks\u002FTutorial_7_mps_data_provider_basics.ipynb)\n   **MPS（机器感知服务）** - 如何加载并可视化来自\n   [Aria MP 服务](https:\u002F\u002Ffacebookresearch.github.io\u002Fprojectaria_tools\u002Fgen2\u002Fark\u002Fmps\u002Fstart) 的输出数据。\n\n### **Aria 第一代教程**\n\n- [![Aria VRS 数据提供者](https:\u002F\u002Fcolab.research.google.com\u002Fassets\u002Fcolab-badge.svg)](https:\u002F\u002Fcolab.research.google.com\u002Fgithub\u002Ffacebookresearch\u002Fprojectaria_tools\u002Fblob\u002F1.6.0\u002Fcore\u002Fexamples\u002Fdataprovider_quickstart_tutorial.ipynb)\n  Aria VRS 数据提供者\n\n- [![Aria 机器感知服务](https:\u002F\u002Fcolab.research.google.com\u002Fassets\u002Fcolab-badge.svg)](https:\u002F\u002Fcolab.research.google.com\u002Fgithub\u002Ffacebookresearch\u002Fprojectaria_tools\u002Fblob\u002F1.6.0\u002Fcore\u002Fexamples\u002Fmps_quickstart_tutorial.ipynb)\n  读取并使用 Aria 机器感知服务的输出（SLAM、眼动追踪、手势追踪数据）\n\n---\n\n## 🗂️ 开放数据集\n\n### **Aria 第二代数据集**\n\n- **Aria 第二代试点数据集**：\n  [数据集链接](https:\u002F\u002Fwww.projectaria.com\u002Fdatasets\u002Fgen2pilot\u002F)\n  - 多人参与的室内外录制\n  - 完整的设备端输出（眼动追踪、手势追踪、VIO）\n  - 子GHz 精确同步的多设备采集\n  - 高质量的 MPS 输出（SLAM、点云、轨迹）\n\n### **Aria 第一代数据集**\n\n- **Aria 日常活动**：\n  - [数据集链接](https:\u002F\u002Fwww.projectaria.com\u002Fdatasets\u002Faea\u002F)\n  - [![交互式 Python 
笔记本](https:\u002F\u002Fcolab.research.google.com\u002Fassets\u002Fcolab-badge.svg)](https:\u002F\u002Fcolab.research.google.com\u002Fgithub\u002Ffacebookresearch\u002Fprojectaria_tools\u002Fblob\u002Fmain\u002Fprojects\u002FAriaEverydayActivities\u002Fexamples\u002Faea_quickstart_tutorial.ipynb)\n- **Aria 数字孪生**：\n  - [数据集链接](https:\u002F\u002Fwww.projectaria.com\u002Fdatasets\u002Fadt)\n  - [![交互式 Python 笔记本](https:\u002F\u002Fcolab.research.google.com\u002Fassets\u002Fcolab-badge.svg)](https:\u002F\u002Fcolab.research.google.com\u002Fgithub\u002Ffacebookresearch\u002Fprojectaria_tools\u002Fblob\u002Fmain\u002Fprojects\u002FAriaDigitalTwinDatasetTools\u002Fexamples\u002Fadt_quickstart_tutorial.ipynb)\n- **Aria 合成环境**：\n  - [数据集链接](https:\u002F\u002Fwww.projectaria.com\u002Fdatasets\u002Fase)\n\n---\n\n## 如何贡献\n\n我们欢迎各类贡献！请访问\n[CONTRIBUTING](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fprojectaria_tools\u002Fblob\u002Fmain\u002F.github\u002FCONTRIBUTING.md)\n以及我们的\n[行为准则](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fprojectaria_tools\u002Fblob\u002Fmain\u002F.github\u002FCODE_OF_CONDUCT.md)\n，了解如何开始参与。\n\n## 许可证\n\nProject Aria 工具由 Meta 根据\n[Apache 2.0 许可证](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fprojectaria_tools\u002Fblob\u002Fmain\u002FLICENSE)\n发布。","# Project Aria Tools 快速上手指南\n\nProject Aria Tools 是一套 C++\u002FPython 实用工具集，旨在帮助研究人员利用 [Project Aria](https:\u002F\u002Fprojectaria.com\u002F) 数据拓展增强现实（AR）、机器感知和人工智能的研究边界。本工具支持 **Aria Gen1** 和 **Aria Gen2** 两代设备的数据处理。\n\n## 环境准备\n\n在开始之前，请确保您的开发环境满足以下要求：\n\n*   **操作系统**: Linux (推荐 Ubuntu 18.04+) 或 macOS。Windows 用户建议使用 WSL2。\n*   **Python 版本**: Python 3.8 或更高版本。\n*   **前置依赖**:\n    *   `pip` (Python 包管理工具)\n    *   `git` (用于克隆示例代码)\n    *   (可选) CUDA 环境 (如需使用 GPU 加速的机器学习功能)\n\n> **提示**：国内开发者若遇到网络连接问题，建议在安装前配置国内 pip 镜像源（如清华源或阿里源）以加速下载。\n\n## 安装步骤\n\n推荐使用 `pip` 直接安装最新稳定版。\n\n### 1. 基础安装\n```bash\npip install projectaria_tools\n```\n\n### 2. 
使用国内镜像源加速安装（推荐）\n如果默认源下载缓慢，请使用以下命令：\n```bash\npip install projectaria_tools -i https:\u002F\u002Fpypi.tuna.tsinghua.edu.cn\u002Fsimple\n```\n\n### 3. 验证安装\n安装完成后，可通过以下命令检查版本：\n```bash\npython -c \"import projectaria_tools; print(projectaria_tools.__version__)\"\n```\n\n### 4. 获取示例代码（可选）\n为了运行官方教程，建议克隆仓库获取示例 Notebook 和数据：\n```bash\ngit clone https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fprojectaria_tools.git\ncd projectaria_tools\n```\n\n## 基本使用\n\nProject Aria Tools 的核心功能是读取和处理 `.vrs` 格式（VRS，一种用于记录与回放多传感器数据流的文件格式）的数据文件。以下是一个最简单的 Python 示例，展示如何加载数据并获取传感器信息。\n\n### 示例：读取 VRS 文件基本信息\n\n假设你已拥有一个 Aria 设备录制的 `example.vrs` 文件：\n\n```python\nfrom projectaria_tools.core import data_provider\n\n# 1. 创建数据提供者实例\n# 替换为你的实际 .vrs 文件路径\nvrs_file_path = \"path\u002Fto\u002Fyour\u002Frecording.vrs\"\nprovider = data_provider.create_vrs_data_provider(vrs_file_path)\n\n# 2. 获取设备校准信息\ncalibration = provider.get_device_calibration()\n\n# 3. 打印所有可用数据流对应的传感器标签\nprint(\"Available sensors:\")\nfor stream_id in provider.get_all_streams():\n    print(f\"- {provider.get_label_from_stream_id(stream_id)}\")\n\n# 4. 获取特定传感器的图像帧数 (以 RGB 相机为例)\nrgb_stream_id = provider.get_stream_id_from_label(\"camera-rgb\")\nif rgb_stream_id is not None:\n    count = provider.get_num_data(rgb_stream_id)\n    print(f\"Total RGB frames: {count}\")\n\n    # 按索引获取第一帧，返回值为 (图像数据, 记录元信息) 元组\n    image_data, record = provider.get_image_data_by_index(rgb_stream_id, 0)\n    print(f\"First frame timestamp: {record.capture_timestamp_ns}\")\nelse:\n    print(\"RGB stream not found in this recording.\")\n```\n\n### 进阶学习资源\n\n对于更复杂的功能（如多传感器同步、眼动追踪、手部追踪、VIO 数据处理），官方提供了详细的 **Google Colab 交互式教程**：\n\n*   **Aria Gen2 系列教程**:\n    *   [Tutorial 1: VrsDataProvider 基础](https:\u002F\u002Fcolab.research.google.com\u002Fgithub\u002Ffacebookresearch\u002Fprojectaria_tools\u002Fblob\u002Fmain\u002Fexamples\u002FGen2\u002Fpython_notebooks\u002FTutorial_1_vrs_data_provider_basics.ipynb)\n    *   [Tutorial 4: 眼动与手部追踪](https:\u002F\u002Fcolab.research.google.com\u002Fgithub\u002Ffacebookresearch\u002Fprojectaria_tools\u002Fblob\u002Fmain\u002Fexamples\u002FGen2\u002Fpython_notebooks\u002FTutorial_4_on_device_eyetracking_handtracking.ipynb)\n    *   [Tutorial 5: 设备端 VIO 数据](https:\u002F\u002Fcolab.research.google.com\u002Fgithub\u002Ffacebookresearch\u002Fprojectaria_tools\u002Fblob\u002Fmain\u002Fexamples\u002FGen2\u002Fpython_notebooks\u002FTutorial_5_on_device_vio.ipynb)\n\n*   **文档参考**:\n    *   [Aria Gen2 完整文档](https:\u002F\u002Ffacebookresearch.github.io\u002Fprojectaria_tools\u002Fgen2\u002F)\n    *   [Aria Gen1 遗留文档](https:\u002F\u002Ffacebookresearch.github.io\u002Fprojectaria_tools\u002Fdocs\u002Fintro)","某计算机视觉团队正在利用 Project Aria Gen2 眼镜采集的复杂多模态数据，训练一款高精度的室内导航与手势交互模型。\n\n### 没有 projectaria_tools 时\n- **数据解析困难**：面对 Gen2 新增的 12MP 高清 RGB、眼动追踪及 PPG 健康传感器等异构数据，团队需自行逆向解析专有的 VRS 文件格式，耗时数周且极易出错。\n- **时空对齐繁琐**：缺乏统一的时间戳同步机制，手动将 20Hz 的 VIO 轨迹、800Hz 的高频运动数据与视频帧进行微秒级对齐几乎不可能完成。\n- **调试直观性差**：无法直接查看传感器在三维空间中的实时姿态和覆盖范围，只能依靠枯燥的日志数据盲测，难以发现标定误差。\n- **代际兼容成本高**：若需对比 Gen1 历史数据，必须编写两套完全不同的读取逻辑，代码维护成本成倍增加。\n\n### 使用 projectaria_tools 后\n- **一键加载多模态数据**：通过统一的 Python API（如 
`VrsDataProvider`），可直接流式读取 Gen2 所有新型传感器数据，无需关心底层二进制结构。\n- **自动高精度同步**：利用内置的队列化 API，自动处理多传感器数据的时序对齐，轻松实现视觉、惯性测量与眼动数据的微秒级融合。\n- **沉浸式三维调试**：借助 `aria_rerun_viewer` 工具，研究人员能在交互式 3D 场景中直观复盘采集过程，快速定位手眼标定或追踪丢失问题。\n- **无缝代际平滑过渡**：同一套代码接口同时支持 Gen1 和 Gen2 数据，团队可立即复用现有算法管线，无需重构即可利用新一代硬件优势。\n\nprojectaria_tools 将原本需要数周的数据清洗与对齐工作缩短至几小时，让研究者能专注于核心算法创新而非数据基础设施的搭建。","https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Ffacebookresearch_projectaria_tools_d81e96c2.png","facebookresearch","Meta Research","https:\u002F\u002Foss.gittoolsai.com\u002Favatars\u002Ffacebookresearch_449342bd.png","",null,"https:\u002F\u002Fopensource.fb.com","https:\u002F\u002Fgithub.com\u002Ffacebookresearch",[82,86,90,94,98,101,105,108,112,115],{"name":83,"color":84,"percentage":85},"C++","#f34b7d",57.6,{"name":87,"color":88,"percentage":89},"MDX","#fcb32c",24.2,{"name":91,"color":92,"percentage":93},"Python","#3572A5",12.2,{"name":95,"color":96,"percentage":97},"Jupyter Notebook","#DA5B0B",2.7,{"name":99,"color":100,"percentage":32},"CMake","#DA3434",{"name":102,"color":103,"percentage":104},"JavaScript","#f1e05a",0.4,{"name":106,"color":107,"percentage":104},"TypeScript","#3178c6",{"name":109,"color":110,"percentage":111},"CSS","#663399",0.2,{"name":113,"color":114,"percentage":111},"Shell","#89e051",{"name":116,"color":117,"percentage":118},"C","#555555",0.1,759,109,"2026-04-13T17:01:47","Apache-2.0","未说明",{"notes":125,"python":123,"dependencies":126},"该工具是一套 C++\u002FPython 实用程序，支持 Aria Gen1 和 Gen2 数据。提供 Google Colab 交互式教程。具体编译和运行环境需求（如操作系统版本、Python 具体版本、C++ 编译器要求等）在提供的 README 片段中未明确列出，需参考官方文档或安装指南。",[123],[15,16,128],"其他","2026-03-27T02:49:30.150509","2026-04-14T15:41:03.197265",[132,137,142,147,152,157],{"id":133,"question_zh":134,"answer_zh":135,"source_url":136},33165,"ASE 数据集中边缘像素颜色比中心像素暗（晕影问题），如何修正这种亮度不一致？","这是由于镜头晕影（vignette）造成的。可以通过应用反晕影（anti_vignette）校正来解决。注意 anti_vignette 的每个通道是相等的（R=G=B），因此可以取任意一个通道作为权重。参考代码逻辑如下：\n1. 将 anti_vignette 转换为 float32 并除以 3.0（因为三通道相等）。\n2. 
使用 cv2.addWeighted 将原始图像与反晕影图像融合。\n示例代码片段：\n```python\nanti_vignette = np.uint8(anti_vignette.astype(np.float32) * 1.0 \u002F 3.0)\nrgb = cv2.addWeighted(rgb, alpha, anti_vignette, beta, gamma)\n```\n相关讨论和解决方案可参考 PR #125。","https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fprojectaria_tools\u002Fissues\u002F99",{"id":138,"question_zh":139,"answer_zh":140,"source_url":141},33166,"在 Aria Digital Twin (ADT) 数据集中，为什么某些序列无法检索到深度图或分割图（segmentation）？","这是因为并非所有序列都包含骨架（skeleton）数据。默认情况下，某些处理脚本（如 EgoLifter）会尝试加载 `segmentation_with_skeleton.vrs` 文件。如果某个序列只提供了 `segmentation.vrs`（仅包含物体分割，不含人体骨架），则会导致加载失败。\n解决方法：\n1. 检查序列文件夹中的 `metadata.json` 文件，查看 `'num_skeletons'` 键的值。\n2. 如果该值为 0 或文件不存在，请在代码中将 `skeleton_flag` 参数设置为 `False`，以便加载普通的 `segmentation.vrs` 文件。\n注意：设置为 False 后将无法获取人体分割掩码。","https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fprojectaria_tools\u002Fissues\u002F149",{"id":143,"question_zh":144,"answer_zh":145,"source_url":146},33167,"运行 `aria_mps single` 命令处理大型 VRS 文件时出现 Error 501 崩溃，如何解决？","这是一个已知问题，通常发生在处理较大的演示文件（例如 20GB 以上的 VRS 文件）时，特别是在进行哈希步骤之后。目前社区中多个用户报告了相同错误，且维护者确认该问题尚未完全解决。建议暂时尝试处理较小的文件或分段处理数据。如果问题持续，请关注官方后续更新或在该 Issue 下反馈具体文件大小和环境信息以协助排查。","https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fprojectaria_tools\u002Fissues\u002F77",{"id":148,"question_zh":149,"answer_zh":150,"source_url":151},33168,"如何将 Meta Aria 眼镜与外部时间码生成器（如 Tentacle Sync）进行同步？","目前官方尚未提供直接将外部时间码生成器（通过 LTC 音频或其他方式）嵌入到 Aria 设备中的原生解决方案。Aria 眼镜没有音频输入端口来直接录制 LTC 信号。\n现有的替代方案包括：\n1. 使用 TICSync 方法在网络间同步多台 Aria 设备（精度可达亚毫秒级），但这不适用于外部摄像机。\n2. 在计算机上单独录制 LTC 音频，然后在后处理阶段尝试将 Aria 录音与该音频进行对齐，但这需要自行开发对齐算法且精度难以保证。\n目前团队正在内部研究更精确的同步方法，但暂无公开的具体实施步骤。","https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fprojectaria_tools\u002Fissues\u002F295",{"id":153,"question_zh":154,"answer_zh":155,"source_url":156},33169,"在使用 ADT 数据集重建 3D 点云时，如何正确获取变换矩阵将点云对齐到场景坐标系（Scene Coordinate）？","要将深度数据从相机坐标系转换到场景坐标系，需要组合两个变换矩阵：\n1. 
**相机到设备 (Camera to Device)**: 使用 `gt_provider.get_aria_transform_device_camera(stream_id).inverse().to_matrix()` 获取。\n2. **设备到场景 (Device to Scene)**: 使用 `gt_provider.get_aria_3d_pose_by_timestamp_ns(timestamp).data().transform_scene_device.inverse().to_matrix()` 获取。\n最终的变换矩阵为：`transform_matrix = cam2dev @ dev2sce`。\n请确保深度值已根据单位（通常需除以 1000.0 转为米）和内参（fx, fy, cx, cy）正确反投影为 3D 点，然后再应用上述外参矩阵进行坐标系对齐。","https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fprojectaria_tools\u002Fissues\u002F27",{"id":158,"question_zh":159,"answer_zh":160,"source_url":161},33170,"生成 MPS 轨迹和半稠密点云 CSV 文件时失败报错，应该如何处理？","如果在桌面应用程序中能够获取注视点（gaze）CSV 文件，但在生成轨迹（trajectory）和半稠密点云（semi-dense point cloud）CSV 时出错，这通常涉及特定的数据处理后端问题。\n官方建议此类问题转移到学术合作伙伴支持渠道（academic partner support）进行处理。此外，有反馈指出虽然未公开，但团队内部可能有针对此类长片段（15-20 秒）数据的特殊处理方案。建议联系项目组的学术支持人员或相关负责人（如之前访问过实验室的团队成员）以获取进一步的帮助或内部工具。","https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fprojectaria_tools\u002Fissues\u002F47",[163,168,172,177,182,187,192,197,202,207,212,217,222,226,231,235,240,245,250,255],{"id":164,"version":165,"summary_zh":166,"released_at":167},255351,"2.1.2","请参阅[完整变更日志](https:\u002F\u002Ffacebookresearch.github.io\u002Fprojectaria_tools\u002Fgen2\u002Fresearch-tools\u002Fprojectariatools\u002Fchangelog)","2026-04-10T21:38:47",{"id":169,"version":170,"summary_zh":166,"released_at":171},255352,"2.1.1","2025-12-11T00:59:08",{"id":173,"version":174,"summary_zh":175,"released_at":176},255353,"2.1.0","请参阅[完整变更日志](https:\u002F\u002Ffacebookresearch.github.io\u002Fprojectaria_tools\u002Fgen2\u002Fresearch-tools\u002Fprojectariatools\u002Fchangelog)。","2025-11-11T05:53:01",{"id":178,"version":179,"summary_zh":180,"released_at":181},255354,"1.7.1","**文档更新：**\n* Gen1 旧版文档现已指向新的 MPS CLI 维基，以获取最新指南和资源。\n\n**安全改进：**\n* 对于 PAT 旧版，加密密钥不再保存到磁盘。如果 Keyring 可用，则会使用 Keyring；否则，密钥将不会存储在磁盘上，从而提升安全性。\n\n**错误修复：**\n* mps_cli 现在会在 API 查询中正确使用 user_feedback 键，确保反馈 ID 的正确处理以及与 MPS API 的正常通信。\n* 从 Project Aria Tools 主分支引入了针对 MPS CLI 应用程序的修复，包括：\n  * 修正了多 
SLAM 请求的错误类型。\n  * 确保在需要时能够正确重置用户令牌。","2025-11-11T01:30:57",{"id":183,"version":184,"summary_zh":185,"released_at":186},255355,"2.0.0","我们很高兴地宣布，该库迎来了一次**重大更新**——现在可以使用您熟悉的**相同的 C++ 和 Python API**，同时支持**Aria 第 1 代和第 2 代 VRS 数据**。此次发布还新增了对**Aria 第 2 代 VRS**中**设备端机器感知（MP）**数据的全面访问。\n\n---\n\n## 🔄 API 与数据兼容性\n\n- 💻 **统一接口**：**C++ 和 Python API 保持不变**——您现有的第 1 代代码将继续无缝运行。  \n- 🧠 **扩展功能**：新增了用于访问第 2 代录制中可用的**设备端机器感知**数据流的 API。  \n- ⚙️ **跨代支持**：单个代码库无需修改即可同时处理**第 1 代和第 2 代 VRS 数据**。\n\n---\n\n## 🛠️ 新增与增强工具\n\n- ✨ [**[新] `aria_rerun_viewer`**](https:\u002F\u002Ffacebookresearch.github.io\u002Fprojectaria_tools\u002Fgen2\u002Fresearch-tools\u002Fprojectariatools\u002Ftools\u002Fpythonviz#aria_rerun_viewer)  \n  基于 [Rerun](https:\u002F\u002Fwww.rerun.io\u002F) 构建的 Python 可视化工具，用于 Aria VRS 文件的交互式多模态探索。\n\n- ✨ [**[新] `gen2_mp_csv_exporter`**](https:\u002F\u002Ffacebookresearch.github.io\u002Fprojectaria_tools\u002Fgen2\u002Fresearch-tools\u002Fprojectariatools\u002Ftools\u002Fexportcsv)  \n  一款 Python 工具，可将 VRS 中的**设备端 MP**数据导出为 CSV 文件，完全兼容**机器感知服务（MPS）**格式。\n\n- 🆕 [**[升级] `vrs_health_check`**](https:\u002F\u002Ffacebookresearch.github.io\u002Fprojectaria_tools\u002Fgen2\u002Fark\u002Fvrs_health_check\u002Finstallation)  \n  经过全新重构，并以**独立的 Python 包**形式发布，安装和使用更加便捷。\n\n---\n\n## 📘 新的 Python 教程\n\n我们的 Python 教程已**全新改写**，现分为**七篇重点指南**，涵盖以下主题：\n- 🧭 队列化的多传感器数据  \n- 🔧 设备校准的使用方法  \n- ⏱️ 多设备的时间对齐  \n- ……以及更多  \n\n👉 **[探索新教程](https:\u002F\u002Ffacebookresearch.github.io\u002Fprojectaria_tools\u002Fgen2\u002Fresearch-tools\u002Fprojectariatools\u002Fpythontutorials\u002Fdataprovider)**，立即开始学习。\n\n---\n\n## 🌐 更新后的文档网站\n\n我们现在为每一代产品维护**专用的文档门户**：\n\n- 📄 **[Aria 第 2 代文档网站](https:\u002F\u002Ffacebookresearch.github.io\u002Fprojectaria_tools\u002Fgen2\u002F)** — 新增的 API、工具和数据格式  \n- 📄 **[Aria 第 1 代文档网站](https:\u002F\u002Ffacebookresearch.github.io\u002Fprojectaria_tools\u002Fdocs\u002Fintro)** — 第 1 
代工作流的原始文档\n\n---","2025-10-16T07:29:38",{"id":188,"version":189,"summary_zh":190,"released_at":191},255356,"1.6.0","🎉 我们很高兴地宣布 Project Aria Tools 1.6.0 版本发布 🎉\n\n[MPS]\n\n修复了 MPS 查看器中 21 个手部关键点显示的配色问题。","2025-05-14T01:08:12",{"id":193,"version":194,"summary_zh":195,"released_at":196},255357,"1.5.9","🎉 我们很高兴地宣布 Project Aria Tools v1.5.9 🎉\n\n**[MPS]**\n- 在手部追踪输出中新增了 21 个手部关键点，并附带完整的 6 自由度变换矩阵，作为新的手部追踪输出 CSV 文件。该变换矩阵可用于将坐标从手部坐标系（原点位于手腕位置）转换到设备坐标系。手腕法线和手掌法线仍会提供。我们鼓励手部追踪用户开始使用包含此新 CSV 输出的 HandTrackingResult。\n  - 请参阅 [Notebook](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fprojectaria_tools\u002Fblob\u002Fmain\u002Fcore\u002Fexamples\u002Fmps_quickstart_tutorial.ipynb)","2025-05-09T22:28:57",{"id":198,"version":199,"summary_zh":200,"released_at":201},255358,"1.5.8","🎉 我们很高兴地宣布 Project Aria Tools v1.5.8 🎉\n\n**[核心功能]**\n- 更新了镜头暗角校正掩码，并支持不同的 ISP 调校版本\n- 新增色彩校正功能\n  请参阅 [Notebook](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fprojectaria_tools\u002Fblob\u002Fmain\u002Fcore\u002Fexamples\u002Fdataprovider_quickstart_tutorial.ipynb)\n  请参阅 [文档](https:\u002F\u002Ffacebookresearch.github.io\u002Fprojectaria_tools\u002Fdocs\u002Fdata_utilities\u002Fadvanced_code_snippets\u002Fimage_utilities)\n- 移除 poseplayer\n- 移除 AEA 和 ADT 下载器 -> 所有数据集均使用 [通用数据集下载工具](https:\u002F\u002Ffacebookresearch.github.io\u002Fprojectaria_tools\u002Fdocs\u002Fopen_datasets\u002Fdataset_download#downloading-open-datasets)\n\n**[标定]**\n- 支持相机缩放中的非对称裁剪\n- 在相机模型中新增 FishEye62\n\n**[MPS]**\n- 支持更大的结果文件","2025-04-09T18:11:40",{"id":203,"version":204,"summary_zh":205,"released_at":206},255359,"1.5.7","🎉我们很高兴地宣布推出 Project Aria Tools v1.5.7！🎉\n\n[核心功能]\n- 新增去暗角支持\n  - 请参阅 [Notebook](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fprojectaria_tools\u002Fblob\u002Fmain\u002Fcore\u002Fexamples\u002Fdataprovider_quickstart_tutorial.ipynb)\n  - 请参阅 
[文档](https:\u002F\u002Ffacebookresearch.github.io\u002Fprojectaria_tools\u002Fdocs\u002Fdata_utilities\u002Fadvanced_code_snippets\u002Fimage_utilities#image-devignetting)\n- 相机投影与反投影现支持浮点和双精度精度\n\n[示例]\n- [SAM2 VRS 对象标注演示](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fprojectaria_tools\u002Ftree\u002Fmain\u002Ftools\u002Fsamples\u002Fpython\u002FSAM2_vrs_object_annotation)\n  - 新增了一个代码示例，允许您使用 [SAM2](https:\u002F\u002Fai.meta.com\u002Fsam2\u002F) 半自动地为对象掩码和 2D 边界框添加标签。","2025-01-31T19:54:56",{"id":208,"version":209,"summary_zh":210,"released_at":211},255360,"1.5.6","🎉我们很高兴地宣布 Project Aria Tools v1.5.6 正式发布🎉\n\n**[代码示例]**\n- **[MPS]**\n  - [新增了一个代码示例，展示如何加载和使用半密集点云](https:\u002F\u002Fwww.linkedin.com\u002Fposts\u002Fpierre-moulon_projectaria-slam-rerun-activity-7265468565216993280-HANC?utm_source=share&utm_medium=member_desktop)（使您能够轻松了解在给定时间戳和指定图像流 ID 下可见的点）\n- **[核心]**\n   - [暗角掩码](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fprojectaria_tools\u002Fcommit\u002Fded9b0306b8b1c8404e7f809eb10c528953f29a1)（使您能够对图像流进行去暗角处理）\n\n**[ARK]**\n增加了对 Windows 平台 mps-cli 的支持\n","2024-12-05T18:32:06",{"id":213,"version":214,"summary_zh":215,"released_at":216},255361,"1.5.5","🎉We’re excited to announce Project Aria Tools v1.5.5🎉\r\n\r\n**[PyPI]**\r\n- Added Windows support to our wheels on PyPI\r\n\r\n**[MPS]**\r\n- Shown how to best use [SLAM online calibration](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fprojectaria_tools\u002Fblob\u002Fmain\u002Fcore\u002Fexamples\u002Fmps_quickstart_tutorial.ipynb) and a time offset for the RGB camera\r\n- MPSDataProvider can now provide the [version ID of the MPS service](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fprojectaria_tools\u002Fcommit\u002F890e57b59bc9e1cd7cf9d30d0d605189f8eb3fdf)\r\n\r\n**[Documentation]**\r\n- [How to install projectaria_tools from pypi for 
windows](https:\u002F\u002Ffacebookresearch.github.io\u002Fprojectaria_tools\u002Fdocs\u002Fdata_utilities\u002Finstallation\u002Finstallation_python)\r\n- [How to install VRS tools and VRS Player](https:\u002F\u002Ffacebookresearch.github.io\u002Fprojectaria_tools\u002Fdocs\u002Fdata_formats\u002Faria_vrs\u002Faria_vrs_tools_installation)\r\n  - [How to use VRSPlayer](https:\u002F\u002Ffacebookresearch.github.io\u002Fprojectaria_tools\u002Fdocs\u002Fdata_formats\u002Faria_vrs\u002Faria_vrsplayer)\r\n  - [How to use VRS tools to trim VRS file in time or stream wise](https:\u002F\u002Ffacebookresearch.github.io\u002Fprojectaria_tools\u002Fdocs\u002Fdata_formats\u002Faria_vrs\u002Faria_vrs_command_line)\r\n\r\n**[ADT]**\r\n- [All objects in ADT now have downloadable 3D models in glb format](https:\u002F\u002Ffacebookresearch.github.io\u002Fprojectaria_tools\u002Fdocs\u002Fopen_datasets\u002Faria_digital_twin_dataset\u002Fobject_models)\r\n- [ADT rerun visualizer now supports visualizing 3D object models](https:\u002F\u002Ffacebookresearch.github.io\u002Fprojectaria_tools\u002Fdocs\u002Fopen_datasets\u002Faria_digital_twin_dataset\u002Fobject_models#visualizing-object-models)","2024-09-26T15:19:33",{"id":218,"version":219,"summary_zh":220,"released_at":221},255362,"1.5.4","🎉 We’re excited to announce Project Aria Tools v1.5.4 🎉\r\n\r\n**[Core]**\r\n\r\nProjectaria_tools can now be compiled for native windows support. 
Assuming the Visual Studio compiler is set up on the machine (defaults to `Visual Studio 16 2019`; `Visual Studio 17 2022` is also supported if the generator is updated via the CMake command-line option `-G`)\r\n```\r\n# C++ compilation + unit test run with Pixi package manager\r\npixi run run_c\r\n# Python compilation + unit test run with Pixi package manager\r\npixi run run_python\r\n```\r\n\r\n**[Dataset]**\r\nADTv2 [Tutorial notebook update](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fprojectaria_tools\u002Fcommit\u002Fc824f6b8782fa29c48fb308eb67a0ea3330916ba)\r\n\r\n**[Documentation]**\r\nAdded [HOT3D to the OpenDataset wiki page](https:\u002F\u002Ffacebookresearch.github.io\u002Fprojectaria_tools\u002Fdocs\u002Fopen_datasets\u002Fhot3d)\r\n[Enhanced documentation of ARK TimeSync feature](https:\u002F\u002Ffacebookresearch.github.io\u002Fprojectaria_tools\u002Fdocs\u002FARK\u002Fsdk\u002Fsamples\u002Fticsync_sample)\r\n\r\n**[MPS\u002FMPS-CLI]**\r\nAdd [support for wrist and palm normals, helping to get the orientation of hands](https:\u002F\u002Ffacebookresearch.github.io\u002Fprojectaria_tools\u002Fdocs\u002Fdata_formats\u002Fmps\u002Fhand_tracking#hand-tracking-outputs)\r\n\r\n**[MPS Code Sample]**\r\n[Python notebook showing how to align MediaPipe Hand tracking with MPS wrist and palm tracking](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fprojectaria_tools\u002Ftree\u002Fmain\u002Ftools\u002Fsamples\u002Fpython\u002Fmps_handtracking_overlay_media_pipe)","2024-08-05T04:08:45",{"id":223,"version":224,"summary_zh":78,"released_at":225},255363,"1.5.3","2024-08-01T17:11:10",{"id":227,"version":228,"summary_zh":229,"released_at":230},255364,"1.5.2","🎉 We’re excited to announce Project Aria Tools v1.5.2. 
🎉\r\n\r\n**[API]**\r\n- VrsDataProvider\r\n  - Add support for VRS metadata\u002Ffile tags and time sync mode\r\n  - Calibration\r\n    - Add better support for [rotating calibration data](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fprojectaria_tools\u002Fcommit\u002F616766ba0a86c1bc31341019d97987f67411bb18) (intrinsics, extrinsics)\r\n      - See usage either on the [wiki](https:\u002F\u002Ffacebookresearch.github.io\u002Fprojectaria_tools\u002Fdocs\u002Fdata_utilities\u002Fadvanced_code_snippets\u002Fimage_utilities#rotated-image-clockwise-90-degrees) or in the [AEA](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fprojectaria_tools\u002Fpull\u002F90) dataset viewer\r\n- Expose the [camera calibration rescaling function](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fprojectaria_tools\u002Fcommit\u002F8692a230b76539dc0e3583a2ced1f65f41c34e8d)\r\n\r\n**[Code Samples]**\r\n- C++\r\n  - [VRS Image mutation](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fprojectaria_tools\u002Fcommit\u002F6133b26f9c1f0ada4e33bbb32956b9621c475823)\r\n- Python\r\n  - TicSync notebook [tutorial](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fprojectaria_tools\u002Fblob\u002Fb4599e1b69b17629e2c0c7e1ae74c94493710f78\u002Fcore\u002Fexamples\u002Fticsync_tutorial.ipynb)\r\n  - MPS [EyeGaze Depth Estimation](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fprojectaria_tools\u002Fcommit\u002Fff0ec97b17e830617f417f14f263f31b84311e3b)\r\n  - ADT - [Notebook on converting depth maps to point clouds](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fprojectaria_tools\u002Fcommit\u002F093bd1c85e605ba292f0a93384054a7c52641ecf)\r\n\r\n**[Tools - Python]**\r\n- [VRS to MP4](https:\u002F\u002Ffacebookresearch.github.io\u002Fprojectaria_tools\u002Fdocs\u002Fdata_utilities\u002Fadvanced_code_snippets\u002Fvrs_to_mp4)\r\n  - Improved timing accuracy during export\r\n  - Updated to include timestamps in the metadata\r\n\r\n**[MPS-CLI]**\r\n- API update to 
mps-cli to support Aria Studio\r\n\r\n**[Build]**\r\n- Continuous Integration (adapt to changing software\u002Fhardware environment)\r\n  - C++ - [Fix for Pangolin](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fprojectaria_tools\u002Fcommit\u002F63c42728af405e01c697b6d4fe373592f1b5928f)\r\n  - Python wheels -  [Fix cibuildwheel version](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fprojectaria_tools\u002Fcommit\u002Ff8481609a4d4622962d00149dd9104f09db5bf89)\r\n  - Python wheels - Fix GitHub runner to be Intel or Mac Silicon\t\r\n\r\n**[Visualization]**\r\n- MPS Viewer - [Enable showing RGB images upright, undistorted, or unchanged](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fprojectaria_tools\u002Fcommit\u002Fdaa89dba94df36b2c83f3117f6d12045a4d58d37)\r\n\r\n**[Projects\u002FDataset]**\r\n- ADT\r\n  - Data & Tooling Updated to v2.0\r\n    - We have removed the notion of subsequences. All subsequence are now separated into their own sequence folder \r\n    - Tooling including visualization and downloaders have been updated to new file structure, but are still backwards compatible\r\n    - We have removed access to challenge data now that the challenge has completed\r\n  - Visualization\r\n    - Python - Add option to add image rectification to the viewer\r\n    - C++ - Update Pangolin viewer to better aspect ratio\r\n- AEA\r\n  - Visualization - Python - [Ease image rotation\u002Fundistortion](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fprojectaria_tools\u002Fpull\u002F90) \r\n\r\n**[Documentation]**\r\n- Various improvements, including:\r\n  - New [Pairing Additional Glasses and Troubleshooting](https:\u002F\u002Ffacebookresearch.github.io\u002Fprojectaria_tools\u002Fdocs\u002FARK\u002Fglasses_manual\u002Fpair_glasses)\r\n  - New [Project Aria Glasses Cable Clip instructions](https:\u002F\u002Ffacebookresearch.github.io\u002Fprojectaria_tools\u002Fdocs\u002FARK\u002Fglasses_manual\u002Fcable_clip)\r\n  - Updated Aria 
Digital Twin (ADT) v2.0 to support [Aria Dataset Explorer](https:\u002F\u002Fexplorer.projectaria.com\u002F)\r\n  - New [Aria Dataset Explorer](https:\u002F\u002Ffacebookresearch.github.io\u002Fprojectaria_tools\u002Fdocs\u002Fopen_datasets\u002Fdataset_explorer)\r\n  - New [Aria Studio](https:\u002F\u002Ffacebookresearch.github.io\u002Fprojectaria_tools\u002Fdocs\u002FARK\u002Faria_studio)\r\n  - New [Time Synchronized Recordings with Multiple Aria Devices](https:\u002F\u002Ffacebookresearch.github.io\u002Fprojectaria_tools\u002Fdocs\u002FARK\u002Fsdk\u002Fticsync)\r\n  - New [Project Aria FAQ](https:\u002F\u002Ffacebookresearch.github.io\u002Fprojectaria_tools\u002Fdocs\u002Ffaq)\r\n  - Updated [Info page](https:\u002F\u002Ffacebookresearch.github.io\u002Fprojectaria_tools\u002Fdocs\u002Fintro) and [About ARK page](https:\u002F\u002Ffacebookresearch.github.io\u002Fprojectaria_tools\u002Fdocs\u002FARK\u002Fabout_ARK)","2024-06-14T16:09:38",{"id":232,"version":233,"summary_zh":78,"released_at":234},255365,"1.5.1","2024-06-04T19:06:21",{"id":236,"version":237,"summary_zh":238,"released_at":239},255366,"1.5.0","🎉 **We’re excited to announce Project Aria Tools v1.5.0.** 🎉\r\n\r\n**[Tools - Python]**\r\n- ASR code sample shows how to use [Faster Whisper](https:\u002F\u002Fgithub.com\u002FSYSTRAN\u002Ffaster-whisper) to run Whisper Speech Recognition on an Aria audio stream. The ASR outputs are time-aligned with [Aria Device Time](https:\u002F\u002Ffacebookresearch.github.io\u002Fprojectaria_tools\u002Fdocs\u002Fdata_formats\u002Faria_vrs\u002Ftimestamps_in_aria_vrs#vrs-timestamps-single-device). 
Go to the [Automated Speech Recognition Readme](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fprojectaria_tools\u002Fblob\u002Fmain\u002Ftools\u002Fsamples\u002Fpython\u002Fautomated_speech_recognition\u002FReadme.md) for how to get started.\r\n\r\n**[MPS CLI]**\r\n- Support for new MPS service, [hand tracking](https:\u002F\u002Ffacebookresearch.github.io\u002Fprojectaria_tools\u002Fdocs\u002FARK\u002Fsw_release_notes#new-hand-tracking-mps-wrist-and-palm-tracking)\r\n- Updated Eye Tracking to support [depth estimation](https:\u002F\u002Ffacebookresearch.github.io\u002Fprojectaria_tools\u002Fdocs\u002FARK\u002Fsw_release_notes#update-to-eye-gaze-outputs---depth-estimations)\r\n\r\n**[Documentation]**\r\n- Various improvements, including:\r\n- New [MPS Troubleshooting](https:\u002F\u002Ffacebookresearch.github.io\u002Fprojectaria_tools\u002Fdocs\u002FARK\u002Fmps\u002Fmps_troubleshooting) page\r\n- New [Collaborative Tools Page](https:\u002F\u002Ffacebookresearch.github.io\u002Fprojectaria_tools\u002Fdocs\u002Fcollaborative_tools), did you know you can use Aria data with Nerfstudio?\r\n- New [Hand Tracking Data Formats](https:\u002F\u002Ffacebookresearch.github.io\u002Fprojectaria_tools\u002Fdocs\u002Fdata_formats\u002Fmps\u002Fhand_tracking) \r\n- Updated [MPS Google Colab tutorials](https:\u002F\u002Ffacebookresearch.github.io\u002Fprojectaria_tools\u002Fdocs\u002Fdata_utilities\u002Fgetting_started#running--jupyter-notebooks-on-google-colab)","2024-03-21T18:02:02",{"id":241,"version":242,"summary_zh":243,"released_at":244},255367,"1.4.0","🎉  **We are excited to announce the release of our new 1.4.0 version!** 🎉 \r\n\r\n- MPS CLI ([ARK](https:\u002F\u002Ffacebookresearch.github.io\u002Fprojectaria_tools\u002Fdocs\u002FARK\u002Fabout_ARK)|Project Aria Research partners)\r\n  - New (and recommended) workflow for requesting Machine Perception services.\r\n  - MPS Multi-SLAM can only be requested by the MPS CLI. 
\r\n- MPS Multi-SLAM ([ARK](https:\u002F\u002Ffacebookresearch.github.io\u002Fprojectaria_tools\u002Fdocs\u002FARK\u002Fabout_ARK)|Project Aria Research partners)\r\n  - Compute SLAM MPS outputs in a shared coordinate frame for multiple VRS files\r\n\r\nHere is the complete changelog:\r\n\r\n**[API]**\r\n- DataProvider - [{Feature} - Make DeliverQueue start and end timestamp logic more robust](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fprojectaria_tools\u002Fcommit\u002F599409591b434416c5d532a44b3efdb75abb79a5)\r\n- Calibration - [{Feature} calibration - expose imu rectification matrix and bias vector](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fprojectaria_tools\u002Fcommit\u002Fc735661436c0c6bc0b05ccd46902f56611413dca)\r\n- Viewer MPS - [{Visualizer} Python - viewer_mps - Add mps_folder option](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fprojectaria_tools\u002Fcommit\u002F900bd35366786309734ce6e5836bfacfd306dc4f)\r\n\r\n**[MPS CLI]**\r\n- [aria_mps](https:\u002F\u002Ffacebookresearch.github.io\u002Fprojectaria_tools\u002Fdocs\u002FARK\u002Fmps\u002Frequest_mps\u002Fmps_cli) - Addition of a Python CLI for \r\n  - Project Aria Research partners to manage their Machine Perception Services requests (run and monitor upload, run and results retrieval). 
Improvements over the Desktop app include:\r\n    - Automatically runs health checks prior to upload\r\n    - Recordings will not be uploaded if they are not valid for any of the MPS services requested\r\n    - Resumable uploads\r\n    - Concurrent processing\r\n    - Automatically downloads outputs once processing is complete\r\n    - Recordings are processed once\r\n    - Uploaded data is stored for 24 hours\r\n      - Additional MPS can be requested without needing to upload again\r\n      - Data can be reprocessed without needing to upload again\r\n    - CLI can be integrated into automated workflows\r\n- Request Multi-Recording outputs\r\n  - Compute SLAM MPS outputs in a shared coordinate frame for multiple VRS files\r\n- [Projectaria_tools](https:\u002F\u002Ffacebookresearch.github.io\u002Fprojectaria_tools\u002Fdocs\u002Fdata_utilities\u002Finstallation\u002Finstallation_python) must be installed using pip to access the CLI\r\n\r\n**[Documentation]**\r\n- [aria_mps](https:\u002F\u002Ffacebookresearch.github.io\u002Fprojectaria_tools\u002Fdocs\u002FARK\u002Fmps\u002Frequest_mps\u002Fmps_cli) CLI documentation\r\n- [MPS Data Formats documentation](https:\u002F\u002Ffacebookresearch.github.io\u002Fprojectaria_tools\u002Fdocs\u002Fdata_formats\u002Fmps\u002Fmps_summary) refactored to align with MPS CLI output structure\r\n- Minor improvements to other documentation","2024-02-28T18:17:31",{"id":246,"version":247,"summary_zh":248,"released_at":249},255368,"1.3.3","🎉 **We are excited to announce the release of version 1.3.3** 🎉 \r\nThis release includes several new features and improvements, including:\r\n\r\n- Features\r\n  * An MpsDataPathsProvider & MPSDataProvider API\r\n- Dataset support\r\n  * Aria Everyday Activities (AEA) Dataset support\r\n- **NEW**  - Code Sample\r\n  - Demo to run [EfficientSAM with eye gaze prompt](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fprojectaria_tools\u002Fcommit\u002F019deb241e1a3c89c6f0f8f073397cef80d5480c)\r\n- 
CI\u002FCD\u002FBuild\r\n  - Improved our PyPI Python wheel generation workflow\r\n\r\nHere is the complete changelog:\r\n\r\n**[Visualization]**\r\n- [viewer_mps](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fprojectaria_tools\u002Fcommit\u002F856ecad9733fe64b9c8f9b239661752bfe9a2c57) updated to load multi-sequence datasets (AEA, ADT)\r\n- [viewer_projects_aea](https:\u002F\u002Ffacebookresearch.github.io\u002Fprojectaria_tools\u002Fdocs\u002Fopen_datasets\u002Faria_everyday_activities_dataset\u002Faea_visualizers#python-visualizer)\r\n\r\n**[API]**\r\n- MPS\r\n  - Add [MpsDataPathsProvider](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fprojectaria_tools\u002Fblob\u002F5bd15f6c7869ab9e020ee5f0e5ac3d247e754a4a\u002Fcore\u002Fpython\u002Ftest\u002FmpsPyBindTest.py#L148) & [MPSDataProvider](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fprojectaria_tools\u002Fblob\u002F5bd15f6c7869ab9e020ee5f0e5ac3d247e754a4a\u002Fcore\u002Fpython\u002Ftest\u002FmpsPyBindTest.py#L142) API\r\n    Allows you to easily retrieve MPS file assets and query their metadata by timestamp\r\n\r\n**[Dataset]**\r\n[AEA Aria EveryDay Activities](https:\u002F\u002Ffacebookresearch.github.io\u002Fprojectaria_tools\u002Fdocs\u002Fopen_datasets\u002Faria_everyday_activities_dataset) dataset\r\n[ADT Aria Digital Twin](https:\u002F\u002Ffacebookresearch.github.io\u002Fprojectaria_tools\u002Fdocs\u002Fopen_datasets\u002Faria_digital_twin_dataset\u002Fdata_format) dataset is updated with MPS data\r\n\r\n**[Build\u002FCI\u002FCD]**\r\n- [{Build} Improve FMT compatibility](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fprojectaria_tools\u002Fcommit\u002F205e36ee10f3f9e8350a20566434ea2968e5b7ec) ([#54](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fprojectaria_tools\u002Fpull\u002F54))\r\n- [{Build} GitHub CI - Python Wheels 
generation](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fprojectaria_tools\u002Fcommit\u002F8c924babd5060c4111178d08a3e1f263ba7917f5)\r\n- [{Build} Fix python wheel build workflow for PyPI](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fprojectaria_tools\u002Fcommit\u002F454ffc2d345782dc95971efcd3b7e13e63f60238)\r\n\r\n**[Documentation]**\r\n- [AEA documentation](https:\u002F\u002Ffacebookresearch.github.io\u002Fprojectaria_tools\u002Fdocs\u002Fopen_datasets\u002Faria_everyday_activities_dataset)\r\nMinor improvements to other documentation\r\n\r\n[Thank you to our new contributors]\r\n@selcuk-meta\r\n@eric-fb","2024-02-16T22:12:10",{"id":251,"version":252,"summary_zh":253,"released_at":254},255369,"1.3.0","🎉 **We are excited to announce the release of version 1.3.0** 🎉\r\n\r\nThis release includes several major new features and improvements, including:\r\n- New python visualization samples\r\n- A VRS_to_MP4 tool to help quickly review (RGB + Sound) for data collection\r\n- New C++ and Python tutorials on MPS point cloud colorization & how to use ADT depth map to generate point cloud\r\n…\r\n\r\nHere is the complete changelog:\r\n\r\n**[Visualization]**\r\n- [Python] New [rerun](https:\u002F\u002Ffacebookresearch.github.io\u002Fprojectaria_tools\u002Fdocs\u002Fdata_utilities\u002Fvisualization\u002Fvisualization_python) visualization samples to help debug and visualize Aria and MPS temporal data & states\r\n  - [viewer_aria_sensors](https:\u002F\u002Ffacebookresearch.github.io\u002Fprojectaria_tools\u002Fdocs\u002Fdata_utilities\u002Fvisualization\u002Fvisualization_python#run-aria-sensor-viewer)\r\n  - [viewer_mps](https:\u002F\u002Ffacebookresearch.github.io\u002Fprojectaria_tools\u002Fdocs\u002Fdata_utilities\u002Fvisualization\u002Fvisualization_python#run-mps-viewer)\r\n  - 
[viewer_projects_adt](https:\u002F\u002Ffacebookresearch.github.io\u002Fprojectaria_tools\u002Fdocs\u002Fopen_datasets\u002Faria_digital_twin_dataset\u002Fvisualizers#python-visualizer)\r\n  - [viewer_projects_ase](https:\u002F\u002Ffacebookresearch.github.io\u002Fprojectaria_tools\u002Fdocs\u002Fopen_datasets\u002Faria_synthetic_environments_dataset\u002Fase_data_tools#python-sample)\r\n\r\n**[API]**\r\n- [Python] Introduce a `projectaria_tools.mps.utils` module to help query and filter loaded MPS data\r\n```\r\nimport projectaria_tools.mps.utils\r\n\r\n# Retrieve Pose\u002FEye Gaze data by timestamp\r\nget_nearest_eye_gaze, get_nearest_pose\r\n\r\n# Reproject eyegaze vector in image\r\nget_gaze_vector_reprojection\r\n\r\n# Filter Point Cloud data\r\nfilter_points_from_confidence\r\nfilter_points_from_count\r\n```\r\n\r\n- [Python - C++] Image undistortion update `distort_by_calibration`\r\n  - API update to perform bilinear or nearest neighbor [multithread](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fprojectaria_tools\u002Fcommit\u002F01dff693b8feec3074906336b05b5de0684389c2) interpolation to better select the right interpolation for depth (bilinear) or segmentation masks (nearest) -> see (`distort_by_calibration`, `distort_depth_by_calibration` & `distort_label_by_calibration`)\r\n- [Python - C++] Calibration rotation\r\n  - Eases access to upright pinhole calibration data for RGB\u002FSLAM images [rotate_camera_calib_cw90deg](https:\u002F\u002Ffacebookresearch.github.io\u002Fprojectaria_tools\u002Fdocs\u002Fdata_utilities\u002Fadvanced_code_snippets\u002Fimage_utilities#rotated-image-clockwise-90-degrees)\r\n\r\n**[Tools]**\r\n- [Python] Tool to create an MP4 file from VRS RGB and audio data ([code](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fprojectaria_tools\u002Fcommit\u002F89478f68af7dce922427647e325023b6e6b6c0d5), 
[documentation](https:\u002F\u002Ffacebookresearch.github.io\u002Fprojectaria_tools\u002Fdocs\u002Fdata_utilities\u002Fadvanced_code_snippets\u002Fvrs_to_mp4))\r\n- [C++] [PointCloud Colorization Sample](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fprojectaria_tools\u002Ftree\u002Fmain\u002Ftools\u002Fsamples\u002Fpointcloud_colorization)\r\n- [C++] Aria Viewer - Enable a plot buffer for AriaViewer to avoid slowdowns\u002Fspeedups when audio is enabled\u002Fdisabled\r\n- [C++\u002FPython] VRS health check\r\n  Add a health check for VRS streams (audio, barometer, bluetooth, camera, gps, imu, wifi) -> reads all records from all streams and checks the health of each record in each stream\r\n- [Python] Projects\u002FAria Digital Twin - [Add notebook tutorial to create and merge point clouds from depth map data](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fprojectaria_tools\u002Fcommit\u002F305c218fd4d5223dfe23d75351172c6f31772225)\r\n\r\n\r\n**[Continuous integration - GitHub]**\r\n- Various cleanup in GitHub actions\r\n- Improved CI code coverage by adding [testing of python notebooks](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fprojectaria_tools\u002Fcommit\u002F33d45024bb7dfae3456fa0ac71895f451754b7b8)\r\n\r\n**[BugFix]**\r\n- [Core] Fix support for multiple GPS streams (coming from Aria and a cell phone)\r\n\r\n**[Known Issues]**\r\n\r\n- [Machine Perception Services (MPS) outputs have been renamed](https:\u002F\u002Ffacebookresearch.github.io\u002Fprojectaria_tools\u002Fdocs\u002FARK\u002Fsw_release_notes#122023_features), so that they more clearly communicate what is in the outputs:\r\n```\r\nSLAM\u002FTrajectory\r\n  - global_points.csv.gz -> semidense_points.csv.gz\r\nEye Gaze\r\n  - generalized_eye_gaze.csv -> general_eye_gaze.csv\r\n  - calibrated_eye_gaze.csv -> personalized_eye_gaze.csv\r\n```\r\n\r\n**[Documentation]**\r\n- Aria Digital Twin, new 
[landing](https:\u002F\u002Ffacebookresearch.github.io\u002Fprojectaria_tools\u002Fdocs\u002Fopen_datasets\u002Faria_digital_twin_dataset) and [challenge page](https:\u002F\u002Ffacebookresearch.github.io\u002Fprojectaria_tools\u002Fdocs\u002Fopen_datasets\u002Faria_digital_twin_dataset\u002Fadt_challenges)\r\n- Aria Synthetic Environments, new [landing](https:\u002F\u002Ffacebookresearch.github.io\u002Fprojectaria_tools\u002Fdocs\u002Fopen_datasets\u002Faria_synthetic_environments_dataset) and [challenge page](https:\u002F\u002Ffacebookresearch.github.io\u002Fprojectaria_tools\u002Fdocs\u002Fopen_datasets\u002Faria_synthetic_environments_dataset\u002Fase_challenges)\r\n- Data Formats - [how Project Aria uses VRS](https:\u002F\u002Ffacebookresearch.github.io\u002Fprojectaria_tools\u002Fdocs\u002Fdata_formats\u002Faria_vrs)\r\n- Data Utilities - refactored our visualization","2023-12-19T00:39:01",{"id":256,"version":257,"summary_zh":258,"released_at":259},255370,"1.2.0","**[Features]**\r\n  - **[Core - Python]**\r\n    - Sophus Python binding\r\n      - Add an [SO3, SE3 interface](https:\u002F\u002Ffacebookresearch.github.io\u002Fprojectaria_tools\u002Fdocs\u002Fdata_utilities\u002Fcore_code_snippets\u002Fcalibration#python-binding-for-sophus-library) in Python based on the Sophus library. Example code is provided in the [sophus_quickstart_tutorial notebook](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fprojectaria_tools\u002Fblob\u002Fmain\u002Fcore\u002Fexamples\u002Fsophus_quickstart_tutorial.ipynb)\r\n\r\n    - [Python type annotation](https:\u002F\u002Ffacebookresearch.github.io\u002Fprojectaria_tools\u002Fdocs\u002Fdata_utilities\u002Finstallation\u002Ftype_hinting)\r\n      - Python type hinting stubs are automatically generated as part of the PyPI package when installing projectaria_tools with `pip install`. Users can also generate them on their own using the `generate_stubs.py` script. 
\r\n\r\n    - [Google Colab runnable notebooks](https:\u002F\u002Ffacebookresearch.github.io\u002Fprojectaria_tools\u002Fdocs\u002Fdata_utilities\u002Fgetting_started#running-the-jupyter-notebooks-on-google-colab)\r\n      - Python notebooks can now be run in Google Colab -> [Dataprovider Quickstart Tutorial](https:\u002F\u002Fcolab.research.google.com\u002Fgithub\u002Ffacebookresearch\u002Fprojectaria_tools\u002Fblob\u002Fmain\u002Fcore\u002Fexamples\u002Fdataprovider_quickstart_tutorial.ipynb)  | [Machine Perception Services Tutorial](https:\u002F\u002Fcolab.research.google.com\u002Fgithub\u002Ffacebookresearch\u002Fprojectaria_tools\u002Fblob\u002Fmain\u002Fcore\u002Fexamples\u002Fmps_quickstart_tutorial.ipynb)\r\n      - No installation on local machine required to test and play with projectaria_tools\r\n \r\n  - **[Core]**\r\n    - Add `cameraId` to `ImageDataRecord`\r\n      - Allow the `ImageDataRecord` to list from which camera the data came from\r\n\r\n    - Continuous integration\r\n      - GitHub Actions runs Python Unit test\r\n\r\n    - Dependencies\r\n      - Update to use VRS v1.1.0\r\n      - Remove cereal dependency and use directly rapidjson\r\n\r\n  - **[MPS]**\r\n      - Calibrated and generalized EyeGaze\r\n        - Support of calibrated eye gaze via [in-session calibration](https:\u002F\u002Ffacebookresearch.github.io\u002Fprojectaria_tools\u002Fdocs\u002FARK\u002Fmps\u002Feye_gaze_calibration)\r\n        - Support for multiple wearers in a single Aria capture. 
The eye gaze output will contain a `session_uid` field that will help distinguish between different wearers.\r\n\r\n    - Python type format\r\n      - `print(X)` will now display object content\r\n\r\n  - **[Tools]**\r\n    - [MPS Replay Viewer {C++}](https:\u002F\u002Ffacebookresearch.github.io\u002Fprojectaria_tools\u002Fdocs\u002Fdata_utilities\u002Fvisualization_guide#mps-3d-replay-visualizer)\r\n      - Renders the static scene and dynamic elements: 2D\u002F3D observation rays + eye gaze data\r\n\r\n **[BugFix]**\r\n  - [Core]\r\n    - [{bug fix} update crop and rescale to SensorCalibration](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fprojectaria_tools\u002Fcommit\u002Ffeaa489303ddf7fee3e74ed1fdb3001df142f541)\r\n    - Update the API so that calibration data from the sensor and device access points match: `get_sensor_calibration(stream_id).camera_calibration()` and `provider.get_device_calibration().get_camera_calib(name)` now agree.\r\n\r\n**[Known Issues]**\r\n  - [Core]\r\n    - The Sophus API has been updated; if you encounter issues, please update to v1.2 of Project Aria Tools\r\n    - Here is how to update your existing code following the API change for SO3\u002FSE3:\r\n      - `.matrix()` -> `.to_matrix()`\r\n      - `.quaternion()` -> `.rotation().to_quat()[0]` or `to_quat_and_translation()[0]`\r\n\r\n**[Documentation]**\r\n  - [Core]\r\n    - [VRS to MP4](https:\u002F\u002Ffacebookresearch.github.io\u002Fprojectaria_tools\u002Fdocs\u002Fdata_utilities\u002Fadvanced_code_snippets\u002Fvrs_to_mp4) Tutorial showing how to export VRS RGB images to an MP4 video.\r\n    - Additional information added to [3D Coordinate Frame Conventions](https:\u002F\u002Ffacebookresearch.github.io\u002Fprojectaria_tools\u002Fdocs\u002Fdata_formats\u002Fcoordinate_convention\u002F3d_coordinate_frame_convention)\r\n  - [MPS]\r\n    - [Eye Gaze Data 
Formats](https:\u002F\u002Ffacebookresearch.github.io\u002Fprojectaria_tools\u002Fdocs\u002Fdata_formats\u002Fmps\u002Fmps_eye_gaze) updated to include `calibrated_eye_gaze.csv` and `summary.json`\r\n    - [Eye Gaze Calibration](https:\u002F\u002Ffacebookresearch.github.io\u002Fprojectaria_tools\u002Fdocs\u002FARK\u002Fmps\u002Feye_gaze_calibration)\r\n\r\n**[Thank you to our new contributors]**\r\n@brentyi\r\n[Seanwarren-meta](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fprojectaria_tools\u002Fcommits?author=seanwarren-meta)\r\nSelcuk Karakas\r\nPrzemyslaw Szczepanski\r\n[Guru Somasundaram](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fprojectaria_tools\u002Fcommits?author=gsomas)\r\n\r\n**Full Changelog**: https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fprojectaria_tools\u002Fcompare\u002F1.1.0...1.2.0","2023-09-28T20:26:12"]